content
stringlengths
34.5k
188k
data_id
stringlengths
13
13
Noncommutative Wilson lines in higher-spin theory and correlation functions of conserved currents for free conformal fields Roberto Bonezzi*a*, Nicolas Boulanger*a*, David De Filippi*a* Per Sundell*b* **a* -.1truecm Groupe de Mécanique et Gravitation, Unit of Theoretical and Mathematical Physics, University of Mons– UMONS, 20 place du Parc, 7000 Mons, Belgium* **b* -.1truecm Departamento de Ciencias Físicas, Universidad Andres Bello, Republica 220, Santiago de Chile* #### Abstract. We first prove that, in Vasiliev’s theory, the zero-form charges studied in [1103.2360](https://arxiv.org/abs/1103.2360) and [1208.3880](https://arxiv.org/abs/1208.3880) are twisted open Wilson lines in the noncommutative *Z* space. This is shown by mapping Vasiliev’s higher-spin model on noncommutative Yang–Mills theory. We then prove that, prior to Bose-symmetrising, the cyclically-symmetric higher-spin invariants given by the leading order of these *n*-point zero-form charges are equal to corresponding cyclically-invariant building blocks of *n*-point correlation functions of bilinear operators in free conformal field theories (CFT) in three dimensions. On the higher spin gravity side, our computation reproduces the results of [1210.7963](https://arxiv.org/abs/1210.7963) using an alternative method amenable to the computation of subleading corrections obtained by perturbation theory in normal order. On the free CFT side, our proof involves the explicit computation of the separate cyclic building blocks of the correlation functions of *n* conserved currents in arbitrary dimension *d* > 2  using polarization vectors, which is an original result. It is shown to agree, for *d* = 3 , with the results obtained in [1301.3123](https://arxiv.org/abs/1301.3123) in various dimensions and where polarization spinors were used. *b* <!-- h='&#x67;&#x6d;&#x61;&#x69;&#108;&#46;&#x63;&#x6f;&#x6d;';a='&#64;';n='&#112;&#x65;&#114;&#46;&#x61;&#110;&#100;&#x65;&#114;&#x73;&#46;&#x73;&#x75;&#110;&#100;&#x65;&#108;&#108;';e=n+a+h; document.write('<a h'+'ref'+'="ma'+'ilto'+':'+e+'">'+e+'<\/'+'a'+'>'); // --> per.anders.sundell at gmail dot com Introduction ============ The conjectured holographic duality between Higher Spin (HS) gauge theory on *A**d**S**d* + 1 background and free *C**F**T**d* was spelled out, after the pioneering works in the references, where the last work also contains a simple test of the conjecture. As stressed respectively in and, this holographic correspondence is remarkable in that it (i) relates two weakly coupled theories and (ii) does not require any supersymmetry. If one postulates the validity of the HS/CFT conjecture, the holographic reconstruction programme undertaken in, that starts from the free *O*(*N*) model on the boundary, enabled the reconstruction of all the cubic vertices in the bulk as well as the quartic vertex for the bulk scalar field. The cubic vertices obtained in this way involve finitely many derivatives at finite values of the spins. On the other hand, without postulating the holographic conjecture, the standard tests of the *H**S*4/*C**F**T*3 correspondence work well for the cubic vertices in Vasiliev’s theory that are fully fixed by the rigid, nonabelian higher-spin symmetry algebra of the vacuum *A**d**S*4 solution; for recent results, see. The computations of of 3-point functions show divergences in the case of the Vasiliev vertices that are not fully fixed by the pure higher-spin kinematics. These divergences have been confirmed since then from a different perspective in, where all the first nontrivial interactions around *A**d**S*4 have been extracted from the Vasiliev equations. To date, no regularisation scheme has been shown to tame these divergences in a completely satisfactory manner; for discussions, see for example, and for recent progress, see. Nonetheless. since the computation of utilises techniques of AdS/CFT that rely on the existence of a standard action principle on the bulk side, i.e. an action for which the kinetic terms are given by Fronsdal Lagrangians in *A**d**S*4, it is suggestive of the existence of an effective action of deformed Fronsdal type, but not necessarily of any underlying path integral measure for self-interacting Fronsdal fields. Being more circumspect, one can say that, for the Vasiliev’s equations, the issues of locality and weak-field perturbative expansion around *A**d**S*4 are subtle and require more investigations, see for some preliminary works in that direction. In fact, taking a closer look at the deformed Fronsdal action, it is more reminiscent of an effective than a classical action. The weakest assumption would be a duality between Vasiliev’s equations and the deformed Fronsdal theory, in the sense that the two theories would be equivalent only at the level of their respective on-shell actions subject to suitable boundary conditions. However, in view of the matching of 3-point couplings, a more natural outcome would be that Vasiliev’s equations contain a deformed Fronsdal branch, obtained by a fine-tuned field redefinition possibly related to a noncommutative geometric framework. As this branch would have to correspond to a unitary (free) CFT, one should thus think of the deformed Fronsdal action as an effective action, that is, there is no need to quantise it any further — whereas further 1/*N* corrections can be obtained by altering boundary conditions. Moreover, unlike in an ordinary quantum field theory, in which the effective action has a loop expansion, the trivial nature of the 1/*N* expansion of the free conformal field theory implies that the anti-holographically dual deformed Fronsdal action has a trivial loop expansion as well. Thus, rather than to quantise the deformed Fronsdal theory as a four-dimensional field theory (only to discover that all loop corrections actually cancel), it seems more reasonable to us that the actual microscopic theory turns to be a field theory based on a classical action of the covariant Hamiltonian form proposed in. In this paper, we shall examine the issue of locality of unfolded field equations from a different point of view, by studying higher spin invariants known as *zero-form charges*. It was argued in that rather than asking for locality in a gravitational gauge theory at the level of the field equations, the question is to determine in which way the degrees of freedom assemble themselves so as to exhibit some form of cluster decomposition, in a way reminiscent to glueball formation in the strong coupling regime of Quantum Chromo Dynamics (QCD). In the body of this paper, we shall present further evidence for that this heuristic comparison with QCD is not coincidental. In Vasiliev’s unfolded formulation, where the spacetime dependence of the locally-defined master fields can be encoded into a gauge function, the boundary condition dictated by the space-time physics that one wishes to address is translated into the choice of a suitable class of functions in the internal fiber space. Following this line of reasoning, the authors of proposed that locality properties manifest themselves at the level of zero-form charges, whose leading orders in the expansion in terms of curvatures of the bulk-to-boundary propagators found in hence ought to reproduce holographic correlation functions of the dual *C**F**T*3  as was later verified in for two- and three-point functions. A related check was performed in the case of black-hole-like solutions in, whose asymptotic charges are mapped to non-polynomial functions in the fiber space, that in their turn determine zero-form charges that indeed exhibit cluster-decomposition properties, whereby the zero-form charges of two well-separated one-body solutions are perturbatively additive in the leading order of the separation parameter. The relation between the leading orders of more general zero-form charges and general *n*-point functions was then established in following a slightly different approach in terms of Cayley transforms, reproducing the *C**F**T*3 results for the 3-point correlation functions of conserved currents for free bosons and free fermions obtained in. In, the *n*-point functions involving the conformal weight Δ = 2 operator were also computed and a comparison was made between the leading-order zero-form charges of and the results of, showing complete agreement. The authors of used a convenient twistor basis in order to express the operator product algebra of free bosons and fermions in various dimensions, and from this they computed all the correlators. The case of free *C**F**T*3 was fully covered, as well as free *C**F**T*4 (all Lorentz spins), upon certain truncations of the *s**p*(8) generalised spacetime coordinates. No expressions were given, however, for the *n*-point correlation functions in the free scalar *C**F**T**d* for arbitrary *d* . The latter were conjectured in. We want to stress that the approach to the computation of free CFT correlation functions initiated in from the bulk side has a priori nothing to do with the usual AdS/CFT prescription. From the standard approach in terms of Witten diagrams, it is rather surprising that the evaluation of some quantities in the bulk (here the zero-form charges), evaluated on the free theory, could produce *n*-point correlation functions on the CFT side, as no information from vertices of order *n* in the Vasiliev theory is being used. It can be somehow reconciled with holography if one uses a non-standard HS action that reproduces the Vasiliev equations upon variation and is non-standard in the sense that the Lagrangian density is integrated over a higher-dimensional noncommutative open manifold containing *A**d**S*4 as a submanifold of its boundary, and with kinetic terms that are *not* of the Fronsdal type but instead of the *B**F* type usually met in topological field theory. In this approach, the zero-form charges appear as some pieces of the on-shell action. The fact that the generating functional in the HS bulk theory reproduces, in the semi-classical limit, the free CFT generating function of correlation functions, is therefore not totally odd. In the present paper, one of the tasks we undertake is to pursue the evaluation of zero-form charges along these lines, using throughout a method that enables one to evaluate subleading corrections to the free CFT correlators due to the higher orders in the weak field expansion of Vasiliev’s zero-form master field, postponing the systematic evaluation of these subleading corrections to a future work. Although the results of already produced the *n*-point correlators for free bosons and fermions *C**F**T*3 , the fact that the noncommutative *Z* variables at the heart of Vasiliev’s formalism were discarded ab initio does not allow the consideration of subleading corrections in Vasiliev’s equations. The plan of the paper is a follows: After a review of Vasiliev’s bosonic model in Section [sec:Vasiliev], Section [sec:Obs] contains a description of observables in Vasiliev’s model and the proof (completed in Appendix [sec:thmWL]) that the zero-form charges discussed in, are nothing but twisted open Wilson lines in the noncommutative twistor *Z* space. In Section [sec:Correlators], we compute the twisted open Wilson lines on the bulk-to-boundary propagator computed in and derive the corresponding quasi-amplitudes for arbitrary number of external legs. In the next Section [sec:CFT], we compute the *n*-point correlation functions of conserved currents of the free *C**F**T* corresponding to a set of free bosons in arbitrary dimension *d* . We show that, even before Bose symmetrisation, the cyclic-invariant pre-amplitudes obtained from the open Wilson lines at leading order in weak field expansion correspond to the cyclically-invariant building blocks of the correlators in the free *U*(*N*) model, obtained from the Wick contraction of the nearest-neighbours free fields inside the correlation functions. Our notation for spinor indices together with some technical results are contained in Appendix [app:CS], while we relegated some other technical results in the Appendix [app:gauss]. Vasiliev’s bosonic model ======================== Four-dimensional bosonic Vasiliev’s higher spin theories are formulated in terms of locally defined differential forms on a base manifold X4 × Z4 , where X4 is a commutative spacetime manifold, with coordinates *x**μ*, and Z4 is a noncommutative four-manifold coordinatised by $Z^{{\underline{\alpha}}}$, with ${\underline{\alpha}}=1,..,4\,$. The fields are valued in an associative algebra generated by oscillators $Y^{{\underline{\alpha}}}$ that are coordinates of a noncommutative internal manifold Y4 . By using symbol calculus, we shall treat $Z^{{\underline{\alpha}}}$ and $Y^{{\underline{\alpha}}}$ as commuting variables, whereby the noncommutative structure is ensured by endowing the algebra of functions[1](#fn1) $\hat f(x,Z;Y)$ with a noncommutative associative product, denoted by  ⋆  , giving rise to the following oscillator algebra: $${\left[Y^{{\underline{\alpha}}},Y^{{\underline{\beta}}}\right]\_{\star}} = 2i\,C^{{\underline{\alpha}}{\underline{\beta}}} \;,\quad {\left[Z^{{\underline{\alpha}}},Z^{{\underline{\beta}}}\right]\_{\star}} = -2i\,C^{{\underline{\alpha}}{\underline{\beta}}} \;,\quad {\left[Y^{{\underline{\alpha}}},Z^{{\underline{\beta}}}\right]\_{\star}} = 0\;,$$ where $C^{{\underline{\alpha}}{\underline{\beta}}}$ is an *S**p*(4) invariant non-degenerate tensor used to raise and lower *S**p*(4) indices using the NW-SE convention $$V^{{\underline{\alpha}}}:=C^{{\underline{\alpha}}{\underline{\beta}}}\,V\_{{\underline{\beta}}} \;, \quad V\_{{\underline{\alpha}}}:=V^{{\underline{\beta}}}\,C\_{{\underline{\beta}}{\underline{\alpha}}} \;.$$ The *S**p*(4) indices are split under *s**l*(2, C) in the following way: $$Y^{{\underline{\alpha}}} = (y^\alpha,{\bar{y}}^{{\dot{\alpha}}}) \;,\quad Z^{{\underline{\alpha}}} = (z^\alpha,-{\bar{z}}^{{\dot{\alpha}}}) \;.$$ Correspondingly, the *S**p*(4) invariant tensor is chosen to be $$\label{eq:epsym} C^{{\underline{\alpha}}{\underline{\beta}}} = \begin{pmatrix} \epsilon^{\alpha\beta} & 0 \\ 0 & \epsilon^{{\dot{\alpha}}{\dot{\beta}}} \end{pmatrix} \;,\quad C\_{{\underline{\alpha}}{\underline{\beta}}} = \begin{pmatrix} \epsilon\_{\alpha\beta} & 0 \\ 0 & \epsilon\_{{\dot{\alpha}}{\dot{\beta}}} \end{pmatrix} \;,$$ where *ε*12 = *ε*12 = 1 and $\epsilon\_{\dot1\dot2}=\epsilon^{\dot1\dot2}=1$ for the *s**l*(2, C) invariant tensors. The star product algebra is extended from functions to differential forms on X4 × Z4  by defining $${\text{d}Z}^{{\underline{\alpha}}}\star{\hat{f}}:={\text{d}Z}^{{\underline{\alpha}}}\,{\hat{f}}\;,\quad {\hat{f}}\star{\text{d}Z}^{{\underline{\alpha}}}:={\hat{f}}\,{\text{d}Z}^{{\underline{\alpha}}} \;,$$ *idem* ${\rm d}x^\mu$, where *f̂* is a differential form and the wedge product is left implicit. Thus, the differential forms are horizontal on a total space $\mathcal{X}\_4\times\mathcal{Z}\_4\times {\cal Y}\_4$, sometimes referred to as the correspondence space, with fiber space ${\cal Y}\_4$, base X4 × Z4 and total horizontal differential $${\widehat{\rm{d}}}:= {\text{d}}+ {\text{q}}\;,\quad {\text{d}}:= {\text{d}}x^\mu \partial^x\_{\mu} \;,\quad {\text{q}}:= {\text{d}Z}^{{\underline{\alpha}}} \partial^Z\_{{\underline{\alpha}}} \;,$$ which obeys the graded Leibniz rule $${\widehat{\rm{d}}}\left({\hat{f}}\star{\hat{g}}\right) = \left({\widehat{\rm{d}}}{\hat{f}}\right)\star{\hat{g}}+(-)^{\operatorname{deg}{\hat{f}}\operatorname{deg}{\hat{g}}} {\hat{f}}\star\left({\widehat{\rm{d}}}{\hat{g}}\right) \;,$$ with deg denoting the total form degree. The differential graded star product algebra of forms admits a set of linear (anti-)automorphisms defined by $$\begin{aligned} \label{eq:def pi} \pi(x,y,{\bar{y}},z,-{\bar{z}}) &= (x,-y,{\bar{y}},-z,-{\bar{z}}) \;,\\ {\bar{\pi}}(x,y,{\bar{y}},z,-{\bar{z}}) &= (x,y,-{\bar{y}},z,{\bar{z}}) \;,\\ \tau(x,y,{\bar{y}},z,-{\bar{z}}) &= (x,iy,i{\bar{y}},-iz,i{\bar{z}}) \;,\end{aligned}$$ and $$\begin{aligned} & \pi({\widehat{\rm{d}}}{\hat{f}})={\widehat{\rm{d}}}\,\pi({\hat{f}}) \;,\quad \pi({\hat{f}}\star{\hat{g}})=\pi({\hat{f}})\star\pi({\hat{g}}) \;,\quad \text{idem for}\;{\bar{\pi}}\;,\\& \tau({\widehat{\rm{d}}}{\hat{f}})={\widehat{\rm{d}}}\,\tau({\hat{f}}) \;, \quad \tau({\hat{f}}\star{\hat{g}})= (-)^{\operatorname{deg}{\hat{f}}\operatorname{deg}{\hat{g}}}\tau({\hat{g}})\star\tau({\hat{f}}) \;,\end{aligned}$$ for differential forms *f̂* and *ĝ* . Let us notice that *τ*2 = *π**π̄* and demanding that *π**π̄*(*f̂*) = *f̂* amounts to removing all half-integer spin gauge fields on ${\cal X}\_4$ from the model, leaving a bosonic model whose gauge fields on ${\cal X}\_4$ have integer spin. The hermitian conjugation is the anti-linear anti-automorphism defined by (*x*, *y*, *ȳ*, *z*,  − *z̄*)† = (*x*, *ȳ*, *y*, *z̄*,  − *z*) ,  $$({\widehat{\rm{d}}}{\hat{f}})^\dagger={\widehat{\rm{d}}}{\hat{f}}^\dagger \;,\quad ({\hat{f}}\star{\hat{g}})^\dagger=(-)^{\operatorname{deg}{\hat{f}}\operatorname{deg}{\hat{g}}} \, {\hat{g}}^\dagger\star{\hat{f}}^\dagger \;.$$ In what follows, we shall use the normal ordered basis for the star product[2](#fn2). Among the various conventions existing in the literature, we choose to work with the explicit realisation: $$\big({\hat{f}}\star{\hat{g}}\big)(Z,Y):=\int\frac{d^4Ud^4V}{(2\pi)^4}\,e^{iV^{{\underline{\alpha}}}U\_{{\underline{\alpha}}}}\,{\hat{f}}(Z+U,Y+U)\,{\hat{g}}(Z-V,Y+V) \;,$$ for auxiliary variables $U^{{\underline{\alpha}}}:=(u^\alpha,\bar u^{{\dot{\alpha}}})$ and $V^{{\underline{\alpha}}}:=(v^\alpha,\bar v^{{\dot{\alpha}}})\,$. The space of bounded functions of *Y* and *Z* whose complex modulus is integrable (usually written $L^1(\mathcal{Y}\_4\times {\cal Z}\_4)$) forms a star product algebra that admits a trace operation, given by $$\begin{aligned} {\rm Tr}\,\hat f(Z,Y):=\int{\text{d}^{4}Z\ }{\text{d}^{4}Y\ } \hat f(Z,Y) \;; \label{Trace}\end{aligned}$$ indeed, it has the desired cyclicity property $$\begin{aligned} {\rm Tr}\, {\hat{f}}\star {\hat{g}}= {\rm Tr}\,{\hat{g}}\star {\hat{f}}\;.\end{aligned}$$ It has the remarkable property: $$\label{eq:fact trace} {\rm Tr}\,({\hat{f}}(Y)\star{\hat{g}}(Z)) = {\rm Tr}\,({\hat{f}}(Y){\hat{g}}(Z))\;.$$ In normal order, the inner Klein operators ${\hat\kappa}$ and ${\hat{\bar\kappa}}$, defined by $$\label{eq:pisfromks} {\hat\kappa}\star {\hat\kappa}=1={\hat{\bar\kappa}}\star {\hat{\bar\kappa}} \;,\quad \pi({\hat{f}}) = {\hat\kappa}\star {\hat{f}}\star {\hat\kappa} \;,\quad {\bar{\pi}}({\hat{f}}) = {\hat{\bar\kappa}}\star {\hat{f}}\star {\hat{\bar\kappa}} \;,$$ for all zero-forms *f̂*, become real-analytic functions on ${\cal Y}\_4\times {\cal Z}\_4$, *viz.* $${\hat\kappa} := e^{iy^\alpha z\_\alpha} \;,\quad {\hat{\bar\kappa}} := e^{-i{\bar{y}}^{{\dot{\alpha}}} {\bar{z}}\_{{\dot{\alpha}}}} \;.$$ As shown in, the inner kleinians factorise as $$\begin{split} \label{eq:k=ky\*kz} &{\hat\kappa} = \kappa\_y(y)\star\kappa\_z(z) \;,\quad {\hat{\bar\kappa}}={\bar\kappa}\_y({\bar{y}})\star{\bar\kappa}\_z({\bar{z}}) \;,\quad {\rm with} \\& \kappa\_y(y):=2\pi\,\delta^2(y) \;,\quad \kappa\_z(z):=2\pi\,\delta^2(z) \;,\quad {\bar\kappa}\_y({\bar{y}}):=2\pi\,\delta^2({\bar{y}}) \;,\quad {\bar\kappa}\_z({\bar{z}}):=2\pi\,\delta^2({\bar{z}}) \;, \end{split}$$ which implies that their symbols are non-real analytic in Weyl order. As we shall see, the inner kleinians define closed and central elements appearing in the equations of motion, which implies that the fields will be real-analytic on-shell in normal order but not in Weyl order. Thus, the normal order is suitable for a standard higher spin gravity interpretation of the model, which requires a manifestly Lorentz covariant symbol calculus with symbols that are real analytic at the origin of $\mathcal{Z}\_4\times {\cal Y}\_4\,$. The master fields describing the bosonic model consist of a connection one-form $\widehat A$ and a twisted-adjoint zero-form $\widehat\Phi\,$, subject to the reality conditions and bosonic projection $$\begin{aligned} \widehat{A}^\dagger = -\widehat{A} \;,\quad \pi(\widehat{A}) &= {\bar{\pi}}(\widehat{A}) \;,\\ \label{eq:RC tafV} \widehat{\Phi}^\dagger = \pi(\widehat{\Phi}) &= {\bar{\pi}}({\widehat{\Phi}}) \;.\end{aligned}$$ At the linearised level the physical spectrum consists of an infinite tower of massless particles of every integer spin, each occurring once. The bosonic model can be further projected to its minimal version, containing only even spin particles, by imposing $$\label{eq:MBP tafV} \tau(\widehat{A}) = -\widehat{A}\;,\quad\tau(\widehat{\Phi}) = \pi({\widehat{\Phi}}) \;.$$ The equations of motion are given by the twisted-adjoint covariant constancy condition $${\widehat{\rm{d}}}\widehat{\Phi} + \widehat{A}\star\widehat{\Phi} -\widehat{\Phi}\star\pi(\widehat{A}) = 0\label{eq:Vas2} \;,$$ and the curvature constraint $${\widehat{\rm{d}}}\widehat{A} + \widehat{A}\star\widehat{A} +\widehat{\Phi}\star\widehat{J} = 0\label{eq:Vas1} \;,$$ where the two-form *Ĵ* is given by $$\label{eq:defJ} \widehat{J} := -\frac{i}{4}\big(e^{i\theta}\,{\hat\kappa}\,{\text{d}z}^{\alpha}{\text{d}z}\_{\alpha} + e^{-i\theta}\,{\hat{\bar\kappa}}\,{\text{d}{\bar{z}}}^{{\dot{\alpha}}}{\text{d}{\bar{z}}}\_{{\dot{\alpha}}}\big) \;,$$ with *θ* a real constant caracterising the model. This element obeys *Ĵ*† = *τ*(*Ĵ*) =  − *Ĵ* ,  and $$\begin{split} & {\widehat{\rm{d}}}\widehat{J}\equiv0\;,\quad \widehat{A}\star\widehat{J}-\widehat{J}\star\pi(\widehat{A})\equiv0\equiv \widehat{\Phi}\star\widehat{J}-\widehat{J}\star\pi(\widehat{\Phi}) \;. \end{split}$$ It follows that the field equations are compatible with the reality conditions on the master fields and the integer-spin projections, and that they are universally Cartan integrable. The latter implies invariance of ([eq:Vas1], [eq:Vas2]) under the finite gauge transformation $$\begin{aligned} \label{eq:GIf cV} \widehat{A} &\longrightarrow \widehat{g}\star{\widehat{\rm{d}}}\widehat{g}^{-1} + \widehat{g}\star\widehat{A}\star\widehat{g}^{-1} \;,\\ \label{eq:GIf tafV} \widehat{\Phi} &\longrightarrow \widehat{g}\star\widehat{\Phi}\star\pi(\widehat{g}^{-1}) \;,\quad \widehat{g} = \widehat{g}{\left(x,Z; Y\right)} \;, \end{aligned}$$ which preserve the reality conditions on the master fields and the integer-spin projection of the model provided that *ĝ*† = *ĝ*− 1 ,  *π*(*ĝ*) = *π̄*(*ĝ*) ;  the minimal projection in addition requires that *τ*(*ĝ*) = *ĝ*− 1 . The parity invariant models are obtained by taking *θ* = 0 for the Type A model and $\theta=\tfrac{\pi}{2}$ for the Type B model, in which the physical scalar field is parity even and odd, respectively. Upon splitting the connection one-form into d*x**μ* and ${\text{d}Z}^{{\underline{\alpha}}}$ directions: $$\widehat{A} = \widehat{U} + \widehat{V} \;,\quad \widehat{U} = {\text{d}}x^\mu \widehat{A}\_\mu \;,\quad \widehat{V} = {\text{d}z}^\alpha \widehat{A}\_\alpha + {\text{d}{\bar{z}}}^{{\dot{\alpha}}}\widehat{A}\_{{\dot{\alpha}}} \;,$$ Vasiliev’s equations read $$\begin{aligned} \label{eq:Vas1x} {\text{d}}\widehat{U} + \widehat{U}\star\widehat{U} &= 0 \;,\\ \label{eq:Vas1xz} {\text{d}}\widehat{V} + {\text{q}}\widehat{U} + \widehat{U}\star\widehat{V} + \widehat{V}\star\widehat{U} &= 0 \;,\\ \label{eq:Vas1z} {\text{q}}\widehat{V} +\widehat{V}\star\widehat{V} + \widehat{\Phi}\star\widehat{J} &= 0 \;,\\ \label{eq:Vas2x} {\text{d}}\widehat{\Phi} + \widehat{U}\star\widehat{\Phi} - \widehat{\Phi}\star\pi(\widehat{U}) &= 0 \;,\\ \label{eq:Vas2z} {\text{q}}\widehat{\Phi} + \widehat{V}\star\widehat{\Phi} - \widehat{\Phi}\star\pi(\widehat{V}) &= 0 \;.\end{aligned}$$ Remarkably, unlike the case of a connection on a commutative manifold, the connection on a noncommutative (symplectic) manifold can be mapped in a one-to-one fashion to an adjoint quantity, given in Vasiliev’s theory by[3](#fn3) $$\begin{aligned} \label{eq:def S,Psi} \widehat{S}\_{{\underline{\alpha}}} := Z\_{{\underline{\alpha}}} - 2i\,\widehat{A}\_{{\underline{\alpha}}} \;,\quad \widehat\Psi := \widehat{\Phi}\star \hat\kappa \;,\quad \widehat{\bar\Psi} := \widehat{\Phi}\star \hat{\bar\kappa} \;.\end{aligned}$$ In terms of these variables, Vasiliev’s equations read as follows: $$\begin{aligned} \label{EVS1} {\text{d}}\widehat{U} + \widehat{U}\star\widehat{U} &=0 \;,& {\left[\widehat{S}\_{\alpha},\widehat{S}\_{{\dot{\alpha}}}\right]\_{\star}} &=0 \;,\\ {\text{d}}\widehat{S}\_{\alpha}+{\left[\widehat{U},\widehat{S}\_{\alpha}\right]\_{\star}} &=0 \;,& {\text{d}}\widehat{S}\_{{\dot{\alpha}}}+{\left[\widehat{U},\widehat{S}\_{{\dot{\alpha}}}\right]\_{\star}} &=0 \;,\\ {\text{d}}\widehat\Psi+{\left[\widehat{U},\widehat\Psi\right]\_{\star}} &=0 \;,& {\text{d}}\widehat{\bar\Psi}+{\left[\widehat{U},\widehat{\bar\Psi}\right]\_{\star}} &=0 \;,\\ \label{EVS4} {\left\{\widehat{S}\_{\alpha},\widehat\Psi\right\}\_{\star}} = {\left[\widehat{S}\_{\alpha},\widehat{\bar\Psi}\right]\_{\star}} &=0 \;,& {\left\{\widehat{S}\_{\alpha},\widehat\Psi\right\}\_{\star}} = {\left[\widehat{S}\_{\alpha},\widehat{\bar\Psi}\right]\_{\star}} &=0 \;,\\ \label{EVS5} {\left[\widehat{S}\_{\alpha},\widehat{S}\_{\beta}\right]\_{\star}} +2i\epsilon\_{\alpha\beta} (1-e^{i\theta}\widehat\Psi) &=0 \;,& {\left[\widehat{S}\_{{\dot{\alpha}}},\widehat{S}\_{{\dot{\beta}}}\right]\_{\star}} +2i\epsilon\_{{\dot{\alpha}}{\dot{\beta}}} (1-e^{-i\theta}\widehat{\bar\Psi}) &=0 \;.\end{aligned}$$ Thus, the adjoint variables $(\widehat{S}\_{\alpha},\widehat{S}\_{{\dot{\alpha}}})$ obey a generalized version of Wigner’s deformation of the Heisenberg algebra, as in, and are hence referred to as the deformed oscillators. The vacuum solution describing the *A**d**S*4 background is obtained by setting $\widehat{\Phi}=0=\widehat{V}$ and taking *Û* = Ω, the Cartan connection of *A**d**S*4 , given by $$\Omega(Y\vert x) =\frac{1}{4i}\, \big(y^\alpha y^\beta\,\omega\_{\alpha\beta} +{\bar{y}}^{{\dot{\alpha}}}{\bar{y}}^{{\dot{\beta}}}\,\bar{\omega}\_{{\dot{\alpha}}{\dot{\beta}}} +2\,y^\alpha{\bar{y}}^{{\dot{\alpha}}}\,h\_{\alpha{\dot{\alpha}}}\big) \;$$ obeying the zero-curvature condition dΩ + Ω ⋆ Ω = 0 . One may then perform a perturbative expansion around this background and find, at the linearised level, the Central On Mass-Shell Theorem that describes, in a suitable gauge, the free propagation of an infinite tower of Fronsdal fields around *A**d**S*4 . For our purpose it is important to recall that the twisted adjoint zero-form is *Z*-independent at the linearised level, and will be denoted by Φ(*x*; *Y*). Observables in Vasiliev’s theory ================================ As the gauge transformations of Vasiliev’s theory resemble those of noncommutative Yang-Mills theory (see ), some results of the latter theory may be applied to Vasiliev’s theory. In particular, one can construct gauge invariant observables from holonomies formed from curves that are closed in ${\cal X}\_4$ and open in ${\cal Z}\_4$. To this end, we consider a curve $$\label{eq:defCurve} \mathcal{C} : [0,1] \rightarrow \mathcal{X}\_4\times \mathcal{Z}\_4 : \sigma \rightarrow (\xi^\mu(\sigma),\xi^{{\underline{\alpha}}}(\sigma))$$ that is based at the origin, closed in the commutative directions and open along the noncommutative space, *i.e.* $$\xi^\mu(0) = \xi^\mu(1) = 0 \;,\quad \xi^{{\underline{\alpha}}}(0)=0 \;,\quad \xi^{{\underline{\alpha}}}(1)=2{M}^{{\underline{\alpha}}} =2\,C^{{\underline{\alpha}}{\underline{\beta}}}{M}\_{{\underline{\beta}}} \;.$$ Here ${M}\_{{\underline{\alpha}}}=(\mu\_{\alpha},{\bar\mu}\_{{\dot{\alpha}}})$ is seen as a momentum conjugated to $Z^{{\underline{\alpha}}}$. We can associate the following Wilson line to C : $$\begin{aligned} {W\_{\mathcal{C}}(x,Z;Y)} :&= {P\exp\_{\star}\left(\int\_0^1 {\text{d}\sigma\ } \left( \dot\xi^\mu(\sigma) \widehat{A}\_\mu(\sigma) +\dot\xi^{{\underline{\alpha}}}(\sigma) \widehat{A}\_{{\underline{\alpha}}}(\sigma) \right)\right)} \nonumber\\&= \sum\_{n=0}^{\infty} \int\_{0}^1 {\text{d}\sigma\_n\ }... \int\_{0}^{\sigma\_{2}}{\text{d}\sigma\_1\ } \operatorname\*{\bigstar}\_{i=1}^{n}\left( \dot\xi^\mu(\sigma\_i) \widehat{A}\_\mu(\sigma\_i) +\dot\xi^{{\underline{\alpha}}}(\sigma\_i) \widehat{A}\_{{\underline{\alpha}}}(\sigma\_i) \right) \;.\end{aligned}$$ where $\widehat{A}(\sigma):=\widehat{A}{\left(x^{\mu}+\xi^{\mu}(\sigma),Z^{{\underline{\alpha}}}+\xi^{{\underline{\alpha}}}(\sigma); Y^{{\underline{\alpha}}}\right)}\,$ and for any set of *n* functions {*f̂*1, ..., *f̂**n*}, the symbol $\operatorname\*{\bigstar}\_{i=1}^{n}{\hat{f}}\_i$ is defined as *f̂*1 ⋆ ... ⋆ *f̂**n* in that order. Under the gauge transformation ([eq:GIf cV], [eq:GIf tafV]) it transforms as : *W*C(*x*, *Z*; *Y*) → *ĝ*(*x*, *Z*; *Y*) ⋆ *W*C(*x*, *Z*; *Y*) ⋆ *ĝ*− 1(*x*, *Z* + 2*M*; *Y*) . Unlike open Wilson lines in commutative spaces, which cannot be made gauge invariant, their counterparts in noncommutative spaces can be made gauge invariant by star multiplying them with the generator *e**i**M**Z* of finite translations in ${\cal Z}\_4\,$, and tracing over both the fiber space and base manifold [4](#fn4). Thus, for any adjoint zero-form *Ô*(*x*, *Z*; *Y*), *i.e.* any operator transforming as *Ô* → *ĝ* ⋆ *Ô* ⋆ *ĝ*− 1, the quantity $$\label{eq:Observable} {\widetilde{O}}\_{\mathcal{C}}\left({M}\vert x\right) := {\rm Tr}\,\left[ \hat{O}{\left(x,Z; Y\right)} \star {W\_{\mathcal{C}}(x,Z;Y)} \star e^{i{M}Z}\right] \;,$$ which one may formally think of as the Fourier transform of the impurity, is gauge invariant, as shown in. Indeed, this follows from the cyclicity of the trace, the gauge transformation law and the following expression for the star commutator of a function *f̂*(*x*, *Z*; *Y*) with the exponential *e**i**M**Z*  (see ([eq:f\*e(iMZ)], [eq:e(iMZ)\*f]) for details): *f̂*(*x*, *Z* + 2*M*; *Y*) ⋆ *e**i**M**Z* = *e**i**M**Z* ⋆ *f̂*(*x*, *Z*; *Y*) . The above construction is formal, and hence the well-definiteness of the observables depends on the properties of the considered curve and solution to Vasiliev equations. The case of the leading order pre-amplitudes will be discussed at the end of this section. Previously, decorated Wilson loops in the *commutative* X4 space have been considered within the context of Vasiliev’s theory; in particular, for trivial loops, these reduce to invariants of the form $${\mathcal{I}}\_{n\_0,t}\left({M}\right) := {\rm Tr}\,\left[ {\widehat{\Psi}}{}^{\star n\_0} \star{\left({\hat\kappa}{\hat{\bar\kappa}}\right)^{\star t}} \star {\exp\_{\star}\left(i{M}\widehat{S}\right)} \right] \;,$$ for *t* being zero or one, where we note the insertion of the adjoint operator *e*⋆*i**M**Ŝ*. As these invariants are independent of the choice of base point in ${\cal X}\_4\,$, they have been referred to as zero-form charges. Let us show the equivalence between these invariants and twisted straight open Wilson lines in Z4 , thereby providing a geometrical underpinning for the insertion of the adjoint impurity *e*⋆*i**M**Ŝ* formed out of the deformed oscillator *Ŝ* . To this end, we consider the straight line $$L\_{2M} : [0,1] \rightarrow \mathcal{X}\_4\times \mathcal{Z}\_4 : \sigma \rightarrow (0,2\sigma{M}^{{\underline{\alpha}}}) \;.$$ The open Wilson lines in noncommutative space form an over-complete set of observables for noncommutative Yang-Mills theory, see and references therein. As stressed in the same reference, the *straight* open Wilson lines with a complete set of adjoint impurities inserted at one end of the Wilson line also provide such a set of observables [5](#fn5). In the case of Vasiliev’s theory, these observables can be written as follows: $$\begin{aligned} \label{eq:def WLmxO} {\widetilde{O}}\_{L\_{2M}}\left({M}\vert x\right) &= \int {\text{d}^{4}Z\ }{\text{d}^{4}Y\ } \hat{O}{\left(x,Z; Y\right)} \star {W\_{L\_{2M}}(x,Z;Y)}\star\exp(i{M}Z) \\&= \int {\text{d}^{4}Z\ }{\text{d}^{4}Y\ } \hat{O}{\left(x,Z; Y\right)} \star {\exp\_{\star}\left(i{M}\widehat{S}\right)} \;.\end{aligned}$$ The last equality comes form the following result: exp⋆(*i**M**Ŝ*) = *W**L*2*M*(*x*, *Z*; *Y*) ⋆ exp(*i**M**Z*) ,  that is proved order by order in powers of the higher spin connection *Â* in Appendix [sec:thmWL]. From the form of the equations — involving *Ŝ* and $\widehat{\Psi}\,$, one can see that the most general adjoint operator one can build out of the master fields is equivalent on shell to: $$\label{eq:gen adj op} {\hat{O}}\_{n\_0,t;\,{\underline{\alpha}}(K)} := \widehat{\Psi}^{\star n\_0} \star{\left({\hat\kappa}{\hat{\bar\kappa}}\right)^{\star t}} \star{\left(\widehat{S}\_{{\underline{\alpha}}}\right)^{\star K}} \;,$$ where, as suggested by the indices, the deformed oscillators $\widehat{S}\_{{\underline{\alpha}}}$ are symmetrized. Through, this yields the following form for the most general observable of Vasiliev’s theory: $$\label{eq:def ZFC(S)} {\mathcal{I}}\_{n\_0,t;{\underline{\alpha}}(K)}\left({M}\right) = \int {\text{d}^{4}Z\ }{\text{d}^{4}Y\ } \widehat{\Psi}^{\star n\_0} \star{\left({\hat\kappa}{\hat{\bar\kappa}}\right)^{\star t}} \star{\left(\widehat{S}\_{{\underline{\alpha}}}\right)^{\star K}} \star {\exp\_{\star}\left(i{M}\widehat{S}\right)} \;.$$ However, it turns out that one can obtain all of these observables from the evaluation of the following ones, viewed as functions of ${M}\_{\underline{\alpha}}\,$: $$\label{eq:def ZFC} {\mathcal{I}}\_{n\_0,t}\left({M}\right) = \int {\text{d}^{4}Z\ }{\text{d}^{4}Y\ } \widehat{\Psi}^{\star n\_0} \star{\left({\hat\kappa}{\hat{\bar\kappa}}\right)^{\star t}} \star {\exp\_{\star}\left(i{M}\widehat{S}\right)} \;.$$ Indeed, one has $$\label{eq:gen ZFC(S)} \left. (\partial^M\_{{\underline{\alpha}}})^K {\mathcal{I}}\_{n\_0,t}\left({M}\right) \right\vert\_{M=0} = \int {\text{d}^{4}Z\ }{\text{d}^{4}Y\ } \widehat{\Psi}^{\star n\_0} \star{\left({\hat\kappa}{\hat{\bar\kappa}}\right)^{\star t}} \star (i\widehat{S}\_{{\underline{\alpha}}})^{\star K}$$ and the observable can be written as an infinite sum of term of this form, upon applying ([EVS4],[EVS5]) repeatedly in order to symmetrise all the deformed oscillators. We will refer to the observables as *zero-form charges* since they are nothing but the observables considered and evaluated in some special cases in. In the weak field expansion scheme, we can write the leading order contribution to the zero-form charges as $$\begin{aligned} \label{eq:ZFCltem} {\mathcal{I}}^{(n\_0)}\_{n\_0,t}\left({M}\right) &= \int {\text{d}^{4}Z\ }{\text{d}^{4}Y\ } {\left(\Phi\star {\hat\kappa}\right)^{\star n\_0}} \star{\left({\hat\kappa}{\hat{\bar\kappa}}\right)^{\star t}} \star e^{i\mu z}\star e^{-i{\bar\mu}{\bar{z}}} \\&= \int {\text{d}^{4}Z\ }{\text{d}^{4}Y\ } \left(\operatorname\*{\bigstar}\_{i=1}^{n\_0} \Phi{\left(x;(-)^{i+1}y,{\bar{y}}\right)} \right) \star e^{ieyz}\star e^{-it{\bar{y}}{\bar{z}}} \star e^{i\mu z}\star e^{-i{\bar\mu}{\bar{z}}} \;.\end{aligned}$$ The second line was obtained using and defining: $$e:=n\_0+t\mod2 \;,\quad t\in\{0,1\} \;,$$ Following, we use the zero-form charges as building blocks for *quasi-amplitudes* of various orders. To define the quasi-amplitudes of order *n*, we expand a given zero-form charge to *n*th order in the twisted-adjoint weak field Φ  and replace, in a way that we specify below, the *n* fields Φ  with *n* distinct external twisted-adjoint quantities Φ*i* , *i* = 1, …, *n*, each of which transforming as under a diagonal higher spin group acting on all Φ*i*’s with the same parameter. The quasi-amplitude Q*n*0, *t*(*n*)(Φ*i*|*M*) is now defined unambiguously as the functional of Φ1, ..., Φ*n* that is totally symmetric in its *n* arguments and obeys: Q*n*0, *t*(*n*)(Φ*i*|*M*)|Φ1 = ... = Φ*n* = Φ = I*n*0, *t*(*n*)(*M*). As shown in, at the leading order, *i.e.* *n* = *n*0, these reproduce the correlation functions of bilinear operators in the free conformal field theory in three dimensions for *n* = 2, 3 and 4 . In this paper, we are interested in more fundamental building blocks from the bulk point of view, which are referred to as *pre-amplitudes* and are defined as follows: $$\label{eq:PAltemf} {\mathcal{A}}\_{n\_0,t}\left(\Phi\_{i}\vert{M}\right) := \int {\text{d}^{4}Z\ }{\text{d}^{4}Y\ } \left(\operatorname\*{\bigstar}\_{i=1}^{n\_0} \Phi\_i{\left(x;(-)^{i+1}y,{\bar{y}}\right)} \right) \star e^{ieyz}\star e^{-it{\bar{y}}{\bar{z}}} \star e^{i\mu z}\star e^{-i{\bar\mu}{\bar{z}}} \;.$$ The prescription to obtain those objects is to replace Φ in the expression of the leading order zero-form charges with the different fields Φ*i* in a given order, conventionally with the label growing from left to right. We will not discuss their generalization to higher orders in perturbation theory. The aim of this paper is to strengthen the previous results on quasi-amplitudes by extending it to *n*0-point function and showing that the correspondence holds already at the level of basic cyclic structures. The relevant cyclic blocks on the CFT side are Wick contractions (see Section [sec:CFT]), while on the bulk side they are precisely the pre-amplitudes. The first step will be to show the invariance of the pre-amplitudes under cyclic permutations of the external legs. To do so, we use the decompositions to split the integrand into a *Z*-dependent part *G**t*(*Z*|*M*) and a *Y*-dependent part to be specified below. From the cyclicity of the trace and the mutual star-commutativity of functions of *Y* and functions of *Z* , we compute: $$\begin{aligned} {\mathcal{A}}\_{n\_0,t}\left(\Phi\_{i}\vert{M}\right) &= \int {\text{d}^{4}Z\ }{\text{d}^{4}Y\ } \left(\operatorname\*{\bigstar}\_{i=1}^{n\_0} \pi^{i+1}\Phi\_{i} \right) \star{\left(\kappa\_y\right)^{\star n\_0}} \star{\left(\kappa\_y{\bar\kappa}\_y\right)^{\star t}} \star G^t(Z\vert M) \nonumber\\&= \int {\text{d}^{4}Z\ }{\text{d}^{4}Y\ } \left(\operatorname\*{\bigstar}\_{i=2}^{n\_0} \pi^{i+1}\Phi\_{i} \right) \star{\left(\kappa\_y\right)^{\star n\_0}} \star{\left(\kappa\_y{\bar\kappa}\_y\right)^{\star t}} \star G^t(Z\vert M) \star \Phi\_{1} \nonumber\\&= \int {\text{d}^{4}Z\ }{\text{d}^{4}Y\ } \left(\operatorname\*{\bigstar}\_{i=2}^{n\_0} \pi^{i+1}\Phi\_{i} \right) \star{\left(\kappa\_y\right)^{\star n\_0}} \star{\left(\kappa\_y{\bar\kappa}\_y\right)^{\star t}} \star \Phi\_{1} \star G^t(Z\vert M) \nonumber\\&= \int {\text{d}^{4}Z\ }{\text{d}^{4}Y\ } \left(\operatorname\*{\bigstar}\_{i=1}^{n\_0-1} \pi^{i}\Phi\_{i+1} \right) \star \left(\pi^{n\_0}(\pi{\bar{\pi}})^{t}\Phi\_{1}\right) \star{\left(\kappa\_y\right)^{\star n\_0}} \star{\left(\kappa\_y{\bar\kappa}\_y\right)^{\star t}} \star G^t(Z\vert M) \nonumber\\&= {{\mathcal{A}}\_{n\_0,t}\left(\Phi\_2,...,\Phi\_{n\_0},(\pi{\bar{\pi}})^t\Phi\_1\vert{M}\right)} \;,\end{aligned}$$ where the concluding line is obtained by a change of integration variables given by *y* →  − *y*. Thus, the cyclic invariance follow from the integer-spin projection in. We remark that as Weyl- and normal-ordered symbols of operators depending either only on *Y* or only on *Z* are the same, the integrations over *Y* and *Z* can be factorized. This simplifying scheme was used in in deriving the leading order quasi-amplitudes for all *n*, whereas the earlier results for *n* = 2, 3 in were based on a different scheme adapted to the weak-field expansion of Vasiliev’s equations to higher order, for which there is no obvious factorization of the integrand. In what follows, we shall follow the latter approach, and perform an alternative derivation of the results of, which thus can be generalized more straightforwardly to computing subleading corrections to open Wilson lines in ${\cal Z}\_4\,$. To the latter end, we define *Y*-space momenta $\Lambda\_{{\underline{\alpha}}}=(\lambda\_\alpha,{\bar\lambda}\_{{\dot{\alpha}}})$, that are complex conjugates of each other, *i.e.* $(\lambda,{\bar\lambda})^\dagger = ({\bar\lambda},\lambda)$, and which are not affected by  ⋆ -products. Assuming that the twisted-adjoint zero-form $\Phi\in {\cal S}(\mathcal{Y}\_4)\,$, i.e. that it is a rapidly decreasing function, we have the following Fourier transformation relations: $$\begin{aligned} \label{eq:def fttafZ} \tilde\Phi(\Lambda) :&= \int\frac{{\text{d}^{4}Y\ }}{(2\pi)^2}\Phi(Y)\exp{(-i\Lambda Y)} \;,\\ \Phi(Y) &= \int\frac{{\text{d}^{4}\Lambda\ }}{(2\pi)^2}\tilde\Phi(\Lambda)\exp{(i\Lambda Y)} \;.\end{aligned}$$ Upon defining the following mappings: $$\pi\_{\Lambda}(\lambda,{\bar\lambda}) = (-\lambda,{\bar\lambda}) \;,\quad {\bar{\pi}}\_{\Lambda}(\lambda,{\bar\lambda}) = (\lambda,-{\bar\lambda}) \;,\quad \tau\_{\Lambda}(\lambda,{\bar\lambda}) = (i\lambda,i{\bar\lambda}) \;,$$ the integer-spin projection and reality condition in and the minimal bosonic projection, respectively, translate into $$\begin{aligned} \label{eq:RC fttaf} \tilde\Phi^\dagger =\pi\_\Lambda\tilde\Phi = {\bar{\pi}}\_\Lambda\tilde\Phi \;,\\ \label{eq:MBP fttaf} \pi\_\Lambda\tilde\Phi = \tau\_\Lambda\tilde\Phi \;.\end{aligned}$$ Writing in terms of plane waves gives $$\label{eq:PAfftem} {\mathcal{A}}\_{n\_0,t}\left(\Phi\_{i}\vert{M}\right) = \int\left(\prod\_{j=1}^{{n\_0}}\frac{{\text{d}^{4}\Lambda\_j\ }}{(2\pi)^2}\right) \left(\prod\_{j=1}^{{n\_0}}\tilde\Phi(\Lambda\_j)\right) F\_{n\_0,t}\left(\Lambda\_i\vert M\right) \;,$$ where the quantity *F**n*0, *t*(Λ*i*|*M*), which one may think of as a higher spin form factor, is given by $$F\_{n\_0,t}\left(\Lambda\_i\vert M\right) = \int {\text{d}^{4}Z\ }{\text{d}^{4}Y\ } \left(\operatorname\*{\bigstar}\_{i=1}^{n\_0} e^{i(-1)^{i+1}\lambda\_{i} y+i{\bar\lambda}\_{i}{\bar{y}}} \right) \star e^{ieyz}\star e^{-it{\bar{y}}{\bar{z}}} \star e^{i\mu z}\star e^{-i{\bar\mu}{\bar{z}}} \;.$$ In order to perform the star-products appearing above, we need some lemmas. First of all, star multiplying exponentials of linear expressions in *Y* and *Z* one has $$\begin{aligned} \label{eq:f\*e(iMZ)} {\hat{f}}{\left(x,Z; Y\right)}\star e^{i{M}Z} &= e^{i{M}Z}{\hat{f}}{\left(x,Z-{M}; Y-{M}\right)} \;,\\ \label{eq:e(iMZ)\*f} e^{i{M}Z}\star {\hat{f}}{\left(x,Z; Y\right)} &= e^{i{M}Z}{\hat{f}}{\left(x,Z+{M}; Y-{M}\right)} \;, \\ {\hat{f}}{\left(x,Z; Y\right)}\star e^{i\Lambda Y} &= e^{i\Lambda Y}{\hat{f}}{\left(x,Z+\Lambda; Y+\Lambda\right)} \;,\\ e^{i\Lambda Y}\star {\hat{f}}{\left(x,Z; Y\right)} &= e^{i\Lambda Y}{\hat{f}}{\left(x,Z+\Lambda; Y-\Lambda\right)} \;.\end{aligned}$$ Then one can show recursively that the following equality holds: $$\operatorname\*{\bigstar}\_{j=1}^n \exp(i\Lambda\_j Y) = \exp\left(i\sum\_{j=1}^{n}\sum\_{k=1}^{j-1}\Lambda\_k \Lambda\_j\right) \exp\left(i\sum\_{j=1}^{n}\Lambda\_jY\right) \;.$$ The following relations, valid for any *t* and *e* , will be useful as well: $$\begin{aligned} \label{eq:f\*k} {\hat{f}}{\left(x,z,-{\bar{z}};y,{\bar{y}}\right)}\star e^{ieyz} &= e^{ieyz}{\hat{f}}{\left(x,(1-e)z-ey,-{\bar{z}};(1-e)y-ez,{\bar{y}}\right)} \;,\\ \label{eq:f\*kb} {\hat{f}}{\left(x,z,-{\bar{z}};y,{\bar{y}}\right)}\star e^{-it{\bar{y}}{\bar{z}}} &= e^{-it{\bar{y}}{\bar{z}}}{\hat{f}}{\left(x,z,-(1-t){\bar{z}}-t{\bar{y}};y,(1-t){\bar{y}}+t{\bar{z}}\right)} \;.\end{aligned}$$ The above relations allow the factorization of *F**n*0, *t*(Λ*i*|*M*) as follows: $$F\_{n\_0,t}\left(\Lambda\_i\vert M\right) = {g\_{n\_0}\left(\Lambda\_i\right)}{f\_{n\_0,e}\left(\lambda\_i\vert\mu\right)}{\bar{f}\_{n\_0,t}\left({\bar\lambda}\_i\vert {\bar\mu}\right)}\;,$$ where $$\begin{aligned} \label{eq:PAffq} {g\_{n\_0}\left(\Lambda\_i\right)}:&= \exp\left(i\left[\sum\_{i<j}^{n\_0} (-1)^{i+j}\lambda\_i\lambda\_j +\sum\_{i<j}^{n\_0} {\bar\lambda}\_i{\bar\lambda}\_j\right]\right) \;,\\ {f\_{n\_0,e}\left(\lambda\_i\vert\mu\right)}:&= {\int \text{d}^2y\,\text{d}^2z\,}\exp\left[ i(1-e)\left(-\sum\_i (-)^i \lambda\_i (y-\mu)+\mu z\right)\right] \nonumber\\ &\qquad \exp\left[ie\left(y-\sum\_i (-)^i \lambda\_i\right)(z-\mu) \right] \nonumber\\ \label{eq:PAffem noYZ} &=(2\pi)^{4-2e} \Big(\delta^2\left(\mu\right)\delta^2(\sum\_j(-)^j\lambda\_j)\Big)^{1-e} \;,\\ {\bar{f}\_{n\_0,t}\left({\bar\lambda}\_i\vert {\bar\mu}\right)}:&= {\int \text{d}^2{\bar{y}}\,\text{d}^2{\bar{z}}\,}\exp\left( i(1-t)\left(\sum\_i {\bar\lambda}\_i ({\bar{y}}-{\bar\mu})-{\bar\mu}{\bar{z}}\right) +it\left(-{\bar{y}}+\sum\_i{\bar\lambda}\_i\right)({\bar{z}}+{\bar\mu}) \right) \nonumber\\\label{eq:PAfftm noYZ}&= (2\pi)^{4-2t} \Big(\delta^2\left({\bar\mu}\right)\delta^2(\sum\_j{\bar\lambda}\_j)\Big)^{1-t} \;.\end{aligned}$$ The above functions have the following behaviour under cyclic permutations of the *Y*-space momenta: $$\begin{aligned} {g\_{n\_0}\left(\lambda\_1,\lambda\_2,...,\lambda\_{n\_0}\vert{\bar\lambda}\_1,{\bar\lambda}\_2,...,{\bar\lambda}\_{n\_0}\right)} &= {g\_{n\_0}\left(\lambda\_2,...,\lambda\_{n\_0},-(-)^{n\_0}\lambda\_{1}\vert{\bar\lambda}\_2,...,{\bar\lambda}\_{n\_0},-{\bar\lambda}\_{1}\right)} \;,\\ {f\_{n\_0,e}\left(\lambda\_1,\lambda\_2,...,\lambda\_{n\_0}\vert\mu\right)} &= {f\_{n\_0,e}\left(\lambda\_2,...,\lambda\_{n\_0},(-)^{n\_0}\lambda\_{1}\vert\mu\right)} \;,\\ {\bar{f}\_{n\_0,t}\left({\bar\lambda}\_1,{\bar\lambda}\_2,...,{\bar\lambda}\_{n\_0}\vert {\bar\mu}\right)} &= {\bar{f}\_{n\_0,t}\left({\bar\lambda}\_2,...,{\bar\lambda}\_{n\_0},{\bar\lambda}\_{1}\vert {\bar\mu}\right)} \;.\end{aligned}$$ Let us point out the fact that *f**n*0, *e*(*λ**i*|*μ*) (resp. ${\bar{f}\_{n\_0,t}\left({\bar\lambda}\_i\vert {\bar\mu}\right)}$) is given by (2*π*)2 if *e* = 1 (resp. *t* = 1)), or else it is given by a delta function that sets, in *g**n*0(Λ*i*) , the momentum *λ*1 (resp. ${\bar\lambda}\_1$) equal to combinations of the other variables *λ**j* , *j* = 2, …, *n*0. As a result, one can write the cyclic property of *F**n*0, *t*(Λ*i*|*M*) as *F**n*0, *t*(Λ1, Λ2, ..., Λ*n*0) = *F**n*0, *t*(Λ2, ..., Λ*n*0, ( − )*t*Λ1) . This behaviour under cyclic permutations can be used in to show the cyclic invariance of A*n*0, *t*(Φ*i*|*M*) as: $$\begin{aligned} {\mathcal{A}}\_{n\_0,t}\left(\Phi\_{i}\vert{M}\right) :&= \left(\prod\_{n=1}^{{n\_0}}\int\frac{{\text{d}^{4}\Lambda\_n\ }}{(2\pi)^2} \tilde\Phi\_n(\Lambda\_n)\right) F\_{n\_0,t}\left(\Lambda\_i\vert M\right) \\&= \int\frac{{\text{d}^{4}\Lambda\_1\ }}{(2\pi)^2}\tilde\Phi\_1(\Lambda\_{1}) \left(\prod\_{n=2}^{n\_0}\int\frac{{\text{d}^{4}\Lambda\_n\ }}{(2\pi)^2} \tilde\Phi\_n(\Lambda\_n)\right) F\_{n\_0,t}\left(\Lambda\_2,...,\Lambda\_{n\_0},(-)^t\Lambda\_1\right) \\&= \int\frac{{\text{d}^{4}\Lambda\_{n\_0}\ }}{(2\pi)^2}\tilde\Phi\_1((-)^{t}\Lambda\_{n\_0}) \left(\prod\_{n=1}^{n\_0-1}\int\frac{{\text{d}^{4}\Lambda\_n\ }}{(2\pi)^2} \tilde\Phi\_{n+1}(\Lambda\_n)\right) F\_{n\_0,t}\left(\Lambda\_i\vert M\right) \\&= {{\mathcal{A}}\_{n\_0,t}\left(\Phi\_2,...,\Phi\_{n\_0},\Phi\_1\vert{M}\right)} \;.\end{aligned}$$ The third line is a mere change of variables, while the last one uses. The computations of this section have shown that the *M* dependence of the pre-amplitudes A*n*0, *t*(Φ*i*|*M*) can be factorised as follows: $$\label{eq:fact M dep} {\mathcal{A}}\_{n\_0,t}\left(\Phi\_{i}\vert{M}\right) = \delta^2(\mu)^{1-e}\delta^2({\bar\mu})^{1-t} {\mathcal{A}}\_{n\_0,t}\left(\Phi\_{i}\right) \;.$$ This result presents the divergences that were first discussed in. This is a consequence of the integrand of not being in $L^1(\mathcal{Y}\_4\times {\cal Z}\_4)$. Let us replace the twistor plane waves *e**i**M**Z* by a function $\mathcal{V}(Z)\in \mathcal{S}({\cal Z}\_4)$, then the following object is well defined: $$\begin{aligned} {\mathcal{A}}^{\mathcal{V}}\_{n\_0,t}\left(\Phi\_{i}\right) :&= \int {\text{d}^{4}Z\ }{\text{d}^{4}Y\ } \left(\operatorname\*{\bigstar}\_{i=1}^{n\_0} \Phi\_i{\left(x;(-)^{i+1}y,{\bar{y}}\right)} \right) \star e^{ieyz}\star e^{-it{\bar{y}}{\bar{z}}} \star \mathcal{V}(Z) \;,\\ &= \int {\text{d}^{4}Z\ }{\text{d}^{4}Y\ } \mathcal{V}(Z) \star \left(\operatorname\*{\bigstar}\_{i=1}^{n\_0} \Phi\_i{\left(x;(-)^{i+1}y,{\bar{y}}\right)} \right) \star e^{ieyz}\star e^{-it{\bar{y}}{\bar{z}}} \;,\end{aligned}$$ where the last line follows from the cyclicity of the trace. Indeed, it can be shown from ([eq:fact trace],[eq:f\*k],[eq:f\*kb]) that [6](#fn6): $$\begin{aligned} \mathcal{S}({\cal Z}\_4)\star \mathcal{S}(\mathcal{Y}\_4)\star e^{ieyz}\star e^{-it{\bar{y}}{\bar{z}}} &\subseteq L^1({\cal Z}\_4)\star L^1(\mathcal{Y}\_4)\star e^{ieyz}\star e^{-it{\bar{y}}{\bar{z}}} \\&\subseteq L^1(\mathcal{Y}\_4\times {\cal Z}\_4)\star e^{ieyz}\star e^{-it{\bar{y}}{\bar{z}}} \\&= L^1(\mathcal{Y}\_4\times {\cal Z}\_4) .\end{aligned}$$ Because of the following analogous of : $$\begin{aligned} \mathcal{\tilde{V}}({M}) :&= \int\frac{{\text{d}^{4}Z\ }}{(2\pi)^2}\mathcal{V}(Z)\exp{(-iMZ)} \;,\\ \mathcal{V}(Z) &= \int\frac{{\text{d}^{4}M\ }}{(2\pi)^2}\mathcal{\tilde{V}}({M})\exp{(iMZ)} \;.\end{aligned}$$ we have that $$\begin{aligned} \label{eq:regularisedAmplitude} {\mathcal{A}}\_{n\_0,t}^{\mathcal{V}}\left(\Phi\_{i}\right) &= \int{\text{d}^{4}{M}\ }\mathcal{\tilde{V}}({M}) {\mathcal{A}}\_{n\_0,t}\left(\Phi\_{i}\vert{M}\right) \\&= \mathcal{\tilde{V}}\_{t,e;\alpha(0),{\dot{\alpha}}(0)}\,{\mathcal{A}}\_{n\_0,t}\left(\Phi\_{i}\right) \;.\end{aligned}$$ This amounts to the regularisation scheme introduced in, where $\mathcal{\tilde{V}}({M})$ was called smearing function. The introduction of the (field-independant) smearing function does not spoil the gauge invariance. At this order in perturbation theory, its effect on the pre-amplitudes A*n*0, *t*V(Φ*i*) is the appearance of four coupling constants, a particular case of the ones listed in Table [tab:div]. [h] | *n*0 | t | e | Divergences | $\mathcal{\tilde{V}}\_{t,e;\alpha(k),{\dot{\alpha}}(\bar{k})}$ | | --- | --- | --- | --- | --- | | even | 1 | 1 | None | $\int{\text{d}^{2}\mu\ }{\text{d}^{2}{\bar\mu}\ } (-\mu\_\alpha)^{k} (-{\bar\mu}\_{{\dot{\alpha}}})^{\bar{k}} \mathcal{\tilde{V}} (\mu,{\bar\mu})$ | | odd | 1 | 0 | *δ*2(*μ*) | $ \int{\text{d}^{2}{\bar\mu}\ } (-{\bar\mu}\_{{\dot{\alpha}}})^{\bar{k}} (i\partial\_\alpha)^k \mathcal{\tilde{V}} (0,{\bar\mu})$ | | odd | 0 | 1 | $\delta^2\left({\bar\mu}\right)$ | $ \int{\text{d}^{2}\mu\ } (-\mu\_\alpha)^{k} (i\partial\_{{\dot{\alpha}}})^{\bar{k}} \mathcal{\tilde{V}} (\mu,0)$ | | even | 0 | 0 | *δ*4(*M*) | $ (i\partial\_\alpha)^k (i\partial\_{{\dot{\alpha}}})^{\bar{k}} \mathcal{\tilde{V}} (0,0)$ | [tab:div] However, this does not mean that those constants are the only contributions of the regularising function that are observable. Indeed, let us show that the complete information about the function $\mathcal{\tilde{V}}(M)$ appear in the evaluation of the pre-amplitudes ${\mathcal{A}}^{\mathcal{V}}\_{n\_0,t;\alpha(k),{\dot{\alpha}}(\bar{k})}\left(\Phi\_{i}\right)$ obtained by applying the prescription explained below to the most general observables, then regularising as in. Let us stress that even though those observables can be built from A*n*0, *t*(Φ*i*|*M*) using, this expression needs the *M* dependent (hence divergent) version of the pre-amplitudes A*n*0, *t*(Φ*i*|*M*), and does not apply with the regularised one. This being said, the non-regularised version of ${\mathcal{A}}^{\mathcal{V}}\_{n\_0,t;\alpha(k),{\dot{\alpha}}(\bar{k})}\left(\Phi\_{i}\right)$ reads: $$\begin{aligned} {\mathcal{A}}\_{n\_0,t;{\underline{\alpha}}(K)}\left(\Phi\_i\vert{M}\right) &= \int {\text{d}^{4}Z\ }{\text{d}^{4}Y\ } \left(\operatorname\*{\bigstar}\_{i=1}^{n\_0}\hat{\Psi\_i}\right) \star{\left({\hat\kappa}{\hat{\bar\kappa}}\right)^{\star t}} \star (Z\_{{\underline{\alpha}}})^K \star e^{iMZ} \nonumber\\&= \int {\text{d}^{4}Z\ }{\text{d}^{4}Y\ } \left(\operatorname\*{\bigstar}\_{i=1}^{n\_0}\hat{\Psi\_i}\right) \star{\left({\hat\kappa}{\hat{\bar\kappa}}\right)^{\star t}} \star (Z\_{{\underline{\alpha}}}-M\_{{\underline{\alpha}}})^K e^{iMZ} \nonumber\\&= \int {\text{d}^{4}Z\ }{\text{d}^{4}Y\ } \left(\operatorname\*{\bigstar}\_{i=1}^{n\_0}\hat{\Psi\_i}\right) \star{\left({\hat\kappa}{\hat{\bar\kappa}}\right)^{\star t}} \star(-i\partial^M\_{{\underline{\alpha}}}-M\_{{\underline{\alpha}}})^K e^{iMZ} \nonumber\\&= (-i\partial^M\_{{\underline{\alpha}}}-M\_{{\underline{\alpha}}})^K {\mathcal{A}}\_{n\_0,t}\left(\Phi\_{i}\vert{M}\right) \;.\end{aligned}$$ From, we can extract the contributions where $(Z\_{\underline{\alpha}})^{K}$ brings *z**α**k* $\bar z\_{\dot \alpha}^{\bar k}\,$: $${\mathcal{A}}\_{n\_0,t;\alpha(k),{\dot{\alpha}}(\bar{k})}\left(\Phi\_{i}\vert{M}\right) = \left((-i\partial\_\alpha)^k\delta^2(\mu)\right)^{1-e} (-\mu\_\alpha)^{ke} \left((-i\partial\_{{\dot{\alpha}}})^{\bar{k}}\delta^2({\bar\mu})\right)^{1-t} (-{\bar\mu}\_{{\dot{\alpha}}})^{\bar{k}t} {\mathcal{A}}\_{n\_0,t}\left(\Phi\_{i}\right) \;.$$ Let us make precise that the absence of powers of *μ* in the *e* = 1 case is due to the symmetrization of the indices, yielding the commutation of *μ* and ∂*μ*, which in turn allows to put all *μ* right in front of the delta function. The same argument rules out the presence of ${\bar\mu}$ when *t* = 1 , ∂*μ* when *e* = 0 and $\partial\_{{\bar\mu}}$ when *e* = 1. After various integrations by part, the regularised general pre-amplitudes read: $${\mathcal{A}}^{\mathcal{V}}\_{n\_0,t;\alpha(k),{\dot{\alpha}}(\bar{k})}\left(\Phi\_{i}\right) = \mathcal{\tilde{V}}\_{t,e;\alpha(k),{\dot{\alpha}}(\bar{k})} {\mathcal{A}}\_{n\_0,t}\left(\Phi\_{i}\right) \;,$$ where, again, the prefactors are listed in Table [tab:div]. Once again, the well-definiteness of each of those observables relies on the properties of V . All of them are finite since V is a rapidly decreasing function. In the next section, we will be less general and see what happens when plugging a precise weak-field in the definition of the pre-amplitude. In that context, the question of the *M* dependence will be irrelevant and we will only be interested in computing A*n*0, *t*(Φ*i*) from the following equivalent of : $$\label{eq:PAffte} {\mathcal{A}}\_{n\_0,t}\left(\Phi\_{i}\right) = \int\left(\prod\_{j=1}^{{n\_0}}\frac{{\text{d}^{4}\Lambda\_j\ }}{(2\pi)^2}\right) \left(\prod\_{j=1}^{{n\_0}}\tilde\Phi(\Lambda\_j)\right) F\_{n\_0,t}\left(\Lambda\_i\right) \;,$$ where *F**n*0, *t*(Λ*i*) is defined by $$\label{eq:factor F(M)} F\_{n\_0,t}\left(\Lambda\_i\vert M\right) = \delta^2(\mu)^{1-e}\delta^2({\bar\mu})^{1-t} F\_{n\_0,t}\left(\Lambda\_i\right) \;.$$ Correlators from zero-form charges ================================== The purpose of this section is to use the bulk-to-boundary propagators of as weak fields inside the expression for the pre-amplitudes, and fully evaluate the complete expression that results. Then, in Section [sec:CFT] we will separately compute the *n*-point correlation functions of conserved currents of the free *C**F**T*3 corresponding to a set of free bosons. The latter model was conjectured in to be dual to the type-A Vasiliev model (with a parity-even bulk scalar field) where the bulk scalar field obeys the Neumann boundary condition, sometimes called “irregular boundary condition”. We will show that both expressions, i.e. the pre-amplitude for Vasiliev’s equations on the one hand and the correlations functions of conserved currents in the free *C**F**T*3 on the other hand, exactly coincide. We stress that the pre-amplitude only refers to the free Vasiliev equations and does not take into account the interactions incorporated into the fully nonlinear model. After, Klebanov and Polyakov further conjectured that the Vasiliev model where the scalar field is parity-even and obeys Dirichlet boundary condition should be dual to the critical *U*(*N*) model. As for the free *U*(*N*) model, we instead stick to the Neumann boundary condition for the bulk scalar field of the Vasiliev model, and use the boundary to bulk propagators with the conventions of that we are now going to recall. The metric of *A**d**S* is expressed in Poincaré coordinates, so that $\text{d}s^2 = \frac{1}{r^2}\eta\_{\mu\nu}{\text{d}x}^\mu{\text{d}x}^\nu\,$. One of the space-like coordinates in *x**μ* is the radial variable *r* that vanishes on the boundary of *A**d**S* . The Minkowski metric components *η**μ**ν* and the inverse *η**μ**ν* are used to lower and raise world indices. We do not use the components of the complete metric d*s*2 , as one might expect in a geometric formulation. We caracterise vectors that are tangent to the boundary by the vanishing of their *r*-component. In the same spirit, all spinors have a bulk notation, with dotted and undotted indices, and boundary Dirac spinors will be defined as the boundary value of bulk spinors submitted to an appropriate projection that we specify below. We use the four matrices *σ**μ* , three of which are the Pauli matrices, to link any vector *v**μ* to a 2 × 2 Hermitian matrix as $v\_{\alpha{\dot{\beta}}}=\bar{v}\_{{\dot{\beta}}\alpha}=v\_\mu(\sigma^\mu)\_{\alpha{\dot{\beta}}}\,$. As before, we will omit most of the spinorial indices and do all contractions according to the NW-SE convention. To every pair of points of *A**d**S*4 with respective coordinates *x**i**μ* and *x**j**μ* , we can associate the following two sets of quantities: $$\begin{aligned} {x\_{i,j}}^{\mu} = x\_i^\mu - x\_j^\mu \qquad {\rm and} \qquad {\check{x}\_{i,j}}^{\mu} = ({x\_{i,j}})^{-2}{x\_{i,j}}^{\mu} \;.\end{aligned}$$ From now on, all the points will be taken on the boundary, except for one bulk point with coordinates *x*0*μ* (in particular, its *r* coordinate will be denoted *r*0). Of interest in this section will be the following 2 × 2 matrices Σ*i* :  = *σ**r* − 2*r*0*x̌*0, *i* ,  that are attached to every boundary point with coordinate *x**i* . Some of the properties of the matrices Σ*i* are collected in Appendix [app:CS] and will implicitly be used in the rest of this section. Among them is: $${\det{}\_{i, j}} := \det\left({\Sigma\_{i}}-{\Sigma\_{j}}\right) = \frac{4r\_0^2({x\_{i,j}})^2}{({x\_{0,i}})^2({x\_{0,j}})^2} \;.$$ The propagator of the spin-*s* component of the master field Φ from the chosen bulk point to a given boundary point of coordinates *x**i**μ* was computed in. The boundary conditions were chosen as follows: Neumann for the scalar field and Dirichlet for the spin-*s* > 0 fields. For the case of the bosonic model, the bulk-to-boundary propagator of the master field Φ was given in by: $${\mathcal{K}\_{i}}(x\_0,x\_i,{{\chi}\_{i}}\vert Y) := {K\_{i}}e^{iy{\Sigma\_{i}}{\bar{y}}} \sum\_{{\sigma\_{i}}=\pm1}\left( e^{i\theta} e^{i{\sigma\_{i}}{{\bar\nu\,}\_{i}}{{\bar\Sigma}\_{i}}y} +e^{-i\theta} e^{i{\sigma\_{i}}{{\nu}\_{i}}{\Sigma\_{i}}{\bar{y}}} \right) \;,$$ where *χ**i* denotes the polarization spinor attached to the boundary point *x**i* , and where $$\label{eq:def K,khi,nu} {K\_{i}} := ({x\_{0,i}})^{-2}r\_0 \;,\quad {{\nu}\_{i}}:=\sqrt{2r\_0}\,{\Sigma\_{i}}\,{\check{\bar{x}}\_{0,i}}\,{{\chi}\_{i}} \;,\quad ({{\chi}\_{i}})^{\dagger} = {{\bar\chi}\_{i}} = {\bar\sigma}^{r}{{\chi}\_{i}} \;,\quad ({{\nu}\_{i}})^{\dagger} = {{\bar\nu\,}\_{i}} = -{{\bar\Sigma}\_{i}}{{\nu}\_{i}} \;.$$ The propagator K*i*(*x*0, *x**i*, *χ**i*|*Y*) is an imaginary Gaussian in ${\cal Y}\_4\,$ Hence, submitted to the usual *i**ε* prescription allowing the use of, it becomes a rapidly decreasing function, justifying the above procedure. Following the definition, the Fourier transform of K*i*(*x*0, *x**i*, *χ**i*|*Y*) is given by $$\begin{aligned} {\mathcal{\tilde{K}}\_{j}}(x\_0,x\_j,{{\chi}\_{j}}\vert \Lambda\_j) &= {K\_{j}}e^{i\lambda\_j{\Sigma\_{j}}{\bar\lambda}\_j} \sum\_{{\sigma\_{j}}=\pm1}\left( e^{-i\theta} e^{i{\sigma\_{j}}{{\nu}\_{j}}\lambda\_j} +e^{i\theta} e^{i{\sigma\_{j}}{{\bar\nu\,}\_{j}}{\bar\lambda}\_j} \right) \\\label{eq:GYbtbL}&= {K\_{j}}e^{i\lambda\_j{\Sigma\_{j}}{\bar\lambda}\_j} \sum\_{{\varepsilon\_{j}}\in\{0,1\}} \sum\_{{\sigma\_{j}}=\pm1} e^{i\theta\left(1-2{\varepsilon\_{j}}\right)} e^{i{\varepsilon\_{j}}{\sigma\_{j}}{{\nu}\_{j}}\lambda\_j + i(1-{\varepsilon\_{j}}){\sigma\_{j}}{{\bar\nu\,}\_{j}}{\bar\lambda}\_j} \;.\end{aligned}$$ Let us emphasize that ${\mathcal{\tilde{K}}\_{i}}$ satisfies the reality conditions but not the minimal bosonic projection. We will take care of this projection separately at the end of this section. Now we insert this propagator into, which yields: $$\label{eq:PAltemP1} {\mathcal{A}}\_{n\_0,t}\left({\mathcal{K}\_{i}}\right) = \int\left(\prod\_{j=1}^{{n\_0}}\int\frac{{\text{d}^{4}\Lambda\_j\ }}{(2\pi)^2}\right) \left(\prod\_{j=1}^{n\_0}{\mathcal{\tilde{K}}\_{j}}(\Lambda\_j)\right) F\_{n\_0,t}\left(\Lambda\_i\right) \;.$$ Let us introduce the following notation : ∑*σ*, *ɛ* :  = ∑*ɛ*1 ∈ {0, 1}∑*σ*1 =  ± 1...∑*ɛ**n*0 ∈ {0, 1}∑*σ**n*0 =  ± 1 By making use of ([eq:PAffq], [eq:PAffem noYZ], [eq:PAfftm noYZ],[eq:factor F(M)]), we can write the expression in the following way: $$\begin{aligned} &{\mathcal{A}}\_{n\_0,t}\left({\mathcal{K}\_{i}}\right) = \label{4.9}\\& \alpha\_{n\_0,t}\,\sum\_{\sigma,\varepsilon} e^{-i\theta\operatorname\*{{\displaystyle\Sigma}}\_i(2{\varepsilon\_{i}}-1)} \int{\text{d}^{4n\_0}\Lambda\ } e^{\frac{i}{2}{\Lambda{}^{T}} R'\Lambda+i{J'{}^{T}}\Lambda} \; \Big[\delta^2(\sum\_{j=1}^{n\_0}(-)^j\lambda\_j)\Big]^{(1-e)}\, \Big[ \delta^2(\sum\_{j=1}^{n\_0}{\bar\lambda}\_j)\Big]^{(1-t)} \;, \nonumber \end{aligned}$$ where the pre-factor in the above expression is given by *α**n*0, *t* :  = (2*π*)8 − 2*n*0 − 2*e* − 2*t*(∏*i* = 1*n*0*K**i*) ,  and the symbol $\Lambda=(\lambda\_i,{\bar\lambda}\_{{\bar{\imath}}})$, where *i* and ${\bar{\imath}}$ run from 1 to *n*0, denotes a 4*n*0-dimensional column vector[7](#fn7). Then the entries of the matrix *R*ʹ and the source $J'=({j}{\,}'{\!}\_i, \bar{\jmath}{\,}'{\!}\_{\bar \imath})$ are given by $$\begin{aligned} R'\_{ij} =& (1-{\delta\_{i,j}})(-)^{i+j+\Theta(i,\,j)} \;,\quad R'\_{{\bar{\imath}}{\bar{\jmath}}} = (1-{\delta\_{{\bar{\imath}},{\bar{\jmath}}}})(-)^{\Theta({\bar{\imath}},\,{\bar{\jmath}})} \;,\\ R'\_{i{\bar{\jmath}}} =& {\delta\_{i,{\bar{\jmath}}}}{\Sigma\_{i}} \;,\quad R'\_{{\bar{\imath}}j} = {\delta\_{{\bar{\imath}},j}}{{\bar\Sigma}\_{{\bar{\imath}}}} \;,\quad {j}{\,}'{\!}\_{i} = {\varepsilon\_{i}}{\sigma\_{i}}{{\nu}\_{i}} \;,\quad \bar{\jmath}{\,}'{\!}\_{{\bar{\imath}}}= (1-{\varepsilon\_{{\bar{\imath}}}}){\sigma\_{{\bar{\imath}}}}{{\bar\nu\,}\_{{\bar{\imath}}}} \;.\end{aligned}$$ In this expression, Θ(*x*,  *y*) is a function whose value is 1 when *x* is greater than *y* and 0 otherwise. Since it always comes in expressions multiplied by (1 − *δ**x**y*), we do not need to specify the value of Θ(*x*,  *x*). In the case when *e* = 0 (resp. *t* = 0), in we integrate out *λ**n*0 (resp. ${\bar\lambda}\_{n\_0}$) so that A*n*0, *t*(K*i*) is given by $${\mathcal{A}}\_{n\_0,t}\left({\mathcal{K}\_{i}}\right)= \alpha\_{n\_0,t} \sum\_{\sigma,\varepsilon} e^{-i\theta\operatorname\*{{\displaystyle\Sigma}}\_{j=1}^{n\_0}(2{\varepsilon\_{j}}-1)} \int{\text{d}^{2n+2\bar{n}}\Lambda\ } e^{\frac{i}{2}{\Lambda{}^{T}} R\,\Lambda+i{J{}^{T}}\,\Lambda} \;, \label{4.13}$$ where the matrix *R* and the source *J* are given by $$\begin{aligned} R\_{ij} =&\; (1-{\delta\_{i,j}})(-)^{i+j+\Theta(i,\,j)} \;,\quad R\_{{\bar{\imath}}{\bar{\jmath}}} = (1-{\delta\_{{\bar{\imath}},{\bar{\jmath}}}})(-)^{\Theta({\bar{\imath}},\,{\bar{\jmath}})} \;,\\ R\_{i{\bar{\jmath}}} =&\; {\delta\_{i,{\bar{\jmath}}}}{\Sigma\_{i}} +\left( -(1-t)\delta\_{i,n\_0}+(1-e)\delta\_{{\bar{\jmath}},n\_0}(-)^{i} +(1-t)(1-e)(-)^i\right){\Sigma\_{n\_0}} \;,\\ R\_{{\bar{\imath}}j} =&\; {\delta\_{{\bar{\imath}},j}}{{\bar\Sigma}\_{{\bar{\imath}}}} +\left( -(1-t)\delta\_{j,n\_0}+(1-e)\delta\_{{\bar{\imath}},n\_0}(-)^{j} +(1-t)(1-e)(-)^j\right){{\bar\Sigma}\_{n\_0}} \;,\\\label{eq:Gjad} {j}\,\_{i} =&\; {\varepsilon\_{i}}{\sigma\_{i}}{{\nu}\_{i}} - (1-e)(-)^{i+n\_0}{\varepsilon\_{n\_0}}{\sigma\_{n\_0}}{{\nu}\_{n\_0}} \;,\quad \bar{\jmath}\,\_{{\bar{\imath}}} = (1-{\varepsilon\_{{\bar{\imath}}}})\,{\sigma\_{{\bar{\imath}}}}\,{{\bar\nu\,}\_{{\bar{\imath}}}} - (1-t)(1-{\varepsilon\_{n\_0}})\,{\sigma\_{n\_0}}\,{{\nu}\_{n\_0}} \;,\end{aligned}$$ where the indices *i* and ${\bar{\imath}}$ labeling the entries of the matrix *R* and the column vector *J* now run over the following values : $$i\in\{1,...,n=n\_0-(1-e)\} \;,\quad {\bar{\imath}}\in\{1,...,{\bar{n}}=n\_0-(1-t)\} \;.$$ The Gaussian integration in can be carried out[8](#fn8) via the formula $$\label{eq:wiki gaussian} \mathcal{G} := \int{\text{d}^{2n+2\bar{n}}\Lambda\ } e^{\frac{i}{2}{\Lambda{}^{T}} R\Lambda+i{J{}^{T}}\Lambda} = \sqrt{\frac{(2i\pi)^{2n+2\bar{n}}}{\det R}} e^{\frac{i}{2}{J{}^{T}}R^{-1}J} \;,$$ which differs from the usual gaussian integration formula by a change of sign due to the NW-SE convention. As we show in Appendix [app:gauss], the determinant and inverse of *R* are given by $$\begin{aligned} \label{eq:Gdmad} \det R &= 2^{4(n\_0-1)}r\_0^{2n\_0}\prod\_{i=1}^{n\_0} \frac{({x\_{i,i+1}})^2}{({x\_{0,i}})^4} \;,\\ R^{-1}\_{ij} &= \sum\_{\eta=\pm1}\frac{1}{2{\det{}\_{i, i+\eta}}} \left(-\eta{\delta\_{i,j}}+{\delta\_{i+\eta,j}}\xi\_{i,i+\eta}\right) ({\Sigma\_{i}}-{\Sigma\_{i+\eta}}){{\bar\Sigma}\_{i+\eta}} \;,\\ R^{-1}\_{{\bar{\imath}}{\bar{\jmath}}} &= \sum\_{\eta=\pm1}\frac{1}{2{\det{}\_{{\bar{\imath}}, {\bar{\imath}}+\eta}}} \left(-\eta{\delta\_{{\bar{\imath}},{\bar{\jmath}}}}-{\delta\_{{\bar{\imath}}+\eta,{\bar{\jmath}}}}\xi\_{{\bar{\imath}},{\bar{\imath}}+\eta}\right) ({{\bar\Sigma}\_{{\bar{\imath}}}}-{{\bar\Sigma}\_{{\bar{\imath}}+\eta}}){\Sigma\_{{\bar{\imath}}+\eta}} \;,\\ R^{-1}\_{i{\bar{\jmath}}} &= \sum\_{\eta=\pm1}\frac{1}{2{\det{}\_{i, i+\eta}}} \left({\delta\_{i,{\bar{\jmath}}}}+\eta\,{\delta\_{i+\eta,{\bar{\jmath}}}}\,\xi\_{i,i+\eta}\right) ({\Sigma\_{i}}-{\Sigma\_{i+\eta}}) \;,\\ \label{eq:Gimad last} R^{-1}\_{{\bar{\imath}}j} &= \sum\_{\eta=\pm1}\frac{1}{2{\det{}\_{{\bar{\imath}}, {\bar{\imath}}+\eta}}} \left({\delta\_{{\bar{\imath}},j}}-\eta\,{\delta\_{{\bar{\imath}}+\eta,j}}\,\xi\_{{\bar{\imath}},{\bar{\imath}}+\eta}\right) ({{\bar\Sigma}\_{{\bar{\imath}}}}-{{\bar\Sigma}\_{{\bar{\imath}}+\eta}}) \;,\end{aligned}$$ where it should be understood that the indices *j* and *j* + *k**n*0 are identified with each other for any integers *j* and *k*, and where the coefficients *ξ**i*, *i* + *η* are defined as follows: $$\begin{aligned} \xi\_{i,i+\eta} &= -\eta + t\;\delta\_{i,n\_0}\,(\eta+1)+t\;\delta\_{i+\eta,n\_0}\,(\eta-1) \;.\end{aligned}$$ Using the above expressions for the inverse matrix *R*− 1 and the source *J* into, we get $$\begin{aligned} \label{eq:Gint} \mathcal{G} &= \sqrt{\frac{(2i\pi)^{2n+2\bar{n}}}{2^{4(n\_0-1)}r\_0^{2n\_0}} \prod\_{i=1}^{n\_0}\frac{({x\_{0,i}})^4}{({x\_{i,i+1}})^2}} \;\exp\left( -\frac{i}{4}\sum\_{i=1}^{n\_0}{Q\_{i}}\right) \mathcal{G}\_P \;,\\\label{eq:GintP} \mathcal{G}\_P &= \exp\left( -\frac{i}{2}\sum\_{i=1}^{n\_0} (-)^{t\,\delta\_{i,n\_0}} {\sigma\_{i}}{\sigma\_{i+1}} \left(2{\varepsilon\_{i+1}}-1\right) {P\_{i,i+1}} \right) \;.\end{aligned}$$ Where the conformally-invariant variables are defined as in : $$\label{eq:def PQ bulk} {P\_{i,i+1}}= {{\chi}\_{i}}\,\sigma^r\,{\check{\bar{x}}\_{i,i+1}}\,{{\chi}\_{i+1}} \;,\quad {Q\_{i}} = {{\chi}\_{i}}\,\sigma^{r}\, \left({\check{\bar{x}}\_{i,i+1}}-{\check{\bar{x}}\_{i,i-1}}\right)\,{{\chi}\_{i}} \;.$$ At this stage, as can be seen in, the next step is to sum over all values taken by *σ*1, …, *σ**n*0 . In order to do so, one can show the following two identities, holding for any *P̃**i*, *j* : $$\begin{aligned} \sum\_{{\sigma\_{2}},...,{\sigma\_{(n\_0-1)}}} e^{\frac{i}{2}\sum\_{i=1}^{n\_0-1}{\sigma\_{i}}{\sigma\_{i+1}}{\tilde{P}\_{i,i+1}}} &\equiv 2^{n\_0-2}\left( \prod\_{i=1}^{n\_0-1}\cos\left(\tfrac{1}{2}{\tilde{P}\_{i,i+1}}\right) +i^{n\_0-1}{\sigma\_{1}}{\sigma\_{n\_0}}\prod\_{i=1}^{n\_0-1}\sin\left(\tfrac{1}{2}{\tilde{P}\_{i,i+1}}\right) \right) \;,\\ \sum\_{{\sigma\_{1}},...,{\sigma\_{n\_0}}} e^{\frac{i}{2}\sum\_{i=1}^{n\_0}{\sigma\_{i}}{\sigma\_{i+1}}{\tilde{P}\_{i,i+1}}} &\equiv 2^{n\_0}\left( \prod\_{i=1}^{n\_0}\cos\left(\tfrac{1}{2}{\tilde{P}\_{i,i+1}}\right) +i^{n\_0}\prod\_{i=1}^{n\_0}\sin\left(\tfrac{1}{2}{\tilde{P}\_{i,i+1}}\right) \right) \;,\end{aligned}$$ where the first identity can be derived recursively on *n*0 and the second one can be obtained from the first relation by summing over *σ*1 and *σ**n*0 . Replacing *P̃**i*, *i* + 1 by ( − 1)*t**δ**i*, *n*0(2*ɛ**i* + 1 − 1)*P**i*, *j* , one finds $$\sum\_{\sigma\_1,\ldots,\sigma\_{n\_0}} \mathcal{G}\_P = 2^{n\_0} \prod\_{i=1}^{n\_0}\cos\left(\tfrac{1}{2}{P\_{i,i+1}}\right) -(2i)^{n\_0}(-1)^{t} \left(\prod\_{j=1}^{n\_0}\left(1-2{\varepsilon\_{j}}\right)\right) \prod\_{i=1}^{n\_0}\sin\left(\tfrac{1}{2}{P\_{i,i+1}}\right) \;.$$ At this stage, all we have to do is to sum over the 2 values (zero and one) taken by each of the variables *ɛ**i* , *i* = 1, …, *n*0 , or equivalently summing over the two values  ± 1 taken by the *n*0 variables (1 − 2*ε**i*) , *i* = 1, …, *n*0 , so as to yield $$\begin{aligned} \sum\_{\sigma,\varepsilon} e^{i\theta\operatorname\*{{\displaystyle\Sigma}}\_i(1-2{\varepsilon\_{i}})} \mathcal{G}\_P \label{eq:Gssb GintP}&= 2^{2n\_0} \left( (\cos\theta)^{n\_0} \prod\_{i=1}^{n\_0}\cos\left(\tfrac{1}{2}{P\_{i,i+1}}\right) -(-1)^{t}(\sin\theta)^{n\_0} \prod\_{i=1}^{n\_0}\sin\left(\tfrac{1}{2}{P\_{i,i+1}}\right) \right) \;.\end{aligned}$$ Gathering all the prefactors appearing in, and, we finally obtain the following expression for the pre-amplitudes A*n*0, *t*(K*i*) $$\begin{aligned} \label{eq:result PAb} {\mathcal{A}}\_{n\_0,t}\left({\mathcal{K}\_{i}}\right) &= \beta\_{n\_0,t}\, \exp\left( -\frac{i}{4}\sum\_{i=1}^{n\_0}{Q\_{i}}\right) \left( \prod\_{i=1}^{n\_0}\frac{1}{\left\vert{x\_{i,i+1}}\right\vert}\right) \nonumber\\&\quad\times \left( (\cos\theta)^{n\_0} \prod\_{i=1}^{n\_0}\cos\left(\tfrac{1}{2}{P\_{i,i+1}}\right) -(-1)^{t}(\sin\theta)^{n\_0} \prod\_{i=1}^{n\_0}\sin\left(\tfrac{1}{2}{P\_{i,i+1}}\right) \right) \;,\\ \beta\_{n\_0,t} :&= 4(i)^{2n\_0-2+e+t}(2\pi)^{2+e+t} \prod\_{j=1}^{n\_0}{\rm sgn}({x\_{j,j+1}}^2) \;,\end{aligned}$$ where ${\rm sgn}(x)$ is the sign function. This expression is one of the central results of the paper. It reproduces, by restricting to the case where *t* = 0  and up to constant coefficients, the expression obtained by combining the equations (6.19) and (6.20) of. We generalise this result to the cases where the pre-amplitudes have extra insertions of $(\hat\kappa\hat{\bar\kappa})\,$, see, which corresponds to taking *t* = 1 . However, we see that at the leading order the extra insertion has no effect on the final result, except for a global sign in the B-model. The dependence on *θ* was kept as a matter of convenience during the computation, but the result should be understood to hold only for the parity-invariant cases, i.e. for *θ* = 0 (type A model) and *θ* = *π*/2 (type B model). It would be interesting to understand how to modify the twisted open Wilson line in order to capture genuinely parity-breaking terms. If one wants to restrict to the minimal bosonic model, one has to use bulk to boundary propagators ${\mathcal{\tilde{K}}^{MB}\_{i}}$ satisfying the minimal bosonic projection. Since it is not the case of the one defined in, we have to project it explicitly by defining the propagator for the minimal model as $$\begin{aligned} {\mathcal{\tilde{K}}^{MB}\_{i}}(x\_0,x\_i,{{\chi}\_{i}}\vert \Lambda\_i) :&= \tfrac{1}{2}\sum\_{\xi=0,1} (\pi\_{\Lambda}\tau\_{\Lambda})^{\xi}{\mathcal{\tilde{K}}\_{i}}(x\_0,x\_i,{{\chi}\_{i}}\vert \Lambda\_i) \\&= \label{eq:MB prop} \tfrac{1}{2}\sum\_{\xi=0,1} {\mathcal{\tilde{K}}\_{i}}(x\_0,x\_i,i^{\xi}{{\chi}\_{i}}\vert \Lambda\_i) \\&=: \tfrac{1}{2}\sum\_{\xi=0,1} \tau\_{{{\chi}\_{i}}}^{\xi} {\mathcal{\tilde{K}}\_{i}}(x\_0,x\_i,{{\chi}\_{i}}\vert \Lambda\_i) \;.\end{aligned}$$ Then, the pre-amplitude for the minimal bosonic model is given in terms of the non-minimal one as $${\mathcal{A}}^{MB}\_{n\_0,t}\left({\mathcal{K}\_{i}}\right) = \left( \prod\_{i=1}^{n\_0} \tfrac{1}{2}\sum\_{\xi\_i=0,1} \tau\_{{{\chi}\_{i}}}^{\xi\_i} \right) {\mathcal{A}}\_{n\_0,t}\left({\mathcal{K}\_{i}}\right) \;.$$ We will now compute the correlation functions of conserved currents on the free *C**F**T*3 side, and show that, before performing Bose-symmetrisation, the result (see below) exactly reproduces the formula. Free *U*(*N*) and *O*(*N*) vector models ======================================== The purpose of this section is to compute cyclic building blocks for the amplitudes in the free U(N) vector model in a space-time of dimension *d* > 2 , thereby proving explicitly a formula conjectured in, where the 3-point functions where computed. In the three-dimensional case, we find that they match the pre-amplitudes defined in Vasiliev’s bosonic type-A model. Eventually we will show that the minimal bosonic projection of Vasiliev’s type-A model amounts to considering the *O*(*N*) vector model. As just stated, the computations of this section take place in a *d*-dimensional spacetime, whose world indices we will denote by Greek letters. This should not create any confusion with the other sections where we also use Greek letters but for base indices in *d* + 1 dimensions. Let 2 *d*-dimensional vectors $a = a^\mu \frac{\partial}{\partial x^{\mu}}$ and $b = b^\mu \frac{\partial}{\partial x^{\mu}}\,$. In this section we will use the notation *a* ⋅ *b* :  = *a**μ**b**μ* and *a*2 :  = *a**μ**a**μ* . The fields of the theory are complex Lorentz scalars *ϕ**i* carrying an internal index *i*. The theory is free and the propagators are given by $${\langle \phi^{i}(x)\phi^{j}(y)\rangle} = 0 = {\langle {{\phi}^{\ast}}\_{i}(x){{\phi}^{\ast}}\_{j}(y)\rangle} \;,\quad {\langle \phi^{i}(x){{\phi}^{\ast}}\_{j}(y)\rangle} = c\_1\,\frac{\delta^i\_{\phantom{i}j}}{\left\vert x-y\right\vert^{d-2}}\, \;,$$ where $\left\vert x\right\vert:=\sqrt{x^2}\,$. This theory is known to be conformal. The conserved current of spin *s* is a traceless tensor *J**μ*(*s*) containing *s* derivatives and a single trace in the sense of the internal algebra. Using a polarisation vector *ε**μ* and some weights *a**s*, one can gather all the conserved currents into a generating function, see e.g., and references therein: $$\label{eq:def J} \sum\_{s=0}^{\infty} a\_{s}J\_{\mu(s)}(x)\left(\epsilon^{\mu}\right)^{s} = J(x,\epsilon) = {{\phi}^{\ast}}\_i(x)f\left(\epsilon, \overleftarrow{\partial}, \overrightarrow{\partial} \right)\phi^i(x) \;.$$ We assume that the function *f* is analytical. This section will involve sums over integer values, that will always be taken from zero to infinity upon identifying the inverse of diverging factorials with zero. Since the generating function *J*(*x*, *ε*) is Lorentz invariant and the spin *s* current *J**μ*(*s*) contains *s* derivatives, the function *f*(*ε*, *u*, *v*) can be written as *f*(*u*, *v*, *ε*) = ∑*k*, ℓ, *m*, *p*, *q**f**k*, ℓ, *m*, *p*, *q*(*ε* ⋅ *u*)*k*(*ε* ⋅ *v*)ℓ((*u* ⋅ *v*)*ε*2)*m*(*u*2*ε*2)*p*(*v*2*ε*2)*q* . Once we are sure that all *J**μ*(*s*) in are traceless, the generating function will be left unchanged by transformations of the form (*ε**μ*)*s* → (*ε**μ*)*s* + (*η**μ*(2))ℓ(*ε*2)ℓ(*ε**μ*)*s* − 2ℓ. We thus may use transformations of this type to effectively constraint the polarization vector to be null (*ε*2 = 0) without affecting the generating function of the currents, hence without affecting the generating functions of the correlation functions either. Thus, the only coefficients that we need to know explicitly are *f**k*, *l*, 0, 0, 0. The tracelessness condition ∂*ε*2*f* = 0 gives several relations between the coefficients appearing in. Among those equations, we find that the *m* dependance of the coefficients *f**k*, *l*, *m*, 0, 0 is given by $$f\_{k,\ell,m+1,0,0}= -\frac{(k+1)(\ell+1)}{2(m+1)(k+\ell+m+1+\tfrac{d-2}{2})} f\_{k+1,\ell+1,m,0,0} \,.$$ Then, altogether with the conservation condition ∂*ε*∂*x**f*|*u*2 = *v*2 = 0 = 0, it gives the *k* dependence as $$f\_{k+1,\ell,m,0,0} = -\frac{(\ell+1)(\ell+m+\tfrac{d-2}{2})}{(k+1) (k+m+\tfrac{d-2}{2})} f\_{k,\ell+1,m,0,0} \,.$$ Then, choosing *b*ℓ :  = *f*0, ℓ, 0, 0, 0, one can solve those two recursions and get the following expression for the on-shell part of the current: $$f\_{k,\ell,m,0,0} = \frac{(-1)^k}{2^m} \frac{(k+\ell+2m)!}{k!\,\ell!\,m!} \frac{\Gamma(k+\ell+m+\tfrac{d-2}{2}) \Gamma(\tfrac{d-2}{2})} {\Gamma(k+m+\tfrac{d-2}{2})\Gamma(\ell+m+\tfrac{d-2}{2})} b\_{k+\ell+2m} \,.$$ After effectively removing *ε*2 from the generating function *J*(*x*, *ε*), this amounts to rewrite as $$\label{eq:series f epsnull} f(u,v,\epsilon) = \sum\_{s,k} b\_s\binom{s}{k} \frac{\Gamma(s+\tfrac{d-2}{2})\Gamma(\tfrac{d-2}{2})} {\Gamma(k+\tfrac{d-2}{2})\Gamma(s-k+\tfrac{d-2}{2})} (-\epsilon \cdot u)^k(\epsilon \cdot v)^{s-k} \;.$$ Now we choose *b**s* to be expressed in term of a constant *γ* (to be specified later) as $$\label{eq:b\_s} b\_s= \frac{\gamma^s}{s!\,\Gamma(s+\tfrac{d-2}{2})} \,.$$ We are interested in the connected correlation functions $\langle J\_1 \cdot J\_{n\_0} \rangle\_{\rm conn.}$, which descent from the contribution of several Feynman diagrams. Since *ϕ**i* can only be contracted with *ϕ*\**i*, those contributions only differ by permutations of the currents. As we will discuss later, this is to contrast with the real field theory, where each current has two possible contraction with the next one. We are interested in the cyclic building block for the correlation function, that is to say the first Wick contraction, that we define with the following normalisation: $$\begin{aligned} {\langle J\_1,...,J\_{n\_0}\rangle\_{\rm cyclic}}:&= \frac{1}{N} \left. \prod\_{i=1}^{n\_0} f\left(\partial\_{x'\_i},\partial\_{x\_i},\epsilon\_i\right) \prod\_{j=1}^{n\_0} {\langle \phi^{i\_j}(x\_j){{\phi}^{\ast}}\_{i\_{j+1}}(x'\_{j+1})\rangle} \right\vert\_{x'\_k=x\_k \forall k} \\&= \left. \prod\_{i=1}^{n\_0} f\left(\partial\_{x'\_i},\partial\_{x\_i},\epsilon\_i\right) \prod\_{j=1}^{n\_0} \left\vert x\_{j}-x'\_{j+1}\right\vert^{2-d} \right\vert\_{x'\_k=x\_k \forall k} \\\label{eq:CFTpa1}&= \prod\_{i=1}^{n\_0} f\left(-\partial\_{x\_{i-1,i}},\partial\_{x\_{ii+1}},\epsilon\_i\right) \prod\_{j=1}^{n\_0} \left\vert x\_{j,j+1}\right\vert^{2-d} \;.\end{aligned}$$ The $\frac{1}{N}$ factor has disappeared in the second line because of the internal traces. We define *x**i**j* as in the previous section (now it is manifestly a 3-vector). The rest of this section involves multiple sums that generally will be written as follows: ∑*s*∏*i* = 1*n*... = ∑*s*1...∑*s**n*∏*i* = 1*n*... This being said, we inject and into and get $$\begin{aligned} \nonumber&{\langle J\_1,...,J\_{n\_0}\rangle\_{\rm cyclic}}\\&= \sum\_{s,k} \prod\_{i=1}^{n\_0} \frac{\gamma^{s\_i}\Gamma(\tfrac{d-2}{2})} {k\_i!\,\Gamma(k\_i+\tfrac{d-2}{2})(s\_i-k\_i)!\, \Gamma(s\_i-k\_i+\tfrac{d-2}{2})} \left(\epsilon\_i\cdot\partial\_{i-1,i}\right)^{k\_i} \left(\epsilon\_i\cdot\partial\_{i,i+1}\right)^{s\_i-k\_i} \prod\_{j=1}^{n\_0} \left\vert x\_{j,j+1}\right\vert^{2-d} \nonumber\\&= \sum\_{s,k} \prod\_{i=1}^{n\_0} \frac{\gamma^{s\_i-k\_i+k\_{i+1}}\Gamma(\tfrac{d-2}{2})} {k\_{i+1}!\,\Gamma(k\_{i+1}+\tfrac{d-2}{2})(s\_i-k\_i)!\, \Gamma(s\_i-k\_i+\tfrac{d-2}{2})} \left(\epsilon\_{i+1}\cdot\partial\_{i,i+1}\right)^{k\_{i+1}} \left(\epsilon\_i\cdot\partial\_{i,i+1}\right)^{s\_i-k\_i} \left\vert x\_{i,i+1}\right\vert^{2-d} \nonumber \\ &= \sum\_{n,p} \prod\_{i=1}^{n\_0} \frac{\gamma^{n\_i+p\_{i}}\Gamma(\tfrac{d-2}{2})} {p\_i!\,\Gamma(p\_i+\tfrac{d-2}{2})n\_i!\, \Gamma(n\_i+\tfrac{d-2}{2})} \left(\epsilon\_{i+1}\cdot\partial\_{i,i+1}\right)^{p\_i} \left(\epsilon\_i\cdot\partial\_{i,i+1}\right)^{n\_i} \left\vert x\_{i,i+1}\right\vert^{2-d}\end{aligned}$$ $$\begin{aligned} \Leftrightarrow \; {\langle J\_1,...,J\_{n\_0}\rangle\_{\rm cyclic}}&= \sum\_{n,p,q} \prod\_{i=1}^{n\_0} \frac{\left(-2\right)^{n\_i+q\_i}\gamma^{n\_i+p\_{i}}} {(n\_i-p\_i+q\_i)!(p\_i-q\_i)!q\_i!} \frac{\Gamma(q\_i+n\_i+\tfrac{d-2}{2})} {\Gamma(p\_i+\tfrac{d-2}{2})\Gamma(n\_i+\tfrac{d-2}{2})} \nonumber\\&\qquad\times ((\epsilon\_i\cdot\epsilon\_{i+1}){\check{x}\_{i,i+1}}^2)^{p\_i-q\_i} (\epsilon\_i\cdot{\check{x}\_{i,i+1}})^{n\_i-p\_i+q\_i} (\epsilon\_{i+1}\cdot{\check{x}\_{i,i+1}})^{q\_i} \left\vert x\_{i,i+1}\right\vert^{2-d} \nonumber\\ \label{eq:CFTpa2}&= \sum\_{t,q,m} \prod\_{i=1}^{n\_0} \frac{\left(-2\right)^{t\_i+q\_i+m\_i}\gamma^{t\_i+q\_i+2m\_i}} {t\_i!\,q\_i!\,m\_i!} \frac{\Gamma(t\_i+q\_i+m\_i+\tfrac{d-2}{2})} {\Gamma(t\_i+m\_i+\tfrac{d-2}{2})\Gamma(q\_i+m\_i+\tfrac{d-2}{2})} \nonumber\\&\qquad\times ((\epsilon\_i\cdot\epsilon\_{i+1}){\check{x}\_{i,i+1}}^2)^{m\_i} (\epsilon\_i\cdot{\check{x}\_{i,i+1}})^{t\_i} (\epsilon\_{i+1}\cdot{\check{x}\_{i,i+1}})^{q\_i} \left\vert x\_{i,i+1}\right\vert^{2-d}\;.\end{aligned}$$ Additionally to some index redefinition and reorganization of the product, we made use of the following lemma for 2 null vectors *ε**i* and *ε**i* + 1 : $$\begin{aligned} (\epsilon\_i\cdot\partial\_x)^n(\epsilon\_{i+1}\cdot \partial\_x)^p (x^2)^{-\tfrac{d-2}{2}} = &\sum\_{q} \left(-2\right)^{n+q} \frac{n!p!}{(n-p+q)!(p-q)!q!} \frac{\Gamma(q+n+\tfrac{d-2}{2})}{\Gamma(\tfrac{d-2}{2})} \nonumber\\&\times ((\epsilon\_i\cdot\epsilon\_{i+1}){\check{x}\_{i,i+1}}^2)^{p-q} (\epsilon\_i\cdot{\check{x}\_{i,i+1}})^{n-p+q} (\epsilon\_{i+1}\cdot{\check{x}\_{i,i+1}})^{q} (x^2)^{-\tfrac{d-2}{2}} \;.\end{aligned}$$ The *p* = 0 version can be shown recursively, then the full one comes from a direct application of Leibniz rule. We will then need $$\begin{aligned} k\_d(t,q,m) :&= \frac{\Gamma(t+q+m+\tfrac{d-2}{2})} {t!\,q!\,\Gamma(t+m+\tfrac{d-2}{2})\Gamma(q+m+\tfrac{d-2}{2})} \nonumber\\&= \sum\_r \frac{1}{(t-r)!\,(q-r)!\,r!\,\Gamma(r+m+\tfrac{d-2}{2})} \;.\end{aligned}$$ The last equality is straightforward to show when *t* = 0. In the other cases, it is proven via the recursion : $$k\_d(t+1,q,m) = \frac{1}{t+1}\big( k\_d(t,q,m)+k\_d(t,q-1,m+1) \big) \;.$$ This allows to rewrite as $$\begin{aligned} {\langle J\_1,...,J\_{n\_0}\rangle\_{\rm cyclic}}&= \sum\_{t,q,m,r} \prod\_{i=1}^{n\_0} \frac{\left(-2\gamma\right)^{t\_i+q\_i+2m\_i}} {(t\_i-r\_i)!\,(q\_i-r\_i)!\,m\_i!\,r\_i! \Gamma(r\_i+m\_i+\tfrac{d-2}{2})} \nonumber\\&\qquad\times \left(-\tfrac12(\epsilon\_i\cdot\epsilon\_{i+1}){\check{x}\_{i,i+1}}^2\right)^{m\_i} (\epsilon\_i\cdot{\check{x}\_{i,i+1}})^{t\_i} (\epsilon\_{i+1}\cdot{\check{x}\_{i,i+1}})^{q\_i} \left\vert x\_{i,i+1}\right\vert^{2-d} \\&= \sum\_{a,b,c,r} \prod\_{i=1}^{n\_0} \frac{\left(-2\gamma\right)^{a\_i+b\_i+2c\_i}} {a\_i!\,b\_i!\,(c\_i-r\_i)!\,r\_i! \Gamma(c\_i+\tfrac{d-2}{2})} \nonumber\\&\qquad\times \left(-\tfrac12(\epsilon\_i\cdot\epsilon\_{i+1}){\check{x}\_{i,i+1}}^2\right)^{c\_i-r\_i} (\epsilon\_i\cdot{\check{x}\_{i,i+1}})^{a\_i+r\_i} (\epsilon\_{i+1}\cdot{\check{x}\_{i,i+1}})^{b\_i+r\_i} \left\vert x\_{i,i+1}\right\vert^{2-d} \\&= \sum\_{c} \prod\_{i=1}^{n\_0} \frac{1}{c\_i!\,\Gamma(c\_i+\tfrac{d-2}{2})} \exp\left( -2\gamma(\epsilon\_i+\epsilon\_{i+1})\cdot{\check{x}\_{i,i+1}} \right) \nonumber\\&\qquad\times \left(-2\gamma\sqrt{ (\epsilon\_i\cdot{\check{x}\_{i,i+1}})(\epsilon\_{i+1}\cdot{\check{x}\_{i,i+1}}) -\tfrac{1}{2}((\epsilon\_i\cdot\epsilon\_{i+1})){\check{x}\_{i,i+1}}^2 } \right)^{2c\_i} \left\vert x\_{i,i+1}\right\vert^{2-d} \,.\end{aligned}$$ Then, as usual one defines the conformal structures as $$\label{eq:def PQ CFT} {Q\_{i}}=2\epsilon\_i\cdot({\check{x}\_{i-1,i}}+{\check{x}\_{i,i+1}}) \;,\quad {P\_{i,i+1}}^2=4\left( (\epsilon\_i\cdot{\check{x}\_{i,i+1}})(\epsilon\_{i+1}\cdot{\check{x}\_{i,i+1}}) -\tfrac{1}{2}((\epsilon\_i\cdot\epsilon\_{i+1})){\check{x}\_{i,i+1}}^2 \right) \;.$$ In this language, we can write the final expression for the *n*-point conserved-current correlation functions of the *d*-dimensional *U*(*N*) free vector model as $$\label{eq:CFTpa3} {\langle J\_1,...,J\_{n\_0}\rangle\_{\rm cyclic}}= \prod\_{i=1}^{n\_0} \exp\left(-\gamma Q\_i\right) \sum\_{c\_i} \frac{1}{c\_i!\,\Gamma(c\_i+\tfrac{d-2}{2})} \left(\gamma P\_{i,i+1}\right)^{2c\_i} \left\vert x\_{i,i+1}\right\vert^{2-d} \;.$$ Before specifying the dimension in order to compare with the result in the 4-dimensional bulk, let us show that this is consistent with the result conjectured in. We start by rewriting as a series expansion: $$\begin{aligned} &{\langle J\_1,...,J\_{n\_0}\rangle\_{\rm cyclic}}= \sum\_{c,d} \prod\_{i=1}^{n\_0} \frac{1}{d\_i!\,c\_i!\,\Gamma(c\_i+\tfrac{d-2}{2})} \left(-\gamma Q\_i\right)^{d\_i} \left(\gamma P\_{i,i+1}\right)^{2c\_i} \left\vert x\_{i,i+1}\right\vert^{2-d} \\&= \sum\_{c,s} \prod\_{i=1}^{n\_0} \frac{1}{(s\_i-c\_i-c\_{i-1})!\,c\_i!\,\Gamma(c\_i+\tfrac{d-2}{2})} \left(-\gamma Q\_i\right)^{s\_i-c\_i-c\_{i-1}} \left(\gamma P\_{i,i+1}\right)^{2c\_i} \left\vert x\_{i,i+1}\right\vert^{2-d} \\&= \sum\_{c,s} \prod\_{i=1}^{n\_0} \frac{(-\gamma)^{s\_i}}{s\_i!\,c\_i!\,\Gamma(c\_i+\tfrac{d-2}{2})} \partial\_{Q\_i}^{c\_i+c\_{i-1}} \left(Q\_i\right)^{s\_i-c\_i-c\_{i-1}} \left(P\_{i,i+1}\right)^{2c\_i} \left\vert x\_{i,i+1}\right\vert^{2-d} \\&= \sum\_{s} \prod\_{i=1}^{n\_0} \frac{(-\gamma)^{s\_i}}{s\_i!} \sum\_{c\_i} \frac{(-)^{c\_i}}{2^{2c\_i}\,c\_i!\,\Gamma(c\_i+\tfrac{d-2}{2})} \left(-4P\_{i,i+1}^2\partial\_{Q\_i}\partial\_{Q\_{i+1}}\right)^{c\_i} \left(Q\_i\right)^{s\_i} \left\vert x\_{i,i+1}\right\vert^{2-d} \\&= \sum\_{s} \prod\_{i=1}^{n\_0} \frac{(-\gamma)^{s\_i}2^{\tfrac{d-2}{2}-1}}{s\_i!} \left. (q\_i)^{\tfrac12-\tfrac{d-2}{4}} {J}\_{\tfrac{d-2}{2}-1}(\sqrt{q\_i}) \right\vert\_{q\_i=-4P\_{i,i+1}^2\partial\_{Q\_i}\partial\_{Q\_{i+1}}} \left(Q\_i\right)^{s\_i} \left\vert x\_{i,i+1}\right\vert^{2-d} \;,\end{aligned}$$ where *J**α*(*x*) is the Bessel function of first kind. It is now clear that the Bose symmetrisation of this result is the same as in up to a function of *s**i* appearing in front of the current *J**s**i*. Let us stress that this information is encoded is the weight *a**s* appearing in rather than in the normalisation *N**s* of the current *J**s*. Hence the freedom to fix it is not spoiled by the previous fixation of *b**s* ∝ *a**s* *N**s*. Now let us go back to our three-dimensional holographic purpose. In this setup, one can use the following consequence of the duplication formula for Gamma functions: $$\Gamma(x+\tfrac12)=\frac{\sqrt{\pi}(2x)!}{2^{2x}x!} \;,$$ and rewrite the result as: $$\label{eq:CFTpa4} {\langle J\_1,...,J\_{n\_0}\rangle\_{\rm cyclic}}= \prod\_{i=1}^{n\_0} \frac{1}{\sqrt{\pi}} \exp\left(-\gamma Q\_i\right) \cos\left(2i\gamma P\_{i,i+1}\right) \left\vert x\_{i,i+1}\right\vert^{2-d} \;.$$ We already see that if we choose $\gamma=\tfrac{i}{4}$, we recover the formula for the type A-model, up to global normalisation of the *n*-point function, provided one can link the two definitions of the conformal structures *Q**i* and *P**i*, *j* . This is done by defining the polarization vector *ε**i* in terms of the polarization spinor *χ**i* of the previous section as follows: $$\label{eq:eps(khi)} \left({{\chi}\_{i}}\right)\_{\alpha}\left({{\bar\chi}\_{i}}\right)\
arxiv_0000005
and knowing the mass in each of these gas reservoirs, we can deduce the various timescales. Evolution with both the redshift and the stellar mass of five main timescales are shown in Fig. [fig:maindisktime scales]. We focus on: * the orbital time *t**o**r**b* estimated at a radius, 2.2*r**d* ; * the cooling time-scale, *t**c**o**o**l*, given by Eq. [eq:coolingtimefunction]; * the fragmentation time-scale, $t\_{frag}=\frac{M\_{frag}+M\_{sfg}}{\dot{M}\_{frag}}$, defined as the time required to double the fragmented gas mass; * the disruption timescale, $t\_{disrupt}=\frac{M\_{frag}+M\_{sfg}}{\dot{M}\_{disrupt}}$, defined as the time required to deplete the fragmented gas via the kinetic energy provided by SN and/or AGN; * the gas depletion time of the fragmented gas, $t\_{sfg}=\frac{M\_{frag}}{\dot{M}\_{sfg}}$, defined as the time required to convert all the fragmented gas and star-forming gas into stars. As the star-formation timescale is less than all other timescales, *t**s**f**g*, is essentially the time required to convert all of the fragmented gas into stars. We present details of the model output by providing the medians and the 15% and 85% percentiles of these characteristic timescales (Table [tab:maindisktime scales]). We compile the statistics for both the full and star-forming samples of model galaxies at *z* = 2.1. These estimates given in four different stellar mass bins, 109, 1010, 1010.5, and 1011.25 *M*⊙. Even if the median trends presented in Fig. [fig:maindisktime scales] suggest some regularity in galaxy properties as a function of mass, all timescales at constant mass have significant scatter (see Tab. [tab:maindisktime scales]). This large scatter indicates that the dynamics for any one galaxy or ensemble of galaxies is not regular or simple, but is a complex interaction of the various processes included in the model. First, we first focus on the disk orbital timescale which plays a key role in controlling processes that occur over long timescales. Depending on redshift and stellar mass, the orbital timescale ranges between  ≃ 4 Myrs and  ≃ 300 Myrs. At fixed stellar mass, the orbital timescale increases with the redshift. At fixed redshift, it is a decreasing function of the stellar mass. The range of orbital timescales calculated using the `G.A.S.` model are in good agreement with orbital timescales estimated for local galaxies. The fragmentation of the ISM is driven by energy injection on scales of and larger than the disc height via the condensation of the diffuse gas. The rate of fragmentation depends on the radiative cooling rate of the diffuse warm gas. The characteristic cooling time is dependent on redshift and galaxy mass and ranges over five orders-of-magnitude (Eq. [eq:coolingtimefunction]). At a given stellar mass, the characteristic cooling timescale is a decreasing function of the redshift (Fig. [fig:maindisktime scales]). This trend is driven by the increase with the redshift of the diffuse gas density. At a given redshift, the characteristic cooling time is a decreasing function of the mass. These two trends together means that the characteristic cooling timescale of a median galaxy track decreases with both redshift and stellar mass (Fig. [fig:maindisktime scales]). At *z* = 2.1, the characteristic cooling timescale of our star-forming galaxies sample is distributed between  ≃ 0.8 Myr and  ≃ 6.2 Myr. These values are slightly smaller than in the fully resolved galaxy sample of  ≃ 1.3 Myr and  ≃ 8.0 Myr (Table [tab:maindisktime scales]). At all redshifts and for all stellar masses, the characteristic cooling timescale is significantly shorter than the orbital timescale ( < 6%). If no additional mass were added to the reservoir of diffuse gas, its condensation, which is mainly governed by the characteristic cooling timescale, would deplete completely in less than an orbital time. The short cooling and condensation timescales helps to drive the overall complex dynamics of the gas cycle in model galaxies as alluded to earlier. To provide an indication of how much of the diffuse gas is condensing at any given time, we tabulate the mass fraction of the diffuse gas (Table [tab:maindisktime scales]). The mass fraction of the diffuse gas that is condensing can be estimated by comparing the effective cooling time (Eq. [eq:coolingclock]) to the characteristic cooling timescale (Eq. [eq:coolingtimefunction]). At *z* = 2.1, for the five stellar mass bins tabulated, the median values of the ratio, T*c**o**o**l*/*t**c**o**o**l*, range between  ≃ 2.7 and  ≃ 3.5 for both model galaxy samples. Low mass galaxies have the lowest ratios. The ratio is approximately constant,  ≃ 3.5, for galaxies with *M*⋆ ≥ 1010*M*⊙. The fragmentation and disruption timescales have very similar dependencies on redshift and stellar mass (Fig. [fig:maindisktime scales]). These two timescales show a decreasing trend with both redshift and stellar mass. At *z* = 2.1, for the star-forming galaxy sample, the fragmenting timescale and the disrupting timescale are distributed between 4 and 15 Myrs. Those values correspond to  ≃ 10% of the disk orbital timescale. As for the cooling timescale, the fragmentation and the disruption timescales of our star-forming galaxy sample are marginally smaller than those of the full galaxy sample. The disruption timescale is always (slightly) larger than the fragmentation timescale (Table [tab:maindisktime scales]) and thus gas fragmentation is always faster than gas disruption in `G.A.S.`. The short fragmentation timescales indicate that the growth of GMCs is a continuous and efficient process. Timescales measured in our model are fully compatible with GMC gas accretion rates found via numerical simulations. Working against the gas cooling and fragmentation is the injection of energy by SNs and AGN. This energy injection is used in `G.A.S.` to disrupt the gas and transfer some of the fragmented gas back into the reservoir of diffuse gas. Hydrodynamical simulations find that disruption timescales of about 10–20 Myr. This range of values is fully consistent with our predictions. ### Impact of fragmentation and disruption on the life cycle of GMCs The small differences between the condensation and disruption timescales implies that the the total mass of fragmented gas in GMCs stays approximately constant and that significant fraction of the fragmented mass is continuously regenerated. From this equilibrium, there are two evolutionary paths for GMCs that depend on both the scale and mass of GMCs. * Low mass GMCs, those formed in galaxies with short or intermediate disk-scale heights, *h**d* ≤ 50pc, the mass disrupted by SN kinetic energy injection is close to the total of low mass GMCs. In such circumstances, after only few SN cycles, GMCs are fully disrupted. These disrupted GMCs are constantly replenished through the formation of new GMCs through the condensation of diffuse gas. Thus the disruption/fragmentation timescales are approximately the lifetime of the GMCs in our model. Our results are in reasonable agreement with the sizes and short GMC lifetimes estimated in local galaxies, 25-70 pc and 17 ± 4 Myr. * More massive GMCs resist being disrupted by SN. It is the competition between accretion and disruption which regulates the mass of GMCs. Due to this competition, GMCs have to be therefore considered as dynamical structures where the gas is just *passing through* from the diffuse-gas state to the star-forming gas state. SN/AGN-driven gas disruption slows significantly the rate at which gas fragments. In small GMCs, the cloud structure and therefore the sites of star formation can be completely destroyed after only a few SN cycles. In more massive GMCs, the fragmentation is also significantly slowed. Indeed, gas recently added to the reservoir of diffuse gas starts to fragment on larger scale than the gas within the GMCs. In our prescription, this constant renewal and exchange of the gas in these reservoirs and the impact of the added gas to the overall cascade of fragmenting gas is fully accounted for via the mass-weighted fragmentation clock (Eq. [eq:tstr]). ### Gas depletion timescale Depending on the redshift and the stellar mass of a model galaxy, the gas depletion timescale, *t**s**f**g*, ranges from 80 Myr to 2 Gyr (Fig. [fig:maindisktime scales]). The upper end of this range is consistent with with gas depletion timescale measured in local spiral galaxies and the values predicted at high redshift, z ≈ 2-3, are also generally consistent with the values estimated for distant star-forming galaxies. Gas depletion timescales are a decreasing function of both the redshift and the stellar mass. Above *M*⋆ ≥ 1010.5*M*⊙ and at z > 3.0, the average gas depletion timescale reaches a lower-limit close to 80 Myr. The *t**s**f**g*/*t**f**r**a**g* ratio provides an estimate of the efficiency of star formation in GMCs. The number of condensation/disruption-cycles and the depletion timescale of fragmented gas imply that faster cycles lead to shorter depletion timescales. At z = 2.1, median values extracted from our star-forming sample, indicate that between  ≃ 30 and  ≃ 65 cycles are required to convert fragmented gas into star-forming gas (and therefore into stars). This number of gas cycles is consistent with gas dynamics and number of cycles estimated in hydrodynamic simulations. In the star-forming sample, the average number of condensation/disruption-cycles is strictly a decreasing function of the stellar mass. However, in the full galaxy sample, a minimum is reach for models galaxies with mass of 1010.25*M*⊙. The fragmenting gas in galaxies less and more massive than 1010.25*M*⊙ need more cycles to convert the fragmented gas into stars. During each cycle only a small fraction, 1%-5%, of the fragmented gas stops fragmenting and is available for the star formation. In fact, the majority of the gas that forms GMCs is recycled in of-order one to a few dynamical times or simply remains in a state that cannot form stars. We obtain star-formation efficiencies between  ≃ 1 - 5%. Such values are consistent with observational estimates and efficiencies measured in hydrodynamic simulations. In summary, the gas cycle implemented in the new `G.A.S.` model attempts to capture the complex dynamics of the gas in galaxies. In our prescriptions, gas is continuously exchanged between the diffuse and the fragmented non-star forming phases. A large number of condensation and disruption cycles, 30-70, are needed to progressively convert diffuse gas into very dense star-forming gas and then stars. The characteristic timescales of  ≃ 15 Myrs of these exchange rates is only a small fraction of the orbital time-scale. Under such conditions, the fragmented gas is converted into star-forming gas and stars continuously and the gas spends the majority of its time in a non-star-forming gas phase. Discussion and Conclusions ========================== We present a new semi-analytical model, `G.A.S.`, in which we implemented a more realistic gas cycle than has been previously implemented in a semi-analytical model. We introduced a prescription for delaying star formation which is underpinned by progressively fragmenting the gas. The formation of giant molecular clouds, filaments and cold cores takes time and therefore at a given instant only a small fraction of the total gas is in the form of pre-stellar cores. The majority of the gas is not in a form that is immediately available to form stars. Within this framework, we account for the continuous dissipation of the turbulent kinetic energy through the different ISM scales and phases. We implemented an approximate multi-phase ISM where the gas cycles between a warm diffuse phase, a cold fragmented non-star-forming phase, and a very dense star-forming gas phase. Diffuse warm gas is progressively fragmented following the effective cooling time. Even if the overall depletion timescale of the fragmented gas is, on average, proportional to the disk orbital time-scale, we measure a large scatter in both the depletion time-scale and the disk orbital time-scale. This proves that prescriptions that rely solely on the disk orbital timescale do not capture properly the complexity of gas cycles in galaxies. Smaller characteristic timescales, such as the cooling and/or energy injection timescales due to supernovae and AGN are also important. This energy input on shorter timescales, efficiently disrupts the star forming gas. The large-scale velocity dispersion of the diffuse gas is also maintained by SNs/AGN kinetic energy injection. A fraction of the gas can also be ejected from disks by SN explosions but the characteristic time-scale of ejection is in average 10 times larger than the fragmenting/disruption cycle time-scale. While the SN/AGN kinetic energy injection in the ISM regulates the star-formation rate in low mass galaxies, our model galaxies retain a large fraction of the gas. This “local” gas is then used with a progressively increasing efficiency necessary to build massive, *M**s**t**a**r* > 1011*M*⊙, galaxies in the early Universe, *z* ∈ [4; 6]. Star formation occurs only in the very dense gas, which in our model is a product of continuous fragmentation over a wide range of scales. This new complete gas cycle strongly regulates the star formation in our modelled galaxies. Only a few percent of the available fragmented gas is converted into stars during a GMC life cycle. By taking into account the fragmented gas on a galaxy scale, our model is able to reproduce the standard Schmidt-Kennicutt law. Our estimated gas depletion timescales, which are directly related to the time it takes for gas to fragment, are in good agreement with observational estimates. This new gas regulation cycle leads to very good agreement with the observed stellar mass function over a wide range of redshifts, *z* ∈ [0.8; 6]. The ability of our model to catch the stellar mass assembly in high redshift galaxies as been already used with success in. Galaxy properties (gas/stars contents, metallicities and FUV fluxes...) predicted by our model have been post-processed to successfully predict the [CII] luminosity functions at high redshifts and allowed us to explore the main characteristics of this emission. At *z* < 0.8, we find some discrepancies between the predicted and the observed stellar mass functions. Specifically, an over-density of low-mass galaxies and the under-density of intermediate mass galaxies modelled at z<0.8 could probably be solved by slightly increasing the efficiency of photo-ionization in our prescription. At *z* < 4.0, to reduce the efficiency of or to stop the growth of stellar mass in massive galaxies, the accretion of gas and its transformation into stars has to be reduced or stopped. Previous semi-analytical models implemented strong AGN feedback to quench gas accretion onto massive galaxies. In those models, a significant fraction of the power produced by the AGN is directly used to reduce the cooling of the hot halo gas. This implies a constant AGN power production, which may not be consistent with our understanding of AGN variability. For massive galaxies, we propose an alternative model which regulates the radiative cooling and gas accretion onto galaxies. In parallel to radiative cooling, we implemented the development of thermal instabilities creating warm gas surrounding the galaxy. The growth of these thermal instabilities in the central region of the halo progressively reduces or halts the cooling and dissipation of kinetic energy in the halo and therefore reduces or stops the accretion onto galaxies. This process of regulation is a natural outcome of the growth of thermal instabilities in the hot halo gas and does not depend on the power produced by AGN. However, our actual efficiency parameters do not fully stop gas accretion onto very massive galaxies. In some massive dark-matter halos, radiative cooling restarts (VFF  <  1.0) even if its hot gaseous halo has been fully quenched previously (VFF = 1.0). The recovery of the gas accretion leads to the formation of some ( < 10) unobserved overly massive galaxies at z ≃ 0.3. Our prescription of thermal instability growth in the hot gas phase is governed by a set of two parameters which can be further refined to overcome this problem. In our model of the gas cycle, we assume that the gas initially fragments at the disk scale height with the progressive and continuous formation of over-densities which are akin to observed GMCs. The fragmentation of the gas will progress down to the scale at which stars form or  ∼ 0.1 pc. However, this fragmentation can start at larger scales in the cold streams or in the hot(warm) gas phase surrounding the galaxy. Some clumps of warm gas could already be formed around the galaxy. We assume in our model that a fraction of the newly accreted gas is already fragmented but we do not include any interaction between this already fragmented gas in the halo and the large scale wind. This kind of interaction could reduce, perhaps substantially, the gas-accretion efficiency. Indeed such coupling could increase the cloud-cloud velocity dispersion and maintain the turbulence in the hot and warm gas contained in the CGM of the most massive galaxy. Such a hypothesised mechanism will be complementary to the development of thermal instabilities, could in fact act as a catalyst to the formation of more clouds in the CGM, and will could contribute in quenching the radiative cooling and dissipation in the most massive halos. MC acknowledges Thomas Fenouillet for his much appreciated help in the use of the LAM’s computation clusters and wishes to thank Mathieu Génois for his help in the proofreading of this article and for all physical and technical discussions. MC thanks Olivier Ilbert for the numerous and very helpfull discussion about stellar mass function (Estimations, errors, limits) and Benoit Epinat for useful discussion about galaxy disk dynamics. MC also wishes to express his appreciation to Alexandre Beelen and Yanick Roehlly for very useful physical and technical discussions. Authors thank the Centre National d’Etudes Spatiales (CNES), Aix Marseille Université, Sorbonne Université, the Programme National de Cosmologie and Galaxies (PNCG) and Programme de Physico-chimie du Milieu Interstellaire (PCMI) of CNRS/INSU for their financial support. PG gratefully acknowledges the support of the Institut Universitaire de France (IUF). The GALAKSIENN library ====================== The GALAKSIENN library stores the main results produced by our new `G.A.S.` semi-analytical model, especially MOCK galaxy catalogs and sky maps. It is available online through the ZENODO platform:, DOI: 10.5281/zenodo.1451229. A complete description of the GALAKSIENN library is given in paper III. In association with this paper I, we distribute the ASCII tables of the stellar mass functions (Fig. [fig:stellarmassfunction]). --- 1. The temperature of the accreted gas in this mode is close to 104K.[↩](#fnref1) 2. Using the adaptive time-step method described in, we ensure that all exchange rates between reservoirs are constant during each time-step.[↩](#fnref2) 3. The diffuse gas is assumed to evolve in a thick disk with scale-height *h**d* and a total radius of the stellar disk equal to 11*r**d*. We assume that the warm diffuse gas can extend to a radius that is up to two times larger than total radius of the stellar disk.[↩](#fnref3) 4. Following a scheme where gas is continuously added into the hot gas reservoir.[↩](#fnref4) `G.A.S.` I: A prescription for turbulence-regulated star formation and its impact on galaxy properties ====================================================================================================== ### Received November 19 2018 / Accepted January 29 2019 Introduction ============ Galaxies are defined by their stellar populations — the when and where  their stars formed. Therefore, if models are to capture accurately the process of galaxy formation and evolution, researchers must determine how star formation is regulated locally and globally in galaxies. However, star formation is one of the most challenging processes to characterize in galaxy evolution models, essentially because the formation of stars involves many non-linear processes that occur over a large range of temporal and spatial scales in e.g., density, velocity, and magnetic field strength and regularity. Observations show that star formation in galaxies is a very inefficient process, with typically 0.1%-10% of the available gas being converted into stars per local free-fall time. On the other hand, numerical simulations of molecular clouds indicate that the star-formation efficiency is highly dependent on how ionization and kinetic feedback is injected into the interstellar medium. The difficulty in simulating feedback-regulated star formation, as well as the absence of a detailed physical description of processes responsible for large-scale feedback (whether driven by active galactic nuclei, AGN, or intense star formation or both) and gas accretion onto galaxies, make the global regulation of star formation one of the greatest challenges in modelling galaxy formation and evolution. Recently, there is growing observational and theoretical evidence that turbulent pressure injected by young stars is comparable to gravitational pressure in distant disks, enabling self-regulated star formation with low efficiency. Warm diffuse and cold dense gas evolve under the influence of compression by passing spiral arms, thermal and gravitational instabilities, and supernovae-driven shocks. High resolution hydrodynamic simulations, with implementation of sub-grid turbulence models, show complex, multi-phase, turbulent structures within the ISM with a realistic global Kennicutt-Schmidt relation on kpc scale, and gas depletion times in star forming regions over scales of  10-50 pc consistent with observations. In these models, the global gas depletion time is long ($\tau\_{depl} = M\_g / \dot{M\_{\star}} \approx 1 - 10$ Gyr, where *M**g* and $\dot{M\_{\star}}$ are respectively the gas mass and the star-formation rate), because the gas spends most of the time in a state that does not rapidly lead to star formation. The gas is recycled many times, since the lifetime of star-forming clouds or the local gas depletion time in star forming regions is typically 1 − 500 Myr. This wide range of local depletion times which lead to significant gas cycling and re-cycling is due to dynamical disruption, dispersal by feedback, and supersonic turbulence. This complexity must be captured in some way in galaxy evolution models to generate realistic galaxies. In high resolution hydrodynamical simulations or in semi-analytical models, depending on the model and the mass of the galaxy, different prescriptions for various feedback mechanisms have been invoked to regulate star formation. For low stellar masses, *M*⋆ < 109*M*⊙, feedback is used to either regulate gas accretion, mostly by heating the gas through photo-ionisation, or ejecting the gas using the mechanical energy generated by supernovae. For high stellar masses, *M*⋆ > 1010*M*⊙, models rely on the action of Super-Massive Black-Holes (SMBH) to inhibit gas accretion onto galaxies. A significant fraction of the power generated by AGN is used to limit the cooling of the hot gas phase surrounding the galaxy. Despite including some of these processes to regulate the gas content of galaxies, galaxy evolution models fail to reproduce the star-formation histories and physical properties of galaxies, mainly because a robust theory of star formation and dynamical coupling between gas phases are still lacking. As explained in, current semi-analytical model overestimate the number of low-mass galaxies, especially at high redshift (*z* > 2.0), where the gap between models and observations is roughly an order of magnitude or more. For high mass galaxies, AGN feedback, initially used to limit the growth of massive galaxies at low redshift, has also an impact on the star-formation rate and history at higher redshift. Consequently, even the massive galaxies, those with *M*⋆ ≃ 1011*M*⊙ observed at redshifts greater than 3, are not robustly reproduced in current models. We present here a new semi-analytical model `G.A.S.`— the Galaxy Assembler from dark-matter Simulation — which is based, in part, on previous version described in and. This paper (paper I) provides an overview of the physical processes considered in `G.A.S.` and how they are implemented as phenomenological rate equations. In addition, we have two complementary companion papers: `G.A.S.`II: in which we describe and model the mechanisms that leads to dust attenuation of galaxian light and `G.A.S.`III in which we explore the panchromatic emission of galaxies from the FUV to the sub-millimetre bands. In this first paper we focus mainly on the regulation of star formation. In previous models, we have adopted an *ad-hoc* recipe to generate a delay between accretion and star formation. Here we implement a physical prescription based on the inertial cascade of turbulent energy from large to small scales. Accreted gas onto galaxies is initially considered as mainly diffuse. We compute the mass fraction of the gas subject to phase separation and fragmentation following and and references therein, which also depends on the disk properties. Star formation occurs in the fragmented gas at a scale of 0.1 pc. In a large set of semi-analytical models, star formation in massive galaxies is regulated by a strong reduction, or even complete suppression, of gas accretion by AGN feedback. In our new model, we do not need efficient AGN feedback, but instead our regulation process is a natural outcome of both the growth of thermal instabilities in the hot halo phase, and the dissipation of turbulent energy within the denser, fragmented gas reservoir. Those two processes delay gas accretion onto galactic disks and star formation. The paper is organised as following. In Sect. [sec:accretion], we provide a brief description of the dark-matter simulation we use, as well as the prescription we adopt to implement baryonic accretion rates. In Sect. [sec:theturbulentinertialcascade], we focus on the turbulent inertial cascade. We describe how we compute the energy and mass transfer rates between physical scales, we define and compute the gas fragmentation timescale, and show that it is a key parameter for the regulation of star formation in low-mass galaxies. In Sect. [sec:Gascycle], we describe our model for the gas cycle as it accretes onto galactic disks, from diffuse accreted gas to potentially star forming gas. In Sect. [sec:feedbacks], we describe our implementation of SN and AGN feedback. Sect. [sec:thermalinstabilities] focuses on the thermal instabilities arising in the hot gas phase in the halo. We assume that gas accretion onto galaxies is limited by turbulent mixing in the range of radii, where thermal instabilities develop because gas acquires large velocity dispersions. This allows us to define an effective cooling rate. In section [results] we present and discuss our results, mainly focusing on the evolution of the galaxy stellar mass function with redshift, and the evolution of relevant timescales of physical processes (e.g., gas cooling, fragmentation, and orbital timescales). We also discuss the impact of our implementation of thermal instabilities on the quenching of massive galaxies in massive halos. From dark-matter to baryons =========================== Dark-matter ----------- `G.A.S.` is built upon a set of dark-matter merger trees extracted from a pure N-body simulation. The current simulation uses WMAP-5yr cosmology with a volume of [100/*h*]3*M**p**c* in which 10243 particles evolve. Each particle has a mass of *m**p* = 1.025 108 *M*⊙. Halos and sub-structures (satellites) are identified by using the `HaloMaker` code. In our merger trees, we only consider halos with at least 20 dark-matter particles leading to a minimal dark-matter halo mass of 2.050 × 109 *M*⊙. Dark-matter halos grow from smooth accretion. The dark-matter accretion rate, $\dot{M}\_{dm}$, only includes particles that are newly detected in the halo, and that have never been previously identified in another halo. Baryonic accretion ------------------ [fig:normalizedbaryonicfraction] Based on the dark-matter accretion rate $\dot{M}\_{dm}$, the accretion rate of baryons is defined by, $$\dot{M}\_b = f\_b^{ph-ion}(M\_h,z)\dot{M}\_{dm} \, \label{eq:gas\_accretion}$$ where *f**b**p**h* − *i**o**n*(*M**h*, *z*) is the effective baryonic fraction. The baryon accretion rate depends on the gas ionisation state and we adopt here the photo-ionisation model based on the and prescription, but using the effective filtering mass given by. We assume that re-ionisation occurs at *z* = 7.0, we therefore set *f**b**p**h* − *i**o**n*(*M**h*, *z* > 7) = *f**b* = 0.16. Fig. [fig:normalizedbaryonicfraction] shows the normalised mass baryonic fraction that is associated with the dark matter smooth accretion (we assume a universal baryonic fraction *f**b* = 0.16). Following the minimal dark-matter halo mass used in our model, the main impact (*f**b**p**h* − *i**o**n*/*f**b* < 0.5) of the photo-ionisation prescription occurs at low redshift (*z* < 0.9). As initially proposed by, in `G.A.S.` we define two different modes of accretion, a cold[1](#fn1) mode, and a hot mode. Depending on the dark-matter halo mass, the fraction of accreted, hot gas is computed as $$f\_{sh}(M\_{vir}) = \dfrac{1}{2}\left[1+\text{erf}\left(\text{log}M\_{vir} - \text{log}M\_{mix}\right)\right]\, \label{eq:shock\_heated\_fraction}$$ where *M**m**i**x* is the transition mass when the cold and hot gas mass accretion rates are equal and *M**v**i**r* is the halo virial mass. The evolution of the hot gas fraction is inspired by the study of, but we do not account for evolution with redshift since it is very weak and does not strongly impact how the gas is accreted in our model. The baryonic accretion is divided in two parts that feed two different reservoirs: *M**c**o**l**d* for the cold mode and *M**h**o**t* for the hot mode. Both the cold and the hot reservoirs are fed by metal-free gas. During the evolution of any galaxy, metal-rich ejecta coming from the galaxy are added to the hot reservoir. The metal content of the hot gas therefore depends directly on the rates and timescales over which galaxies create metals. The metallicity of the hot reservoir evolves with time. The chemo-dynamical model included in `G.A.S.` tracks the abundance of the main elements in the gas phase. The production and the re-injection of these metals are taken into account for stars with initial masses between 0.1 M$\_{\sun}$ and 100 M$\_{\sun}$ over metallicities from zero to super-solar. ### The cold accretion mode For the cold accretion mode, we assume that gas falls directly onto the galaxy, and we compute the cold accretion rate via: $$\dot{M}\_{streams} = \dfrac{M\_{cold}}{2t\_{dyn}} \, \label{eq:ff\_rate}$$ where *M**c**o**l**d* is the mass stored in the cold reservoir and *t**d**y**n* is the dynamical time of the dark-matter halo: *t**d**y**n* = *r**v**i**r*/*v**v**i**r*. [fig:thermalcoolingeff] ### Hot accretion: Radiative cooling As in the previous versions of this model, we assume that the hot gas surrounding the galaxy is confined in the potential well of the dark-matter halo and in hydrostatic equilibrium. The hot gas density profile *ρ**h**o**t*(*r*) is computed following the prescriptions in,, and. We assume for such hot atmosphere a constant temperature *T**h**o**t*, and a average gas metallicity *Z**h**o**t*. For more details, please refer to. Radiative cooling and the associated gas condensation is computed using the prescription in. The cooling time of the hot gas phase is defined – as a function of the radius – as, $$t\_{cool}(r) = 10.75\dfrac{\mu m\_p T\_{hot}}{\rho\_{hot}(r)\Lambda[T\_{hot},Z\_{hot}]}\,. \label{eq:cooling\_time\_function}$$ In previous versions of this model, we adopted the cooling efficiencies Λ(*T*, *Z*). In the present version, we use those computed by, tabulated as a function of gas temperature and metallicity, which we interpolate between *T* = 103K and 108K, and for gas metallicities over the range 10− 4*Z*⊙ and 2*Z*⊙. We assume the solar metal mass fraction of *Z*⊙ = 0.02. Fig. [fig:thermalcoolingeff] shows the cooling efficiency as a function of both gas temperature and average gas metallicity. At a given time, the mass of warm gas that can condensate and feed the galaxy is enclosed within the cooling radius, *r**c**o**o**l*. This radius is calculated using the cooling time equation: *t*(*r**c**o**o**l*) = T*c**o**o**l**h**o**t* In this equation, T*c**o**o**l**h**o**t* is a “cooling clock”, which is a measure of the effective cooling time of the hot gas. After each time-step Δ*t*, the mass-weighted cooling time is updated via: $$\mathcal{T}\_{cool}^{hot,n} = \underbrace{\left(\mathcal{T}\_{cool}^{hot,n-1} + \Delta t\right)\left(1-\frac{\Delta M}{M}\right)}\_{\text{hot halo gas}} + \underbrace{\frac{\Delta t}{2}\frac{\Delta M}{M}}\_{\text{newly incoming hot halo gas}} \, \label{eq:cooling\_clock}$$ where *M* is the total mass in the hot phase after the latest time-step, Δ*t*. Δ*M* is the net mass variation of hot gas during this time-step (accretion - ejection). We assume that cooling takes place during the overall last time-step for the gas already in the hot atmosphere. However, accretion is continuous[2](#fn2), and incoming gas also starts to cool. Therefore, taking into account the time for the incoming gas to enter the hot phase, this new gas cools during only half a time-step, on average. Therefore, the effective cooling time of the hot gas halo can increase or decrease between two time-steps depending on the relative fraction of halo gas to incoming gas. During mergers, the cooling clock of the remnant hot phase is set to the value of the most massive progenitor at the time of the merger. Knowing the cooling radius *r**c**o**o**l* (deduced from Eq.[eq:coolingradius]), we can write the condensation rate of the gas: $${\displaystyle}\dot{M}\_{cool} = \dfrac{v(r\_{cool})}{2r\_{cool}}\int\_0^{r\_{cool}}\rho\_{hot}(r)r^2dr. \label{eq:cooling\_rate}$$ The mass within *r**c**o**o**l* decreases with a timescale *r**c**o**o**l*/*v*(*r**c**o**o**l*), where *v*(*r**c**o**o**l*) is the circular velocity of the dark-matter halo measured at *r* = *r**c**o**o**l*. We assume that the hot atmosphere extends up to the virial radius *r**v**i**r* of the dark-matter halo, thus the cooling radius cannot be larger than *r**v**i**r*. [fig:accrate] Fig. [fig:accrate] shows the average galaxy gas accretion rates produced by the two different modes as a function of both the dark-matter virial mass and the redshift. At low dark-matter halo masses, *M**v**i**r* < 1010.5*M*⊙, accretion on the galaxy is dominated by the cold mode. The contribution to the accreted mass of the hot mode increases progressively with halo mass. Around *M**v**i**r* ∼ 1010.5*M*⊙, the contribution of the two modes are equal. This transition occurs at approximately the same halo mass for all redshifts we considered. For both the cold mode and the hot mode accretion, the average accretion rate decreases with the redshift. For dark-matter halos with *M**h* = 1010.5*M*⊙, the average accretion due to the cold mode decreases from 20*M*⊙ yr− 1 at *z* ≃ 9.0 to 0.2*M*⊙ yr− 1 at *z* = 0.3; for the hot mode, accretion rate decreases from 30*M*⊙ yr− 1 to less than 0.3*M*⊙ yr− 1 between *z* ≃ 9.0 and *z* = 0.3. [th!] lllll **symbol** & **definition** & **Eq/Sect** & **values** & – Reference Masses – *M**h*, *m**i**n* & Minimal dark-matter halo mass (20 particles) & Sect. [sec:darkmatter] & 2.05 × 109*M*⊙ *M**m**i**x* & Transition mass from cold to hot accretion mode & Sect. [sec:gasaccretion], Eq. [eq:shockheatedfraction] & 1011*M*⊙ *M*•*i**n**i**t* & Initial super massive black hole mass (seed) & Sect. & 300 *M*⊙ & & – Thermal instability – *ɛ**T**I* & Thermal instability propagation efficiency & Sect. [sec:Mixingzone], Eq. [eq:TIradius] & 0.63 & – Feedback: Repartition – *f**M**E* & Fraction of SMBH infall rate converted in power & Sect. [sec:AGNfeedback], Eqs. [eq:AGNejectarate] & 0.1 *f**k*, *S**N* & Kinetic fraction of SN energy & Sect. [SNejectarateEQ], Eq. [SNejectarateEQ] & 2/3 & *f**w* & Fraction affected to large scale wind & Sect. [SNejectarateEQ], Eq. [SNejectarateEQ] & 0.2 & *f**T**h*, *S**N* & Thermal fraction of non kinetic SN energy & Sect. [sec:thermalluminositypower], Eq. [eq:SNdisruptionrate] & 1/2 & *f**k*, *A**G**N* & Kinetic fraction of AGN energy & Sect. [sec:Largescaleejecta], Eq. [eq:Vjet] & 10− 3 *f**T**h*, *A**G**N* & Thermal fraction of non kinetic AGN energy & Sect. [sec:thermalluminositypower] & 0.6 & – Accretion – *f**f**r**a**g**i**n* & Fraction (in mass) of gas which is already fragmented when accreted & Sect. [sec:Gascycle] & 1/3 & – Turbulent kinetic energy budget – *f**i**n**c**r* & Fraction of the rotational energy associated with the newly & Sect. [sec:averagevelocitydispersion], Eq. [eq:Updatedkinetecenergybudget] & 1/3 & diffuse accreted gas converted in turbulent kinetic energy & & *f**d**i**s**p* & Fraction of the turbulent kinetic energy budget & Sect. [sec:averagevelocitydispersion], Eq. [eq:Updatedkinetecenergybudget] & 1/2 & dissipated per dynamical time step & & & – Feedback: Additional parameters – *v**w* & Large scale wind velocity & Sect. [sec:Largescaleejecta], Eq. [SNejectarateEQ] & [100,200] km/s *E**S**N* & Total SN energy & Sect. [sec:SNfeedback] & 1044J & *η*• & Inflow-outflow ratio for SMBH activity & Sect. [sec:Largescaleejecta] & 1.0 & *η**m* & Minimal mass ratio for major merger events & & 1/3 [fullmodelparametersTAB] The turbulent inertial cascade ============================== [th] | **symbol** | **definition** | **values** | **Refs** | | --- | --- | --- | --- | | *k*⋆ | Star-formation wave number | 1/*l*⋆ = 10 *p**c*− 1 | | | *μ*⋆ | Mass surface density threshold for star-formation | 150 *M*⊙/*p**c*2 | | | *σ*⋆ | Velocity dispersion at the star-formation scale | 0.3 *k**m*/*s* | | | *a* | Slope index of the Larson surface density law | 1/5 | | | *b* | Slope index of the Larson 1D velocity dispersion law | 3/5 | | [tab:turbuparameterlist] Both observations and numerical simulations of the ISM at high spatial resolutions show that turbulence plays a fundamental role in star formation. Turbulence controls the rate at which kinetic energy is dissipated and leads to development of multi-phase morphological structures in the gas. Recent numerical models of galaxy formation have adopted a gravo-turbulent sub-grid model for star formation, but those time-consuming simulations are limited to small cosmological volumes. In this section, we develop our modelling of the mass and energy transport from the large scale of injection to the small dissipative scale, through a hierarchy of structures, using a phenomenological prescription for the cascade of turbulent energy. This allows us to compute the mass of gas that may form stars and to test the impact of these phenomenon on the properties of galaxies. In our prescription, the mass of gas that is able to form stars is the mass of gas reaching the dissipation scale *l*⋆ (see Sect. [sec:star-formationthreshold]), calculated from the mass flow rate between scales, under the assumption of a constant energy transfer (Sect. [sec:masstransferrate]). Self-similar scaling relations and energy transfer rate ------------------------------------------------------- We assume that the two self-similar scaling relations are satisfied to compute the mass and energy transfer rates during the inertial turbulent cascade. They link the mass surface density *μ* and the 1D-velocity dispersion *σ*, respectively, to the wave number *k* = 1/*l*: $$\mu\_k = \mu\_{\star}\left(\dfrac{k}{k\_{\star}}\right)^{-a} \ \rm{and} \ \ \sigma\_k = \sigma\_{\star}\left(\dfrac{k}{k\_{\star}}\right)^{-b} \, \label{eq:Larson\_mu\_sig}$$ where *k*⋆ is the wave number associated with the star-formation scale, *l*⋆ = 1/*k*⋆. *μ*⋆ and *σ*⋆ are two normalisation parameters and *a* and *b* are the two slope indices; both depend on the gas phase considered. The second self-similar scaling relation gives the energy transfer rate per unit volume, as in : $$\dot{e} \propto \rho(k)\sigma(k)^3k \propto \mu(k)\sigma(k)^3k^2~[M\cdot L^{-1} \cdot T^{-3}] \. \label{eq:constant\_transfer\_rate\_1}$$ We set the energy transfer rate $\dot{e}$ to be constant, which gives a relation between the two slopes of Larson’s relations: $b = \frac{1}{3}(2-a)$. Following, we assume (*a*, *b*) = (1/5, 3/5). These values reproduce observations of the dynamics of clouds at the atomic-molecular transition. The energy transfer rate during the inertial cascade is now proportional to: $$\dot{e} \propto \mu\_{\star}\sigma\_{\star}^3 k\_{\star}^{2} \. \label{eq:constant\_transfer\_rate\_2}$$ The normalisation parameters used in Eq. [eq:Larsonmusig] are listed in Table [tab:turbuparameterlist], and correspond to standard values for dense, supersonic, compressible gases. We assume *l*⋆ = 0.1 *p**c* for the star-formation scale, which corresponds to the characteristic width of interstellar filaments hosting pre-stellar cores. Although the detailed physical interpretation of this width is still debated, this scale is of the order of the scale below which the turbulence becomes subsonic in star-forming filaments. At this scale, we adopt a typical velocity dispersion *σ*⋆ = 0.3 km s− 1, a value slightly higher than the speed of sound for a molecular gas at 10 K (*c**s*(10*K*) = 0.2 km s− 1), which corresponds to the observed transition between bound and unbound filaments. The critical mass surface density above which the gas is gravitationally unstable and converted into stars is set to *μ*⋆ ≃ 150 *M*⊙ pc− 2. [fig:inertialcascade] [fig:fragmentingtimescale] Mass transfer rate during the inertial cascade ---------------------------------------------- In star-forming galactic disks, the structure of the gas is observed to be self-similar over a wide range of scales, from a few kpc to the length scale *l*⋆ over which star formation occurs. In this section, we compute the mass transfer rate from the energy transfer rate (Eq. [eq:constanttransferrate2]). Starting from the scale of the disk scale-height, *h**d*, which is the largest possible injection scale, the gas mass is progressively distributed over smaller scales. Following the inertial cascade, when going from a scale 1/*k* to 1/2*k*, large structures break into smaller ones. A diagram of this progressive fragmentation of the gas in the ISM of our modelled galaxies is shown in Fig. [fig:inertialcascade]. The Bonnor-Ebert mass at a given scale 1/*k*, *M**k**B**E*, sets the critical mass above which the structure becomes unstable and collapses into smaller structures, until we reach the dissipation scale *l*⋆: $$M\_k^{BE} = 1.5\dfrac{\sigma^4\_k}{G^2\mu\_k}=1.5\dfrac{\sigma\_{\star}^4}{G^2\mu\_{\star}}\left(\dfrac{k}{k\_{\star}}\right)^{-11/5}$$ We derive the mass transfer rate $\dot{M}\_k$ between wave numbers *k* and 2*k*, using the conservation of energy: $$\frac{3}{2}\dot{M}\_k\sigma^2\_k = \dot{e}V\_k\text{E}\left(\dfrac{M\_k}{M\_k^{BE}}\right) \, \label{eq:transfer\_rate}$$ where $V\_k =\frac{\pi}{6}k^{-3}$ is the volume of the cloud at scale *k*, *M**k* is the total mass stored at scale *k*, and $\dot{e}$ is the constant kinetic turbulent energy transfer rate per unit volume (Eq. [eq:constanttransferrate2]). Gas fragmentation timescale --------------------------- We define the gas fragmentation timescale, *T**f**r**a**g*, as the time needed to transfer all the gas from the disk scale-height, *h**d*, to the star-formation length scale, *l*⋆, assuming that the fragmented gas reservoir is fed at a constant rate $\dot{M}\_{frag}$. During the process of fragmentation, we assume that the disk scale-height is constant. In practice, we track the mass of the initial Bonnor-Ebert sphere along the cascade given by Eq. [eq:transferrate], until a steady-state is reached. Fig. [fig:fragmentingtimescale] shows our estimated gas fragmentation timescale $T\_{frag}[h\_d,\dot{M}\_{frag}]$ as a function of the instantaneous disk scale-height *h**d* and mass flow rate at which the largest structure is fed, $\dot{M}\_{frag}$ (see Sect. [sec:diffusegasreservoir] and Eq. [eq:fragmentingrate]). Both the average disk scale-height and the GMC growth rate increase with the stellar mass, and the average fragmentation timescale decreases. Obviously, gas accretion history, star formation, gas ejection modify those two parameters, *h**d* and $\dot{M}\_{frag}$. To take this into account, we will define an effective disk fragmentation timescale T*f**r**a**g*, which depends on the history of the disk (see Sect. [sec:fragmentedgasreservoir] and Eq. [eq:tstr]). `G.A.S.` cycle: evolution of the gas reservoirs =============================================== [fig:discstructure] To model the gas cycle in galactic disks, we follow the mass content of three gas reservoirs: *(i)* a diffuse gas reservoir; *(ii)* a fragmented gas reservoir; and *(iii)* a star-forming gas reservoir. In the following, we describe each of these gas reservoirs and their mass flow rates, of which we give a schematic view in Fig. [fig:discstructure]. We consider that the accreted gas onto the galaxy is multi-phase and we set the mass fraction of fragmented gas to *f**f**r**a**g**i**n* = 1/3. We further discuss the physical origin and consequences of that assumption in Sect. [sec:discussion]. The diffuse gas reservoir ------------------------- The first main reservoir stores a diffuse ( ≃ 1 cm3) and warm (104 K) gas which is traced by emission lines of the warm, ionised medium such as [Oiii], [Nii], etc.. As illustrated in Fig. [fig:discstructure], the diffuse gas reservoir is fed by two different sources: *(i)* the accretion of warm gas coming from both the cold and the hot mode, and *(ii)* the disruption of fragmented gas by the injection of energy due to supernovae and/or an actively accreting super-massive black hole (see Sect. [sec:Disruptionrate]). The initial temperature of the warm, diffuse gas is assumed to be 104K, and we compute its isobaric cooling using the same approach as with the hot gas phase (Sect [sec:hotmode]). The effective cooling time of this phase T*c**o**o**l**u**n**s**t**r*, *n* is computed after each time step Δ*t*, as the sum of the mass-weighted cooling times of the halo and the newly incoming gas: $$\mathcal{T}\_{cool}^{diff,n} = \underbrace{\left(\mathcal{T}\_{cool}^{diff,n-1} + \Delta t\right)\left(1-\frac{\Delta M}{M}\right)}\_{\text{warm gas}} + \underbrace{\frac{\Delta t}{2}\frac{\Delta M}{M}}\_{\text{newly incoming gas}} \, \label{eq:cooling\_clock\_diffuse\_gas}$$ where *M* is the total mass of diffuse gas after the time-step Δ*t*. Δ*M* is the mass increase of diffuse gas, coming from both accretion and disrupted gas coming from the fragmented phase. We assume that radiative cooling acts on the warm diffuse gas during all the previous time-step (Eq. [eq:coolingclockdiffusegas]). However, for the newly incoming gas, we assume that the radiative cooling occurs only during half of the previous time-step (as for the hot gas phase). Therefore, the fraction of halo and freshly acquired gas can increase or decrease after each step. The mass transfer rate between the diffuse gas phase to the fragmented gas phase $\dot{M}\_{frag}$ is computed using the cooling timescale, $$\dot{M}\_{frag} = (1-f\_{frag})f\_{Q}\phi\_m\dfrac{M\_{diff}}{t\_{cool}^{warm}} \, \label{eq:fragmenting\_rate}$$ where *M**d**i**f**f* is the mass of diffuse gas. The cooling timescale is computed as a function of the average metallicity *Z**d**i**f**f* (Eq. [eq:coolingtimefunction]). We assume a temperature of 104K and an average volume of the warm, diffuse gas component[3](#fn3) V*d**i**f**f* = 22*π**r**d*2 *h**d*. The other parameters in Eq. [eq:fragmentingrate] are computed as follows: * *ϕ**m*, the mass fraction of diffuse gas that condenses in an effective cooling time T*c**o**o**l**d**i**f**f*, depends on the characteristic cooling timescale of this reservoir *t**c**o**o**l**w**a**r**m*. *ϕ**m* was calculated in, and for computational purposes we fitted their computation by an error function given by, $$\phi\_m(x) = \dfrac{1}{2}\left[1+\text{ERF}\left(\dfrac{\log\_{10}x-\log\_{10}x\_t}{\sqrt{s}}\right)\right] \, \label{eq:mass\_filling\_factor}$$ where *x* = T*c**o**o**l**d**i**f**f*/*t**c**o**o**l**w**a**r**m*. The best fit gives *x**t* = 0.55 and *s* = 0.13. * *f**f**r**a**g*, the fraction of the gas that is fragmented, is defined as, $$f\_{frag}=\frac{M\_{frag}+M\_{sfg}}{M\_{diff}+M\_{frag}+M\_{sfg}}, \label{eq:fragmented\_gas\_fraction}$$ where *M**d**i**f**f*, *M**f**r**a**g* and *M**s**f**g* are the gas masses stored in the three different reservoirs (Fig. [fig:discstructure]). This factor accounts for the fact that the more the gas is fragmented, the lower the mass flow to bound structures. * *f**Q*, the Toomre disk instability criterion, is calculated as, $$f\_{Q} = \text{MAX}\left[1.0~;\frac{Q\_{crit}}{Q}\right] \. \label{eq:Toomre\_factor}$$ This factor accounts for the fact that the mass flow rate to bound structures increases as the diffuse gas becomes gravitationally unstable. We adopt a standard value of *Q**c**r**i**t* = 1.0 . [t!] [fig:SKlaws] The fragmented gas reservoir ---------------------------- The clumpy gas phase has low filling factor (typically less than  ≈ 10%) with densities *n*H = 1 − 103 cm− 3 and temperatures 103-10 K. It is traced by atomic and molecular gas lines such as CO, [Ci], [Cii]. We assume that the fragmented gas is contained within spherical and bound structures with initial radii *r* = *h**d*/2, and must contain at least a mass *M**B**E*1/*h* to be unstable. At each time step, the maximum number of bound structures formed in the disk is *N**G**M**C* ≈ (*M**f**r**a**g* + *M**s**f**g*)/*M*1/*h**d**B**E*. The gas fragmentation process feeds the star-forming gas reservoir at the rate: $$\dot{M}\_{sfg}=\frac{M\_{frag}}{\mathcal{T}\_{frag}} \. \label{eq:str\_rate}$$ T*f**r**a**g* is the effective disk fragmentation timescale. This timescale is updated at each time-step and allows us to follow the full history of the gas cycle in relation with the current disk properties (*h**d* and $\dot{M}\_{frag}$). If *M* is the total mass of fragmented gas after the last time-step Δ*t*, and $\Delta M = \Delta t \dot{M}\_{frag}$ is the gas mass incorporated in the fragmented gas reservoir, thus the disk fragmentation timescale is updated as: $$\mathcal{T}\_{frag}^n=\mathcal{T}\_{frag}^{n-1}\left(1-\frac{\Delta M}{M}\right)+\frac{\Delta M}{M}T\_{frag}[h\_d,\dot{M}\_{frag}] \. \label{eq:t\_str}$$ This timescale is a function of $T\_{frag}[h\_d,\dot{M}\_{frag}]$, which depends on values of *h**d* and $\dot{M}\_{frag}$, the disk scale-height and the growth rate of GMCs, respectively (see Sect. [sec:struturingtime scale]). The star-forming gas reservoir ------------------------------ Progressively, the fragmented gas is converted into star-forming gas. The gas contained in the star-forming reservoir is very dense and cold, typically traced by molecular lines as HCN. The star-forming gas reservoir is characterised by its very short timescale before it forms stars. We assume a constant star-formation timescale which is linearly dependent on the length scale over which star formation occurs, *t**s**f* = *l*⋆/*σ*⋆ = 0.1*M**y**r*. The star-formation rate is then simply given by: $$SFR = \dot{M}\_{\star} = \dfrac{M\_{sfg}}{t\_{sf}}. \label{eq:star-formation\_rate}$$ In our prescription, the rate of star formation is mainly limited by the rate at which gas becomes clumpy. Schmidt-Kennicutt laws ---------------------- The `G.A.S.` model follows the evolution of three interacting gas reservoirs. Star formation is assumed to occur in the reservoir containing the densest gas after a progressive structuring starting at disk scale-height in some GMCs. We can estimate the Schmidt-Kennicutt laws at two different scales within this context. At the galaxy scale, the gas surface density Σ*g**a**s* is computed assuming that half of the mass of the fragmented gas is enclosed in the half mass radius (1.68*r**d*) of the disk. The star-formation rate surface density Σ*S**F**R* is computed in the same way. Using these two definitions, our predictions of the galaxy-scale Schmidt-Kennicutt law is shown in Fig. [fig:SKlaws]. The median trend of our star forming galaxy sample at *z* = 2.1 is in good agreement with the relation over four orders of magnitude (Fig. [fig:SKlaws]). Below log10Σ*g**a**s* = 1.0, our star-formation rates are slightly higher than those deduced from. We measure an average scatter of  ≃ 0.46dex. Our star-forming galaxy distribution is fully consistent with individual measurements galaxies measurements. Using the whole star-forming sample of galaxies over the redshift range *z*=1.5–2.1, we estimate a power-law index, *N* = 1.232 ± 0.002. This slope is slightly shallower than the standard slope. In addition, we find a clear trend for an increasing gas surface density as the fragmented gas fraction increases (Fig. [fig:SKlaws]). In our model, we assume that star-formation is initiated in GMCs by the progressive fragmentation of the gas. GMCs are scaled to the disk scale-height *h**d*. By assuming that the fragmented gas mass and the star-formation rate is homogeneously distributed in all GMCs formed into the disk, we define the gas surface density and the star-formation rate surface density as follows: $$\Sigma\_{gas} = \dfrac{1}{N\_{GMC}}\dfrac{M\_{frag}+M\_{sfg}}{\pi (h\_d/2)^2}~~\text{and}~~\Sigma\_{SFR} = \dfrac{1}{N\_{GMC}}\dfrac{SFR}{\pi (h\_d/2)^2} \label{eq:gas\_sfr\_surf\_densities}$$ The clear correlation we found at the galaxy scale disappears at the GMC-scale (Fig. [fig:SKlaws]). Compared to galaxy scale, the fragmented gas fraction shows the opposite trend: the higher the gas surface density in a GMC, the lower its fragmented gas fraction. This trend can be translated as follow: galaxies with a relatively higher (lower) fraction of fragmented gas host relatively more (fewer) GMCs. The mass of gas and star-formation is homogeneously distributed in all GMCs (Eqs. [eq:gassfrsurfdensities]). The average gas surface density is therefore lower in galaxies with highly fragmented gas which also happen to host more GMCs than galaxies with relatively low levels of fragmentation. Model Feedback ============== Massive stars and active galactic nuclei (AGN) in galaxies inject significant amounts of energy into the ISM and the circumgalactic medium (CGM) of galaxies. One of main challenges in galaxy evolution models is to distribute this power into the various gas phases in these media. We now describe how we distribute the SN and AGN power within the galaxy and its surroundings in the `G.A.S.` model, and how this power is used to regulate star formation. Morphology and the efficiency of outflows ----------------------------------------- Galaxies in the early universe frequently have a clumpy morphology that suggest there are interacting regions of dense gas and stars. The clumps in these distant galaxies are not typically observed over the same range of mass and size as star forming regions in local disk galaxies. This morphological evolution provides clues as to how star formation in distant galaxies may have proceeded and was regulated, but overall observations are consistent with mass and energy injected into the ISM playing an important role in regulating star formation in galaxies. To account for this morphological evolution, we assign to each galaxy a specific morphological type. Galaxies that formed recently are assumed to be clumpy. The morphological type can then change, from clumpy to a smooth(er) disk, only during a merger. If two clumpy galaxies merge, the remnant galaxy can be converted to a smooth disk with a probability of 25%. If two different morphologies merge, the remnant galaxy is assigned the morphology of the most massive progenitor (clumpy or smooth). Following these rules, clumpy galaxies become progressively regular, smoother disk galaxies. Furthermore, galaxies with these two different morphologies also behave differently as they evolve. For low mass, clumpy galaxies, we assume that gas is more compact and therefore that ejection of gas through outflows is more efficient. The terminal velocity of a wind is proportional to the square root of the energy injection rate divided by the total mass flow rate of the wind $\dot{M}\_{wind}$. Thus, we can translate this efficiency in terms of the average terminal wind velocity. For clumpy galaxies, we assume the terminal velocity of the wind *V**w* = 100 km s− 1, and *V**w* = 200 km s− 1 for disk galaxies. The mass loading factor $\dot{M}\_{wind}$/SFR is higher in clumpy galaxies than in rotating smooth disks. At *z* = 2.1, distributions of the mass loading factor are characterised by the probabilities that they lie above the 25*t**h*, 50*t**h* and 75*t**h* percentiles for all star-forming galaxies, which values are $\dot{M}\_{wind}$/SFR= 6.5, 11.5, and 26.4, respectively. Supernovae feedback ------------------- In the instantaneous SN event rate (*η**S**N* Gyr− 1) is proportional to the star-formation rate, which is a function of time (related to the star-formation history; see Sects. 3.5.2). Based on the rate of SNe, the instantaneous power generated by SN is simply given by *Q**s**n* = *η**s**n**E**s**n*. We assume the standard value *E**s**n* = 1044 erg s− 1. In `G.A.S.`, outside of the morphological division in the efficiency of outflows, we use the same prescriptions as in. Please see that paper for details. Active Nuclei feedback ---------------------- In addition to the energy input from SNe, the different gas phases are also impacted by the energy produced by AGN. We assume that Super Massive Black Holes (SMBHs) are created during a major merger if the remnant galaxy have a bulge of at least 106*M*⊙. The seed of the SMBH is given an initial mass, *M*•*i**n**i**t* = 300*M*⊙. We associate a gas torus to each SMBH formed. The torus is the gas reservoir that feeds the growth and energy output of the SMBH. This torus is fed by diffuse gas during merger events using the following prescription: Δ*M**t**o**r**u**s* = 0.1*μ**g**μ**m**M**d**i**f**f*(*r* < 3*r**t**o**r**u**s*) where: * *μ**g* is the mass fraction of the gas that is diffuse in the remnant galaxy disk; * $\mu\_m=\frac{\text{MIN}(M\_1,M\_2)}{\text{MAX}(M\_1,M\_2)}$ is the merger mass ratio and *M**i* is the total mass (dark-matter + galaxy) enclosed in the half mass radius of the halo; * *M**d**i**f**f*(*r* < 3*r**t**o**r**u**s*) is the mass of diffuse gas enclosed in *r* < 3*r**t**o**r**u**s*. The torus radius, *r**t**o**r**u**s* = 10*p**c*. From Eq. [eq:torusmasstransfer] it is clear that major mergers with gas-rich progenitors will accrete the largest amount of gas onto the torus of the remnant, while minor mergers of gas-poor progenitors will lead to very little accretion. The infall rate is chosen to be the maximum value of the Bondi infall rate and the free-fall rate given by: $$\dot{M}\_{infall} = \text{MAX}\left(\frac{3\pi G \mu k\_B T\_{torus}}{4\Lambda(T\_{torus},Z\_{torus})}M\_{\bullet};~\frac{M\_{torus}}{2t\_{\bullet}}\right),$$ where the gas temperature of the torus is fixed to *T**t**o**r**u**s* = 106.5 K. *Z**t**o**r**u**s* is the gas-phase metallicity of the torus. The metallicity and temperature determine the cooling efficiency Λ(*T*, *Z*) (see Sect. [sec:hotmode]). *M*• and *M**t**o**r**u**s* are the SMBH mass and the torus mass respectively. *t*• is the orbital time of gas at the radius of the torus *r**t**o**r**u**s*. [fig:AGNcartoon] Our prescription for the infall of gas on to the SMBH follows closely that presented in. The total mass infall flux into the region of influence of the SMBH is the sum of the mass flux that is accreted onto the SMBH $\dot{M}\_{acc}$, and the fraction that is driven out of the region of influence of the SMBH $\dot{M}\_{ej}$. This yields the total infall rate $\dot{M}\_{infall} = \dot{M}\_{ej} + \dot{M}\_{acc}$. In our model, we assume that the relative fraction of ejected to accreted mass flow rates is $\eta\_{\bullet}=\frac{\dot{M}\_{ej}}{\dot{M}\_{acc}}=1.0$. This division results in equal shares of the infalling gas to be: *(i)* accreted onto the SMBH, driving the increase the SMBH mass; and *(ii)* generates power by converting a fraction of the accreted mass *f**M**E* = 0.1 (Fig. [fig:AGNcartoon]). The power produced by the AGN is then $Q\_{AGN} = f\_{ME}\dot{M}\_{acc}c^2$. Distribution of feedback power ------------------------------ We use the power output from AGN and SN to regulate the gas cycle in galaxies in three different ways in our model. The two different sources of power — the AGN power *Q**A**G**N* and the mechanical energy of SN *Q**S**N* — are each divided in two parts: kinetic power fraction *f**k*, *A**G**N* and *f**k*, *S**N*, and the bolometric power (Fig. [fig:feedbackenergydistribution]). A fraction of the kinetic power *f**w* is used to drive a large scale wind (Sect. [sec:Largescaleejecta]). The residual fraction 1 − *f**w* is used to disrupt the fragmented gas of the disk (Sect. [sec:Disruptionrate]) and power the turbulence of the diffuse gas. A fraction of both the AGN and starburst bolometric luminosity, *f**t**h*, *A**G**N* and *f**t**h*, *S**N* respectively, are used to heat the ejected gas (Sect. [sec:thermalluminositypower]). [fig:feedbackenergydistribution] ### Large scale ejecta We derive the instantaneous ejection rate using the conservation of the kinetic energy released by SNs : $$\dot{M}\_{wind,SN} = 2f\_wf\_{k,SN}f\_{esc}\dfrac{Q\_{SN}}{v\_w^2} \label{SN\_ejecta\_rate\_EQ}$$ where *v**w* is the average velocity of large scale wind (Sect. [sec:discmorphology]). *f**e**s**c* is the fraction of mass that escapes the disk. The ejection rate due to the energy output of the AGN is determined by the fraction of the mass infalling towards the SMBH driven out ($\dot{M}\_{ej}$; Sect. [sec:AGNfeedback]). We derive the velocity of the jet using conservation of energy: $$v\_{jet}^2 = \dfrac{2f\_wf\_{k,AGN}Q\_{AGN}}{\dot{M}\_{ej}}=\dfrac{2f\_wf\_{k,AGN}f\_{ME}}{\eta\_{\bullet}}c^2 \label{eq:Vjet}$$ The outflowing jet is coupled to the diffuse gas in the disk which reduces its velocity. To obtain the coupling efficiency between the jet and the diffuse gas, we assume that jet velocity is equal to the escape velocity of the galaxy *V**e**s**c*. Thus, the kinetic energy of the jet is simply used to drive the diffuse gas out of the galaxy. This leads to an mass outflow rate given by: $$\dot{M}\_{wind,AGN} = \dot{M}\_{ej}\text{MAX}\left[1.0,\left(\dfrac{v\_{jet}}{V\_{esc}}\right)^2\right] \label{eq:AGN\_ejecta\_rate}$$ The total instantaneous ejection rate of the combined action of SN and AGN is $\dot{M}\_{wind} = \dot{M}\_{wind,SN} + \dot{M}\_{wind,AGN}$. Each gas reservoir, *M**d**i**f**f*, *M**f**r**a**g* and *M**s**f*, contributes to the instantaneous ejection rate in proportion of its mass fraction (Fig. [fig:discstructure]). ### Overestimating the gas escape fraction: Re-accretion timescale When the gas is ejected from the disc, a fraction of the gas will remain in the hot circumgalactic medium. The remaining fraction is ejected from the dark matter potential. As in to compute the escape fraction of the gas, we adopt a “ballistic” approach based on the comparison of the dark-matter escape fraction to a shifted Maxwell-Boltzmann velocity distributions. However, as shown by, e.g.,, the ejected gas is not only affected by gravitational forces (*V**e**s**c*) but also by ram pressure of gas in the CGM. Due to this affect, our ballistic approach overestimates the escape fraction. To correct this bias, we store all the hot gas ejected from a halo in a extended circumgalactic reservoir *M**e**x* − *c**i**r*. The mass stored in this reservoir is then progressively re-accreted and added to the hot gas trapped into the dark-matter potential well. The gas that is re-accreted out of this reservoir has been implemented in various semi-analytical models and potentially plays a major role in contributing to the overall gas supply of galaxies. The formulation in the various semi-analytical models can vary. For example, adopt a prescription depending on both of the dark-matter halo mass and redshift. In the re-accretion of gas is inversely proportional to the dark-matter virial mass without any dependence on redshift. In `G.A.S.`, we adopt a prescription similar to and. We assume that the gas is re-accreted on a timescale which is twice the halo crossing time *t**r**e**a**c**c* = 2*r**v**i**r*/*v**v**i**r*. ### Disruption rate The power generated by SNs that does not contribute to driving a wind is assumed to be injected directly into the ISM. This remaining power is distributed between the different ISM gas phases in proportion to their mass. Massive stars are assumed to remain mostly embedded in their dense birth clouds during their short lifetimes. The fraction of the SN energy which is injected into the fragmented and star-forming gas *f**f**r**a**g* disrupts this phase, feeding the diffuse gas reservoir (Eq. [eq:fragmentedgasfraction] and Fig. [fig:discstructure]). The disruption rate is defined as: $$\dot{M}\_{disrupt,SN} = 2f\_{frag}f\_{k}^{SN}\underbrace{\left[(1-f\_w)+f\_w(1-f\_{esc})\right]}\_{1-f\_wf\_{esc}}\dfrac{Q\_{SN}}{\sigma\_v^2} \label{eq:SN\_disruption\_rate}$$ where *σ**v* is the average velocity dispersion of the diffuse gas (Sect. [sec:averagevelocitydispersion]). We assume that the velocity dispersion of the disrupted gas is equal to that of the diffuse gas. Assuming this implies that if the diffuse gas is highly turbulent (high *σ**v*) then it takes more energy to disrupt the fragmented gas. The disruption rate depends on two terms specifically related to the gas phase. The first term, 1 − *f**w*, corresponds to the minimum fraction of SN power which disrupts the fragmented gas. The second term, *f**w*(1 − *f**e**s**c*), corresponds to the fraction of power that remains in the gas because of the limit imposed by the galaxy escape fraction. The fraction of power which does not contribute to the wind is therefore re-injected to disrupt the fragmented gas. This fraction increases with the galaxy mass. The fraction of the SNs power injected into the diffuse gas phase maintains or increases the level of turbulence of the diffuse gas (i.e. *σ**v*). During a time step Δ*t*, the possible increase of the turbulent energy of the diffuse gas is given by Δ*E**σ**S**N* = (1 − *f**f**r**a**g*)*f**k**S**N*(1 − *f**w**f**e**s**c*)*Q**S**N*Δ*t* (see Sect. [sec:averagevelocitydispersion] for all other contributions). Simultaneously, we also include the energy output from AGN in disrupting the gas. The contribution from any AGN to the disruption rate is given by: $$\dot{M}\_{disrupt,AGN} = 2f\_{frag}f\_{k,AGN}(1-f\_w)\dfrac{Q\_{AGN}}{\sigma\_v^2} \label{eq:AGN\_disruption\_rate}$$ As for SNs, the residual power 1 − *f**f**r**a**g* is injected into the diffuse gas as turbulent energy: Δ*E**σ**A**G**N* = (1 − *f**f**r**a**g*)*f**k*, *A**G**N*(1 − *f**w*)*Q**A**G**N*Δ*t*. The total mass disruption rate is the sum of the SN and AGN contributions, i.e. $\dot{M}\_{disrupt} = \dot{M}\_{disrupt,SN} + \dot{M}\_{disrupt,AGN}$. ### Radiative heating and bolometric luminosity In the previous two sections we discussed our prescriptions for the kinetic power of SN and AGN. The non-kinetic fraction of the total power is also divided in two different parts. A fraction *f**t**h*, *S**N* (*f**t**h*, *A**G**N*) of the non-kinetic SN (AGN) power is used to heat the ejected gas. These fractions are adjusted so that the average temperature of the ejected gas is between 106 and 107*K*. The residual power (1 − *f**k*, *S**N*)(1 − *f**t**h*, *S**N*) for SNs is then assumed to be emitted as bolometric luminosity in each galaxy (i.e. the part of the total radiative power that does not go directly into heating the gas). Average velocity dispersion and disk scale height ------------------------------------------------- ### Average velocity dispersion The gas disruption rates, as given in Eqs. [eq:SNdisruptionrate], [eq:AGNdisruptionrate], depend on the average velocity dispersion of the diffuse gas, *σ**v*. We compute and continuously update the velocity dispersion by taking into account simultaneously the kinetic energy injected by gas accretion, SNs, and AGN. Our prescription is based on the evolution of the “turbulent” kinetic energy budget *E**σ**v*: 2*E**σ**v* = *M**d**i**f**f**σ**v*2,  *σ**v* being the 3D gas velocity dispersion. Between two time steps, we assume that: * A fraction, *f**d**i**s**p* = 1/2, of the turbulent energy is dissipated per orbital time. * The turbulent energy is increased by 2Δ*E**σ**v**a**c**c* = *f**i**n**c**r*Δ*M**v**a**c**c*2, corresponding to a fractional increase, *f**i**n**c**r*=1/3, per dynamical time, of the kinetic rotational energy of the newly accreted diffuse gas, Δ*M*. We assume that the freshly-accreted diffuse gas (Δ*M*) forms a thin rotating disk. This thin gas disk merges with the pre-existing disk hosting a mass, *M**d**i**f**f*, of diffuse gas. *v**a**c**c* is the orbital velocity of the freshly-accreted diffuse gas computed at the half mass radius. * The total budget of the turbulent energy is finally increased by the energy injected in the diffuse gas by SNs and AGN, Δ*E**σ**v**S**N* and Δ*E**σ**v**A**G**N* (see Sect. [sec:Disruptionrate]). Considering all of these energy terms, after a time step Δ*t*, we have, $$E\_{\sigma\_v}^{n+1} = \left[1-f\_{disp}\left(\frac{\Delta t}{t\_{dyn}}\right)\right]E\_{\sigma\_v}^{n} + \Delta E\_{\sigma\_v}^{acc}\left(\frac{\Delta t}{t\_{dyn}}\right) + \Delta E\_{\sigma\_v}^{SN} + \Delta E\_{\sigma\_v}^{AGN} \label{eq:Updated\_kinetec\_energy\_budget}$$ [fig:discscaleheightdiscvelocitydispersion] Based on this updated total turbulent kinetic energy, we calculate the average velocity dispersion of the diffuse gas *σ**v* (Eq. [eq:Kineticenergybudget]). The two free parameters, *f**i**n**c**r* and *f**d**i**s**p*, have been adjusted to reproduced observed values of velocity dispersion (see Fig. [fig:discscaleheightdiscvelocitydispersion]). During a merger, the total turbulent kinetic energy of the two progenitors are added. We also add a fraction of the gravitational energy due to the interaction between the two galaxies, $$E\_{int} = G\dfrac{M\_1M\_2}{1.68(r\_d^1 + r\_d^2)} \label{eq:Kinetic\_energy\_budget\_merger}$$ where *M**i* is the mass (baryon + dark-matter) included in the galaxy half mass radius, and *r**d**i* is the disk exponential radius. We compare our predictions of the gas velocity dispersion with recent observational measurements of the velocity dispersion of the warm ionised media from and. Objects targeted by these two programs are distributed in redshift between *z* ≃ 0.3 and *z* ≃ 1.7. Our predictions are in good agreement with these observational results but our results to do reproduce some of the most extreme dispersions observed (both small and large values; Fig. [fig:discscaleheightdiscvelocitydispersion]). At *z* = 1.1, the median value of the velocity distribution of the diffuse gas is close to 35 km s− 1. ### Disk scale height The disk fragmenting timescale (Eq. [eq:tstr]) depends on the average disk scale height. We use the disk scale height to define the initial energy injection scale of the inertial turbulent cascade. Our gas fragmenting scenario is mainly based on scaling relations and the disk scale height is crucial in calculating this parameters of the turbulent cascade. The disk scale height *h**d* is deduced from *σ**v* = *σ*(*h**d*− 1). At *z* = 1.1, the median value of the velocity dispersion leads to a median value of the disk scale height is  ≈ 160 pc (Fig. [fig:discscaleheightdiscvelocitydispersion]). Thermal instabilities ===================== For galaxies with large masses, previous semi-analytical models invoked powerful AGN feedback to greatly reduce or even completely quench accretion of cooling gas in their halos. Such prescriptions assume a strong coupling between AGN power, the ISM, and the hot circum-galactic gas, which results in powerful outflows expelling the gas and heating the halo gas. However, observations are not entirely clear on the precise impact of outflows on distant galaxies. In, we also assume that AGN can have an impact on the hot halo phase but we have limited the effect to simply heating the halo gas. In the following we present a new prescription to efficiently reduce the gas accretion onto massive galaxies, *M*⋆ > 1011*M*⊙. This new mechanism is based on the growth of thermal instabilities in the hot halo phase surrounding the galaxy. Thermal instabilities: Description ---------------------------------- Our new mechanism assumes that the condensation of gas and therefore the gas accretion onto the galaxy is progressively and strongly limited by the growth of thermal instabilities in the hot gaseous circum-galactic medium. In the standard model of cooling presented previously and also used in e.g.,, there is a strong dichotomy between the gas stored within and beyond the cooling radius, *r**c**o**o**l*. At radii greater than the cooling radius, the gas is assumed to remain hot, while below *r**c**o**o**l*, the gas is assumed to be warm, 104 K, and dense enough to condense and feed the galaxy. Of course, the reality is more complex, and obviously hot and warm gas co-exist around the cooling radius. In this transition region, the gas can be thermally unstable. These thermal instabilities can generate warm clouds that are orbiting in the hotter gas., in a phenomenological model of accreting gas, find that the cloud-cloud velocity dispersion can be comparable to the characteristic virial velocity of the dark-matter halo in which they formed. The mixing and dynamics of warm clouds into the hot atmosphere can moderate the effective accretion rate of gas onto a galaxy. ### Thermal instability clock As shown by, the timescale over which gas becomes thermally unstable is related to the cooling time. Following their approach, and similar to gas cooling generally, we define a thermal instability timescale, T*T**I*. This timescales evolves concomitantly with the cooling timescale. After each time step, Δ*t*, the thermal instability clock is updated following a mass-weighted prescription, similar to the one applied to the cooling clock, namely, $$\mathcal{T}\_{TI}^n = \underbrace{\left(\mathcal{T}\_{TI}^{n-1} + \Delta t\_{TI}\right)\left(1-\frac{\Delta M}{M}\right)}\_{\text{hot gas in the halo}} + \underbrace{\frac{\Delta t\_{TI}}{2}\frac{\Delta M}{M}}\_{\text{freshly accreted hot gas}} \label{eq:TI\_clock}$$ where *M* is the total mass stored in the hot phase after the last time step, Δ*t*. Δ*M* is the mass accreted by the hot gas phase and/or ejected from the galaxy. For each time step, Δ*t*, the time increment Δ*t**T**I* is calculated as, Δ*t**T**I* = 0.1*F*(Λ)Δ*t* where *F*(Λ) is the thermal instability function, depending on the logarithmic derivative of the cooling efficiency function Λ, $$F(\Lambda) = 2-\dfrac{dln\Lambda}{dlnT} \label{eq:F\_Lambda\_TI}$$ We assume that the hot gas becomes thermally unstable during all the previous time steps (Eq. [eq:TIclock]). However, as for the effective cooling clock, we consider that thermal instabilities can evolve with the addition of freshly accreted gas only during half a time step[4](#fn4). The thermal instability function, *F*(Λ), determines if the hot gas is stable or unstable. Gas is assumed to be unstable if *F*(Λ) ≥ 0 and stable if *F*(Λ) < 0. When hot gas is stable, the time increment Δ*t**T**I* is set to 0. According to the pre-existing/incoming mass ratio, the thermal instability clock can increase or decrease during a given time step. [fig:coolingTIeff] As for the cooling efficiency, the thermal instability function has been interpolated and tabulated. Figure [fig:coolingTIeff] shows the thermal instability function dependence on gas temperature between 103K and 108K and gas metallicity between 10− 4*Z*⊙ and 2*Z*⊙. As for the effective cooling clock, the thermal instability clock of the hot phase is assumed to have the value that the most massive progenitor had prior to merging. ### The mixing zone The thermal instability clock provides at any time the effective timescale of the growth of thermal instabilities. Starting at the cooling radius, we assume that thermal instabilities are propagating through the gas and ultimately reach the center of the halo. We define the instantaneous size of the mixing zone as, $$\Delta r\_{TI}^n = \varepsilon\_{TI}t\_{TI}^n\sqrt{\dfrac{\gamma R T\_{hot}}{\mu}} \. \label{eq:TI\_radius}$$ We assume an adiabatic index *γ* = 1.4 and a mean molecular mass *μ* = 0.62. *R* is the specific ideal gas constant and *T**h**o**t* is the average temperature of the hot gas. We assume that the propagation speed of these instabilities is close to the sound speed of the gas. Following this hypothesis, the value of the free parameter *ɛ**T**I* is set to 0.63. In the mixing zone, we assume that thermal instabilities lead to the formation of warm gas clouds orbiting around the galaxy. As already stated, the cloud-cloud velocity dispersion is assumed to be close to the virial velocity. In addition, even if we do not take into account explicitly such a mechanism, the warm gas clouds will probably interact strongly with the large-scale wind escaping the galaxy. Through this interaction, we can reasonably assume that kinetic energy of the outflowing gas is injected into a dynamic and cloudy circum-galactic medium. Thus we assumed that the condensation of warm clouds is suppressed in the mixing zone. The condensation of clouds is only efficient in the inner region of the hot atmosphere and the effective cooling rate is then computed using Eq. [eq:coolingrate]. For this effective cooling rate, the initial cooling radius calculated is substituted with the expression, *r**T**I* = *r**c**o**o**l* − Δ*r**T**I*. Impact of thermal instabilities on the gas cooling -------------------------------------------------- To understand and quantify the progression of thermal instabilities in the hot gas phase, we define the thermal instability volume filling factor as, $$\phi\_v = 1.0 - \left(\frac{r\_{TI}}{r\_{cool}}\right)^3 \. \label{eq:VFF}$$ *ϕ**v* measures the volume fraction in which thermal instabilities are developed. *ϕ**v* = 0 indicates that there is no impact on the gas cooling. As in the original model, all the gas enclosed in the cooling radius can condense and therefore feed the galaxy. *ϕ**v* = 1 indicates that all the region with *r* < *r**c**o**o**l* is thermally unstable. When *ϕ**v* = 1, since the clouds are orbiting at the virial velocity, is the point at which the cooling flow would otherwise occur, is effectively stopped. [fig:TIVFF] Fig. [fig:TIVFF] shows the mean trend of *ϕ**v* (Eq. [eq:VFF]) as a function of both the dark-matter halo mass and redshift. Depending on the redshift, the first resolved halos, M*v**i**r*>109*M*⊙, have *ϕ**v* distributed between 0.3 and 0.8. At *z* ≥ 9.0, the average *ϕ**v* of the first halos formed is close to 0.6. This value progressively decreases with the redshift. At *z* ≃ 4.0, the average *ϕ**v* reaches a minimum around 0.3. Then the average *ϕ**v* of first halos detected increases as the redshift decreases. The maximum value, 0.8, is reached at low redshift. In this low mass regime, hot atmospheres are only formed via large-scale winds coming from the galaxy. The average *ϕ**v* is closely linked to galaxies driving winds. Indeed, as mentioned in Sect. [sec:TIclock], the effective TI clock runs in proportion to the freshly added to pre-existing gas mass ratio in the hot halo phase (Eq. [eq:TIclock]). The higher the proportion of freshly added ejecta, the slower the TI clock advances and thus the slower *ϕ**v* increases. Between *z* ≃ 9.0 and *z* = 4.0, the intensity of ejecta affecting the first resolved halos, M*v**i**r* ≃ 109*M*⊙, increases continuously (related to both the gas accretion and the SFR). Then the intensity progressively decreases until the lowest redshifts. We caution that the number of halos in the low mass bins at very high redshift, *z* ≃ 9, is low and thus these bins are greatly affected by statistical noise. Over the redshift range, *z* = 4.0 to *z* = 1.5, the volume filling factor of the most massive halos reaches very high *ϕ**v*,  ≃ 0.95. At *z* < 1.0, the average volume filling factor of thermally unstable gas in massive halos starts to decrease. In these massive halos, some accretion still occurs. This accretion produces a new star formation and therefore some large scale ejection events. In these massive halos we therefore observe an increase of freshly acquired hot halo gas. This increase in freshly acquired hot halo gas results in a progressive decrease of the volume filling factor of thermally unstable gas. [t!] [fig:cosmicstellarmassdensity] Fig. [fig:TIVFF] also shows the change the dark matter halo mass and the evolution with redshift of the fraction of galaxies in which condensation of the hot halo gas is occurring, i.e., $\dot{M}\_{cool}>0.$ (Eq. [eq:coolingrate]). At the halo detection mass threshold of our model, radiatively cooling gas is accreting over the whole mass range. The higher the redshift, the higher the fraction of galaxies that accrete radiatively cooled hot halo gas. For *z* > 5.0, the fraction remains high,  > 0.8. Then it strongly decreases and reaches values smaller than 0.1 at *z* < 3.0. At low virial halo masses, the hot gas phase is only fed by large-scale ejecta from galactic outflows (*f**h**o**t* ≤ 0.18). In low virial-mass halos, the mass of hot halo gas is small, *M**h**o**t* ≤ 107*M*⊙. The cold mode is the dominant accretion mode at low masses. Even if at *z* > 5.0 radiative cooling produces an effective accretion in a large fraction of the galaxies, the average radiative cooling rate is less than 1*M*⊙/*y**r* (Fig. [fig:accrate]). The fraction of dark matter halos hosting actively radiatively cooling hot gas increases (or remains constant) as a function of the dark-matter virial mass. The largest population of actively cooling hot halos is reached between  ≃ 1010.25*M*⊙ and  ≃ 1012*M*⊙. Between *z* = 4.0 and *z* = 3.0, in the most massive halos  > 1012*M*⊙ the fraction reaches 1.0. However by *z* < 3.0 for the most massive halos, we find a strong decrease in the fraction of galaxies with actively cooling halos. This decrease is directly linked to the high *ϕ**v* reached in these halos and this decrease is responsible for the reduction in the effective cooling rate estimated for the most massive halos at *z* < 2 (Fig. [fig:accrate]). Results and analysis ==================== The stellar mass assembly and co-moving density of galaxies ----------------------------------------------------------- We compare the predictions of the `G.A.S.` model for the co-moving stellar mass density as a function of redshift with a compilation of observations (Fig. [fig:cosmicstellarmassdensity]). Overall, the stellar mass growth of the Universe predicted in our model are in agreement with observations. Predictions are in reasonable agreement with estimates even at the highest redshifts, *z* > 3, although generally consistent with the highest total co-moving stellar mass density estimates. Furthermore, this general agreement between `G.A.S.` and observations is also reflected in a comparison of the co-moving number density of galaxies binned by redshift (Fig. [fig:stellarmassfunction]). We compare our results with a variety of observed distributions covering the redshift range *z* ≃ 0.1 to *z* ≃ 6. In addition, we compare our best `G.A.S.` predictions for the co-moving number density with those of and also those obtained with the previous *ad-hoc* regulation prescriptions of. These comparisons allow us to gauge the impact of including the star-formation and gas-dissipation prescriptions that are in `G.A.S.` with models that do not include such prescriptions (Fig. [fig:stellarmassfunctioncomparisonotherSAM]). Between *z* ≃ 1.0 and *z* ≃ 6.0, our model agrees very well with observational measurements of both the cosmic comoving mass densities and shape of the co-moving number density of galaxies for a wide range of redshifts (cf. Figs. [fig:cosmicstellarmassdensity] and [fig:stellarmassfunction]). At *z* < 1.0, our model systematically under-predicts the comoving density of intermediate mass galaxies and over-predicts the comoving density of low-mass galaxies. We find that in the low and intermediate mass range, 108.5 ≥ *M*⋆ ≤ 1011.5*M*⊙, the co-moving density functions predicted by our model show a double power-law shape, with a steeper slope at the low mass end of the model distribution. At the highest redshifts, *z* ≃ 6.0, the density function keeps this shape even for the most massive model galaxies, *M*⋆ ≥ 1011.5*M*⊙. This power-law shape is the signature of the continuous competition between the disruption and the progressive fragmentation of the gas through the turbulent cascade. Some galaxies, those with stellar masses above 1010.5 *M*⊙, have already substantially formed at *z* ≃ 6.0. The turbulent cascade regulates the star formation in intermediate- and low-mass objects and keeps a sufficient amount of diffuse and fragmented non star-forming gas that actively feeds star formation until the formation of the most massive observed galaxies. As the redshift decreases, the power law-shape is broken at the high mass end of the distribution and an exponentially declining function is progressively formed. In agreement with observational estimates, the knee of our predicted co-moving number density functions develops around 1010.75*M*⊙ at *z* ≃ 5.0 and progressively shifts to 1011.0*M*⊙ by *z* ≃ 0.1. The break appears when the star formation is strongly reduced or quenched, i.e., when all the fresh gas contained in the disk is consumed. At the high mass end of the galaxy distribution, gas is accreted via the hot mode (Sect. [sec:gasaccretion] and Fig. [fig:accrate]). In, for example,, or, the reduction of the gas accretion is due to the power of the AGN heating the halo gas and halting accretion. To obtain the necessary heating, such prescriptions need an increasing and constant production of power from AGN. In `G.A.S.`, much of the reduction or halting of gas accretion is a result of the increase in the thermally unstable hot gas phase surrounding galaxies. As shown in Fig. [fig:stellarmassfunctioncomparison], the impact of thermal instabilities appears to be progressive and its impact starts to become significant at *z* ∼ 4.0 for the most massive galaxies. The effect is clearly visible for the galaxy mass function at *z* = 2.1. Without such regulation, the continuous accretion of gas leads to the formation of very massive, 1012.0*M*⊙, galaxies at *z* ≃ 1.5. At very high masses, the reduction of the gas accretion onto galaxies is not sufficient to quench completely the star formation. Indeed, some galaxies have stellar masses more than 1011.5*M*⊙ are predicted in our model but are not observed. This is due to a progressive decrease of the volume filling factor of thermal instabilities in the most massive galaxies (Fig.[fig:TIVFF]). A slight increase of the thermal instabilities propagation efficiency, *ɛ**T**I*, could limit the growth of these very massive model galaxies. Compared to the *ad-hoc* recipe ( ∝ *M**h*3) presented previously in, the new regulation prescription based on the turbulent cascade is in better agreement with the comoving densities of low-mass galaxies. The *ad-hoc* recipe was clearly not appropriate for capturing the necessary ingredients to produce sufficient numbers of low mass galaxies and especially those at high redshifts (Fig.[fig:stellarmassfunctioncomparisonotherSAM]). On the contrary, the stellar mass functions predicted by clearly over-produce the comoving density of low/intermediate galaxies, *M*⋆ < 1010.5*M*⊙. Their prescriptions for regulating star formation do not have the correct dependencies as a function of the stellar mass. Gas is converted to stars too efficiently in low/intermediate mass galaxies, hence producing an excess. Furthermore, a lack of gas in more massive galaxies means that their models under-estimate the number of massive galaxies. [t!] [fig:stellarmassfunction] In fact, to elucidate the role played by various physical prescriptions we now include in `G.A.S.`, we compare four different model configurations: our best complete model and three alternative versions in which: i) the gas re-accretion prescription is disabled; ii) thermal instabilities are not considered; and, iii) the photo-ionisation prescription is switched off. These three alternative versions allow us to determine the effective impact of those prescriptions in regulating the mass growth of the ensemble of model galaxies (Fig. [fig:stellarmassfunctioncomparison]). At the highest redshift for which we made this comparison, *z*=4, we see that turning off all 3 of these prescriptions has little impact on the modelled stellar mass distribution function. As we decrease in redshift, we see progressively greater impact for these 3 processes on the stellar mass distribution. The lack of thermal instabilities in the hot gas in the halo, results in overly massive galaxies by z ≈ 2-3 but leaves the number of lower mass galaxies, those with M⋆$\la$1011 *M*⊙ essentially unchanged. Photo-ionisation appears to only impact the co-moving number density of galaxies at low redshift and for galaxies with low stellar masses. At *z* < 1, the power-law shape in the co-moving density of low mass galaxies, i.e., those with *M*⋆ < 108.5*M*⊙, is broken and becomes flatter as the redshift decreases. This trend is a result of a progressive reduction in the effective gas accretion with decreasing redshift and is a direct consequence of the photo-ionisation prescription (Sect. [sec:gasaccretion], Fig. [fig:normalizedbaryonicfraction]). The process with the largest impact on the co-moving number density of galaxies appears to be the re-accretion of gas (Sect. [sec:2-orbitsre-accretion]). Its impact on the galaxy co-moving density distribution starts to become evident at *z* ≈ 3 (Fig. [fig:stellarmassfunctioncomparison]). Without re-accretion, the model strongly under-predicts the comoving density of almost all stellar masses but especially so for intermediate-mass galaxies. [t!] [fig:stellarmassfunctioncomparisonotherSAM] [t!] [fig:stellarmassfunctioncomparison] Evolution of disk properties ---------------------------- We discuss with Fig. [fig:maindiskproperties] the evolution with both the stellar mass and the redshift of five main properties of star-forming galaxies. We present the evolution of: * The gas mass fraction, $f\_{gas} = \frac{M\_{gas}}{M\_{gas}+M\_{\star}}$. *M**g**a**s* is the total mass of the three gas reservoirs combined for a given disk galaxy. * The fragmented gas fraction, $f\_{frag} = \frac{M\_{frag} + M\_{sfg}}{M\_{frag} + M\_{sfg} + M\_{diff}}$. * The galaxy half mass radius, *r*50. * The diffuse gas density, *ρ**d**i**f**f*. This density is computed based on the mass of diffuse gas, a disk with an external radius of 22 × *r**d* and a scale height, *h**d*. We also assume a mean molecular mass, *μ*= 0.62. * The ratio of the gas velocity dispersion to orbital velocity, $\frac{\sigma\_v}{V}$. The dispersion velocity is that of the diffuse gas. The orbital velocity is computed at 2.2*r**d*. The following analysis is based on model star-forming galaxies that have been extracted at a variety of redshifts. We define star forming as those galaxies which lie above 25% of the mean of the relation between the star-formation rate and stellar mass. Within this sample of model galaxies, we calculate median of the ensemble binned in redshift and stellar mass for each of the quantities listed above. We first focus on the gas mass fraction, *f**g**a**s*. At a given stellar mass, the highest redshift galaxies have the highest gas fractions while galaxies with larger stellar masses have lower gas fractions. If we consider both the increase in mass with decreasing redshift, these tracks implies that the gas mass fraction globally decreases as galaxies grow and evolve. At low redshift, *z* < 0.5, the gas mass fractions of star-forming galaxies lie between 15%-50% depending on the stellar mass. In two samples of star-forming galaxies, evolving around *z* = 1.5, measured a gas mass fraction higher than our predictions. At *z* = 1.5, for an average stellar mass of *M*⋆ = 1010.5*M*⊙, the gas mass fraction is distributed as [p15, p50, p85] = [0.18, 0.25, 0.36] in comparison to *f**g**a**s* ≃ 0.6 measured by. In the same redshift slice, for an average stellar mass of *M*⋆ = 1011*M*⊙ we measured [p15, p50, p85] = [0.11, 0.16, 0.26] in comparison to *f**g**a**s* ≃ 0.5 measured by. In parallel to the progressive decrease of the gas-mass fraction, the distribution of this gas between the diffuse and the fragmented/dense gas also evolves. We first note that at a given stellar mass, the fragmented gas mass fraction is an increasing function of the redshift. Galaxies formed at higher redshift contain a larger fraction of fragmented gas than galaxies evolving at lower redshift with a similar mass. At high redshift *z* > 2.5, the fragmented gas mass fraction is a clear decreasing function of the stellar mass. The hierarchy in mass is less clear at lower redshifts. We note that galaxies hosting a stellar mass larger than 1010*M*⊙ stabilise their fragmented gas mass fraction at around 42%. For less massive galaxies,  < 1010*M*⊙, the fragmented gas fraction strongly decreases and reaches low values distributed between 25% and 33% at *z* = 0.1. [t!] [fig:maindiskproperties] [t!] [fig:maindisktime scales] The gas content clearly evolves with time: gas rich and strongly fragmented galaxies evolve through structures dominated by stars and in which the gas is mainly diffuse. This evolution in gaseous and stellar contents is following the evolution of galaxy size: At a given stellar mass, the disc half mass radius is a decreasing function of the redshift. At redshift *z* > 3.0, the disc half mass radius is a decreasing function of the stellar mass. Massive galaxies appear more compact than low-massive galaxies. For all stellar mass bin analysed, the average half mass radius is strictly lower than 1 kpc. At *z* < 3.0, we note the opposite trend: the average half mass radius is strictly larger than 1 kpc and is an increasing function of the stellar mass. The anti-correlated evolution of the gas content and disk size directly impacts the density of the diffuse gas which is the main driver of the GMC feeding rate (Sect. [sec:diffusegasreservoir], Eq. [eq:fragmentingrate]). At a given stellar mass, the diffuse gas density appears to be an increasing function of the redshift. In galaxies formed at higher redshift, the gas is denser than in galaxies of a same stellar mass formed at lower redshift. At a *z* > 3, the diffuse gas density is an increasing function of the stellar mass. At *z* < 3, the most massive galaxies see their diffuse gas density converge to 0.2-0.3 atoms/cm3. The less massive galaxies ( ≤ 1010*M*⊙) see their diffuse gas density strongly decrease and reach a value of  < 0.1atoms/cm3. This evolution is mainly linked to the decrease of the fragmented gas-mass fraction discussed previously. This decrease also impacts the dispersion to orbital velocities ratio, $\frac{\sigma\_v}{V}$. This ratio appears to be a decreasing function of both the redshift and the stellar mass. At *z* ≤ 3.0, we measure a strong increase of $\frac{\sigma\_v}{V}$ in the two lowest stellar mass bins. This strong evolution observed at low redshift is linked to the injection of kinetic energy by the latest-formed SN populations in a gas that becomes rare and diffuse. We recall that the velocity dispersion is computed at the disk scale height and takes into account only the diffuse gas mass. Less gas means a turbulent energy budget (Sect. [sec:Disruptionrate]) divided into a smaller number of gas atoms and therefore a higher average velocity per mass unit. In parallel, the decrease with the redshift of the ratio $\frac{\sigma\_v}{V}$ is also due to the increase with the redshift of the orbital velocity traced by the decrease of the average disk orbital timescale (see upper left panel of Fig. [fig:maindisktime scales]). By tracing average evolutions from low mass at high redshift to high mass at low redshift, the average galaxy trends indicate that the $\frac{\sigma\_v}{V}$ ratio slightly decreases through the evolution. Timescales ---------- [th!] l|ccccc & 109.0 ± 0.2*M*⊙ & 1010.0 ± 0.2*M*⊙ & 1010.25 ± 0.2*M*⊙ & 1010.75 ± 0.2*M*⊙ & 1011.25 ± 0.2*M*⊙ **Full Sample** & & & & & 259090 & 11.1% & 2.0% & 1.4% & 0.6% & 0.0% **timescales** & [p15, p50, p85] & [p15, p50, p85] & [p15, p50, p85] & [p15, p50, p85] & [p15, p50, p85] *t**o**r**b* [Myr] & [71.48, 132.42, 210.06] & [32.38, 67.23, 118.72] & [29.12, 61.81, 108.64] & [17.38, 45.52, 100.14] & [16.63, 36.94, 106.27] *t**c**o**o**l* [Myr] & [2.281, 7.913, 28.065] & [0.616, 1.791, 5.333] & [0.527, 1.527, 4.041] & [0.263, 1.140, 4.030] & [0.326, 1.325, 5.042] *t**f**r**a**g* [Myr] & [7.929, 16.453, 31.759] & [3.152, 6.811, 13.585] & [2.684, 5.865, 11.290] & [1.587, 4.311, 9.970] & [1.790, 3.883, 11.050] *t**d**i**s**r**u**p**t* [Myr] & [7.732, 15.892, 28.138] & [3.111, 6.672, 12.628] & [2.642, 5.874, 10.966] & [1.481, 4.328, 9.919] & [1.661, 3.733, 10.815] *t**s**f**g* [Myr] & [864.5, 974.5, 1577.5] & [255.8, 390.2, 903.2] & [181.0, 280.2, 615.8] & [139.5, 182.3, 878.4] & [91.7, 181.1, 897.2] **ratios** & [p15,p50,p85] & [p15,p50,p85] & [p15,p50,p85] & [p15,p50,p85] & [p15,p50,p85] *t**c**o**o**l*/*t**o**r**b* & [0.029, 0.055, 0.143] & [0.018, 0.025, 0.042] & [0.017, 0.023, 0.035] & [0.012, 0.022, 0.042] & [0.014, 0.029, 0.058] T*c**o**o**l*/*t**c**o**o**l* & [1.781, 2.694, 3.556] & [2.982, 3.488, 3.746] & [3.171, 3.472, 3.757] & [3.058, 3.445, 4.199] & [2.963, 3.312, 4.314] *t**f**r**a**g*/*t**o**r**b* & [0.105, 0.118, 0.151] & [0.088, 0.096, 0.115] & [0.083, 0.092, 0.107] & [0.075, 0.088, 0.113] & [0.075, 0.093, 0.124] *t**s**f**g*/*t**o**r**b* & [5.48, 8.30, 18.55] & [4.13, 6.55, 13.47] & [3.36, 5.36, 11.69] & [2.57, 5.36, 18.16] & [2.12, 6.16, 32.75] *t**s**f**g*/*t**f**r**a**g* & [43.63, 69.07, 137.39] & [41.00, 65.86, 123.26] & [35.63, 57.21, 115.10] & [29.59, 57.33, 169.20] & [24.12, 60.52, 235.34] & & & & & & 109.0 ± 0.2*M*⊙ & 1010.0 ± 0.2*M*⊙ & 1010.25 ± 0.2*M*⊙ & 1010.75 ± 0.2*M*⊙ & 1011.25 ± 0.2*M*⊙ **Star Forming Sample** & & & & & 76762 & 2.2% & 1.5% & 1.1% & 0.3% & 0.0% **timescales** & [p15, p50, p85] & [p15, p50, p85] & [p15, p50, p85] & [p15, p50, p85] & [p15, p50, p85] *t**o**r**b* [Myr] & [65.87, 129.84, 209.79] & [30.95, 65.21, 115.34] & [28.80, 60.72, 105.58] & [18.18, 45.44, 98.73] & [11.16, 47.99, 109.13] *t**c**o**o**l* [Myr] & [1.738, 6.264, 16.940] & [0.547, 1.504, 3.506] & [0.471, 1.355, 2.933] & [0.191, 0.804, 2.395] & [0.070, 0.801, 3.186] T*c**o**o**l* [Myr] & [6.024, 17.445, 36.808] & [1.967, 5.358, 11.648] & [1.673, 4.771, 9.871] & [0.794, 2.929, 8.081] & [0.355, 3.290, 9.476] *t**f**r**a**g* [Myr] & [7.059, 14.864, 26.348] & [2.914, 6.132, 11.352] & [2.542, 5.504, 9.915] & [1.407, 3.806, 8.482] & [0.713, 3.976, 9.133] *t**d**i**s**r**u**p**t* [Myr] & [7.166, 15.119, 26.448] & [2.937, 6.252, 11.630] & [2.584, 5.608, 10.130] & [1.429, 3.906, 8.714] & [0.714, 4.146, 9.505] *t**s**f**g* [Myr] & [834.8, 943.1, 1162.4] & [255.4, 382.5, 590.6] & [174.4, 269.8, 400.7] & [105.2, 166.7, 262.0] & [69.4, 99.3, 175.9] **ratios** & [p15,p50,p85] & [p15,p50,p85] & [p15,p50,p85] & [p15,p50,p85] & [p15,p50,p85] *t**c**o**o**l*/*t**o**r**b* & [0.029, 0.055, 0.143] & [0.018, 0.025, 0.042] & [0.017, 0.023, 0.035] & [0.012, 0.022, 0.042] & [0.014, 0.029, 0.058] T*c**o**o**l*/*t**c**o**o**l* & [1.781, 2.694, 3.556] & [2.982, 3.488, 3.746] & [3.171, 3.472, 3.757] & [3.058, 3.445, 4.199] & [2.963, 3.312, 4.314] *t**f**r**a**g*/*t**o**r**b* & [0.104, 0.115, 0.132] & [0.087, 0.094, 0.104] & [0.082, 0.091, 0.099] & [0.072, 0.083, 0.092] & [0.067, 0.081, 0.095] *t**s**f**g*/*t**o**r**b* & [5.33, 7.61, 12.45] & [3.88, 5.81, 9.11] & [3.17, 4.76, 7.60] & [2.07, 3.70, 7.31] & [1.32, 2.30, 7.58] *t**s**f**g*/*t**f**r**a**g* & [43.41, 64.93, 114.44] & [39.95, 60.73, 97.30] & [34.20, 51.90, 85.50] & [24.63, 42.88, 89.36] & [14.79, 30.06, 124.96] [tab:maindisktime scales] The gas cycle presented in Sect. [sec:Gascycle] is based on three gas reservoirs and different exchange rates between these reservoirs. From these transfer rates and knowing the mass in each of these gas reservoirs, we can deduce the various timescales. Evolution with both the redshift and the stellar mass of five main timescales are shown in Fig. [fig:maindisktime scales]. We focus on: * the orbital time *t**o**r**b* estimated at a radius, 2.2*r**d* ; * the cooling time-scale, *t**c**o**o**l*, given by Eq. [eq:coolingtimefunction]; * the fragmentation time-scale, $t\_{frag}=\frac{M\_{frag}+M\_{sfg}}{\dot{M}\_{frag}}$, defined as the time required to double the fragmented gas mass; * the disruption timescale, $t\_{disrupt}=\frac{M\_{frag}+M\_{sfg}}{\dot{M}\_{disrupt}}$, defined as the time required to deplete the fragmented gas via the kinetic energy provided by SN and/or AGN; * the gas depletion time of the fragmented gas, $t\_{sfg}=\frac{M\_{frag}}{\dot{M}\_{sfg}}$, defined as the time required to convert all the fragmented gas and star-forming gas into stars. As the star-formation timescale is less than all other timescales, *t**s**f**g*, is essentially the time required to convert all of the fragmented gas into stars. We present details of the model output by providing the medians and the 15% and 85% percentiles of these characteristic timescales (Table [tab:maindisktime scales]). We compile the statistics for both the full and star-forming samples of model galaxies at *z* = 2.1. These estimates given in four different stellar mass bins, 109, 1010, 1010.5, and 1011.25 *M*⊙. Even if the median trends presented in Fig. [fig:maindisktime scales] suggest some regularity in galaxy properties as a function of mass, all timescales at constant mass have significant scatter (see Tab. [tab:maindisktime scales]). This large scatter indicates that the dynamics for any one galaxy or ensemble of galaxies is not regular or simple, but is a complex interaction of the various processes included in the model. First, we first focus on the disk orbital timescale which plays a key role in controlling processes that occur over long timescales. Depending on redshift and stellar mass, the orbital timescale ranges between  ≃ 4 Myrs and  ≃ 300 Myrs. At fixed stellar mass, the orbital timescale increases with the redshift. At fixed redshift, it is a decreasing function of the stellar mass. The range of orbital timescales calculated using the `G.A.S.` model are in good agreement with orbital timescales estimated for local galaxies. The fragmentation of the ISM is driven by energy injection on scales of and larger than the disc height via the condensation of the diffuse gas. The rate of fragmentation depends on the radiative cooling rate of the diffuse warm gas. The characteristic cooling time is dependent on redshift and galaxy mass and ranges over five orders-of-magnitude (Eq. [eq:coolingtimefunction]). At a given stellar mass, the characteristic cooling timescale is a decreasing function of the redshift (Fig. [fig:maindisktime scales]). This trend is driven by the increase with the redshift of the diffuse gas density. At a given redshift, the characteristic cooling time is a decreasing function of the mass. These two trends together means that the characteristic cooling timescale of a median galaxy track decreases with both redshift and stellar mass (Fig. [fig:maindisktime scales]). At *z* = 2.1, the characteristic cooling timescale of our star-forming galaxies sample is distributed between  ≃ 0.8 Myr and  ≃ 6.2 Myr. These values are slightly smaller than in the fully resolved galaxy sample of  ≃ 1.3 Myr and  ≃ 8.0 Myr (Table [tab:maindisktime scales]). At all redshifts and for all stellar masses, the characteristic cooling timescale is significantly shorter than the orbital timescale ( < 6%). If no additional mass were added to the reservoir of diffuse gas, its condensation, which is mainly governed by the characteristic cooling timescale, would deplete completely in less than an orbital time. The short cooling and condensation timescales helps to drive the overall complex dynamics of the gas cycle in model galaxies as alluded to earlier. To provide an indication of how much of the diffuse gas is condensing at any given time, we tabulate the mass fraction of the diffuse gas (Table [tab:maindisktime scales]). The mass fraction of the diffuse gas that is condensing can be estimated by comparing the effective cooling time (Eq. [eq:coolingclock]) to the characteristic cooling timescale (Eq. [eq:coolingtimefunction]). At *z* = 2.1, for the five stellar mass bins tabulated, the median values of the ratio, T*c**o**o**l*/*t**c**o**o**l*, range between  ≃ 2.7 and  ≃ 3.5 for both model galaxy samples. Low mass galaxies have the lowest ratios. The ratio is approximately constant,  ≃ 3.5, for galaxies with *M*⋆ ≥ 1010*M*⊙. The fragmentation and disruption timescales have very similar dependencies on redshift and stellar mass (Fig. [fig:maindisktime scales]). These two timescales show a decreasing trend with both redshift and stellar mass. At *z* = 2.1, for the star-forming galaxy sample, the fragmenting timescale and the disrupting timescale are distributed between 4 and 15 Myrs. Those values correspond to  ≃ 10% of the disk orbital timescale. As for the cooling timescale, the fragmentation and the disruption timescales of our star-forming galaxy sample are marginally smaller than those of the full galaxy sample. The disruption timescale is always (slightly) larger than the fragmentation timescale (Table [tab:maindisktime scales]) and thus gas fragmentation is always faster than gas disruption in `G.A.S.`. The short fragmentation timescales indicate that the growth of GMCs is a continuous and efficient process. Timescales measured in our model are fully compatible with GMC gas accretion rates found via numerical simulations. Working against the gas cooling and fragmentation is the injection of energy by SNs and AGN. This energy injection is used in `G.A.S.` to disrupt the gas and transfer some of the fragmented gas back into the reservoir of diffuse gas. Hydrodynamical simulations find that disruption timescales of about 10–20 Myr. This range of values is fully consistent with our predictions. ### Impact of fragmentation and disruption on the life cycle of GMCs The small differences between the condensation and disruption timescales implies that the the total mass of fragmented gas in GMCs stays approximately constant and that significant fraction of the fragmented mass is continuously regenerated. From this equilibrium, there are two evolutionary paths for GMCs that depend on both the scale and mass of GMCs. * Low mass GMCs, those formed in galaxies with short or intermediate disk-scale heights, *h**d* ≤ 50pc, the mass disrupted by SN kinetic energy injection is close to the total of low mass GMCs. In such circumstances, after only few SN cycles, GMCs are fully disrupted. These disrupted GMCs are constantly replenished through the formation of new GMCs through the condensation of diffuse gas. Thus the disruption/fragmentation timescales are approximately the lifetime of the GMCs in our model. Our results are in reasonable agreement with the sizes and short GMC lifetimes estimated in local galaxies, 25-70 pc and 17 ± 4 Myr. * More massive GMCs resist being disrupted by SN. It is the competition between accretion and disruption which regulates the mass of GMCs. Due to this competition, GMCs have to be therefore considered as dynamical structures where the gas is just *passing through* from the diffuse-gas state to the star-forming gas state. SN/AGN-driven gas disruption slows significantly the rate at which gas fragments. In small GMCs, the cloud structure and therefore the sites of star formation can be completely destroyed after only a few SN cycles. In more massive GMCs, the fragmentation is also significantly slowed. Indeed, gas recently added to the reservoir of diffuse gas starts to fragment on larger scale than the gas within the GMCs. In our prescription, this constant renewal and exchange of the gas in these reservoirs and the impact of the added gas to the overall cascade of fragmenting gas is fully accounted for via the mass-weighted fragmentation clock (Eq. [eq:tstr]). ### Gas depletion timescale Depending on the redshift and the stellar mass of a model galaxy, the gas depletion timescale, *t**s**f**g*, ranges from 80 Myr to 2 Gyr (Fig. [fig:maindisktime scales]). The upper end of this range is consistent with with gas depletion timescale measured in local spiral galaxies and the values predicted at high redshift, z ≈ 2-3, are also generally consistent with the values estimated for distant star-forming galaxies. Gas depletion timescales are a decreasing function of both the redshift and the stellar mass. Above *M*⋆ ≥ 1010.5*M*⊙ and at z > 3.0, the average gas depletion timescale reaches a lower-limit close to 80 Myr. The *t**s**f**g*/*t**f**r**a**g* ratio provides an estimate of the efficiency of star formation in GMCs. The number of condensation/disruption-cycles and the depletion timescale of fragmented gas imply that faster cycles lead to shorter depletion timescales. At z = 2.1, median values extracted from our star-forming sample, indicate that between  ≃ 30 and  ≃ 65 cycles are required to convert fragmented gas into star-forming gas (and therefore into stars). This number of gas cycles is consistent with gas dynamics and number of cycles estimated in hydrodynamic simulations. In the star-forming sample, the average number of condensation/disruption-cycles is strictly a decreasing function of the stellar mass. However, in the full galaxy sample, a minimum is reach for models galaxies with mass of 1010.25*M*⊙. The fragmenting gas in galaxies less and more massive than 1010.25*M*⊙ need more cycles to convert the fragmented gas into stars. During each cycle only a small fraction, 1%-5%, of the fragmented gas stops fragmenting and is available for the star formation. In fact, the majority of the gas that forms GMCs is recycled in of-order one to a few dynamical times or simply remains in a state that cannot form stars. We obtain star-formation efficiencies between  ≃ 1 - 5%. Such values are consistent with observational estimates and efficiencies measured in hydrodynamic simulations. In summary, the gas cycle implemented in the new `G.A.S.` model attempts to capture the complex dynamics of the gas in galaxies. In our prescriptions, gas is continuously exchanged between the diffuse and the fragmented non-star forming phases. A large number of condensation and disruption cycles, 30-70, are needed to progressively convert diffuse gas into very dense star-forming gas and then stars. The characteristic timescales of  ≃ 15 Myrs of these exchange rates is only a small fraction of the orbital time-scale. Under such conditions, the fragmented gas is converted into star-forming gas and stars continuously and the gas spends the majority of its time in a non-star-forming gas phase. Discussion and Conclusions ========================== We present a new semi-analytical model, `G.A.S.`, in which we implemented a more realistic gas cycle than has been previously implemented in a semi-analytical model. We introduced a prescription for delaying star formation which is underpinned by progressively fragmenting the gas. The formation of giant molecular clouds, filaments and cold cores takes time and therefore at a given instant only a small fraction of the total gas is in the form of pre-stellar cores. The majority of the gas is not in a form that is immediately available to form stars. Within this framework, we account for the continuous dissipation of the turbulent kinetic energy through the different ISM scales and phases. We implemented an approximate multi-phase ISM where the gas cycles between a warm diffuse phase, a cold fragmented non-star-forming phase, and a very dense star-forming gas phase. Diffuse warm gas is progressively fragmented following the effective cooling time. Even if the overall depletion timescale of the fragmented gas is, on average, proportional to the disk orbital time-scale, we measure a large scatter in both the depletion time-scale and the disk orbital time-scale. This proves that prescriptions that rely solely on the disk orbital timescale do not capture properly the complexity of gas cycles in galaxies. Smaller characteristic timescales, such as the cooling and/or energy injection timescales due to supernovae and AGN are also important. This energy input on shorter timescales, efficiently disrupts the star forming gas. The large-scale velocity dispersion of the diffuse gas is also maintained by SNs/AGN kinetic energy injection. A fraction of the gas can also be ejected from disks by SN explosions but the characteristic time-scale of ejection is in average 10 times larger than the fragmenting/disruption cycle time-scale. While the SN/AGN kinetic energy injection in the ISM regulates the star-formation rate in low mass galaxies, our model galaxies retain a large fraction of the gas. This “local” gas is then used with a progressively increasing efficiency necessary to build massive, *M**s**t**a**r* > 1011*M*⊙, galaxies in the early Universe, *z* ∈ [4; 6]. Star formation occurs only in the very dense gas, which in our model is a product of continuous fragmentation over a wide range of scales. This new complete gas cycle strongly regulates the star formation in our modelled galaxies. Only a few percent of the available fragmented gas is converted into stars during a GMC life cycle. By taking into account the fragmented gas on a galaxy scale, our model is able to reproduce the standard Schmidt-Kennicutt law. Our estimated gas depletion timescales, which are directly related to the time it takes for gas to fragment, are in good agreement with observational estimates. This new gas regulation cycle leads to very good agreement with the observed stellar mass function over a wide range of redshifts, *z* ∈ [0.8; 6]. The ability of our model to catch the stellar mass assembly in high redshift galaxies as been already used with success in. Galaxy properties (gas/stars contents, metallicities and FUV fluxes...) predicted by our model have been post-processed to successfully predict the [CII] luminosity functions at high redshifts and allowed us to explore the main characteristics of this emission. At *z* < 0.8, we find some discrepancies between the predicted and the observed stellar mass functions. Specifically, an over-density of low-mass galaxies and the under-density of intermediate mass galaxies modelled at z<0.8 could probably be solved by slightly increasing the efficiency of photo-ionization in our prescription. At *z* < 4.0, to reduce the efficiency of or to stop the growth of stellar mass in massive galaxies, the accretion of gas and its transformation into stars has to be reduced or stopped. Previous semi-analytical models implemented strong AGN feedback to quench gas accretion onto massive galaxies. In those models, a significant fraction of the power produced by the AGN is directly used to reduce the cooling of the hot halo gas. This implies a constant AGN power production, which may not be consistent with our understanding of AGN variability. For massive galaxies, we propose an alternative model which regulates the radiative cooling and gas accretion onto galaxies. In parallel to radiative cooling, we implemented the development of thermal instabilities creating warm gas surrounding the galaxy. The growth of these thermal instabilities in the central region of the halo progressively reduces or halts the cooling and dissipation of kinetic energy in the halo and therefore reduces or stops the accretion onto galaxies. This process of regulation is a natural outcome of the growth of thermal instabilities in the hot halo gas and does not depend on the power produced by AGN. However, our actual efficiency parameters do not fully stop gas accretion onto very massive galaxies. In some massive dark-matter halos, radiative cooling restarts (VFF  <  1.0) even if its hot gaseous halo has been fully quenched previously (VFF = 1.0). The recovery of the gas accretion leads to the formation of some ( < 10) unobserved overly massive galaxies at z ≃ 0.3. Our prescription of thermal instability growth in the hot gas phase is governed by a set of two parameters which can be further refined to overcome this problem. In our model of the gas cycle, we assume that the gas initially fragments at the disk scale height with the progressive and continuous formation of over-densities which are akin to observed GMCs. The fragmentation of the gas will progress down to the scale at which stars form or  ∼ 0.1 pc. However, this fragmentation can start at larger scales in the cold streams or in the hot(warm) gas phase surrounding the galaxy. Some clumps of warm gas could already be formed around the galaxy. We assume in our model that a fraction of the newly accreted gas is already fragmented but we do not include any interaction between this already fragmented gas in the halo and the large scale wind. This kind of interaction could reduce, perhaps substantially, the gas-accretion efficiency. Indeed such coupling could increase the cloud-cloud velocity dispersion and maintain the turbulence in the hot and warm gas contained in the CGM of the most massive galaxy. Such a hypothesised mechanism will be complementary to the development of thermal instabilities, could in fact act as a catalyst to the formation of more clouds in the CGM, and will could contribute in quenching the radiative cooling and dissipation in the most massive halos. MC acknowledges Thomas Fenouillet for his much appreciated help in the use of the LAM’s computation clusters and wishes to thank Mathieu Génois for his help in the proofreading of this article and for all physical and technical discussions. MC thanks Olivier Ilbert for the numerous and very helpfull discussion about stellar mass function (Estimations, errors, limits) and Benoit Epinat for useful discussion about galaxy disk dynamics. MC also wishes to express his appreciation to Alexandre Beelen and Yanick Roehlly for very useful physical and technical discussions. Authors thank the Centre National d’Etudes Spatiales (CNES), Aix Marseille Université, Sorbonne Université, the Programme National de Cosmologie and Galaxies (PNCG) and Programme de Physico-chimie du Milieu Interstellaire (PCMI) of CNRS/INSU for their financial support. PG gratefully acknowledges the support of the Institut Universitaire de France (IUF). The GALAKSIENN library ====================== The GALAKSIENN library stores the main results produced by our new `G.A.S.` semi-analytical model, especially MOCK galaxy catalogs and sky maps. It is available online through the ZENODO platform:, DOI: 10.5281/zenodo.1451229. A complete description of the GALAKSIENN library is given in paper III. In association with this paper I, we distribute the ASCII tables of the stellar mass functions (Fig. [fig:stellarmassfunction]). --- 1. The temperature of the accreted gas in this mode is close to 104K.[↩](#fnref1) 2. Using the adaptive time-step method described in, we ensure that all exchange rates between reservoirs are constant during each time-step.[↩](#fnref2) 3. The diffuse gas is assumed to evolve in a thick disk with scale-height *h**d* and a total radius of the stellar disk equal to 11*r**d*. We assume that the warm diffuse gas can extend to a radius that is up to two times larger than total radius of the stellar disk.[↩](#fnref3) 4. Following a scheme where gas is continuously added into the hot gas reservoir.[↩](#fnref4)
arxiv_0000008
e^{-\Bar{m}\_s / x} \left[ 1 + \Bar{m}\_s \left( 1 - \frac{\nu}{3} \right) + \mathcal{O}(\Bar{m}\_s^2) \mathcal{O}(x) \right] \;.$$ This expansion can generally be written as $$e^{-\Bar{m}\_s / \gamma\_\text{GR}} = \sum\_{n=0}^\infty f\_{n}(x) \frac{\Bar{m}\_s^n}{n!} e^{-\Bar{m}\_s / x}$$ with *f**n*(*x*) absorbing the remaining dependence on *x* and some overall coefficients of order O(1)[6](#fn6). Note that each term individually, if viewed as a function of $\Bar{m}\_s$, reaches a maximum at $\Bar{m}\_s = a x$, such that $$f\_n(x) \frac{\Bar{m}\_s^n}{n!} e^{-\Bar{m}\_s / x} \leq f\_n(x) x^n \frac{n^n}{n!} e^{-n} \leq f\_n(x) x^n \;.$$ Hence, a term proportional to $\Bar{m}\_s^n$ can at most contribute with *x**n*. This means that effectively we can still assume $\Bar{m}\_s \sim \gamma \sim x$ when doing this expansion, even if the scalar mass does not formally live on the hard scale. In other words, $\Bar{m}\_s^n$ can never become more important than *x**n*. By assuming $\Bar{m}\_s \sim x$ we can make sure to always only keep those terms in the expansion that have a chance to become relevant at some point during the inspiral. The same reasoning also applies to the other functions that appear in the Lagrangian, such that we can also expand terms like ${{\rm Ei}}(-\Bar{m}\_s/\gamma\_\text{GR})$. Continuing with the calculation and following the above discussion, we are allowed to truncate the expansion in Eq.  to the desired PN order. At NLO, this is $$\label{non-pert-expansion-exp2} e^{-\Bar{m}\_s / \gamma\_\text{GR}} = e^{-\Bar{m}\_s / x} \left[ 1 + \Bar{m}\_s \left( 1 - \frac{\nu}{3} \right) \right] \;,$$ Inserting this relation into Eq.  and truncating at the relevant order again shows that the inverse Kepler law becomes $$\gamma = x + \left(1 - \frac{\nu}{3}\right) x^2 - \Bar{Q} \frac{8}{3} (\Bar{m}\_s + x) e^{-\Bar{m}\_s / x} \;.$$ Note in this case the NLO term of Eq.  just gets truncated, so that the above-described expansion effectively results in replacing *γ*GR with *x*. However, the quantities presented in App. [app:Kepler relations] are expanded consistently up to 1PN order, including all NLO terms. #### **Binding energy.** The last quantity we compute regarding the conservative dynamics is the binding energy for a given orbital separation and frequency. Applying a Legendre transformation to the Lagrangian, we obtain the Hamiltonian *H* for this system: $$H = \sum\_{i= 1,2} \frac{\partial L}{\partial \Dot{\mathbf{x}}\_i^k} \Dot{\mathbf{x}}\_i^k - L \;.$$ From here, we calculate the binding energy for the considered circular orbits, as $$\frac{E}{\mu} = 4 \bar{Q} e^{-\frac{\bar{m}\_s}{\gamma }} (\bar{m}\_s - \gamma) - \frac{\gamma}{2} \left[ 1 + \left( - \frac{7}{4} + \frac{\nu}{4} \right) \gamma \right]\;,$$ where *μ* is the reduced mass (cf. Tab. [tab:dimenionlessparams]). This expression can equivalently be expressed in terms of *x* at 1PN order as $$\frac{E}{\mu} = \frac{8}{3} \bar{Q} e^{-\frac{\bar{m}\_s}{x}} (2 \bar{m}\_s - x) - \frac{x}{2} \left[ 1 - \left(\frac{3}{4} + \frac{\nu}{12} \right) x \right]\;.$$ Note that the last term, i.e., the term independent of $\Bar{Q}$, is just the 1PN expression for the binding energy in pure GR. The full expression considering all scalar effects is found in App. [app:Kepler relations]. [b] [renormalization::massrenorm] 0.5cm [b] [renormalization::singlegravitonvertex] 0.5cm [b] [renormalization::singlescalarvertex] [renormalization::1PN] 0.49 [fig::puregravitonemission] 0.49 [fig::modifiedgravitonemission] 0.49 [fig::purescalaremission] 0.49 [fig::modifiedscalaremission] [fig::allemissiondiagrams] Radiation --------- Let us now discuss the computations involving soft degrees of freedom. Fig. [fig::allemissiondiagrams] shows all emission diagrams that contribute at 1PN order. This comprises pure GR, pure scalar, as well as mixed contributions. We do not carry out the full calculations of all pure GR diagrams. Instead, it is more sensible to only calculate the corrections due to the scalar field. We then add these contributions in the fashion of a Taylor expansion to an existing template at high PN order in pure GR. [fig::scalarradiationselfforce] In the following, we first outline how the power loss via gravitational waves is affected by the scalar. Subsequently, the power loss via scalar radiation is computed. #### **Gravitational radiation.** After integrating out the orbital scale, the radiation gravitons are sourced by an effective energy-momentum tensor $T\_{\rm eff}^{\mu\nu}$, which enters the effective action via $$S\_{\rm eff} \supset S\_{\text{src},\, \Bar{h}} = - \frac{1}{2 {M\_\mathrm{Pl}}} \int dx^4\, T\_{\rm eff}^{\mu\nu}(x) \Bar{h}\_{\mu\nu}(x) \;.$$ The general strategy to retain a consistent power counting is to expand *h**μ**ν* in multipole moments around the source. We intend to compute the gravitational waveform at NLO with regard to the scalar emission. As discussed in the subsequent section, the LO multipole in the scalar sector is given by the dipole, contributing at  − 1PN order. This implies it is sufficient to only consider the scalar effects on the quadrupolar radiation in the graviton sector. At LO, the quadrupole moment reads *I**i**j* = ∫*d*3**x** *T*00[**x***i***x***j*]STF ,  where STF denotes the symmetric and trace-free projection. We find that none of the diagrams in Fig. [fig::modifiedgravitonemission] contribute to this order. Therefore, the quadrupole moment takes the same form as in pure GR. Explicitly, we have *I**i**j* = ∑*a* = 1, 2*M**a*[**r***a**i***r***a**j*]STF . Using our expression for **r***i* and the expansion for *r*2, Eq.  and Eq. , we further obtain *I**i**j* = *μ* [**r***i***r***j*]STF ,  with **r** = **r**1 − **r**2. Here we have already dropped all terms that will only contribute beyond the LO energy loss. Especially, all corrections due to the scalar field on *r*2 will only contribute beyond 0PN and are thus negligible for the energy loss due to graviton emission at LO. The energy loss is then given by the usual quadrupole formula $$\label{eq:power\_loss\_GR} \begin{split} P\_{\Bar{h}} &= \frac{G}{5} \left< \left( \partial\_t^3 I^{ij} \right)^2 \right> \\ &= \frac{32 G}{5} M^2 \nu^2 r^4 \omega^6 \\ &= \frac{32}{5 G} \nu^2 x^5 + \frac{1024}{15G} \nu^2 {\Bar{Q}}e^{-{\Bar{m}\_s}/ x} ({\Bar{m}\_s}+ x) x^4 \;, \end{split}$$ where $\left<\dots\right>$ denotes time averaging. In the last line, we have dropped all terms which are of quadratic or higher power in ${\Bar{Q}}$, and also those terms that would only contribute from 1PN order on. The first term is the usual pure GR quadrupole radiation, while the second term describes the modifications arising due to the scalar field. Notably, the presence of the scalar field affects the LO graviton radiation only via the modified Kepler relation, Eq. . #### **Scalar radiation.** Similar to the gravitational radiation, we obtain the energy loss by scalar emission by first identifying the source term that produces the radiation at linear order $$S\_{\rm eff} \supset S\_{\text{src},\, {\Bar{\phi}}} = \frac{1}{{M\_\mathrm{Pl}}} \int d^4 x J(x) {\Bar{\phi}}(x)$$ All relevant scalar radiation diagrams contributing to *J* are shown in Fig. [fig::purescalaremission] and [fig::modifiedscalaremission]. We follow Ref.  to obtain the energy loss. First, we Taylor expand the field around the center of mass. This allows us to rewrite the source term via $$S\_\text{src} = \frac{1}{{M\_\mathrm{Pl}}} \int dt \sum\_{l=0}^\infty \frac{1}{l!} \mathcal{I}^L \partial\_L {\Bar{\phi}},$$ where the derivatives $\partial\_L {\Bar{\phi}}$ are evaluated at (*t*, **x** = 0). *L* denotes a multi index, such that *x**L* = *x**i*1*x**i*2…*x**i**l* and $$\label{eq::multipole\_moments} \begin{split} \mathcal{I}^L = \sum\_{p = 0}^\infty &\frac{(2l + 1)!!}{(2p)!! (2l + 2p + 1)!!} \\ &\qquad \int d^3 \mathbf{x}\,r^{2p} \mathbf{x}\_\text{STF}^L \,(\partial\_t^2 + m\_s^2)^{p} J \;, \end{split}$$ with *r* = ∣**x**∣, are the multipole moments of the source. A more detailed derivation of the above expression is given in App. [app:energylossscalar]. Please note that Eq.  deviates from Ref.  since we consider a non-vanishing scalar potential. Note further that the contributions to the source term arising from diagrams in which the radiating field couples directly to one of the worldlines can be calculated without performing the multipole expansion. The source term then always contains a *δ*-function, which sets the position to the current position of the object whose worldline is being coupled to. The diagrams in which the radiating scalar instead couples to an interaction vertex whose position is integrated over, such as in Figs. [fig::purescalaremission]iii) and [fig::modifiedscalaremission]i), are slightly more complicated. They typically require performing the multipole expansion before the diagram can be solved analytically. In particular, this means that each term in the multipole expansion has to be calculated separately. In previous computations this subtlety has not been taken into account. Instead, higher-order moments were calculated by supplementing the monopole term with an appropriate *δ*-function in order to isolate the corresponding source term. This approach however does not capture the full dynamics, which can be verified by a direct computation of the higher-order moments. As a consequence, we find additional terms contributing to the power loss via scalar radiation compared to existing literature. The explicit computations are relegated to App. [app:scalarradiationcalculation]. To derive the energy loss, we need to compute the imaginary part of the self-force diagrams in which the scalar field couples to the multipole moments. For our desired accuracy, it is sufficient to consider the LO topology depicted by the left diagram in Fig. [fig::scalarradiationselfforce]. The calculation – see App. [app:energylossscalar] for details – yields $$\begin{aligned} \label{eq::scalar\_radiation\_generic\_multipole} \begin{split} P\_{\Bar{\phi}} = \frac{1}{4\pi^2 T} &\sum\_{l=0}^\infty \frac{1}{l!(2l+1)!!} \\ &\int\_{m\_s}^\infty d\omega \, \omega \left( \omega^2 - m\_s^2\right)^{l + 1/2} \left| \Tilde{\mathcal{I}}^L (\omega) \right|^2 \;, \end{split}\end{aligned}$$ where *T* = 2*π**δ*(0). A tilde denotes Fourier-transformed quantities, e.g. in this case $$\Tilde{\mathcal{I}}^L (\omega) = \int dt\,e^{i \omega t} \mathcal{I}^L (t).$$ We decompose the total energy loss into its contributions from different multipole moments. Up to 0PN order we obtain $$\label{eq::power\_loss} \begin{split} P\_{\Bar{\phi}}^\text{0PN} &= P\_{\Bar{\phi}}^{l = 1} \left( 1 - \frac{m\_s^2}{\omega^2} \right)^{3/2} + P\_{\Bar{\phi}}^{l = 2} \left( 1 - \frac{m\_s^2}{4\,\omega^2} \right)^{5/2} \;, \end{split}$$ where we factored out the common factor that inhibits scalar radiation for frequencies of *ω* < *m**s* (*ω* < *m**s*/2) in the case of dipole (quadrupole) radiation. Note that no monopole radiation is emitted, since, as mentioned above, we only consider circular orbits. [t] [fig:powerlossplot] [t] | | *m**s* [eV] | *q*1 [*M*1] | *q*2 [*M*2] | *p*1 [*q*1] | *p*2 [*q*2] | *c*3 | | --- | --- | --- | --- | --- | --- | --- | | | 10− 15 | 10− 3 | 10− 3 | 5 × 10− 2 | 5 × 10− 2 | *m**s*2/*M*Pl2 | | | 10− 15 | 10− 4 | 0 | 5 × 10− 3 | 0 | *m**s*2/*M*Pl2 | [tab::powerlossplotparams] When evaluating Eq. , we calculate the *p* = 0 component for each diagram and only keep the *p* = 1 contribution for the LO scalar emission diagram, i.e., the one in Fig. [fig::purescalaremission]i). Judging from the power counting rules, one does not expect other diagrams to contribute for *l* = *p* = 1 at the considered order. However, a direct computation of the *l* = *p* = 1 component of the diagram in Fig. [fig::modifiedscalaremission]i) reveals IR divergent terms in the limit of *m**s* → 0. Since in the massless limit the diagram itself vanishes as there is no corresponding operator in the action, these contributions are considered unphysical. Yet they contribute to the here relevant order for non-vanishing *m**s*. We suspect that these divergent terms are canceled by higher-order diagrams and thus do not further include their effect, instead postponing a more detailed study to future research. Plugging the results from App. [app:scalarradiationcalculation] into Eq. , we obtain [eq:scalarpowerloss] $$\begin{aligned} \begin{split} \label{eq:scalar\_power\_loss:LO\_dipole} P\_{\Bar{\phi}}^{l=1, \text{LO}} &= \frac{8}{3} G M^2 \delta q^2 r^2 \omega^4, \end{split}\\[2ex] \begin{split}\label{eq:scalar\_power\_loss:p\_dipole} P\_{\Bar{\phi}}^{l=1, p\_1, p\_2} &= \frac{256}{3} G^2 M^3 \Xi\_p \, r \omega^4, \end{split}\\[2ex] \begin{split} \label{eq:scalar\_power\_loss:c3\_dipole} P\_{\Bar{\phi}}^{l=1, c\_3} &= \frac{G M\_1 M\_2 M}{3 \pi m\_s} \Xi\_c (m\_sr - 1) r^2 \omega^4, \end{split}\\[2ex] \begin{split} \label{eq:scalar\_power\_loss:NLO\_dipole} P\_{\Bar{\phi}}^{l=1, \text{NLO}} &= \frac{8 G M^2}{15} \delta q \bigg[ - 10 g\_1 G M \\ &{}\hspace{1cm} + g\_2 r^3 ( m\_s^2 - 6 \omega^2) \bigg] r \omega^4, \end{split}\\[2ex] \begin{split} \label{eq:scalar\_power\_loss:LO\_quadropole} P\_{\Bar{\phi}}^{l=2} &= \frac{128}{15} G M^2 \Xi\_q^2 r^4 \omega^6. \end{split} \end{aligned}$$ Here we have expanded the energy loss terms to only keep relevant contributions. More precisely, Eq.  contains the LO energy loss, which in our case is the usual dipole radiation term, formally entering at  − 1PN order. Likewise, Eq.  contains the dipole radiation terms which enter at 0PN order. The terms which contain effects due to *p*1, *p*2 (Eq. ), and *c*3 (Eq. ) also contribute at 0PN order as does the scalar quadrupole radiation (Eq. ). All combinations of worldline coefficients have been combined to dimensionless parameters and are summarized in Table [tab:dimenionlessparams] for the reader’s convenience. We may further use the Kepler relation to express the power losses in terms of the PN expansion parameter *x*. In this case, we obtain [eq:scalarpowerlossx] $$\begin{aligned} \begin{split} \label{eq:scalar\_power\_loss:LO\_dipole\_x} P\_{\Bar{\phi}}^{l=1, \text{LO}} &= \frac{8}{3 G} \delta q^2 x^4 \;, \end{split}\\[2ex] \begin{split} \label{eq:scalar\_power\_loss:p\_dipole\_x} P\_{\Bar{\phi}}^{l=1, p\_1, p\_2} &= \frac{256}{3 G} \Xi\_p \, x^5\;, \end{split}\\[2ex] \begin{split} \label{eq:scalar\_power\_loss:c3\_dipole\_x} P\_{\Bar{\phi}}^{l=1, c\_3} &= \frac{M\_1 M\_2}{3 \pi} \frac{{\Bar{m}\_s}- x}{{\Bar{m}\_s}} \Xi\_c\, x^3 \;, \end{split}\\[2ex] \begin{split} \label{eq:scalar\_power\_loss:NLO\_dipole\_x} \raisetag{0.8cm} P\_{\Bar{\phi}}^{l=1, \text{NLO}} &= \frac{16}{9 G} (-3 + \nu) \delta q^2\,x^5 + \frac{8}{15 G} {\Bar{m}\_s}^2\, g\_2\, \delta q\, x^2 \\ &- \frac{16}{15 G} (5 g\_1 + 3 g\_2) \delta q\,x^5\;, \end{split}\\[2ex] \begin{split} \label{eq:scalar\_power\_loss:LO\_quadropole\_x} P\_{\Bar{\phi}}^{l=2} &= \frac{128}{15 G} \Xi\_q^2\,x^5\;. \end{split} \end{aligned}$$ Fig. [fig:powerlossplot] shows the different contributions to the power loss relative to the LO power loss in pure GR. The corresponding model parameters are summarized in Table [tab::powerlossplotparams]. In the left panel, both (induced) scalar charges are non-zero, a situation typical for NS-NS binaries. We observe that the modification of the Kepler relation due to the impact of the scalar field on the binding energy is clearly the most significant effect for the chosen values. This contribution to the power loss is present during a large part of the inspiral. For small values of $x \lesssim 10^{-4}$, which corresponds to the radial scale where *m**s* ∼ 1/*r*, the exponential suppression of hard scalar interactions becomes sizable. The effect of the scalar mass also becomes apparent regarding the radiative dynamics of the system. Here, dipole (quadrupole) radiation is shut off when *m**s* < *ω* (*m**s* < *ω*/2). Therefore, smaller values of *m**s* would shift this cutoff to lower *x*, leading to a larger impact of the scalar field during earlier stages of the inspiral. In particular, the LO dipole contribution then becomes enhanced as $P^{l=1,\mathrm{LO}}\_{{\Bar{\phi}}}/P\_\mathrm{GR}^\mathrm{LO} \propto 1/x$. In the right panel, we set *q*2 = *p*2 = 0, i.e., only one constituent carries a scalar charge. Such values would typically be realized in NS-BH inspirals. We immediately notice that the power loss contribution from the modified Kepler relation vanishes. This can be seen from Eq.  as $P\_{\Bar{h}} \propto q\_1 q\_2$. The same is true for scalar radiation emitted via the cubic self-interaction vertex. In addition, we observe no scalar radiation proportional to induced scalar charges since $P\_{{\Bar{\phi}}}^{l=1,p\_1,p\_2} \propto (p\_2 q\_1 - p\_1 q\_2)$. Nonetheless, please note that the power loss from LO dipole radiation is comparable to the left panel, although the scalar charge *q*1 is an order of magnitude smaller. This is due to the fact that this contribution is proportional to *δ**q*2 ∝ (*M*2*q*1 − *M*1*q*2)2, i.e., becomes typically larger if the scalar charges deviate from each other. The same considerations apply to the NLO dipole. The scalar quadrupole, on the other hand, scales with Ξ*q*2 ∝ (*M*22*q*1 + *M*12*q*2)2, hence is suppressed for *q*2 = 0. Gravitational Waveform ====================== We are now set to derive the gravitational waveform. In the time domain, a gravitational wave *h*(*t*) is expressed by the waveform $$\begin{aligned} h(t) = A(t) \cos \varphi(t)\;, \end{aligned}$$ where *A* denotes the amplitude and *φ* is the phase. We derive the scalar-induced modifications to the TaylorF2 approximand  and thus employ the stationary phase approximation (SPA). For the Fourier transform of the above expression we have $$\Tilde{h}(f) = \mathcal{A}(f) e^{i \Psi} \;.$$ Here, Ψ solves the following differential equations : $$\begin{split} 0 &= \frac{d \Psi}{d f} - 2 \pi t(f) \;, \\ 0 &=\frac{d t}{d f} + \frac{G \pi M}{3 v^2} \frac{E'(f)}{P(f)} \;. \end{split}$$ Above, *v* denotes the orbital velocity which is related to the GW frequency *f* via *v* = (*π**G**M**f*)1/3. Also, the prime denotes differentiation with respect to *v*. It is therefore necessary to calculate the fraction *E*ʹ/*P* and integrate twice with respect to the frequency. Here, the binding energy *E* is given by Eq. . The different contributions to the power loss of the binary are found in Eq.  and Eq. , respectively. We perform the computation by expanding *E*ʹ/*P* in terms of the perturbative scalar quantities, such as the scalar charge *q* and self-interaction *c*3. We then only keep the LO terms, which we calculate up to 0PN order. We thus have Ψ(*q*1, *q*2, …) = ΨGR + ΨST ,  where ΨGR denotes the pure GR waveform. ΨST encodes the corrections due to the scalar field at leading order in the scalar charge, induced charge and self-interaction. Note that this means that the following results can be added to any existing TaylorF2 template. We further decompose ΨST to be able to easily identify the components that arise due to the modification to the binding energy, which is always present during the inspiral, and the additional scalar radiation, which is only present for sufficient large frequencies. Thus, we decompose $$\begin{split} \Psi\_\text{ST} = \Psi\_E &+ \theta \left( \frac{v^3}{\bar{m}\_s} - 1 \right) \Psi\_{l=1} \\ &+ \theta \left( \frac{v^3}{\bar{m}\_s} - \frac{1}{2} \right) \Psi\_{l=2} \;, \end{split}$$ where *θ* is the Heaviside step function. Ψ*E* denotes the impact of the scalar field onto the gravitational energy loss via the modified Kepler relation. The contribution due to direct emission of scalar dipole and quadrupole radiation is captured by Ψ*l* = 1, 2, respectively. We are now ready to present our final results. The lowest contribution due to radiation arises at  − 1PN order, corresponding to scalar dipole radiation. Ψ*E*, on the other hand, only contributes from 0PN order. These two corrections read [eq:phaseLO] $$\begin{aligned} \begin{split} \Psi\_E^{\text{0PN}} &= \frac{5 \bar{Q}}{6 \nu \bar{m}\_s^{5/2}} \left[\Gamma \left(\frac{5}{2},\frac{\bar{m}\_s}{v^2}\right)+\Gamma \left(\frac{7}{2},\frac{\bar{m}\_s}{v^2}\right)+2 \, \Gamma \left(\frac{9}{2},\frac{\bar{m}\_s}{v^2}\right)\right] \\ &\hspace{5cm} - v^3 \frac{5 \bar{Q}}{6 \nu \bar{m}\_s^4} \bigg[\Gamma \bigg(4,\frac{\bar{m}\_s}{v^2}\bigg)+\Gamma \bigg(5,\frac{\bar{m}\_s}{v^2}\bigg) + 2\,\Gamma \bigg(6,\frac{\bar{m}\_s}{v^2}\bigg)\bigg] \;, \end{split}\\[0.6cm] \Psi\_{l=1}^{-1\text{PN}} &= \delta q^2 \frac{5}{896 \nu ^3} \left[-\frac{1}{v^7} \,{}\_3F\_2\left(-\frac{3}{2},\frac{5}{3},\frac{7}{6} ; \frac{8}{3},\frac{13}{6};\frac{\bar{m}\_s^2}{v^6}\right) - \frac{90 \cdot 2^{1/3} 3^{1/2} v^3 \Gamma \left(\frac{2}{3}\right)^3 }{247 \pi \bar{m}\_s^{10/3}} + \frac{63 \, \Gamma \left(\frac{1}{3}\right)^3}{256 \cdot 2^{1/3} \pi \bar{m}\_s^{7/3}} \right] \;. \end{aligned}$$ Here we utilized the *upper* incomplete gamma function Γ(*a*, *x*) = ∫*x*∞*t**a* − 1*e*− *t**d**t* and 3*F*2 denotes a generalized hypergeometric function. Note that the above expressions only contain modifications due to the scalar charges *q*1 and *q*2. Corrections due to the induced charges *p*1 and *p*2, as well as the self-interaction parameter *c*3, arise at NLO (0PN) with respect to the scalar radiation. Explicitly, these terms are given by [eq:phaseNLO] $$\begin{aligned} \begin{split} \Psi\_{l=1}^{\text{0PN},q} &= -\frac{5 g\_2 \delta q \bar{m}\_s^2}{9856 \nu^3 v^{11}} \,{}\_3F\_2\left(-\frac{3}{2},\frac{7}{3},\frac{11}{6} ; \frac{10}{3},\frac{17}{6} ; \frac{\bar{m}\_s^2}{v^6}\right) + \frac{45 \Gamma\left(\frac{2}{3}\right)^3 \delta q \left(-1680 g\_1- 966 g\_2+5 (1064 \nu +659) \delta q\right)}{1605632\ 2^{2/3} \pi \nu ^3 \bar{m}\_s^{5/3}} \\ &{}\hspace{0.4cm} + \frac{\delta q \left(1680 g\_1 + 1008 g\_2 - 5 (1064 \nu +659) \delta q\right)}{86016 \nu ^3 v^5} \,{}\_3F\_2 \left(-\frac{3}{2},\frac{4}{3},\frac{5}{6} ; \frac{7}{3},\frac{11}{6} ; \frac{\bar{m}\_s^2}{v^6}\right) \\ &{}\hspace{0.4cm}+ \frac{5 \sqrt{3} v^3 \Gamma \left(\frac{1}{3}\right)^3 \delta q \left(7728 g\_1+ 4368 g\_2-23 (1064 \nu + 659) \delta q\right)}{30829568 \sqrt[3]{2} \pi \nu ^3 \bar{m}\_s^{8/3}} \;, \end{split}\\[0.6cm] \begin{split} \Psi\_{l=1}^{\text{0PN},p} &= \Xi\_p \frac{5}{16 \nu^3} \left[- \frac{1}{v^5} \,{}\_3F\_2\left(-\frac{3}{2},\frac{4}{3},\frac{5}{6} ; \frac{7}{3},\frac{11}{6};\frac{\bar{m}\_s^2}{v^6}\right) - \frac{6 \cdot 2^{2/3} 3^{1/2} v^3 \Gamma \left(\frac{1}{3}\right)^3 }{187 \pi \bar{m}\_s^{8/3}} + \frac{135 \, \Gamma \left(\frac{2}{3}\right)^3}{56 \cdot 2^{2/3} \pi \bar{m}\_s^{5/3}} \right] \;, \end{split}\\[0.6cm] \begin{split} \label{c3::waveform} \Psi\_{l=1}^{\text{0PN},c\_3} &= \Xi\_c \frac{5 G M\_1 M\_2}{7168 \pi {\Bar{m}\_s}\nu^3} \left[\frac{1}{v^7} \,{}\_3F\_2\left(-\frac{3}{2},\frac{5}{3},\frac{7}{6} ; \frac{8}{3},\frac{13}{6};\frac{\bar{m}\_s^2}{v^6}\right) + \frac{90 \cdot 2^{1/3} 3^{1/2} v^3 \Gamma \left(\frac{2}{3}\right)^3 }{247 \pi \bar{m}\_s^{10/3}} - \frac{63 \, \Gamma \left(\frac{1}{3}\right)^3}{256 \cdot 2^{1/3} \pi \bar{m}\_s^{7/3}} \right] \\ &{}\hspace{0.4cm}+ \Xi\_c \frac{25 G M\_1 M\_2}{55296 \pi \nu^3} \left[ - \frac{1}{v^9} \,{}\_3F\_2\left(-\frac{3}{2}, 2,\frac{3}{2} ; 3,\frac{5}{2};\frac{\bar{m}\_s^2}{v^6}\right) - \frac{24 v^3}{35 \bar{m}\_s^4} + \frac{3 \pi}{8 \bar{m}\_s^3} \right] \;, \end{split}\\[0.6cm] \begin{split} \Psi\_{l=2}^\text{0PN} &= \Xi\_q^2 \frac{1}{32 \nu ^3} \left[-\frac{1}{v^5} \,{}\_3F\_2\left(-\frac{5}{2},\frac{4}{3},\frac{5}{6} ; \frac{7}{3},\frac{11}{6};\frac{\bar{m}\_s^2}{4 v^6}\right) - \frac{720 \cdot 2^{1/3} 3^{1/2} v^3 \Gamma \left(\frac{1}{3}\right)^3 }{4301 \pi \bar{m}\_s^{8/3}} + \frac{405 \, \Gamma \left(\frac{2}{3}\right)^3}{112 \pi \bar{m}\_s^{5/3}} \right] \;. \end{split} \end{aligned}$$ Above, we denote contributions arising from a certain scalar field parameter with a respective superscript. For example, Ψ*l* = 10PN, *c*3 denotes the contribution of *c*3 to the 0PN order phase shift in dipole radiation. For the definition of all rescaled parameters, we refer to Table [tab:dimenionlessparams]. Note that all hypergeometric functions appear in the form of 3*F*2(*a*, *b*, *c* ; *b* + 1, *c* + 1 ; *z*), with *a*, *b*, *c* ∈ Q. This can formally be expanded as $$\begin{aligned} \begin{split} &{}\_3F\_2\left(a, b, c\,; b + 1, c + 1\,; z\right) = \\ &\frac{c}{c - b} \,{}\_2F\_1\left(a, b\,; b + 1 \,; z\right) - \frac{b}{c - b} \,{}\_2F\_1\left(a, c\,; c + 1\,; z\right). \end{split} \raisetag{1.3cm}\end{aligned}$$ Since not all math libraries support higher-order hypergeometric functions, this relation might be particularly useful for the implementation of our results. Further, keep in mind that in the above expressions, we use ℏ = *c* = 1. If one wishes to restore these factors in the waveform, one may use ${\Bar{m}\_s}= G M m\_s / (\hbar c)$ and replace the combination *G**M*1*M*2 appearing in Eq.  with *G**M*1*M*2/(ℏ*c*). Additionally, it is necessary to substitute all occurrences of the velocity *v* with *v*/*c*. The corrections due to the additional scalar degree of freedom can now readily be combined with pure GR TaylorF2 templates at arbitrary precision. Further, since we kept the scalar parameters generic up to this point, our results can easily be adapted to a large class of scalar-tensor theories. All there is left to do is adjust the potential parameters and worldline couplings to a desired model. To facilitate using our results, we provide a link to a repository in the supplemental material which contains a Python implementation of Eqs.  & . Conclusions [sec:conclusions] ============================= In this work, we have for the first time derived the NLO gravitational waveform of inspiraling binary systems in GR augmented by a massive, self-interacting scalar field. To this end, we have employed the scale hierarchy during the inspiral to treat the binary within an EFT framework. We have calculated the conservative and dissipative dynamics of the system at 1PN order to ultimately determine the modifications of the gravitational waveform up to 0PN order. This corresponds to NLO in the scalar radiation. This result can now readily be combined with pure GR templates – in the TaylorF2 approximation – at arbitrary precision. Throughout our work, we have chosen a largely model-independent ansatz by choosing a generic, polynomial scalar potential and leaving the scalar-wordline couplings undetermined. This allows to match our result to a multitude of new physics scenarios, including extensions of GR as well as of the SM of particle physics, provided an appropriate coupling to the binary worldlines. Ultimately, the goal is to set constraints on new physics using data from GW observatories. Therefore, we will as a next step use data from GW170817 to test various scalar-tensor theories. Once new data is available in the future, we will then be able to easily repeat the procedure, further narrowing down the window of new physics. It will also be interesting to extend the calculations to elliptic orbits, thus allowing to utilize, e.g., pulsar data. From the EFT point of view, we will extend our computations to take into account scalar effects which enter at a higher order in the velocity expansion, further pushing the precision of our predictions. In addition, our computations may be further generalized by, e.g., including vector degrees of freedom. These challenging tasks are left for future work. The authors thank N. Becker, E. Genoud-Prachex, S. Ghosh, A. Kuntz, S. Pal, Y. Schaper, S. Tsujikawa, and J. Zhang for useful discussions. RFD and DS acknowledge support by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through the CRC-TR 211 ‘Strong-interaction matter under extreme conditions’ – project number 315477589 – TRR 211. LS acknowledges the support by the State of Hesse within the Research Cluster ELEMENTS (Project ID 500/10.006). Redundant Operators =================== In general, there are many additional worldline couplings of the scalar field beyond the ones shown in Eq. . In fact, Appendix A of lists these couplings explicitly up to terms containing second derivatives in the scalar field *ϕ* and the metric *g**μ**ν*. Here, it is important to note that at this point the authors did not employ any power counting rules, i.e., these operators do not necessarily contribute at 1PN order. They arrive at the general wordline action $$\begin{aligned} \begin{split} S\_{pp} = - \int d \tau \bigg[ m(\phi) &+ I(\phi) R + J(\phi) u^\mu u^\nu R\_{\mu\nu} + K(\phi) \square \phi + L(\phi) u^\mu u^\nu \nabla\_\mu \partial\_\nu \phi \\ &+ M(\phi) u^\mu u^\nu \partial\_\mu \phi \partial\_\nu \phi + N(\phi) g^{\mu\nu} \partial\_\mu \phi \partial\_\nu \phi \bigg] \;. \end{split}\end{aligned}$$ After a series of redefinitions and neglecting operators which cannot contribute at orders lower than *v*4, one obtains an equally valid action $$S\_{pp} = - \int d \tau \bigg[ m(\phi) + \Tilde{N}(\phi) g^{\mu\nu} \partial\_\mu \phi \partial\_\nu \phi \bigg] \;,$$ where $\Tilde{N} = N + \alpha(\phi) L + 2 I$ and *α*(*ϕ*) = ∂ln*m*(*ϕ*)/∂*ϕ*. It is thus directly shown that all other operators are either redundant or do not contribute at 1PN order, hence do not have to be considered. Also, the second term can be identified as the LO finite-size effect due to an external tidal field. In order to further investigate at which PN order these operators contribute, it is useful to expand *m*(*ϕ*) and $\Tilde{N}(\phi)$ in a Taylor series: $$m(\phi) = m(0) + q \frac{\phi}{{M\_\mathrm{Pl}}} + p \frac{\phi^2}{{M\_\mathrm{Pl}}^2} + \mathcal{O}(\phi^3)\;, \quad \text{and} \quad \Tilde{N}(\phi) = \Tilde{N}(0) + \Tilde{N}'(0) \frac{\phi}{{M\_\mathrm{Pl}}} + \mathcal{O}(\phi^2)\;.$$ Here, primes denote the derivative with respect to the scalar field, and we have defined *q* = *M*Pl *m*ʹ(0) and *p* = *M*Pl2 *m*ʺ(0). It is straightforward to identify *m*(0) with the usual GR mass, while the higher-order derivatives define the couplings of the scalar field to the worldline. Employing the power counting rules, it is now easy to see that at 1PN order the expansion of *m*(*ϕ*) can be truncated beyond the quadratic term. Likewise, it can be shown that the expansion of *N*(*ϕ*) contributes only from 3PN order for compact objects. The impact of the second term on the waveform has been investigated in Ref.  for massless scalar fields. Similarly, by expanding the self-interaction potential of the scalar field in a power series, it can be shown that at 1PN order only the cubic self-interaction is relevant. In fact, a self-interaction term of the form *ϕ**n* becomes relevant at (*n* − 2)PN order in the expansion. Validity of the EFT =================== Let us comment on the assumptions we make in order for the EFT prescription to be valid. Firstly, throughout the calculations we assumed that at LO the inspiral is described by Newtonian gravity. This assumption is, e.g., extensively used when establishing the power counting rules. To justify this perturbative treatment of the binary system, the Newtonian potential has to remain the LO term even when adding the scalar field, i.e., the scalar exchange has to be suppressed. This directly gives a constraint on the scalar charges $$\frac{|q\_n|}{M\_n} < 1 \;, \quad \text{and likewise} \quad \frac{|p\_n|}{M\_n} < 1 \;.$$ Further, for the cubic self-coupling *c*3 we note that the three-scalar vertex scales  ∼ *c*3*M*Pl2*r*2*L*− 1/2*v*2. Higher-order terms in *c*3 scale with higher powers of *c*3*M*Pl2*r*2. We thus also demand that ∣*c*3∣*M*Pl2*r*2 < 1, which implies that $$|c\_3| \frac{M^2}{{M\_\mathrm{Pl}}^2} < \gamma^2 < 1 \;.$$ With *M* ∼ *M*⊙ this implies ∣*c*3∣ < 10− 80, which was also noted earlier. However, it is important to keep in mind that the above constraint only applies for the case when the scalar field behaves effectively massless (i.e., *m**s**r* ≪ 1), meaning that it is to be understood as a conservative upper bound on ∣*c*3∣. If *m**s**r* ≪ 1 is never achieved during the inspiral phase, the constraint could be relaxed, since additional exponential suppression in *m**s**r* helps to subdue higher-order *c*3 terms. In the final expressions for the Lagrangian, binding energy, power loss, and waveform, we always have a combination of the form *c*3*M*2/*M*Pl2. Therefore, if *c*3 fulfills the above-mentioned bound, its effect is still relevant for the dynamics. While Ref.  considered this constraint to be resulting in the need for unnatural fine-tuning on *c*3, it is indeed naturally fulfilled in several modified gravity models, such as *R*2 gravity (see e.g., ), where the Einstein-Hilbert action is supplemented with a term quadratic in the Ricci scalar. After performing a transformation to the Einstein frame one can isolate an additional scalar degree of freedom with a potential of the form $$V(\phi) \sim m\_s^2 \phi^2 - \frac{m\_s^2}{{M\_\mathrm{Pl}}} \phi^3 + \mathcal{O}(\phi^4) \;,$$ where we have dropped O(1) pre-factors for brevity. Hence ∣*c*3∣ ∼ *m**s*2/*M*Pl2, which is much less than 10− 80 for *m**s* < 10− 13eV, thus fulfilling the above constraint. If *m**s* > 10− 13eV, then *m**s**r* < 1 is never achieved during the inspiral and hence all contributions of *c*3 are highly suppressed, meaning that in either case one expects a perturbative treatment to be valid. When applying the here obtained results for, e.g., the power loss, we also implicitly assume that the effects from higher-order worldline couplings and higher-order self-interactions in the scalar field are suppressed. It is reasonable to presume, on grounds of the established power counting rules, that those terms would only contribute to higher-order PN orders. However, the numerical values of the parameter accompanying those higher-order operators might be large enough to still cause relevant effects even at lower orders in the PN expansion. For example, this is the case with the tidal deformability in pure GR, which formally contributes at 5PN order for non-spinning objects, but might be relevant even for lower-order waveforms, if its numerical value is sufficiently large. Lastly, we assume that the worldline coupling coefficients are constant during the inspiral. However, it is expected that at higher order in the expansion it is necessary to consider time-dependent couplings. Complete Kepler Relations and Binding Energy at 1PN =================================================== In the following, we give the full expressions for the modified Kepler relations and the binding energy at 1PN order. Following the procedure outlined in Sec. [sec:scalartensoreft], we solve Eq.  for *ω*, now taking into account all scalar effects. This yields $$\begin{aligned} \begin{split} x^3 = &\frac{\gamma ^2 \bar{c}\_3}{2 \pi \bar{m}\_s} \left[e^{\frac{\bar{m}\_s}{\gamma }} \left(\bar{m}\_s-\gamma \right) \text{Ei}\left(-\frac{3 \bar{m}\_s}{\gamma }\right)+ e^{-\frac{\bar{m}\_s}{\gamma }} \left(\bar{m}\_s+\gamma \right) \text{Ei}\left(-\frac{\bar{m}\_s}{\gamma }\right)\right] -128 \gamma ^3 \xi \_p e^{-\frac{2 \bar{m}\_s}{\gamma }} \left(\bar{m}\_s+\gamma \right)\\ &+4 \gamma ^3 \xi \_q \bar{m}\_s e^{-\frac{2 \bar{m}\_s}{\gamma }} + 8 \gamma ^2 \bar{Q} e^{-\frac{\bar{m}\_s}{\gamma }} \bigg[-\bar{m}\_s e^{\frac{2 \bar{m}\_s}{\gamma }} \left(\bar{m}\_s-\gamma \right) \text{Ei}\left(-\frac{2 \bar{m}\_s}{\gamma }\right)+\gamma (3 \nu -4) \bar{m}\_s \\ &- \bar{m}\_s \left(\bar{m}\_s+\gamma \right) \log \left(\frac{2 \bar{m}\_s}{\gamma }\right)+\bar{m}\_s+2 \gamma ^2 (\nu -2)+\gamma \bigg] +\underbrace{\gamma ^3 (\gamma (\nu -3)+1)}\_{\displaystyle \equiv x\_\mathrm{GR}} \;. \end{split} \label{eq:xPN\_ST}\end{aligned}$$ We invert the above expression to arrive at the inverse Kepler law $$\begin{aligned} \label{eq:gamma\_ST\_1PN} \begin{split} \gamma = &\frac{\bar{c}\_3}{6 \pi \bar{m}\_s} \left[-e^{\frac{\bar{m}\_s}{x}} \left(\bar{m}\_s-x\right) \text{Ei}\left(-\frac{3 \bar{m}\_s}{x}\right)- e^{-\frac{\bar{m}\_s}{x}} \left(\bar{m}\_s+x\right) \text{Ei}\left(-\frac{\bar{m}\_s}{x}\right)\right] + \frac{128}{3} x \xi \_p e^{-\frac{2 \bar{m}\_s}{x}} \left(\bar{m}\_s+x\right) \\ &-\frac{4}{3} x \xi \_q \bar{m}\_s e^{-\frac{2 \bar{m}\_s}{x}} + \frac{8}{9} \bar{Q} e^{-\frac{\bar{m}\_s}{x}} \bigg[3 \bar{m}\_s e^{\frac{2 \bar{m}\_s}{x}} \left(\bar{m}\_s-x\right) \text{Ei}\left(-\frac{2 \bar{m}\_s}{x}\right)+\nu \left(-4 x \bar{m}\_s+\bar{m}\_s^2-x^2\right) \\ &-3 \left(x \bar{m}\_s+\bar{m}\_s^2+\bar{m}\_s+x^2+x\right)+3 \bar{m}\_s \left(\bar{m}\_s+x\right) \log \left(\frac{2 \bar{m}\_s}{x}\right)\bigg] \underbrace{- \frac{1}{3} (\nu - 3) x^2+x}\_{\displaystyle \equiv \gamma\_\mathrm{GR}} \;. \end{split}\end{aligned}$$ To obtain the binding energy in the presence of the scalar field, we compute the Legendre tranformation of the Lagrangian given in Eq. . As a function of *γ*, the binding energy reads $$\begin{aligned} \begin{split} \frac{E}{\mu} = &\frac{\bar{c}\_3}{4 \pi \bar{m}\_s} \left[e^{-\frac{\bar{m}\_s}{\gamma }} \left(\bar{m}\_s-\gamma \right) \text{Ei}\left(-\frac{\bar{m}\_s}{\gamma }\right)+e^{\frac{\bar{m}\_s}{\gamma }} \left(\bar{m}\_s+\gamma \right) \text{Ei}\left(-\frac{3 \bar{m}\_s}{\gamma }\right)\right] \\ &-64 \gamma \xi \_p \bar{m}\_s e^{-\frac{2 \bar{m}\_s}{\gamma }} + \xi \_q \left[-8 \bar{m}\_s^2 \text{Ei}\left(-\frac{2 \bar{m}\_s}{\gamma }\right)-2 \gamma \bar{m}\_s e^{-\frac{2 \bar{m}\_s}{\gamma }}\right] \\ &+ \bar{Q} e^{-\frac{\bar{m}\_s}{\gamma }} \bigg[-4 \bar{m}\_s e^{\frac{2 \bar{m}\_s}{\gamma }} \left(\bar{m}\_s+\gamma \right) \text{Ei}\left(-\frac{2 \bar{m}\_s}{\gamma }\right)+2 (- \nu \gamma +\gamma +2) \bar{m}\_s \\ &+ 4 \bar{m}\_s \left(\gamma -\bar{m}\_s\right) \log \left(\frac{2 \bar{m}\_s}{\gamma }\right)-2 \gamma (\gamma (\nu -3)+2)\bigg] - \frac{1}{8} \gamma (\gamma (\nu -7)+4) \;. \end{split}\end{aligned}$$ Plugging in Eq. , we obtain an expression depending on *x*: $$\begin{aligned} \label{eq:binding\_energy\_full\_1PN} \begin{split} \frac{E}{\mu} = &\frac{\bar{c}\_3}{6 \pi \bar{m}\_s} e^{-\frac{\bar{m}\_s}{x}} \left[\left(2 \bar{m}\_s-x\right) \text{Ei}\left(-\frac{\bar{m}\_s}{x}\right)+e^{\frac{2 \bar{m}\_s}{x}} \left(2 \bar{m}\_s+x\right) \text{Ei}\left(-\frac{3 \bar{m}\_s}{x}\right)\right] \\ &-\frac{64}{3} x \xi \_p e^{-\frac{2 \bar{m}\_s}{x}} \left(4 \bar{m}\_s+x\right) +\frac{4}{3} \xi \_q \bar{m}\_s \left[x \left(-e^{-\frac{2 \bar{m}\_s}{x}}\right)-6 \bar{m}\_s \text{Ei}\left(-\frac{2 \bar{m}\_s}{x}\right)\right] \\ &+\frac{4}{9} \bar{Q} e^{-\frac{\bar{m}\_s}{x}} \bigg[-6 \bar{m}\_s e^{\frac{2 \bar{m}\_s}{x}} \left(2 \bar{m}\_s+x\right) \text{Ei}\left(-\frac{2 \bar{m}\_s}{x}\right) -4 (\nu -3) \bar{m}\_s^2 \\ &+4 ((\nu -3) x+3) \bar{m}\_s+6 \bar{m}\_s \left(x-2 \bar{m}\_s\right) \log \left(\frac{2 \bar{m}\_s}{x}\right) +x ((\nu -3) x-6)\bigg] \\ &+\frac{\nu x^2}{24}+\frac{3 x^2}{8}-\frac{x}{2} \;. \end{split}\end{aligned}$$ The definitions of all redefined parameters are collected in Tab. [tab:dimenionlessparams]. Renormalization =============== In this section, we discuss some technical aspects regarding the renormalization procedure. In the computation of the binding energy (cf. Sec. [sec:scalartensoreft]) we have neglected pure self-force diagrams, shown in Fig. [renormalization::1PN]. Here we show that these diagrams solely contribute to the renormalization of the worldline couplings, thus can be absorbed by a redefinition of the gravitational masses *M**i* and scalar charges *q**i*. Since we employ dimensional regularization, diagrams such as the right one in Fig. [renormalization::massrenorm] simply vanish, since they contain scale-less and power-law divergent integrals, i.e., $$\begin{aligned} \includegraphics[scale=0.85,valign=c]{Renorm\_figure1.pdf} \; \propto \int \frac{d^3 {{\boldsymbol{k}}}}{(2\pi)^3} \frac{1}{{{\boldsymbol{k}}}^2} \xrightarrow{\rm dim.~reg.} 0 \;.\end{aligned}$$ However, couplings of the worldline to itself via a scalar propagator do not vanish identically in this scheme since they contain the mass of the scalar particle as a physical scale. As such, the left diagram in Fig. [renormalization::massrenorm] evaluates to $$\begin{split} \includegraphics[scale=0.85,valign=c]{Renorm\_figure0.pdf} &= \frac{1}{2} \left( \frac{-i q}{{M\_\mathrm{Pl}}} \right)^2 \int dt \int \frac{d^3{{\boldsymbol{k}}}}{(2 \pi)^3} \frac{-i}{{{\boldsymbol{k}}}^2 + m\_s^2} \\ & = \frac{i}{2} \left( \frac{-i q}{{M\_\mathrm{Pl}}} \right)^2 \int dt \frac{m\_s}{4 \pi} = - i \underbrace{\frac{m\_s\, q^2}{8 \pi {M\_\mathrm{Pl}}^2}}\_{\equiv \delta M\_\text{sf}} \int dt \;. \end{split}$$ Note the factor of 1/2 in front, which arises from the expansion of the action and keeping two identical vertices. Also, we here used $$\int \frac{d^3 {{\boldsymbol{k}}}}{(2\pi)^3} \frac{1}{{{\boldsymbol{k}}}^2 + m\_s^2} \xrightarrow{\rm dim.~reg.} - \frac{m\_s}{4 \pi} \;.$$ Thus, the above diagram effectively shifts the mass of the neutron star by *δ**M*sf. We denote the mass *M* appearing in Spp as the *bare* mass *M**b*. Likewise, in the following we denote all other bare parameter, i.e., those parameters that directly appear in the expansion of the point-particle action, with a subscript ‘b’. Then, at lowest order, the bare mass is shifted to the *physical* mass with $$\label{renormalization::physicalMass} M\_{\rm phys} = M\_{\rm b} + \delta M\_\text{sf} = M\_{\rm b} + \frac{m\_s\, q\_\text{b}^2}{8 \pi {M\_\mathrm{Pl}}^2} \;.$$ By calculating all diagrams in which a single graviton is coupled to the second NS we should get the same result for the renormalized mass of the first neutron star as the one given in Eq. . Contributing diagrams are the ones given on the left in Fig. [renormalization::singlegravitonvertex] and in Fig. [fig:bindingenergy:scalar]v), respectively. Indeed, the first one evaluates to  ∼ 2*δ**M*sf and the second contributes with  ∼  − *δ**M*sf, thus yielding the same total mass shift as in Eq. . In addition to the mass, the scalar charge also obtains corrections due to self-force diagrams shown in Fig. [renormalization::singlescalarvertex]. Evaluating this integral leads to $$\delta q\_{\rm sf} = m\_s \frac{q\_\text{b}\,p\_\text{b}}{4 \pi {M\_\mathrm{Pl}}^2} \;.$$ However, additional contributions are coming from diagrams in Fig. [fig:bindingenergy:scalar]vi) and [fig:bindingenergy:scalar]vii), which are not pure self-force diagrams (in the sense that they do not only contain diverging integrals). In total, we obtain $$\begin{aligned} \label{renormalization::phyiscalCharge} \begin{split} q\_{\rm phys} &= q\_\text{b} + \delta q\_{\rm sf} + \frac{ q\_\text{b}^2 c\_3 \log 3}{16 \pi m\_s} - \frac{m\_s M\_\text{b} q\_\text{b} \,\gamma}{32 \pi {M\_\mathrm{Pl}}^2} \\ &= q\_\text{b} + \frac{q\_\text{b}}{4\pi} \bigg[ \frac{p\_\text{b}\,m\_s}{{M\_\mathrm{Pl}}^2} + \frac{q\_\text{b} \, c\_3 \log 3}{4 \, m\_s} - \frac{m\_s M\_\text{b}\,\gamma}{8 {M\_\mathrm{Pl}}^2}\bigg] \;, \end{split}\end{aligned}$$ with *γ* denoting the Euler-Mascheroni constant and log denoting the natural logarithm. In summary, by expressing the *bare* charge and mass in terms of the *physical* charge and mass via Eqs.  and, we automatically cancel all pure self-force diagrams appearing in Fig. [renormalization::1PN] and additionally also all redundant terms that appear when evaluating the diagrams in Fig. [fig:bindingenergydiagrams]. Radiated Energy in Multipole Moments of the Scalar Field ======================================================== [app:radiationmomentsscalar] Let us outline how to compute the power loss via scalar radiation. The general procedure to perform the multipole expansion of a massless scalar field without self-interactions is described in Ref. . Here, a general formula is presented to obtain the emitted power in arbitrary moments due to the LO linear emission diagram in Fig. [app:radiationmomentsscalar]. Since we here consider a massive, self-interacting scalar field, we appropriately modify the derivations of to obtain adapted expressions for the power loss. For the convenience of the reader, we also repeat some of the initial steps of the calculation presented in, but refer to this publication for the full amount of details. Considering a linear source term, the relevant part of the scalar field action can be written as $$S = \int dx^4 \left( \frac{1}{2} \partial\_\mu \phi \partial^\mu \phi + \frac{1}{{M\_\mathrm{Pl}}} J \phi \right) \;.$$ We assume the source to be localized and Taylor expand the field coupled to *J* at the origin, which we choose to be inside the source: $$\phi(t, {\boldsymbol{x}}) = \sum\_{n=0}^\infty \frac{{\boldsymbol{x}}^N}{n!} (\partial\_N \phi) (t, 0) \;.$$ Inserting this expansion into the linear source term and decomposing the source moments into symmetric and trace free (STF) tensors, we further obtain $$\label{eq::source\_term} S\_\text{src} = \frac{1}{{M\_\mathrm{Pl}}} \int dx^4 J \phi = \frac{1}{{M\_\mathrm{Pl}}} \int dt \sum\_{l = 0}^\infty \frac{1}{l!} \sum\_{p = 0}^\infty \frac{(2l + 1)!!}{(2p)!! (2l + 2p + 1)!!} \int d^3 {\boldsymbol{x}}J r^{2p} {\boldsymbol{x}}\_\text{STF}^L (\nabla^2)^p \partial\_L \phi \;.$$ At this point Ref.  utilizes that outside the source ▫*ϕ* ≡ ∂*t*2*ϕ* − ∇2*ϕ* = 0, and thus ∇2*ϕ* = ∂*t*2*ϕ*, in order to turn the contracted spatial derivatives acting on the last term into time derivatives. However, as was also recently noted in Ref. , in the case of non-vanishing potential the equation of motion instead reads ▫*ϕ* ≡ ∂*t*2*ϕ* − ∇2*ϕ* =  − *V*ʹ(*ϕ*), hence ∇2*ϕ* = ∂*t*2*ϕ* + *V*ʹ(*ϕ*). Since we only consider linear emission diagrams, we can truncate all non-linear terms in *V*ʹ, i.e., only the mass term is relevant. Therefore, Eq.  becomes $$S\_\text{src} = \frac{1}{{M\_\mathrm{Pl}}} \int dt \sum\_{l = 0}^\infty \frac{1}{l!} \sum\_{p = 0}^\infty \frac{(2l + 1)!!}{(2p)!! (2l + 2p + 1)!!} \int d^3 {\boldsymbol{x}}J r^{2p} {\boldsymbol{x}}\_\text{STF}^L (\partial\_t^2 + m\_s^2)^p \partial\_L \phi \;.$$ Partial integration with respect to time now also allows having time derivatives acting on the source term *J* instead of the field *ϕ*. We can thus write $$S\_\text{src} = \frac{1}{{M\_\mathrm{Pl}}} \int dt \sum\_{l = 0}^\infty \frac{1}{l!} \mathcal{I}^L \partial\_L \phi \;,$$ with the multipole moments I*L* given by $$\mathcal{I}^L = \sum\_{p = 0}^\infty \frac{(2l + 1)!!}{(2p)!! (2l + 2p + 1)!!} \int d^3 {\boldsymbol{x}}\,r^{2p} {\boldsymbol{x}}\_\text{STF}^L \,(\partial\_t^2 + m\_s^2)^{p} J \;.$$ Up to 0PN order, the relevant terms are given by $$\mathcal{I}^i\_{p=0} = \int d^3 {\boldsymbol{x}}\, {\boldsymbol{x}}^i J \;, \quad \mathcal{I}^i\_{p=1} = \frac{1}{10} \int d^3 {\boldsymbol{x}}\, {\boldsymbol{x}}^i (\partial\_t^2 + m\_s^2) r^2 J \;, \quad \text{and} \quad \mathcal{I}^{ij}\_{p=0} = \int d^3 {\boldsymbol{x}}\left[{\boldsymbol{x}}^i {\boldsymbol{x}}^j \right]\_\text{STF} J \;.$$ We now turn to the calculation of the emitted power. The diagram shown in Fig. [app:radiationmomentsscalar] evaluates to $$\begin{split} \includegraphics[scale=0.8,valign=c]{SF\_figure0.pdf} \hspace{0.5cm} &= \frac{1}{{M\_\mathrm{Pl}}^2} \frac{1}{l!\,\Tilde{l}!} \int dt\_1 \int dt\_2\, \mathcal{I}^L(t\_1) \mathcal{I}^{\Tilde{L}}(t\_2) \int \frac{d^4k}{(2\pi)^4} \frac{e^{i k\_0 (t\_1 - t\_2)}}{k^2 - m^2 + i \epsilon} {{\boldsymbol{k}}}^L {{\boldsymbol{k}}}^{\Tilde{L}} \\ &= \frac{1}{{M\_\mathrm{Pl}}^2} \frac{1}{l!\,\Tilde{l}!} \int \frac{d^4k}{(2\pi)^4} \frac{1}{k^2 - m^2 + i \epsilon} \mathcal{I}^L(k\_0) \mathcal{I}^{\Tilde{L}}(k\_0)^\* {{\boldsymbol{k}}}^L {{\boldsymbol{k}}}^{\Tilde{L}} \\ &= \frac{1}{{M\_\mathrm{Pl}}^2} \frac{1}{l!\,\Tilde{l}!} \int \frac{k\_0}{2\pi} \int d \Omega \int \frac{d k}{(2\pi)^3} k^{2 + l + \Tilde{l}} {\boldsymbol{n}}^L {\boldsymbol{n}}^{\Tilde{L}} \frac{1}{k\_0^2 - k^2 - m^2 + i \epsilon} \mathcal{I}^L(k\_0) \mathcal{I}^{\Tilde{L}}(k\_0)^\* \;, \end{split}$$ where ${\boldsymbol{n}}= {{\boldsymbol{k}}}/ |{{\boldsymbol{k}}}|$. In the second line, we have expressed the moments I in terms of their Fourier modes, and subsequently switched to spherical coordinates. The surface integral results in $$\int d \Omega\,{\boldsymbol{n}}^L {\boldsymbol{n}}^{\Tilde{L}} = \delta^{l \Tilde{l}} \frac{4 \pi}{(2l + 1)!!} l! \;,$$ such that $$\begin{split} \includegraphics[scale=0.8,valign=c]{SF\_figure0.pdf} \hspace{0.5cm} &= \frac{1}{{M\_\mathrm{Pl}}^2} \frac{1}{l! (2l + 1)!!} \frac{1}{4 \pi^3} \int dk\_0 \int dk\, k^{2l + 2} \left| \mathcal{I}^L(k\_0) \right|^2 \frac{1}{k\_0^2 - k^2 - m^2 + i \epsilon} \\ &= \frac{1}{{M\_\mathrm{Pl}}^2} \frac{1}{l! (2l + 1)!!} \frac{i}{4 \pi^2} \int dk\_0 (k\_0 - m^2)^{l + 1/2} \left| \mathcal{I}^L(k\_0) \right|^2 \;. \end{split}$$ In the second line, we have solved the *d**k* integral by closing the contour around the pole $k = + \sqrt{k\_0^2 - m^2}$ in the upper plane. The emitted energy is thus given by $$P = \frac{1}{l! (2l + 1)!!} \frac{1}{4 \pi^2 {M\_\mathrm{Pl}}^2 T} \int d \omega \left(1 - \frac{m\_s^2}{\omega^2}\right)^{l + 1/2} \omega^{2l + 2} \left| \mathcal{I}^L(\omega) \right|^2,$$ where *T* = 2*π**δ*(0) and *k*0 = *ω*. The same result was presented in Ref. , but without elaborating on the calculation and the authors did not mention that the multipole moments also have to be modified in the massive case. Computation of the Feynman Diagrams =================================== In this section, we explicitly compute the Feynman diagrams which are shown in the main text. First, let us give a general recipe on how to compute the diagrams to a certain PN order with the power counting scheme from Sec. [sec:EFT]. A similar discussion is found in Ref. . 1. Expand the full action of the theory in powers of the velocity ${\boldsymbol{v}}$, as shown in Sec. [sec:scalartensoreft]. This renders all vertices between the NS worldlines, the scalar field, as well as the graviton field. 2. Decompose the fields into radiation and potential modes. From the power counting rules derived in Sec. [sec:EFT], identify the PN order at which a vertex contributes. 3. From the action expansion, read off the Feynman rules for the given interactions. These are multiplied by a factor *i* from the expansion of the path integral. An overview of the relevant interactions at 1PN order, together with their corresponding velocity scalings and Feynman rules, is given in Tabs. [feynmanRules::bindingEnergy] and [feynmanRules::radiation]. 4. Draw all diagrams that contribute to the desired PN order. Here, consider only diagrams which remain connected when the wordlines are removed. Note that potential modes can only appear as internal lines, while radiation modes solely enter as external legs. For any internal line, multiply by the respective propagator. In addition, neglect quantum loop diagrams as they are suppressed by ℏ/*L* , with *L* the orbital angular momentum of the system. 5. Collect the combinatorial factor for each diagram. This includes both pre-factors from the expansion of the path integral, as well as symmetry factors from the Wick contractions. Feynman Rules ------------- [H] | Sc | C2cm | C11cm | **Diagram & **Scaling & **Expression &  ∼ *L* & $\displaystyle i \int dt \frac{M}{2} {\boldsymbol{v}}^2$ &  ∼ *L**v*2 & $ \displaystyle i \int dt \frac{M}{8} {\boldsymbol{v}}^4$ &  ∼ 1 & $ \displaystyle -(2\pi)^3 \frac{i}{|{{\boldsymbol{k}}}|^2} \delta^{(3)}({{\boldsymbol{k}}}+ {\boldsymbol{q}}) \delta(t\_1 - t\_2) P\_{\mu \nu; \alpha \beta}$****** &  ∼ 1 & $ \displaystyle -(2\pi)^3 \frac{i}{|{{\boldsymbol{k}}}|^2 + m^2} \delta^{(3)}({{\boldsymbol{k}}}+ {\boldsymbol{q}}) \delta(t\_1 - t\_2) $ &  ∼ *v*2 & $ \displaystyle -(2\pi)^3 \frac{i}{|{{\boldsymbol{k}}}|^4} \delta^{(3)}({{\boldsymbol{k}}}+ {\boldsymbol{q}}) \frac{\partial^2}{\partial t\_1 \partial t\_2}\delta(t\_1 - t\_2) P\_{\mu \nu; \alpha \beta} $ &  ∼ *v*2 & $ \displaystyle -(2\pi)^3 \frac{i}{(|{{\boldsymbol{k}}}|^2 + m^2)^2} \delta^{(3)}({{\boldsymbol{k}}}+ {\boldsymbol{q}})\frac{\partial^2}{\partial t\_1 \partial t\_2}\delta(t\_1 - t\_2) $ & $\displaystyle \sim L^\frac{1}{2}$ & $ \displaystyle -i \frac{M}{2 {M\_\mathrm{Pl}}} \int dt \int\_{{\boldsymbol{k}}}\exp\left(i {{\boldsymbol{k}}}{\boldsymbol{x}}\right) \eta^{0\mu} \eta^{0\nu}$ & $\displaystyle \sim L^\frac{1}{2} v$ & $ \displaystyle -i \frac{M}{{M\_\mathrm{Pl}}} \int dt \int\_{{\boldsymbol{k}}}\exp\left(i {{\boldsymbol{k}}}{\boldsymbol{x}}\right) {\boldsymbol{v}}^i \eta^{0(\mu} \eta^{\nu)}\_{\;i}$ & $\displaystyle \sim L^\frac{1}{2} v^2$ & $ \displaystyle -i \frac{M}{2{M\_\mathrm{Pl}}} \int dt \int\_{{\boldsymbol{k}}}\exp\left(i {{\boldsymbol{k}}}{\boldsymbol{x}}\right) \left(\eta\_i^\mu \eta\_j^\mu {\boldsymbol{v}}^i {\boldsymbol{v}}^j + \frac{1}{2} \eta^{0\mu} \eta^{0\nu} {\boldsymbol{v}}^2 \right)$ &  ∼ *v*2 & $ \displaystyle i {\frac{M}{8{M\_\mathrm{Pl}}^2}} \int dt \int\_{{{\boldsymbol{k}}},{\boldsymbol{q}}⁄} \exp\left(i ({{\boldsymbol{k}}}+ {\boldsymbol{q}}) {\boldsymbol{x}}\right) \eta^{0\mu} \eta^{0\nu} \eta^{0\lambda} \eta^{0\sigma}$ & $\displaystyle \sim \frac{q}{M} L^\frac{1}{2}$ & $\displaystyle -i \frac{q}{{M\_\mathrm{Pl}}} \int dt \int\_{{{\boldsymbol{k}}}} \exp\left(i {{\boldsymbol{k}}}{\boldsymbol{x}}\right)$ & $\displaystyle \sim \frac{q}{M} L^\frac{1}{2} v^2$ & $\displaystyle i \frac{q}{2{M\_\mathrm{Pl}}} \int dt \int\_{{{\boldsymbol{k}}}} \exp\left(i {{\boldsymbol{k}}}{\boldsymbol{x}}\right) {\boldsymbol{v}}^2$ & $\displaystyle \sim \frac{p}{M} v^2$ & $\displaystyle -i \frac{p}{{M\_\mathrm{Pl}}^2} \int dt \int\_{{{\boldsymbol{k}}}, {\boldsymbol{q}}} \exp\left(i ({{\boldsymbol{k}}}+ {\boldsymbol{q}}) {\boldsymbol{x}}\right)$ | Sc | C2cm | C11cm | & $\displaystyle \sim \frac{q}{M} v^2$ & $\displaystyle -i \frac{q}{2 {M\_\mathrm{Pl}}^2} \int dt \int\_{{{\boldsymbol{k}}}, {\boldsymbol{q}}} \exp\left(i ({{\boldsymbol{k}}}+ {\boldsymbol{q}}) {\boldsymbol{x}}\right)$ & $\displaystyle \sim L^{-\frac{1}{2}} v^2$ & $\displaystyle -\frac{i}{4 {M\_\mathrm{Pl}}} \delta(t\_1 - t\_2) \delta(t\_1 - t\_3) (2\pi)^3 \delta^{(3)}\left(\sum\_r {{\boldsymbol{k}}}\_r\right) \prod\_r \frac{i}{{{\boldsymbol{k}}}\_r^2} \sum\_r {{\boldsymbol{k}}}\_r^2 $ & $\displaystyle \sim \frac{m\_s^2}{{M\_\mathrm{Pl}}} \frac{r^2 v^2}{L^\frac{1}{2}}$ & $\displaystyle -i \frac{m\_s^2}{4 {M\_\mathrm{Pl}}} \delta(t\_1-t\_2) \delta(t\_1-t\_3) (2\pi)^3 \delta^{(3)}\left(\sum\_r {{\boldsymbol{k}}}\_r\right) \left( \prod\_{r=1,2} \frac{i}{{{\boldsymbol{k}}}\_r^2 + m^2} \right) \frac{i}{{{\boldsymbol{k}}}\_3^2} \eta^{\mu\nu} P\_{00;\mu\nu}$ & $\displaystyle \sim c\_3 \frac{{M\_\mathrm{Pl}}r^2}{L^\frac{1}{2}} v^2 $ & $\displaystyle -i \frac{c\_3}{3!} \delta(t\_1-t\_2) \delta(t\_1-t\_3) (2\pi)^3 \delta^{(3)}\left(\sum\_r {{\boldsymbol{k}}}\_r\right) \prod\_r \frac{i}{{{\boldsymbol{k}}}\_r^2 + m^2}$ [feynmanRules::bindingEnergy] [H] | Sc | C2cm | C11cm | **Diagram & **Scaling & **Expression & $\displaystyle \sim L^\frac{1}{2} v^\frac{1}{2}$ & $\displaystyle -i \frac{M}{2 {M\_\mathrm{Pl}}} \int dt \; \Bar{h}\_{00}$****** & $\displaystyle \sim L^\frac{1}{2} v^\frac{3}{2}$ & $\displaystyle -i \frac{M}{{M\_\mathrm{Pl}}} \int dt \; \Bar{h}\_{0i} {\boldsymbol{v}}^i$ & $\displaystyle \sim L^\frac{1}{2} v^\frac{5}{2}$ & $\displaystyle -i \frac{M}{2 {M\_\mathrm{Pl}}} \int dt \left(\bar{h}\_{ij} {\boldsymbol{v}}^i {\boldsymbol{v}}^j + \frac{\bar{h}\_{00}}{2} {\boldsymbol{v}}^2 \right)$ & $ \displaystyle \sim \frac{q}{M} L^\frac{1}{2} v^\frac{1}{2}$ & $\displaystyle -i \frac{q}{{M\_\mathrm{Pl}}} \int dt \; {\Bar{\phi}}$ & $\displaystyle \sim \frac{q}{M} L^\frac{1}{2} v^\frac{5}{2}$ & $\displaystyle -i \frac{q}{2 {M\_\mathrm{Pl}}} \int dt \; {\boldsymbol{v}}^2 {\Bar{\phi}}$ & $\displaystyle \sim v^\frac{5}{2} $ & $\displaystyle -i \frac{M}{4 {M\_\mathrm{Pl}}^2} \int dt \int\_{{\boldsymbol{k}}}\exp\left(i {{\boldsymbol{k}}}{\boldsymbol{x}}\right) \eta^{0\mu} \eta^{0\nu} \Bar{h}\_{00}$ & $\displaystyle \sim \frac{q}{M} v^\frac{5}{2}$ & $\displaystyle -i \frac{q}{2{M\_\mathrm{Pl}}^2} \int dt \int\_{{\boldsymbol{k}}}\exp\left(i {{\boldsymbol{k}}}{\boldsymbol{x}}\right) \bar{h}\_{00}$ & $\displaystyle \sim \frac{p}{M} v^\frac{5}{2}$ & $\displaystyle -i \frac{2 p}{{M\_\mathrm{Pl}}} \int dt \int\_{{\boldsymbol{k}}}\exp\left(i {{\boldsymbol{k}}}{\boldsymbol{x}}\right) {\Bar{\phi}}$ [H] | Sc | C2cm | C11cm | & $\displaystyle \sim \frac{q}{M} v^\frac{5}{2}$ & $\displaystyle -i \frac{ q}{2 {M\_\mathrm{Pl}}^2} \int dt \int\_{{\boldsymbol{k}}}\exp\left(i {{\boldsymbol{k}}}{\boldsymbol{x}}\right) \eta^{0\mu} \eta^{0\nu} {\Bar{\phi}}$ & $\displaystyle \sim c\_3 \frac{M r v^\frac{1}{2}}{L^\frac{1}{2}} $ & $\displaystyle -i {M\_\mathrm{Pl}}\frac{3 c\_3}{3!} \delta(t-t\_1) \delta(t-t\_2) (2\pi)^3 \frac{i}{{{\boldsymbol{k}}}\_1^2 + m^2} \frac{i}{{{\boldsymbol{k}}}\_2^2 + m^2} {\Bar{\phi}}$ & $\displaystyle \sim {\Bar{m}\_s}^2 \frac{1}{L^\frac{1}{2} v^\frac{3}{2}} $ & $\displaystyle -i \frac{2 m\_s^2}{4 {M\_\mathrm{Pl}}} \delta(t-t\_1) \delta(t-t\_2) (2\pi)^3 \frac{i}{{{\boldsymbol{k}}}\_1^2 + m^2} \frac{i}{{{\boldsymbol{k}}}\_2^2} \eta^{\mu\nu} P\_{00;\mu\nu} {\Bar{\phi}}$ [feynmanRules::radiation] Conservative Dynamics --------------------- We start with the calculation of the conservative dynamics. This comprises all diagrams at 1PN which only contain hard scalar and graviton modes (cf. Fig. [fig:bindingenergydiagrams]). $$\begin{aligned} &\begin{aligned} \includegraphics[scale=0.8,valign=c]{BE\_figure0.pdf} &= i \frac{M\_1 M\_2}{4 {M\_\mathrm{Pl}}^2} \int dt\_1 \int dt\_2 \delta(t\_1-t\_2) \int \frac{d^3{{\boldsymbol{k}}}}{(2\pi)^3} \frac{\exp(-i {{\boldsymbol{k}}}({\boldsymbol{x}}\_1 - {\boldsymbol{x}}\_2))}{{|\boldsymbol{k}|}^2} P\_{00;00} = i \frac{M\_1 M\_2}{32 \pi {M\_\mathrm{Pl}}^2} \int dt \frac{1}{r}\\ &= i \int dt \frac{G M\_1 M\_2}{r} \;. \end{aligned}\\[0.6cm] &\begin{aligned} \includegraphics[scale=0.8,valign=c]{BE\_figure1.pdf} &= i\left(\frac{M\_1}{2 {M\_\mathrm{Pl}}}\right) \left(\frac{M\_2}{2 {M\_\mathrm{Pl}}}\right) \int dt\_1 \int dt\_2 \delta(t\_1 - t\_2) \int\frac{d^3{{\boldsymbol{k}}}}{(2\pi)^3} \frac{1}{{|\boldsymbol{k}|}^4} \\ &\phantom{=-} \times \frac{\partial^2}{\partial t\_1 \partial t\_2} \exp(-i {{\boldsymbol{k}}}({\boldsymbol{x}}\_1(t\_1) - {\boldsymbol{x}}\_2(t\_2))) P\_{00;00} \\ &= i \frac{M\_1 M\_2}{8 {M\_\mathrm{Pl}}^2} {\boldsymbol{v}}\_{1,i} {\boldsymbol{v}}\_{2,j} \int dt \int\frac{d^3{{\boldsymbol{k}}}}{(2\pi)^3} \frac{{{\boldsymbol{k}}}\_i {{\boldsymbol{k}}}\_j}{{|\boldsymbol{k}|}^4} \exp(-i {{\boldsymbol{k}}}({\boldsymbol{x}}\_1(t\_1) - {\boldsymbol{x}}\_2(t\_2))) \\ &= i \frac{M\_1 M\_2}{64 \pi {M\_\mathrm{Pl}}^2} \int dt \frac{1}{r} \left({\boldsymbol{v}}\_1 \cdot {\boldsymbol{v}}\_2 - \frac{({\boldsymbol{v}}\_1 \cdot {\boldsymbol{r}}) ({\boldsymbol{v}}\_2 \cdot {\boldsymbol{r}})}{r^2}\right) =i \int dt \frac{G M\_1 M\_2}{2r} \left({\boldsymbol{v}}\_1 \cdot {\boldsymbol{v}}\_2 - \frac{({\boldsymbol{v}}\_1 \cdot {\boldsymbol{r}}) ({\boldsymbol{v}}\_2 \cdot {\boldsymbol{r}})}{r^2}\right) \;.\\ \end{aligned}\\[0.6cm] &\begin{aligned} \includegraphics[scale=0.8,valign=c]{BE\_figure2.pdf} &= i \frac{M\_1 M\_2}{{M\_\mathrm{Pl}}^2} \int dt\_1\int dt\_2 \delta(t\_1 - t\_2) \int\frac{d^3{{\boldsymbol{k}}}}{(2\pi)^3} \frac{\exp(-i {{\boldsymbol{k}}}({\boldsymbol{x}}\_1 - {\boldsymbol{x}}\_2))}{{|\boldsymbol{k}|}^2} \\ &\phantom{=-} \times \biggr[\underbrace{P\_{0i;0j}}\_{ = -\delta\_{ij}/2} {\boldsymbol{v}}\_{1i} {\boldsymbol{v}}\_{2j} + \frac{1}{4} \biggr(\frac{\overbrace{P\_{00;00}}^{ = 1/2}}{2} {\boldsymbol{v}}\_1^2 + \underbrace{P\_{ij;00}}\_{ = \delta\_{ij}/2} {\boldsymbol{v}}\_{1,i} {\boldsymbol{v}}\_{1,j}\biggr) \biggr] \\ &= -i \frac{M\_1 M\_2}{2{M\_\mathrm{Pl}}^2} \int dt \int\frac{d^3{{\boldsymbol{k}}}}{(2\pi)^3} \frac{\exp(-i {{\boldsymbol{k}}}({\boldsymbol{x}}\_1 - {\boldsymbol{x}}\_2))}{{|\boldsymbol{k}|}^2} \left({\boldsymbol{v}}\_{1} \cdot {\boldsymbol{v}}\_{2} - \frac{3}{8} {\boldsymbol{v}}\_1^2\right) \\ &= -4i \int dt \frac{G M\_1 M\_2}{r} \left[{\boldsymbol{v}}\_1 \cdot {\boldsymbol{v}}\_2 - \frac{3}{8} \left({\boldsymbol{v}}\_1^2 + {\boldsymbol{v}}\_2^2 \right) \right]\;.\\ \end{aligned}\\[0.6cm] &\begin{aligned} \includegraphics[scale=0.8,valign=c]{BE\_figure3.pdf} &= i {\sum\_{1 \leftrightarrow 2}}\left(\frac{M\_1}{8 {M\_\mathrm{Pl}}^2}\right) \left(\frac{M\_2}{2 {M\_\mathrm{Pl}}}\right) \left(\frac{M\_2}{2 {M\_\mathrm{Pl}}}\right) \int dt \int dt\_1 \int dt\_2 \delta(t-t\_1) \delta(t-t\_2) \\ &\phantom{=-}\times \int\frac{d^3{{\boldsymbol{k}}}}{(2\pi)^3} \frac{\exp(-i {{\boldsymbol{k}}}({\boldsymbol{x}}\_1 - {\boldsymbol{x}}\_2))}{{|\boldsymbol{k}|}^2} \int\frac{d^3{\boldsymbol{q}}}{(2\pi)^3} \frac{\exp(-i {\boldsymbol{q}}({\boldsymbol{x}}\_1 - {\boldsymbol{x}}\_2))}{{|\boldsymbol{q}|}^2} P\_{00;00}^2 \\ &= \frac{i}{2} {\sum\_{1 \leftrightarrow 2}}\frac{M\_1 M\_2^2}{(32 \pi)^2 {M\_\mathrm{Pl}}^4} \int dt \frac{1}{r}\\ &= \frac{i}{2} \int dt \frac{G^2 M\_1 M\_2 (M\_1 + M\_2)}{r^2} \;. \end{aligned}\\[0.6cm] &\begin{aligned} \includegraphics[scale=0.8,valign=c]{BE\_figure4.pdf} &= -\frac{i}{2} {\sum\_{1 \leftrightarrow 2}}\left(\frac{M\_1}{2{M\_\mathrm{Pl}}}\right) \left(\frac{M\_2}{2{M\_\mathrm{Pl}}}\right) \left(\frac{M\_2}{2{M\_\mathrm{Pl}}}\right) \left(\frac{1}{4{M\_\mathrm{Pl}}}\right) \int dt \int dt\_2 \int dt\_3 \delta(t-t\_2) \delta(t-t\_3) \\ &\phantom{=-} \times \int \frac{d^3{{\boldsymbol{k}}}\_1}{(2\pi)^3} \int \frac{d^3{{\boldsymbol{k}}}\_2}{(2\pi)^3} \int \frac{d^3{{\boldsymbol{k}}}\_3}{(2\pi)^3} (2\pi)^3 \exp\left[i{{\boldsymbol{k}}}\_1 x\_1 + i({{\boldsymbol{k}}}\_2 + {{\boldsymbol{k}}}\_3) x\_2\right] \\ &\phantom{=-} \times \delta^{(3)}\left({{\boldsymbol{k}}}\_1 + {{\boldsymbol{k}}}\_2 + {{\boldsymbol{k}}}\_3\right) \frac{{{\boldsymbol{k}}}\_1^2 + {{\boldsymbol{k}}}\_2^2 + {{\boldsymbol{k}}}\_3^2}{|{{\boldsymbol{k}}}\_1|^2 |{{\boldsymbol{k}}}\_2|^2 |{{\boldsymbol{k}}}\_3|^2} \\ &= -\frac{i}{2} {\sum\_{1 \leftrightarrow 2}}\frac{M\_1 M\_2^2}{32 {M\_\mathrm{Pl}}^4} \int dt \int \frac{d^3{{\boldsymbol{k}}}}{(2\pi)^3} \int \frac{d^3{\boldsymbol{q}}}{(2\pi)^3} \frac{|{{\boldsymbol{k}}}+ {\boldsymbol{q}}|^2 + |{{\boldsymbol{k}}}|^2 + |{\boldsymbol{q}}|^2}{|{{\boldsymbol{k}}}+ {\boldsymbol{q}}|^2 |{{\boldsymbol{k}}}|^2 |{\boldsymbol{q}}|^2} \exp\left(-i ({{\boldsymbol{k}}}+ {\boldsymbol{q}}) {\boldsymbol{r}}\right) \\ &= -\frac{i}{2} {\sum\_{1 \leftrightarrow 2}}\frac{M\_1 M\_2^2}{32 {M\_\mathrm{Pl}}^4} \Biggr[ \int dt \int \frac{d^3{{\boldsymbol{k}}}}{(2\pi)^3} \int \frac{d^3{\boldsymbol{q}}}{(2\pi)^3} \frac{1}{|{{\boldsymbol{k}}}|^2 |{\boldsymbol{q}}|^2} \exp\left(-i ({{\boldsymbol{k}}}+ {\boldsymbol{q}}) {\boldsymbol{r}}\right) \\ &\phantom{=-} +\int dt \int \frac{d^3{{\boldsymbol{k}}}}{(2\pi)^3} \int \frac{d^3{\boldsymbol{q}}}{(2\pi)^3} \frac{|{{\boldsymbol{k}}}|^2 + |{\boldsymbol{q}}|^2}{|{{\boldsymbol{k}}}+{\boldsymbol{q}}|^2 |{{\boldsymbol{k}}}|^2 |{\boldsymbol{q}}|^2} \exp\left(-i ({{\boldsymbol{k}}}+ {\boldsymbol{q}}) {\boldsymbol{r}}\right) \Biggr] \\ &= -\frac{i}{2} {\sum\_{1 \leftrightarrow 2}}\frac{M\_1 M\_2^2}{32 {M\_\mathrm{Pl}}^4} \frac{1}{16 \pi^2} \int dt \frac{1}{r^2} = -i \int dt \frac{G^2 M\_1 M\_2 (M\_1+M\_2)}{r^2} \;,\\[0.3cm] &\text{where we drop the second term in the second to last line as it vanishes in dimensional} \\[-0.3cm] &\text{regularization.} \end{aligned}\\[0.6cm] &\begin{aligned} \includegraphics[scale=0.8,valign=c]{BE\_figure5.pdf} &= \frac{1}{2} {\sum\_{1 \leftrightarrow 2}}\left(\frac{- i q\_1}{{M\_\mathrm{Pl}}}\right)\left(\frac{- i q\_2}{{M\_\mathrm{Pl}}} \right)\int dt \int \frac{d^3{{\boldsymbol{k}}}}{(2\pi)^3} \frac{-i}{{{\boldsymbol{k}}}^2 + m\_s^2} \exp\left(i {{\boldsymbol{k}}}{\boldsymbol{r}}\right) \left(1 - {\boldsymbol{v}}\_1^2\right) \\ &= \frac{1}{2} {\sum\_{1 \leftrightarrow 2}}i \frac{q\_1 q\_2}{4 \pi {M\_\mathrm{Pl}}^2} \int dt \frac{\exp\left(-m\_s r\right)}{r} \left(1 - {\boldsymbol{v}}\_1^2\right) = i 8 G q\_1 q\_2 \int dt \frac{\exp\left(-m\_s r\right)}{r} \left[1 - \frac{1}{2}\left({\boldsymbol{v}}\_1^2 + {\boldsymbol{v}}\_2^2 \right)\right] \;, \end{aligned}\\[0.6cm] &\begin{aligned} \includegraphics[scale=0.8,valign=c]{BE\_figure6.pdf} &= \left(\frac{-i q\_1}{{M\_\mathrm{Pl}}}\right) \left(\frac{-i q\_2}{{M\_\mathrm{Pl}}}\right) \int dt\_1 \int dt\_2 \delta(t\_1 - t\_2) \int\frac{d^3{{\boldsymbol{k}}}}{(2\pi)^3} \frac{-i}{\left({|\boldsymbol{k}|}^2 + m\_s^2\right)^2} \\ &\phantom{=-} \times \frac{\partial^2}{\partial t\_1 \partial t\_2} \exp(-i {{\boldsymbol{k}}}({\boldsymbol{x}}\_1(t\_1) - {\boldsymbol{x}}\_2(t\_2))) \\ &= -i \frac{q\_1 q\_2}{{M\_\mathrm{Pl}}^2} {\boldsymbol{v}}\_{1,i} {\boldsymbol{v}}\_{2,j} \int dt \int\frac{d^3{{\boldsymbol{k}}}}{(2\pi)^3} \frac{{{\boldsymbol{k}}}\_i {{\boldsymbol{k}}}\_j}{\left({|\boldsymbol{k}|}^2 + m\_s^2\right)^2} \exp(-i {{\boldsymbol{k}}}({\boldsymbol{x}}\_1(t) - {\boldsymbol{x}}\_2(t))) \\ &= -i \frac{q\_1 q\_2}{8 \pi {M\_\mathrm{Pl}}^2} \int dt \frac{\exp\left(-m\_s r\right)}{r} \left(\frac{({\boldsymbol{v}}\_1 \cdot {\boldsymbol{r}}) ({\boldsymbol{v}}\_2 \cdot {\boldsymbol{r}})}{r^2} (1 + m\_sr)-{\boldsymbol{v}}\_1 \cdot {\boldsymbol{v}}\_2\right) \\ &= i 4 G q\_1 q\_2 \int dt \frac{\exp\left(-m\_s r\right)}{r} \left({\boldsymbol{v}}\_1 \cdot {\boldsymbol{v}}\_2 - \frac{({\boldsymbol{v}}\_1 \cdot {\boldsymbol{r}}) ({\boldsymbol{v}}\_2 \cdot {\boldsymbol{r}})}{r^2} (1 + m\_sr)\right)\;.\\ \end{aligned}\\[0.6cm] &\begin{aligned} \includegraphics[scale=0.8,valign=c]{BE\_figure7.pdf} &= {\sum\_{1 \leftrightarrow 2}}\left( \frac{-i p\_1}{{M\_\mathrm{Pl}}^2} \right) \left( \frac{-i q\_2}{{M\_\mathrm{Pl}}}\right)\left( \frac{-i q\_2}{{M\_\mathrm{Pl}}}\right) \int dt \int dt\_1 \delta(t - t\_1) \int dt\_2 \delta(t - t\_2) \\ &\phantom{=-} \times \int\frac{d^3{{\boldsymbol{k}}}}{(2\pi)^3} \frac{-i \exp(-i {{\boldsymbol{k}}}({\boldsymbol{x}}\_1 - {\boldsymbol{x}}\_2))}{{|\boldsymbol{k}|}^2+ m\_s^2} \int\frac{d^3{\boldsymbol{q}}}{(2\pi)^3} \frac{-i \exp(-i {\boldsymbol{q}}({\boldsymbol{x}}\_1 - {\boldsymbol{x}}\_2))}{{|\boldsymbol{q}|}^2+ m\_s^2} \\ &= -i {\sum\_{1 \leftrightarrow 2}}\frac{p\_1 q\_2^2}{16 \pi^2 {M\_\mathrm{Pl}}^4} \int dt \frac{\exp(-2 m\_s r)}{r^2} = -i 64 G^2 \int dt (p\_1 q\_2^2 + p\_2 q\_1^2) \frac{\exp(-2 m\_s r)}{r^2} \;. \end{aligned}\\[0.6cm] &\begin{aligned} \includegraphics[scale=0.8,valign=c]{BE\_figure8.pdf} &= {\sum\_{1 \leftrightarrow 2}}\left(\frac{-i q\_1}{2 {M\_\mathrm{Pl}}^2}\right) \left( \frac{-i q\_2}{{M\_\mathrm{Pl}}} \right) \left( \frac{-i M\_2}{2 {M\_\mathrm{Pl}}} \right) \int dt \int dt\_1 \delta(t-t\_1) \int dt\_2 \delta(t-t\_2) \\ &\phantom{=-} \times \int \frac{d^3{{\boldsymbol{k}}}}{(2\pi)^3} \frac{-i}{{{\boldsymbol{k}}}^2} \exp\left(i {{\boldsymbol{k}}}{\boldsymbol{r}}\right) P\_{00;00} \int \frac{d^3{\boldsymbol{q}}}{(2\pi)^3} \frac{-i}{{\boldsymbol{q}}^2 + m\_s^2} \exp\left(i {\boldsymbol{q}}{\boldsymbol{r}}\right) \\ &={\sum\_{1 \leftrightarrow 2}}-i \frac{q\_1 q\_2 M\_2}{8 {M\_\mathrm{Pl}}^4} \int dt \left(\frac{1}{4 \pi r}\right)^2 \exp(-m\_s r) = -i 8 G^2 q\_1 q\_2 (M\_1+M\_2) \int dt \frac{\exp(-m\_s r)}{r^2} \;, \end{aligned}\\[0.6cm] &\begin{aligned} \includegraphics[scale=0.8,valign=c]{BE\_figure9.pdf} &= {\sum\_{1 \leftrightarrow 2}}\underbrace{(-i)^3 \left(\frac{- i q\_2}{{M\_\mathrm{Pl}}}\right)^2 \left( \frac{- i M\_1}{2 {M\_\mathrm{Pl}}} \right) \left( \frac{-i m\_s^2}{4 {M\_\mathrm{Pl}}} \right) \eta^{\mu \nu} P\_{00;\mu\nu}}\_{\equiv A} \int dt \int \frac{d^3{{\boldsymbol{k}}}\_1}{(2\pi)^3} \int \frac{d^3{{\boldsymbol{k}}}\_2}{(2\pi)^3} \int \frac{d^3{{\boldsymbol{k}}}\_3}{(2\pi)^3} \\ &\phantom{=-} \times \frac{e^{i {{\boldsymbol{k}}}\_1 {\boldsymbol{r}}\_1}}{{{\boldsymbol{k}}}\_1^2 + m\_s^2} \frac{e^{i {{\boldsymbol{k}}}\_2 {\boldsymbol{r}}\_1}}{{{\boldsymbol{k}}}\_2^2 + m\_s^2} \frac{e^{-i {{\boldsymbol{k}}}\_3 {\boldsymbol{r}}\_2}}{{{\boldsymbol{k}}}\_3^2} (2\pi)^3 \delta^{(3)}({{\boldsymbol{k}}}\_1 + {{\boldsymbol{k}}}\_2 - {{\boldsymbol{k}}}\_3)\\ &= {\sum\_{1 \leftrightarrow 2}}A \int dt \int \frac{d^3{{\boldsymbol{k}}}\_1}{(2\pi)^3} \int \frac{d^3{{\boldsymbol{k}}}\_3}{(2\pi)^3} \frac{1}{{{\boldsymbol{k}}}\_1^2 + m\_s^2} \frac{1}{({{\boldsymbol{k}}}\_3 - {{\boldsymbol{k}}}\_1)^2 + m\_s^2} \frac{e^{i {{\boldsymbol{k}}}\_3 {\boldsymbol{r}}}}{{{\boldsymbol{k}}}\_3^2} \\[0.3cm] &\qquad \text{We employ the Feynman parametrisation to rewrite the integral over } {{\boldsymbol{k}}}\_1 \text{as} \\[0.3cm] &\qquad \int \frac{d^3 {{\boldsymbol{k}}}\_1}{(2\pi)^3} \frac{1}{{{\boldsymbol{k}}}\_1^2 + m\_s^2} \frac{1}{({{\boldsymbol{k}}}\_3 - {{\boldsymbol{k}}}\_1)^2 + m\_s^2} = \int\_0^1 da \int \frac{d^3 {{\boldsymbol{k}}}\_1}{(2\pi)^3} \frac{1}{\left[ {{\boldsymbol{k}}}\_1^2 + (a - a^2) {{\boldsymbol{k}}}\_3^2 + m\_s^2) \right]^2}\\ &\phantom{\qquad \int \frac{d^3 {{\boldsymbol{k}}}\_1}{(2\pi)^3} \frac{1}{{{\boldsymbol{k}}}\_1^2 + m\_s^2} \frac{1}{({{\boldsymbol{k}}}\_3 - {{\boldsymbol{k}}}\_1)^2 + m\_s^2} } =\int\_0^1 da \frac{1}{4 \pi \sqrt{(a-a^2) {{\boldsymbol{k}}}\_3^2 + m\_s^2}} \\ &\phantom{\qquad \int \frac{d^3 {{\boldsymbol{k}}}\_1}{(2\pi)^3} \frac{1}{{{\boldsymbol{k}}}\_1^2 + m\_s^2} \frac{1}{({{\boldsymbol{k}}}\_3 - {{\boldsymbol{k}}}\_1)^2 + m\_s^2} }= \frac{1}{2 \pi k\_3} \arctan \left( \frac{k\_3}{2m} \right) \;.\\ &= {\sum\_{1 \leftrightarrow 2}}A \int dt \int \frac{d^3{{\boldsymbol{k}}}\_3}{(2\pi)^3} \frac{1}{2 \pi k\_3} \arctan \left( \frac{k\_3}{2 m\_s} \right) \frac{e^{i {{\boldsymbol{k}}}\_3 {\boldsymbol{r}}}}{{{\boldsymbol{k}}}\_3^2} \\[0.3cm] &\qquad \text{This is solved by using spherical coordinates and partial integration.}\\[0.3cm] &= {\sum\_{1 \leftrightarrow 2}}(-i)^3 \left(\frac{- i q\_2}{{M\_\mathrm{Pl}}}\right)^2 \left( \frac{- i M\_1}{2 {M\_\mathrm{Pl}}}\right)\left( \frac{-i m\_s^2}{4 {M\_\mathrm{Pl}}}\right) \underbrace{\eta^{\mu \nu} P\_{00;\mu\nu}}\_{=-1} \frac{ 1}{r} \frac{1}{(2\pi)^3} \\ &\phantom{=-} \times \int dt \left( - \frac{r \pi}{2} \text{Ei}(-2m\_sr) + \frac{\pi}{4m\_s} - \frac{\pi}{4m\_s} e^{-2m\_s r} \right)\\ &= -i \frac{q\_2^2 M\_1 + q\_1^2 M\_2}{ 256 \pi^2 {M\_\mathrm{Pl}}^4} \int dt \left( - 2m\_s^2 \text{Ei}(-2m\_sr) + \frac{m\_s}{r} - \frac{m\_s}{r} e^{-2m\_sr} \right) \\ &= -i 4 G^2 \left(q\_2^2 M\_1 + q\_1^2 M\_2 \right) \int dt \left( - 2m\_s^2 \text{Ei}(-2m\_sr) + \frac{m\_s}{r} - \frac{m\_s}{r} e^{-2m\_sr} \right) \;. \end{aligned}\\[0.6cm] &\begin{aligned} \includegraphics[scale=0.8,valign=c]{BE\_figure10.pdf} &= {\sum\_{1 \leftrightarrow 2}}\underbrace{(-i)^3 \left( \frac{- i q\_2}{{M\_\mathrm{Pl}}} \right)\left(\frac{- i q\_1}{{M\_\mathrm{Pl}}}\right)\left( \frac{- i M\_2}{2 {M\_\mathrm{Pl}}}\right)\left( \frac{-i m\_s^2}{4 {M\_\mathrm{Pl}}}\right) \eta^{\mu \nu} P\_{00;\mu\nu}}\_{A\equiv} \int dt \int \frac{d^3{{\boldsymbol{k}}}\_1}{(2\pi)^3} \int \frac{d^3{{\boldsymbol{k}}}\_2}{(2\pi)^3} \int \frac{d^3{{\boldsymbol{k}}}\_3}{(2\pi)^3}\\ &\phantom{=-} \times \frac{e^{i {{\boldsymbol{k}}}\_1 {\boldsymbol{r}}\_1 }}{{{\boldsymbol{k}}}\_1^2 + m\_s^2} \frac{e^{i {{\boldsymbol{k}}}\_2 {\boldsymbol{r}}\_1}}{{{\boldsymbol{k}}}\_2^2} \frac{e^{-i {{\boldsymbol{k}}}\_3 {\boldsymbol{r}}\_2}}{{{\boldsymbol{k}}}\_3^2 + m\_s^2} (2\pi)^3 \delta^{(3)}({{\boldsymbol{k}}}\_1 + {{\boldsymbol{k}}}\_2 - {{\boldsymbol{k}}}\_3)\\ &= {\sum\_{1 \leftrightarrow 2}}A \int dt \int \frac{d^3{{\boldsymbol{k}}}\_1}{(2\pi)^3} \int \frac{d^3{{\boldsymbol{k}}}\_3}{(2\pi)^3} \frac{1}{{{\boldsymbol{k}}}\_1^2 + m\_s^2} \frac{1}{({{\boldsymbol{k}}}\_3 - {{\boldsymbol{k}}}\_1)^2} \frac{e^{i {{\boldsymbol{k}}}\_3 {\boldsymbol{r}}}}{{{\boldsymbol{k}}}\_3^2 + m\_s^2} \\ &= {\sum\_{1 \leftrightarrow 2}}A \int dt \int \frac{d^3{{\boldsymbol{k}}}\_3}{(2\pi)^3} \frac{1}{2 \pi k\_3} \arctan \left( \frac{k\_3}{m\_s} \right) \frac{e^{i {{\boldsymbol{k}}}\_3 {\boldsymbol{r}}}}{{{\boldsymbol{k}}}\_3^2 + m\_s^2}\\[0.3cm] &\text{At this point, we again switch to spherical coordinates and apply partial integration.} \\[0.3cm] &= {\sum\_{1 \leftrightarrow 2}}\frac{\pi}{r} \frac{A}{(2\pi)^3} \frac{1}{m\_s} \int dt \underbrace{\frac{2}{\pi} \int\_0^\infty dk' \arctan \left( k' \right) \frac{1}{k'^2 + 1} \sin (m\_s r k')}\_{\displaystyle\equiv \mathcal{I}(m\_s r)} \;, \qquad \mathrm{with} \quad k' = \frac{|{{\boldsymbol{k}}}\_3|}{m\_s} \\ &= i \frac{q\_1 q\_2 (M\_1+M\_2)}{64 \pi^2 {M\_\mathrm{Pl}}^4} \frac{m\_s}{r} \int dt \,\mathcal{I}(m\_s r) \; \\ &= -i 8 G^2 q\_1 q\_2 m\_2 m\_s \int dt \,\frac{1}{r} \bigg[ - {{\rm Ei}}(-2m\_sr) e^{m\_s r} + \log(2m\_s r) e^{-m\_s r} + \gamma e^{-m\_s r} \bigg] \;. \end{aligned}\\[0.6cm] &\begin{aligned} \includegraphics[scale=0.8,valign=c]{BE\_figure11.pdf} &= {\sum\_{1 \leftrightarrow 2}}\overbrace{(-i)^3 \frac{3!}{2} \left( \frac{- i q\_2}{{M\_\mathrm{Pl}}} \right)^2 \left(\frac{- i q\_1}{{M\_\mathrm{Pl}}} \right) \left(\frac{-i c\_3 {M\_\mathrm{Pl}}}{3!}\right)}^{\equiv A} \int dt \int d^3{\boldsymbol{r}}\_v \int \frac{d^3{{\boldsymbol{k}}}\_1}{(2\pi)^3} \int \frac{d^3{{\boldsymbol{k}}}\_2}{(2\pi)^3} \int \frac{d^3{{\boldsymbol{k}}}\_3}{(2\pi)^3} \\ &\phantom{=-} \times \frac{e^{i {{\boldsymbol{k}}}\_1 ({\boldsymbol{r}}\_1 - {\boldsymbol{r}}\_v)}}{{{\boldsymbol{k}}}\_1^2 + m\_s^2} \frac{e^{i {{\boldsymbol{k}}}\_2 ({\boldsymbol{r}}\_1 - {\boldsymbol{r}}\_v)}}{{{\boldsymbol{k}}}\_2^2 + m\_s^2} \frac{e^{i {{\boldsymbol{k}}}\_3 ({\boldsymbol{r}}\_v - {\boldsymbol{r}}\_2)}}{{{\boldsymbol{k}}}\_3^2 + m\_s^2} \\ &= {\sum\_{1 \leftrightarrow 2}}(2 \pi)^3 A \int dt \int \frac{d^3{{\boldsymbol{k}}}\_1}{(2\pi)^3} \int \frac{d^3{{\boldsymbol{k}}}\_2}{(2\pi)^3} \int \frac{d^3{{\boldsymbol{k}}}\_3}{(2\pi)^3} \frac{e^{i {{\boldsymbol{k}}}\_1 {\boldsymbol{r}}\_1}}{{{\boldsymbol{k}}}\_1^2 + m\_s^2} \frac{e^{i {{\boldsymbol{k}}}\_2 {\boldsymbol{r}}\_1}}{{{\boldsymbol{k}}}\_2^2 + m\_s^2} \frac{e^{-i {{\boldsymbol{k}}}\_3 {\boldsymbol{r}}\_2}}{{{\boldsymbol{k}}}\_3^2 + m\_s^2} \delta^{(3)}({{\boldsymbol{k}}}\_1+{{\boldsymbol{k}}}\_2-{{\boldsymbol{k}}}\_3) \\ & = {\sum\_{1 \leftrightarrow 2}}A \int dt \int \frac{d^3{{\boldsymbol{k}}}\_1}{(2\pi)^3} \int \frac{d^3{{\boldsymbol{k}}}\_3}{(2\pi)^3} \frac{1}{{{\boldsymbol{k}}}\_1^2 + m\_s^2} \frac{1}{({{\boldsymbol{k}}}\_3 - {{\boldsymbol{k}}}\_1)^2 + m\_s^2} \frac{e^{i {{\boldsymbol{k}}}\_3 {\boldsymbol{r}}}}{{{\boldsymbol{k}}}\_3^2 + m\_s^2}\\ &= {\sum\_{1 \leftrightarrow 2}}A \int dt \int \frac{d^3{{\boldsymbol{k}}}\_3}{(2\pi)^3} \frac{1}{2 \pi k\_3} \arctan \left( \frac{k\_3}{2m\_s} \right) \frac{e^{i {{\boldsymbol{k}}}\_3 {{\boldsymbol{k}}}}}{{{\boldsymbol{k}}}\_3^2 + m\_s^2} \\ &= {\sum\_{1 \leftrightarrow 2}}\frac{\pi}{r} \frac{A}{(2\pi)^3} \frac{1}{m\_s} \int dt \underbrace{\frac{2}{\pi} \int\_0^\infty dk' \arctan \left( k' \right) \frac{1}{4 k'^2 + 1} \sin (2 m\_s r k')}\_{\displaystyle \equiv \mathcal{K}(2 m\_s r)} \\ &= i \frac{q\_2 q\_1 (q\_1 + q\_2) c\_3}{16 \pi^2 {M\_\mathrm{Pl}}^2 m\_s} \int dt \; \frac{\mathcal{K}(2 m\_s r)}{r} \\ &= i \frac{q\_2 q\_1 (q\_1 + q\_2) c\_3}{64 \pi^2 {M\_\mathrm{Pl}}^2 m\_s} \int dt \frac{1}{r} \left[-{{\rm Ei}}(-3 m\_s r) e^{m\_s r} + {{\rm Ei}}(-m\_s r) e^{-m\_s r} + \log(3) e^{-m\_s r} \right] \\ &= i \frac{G}{2\pi} \frac{q\_1 q\_2 (q\_1 + q\_2) c\_3}{m\_s} \int dt \frac{1}{r} \left[-{{\rm Ei}}(-3 m\_s r) e^{m\_s r} + {{\rm Ei}}(-m\_s r) e^{-m\_s r} + \log(3) e^{-m\_s r} \right] \;. \end{aligned}\end{aligned}$$ Scalar Radiation ---------------- Now we compute the diagrams that contribute to the radiative dynamics of the system. For the gravitational waveform at NLO, it is sufficient to consider the scalar radiation. $$\begin{aligned} &\begin{aligned} \includegraphics[scale=0.8,valign=c,raise=0.38cm]{R\_figure5.pdf} = -i {\sum\_{1 \leftrightarrow 2}}\int dt \left( 1 - \frac{1}{2} {\boldsymbol{v}}\_1^2 \right) \frac{q\_1}{{M\_\mathrm{Pl}}} {\Bar{\phi}}+ \mathcal{O}({\boldsymbol{v}}\_1^4) \end{aligned}\\[0.6cm] &\begin{aligned} \includegraphics[scale=0.8,valign=c,raise=0.38cm]{R\_figure6.pdf} &= {\sum\_{1 \leftrightarrow 2}}-i \left( \frac{-i 2 p\_1}{{M\_\mathrm{Pl}}^2} \right) \left( \frac{-i q\_2}{{M\_\mathrm{Pl}}}\right) \int dt \int \frac{d^3 {{\boldsymbol{k}}}}{(2\pi)^3} \frac{e^{i {{\boldsymbol{k}}}(\mathbf{r\_1} - \mathbf{r\_2})}}{{{\boldsymbol{k}}}^2 + m\_s^2} {\Bar{\phi}}= i \frac{p\_1 q\_2 + p\_2 q\_1}{2 \pi {M\_\mathrm{Pl}}^2} \int dt \frac{e^{-m\_s r}}{r} \frac{{\Bar{\phi}}}{{M\_\mathrm{Pl}}} \end{aligned}\\[0.6cm] &\begin{aligned} \includegraphics[scale=0.8,valign=c,raise=0.38cm]{R\_figure9.pdf} &= {\sum\_{1 \leftrightarrow 2}}(-i) \left( \frac{-i q\_1}{2 {M\_\mathrm{Pl}}^2} \right) \left( \frac{-i M\_2}{2 {M\_\mathrm{Pl}}} \right) \int dt \int \frac{d^3 {{\boldsymbol{k}}}}{(2\pi)^3} \frac{e^{i {{\boldsymbol{k}}}(\mathbf{r\_1} - \mathbf{r\_2})}}{{{\boldsymbol{k}}}^2} {\Bar{\phi}}P\_{00;00} \\ & = i \frac{q\_1 M\_2 + q\_2 M\_1}{32 \pi {M\_\mathrm{Pl}}^2} \int dt \frac{1}{r} \frac{{\Bar{\phi}}}{{M\_\mathrm{Pl}}} \;. \end{aligned}\\[0.6cm] &\begin{aligned} \includegraphics[scale=0.8,valign=c]{R\_figure7.pdf} &= \underbrace{3! {M\_\mathrm{Pl}}(-i)^2 \left( \frac{-i q\_1}{{M\_\mathrm{Pl}}} \right) \left( \frac{-i q\_2}{{M\_\mathrm{Pl}}} \right) \left(\frac{-i c\_3}{3!} {M\_\mathrm{Pl}}\right)}\_{A \equiv} \int dt \int d^3{\boldsymbol{r}}\_v \int \frac{d^3{{\boldsymbol{k}}}\_1}{(2\pi)^3} \frac{e^{i {{\boldsymbol{k}}}\_1 ({\boldsymbol{r}}\_1 - {\boldsymbol{r}}\_v)}}{{{\boldsymbol{k}}}\_1^2 + m\_s^2} \\ &\phantom{-=} \times \int \frac{d^3{{\boldsymbol{k}}}\_2}{(2\pi)^3} \frac{e^{i {{\boldsymbol{k}}}\_2 ({\boldsymbol{r}}\_2 - {\boldsymbol{r}}\_v)}}{{{\boldsymbol{k}}}\_2^2 + m\_s^2} \frac{{\Bar{\phi}}}{{M\_\mathrm{Pl}}} \;.\\[0.3cm] &\text{Hence, the contribution of this diagram to the source term is} \\[0.3cm] \mathmakebox[\widthof{\includegraphics[scale=0.8,valign=c]{R\_figure7.pdf}}][r] J &= A \int \frac{d^3{{\boldsymbol{k}}}\_1}{(2\pi)^3} \frac{e^{i {{\boldsymbol{k}}}\_1 ({\boldsymbol{r}}\_1 - {\boldsymbol{r}}\_v)}}{{{\boldsymbol{k}}}\_1^2 + m\_s^2} \int \frac{d^3{{\boldsymbol{k}}}\_2}{(2\pi)^3} \frac{e^{i {{\boldsymbol{k}}}\_2 ({\boldsymbol{r}}\_2 - {\boldsymbol{r}}\_v)}}{{{\boldsymbol{k}}}\_2^2 + m\_s^2} \;.\\[0.3cm] &\text{With this we can use Eq.~\eqref{eq::multipole\_moments} to calculate the $l=1$ and $p=0$ moment.} \\[0.3cm] \mathmakebox[\widthof{\includegraphics[scale=0.8,valign=c]{R\_figure7.pdf}}][r]{\mathcal{I}^i} &= A \int d^3{\boldsymbol{r}}\_v \int\frac{d^3{{\boldsymbol{k}}}\_1}{(2\pi)^3} \int\frac{d^3{{\boldsymbol{k}}}\_2}{(2\pi)^3} \frac{e^{i {{\boldsymbol{k}}}\_1({\boldsymbol{r}}\_1 - {\boldsymbol{r}}\_v)}}{{{\boldsymbol{k}}}\_1^2 + m\_s^2} \frac{e^{{{\boldsymbol{k}}}\_2({\boldsymbol{r}}\_2 - {\boldsymbol{r}}\_v)}}{{{\boldsymbol{k}}}\_2^2 + m\_s^2} {\boldsymbol{r}}\_v^i \\ &= A \int d^3{\boldsymbol{r}}\_v \int\frac{d^3{{\boldsymbol{k}}}\_1}{(2\pi)^3} \int\frac{d^3{{\boldsymbol{k}}}\_2}{(2\pi)^3} \frac{e^{i {{\boldsymbol{k}}}\_1({\boldsymbol{r}}\_1 - {\boldsymbol{r}}\_v)}}{{{\boldsymbol{k}}}\_1^2 + m\_s^2} \left(i \frac{2 {{\boldsymbol{k}}}\_2^i}{({{\boldsymbol{k}}}\_2^2 + m\_s^2)^2} e^{i {{\boldsymbol{k}}}\_2 ({\boldsymbol{r}}\_2 - {\boldsymbol{r}}\_v)} + \frac{{\boldsymbol{r}}\_2^i}{{{\boldsymbol{k}}}\_2^2 + m\_s^2} e^{i {{\boldsymbol{k}}}\_2 ({\boldsymbol{r}}\_2 - {\boldsymbol{r}}\_v)}\right) \\ &= A \int \frac{d^3{{\boldsymbol{k}}}\_1}{(2\pi)^3} \frac{e^{i{{\boldsymbol{k}}}\_1{\boldsymbol{r}}}}{{{\boldsymbol{k}}}\_1^2 + m\_s^2} \left(-i \frac{2 {{\boldsymbol{k}}}\_1^i}{({{\boldsymbol{k}}}\_1^2 + m\_s^2)^2} + \frac{{\boldsymbol{r}}\_2^i}{{{\boldsymbol{k}}}\_1^2 + m\_s^2}\right) \\ &= A \frac{1}{16 m\_s \pi} e^{- m\_s r} ({\boldsymbol{r}}\_1^i + {\boldsymbol{r}}\_2^i) \\ &= -i \frac{q\_1 q\_2 c\_3}{16 m\_s \pi} e^{- m\_s r} ({\boldsymbol{r}}\_1^i + {\boldsymbol{r}}\_2^i) \;. \end{aligned}\\[0.6cm] &\begin{aligned} \includegraphics[scale=0.8,valign=c]{R\_figure8.pdf} &= {\sum\_{1 \leftrightarrow 2}}\underbrace{(-i)^2 {M\_\mathrm{Pl}}\left(\frac{-i q\_1}{{M\_\mathrm{Pl}}} \right) \left( \frac{-i M\_2}{2 {M\_\mathrm{Pl}}} \right) \left( \frac{-i 2 m\_s^2}{4 {M\_\mathrm{Pl}}}\right)}\_{\equiv A} \int dt \int d^3 {\boldsymbol{r}}\_v \int \frac{d^3 {{\boldsymbol{k}}}\_1}{(2\pi)^3} \frac{e^{i {{\boldsymbol{k}}}\_1 ({\boldsymbol{r}}\_1 - {\boldsymbol{r}}\_v)}}{{{\boldsymbol{k}}}\_1^2 + m\_s^2} \\ &\phantom{=-} \times \int \frac{d^3 {{\boldsymbol{k}}}\_2}{(2\pi)^3} \frac{e^{i {{\boldsymbol{k}}}\_2 ({\boldsymbol{r}}\_v - {\boldsymbol{r}}\_2)}}{{{\boldsymbol{k}}}\_2^2} \eta^{\mu\nu} P\_{\mu\nu; 00} \frac{{\Bar{\phi}}}{{M\_\mathrm{Pl}}} \;.\\[0.3cm] \qquad &\text{Again we identify the contribution of this diagram to the source term:} \\[0.3cm] \mathmakebox[\widthof{\includegraphics[scale=0.8,valign=c]{R\_figure8.pdf}}][r] J &= -A \int \frac{d^3{{\boldsymbol{k}}}\_1}{(2\pi)^3} \frac{e^{i {{\boldsymbol{k}}}\_1 ({\boldsymbol{r}}\_1 - {\boldsymbol{r}}\_v)}}{{{\boldsymbol{k}}}\_1^2 + m\_s^2} \int \frac{d^3{{\boldsymbol{k}}}\_2}{(2\pi)^3} \frac{e^{i {{\boldsymbol{k}}}\_2 ({\boldsymbol{r}}\_2 - {\boldsymbol{r}}\_v)}}{{{\boldsymbol{k}}}\_2^2} \;.\\[0.3cm] \qquad &\text{From this we obtain the $l=1$ and $p=0$ moment via Eq.~\eqref{eq::multipole\_moments}.}\\[0.3cm] \mathmakebox[\widthof{\includegraphics[scale=0.8,valign=c]{R\_figure8.pdf}}][r]{\mathcal{I}^i} &= {\sum\_{1 \leftrightarrow 2}}-A \int d^3{\boldsymbol{r}}\_v \int\frac{d^3{{\boldsymbol{k}}}\_1}{(2\pi)^3} \int\frac{d^3{{\boldsymbol{k}}}\_2}{(2\pi)^3} \frac{e^{i {{\boldsymbol{k}}}\_1({\boldsymbol{r}}\_1 - {\boldsymbol{r}}\_v)}}{{{\boldsymbol{k}}}\_1^2 + m\_s^2} \frac{e^{{{\boldsymbol{k}}}\_2({\boldsymbol{r}}\_2 - {\boldsymbol{r}}\_v)}}{{{\boldsymbol{k}}}\_2^2} {\boldsymbol{r}}\_v^i \\ &= {\sum\_{1 \leftrightarrow 2}}-A \int d^3{\boldsymbol{r}}\_v \int\frac{d^3{{\boldsymbol{k}}}\_1}{(2\pi)^3} \int\frac{d^3{{\boldsymbol{k}}}\_2}{(2\pi)^3} \frac{e^{i {{\boldsymbol{k}}}\_1({\boldsymbol{r}}\_1 - {\boldsymbol{r}}\_v)}}{{{\boldsymbol{k}}}\_1^2 + m\_s^2} \left(i \frac{2 {{\boldsymbol{k}}}\_2^i}{|{{\boldsymbol{k}}}\_2|^4} e^{i {{\boldsymbol{k}}}\_2 ({\boldsymbol{r}}\_2 - {\boldsymbol{r}}\_v)} + \frac{{\boldsymbol{r}}\_2^i}{{{\boldsymbol{k}}}\_2^2} e^{i {{\boldsymbol{k}}}\_2 ({\boldsymbol{r}}\_2 - {\boldsymbol{r}}\_v)}\right) \\ &= {\sum\_{1 \leftrightarrow 2}}-A \int \frac{d^3{{\boldsymbol{k}}}\_1}{(2\pi)^3} \frac{e^{i{{\boldsymbol{k}}}\_1{\boldsymbol{r}}}}{{{\boldsymbol{k}}}\_1^2 + m\_s^2} \left(-i \frac{2 {{\boldsymbol{k}}}\_1^i}{|{{\boldsymbol{k}}}\_1|^4} + \frac{{\boldsymbol{r}}\_2^i}{{{\boldsymbol{k}}}\_1^2}\right) \\ &= {\sum\_{1 \leftrightarrow 2}}-A \left[{\boldsymbol{r}}^i \frac{(m\_s^2 r^2 - 2) + 2e^{-m\_s r}(m\_s r+1)}{4 \pi m\_s^4 r^3} + {\boldsymbol{r}}\_2^i \frac{1 - e^{-m\_s r}}{4\pi m\_s^2 r}\right] \\ &= {\sum\_{1 \leftrightarrow 2}}i \frac{q\_1 M\_2}{4 {M\_\mathrm{Pl}}^2} \left[{\boldsymbol{r}}^i \frac{(m\_s^2 r^2 - 2) + 2e^{-m\_s r}(m\_s r+1)}{4 \pi m\_s^2 r^3} + {\boldsymbol{r}}\_2^i \frac{1 - e^{-m\_s r}}{4\pi r}\right] \;. \end{aligned}\end{aligned}$$ --- 1. Black hole (BH) binaries, in contrast, are typically not as sensitive to new physics as a consequence of no-hair theorems; see e.g. Ref.  for a review.[↩](#fnref1) 2. The cubic interaction term typically leads to IR divergences in the absence of a mass term when evaluating tree-level interactions, as we are doing here. These divergences are of the form  ∝ *c*3/*m**s* and do thus not have to pose a problem, since in many theories *c*3 is proportional to powers of the mass. See App. [app:validity] for an example. Further note that in the case of a quantum theory with no bare mass term, one generally expects a generation of a scalar mass term *m**s* ∼ *c*3 from loop diagrams. See also Ref.  for a discussion on this.[↩](#fnref2) 3. The first two operators that enter are *E**μ**ν**E**μ**ν* and *B**μ**ν**B**μ**ν* with *E**μ**ν* = *R**μ**α**ν**β**u**α**u**β* and *B**μ**ν* = *ε**μ**α**β**ρ**R**α**β**σ**ν**u**σ**u**ρ*, which are the decompositions of the Riemann tensor into its electric and magnetic type components, respectively. The Wilson coefficients of these operators capture the leading order finite size effects arising in pure GR, i.e., the tidal deformability.[↩](#fnref3) 4. Note that scalar radiation is only emitted if the scalar mass is sufficiently small, i.e., *m**s* < *ω* where *ω* is the orbital frequency. This will become apparent when computing the radiative dynamics in Sec. [subsec::radiativedynamics].[↩](#fnref4) 5. When applying the same procedure to the scalar field, there appears a subtlety due to the finite scalar mass, which introduces an additional scale. We will comment on this later in this section when constructing the scalar propagator.[↩](#fnref5) 6. A more careful analysis reveals *f**n*(*x*) = (1/*x* − 1/*γ*GR)*n*.[↩](#fnref6)
arxiv_0000012
Improved Constraints on the Preferential Heating and Acceleration of Oxygen Ions in the Extended Solar Corona ============================================================================================================= We present a detailed analysis of oxygen ion velocity distributions in the extended solar corona, based on observations made with the Ultraviolet Coronagraph Spectrometer (UVCS) on the *SOHO* spacecraft. Polar coronal holes at solar minimum are known to exhibit broad line widths and unusual intensity ratios of the *λ**λ*1032, 1037 emission line doublet. The traditional interpretation of these features has been that oxygen ions have a strong temperature anisotropy, with the temperature perpendicular to the magnetic field being much larger than the temperature parallel to the field. However, recent work by Raouafi and Solanki suggested that it may be possible to model the observations using an isotropic velocity distribution. In this paper we analyze an expanded data set to show that the original interpretation of an anisotropic distribution is the only one that is fully consistent with the observations. It is necessary to search the full range of ion plasma parameters to determine the values with the highest probability of agreement with the UVCS data. The derived ion outflow speeds and perpendicular kinetic temperatures are consistent with earlier results, and there continues to be strong evidence for preferential ion heating and acceleration with respect to hydrogen. At heliocentric heights above 2.1 solar radii, every UVCS data point is more consistent with an anisotropic distribution than with an isotropic distribution. At heights above 3 solar radii, the exact probability of isotropy depends on the electron density chosen to simulate the line-of-sight distribution of emissivity. The most realistic electron densities (which decrease steeply from 3 to 6 solar radii) produce the lowest probabilities of isotropy and most-probable temperature anisotropy ratios that exceed 10. We also use UVCS absolute intensities to compute the frozen-in O5 + ion concentration in the extended corona; the resulting range of values is roughly consistent with recent downward revisions in the oxygen abundance. Introduction ============ The physical processes that heat the solar corona and accelerate the solar wind are not yet understood completely. In order to construct and test theoretical models, there must exist accurate measurements of relevant plasma parameters in the regions that are being heated and accelerated. In the low-density, open-field regions that reach into interplanetary space, the number of plasma parameters that need to be measured increases because the plasma begins to become collisionless and individual particle species (e.g., protons, electrons, and heavy ions) can exhibit different properties. Such differences in particle velocity distributions are valuable probes of “microscopic” processes of heating and acceleration. The Ultraviolet Coronagraph Spectrometer (UVCS) operating aboard the *Solar and Heliospheric Observatory* (*SOHO*) spacecraft has measured these properties for a variety of open-field regions in the extended corona (Kohl et al.  1995, 1997, 2006). In this paper we focus on UVCS observations of heavy ion emission lines (specifically *λ**λ*1032, 1037) in polar coronal holes at solar minimum. One main goal is to resolve a recent question that has arisen regarding the existence of anisotropic ion temperatures in polar coronal holes. Several prior analyses of UVCS data have concluded that there must be both intense preferential heating of the O5 + ions, in comparison to hydrogen, and a strong field-aligned anisotropy with a much larger temperature in the direction perpendicular to the magnetic field than in the parallel direction (see, e.g., Kohl et al.  1997, 1998; Li et al.  1998; Cranmer et al.  1999; Antonucci et al.  2000; Zangrilli et al.  2002; Antonucci 2006; Telloni et al.  2007). However, Raouafi & Solanki (2004, 2006) and Raouafi et al.  (2007) have reported that there may not be a compelling need for O5 + anisotropy depending on the assumptions made about the other plasma properties of the coronal hole (e.g., electron density). The determination of O5 + preferential heating, preferential acceleration, and temperature anisotropy has spurred a great deal of theoretical work (see reviews by Hollweg & Isenberg 2002; Cranmer 2002a; Marsch 2005; Kohl et al.  2006). It is thus important to resolve the question of whether these plasma properties are definitively present on the basis of the UVCS/*SOHO* observations. In this paper, we attempt to analyze all possible combinations of O5 + properties (number density, outflow speed, parallel temperature, and perpendicular temperature) with the full effects of the extended line of sight (LOS) taken into account. The applicability of any particular combination of ion properties is evaluated by computing a quantitative probability of agreement between the modeled set of emission lines and a given observation. Preliminary results from this work were presented by Cranmer et al.  (2005) and Kohl et al.  (2006). The original UVCS results of preferential ion heating and acceleration—as well as strong ion temperature anisotropy (*T*⊥ ≫ *T*∥)—were somewhat surprising, but these extreme departures from thermal equilibrium are qualitatively similar to conditions that have been measured for decades in high-speed streams in the heliosphere. At their closest approaches to the Sun ( ∼  0.3 AU), the *Helios* probes measured substantial proton temperature anisotropies with *T*⊥ > *T*∥ (Marsch et al.  1982; Feldman & Marsch 1997). In the fast wind, most ion species also appear to flow faster than the protons by about an Alfvén speed (*V**A*), and this velocity difference decreases with increasing radius and decreasing proton flow velocity (e.g., Hefti et al.  1998; Reisenfeld et al.  2001). The temperatures of heavy ions are significantly larger than proton and electron core temperatures. In the highest-speed wind streams, ion temperatures exceed simple mass proportionality with protons (i.e., heavier ions have larger most-probable speeds), with $(T\_{\rm ion} / T\_{p}) > (m\_{\rm ion} / m\_{p})$, for $m\_{\rm ion} > m\_{p}$ (e.g., Collier et al.  1996). UVCS provided the first evidence that these plasma properties are already present near the Sun. The outline of this paper is as follows. In § 2 we present an expanded collection of UVCS/*SOHO* observational data that is used to determine the O5 + ion properties. § 3 outlines the procedure we have developed to produce empirical models of the plasma conditions in polar coronal holes and to compute the probability of agreement between any given set of ion properties and the observations. The resulting ranges of ion properties that are consistent with the UVCS observations are presented in § 4 along with, in our view, a resolution of the controversy regarding the oxygen temperature anisotropy. Finally, § 5 gives a summary of the major results of this paper and a discussion of the implications these results may have on theoretical models of coronal heating and solar wind acceleration. Observations ============ The UVCS instrument contains three reflecting telescopes that feed two ultraviolet toric-grating spectrometers and one visible light polarimeter (Kohl et al.  1995, 1997). Light from the bright solar disk is blocked by external and internal occulters that have the same linear geometry as the spectrometer slits. The slits are oriented in the direction tangent to the solar limb. They can be positioned in heliocentric radius *r* anywhere between about 1.4 and 10 solar radii (*R*⊙) and rotated around the Sun in position angle. The slit length projected on the sky is 40, or approximately 2.5 *R*⊙ in the corona, and the slit width can be adjusted to optimize the desired spectral resolution and count rate. The UVCS data discussed in this paper consist of a large ensemble of observations of polar coronal holes from the last solar minimum (1996–1997). The solar magnetic field is observed to exist in a nearly axisymmetric configuration at solar minimum, with open field lines emerging from the north and south polar regions and expanding superradially to fill a large fraction of the heliospheric volume. The plasma properties in polar coronal holes remain reasonably constant in the year or two around solar minimum (see, e.g., Kohl et al.  2006), so we assemble the data over this time into a single function of radius. In this paper, we limit ourselves to the analysis of observations of the *λ**λ*1032, 1037 emission line doublet in these polar regions. The relevant UVCS observations are taken from the following three sources. 1. The empirical model study of Kohl et al.  (1998) and Cranmer et al.  (1999) covered the period between 1996 November and 1997 April and took the north and south polar coronal hole properties to be similar enough to treat them together. 2. A detailed analysis of the north polar coronal hole by Antonucci et al.  (2000) coincided with the second *SOHO* joint observing program (JOP 2) on 1996 May 21. 3. We searched the UVCS/*SOHO* archive for any other north or south polar hole observations having sufficient count-rate statistics to be able to measure the line widths at radii above 2 *R*⊙. A total of 14 new or reanalyzed data points were identified between 1996 June and 1997 July. The remainder of this section describes the data reduction for the third group of new data points. Table 1 provides details of these 14 measurements. lccccccc 1996 Jun 21, 16:55 & 2.07 & 15.8 (N) & 75 & 9.1 & 363 ± 21.7 & 2.09 ± 0.4 & 32.2 ± 6.8 1996 Sep 29, 15:26 & 2.08 & 17.5 (N) & 75 & 5.6 & 473 ± 12.4 & 1.87 ± 0.14 & 43.2 ± 8.7 1996 Nov 07, 23:38 & 2.07 & 29.7 (N) & 75 & 7.6 & 409 ± 18.8 & 1.78 ± 0.2 & 44.6 ± 9.1 1996 Nov 10, 06:30 & 3.00 & 25.7 (N) & 350 & 9.4 & 475 ± 27.0 & 1.16 ± 0.18 & 2.08 ± 0.44 1996 Nov 10, 15:55 & 2.56 & 25.9 (N) & 350 & 10.8 & 505 ± 30.5 & 1.17 ± 0.17 & 4.83 ± 1.0 1996 Nov 16, 16:56 & 2.17 & 29.6 (N) & 340 & 9.5 & 417 ± 7.43 & 1.485 ± 0.06 & 25.1 ± 5.0 1997 Jan 05, 21:00 & 3.07 & 28.0 (N) & 100 & 17.6 & 690 ± 87.2 & 0.957 ± 0.45 & 2.02 ± 0.63 1997 Jan 10, 14:54 & 3.08 & 20.8 (N) & 150 & 68.8 & 686 ± 38.7 & 1.23 ± 0.3 & 3.37 ± 0.76 1997 Jan 24, 16:03 & 3.08 & 20.6 (N) & 150 & 70.8 & 594 ± 41.6 & 1.01 ± 0.3 & 1.66 ± 0.40 1997 Mar 09, 18:00 & 2.57 & 19.0 (S) & 300 & 8.9 & 500 ± 22.0 & 1.15 ± 0.12 & 5.66 ± 1.2 1997 Jun 04, 16:49 & 2.56 & 18.8 (N) & 300 & 9.0 & 527 ± 33.9 & 0.958 ± 0.15 & 4.14 ± 0.90 1997 Jun 08, 20:15 & 3.10 & 18.7 (N) & 300 & 8.3 & 645 ± 65.6 & 1.33 ± 0.44 & 1.33 ± 0.33 1997 Jul 01, 15:50 & 2.56 & 17.2 (N) & 342 & 9.8 & 534 ± 56.8 & 0.988 ± 0.26 & 3.26 ± 0.80 1997 Jul 04, 16:45 & 3.63 & 18.9 (N) & 342 & 29.5 & 451 ± 47.9 & 1.49 ± 0.43 & 0.559 ± 0.13 The criteria for identifying new UVCS data were as follows. We adopted a time period from 1996 April, the beginning of primary science operations, to 1998 January, after which the new cycle’s activity began to rise and high-latitude streamers appeared regularly to signal the end of true solar minimum. Only measurements of the lines above the poles (i.e., position angles within  ± 15 of the north or south poles) and at heights above 2 *R*⊙ were sought.[1](#fn1) Prior experience with the count rates at large heights in coronal holes refined the search further to use measurements only with relatively long exposure times (see Table 1) to gather sufficient statistics to measure the line widths. There were two observations that appeared initially to satisfy the above criteria (1997 April 15, at 4.14 *R*⊙, and 1997 July 2, at 3.10 *R*⊙), but they were not used because the count rate statistics were inadequate for reliable line widths. The only point of overlap between the data in Table 1 and prior analyses (e.g., Cranmer et al.  1999) concerns the end of the month-long study of the north polar coronal hole in 1997 January (at *r* ≈ 3 *R*⊙). These data were reanalyzed with a different line fitting technique and an improved UVCS pointing correction; the computed line widths are similar to those presented by Cranmer et al.  (1999) and the intensities are given here for the first time. For completeness, though, both the old and new data points are kept in the full ensemble of data used below. To achieve the lowest uncertainties in the determinations of the *λ**λ*1032, 1037 intensities and line widths, we typically integrated over 15 to 30 along the slit (see Table 1). This corresponds to  ± 0.5–1 *R*⊙ on either side of the north-south axis. The use of such large areas implies that narrow flux-tube structures such as dense *polar plumes* and the less-dense interplume regions were not resolved. As long as all steps of the analysis remain consistent with such a coarsely averaged state (e.g., the use of a similarly averaged electron density), though, this need not be a problem. The derived plasma properties thus describe the average conditions inside coronal holes at solar minimum and do not address differences between plumes and interplume regions. Details concerning the analysis of UVCS data are given by Gardner et al.  (1996, 2000, 2002) and Kohl et al.  (1997, 1999, 2006). The UVCS Data Analysis Software (DAS) was used to remove image distortion and to calibrate the data in wavelength and intensity. The coronal line profiles are broadened by various instrumental effects. The optical point spread function of the spectrometer depends on the slit width used (with 270.3 *μ*m corresponding to 1 Å in the spectrum), the on-board data binning, the exposed mirror area, and the intrinsic quantization error of the detector. This broadening is taken into account by adjusting the line widths of Gaussian fits to the coronal components of the data; the data points themselves are not corrected. Tests have shown that the coronal line width can be recovered accurately even when the total instrumental width is within about a factor of two of the width of the coronal component. Instrument-scattered stray light from the solar disk is modeled as an additional narrow Gaussian component with an intensity and profile shape constrained by the known stray light properties of the instrument. The analysis of the emission line doublet involves four basic observable quantities: the total intensities of the two lines and their 1/*e* Gaussian half-widths Δ*λ*1/*e*. The latter quantities are typically expressed in Doppler velocity units as *V*1/*e* = *c*Δ*λ*1/*e*/*λ*0, where *λ*0 is the rest wavelength of the line and *c* is the speed of light. Rather than give the two total intensities, Table 1 provides the total intensity $I\_{\rm tot}$ of the *λ*1032 line and the ratio ${\cal R}$ of the *λ*1032 to the *λ*1037 intensities. The uncertainties given in Table 1 take account of both Poisson count-rate statistics and the fact that the various instrumental corrections are known only to finite levels of precision. Note that the ratio ${\cal R}$ does not depend on the absolute intensity calibration of the instrument. UVCS/*SOHO* has not been able to resolve any departures from Gaussian shapes for the lines in large polar coronal holes, so the profiles are described by just the one parameter *V*1/*e*. For the measurements given in Table 1, we performed the line fitting by constraining the coronal components of the *λ*1032 and *λ*1037 lines to have the same width. Thus, the *V*1/*e* values given in Table 1 are formally a weighted mean between the two components. This is done mainly to lower the statistical uncertainties but there is some observational justification for assuming that the two components have the same width. In situations where the count rates are high, it is difficult to see any significant or systematic difference between the line widths of the two components. There are various reasons why they may be different from one another in some regions (e.g., Cranmer 2001; Morgan & Habbal 2004), but more work needs to be done to identify such subtle effects. Figure 1 displays the combined ensemble of old and new UVCS data for the three main observables: the line width *V*1/*e*, the dimensionless intensity ratio ${\cal R}$, and the absolute (line-integrated) intensity of the *λ*1032 line. There are a total of 53 separate data points from the three sources discussed above, but not all of these points have all three of the main quantities: there are 50 values of *V*1/*e*, 52 values of ${\cal R}$ (with only 49 cases where both *V*1/*e* and ${\cal R}$ exist for the same measurement), and 44 values of $I\_{\rm tot}$. This relative paucity of data illustrates the difficulties of measuring the plasma parameters at large heights in polar coronal holes. In general, the radial dependences of the quantities in Figure 1 are similar to those given by Kohl et al.  (1998), Cranmer et al.  (1999), and Antonucci et al.  (2000). There exists a reasonably large spread in the *V*1/*e* values in Figure 1*a* above *r* ≈ 2.5 *R*⊙. This spread exceeds the magnitude of the  ± 1*σ* uncertainty limits for the individual measurements, and thus seems to indicate that there is an *intrinsic variability* (possibly temporal) of the O5 + plasma conditions in polar coronal holes above heights where the ions become collisionless. It is possible that polar plumes and interplume regions become collisionless over different ranges of radius, and thus preferential ion heating mechanisms may begin to broaden the lines at different rates in the two regions. The observed variation in line width may thus depend on the relative concentrations of plume and interplume plasma along the line of sight at different observation times. Empirical Model Procedure ========================= The observable properties of the line doublet depend on a nontrivial combination of various O5 + plasma parameters, as well as electron parameters, integrated along the optically thin line of sight. In general, then, it is not possible to derive accurate and self-consistent plasma parameters via a simple “inversion” from the line widths and intensities. Rather, one must build up a so-called *empirical model* of the coronal hole—with the O5 + velocity distribution and other properties as free parameters—and synthesize trial line profiles. After some procedure of varying the coronal parameters to achieve agreement between the synthesized line profiles and the observations, the self-consistent empirical model of the ion properties can be considered complete. This technique is closely related to forward modeling approaches being used in other areas of solar physics (e.g., Judge & McIntosh 1999). The use of the term “empirical model” has resulted in a bit of confusion regarding what assumptions are embedded in the derived plasma parameters. We emphasize that the empirical models described here do not specify the physical processes that maintain the coronal plasma in its assumed steady state. Thus, there is no explicit dependence on “theoretical” concepts such as coronal heating and acceleration mechanisms, waves and turbulent motions, or magnetohydrodynamics (MHD), within the empirical models. The derived O5 + plasma parameters depend on only the observations and on well-established theory such as the radiative transfer inherent in the line-formation process. In this section we summarize the forward modeling of line profiles for an arbitrary set of coronal parameters (§ 3.1), then we describe how these parameters are specified and varied to produce various empirical model grids (§ 3.2). Finally, we present a new method of computing the probability of agreement between a given empirical model and the observations (§ 3.3), such that no regions of the possible solution space are neglected. Forward Modeling ---------------- The line emission in coronal holes comes from two sources of comparable magnitude: (1) collisional electron impact excitation followed by radiative decay, and (2) resonant scattering of photons that originate on the bright solar disk. The emergent specific intensity of an emission line from an optically thin corona is given by $$I\_{\nu} \, = \, \int\_{-\infty}^{+\infty} dx \, \left( j\_{\nu}^{\rm coll} + j\_{\nu}^{\rm res} \right) \,\,\,, \label{eq:Iline}$$ where *x* is the coordinate direction along the observer’s line of sight (LOS) and $j\_{\nu}^{\rm coll}$ and $j\_{\nu}^{\rm res}$ are the collisionally excited and resonantly scattered line emissivities, respectively. We neglect the relatively weak UV continuum and the Thomson electron-scattered components of the spectral lines in question. At a given point in the three-dimensional coronal hole volume, the line emissivities are specified by $$\begin{aligned} j\_{\nu}^{\rm coll} &=& \frac{h\nu\_{0}}{4\pi} \, q\_{12}(T\_{e}) \, n\_{e} \, n\_{1} \, \phi\_{\nu} \label{eq:jcoll} \\ j\_{\nu}^{\rm res} &=& \frac{h\nu\_{0}}{4\pi} \, B\_{12} \, n\_{1} \int\_{0}^{\infty} d\nu' \oint \frac{d\Omega'}{4\pi} \, R(\nu', {\bf\hat{n}}', \nu, {\bf\hat{n}}) \, \tilde{I}\_{\nu'}({\bf\hat{n}}') \label{eq:jres}\end{aligned}$$ (see, e.g., Mihalas 1978). Here, *ν*0 is the rest-frame line-center frequency, *q*12 is the collision rate per particle for the transition between atomic levels 1 and 2, *n*1 is the number density in the lower level of the atom or ion of interest (here, the 2*s* 2 *S*1/2 state of O5 +), and *B*12 is the Einstein absorption rate of the transition. The emission profile *ϕ**ν* is assumed to be Gaussian. The scattering redistribution function *R* takes the incident frequency *ν*ʹ and photon direction vector ${\bf\hat{n}}'$ and transforms it into the observed frequency *ν* along the LOS direction ${\bf\hat{n}}$. The profile *ϕ**ν* and the redistribution function *R* contain the main dependences on the properties of the ion velocity distribution. We allow for the possibility of an anisotropic O5 + velocity distribution by using a bi-Maxwellian function (e.g., Whang 1971), with the parallel and perpendicular axes oriented arbitrarily with respect to the radial direction in the corona; see § 3.2. The emissivity profiles along the LOS are modeled with the full effects of the bi-Maxwellian velocity distribution and the projected components of the bulk outflow speed along the ${\bf\hat{n}}'$ and ${\bf\hat{n}}$ directions. For the polar coronal hole measurements being modeled here, we define a Cartesian coordinate system for which the LOS direction is denoted *x* and the north-south polar axis of the Sun is *z*. The other coordinate *y* is set to zero. General expressions for the emissivities are given in various levels of detail by Withbroe et al.  (1982), Noci et al.  (1987), Allen et al.  (1998), Cranmer (1998), Li et al.  (1998), Noci & Maccari (1999), Kohl et al.  (2006), and Akinari (2007). The resonantly scattered components depend sensitively on the intensity profiles incident from the solar disk (*Ĩ**ν*ʹ). As in Cranmer et al.  (1999), we used empirically derived Gaussian profiles with total intensities measured on the disk by UVCS at solar minimum (Raymond et al.  1997). The adopted *λ*1032 (1031.93 Å) disk intensity is 1.94 × 1013 photons s− 1 cm− 2 sr− 1, and the total intensities of the *λ*1037 (1037.62 Å), *λ*1037.02, and *λ*1036.34 disk lines are 0.500, 0.214, and 0.171 times the *λ*1032 intensity, respectively. We used the profile widths as given by Noci et al.  (1987); see also the comparative tables of Gabriel et al.  (2003) and Raouafi & Solanki (2004). The collisional components depend on how the collision rate *q*12 varies with electron temperature *T**e*. We kept the same tabulated values as were used by Raymond et al.  (1997) and Cranmer et al.  (1999). For completeness, we give a fit to *q*12(*T**e*) for the *λ*1032 transition: log10(*q*12)  =   − 0.22117*t*2 + 2.4565*t* − 14.695  ,  where *t* = log10*T**e*, and *T**e* and *q*12 are given in units of K and cm3 s− 1 respectively. This expression is valid to within about  ±  2% over the range 5.3 ≤ *t* ≤ 6.3. The collision rate for the *λ*1037 line is half of that of the *λ*1032 line. The numerical code that synthesizes line profiles by numerically integrating equations ([eq:Iline])–([eq:jres]) is essentially the same as the one used by Cranmer et al.  (1999). The integrations over *x* and *ν*ʹ have been simplified by replacing the adaptive Romberg method by fixed grids, with spacings that have been adjusted to minimize both numerical discretization errors and run time. The LOS integration was performed in steps of 0.1 *R*⊙ from –15 to  + 15 *R*⊙ along the *x* axis. The incident frequency grid corresponds to a wavelength grid with a spacing of 0.03 Å in *λ*ʹ. These step sizes were verified to give accurate results by halving the step sizes and obtaining the same results to within a desired precision. We integrated over the solid angle of the solar disk (*d*Ωʹ = sin*θ*ʹ *d**θ*ʹ *d**ϕ*ʹ) by Gauss-Legendre quadrature in *θ*ʹ and equally spaced trapezoidal quadrature in *ϕ*ʹ. The solar disk was assumed to be uniformly bright. Parameter Selection for Line Synthesis -------------------------------------- For the doublet, there are three primary observables ($I\_{\rm tot}$ of *λ*1032, *V*1/*e*, and ${\cal R}$) that depend on the LOS distributions of four “unknown” quantities as well as a longer list of quantities that can be considered to be known independently of the UVCS observations. The four unknowns are the ion fraction (essentially *n*1/*n**e*), the O5 + bulk outflow speed along the magnetic field (*u**i* ∥), and the parallel and perpendicular O5 + kinetic temperatures (*T**i* ∥ and *T**i* ⊥). The known quantities include the electron density *n**e*, the electron temperature *T**e*, the incident intensity from the solar disk, and the overall magnetic geometry of the coronal hole (i.e., how to compute “parallel” and “perpendicular” at any point along the LOS). Note that both emissivities (eqs.  [[eq:jcoll]]–[[eq:jres]]) depend linearly on the ion fraction, so that the total intensity $I\_{\rm tot}$ can be used as a straightforward diagnostic of this quantity after the other parameters have been determined. The line widths and intensity ratios do not depend on the ion fraction. This leaves two observables (*V*1/*e* and ${\cal R}$) to specify the values of three ion quantities (*u**i* ∥, *T**i* ∥, and *T**i* ⊥). Although this system is formally underdetermined, we can nonetheless put some firm limits on the *ranges* of these quantities and compute the most probable values. Below, the three O5 + velocity distribution parameters are discussed in § 3.2.1 and the other “known” parameters are discussed in § 3.2.2. ### Ionized Oxygen Parameters We treat the three unknown ion quantities as free parameters that are varied independently of one another. Other empirical modeling efforts (e.g., Cranmer et al.  1999; Antonucci et al.  2000; Raouafi & Solanki 2004, 2006) have tended to use some form of iterative refinement; i.e., they started with a specific set of initial conditions and assumptions, and they varied some parameters—and kept others fixed—to find the most probable values of *u**i* ∥, *T**i* ∥, and *T**i* ⊥. The initial estimates tended to utilize the fact that the line widths are most sensitive to *T**i* ⊥, whereas the line ratios depend mainly on the effect of Doppler dimming (and Doppler pumping from the solar disk lines) and thus are sensitive mainly to the parallel velocity distribution (*u**i* ∥ and *T**i* ∥). These iterative procedures contain the inherent possibility that some regions of the parameter space could be neglected, and thus possibly valid solutions could be ignored. In this paper we *search the entire parameter space* by constructing a three-dimensional “data cube” which contains all possible combinations of the three parameters. The three axes of the data cube were chosen to be *u**i* ∥, *T**i* ⊥, and the anisotropy ratio *T**i* ⊥/*T**i* ∥. The modeled ranges of these quantities were made as wide as possible in order to avoid missing possibly relevant regions of parameter space. The outflow speed *u**i* ∥ was varied between 0 and 1000 km s− 1 using a linearly spaced grid. The perpendicular kinetic temperature *T**i* ⊥ was varied logarithmically between 5 × 105 and 109 K. The anisotropy ratio was varied logarithmically between 0.1 and 100. There were 50 values of each parameter along the three axes of the data cube, and we synthesized 12 wavelengths—spaced linearly between the line center and 2.7 Å redward of line center—for both lines. Thus, a data cube constructed for a specific height in the corona (*z*) consisted of 3 × 106 (503 × 24) individual LOS integrations. For each point in a data cube, the scalar values of *u**i* ∥ and *T**i* ⊥ were assumed to be those in the plane of the sky (i.e., *x* = 0). For other points along the LOS, the models used slightly larger values that are consistent with an assumed radial increase in both parameters. Mass flux conservation—using the modeled *n**e*(*r*) and flux tube geometry—was used to specify the radial increase in *u**i* ∥ along the LOS. Earlier empirical modeling results (specifically, eq. [28] of Cranmer et al.  1999) were used to specify the radial increase in *T**i* ⊥. The modeled anisotropy ratio *T**i* ⊥/*T**i* ∥ was assumed to remain constant along the LOS. It is important to note that the modeled radial increases in *u**i* ∥ and *T**i* ⊥ were always taken to be *relative* to the plane-of-sky values that were varied freely throughout each data cube. Thus, there is no *a priori* reason for the resulting most-probable values of these parameters (determined via comparisons with observations over a range of heights *z*) to exhibit similar radial increases.[2](#fn2) We note that the kinetic temperature quantities *T**i* ⊥ and *T**i* ∥ may describe some combination of “thermal” microscopic motions and any unresolved bulk motions due to waves or turbulence. Thus, there is a further step of interpretation required after the most likely values of these kinetic temperatures have been derived from the empirical modeling process. Making a definitive separation between the thermal and nonthermal components of these temperatures is beyond the scope of this paper. However, we can make some qualitative comments on the likely ranges of magnitude of these two components based on recent theoretical models of Alfvén waves in coronal holes; see §§ 4.2 and 4.3. Finally, the O5 + ion fraction *n*1/*n**e* was kept at a constant (and arbitrary) value in all of the models. Comparisons between the observed and synthesized total intensities were used to derive measurements of this ion fraction in the polar coronal holes; see § 4.5. ### Electron and Flux Tube Parameters The three main “known” parameters that are explored in the models shown below (but kept constant over each data cube) are the electron density *n**e*(*r*), electron temperature *T**e*(*r*), and the macroscopic flux-tube geometry of the coronal hole. Any other parameters that could be varied—e.g., the disk intensities of the and lines—were kept fixed at the values given above in § 3.1. Because one main purpose of this paper is to determine why the results of Raouafi & Solanki (2004, 2006) appear to differ from earlier empirical modeling efforts, we constructed two main sets of electron and flux tube parameters: *model R,* which is designed to replicate many of the conditions assumed by Raouafi & Solanki (2004, 2006), and *model C,* which is essentially the same as used by Cranmer et al.  (1999). Below, we also discuss hybrid models with various combinations of the conditions assumed in models R and C. Model C uses an electron temperature derived by Ko et al.  (1997) from measurements of ion charge states in the fast solar wind made by the SWICS instrument on *Ulysses* (Gloeckler et al.  1992). We utilize the fitting formula $$T\_{e} (r) \, = \, 10^{6} \, \mbox{K} \, \left[ 0.35 \left( \frac{r}{R\_{\odot}} \right)^{1.1} + 1.9 \left( \frac{r}{R\_{\odot}} \right)^{-6.6} \right]^{-1} \,\,. \label{eq:Tko}$$ For the electron density, model C uses the expression derived by Cranmer et al.  (1999) from direct inversion of UVCS/*SOHO* white-light polarization brightness (*p**B*) data over the poles at solar minimum; i.e., $$\frac{n\_{e} (r)}{10^{5} \, \mbox{cm}^{-3}} \, = \, 3890 \left( \frac{R\_{\odot}}{r} \right)^{10.5} + 8.69 \left( \frac{R\_{\odot}}{r} \right)^{2.57} \,\,. \label{eq:neC99}$$ The above electron density is a mean value for polar coronal holes (intermediate between plumes and interplume regions) between *r* ≈ 1.5 and 4 *R*⊙. Model C also uses the three-parameter empirical function of Kopp & Holzer (1976) to specify the superradial expansion of a polar coronal hole. The transverse area *A*(*r*) ∝ *r*2*f*(*r*) of the entire coronal hole is specified by $$f(r) \, = \, 1 + (f\_{\rm max} - 1) \left\{ \frac{1 - \exp [( R\_{\odot} - r) / \sigma\_{1}]} {1 + \exp [( R\_{1} - r) / \sigma\_{1}]} \right\} \,\,, \label{eq:fofr}$$ and Cranmer et al.  (1999) determined $f\_{\rm max} = 6.5$, *R*1 = 1.5 *R*⊙, and *σ*1 = 0.6 *R*⊙. Also, the area of the hole is normalized by setting the basal colatitude Θ0 to 28. The field lines inside the coronal hole volume are assumed to self-similarly follow colatitudes that remain proportional to the overall boundary of the coronal hole at any given radius (see Cranmer et al.  1999). Model R uses a constant electron temperature of 106 K. This value is lower than the peak of the Ko et al.  (1997) model (*T**e* ≈ 1.5 × 106 K at *r* ≈ 1.6 *R*⊙), and higher than the value from this model at the coronal base (*T**e* ≈ 4 × 105 K at *r* = *R*⊙). The constant value of 106 K seems to be in closer agreement to both theoretical models that take account of strong electron heat conduction in the corona (e.g., Lie-Svendsen & Esser 2005; Cranmer et al.  2007) and with SUMER/*SOHO* observations made above the limb in coronal holes (e.g., Wilhelm et al.  1998; Doschek et al.  2001).[3](#fn3) For the electron density, model R uses equation (2) of Doyle et al.  (1999), i.e., $$\frac{n\_{e} (r)}{10^{5} \, \mbox{cm}^{-3}} \, = \, 1000 \left( \frac{R\_{\odot}}{r} \right)^{8} + 0.025 \left( \frac{R\_{\odot}}{r} \right)^{4} + 2.9 \left( \frac{R\_{\odot}}{r} \right)^{2} \,\,. \label{eq:neD99}$$ To specify the superradial geometry of flux tubes in the polar coronal hole, model R uses the analytic magnetic field model of Banaszkiewicz et al.  (1998). Note that Raouafi & Solanki (2004, 2006) used equation (1) of Doyle et al.  (1999), which is a one-parameter hydrostatic fit to various measured electron densities. Above a height of *r* ≈ 6 *R*⊙, though, the radial decrease of *n**e* in the hydrostatic expression becomes substantially *shallower* than an inverse-square radial decrease. This is not generally expected to occur; i.e., in most observations and models, the radial decrease in *n**e* goes from a rate much steeper than 1/*r*2 at low heights to 1/*r*2 at large heights where the geometry is radial and the wind speed is constant. The shallow radial density decrease in a hydrostatic model is probably unphysical and could lead to an overestimated contribution from large distances along the LOS. Figure 2 illustrates the differences between the magnetic geometries used in models R and C. Figures 2*a* and 2*b* show field lines that are distributed evenly in polar angle *θ* between 0 and 29 as measured on the solar surface from the north pole. The superradial angle *δ* characterizes the departure from the radial direction, and it is shown in Figure 2*c* as a function of LOS distance *x* for a polar observing height *z* = 2.5 *R*⊙. Formally, *δ* is defined as the angle between the radius vector ${\bf r}$ and the magnetic field ${\bf B}$ (assuming the field points outward), i.e., $$\delta \, = \, \cos^{-1} \left( \frac{{\bf r} \cdot {\bf B}}{|{\bf r}| \, |{\bf B}|} \right) \,\,.$$ For the polar observations described here, the LOS projection of any quantity that follows the magnetic field (e.g., the outflow velocity) is given by multiplying its magnitude by sin(*θ* + *δ*). The Banaszkiewicz et al.  (1998) model exhibits a larger degree of departure from radial geometry than does the Cranmer et al.  (1999) model. However, at the large heights for which the UVCS anisotropy results are of interest here, the relative differences between the two models—and also the differences between either model and a radial geometry (*δ* = 0)—are small. Figure 3 shows the range of electron densities measured by several instruments in polar coronal holes. The strong radial decrease in *n**e*(*r*) has been removed by dividing all measurements by equation ([eq:neD99]). The use of this normalization more clearly illustrates the relative differences between the different sets of values, which Raouafi & Solanki (2004, 2006) claimed to be important in the derivation of O5 + temperature anisotropy. The differences between plumes and interplume regions is certainly responsible for some of the wide range of variation, but some of it may also be due to absolute calibration uncertainties between instruments. Note, though, that the curve representing the hydrostatic equation (1) of Doyle et al.  (1999) appears to be clearly divergent from the other empirical curves above *r* ≈ 8 *R*⊙, with a slope that is flatter than the other measurements even several solar radii below that. The gray-scale histogram boxes in Figure 3 illustrate the variations due to differing concentrations of polar plumes along a single polar LOS over three months (1996 November 1 to 1997 February 1) using a consistent data set and large enough count rates to make Poisson uncertainties negligible (see Table 3 of Cranmer et al.  1999). The curves from Fisher & Guhathakurta (1995) and Guhathakurta et al.  (1999) show averages of the various plume and interplume values given in those papers, with vertical lines illustrating the relative contrast between the densest plume-filled lines of sight and the regions with the fewest numbers of plumes. (These vertical lines are shown at *r* ≈ 10 *R*⊙ for clarity, but they are representative of the values at the lower heights corresponding to the observed white-light data.) Polar electron density values reported recently by Quémerais et al. (2007) are not shown, but they are similar in radial shape to the Fisher & Guhathakurta (1995) mean curve (but with values about 10%–20% higher). Overall, the variations between data sets that appear to exceed the plume-interplume contrast may be due to different instrument calibrations. When modeling the observations summarized in § 2, it is probably incorrect to use the lowest “pure interplume” electron densities. At *z* ≈ 2.5–3 *R*⊙ in polar coronal holes, the UVCS observations were typically integrated over 15 to 30 in the tangential direction, whereas polar plumes at these heights have transverse sizes of only about 1 to 2. Thus, the most appropriate electron densities to use are those that *average* over plumes and interplume regions. The lower limits from Fisher & Guhathakurta (1995) and Guhathakurta et al.  (1999), as well as the fitting function given by Esser et al.  (1999), seem to be inappropriate to apply to the empirical modeling of these UVCS data. At lower heights, where the plume and interplume regions have been resolved by UVCS (e.g., Kohl et al.  1997; Giordano et al.  2000), the use of the full range of plume and interplume values of *n**e* would be warranted. Comparison with Observations ---------------------------- Once a model data cube (which varies *u**i* ∥, *T**i* ⊥, and *T**i* ⊥/*T**i* ∥ along its axes) has been produced for a given observing height *z* and a given set of *n**e*, *T**e*, and flux tube parameters, the next step is to compute the probability of agreement between a given observation and each of the simulated observations in the cube. We compute this probability *P* as the product of two quantities that are assumed to be independent of one another: (1) the probability $P\_{\rm R}$ that the observed line ratio agrees with the simulated ratio, and (2) the probability $P\_{\rm S}$ that the observed profile shape of the *λ*1032 line agrees with the simulated shape. Because the brighter *λ*1032 line tends to dominate the measured “weighted” line width *V*1/*e*, we use only the simulated *λ*1032 line shape in the latter comparison. The line ratio probability $P\_{\rm R}$ is relatively straightforward to compute. The modeled total intensities of the two components of the doublet are determined by summing up the specific intensities over the 12 wavelength bins. Their ratio thus gives ${\cal R}\_{\rm model}$. The relative distance between ${\cal R}\_{\rm model}$ and the observed ratio ${\cal R}\_{\rm obs}$, in units of the observational standard deviation ($\delta {\cal R}\_{\rm obs}$), is the quantity that determines the probability of agreement. Assuming the uncertainties are normally distributed, the probability is $$P\_{\rm R} \, = \, 1 - \mbox{erf} \left( \frac{|{\cal R}\_{\rm obs} - {\cal R}\_{\rm model}|} {\delta {\cal R}\_{\rm obs} \, \sqrt{2}} \right) \label{eq:PR}$$ (see, e.g., Bevington & Robinson 2003). A larger argument in the error function (“erf” above) denotes a larger discrepancy between the modeled and observed ratios, and thus a lower probability of agreement. The line shape probability $P\_{\rm S}$ is not as easy to compute as $P\_{\rm R}$. An initial attempt was made to fit the simulated profiles with Gaussian functions, and then to compare the resulting *V*1/*e* widths with the observed values using a similar expression as equation ([eq:PR]). However, there were many instances where the modeled lines were far from Gaussian in shape, but the best-fitting Gaussian (which was a poor fit in an absolute sense) happened to agree with the observed *V*1/*e*. This resulted in spuriously high probabilities for wide regions of parameter space that should have been excluded. Thus, we found that the tabulated specific intensities (i.e., the full *line shapes*) need to be compared on a wavelength-by-wavelength basis. This raises the issue of what to use for the “observed” line shape. As described in § 2, the UVCS/*SOHO* data points contain a wide range of instrumental effects that were taken into account in the line fitting process. In order to compare similar quantities, either these effects must be inserted into the model profiles, or we must reconstruct “observed profile” information from the extracted *V*1/*e* measurements and the *δ**V*1/*e* uncertainties. We chose the latter option. To determine the probability of agreement between the set of modeled specific intensities ($I\_{\lambda, {\rm model}}$) and the reconstructed observed intensities ($I\_{\lambda, {\rm obs}}$), we computed a *χ*2 quantity, $$\chi^{2} \, = \, \sum\_{\lambda} \left( \frac{I\_{\lambda, {\rm obs}} - I\_{\lambda, {\rm model}}} {\delta I\_{\lambda, {\rm obs}}} \right)^{2}$$ where $I\_{\lambda, {\rm model}}$ came from the data cube, and $I\_{\lambda, {\rm obs}}$ was constrained to be a Gaussian function with the observed *V*1/*e* width and a total intensity equal to that of the modeled profile. (The observed total intensity was not used because the comparison being done here is only between the relative shapes.) The $\delta I\_{\lambda, {\rm obs}}$ uncertainty was computed as a function of wavelength by comparing the idealized $I\_{\lambda, {\rm obs}}$ profile with two others computed with line widths *V*1/*e* − *δ**V*1/*e* and *V*1/*e* + *δ**V*1/*e* (with all three profiles normalized to the same modeled total intensity). These three profiles exhibited a range of specific intensities at each wavelength, and the standard deviation quantity $\delta I\_{\lambda, {\rm obs}}$ was defined as half of that full range. Then, the *χ*2 quantity above constrains the probability that the observed and modeled profiles are in agreement (i.e., the probability that the observed and modeled specific intensity values are drawn from the same distribution). Assuming normally distributed uncertainties, this probability is given by $$P\_{\rm S} \, \equiv \, Q (\chi^{2} | \nu) = \frac{1}{\Gamma (\nu / 2)} \int\_{\chi^{2} / 2}^{\infty} e^{-t} t^{(\nu / 2)-1} dt$$ (Press et al.  1992), where *ν* = *N**λ* − 1 is the effective number of degrees of freedom (for *N**λ* = 12 wavelength points) and Γ(*x*) is the complete Gamma function. When *χ*2 ≪ *ν* the above probability approaches unity (i.e., the modeled profile is a good match to the observed profile), and when *χ*2 ≫ *ν* the above probability is negligibly small. We thus obtained the total probability $P = P\_{\rm R} P\_{\rm S}$ as a function of the three main O5 + variables of each data cube, for each observation at the height *z* consistent with that data cube. The question of what is considered to be a large or small probability is open to some interpretation. Below, we often use a standard “one sigma” probability $P\_{1\sigma} = 1 - \mbox{erf} (1/\sqrt{2}) = 0.317$ as a fiducial value above which the solutions are considered to be good matches with the data. Empirical Model Results ======================= In this section we present results for the most probable values of the O5 + ion properties between 1.5 and 3.5 *R*⊙. In § 4.1, we show how the essential information inside the three-dimensional probability cubes can be extracted and analyzed in a manageable way. In § 4.2, the optimal values for O5 + outflow speed and perpendicular temperature are presented for models C and R. The resulting values of *u**i* ∥ and *T**i* ⊥ are consistent with earlier determinations of preferential ion heating and acceleration with respect to protons. In § 4.3, we discuss the determination of the anisotropy ratio *T**i* ⊥/*T**i* ∥ for models C and R, which is less certain than the other two quantities. We then focus in detail on a single representative height in § 4.4 in order to determine how models C and R can give rise to qualitatively different conclusions about the ion temperature anisotropy. Finally, in § 4.5 we extract information from both models C and R about the O5 + ion concentration in the extended corona—i.e., we compute the ratio $n\_{{\rm O}^{5+}} / n\_{\rm H}$ from the comparison of observed and modeled *λ*1032 total intensities. Deriving Ion Properties from the Data Cubes ------------------------------------------- We constructed two sets of radially dependent data cubes: one for the model C assumptions for *n**e*, *T**e*, and flux-tube geometry, and one for model R. Each set consisted of 13 data cubes with observation heights *z* = 1.5, 1.6, 1.7, 1.8, 1.98, 2.11, 2.3, 2.42, 2.563, 2.7, 3.0, 3.09, and 3.565 *R*⊙. These values were chosen to align with the observed data points shown in Figure 1. Any discrepancies between the observed and modeled heights never exceeded  ± 0.065 *R*⊙, and for the whole data set the average absolute value of the discrepancy was only 0.012 *R*⊙. We then constructed 49 individual “probability cubes” for each of the data points for which both *V*1/*e* and ${\cal R}$ exist. Even for just a single comparison between an observation and a data cube at one height, it is a challenge to display the full three-dimensional nature of the probability “cloud” *P*(*u**i* ∥, *T**i* ⊥, *T**i* ⊥/*T**i* ∥). We limit ourselves to showing lower-dimensional projections that keep only the highest probability values taken over the axes that are not being shown. As an example, in Figure 4 we display two-dimensional contours of *P* as a function of all three unique pairings of the three axis-quantities of the data cube. The specific comparison is between the measurement shown in Table 1 from 1997 January 5 (3.07 *R*⊙, *V*1/*e* = 690 km s− 1) and the model R data cube constructed at *z* = 3.09 *R*⊙. In all three contour plots, the probability shown at each location is a maximum taken over the third quantity that is orthogonal to the projection plane. Thus, for regions with a low probability in these diagrams, we can be assured that there are *no* values of the unplotted coordinate that can give synthetic line profiles in agreement with the observations. Figure 4*a* shows an approximate anticorrelation between the ion outflow speed and the perpendicular kinetic temperature in the subset of generally “successful” models. This arises mainly because the lines can be broadened both by microscopic LOS motions (roughly proportional to *T**i* ⊥) and by the projection of the superradially flowing bulk outflow speed along the LOS (which goes as *u**i* ∥). When one of these quantities goes up, the other must go down in order to match a given observed line profile. Figure 4*b* shows that the region of parameter space with the larger contribution by *T**i* ⊥ (upper left) also requires a large anisotropy ratio, but the region with the larger contribution by bulk LOS motions (lower right) may be able to match the observations with an isotropic velocity distribution (i.e., *T**i* ⊥/*T**i* ∥ ≈ 1). The large amount of information in contour plots like Figure 4 can be collapsed down to a smaller list of parameters. We created three one-dimensional probability curves as a function of each of the three main axis quantities, with the maximum values extracted from the full plane subtended by the remaining two neglected quantities. Thus, we define the reduced probability functions *P**u*(*u**i* ∥), *P**t*(*T**i* ⊥), and *P**a*(*T**i* ⊥/*T**i* ∥) (the subscript “*a*” denotes the anisotropy ratio). These functions are generally peaked at some high value close to 1 and exhibit lower values far from the optimal solutions. Figure 5 shows these reduced probabilities for the same example case shown in Figure 4. The reduced probabilities for *u**i* ∥ and *T**i* ⊥ are peaked relatively sharply around their most probable values. Note that we plot the perpendicular kinetic temperature in units of a most-probable speed $w\_{i \perp} = (2 k\_{\rm B} T\_{i \perp} / m\_{i})^{1/2}$ in order to facilitate comparison with earlier papers. The reduced probability for the anisotropy ratio, shown in Figure 5*b*, is less centrally peaked and thus the best solution for this value is less certain. The peak value corresponds to a most-probable anisotropy ratio of *T**i* ⊥/*T**i* ∥ ≈ 6, but note that the probability of isotropy remains reasonably high at  ∼ 40%. It is interesting to contrast the exhaustive data-cube-search technique used here with the more straightforward approaches taken in earlier papers. For example, Raouafi & Solanki (2004, 2006) simulated the properties of the *λ**λ*1032, 1037 lines after first fixing the radial variation of the outflow speed and ion temperature. Figure 5*b* shows an illustrative “cut” through the data cube at *fixed* values of *u**i* ∥ = 600 km s− 1 and *w**i* ⊥ = 215 km s− 1 at *r* = 3.09 *R*⊙ (similar to the values used by Raouafi & Solanki 2006). In this case, the most probable value of the anisotropy ratio is surprisingly close to 1, as was also assumed by Raouafi & Solanki. The apparent consistency with the observations (i.e., a value of *P**a* of about 30%) may be misleading if the rest of the data cube is not searched. Thus, we can assert that any results concerning the anisotropy ratio that were obtained by *not* searching the full range of possibilities for the ion parameters are potentially inaccurate. The ultimate goal of the empirical modeling process is to characterize the peak values and widths of the reduced probability curves, in order to obtain the optimal measured values (with uncertainty limits) for the relevant O5 + plasma properties. The most satisfactory outcome, of course, would be very narrow peaks that occur far from the edges of the parameter space, but this is not always the case. After some experimentation, we chose to use weighted means to obtain the peak values, i.e., $$\langle x \rangle \, = \, \frac{\int dx \, x \, P\_{x}(x)}{\int dx \, P\_{x}(x)} \label{eq:Pmom}$$ where *x* denotes any of the three axis quantities *u**i* ∥, *T**i* ⊥, or log(*T**i* ⊥/*T**i* ∥). We used the logarithm of the anisotropy ratio in equation ([eq:Pmom]) because tests showed that if the ratio itself (which spans three orders of magnitude) was used, the above mean would be weighted strongly toward the largest values even when the maximum of the probability distribution is at much lower values. We experimented with using the variance, or second moment, of the reduced probability distributions to characterize the widths of “error bars” for each measurement. However, because the probability curves are generally not symmetric around the peak values, the second moment often did not accurately give a range of values with reasonable probabilities. Instead, we performed a straightforward search for the range of probabilities that are higher than the threshold value *P*1*σ* ≈ 0.317 discussed above. The lower and upper limits of that range were taken to be the ends of the uncertainty bounds for each measurement. Preferential Ion Heating and Acceleration ----------------------------------------- Figure 6 shows the weighted mean and error-bar quantities for the O5 + plasma properties, defined as in the previous subsection, as a function of heliocentric height. The results from model C and model R are plotted in two different colors, with only the error bars of model R shown for clarity. Here we focus on the ion outflow speed (Fig.  6*a*) and the perpendicular ion temperature (Fig.  6*b*). On average, the derived values of ⟨*u**i* ∥⟩ and ⟨*w**i* ⊥⟩ were consistent between the two sets of models. To quantify the impact of varying the electron density, electron temperature, and flux-tube geometry, we computed ratios of the model C values to the model R values for each data point. For the 49 data points taken together, the mean value of the ratio of the outflow speeds was 1.002, with a standard deviation of 19%, and the mean value of the ratio of perpendicular most-probable speeds was 0.991, with a standard deviation of 14%. This shows that the determination of these parameters is relatively insensitive to the choices of electron density, electron temperature, and flux-tube geometry. The radial dependence of the derived ⟨*u**i* ∥⟩ values in Figure 6*a* is similar to that of the O5 + empirical models B1 and B2 given by Cranmer et al.  (1999). Note the emergence of a natural trend of radial acceleration in ⟨*u**i* ∥⟩, with the possible exception of the data points at $r \gtrsim 3.5 \, R\_{\odot}$. This is especially serendipitous given that each data point was analyzed independently of all others. The derived O5 + outflow speeds support earlier claims of *preferential ion acceleration* in coronal holes. At *r* = 2.5 *R*⊙, the range of ion outflow speeds that gives rise to high probabilities of agreement with the data points is approximately 280–500 km s− 1 At this height, these values are substantially larger than bulk (proton-electron) solar wind outflow speeds derived via mass flux conservation. Figure 41 of Kohl et al.  (2006) showed a selection of 12 bulk outflow speed models derived using all possible combinations of four *n**e* models and three coronal-hole geometries. At *r* = 2.5 *R*⊙, these 12 models gave a range of bulk outflow speeds of 115–300 km s− 1. Despite the small degree of overlap between the two ranges, the mean value of the O5 + range (390 km s− 1) exceeds the mean value of the mass flux conservation range (208 km s− 1) by almost a factor of two. Also, proton outflow speeds derived from Ly*α* Doppler dimming (from a selection of papers all dealing with polar coronal holes at the 1996–1997 solar minimum) were shown in Figure 41 of Kohl et al.  (2006). At 2.5 *R*⊙, the range of these values is 160–260 km s− 1—with a mean of 210 km s− 1—which is still significantly lower than the range of O5 + ion outflow speeds discussed above. In Figure 6*b*, the trend of radial increase in ⟨*w**i* ⊥⟩ is also roughly similar to that found by Cranmer et al.  (1999), especially below about 2.3 *R*⊙. At larger heights, though, there appears to be less evidence for a systematic increase than existed in the model B1 and B2 curves. This could have resulted either from the inclusion of the new data points or from the more exhaustive treatment of uncertainties in the new parameter determination method described above. However, if one takes all of the derived ⟨*w**i* ⊥⟩ values for *r* ≥ 2 *R*⊙ and fits to a straight line, the best-fitting slope is still increasing with height at a rate of 50 km s− 1 per *R*⊙. This is about a third of the  ∼ 150 km s− 1 per *R*⊙ slope in the B1 and B2 models. The most-probable speeds ⟨*w**i* ⊥⟩ shown in Figure 6*b*, although slightly smaller than those given by the Cranmer et al.  (1999) model B1 and B2 curves at some heights, still show definite evidence for *preferential ion heating.* The mean value of the ⟨*w**i* ⊥⟩ values at heights *r* ≥ 2.5 *R*⊙ in Figure 6*b* is 363 km s− 1, with a standard deviation of 73 km s− 1. Between 2.5 and 3 *R*⊙, the perpendicular proton most-probable speeds derived from Ly*α* were about 210–240 km s− 1, with a mean value of about 225 km s− 1 (see models A1 and A2 of Cranmer et al.  1999). The fact that the O5 + mean value exceeds the proton mean value by almost two standard deviations implies that the O5 + kinetic temperature at this height is very likely to be more than “mass proportional” (i.e., implying an oxygen kinetic temperature of 130 MK, or more than 40 times the proton kinetic temperature of  ∼ 3 MK. It is important to note that the derived kinetic temperatures are likely to be a combination of thermal and nonthermal motions. The ion-to-proton kinetic temperature ratio of  ∼ 40, derived above, is likely to be a *lower limit* to the true ratio of thermal, or microscopic temperatures. If unresolved wave motions are deconvolved from the empirical values of ⟨*w**i* ⊥⟩, the proton most-probable speed will be reduced by a relatively larger amount than the O5 + speed. As an example, the theoretical polar coronal hole model of Cranmer et al.  (2007) has a LOS-projected Alfvén wave amplitude at *r* = 2.5 *R*⊙ of 116 km s− 1 (this is also plotted in Fig.  6*b*). Converting these motions into temperature-like units and subtracting from both values given above, one obtains an O5 + perpendicular temperature of 115 MK and a proton perpendicular temperature of 2.2 MK. The ratio of ion to proton temperatures has thus increased from about 40 to 50. In any case, it is clear that the dominant contributor to the ion kinetic temperature is the true “thermal” temperature, with only a relatively minor impact from broadening due to macroscopic motions. The Ion Anisotropy Ratio ------------------------ Figure 6*c* illustrates the largest discrepancy between the empirical models of Cranmer et al.  (1999) and the present models (both C and R). Above a height of  ∼ 2.5 *R*⊙, models B1 and B2 demanded a strong anisotropy ratio *T**i* ⊥/*T**i* ∥ > 10, but the optimal ratios derived in this paper seem to cluster between 2 and 10 with no discernible radial dependence. It is important to note, though, that there is considerable overlap of the *uncertainties* between the old and new ranges of ⟨*T**i* ⊥/*T**i* ∥⟩. Several of the error bars shown in Figure 6*c* extend up into the range of ratios from models B1 and B2. Also, the dotted curves that illustrate models B1 and B2 correspond to the “optimal” values of the anisotropy ratio from Kohl et al.  (1998) and Cranmer et al.  (1999); the uncertainties in those models are not shown. Because the derived values of the kinetic temperature ratio ⟨*T**i* ⊥/*T**i* ∥⟩ in Figure 6*c* exceed unity by only a relatively small amount, it is worthwhile to examine whether the numerator (*T**i* ⊥) may have been enhanced by unresolved wave motions perpendicular to the magnetic field. In other words, for a realistic model of perpendicular wave amplitudes in polar coronal holes, we investigate whether a truly *isotropic* microscopic velocity distribution could have given an effective anisotropy ratio that exceeds 1. We compute such an effective anisotropy ratio as $$A\_{\rm eff} \, = \, \frac{1}{1 - (\langle \delta v\_{\perp}^{2} \rangle\_{x} / w\_{i \perp}^{2})} \,\,,$$ where ⟨*δ**v*⊥2⟩*x* is the square of the frequency-integrated Alfvén wave amplitude divided by two to sample only the motions along one of the two perpendicular directions (i.e., only along the LOS or *x* axis). As above, we used the Alfvén wave properties from the turbulence-driven polar coronal hole model of Cranmer et al.  (2007). The model wave amplitude is plotted in Figure 6*b* and the quantity $A\_{\rm eff}$ is plotted in Figure 6*c*. The condition $A\_{\rm eff} \approx 1$ corresponds to the situation where the amplitudes are too small to contribute to the anisotropy ratio (as defined in the empirical models). Below *r* ≈ 1.7 *R*⊙, the curve in Figure 6*c* shows that $A\_{\rm eff}$ does indeed exceed 1 by about the amount computed from the UVCS data. At these low heights, the derived value of ⟨*w**i* ⊥⟩ is of the same order of magnitude as the wave amplitude, so the latter can “contaminate” the determination of the true perpendicular most-probable speed. At heights larger than about 2 *R*⊙, though, the wave amplitudes are small in comparison to the derived ⟨*w**i* ⊥⟩ values, and thus $A\_{\rm eff} \approx 1$. We thus conclude that above 2 *R*⊙, any derived anisotropy ratio ⟨*T**i* ⊥/*T**i* ∥⟩ is likely to be truly representative of the microscopic velocity distribution and not affected by wave motions. Despite the comparatively low values of the anisotropy ratio shown in Figure 6*c* (⟨*T**i* ⊥/*T**i* ∥⟩ ≈ 2–10), we should emphasize that these values are often significantly different from unity. It is important to note that *all* of the data points have their largest reduced probability—measured either using the weighted mean defined above or by simply locating the maximum value—for anisotropy ratios larger than unity. The preponderance of evidence for anisotropy is also illustrated in Figure 7, which shows the probability that each measurement could be explained by an isotropic O5 + velocity distribution. In other words, Figure 7 gives the value of *P**a*(1) for each probability cube. Taken together, a significant majority of the values (about 78% of the total number) fall below the fiducial one-sigma value of *P*1*σ*, indicating that isotropy should not be considered a “baseline” assumption. Below about *r* = 2 *R*⊙, a few of the measurements correspond to large probabilities that an isotropic distribution can explain the observations. Note from Figure 6*c*, though, that the most-probable anisotropy ratios for these measurements tend to be greater than 1, but some of the error bars extend down past *T**i* ⊥/*T**i* ∥ = 1. However, between 2.1 and 2.7 *R*⊙ the probability of isotropy is very small for all of the observed data points. Above 3 *R*⊙, some of the values of *P**a*(1) become large again, but we believe this may be due to the relatively high observational uncertainties on the intensities and line widths at these large heights (see below). To better understand the impact of observational uncertainties on the probability of isotropy, Figure 8 shows the full *P**a*(*T**i* ⊥/*T**i* ∥) curves for one specific measurement at *r* = 3.07 *R*⊙ (i.e., the same measurement used in Fig.  5). The multiple curves were constructed by multiplying the known observational uncertainties *δ**V*1/*e* and $\delta {\cal R}$ by arbitrary factors *ε*. Generally, larger uncertainties lead to lower *χ*2 values when comparing the observed and modeled line shapes, and thus to larger probabilities of agreement between the observed and modeled profiles. Interestingly, though, the anisotropy ratio *T**i* ⊥/*T**i* ∥ at which the maximum probability occurs remains roughly constant when *ε* is varied between 0.5 and 2. Thus, if future observations above 3 *R*⊙ were to obtain the same general range of values for *V*1/*e* and ${\cal R}$ but with lower uncertainties, it could provide stronger evidence for ion anisotropy up at these heights. Although Figure 6*c* does not seem to indicate a substantial difference between models R and C, it is useful to compare these models in some additional detail. For all data points, the mean ratio of model C to model R anisotropy ratios was 1.199, but the large standard deviation (76%) shows that the models are often quite different from one another. Taking only the heights above 2.2 *R*⊙, the mean ratio of model C to model R anisotropy ratios increases to 1.514, indicating that *on average* model C generates larger anisotropies than model R over the height range where anisotropies appear to be required. Figure 9 shows the ratio of model C to model R anisotropy ratios as a function of height. Below *r* ≈ 2.2 *R*⊙ the two models produce roughly the same result for the anisotropy ratio. Above that height, the solutions split into two groups: one where model C produces a substantially larger ratio (2–3 times that of model R), and one where model C produces a comparable or slightly smaller ratio than model R. Note that the height range of 3.0–3.1 *R*⊙—over which model R predicted a rise in the probability of isotropy (see Fig.  7)—strongly favors larger anisotropies for model C. Varying the Electron Density, Electron Temperature, and Geometry ---------------------------------------------------------------- One of the main motivations for this paper was to explore why the results of Raouafi & Solanki (2004, 2006) were so different from earlier results (e.g., Cranmer et al.  1999) regarding the O5 + anisotropy ratio. In this subsection, we study the differences between model R and model C in more detail by focusing on the shapes of the reduced probability distributions for one representative data point. As in Figures 4, 5, and 8, we used the probabilities generated by comparing the UVCS/*SOHO* measurement from 1997 January 5 (*r* = 3.07 *R*⊙, *V*1/*e* = 690 km s− 1) with data cubes constructed with various assumptions. This data point is denoted by a filled circle in Figure 9, and it is clear that this point is representative of the majority of the data points (5 out of 7) at *r* ≈ 3 *R*⊙. Figure 10 shows a range of reduced probability curves *P**a*(*T**i* ⊥/*T**i* ∥) that were computed from data cubes constructed with various combinations of the model R and model C parameters. The three-letter names for the models denote the individual choices for *n**e*, flux-tube geometry, and *T**e* (in that order). The “pure” model R and model C cases are thus called RRR and CCC. Before examining the impact of the individual parameters on the reduced probability curves, we note that the model CCC curve in Figure 10*b* mirrors almost exactly the results of Kohl et al.  (1998) and Cranmer et al.  (1999) at *r* ≈ 3 *R*⊙: the most likely O5 + anisotropy ratio ranges between 10 and 100, and an isotropic distribution is highly improbable. Model CCC exhibits a most probable ion outflow speed ⟨*u**i* ∥⟩ = 508 km s− 1, which is only marginally smaller than the model RRR value of 521 km s− 1. Model CCC has an optimal solution for ⟨*w**i* ⊥⟩, though, of 541 km s− 1, which is 23% larger than the corresponding value of 440 km s− 1 for model RRR (i.e., a 51% higher value of ⟨*T**i* ⊥⟩ for model CCC). Model CCC tended to produce more line broadening via “thermal” motions near the plane of the sky, and model RRR tended to produce more line broadening via bulk outflow projected along the LOS. The other curves shown in Figure 10 explore which of the three varied parameters were most responsible for the differences between models RRR and CCC. We see immediately that the choice of electron temperature *T**e*, which in our models impacts only the collision rate *q*12, is relatively unimportant. The 8 curves can thus be separated into 4 pairs, each of which has the same choice for *n**e* and flux-tube geometry (i.e., RRX, RCX, CRX, and CCX, where ‘X’ denotes either option for *T**e*). The overall insensitivity to electron temperature is evident from the fact that the two curves in each pair are virtually indistinguishable from one another. Figure 10 shows that the unique features of the RRX models (i.e., a higher probability of isotropy and a strong peak at *T**i* ⊥/*T**i* ∥ < 10) are only present when *both* the electron density and flux-tube geometry are treated using model R. The models with only one of these two parameters treated using model R (i.e., RCX and CRX) appear more similar to the CCX models. At large values of the anisotropy ratio, both the RCX and CRX models are virtually identical to the CCX models. At low values of the anisotropy ratio, the CRX model is roughly intermediate between the CCX and RRX models. Generally, though, the *combination* of the model R assumptions for electron density (e.g., Doyle et al.  1999) and flux-tube geometry (e.g., Banaszkiewicz et al.  1998) are needed to produce broad enough profiles via outflow speed projection along the LOS to explain the observations *without* the need for extreme temperature anisotropies. Specifically, this enhanced LOS projection effect arises for two coupled reasons. 1. As seen in Figure 2*c*, the Banaszkiewicz et al.  (1998) flux tubes are tilted to a greater degree away from the radial direction than the Cranmer et al.  (1999) flux tubes. Because of these larger values of *δ*, a larger fraction of the outflow speed *u**i* ∥ is projected into the LOS direction (when ∣*x*∣ > 0) for model R. 2. Figure 3 shows that the Doyle et al.  (1999) electron density does not drop as rapidly with increasing height (between about 3 and 10 *R*⊙) as nearly all of the other plotted *n**e* functions. Thus, for observation heights at about 3 *R*⊙, the Doyle et al.  (model R) electron density provides a relative enhancement for points along the foreground and background (∣*x*∣ > 0) in comparison to the plane of the sky (*x* = 0). Note also from Figure 3 that the electron density used for model C is about 10% to 30% larger than that used for model R at the heights of interest (*r* ≈ 3–4 *R*⊙). A higher value of *n**e* is expected to result in emission lines that are dominated more by the collisional component of the emissivity, which scales as *n**e*2 (eq. [[eq:jcoll]]), with a correspondingly weaker contribution from the radiative component, which scales linearly with *n**e* (eq. [[eq:jres]]). Because of the different density dependences, the collisional component is not extended as far along the LOS as the radiative component. Thus, models with higher densities would be expected to behave more like model C (with emission dominated by the plane of the sky), and models with lower densities would be expected to behave more like model R (with emission extended over a larger swath of the LOS). To explore the effects of varying the electron density, we repeated the model R data cube analysis (for the fiducial height shown in Figs.  8 and 10) with *n**e*(*r*) multiplied by constant factors. Figure 11 shows the resulting reduced probability curves as a function of the O5 + temperature anisotropy. A model with half of the Doyle et al.  (1999) electron density has a lower preferred value of *T**i* ⊥/*T**i* ∥ and a much higher probability of isotropy than the standard model R. A model with double the Doyle et al.  (1999) electron density resembles model C in that there is a high preferred range of *T**i* ⊥/*T**i* ∥ and a low probability of isotropy. Despite the large change in appearance of the *P**a* curves as shown in Figure 11, the preferred values of the outflow speed and perpendicular kinetic temperature do not vary by very much as *n**e* is varied up and down by a factor of two: ⟨*u**i* ∥⟩ changes by only about  ± 8% (increasing as *n**e* decreases), and ⟨*w**i* ⊥⟩ changes by only about  ± 10% (increasing as *n**e* increases). These determinations appear to be relatively insensitive to the choices for *n**e* and flux-tube geometry. The ratio of collisional emissivity to the total line emission changes dramatically for the models shown in Figure 11. For model RRR, the optimal model in the data cube exhibited a collisional fraction of 93.7% for the *λ*1032 line and a fraction of 44.6% for the *λ*1037 line (the latter being “Doppler pumped”). The model with half of the model R density had lower collisional fractions for the *λ**λ*1032, 1037 lines of 88.2% and 28.7%, respectively. The model with double the model R density had higher collisional fractions of 96.8% and 61.7%. It is important to note, however, that the differences in collisionality for the models shown in Figure 10 are not as drastic as those shown in Figure 11. Model CCC exhibited collisional fractions for the *λ**λ*1032, 1037 lines of 90.7% and 47.0%. These values are only a few percentage points different from the model RRR fractions. The other intermediate models have values that cluster between those of models RRR and CCC. The larger value of *n**e* in the plane of the sky for model C is compensated—to some degree—by the slower decrease in *n**e* along the LOS for model R. Thus, despite the superficial resemblance between the model CCC curve in Figure 10 and the “double *n**e*” curve in Figure 11, one cannot invoke a varying amount of collisionality to explain the differences between models R and C. The LOS extension effects discussed above are more subtle than simply varying *n**e* by a constant amount. Another way we explored the dependence of the reduced probabilities on electron density was to produce a set of three other models with alternate functional forms for *n**e*(*r*), but the same flux-tube geometry and *T**e* as used in model R. These models utilized the mean electron density curves from Guhathakurta & Holzer (1994), Fisher & Guhathakurta (1995), and Guhathakurta et al.  (1999) (see also Fig.  3), and the data cubes were created only at the fiducial height of 3.09 *R*⊙.[4](#fn4) The reduced probabilities *P**a* for these models all fell within the general range of variation illustrated in Figure 10 and are not plotted. However, the construction of these models increased the number of data cubes with “model R-like” flux-tube and *T**e* parameters to seven: i.e., these three new ones, the three models shown in Figure 11, and model CRR (with a model C electron density). We performed a regression analysis on the seven values of the probability of isotropy *P**a*(1) and the weighted mean anisotropy ⟨*T**i* ⊥/*T**i* ∥⟩ to find the optimal functional dependence on two “independent” variables that characterize the electron density: $$n\_{3} \, \equiv \, \frac{n\_{e}(3 \, R\_{\odot})}{10^{6} \, \mbox{cm}^{-3}} \,\,, \,\,\,\,\,\,\, n\_{6} \, \equiv \, \frac{n\_{e}(6 \, R\_{\odot})}{10^{6} \, \mbox{cm}^{-3}}$$ where the arbitrary normalizations serve only to keep the combined quantities (discussed below) of order unity. The quantity *n*3 characterizes the electron density in the plane of the sky of the observation, and the ratio *n*3/*n*6 characterizes the large-scale density gradient and thus the relative enhancement of foreground and background regions along the LOS. From the discussion above, we expect that larger values of both *n*3 and *n*3/*n*6 should result in lower probabilities of isotropy and higher most-probable values of the anisotropy ratio. Indeed, the regression analysis found that the modeled values of these quantities exhibited the lowest combined *χ*2 spread for a single independent variable that scales as *n*31.83/*n*6 (close to the product of *n*3 and *n*3/*n*6). Figure 12 shows these values as well as the best-fitting quadratic functions to *P**a*(1) and ⟨*T**i* ⊥/*T**i* ∥⟩. The combined dependence on both *n**e* and its radial gradient appears to be a key factor in determining the relative probabilities of isotropy and strong anisotropy. Finally, we must evaluate which sets of choices for the electron density and the flux-tube geometry are most consistent with observed polar coronal holes. Figure 3 does seem to indicate that most measured *n**e* curves (as well as one example theoretical result for *n**e*) behave more like the “model C” (Cranmer et al.  1999) case than the “model R” (Doyle et al.  1999) case. Between the heights of about 3 and 10 *R*⊙, the majority of the curves in Figure 3 exhibit a *steeper* radial decrease than the Doyle et al.  (1999) model. Thus, the CCX or CRX models shown in Figure 10*b* appear to be more consistent with observations than the RRX or RCX models in Figure 10*a*. This then implies that substantial O5 + anisotropy ($T\_{i \perp} / T\_{i \parallel} \gtrsim 10$) is also preferred at large heights. The optimal choice of the flux-tube geometry is less certain. Ideally, observations of the nonradial shapes of polar plumes should be able to constrain the magnetic field geometry (see, e.g., Wang et al.  2007; Pasachoff et al.  2007), but it is unclear whether existing plume observations would be able to distinguish the subtle differences between, e.g., Figures 2*a* and 2*b*. In any case, the geometry does not seem to be as major an issue as the electron density, since the variance between the four curves in Figure 10*b* is not large. Oxygen Ion Number Density ------------------------- By comparing the observed and modeled total intensities of the *λ*1032 line, it is possible to derive firm limits on the combined elemental abundance and ionization fraction of O5 +. The ion concentration is useful both as a tracer of fast and slow solar wind streams (e.g., Zurbuchen et al.  2002) and as a possible diagnostic of the amount of preferential heating deposited in coronal holes (Lie-Svendsen & Esser 2005). A first attempt at determining the O5 + number density from UVCS data was made by Cranmer et al.  (1999), but the “data cube search” technique developed in this paper allows a much more definitive set of measurements to be made. The numerical code that computes the line emission used an arbitrary constant value for the ratio $f\_{0} = n\_{{\rm O}^{5+}} / n\_{e}$ of 4.959 × 10− 6, which was derived from the oxygen abundance of Anders & Grevesse (1989) and the measured O5 + ionization fraction of Wimmer-Schweingruber et al.  (1998). This is merely a fiducial value that does not affect the final determination of this ratio for a given UVCS observation. When comparing the results from an empirical model data cube with a specific observation, the probability values *P*(*u**i* ∥, *T**i* ⊥, *T**i* ⊥/*T**i* ∥) are used to construct a weighted mean of the modeled *λ*1032 total intensity using equation ([eq:Pmom]), as well as lower and upper limits using the full range of models with probabilities that exceed *P*1*σ*. These values are converted into “observed” ion concentration ratios $f\_{\rm obs}$ by assuming that the ratio $f\_{\rm obs} / f\_{0}$ is equal to the ratio of the observed to the modeled values of $I\_{\rm tot}$. By using the modeled weighted mean, lower limit, and upper limit of $I\_{\rm tot}$ we obtain the weighted mean, upper limit, and lower limit of $f\_{\rm obs}$. (Note that the lower limit of $I\_{\rm tot}$ gives the upper limit of $f\_{\rm obs}$ and vice versa.) Finally, $f\_{\rm obs}$ is converted into the ratio of O5 + to total hydrogen number density ($n\_{{\rm O}^{5+}} / n\_{\rm H}$) by multiplying by a factor of 1.1 (assuming a helium to hydrogen number density ratio of 5%). Figure 13 shows the resulting ion concentration ratios as a function of height for the full range of model C data points. There is a hint of systematic radial increase at low heights. A similar radial increase would be predicted for ions that flow substantially faster than protons in the corona (by a factor of two) and then flow only  ∼ 10% faster than protons at 1 AU. Above 2 *R*⊙, though, Figure 13 does not show any definitive radial trend. Taking account of the uncertainty limits, the data appear consistent with the O5 + ionization fraction being more or less “frozen in” (see, e.g., Hundhausen et al.  1968; Owocki et al.  1983; Ko et al.  1997). A linear least squares fit (using the logarithm of $n\_{{\rm O}^{5+}} / n\_{\rm H}$ as the ordinate) is also shown, but the relatively high uncertainties at large heights preclude any reliable interpretation of the slope. Figure 13 also shows a range of values that would have been expected from prior studies of both the oxygen abundance and the O5 + ionization fraction. The abundance ratio ($n\_{\rm O} / n\_{\rm H}$) ranges from a relatively recent historical high of 8.5 × 10− 4 (Anders & Grevesse 1989) to the more recent—and somewhat controversial—low of 4.6 × 10− 4 (Asplund et al.  2004, 2005; Grevesse et al.  2007). The ionization fraction ($n\_{\rm O^{5+}} / n\_{\rm O}$) was measured in situ by the SWICS instrument on *Ulysses* (Wimmer-Schweingruber et al.  1998) to be about 0.0058 in the fast solar wind. Models that include the freezing in of heavy ions have produced values for this ratio from 0.0035 (Esser & Leer 1990) to about 0.005 (Chen et al.  2003). Although collisional ionization equilibrium is not expected to hold in the extended corona (for polar coronal holes), it is interesting to note that for *T**e* = 106 K, both Arnaud & Rothenflug (1985) and Mazzotta et al.  (1998) give a ratio of about 0.0045. This is similar to the above values, but it varies up and down by about a factor of 50% as *T**e* is decreased or increased by only  ± 30%. We thus take tentative lower and upper limits for $n\_{\rm O^{5+}} / n\_{\rm O}$ of 0.003 and 0.006. The horizontal lines shown in Figure 13 were computed from the products of the two lower limits and the two upper limits given above for $$\frac{n\_{\rm O^{5+}}}{n\_{\rm H}} \, = \, \left( \frac{n\_{\rm O}}{n\_{\rm H}} \right) \left( \frac{n\_{\rm O^{5+}}}{n\_{\rm O}} \right) \,\,.$$ The model C data points shown in Figure 13 have a mean value of $n\_{{\rm O}^{5+}} / n\_{\rm H} = 1.52 \times 10^{-6}$ (taking a simple average) or 1.39 × 10− 6 (taking the average of the logarithms). Performing the same analysis using model R yielded values that were larger by about 5% (on average for all data points) to 20% (specifically for points at heights above  ∼ 3 *R*⊙). The standard deviations of both sets of data points gave lower and upper bounds of approximately 8 × 10− 7 and 2.4 × 10− 6 that encompass the  ± 1*σ* range. The prior studies of oxygen abundance and O5 + ionization discussed above give somewhat higher values, which extend from 1.4 × 10− 6 to 5.1 × 10− 6. Thus, both the model C and model R data points are in reasonably good agreement with the *lower limit* of the expected range, which gives some support for the recent low oxygen abundances of Asplund et al.  (2004, 2005). Discussion and Conclusions ========================== The *SOHO* mission (Domingo et al.  1995) has made significant progress toward identifying and characterizing the processes that heat the corona and accelerate the solar wind (see also Fleck & Švestka 1997; Domingo 2002; Fleck 2004, 2005). The results from the UVCS instrument regarding preferential heating and acceleration of heavy ions (i.e., O5 +) have contributed in a major way to these advances in understanding over the past decade, and it is important to verify and confirm the key features of these results. Thus, this paper has analyzed an expanded set of UVCS data from polar coronal holes at solar minimum with the goal of ascertaining whether ion temperature anisotropies are definitively present (as claimed by Kohl et al.  1997, 1998; Li et al.  1998; Cranmer et al.  1999; Antonucci et al.  2000; Zangrilli et al.  2002; Antonucci 2006; Telloni et al.  2007) or whether one can explain the observations without such anisotropies (as claimed by Raouafi & Solanki 2004, 2006; Raouafi et al.  2007). These cases were exemplified by two sets of empirical models: one (model R) that was designed to replicate many of the conditions assumed by Raouafi & Solanki (2004, 2006), and one (model C) that used the same conditions as Cranmer et al.  (1999). The main conclusion of this paper is that there remains strong evidence in favor of both preferential O5 + heating and acceleration and significant O5 + ion temperature anisotropy (in the sense *T**i* ⊥ > *T**i* ∥) above *r* ≈ 2.1 *R*⊙ in coronal holes. More detailed conclusions, linked to the sections of the paper in which they were first discussed, are summarized as follows. 1. It is important to search the full range of possible O5 + ion properties and not make arbitrary assumptions about, e.g., the ion outflow speed or the ion temperature. It is clear from Figure 5*b* that if the comparison with observations is restricted to certain choices for the ion parameters, the resulting conclusions about the ion temperature anisotropy can be potentially misleading. (§ 4.1) 2. The derived ion outflow speeds *u**i* ∥ and perpendicular kinetic temperatures *T**i* ⊥ exhibit values similar to those reported by Kohl et al.  (1998) and Cranmer et al.  (1999), independent of the choices of electron density and flux-tube geometry. There is significant evidence for preferential ion heating and ion acceleration with respect to protons, although the radial rate of increase of *T**i* ⊥ may be slightly lower than that given by Cranmer et al.  (1999). The large values of *T**i* ⊥ appear to be due to true “thermal” motions and not unresolved wave motions. (§ 4.2) 3. For heights above about 2.1 *R*⊙, the models in this paper yielded higher probabilities of agreement with the UVCS observations for *anisotropic velocity distributions* than for isotropic distributions. The UVCS observations between the radii of 2.1 and 2.7 *R*⊙ were found to have probabilities of isotropy below about 10% (see Fig.  7), no matter what was assumed for the coronal electron density or flux-tube expansion (i.e., for either model R or model C). Even when using coronal properties that seemed to maximize the probability of isotropy (e.g., model R), 78% of the UVCS data points exhibited probabilities of isotropy below our threshold one-sigma value of  ∼ 32%. (§ 4.3) 4. The UVCS data at heights at and above 3 *R*⊙ can be used to put limits on the likelihood of strong O5 + temperature anisotropies. A key factor in discriminating between empirical models that either require or do not require a substantial anisotropy is the degree of *extension along the line of sight (LOS)* of the emissivity. This extension is driven strongly by the rate of radial decrease in the electron density. The relatively shallow slope of *n**e*(*r*) used in model R (from eq. [[eq:neD99]]) appears to be an “outlier” when compared to other observational and theoretical determinations of the electron density profile in coronal holes (see Fig.  3). Most other *n**e*(*r*) curves exhibit a steeper radial decrease and thus a lesser degree of LOS extension for the emissivities. Our model C, which utilized the empirical model parameters derived by Cranmer et al.  (1999), had a representative “steep” electron density profile and thus required a substantial O5 + temperature anisotropy to explain the UVCS observations above *r* ≈ 3 *R*⊙. (§ 4.4) 5. Models that exhibit enough LOS extension to reproduce the observed UVCS line profiles and intensities *without a temperature anisotropy* appear to require both (1) an electron density that decreases shallowly with increasing height, and (2) a highly superradial flux-tube geometry that projects a large fraction of the outflow speed vector into the LOS. Our model R, designed to be similar to the models used by Raouafi & Solanki (2004, 2006), exhibited both of these conditions and thus had higher probabilities of an isotropic velocity distribution at heights above *r* ≈ 3 *R*⊙. (§ 4.4) 6. At the largest heights ($r \gtrsim 3 \, R\_{\odot}$), the uncertainties in the existing UVCS measurements make difficult a firm determination of the anisotropy ratio. The analysis technique developed in this paper takes full account of these observational uncertainties. Future observations with smaller observational uncertainties (see Fig.  8) should yield correspondingly “sharper” probability distributions for the anisotropy ratio and thus better determinations of this quantity. (§ 4.3) 7. Total intensities of the *λ**λ*1032, 1037 lines constrain the ion concentration ratio $n\_{{\rm O}^{5+}} / n\_{\rm H}$ to be approximately 1.5 × 10− 6, with at least a factor of two range of uncertainty. If the freezing in of O5 + ions is considered to be relatively well understood, then this value implies a relatively low oxygen abundance in agreement with the recent downward revision of Asplund et al.  (2004, 2005). (§ 4.5) Because of existing observational uncertainties in the electron density, flux tube geometry, and line parameters such as *V*1/*e* (the line width) and ${\cal R}$ (the *λ*1032 to *λ*1037 intensity ratio), we cannot yet give “preferred” values for the O5 + anisotropy ratio *T**i* ⊥/*T**i* ∥ as a detailed function of height. Below *r* ≈ 2 *R*⊙, the observations are consistent with an isotropic velocity distribution. Between 2.2 and 2.7 *R*⊙, the most probable anisotropy ratio appears to range between 2 and 10 (see Fig.  6*c*). At heights around *r* ≈ 3 *R*⊙ the uncertainties are large, but there does seem to be evidence that the anisotropy ratio is likely to exceed 10 (see, e.g., Fig.  10*b*). The ratio must increase between 2 and 3 *R*⊙, but we do not yet claim to know the exact rate of increase. New observations are required to make further progress. For example, as seen in Figure 3, there is still some disagreement about the radial dependence of electron density in polar coronal holes. Measurements of the white-light polarization brightness (*p**B*) at solar minimum need to be made with lower uncertainties in the absolute radiometric calibration. Also, care must be taken to exclude time periods when high-latitude streamers may be contaminating the LOS in order to obtain a true mean electron density for a polar coronal hole. Existing measurements of the superradial geometry (as traced by, e.g., polar plumes) tend to be limited by the fact that the shapes evident in LOS-integrated images are often assumed to be identical to the shapes of flux tubes in the plane of the sky. Better constraints on the flux-tube geometry could thus be made by using stereoscopy (Aschwanden 2005), tomography (e.g., Frazin et al.  2007), or other time-resolved rotational techniques (e.g., DeForest et al.  2001) to trace the full three-dimensional shapes of the plume-filled flux tubes. Improved ultraviolet spectroscopic measurements would greatly improve our ability to determine the plasma parameters in coronal holes. We anticipate that the UVCS instrument will continue to observe polar coronal holes through the present solar minimum (2007–2008). We do not yet know whether the wide spread in line widths seen a decade ago (which exceeded the observational uncertainties) was due to a changing filling factor of polar plumes along the LOS or whether it may be connected to other kinds of time variability at the coronal base. An even wider range of variation in coronal hole properties was observed over the last solar cycle with UVCS (e.g., Miralles et al.  2006). These observations of how coronal holes evolve in size and latitude have helped to constrain the realm of possible parameter space of preferential ion heating and acceleration. There are also observations that cannot be made with UVCS that could greatly improve our understanding of ion energization in the solar wind acceleration region. For example, rather than having only O5 + (and, to a lesser extent, Mg9 +; see Kohl et al.  1999) in coronal holes, an instrument with greater sensitivity and a wider spectral range could sample the velocity distributions of dozens of additional ions with a range of charges and masses. Obtaining the distribution of derived kinetic temperatures as a function of the ion charge-to-mass ratio *Z*/*A* would put a firm constraint on the shape of the power spectrum of cyclotron-resonant fluctuations (e.g., Hollweg 1999; Cranmer 2002b). A next-generation instrument with greater sensitivity may also be able to detect subtle departures from Gaussian line shapes that signal the presence of specific non-Maxwellian distributions (e.g., Cranmer, 1998, 2001). The strong heating and acceleration of minor ions, as documented by UVCS/*SOHO*, has provided significant insight into the physics of solar wind acceleration, but the basic chain of physical processes is still somewhat unclear. Many theoretical studies have attempted to trace this chain “backwards” from the known facts of kinetic ion energization to the properties of, e.g., ion cyclotron resonant waves that can provide such energization naturally (see reviews by Hollweg & Isenberg 2002; Marsch 2005; Kohl et al.  2006). Complementary progress has also been made in constraining the large-scale properties of the MHD fluctuations that may eventually cascade down to the microscopic kinetic scales (e.g., Verdini & Velli 2007; Cranmer et al.  2007). There are still many areas of disconnect, though, between our understanding of the macroscopic MHD scales and the microscopic kinetic scales. Future theoretical work is expected to continue exploring how the combined state of turbulent fluctuations, wave-particle interactions, and species-dependent heating and acceleration can be produced and maintained. The authors would like to thank Adriaan van Ballegooijen, Mari Paz Miralles, Leonard Strachan, and John Raymond for valuable discussions. This work has been supported by the National Aeronautics and Space Administration (NASA) under grants NNG04GE84G, NNG05GG38G, NNG06GI88G, NNX06AG95G, and NNX07AL72G to the Smithsonian Astrophysical Observatory, by Agenzia Spaziale Italiana, and by the Swiss contribution to the ESA PRODEX program. Akinari, N. 2007,, 660, 1660 Allen, L. A., Habbal, S. R., & Hu, Y. Q. 1998,, 103, 6551 Anders, E., & Grevesse, N. 1989,, 53, 197 Antonucci, E. 2006,, 124, 35 Antonucci, E., Dodero, M. A., & Giordano, S. 2000,, 197, 115 Arnaud, M., & Rothenflug, R. 1985,, 60, 425 Aschwanden, M. J. 2005,, 228, 339 Asplund, M., Grevesse, N., Sauval, A. J., Allende Prieto, C., & Kiselman, D. 2004,, 417, 751 Asplund, M., Grevesse, N., & Sauval, A. J. 2005, in Cosmic Abundances as Records of Stellar Evolution and Nucleosynthesis, ed. T. G. Barnes & F. N. Bash, ASP Conf. Ser. 336, 25 Banaszkiewicz, M., Axford, W. I., & McKenzie, J. F. 1998,, 337, 940 Bevington, P. R., & Robinson, D. K. 2003, Data Reduction and Error Analysis for the Physical Sciences, 3rd ed. (Boston: McGraw-Hill) Chen, Y., Esser, R., & Hu, Y. 2003,, 582, 467 Collier, M. R., Hamilton, D. C., Gloeckler, G., Bochsler, P., & Sheldon, R. B. 1996,, 23, 1191 Cranmer, S. R. 1998,, 508, 925 Cranmer, S. R. 2001,, 106, 24937 Cranmer, S. R. 2002a,, 101, 229 Cranmer, S. R. 2002b, in SOHO-11: From Solar Minimum to Solar Maximum, ESA SP-508, 361 (arXiv astro-ph/0209301) Cranmer, S. R., Panasyuk, A. V., & Kohl, J. L. 2005, Eos Trans.  AGU, 86 (18), Joint Assembly Suppl., SP33A-02 Cranmer, S. R., van Ballegooijen, A. A., & Edgar, R. J. 2007,, 171, 520 Cranmer, S. R., et al. 1999,, 511, 481 DeForest, C. E., Lamy, P. L., & Llebaria, A. 2001,, 560, 490 Domingo, V., 2002,, 282, 171 Domingo, V., Fleck. B., & Poland, A. I. 1995,, 162, 1 Doschek, G. A., Feldman, U., Laming, J. M., Schühle, U., & Wilhelm, K. 2001,, 546, 559 Doyle, J. G., Teriaca, L., & Banerjee, D. 1999,, 349, 956 Esser, R., & Edgar, R. J. 2000,, 532, L71 Esser, R., & Edgar, R. J. 2001,, 563, 1055 Esser, R., Fineschi, S., Dobrzycka, D., Habbal, S. R., Edgar, R. J., Raymond, J. C., Kohl, J. L., & Guhathakurta, M. 1999,, 510, L63 Esser, R., & Leer, E. 1990,, 95, 10269 Feldman, W. C., & Marsch, E. 1997, in Cosmic Winds and the Heliosphere, ed. J. R. Jokipii, C. P. Sonett, & M. S. Giampapa (Tucson: U. Arizona Press), 617 Fisher, R. R., & Guhathakurta, M. 1995,, 447, L139 Fleck, B., 2004, in IAU Symp.  223, Multi-Wavelength Investigations of Solar Activity, ed. A. V. Stepanov, E.E. Benevolenskaya, & A. G. Kosovichev (Cambridge: Cambridge U.  Press), 589 Fleck, B. 2005, in Solar Magnetic Phenomena, ed. A. Hanslmeier, A. Veronig, & M. Messerotti (Dordrecht: Springer), 139 Fleck, B., & Švestka, Z., eds. 1997, The First Results from SOHO (Dordrecht: Kluwer) Frazin, R. A., Vasquez, A. M., Kamabaladi, F., & Park, H. 2007,, 671, L201 Gabriel, A. H., Bely-Dubau, F., & Lemaire, P. 2003,, 589, 623 Gardner, L. D., Atkins, N., Fineschi, S., Smith, P. L.; Kohl, J. L., Maccari, L., & Romoli, M. 2000, Proc.  SPIE, 4139, 362 Gardner, L. D., et al. 1996, Proc.  SPIE, 2831, 2 Gardner, L. D., et al. 2002, in The Radiometric Calibration of SOHO, ed. A. Pauluhn, M. Huber, & R. von Steiger, ISSI SR-002, 161 Giordano, S., Antonucci, E., Noci, G., Romoli, M., & Kohl, J. L. 2000,, 531, L79 Gloeckler, G., et al. 1992,, 92, 267 Grevesse, N., Asplund, M., & Sauval, A. J. 2007,, 130, 105 Guhathakurta, M., Fludra, A., Gibson, S. E., Biesecker, D., & Fisher, R. 1999,, 104, 9801 Guhathakurta, M., & Holzer, T. E. 1994,, 426, 782 Hefti, S., et al. 1998,, 103, 29697 Hollweg, J. V. 1999,, 104, 24781 Hollweg, J. V., & Isenberg, P. A. 2002,, 107 (A7), 1147, doi:10.1029/2001JA000270 Hundhausen, A. J., Gilbert, H. E., & Bame, S. J. 1968,, 152, L3 Judge, P. G., & McIntosh, S. W. 1999,, 190, 331 Ko, Y.-K., Fisk, L. A., Geiss, J., Gloeckler, G., & Guhathakurta, M. 1997,, 171, 345 Kohl, J. L., Noci, G., Cranmer, S. R., & Raymond, J. C. 2006,, 13, 31 Kohl, J. L., et al. 1995,, 162, 313 Kohl, J. L., et al. 1997,, 175, 613 Kohl, J. L., et al. 1998,, 501, L127 Kohl, J. L., et al. 1999,, 510, L59 Kopp, R. A., & Holzer, T. E. 1976,, 49, 43 Laming, J. M., & Lepri, S. T. 2007,, 660, 1642 Li, X., Habbal, S. R., Kohl, J. L., & Noci, G. 1998,, 501, L133 Lie-Svendsen, Ø., & Esser, R. 2005,, 618, 1057 Marsch, E. 2005, in Solar Wind 11/SOHO–16: Connecting Sun and Heliosphere, ed. B. Fleck, T. Zurbuchen, & H. Lacoste (Noordwijk, The Netherlands: ESA), ESA SP-592, 191 Marsch, E., Mühlhäuser, K.-H, Schwenn, R., Rosenbauer, H., Pilipp, W., & Neubauer, F. M. 1982,, 87, 52 Mazzotta, P., Mazzitelli, G., Colafrancesco, S., & Vittorio, N. 1998,, 133, 403 Mihalas, D. 1978, Stellar Atmospheres, 2nd ed. (San Francisco: W. H. Freeman) Miralles, M. P., Cranmer, S. R., & Kohl, J. L. 2006, in SOHO-17: Ten Years of SOHO and Beyond, ed. H. Lacoste & L. Ouwehand (Noordwijk, The Netherlands: ESA), ESA SP-617, 15.1 Morgan, H., & Habbal, S. R. 2004,, 36, 695 (abstract 29.03) Noci, G., Kohl, J. L., & Withbroe, G. L. 1987,, 315, 706 Noci, G., & Maccari, L. 1999,, 341, 275 Owocki, S. P., Holzer, T. E., & Hundhausen, A. J. 1983,, 275, 354 Pasachoff, J. M., Rušin, V., Druckmüller, M., & Saniga, M. 2007,, 665, 824 Press, W. H., Teukolsky, S. A., Vetterling, W. T., & Flannery, B. P. 1992, Numerical Recipes in Fortran: The Art of Scientific Computing (Cambridge: Cambridge U.  Press) Quémerais, E., Lallement, R., Koutroumpa, D., & Lamy, P. 2007,, 667, 1229 Raouafi, N.-E., Harvey, J. W., & Solanki, S. K. 2007,, 658, 643 Raouafi, N.-E., & Solanki, S. K. 2004,, 427, 725 Raouafi, N.-E., & Solanki, S. K. 2006,, 445, 735 Raymond, J. C., et al. 1997,, 175, 645 Reisenfeld, D. B., Gary, S. P., Gosling, J. T., Steinberg, J. T., McComas, D. J., Goldstein, B. E., & Neugebauer, M., 2001,, 106, 5693 Telloni, D., Antonucci, E., & Dodero, M. A. 2007,, 472, 299 Verdini, A., & Velli, M. 2007,, 662, 669 Wang, Y.-M., Biersteker, J. B., Sheeley, N. R., Jr., Koutchmy, S., Mouette, J., & Druckmüller, M. 2007,, 660, 882 Whang, Y. C. 1971,, 76, 7503 Wilhelm, K., Marsch, E., Dwivedi, B. N., Hassler, D. M., Lemaire, P., Gabriel, A. H., & Huber, M. C. E. 1998,, 500, 1023 Wimmer-Schweingruber, R. F., von Steiger, R., Geiss, J., Gloeckler, G., Ipavich, F. M., & Wilken, B. 1998,, 85, 387 Withbroe, G. L., Kohl, J. L., Weiser, H., & Munro, R. H. 1982,, 33, 17 Zangrilli, L., Poletto, G., Nicolosi, P., Noci, G., & Romoli, M. 2002,, 574, 477 Zurbuchen, T. H., Fisk, L. A., Gloeckler, G., & von Steiger, R. 2002,, 29 (9), 1352, doi:10.1029/2001GL013946 --- 1. Below *r* ≈ 2 *R*⊙, the existing data appear to be adequate, uncertainties are low, and there is not much of an intrinsic spread in the intensities and line widths as a function of height. Also, earlier analyses did not show that collisionless effects (ion temperature anisotropies, preferential ion heating, or differential flow) became strong until above this height.[↩](#fnref1) 2. Because the degree of radial increase in *T**i* ⊥ is relatively uncertain, we constructed an additional set of empirical models with no radial increase in *T**i* ⊥ (i.e., where the plane-of-sky values were kept constant over the LOS). The resulting probability distributions (§ 4) were virtually identical to those computed with the specified radial increase along the LOS.[↩](#fnref2) 3. The discrepancies between electron temperatures derived from spectroscopy and from frozen-in ion charge states are not yet fully understood (see, e.g., Esser & Edgar 2000, 2001; Chen et al.  2003; Laming & Lepri 2007).[↩](#fnref3) 4. We also created a data cube for the hydrostatic equation (1) of Doyle et al.  (1999), but this model exhibited an unusually strong extension along the LOS. There was a substantial contribution to the emissivity even at the LOS integration limits of *x* =  ± 15 *R*⊙, which actually led to an extremely *low* probability of isotropy. However, we discarded this model because the shallow *n**e* at large heights is clearly unphysical.[↩](#fnref4) Improved Constraints on the Preferential Heating and Acceleration of Oxygen Ions in the Extended Solar Corona ============================================================================================================= We present a detailed analysis of oxygen ion velocity distributions in the extended solar corona, based on observations made with the Ultraviolet Coronagraph Spectrometer (UVCS) on the *SOHO* spacecraft. Polar coronal holes at solar minimum are known to exhibit broad line widths and unusual intensity ratios of the *λ**λ*1032, 1037 emission line doublet. The traditional interpretation of these features has been that oxygen ions have a strong temperature anisotropy, with the temperature perpendicular to the magnetic field being much larger than the temperature parallel to the field. However, recent work by Raouafi and Solanki suggested that it may be possible to model the observations using an isotropic velocity distribution. In this paper we analyze an expanded data set to show that the original interpretation of an anisotropic distribution is the only one that is fully consistent with the observations. It is necessary to search the full range of ion plasma parameters to determine the values with the highest probability of agreement with the UVCS data. The derived ion outflow speeds and perpendicular kinetic temperatures are consistent with earlier results, and there continues to be strong evidence for preferential ion heating and acceleration with respect to hydrogen. At heliocentric heights above 2.1 solar radii, every UVCS data point is more consistent with an anisotropic distribution than with an isotropic distribution. At heights above 3 solar radii, the exact probability of isotropy depends on the electron density chosen to simulate the line-of-sight distribution of emissivity. The most realistic electron densities (which decrease steeply from 3 to 6 solar radii) produce the lowest probabilities of isotropy and most-probable temperature anisotropy ratios that exceed 10. We also use UVCS absolute intensities to compute the frozen-in O5 + ion concentration in the extended corona; the resulting range of values is roughly consistent with recent downward revisions in the oxygen abundance. Introduction ============ The physical processes that heat the solar corona and accelerate the solar wind are not yet understood completely. In order to construct and test theoretical models, there must exist accurate measurements of relevant plasma parameters in the regions that are being heated and accelerated. In the low-density, open-field regions that reach into interplanetary space, the number of plasma parameters that need to be measured increases because the plasma begins to become collisionless and individual particle species (e.g., protons, electrons, and heavy ions) can exhibit different properties. Such differences in particle velocity distributions are valuable probes of “microscopic” processes of heating and acceleration. The Ultraviolet Coronagraph Spectrometer (UVCS) operating aboard the *Solar and Heliospheric Observatory* (*SOHO*) spacecraft has measured these properties for a variety of open-field regions in the extended corona (Kohl et al.  1995, 1997, 2006). In this paper we focus on UVCS observations of heavy ion emission lines (specifically *λ**λ*1032, 1037) in polar coronal holes at solar minimum. One main goal is to resolve a recent question that has arisen regarding the existence of anisotropic ion temperatures in polar coronal holes. Several prior analyses of UVCS data have concluded that there must be both intense preferential heating of the O5 + ions, in comparison to hydrogen, and a strong field-aligned anisotropy with a much larger temperature in the direction perpendicular to the magnetic field than in the parallel direction (see, e.g., Kohl et al.  1997, 1998; Li et al.  1998; Cranmer et al.  1999; Antonucci et al.  2000; Zangrilli et al.  2002; Antonucci 2006; Telloni et al.  2007). However, Raouafi & Solanki (2004, 2006) and Raouafi et al.  (2007) have reported that there may not be a compelling need for O5 + anisotropy depending on the assumptions made about the other plasma properties of the coronal hole (e.g., electron density). The determination of O5 + preferential heating, preferential acceleration, and temperature anisotropy has spurred a great deal of theoretical work (see reviews by Hollweg & Isenberg 2002; Cranmer 2002a; Marsch 2005; Kohl et al.  2006). It is thus important to resolve the question of whether these plasma properties are definitively present on the basis of the UVCS/*SOHO* observations. In this paper, we attempt to analyze all possible combinations of O5 + properties (number density, outflow speed, parallel temperature, and perpendicular temperature) with the full effects of the extended line of sight (LOS) taken into account. The applicability of any particular combination of ion properties is evaluated by computing a quantitative probability of agreement between the modeled set of emission lines and a given observation. Preliminary results from this work were presented by Cranmer et al.  (2005) and Kohl et al.  (2006). The original UVCS results of preferential ion heating and acceleration—as well as strong ion temperature anisotropy (*T*⊥ ≫ *T*∥)—were somewhat surprising, but these extreme departures from thermal equilibrium are qualitatively similar to conditions that have been measured for decades in high-speed streams in the heliosphere. At their closest approaches to the Sun ( ∼  0.3 AU), the *Helios* probes measured substantial proton temperature anisotropies with *T*⊥ > *T*∥ (Marsch et al.  1982; Feldman & Marsch 1997). In the fast wind, most ion species also appear to flow faster than the protons by about an Alfvén speed (*V**A*), and this velocity difference decreases with increasing radius and decreasing proton flow velocity (e.g., Hefti et al.  1998; Reisenfeld et al.  2001). The temperatures of heavy ions are significantly larger than proton and electron core temperatures. In the highest-speed wind streams, ion temperatures exceed simple mass proportionality with protons (i.e., heavier ions have larger most-probable speeds), with $(T\_{\rm ion} / T\_{p}) > (m\_{\rm ion} / m\_{p})$, for $m\_{\rm ion} > m\_{p}$ (e.g., Collier et al.  1996). UVCS provided the first evidence that these plasma properties are already present near the Sun. The outline of this paper is as follows. In § 2 we present an expanded collection of UVCS/*SOHO* observational data that is used to determine the O5 + ion properties. § 3 outlines the procedure we have developed to produce empirical models of the plasma conditions in polar coronal holes and to compute the probability of agreement between any given set of ion properties and the observations. The resulting ranges of ion properties that are consistent with the UVCS observations are presented in § 4 along with, in our view, a resolution of the controversy regarding the oxygen temperature anisotropy. Finally, § 5 gives a summary of the major results of this paper and a discussion of the implications these results may have on theoretical models of coronal heating and solar wind acceleration. Observations ============ The UVCS instrument contains three reflecting telescopes that feed two ultraviolet toric-grating spectrometers and one visible light polarimeter (Kohl et al.  1995, 1997). Light from the bright solar disk is blocked by external and internal occulters that have the same linear geometry as the spectrometer slits. The slits are oriented in the direction tangent to the solar limb. They can be positioned in heliocentric radius *r* anywhere between about 1.4 and 10 solar radii (*R*⊙) and rotated around the Sun in position angle. The slit length projected on the sky is 40, or approximately 2.5 *R*⊙ in the corona, and the slit width can be adjusted to optimize the desired spectral resolution and count rate. The UVCS data discussed in this paper consist of a large ensemble of observations of polar coronal holes from the last solar minimum (1996–1997). The solar magnetic field is observed to exist in a nearly axisymmetric configuration at solar minimum, with open field lines emerging from the north and south polar regions and expanding superradially to fill a large fraction of the heliospheric volume. The plasma properties in polar coronal holes remain reasonably constant in the year or two around solar minimum (see, e.g., Kohl et al.  2006), so we assemble the data over this time into a single function of radius. In this paper, we limit ourselves to the analysis of observations of the *λ**λ*1032, 1037 emission line doublet in these polar regions. The relevant UVCS observations are taken from the following three sources. 1. The empirical model study of Kohl et al.  (1998) and Cranmer et al.  (1999) covered the period between 1996 November and 1997 April and took the north and south polar coronal hole properties to be similar enough to treat them together. 2. A detailed analysis of the north polar coronal hole by Antonucci et al.  (2000) coincided with the second *SOHO* joint observing program (JOP 2) on 1996 May 21. 3. We searched the UVCS/*SOHO* archive for any other north or south polar hole observations having sufficient count-rate statistics to be able to measure the line widths at radii above 2 *R*⊙. A total of 14 new or reanalyzed data points were identified between 1996 June and 1997 July. The remainder of this section describes the data reduction for the third group of new data points. Table 1 provides details of these 14 measurements. lccccccc 1996 Jun 21, 16:55 & 2.07 & 15.8 (N) & 75 & 9.1 & 363 ± 21.7 & 2.09 ± 0.4 & 32.2 ± 6.8 1996 Sep 29, 15:26 & 2.08 & 17.5 (N) & 75 & 5.6 & 473 ± 12.4 & 1.87 ± 0.14 & 43.2 ± 8.7 1996 Nov 07, 23:38 & 2.07 & 29.7 (N) & 75 & 7.6 & 409 ± 18.8 & 1.78 ± 0.2 & 44.6 ± 9.1 1996 Nov 10, 06:30 & 3.00 & 25.7 (N) & 350 & 9.4 & 475 ± 27.0 & 1.16 ± 0.18 & 2.08 ± 0.44 1996 Nov 10, 15:55 & 2.56 & 25.9 (N) & 350 & 10.8 & 505 ± 30.5 & 1.17 ± 0.17 & 4.83 ± 1.0 1996 Nov 16, 16:56 & 2.17 & 29.6 (N) & 340 & 9.5 & 417 ± 7.43 & 1.485 ± 0.06 & 25.1 ± 5.0 1997 Jan 05, 21:00 & 3.07 & 28.0 (N) & 100 & 17.6 & 690 ± 87.2 & 0.957 ± 0.45 & 2.02 ± 0.63 1997 Jan 10, 14:54 & 3.08 & 20.8 (N) & 150 & 68.8 & 686 ± 38.7 & 1.23 ± 0.3 & 3.37 ± 0.76 1997 Jan 24, 16:03 & 3.08 & 20.6 (N) & 150 & 70.8 & 594 ± 41.6 & 1.01 ± 0.3 & 1.66 ± 0.40 1997 Mar 09, 18:00 & 2.57 & 19.0 (S) & 300 & 8.9 & 500 ± 22.0 & 1.15 ± 0.12 & 5.66 ± 1.2 1997 Jun 04, 16:49 & 2.56 & 18.8 (N) & 300 & 9.0 & 527 ± 33.9 & 0.958 ± 0.15 & 4.14 ± 0.90 1997 Jun 08, 20:15 & 3.10 & 18.7 (N) & 300 & 8.3 & 645 ± 65.6 & 1.33 ± 0.44 & 1.33 ± 0.33 1997 Jul 01, 15:50 & 2.56 & 17.2 (N) & 342 & 9.8 & 534 ± 56.8 & 0.988 ± 0.26 & 3.26 ± 0.80 1997 Jul 04, 16:45 & 3.63 & 18.9 (N) & 342 & 29.5 & 451 ± 47.9 & 1.49 ± 0.43 & 0.559 ± 0.13 The criteria for identifying new UVCS data were as follows. We adopted a time period from 1996 April, the beginning of primary science operations, to 1998 January, after which the new cycle’s activity began to rise and high-latitude streamers appeared regularly to signal the end of true solar minimum. Only measurements of the lines above the poles (i.e., position angles within  ± 15 of the north or south poles) and at heights above 2 *R*⊙ were sought.[1](#fn1) Prior experience with the count rates at large heights in coronal holes refined the search further to use measurements only with relatively long exposure times (see Table 1) to gather sufficient statistics to measure the line widths. There were two observations that appeared initially to satisfy the above criteria (1997 April 15, at 4.14 *R*⊙, and 1997 July 2, at 3.10 *R*⊙), but they were not used because the count rate statistics were inadequate for reliable line widths. The only point of overlap between the data in Table 1 and prior analyses (e.g., Cranmer et al.  1999) concerns the end of the month-long study of the north polar coronal hole in 1997 January (at *r* ≈ 3 *R*⊙). These data were reanalyzed with a different line fitting technique and an improved UVCS pointing correction; the computed line widths are similar to those presented by Cranmer et al.  (1999) and the intensities are given here for the first time. For completeness, though, both the old and new data points are kept in the full ensemble of data used below. To achieve the lowest uncertainties in the determinations of the *λ**λ*1032, 1037 intensities and line widths, we typically integrated over 15 to 30 along the slit (see Table 1). This corresponds to  ± 0.5–1 *R*⊙ on either side of the north-south axis. The use of such large areas implies that narrow flux-tube structures such as dense *polar plumes* and the less-dense interplume regions were not resolved. As long as all steps of the analysis remain consistent with such a coarsely averaged state (e.g., the use of a similarly averaged electron density), though, this need not be a problem. The derived plasma properties thus describe the average conditions inside coronal holes at solar minimum and do not address differences between plumes and interplume regions. Details concerning the analysis of UVCS data are given by Gardner et al.  (1996, 2000, 2002) and Kohl et al.  (1997, 1999, 2006). The UVCS Data Analysis Software (DAS) was used to remove image distortion and to calibrate the data in wavelength and intensity. The coronal line profiles are broadened by various instrumental effects. The optical point spread function of the spectrometer depends on the slit width used (with 270.3 *μ*m corresponding to 1 Å in the spectrum), the on-board data binning, the exposed mirror area, and the intrinsic quantization error of the detector. This broadening is taken into account by adjusting the line widths of Gaussian fits to the coronal components of the data; the data points themselves are not corrected. Tests have shown that the coronal line width can be recovered accurately even when the total instrumental width is within about a factor of two of the width of the coronal component. Instrument-scattered stray light from the solar disk is modeled as an additional narrow Gaussian component with an intensity and profile shape constrained by the known stray light properties of the instrument. The analysis of the emission line doublet involves four basic observable quantities: the total intensities of the two lines and their 1/*e* Gaussian half-widths Δ*λ*1/*e*. The latter quantities are typically expressed in Doppler velocity units as *V*1/*e* = *c*Δ*λ*1/*e*/*λ*0, where *λ*0 is the rest wavelength of the line and *c* is the speed of light. Rather than give the two total intensities, Table 1 provides the total intensity $I\_{\rm tot}$ of the *λ*1032 line and the ratio ${\cal R}$ of the *λ*1032 to the *λ*1037 intensities. The uncertainties given in Table 1 take account of both Poisson count-rate statistics and the fact that the various instrumental corrections are known only to finite levels of precision. Note that the ratio ${\cal R}$ does not depend on the absolute intensity calibration of the instrument. UVCS/*SOHO* has not been able to resolve any departures from Gaussian shapes for the lines in large polar coronal holes, so the profiles are described by just the one parameter *V*1/*e*. For the measurements given in Table 1, we performed the line fitting by constraining the coronal components of the *λ*1032 and *λ*1037 lines to have the same width. Thus, the *V*1/*e* values given in Table 1 are formally a weighted mean between the two components. This is done mainly to lower the statistical uncertainties but there is some observational justification for assuming that the two components have the same width. In situations where the count rates are high, it is difficult to see any significant or systematic difference between the line widths of the two components. There are various reasons why they may be different from one another in some regions (e.g., Cranmer 2001; Morgan & Habbal 2004), but more work needs to be done to identify such subtle effects. Figure 1 displays the combined ensemble of old and new UVCS data for the three main observables: the line width *V*1/*e*, the dimensionless intensity ratio ${\cal
arxiv_0000019
description with no problem: $$\left\{ \begin{array}{l} p\_1 \in \mathbb R\\ \displaystyle \mbox{Always } \big( (p\_1 > 0) \mbox{ AND } (p\_1 < 0) \big)\\ \end{array} \right..$$ The PDL language grammar is not a tool with capabilities to perceive logical contradictions which may be contained within statements. This kind of consideration is outside of the scope of the present standard. For this reason, a validation system for PDL definitions is not required to implement the detection of contradictions, but may do if the services within its description realm make this feasible. The current PDL reference implementation does not perform such contradiction detection and thus any parameter *p*1 provided by user would be rejected for this example. In other words **people providing descriptions of services must pay great attention to their contents.** Remarks for software components implementing PDL ================================================ Throughout this document we have described PDL as a *grammar*. If we consider it just as a grammar, then a specific description should be considered as an implementation. We remember that, since a PDL description is detailed, it is a priori possible to write once for all generic software components. These components will be automatically *configured* by a PDL description thus becoming *ad hoc* implementation software for the described service. Moreover checking algorithms could also be generated automatically starting from a description instance. In our implementations we wanted to check practically that these concepts implied in the definition of PDL really works. The development of operational services (as the Paris-Durham shock code) also permits to ensure the coherence of the core grammar and to verify if the PDL’s capabilities could meet the description needs of state of the art simulation codes. At the present (Fall 2013) four software elements are implemented around PDL: * the standalone dynamic client. It embeds the automatic generation of the verification layer (Google code repository at <https://code.google.com/p/vo-param/>). This development shows that a PDL description instance can be used for generating the checking algorithms and for generating a client with a dynamic-intelligent behavior helping the user in interacting with a service. This client could be used for interacting with services exposed using different job systems; * a server for exposing any exiting code as a web services. It embeds the verification layer. This development was done for showing that a PDL description instance can be used for generating the *ad hoc* server, exposing the described service. A particular feature of this server is that it can generates grids of model starting from a single job submission, which indicates ranges for parameters (GitHub repository at <https://github.com/cmzwolf>); * the Taverna Plugin. From one point of view this plugin could be seen as an alternate client to the standalone one. From another point of view it is strongly oriented towards general physical and scientific interoperability (discussed in paragraph [ParInteropIssues]) since it uses PDL description for validating the chaining of jobs composing a workflow. As the dynamic client, the Taverna plugin can be used for interacting with services exposed different job systems (GitHub repository for stable version at <https://github.com/wf4ever/astrotaverna>). * the description editor, for editing PDL description from a Graphical User Interface. Since the key point for using PDL and take advantage of the software tools we have just described is a PDL description, we decided to provide the community with a tool for easily composing PDL description. In some sense this is the entry-point of the PDL software farmework (google code repository at <https://code.google.com/p/pdl-editor/>). All these developments validate the concepts of automatic generation of algorithms and the possibility of configuring, with highly specialized individual behaviors, generic software components. This is very important since it reduces drastically the development time for building services based on the PDL grammar. This is essential in a scientific context where only few scientists have access to software engineer for their IVOA developments. In further developments, PDL-client implementations will include a formal-logic module. This will permit finding contradictions inside the descriptions. Such a module will also be required for implementing the automatic computation of *a priori interoperability graphs*. It will also permit checking interoperability in terms of semantic annotations: for example, let A be the concept that describes an input parameter of a service $\mathcal S$ and B the concept that describes an output parameter of a service $\mathcal S'$. If A and B are the same concept, then both services match the interoperability criterion. However, if A and B are not the same concept we need, for piping the result of $\mathcal S'$ to $\mathcal S$, to ask if the concept B is more specific than the concept A, in other words, if the concept B is generalized or subsumed by the concept A. If this happens then both services match again the interoperability criterion. Interoperability only makes sense when there is an application or infrastructure that allows communication and connection of different services. An example is the applications for orchestrating services by designing workflows (as described in section 2.2). Further developments for PDL include the implementation of interoperability mechanisms in Taverna. Annex ===== A practice introduction to PDL (or dive in PDL) ----------------------------------------------- In this section we present a practice approach to PDL. It is inspired by one of the first services we deployed using the PDL framework: the Meudon Stark-H broadening computation service for Hydrogen (<http://atomsonline.obspm.fr>). The exposed code take as input four parameters: * A quantum number *N**i*, which corresponds to the upper energy level. * A quantum number *N**f*, which corresponds to the lower energy level. * A temperature *T*, which is the temperature of the simulated medium. * A density *ρ*, which is an electron density. With the existing exposure systems (mostly Soap, REST, servlet web services) the information about parameters is more or less limited to a basic *function signature*: the two first parameters are *Integer*, while the two last are *Double*. But this information is not sufficient for a user wishing to use the service without knowing a priori the code: what are the unit of these parameters? What are their physical meaning? PDL is a unified way for providing user with this information by hardcoding it directly in the software composing the service. With PDL service provider can easily express that * *N**i* is *Integer*, it corresponds to the principal quantum number of the upper energy level and, as a quantum number, it has no dimension. The PDL *translation* of this sentence is: ``` <parameter dependency="required"> <Name>InitialLevel</Name> <ParameterType>integer</ParameterType> <SkosConcept>http://example.edu/skos/initialLevel</SkosConcept> <Unit>None</Unit> <Dimension xsi:type="AtomicConstantExpression" ConstantType="integer"> <Constant>1</Constant> </Dimension> </parameter> ``` The PDL description points to the skos uri containing the definition of the physical concept. Moreover it says that the current parameter has 1 as dimension. This means that the parameter is scalar (a dimensions greater than one is for vector parameters). The required attribute indicate that the user must submit this parameter to the service, and it is not optional. * *N**f* is *Integer*, it corresponds to the principal quantum number of the lower energy level and, as a quantum number, it has no dimension. The PDL *translation* of this sentence is: ``` <parameter dependency="required"> <Name>FinalLevel</Name> <ParameterType>integer</ParameterType> <SkosConcept>http://example.edu/skos/finalLevell</SkosConcept> <Unit>None</Unit> <Dimension xsi:type="AtomicConstantExpression" ConstantType="integer"> <Constant>1</Constant> </Dimension> </parameter> ``` * *T* is the thermodynamic temperature of the simulated medium and is expressed in Kelvin. The PDL *translation* for this sentence is: ``` <parameter dependency="required"> <Name>Temperature</Name> <ParameterType>real</ParameterType> <SkosConcept>http://example.edu/skos/temperaturel</SkosConcept> <Unit>K</Unit> <Dimension xsi:type="AtomicConstantExpression" ConstantType="integer"> <Constant>1</Constant> </Dimension> </parameter> ``` * *ρ* is an electron density in *c**m*− 3. The PDL version is: ``` <parameter dependency="required"> <Name>Density</Name> <ParameterType>real</ParameterType> <SkosConcept>http://example.edu/skos/denisty</SkosConcept> <Unit>cm^-3</Unit> <Dimension xsi:type="AtomicConstantExpression" ConstantType="integer"> <Constant>1</Constant> </Dimension> </parameter> ``` Even with this information, it is not guaranteed that users will be able to correctly use the service. Indeed, two constraints involve parameters. The first comes from the definition of *N**i* and *N**f*: the condition (*N**i* − *N**f*) > 1 must always be satisfied. The second comes from the physical model implemented into the exposed code. The result has a physical meaning only if the Debey approximation hypothesis holds: $$\label{DebeyApprox} \frac{9 \, \rho^{1/6}}{100 \, T^{1/2} }<1$$ How to alert the user of these two constraints? A first solution consists in writing explanation (e.g. a code documentation) but it is not sure that users will read it. A more secure approach consists in writing checking algorithms. But this solution is time consuming, since you have to write *ad hoc* tests for every specific code. PDL answer this issues by providing a unified way for expressing the constraints. The PDL formulation of ([energyDifference]) is ``` <always> <Criterion xsi:type="Criterion"> <Expression xsi:type="AtomicParameterExpression"> <parameterRef ParameterName="FinalLevel"/> <Operation operationType="MINUS"> <Expression xsi:type="AtomicParameterExpression"> <parameterRef ParameterName="InitialLevel"/> </Expression> </Operation> </Expression> <ConditionType xsi:type="ValueLargerThan" reached="true"> <Value xsi:type="AtomicConstantExpression" ConstantType="real"> <Constant>1</Constant> </Value> </ConditionType> </Criterion> </always> ``` whereas the formulation for ([DebeyApprox]) is ``` <always> <Criterion xsi:type="Criterion"> <Expression xsi:type="AtomicConstantExpression" ConstantType="real"> <Constant>0.09</Constant> <Operation operationType="MULTIPLY"> <Expression xsi:type="AtomicParameterExpression"> <parameterRef ParameterName="Density"/> <power xsi:type="AtomicConstantExpression" ConstantType="real"> <Constant>0.16666666</Constant> </power> <Operation operationType="DIVIDE"> <Expression xsi:type="AtomicParameterExpression"> <parameterRef ParameterName="Temperature"/> <power xsi:type="AtomicConstantExpression" ConstantType="real"> <Constant>0.5</Constant> </power> </Expression> </Operation> </Expression> </Operation> </Expression> <ConditionType xsi:type="ValueSmallerThan" reached="false"> <Value xsi:type="AtomicConstantExpression" ConstantType="real"> <Constant>1</Constant> </Value> </ConditionType> </Criterion> </always> ``` These two last pieces of XML (composing a wider PDL description) are not intended for humans. They are parsed by the PDL framework for automatically generate the checking algorithms associated with the described constraints. The key point to retain is that PDL is simple for simple services is flexible and powerful enough for meeting description requirements coming with the most complex scientific codes (of course the associated description won’t be simple). The PDL description of the example of equation ([PDLExemplum01]) ---------------------------------------------------------------- The reader will find the xml file related to this example at the following URL: [http://vo-param.googlecode.com/svn/trunk/model/documentation/ PDL-Description\_example01.xml](http://vo-param.googlecode.com/svn/trunk/model/documentation/PDL-Description_example01.xml) The PDL description of the example of equation ([PDLExemplum02]) ---------------------------------------------------------------- The reader will find the xml file related to this example at the following URL: [http://vo-param.googlecode.com/svn/trunk/model/documentation/ PDL-Description\_Example02.xml](http://vo-param.googlecode.com/svn/trunk/model/documentation/PDL-Description_Example02.xml). The PDL XSD Schema ------------------ ``` <?xml version="1.0" encoding="UTF-8"?> <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:pm="http://www.ivoa.net/xml/PDL/v1.0" elementFormDefault="qualified" targetNamespace="http://www.ivoa.net/xml/PDL/v1.0"> <!-- needs isActive property on group - need to be able to reference a group --> <xs:annotation> <xs:documentation> IVOA Description of the set of parameters for a service</xs:documentation> </xs:annotation> <xs:element name="Service"> <xs:annotation> <xs:documentation> The base service description. A service in this context is simply some sort of process that has input parameters and produces output parameters. </xs:documentation> </xs:annotation> <xs:complexType> <xs:sequence> <xs:element name="ServiceId" type="xs:string" minOccurs="1" maxOccurs="1"> <xs:annotation> <xs:documentation>The ivoa identifier for the service</xs:documentation> </xs:annotation> </xs:element> <xs:element name="ServiceName" type="xs:string" minOccurs="1" maxOccurs="1"/> <xs:element name="Description" type="xs:string" minOccurs="1" maxOccurs="1"/> <xs:element name="Parameters" type="pm:Parameters" minOccurs="1" maxOccurs="1"> <xs:annotation> <xs:documentation>The list of all possible parameters both input and output parameters</xs:documentation> </xs:annotation> </xs:element> <xs:element name="Inputs" type="pm:ParameterGroup" minOccurs="1" maxOccurs="1"> <xs:annotation> <xs:documentation>The input parameters for a service.</xs:documentation> </xs:annotation> </xs:element> <xs:element name="Outputs" type="pm:ParameterGroup" minOccurs="1" maxOccurs="1"> <xs:annotation> <xs:documentation>The parameters output from a service.</xs:documentation> </xs:annotation> </xs:element> </xs:sequence> </xs:complexType> <!-- keys to ensure that parameter names are unique --> <xs:unique name="KeyName"> <xs:selector xpath="./pm:ParameterList/pm:parameter"/> <xs:field xpath="pm:Name"/> </xs:unique> <xs:keyref name="expressionKeyref" refer="pm:KeyName"> <xs:selector xpath=".//pm:parameterRef"/> <xs:field xpath="pm:parameterName"/> </xs:keyref> </xs:element> <xs:complexType name="Parameters"> <xs:annotation> <xs:documentation>The list of possible parameters both input and output.</xs:documentation> </xs:annotation> <xs:sequence> <xs:element name="parameter" type="pm:SingleParameter" minOccurs="1" maxOccurs="unbounded"> </xs:element> </xs:sequence> </xs:complexType> <xs:complexType name="ParameterReference"> <xs:annotation> <xs:documentation>A reference to a parameter</xs:documentation> </xs:annotation> <xs:attribute name="ParameterName" type="xs:string"> <xs:annotation> <xs:documentation>The name of the parameter being referred to.</xs:documentation> </xs:annotation> </xs:attribute> </xs:complexType> <xs:complexType name="Description"> <xs:sequence> <xs:element name="humanReadableDescription" type="xs:string"/> </xs:sequence> </xs:complexType> <xs:simpleType name="ParameterDependency"> <xs:annotation> <xs:documentation>The types that a parameter may have.</xs:documentation> <xs:documentation> Flag for saying if a parameter is required or optional </xs:documentation> </xs:annotation> <xs:restriction base="xs:string"> <xs:enumeration value="required"> <xs:annotation> <xs:documentation>The parameter must be provided by user.</xs:documentation> </xs:annotation> </xs:enumeration> <xs:enumeration value="optional"> <xs:annotation> <xs:documentation>The parameter is optional.</xs:documentation> </xs:annotation> </xs:enumeration> </xs:restriction> </xs:simpleType> <xs:simpleType name="ParameterType"> <xs:annotation> <xs:documentation>The types that a parameter may have.</xs:documentation> <xs:documentation> Note that the types are made more specific by using the UCD attribute of the parameter definition. In particular it is expected that a Parameter Model library would be able to recognise the more specific types associated with the following UCDs <ul> <li>pos - to provide a suitable widget for positions</li> <li>time - to provide suitable widgets for times and durations</li> </ul> </xs:documentation> </xs:annotation> <xs:restriction base="xs:string"> <xs:enumeration value="boolean"> <xs:annotation> <xs:documentation>A representation of a boolean - e.g. true/false</xs:documentation> </xs:annotation> </xs:enumeration> <xs:enumeration value="string"> <xs:annotation> <xs:documentation>Data that can be interpreted as text.</xs:documentation> </xs:annotation> </xs:enumeration> <xs:enumeration value="integer"/> <xs:enumeration value="real"/> <xs:enumeration value="date"/> </xs:restriction> </xs:simpleType> <xs:simpleType name="FunctionType"> <xs:restriction base="xs:string"> <xs:enumeration value="size"/> <xs:enumeration value="abs"/> <xs:enumeration value="sin"/> <xs:enumeration value="cos"/> <xs:enumeration value="tan"/> <xs:enumeration value="asin"/> <xs:enumeration value="acos"/> <xs:enumeration value="atan"/> <xs:enumeration value="exp"/> <xs:enumeration value="log"/> <xs:enumeration value="sum"/> <xs:enumeration value="product"/> </xs:restriction> </xs:simpleType> <xs:simpleType name="OperationType"> <xs:restriction base="xs:string"> <xs:enumeration value="PLUS"/> <xs:enumeration value="MINUS"/> <xs:enumeration value="MULTIPLY"/> <xs:enumeration value="DIVIDE"/> <xs:enumeration value="SCALAR"/> </xs:restriction> </xs:simpleType> <xs:complexType name="SingleParameter"> <xs:sequence> <xs:element name="Name" type="xs:string" minOccurs="1" maxOccurs="1"> </xs:element> <xs:element name="ParameterType" type="pm:ParameterType" minOccurs="1" maxOccurs="1"> </xs:element> <xs:element name="UCD" type="xs:string" maxOccurs="1" minOccurs="0"> </xs:element> <xs:element name="UType" type="xs:string" maxOccurs="1" minOccurs="0"/> <xs:element name="SkosConcept" type="xs:string" minOccurs="0" maxOccurs="1"/> <xs:element name="Unit" type="xs:string" minOccurs="0" maxOccurs="1"/> <xs:element name="Precision" type="pm:Expression" minOccurs="0" maxOccurs="1"/> <xs:element name="Dimension" type="pm:Expression" maxOccurs="1" minOccurs="1"/> </xs:sequence> <xs:attribute name="dependency" type="pm:ParameterDependency"> </xs:attribute> </xs:complexType> <xs:complexType name="ParameterGroup"> <xs:annotation> <xs:documentation>A logical grouping of parameters</xs:documentation> </xs:annotation> <xs:sequence> <xs:element name="Name" type="xs:string" maxOccurs="1" minOccurs="1"> <xs:annotation> <xs:documentation>The name of the parameter group which can be used for display</xs:documentation> </xs:annotation> </xs:element> <xs:element name="ParameterRef" type="pm:ParameterReference" minOccurs="0" maxOccurs="unbounded"> <xs:annotation> <xs:documentation>The list of parameters that are in the group</xs:documentation> </xs:annotation> </xs:element> <xs:element name="ConstraintOnGroup" type="pm:ConstraintOnGroup" maxOccurs="1" minOccurs="0"> <xs:annotation> <xs:documentation>The constraints on parameters in the group</xs:documentation> </xs:annotation> </xs:element> <xs:element name="ParameterGroup" type="pm:ParameterGroup" minOccurs="0" maxOccurs="unbounded"> <xs:annotation> <xs:documentation>possibly nested parameter groups</xs:documentation> </xs:annotation> </xs:element> <xs:element name="Active" type="pm:WhenConditionalStatement" maxOccurs="1" minOccurs="0"> <xs:annotation> <xs:documentation>It the goup active? i.e. should it be displayed - The default is yes if there is no active element, otherwise it is the result of the evaluation of the When conditional statement.</xs:documentation> </xs:annotation> </xs:element> </xs:sequence> </xs:complexType> <xs:complexType name="ConstraintOnGroup"> <xs:annotation> <xs:documentation>The possible constraints on the parameters in a group</xs:documentation> </xs:annotation> <xs:sequence> <xs:element name="ConditionalStatement" type="pm:ConditionalStatement" minOccurs="0" maxOccurs="unbounded"/> </xs:sequence> </xs:complexType> <xs:complexType abstract="true" name="ConditionalStatement"> <xs:sequence> <xs:element name="comment" type="xs:string" minOccurs="1" maxOccurs="1"/> </xs:sequence> </xs:complexType> <xs:complexType name="IfThenConditionalStatement"> <xs:complexContent> <xs:extension base="pm:ConditionalStatement"> <xs:sequence> <xs:element name="if" type="pm:If" minOccurs="1" maxOccurs="1"/> <xs:element name="then" type="pm:Then" minOccurs="1" maxOccurs="1"/> </xs:sequence> </xs:extension> </xs:complexContent> </xs:complexType> <xs:complexType name="AlwaysConditionalStatement"> <xs:complexContent> <xs:extension base="pm:ConditionalStatement"> <xs:sequence> <xs:element name="always" type="pm:Always" minOccurs="1" maxOccurs="1"/> </xs:sequence> </xs:extension> </xs:complexContent> </xs:complexType> <xs:complexType name="WhenConditionalStatement"> <xs:annotation> <xs:documentation> A statement that has only a True or a False value </xs:documentation> </xs:annotation> <xs:complexContent> <xs:extension base="pm:ConditionalStatement"> <xs:sequence> <xs:element name="when" type="pm:When"/> </xs:sequence> </xs:extension> </xs:complexContent> </xs:complexType> <xs:complexType abstract="true" name="LogicalConnector"> <xs:sequence> <xs:element name="Criterion" type="pm:AbstractCriterion" minOccurs="1" maxOccurs="1"/> </xs:sequence> </xs:complexType> <xs:complexType name="And"> <xs:complexContent> <xs:extension base="pm:LogicalConnector"/> </xs:complexContent> </xs:complexType> <xs:complexType name="Or"> <xs:complexContent> <xs:extension base="pm:LogicalConnector"/> </xs:complexContent> </xs:complexType> <xs:complexType abstract="true" name="ConditionalClause"> <xs:sequence> <xs:element name="Criterion" type="pm:AbstractCriterion" minOccurs="1" maxOccurs="1"> </xs:element> </xs:sequence> </xs:complexType> <xs:complexType name="Always"> <xs:complexContent> <xs:extension base="pm:ConditionalClause"/> </xs:complexContent> </xs:complexType> <xs:complexType name="If"> <xs:complexContent> <xs:extension base="pm:ConditionalClause"/> </xs:complexContent> </xs:complexType> <xs:complexType name="Then"> <xs:complexContent> <xs:extension base="pm:ConditionalClause"/> </xs:complexContent> </xs:complexType> <xs:complexType name="When"> <xs:complexContent> <xs:extension base="pm:ConditionalClause"/> </xs:complexContent> </xs:complexType> <xs:complexType abstract="true" name="AbstractCondition"/> <xs:complexType name="IsNull"> <xs:complexContent> <xs:extension base="pm:AbstractCondition"/> </xs:complexContent> </xs:complexType> <xs:complexType name="IsInteger"> <xs:complexContent> <xs:extension base="pm:AbstractCondition"> </xs:extension> </xs:complexContent> </xs:complexType> <xs:complexType name="IsReal"> <xs:complexContent> <xs:extension base="pm:AbstractCondition"> </xs:extension> </xs:complexContent> </xs:complexType> <xs:complexType name="BelongToSet"> <xs:annotation> <xs:documentation>The value must belong to a set</xs:documentation> </xs:annotation> <xs:complexContent> <xs:extension base="pm:AbstractCondition"> <xs:sequence> <xs:element name="Value" type="pm:Expression" minOccurs="1" maxOccurs="unbounded"/> </xs:sequence> </xs:extension> </xs:complexContent> </xs:complexType> <xs:complexType name="ValueLargerThan"> <xs:complexContent> <xs:extension base="pm:AbstractCondition"> <xs:sequence> <xs:element name="Value" type="pm:Expression" maxOccurs="1" minOccurs="1"/> </xs:sequence> <xs:attribute name="reached" type="xs:boolean"/> </xs:extension> </xs:complexContent> </xs:complexType> <xs:complexType name="ValueSmallerThan"> <xs:complexContent> <xs:extension base="pm:AbstractCondition"> <xs:sequence> <xs:element name="Value" type="pm:Expression" maxOccurs="1" minOccurs="1"/> </xs:sequence> <xs:attribute name="reached" type="xs:boolean"/> </xs:extension> </xs:complexContent> </xs:complexType> <xs:complexType name="ValueInRange"> <xs:complexContent> <xs:extension base="pm:AbstractCondition"> <xs:sequence> <xs:element name="Sup" type="pm:ValueSmallerThan" maxOccurs="1" minOccurs="1"/> <xs:element name="Inf" type="pm:ValueLargerThan" maxOccurs="1" minOccurs="1"/> </xs:sequence> </xs:extension> </xs:complexContent> </xs:complexType> <xs:complexType name="ValueDifferentFrom"> <xs:complexContent> <xs:extension base="pm:AbstractCondition"> <xs:sequence> <xs:element name="Value" type="pm:Expression" maxOccurs="1" minOccurs="1"/> </xs:sequence> </xs:extension> </xs:complexContent> </xs:complexType> <xs:complexType name="DefaultValue"> <xs:complexContent> <xs:extension base="pm:AbstractCondition"> <xs:sequence> <xs:element name="Value" type="pm:Expression" maxOccurs="1" minOccurs="1"/> </xs:sequence> </xs:extension> </xs:complexContent> </xs:complexType> <xs:complexType abstract="true" name="AbstractCriterion"> <xs:sequence> <xs:element name="Expression" type="pm:Expression" minOccurs="1" maxOccurs="1"> </xs:element> <xs:element name="ConditionType" type="pm:AbstractCondition" minOccurs="1" maxOccurs="1"/> <xs:element name="LogicalConnector" type="pm:LogicalConnector" maxOccurs="1" minOccurs="0" /> </xs:sequence> </xs:complexType> <xs:complexType name="Criterion"> <xs:complexContent> <xs:extension base="pm:AbstractCriterion"> </xs:extension> </xs:complexContent> </xs:complexType> <xs:complexType name="ParenthesisCriterion"> <xs:complexContent> <xs:extension base="pm:AbstractCriterion"> <xs:sequence> <xs:element name="ExternalLogicalConnector" type="pm:LogicalConnector" maxOccurs="1" minOccurs="0"/> </xs:sequence> </xs:extension> </xs:complexContent> </xs:complexType> <xs:complexType name="Function"> <xs:complexContent> <xs:extension base="pm:Expression"> <xs:sequence> <xs:element name="expression" type="pm:Expression"/> </xs:sequence> <xs:attribute name="functionName" type="pm:FunctionType"/> </xs:extension> </xs:complexContent> </xs:complexType> <xs:complexType name="Operation"> <xs:sequence> <xs:element name="expression" type="pm:Expression" maxOccurs="1" minOccurs="1"/> </xs:sequence> <xs:attribute name="operationType" type="pm:OperationType"> </xs:attribute> </xs:complexType> <xs:complexType abstract="true" name="Expression"> </xs:complexType> <xs:complexType name="ParenthesisContent"> <xs:complexContent> <xs:extension base="pm:Expression"> <xs:sequence> <xs:element name="expression" type="pm:Expression" minOccurs="1" maxOccurs="1"/> <xs:element name="power" type="pm:Expression" maxOccurs="1" minOccurs="0"/> <xs:element name="Operation" type="pm:Operation" maxOccurs="1" minOccurs="0"/> </xs:sequence> </xs:extension> </xs:complexContent> </xs:complexType> <xs:complexType name="AtomicParameterExpression"> <xs:complexContent> <xs:extension base="pm:Expression"> <xs:sequence> <xs:element name="parameterRef" type="pm:ParameterReference" maxOccurs="1" minOccurs="1"> </xs:element> <xs:element name="power" type="pm:Expression" maxOccurs="1" minOccurs="0"/> <xs:element name="Operation" type="pm:Operation" maxOccurs="1" minOccurs="0"/> </xs:sequence> </xs:extension> </xs:complexContent> </xs:complexType> <xs:complexType name="AtomicConstantExpression"> <xs:complexContent> <xs:extension base="pm:Expression"> <xs:sequence> <xs:element name="Constant" type="xs:string" maxOccurs="unbounded" minOccurs="1"/> <xs:element name="power" type="pm:Expression" maxOccurs="1" minOccurs="0"/> <xs:element name="Operation" type="pm:Operation" maxOccurs="1" minOccurs="0"/> </xs:sequence> <xs:attribute name="ConstantType" type="pm:ParameterType"/> </xs:extension> </xs:complexContent> </xs:complexType> <xs:complexType name="FunctionExpression"> <xs:complexContent> <xs:extension base="pm:Expression"> <xs:sequence> <xs:element name="Function" type="pm:Function" maxOccurs="1" minOccurs="1"/> <xs:element name="Power" type="pm:Expression" maxOccurs="1" minOccurs="0"/> <xs:element name="Operation" type="pm:Operation" maxOccurs="1" minOccurs="0"/> </xs:sequence> </xs:extension> </xs:complexContent> </xs:complexType> </xs:schema> ``` 10 , W3C Recommendation 26 June 2007. , W3C Member Submission 31 August 2009. , 2011, International Journal of High-performance Computing Applications (IJHPCA), LLNL-JRNL-465223, DOI: 10.1177/1094342011414036 *How the Common Component Architecture Advances Computational Science Proceedings of Scientific Discovery through Advanced Computing* (SciDAC 2006), June 2006, in J. Phys.: Conf. Series. (also LLNL Tech Report UCRL-CONF-222279) . *A Component Architecture for High-Performance Scientific Computing*. Int. J. High Perform. Comput. Appl. 20(2). May 2006, pp. 163-202. . *A formal semantics for the Taverna 2 workflow model*, Journal of Computer and System Sciences, vol. 76, iss. 6, pp. 490-508, 2009. , *Taverna: a tool for building and running workflows of services*., Nucleic Acids Research, vol. 34, iss. Web Server issue, pp. 729-732, 2006. , Ios Press 2003, ISBN: 1586033115 .*PALM: A Computational framework for assembling high performance computing applications*, Concurrency Computat.: Pract. Exper., Vol. 18(2), 2006, 247-262 . *A new representation of data assimilation methods: the PALM flow charting approach*, Q.J.R.M.S., Vol. 127, 2001, pp. 189-207 .*The PALM Group, PALM: A Dynamic Parallel Coupler*. Lecture Notes In Computer Science, High Performance Computing for Computational Science, Vol. 2565, 2003, pp. 479-492 , in proceedings of ICNS 2007 Conference. , in proceedings of ICNS 2005 Conference. . *Finding Data Resources in a Virtual Observatory Using SKOS Vocabularies,* BNCOD 2008:189-192. A. J. G. Gray, N. Gray, A. Paul Millar, and I. Ounis*Semantically Enabled Vocabularies in Astronomy*, Technical Report, University of Glasgow, December 2007 (http://www.cs.man.ac.uk/ graya/Publications/vocabulariesInAstronomy.pdf). D. Oberle, N. Guarino and S Staab. *What is an ontology?*, In: Handbook on Ontologies. Springer, 2nd edition, 2009. , International Virtual Observatory Alliance Recommendation, August 2005 (http://www.ivoa.net/documents/latest/UCD.html). , IVOA Note 25 June 2007 (http://www.ivoa.net/documents/latest/UtypeListCharacterisationDM.html). *Units in the VO*, IVOA Proposed Recommendation 29 April 2013 (http://www.ivoa.net/documents/VOUnits/). Astrophysics Source Code Library, record ascl:1307.007. 2013. R. T. Fielding*Architectural Styles and the Design of Network-based software Architecture.* PhD Thesis in Information and computer Science, University of California, Irvine. http://www.w3.org/TR/2007/REC-soap12-testcollection-20070427/ --- 1. For example, for expressing that a given parameter must be greater and smaller than arbitrary values, we could define a *bounded* type containing an *inf* field and a *sup* field. If another user defines a similar object calling these two fields *inf-bound* and *sub-bound*, the instances of these two types could not interoperate straightforwardly. The theory of types is not sufficient to ensure the interoperability of the object defined.[↩](#fnref1) 2. This is obvious, since this value corresponds to a vector size.[↩](#fnref2) 3. this came from the evaluation of parameterRef field in case of an *AtomicParameterExpression* (cf. paragraph [par02]), from the evaluation of constant field in the case of a *AtomicConstantExpression* (cf. paragraph [par0203]), from the evaluation of the Expression field in case of an *parenthesisContentExpression* (cf. paragraph [par0204]) and from the evaluation of the Function object in case of a *FunctionExpression* (cf. par. [functionExpressionPar])[↩](#fnref3) 4. The first criterion is the one containing the *LogicalConnector* and the second is the criterion contained within the connector itself.[↩](#fnref4) PDL: The Parameter Description Language ======================================= ### May 23, 2014 Abstract ======== This document discusses the definition of the *Parameter Description Language* (PDL). In this language parameters are described in a rigorous data model. With no loss of generality, we will represent this data model using XML. It intends to be a expressive language for self-descriptive web services exposing the semantic nature of input and output parameters, as well as all necessary complex constraints. PDL is a step forward towards true web services interoperability. Status of this document ======================= This document has been produced by the Grid and Web Service Working Group. It follows the previous working draft. Acknowledgements ================ We wish to thank the members of the IVOA Grid and Web Services working group for the discussions around PDL it has hosted during the Interop meetings (starting from Naples, May 2011). A special thanks are due to André Schaaff for his advice, useful discussions and feedback on every version of this work. Preface ======= Before going into technical details and explanations about PDL, we would like to suggest what are the categories of users potentially interested in this description language and underline in what field PDL has a strategic impact. PDL is particularly addressed to scientists or engineers * wishing to expose their research codes (without limits or compromise on the complexity of the exposed code) online as public services, * wishing to interconnect their codes into workflows. We will continue this preface by a quick ‘theoretical’ overview of PDL. For a practice-oriented introduction, reader should refer to the annex paragraph [divePDL]. **The online code aspect -** Usually, people who are about to publish their codes as online services are wondering if user will be able to correctly use the new services: the code may have many parameters (usually with obscure names, due to historical reasons). These parameters can be intercorrelated and/or have forbidden ranges, according to the configuration or validity domain of the implemented model. In other words the use of the code may be reserved to experts. How can the service provider ensure that the users will be aware of all the subtleties (units, physical meaning, value ranges of parameters)? The natural answer is to provide a good code documentation to the community. However, our experience as service providers showed that rarely users read carefully the documentation. And even if they read it entirely, they are not safe from making gross and/or distraction errors. Two commons consequences of this situation are the abandonment of the service by users, after few inconclusive tests and the abandonment of the service by the provider him(her)self, buried by e-mails from users containing question whose answer are... in the provided documentation. PDL is a powerful tool for improving both the user and the provider experiences: it may be seen as a way for hardcoding all the subtleties and constraints of the code into the software components used by the provider for exposing the code and by users for interacting with it. The fine expertise on the code to expose is fixed into the PDL description. The PDL software framework is able to automatically generate client interfaces (for assisting the user in interacting with the service) and checking algorithms (for verifying that the data submitted by users are compliant with the description). Moreover the softwares composing the PDL framework are generic elements which are automatically configured by the description into ad-hoc components (cf. paragraph [SoftwareImplementation] for further details and explanation about these concepts). The work for people wishing to expose code is indeed essentially restricted to redaction of the description. For these reasons PDL is particularly indicated for people wishing to expose their code but don’t have much time or the technical skills for building web services. **The workflow aspect -** Scientists or engineers wishing to integrate their code into workflow engines have to write ad hoc artifacts: (in the case of the *Taverna* engine ) these could be *Java Beans*, *Shell* and *Python* artefacts. Normally one has to write a specific artefact for every code to integrate. This could be extremely time consuming and, from the technical point of view, this is not an ‘out of the box’ procedure for people starting using workflows engine. PDL is indicated to facilitate the integration of code into workflow engines: the integration phase is reduced to the redaction of the description. Moreover PDL introduce a new feature into the workflow domain: since every description embeds fine grained details and metadata on parameters (also with their related constraints), the physical sense (meaning and integrity) of a workflow could be automatically verified. The PDL application domain is not limited only to online code or workflows. We are now going to detail all the technical aspects of this description grammar. Introduction ============ In the context of the *International Virtual Observatory Alliance* researchers would like to provide astronomical services to the community. These services could be * access to an existing catalogue of images and/or data, * access to smaller sub-products images, spectra and/or data generated on the fly, * the entry point to a database listing the results of complex and compute-intensive numerical simulations, * a computation code exposed online, etc... In the following we will ignore any specific feature and will use the term *generic service* to refer to any kind of process that receives input parameters and produces output ones. Interoperability with other services and the immediacy of use are two key factors in the success of a service: in general, a service will not be used by the community if users do not know how to call it, the inputs it needs, or what it does and how. However, other issues may have influence in the user decision e.g. Who has implemented it? Who is the service provider? Does it implement a well known technique? Is there a paper to support the research behind the service? Can it be used as a standalone application and can it be used together with other services? A new service will be more useful for some users if it can be released easily as an interactive and standalone application whereas for other users the interoperability with other services and applications is more important. This standard is focused on the needs of the second group, as the ease of the distribution of web services is the primary concern of service providers. Indeed, service description and interoperability are two key points for building efficient and useful ecosystem of services. PDL aims to provide a solution to the problems of description and interoperability of services. With PDL, service providers will be able to share with users (either as humans or as computer systems) the knowledge of what the service does (and how). Moreover this new service will be immediately interactive and well integrated with other services. **Service description** and **Interoperability** are indeed two key points for building efficient and useful services. The service description: existing solutions and specific needs -------------------------------------------------------------- For a client starting to interact with an unknown service, its description is fundamental: in a sense it is this description that puts the service from the *unknown* to the *known* state. Since the client could be a computer system, a generic description should be machine-readable. There are several pre-existing service description languages. The most well known for their high expression level and their wide use are the *W3C* *WSDL* and *WADL*,. Since both *WSDL* and *WADL* support *XML-schema*, one could include in these descriptions complex and highly specialized XML objects for expressing conditions and/or restrictions. However, the process for building these ad-hoc XML extension types is not standard[1](#fn1): a service provider could only describe, using the native standard feature of WADL or WSDL, primitive-typed parameters. It thus serves a roughly similar purpose as a method-signature in a programming language, with no possibility for defining restrictions, semantics and criteria to satisfy. PDL proposes a way for expressing these features in a unified way. In the case of *generic services* for science, the description needs are very specific: since we have to deal with complex formalisms and models, one should be able to describe for each parameter; its physical meaning, its unit and precision and a range (or set) of admissible values (according to a model). In many cases, especially for theoretical simulations, parameters could be linked by complex conditions or have to satisfy, under given conditions, a set of constraints (that could involve mathematical properties and formulas). Two examples of this high level description we would be able to provide are the following: $$\label{PDLExemplum01} \mbox{Service1 }\left\{ \begin{array}{l} \ \mbox{Input } \left\{ \begin{array}{c} \mbox{$\vec p\_1$ is a $m/s$ vector speed and $\| \vec p\_1\|<c$} \\ \mbox{ $p\_2$ is time (in second) and $p\_2 \geq 0$ }\\ \mbox{$p\_3$ is a $kg$ mass and $p\_3 > 0$}\\ \end{array} \right. \\ \\ \ \mbox{\hspace{1cm} Output } \left\{ \begin{array}{l} \mbox{ $p\_4$ is a Joule Kinetic Energy and $p\_4 \geq 0$} \\ \mbox{ $p\_5$ is a distance (in meter) }\\ \end{array} \right.\\ \end{array} \right.$$ $$\label{PDLExemplum02} \mbox{Service2 } \left\{ \begin{array}{l} \ \mbox{Input } \left\{ \begin{array}{l} \ \mbox{ $\mathbb R \ni p\_1 >0$; $p\_2 \in \mathbb N$; $p\_3 \in \mathbb R$} \\ \ \bullet \mbox{ if $p\_1 \in ]0,\pi/2]$ then $p\_2 \in \{2;4;6\}$,}\\ \ \mbox{$p\_3 \in [-1,+1]$ and $\displaystyle \left( \left| \sin(p\_1)^{p\_2} -p\_3 \right| \right)^{1/2}<3/2$ } \\ \ \bullet \mbox{ if $p\_1 \in ]\pi/2,\pi]$ then $0<p\_2 < 10$,}\\ \ \mbox{$p\_3>\log(p\_2)$ and $(p\_1 \cdot p\_2)$ must belong to $\mathbb N$} \\ \end{array} \right. \\ \\ \ \mbox{\hspace{1cm} Output } \left\{ \begin{array}{l} \mbox{$\vec p\_4, \, \vec p\_5 \in \mathbb R^3$ } \\ \ \mbox{Always $\displaystyle \frac{\| \vec p\_5\|}{\|\vec p\_4 \|} \leq 0.01 $} \\ \end{array} \right.\\ \end{array} \right.$$ To our knowledge, no existing description language meets these exacting requirements of scientific services. This leads us naturally to work on a new solution and consider developing a new description language. **Remark:** The PDL descriptions for the two examples above are provided respectively in paragraphs [Exemplum1XML] and [Exemplum2XML]. Interoperability issues ----------------------- Nowadays, with the massive spread and popularity of *cloud* services, interoperability has become an important element for the success and usability of services. This remains true in the context of astronomy. For the astronomical community, the ability of systems to work together without restrictions (and without further *ad hoc* implementations) is of high value: this is the ultimate goal that guides the *IVOA*. Computer scientists have developed different tools for setting up service interoperability and orchestration. The most well known are * *BAbel*,, (<https://computation.llnl.gov/casc/components/>), * *Taverna*, (<http://www.taverna.org.uk>), * *OSGI* and *D-OSGI* (<http://www.osgi.org/>), * *OPalm*,, (<http://www.cerfacs.fr/globc/PALM_WEB/>), * *GumTree*, (<http://docs.codehaus.org/display/GUMTREE/>). In general, with those tools one could coordinate only the services written with given languages. Moreover the interoperability is achieved only in a basic ‘computer’ way: if the input of the *B* service is a double and the output of the *A* service is a double too, thus the two services could interact. Our needs are more complex than this: let us consider a service *B*ʹ whose inputs are a density and a temperature and a service *A*ʹ whose outputs are density and temperature too. The interoperability is not so straightforward: the interaction of the two services has a sense only if the two densities (likewise the two temperatures) * have the same ‘computer’ type (ex. double), * are expressed in the same system of units, * correspond to the same physical concepts (for example, in the service *A*ʹ density could be an electronic density whereas in the service *B*ʹ the density could be a mass density) But things could be more complicated, even if all the previous items are satisfied: the model behind the service *B*ʹ could implement an Equation of State which is valid only if the product (density × temperature) is smaller than a given value. Thus the interoperability with *A*ʹ could be achieved only if the outputs of this last satisfy the condition on product. Again, as in case of descriptions no existing solutions could meet our needs and we are oriented towards building our own solution. **Remark**: We will present further considerations on the workflows aspects in paragraph [PDLWF], once we have exposed some basic concepts about PDL in the following paragraph. Astronomical and Astrophysical use-cases needing PDL’s fine description capabilities ------------------------------------------------------------------------------------ PDL was originally designed for meeting requirements coming from the community members wishing to expose their code online as public services. One of the difficulty they often mentioned is that online codes are often complex to use and users may do mistake with online simulations. For example, they could use them outside of their validity domain. The description of parameters with PDL allows to constrain those ones in validity domains, and so PDL answers this fear of the theorist community. In order to build a grammar like PDL, we could go two ways: in the first we would have built a monolithic solution for meeting the vast majority of astronomical and astrophysical needs. In the other we would have to provide community with a flexible enough tool (modular and extensible) to fit the majority of use-cases: if the parameters (of a given service) are decomposed with the finest granularity, PDL is a good tool for performing *a priori verification*, notifying errors to user before submitting jobs to a server system. This has, for example, an immediate consequence on how we deal, in PDL, with sky coordinates: we don’t have particular fields/entries for ascensions and declinations. For us this parameters could be stored in *double* parameters. The associated unit will precise if the angle will be considered in degrees or radians and the associated *SKOS* concepts, (<http://www.w3.org/TR/skos-reference/>) will provide further information. If a service provider has to define particular conditions on the angular distance between two coordinates (*a**s**c*1,  *d**e**c*1) and (*a**s**c*2,  *d**e**c*2) (e.g. ∣*a**s**c*1 − *a**s**c*2∣ + ∣*d**e**c*1 − *d**e**c*2∣ < *ε*) he/she may use the expression capabilities of PDL (cf. paragraph [par02]) During the PDL development, close cooperation naturally born with the Workflow community. PDL indeed allow the real *scientific* interoperability (not only based on computer types) required by the Astronomy and Astrophysics workflow community. The following sections of this document could seems complex at first reading. This is because we present all the features and the descriptive richness of PDL. Nevertheless this does not mean that all PDL descriptions are necessarily complex. They could be complex in case of services with many parameters linked by many constraints. But PDL description could be very simple in case of simple services. For example the PDL description associated with a common cone search service is very simple. It could be consulted at the following URL: [http://www.myexperiment.org/files/999/versions/4/ download/AMIGA-PDL-Description.xml]( http://www.myexperiment.org/files/999/versions/4/download/AMIGA-PDL-Description.xml ). A new Parameter Description Language: a unique solution to description and interoperability needs ------------------------------------------------------------------------------------------------- To overcome the lack of a solution to our description and interoperability needs, it is proposed to introduce a new language. Our aim is to finely describe the set of parameters (inputs and outputs of a given generic services) in a way that * could be *interpreted* by human beings (we could say *understood* for the simpler description cases), * could be parsed and handled by a computer, * complex relations and constraints involving parameters could be formulated unambiguously. Indeed we would like to express + mathematical laws/formulas, + conditional relationships (provided they have a logical sense)involving parameters. The new language is based on a generic data model (DM). Each object of the DM corresponds to a syntactic element. Sentences are made by building object-structures. Each sentence can be interpreted by a computer by parsing the object structure. With PDL one could build a mathematical expression (respectively conditional sentences) assembling the base-element described in section [par02] (resp. section [complexRelations]). If a particular expression (or condition) could not be expressed using the existing features, this modular grammar could be extended by introducing an ad hoc syntactic element into the object DM. For describing the physical scientific concept or model behind a given parameter, the idea is to use *SKOS* concepts and, if more complexity is required by the use case, a richer ontology. Since the inputs and outputs of every service (including their constraints and complex conditions) could be described with this fine grained granularity, interoperability becomes possible in the *smart* and *intelligent* sense we really need: services should be able to work out if they can reasonably use their output as input for another one, by simply looking at its description. With no loss of generality and to ensure that the model could work with the largest possible number of programming languages, we decided to fix it under the form of an XML schema (cf paragraph [pdlSchema]).This choice is also convenient because there are many libraries and tools for handling and parsing XML documents. **Remark:** We recall that PDL is a syntactic framework for describing parameters (with related constraints) of generic services. Since a PDL description is rigorous and unambiguous, it is possible to verify if the instance of a given parameter (i.e. the value of the parameter that a user sends to the service) is consistent with the description. In what follows in this document, we will often use the terms *evaluate* and *interpret* with reference to an expression and/or condition composed with PDL. By this we mean that one must replace the referenced parameters (in the PDL expressions/conditions) by the set of values provided to the service by the user. The replacement mechanisms will be explained in detail, case by case. ### PDL in the IVOA architecture [Pic-arch] Within the IVOA Architecture of figure [Pic-arch], PDL is a VO standard for richly describing parameters with a fine grained granularity, allowing to introduce constraints and mathematical formulae. If PDL describes the nature, the hierarchy of parameters and their constraints, **it does not describe** how this parameters are transmitted to a service, nor how these parameters will be processed by the described service. For example, PDL does not prescribe whether to transfer parameters through a SOAP envelope or through a REST post, nor what will be the phases that the submitted job will pass through. In the context of the IVOA, this means that the separation between PDL and UWS is clear and one can be used without the other without infringing any rules of those standards. Indeed, PDL could be seen a supplementary layer, for explaining in a unified way the physical/computational meaning of every parameter, whereas UWS has only a description of the values of parameters. PDL could be plugged as an additional layer to every existing IVOA service and is suitable for solving issues not covered by other IVOA standards and is particularly indicated for workflows. ### Some consideration on PDL and Workflows The orchestration of services defines a Scientific Workflow, and services interoperability is key in the process of designing and building workflows. An important consideration in this process of orchestration is the control of parameters constraints at the moment of the workflow execution. Even if interoperability is assured at the phase of workflow design, a control at the execution phase has to be implemented by workflow engines as service clients. As we suggested in the remark of the previous paragraph, testing for valid parameters provided to a service could be automatically generated starting from the PDL description. This automation facility could be used to perform the verification on both client side and on server side: * verifications made on client-side will avoid sending the wrong set of parameters to a server, reducing the load on the latter, * verifications made on server-side will avoid running jobs with wrong set of parameters. Indeed a server does not know if the job is sent by a client implementing the verifications or not. Therefore it must behave as if the data had never been checked. Verification of non-standard errors (e.g. network issues) are out of the scope of PDL. ![Graphical convention adopted for building graphical representations, starting from the XML schema.](pictures/XMLConvention.jpg "fig:") [figSchema] ### On the graphical representations adopted into this document As recalled at the end of the paragraph [ANewPDL], we decided to fix the PDL grammar into an XML schema. The graphical diagrams proposed into this document are a simple rendering of every XML element contained into the schema, obtained following the graphical convention of the figure [figSchema]. Indeed, a list with the defined schema components (elements, attributes, simple and complex types, groups and attribute groups) is presented into the graphical representation: every complex element described is linked with segments to the contained sub-elements. A bold segment indicates that the sub-element is required and a thin segment indicates that the sub-element is optional. Moreover, the cardinality of the contained sub-elements could be expressed on the segments. The Service Class ================= ![Graphical representation of the Service Class](pictures/Service.jpg "fig:") [Pic-Service] The root element of the PDL description of a generic service is the object *Service* (see figure [Pic-Service]). This **must contain** * A single *ServiceName*. This field is a String containing the name of the service. * A *ServiceId*. This field is a String containing the IVOA id of the service. It is introduced for a future integration of PDL into the registries: each service in the registry will be marked with its own unique id. * A *Description*. This field is a String and contains a human readable description of the service. This description is not intended to be understood/parsed by a machine. * A *Parameters* field which is a list of *SingleParameter* object types (cf. paragraph [par01]). This list contains the definition of all parameters (both inputs and outputs) of the service. The two following fields specify if a given parameter is a input or an output one. * An *Inputs* field of type *ParameterGroup* (cf. paragraph [par-group]). This object contains the detailed description (with constraints and conditions) of all the input parameters. * An *Outputs* field of type *ParameterGroup*. This object contains the detailed description (with constraints and conditions) of all the output parameters. The SingleParameter Class ========================= ![Graphical representation of the Parameter Class](pictures/Parameter.jpg "fig:") [Pic-Parameter] The *SingleParameter* Class (see figure [Pic-Parameter]) is the core element for describing jobs. Every object of this type must be characterized by: * A name, which is the Id of the parameter. In a given PDL description instance, two parameters cannot have the same name; * A single parameter type, which explains the nature of the current parameter. The allowed types are : boolean, string, integer, real, date; * A dimension. A 1-dimension corresponds to a scalar parameter whereas a dimension equal to N corresponds to a N-size vector. The dimension is expressed using an *expression* (cf. paragraph [par02]). The result of the expression that appears in this *SingleParameter*-field object **must be integer**.[2](#fn2) **Remark on the vector aspect:** It could seem unnecessarily complex to have the parameter dimension into an *Expression*. This feature has been introduced for meeting some particular needs: consider for example a service computing polynomial interpolations. Let the first parameter *d* (an integer) be the degree of the interpolation and the second parameter $\vec p$ be the vector containing the set of points to interpolate. For basic mathematical reasons, these two parameters are linked by the condition $(\mbox{size}(\vec v)-1) = d$. By defining dimensions as *Expressions* we can easily include this kind of constraints into PDL descriptions. Vectors in PDL are intended as one dimensional arrays of values. Further significations should be documented using the UCD, Utype or the Skos concept fields. Moreover, if one wish to define an *Expression* using the scalar product of two vectors (cf. paragraph [par02]) he/she has to pay attention that the involved vectors are expressed in the same orthonormal basis. The attribute *dependency* can take one of the two values **required** or **optional**. If required the parameter **must be** provided to the service. If optional, the service could work even without the current parameter and the values will be considered for processing only if provided. Optional fields for the *SingleParameter* Class are: * a UCD : which is a text reference to an existing UCD for characterizing the parameter ; * a Utype : which is a reference to an existing Utype for characterizing the parameter (); * a Skos Concept ( containing the valid URL of a Skos concept). * a Unit (). * a precision. This field must be specified only for parameter types where the concept of precision makes sense. It has indeed no meaning for integer, rational or string. It is valid, for instance, for a real type. To understand the meaning of this field, let the function *f* be a model of a given service. If *i* denotes the input parameter, *f*(*i*) denotes the output. The precision *δ* is the smaller value such that *f*(*i* + *δ*) ≠ *f*(*i*). The precision is expressed using an *expression* (cf. paragraph [par02]). The result of the expression that appears in this *precision*-field **must be** of the same type as (or could be naturally cast to) the type appearing in the field *parameter type*. **NB:** The name of every *SingleParameter* is unique. The ParameterRef Class ====================== ![Graphical representation of the Parameter Reference Class](pictures/ParameterRef.jpg "fig:") [Pic-ParameterRef] This Class, as it name suggests, is used to reference an existing parameter defined in the *Service* context (cf. paragraph [par-service]). It contains only an attribute *ParameterName* of type String which must corresponds to the *Name* field of an existing *SingleParameter* (cf. paragraph [par01]). The ParameterType Class ======================= This Class is used to explain the type of a parameter (cf. paragraph [par01]) or an expression (cf. paragraph [par0203]). The allowed types are : * Boolean. The allowed values for parameters of this type are *true / false*, non case sensitive. * String. Any String (UTF8 encoding recommend). * Integer. Numbers (positive and negatives) composed of [0-9] characters. * Real. Two formats are allowed for parameters of this type: + numbers (positives and negatives) composed of [0-9] characters, with dot as decimal separator, + scientific notation: number composed of [0-9] characters, with dot as decimal separator, followed by the *E* character (non case sensitive), followed by an integer. * Date. Parameters of this type are dates in ISO8601 format. **Remark** There is a lot of complexity in expressing Date/Time values in astronomy. A first solution for covering all the cases would be to include all the possibility into the data model. However, this hardcoded approach does not fit with the PDL modular approach. For example, if a service provider wish to indicate to users that they have to provide a GMT date, he can use two parameters: the first will contain the date itself and the second (e.g. *dateType*) will specify the nature of the first parameter. The service provider may then use all the richness of the PDL grammar for expressing conditions on/between these two parameters. The ParameterGroup Class ======================== ![Graphical representation of the ParameterGroup Class](pictures/ParameterGroup.jpg "fig:") [Pic-ParameterGroup] The *ParameterGroup* Class (see figure [Pic-ParameterGroup]) is used for grouping parameters according to a criterion of relevancy arbitrarily chosen by service provider (for instance parameters may be grouped according to the physics : position-group, speed-group; thermodynamic-group). However, the ParameterGroup is not only a kind of parameter set, but also can be used for defining complex relations and/or constraints involving the contained parameters (cf. paragraph [par-ConstraintsOnGroup]). This Class **must contain** a single Name. This name is a String and is the identification label of the ParameterGroup, and two groups cannot have the same Name in a given PDL description instance. Optional fields are * the references to the parameters (cf. paragraph [par-parRef]) one want to include into the group; * a *ConstraintOnGroup* object (cf. paragraph [par-ConstraintsOnGroup]). This object is used for expressing the complex relations and constraints involving parameters. * an *Active* field of type *WhenConditionalStatement* (cf. paragraph [par-WhenConditionalStatment]). If there is no *Active* element the group will be considered active by default (e.g. in case of a graphical representation it will be displayed). Otherwise, the activations depends on the result of the evaluation of the Criterion contained into the When conditional statement (cf. paragraphs [par-WhenConditionalStatment] and [par-EvalCriteria]). * the *ParameterGroup* object contained within the current root group. Indeed the *ParametersGroup* is a recursive object which can contain other sub-groups. **NB:** The name of every *ParameterGroup* is unique. **NB:** For any practical use, the number on the parameter referenced into a given group summed to the number of sub-groups of the same group must be greater than one. Otherwise the group would be a hollow shell. The Expression Class ==================== The *Expression* is the most versatile component of the PDL. It occurs almost everywhere: in defining fields for *SingleParameters* (cf. paragraph [par01]) or in defining conditions and criteria). Expression itself is an abstract Class. In this section we are going to review all the concrete Class extending and specializing expressions. **N.B.** In what follows, we will call a **numerical expression** every *expression* involving only numerical types. This means that the evaluation of such expressions should lead to a number (or a vector number if the dimension of the expression is greater than one). The AtomicParameterExpression ----------------------------- ![Graphical representation of the AtomicParameterExpression Class](pictures/AtomicParameter.jpg "fig:") [Pic-AtomicParameter] The *AtomicParameterExpression* (extending *Expression*, see figure [Pic-AtomicParameter]) is the simplest expression that could be built involving a defined parameter. This Class **must contain** unique reference to a given parameter. Optional fields, valid only for numerical types, are : * A **numerical** power *Expression* object; * An *Operation* object (cf. paragraph [par0202]). Let *p* and *e**x**p* be respectively the parameter and the power expression we want to encapsulate. The composite object could be presented as follows: $$\label{eq01} \underbrace{ p^{exp} \underbrace{ \overbrace{\left( \begin{array}{c} + \\ - \\ \ast \\ \cdot \\ \div \end{array} \right) }^{\mbox{\tiny Operation type}} \overbrace{ \left( \mbox{AnotherExpression}\right) }^{\mbox{\tiny expression contained in operation}} }\_{\mbox{\tiny Operation object}}}\_{\mbox{\tiny Atomic Parameter Expression}}$$ To evaluate a given *AtomicParameterExpression*, one proceeds as follows: Let *d**p*, *d**e**x**p* be respectively the dimension of the parameter *p* referenced, the dimension of the power expression and the dimension of the expression contained within the operation object. The exponent part of the expression is legal if and only if: * *d**p* = *d**e**x**p*. In this case *p**e**x**p* is a *d**p*-size vector expression and ∀ *i* = 1, ..., *d**p* the *i* component of this vector is equal to *p**i**e**x**p**i*, where *p**i* is the value of the *i* component of vector parameter *p* and *e**x**p**i* is the value obtained by interpreting the *i* component of vector expression *e**x**p*. * Or *d**e**x**p* = 1. In this case, ∀ *i* = 1, ..., *d**p* the *i* component of the vector result is equal to *p**i**e**x**p*, where *p**i* is the same as defined above. Whatever the method used, let us note *e**p* the result of this first step. We recall that the dimension of *e**p* is always equal to *d**p*. In order to complete the evaluation of the expression, one should proceed as shown in paragraph [par0202], by setting there *b* = *e**p*. The AtomicConstantExpression ---------------------------- ![Graphical representation of the AtomicConstantExpression Class](pictures/AtomicConstant.jpg "fig:") [Pic-AtomicConstant] The *AtomicConstantExpression* (extending *Expression*, see figure [Pic-AtomicConstant]) is the simplest expression that could be built involving constants. Since this class could be used for defining a constant vector expression, it **must contain** * A single list of String which expresses the value of each component of the expression. Let *d**c* be the size of the String list. If *d**c* = 1 the expression is scalar and it is a vector expression if *d**c* > 1. * An attribute *ConstantType* of type *ParameterType* (cf. paragraph [par-ParameterType]) which defines the nature of the constant expression. The allowed types are the same as in the field *parameterType* of the Class *SingleParameter*. The Class **is legal if and only if** every element of the String list could be cast into the type expressed by the attribute *constantType*. Optional fields, valid only for numerical types, are : * A **numerical** power *Expression* object; * An *operation* object (cf. paragraph [par0202]). Let *s**i* (*i* = 1, ..., *d**c*) and *e**x**p* be respectively the *i* component of the String list and the power expression we want to encapsulate. The composite Class could be presented as follows: $$\underbrace{ \overbrace{ \left( s\_1, s\_2,..., s\_{d\_c}\right)^{exp} }^{\mbox{\tiny List of String to cast into the provided type}} \underbrace{ \overbrace{\left( \begin{array}{c} + \\ - \\ \ast \\ \cdot \\ \div \end{array} \right) }^{\mbox{\tiny Operation type}} \overbrace{ \left( \mbox{AnotherExpression}\right) }^{\mbox{\tiny expression contained in operation}} }\_{\mbox{\tiny Operation object}}}\_{\mbox{\tiny Atomic Constant Expression}}$$ To evaluate a given *AtomicConstantExpression*, one proceeds as follows: let *d**c*, *d**e**x**p* be respectively the dimension of the vector constant (*d**c* is equal to one in case of scalar constant), the dimension of the power expression and the dimension of the expression contained within the operation object. The exponent part of the expression is legal if and only if: * *d**c* = *d**e**x**p*. In this case (*s*1, ..., *s**d**c*)*e**x**p* is a *d**c* size vector expression and ∀*i* = 1, ..., *d**c* the *i*-th component of this vector is equal to *s**i**e**x**p**i*, where *e**x**p**i* is the value obtained by interpreting the *i* component of vector *e**x**p*. * Or *d**e**x**p* = 1. In this case, ∀*i* = 1, ..., *d**c* the *i* component of the vector result is equal to *s**i**e**x**p*. Whatever the method used, let us note *e**p* (whose dimension is always equal to *d**c*) is the result of this first step. In order to complete the evaluation of the expression, one should proceed as exposed in paragraph [par0202], by substituting there *b* = *e**p*. The ParenthesisContentExpression Class -------------------------------------- ![Graphical representation of the ParenthesisContent expression Class](pictures/ParenthesisContent.jpg "fig:") [Pic-ParenthesisContent] The *parenthesisContent* Class (extending *Expression*, see [Pic-ParenthesisContent]) is used to explicitly denote precedence by grouping the expressions that should be evaluated first. This Class **must contain** a single **numerical** object *E**x**p**r**e**s**s**i**o**n* (referred to hereafter as *e**x**p*1). Optional fields are * A **numerical** power *expression* object (referred to hereafter as *e**x**p*2); * An *Operation* object (cf. paragraph [par0202]). This composite Class could be presented as follows: $$\underbrace{ \underbrace{ \left( exp\_1\right) }\_{\mbox{\tiny Priority term}} ^{\hspace{15mm}exp\_2} \underbrace{ \overbrace{\left( \begin{array}{c} + \\ - \\ \ast \\ \cdot \\ \div \end{array} \right) }^{\mbox{\tiny Operation type}} \overbrace{ \left( \mbox{AnotherExpression}\right) }^{\mbox{\tiny expression contained in operation}} }\_{\mbox{\tiny Operation object}}}\_{\mbox{\tiny Parenthesis Expression}}$$ In order to evaluate this object expression, one proceeds as follows: first one evaluates the expression *e**x**p*1 that has the main priority. Then one proceeds exactly as in paragraph [par0201] (after the equation ([eq01])) by substituting *p* = *e**x**p*1 and *e**x**p* = *e**x**p*2. The Operation Class ------------------- ![Graphical representation of Operation Class](pictures/Operation.jpg "fig:") [Pic-Operation] The *Operation* Class (see figure [Pic-Operation]) is used for expressing operations involving two **numerical** expressions. This Class **must contain**: * an *operationType* attribute. This attribute could take the following values: plus for the sum, minus for the difference, multiply for the standard product, scalarProduct for the scalar product and divide for the standard division. Hereafter these operators will be respectively denoted  + ,  − ,  \* ,  ⋅ ,  ÷ . * an *Expression* Class. $$\label{OperationEquation} \underbrace{ \overbrace{\left( \begin{array}{c} + \\ - \\ \ast \\ \cdot \\ \div \end{array} \right) }^{\mbox{\tiny Operation type}} \overbrace{ \left( \mbox{ContaindedExpression}\right) }^{\mbox{\tiny expression contained in operation}} }\_{\mbox{\tiny Operation object}}$$ The *Operation* Class is always contained within a **numerical** *Expression* (cf. paragraph [par02]) and could not exist alone. Let *a* be the result of the evaluation of the expression object containing the operation[3](#fn3) let *b* the result of the evaluation of the **numerical** expression contained within the operation. As usual, we note *d**a* and *d**b* the dimensions of *a* and *b*. The operation evaluation is legal if and only if: * *d**a* = *d**b* and operation type (i.e. the operator) *o**p* ∈ { + ,  − ,  \* ,  ÷ }. In this case *a* *o**p* *b* is a vector expression of size *d**a* and ∀ *i* = 1, ..., *d**a* the *i* component of this vector is equal to (*a**i* *o**p* *b**i*) (i.e. a term by term operation). * Or *d**a* = *d**b* and operation type *o**p* is ‘ ⋅ ’. In this case *a* ⋅ *b* is the result of the scalar product ∑*i* = 1*d**a**a**i* \* *b**i*. It is obvious that the dimension of this result is equal to 1. * Or *d**b* = 1 and operation type (i.e. the operator) *o**p* ∈ { + ,  − ,  \* ,  ÷ }. In this case *a* *o**p* *b* is a vector expression of size *d**a* and ∀ *i* = 1, ..., *d**a* the *i* component of this vector is equal to (*a**i* *o**p* *b*). * Or *d**a* = 1 and operation type (i.e. the operator) *o**p* ∈ { + ,  − ,  \* ,  ÷ }. This case in symmetric to the previous one. The type of the result is automatically induced by a standard cast operation performed during the evaluations (for example a double vector added to an integer vector is a double vector). The FunctionType Class ---------------------- This Class is used for specifying the mathematical nature of the function contained within a *Function* object (cf. paragraph [par0205]). The unique String field this Class contains could take one of these values: size, abs, sin, cos, tan, asin, acos, atan, exp, log, sum, product. In paragraph [par0205] it is explained how these different function types are used and handled. The Function Class ------------------ ![Graphical representation of Function Class](pictures/Function.jpg "fig:") [Pic-Function] The *Function* Class (extending *expression*, see figure [Pic-Function]) is used for expressing a mathematical function on expressions. This Class **must contain** * A *functionName* attribute (of type *functionType* (cf. paragraph [par-FunctionType])) which specifies the nature of the function. * An *Expression* object (which is the function argument). Let *a**r**g* be the result of the evaluation of the function argument expression and *d**a**r**g* its dimension. The *function* Class evaluation **is legal if and only if**: * *f* ∈ {abs, sin, cos, tan, asin, acos, atan, exp, log} and the function argument is a **numerical** expression. In this case the result is a *d**a**r**g*-size vector and each component *r**i* = *f*(*a**r**g**i*), ∀ *i* = 1, ..., *d**a**r**g*. * Or *f* = sum (likewise *f* = product) and the argument is a **numerical** expression. In this case the result is a scalar value equal to ∑*i* = 1*i* = *d**a**r**g**a**r**g**i* (likewise ∏*i* = 1*i* = *d**a**r**g**a**r**g**i*), where *a**r**g**i* is the value obtained by interpreting the *i* component of vector expression *a**r**g*. * Or *f* = size. In this case the result is the scalar integer value *d**a**r**g*. From what we saw above, the result of the interpretation of a function Class **is always a number**. The FunctionExpression Class ---------------------------- ![Graphical representation of FunctionExpression Class](pictures/FunctionExpression.jpg "fig:") [Pic-FunctionExpression] The *FunctionExpression* Class (extending *Expression*, see figure [Pic-FunctionExpression]) is used for building mathematical expressions involving functions. This Class **must contains** a single *Function* object (cf. paragraph [par0205]). Optional fields, valid only for numerical types, are : * A **numerical** power *Expression* object; * An *Operation* object� (cf. paragraph [par0202]). This composite Class could be presented as follows: $$\underbrace{ \underbrace{ \left(\mbox{function}\right) }\_{\mbox{\tiny Function object}} ^{\hspace{15mm}exp} \underbrace{ \overbrace{\left( \begin{array}{c} + \\ - \\ \ast \\ \cdot \\ \div \end{array} \right) }^{\mbox{\tiny Operation type}} \overbrace{ \left( \mbox{AnotherExpression}\right) }^{\mbox{\tiny expression contained in operation}} }\_{\mbox{\tiny Operation object}}}\_{\mbox{\tiny FunctionExpression Object}}$$ In order to evaluate this Class expression, one proceed as follows: first one evaluate the funtion expression as explained in paragraph [par0205]. Then one proceed exactly as in paragraph [par0201] (after the equation ([eq01])) by taking *p* = function. Expressing complex relations and constraints on parameters ========================================================== In this part of the document we will explain how PDL objects could be used for building complex constraints and conditions involving input and/or output parameters. The ConstraintOnGroup Object ---------------------------- ![Graphical representation of ConstraintOnGroup object](pictures/ConstraintOnGroup.jpg "fig:") [Pic-ConstraintOnGroup] The *ConstraintOnGroup* object (see figure [Pic-ConstraintOnGroup]) is always contained within a *ParameterGroup* object and could not exist alone. This object **must contain** the *ConditionalStatement* objects. The latter are used, as is shown in paragraph [par-ConditionalStatement], for expressing the complex relations and constraints involving parameters. The ConditionalStatement object ------------------------------- The *ConditionalStatement* object is abstract and, as its name suggests, is used for defining conditional statements. In this section we are going to review the two concrete objects extending and specializing *ConditionalStatement*. ### The AlwaysConditionalStatement ![Graphical representation of AlwaysConditionalStatement object](pictures/AlwaysStatement.jpg "fig:") [Pic-AlwaysConditionalStatement] This object (see figure [Pic-AlwaysConditionalStatement]), as it name suggests, is used for expressing statement that must always be valid. It **must contain** a single *Always* object (which extends *ConditionalClause*, cf. paragraph [par-ConditionalClause]). ### The IfThenConditionalStatement ![Graphical representation of IfThenConditionalStatement object](pictures/IfThenStatement.jpg "fig:") [Pic-IfThenConditionalStatement] This object (see figure [Pic-IfThenConditionalStatement]), as it name suggests, is used for expressing statements that are valid only if a previous condition is verified. It **must contain**: * an *If* object (which extends *ConditionalClause*, cf. paragraph [par-ConditionalClause]). * a *Then* object (which extends *ConditionalClause*, cf. paragraph [par-ConditionalClause]). If the condition contained within the *If* object is valid, the condition contained within the *Then* object **must be** valid too. ### The WhenConditionalStatement object The when conditional statement is valid when the enclosed *When* conditional clause evaluates to true (cf. paragraph [par-EvalCriteria]). It contains a unique field of *When* type (cf. paragraph [par-ConditionalClause]). It was introduced for the purpose of dealing with the case of activating a ParameterGroup (cf paragraph [par-group]): Thus When has the advantage of having a restricted form of ConditionalStatement that could have no *side effects* in the Then part. ![Graphical representation of a WhenConditionalStatement object](pictures/WhenConditionalStatement.jpg "fig:") [Pic-WhenConditionalStatement] The ConditionalClause object ---------------------------- ![Graphical representation of ConditionalClause object](pictures/ConditionalClause.jpg "fig:") [Pic-ConditionalClause] The *ConditionalClause* object (see figure [Pic-ConditionalClause]) is abstract. It **must contain** a single *Criterion* object of type *AbstractCriterion* (cf. paragraph [par-AbstractCriterion]). The four concrete objects extending the abstract *ConditionalClause* are (see figure [Pic-ConcreteClause]): * *Always*; * *If*; * *Then*; * *When*. ![Graphical representation of Always, If, Then and When clauses](pictures/ConcreteClauses.jpg "fig:") [Pic-ConcreteClause] The Criterion contained within a *Always* object must always evaluates to *true* (we will hereafter say it is valid, cf paragraph [par-AlwaysConditionalStatement]). With other words, this means that *it is good* only when the evaluation of the criterion contained into the *Always* object’ evaluates to *true*. What if it is not good? It is wrong. *Wrong* evaluation is typically cached for notifying errors to users. The Criterion contained within a *When* object will be valid only when the enclosed Expression evaluates to True (cf. paragraphs [par-WhenConditionalStatment] and [par-EvalCriteria] ). The *If* and *Then* objects work as a tuple by composing the *IfThenConditionalStatement* (cf. paragraph [par-IfThenConditionalStatement]). The AbstractCriterion object ---------------------------- ![Graphical representation of AbstractCriterion object](pictures/AbstractCriterion.jpg "fig:") [Pic-AbstractCriterion] The objects extending *AbstractCriterion* (see figure [Pic-AbstractCriterion]) are essentials for building *ConditionalStatemets* (cf. paragraph [par-ConditionalStatement]) since they are contained within the *Always, If* and *Then* objects (cf. paragraph [par-ConditionalClause]). An *AbstractCriterion* object **must contain**: * an *Expression* object (cf. paragraph [par02]); * a *ConditionType* which is an object of type *AbstractCondition* (cf. paragraph [par-ConditionType]). This object specify which condition must be satisfied by the previous *Expression*. An optional field is the single *LogicalConnector* object (cf. paragraph [par-LogicalConnector]) used for building logical expressions. The two concrete objects extending *AbstractCriterion* are *Criterion* and *ParenthesisCriterion*. The latter of these two objects allows for assigning priority when interpreting and linking the criteria (cf. paragraph [par-EvalCriteria]). ### The Criterion object ![Graphical representation of Criterion object](pictures/Criterion.jpg "fig:") [Pic-Criterion] This object (see figure [Pic-Criterion]) extends the *AbstractCriterion* without specializing it. It is indeed just a concrete version of the abstract type. ### The ParenthesisCriterion object ![Graphical representation of ParenthesisCriterion object](pictures/ParenthesisCriterion.jpg "fig:") [Pic-ParenthesisCriterion] This object (see figure [Pic-ParenthesisCriterion]) extends and specialize the *AbstractCriterion*. It is used for defining arbitrary priority in interpreting boolean expression based on criteria. The optional field of *ParenthesisCriterion* is an *ExternalLogicalConnector* object of type *LogicalConnector*. It is used for linking other criteria, out of the priority perimeter defined by the parenthesis (cf. paragraph [par-EvalCriteria]). The LogicalConnector object --------------------------- ![Graphical representation of LogicalConnector object](pictures/LogicalConnector.jpg "fig:") [Pic-LogicalConnector] The *LogicalConnector* object (see figure [Pic-LogicalConnector]) is used for building complex logical expressions. It is an abstract object and it **must** contain a Criterion of type *AbstractCriterion* (cf. paragraph [par-AbstractCriterion]). The two concrete objects extending *LogicalConnector* are: * the *And* object used for introducing the logical AND operator between two criteria;[4](#fn4) * the *Or* object used for introducing the logical OR operator between two criteria. The AbstractCondition object ---------------------------- *AbstractCondition* is an abstract object. The objects extending it always belong to an *AbstractCriterion* (cf. [par-AbstractCriterion]). In this context, they are used combined with an *Expression* object, for expressing the condition that the expression must satisfy. Let us consider a given criterion object CR (extending*AbstractCriterion*) and let us note $\mathcal E$ and $\mathcal C$ the expression and the condition contained within CR. In what follows we are going to explain the different objects specializing *AbstractCondition* and their behavior. ### The IsNull condition This object is used for specifying that the expression $\mathcal E$ has no assigned value (this is exactly the same concept as the NULL value in Java or the None value in Python). Indeed, if and only if $\mathcal E$ has no assigned value, the evaluation of the tuple $(\mathcal E, \mathcal C)$ leads to a TRUE boolean value. Thus, in the case CR has no *LogicalConnector*, the criterion is true. ### The ‘numerical-type’ conditions These objects are used for specifying that the result of the evaluation of the expression $\mathcal E$ is of a given numerical type. The tuple $(\mathcal E, \mathcal C)$ is legal if and only if $\mathcal E$ is a **numerical** expression. The ‘numerical-type’ objects extending *AbstractCondition* are: * *IsInteger*, in this case the evaluation of the tuple $(\mathcal E, \mathcal C)$ leads to a TRUE boolean value if and only if the evaluation of the numerical expression $\mathcal E$ is an integer. * *IsReal*, in this case the evaluation of the tuple $(\mathcal E, \mathcal C)$ leads to a TRUE boolean value if and only if the evaluation of the numerical expression $\mathcal E$ is a real number. ### The BelongToSet condition ![Graphical representation of BelongToSet object](pictures/BelongToSet.jpg "fig:") [Pic-BelongToSet] This object (see figure [Pic-BelongToSet]) is used for specifying that the expression $\mathcal E$ could take only a finite set of values. It **must contain** the *Values* (which are objects of type *Expression*) defining the set of legal values. The number of *Values* must be greater than one. This object is legal only if all the *Expressions* of the set are of the same type (e.g. they are all numerical, or all boolean or all String expressions). The tuple $(\mathcal E, \mathcal C)$ leads to a TRUE boolean value if and only if: * the expression $\mathcal E$ and the expressions composing the set are of the same type * and an element $\mathcal E\_s$ exists in the set such that $\mathcal E\_s = \mathcal E$. This last equality is to be understood in the following sense: let  = *t* be the equality operator induced by the type (for numerical type the equality is in the real number sense, for String type the equality is case sensitive and for boolean the equality is in the classic boolean sense). Two expressions are equal if and only if + the expressions have the same size $d\_{\mathcal E}$, + and $\mathcal E\_s^i =\_t \mathcal E^i$, $\forall i =1,...,d\_{\mathcal E}$, where $\mathcal E\_s^i$ and $ \mathcal E^i$ are respectively the result of the evaluation of the *i* component of expressions $\mathcal E\_s$ and $\mathcal E$. ### The ValueLargerThan object ![Graphical representation of ValueLargerThan object](pictures/ValueLargerThan.jpg "fig:") [Pic-ValueLargerThan] This object (see figure [Pic-ValueLargerThan]) is used for expressing that the result of the evaluation of the expression $\mathcal E$ must be greater than a given value. It **must contain** * a **numerical** *Expression* $\mathcal E\_c$. * a *Reached* attribute, which is a boolean type. The tuple $(\mathcal E, \mathcal C)$ is legal only if $\mathcal E$ is a numerical expression. This tuple leads to a TRUE boolean value if and only if the result of the evaluation of the expression $\mathcal E$ is greater than the result of the evaluation of the expression $\mathcal E\_c$ and the attribute *Reached* is false. Otherwise if the *Reached* attribute is true the expression $\mathcal E$ may be greater than or equal to the result. ### The ValueSmallerThan object ![Graphical representation of ValueSmallerThan object](pictures/ValueSmallerThan.jpg "fig:") [Pic-ValueSmallerThan] This object (see figure [Pic-ValueSmallerThan]) is used for expressing that the result of the evaluation of the expression $\mathcal E$ must be smaller than a given value. It **must contain** * a **numerical** *Expression* $\mathcal E\_c$. * a *Reached* attribute which is a boolean type. The tuple $(\mathcal E, \mathcal C)$ is legal only if $\mathcal E$ is a numerical expression. This tuple leads to a TRUE boolean value if and only if the result of the evaluation of the expression $\mathcal E$ is smaller (otherwise smaller or equal when the attribute *Reached* is true) than the result of the evaluation of the expression $\mathcal E\_c$. ### The ValueInRange object ![Graphical representation of ValueInRange object](pictures/ValueInRange.jpg "fig:") [Pic-ValueInRange] This object (see figure [Pic-ValueInRange]) is used for expressing that the result of the evaluation of the expression $\mathcal E$ must belong to a given interval. The definition of the interval is made using the *ValueLargerThan* *ValueSmallerThan* objects. Indeed, the *ValueInRange* object **must contain**: * an object Inf of type *ValueLargerThan* for specifying the Inferior limit of the interval, * an object Sup of type *ValueSmallerThan* for specifying the Superior limit of the interval. The tuple $(\mathcal E, \mathcal C)$ is legal only if $\mathcal E$ is a numerical expression. This tuple leads to a TRUE boolean value if and only if the evaluation of both tuples $(\mathcal E, \mbox{\it ValueSmallerThan})$ and $(\mathcal E, \mbox{\it ValueLargerThan})$ lead to TRUE boolean values. ### The ValueDifferentFrom object ![Graphical representation of ValueDifferentFrom object](pictures/ValueDifferentFrom.jpg "fig:") [Pic-ValueDifferentFrom] This object (see figure [Pic-ValueDifferentFrom]) is used for specifying that the expression $\mathcal E$ must be different from a given value. It **must contain** an *Expression* $\mathcal E\_c$. In order to be compared, the two expressions $\mathcal E$ and $\mathcal E\_c$ must have the same type. The evaluation of the tuple $(\mathcal E, \mathcal C)$ leads to a TRUE boolean value only if $\mathcal E \neq \mathcal E\_c$. This inequality has to be understood in the sense explained in paragraph [par-BelongToSet] (in the second point of the list). ### The DefaultValue object ![Graphical representation of DefaultValue object](pictures/DefaultValue.jpg "fig:") [Pic-DefaultValue] This object (see figure [Pic-DefaultValue]) is used for specifying the default value of a parameter. It **must contain** an *Expression* $\mathcal E\_c$. Since the default value of an expression involving functions, multiple parameters, etc. has no particular sense, in the case of the present object the tuple $(\mathcal E, \mathcal C)$ is legal only if * $\mathcal E$ is an *AtomicParameterExpression* (cf. paragraph. [par0201]) * and the dimension and the type of the expression $\mathcal E\_c$ are equal to the dimension and type expressed in the *SingleParameter* object referenced into the *AtomicParameterExpression*. Moreover, for having a legal *DefaultValue* object, the criterion CR containing it must be contained within the *Always* or *Then* objects (cf. paragraph [par-ConditionalClause]). Evaluating and interpreting criteria objects -------------------------------------------- The evaluation of the criterion type objects (cf. paragraph [par-AbstractCriterion]) always leads to a boolean value (the only exception is what we saw in paragraph [par-DefaultValue], where the criterion contains a *DefaultValue* condition). We use hereafter the same notation introduced in [par-ConditionType]: let us consider a given criterion (extending*AbstractCriterion*) CR and let us note $\mathcal E$ and $\mathcal C$ the expression and the condition contained within CR. When CR contains no *LogicalConnector* objects, the evaluation of the criterion is straightforward : the result is equal to the boolean-evaluation of the tuple $(\mathcal E, \mathcal C)$. This tuple is evaluated according to the concrete class involved, as explained in paragraphs [par-IsNull] to [par-DefaultValue] It is a bit more complex when criteria contain *LogicalConnectors*. Let us see how to proceed. To begin with, let us consider only *Criterion* concrete objects: As we saw in the previous paragraphs, criteria object are (with the help of *LogicalConnectors* object) recursive and hierarchical objects. This hierarchical structure composing a complex criterion could be graphically represented as follows. $$\label{eq-CriterionStructure01} (\mathcal E\_1, \mathcal C\_1) \xrightarrow[{\mbox{\tiny AND/OR}}]{LC\_1} (\mathcal E\_2, \mathcal C\_2) \xrightarrow[{\mbox{\tiny AND/OR}}]{LC\_2} \cdots (\mathcal E\_i, \mathcal C\_i) \xrightarrow[{\mbox{\tiny AND/OR}}]{LC\_i} \cdots (\mathcal E\_{N-1}, \mathcal C\_{N-1}) \xrightarrow[{\mbox{\tiny AND/OR}}]{LC\_{N-1}} (\mathcal E\_N, \mathcal C\_N)$$ where the index 1, *i* and *N* are respectively for the root, the *i* and the leaf criterion composing the structure. The term *L**C**i* denotes the *LogicalConnector* contained within the criterion CR*i*. As we saw in paragraphs [par-IsNull] to [par-DefaultValue] every tuple $(\mathcal E\_i, \mathcal C\_i)$, *i* = 1, .., *N* could be evaluated (according to the concrete object involved) and leads to a boolean value $\mathcal B\_i$. Thus the expression ([eq-CriterionStructure01]) become $$\label{eq-CriterionStructure02} \mathcal B\_1 \xrightarrow[{\mbox{\tiny AND/OR}}]{LC\_1} \mathcal B\_2 \xrightarrow[{\mbox{\tiny AND/OR}}]{LC\_2} \cdots \mathcal B\_i \xrightarrow[{\mbox{\tiny AND/OR}}]{LC\_i} \cdots \mathcal B\_{N-1} \xrightarrow[{\mbox{\tiny AND/OR}}]{LC\_{N-1}} \mathcal B\_N$$ This last is a classic sequential boolean expression. It is evaluated from left to right and the operator AND takes precedence over the OR operator. Let us now consider *ParenthesisCriterion* criteria. A representation of such a criterion CR could be the following: $$\Big \langle (\mathcal E, \mathcal C) \xrightarrow{LC} \mathcal{CR}\_c \Big \rangle\_{\mathcal{CR}} \xrightarrow{ELC}\,,$$ where $\mathcal E$, $\mathcal C$, *L**C*, CR*c* are respectively the *Expression*, the condition, the *LogicalConnector* and the criterion contained within *L**C*. The term *E**L**C* is the *ExternalLogicalConnector* of CR. The criterion structure contained within ⟨ ⋅ ⟩CR has the highest priority and has to be evaluate, before the *ExternalLogicalConnector* evaluation. In the case where CR*c* is composed only of *Criterion* objects (so with no *ParenthesisCriterion*), the evaluation of the content of ⟨ ⋅ ⟩CR is performed as shown before in ([eq-CriterionStructure01]) and ([eq-CriterionStructure02]). In the case where CR*c* contains at least one *ParenthesisCriterion*, one has to go deeper in the criterion structure to find the deepest criterion CR*d* such that ⟨ ⋅ ⟩CR*d* contains only criteria of type *Criterion*.Thus one can simply evaluate the content of ⟨ ⋅ ⟩CR*d* as already shown. For illustrating how to proceed, let us consider the following complex-criterion structure: $$\label{eq-CriterionStructure03} \begin{array}{l} \displaystyle \Big \langle (\mathcal E\_1, \mathcal C\_1) \xrightarrow{LC\_1} (\mathcal E\_2, \mathcal C\_2) \Big \rangle\_{\mathcal{CR}\_1 } \xrightarrow{ELC\_1} \cdots \vspace{1mm} \\ \hspace{1.5cm} \displaystyle \Big \langle (\mathcal E\_{i-1}, \mathcal C\_{i-1}) \xrightarrow{LC\_{i-1}} \Big \langle (\mathcal E\_i, \mathcal C\_i) \xrightarrow{LC\_i} (\mathcal E\_{i+1}, \mathcal C\_{i+1}) \Big\rangle\_{\mathcal{CR}\_{i} } \Big \rangle\_{\mathcal{CR}\_{i-1}} \xrightarrow{ELC\_{i-1}} \vspace{1mm} \\ \hspace{7cm} \cdots \displaystyle \Big \langle (\mathcal E\_{N-1}, \mathcal C\_{N-1}) \xrightarrow{LC\_{N-1}} (\mathcal E\_N, \mathcal C\_N) \Big\rangle\_{\mathcal{CR}\_{N-1} } \\ \end{array}$$ From what we saw above, the expression ([eq-CriterionStructure03]) becomes $$\begin{array}{l} \displaystyle \Big \langle \mathcal B\_1 \xrightarrow{LC\_1} \mathcal B\_2 \Big \rangle\_{\mathcal{CR}\_1} \xrightarrow{ELC\_1} \cdots \\ \displaystyle \hspace{3cm} \Big \langle \mathcal B\_{i-1} \xrightarrow{LC\_{i-1}} \Big \langle \mathcal B\_i \xrightarrow{LC\_i} \mathcal B\_{i+1} \Big \rangle\_{\mathcal{CR}\_i} \Big \rangle\_{\mathcal{CR}\_{i-1}} \xrightarrow{ELC\_{i-1}} \\ \displaystyle \hspace{7.5cm} \cdots \Big \langle \mathcal B\_{N-1} \xrightarrow{LC\_{N-1}} \mathcal B\_N \Big \rangle\_{\mathcal{CR}\_{N-1}} \end{array}$$ and finally $$\begin{array}{l} \displaystyle \Big ( \mathcal B\_1 \xrightarrow[{\mbox{\tiny AND/OR}}]{LC\_1} \mathcal B\_2 \Big ) \xrightarrow[{\mbox{\tiny AND/OR}}]{ELC\_1} \cdots \\ \displaystyle \hspace{2.5cm} \Big ( \mathcal B\_{i-1} \xrightarrow[{\mbox{\tiny AND/OR}}]{LC\_{i-1}} \Big ( \mathcal B\_i \xrightarrow[{\mbox{\tiny AND/OR}}]{LC\_i} \mathcal B\_{i+1} \Big) \Big ) \xrightarrow[{\mbox{\tiny AND/OR}}]{ELC\_{i-1}} \\ \displaystyle \hspace{7.5cm} \cdots \Big ( B\_{N-1} \xrightarrow[{\mbox{\tiny AND/OR}}]{LC\_{N-1}} \mathcal B\_N \Big ) \,. \\ \end{array}$$ This last is a classical sequential boolean expression. It is evaluated from the left to the right. The sub-expression between the parenthesis must be evaluated with the highest priority and the operator AND takes precedence over the OR operator. PDL and formal logic ==================== We recall that PDL is a grammar and syntax framework for describing parameters and their constraints. Since the description is rigorous and unambiguous, PDL could verify if the instance of a given parameter is consistent with the provided description and related constraints. For example, consider the description $$\left\{ \begin{array}{l} p\_1 \mbox{is a Kelvin temperature}\\ \mbox{Always } p\_1 > 0 \\ \end{array} \right..$$ According to the description, the PDL framework could automatically verify the validity of the parameter provided by the user. If he/she provides *p*1 =  − 3, then this value will be rejected. In any case PDL is not a formal-logic calculation tool. One could build the following description with no problem: $$\left\{ \begin{array}{l} p\_1 \in \mathbb R\\ \displaystyle \mbox{Always } \big( (p\_1 > 0) \mbox{ AND } (p\_1 < 0) \big)\\ \end{array} \right..$$ The PDL language grammar is not a tool with capabilities to perceive logical contradictions which may be contained within statements. This kind of consideration is outside of the scope of the present standard. For this reason, a validation system for PDL definitions is not required to implement the detection of contradictions, but may do if the services within its description realm make this feasible. The current PDL reference implementation does not perform such contradiction detection and thus any parameter *p*1 provided by user would be rejected for this example. In other words **people providing descriptions of services must pay great attention to their contents.** Remarks for software components implementing PDL ================================================ Throughout this document we have described PDL as a *grammar*. If we consider it just as a grammar, then a specific description should be considered as an implementation. We remember that, since a PDL description is detailed, it is a priori possible to write once for all generic software components. These components will be automatically *configured* by a PDL description thus becoming *ad hoc* implementation software for the described service. Moreover checking algorithms could also be generated automatically starting from a description instance. In our implementations we wanted to check practically that these concepts implied in the definition of PDL really works. The development of operational services (as the Paris-Durham shock code) also permits to ensure the coherence of the core grammar and to verify if the PDL’s capabilities could meet the description needs of state of the art simulation codes. At the present (Fall 2013) four software elements are implemented around PDL: * the standalone dynamic client. It embeds the automatic generation of the verification layer (Google code repository at <https://code.google.com/p/vo-param/>). This development shows that a PDL description instance can be used for generating the checking algorithms and for generating a client with a dynamic-intelligent behavior helping the user in interacting with a service. This client could be used for interacting with services exposed using different job systems; * a server for exposing any exiting code as a web services. It embeds the verification layer. This development was done for showing that a PDL description instance can be used for generating the *ad hoc* server, exposing the described service. A particular feature of this server is that it can generates grids of model starting from a single job submission, which indicates ranges for parameters (GitHub repository at <https://github.com/cmzwolf>); * the Taverna Plugin. From one point of view this plugin could be seen as an alternate client to the standalone one. From another point of view it is strongly oriented towards general physical and scientific interoperability (discussed in paragraph [ParInteropIssues]) since it uses PDL description for validating the chaining of jobs composing a workflow. As the dynamic client, the Taverna plugin can be used for interacting with services exposed different job systems (GitHub repository for stable version at <https://github.com/wf4ever/astrotaverna>). * the description editor, for editing PDL description from a Graphical User Interface. Since the key point for using PDL and take advantage of the software tools we have just described is a PDL description, we decided to provide the community with a tool for easily composing PDL description. In some sense this is the entry-point of the PDL software farmework (google code repository at <https://code.google.com/p/pdl-editor/>). All these developments validate the concepts of automatic generation of algorithms and the possibility of configuring, with highly specialized individual behaviors, generic software components. This is very important since it reduces drastically the development time for building services based on the PDL grammar. This is essential in a scientific context where only few scientists have access to software engineer for their IVOA developments. In further developments, PDL-client implementations will include a formal-logic module. This will permit finding contradictions inside the descriptions. Such a module will also be required for implementing the automatic computation of *a priori interoperability graphs*. It will also permit checking interoperability in terms of semantic annotations: for example, let A be the concept that describes an input parameter of a service $\mathcal S$ and B the concept that describes an output parameter of a service $\mathcal S'$. If A and B are the same concept, then both services match the interoperability criterion. However, if A and B are not the same concept we need, for piping the result of $\mathcal S'$ to $\mathcal S$, to ask if the concept B is more specific than the concept A, in other words, if the concept B is generalized or subsumed by the concept A. If this happens then both services match again the interoperability criterion. Interoperability only makes sense when there is an application or infrastructure that allows communication and connection of different services. An example is the applications for orchestrating services by designing workflows (as described in section 2.2). Further developments for PDL include the implementation of interoperability mechanisms in Taverna. Annex ===== A practice introduction to PDL (or dive in PDL) ----------------------------------------------- In this section we present a practice approach to PDL. It is inspired by one of the first services we deployed using the PDL framework: the Meudon Stark-H broadening computation service for Hydrogen (<http://atomsonline.obspm.fr>). The exposed code take as input four parameters: * A quantum number *N**i*, which corresponds to the upper energy level. * A quantum number *N**f*, which corresponds to the lower energy level. * A temperature *T*, which is the temperature of the simulated medium. * A density *ρ*, which is an electron density. With the existing exposure systems (mostly Soap, REST, servlet web services) the information about parameters is more or less limited to a basic *function signature*: the two first parameters are *Integer*, while the two last are *Double*. But this information is not sufficient for a user wishing to use the service without knowing a priori the code: what are the unit of these parameters? What are their physical meaning? PDL is a unified way for providing user with this information by hardcoding it directly in the software composing the service. With PDL service provider can easily express that * *N**i* is *Integer*, it corresponds to the principal quantum number of the upper energy level and, as a quantum number, it has no dimension. The PDL *translation* of this sentence is: ``` <parameter dependency="required"> <Name>InitialLevel</Name> <ParameterType>integer</ParameterType> <SkosConcept>http://example.edu/skos/initialLevel</SkosConcept> <Unit>None</Unit> <Dimension xsi:type="AtomicConstantExpression" ConstantType="integer"> <Constant>1</Constant> </Dimension> </parameter> ``` The PDL description points to the skos uri containing the definition of the physical concept. Moreover it says that the current parameter has 1 as dimension. This means that the parameter is scalar (a dimensions greater than one is for vector parameters). The required attribute indicate that the user must submit this parameter to the service, and it is not optional. * *N**f* is *Integer*, it corresponds to the principal quantum number of the lower energy level and, as a quantum number, it has no dimension. The PDL *translation* of this sentence is: ``` <parameter dependency="required"> <Name>FinalLevel</Name> <ParameterType>integer</ParameterType> <SkosConcept>http://example.edu/skos/finalLevell</SkosConcept> <Unit>None</Unit> <Dimension xsi:type="AtomicConstantExpression" ConstantType="integer"> <Constant>1</Constant> </Dimension> </parameter> ``` * *T* is the thermodynamic temperature of the simulated medium and is expressed in Kelvin. The PDL *translation* for this sentence is: ``` <parameter dependency="required"> <Name>Temperature</Name> <ParameterType>real</ParameterType> <SkosConcept>http://example.edu/skos/temperaturel</SkosConcept> <Unit>K</Unit> <Dimension xsi:type="AtomicConstantExpression" ConstantType="integer"> <Constant>1</Constant> </Dimension> </parameter> ``` * *ρ* is an electron density in *c**m*− 3. The PDL version is: ``` <parameter dependency="required"> <Name>Density</Name> <ParameterType>real</ParameterType> <SkosConcept>http://example.edu/skos/denisty</SkosConcept> <Unit>cm^-3</Unit> <Dimension xsi:type="AtomicConstantExpression" ConstantType="integer"> <Constant>1</Constant> </Dimension> </parameter> ``` Even with this information, it is not guaranteed that users will be able to correctly use the service. Indeed, two constraints involve parameters. The first comes from the definition of *N**i* and *N**f*: the condition (*N**i* − *N**f*) > 1 must always be satisfied. The second comes from the physical model implemented into the exposed code. The result has a physical meaning only if the Debey approximation hypothesis holds: $$\label{DebeyApprox} \frac{9 \, \rho^{1/6}}{100 \, T^{1/2} }<1$$ How to alert the user of these two constraints? A first solution consists in writing explanation (e.g. a code documentation) but it is not sure that users will read it. A more secure approach consists in writing checking algorithms. But this solution is time consuming, since you have to write *ad hoc* tests for every specific code. PDL answer this issues by providing a unified way for expressing the constraints. The PDL formulation of ([energyDifference]) is ``` <always> <Criterion xsi:type="Criterion"> <Expression xsi:type="AtomicParameterExpression"> <parameterRef ParameterName="FinalLevel"/> <Operation operationType="MINUS"> <Expression xsi:type="AtomicParameterExpression"> <parameterRef ParameterName="InitialLevel"/> </Expression> </Operation> </Expression> <ConditionType xsi:type="ValueLargerThan" reached="true"> <Value xsi:type="AtomicConstantExpression" ConstantType="real"> <Constant>1</Constant> </Value> </ConditionType> </Criterion> </always> ``` whereas the formulation for ([DebeyApprox]) is ``` <always> <Criterion xsi:type="Criterion"> <Expression xsi:type="AtomicConstantExpression" ConstantType="real"> <Constant>0.09</Constant> <Operation operationType="MULTIPLY"> <Expression xsi:type="AtomicParameterExpression"> <parameterRef ParameterName="Density"/> <power xsi:type="AtomicConstantExpression" ConstantType="real"> <Constant>0.16666666</Constant> </power> <Operation operationType="DIVIDE"> <Expression xsi:type="AtomicParameterExpression"> <parameterRef ParameterName="Temperature"/> <power xsi:type="AtomicConstantExpression" ConstantType="real"> <Constant>0.5</Constant> </power> </Expression> </Operation> </Expression> </Operation> </Expression> <ConditionType xsi:type="ValueSmallerThan" reached="false"> <Value xsi:type="AtomicConstantExpression" ConstantType="real"> <Constant>1</Constant> </Value> </ConditionType> </Criterion> </always> ``` These two last pieces of XML (composing a wider PDL description) are not intended for humans. They are parsed by the PDL framework for automatically generate the checking algorithms associated with the described constraints. The key point to retain is that PDL is simple for simple services is flexible and powerful enough for meeting description requirements coming with the most complex scientific codes (of course the associated description won’t be simple). The PDL description of the example of equation ([PDLExemplum01]) ---------------------------------------------------------------- The reader will find the xml file related to this example at the following URL: [http://vo-param.googlecode.com/svn/trunk/model/documentation/ PDL-Description\_example01.xml](http://vo-param.googlecode.com/svn/trunk/model/documentation/PDL-Description_example01.xml) The PDL description of the example of equation ([PDLExemplum02]) ---------------------------------------------------------------- The reader will find the xml file related to this example at the following URL: [http://vo-param.googlecode.com/svn/trunk/model/documentation/ PDL-Description\_Example02.xml](http://vo-param.googlecode.com/svn/trunk/model/documentation/PDL-Description_Example02.xml). The PDL XSD Schema ------------------ ``` <?xml version="1.0" encoding="UTF-8"?> <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:pm="http://www.ivoa.net/xml/PDL/v1.0" elementFormDefault="qualified" targetNamespace="http://www.ivoa.net/xml/PDL/v1.0"> <!-- needs isActive property on group - need to be able to reference a group --> <xs:annotation> <xs:documentation> IVOA Description of the set of parameters for a service</xs:documentation> </xs:annotation> <xs:element name="Service"> <xs:annotation> <xs:documentation> The base service description. A service in this context is simply some sort of process that has input parameters and produces output parameters. </xs:documentation> </xs:annotation> <xs:complexType> <xs:sequence> <xs:element name="ServiceId" type="xs:string" minOccurs="1" maxOccurs="1"> <xs:annotation> <xs:documentation>The ivoa identifier for the service</xs:documentation> </xs:annotation> </xs:element> <xs:element name="ServiceName" type="xs:string" minOccurs="1" maxOccurs="1"/> <xs:element name="Description" type="xs:string" minOccurs="1" maxOccurs="1"/> <xs:element name="Parameters" type="pm:Parameters" minOccurs="1" maxOccurs="1"> <xs:annotation> <xs:documentation>The list of all possible parameters both input and output parameters</xs:documentation> </xs:annotation> </xs:element> <xs:element name="Inputs" type="pm:ParameterGroup" minOccurs="1" maxOccurs="1"> <xs:annotation> <xs:documentation>The input parameters for a service.</xs:documentation> </xs:annotation> </xs:element> <xs:element name="Outputs" type="pm:ParameterGroup" minOccurs="1" maxOccurs="1"> <xs:annotation> <xs:documentation>The parameters output from a service.</xs:documentation> </xs:annotation> </xs:element> </xs:sequence> </xs:complexType> <!-- keys to ensure that parameter names are unique --> <xs:unique name="KeyName"> <xs:selector xpath="./pm:ParameterList/pm:parameter"/> <xs:field xpath="pm:Name"/> </xs:unique> <xs:keyref name="expressionKeyref" refer="pm:KeyName"> <xs:selector xpath=".//pm:parameterRef"/> <xs:field xpath="pm:parameterName"/> </xs:keyref> </xs:element> <xs:complexType name="Parameters"> <xs:annotation> <xs:documentation>The list of possible parameters both input and output.</xs:documentation> </xs:annotation> <xs:sequence> <xs:element name="parameter" type="pm:SingleParameter" minOccurs="1" maxOccurs="unbounded"> </xs:element> </xs:sequence> </xs:complexType> <xs:complexType name="ParameterReference"> <xs:annotation> <xs:documentation>A reference to a parameter</xs:documentation> </xs:annotation> <xs:attribute name="ParameterName" type="xs:string"> <xs:annotation> <xs:documentation>The name of the parameter being referred to.</xs:documentation> </xs:annotation> </xs:attribute> </xs:complexType> <xs:complexType name="Description"> <xs:sequence> <xs:element name="humanReadableDescription" type="xs:string"/> </xs:sequence> </xs:complexType> <xs:simpleType name="ParameterDependency"> <xs:annotation> <xs:documentation>The types that a parameter may have.</xs:documentation> <xs:documentation> Flag for saying if a parameter is required or optional </xs:documentation> </xs:annotation> <xs:restriction base="xs:string"> <xs:enumeration value="required"> <xs:annotation> <xs:documentation>The parameter must be provided by user.</xs:documentation> </xs:annotation> </xs:enumeration> <xs:enumeration value="optional"> <xs:annotation> <xs:documentation>The parameter is optional.</xs:documentation> </xs:annotation> </xs:enumeration> </xs:restriction> </xs:simpleType> <xs:simpleType name="ParameterType"> <xs:annotation> <xs:documentation>The types that a parameter may have.</xs:documentation> <xs:documentation> Note that the types are made more specific by using the UCD attribute of the parameter definition. In particular it is expected that a Parameter Model library would be able to recognise the more specific types associated with the following UCDs <ul> <li>pos - to provide a suitable widget for positions</li> <li>time - to provide suitable widgets for times and durations</li> </ul> </xs:documentation> </xs:annotation> <xs:restriction base="xs:string"> <xs:enumeration value="boolean"> <xs:annotation> <xs:documentation>A representation of a boolean - e.g. true/false</xs:documentation> </xs:annotation> </xs:enumeration> <xs:enumeration value="string"> <xs:annotation> <xs:documentation>Data that can be interpreted as text.</xs:documentation> </xs:annotation> </xs:enumeration> <xs:enumeration value="integer"/> <xs:enumeration value="real"/> <xs:enumeration value="date"/> </xs:restriction> </xs:simpleType> <xs:simpleType name="FunctionType"> <xs:restriction base="xs:string"> <xs:enumeration value="size"/> <xs:enumeration value="abs"/> <xs:enumeration value="sin"/> <xs:enumeration value="cos"/> <xs:enumeration value="tan"/> <xs:enumeration value="asin"/> <xs:enumeration value="acos"/> <xs:enumeration value="atan"/> <xs:enumeration value="exp"/> <xs:enumeration value="log"/> <xs:enumeration value="sum"/> <xs:enumeration value="product"/> </xs:restriction> </xs:simpleType> <xs:simpleType name="OperationType"> <xs:restriction base="xs:string"> <xs:enumeration value="PLUS"/> <xs:enumeration value="MINUS"/> <xs:enumeration value="MULTIPLY"/> <xs:enumeration value="DIVIDE"/> <xs:enumeration value="SCALAR"/> </xs:restriction> </xs:simpleType> <xs:complexType name="SingleParameter"> <xs:sequence> <xs:element name="Name" type="xs:string" minOccurs="1" maxOccurs="1"> </xs:element> <xs:element name="ParameterType" type="pm:ParameterType" minOccurs="1" maxOccurs="1"> </xs:element> <xs:element name="UCD" type="xs:string" maxOccurs="1" minOccurs="0"> </xs:element> <xs:element name="UType" type="xs:string" maxOccurs="1" minOccurs="0"/> <xs:element name="SkosConcept" type="xs:string" minOccurs="0" maxOccurs="1"/> <xs:element name="Unit" type="xs:string" minOccurs="0" maxOccurs="1"/> <xs:element name="Precision" type="pm:Expression" minOccurs="0" maxOccurs="1"/> <xs:element name="Dimension" type="pm:Expression" maxOccurs="1" minOccurs="1"/> </xs:sequence> <xs:attribute name="dependency" type="pm:ParameterDependency"> </xs:attribute> </xs:complexType> <xs:complexType name="ParameterGroup"> <xs:annotation> <xs:documentation>A logical grouping of parameters</xs:documentation> </xs:annotation> <xs:sequence> <xs:element name="Name" type="xs:string" maxOccurs="1" minOccurs="1"> <xs:annotation> <xs:documentation>The name of the parameter group which can be used for display</xs:documentation> </xs:annotation> </xs:element> <xs:element name="ParameterRef" type="pm:ParameterReference" minOccurs="0" maxOccurs="unbounded"> <xs:annotation> <xs:documentation>The list of parameters that are in the group</xs:documentation> </xs:annotation> </xs:element> <xs:element name="ConstraintOnGroup" type="pm:ConstraintOnGroup" maxOccurs="1" minOccurs="0"> <xs:annotation> <xs:documentation>The constraints on parameters in the group</xs:documentation> </xs:annotation> </xs:element> <xs:element name="ParameterGroup" type="pm:ParameterGroup" minOccurs="0" maxOccurs="unbounded"> <xs:annotation> <xs:documentation>possibly nested parameter groups</xs:documentation> </xs:annotation> </xs:element> <xs:element name="Active" type="pm:WhenConditionalStatement" maxOccurs="1" minOccurs="0"> <xs:annotation> <xs:documentation>It the goup active? i.e. should it be displayed - The default is yes if there is no active element, otherwise it is the result of the evaluation of the When conditional statement.</xs:documentation> </xs:annotation> </xs:element> </xs:sequence> </xs:complexType> <xs:complexType name="ConstraintOnGroup"> <xs:annotation> <xs:documentation>The possible constraints on the parameters in a group</xs:documentation> </xs:annotation> <xs:sequence> <xs:element name="ConditionalStatement" type="pm:ConditionalStatement" minOccurs="0" maxOccurs="unbounded"/> </xs:sequence> </xs:complexType> <xs:complexType abstract="true" name="ConditionalStatement"> <xs:sequence> <xs:element name="comment" type="xs:string" minOccurs="1" maxOccurs="1"/> </xs:sequence> </xs:complexType> <xs:complexType name="IfThenConditionalStatement"> <xs:complexContent> <xs:extension base="pm:ConditionalStatement"> <xs:sequence> <xs:element name="if" type="pm:If" minOccurs="1" maxOccurs="1"/> <xs:element name="then" type="pm:Then" minOccurs="1" maxOccurs="1"/> </xs:sequence> </xs:extension> </xs:complexContent> </xs:complexType> <xs:complexType name="AlwaysConditionalStatement"> <xs:complexContent> <xs:extension base="pm:ConditionalStatement"> <xs:sequence> <xs:element name="always" type="pm:Always" minOccurs="1" maxOccurs="1"/> </xs:sequence> </xs:extension> </xs:complexContent> </xs:complexType> <xs:complexType name="WhenConditionalStatement"> <xs:annotation> <xs:documentation> A statement that has only a True or a False value </xs:documentation> </xs:annotation> <xs:complexContent> <xs:extension base="pm:ConditionalStatement"> <xs:sequence> <xs:element name="when" type="pm:When"/> </xs:sequence> </xs:extension> </xs:complexContent> </xs:complexType> <xs:complexType abstract="true" name="LogicalConnector"> <xs:sequence> <xs:element name="Criterion" type="pm:AbstractCriterion" minOccurs="1" maxOccurs="1"/> </xs:sequence> </xs:complexType> <xs:complexType name="And"> <xs:complexContent> <xs:extension base="pm:LogicalConnector"/> </xs:complexContent> </xs:complexType> <xs:complexType name="Or"> <xs:complexContent> <xs:extension base="pm:LogicalConnector"/> </xs:complexContent> </xs:complexType> <xs:complexType abstract="true" name="ConditionalClause"> <xs:sequence> <xs:element name="Criterion" type="pm:AbstractCriterion" minOccurs="1" maxOccurs="1"> </xs:element> </xs:sequence> </xs:complexType> <xs:complexType name="Always"> <xs:complexContent> <xs:extension base="pm:ConditionalClause"/> </xs:complexContent> </xs:complexType> <xs:complexType name="If"> <xs:complexContent> <xs:extension base="pm:ConditionalClause"/> </xs:complexContent> </xs:complexType> <xs:complexType name="Then"> <xs:complexContent> <xs:extension base="pm:ConditionalClause"/> </xs:complexContent> </xs:complexType> <xs:complexType name="When"> <xs:complexContent> <xs:extension base="pm:ConditionalClause"/> </xs:complexContent> </xs:complexType> <xs:complexType abstract="true" name="AbstractCondition"/> <xs:complexType name="IsNull"> <xs:complexContent> <xs:extension base="pm:AbstractCondition"/> </xs:complexContent> </xs:complexType> <xs:complexType name="IsInteger"> <xs:complexContent> <xs:extension base="pm:AbstractCondition"> </xs:extension> </xs:complexContent> </xs:complexType> <xs:complexType name="IsReal"> <xs:complexContent> <xs:extension base="pm:AbstractCondition"> </xs:extension> </xs:complexContent> </xs:complexType> <xs:complexType name="BelongToSet"> <xs:annotation> <xs:documentation>The value must belong to a set</xs:documentation> </xs:annotation> <xs:complexContent> <xs:extension base="pm:AbstractCondition"> <xs:sequence> <xs:element name="Value" type="pm:Expression" minOccurs="1" maxOccurs="unbounded"/> </xs:sequence> </xs:extension> </xs:complexContent> </xs:complexType> <xs:complexType name="ValueLargerThan"> <xs:complexContent> <xs:extension base="pm:AbstractCondition"> <xs:sequence> <xs:element name="Value" type="pm:Expression" maxOccurs="1" minOccurs="1"/> </xs:sequence> <xs:attribute name="reached" type="xs:boolean"/> </xs:extension> </xs:complexContent> </xs:complexType> <xs:complexType name="ValueSmallerThan"> <xs:complexContent> <xs:extension base="pm:AbstractCondition"> <xs:sequence> <xs:element name="Value" type="pm:Expression" maxOccurs="1" minOccurs="1"/> </xs:sequence> <xs:attribute name="reached" type="xs:boolean"/> </xs:extension> </xs:complexContent> </xs:complexType> <xs:complexType name="ValueInRange"> <xs:complexContent> <xs:extension base="pm:AbstractCondition"> <xs:sequence> <xs:element name="Sup" type="pm:ValueSmallerThan" maxOccurs="1" minOccurs="1"/> <xs:element name="Inf" type="pm:ValueLargerThan" maxOccurs="1" minOccurs="1"/> </xs:sequence> </xs:extension> </xs:complexContent> </xs:complexType> <xs:complexType name="ValueDifferentFrom"> <xs:complexContent> <xs:extension base="pm:AbstractCondition"> <xs:sequence> <xs:element name="Value" type="pm:Expression" maxOccurs="1" minOccurs="1"/> </xs:sequence> </xs:extension> </xs:complexContent> </xs:complexType> <xs:complexType name="DefaultValue"> <xs:complexContent> <xs:extension base="pm:AbstractCondition"> <xs:sequence> <xs:element name="Value" type="pm:Expression" maxOccurs="1" minOccurs="1"/> </xs:sequence> </xs:extension> </xs:complexContent> </xs:complexType> <xs:complexType abstract="true" name="AbstractCriterion"> <xs:sequence> <xs:element name="Expression" type="pm:Expression" minOccurs="1" maxOccurs="1"> </xs:element> <xs:element name="ConditionType" type="pm:AbstractCondition" minOccurs="1" maxOccurs="1"/> <xs:element name="LogicalConnector" type="pm:LogicalConnector" maxOccurs="1" minOccurs="0" /> </xs:sequence> </xs:complexType> <xs:complexType name="Criterion"> <xs:complexContent> <xs:extension base="pm:AbstractCriterion"> </xs:extension> </xs:complexContent> </xs:complexType> <xs:complexType name="ParenthesisCriterion"> <xs:complexContent> <xs:extension base="pm:AbstractCriterion"> <xs:sequence> <xs:element name="ExternalLogicalConnector" type="pm:LogicalConnector" maxOccurs="1" minOccurs="0"/> </xs:sequence> </xs:extension> </xs:complexContent> </xs:complexType> <xs:complexType name="Function"> <xs:complexContent> <xs:extension base="pm:Expression"> <xs:sequence> <xs:element name="expression" type="pm:Expression"/> </xs:sequence> <xs:attribute name="functionName" type="pm:FunctionType"/> </xs:extension> </xs:complexContent> </xs:complexType> <xs:complexType name="Operation"> <xs:sequence> <xs:element name="expression" type="pm:Expression" maxOccurs="1" minOccurs="1"/> </xs:sequence> <xs:attribute name="operationType" type="pm:OperationType"> </xs:attribute> </xs:complexType> <xs:complexType abstract="true" name="Expression"> </xs:complexType> <xs:complexType name="ParenthesisContent"> <xs:complexContent> <xs:extension base="pm:Expression"> <xs:sequence> <xs:element name="expression" type="pm:Expression" minOccurs="1" maxOccurs="1"/> <xs:element name="power" type="pm:Expression" maxOccurs="1" minOccurs="0"/> <xs:element name="Operation" type="pm:Operation" maxOccurs="1" minOccurs="0"/> </xs:sequence> </xs:extension> </xs:complexContent> </xs:complexType> <xs:complexType name="AtomicParameterExpression"> <xs:complexContent> <xs:extension base="pm:Expression"> <xs:sequence> <xs:element name="parameterRef" type="pm:ParameterReference" maxOccurs="1" minOccurs="1"> </xs:element> <xs:element name="power" type="pm:Expression" maxOccurs="1" minOccurs="0"/> <xs:element name="Operation" type="pm:Operation" maxOccurs="1" minOccurs="0"/> </xs:sequence> </xs:extension> </xs:complexContent> </xs:complexType> <xs:complexType name="AtomicConstantExpression"> <xs:complexContent> <xs:extension base="pm:Expression"> <xs:sequence> <xs:element name="Constant" type="xs:string" maxOccurs="unbounded" minOccurs="1"/> <xs:element name="power" type="pm:Expression" maxOccurs="1" minOccurs="0"/> <xs:element name="Operation" type="pm:Operation" maxOccurs="1" minOccurs="0"/> </xs:sequence> <xs:attribute name="ConstantType" type="pm:ParameterType"/> </xs:extension> </xs:complexContent> </xs:complexType> <xs:complexType name="FunctionExpression"> <xs:complexContent> <xs:extension base="pm:Expression"> <xs:sequence> <xs:element name="Function" type="pm:Function" maxOccurs="1" minOccurs="1"/> <xs:element name="Power" type="pm:Expression" maxOccurs="1" minOccurs="0"/> <xs:element name="Operation" type="pm:Operation" maxOccurs="1" minOccurs="0"/> </xs:sequence> </xs:extension> </xs:complexContent> </xs:complexType> </xs:schema> ``` 10 , W3C Recommendation 26 June 2007. , W3C Member Submission 31 August 2009. , 2011, International Journal of High-performance Computing Applications (IJHPCA), LLNL-JRNL-465223, DOI: 10.1177/1094342011414036 *How the Common Component Architecture Advances Computational Science Proceedings of Scientific Discovery through Advanced Computing* (SciDAC 2006), June 2006, in J. Phys.: Conf. Series. (also LLNL Tech Report UCRL-CONF-222279) . *A Component Architecture for High-Performance Scientific Computing*. Int. J. High Perform. Comput. Appl. 20(2). May 2006, pp. 163-202. . *A formal semantics for the Taverna 2 workflow model*, Journal of Computer and System Sciences, vol. 76, iss. 6, pp. 490-508, 2009. , *Taverna: a tool for building and running workflows of services*., Nucleic Acids Research, vol. 34, iss. Web Server issue, pp. 729-732, 2006. , Ios Press 2003, ISBN: 1586033115 .*PALM: A Computational framework for assembling high performance computing applications*, Concurrency Computat.: Pract. Exper., Vol. 18(2), 2006, 247-262 . *A new representation of data assimilation methods: the PALM flow charting approach*, Q.J.R.M.S., Vol. 127, 2001, pp. 189-207 .*The PALM Group, PALM: A Dynamic Parallel Coupler*. Lecture Notes In Computer Science, High Performance Computing for Computational Science, Vol. 2565, 2003, pp. 479-492 , in proceedings of ICNS 2007 Conference. , in proceedings of ICNS 2005 Conference. . *Finding Data Resources in a Virtual Observatory Using SKOS Vocabularies,* BNCOD 2008:189-192. A. J. G. Gray, N. Gray, A. Paul Millar, and I. Ounis*Semantically Enabled Vocabularies in Astronomy*, Technical Report, University of Glasgow, December 2007 (http://www.cs.man.ac.uk/ graya/Publications/vocabulariesInAstronomy.pdf). D. Oberle, N. Guarino and S Staab. *What is an ontology?*, In: Handbook on Ontologies. Springer, 2nd edition, 2009. , International Virtual Observatory Alliance Recommendation, August 2005 (http://www.ivoa.net/documents/latest/UCD.html). , IVOA Note 25 June 2007 (http://www.ivoa.net/documents/latest/UtypeListCharacterisationDM.html). *Units in the VO*, IVOA Proposed Recommendation 29 April 2013 (http://www.ivoa.net/documents/VOUnits/). Astrophysics Source Code Library, record ascl:1307.007. 2013. R. T. Fielding*Architectural Styles and the Design of Network-based software Architecture.* PhD Thesis in Information and computer Science, University of California, Irvine. http://www.w3.org/TR/2007/REC-soap12-testcollection-20070427/ --- 1. For example, for expressing that a given parameter must be greater and smaller than arbitrary values, we could define a *bounded* type containing an *inf* field and a *sup* field. If another user defines a similar object calling these two fields *inf-bound* and *sub-bound*, the instances of these two types could not interoperate straightforwardly. The theory of types is not sufficient to ensure the interoperability of the object defined.[↩](#fnref1) 2. This is obvious, since this value corresponds to a vector size.[↩](#fnref2) 3. this came from the evaluation of parameterRef field in case of an *AtomicParameterExpression* (cf. paragraph [par02]), from the evaluation of constant field in the case of a *AtomicConstantExpression* (cf. paragraph [par0203]), from the evaluation of the Expression field in case of an *parenthesisContentExpression* (cf. paragraph [par0204]) and from the evaluation of the Function object in case of a *FunctionExpression* (cf. par. [functionExpressionPar])[↩](#fnref3) 4. The first criterion is the one containing the *LogicalConnector* and the second is the criterion contained within the connector itself.[↩](#fnref4)
arxiv_0000022
Lindblad Equation for the Inelastic Loss of Ultracold Atoms ============================================================ The loss of ultracold trapped atoms due to deeply inelastic reactions has previously been taken into account in effective field theories for low-energy atoms by adding local anti-Hermitian terms to the effective Hamiltonian. Here we show that when multi-atom systems are considered, an additional modification is required in the equation governing the density matrix. We define an effective density matrix by tracing over the states containing high-momentum atoms produced by deeply inelastic reactions. We show that it satisfies a Lindblad equation, with local Lindblad operators determined by the local anti-Hermitian terms in the effective Hamiltonian. We use the Lindblad equation to derive the universal relation for the two-atom inelastic loss rate for fermions with two spin states and the universal relation for the three-atom inelastic loss rate for identical bosons. Introduction ============ The development of technologies to trap and cool neutral atoms has led to the emergence of cold atom physics as a new subfield of atomic physics. The atoms can be cooled to temperatures that are orders of magnitude smaller than the tiny differences between the hyperfine energy levels of the atoms. Many of the most important loss processes for ultracold atoms involve deeply inelastic reactions, which produce atoms with kinetic energies much larger than those of the cold trapped atoms. One such process is three-body recombination, in which a collision of three low-energy atoms results in the binding of two of the atoms into a diatomic molecule. If the binding energy of the molecule is large compared to the energy scale of the cold atoms, the momenta of the molecule and the recoiling atom are often large enough for them to escape from the trapping potential for the atoms. Ultracold atoms can be described by a local nonrelativistic effective field theory for which the coupling constant is the scattering length. Local effective field theories can be applied most rigorously to a system in which there is an energy gap separating the low-energy particles described explicitly by the effective field theory from the higher-momentum particles. In a system consisting of low-energy atoms, conservation of energy ensures that a high-momentum atom can only appear through a virtual state that, by the time-energy uncertainty principle, has a short lifetime. During that short lifetime, the high-momentum atom can propagate only over a short distance. Its effects on low-energy atoms are therefore restricted to short distances. These effects can be reproduced by local Hermitian operators in the Hamiltonian for the effective field theory. Local effective field theories have also been applied to systems with deeply inelastic reactions that produce particles with momenta too large to be described accurately within the effective field theory. For example, a deeply inelastic three-body recombination event produces a molecule and an atom whose momenta may be outside the domain of validity of the effective theory. The large energy release from the inelastic reaction comes from the conversion of rest energy into kinetic energy. The standard argument for the locality of the effective field theory does not apply. The particles with large momenta produced by the inelastic reaction can propagate over arbitrarily long distances, so their effects on low-energy particles are not obviously restricted to short distances. Nevertheless, there are general arguments based on the uncertainty principle that indicate that their effects can be taken into account systematically through local anti-Hermitian operators in the effective Hamiltonian. The effective Hamiltonian can be expressed as $H\_{\rm eff} = H -iK$, where *H* and *K* are Hermitian. The dynamics of a multi-atom system with deeply inelastic reactions is conveniently described by a density matrix. A system that starts as a pure quantum state with *n* low-energy atoms evolves into a mixed quantum state that is a superposition of a state with *n* low-energy atoms and states with fewer low-energy atoms, as the inelastic reactions shift probability from the low-energy atoms into the high-momentum atoms. An effective density matrix *ρ* can be defined by tracing the density matrix over states containing high-momentum atoms. Naively we might expect the time evolution equation for *ρ* to be $$\begin{aligned} i\hbar \frac{d\ }{dt}\rho &\overset{?}{=} & H\_{\mathrm{eff}}\rho- \rho H\_{\mathrm{eff}}= \left[H, \rho\right] - i \left\{K,\rho\right\}. \label{eq:drhodt-naive}\end{aligned}$$ As we will demonstrate in this paper, the correct evolution equation for the effective density matrix is the *Lindblad equation*, which has an additional term. The Lindblad equation arises in the quantum information theory of *open quantum systems*. An open quantum system consists of all the degrees of freedom of both the subsystem of interest and the environment. Under special circumstances, the density matrix for the subsystem evolves in time according to the Lindblad equation. In the Lindblad equation for the density matrix of an effective field theory obtained by integrating out deeply inelastic reactions, the additional Lindblad term is local, and it can be deduced from the local anti-Hermitian terms in the effective Hamiltonian. An open quantum system in which the subsystem of interest is a field theory is called an *open effective field theory*. Grozdanov and Polonyi have proposed that an open effective field theory for the hydrodynamic modes of a quantum field theory can be used as a framework for deriving dissipative hydrodynamics. Burgess, Holman, Tasinato, and Williams have applied open effective field theory to the super-Hubble modes of primordial quantum fluctuations in the early universe. In the stochastic inflation framework, the master equation is the Lindblad equation. Since the density matrix for an effective field theory in which deeply inelastic reaction products have been integrated out satisfies the Lindblad equation, this system can also be regarded as an open effective field theory. In this case, the environment consists of the high-momentum particles produced by the deeply inelastic reactions. The paper is organized as follows. In Section [sec:DensityMatrix], we summarize the basic properties of the density matrix and we introduce the Lindblad equation. In Section [sec:Effective Theory], we explain how integrating out deeply inelastic reaction products results in local operators in the effective Lagrangian density. We also explain why the effective density matrix obtained by tracing over states that include deeply inelastic reaction products must satisfy the Lindblad equation. In Section [sec:AtomLoss], we apply the Lindblad equation to the mean number of low-energy atoms. We derive the universal relation for the two-atom inelastic loss rate for fermions with two spin states and the universal relation for the three-atom inelastic loss rate for identical bosons. Our results are summarized briefly in Section [sec:Summary]. In an Appendix, we demonstrate how the Lindblad equation for the density matrix emerges from the diagrammatic analysis of a simple field theory model with a deeply inelastic two-atom scattering reaction. Density Matrix ============== The density matrix can be used to describe pure quantum states, mixed quantum states, and statistical ensembles of quantum states. The dynamics of a multi-atom system with atom losses must be described by the density matrix in order to be able to track the losses in the different *n*-atom sectors. To lay the foundation for the rest of the paper, we summarize the key properties of the density matrix and we introduce the Lindblad equation. General Properties ------------------ A pure state in a quantum system can be represented by a vector ∣*ψ*⟩ in a complex vector space. The time evolution of the quantum state is described by the Schrödinger equation: *i*ℏ(*d*/*d**t*)∣*ψ*⟩ = *H*∣*ψ*⟩, where the Hamiltonian *H* is a Hermitian operator. The time evolution preserves the norm ⟨*ψ*∣*ψ*⟩ of a state. An arbitrary statistical ensemble of quantum states can be represented by a density matrix *ρ*. The density matrix has the following basic properties: * It is Hermitian: *ρ*† = *ρ*. * It is positive: ⟨*ψ*∣*ρ*∣*ψ*⟩ ≥ 0 for all nonzero states ∣*ψ*⟩. * It can be normalized to have unit trace: ${\rm Tr}(\rho) = 1$. The density matrix can also describe a pure quantum state, in which case *ρ*2 = *ρ*. The time evolution of the density matrix is described by the von Neumann equation: $$i \hbar \frac{d\ }{dt} \rho = [ H, \rho], \label{evol:Schrodinger}$$ which follows from the Schrödinger equation for ∣*ψ*⟩. This evolution equation has the following properties: * It is linear in *ρ*. * It preserves the trace of *ρ*. It therefore respects the normalization condition ${\rm Tr}(\rho) = 1$. * It is *Markovian*: the future is determined by the present only. There is no additional dependence on the past history. The system average of an operator *O* can be expressed as the trace of its product with the density matrix: $$\langle O \rangle = {\rm Tr}(\rho O ). \label{d<O>}$$ If the operator has no explicit time dependence, the time evolution of the system average is determined by the evolution equation of *ρ* in Eq. . Lindblad equation ----------------- In the field of quantum information theory, the full system is often separated into a *subsystem* of interest and its *environment*. Of particular interest is the decoherence of the subsystem due to the effects of the environment. The basis for the quantum states of the full system can be chosen to be direct products of basis states for the subsystem and those for the environment. A density matrix *ρ* for the subsystem can be obtained from the density matrix *ρ*full for the full system by the partial trace over the states of the environment: $$\label{eq:rho\_eff-qi} \rho = {\mathrm{Tr}}\_{\rm environment} \left(\rho\_{\mathrm{full}}\right).$$ The density matrix for the full system evolves according to the von Neumann equation in Eq. . Given the initial density matrix *ρ*(*t* = 0) for the subsystem and a specified initial state of the environment, the von Neumann equation can in principle be solved to determine *ρ*(*t*) at future times. It is possible in principle to construct a self-contained differential equation for *ρ*(*t*), but its evolution is non-Markovian. The time derivative of *ρ*(*t*) is determined by the present density matrix and by its history from time 0 to *t*. The previous history is needed to take into account the flow of information between the subsystem and the environment. There are situations in which the time evolution of the subsystem can be described approximately by a self-contained Markovian differential equation. The time during which the subsystem is observed must be much larger than the time scale for correlations between the subsystem and the environment. We must also restrict our consideration to the low-frequency behavior of the subsystem, which can be accomplished by smoothing out over times larger than the correlation time scale. The density matrix *ρ* for the subsystem necessarily satisfies the three basic properties listed before Eq. : it is Hermitian, it is positive, and it can be normalized. It would be desirable for its time evolution to also be in accord with the three basic properties listed after Eq. : it should be linear in *ρ*, it should preserve the trace of *ρ*, and it should be Markovian. In 1976, Lindblad and, independently, Gorini, Kossakowski, and Sudarshan showed that these conditions together with one additional technical condition require the time evolution equation to have the form $$i \hbar \frac{d\ }{dt} \rho = [ H, \rho] - \frac{i}{2} \sum\_n \left( L\_n^\dagger L\_n \rho + \rho L\_n^\dagger L\_n -2 L\_n \rho L\_n^\dagger \right), \label{evol:Lindblad}$$ where *H* is a Hermitian operator and the *L**n*’s are an additional set of operators called *Lindblad operators*. The evolution equation in Eq.  is called the *Lindblad equation*. The linear operator acting on *ρ* defined by the right side of Eq.  is called the *Lindbladian*. Thus the Lindbladian is the generator of the time evolution of the density matrix. The additional technical condition required to derive the Lindblad equation is that the linear operator that determines the time evolution of *ρ* should be completely positive. For any complex vector space, this linear operator has a natural extension to an operator acting on the tensor product of the quantum-state spaces for the subsystem and the other complex vector space. Complete positivity requires the extension of the operator to be positive in the larger space. An accessible derivation of the Lindblad equation can be found in lecture notes on quantum information and computation by John Preskill. The Lindblad equation in Eq.  can be expressed in the form $$i \hbar \frac{d\ }{dt} \rho = [ H, \rho] - i \left\{K,\rho\right\} + i \sum\_n L\_n \rho L\_n^\dagger, \label{evol:Lindblad-2}$$ where the Hermitian operator *K* is $$K =\frac12\sum\_n L\_n^\dagger L\_n. \label{eq:K-Ln}$$ Comparison with the naive evolution equation in Eq.  reveals that *H* − *i**K* can be interpreted as a non-Hermitian effective Hamiltonian for the subsystem. The additional *Lindblad term* in Eq.  is necessary to preserve the trace of *ρ*. Effective Theory from Deeply Inelastic Reactions ================================================ In this section, we review how effective field theories for cold atoms are constructed, with particular focus on the treatment of deeply inelastic reactions. We first discuss the construction of the effective Lagrangian, and how integrating deeply inelastic reaction products out of a theory results in new local operators in the effective Lagrangian density. We then discuss how such a formalism can be used to study multi-particle states by means of an effective density matrix obtained by tracing over states that include deeply inelastic reaction products. Locality -------- [fig:2to2-elastic] An effective field theory is obtained by removing (“integrating out”) states from a field theory. The simplest applications involve removing particles with much higher energies. We consider atoms with mass *M* in a specific hyperfine state labelled *ψ* and with kinetic energies much smaller than the splitting Δ between hyperfine states. As illustrated in Fig. [fig:2to2-elastic], there is a contribution to the elastic scattering amplitude from scattering into a virtual pair of particles in a higher hyperfine state labelled *ϕ*, which then rescatter into atoms in the original hyperfine state. The virtual atoms are off their energy shells by large amounts of order Δ. By the time-energy uncertainty principle, the virtual states have short lifetimes of order ℏ/Δ, during which the *ϕ* atoms can propagate only over short distances much smaller than the wavelengths of the *ψ* atoms. The effects of the higher hyperfine states can be modeled in the effective field theory by a contact interaction of the form *ψ*†*ψ*†*ψ**ψ*, where *ψ* is the quantum field that annihilates an atom in the specific hyperfine state *ψ*. (We use the same symbol for the label of the hyperfine state and for its quantum field.) The 2 → 2 contact interaction is the leading term in a series of local operators in the effective Lagrangian that mimic the effects of the higher hyperfine states to arbitrary precision. This series is obtained at tree-level by Taylor expanding the scattering amplitude in the momentum transfer and then writing down terms in the effective Lagrangian density that reproduce this expansion: *δ*L = *g* *ψ*†*ψ*†*ψ**ψ* + *h* ∇(*ψ*†*ψ*) ⋅ ∇(*ψ*†*ψ*) + …,  where the coefficients *g*, *h*, … are real valued. The individual operators are renormalized when loop corrections are included, but the effective theory is still capable of reproducing the original theory to arbitrary precision provided operators with a sufficient number of gradients are retained. Operators with *n* gradients correct the theory at order (*q*/*P*)*n*, where *q* is the momentum transfer and *P* = (*M*Δ)1/2 is the momentum scale associated with the energy splitting Δ between the hyperfine states. We have integrated the higher hyperfine states out of the theory by replacing them by the interactions terms in Eq. . [fig:2to2-inelastic] A less obvious opportunity to remove states from the theory arises when atoms in the hyperfine state *ψ* scatter inelastically into atoms in a lower hyperfine state *ϕ*. Such a scattering process is an example of a *deeply inelastic reaction*, in which the difference in the rest energies of the initial-state and final-state atoms is converted into large kinetic energies of the final-state atoms. The rate for the deeply inelastic scattering process *ψ**ψ* → *ϕ**ϕ* is related by the optical theorem to the imaginary part of the amplitude for *ψ**ψ* → *ϕ**ϕ* → *ψ**ψ* (see Fig. [fig:2to2-inelastic]). That amplitude is analytic in the momentum transfer $\bm{q}$, because the nearest non-analyticity in the amplitude is at the threshold energy for the *ϕ**ϕ* state, which is lower by an amount of order Δ. As a result, we can Taylor expand the amplitude in powers of $\bm{q}/(M \Delta)^{1/2}$ and reproduce this expansion by terms in the effective Lagrangian as in Eq. . In this case, the coefficients *g*, *h*, … are complex valued. We are particularly interested in their imaginary parts, which come from the deeply inelastic scattering reaction. These terms mimic the effects of the lower hyperfine states in the effective theory. It may seem nonintuitive that the effects of inelastic scattering to on-shell particles can be mimicked by local operators, because once an atom in the lower hyperfine state *ϕ* is created, it can propagate over long distances. In fact, as far as low-energy atoms in the hyperfine state *ψ* are concerned, the inelastic scattering process is quite local. This is because the reaction region in which the low-energy atoms disappear can be reconstructed by tracking the inelastic scattering products back to their origin. The inelastic reaction products have high momenta of order (*m*Δ)1/2 and therefore short wavelengths of order ℏ/(*m*Δ)1/2, so they can locate the decay with a resolution of order ℏ/(*m*Δ)1/2. Therefore the reaction is localized over a region of size ℏ/(*m*Δ)1/2, which is very small compared to the wavelengths of the incoming low-energy atoms. Effective density matrix for decaying atoms ------------------------------------------- Some aspects of the effective field theory obtained by integrating out deeply inelastic reactions are most easily understood by considering a deeply inelastic 1-body reaction, namely the decay of an atom. The atom could be a metastable excited state, such as the 23*S*1 state of 4He, which was one of the earliest atoms for which Bose-Einstein condensation was achieved. The radiative decay of the atom into its ground state is a deeply inelastic reaction. For simplicity, we assume the ground-state atoms interact weakly with the metastable atoms and that, once they are produced, the ground-state atoms quickly escape from the system. The Hamiltonian *H* − *i**K* for the effective theory is the sum of the Hermitian Hamiltonian for the metastable atoms and an anti-Hermitian piece  − *i**K*. The Hermitian operator *K* can be expressed as $\frac12 \Gamma\, N$, where Γ is the decay rate of the metastable atom and *N* is the number operator that counts the metastable atoms: $$N = \int d^3{\bm{r}}\, \psi^\dagger \psi. \label{eq:Nmuhat}$$ The quantum mechanics of such a theory is unconventional. The effective Hamiltonian *H* − *i**K* commutes with the number operator *N*, so the time evolution generated by *H* − *i**K* does *not* change the number of atoms in a state. Instead the effects of atom decay must be taken into account by transferring the probability carried by the *n* − atom component to states containing fewer atoms. An *n* − atom state evolves into a superposition of states with *n* and fewer atoms. The norm of a state containing *n* atoms decreases to zero exponentially with the decay rate *n*Γ. This is the correct result: if the probability for one atom to still be an atom after time *t* is exp( − Γ*t*), the probability for *n* atoms to still be *n* atoms is exp( − *n*Γ*t*). We typically want more information about where the probability goes. An *n* − atom state evolves into a superposition of states with *n*, *n* − 1, *n* − 2…atoms. In the full theory, the superposition can be described by the density matrix *ρ*full. In the effective theory, we describe the superposition of states with different atom numbers using an *effective density matrix* *ρ*, from which we remove ground-state atoms by tracing over states that include them: *ρ* ≡ Trdeep(*ρ*full). The subscript “deep” indicates that the partial trace is over states that include deeply inelastic reaction products. This effective density matrix, like *ρ*full, is Hermitian, positive, and has unit trace: Tr(*ρ*) = 1. These properties follow from the definition in Eq. ([eq:rho-eff-defn]). Naively we might expect the time evolution equation for *ρ* to be Eq. , which reduces to $$i\frac{d\ }{dt}\rho = \left[H, \rho\right] - \frac{i}{2} \Gamma \left\{ N, \rho\right\}.$$ However this equation does *not* conserve the total probability Tr(*ρ*). The correct evolution equation is the Lindblad equation: $$i\frac{d}{dt}\rho = \left[H, \rho\right] - \frac{i}{2} \Gamma \left\{ N, \rho\right\} + i \Gamma \int d^3{\bm{r}}\,\psi({\bm{r}})\,\rho\,\psi^\dagger({\bm{r}}). \label{eq:muon-evol-eq}$$ Time evolution preserves the trace of *ρ*, because the trace of the additional Lindblad term cancels the trace of the anticommutator term in Eq. . The role of the Lindblad term is easily understood if we use the evolution equation to calculate the rate of change of the probability *P**n*(*t*) for finding *n* metastable atoms in the system. This probability can be expressed as the partial trace of *ρ* over states ∣*X**n*⟩ that contain *n* atoms: *P**n*(*t*) ≡ ∑*X**n*⟨*X**n*∣*ρ*(*t*)∣*X**n*⟩. The partial trace of the evolution equation in Eq.  gives $$\frac{d\ }{dt} P\_n(t)= - n\Gamma P\_n(t) + (n+1)\Gamma P\_{n+1}(t). \label{eq:dPn(t)}$$ The partial trace of the commutator term in Eq.  is 0, because the operator *H* does not change the atom number. This allows a complete set of *n* − atom states to be inserted between *H* and *ρ*. The partial trace of the anticommutator term in Eq.  gives  − *n*Γ*P**n*, which is the rate at which probability leaves the *n* − atom sector because of the decay of an atom. The partial trace of the Lindblad term in Eq.  gives  + (*n* + 1)Γ*P**n* + 1, which is the rate at which probability enters the *n* − atom sector from the decay of an atom in the (*n* + 1) − atom sector. This expression can be obtained by inserting complete sets of (*n* + 1) − atom states on the left and right of *ρ* in the Lindblad term in Eq. ([eq:muon-evol-eq]) and then rearranging the factors to give $$\begin{aligned} i \Gamma \sum\_{X\_{n+1}} \sum\_{X'\_{n+1}}\, \langle X'\_{n+1}| \rho |X\_{n+1} \rangle \langle X\_{n+1} | N |X'\_{n+1} \rangle. \label{eq:drho-lindblad}\end{aligned}$$ The number operator *N*, which is given in Eq. , was obtained by replacing the sum of ∣*X**n*⟩⟨*X**n*∣ between $ \psi^\dagger({\bm{r}})$ and $\psi({\bm{r}})$ by the identity operator. The number operator in Eq.  can be replaced by its eigenvalue *n* + 1. The Lindblad term in Eq. ([eq:muon-evol-eq]) is essential to get the correct physical behavior for the time evolution of the total number of metastable atoms. The expectation value of the atom number is $$N(t) \equiv {\rm Tr} \big( N \rho(t)\big) = \sum\_n n P\_n(t). \label{eq:Nmu-t}$$ We can use Eq.  to determine the time dependence of *N*(*t*): $$\frac{d\ }{dt} N(t)= -\Gamma \Big[ \sum\_n n^2 P\_n(t) - \sum\_n n(n+1) P\_{n+1}(t) \Big].$$ After shifting the index of the second term on the right side, we obtain $$\frac{d\ }{dt} N(t)= -\Gamma N.$$ This implies that *N*(*t*) = *N*0exp( − Γ*t*), as expected for a collection of atoms with decay rate Γ. Effective density matrix from deeply inelastic reactions -------------------------------------------------------- We now consider a more general effective field theory obtained by integrating out the reaction products from deeply inelastic reactions. By the arguments presented in Sec. [sec:Locality], the effects of the deeply inelastic reactions can be reproduced by local anti-Hermitian terms in the effective Hamiltonian. The effective Hamiltonian can be expressed in the form *H*eff = *H* − *i**K*, where *H* and *K* are both Hermitian operators. The operator *K* can be expressed in the form *K* = ∑*i**γ**i*∫*d*3*r* Φ*i*†Φ*i*,  where the local operators $\Phi\_i(\bm{r})$ annihilate low-energy atoms in configurations corresponding to inelastic reactions. The real constants *γ**i* can be determined by matching low-energy observables in the full theory and in the effective theory. The Hamiltonian *H* − *i**K* commutes with the number operator *N* for low-energy atoms, so the time evolution generated by *H* − *i**K* does *not* change the number of low-energy atoms in a state. Instead the effects of the deeply inelastic reactions must be taken into account by transferring the probability carried by the *n* − atom component to states containing fewer atoms. The norm of a state containing *n* atoms decreases to zero at a rate that increases with *n*. An *n* − atom state evolves into a superposition of states with *n* and fewer atoms. We describe this superposition of states using an *effective density matrix* *ρ* defined by tracing over the deeply inelastic decay products as in Eq. . More precisely, we trace out any state containing an atom with momentum exceeding the ultraviolet cutoff of the effective field theory. The effective density matrix, like the underlying density matrix, is Hermitian, positive, and it has unit trace: Tr(*ρ*) = 1. These properties follow from the definition in Eq. ([eq:rho-eff-defn]). As is familiar in the field of quantum information, the definition of the effective density matrix *ρ* as the partial trace in Eq. ([eq:rho-eff-defn]) implies that *ρ*(*t*) satisfies a self-contained time evolution equation that is completely determined by *ρ* and by the full density matrix *ρ*full(0) at some initial time. This evolution equation is however non-Markovian: the time derivative (*d*/*d**t*)*ρ*(*t*) is determined not only by *ρ*(*t*) but also by *ρ*(*t*ʹ) at the earlier times 0 < *t*ʹ < *t*. This dependence on the history takes into account the effects of high-momentum atoms that are produced by inelastic reactions and subsequently interact with the low-energy atoms. However the density matrix *ρ*(*t*) for the effective theory does not need to reproduce the full density matrix on short time scales of order $\hbar /E\_{\rm deep}$. It is sufficient to reproduce its effects after time averaging over time intervals much larger than $\hbar /E\_{\rm deep}$. This time average removes transients associated with high-momentum atoms that cannot be described accurately within the effective field theory. It is only after removing these transients that it is possible to have a density matrix *ρ*(*t*) that is Markovian. Thus the proper definition of the effective density matrix requires that the partial trace in Eq. ([eq:rho-eff-defn]) be supplemented by an appropriate time average that makes *ρ*(*t*) Markovian. Naively we might expect the time evolution equation for *ρ* to be Eq. , where *K* is the Hermitian operator in Eq. . However, this equation does *not* conserve the total probability Tr(*ρ*). The correct evolution equation is the Lindblad equation: $$i \hbar \frac{d\ }{dt} \rho = \left[H, \rho\right]- i \left\{K,\rho\right\} +2i \sum\_i \gamma\_i \int \!\!d^3r\, \Phi\_i \rho \Phi\_i^\dagger. \label{eq:LindbladPhi}$$ As in Eq. , the trace of the additional Lindblad term cancels the trace of the anticommutator term. The local operators $\Phi\_i(\bm{r})$, which annihilate configurations of low-energy atoms that can undergo a deeply inelastic reaction, are the *Lindblad operators*. In quantum information theory, a Lindblad operator is sometimes called a *quantum jump operator*. In the low-energy effective theory, a Lindblad operator produces a jump to a state with a different low-energy atom number. Given the form of *K* in Eq. , the Lindblad equation is a necessary consequence of our physical requirements on the effective density matrix: *ρ* is Hermitian, positive, has unit trace, and it is Markovian. An explicit diagrammatic illustration of the emergence of the Lindblad equation from tracing over states that include deeply inelastic reaction products is given in Appendix [sec:Deductive]. In order to obtain the Lindblad equation in Eq. , it is essential that *K* have the structure shown in Eq. ([eq:K-Phi]). A more generic form for this operator is $$K = \int \!\!d^3 r \sum\_{ij} c\_{ij} \Phi\_i^\dagger({\bm{r}}) \Phi\_j({\bm{r}}), \label{eq:K-structure-gen}$$ where *c**i**j* is a positive Hermitian matrix. It is a Hermitian matrix, because *K* is Hermitian by definition. It is guaranteed to be a positive matrix by the optical theorem, which implies that the *T* matrix satisfies  − *i*(*T* − *T*†) = *T*†*T*. Matrix elements of *K* reproduce the anti-Hermitian parts of scattering amplitudes ⟨*b*∣*T*∣*a*⟩, where ∣*a*⟩ and ∣*b*⟩ are states in the effective theory that are connected by intermediate deeply inelastic reaction channels. The Hermitian parts of these amplitudes are reproduced by *H*. The optical theorem guarantees the positivity of the anti-Hermitian parts. The double sum in Eq.  is easily rewritten in the canonical form in Eq.  by expanding *c**i**j* in terms of outer products of its eigenvectors. Atom Loss from Deeply Inelastic Reactions ========================================= In this section, we use the Lindblad equation to determine the time evolution of the mean number of low-energy atoms, which can be changed by deeply inelastic reactions. We also derive universal relations for the inelastic two-atom loss rate for fermions with two spin states and for the inelastic three-atom loss rate for identical bosons. Mean Atom Number ---------------- We consider an open effective field theory with effective Hamiltonian $H\_{\rm eff}=H-iK$, where *H* and *K* are Hermitian. The Hermitian operator *K* in the anti-Hermitian part can be expressed in terms of local Lindblad operators $\Phi\_i(\bm{r})$ as in Eq. . The time evolution of the density matrix *ρ* for the effective field theory is given by the Lindblad equation in Eq. . We reconsider an atom number operator *N* that is conserved by the reactions in the effective field theory but is violated by deeply inelastic reactions in the full theory. Since Lindbladian time evolution preserves the trace of *ρ*, the system average of *N* can be expressed as $$\langle N \rangle = {\rm Tr}(\rho N). \label{N-ave}$$ The statement that *N* is conserved by the reactions in the effective theory can be expressed as the commutation relation $$\big[ N,H\big ]=0. \label{[N,H]}$$ The inelastic reaction responsible for the Φ*i*†Φ*i* term in *K* changes the atom number *N* by some integer *k**i*. This statement can be expressed as a commutation relation: $$\big[ N,\Phi\_i(\bm{r}) \big]=-k\_i \Phi\_i(\bm{r}). \label{[N,Phi]}$$ We now consider the time evolution of ⟨*N*⟩. By taking the time derivative of ${\rm Tr}(\rho N)$, inserting the time evolution equation in Eq. , and rearranging the terms, we get $$\begin{aligned} \frac{d\ }{dt} \langle N \rangle = - \frac{i}{\hbar} {\rm Tr}(\rho [N,H]) - \frac{1}{\hbar} \sum\_i \gamma\_i \int \!\! d^3r \, {\rm Tr} \big( \rho (N\Phi\_i^\dagger \Phi\_i +\Phi\_i^\dagger \Phi\_i N -2 \Phi\_i^\dagger N \Phi\_i) \big). \label{Nevol:Lindblad}\end{aligned}$$ The first term on the right side vanishes, because *N* commutes with *H*. By using the commutation relation in Eq.  in the second term on the right side, the rate of change of atom number reduces to $$\begin{aligned} \frac{d\ }{dt} \langle N \rangle &=& -\frac{2}{\hbar} \sum\_i k\_i \gamma\_i \int \!\! d^3r \, \langle \Phi\_i^\dagger \Phi\_i \rangle. \label{Nevol:Phi}\end{aligned}$$ The rate of change in ⟨*N*⟩ has been expressed as a sum of expectation values of the same terms that appear in the expression for *K* in Eq.  but multiplied by integers *k**i*. Three-Body Recombination Rate ----------------------------- The atoms that are studied in cold atom experiments can form deeply bound diatomic molecules whose binding energies are orders of magnitude larger than the energy scales of the cold trapped atoms. Three-body recombination, in which three low-energy atoms collide and two of them bind into a molecule, is a deeply inelastic reaction if the molecule is deeply bound. The momenta of the outgoing molecule and the recoiling atom are often large enough for them to escape from the trapping potential. In a locally homogeneous gas of atoms in thermal equilibrium, the rate of decrease in the local number density *n**A* of low-energy atoms can be expressed as a rate equation: (*d*/*d**t*)*n**A* =  − 3*K*3(*T*)*n**A*3,  where *K*3(*T*) is the 3-body recombination event rate constant, which depends on the temperature *T*. Bosonic atoms that are all in the same hyperfine state can be described by an effective field theory with a quantum field *ψ*. At the low temperatures in cold atom experiments, the only relevant interaction between the atoms is S-wave scattering, whose strength is determined by the scattering length *a*. The interaction term in the Hamiltonian is $$H\_{\rm int} = (2 \pi \hbar^2 a/m) \int d^3 r\, \psi^{\dagger 2}\psi^2. \label{Hint-psi}$$ This interaction term can be treated using perturbation theory, provided the scattering length *a* is not much larger than the range of the interactions between the atoms, which for ultracold atoms is the van der Waals length scale. We now consider a deeply inelastic three-body recombination reaction that produces a diatomic molecule with binding energy much larger than the temperature *T*. Its effect on low-energy atoms can be described in the effective field theory by adding to the effective Hamiltonian the anti-Hermitian term  − *i**K*, where the local Hermitian operator *K* is *K* = (ℏ*K*3/12)∫*d*3*r* (*ψ*3)†*ψ*3,  and where *K*3 is a constant that does not depend on the temperature. The expression for *K* in Eq.  has a single term with Lindblad operator Φ*i* = *ψ*3 and coefficient *γ**i* = ℏ*K*3/12. The expression for the atom loss rate in Eq.  thus has a single term with integer *k**i* = 3. In a locally homogeneous system of atoms, the local version of the loss rate in Eq.  is $$\begin{aligned} \frac{d\ }{dt} \langle \psi^\dagger \psi \rangle &=& - \frac{K\_3}{2} \,\langle (\psi^3)^\dagger \psi^3 \rangle. \label{Nevol:Phi3}\end{aligned}$$ The expectation value of *ψ*†*ψ* is the local number density: *n**A* = ⟨*ψ*†*ψ*⟩. In a thermal gas, the expectation value of (*ψ*3)†*ψ*3 is 6*n**A*3. Comparing with Eq. , we see that the constant *K*3 in Eq.  is the *T* → 0 limit of the 3-body recombination event rate constant *K*3(*T*). Its temperature dependence is negligible, because *k**T* is much smaller than the binding energy of the molecule. In a Bose-Einstein condensate at zero temperature, the expectation value of (*ψ*3)†*ψ*3 can be expressed as ∣⟨*ψ*⟩3∣2 = *n**A*3, where ⟨*ψ*⟩ is the mean field. The atom loss rate is given by Eq.  except that the prefactor on the right side is smaller than for a thermal gas by a factor of 1/6. Inelastic Two-Atom Loss Rate ---------------------------- In cold atom experiments, the scattering length *a* can be controlled experimentally by tuning an external magnetic field to near a Feshbach resonance. If *a* is much larger than the range of the interactions between the atoms, which for ultracold atoms is the van der Waals length scale, the interactions must be treated nonperturbatively. Fermionic atoms in two hyperfine states 1 and 2 with scattering length *a* can be described by an effective field theory with quantum fields *ψ*1 and *ψ*2. The interaction term in the Hamiltonian can be expressed as $$H\_{\rm int} = \frac{g\_0}{m}\int d^3 r\, \big(\psi\_2 \psi\_1\big)^\dagger \big( \psi\_2 \psi\_1 \big), \label{Hint-psi0}$$ where *g*0 is the bare coupling constant. If the ultraviolet cutoff Λ is imposed on the momenta of virtual atoms, the bare coupling constant is $$g\_0 = \frac{4 \pi}{1/a-(2/\pi)\Lambda}. \label{g0}$$ If the pair of atoms in the hyperfine states 1 and 2 has a *spin-relaxation* scattering channel into a pair atoms in lower hyperfine states 3 and 4, the optical theorem implies that the scattering length *a* is complex with a small negative imaginary part. The energy release from the spin-relaxation reaction is much larger than the energy scales for ultracold atoms, so this is a deeply inelastic reaction. The high-momentum atoms in the hyperfine states 3 and 4 can be integrated out to get an effective field theory for low-energy atoms in the hyperfine states 1 and 2 only. The interaction term in the Hamiltonian is still that in Eq. , except that now $H\_{\rm int}$ has an anti-Hermitian part because the complex scattering length *a* makes the bare coupling constant *g*0 in Eq.  complex. Determining the loss rate of atoms due to the deeply inelastic spin-relaxation reaction is not trivial, because the large scattering length makes the problem nonperturbative. However, an exact result for the inelastic two-atom loss rate in any state has been proposed by Shina Tan. The rates of decrease in the numbers *N*1 and *N*2 of atoms in the two hyperfine states are given by the universal relation $$\frac{d\ }{dt} \langle N\_1 \rangle= \frac{d\ }{dt} \langle N\_2 \rangle = - \frac{\hbar}{2 \pi m}{\rm Im} \big( 1/a \big)\, C, \label{uni2loss}$$ where *C* is a property of the system called the *contact*. The coefficient of *C*, which is proportional to the imaginary part of 1/*a*, is determined by 2-body physics. The contact was first introduced by Shina Tan in 2005. It is the thermodynamic variable conjugate to 1/*a*. The contact *C* is an extensive variable: it is the integral over space of the contact density ${\cal C}$, which measures the number of 1-2 pairs per (volume)4/3. (The unusual power of volume arises from an anomalous dimension associated with a non-Gaussian renormalization group fixed point.) Shina Tan derived other universal relations involving the contact that hold for any state of the system. They include the “adiabatic relation” that specifies how the free energy of a system is affected by a change in the scattering length : $$\frac{d~~~~}{d(1/a)}F = - \frac{\hbar^2}{4 \pi m} C. \label{adiabatic}$$ Many other universal relations involving the contact were subsequently derived. The universal relation for the inelastic two-atom loss rate in Eq. ([uni2loss]) can be derived by expressing the Hermitian operator *K* in the effective Hamiltonian *H* − *i**K* as $$K = \frac{\hbar^2}{4\pi m}{\rm Im} \big( 1/a \big) \int\! d^3r\, \Phi^\dagger \Phi, \label{Kint-psi}$$ where the local Lindblad operator is Φ = *g*0*ψ*2*ψ*1. The operator product *ψ*2*ψ*1 is singular: its matrix elements diverge linearly with the cutoff Λ. The multiplication by *g*0, which according to Eq.  decreases as 1/Λ for large Λ, makes Φ a finite operator whose matrix elements have well-behaved limits as Λ → ∞. In a spin-relaxation scattering reaction, the decreases in the atom numbers *N*1 and *N*2 are both *k**i* = 1. According to Eq. , the rates of decrease in their system averages are therefore $$\begin{aligned} \frac{d\ }{dt} \langle N\_1 \rangle= \frac{d\ }{dt} \langle N\_2 \rangle = - \frac{\hbar}{2 \pi m}{\rm Im} \big( 1/a \big) \int\! d^3r\, \left\langle \Phi^\dagger \Phi \right\rangle. \label{N12evol:psi}\end{aligned}$$ The universal loss rate in Eq. ([uni2loss]) then follows from the identification of the contact density as $$\begin{aligned} {\cal C} = \left\langle\Phi^\dagger \Phi \right\rangle. \label{contactdensity}\end{aligned}$$ This identification can be verified by using the adiabatic relation in Eq. . The only dependence of the Hamiltonian on the scattering length *a* is in the interaction term in the Hamiltonian density ${\cal H}\_{\rm int}$ in Eq. . If the tiny imaginary part of the scattering length is neglected, the derivative of the Hermitian part *H* of the effective Hamiltonian with respect to 1/*a* is $$\frac{d~~~~}{d(1/a)}H = - \frac{\hbar^2}{4 \pi m} \, \int\! d^3r\, \Phi^\dagger \Phi. \label{dHint/da}$$ By the Feynman-Hellman theorem, the system average of the left side of this equation is the left side of the adiabatic relation in Eq. . With the identification of the contact density in Eq. , the system average of the right side of Eq.  is the right side of the adiabatic relation in Eq. . This completes the derivation of the inelastic two-atom loss rate in Eq. ([uni2loss]) and the adiabatic relation in Eq. . In Ref. , the authors presented an incorrect derivation of the universal relation for the inelastic two-atom loss rate that suggests that there are additional terms on the right side of Eq. . They assumed the density matrix *ρ* satisfies the naive time evolution equation in Eq. . This equation implies that the number of atoms is conserved, but that the probability ${\rm Tr}(\rho)$ decreases with time. In Ref. , the authors assumed that the mean atom number ⟨*N*⟩ was given by ${\rm Tr}(\rho N)/{\rm Tr}(\rho)$. The resulting expression for the time derivative of ⟨*N*⟩ can be expressed as a double integral over space of an expression involving the product of the number density operator and the Hamiltonian density operator. Upon applying the operator product expansion to the product of these operators, they obtained an expansion for (*d*/*d**t*)⟨*N*⟩ in terms of increasingly higher-dimension operators with the leading term given by Eq. . The derivation of the universal relation using the Lindblad equation makes it clear that there are no additional terms beyond the contact term in Eq. ([uni2loss]). Inelastic Three-Atom Loss Rate ------------------------------ Bosonic atoms that are all in the same hyperfine state can be described by a nonrelativistic quantum field theory with a quantum field *ψ*. If the scattering length *a* is much larger than the range of interactions of the atoms, two-atom interactions and three-atom interactions must both be treated nonperturbatively. The resulting quantum field theory is characterized by a discrete scaling symmetry in which lengths and time are multiplied by integer multiples of *λ*0 and *λ*02, respectively, where *λ*0 = *e**π*/*s*0 and *s*0 ≈ 1.00624 is a universal number. Three-atom interactions are determined by a parameter Λ\* with dimensions of momentum, upon which physical observables can only depend log-periodically. If *a* is positive, two-atom interactions produce a weakly bound diatomic molecule called the *shallow dimer*. Regardless of the sign of *a*, three-atom interactions produce a sequence of weakly bound triatomic molecules called *Efimov trimers*. In Ref. , several universal relations were derived that hold for any system consisting of identical bosons. The universal relations relate various properties of the system to two extensive thermodynamic variables: the 2-body contact *C*2, which is the analog of the contact for fermions with two spin states, and the 3-body contact *C*3. The atoms that are studied in cold atom experiments form deeply bound diatomic molecules (*deep dimers*) whose binding energies are orders of magnitude larger than the energy scales of the cold trapped atoms. The deep dimers provide various pathways for deeply inelastic reactions that result in the loss of three atoms. They include (a) three-body recombination, in which three low-energy atoms collide and two of them bind into a deep dimer, (b) dimer relaxation, in which a shallow dimer and an atom scatter into a deep dimer and an atom, and (c) Efimov trimer decay into a deep dimer and an atom. In Ref. , a universal relation for the three-atom inelastic loss rate was presented but not derived. In this section, we use the Lindblad equation to sketch the derivation of this relation. To derive the universal relation for the three-atom inelastic loss rate, we follow the formalism laid out in Ref. . In the absence of deep dimers, the interaction term in the Hamiltonian can be written as $$H\_{\rm int} = \frac{g\_2}{4m} \, \int\! d^3r\, (\psi^2)^\dagger\psi^2 +\frac{g\_3}{36m} \, \int\! d^3r\, (\psi^3)^\dagger\psi^3. \label{Hint-psi-3b}$$ Since this effective field theory has ultraviolet divergences, it must be regularized. The bare coupling constants *g*2 and *g*3 depend on an ultraviolet momentum cutoff Λ: $$\begin{aligned} g\_2(\Lambda) &=& \frac{8 \pi}{1/a-(2/\pi)\Lambda}\,, \label{eq:g2} \\ g\_3(\Lambda) &=& h\_0 \,\frac{9g\_2^2}{\Lambda^2}\, \frac{\sin(s\_0 \ln(\Lambda/\Lambda\_\*)-\arctan(1/s\_0))} {\sin(s\_0 \ln(\Lambda/\Lambda\_\*)+\arctan(1/s\_0))}, \label{eq:g3}\end{aligned}$$ where *a* is the two-body scattering length of the bosons and Λ\* is a three-body parameter introduced in Ref. . The three-body parameter Λ\* can be determined from any three-body datum, such as the binding energy of an Efimov trimer or the atom-dimer scattering length. (See Ref.  for explicit relations). The numerical prefactor *h*0 in Eq.  depends on the ultraviolet cutoff prescription and has the value *h*0 ≈ 0.879 if Λ is a sharp momentum cutoff. Three-atom losses from deeply inelastic reactions involving deep dimers can be taken into account by adding an imaginary part to the coupling constant *g*3 in the Hamiltonian in Eq. . The resulting anti-Hermitian part of the Hamiltonian is  − *i**K*, where *K* is $$K = -\frac{{\rm Im}(g\_3)}{36m} \int\! d^3r\, (\psi^{3})^\dagger\psi^3. \label{Hint-imag-3b}$$ Physical observables can be expressed particularly conveniently in terms of Λ\* and an additional real three-body loss parameter *η*\* defined by analytically continuing the three-body coupling constant *g*3 in Eq.  to complex values using the substitution lnΛ\* → lnΛ\* + *i**η*\*/*s*0. Inserting *k**i* = 3 into the expression in Eq.  for the three-atom inelastic loss rate, we obtain the universal relation $$\frac{d\ }{dt} \langle N \rangle= - \frac{6\hbar}{m s\_0}\sinh(2\eta\_\*) C\_3, \label{uni3loss}$$ where *C*3 is the three-body contact, which can be expressed as *C*3 = *f*(Λ)∫*d*3*r* ⟨(*ψ*3)†*ψ*3⟩. The prefactor *f*(Λ) depends on the ultraviolet cutoff. Since its precise form is not very insightful, it will not be given here. The three-body contact can also be defined in terms of the expectation value of the logarithmic derivative with respect to Λ\* of the Hermitian part of the effective Hamiltonian in Eq.  at fixed *a* : $$\Lambda\_\*\frac{\partial ~~}{\partial\Lambda\_\*}\langle H \rangle\bigg|\_a = -\frac{2\hbar^2}{m}C\_3.$$ The universal relation in Eq. , with the factor sinh(2*η*\*) approximated by 2*η*\*, was previously given in Ref. . It was also given previously in Ref.  for the special case of an Efimov trimer state. Summary ======= In this paper, we have shown that integrating out the high-momentum atoms produced by deeply inelastic reactions produces an open effective field theory in which the time evolution of the effective density matrix is given by the Lindblad equation. The effective density matrix can be obtained from the density matrix of the full theory by taking a partial trace over states that include high-momentum atoms and then carrying out an appropriate time average to eliminate high-frequency transients. The Lindblad operators are local, and they can be deduced from the anti-Hermitian terms in the effective Hamiltonian. The Lindblad terms in the evolution equation are essential to get the correct evolution equation for the mean atom number. We used the Lindblad equation to present the first correct derivation of the universal relation for the two-atom inelastic loss rate for fermionic atoms with two spin states that interact through a large scattering length. We also used the Lindblad equation to present the first derivation of the universal relation for the three-atom inelastic loss rate for identical bosonic atoms with a large scattering length. The Lindblad equation has many other applications to atom loss processes in cold atom physics. An obvious extension is to heteronuclear systems, for which there are two types of Efimov trimers. Due to the smaller discrete scaling factor associated with one type of Efimov trimer, atom loss processes in such systems have recently been of great theoretical and experimental interest. The Lindblad equation could be applied to fermionic atoms with two spin states on the upper branch of the two-atom spectrum at scattering lengths for which three-body recombination into the shallow dimer is a deeply inelastic reaction. It could also be applied to losses of dipolar atoms from three-body recombination into deep dimers. E.B. and H.-W.H. acknowledge useful discussions with Shina Tan. The research of E.B. was supported in part by the Department of Energy under grant DE-FG02-05ER15715, by the National Science Foundation under grant PHY-1310862, and by the Simons Foundation. The research of H.-W.H. was supported in part by the BMBF under contract 05P15RDFN1, and by the Deutsche Forschungsgemeinschaft (SFB 1245). The research of G.P.L. was supported in part by the National Science Foundation. Diagrammatic Illustration ========================= In this section, we demonstrate how the Lindblad structure of the evolution equation for the density matrix emerges from a diagrammatic analysis of a simple field theory model with a deeply inelastic 2-body reaction. To reduce visual clutter, we introduce compact notation for the integral over space and for the integral over a momentum: $$\int\_{\bm{r}} \equiv \int\! \!d^3r \,, \quad\quad \int\_{\bm{p}} \equiv \int\! \!\frac{d^3p}{(2\pi)^3}\,.$$ We also set ℏ = 1. Field Theory Model ------------------ We consider a quantum field theory with two fields *ψ* and *ϕ*. We refer to the particles annihilated by these field operators as *ψ* atoms and *ϕ* atoms, respectively. The field operators satisfy canonical commutation relations if the atoms are bosons and canonical anticommutation relations if they are fermions. The Hamiltonian for the full theory has the form $$\label{eq:hamiltonian} H\_{\rm{full}} = {H}\_0^\psi + {H}\_0^\phi + H\_\mathrm{int}\,.$$ The free-field terms in the Hamiltonian are $$\begin{aligned} H\_0^\psi &=& \int\_{\bm{r}} {\psi^\dagger}(\bm{r}) \left(-\frac{\nabla^2}{2M} \right) \psi(\bm{r})\,, \\ H\_0^\phi &=& \int\_{\bm{r}} {\phi^\dagger}(\bm{r}) \left(-\Delta - \frac{\nabla^2}{2M} \right) \phi(\bm{r})\,,\end{aligned}$$ where *M* is the mass of the atoms. We have chosen the rest energies of a *ψ* atom and a *ϕ* atom to be zero and  − Δ, respectively. The on-shell energies of atoms with momentum $\bm{p}$ are $$\begin{aligned} E\_{\bm{p}} &=& \bm{p}^2/(2M), \\ \omega\_{\bm{p}} &=& -\Delta + \bm{p}^2/(2M).\end{aligned}$$ The interaction Hamiltonian includes a term that allows a pair of *ψ* atoms to scatter into a pair of *ϕ* atoms: $$\label{eq:Hint} H\_\mathrm{int} = \tfrac{1}{4}g \int\_{\bm{r}} \left(\psi^{\dagger 2}(\bm{r})\phi^2(\bm{r}) +\phi^{\dagger 2}(\bm{r}) \psi^2(\bm{r})\right)+\ldots\,,$$ where the ellipses denote further local interaction terms, such as *ψ*† 2*ψ*2 and *ϕ*† 2*ϕ*2. Their precise form will not be needed in the following discussion. The leading contribution to the transition amplitude for the process *ψ**ψ* → *ψ**ψ* from inelastic reactions with a *ϕ**ϕ* intermediate state comes from the imaginary part of the one-loop diagram in Fig. [fig:scattering]. The energy release in the reaction *ψ**ψ* → *ϕ**ϕ* is $E\_{\rm deep} = 2\Delta$, and the corresponding momentum scale is $P\_{\rm deep} = (2M \Delta)^{1/2}$. We are interested in systems consisting of *ψ* atoms whose momenta are small compared to $P\_{\rm deep}$. The reaction *ψ**ψ* → *ϕ**ϕ* is therefore a deeply inelastic reaction. We refer to the momentum scale $P\_{\rm deep}$ as *high momentum*. Locality -------- [fig:scattering] Because of the large energy release, the deeply inelastic scattering process *ψ**ψ* → *ϕ**ϕ* is effectively local and instantaneous. It takes place over a spatial region of size $1/P\_{\rm deep}$ and during a time interval of length $1/E\_{\rm deep}$. We proceed to show how this locality can be exploited to remove high-momentum *ϕ* atoms from the theory and construct a low-energy effective field theory for *ψ* atoms. We do this first for two *ψ* atoms in this subsection and then for a system containing many *ψ* atoms in subsection [sec:Effrho]. For simplicity, we assume the coupling *g* is small and we work to leading order in *g*. The leading contribution to the *ψ**ψ* → *ψ**ψ* scattering amplitude from a *ϕ**ϕ* intermediate state is given by the diagram in Fig. [fig:scattering]. Using time-ordered perturbation theory (or using Feynman perturbation theory and integrating by contours over the loop energies), the off-shell scattering amplitude is $$\label{psi-selfenergy} T(E,\bm{p}) = - \tfrac{1}{2} g^2 \int\_{\bm{q}} \frac{1}{E - \omega\_{\bm{q}} -\omega\_{\bm{p}-\bm{q}} + i\epsilon}\,,$$ where *E* and $\bm{p}$ are the total energy and the total momentum of the two incoming *ψ* atoms, respectively. For convenience, we consider *ψ**ψ* → *ψ**ψ* scattering in the center-of-mass frame. In this reference frame, the incoming *ψ* atoms have momenta $\pm\bm{k}$ and energies $E\_{\bm{k}}=\bm{k}^2/(2M)$. The on-shell scattering amplitude is therefore $T(2E\_{\bm{k}},0)$. The total cross section for *ψ**ψ* → *ϕ**ϕ* scattering can be obtained using the optical theorem by evaluating the imaginary part of *T* on the energy shell. The real part of the scattering amplitude in Eq.  is ultraviolet divergent. The divergence can be canceled by a renormalization of the coupling constant for the *ψ**ψ* → *ϕ**ϕ* interaction. After renormalization, the integral over $\bm{q}$ in Eq.  is dominated by high momenta of order $P\_{\rm deep}$. Consequently we can expand the on-shell scattering amplitude $T(2E\_{\bm{k}},0)$ in powers of $\bm{k}^2/P\_{\rm deep}^2$. We are primarily interested in constructing an effective Hamiltonian that takes into account the leading term *T*(0, 0). Successively higher powers of $\bm{k}^2/P\_{\rm deep}^2$ could be taken into account through successively higher-order gradient terms in the effective Hamiltonian. Similarly, successively higher powers of $E-2E\_{\bm{k}}$ in the expansion of *T* about the on-shell energy could be taken into account through successively higher-order time-derivative terms in the effective Lagrangian. In the sector with only two *ψ* atoms, the leading effect of the scattering amplitude $T(E,\bm{p})$ can be taken into account by adding a local term to the free Hamiltonian *H*0*ψ*. The resulting effective Hamiltonian is $$\label{eq:heff} H - i K = H\_0^\psi - \tfrac14 T(0,0) \int\_{\bm{r}} \psi^{\dagger 2}(\bm{r}) \psi^2(\bm{r}).$$ This equation defines the Hermitian operators *H* and *K*. The real part of *T*(0, 0), which is ultraviolet-divergent, can be cancelled by renormalizing the coupling constant for the *ψ*† 2*ψ*2 interaction term contained in the ellipses in Eq. ([eq:Hint]). The anti-Hermitian part  − *i**K* of the effective Hamiltonian comes from the imaginary part of *T*(0, 0), which is $${\rm Im} T(0,0) = \frac{Mg^2}{8\pi}(2 M \Delta)^{1/2}. \label{ImT00}$$ The locality of the inelastic scattering process, which implies $T(2E\_{\bm{k}},0)\approx T(0,0)$, allows us to simplify the contributions to correlators involving the operator $\phi^2(\bm{r},t)$ from its annihilation of *ϕ* atoms that come from *ψ**ψ* scattering. [fig:scattering-simp] For example, the correlator ${\langle 0 |}\phi^2(\bm{r},t)\psi^{\dagger2}(\bm{r}',0){|0\rangle}$ illustrated in Fig. [fig:scattering-simp] can be simplified as follows: $$\begin{aligned} \label{eq:phi2psid} g\,{\langle 0 |}\phi^2(\bm{r},t)\psi^{\dagger2}(\bm{r}',0){|0\rangle} &=& \int\frac{dE}{2\pi} \int\_{\bm{p}} e^{-iEt+i\bm{p}\cdot(\bm{r}-\bm{r}')} \int\_{\bm{k}} \frac{- 2i \,T(E,\bm{p})}{E - E\_{\bm{k}} - E\_{\bm{p}-\bm{k}} + i\epsilon} \nonumber \\ &\approx& -T(0,0)\int\frac{dE}{2\pi}\int\_{\bm{p}} e^{-iEt+i\bm{p}\cdot(\bm{r}-\bm{r}')} \int\_{\bm{k}} \frac{2i}{E - E\_{\bm{k}} - E\_{\bm{p}-\bm{k}} + i\epsilon} \nonumber \\ &=& -T(0,0)\,{\langle 0 |}\psi^2(\bm{r},t) \psi^{\dagger 2}(\bm{r}',0) {|0\rangle},\end{aligned}$$ where *T* is given in Eq. ([psi-selfenergy]). Generally, we can make the following replacements in such situations: $$\begin{aligned} g\, \phi^2(\bm{r},t) &\longrightarrow - T(0,0)\,\psi^2(\bm{r},t), \label{eq:phi^2approx} \\ g\, \phi^{\dagger2}(\bm{r},t) &\longrightarrow - T^\*(0,0)\,\psi^{\dagger2}(\bm{r},t). \label{eq:phid^2approx}\end{aligned}$$ We use these substitutions repeatedly in the next subsection. Effective Density Matrix ------------------------ Replacing the free Hamiltonian *H*0*ψ* by the effective Hamiltonian *H* − *i**K* is all that is needed to analyze the impact of inelastic scattering processes on states with only two *ψ* atoms. Analyzing multi-*ψ* states is more complicated, however, because a system that is described initially by a state with *N* *ψ* atoms evolves into a superposition of states with *N*, *N* − 2, *N* − 4, … *ψ* atoms. The state with two *ψ* atoms also evolves into a superposition, but there are only two states, *N* = 2 and *N* = 0, and we do not care about the second one. For *N* > 2, we need the density matrix *ρ*full(*t*) to track the superposition of states containing different numbers of *ψ* atoms over time. A convenient basis for the quantum-state space of the full theory consists of the direct products ${|\bm{x}\_1\ldots \bm{x}\_n\rangle}\_\psi\otimes {|\bm{y}\_1\ldots \bm{y}\_m\rangle}\_\phi$ of localized multi-atom *ψ* and *ϕ* states defined by $$\begin{aligned} {|\bm{x}\_1\ldots \bm{x}\_n\rangle}\_\psi &=& \frac{1}{\sqrt{n!}}{\psi^\dagger}(\bm{x}\_n) \cdots {\psi^\dagger}(\bm{x}\_1) {|0\rangle}\_\psi, \label{def-psistates} \\ {|\bm{y}\_1\ldots \bm{y}\_m\rangle}\_\phi &=& \frac{1}{\sqrt{m!}}{\phi^\dagger}(\bm{y}\_m) \cdots {\phi^\dagger}(\bm{y}\_1) {|0\rangle}\_\phi,\end{aligned}$$ where ∣0⟩*ψ* and ∣0⟩*ϕ* are the vacuum states annihilated by $\psi(\bm{r})$ and $\phi(\bm{r})$, respectively. The full density matrix *ρ*full can be expanded in the basis of direct product states. Its time evolution is given by *ρ*full(*t*) ≡ e− *i**H*full*t**ρ*full(0)e*i**H*full*t*,  where *H*full is the full Hamiltonian in Eq. ([eq:hamiltonian]). We can define an effective density matrix *ρ*(*t*) by tracing over the *ϕ* states: $$\begin{aligned} \label{eq:rhoeff} \rho(t) &\equiv& \mathrm{Tr}\_\phi\,\rho\_{\mathrm{full}}(t) \\ &=& \sum\_{m=0}^\infty \int\_{\bm{y}\_1 \ldots \bm{y}\_m} {}\_\phi{\langle \bm{y}\_1 \ldots \bm{y}\_m |} \rho\_{\mathrm{full}}(t) {|\bm{y}\_1 \ldots \bm{y}\_m\rangle}\_\phi.\end{aligned}$$ This operator acts only on *ψ* states. A convenient basis for the effective density matrix consists of outer products of the *ψ* states defined in Eq.  of the form ${|\bm{x}\_1 \ldots \bm{x}\_n\rangle}\_\psi\, {}\_\psi {\langle \bm{x}\_1^\prime \ldots \bm{x}\_{n'}^\prime |}$. The time derivative of the effective density matrix can be obtained by differentiating Eq.  and using the time dependence of the full density matrix in Eq. : $$\label{eq:ddtrhoeff} i\frac{d\ }{dt} \rho = \mathrm{Tr}\_\phi\big( H\_{\mathrm{full}}\,\rho\_{\mathrm{full}}- \rho\_{\mathrm{full}}\,H\_{\mathrm{full}}\big).$$ The contributions from the kinetic terms in the full Hamiltonian in Eq. ([eq:hamiltonian]) are simple: $$\begin{aligned} \mathrm{Tr}\_\phi \big(\big[H\_0^\psi, \rho\_{\mathrm{full}}\big] \big) &=& \big[H\_0^\psi,\rho\big], \\ \mathrm{Tr}\_\phi \big(\big[H\_0^\phi, \rho\_{\mathrm{full}}\big] \big)&=& 0. \label{Trphi[Hrho]}\end{aligned}$$ The first equation holds because *H*0*ψ* does not act on *ϕ* states. The second equation holds because *H*0*ϕ* depends only on *ϕ* fields.[1](#fn1) The evolution equation reduces to $$\label{eq:ddtrhoeff-2} i\frac{d\ }{dt} \rho = \big[H\_0^\psi,\rho\big] + \mathrm{Tr}\_\phi\big(H\_{\rm int}\,\rho\_{\mathrm{full}}- \rho\_{\mathrm{full}}\,H\_{\rm int} \big).$$ Since there are two terms in the interaction Hamiltonian in Eq. , there are four contributions to the partial trace in Eq. . We first consider the contribution to the partial trace of *H*int*ρ*full from the first term in *H*int in Eq. ([eq:Hint]). This contribution is dominated by terms in which $\phi^2(\bm{r})$ annihilates *ϕ* atoms generated (through interactions) by the *ψ*-sector of *ρ*full, leading to correlators like Eq. ([eq:phi2psid]). We can therefore use the substitution in Eq. ([eq:phi2approx]) to obtain $$\begin{aligned} \label{drho/dt-1} \mathrm{Tr}\_\phi\left[\Big(\tfrac{1}{4}g\int\_{\bm{r}} \psi^{\dagger 2}(\bm{r})\phi^2(\bm{r})\Big)\rho\_{\mathrm{full}}\right] \longrightarrow -\tfrac{1}{4}T(0,0)\int\_{\bm{r}} \psi^{\dagger 2}(\bm{r})\psi^2(\bm{r}) \; \rho.\end{aligned}$$ Similarly, we can use the substitution in Eq. ([eq:phid2approx]) to obtain the contribution to the partial trace of *ρ*full*H*int from the second term in *H*int: $$\begin{aligned} \label{drho/dt-2} \mathrm{Tr}\_\phi\left[ \rho\_{\mathrm{full}}\Big(\tfrac{1}{4}g\int\_{\bm{r}}\phi^{\dagger 2}(\bm{r})\psi^2(\bm{r}) \Big) \right] \longrightarrow -\tfrac{1}{4}T^\*(0,0)\int\_{\bm{r}} \rho \; \psi^{\dagger 2}(\bm{r})\psi^2(\bm{r}).\end{aligned}$$ The contributions to the sum of the partial traces in Eqs.  and from the real part of *T*(0, 0) can be added to the term [*H*0*ψ*, *ρ*] in Eq.  to get [*H*, *ρ*], where *H* is the Hermitian part of the effective Hamiltonian defined in Eq. ([eq:heff]). The contributions to the sum of the partial traces in Eqs.  and from the imaginary part of *T*(0, 0) gives  − *i*{*K*, *ρ*}, where  − *i**K* is the anti-Hermitian part of the effective Hamiltonian defined in Eq. ([eq:heff]). Note that we are making a key physical assumption about *ρ*full when we use the substitution in Eq. ([eq:phi2approx]) to replace $g\,\phi^2(\bm{r})$ by $-T(0,0)\,\psi(\bm{r})^2$. As we indicated above, this substitution is valid provided $\phi^2(\bm{r})$ annihilates *ϕ* atoms produced by the *ψ*-sector of *ρ*full. In principle, it is also possible for $\phi^2(\bm{r})$ to annihilate *ϕ* atoms from the *ϕ*-sector of *ρ*full. We assume that such contributions can be ignored because the probability for finding two *ϕ* atoms at the same space-time point is vanishingly small (and therefore the probability of an inverse reaction, *ϕ**ϕ* → *ψ**ψ*, is negligible). This is the case, for example, if *ρ*full describes a situation in which all *ϕ* atoms are produced by inelastic *ψ**ψ* reactions and, once produced, they either escape from the system or they interact so weakly with the low-energy *ψ* atoms that they decouple. The contribution to the partial trace of *H*int*ρ*full from the second term in the interaction Hamiltonian *H*int in Eq.  is $$\begin{aligned} \label{drho/dt-3a} \mathrm{Tr}\_\phi\left[\Big(\tfrac{1}{2}g\int\_{\bm{r}} \phi^{\dagger 2}(\bm{r})\psi^2(\bm{r})\Big) \rho\_{\mathrm{full}}\right] =\mathrm{Tr}\_\phi\left[\tfrac{1}{2}g\int\_{\bm{r}}\psi^2(\bm{r})\rho\_{\mathrm{full}}\phi^{\dagger 2}(\bm{r})\right].\end{aligned}$$ In the second expression, the factor of $\phi^{\dagger 2}(\bm{r})$ has been moved to the right side of *ρ*full. To verify this equality, we begin by inserting the definition of the partial trace on the left side of Eq. : $$\begin{aligned} \label{drho/dt-3b} \sum\_{m=0}^\infty \int\_{\bm{y}\_1 \ldots \bm{y}\_m} {}\_\phi{\langle \bm{y}\_1\ldots \bm{y}\_m |} \Big(\tfrac{1}{2}g\int\_{\bm{r}} \phi^{\dagger 2}(\bm{r})\psi^2(\bm{r})\Big) \rho\_{\mathrm{full}}{|\bm{y}\_1\ldots \bm{y}\_m\rangle}\_\phi.\end{aligned}$$ The ${\phi^\dagger}(\bm{r})$ factors remove *ϕ* atoms from the bras on the left side of this equation. Taking into account the symmetry under permutations of the integration variables, their effect can be taken into account by the substitution $$\begin{aligned} \label{eq:brasub} {}\_\phi{\langle \bm{y}\_1\bm{y}\_2\ldots\bm{y}\_m |} \phi^{\dagger2}(\bm{r}) \longrightarrow {}\_\phi{\langle \bm{y}\_1\bm{y}\_2\ldots\bm{y}\_{m-2} |}\, \sqrt{m(m-1)} \,\delta^3(\bm{r}-\bm{y}\_m)\,\delta^3(\bm{r}-\bm{y}\_{m-1}).\end{aligned}$$ The kets on the right side of Eq.  can be expressed as $$\begin{aligned} \label{eq:ketsub} {|\bm{y}\_1\bm{y}\_2\ldots\bm{y}\_m\rangle}\_\phi = \frac{{\phi^\dagger}(\bm{y}\_m){\phi^\dagger}(\bm{y}\_{m-1})}{\sqrt{m(m-1)}}\,{|\bm{y}\_1\ldots\bm{y}\_{m-2}\rangle}\_\phi.\end{aligned}$$ Making the substitutions in Eqs. ([eq:brasub]) and and using the delta functions to integrate over $\bm{y}\_m$ and $\bm{y}\_{m-1}$, we obtain the expression on the right side of Eq. . Upon making the substitution in Eq. , we obtain $$\begin{aligned} \label{drho/dt-3} \mathrm{Tr}\_\phi\left[\Big(\tfrac{1}{4}g\int\_{\bm{r}} \phi^{\dagger 2}(\bm{r})\psi^2(\bm{r})\Big) \rho\_{\mathrm{full}}\right] \longrightarrow -\tfrac{1}{4}T^\*(0,0) \int\_{\bm{r}} \psi^2(\bm{r})\,\rho\, \psi^{\dagger 2}(\bm{r}).\end{aligned}$$ Similarly, the contribution to the partial trace of *ρ*full*H*int from the second term in *H*int in Eq. ([eq:Hint]) is $$\begin{aligned} \label{drho/dt-4} \mathrm{Tr}\_\phi\left[ \rho\_{\mathrm{full}}\Big(\tfrac{1}{4}g\int\_{\bm{r}} \psi^{\dagger 2}(\bm{r})\phi^2(\bm{r})\Big)\right] \longrightarrow -\tfrac{1}{4}T(0,0)\int\_{\bm{r}} \psi^2(\bm{r})\,\rho\,\psi^{\dagger 2}(\bm{r}).\end{aligned}$$ The real part of *T*(0, 0) cancels in the difference between the contributions in Eqs.  and. The contributions from the imaginary part of *T*(0, 0) to the partial trace in Eq.  is proportional to the integral over space of the operator $\psi^2(\bm{r})\,\rho\,\psi^{\dagger 2}(\bm{r})$. Adding the four contributions in Eqs. ,,, and to the partial trace in Eq. , we see that the evolution equation for the effective density matrix is $$\begin{aligned} \label{eq:rhoefffinal} i \frac{d\ }{dt}\rho = \big[H,\rho\big] - \frac{i}{4}\,{\rm Im}T(0,0) \int\_{\bm{r}} \big[ \psi^{\dagger2}\psi^2(\bm{r})\, \rho + \rho\, \psi^{\dagger2}\psi^2(\bm{r}) -2\, \psi^2(\bm{r})\, \rho\, \psi^{\dagger2}(\bm{r}) \big],\end{aligned}$$ where ${\rm Im}T(0,0)$ is given in Eq. . This has the standard form of the Lindblad equation . The last term removes *ψ* atoms two at a time to account for their disappearance due to inelastic scattering into pairs of high-momentum *ϕ* atoms. 99 E. Braaten and H.-W. Hammer, Universality in few-body systems with large scattering length, Phys. Rept.  **428**, 259 (2006) [cond-mat/0410417]. H. Georgi, Effective field theory, Ann. Rev. Nucl. Part. Sci.  **43**, 209 (1993). D.B. Kaplan, Effective field theories, nucl-th/9506035. A.V. Manohar, Effective field theories, Lect. Notes Phys.  **479**, 311 (1997) [hep-ph/9606222]. R. Shankar, Effective field theory in condensed matter physics, in *Conceptual Foundations of Quantum Field Theory*, ed. T.Y. Cao (Cambridge University Press, 1999) [cond-mat/9703210]. C.P. Burgess, Introduction to Effective Field Theory, Ann. Rev. Nucl. Part. Sci.  **57**, 329 (2007) [hep-th/0701053]. E. Braaten, H.-W. Hammer, and G.P. Lepage, Open effective field theories from deeply inelastic reactions, arXiv:1607.02939. G. Lindblad, On the generators of quantum dynamical semigroups, Commun. Math. Phys.  **48**, 119 (1976). V. Gorini, A. Kossakowski, and E.C.G. Sudarshan, Completely positive dynamical semigroups of *N*-level systems, J. Math. Phys.  **17**, 821 (1976). S. Grozdanov and J. Polonyi, Viscosity and dissipative hydrodynamics from effective field theory, Phys. Rev. D **91**, 105031 (2015) [arXiv:1305.3670]. C.P. Burgess, R. Holman, G. Tasinato, and M. Williams, EFT beyond the horizon: stochastic inflation and how primordial quantum fluctuations go classical, JHEP **1503**, 090 (2015) [arXiv:1408.5002]. C.P. Burgess, R. Holman, and G. Tasinato, Open EFTs, IR effects and late-time resummations: systematic corrections in stochastic inflation, JHEP **1601**, 153 (2016) [arXiv:1512.00169]. J. Preskill, Lecture Notes for Physics 229: Quantum Information and Computation (unpublished), Chapter 3. A. Robert, O. Sirjean, A. Browaeys, J. Poupard, S. Nowak, D. Boiron, C.I. Westbrook, and A. Aspect, A Bose-Einstein condensate of metastable atoms, Science **292**, 461 (2001). F. Pereira Dos Santos, J Léonard, Ju. Wang, C.J. Barrelet, F. Perales, E. Rasel, C.S. Unnikrishnan, M. Leduc, and C. Cohen-Tannoudji, Bose-Einstein condensation of metastable helium, Phys. Rev. Lett.  **86**, 3459 (2001) [cond-mat/0103387]. S. Tan, unpublished. S. Tan, Energetics of a strongly correlated Fermi gas, Annals of Physics **323**, 2952 (2008) [cond-mat/0505200]. S. Tan, Large momentum part of a strongly correlated Fermi gas, Annals of Physics **323**, 2971 (2008) [cond-mat/0508320]. E. Braaten, Universal relations for fermions with large scattering length, Lect. Notes Phys.  **836**, 193 (2012) [arXiv:1008.2922]. E. Braaten and L. Platter, Exact relations for a strongly interacting Fermi gas from the operator product expansion, Phys. Rev. Lett.  **100**, 205301 (2008) [arXiv:0803.1125]. E. Braaten and H.-W. Hammer, Universal relation for the inelastic two-body loss rate, J. Phys. B **46**, 215203 (2013) [arXiv:1302.5617]. E. Braaten, D. Kang, and L. Platter, Universal relations for identical bosons from 3-body physics, Phys. Rev. Lett.  **106**, 153005 (2011) [arXiv:1101.2854]. D. H. Smith, E. Braaten, D. Kang, and L. Platter, Two-body and three-body contacts for identical bosons near unitarity, Phys. Rev. Lett.  **112**, 110402 (2014) [arXiv:1309.6922]. P.F. Bedaque, H.-W. Hammer, and U. van Kolck, Renormalization of the three-body system with short range interactions, Phys. Rev. Lett.  **82**, 463 (1999) [nucl-th/9809025]. E. Braaten and H.-W. Hammer, Enhanced dimer deactivation in an atomic/molecular BEC, Phys. Rev. A **70**, 042706 (2004) [cond-mat/0303249]. F. Werner and Y. Castin, General relations for quantum gases in two and three dimensions. II. Bosons and mixtures, Phys. Rev. A **86**, 053633 (2012) [arXiv:1210.1784]. K. Helfrich, H.-W. Hammer, and D.S. Petrov, Three-body problem in heteronuclear mixtures with resonant interspecies interaction, Phys. Rev. A **81**, 042715 (2010) [arXiv:1001.4371]. S.-K. Tung, K. Jimenez-Garcia, J. Johansen, C.V. Parker, and C. Chin, Geometric scaling of Efimov states in a 6Li–133Cs mixture, Phys. Rev. Lett. **113**, 240402 (2014) [arXiv:1402.5943]. R. Pires, J. Ulmanis, S. Häfner, M. Repp, A. Arias, E.D. Kuhnle, and M. Weidemüller, Observation of Efimov resonances in a mixture with extreme mass imbalance, Phys. Rev. Lett. **112**, 250404 (2014) [arXiv:1403.7246]. D.S. Petrov and F. Werner, Three-body recombination in heteronuclear mixtures at finite temperature, Phys. Rev. A **92**, 022704 (2015) [arXiv:1502.04092]. V.B. Shenoy and T.-L. Ho, Nature and properties of a repulsive Fermi gas in the upper branch of the energy spectrum, Phys. Rev. Lett.  **107**, 210401 (2011) [arXiv:1106.0960]. C. Ticknor and S.T. Rittenhouse, Three body recombination of ultracold dipoles to weakly bound dimers, Phys. Rev. Lett. **105**, 013201 (2010) [arXiv:1003.2164]. Y. Wang, J.P. D’Incao, C.H. Greene, Efimov effect for three interacting bosonic dipoles, Phys. Rev. Lett. **106**, 233201 (2011) [arXiv:1103.1406]. Y. Wang, J.P. D’Incao, C.H. Greene, Universal three-body physics for fermionic dipoles, Phys. Rev. Lett. **107**, 233201 (2011) [arXiv:1106.6133]. --- 1. The identity ${\mathrm{Tr}}\_\phi ( \hat A \hat B ) = {\mathrm{Tr}}\_\phi ( \hat B \hat A )$ holds for any operator $\hat A$ constructed out of the field *ϕ* and any operator $ \hat B$. This can be verified by expressing the partial trace as a sum over a complete set of *ϕ* states and inserting a complete set of *ϕ* states between $\hat A$ and $\hat B$.[↩](#fnref1) Lindblad Equation for the Inelastic Loss of Ultracold Atoms ============================================================ The loss of ultracold trapped atoms due to deeply inelastic reactions has previously been taken into account in effective field theories for low-energy atoms by adding local anti-Hermitian terms to the effective Hamiltonian. Here we show that when multi-atom systems are considered, an additional modification is required in the equation governing the density matrix. We define an effective density matrix by tracing over the states containing high-momentum atoms produced by deeply inelastic reactions. We show that it satisfies a Lindblad equation, with local Lindblad operators determined by the local anti-Hermitian terms in the effective Hamiltonian. We use the Lindblad equation to derive the universal relation for the two-atom inelastic loss rate for fermions with two spin states and the universal relation for the three-atom inelastic loss rate for identical bosons. Introduction ============ The development of technologies to trap and cool neutral atoms has led to the emergence of cold atom physics as a new subfield of atomic physics. The atoms can be cooled to temperatures that are orders of magnitude smaller than the tiny differences between the hyperfine energy levels of the atoms. Many of the most important loss processes for ultracold atoms involve deeply inelastic reactions, which produce atoms with kinetic energies much larger than those of the cold trapped atoms. One such process is three-body recombination, in which a collision of three low-energy atoms results in the binding of two of the atoms into a diatomic molecule. If the binding energy of the molecule is large compared to the energy scale of the cold atoms, the momenta of the molecule and the recoiling atom are often large enough for them to escape from the trapping potential for the atoms. Ultracold atoms can be described by a local nonrelativistic effective field theory for which the coupling constant is the scattering length. Local effective field theories can be applied most rigorously to a system in which there is an energy gap separating the low-energy particles described explicitly by the effective field theory from the higher-momentum particles. In a system consisting of low-energy atoms, conservation of energy ensures that a high-momentum atom can only appear through a virtual state that, by the time-energy uncertainty principle, has a short lifetime. During that short lifetime, the high-momentum atom can propagate only over a short distance. Its effects on low-energy atoms are therefore restricted to short distances. These effects can be reproduced by local Hermitian operators in the Hamiltonian for the effective field theory. Local effective field theories have also been applied to systems with deeply inelastic reactions that produce particles with momenta too large to be described accurately within the effective field theory. For example, a deeply inelastic three-body recombination event produces a molecule and an atom whose momenta may be outside the domain of validity of the effective theory. The large energy release from the inelastic reaction comes from the conversion of rest energy into kinetic energy. The standard argument for the locality of the effective field theory does not apply. The particles with large momenta produced by the inelastic reaction can propagate over arbitrarily long distances, so their effects on low-energy particles are not obviously restricted to short distances. Nevertheless, there are general arguments based on the uncertainty principle that indicate that their effects can be taken into account systematically through local anti-Hermitian operators in the effective Hamiltonian. The effective Hamiltonian can be expressed as $H\_{\rm eff} = H -iK$, where *H* and *K* are Hermitian. The dynamics of a multi-atom system with deeply inelastic reactions is conveniently described by a density matrix. A system that starts as a pure quantum state with *n* low-energy atoms evolves into a mixed quantum state that is a superposition of a state with *n* low-energy atoms and states with fewer low-energy atoms, as the inelastic reactions shift probability from the low-energy atoms into the high-momentum atoms. An effective density matrix *ρ* can be defined by tracing the density matrix over states containing high-momentum atoms. Naively we might expect the time evolution equation for *ρ* to be $$\begin{aligned} i\hbar \frac{d\ }{dt}\rho &\overset{?}{=} & H\_{\mathrm{eff}}\rho- \rho H\_{\mathrm{eff}}= \left[H, \rho\right] - i \left\{K,\rho\right\}. \label{eq:drhodt-naive}\end{aligned}$$ As we will demonstrate in this paper, the correct evolution equation for the effective density matrix is the *Lindblad equation*, which has an additional term. The Lindblad equation arises in the quantum information theory of *open quantum systems*. An open quantum system consists of all the degrees of freedom of both the subsystem of interest and the environment. Under special circumstances, the density matrix for the subsystem evolves in time according to the Lindblad equation. In the Lindblad equation for the density matrix of an effective field theory obtained by integrating out deeply inelastic reactions, the additional Lindblad term is local, and it can be deduced from the local anti-Hermitian terms in the effective Hamiltonian. An open quantum system in which the subsystem of interest is a field theory is called an *open effective field theory*. Grozdanov and Polonyi have proposed that an open effective field theory for the hydrodynamic modes of a quantum field theory can be used as a framework for deriving dissipative hydrodynamics. Burgess, Holman, Tasinato, and Williams have applied open effective field theory to the super-Hubble modes of primordial quantum fluctuations in the early universe. In the stochastic inflation framework, the master equation is the Lindblad equation. Since the density matrix for an effective field theory in which deeply inelastic reaction products have been integrated out satisfies the Lindblad equation, this system can also be regarded as an open effective field theory. In this case, the environment consists of the high-momentum particles produced by the deeply inelastic reactions. The paper is organized as follows. In Section [sec:DensityMatrix], we summarize the basic properties of the density matrix and we introduce the Lindblad equation. In Section [sec:Effective Theory], we explain how integrating out deeply inelastic reaction products results in local operators in the effective Lagrangian density. We also explain why the effective density matrix obtained by tracing over states that include deeply inelastic reaction products must satisfy the Lindblad equation. In Section [sec:AtomLoss], we apply the Lindblad equation to the mean number of low-energy atoms. We derive the universal relation for the two-atom inelastic loss rate for fermions with two spin states and the universal relation for the three-atom inelastic loss rate for identical bosons. Our results are summarized briefly in Section [sec:Summary]. In an Appendix, we demonstrate how the Lindblad equation for the density matrix emerges from the diagrammatic analysis of a simple field theory model with a deeply inelastic two-atom scattering reaction. Density Matrix ============== The density matrix can be used to describe pure quantum states, mixed quantum states, and statistical ensembles of quantum states. The dynamics of a multi-atom system with atom losses must be described by the density matrix in order to be able to track the losses in the different *n*-atom sectors. To lay the foundation for the rest of the paper, we summarize the key properties of the density matrix and we introduce the Lindblad equation. General Properties ------------------ A pure state in a quantum system can be represented by a vector ∣*ψ*⟩ in a complex vector space. The time evolution of the quantum state is described by the Schrödinger equation: *i*ℏ(*d*/*d**t*)∣*ψ*⟩ = *H*∣*ψ*⟩, where the Hamiltonian *H* is a Hermitian operator. The time evolution preserves the norm ⟨*ψ*∣*ψ*⟩ of a state. An arbitrary statistical ensemble of quantum states can be represented by a density matrix *ρ*. The density matrix has the following basic properties: * It is Hermitian: *ρ*† = *ρ*. * It is positive: ⟨*ψ*∣*ρ*∣*ψ*⟩ ≥ 0 for all nonzero states ∣*ψ*⟩. * It can be normalized to have unit trace: ${\rm Tr}(\rho) = 1$. The density matrix can also describe a pure quantum state, in which case *ρ*2 = *ρ*. The time evolution of the density matrix is described by the von Neumann equation: $$i \hbar \frac{d\ }{dt} \rho = [ H, \rho], \label{evol:Schrodinger}$$ which follows from the Schrödinger equation for ∣*ψ*⟩. This evolution equation has the following properties: * It is linear in *ρ*. * It preserves the trace of *ρ*. It therefore respects the normalization condition ${\rm Tr}(\rho) = 1$. * It is *Markovian*: the future is determined by the present only. There is no additional dependence on the past history. The system average of an operator *O* can be expressed as the trace of its product with the density matrix: $$\langle O \rangle = {\rm Tr}(\rho O ). \label{d<O>}$$ If the operator has no explicit time dependence, the time evolution of the system average is determined by the evolution equation of *ρ* in Eq. . Lindblad equation ----------------- In the field of quantum information theory, the full system is often separated into a *subsystem* of interest and its *environment*. Of particular interest is the decoherence of the subsystem due to the effects of the environment. The basis for the quantum states of the full system can be chosen to be direct products of basis states for the subsystem and those for the environment. A density matrix *ρ* for the subsystem can be obtained from the density matrix *ρ*full for the full system by the partial trace over the states of the environment: $$\label{eq:rho\_eff-qi} \rho = {\mathrm{Tr}}\_{\rm environment} \left(\rho\_{\mathrm{full}}\right).$$ The density matrix for the full system evolves according to the von Neumann equation in Eq. . Given the initial density matrix *ρ*(*t* = 0) for the subsystem and a specified initial state of the environment, the von Neumann equation can in principle be solved to determine *ρ*(*t*) at future times. It is possible in principle to construct a self-contained differential equation for *ρ*(*t*), but its evolution is non-Markovian. The time derivative of *ρ*(*t*) is determined by the present density matrix and by its history from time 0 to *t*. The previous history is needed to take into account the flow of information between the subsystem and the environment. There are situations in which the time evolution of the subsystem can be described approximately by a self-contained Markovian differential equation. The time during which the subsystem is observed must be much larger than the time scale for correlations between the subsystem and the environment. We must also restrict our consideration to the low-frequency behavior of the subsystem, which can be accomplished by smoothing out over times larger than the correlation time scale. The density matrix *ρ* for the subsystem necessarily satisfies the three basic properties listed before Eq. : it is Hermitian, it is positive, and it can be normalized. It would be desirable for its time evolution to also be in accord with the three basic properties listed after Eq. : it should be linear in *ρ*, it should preserve the trace of *ρ*, and it should be Markovian. In 1976, Lindblad and, independently, Gorini, Kossakowski, and Sudarshan showed that these conditions together with one additional technical condition require the time evolution equation to have the form $$i \hbar \frac{d\ }{dt} \rho = [ H, \rho] - \frac{i}{2} \sum\_n \left( L\_n^\dagger L\_n \rho + \rho L\_n^\dagger L\_n -2 L\_n \rho L\_n^\dagger \right), \label{evol:Lindblad}$$ where *H* is a Hermitian operator and the *L**n*’s are an additional set of operators called *Lindblad operators*. The evolution equation in Eq.  is called the *Lindblad equation*. The linear operator acting on *ρ* defined by the right side of Eq.  is called the *Lindbladian*. Thus the Lindbladian is the generator of the time evolution of the density matrix. The additional technical condition required to derive the Lindblad equation is that the linear operator that determines the time evolution of *ρ* should be completely positive. For any complex vector space, this linear operator has a natural extension to an operator acting on the tensor product of the quantum-state spaces for the subsystem and the other complex vector space. Complete positivity requires the extension of the operator to be positive in the larger space. An accessible derivation of the Lindblad equation can be found in lecture notes on quantum information and computation by John Preskill. The Lindblad equation in Eq.  can be expressed in the form $$i \hbar \frac{d\ }{dt} \rho = [ H, \rho] - i \left\{K,\rho\right\} + i \sum\_n L\_n \rho L\_n^\dagger, \label{evol:Lindblad-2}$$ where the Hermitian operator *K* is $$K =\frac12\sum\_n L\_n^\dagger L\_n. \label{eq:K-Ln}$$ Comparison with the naive evolution equation in Eq.  reveals that *H* − *i**K* can be interpreted as a non-Hermitian effective Hamiltonian for the subsystem. The additional *Lindblad term* in Eq.  is necessary to preserve the trace of *ρ*. Effective Theory from Deeply Inelastic Reactions ================================================ In this section, we review how effective field theories for cold atoms are constructed, with particular focus on the treatment of deeply inelastic reactions. We first discuss the construction of the effective Lagrangian, and how integrating deeply inelastic reaction products out of a theory results in new local operators in the effective Lagrangian density. We then discuss how such a formalism can be used to study multi-particle states by means of an effective density matrix obtained by tracing over states that include deeply inelastic reaction products. Locality -------- [fig:2to2-elastic] An effective field theory is obtained by removing (“integrating out”) states from a field theory. The simplest applications involve removing particles with much higher energies. We consider atoms with mass *M* in a specific hyperfine state labelled *ψ* and with kinetic energies much smaller than the splitting Δ between hyperfine states. As illustrated in Fig. [fig:2to2-elastic], there is a contribution to the elastic scattering amplitude from scattering into a virtual pair of particles in a higher hyperfine state labelled *ϕ*, which then rescatter into atoms in the original hyperfine state. The virtual atoms are off their energy shells by large amounts of order Δ. By the time-energy uncertainty principle, the virtual states have short lifetimes of order ℏ/Δ, during which the *ϕ* atoms can propagate only over short distances much smaller than the wavelengths of the *ψ* atoms. The effects of the higher hyperfine states can be modeled in the effective field theory by a contact interaction of the form *ψ*†*ψ*†*ψ**ψ*, where *ψ* is the quantum field that annihilates an atom in the specific hyperfine state *ψ*. (We use the same symbol for the label of the hyperfine state and for its quantum field.) The 2 → 2 contact interaction is the leading term in a series of local operators in the effective Lagrangian that mimic the effects of the higher hyperfine states to arbitrary precision. This series is obtained at tree-level by Taylor expanding the scattering amplitude in the momentum transfer and then writing down terms in the effective Lagrangian density that reproduce this expansion: *δ*L = *g* *ψ*†*ψ*†*ψ**ψ* + *h* ∇(*ψ*†*ψ*) ⋅ ∇(*ψ*†*ψ*) + …,  where the coefficients *g*, *h*, … are real valued. The individual operators are renormalized when loop corrections are included, but the effective theory is still capable of reproducing the original theory to arbitrary precision provided operators with a sufficient number of gradients are retained. Operators with *n* gradients correct the theory at order (*q*/*P*)*n*, where *q* is the momentum transfer and *P* = (*M*Δ)1/2 is the momentum scale associated with the energy splitting Δ between the hyperfine states. We have integrated the higher hyperfine states out of the theory by replacing them by the interactions terms in Eq. . [fig:2to2-inelastic] A less obvious opportunity to remove states from the theory arises when atoms in the hyperfine state *ψ* scatter inelastically into atoms in a lower hyperfine state *ϕ*. Such a scattering process is an example of a *deeply inelastic reaction*, in which the difference in the rest energies of the initial-state and final-state atoms is converted into large kinetic energies of the final-state atoms. The rate for the deeply inelastic scattering process *ψ**ψ* → *ϕ**ϕ* is related by the optical theorem to the imaginary part of the amplitude for *ψ**ψ* → *ϕ**ϕ* → *ψ**ψ* (see Fig. [fig:2to2-inelastic]). That amplitude is analytic in the momentum transfer $\bm{q}$, because the nearest non-analyticity in the amplitude is at the threshold energy for the *ϕ**ϕ* state, which is lower by an amount of order Δ. As a result, we can Taylor expand the amplitude in powers of $\bm{q}/(M \Delta)^{1/2}$ and reproduce this expansion by terms in the effective Lagrangian as in Eq. . In this case, the coefficients *g*, *h*, … are complex valued. We are particularly interested in their imaginary parts, which come from the deeply inelastic scattering reaction. These terms mimic the effects of the lower hyperfine states in the effective theory. It may seem nonintuitive that the effects of inelastic scattering to on-shell particles can be mimicked by local operators, because once an atom in the lower hyperfine state *ϕ* is created, it can propagate over long distances. In fact, as far as low-energy atoms in the hyperfine state *ψ* are concerned, the inelastic scattering process is quite local. This is because the reaction region in which the low-energy atoms disappear can be reconstructed by tracking the inelastic scattering products back to their origin. The inelastic reaction products have high momenta of order (*m*Δ)1/2 and therefore short wavelengths of order ℏ/(*m*Δ)1/2, so they can locate the decay with a resolution of order ℏ/(*m*Δ)1/2. Therefore the reaction is localized over a region of size ℏ/(*m*Δ)1/2, which is very small compared to the wavelengths of the incoming low-energy atoms. Effective density matrix for decaying atoms ------------------------------------------- Some aspects of the effective field theory obtained by integrating out deeply inelastic reactions are most easily understood by considering a deeply inelastic 1-body reaction, namely the decay of an atom. The atom could be a metastable excited state, such as the 23*S*1 state of 4He, which was one of the earliest atoms for which Bose-Einstein condensation was achieved. The radiative decay of the atom into its ground state is a deeply inelastic reaction. For simplicity, we assume the ground-state atoms interact weakly with the metastable atoms and that, once they are produced, the ground-state atoms quickly escape from the system. The Hamiltonian *H* − *i**K* for the effective theory is the sum of the Hermitian Hamiltonian for the metastable atoms and an anti-Hermitian piece  − *i**K*. The Hermitian operator *K* can be expressed as $\frac12 \Gamma\, N$, where Γ is the decay rate of the metastable atom and *N* is the number operator that counts the metastable atoms: $$N = \int d^3{\bm{r}}\, \psi^\dagger \psi. \label{eq:Nmuhat}$$ The quantum mechanics of such a theory is unconventional. The effective Hamiltonian *H* − *i**K* commutes with the number operator *N*, so the time evolution generated by *H* − *i**K* does *not* change the number of atoms in a state. Instead the effects of atom decay must be taken into account by transferring the probability carried by the *n* − atom component to states containing fewer atoms. An *n* − atom state evolves into a superposition of states with *n* and fewer atoms. The norm of a state containing *n* atoms decreases to zero exponentially with the decay rate *n*Γ. This is the correct result: if the probability for one atom to still be an atom after time *t* is exp( − Γ*t*), the probability for *n* atoms to still be *n* atoms is exp( − *n*Γ*t*). We typically want more information about where the probability goes. An *n* − atom state evolves into a superposition of states with *n*, *n* − 1, *n* − 2…atoms. In the full theory, the superposition can be described by the density matrix *ρ*full. In the effective theory, we describe the superposition of states with different atom numbers using an *effective density matrix* *ρ*, from which we remove ground-state atoms by tracing over states that include them: *ρ* ≡ Trdeep(*ρ*full). The subscript “deep” indicates that the partial trace is over states that include deeply inelastic reaction products. This effective density matrix, like *ρ*full, is Hermitian, positive, and has unit trace: Tr(*ρ*) = 1. These properties follow from the definition in Eq. ([eq:rho-eff-defn]). Naively we might expect the time evolution equation for *ρ* to be Eq. , which reduces to $$i\frac{d\ }{dt}\rho = \left[H, \rho\right] - \frac{i}{2} \Gamma \left\{ N, \rho\right\}.$$ However this equation does *not* conserve the total probability Tr(*ρ*). The correct evolution equation is the Lindblad equation: $$i\frac{d}{dt}\rho = \left[H, \rho\right] - \frac{i}{2} \Gamma \left\{ N, \rho\right\} + i \Gamma \int d^3{\bm{r}}\,\psi({\bm{r}})\,\rho\,\psi^\dagger({\bm{r}}). \label{eq:muon-evol-eq}$$ Time evolution preserves the trace of *ρ*, because the trace of the additional Lindblad term cancels the trace of the anticommutator term in Eq. . The role of the Lindblad term is easily understood if we use the evolution equation to calculate the rate of change of the probability *P**n*(*t*) for finding *n* metastable atoms in the system. This probability can be expressed as the partial trace of *ρ* over states ∣*X**n*⟩ that contain *n* atoms: *P**n*(*t*) ≡ ∑*X**n*⟨*X**n*∣*ρ*(*t*)∣*X**n*⟩. The partial trace of the evolution equation in Eq.  gives $$\frac{d\ }{dt} P\_n(t)= - n\Gamma P\_n(t) + (n+1)\Gamma P\_{n+1}(t). \label{eq:dPn(t)}$$ The partial trace of the commutator term in Eq.  is 0, because the operator *H* does not change the atom number. This allows a complete set of *n* − atom states to be inserted between *H* and *ρ*. The partial trace of the anticommutator term in Eq.  gives  − *n*Γ*P**n*, which is the rate at which probability leaves the *n* − atom sector because of the decay of an atom. The partial trace of the Lindblad term in Eq.  gives  + (*n* + 1)Γ*P**n* + 1, which is the rate at which probability enters the *n* − atom sector from the decay of an atom in the (*n* + 1) − atom sector. This expression can be obtained by inserting complete sets of (*n* + 1) − atom states on the left and right of *ρ* in the Lindblad term in Eq. ([eq:muon-evol-eq]) and then rearranging the factors to give $$\begin{aligned} i \Gamma \sum\_{X\_{n+1}} \sum\_{X'\_{n+1}}\, \langle X'\_{n+1}| \rho |X\_{n+1} \rangle \langle X\_{n+1} | N |X'\_{n+1} \rangle. \label{eq:drho-lindblad}\end{aligned}$$ The number operator *N*, which is given in Eq. , was obtained by replacing the sum of ∣*X**n*⟩⟨*X**n*∣ between $ \psi^\dagger({\bm{r}})$ and $\psi({\bm{r}})$ by the identity operator. The number operator in Eq.  can be replaced by its eigenvalue *n* + 1. The Lindblad term in Eq. ([eq:muon-evol-eq]) is essential to get the correct physical behavior for the time evolution of the total number of metastable atoms. The expectation value of the atom number is $$N(t) \equiv {\rm Tr} \big( N \rho(t)\big) = \sum\_n n P\_n(t). \label{eq:Nmu-t}$$ We can use Eq.  to determine the time dependence of *N*(*t*): $$\frac{d\ }{dt} N(t)= -\Gamma \Big[ \sum\_n n^2 P\_n(t) - \sum\_n n(n+1) P\_{n+1}(t) \Big].$$ After shifting the index of the second term on the right side, we obtain $$\frac{d\ }{dt} N(t)= -\Gamma N.$$ This implies that *N*(*t*) = *N*0exp( − Γ*t*), as expected for a collection of atoms with decay rate Γ. Effective density matrix from deeply inelastic reactions -------------------------------------------------------- We now consider a more general effective field theory obtained by integrating out the reaction products from deeply inelastic reactions. By the arguments presented in Sec. [sec:Locality], the effects of the deeply inelastic reactions can be reproduced by local anti-Hermitian terms in the effective Hamiltonian. The effective Hamiltonian can be expressed in the form *H*eff = *H* − *i**K*, where *H* and *K* are both Hermitian operators. The operator *K* can be expressed in the form *K* = ∑*i**γ**i*∫*d*3*r* Φ*i*†Φ*i*,  where the local operators $\Phi\_i(\bm{r})$ annihilate low-energy atoms in configurations corresponding to inelastic reactions. The real constants *γ**i* can be determined by matching low-energy observables in the full theory and in the effective theory. The Hamiltonian *H* − *i**K* commutes with the number operator *N* for low-energy atoms, so the time evolution generated by *H* − *i**K* does *not* change the number of low-energy atoms in a state. Instead the effects of the deeply inelastic reactions must be taken into account by transferring the probability carried by the *n* − atom component to states containing fewer atoms. The norm of a state containing *n* atoms decreases to zero at a rate that increases with *n*. An *n* − atom state evolves into a superposition of states with *n* and fewer atoms. We describe this superposition of states using an *effective density matrix* *ρ* defined by tracing over the deeply inelastic decay products as in Eq. . More precisely, we trace out any state containing an atom with momentum exceeding the ultraviolet cutoff of the effective field theory. The effective density matrix, like the underlying density matrix, is Hermitian, positive, and it has unit trace: Tr(*ρ*) = 1. These properties follow from the definition in Eq. ([eq:rho-eff-defn]). As is familiar in the field of quantum information, the definition of the effective density matrix *ρ* as the partial trace in Eq. ([eq:rho-eff-defn]) implies that *ρ*(*t*) satisfies a self-contained time evolution equation that is completely determined by *ρ* and by the full density matrix *ρ*full(0) at some initial time. This evolution equation is however non-Markovian: the time derivative (*d*/*d**t*)*ρ*(*t*) is determined not only by *ρ*(*t*) but also by *ρ*(*t*ʹ) at the earlier times 0 < *t*ʹ < *t*. This dependence on the history takes into account the effects of high-momentum atoms that are produced by inelastic reactions and subsequently interact with the low-energy atoms. However the density matrix *ρ*(*t*) for the effective theory does not need to reproduce the full density matrix on short time scales of order $\hbar /E\_{\rm deep}$. It is sufficient to reproduce its effects after time averaging over time intervals much larger than $\hbar /E\_{\rm deep}$. This time average removes transients associated with high-momentum atoms that cannot be described accurately within the effective field theory. It is only after removing these transients that it is possible to have a density matrix *ρ*(*t*) that is Markovian. Thus the proper definition of the effective density matrix requires that the partial trace in Eq. ([eq:rho-eff-defn]) be supplemented by an appropriate time average that makes *ρ*(*t*) Markovian. Naively we might expect the time evolution equation for *ρ* to be Eq. , where *K* is the Hermitian operator in Eq. . However, this equation does *not* conserve the total probability Tr(*ρ*). The correct evolution equation is the Lindblad equation: $$i \hbar \frac{d\ }{dt} \rho = \left[H, \rho\right]- i \left\{K,\rho\right\} +2i \sum\_i \gamma\_i \int \!\!d^3r\, \Phi\_i \rho \Phi\_i^\dagger. \label{eq:LindbladPhi}$$ As in Eq. , the trace of the additional Lindblad term cancels the trace of the anticommutator term. The local operators $\Phi\_i(\bm{r})$, which annihilate configurations of low-energy atoms that can undergo a deeply inelastic reaction, are the *Lindblad operators*. In quantum information theory, a Lindblad operator is sometimes called a *quantum jump operator*. In the low-energy effective theory, a Lindblad operator produces a jump to a state with a different low-energy atom number. Given the form of *K* in Eq. , the Lindblad equation is a necessary consequence of our physical requirements on the effective density matrix: *ρ* is Hermitian, positive, has unit trace, and it is Markovian. An explicit diagrammatic illustration of the emergence of the Lindblad equation from tracing over states that include deeply inelastic reaction products is given in Appendix [sec:Deductive]. In order to obtain the Lindblad equation in Eq. , it is essential that *K* have the structure shown in Eq. ([eq:K-Phi]). A more generic form for this operator is $$K = \int \!\!d^3 r \sum\_{ij} c\_{ij} \Phi\_i^\dagger({\bm{r}}) \Phi\_j({\bm{r}}), \label{eq:K-structure-gen}$$ where *c**i**j* is a positive Hermitian matrix. It is a Hermitian matrix, because *K* is Hermitian by definition. It is guaranteed to be a positive matrix by the optical theorem, which implies that the *T* matrix satisfies  − *i*(*T* − *T*†) = *T*†*T*. Matrix elements of *K* reproduce the anti-Hermitian parts of scattering amplitudes ⟨*b*∣*T*∣*a*⟩, where ∣*a*⟩ and ∣*b*⟩ are states in the effective theory that are connected by intermediate deeply inelastic reaction channels. The Hermitian parts of these amplitudes are reproduced by *H*. The optical theorem guarantees the positivity of the anti-Hermitian parts. The double sum in Eq.  is easily rewritten in the canonical form in Eq.  by expanding *c**i**j* in terms of outer products of its eigenvectors. Atom Loss from Deeply Inelastic Reactions ========================================= In this section, we use the Lindblad equation to determine the time evolution of the mean number of low-energy atoms, which can be changed by deeply inelastic reactions. We also derive universal relations for the inelastic two-atom loss rate for fermions with two spin states and for the inelastic three-atom loss rate for identical bosons. Mean Atom Number ---------------- We consider an open effective field theory with effective Hamiltonian $H\_{\rm eff}=H-iK$, where *H* and *K* are Hermitian. The Hermitian operator *K* in the anti-Hermitian part can be expressed in terms of local Lindblad operators $\Phi\_i(\bm{r})$ as in Eq. . The time evolution of the density matrix *ρ* for the effective field theory is given by the Lindblad equation in Eq. . We reconsider an atom number operator *N* that is conserved by the reactions in the effective field theory but is violated by deeply inelastic reactions in the full theory. Since Lindbladian time evolution preserves the trace of *ρ*, the system average of *N* can be expressed as $$\langle N \rangle = {\rm Tr}(\rho N). \label{N-ave}$$ The statement that *N* is conserved by the reactions in the effective theory can be expressed as the commutation relation $$\big[ N,H\big ]=0. \label{[N,H]}$$ The inelastic reaction responsible for the Φ*i*†Φ*i* term in *K* changes the atom number *N* by some integer *k**i*. This statement can be expressed as a commutation relation: $$\big[ N,\Phi\_i(\bm{r}) \big]=-k\_i \Phi\_i(\bm{r}). \label{[N,Phi]}$$ We now consider the time evolution of ⟨*N*⟩. By taking the time derivative of ${\rm Tr}(\rho N)$, inserting the time evolution equation in Eq. , and rearranging the terms, we get $$\begin{aligned} \frac{d\ }{dt} \langle N \rangle = - \frac{i}{\hbar} {\rm Tr}(\rho [N,H]) - \frac{1}{\hbar} \sum\_i \gamma\_i \int \!\! d^3r \, {\rm Tr} \big( \rho (N\Phi\_i^\dagger \Phi\_i +\Phi\_i^\dagger \Phi\_i N -2 \Phi\_i^\dagger N \Phi\_i) \big). \label{Nevol:Lindblad}\end{aligned}$$ The first term on the right side vanishes, because *N* commutes with *H*. By using the commutation relation in Eq.  in the second term on the right side, the rate of change of atom number reduces to $$\begin{aligned} \frac{d\ }{dt} \langle N \rangle &=& -\frac{2}{\hbar} \sum\_i k\_i \gamma\_i \int \!\! d^3r \, \langle \Phi\_i^\dagger \Phi\_i \rangle. \label{Nevol:Phi}\end{aligned}$$ The rate of change in ⟨*N*⟩ has been expressed as a sum of expectation values of the same terms that appear in the expression for *K* in Eq.  but multiplied by integers *k**i*. Three-Body Recombination Rate ----------------------------- The atoms that are studied in cold atom experiments can form deeply bound diatomic molecules whose binding energies are orders of magnitude larger than the energy scales of the cold trapped atoms. Three-body recombination, in which three low-energy atoms collide and two of them bind into a molecule, is a deeply inelastic reaction if the molecule is deeply bound. The momenta of the outgoing molecule and the recoiling atom are often large enough for them to escape from the trapping potential. In a locally homogeneous gas of atoms in thermal equilibrium, the rate of decrease in the local number density *n**A* of low-energy atoms can be expressed as a rate equation: (*d*/*d**t*)*n**A* =  − 3*K*3(*T*)*n**A*3,  where *K*3(*T*) is the 3-body recombination event rate constant, which depends on the temperature *T*. Bosonic atoms that are all in the same hyperfine state can be described by an effective field theory with a quantum field *ψ*. At the low temperatures in cold atom experiments, the only relevant interaction between the atoms is S-wave scattering, whose strength is determined by the scattering length *a*. The interaction term in the Hamiltonian is $$H\_{\rm int} = (2 \pi \hbar^2 a/m) \int d^3 r\, \psi^{\dagger 2}\psi^2. \label{Hint-psi}$$ This interaction term can be treated using perturbation theory, provided the scattering length *a* is not much larger than the range of the interactions between the atoms, which for ultracold atoms is the van der Waals length scale. We now consider a deeply inelastic three-body recombination reaction that produces a diatomic molecule with binding energy much larger than the temperature *T*. Its effect on low-energy atoms can be described in the effective field theory by adding to the effective Hamiltonian the anti-Hermitian term  − *i**K*, where the local Hermitian operator *K* is *K* = (ℏ*K*3/12)∫*d*3*r* (*ψ*3)†*ψ*3,  and where *K*3 is a constant that does not depend on the temperature. The expression for *K* in Eq.  has a single term with Lindblad operator Φ*i* = *ψ*3 and coefficient *γ**i* = ℏ*K*3/12. The expression for the atom loss rate in Eq.  thus has a single term with integer *k**i* = 3. In a locally homogeneous system of atoms, the local version of the loss rate in Eq.  is $$\begin{aligned} \frac{d\ }{dt} \langle \psi^\dagger \psi \rangle &=& - \frac{K\_3}{2} \,\langle (\psi^3)^\dagger \psi^3 \rangle. \label{Nevol:Phi3}\end{aligned}$$ The expectation value of *ψ*†*ψ* is the local number density: *n**A* = ⟨*ψ*†*ψ*⟩. In a thermal gas, the expectation value of (*ψ*3)†*ψ*3 is 6*n**A*3. Comparing with Eq. , we see that the constant *K*3 in Eq.  is the *T* → 0 limit of the 3-body recombination event rate constant *K*3(*T*). Its temperature dependence is negligible, because *k**T* is much smaller than the binding energy of the molecule. In a Bose-Einstein condensate at zero temperature, the expectation value of (*ψ*3)†*ψ*3 can be expressed as ∣⟨*ψ*⟩3∣2 = *n**A*3, where ⟨*ψ*⟩ is the mean field. The atom loss rate is given by Eq.  except that the prefactor on the right side is smaller than for a thermal gas by a factor of 1/6. Inelastic Two-Atom Loss Rate ---------------------------- In cold atom experiments, the scattering length *a* can be controlled experimentally by tuning an external magnetic field to near a Feshbach resonance. If *a* is much larger than the range of the interactions between the atoms, which for ultracold atoms is the van der Waals length scale, the interactions must be treated nonperturbatively. Fermionic atoms in two hyperfine states 1 and 2 with scattering length *a* can be described by an effective field theory with quantum fields *ψ*1 and *ψ*2. The interaction term in the Hamiltonian can be expressed as $$H\_{\rm int} = \frac{g\_0}{m}\int d^3 r\, \big(\psi\_2 \psi\_1\big)^\dagger \big( \psi\_2 \psi\_1 \big), \label{Hint-psi0}$$ where *g*0 is the bare coupling constant. If the ultraviolet cutoff Λ is imposed on the momenta of virtual atoms, the bare coupling constant is $$g\_0 = \frac{4 \pi}{1/a-(2/\pi)\Lambda}. \label{g0}$$ If the pair of atoms in the hyperfine states 1 and 2 has a *spin-relaxation* scattering channel into a pair atoms in lower hyperfine states 3 and 4, the optical theorem implies that the scattering length *a* is complex with a small negative imaginary part. The energy release from the spin-relaxation reaction is much larger than the energy scales for ultracold atoms, so this is a deeply inelastic reaction. The high-momentum atoms in the hyperfine states 3 and 4 can be integrated out to get an effective field theory for low-energy atoms in the hyperfine states 1 and 2 only. The interaction term in the Hamiltonian is still that in Eq. , except that now $H\_{\rm int}$ has an anti-Hermitian part because the complex scattering length *a* makes the bare coupling constant *g*0 in Eq.  complex. Determining the loss rate of atoms due to the deeply inelastic spin-relaxation reaction is not trivial, because the large scattering length makes the problem nonperturbative. However, an exact result for the inelastic two-atom loss rate in any state has been proposed by Shina Tan. The rates of decrease in the numbers *N*1 and *N*2 of atoms in the two hyperfine states are given by the universal relation $$\frac{d\ }{dt} \langle N\_1 \rangle= \frac{d\ }{dt} \langle N\_2 \rangle = - \frac{\hbar}{2 \pi m}{\rm Im} \big( 1/a \big)\, C, \label{uni2loss}$$ where *C* is a property of the system called the *contact*. The coefficient of *C*, which is proportional to the imaginary part of 1/*a*, is determined by 2-body physics. The contact was first introduced by Shina Tan in 2005. It is the thermodynamic variable conjugate to 1/*a*. The contact *C* is an extensive variable: it is the integral over space of the contact density ${\cal C}$, which measures the number of 1-2 pairs per (volume)4/3. (The unusual power of volume arises from an anomalous dimension associated with a non-Gaussian renormalization group fixed point.) Shina Tan derived other universal relations involving the contact that hold for any state of the system. They include the “adiabatic relation” that specifies how the free energy of a system is affected by a change in the scattering length : $$\frac{d~~~~}{d(1/a)}F = - \frac{\hbar^2}{4 \pi m} C. \label{adiabatic}$$ Many other universal relations involving the contact were subsequently derived. The universal relation for the inelastic two-atom loss rate in Eq. ([uni2loss]) can be derived by expressing the Hermitian operator *K* in the effective Hamiltonian *H* − *i**K* as $$K = \frac{\hbar^2}{4\pi m}{\rm Im} \big( 1/a \big) \int\! d^3r\, \Phi^\dagger \Phi, \label{Kint-psi}$$ where the local Lindblad operator is Φ = *g*0*ψ*2*ψ*1. The operator product *ψ*2*ψ*1 is singular: its matrix elements diverge linearly with the cutoff Λ. The multiplication by *g*0, which according to Eq.  decreases as 1/Λ for large Λ, makes Φ a finite operator whose matrix elements have well-behaved limits as Λ → ∞. In a spin-relaxation scattering reaction, the decreases in the atom numbers *N*1 and *N*2 are both *k**i* = 1. According to Eq. , the rates of decrease in their system averages are therefore $$\begin{aligned} \frac{d\ }{dt} \langle N\_1 \rangle= \frac{d\ }{dt} \langle N\_2 \rangle = - \frac{\hbar}{2 \pi m}{\rm Im} \big( 1/a \big) \int\! d^3r\, \left\langle \Phi^\dagger \Phi \right\rangle. \label{N12evol:psi}\end{aligned}$$ The universal loss rate in Eq. ([uni2loss]) then follows from the identification of the contact density as $$\begin{aligned} {\cal C} = \left\langle\Phi^\dagger \Phi \right\rangle. \label{contactdensity}\end{aligned}$$ This identification can be verified by using the adiabatic relation in Eq. . The only dependence of the Hamiltonian on the scattering length *a* is in the interaction term in the Hamiltonian density ${\cal H}\_{\rm int}$ in Eq. . If the tiny imaginary part of the scattering length is neglected, the derivative of the Hermitian part *H* of the effective Hamiltonian with respect to 1/*a* is $$\frac{d~~~~}{d(1/a)}H = - \frac{\hbar^2}{4 \pi m} \, \int\! d^3r\, \Phi^\dagger \Phi. \label{dHint/da}$$ By the Feynman-Hellman theorem, the system average of the left side of this equation is the left side of the adiabatic relation in Eq. . With the identification of the contact density in Eq. , the system average of the right side of Eq.  is the right side of the adiabatic relation in Eq. . This completes the derivation of the inelastic two-atom loss rate in Eq. ([uni2loss]) and the adiabatic relation in Eq. . In Ref. , the authors presented an incorrect derivation of the universal relation for the inelastic two-atom loss rate that suggests that there are additional terms on the right side of Eq. . They assumed the density matrix *ρ* satisfies the naive time evolution equation in Eq. . This equation implies that the number of atoms is conserved, but that the probability ${\rm Tr}(\rho)$ decreases with time. In Ref. , the authors assumed that the mean atom number ⟨*N*⟩ was given by ${\rm Tr}(\rho N)/{\rm Tr}(\rho)$. The resulting expression for the time derivative of ⟨*N*⟩ can be expressed as a double integral over space of an expression involving the product of the number density operator and the Hamiltonian density operator. Upon applying the operator product expansion to the product of these operators, they obtained an expansion for (*d*/*d**t*)⟨*N*⟩ in terms of increasingly higher-dimension operators with the leading term given by Eq. . The derivation of the universal relation using the Lindblad equation makes it clear that there are no additional terms beyond the contact term in Eq. ([uni2loss]). Inelastic Three-Atom Loss Rate ------------------------------ Bosonic atoms that are all in the same hyperfine state can be described by a nonrelativistic quantum field theory with a quantum field *ψ*. If the scattering length *a* is much larger than the range of interactions of the atoms, two-atom interactions and three-atom interactions must both be treated nonperturbatively. The resulting quantum field theory is characterized by a discrete scaling symmetry in which lengths and time are multiplied by integer multiples of *λ*0 and *λ*02, respectively, where *λ*0 = *e**π*/*s*0 and *s*0 ≈ 1.00624 is a universal number. Three-atom interactions are determined by a parameter Λ\* with dimensions of momentum, upon which physical observables can only depend log-periodically. If *a* is positive, two-atom interactions produce a weakly bound diatomic molecule called the *shallow dimer*. Regardless of the sign of *a*, three-atom interactions produce a sequence of weakly bound triatomic molecules called *Efimov trimers*. In Ref. , several universal relations were derived that hold for any system consisting of identical bosons. The universal relations relate various properties of the system to two extensive thermodynamic variables: the 2-body contact *C*2, which is the analog of the contact for fermions with two spin states, and the 3-body contact *C*3. The atoms that are studied in cold atom experiments form deeply bound diatomic molecules (*deep dimers*) whose binding energies are orders of magnitude larger than the energy scales of the cold trapped atoms. The deep dimers provide various pathways for deeply inelastic reactions that result in the loss of three atoms. They include (a) three-body recombination, in which three low-energy atoms collide and two of them bind into a deep dimer, (b) dimer relaxation, in which a shallow dimer and an atom scatter into a deep dimer and an atom, and (c) Efimov trimer decay into a deep dimer and an atom. In Ref. , a universal relation for the three-atom inelastic loss rate was presented but not derived. In this section, we use the Lindblad equation to sketch the derivation of this relation. To derive the universal relation for the three-atom inelastic loss rate, we follow the formalism laid out in Ref. . In the absence of deep dimers, the interaction term in the Hamiltonian can be written as $$H\_{\rm int} = \frac{g\_2}{4m} \, \int\! d^3r\, (\psi^2)^\dagger\psi^2 +\frac{g\_3}{36m} \, \int\! d^3r\, (\psi^3)^\dagger\psi^3. \label{Hint-psi-3b}$$ Since this effective field theory has ultraviolet divergences, it must be regularized. The bare coupling constants *g*2 and *g*3 depend on an ultraviolet momentum cutoff Λ: $$\begin{aligned} g\_2(\Lambda) &=& \frac{8 \pi}{1/a-(2/\pi)\Lambda}\,, \label{eq:g2} \\ g\_3(\Lambda) &=& h\_0 \,\frac{
arxiv_0000033
Finite presentations for stated skein algebras and lattice gauge field theory ============================================================================= [section] [theorem]Main Theorem [theorem]Proposition [theorem]Corollary [theorem]Corollaire [theorem]Lemma [theorem]Notations [theorem]Convention [theorem]Problem [theorem]Definition [theorem]Theorem-Definition [theorem]Remark [theorem]Conjecture [theorem]Example We provide finite presentations for stated skein algebras and deduce that those algebras are Koszul and that they are isomorphic to the quantum moduli algebras appearing in lattice gauge field theory, generalizing previous results of Bullock, Frohman, Kania-Bartoszynska and Faitg. Introduction ============ #### **Stated skein algebras and lattice gauge field theory** A *punctured surface* is a pair **\Sigma** = (Σ, P), where Σ is a compact oriented surface and P is a (possibly empty) finite subset of Σ which intersects non-trivially each boundary component. We write ΣP :  = Σ \ P. The set ∂Σ \ P consists of a disjoint union of open arcs which we call *boundary arcs*. **Warning:** In this paper, the punctured surface **\Sigma** will be called open if the surface Σ has non empty boundary and closed otherwise. This convention differs from the traditional one, where some authors refer to open surface a punctured surface **\Sigma** = (Σ, P) with Σ closed and P ≠ ∅ (in which case ΣP is not closed). The *Kauffman-bracket skein algebras* were introduced by Bullock and Turaev as a tool to study the SU(2) Witten-Reshetikhin-Turaev topological quantum field theories (). They are associative unitary algebras S*ω*(**\Sigma**) indexed by a closed punctured surface **\Sigma** and an invertible element $\omega \in \mathds{k}^{\times}$ in some commutative unital ring $\mathds{k}$. Bonahon-Wong and Lê generalized the notion of Kauffman-bracket skein algebras to open punctured surfaces, where in addition to closed curves the algebras are generated by arcs whose endpoints are endowed with a sign  ±  (a state). The motivation for the introduction of these so-called *stated skein algebras* is their good behaviour for the operation of gluing two boundary arcs together. This property permitted the authors of to define an embedding of the skein algebra into a quantum torus, named the quantum trace, and offers new tools to study the representation theory of skein algebras. Except for genus 0 and 1 surfaces (), no finite presentation for the Kauffman-bracket skein algebras is known, though a conjecture in that direction was formulated in. However, it is well-known that they are finitely generated (). The corresponding problem for stated skein algebras of open punctured surfaces is easier. Finite presentations of stated skein algebras were given for a disc with two punctures on its boundary (i.e. for the bigon) and for the disc with three punctures on its boundary (i.e. for the triangle) in, for the disc with two punctures on its boundary and one inner puncture in and for any connected punctured surface having exactly one boundary component, one puncture on the boundary and eventual inner punctures, in. The first purpose of this paper is to provide explicit finite presentations for stated skein algebras of an arbitrary connected open punctured surface **\Sigma**. Let us briefly sketch their construction, we refer to Section [secpresentation] for details. The finite presentations we will define depend on the choice of a finite presentation P of some groupoid Π1(ΣP, V). In brief, for each boundary arc *a* of **\Sigma**, choose a point *v**a* ∈ *a* and let V be the set of such points. The groupoid Π1(ΣP, V) is the full subcategory of the fundamental groupoid of ΣP whose set of objects is V. A finite presentation P = (G, RL) for Π1(ΣP, V) will consist in a finite set G of generating paths relating points of V and a finite set RL of relations among those paths which satisfy some axioms (see Section [secpresentation] for details). For instance for the triangle T (the disc with three punctures on its boundary), the groupoid Π1(T, V) admits the presentation with generators G = {*α*, *β*, *γ*} drawn in Figure [figtriangle] and the unique relation *α**β**γ* = 1. [figtriangle] A path *α* ∈ G can be seen as an arc in ΣP and, after choosing some states *ɛ*, *ɛ*ʹ ∈ { − ,  + } for its endpoints, we get an element *α**ɛ**ɛ*ʹ ∈ S*ω*(**\Sigma**) in the stated skein algebra. We denote by AG ⊂ S*ω*(**\Sigma**) the (finite) set of such elements. It was proved in that AG generates S*ω*(**\Sigma**) and its elements will be the generators of our presentations. Concerning the relations, first for each *α* ∈ G, one has a *q-determinant* relation between the elements *α**ɛ**ɛ*ʹ. For each pair (*α*, *β*) ∈ G2 we will associate a finite set of *arcs exchange relations* permitting to express an element of the form *α**ɛ**ɛ*ʹ*β**μ**μ*ʹ ∈ S*ω*(**\Sigma**) as a linear combination of elements of the form *β**a**b**α**c**d*. Eventually, to each relation *R* ∈ RL in the finite presentation P, we will associate a finite set of so-called *trivial loops relations*. [theorem1] Let **\Sigma** a connected open punctured surface and P a finite presentation of Π1(ΣP, V). Then the stated skein algebra S*ω*(**\Sigma**) is presented by the set of generators AG and by the q-determinant, arcs exchange and trivial loops relations. For every open punctured surface, we can choose finite presentation P of Π1(ΣP, V) such that the set of relations is empty (for instance for the triangle of Figure [figtriangle], one might choose the presentation with generators G = {*α*, *β*} and no relation). In this case, the presentation of S*ω*(**\Sigma**) is quadratic inhomogeneous and, by using the Diamond Lemma, we prove the [theorem2] For **\Sigma** a connected open punctured surface, the quadratic inhomogeneous algebra S*ω*(**\Sigma**) is Koszul and admits a Poincaré-Birkhoff-Witt basis. Theorem [theorem2] implies that S*ω*(**\Sigma**) has an explicit minimal projective resolution (the so-called Koszul resolution) which permits to compute effectively its cohomology (see for details). Let (Γ, *c*) be a ciliated graph, that is a finite graph with the data for each vertex of a linear ordering of its adjacent half-edges. Inspired by Fock and Rosly’s original work in on the Poisson structure of character varieties, Alekseev-Grosse-Schomerus () and Buffenoir-Roche () independently defined the so-called *quantum moduli algebras* L*ω*(Γ, *c*), which are combinatorial quantizations of relative character varieties (see Section [seccharvar] for details). Those algebras arise with some right comodule map ΔG : L*ω*(Γ, *c*) → L*ω*(Γ, *c*) ⊗ O*q*[G], where O*q*[G] = O*q*[SL2]⊗ *V̊*(Γ) is the so-called quantum gauge group Hopf algebra. The subalgebra L*ω**i**n**v*(Γ) ⊂ L*ω*(Γ, *c*) of coinvariant vectors plays an important role in combinatorial quantization. More precisely, as reviewed in Section [secgraphs], we associate to each ciliated graph (Γ, *c*) two punctured surfaces: an open one **\Sigma**0(Γ, *c*) and a closed one **\Sigma**(Γ) such that the algebras L*ω*(Γ, *c*) and L*ω**i**n**v*(Γ) are quantization of the SL2(C) (relative) character varieties of **\Sigma**0(Γ, *c*) and **\Sigma**(Γ) respectively with their Fock-Rosly Poisson structures. We deduce from Theorem [theorem1] the following: [theorem3] There exist isomorphisms of algebras S*ω*(**\Sigma**0(Γ, *c*)) ≅ L*ω*(Γ, *c*) and S*ω*(**\Sigma**(Γ)) ≅ L*ω**i**n**v*(Γ). Theorem [theorem3] is not surprising and was already proved in some cases. First it is well-known that (stated) skein algebras also induces deformation quantizations of (relative) character varieties : it follows from the work in for closed punctured surfaces and is proved in and for open punctured surfaces. So Theorem [theorem3] was expected; for instance its statement was conjectured by Costantino and Lê in. Next the skein origin of the defining relations of quantum moduli algebra was discovered by Bullock, Frohman and Kania-Bartoszynska in where the authors already proved that S*ω*(**\Sigma**(Γ)) and L*ω**i**n**v*(Γ) are isomorphic in the particular case where $\mathds{k}=\mathbb{C}[[\hbar]]$ and *q* :  = *ω*− 4 = expℏ. However, their proof does not extend straightforwardly to arbitrary ring (see item (6) of Section [secfinal]). Eventually, in the special case where (Γ, *c*) is the so-called daisy graph (it has only one vertex, so **\Sigma**0(Γ, *c*) has exactly one boundary component with one puncture on it), Theorem [theorem3] was proved by Faitg in in the case where *ω* is not a root of unity. A detailed comparison between Faitg’s isomorphism and ours is made in Section [seccomparaison]. Faitg’s result can also be derived indirectly from the works in, as detailed in Section [seccomparaison], though the so-obtained isomorphism is less explicit due to a change of duality. #### **Acknowledgments.** The author thanks S.Baseilhac, F.Costantino, M.Faitg, L.Funar, A.Quesney and P.Roche for useful discussions. He acknowledges support from the Japanese Society for Promotion of Science (JSPS) and the Centre National pour la Recherche et l’Enseignement (CNRS). Finite presentations for stated skein algebras ============================================== Definitions and first properties of stated skein algebras --------------------------------------------------------- A *punctured surface* is a pair **\Sigma** = (Σ, P) where Σ is a compact oriented surface and P is a finite subset of Σ which intersects non-trivially each boundary component. A *boundary arc* is a connected component of ∂Σ \ P. We write ΣP :  = Σ \ P. **Definition of stated skein algebras** Before stating precisely the definition of stated skein algebras, let us sketch it informally. Given a punctured surface **\Sigma** and an invertible element $\omega \in \mathds{k}^{\times}$ in some commutative unital ring $\mathds{k}$, the stated skein algebra S*ω*(**\Sigma**) is the quotient of the $\mathds{k}$-module freely spanned by isotopy classes of stated tangles in ΣP × (0, 1) by some local skein relations. The left part of Figure [figstatedtangle] illustrates such a stated tangle: each point of ∂*T* ⊂ ∂ΣP is equipped with a sign  +  or  −  (the state). Here the stated tangle is the union of three stated arcs and one closed curve. In order to work with two-dimensional pictures, we will consider the projection of tangles in ΣP as in the right part of Figure [figstatedtangle]; such a projection will be referred to as a diagram. [figstatedtangle] A *tangle* in ΣP × (0, 1) is a compact framed, properly embedded 1-dimensional manifold *T* ⊂ ΣP × (0, 1) such that for every point of ∂*T* ⊂ ∂ΣP × (0, 1) the framing is parallel to the (0, 1) factor and points to the direction of 1. Here, by framing, we refer to a thickening of *T* to an oriented surface. The *height* of (*v*, *h*) ∈ ΣP × (0, 1) is *h*. If *b* is a boundary arc and *T* a tangle, we impose that no two points in ∂*b**T* :  = ∂*T* ∩ *b* × (0, 1) have the same heights, hence the set ∂*b**T* is totally ordered by the heights. Two tangles are isotopic if they are isotopic through the class of tangles that preserve the boundary height orders. By convention, the empty set is a tangle only isotopic to itself. Let *π* : ΣP × (0, 1) → ΣP be the projection with *π*(*v*, *h*) = *v*. A tangle *T* is in *generic position* if for each of its points, the framing is parallel to the (0, 1) factor and points in the direction of 1 and is such that $\pi\_{\big| T} : T\rightarrow \Sigma\_{\mathcal{P}}$ is an immersion with at most transversal double points in the interior of ΣP. Every tangle is isotopic to a tangle in generic position. We call *diagram* the image *D* = *π*(*T*) of a tangle in generic position, together with the over/undercrossing information at each double point. An isotopy class of diagram *D* together with a total order of ∂*b**D* :  = ∂*D* ∩ *b* for each boundary arc *b*, define uniquely an isotopy class of tangle. When choosing an orientation o(*b*) of a boundary arc *b* and a diagram *D*, the set ∂*b**D* receives a natural order by setting that the points are increasing when going in the direction of o(*b*). We will represent tangles by drawing a diagram and an orientation (an arrow) for each boundary arc, as in Figure [figstatedtangle]. When a boundary arc *b* is oriented we assume that ∂*b**D* is ordered according to the orientation. A *state* of a tangle is a map *s* : ∂*T* → { − ,  + }. A pair (*T*, *s*) is called a *stated tangle*. We define a *stated diagram* (*D*, *s*) in a similar manner. Let $\omega\in \mathds{k}^{\times}$ an invertible element and write *A* :  = *ω*− 2. [defstatedskein] The *stated skein algebra* S*ω*(**\Sigma**) is the free $\mathds{k}$-module generated by isotopy classes of stated tangles in ΣP × (0, 1) modulo the following relations and, $$\label{eq: skein 1} \begin{tikzpicture}[baseline=-0.4ex,scale=0.5,>=stealth] \draw [fill=gray!45,gray!45] (-.6,-.6) rectangle (.6,.6) ; \draw[line width=1.2,-] (-0.4,-0.52) -- (.4,.53); \draw[line width=1.2,-] (0.4,-0.52) -- (0.1,-0.12); \draw[line width=1.2,-] (-0.1,0.12) -- (-.4,.53); \end{tikzpicture} =A \begin{tikzpicture}[baseline=-0.4ex,scale=0.5,>=stealth] \draw [fill=gray!45,gray!45] (-.6,-.6) rectangle (.6,.6) ; \draw[line width=1.2] (-0.4,-0.52)..controls +(.3,.5).. (-.4,.53); \draw[line width=1.2] (0.4,-0.52)..controls +(-.3,.5).. (.4,.53); \end{tikzpicture} +A^{-1} \begin{tikzpicture}[baseline=-0.4ex,scale=0.5,rotate=90] \draw [fill=gray!45,gray!45] (-.6,-.6) rectangle (.6,.6) ; \draw[line width=1.2] (-0.4,-0.52)..controls +(.3,.5).. (-.4,.53); \draw[line width=1.2] (0.4,-0.52)..controls +(-.3,.5).. (.4,.53); \end{tikzpicture} \hspace{.5cm} \text{ and }\hspace{.5cm} \begin{tikzpicture}[baseline=-0.4ex,scale=0.5,rotate=90] \draw [fill=gray!45,gray!45] (-.6,-.6) rectangle (.6,.6) ; \draw[line width=1.2,black] (0,0) circle (.4) ; \end{tikzpicture} = -(A^2+A^{-2}) \begin{tikzpicture}[baseline=-0.4ex,scale=0.5,rotate=90] \draw [fill=gray!45,gray!45] (-.6,-.6) rectangle (.6,.6) ; \end{tikzpicture} ;$$ $$\label{eq: skein 2} \begin{tikzpicture}[baseline=-0.4ex,scale=0.5,>=stealth] \draw [fill=gray!45,gray!45] (-.7,-.75) rectangle (.4,.75) ; \draw[->] (0.4,-0.75) to (.4,.75); \draw[line width=1.2] (0.4,-0.3) to (0,-.3); \draw[line width=1.2] (0.4,0.3) to (0,.3); \draw[line width=1.1] (0,0) ++(90:.3) arc (90:270:.3); \draw (0.65,0.3) node {\scriptsize{$+$}}; \draw (0.65,-0.3) node {\scriptsize{$+$}}; \end{tikzpicture} = \begin{tikzpicture}[baseline=-0.4ex,scale=0.5,>=stealth] \draw [fill=gray!45,gray!45] (-.7,-.75) rectangle (.4,.75) ; \draw[->] (0.4,-0.75) to (.4,.75); \draw[line width=1.2] (0.4,-0.3) to (0,-.3); \draw[line width=1.2] (0.4,0.3) to (0,.3); \draw[line width=1.1] (0,0) ++(90:.3) arc (90:270:.3); \draw (0.65,0.3) node {\scriptsize{$-$}}; \draw (0.65,-0.3) node {\scriptsize{$-$}}; \end{tikzpicture} =0, \hspace{.2cm} \begin{tikzpicture}[baseline=-0.4ex,scale=0.5,>=stealth] \draw [fill=gray!45,gray!45] (-.7,-.75) rectangle (.4,.75) ; \draw[->] (0.4,-0.75) to (.4,.75); \draw[line width=1.2] (0.4,-0.3) to (0,-.3); \draw[line width=1.2] (0.4,0.3) to (0,.3); \draw[line width=1.1] (0,0) ++(90:.3) arc (90:270:.3); \draw (0.65,0.3) node {\scriptsize{$+$}}; \draw (0.65,-0.3) node {\scriptsize{$-$}}; \end{tikzpicture} =\omega \begin{tikzpicture}[baseline=-0.4ex,scale=0.5,>=stealth] \draw [fill=gray!45,gray!45] (-.7,-.75) rectangle (.4,.75) ; \draw[-] (0.4,-0.75) to (.4,.75); \end{tikzpicture} \hspace{.1cm} \text{ and } \hspace{.1cm} \omega^{-1} { \begin{tikzpicture}[baseline=-0.4ex,scale=0.5, >=stealth] \draw [fill=gray!60,gray!45] (-.7,-.75) rectangle (.4,.75) ; \draw[->] (0.4,-0.75) to (.4,.75); \draw[line width=1.2] (0.4,-0.3) to (-.7,-.3); \draw[line width=1.2] (0.4,0.3) to (-.7,.3); \draw (0.65,0.3) node {\scriptsize{$-$}}; \draw (0.65,-0.3) node {\scriptsize{$+$}}; \end{tikzpicture} } - \omega^{-5} { \begin{tikzpicture}[baseline=-0.4ex,scale=0.5, >=stealth] \draw [fill=gray!60,gray!45] (-.7,-.75) rectangle (.4,.75) ; \draw[->] (0.4,-0.75) to (.4,.75); \draw[line width=1.2] (0.4,-0.3) to (-.7,-.3); \draw[line width=1.2] (0.4,0.3) to (-.7,.3); \draw (0.65,0.3) node {\scriptsize{$+$}}; \draw (0.65,-0.3) node {\scriptsize{$-$}}; \end{tikzpicture} } = { \begin{tikzpicture}[baseline=-0.4ex,scale=0.5] \draw [fill=gray!20,gray!45] (-.7,-.75) rectangle (.4,.75) ; \draw[-] (0.4,-0.75) to (.4,.75); \draw[line width=1.2] (-.7,-0.3) to (-.4,-.3); \draw[line width=1.2] (-.7,0.3) to (-.4,.3); \draw[line width=1.15] (-.4,0) ++(-90:.3) arc (-90:90:.3); \end{tikzpicture} }.$$ The product of two classes of stated tangles [*T*1, *s*1] and [*T*2, *s*2] is defined by isotoping *T*1 and *T*2 in ΣP × (1/2, 1) and ΣP × (0, 1/2) respectively and then setting [*T*1, *s*1] ⋅ [*T*2, *s*2] = [*T*1 ∪ *T*2, *s*1 ∪ *s*2]. Figure [figproduct] illustrates this product. For a closed punctured surface, S*ω*(**\Sigma**) coincides with the classical (Turaev’s) Kauffman-bracket skein algebra. [figproduct] **Reflexion anti-involution** Suppose that $\mathds{k} = \mathbb{Z}[\omega^{\pm 1}]$ and consider the Z-linear involution *x* ↦ *x*\* on $\mathds{k}$ sending *ω* to *ω*− 1. Let $r: \Sigma\_{\mathcal{P}} \times (0,1) \xrightarrow{\cong} \Sigma\_{\mathcal{P}}$ be the homeomorphism defined by *r*(*x*, *t*) = (*x*, 1 − *t*). Define an anti-linear map $\theta : \mathcal{S}\_{\omega}(\mathbf{\Sigma}) \xrightarrow{\cong} \mathcal{S}\_{\omega}(\mathbf{\Sigma}) $ by setting *θ*(∑*i**x**i*[*T**i*, *s**i*]) :  = ∑*i**x**i*\*[*r*(*T**i*), *s**i* ∘ *r*]. () *θ* is an anti-morphism of algebras, i.e. *θ*(*x**y*) = *θ*(*y*)*θ*(*x*). **Bases for stated skein algebras** A closed component of a diagram *D* is trivial if it bounds an embedded disc in ΣP. An open component of *D* is trivial if it can be isotoped, relatively to its boundary, inside some boundary arc. A diagram is *simple* if it has neither double point nor trivial component. By convention, the empty set is a simple diagram. Let o denote an arbitrary orientation of the boundary arcs of **\Sigma**. For each boundary arc *b* we write  < o the induced total order on ∂*b**D*. A state *s* : ∂*D* → { − ,  + } is o − *increasing* if for any boundary arc *b* and any two points *x*, *y* ∈ ∂*b**D*, then *x* < o*y* implies *s*(*x*) < *s*(*y*), with the convention  −  <  + . [defbasis] We denote by Bo ⊂ S*ω*(**\Sigma**) the set of classes of stated diagrams (*D*, *s*) such that *D* is simple and *s* is o-increasing. [theorembasis] () the set Bo is a basis of S*ω*(**\Sigma**). [remarkchangebasis] The basis Bo is independent on the choice of the ground ring $\mathds{k}$ and of $q\in \mathds{k}^{\times}$. This fact has the following useful consequence: let $\mathds{k} := \mathbb{Z}[\omega^{\pm 1}]$ and $\mathds{k}'$ be any other commutative unital ring with an invertible element $\omega' \in \mathds{k}^{'\times}$. There is a unique morphism of rings $\mu : \mathds{k} \rightarrow \mathds{k}'$ sending *ω* to *ω*ʹ and the two $\mathds{k}'$ algebras $\mathcal{S}\_{\omega}(\mathbf{\Sigma})\otimes\_{\mathds{k}} \mathds{k}'$ and S*ω*ʹ(**\Sigma**) are canonically isomorphic through the isomorphism preserving the basis Bo. This fact permits to prove formulas in $\mathds{k}$ using the reflexion anti-involution *θ* and then apply them to any ring $\mathds{k}'$ by changing the coefficients. **Gluing maps** Let *a*, *b* be two distinct boundary arcs of **\Sigma** and let **\Sigma**∣*a*#*b* be the punctured surface obtained from **\Sigma** by gluing *a* and *b*. Denote by *π* : ΣP → (Σ∣*a*#*b*)P∣*a*#*b* the projection and *c* :  = *π*(*a*) = *π*(*b*). Let (*T*0, *s*0) be a stated framed tangle of Σ∣*a*#*b*P∣*a*#*b* × (0, 1) transversed to *c* × (0, 1) and such that the heights of the points of *T*0 ∩ *c* × (0, 1) are pairwise distinct and the framing of the points of *T*0 ∩ *c* × (0, 1) is vertical. Let *T* ⊂ ΣP × (0, 1) be the framed tangle obtained by cutting *T*0 along *c*. Any two states *s**a* : ∂*a**T* → { − ,  + } and *s**b* : ∂*b**T* → { − ,  + } give rise to a state (*s**a*, *s*, *s**b*) on *T*. Both the sets ∂*a**T* and ∂*b**T* are in canonical bijection with the set *T*0 ∩ *c* by the map *π*. Hence the two sets of states *s**a* and *s**b* are both in canonical bijection with the set St(*c*) :  = {*s* : *c* ∩ *T*0 → { − ,  + }}. [defgluingmap] Let *i*∣*a*#*b* : S*ω*(**\Sigma**∣*a*#*b*) → S*ω*(**\Sigma**) be the linear map given, for any (*T*0, *s*0) as above, by: *i*∣*a*#*b*([*T*0, *s*0]) :  = ∑*s* ∈ St(*c*)[*T*, (*s*, *s*0, *s*)]. [figgluingmap] [theoremgluing] The linear map *i*∣*a*#*b* : S*ω*(**\Sigma**∣*a*#*b*) → S*ω*(**\Sigma**) is an injective morphism of algebras. Moreover the gluing operation is coassociative in the sense that if *a*, *b*, *c*, *d* are four distinct boundary arcs, then we have *i*∣*a*#*b* ∘ *i*∣*c*#*d* = *i*∣*c*#*d* ∘ *i*∣*a*#*b*. **Relation with *U**q*sl2 and its restricted dual O*q*[SL2]** Recall that *A* = *ω*− 2 and write *q* :  = *A*2. The stated skein algebra has deep relations with the quantum group *U**q*sl2 and its restricted dual O*q*(SL2), explored in that we briefly reproduce here for later use. Let *ρ* : *U**q*sl2 → End(*V*) be the standard representation of *U**q*sl2, where *V* is two dimensional with basis (*v*+, *v*−) and $$\rho(E) = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}, \quad \rho(F) = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \quad \rho(K) = \begin{pmatrix} q & 0 \\ 0 & q^{-1} \end{pmatrix}.$$ Let *ρ*\* : *U**q*sl2 → End(*V*\*) be the dual representation of (*ρ*, *V*), where *ρ*\*(*x*) is the transposed of *ρ*(*S*(*x*)). One has a *U**q*sl2-equivariant isomorphism $V^\* \xrightarrow{\cong} V$ whose matrix in the bases (*v*+\*, *v*−\*) and (*v*+, *v*−) writes $$C= \begin{pmatrix} C\_+^+ & C\_-^+ \\ C\_+^- & C\_-^- \end{pmatrix} := \begin{pmatrix} 0 & \omega \\ -\omega^{5} & 0 \end{pmatrix}. \mbox{ Therefore }C^{-1}= -A^3 C = \begin{pmatrix} 0 & -\omega^{-5} \\ \omega^{-1} & 0 \end{pmatrix}.$$ Define the operators $\tau, q^{\frac{H\otimes H}{2}} \in \mathrm{End}(V^{\otimes 2})$ by *τ*(*v**i* ⊗ *v**j*) :  = *v**j* ⊗ *v**i* and $q^{\frac{H\otimes H}{2}} (v\_i\otimes v\_j) = A^{ij} v\_i\otimes v\_j$ for *i*, *j* ∈ { + ,  − } (we identified  −  with  − 1 and  +  with  + 1). Let R ∈ End(*V*⊗ 2) be the braiding operator $$\mathscr{R} = \tau \circ q^{\frac{H\otimes H}{2}} \circ \exp\_q\left( (q-q^{-1})\rho(E) \otimes \rho(F) \right) = \tau \circ q^{\frac{H\otimes H}{2}} \circ \left( \mathds{1}\_2 + (q-q^{-1}) \rho(E) \otimes \rho(F) \right).$$ In the basis (*v*+ ⊗ *v*+, *v*+ ⊗ *v*−, *v*− ⊗ *v*+, *v*− ⊗ *v*−), it writes $$\mathscr{R} = \begin{pmatrix} \mathscr{R}\_{++}^{++} & \mathscr{R}\_{+-}^{++} &\mathscr{R}\_{-+}^{++} &\mathscr{R}\_{--}^{++} \\ \mathscr{R}\_{++}^{+-} &\mathscr{R}\_{+-}^{+-} &\mathscr{R}\_{-+}^{+-} &\mathscr{R}\_{--}^{+-} \\ \mathscr{R}\_{++}^{-+} &\mathscr{R}\_{+-}^{-+} &\mathscr{R}\_{-+}^{-+} &\mathscr{R}\_{--}^{-+} \\ \mathscr{R}\_{++}^{--} &\mathscr{R}\_{+-}^{--} &\mathscr{R}\_{-+}^{--} &\mathscr{R}\_{--}^{--} \end{pmatrix} := \begin{pmatrix} A & 0 & 0 & 0 \\ 0 & 0 &A^{-1} & 0 \\ 0 & A^{-1} & A-A^{-3} & 0 \\ 0 & 0 & 0 & A \end{pmatrix}, \mbox{so } \mathscr{R}^{-1} = \begin{pmatrix} A^{-1} & 0 & 0 & 0 \\ 0 & A^{-1} - A^3 & A & 0 \\ 0 & A & 0 & 0 \\ 0 & 0 & 0 & A^{-1}\end{pmatrix}.$$ We now list three families of skein relations, which are straightforward consequences of the definition, and will be used in the paper. Let *i*, *j* ∈ { − ,  + }.  •  *The trivial arc relations:* $$\label{trivial\_arc\_rel} \begin{tikzpicture}[baseline=-0.4ex,scale=0.5,>=stealth] \draw [fill=gray!45,gray!45] (-.7,-.75) rectangle (.4,.75) ; \draw[->] (0.4,-0.75) to (.4,.75); \draw[line width=1.2] (0.4,-0.3) to (0,-.3); \draw[line width=1.2] (0.4,0.3) to (0,.3); \draw[line width=1.1] (0,0) ++(90:.3) arc (90:270:.3); \draw (0.65,0.3) node {\scriptsize{$i$}}; \draw (0.65,-0.3) node {\scriptsize{$j$}}; \end{tikzpicture} = C^i\_j \hspace{.2cm} \begin{tikzpicture}[baseline=-0.4ex,scale=0.5,>=stealth] \draw [fill=gray!45,gray!45] (-.7,-.75) rectangle (.4,.75) ; \draw[-] (0.4,-0.75) to (.4,.75); \end{tikzpicture} , \hspace{.4cm} \begin{tikzpicture}[baseline=-0.4ex,scale=0.5,>=stealth] \draw [fill=gray!45,gray!45] (-.7,-.75) rectangle (.4,.75) ; \draw[->] (-0.7,-0.75) to (-.7,.75); \draw[line width=1.2] (-0.7,-0.3) to (-0.3,-.3); \draw[line width=1.2] (-0.7,0.3) to (-0.3,.3); \draw[line width=1.15] (-.4,0) ++(-90:.3) arc (-90:90:.3); \draw (-0.9,0.3) node {\scriptsize{$i$}}; \draw (-0.9,-0.3) node {\scriptsize{$j$}}; \end{tikzpicture} =(C^{-1})^i\_j \hspace{.2cm} \begin{tikzpicture}[baseline=-0.4ex,scale=0.5,>=stealth] \draw [fill=gray!45,gray!45] (-.7,-.75) rectangle (.4,.75) ; \draw[-] (-0.7,-0.75) to (-0.7,.75); \end{tikzpicture}.$$  •  *The cutting arc relations:* $$\label{cutting\_arc\_rel} { \begin{tikzpicture}[baseline=-0.4ex,scale=0.5] \draw [fill=gray!20,gray!45] (-.7,-.75) rectangle (.4,.75) ; \draw[-] (-0.7,-0.75) to (-.7,.75); \draw[line width=1.2] (0.1,-0.3) to (.4,-.3); \draw[line width=1.2] (0.1,0.3) to (.4,.3); \draw[line width=1.15] (.1,0) ++(90:.3) arc (90:270:.3); \end{tikzpicture} }= \sum\_{i,j = \pm} C^i\_j \hspace{.2cm} { \begin{tikzpicture}[baseline=-0.4ex,scale=0.5, >=stealth] \draw [fill=gray!60,gray!45] (-.7,-.75) rectangle (.4,.75) ; \draw[->] (-0.7,-0.75) to (-0.7,.75); \draw[line width=1.2] (0.4,-0.3) to (-.7,-.3); \draw[line width=1.2] (0.4,0.3) to (-.7,.3); \draw (-1,0.3) node {\scriptsize{$i$}}; \draw (-1,-0.3) node {\scriptsize{$j$}}; \end{tikzpicture} } , \hspace{.4cm} { \begin{tikzpicture}[baseline=-0.4ex,scale=0.5] \draw [fill=gray!20,gray!45] (-.7,-.75) rectangle (.4,.75) ; \draw[-] (0.4,-0.75) to (.4,.75); \draw[line width=1.2] (-.7,-0.3) to (-.4,-.3); \draw[line width=1.2] (-.7,0.3) to (-.4,.3); \draw[line width=1.15] (-.4,0) ++(-90:.3) arc (-90:90:.3); \end{tikzpicture} }= \sum\_{i,j = \pm} (C^{-1})\_j^i \hspace{.2cm} { \begin{tikzpicture}[baseline=-0.4ex,scale=0.5, >=stealth] \draw [fill=gray!60,gray!45] (-.7,-.75) rectangle (.4,.75) ; \draw[->] (0.4,-0.75) to (.4,.75); \draw[line width=1.2] (0.4,-0.3) to (-.7,-.3); \draw[line width=1.2] (0.4,0.3) to (-.7,.3); \draw (0.65,0.3) node {\scriptsize{$i$}}; \draw (0.65,-0.3) node {\scriptsize{$j$}}; \end{tikzpicture} }$$ .  •  *The height exchange relations:* $$\label{height\_exchange\_rel} { \begin{tikzpicture}[baseline=-0.4ex,scale=0.5, >=stealth] \draw [fill=gray!60,gray!45] (-.7,-.75) rectangle (.4,.75) ; \draw[->] (0.4,-0.75) to (.4,.75); \draw[line width=1.2] (0.4,-0.3) to (-.7,-.3); \draw[line width=1.2] (0.4,0.3) to (-.7,.3); \draw (0.65,0.3) node {\scriptsize{$i$}}; \draw (0.65,-0.3) node {\scriptsize{$j$}}; \end{tikzpicture} }= \adjustbox{valign=c}{\includegraphics[width=0.9cm]{cross2.eps}} = \sum\_{k,l = \pm} \mathscr{R}\_{i j}^{k l} \hspace{.2cm} { \begin{tikzpicture}[baseline=-0.4ex,scale=0.5, >=stealth] \draw [fill=gray!60,gray!45] (-.7,-.75) rectangle (.4,.75) ; \draw[<-] (0.4,-0.75) to (.4,.75); \draw[line width=1.2] (0.4,-0.3) to (-.7,-.3); \draw[line width=1.2] (0.4,0.3) to (-.7,.3); \draw (0.65,0.3) node {\scriptsize{$l$}}; \draw (0.65,-0.3) node {\scriptsize{$k$}}; \end{tikzpicture} } , \hspace{.4cm} { \begin{tikzpicture}[baseline=-0.4ex,scale=0.5, >=stealth] \draw [fill=gray!60,gray!45] (-.7,-.75) rectangle (.4,.75) ; \draw[<-] (0.4,-0.75) to (.4,.75); \draw[line width=1.2] (0.4,-0.3) to (-.7,-.3); \draw[line width=1.2] (0.4,0.3) to (-.7,.3); \draw (0.65,0.3) node {\scriptsize{$j$}}; \draw (0.65,-0.3) node {\scriptsize{$i$}}; \end{tikzpicture} } = \adjustbox{valign=c}{\includegraphics[width=0.9cm]{cross1.eps}} = \sum\_{k,l = \pm} (\mathscr{R}^{-1})\_{i j}^{k l} \hspace{.2cm} { \begin{tikzpicture}[baseline=-0.4ex,scale=0.5, >=stealth] \draw [fill=gray!60,gray!45] (-.7,-.75) rectangle (.4,.75) ; \draw[->] (0.4,-0.75) to (.4,.75); \draw[line width=1.2] (0.4,-0.3) to (-.7,-.3); \draw[line width=1.2] (0.4,0.3) to (-.7,.3); \draw (0.65,0.3) node {\scriptsize{$k$}}; \draw (0.65,-0.3) node {\scriptsize{$l$}}; \end{tikzpicture} }.$$ We refer to for proofs. The algebra O*q*[SL2] is the algebra presented by generators *x**ɛ**ɛ*ʹ, *ɛ*, *ɛ*ʹ ∈ { − ,  + } and relations $$\begin{aligned} \label{relbigone} x\_{++}x\_{+-} &= q^{-1}x\_{+-}x\_{++} & x\_{++}x\_{-+}&=q^{-1}x\_{-+}x\_{++} \\ x\_{--}x\_{+-} &= q x\_{+-}x\_{--} & x\_{--}x\_{-+}&=q x\_{-+}x\_{--} \\ x\_{++}x\_{--}&=1+q^{-1}x\_{+-}x\_{-+} & x\_{--}x\_{++}&=1 + q x\_{+-}x\_{-+} \\ x\_{-+}x\_{+-}&=x\_{+-}x\_{-+} & &\end{aligned}$$ It has a Hopf algebra structured characterized by the formulas $$\begin{pmatrix} \Delta (x\_{++}) & \Delta (x\_{+-}) \\ \Delta(x\_{-+}) & \Delta(x\_{--}) \end{pmatrix} = \begin{pmatrix} x\_{++} & x\_{+-} \\ x\_{-+} & x\_{--} \end{pmatrix} \otimes \begin{pmatrix} x\_{++} & x\_{+-} \\ x\_{-+} & x\_{--} \end{pmatrix}$$ $$\begin{pmatrix} \epsilon(x\_{++}) & \epsilon(x\_{+-}) \\ \epsilon(x\_{-+}) & \epsilon(x\_{--}) \end{pmatrix} = \begin{pmatrix} 1 &0 \\ 0& 1 \end{pmatrix} \text{ and } \begin{pmatrix} S(x\_{++}) & S(x\_{+-}) \\ S(x\_{-+}) & S(x\_{--}) \end{pmatrix} = \begin{pmatrix} x\_{--} & -q x\_{+-} \\ -q^{-1}x\_{-+} & x\_{++} \end{pmatrix}.$$ When *q* ∈ C\* is generic (not a root of unity), O*q*[SL2] is the restricted dual of *U**q*sl2 (see ). The *bigon* B is the punctured surface made of a disc with two punctures on its boundary. It has two boundary arcs *a* and *b* and is generated by the stated arcs *α**ɛ**ɛ*, *ɛ*, *ɛ*ʹ =  ±  made of an arc *α* linking *a* to *b* with state *ɛ* on *α* ∩ *a* and *ɛ*ʹ on *α* ∩ *b*. Consider a disjoint union B⨆B of two bigons; by gluing together the boundary arc *b*1 of the first bigon with the boundary arc *a*2 of the second, one obtains a morphism Δ :  = *i**b*1#*a*2 : S*ω*(B) → S*ω*(B)⊗ 2 which endows S*ω*(B) with a structure of Hopf algebra where Δ is the coproduct. [] There is an isomorphism of Hopf algebras *φ* : O*q*[SL2] ≅ S*ω*(B) sending the generator *x**ɛ**ɛ*ʹ ∈ O*q*[SL2] to the element *α**ɛ**ɛ*ʹ ∈ S*ω*(B). More precisely, the fact that *φ* is an isomorphism of algebras is proved in and the fact that it preserves the coproduct was noticed independently in. In all the paper, we will (abusively) identify the Hopf algebras O*q*[SL2] and S*ω*(B) using *φ*. Note that the definition of *φ* depends on an indexing by *a* and *b* of the boundary arcs of B. Now consider a punctured surface **\Sigma** and a boundary arc *c*. By gluing a bigon B along **\Sigma** while gluing *b* with *c*, one obtains a punctured surface isomorphic to **\Sigma**, hence a map Δ*c**L* :  = *i**b*#*c* : S*ω*(**\Sigma**) → O*q*[SL2] ⊗ S*ω*(**\Sigma**) which endows S*ω*(**\Sigma**) with a structure of left O*q*[SL2] comodule. Similarly, gluing *c* with *a* induces a right comodule morphism Δ*c**R* :  = *i**c*#*a* : S*ω*(**\Sigma**) → S*ω*(**\Sigma**) ⊗ O*q*[SL2]. The following theorem characterizes the image of the gluing map and was proved independently in and. [theoremexactsequence](, ) Let **\Sigma** be a punctured surface and *a*, *b* two boundary arcs. The following sequence is exact: $$0 \to \mathcal{S}\_{\omega}(\mathbf{\Sigma}\_{|a\#b}) \xrightarrow{i\_{a\#b}} \mathcal{S}\_{\omega}(\mathbf{\Sigma}) \xrightarrow{\Delta^L\_a - \sigma \circ \Delta^R\_b}\mathcal{O}\_q[{\operatorname{SL}}\_2] \otimes \mathcal{S}\_{\omega}(\mathbf{\Sigma}),$$ where *σ*(*x* ⊗ *y*) :  = *y* ⊗ *x*. An easy but very important consequence of the fact that Δ*a**L* and Δ*a**R* are comodule maps are the following *boundary skein relations*: (*ε* ⊗ *i**d*) ∘ Δ*a**L* = *i**d* and (*i**d* ⊗ *ε*) ∘ Δ*a**R* = *i**d*. The image through the counit *ε* of a stated diagram in B can be computed using the formulas: $$\label{epsilon\_formula} \epsilon \left( { \begin{tikzpicture}[baseline=-0.4ex,scale=0.5,>=stealth] \draw [fill=gray!45,gray!45] (-.7,-.75) rectangle (.4,.75) ; \draw[->] (0.4,-0.75) to (.4,.75); \draw[->] (-0.7,-0.75) to (-.7,.75); \draw[line width=1.2] (0.4,-0.3) to (0,-.3); \draw[line width=1.2] (0.4,0.3) to (0,.3); \draw[line width=1.1] (0,0) ++(90:.3) arc (90:270:.3); \draw (0.65,0.3) node {\scriptsize{$i$}}; \draw (0.65,-0.3) node {\scriptsize{$j$}}; \end{tikzpicture} }\right) = C\_j^i, \quad \epsilon \left( { \begin{tikzpicture}[baseline=-0.4ex,scale=0.5,>=stealth] \draw [fill=gray!45,gray!45] (-.7,-.75) rectangle (.4,.75) ; \draw[->] (0.4,-0.75) to (.4,.75); \draw[->] (-0.7,-0.75) to (-.7,.75); \draw[line width=1.2] (-.7,-0.3) to (-.4,-.3); \draw[line width=1.2] (-.7,0.3) to (-.4,.3); \draw[line width=1.15] (-.4,0) ++(-90:.3) arc (-90:90:.3); \draw (-1,0.3) node {\scriptsize{$i$}}; \draw (-1,-0.3) node {\scriptsize{$j$}}; \end{tikzpicture} } \right) = (C^{-1})\_j^i, \quad \epsilon \left( \adjustbox{valign=c}{\includegraphics[width=1.2cm]{crossbigon2.eps}} \right) = \mathscr{R}^{i j}\_{k l}, \quad \epsilon \left( \adjustbox{valign=c}{\includegraphics[width=1.2cm]{crossbigon1.eps}}\right) = (\mathscr{R}^{-1})^{i j}\_{k l}.$$ Figure [figboundaryskein] illustrates an instance of boundary skein relation. Here we draw a dotted arrow to illustrate where we cut the bigon. Note that all the trivial arcs, cutting arc and height exchange relations are particular cases of. [figboundaryskein] The small fundamental groupoid and its finite presentations ----------------------------------------------------------- During all this section we fix a punctured surface **\Sigma** = (Σ, P) such that Σ is connected and has non empty boundary. For each boundary arc *a* of **\Sigma**, fix a point *v**a* ∈ *a* and denote by V the set {*v**a*}*a*. The *small fundamental groupoid* Π1(ΣP, V) is the full subcategory of the fundamental groupoid Π1(ΣP) generated by V. Said differently, Π1(ΣP, V) is the small groupoid whose set of objects is V and such that a morphism (called path) *α* : *v*1 → *v*2 is a homotopy class of continuous map *φ**α* : [0, 1] → ΣP with *φ**α*(0) = *v*1 and *φ**α*(1) = *v*2. The map *φ**α* will be referred to as a *geometric representative* of *α*. The composition is the concatenation of paths. For a path *α* : *v*1 → *v*2, we write *s*(*α*) = *v*1 (the source point) and *t*(*α*) = *v*2 (the target point) and *α*− 1 : *v*2 → *v*1 the path with opposite orientation (*i.e.* *φ**α*− 1(*t*) = *φ**α*(1 − *t*)). We will define the notion of *finite presentation* P of the groupoid Π1(ΣP, V) and attach to each such P a finite presentation of S*ω*(**\Sigma**). In order to get some intuition, consider the punctured surface in Figure [figpresentation]: it is an annulus with two punctures per boundary component, so it has four boundary arcs. The figure shows some paths *β*1, …, *β*5 and we will say that Π1(ΣP, V) is finitely presented by the set of generators {*β*1, …, *β*5} together with the relation *β*2− 1*β*4*β*5*β*3 = 1. We will deduce that S*ω*(**\Sigma**) is generated by the stated arcs (*β**i*)*ɛ**ɛ*ʹ and that the relation *β*2− 1*β*4*β*5*β*3 = 1 induces a relation among them. Alternatively, the same punctured surface has a presentation with the smaller set of generators {*β*1, …, *β*4} and no relation. The induced finite presentation of S*ω*(**\Sigma**) will be simpler. [figpresentation] [defgenerators] 1. A *set of generators* for Π1(ΣP, V) is a set G of paths in Π1(ΣP, V) such that any path *α* ∈ Π1(ΣP, V) decomposes as *α* = *α*1*ɛ*1…*α**n**ɛ**n* with *ɛ**i* =  ± 1 and *α**i* ∈ G. We also require that each path *α* ∈ G is the homotopy class of some embedding *φ**α* : [0, 1] → ΣP such that the images of the *φ**α* do not intersect outside V and eventually intersect transversally at V. The *generating graph* is the oriented ribbon graph Γ ⊂ ΣP whose set of vertices is V and edges are the image of the *φ*. We will always assume implicitly that the geometric representatives *φ**α* is part of the data defining a set of generators. Moreover, when *α* ∈ G is a path such that *s*(*α*) = *t*(*α*) (i.e. *α* is a loop) we add the additional datum of a “height order” for its endpoints, that is we specify whether *h*(*s*(*α*)) < *h*(*t*(*α*)) or *h*(*t*(*α*)) < *h*(*s*(*α*)). 2. For a path *α* : *v*1 → *v*2 and *ɛ*, *ɛ*ʹ ∈ { − ,  + }, we denote by *α**ɛ**ɛ*ʹ ∈ S*ω*(**\Sigma**) the class of the stated arc (*α*, *σ*), where the state *σ* is given by *σ*(*v*1) = *ɛ* and *σ*(*v*2) = *ɛ*ʹ. When both endpoints lye in the same boundary arc (i.e. when *s*(*α*) = *t*(*α*)) we use the chosen height order to specify which endpoint lies on the top. Set AG :  = {*α**ɛ**ɛ*ʹ∣*α* ∈ G, *ɛ*, *ɛ*ʹ ∈ { − ,  + }} ⊂ S*ω*(**\Sigma**). [exemplepres] For any connected open punctured surface **\Sigma**, the groupoid Π1(ΣP, V) admits a finite set of generators depicted in Figure [figgeneratorsfinal] and defined as follows. Denote by *a*0, …, *a**n* the boundary arcs, by ∂0, …, ∂*r* the boundary components of Σ with *a*0 ⊂ ∂0 and write *v**i* :  = *a**i* ∩ V. Let $\overline{\Sigma}$ be the surface obtained from Σ by gluing a disc along each boundary component ∂*i* for 1 ≤ *i* ≤ *r*, and choose *α*1, *β*1, …, *α**g*, *β**g* some paths in *π*1(ΣP, *v*0)( = EndΠ1(ΣP, V)(*v*0)), such that their images in $\overline{\Sigma}$ generate the free group $\pi\_1(\overline{\Sigma}, v\_0)$ (said differently, the *α**i* and *β**i* are longitudes and meridians of Σ). For each inner puncture *p* choose a peripheral curve *γ**p* ∈ *π*1(ΣP, *v*0) encircling *p* once and for each boundary puncture *p*∂ between two boundary arcs *a**i* and *a**j*, consider the path *α**p*∂ : *v**i* → *v**j* represented by the corner arc in *p*∂. Eventually, for each boundary component ∂*j*, with 1 ≤ *j* ≤ *r*, containing a boundary arc *a**k**j* ⊂ ∂*j*, choose a path *δ*∂*j* : *v*0 → *v**k**j*. The set Gʹ :  = {*α**i*, *β**i*, *α**p*, *δ*∂*j*∣1 ≤ *i* ≤ *g*, *p* ∈ P, 1 ≤ *j* ≤ *r*} is a generating set for Π1(ΣP, V) and Figure [figgeneratorsfinal] represents a set of geometric representatives for Gʹ. Moreover each of its generators which is not one of the *δ*∂*j* can be expressed as a composition of the other ones (we will soon say that there is a relation among those generators), therefore a set G obtained from Gʹ by removing one of the element of the form *α**i*, *β**i* or *γ**p*, is still a generating set for Π1(ΣP, V). The height orders can be chosen arbitrarily. Note that G has cardinality 2*g* − 2 + *s* + *n*∂, where *g* is the genus of Σ, *s* :  = ∣P∣ is the number of punctures and *n*∂ :  = ∣*π*0(∂Σ)∣ is the number of boundary components. In the particular case where Σ has exactly one boundary component with one puncture on it (and eventual inner punctures), the generating graph of G is called the *daisy graph*. The daisy graph was first considered in in the context of classical lattice gauge field theory and in in the quantum case. [figgeneratorsfinal] [propgenerators][] If G is a set of generators of Π1(ΣP, V), then the set AG generates S*ω*(**\Sigma**) as an algebra. The proof of Proposition [propgenerators] is an easy consequence of the cutting arc relations illustrated in Figure [figpropgenerators]. [figpropgenerators] We now define the notion of relations for a generating set G. Let F(G) denote the free semi-group generated by the elements of G and let RelG denote the subset of F(G) of elements of the form *R* = *β*1 ⋆ … ⋆ *β**n* such that *s*(*β**i*) = *t*(*β**i* + 1) and such that the path *β*1…*β**n* is trivial. We write *R*− 1 :  = *β**n*− 1 ⋆ … ⋆ *β*1− 1. A relation *R* = *β*1 ⋆ … ⋆ *β**n* ∈ RelG is called *simple* if the *β**i* admit representative as embedded curves whose concatenation forms a contractible simple closed curve *γ* in ΣP whose orientation coincides with the orientation of the disc bounded by *γ*. Note that “being simple” depends on the choice of geometric representatives of the generators. A finite subset RL ⊂ RelG is called a *finite set of relations* if its elements are simple and every word *R* ∈ RelG can be decomposed as *R* = *β* ⋆ *R*1*ɛ*1 ⋆ … ⋆ *R**m**ɛ**m* ⋆ *β*− 1, where *R**i* ∈ RL, *ɛ**i* ∈ { ± 1} and *β* = *β*1 ⋆ … ⋆ *β**n* ∈ F(G) is such that *s*(*β**i*) = *t*(*β**i* + 1). The pair P :  = (G, RL) is called a *finite presentation* of Π1(ΣP, V). As illustrated in the introduction, the small fundamental groupoid of the triangle T admits the finite presentation with generating set G = {*α*, *β*, *γ*} and unique relation RL = {*α* ⋆ *β* ⋆ *γ*}. For a general connected open punctured surface **\Sigma**, the set G of Example [exemplepres] is the generating set of a presentation of Π1(ΣP, V) with no relation. Relations among the generators of the stated skein algebras ----------------------------------------------------------- We fix a connected open punctured surface **\Sigma**, a finite presentation P = (G, RL) of Π1(ΣP, V), and look for relations in S*ω*(**\Sigma**) among the elements of AG. An *oriented arc* *β* is a non-closed connected simple diagram of ΣP together with an orientation plus an eventual height order of its endpoints in the case where they both lye in the same boundary arc. We will denote by *s*(*β*) and *t*(*β*) its endpoints so that *β* is oriented from *s*(*β*) towards *t*(*β*). For *ɛ*, *ɛ*ʹ ∈ { − ,  + }, we denote by *β**ɛ**ɛ*ʹ ∈ S*ω*(**\Sigma**) the class of the stated diagram (*β*, *σ*) where *σ*(*s*(*β*)) = *ɛ* and *σ*(*t*(*β*)) = *ɛ*ʹ. Note that to each oriented arc one can associate a path in Π1(ΣP, V) by first isotoping its endpoints to V and then taking its homotopy class. However a path in Π1(ΣP, V) can be associated to several distinct oriented arcs, so an oriented arc contains more information that a path in the small fundamental groupoid. We want to see the elements of G as pairwise non intersecting oriented arcs as illustrated in Figure [figpathtoarc]. Recall that by Definition [defgenerators], any path *α* ∈ G is endowed with a geometric representative *φ**α* whose image is an oriented arc $\underline{\alpha} \subset \Sigma\_{\mathcal{P}}$ so that the $\underline{\alpha}$ pairwise do not intersect outside of V and their intersect transversally in V. So each point *v**a* ∈ V is endowed with a total order  < *v**a* on the set of its adjacent arcs (so the presenting graph has a ciliated ribbon graph structure). The orientation of ΣP induces an orientation of its boundary arcs which, in turn, induces a total order  < *a* on each boundary arc *a*, where *v*1 < *a**v*2 if *a* is oriented from *v*1 towards *v*2. After isotoping the $\underline{\alpha}$ in a small neighbourhood of each *v**a* in such a way that the vertex order order  < *v**a* matches with the boundary arc order  < *a* as illustrated in Figure [figpathtoarc], we get a family of pairwise non-intersecting oriented arcs representing the elements of G. [figpathtoarc] From now on, we consider the elements of G as pairwise non-intersecting oriented arcs. Let *α* be an oriented arc, set *v*1 :  = *s*(*α*) and *v*2 :  = *t*(*α*) and denote by *u* and *v* the boundary arcs containing *v*1 and *v*2 respectively. The arc *α* is said * *of type* *a* if *u* ≠ *v*; * *of type* *b* if *u* = *v*, *h*(*v*1) < *h*(*v*2) and *v*2 < *u**v*1; * *of type* *c* if *u* = *v*, *h*(*v*1) < *h*(*v*2) and *v*1 < *u**v*2; * *of type* *d* if *u* = *v*, *h*(*v*2) < *h*(*v*1) and *v*1 < *u**v*2; * *of type* *e* if *u* = *v*, *h*(*v*2) < *h*(*v*1) and *v*2 < *u**v*1. Here *h*(*v*) represents the height of *v* (*h* is the second projection ΣP × (0, 1) → (0, 1)). Figure [figtypearcs] illustrates the five types of oriented arcs. [figtypearcs] 1. For *α* an oriented arc, write $M(\alpha) := \begin{pmatrix} \alpha\_{++} & \alpha\_{+-} \\ \alpha\_{-+} & \alpha\_{--} \end{pmatrix}$ the 2 × 2 matrix with coefficients in S*ω*(**\Sigma**). The relations among the generators of S*ω*(**\Sigma**) that we will soon define are much more elegant when written using of the following matrix $$N(\alpha) := \left\{ \begin{array}{ll} M(\alpha) & \mbox{, if }\alpha \mbox{ is of type }a; \\ M(\alpha)C & \mbox{, if }\alpha \mbox{ is of type }b; \\ M(\alpha) ^tC & \mbox{, if }\alpha \mbox{ is of type }c; \\ C^{-1}M(\alpha) & \mbox{, if }\alpha \mbox{ is of type }d; \\ {}^t C^{-1} M(\alpha) & \mbox{, if }\alpha \mbox{ is of type }e; \end{array} \right.$$ where *t**M* denotes the transpose of *M*. 2. Let *M**a*, *b*(*R*) the ring of *a* × *b* matrices with coefficients in some ring *R* (here *R* will be S*ω*(**\Sigma**)). The *Kronecker product*  ⊙  : *M**a*, *b*(*R*) ⊗ *M**c*, *d*(*R*) → *M**a**c*, *b**d*(*R*) is defined by (*A* ⊙ *B*)*j*, *l**i*, *k* = *A**j**i**B**l**k*. For instance $$M(\alpha) \odot M(\beta) = \begin{pmatrix} \alpha\_{++} \beta\_{++} & \alpha\_{++} \beta\_{+-} & \alpha\_{+-} \beta\_{++} & \alpha\_{+-} \beta\_{+-} \\ \alpha\_{++} \beta\_{-+} &\alpha\_{++} \beta\_{--} &\alpha\_{+-} \beta\_{-+} &\alpha\_{+-} \beta\_{--} \\ \alpha\_{-+} \beta\_{++} &\alpha\_{-+} \beta\_{+-} &\alpha\_{--} \beta\_{++} &\alpha\_{--} \beta\_{+-} \\ \alpha\_{-+} \beta\_{-+} &\alpha\_{-+} \beta\_{--} &\alpha\_{--} \beta\_{-+} &\alpha\_{--} \beta\_{--} \end{pmatrix}.$$ 3. By abuse of notations, we also denote by *τ* the matrix of the flip map *τ* : *v**i* ⊗ *v**j* ↦ *v**j* ⊗ *v**i*, *V*⊗ 2 → *V*⊗ 2, *i.e.* $$\tau = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}.$$ 4. For a 4 × 4 matrix *X* = (*X**k**l**i**j*)*i*, *j*, *k*, *l* =  ±, we define the 2 × 2 matrices tr*L*(*X*) and tr*R*(*X*) by the formulas tr*L*(*X*)*a**b* :  = ∑*i* =  ±*X**i**a**i**b* and tr*R*(*X*)*a**b* :  = ∑*i* =  ±*X**a**i**b**i*. 5. For $M= \begin{pmatrix} a & b \\ c & d \end{pmatrix}$, we set det*q*(*M*) :  = *a**d* − *q*− 1*b**c* and det*q*2(*M*) :  = *a**d* − *q*− 2*b**c*. [Orientation reversing formulas][lemmaorientationreversing] Let *α* be an oriented arc and *α*− 1 be the same arc with opposite orientation. Then one has *M*(*α*− 1) = *t**M*(*α*). Therefore, one has $$\label{eq\_inversion} N(\alpha^{-1}) = \left\{ \begin{array}{ll} {}^tN(\alpha) & \mbox{, if }\alpha \mbox{ is of type }a; \\ {}^tC^{-1} {}^tN(\alpha)^tC & \mbox{, if }\alpha \mbox{ is of type }b \mbox{ or }d; \\ C^{-1} {}^tN(\alpha) C & \mbox{, if }\alpha \mbox{ is of type }c \mbox{ or }e. \end{array} \right.$$ This is a straightforward consequence of the definitions. [Height reversing formulas][lemmaheightreversing] Let *α* be an oriented arc with both endpoints in the same boundary arcs let *α*0 be the same arc with reversing height order for its endpoints. Then one has: $$\label{eq\_height\_reversing} M(\alpha^0) = \left\{ \begin{array}{ll} {\operatorname{tr}}\_R\left( \mathscr{R}^{-1}( {}^tC^{-1} \odot M(\alpha){}^tC) \right) & \mbox{, if }\alpha \mbox{ is of type }b; \\ {\operatorname{tr}}\_L \left( \mathscr{R}^{-1}( M(\alpha)C \odot C^{-1}) \right) & \mbox{, if }\alpha \mbox{ is of type }c; \\ {\operatorname{tr}}\_L \left( ({}^tC^{-1}M(\alpha) \odot {}^tC) \mathscr{R} \right) & \mbox{, if }\alpha \mbox{ is of type }d; \\ {\operatorname{tr}}\_R \left( (C\odot C^{-1}M(\alpha)) \mathscr{R} \right)& \mbox{, if }\alpha \mbox{ is of type }e. \end{array} \right.$$ Equations are obtained by using the boundary skein relations. Figure [figheightreversing] illustrates the proof in the case where *α* is of type e. The other cases are similar and left to the reader. In Figure [figheightreversing], we represented the curve *α* in blue to emphasize that, despite what the picture suggests, the curve can be arbitrarily complicated. Since the boundary arcs relation only involves the intersection of *α* with a small neighborhood (a bigon) of the boundary arc (colored in grey), how is the blue part of the figure does not matter. [figheightreversing] Reversing the orientation of an arc exchanges (type b)  ↔  (type c) and (type d)  ↔  (type e) whereas reversing the height order exchanges (type b)  ↔  (type e) and (type c)  ↔  (type d). Therefore Equations and permit to switch between the types *b*, *c*, *d*, *e*; this will permit us to write the arcs exchange and trivial loops relations in a simpler form by specifying the type of arc. [Trivial loops relations][lemmatrivialloopsrelations] Let *R* = *β**k* ⋆ … ⋆ *β*1 be a simple relation. Suppose that all arcs *β**i* are either of type *a* or *d*. Then $$\label{eq\_trivial\_loops\_rel} \mathds{1}\_2 = C M(\beta\_k) C^{-1} M(\beta\_{k-1}) C^{-1} \ldots C^{-1} M(\beta\_1).$$ Equation is a consequence of the trivial arc and cutting arc relations illustrated in Figure [figpolygonrel] in the case of the triangle with presentation whose generators are the arcs {*α*, *β*, *γ*} drawn in Figure [figtriangle] and the relation is *α* ⋆ *β* ⋆ *γ* = 1. Figure [figpolygonrel] shows the equality between the matrix coefficients of *C*− 1 and *M*(*α*)*C*− 1*M*(*β*)*C*− 1*M*(*γ*). [figpolygonrel] Let us detail the proof in the general case. Since *β**i* is either of type *a* or *d*, it can be represented by a tangle *T*(*β**i*) such that the height of the source endpoint of *β**i* (say *v**i*) is smaller than the height of its target endpoint (say *w**i*); said differently *h*(*v**i*) < *h*(*w**i*). One can further choose the *T*(*β**i*) so that *T*(*β**i* + 1) lies on the top of *T*(*β**i*) (so *h*(*v*1) < *h*(*w*1) < *h*(*v*2) < … < *h*(*w**k*)). Let *T* be the tangle made of the disjoint union of the *T*(*β**i*). By the assumption that *R* is a simple relation, we can suppose that *T* is in generic position (in the sense of Section 2.1) and that its projection diagram is simple. Fix *i*, *j* ∈ { − ,  + } and let *α*0 be a trivial arc with endpoints *s*(*α*0) = *v*1 and *t*(*α*0) = *w**k* so that *α*0 can be isotoped (relatively to its boundary) to an arc inside ∂ΣP. One the one hand, the trivial arc relation gives the equality *α**i**j*0 = (*C*− 1)*i**j*. On the other hand, the cutting arc relation gives the equality $$\begin{aligned} & (C^{-1})\_i^j = \alpha^{0}\_{i j } = \sum\_{ s\in \mathrm{St}(T), s(v\_1)= i, s(w\_k)=j} [T,s] (C^{-1})\_{s(w\_1)}^{s(v\_2)} (C^{-1})\_{s(w\_2)}^{s(v\_3)} \ldots (C^{-1})\_{s(w\_{k-1})}^{v\_k} \\ &= \sum\_{\mu\_1, \ldots \mu\_{2k-2} = \pm} M(\beta\_k)^j\_{\mu\_1} (C^{-1})\_{\mu\_2}^{\mu\_1} M(\beta\_{k-1})\_{\mu\_2}^{\mu\_3} \ldots M(\beta\_1)\_i^{\mu\_{2k-2}} = \left( M(\beta\_k) C^{-1} M(\beta\_{k-1}) C^{-1} \ldots M(\beta\_1) \right)\_i^j. \end{aligned}$$ This concludes the proof. Let *α*, *β* be two non-intersecting oriented arcs. Denote by *a*, *b*, *c*, *d* the boundary arcs containing *s*(*α*), *t*(*α*), *s*(*β*), *t*(*β*) respectively. Reversing the orientation and the height order of *α* or *β* if necessary, we have ten different possibilities illustrated in Figure [figarcrelations]. [figarcrelations] [lemmaarcsrelations] * If the elements of {*a*, *b*, *c*, *d*} are pairwise distinct, one has *N*(*α*) ⊙ *N*(*β*) = *τ*(*N*(*β*) ⊙ *N*(*α*))*τ*. * When *a* = *c*, {*a*, *b*, *d*} has cardinal 3 and *s*(*β*) < *a**s*(*α*), one has *N*(*α*) ⊙ *N*(*β*) = *τ*(*N*(*β*) ⊙ *N*(*α*))R. * When *a* = *c* ≠ *b* = *d* and *s*(*β*) < *a**s*(*α*), *t*(*α*) < *b**t*(*β*), one has *N*(*α*) ⊙ *N*(*β*) = R− 1(*N*(*β*) ⊙ *N*(*α*))R. * When *a* = *c* ≠ *b* = *d* and *s*(*β*) < *a**s*(*α*), *t*(*β*) < *b**t*(*α*), *N*(*α*) ⊙ *N*(*β*) = R(*N*(*β*) ⊙ *N*(*α*))R. * When *b* = *c* = *d* ≠ *a* and *s*(*β*) < *a**t*(*β*) < *a**t*(*α*) and *h*(*s*(*β*)) < *h*(*t*(*β*)), one has $$\label{arcrel5} N(\alpha) \odot N(\beta) = \mathscr{R}^{-1} \left( N(\beta) \odot \mathds{1}\_2 \right) \mathscr{R} \left( N(\alpha) \odot \mathds{1}\_2 \right).$$ * When *b* = *c* = *d* ≠ *a* and *t*(*α*) < *a**t*(*β*) < *a**s*(*β*) and *h*(*s*(*β*)) < *h*(*t*(*β*)) < *h*(*t*(*α*)), one has $$\label{arcrel6} N(\alpha) \odot N(\beta) = \mathscr{R}^{-1}\left( N(\beta) \odot \mathds{1}\_2 \right) \mathscr{R} \left( N(\alpha) \odot \mathds{1}\_2 \right).$$ * When *b* = *c* = *d* ≠ *a* and *t*(*β*) < *a**t*(*α*) < *a**s*(*β*) and *h*(*s*(*β*)) < *h*(*t*(*α*)) < *h*(*t*(*β*)), one has $$\label{arcrel7} N(\alpha) \odot N(\beta) = \mathscr{R} \left( N(\beta) \odot \mathds{1}\_2 \right) \mathscr{R} \left( N(\alpha) \odot \mathds{1}\_2 \right).$$ * When *a* = *b* = *c* = *d* and *s*(*β*) < *a**s*(*α*) < *a**t*(*β*) < *a**t*(*α*) and *h*(*s*(*β*)) < *h*(*s*(*α*)) < *h*(*t*(*β*)) < *h*(*t*(*α*)), one has $$\label{arcrel8} \left( \mathds{1}\_2 \odot N(\alpha) \right) \mathscr{R}^{-1} \left( \mathds{1}\_2 \odot N(\beta) \right) \mathscr{R}^{-1} = \mathscr{R} \left( \mathds{1}\_2 \odot N(\beta) \right) \mathscr{R}^{-1} \left( \mathds{1}\_2 \odot N(\alpha) \right).$$ * When *a* = *b* = *c* = *d* and *s*(*β*) < *a**t*(*β*) < *a**s*(*α*) < *a**t*(*α*) and *h*(*s*(*β*)) < *h*(*t*(*β*)) < *h*(*s*(*α*)) < *h*(*t*(*α*)), one has $$\label{arcrel9} \mathscr{R}^{-1} \left( \mathds{1}\_2 \odot N(\alpha) \right) \mathscr{R} \left( \mathds{1}\_2 \odot N(\beta) \right) = \left( \mathds{1}\_2 \odot N(\beta) \right) \mathscr{R}^{-1} \left( \mathds{1}\_2 \odot N(\alpha) \right) \mathscr{R}.$$ * When *a* = *b* = *c* = *d* and *s*(*α*) < *a**s*(*β*) < *a**t*(*β*) < *a**t*(*α*), and *h*(*s*(*α*)) < *h*(*s*(*β*)) < *h*(*t*(*β*)) < *h*(*t*(*α*)), one has $$\label{arcrel10} \left( \mathds{1}\_2 \odot N(\alpha) \right) \mathscr{R}^{-1} \left( \mathds{1}\_2 \odot N(\beta) \right)\mathscr{R} = \mathscr{R} \left( \mathds{1}\_2 \odot N(\beta) \right) \mathscr{R}^{-1} \left( \mathds{1}\_2 \odot N(\alpha) \right).$$ Equation says that in case (*i*) any *α**i**j* commutes with any *β**k**l*, which is obvious. Equations,, in cases (*i**i*), (*i**i**i*) and (*i**v*) are straightforward consequences of the height exchange relation. All other cases will be derived using the boundary skein relations. As in the proof of Lemma [lemmaheightreversing], we will color the arcs *α* and *β* in red and blue to remind the reader that they might be much more complicated than what they look in the picture: in the computations we perform while using the boundary skein relation, we only care about the restriction of the diagrams (depicted in grey) in a small bigon in the neighborhood of the boundary arc *a* and not of the actual shape of the blue and red parts. Equations and in cases (*v*) and (*v**i*) are proved in a very similar way; we detail the proof of and leave to the reader. In case (*v**i*), one has: $$\begin{aligned} &\left( M(\alpha) \odot M(\beta) \right)\_{kl}^{ij} = \alpha\_{k i} \beta\_{l j} = \adjustbox{valign=c}{\includegraphics[width=1.5cm]{Case\_vi\_1.eps}} = \adjustbox{valign=c}{\includegraphics[width=3cm]{Case\_vi\_2.eps}} \\ & = \sum\_{a,b,c,d,e,f=\pm} (\mathscr{R}^{-1})\_{f d}^{i j} M(\beta)\_e^f C\_c^e \mathscr{R}\_{ab}^{cd} M(\alpha)\_k^a (C^{-1})\_l^b = \left( \mathscr{R}^{-1} (M(\beta) C \odot \mathds{1}\_2) \mathscr{R}(M(\alpha)\odot C^{-1}) \right)\_{kl}^{ij}.\end{aligned}$$ To handle cases (*v**i**i*) to (*x*), we introduce the 4 × 4 matrix *V* = (*V**k**l**i**j*)*i*, *j*, *k*, *l* ∈ { − ,  + }, where *V**k**l**i**j* = [*α* ∪ *β*, *σ**i**j**k**l*] ∈ S*ω*(**\Sigma**) is the class of the simple diagram *α* ∪ *β* with state *σ**i**j**k**l* sending *t*(*α*), *t*(*β*), *s*(*α*) and *s*(*β*) to *i*, *j*, *k* and *l* respectively. Here the height order of the points of ∂(*α* ∪ *β*) is given by the boundary arc orientation drawn in Figure [figarcrelations]. The trick is to compute *V* in two different ways and then equating the two obtained formulas. In case (*v**i**i*), on the one hand, we first prove the equality $V=\tau (M(\beta)C \odot \mathds{1}\_2) \mathscr{R} (M(\alpha)\odot C^{-1})$ as follows: $$V^{ij}\_{kl} = \adjustbox{valign=c}{\includegraphics[width=1.5cm]{Case\_vii\_1.eps}} =\adjustbox{valign=c}{\includegraphics[width=3cm]{Case\_vii\_2.eps}}= \left( (M(\beta)C \odot \mathds{1}\_2) \mathscr{R} (M(\alpha)\odot C^{-1}) \right)\_{kl}^{ji}.$$ On the other hand, we prove the equality *V* = *τ*R− 1(*M*(*α*) ⊙ *M*(*β*)) as follows: $$V^{ij}\_{kl} = \adjustbox{valign=c}{\includegraphics[width=1.5cm]{Case\_vii\_1.eps}} = \adjustbox{valign=c}{\includegraphics[width=3cm]{Case\_vii\_3.eps}} =\left( \mathscr{R}^{-1} (M(\alpha) \odot M(\beta)) \right)\_{kl}^{ji}.$$ So we get the equality $ \mathscr{R}^{-1} (M(\alpha) \odot M(\beta)) = (M(\beta)C \odot \mathds{1}\_2) \mathscr{R} (M(\alpha)\odot C^{-1}) (=\tau V)$ and Equation follows. In case (*v**i**i*), on the one hand, we first prove the equality $V=\tau (C\odot M(\alpha)) \mathscr{R}^{-1} (\mathds{1}\_2 \odot C^{-1}M(\beta))$ as follows: $$V^{ij}\_{kl} = \adjustbox{valign=c}{\includegraphics[width=1.5cm]{Case\_viii\_1.eps}} =\adjustbox{valign=c}{\includegraphics[width=3cm]{Case\_viii\_2.eps}}= \left( (C\odot M(\alpha)) \mathscr{R}^{-1}(\mathds{1}\_2 \odot C^{-1}M(\beta)) \right)\_{kl}^{ji}.$$ On the other hand, we prove the equality $V= \tau (C\odot C) \mathscr{R} (\mathds{1}\_2 \odot C^{-1}M(\beta)) \mathscr{R}^{-1} (\mathds{1}\_2 \odot C^{-1}M(\alpha)) \mathscr{R}$ as follows: $$V^{ij}\_{kl} = \adjustbox{valign=c}{\includegraphics[width=1.5cm]{Case\_viii\_1.eps}} =\adjustbox{valign=c}{\includegraphics[width=3.5cm]{Case\_viii\_3.eps}}= \left( (C\odot C) \mathscr{R} (\mathds{1}\_2 \odot C^{-1}M(\beta)) \mathscr{R}^{-1} (\mathds{1}\_2 \odot C^{-1}M(\alpha)) \mathscr{R} \right)\_{kl}^{ji}.$$ Equation follows by equating the two obtained expressions for *V*. In case (*x*), on the one hand, we first prove the equality $V= (C\odot M(\alpha))\mathscr{R}^{-1} (\mathds{1}\_2 \odot C^{-1} M(\beta)) \mathscr{R}$ as follows: $$V\_{kl}^{ij} = \adjustbox{valign=c}{\includegraphics[width=1.5cm]{Case\_x\_1.eps}} =\adjustbox{valign=c}{\includegraphics[width=3.5cm]{Case\_x\_2.eps}}= \left( (C\odot M(\alpha))\mathscr{R}^{-1} (\mathds{1}\_2 \odot C^{-1} M(\beta)) \mathscr{R} \right)\_{kl}^{ij}.$$ On the other hand, we prove the equality $V=(C\odot C) \mathscr{R} (\mathds{1}\_2 \odot C^{-1}M(\beta)) \mathscr{R}^{-1}(\mathds{1}\_2 \odot C^{-1}M(\alpha))$ as follows: $$V\_{kl}^{ij} = \adjustbox{valign=c}{\includegraphics[width=1.5cm]{Case\_x\_1.eps}} =\adjustbox{valign=c}{\includegraphics[width=4cm]{Case\_x\_3.eps}}= \left( (C\odot C) \mathscr{R} (\mathds{1}\_2 \odot C^{-1}M(\beta)) \mathscr{R}^{-1}(\mathds{1}\_2 \odot C^{-1}M(\alpha)) \right)\_{kl}^{ij}.$$ Therefore, we obtain the following equality that will be used in the proof of Lemma [lemmaqdetrel]: $$\label{eq\_V} V = (C\odot M(\alpha))\mathscr{R}^{-1} (\mathds{1}\_2 \odot C^{-1} M(\beta)) \mathscr{R} = (C\odot C) \mathscr{R} (\mathds{1}\_2 \odot C^{-1}M(\beta)) \mathscr{R}^{-1}(\mathds{1}\_2 \odot C^{-1}M(\alpha)).$$ Equation follows. In case (*i**x*), we slightly change the strategy. We define the 4 × 4 matrix *W* = (*W**k**l**i**j*)*i*, *j*, *k*, *l* ∈ { − ,  + } by $W\_{kl}^{ij}:= \adjustbox{valign=c}{\includegraphics[width=1.5cm]{Case\_ix\_1.eps}} $. We first prove the equality $W= (C\odot M(\beta)) \mathscr{R}^{-1}(\mathds{1}\_2 \odot C^{-1}M(\alpha))$ as follows: $$W\_{kl}^{ij}= \adjustbox{valign=c}{\includegraphics[width=1.5cm]{Case\_ix\_1.eps}} = \adjustbox{valign=c}{\includegraphics[width=3.5cm]{Case\_ix\_2.eps}} = \left( (C\odot M(\beta)) \mathscr{R}^{-1}(\mathds{1}\_2 \odot C^{-1}M(\alpha)) \right)\_{kl}^{ij}.$$ Next, we prove the equality $W= (C\odot C) \mathscr{R}^{-1}(\mathds{1}\_2\odot C^{-1}M(\alpha)) \mathscr{R} (\mathds{1}\_2\odot C^{-1}M(\beta)) \mathscr{R}^{-1}$ as follows: $$W\_{kl}^{ij}= \adjustbox{valign=c}{\includegraphics[width=1.5cm]{Case\_ix\_1.eps}} = \adjustbox{valign=c}{\includegraphics[width=3.5cm]{Case\_ix\_3.eps}} = \left( (C\odot C) \mathscr{R}^{-1}(\mathds{1}\_2\odot C^{-1}M(\alpha)) \mathscr{R} (\mathds{1}\_2\odot C^{-1}M(\beta)) \mathscr{R}^{-1} \right)\_{kl}^{ij}.$$ Equation follows by equating the two obtained expressions for *W*. This concludes the proof. [q-determinant relations][lemmaqdetrel] Let *α* be an oriented arc. Then det*q*(*N*(*α*)) = 1, if *α*is of type a, and det*q*2(*N*(*α*)) = 1, else. First suppose that *α* is of type a. Applying the trivial arc and cutting arc relation, we obtain: $$(C^{-1})\_+^- = { \begin{tikzpicture}[baseline=-0.4ex,scale=0.5,>=stealth] \draw [fill=gray!45,gray!45] (-.7,-.75) rectangle (.4,.75) ; \draw[->] (0.4,-0.75) to (.4,.75); \draw[->] (-0.7,-0.75) to (-.7,.75); \draw[line width=1.2] (-.7,-0.3) to (-.4,-.3); \draw[line width=1.2] (-.7,0.3) to (-.4,.3); \draw[line width=1.15] (-.4,0) ++(-90:.3) arc (-90:90:.3); \draw (-1,0.3) node {\scriptsize{$-$}}; \draw (-1,-0.3) node {\scriptsize{$+$}}; \end{tikzpicture} } = (C^{-1})^+\_- \adjustbox{valign=c}{\includegraphics[width=1.2cm]{qdet\_rel1.eps}} + (C^{-1})^-\_+ \adjustbox{valign=c}{\includegraphics[width=1.2cm]{qdet\_rel2.eps}},$$ which is equivalent to the equation *α*+  +*α*−  − − *q*− 1*α*+  −*α*−  + = 1 as claimed. Next we suppose that *α* is of type d. Let *β* be an arc isotope to and disjoint from *α*, placed as in the configuration (x) of Figure [figarcrelations]. Consider the matrix *V* = (*V**k**l**i**j*)*i*, *j*, *k*, *l* ∈ { − ,  + }, where *V**k**l**i**j* = [*α* ∪ *β*, *σ**i**j**k**l*] ∈ S*ω*(**\Sigma**) is the class of the simple diagram *α* ∪ *β* with state *σ**i**j**k**l* sending *t*(*α*), *t*(*β*), *s*(*α*) and *s*(*β*) to *i*, *j*, *k* and *l* respectively, like in the proof of Lemma [lemmaarcsrelations] (i.e. $V\_{kl}^{ij}= \adjustbox{valign=c}{\includegraphics[width=1.5cm]{Case\_x\_1.eps}}$). Again, using the trivial arc and cutting arc relation, we obtain: $$C\_-^+ = \adjustbox{valign=c}{\includegraphics[width=1.5cm]{qdet\_rel3.eps}} = C\_+^- \adjustbox{valign=c}{\includegraphics[width=1.5cm]{qdet\_rel4.eps}} + C\_-^+ \adjustbox{valign=c}{\includegraphics[width=1.5cm]{qdet\_rel5.eps}} \quad \Leftrightarrow V^{+-}\_{-+} - q V^{+-}\_{+-} = 1.$$ Next, by developing the matrix coefficients in the equalities, we find the equalities *V*−  ++  − = *q**α*+  −*α*−  + + *A*− 1 and *V*+  −+  − = *q*− 2*α*−  −*α* +  +  − *A*− 3. Putting these equalities together, we find *α*−  −*α*+  + − *q*2*α*+  −*α*−  + = *A*. Now developing Equation, we obtain *α*+  −*α*−  + = *α*−  +*α*+  −, therefore: $$\mathrm{det}\_{q^2} (N(\alpha)) = \mathrm{det}\_{q^2} \begin{pmatrix} - \omega^{-5} \alpha\_{-+} & -\omega^{-5} \alpha\_{--} \\ \omega^{-1} \alpha\_{++} & \omega^{-1} \alpha\_{+-} \end{pmatrix} = - A^3 \alpha\_{-+} \alpha\_{+-} + A^{-1} \alpha\_{--} \alpha\_{++} = 1.$$ Now, if *α* is of type e, then *α*− 1 is of type d. A simple computation shows that if $M= \begin{pmatrix} a & b \\ c & d \end{pmatrix}$ is such that *a**d* = *d**a* then det*q*2(*M*) = det*q*2(*C*− 1*t**M**C*), so we deduce the q-determinant formula for *α* of type e from the facts that it holds for *α*− 1, from the orientation reversing formula in Lemma [lemmaorientationreversing] and from the equality *α*+  −*α*−  + = *α*−  +*α*+  −. Suppose that *α* is of type c and choose $\mathds{k}=\mathbb{Z}[\omega^{\pm 1}]$. Recall from Section 2.1 the reflexion anti-involution *θ*. The image *θ*(*α*) is of type d, so applying *θ* to Equation, we obtain that: *α*+  +*α*−  − − *q*− 2*α*−  +*α*+  − = *A*− 1. By Remark [remarkchangebasis], since Equation [eqqdettypec] holds for $\mathds{k}=\mathbb{Z}[\omega^{\pm 1}]$, it also holds for any other ring. Also using *θ*, we find that *α*+  −*α*−  + = *α*−  +*α*+  − and the equation det*q*2(*N*(*α*)) = 1 follows. Eventually, when *α* is of type b, we deduce the q-determinant relation from the facts that it holds for *α*− 1 (of type c), from the orientation reversing formulas of Lemma [lemmaorientationreversing] and from the identity *α*+  −*α*−  + = *α*−  +*α*+  −. Let P = (G, RL) be a finite presentation of Π1(ΣP, V). By Proposition [propgenerators], the set AG generates S*ω*(**\Sigma**) and we have found three families of relations: 1. For each *α* ∈ G we have the either the relation det*q*(*N*(*α*)) = 1 or det*q*2(*N*(*α*)) = 1 by Equation in Lemma [lemmaqdetrel]; we call them the *q-determinant relations*. 2. For each *R* ∈ RL, we have four relations obtained by considering the matrix coefficients in Equation in Lemma [lemmatrivialloopsrelations]; we call them *trivial loops relations*. 3. For each pair (*α*, *β*) of elements in G, we have 16 relations obtained by considering the matrix coefficients in one of the Equations, …, of Lemma [lemmaarcsrelations] after having possibly replaced *α* or *β* by *α*− 1 or *β*− 1, if necessary, and using the inversion formula ; we call them *arcs exchange relations*. Proof of Theorems [theorem1] and [theorem2] =========================================== In this section, we prove Theorems [theorem1] and [theorem2]. Let L*ω*(P) be the algebra generated by the elements of G modulo the q-determinant, trivial loops and arcs exchange relations and write Ψ : L*ω*(P) → S*ω*(**\Sigma**) the obvious algebra morphism. By Proposition [propgenerators], Ψ is surjective and we need to show that Ψ is injective to prove Theorem [theorem1]. We cut the proof of Theorem [theorem1] in three steps: (1) first in Step 1, we show that it is sufficient to make the proof in the case where P has no relation (as in Example [exemplepres]); (2) in this particular case, the finite presentation defining L*ω*(P) is inhomogeneous quadratic and we will use the Diamond Lemma to extract PBW bases of L*ω*(P) and to prove it is Koszul; in Step 2 we extract the re-written rules and their leading terms from the q-determinant and arc exchange relations and exhibit the associated spanning family $\underline{\mathcal{B}}^{\mathbb{G}}\subset\mathcal{L}\_{\omega}(\mathbb{P})$; (3) eventually in Step 3, we show that the image by Ψ of $\underline{\mathcal{B}}^{\mathbb{G}}$ is a basis, this will prove both the injectivity of Ψ and the fact that $\underline{\mathcal{B}}^{\mathbb{G}}$ is a Poincaré-Birkhoff-Witt basis and conclude the proofs of Theorems [theorem1] and [theorem2]. Step 1: Reduction to the case where P has no relation ----------------------------------------------------- Let Γ be the presenting graph of P and consider its fundamental groupoid Π1(Γ): the objects of Π1(Γ) are the vertices of Γ (*i.e.* the set V) and the morphisms are compositions *α**k**ɛ**k*…*α*1*ɛ*1 where *α**i* ∈ G. The inclusion Γ ⊂ ΣP induces a functor *F* : Π1(Γ) → Π1(ΣP, V) which is the identity on the objects. The fact that G is a set of generators implies that *F* is full and P has no relations if and only if *F* is faithful. Fix *v*0 ∈ V. For a relation *R* ∈ RL of the form *R* = *β**k* ⋆ … ⋆ *β*1, the *base point of* *R* is *s*(*β*1) = *t*(*β**k*). By inspecting the trivial loop relation, we see that changing a relation *R* by a relation *β* ⋆ *R* ⋆ *β*− 1 does not change the algebra L*ω*(P). Since ΣP is assumed to be connected, we can suppose that all relations in RL have the same base point *v*0, so each relation *R* = *β**k* ⋆ … ⋆ *β*1 induces an element [*R*] = *β**k*…*β*1 ∈ *π*1(Γ, *v*0). The functor *F* induces a surjective group morphism *F**v*0 : *π*1(Γ, *v*0) → *π*1(ΣP, *v*0) and the fact that RL is a set of relations implies that {[*R*], *R* ∈ RL} generates ker(*F**v*0). Since *π*1(Γ, *v*0) is a free group, so is ker(*F**v*0). Let *R*1, …, *R**m* ∈ RL be such that {[*R*1], …, [*R**m*]} is a minimal set of generators for the free group ker(*F**v*0). For each *R**i*, choose an element *β**i* ∈ G such that either *β**i* or *β**i*− 1 appears in the expression of *R**i* such that the set Gʹ obtained from G by removing the *β**i*’s is a generating set. So if Γʹ is the presenting graph of Gʹ, the morphism *F*ʹ*v*0 : *π*1(Γʹ, *v*0) → *π*1(ΣP, *v*0) is injective, so the functor *F*ʹ : Π1(Γʹ) → Π1(ΣP, V) is faithful and Pʹ :  = (Gʹ, ∅) is a finite presentation of Π1(ΣP, V) with no relations. The inclusion Gʹ ⊂ G induces an algebra morphism $\widetilde{\varphi} : \mathcal{T}[\mathbb{G}'] \hookrightarrow \mathcal{T}[\mathbb{G}]$ on the free tensor algebras generated by Gʹ and G respectively and $\widetilde{\varphi}$ sends q-determinant and arc exchange relations to q-determinant and arc exchange relations, so it induces an algebra morphism *φ* : L*ω*(Pʹ) → L*ω*(P) . [lemmareduction] The morphism *φ* is an isomorphism. To prove the surjectivity, we need to show that for each removed path *β**i* ∈ G \ Gʹ, the stated arcs (*β**i*)*ɛ**ɛ*ʹ can be expressed as a polynomial in the stated arcs (*α*± 1)*μ**μ*ʹ for *α* ∈ Gʹ. This follows from the trivial loop relation associated to the relation *R**i* ∈ RL containing *β**i*± 1. The injectivity of *φ* is a straightforward consequence of the definition. Step 2: Poincaré-Birkhoff-Witt bases and Koszulness --------------------------------------------------- In the rest of the section, we now suppose that P = (G, ∅) is a presentation with no relations and that every arc in G is either of type *a*, *c* or *d*. Note that the convention on the type of the generators is not restrictive but purely conventional since we can always replace a generator *α* by *α*− 1 without changing the set AG of generators of S*ω*(**\Sigma**). Since P has no relation, the defining presentation of L*ω*(P) contains only q-determinant and arc exchange relations. All these relations are quadratic (inhomogeneous) in the generators AG and we want to apply the Diamond Lemma to prove that L*ω*(P) is Koszul. **Reminder on the Diamond Lemma for PBW bases** Following the exposition in Section 4 of, we briefly recall the statement of the Diamond Lemma for PBW bases. Let *V* be a free finite rank $\mathds{k}$-module, denote by *T*(*V*) :  =  ⊕ *n* ≥ 0*V*⊗ *n* the tensor algebra and fix *R* ⊂ *V*⊗ 2 a finite subset. The quotient algebra $\mathcal{A}:= {{\raisebox{.2em}{$T(V)$}\left/\raisebox{-.2em}{$(R)$}\right.}}$ is called a *quadratic algebra*. Let {*v**i*}*i* ∈ *I* be a totally ordered basis of *V* and write *I* = {1, …, *k*} so that *v**i* < *v**i* + 1. Then the set *J* :  = ⨆*n* ≥ 0*I**n* (where *I*0 = {0}) is totally ordered by the lexicographic order and the set of elements *v***i** = *v**i*1…*v**i**n*, for **i** = (*i*1, …, *i**k*), forms a basis of *T*(*V*). We suppose that the elements *r* ∈ *R* (named relators) have the form *r* = *v**i**v**j* − ∑(*k*, *l*) < (*i*, *j*)*λ**k**l**i**j**v**k**v**l*. The term *v**i**v**j* is called the *leading term* of *r*. We assume that two distinct relators have distinct leading terms. Define the family B :  = {*v**i*1…*v**i**n*∣ so that *v**i**k**v**i**k* + 1is not a leading term,  ∀1 ≤ *k* ≤ *n* − 1},  and denote by B(3) ⊂ B the subset of elements of length 3 (of the form *v**i*1*v**i*2*v**i*3). Obviously the set B spans A. [Diamond Lemma for PBW bases: Bergman, see also Theorem 4.3.10][theoremPBWbases1] If B(3) is free, then B is a (Poincaré-Birkhoff-Witt) basis and A is Koszul. The arc exchange relations defining L*ω*(P) are quadratic, however the *q*-determinant relations are not (because of the 1 in det*q*(*N*(*α*)) = 1), so L*ω*(P) is not quadratic but rather inhomogeneous quadratic. An *inhomogeneous quadratic algebra* is an algebra of the form $\mathcal{A}:= {{\raisebox{.2em}{$T(V)$}\left/\raisebox{-.2em}{$(R)$}\right.}}$, where $R\subset V^{\otimes 2}\oplus V\oplus \mathds{k} \subset T(V)$. We further make the assumptions that (*q**l*1) : *R* ∩ *V* = {0} and (*q**l*2) : (*R* ⊗ *V* + *V* ⊗ *R*) ∩ *V*⊗ 2 ⊂ *R* ∩ *V*⊗ 2. The hypothesis (*q**l*2) says that one cannot create new relations by adding an element to *R*, so it is not restrictive. Like before, we fix an ordered basis {*v**i*}*i* ∈ *I* of *V* and suppose that the relators of *R* have the form *r* = *v**i**v**j* − ∑(*k*, *l*) < (*i*, *j*)*λ**k**l**i**j**v**k**v**l* − *c**i*, *j*,  where *c**i*, *j* are some scalars and we suppose that two distinct relators have distinct leading terms. The associated quadratic algebra *q*A is the algebra with same generators *v**i* but where the relators have been changed by replacing the scalars *c**i*, *j* by 0. Let *q*B ⊂ *q*A and B ⊂ A be the two generating families defined by Equation. [ Theorem 4.3.18][theoremPBWbases2] Suppose that *q*B(3) ⊂ *q*A is free, then both *q*B and B are (PBW) bases of *q*A and A respectively and both *q*A and A are Koszul. There exists a linear surjective morphism *φ* : *q*A → A sending the generating family *q*B to B (see ). So, if B is a basis of A, then *q*B is free, therefore Theorem [theoremPBWbases2] implies that A is Koszul. Therefore, we have the [theoremKoszul] If B is a basis of A, then A is Koszul. **The relators of the stated skein presentations and PBW bases** For *α* ∈ G, we write B(*α*) = {(*α*+  +)*a*(*α*+  −)*b*(*α*−  −)*c*, *a*, *b*, *c* ≥ 0} ∪ {(*α*+  +)*a*(*α*−  +)*b*(*α*−  −)*c*, *a*, *b*, *c* ≥ 0} ⊂ L*ω*(P). Fix a total order  <  on the set G of generators and index its elements as G = {*α*1, …, *α**n*}, where *α**i* < *α**i* + 1. Let $$\underline{\mathcal{B}}^{\mathbb{G}}:= \{ m\_1 m\_2\ldots m\_n | \quad m\_i \in \mathcal{B}(\alpha\_i) \} \subset \mathcal{L}\_{\omega}(\mathbb{P}).$$ We want to apply Theorem [theoremKoszul] to prove that L*ω*(P) is Koszul. By definition, L*ω*(P) is an inhomogeneous quadratic algebra whose set of generators is AG = {*α**i**j*∣*α* ∈ G, *i*, *j* =  ± } and whose relations are the arcs exchange and *q*-determinant relations. We first define a total order  ≺  on AG by imposing that *α**a**b* ≺ *β**c**d* if *α* < *β* and that *α*+  + ≺ *α*+  − ≺ *α*−  + ≺ *α*−  −. The goal of this subsection is to rewrite the q-determinant and arc exchange relations such that they define a set of relators of the form whose leading terms are pairwise distinct, satisfying (*q**l*1) and (*q**l*2) and such that the set of leading terms is Leading Terms :  = {*α**a**b**β**c**d*∣such that either (*α* > *β*)or (*α* = *β*and either *a* < *c*or *b* < *d*)}. The set $ \underline{\mathcal{B}}^{\mathbb{G}}$ is the generating set defined by with this set of leading terms (*i.e.* $ \underline{\mathcal{B}}^{\mathbb{G}}$ is the set of elements *v*1…*v**n* where *v**i* ∈ AG and *v**i**v**i* + 1 is not in Leading Terms). At this stage, it will become clear that $ \underline{\mathcal{B}}^{\mathbb{G}}$ spans L*ω*(P). Once we’ll perform this task, we will prove in Step 3 that $ \underline{\mathcal{B}}^{\mathbb{G}}$ if free by showing that its image through Ψ : L*ω*(P) → S*ω*(**\Sigma**) is a basis of S*ω*(**\Sigma**). This will imply that Ψ is an isomorphism (so will prove Theorem [theorem1]) and Theorem [theoremKoszul] will imply that L*ω*(P) is Koszul (so it will prove Theorem [theorem2]). Consider two distinct generators *α*, *β* ∈ G such that *α* > *β*. For each *a*, *b*, *c*, *d* ∈ { ± }, we have an arc exchange relation of the form *α**a**b**β**c**d* = ∑*i**j**k**l* =  ±*c**a*, *b*, *c*, *d**i*, *j*, *k*, *l**β**i**j**α**k**l*,  where *c**a*, *b*, *c*, *d**i*, *j*, *k*, *l* are some scalars. We associate the relator *r* = *α**a**b**β**c**d* − ∑*i**j**k**l* =  ±*c**a*, *b*, *c*, *d**i*, *j*, *k*, *l**β**i**j**α**k**l*, whose leading term is *α**a**b**β**c**d* (because *α* > *β* implies that *α**a**b**β**c**d* ≻ *β**i**j**α**k**l*) and denote by *R**α*, *β* the set (of cardinal 16) of such relators. Now suppose that *α* ∈ G is of type *a*. The set of relations between the generators *α**i**j* are given by *M*(*α*) ⊙ *M*(*α*) = R− 1(*M*(*α*) ⊙ *M*(*α*))R,  and det*q*(*M*(*α*)) = 1. Note that in this case, the subalgebra of L*ω*(P) generated by the *α**i**j* is isomorphic to O*q*[SL2] ≅ S*ω*(B). We rewrite those relations as follows: $$\tag{Ra} \left\{ \begin{array}{ll} \alpha\_{+-}\alpha\_{++} = q\alpha\_{++}\alpha\_{+-}, & \alpha\_{-+}\alpha\_{++}=q\alpha\_{++}\alpha\_{-+}, \\ \alpha\_{--}\alpha\_{+-}=q\alpha\_{+-}\alpha\_{--}, & \alpha\_{--}\alpha\_{-+} = q\alpha\_{-+}\alpha\_{--}, \\ \alpha\_{+-}\alpha\_{-+} = q \alpha\_{++}\alpha\_{--} - q, & \alpha\_{-+}\alpha\_{+-}=q \alpha\_{++}\alpha\_{--} - q, \\ \alpha\_{--}\alpha\_{++} = q^2 \alpha\_{++}\alpha\_{--} + 1 - q^2. & \end{array} \right.$$ The associated set of relators *R**α* is defined by assigning to each of the seven equalities of the form *x* = *y* in the system *R**a*, the relator *r* :  = *x* − *y* with leading term *x*. Note that the set of leading terms of the elements of *R**α* is the set of elements *α**a**b**α**c**d* such that either *a* < *c* or *b* < *d*. Now suppose that *α* ∈ G is of type *d*. The set of relations between the generators *α**i**j* are given by $$\left( \mathds{1}\_2 \odot N(\alpha) \right) \mathscr{R}^{-1} \left( \mathds{1}\_2 \odot N(\alpha) \right)\mathscr{R} = \mathscr{R} \left( \mathds{1}\_2 \odot N(\alpha) \right) \mathscr{R}^{-1} \left( \mathds{1}\_2 \odot N(\alpha) \right), \quad \mbox{ and }\mathrm{det}\_{q^2}(N(\alpha))=1,$$ where *N*(*α*) = *C*− 1*M*(*α*). These relations generate the same ideal as the following set of relations: $$\tag{Rd} \left\{ \begin{array}{ll} \alpha\_{-+} \alpha\_{++} = \alpha\_{++}\alpha\_{-+} + (q-q^{-1})q^2 \alpha\_{+-}\alpha\_{--}, & \alpha\_{+-}\alpha\_{++} = q^2 \alpha\_{++} \alpha\_{+-}, \\ \alpha\_{--}\alpha\_{-+} = \alpha\_{-+}\alpha\_{--} + (q-q^{-1})q^2\alpha\_{+-}\alpha\_{--}, & \alpha\_{--}\alpha\_{+-} = q^2 \alpha\_{+-} \alpha\_{--}, \\ \alpha\_{+-}\alpha\_{-+} = \alpha\_{++}\alpha\_{--}- (q- q^{-1})^2 \alpha\_{+-}^2 - A, & \alpha\_{-+}\alpha\_{+-}= \alpha\_{++}\alpha\_{--}- (q- q^{-1})^2 \alpha\_{+-}^2 - A, \\ \alpha\_{--}\alpha\_{++} = q^2 \alpha\_{++}\alpha\_{--} - q^2(q-q^{-1})^2 \alpha\_{+-}^2 +A(1-q^2).& \end{array} \right.$$ As before, we denote by *R**α* the set of relators obtained from system (Rd) by assigning to each of the seven equalities of the form *x* = *y* in the system *R**a*, the relator *r* :  = *x* − *y* with leading term *x*. Again, the set of leading terms of the elements of *R**α* is the set of elements *α**a**b**α**c**d* such that either *a* < *c* or *b* < *d*. For *α* ∈ G of type *c*, the set of relations between the elements *α**i**j* can be obtained from the system (Rd) using the reflection anti-involution. Once re-arranging the terms, we get the system of relations: $$\tag{Rc} \left\{ \begin{array}{ll} \alpha\_{-+} \alpha\_{++} = \alpha\_{++}\alpha\_{-+} + (q-q^{-1}) \alpha\_{+-}\alpha\_{--}, & \alpha\_{+-}\alpha\_{++} = q^2 \alpha\_{++} \alpha\_{+-}, \\ \alpha\_{--}\alpha\_{-+} = \alpha\_{-+}\alpha\_{--} + (q-q^{-1})\alpha\_{+-}\alpha\_{--}, & \alpha\_{--}\alpha\_{+-} = q^2 \alpha\_{+-} \alpha\_{--}, \\ \alpha\_{+-}\alpha\_{-+} = q^2\alpha\_{++}\alpha\_{--}- A^3, & \alpha\_{-+}\alpha\_{+-}= q^2\alpha\_{++}\alpha\_{--}- A^3, \\ \alpha\_{--}\alpha\_{++} = q^2 \alpha\_{++}\alpha\_{--} +(q-q^{-1})^2 \alpha\_{+-}^2 +A^{-1}(1-q^2).& \end{array} \right.$$ Like previously, we denote by *R**α* the associated set of relators and note that the set of leading terms is the set of elements *α**a**b**α**c**d* such that either *a* < *c* or *b* < *d*. Let *V* be the free $\mathds{k}$-module with basis AG and $R \subset \mathds{k}\oplus V^{\otimes 2} \subset T(V)$ be the union of the sets of relators *R**α*, *β* and *R**α*, where *α*, *β* ∈ G and *α* > *β*. Then $\mathcal{L}\_{\omega}(\mathbb{P})={{\raisebox{.2em}{$T(V)$}\left/\raisebox{-.2em}{$(R)$}\right.}}$, the leading terms of *R* are pairwise distinct and they form the set LeadingTerms of Equation and the hypotheses (*q**l*1) and (*q**l*2) are obviously satisfied. Therefore, if we prove that $\underline{\mathcal{B}}^{\mathbb{G}}$ is a basis of L*ω*(P) then Theorem [theoremKoszul] would imply that L*ω*(P) is Koszul. Step 3: Injectivity of Ψ ------------------------ Denote by BG ⊂ S*ω*(P) the image of $\underline{\mathcal{B}}^{\mathbb{G}}$ by Ψ : L*ω*(P) → S*ω*(**\Sigma**). [propnewbasis] The set BG is a basis of S*ω*(**\Sigma**). 1. The morphism Ψ : L*ω*(P) → S*ω*(**\Sigma**) is an isomorphism. 2. The family BG is a PBW basis and S*ω*(**\Sigma**) is Koszul. The fact that BG spans linearly S*ω*(**\Sigma**) follows from the surjectivity of Ψ (so follows from Proposition [propgenerators]), however we will reprove this fact. The proof of Theorem [propnewbasis] is divided in two steps: first we introduce another family B+G ⊂ S*ω*(**\Sigma**) and prove that B+G is free by relating it to the basis B. Next we use a filtration of S*ω*(**\Sigma**) to deduce that BG is free from the fact that B+G is free. For *α* ∈ G and
arxiv_0000036
& 0.264 & 0.336 & ∞ & 0.867 & 1.05 & ∞ & 3.23 & 3.95 108 & 3 ⋅ 107 & ∞ & 3.19 & 3.56 & ∞ & 9.75 & 10.3 & ∞ & 145 & 160 [!ht] |c|c|c|c|c|c|c|c|c|c|c| *m*1[*M*⊙] & *m*2[*M*⊙] & & & & & & & RWF & SWF & FWF & RWF & SWF & FWF & RWF & SWF & FWF 3 ⋅ 105 & 105 & 4.95 ⋅ 10− 4 & 4.11 ⋅ 10− 4 & 2.94 ⋅ 10− 4 & 1.47 ⋅ 10− 3 & 1.17 ⋅ 10− 3 & 8.12 ⋅ 10− 4 & 9.63 ⋅ 10− 3 & 4.83 ⋅ 10− 3 & 3.32 ⋅ 10− 3 106 & 105 & 3.81 ⋅ 10− 4 & 2.72 ⋅ 10− 4 & 1.97 ⋅ 10− 4 & 8.98 ⋅ 10− 4 & 6.11 ⋅ 10− 4 & 4.38 ⋅ 10− 4 & 2.72 ⋅ 10− 3 & 1.78 ⋅ 10− 3 & 1.39 ⋅ 10− 3 106 & 3 ⋅ 105 & 8.68 ⋅ 10− 4 & 7.30 ⋅ 10− 4 & 4.97 ⋅ 10− 4 & 2.16 ⋅ 10− 3 & 1.69 ⋅ 10− 3 & 1.20 ⋅ 10− 3 & 1.05 ⋅ 10− 2 & 5.57 ⋅ 10− 3 & 3.93 ⋅ 10− 3 3 ⋅ 105 & 3 ⋅ 105 & 9.96 ⋅ 10− 4 & 9.62 ⋅ 10− 4 & 7.46 ⋅ 10− 4 & 6.81 ⋅ 10− 3 & 6.45 ⋅ 10− 3 & 3.93 ⋅ 10− 3 & 8.34 ⋅ 10− 2 & 7.08 ⋅ 10− 2 & 3.75 ⋅ 10− 2 106 & 106 & 1.53 ⋅ 10− 3 & 1.50 ⋅ 10− 3 & 1.17 ⋅ 10− 3 & 1.31 ⋅ 10− 2 & 1.27 ⋅ 10− 2 & 6.82 ⋅ 10− 3 & 0.218 & 0.189 & 7.51 ⋅ 10− 2 107 & 106 & 1.37 ⋅ 10− 3 & 8.44 ⋅ 10− 4 & 4.58 ⋅ 10− 4 & 3.68 ⋅ 10− 3 & 1.90 ⋅ 10− 3 & 1.07 ⋅ 10− 3 & 1.64 ⋅ 10− 2 & 6.33 ⋅ 10− 3 & 3.45 ⋅ 10− 3 107 & 3 ⋅ 106 & 3.87 ⋅ 10− 3 & 2.41 ⋅ 10− 3 & 1.21 ⋅ 10− 3 & 1.47 ⋅ 10− 2 & 5.93 ⋅ 10− 3 & 3.03 ⋅ 10− 3 & 0.118 & 2.15 ⋅ 10− 2 & 1.24 ⋅ 10− 2 107 & 107 & 0.108 & 5.00 ⋅ 10− 2 & 2.04 ⋅ 10− 2 & 1.20 & 0.488 & 0.136 & 9.77 & 5.45 & 1.17 3 ⋅ 107 & 107 & 0.438 & 1.93 ⋅ 10− 2 & 9.36 ⋅ 10− 3 & 1.87 & 6.19 ⋅ 10− 2 & 3.31 ⋅ 10− 2 & 7.24 & 0.269 & 0.147 3 ⋅ 107 & 3 ⋅ 107 & 16.8 & 2.04 & 1.31 & 83.6 & 11.9 & 5.35 & 499 & 61.4 & 26.3 108 & 107 & ∞ & 0.256 & 0.316 & ∞ & 1.10 & 1.35 & ∞ & 4.65 & 5.62 108 & 3 ⋅ 107 & ∞ & 3.47 & 4.19 & ∞ & 15.1 & 16.3 & ∞ & 172 & 185 [!ht] |c|c|c|c|c|c|c|c|c|c|c| *m*1[*M*⊙] & *m*2[*M*⊙] & & & & & & & RWF & SWF & FWF & RWF & SWF & FWF & RWF & SWF & FWF 3 ⋅ 105 & 105 & 8.05 ⋅ 10− 4 & 7.19 ⋅ 10− 4 & 5.40 ⋅ 10− 4 & 3.23 ⋅ 10− 3 & 2.70 ⋅ 10− 3 & 1.91 ⋅ 10− 3 & 2.30 ⋅ 10− 2 & 1.34 ⋅ 10− 2 & 9.81 ⋅ 10− 3 106 & 105 & 1.71 ⋅ 10− 3 & 1.09 ⋅ 10− 3 & 8.34 ⋅ 10− 4 & 5.24 ⋅ 10− 3 & 3.61 ⋅ 10− 3 & 2.78 ⋅ 10− 3 & 3.20 ⋅ 10− 2 & 2.00 ⋅ 10− 2 & 1.45 ⋅ 10− 2 106 & 3 ⋅ 105 & 1.41 ⋅ 10− 3 & 1.22 ⋅ 10− 3 & 9.01 ⋅ 10− 4 & 4.72 ⋅ 10− 3 & 3.91 ⋅ 10− 3 & 2.78 ⋅ 10− 3 & 2.65 ⋅ 10− 2 & 1.63 ⋅ 10− 2 & 1.20 ⋅ 10− 2 3 ⋅ 105 & 3 ⋅ 105 & 1.02 ⋅ 10− 3 & 9.58 ⋅ 10− 4 & 7.83 ⋅ 10− 4 & 6.66 ⋅ 10− 3 & 6.34 ⋅ 10− 3 & 3.95 ⋅ 10− 3 & 7.84 ⋅ 10− 2 & 6.33 ⋅ 10− 2 & 3.68 ⋅ 10− 2 106 & 106 & 1.67 ⋅ 10− 3 & 1.66 ⋅ 10− 3 & 1.22 ⋅ 10− 3 & 1.27 ⋅ 10− 2 & 1.25 ⋅ 10− 2 & 6.76 ⋅ 10− 3 & 0.217 & 0.186 & 7.07 ⋅ 10− 2 107 & 106 & 6.33 ⋅ 10− 3 & 3.68 ⋅ 10− 3 & 1.98 ⋅ 10− 3 & 3.31 ⋅ 10− 2 & 1.63 ⋅ 10− 2 & 9.66 ⋅ 10− 3 & 0.187 & 7.57 ⋅ 10− 2 & 4.01 ⋅ 10− 2 107 & 3 ⋅ 106 & 7.35 ⋅ 10− 3 & 4.64 ⋅ 10− 3 & 2.42 ⋅ 10− 3 & 3.32 ⋅ 10− 2 & 1.72 ⋅ 10− 2 & 9.46 ⋅ 10− 3 & 0.193 & 6.87 ⋅ 10− 2 & 4.13 ⋅ 10− 2 107 & 107 & 0.111 & 4.66 ⋅ 10− 2 & 1.94 ⋅ 10− 2 & 1.22 & 0.511 & 0.130 & 9.94 & 5.60 & 1.11 3 ⋅ 107 & 107 & 0.594 & 4.02 ⋅ 10− 2 & 2.07 ⋅ 10− 2 & 4.44 & 0.205 & 0.104 & 26.8 & 0.960 & 0.496 3 ⋅ 107 & 3 ⋅ 107 & 16.1 & 2.20 & 1.34 & 83.0 & 11.4 & 5.43 & 515 & 59.5 & 26.5 108 & 107 & ∞ & 1.07 & 1.38 & ∞ & 11.7 & 13.7 & ∞ & 52.8 & 62.6 108 & 3 ⋅ 107 & ∞ & 8.31 & 9.25 & ∞ & 48.0 & 50.4 & ∞ & 657 & 700 [!ht] |c|c|c|c|c|c|c|c|c|c|c| *m*1[*M*⊙] & *m*2[*M*⊙] & & & & & & & RWF & SWF & FWF & RWF & SWF & FWF & RWF & SWF & FWF 3 ⋅ 105 & 105 & 5.09 & 4.96 & 3.18 & 20.2 & 18.5 & 12.7 & 92.2 & 85.8 & 67.2 106 & 105 & 8.67 & 8.13 & 7.14 & 34.1 & 28.2 & 20.1 & 124 & 100 & 82.5 106 & 3 ⋅ 105 & 8.95 & 8.61 & 6.28 & 31.4 & 28.0 & 19.8 & 124 & 110 & 85.0 3 ⋅ 105 & 3 ⋅ 105 & 6.00 & 5.81 & 3.73 & 26.6 & 25.4 & 18.6 & 113 & 109 & 97.7 106 & 106 & 8.45 & 8.40 & 5.42 & 38.5 & 37.6 & 26.3 & 158 & 154 & 129 107 & 106 & 19.2 & 15.5 & 8.34 & 64.2 & 48.4 & 23.2 & 316 & 199 & 113 107 & 3 ⋅ 106 & 19.5 & 17.4 & 7.31 & 84.3 & 65.2 & 31.0 & 461 & 283 & 158 107 & 107 & 32.7 & 27.4 & 11.6 & 202 & 155 & 77.7 & 1360 & 818 & 496 3 ⋅ 107 & 107 & 169 & 68.5 & 24.4 & 1500 & 319 & 133 & 16600 & 1550 & 738 3 ⋅ 107 & 3 ⋅ 107 & 7910 & 537 & 363 & 188000 & 2590 & 2570 & 2890000 & 14700 & 16400 108 & 107 & ∞ & 998 & 1300 & ∞ & 3380 & 4400 & ∞ & 18200 & 23800 108 & 3 ⋅ 107 & ∞ & 5900 & 6510 & ∞ & 26300 & 31000 & ∞ & 237000 & 279000 [!ht] |c|c|c|c|c|c|c|c|c|c|c| *m*1[*M*⊙] & *m*2[*M*⊙] & & & & & & & RWF & SWF & FWF & RWF & SWF & FWF & RWF & SWF & FWF 3 ⋅ 105 & 105 & 0.795 & 0.778 & 0.453 & 3.76 & 3.49 & 2.10 & 13.8 & 12.1 & 7.40 106 & 105 & 2.17 & 1.69 & 0.985 & 10.2 & 7.61 & 4.50 & 23.6 & 15.5 & 10.2 106 & 3 ⋅ 105 & 1.83 & 1.63 & 0.984 & 8.63 & 7.70 & 4.47 & 24.4 & 19.2 & 12.5 3 ⋅ 105 & 3 ⋅ 105 & 1.00 & 1.00 & 0.575 & 5.52 & 5.29 & 3.17 & 20.3 & 18.6 & 13.8 106 & 106 & 1.62 & 1.61 & 0.948 & 9.11 & 9.04 & 5.26 & 31.9 & 29.9 & 19.4 107 & 106 & 3.40 & 2.81 & 1.17 & 15.7 & 12.5 & 5.20 & 41.3 & 27.2 & 12.3 107 & 3 ⋅ 106 & 3.04 & 2.65 & 1.11 & 13.8 & 11.9 & 4.87 & 54.3 & 37.2 & 17.3 107 & 107 & 5.77 & 4.64 & 1.93 & 24.8 & 21.6 & 9.49 & 125 & 90.8 & 48.3 3 ⋅ 107 & 107 & 36.4 & 11.1 & 4.00 & 164 & 47.9 & 17.5 & 896 & 157 & 69.9 3 ⋅ 107 & 3 ⋅ 107 & 727 & 108 & 57.2 & 5740 & 306 & 230 & 85000 & 1090 & 1350 108 & 107 & ∞ & 201 & 251 & ∞ & 724 & 930 & ∞ & 2610 & 3600 108 & 3 ⋅ 107 & ∞ & 1140 & 1340 & ∞ & 3890 & 4630 & ∞ & 44400 & 52500 [!ht] |c|c|c|c|c|c|c|c|c|c|c| *m*1[*M*⊙] & *m*2[*M*⊙] & & & & & & & RWF & SWF & FWF & RWF & SWF & FWF & RWF & SWF & FWF 3 ⋅ 105 & 105 & 8.67 ⋅ 10− 4 & 8.24 ⋅ 10− 4 & 5.76 ⋅ 10− 4 & 2.94 ⋅ 10− 3 & 2.43 ⋅ 10− 3 & 1.75 ⋅ 10− 3 & 1.28 ⋅ 10− 2 & 1.01 ⋅ 10− 2 & 8.43 ⋅ 10− 3 106 & 105 & 2.08 ⋅ 10− 3 & 1.44 ⋅ 10− 3 & 1.03 ⋅ 10− 3 & 4.90 ⋅ 10− 3 & 3.51 ⋅ 10− 3 & 2.44 ⋅ 10− 3 & 1.49 ⋅ 10− 2 & 8.74 ⋅ 10− 3 & 7.04 ⋅ 10− 3 106 & 3 ⋅ 105 & 1.75 ⋅ 10− 3 & 1.43 ⋅ 10− 3 & 1.11 ⋅ 10− 3 & 4.62 ⋅ 10− 3 & 3.74 ⋅ 10− 3 & 2.70 ⋅ 10− 3 & 1.97 ⋅ 10− 2 & 1.39 ⋅ 10− 2 & 1.12 ⋅ 10− 2 3 ⋅ 105 & 3 ⋅ 105 & 1.13 ⋅ 10− 3 & 1.08 ⋅ 10− 3 & 7.20 ⋅ 10− 4 & 4.41 ⋅ 10− 3 & 3.80 ⋅ 10− 3 & 2.88 ⋅ 10− 3 & 1.89 ⋅ 10− 2 & 1.48 ⋅ 10− 2 & 1.27 ⋅ 10− 2 106 & 106 & 1.85 ⋅ 10− 3 & 1.77 ⋅ 10− 3 & 1.12 ⋅ 10− 3 & 6.88 ⋅ 10− 3 & 6.20 ⋅ 10− 3 & 4.36 ⋅ 10− 3 & 2.65 ⋅ 10− 2 & 2.02 ⋅ 10− 2 & 1.71 ⋅ 10− 2 107 & 106 & 3.14 ⋅ 10− 3 & 2.32 ⋅ 10− 3 & 1.48 ⋅ 10− 3 & 8.53 ⋅ 10− 3 & 6.25 ⋅ 10− 3 & 3.45 ⋅ 10− 3 & 4.14 ⋅ 10− 2 & 1.94 ⋅ 10− 2 & 1.26 ⋅ 10− 2 107 & 3 ⋅ 106 & 3.84 ⋅ 10− 3 & 3.11 ⋅ 10− 3 & 2.18 ⋅ 10− 3 & 1.09 ⋅ 10− 2 & 8.03 ⋅ 10− 3 & 4.69 ⋅ 10− 3 & 7.26 ⋅ 10− 2 & 3.43 ⋅ 10− 2 & 1.99 ⋅ 10− 2 107 & 107 & 1.45 ⋅ 10− 2 & 8.99 ⋅ 10− 3 & 6.78 ⋅ 10− 3 & 6.05 ⋅ 10− 2 & 2.76 ⋅ 10− 2 & 2.03 ⋅ 10− 2 & 0.277 & 0.113 & 7.82 ⋅ 10− 2 3 ⋅ 107 & 107 & 0.212 & 1.94 ⋅ 10− 2 & 1.68 ⋅ 10− 2 & 0.801 & 4.63 ⋅ 10− 2 & 3.52 ⋅ 10− 2 & 3.44 & 0.186 & 0.101 3 ⋅ 107 & 3 ⋅ 107 & 3.52 & 0.188 & 0.213 & 31.7 & 0.614 & 0.523 & 462 & 2.33 & 2.23 108 & 107 & ∞ & 0.257 & 0.424 & ∞ & 0.698 & 1.11 & ∞ & 2.53 & 4.00 108 & 3 ⋅ 107 & ∞ & 1.76 & 3.17 & ∞ & 5.53 & 9.16 & ∞ & 69.7 & 111 We found that the binaries can roughly be separated into three classes: low unequal-mass binaries, low equal-mass binaries ($M \lesssim 10^7 M\_\odot$), and high-mass binaries ($M \gtrsim 10^7 M\_\odot$). We discuss below these three distinct cases, and plot the estimated error distributions for a representative sample of each one of the three classes on the parameters Δ*m*1/*m*1, Δ*χ*1, 2*a*, and Δ*d**L*/*d**L*. The distributions for Δ*m*2/*m*2 are similar to those for Δ*m*1/*m*1, those for Δ*χ*2 to those for Δ*χ*1, and those for 2*b* to those for 2*a*. In general, for lower-mass binaries and independently on the mass ratio, we find that the errors expected for extrinsic parameters using the FWF are  ∼ 1.5 times the ones expected for the RWF. This factor is  ∼ 1.2 comparing the SWF to the RWF. This is changed when considering higher-mass binaries, because the second harmonic, the only one present in the RWF, spends very few cycles inside the LISA band. To discuss the mass limit above which no information can be extracted from a system anymore, we present at the end the proportion of systems for which the individual masses and the luminosity distance can be measured with 50% and 25% accuracy, for all samples. We also plot for different mass ratios the maximum redshift at which information can be extracted from a binary system, as a function of *m*1. We present then for each waveform how far the measurement of supermassive black hole mergers could help determining the Hubble diagram. To do so, we compute up to what redshift half of the systems can be localized inside the field of view of Hubble and/or XMM-Newton (see e.g. ) which we take to be 30ʹ wide, with an error on *d**L* smaller than 10%. Low unequal-mass binaries ------------------------- We put in this class all systems with total mass smaller than 107*M*⊙, and with a mass ratio of at least 1:3. We chose to present as a representative sample systems with *m*1 = 106*M*⊙ and *m*2 = 3 ⋅ 105*M*⊙. We plot the estimated distribution of the errors on *m*1 in Fig. [fig:m1635], on *χ*1 in Fig. [fig:ch1635], on the sky positioning in Fig. [fig:2a1635], and on *d**L* in Fig. [fig:dl1635]. For these systems, the gain in accuracy obtained in the determination of all interesting parameters with respect to the RWF is typically a factor  ∼ 1.5 for the FWF and a factor  ∼ 1.2 for the SWF. However, when the mass ratio is close to 1:3, the distribution of the errors on the individual masses for the RWF has a relatively long tail of bad errors, which is absent for the SWF and FWF. The fact that including such extra structure as contained in the FWF fails to provide much extra accuracy can allow including extra parameters in the template. It has been recently suggested that the eccentricity of SMBH binaries could be significant in the last stages of the inspiral . Thus, inserting eccentricity parameters  could be important. Furthermore, GW observations could help constraining alternative gravity theories . ![Estimated distribution of the measurement error on m_1 for a low unequal-mass binary system with m_1 = 10^6 M_\odot and m_2 = 3 \cdot 10^5 M_\odot. We expect to have errors as high as 8 \cdot 10^{-3} with the RWF (95\% percentile), whereas we do not expect errors higher than 1.5 \cdot 10^{-3} with the FWF.[fig:m1635]](3)Estimated distribution of the measurement error on *m*1 for a low unequal-mass binary system with *m*1 = 106*M*⊙ and *m*2 = 3 ⋅ 105*M*⊙. We expect to have errors as high as 8 ⋅ 10− 3 with the RWF (95% percentile), whereas we do not expect errors higher than 1.5 ⋅ 10− 3 with the FWF.[fig:m1635] ![Estimated distribution of the measurement error on \chi_1 for a low unequal-mass binary system with m_1 = 10^6 M_\odot and m_2 = 3 \cdot 10^5 M_\odot. We expect the error to be 1.5 {\ \mbox{-} \ }2 times lower using the FWF than using the RWF.[fig:ch1635]](4)Estimated distribution of the measurement error on *χ*1 for a low unequal-mass binary system with *m*1 = 106*M*⊙ and *m*2 = 3 ⋅ 105*M*⊙. We expect the error to be 1.5 - 2 times lower using the FWF than using the RWF.[fig:ch1635] ![Estimated distribution of the major axis of the positioning error ellipse for a low unequal-mass binary system with m_1 = 10^6 M_\odot and m_2 = 3 \cdot 10^5 M_\odot. We expect the error to be \sim 1.5 times lower using the FWF than using the RWF.[fig:2a1635]](5)Estimated distribution of the major axis of the positioning error ellipse for a low unequal-mass binary system with *m*1 = 106*M*⊙ and *m*2 = 3 ⋅ 105*M*⊙. We expect the error to be  ∼ 1.5 times lower using the FWF than using the RWF.[fig:2a1635] ![Estimated distribution of the measurement error on d_L for a low unequal-mass binary systems with m_1 = 10^6 M_\odot and m_2 = 3 \cdot 10^5 M_\odot. We expect the error to be \sim 1.5 times lower using the FWF than using the RWF.[fig:dl1635]](6)Estimated distribution of the measurement error on *d**L* for a low unequal-mass binary systems with *m*1 = 106*M*⊙ and *m*2 = 3 ⋅ 105*M*⊙. We expect the error to be  ∼ 1.5 times lower using the FWF than using the RWF.[fig:dl1635] Low equal-mass binaries ----------------------- We put in this class all systems of equal-mass black holes, with total mass smaller than 107*M*⊙. We chose to present as a representative sample systems with *m*1 = *m*2 = 3 ⋅ 105*M*⊙. We plot the estimated distribution of the errors on *m*1 in Fig. [fig:m3535], on *χ*1 in Fig. [fig:ch3535], on the sky positioning in Fig. [fig:2a3535], and on *d**L* in Fig. [fig:dl3535]. In these cases, the errors on extrinsic parameters are, as for unequal-mass systems, improved by a factor  ∼ 1.5 for the FWF with respect to the RWF. The errors on the spins are improved for the worst cases by a factor 2 - 4, and typically by a factor 1.5 - 2 for the FWF with respect to the two other waveforms. However, the error on the individual masses is improved typically by a factor 3.5 - 4.5, and even by a factor 10 - 20 in the worst cases, comparing the FWF with the two others. Thus, much more information can be extracted from a measure of an equal-mass binary system using the former waveform than one of the latter. The SWF brings little improvement for intrinsic parameters in these cases, because the odd harmonics are absent from it, so that it has only two corrections to the RWF instead of five for unequal-mass systems. ![Estimated distribution of the measurement error on m_1 for a low equal-mass binary system with m_1 = m_2 = 3 \cdot 10^5 M_\odot. The FWF clearly gives better results than the two other waveforms.[fig:m3535]](7)Estimated distribution of the measurement error on *m*1 for a low equal-mass binary system with *m*1 = *m*2 = 3 ⋅ 105*M*⊙. The FWF clearly gives better results than the two other waveforms.[fig:m3535] ![Estimated distribution of the measurement error on \chi_1 for a low equal-mass binary system with m_1 = m_2 = 3 \cdot 10^5 M_\odot. The improvement on the median value is a factor 1.5 {\ \mbox{-} \ }2 and on the 95\% quantile a factor 2 {\ \mbox{-} \ }4 for the FWF with respect to the other two waveforms.[fig:ch3535]](8)Estimated distribution of the measurement error on *χ*1 for a low equal-mass binary system with *m*1 = *m*2 = 3 ⋅ 105*M*⊙. The improvement on the median value is a factor 1.5 - 2 and on the 95% quantile a factor 2 - 4 for the FWF with respect to the other two waveforms.[fig:ch3535] ![Estimated distribution of the major axis of the positioning error ellipse for a low equal-mass binary system with m_1 = m_2 = 3 \cdot 10^5 M_\odot. We expect the error to be \sim 1.5 times lower using the FWF than using the RWF.[fig:2a3535]](9)Estimated distribution of the major axis of the positioning error ellipse for a low equal-mass binary system with *m*1 = *m*2 = 3 ⋅ 105*M*⊙. We expect the error to be  ∼ 1.5 times lower using the FWF than using the RWF.[fig:2a3535] ![Estimated distribution of the measurement error on d_L for a low equal-mass binary system with m_1 = m_2 = 3 \cdot 10^5 M_\odot. We expect the error to be \sim 1.5 times lower using the FWF than using the RWF.[fig:dl3535]](10)Estimated distribution of the measurement error on *d**L* for a low equal-mass binary system with *m*1 = *m*2 = 3 ⋅ 105*M*⊙. We expect the error to be  ∼ 1.5 times lower using the FWF than using the RWF.[fig:dl3535] High-mass binaries ------------------ We put in this class all systems with total mass higher than 107*M*⊙. We chose to present as a representative sample systems with *m*1 = 3 ⋅ 107*M*⊙ and *m*2 = 107*M*⊙. We plot the estimated distribution of the errors on *m*1 in Fig. [fig:m3717], on *χ*1 in Fig. [fig:ch3717], on the sky positioning in Fig. [fig:2a3717], and on *d**L* in Fig. [fig:dl3717]. For this class of binaries, the second harmonic is hardly or not at all visible in the LISA band, so that the RWF fails to provide good accuracy unless the total mass is close to 107*M*⊙. However, the SWF and FWF still provide relatively high precision measurement on the masses, spins and luminosity distance of a system with *m*1 = 3 ⋅ 107*M*⊙ at redshift one. For equal-mass systems in this class, the FWF provides in all cases an improvement of a factor 10 - 30 for the determination of the masses with respect to the SWF. For other parameters and/or other mass ratios, the improvement using the FWF is of a factor 1.5 - 2 with respect to the SWF. The fact that the SWF seems to give better results than the FWF for the highest-mass systems comes from the fact that the SNR for systems in this mass range is higher with the former than with the latter. However, we do not expect to extract more information from a more approximate waveform. Furthermore, some information can still be extracted from binaries that are completely invisible to the RWF using higher harmonics. ![Estimated distribution of the measurement error on m_1 for a high-mass binary system with m_1 = 3 \cdot 10^7 M_\odot and m_2 = 10^7 M_\odot. Very few systems are measurable with the RWF with a precision less than 50\%, whereas the other two waveforms provide in the worst cases a few percent precision, the FWF typically a factor of 1.5 {\ \mbox{-} \ }2 better than the SWF.[fig:m3717]](11)Estimated distribution of the measurement error on *m*1 for a high-mass binary system with *m*1 = 3 ⋅ 107*M*⊙ and *m*2 = 107*M*⊙. Very few systems are measurable with the RWF with a precision less than 50%, whereas the other two waveforms provide in the worst cases a few percent precision, the FWF typically a factor of 1.5 - 2 better than the SWF.[fig:m3717] ![Estimated distribution of the measurement error on \chi_1 for a high-mass binary system with m_1 = 3 \cdot 10^7 M_\odot and m_2 = 10^7 M_\odot. We can see that no information on the spins can be extracted with the RWF, whereas some can be extracted with the two others in all cases, a factor of two better for the FWF than for the SWF.[fig:ch3717]](12)Estimated distribution of the measurement error on *χ*1 for a high-mass binary system with *m*1 = 3 ⋅ 107*M*⊙ and *m*2 = 107*M*⊙. We can see that no information on the spins can be extracted with the RWF, whereas some can be extracted with the two others in all cases, a factor of two better for the FWF than for the SWF.[fig:ch3717] ![Estimated distribution of the major axis of the positioning error ellipse for a high-mass binary system with m_1 = 3 \cdot 10^7 M_\odot and m_2 = 10^7 M_\odot. We expect to have a positioning error in the best cases (5\% quantile) of 2.8^\circ for the RWF, of 1^\circ for the SWF, and of 25' for the FWF.[fig:2a3717]](13)Estimated distribution of the major axis of the positioning error ellipse for a high-mass binary system with *m*1 = 3 ⋅ 107*M*⊙ and *m*2 = 107*M*⊙. We expect to have a positioning error in the best cases (5% quantile) of 2.8∘ for the RWF, of 1∘ for the SWF, and of 25ʹ for the FWF.[fig:2a3717] ![Estimated distribution of the measurement error on d_L for a high-mass binary system with m_1 = 3 \cdot 10^7 M_\odot and m_2 = 10^7 M_\odot. We do not expect a measurement more accurate than 20\% to be possible with the RWF, whereas the accuracy should be always less than 20\% with the SWF, and less than 10\% with the FWF.[fig:dl3717]](14)Estimated distribution of the measurement error on *d**L* for a high-mass binary system with *m*1 = 3 ⋅ 107*M*⊙ and *m*2 = 107*M*⊙. We do not expect a measurement more accurate than 20% to be possible with the RWF, whereas the accuracy should be always less than 20% with the SWF, and less than 10% with the FWF.[fig:dl3717] Upper mass limit ---------------- We present here for all samples, the proportion of systems for which both individual masses can be measured at the level of 25% and 50% at a redshift of *z* = 1 in Table [tab:mofz], as well as the luminosity distance in Table [tab:dlofz]. When at least 25% accuracy is obtainable for all systems of a sample, we do not show it on the table. [!ht] |c|c|c|c|c|c|c|c| *m*1[*M*⊙] & *m*2[*M*⊙] & & & & RWF & SWF & FWF & RWF & SWF & FWF 107 & 3 ⋅ 106 & 98% & 100% & 100% & 100% & 100% & 100% 107 & 107 & 50% & 75% & 100% & 71% & 89% & 100% 3 ⋅ 107 & 107 & 1% & 100% & 100% & 5% & 100% & 100% 3 ⋅ 107 & 3 ⋅ 107 & 0% & 0% & 84% & 0% & 3% & 98% 108 & 107 & 0% & 4% & 2% & 0% & 21% & 15% 108 & 3 ⋅ 107 & 0% & 0% & 0% & 0% & 0% & 0% [!ht] |c|c|c|c|c|c|c|c| *m*1[*M*⊙] & *m*2[*M*⊙] & & & & RWF & SWF & FWF & RWF & SWF & FWF 107 & 107 & 94% & 99% & 100% & 99% & 100% & 100% 3 ⋅ 107 & 107 & 8% & 97% & 99% & 28% & 100% & 100% 3 ⋅ 107 & 3 ⋅ 107 & 0% & 12% & 9% & 0% & 40% & 47% 108 & 107 & 0% & 5% & 1% & 0% & 29% & 10% 108 & 3 ⋅ 107 & 0% & 0% & 0% & 0% & 2% & 0% We see in the tables that the RWF reaches its limits for 107*M*⊙ binaries, whereas the SWF and FWF can still provide significant information for 3 ⋅ 107*M*⊙ binaries, and even for some 108*M*⊙ binaries with high enough mass ratio. Furthermore, we computed from our simulations the maximum redshift at which a binary system is observable, as a function of *m*1, for different values of the mass ratio. We chose to call a system with parameters (*m*1, *m*2, *z*) observable if at least half of the systems of these masses at this redshift have both individual masses measurable with at least 25% precision. We present in Fig. [fig:Mq1] the maximum redshift at which equal-mass systems can be observed, in Fig. [fig:Mq3] the same for systems with a mass ratio between 1:3 and 3:10, and in Fig. [fig:Mq10] the same for systems with a mass ratio of 1:10. Some points are absent of the plots, because no signal at all could be extracted from the RWF when the higher-mass black hole had a redshifted mass *m*1 ≈ 108*M*⊙. ![Maximum redshift at which a system is observable as a function of the mass of the black holes, for equal-mass systems. The FWF allows to observe 10^4 {\ \mbox{-} \ }10^5 M_\odot binaries up to z = 30 {\ \mbox{-} \ }50, the other two up to z = 15 {\ \mbox{-} \ }25. A binary of \sim 2 \cdot 10^7 M_\odot black holes should be observable up to z \approx 2 with the FWF, z \approx 1 with the SWF, and z \approx 0.3 with the RWF.[fig:Mq1]](15)Maximum redshift at which a system is observable as a function of the mass of the black holes, for equal-mass systems. The FWF allows to observe 104 - 105*M*⊙ binaries up to *z* = 30 - 50, the other two up to *z* = 15 - 25. A binary of  ∼ 2 ⋅ 107*M*⊙ black holes should be observable up to *z* ≈ 2 with the FWF, *z* ≈ 1 with the SWF, and *z* ≈ 0.3 with the RWF.[fig:Mq1] ![Maximum redshift at which a system is observable as a function of the mass of the most massive black hole, for systems with a mass ratio between 1:3 and 3:10. A binary with m_1 \approx 2 \cdot 10^4 M_\odot should be observable up to z \approx 36 with the FWF, up to z \approx 30 with the SWF, up to z \approx 28 with the RWF. A binary with m_1 \approx 10^7 M_\odot should be observable up to z \approx 7 with the FWF, up to z \approx 6 with the SWF, and up to z \approx 2 with the RWF.[fig:Mq3]](16)Maximum redshift at which a system is observable as a function of the mass of the most massive black hole, for systems with a mass ratio between 1:3 and 3:10. A binary with *m*1 ≈ 2 ⋅ 104*M*⊙ should be observable up to *z* ≈ 36 with the FWF, up to *z* ≈ 30 with the SWF, up to *z* ≈ 28 with the RWF. A binary with *m*1 ≈ 107*M*⊙ should be observable up to *z* ≈ 7 with the FWF, up to *z* ≈ 6 with the SWF, and up to *z* ≈ 2 with the RWF.[fig:Mq3] ![Maximum redshift at which a system is observable as a function of the mass of the most massive black hole, for systems with a mass ratio of 1:10. A binary with m_1 \approx 6 \cdot 10^4 M_\odot should be observable up to z \approx 38 with the FWF, up to z \approx 34 with the SWF, up to z \approx 29 with the RWF. A binary with m_1 \approx 10^8 M_\odot should still be observable up to z \approx 0.7 with the FWF and SWF, and not visible at all with the RWF.[fig:Mq10]](17)Maximum redshift at which a system is observable as a function of the mass of the most massive black hole, for systems with a mass ratio of 1:10. A binary with *m*1 ≈ 6 ⋅ 104*M*⊙ should be observable up to *z* ≈ 38 with the FWF, up to *z* ≈ 34 with the SWF, up to *z* ≈ 29 with the RWF. A binary with *m*1 ≈ 108*M*⊙ should still be observable up to *z* ≈ 0.7 with the FWF and SWF, and not visible at all with the RWF.[fig:Mq10] The figures show that a much higher redshift can be reached with the FWF than with the other waveforms, and that the difference is bigger for mass ratios closer to 1:1. The FWF is giving the possibility to observe any binary system with total mass of *M* ≤ 107*M*⊙ up to a redshift of $z \gtrsim 10$, whereas the other waveforms fail, especially for equal-mass systems. The same combinations of redshifted masses can be observed with the FWF at redshifts 1.5 - 5 times higher than with the RWF, which could greatly help constraining black hole and galaxy formation models . Extrinsic parameters -------------------- We plot here as a function of the mass of the most massive black hole the maximum redshift where the major axis of the positioning error ellipse is less than 30ʹ for half of the binaries. Equal-mass binaries are shown in Fig. [fig:aq1], binaries with mass ratio between 1:3 and 3:10 in Fig. [fig:aq3], and binaries with a mass ratio of 1:10 in Fig. [fig:aq10]. ![Maximum redshift at which the binary can be located with a 30' precision as function of the mass of the most massive black hole, for equal-mass systems. The FWF allows to locate a binary accurately up to a redshift of \sim 0.2 greater than the two other waveforms.[fig:aq1]](18)Maximum redshift at which the binary can be located with a 30ʹ precision as function of the mass of the most massive black hole, for equal-mass systems. The FWF allows to locate a binary accurately up to a redshift of  ∼ 0.2 greater than the two other waveforms.[fig:aq1] ![Maximum redshift at which the binary can be located with a 30' precision as a function of the mass of the most massive black hole, for systems with a mass ratio between 1:3 and 3:10. The FWF allows to locate a binary accurately up to a redshift of 0.2 {\ \mbox{-} \ }0.3 greater than the SWF, and the SWF up to a redshift of \sim 0.1 greater than the RWF.[fig:aq3]](19)Maximum redshift at which the binary can be located with a 30ʹ precision as a function of the mass of the most massive black hole, for systems with a mass ratio between 1:3 and 3:10. The FWF allows to locate a binary accurately up to a redshift of 0.2 - 0.3 greater than the SWF, and the SWF up to a redshift of  ∼ 0.1 greater than the RWF.[fig:aq3] ![Maximum redshift at which the binary can be located with a 30' precision as a function of the mass of the most massive black hole, for systems with a mass ratio of 1:10. The FWF allows to locate a binary accurately up to a redshift of 0.2 {\ \mbox{-} \ }0.4 greater than the SWF, and the SWF up to a redshift of \sim 0.1 greater than the RWF.[fig:aq10]](20)Maximum redshift at which the binary can be located with a 30ʹ precision as a function of the mass of the most massive black hole, for systems with a mass ratio of 1:10. The FWF allows to locate a binary accurately up to a redshift of 0.2 - 0.4 greater than the SWF, and the SWF up to a redshift of  ∼ 0.1 greater than the RWF.[fig:aq10] We found that in all cases, the localization in the sky is far more difficult than the determination of the luminosity distance. Irrespective of the waveforms, masses and mass ratios, the luminosity distance can be measured with a precision of 0.3% - 0.5% when the major axis of the positioning error ellipse is 30ʹ. The FWF could help locating binaries accurately enough for the observation of their merger to become possible up to redshifts 0.2 - 0.4 greater than the two other waveforms. The SWF could furthermore, in the case of unequal-mass binaries, go up to redshifts  ∼ 0.1 greater than the RWF. Our simulations show that supermassive black hole binaries could be very accurate standard candles, and could successfully extend the measurements of the Hubble diagram up to redshifts of *z* = 1.6, with a precision on the luminosity distance of a few per mille. This would be a great breakthrough in the distance ladder, as the current most effective standard candles at large distances, type Ia supernovae, provide much less precision on large luminosity distances. Conclusion ========== The fact that the detection of gravitational waves with interferometric detectors relies on template-based searches should suggest to use the most accurate waveform available for detecting systems emitting such waves. The gravitational waveform of two spinning bodies orbiting each other is however complicated, and each new further step implies more and more complicated corrections to the waveform. It is a good thing to know whether it is worth using a more accurate waveform, and in what cases. Comparing our results for the RWF to those of Lang and Hughes , we found that our estimates are systematically a factor of  ∼ 1.2 more pessimistic for lower-mass binaries, and a factor of  ∼ 2 more pessimistic for higher-mass binaries. As stated in section [sec:dataal], we expected such a discrepancy because of the differences in the noise curves we used. As higher-mass binaries spend more time in the lower frequency range where the noise differs the most, we expected the differences for these binaries to be larger than for less massive binaries. We found that the addition of higher harmonics to the waveform at the 2PN level can help increasing the mass limit above which no information can be extracted from the signal. The amplitude corrections can also bring important improvements to the determination of the individual masses, also for lower-mass binaries. The FWF allows for detecting binaries up to redshifts *z* > 40, whereas the other waveforms can allow detection up to *z* ≈ 30. This could be very interesting for constraining galaxy and black hole formation models. The range of LISA could be also extended for the determination of the Hubble diagram. The RWF would allow to measure the redshift-luminosity distance relation at a few per mille precision up to *z* ≈ 1.2, whereas the FWF would allow measures with the same precision up to *z* ≈ 1.6. It would be interesting to quantify how well LISA would be able to determine the Hubble diagram. The use of the full waveform as a template for the gravitational radiation of comparable-mass binaries can be important to extract the maximum information possible, especially for high-mass and/or close to equal-mass binaries. However, in the case of unequal low-mass binaries at low redshifts, the restricted waveform used in earlier studies can be sufficient. The fact that using the full waveform in these searches can fail to provide much more accuracy for some systems suggests that including more parameters, such as eccentricity  or alternative gravity parameters , could keep the accuracy for the other parameters reasonably high, allowing to extract more information from the detection of a wave. Gravitational waves can be a powerful tool for constraining alternative theories of gravitation, in the sense that each event will provide an independent measurement of their parameters. Even though the spin-coupling effects in the wave amplitude are not yet known at the 2.5PN level, it could be interesting to compare the measurement accuracy we get from a 2.5PN accurate waveform as compared to the 2PN accurate one we used in this study. It has been shown  in the case of spinless bodies that theoretical errors due to the inaccuracy of the waveform can be important for high SNR systems. It would also be interesting to perform the same study for spinning systems. We would like to thank Neil Cornish for useful precisions about noise models for interferometric gravitational wave detectors, and Prasenjit Saha for useful discussions about the efficiency and precision of numerical methods. We would also like to thank the referee for useful comments. A. K. and M. S. are supported by the Swiss National Science Foundation. Polarizations ============= We give here the plus and cross polarizations we used in our studies, in terms of the orbital phase *ψ*. The plus and cross polarizations of the simplified waveform (SWF) is obtained by keeping only the lowest order in *A*+(*n*) and *B*×(*n*), and those of the RWF by keeping only the lowest order of *A*+(2) and *B*×(2). To obtain the SWF and RWF, the function *S*(*f*) in Eq.  should also be set to *S*(*f*) = 1. The plus and cross polarization waveforms are: $$\begin{aligned} h\_{+,\times} &= \frac{2GM\nu x}{d\_L c^2} \left[ \sum\_{n \geqslant 0} \left(A\_{+,\times}^{(n)} \cos n\psi + B\_{+,\times}^{(n)} \sin n\psi \right) \right], \\ s\_i &= {\left| {\bm{\hat{L}}} \times {\bm{\hat{n}}} \right|}, \\ c\_i &= {\bm{\hat{L}}} \cdot {\bm{\hat{n}}}.\end{aligned}$$ With the use of the spin-orbit coupling parameter *τ* defined as: $$\tau \equiv \frac{c}{G M} \left( \frac{\bm{S}\_1}{m\_1} - \frac{\bm{S}\_2}{m\_2} \right) \cdot {\bm{\hat{L}}},$$ we can write the nonvanishing parameters *A*+ ,  ×(*n*) and *B*+ ,  ×(*n*), as currently known at 2PN level. The value of *ω̄* appearing below can be chosen arbitrarily. $$\begin{aligned} A\_+^{(0)} &= - \frac{s\_i^2}{96} \left( 17 + c\_i^2 \right)\\ A\_+^{(1)} &= s\_i \sqrt{1 - 4\nu} \left( -\frac{5}{8} - \frac{c\_i^2}{8} \right) x^{1/2} + s\_i \tau x \nonumber\\ &\qquad\qquad + s\_i \sqrt{1 - 4\nu} \left[ \frac{19}{64} + \frac{5c\_i^2}{16} - \frac{c\_i^4}{192} + \nu \left( -\frac{49}{96} + \frac{c\_i^2}{8} + \frac{c\_i^4}{96} \right) \right] x^{3/2} + s\_i \sqrt{1 - 4\nu} \left( -\frac{5}{8} - \frac{c\_i^2}{8} \right) \pi x^2 \\ A\_+^{(2)} &= \left( - 1 - c\_i^2 \right) + \left[ \frac{19}{6} + \frac{3c\_i^2}{2} - \frac{c\_i^4}{3} + \nu \left( - \frac{19}{6} + \frac{11c\_i^2}{6} + c\_i^4 \right) \right] x + \left[ 2\pi \left( - 1 - c\_i^2 \right) + \frac{8}{3} \beta\left( 3 - 9 c\_i^2, 2 - 10 c\_i^2 \right) \right] x^{3/2} \nonumber\\ &\qquad\qquad\qquad + \bigg[ \frac{11}{60} + \frac{33c\_i^2}{10} + \frac{29c\_i^4}{24} - \frac{c\_i^6}{24} + \nu \left( \frac{353}{36} - 3 c\_i^2 - \frac{251c\_i^4}{72} + \frac{5c\_i^6}{24} \right) \nonumber\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad + \nu^2 \left( -\frac{49}{12} + \frac{9c\_i^2}{2} - \frac{7c\_i^4}{24} - \frac{5c\_i^6}{24} \right) - 2 \sigma\left( 1 + c\_i^2, 0 \right) \bigg] x^2 \\ A\_+^{(3)} &= s\_i \sqrt{1 - 4 \nu} \left( \frac{9}{8} + \frac{9c\_i^2}{8} \right) x^{1/2} \nonumber\\ &\quad + s\_i \sqrt{1 - 4\nu} \left[- \frac{657}{128} - \frac{45c\_i^2}{16} + \frac{81c\_i^4}{128} + \nu \left( \frac{225}{64} - \frac{9c\_i^2}{8} - \frac{81c\_i^4}{64} \right) \right] x^{3/2} + s\_i \sqrt{1 - 4\nu} \left( \frac{27}{8} + \frac{27c\_i^2}{8} \right) \pi x^2 \\ A\_+^{(4)} &= s\_i^2\left( 1 + c\_i^2 \right) \left( - \frac{4}{3} +4\nu \right) x \nonumber\\ &\quad + \left[ \frac{118}{15} - \frac{16c\_i^2}{5} - \frac{86c\_i^4}{15} + \frac{16c\_i^6}{15} + \nu \left( - \frac{262}{9} + 16 c\_i^2 + \frac{166c\_i^4}{9} - \frac{16c\_i^6}{3} \right) + \nu^2 \left( 14 - 16c\_i^2 - \frac{10c\_i^4}{3} + \frac{16c\_i^6}{3} \right) \right] x^2\\ A\_+^{(5)} &= s\_i^3 \sqrt{1 - 4 \nu} \left( \frac{625}{384} - \frac{625\nu}{192} \right) \left( 1 + c\_i^2 \right) x^{3/2} \\ A\_+^{(6)} &= s\_i^4 \left( 1 + c\_i^2 \right) \left( -\frac{81}{40} + \frac{81\nu}{8} - \frac{81\nu^2}{8} \right) x^2\end{aligned}$$ $$\begin{aligned} A\_\times^{(1)} &= s\_i c\_i \sqrt{1 - 4\nu} \left[ -\frac{9}{20} - \frac{3\log2}{2} + \frac{9}{4} \log\left( \frac{\omega}{\bar{\omega}} \right) \right] x^2 \\ A\_\times^{(2)} &= 12c\_i \log\left( \frac{\omega}{\bar{\omega}} \right) x^{3/2} \\ A\_\times^{(3)} &= s\_i c\_i \sqrt{1 - 4\nu} \left[ \frac{189}{20} - \frac{27\log(3/2)}{2} - \frac{81}{4} \log\left( \frac{\omega}{\bar{\omega}} \right) \right] x^2\end{aligned}$$ $$\begin{aligned} B\_\times^{(1)} &= -\frac{3}{4} s\_i c\_i \sqrt{1 - 4\nu} \, x^{1/2} + s\_i c\_i \tau x + s\_i c\_i \sqrt{1 - 4\nu} \left[ \frac{21}{32} - \frac{5c\_i^2}{96} + \nu\left( -\frac{23}{48} + \frac{5c\_i^2}{48} \right) \right] x^{3/2} - \frac{3\pi}{4} s\_i c\_i \sqrt{1 - 4\nu} \, x^2 \\ B\_\times^{(2)} &= -2 c\_i + c\_i \left[ \frac{17}{3} - \frac{4c\_i^2}{3} + \nu\left( -\frac{13}{3} + 4 c\_i^2 \right) \right] x + c\_i \left[ -4\pi - \frac{4}{3} \beta\left( 1 + 3 c\_i^2, 3 c\_i^2 \right) \right] x^{3/2} \nonumber\\ &\qquad\qquad + c\_i \left[ \frac{17}{15} + \frac{113c\_i^2}{30} - \frac{c\_i^4}{4} + \nu \left( \frac{143}{9} - \frac{245c\_i^2}{18} +\frac{5c\_i^4}{4} \right) + \nu^2 \left( -\frac{14}{3} + \frac{35c\_i^2}{6} - \frac{5c\_i^4}{4} \right) - 4 \sigma\left( 1, 0 \right) \right] x^2 \\ B\_\times^{(3)} &= \frac{9}{4} s\_i c\_i \sqrt{1 - 4\nu} \, x^{1/2} + s\_i c\_i \sqrt{1 - 4\nu} \left[ - \frac{603}{64} + \frac{135c\_i^2}{64} + \nu \left( \frac{171}{32} - \frac{135c\_i^2}{32} \right) \right] x^{3/2} + \frac{27\pi}{4} s\_i c\_i \sqrt{1 - 4\nu} \, x^2 \\ B\_\times^{(4)} &= c\_i s\_i^2 \left( -\frac{8}{3} + 8\nu \right) x + c\_i \left[ \frac{44}{3} - \frac{268c\_i^2}{15} + \frac{16c\_i^4}{5} + \nu \left( -\frac{476}{9} + \frac{620c\_i^2}{9} - 16c\_i^4 \right) + \nu^2 \left( \frac{68}{3} - \frac{116c\_i^2}{3} + 16c\_i^4 \right) \right] x^2 \\ B\_\times^{(5)} &= s\_i^3 c\_i \sqrt{1 - 4\nu} \left( \frac{625}{192} - \frac{625\nu}{96} \right) x^{3/2} \\ B\_\times^{(6)} &= s\_i^4 c\_i \left( -\frac{81}{20} + \frac{81\nu}{4} - \frac{81\nu^2}{4} \right) x^2\end{aligned}$$ $$\begin{aligned} B\_+^{(1)} &= s\_i \sqrt{1 - 4\nu} \left[ \frac{11}{40} + \frac{5 \log2}{4} + \left( \frac{7}{40} + \frac{\log2}{4} \right) c\_i^2 + \left( -\frac{15}{8} - \frac{3c\_i^2}{8} \right) \log\left(\frac{\omega}{\bar{\omega}}\right) \right] x^2 \\ B\_+^{(2)} &= \left( - 6 - 6 c\_i^2 \right) \log\left( \frac{\omega}{\bar{\omega}} \right) x^{3/2} \\ B\_+^{(3)} &= s\_i \sqrt{1 - 4\nu} \left[ -\frac{189}{40} + \frac{27\log(3/2)}{4} + \frac{81}{8} \log\left( \frac{\omega}{\bar{\omega}} \right) \right] \left( 1 + c\_i^2 \right) x^2\end{aligned}$$ 39 natexlab#1#1bibnamefont#1#1bibfnamefont#1#1citenamefont#1#1url#1`#1`urlprefix[2]#2 [2][]<#2> ,,,, (). ,, (). ,, (). ,,,, (). ,, (). ,, (). ,, (). ,,,, (). ,,,, (). ,,,, (). ,,,,, (). ,,,,, (). ,, (). ,, (). ,, (). ,, (). ,, (). ,,,,,, (). ,, (). ,, (). ,,,, (). ,, (). ,,,,,, (). ,, (). ,, (). ,,,,, (). ,, (). ,, (). ,, (). ,, (). ,, (). ,, (). ,,,, (). ,,,,,, (). ,,, (),. ,,,,,, (). ,,, (),. ,, (). ,, (). --- 1. Note that the symbol *f* here and in the following denotes the argument of the Fourier transform of the signal, and not the orbital frequency.[↩](#fnref1) 2. We found a typo in the confusion noise, which made it discontinuous at 10− 3 Hz. For *f* ≤ 10− 3 Hz, we replaced 10− 44.62*f*− 2/3 with the correct value of 10− 44.62*f*− 2.3.[↩](#fnref2) Parameter estimation for coalescing massive binary black holes with LISA using the full 2-post-Newtonian gravitational waveform and spin-orbit precession ========================================================================================================================================================= Gravitational waves emitted by binary systems in the inspiral phase carry a complicated structure, consisting in a superposition of different harmonics of the orbital frequency, the amplitude of each of them taking the form of a post-Newtonian series. In addition to that, spinning binaries experience spin-orbit and spin-spin couplings which induce a precession of the orbital angular momentum and of the individual spins. With one exception, previous analyses of the measurement accuracy of gravitational wave experiments for comparable-mass binary systems have neglected either spin-precession effects or subdominant harmonics and amplitude modulations. Here we give the first explicit description of how these effects combine to improve parameter estimation. We consider supermassive black hole binaries as expected to be observed with the planned space-based interferometer LISA, and study the measurement accuracy for several astrophysically interesting parameters obtainable taking into account the full 2PN waveform for spinning bodies, as well as spin-precession effects. We find that for binaries with a total mass in the range 105*M*⊙ < *M* < 107*M*⊙ at a redshift of 1, a factor  ∼ 1.5 is in general gained in accuracy, with the notable exception of the determination of the individual masses in equal-mass systems, for which a factor  ∼ 5 can be gained. We also find, as could be expected, that using the full waveform helps increasing the upper mass limit for detection, which can be as high as *M* = 108*M*⊙ at a redshift of 1, as well as the redshift limit where some information can be extracted from a system, which is roughly $z \gtrsim 10$ for *M* ≤ 107*M*⊙, 1.5 - 5 times higher than with the restricted waveform. We computed that the full waveform allows to use supermassive black hole binaries as standard sirens up to a redshift of *z* ≈ 1.6, about 0.4 larger than what previous studies allowed. We found that for lower unequal-mass binary systems, the measurement accuracy is not as drastically improved as for other systems. This suggests that for these systems, adding parameters such as eccentricity or alternative gravity parameters could be achieved without much loss in the accuracy. Introduction ============ Gravitational waves (GW’s), once their observation is made possible, will provide new means of observing the Universe. So far, the vast majority of observations have been made through electromagnetic radiation, and gravitational waves will surely make visible different aspects of the Universe. For example, one could probe the different galaxy formation models by detecting supermassive black hole mergers in a large redshift range . Or, as GW’s provide a good way of measuring the luminosity distance to their source, one could combine gravitational and electromagnetic observations to build a robust measurement of the Hubble diagram, which would be of great interest for cosmology . Another possibility would be to measure alternative gravity parameters . This can potentially be a powerful way to constrain such theories, as each observed GW will give an independent measurement of their parameters. The new generation of ground-based detectors, such as advanced LIGO, and the space-based detector LISA will probably make the direct detection of gravitational waves possible. Some of the most important sources of gravitational waves are the compact binary systems, i.e. systems of two compact objects (white dwarfs, neutron stars, or black holes). As such detections rely on matched filtering techniques, several groups have made efforts in building accurate templates based on the post-Newtonian (PN) approximation (see e.g. ). The limitations of such results have then been checked to estimate the precision with which one could measure the properties of a system emitting such waves, for example its distance from the Solar System, location in the sky, or the individual masses of the objects forming it . A binary system of compact objects emits gravitational waves during three distinct phases, called inspiral, merger, and ringdown. During the inspiral phase, most of the gravitational radiation is emitted at twice the orbital frequency of the system, which slowly increases as it loses energy emitting gravitational waves. As its two members come closer, higher harmonics become more and more important, and more power gets emitted. Finally, the two members enter the merger phase, where they have come so close that they cannot be treated anymore as two separate objects, and begin to merge, emitting complicated gravitational radiation that has not yet been described but numerically. After that, the remnant begins to radiate away its energy during the ringdown phase, where it approaches exponentially the structure of a Kerr black hole. LISA has been designed to be particularly sensitive to binaries that contain a supermassive black hole (SMBH, of mass 105 - 108*M*⊙), which can be separated into two (or three) categories, according to the companion mass: extreme mass-ratio inspirals (EMRIs) and supermassive black hole binaries (SMBHBs). A third category is sometimes added: intermediate mass ratio inspirals (IMRIs), which somehow lies between the two others. EMRIs are inspirals with a mass ratio between the two members of the order of 10− 4 - 10− 7, and are most accurately described by black hole perturbation theory. Such events are likely to occur in the center of galaxies, the majority of which are believed to host a SMBH, when a compact object of stellar mass is “eaten up” by the central object. Several groups have attacked the problem of describing the form of the gravitational radiation emitted by such objects . SMBHBs are binaries containing two SMBHs, forming during the merging of two galaxies, when they both host one in their center. Such events should be much more rare than EMRIs, but some galaxy formation models  and observation of nearby galaxies  suggest that they might happen often enough to be observable. As these events are much louder than any other source, we could observe them at very high redshifts (up to *z* = 20 or even higher), and thus tightly constrain galaxy formation models. The first attempt to estimate with which accuracy an interferometer could measure the properties of a compact object binary was made by Finn , who first introduced the Fisher matrix analysis, which is now widely used in this context. A few years later, Cutler  applied this formalism to LISA, focusing on the angular resolution that the space-based detector could get for black hole binaries, using the Newtonian quadrupole formula. Hughes  repeated the study including the PN expansion for the frequency of the wave. Vecchio , then, considered the case of the “simple precession”  of the angular momenta for spinning BH’s. Lang and Hughes  then used the full precession equations to further refine the parameter estimation. Recently, Arun et al.  and Porter and Cornish  included the full post-Newtonian waveform in the context of nonspinning black holes, and Trias and Sintes  used it for spinning black holes neglecting spin-precession effects. The LISA Parameter Estimation Taskforce  used the full waveform with spin-precession effects, without publishing a detailed study of the expected statistical errors. It is worth noting that all of the effects that these works studied, as more and more precise waveforms were used, helped to improve subsequently the expected measurement accuracy of LISA. In this paper, we study the question of whether the inclusion of the full post-Newtonian waveform in the context of spinning black holes undergoing spin-orbit precession helps to break more degeneracies, thus helping to further increase the accuracy in the measurement of the source parameters. In Sec. [sec:theory], we derive the waveform used to perform our study, and we quote some basics of Fisher matrix analysis. In Sec. [sec:simulations], we describe the simulations that we ran. In Sec. [sec:results], we give the results of our simulations, and analyze them from an astrophysical point of view. We conclude in Sec. [sec:conclusion]. Theory ====== Evolution --------- The state of a binary system of two Kerr black holes at a given time in the center of mass frame is fully described by 14 intrinsic parameters. These reduce to 12 if we assume that the binary lies on a circular orbit. One possible choice is to take as intrinsic parameters a unit vector pointing in the direction of the orbital angular momentum ${\bm{\hat{L}}}$, the orbital angular frequency *ω* (we will reserve the symbol *f* for arguments of Fourier transforms, and will express the orbital frequency always with the angular frequency to avoid confusions), the individual spins of each black hole, $\bm{S}\_1$ and $\bm{S}\_2$, their masses *m*1 and *m*2, and the orbital phase *φ*. To these intrinsic parameters, we have to add three more extrinsic parameters which locate the binary in space. Those can be chosen to be ${\bm{\hat{n}}}$, a unit vector pointing in the direction of the binary as seen from the Solar System, and *d**L*, the luminosity distance from the binary to the Sun. We will denote all unit vectors with a hat throughout this paper. Another extrinsic parameter, the redshift, also plays a role in the determination of the waveform, but it cannot be detected by GW observations. Indeed, the redshift causes the observed angular frequency *f**o* of the wave to decrease with respect to the emitted one *f**e*, as *f**o* = *f**e*/(1 + *z*). But (see following derivation) the exact same wave, within the post-Newtonian framework, is emitted by a second system with parameters *m**i*(2) = (1 + *z*)*m**i*(1), *d**L*(2) = (1 + *z*)*d**L*(1), not experiencing any redshift. Therefore, the redshift and luminosity distance cannot be measured separately with a gravitational wave observation, so that we have to assume a relation between the two parameters. This implies that observations of a light signal emitted during a merger, the redshift of which is possible to determine, are of great astrophysical interest. During the whole derivation of the gravitational wave signal below, we assume that the source is at redshift *z* = 0. The actual observed wave can then be easily determined redshifting the masses and luminosity distance, as we did in our simulations. To compute the relation between redshift and luminosity distance, we assume a flat ΛCDM cosmology without radiation with the latest WMAP parameters : ΩΛ = 0.72, Ω*m* = 0.28, *H*0 = 70.1 km/s/Mpc. The relation is then given by $$d\_L(z) = (1+z) \frac{c}{H\_0} \int\_0^z \frac{dz'}{\sqrt{\Omega\_m (1+z')^3 + \Omega\_\Lambda}}, \label{redlumdist}$$ which can be determined numerically. The problem of the motion of the system during the inspiral phase in full General Relativity has been too hard to be solved so far. However, a great effort has been made to attack the problem in the framework of the post-Newtonian formalism. The current state-of-the-art evolution equations go up to the 2.5PN order beyond leading order for spinning objects . As 2.5PN spin-orbit and spin-spin coupling terms are not yet known in the waveform, we chose to stop at the 2PN level, up to which both the evolution equations and the waveform are known. We will use the following mass parameters for the derivation of the evolution equations and of the waveform: the total mass *M* = *m*1 + *m*2, the reduced mass *μ* = *m*1*m*2/*M*, and the symmetric mass ratio *ν* = *μ*/*M*. The 2PN orbit-averaged relation between the orbital angular frequency *ω* and the orbital separation in harmonic coordinates *r* is given by  $$\begin{gathered} \label{om\_of\_gam} \omega = \frac{c^3}{GM} \gamma^{3/2} \bigg[ 1 + \left( \frac{\nu}{2} - \frac{3}{2} \right) \gamma - \frac{1}{2} \beta(2,3) \gamma^{3/2} \\ + \left( \frac{15}{8} + \frac{47\nu}{8} + \frac{3\nu^2}{8} - \frac{3}{4} \sigma(1,3) \right) \gamma^2 \bigg],\end{gathered}$$ where the orbital separation parameter *γ* and the spin-orbit and spin-spin couplings *β* and *σ* are given by $$\begin{aligned} \gamma &\equiv \frac{GM}{rc^2}, \\ \beta(a,b) &\equiv \frac{c}{G} \sum\_{i=1}^2 \left( \frac{a}{M^2} + \frac{b\nu}{m\_i^{\phantom{i}2}} \right) \bm{S}\_i \cdot {\bm{\hat{L}}}, \\ \sigma(a,b) &\equiv \frac{c^2}{\nu M^4 G^2} \left( a \bm{S}\_1 \cdot \bm{S}\_2 - b \left( \bm{S}\_1 \cdot {\bm{\hat{L}}} \right) \left( \bm{S}\_2 \cdot {\bm{\hat{L}}} \right) \right).\end{aligned}$$ The evolution equation of the angular frequency is given at 2PN order by  $$\begin{aligned} \frac{dx}{dt} &= \frac{64\nu}{5} \frac{c^3}{GM} x^{5} \Bigg[ 1 - \left( \frac{743}{336} + \frac{11\nu}{4} \right) x \nonumber\\ &+ \left( 4\pi - \frac{1}{12} \beta(113,75) \right) x^{3/2} \label{domdt\_of\_x}\\ &+ \left( \frac{34103}{18144} + \frac{13661\nu}{2016} + \frac{59\nu^2}{18} - \frac{1}{48} \sigma(247,721) \right) x^2 \Bigg], \nonumber\end{aligned}$$ where *x* is a dimensionless orbital frequency parameter defined as $$x \equiv \left( \frac{GM\omega}{c^3} \right)^{2/3}.$$ We can integrate Eq.  to get $$\begin{aligned} t &= t\_c - \frac{5GM}{256 \nu c^3} x^{-4} \bigg[ 1 + \left( \frac{743}{252} + \frac{11\nu}{3} \right) x \nonumber\\ &+ \left( \frac{2}{15} \beta(113,75) - \frac{32}{5}\pi \right) x^{3/2} \label{t\_of\_x}\\ &+ \left( \frac{3058673}{508032} + \frac{5429\nu}{504} + \frac{617\nu^2}{72} + \frac{1}{24} \sigma(247,721) \right) x^2 \bigg]. \nonumber\end{aligned}$$ Integrating once more yields the orbital phase *φ* = ∫*ω**d**t*, as a function of the orbital frequency parameter $$\begin{aligned} \varphi(x) &= \varphi\_c - \frac{x^{-5/2}}{32\nu} \bigg[ 1 + \left(\frac{3715}{1008} + \frac{55\nu}{12} \right) x \nonumber\\ &\phantom{=}+ \left( \frac{5}{24} \beta(113,75) - 10\pi \right) x^{3/2} \nonumber\\ &\phantom{=}+ \bigg( \frac{15293365}{1016064} + \frac{27145\nu}{1008} \nonumber\\ &\phantom{=}+ \frac{3085\nu^2}{144} + \frac{5}{48} \sigma(247,721) \bigg) x^2 \bigg]. \label{phi\_of\_x}\end{aligned}$$ The dragging of inertial frames induces a coupling between the individual spins and the orbital angular momentum. The orbit-averaged conservative part of the evolution equations (i.e. without radiation reaction, ${\dot{\bm{L}}} + {\dot{\bm{S}}}\_1 + {\dot{\bm{S}}}\_2 = 0$) are given for circular orbits at 2PN order by $$\begin{aligned} {\dot{\bm{L}}} &= \frac{G}{c^2} \frac{1}{r^3} \left( \left( 2 + \frac{3 m\_2}{2 m\_1} \right) \bm{S}\_1 + \left( 2 + \frac{3 m\_1}{2 m\_2} \right) \bm{S}\_2 \right) \times \bm{L} \nonumber\\ &\phantom{=} - \frac{3 G}{2 c^2} \frac{1}{r^3} \left( \left( \bm{S}\_2 \cdot {\bm{\hat{L}}} \right) \bm{S}\_1 + \left( \bm{S}\_1 \cdot {\bm{\hat{L}}} \right) \bm{S}\_2 \right) \times {\bm{\hat{L}}}, \label{Lhatdot}\\ {\dot{\bm{S}}}\_i &= \frac{G}{c^2} \frac{1}{r^3} \left[ \left( 2 + \frac{3 m\_j}{2 m\_i} \right) \bm{L} + \frac{1}{2} \bm{S}\_j - \frac{3}{2} \left( \bm{S}\_j \cdot {\bm{\hat{L}}} \right) {\bm{\hat{L}}} \right] \times \bm{S}\_i,\end{aligned}$$ where it is understood that *i* ≠ *j*, *i*, *j* ∈ {1, 2}, and the orbital separation *r* and the norm of the orbital angular momentum *L* are related to the orbital frequency by their Newtonian relation: $$\begin{aligned} L &= \mu \left( \frac{G^2M^2}{\omega} \right)^{1/3}, \\ r &= \left( \frac{GM}{\omega^2} \right)^{1/3}.\end{aligned}$$ Higher order relations would give corrections which exceed the 2PN order. Using the above relations together with the first order of Eq. , we can change variables from time to orbital angular frequency, and use the relations to express the precession equations: $$\begin{aligned} \frac{d\bm{S}\_i}{d\omega} &= \frac{5}{96} \frac{c^3}{G M} \omega^{-2} \Bigg[ {\bm{\hat{L}}} \times \bm{\Sigma}\_i \nonumber \\ &+ \frac{1}{2L} \left( \bm{S}\_j - 3 \left( \bm{S}\_j \cdot {\bm{\hat{L}}} \right) {\bm{\hat{L}}} \right) \times \bm{S}\_i \Bigg], \\ \frac{d{\bm{\hat{L}}}}{d\omega} &= \frac{5}{96} \frac{c^3}{G M} \omega^{-2} \frac{1}{L}\left[ \bm{\Sigma}\_1 + \bm{\Sigma}\_2 - \frac{3}{2L} \left( \bm{\sigma}\_1 + \bm{\sigma}\_2 \right) \right] \times {\bm{\hat{L}}} \label{dLhat\_df}\\ &= -\frac{1}{L} \left( \frac{d\bm{S}\_1}{d\omega} + \frac{d\bm{S}\_2}{d\omega} \right), \nonumber\end{aligned}$$ where $$\begin{aligned} \bm{\Sigma}\_i &= \left( 2 + \frac{3 m\_j}{2 m\_i} \right) \bm{S}\_i, \\ \bm{\sigma}\_i &= \left( \bm{S}\_j \cdot {\bm{\hat{L}}} \right) \bm{S}\_i.\end{aligned}$$ Waveform -------- The general form of a gravitational wave emitted by a two-body system, even nonspinning, is not known in the context of full general relativity. However, it has been computed in the post-Newtonian framework. The results for a nonspinning binary system are available at 2.5PN order , and the spin effects at 2PN order . A convenient way to define the phase of the wave observed in a detector is in terms of the “principal+ direction” , which is defined as the direction of the vector ${\bm{\hat{L}}} \times {\bm{\hat{n}}}$. As the orbital angular momentum precesses, the principal+ direction changes, and this must be taken into account in the waveform. This effect amounts, at 2PN order, to $$\begin{aligned} \delta \varphi &= - \int\_t^{t\_c} \frac{{\bm{\hat{L}}} \cdot {\bm{\hat{n}}}}{1 - \left( {\bm{\hat{L}}} \cdot {\bm{\hat{n}}} \right)^2} \left( {\bm{\hat{L}}} \times {\bm{\hat{n}}} \right) \cdot {\dot{\bm{\hat{L}}}} dt \nonumber\\ &= \delta\varphi\_0 + \int\_{\omega\_0}^\omega \frac{{\bm{\hat{L}}} \cdot {\bm{\hat{n}}}}{1 - \left( {\bm{\hat{L}}} \cdot {\bm{\hat{n}}} \right)^2} \left( {\bm{\hat{L}}} \times {\bm{\hat{n}}} \right) \cdot \frac{d{\bm{\hat{L}}}}{d\omega} d\omega, \label{deltaphiprec}\end{aligned}$$ where *ω*0 is an arbitrary constant corresponding to the time *t*0, *δ**φ*0 =  − ∫*t*0*t**c*(*d**δ**φ*/*d**t*)*d**t*, and $d{\bm{\hat{L}}}/d\omega$ is given in Eq. . The 2PN accurate orbital phase is then given in terms of orbital angular frequency by: *ϕ*(*ω*) = *φ*(*ω*) + *δ**φ*(*ω*). The waveform is a series of harmonics of the orbital frequency: $$h\_{+,\times} = \frac{2GM\nu x}{d\_L c^2} \left[ \sum\_{n \geqslant 0} \left(A\_{+,\times}^{(n)} \cos n\phi + B\_{+,\times}^{(n)} \sin n\phi \right) \right].$$ The coefficients of the series take the form of post-Newtonian series: $$\begin{aligned} A\_{+,\times}^{(n)} = \sum\_{i \geqslant 0} a^{(n,i/2)}\_{+,\times} x^{i/2}, \\ B\_{+,\times}^{(n)} = \sum\_{i \geqslant 0} b^{(n,i/2)}\_{+,\times} x^{i/2}.\end{aligned}$$ The exact form of the coefficients for a nonspinning system can be found in . Note, however, that both express their final result using another phase which differs from the orbital phase at 1.5PN order: Ψ = *ϕ* − 2log(*ω*/*ω̄*)*x*3/2, where *ω̄* is an arbitrary constant. We put together the results from these two papers to build a coherent 2PN accurate waveform for spinning bodies, see the appendix. ### Extrinsic effects The LISA constellation will consist in three spacecrafts launched in orbit around the Sun, at a mean distance of 1 AU, on slightly eccentric orbit so that the spacecrafts stay at the same distance from each other all along the year. The barycenter of LISA will be located on the orbit of the Earth, 20∘ behind it, and the normal to the plane on which the spacecrafts lie will make a 60∘ angle with the normal to the ecliptic, see Fig. [figLISAorbit]. [!ht] ![The orbit of LISA around the Sun as currently planned. Image taken from the LISA pre-phase-A report .[figLISAorbit]](1)The orbit of LISA around the Sun as currently planned. Image taken from the LISA pre-phase-A report .[figLISAorbit] To describe extrinsic effects that depend on the position of LISA, we follow  and define two different frames: a frame tied to the detector, (*x*, *y*, *z*), and a fixed, Solar System frame, tied to the distant stars (*x̄*, *ȳ*, *z̄*) (we consider that the motion of the Sun with respect to the distant stars can be neglected during the lifetime of the LISA mission). The unit vectors along the arms of LISA ${\bm{\hat{l}}}\_i$, *i* ∈ {1, 2, 3} are defined in the detector frame: $$\begin{aligned} {\bm{\hat{l}}}\_i &= \cos \gamma\_i {\bm{\hat{x}}} + \sin \gamma\_i {\bm{\hat{y}}}, \nonumber\\ \gamma\_i &= \frac{\pi}{12} + (i-1) \frac{\pi}{3}.\end{aligned}$$ The (*x̄*, *ȳ*) plane of the Solar System frame is defined to be the ecliptic, so that the spherical angles of the barycenter of LISA are $$\bar{\Theta} = \frac{\pi}{2}, \qquad \bar{\Phi}(t) = 2\pi t/T,$$ where *T* = 1 yr, and we chose that $\bar{\Phi} = 0$ at *t* = 0. The waveform is given relative to the Solar System frame. To take into account the fact that the detector is not static in this frame, we have to add a phase to each harmonic, which is equivalent to add the so-called Doppler phase to the orbital phase: $$\phi\_D(t) = \frac{\omega R}{c} \sin \bar{\theta}\_N \cos (\bar{\Phi}(t) - \bar{\phi}\_N),$$ where *R* = 1 AU, and *θ̄**N* and *ϕ̄**N* are the spherical angles of the position of the source in the Solar System frame. The orbital phase then becomes *ψ* = *ϕ* + *ϕ**D* = *φ* + *δ**φ* + *ϕ**D* The normal to the detector plane ${\bm{\hat{z}}}$ is at constant angle *θ̄**z* = *π*/3 from the normal to the ecliptic $\bm{\hat{\bar{z}}}$, and constantly points in the direction of the *z̄*-axis from the barycenter of LISA. Furthermore, each satellite rotates around the ${\bm{\hat{z}}}$-axis once a year (see Fig. [figLISAorbit]). Let us express then the detector frame in the Solar System frame, assuming that ${\bm{\hat{y}}} \cdot \bm{\hat{\bar{y}}} = 1$ at *t* = 0: $$\begin{aligned} {\bm{\hat{x}}} &= \left( \frac{3}{4} - \frac{1}{4} \cos 2 \bar{\Phi}(t) \right) \bm{\hat{\bar{x}}} - \frac{1}{4} \sin 2 \bar{\Phi}(t) \bm{\hat{\bar{y}}}\nonumber \\ &\qquad\qquad\qquad\qquad\qquad+ \frac{\sqrt{3}}{2} \cos \bar{\Phi}(t) \bm{\hat{\bar{z}}}, \\ {\bm{\hat{y}}} &= - \frac{1}{4} \sin 2 \bar{\Phi}(t) \bm{\hat{\bar{x}}} + \left( \frac{3}{4} + \frac{1}{4} \cos 2 \bar{\Phi}(t) \right) \bm{\hat{\bar{y}}} \nonumber\\ &\qquad\qquad\qquad\qquad\qquad+ \frac{\sqrt{3}}{2} \sin \bar{\Phi}(t) \bm{\hat{\bar{z}}}, \\ {\bm{\hat{z}}} &= - \frac{\sqrt{3}}{2} \cos \bar{\Phi}(t) \bm{\hat{\bar{x}}} - \frac{\sqrt{3}}{2} \sin \bar{\Phi}(t) \bm{\hat{\bar{y}}} + \frac{1}{2} \bm{\hat{\bar{z}}}.\end{aligned}$$ LISA will act during the incoming of a gravitational wave as a pair of two-arm detectors, but with a response scaled by a $\sqrt{3}/2$ factor due to the 60∘ opening angle of the constellation, following the pattern (*k* = 1, 2): $$\begin{aligned} h\_k &= \frac{\sqrt{3}}{2} \left( F\_{k}^{+} h\_+ + F\_{k}^{\times} h\_\times \right), \\ F\_1^+(\theta\_N, \phi\_N, \psi\_N) &= \frac{1}{2} \left( 1 + \cos^2 \theta\_N \right) \cos 2\phi\_N \cos 2\psi\_N \nonumber\\ &\phantom{=}- \cos \theta\_N \sin 2\phi\_N \sin 2\psi\_N,\\ F\_1^\times(\theta\_N, \phi\_N, \psi\_N) &= F\_1^+(\theta\_N,\phi\_N,\psi\_N-\pi/4),\\ F\_2^+(\theta\_N, \phi\_N, \psi\_N) &= F\_1^+(\theta\_N,\phi\_N-\pi/4,\psi\_N),\\ F\_2^\times(\theta\_N, \phi\_N, \psi\_N) &= F\_1^+(\theta\_N,\phi\_N-\pi/4,\psi\_N-\pi/4),\end{aligned}$$ where *θ**N* and *ϕ**N* are the spherical angles of the position of the binary in the detector frame, and *ψ**N* is defined through $$\tan \psi\_N \equiv \frac{{\bm{\hat{L}}}\cdot{\bm{\hat{z}}} - ({\bm{\hat{L}}}\cdot{\bm{\hat{n}}}) ({\bm{\hat{z}}}\cdot{\bm{\hat{n}}})}{{\bm{\hat{n}}}\cdot( {\bm{\hat{L}}}\times{\bm{\hat{z}}} )}.$$ We expressed here two combinations of the response of the three arms of LISA whose detector noises are uncorrelated . Using this, we find the response function of the detectors (*k* = 1, 2): $$\begin{aligned} h\_k &= \frac{\sqrt{3}GM\nu x}{d\_L c^2} \sum\_{n \geqslant 0} \Bigg[ \nonumber \\ &\phantom{=} \sum\_{i \geqslant 0} \left(F\_k^+(t) a\_+^{(n,i/2)} + F\_k^\times(t) a\_\times^{(n,i/2)}\right)x^{i/2} \cos n\psi \nonumber\\ &\phantom{=}+ \sum\_{i \geqslant 0} \left(F\_k^+(t) b\_+^{(n,i/2)} + F\_k^\times(t) b\_\times^{(n,i/2)}\right)x^{i/2} \sin n\psi \Bigg] \\ &= \frac{\sqrt{3}GM\nu x}{d\_L c^2} \sum\_{n \geq 0} \left[ A\_{k,n}(t) \cos n\psi + B\_{k,n}(t) \sin n \psi \right].\end{aligned}$$ We can change this into the phase-amplitude representation: $$\begin{gathered} h\_k = \frac{\sqrt{3}GM\nu x}{d\_L c^2} \bigg[ A\_+^{(0)}(t) F\_k^+(t) \\ + \sum\_{n \geq 1} A\_{k,n}^{pol}(t) \cos\left( n \psi + \phi\_{k,n}^{pol}(t) \right) \bigg],\end{gathered}$$ where *ϕ**k*, *n**p**o**l* is the polarization phase, and *A**k*, *n**p**o**l* is the polarization amplitude: $$\begin{aligned} \tan \phi\_{k,n}^{pol} &= - \frac{B\_{k,n}}{A\_{k,n}}, \label{phipol}\\ A\_{k,n}^{pol} &= \mathrm{sgn}(A\_{k,n}) \sqrt{A\_{k,n}^2 + B\_{k,n}^2}. \label{Apol}\end{aligned}$$ The final form of the gravitational wave signal is thus $$\begin{aligned} h\_k &= \frac{\sqrt{3}GM\nu x}{d\_L c^2} \left[ A\_+^{(0)} F\_k^+ + \sum\_{n \geqslant 1} A\_{k,n}^{pol} \cos\psi\_{k,n} \right], \\ \psi\_{k,n} &= n (\varphi + \delta\varphi + \phi\_D) + \phi\_{k,n}^{pol}.\end{aligned}$$ To estimate the measurement error in the different parameters of the binary, we need to know the Fourier transform of the signal *h̃**k*(*f*) [1](#fn1). $$\begin{aligned} \tilde{h}\_k(f) &= \int\_{-\infty}^\infty h\_k(t) e^{2\pi i f t} dt \nonumber\\ &= \frac{\sqrt{3}GM\nu}{d\_L c^2} \Bigg[ \int\_{-\infty}^\infty x \sum\_{n \geqslant 1} A\_{k,n}^{pol} \cos\psi\_{k,n} e^{2\pi i f t} dt \nonumber\\ &\phantom{=}+ \int\_{-\infty}^\infty x A\_+^{(0)} F\_k^+ e^{2\pi i f t} dt \Bigg] \nonumber\\ &\approx \frac{\sqrt{3}GM\nu}{2 d\_L c^2} \sum\_{n \geqslant 1} \Bigg[ \int\_{-\infty}^\infty x A\_{k,n}^{pol} e^{i (2\pi ft + \psi\_{k,n})} dt \nonumber\\ &\phantom{=}+ \int\_{-\infty}^\infty x A\_{k,n}^{pol} e^{i (2\pi ft - \psi\_{k,n})} dt \Bigg]. \label{fthk}\end{aligned}$$ Note that we neglected in the last line the Fourier transform of the so-called memory effect *A*+(0). This is based on the fact that the Fourier transform of the function *x**A*+(0)*F**k*+ accumulates around frequencies which are separated from the orbital frequency range, at least during most of the inspiral. It will thus not contribute to the relevant frequencies. To compute the integrals, we rely on the stationary phase approximation. Neglecting the integrals with the *e**i*(2*π**f**t* + *ψ**k*, *n*) factor, as they will only contribute to negative frequencies, the stationary points for the other integrals are given by 2*π**f* = *ψ**k*, *n*ʹ(*t**k*, *n*) = *n**ω*(*t**k*, *n*) + *n**ϕ**D*ʹ(*t**k*, *n*) + (*ϕ**k*, *n**p**o**l*)ʹ(*t**k*, *n*). For the same reasons as before, we can safely neglect the derivatives of the Doppler phase and of the polarization phase. We thus get the following expression for the stationary point: *t**k*, *n* = *t**n* = *t*(*f*/*n*),  where the function *t*(*f*) is defined at 2PN order by Eq. . Thus, we get the following expression for the Fourier transform of the gravitational wave signal: $$\begin{aligned} \tilde{h}\_k(f) &= \frac{\sqrt{5\pi\nu}G^2M^2}{8d\_L c^5} \sum\_{n \geqslant 1} A\_{k,n}^{pol}[t(f/n)] x\_n^{\phantom{n}-7/4} S(f/n) \nonumber\\ &\phantom{=} \cdot \exp\bigg\{i \Big[ n\Big(\Psi(f/n) - \delta\varphi(f/n) \label{htilde} \\ &\qquad\qquad\qquad- \phi\_D[t(f/n)] \Big) - \phi\_{k,n}^{pol}[t(f/n)] \Big] \bigg\}, \nonumber\end{aligned}$$ where *x**n* = *x*(*f*/*n*) = *n*− 2/3*x*, and $$\begin{aligned} S(f) &= \Bigg[ 1 + \left( \frac{743}{672} + \frac{11\nu}{8} \right) x \nonumber\\ &\qquad\qquad + \left( \frac{1}{24} \beta(113,75) - 2\pi) \right) x^{3/2} \nonumber\\ &\qquad\qquad + \bigg( \frac{7266251}{8128512} + \frac{18913\nu}{16128} \nonumber\\ &\qquad\qquad+ \frac{1379\nu^2}{1152} + \frac{1}{96} \sigma(247,721) \bigg) x^2 \Bigg], \\ \Psi(f) &= \left( \frac{t\_c c^3}{GM} \right) x^{3/2} - \varphi\_c -\frac{\pi}{4} \nonumber\\ &\qquad\qquad+ \frac{3 x^{-5/2}}{256\nu} \Bigg[ 1 + \left( \frac{3715}{756} + \frac{55\nu}{9} \right) x \nonumber\\ &\qquad\qquad+ \left( \frac{1}{3} \beta(113,75) - 16\pi \right) x^{3/2} \nonumber\\ &\qquad\qquad + \bigg( \frac{15293365}{508032} + \frac{27145 \nu}{504} \nonumber\\ &\qquad\qquad+ \frac{3085 \nu^2}{72} + \frac{5}{24} \sigma(247,721) \bigg) x^2 \Bigg].\end{aligned}$$ where we used here *x* = *x*(*ω* = 2*π**f*) a different orbital frequency parameter for each harmonic. Note that Lang and Hughes  took the zeroth order form for *S*, *S*(*f*) = 1, consistently with neglecting all amplitude modulations. Finally, a binary will be observed with LISA during a finite amount of time. Therefore, if we denote by *t**i* and *t**f* respectively the initial and final time of observation, the orbital frequencies available for the Fourier transform will lie between *f**o**r**b*(*t**i*) and *f**o**r**b*(*t**f*). Thus, the final Fourier transform will be of the form: *h̃**k*(*f*) = ∑*n* ≥ 1*h̃**k*, *n*(*f*)*θ*(*f* − *n**f**o**r**b*(*t**i*))*θ*(*n**f**o**r**b*(*t**f*) − *f*),  where *θ* is the Heaviside step function. We compared in this work three different waveforms, which we called full waveform (FWF), simplified waveform (SWF), and restricted waveform (RWF). The latter is the one used in . The FWF contains all post-Newtonian corrections of the frequency and amplitude of the wave up to 2PN order. It is obtained using the amplitudes given in the appendix in Eqs.  and , and inserting the results in Eq. . The SWF contains all post-Newtonian corrections of the frequency up to 2PN order, and the lowest order amplitude of each harmonic present at the 2PN level. With this approximation, we find particularly simple forms for the polarization amplitudes and phases (with *F*+ ,  × = *F**k*+ ,  ×, $c\_i = {\bm{\hat{L}}} \cdot {\bm{\hat{n}}}$, $s\_i = |{\bm{\hat{L}}} \times {\bm{\hat{n}}}|$): $$\begin{aligned} A\_{k,1}^{pol} &= - \mathrm{sgn}(F\_+) \frac{x^{1/2} s\_i}{8} \sqrt{1-4\nu} \ \cdot \nonumber\\ &\qquad\qquad\qquad \sqrt{F\_+^2 \left(5 + c\_i^2\right)^2 + 36 F\_\times^2 c\_i^2}, \\ A\_{k,2}^{pol} &= - \mathrm{sgn}(F\_+) \sqrt{F\_+^2\left(1 + c\_i^2\right)^2 + 4 F\_\times^2 c\_i^2}, \\ A\_{k,3}^{pol} &= -\frac{9 x^{1/2} s\_i}{8} \sqrt{1-4\nu} A\_{k,2}^{pol},\\ A\_{k,4}^{pol} &= \frac{4 x s\_i^2}{3} (1-3\nu) A\_{k,2}^{pol},\\ A\_{k,5}^{pol} &= - \frac{625 x^{3/2} s\_i^3}{384} \sqrt{1-4\nu} (1-2\nu) A\_{k,2}^{pol},\\ A\_{k,6}^{pol} &= \frac{81 x^2 s\_i^4}{40} (1 - 5\nu + 5\nu^2) A\_{k,2}^{pol},\\ \phi\_{k,1}^{pol} &= - \arctan\left( \frac{6 c\_i F\_\times}{(5 + c\_i^2) F\_+} \right), \\ \phi\_{k,n}^{pol} &= - \arctan\left( \frac{2 c\_i F\_\times}{(1 + c\_i^2) F\_+} \right), \qquad n \geqslant 2.\end{aligned}$$ The SWF is obtained inserting the polarization amplitudes and phases above into Eq.  and, consistently with neglecting all amplitude corrections, taking the lowest order of the overall amplitude correction *S*(*f*) = 1. The RWF contains all post-Newtonian corrections of the frequency up to 2PN order, and the lowest order amplitude of the second harmonic. It is identical to the SWF, with the further approximation *A**k*, *n**p**o**l* = *ϕ**k*, *n**p**o**l* = 0,  *n* ≠ 2. Data analysis ------------- The signal will of course also include noise. A good description of the impact of noise can be found in , and we will refer to that study for how to model it. A deeper study of the Fisher information formalism in the context of gravitational wave experiments can be found in . As defined in , the inner product in the space of signals is $$(a|b) \equiv 4 \mathrm{Re} \int\_0^\infty \frac{\tilde{a}^\*(f) \tilde{b}(f)}{S\_h(f)} df, \label{scalprod}$$ where *S**h*(*f*) is the noise spectral density. Now, if we have a signal *h*, described by a certain set of *n* parameters $\bm{\theta}$, the Fisher information matrix is an *n*-by-*n* symmetric matrix defined by $$\Gamma\_{ij} \equiv \left( {\frac{\partial h}{\partial \theta^i}} \right|\left. {\frac{\partial h}{\partial \theta^j}} \right).$$ When we have several detectors, the Fisher information matrices are simply added: Γ*i**j* = ∑*k*Γ*i**j*(*k*). Finally, the covariance matrix is the inverse of the information matrix: Σ ≡ Γ− 1. Its off-diagonal elements represent correlation coefficients between the different parameters, and must satisfy $${\left| \frac{\Sigma^{ij}}{\sqrt{\Sigma^{ii}\Sigma^{jj}}} \right|} < 1,$$ whereas its diagonal elements represent lower limits to statistical errors on their measurement: $$\Delta\theta^i = \sqrt{\Sigma^{ii}}.$$ We chose to use a different noise curve than , motivated by the fact that an agreement seems to have been found among the LISA parameter estimation community  to use a piecewise fit of the expected noise, used in the second round of the mock LISA data challenge . We use the noise model described in , which consists in a sum of an instrumental noise *S**n*(*f*) and a Galactic confusion noise *S*conf(*f*) [2](#fn2). The instrumental and confusion noise curves read (*f* is given in Hertz) $$\begin{aligned} S\_n(f) &= \frac{1}{L^2} \Bigg\{ \left[ 1 + \frac{1}{2} \left( \frac{f}{f\_\*} \right)^2 \right] S\_p \nonumber\\ & \qquad\qquad+ \left[ 1 + \left( \frac{10^{-4}}{f} \right)^2 \right] \frac{4 S\_a}{\left(2\pi f\right)^4} \Bigg\}, \\ S\_{\mathrm{conf}}(f) &= \left\{ \begin{array}{l l} 10^{-44.62} f^{-2.3} & (f \leqslant 10^{-3}), \\ 10^{-50.92} f^{-4.4} & (10^{-3} < f \leqslant 10^{-2.7}), \\ 10^{-62.8} f^{-8.8} & (10^{-2.7} < f \leqslant 10^{-2.4}), \\ 10^{-89.68} f^{-20} & (10^{-2.4} < f \leqslant 10^{-2}), \\ 0 & (10^{-2} < f). \end{array} \right.\end{aligned}$$ where *L* = 5 ⋅ 109 m is the arm length of LISA, *S**p* = 4 ⋅ 10− 22 m2 Hz− 1 is the white position noise level, *S**a* = 9 ⋅ 10− 30 m2 s− 4 Hz− 1 is the white acceleration noise level, and *f*\* = *c*/(2*π**L*) is the arm transfer frequency. This way, we have *S**h*(*f*) = *S**n*(*f*) + *S*conf(*f*). We found substantial differences in the noise curve that we used with respect the one used by Lang and Hughes . We plot the ratio of the two curves in Fig. [fig:noisecomp]. ![Ratio of the noise curve we used over the one used by Lang and Hughes, as a function of the frequency. We added on top the total mass of a binary which emits its second harmonic at the corresponding maximum frequency. At high frequencies (f \gtrsim 5 \cdot 10^{-2} Hz), the curve used by Lang and Hughes is inaccurate. There is a peak at f = 2 \cdot 10^{-3} Hz, where our noise is 4 times larger, and which corresponds to the maximum frequency of binaries with a total mass M \approx 2 \cdot 10^6 M_\odot. Between 10^{-3} and 2 \cdot 10^{-4} Hz, we find good agreement, and for lower frequencies our noise is much larger.[fig:noisecomp]](2)Ratio of the noise curve we used over the one used by Lang and Hughes, as a function of the frequency. We added on top the total mass of a binary which emits its second harmonic at the corresponding maximum frequency. At high frequencies ($f \gtrsim 5 \cdot 10^{-2}$ Hz), the curve used by Lang and Hughes is inaccurate. There is a peak at *f* = 2 ⋅ 10− 3 Hz, where our noise is 4 times larger, and which corresponds to the maximum frequency of binaries with a total mass *M* ≈ 2 ⋅ 106*M*⊙. Between 10− 3 and 2 ⋅ 10− 4 Hz, we find good agreement, and for lower frequencies our noise is much larger.[fig:noisecomp] Our noise is in general larger for frequencies below 10− 2 Hz, which corresponds to the maximum frequency of the RWF emitted by binaries with total mass *M* ≈ 5 ⋅ 105*M*⊙ (*f**m**a**x* is proportional to 1/*M*). Therefore we expect our results to be more pessimistic than the ones in . However, we do not expect the comparison between the different waveforms to depend very much on the particular noise curve used in such a study. We also compared the two instrumental noises and the two confusion noises separately. The two confusion noises are similar below 10− 3 Hz and differ above this value. They become negligible compared to the instrumental noise above 5 ⋅ 10− 3 Hz. The peak visible in Fig. [fig:noisecomp] at *f* = 2 ⋅ 10− 3 Hz is due to the differences in confusion noises. The discrepancies come from the fact that after the publication of , a better estimation of what fraction of low-mass binaries could be resolved and not contribute to the confusion noise has been made  and used in the noise model of . The instrumental noise used in  is a low-frequency approximation based on the online sensitivity curve generator provided by S. Larson. The noise we used differs from the one used by Larson for frequencies below 10− 4 Hz. This comes from the fact that Larson assumes white acceleration noise, whereas the authors of  additionally considered it to increase as 1/*f* below 10− 4 Hz. Simulations =========== As a set of 12 intrinsic plus 3 extrinsic parameters for our simulations, we used: 1. log10*m*1/*M*⊙ and log10*m*2/*M*⊙, for the masses of the two black holes. 2. *μ**l* = cos*θ**l* and *ϕ**l*, for the spherical angles of the orbital angular momentum $\bm{L}$ at $\gamma=\frac{1}{6}$. 3. *μ*1 = cos*θ*1 and *ϕ*1 for the spherical angles of the spin of the first black hole $\bm{S}\_1$ at $\gamma=\frac{1}{6}$. 4. $\chi\_1 = \frac{c}{Gm\_1^2} {\left| \bm{S}\_1 \right|}$ for the dimensionless strength of the spin of the first black hole, which has to satisfy 0 ≤ *χ*1 < 1. 5. *μ*2 = cos*θ*2, *ϕ*2, and *χ*2, same for the second black hole as for the first one. 6. *t**c*, the time of coalescence. 7. *φ**c*, the phase at coalescence. As this phase is random and its determination is not of any astrophysical interest, we can safely neglect constants in the orbital phase, in particular *δ**φ*0 from Eq. . 8. *μ**n* = cos*θ**n* and *ϕ**n*, the spherical angles of the position of the binary in the sky. 9. *d**L*, the luminosity distance between the source and the Solar System. All angles are taken in the frame tied to the distant stars. We fix the zero point of time by the beginning of the LISA mission. We performed Monte Carlo simulations where, given a set of parameters, we evolved the binary backwards in frequency starting from *ω*(*γ* = 1/6) using a fourth order adaptive Runge-Kutta algorithm to find ${\bm{\hat{L}}}(\omega)$, $\bm{S}\_1(\omega)$, $\bm{S}\_2(\omega)$, and *δ**φ*(*ω*). We stopped the simulations either at *t* = 0, or when the frequency of the highest harmonic had gone below the LISA band, for 6*ω* < 3 ⋅ 10− 5 Hz. We chose to start at *γ* = 1/6 because it is the radius of the innermost stable circular orbit (ISCO) for a Schwartzschild black hole. Of course, when dealing with a spinning black hole, the radius of the ISCO can vary between *γ* = 1 and *γ* = 1/9, depending on the spin parameter of the black hole and on the orientation of the orbit, but the series may not converge when *γ* is close to 1, so that we chose to consider the post-Newtonian expansion as accurate for *γ* ≤ 1/6, which seems to be a good enough prescription . We then put these functions inside the Fourier transforms of the waves . Five derivatives ∂*θ**i**h̃**k* out of the 15 needed can be found analytically. Three simple ones are: $$\begin{aligned} \frac{\partial\tilde{h}\_k(\theta^j,f)}{\partial t\_c} &= 2 \pi i f \tilde{h}\_k(\theta^j,f), \\ \frac{\partial\tilde{h}\_k(\theta^j,f)}{\partial d\_L} &= - \frac{\tilde{h}\_k(\theta^j,f)}{d\_L}, \\ \frac{\partial\tilde{h}\_k(\theta^j,f)}{\partial \varphi\_c} &= -i \sum\_n n \tilde{h}\_{k,n}(\theta^j,f),\end{aligned}$$ where *h̃**k*, *n* is the *n*-th harmonic component of *h̃**k*. The other two are the derivatives with respect to *μ**n* and *ϕ**n*. The only quantities in Eq.  that depend on these parameters are *A**k*, *n**p**o**l*, *ϕ**k*, *n**p**o**l*, *δ**φ*, and *ϕ**D*. The derivatives which we could not find analytically have been computed numerically using the relation $$\frac{\partial \tilde{h}\_k (\theta^j,f)}{\partial \theta^i} \approx \frac{\tilde{h}\_k(\theta^j + \epsilon \delta^{ij} /2, f) - \tilde{h}\_k(\theta^j -\epsilon \delta^{ij}/2, f)}{\epsilon},$$ where *ε* is a small displacement of the parameter *θ**i*. We used a constant value of *ε* = 10− 7 for every parameter, except for *ϕ**l* for which *ε* was divided by 2 − 2∣*μ**l*∣, *μ**i* (*i* ∈ {1, 2}) for which *ε* was divided by 5*χ**i*, and *ϕ**i* for which *ε* was divided by 10*χ**i*(1 − ∣*μ**i*∣). The formula is accurate up to *O*(*ε*2). We computed the functions ${\bm{\hat{L}}}(\omega)$, $\bm{S}\_1(\omega)$, $\bm{S}\_2(\omega)$, and *δ**φ*(*ω*) for each displacement of the parameters. We then evaluated numerically the integrals (∂*θ**i**h̃**k*∣∂*θ**j**h̃**k*) to find the Fisher information matrix. Each harmonic *h̃**k*, *n*(*f*) is truncated if necessary to remain inside the LISA band, which we take to be [3 ⋅ 10− 5, 1] Hz. We added the contributions from both detectors, and then inverted the matrix numerically to find the statistical error estimates. We found that in some extreme situations, when ${\bm{\hat{L}}} \cdot {\bm{\hat{n}}}$ gets close to 1, the Runge-Kutta method fails to converge when computing *δ**φ*, because $$\frac{d\delta \varphi}{d\omega} = \frac{{\bm{\hat{L}}} \cdot {\bm{\hat{n}}}}{1 - \left( {\bm{\hat{L}}} \cdot {\bm{\hat{n}}} \right)^2} \left( {\bm{\hat{L}}} \times {\bm{\hat{n}}} \right) \cdot \frac{d{\bm{\hat{L}}}}{d\omega} \sim \frac{1}{{\left| {\bm{\hat{L}}} \times {\bm{\hat{n}}} \right|}}, \ {\bm{\hat{L}}} \cdot {\bm{\hat{n}}} \to 1.$$ In those situations, we chose to compute *δ**φ*(*ω*) whenever ${\bm{\hat{L}}} \cdot {\bm{\hat{n}}}$ is to close to 1 with an approximate value, which is $$\begin{gathered} \delta\varphi(\omega + \delta\omega) \approx \\ \delta\varphi(\omega) + \mbox{angle} \left[ \left( {\bm{\hat{L}}}(\omega+\delta\omega) \times {\bm{\hat{n}}} \right), \left( {\bm{\hat{L}}}(\omega) \times {\bm{\hat{n}}} \right) \right].\end{gathered}$$ We ran different sets of simulations fixing the redshift and the masses, and selected the other parameters randomly, using a flat distribution. The bounds to put for the Monte Carlo selection of the different parameters are clear, except for *t**c*. We chose the following bounds, consistently with : the lower bound for *t**c* is for the physical separation parameter *γ* to be equal to 1/6 at *t* = 0 (combining Eqs.  and ), and the higher bound is *t**c* = 3 yrs (this is in fact the minimum science requirement of the mission), which we take to be the duration of the LISA mission, so that the coalescence is visible during it. We computed for each set the mean measurement errors for the parameters and signal-to-noise ratio (SNR), comparing the output for the RWF, the SWF, and the FWF defined at the end of Sec. [exteff]. Results ======= We ran different sets of simulations, each of them at a redshift of *z* = 1, varying the masses between *O*(105*M*⊙) and *O*(108*M*⊙), and the mass ratio between 1:1 and 1:10. We did not vary the redshift, because, as described in Sec. [sec:theory], it cannot be measured by a gravitational wave experiment. Furthermore, with a redshift to luminosity distance relation fixed, the only parameter varying with the redshift for constant redshifted masses is the luminosity distance. The statistical errors in this case scale for all parameters as (1 + *z*)*d**L*, as this parameter appears only as an overall factor in the waveforms. For example, the statistical error estimates on the parameters for a system with *m*1 = 5 ⋅ 106*M*⊙, *m*2 = 5 ⋅ 105*M*⊙, and *z* = 1 are exactly the same as those for a system with *m*1 = 2 ⋅ 106*M*⊙, *m*2 = 2 ⋅ 105*M*⊙, and *z* = 4 (same redshifted masses), multiplied by 2.5*d**L*(*z* = 4)/*d**L*(*z* = 1). For binaries with masses higher than 107*M*⊙, the results for the RWF cannot be trusted, as the second harmonic spends typically only a few orbits inside the LISA band, and even no signal at all can be observed for 108*M*⊙ binaries. Each of our sets of simulations consisted in over a thousand binaries. We performed *a posteriori* statistical checks showing that the medians should be correctly estimated up to a few percent. We present here in tables for all samples and all interesting parameters a best-case measurement error (5% quantile), a typical error (the median), and a worst-case error (95% quantile). The parameters we are interested in are the (redshifted) individual masses of the black holes, shown in Tables [tab:m1] and [tab:m2] their spin parameters, shown in Tables [tab:ch1] and [tab:ch2], the principal axes of the localization ellipse in the sky, shown in Tables [tab:2a] and [tab:2b], and the (redshifted) luminosity distance to the source, shown in Table [tab:dl]. We followed  to present as sensible quantities for the sky localization the principal axes 2*a* and 2*b* of the ellipse enclosing the region outside of which there is a 1/*e* probability of finding the binary. For binaries for which no signal can be extracted from the data, we fixed the errors to ∞. For errors apparently meaningless, such as Δ*d**L*/*d**L* > 1 or 2*a* > 2*π*, we still provide the error as computed, because it can give an indication on up to what redshift quantities can be computed using the scaling property of the error with respect to (1 + *z*)*d**L*. [!ht] |c|c|c|c|c|c|c|c|c|c|c| *m*1[*M*⊙] & *m*2[*M*⊙] & & & & & & & RWF & SWF & FWF & RWF & SWF & FWF & RWF & SWF & FWF 3 ⋅ 105 & 105 & 1.85 ⋅ 10− 4 & 1.69 ⋅ 10− 4 & 1.44 ⋅ 10− 4 & 8.06 ⋅ 10− 4 & 6.71 ⋅ 10− 4 & 4.76 ⋅ 10− 4 & 8.17 ⋅ 10− 3 & 2.54 ⋅ 10− 3 & 1.58 ⋅ 10− 3 106 & 105 & 2.44 ⋅ 10− 4 & 1.99 ⋅ 10− 4 & 1.63 ⋅ 10− 4 & 7.13 ⋅ 10− 4 & 5.36 ⋅ 10− 4 & 4.25 ⋅ 10− 4 & 4.63 ⋅ 10− 3 & 2.82 ⋅ 10− 3 & 2.13 ⋅ 10− 3 106 & 3 ⋅ 105 & 4.08 ⋅ 10− 4 & 3.59 ⋅ 10− 4 & 2.90 ⋅ 10− 4 & 1.36 ⋅ 10− 3 & 1.10 ⋅ 10− 3 & 8.01 ⋅ 10− 4 & 1.11 ⋅ 10− 2 & 3.95 ⋅ 10− 3 & 2.60 ⋅ 10− 3 3 ⋅ 105 & 3 ⋅ 105 & 1.69 ⋅ 10− 4 & 1.62 ⋅ 10− 4 & 1.19 ⋅ 10− 4 & 1.24 ⋅ 10− 3 & 1.17 ⋅ 10− 3 & 2.91 ⋅ 10− 4 & 1.14 ⋅ 10− 2 & 8.82 ⋅ 10− 3 & 6.52 ⋅ 10− 4 106 & 106 & 3.59 ⋅ 10− 4 & 3.53 ⋅ 10− 4 & 2.53 ⋅ 10− 4 & 2.48 ⋅ 10− 3 & 2.42 ⋅ 10− 3 & 7.16 ⋅ 10− 4 & 2.98 ⋅ 10− 2 & 2.29 ⋅ 10− 2 & 1.57 ⋅ 10− 3 107 & 106 & 1.21 ⋅ 10− 3 & 7.93 ⋅ 10− 4 & 4.46 ⋅ 10− 4 & 4.34 ⋅ 10− 3 & 2.49 ⋅ 10− 3 & 1.37 ⋅ 10− 3 & 4.21 ⋅ 10− 2 & 1.12 ⋅ 10− 2 & 5.09 ⋅ 10− 3 107 & 3 ⋅ 106 & 3.39 ⋅ 10− 3 & 1.89 ⋅ 10− 3 & 8.23 ⋅ 10− 4 & 1.59 ⋅ 10− 2 & 4.26 ⋅ 10− 3 & 1.79 ⋅ 10− 3 & 0.140 & 1.18 ⋅ 10− 2 & 5.86 ⋅ 10− 3 107 & 107 & 2.20 ⋅ 10− 2 & 1.21 ⋅ 10− 2 & 2.04 ⋅ 10− 3 & 0.213 & 8.97 ⋅ 10− 2 & 5.79 ⋅ 10− 3 & 1.47 & 0.825 & 1.57 ⋅ 10− 2 3 ⋅ 107 & 107 & 0.377 & 8.64 ⋅ 10− 3 & 4.35 ⋅ 10− 3 & 1.01 & 1.99 ⋅ 10− 2 & 9.48 ⋅ 10− 3 & 3.23 & 5.35 ⋅ 10− 2 & 2.43 ⋅ 10− 2 3 ⋅ 107 & 3 ⋅ 107 & 3.74 & 0.525 & 5.55 ⋅ 10− 2 & 23.1 & 2.26 & 0.120 & 115 & 9.05 & 0.386 108 &
arxiv_0000039
**T**R* = 0, and the equilibrium is as proposed. It is immediately apparent that while the LERGM rate function resembles the transition function of a Gibbs sampler, the change inhibition process resembles the corresponding Metropolis algorithm. However, it is again important to note that in this process all change events are competing with each other, and it is not equivalent to quasi-time Metropolis dynamics. Like the LERGM, dwell times at each state are related to the gaps in favorability between the focal state and its neighbors, but in this case uphill moves are not enhanced; thus, dwell times will be longer for low-potential states with high-potential neighbors. Also like the LERGM (but not the competing rate SAOM), the change inhibition process has a maximum rate of change (which is realized for all uphill moves). ### The Differential Stability Process Another process that has not to our knowledge appeared in the prior literature (but that is motivated by well-known properties of CTMCs) arises from allowing changes between states to occur at random, with only the dwell time in individual states varying in a systematic manner. Specifically, we consider a process in which the expected time spent in an arbitrary state, *a*, after entry is proportional to exp(*q*(*a*)). In this *differential stability process,* some states are more favorable, or stable, than others, and the system will spend more time in such states before transitioning out of them. However, the transitions themselves are to random Hamming neighbors. Such a system lies at the opposite extreme from a constant rate SAOM, where aggregate event rates are held constant and transitions are driven by differences in the conditional probabilities of transitioning between competing states, and is hence interesting as a corner case; substantively, however, it may be a useful starting point for modeling “win-stay, lose-shift” dynamics, or network evolution that is driven primarily by exogenous shocks. For the differential stability process, we take S = G, and let *T* be given by Hamming adjacency. We define the process by specifying a rate structure, *R*, such that for Hamming-adjacent neighbors *a* and *b*, *R**a**b* = *A*∣H(*a*)∣− 1exp( − *q*(*a*)), with transition rates of 0 for *b* ∉ H(*a*). Note that, per the above, this rate does not depend upon the destination state. It is clearly the case that the dwell time here in state *a* is then  − 1/*R**a**a* = *A*exp(*q*(*a*)), as desired, with *A* ∈ R+ being a pacing constant. Since the rate structure of this system is connected on the state space, and since the state space is finite, the process has an equilibrium distribution. We posit that this equilibrium has the form *π**a* = exp(*q*(*a*))/*Z*. To show that this is so, we first consider the flux into an arbitrary state *a*: $$\begin{aligned} \sum\_{b\in \mathcal{H}(a)} \pi\_b R\_{ba} &= \sum\_{b\in \mathcal{H}(a)} \frac{\exp(q(b))}{Z} A |\mathcal{H}(a)|^{-1} \exp(-q(b))\\ &= \frac{A}{Z|\mathcal{H}(a)|} \sum\_{b\in \mathcal{H}(a)} 1\\ &=A/Z.\end{aligned}$$ Now, consider the outgoing flux at *a*: $$\begin{aligned} \pi\_a \sum\_{b\in \mathcal{H}(a)} R\_{ab} &= \frac{\exp(q(a))}{Z} \sum\_{b\in \mathcal{H}(a)} A |\mathcal{H}(a)|^{-1} \exp(-q(a))\\ &= A/Z.\end{aligned}$$ The incoming flux equals the outgoing flux, and hence *π* is indeed the equilibrium distribution of the differential stability process. Although the property is not unique to this family, the fact that transitions are uniformly random in the differential stability process makes especially obvious the fact that inference for its parameters can be performed using only aggregate information on state occupancies, without sequential information. This may facilitate the use of this model with fragmentary data sources, or in other settings where it is easier to estimate where the system is spending most of its time than to track specific transitions. ### Continuum STERGMs As discussed, the STERGM class of temporal ERGMs has been an important focus of work related to continuum-limit models, because of the potential for fitting to limited observational data sets. The constant dissolution model has been central to this effort, and we treat it here separately, followed for by its constant formation counterpart. While less trivial to fit, general STERGMs can also be extended to the continuum limit, and we close this section with a consideration of this broader family of cases. #### Constant-dissolution Continuum STERGMs As noted above, employ STERGMs with constant dissolution rates as a tool for approximate estimation of network dynamics from combined cross-sectional and durational data; they obtain exact results for the dyadic independence case, and then argue from simulation that such a STERGM will approach an approximate ERGM equilibrium whose potential is equal to the potential of the formation model, with an edge parameter adjustment for dissolution. In an inferential setting, this parameter can be directly estimated given tie duration. Subsequently, have shown that the error in this approximation declines as the length of the (discrete) STERGM time step decreases, and one does (under mild regularity conditions) approach the presumed limit as step size goes to zero. They refer to the model under this finite but arbitrarily small step size an “infinitesimal STERGM.” Here, we directly consider the corresponding continuous time process, which we refer to as a “continuum STERGM” (CSTERGM) with constant dissolution rates. Specifically, we here consider the case of a CSTERGM in which rates of edge formation are driven by an ERGM potential *q**f*, while dissolution is driven by a single parameter *θ**d* (i.e., all edges are lost at constant rate exp(*θ**d*)). We construct the continuum process as follows. Let *S* be a CTMC on state space S = G, with G being the set of all order-*N* (di)graphs and *T* being the Hamming adjacency on the state space. Let H+ be the set of Hamming neighbors formed by adding an edge to the argument, and let H− be the set of Hamming neighbors formed by removing an edge from the argument. *S* then follows the instantaneous rate structure $$R\_{ab} = \begin{cases} 0 & \text{ if } b\not\in \mathcal{H}^+(a) \cup \mathcal{H}^-(a)\\ \exp(\theta\_d) & \text{ if } b\in \mathcal{H}^-(a)\\ \exp(q\_f(b)-q\_f(a)) & \text{ if } b\in \mathcal{H}^+(a)\\ -\sum\_{b\in\mathbb{G}\setminus a} R\_{ab} & \text{ if } a=b \end{cases}.$$ Clearly, ∣S∣ < ∞, and *T* is connected; thus, *S* has an equilibrium distribution. We posit that this equilibrium distribution corresponds to *π**a* = exp(*q**f*(*a*) − *θ**d**w**e*(*a*))/*Z*. To verify this, we consider the influx to an arbitrary state *a*: $$\begin{aligned} \sum\_{b\in\mathbb{G}\setminus a} R\_{ba} & =\sum\_{b\in\mathcal{H}^+(a)} \pi\_b \exp(\theta\_d) + \sum\_{b\in\mathcal{H}^-(a)} \pi\_b \exp\left(q\_f(a)-q\_f(b)\right)\\ &= \sum\_{b\in\mathcal{H}^+(a)} \frac{\exp(q\_f(b)-\theta\_d w\_e(b))}{Z} \exp(\theta\_d) + \sum\_{b\in\mathcal{H}^-(a)} \frac{\exp(q\_f(b)-\theta\_d w\_e(b))}{Z} \exp\left(q\_f(a)-q\_f(b)\right)\\ &=\frac{1}{Z} \left[\sum\_{b\in\mathcal{H}^+(a)} \exp\left(q\_f(b)-\theta\_d (w\_e(b)-1)\right) + \sum\_{b\in\mathcal{H}^-(a)} \exp\left(q\_f(a)-\theta\_d w\_e(b)\right)\right].\\ \intertext{Observe that, for $b\in\mathcal{H}^+(a),$ $w\_e(b)=w\_e(a)+1$, and for $b\in\mathcal{H}^{-}(a),$ $w\_e(b)=w\_e(a)-1$. Substituting and factoring out the putative equilibrium probability of $a$ gives} &= \frac{\exp(q\_f(a)-\theta\_d w\_e(a))}{Z} \left[ \sum\_{b\in\mathcal{H}^+(a)} \exp\left(q\_f(b)-q\_f(a)\right) + \sum\_{b\in\mathcal{H}^-(a)} \exp(\theta\_d) \right]\\ &= \pi\_a \sum\_{b\in\mathbb{G}\setminus a} R\_{ab}\\ &=-\pi\_a R\_{aa}.\end{aligned}$$ It thus follows that *π**T**R* = 0, and the continuum STERGM has the desired equilibrium. #### Constant Formation Continuum STERGMs While constant dissolution STERGMs have received more attention in the literature, a very similar development is also possible for constant *formation* STERGMs: processes in which edges form at a constant rate, but where their persistence is governed by a potentially complex process. Such a model represents relational “selection” in its truest sense, since the key dynamics are driven by the differential survival of relationships. That this side of the STERGM family is less well-studied than the constant dissolution side may reflect a bias in the field towards processes that create edges rather than those that retain or remove them (a broader concern raised e.g. by ), though the constant dissolution approximation is doubtless reasonable in many settings. In some cases, however, ties may form initially through processes that are idiosyncratic and largely exogenous, with enduring structure resulting from the maturation of some fraction of those ties into longer-term relations. For instance, in work settings structured around *ad hoc* teams convened to perform specific projects, tie formation may be driven by short-term assignments that result from shifting labor needs, rather than any preferences of the individuals involved. Some, however, may elect to keep in touch with colleagues from those short-term assignments over the long haul, a process that may be driven by individual considerations. Alternately (as with the differential stability process), network evolution may not involve individuals or decision processes at all, and may simply reflect differences in the stability of relationships to exogenous shocks. Although, to our knowledge, a continuum version of the constant formation STERGM has not appeared in the literature, we can easily define it here. As with the constant dissolution STERGM, we take S = G, presume that G is the set of order-*N* graphs, and employ a separate dissolution potential *q**d*(*a*) = *θ**d**T**w**d*(*a*) and constant formation parameter *θ**f*. Our rate function is given by $$R\_{ab} = \begin{cases} \exp(\theta\_f) & \text{if } b\in\mathcal{H}^{+}(a)\\ \exp(q\_d(b)-q\_d(a)) & \text{if } b\in\mathcal{H}^{-}(a)\\ -\sum\_{b\in\mathcal{H}(a)}R\_{ab} & \text{if } a=b\\ 0 & \text{otherwise} \end{cases},$$ and we propose the equilibrium distribution *π**a* = exp[*q**d*(*a*) + *w**e*(*a*)*θ**f*]/*Z*, with *w**e* as usual being the edge count. To verify, we examine the in-flux to an arbitrary state *a* in equilibrium, $$\begin{aligned} \sum\_{b\in\mathcal{H}(a)} \pi\_b R\_{ba} &= \sum\_{b\in\mathcal{H}^{-}(a)}\pi\_b \exp[\theta\_f] + \sum\_{b\in\mathcal{H}^{+}(a)} \pi\_b \exp\left[q\_d(a)-q\_d(b)\right]\\ &= \frac{1}{Z} \left[ \sum\_{b\in\mathcal{H}^{-}(a)}\exp\left[q\_d(b)+w\_e(a) \theta\_f\right] + \sum\_{b\in\mathcal{H}^{+}(a)} \exp\left[q\_d(a)+(w\_e(a)+1) \theta\_f\right]\right], \\ \intertext{where we have used the relationship between the edge counts of $a$ and $b$, respectively. Pulling out $\pi\_a$ then gives us} &= \frac{\exp\left[q\_d(a)+w\_e(a)\theta\_f\right]}{Z} \left[ \sum\_{b\in\mathcal{H}^{-}(a)} \exp\left[q\_d(b)-q\_d(a)\right] + \sum\_{b\in\mathcal{H}^{+}(a)} \exp\left[\theta\_f\right]\right]\\ &= \pi\_a \sum\_{b\in\mathcal{H}(a)} R\_{ab}\\ &= -\pi\_a R\_{aa}.\end{aligned}$$ The condition that *π**T**R* = 0 is then satisfied, and the constant formation CSTERGM has the posited equilibrium. As with the constant dissolution CSTERGM, the constant formation CSTERGM can in principle be fit to data containing only information on cross-sectional network structure and durations, here specifically the duration of *nulls* rather than edges; equivalently, edges added per unit time will suffice, provided that care is taken to control for the current density (since the total rate of edge addition scales with the number of edges that are available to add). We do not consider this problem in greater detail here, but the approach parallels that for the constant dissolution case. #### General Continuum STERGMs While the constant rate cases are of particular pragmatic interest, it should be noted that the same general approach used above can be used to provide a continuum version of STERGMs with arbitrary rate structure. These general CSTERGMs have not, to our knowledge, been treated in previous literature. We define them as follows. Combining our previous two cases, let *q**f* be a formation potential, and *q**d* a dissolution potential. We take S = G on the order-*N* (di)graphs, and let *T* correspond to Hamming adjacency, with transition rates for states *a*, *b* ∈ S given by $$R\_{ab} = \begin{cases} 0 & \text{ if } b\not\in \mathcal{H}^+(a) \cup \mathcal{H}^-(a)\\ \exp(q\_d(b)-q\_d(a)) & \text{ if } b\in \mathcal{H}^-(a)\\ \exp(q\_f(b)-q\_f(a)) & \text{ if } b\in \mathcal{H}^+(a)\\ -\sum\_{b\in\mathbb{G}\setminus a} R\_{ab} & \text{ if } a=b \end{cases}.$$ Since the system is irreducible and has a finite state space, it has an equilibrium distribution, *π*. We posit that *π**a* = exp(*q**f*(*a*) + *q**d*(*a*))/*Z* for *a* ∈ S. As usual, we verify by considering the influx to an arbitrary state, *a*: $$\begin{aligned} \sum\_{b\in\mathcal{H}(a)} \pi\_b R\_{ba} &= \sum\_{b\in\mathcal{H}^{-}(a)}\pi\_b \exp[q\_f(a)-q\_f(b)] + \sum\_{b\in\mathcal{H}^{+}(a)} \pi\_b \exp\left[q\_d(a)-q\_d(b)\right]\\ &=\frac{1}{Z}\left[ \sum\_{b\in\mathcal{H}^{-}(a)} \exp[q\_f(a)+q\_d(b)] + \sum\_{b\in\mathcal{H}^{+}(a)} \exp\left[q\_d(a)+q\_f(b)\right] \right]\\ &=\frac{\exp\left[q\_f(a)+q\_d(a)\right]}{Z} \left[ \sum\_{b\in\mathcal{H}^{-}(a)} \exp[q\_d(b)-q\_d(a)] + \sum\_{b\in\mathcal{H}^{+}(a)} \exp\left[q\_f(b)-q\_f(a)\right] \right]\\ &= \pi\_a \sum\_{b\in\mathcal{H}(a)} R\_{ab}\\ &=-\pi\_a R\_{aa}.\end{aligned}$$ Thus, *π**T**R* = 0, and the equilibrium is as posited. Like their discrete counterparts, the general CSTERGMs are particularly natural in settings for which the factors shaping edge formation are different from those shaping persistence or dissolution of edges. Inference for these models, however, is more complex than for their constant formation/dissolution counterparts, due to the need to separate the two distinct potentials (*q**f* and *q**d*) that cannot be inferred from mean durations. We briefly comment further on this issue in Section [secdisc]. ### Continuum TERGMs Having seen continuum forms of the STERGM family, it is reasonable to ask if there is a similar continuum extension of the general TERGMs. The answer is affirmative, as can be appreciated by simply imposing *q* = *q**f* = *q**d* in the CSTERGM development. We thus state here the general form of the continuum TERGM process (CTERGM), which has not to our knowledge appeared in prior literature. Following the same assumptions as the CSTERGMs, we take S = G with G being the order-*N* (di)graphs, with *T* corresponding to Hamming adjacency on G and rate structure $$R\_{ab} = \begin{cases} 0 & \text{ if } b\not\in \mathcal{H}(a)\\ \exp(q(b)-q(a)) & \text{ if } b\in \mathcal{H}(a)\\ -\sum\_{b\in\mathbb{G}\setminus a} R\_{ab} & \text{ if } a=b \end{cases}.$$ Finiteness of the state space and irreducibility of the transition structure give us an equilibrium *π*, which we assert has the form *π**a* = exp(2*q*(*a*))/*Z*. We verify this by examining the influx to an arbitrary state, *a* ∈ S: $$\begin{aligned} \sum\_{b\in\mathcal{H}(a)} \pi\_b R\_{ba} &= \sum\_{b\in\mathcal{H}(a)}\pi\_b \exp[q(a)-q(b)]\\ &=\frac{1}{Z} \sum\_{b\in\mathcal{H}(a)}\exp[q(a)+q(b)]\\ &=\frac{\exp[2 q(a)]}{Z} \sum\_{b\in\mathcal{H}(a)}\exp[q(b)-q(a)]\\ &= \pi\_a \sum\_{b\in\mathcal{H}(a)} R\_{ab}\\ &= - R\_{aa}.\end{aligned}$$ This establishes that *π**T**R* = 0, and *π* is indeed the CTERGM equilibrium. While the factor of two that appears in the CTERGM equilibrium may appear counterintuitive, we can appreciate from the CSTERGM case that it arises from the interaction of the flux out of *a* from edge loss (respectively edge gain) and the flux into *a* from neighbor edge gain (respectively, neighbor edge loss): when *a* is a high-potential state, it both sends less flux to its neighbors and gets more flux in return. In some cases, it may be more aesthetic to drop this factor of two from the equilibrium graph potential, and divide the log transition rate by two instead; we keep the present parameterization, however, to emphasize the relationship with the CSTERGMs. Discussion ========== As we have seen, there are many ways of defining continuous time graph processes that lead to ERGM equilibria, underscoring the point that the same cross-sectional distribution can arise from different generative processes. However, not all such processes are equally plausible in any given setting. Here, we briefly comment on what can be said about the general properties of the processes described above, and how they may be usefully distinguished on qualitative grounds. We also note some potential insights from these processes regarding network evolution more generally, as well as some observations for practical use in applied settings. Summary of General ERGM-generating Processes -------------------------------------------- Leaving aside special case models, the eight families of ERGM-generating processes are summarized in Table [tabeq]. (Note that, since the time scale on which the model is defined is arbitrary, all rates are defined up to a similarity transform. For clarity, we leave the fundamental rate constants, *A*, when used in model definition.) The diversity of formulations seen in our above development is on display here. However, some general patterns are evident. Transitions can be governed by the absolute potential of the target state (*q*(*b*), in an *a* → *b* transition), the absolute potential of the originating state, or the difference in potentials; this reflects three broad classes of dynamics, where transitions are governed by (respectively) the favorability of the target, the unfavorability of the source, or the gain in favorability associated with transitioning from source to target. One or another of these may be more plausible in particular settings, suggesting a specific choice of model family. Going further, we may characterize the differences between models in terms of a number of basic properties, as shown in Table [tabprop]. First, we observe that two families (the LERGMs and the Change Inhibition process) have transitions rates that are bounded above by a constant - there is, for these families, a fundamental “speed limit” on the pace of network change, potentially reflecting unmodeled micro-level mechanisms that depend upon some other temporal process (e.g., “collisions” or “encounters”) whose pace is not controlled by network structure. For the other families, there is in principle no limit on how rapidly transitions can occur. This is sensible where edges can evolve by near-instantaneous processes (e.g., one actor deciding in rapid succession to break off ties to each member of a group), though in other cases care may be needed in empirical settings to ensure that transition rates do not become unrealistically large. We have also seen that families vary in whether these rates are fundamentally driven by target potential (Competing Rate SAOMs), source potential (Differential Stability), or potential differences (all others). The target-focus of the SAOM arises from its utility-theoretic motivation, as actors’ decisions within the model are assumed to be based on the attractiveness of the target state, and not on a comparison with their existing state (such comparisons being characteristic of non-rational choice models such as Prospect Theory ). By turns, an exclusive focus on the stability of the current state is the defining condition of the Differential Stability process. It is interesting that all other processes so far proposed operate instead on *differences* in potentials, such that dynamics slow down as the graph process enters a “flatter” region of the potential surface. The motivation for difference-driven dynamics in the LERGM was originally by appeal to a conditional probability argument, although as we have seen this can also be motivated by the notion that formation and dissolution processes are “competing” with each other even within a dyad and that observed change rates are the result of that competition. For Change Inhibition, the motivation is direct: we posit that the system “resists” downhill moves *per se,* which implies a comparative structure. Finally, the use of potential differences in the continuum (S)TERGMs is necessary for constant rate models to lead to desired behavior (i.e., to serve as continuum limits in the manner proposed e.g by and ). These differences underscore the point that distinct generative mechanisms can lead to quite different dynamics. As a next point of contrast, we see that models differ in how rates are varied across the state space. Most immediately, the CSTERGM families are defined separably, so that formation and dissolution rates are distinct; in the general case, this feature is not shared by the other families (though some may coincide, as with the trivial case of a general CSTERGM with *q**f* = *q**d* and the CTERGMs). Less trivially, we observe that processes vary in whether transition rates vary across (are “sensitive to”) neighboring states of higher or lower potential (respectively). Although most processes considered here have higher rates of transition to states with much higher potential than those with only marginally higher potential, this is not true for Change Inhibition (which is indifferent between uphill moves) and Differential Stability (which is indifferent to *all* moves). This is even more true for sensitivity to variation in potential loss, with only the Differential Stability process being indifferent to transitions to states with slightly versus substantially lower potential. These distinctions lead to very different patterns of network evolution. Finally, we distinguish models that are sensitive to differences in potential among neighboring states formed by adding edges (members of H+) versus those formed by removing edges (members of H−). Other than the Differential Stability process (which, as noted, is insensitive to all neighbor properties), these distinctions are characteristic of the constant rate CSTERGMs, which respectively treat all dissolution transitions or formation transitions equivalently. Taken together, these distinctions separate all eight families, with each having a distinctive signature. We observe that there are some signatures that do not have a corresponding model within this set, potentially suggesting opportunities for defining additional families. Insights for Network Evolution ------------------------------ Considering network evolution in its most general terms provides a number of substantive insights. First, we observe that complex structure does not imply a selective transition process: timing alone is sufficient to generate arbitrarily rich graph structure. One cannot thus automatically assume that a focal system is driven by dynamics that seek to walk uphill on the potential (though this may be a good assumption or other reasons). Further, this observation reminds us that factors that alter the pace of change within a social network may have a substantial impact on long-term structure, so long as they do not act uniformly. Another insight is that complex structure can arise equally well from formation or dissolution processes. Though it is often reasonable to think of structure as being driven primarily by constraints on which ties can or will form (e.g., interaction opportunities, lowered or raised barriers to tie formation), it may be that differences in selection or survival of ties are the primary structural drivers. Or both may be involved. As above, there may be other information that leads us to believe that one class or the other will be primary in a given setting, but the mere presence of complex structure has no bearing on this. We can also see from the processes examined here that ERGM-generating graph processes that *do* follow a potential do not need to do so symmetrically: like the Change Inhibition process, they can be more sensitive to downhill moves than uphill moves (or, conceivably, *vice versa*). It may be reasonable to assume symmetry on substantive or other grounds, but it need not exist for complex structure to result. A further insight that can be obtained by connecting dynamics with equilibrium behavior is that many dynamic processes lead to degenerate long-run behavior (in the sense of ). Indeed, any ERGM-generating process coupled with a potential leading to a degenerate ERGM distribution will also have degenerate dynamics; the fact that such a wide range of different dynamic processes can lead to the same ERGM equilibria thus confirms that degeneracy is not a consequence of any specific choice of local dynamics. For instance, the choice-motivated SAOMs and competing rate-motivated LERGMs can both lead to the same degenerate graph distributions as CTERGMs and Differential Stability processes. Sometimes, as per, this is in fact an empirically realistic outcome; for most social networks, however, degeneracy is usually pathological. What we can glean from the present work is that - at least for processes of the type studied here - degeneracy should be understood as arising from the graph potential rather than the processes operating on it. While this is not unprecedented (the presence of many quasi-time MCMC processes leading to identical graph distributions, for instance, suggests the same conclusion, as do results of ), it may not always have been appreciated. A deeper consideration of the connection between dynamic processes and their long-run behavior thus has the potential to produce more general insights about the factors that lead to the types of networks we see in the real world, as opposed to the many types of networks that *could* arise, but that do not. Finally, we note that while some processes may be especially natural in particular settings (e.g., SAOMs for unilaterally controlled edges in interpersonal networks, LERGMs in physical networks), any given graph process may admit multiple interpretations. It is important not to assume that any one mechanistic interpretation of a graph process is the only one possible. That said, deriving an ERGM-generating graph process from first principles can provide a strong motivation for applying it in cases where the associated assumptions are met. Mechanistic derivations of ERGM-generating continuous time graph processes would thus seem to be an important area for further research. Considerations for Practical Use -------------------------------- Although our focus here is on model definition and general properties, we may glean a few considerations for use of these models in practice. First, we observe that all of the ERGM-generating graph processes studied here except for the CSTERGMs have parameters that can be estimated up to a pacing constant from cross-sectional observations. Specifically, inferring *q* from one or more equilibrium draws determines the behavior of the model, although quantitative rates of change obviously require that the characteristic timescale of the process (and, where applicable, the maximum change rate) be inferred from information on observed dynamics. In the case of the constant rate CSTERGMs, one cannot identify the formation or dissolution potentials (respectively) without knowing the respective dissolution or formation rates, but these are easily estimated from duration or pacing information (as discussed by ). Moreover, since ERGM inference can often be performed using sampled or incomplete data, it follows that a considerable amount can be inferred about network dynamics from quite limited observations, provided that one has some *a priori* basis for constraining the associated model family. It is also useful to observe that, given an ERGM, it is straightforward to simulate hypothetical equilibrium dynamics using these models; in such applications, one must choose the pacing constant and, for the constant rate CSTERGMs, the corresponding mean durations. As noted above, such an approach has been used in the discrete case e.g. by and in the continuous case by, both of whom used approximate matching to available empirical observations to calibrate total change rates. In some applications, considering dynamics relative to the characteristic timescale of the process (“phenomenological time”) may be sufficient to gain useful insights. Although a detailed discussion of simulation procedures is beyond the scope of this paper, it should be noted that standard discrete event simulation methods can be used for all of the models considered here. In particular, given rate structure *R* and current state *a*, the probability that the next transition will be to state *b* is simply *R**a**b*/∑*c* ∈ S \ *a**R**a**c*, with the time to the next event being exponentially distributed with expectation [∑*b* ∈ S \ *a**R**a**b*]− 1. For these families (all of which involve single-edge changes), this can be implemented by calculating the rates for all potential toggles from the current state, then selecting both the next toggle and the time to the next event independently. Model-specific implementations may allow for greater efficiency in some cases (e.g., by pooling events with identical rates). It should be noted that the presence of *absolute* exponentiated graph potentials in the exit rates for the Competing Rate SAOMs and Differential Stability processes (as opposed to exponentiated potential *differences*; see Table [tabeq]) implies that these processes may in some cases exhibit dynamics with very large numbers of transition events per unit phenomenological time. For the former process, this arises when the model shuttles very rapidly between two or more Hamming-adjacent, high-potential graphs. For the latter process, the phenomenon arises when the model has entered a low-potential region of graph space, and rapidly switches at random between low-potential graphs. Although these dynamics take very little phenomenological time, they can become very computationally expensive in large systems with large variation in graph potential. Specialized algorithms for handling these cases may be needed in some circumstances; where transition rates become extremely large, it may also be prudent to consider whether the process in question is substantively reasonable. Where detailed dynamic information is available, this can of course be exploited, allowing for out-of-equilibrium inference; see, e.g.. Such information is generally required for general CSTERGM inference (since one cannot distinguish the formation and dissolution potentials from equilibrium information), and is also needed when seeking to distinguish among ERGM-generating processes on an empirical basis. Although model adequacy checking for continuous time models is an area of active development (see e.g. ), one obvious diagnostic is to compare observed dwell times in each state to those of Table [tabeq]. While more conventional adequacy checking strategies based on cross-sectional simulation are useful for assessing whether a model can reproduce equilibrium behavior, they will not distinguish among ERGM-generating processes with the same equilibrium. By contrast, different dynamic processes with equivalent graph equilibria generally have very different inter-event timing. Conclusion ========== The above sketches some of the precursors and known classes of stochastic processes leading to known ERGM distributions. As observed, there are several general classes of such processes; however, it is likely that these only scratch the surface of what is possible. It is interesting to note that, with the lone exception of the CFP/R, the frameworks studied to date all take S = G. Although this is convenient, it limits the dynamics that are possible. Extending the state space will necessarily add an extra entropic contribution to the ERGM potential - equivalently, a change of reference measure - since it will change the number of ways that each graph can be realized (i.e., it adds hidden degrees of freedom). Since entropic effects are (broadly) related to opportunity structures, state space extension may be important for capturing contextual constraints on network dynamics. We also observe that while dynamics are limited to Hamming transitions for the LERGM, Change Inhibition, and Differential Stability processes, this limitation is not required for deriving their equilibria (other than establishing irreducibility), and it is not essential: these processes can operate (with equivalent rate functions) on any fully connected set of transitions on G. Thus, simultaneous edge changes can be accommodated within existing CTMC processes, albeit at computational cost. We conclude by observing that a major limitation on progress in this area is a dearth of high-quality dynamic data on social or other networks that is capable of discriminating among competing models. It is hoped that advances in data collection will produce a body of observations that will put continuous time network models on a firmer empirical footing. --- 1. Departments of Sociology, Statistics, Computer Science, and EECS and Institute for Mathematical Behavioral Sciences, University of California Irvine; `[email protected]`[↩](#fnref1) 2. This work was supported by NIH award 1R01GM144964-01 and NSF award SES-1826589. The author thanks Martina Morris, Michael Schweinberger, Chad Klumb, and Steve Goodreau for their input and helpful comments.[↩](#fnref2) 3. E.g., we need the change rate to not go to infinity in finite time, so that the system cannot “ergotize” out from under us.[↩](#fnref3) 4. We also need non-simultaneous edge state transitions, i.e., all non-vanishing transition rates are on unit moves in the Hamming space of the support.[↩](#fnref4) 5. E.g., spontaneous deflation of a balloon is rather faster than the time required for it to spontaneously reinflate. But this seeming asymmetry is driven by the fact that we *selected* the balloon when it was in a very rare conformation, and the waiting time to observe such a rare conformation is very long. The “arrow” is in our selection process, not the system itself. This is merely the regression effect, much disguised.[↩](#fnref5) 6. This formulation itself appears earlier, but is not discussed as a general ERGM generating process.[↩](#fnref6) Continuous Time Graph Processes with Known ERGM Equilibria: Contextual Review, Extensions, and Synthesis[2](#fn2) ================================================================================================================== ### 2/12/23; to appear in *Journal of Mathematical Sociology*, DOI 10.1080/0022250X.2023.2180001 Graph processes that unfold in continuous time are of obvious theoretical and practical interest. Particularly useful are those whose long-term behavior converges to a graph distribution of known form. Here, we review some of the conditions for such convergence, and provide examples of novel and/or known processes that do so. These include subfamilies of the well-known stochastic actor oriented models, as well as continuum extensions of temporal and separable temporal exponential family random graph models. We also comment on some related threads in the broader work on network dynamics, which provide additional context for the continuous time case. Graph processes that unfold in continuous time are natural models for social network dynamics: able to directly represent changes in structure as they unfold (rather than, e.g. as snapshots at discrete intervals), such models not only offer the promise of capturing dynamics at high temporal resolution, but are also easily mapped to empirical data without the need to preselect a level of granularity with respect to which the dynamics are defined. Although relatively few general frameworks of this type have been extensively studied, at least one (the stochastic actor-oriented models, or SAOMs) is arguably among the most successful and widely used families of models in the social sciences (see, e.g., among many others). Work using other continuous time graph processes has also found applications both within and beyond the social sciences, suggesting the potential for further advances. While some graph processes are non-ergodic, many of those used in network modeling lead to well-defined equilibrium graph distributions. Of the latter, few have equilibria that are well-characterized. For instance, a typical constant-rate SAOM of the type noted above will, in the large-time limit, converge from any initial condition to a well-defined distribution over the order-*N* digraphs, but analytical expressions for such distributions are known only in trivial cases. Although the behavior of such distributions can be explored through simulation, this proves costly in cases in which the natural dynamics of the system exhibit slow mixing. More fundamentally, the inability to characterize the equilibrium behavior of such models limits our understanding of how structural dynamics lead to the observed incidence of structural forms, and may make it more difficult to identify and avoid model families with undesirable long-term behavior. For some classes of continuous time graph processes, however, we can show not only the existence of an equilibrium graph distribution, but also the the form of that distribution. These processes are of particular interest, since they allow us to directly link short-term dynamics with their long-term consequences. Often, they are also inferentially tractable, with parameters that can be estimated from cross-sectional information supplemented by sometimes modest dynamic data (e.g., edge durations, or dwell times). Further, it is frequently possible to study the long-term behavior of such graph processes by sampling directly from their equilibrium distributions (using, e.g., Markov Chain Monte Carlo (MCMC)), greatly economizing over explicitly simulated dynamics. Here (and with minimal loss of generality), we focus on expressions of such equilibria in exponential-family random graph model form. The exponential-family random graph models (ERGMs) are a highly general framework for describing distributions on sets of graphs. Indeed, the ERGMs are complete for finite support (i.e., for any distribution on a finite set of graphs, there exists an ERGM representation), and can represent broad and useful classes of distributions beyond this setting (e.g., for count-valued networks ). The currently predominant use case is for unvalued networks on fixed vertex sets, which will be our focus here; however, many of the ideas discussed can be generalized beyond this case. At present, ERGMs are employed primarily as models for single graph realizations (i.e., “cross-sectional” analysis), with no particular assumptions regarding associated dynamics. As a general language for specifying graph distributions, it is important to stress that the ERGM form itself has no *inherent* substantive interpretation (though it has many extremely convenient mathematical, statistical, and computational properties. However, ERGM forms arise “naturally” from a number of dynamic processes that are thought to be plausible accounts of structure formation in real systems, motivating a better understanding of the connection between ERGM families and stochastic processes. Among other motivations: understanding ERGMs that arise from substantively plausible dynamics allows us to study the long-term behavior of those processes; dynamic interpretations of cross-sectional ERGMs can serve as useful conceptual devices for thinking about or understanding cross-sectional models; prior knowledge regarding network dynamics may help us better constrain the space of plausible cross-sectional models; in some cases, connections between dynamic processes and cross-sectional ERGMs may allow us to constrain the former from cross-sectional observations; and a deeper understanding of the connection between dynamic processes and cross-sectional ERGMs may help us design more efficient algorithms for simulation, parametric inference, or other computational tasks involving ERGMs. This last is evidenced e.g. by the central role of quasi-time dynamics in the Markov chain Monte Carlo (MCMC) algorithms that are the backbone of ERGM computation. The study of these relationships is thus of both practical and theoretical import. While there are many types of graph processes that could be studied - and many connections between ERGMs and those processes - our focus here is specifically on *continuous time processes with equilibrium graph distributions having a known ERGM form*. From the above, we can immediately see that any continuous time process on a finite graph set that possesses an equilibrium distribution must trivially have an equilibrium in ERGM form; however, there is no obligation that this distribution be obvious from the dynamic specification, nor that it be easily specified. Indeed, few cases where a continuous time process leads to a specified ERGM form are currently known. In this paper, we review known families of continuous time graph processes with ERGM distributions, and also identify several additional families not previously reported in the literature. In each case, we describe the family’s transition rate structure and equilibrium behavior, demonstrating that each has the desired equilibrium using a uniform approach that aids in seeing similarities and differences across model families. We also summarize a number of properties that distinguish these families from each other, and that may aid model selection in applied settings. Given the broad range of problems that can examined within this space, we also note at the outset some questions that we do not address here. With the exception of a few very brief remarks, we do not deal with the question of inference for model parameters, nor data-driven model selection (though we will comment on aspects of model behavior that may inform model selection on substantive grounds). Likewise, we do not here discuss computational strategies for simulation of trajectories or equilibrium draws from graph processes, and we do not focus on specific examples of particular models in depth (as opposed to model classes). All of these are important topics in their own right, but lie outside the scope of the present work. It is hoped, however, that progress on these issues will be facilitated by a more complete catalog of candidate models to be studied. The remainder of the paper is structured as follows. Section [secnotation] briefly reviews concepts and notation related to ERGMs and other formalisms that will be used throughout. Section [secbackground] discusses more general background on related dynamic network processes that are not defined in continuous time and/or do not have known equilibria, but that provide useful context for understanding our processes of interest. These are the focus of Section [seccont], which discusses continuous time graph processes with ERGM equilibria, providing both novel cases and cases from the existing literature. A comparative overview of the properties of these model families and related general issues are discussed in Section [secdisc], and Section [secconc] concludes the paper. Exponential Family Random Graph Models and Related Concepts =========================================================== Although we shall for the most part introduce concepts and notation as we encounter them, it is useful to begin with a few notions that will be used throughout. We start in particular with the exponential family random graph models. Formally, let *G* be a random graph on support G. A representation of the probability mass function of *G* written as $$\Pr(G=g|\theta,X) = \frac{\exp(\theta^T w(g,X)) h(g)}{\sum\_{g'\in\mathbb{G}}\exp(\theta^T w(g',X)) h(g')} \label{eq\_ergm}$$ is said to be an *ERGM form* for *G*, and any distribution written in such a form is generically referred to as an ERGM. Here, *w* : G, *X* ↦ R*k* is a vector of *sufficient statistics*, *θ* ∈ R*k* is a vector of *model parameters*, *X* is a covariate set, and *h* : G ↦ R≥ 0 is a *reference measure* on G. Intuitively, *θ* biases (or “tilts”) the distribution of *G* with respect to the degrees of freedom indexed by *w*, relative to *h*; in particular, the conditional expectation of *w**i*(*G*) is monotone in *θ**i*. It is common (but by no means necessary) to take *h*(*g*) ∝ *I*(*g* ∈ G), i.e. to specify ERGMs in terms of the *counting measure* on G. In this case, the choice *θ* = 0 leads to the uniform distribution on G, and the resulting family can be seen as an exponential tilting of the uniform random graphs (net of support constraints). In the current development, we take G to be some set of unvalued graphs (directed or otherwise), although many of these ideas extend directly to the valued case. For notational and conceptual simplicity, it is useful to observe that Eq. [eqergm] can be written as lnPr(*G* = *g*∣*θ*, *X*) = *q*(*g*) − ln*Z*, where *q*(*g*) = *θ**T**w*(*g*, *X*) + ln*h*(*g*) is a quantity we will call the *graph potential* and *Z* = ∑*g*ʹ ∈ Gexp[*q*(*g*ʹ) + ln*h*(*g*ʹ)] is the *normalizing factor* or *partition function*. We notationally suppress the dependence of *q* and *Z* on *θ*, *X*, *w*, and *h* when there is no danger of confusion (i.e., we are interested only in the variation of *q* with respect to *g*, and in *Z* as a normalizer). More explicit notation is invoked when needed. While appropriate choices of *q* can lead to ERGMs with very complex patterns of dependence among edges, it is also possible to specify ERGMs with no dependence among edges, or (respectively) dyads. ERGMs in which all edges are independent are referred to as Bernoulli graphs (since edge presence/absence in such models is equivalent to independent Bernoulli trials). Likewise, ERGMs on sets of directed graphs are referred to as Categorical graphs when there is no dependence between dyads, since the distribution of dyad states in such a model can be be treated as independent Categorical random variables. In the undirected case, dyadic and edgewise independence are equivalent. Neither Bernoulli nor Categorical graphs need be homogeneous (i.e., edge/dyad state probabilities may vary), and this is not assumed here unless explicitly stated. In discussing network dynamics, it will often be useful to consider “neighborhoods” surrounding graphs in G. The most pragmatically important such neighborhoods are defined vis a vis the Hamming space of graphs, i.e. G together with the Hamming metric, *d*, such that *d*(*g*, *g*ʹ) is the number of edge changes required to make *g* into *g*ʹ (or vice versa). Such edge state changes (adding or removing the *i*, *j* edge) are often referred to in the literature as “toggles.” The set of all graphs *H* such that *d*(*g*, *H*) = *k* forms the Hamming sphere of radius *k* about *g*, and the members of the radius-1 Hamming sphere about *g* are said to be the (Hamming) neighbors of *g*. We denote this set by H(*g*). It is important to observe that typical choices of G in real models (e.g., the set of all order-*N* graphs) are Hamming-connected, meaning that for any two graphs *g*, *g*ʹ ∈ G, there exists a sequence of graphs *g*, …, *g*ʹ ∈ G such that each graph is the Hamming neighbor of those adjacent to it in the sequence. In such sets, it is possible to get from any graph in the support to any other graph, by applying a sequence of edge toggles. In some cases, we will be interested in further narrowing down the set of Hamming neighbors, based on the type of toggle involved. The subset of neighbors of *g* reached by *adding an edge* will be denoted H+(*g*), while those reached by *removing an edge* will be denoted by H−(*g*); clearly, H(*g*) = H+(*g*) ∪ H−(*g*). We will also in some cases be interested in distinguishing between toggles that take *g* into a graph of higher potential, versus a graph of lower potential. We thus define *N*+(*g*) = {*g*ʹ ∈ H(*g*) : *q*(*g*ʹ) ≥ *q*(*g*)} and *N*−(*g*) = {*g*ʹ ∈ H(*g*) : *q*(*g*ʹ) < *q*(*g*)} to be the set of “uphill” and “downhill” Hamming moves (respectively) with respect to the potential surface. Background: ERGMs and Network Dynamics ====================================== Given our currently limited substantive understanding of network dynamics, the network modeling literature is surprisingly rich: models for network dynamics have been studied for a very long time, sometimes with similar or identical ideas being rediscovered or reinvented in new guises. While our focus here is on the narrow question of continuous time network models that give rise to known equilibria, it is thus difficult to have a context for this work without considering either non-continuous time models with known equilibria (on the one hand), or continuous time models without known equilibria (on the other). Of course, there is also a large set of models that are neither continuous nor that have known equilibria, but these are too far afield to be of interest. While even these constraints leave considerable ground, we comment briefly on several topics that are particularly relevant to our focal concern. ERGMs from Pseudo-time and Quasi-time Dynamics ---------------------------------------------- As noted earlier, practical computation for ERGMs depends almost entirely on MCMC algorithms (introduced in the ERGM context by and, with subsequent elaborations and extensions by e.g. ). Broadly, current MCMC algorithms for ERGM simulation involve sequentially proposing single edge toggles, which are then accepted or rejected with a probability that depends on the current graph state. As such, all such algorithms can be viewed as dynamic processes on G that operate in “quasi-time:” an imagined time dimension with no necessary connection to physical time, but that is relevant for the computational aspects of the respective algorithms (e.g. mixing time). While quasi-time dynamics are usually encountered in computational settings, they can also be used to gain substantive insights. For instance, use quasi-time dynamics scaled to roughly approximate plausible relationship durations to show how partnership concurrency can generate disparities in HIV prevalence, and uses quasi-time dynamics along the order parameter of a phase transition in the edge/concurrent vertex model to provide an intuition on the mechanisms driving the transition, and how it might hypothetically unfold. It may be useful to distinguish such “quasi-time” dynamics - where there may be no simple correspondence between the timeline on which the dynamics unfold and physical time - from “pseudo-time” dynamics, in which some relationship is asserted, but not one of a cardinal nature. For instance, many early “dynamic network” studies compared networks at two points in time, and simply examined how the later states were related to the earlier ones. Temporal ordering is present between earlier and later snapshots, but the model makes no other treatment of dynamics. In another vein, dynamic processes much like those in the MCMC algorithms mentioned above can be motivated as stylized models of real network dynamics, but without a clear notion of pacing. (See e.g., for non-ERGM examples.) In that case, it may be proposed that real dynamics are approximated by the network process, but the latter captures only the sequence and not the timing of events (“ordinal pseudo-time”). Indeed, any Markov chain with a known ERGM distribution can be used in this way. The lack of direct correspondence can even be seen as an advantage; for instance, introduce decision-theoretic models of network dynamics with this property (closely related to the SAOMs mentioned below), with the specific objective of interpreting cross-sectional network structure. Because the equilibrium properties of the family hold under very loose conditions on the timing of decisions, it is not necessary to specify an exact timing model in order to obtain empirically useful results. Of course, all such pseudo-time treatments are predictively limited by their inability to capture the pace of change, which by definition they do not represent, and more subtly to capture dynamic effects that depend on such pacing (e.g., high rates of turnover involving one set of relationships destabilizing another set of relationships). This motivates models with explicit treatment of time. TERGMs and Other Discrete Time Models ------------------------------------- Perhaps the simplest way to capture physical time is to consider a graph series *G*0, *G*1, …, indexed by some temporal variable *t*0, *t*1, …. Typically, the time periods between graphs are taken to be equal (though this can be relaxed). This is a natural framework for treating network panel data, where we observe cross-sectional information on the state of a network at regular intervals in time. In this context, discrete time is a coarsening of an underlying continuous time process, and models for such time series are viewed as approximating dynamics with respect to this coarsened process (which may omit mechanisms that operate on time scales much faster than the time scale of measurement). It must also be noted that, in such settings, there are multiple ways in which the graph series may be defined relative to the underlying continuous time dynamics. For instance, *G**i* may reflect a snapshot of the state of a network at time *t**i*, or it may summarize the state of a network over the *period* from *t**i* − 1 to *t**i* (or from *t**i* to *t**i* + 1, depending on definition). Different representations may be natural in different settings (particularly when representations are chosen to correspond a particular type of measurement), and all have implications for modeling. It should also be observed that some processes *do* essentially occur in discrete time (or at least, are naturally treated in this way). For instance, famously studied daily interactions among windsurfers on a California beach, over a one-month period. Since the windsurfers convened on the beach each day (and were not continuously in residence), it is reasonable to treat the network as a genuinely discrete time process on a daily scale. Similar observations could be made regarding Baker’s () classic study of interactions among traders in a securities exchange, or any other study of networks that occur in settings that are regularly convened but that are not persistent. Whether they arise from coarsened continuous processes or from naturally episodic phenomena, distributions for graph series can be defined by positing an ERGM form for the conditional distribution of each graph within the series given those that have come before, i.e., Pr(*G**i* = *g**i*∣*θ*, *G*< *i* = *g*< *i*, *X*) = ERGM(*g**i*∣*θ*, {*X* ∪ *g*< *i*}),  where *g*< *i* refers to the time points prior to the *i*th, and ERGM(*g*∣*θ*, *X*) is the ERGM pmf evaluated at *g* with parameters *θ* and covariates *X* (*w* and *h* left tacit for brevity). Models of this type are said to be *temporal ERGMs* or TERGMs, and are the network analog of VAR models in classical time series analysis. All points made earlier regarding the generality of the ERGM representation apply here, although it must be borne in mind that if *s*, *h*, and/or *θ* are restricted to be time-homogeneous, the completeness of ERGMs for single graphs does not generalize to series thereof. Even so, the TERGM framework can accommodate extremely general classes of discrete-time dynamics. Partly because of this generality, most work using TERGMs focuses on sub-classes that are simpler to work with. Common simplifications including imposing either first-order or *k*-th order Markov structure on past-dependence (so that present graph states are conditionally independent of states more than some *k* steps removed in time), or simplifications on dependence among edges in the present. Imposing complete edgewise independence in the present (given the past) typically leads to models with a simple regression structure, which are sometimes called dynamic network regression (DNR) families. Another strategy is to combine both ideas, using the previous time point to classify edge variables into those that are “edge present” and those that are “edge absent” and allowing conditional dependence only among edge variables in the same class. This allows for distinct processes to be specified for the formation of new edges versus the persistence of existing ones, at the price of requiring conditional independence of formation and dissolution. Families of this type are called *separable TERGMs* or STERGMs, and are especially useful in systems for which the mechanisms governing the emergence of new relationships differ greatly from those leading to the cessation of existing relationships. When employed as approximations to a continuous time process, it must be noted that the degree of conditional dependence among edges in an approximating TERGM depends on the relationship between the timescale of the TERGM process and the timescale of the underlying network dynamics on which it is based. Observing that the past is a covariate to the present, it is intuitively obvious that (subject to some mild regularity conditions[3](#fn3)) *G**i* will be increasingly determined by *G*< *i* as *t**i* − *t**i* − 1 → 0, and the magnitude of approximation error resulting from omitting dependence in *G**i*∣*G*< *i* will decline (if only because the entropy of *G**i*∣*G*< *i* will itself decline). Likewise, under similar regularity conditions,[4](#fn4) we can appreciate that as *t**i* − *t**i* − 1 → 0, all observations will contain either zero or one edge state change, and dependence is impossible. This “thin slicing” limit can be used to motivate the use of DNR families for networks that are frequently measured. As have observed, the reverse of this is also true as measurement intervals grow: for typical processes, edge variables become increasingly dependent, and the DNR (and eventually, STERGM) approximations become poor. Another implication of this phenomenon is that the magnitude of dependence in an approximating TERGM depends on the measurement interval, and not simply the process under study. Adapting TERGMs estimated at one timescale to model dynamics on an alternative timescale hence requires non-trivial adjustments. Whether or not one regards this fact with displeasure, it is a universal truth that coarsened models differ depending on the degree of coarsening, and this is not unique to TERGMs. In general, it should be observed that TERGMs need not have a well-defined equilibrium distribution (they can, e.g., be periodic, or if forced via *X*, undergo other types of continually fluctuating dynamics), nor are they necessarily intended to represent networks that are in equilibrium. Further, even when TERGMs *are* ergodic, we do not in general have a simple means of knowing the ERGM representation of their equilibrium distribution. Other than trivial cases, or the pseudo-dynamic examples mentioned above, the family most studied in this regard is the family of STERGMs with constant dissolution rates. For this family, the equilibrium can be approximated by an ERGM with the statistics of the formation submodel and parameters that are a simple function of both submodels. Importantly, this allows for STERGM parameters to be estimated from a single cross-sectional observation, provided that information on mean tie duration is also available; since, further, ERGMs depend only on their sufficient statistics, this allows some STERGM families to be fit to sampled egocentric data with duration measurements (making them feasible for use in large populations). This special case is obviously of particular practical importance, although many other TERGMs may also have interesting and useful equilibrium behavior that is so far unstudied. REMs, SAOMs, and Other Continuous Time Frameworks ------------------------------------------------- While not as common as discrete time approaches, various continuous time frameworks for network and/or social interaction dynamics have been proposed. Most (the exceptions to which are our main point of interest here) are not intended to produce stable equilibrium behavior, and/or have equilibrium properties that are not in general known. Only a few examples are mentioned here. Continuous time models of a non-statistical sort are well-known in the agent-based modeling field, where they have traditionally been constructed with a view either to ensuring well-orderedness of social dynamics, or studying pacing in events. As discussed below, there have been many proposals for continuous time models of edge state change in networks from a dyadic point of view, leading to fairly tractable models without dependence (but possibly with heterogeneity); these go back at least to the 1950s, and it seems likely that they have been rediscovered more than once. A more significant development from the present point of view was the creation of the stochastic actor-oriented models (SAOMs), a relatively general framework that combines an agent-based model of network dynamics with a statistical framework that allows for mechanistic drivers of interaction behavior to be inferred from cross-sectional data. We will have more to say about a special case of stationary SAOMs with known ERGM distributions in Section [seccont], but here merely sketch some salient characteristics. SAOMs posit that the vertices of the network being modeled act as agents, who (in the standard formulation) have unilateral control over their outgoing edges. These agents have utility functions over the states of the network (which they are assumed to accurately perceive), and when given opportunities to do so, will attempt to alter the state of a single edge so as to maximize their utility (myopically) for the resulting network; the choice set is taken to be the set of all unilateral edge changes, decisions must be made asynchronously, and neither forward nor backward looking behavior is assumed (i.e., agents neither remember past states, nor anticipate future responses of others to their own actions). Choices are made via a multinomial logit, and are considered instantaneous. Opportunities for such choices to be made are determined by a separate rate function (as Poissonian events), which is also part of the model specification; this can be a simple constant, or can be assumed to vary in one or another way with either the current state of the network or agent characteristics (or both). While the generative process of the SAOM is a continuous time framework, it is worth noting that the *inferential* uses of SAOMs are nearly always based on sequential cross-sectional observations (network panel data), and this has been the almost exclusive focus of work in this area. The continuous network dynamics are hence assumed to be fully latent, and only episodic snapshots are observed; changes of the network between snapshots are used to infer the drivers of dynamics (i.e., rate and utility functions, in the typical case). In general, SAOMs are not assumed to be stationary (it is very common e.g. to assume time-varying effects that are presumed to be exogenously driven), and where equilibria exist, their forms are rarely known (see e.g. for a simulation-based investigation). An important exception will be discussed below. When continuous time dynamics can be observed directly (e.g., event history data), many of the inferential challenges of the SAOMs disappear, and it is possible to specify and perform inference for processes with many fewer constraints. The relational event models (REMs) are one framework of this type. *Relational events* are discrete events involving interactions between actors and/or actors and their environment, which can be effectively treated as instantaneous relative to the underlying dynamics of the system as a whole. A REM consists of a population of such events, together with a specification for their hazards over time; it is very common for such models to assume locally Poissonian events with piecewise constant hazards, although this is not necessary. (As with Cox models, it is also possible to propose REMs with a non-parametrically specified and time-varying baseline hazard, though this limits their predictive value.) Typically, REM specifications include interdependence among events, and may or may not be stationary; since inference for REMs is usually performed using observed event histories, stationarity is rarely necessary (and may not be desired). Although REMs can be interpreted in behavioral terms, they need not be. While REMs are usually employed to model instantaneous interaction patterns - and not, hence, networks - it is possible to model network dynamics in REM terms by positing two classes of events (formation and dissolution), and treating edges as the gaps between formation and dissolution events. This approach has been leveraged e.g. by to build SAOM-like models for use when event histories of network change are observable, allowing many constraints of the standard SAOMs to be relaxed. It is noteworthy that this SAOM-like parameterization scheme can be translated into the scheme proposed by, and vice versa; thus, the two are simply different ways of describing relational event processes. However, the mapping between parameterizations is non-trivial, and each can have practical advantages in particular situations. REM-like schemes have also been explored for graphs such as citation networks that grow entirely via formation, with edge addition treated as an instantaneous process that occurs when new nodes are added to the graph. ERGMs as Physical Equilibria ---------------------------- One last topic of relevance to our focal subject is the motivation of ERGMs as *statistical mechanical* (as opposed to statistical) models. In this interpretation, one views the focal graph, *G*, as a *microstate* from an ensemble thereof, in energetic exchange with a thermal reservoir and observed at a *random time.* The detailed dynamics that give rise to an observation *g**t* of *G* are unobserved; however, in equilibrium, the probability of such an observation is $$\Pr(G\_t=g\_t|\mathscr{H},T) = \frac{\exp\left[\tfrac{-\mathscr{H}(g\_t)}{k\_B T}+\sigma(g\_t)\right]}{Z(\mathscr{H},T,\sigma)}, \label{eq\_phys}$$ where H is the Hamiltonian of the system (expressing the total energy associated with its topological degrees of freedom), *T* is the temperature, *k**B* is Boltzmann’s constant, *σ* is the entropy of the graph microstate, and *Z* is the partition function. If we write H with respect to a basis *w* of functions on G, we then have $$\Pr(G\_t=g\_t|\phi,T) = \frac{\exp\left[\tfrac{-1}{k\_B T}\phi^T w(g\_t) + \sigma(g\_t)\right]}{Z(\phi,T,\sigma)}, \label{eq\_netham}$$ which is plainly an ERGM with $\theta=\tfrac{-1}{k\_B T}\phi$, and *h*(*g**t*) = exp(*σ*(*g**t*)). Aside from its immediate motivation in physical applications of ERGMs, this representation also provides insights into the behavior of ERGMs in other settings. For instance, within the statistical mechanical picture, *θ* represents the effective generalized forces (relative to *k**B**T*) driving the system, with *w* being the degrees of freedom on which those forces act. The reference measure can also be seen a generalized *multiplicity,* representing (roughly, and sometimes exactly) the number of ways that a given graph can be generated by the unmodeled generative processes that create the network. This makes explicit the role of unmodeled degrees of freedom in driving equilibrium behavior, and motivates greater theoretical attention to the oft-neglected reference measure. (It also shows that there are conceptual differences between true parameter offsets and changes of reference measure, since one represents an energetic contribution while the other represents an entropic contribution. In systems with a well-defined temperature, these behave very differently.) Versions of this interpretation have (with varying degrees of specificity and attention to physical detail) been used by a number of researchers as tools or metaphors to study or explain ERGM behavior, and by others as physical models in their own right. From the standpoint of the present work, it is clear that the statistical mechanical interpretation of ERGMs treats them as arising from a latent continuous time process, and provides important insights into how this process translates into the equilibrium graph distribution. As with quasi-time dynamics, thinking of ERGMs in terms of a physical equilibrium may be helpful in understanding model behavior (e.g., interpreting *θ*s as forces). At the same time, this approach does not by itself fix dynamics, and additional assumptions must be made for this purpose. Below, we will examine one such scheme, which provides a useful mechanistic interpretation of the longitudinal ERGMs of. Continuous Time Processes with Known Graph Equilibria ===================================================== With the foregoing as context, we now turn to the investigation of continuous time graph processes with well-defined equilibrium behavior, where that behavior can be expressed in terms of a graph distribution in ERGM form. For compactness, we refer to these as *ERGM generating processes*. In what follows, we consider the following setup. We assume the presence of a stochastic process, *S*, on state space S, indexed by a temporal variable *t* ∈ R (such that *S**t* is a random variable on support S). Taking G as before to be a graph set, we define the continuous extension of our random graph *G* by the mapping *G* : S ↦ G. Thus, we may naturally define *G**t* ≡ *G*(*S**t*) to be the graph associated with the state of *S* at time *t*. Note that, while it is possible to take S = G (which indeed leads to a class of Markovian models), this is not always assumed; for instance, systems with memory will require a more elaborate state space. For notational convenience, we will let realizations of *S* be treated as time indexed sets, such that e.g. *s**t* ∈ *S* can be read as “the history *s* of *S* was at state *s**t* at time *t*.” We will also restrict attention to countable S. It is convenient to endow S with a topology, *T*, i.e. a directed graph representing allowable transitions; specifically, for all *s*, *s*ʹ ∈ S, if *s**t*, *s*ʹ*t* + *δ* ∈ *S* and (*s*, *s*ʹ) ∉ *T*, then there exists *t*ʹ ∈ (*t*, *t**δ*) and *s*ʺ ∈ S such that (*s*, *s*ʺ) ∈ *T* and *s*ʺ*t*ʹ ∈ *S*, and there exists *t*ʺ ∈ (*t*, *t**δ*) and *s*‴ ∈ S such that (*s*‴, *s*ʹ) ∈ *T* and *s*‴*t*ʺ ∈ *S*. We also define an *instantaneous rate structure,* *R*, such that *R**s**s*ʹ is the transition rate from state *s* to state *s*ʹ for *s* ≠ *s*ʹ, and *R**s**s* =  − ∑*s*ʹ ∈ S \ *s**R**s**s*ʹ. (Here, we treat memory or time-dependent covariates as folded into the state space, so *R* can be treated as fixed.) Clearly, *R**i**j* > 0 only if (*i*, *j*) ∈ *T*. At present, we will confine ourselves to relatively well-behaved (Poissonian) processes that are characterized by the condition that, for all *s*, *s*ʹ ∈ S and time *t*, lim*δ* → 0Pr(*S**t* + *δ* = *s*ʹ∣*S**t* = *s*) = *I*(*s* = *s*ʹ) + *δ**R**s**s*ʹ + *o*(*δ*) where *I* is an indicator of its argument, and *o*(*δ*) are arbitrary terms that go to zero faster than *δ*. Equivalently, for such processes, 1. For all *s* ∈ S, ∑*s*ʹ ∈ S*R**s**s*ʹ = 0; 2. For all time *t*, the infimum *δ* > 0 such that *S**t* + *δ* ≠ *S**t* is exponentially distributed with rate parameter ∑*s* ∈ S*R**S**t**s*; and 3. For arbitrary time *t*, let *δ* > 0 be the infimum *δ* such that *S**t* + *δ* ≠ *S**t*; then Pr(*S**t* + *δ* = *s*) = *R**S**t**s*/[∑*s*ʹ ∈ S \ *S**t**R**S**t**s*ʹ]. Other characterizations are also possible. Processes with these characteristics are of course *continuous time Markov chains* (CTMCs), and have many salubrious properties. We note that it is not only possible to create stochastic processes with much more elaborate structure, but that examples of such processes do exist in the network literature (e.g., non-stationary DyNAMs); however, we are not aware of any examples of such processes leading to known ERGM forms, and thus not consider these cases here. Conditions for Convergence -------------------------- Although there is no known recipe that allows for specification of an equilibrium graph distribution from *S* in all cases, there are sufficient conditions that can in some cases be used to establish that an equilibrium exists (and that can be exploited in seeking its form). We here review some of these, again working within the above CTMC framework. The underlying results are well-known, and can be found in any standard stochastic process text. To speak of an equilibrium (aka stationary) distribution for *S*, we need a unique pmf on S, *π*, such that *π* the marginal distribution of *S**t* for all “random” *t* (in a sense to be explained). Formally, *π* is said to be the *equilibrium distribution* of *S* if, for all times *t* and intervals *δ* > 0, and states *s*, Pr(*S**t* = *s*) = *π**s* implies Pr(*s**t* + *δ* = *s*) = *π**s*. In other words, if we find that *S**t* has marginal distribution *π* at one point in time, it must also have marginal distribution *π* at future times. This condition (which is often written in terms of a time 0, instead of time *t*) is usually thought of in terms of initialization: if we manually initialize a CTMC with its equilibrium distribution, it should remain in equilibrium. Here, however, we instead take this as a characterization of the notion of observing the system at a “random time:” if we sample the system in a manner that does not bias our observations with respect to *π*, then we will observe its future states to preserve this distribution. By contrast, if we were to choose *t* such that the system state were badly biased (“out of equilibrium”), there would exist future times for which bias would also be preserved. Such state selection is the true basis for the illusion of an “arrow of time” in reversible systems,[5](#fn5) and occasions much mischief. We also note from the outset the trivial but important fact that if *S* has an equilibrium distribution then so must *G*(*S*): defining *H* to be the mapping from S to G such that *H**i**j* = 1 if *G*(*i*) = *j* (else 0), then the equilibrium distribution *μ* on G is *H**π* (where we notationally treat pmfs as column vectors and *H* as a matrix). It is convenient for our purposes to define a transition probability function *P*, for *S*, such that *P**i**j*(*t*) = Pr(*S**δ* + *t* = *j*∣*S**t* = *i*) for arbitrary *δ*. (Note that since we have designed our rate structure to be time homogeneous, our choice of *δ* is indeed arbitrary.) *S* is said to be *irreducible* iff the graph formed by the dichotomized adjacency structure *R* > 0 is strongly connected. (*T* being strongly connected is thus a sufficient condition for irreducibility.) Iff *S* is irreducible, then *P**i**j*(*t*) > 0 for all *i*, *j* ∈ S, and times *t*. Clearly, these conditions are also inherited by *G*(*S*), so an irreducible CTMC on S is also an irreducible CTMC on G. #### Irreducible Finite Chains: The simplest sufficient conditions for equilibrium are that (1) *S* is irreducible, and (2) ∣S∣ < ∞. In this case, there exists a unique *π* such that (i) lim*t* → ∞*P**i**j*(*t*) = *π**j* for all *i*, *j* ∈ S, and (ii) if Pr(*S**t* = *s*) = *π**s* for all *s* ∈ S, then Pr(*S**t* + *δ* = *s*ʹ) = *π**s*ʹ for all *s*ʹ ∈ S,  *t*, *δ* > 0. #### Co-kernel of *R*: Because *π* must be a fixed point, we also have the condition that *π* is an equilibrium distribution iff *π**T**R* = 0 (where 0 is understood here to be the zero-vector). This is equivalent, in intuitive terms, to the statement that *π* is an equilibrium if *π* makes the total incoming flux for any give state equal to its corresponding out-flux. The solutions to this equation form the cokernel or left nullspace of *R*; the existence of such a solution is thus a necessary and sufficient condition for a stationary distribution. If *S* is also irreducible, then any such solution is unique. These conditions hold regardless of the size of S. Another potentially useful result (which also stems from the ergodic theorem) is that if there does *not* exist a stationary distribution *π*, then *P**s**s*ʹ(*t*) → 0 as *t* → ∞, for all *s*, *s*ʹ. Showing that there exist distinct states that communicate over infinite time thus contradicts the absence of an equilibrium, albeit without providing one. #### Relationship with Embedded MCs: Any CTMC can obviously be associated with a discrete-time Markov chain constructed from its state transitions (its embedded Markov chain). Such a chain has state space S and transition probabilities *P̃**s**s*ʹ = *R**s**s*ʹ/*u**s*, where *u**s* = (∑*s*ʺ ∈ S*R**s**s*ʺ). If *S* is irreducible, then so is its embedded chain, and if *π̃* is the equilibrium pmf of the embedded chain, then *π**s* = (*π̃**s*/*u**s*)/∑*s*ʹ ∈ S*π̃**s*/*u**s*. This provides an alternative route to constructing a target distribution for *S* by starting with a pseudo-time Markov process and adding duration information to obtain the desired target. Known Examples -------------- Although there exists an unlimited number of continuous time stochastic processes that lead to a given distribution, few are currently well-characterized. Here, we describe all processes in the literature known to lead to general classes of random graph distributions with known ERGM forms, as well as some examples that do not appear to have previously been proposed. In each case, we describe the process itself, and demonstrate that the process does lead to the specified equilibrium (generally by the co-kernel method described above). In cases where such convergence is already known, examination of the inbound and outbound flux to an arbitrary state can nevertheless provide other insights into model properties; for novel processes, such demonstrations are obviously necessary. Key results from this section are summarized in tabular form in Sec. [secdisc]. ### Dyad State CTMCs Numerous models (going back to the 1950s, at least) have been put forward for networks without dyadic dependence, that lead to homogeneous or inhomogeneous Bernoulli or Categorical graphs. An early example is, who estimate transition probabilities between mutual, asymmetric, and null dyad states; though they do not flesh out their model in full detail, it is notable in including inhomogeneity by a vertex attribute (gender). describe time-homogeneous CTMCs on the set of order-*N* digraphs, with the transition rates being an arbitrary function of the graph state. Although they explicitly describe the inclusion of effects for e.g. degree and triadic statistics, the forms they describe have no simple equilibrium, and they do not attempt to obtain one (merely noting that such models can be specified). Instead, they consider only the special case of a homogeneous dyadic process with reciprocity effects. This same dyad process was studied by, who provided a more rigorous treatment. Special case constructions were also used by, who obtained dyad state CTMCs as special cases of the competing rate SAOMs (discussed further below). Because all Bernoulli/Categorical graphs are trivial to write in ERGM form, any model of this kind has an ERGM representation. Analyses of these processes exploit the fact that dyads can be written as either two or four-state systems (depending on directedness), and in a dyadic independence model each dyad can simply be treated as its own system with its own dynamics; due to the small number of states, this can be treated explicitly. Because these families do not give rise to very general purpose data models - and since they arise as special cases of the other processes described here - we do not consider them in further detail here. However, we do observe that dyad state models may sometimes have useful applications in special cases. For instance, use models of this kind as a building block for developing equilibrium approximations to constant dissolution rate STERGMs. It is also plausible that inhomogeneous extensions of such processes may be useful when dynamics are driven primarily by exogenous factors. ### The Contact Formation Process A very simple example of a related class of processes with known limits (but with a more complex state space) is the contact formation process (CFP). Introduced to provide a mechanistic foundation for the sparse graph reference measure of Krivitsky and colleagues () (and subsequently extended to reproduce reference measures with constant reciprocity and power law mean degree scaling ), the CFP is intended as a baseline model rather than a general specification; thus, it is by design quite simple, and leads to a limited set of equilibrium ERGMs. We describe it here because it provides a rare example of a graph process with known equilibrium for which S ≠ G. The base CFP involves a set of *N* vertices, *V*, each of which resides in one of *M* locations (“foci,” per ). The state space of the CFP is thus G × {1, …, *M*}*N*, unlike more typical models whose state space is synomymous with the graph support. Vertices migrate at random between foci, with migration occurring in continuous time with constant rate *r**m* and destinations chosen uniformly at random. Edge variables whose endpoints reside in the same locations produce *formation events* at constant rate *r**f*, and all edge variables produce *dissolution events* at rate *r**d*. In the contact formation process with reciprocity (CFPR), formation events are also produced for (*i*, *j*) edge variables for which the (*j*, *i*) edge is present, regardless of co-location. An (*i*, *j*) edge is then said to be present at time *t* when the most recent (*i*, *j*) event is a formation event, and such an edge is said to be absent at time *t* when the most recent event is a dissolution event. (Although introduced as a formal convenience, formation and dissolution events can be rationalized as reflecting opportunities or situations that might cause a relationship to form or to break. Obviously, such opportunities only act when circumstances allow, e.g. if a tie is already present, then an occasion that would have led to the formation of said tie has no effect.) The behavior of the CFP/R depends critically on the migration rate, *r**m*. As *r**m*/min(*r**f*, *r**d*) → 0, the limiting distribution of *G* approaches a random union of Bernoulli (respectively, Categorical) graphs, with graph memberships being distributed Categorical(*M*). The case of greater interest is the “fast mixing” regime in which *r**m*/max(*r**f*, *r**d*) → ∞. In this limit, the migration process is “blurred out” of the graph structure, but still affects the resulting graph distribution. To obtain useful behavior, we replace the unobserved *M* by an assumed scaling with *N* (intuitively, a statement about how the population spreads out as network size changes). For the important case of constant population density (*M* ∝ *N*), we obtain for the CFP an ERGM class with sufficient statistic *w**e* (the edge count) and reference measure *h*(*g*) = *N*− *w**e*(*g*), i.e. the Krivitsky reference. For the CFPR, the corresponding class has *w**e*, *w**m* (edge and mutual count) as sufficient statistics, and reference measure *h*(*g*) = *N**w**m*(*g*) − *w**e*(*g*) (i.e., the measure). Alternative choices of *M*-scaling lead to different mean degree scaling. In particular, choosing *M* ∝ *N*1 − *γ* (hence *h*(*g*) = *N*(1 − *γ*)(*w**m*(*g*) − *w**e*(*g*)) for the CFPR) leads to mean degree that scales as *N**γ*. This allows regeneration of the power-law degree scaling measure of, which takes both the conventional (counting measure) and Krivitsky reference as special cases. The CFP/R illustrates how unmodeled degrees of freedom (here, migration between foci) can alter the equilibrium graph distribution; in this case, the mean degree and/or reciprocity are impacted by restrictions on the opportunity to form ties. As these are entropic effects, deriving them from first principles clarifies that they properly “belong” in the reference measure, as opposed to functioning as offsets to the edge (and, for the CFPR, mutuality) parameters (as previously supposed). Although they can be implemented in this latter way in constant-temperature models, Eq. [eqnetham] shows that this leads to models with different temperature scaling. Even in social systems for which “temperature” is not currently well-understood, being able to separate opportunity-based drivers of structure from behavioral ones is a fundamental sociological objective, and we can thus obtain greater clarity from the distinction. Finally, the CFP/R provides us with a mechanistic account of *why* mean degree/reciprocity scaling may occur, and thus where we may expect to see deviations from the typical pattern. On the other hand, the CFP/R is by design a very limited, baseline processes, whose analysis is difficult to extend. While it seems likely that more elaborate variants can be constructed, it not obvious that they will be suitable for deriving general-purpose ERGM families. ### Competing Rate SAOMs Plausibly the first - and one of the most interesting - families of CTMCs leading to general ERGM equilibria was proposed by as a special case of the SAOMs.[6](#fn6) Although he does not give this specific case a name, we here refer to the models as “competing rate” SAOMs (since the system can be viewed as a set of potential actions “racing” to be next to be chosen based on their utility gain to the respective actor). They may be summarized as follows. We begin by taking G to be the set of order-*N* digraphs, and setting S = G; *S* is hence synonymous with a process that operates directly on the order-*N* digraphs, and in what follows we simply refer directly to *G**t* rather than *G*(*S**t*). Actions within the model represent single-edge changes by individual agents (who are presumed to control the state of their outgoing edges). To represent such changes, let *g**i**j**c* be the graph *g* with the state of the *i*, *j* edge “toggled” (i.e., the (*i*, *j*) edge added if absent, or otherwise removed). Opportunities for individuals to act occur as Poisson-like events with piecewise constant hazards, such that the hazard of agent *i* being able to act at time *t* is *λ**i*(*G**t*) = ∑*j* ∈ *V* \ *i*exp[*q*((*G**t*)*i**j**c*)]. The potential function *q*(*g**t*) is taken to encode changes in actor utilities (making this a potential game ), and in particular it is assumed that if actor *i* is able to act, he or she will choose to alter his or her tie to actor *j* by the multinomial logit $$\Pr(i\leadsto j|\theta,G\_t) = \frac{\exp(q((G\_t)^c\_{ij}))}{\sum\_{k\in V\setminus i}\exp(q((G\_t)^c\_{ik}))},$$ where $i\leadsto j$ denotes the choice of *i* to toggle the (*i*, *j*) edge. The network then evolves by the dual process of actor activations and subsequent toggles. From this, we can immediately infer that *T* is the set of Hamming adjacencies in G (since graph *g* can instantaneously transition to *g*ʹ iff they differ by one edge). To obtain *R*, begin by denoting the single changed edge for an *a*, *b* transition by (*i*, *j*). This transition can only occur if (1) *i* gets to choose, and (2) *i* chooses to toggle the (*i*, *j*) edge. This occurs with rate $$\begin{aligned} R\_{ab} &= \lambda\_i(a) \Pr(i\leadsto j|\theta,a) \\ &= \frac{\sum\_{k\in V\setminus i} \exp\left[q(a^c\_{ik})\right] \exp(q(a^c\_{ij}))}{\sum\_{k\in V\setminus i}\exp(q(a^c\_{ik}))}\\ &=\exp\left[q(a^c\_{ij})\right]\\ &=\exp\left[q(b)\right].\end{aligned}$$ Since the set of transitions is connected, and the state space is finite, the competing rate SAOM has an equilibrium distribution. We can verify that the equilibrium is *π**a* = exp(*q*(*a*))/*Z* by inspecting the rate structure. Per Section [secnotation], let H(*a*) be the set of Hamming neighbors of *a*. The incoming flux to state *a* ∈ G under the hypothesized equilibrium is then $$\begin{aligned} \sum\_{b \in \mathcal{H}(a)} \pi\_b R\_{ba} &= \sum\_{b \in \mathcal{H}(a)} \left[\exp(q(b))/Z\right] \exp\left[q(a)\right]\\ &= \exp(q(a))/Z \sum\_{b \in \mathcal{H}(a)} \exp(q(b))\\ &=\pi\_a \sum\_{b \in \mathcal{H}(a)} R\_{ab}\\ &=-\pi\_a R\_{aa}.\end{aligned}$$ Since this is true for all *a* ∈ G, it then follows that *π**T**R* = 0, and *π* is the equilibrium distribution of the competing rate SAOM process. As alluded to above, one feature of this process is that it implies that actors move faster when they “have a chance at a good thing,” and move more slowly when not surrounded by favorable options. This property is the precisely that implied by the agent-based interpretation of the “vanilla” REM specification, and indeed this process provides perhaps the most natural example of the connection between REMs, SAOMs, and ERGMs. Whether this is a desirable or realistic property, of course, depends on one’s system of interest; it is a fairly natural model for e.g. freely acting agents facing no exogenous constraints on their choices, but not e.g. for systems in which change rates may be limited by exogenous factors. In particular, note that there is no upper limit on the pace of change in the competing rate SAOM, and the system will transition to favorable states exponentially fast. Moreover, since transitions depend only on the potential of the target state, a competing rate SAOM will be just as fast to move to a favorable target state from another favorable state as from an unfavorable state. This can result in rapid cycling between high-potential states. Below, we will see examples of other model classes that do not have this property. We comment in passing that the rate specification of the competing rate SAOM is only one of many possible choices, and indeed it is not widely used in practice. Another, more common specification retains the SAOM choice process, but sets the per-actor event rates to some constant, *A*. (This is also commonly used in DyNAMs.) To our knowledge, the equilibrium for this family in the general case remains an open problem. ### Longitudinal ERGMs Koskinen, Snijders, and colleagues proposed a general family of CTMC processes with specified ERGM distributions (inspired by prior work on SAOMs, and on quasi-time Gibbs samplers), which they dubbed “longitudinal ERGMs” (LERGMs). Subsequently, showed that a formally equivalent process could be derived from physical principles. Since this last provides some additional motivation for the LERGM framework, our exposition here combines ideas from both the statistical and physical points of view. As with the competing rate SAOMs, the LERGMs take G to be the set of order-*N* (di)graphs, and set S = G. We take *T* to be the set of Hamming adjacencies on G (i.e., *S* evolves through single-edge changes), and further take *R**a**b* = 0 iff (*a*, *b*) ∉ *T*. We proceed by starting with the target distribution, and working backwards. We first observe that G is finite and Hamming connected, so it follows that the LERGM process has an equilibrium distribution. Let us posit that *G* has an equilibrium ERGM distribution, such that Pr(*G**t* = *g*∣*θ*) = exp(*q*(*g*))/*Z*. Now, consider two Hamming-adjacent states *a* and *b* ∈ G. The conditional probability of finding the system in state *b*, given that it is in either *a* or *b*, is $$\begin{aligned} \frac{\Pr(G\_t=b|\theta)}{\Pr(G\_t=a|\theta)+\Pr(G\_t=b|\theta)} &= \frac{\exp\left[q(b)\right]/Z}{\exp\left[q(a)\right]/Z + \exp\left[q(b)\right]/Z} \\ &= \frac{1}{1+\exp\left[ q(a)-q(b)\right]} \label{eq\_deltakin}\end{aligned}$$ where *q*(*b*) − *q*(*a*) is the change in the ERGM potential in going from state *a* to state *b*. This “change” interpretation suggests the notion of taking the transition rate to be based on Eq. [eqdeltakin], i.e. $$R\_{ab} = \frac{A}{1+\exp\left[q(a)-q(b)\right]}, \label{eq\_kinrate}$$ where *A* is a free parameter (the rate constant) with units of inverse time, and as usual *R**a**a* =  − ∑*b* ≠ *a**R**a**b*. With *R* defined per the above, now take *π* to be the target ERGM distribution and consider *π**T**R*. Without loss of generality, let us focus on some state *a*, and let H denote the Hamming neighborhood of a given state. The incoming flux into *a* is equal to $$\begin{aligned} \sum\_{b\in \mathcal{H}(a)} \pi\_b R\_{ba} &= \frac{A}{Z} \sum\_{b\in \mathcal{H}(a)} \exp\left[q(b)\right] \frac{\exp\left[q(a)\right]}{\exp\left[q(a)\right] + \exp\left[q(b)\right] }\\ &= \frac{A \exp\left[q(a)\right]}{Z} \sum\_{b\in \mathcal{H}(a)} \frac{\exp\left[q(b)\right]}{\exp\left[q(a)\right] + \exp\left[q(b)\right] }\\ &= \pi\_a \sum\_{b\in\mathcal{H}(a)} R\_{ab}\\ &= - \pi\_a R\_{aa},\end{aligned}$$ Since this relationship holds for all *a*, it thus follows that *π**T**R* = 0, and the equilibrium of the LERGM process is ERGM(*θ*, *h*) distributed. The physical motivation for the LERGM process stems from the observation that, in a physical setting, Eq. [eqkinrate] approximates the well-known Arrhenius law of chemical kinetics. In this interpretation, each potential transition is viewed as a competing “reaction” (literal or metaphorical), and the resulting network arises from their joint kinetics. In this regard, *q*(*b*) − *q*(*a*) = *θ**T*(*w*(*b*) − *w*(*a*)) + (log*h*(*b*) − log*h*(*a*)) corresponds to the  − 1/*k**B**T*-scaled free energy change associated with a specific edge toggle; thus, energy-increasing moves are unfavorable (with the rates becoming approximately *A*exp( − Δ*E*/*k**B**T*) when the free energy change Δ*E* is large), while energy-decreasing moves are favorable (with a rate that approaches *A*). The constant *A* corresponds to the *collision rate,* a microscopic upper bound on the pace of change in the system. This highlights the fact that the LERGM process is thus dynamically asymmetric: transitions can be arbitrarily slow, but there is an upper bound on how rapid they can be (which is approached exponentially fast for favorable transitions). In a social context, a parallel interpretation is that the system experiences myriad randomly occurring perturbations that could lead absent ties to form, or existing ties to dissolve. These vary in strength, but independently impact all relationships (arriving at rate *A*). The chance of any given relationship being altered depends on the change in potential associated with the corresponding toggle: the more unfavorable the change, the stronger a perturbation must be for it to occur (and the lower the rate). By contrast, more favorable transitions are increasingly likely to occur; however, a perturbation is still needed to activate the change, eventually leading to *A* as the limiting rate. Another feature of the LERGM process is that the “dwell time” in a given state depends on the gap in the ERGM potential (i.e., log equilibrium probability) between the state and its neighbors. A state that is substantially more favorable than its neighbors will have very long occupancies (conditional on entry), compared to one that is less favorable than its neighbors. Likewise, independence of rates across state-dyads means that *R**a**b* is neither enhanced nor inhibited by the presence of high or low *R**a**c* values for some third state *c*. Although the *probability* of an *a* → *b* transition obviously falls when some *R**a**c* rises, the *rate* of this transition is not affected. Finally, we observe that there is a certain resemblance between the LERGM process and the updates used by the standard edge-toggle Gibbs sampler ; this is because both formulations begin by considering the conditional distribution of a state pair, though this distribution is not used in exactly the same ways. Note in particular that *all* transitions are competing in the LERGM process, and not simply those within a single dyad, and transition opportunities do not occur at random. The embedded Markov chain induced by the LERGM process is not therefore a Gibbs sampler, despite their superficial similarity. ### The Change Inhibition Process This process has not to our knowledge appeared in the prior literature, but is an obvious counterpart to the LERGM process. As with the LERGM development, take S = G for G being the set of order-*N* graphs (or digraphs), with *T* corresponding to Hamming adjacency and *R**s**s*ʹ > 0 iff (*s*, *s*ʹ) ∈ *T*. Let us presume the presence of a (social or otherwise) potential, *q*, such that *q* : G → R. We imagine that a transition from graph state *a* to state *b* occurs at some constant rate (i.e., “at random”) when *q*(*b*) ≥ *q*(*a*), but that downhill moves are inhibited; specifically, given states *a*, *b* ∈ G, we posit *R**a**b* = *A*min(1, exp(*q*(*b*) − *q*(*a*)). We refer to this as the “change inhibition” process, because its dynamics are entirely driven by inhibition of unfavorable transitions. We now show that, like the LERGM, this process has an ERGM stationary distribution. First, we observe that (by the same arguments as the LERGM case), this is a irreducible process on a finite state space, and hence has an equilibrium distribution. We posit that this distribution is given by *π**a* = exp(*q*(*a*))/*Z*. As before, we proceed by considering the flux into an arbitrary state, *a*, using N+ to denote the set of neighboring states *b* such that *q*(*b*) ≥ *q*(*a*), and N− to denote the set of neighboring states such that *q*(*b*) < *q*(*a*). We then have $$\begin{aligned} \sum\_{b\in\mathcal{N}(a)} \pi\_b R\_{ba} &= \frac{A}{Z} \sum\_{b\in\mathcal{N}(a)} \exp(q(b)) \min(1,\exp(q(a)-q(b)))\\ &= \frac{A}{Z} \sum\_{b\in\mathcal{N}^+(a)} \exp(q(a)) + \frac{A}{Z} \sum\_{b\in\mathcal{N}^-(a)} \exp(q(b))\\ &= \frac{\exp(q(a))}{Z} \left[ \sum\_{b\in\mathcal{N}^+(a)} A + \sum\_{b\in\mathcal{N}^-(a)} A\exp(q(b)-q(a)) \right]\\ &= \pi\_a \left[ \sum\_{b\in\mathcal{N}^+(a)} R\_{ab} + \sum\_{b\in\mathcal{N}^-(a)} R\_{ab} \right]\\ &= \pi\_a \sum\_{b\in\mathcal{N}(a)} R\_{ab}\\ &= -\pi\_a R\_{aa},\end{aligned}$$ which establishes that *π**T**R* = 0, and the equilibrium is as proposed. It is immediately apparent that while the LERGM rate function resembles the transition function of a Gibbs sampler, the change inhibition process resembles the corresponding Metropolis algorithm. However, it is again important to note that in this process all change events are competing with each other, and it is not equivalent to quasi-time Metropolis dynamics. Like the LERGM, dwell times at each state are related to the gaps in favorability between the focal state and its neighbors, but in this case uphill moves are not enhanced; thus, dwell times will be longer for low-potential states with high-potential neighbors. Also like the LERGM (but not the competing rate SAOM), the change inhibition process has a maximum rate of change (which is realized for all uphill moves). ### The Differential Stability Process Another process that has not to our knowledge appeared in the prior literature (but that is motivated by well-known properties of CTMCs) arises from allowing changes between states to occur at random, with only the dwell time in individual states varying in a systematic manner. Specifically, we consider a process in which the expected time spent in an arbitrary state, *a*, after entry is proportional to exp(*q*(*a*)). In this *differential stability process,* some states are more favorable, or stable, than others, and the system will spend more time in such states before transitioning out of them. However, the transitions themselves are to random Hamming neighbors. Such a system lies at the opposite extreme from a constant rate SAOM, where aggregate event rates are held constant and transitions are driven by differences in the conditional probabilities of transitioning between competing states, and is hence interesting as a corner case; substantively, however, it may be a useful starting point for modeling “win-stay, lose-shift” dynamics, or network evolution that is driven primarily by exogenous shocks. For the differential stability process, we take S = G, and let *T* be given by Hamming adjacency. We define the process by specifying a rate structure, *R*, such that for Hamming-adjacent neighbors *a* and *b*, *R**a**b* = *A*∣H(*a*)∣− 1exp( − *q*(*a*)), with transition rates of 0 for *b* ∉ H(*a*). Note that, per the above, this rate does not depend upon the destination state. It is clearly the case that the dwell time here in state *a* is then  − 1/*R**a**a* = *A*exp(*q*(*a*)), as desired, with *A* ∈ R+ being a pacing constant. Since the rate structure of this system is connected on the state space, and since the state space is finite, the process has an equilibrium distribution. We posit that this equilibrium has the form *π**a* = exp(*q*(*a*))/*Z*. To show that this is so, we first consider the flux into an arbitrary state *a*: $$\begin{aligned} \sum\_{b\in \mathcal{H}(a)} \pi\_b R\_{ba} &= \sum\_{b\in \mathcal{H}(a)} \frac{\exp(q(b))}{Z} A |\mathcal{H}(a)|^{-1} \exp(-q(b))\\ &= \frac{A}{Z|\mathcal{H}(a)|} \sum\_{b\in \mathcal{H}(a)} 1\\ &=A/Z.\end{aligned}$$ Now, consider the outgoing flux at *a*: $$\begin{aligned} \pi\_a \sum\_{b\in \mathcal{H}(a)} R\_{ab} &= \frac{\exp(q(a))}{Z} \sum\_{b\in \mathcal{H}(a)} A |\mathcal{H}(a)|^{-1} \exp(-q(a))\\ &= A/Z.\end{aligned}$$ The incoming flux equals the outgoing flux, and hence *π* is indeed the equilibrium distribution of the differential stability process. Although the property is not unique to this family, the fact that transitions are uniformly random in the differential stability process makes especially obvious the fact that inference for its parameters can be performed using only aggregate information on state occupancies, without sequential information. This may facilitate the use of this model with fragmentary data sources, or in other settings where it is easier to estimate where the system is spending most of its time than to track specific transitions. ### Continuum STERGMs As discussed, the STERGM class of temporal ERGMs has been an important focus of work related to continuum-limit models, because of the potential for fitting to limited observational data sets. The constant dissolution model has been central to this effort, and we treat it here separately, followed for by its constant formation counterpart. While less trivial to fit, general STERGMs can also be extended to the continuum limit, and we close this section with a consideration of this broader family of cases. #### Constant-dissolution Continuum STERGMs As noted above, employ STERGMs with constant dissolution rates as a tool for approximate estimation of network dynamics from combined cross-sectional and durational data; they obtain exact results for the dyadic independence case, and then argue from simulation that such a STERGM will approach an approximate ERGM equilibrium whose potential is equal to the potential of the formation model, with an edge parameter adjustment for dissolution. In an inferential setting, this parameter can be directly estimated given tie duration. Subsequently, have shown that the error in this approximation declines as the length of the (discrete) STERGM time step decreases, and one does (under mild regularity conditions) approach the presumed limit as step size goes to zero. They refer to the model under this finite but arbitrarily small step size an “infinitesimal STERGM.” Here, we directly consider the corresponding continuous time process, which we refer to as a “continuum STERGM” (CSTERGM) with constant dissolution rates. Specifically, we here consider the case of a CSTERGM in which rates of edge formation are driven by an ERGM potential *q**f*, while dissolution is driven by a single parameter *θ**d* (i.e., all edges are lost at constant rate exp(*θ**d*)). We construct the continuum process as follows. Let *S* be a CTMC on state space S = G, with G being the set of all order-*N* (di)graphs and *T* being the Hamming adjacency on the state space. Let H+ be the set of Hamming neighbors formed by adding an edge to the argument, and let H− be the set of Hamming neighbors formed by removing an edge from the argument. *S* then follows the instantaneous rate structure $$R\_{ab} = \begin{cases} 0 & \text{ if } b\not\in \mathcal{H}^+(a) \cup \mathcal{H}^-(a)\\ \exp(\theta\_d) & \text{ if } b\in \mathcal{H}^-(a)\\ \exp(q\_f(b)-q\_f(a)) & \text{ if } b\in \mathcal{H}^+(a)\\ -\sum\_{b\in\mathbb{G}\setminus a} R\_{ab} & \text{ if } a=b \end{cases}.$$ Clearly, ∣S∣ < ∞, and *T* is connected; thus, *S* has an equilibrium distribution. We posit that this equilibrium distribution corresponds to *π**a* = exp(*q**f*(*a*) − *θ**d**w**e*(*a*))/*Z*. To verify this, we consider the influx to an arbitrary state *a*: $$\begin{aligned} \sum\_{b\in\mathbb{G}\setminus a} R\_{ba} & =\sum\_{b\in\mathcal{H}^+(a)} \pi\_b \exp(\theta\_d) + \sum\_{b\in\mathcal{H}^-(a)} \pi\_b \exp\left(q\_f(a)-q\_f(b)\right)\\ &= \sum\_{b\in\mathcal{H}^+(a)} \frac{\exp(q\_f(b)-\theta\_d w\_e(b))}{Z} \exp(\theta\_d) + \sum\_{b\in\mathcal{H}^-(a)} \frac{\exp(q\_f(b)-\theta\_d w\_e(b))}{Z} \exp\left(q\_f(a)-q\_f(b)\right)\\ &=\frac{1}{Z} \left[\sum\_{b\in\mathcal{H}^+(a)} \exp\left(q\_f(b)-\theta\_d (w\_e(b)-1)\right) + \sum\_{b\in\mathcal{H}^-(a)} \exp\left(q\_f(a)-\theta\_d w\_e(b)\right)\right].\\ \intertext{Observe that, for $b\in\mathcal{H}^+(a),$ $w\_e(b)=w\_e(a)+1$, and for $b\in\mathcal{H}^{-}(a),$ $w\_e(b)=w\_e(a)-1$. Substituting and factoring out the putative equilibrium probability of $a$ gives} &= \frac{\exp(q\_f(a)-\theta\_d w\_e(a))}{Z} \left[ \sum\_{b\in\mathcal{H}^+(a)} \exp\left(q\_f(b)-q\_f(a)\right) + \sum\_{b\in\mathcal{H}^-(a)} \exp(\theta\_d) \right]\\ &= \pi\_a \sum\_{b\in\mathbb{G}\setminus a} R\_{ab}\\ &=-\pi\_a R\_{aa}.\end{aligned}$$ It thus follows that *π**T**R* = 0, and the continuum STERGM has the desired equilibrium. #### Constant Formation Continuum STERGMs While constant dissolution STERGMs have received more attention in the literature, a very similar development is also possible for constant *formation* STERGMs: processes in which edges form at a constant rate, but where their persistence is governed by a potentially complex process. Such a model represents relational “selection” in its truest sense, since the key dynamics are driven by the differential survival of relationships. That this side of the STERGM family is less well-studied than the constant dissolution side may reflect a bias in the field towards processes that create edges rather than those that retain or remove them (a broader concern raised e.g. by ), though the constant dissolution approximation is doubtless reasonable in many settings. In some cases, however, ties may form initially through processes that are idiosyncratic and largely exogenous, with enduring structure resulting from the maturation of some fraction of those ties into longer-term relations. For instance, in work settings structured around *ad hoc* teams convened to perform specific projects, tie formation may be driven by short-term assignments that result from shifting labor needs, rather than any preferences of the individuals involved. Some, however, may elect to keep in touch with colleagues from those short-term assignments over the long haul, a process that may be driven by individual considerations. Alternately (as with the differential stability process), network evolution may not involve individuals or decision processes at all, and may simply reflect differences in the stability of relationships to exogenous shocks. Although, to our knowledge, a continuum version of the constant formation STERGM has not appeared in the literature, we can easily define it here. As with the constant dissolution STERGM, we take S = G, presume that G is the set of order-*N* graphs, and employ a separate dissolution potential *q**d*(*a*) = *θ**d**T**w**d*(*a*) and constant formation parameter *θ**f*. Our rate function is given by $$R\_{ab} = \begin{cases} \exp(\theta\_f) & \text{if } b\in\mathcal{H}^{+}(a)\\ \exp(q\_d(b)-q\_d(a)) & \text{if } b\in\mathcal{H}^{-}(a)\\ -\sum\_{b\in\mathcal{H}(a)}R\_{ab} & \text{if } a=b\\ 0 & \text{otherwise} \end{cases},$$ and we propose the equilibrium distribution *π**a* = exp[*q**d*(*a*) + *w**e*(*a*)*θ**f*]/*Z*, with *w**e* as usual being the edge count. To verify, we examine the in-flux to an arbitrary state *a* in equilibrium, $$\begin{aligned} \sum\_{b\in\mathcal{H}(a)} \pi\_b R\_{ba} &= \sum\_{b\in\mathcal{H}^{-}(a)}\pi\_b \exp[\theta\_f] + \sum\_{b\in\mathcal{H}^{+}(a)} \pi\_b \exp\left[q\_d(a)-q\_d(b)\right]\\ &= \frac{1}{Z} \left[ \sum\_{b\in\mathcal{H}^{-}(a)}\exp\left[q\_d(b)+w\_e(a) \theta\_f\right] + \sum\_{b\in\mathcal{H}^{+}(a)} \exp\left[q\_d(a)+(w\_e(a)+1) \theta\_f\right]\right], \\ \intertext{where we have used the relationship between the edge counts of $a$ and $b$, respectively. Pulling out $\pi\_a$ then gives us} &= \frac{\exp\left[q\_d(a)+w\_e(a)\theta\_f\right]}{Z} \left[ \sum\_{b\in\mathcal{H}^{-}(a)} \exp\left[q\_d(b)-q\_d(a)\right] + \sum\_{b\in\mathcal{H}^{+}(a)} \exp\left[\theta\_f\right]\right]\\ &= \pi\_a \sum\_{b\in\mathcal{H}(a)} R\_{ab}\\ &= -\pi\_a R\_{aa}.\end{aligned}$$ The condition that *π**T**R* = 0 is then satisfied, and the constant formation CSTERGM has the posited equilibrium. As with the constant dissolution CSTERGM, the constant formation CSTERGM can in principle be fit to data containing only information on cross-sectional network structure and durations, here specifically the duration of *nulls* rather than edges; equivalently, edges added per unit time will suffice, provided that care is taken to control for the current density (since the total rate of edge addition scales with the number of edges that are available to add). We do not consider this problem in greater detail here, but the approach parallels that for the constant dissolution case. #### General Continuum STERGMs While the constant rate cases are of particular pragmatic interest, it should be noted that the same general approach used above can be used to provide a continuum version of STERGMs with arbitrary rate structure. These general CSTERGMs have not, to our knowledge, been treated in previous literature. We define them as follows. Combining our previous two cases, let *q**f* be a formation potential, and *q**d* a dissolution potential. We take S = G on the order-*N* (di)graphs, and let *T* correspond to Hamming adjacency, with transition rates for states *a*, *b* ∈ S given by $$R\_{ab} = \begin{cases} 0 & \text{ if } b\not\in \mathcal{H}^+(a) \cup \mathcal{H}^-(a)\\ \exp(q\_d(b)-q\_d(a)) & \text{ if } b\in \mathcal{H}^-(a)\\ \exp(q\_f(b)-q\_f(a)) & \text{ if } b\in \mathcal{H}^+(a)\\ -\sum\_{b\in\mathbb{G}\setminus a} R\_{ab} & \text{ if } a=b \end{cases}.$$ Since the system is irreducible and has a finite state space, it has an equilibrium distribution, *π*. We posit that *π**a* = exp(*q**f*(*a*) + *q**d*(*a*))/*Z* for *a* ∈ S. As usual, we verify by considering the influx to an arbitrary state, *a*: $$\begin{aligned} \sum\_{b\in\mathcal{H}(a)} \pi\_b R\_{ba} &= \sum\_{b\in\mathcal{H}^{-}(a)}\pi\_b \exp[q\_f(a)-q\_f(b)] + \sum\_{b\in\mathcal{H}^{+}(a)} \pi\_b \exp\left[q\_d(a)-q\_d(b)\right]\\ &=\frac{1}{Z}\left[ \sum\_{b\in\mathcal{H}^{-}(a)} \exp[q\_f(a)+q\_d(b)] + \sum\_{b\in\mathcal{H}^{+}(a)} \exp\left[q\_d(a)+q\_f(b)\right] \right]\\ &=\frac{\exp\left[q\_f(a)+q\_d(a)\right]}{Z} \left[ \sum\_{b\in\mathcal{H}^{-}(a)} \exp[q\_d(b)-q\_d(a)] + \sum\_{b\in\mathcal{H}^{+}(a)} \exp\left[q\_f(b)-q\_f(a)\right] \right]\\ &= \pi\_a \sum\_{b\in\mathcal{H}(a)} R\_{ab}\\ &=-\pi\_a R\_{aa}.\end{aligned}$$ Thus, *π**T**R* = 0, and the equilibrium is as posited. Like their discrete counterparts, the general CSTERGMs are particularly natural in settings for which the factors shaping edge formation are different from those shaping persistence or dissolution of edges. Inference for these models, however, is more complex than for their constant formation/dissolution counterparts, due to the need to separate the two distinct potentials (*q**f* and *q**d*) that cannot be inferred from mean durations. We briefly comment further on this issue in Section [secdisc]. ### Continuum TERGMs Having seen continuum forms of the STERGM family, it is reasonable to ask if there is a similar continuum extension of the general TERGMs. The answer is affirmative, as can be appreciated by simply imposing *q* = *q**f* = *q**d* in the CSTERGM development. We thus state here the general form of the continuum TERGM process (CTERGM), which has not to our knowledge appeared in prior literature. Following the same assumptions as the CSTERGMs, we take S = G with G being the order-*N* (di)graphs, with *T* corresponding to Hamming adjacency on G and rate structure $$R\_{ab} = \begin{cases} 0 & \text{ if } b\not\in \mathcal{H}(a)\\ \exp(q(b)-q(a)) & \text{ if } b\in \mathcal{H}(a)\\ -\sum\_{b\in\mathbb{G}\setminus a} R\_{ab} & \text{ if } a=b \end{cases}.$$ Finiteness of the state space and irreducibility of the transition structure give us an equilibrium *π*, which we assert has the form *π**a* = exp(2*q*(*a*))/*Z*. We verify this by examining the influx to an arbitrary state, *a* ∈ S: $$\begin{aligned} \sum\_{b\in\mathcal{H}(a)} \pi\_b R\_{ba} &= \sum\_{b\in\mathcal{H}(a)}\pi\_b \exp[q(a)-q(b)]\\ &=\frac{1}{Z} \sum\_{b\in\mathcal{H}(a)}\exp[q(a)+q(b)]\\ &=\frac{\exp[2 q(a)]}{Z} \sum\_{b\in\mathcal{H}(a)}\exp[q(b)-q(a)]\\ &= \pi\_a \sum\_{b\in\mathcal{H}(a)} R\_{ab}\\ &= - R\_{aa}.\end{aligned}$$ This establishes that *π**T**R* = 0, and *π* is indeed the CTERGM equilibrium. While the factor of two that appears in the CTERGM equilibrium may appear counterintuitive, we can appreciate from the CSTERGM case that it arises from the interaction of the flux out of *a* from edge loss (respectively edge gain) and the flux into *a* from neighbor edge gain (respectively, neighbor edge loss): when *a* is a high-potential state, it both sends less flux to its neighbors and gets more flux in return. In some cases, it may be more aesthetic to drop this factor of two from the equilibrium graph potential, and divide the log transition rate by two instead; we keep the present parameterization, however, to emphasize the relationship with the CSTERGMs. Discussion ========== As we have seen, there are many ways of defining continuous time graph processes that lead to ERGM equilibria, underscoring the point that the same cross-sectional distribution can arise from different generative processes. However, not all such processes are equally plausible in any given setting. Here, we briefly comment on what can be said about the general properties of the processes described above, and how they may be usefully distinguished on qualitative grounds. We also note some potential insights from these processes regarding network evolution more generally, as well as some observations for practical use in applied settings. Summary of General ERGM-generating Processes -------------------------------------------- Leaving aside special case models, the eight families of ERGM-generating processes are summarized in Table [tabeq]. (Note that, since the time scale on which the model is defined is arbitrary, all rates are defined up to a similarity transform. For clarity, we leave the fundamental rate constants, *A*, when used in model definition.) The diversity of formulations seen in our above development is on display here. However, some general patterns are evident. Transitions can be governed by the absolute potential of the target state (*q*(*b*), in an *a* → *b* transition), the absolute potential of the originating state, or the difference in potentials; this reflects three broad classes of dynamics, where transitions are governed by (respectively) the favorability of the target, the unfavorability of the source, or the gain in favorability associated with transitioning from source to target. One or another of these may be more plausible in particular settings, suggesting a specific choice of model family. Going further, we may characterize the differences between models in terms of a number of basic properties, as shown in Table [tabprop]. First, we observe that two families (the LERGMs and the Change Inhibition process) have transitions rates that are bounded above by a constant - there is, for these families, a fundamental “speed limit” on the pace of network change, potentially reflecting unmodeled micro-level mechanisms that depend upon some other temporal process (e.g., “collisions” or “encounters”) whose pace is not controlled by network structure. For the other families, there is in principle no limit on how rapidly transitions can occur. This is sensible where edges can evolve by near-instantaneous processes (e.g., one actor deciding in rapid succession to break off ties to each member of a group), though in other cases care may be needed in empirical settings to ensure that transition rates do not become unrealistically large. We have also seen that families vary in whether these rates are fundamentally driven by target potential (Competing Rate SAOMs), source potential (Differential Stability), or potential differences (all others). The target-focus of the SAOM arises from its utility-theoretic motivation, as actors’ decisions within the model are assumed to be based on the attractiveness of the target state, and not on a comparison with their existing state (such comparisons being characteristic of non-rational choice models such as Prospect Theory ). By turns, an exclusive focus on the stability of the current state is the defining condition of the Differential Stability process. It is interesting that all other processes so far proposed operate instead on *differences* in potentials, such that dynamics slow down as the graph process enters a “flatter” region of the potential surface. The motivation for difference-driven dynamics in the LERGM was originally by appeal to a conditional probability argument, although as we have seen this can also be motivated by the notion that formation and dissolution processes are “competing” with each other even within a dyad and that observed change rates are the result of that competition. For Change Inhibition, the motivation is direct: we posit that the system “resists” downhill moves *per se,* which implies a comparative structure. Finally, the use of potential differences in the continuum (S)TERGMs is necessary for constant rate models to lead to desired behavior (i.e., to serve as continuum limits in the manner proposed e.g by and ). These differences underscore the point that distinct generative mechanisms can lead to quite different dynamics. As a next point of contrast, we see that models differ in how rates are varied across the state space. Most immediately, the CSTERGM families are defined separably, so that formation and dissolution rates are distinct; in the general case, this feature is not shared by the other families (though some may coincide, as with the trivial case of a general CSTERGM with *q**f* = *q**d* and the CTERGMs). Less trivially, we observe that processes vary in whether transition rates vary across (are “sensitive to”) neighboring states of higher or lower potential (respectively). Although most processes considered here have higher rates of transition to states with much higher potential than those with only marginally higher potential, this is not true for Change Inhibition (which is indifferent between uphill moves) and Differential Stability (which is indifferent to *all* moves). This is even more true for sensitivity to variation in potential loss, with only the Differential Stability process being indifferent to transitions to states with slightly versus substantially lower potential. These distinctions lead to very different patterns of network evolution. Finally, we distinguish models that are sensitive to differences in potential among neighboring states formed by adding edges (members of H+) versus those formed by removing edges (members of H−). Other than the Differential Stability process (which, as noted, is insensitive to all neighbor properties), these distinctions are characteristic of the constant rate CSTERGMs, which respectively treat all dissolution transitions or formation transitions equivalently. Taken together, these distinctions separate all eight families, with each having a distinctive signature. We observe that there are some signatures that do not have a corresponding model within this set, potentially suggesting opportunities for defining additional families. Insights for Network Evolution ------------------------------ Considering network evolution in its most general terms provides a number of substantive insights. First, we observe that complex structure does not imply a selective transition process: timing alone is sufficient to generate arbitrarily rich graph structure. One cannot thus automatically assume that a focal system is driven by dynamics that seek to walk uphill on the potential (though this may be a good assumption or other reasons). Further, this observation reminds us that factors that alter the pace of change within a social network may have a substantial impact on long-term structure, so long as they do not act uniformly. Another insight is that complex structure can arise equally well from formation or dissolution processes. Though it is often reasonable to think of structure as being driven primarily by constraints on which ties can or will form (e.g., interaction opportunities, lowered or raised barriers to tie formation), it may be that differences in selection or survival of ties are the primary structural drivers. Or both may be involved. As above, there may be other information that leads us to believe that one class or the other will be primary in a given setting, but the mere presence of complex structure has no bearing on this. We can also see from the processes examined here that ERGM-generating graph processes that *do* follow a potential do not need to do so symmetrically: like the Change Inhibition process, they can be more sensitive to downhill moves than uphill moves (or, conceivably, *vice versa*). It may be reasonable to assume symmetry on substantive or other grounds, but it need not exist for complex structure to result. A further insight that can be obtained by connecting dynamics with equilibrium behavior is that many dynamic processes lead to degenerate long-run behavior (in the sense of ). Indeed, any ERGM-generating process coupled with a potential leading to a degenerate ERGM distribution will also have degenerate dynamics; the fact that such a wide range of different dynamic processes can lead to the same ERGM equilibria thus confirms that degeneracy is not a consequence of any specific choice of local dynamics. For instance, the choice-motivated SAOMs and competing rate-motivated LERGMs can both lead to the same degenerate graph distributions as CTERGMs and Differential Stability processes. Sometimes, as per, this is in fact an empirically realistic outcome; for most social networks, however, degeneracy is usually pathological. What we can glean from the present work is that - at least for processes of the type studied here - degeneracy should be understood as arising from the graph potential rather than the processes operating on it. While this is not unprecedented (the presence of many quasi-time MCMC processes leading to identical graph distributions, for instance, suggests the same conclusion, as do results of ), it may not always have been appreciated. A deeper consideration of the connection between dynamic processes and their long-run behavior thus has the potential to produce more general insights about the factors that lead to the types of networks we see in the real world, as opposed to the many types of networks that *could* arise, but that do not. Finally, we note that while some processes may be especially natural in particular settings (e.g., SAOMs for unilaterally controlled edges in interpersonal networks, LERGMs in physical networks), any given graph process may admit multiple interpretations. It is important not to assume that any one mechanistic interpretation of a graph process is the only one possible. That said, deriving an ERGM-generating graph process from first principles can provide a strong motivation for applying it in cases where the associated assumptions are met. Mechanistic derivations of ERGM-generating continuous time graph processes would thus seem to be an important area for further research. Considerations for Practical Use -------------------------------- Although our focus here is on model definition and general properties, we may glean a few considerations for use of these models in practice. First, we observe that all of the ERGM-generating graph processes studied here except for the CSTERGMs have parameters that can be estimated up to a pacing constant from cross-sectional observations. Specifically, inferring *q* from one or more equilibrium draws determines the behavior of the model, although quantitative rates of change obviously require that the characteristic timescale of the process (and, where applicable, the maximum change rate) be inferred from information on observed dynamics. In the case of the constant rate CSTERGMs, one cannot identify the formation or dissolution potentials (respectively) without knowing the respective dissolution or formation rates, but these are easily estimated from duration or pacing information (as discussed by ). Moreover, since ERGM inference can often be performed using sampled or incomplete data, it follows that a considerable amount can be inferred about network dynamics from quite limited observations, provided that one has some *a priori* basis for constraining the associated model family. It is also useful to observe that, given an ERGM, it is straightforward to simulate hypothetical equilibrium dynamics using these models; in such applications, one must choose the pacing constant and, for the constant rate CSTERGMs, the corresponding mean durations. As noted above, such an approach has been used in the discrete case e.g. by and in the continuous case by, both of whom used approximate matching to available empirical observations to calibrate total change rates. In some applications, considering dynamics relative to the characteristic timescale of the process (“phenomenological time”) may be sufficient to gain useful insights. Although a detailed discussion of simulation procedures is beyond the scope of this paper, it should be noted that standard discrete event simulation methods can be used for all of the models considered here. In particular, given rate structure *R* and current state *a*, the probability that the next transition will be to state *b* is simply *R**a**b*/∑*c* ∈ S \ *a**R**a**c*, with the time to the next event being exponentially distributed with expectation [∑*b* ∈ S \ *a**R**a**b*]− 1. For these families (all of which involve single-edge changes), this can be implemented by calculating the rates for all potential toggles from the current state, then selecting both the next toggle and the time to the next event independently. Model-specific implementations may allow for greater efficiency in some cases (e.g., by pooling events with identical rates). It should be noted that the presence of *absolute* exponentiated graph potentials in the exit rates for the Competing Rate SAOMs and Differential Stability processes (as opposed to exponentiated potential *differences*; see Table [tabeq]) implies that these processes may in some cases exhibit dynamics with very large numbers of transition events per unit phenomenological time. For the former process, this arises when the model shuttles very rapidly between two or more Hamming-adjacent, high-potential graphs. For the latter process, the phenomenon arises when the model has entered a low-potential region of graph space, and rapidly switches at random between low-potential graphs. Although these dynamics take very little phenomenological time, they can become very computationally expensive in large systems with large variation in graph potential. Specialized algorithms for handling these cases may be needed in some circumstances; where transition rates become extremely large, it may also be prudent to consider whether the process in question is substantively reasonable. Where detailed dynamic information is available, this can of course be exploited, allowing for out-of-equilibrium inference; see, e.g.. Such information is generally required for general CSTERGM inference (since one cannot distinguish the formation and dissolution potentials from equilibrium information), and is also needed when seeking to distinguish among ERGM-generating processes on an empirical basis. Although model adequacy checking for continuous time models is an area of active development (see e.g. ), one obvious diagnostic is to compare observed dwell times in each state to those of Table [tabeq]. While more conventional adequacy checking strategies based on cross-sectional simulation are useful for assessing whether a model can reproduce equilibrium behavior, they will not distinguish among ERGM-generating processes with the same equilibrium. By contrast, different dynamic processes with equivalent graph equilibria generally have very different inter-event timing. Conclusion ========== The above sketches some of the precursors and known classes of stochastic processes leading to known ERGM distributions. As observed, there are several general classes of such processes; however, it is likely that these only scratch the surface of what is possible. It is interesting to note that, with the lone exception of the CFP/R, the frameworks studied to date all take S = G. Although this is convenient, it limits the dynamics that are possible. Extending the state space will necessarily add an extra entropic contribution to the ERGM potential - equivalently, a change of reference measure - since it will change the number of ways that each graph can be realized (i.e., it adds hidden degrees of freedom). Since entropic effects are (broadly) related to opportunity structures, state space extension may be important for capturing contextual constraints on network dynamics. We also observe that while dynamics are limited to Hamming transitions for the LERGM, Change Inhibition, and Differential Stability processes, this limitation is not required for deriving their equilibria (other than establishing irreducibility), and it is not essential: these processes can operate (with equivalent rate functions) on any fully connected set of transitions on G. Thus, simultaneous edge changes can be accommodated within existing CTMC processes, albeit at computational cost. We conclude by observing that a major limitation on progress in this area is a dearth of high-quality dynamic data on social or other networks that is capable of discriminating among competing models. It is hoped that advances in data collection will produce a body of observations that will put continuous time network models on a firmer empirical footing. --- 1. Departments of Sociology, Statistics, Computer Science, and EECS and Institute for Mathematical Behavioral Sciences, University of California Irvine; `[email protected]`[↩](#fnref1) 2. This work was supported by NIH award 1R01GM144964-01 and NSF award SES-1826589. The author thanks Martina Morris, Michael Schweinberger, Chad Klumb, and Steve Goodreau for their input and helpful comments.[↩](#fnref2) 3. E.g., we need the change rate to not go to infinity in finite time, so that the system cannot “ergotize” out from under us.[↩](#fnref3) 4. We also need non-simultaneous edge state transitions, i.e., all non-vanishing transition rates are on unit moves in the Hamming space of the support.[↩](#fnref4) 5. E.g., spontaneous deflation of a balloon is rather faster than the time required for it to spontaneously reinflate. But this seeming asymmetry is driven by the fact that we *selected* the balloon when it was in a very rare conformation, and the waiting time to observe such a rare conformation is very long. The “arrow” is in our selection process, not the system itself. This is merely the regression effect, much disguised.[↩](#fnref5) 6. This formulation itself appears earlier, but is not discussed as a general ERGM generating process.[↩](#fnref6)
arxiv_0000040
26 & 6.38 & 2.13 & 4.79 & 4.26 `Clean All X` & 0.91 & 0.00 & 4.55 & 3.64 & 9.09 & 3.64 & 6.36 & 4.55 & 2.73 `Put All X In One Y` & 1.02 & 0.00 & 2.04 & 6.12 & 1.02 & 4.08 & 3.06 & 3.06 & 3.06 `Put All X On Y` & 0.00 & 2.94 & 0.00 & 0.00 & 5.88 & 4.41 & 4.41 & 0.00 & 4.41 `Prepare Salad` & 0.75 & 0.38 & 1.13 & 6.04 & 2.26 & 4.53 & 1.51 & 3.40 & 3.02 `Prepare Sandwich` & 0.40 & 0.40 & 2.78 & 5.16 & 3.57 & 3.97 & 3.97 & 4.37 & 3.17 `Prepare Breakfast` & 0.27 & 0.82 & 4.37 & 5.19 & 6.28 & 3.28 & 5.19 & 6.56 & 4.37 & `Make Coffee` & 0.00 & 0.00 & 4.17 & 16.67 & 14.58 & 10.42 & 10.42 & 12.50 & 12.50 `Water Plant` & 0.00 & 0.00 & 10.00 & 15.00 & 5.00 & 15.00 & 15.00 & 15.00 & 15.00 `Make Plate Of Toast` & 0.00 & 0.00 & 8.33 & 6.25 & 8.33 & 6.25 & 6.25 & 4.17 & 6.25 `Boil Potato` & 0.00 & 0.00 & 0.00 & 4.17 & 4.17 & 4.17 & 0.00 & 0.00 & 4.17 `N Slices Of X In Y` & 1.43 & 1.43 & 4.29 & 4.29 & 7.14 & 5.71 & 8.57 & 5.71 & 4.29 `N Cooked Slices Of X In Y` & 0.00 & 0.00 & 2.67 & 5.33 & 8.00 & 9.33 & 4.00 & 5.33 & 2.67 `Clean All X` & 4.05 & 0.00 & 6.76 & 8.11 & 8.11 & 6.76 & 16.22 & 6.76 & 13.51 `Put All X In One Y` & 2.00 & 6.00 & 0.00 & 4.00 & 6.00 & 4.00 & 8.00 & 4.00 & 6.00 `Put All X On Y` & 0.00 & 0.00 & 2.13 & 2.13 & 4.26 & 6.38 & 4.26 & 0.00 & 6.38 `Prepare Salad` & 1.38 & 0.00 & 4.83 & 6.90 & 4.83 & 6.90 & 4.14 & 6.90 & 6.21 `Prepare Sandwich` & 0.00 & 0.00 & 1.47 & 0.00 & 2.94 & 2.94 & 2.94 & 1.47 & 2.94 `Prepare Breakfast` & 0.00 & 1.75 & 4.68 & 4.68 & 6.43 & 6.43 & 8.77 & 4.09 & 6.43 & `Make Coffee` & 0.69 & 2.08 & 1.39 & 4.86 & 9.72 & 5.56 & 9.72 & 5.56 & 10.42 `Water Plant` & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 `Make Plate Of Toast` & 0.37 & 0.75 & 4.10 & 5.22 & 3.36 & 4.10 & 5.97 & 4.10 & 5.22 `Boil Potato` & 0.53 & 0.53 & 4.28 & 3.21 & 2.67 & 6.95 & 3.74 & 6.42 & 3.21 `N Slices Of X In Y` & 0.97 & 0.49 & 3.88 & 4.85 & 6.80 & 3.88 & 4.85 & 6.31 & 5.34 `N Cooked Slices Of X In Y` & 1.87 & 1.49 & 1.87 & 7.84 & 3.73 & 6.34 & 4.85 & 8.58 & 4.48 `Clean All X` & 0.51 & 1.54 & 3.08 & 10.26 & 5.64 & 9.74 & 5.13 & 8.21 & 7.18 `Put All X In One Y` & 0.00 & 0.00 & 3.00 & 3.00 & 5.00 & 5.00 & 2.00 & 5.00 & 2.00 `Put All X On Y` & 0.00 & 0.00 & 0.00 & 5.22 & 5.22 & 5.22 & 5.22 & 2.24 & 2.24 `Prepare Salad` & 1.06 & 1.06 & 4.22 & 7.12 & 4.22 & 7.12 & 2.37 & 9.23 & 4.75 `Prepare Sandwich` & 0.00 & 1.01 & 3.36 & 6.04 & 4.36 & 4.36 & 3.02 & 5.03 & 2.35 `Prepare Breakfast` & 0.87 & 0.22 & 2.17 & 4.99 & 4.99 & 4.99 & 5.21 & 4.77 & 3.69 [tab:pertasksuccess] Rule-Based Agents for TATC ========================== As mentioned in section 5.2, we engineered rule-based *Commander* and *Follower* agents as an attempt to solve the Two-Agent Task Completion benchmark. **Rule-based *Follower*** maintains a queue of actions that it needs to execute. Whenever this queue gets empty, it utters *``What should I do next?"*. *Commander* in the next turn detects this utterance and executes a `Progress Check` action to generate a templated language instruction. The templated instruction consists of space-separated low-level actions like *``Forward Forward TurnRight LookUp Pickup Mug at 0.57 0.25"*. This instruction can be split up by the *Follower* into action tokens and each token has a one-to-one correspondence to an action in the action space of *Follower*. For interaction action like *``Pickup Mug at 0.57 0.25"*, *“Mug”* represents the object to be interacted with, and *`0.57 0.25"* represents the normalized coordinates in the egocentric view of *Follower* accepted by the *TEACh* API as a click position. **Rule-based *Commander*** executes a `Progress Check` action whenever it is asked to provide supervision. Listing [progresscheck:plateoftoast] shows the `Progress Check` output at the start of Make Plate of Toast task. `problem_keys` in the `Progress Check` output contains info about all objectives that need to be completed to solve the task. `property_name` of a problem key defines the type of the problem key to be solved. For the 11 tasks we consider, there are 7 problem keys that can be solved: `parentReceptacle`, `isDirty`, `isFilledWithLiquid`, `isFilledWithCoffee`, `isBoiled`, `isCooked` and `Slice`. `DesiredValue` denotes the state that the object should be in. For each of the problem keys, we can engineer a hand-crafted logic to solve it. Consequently, the scope of what rule-based agents can accomplish is limited by the engineering hours needed to identify and hand-write solutions to these problem keys in a modular planning fashion. For `parentReceptacle` problem key, an object needs to be placed on/inside another object. Algorithm [alg:parentReceptacle] shows the hand-crafted logic for `parentReceptacle` problem key. Whenever *Commander* is asked for supervision, it checks the `property_name` of problem key and if it is `parentReceptacle`, it will call the `parentReceptacle()` function. The `current_step` in the function keeps track of the supervision that needs to be provided based on the progress for the current problem key. For `current_step = navigation_1`, it will call a function `Navigate(ObjectID)` which runs a shortest path planner from the current pose of the *Follower* agent to get the low-level navigation instruction to reach `ObjectID`. Similar logic can be written for other problem keys. [!ht] [alg:parentReceptacle] **Input**: step **Output**: instruction, step parentReceptacle(step=``navigation\_1"): [1] ObjectID = get\_object\_id() ParentID = get\_parent\_id() % None if ObjectID not present inside an object instruction = Navigate(ObjectID) step = ``interaction\_1\_1“ instruction = ``ToggleOff ParentObject” instruction += ``Open ParentObject“ step = ``interaction\_1\_2” instruction = ``Pickup ObjectID“ step = ``step\_completed” instruction = ``Pickup ObjectID“ step = ``interaction\_1\_3” instruction = ``Close ParentObject“ step = ``step\_completed” **return** instruction, step [!ht] [frame=none, framesep=3mm, breaklines=true, xleftmargin=21pt, tabsize=1, tabsize=1,breaksymbolleft=]json “taskdesc”: “Make a plate of toast.”, “success”: 0, “subgoals”: [ “representativeobjid”: “Bread|-00.58| 00.27|-01.27”, “stepsuccesses” : [0], “success”: 0, “description”: “Make a slice of toast.”, “steps”: [ “success”: 0, “objectId”: “Bread|-00.58| 00.27|-01.27”, “objectType”: “Bread”, “desc”: “The bread needs to be sliced using a knife.”,, “success”: 0, “objectId”: “Bread|-00.58| 00.27|-01.27”, “objectType”: “Bread”, “desc”: “The bread needs to be toasted.”, ], “problemkeys”: “Bread|-00.58| 00.27|-01.27” : [ “objectType”: “Bread”, “determiner”: “a”, “propertyname”: “objectType”, “desiredpropertyvalue”: “BreadSliced”, “objectType”: “Bread”, “determiner”: “a”, “propertyname”: “isCooked”, “desiredpropertyvalue”: 1 ] , “representativeobjid”: “Plate|-01.18| 00.21|-01.27”, “stepsuccesses”: [1, 0], “success”: 0, “description”: “Clean a Plate.”, “steps”: [ “success”: 0, “objectId”: “Plate|-01.18| 00.21|-01.27”, “objectType”: “Plate”, “desc”: “The Plate is dirty. Rinse with water.” ], “problemkeys”: “Plate|-01.18| 00.21|-01.27”: [ “objectType”: “Plate”, “determiner”: “a”, “propertyname”: “isDirty”, “desiredpropertyvalue”: 0 ] ] [progresscheck:plateoftoast] Task Definition Language ======================== We define a Task Definition Language to define household tasks in terms of object properties that need to be satisfied in the environment for the task to be considered successful. This Task Definition Language is based on a PDDL-like syntax . A sample task can be seen in Listing [task:plateoftoast], which defines the Make a Plate of Toast task in *TEACh*. [!ht] [frame=none, framesep=3mm, breaklines=true, xleftmargin=21pt, tabsize=4, tabsize=2,breaksymbolleft=]json “taskid”: 106, “taskname”: “Plate Of Toast”, “tasknparams”: 0, “taskanchorobject”: “plate”, “desc”: “Make a plate of toast.”, “components”: “toast”: “determiner”: “a”, “taskname”: “Toast”, “taskparams”: [], “plate”: “determiner”: “a”, “taskname”: “Clean X”, “taskparams”: [“Plate”] , “relations”: [ “property”: “parentReceptacles”, “tailentitylist”: [“plate”], “taildeterminerlist”: [“the”], “headentitylist”: [“toast”], “headdeterminerlist”: [“a”], “failuredesc”: “The toast needs to be on a clean plate.” ] [task:plateoftoast] A task is specified in terms of `components` and `relations`. A `component` is specified using a set of conditions to be satisfied for the task to be considered complete. As seen in the above example, a `component` can be specified by referencing another task. In Make a Plate of Toast, the `component` `toast` is described by referencing another task, `Toast` (Listing [task:toast]), and the `component` `plate` is described by referencing another task, `Clean X` (Listing [task:cleanx]), with parameter value `Plate`. In task definitions, `relations` are used to describe relationships between `components` that must be satisfied for the task to be considered complete. In Make a Plate of Toast, the relation specifies that one object satisfying the conditions of `component` `plate` must be the container of (captured by AI2-THOR property `parentReceptacles`) one object satisfying the conditions of `component` `toast`. [!ht] [frame=none, framesep=3mm, breaklines=true, xleftmargin=21pt, tabsize=4, tabsize=2,breaksymbolleft=]json “taskid”: 101, “taskname”: “Toast”, “tasknparams”: 0, “taskanchorobject”: “toast”, “desc”: “Make a slice of toast.”, “components”: “toast”: “determiner”: “a”, “primarycondition”: “objectType”, “instanceshareable”: false, “conditions”: “objectType”: “BreadSliced”, “isCooked”: 1, “conditionfailuredescs”: “objectType”: “The bread needs to be sliced using a knife.”, “isCooked”: “The bread needs to be toasted.” , “knife”: “determiner”: “a”, “primarycondition”: “objectType”, “instanceshareable”: true, “conditions”: “objectType”: “Knife”, “conditionfailuredescs”: , “relations”: [] [task:toast] [!ht] [frame=none, framesep=3mm, breaklines=true, xleftmargin=21pt, tabsize=4, tabsize=2,breaksymbolleft=]json “taskid”: 103, “taskname”: “Clean X”, “tasknparams”: 1, “taskanchorobject”: “#0”, “desc”: “Clean a #0.”, “components”: “#0”: “determiner”: “a”, “primarycondition”: “objectClass”, “instanceshareable”: false, “conditions”: “objectClass”: “#0”, “isDirty”: 0, “conditionfailuredescs”: “isDirty”: “The #0 is dirty. Rinse with water.” , “sink”: “determiner”: “a”, “primarycondition”: “objectType”, “instanceshareable”: true, “conditions”: “objectType”: “Sink”, “receptacle”: 1, “conditionfailuredescs”: , “relations”: [] [task:cleanx] The task Clean X is an example of a parameterized task. The parameter `#0` is set to `Plate` when this task is referenced as a part of the more complex task Plate of Toast (Listing [task:plateoftoast]). In Clean X, the parameter is intended to be a custom object class (such as `Plate` or `Utensil`) which have been pre-defined. Parameters can also be used to specify in determiners or even free text to go into a natural language description, for example “Put all Fork *in* any Sink” versus “Put all cups *on* any Table.” When we check for task completion, parameters can be thought of as macros—we first do a text replacement of the parameter value wherever the parameter occurs in the task definition, and then the task definition is processed as if it does not have parameters. The use of parameters allows easy creation of different variants of a task with low manual effort, thus allowing us to create a more diverse dataset. More formally, a Task is defined by * **`task_id`** - A unique ID for the task * **`task_name`** - A unique name for the task used to reference it in other tasks * **`task_nparams`** - Number of parameters required by the task * **`desc`** - Natural language prompt describing the task to provide to a *Commander*, (be it human or agent model. * **`components`** - Dictionary specifying sets of conditions to be satisfied by objects of different types. This is used to specify both precondition objects required to complete the task such as knife if slicing is required or a sink if cleaning is required, as well as objects that need to be converted to the correct state as part of the task such as `toast` in Listing [task:toast]. A component can also be described using another `Task`, such as Clean X being used to define the target receptacle on which toast should sit in Make a Plate of Toast. * **`relations`** - List of conditions that relate one set of objects to another. Currently the only relationship used in our task definitions is `parentReceptacles` which checks if one object is the container for other objects. However, pairwise operators could also be used to capture other spatial relations or time of completion of components. * **`task_anchor_object`** - This is either the key of a **`component`** or `null`. This is used to identify the specific object in the simulation environment whose properties would be checked when a **`component`** specified by this **`Task`** is used in a **`relation`**. For example, the Make a Plate of Toast Task (Listing [task:plateoftoast]) contains a **`relation`** that says that its `toast` **`component`** should be contained in its `plate` **`component`**. Looking at the task definition for task `Toast` (Listing [task:toast]), we find that its **`task_anchor_object`** is its **`component`** `toast` (and not its **`component`** `knife`) which will resolve to an object of type `BreadSliced`. Looking at the task definition for Clean X (Listing [task:cleanx]), we find that its **`task_anchor_object`** is the **`component`** whose key is set to the value of parameter `#0` which will be resolved to an object of type `#0`. Since Make a Plate of Toast passes the parameter value `Plate` to Clean X, the `plate` **`component`** in Make a Plate of Toast will resolve to an object of type `Plate`. Thus overall the relation should check for an object of type `BreadSliced` (which also satisfies other conditions specified in **`component`** `toast`) to be placed on the object of type `Plate` (which also satisfies other conditions specified in **`component`** `plate`). Note that if the **`task_anchor_object`** for a `Task` is `null`, it cannot be used in **`relations`** in other `Task`s (but can still be a **`component`**). A **`component`** can be of one of two types: * Atomic **`component`** - A **`component`** that is specified in terms of base simulator properties, for example all **`component`**s of `Task`s Toast and Clean X. * `Task` **`component`** - A **`component`** that is specified in terms of another `Task`, for example the components in Makke a Plate of Toast). Atomic **`component`**s are specified using the following keys: * **`conditions`** - Set of `property : desired_value` pairs for this **`component`** to be considered satisfied. For example the conditions for the `toast` **`component`** in Toast look for an object of type `BreadSliced` whose property `isCooked` has been set to 1. * **`condition_failure_descs`** - For properties in conditions that correspond to changes that have to happen by the annotator taking an action, this specifies the description to be provided to the annotator if the property is currently not set to the desired value. For example, in **`component`** `toast` the value for **`condition_failure_descs`** specifies that if there is no sliced bread in the scene, we should send the message “The bread needs to be sliced using a knife” and if there is no toasted bread slice in the scene we should send the message “The bread needs to be toasted”. * **`determiner`** - This is used to specify how many object instances should satisfy this set of conditions. The possible values are `a`, `all` or a positive integer. For example in the `Task Toast`, the `toast` **`component`** has **`determiner`** `a` so we would say this component is satisfied if there is any slice of toasted bread in the scene. Instead if the **`determiner`** was 2, we would only say that the **`component`** is satisfied if there are at least 2 slices of toasted bread in the scene. If it was `all`, we would say that the **`component`** is satisfied if all slices of bread present in the scene are toasted. * **`primary_condition`** - This is used to find candidate objects that an annotator needs to modify to satisfy this **`component`**. It is usually an object type or class. * **`instance_shareable`** - This is a parameter used to handle how numbers cascade across hierarchies. In the Make a Plate of Toast, the **`determiner`** for component `toast` is `a`. Suppose instead that this was 2. By default we would multiply the **`determiner`**s of all **`component`**s of Toast (except `all`) by 2 (treating `a` as 1). So to check if the Toast-2 is satisfied we would check both if there are 2 slices of toast and if there are 2 knives. But the knife has only been specified as a precondition and we do not actually need 2 knives in Toast-2. This exception is captured by the property **`instance_shareable`**. The knife **`component`** has **`instance_shareable`** `= true` so regardless of the **`determiner`** associated with Toast, we would only check for one knife, but the **`component`** `toast` has **`instance_shareable`** `= false` so we would require *n* slices of toast in Toast-2. [!ht] [frame=none, framesep=3mm, breaklines=true, xleftmargin=21pt, tabsize=4, tabsize=2,breaksymbolleft=]json “taskid”: 110, “taskname”: “Put All X On Y”, “tasknparams”: 3, “taskanchorobject”: null, “desc”: “Put all #0 #1 any #2.”, “components”: “#0”: “determiner”: “all”, “primarycondition”: “objectClass”, “instanceshareable”: false, “conditions”: “objectClass”: “#0”, “conditionfailuredescs”:, “#2”: “determiner”: “a”, “primarycondition”: “objectClass”, “instanceshareable”: true, “conditions”: “objectClass”: “#2”, “receptacle”: 1, “conditionfailuredescs”: , “relations”: [ “property”: “parentReceptacles”, “tailentitylist”: [“#2”], “taildeterminerlist”: [“a”], “headentitylist”: [“#0”], “headdeterminerlist”: [“all”], “failuredesc”: “The #0 needs to be put #1to a #2” ] [task:putallxony] [!ht] [frame=none, framesep=3mm, breaklines=true, xleftmargin=21pt, tabsize=4, tabsize=2,breaksymbolleft=]json “taskid”: 111, “taskname”: “Put All X In One Y”, “tasknparams”: 3, “taskanchorobject”: null, “desc”: “Put all #0 #1 one #2.”, “components”: “#0”: “determiner”: “all”, “primarycondition”: “objectClass”, “instanceshareable”: false, “conditions”: “objectClass”: “#0”, “conditionfailuredescs”:, “#2”: “determiner”: “a”, “primarycondition”: “objectClass”, “instanceshareable”: true, “conditions”: “objectClass”: “#2”, “receptacle”: 1, “conditionfailuredescs”: , “relations”: [ “property”: “parentReceptacles”, “tailentitylist”: [“#2”], “taildeterminerlist”: [“the”], “headentitylist”: [“#0”], “headdeterminerlist”: [“all”], “failuredesc”: “The #0 needs to be put #1to a single #2” ] [task:putallxinoney] Task relations are specified by: * **`property`** - The property being checked (currently we only have support for `parentReceptacles`) * **`head_entity_list`**, **`tail_entity_list`** - While exactly which object is the head and which object is the tail is arbitrary and would be decided by implementation used to check a property, we assume that if we examine the property value of the head entities, the tail entities would be specified in them. Currently these are implemented as lists to handle the very specific case where we want to define that multiple objects need to be placed in a single container (e.g.: multiple sandwich components in a plate). The entities are specified using the **`component`** keys and we recursively check **`task_anchor_object`** of `Task` components to find the exact object to be used when checking the relation. * **`head_determiner_list`** - A list of the same length as **`head_entity_list`** where each entry can take values `a`, `all`, or a number and specify how many objects matching conditions specified by the respective **`component`** are involved in this relation. * **`tail_determiner_list`** - A list of the same length as **`tail_entity_list`** where each entry can take values `a` or `the`. To illustrate the difference, compare the `Task`s Put All X On Y (Listing [task:putallxony]) and Put All X In One Y (Listing [task:putallxinoney]). Suppose there are two objects of type `X` (*x*1 and *x*2) and two objects of type `Y` (*y*1 and *y*2) in the scene. If *x*1 is placed in/on *y*1 and *x*2 is placed in/on *y*2, this would satisfy the Task Put All X On Y (because each object of type `X` is on `a` object of type `Y`) but does not satisfy Put All X In One Y (because there is no single object of type `Y` such that *x*1 and *x*2 are in `the` object of type Y) * **`failure_desc`** - The message to be shown to an annotator if some action needs to be taken to make this relation satisfied. **Checking task completion**: The task definition specifies all conditions that need to be satisfied by objects in the scene for a `Task` to be considered satisfied. To check if a `Task` is satisfied, first, for each **`component`**, we check if as many instances, specified by the **`determiner`** of that **`component`** satisfy the conditions specified in **`conditions`** (or in the case of `Task` **`component`**s, we recursively check that the `Task` specified as the **`component`** is satisfied). Next, we take the objects satisfying the conditions of each **`component`** and use them to check **`relation`**s. If there exist objects within this subset that also satisfy all **`relation`**s, the `Task` is considered satisfied. *TEACh* Examples and Qualitative Analysis ========================================= For several example figures below, we provide video session replays in the attached supplementary material. To compress video size and length, we play 1 action, either an utterance or environment action, per second, rather than the “real time” playback. Videos show the *Commander* and *Follower* egocentric view, as well as the object search camera for the *Commander* together with the segementation mask of the searched object. Additionally, each video shows utterance data and progress check response data in JSON format. *TEACh* was collected using an annotator interface that allowed unconstrained chat between annotators. Annotators need to communicate because the task information is only available to the *Commander*—during collection called the *User*—but only the *Follower*—in collection called the *Robot*—can actually take actions. We provide some guidelines for annotators on how to conduct these conversations, detailed in §[sec:appcollection]. Our guidelines encourage annotators to explicitly request for and mention only task-relevant information. However, annotators can and do decide to provide annotations in different levels of detail and relevance. [!ht]![Sample session for the Make Coffee task where the Commander does not explain in much detail how the task is to be completed. The session also includes an example where the Follower needs to ask for help because the Commander initially provided the wrong location for the mug.](example_images/EDH_Table1_3793c8580da1c1a9_8258.png "fig:") [fig:sampledialog1] [!ht]![Another sample session for Make Coffee. In this session, the Commander provides step by step instructions and feedback to the Follower despite the Follower not asking for the next instruction or help.](example_images/Cofee_82ef609bb3149ca8_7ddc.png "fig:") [fig:Coffeeexample2] [!ht]![Sample session for the Clean All X task in a bathroom. While the task could be solved more efficiently by simply turning on the faucet in the bathtub, the Commander and Follower instead choose to clean the cloth in the sink. This session also demonstrates examples of how utterances can get out of order due to the absence of forced turn taking. ](example_images/Clean_All_X__9cf2b24ad4dba768_b33e.png "fig:") [fig:examplecleanallx] [!ht]![Sample session for the Put All X In One Y task. In this session, Commander corrects the Follower to pick up the correct tissue box. Then Commander does not realize that all the tissue boxes need to be placed on the same side table, and hence initially gives the Follower incorrect instructions.](example_images/Put_all_X_in_One_Y_4e32a37852512e84_811a.png "fig:") [fig:exampleputallxinoney] [!ht]![Sample session for the Water Plant task. The Commander initially gives an incorrect instruction requiring the Follower to ask for help and search for a container. The Follower finds another container before getting help from the Commander.](example_images/Water_Plant_a29fe298de2eb81b_9f43.png "fig:") [fig:examplewaterplant] [!ht]![Sample session for the N Slices Of X In Y task. This example demonstrates interleaving chat messages between the Commander and Follower. The Commander uses referring expressions, such as On the table where chair was, to help the Follower locate the target object. The Follower also asks the Commander for help, and gives confirmation, frequently.](example_images/N_Slices_of_X_in_Y_2de9e62dc7b62f3b_bac3.png "fig:") [fig:examplenslicesofxiny] [!ht]![Sample session for the Put All X On Y task in a bedroom. The Commander intends to give step by step instructions but occasionally provides the next step before the Follower has finished the previous step. ](example_images/Put_All_X_On_Y__c0ae74d6938f4c1d_c3c4.png "fig:") [fig:exampleputallxony] [!ht]![Sample session for the Sandwich task. The Follower requires the task of making a sandwich to be broken down into simpler steps but anticipates a few steps, finding the bread and knife before being explicitly asked to.](example_images/Sandwich__6aa775df37538c2a_6508.png "fig:") [fig:examplesandwich] [!ht]![Sample session for the Boil Potato task. The session demonstrates an example where the Commander helps the Follower to solve the issue of a pot not fitting into the sink. ](example_images/Boil_Potato__47cdabd6da727ed1_d685.png "fig:") [fig:exampleboilpotato] [!ht]![Sample session for the Breakfast task where the Follower has to make coffee and a sandwich with lettuce. The Commander provides step by step instructions but occasionally provides the next step, for example slicing bread, before the Follower is done with the previous step, and is sometimes late with help. For example, the Follower finds the knife alone because the Commander does not provide its location. ](example_images/Breakfast__0fb77c7054bc7a0c_c243.png "fig:") [fig:examplebreakfast] [!ht]![Sample session for the Salad task. The Follower anticipates the Commander’s directions, slicing the tomato and lettuce before it is asked, but forgets to plate the salad until directed to do so by the Commander.](example_images/Salad_7f5bf828ea7530fc_dc63.png "fig:") [fig:examplesalad] [!ht]![Sample session for the N Cooked Slices Of X In Y task. The Follower finds a potato before the Commander directs it to one.](example_images/N_Cooked_slices_of_X_In_Y__0395663fca8c7dbc_648d.png "fig:") [fig:examplencookedslicesofy] [!ht]![Sample session for the Plate Of Toast task. This session demonstrates interleaving chat messages, referring expressions and Commander proving feedback to the Follower for sub-tasks.](example_images/Plate_Of_Toast_3469d4785fac2d22_a7a9.png "fig:") [fig:exampleplateoftoast] Consider the example dialogs in Figures [fig:sampledialog1] and [fig:Coffeeexample2]. In Figure [fig:sampledialog1], the *Commander* simply tells the *Follower* to prepare coffee, but in Figure [fig:Coffeeexample2], the *Commander* provides much lower level instructions, and waits for the *Follower* to complete each step. The initial instruction provided in Figure [fig:sampledialog1] (*“can you make me a coffee please?”*) are similar to goal-level instructions and requests typically seen in task-oriented dialog. The instructions in Figure [fig:Coffeeexample2] (*``grab the dirty mug out of the fridge*, *go wash in the sink*) are more similar to the detailed instructions in datasets such as R2R . Every trajectory in ALFRED  is annotated with instructions at both of these levels of granularity. In *TEACh*, by contrast, dialogues may contain either one or in some cases both levels of granularity. Thus, *TEACh* benchmark agents need to be able to effectively map instructions at different levels of granularity to low level actions. Dialogues also contain situations where the *Commander* helps the *Follower* get “unstuck”. For example, in Figure [fig:Coffeeexample2], the *Commander* suggests that the *Follower* needs to clear out the sink in order to place the mug in it. In future work, we could attempt to leverage such utterances to learn more general knowledge of the environment that can be used by a *Follower* to get unstuck, either via student forcing from learned rules or by adding hand-written recovery modules analogous to the simple navigation and interaction recovery modules in ABP  and EmBERT . For example, an agent may use the dialogue in Figure [fig:Coffeeexample2] to infer that if it tries to place an object in a container and fails, it must try to clear out the container. In Figure [fig:Coffeeexample2], the *Follower* did not explicitly ask for help. In contrast, in Figure [fig:sampledialog1], the *Follower* asks for help when it does not find a mug in the places it initially searches, which prompts the *Commander* to correct their instruction. This session also illustrates a difference between *TEACh* where the task is completed by a human annotator based on online instructions from another human annotator, and benchmarks that elicit descriptions for trajectories generated by a planner, such as ALFRED. In Figure [fig:exampleputallxinoney], the *Commander* keeps changing their mind about where the *Follower* should place the tissue boxes, resulting in a less efficient path. A human *Commander* may make mistakes when providing instructions, and a human *Follower* may not perfectly follow instructions. Standard teacher forcing training, including that used in our baseline experiments, does not account for such imperfections in demonstrated trajectories. However, robust agent models for *TEACh* benchmarks will need to learn to identify what information is essential and what is irrelevant or wrong. Dialogues can contain a lot of feedback, for example the *Follower* informing the *Commander* when it has completed a step or the task, and the *Commander* affirming that a step or task has been completed. In the EDH and TfD tasks, an agent will likely need to learn to ignore these feedback steps. However, in future, these self-reported completions could be useful to segment large tasks into pragmatic subgoals. Unlike ALFRED, since our tasks have varying levels of hierarchy, what may constitute pragmatic subgoals for one task may be too much detail for another task. We place no constraints on our chat interface - for example, we do not impose turn taking. Thus, chat messages from the two annotators interleave in interesting ways. For example, consider Figure [fig:examplecleanallx]. The *Follower*’s messages *“What task do I do today?”* and *“I have picked the purple object. What next?”* are preceded by their responses from the *Commander*. An agent performing our EDH or TfD task will need to be able to mentally reorder these messages to successfully complete the task. To facilitate detection of interleaved messages, we provide millisecond level timesteps for each action and utterance in the *TEACh* data, though in figures we represent each action as a “timestep.” The *Follower* can also ask for different kinds of help as they try to complete the task including clarification, for example in Figure [fig:exampleputallxinoney], “which table? the one with the other tissue box?”, asking for the location of an object, as in Figure [fig:sampledialog1], *“Where can I find a mug?”*, and help if it is unable to perform an action requested by the *Commander*, as in Figure [fig:sampledialog1], *“I can’t seem to see a mug”*. A good *Follower* model should be able to execute actions based on dialogue history, while also being able to interact with the *Commander* in natural language - clarifying ambiguous instructions, obtaining additional information as needed, learning to solve problems, and providing feedback as it completes tasks. To accomplish these needs, a model may have to identify different dialog acts, translate dialog history to actions (EDH), detect situations where additional information is needed, and generate appropriate dialog responses in these situations. Jointly learning a *Commander* and *Follower* model may begin to enable these strategies. --- 1. https://github.com/alexa/teach[↩](#fnref1) 2. Using the spaCy tokenizer: <https://pypi.org/project/spacy/>[↩](#fnref2) 3. An earlier version of this dataset had a larger number of EDH instances. The current released split has filtered EDH instances so that only state changes that directly result in task progress are considered.[↩](#fnref3) 4. We follow ALFRED in using a macro-, rather than micro-average for Goal-Conditioned Success Rate.[↩](#fnref4) 5. <https://pypi.org/project/spacy/>[↩](#fnref5) 6. https://pypi.org/project/symspellpy/[↩](#fnref6) TEACh: Task-driven Embodied Agents that Chat ============================================ Robots operating in human spaces must be able to engage in natural language interaction, both understanding and executing instructions, and using conversation to resolve ambiguity and correct mistakes. To study this, we introduce *TEACh*, a dataset of over 3,000 human–human, interactive dialogues to complete household tasks in simulation. A *Commander* with access to oracle information about a task communicates in natural language with a *Follower*. The *Follower* navigates through and interacts with the environment to complete tasks varying in complexity from Make Coffee to Prepare Breakfast, asking questions and getting additional information from the *Commander*. We propose three benchmarks using *TEACh* to study embodied intelligence challenges, and we evaluate initial models’ abilities in dialogue understanding, language grounding, and task execution. Introduction ============ [fig:teaser] Many benchmarks for translating visual observations and an initial language instruction to actions assume no further language communication . However, obtaining clarification via simulated interactions  or learning from human-human dialogue  can improve embodied navigation. We hypothesize that dialogue has even more to offer for object-centric, hierarchical tasks. [t] 2.5pt lcccccc **Dataset & & & **Demonstrations & Interaction & State Changes & Conversational & # Sessions & Freeform & R2R  & & & & -& -& Planner CHAI  & & & & -& -& Human CVDN  & & & & 2050 & & Human CerealBar  & & & & 1202 & & Human MDC (Narayan-Chen et al. ) & & & & 509 & & Human ALFRED  & & & & -& -& Planner III  & & & & - & -& Human *TEACh*& & & & 3215 & & Human**** [tab:benchmarkqualitativecomparison] We introduce ***T**ask-driven **E**mbodied **A**gents that **C**hat* (*TEACh*)[1](#fn1) to study how agents can learn to ground natural language  to the visual world and actions, while considering long-term and intermediate goals, and using dialogue to communicate. *TEACh* contains over 3,000 human–human sessions interleaving utterances and environment actions where a *Commander* with oracle task and world knowledge and a *Follower* with the ability to interact with the world communicate in written English to complete household chores (Figure [fig:teaser]). *TEACh* dialogues are unconstrained, not turn-based, yielding variation in instruction granularity, completeness, relevance, and overlap. Utterances include coreference with previously mentioned entities, past actions, and locations. Because *TEACh* sessions are human, rather than planner-based , *Follower* trajectories include mistakes and corresponding, language-guided correction. We propose three benchmarks based on *TEACh* sessions to study the ability of learned models to achieve aspects of embodied intelligence: Execution from Dialog History (EDH), Trajectory from Dialog (TfD) and Two-Agent Task Completion (TATC). We also demonstrate baseline performance on these benchmarks. To model the *Follower* agent for the EDH and TfD benchmarks, we build on the Episodic Transformer (E.T.) model  as a baseline. When modeling both agents for end-to-end task completion, we demonstrate the difficulty of engineering rule-based solvers. The main contributions of this work are: * *TEACh*, a dataset of over 3000 human-human dialogs simulating the experience of a user interacting with their robot to complete tasks in their home, that interleaves dialogue messages with actions taken in the environment. * An extensible task definition framework (§[sec:dataset]) that can be used to define and check completion status for a wide range of tasks in a simulated environment. * Three benchmarks based on *TEACh* sessions and experiments demonstrating initial models for each. Related Work ============ Table [tab:benchmarkqualitativecomparison] situates *TEACh* with respect to other datasets involving natural language instructions for visual task completion. #### Vision & Language Navigation (VLN) tasks agents with taking in language instructions and a visual observation to produce an action, such as turning or moving forward, to receive a new visual observation. VLN benchmarks have evolved from the use of symbolic environment representations  to photorealistic indoor  and outdoor environments , as well as the prediction of continuous control . *TEACh* goes beyond navigation to object interactions for task completion, and beyond single instructions to dialogue. #### Vision & Language Task Completion involves actions beyond navigation. Models have evolved from individual rule-based or learned components for language understanding, perception and action execution , to end-to-end models in fully observable blocks worlds . More complex tasks involve partially observable worlds  and object state changes . Some works use a planner to generate ideal demonstrations that are then labeled, while others first gather instructions and gather human demonstrations . In *TEACh*, human instructions and demonstrations are gathered simultaneously. [t] [fig:interface] #### Vision & Dialogue Navigation and Task Completion Agents that engage in dialogue instead of simply following natural language instructions can be learned by combining individual rule-based or learned components . Simulated clarification can also improve end-to-end VLN models . Models are also able to take advantage of conversational history in human-human dialogues to perform better navigation , learn agent-agent policies for navigating and speaking , and deploy individual agent policies for human-in-the-loop evaluation . However, such models and underlying datasets are limited to navigation actions and turn-taking conversation. In contrast, *TEACh* involves *Follower* navigation and object interaction, as well as freeform dialogue acts with the *Commander*. The Minecraft Dialogue Corpus (MDC)  gives full dialogues between two humans for assembly tasks. MDC is similar in spirit to *TEACh*; we introduce a larger action space and resulting object state changes, such as slicing and toasting bread, as well as collecting many more human-human dialogues. The *TEACh* Dataset =================== We collect 3,047 human–human *gameplay sessions* for completing household tasks in the AI2-THOR simulator . Each session includes an initial environment state, *Commander* actions to access oracle information, utterances between the *Commander* and *Follower*, movement actions, and object interactions taken by the *Follower*. Figure [fig:interface] gives an overview of the annotation interface. Household Tasks --------------- [t] [fig:tdlexample] We design a *task definition language* (TDL) to define household tasks in terms of object properties to satisfy, and implement a framework over AI2-THOR that evaluates these criteria. For example, for a task to make coffee, we consider the environment to be in a successful state if there is a mug in the environment that is clean and filled with coffee. Parameterized tasks such as Put All X On Y enable task variation. Parameters can be object classes, such as putting all `forks` on a `countertop`, or predefined abstract hypernyms, for example putting all `silverware`—forks, spoons, and knives—on the counter. *TEACh* task definitions are also hierarchical. For example, Prepare Breakfast contains the subtasks Make Coffee and Make Plate of Toast. We incorporate determiners such as “a”, “all” and numbers such as 2 to enable easy definition of a wide range of tasks, such as N Slices of X in Y. The *TEACh* TDL includes template-based language prompts to describe tasks and subtasks to *Commander*s (Figure [fig:tdlexample]). Gameplay Session Collection --------------------------- Annotators first completed a tutorial task demonstrating the interface to vet their understanding. For each session, two vetted crowdworkers were paired using a web interface and assigned to the *Commander* and *Follower* roles (Figure [fig:interface]). The *Commander* is shown the task to be completed and the steps needed to achieve this given the current state of the environment, using template-based language prompts, none of which are accessible to the *Follower*. The *Commander* can additionally search for the location of objects, either by string name, such as “sink”, or by clicking a task-relevant object in the display (Figure [fig:tdlexample]). The *Commander* and *Follower* must use text chat to communicate the parameters of the task and clarify object locations. Only the *Follower* can interact with objects in the environment. We obtained initial states for each parameterized task by randomizing AI2-THOR environments and retaining those that satisfied preconditions such as task-relevant objects being present and reachable. For each session, we store the initial simulator state *S**i*, the sequence of actions *A* = (*a*1, *a*2, …) taken, and the final simulator state *S**f*. *TEACh* *Follower* actions are `Forward`, `Backward`, `Turn Left`, `Turn Right`, `Look Up`, `Look Down`, `Strafe Left`, `Strafe Right`, `Pickup`, `Place`, `Open`, `Close`, `ToggleOn`, `ToggleOff`, `Slice`, and `Pour`. Navigation actions move the agent in discrete steps. Object manipulation expects the agent to specify an object via a relative coordinate (*x*, *y*) on *Follower* egocentric frame. The *TEACh* wrapper on the AI2-THOR simulator examines the ground truth segmentation mask of the agent’s egocentric image, selects an object in a 10x10 pixel patch around the coordinate if the desired action can be performed on it, and executes the action in AI2-THOR. The *Commander* can execute a `Progress Check` and `SearchObject` actions, demonstrated in Figure [fig:tdlexample]. *TEACh* *Commander* actions also allow navigation, but the *Commander* is a disembodied camera. [t] lrrrrrr & & & & & & & & & & & & Water Plant & 1 & 10 & 176 & 6.37 ± 4.36 & 51.86 ± 30.71 & 67.93 ± 40.70 Make Coffee & 1 & 30 & 308 & 7.75 ± 5.08 & 55.25 ± 33.61 & 72.29 ± 50.85 Clean All X & 19 & 52 & 336 & 9.65 ± 7.03 & 74.06 ± 59.66 & 96.92 ± 71.31 Put All X On Y & 209 & 92 & 344 & 8.66 ± 5.82 & 82.13 ± 66.39 & 103.53 ± 80.97 Boil Potato & 1 & 26 & 202 & 10.65 ± 7.61 & 104.66 ± 79.50 & 130.13 ± 94.80 Make Plate of Toast & 1 & 27 & 225 & 12.26 ± 8.51 & 108.30 ± 55.81 & 136.11 ± 70.73 N Slices Of X In Y & 16 & 29 & 304 & 13.50 ± 10.86 & 113.62 ± 94.25 & 146.23 ± 113.96 Put All X In One Y & 84 & 50 & 302 & 11.32 ± 7.03 & 115.74 ± 90.13 & 147.80 ± 104.45 N Cooked X Slices In Y & 10 & 30 & 240 & 14.94 ± 9.43 & 155.18 ± 75.17 & 189.26 ± 87.90 Prepare Sandwich & 5 & 28 & 241 & 18.03 ± 9.96 & 195.93 ± 83.96 & 241.61 ± 100.86 Prepare Salad & 9 & 30 & 323 & 20.47 ± 10.80 & 206.29 ± 111.47 & 253.94 ± 130.09 Prepare Breakfast & 80 & 30 & 308 & 27.67 ± 14.73 & 295.06 ± 138.76 & 359.90 ± 162.33 ***TEACh* Overall & **438 & **109 & **3320 & **13.67** ± 10.81 & **131.80** ± 109.68 & **164.65** ± 130.89******** [tab:pertaskstats] *TEACh* Statistics ------------------ *TEACh* is comprised of 3,047 successful gameplay sessions, each of which can be replayed using the AI2-THOR simulator for model training, feature extraction, or model evaluation. In total, 4,365 crowdsourced sessions were collected with a human-level success rate of 74.17% (3320 sessions) and total cost of $105k; more details in appendix. Some successful sessions were not included in the final split used in benchmarks due to replay issues. *TEACh* sessions span all 30 AI2-THOR kitchens, and include most of the 30 each AI2-THOR living rooms, bedrooms, and bathrooms. Successful *TEACh* sessions consist of over 45k utterances, with an average of 8.40 *Commander* and 5.25 *Follower* utterances per session. The average *Commander* utterance length is 5.70 tokens and the average *Follower* utterance length is 3.80 tokens. The *TEACh* data has a vocabulary size of 3,429 unique tokens.[2](#fn2) Table [tab:pertaskstats] summarizes such metrics across the 12 task types in *TEACh*. Simple tasks like Make Coffee require fewer dialogue acts and *Follower* actions on average than complex, composite tasks like Prepare Breakfast which subsume those simpler tasks. [t] [fig:edhexample] *TEACh*Benchmarks ================= We introduce three benchmarks based on *TEACh* sessions to train and evaluate the ability of embodied AI models to complete household tasks using natural language dialogue. **Execution from Dialogue History** and **Trajectory from Dialogue** require modeling the *Follower*. **Two-Agent Task Completion**, by contrast, requires modeling both the *Commander* and *Follower* agents to complete *TEACh* tasks end-to-end. For each benchmark, we define how we derive benchmark instances from *TEACh* gameplay sessions, and by what metrics we evaluate model performance. Each session has an initial state *S**i*, the sequence of actions *A* = (*a*1, *a*2, …) taken by the *Commander* and *Follower* including dialogue and environment actions, and the final state *S**f*. We denote the subsequence of all dialogue actions as *A**D*, and of all navigation and interaction as *A**I*. Following ALFRED, we create validation and test splits in both seen and unseen environments (Table [tab:splits]) [3](#fn3). Seen splits contain sessions based in AI2-THOR rooms that were seen the training, whereas unseen splits contain only sessions in rooms absent from the training set. [t] llrr Fold & Split & # Sessions & # EDH Instances Train & & 1482 (49%) & 5475 (49%) & Seen & 181 (6%) & 608 (5%) & Unseen & 612 (20%) & 2157 (19%) & Seen & 181 (6%) & 666 (6%) & Unseen & 589 (19%) & 2270 (21%) [tab:splits] Execution from Dialogue History (EDH) ------------------------------------- We segment *TEACh* sessions into EDH instances. We construct EDH instances (*S**E*, *A**H*, *A**R**I*, *F**E*) where *S**E* is the initial state of the EDH instance, *A**H* is an action history, and the agent is tasked with predicting a sequence of actions that changes the environment state to *F**E*, using *A**R**I* reference interaction actions taken in the session as supervision. We constrain instances to have ∣*A**H**D*∣ > 0 and at least one object interaction in *A**R**I*. Each EDH instance is punctuated by a dialogue act starting a new instance or the session end. We append a `Stop` action to each *A**R**I*. An example is included in Figure [fig:edhexample]. [t] lrrrrrrrr & & & & & & Model & & & & & & & & Random & 0.82 [0.62] & 0.75 [0.43] & 1.34 [0.43] & 0.41 [0.07] & 0.6 [0.09] & 0.7 [0.27] & 1.89 [0.94] & 0.72 [0.16] Lang & 0.99 [0.28] & 1.04 [0.29] & 2.36 [0.23] & 0.78 [0.29] & 0.75 [0.27] & 0.8 [0.31] & 2.03 [0.29] & 0.82 [0.14] Vision & 5.1 [1.15] & 6.96 [1.76] & 3.89 [0.61] & 3.56 [0.73] & 5.86 [0.32] & 8.14 [1.09] & 4.23 [0.56] & 4.71 [0.66] E.T. & 5.76 [0.90] & 7.99 [1.65] & 4.96 [0.54] & 4.71 [0.53] & 4.8 [1.07] & 8.08 [1.94] & 5.02 [0.91] & 5.57 [1.09] +H & 7.4 [1.06] & 10.31 [2.02] & 4.31 [0.63] & 4.51 [0.72] & 6.01 [0.27] & 10.33 [1.58] & 5.68 [0.56] & 6.34 [0.75] +A & 10.2 [0.71] & 15.71 [4.07] & 5.56 [0.51] & 5.2 [0.77] & 7.06 [0.53] & 9.57 [1.44] & 5.24 [0.67] & 6.1 [1.01] +S & 8.55 [1.69] & 12.84 [3.41] & 7.83 [0.89] & 9.07 [1.69] & 4.2 [0.30] & 6.37 [1.80] & 3.88 [0.26] & 4.86 [0.84] +H+A & 8.39 [0.82] & 14.92 [3.03] & 6.12 [0.88] & 6.43 [1.12] & 6.91 [0.37] & 10.34 [1.54] & 5.11 [0.67] & 5.84 [1.17] +H+S & 9.38 [1.22] & 15.97 [3.55] & 5.7 [0.50] & 6.38 [0.78] & 6.31 [0.33] & 9.68 [1.71] & 5.29 [0.58] & 6.19 [0.93] & & Rand & 0.00 [0.00] & 0.00 [0.00] & 0.00 [0.00] & 0.00 [0.00] & 0.00 [0.00] & 0.00 [0.00] & 0.00 [0.00] & 0.00 [0.00] E.T. & **1.02 [0.17] & **1.42 [4.82] & **0.48 [0.12] & **0.35 [0.59] & **0.51 [0.23] & **1.60 [6.46] & **0.17 [0.04] & **0.67 [2.50]**************** [tab:edhtfdresults] To evaluate inferred EDH action sequences, we compare the simulator state changes *Ê* at the end of inference with *F**E* using similar evaluation criteria generalized from the ALFRED benchmark. * Success {0, 1}: 1 if all expected state changes *F**E* are present in *Ê*, else 0. We average over all trajectories. * Goal-Condition Success (GC) (0, 1): The fraction of expected state changes in *F**E* present in *Ê*. We average over all trajectories.[4](#fn4) * Trajectory Weighted Metrics: For a reference trajectory *A**R**I* and inferred action sequence *Â**I*, we calculate trajectory length weighted metric for metric value *m* as $$TLW\mhyphen m = \frac{m \* |A\_R^I|}{max(|A\_R^I|, |\hat{A}^I|)}.$$ During inference, the learned *Follower* agent predicts actions until either it predicts the `Stop` action, hits a limit of 1000 steps, or hits a limit of 30 failed actions. Trajectory from Dialogue (TfD) ------------------------------ A *Follower* agent model is tasked with inferring the whole sequence of *Follower* environmental actions taken during the session conditioned on the dialogue history. A TfD instance is (*S**i*, *A**H**D*, *A**R**I*, *S**f*), where *A**H**D* is all dialogue actions taken by both agents, and *A**R**I* is all non-dialogue actions taken by the *Follower*. We append a `Stop` action to *A**R**I*. The agent does not observe dialogue actions in context, however, we use this task to test long horizon action prediction with a block of instructions, analogous to ALFRED or TouchDown . We calculate success and goal-conditioned success by comparing *Ê* against state changes between *S**i* and *S**f*. Two-Agent Task Completion (TATC) -------------------------------- To explore modeling both a *Commander* and *Follower* agent, the TATC benchmark gives as input only environment observations to both agents. The *Commander* model must use the `Progress Check` action to receive task information, then synthesize that information piece by piece to the *Follower* agent via language generation. The *Follower* model can communicate back via language generation. The TATC benchmark represents studying the “whole” set of challenges the *TEACh* dataset provides. We calculate success and goal-conditioned success by comparing *Ê* against state changes between *S**I* and *S**f*. Experiments and Results ======================= We implement initial baseline models and establish the richness of *TEACh* data and difficulty of resulting benchmarks. *Follower* Models for EDH and TfD --------------------------------- We use a single model architecture to train and evaluate on the EDH and TfD benchmark tasks. #### Model. We establish baseline performance for the EDH and TfD tasks using the Episodic Transformer (E.T.) model , designed for the ALFRED benchmark. The original E.T. model trains a transformer language encoder and uses a ResNet-50 backbone to encode visual observations. Two multimodal transformer layers are used to fuse information from the language, image, and action embeddings, followed by a fully connected layer to predict the next action and target object category for interaction actions. E.T. uses a MaskRCNN  model pretrained on ALFRED images to predict a segmentation of the egocentric image for interactive actions, matching the predicted mask to the predicted object category. We convert the centroid of this mask to a relative coordinate specified to the *TEACh*API wrapper for AI2-THOR. We modify E.T. by learning a new action prediction head to match *TEACh* *Follower* actions. Given an EDH or TfD instance, we extract all dialogue utterances from the action history *A**H**D* and concatenate these with a separator between utterances to form the language input. The remaining actions *A**H**I* are fed in order as the past action input with associated image observations. Consequently, our adapted E.T. does not have temporal alignment between dialogue actions and environment actions. Following the mechanism used in the original E.T. paper, we provide image observations from both actions in the history *A**H**I*, and the reference actions *A**R**I*, and task the model to predict the entire sequence of actions. The model parameters are optimized using cross entropy loss between the predicted action sequence and the ground truth action sequence. For EDH, we ablate a history loss (H) as cross entropy over the entire action sequence—actions in both *A**H**I* and *A**R**I*, to compare against loss only against actions in *A**R**I*. Note that in TfD, ∣*A**H**I*∣ = 0. We additionally experiment with initializing the model using weights trained on the ALFRED dataset. Note that since the language vocabulary and action space change, some layers need to be retrained. For EDH, we experiment with initializing the model both with weights from the E.T. model trained only on base ALFRED annotations (A) and the model trained on ALFRED augmented with synthetic instructions (S) (from ). We also perform unimodal ablations of the E.T. model to determine whether the model is simply memorizing sequences from the training data . At inference time, the agent uses dialogue history as language input, and the environment actions in *A**H**I* as past action input along with their associated visual observations. At each time step we execute the predicted action, with predicted object coordinate when applicable, in the simulator. The predicted action and resulting image observation are added to agent’s input for the next timestep. The appendix details model hyperparameters. #### Results. Table [tab:edhtfdresults] summarizes our adapted E.T. model performance on the EDH and TfD benchmarks. We observe that all E.T. model conditions in EDH are significantly better than `Random` and `Lang-Only` condition on all splits on SR and GC, according to a paired two-sided Welch *t*-test with Bonferroni corrections. Compared to the `Vision-Only` baseline, the improvements of the E.T. models are statistically significant on unseen splits, but not on seen splits. Qualitatively, we observe that the `Random` baseline only succeeds on very short EDH instances that only include one object manipulation involving a large target object, for example placing an object on a `countertop`. The same is true of most of the successful trajectories of the `Lang-Only` baseline. The success rate of the `Vision-Only` baseline suggests that the E.T.-based models are not getting much purchase with language signal. Notably, E.T. performs well below its success rates on ALFRED, where it achieves 38.24% on the ALFRED test-seen split and 8.57% on the ALFRED test-unseen split. Additionally, although there appears to be a small benefit from initializing the E.T. model with pretrained weights from ALFRED, these differences are not statistically significant. *TEACh* language is more complex, involving multiple speakers, irrelevant phatic utterances, and dialogue anaphora. E.T. model performance on TfD is poor but non-zero, unlike a `Random` baseline. We do not perform additional ablations for TfD given the low initial performance. Notably, in addition to the complexity of language, TfD instances have substantially longer average trajectory length ( ∼ 130) than those in ALFRED ( ∼ 50). Rule-based Agents for TATC -------------------------- [t] l@rrr Task& Success & & (Shrtnd) & Rate & & Plant & 26.70 & 230.26 ± 54.65 & 67.93 ± 40.70 Coffee & 54.55 & 120.24 ± 66.55 & 72.29 ± 50.85 Clean & 52.98 & 182.38 ± 79.84 & 96.92 ± 71.31 All X Y & 52.91 & 126.82 ± 64.75 & 103.53 ± 80.97 Boil & 0.00 & -& 130.13 ± 94.80 Toast & 0.00 & -& 136.11 ± 70.73 N Slices & 22.51 & 248.77 ± 98.57 & 146.23 ± 113.96 X One Y & 50.98 & 150.09 ± 97.12 & 147.80 ± 104.45 Cooked & 1.67 & 424.25 ± 135.57 & 189.26 ± 87.90 Sndwch & 0.00 & -& 241.61 ± 100.86 Salad & 1.55 & 351.20 ± 82.09 & 253.94 ± 130.09 Bfast & 0.00 & -& 359.90 ± 162.33 **Overall & 24.40 & 161.54 ± 92.00 & 164.65 ± 130.89** [tab:tatc] In benchmarks like ALFRED, a PDDL  planner can be used to determine what actions are necessary to solve relatively simple tasks. In VLN, simple search algorithms yield the shortest paths to goals. Consequently, some language-guided visual task models build a semantic representation of the environment, then learn a hierarchical policy to execute such planner-style goals . Inspired by such planning-based solutions, we attempted to write a pair of rule-based *Commander* and *Follower* agents to tackle the TATC benchmark. In a loop, the rule-based *Commander* executes a `Progress Check` action, then forms a language utterance to the *Follower* consisting of navigation and object interaction actions needed to accomplish the next sub-goal in the response. Each sub-goal needs to be identified by the language template used to describe it, then a hand-crafted policy must be created for the rule-based *Commander* to reference. For example, for the Put All X On Y task, all sub-goals are of the form “X needs to be on some Y” for a particular instance of object X, and so a rule-based policy can be expressed as “navigate to the X instance, pick up the X instance, navigate to Y, put X down on Y.” *Commander* utterances are simplified to sequences of action names with a one-to-one mapping to *Follower* actions to execute, with interaction actions including (*x*, *y*) screen click positions to select objects. The rule-based agents perform *no learning*. Table [tab:tatc] summarizes the success rate of these rule-based agents across task types. Note that for the tasks Boil Potato, Make Plate of Toast, Make Sandwich, and Breakfast, sub-goal policies were not successfully developed. The rule-based agents represent about 150 hours of engineering work to hand-craft subgoal policies. While success rates could certainly be increased by increasing subgoal policy coverage and handling simulation corner cases, it is clear that, unlike ALFRED and navigation-only tasks, a planner-based solution is not reasonable for *TEACh* data and the TATC benchmark. The appendix contains detailed implementation information about the rule-based agents. Conclusions and Future Work =========================== We introduce ***T**ask-driven **E**mbodied **A**gents that **C**hat* (*TEACh*), a dataset of over 3000 situated dialogues in which a human *Commander* and human *Follower* collaborate in natural language to complete household tasks in the AI2-THOR simulation environment. *TEACh* contains dialogue phenomena related to grounding dialogue in objects and actions in the environment, varying levels of instruction granularity, and interleaving of utterances between speakers in the absence of enforced turn taking. We also introduce a task definition language that is extensible to new tasks and even other simulators. We propose three benchmarks based on *TEACh*. To study *Follower* models, we define the Execution from Dialogue History (EDH) and Trajectory from Dialogue (TfD) benchmarks, and evaluate an adapted Episodic Transformer  as an initial baseline. To study the potential of *Commander* and *Follower* models, we define the Two-Agent Task Completion benchmark, and explore the difficulty of defining rule-based agents from *TEACh* data. In future, we will apply other ALFRED modeling approaches  to the EDH and TfD *Follower* model benchmarks. However, *TEACh* requires learning several different tasks, all of which are more complex than the simple tasks in ALFRED. Models enabling few shot generalization to new tasks will be critical for *TEACh* *Follower* agents. For *Commander* models, a starting point would be to train a *Speaker* model  on *TEACh* sessions. We are excited to explore human-in-the-loop evaluation of *Commander* and *Follower* models developed for TATC. Acknowledgements ================ We would like to thank Ron Rezac, Shui Hu, Lucy Hu, Hangjie Shi for their assistance with the data and code release, and Sijia Liu for assistance with data cleaning. We would also like to thank Nicole Chartier, Savanna Stiff, Ana Sanchez, Ben Kelk, Joel Sachar, Govind Thattai, Gaurav Sukhatme, Joel Chengottusseriyil, Tony Bissell, Qiaozi Gao, Kaixiang Lin, Karthik Gopalakrishnan, Alexandros Papangelis, Yang Liu, Mahdi Namazifar, Behnam Hedayatnia, Di Jin, Seokhwan Kim and Nikko Strom for feedback and suggestions over the course of the project. We provide additional statistics about *TEACh* (§[sec:appstats]), a summary of the data collection procedure for *TEACh* sessions (§[sec:appcollection]), additional details about EDH and TfD benchmark experiments (§[sec:appedhtfd]), additional details about the rule-based agents we implemented for the TTAC benchmark (§[sec:apptatc]), an explanation of the task definition language that guides the *TEACh* (§[sec:apptdl]), and representative examples and qualitative analysis of *TEACh* sessions (§[sec:appexamples]). Additional *TEACh* Statistics ============================= During data collection, we aimed to obtain at least 50 sessions per task in the unseen validation and test splits. The exact number finally obtained varies due to the success rate of annotators in completing the tasks, and replicability issues related to non-determinism in the simulator. The final number of sessions per task per split is included in table [tab:gamesper-taskpersplit]. [!ht] 2.5pt lccc & & **train & **val-seen & **val-unseen `Make Coffee` & 145 & 18 & 57 `Water Plant` & 91 & 11 & 63 `Make Plate Of Toast` & 90 & 11 & 52 `Boil Potato` & 86 & 10 & 44 `N Slices Of X In Y` & 164 & 20 & 56 `N Cooked Slices` & 107 & 13 & 53 `Prepare Salad` & 176 & 22 & 53 `Prepare Sandwich` & 111 & 13 & 51 `Clean All X` & 178 & 22 & 59 `Put All X In One Y` & 167 & 20 & 50 `Put All X On Y` & 184 & 23 & 50 `Prepare Breakfast` & 162 & 20 & 53****** [tab:gamesper-taskpersplit] We created unseen splits using the same floorplans as the split used in ALFRED to enable easy sharing of models between *TEACh* and ALFRED. In the future, we also plan to explore generating entirely new floorplans and layouts to expand the test scene distribution with controllable generation methods . ![Human success rate for different tasks during data collection. Note that TEACh benchmarks only contains successful dialogue sessions, so human performance here is more a measure of how complex tasks were for annotators to complete against both coordination and simulator quirks.](human_success.png "fig:") [fig:humansuccess] [!ht]![Object Distribution: Frequency with which objects are interacted with by the Follower across all sessions. Log scale.](objects.png "fig:") [fig:objectdist] The success rate of human annotators on different tasks when gathering data can be seen in Figure [fig:humansuccess]. We find that success rates are much higher for simpler tasks such as `Make Coffee` and `Water Plant` compared to more difficult tasks. The lowest success rates were obtained with the `Prepare Breakfast` task which had the most steps, and consequently the maximum number of possible issues annotators could run into, and the `Boil Potato` task which required some additional reasoning from annotators in order to use a smaller container such as a `Cup` to fill a `Pot` or `Bowl` with water, in which the the `Potato` could then be boiled. Common causes of failure across tasks included difficulties in placement of objects (an artifact of AI2-THOR), objects being in initial positions where they are difficult to see and hence manipulate, connection problems, and timeouts due to one annotator becoming unresponsive. Note that only sessions in which humans were successful are included in *TEACh* benchmarks. We include vocabulary distributions of the 100 most common words in successful sessions in *TEACh*, as well as the 100 most common each of verbs, nouns and adjectives in Figure [fig:vocab] (with POS tagging done using spaCy [5](#fn5)). We also include the distribution of the frequency with with objects of different types are interacted with by the *Follower* (Figure [fig:objectdist]). We can see that some objects get interacted with in many tasks, for example the counter is often used as a space to move things around, and faucets have to be interacted with frequently since many kitchen tasks involve the cleaning of utensils. Overall, since kitchen objects have more affordances, many of our tasks are set in the kitchen. The `Clean All X` task can be done in both the kitchen and the bathroom, but only the `Put All X On Y` and `Put All X In One Y` tasks can be done in the bedroom and living room. Our task definition language is extensible. We have defined `Prepare Study Desk`—analogous to `Prepare Breakfast` in scope and compositionality, for bedrooms and living rooms—together with some simpler tasks like `Turn On/Off All Lights` that better represent different room types. We plan to incorporate these *unseen tasks* into future iterations of the TATC benchmark. [!ht] [fig:vocab] ![Distribution of dialogue lengths in terms of number of utterances per session. Log scale.](num_utterances_per_game.png "fig:") [fig:dialoglengthdist] ![Distribution of utterance lengths in terms of number of tokens across sessions. Log scale.](utterance_lengths.png "fig:") [fig:utterancelengthdist] To analyze language in *TEACh*, we include a distribution of the number of utterances per session in figure [fig:dialoglengthdist]. We observe a significant number of games for a range of utterance lengths up to about 40 utterances per session, and the longest session has 139 utterances. The distribution of utterance lengths can be seen in Figure [fig:utterancelengthdist]. ![Distribution of Follower action trajectory lengths across sessions. Log scale.](game_traj_lens.png "fig:") [fig:gametrajlens] We include a distribution of the number of environment actions taken by the *Follower* in in [fig:gametrajlens]. Our sessions are quite long, often involving several hundred actions per session. Data Cleaning ============= In additional to original utterances entered by annotators in gameplay sessions, we also release cleaned versions of the utterances. Utterances were cleaned to remove spelling errors using SymSpell [6](#fn6) followed by manual checking to avoid spurious changes, as well as expand contractions commonly used in chat (for example expanding “nvm” to “never mind”). We also removed utterances that only referred to aspects of the annotation interface (for example the *Commander* mentioning that they did not understand what object was highlighted). Utterances that involved a mix of references to the interface and task relevant information were modified to retain task relevant information while removing references to the interface. All manual annotation was done by an expert annotator. While we did not used the cleaned utterances for modeling, we believe their availability will enable better generalization of trained models to utterances received in the form of speech, and to new utterances not collected via our interface. Annotator Instructions ====================== Our annotator pool on Mechanical Turk was drawn using a private vendor service that provides high quality work in exchange for considerably higher pay than is normalized in the Mechanical Turk marketplace. Annotators first completed a tutorial version of the task individually. In the tutorial, the annotator could see the tasks to be completed in the way the *Commander* does in the main annotation, but can also act in the environment as the *Follower* does. They are then provided step by step instructions to control the *Follower* to complete two tasks - making coffee and cooking a slice of a potato. Only annotators who successfully completed the tasks in the tutorial were allowed to participate in the main collection. Annotators were primed with the following description of the task: ***2-Player Game:*** *Now when you log into the game, you will be one of two players, the User/Commander or the Robot/Driver.* *The User will have the progress check button but will not have the buttons to pick up / place objects or do other things with them.* *The Robot cannot see the progress check button but has the buttons to pick up / place objects or do other things with them.* *There is a chat box on the bottom right of the screen for the User and Robot to chat with each other. Both players have to work together to complete the task.* ***When you open the game, see if you have a Progress Check button on the top left.*** ***Robot***: * *If you don’t have a Progress Check button, you are the Robot.* * *You need to pretend you are a robot who is completing tasks in this house for your user.* * *Enter in the chat box “What should I do today?” so that your User knows you’re there.* * *Once the User tells you what to do, try to complete the task. You can use the chat to ask them questions, ask them to search for objects, or check whether steps have been completed. When the task is completed, your partner has to hit Finish to take both of you to the survey where you will rate your partner. When you submit the survey, you will get the code to enter in the HIT.* ***User***: * *When you open the game, if you have the Progress Check button, you are the User.* * *You need to pretend that the house in the game is your house and you are telling your Robot to complete tasks for you.* * *Remember that “the robot” is another worker like you pretending to be a robot to be respectful when talking to them.* * *To get started, click on the Progress Check button to see what tasks you have to do. Each HIT will have a slightly different task. Use the chat box to tell your Robot what task to do.* * *The Robot can ask you questions as they do the task. You can search for objects to help them and confirm whether they have successfully completed the task.* * *When the Progress Check button says that the task is done, hit Finish. That will take both you and your partner to the survey where you will rate each other. When you submit the survey, you will get the code to enter in the HIT.* ***Robot/ Driver Do’s and Don’ts*** *When you finish a step, use the chat box to tell the User and ask for the next step. E.g.: “I toasted the bread. What should I do next?”* *When you need an object, ask the User to search for it first. If they cannot find it you will have to search for it yourself. Remember to open drawers and cabinets while searching. For example: User: We need to make toast Robot: Can you help me find the bread? User: The bread is inside the fridge. Robot can directly go to the fridge and get the bread. Robot: I also need a knife. Do you know where that is? User: I don’t know. Can you search for it? Robot should search for knife.* ***User/ Commander Do’s and Don’ts*** *Use the search function in the Progress Checker to help the Driver find objects. If you don’t understand what it says you can say you don’t know. For example: User: We need to make toast Robot: Can you help me find the bread? User: The bread is inside the fridge. Robot: I also need a knife. Do you know where that is? User: I don’t know. Can you search for it?* *If a task has many steps. Don’t tell all of them as once. Wait for your Robot to finish a step before giving them the next step. Good example: User: Today we need to put all the forks in the sink. You can start with the one inside the microwave. Robot: I got the fork from the microwave. Heading to the sink now. Robot: I placed that fork in the sink. Are there more? User: There is another fork on a plate to the left of the stove. Robot: Found it. I will take it to the sink now. Robot: That’s in the sink. What should I do next? User: I think we’re done.* *Bad example (**don’t do this**): User: Today we need to put all the forks in the sink. There is one inside the microwave and another on a plate to the left of the stove.* *Try to help the Robot solve problems. For example:* *User: Today we need to put all the forks in the sink. You can start with the one inside the microwave. Robot: I found the fork but I am not able to place it in the sink. User: Is the water running? Robot: Yes it is User: Try turning it off first Robot: Thanks I tried that but I am still not able to place the fork in the sink User: What else is there in the sink? Robot: There is a plate and a few cups User: Try removing something from the sink first.* After reading priming instructions, workers gathered 2-player sessions where they were randomly assigned one of the roles of *Commander* or *Follower*. ![Interface seen by the crowdworker playing the Commander. Numbered stars are added for the purpose of explanation and correspond to the descriptions in this section. ](commander_components.png "fig:") [fig:commandercomponents] The interface seen by the *Commander* is shown in Figure [fig:commandercomponents]. The components of this interface are: 1. Main panel: The *Commander* is allowed to move around in the environment. The Main Panel shows their egocentric view. 2. Target object panel: If the *Commander* clicks on an instruction or searches for an object, a visual indication of the location of the object is shown here. For example in Figure [fig:commandercomponents], one of the instruction steps is clicked and the target object panel is highlighting a drawer next to the sink. This indicates that a fork is present in that drawer which needs to be used in that instruction step. 3. Top down view: A top down view of the room the *Follower* is in. The blue circle is the *Follower*’s current position, and the translucent blue triangle represents the view cone of the *Follower*. The red circle and cone correspond to the position of the *Commander*. 4. *Follower* view panel: This shows the current egocentric view of the *Follower*. 5. Progress Check Menu: This is shown when the Progress Check button has been clicked. It shows a list of steps to be completed. 6. Chat window to send messages to the *Follower*. 7. The navigation control panel used by the *Commander* to move around. The *Commander* can move through walls and objects but cannot interact with objects. ![Interface seen by the crowdworker playing the Follower. Numbered stars are added for the purpose of explanation and correspond to the descriptions in this section.](driver_components.png "fig:") [fig:drivercomponents] The interface used by the *Follower* is shown in Figure [fig:drivercomponents]. The components of this interface are: 1. Main panel: This shows the egocentric view of the *Follower*. 2. Top down view: A top down view of the room the *Follower* is in. The blue circle is the *Follower*’s current position, and the translucent blue triangle represents the view cone of the *Follower*: the direction in the room the *Follower* is currently facing. 3. Chat window to chat with the *Commander*. 4. Navigation control panel to move the *Follower* and change its orientation. 5. Object interaction control panel to interact with objects. 6. Console log: This shows messages from the simulator to give annotators hints for what is going wrong when an action fails. For example, if the *Follower* tries to place an object but the placement fails, a simulator message might let the annotator know that the receptacle they’re trying to place into is too full (e.g. they may need to clear out the sink before placing a new plate into it). A session is ended by the *Commander* clicking the “Finish” button adjacent to the “Progress Check” button on their interface, then confirming that they would like to end the game. The “Progress Check” display changes to let the *Commander* know when the task is completed, prompting the *Commander* to end the game. *TEACh* contains only successful sessions where the task was completed. Additional EDH and TfD Experiment Details ========================================= ![Distribution of dialog history length across EDH instances. Log scale. Note that EDH double counts many histories, skewing to a much longer, compounded average than full TEACh sessions.](num_utterances_per_edh.png "fig:") [fig:edhdialoghistorylens] ![Distribution of action history lengths across EDH instances.](history_lens_per_edh.png "fig:") [fig:edhactionhistorylens] We include a distribution of EDH dialogue and action history lengths in Figures [fig:edhdialoghistorylens] and [fig:edhactionhistorylens], respectively. While the average action history length for EDH instances is 86.97 actions, a significant number of EDH instances have an action history of over 200 actions. ![Distribution of number of actions to be predicted per instance across EDH instances.](pred_lens_per_edh.png "fig:") [fig:predlensperedh] ![Distribution of number of object interaction actions to be predicted per instance across EDH instances.](obj_pred_lens_per_edh.png "fig:") [fig:objpredlensperedh] We include the distribution of the number of all and object interaction actions to be predicted in Figures [fig:predlensperedh] and [fig:objpredlensperedh], respectively. While on average, a model for EDH needs to predict 19.76 actions of which on average 4.74 actions are object interaction actions, EDH instances may require as many as 324 actions to be predicted, with many instances requiring over 50 actions to be predicted. A significant number of EDH instances also require 10-20 object interactions to be predicted in order to successfully complete the instance. #### Moeling EDH and TfD with E.T. For our human demonstrations, we showed image observations of size 900 x 900, and hence obtained images of the same size during replay for modeling. Images are resized to 224 x 244 for the ResNet-50 backbone of the main E.T. transformer and to 300 x 300 for the MaskRCNN model. We use the pretrained visual encoder based on Faster R-CNN and mask generator based on Mask R-CNN from E.T., where they are trained on 325K frames of expert demonstrations from the ALFRED train fold (which matches our train fold in terms of floorplans and objects visible). Additionally, as in E.T., we do not update the visual encoder or mask generator during model training for any of our tasks. The E.T. visual encoder average-pools ResNet features 4 times and adds a dropout of 0.3 to obtain feature maps of 512 x 7 x 7. These are then fed into 2 convolutional layers of with 256 and 64 filters of size 1 x 1 respectively, and mapped using a fully connected layer to size 768. Additionally, as in E.T., we use transformer encoders each with 2 blocks, 12 self attention heads, hidden size of 768, and dropout of 0.1. We also follow E.T in using the AdamW optimizer with 0.33 weight decay with a learning rate of 1*e* − 4 for the first 10 epochs and 1*e* − 5 for the last 10 epochs. We trained all models for 20 epochs with a batch size of 3, and report results from the final epoch. We reused all hyperparameters except the batch size from the released E.T. model without further tuning, and used the largest batch size that could fit in a single GPU of a `p3.16x` EC2 instance. One change we made was that E.T. samples 30000 instances per epoch from the pool of training examples with replacement. We replace sampling with rotation permutations of our training dataset per epoch, ensuring that every train example is seen exactly once in our dataset. E.T. uses 2 cross entropy losses: one over actions and one over object categories during object interaction actions. We use an equal weight for these two losses. E.T. additionally uses auxiliary losses for overall and subgoal progress based on ALFRED but we do not use these as they are tailored for ALFRED and we do not have equivalent subgoal progress signals in *TEACh* benchmarks. Overall, our episode replay phase to generate image observations for training takes about 6 hours using 50 threads on a single `p3.16x` EC2 instance. The preprocessing phase of E.T. involving extracting image features and vectorizing language inputs takes about 7 hours using all GPUs of a single `p3.16x` EC2 instance. Training a model for 20 epochs takes about 5 hours using a single GPU of a single `p3.16x` EC2 instance. Evaluation of EDH instances takes about 6 hours using 2 GPUs of a single `p3.16x` EC2 instance for seen splits, and about 14 hours using 3 GPUs of a single `p3.16x` EC2 instance for unseen splits, but we did see a considerable amount of variation across runs. TfD evaluation takes about 2 hours using 2 GPUs of a single `p3.16x` EC2 instance for seen splits and 3 hours using 2 GPUs of a single `p3.16x` EC2 instance for unseen splits. We believe it should be possible to improve evaluation runtimes through optimizations to our wrapper over AI2-THOR. We include a breakdown of EDH success rates across tasks in table [tab:pertasksuccess]. Note that the task referenced here is the task for the original gameplay session the EDH instance is created from. Since our task definitions are hierarchical, it is possible for some EDH instances from different tasks to involve the same steps to be predicted. For example, the `Make Coffee` task is a subtask of the `Prepare Breakfast` task, so it is possible for there to be EDH instances from sessions of the `Prepare Breakfast` task where the agent is only required to make coffee—the same as it would do in the `Make Coffee` task. [!ht] 3.5pt lrrrrrrrrr & Task & Rand & Lang & Vision & E.T. & +H & +A & +H+A & +S & +H+S `Make Coffee` & 2.17 & 0.00 & 17.39 & 15.22 & 13.04 & 8.70 & 13.04 & 13.04 & 13.04 `Water Plant` & 0.00 & 0.00 & 0.00 & 10.53 & 10.53 & 15.79 & 10.53 & 21.05 & 5.26 `Make Plate Of Toast` & 0.00 & 2.38 & 0.00 & 7.14 & 4.76 & 4.76 & 2.38 & 7.14 & 4.76 `Boil Potato` & 0.00 & 0.00 & 0.00 & 17.39 & 8.70 & 13.04 & 4.35 & 13.04 & 13.04 `N Slices Of X In Y` & 2.08 & 1.04 & 7.29 & 14.58 & 8.33 & 10.42 & 8.33 & 12.50 & 8.33 `N Cooked Slices Of X In Y` & 1.15 & 4.60 & 4.60 & 8.05 & 4.60 & 6.90 & 5.75 & 10.34 & 8.05 `Clean All X` & 0.00 & 2.00 & 8.00 & 10.00 & 10.00 & 4.00 & 12.00 & 6.00 & 6.00 `Put All X In One Y` & 0.00 & 0.00 & 0.00 & 12.82 & 5.13 & 2.56 & 7.69 & 7.69 & 2.56 `Put All X On Y` & 0.00 & 0.00 & 6.25 & 4.17 & 10.42 & 8.33 & 8.33 & 10.42 & 6.25 `Prepare Salad` & 0.66 & 0.66 & 4.64 & 3.97 & 4.64 & 6.62 & 3.97 & 5.96 & 6.62 `Prepare Sandwich` & 1.02 & 1.02 & 4.08 & 6.12 & 5.10 & 5.10 & 9.18 & 6.12 & 8.16 `Prepare Breakfast` & 0.00 & 0.56 & 4.52 & 4.52 & 2.82 & 4.52 & 4.52 & 3.39 & 4.52 & `Make Coffee` & 2.00 & 0.00 & 4.00 & 8.00 & 9.00 & 7.00 & 8.00 & 7.00 & 10.00 `Water Plant` & 0.00 & 0.00 & 2.30 & 2.30 & 5.75 & 2.30 & 1.15 & 2.30 & 3.45 `Make Plate Of Toast` & 0.00 & 0.00 & 1.17 & 1.17 & 2.34 & 2.34 & 1.75 & 2.92 & 2.92 `Boil Potato` & 0.00 & 0.00 & 0.94 & 6.60 & 6.60 & 1.89 & 4.72 & 3.77 & 7.55 `N Slices Of X In Y` & 0.69 & 0.69 & 4.17 & 9.72 & 4.17 & 9.03 & 6.25 & 9.03 & 6.94 `N Cooked Slices Of X In Y` & 0.00 & 0.53 & 3.19 & 4.26 & 4.26 & 6.38 & 2.13 & 4.79 & 4.26 `Clean All X` & 0.91 & 0.00 & 4.55 & 3.64 & 9.09 & 3.64 & 6.36 & 4.55 & 2.73 `Put All X In One Y` & 1.02 & 0.00 & 2.04 & 6.12 & 1.02 & 4.08 & 3.06 & 3.06 & 3.06 `Put All X On Y` & 0.00 & 2.94 & 0.00 & 0.00 & 5.88 & 4.41 & 4.41 & 0.00 & 4.41 `Prepare Salad` & 0.75 & 0.38 & 1.13 & 6.04 & 2.26 & 4.53 & 1.51 & 3.40 & 3.02 `Prepare Sandwich` & 0.40 & 0.40 & 2.78 & 5.16 & 3.57 & 3.97 & 3.97 & 4.37 & 3.17 `Prepare Breakfast` & 0.27 & 0.82 & 4.37 & 5.19 & 6.28 & 3.28 & 5.19 & 6.56 & 4.37 & `Make Coffee` & 0.00 & 0.00 & 4.17 & 16.67 & 14.58 & 10.42 & 10.42 & 12.50 & 12.50 `Water Plant` & 0.00 & 0.00 & 10.00 & 15.00 & 5.00 & 15.00 & 15.00 & 15.00 & 15.00 `Make Plate Of Toast` & 0.00 & 0.00 & 8.33 & 6.25 & 8.33 & 6.25 & 6.25 & 4.17 & 6.25 `Boil Potato` & 0.00 & 0.00 & 0.00 & 4.17 & 4.17 & 4.17 & 0.00 & 0.00 & 4.17 `N Slices Of X In Y` & 1.43 & 1.43 & 4.29 & 4.29 & 7.14 & 5.71 & 8.57 & 5.71 & 4.29 `N Cooked Slices Of X In Y` & 0.00 & 0.00 & 2.67 & 5.33 & 8.00 & 9.33 & 4.00 & 5.33 & 2.67 `Clean All X` & 4.05 & 0.00 & 6.76 & 8.11 & 8.11 & 6.76 & 16.22 & 6.76 & 13.51 `Put All X In One Y` & 2.00 & 6.00 & 0.00 & 4.00 & 6.00 & 4.00 & 8.00 & 4.00 & 6.00 `Put All X On Y` & 0.00 & 0.00 & 2.13 & 2.13 & 4.26 & 6.38 & 4.26 & 0.00 & 6.38 `Prepare Salad` & 1.38 & 0.00 & 4.83 & 6.90 & 4.83 & 6.90 & 4.14 & 6.90 & 6.21 `Prepare Sandwich` & 0.00 & 0.00 & 1.47 & 0.00 & 2.94 & 2.94 & 2.94 & 1.47 & 2.94 `Prepare Breakfast` & 0.00 & 1.75 & 4.68 & 4.68 & 6.43 & 6.43 & 8.77 & 4.09 & 6.43 & `Make Coffee` & 0.69 & 2.08 & 1.39 & 4.86 & 9.72 & 5.56 & 9.72 & 5.56 & 10.42 `Water Plant` & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 `Make Plate Of Toast` & 0.37 & 0.75 & 4.10 & 5.22 & 3.36 & 4.10 & 5.97 & 4.10 & 5.22 `Boil Potato` & 0.53 & 0.53 & 4.28 & 3.21 & 2.67 & 6.95 & 3.74 & 6.42 & 3.21 `N Slices Of X In Y` & 0.97 & 0.49 & 3.88 & 4.85 & 6.80 & 3.88 & 4.85 & 6.31 & 5.34 `N Cooked Slices Of X In Y` & 1.87 & 1.49 & 1.87 & 7.84 & 3.73 & 6.34 & 4.85 & 8.58 & 4.48 `Clean All X` & 0.51 & 1.54 & 3.08 & 10.26 & 5.64 & 9.74 & 5.13 & 8.21 & 7.18 `Put All X In One Y` & 0.00 & 0.00 & 3.00 & 3.00 & 5.00 & 5.00 & 2.00 & 5.00 & 2.00 `Put All X On Y` & 0.00 & 0.00 & 0.00 & 5.22 & 5.22 & 5.22 & 5.22 & 2.24 & 2.24 `Prepare Salad` & 1.06 & 1.06 & 4.22 & 7.12 & 4.22 & 7.12 & 2.37 & 9.23 & 4.75 `Prepare Sandwich` & 0.00 & 1.01 & 3.36 & 6.04 & 4.36 & 4.36 & 3.02 & 5.03 & 2.35 `Prepare Breakfast` & 0.87 & 0.22 & 2.17 & 4.99 & 4.99 & 4.99 & 5.21 & 4.77 & 3.69 [tab:pertasksuccess] Rule-Based Agents for TATC ========================== As mentioned in section 5.2, we engineered rule-based *Commander* and *Follower* agents as an attempt to solve the Two-Agent Task Completion benchmark. **Rule-based *Follower*** maintains a queue of actions that it needs to execute. Whenever this queue gets empty, it utters *``What should I do next?"*. *Commander* in the next turn detects this utterance and executes a `Progress Check` action to generate a templated language instruction. The templated instruction consists of space-separated low-level actions like *``Forward Forward TurnRight LookUp Pickup Mug at 0.57 0.25"*. This instruction can be split up by the *Follower* into action tokens and each token has a one-to-one correspondence to an action in the action space of *Follower*. For interaction action like *``Pickup Mug at 0.57 0.25"*, *“Mug”* represents the object to be interacted with, and *`0.57 0.25"* represents the normalized coordinates in the egocentric view of *Follower* accepted by the *TEACh* API as a click position. **Rule-based *Commander*** executes a `Progress Check` action whenever it is asked to provide supervision. Listing [progresscheck:plateoftoast] shows the `Progress Check` output at the start of Make Plate of Toast task. `problem_keys` in the `Progress Check` output contains info about all objectives that need to be completed to solve the task. `property_name` of a problem key defines the type of the problem key to be solved. For the 11 tasks we consider, there are 7 problem keys that can be solved: `parentReceptacle`, `isDirty`, `isFilledWithLiquid`, `isFilledWithCoffee`, `isBoiled`, `isCooked` and `Slice`. `DesiredValue` denotes the state that the object should be in. For each of the problem keys, we can engineer a hand-crafted logic to solve it. Consequently, the scope of what rule-based agents can accomplish is limited by the engineering hours needed to identify and hand-write solutions to these problem keys in a modular planning fashion. For `parentReceptacle` problem key, an object needs to be placed on/inside another object. Algorithm [alg:parentReceptacle] shows the hand-crafted logic for `parentReceptacle` problem key. Whenever *Commander* is asked for supervision, it checks the `property_name` of problem key and if it is `parentReceptacle`, it will call the `parentReceptacle()` function. The `current_step` in the function keeps track of the supervision that needs to be provided based on the progress for the current problem key. For `current_step = navigation_1`, it will call a function `Navigate(ObjectID)` which runs a shortest path planner from the current pose of the *Follower* agent to get the low-level navigation instruction to reach `ObjectID`. Similar logic can be written for other problem keys. [!ht] [alg:parentReceptacle] **Input**: step **Output**: instruction, step parentReceptacle(step=``navigation\_1"): [1] ObjectID = get\_object\_id() ParentID = get\_parent\_id() % None if ObjectID not present inside an object instruction = Navigate(ObjectID) step = ``interaction\_1\_1“ instruction = ``ToggleOff ParentObject” instruction += ``Open ParentObject“ step = ``interaction\_1\_2” instruction = ``Pickup ObjectID“ step = ``step\_completed” instruction = ``Pickup ObjectID“ step = ``interaction\_1\_3” instruction = ``Close ParentObject“ step = ``step\_completed” **return** instruction, step [!ht] [frame=none, framesep=3mm, breaklines=true, xleftmargin=21pt, tabsize=1, tabsize=1,breaksymbolleft=]json “taskdesc”: “Make a plate of toast.”, “success”: 0, “subgoals”: [ “representativeobjid”: “Bread|-00.58| 00.27|-01.27”, “stepsuccesses” : [0], “success”: 0, “description”: “Make a slice of toast.”, “steps”: [ “success”: 0, “objectId”: “Bread|-00.58| 00.27|-01.27”, “objectType”: “Bread”, “desc”: “The bread needs to be sliced using a knife.”,, “success”: 0, “objectId”: “Bread|-00.58| 00.27|-01.27”, “objectType”: “Bread”, “desc”: “The bread needs to be toasted.”, ], “problemkeys”: “Bread|-00.58| 00.27|-01.27” : [ “objectType”: “Bread”, “determiner”: “a”, “propertyname”: “objectType”, “desiredpropertyvalue”: “BreadSliced”, “objectType”: “Bread”, “determiner”: “a”, “propertyname”: “isCooked”, “desiredpropertyvalue”: 1 ] , “representativeobjid”: “Plate|-01.18| 00.21|-01.27”, “stepsuccesses”: [1, 0], “success”: 0, “description”: “Clean a Plate.”, “steps”: [ “success”: 0, “objectId”: “Plate|-01.18| 00.21|-01.27”, “objectType”: “Plate”, “desc”: “The Plate is dirty. Rinse with water.” ], “problemkeys”: “Plate|-01.18| 00.21|-01.27”: [ “objectType”: “Plate”, “determiner”: “a”, “propertyname”: “isDirty”, “desiredpropertyvalue”: 0 ] ] [progresscheck:plateoftoast] Task Definition Language ======================== We define a Task Definition Language to define household tasks in terms of object properties that need to be satisfied in the environment for the task to be considered successful. This Task Definition Language is based on a PDDL-like syntax . A sample task can be seen in Listing [task:plateoftoast], which defines the Make a Plate of Toast task in *TEACh*. [!ht] [frame=none, framesep=3mm, breaklines=true, xleftmargin=21pt, tabsize=4, tabsize=2,breaksymbolleft=]json “taskid”: 106, “taskname”: “Plate Of Toast”, “tasknparams”: 0, “taskanchorobject”: “plate”, “desc”: “Make a plate of toast.”, “components”: “toast”: “determiner”: “a”, “taskname”: “Toast”, “taskparams”: [], “plate”: “determiner”: “a”, “taskname”: “Clean X”, “taskparams”: [“Plate”] , “relations”: [ “property”: “parentReceptacles”, “tailentitylist”: [“plate”], “taildeterminerlist”: [“the”], “headentitylist”: [“toast”], “headdeterminerlist”: [“a”], “failuredesc”: “The toast needs to be on a clean plate.” ] [task:plateoftoast] A task is specified in terms of `components` and `relations`. A `component` is specified using a set of conditions to be satisfied for the task to be considered complete. As seen in the above example, a `component` can be specified by referencing another task. In Make a Plate of Toast, the `component` `toast` is described by referencing another task, `Toast` (Listing [task:toast]), and the `component` `plate` is described by referencing another task, `Clean X` (Listing [task:cleanx]), with parameter value `Plate`. In task definitions, `relations` are used to describe relationships between `components` that must be satisfied for the task to be considered complete. In Make a Plate of Toast, the relation specifies that one object satisfying the conditions of `component` `plate` must be the container of (captured by AI2-THOR property `parentReceptacles`) one object satisfying the conditions of `component` `toast`. [!ht] [frame=none, framesep=3mm, breaklines=true, xleftmargin=21pt, tabsize=4, tabsize=2,breaksymbolleft=]json “taskid”: 101, “taskname”: “Toast”, “tasknparams”: 0, “taskanchorobject”: “toast”, “desc”: “Make a slice of toast.”, “components”: “toast”: “determiner”: “a”, “primarycondition”: “objectType”, “instanceshareable”: false, “conditions”: “objectType”: “BreadSliced”, “isCooked”: 1, “conditionfailuredescs”: “objectType”: “The bread needs to be sliced using a knife.”, “isCooked”: “The bread needs to be toasted.” , “knife”: “determiner”: “a”, “primarycondition”: “objectType”, “instanceshareable”: true, “conditions”: “objectType”: “Knife”, “conditionfailuredescs”: , “relations”: [] [task:toast] [!ht] [frame=none, framesep=3mm, breaklines=true, xleftmargin=21pt, tabsize=4, tabsize=2,breaksymbolleft=]json “taskid”: 103, “taskname”: “Clean X”, “tasknparams”: 1, “taskanchorobject”: “#0”, “desc”: “Clean a #0.”, “components”: “#0”: “determiner”: “a”, “primarycondition”: “objectClass”, “instanceshareable”: false, “conditions”: “objectClass”: “#0”, “isDirty”: 0, “conditionfailuredescs”: “isDirty”: “The #0 is dirty. Rinse with water.” , “sink”: “determiner”: “a”, “primarycondition”: “objectType”, “instanceshareable”: true, “conditions”: “objectType”: “Sink”, “receptacle”: 1, “conditionfailuredescs”: , “relations”: [] [task:cleanx] The task Clean X is an example of a parameterized task. The parameter `#0` is set to `Plate` when this task is referenced as a part of the more complex task Plate of Toast (Listing [task:plateoftoast]). In Clean X, the parameter is intended to be a custom object class (such as `Plate` or `Utensil`) which have been pre-defined. Parameters can also be used to specify in determiners or even free text to go into a natural language description, for example “Put all Fork *in* any Sink” versus “Put all cups *on* any Table.” When we check for task completion, parameters can be thought of as macros—we first do a text replacement of the parameter value wherever the parameter occurs in the task definition, and then the task definition is processed as if it does not have parameters. The use of parameters allows easy creation of different variants of a task with low manual effort, thus allowing us to create a more diverse dataset. More formally, a Task is defined by * **`task_id`** - A unique ID for the task * **`task_name`** - A unique name for the task used to reference it in other tasks * **`task_nparams`** - Number of parameters required by the task * **`desc`** - Natural language prompt describing the task to provide to a *Commander*, (be it human or agent model. * **`components`** - Dictionary specifying sets of conditions to be satisfied by objects of different types. This is used to specify both precondition objects required to complete the task such as knife if slicing is required or a sink if cleaning is required, as well as objects that need to be converted to the correct state as part of the task such as `toast` in Listing [task:toast]. A component can also be described using another `Task`, such as Clean X being used to define the target receptacle on which toast should sit in Make a Plate of Toast. * **`relations`** - List of conditions that relate one set of objects to another. Currently the only relationship used in our task definitions is `parentReceptacles` which checks if one object is the container for other objects. However, pairwise operators could also be used to capture other spatial relations or time of completion of components. * **`task_anchor_object`** - This is either the key of a **`component`** or `null`. This is used to identify the specific object in the simulation environment whose properties would be checked when a **`component`** specified by this **`Task`** is used in a **`relation`**. For example, the Make a Plate of Toast Task (Listing [task:plateoftoast]) contains a **`relation`** that says that its `toast` **`component`** should be contained in its `plate` **`component`**. Looking at the task definition for task `Toast` (Listing [task:toast]), we find that its **`task_anchor_object`** is its **`component`** `toast` (and not its **`component`** `knife`) which will resolve to an object of type `BreadSliced`. Looking at the task definition for Clean X (Listing [task:cleanx]), we find that its **`task_anchor_object`** is the **`component`** whose key is set to the value of parameter `#0` which will be resolved to an object of type `#0`. Since Make a Plate of Toast passes the parameter value `Plate` to Clean X, the `plate` **`component`** in Make a Plate of Toast will resolve to an object of type `Plate`. Thus overall the relation should check for an object of type `BreadSliced` (which also satisfies other conditions specified in **`component`** `toast`) to be placed on the object of type `Plate` (which also satisfies other conditions specified in **`component`** `plate`). Note that if the **`task_anchor_object`** for a `Task` is `null`, it cannot be used in **`relations`** in other `Task`s (but can still be a **`component`**). A **`component`** can be of one of two types: * Atomic **`component`** - A **`component`** that is specified in terms of base simulator properties, for example all **`component`**s of `Task`s Toast and Clean X. * `Task` **`component`** - A **`component`** that is specified in terms of another `Task`, for example the components in Makke a Plate of Toast). Atomic **`component`**s are specified using the following keys: * **`conditions`** - Set of `property : desired_value` pairs for this **`component`** to be considered satisfied. For example the conditions for the `toast` **`component`** in Toast look for an object of type `BreadSliced` whose property `isCooked` has been set to 1. * **`condition_failure_descs`** - For properties in conditions that correspond to changes that have to happen by the annotator taking an action, this specifies the description to be provided to the annotator if the property is currently not set to the desired value. For example, in **`component`** `toast` the value for **`condition_failure_descs`** specifies that if there is no sliced bread in the scene, we should send the message “The bread needs to be sliced using a knife” and if there is no toasted bread slice in the scene we should send the message “The bread needs to be toasted”. * **`determiner`** - This is used to specify how many object instances should satisfy this set of conditions. The possible values are `a`, `all` or a positive integer. For example in the `Task Toast`, the `toast` **`component`** has **`determiner`** `a` so we would say this component is satisfied if there is any slice of toasted bread in the scene. Instead if the **`determiner`** was 2, we would only say that the **`component`** is satisfied if there are at least 2 slices of toasted bread in the scene. If it was `all`, we would say that the **`component`** is satisfied if all slices of bread present in the scene are toasted. * **`primary_condition`** - This is used to find candidate objects that an annotator needs to modify to satisfy this **`component`**. It is usually an object type or class. * **`instance_shareable`** - This is a parameter used to handle how numbers cascade across hierarchies. In the Make a Plate of Toast, the **`determiner`** for component `toast` is `a`. Suppose instead that this was 2. By default we would multiply the **`determiner`**s of all **`component`**s of Toast (except `all`) by 2 (treating `a` as 1). So to check if the Toast-2 is satisfied we would check both if there are 2 slices of toast and if there are 2 knives. But the knife has only been specified as a precondition and we do not actually need 2 knives in Toast-2. This exception is captured by the property **`instance_shareable`**. The knife **`component`** has **`instance_shareable`** `= true` so regardless of the **`determiner`** associated with Toast, we would only check for one knife, but the **`component`** `toast` has **`instance_shareable`** `= false` so we would require *n* slices of toast in Toast-2. [!ht] [frame=none, framesep=3mm, breaklines=true, xleftmargin=21pt, tabsize=4, tabsize=2,breaksymbolleft=]json “taskid”: 110, “taskname”: “Put All X On Y”, “tasknparams”: 3, “taskanchorobject”: null, “desc”: “Put all #0 #1 any #2.”, “components”: “#0”: “determiner”: “all”, “primarycondition”: “objectClass”, “instanceshareable”: false, “conditions”: “objectClass”: “#0”, “conditionfailuredescs”:, “#2”: “determiner”: “a”, “primarycondition”: “objectClass”, “instanceshareable”: true, “conditions”: “objectClass”: “#2”, “receptacle”: 1, “conditionfailuredescs”: , “relations”: [ “property”: “parentReceptacles”, “tailentitylist”: [“#2”], “taildeterminerlist”: [“a”], “headentitylist”: [“#0”], “headdeterminerlist”: [“all”], “failuredesc”: “The #0 needs to be put #1to a #2” ] [task:putallxony] [!ht] [frame=none, framesep=3mm, breaklines=true, xleftmargin=21pt, tabsize=4, tabsize=2,breaksymbolleft=]json “taskid”: 111, “taskname”: “Put All X In One Y”, “tasknparams”: 3, “taskanchorobject”: null, “desc”: “Put all #0 #1 one #2.”, “components”: “#0”: “determiner”: “all”, “primarycondition”: “objectClass”, “instanceshareable”: false, “conditions”: “objectClass”: “#0”, “conditionfailuredescs”:, “#2”: “determiner”: “a”, “primarycondition”: “objectClass”, “instanceshareable”: true, “conditions”: “objectClass”: “#2”, “receptacle”: 1, “conditionfailuredescs”: , “relations”: [ “property”: “parentReceptacles”, “tailentitylist”: [“#2”], “taildeterminerlist”: [“the”], “headentitylist”: [“#0”], “headdeterminerlist”: [“all”], “failuredesc”: “The #0 needs to be put #1to a single #2” ] [task:putallxinoney] Task relations are specified by: * **`property`** - The property being checked (currently we only have support for `parentReceptacles`) * **`head_entity_list`**, **`tail_entity_list`** - While exactly which object is the head and which object is the tail is arbitrary and would be decided by implementation used to check a property, we assume that if we examine the property value of the head entities, the tail entities would be specified in them. Currently these are implemented as lists to handle the very specific case where we want to define that multiple objects need to be placed in a single container (e.g.: multiple sandwich components in a plate). The entities are specified using the **`component`** keys and we recursively check **`task_anchor_object`** of `Task` components to find the exact object to be used when checking the relation. * **`head_determiner_list`** - A list of the same length as **`head_entity_list`** where each entry can take values `a`, `all`, or a number and specify how many objects matching conditions specified by the respective **`component`** are involved in this relation. * **`tail_determiner_list`** - A list of the same length as **`tail_entity_list`** where each entry can take values `a` or `the`. To illustrate the difference, compare the `Task`s Put All X On Y (Listing [task:putallxony]) and Put All X In One Y (Listing [task:putallxinoney]). Suppose there are two objects of type `X` (*x*1 and *x*2) and two objects of type `Y` (*y*1 and *y*2) in the scene. If *x*1 is placed in/on *y*1 and *x*2 is placed in/on *y*2, this would satisfy the Task Put All X On Y (because each object of type `X` is on `a` object of type `Y`) but does not satisfy Put All X In One Y (because there is no single object of type `Y` such that *x*1 and *x*2 are in `the` object of type Y) * **`failure_desc`** - The message to be shown to an annotator if some action needs to be taken to make this relation satisfied. **Checking task completion**: The task definition specifies all conditions that need to be satisfied by objects in the scene for a `Task` to be considered satisfied. To check if a `Task` is satisfied, first, for each **`component`**, we check if as many instances, specified by the **`determiner`** of that **`component`** satisfy the conditions specified in **`conditions`** (or in the case of `Task` **`component`**s, we recursively check that the `Task` specified as the **`component`** is satisfied). Next, we take the objects satisfying the conditions of each **`component`** and use them to check **`relation`**s. If there exist objects within this subset that also satisfy all **`relation`**s, the `Task` is considered satisfied. *TEACh* Examples and Qualitative Analysis ========================================= For several example figures below, we provide video session replays in the attached supplementary material. To compress video size and length, we play 1 action, either an utterance or environment action, per second, rather than the “real time” playback. Videos show the *Commander* and *Follower* egocentric view, as well as the object search camera for the *Commander* together with the segementation mask of the searched object. Additionally, each video shows utterance data and progress check response data in JSON format. *TEACh* was collected using an annotator interface that allowed unconstrained chat between annotators. Annotators need to communicate because the task information is only available to the *Commander*—during collection called the *User*—but only the *Follower*—in collection called the *Robot*—can actually take actions. We provide some guidelines for annotators on how to conduct these conversations, detailed in §[sec:appcollection]. Our guidelines encourage annotators to explicitly request for and mention only task-relevant information. However, annotators can and do decide to provide annotations in different levels of detail and relevance. [!ht]![Sample session for the Make Coffee task where the Commander does not explain in much detail how the task is to be completed. The session also includes an example where the Follower needs to ask for help because the Commander initially provided the wrong location for the mug.](example_images/EDH_Table1_3793c8580da1c1a9_8258.png "fig:") [fig:sampledialog1] [!ht]![Another sample session for Make Coffee. In this session, the Commander provides step by step instructions and feedback to the Follower despite the Follower not asking for the next instruction or help.](example_images/Cofee_82ef609bb3149ca8_7ddc.png "fig:") [fig:Coffeeexample2] [!ht]![Sample session for the Clean All X task in a bathroom. While the task could be solved more efficiently by simply turning on the faucet in the bathtub, the Commander and Follower instead choose to clean the cloth in the sink. This session also demonstrates examples of how utterances can get out of order due to the absence of forced turn taking. ](example_images/Clean_All_X__9cf2b24ad4dba768_b33e.png "fig:") [fig:examplecleanallx] [!ht]![Sample session for the Put All X In One Y task. In this session, Commander corrects the Follower to pick up the correct tissue box. Then Commander does not realize that all the tissue boxes need to be placed on the same side table, and hence initially gives the Follower incorrect instructions.](example_images/Put_all_X_in_One_Y_4e32a37852512e84_811a.png "fig:") [fig:exampleputallxinoney] [!ht]![Sample session for the Water Plant task. The Commander initially gives an incorrect instruction requiring the Follower to ask for help and search for a container. The Follower finds another container before getting help from the Commander.](example_images/Water_Plant_a29fe298de2eb81b_9f43.png "fig:") [fig:examplewaterplant] [!ht]![Sample session for the N Slices Of X In Y task. This example demonstrates interleaving chat messages between the Commander and Follower. The Commander uses referring expressions, such as On the table where chair was, to help the Follower locate the target object. The Follower also asks the Commander for help, and gives confirmation, frequently.](example_images/N_Slices_of_X_in_Y_2de9e62dc7b62f3b_bac3.png "fig:") [fig:examplenslicesofxiny] [!ht]![Sample session for the Put All X On Y task in a bedroom. The Commander intends to give step by step instructions but occasionally provides the next step before the Follower has finished the previous step. ](example_images/Put_All_X_On_Y__c0ae74d6938f4c1d_c3c4.png "fig:") [fig:exampleputallxony] [!ht]![Sample session for the Sandwich task. The Follower requires the task of making a sandwich to be broken down into simpler steps but anticipates a few steps, finding the bread and knife before being explicitly asked to.](example_images/Sandwich__6aa775df37538c2a_6508.png "fig:") [fig:examplesandwich] [!ht]![Sample session for the Boil Potato task. The session demonstrates an example where the Commander helps the Follower to solve the issue of a pot not fitting into the sink. ](example_images/Boil_Potato__47cdabd6da727ed1_d685.png "fig:") [fig:exampleboilpotato] [!ht]![Sample session for the Breakfast task where the Follower has to make coffee and a sandwich with lettuce. The Commander provides step by step instructions but occasionally provides the next step, for example slicing bread, before the Follower is done with the previous step, and is sometimes late with help. For example, the Follower finds the knife alone because the Commander does not provide its location. ](example_images/Breakfast__0fb77c7054bc7a0c_c243.png "fig:") [fig:examplebreakfast] [!ht]![Sample session for the Salad task. The Follower anticipates the Commander’s directions, slicing the tomato and lettuce before it is asked, but forgets to plate the salad until directed to do so by the Commander.](example_images/Salad_7f5bf828ea7530fc_dc63.png "fig:") [fig:examplesalad] [!ht]![Sample session for the N Cooked Slices Of X In Y task. The Follower finds a potato before the Commander directs it to one.](example_images/N_Cooked_slices_of_X_In_Y__0395663fca8c7dbc_648d.png "fig:") [fig:examplencookedslicesofy] [!ht]![Sample session for the Plate Of Toast task. This session demonstrates interleaving chat messages, referring expressions and Commander proving feedback to the Follower for sub-tasks.](example_images/Plate_Of_Toast_3469d4785fac2d22_a7a9.png "fig:") [fig:exampleplateoftoast] Consider the example dialogs in Figures [fig:sampledialog1] and [fig:Coffeeexample2]. In Figure [fig:sampledialog1], the *Commander* simply tells the *Follower* to prepare coffee, but in Figure [fig:Coffeeexample2], the *Commander* provides much lower level instructions, and waits for the *Follower* to complete each step. The initial instruction provided in Figure [fig:sampledialog1] (*“can you make me a coffee please?”*) are similar to goal-level instructions and requests typically seen in task-oriented dialog. The instructions in Figure [fig:Coffeeexample2] (*``grab the dirty mug out of the fridge*, *go wash in the sink*) are more similar to the detailed instructions in datasets such as R2R . Every trajectory in ALFRED  is annotated with instructions at both of these levels of granularity. In *TEACh*, by contrast, dialogues may contain either one or in some cases both levels of granularity. Thus, *TEACh* benchmark agents need to be able to effectively map instructions at different levels of granularity to low level actions. Dialogues also contain situations where the *Commander* helps the *Follower* get “unstuck”. For example, in Figure [fig:Coffeeexample2], the *Commander* suggests that the *Follower* needs to clear out the sink in order to place the mug in it. In future work, we could attempt to leverage such utterances to learn more general knowledge of the environment that can be used by a *Follower* to get unstuck, either via student forcing from learned rules or by adding hand-written recovery modules analogous to the simple navigation and interaction recovery modules in ABP  and EmBERT . For example, an agent may use the dialogue in Figure [fig:Coffeeexample2] to infer that if it tries to place an object in a container and fails, it must try to clear out the container. In Figure [fig:Coffeeexample2], the *Follower* did not explicitly ask for help. In contrast, in Figure [fig:sampledialog1], the *Follower* asks for help when it does not find a mug in the places it initially searches, which prompts the *Commander* to correct their instruction. This session also illustrates a difference between *TEACh* where the task is completed by a human annotator based on online instructions from another human annotator, and benchmarks that elicit descriptions for trajectories generated by a planner, such as ALFRED. In Figure [fig:exampleputallxinoney], the *Commander* keeps changing their mind about where the *Follower* should place the tissue boxes, resulting in a less efficient path. A human *Commander* may make mistakes when providing instructions, and a human *Follower* may not perfectly follow instructions. Standard teacher forcing training, including that used in our baseline experiments, does not account for such imperfections in demonstrated trajectories. However, robust agent models for *TEACh* benchmarks will need to learn to identify what information is essential and what is irrelevant or wrong. Dialogues can contain a lot of feedback, for example the *Follower* informing the *Commander* when it has completed a step or the task, and the *Commander* affirming that a step or task has been completed. In the EDH and TfD tasks, an agent will likely need to learn to ignore these feedback steps. However, in future, these self-reported completions could be useful to segment large tasks into pragmatic subgoals. Unlike ALFRED, since our tasks have varying levels of hierarchy, what may constitute pragmatic subgoals for one task may be too much detail for another task. We place no constraints on our chat interface - for example, we do not impose turn taking. Thus, chat messages from the two annotators interleave in interesting ways. For example, consider Figure [fig:examplecleanallx]. The *Follower*’s messages *“What task do I do today?”* and *“I have picked the purple object. What next?”* are preceded by their responses from the *Commander*. An agent performing our EDH or TfD task will need to be able to mentally reorder these messages to successfully complete the task. To facilitate detection of interleaved messages, we provide millisecond level timesteps for each action and utterance in the *TEACh* data, though in figures we represent each action as a “timestep.” The *Follower* can also ask for different kinds of help as they try to complete the task including clarification, for example in Figure [fig:exampleputallxinoney], “which table? the one with the other tissue box?”, asking for the location of an object, as in Figure [fig:sampledialog1], *“Where can I find a mug?”*, and help if it is unable to perform an action requested by the *Commander*, as in Figure [fig:sampledialog1], *“I can’t seem to see a mug”*. A good *Follower* model should be able to execute actions based on dialogue history, while also being able to interact with the *Commander* in natural language - clarifying ambiguous instructions, obtaining additional information as needed, learning to solve problems, and providing feedback as it completes tasks. To accomplish these needs, a model may have to identify different dialog acts, translate dialog history to actions (EDH), detect situations where additional information is needed, and generate appropriate dialog responses in these situations. Jointly learning a *Commander* and *Follower* model may begin to enable these strategies. --- 1. https://github.com/alexa/teach[↩](#fnref1) 2. Using the spaCy tokenizer: <https://pypi.org/project/spacy/>[↩](#fnref2) 3. An earlier version of this dataset had a larger number of EDH instances. The current released split has filtered EDH instances so that only state changes that directly result in task progress are considered.[↩](#fnref3) 4. We follow ALFRED in using a macro-, rather than micro-average for Goal-Conditioned Success Rate.[↩](#fnref4) 5. <https://pypi.org/project/spacy/>[↩](#fnref5) 6. https://pypi.org/project/symspellpy/[↩](#fnref6)
arxiv_0000048
6614) – (-2.859999947028949,2.6303203984981516); (-2.859999947028949,2.6303203984981516) – (-2.849999947824581,2.6270104970150245); (-2.849999947824581,2.6270104970150245) – (-2.8399999486202128,2.6232331667569087); (-2.8399999486202128,2.6232331667569087) – (-2.8299999494158445,2.6189978490807517); (-2.8299999494158445,2.6189978490807517) – (-2.8199999502114763,2.61431513007335); (-2.8199999502114763,2.61431513007335) – (-2.809999951007108,2.609196714091645); (-2.809999951007108,2.609196714091645) – (-2.79999995180274,2.603655394507938); (-2.79999995180274,2.603655394507938) – (-2.7899999525983716,2.5977050217331206); (-2.7899999525983716,2.5977050217331206) – (-2.7799999533940034,2.5913604685978693); (-2.7799999533940034,2.5913604685978693) – (-2.769999954189635,2.5846375931783196); (-2.769999954189635,2.5846375931783196) – (-2.759999954985267,2.5775531991591425); (-2.759999954985267,2.5775531991591425) – (-2.7499999557808987,2.5701249938330863); (-2.7499999557808987,2.5701249938330863) – (-2.7399999565765305,2.5623715438419787); (-2.7399999565765305,2.5623715438419787) – (-2.7299999573721623,2.554312228769795); (-2.7299999573721623,2.554312228769795) – (-2.719999958167794,2.545967192703801); (-2.719999958167794,2.545967192703801) – (-2.709999958963426,2.5373572938848326); (-2.709999958963426,2.5373572938848326) – (-2.6999999597590576,2.528504052572565); (-2.6999999597590576,2.528504052572565) – (-2.6899999605546894,2.519429597256079); (-2.6899999605546894,2.519429597256079) – (-2.679999961350321,2.5101566093441723); (-2.679999961350321,2.5101566093441723) – (-2.669999962145953,2.500708266473659); (-2.669999962145953,2.500708266473659) – (-2.6599999629415847,2.4911081845773566); (-2.6599999629415847,2.4911081845773566) – (-2.6499999637372165,2.4813803588565615); (-2.6499999637372165,2.4813803588565615) – (-2.6399999645328482,2.4715491038055544); (-2.6399999645328482,2.4715491038055544) – (-2.62999996532848,2.461638992438034); (-2.62999996532848,2.461638992438034) – (-2.619999966124112,2.4516747948673934); (-2.619999966124112,2.4516747948673934) – (-2.6099999669197436,2.4416814163943426); (-2.6099999669197436,2.4416814163943426) – (-2.5999999677153753,2.4316838352566386); (-2.5999999677153753,2.4316838352566386) – (-2.589999968511007,2.4217070401965115); (-2.589999968511007,2.4217070401965115) – (-2.579999969306639,2.41177596800183); (-2.579999969306639,2.41177596800183) – (-2.5699999701022707,2.4019154411771284); (-2.5699999701022707,2.4019154411771284) – (-2.5599999708979024,2.3921501059002845); (-2.5599999708979024,2.3921501059002845) – (-2.549999971693534,2.38250437041992); (-2.549999971693534,2.38250437041992) – (-2.539999972489166,2.373002344047497); (-2.539999972489166,2.373002344047497) – (-2.5299999732847978,2.3636677768966092); (-2.5299999732847978,2.3636677768966092) – (-2.5199999740804295,2.354524000520073); (-2.5199999740804295,2.354524000520073) – (-2.5099999748760613,2.3455938695932073); (-2.5099999748760613,2.3455938695932073) – (-2.499999975671693,2.3368997047890554); (-2.499999975671693,2.3368997047890554) – (-2.489999976467325,2.328463236988337); (-2.489999976467325,2.328463236988337) – (-2.4799999772629566,2.320305552963572); (-2.4799999772629566,2.320305552963572) – (-2.4699999780585884,2.312447042673139); (-2.4699999780585884,2.312447042673139) – (-2.45999997885422,2.3049073482970055); (-2.45999997885422,2.3049073482970055) – (-2.449999979649852,2.2977053151415165); (-2.449999979649852,2.2977053151415165) – (-2.4399999804454837,2.2908589445359513); (-2.4399999804454837,2.2908589445359513) – (-2.4299999812411155,2.28438534883858); (-2.4299999812411155,2.28438534883858) – (-2.4199999820367473,2.2783007086646885); (-2.4199999820367473,2.2783007086646885) – (-2.409999982832379,2.272620232443472); (-2.409999982832379,2.272620232443472) – (-2.399999983628011,2.267358118404891); (-2.399999983628011,2.267358118404891) – (-2.3899999844236426,2.2625275190914924); (-2.3899999844236426,2.2625275190914924) – (-2.3799999852192744,2.258140508483912); (-2.3799999852192744,2.258140508483912) – (-2.369999986014906,2.254208051822213); (-2.369999986014906,2.254208051822213) – (-2.359999986810538,2.250739978198502); (-2.359999986810538,2.250739978198502) – (-2.3499999876061697,2.2477449559893214); (-2.3499999876061697,2.2477449559893214) – (-2.3399999884018015,2.245230471189226); (-2.3399999884018015,2.245230471189226) – (-2.3299999891974332,2.243202808699698); (-2.3299999891974332,2.243202808699698) – (-2.319999989993065,2.241667036620168); (-2.319999989993065,2.241667036620168) – (-2.3099999907886968,2.2406269935804097); (-2.3099999907886968,2.2406269935804097) – (-2.2999999915843286,2.2400852791459656); (-2.2999999915843286,2.2400852791459656) – (-2.2899999923799603,2.2400432473205862); (-2.2899999923799603,2.2400432473205862) – (-2.279999993175592,2.2405010031619286); (-2.279999993175592,2.2405010031619286) – (-2.269999993971224,2.2414574025189666); (-2.269999993971224,2.2414574025189666) – (-2.2599999947668556,2.2429100548917704); (-2.2599999947668556,2.2429100548917704) – (-2.2499999955624874,2.244855329406513); (-2.2499999955624874,2.244855329406513) – (-2.239999996358119,2.2472883638907604); (-2.239999996358119,2.2472883638907604) – (-2.229999997153751,2.250203077026369); (-2.229999997153751,2.250203077026369) – (-2.2199999979493827,2.2535921835496118); (-2.2199999979493827,2.2535921835496118) – (-2.2099999987450145,2.2574472124605416); (-2.2099999987450145,2.2574472124605416) – (-2.1999999995406463,2.261758528196074); (-2.1999999995406463,2.261758528196074) – (-2.190000000336278,2.2665153547138743); (-2.190000000336278,2.2665153547138743) – (-2.18000000113191,2.271705802426847); (-2.18000000113191,2.271705802426847) – (-2.1700000019275416,2.277316897920907); (-2.1700000019275416,2.277316897920907) – (-2.1600000027231734,2.2833346163817514); (-2.1600000027231734,2.2833346163817514) – (-2.150000003518805,2.289743916649589); (-2.150000003518805,2.289743916649589) – (-2.140000004314437,2.2965287788141984); (-2.140000004314437,2.2965287788141984) – (-2.1300000051100687,2.3036722442563526); (-2.1300000051100687,2.3036722442563526) – (-2.1200000059057005,2.311156458035533); (-2.1200000059057005,2.311156458035533) – (-2.1100000067013323,2.31896271351797); (-2.1100000067013323,2.31896271351797) – (-2.100000007496964,2.3270714991334813); (-2.100000007496964,2.3270714991334813) – (-2.090000008292596,2.335462547144227); (-2.090000008292596,2.335462547144227) – (-2.0800000090882276,2.3441148843034894); (-2.0800000090882276,2.3441148843034894) – (-2.0700000098838593,2.3530068842778613); (-2.0700000098838593,2.3530068842778613) – (-2.060000010679491,2.3621163217018073); (-2.060000010679491,2.3621163217018073) – (-2.050000011475123,2.3714204277294924); (-2.050000011475123,2.3714204277294924) – (-2.0400000122707547,2.3808959469450324); (-2.0400000122707547,2.3808959469450324) – (-2.0300000130663864,2.390519195488911); (-2.0300000130663864,2.390519195488911) – (-2.020000013862018,2.400266120255287); (-2.020000013862018,2.400266120255287) – (-2.01000001465765,2.4101123590122198); (-2.01000001465765,2.4101123590122198) – (-2.0000000154532818,2.4200333012945547); (-2.0000000154532818,2.4200333012945547) – (-1.9900000162489135,2.4300041499172575); (-1.9900000162489135,2.4300041499172575) – (-1.9800000170445453,2.4399999829554546); (-1.9800000170445453,2.4399999829554546) – (-1.970000017840177,2.449995816036254); (-1.970000017840177,2.449995816036254) – (-1.9600000186358089,2.459966664786658); (-1.9600000186358089,2.459966664786658) – (-1.9500000194314406,2.469887607281473); (-1.9500000194314406,2.469887607281473) – (-1.9400000202270724,2.4797338463351344); (-1.9400000202270724,2.4797338463351344) – (-1.9300000210227042,2.489480771481745); (-1.9300000210227042,2.489480771481745) – (-1.920000021818336,2.499104020488415); (-1.920000021818336,2.499104020488415) – (-1.9100000226139677,2.5085795402481454); (-1.9100000226139677,2.5085795402481454) – (-1.9000000234095995,2.5178836469000605); (-1.9000000234095995,2.5178836469000605) – (-1.8900000242052313,2.526993085026715); (-1.8900000242052313,2.526993085026715) – (-1.880000025000863,2.535885085780518); (-1.880000025000863,2.535885085780518) – (-1.8700000257964948,2.5445374237939866); (-1.8700000257964948,2.5445374237939866) – (-1.8600000265921266,2.552928472731577); (-1.8600000265921266,2.552928472731577) – (-1.8500000273877584,2.561037259344256); (-1.8500000273877584,2.561037259344256) – (-1.8400000281833901,2.568843515891691); (-1.8400000281833901,2.568843515891691) – (-1.830000028979022,2.5763277308010375); (-1.830000028979022,2.5763277308010375) – (-1.8200000297746537,2.583471197435702); (-1.8200000297746537,2.583471197435702) – (-1.8100000305702855,2.5902560608521834); (-1.8100000305702855,2.5902560608521834) – (-1.8000000313659172,2.5966653624281277); (-1.8000000313659172,2.5966653624281277) – (-1.790000032161549,2.6026830822500435); (-1.790000032161549,2.6026830822500435) – (-1.7800000329571808,2.6082941791547363); (-1.7800000329571808,2.6082941791547363) – (-1.7700000337528126,2.6134846283243784); (-1.7700000337528126,2.6134846283243784) – (-1.7600000345484443,2.618241456341244); (-1.7600000345484443,2.618241456341244) – (-1.750000035344076,2.62255277361449); (-1.750000035344076,2.62255277361449) – (-1.7400000361397079,2.626407804097939); (-1.7400000361397079,2.626407804097939) – (-1.7300000369353397,2.6297969122245752); (-1.7300000369353397,2.6297969122245752) – (-1.7200000377309714,2.6327116269904445); (-1.7200000377309714,2.6327116269904445) – (-1.7100000385266032,2.635144663127744); (-1.7100000385266032,2.635144663127744) – (-1.700000039322235,2.6370899393142); (-1.700000039322235,2.6370899393142) – (-1.6900000401178668,2.6385425933731996); (-1.6900000401178668,2.6385425933731996) – (-1.6800000409134985,2.6394989944267); (-1.6800000409134985,2.6394989944267) – (-1.6700000417091303,2.6399567519705327); (-1.6700000417091303,2.6399567519705327) – (-1.660000042504762,2.639914721849415); (-1.660000042504762,2.639914721849415) – (-1.6500000433003938,2.6393730091167447); (-1.6500000433003938,2.6393730091167447) – (-1.6400000440960256,2.638332967772019); (-1.6400000440960256,2.638332967772019) – (-1.6300000448916574,2.636797197376543); (-1.6300000448916574,2.636797197376543) – (-1.6200000456872892,2.634769536555882); (-1.6200000456872892,2.634769536555882) – (-1.610000046482921,2.632255053405294); (-1.610000046482921,2.632255053405294) – (-1.6000000472785527,2.6292600328221405); (-1.6000000472785527,2.6292600328221405) – (-1.5900000480741845,2.62579196079691); (-1.5900000480741845,2.62579196079691) – (-1.5800000488698163,2.62185950570215); (-1.5800000488698163,2.62185950570215) – (-1.570000049665448,2.617472496626051); (-1.570000049665448,2.617472496626051) – (-1.5600000504610798,2.6126418988048488); (-1.5600000504610798,2.6126418988048488) – (-1.5500000512567116,2.607379786215448); (-1.5500000512567116,2.607379786215448) – (-1.5400000520523434,2.6016993113967746); (-1.5400000520523434,2.6016993113967746) – (-1.5300000528479751,2.595614672575283); (-1.5300000528479751,2.595614672575283) – (-1.520000053643607,2.5891410781767874); (-1.520000053643607,2.5891410781767874) – (-1.5100000544392387,2.5822947088133286); (-1.5100000544392387,2.5822947088133286) – (-1.5000000552348705,2.5750926768400713); (-1.5000000552348705,2.5750926768400713) – (-1.4900000560305022,2.5675529835833397); (-1.4900000560305022,2.5675529835833397) – (-1.480000056826134,2.559694474346681); (-1.480000056826134,2.559694474346681) – (-1.4700000576217658,2.551536791307429); (-1.4700000576217658,2.551536791307429) – (-1.4600000584173976,2.5431003244214994); (-1.4600000584173976,2.5431003244214994) – (-1.4500000592130293,2.534406160459125); (-1.4500000592130293,2.534406160459125) – (-1.440000060008661,2.525476030298921); (-1.440000060008661,2.525476030298921) – (-1.4300000608042929,2.5163322546120153); (-1.4300000608042929,2.5163322546120153) – (-1.4200000615999246,2.506997688072003); (-1.4200000615999246,2.506997688072003) – (-1.4100000623955564,2.4974956622301736); (-1.4100000623955564,2.4974956622301736) – (-1.4000000631911882,2.4878499271987944); (-1.4000000631911882,2.4878499271987944) – (-1.39000006398682,2.4780845922882055); (-1.39000006398682,2.4780845922882055) – (-1.3800000647824517,2.468224065746113); (-1.3800000647824517,2.468224065746113) – (-1.3700000655780835,2.458292993749688); (-1.3700000655780835,2.458292993749688) – (-1.3600000663737153,2.44831619880297); (-1.3600000663737153,2.44831619880297) – (-1.350000067169347,2.4383186176935436); (-1.350000067169347,2.4383186176935436) – (-1.3400000679649788,2.428325239163568); (-1.3400000679649788,2.428325239163568) – (-1.3300000687606106,2.418361041450943); (-1.3300000687606106,2.418361041450943) – (-1.3200000695562424,2.4084509298567345); (-1.3200000695562424,2.4084509298567345) – (-1.3100000703518742,2.3986196744949004); (-1.3100000703518742,2.3986196744949004) – (-1.300000071147506,2.388891848379917); (-1.300000071147506,2.388891848379917) – (-1.2900000719431377,2.37929176600705); (-1.2900000719431377,2.37929176600705) – (-1.2800000727387695,2.369843422578788); (-1.2800000727387695,2.369843422578788) – (-1.2700000735344013,2.3605704340293414); (-1.2700000735344013,2.3605704340293414) – (-1.260000074330033,2.3514959779971183); (-1.260000074330033,2.3514959779971183) – (-1.2500000751256648,2.342642735892705); (-1.2500000751256648,2.342642735892705) – (-1.2400000759212966,2.334032836207163); (-1.2400000759212966,2.334032836207163) – (-1.2300000767169283,2.325687799202333); (-1.2300000767169283,2.325687799202333) – (-1.2200000775125601,2.3176284831213976); (-1.2200000775125601,2.3176284831213976) – (-1.210000078308192,2.3098750320541437); (-1.210000078308192,2.3098750320541437) – (-1.2000000791038237,2.302446825587237); (-1.2000000791038237,2.302446825587237) – (-1.1900000798994554,2.2953624303653557); (-1.1900000798994554,2.2953624303653557) – (-1.1800000806950872,2.2886395536842556); (-1.1800000806950872,2.2886395536842556) – (-1.170000081490719,2.28229499923176); (-1.170000081490719,2.28229499923176) – (-1.1600000822863508,2.2763446250872965); (-1.1600000822863508,2.2763446250872965) – (-1.1500000830819825,2.270803304084966); (-1.1500000830819825,2.270803304084966) – (-1.1400000838776143,2.265684886639206); (-1.1400000838776143,2.265684886639206) – (-1.130000084673246,2.261002166125975); (-1.130000084673246,2.261002166125975) – (-1.1200000854688779,2.256766846905981); (-1.1200000854688779,2.256766846905981) – (-1.1100000862645096,2.252989515069878); (-1.1100000862645096,2.252989515069878) – (-1.1000000870601414,2.249679611978558); (-1.1000000870601414,2.249679611978558) – (-1.0900000878557732,2.2468454106646685); (-1.0900000878557732,2.2468454106646685) – (-1.080000088651405,2.2444939951543437); (-1.080000088651405,2.2444939951543437) – (-1.0700000894470367,2.242631242760831); (-1.0700000894470367,2.242631242760831) – (-1.0600000902426685,2.2412618093942704); (-1.0600000902426685,2.2412618093942704) – (-1.0500000910383003,2.2403891179243476); (-1.0500000910383003,2.2403891179243476) – (-1.040000091833932,2.2400153496249007); (-1.040000091833932,2.2400153496249007) – (-1.0300000926295638,2.2401414387218748); (-1.0300000926295638,2.2401414387218748) – (-1.0200000934251956,2.240767070058244); (-1.0200000934251956,2.240767070058244) – (-1.0100000942208274,2.241890679881738); (-1.0100000942208274,2.241890679881738) – (-1.0000000950164591,2.243509459753411); (-1.0000000950164591,2.243509459753411) – (-0.9900000958120909,2.24561936356727); (-0.9900000958120909,2.24561936356727) – (-0.9800000966077227,2.248215117663437); (-4.97999977835497,0.20604615969150264) – (-4.97999977835497,0.20604615969150264); (-4.97999977835497,0.20604615969150264) – (-4.969999779150602,0.1986149916191795); (-4.969999779150602,0.1986149916191795) – (-4.9599997799462345,0.19152747415055837); (-4.9599997799462345,0.19152747415055837) – (-4.949999780741867,0.18480132238538466); (-4.949999780741867,0.18480132238538466) – (-4.939999781537499,0.17845334819748457); (-4.939999781537499,0.17845334819748457) – (-4.929999782333131,0.17249941821384251); (-4.929999782333131,0.17249941821384251) – (-4.919999783128763,0.1669544141563032); (-4.919999783128763,0.1669544141563032) – (-4.909999783924396,0.16183219564502327); (-4.909999783924396,0.16183219564502327) – (-4.899999784720028,0.15714556555664416); (-4.899999784720028,0.15714556555664416) – (-4.88999978551566,0.15290623802377346); (-4.88999978551566,0.15290623802377346) – (-4.879999786311292,0.14912480915575846); (-4.879999786311292,0.14912480915575846) – (-4.8699997871069245,0.14581073055393545); (-4.8699997871069245,0.14581073055393545) – (-4.859999787902557,0.14297228568755238); (-4.859999787902557,0.14297228568755238) – (-4.849999788698189,0.14061656918941293); (-4.849999788698189,0.14061656918941293) – (-4.839999789493821,0.1387494691229923); (-4.839999789493821,0.1387494691229923) – (-4.829999790289453,0.13737565226534718); (-4.829999790289453,0.13737565226534718) – (-4.819999791085086,0.13649855244260536); (-4.819999791085086,0.13649855244260536) – (-4.809999791880718,0.1361203619471902); (-4.809999791880718,0.1361203619471902) – (-4.79999979267635,0.13624202605823202); (-4.79999979267635,0.13624202605823202) – (-4.789999793471982,0.13686324067886305); (-4.789999793471982,0.13686324067886305) – (-4.7799997942676145,0.13798245309630108); (-4.7799997942676145,0.13798245309630108) – (-4.769999795063247,0.13959686586282205); (-4.769999795063247,0.13959686586282205) – (-4.759999795858879,0.14170244378792166); (-4.759999795858879,0.14170244378792166) – (-4.749999796654511,0.14429392402418814); (-4.749999796654511,0.14429392402418814) – (-4.739999797450143,0.14736482922167818); (-4.739999797450143,0.14736482922167818) – (-4.7299997982457755,0.1509074837179157); (-4.7299997982457755,0.1509074837179157) – (-4.719999799041408,0.15491303272304768); (-4.719999799041408,0.15491303272304768) – (-4.70999979983704,0.15937146445220363); (-4.70999979983704,0.15937146445220363) – (-4.699999800632672,0.16427163514973994); (-4.699999800632672,0.16427163514973994) – (-4.689999801428304,0.16960129694282092); (-4.689999801428304,0.16960129694282092) – (-4.679999802223937,0.17534712845471737); (-4.679999802223937,0.17534712845471737) – (-4.669999803019569,0.18149476810130563); (-4.669999803019569,0.18149476810130563) – (-4.659999803815201,0.18802884998754268); (-4.659999803815201,0.18802884998754268) – (-4.649999804610833,0.1949330423141952); (-4.649999804610833,0.1949330423141952) – (-4.6399998054064655,0.20219008819882592); (-4.6399998054064655,0.20219008819882592) – (-4.629999806202098,0.2097818488090055); (-4.629999806202098,0.2097818488090055) – (-4.61999980699773,0.2176893486999398); (-4.61999980699773,0.2176893486999398) – (-4.609999807793362,0.22589282324319226); (-4.609999807793362,0.22589282324319226) – (-4.599999808588994,0.2343717680279538); (-4.599999808588994,0.2343717680279538) – (-4.589999809384627,0.24310499011138326); (-4.589999809384627,0.24310499011138326) – (-4.579999810180259,0.252070660989919); (-4.579999810180259,0.252070660989919) – (-4.569999810975891,0.261246371159161); (-4.569999810975891,0.261246371159161) – (-4.559999811771523,0.2706091861259523); (-4.559999811771523,0.2706091861259523) – (-4.5499998125671555,0.28013570373265895); (-4.5499998125671555,0.28013570373265895) – (-4.539999813362788,0.28980211265036676); (-4.539999813362788,0.28980211265036676) – (-4.52999981415842,0.29958425189479376); (-4.52999981415842,0.29958425189479376) – (-4.519999814954052,0.30945767121615847); (-4.519999814954052,0.30945767121615847) – (-4.509999815749684,0.319397692212061); (-4.509999815749684,0.319397692212061) – (-4.4999998165453166,0.32937947001062706); (-4.4999998165453166,0.32937947001062706) – (-4.489999817340949,0.3393780553697389); (-4.489999817340949,0.3393780553697389) – (-4.479999818136581,0.3493684570371373); (-4.479999818136581,0.3493684570371373) – (-4.469999818932213,0.35932570421552706); (-4.469999818932213,0.35932570421552706) – (-4.459999819727845,0.3692249089765558); (-4.459999819727845,0.3692249089765558) – (-4.449999820523478,0.3790413284676635); (-4.449999820523478,0.3790413284676635) – (-4.43999982131911,0.3887504267563184); (-4.43999982131911,0.3887504267563184) – (-4.429999822114742,0.39832793615706025); (-4.429999822114742,0.39832793615706025) – (-4.419999822910374,0.40774991788806625); (-4.419999822910374,0.40774991788806625) – (-4.4099998237060065,0.4169928219056287); (-4.4099998237060065,0.4169928219056287) – (-4.399999824501639,0.4260335457669906); (-4.399999824501639,0.4260335457669906) – (-4.389999825297271,0.43484949237441145); (-4.389999825297271,0.43484949237441145) – (-4.379999826092903,0.44341862645613467); (-4.379999826092903,0.44341862645613467) – (-4.369999826888535,0.4517195296430822); (-4.369999826888535,0.4517195296430822) – (-4.359999827684168,0.45973145400361437); (-4.359999827684168,0.45973145400361437) – (-4.3499998284798,0.46743437390254533); (-4.3499998284798,0.46743437390254533) – (-4.339999829275432,0.47480903605479396); (-4.339999829275432,0.47480903605479396) – (-4.329999830071064,0.4818370076485619); (-4.329999830071064,0.4818370076485619) – (-4.3199998308666965,0.48850072241775633); (-4.3199998308666965,0.48850072241775633) – (-4.309999831662329,0.4947835245484997); (-4.309999831662329,0.4947835245484997) – (-4.299999832457961,0.5006697103099835); (-4.299999832457961,0.5006697103099835) – (-4.289999833253593,0.5061445673056111); (-4.289999833253593,0.5061445673056111) – (-4.279999834049225,0.5111944112463207); (-4.279999834049225,0.5111944112463207) – (-4.269999834844858,0.5158066201541767); (-4.269999834844858,0.5158066201541767) – (-4.25999983564049,0.5199696659107353); (-4.25999983564049,0.5199696659107353) – (-4.249999836436122,0.5236731430713331); (-4.249999836436122,0.5236731430713331) – (-4.239999837231754,0.5269077948732745); (-4.239999837231754,0.5269077948732745) – (-4.229999838027386,0.5296655363729155); (-4.229999838027386,0.5296655363729155) – (-4.219999838823019,0.5319394746538072); (-4.219999838823019,0.5319394746538072) – (-4.209999839618651,0.5337239260553961); (-4.209999839618651,0.5337239260553961) – (-4.199999840414283,0.5350144303792118); (-4.199999840414283,0.5350144303792118) – (-4.189999841209915,0.5358077620370398); (-4.189999841209915,0.5358077620370398) – (-4.1799998420055475,0.5361019381132103); (-4.1799998420055475,0.5361019381132103) – (-4.16999984280118,0.5358962233208534); (-4.16999984280118,0.5358962233208534) – (-4.159999843596812,0.5351911318397343); (-4.159999843596812,0.5351911318397343) – (-4.149999844392444,0.5339884260310708); (-4.149999844392444,0.5339884260310708) – (-4.139999845188076,0.5322911120325491); (-4.139999845188076,0.5322911120325491) – (-4.129999845983709,0.5301034322445467); (-4.129999845983709,0.5301034322445467) – (-4.119999846779341,0.527430854726342); (-4.119999846779341,0.527430854726342) – (-4.109999847574973,0.5242800595288163); (-4.109999847574973,0.5242800595288163) – (-4.099999848370605,0.5206589219978079); (-4.099999848370605,0.5206589219978079) – (-4.0899998491662375,0.5165764930898528); (-4.0899998491662375,0.5165764930898528) – (-4.07999984996187,0.5120429767495093); (-4.07999984996187,0.5120429767495093) – (-4.069999850757502,0.5070697044048159); (-4.069999850757502,0.5070697044048159) – (-4.059999851553134,0.5016691066446266); (-4.059999851553134,0.5016691066446266) – (-4.049999852348766,0.4958546821486174); (-4.049999852348766,0.4958546821486174) – (-4.039999853144399,0.48964096394762235); (-4.039999853144399,0.48964096394762235) – (-4.029999853940031,0.4830434830986313); (-4.029999853940031,0.4830434830986313) – (-4.019999854735663,0.4760787298652416); (-4.019999854735663,0.4760787298652416) – (-4.009999855531295,0.4687641125005938); (-4.009999855531295,0.4687641125005938) – (-3.999999856326927,0.46111791373581124); (-3.999999856326927,0.46111791373581124) – (-3.989999857122559,0.4531592450827028); (-3.989999857122559,0.4531592450827028) – (-3.9799998579181906,0.44490799906494166); (-3.9799998579181906,0.44490799906494166) – (-3.9699998587138223,0.43638479949712533); (-3.9699998587138223,0.43638479949712533) – (-3.959999859509454,0.4276109499359876); (-3.959999859509454,0.4276109499359876) – (-3.949999860305086,0.4186083804326095); (-3.949999860305086,0.4186083804326095) – (-3.9399998611007176,0.40939959271872267); (-3.9399998611007176,0.40939959271872267) – (-3.9299998618963494,0.4000076039641057); (-3.9299998618963494,0.4000076039641057) – (-3.919999862691981,0.3904558892456569); (-3.919999862691981,0.3904558892456569) – (-3.909999863487613,0.38076832287193796); (-3.909999863487613,0.38076832287193796) – (-3.8999998642832447,0.3709691187098435); (-3.8999998642832447,0.3709691187098435) – (-3.8899998650788765,0.36108276966255526); (-3.8899998650788765,0.36108276966255526) – (-3.8799998658745083,0.3511339864500491); (-3.8799998658745083,0.3511339864500491) – (-3.86999986667014,0.3411476358451737); (-3.86999986667014,0.3411476358451737) – (-3.859999867465772,0.33114867851968); (-3.859999867465772,0.33114867851968) – (-3.8499998682614036,0.32116210665554895); (-3.8499998682614036,0.32116210665554895) – (-3.8399998690570354,0.31121288147756326); (-3.8399998690570354,0.31121288147756326) – (-3.829999869852667,0.301325870863256); (-3.829999869852667,0.301325870863256) – (-3.819999870648299,0.2915257871861755); (-3.819999870648299,0.2915257871861755) – (-3.8099998714439307,0.2818371255478324); (-3.8099998714439307,0.2818371255478324) – (-3.7999998722395625,0.27228410255271257); (-3.7999998722395625,0.27228410255271257) – (-3.7899998730351943,0.2628905957793864); (-3.7899998730351943,0.2628905957793864) – (-3.779999873830826,0.25368008409900883); (-3.779999873830826,0.25368008409900883) – (-3.769999874626458,0.2446755889903761); (-3.769999874626458,0.2446755889903761) – (-3.7599998754220896,0.23589961699822856); (-3.7599998754220896,0.23589961699822856) – (-3.7499998762177214,0.22737410347861947); (-3.7499998762177214,0.22737410347861947) – (-3.739999877013353,0.21912035777195582); (-3.739999877013353,0.21912035777195582) – (-3.729999877808985,0.21115900994075443); (-3.729999877808985,0.21115900994075443) – (-3.7199998786046167,0.20350995920523773); (-3.7199998786046167,0.20350995920523773) – (-3.7099998794002484,0.19619232420565397); (-3.7099998794002484,0.19619232420565397) – (-3.69999988019588,0.1892243952156421); (-3.69999988019588,0.1892243952156421) – (-3.689999880991512,0.1826235884260789); (-3.689999880991512,0.1826235884260789) – (-3.6799998817871438,0.17640640241367947); (-3.6799998817871438,0.17640640241367947) – (-3.6699998825827755,0.1705883769031551); (-3.6699998825827755,0.1705883769031551) – (-3.6599998833784073,0.16518405392599925); (-3.6599998833784073,0.16518405392599925) – (-3.649999884174039,0.16020694147298875); (-3.649999884174039,0.16020694147298875) – (-3.639999884969671,0.15566947973124698); (-3.639999884969671,0.15566947973124698) – (-3.6299998857653026,0.15158300999025934); (-3.6299998857653026,0.15158300999025934) – (-3.6199998865609344,0.14795774629456138); (-3.6199998865609344,0.14795774629456138) – (-3.609999887356566,0.14480274991394979); (-3.609999887356566,0.14480274991394979) – (-3.599999888152198,0.14212590669503097); (-3.599999888152198,0.14212590669503097) – (-3.5899998889478297,0.13993390735071506); (-3.5899998889478297,0.13993390735071506) – (-3.5799998897434615,0.13823223073691987); (-3.5799998897434615,0.13823223073691987) – (-3.5699998905390933,0.13702513015828716); (-3.5699998905390933,0.13702513015828716) – (-3.559999891334725,0.13631562273713768); (-3.559999891334725,0.13631562273713768) – (-3.549999892130357,0.13610548187223787); (-3.549999892130357,0.13610548187223787) – (-3.5399998929259886,0.13639523280622717); (-3.5399998929259886,0.13639523280622717) – (-3.5299998937216204,0.13718415131278516); (-3.5299998937216204,0.13718415131278516) – (-3.519999894517252,0.13847026550682007); (-3.519999894517252,0.13847026550682007) – (-3.509999895312884,0.14025036077315345); (-3.509999895312884,0.14025036077315345) – (-3.4999998961085157,0.14251998780138306); (-3.4999998961085157,0.14251998780138306) – (-3.4899998969041475,0.1452734737068397); (-3.4899998969041475,0.1452734737068397) – (-3.4799998976997792,0.14850393620984254); (-3.4799998976997792,0.14850393620984254) – (-3.469999898495411,0.1522033008378109); (-3.469999898495411,0.1522033008378109) – (-3.459999899291043,0.1563623211072377); (-3.459999899291043,0.1563623211072377) – (-3.4499999000866746,0.16097060163507917); (-3.4499999000866746,0.16097060163507917) – (-3.4399999008823063,0.1660166241217944); (-3.4399999008823063,0.1660166241217944) – (-3.429999901677938,0.1714877761410908); (-3.429999901677938,0.1714877761410908) – (-3.41999990247357,0.17737038266441643); (-3.41999990247357,0.17737038266441643) – (-3.4099999032692017,0.18364974024140465); (-3.4099999032692017,0.18364974024140465) – (-3.3999999040648334,0.19031015375083624); (-3.3999999040648334,0.19031015375083624) – (-3.389999904860465,0.19733497563026234); (-3.389999904860465,0.19733497563026234) – (-3.379999905656097,0.20470664748623418); (-3.379999905656097,0.20470664748623418) – (-3.3699999064517288,0.21240674398113585); (-3.3699999064517288,0.21240674398113585) – (-3.3599999072473605,0.2204160188869248); (-3.3599999072473605,0.2204160188869248) – (-3.3499999080429923,0.22871445319067116); (-3.3499999080429923,0.22871445319067116) – (-3.339999908838624,0.23728130513165627); (-3.339999908838624,0.23728130513165627) – (-3.329999909634256,0.24609516204496468); (-3.329999909634256,0.24609516204496468) – (-3.3199999104298876,0.25513399388198604); (-3.3199999104298876,0.25513399388198604) – (-3.3099999112255194,0.2643752082740546); (-3.3099999112255194,0.2643752082740546) – (-3.299999912021151,0.27379570700159617); (-3.299999912021151,0.27379570700159617) – (-3.289999912816783,0.283371943727639); (-3.289999912816783,0.283371943727639) – (-3.2799999136124147,0.2930799828513839); (-3.2799999136124147,0.2930799828513839) – (-3.2699999144080465,0.3028955593347308); (-3.2699999144080465,0.3028955593347308) – (-3.2599999152036783,0.31279413935222755); (-3.2599999152036783,0.31279413935222755) – (-3.24999991599931,0.32275098161284776); (-3.24999991599931,0.32275098161284776) – (-3.239999916794942,0.3327411992003225); (-3.239999916794942,0.3327411992003225) – (-3.2299999175905736,0.3427398217774604); (-3.2299999175905736,0.3427398217774604) – (-3.2199999183862054,0.3527218579989762); (-3.2199999183862054,0.3527218579989762) – (-3.209999919181837,0.3626623579768297); (-3.209999919181837,0.3626623579768297) – (-3.199999919977469,0.3725364756419416); (-3.199999919977469,0.3725364756419416) – (-3.1899999207731007,0.38231953084641646); (-3.1899999207731007,0.38231953084641646) – (-3.1799999215687325,0.39198707105104974); (-3.1799999215687325,0.39198707105104974) – (-3.1699999223643642,0.40151493244393205); (-3.1699999223643642,0.40151493244393205) – (-3.159999923159996,0.4108793003373843); (-3.159999923159996,0.4108793003373843) – (-3.149999923955628,0.42005676869226505); (-3.149999923955628,0.42005676869226505) – (-3.1399999247512596,0.4290243986208694); (-3.1399999247512596,0.4290243986208694) – (-3.1299999255468913,0.4377597757221934); (-3.1299999255468913,0.4377597757221934) – (-3.119999926342523,0.44624106610625397); (-3.119999926342523,0.44624106610625397) – (-3.109999927138155,0.45444707096743403); (-3.109999927138155,0.45444707096743403) – (-3.0999999279337866,0.46235727957044837); (-3.0999999279337866,0.46235727957044837) – (-3.0899999287294184,0.46995192051649204); (-3.0899999287294184,0.46995192051649204) – (-3.07999992952505,0.47721201116143264); (-3.07999992952505,0.47721201116143264) – (-3.069999930320682,0.4841194050625267); (-3.069999930320682,0.4841194050625267) – (-3.0599999311163137,0.4906568373350695); (-3.0599999311163137,0.4906568373350695) – (-3.0499999319119455,0.4968079678056081); (-3.0499999319119455,0.4968079678056081) – (-3.0399999327075773,0.5025574218538592); (-3.0399999327075773,0.5025574218538592) – (-3.029999933503209,0.5078908288412461); (-3.029999933503209,0.5078908288412461) – (-3.019999934298841,0.5127948580300057); (-3.019999934298841,0.5127948580300057) – (-3.0099999350944726,0.5172572519030855); (-3.0099999350944726,0.5172572519030855) – (-2.9999999358901044,0.5212668568015468); (-2.9999999358901044,0.5212668568015468) – (-2.989999936685736,0.5248136508029); (-2.989999936685736,0.5248136508029) – (-2.979999937481368,0.5278887687706864); (-2.979999937481368,0.5278887687706864) – (-2.9699999382769997,0.5304845245127); (-2.9699999382769997,0.5304845245127) – (-2.9599999390726315,0.5325944299924615); (-2.9599999390726315,0.5325944299924615) – (-2.9499999398682633,0.5342132115459279); (-2.9499999398682633,0.5342132115459279) – (-2.939999940663895,0.5353368230629045); (-2.939999940663895,0.5353368230629045) – (-2.929999941459527,0.5359624561002111); (-2.929999941459527,0.5359624561002111) – (-2.9199999422551586,0.5360885469013267); (-2.9199999422551586,0.5360885469013267) – (-2.9099999430507903,0.5357147803049653); (-2.9099999430507903,0.5357147803049653) – (-2.899999943846422,0.5348420905328156); (-2.899999943846422,0.5348420905328156) – (-2.889999944642054,0.5334726588544727); (-2.889999944642054,0.5334726588544727) – (-2.8799999454376857,0.5316099081354018); (-2.8799999454376857,0.5316099081354018) – (-2.8699999462333174,0.5292584942815577); (-2.8699999462333174,0.5292584942815577) – (-2.859999947028949,0.5264242946020476); (-2.859999947028949,0.5264242946020476) – (-2.849999947824581,0.5231143931189207); (-2.849999947824581,0.5231143931189207) – (-2.8399999486202128,0.519337062860805); (-2.8399999486202128,0.519337062860805) – (-2.8299999494158445,0.5151017451846482); (-2.8299999494158445,0.5151017451846482) – (-2.8199999502114763,0.510419026177246); (-2.8199999502114763,0.510419026177246) – (-2.809999951007108,0.5053006101955415); (-2.809999951007108,0.5053006101955415) – (-2.79999995180274,0.49975929061183433); (-2.79999995180274,0.49975929061183433) – (-2.7899999525983716,0.4938089178370167); (-2.7899999525983716,0.4938089178370167) – (-2.7799999533940034,0.48746436470176546); (-2.7799999533940034,0.48746436470176546) – (-2.769999954189635,0.48074148928221616); (-2.769999954189635,0.48074148928221616) – (-2.759999954985267,0.4736570952630387); (-2.759999954985267,0.4736570952630387) – (-2.7499999557808987,0.46622888993698264); (-2.7499999557808987,0.46622888993698264) – (-2.7399999565765305,0.45847543994587486); (-2.7399999565765305,0.45847543994587486) – (-2.7299999573721623,0.4504161248736913); (-2.7299999573721623,0.4504161248736913) – (-2.719999958167794,0.4420710888076974); (-2.719999958167794,0.4420710888076974) – (-2.709999958963426,0.433461189988729); (-2.709999958963426,0.433461189988729) – (-2.6999999597590576,0.4246079486764612); (-2.6999999597590576,0.4246079486764612) – (-2.6899999605546894,0.41553349335997514); (-2.6899999605546894,0.41553349335997514) – (-2.679999961350321,0.4062605054480687); (-2.679999961350321,0.4062605054480687) – (-2.669999962145953,0.39681216257755547); (-2.669999962145953,0.39681216257755547) – (-2.6599999629415847,0.3872120806812527); (-2.6599999629415847,0.3872120806812527) – (-2.6499999637372165,0.37748425496045773); (-2.6499999637372165,0.37748425496045773) – (-2.6399999645328482,0.36765299990945055); (-2.6399999645328482,0.36765299990945055) – (-2.62999996532848,0.3577428885419307); (-2.62999996532848,0.3577428885419307) – (-2.619999966124112,0.34777869097128983); (-2.619999966124112,0.34777869097128983) – (-2.6099999669197436,0.3377853124982386); (-2.6099999669197436,0.3377853124982386) – (-2.5999999677153753,0.32778773136053496); (-2.5999999677153753,0.32778773136053496) – (-2.589999968511007,0.31781093630040785); (-2.589999968511007,0.31781093630040785) – (-2.579999969306639,0.30787986410572593); (-2.579999969306639,0.30787986410572593) – (-2.5699999701022707,0.2980193372810245); (-2.5699999701022707,0.2980193372810245) – (-2.5599999708979024,0.28825400200418105); (-2.5599999708979024,0.28825400200418105) – (-2.549999971693534,0.27860826652381626); (-2.549999971693534,0.27860826652381626) – (-2.539999972489166,0.2691062401513935); (-2.539999972489166,0.2691062401513935) – (-2.5299999732847978,0.25977167300050563); (-2.5299999732847978,0.25977167300050563) – (-2.5199999740804295,0.2506278966239694); (-2.5199999740804295,0.2506278966239694) – (-2.5099999748760613,0.24169776569710344); (-2.5099999748760613,0.24169776569710344) – (-2.499999975671693,0.23300360089295158); (-2.499999975671693,0.23300360089295158) – (-2.489999976467325,0.22456713309223342); (-2.489999976467325,0.22456713309223342) – (-2.4799999772629566,0.2164094490674686); (-2.4799999772629566,0.2164094490674686) – (-2.4699999780585884,0.2085509387770353); (-2.4699999780585884,0.2085509387770353) – (-2.45999997885422,0.20101124440090165); (-2.45999997885422,0.20101124440090165) – (-2.449999979649852,0.19380921124541295); (-2.449999979649852,0.19380921124541295) – (-2.4399999804454837,0.1869628406398476); (-2.4399999804454837,0.1869628406398476) – (-2.4299999812411155,0.18048924494247623); (-2.4299999812411155,0.18048924494247623) – (-2.4199999820367473,0.17440460476858455); (-2.4199999820367473,0.17440460476858455) – (-2.409999982832379,0.16872412854736837); (-2.409999982832379,0.16872412854736837) – (-2.399999983628011,0.16346201450878708); (-2.399999983628011,0.16346201450878708) – (-2.3899999844236426,0.15863141519538865); (-2.3899999844236426,0.15863141519538865) – (-2.3799999852192744,0.1542444045878082); (-2.3799999852192744,0.1542444045878082) – (-2.369999986014906,0.15031194792610914); (-2.369999986014906,0.15031194792610914) – (-2.359999986810538,0.14684387430239826); (-2.359999986810538,0.14684387430239826) – (-2.3499999876061697,0.14384885209321785); (-2.3499999876061697,0.14384885209321785) – (-2.3399999884018015,0.14133436729312251); (-2.3399999884018015,0.14133436729312251) – (-2.3299999891974332,0.13930670480359422); (-2.3299999891974332,0.13930670480359422) – (-2.319999989993065,0.13777093272406427); (-2.319999989993065,0.13777093272406427) – (-2.3099999907886968,0.13673088968430616); (-2.3099999907886968,0.13673088968430616) – (-2.2999999915843286,0.1361891752498618); (-2.2999999915843286,0.1361891752498618) – (-2.2899999923799603,0.1361471434244824); (-2.2899999923799603,0.1361471434244824) – (-2.279999993175592,0.13660489926582497); (-2.279999993175592,0.13660489926582497) – (-2.269999993971224,0.13756129862286282); (-2.269999993971224,0.13756129862286282) – (-2.2599999947668556,0.13901395099566685); (-2.2599999947668556,0.13901395099566685) – (-2.2499999955624874,0.1409592255104094); (-2.2499999955624874,0.1409592255104094) – (-2.239999996358119,0.14339225999465652); (-2.239999996358119,0.14339225999465652) – (-2.229999997153751,0.14630697313026497); (-2.229999997153751,0.14630697313026497) – (-2.2199999979493827,0.14969607965350804); (-2.2199999979493827,0.14969607965350804) – (-2.2099999987450145,0.15355110856443782); (-2.2099999987450145,0.15355110856443782) – (-2.1999999995406463,0.15786242429997022); (-2.1999999995406463,0.15786242429997022) – (-2.190000000336278,0.16261925081777062); (-2.190000000336278,0.16261925081777062) – (-2.18000000113191,0.16780969853074343); (-2.18000000113191,0.16780969853074343) – (-2.1700000019275416,0.17342079402480312); (-2.1700000019275416,0.17342079402480312) – (-2.1600000027231734,0.17943851248564782); (-2.1600000027231734,0.17943851248564782) – (-2.150000003518805,0.1858478127534856); (-2.150000003518805,0.1858478127534856) – (-2.140000004314437,0.19263267491809455); (-2.140000004314437,0.19263267491809455) – (-2.1300000051100687,0.19977614036024904); (-2.1300000051100687,0.19977614036024904) – (-2.1200000059057005,0.2072603541394292); (-2.1200000059057005,0.2072603541394292) – (-2.1100000067013323,0.21506660962186636); (-2.1100000067013323,0.21506660962186636) – (-2.100000007496964,0.2231753952373778); (-2.100000007496964,0.2231753952373778) – (-2.090000008292596,0.2315664432481232); (-2.090000008292596,0.2315664432481232) – (-2.0800000090882276,0.24021878040738567); (-2.0800000090882276,0.24021878040738567) – (-2.0700000098838593,0.24911078038175777); (-2.0700000098838593,0.24911078038175777) – (-2.060000010679491,0.25822021780570353); (-2.060000010679491,0.25822021780570353) – (-2.050000011475123,0.26752432383338876); (-2.050000011475123,0.26752432383338876) – (-2.0400000122707547,0.2769998430489287); (-2.0400000122707547,0.2769998430489287) – (-2.0300000130663864,0.28662309159280763); (-2.0300000130663864,0.28662309159280763) – (-2.020000013862018,0.2963700163591833); (-2.020000013862018,0.2963700163591833) – (-2.01000001465765,0.30621625511611605); (-2.01000001465765,0.30621625511611605) – (-2.0000000154532818,0.3161371973984509); (-2.0000000154532818,0.3161371973984509) – (-1.9900000162489135,0.32610804602115395); (-1.9900000162489135,0.32610804602115395) – (-1.9800000170445453,0.3361038790593509); (-1.9800000170445453,0.3361038790593509) – (-1.970000017840177,0.34609971214015034); (-1.970000017840177,0.34609971214015034) – (-1.9600000186358089,0.35607056089055433); (-1.9600000186358089,0.35607056089055433) – (-1.9500000194314406,0.3659915033853694); (-1.9500000194314406,0.3659915033853694) – (-1.9400000202270724,0.3758377424390306); (-1.9400000202270724,0.3758377424390306) – (-1.9300000210227042,0.3855846675856413); (-1.9300000210227042,0.3855846675856413) – (-1.920000021818336,0.3952079165923113); (-1.920000021818336,0.3952079165923113) – (-1.9100000226139677,0.40468343635204185); (-1.9100000226139677,0.40468343635204185) – (-1.9000000234095995,0.4139875430039568); (-1.9000000234095995,0.4139875430039568) – (-1.8900000242052313,0.4230969811306113); (-1.8900000242052313,0.4230969811306113) – (-1.880000025000863,0.43198898188441465); (-1.880000025000863,0.43198898188441465) – (-1.8700000257964948,0.44064131989788274); (-1.8700000257964948,0.44064131989788274) – (-1.8600000265921266,0.44903236883547315); (-1.8600000265921266,0.44903236883547315) – (-1.8500000273877584,0.45714115544815226); (-1.8500000273877584,0.45714115544815226) – (-1.8400000281833901,0.4649474119955874); (-1.8400000281833901,0.4649474119955874) – (-1.830000028979022,0.4724316269049339); (-1.830000028979022,0.4724316269049339) – (-1.8200000297746537,0.4795750935395982); (-1.8200000297746537,0.4795750935395982) – (-1.8100000305702855,0.4863599569560798); (-1.8100000305702855,0.4863599569560798) – (-1.8000000313659172,0.49276925853202413); (-1.8000000313659172,0.49276925853202413) – (-1.790000032161549,0.4987869783539397); (-1.790000032161549,0.4987869783539397) – (-1.7800000329571808,0.5043980752586325); (-1.7800000329571808,0.5043980752586325) – (-1.7700000337528126,0.5095885244282748); (-1.7700000337528126,0.5095885244282748) – (-1.7600000345484443,0.5143453524451402); (-1.7600000345484443,0.5143453524451402) – (-1.750000035344076,0.5186566697183864); (-1.750000035344076,0.5186566697183864) – (-1.7400000361397079,0.5225117002018351); (-1.7400000361397079,0.5225117002018351) – (-1.7300000369353397,0.5259008083284717); (-1.7300000369353397,0.5259008083284717) – (-1.7200000377309714,0.5288155230943408); (-1.7200000377309714,0.5288155230943408) – (-1.7100000385266032,0.5312485592316407); (-1.7100000385266032,0.5312485592316407) – (-1.700000039322235,0.5331938354180965); (-1.700000039322235,0.5331938354180965) – (-1.6900000401178668,0.5346464894770959); (-1.6900000401178668,0.5346464894770959) – (-1.6800000409134985,0.5356028905305965); (-1.6800000409134985,0.5356028905305965) – (-1.6700000417091303,0.5360606480744291); (-1.6700000417091303,0.5360606480744291) – (-1.660000042504762,0.5360186179533115); (-1.660000042504762,0.5360186179533115) – (-1.6500000433003938,0.535476905220641); (-1.6500000433003938,0.535476905220641) – (-1.6400000440960256,0.5344368638759153); (-1.6400000440960256,0.5344368638759153) – (-1.6300000448916574,0.5329010934804396); (-1.6300000448916574,0.5329010934804396) – (-1.6200000456872892,0.530873432659778); (-1.6200000456872892,0.530873432659778) – (-1.610000046482921,0.5283589495091907); (-1.610000046482921,0.5283589495091907) – (-1.6000000472785527,0.5253639289260367); (-1.6000000472785527,0.5253639289260367) – (-1.5900000480741845,0.5218958569008063); (-1.5900000480741845,0.5218958569008063) – (-1.5800000488698163,0.5179634018060466); (-1.5800000488698163,0.5179634018060466) – (-1.570000049665448,0.5135763927299477); (-1.570000049665448,0.5135763927299477) – (-1.5600000504610798,0.508745794908745); (-1.5600000504610798,0.508745794908745) – (-1.5500000512567116,0.5034836823193443); (-1.5500000512567116,0.5034836823193443) – (-1.5400000520523434,0.4978032075006711); (-1.5400000520523434,0.4978032075006711) – (-1.5300000528479751,0.49171856867917907); (-1.5300000528479751,0.49171856867917907) – (-1.520000053643607,0.4852449742806839); (-1.520000053643607,0.4852449742806839) – (-1.5100000544392387,0.47839860491722475); (-1.5100000544392387,0.47839860491722475) – (-1.5000000552348705,0.47119657294396766); (-1.5000000552348705,0.47119657294396766) – (-1.4900000560305022,0.463656879687236); (-1.4900000560305022,0.463656879687236) – (-1.480000056826134,0.45579837045057714); (-1.480000056826134,0.45579837045057714) – (-1.4700000576217658,0.4476406874113254); (-1.4700000576217658,0.4476406874113254) – (-1.4600000584173976,0.43920422052539565); (-1.4600000584173976,0.43920422052539565) – (-1.4500000592130293,0.43051005656302105); (-1.4500000592130293,0.43051005656302105) – (-1.440000060008661,0.42157992640281716); (-1.440000060008661,0.42157992640281716) – (-1.4300000608042929,0.4124361507159116); (-1.4300000608042929,0.4124361507159116) – (-1.4200000615999246,0.40310158417589925); (-1.4200000615999246,0.40310158417589925) – (-1.4100000623955564,0.39359955833407); (-1.4100000623955564,0.39359955833407) – (-1.4000000631911882,0.38395382330269046); (-1.4000000631911882,0.38395382330269046) – (-1.39000006398682,0.3741884883921019); (-1.39000006398682,0.3741884883921019) – (-1.3800000647824517,0.3643279618500094); (-1.3800000647824517,0.3643279618500094) – (-1.3700000655780835,0.35439688985358414); (-1.3700000655780835,0.35439688985358414) – (-1.3600000663737153,0.3444200949068659); (-1.3600000663737153,0.3444200949068659) – (-1.350000067169347,0.3344225137974398); (-1.350000067169347,0.3344225137974398) – (-1.3400000679649788,0.32442913526746425); (-1.3400000679649788,0.32442913526746425) – (-1.3300000687606106,0.3144649375548394); (-1.3300000687606106,0.3144649375548394) – (-1.3200000695562424,0.3045548259606307); (-1.3200000695562424,0.3045548259606307) – (-1.3100000703518742,0.29472357059879656); (-1.3100000703518742,0.29472357059879656) – (-1.300000071147506,0.2849957444838134); (-1.300000071147506,0.2849957444838134) – (-1.2900000719431377,0.27539566211094635); (-1.2900000719431377,0.27539566211094635) – (-1.2800000727387695,0.265947318682684); (-1.2800000727387695,0.265947318682684) – (-1.2700000735344013,0.2566743301332377); (-1.2700000735344013,0.2566743301332377) – (-1.260000074330033,0.24759987410101458); (-1.260000074330033,0.24759987410101458) – (-1.2500000751256648,0.23874663199660137); (-1.2500000751256648,0.23874663199660137) – (-1.2400000759212966,0.23013673231105924); (-1.2400000759212966,0.23013673231105924) – (-1.2300000767169283,0.22179169530622933); (-1.2300000767169283,0.22179169530622933) – (-1.2200000775125601,0.21373237922529395); (-1.2200000775125601,0.21373237922529395) – (-1.210000078308192,0.20597892815804); (-1.210000078308192,0.20597892815804) – (-1.2000000791038237,0.19855072169113322); (-1.2000000791038237,0.19855072169113322) – (-1.1900000798994554,0.1914663264692519); (-1.1900000798994554,0.1914663264692519) – (-1.1800000806950872,0.1847434497881519); (-1.1800000806950872,0.1847434497881519) – (-1.170000081490719,0.1783988953356562); (-1.170000081490719,0.1783988953356562) – (-1.1600000822863508,0.1724485211911929); (-1.1600000822863508,0.1724485211911929) – (-1.1500000830819825,0.16690720018886207); (-1.1500000830819825,0.16690720018886207) – (-1.1400000838776143,0.16178878274310204); (-1.1400000838776143,0.16178878274310204) – (-1.130000084673246,0.15710606222987142); (-1.130000084673246,0.15710606222987142) – (-1.1200000854688779,0.15287074300987746); (-1.1200000854688779,0.15287074300987746) – (-1.1100000862645096,0.14909341117377442); (-1.1100000862645096,0.14909341117377442) – (-1.1000000870601414,0.1457835080824542); (-1.1000000870601414,0.1457835080824542) – (-1.0900000878557732,0.1429493067685646); (-1.0900000878557732,0.1429493067685646) – (-1.080000088651405,0.14059789125823982); (-1.080000088651405,0.14059789125823982) – (-1.0700000894470367,0.1387351388647271); (-1.0700000894470367,0.1387351388647271) – (-1.0600000902426685,0.13736570549816687); (-1.0600000902426685,0.13736570549816687) – (-1.0500000910383003,0.13649301402824385); (-1.0500000910383003,0.13649301402824385) – (-1.040000091833932,0.1361192457287968); (-1.040000091833932,0.1361192457287968) – (-1.0300000926295638,0.13624533482577106); (-1.0300000926295638,0.13624533482577106) – (-1.0200000934251956,0.1368709661621402); (-1.0200000934251956,0.1368709661621402) – (-1.0100000942208274,0.13799457598563458); (-1.0100000942208274,0.13799457598563458) – (-1.0000000950164591,0.13961335585730697); (-1.0000000950164591,0.13961335585730697) – (-0.9900000958120909,0.14172325967116617); (-0.9900000958120909,0.14172325967116617) – (-0.9800000966077227,0.14431901376733314); (-2.9799998891774853,1.631784886368755) – (-2.9799998891774853,1.631784886368755); (-2.9799998891774853,1.631784886368755) – (-2.969999889548372,1.634380639878598); (-2.969999889548372,1.634380639878598) – (-2.9599998899192586,1.6364905430562566); (-2.9599998899192586,1.6364905430562566) – (-2.949999890290145,1.6381093222429939); (-2.949999890290145,1.6381093222429939) – (-2.939999890661032,1.639232931334187); (-2.939999890661032,1.639232931334187) – (-2.9299998910319185,1.6398585618924804); (-2.9299998910319185,1.6398585618924804) – (-2.919999891402805,1.6399846501674165); (-2.919999891402805,1.6399846501674165) – (-2.909999891773692,1.639610881003997); (-2.909999891773692,1.639610881003997) – (-2.8999998921445784,1.638738188630406); (-2.8999998921445784,1.638738188630406) – (-2.889999892515465,1.6373687543229258); (-2.889999892515465,1.6373687543229258) – (-2.8799998928863517,1.6355060009538842); (-2.8799998928863517,1.6355060009538842) – (-2.8699998932572384,1.6331545844362552); (-2.8699998932572384,1.6331545844362552) – (-2.859999893628125,1.6303203820863026); (-2.859999893628125,1.6303203820863026) – (-2.8499998939990117,1.6270104779333523); (-2.8499998939990117,1.6270104779333523) – (-2.8399998943698983,1.623233145013408); (-2.8399998943698983,1.623233145013408) – (-2.829999894740785,1.6189978246908727); (-2.829999894740785,1.6189978246908727) – (-2.8199998951116716,1.6143151030600555); (-2.8199998951116716,1.6143151030600555) – (-2.8099998954825582,1.609196684485451); (-2.8099998954825582,1.609196684485451) – (-2.799999895853445,1.603655362346926); (-2.799999895853445,1.603655362346926) – (-2.7899998962243315,1.597704987062935); (-2.7899998962243315,1.597704987062935) – (-2.779999896595218,1.5913604314716894); (-2.779999896595218,1.5913604314716894) – (-2.769999896966105,1.5846375536568105); (-2.769999896966105,1.5846375536568105) – (-2.7599998973369915,1.5775531573103823); (-2.7599998973369915,1.5775531573103823) – (-2.749999897707878,1.5701249497324752); (-2.749999897707878,1.5701249497324752) – (-2.7399998980787648,1.5623714975721217); (-2.7399998980787648,1.5623714975721217) – (-2.7299998984496514,1.5543121804203652); (-2.7299998984496514,1.5543121804203652) – (-2.719999898820538,1.54596714237138); (-2.719999898820538,1.54596714237138) – (-2.7099998991914247,1.5373572416727304); (-2.7099998991914247,1.5373572416727304) – (-2.6999998995623113,1.5285039985906166); (-2.6999998995623113,1.5285039985906166) – (-2.689999899933198,1.5194295416204235); (-2.689999899933198,1.5194295416204235) – (-2.6799999003040846,1.5101565521770086); (-2.6799999003040846,1.5101565521770086) – (-2.6699999006749713,1.5007082079029819); (-2.6699999006749713,1.5007082079029819) – (-2.659999901045858,1.4911081247366742); (-2.659999901045858,1.4911081247366742) – (-2.6499999014167446,1.4813802978845956); (-2.6499999014167446,1.4813802978845956) – (-2.639999901787631,1.4715490418459185); (-2.639999901787631,1.4715490418459185) – (-2.629999902158518,1.461638929638898); (-2.629999902158518,1.461638929638898) – (-2.6199999025294045,1.4516747313811282); (-2.6199999025294045,1.4516747313811282) – (-2.609999902900291,1.4416813523771523); (-2.609999902900291,1.4416813523771523) – (-2.599999903271178,1.4316837708681764); (-2.599999903271178,1.4316837708681764) – (-2.5899999036420644,1.4217069755994802); (-2.5899999036420644,1.4217069755994802) – (-2.579999904012951,1.4117759033615715); (-2.579999904012951,1.4117759033615715) – (-2.5699999043838377,1.4019153766612018); (-2.5699999043838377,1.4019153766612018) – (-2.5599999047547244,1.3921500416780308); (-2.5599999047547244,1.3921500416780308) – (-2.549999905125611,1.3825043066620186); (-2.549999905125611,1.3825043066620186) – (-2.5399999054964977,1.3730022809255154); (-2.5399999054964977,1.3730022809255154) – (-2.5299999058673843,1.3636677145825415); (-2.5299999058673843,1.3636677145825415) – (-2.519999906238271,1.354523939185876); (-2.519999906238271,1.354523939185876) – (-2.5099999066091576,1.34559380941033); (-2.5099999066091576,1.34559380941033) – (-2.4999999069800443,1.3368996459279643); (-2.4999999069800443,1.3368996459279643) – (-2.489999907350931,1.3284631796180413); (-2.489999907350931,1.3284631796180413) – (-2.4799999077218176,1.3203054972511448); (-2.4799999077218176,1.3203054972511448) – (-2.469999908092704,1.312446988783241); (-2.469999908092704,1.312446988783241) – (-2.459999908463591,1.304907296391411); (-2.459999908463591,1.304907296391411) – (-2.4499999088344775,1.297705265378641); (-2.4499999088344775,1.297705265378641) – (-2.439999909205364,1.2908588970703825); (-2.439999909205364,1.2908588970703825) – (-2.429999909576251,1.2843853038206174); (-2.429999909576251,1.2843853038206174) – (-2.4199999099471374,1.2783006662398881); (-2.4199999099471374,1.2783006662398881) – (-2.409999910318024,1.2726201927522003); (-2.409999910318024,1.2726201927522003) – (-2.3999999106889107,1.2673580815818868); (-2.3999999106889107,1.2673580815818868) – (-2.3899999110597974,1.2625274852654438); (-2.3899999110597974,1.2625274852654438) – (-2.379999911430684,1.2581404777770409); (-2.379999911430684,1.2581404777770409) – (-2.3699999118015707,1.2542080243498763); (-2.3699999118015707,1.2542080243498763) – (-2.3599999121724573,1.2507399540688071); (-2.3599999121724573,1.2507399540688071) – (-2.349999912543344,1.247744935302757); (-2.349999912543344,1.247744935302757) – (-2.3399999129142306,1.2452304540383101); (-2.3399999129142306,1.2452304540383101) – (-2.3299999132851172,1.2432027951686453); (-2.3299999132851172,1.2432027951686453) – (-2.319999913656004,1.2416670267845762); (-2.319999913656004,1.2416670267845762) – (-2.3099999140268905,1.2406269875069658); (-2.3099999140268905,1.2406269875069658) – (-2.299999914397777,1.2400852768921733); (-2.299999914397777,1.2400852768921733) – (-2.289999914768664,1.2400432489345177); (-2.289999914768664,1.2400432489345177) – (-2.2799999151395505,1.2405010086819979); (-2.2799999151395505,1.2405010086819979) – (-2.269999915510437,1.2414574119737263); (-2.269999915510437,1.2414574119737263) – (-2.2599999158813238,1.2429100682997363); (-2.2599999158813238,1.2429100682997363) – (-2.2499999162522104,1.2448553467760113); (-2.2499999162522104,1.2448553467760113) – (-2.239999916623097,1.247288385219802); (-2.239999916623097,1.247288385219802) – (-2.2299999169939837,1.2502031023025522); (-2.2299999169939837,1.2502031023025522) – (-2.2199999173648703,1.2535922127500505); (-2.2199999173648703,1.2535922127500505) – (-2.209999917735757,1.2574472455518213); (-2.209999917735757,1.2574472455518213) – (-2.1999999181066436,1.2617585651342367); (-2.1999999181066436,1.2617585651342367) – (-2.1899999184775303,1.2665153954444317); (-2.1899999184775303,1.2665153954444317) – (-2.179999918848417,1.271705846884822); (-2.179999918848417,1.271705846884822) – (-2.1699999192193036,1.2773169460309035); (-2.1699999192193036,1.2773169460309035) – (-2.15999991959019,1.2833346680580553); (-2.15999991959019,1.2833346680580553) – (-2.149999919961077,1.2897439717962937); (-2.149999919961077,1.2897439717962937) – (-2.1399999203319635,1.2965288373253623); (-2.1399999203319635,1.2965288373253623) – (-2.12999992070285,1.3036723060161857); (-2.12999992070285,1.3036723060161857) – (-2.119999921073737,1.3111565229186075); (-2.119999921073737,1.3111565229186075) – (-2.1099999214446234,1.318962781389464); (-2.1099999214446234,1.318962781389464) – (-2.09999992181551,1.3270715698494455); (-2.09999992181551,1.3270715698494455) – (-2.0899999221863967,1.335462620551881); (-2.0899999221863967,1.335462620551881) – (-2.0799999225572834,1.344114960241544); (-2.0799999225572834,1.344114960241544) – (-2.06999992292817,1.3530069625768664); (-2.06999992292817,1.3530069625768664) – (-2.0599999232990567,1.3621164021845227); (-2.0599999232990567,1.3621164021845227) – (-2.0499999236699433,1.371420510211287); (-2.0499999236699433,1.371420510211287) – (-2.03999992404083,1.380896031234303); (-2.03999992404083,1.380896031234303) – (-2.0299999244117166,1.3905192813875256); (-2.0299999244117166,1.3905192813875256) – (-2.0199999247826033,1.400266207559047); (-2.0199999247826033,1.400266207559047) – (-2.00999992515349,1.4101124475113467); (-2.00999992515349,1.4101124475113467) – (-1.9999999255243768,1.420033390774191); (-1.9999999255243768,1.420033390774191) – (-1.9899999258952636,1.4300042401579898); (-1.9899999258952636,1.4300042401579898) – (-1.9799999262661505,1.4400000737338494); (-1.9799999262661505,1.4400000737338494) – (-1.9699999266370374,1.449995907125413); (-1.9699999266370374,1.449995907125413) – (-1.9599999270079242,1.4599667559567837); (-1.9599999270079242,1.4599667559567837) – (-1.9499999273788111,1.469887698300449); (-1.9499999273788111,1.469887698300449) – (-1.939999927749698,1.4797339369691158); (-1.939999927749698,1.4797339369691158) – (-1.9299999281205849,1.4894808614957595); (-1.9299999281205849,1.4894808614957595) – (-1.9199999284914717,1.4991041096469704); (-1.9199999284914717,1.4991041096469704) – (-1.9099999288623586,1.508579628315845); (-1.9099999288623586,1.508579628315845) – (-1.8999999292332455,1.5178837336422224); (-1.8999999292332455,1.5178837336422224) – (-1.8899999296041323,1.5269931702099957); (-1.8899999296041323,1.5269931702099957) – (-1.8799999299750192,1.5358851691735367); (-1.8799999299750192,1.5358851691735367) – (-1.869999930345906,1.5445375051679486); (-1.869999930345906,1.5445375051679486) – (-1.859999930716793,1.5529285518608986); (-1.859999930716793,1.5529285518608986) – (-1.8499999310876798,1.5610373360071823); (-1.8499999310876798,1.5610373360071823) – (-1.8399999314585667,1.5688435898709103); (-1.8399999314585667,1.5688435898709103) – (-1.8299999318294535,1.5763278018842888); (-1.8299999318294535,1.5763278018842888) – (-1.8199999322003404,1.583471265416374); (-1.8199999322003404,1.583471265416374) – (-1.8099999325712273,1.5902561255299035); (-1.8099999325712273,1.5902561255299035) – (-1.7999999329421141,1.596665423609338); (-1.7999999329421141,1.596665423609338) – (-1.789999933313001,1.6026831397485652); (-1.789999933313001,1.6026831397485652) – (-1.7799999336838879,1.6082942327923182); (-1.7799999336838879,1.6082942327923182) – (-1.7699999340547747,1.6134846779312286); (-1.7699999340547747,1.6134846779312286) – (-1.7599999344256616,1.618241501756543); (-1.7599999344256616,1.618241501756543) – (-1.7499999347965485,1.6225528146868855); (-1.7499999347965485,1.6225528146868855) – (-1.7399999351674353,1.626407840686018); (-1.7399999351674353,1.626407840686018) – (-1.7299999355383222,1.629796944197316); (-1.7299999355383222,1.629796944197316) – (-1.719999935909209,1.63271165422764); (-1.719999935909209,1.63271165422764) – (-1.709999936280096,1.635144685520407); (-1.709999936280096,1.635144685520407) – (-1.6999999366509828,1.6370899567649335); (-1.6999999366509828,1.6370899567649335) – (-1.6899999370218697,1.6385426057965469); (-1.6899999370218697,1.6385426057965469) – (-1.6799999373927565,1.6394990017494622); (-1.6799999373927565,1.6394990017494622) – (-1.6699999377636434,1.639956754132056); (-1.6699999377636434,1.639956754132056) – (-1.6599999381345303,1.6399147188018492); (-1.6599999381345303,1.6399147188018492) – (-1.6499999385054172,1.6393730008252683); (-1.6499999385054172,1.6393730008252683) – (-1.639999938876304,1.6383329542150327); (-1.639999938876304,1.6383329542150327) – (-1.6299999392471909,1.6367971785458297); (-1.6299999392471909,1.6367971785458297) – (-1.6199999396180778,1.634769512456731); (-1.6199999396180778,1.634769512456731) – (-1.6099999399889646,1.6322550240565952); (-1.6099999399889646,1.6322550240565952) – (-1.5999999403598515,1.6292599982564366); (-1.5999999403598515,1.6292599982564366) – (-1.5899999407307384,1.625791921060421); (-1.5899999407307384,1.625791921060421) – (-1.5799999411016252,1.621859460854756); (-1.5799999411016252,1.621859460854756) – (-1.569999941472512,1.6174724467412411); (-1.569999941472512,1.6174724467412411) – (-1.559999941843399,1.612641843969634); (-1.559999941843399,1.612641843969634) – (-1.5499999422142858,1.607379726530239); (-1.5499999422142858,1.607379726530239) – (-1.5399999425851727,1.6016992469752214); (-1.5399999425851727,1.6016992469752214) – (-1.5299999429560596,1.5956146035440792); (-1.5299999429560596,1.5956146035440792) – (-1.5199999433269464,1.5891410046754413); (-1.5199999433269464,1.5891410046754413) – (-1.5099999436978333,1.5822946309938943); (-1.5099999436978333,1.5822946309938943) – (-1.4999999440687202,1.5750925948668506); (-1.4999999440687202,1.5750925948668506) – (-1.489999944439607,1.5675528976325446); (-1.489999944439607,1.5675528976325446) – (-1.479999944810494,1.5596943846060662); (-1.479999944810494,1.5596943846060662) – (-1.4699999451813808,1.5515366979758904); (-1.4699999451813808,1.5515366979758904) – (-1.4599999455522676,1.5431002277086394); (-1.4599999455522676,1.5431002277086394) – (-1.4499999459231545,1.534406060584789); (-1.4499999459231545,1.534406060584789) – (-1.4399999462940414,1.5254759274927019); (-1.4399999462940414,1.5254759274927019) – (-1.4299999466649282,1.51633214911273); (-1.4299999466649282,1.51633214911273) – (-1.4199999470358151,1.5069975801271405); (-1.4199999470358151,1.5069975801271405) – (-1.409999947406702,1.4974955520953173); (-1.409999947406702,1.4974955520953173) – (-1.3999999477775888,1.4878498151370183); (-1.3999999477775888,1.4878498151370183) – (-1.3899999481484757,1.4780844785694485); (-1.3899999481484757,1.4780844785694485) – (-1.3799999485193626,1.4682239506465278); (-1.3799999485193626,1.4682239506465278) – (-1.3699999488902495,1.4582928775509718); (-1.3699999488902495,1.4582928775509718) – (-1.3599999492611363,1.4483160817916756); (-1.3599999492611363,1.4483160817916756) – (-1.3499999496320232,1.4383185001603735); (-1.3499999496320232,1.4383185001603735) – (-1.33999995000291,1.4283251214026507); (-1.33999995000291,1.4283251214026507) – (-1.329999950373797,1.4183609237590977); (-1.329999950373797,1.4183609237590977) – (-1.3199999507446838,1.4084508125327229); (-1.3199999507446838,1.4084508125327229) – (-1.3099999511155707,1.3986195578386689); (-1.3099999511155707,1.3986195578386689) – (-1.2999999514864575,1.38889173269183); (-1.2999999514864575,1.38889173269183) – (-1.2899999518573444,1.3792916515871168); (-1.2899999518573444,1.3792916515871168) – (-1.2799999522282313,1.3698433097258857); (-1.2799999522282313,1.3698433097258857) – (-1.2699999525991181,1.3605703230404373); (-1.2699999525991181,1.3605703230404373) – (-1.259999952970005,1.3514958691664882); (-1.259999952970005,1.3514958691664882) – (-1.2499999533408919,1.3426426295111566); (-1.2499999533408919,1.3426426295111566) – (-1.2399999537117787,1.3340327325612622); (-1.2399999537117787,1.3340327325612622) – (-1.2299999540826656,1.3256876985736359); (-1.2299999540826656,1.3256876985736359) – (-1.2199999544535525,1.31762838578569); (-1.2199999544535525,1.31762838578569) – (-1.2099999548244393,1.3098749382806916); (-1.2099999548244393,1.3098749382806916) – (-1.1999999551953262,1.3024467356380485); (-1.1999999551953262,1.3024467356380485) – (-1.189999955566213,1.2953623444944575); (-1.189999955566213,1.2953623444944575) – (-1.1799999559371,1.2886394721369845); (-1.1799999559371,1.2886394721369845) – (-1.1699999563079868,1.2822949222440727); (-1.1699999563079868,1.2822949222440727) – (-1.1599999566788737,1.2763445528851007); (-1.1599999566788737,1.2763445528851007) – (-1.1499999570497605,1.270803236883471); (-1.1499999570497605,1.270803236883471) – (-1.1399999574206474,1.2656848246422978); (-1.1399999574206474,1.2656848246422978) – (-1.1299999577915343,1.2610021095256188); (-1.1299999577915343,1.2610021095256188) – (-1.1199999581624211,1.256766795881646); (-1.1199999581624211,1.256766795881646) – (-1.109999958533308,1.252989469787996); (-1.109999958533308,1.252989469787996) – (-1.0999999589041949,1.2496795725920091); (-1.0999999589041949,1.2496795725920091) – (-1.0899999592750818,1.2468453773123005); (-1.0899999592750818,1.2468453773123005) – (-1.0799999596459686,1.2444939679605242); (-1.0799999596459686,1.2444939679605242) – (-1.0699999600168555,1.242631221835033); (-1.0699999600168555,1.242631221835033) – (-1.0599999603877424,1.241261794830696); (-1.0599999603877424,1.241261794830696) – (-1.0499999607586292,1.2403891098015867); (-1.0499999607586292,1.2403891098015867) – (-1.039999961129516,1.2400153480056302); (-1.039999961129516,1.2400153480056302) – (-1.029999961500403,1.2401414436525957); (-1.029999961500403,1.2401414436525957) – (-1.0199999618712898,1.2407670815690588); (-1.0199999618712898,1.2407670815690588) – (-1.0099999622421767,1.24189069798617); (-1.0099999622421767,1.24189069798617) – (-0.9999999626130635,1.243509484448263); (-0.9999999626130635,1.243509484448263) – (-0.9899999629839502,1.2456193948325303); (-0.9899999629839502,1.2456193948325303) – (-0.979999963354837,1.2482151554622225); (-0.979999963354837,1.2482151554622225) – (-0.9699999637257237,1.2512902782880924); (-0.9699999637257237,1.2512902782880924) – (-0.9599999640966105,1.2548370771051387); (-0.9599999640966105,1.2548370771051387) – (-0.9499999644674972,1.2588466867641128); (-0.9499999644674972,1.2588466867641128) – (-0.939999964838384,1.2633090853297753); (-0.939999964838384,1.2633090853297753) – (-0.9299999652092708,1.2682131191305113); (-0.9299999652092708,1.2682131191305113) – (-0.9199999655801575,1.2735465306366995); (-0.9199999655801575,1.2735465306366995) – (-0.9099999659510443,1.2792959890981501); (-0.9099999659510443,1.2792959890981501) – (-0.899999966321931,1.285447123864035); (-0.899999966321931,1.285447123864035) – (-0.8899999666928178,1.2919845603020295); (-0.8899999666928178,1.2919845603020295) – (-0.8799999670637045,1.2988919582268805); (-0.8799999670637045,1.2988919582268805) – (-0.8699999674345913,1.3061520527423602); (-0.8699999674345913,1.3061520527423602) – (-0.859999967805478,1.31374669739451); (-0.859999967805478,1.31374669739451) – (-0.8499999681763648,1.3216569095283213); (-0.8499999681763648,1.3216569095283213) – (-0.8399999685472516,1.3298629177344852); (-0.8399999685472516,1.3298629177344852) – (-0.8299999689181383,1.3383442112676132); (-0.8299999689181383,1.3383442112676132) – (-0.8199999692890251,1.3470795913124172); (-0.8199999692890251,1.3470795913124172) – (-0.8099999696599118,1.3560472239697026); (-0.8099999696599118,1.3560472239697026) – (-0.7999999700307986,1.3652246948297415); (-0.7999999700307986,1.3652246948297415) – (-0.7899999704016853,1.3745890649966188); (-0.7899999704016853,1.3745890649966188) – (-0.7799999707725721,1.384116928423523); (-0.7799999707725721,1.384116928423523) – (-0.7699999711434589,1.3937844704156688); (-0.7699999711434589,1.3937844704156688) – (-0.7599999715143456,1.4035675271546262); (-0.7599999715143456,1.4035675271546262) – (-0.7499999718852324,1.4134416460952783); (-0.7499999718852324,1.4134416460952783) – (-0.7399999722561191,1.4233821470844459); (-0.7399999722561191,1.4233821470844459) – (-0.7299999726270059,1.4333641840484117); (-0.7299999726270059,1.4333641840484117) – (-0.7199999729978926,1.44336280709516); (-0.7199999729978926,1.44336280709516) – (-0.7099999733687794,1.45335302487611); (-0.7099999733687794,1.45335302487611) – (-0.6999999737396662,1.4633098670514657); (-0.6999999737396662,1.4633098670514657) – (-0.6899999741105529,1.473208446703058); (-0.6899999741105529,1.473208446703058) – (-0.6799999744814397,1.483024022538673); (-0.6799999744814397,1.483024022538673) – (-0.6699999748523264,1.4927320607323942); (-0.6699999748523264,1.4927320607323942) – (-0.6599999752232132,1.5023082962463854); (-0.6599999752232132,1.5023082962463854) – (-0.6499999755940999,1.5117287934808439); (-0.6499999755940999,1.5117287934808439) – (-0.6399999759649867,1.5209700061005302); (-0.6399999759649867,1.5209700061005302) – (-0.6299999763358735,1.530008835888338); (-0.6299999763358735,1.530008835888338) – (-0.6199999767067602,1.538822690478805); (-0.6199999767067602,1.538822690478805) – (-0.609999977077647,1.547389539827256); (-0.609999977077647,1.547389539827256) – (-0.5999999774485337,1.5556879712734364); (-0.5999999774485337,1.5556879712734364) – (-0.5899999778194205,1.5636972430620084); (-0.5899999778194205,1.5636972430620084) – (-0.5799999781903072,1.5713973361861333); (-0.5799999781903072,1.5713973361861333) – (-0.569999978561194,1.57876900442456); (-0.569999978561194,1.57876900442456) – (-0.5599999789320808,1.5857938224471482); (-0.5599999789320808,1.5857938224471482) – (-0.5499999793029675,1.592454231868599); (-0.5499999793029675,1.592454231868599) – (-0.5399999796738543,1.598733585135267); (-0.5399999796738543,1.598733585135267) – (-0.529999980044741,1.6046161871353741); (-0.529999980044741,1.6046161871353741) – (-0.5199999804156278,1.61008733442861); (-0.5199999804156278,1.61008733442861) – (-0.5099999807865145,1.615133351997074); (-0.5099999807865145,1.615133351997074) – (-0.4999999811574013,1.6197416274256957); (-0.4999999811574013,1.6197416274256957) – (-0.48999998152828805,1.6239006424267013); (-0.48999998152828805,1.6239006424267013) – (-0.4799999818991748,1.6276000016293324); (-0.4799999818991748,1.6276000016293324) – (-0.46999998227006157,1.6308304585628566); (-0.46999998227006157,1.6308304585628566) – (-0.4599999826409483,1.633583938767929); (-0.4599999826409483,1.633583938767929) – (-0.4499999830118351,1.635853559978532); (-0.4499999830118351,1.635853559978532) – (-0.43999998338272184,1.6376336493240555); (-0.43999998338272184,1.6376336493240555) – (-0.4299999837536086,1.6389197575085181); (-0.4299999837536086,1.6389197575085181) – (-0.41999998412449535,1.6397086699314898); (-0.41999998412449535,1.6397086699314898) – (-0.4099999844953821,1.6399984147229167); (-0.4099999844953821,1.6399984147229167) – (-0.39999998486626887,1.6397882676717699); (-0.39999998486626887,1.6397882676717699) – (-0.3899999852371556,1.6390787540361964); (-0.3899999852371556,1.6390787540361964) – (-0.3799999856080424,1.6378716472306456); (-0.3799999856080424,1.6378716472306456) – (-0.36999998597892914,1.6361699643932586); (-0.36999998597892914,1.6361699643932586) – (-0.3599999863498159,1.6339779588445942); (-0.3599999863498159,1.6339779588445942) – (-0.34999998672070265,1.6313011094565435); (-0.34999998672070265,1.6313011094565435) – (-0.3399999870915894,1.6281461069580039); (-0.3399999870915894,1.6281461069580039) – (-0.32999998746247616,1.6245208372115436); (-0.32999998746247616,1.6245208372115436) – (-0.3199999878333629,1.6204343615028527); (-0.3199999878333629,1.6204343615028527) – (-0.3099999882042497,1.6158968938922502); (-0.3099999882042497,1.6158968938922502) – (-0.29999998857513643,1.610919775684854); (-0.29999998857513643,1.610919775684854) – (-0.2899999889460232,1.6055154470832265); (-0.2899999889460232,1.6055154470832265) – (-0.27999998931690995,1.5996974160933504); (-0.27999998931690995,1.5996974160933504) – (-0.2699999896877967,1.5934802247616502); (-0.2699999896877967,1.5934802247616502) – (-0.25999999005868346,1.5868794128274517); (-0.25999999005868346,1.5868794128274517) – (-0.24999999042957022,1.5799114788817303); (-0.24999999042957022,1.5799114788817303) – (-0.23999999080045697,1.5725938391292293); (-0.23999999080045697,1.5725938391292293) – (-0.22999999117134373,1.5649447838570203); (-0.22999999117134373,1.5649447838570203) – (-0.2199999915422305,1.5569834317183147); (-0.
arxiv_0000051
representation of manifolds. To illustrate these ideas, consider the case of *d* = 2. In this case the *d* − 1 manifolds are disjoint unions of circles, and the cobordisms are ``closed string" worldsheets with circular ingoing and outgoing boundaries. All 2d closed, oriented manifolds can be generated by gluing a finite set of cobordisms (read from top to bottom): $$\begin{aligned} {\quad \vcenter{\hbox{\tikz{{ \begin{scope}[xshift=0,yshift=0] \filldraw[left color=lightgray, right color=white] (-0.25,0) to [out=-90,in=180] (0,-0.33) to [in=-90,out=0] (0.25,0) to (0.75,0) to [in=90,out=-90] (0.25,-1) to [out=-90,in=-90] (-0.25,-1) to [in=-90,out=90] (-0.75,0); \filldraw[left color=white,right color=lightgray] (-0.5,0) ellipse (0.25 and 0.1); \filldraw[left color=white,right color=lightgray] (0.5,0) ellipse (0.25 and 0.1); \draw[dotted] (0.25,-1) arc (0:180:0.25 and 0.1); \end{scope} }}}} \quad},\quad {\quad \vcenter{\hbox{\tikz{{ \begin{scope}[xshift=0,yshift=0] \filldraw[right color=white,left color=lightgray] (-0.25,0) to [out=90,in=180] (0,0.33) to [in=90,out=0] (0.25,0) to [in=-90,out=-90] (-0.25,0); \draw[dotted] (0.25,0) arc (0:180:0.25 and 0.1); \end{scope} }}}} \quad},\quad {\quad \vcenter{\hbox{\tikz{{ \begin{scope}[xshift=0,yshift=0] \filldraw[right color=white,left color=lightgray] (-0.25,0) to [out=-90,in=180] (0,-0.33) to [in=-90,out=0] (0.25,0); \filldraw[left color=white,right color=lightgray] (0,0) ellipse (0.25 and 0.1); \end{scope} }}}} \quad},\quad {\quad \vcenter{\hbox{\tikz{{\begin{scope}[xshift=0,yshift=0] \filldraw[left color=lightgray, right color=white] (-0.25,-1) to [out=90,in=180] (0,-0.66) to [in=90,out=0] (0.25,-1) to [out=-90,in=-90] (0.75,-1) to [in=-90,out=90] (0.25,0) to (-0.25,0) to [in=90,out=-90] (-0.75,-1) to [out=-90,in=-90] (-0.25,-1); \filldraw[left color=white,right color=lightgray] (0,0) ellipse (0.25 and 0.1); \draw[dotted] (-0.25,-1) arc (0:180:0.25 and 0.1); \draw[dotted] (0.75,-1) arc (0:180:0.25 and 0.1); \end{scope} }}}} \quad} \label{eq:gen}.\end{aligned}$$ The vector space *Z*(*S*1) ≡ H*S*1 is endowed with a multiplication rule due to the pair of pants cobordism, which gives the analog of the OPE coefficients: $$\begin{aligned} {\quad \vcenter{\hbox{\tikz{{ \begin{scope}[xshift=0,yshift=0] \filldraw[left color=lightgray, right color=white] (-0.25,0) to [out=-90,in=180] (0,-0.33) to [in=-90,out=0] (0.25,0) to (0.75,0) to [in=90,out=-90] (0.25,-1) to [out=-90,in=-90] (-0.25,-1) to [in=-90,out=90] (-0.75,0); \filldraw[left color=white,right color=lightgray] (-0.5,0) ellipse (0.25 and 0.1); \filldraw[left color=white,right color=lightgray] (0.5,0) ellipse (0.25 and 0.1); \draw[dotted] (0.25,-1) arc (0:180:0.25 and 0.1); \end{scope} }}}} \quad}:\mathcal{H}\_{S1}\otimes \mathcal{H}\_{S1} &\to \mathcal{H}\_{S1},{\nonumber\\}\ket{i}\otimes \ket{j} &\to \sum\_{k} c\_{ijk} \ket{k}.\end{aligned}$$ This makes H*S*1 an algebra, with a unit, co-unit, and co-product given by the 3 rightmost diagrams of. The properties of this algebra are given by the sewing relations of Bord*d*, *d* − 1closed. These ensure that different ways of cutting up a manifold into the generators should give the same partition function when gluing them back together. These TQFT sewing relations imply that H*S*1 is a *commutative Frobenius algebra*. A simple example is given by a ``classical limit" of a 1+1d rational CFT, in which we take all primary dimensions *h**i*, *j*, *k* to go to zero. Then the OPE coefficients *C**i**j**k* go to the structure constants *c**i**j**k* of a TQFT (see e.g. ). ### Extended open-closed TQFT in 2d For applications to the two-sided black holes in JT and 3d gravity, we need to introduce (*d* − 1)-manifolds with boundaries. These codimension-2 boundaries are decorated by a label denoting an abstract boundary condition. This is a familiar aspect of boundary conformal field theory in 2D, where a spatial boundary is labeled by a conformal boundary condition that satisfies a set of sewing constraints. An *extended* TQFT incorporates the extra structure associated with codimension-2 or higher boundaries into the TQFT framework. Different formulations of extended TQFT exist in the mathematical literature. The approach we follow in this work, based on a version called ``open-closed" TQFT, is tailored for computations of entanglement entropy. As before, we begin by introducing these ideas in *d* = 2. An open TQFT is a functor *Z* $$\begin{aligned} Z: \mathbb{Bord}^{\text{open}}\_{2,1} \to \mathbb{Vect}\_{\mathbb{C}},\end{aligned}$$ where Bord2, 1open is a geometric category whose objects are intervals $\hspace{-0.25cm}{\quad \vcenter{\hbox{\tikz{{}}} \quad}{ \begin{scope}[xshift=0,yshift=0] \draw (-0.25,.25) -- (0.25,.25); \end{scope} }; \node at (-.35,.22) {\small $a$};\node at (.35,.25) {\small $b$}}\hspace{-0.25cm}$ with labeled endpoints, and the morphisms are cobordisms between these intervals. A vector space H*a**b* is assigned to a labeled interval, and a cobordism maps ingoing intervals into outgoing intervals. The open TQFT makes a distinction between the ``gluing“ boundary corresponding to the initial and final slice, and the ``free boundary” which describes the time evolution of the interval endpoint where a boundary condition is assigned. A generating set of cobordisms for the open TQFT is given by $$\begin{aligned} \label{opencob} {\quad \vcenter{\hbox{\tikz{{\begin{scope}[xshift=0,yshift=0] \draw (-0.25,0) -- (0.25,0); \draw (-0.25,0) to [out=-90,in=180] (0,-0.33) to [in=-90,out=0] (0.25,0); \end{scope} };\node at (0,-0.45) {\small $a$}}}} \quad},\quad {\quad \vcenter{\hbox{\tikz{{ \begin{scope}[xshift=0,yshift=0] \draw (-0.75,0) -- (-0.25,0) to [out=-90,in=180] (0,-0.33) to [in=-90,out=0] (0.25,0) -- (0.75,0) to [in=90,out=-90] (0.25,-1); \draw (-0.25,-1) -- (0.25,-1); \draw (-0.75,0) to [in=90,out=-90] (-0.25,-1); \end{scope} };\node at (0,0) {\small $b$};\node at (-.6,-.6) {\small $a$};\node at (.6,-.6) {\small $c$}}}} \quad}, \quad {\quad \vcenter{\hbox{\tikz{{\begin{scope}[xshift=0,yshift=0] \draw (-0.25,0) -- (0.25,0); \draw (-0.25,0) to [out=90,in=180] (0,0.33) to [in=90,out=0] (0.25,0); \end{scope} };\node at (0,0.45) {\small $a$}}}} \quad}, \quad {\quad \vcenter{\hbox{\tikz{{ \begin{scope}[xshift=0,yshift=0] \draw (-0.75,-1) -- (-0.25,-1) to [out=90,in=180] (0,-0.66) to [in=90,out=0] (0.25,-1) -- (0.75,-1) to [in=-90,out=90] (0.25,0) -- (-0.25,0) to [in=90,out=-90] (-0.75,-1); \end{scope} };\node at (0,-0.85) {\small $b$};\node at (-.65,-.35) {\small $a$};\node at (.65,-.35) {\small $c$} }}} \quad}.\end{aligned}$$ Notice that open cobordisms satisfy a superselection rule for the boundary labels : these are always unchanged along a free boundary. As in the closed TQFT, the total Hilbert space  ⊕ *a*, *b*H*a**b* of the open TQFT forms a Frobenius algebra, with the ``open string fusion" (second diagram of ) as the multiplication rule. However in this case the Frobenius algebra need not be commutative. We have now defined two Frobenius algebras describing the closed and open sector of a TQFT ``path integral". The combined, *open-closed* TQFT relates these two sectors via algebra homomorphisms which split the circle into an interval and vice versa: $$\begin{aligned} \label{zip} {\quad \vcenter{\hbox{\tikz{{ \begin{scope}[xshift=0,yshift=0] \draw (-0.25,-1) -- (0.25,-1); \filldraw[right color=white,left color=lightgray] (-0.25,0) to (-0.25,-1) to [out=90,in=225] (0,-0.5) to [out=-45,in=90] (0.25,-1) to (0.25,0); \filldraw[left color=white,right color=lightgray] (0,0) ellipse (0.25 and 0.1); \end{scope} };\node at (0,-.7) {\small $a$}}}} \quad},\quad {\quad \vcenter{\hbox{\tikz{ { \begin{scope}[xshift=0,yshift=0] \draw (-0.25,0) -- (0.25,-0); \filldraw[right color=white,left color=lightgray] (-0.25,-1) to (-0.25,0) to [out=-90,in=135] (0,-0.5) to [out=45,in=-90] (0.25,0) to (0.25,-1) to [in=-90,out=-90] (-0.25,-1); \draw[dotted] (0.25,-1) arc (0:180:0.25 and 0.1); \end{scope} };\node at (0,-.3) {\small $a$}}}} \quad}\end{aligned}$$ The resulting combined open-closed TQFT satisfies sewing relations making the total Hilbert space  ⊕ *a*, *b*H*a**b* a *knowledgeable* Frobenius algebra. ### The higher category viewpoint and the boundary category The gluing of manifolds along codimension-2 boundaries introduces a higher categorical structure that we now explain. The idea is to treat the labeled codimension-2 surfaces as objects in a geometric 2-category Bord*d*, *d* − 1, *d* − 2, and iterate the same structure that was previously associated with (*d* − 1)-dimensional objects. In particular, the 1-morphisms of this category are (*d* − 1)-dimensional cobordisms between these (*d* − 2)-dimensional objects. Gluing along (*d* − 2)-manifolds corresponds to composition of these 1-morphisms. For *d* ≥ 3, we can have a (*d* − 1)-manifold *M* with a single outgoing boundary ∂*M*. In this case the TQFT assigns to *M* an object *Z*(∂*M*) of the boundary category B*s*. The *d*-dimensional cobordisms are now viewed as 2-morphisms (morphisms between 1-morphisms), and the composition of these 2-morphisms corresponds to gluing along (*d* − 1)-manifolds. These two types of gluing satisfy natural compatibility conditions. Let’s illustrate these ideas in *d* = 2. The objects of Bord2, 1, 0 are labeled points and the 1-morphisms are labeled intervals $\hspace{-0.25cm}{\quad \vcenter{\hbox{\tikz{{}}} \quad}{ \begin{scope}[xshift=0,yshift=0] \draw (-0.25,.25) -- (0.25,.25); \end{scope} }; \node at (-.35,.22) {\small $a$};\node at (.35,.25) {\small $b$}}\hspace{-0.25cm}$. The composition of these 1-morphisms is determined by the open string fusion, which satisfies the requirement of associativity. The 2-morphisms are strips swept out by an interval: $$\begin{aligned} {\quad \vcenter{\hbox{\tikz{{ \begin{scope}[xshift=0,yshift=0] \filldraw[fill=white,draw=black] (-0.25,0) rectangle (0.25,-1); \end{scope} };\node at (-.4,-.5) {\small $a$};\node at (.4,-.5) {\small $b$} }}} \quad}\end{aligned}$$ The compatibility condition says we can interchange the order of vertical versus horizontal gluing. In higher categorical language, a *d* = 2 extended TQFT is defined to be a functor $$\begin{aligned} Z: \mathbb{Bord}\_{2,1,0} \to \mathcal{B}\_{S}, \label{eq:2d-category}\end{aligned}$$ where B*S* is a 2-category of boundary conditions. For the *d* = 2 examples considered in this paper, B*S* is the category of algebras, with bimodules as the 1-morphisms and bimodule homomorphisms as the 2-morphisms.[44](#fn44) The composition of 1-morphisms is the relative tensor product of modules, which coincides with the entangling product. What do we gain by this higher categorical description? Note that in any dimension the extension can be iterated all the way down to a point, allowing us to consistently glue along manifolds of all codimension. This increases the computational power of the TQFT when *d* ≥ 3. The enhanced computational power arises because the mathematical structure becomes more refined as we go to higher codimensions, capturing more information about the total theory. For the particular *d* = 2 and *d* = 3 theories we consider, the partition functions, Hilbert spaces, and quantum information measures can be fully determined by the boundary category assigned to a codimension-2 surface, i.e. an entangling surface.[45](#fn45) The shrinkable boundary condition in extended TQFT -------------------------------------------------- To discuss entangling surfaces, we introduce the shrinkable boundary condition *e* into the open-closed TQFT formalism. For simplicity, consider first the open-closed TQFT with only this boundary condition. Thus there is a single interval Hilbert space H*e**e* with the multiplication rule $$\begin{aligned} {\quad \vcenter{\hbox{\tikz{{ \begin{scope}[xshift=0,yshift=0] \draw (-0.75,0) -- (-0.25,0) to [out=-90,in=180] (0,-0.33) to [in=-90,out=0] (0.25,0) -- (0.75,0) to [in=90,out=-90] (0.25,-1); \draw (-0.25,-1) -- (0.25,-1); \draw (-0.75,0) to [in=90,out=-90] (-0.25,-1); \end{scope} };\node at (0,0) {\small $e$};\node at (-.6,-.6) {\small $e$};\node at (.6,-.6) {\small $e$}}}} \quad} :\mathcal{H}\_{ee} \otimes \mathcal{H}\_{ee} \to \mathcal{H}\_{ee},\end{aligned}$$ required to satisfy all sewing relations of a knowledgeable Frobenius algebra. In addition, we impose the shrinkable boundary condition given by : $$\begin{aligned} \label{emult} {\quad \vcenter{\hbox{\tikz{{\begin{scope}[xshift=0,yshift=0] \draw (-0.25,0) -- (0.25,0); \draw (-0.25,0) to [out=90,in=180] (0,0.33) to [in=90,out=0] (0.25,0); \end{scope} } { \begin{scope}[xshift=0,yshift=0] \draw (-0.25,0) -- (0.25,-0); \filldraw[right color=white,left color=lightgray] (-0.25,-1) to (-0.25,0) to [out=-90,in=135] (0,-0.5) to [out=45,in=-90] (0.25,0) to (0.25,-1) to [in=-90,out=-90] (-0.25,-1); \draw[dotted] (0.25,-1) arc (0:180:0.25 and 0.1); \end{scope} }; \node at (0,.46) {\small $e$}}}} \quad} ={\quad \vcenter{\hbox{\tikz{{ \begin{scope}[xshift=0,yshift=0] \filldraw[right color=white,left color=lightgray] (-0.25,0) to [out=90,in=180] (0,0.33) to [in=90,out=0] (0.25,0) to [in=-90,out=-90] (-0.25,0); \draw[dotted] (0.25,0) arc (0:180:0.25 and 0.1); \end{scope} }}}} \quad}\end{aligned}$$ This implies all holes created by the *e* boundary can be closed. The factorization map on the interval is then given by the co-product of the Frobenius algebra H*e**e*: $$\begin{aligned} \label{cop} {\quad \vcenter{\hbox{\tikz{{ \begin{scope}[xshift=0,yshift=0] \draw (-0.75,-1) -- (-0.25,-1) to [out=90,in=180] (0,-0.66) to [in=90,out=0] (0.25,-1) -- (0.75,-1) to [in=-90,out=90] (0.25,0) -- (-0.25,0) to [in=90,out=-90] (-0.75,-1); \end{scope} }\node at (0,-1) {\small $e$};\node at (-.5,-.3) {\small $e$};\node at (.5,-.3) {\small $e$}}}} \quad}\end{aligned}$$ For 2d gauge theories, we give an explicit realization of the Frobenius algebra H*e**e* in appendix [app:2dgauge]. In the open-closed TQFT formalism, the *e* label essentially turns a ``free“ boundary into a gluing boundary along which we can fuse or split an interval using and. As we alluded to in section [sec:path-integral], this should be viewed as cutting and gluing along a codimension-2 entangling surface. This is a useful way to view codimension-2 gluings because it gives an open TQFT interpretation to any replica calculation of entanglement entropy on a closed 2d manifold: one simply introduces holes with the shrinkable boundary condition at each connected component of the entangling surface. In particular, the sphere can be interpreted either as a “closed string” amplitude, or an “open string” trace as shown in. The extended TQFT language provides a precise algebraic description of this type of open-closed duality. As illustrated in section [sec:facgrav], such a duality provides a mechanism by which the Euclidean gravity path integral can ``know” about black hole microstates. Since this is an important point, let us spell out the details. First, consider computing *Z*(*S*2) in the unextended TQFT by making a codimension-1 cut along the equatorial circle. *Z*(*S*2) is then obtained by composing $$\text{the unit}\,\,={\quad \vcenter{\hbox{\tikz{{ \begin{scope}[xshift=0,yshift=0] \filldraw[right color=white,left color=lightgray] (-0.25,0) to [out=90,in=180] (0,0.33) to [in=90,out=0] (0.25,0) to [in=-90,out=-90] (-0.25,0); \draw[dotted] (0.25,0) arc (0:180:0.25 and 0.1); \end{scope} }}}} \quad} \,\,\text{with the co-unit} \,\,\operatorname{Tr}\_{\text{Fr}}(\cdot)={\quad \vcenter{\hbox{\tikz{{ \begin{scope}[xshift=0,yshift=0] \filldraw[right color=white,left color=lightgray] (-0.25,0) to [out=-90,in=180] (0,-0.33) to [in=-90,out=0] (0.25,0); \filldraw[left color=white,right color=lightgray] (0,0) ellipse (0.25 and 0.1); \end{scope} }}}} \quad}\,,$$ which is the trace function for the closed Frobenius algebra H*S*1. This gives $$\begin{aligned} \label{TrFr1-m} Z(S^2) = \operatorname{Tr}\_{\text{Fr}}(\mathbb{1}).\end{aligned}$$ These operations are fixed by the Frobenius algebra corresponding to the closed TQFT. However the trace corresponds to the evaluation of an amplitude in the “closed string channel” and is not relevant to counting quantum states. On the other hand, a state-counting interpretation of *Z*(*S*2) can be obtained by introducing codimension-2 cuts along the equator. This is achieved by applying the factorization map to the unit, which gives a cobordism describing the ``thermofield double state": $$\begin{aligned} {\quad \vcenter{\hbox{\tikz{{ \begin{scope}[xshift=0,yshift=0] \filldraw[right color=white,left color=lightgray] (-0.25,0) to [out=90,in=180] (0,0.33) to [in=90,out=0] (0.25,0) to [in=-90,out=-90] (-0.25,0); \draw[dotted] (0.25,0) arc (0:180:0.25 and 0.1); \end{scope} } }}} \quad} \to {\quad \vcenter{\hbox{\tikz{{ \begin{scope}[xshift=0,yshift=0] \filldraw[right color=white,left color=lightgray] (-0.25,0) to [out=90,in=180] (0,0.33) to [in=90,out=0] (0.25,0) to [in=-90,out=-90] (-0.25,0); \draw[dotted] (0.25,0) arc (0:180:0.25 and 0.1); \end{scope} } { \begin{scope}[xshift=0,yshift=0] \draw (-0.25,-1) -- (0.25,-1); \filldraw[right color=white,left color=lightgray] (-0.25,0) to (-0.25,-1) to [out=90,in=225] (0,-0.5) to [out=-45,in=90] (0.25,-1) to (0.25,0); \filldraw[right color=white,left color=lightgray] (0,0) ellipse (0.25 and 0.1); \end{scope} } { \begin{scope}[xshift=0cm,yshift=-1cm] \draw (-0.75,-1) -- (-0.25,-1) to [out=90,in=180] (0,-0.66) to [in=90,out=0] (0.25,-1) -- (0.75,-1) to [in=-90,out=90] (0.25,0) -- (-0.25,0) to [in=90,out=-90] (-0.75,-1); \end{scope} }}}} \quad} \end{aligned}$$ A similar procedure acting on the co-unit gives a second algebraic characterization of *Z*(*S*2): $$\begin{aligned} Z(S^{2}) = {\quad \vcenter{\hbox{\tikz{{ \begin{scope}[xshift=0,yshift=0] \filldraw[right color=white,left color=lightgray] (-0.25,0) to [out=90,in=180] (0,0.33) to [in=90,out=0] (0.25,0) to [in=-90,out=-90] (-0.25,0); \draw[dotted] (0.25,0) arc (0:180:0.25 and 0.1); \end{scope} } { \begin{scope}[xshift=0,yshift=0] \draw (-0.25,-1) -- (0.25,-1); \filldraw[right color=white,left color=lightgray] (-0.25,0) to (-0.25,-1) to [out=90,in=225] (0,-0.5) to [out=-45,in=90] (0.25,-1) to (0.25,0); \filldraw[right color=white,left color=lightgray] (0,0) ellipse (0.25 and 0.1); \end{scope} } { \begin{scope}[xshift=0cm,yshift=-1cm] \draw (-0.75,-1) -- (-0.25,-1) to [out=90,in=180] (0,-0.66) to [in=90,out=0] (0.25,-1) -- (0.75,-1) to [in=-90,out=90] (0.25,0) -- (-0.25,0) to [in=90,out=-90] (-0.75,-1); \end{scope} } { \begin{scope}[xshift=0,yshift=-4cm] \filldraw[right color=white,left color=lightgray] (-0.25,0) to [out=-90,in=180] (0,-0.33) to [in=-90,out=0] (0.25,0); \filldraw[left color=white,right color=lightgray] (0,0) ellipse (0.25 and 0.1); \end{scope} } { \begin{scope}[xshift=0,yshift=-3cm] \draw (-0.25,0) -- (0.25,-0); \filldraw[right color=white,left color=lightgray] (-0.25,-1) to (-0.25,0) to [out=-90,in=135] (0,-0.5) to [out=45,in=-90] (0.25,0) to (0.25,-1) to [in=-90,out=-90] (-0.25,-1); \draw[dotted] (0.25,-1) arc (0:180:0.25 and 0.1); \end{scope} }{ \begin{scope}[xshift=0cm,yshift=-2cm] \draw (-0.75,0) -- (-0.25,0) to [out=-90,in=180] (0,-0.33) to [in=-90,out=0] (0.25,0) -- (0.75,0) to [in=90,out=-90] (0.25,-1); \draw (-0.25,-1) -- (0.25,-1); \draw (-0.75,0) to [in=90,out=-90] (-0.25,-1); \end{scope} } }}} \quad} ={\quad \vcenter{\hbox{\tikz{{ \begin{scope}[xshift=0,yshift=0] \draw (-0.75,0) -- (-0.25,0) to [out=90,in=180] (0,0.33) to [in=90,out=0] (0.25,0) -- (0.75,0) to [out=90,in=0] (0,0.83) to [out=180,in=90] (-0.75,0); \end{scope} } { \begin{scope}[xshift=0,yshift=0] \draw (-0.75,0) -- (-0.25,0) to [out=-90,in=180] (0,-0.33) to [in=-90,out=0] (0.25,0) -- (0.75,0) to [out=-90,in=0] (0,-0.83) to [out=180,in=-90] (-0.75,0); \end{scope} }}}} \quad} = \operatorname{Tr}\_{\mathcal{H}\_{ee}}(\mathbf{id}\_{\mathcal{H}\_{ee}}) \end{aligned}$$ In the second equality, an open-closed sewing relation was used. The final expression for *Z*(*S*2) is a quantum mechanical trace on the Hilbert space assigned to a subregion. In other words, unlike, this trace counts the dimension of an ``open string" Hilbert space H*e**e*. In appendix [sec:CS-review], we give a *d* = 3 TQFT analog of this calculation in the context of Chern-Simons theory with compact gauge group. #### Coset and shrinkable boundary conditions. In the fully extended higher category description of 2d gauge theory, a point is assigned to the category of algebras. In particular, the shrinkable boundary label *e* corresponds to the object C[G], viewed as a Frobenius algebra: we refer the reader to for the details of this construction. Here, we observe that the Peter-Weyl theorem implies that the interval Hilbert space H*e**e* = L2(G) is indeed a bimodule of C[G], consistent with the definition of 1-morphisms in the category of algebras, as stated below. The relative tensor product which describes the composition of 1-morphisms just corresponds to the entangling product defined by the surface symmetry group *G*. Consider now a subgroup *H*. In the extended TQFT language, two objects *e*, *H* are introduced in the boundary category B*S* of algebras. These correspond to the group algebras C[G], C[G/H]. Following the TQFT rules, a bimodule of these algebras is assigned to the intervals $\hspace{-0.25cm} {\quad \vcenter{\hbox{\tikz{{}}} \quad}{ \begin{scope}[xshift=0,yshift=0] \draw (-0.25,.25) -- (0.25,.25); \end{scope} };\node at (-.35,.25) {\small $e$};\node at (.37,.28) {\small $H$}}\hspace{-0.25cm}$ and $\hspace{-0.25cm} {\quad \vcenter{\hbox{\tikz{{}}} \quad}{ \begin{scope}[xshift=0,yshift=0] \draw (-0.25,.25) -- (0.25,.25); \end{scope} };\node at (-.37,.28) {\small $H$};\node at (.35,.25) {\small $e$}}\hspace{-0.25cm}$. Using the relative tensor product, these two intervals can be glued along the endpoint labeled by *e*. This is the entangling product producing the interval $\hspace{-0.25cm} {\quad \vcenter{\hbox{\tikz{{}}} \quad}{ \begin{scope}[xshift=0,yshift=0] \draw (-0.25,.25) -- (0.25,.25); \end{scope} };\node at (-.37,.28) {\small $H$};\node at (.37,.28) {\small $H$}}\hspace{-0.25cm}$ and the Hilbert space $$\begin{aligned} \mathcal{H}\_{HH}= \bigoplus\_{R} \mathcal{P}\_{R,0}\otimes \mathcal{P}\_{R,0}.\end{aligned}$$ The coset boundary condition *H* is relevant to the one-sided black hole states in JT and 3d gravity, where the gluing process just described provides a categorical description of how entanglement generates spacetime, or ER=EPR. JT gravity as an extended TQFT ------------------------------ Let us now apply the extended TQFT paradigm to JT gravity. As alluded to earlier, we interpret *e* as the semi-group algebra C[SL+(2, R)]. The interval Hilbert space H*e**e* is a bimodule of C[SL+(2, R)], which we identify as the space of functions on the semi-group $$\begin{aligned} \mathcal{H}\_{ee}=L^{2}(\text{SL}^{+}(2,\mathbb{R})).\end{aligned}$$ As we saw previously, this space supports the left-right regular representation of the semigroup, and has a basis given by representation matrix elements of SL+(2, R). In this basis, we can define the Frobenius algebra structure of H*e**e* by $$\begin{aligned} &{\quad \vcenter{\hbox{\tikz{ {\begin{scope}[xshift=0cm,yshift=0cm] \draw (-0.25,0) -- (0.25,0); \draw (-0.25,0) to [out=90,in=180] (0,0.33) to [in=90,out=0] (0.25,0); \end{scope} };\draw (0cm,0.5cm) node {\footnotesize $e$ } }}} \quad}= \int\_{0}^{+\infty} d k\, \int\_{-\infty}^{\infty} ds \sqrt{k \sinh 2 \pi k}\,e^{-\frac{\beta C(k)}{2}} \ket{k\, s\,s}, {\nonumber\\}&{\quad \vcenter{\hbox{\tikz{ { \begin{scope}[xshift=0cm,yshift=0cm] \draw (-0.75,0) -- (-0.25,0) to [out=-90,in=180] (0,-0.33) to [in=-90,out=0] (0.25,0) -- (0.75,0) to [in=90,out=-90] (0.25,-1); \draw (-0.25,-1) -- (0.25,-1); \draw (-0.75,0) to [in=90,out=-90] (-0.25,-1); \end{scope} };\draw (0cm,-.1 cm) node {\footnotesize $e$ } ;\draw (-.5cm,-.7cm) node {\footnotesize $e$ };\draw (.5cm,-.7cm) node {\footnotesize $e$ } }}} \quad} : \ket{k\,s\_{1} s} \otimes \ket{k\, s' s\_{2} } \to \frac{1}{\sqrt{k \sinh 2 \pi k}} \delta(s-s') \ket{k\,s\_{1} s\_{2}}, {\nonumber\\}&{\quad \vcenter{\hbox{\tikz{{\begin{scope}[xshift=0,yshift=0] \draw (-0.25,0) -- (0.25,0); \draw (-0.25,0) to [out=-90,in=180] (0,-0.33) to [in=-90,out=0] (0.25,0); \end{scope} };\draw (0cm,-0.5cm) node {\footnotesize $e$ } }}} \quad}= \int\_{0}^{+\infty} d k\, \int\_{-\infty}^{\infty} ds \sqrt{k \sinh 2 \pi k}\,e^{-\frac{\beta C(k)}{2}} \bra{k\, s\,s}, {\nonumber\\}&{\quad \vcenter{\hbox{\tikz{{ \begin{scope}[xshift=0,yshift=0] \draw (-0.75,-1) -- (-0.25,-1) to [out=90,in=180] (0,-0.66) to [in=90,out=0] (0.25,-1) -- (0.75,-1) to [in=-90,out=90] (0.25,0) -- (-0.25,0) to [in=90,out=-90] (-0.75,-1); \end{scope} }\node at (0,-1) {\small $e$};\node at (-.5,-.3) {\small $e$};\node at (.5,-.3) {\small $e$}}}} \quad}: \ket{k s\_{1} s\_{2} } \to \frac{1}{\sqrt{k \sinh 2 \pi k} } \int\_{-\infty}^{\infty} ds \ket{k\,s\_{1} \,s} \ket {k\, s s\_{2} }.\end{aligned}$$ These are formally identical to the cobordism generators of two-dimensional gauge theory, except that the representation basis have continuous labels. From a formal point of view, this modification does not alter the sewing relations provided that we account for the associated volume factors. For this reason, this set of cobordisms define a Frobenius algebra just as in the compact case. One aspect of the JT TQFT that is a bit more subtle than the compact case involves the identification of the unit element in the Frobenius algebra with the unit element of the group: $$\begin{aligned} \label{unit} {\quad \vcenter{\hbox{\tikz{ {\begin{scope}[xshift=0cm,yshift=0cm] \draw (-0.25,0) -- (0.25,0); \draw (-0.25,0) to [out=90,in=180] (0,0.33) to [in=90,out=0] (0.25,0); \end{scope} };\draw (0cm,0.5cm) node {\footnotesize $e$}}}} \quad}= \,\,\, \ket{g=\mathbb{1}}.\end{aligned}$$ For compact groups, this follows directly from the delta function identity: $$\begin{aligned} \frac{1}{V\_C}\sum\_{R} \sum\_{a} \dim R \,R\_{aa}(g) = \delta(g-\mathbb{1}),\end{aligned}$$ which in turn follows from the completeness relation satisfied by the characters $$\begin{aligned} \label{complete} \frac{1}{V\_C}\sum\_{R} \chi\_{R}(g) \chi\_{R} (g') = \delta(g-g'),\end{aligned}$$ with *g*ʹ set to the identity, and the delta-functions on the right hand side only contain the contribution from the maximal torus. For the Frobenius algebra associated to JT gravity, is still true provided we define the characters in with respect to the appropriate measure. Notice also that the “closed string” cap state ${\quad \vcenter{\hbox{\tikz{{ \begin{scope}[xshift=0,yshift=0] \filldraw[right color=white,left color=lightgray] (-0.25,0) to [out=90,in=180] (0,0.33) to [in=90,out=0] (0.25,0) to [in=-90,out=-90] (-0.25,0); \draw[dotted] (0.25,0) arc (0:180:0.25 and 0.1); \end{scope} } }}} \quad} $ corresponds to the identity element of the Frobenius algebra. This is made explicit by mapping the full disk to the half disk using one of the sewing relations of the extended TQFT: $$\label{axiom1} {\quad \vcenter{\hbox{\tikz{ { \begin{scope}[xshift=0cm,yshift=1cm] \draw (-0.25,-1) -- (0.25,-1); \filldraw[right color=white,left color=lightgray] (-0.25,0) to (-0.25,-1) to [out=90,in=225] (0,-0.5) to [out=-45,in=90] (0.25,-1) to (0.25,0); \filldraw[left color=white,right color=lightgray] (0,0) ellipse (0.25 and 0.1); \end{scope} } { \begin{scope}[xshift=0cm,yshift=1cm] \filldraw[right color=white,left color=lightgray] (-0.25,0) to [out=90,in=180] (0,0.33) to [in=90,out=0] (0.25,0) to [in=-90,out=-90] (-0.25,0); \draw[dotted] (0.25,0) arc (0:180:0.25 and 0.1); \end{scope} };\draw (0cm,0.2cm) node {\footnotesize $e$ } }}} \quad} = {\quad \vcenter{\hbox{\tikz{ {\begin{scope}[xshift=0cm,yshift=0cm] \draw (-0.25,0) -- (0.25,0); \draw (-0.25,0) to [out=90,in=180] (0,0.33) to [in=90,out=0] (0.25,0); \end{scope} };\draw (0cm,0.5cm) node {\footnotesize $e$ } }}} \quad}= \,\, \ket{g=\mathbb{1}}.$$ See appendix [app:unitgroup] for further details. #### Coset boundary conditions in JT To capture the two-sided geometries in figure [BTZ], we introduce the asymptotic boundary labels i*R*, i*L*, which are the gravitational analog of the coset boundary labels *H* described at the end of section [sec:sh-eTQFT]. The extended TQFT assigns the right quotient algebra C[SL+(2, R)/ ∼ ] to the boundary label i*R* and the analogous left quotient to i*L*. The two-sided Hilbert space Hi*L*i*R* is then the 1-morphism corresponding to the bi-module of these algebras given by $$\begin{aligned} \mathcal{H}\_{\mathfrak{i}\_{L}\mathfrak{i}\_{R}} \equiv \text{L}^2( \sim \backslash \text{SL}^{+}(2,\mathbb{R})/\sim).\end{aligned}$$ Similarly, the one-sided Hilbert spaces H*e*i*R* = L2(SL+(2, R)/ ∼ ) are bimodules on which *e* and i*R* act by left and right multiplication. We can think of each element *g* ∈ SL+(2, R)/ ∼  as a Wilson line connecting the bifurcation surface with the asymptotic boundary. The composition of one-morphisms that glue H*e*i*R* to Hi*L**e* into Hi*L*i*R* can be viewed as a categorical description of how entanglement generates spacetime, or ER=EPR. This more abstract perspective is useful in this situation, since defining an explicit path integral over SL+(2, R) gauge fields is difficult due to the non-local constraints that one has to impose on the gauge field. For 3d gravity, we will have to suitably *q*-deform these objects. 3d gravity as an extended TQFT ------------------------------ In three dimensions, the boundary category which defines an extended TQFT is a modular tensor category. For Chern-Simons theory with compact gauge group *G*, this is usually taken to be the representation category of the loop group LG. As reviewed in appendix [sec:CS-review], the Hilbert space factorization, shrinkable boundary condition, and computation of entanglement entropy can be defined within the extended TQFT framework: in particular the entanglement edge modes are identified with elements of the representation category Rep(LG). Since we have identified SL*q*+(2, R) × SL*q*+(2, R) as the edge mode symmetry of bulk 3d gravity, it is natural to ask whether there exists a corresponding extended TQFT with a boundary category given by Rep(SL*q*+(2, R) × SL*q*+(2, R)). A natural candidate for such a TQFT is given by the Teichmüller TQFT. This is the TQFT defined via the quantization of Teichmüller space, which is the classical phase space of 3d gravity.[46](#fn46) More precisely, canonical quantization of Teichmüller space on a Riemann surface Σ introduces a representation of the mapping class group (large diffeomorphisms), and it is well known that such a ``modular functor" defines a 3d TQFT. Concretely, the quantization of Teichmüller space on Σ gives rise to the Hilbert space of Virasoro conformal blocks[47](#fn47) on Σ, and the fact that these conformal blocks define a modular functor means that they satisfy a continuum version of the modular bootstrap equations. There is a wealth of evidence suggesting that Rep(SL*q*+(2, R) × SL*q*+(2, R)) defines an analog of a modular tensor category which underlies the Teichmüller TQFT. In particular, the fusion rules and *F*-matrices dictated by the Virasoro modular bootstrap are given by the Clebsch-Gordon decomposition and 6*j*-symbols of Rep(SL*q*+(2, R) × SL*q*+(2, R)).[48](#fn48) #### The identity Wilson line and the entanglement boundary state Finally, we address one well known feature of the Teichmüller TQFT that might raise concerns about its viability as our gravitational TQFT. Unlike the compact case, the Teichmüller TQFT does not contain the identity Wilson line. This is related to the fact that the spectrum does not contain the vacuum Virasoro block, defined by a path integral on a solid torus with an identity Wilson loop inserted. This seems like a serious defect in the context of entanglement calculations; the entanglement brane boundary state associated to our factorization of the BTZ black hole is given by such a solid torus path integral, in the limit that it shrinks to zero size. However, we circumvented this problem by applying a modular transform to the vacuum block, leading to a superposition of heavy Virasoro blocks with a density of states given by the SL*q*+(2, R) Plancherel measure. This gives the entanglement boundary state $$\begin{aligned} \ket{e} \equiv \int dp \, 4\sqrt{2}\sinh ( 2 \pi b p) \sinh( 2 \pi b^{-1} p) \ket{p},\end{aligned}$$ which dimensionally reduces to the entanglement boundary state in JT gravity corresponding to a bulk disk path integral. Summary ------- We reviewed how Hilbert space factorization can be embedded within the framework of extended TQFT, emphasizing a more abstract categorical perspective that we argue to be applicable to both gauge theories and gravity in d=2 and 3. In its standard formulation, the *extended* TQFT framework assigns a *boundary category* B*S* to any closed oriented codimension-2 manifold *S*. In *d* = 3, B*S* corresponds to a category of representations. Furthermore, a codimension-1 manifold *V* ending on *S* is assigned to an object of the category B\tiny S. For example, for Chern-Simons theory with a compact gauge group, this is a representation of G\tiny S. In physical terms, this is a Hilbert space H\tiny V of the bounded region *V*. These facts are summarized in Figure [category]. [category] To provide a precise framework for the emergence of spacetime from entanglement, we embedded the Euclidean path integral calculations provided in section [sec:facgrav] into the extended TQFT formalism. In particular, we introduced codimension-2 entangling surfaces for which the associated representation category is determined by the shrinkability constraint. This constraint determines the gravitational edge modes necessary for the consistent cutting and gluing of spacetime subregions according to the rules of the extended TQFT. In this language, the ER=EPR paradigm reduces to the composition of morphisms. More generally, we proposed that the extended TQFT formulation of JT and 3d gravity provides the abstract algebraic structure from which classical geometry emerges. Concluding remarks ================== We proposed an effective quantum mechanical model of 3d gravity with Λ < 0 based on the universal features of 2d holographic CFTs at high temperature. From the perspective of a microscopic AdS/CFT, our model is a theory of vacuum Virasoro blocks in the dual channel. While the bulk theory is 3d pure gravity, the boundary theory is given by the geometric action of Alekseev-Shatashvili for a Diff(*S*1) reparametrization field, with *S*1 being the time coordinate. Instead of the usual Chern-Simons formulation, we argued the bulk Euclidean path integral should be defined as an extended TQFT with a boundary category given by Rep(SL*q*+(2, R) × SL*q*+(2, R)). In particular, this viewpoint reproduces the Bekenstein-Hawking entropy as the bulk entanglement entropy of gravity edge modes transforming under the quantum semi-group SL*q*+(2, R) × SL*q*+(2, R). As known from gauge theory, once one introduces surface edge states, the Hilbert space can be split. This means that the gravitational Hilbert space can be split at the expense of introducing gravitational anyons. The entanglement of these anyons then allows us to glue spacetime back together again, providing a concrete picture of how gravitational entanglement generates connected spacetime. Forgetting the exterior of a subregion, the gravitational entanglement entropy associated to it is the black hole entropy. In the TQFT framework, the magic that allows the Euclidean path integral to give the correct counting of black hole microstates is attributed to the TQFT sewing relations which constrains the cutting and gluing of the path integral along codimension-2 surfaces. Our proposal is incomplete. First, we did not give a full description of the putative extended TQFT describing the 3d bulk theory. A natural candidate is given by the Teichmüller TQFT formulated in, since this is the unitary TQFT associated with the representation category Rep(SL*q*+(2, R) × SL*q*+(2, R)).[49](#fn49) Perhaps the most direct way to see the connection between our formulation of 3d gravity and Teichmüller TQFT is to consider the state sum model, in which the Teichmüller TQFT on a hyperbolic 3 manifold is defined by gluing together tetrahedra that gives a triangulation of the 3-manifold. Such a state sum model defines an extension of the TQFT, since it must involve gluing along co-dimension 2 and higher surfaces (like the edges of the tetrahedra). The 6*j*-symbols for Rep(SL*q*+(2, R) × SL*q*+(2, R)) are naturally identified with the tetrahedra in the state sum model, with each edge labelled by a quantum group representation. Indeed, it has been shown that the semi-classical limit of the 6*j*-symbols for Rep(SL*q*+(2, R) × SL*q*+(2, R)) give the quantum volume of these hyperbolic tetrahedra, which is given by the exponential of the Einstein-Hilbert action. This relation suggests that the gluing of cauchy slices along co-dimension 2 entangling surfaces correspond to the fusion of quantum group representations along the edges of the tetrahedra. We leave an explicit verification of this idea to future work. One way to test our proposal would be to see if it gives a consistent set of rules to compute bulk entanglement entropy on different states and with different bi-partitions. In particular, one might wonder if the same gravitational edge modes can reproduce the Ryu-Takayanagi formula via a bulk entanglement entropy calculation: the answer is affirmative and reported on in. We have given a description of gravitational edge modes as anyons: we showed that the bulk entanglement entropy measures their quantum dimension in a manner that is consistent with the black hole entropy. However, our proposal does not provide a microscopic description of the edge modes. Similarly, our description of the subregion Hilbert was abstract: in particular, we did not give a realization of these states in terms of one-sided geometries. To understand the true implications of our proposal, we would need to make a connection to the microstates of the bulk string theory on AdS3 ×  M7. As a first step in this direction, one might add the appropriate matter content due to reduction of string theory on M7, and ask how the SL*q*+(2, R) × SL*q*+(2, R) symmetry is modified. Finally, the TQFT language of local cutting and gluing can only take us so far in gravity. The complication has to do with the group of large diffeomorphisms that would spoil locality. In 2d, this is the non-trivial mapping class group on Riemann surfaces of more complicated topology, leading to non-trivial global considerations. For the application to 3d chiral gravity, see. #### Edge states, hidden symmetries and superselection sectors. The Lorentzian interpretation of the inner boundary in the shrinkable boundary condition is the horizon of the black hole. Hence, the resulting density of states should correspond to the gravitational edge modes localized at the black hole event horizon. The “hidden” property of the edge mode symmetry structure is consistent with the idea that edge modes are not accessible from an outside observer perspective, since it would require an infinite amount of time to measure them.[50](#fn50) In a sense, edge modes can only be probed, when the system is cut or split into two pieces. This picture is also consistent with the physical picture of edge modes in Maxwell gauge theory in Rindler space. One expects that due to infinite redshift, all excitations localized at the black hole event horizon should have vanishing energy for the outside observer, in terms of which the temperature and energies are measured. An explicit mode analysis shows the existence of vanishing *ω*\tiny Rindler = 0 at the Rindler horizon *ρ*\tiny Rindler = 0. Such bulk localization is consistent with these modes not being accessible for these observers in finite time. Since they should commute with any other exterior bulk operator, they should introduce superselection sectors decomposing the bulk Hilbert space H\tiny bulk = ⨁*α*H*α* ⊗ H*ᾱ*. In our 3d gravitational discussion, *α* labels irreducible representations of SLq+(2, R): the *p*± labels describe the macroscopic properties of the black hole states, i.e. the energy *H* and angular momentum *J*. Since the gravitational edge sector is based on a non-abelian group structure, there are additional edge labels: the *s* and *s̄* are continuous microscopic labels counting the black hole degeneracy within a fixed “macroscopic” (*p*+, *p*−) sector. Indeed, we already saw explicitly in JT gravity, see, that the density matrix is block-diagonal in all of these edge labels. An analogous property holds in 3d gravity. These edge quantum numbers (*p*+, *p*−*s*, *s̄*) cannot be changed by any observer who only has access to the exterior of the black hole, or to the right half of the Hilbert space. This is obvious for the *s*, *s̄* quantum numbers since they are localized at the entangling surface. The macroscopic quantum numbers *p*± can also not be changed at the entangling surface in a finite amount of time, since attempting to inject some energy and/or angular momentum into the system only succeeds in creating multiple regions with different *p*± quantum numbers, without changing these quantum numbers at the black hole horizon.[51](#fn51) Note that quantum group edge modes have been studied previously in the classical analysis of pure gravity edge modes, based on the covariant phase space formalism. However, the quantum group that arose in is the ordinary U*q*(sl(2, R)) and not the modular double associated to SLq+(2, R). It would be interesting to understand the relation between the two approaches. #### Generalization to other models and higher dimensions. Since our work specifically applies to *d* = 3, it is a natural question to ask whether the extracted lessons may carry over to higher dimensions. In such systems, gravity includes dynamical degrees of freedom. However, in a perturbative regime in *G*\tiny N, these can be carefully described as a further contribution to the matter sector propagating in a given background. Thus, the origin of the gluing of spacetime is still expected to be carried by gravitational edge modes, with symmetry transformation properties which may appear as “hidden”, to outside observers, as in our 3d context. An important outcome of our work uncovered the lack of descendants at the entangling surface in 3d gravity. An equivalent statement is that the edge sector of 3d gravity has no quantum numbers describing fluctuations along the horizon. This rephrasing makes the potential generalization of this statement to other models and higher dimensions straightforward. To be concrete, let us compare this description of the edge sector to that of dynamical Maxwell theory, as studied in this language by. In spacetime dimensions *d* ≥ 3, there are edge modes on the black hole horizon, in one-to-one correspondence with a surface electric flux perpendicular to the horizon: *E*⊥(**x**) = ∑**k***ε***k***e**i***k** ⋅ **x**,  where **x** is the transverse coordinate along the black hole horizon and **k** is a (continuous) momentum label. Regularizing the black hole horizon with a brick wall boundary condition at *ρ*Rindler = *ε*, each edge mode labeled by **k** carries an energy $$E\_\mathbf{k}=\frac{\vert\epsilon\_\mathbf{k} \vert^2}{2k^2\ln \frac{2}{k\epsilon}} \,\, \underset{\epsilon \to 0}{\to} \,\, 0,$$ leading to the total energy in the coherent edge state $\ket{E\_\perp}$, defined to be an eigenstate of the perpendicular electric field with eigenvalue : $$H\_{\text{edge}}\ket{E\_\perp}=\sum\_{\mathbf{k}}E\_\mathbf{k}\ket{E\_\perp}.$$ The fact that edge states are labeled by a transverse momentum label **k** immediately leads to a contribution to the entropy as *S* ∼ *A*/*ε**d* − 2. These tangential modes play the same role as the descendant labels in the 3d gravity story. Given the apparent similarity between the origin of edges states in 3d Chern-Simons models (reviewed in section [sec:CS-review]) and in Maxwell’s theory in *d* ≥ 3, together with the absence of such edge sector in 3d gravity, we are lead to conjecture that no gravitational edge modes with non-trivial profile tangential to the black hole horizon exist in higher-dimensional gravity. It would be interesting to get more clues on how this could be proven more generally.[52](#fn52) #### Towards more general formulations of the shrinkable boundary condition. It is not apparent how to formulate our approach for implementing the shrinkable boundary condition to higher dimensions. However, as reviewed in section [sec:shgravity], an alternative approach that generalizes more naturally to higher dimensions was given in. Here one combines a *local* cutting map J and a defect operator insertion $\sqrt{D}$ to provide the factorization map in. Even though the statistical mechanical interpretation of the bulk entanglement entropy seems to be gone from this perspective, it has the advantage that such a defect operator is related to the area of the black hole. This relation can be understood via Carlip and Teitelboim’s off-shell formulation for the BTZ black hole. They showed that the phase space associated with the Euclidean cigar topology is enlarged to include the conical angle and horizon area as a conjugate pair. This implies that the horizon area operator *Â* generates translations in the conical angle. Therefore the defect operator which imposes a 2*π* cone angle can be identified with the operator $D=\exp (\frac{ \hat{A}}{4G})$. A similar statement holds for higher dimensional black holes when we allow for a conical deficit at the Euclidean horizon. It should be possible to obtain an explicit realization of *Â* in a subregion Hilbert space along the lines of. Finally, it would be interesting to connect this Euclidean picture to the Lorentzian description of gravitational edge modes in. Here, the conical angle becomes a boost, and the area operator originates from a gauge-fixed version of an SL(2, R) gravitational surface symmetry associated with the normal geometry of the entangling surface. We hope to report on these ideas in future work. Acknowledgments =============== We would like to thank Nezhla Aghaei, Alexandre Belin, Andreas Blommaert, Andreas Brauer, Daniel Jafferis, Daniel Kapec, David Kolchmeyer, Alex Maloney, Samir Mathur, Du Pei, Ingo Runkel, and Shinsei Ryu for discussions related to this work. GW especially thanks David Kolchmeyer and Daniel Jafferis for extended discussions on related topics. TM was supported by Research Foundation Flanders (FWO Vlaanderen), and acknowledges financial support from the European Research Council (grant BHHQG-101040024). Funded by the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Research Council. Neither the European Union nor the granting authority can be held responsible for them. JS is supported by the Science and Technology Facilities Council [grant number ST/T000600/1]. GW would like to thank the Aspen center for physics for hospitality while this work was being completed. Boundary correlators in doubled JT limit ======================================== In subsection [sec:double-JT], we identified a double scaling limit of our 3d gravity partition function leading to the squared JT gravity partition functions. As mentioned in the main text, this is an interesting observation on its own right. In particular, it is natural to describe the insertion of boundary matter operators, extending the known techniques in the JT limit. We briefly discuss this below. Consider two identical operators *O**h*, *h̄*, with conformal weights *h* and *h̄*, inserted on the boundary of the solid torus and consider the Euclidean grand canonical correlator: $$\text{Tr}[O\_{h,\bar{h}}(\tau\_E,\varphi)O\_{h,\bar{h}}(0,0) e^{-\beta H + i\mu \frac{\beta}{\ell} J}].$$ The set-up is illustrated in Figure [2pttorus]. [2pttorus] The regime of interest to relate to known JT expressions is the following: $$c \gg 1, \qquad \frac{\tau\_E}{\ell},\frac{\beta-\tau\_E}{\ell} \sim c\,, \qquad h, \bar{h} \sim 1, \label{eq:double-scaleop}$$ combined with a condition on the spectral gap being not too small: $\Delta\_{\text{gap}} \gtrapprox \frac{\tau\_E}{\ell},\frac{\beta-\tau\_E}{\ell}$. The above correlator can be written more explicitly as $$\begin{aligned} \label{eq:cor} \text{Tr}[O\_{h,\bar{h}}(\tau\_E,\varphi)O\_{h,\bar{h}}(0,0) e^{-\beta H + i\mu \frac{\beta}{\ell} J}] = \sum\_{\text{primaries } O\_1 O\_2} \vert C\_{O O\_1 O\_2}\vert^2 \mathcal{F}\_{h\_1,h\_2}(\tau\_E,\varphi) \bar{\mathcal{F}}\_{\bar{h}\_1,\bar{h}\_2}(\tau\_E,\varphi)),\end{aligned}$$ in terms of primary sphere three-point functions *C**O**O*1*O*2, and Virasoro 2-point torus conformal blocks F*h*1, *h*2 and $\bar{\mathcal{F}}\_{\bar{h}\_1,\bar{h}\_2}$. The latter can be obtained by inserting complete sets of states between the operators, when evolving in the Euclidean time direction: $$\begin{aligned} \label{eq:blocks} \mathcal{F}\_{h\_1,h\_2}(\tau\_E,\varphi) &\equiv \hspace{-0.2cm}\sum\_{N\_1,N\_2} \hspace{-0.1cm} \langle h\_2,N\_2\vert O\vert h\_1,N\_1\rangle \langle h\_1,N\_1 \vert O \vert h\_2,N\_2 \rangle e^{-\frac{\tau\_E-i\varphi \ell}{\ell} (h\_1 + \vert N\_1 \vert -\frac{c}{24}) - \frac{\beta-\tau\_E-i(\mu\beta-\varphi \ell)}{\ell}(h\_2 + \vert N\_2 \vert -\frac{c}{24})}, \nonumber \\ \bar{\mathcal{F}}\_{\bar{h}\_1,\bar{h}\_2}(\tau\_E,\varphi) &\equiv \hspace{-0.15cm} \sum\_{\bar{N}\_1,\bar{N}\_2} \hspace{-0.1cm} \langle \bar{h}\_2,\bar{N}\_2\vert O\vert \bar{h}\_1,\bar{N}\_1\rangle\langle \bar{h}\_1,\bar{N}\_1 \vert O \vert \bar{h}\_2,\bar{N}\_2 \rangle e^{-\frac{\tau\_E+i\varphi \ell}{\ell} (\bar{h}\_1 + \vert \bar{N}\_1 \vert -\frac{c}{24}) - \frac{\beta-\tau\_E+i(\mu\beta-\varphi \ell)}{\ell}(\bar{h}\_2 + \vert \bar{N}\_2 \vert -\frac{c}{24})},\end{aligned}$$ where *N*1, 2 is the descendant label, |*N*1, 2| its level, and where we normalized the primary matrix element ⟨*h*2|*O*|*h*1⟩ = 1 (since the OPE coefficient was explicitly extracted). We largely follow the notation of. The diagrammatic representation of one of such blocks is drawn as: $$\mathcal{F}\_{h\_1,h\_2}(\tau\_E,\varphi) \equiv \begin{tikzpicture}[scale=0.25, baseline={([yshift=-0.1cm]current bounding box.center)}] \draw[thick] (-0.77,1.77) to (-2.5,3.5); \draw[thick] (-0.77,-1.77) to (-2.5,-3.5); \node at (-2.5,0) {\small $h\_1$}; \draw[thick](1,0) ellipse (2.5 and 2.5); \node at (4.5,0) {\small $h\_2$}; \node at (-3.5,-3.5) {\small $O$}; \node at (-3.5,3.5) {\small $O$}; \draw[->,thick, bend left=30] (5.5,3) to (5.5,-3); \node at (7.5,0) {\small $\tau\_E$}; \end{tikzpicture}$$ To proceed, we need to distinguish between two situations. One where our model is a proposal for 3d pure gravity as in section [s:propgrav], and a second one as containing universal dynamics of 2d CFT in the regime sketched in section [sec:2duniv]. Let’s start with the first case. In the regime of interest $\frac{\tau\_E}{\ell},\frac{\beta-\tau\_E}{\ell} \sim c \gg 1$, descendants scale out and the boundary correlator reduces to: $$\begin{aligned} &\text{Tr}[O\_{h,\bar{h}}(\tau\_E,\varphi)O\_{h,\bar{h}}(0,0) e^{-\beta H + i\mu \frac{\beta}{\ell} J}] = \\ &\sum\_{\text{primaries } O\_1 O\_2} \hspace{-0.4cm}\vert C\_{O O\_1 O\_2}\vert^2 e^{-\frac{\tau\_E-i\varphi \ell}{\ell} (h\_1-c/24) + \frac{\beta-\tau\_E-i(\mu\beta-\varphi \ell)}{\ell} (h\_2 - c/24)} e^{-\frac{\tau\_E+i\varphi\ell}{\ell} (\bar{h}\_1-c/24) + \frac{\beta-\tau\_E+i(\mu\beta-\varphi \ell)}{\ell} (\bar{h}\_2 - c/24)}. \nonumber\end{aligned}$$ For our 3d gravity proposal, the intermediate Hilbert space, and hence the precise range of primaries *O*1, 2 (including the measure) is known. Thus, the intermediate operators have conformal weights *h**i* = *Q*2/4 + (*p**i*+)2. Furthermore, for irrational Virasoro 2d CFTs with such weights, the sphere three-point function is determined by the conformal bootstrap and explicitly given by a DOZZ-like formula, derived in. In principle, we should compute the Euclidean path integral of the Alekseev-Shatashvili model of subsection [s:propgrav] with bilocal operator insertions to prove this statement explicitly purely from the gravity perspective. We leave this to future work. This DOZZ-formula simplifies in the scaling limit as (where we set *p**i*± → *b**p**i*±): $$\vert C\_{O O\_1 O\_2} \vert^2\, \to \, \frac{\Gamma(h\pm ip^+\_1 \pm i p^+\_2)}{\Gamma(2h)} \frac{\Gamma(\bar{h}\pm ip^-\_1 \pm i p^-\_2)}{\Gamma(2\bar{h})}\,.$$ The resulting sum over primaries *O*1, 2 decouples into four integrals over *p*1, 2±: $$\begin{aligned} \label{eq:twoptres} &\text{Tr}[O\_{h,\bar{h}}(\tau\_E,\varphi)O\_{h,\bar{h}}(0,0) e^{-\beta H + i\mu \frac{\beta}{\ell} J}] = \text{Tr}\left[ O\_{h,\bar{h}} e^{-\tau\_E H +i\varphi J} O\_{h,\bar{h}} e^{-(\beta-\tau\_E) H +i(\mu \frac{\beta}{\ell}-\varphi) J}\right] \\ &\to \, \# b^{12} e^{\frac{\beta}{12\ell}}\int\_0^{+\infty} \hspace{-0.2cm} dp^+\_1 dp^+\_2 p^+\_1 \sinh(2\pi p^+\_1) p^+\_2 \sinh(2\pi p^+\_2) e^{-\frac{b^2 \tau\_E }{\ell}{p^+\_1}^2} e^{-\frac{b^2 (\beta-\tau\_E-i\mu\beta)}{\ell}{p^+\_2}^2} \frac{\Gamma(h\pm ip^+\_1 \pm i p^+\_2)}{\Gamma(2h)} \nonumber \\ &\qquad \qquad \times \int\_0^{+\infty} \hspace{-0.2cm} dp^-\_1dp^-\_2 p^-\_1 \sinh(2\pi p^-\_1) p^-\_2 \sinh(2\pi p^-\_2) e^{-\frac{b^2 \tau\_E}{\ell}{p^-\_1}^2} e^{-\frac{b^2 (\beta-\tau\_E+i\mu\beta)}{\ell}{p^-\_2}^2} \frac{\Gamma(\bar{h}\pm ip^-\_1 \pm i p^-\_2)}{\Gamma(2\bar{h})}, \nonumber\end{aligned}$$ where we have not written some numerical prefactors similar as in. This expression is invariant under *τ**E* → *β* − *τ**E*. Notice that even though all dependence on *φ* gets scaled away in the double-scaled limit of interest discussed here, there is still an imprint of rotation on this doubled 1+1d JT system: the chemical potential *μ* is present and able to describe correlators probing a system with a rotating BTZ black hole as its saddle. Let us next consider the situation describing the universal dynamics of an irrational 2d CFT in the regime. Whenever $\Delta\_{\text{gap}} \gtrapprox \frac{\tau\_E}{\ell},\frac{\beta-\tau\_E}{\ell}$, no additional primaries beyond the vacuum contribute in the *dual* channel. This is the same argument as in subsection [sec:2duniv]. In this channel, one inserts complete sets of states in the spatial *φ*-cycle instead, leading to an expansion in dual conformal blocks. In our case, the latter is dominated by identity blocks:[53](#fn53) $$\tilde{\mathcal{F}}\_{O,\mathbb{1}}(\tau\_E,\varphi) \equiv \begin{tikzpicture}[scale=0.25, baseline={([yshift=-0.1cm]current bounding box.center)}] \draw[thick] (-0.77,1.77) to (-2.5,3.5); \draw[thick] (-0.77,-1.77) to (-2.5,-3.5); \node at (-2.5,0) {\small $O$}; \draw[thick](1,0) ellipse (2.5 and 2.5); \node at (4.5,0) {\small $\mathbb{1}$}; \node at (-3.5,-3.5) {\small $O$}; \node at (-3.5,3.5) {\small $O$}; \draw[->,thick, bend left=30] (5.5,3) to (5.5,-3); \node at (7.5,0) {\small $\varphi$}; \end{tikzpicture}$$ To proceed, we need to transfer this information by transforming the above torus two-point conformal blocks between *S*-dual channels. Luckily this analysis has already been done for a chiral sector in, by applying a three-step sequence of fusion, modular *S*, and fusion transformations. Here we just apply it for both chiral sectors. The result for the boundary two-point function on the torus in terms of the conformal blocks is: $$\begin{aligned} \label{eq:res3d} &\text{Tr}[O\_{h,\bar{h}}(\tau\_E,\varphi)O\_{h,\bar{h}}(0,0) e^{-\beta H + i\mu \frac{\beta}{\ell} J}] = \\ &\int\_0^{+\infty} \hspace{-0.2cm} dp\_1^+dp\_2^+ \rho(p\_1^+) \rho(p\_2^+) C\_{h,p\_1^+,p\_2^+} \mathcal{F}\_{p\_1^+,p\_2^+}(\tau\_E,\varphi) \int\_0^{+\infty} \hspace{-0.2cm} dp\_1^-dp\_2^- \rho(p\_1^-) \rho(p\_2^-) C\_{\bar{h},p\_1^-,p\_2^-} \mathcal{F}\_{p\_1^-,p\_2^-}(\tau\_E,\varphi), \nonumber\end{aligned}$$ where $$\begin{aligned} \rho(p^\pm) &= 4\sqrt{2} \sinh ( 2 \pi b p^{\pm} ) \sinh ( 2 \pi b^{-1} p^{\pm} ), \\ C\_{h,p\_1^\pm,p\_2^\pm} &= \text{DOZZ-like formula of \cite{Collier:2019weq}}.\end{aligned}$$ In the double-scaling limit, these blocks are further dominated by the primaries once again, and this expression reduces to. However, the expression is fully valid in 3d, and is the boundary correlator on the solid torus for sufficiently small spectral gap. Note that this is the unnormalized correlator; the normalized one can be obtained by dividing by the partition function. Generalizations to higher-point functions and out-of-time ordered configurations can similarly be worked out, giving rise to results compatible with the doubled JT expressions in the double-scaling limit discussed in this appendix. Regularization of volume factors for non-compact groups ======================================================= Following Appendix C of, we review and improve here a natural strategy to regularize the different volume factors appearing in section [sec:JT]. Our discussion should hold for non-compact groups with a continuous set of irreducible representations. Regularization of the representation dimensions ----------------------------------------------- The main idea in Appendix C of is to relate the Schur orthogonality relation $$\int\_G dg\,R\_{ab}(g) R^{\prime}\_{cd}(g^{-1}) = V\_\text{G}\,\frac{\delta\_{RR^\prime}}{\text{dim}\,R}\delta\_{ad}\delta\_{bc}\,, \label{eq:disc-rep}$$ holding for discrete representations *R* and *R*ʹ, with the delta-regularized orthogonality relation $$\int\_G dg\,R^k\_{ab}(g) R^{k^\prime}\_{cd}(g^{-1}) = \frac{\delta(k-k^\prime)}{\rho(k)}\delta\_{ad}\delta\_{bc}\,, \label{eq:cont-rep}$$ leading to the mathematically more accurate notion of Plancherel measure *ρ*(*k*) ≡ dim*k* for continuous representations. Here we associate *R* ↔ *k*. A formal comparison leads to $$\frac{\text{dim}\, R}{V\_\text{G}} \delta (k-k^\prime) = \delta\_{RR^\prime}\,\rho(k)\,. \label{eq:formal-id}$$ Taking the trace in, and using, allows us to derive the orthogonality relation for characters $$\label{eq:B4} \int\_G dg\,\chi^k(g) \chi^{k^\prime}(g^{-1}) = \delta(k-k^\prime)\frac{\text{dim}\,R}{\rho(k)}\,.$$ This integral over *G* can be simplified using Weyl’s integration formula:[54](#fn54) $$\label{eq:Weylint} \int\_G dg \, f(g) = \frac{1}{\vert W\vert} \int\_C d\alpha \, \vert \Delta(\alpha) \vert^2 \int\_{G/C} dh\, f(h\alpha h^{-1}),$$ where *W* is the Weyl group, and the Weyl denominator |Δ(*α*)|2 is the Jacobian in the change of variables on the group *g* → (*α*, *h*). The subgroup of conjugacy class elements *C* is a maximal torus in the group. For the specific case of a class function *f*, such as the characters in our set-up, we can simplify this formula into: $$\int\_G dg \, f(g) = \frac{1}{\vert W\vert} \int\_C d\alpha \, \vert \Delta(\alpha) \vert^2 f(\alpha)\left(\int\_{G/C} dh\right).$$ Taking the unit function **1**(*g*), we get the formal equality: *V*G = *V**C*(∫*G*/*C**d**h*), where *V*G is the volume of *G* and *V**C* is the volume of a maximal torus (or a Cartan subgroup, or the subgroup *C* of conjugacy classes).[55](#fn55) The last factor represents the volume of any fixed conjugacy class, which we write as the ratio *V*G/*V**C*. We can hence rewrite into: $$\label{eq:charcoj} \frac{1}{\vert W\vert} \int\_C d\alpha\, \vert \Delta(\alpha) \vert^2\chi^k(\alpha) \chi^{k^\prime}(\alpha) = V\_C\,\delta\_{kk^\prime} = 2\pi \delta(k-k^\prime)\,.$$ This leads to the formal relation *δ*(*k* − *k*) = *V**C*/2*π*, which combined with allows us to infer $$\text{dim}\, R = 2\pi\,\frac{V\_\text{G}}{V\_C}\rho(k)\,.$$ This equation explicitly extracts the “infinity” from the dimension of the representation, and makes manifest its *k*-dependence.[56](#fn56) The existence of several *types* of inequivalent conjugacy classes for non-compact groups requires an additional sum over class types *C**i* when applying Weyl’s integration formula, as developed in a general theorem for reductive Lie groups by Harish-Chandra. This sum over class types *C**i* carries over into the left hand side of. E.g. for the specific case of SL(2, R), these class types are elliptic, parabolic and hyperbolic. However, the elliptic conjugacy classes do not contribute to since the elliptic characters of the principal series irreps *χ**k*(*α*) vanish. The parabolic classes are of measure zero, and hence also do not contribute to the left hand side of. For the positive semi-group SL+(2, R), elliptic conjugacy classes do not even exist. We conclude that for the group-like structures of interest in the main text, equation still holds but is restricted to the hyperbolic conjugacy class elements. These conjugacy classes are generated by the Cartan subalgebra: $$\mathfrak{h} = \left\{\left(\begin{array}{cc} s & 0 \\ 0 & -s \end{array}\right), \quad s \in \mathbb{R} \right\}.$$ Hence **V**C* is the volume of the maximal torus for the hyperbolic conjugacy classes.* For the principal series irreps *k* of these group-like structures, we use a continuous basis of quantum numbers *s* to label states within the representation. Hence we formally have dim *R* = ∫R*d**s*. These equations are written in equation in the main text. The *e*-brane boundary state and the unit group element ------------------------------------------------------- For a compact group, we have the character completeness relations: $$\begin{aligned} \sum\_R \chi\_R(U) \, \chi\_R(U') = V\_C \delta (U- U'), \\ \label{eq:id1} \sum\_R \text{dim }R \, \chi\_R(U) = V\_C \delta (U- \mathbb{1}),\end{aligned}$$ where *U* is a group element in the maximal torus (or a conjugacy class). We can expand the (non-normalizable) unit state ∣*U* = 1⟩ in the irrep basis as: $$\label{eq:ecom} |U = \mathbb{1}\rangle = \frac{1}{V\_C}\sum\_R \, \text{dim }R \, |R \rangle.$$ As an explicit example, for the compact group SU(2) we have the character completeness relation: $$\label{eq:su2} \sum\_{j\in \frac{\mathbb{N}}{2}} \frac{\sin (2j+1) \phi }{\sin \phi} \frac{\sin (2j+1) \phi' }{\sin \phi'} = 2\pi \frac{\delta(\phi-\phi')}{4\sin^2 \phi},$$ where *ϕ* labels the different conjugacy classes.[57](#fn57) Taking the limit *ϕ*ʹ → 0, we obtain the formal identity: $$\sum\_{j\in \frac{\mathbb{N}}{2}} \frac{\sin (2j+1) \phi }{\sin \phi} (2j+1) = 2\pi\frac{\delta(\phi)}{4\sin^2 \phi}.$$ We can view as the natural regularization of the above limiting identity. Let us now look at the same argument for the positive semigroup SL+(2, R). We start with a brief review on the principal series characters themselves, starting with the full group SL(2, R). The characters of the principal series irrep of SL(2, R) are computed by the following integral $$\label{eq:chara} \chi\_j(g) = \int\_{\mathbb{R}} dx |bx+d|^{2j}\delta\left(\frac{ax+c}{bx+d}-x\right),$$ where *j* =  − 1/2 + *i**k*. Using that the character is a class function, we write for an element in the hyperbolic conjugacy class: $$g = \left[\begin{array}{cc} e^{\phi} & \epsilon \\ 0 & e^{-\phi} \end{array} \right],$$ where *ε* can be viewed as a regulator since the delta-function generically has two solutions for *x* unless *ε* = 0.[58](#fn58) A better way to think about this is we should instead of R attach the point at infinity as R ∪ ∞, and allow for a “solution at infinity”. Hence $$\chi\_j(g) = \int\_{\mathbb{R}} dx |\epsilon x+e^{-\phi}|^{2j}\delta\left(\frac{e^\phi x}{\epsilon x+e^{-\phi}}-x\right).$$ The integral over *x* boils down to the fixed point of the group action. We have two fixed points, and the delta function evaluates to $$\delta\left(\frac{e^{\phi}x}{\epsilon x + e^{-\phi}}-x\right) = \frac{\delta(x)}{|e^{2\phi}-1|} + \frac{\delta(x-\frac{e^{\phi}- e^{-\phi}}{\epsilon})}{|1-e^{-2\phi}|},$$ Hence $$\label{eq:charh} \chi\_j(g) = \frac{e^{-2j\phi}}{|e^{2\phi}-1|} + \frac{e^{2j\phi}}{|1-e^{-2\phi}|} = \frac{\cos 2k \phi}{ |\sinh \phi|}.$$ If we specify to *g* = 1, then we would get from the formal expression *χ**j*(1) = *δ*(*x* − *x*)∫R*d**x*, which is divergent both due to the proliferation of fixed points and the range of the *x*-integral. Instead letting *ϕ* → 0 in, the result diverges due to sinh*ϕ* → 0, as we approach the identity group element from the hyperbolic side. The characters of SL+(2, R) are $$\label{eq:charpl} \chi\_j(g) = \frac{\cos 2k \phi}{2|\sinh \phi|}$$ only including a factor 1/2 compared to the full SL(2, R) manifold. This can be proven directly using the argument around footnote 21 of. Alternatively, from the above computation, we restrict to *x* ∈ R+. This causes both fixed point locations *x* = 0 and *x* = ∞ to be at the integration endpoints, picking up an additional factor of 1/2 in the process.[59](#fn59) We write again a completeness relation of these characters. By explicit calculation we have: $$\label{eq:charcop} \int dk \, \frac{\cos 2k \phi}{ 2|\sinh \phi|} \frac{\cos 2k \phi'}{ 2|\sinh \phi'|} = 2\pi\frac{\delta(\phi-\phi')}{16\sinh^2 \phi}.$$ Letting *ϕ*ʹ → 0, we obtain the limiting identity: $$\int dk \, \frac{\cos 2k \phi}{ 2|\sinh \phi|} \frac{2\pi V\_G}{V\_C} k \sinh (2\pi k) = 2\pi \frac{\delta(\phi)}{16\sinh^2 \phi},$$ or $$\int dk \, \frac{\cos 2k \phi}{ 2|\sinh \phi|} k \sinh (2\pi k) = \frac{V\_C}{V\_G} \delta(U-\mathbb{1}).$$ This rather formal expression is given meaning as a limit of the well-defined relation. This expression is as expected: using our finite volume regularization, we replace $\sum\_R \to \frac{V\_C}{2\pi} \int dk$ and $\text{dim }R \to \frac{2\pi V\_G}{V\_C} \rho(k)$ in, and we find the non-compact version: $$\int\_k \rho(k) \, \chi\_k(U) = \frac{V\_C}{V\_G} \delta (U- \mathbb{1}),$$ where the delta-function on the maximal torus contains the Weyl denominator contragrediently to the measure factor: $\delta(U-\mathbb{1}) \sim \frac{\delta(\phi)}{\sinh^2 \phi}$. Hence this indeed matches the above expression found from the character completeness relation. The unit element state ∣*U* = 1⟩ is then expanded in the representation basis as $$\label{eq:ebrane} \boxed{ |U= \mathbb{1}\rangle = \frac{V\_G}{V\_C}\int dk \, k\sinh 2\pi k \, |k \rangle}.$$ Indeed, using $\sum\_R \to \frac{V\_C}{2\pi} \int dk$ and $\text{dim }R \to \frac{2\pi V\_G}{V\_C} \rho(k)$ in, we reproduce, and the overlap $\braket{k|U= \mathbb{1}}= \frac{V\_G}{V\_C} k \sinh 2 \pi k$ if we normalize the states such that $\braket{k|k'}=\delta(k-k')$. We observe that the “closed string” *e*-brane boundary state, defined by the shrinkability constraint in the main text, is precisely this unit group element state (up to a 1/*V**G* normalization factor): $$|e \rangle = \frac{1}{V\_G}|U= \mathbb{1}\rangle = \frac{1}{V\_C}\int dk \, k\sinh 2\pi k \, |k \rangle.$$ Hopf algebra of functions on a (quantum) group ============================================== The properties of a group *G* can be captured by the algebra of functions on *G*. The latter is a Hopf algebra F(*G*), which has both a co-product and a product. The axioms satisfied by *G* can be translated into compatibility relations for the algebraic structure of F(*G*). A quantum group can be defined via a non-commutative deformation of F(*G*). The non-commutativity is encoded by the *R*-matrix. Below we review these algebraic structures. #### Hopf algebra structure. The quantum group F*q*(*G*) is a quasi-triangular Hopf algebra. To explain what this is, we start with the simpler structure of a bi-algebra A, which is an algebra endowed with 4 operations $$\begin{aligned} \text{product}& \quad\nabla: \mathcal{A}\otimes \mathcal{A} \rightarrow \mathcal{A},{\nonumber\\}\text{unit}& \quad\eta: \mathbb{C} \rightarrow \mathcal{A},{\nonumber\\}\text{co-product}&\quad \Delta : \mathcal{A}\rightarrow \mathcal{A} \otimes \mathcal{A},{\nonumber\\}\text{co-unit}& \quad \epsilon: \mathcal{A} \rightarrow \mathbb{C}. \end{aligned}$$ These operations are required to satisfy compatibility relations. In particular the product and co-product are associative and co-associative respectively. Our main example is the set F(*G*) of C-valued functions on a group *G*, where the operations are the following: $$\begin{aligned} {3} \nabla(f\_1,f\_2)(g) &= f\_1(g)f\_2(g), \quad &&f\_1,f\_2 \in \mathcal{F}(G), \\ \eta &= 1 \text{ (the identity function)}, \\ \label{DE} \Delta(f) (g\_1,g\_2) &= f(g\_1g\_2 ),\quad &&g\_1,g\_2 \in G, \\ \epsilon (f) &= f(\mathbb{1}\_{G}).\end{aligned}$$ The first line is pointwise multiplication. Here *g*1*g*2 denotes the group multiplication of *g*1 and *g*2, and 1*G* is the identity element of *G*. The definition makes Δ : F(*G*) → F(*G* × *G*), which is not what we want. By the Peter-Weyl theorem, providing a dense basis of F(*G*) in terms of the representation matrix elements *R**a**b*(*g*), we have the isomorphism F(*G* × *G*) ≃ F(*G*) × F(*G*) since its action on a single basis element is: Δ(*R**a**b*)(*g*1, *g*2) = *R**a**b*(*g*1*g*2) = ∑*c**R**a**c*(*g*1)*R**c**b*(*g*2) = ∑*c**R**a**c* ⊗ *R**c**b*(*g*1, *g*2),  making it an element in F(*G*) × F(*G*). Linearly extending then proves the isomorphism. We can hence summarize the above natural co-product as: $$\boxed{ \Delta(R\_{ab}) = \sum\_c R\_{ac} \otimes R\_{cb}}.$$ The co-unit acting on representation matrix elements is: *ε*(*R**a**b*) = *R**a**b*(1*G*) = *δ**a**b*. A bi-algebra structure is upgraded into a Hopf algebra by the introduction of an anti-homomorphism: $$\begin{aligned} S: \mathcal{A}\rightarrow \mathcal{A}, \quad S(f\_1f\_2) &= S(f\_2)S(f\_1),\quad f\_1,f\_2 \in \mathcal{A} \label{anti},\end{aligned}$$ called the antipode. For F(*G*), the natural definition is to act as the pullback of the inverse action: $$\begin{aligned} S(f)(g) = f(g^{-1}), \qquad S(g\_{ij} ) = g^{-1}\_{ij}.\end{aligned}$$ On a single basis element of F(*G*), we write *S*(*R**a**b*)(*g*) = *R**a**b*(*g*− 1) = *R**a**b*− 1(*g*),  mapping a representation matrix element to its inverse. Depicting the group element as an oriented interval, the inverse map changes this orientation by diagrammatically twisting: $$\begin{aligned} {\quad \vcenter{\hbox{\tikz{{ \begin{scope}[xshift=0cm,yshift=0cm] \draw (-0.25,0) to [out=-90,in=90] (0.25,-1); \draw[line width=2mm, white] (0.25,0) to [out=-90,in=90] (-0.25,-1); \draw (-0.25,0) -- (0.25,0) to [out=-90,in=90] (-0.25,-1) -- (0.25,-1); \end{scope} }}}} \quad} : f(g)\to f( S(g) ),\qquad {\quad \vcenter{\hbox{\tikz{{ \begin{scope}[xshift=0cm,yshift=0cm] \draw (-0.25,0) to [out=-90,in=90] (0.25,-1); \draw[line width=2mm, white] (0.25,0) to [out=-90,in=90] (-0.25,-1); \draw (-0.25,0) -- (0.25,0) to [out=-90,in=90] (-0.25,-1) -- (0.25,-1); \end{scope} }, { \begin{scope}[xshift=0cm,yshift=-1cm] \draw (-0.25,0) to [out=-90,in=90] (0.25,-1); \draw[line width=2mm, white] (0.25,0) to [out=-90,in=90] (-0.25,-1); \draw (-0.25,0) -- (0.25,0) to [out=-90,in=90] (-0.25,-1) -- (0.25,-1); \end{scope} } }}} \quad} f(g) \to f( S^{2}(g) ) \end{aligned}$$ In the underformed algebra, *S*2 = 1, so the double twist is just the identity. Crucially, this will change under *q*-deformation. #### R-matrix. Adding an R-matrix to a Hopf algebra, makes it a quasi-triangular Hopf algebra. Given a vector space *V* carrying the ``fundamental rep" of the group, the R-matrix can be viewed as a linear operator on the tensor product *V* ⊗ *V*: $$\begin{aligned} \label{eq:R} \mathcal{R} & \in \text{End } V (\otimes V).\end{aligned}$$ We should view *V* as the Hilbert space of an anyon, and the R-matrix applies a braiding operation to a pair of anyons. Given the co-unit *ε* and antipode *S*, it is heavily constrained to satisfy several relations among itself, *ε* and *S*, including famously the Yang-Baxter equation. #### *q*-deformation of F(*G*). As a vector space, F(*G*) is defined over the complex numbers and spanned by the basis $$\begin{aligned} g\_{i\_1 j\_1} g\_{i\_2 j\_2} \cdots g\_{i\_n j\_n}, \quad n=1,\cdots \infty.\end{aligned}$$ where *g**i**j* are the matrix elements in the fundamental representation of *G*. The co-product in the fundamental representation can be written as: Δ(*g**i**j*) = ∑*k**g**i**k* ⊗ *g**k**j*. In the undeformed algebra, the matrix elements themselves commute: $$\begin{aligned} g\_{ij} g\_{kl} = g\_{kl}g\_{ij}.\end{aligned}$$ However, in the quantum group F*q*(*G*) this multiplication law (distinct from the matrix multiplication rule) becomes non-commutative.[60](#fn60) The precise nature of the non-commutative product in F*q*(*G*) is determined by the R-matrix of the quantum group. To express the product rule it is useful to consider an element *g* as a matrix acting in the fundamental representation. Thus it acts on a vector space *V* according to $$\begin{aligned} \label{V} g:V \rightarrow G\_q \times V,\qquad v\_{i}\mapsto \sum\_{i} g\_{ij} \otimes v\_{j}.\end{aligned}$$ If we analogously define matrices acting in *V* ⊗ *V* as $$\begin{aligned} g\_{(1)} = g\_1 \otimes \mathbb{1},\qquad g\_{(2)} = \mathbb{1} \otimes g\_2,\end{aligned}$$ then the multiplication rule for the coordinate algebra is defined now using the R-matrix as: $$\begin{aligned} \label{Ru} \mathcal{R} g\_{(1)} g\_{(2)}= g\_{(2)}g\_{(1)} \mathcal{R},\end{aligned}$$ where the composition of the operators above is defined with ordinary matrix multiplication. If R is not trivial, the group elements *g**i**j* no longer commute. The non-commutative product defined by the R-matrix also implies that the antipode is no longer the usual inverse. However it still satisfies $$\begin{aligned} \sum\_{j} g\_{ij} S(g\_{jk})= \sum\_j S(g\_{ij})g\_{jk}= \delta\_{ik}.\end{aligned}$$ #### Example: SL*q*(2). To illustrate these definition, consider the quantum group SL*q*(2). Its coordinate algebra is generated by 4 elements (*a*, *b*, *c*, *d*) of a matrix $$\begin{aligned} g= \begin{pmatrix} a & b\\ c & d\end{pmatrix}.\end{aligned}$$ The commutation relations of the matrix elements are encoded in the *R*-matrix, $$\begin{aligned} \mathcal{R} = \begin{pmatrix} q && 0 && 0 && 0 \\ 0 && q^{1/2} && 0 && 0 \\ 0 && q-1 && q^{1/2} && 0 \\ 0 && 0 && 0 && q \end{pmatrix}.\end{aligned}$$ Then the multiplication rule is equivalent to the commutation relations $$\begin{aligned} \label{sl2commutators} ab&= q^{1/2} ba, \quad ac= q^{1/2} ca,\quad bd= q^{1/2} db,\quad cd= q^{1/2} dc,\quad {\nonumber\\}bc&=cb,\quad ad-da = (q^{1/2}-q^{-1/2}) bc.\end{aligned}$$ Additionally we impose the condition *a**d* − *q*1/2*b**c* = 1,  which is the *q*-deformed version of the condition det*g* = 1. The antipode is given by $$S(g) = \begin{pmatrix} d & -q^{-1/2} b \\ -q^{1/2} c & a \end{pmatrix}.$$ Using the relations and we see that the antipode satisfies *S*(*g*)*g* = *g**S*(*g*) = 1. Examples of extended TQFT ========================= We review the extended TQFT formulation of 2d gauge theories and Chern-Simons theories. 2d gauge theory --------------- Two-dimensional YM or BF theories with compact gauge group *G* can be formulated as an extended TQFT. In the open-closed TQFT description, the Frobenius algebra H*e**e* associated to the interval is the *group algebra* C[G]. This is the algebra with basis states $\ket{g}$ labeled by group elements, sum defined as $\sum\_i \alpha\_i \ket{g\_i}, \, \alpha\_i \in \mathbb{C}$, and the product $\ket{g\_{1}}\cdot \ket{g\_{2}} = \ket{g\_{1}g\_{2}} $ and linearly extending. The Frobenius form TrFr(*g*1 ⋅ *g*2) ≡ ⟨*g*1, *g*2⟩ is defined by taking the coefficient of the identity group element in the expansion of *g*1 ⋅ *g*2. To relate the group algebra to the conventional description of the gauge theory Hilbert space, it is natural to identify C[G] with the function space L2(G), whose elements are wavefunctions *ψ*(*g*) $$\begin{aligned} \label{eq:groups} \ket{\psi} = \int dg\, \psi(g) \ket{g}.\end{aligned}$$ However, note the group product on C[G] corresponds to the *convolution product* on L2(G) $$\begin{aligned} \psi\_{1} \* \psi\_{2} (g) \equiv \int d h \,\, \psi\_{1}(gh) \psi\_{2}(h^{-1} ),\end{aligned}$$ rather than pointwise multiplication. This is due to the property: $$\begin{aligned} \ket{\psi\_1}\cdot \ket{\psi\_2} = \int dg \left( \int d h \,\, \psi\_{1}(gh) \psi\_{2}(h^{-1} ) \right) \ket{g},\end{aligned}$$ where we twice inserted and used the right-invariance of the Haar measure. This is an important distinction: in the TQFT language, spacetime locality is captured by the convolution product on L2(G), rather than the pointwise multiplication associated to the Hopf algebra. This distinction between the Hopf and the Frobenius algebras can be more explicitly seen by considering the group algebra in the representation basis. Here, the Frobenius algebra multiplication corresponds to a path integral process fusing the endpoints of a pair of intervals according to $$\begin{aligned} {\quad \vcenter{\hbox{\tikz{{ \begin{scope}[xshift=0,yshift=0] \draw (-0.75,0) -- (-0.25,0) to [out=-90,in=180] (0,-0.33) to [in=-90,out=0] (0.25,0) -- (0.75,0) to [in=90,out=-90] (0.25,-1); \draw (-0.25,-1) -- (0.25,-1); \draw (-0.75,0) to [in=90,out=-90] (-0.25,-1); \end{scope} }}}} \quad} (R\_{ab}(g\_{1}), R\_{cd}(g\_{2})) &\to R\_{ab} \* R\_{cd}(g)=\int dh\, R\_{ab}(gh) R\_{cd}(h^{-1}) {\nonumber\\}&= \sum\_{e} R\_{ae}(g) \int dh\, R\_{eb}(h) R\_{cd}(h^{-1}) =\frac{1}{\dim R} R\_{ad}(g) ( \delta\_{bc}),\end{aligned}$$ where we used the orthogonality relation. On the other hand, the Hopf algebra product is simply pointwise multiplication $$\begin{aligned} (R\_{ab}(g\_{1}), R\_{cd}(g\_{2})) &\to R\_{ab}(g)R\_{cd}(g)\,,\end{aligned}$$ and has *no* path integral interpretation. The closed TQFT describing gauge theory states on the circle corresponds to the center *Z*[C[G]], which forms a *commutative Frobenius algebra*. This can be identified with the gauge-invariant Hilbert space of *class functions* on the circle.[61](#fn61) More details about the open-closed TQFT formulation of 2d gauge theory are given in. Chern-Simons theory ------------------- Consider Chern-Simons theory with compact gauge group. In this case, the definition of an unextended TQFT for *d* = 3 is much less powerful because cutting 3-manifolds along a codimension-1 slice does not produce a finite, generating set of cobordisms as in the *d* = 2 case. However, the extension of the TQFT down to codimension two, i.e. a circle, does provide the computational power needed to evaluate path integrals on 3-manifolds. Whereas a *d* = 2 extended TQFT is equivalent to a Frobenius algebra, a *d* = 3 once-extended TQFT is equivalent to a modular tensor category (MTC),[62](#fn62) which is the boundary category assigned to a codimension-2 surface. For gapless boundaries such as the entangling surface, the latter is given by the representation category of a loop group or quantum group. When we extend down to a circle, it is possible to obtain a finite set of generating 1-morphisms analogous to, along with a set of 2-morphisms satisfying sewing relations. An important part of the MTC data is the modular S-matrix. Equipped with the S-matrix, we can perform a ``closed channel" calculation of the 3-sphere partition function *Z*(*S*3). Following the approach of, cut open *S*3 along a codimension-1 surface Σ = *T*2 with the topology of a torus. Using the Heegaard Splitting (Figure [Hee]), this torus separates *S*3 into two solid tori, to which the TQFT assigns a quantum state $\ket{0}\in \mathcal{H}\_{T^{2}}$. [Hee] The Heegaard splitting means that we glue these solid tori with a large diffeomorphism, represented by an operator *Ŝ* on H*T*2 defined by the S-matrix. We can thus represent the 3-sphere partition function as an S-matrix element: $$\begin{aligned} Z(S^3) = \braket{0|\hat{S}|0}=S\_{00}. \label{eq:S00} \end{aligned}$$ Notice expresses the “path integral on *Z*(*S*3)” in terms of algebraic data associated with the boundary category. However, as in equation, it does not yield a state counting interpretation[63](#fn63). #### Open-slicing and ``microstate counting". To get a state counting intepretation of *Z*(*S*3), consider an ``open slicing“ analogous to the annulus interpretation of *Z*(*S*2) in two-dimensional gauge theory. This involves cutting along a codimension-2 surface, requiring us to perform a more intricate type of surgery. First, notice that a 3-sphere *S*3 can be identified as two solid 3*d*-balls glued along their 2-sphere boundary. The latter forms the equator of *S*3 (see Figure [2balls]) and can be viewed as our Cauchy slice. As a codimension-1 cut of *S*3, it defines a Hilbert space H*S*2. From the gluing of the two 3d-balls, we can just view $Z(S^3)=\braket{\Psi\_{S^2}|\Psi\_{S^2} }$, where $\ket{\Psi\_{S^2}} \in \mathcal{H}\_{S^2}$, is the ``Hartle-Hawking” state. [2balls] Second, let us cut the *S*2 Cauchy surface along a codimension-2 surface with the topology of a circle, separating the 2-sphere into two disk like subregions *V* and *V̄*. As in the extended Hilbert space procedure described in section [sec:algebra], separate the subregions *V* and *V̄* with a regulator *ε*. This construction gives a quantum mechanical interpretation to the codimension-2 cut as the factorization of $\ket{\Psi\_{S^2}}$ (and $\bra{\Psi\_{S^2}}$) implemented by the path integral on a three ball with a trough dug out by a stretched entangling surface *S**ε*. This is shown in Figure [trough]. [trough] The extended TQFT assigns a mathematical structure to the codimension-2 closed oriented manifold emerging from the *ε* → 0 limit of this construction: a linear category, with objects, the collection of Hilbert spaces assigned to codimension-1 manifolds that can end on it, and morphisms, the maps between these Hilbert spaces. To compute *Z*(*S*3), we reverse the cutting process and glue back together the basic building blocks. We observe that the cutting process has left us with two factorized states $i \ket{\Psi\_{S^2}}$ and $\bra{\Psi\_{S^2}} i^\*$, which when glued together give the partial trace of a reduced density matrix. A successful gluing then implies $$\begin{aligned} \label{S3shrink} Z(S^3)= \lim\_{\epsilon \to 0} \operatorname{Tr}\_{{\textrm{\tiny V}}} e^{-2\pi \epsilon H\_{{\textrm{\tiny V}}}},\end{aligned}$$ where Hamiltonian *H**V* is the generator of modular flow in region *V*. Thus, the criteria for the codimension-2 cutting process to produce *Z*(*S*3) is precisely the shrinkability condition. #### The shrinkable boundary condition, the boundary category B*S*. Unlike, the shrinkability condition can only be satisfied up to an infinite subtraction. This is because the introduction of the stretched entangling surface *S**ε* breaks the topological invariance of the TQFT and introduces a UV divergence on the right hand side of. In particular, the standard holographic boundary condition setting *A**τ**E* − *A**φ*|∂M = 0 on *S**ε* introduces CFT edge modes which depend on the complex structure of *S**ε*. Under modular evolution by 2*π*, *S**ε* becomes a thin torus which pinches as *ε* → 0, giving a divergence due to the infinite tower of descendants contributing to the CFT partition function at infinite temperature. Technically, one can subtract this divergence via zeta-function regularization or a counterterm subtraction. However, when Chern-Simons theory is viewed as an emergent, low energy description of an underlying microscopic model, this divergence is a feature rather than a bug. In particular, this divergence gives the leading area law term in the entanglement entropy *S**V* of a subregion which is needed to ensure the positivity of *S**V*. In subsequent sections, we will see that gravity regulates this divergence such that shrinkability can be satisfied exactly. The upshot is that for Chern-Simons theory with a compact gauge group *G*, we relax the definition of shrinkability to allow for this divergence. The corresponding shrinkable boundary condition *A**τ**E* − *A**φ*|∂M = 0 leads to the usual holographic duality, where the Hilbert space H*V* on a disk is given by the representations of the loop group of *G* associated with boundary edge modes on *S* = ∂*V*. The boundary category B*S* on the codimension-2 entangling surface is thus identified with the representation category of the loop group. Expression then implies that *Z*(*S*3) can by computed from the thermal partition function of the associated WZW model, with the CFT Hamiltonian $H\_{V}= \frac{2\pi}{l} \left(L\_{0} + \bar{L}\_{0}-\frac{c}{24}\right)$ as the modular Hamiltonian, where *l* is the length of *S*.[64](#fn64) #### Factorization Map. The factorization map *i* on *S*2 is represented by a cobordism from *S*2 to a disjoint union *V* ⊔ *V̄* of two disks, which according to Atiyah’s axioms, is assigned to the tensor product H*V* ⊗ H*V̄*. Rather than computing the path integral on such a topologically nontrivial space, we simply treat the factorization as a linear map $$\begin{aligned} i: \mathcal{H}\_{S^2} \to \mathcal{H}\_{V}\otimes \mathcal{H}\_{\bar{V}},\end{aligned}$$ subject to the shrinkable boundary condition. In, it was shown that a solution is given by embedding $\ket{\Psi\_{S^2}}$ into a regulated Ishibashi state: $$\begin{aligned} i \ket{ \Psi\_{S^2}} =\frac{1}{\sqrt{Z(2\pi\epsilon)}} \sum\_{m} e^{\frac{-2 \pi^2 \epsilon}{l} (L\_{0}+\bar{L\_{0}})} \ket{ m}\_{V} \otimes \ket{ \bar{m}}\_{\bar{V}}, \end{aligned}$$ where *m* is a schematic label for the descendants, *l* is the length of *S*, and 2*π**ε* is the effective temperature at the entangling surface. We can deduce the mapping *i* from the fact that the right hand side is the thermofield double state for the thermal CFT on the stretched horizon. The norm of the resulting state is 1, which is the partition function on *S*2 × *S*1. In the presence of a Wilson line in the representation *R*, which adds a puncture on *S*2, the state is modified to $\ket{ \Psi\_{S^2}(R)}$, and the factorization map is given by $$\begin{aligned} i \ket{ \Psi\_{S^2}(R)} =\frac{1}{\sqrt{Z(2\pi\epsilon)}} \sum\_{a,m} e^{\frac{-2 \pi^2 \epsilon}{l} (L\_{0}+\bar{L\_{0}})} \ket{R,\,a,\, m}\_{V} \otimes \ket{R,\,a,\, \bar{m}}\_{\bar{V}},\end{aligned}$$ where *a* = 1, ⋯, dim*R* labels the Kac-Moody zero modes. This is a generalization of the co-product factorization in. Moreover, the image of *i* is exactly the entangling product with loop group of G as the surface symmetry G\tiny S. --- 1. In general not every state in a holographic CFT has a semi-classical dual. But if the bulk theory exists independently, we can still specify a dual state in the bulk Hilbert space.[↩](#fnref1) 2. A different approach was considered in by gauging a global 1-form bulk symmetry in a non-abelian Chern-Simons theory, leading to a modular invariant boundary CFT with a factorized partition function on wormhole geometries. We however will not discuss higher topologies in this work.[↩](#fnref2) 3. The same claim was made in, but there is a problem with the naive application of their arguments. We explain the relevant issues in section [CSdefect].[↩](#fnref3) 4. Reference denoted such a CFT as ``compact".[↩](#fnref4) 5. E.g. *e*− 4*π*2 ∼ 10− 18. This is a purely “numerical” suppression, and not a suppression caused by a parametric limit of a ratio of dimensionful parameters.[↩](#fnref5) 6. Space is a circle *S*1.[↩](#fnref6) 7. [fn2] They derive this entropy by applying the formulas of topological entanglement entropy as log(*S**p*+0*S**p*−0) to the irrational Virasoro case. A subtlety is that one needs to ad hoc use instead *S*0*p*± for this identification to work. For rational models, this is not a problem as *S* is symmetric, but for the irrational Virasoro case, one actually has *S**p*±0 = 0 by the modular bootstrap. One of our goals is to precisely understand how to think about, and more generally, as entanglement entropy.[↩](#fnref7) 8. The right hand side contains the super-Virasoro characters: $$\chi\_p^{NS}(\tau) = \sqrt{\frac{\theta\_3(\tau)}{\eta(\tau)}}\frac{\mathfrak{q}^{p^2/2}}{\eta(\tau)}, \qquad \chi\_p^{R}(\tau) = \sqrt{\frac{\theta\_2(\tau)}{2\eta(\tau)}}\frac{\mathfrak{q}^{p^2/2}}{\eta(\tau)}.$$ [↩](#fnref8) 9. These will turn out to be rescaled versions of our usual coordinates (*τ**E*, *φ*), see further around equation.[↩](#fnref9) 10. The classical saddle determines the stress tensor expectation value from which one can reconstruct the metric, either directly or through the Chern-Simons gauge connection.[↩](#fnref10) 11. This is identical to macroscopic “punctures” on the Liouville worldsheet.[↩](#fnref11) 12. There is an infinite volume factor *V**ϕ* from the Liouville zero-mode, that is not generated from the bulk gravitational path integral. So this factor has to be removed. The factor of 2 originates simply from our choice throughout this work of limiting the *p*± integrals to positive values.[↩](#fnref12) 13. Our conclusion mirrors the case of 2d JT gravity, where the AdS2 global vacuum is also not contained within the Hilbert space of the model. In both cases, the lowest-energy state is at the bottom of the continuum.[↩](#fnref13) 14. Where *h* = *λ*2/2 + *Q*2/4 to relate to the notation of equation.[↩](#fnref14) 15. See e.g. for some recent discussion on these statements.[↩](#fnref15) 16. We thank Alexandre Belin for pointing this out.[↩](#fnref16) 17. This argument is a bit finicky, so as a particular example, set *c* = 10000, Δgap = *c*/12, *μ* = 0 and *β*JT = 1. We get the ratio  ≈ 5 ⋅ 10− 6, meaning contributions from other primaries are indeed heavily suppressed. At the same time, one directly checks numerically that $\frac{\eqref{solidtorus}}{\eqref{doubleJT}} \approx 1.00005$, so the solid torus partition function is approximated very well (up to 0.005%) by the doubled JT partition function. This single datapoint then shows that there is a dynamic range of the parameters where the doubled JT approximation holds, assuming the spectral gap Δgap is not much smaller than its maximally allowed value *c*/12.[↩](#fnref17) 18. Alternatively, one may quantize the theory for a given set of “boundary conditions” at *S* and sum over all such possible choices, leading to a schematic decomposition $$\mathcal{H}\_{\text{physical}} = \bigoplus\_\alpha \mathcal{H}\_{{\textrm{\tiny V}},\alpha} \otimes \mathcal{H}\_{\bar{{\textrm{\tiny V}}},\alpha}$$ in superselection sectors. This perspective has been employed mainly in the literature on edge states in Maxwell theory, with the superselection label *α* = *E*⊥.[↩](#fnref18) 19. This is equivalent to adding Stueckelberg fields.[↩](#fnref19) 20. In the 2d BF theory case, it should be understood that *A**φ* = *B* by dimensional reduction of the 3d Chern-Simons action, as discussed around equation.[↩](#fnref20) 21. This is clear in 2d, since the metric around the conical singularity is locally of the form *d**s*2 = *d**r*2 + *r*2*d**θ*2, where *θ* ∼ *θ* + 2*π*(1 − *α*). This metric has a spin connection *ω* = *d**θ*, so that we get ∮*S*1*ω* = 2*π*(1 − *α*). Avoiding a conical singularity (*α* = 0) hence means indeed the advertized property. The same calculation applies in higher dimensions, if we restrict to a normal plane transverse to a point on the codimension-2 entangling surface.[↩](#fnref21) 22. In the BF formulation of JT gravity, the defect operator imposes the constraint that the Chern class of the gauge bundle is equal to one; in the gravity language, this corresponds to the fact that the boundary Schwarzian has winding number one around the boundary circle.[↩](#fnref22) 23. Equivalently, if we define *ρ̃**V* = *e*− *β**H**V*, we can write the entropy as $$\begin{aligned} S= -\operatorname{Tr}D \tilde{\rho}\_{V} \log \tilde{\rho}\_{V}.\end{aligned}$$ Here *ρ̃**V* is a reduced density matrix obtained from a factorization map with a local boundary condition.[↩](#fnref23) 24. In the mathematical definition of the relative tensor product of modules H*V* ⊗ *G**S*H*V̄*, *G**S* is usually taken to be the group algebra, which just means we allow for the addition of group elements.[↩](#fnref24) 25. One might interpret the volume regulator as precluding a strict factorization of the Hartle-Hawking state. We prefer to read it as showing us how to make sense of gravitationally factorized states. Analogous comments hold for the 3d case later on, though we will not be as explicit about the analogs of these volume factors.[↩](#fnref25) 26. Had we followed and used the co-product factorization map for the universal cover of SL(2, R), these factors would not have cancelled.[↩](#fnref26) 27. In pure gravity theory in higher dimensions, there is also a (R2)*S* factor which describes the transverse deformations of *S*. See for some specific relevant work in 3d and for interesting related work.[↩](#fnref27) 28. This phase space variable is denoted *k*0 in.[↩](#fnref28) 29. Fourier transforming with respect to *τ*2, and then multiplying by cos(2*π**λ**p*ʹ) directly leads to this result.[↩](#fnref29) 30. As a final comment, we also note that 2d Liouville gravity is governed by the same *q*-deformation of the JT structure, and can be formulated in the same language. In particular, its disk partition function, amplitudes and (presumably) its factorization can be similarly developed as we have done here. The amplitudes differ from 3d gravity in that there are no descendants anywhere, and the energy eigenvalues (in the exponentials) are also *q*-deformed. We refer the reader to for the expressions and discussions.[↩](#fnref30) 31. For example, in 2d the extended TQFT forms a ``knowledgeable" Frobenius algebra. So in addition to the Hopf algebra compatibility relations, the co-product has to satify further compatibility relations of the Frobenius algebra.[↩](#fnref31) 32. The sum of subleading divergences ends in a logarithmic term only for *d* even.[↩](#fnref32) 33. Asymptotic boundary descendants are added in a somewhat ad hoc way from the perspective of this section.[↩](#fnref33) 34. What is instead true is the (weaker) property that the DOZZ formula for the bulk Liouville three-point function *C*(*α*1, *α*2, *α*3) with operator labels *α*1 = *Q*/2 + *i**P*1, *α*3 = *Q*/2 + *i**P*2 and *α*2 = *ε* → 0 yields precisely a Dirac delta function on *P*1 + *P*2 = 0. The main difference with the compact case is that the identity representation is not normalizable.[↩](#fnref34) 35. An interesting feature that appears there is that for N = 2 JT supergravity, the shift between the vacuum and the continuum disappears (in technical terms, the Weyl vector for the supergroup OSp(2∣2, R) vanishes). It would be interesting to see whether in this case the anyon fusion rules are more in line with the compact case.[↩](#fnref35) 36. Such a bulk semi-classical expansion was performed around thermal AdS3 in. Their computation of *S*vN(Λ) involves Virasoro edge modes assigned to the bulk entangling surface. As expected, the resulting entanglement entropy of bulk gravitons and photons is divergent, but with the correct area dependence to allow for an interpretation in terms of the renormalization of *G*\tiny N, bare.[↩](#fnref36) 37. These Wilson lines are identified with the lattice spin configurations for the Levin-Wen model, while for the toric code, they live on the dual lattice.[↩](#fnref37) 38. Technically, the Chern-Simons description of bulk gravity goes beyond the scope of reference, which restricts to compact gauge groups, since gravity requires a theory of SL(2, R) Wilson lines. The proper TQFT-like framework underlying our gravitational Chern-Simons description was developed mainly by J. Teschner and collaborators, see e.g., and generalizes the usual formulation in terms of modular tensor categories which is appropriate to the case of compact gauge groups.[↩](#fnref38) 39. This is heuristic because the gravitational Wilson line is what produces the ER bridge connecting the two sides to begin with.[↩](#fnref39) 40. A related idea appears in the Tannaka-Krein duality (see e.g. ), which says that a group *G* can be reconstructed from the category Rep(G) of its representations.[↩](#fnref40) 41. 2d Yang-Mills and BF theory are not strictly topological theories. In particular, they both have infinite dimensional Hilbert spaces. However, they can still be formulated within the framework of extended TQFT, provided we make some small modifications to the standard axioms.[↩](#fnref41) 42. As explained in together with the factorization map, the extended TQFT also defines the basic ingredients such as subregion Hilbert spaces, partial traces, and reduced density matrices which are needed to compute quantum information measures in a continuum theory.[↩](#fnref42) 43. The cobordism hypothesis, originally formulated in, (see for more recent discussions) states that an extended TQFT in any dimension can be reconstructed from what it assigns to a point.[↩](#fnref43) 44. In 2d BCFT, one would view the objects of B*S* as conformally invariant boundary conditions, and the 1-morphisms as boundary condition changing operators. This defines a vector space H*a**b* = HomB*S*(*a*, *b*), where *a*, *b* labels Cardy boundary states.[↩](#fnref44) 45. In fact, the cobordism hypothesis states that a fully extended TQFT is completely determined by what it assigns to a point.[↩](#fnref45) 46. This is not the phase space of PSL(2, R) Chern-Simons theory: Teichmüller space on a Riemann surface Σ is a particular component of the space of flat connections on Σ.[↩](#fnref46) 47. In the literature, these are often referred to Liouville conformal blocks, since they arise in the quantization of Liouville theory on Σ. However we avoid the Liouville terminology, since this is sometimes used to refer to the flat spectrum, which does not give the Cardy density of states.[↩](#fnref47) 48. This is a direct analog of the Kazhdan-Lusztig equivalence for compact groups *G*, where Rep(U*q*(*G*)) data provides the solution to the bootstrap equations for the Kac-Moody conformal blocks associated to the loop group LG.[↩](#fnref48) 49. A physically intuitive way to understand this TQFT was provided in, which relates Teichmüller theory to analytically continued Chern-Simons theory.[↩](#fnref49) 50. However, if we were to introduce a cut-off, which is typically the most sensible thing to do in physics when an infinity arises, the argument may become less “sharp”.[↩](#fnref50) 51. This can be made more explicit in the 2d JT dimensional reduction, where injecting energy locally can be done by studying boundary correlation functions, which have multiple sectors in the spacetime characterized by different values of the energy.[↩](#fnref51) 52. There exist four dimensional perturbative calculations computing the entanglement entropy of linearized gravitons on a sphere starting with, which was reproduced using the hyperbolic cylinder method in. More recently, these were extended into higher dimensions in.[↩](#fnref52) 53. It is incompatible with the OPE to have both intermediate channels be the identity. There is a kinematic regime where this configuration dominates, and one where the complement (with swapped 1 and *O* channels) dominates. See for the argument.[↩](#fnref53) 54. Note that this is the compact version of Weyl’s integration formula, which turns out to be correct as we explain in the next paragraph.[↩](#fnref54) 55. The Jacobian factor |Δ(*α*)|2 and size of the Weyl group |*W*| are part of the measure *d**α* of the Cartan subgroup.[↩](#fnref55) 56. As a simple check, for the abelian group R, we have irreducible representation matrix elements *e**i**k**x* and hence by *ρ*(*k*) = 1/2*π*, and *V*G = *V**C*, leading indeed to dim*R* = 1.[↩](#fnref56) 57. Note that *ϕ* and  − *ϕ* are in the same conjugacy class: this is simply a swap of the two eigenvalues of the SU(2) or SL(2, R) matrix. We hence restrict to *ϕ* > 0 here without loss of generality.[↩](#fnref57) 58. See for an analogous recent application of this calculation procedure to the supergroup OSp(1∣2, R).[↩](#fnref58) 59. One might worry about the case *ϕ* < 0 picking up only one of the solutions of the delta-function. This is misleading however, since in the case the second fixed point is at *x* →  − ∞, which in the one-point compactification of R is identified with *x* →  + ∞.[↩](#fnref59) 60. It is customary to abuse language and refer to both the ``quantum space" *G**q* and the deformed algebra of functions F*q*(*G*) as the quantum group.[↩](#fnref60) 61. This is because such center has a basis consisting of averages within a conjugacy class C: $$\begin{aligned} \ket{C}&\equiv \frac{1}{|C|}\sum\_{g \in C} \ket{g}, \end{aligned}$$ so a state $\ket{\psi} \in Z(\mathbb{C}[G])$ in the center has an expansion $$\begin{aligned} \ket{\psi} =\sum\_{C} \psi(C) \ket{C}\,.\end{aligned}$$ [↩](#fnref61) 62. Once extended means extending down to circles. A fully extended TQFT would extend down to a point.[↩](#fnref62) 63. Moreover, when applying this to the irrational Virasoro CFT case relevant to 3d gravity, we encounter a modular S-matrix for which *S*00 vanishes, so equation is no longer valid.[↩](#fnref63) 64. To relate this to the 3d coordinates (*t*, *r*, *φ*) of section [sec:3dgravdef], where the entangling surface is a *φ*-circle, we have *l* = 2*π*ℓ.[↩](#fnref64)
arxiv_0000053
in Ref.  for comparison. Ref.  uses a broadening of 0.36 eV in order to reproduce the experimental resolution. We use 0.36 eV in our Fig. [SrVO3mfbsdftvc] as well.). However, in our case the peak between 2 eV and 2.5 eV stems mainly from the *e**g* states, which are not included into the Hubbard model in Ref. . Ref.  includes the *e**g* states, but still finds a small peak from the upper Hubbard band for the *t*2*g* states at around 3 eV above the Fermi energy. Such a small peak is consistent with our Fig. [SrVO3mfbsdftvc], where a shoulder in the V-3d(t2*g*) is clearly visible between 2 eV and 2.5 eV. Moreover, Ref.  finds a large contribution from the *e**g* states to the DOS at this energy as well. In this regard, our Fig. [SrVO3mfbsdftvc] resembles closely Fig. 8 in Ref.  when the energy is above the Fermi energy. However, Fig. 8 in Ref.  does not find a strong V-*d* DOS at around 2 eV below the Fermi energy. In contrast, we find a strong V-*d* DOS at around 2 eV below the Fermi energy with a dominant part from the *e**g* states and a small contribution from the *t*2*g* states. [SrVO3t2gegpbe] Contributions of the V-3*d* *e**g* and *t*2*g* states to the DOS in SrVO3. Results from KS-DFT using the PBE functional. [SrVO3mfbsdftvc] Contributions of the V-3*d* *e**g* and *t*2*g* states to the DOS in SrVO3. Results obtained within MFbSDFT when the moment functionals are constructed according to Eq.  and Eq. . [SrVO3mfbsdftr2] Contributions of the V-3*d* *e**g* and *t*2*g* states to the DOS in SrVO3. Results obtained within MFbSDFT when the moment functionals are constructed according to Eq.  and Eq. . In Fig. [SrVO3mfbsdftr2] we show the contributions of the V-3*d* *e**g* and *t*2*g* states to the DOS, as obtained with MFbSDFT when we take *N* = 200 and use Eq.  and Eq. , where we set *c**σ*(2 + ) = 1.1 [Ry]2, and *c**σ*(3 + ) =  − 1.5 [Ry]3. The peak between 2 eV and 2.5 eV is very pronounced and both *e**g* and *t*2*g* bands contribute to it. In contrast, the peak around -2 eV in Fig. [SrVO3mfbsdftvc] is significantly smaller in Fig. [SrVO3mfbsdftr2] and shifted to lower energy between -2.5 eV and -3 eV. Several main features of the *t*2*g* band in Fig. [SrVO3mfbsdftr2] resemble those obtained from a LDA+DMFT calculation with a Hubbard *U* of 6 eV (see Fig. 8 in Ref. ). Notably, the *t*2*g* band, which ends around 2 eV in Fig. [SrVO3t2gegpbe], is expanded to higher energies like in LDA+DMFT. Overall, the total V-*d* DOS in Fig. [SrVO3mfbsdftr2] is qualitatively similar to the one in Fig. 8 of Ref. , which displays pronounced peaks close to 1 eV and close to 2.5 eV, while the V-*d* DOS close to -2 eV is small, in agreement with our result in Fig. [SrVO3mfbsdftr2]. However, the peak close to 2.5 eV is much larger in our Fig. [SrVO3mfbsdftr2]. [SrVO3mfbsdftr2smallparam] Contributions of the V-3*d* *e**g* and *t*2*g* states to the DOS in SrVO3. Results obtained within MFbSDFT when the moment functionals are constructed according to Eq.  and Eq. . In contrast to Fig. [SrVO3mfbsdftr2] the parameters *c**σ*(2 + ) and *c**σ*(3 + ) are reduced by 18 % and 33 %, respectively. We may reduce the intensity of this peak at 2.5 eV by reducing the parameters *c**σ*(2 + ) and *c**σ*(3 + ). In Fig. [SrVO3mfbsdftr2smallparam] we show the DOS obtained with the parameters *c**σ*(2 + ) = 0.9[Ry]2 and *c**σ*(3 + ) =  − 1[Ry]3. Indeed, the peak intensity is reduced and it is now significantly smaller than the intensity of the main peak at around 1 eV, but relative to the main peak it is still more pronounced than in Fig. 8 of Ref. . Moreover, the peak is shifted from 2.5 eV in our Fig. [SrVO3mfbsdftr2] to higher energies and lies now between 3 eV and 3.5 eV. However, overall the total V-*d* DOS, the V-3*d* (t2*g*) DOS, and the V-3*d* (e*g*) DOS in Fig. 8 of Ref.  is in better agreement with our Fig. [SrVO3mfbsdftr2smallparam] than with our Fig. [SrVO3mfbsdftr2]. Discussion and outlook ====================== In the previous section we have shown that MFbSDFT reproduces features such as satellite peaks and spectral weight shifts which are usually obtained by solving the correlated electron problem directly, e.g. by means of LDA+DMFT. While these first MFbSDFT results look therefore very promising there is a large number of open questions and obvious possibilities to improve this method further. First, accurate moment functionals are required. The construction of accurate moment functionals should be possible based on Monte Carlo data of correlation functions such as those given in Eq.  through Eq. . Second, it is desirable to derive gradient approximations for the moment functionals. Using local functionals that depend only on the spin densities for MFbSDFT misses effects related to their spatial inhomogeneity. Like GGA is an improvement over LDA, we expect that MFbSDFT will become more accurate by adding gradient corrections to the functionals. Third, in this work we do not explore the calculation of total energies and atomic forces for structural relaxation. However, since force calculations in correlated materials are possible within LDA+DMFT we expect that total energies and forces may also be obtained within our MFbSDFT approach. Fourth, one may use more than the first 4 moments. While the first four moments are sufficient to reproduce the quasi-particle band structure qualitatively correctly in the strong-correlation regime , the precision of MFbSDFT is expected to increase with the number of moments used. The moments become increasingly more complicated with increasing order. However, one may use computer algebra systems in order to derive the higher-order moments and to assess them for the uniform electron gas. This seems feasible since the complexity is probably comparable to higher-order perturbation theory in QED, where high-order contributions have to be tackled by computer algebra. Assuming that computer algebra systems can manage the complexity of the higher-order moments the question remains if the spectral function can be found for more than 2 or 4 moments. In Ref.  we give an argument that the spectral function may be found from the first 4 moments, which is based on counting the number of available equations and the number of parameters that determine the spectral function and showing that these numbers match. In the present paper we have explicitly constructed the spectral function from the first 4 moments in Sec. [secconstruspecfun]. We may generalize the argument given in Ref.  and show that from the first 2*P* moments (*P* = 1, 2, …) one may construct the spectral function. This generalization is discussed in App. [secappmore]. If one uses only Delta-functions in the expression for the spectral function (as in Eq. ) one misses lifetime effects, which may be accommodated by employing Gaussians instead . When 4 moments are required to put the spectral peaks, such as satellite peaks, at the right energies, it is clear that more than 4 moments are generally required to correct the spectral widths of these spectral features by lifetime effects. Fifth, perhaps the method of spectral moments may contribute to the understanding of SIE, because it is remarkable that setting $\mu\_{c}({{\boldsymbol{r}}})=0$ in Eq. (2.22) of Ref.  leads to a HF-type method that suffers from the same SIE and is equivalent to the method of spectral moments derived from the first two moments (see Sec. [sectheory]). We suspect that increasing the number of moments used will ultimately eliminate the SIE. However, it is an open question, how the self-interaction correction (SIC) takes place exactly within the method of spectral moments. Within the KS-DFT framework the explanation of SIC is that $\mu\_{c}({{\boldsymbol{r}}})$ in Eq. (2.22) of Ref.  has to eliminate SIE when the exact exchange correlation functional is used. However, within the spectral moment method a valid explanation of SIE seems to be that using only the first two moments produces an error, which may be eliminated by using more moments. Of course, the precise moment functionals are expected to be necessary in order to remove SIE. However, Eq.  through Eq.  provide explicit expressions, which may be used for the construction of the moment functionals. Sixth, it is an important open question how to extend the MFbSDFT approach to finite temperatures. In Ref.  we have shown how to generalize the spectral moment method so that it can be applied to many-band Hamiltonians. Since the method of Ref.  computes the correlation functions from the spectral theorem, which involves the Fermi function and the actual excitation energies, it naturally includes finite temperature effects. As the spectral theorem is not used for the higher-order correlation functions in MFbSDFT, which are obtained from moment functionals, it is currently unknown how to accomodate finite temperatures accurately in this method. While accurate moment functionals are currently not available yet, the MFbSDFT method may also be used in practice in a way similar to LDA+*U*: In LDA+*U* the *U* and *J* parameters are usually chosen for a given material in order to add correlation effects that are not described by LDA. Similarly, one may use parameterizations of the moment functionals similar to the ones that we discussed in Sec. [secfunc] and choose the coefficients in the functional in order to optimize spectral features. Summary ======= We describe the concept of moment functionals, which allow us to obtain the spectral moments from functionals of the charge density. These functionals play a similar role in MFbSDFT as the exchange correlation functional does in KS-DFT. We derive explicit expressions for the moment functionals and use perturbation theory to investigate their scaling with the charge density. We describe an efficient algorithm to obtain the spectral function from the first four spectral moments. We demonstrate that MFbSDFT allows us to reproduce spectral features such as satellite peaks in Ni and lower and upper Hubbard bands in SrVO3. At this stage of its development, MFbSDFT may be used in a way similar to LDA+*U*: The parameters in the moment functionals are chosen such that spectral features found in experiments are reproduced. Acknowledgments =============== We acknowledge funding by SPP 2137 ``Skyrmionics" of the DFG and by Sino-German research project DISTOMAT (DFG project MO ). We gratefully acknowledge financial support from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program (Grant No. 856538, project “3D MAGiC”). The work was also supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation)  −  TRR 288  −  422213477 (project B06). We also gratefully acknowledge the Jülich Supercomputing Centre and RWTH Aachen University for providing computational resources under project No. jiff40. Comparison of the spectral moments of the many-band case to the spectral moments of the single-band Hubbard model ================================================================================================================= The spectral moments of the many-band case contain many new contributions that do not have counterparts in the single-band Hubbard model. In this appendix we discuss which of the many-band terms have a correspondence in the single-band Hubbard model. In the single band Hubbard model the Coulomb matrix element is simply Vnn’tt’=Unn’tt’nt, where *U* is the Hubbard-*U*. For the single band Hubbard model the first four moments are: [eqsibahuzero] (0)= ljei(l-j) + =1, [eqsibahuone] &(1)= ljei(l-j) -,cj]+ &=()+Un-, [eqsibahutwo] &(2)= ljei(l-j) -, [H,cj]- ]+ &=(())2 +2 Un- () + U2n-, and [eqsinglebandmoment3] &(3)= ljei(l-j) -,H]-, [H,cjs]- ]+ &=[()]3 +3Un- 2 &+2 U2 () n- +2 U2 t00n- +U3 n- &-U2 ljei(l-j)tlj c-lc-jc-lc-j &+U2 lj tlj (2nl-1)c-lc-j &+U2 ljei(l-j)tlj cjc-lclc-j &+U2 ljei(l-j)tlj cjc-jclc-l. Here, N is the number of ${{\boldsymbol{k}}}$ points. Clearly, Eq.  and Eq.  turn into Eq.  and Eq. , respectively, when one evaluates them for the single-band Hubbard model and performs a Fourier transformation. The following contributions to *M*(2 + ) are zero for the single-band Hubbard model: *M*(2 + , 3), *M*(2 + , 4), *M*(2 + , 5), *M*(2 + , 6), *M*(2 + , 7). The sum *M*(2 + , 1) + *M*(2 + , 2) turns into *U*2⟨*n*− *σ*⟩ in the single-band case, which is the last term in Eq. . The sum ${{\boldsymbol{T}}}{{\boldsymbol{V}}}^{\rm H}+{{\boldsymbol{T}}}{{\boldsymbol{V}}}^{\rm X}\_{\sigma}+{{\boldsymbol{V}}}^{\rm H}{{\boldsymbol{T}}}+{{\boldsymbol{V}}}^{\rm X}\_{\sigma}{{\boldsymbol{T}}}$, which contributes to Eq. , evaluates to $2U\langle n\_{-\sigma}\rangle\epsilon({{\boldsymbol{k}}})$ in the case of the single-band Hubbard model. This is the middle term in Eq. . For the single-band Hubbard model the sum of *M**n**m*(3 + , 3) (Eq. ) and *M**σ**n**m*(3 + , 4) (Eq. ) is *U*3⟨*n*− *σ*⟩, which is the last term in the third line of Eq. . Evaluation of *M**σ**n**m*(2 + , *j*) ===================================== In order to keep the notation simple, we discuss the contractions C*σ*(2 + , *j*), Eq. , for the uniform electron gas without spin-polarization. We evaluate the contraction of Eq.  by transforming it into the momentum representation, where we obtain at zero temperature [eqc2p2] &(2+,2)=- v()v(4-5+)n5-n4+ &=-A2d3 q d3 k4 d3 k5. Here, kF=(3 2 n)1/3 is the Fermi wave number, Θ(*k*) is the Heaviside step function, v()=[Ry][aB] is the Coulomb potential expressed in terms of the Bohr radius $a\_{\rm B}$, ${\rm Ry}=13.6$ eV, and A2=[Ry]2[aB]2. Scaling all momenta in Eq.  by the factor *ξ*, we observe that this integral is proportional to *ξ*2, i.e., it is proportional to $k\_{\rm F}^2$. In this scaling analysis we took into account that *n* depends on $k\_{\rm F}$ as well: $n=k\_{\rm F}^3/(3\pi^2)$. It is convenient to express C*σ*(2 + , 2) in terms of the dimensionless density parameter rs= ( ). According to the scaling analysis above, it is sufficient to evaluate the integral for a single density parameter, e.g. *r*ʹ*s* = 1, because (2+,2)(rs)=. The integral can be performed numerically using the VEGAS  package for the Monte-Carlo integration of high-dimensional integrals. We obtain (2+,2)(rs)=[Ry]2. Next, we consider C(2 + , 16). This integral is given by [eqc2p16] &(2+,16)= v() &v(4-5+)n5-n4+n5 &=A2d3 q d3 k4 d3 k5 (kF-|5|) &, which also scales like *r**s*− 2, which is easy to see with a scaling analysis. Evaluating this integral with the VEGAS  package gives (2+,16)(rs)=[Ry]2. For the contraction C(2 + , 17) we need to compute the integral [eqc2p17] &(2+,17)= v() &v(4-5+)n5-n4+n4 &=A2d3 q d3 k4 d3 k5 (kF-|4|) &. Using a scaling analysis, we find that this integral also scales like *r**s*− 2. Employing the VEGAS  package yields (2+,17)(rs)=[Ry]2. While the contractions C(2 + , 2), C(2 + , 16), and C(2 + , 17) above are straightforward to evaluate with ${\tt VEGAS}$ , the contributions C(2 + , 1), C(2 + , 6), C(2 + , 8), C(2 + , 12), C(2 + , 14), and C(2 + , 15) require more care, because their integrands contain factors $(v({{\boldsymbol{q}}}))^2$, which lead to a strong divergence of the integrands in the limit *q* → 0. In contrast, the integrals of the contractions C(2 + , 2), C(2 + , 16), and C(2 + , 17) contain only a single factor $v({{\boldsymbol{q}}})$, which does not produce a divergence, because it is compensated by the *q*2 of *d*3*q* = *q*2sin(*θ*)*d**θ**d**ϕ*. However, these contributions may be grouped into pairs of two, where the two partners in a pair differ in sign. When we replace the Coulomb potential by v()=[Ry][aB], we observe that the limit *η* → 0 is finite for the pair, while both partners in a pair diverge in this limit. Consider for example the pair composed of C(2 + , 1) and C(2 + , 14). Evaluating the integrals [eqc2p1] &(2+,1)=2 v() v()n5-n4+ &=2A2d3 q d3 k4 d3 k5 (kF-|4+|) & and [eqc2p14] &(2+,14)=-2 v() v()n5-n4+n5 &=2A2d3 q d3 k4 d3 k5 (kF-|5|) & with the VEGAS  package we obtain 0=2[Ry]2. We explicitly left the spin-degeneracy factor 2 in this equation. Evaluation of *M**σ**n**m*(3 + , *j*) ===================================== In order to keep the notation simple, we discuss the contractions C*σ*(3 + , *j*) for the uniform electron gas without spin-polarization. Transforming Eq.  into the momentum representation, we obtain the following expression for C*σ*(3 + , 4) in terms of an integral: [eqc3p4] &(3+,4)= v()v(’)v(4-5+-’) &n5-n4+ &=A3 d3 q d3 q’ d3 k4 d3 k5 &, where A3=[Ry]3 [aB]3. Scaling all momenta in Eq.  by the factor *ξ* one may easily find that $\mathcal{C}^{(3+,4)}\_{\sigma }\propto k\_{\rm F}^3\propto r\_s^{-3}$. Using VEGAS  we obtain (3+,4)(rs)=-[Ry]3. Algorithm to construct the spectral function from non-commuting spectral moment matrices ======================================================================================== In this section we provide the derivation of the algorithm described in Sec. [secconstruspecfun] for the construction of the spectral function from the first four *N* × *N* spectral moment matrices, ${{\boldsymbol{M}}}^{(0)}$, ${{\boldsymbol{M}}}^{(1)}$, ${{\boldsymbol{M}}}^{(2)}$, and ${{\boldsymbol{M}}}^{(3)}$, where ${{\boldsymbol{M}}}^{(0)}$ is the unit matrix. Assume that we manage to find hermitean 2*N* × 2*N* matrices (1)=( cc (1) &1 1 &1 ), (2)=( cc (2) &2 2 &2 ), and (3)=( cc (3) &3 3 &3 ), which mutually commute, i.e., [appeqb1b2b3commut] &[(1),(2)]-=0, &[(1),(3)]-=0, &[(2),(3)]-=0, and which satisfy [appeqb2fact] (2)=(1)(1), and [appeqb3fact] (3)=(1)(2)=(1)(1)(1). Note that Eq.  is satisfied if Eq.  and Eq.  are satisfied. We will therefore solve only Eq.  and Eq.  below. The matrices ${{\boldsymbol{M}}}^{(I)}$, ${{\boldsymbol{B}}}\_{i}$ and ${{\boldsymbol{D}}}\_{i}$ have the size *N* × *N*. The matrices ${{\boldsymbol{M}}}^{(I)}$ are the given hermitean spectral moment matrices, while ${{\boldsymbol{B}}}\_{i}$ and ${{\boldsymbol{D}}}\_{i}$ are matrices that need to be determined such that Eq.  and Eq.  are satisfied. While ${{\boldsymbol{D}}}\_{i}$ is required to be hermitean, ${{\boldsymbol{B}}}\_{i}$ is not. If we manage to find these matrices ${{\boldsymbol{\mathcal{B}}}}^{(1)}$, ${{\boldsymbol{\mathcal{B}}}}^{(2)}$, and ${{\boldsymbol{\mathcal{B}}}}^{(3)}$, we know that they possess a common system of eigenvectors, i.e., they may be diagonalized by the same unitary transformation, because they are hermitean and they commute mutually. Consequently, we may find a unitary transformation ${{\boldsymbol{\mathcal{U}}}}$ so that [appeqb1diag] (1)=, where ${{\boldsymbol{\mathcal{D}}}}$ is a diagonal matrix. Using ${{\boldsymbol{\mathcal{U}}}}$ and ${{\boldsymbol{\mathcal{D}}}}$ we may write [appeqb2diag] (2)=2 and [appeqb3diag] (3)=3. In Ref.  we have shown that the eigenvalue problems Eq. , Eq. , and Eq.  may be rewritten in the form |(I)=|(I), where *I* = 1, 2, 3 (see Eq. (13) in Ref. ). When we denote the representation of the unit matrix as a column vector by $\bar{{{\boldsymbol{\mathcal{{{\boldsymbol{B}}}}}}}}^{(0)}$, we may combine Eq. , Eq. , and Eq.  into the compact expression [appeqcompactwa=b] |=|, where =[(0),(1),(2),(3)] and |=[|(0),|(1),|(2),|(3)]. Next, we rewrite $\bar{{{\boldsymbol{\mathcal{{{\boldsymbol{B}}}}}}}}$ as |=( cc |Low ) and |=( cc |Low ), where ${{\boldsymbol{\mathcal{{{\boldsymbol{M}}}}}}}$ and ${{\boldsymbol{\mathcal{{{\boldsymbol{W}}}}}}}$ are the matrices defined in Ref.  (see Eq. (7) and Eq. (8) in Ref. ). ${{\boldsymbol{\mathcal{{{\boldsymbol{M}}}}}}}$ is a *N*2 × 4 matrix, $\bar{{{\boldsymbol{\mathcal{{{\boldsymbol{B}}}}}}}}\_{\rm Low}$ is a 3*N*2 × 4 matrix, ${{\boldsymbol{\mathcal{{{\boldsymbol{W}}}}}}}$ is a *N*2 × 2*N* matrix, and $\bar{{{\boldsymbol{\mathcal{{{\boldsymbol{W}}}}}}}}\_{\rm Low}$ is a 3*N*2 × 2*N* matrix. Thus, we may rewrite Eq.  as two equations: [appeqwa=mupper] = and |Low=|Low. Eq.  is identical to the Eq. (9) in Ref. , which needs to be solved to obtain the spectral function. Thus, we may solve Eq.  by determining the matrices ${{\boldsymbol{B}}}\_{1}$ and ${{\boldsymbol{D}}}\_{1}$, and by diagonalizing the matrix ${{\boldsymbol{\mathcal{B}}}}^{(1)}$. Therefore, in order to prove the algorithm in Sec. [secconstruspecfun] it remains to show that the matrices ${{\boldsymbol{B}}}\_{1}$ and ${{\boldsymbol{D}}}\_{1}$ may be found by solving Eq.  and Eq. . From Eq.  we obtain the following equation for ${{\boldsymbol{B}}}\_{1}$: [appeqbbdag] 11=(2)-(1)(1). Since ${{\boldsymbol{B}}}\_{1}{{\boldsymbol{B}}}\_{1}^{\dagger}$ is a hermitean matrix, it may be diagonalized: 11=, where ${{\boldsymbol{U}}}$ is a unitary matrix and ${{\boldsymbol{D}}}$ is a diagonal matrix. If ${{\boldsymbol{B}}}\_{1}{{\boldsymbol{B}}}\_{1}^{\dagger}$ is positive definite, we obtain 1=, which is Eq.  in the main text. If ${{\boldsymbol{B}}}\_{1}{{\boldsymbol{B}}}\_{1}^{\dagger}$ is not positive definite, the algorithm described in this section cannot be used. However, in all applications discussed in this paper, ${{\boldsymbol{B}}}\_{1}{{\boldsymbol{B}}}\_{1}^{\dagger}$ is positive definite. We suspect that the reason for this is that ${{\boldsymbol{M}}}^{(2+)}$ is generally positive definite. From Eq.  we obtain the following equation for ${{\boldsymbol{B}}}\_{2}$: [appeqb2] 2=-1. This is Eq.  in the main text. From Eq.  we obtain the following equation for ${{\boldsymbol{D}}}\_{1}$: [appeqd1] 1=1-1. This is Eq.  in the main text. ${{\boldsymbol{D}}}\_{1}$ is required to be hermitean, which is not directly obvious from Eq. . However, making use of Eq.  and Eq.  it is straightforward to show that [eqd1hermitean] 1-1=0. At this point we have completely determined the matrix ${{\boldsymbol{\mathcal{B}}}}^{(1)}$, from which the spectral function may be constructed using its eigenvalues, which are contained in the diagonal matrix ${{\boldsymbol{\mathcal{D}}}}$, and the unitary transformation ${{\boldsymbol{\mathcal{U}}}}$ defined in Eq. . However, it remains to show that all those additional equations that follow from Eq.  and Eq.  but that we did not use to derive the expressions for ${{\boldsymbol{B}}}\_{1}$ and ${{\boldsymbol{D}}}\_{1}$ can be satisfied as well. From Eq.  we obtain [appeqremain1] 2=11+11, and from Eq.  we obtain [appeqremain2] 3=(1)2+12 and [appeqremain3] 3=12+12. ${{\boldsymbol{D}}}\_2$ as given by Eq.  is hermitean, because ${{\boldsymbol{D}}}\_1$ is hermitean according to Eq. . Thus, it does not violate any of the equations above. ${{\boldsymbol{B}}}\_3$ as given by Eq.  does not violate any of the equations above either. ${{\boldsymbol{D}}}\_3$ as given by Eq.  should be hermitean, which is not directly obvious. However, using Eq. , Eq. , and Eq. , it is straightforward to show that 3-3=0. Eq.  in the main text follows from Eq. , Eq. , and Ref. . Generalization to more moments ============================== We may generalize the argument given in Ref.  and show that from the first 2*P* moments (*P* = 1, 2, …) one may construct the spectral function: We may map each moment $\tilde{{{\boldsymbol{M}}}}^{(I)}$ (where $\tilde{{{\boldsymbol{M}}}}^{(I)}$ denotes the moment computed from the nested commutator expression – as opposed to the moment obtained from the explicit energy integration) onto an *N*2-dimensional real-valued vector ${{\boldsymbol{\mathcal{M}}}}^{(I)}$, because *N*2 real-valued parameters fully define a hermitean *N* × *N* matrix. We introduce the *N*2 × 2*P* matrix ${{\boldsymbol{\mathcal{M}}}}$ by =[(0),…,(2P-1)]. We try to approximate the spectral function by [eqspectralmatrixgeneral] =p=1P=1N ap p\*p (E-Ep), because we expect that *P**N* bands can be computed from the first 2*P* spectral moment matrices. Inserting this approximation into Eq.  yields [eqspectralmatrixgeneraleneint] M(I)= p=1P=1N ap p [Ep]I, where we defined W*α**β**γ**p* = V*α**γ**p*V*β**γ**p*\*. We may consider W*α**β**γ**p* as the row-*α* column-*β* element of a hermitean matrix ${{\boldsymbol{\mathcal{W}}}}\_{\gamma p}$. Since *γ* = 1, …, *N* and *p* = 1, ..., *P*, there are *P**N* such matrices. As the hermitean *N* × *N* matrix ${{\boldsymbol{\mathcal{W}}}}\_{\gamma p}$ is equivalent to a *N*2-dimensional real-valued vector $\tilde{{{\boldsymbol{\mathcal{W}}}}}\_{\gamma p}$, we define the *N*2 × *P**N* matrix ${{\boldsymbol{\mathcal{W}}}}=[\tilde{{{\boldsymbol{\mathcal{W}}}}}\_{11}\dots \tilde{{{\boldsymbol{\mathcal{W}}}}}\_{NP}]$. Additionally, we construct the *P**N* × 2*P* matrix ${{\boldsymbol{\mathcal{A}}}}$ by setting the element A*γ**p**m* in row (*γ*, *p*) and column *m* to *a**γ**p*(*E**γ**p*)*m* − 1. The requirements ${{\boldsymbol{M}}}^{(I)}=\tilde{{{\boldsymbol{M}}}}^{(I)}$ with *I* = 0, 1, …2*P* − 1 (where $\tilde{{{\boldsymbol{M}}}}^{(I)}$ are the moments computed from the nested commutator expressions) can now be formulated in compact form by [eqcompactmeqm] =. This is the generalization of Eq. (9) in Ref.  for the first 2*P* moments. The form of the equation is the same, only the sizes of the matrices are different. Since the matrix ${{\boldsymbol{\mathcal{M}}}}$ contains 2*P**N*2 elements, Eq.  defines 2*P**N*2 nonlinear equations. Each vector ${{\boldsymbol{\mathcal{V}}}}\_{\gamma p}$ has *N* components and there are *P**N* such vectors. ${{\boldsymbol{\mathcal{V}}}}\_{\gamma p}$ is required to be normalized and the gauge-transformation ${{\boldsymbol{\mathcal{V}}}}\_{\gamma p}\rightarrow e^{i\Phi}{{\boldsymbol{\mathcal{V}}}}\_{\gamma p} $ does not affect W*α**β**γ**p* = V*α**γ**p*V*β**γ**p*\*. Thus, every ${{\boldsymbol{\mathcal{V}}}}\_{\gamma p}$ is determined by 2(*N* − 1) real-valued unknowns, i.e., 2*P*(*N*2 − *N*) unknown coefficients need to be found to determine all vectors ${{\boldsymbol{\mathcal{V}}}}\_{\gamma p}$. Additionally, we need to find the *P**N* energies *E**γ**p* as well as the *P**N* spectral weights *a**γ**p*. Consequently, Eq.  is a system of 2*P**N*2 nonlinear equations for 2*P**N*2 unknowns. Thus, one may expect that it should be possible to compute *P**N* bands from the first 2*P* spectral moment matrices of size *N* × *N*, because the number of unknowns matches the number of available nonlinear equations. Moment-functional based spectral density-functional theory ========================================================== We describe a density-functional method which aims at computing the ground state electron density and the spectral function at the same time. One basic ingredient of our method is the construction of the spectral function from the first four spectral moment matrices. The second basic ingredient is the construction of the spectral moment matrices from density functionals. We call our method moment-functional based spectral density-functional theory (MFbSDFT), because it is based on density-functionals for the spectral moments and because it allows us to compute the spectral function. If it is implemented in second variation our method consumes only a fraction more computer time than a standard DFT calculation with the PBE functional. We show that MFbSDFT captures correlation effects such as the valence-band satellite in Ni and the formation of lower and upper Hubbard bands in SrVO3. For the purpose of constructing the spectral function from the first four *N* × *N* spectral moment matrices we describe an efficient algorithm based on the diagonalization of one hermitean 2*N* × 2*N* matrix. Introduction ============ In density-functional theory (DFT) the ground state electron density is determined by minimizing the total energy functional . While most contributions to the total energy functional, such as the Hartree energy, the exchange energy, and the correlation energy, can be expressed as functionals of the electron density, it is difficult to express the kinetic energy directly in this way. This is why within the most popular kind of DFT – the Kohn-Sham (KS) DFT – the KS-Hamiltonian  is set up and solved with the main purpose to provide the kinetic energy. However, the KS energy bands agree very often fairly well with photoemission data  and the KS spectrum is therefore even used to compute response properties such as the anomalous Hall effect , the Gilbert damping , the direct and inverse spin-orbit torque , and the inverse Faraday effect  in metallic systems. These KS response functions are often in good agreement with the corresponding material property tensors measured experimentally. Well-known deficiencies of this approach are the underestimation of the band gap, which may require the application of band shifts when computing optical responses such as photocurrents  in semiconductors such as GaAs. Instead of shifting the bands to match the band gap known from experiments, one may use the *G**W* approximation , which is a parameter-free technique based on many-body perturbation theory and which often predicts gaps that are closer to experiment than KS-DFT. However, since one deals then directly with a many-body Hamiltonian, one forsakes the DFT idea of obtaining all properties as directly as possible from the ground state density in order to avoid the complexity and factorial growth of the many-body Hilbert space. Another short-coming of KS-spectra is the overestimation of the magnetic moment and the resulting overestimation of the exchange splitting of some weak itinerant ferromagnets such as MnSi, which requires us to reduce the exchange field by a scaling factor in order to compute the topological Hall effect in MnSi . Moreover, a well-known deficiency of the KS spectrum is the absence of the splitting of bands into lower and upper Hubbard bands due to strong electron correlations . Such a splitting of the single-particle bands leads for example to the appearance of a satellite peak roughly 6 eV below the Fermi energy in Ni . In order to compute the spectrum in such cases of strongly interacting electrons one often uses DFT only to obtain the KS wavefunctions of a small manifold corresponding to a small energy window around the Fermi energy and constructs an interacting Hamiltonian for this manifold, which one solves by dynamical mean-field theory (DMFT)  in order to obtain the spectral function. In other words, one remains within the DFT-framework in order to determine the ground state density, but similarly to *G**W* one leaves this framework and directly solves an interacting many-electron Hamiltonian in order to obtain the spectrum of the correlated system instead of evaluating a density functional. However, one may also take a different viewpoint: The local spectral function of DMFT minimizes the effective action. In this sense DMFT is a spectral density-functional approach . Nevertheless, the question still poses itself if it is possible to obtain both ground state density and correlated spectral function within a density-functional approach which avoids the direct use of many-body techniques such as *G**W* and DMFT. A hermitean *N* × *N* matrix has *N* real-valued eigenvalues. This well-known fact from linear algebra is exploited in many electronic structure programs based on density-functional theory, where the KS equations are solved numerically by diagonalizing a hermitean matrix. The direct construction of 2*N* state vectors, 2*N* state energies, and 2*N* spectral weight factors from 4 hermitean *N* × *N* spectral moment matrices  may be considered as a generalization of the diagonalization of hermitean matrices. It may also be interpreted as a generalization of the two-pole approximation  used in the self-consistent spectral moment method of the single-band Hubbard model to the case of many bands . Since the self-consistent moment method based on the first four spectral moments captures the Ni satellite peak , such a generalization may be useful when the description of the spectral properties obtained from standard DFT needs to be improved because of strong correlation effects, which split the electronic bands into lower and upper Hubbard bands. The *I*-th spectral moment matrix is defined by  [eqspecmomseneint] (I)=- (E) EI dE, where ${{\boldsymbol{S}}}\_{\sigma}(E)$ is the spectral density matrix at energy *E*. In this paper we discuss only the magnetically collinear case without spin-orbit coupling. Therefore, there are only spectral density matrices ${{\boldsymbol{S}}}\_{\uparrow}(E)$ with spin *σ* =  ↑  and spectral density matrices ${{\boldsymbol{S}}}\_{\downarrow}(E)$ with spin *σ* =  ↓ . So far, the direct construction of the spectral function of interacting fermionic many-particle systems from the first four spectral moments has not yet been investigated intensively. Well-explored are only the single-band case with the first four spectral moments , the many-band case with the first two spectral moments , which has been shown to provide a Hartree-Fock type approximation, and the option to use the spectral moments as sum rules in order to guide the construction of the spectral function by other means  – a concept which one may extend even to nonequilibrium conditions . Recently, we have demonstrated how to solve the Hubbard-Rashba model within the many-band generalization of the two-pole approximation of the spectral density and of the self-consistent moment method . For this purpose we did not make use of the DFT concept, but instead we computed the higher-order correlation functions ⟨*c**i**α*†*c**j**β*†*c**l**γ**c**m**δ*⟩ self-consistently based on the spectral theorem . Such higher-order correlation functions are needed to compute the spectral moment ${{\boldsymbol{M}}}^{(3)}\_{\sigma}$, for example. While the spectral moment matrices ${{\boldsymbol{M}}}^{(I)}\_{\sigma}$ all have two orbital indices, the higher-order correlation functions such as ⟨*c**i**α*†*c**j**β*†*c**l**γ**c**m**δ*⟩ have at least four orbital indices. The computational effort of standard KS DFT scales with the third power of the number of basis functions $N\_{\rm B}$. Obviously, the many-band self-consistent moment method scales worse, namely at least $\propto N\_{\rm B}^{4}$. In order to keep the computational effort low, we therefore proposed in Ref.  to map the KS electronic structure of the valence bands and the first few conduction bands first onto Wannier functions. The resulting Wannier Hamiltonian may then be supplemented by Hubbard-type interactions and this interacting Hamiltonian may be treated with the many-band generalization of the self-consistent moment method. However, similarly to the *G**W* and LDA+DMFT approaches discussed above, one thereby leaves the DFT framework, because one computes the spectrum using a many-body Hamiltonian technique instead of a density functional. In this paper we combine basic ideas of DFT with the many-band generalization of the self-consistent moment method in order to develop an approach which aims at computing both the ground state density and the spectral function at the same time without forsaking the DFT framework. The first Hohenberg-Kohn theorem states that the ground state electron density determines the Hamiltonian up to a constant. Since the spectral moments can be computed from the Hamiltonian, the Hohenberg-Kohn theorem implies therefore that also the spectral moments may be expressed as density-functionals. To explore how this can be done in practice is the central goal of this paper. Combining this with our recipe  to construct the spectral function from the first four spectral moments we obtain a moment-functional based spectral density-functional theory (MFbSDFT). The rest of this paper is structured as follows. In Sec. [sectheory] we explain the theory of MFbSDFT. In Sec. [secconstruspecfun] we describe an efficient algorithm for computing the spectral function from the first four spectral moments. In Sec. [secfunc] we explain how we construct the moment functionals. In Sec. [secsecvar] we explain how the MFbSDFT method may be implemented within the full-potential linearized augmented plane wave method (FLAPW) within a second variation approach. In Sec. [secresults] we present applications of our method to fcc Ni and SrVO3. In Sec. [secdisout] we discuss some open questions of MFbSDFT and strategies of how to develop it further. This paper ends with a summary in Sec. [secsummary]. Theory ====== The concept of moment functionals --------------------------------- The ground state charge density defines the Hamiltonian uniquely (up to a constant) . Consequently, it determines also the spectral function uniquely. In order to write the spectral function in matrix form we need a suitable set of orthonormal basis functions $\phi\_{n}({{\boldsymbol{r}}})$. Denoting the creation and annihilation operators corresponding to state $\phi\_{n}({{\boldsymbol{r}}})|\sigma\rangle$ – where ∣*σ*⟩ is a spinor – by *c**σ**n*† and *c**σ**n*, respectively, the matrix elements of the spectral function matrix are [eqspecfunft] Snm(E)=dt eE t +. When periodic boundary conditions are used, the spectral function and the spectral moments aquire an additional ${{\boldsymbol{k}}}$-index for the *k* point ${{\boldsymbol{k}}}$, which we often suppress in this manuscript for notational convenience. The spectral moments may be obtained by plugging Eq.  into Eq. . Since the spectral function is uniquely determined by the ground state density, also the spectral moments are uniquely defined by it. The spectral moments may also be expressed in terms of real-space coordinates: M(I)(,’)=d E EI nmSnm(E)n()\*m(’). We may consider $M^{(I)}\_{\sigma }({{\boldsymbol{r}}},{{\boldsymbol{r}}}')$ as a non-local potential, from which we may obtain the spectral moment matrices by computing the matrix elements: M(I)nm=d3 r d3 r’ M(I)(,’)\*n()m(’). According to our arguments above, the non-local potentials $M^{(I)}\_{\sigma }({{\boldsymbol{r}}},{{\boldsymbol{r}}}')$ are unique functionals of the electron density. In KS-DFT the total energy functional is split into the kinetic energy, the Hartree energy, and the exchange-correlation energy . The kinetic energy is computed from the KS-wavefunctions, the Hartree energy is computed from the charge density, and for the exchange-correlation energy one often uses analytical expressions in terms of the charge density, which have been derived for the uniform electron gas . Similarly, the potentials $M^{(I)}\_{\sigma }({{\boldsymbol{r}}},{{\boldsymbol{r}}}')$ (*I* = 1, 2, 3 if we use the first four moments) contain contributions from the kinetic energy and from the Hartree term. In the following section we show that these contributions may be identified and separated from a remainder, which thus plays a similar role in MFbSDFT like the exchange-correlation potential does in KS-DFT. We expect that useful expressions for this remainder can be found by evaluating it for the uniform electron gas. Explicit expressions for the moments ------------------------------------ We consider the Hamiltonian &H=nmT nm cncm &+’ n m n’ m’ Vnmn’m’ cnc’ mc’ m’cn’, where Tnm=d3 r \*n() m() and Vnmn’m’=d3 r1 d3 r2 |1-2|, and $V({{\boldsymbol{r}}})$ is the lattice potential. Note that in the entire Sec. [secexplicitexprmom] we use Hartree atomic units for notational convenience. Many-body approaches such as LDA+DMFT often take into account the Coulomb matrix element *V**n**m**n*ʹ*m*ʹ only when all orbitals, i.e., *n*, *m*, *n*ʹ and *m*ʹ, describe the same crystal lattice site. In the simplest approximation *V**n**m**n*ʹ*m*ʹ is described by a single parameter, the so-called Hubbard-*U*. Components of *V**n**m**n*ʹ*m*ʹ that are neglected hereby are of course partly treated in LDA+DMFT, because the lattice potential $V({{\boldsymbol{r}}})$ is replaced by the KS potential in this case. Therefore, the Hubbard-*U* only describes the Coulomb interaction from strong localization of electrons. These effects are underestimated by KS-DFT and become important when *U* approaches or exceeds the bandwidth. In contrast, we do not restrict *V**n**m**n*ʹ*m*ʹ at this point, i.e., both local and non-local contributions are described by it in MFbSDFT and $V({{\boldsymbol{r}}})$ is the pure lattice potential without exchange-correlation terms. The zeroth moment is given by [eqmubazero] M(0)nm=+=nm, where […]+ denotes the anticommutator, and the first moment evaluates to [eqmubaone] &M(1)nm=-,c m]+= Tnm+ &+n’m’’Vnn’mm’c ’ n’ c’ m’ &-n’m’Vnn’m’mc n’ cm’. Defining the Hartree potential by [eqhartreepotential] VH()=’ n’ m’d3 r2 |-2| c ’ n’ c’ m’and the non-local exchange potential by [xcnonloc] VX(1,2)=- n’ m’ |1-2| c n’ cm’we may write the first moment as &M(1)nm=Tnm+VHnm+VXnm= HFnm, where $V^{\rm H}\_{nm}$ and $V^{\rm X}\_{\sigma nm}$ are the matrix elements of the Hartree potential and of the non-local exchange potential, respectively. Thus, one obtains a method of Hartree-Fock (HF) type if one considers only the first two moments. It differs from the exact Hartree-Fock method by the self-interaction error (SIE)  (see also Sec. [secdisout] for a brief discussion of SIE from the perspective of MFbSDFT). Therefore, we introduced the alternative label $\mathcal{M}^{\rm HF}\_{\sigma nm}$ for the first moment, which expresses concisely what this first moment contains. Instead of using the non-local exchange potential Eq.  one may use the local exchange potential  [eqxcloc] VlocX()= [n()X(n())], where [eqchargedensinout] n()=n m\*n()m() c n cmis the electron density at position ${{\boldsymbol{r}}}$ and $\epsilon^{\rm X}(n({{\boldsymbol{r}}}))$ is the exchange energy density for electron density $n({{\boldsymbol{r}}})$. The local potential Eq.  has the advantage that it is computationally very cheap to evaluate in contrast to the non-local version Eq. . However, hybrid density functionals, which admix exact exchange, are often more precise than density functionals that use only the local approximation Eq. . Fortunately, one may reduce the computational burden of exact exchange by screening the Coulomb potential . In the numerical calculations in this work we will use only the local expression Eq. , but, similar to KS-DFT, we expect that the precision of the MFbSDFT approach can be increased by avoiding the approximation of non-local potentials by local potentials. We leave it for future work to explore how the MFbSDFT approach may be combined with non-local potentials. In Ref.  we explain that for independent electrons the spectral moment matrices commute, i.e., -=0 for all *I* and *J*, and that the eigenvalues of the spectral moment matrix ${{\boldsymbol{M}}}\_{\sigma}^{(I)}$ are simply the eigenvalues of the single-particle Hamiltonian raised to the *I*-th power, i.e., (*E**σ**n*)*I*. For correlated electrons this is not the case . However, we may expect that the moment ${{\boldsymbol{M}}}\_{\sigma}^{(I)}$ contains a term $[{{\boldsymbol{\mathcal{M}}}}\_{\sigma}^{\rm HF}]^I$, because there may be cases where Hartree-Fock provides an excellent description because correlation effects are small, and these special cases have to be accommodated by the general theory. We may therefore expect that the second moment should contain a term [eqsecmomhfhf] &HFHF= +H+X +H+HH+ &+ HX+X +XH+XX, which is indeed what we find. We may use this observation to split the second moment into the anticipated part ${{\boldsymbol{\mathcal{M}}}}^{\rm HF}\_{\sigma}{{\boldsymbol{\mathcal{M}}}}^{\rm HF}\_{\sigma}$ plus additional new terms, which we denote by ${{\boldsymbol{M}}}^{(2+)}\_{\sigma}$, i.e., [eqm2mhfhfm2p] (2)=HFHF+(2+). In contrast to the single-band Hubbard model with on-site Coulomb interaction, where higher-order correlation functions appear in the third moment and in the higher moments, already the second moment ${{\boldsymbol{M}}}^{(2)}\_{\sigma}$ of the many-band case with the full Coulomb interaction contains the higher-order correlation function ⟨*c**σ**n*†*c**σ*ʹ*m*†*c**σ**n*ʹ*c**σ*ʹ*m*ʹ⟩. In order to identify the terms ${{\boldsymbol{V}}}^{\rm H}{{\boldsymbol{V}}}^{\rm H}$, ${{\boldsymbol{V}}}^{\rm H}{{\boldsymbol{V}}}\_{\sigma}^{\rm X}$, ${{\boldsymbol{V}}}^{\rm X}\_{\sigma}{{\boldsymbol{V}}}^{\rm H}$, and ${{\boldsymbol{V}}}^{\rm X}\_{\sigma}{{\boldsymbol{V}}}^{\rm X}\_{\sigma}$ predicted by Eq.  we need to evaluate ⟨*c**σ**n*†*c**σ*ʹ*m*†*c**σ**n*ʹ*c**σ*ʹ*m*ʹ⟩ in perturbation theory. In contrast, the terms ${{\boldsymbol{T}}}{{\boldsymbol{V}}}^{\rm H}$, ${{\boldsymbol{T}}}{{\boldsymbol{V}}}^{\rm X}\_{\sigma}$, ${{\boldsymbol{V}}}^{\rm H}{{\boldsymbol{T}}}$, and ${{\boldsymbol{V}}}^{\rm X}\_{\sigma} {{\boldsymbol{T}}}$ can be identified without using perturbation theory, because they appear with the correlation function ⟨*c**σ*ʹ*m*†*c**σ*ʹ*m*ʹ⟩. The term ${{\boldsymbol{T}}}{{\boldsymbol{T}}}$ appears even without any correlation function. Therefore, we define [eqperturbisplit] &cn c’ m cn’ c’ m’ = cn c’ m cn’ c’ m’ &- cnc’ m’ c’ mcn’ + cncn’ c’ mc’ m’. The idea behind Eq.  is that the diagrammatic expression of ⟨*c**σ**n*†*c**σ*ʹ*m*†*c**σ**n*ʹ*c**σ*ʹ*m*ʹ⟩ as obtained within perturbation theory contains terms that may be written as ⟨*c**σ**n*†*c**σ*ʹ*m*ʹ⟩⟨*c**σ*ʹ*m*†*c**σ**n*ʹ⟩ and  − ⟨*c**σ**n*†*c**σ**n*ʹ⟩⟨*c**σ*ʹ*m*†*c**σ*ʹ*m*ʹ⟩. Since these latter two terms occur sometimes in ${{\boldsymbol{\mathcal{M}}}}^{\rm HF}\_{\sigma}{{\boldsymbol{\mathcal{M}}}}\_{\sigma}^{\rm HF}$ we introduce the notation of Eq.  in order to split ${{\boldsymbol{M}}}^{(2)}\_{\sigma}$ into ${{\boldsymbol{\mathcal{M}}}}^{\rm HF}\_{\sigma}{{\boldsymbol{\mathcal{M}}}}\_{\sigma}^{\rm HF}$ and ${{\boldsymbol{M}}}\_{\sigma}^{(2+)}$. Using this notation we may write ${{\boldsymbol{M}}}\_{\sigma}^{(2+)}$ as a sum of 17 terms: [eqm2pfirst] Mnm(2+,1)=n’m’tt’’Vnn’tt’Vt’tm’m c’ n’ c’ m’, which is spin-independent, [eqm2psecond] Mnm(2+,2)=-n’m’tt’Vnn’tt’Vtt’m’m cn’ cm’, [eqm2pthird] M nm(2+,3)=-n’m’tt’z’Vnn’zt’Vtzm’m c’ n’ c’ t c’ t’ c’ m’, which does not depend on the spin, [eqm2pfourth] Mnm(2+,4)=-n’m’tt’zVnn’t’tVtm’zm cn’ cm’ ct’ cz, [eqm2pfifth] Mnm(2+,5)=n’m’tt’zVnn’tt’Vtm’zm cn’ cm’ ct’ cz, [eqm2psixth] Mnm(2+,6)=n’m’tt’zVnn’tt’Vzm’n’m cz cm’ ct ct’, [eqm2pseventh] Mnm(2+,7)=n’m’tt’zVnn’tzVm’zt’m cn’ cm’ ct ct’, [eqm2peighth] Mnm(2+,8)=-n’m’tt’zVnn’tt’Vzm’n’m cm’ c-z ct c-t’, [eqm2pnineth] Mnm(2+,9)=-n’m’tt’zVnn’tt’Vm’tzm cn’ c-m’ ct’ c-z, [eqm2ptenth] Mnm(2+,10)=-n’m’tt’zVnn’tt’Vm’tzm cm’ c-n’ cz c-t’, [eqm2peleventh] Mnm(2+,11)=n’m’tt’zVnn’tt’Vtm’zm cm’ c-n’ cz c-t’, [eqm2ptwelfth] Mnm(2+,12)=n’m’tt’zVnn’tt’Vt’m’zm cm’ c-n’ ct c-z, [eqm2pthirteenth] Mnm(2+,13)=n’m’tt’zVnn’tt’Vm’t’zm cn’ c-m’ ct c-z, [eqm2pfourteenth] M nm(2+,14)=-’ n’m’tt’zVnn’tt’Vm’tzm c’ m’ c’ t’ c’ n’ c’ z, which is spin-independent, [eqm2pfifteenth] Mnm(2+,15)=-n’m’tt’zVnn’tt’Vt’m’zm cm’ ct cn’ cz, [eqm2psixteenth] Mnm(2+,16)=n’m’tt’zVnn’tt’Vtm’zm cm’ ct’ cn’ cz, and [eqm2pseventeenth] Mnm(2+,17)=n’m’tt’zVnn’tt’Vm’t’zm cm’ ct cn’ cz. In order to evaluate the contributions to ${{\boldsymbol{M}}}^{(2+)}$ in a way similar to Eq. , we suggest to consider the contractions [eqdftcontraction] (2+,j)=nmMnm(2+,j)cn cm and to compute them for the uniform electron gas as a function of electron density. Similarly to Eq.  we assume that we may derive local potentials [eqdftlocpot] (2+,j)()=[(2+,j)] from these contractions and compute the moments from these local potentials: [eqdftstylemat] Mnm(2+,j)=d3 r (2+,j)() n\*()m(). Many popular exchange-correlation potentials are constructed with the help of Green’s function Monte Carlo simulations of the energy of the uniform electron gas , because the universality of the exchange correlation potential implies that it may be constructed from a uniform system. However, Green’s function Monte Carlo data are not yet available for our expressions Eq.  through Eq. . On the other hand, diagrammatic perturbation theory has been used to derive expressions for the energy of the uniform electron gas in the limit of low and high density and these results are considered in the construction of exchange correlation potentials as well . For the purpose of demonstrating the feasibility of the MFbSDFT approach it is sufficient to find simple approximate expressions for the contractions Eq. . Therefore, we evaluate these contractions for the uniform electron gas using perturbation theory in Appendix [feg2p]. We leave it for future work to find accurate analytic representations of the contractions of Eq.  through Eq.  based on techniques such as Green’s function Monte Carlo simulations and diagrammatic expansions for the high-density limit. Similar to Eq. , one may anticipate that the third moment may be decomposed as (3)=HFHFHF+(3+), where [eqhfhfhf] &HFHFHF= +H+X &+H+HH + HX &+X +HXH+HXX &+H+HH+HX &+HH+HHH + HHX &+HX +HXH+HXX &+X+XH+XX &+XH+XHH + XHX &+XX +XXH+XXX, which is indeed what we find: To identify ${{\boldsymbol{T}}}{{\boldsymbol{T}}}{{\boldsymbol{T}}}$ in ${{\boldsymbol{M}}}^{(3)}\_{\sigma}$ one needs to check the terms without correlation functions. To find the terms that contain two factors of the matrix ${{\boldsymbol{T}}}$, i.e., the terms ${{\boldsymbol{T}}}{{\boldsymbol{T}}}{{\boldsymbol{V}}}^{\rm H}$, ${{\boldsymbol{T}}}{{\boldsymbol{V}}}^{\rm H}{{\boldsymbol{T}}}$, ${{\boldsymbol{V}}}^{\rm H}{{\boldsymbol{T}}}{{\boldsymbol{T}}}$, ${{\boldsymbol{T}}}{{\boldsymbol{T}}}{{\boldsymbol{V}}}^{\rm X}$, ${{\boldsymbol{T}}}{{\boldsymbol{V}}}^{\rm X}{{\boldsymbol{T}}}$, and ${{\boldsymbol{V}}}^{\rm X}{{\boldsymbol{T}}}{{\boldsymbol{T}}}$, one needs to look out for the contributions to ${{\boldsymbol{M}}}^{(3)}\_{\sigma}$ that contain the correlation function ⟨*c**σ*ʹ*m*†*c**σ*ʹ*m*ʹ⟩. To track down the terms that contain a single factor of the matrix ${{\boldsymbol{T}}}$, i.e., the terms ${{\boldsymbol{T}}}{{\boldsymbol{V}}}^{\rm H}{{\boldsymbol{V}}}^{\rm H}$, ${{\boldsymbol{V}}}^{\rm H}{{\boldsymbol{T}}}{{\boldsymbol{V}}}^{\rm H}$, ${{\boldsymbol{V}}}^{\rm H}{{\boldsymbol{V}}}^{\rm H}{{\boldsymbol{T}}}$, ${{\boldsymbol{T}}}{{\boldsymbol{V}}}^{\rm H}{{\boldsymbol{V}}}^{\rm X}$, ${{\boldsymbol{V}}}^{\rm H}{{\boldsymbol{T}}}{{\boldsymbol{V}}}^{\rm X}$, ${{\boldsymbol{V}}}^{\rm H}{{\boldsymbol{V}}}^{\rm X}{{\boldsymbol{T}}}$, ${{\boldsymbol{T}}}{{\boldsymbol{V}}}^{\rm X}{{\boldsymbol{V}}}^{\rm H}$, ${{\boldsymbol{V}}}^{\rm X}{{\boldsymbol{T}}}{{\boldsymbol{V}}}^{\rm H}$, ${{\boldsymbol{V}}}^{\rm X}{{\boldsymbol{V}}}^{\rm H}{{\boldsymbol{T}}}$, ${{\boldsymbol{T}}}{{\boldsymbol{V}}}^{\rm X}{{\boldsymbol{V}}}^{\rm X}$, ${{\boldsymbol{V}}}^{\rm X}{{\boldsymbol{T}}}{{\boldsymbol{V}}}^{\rm X}$, and ${{\boldsymbol{V}}}^{\rm X}{{\boldsymbol{V}}}^{\rm X}{{\boldsymbol{T}}}$, one needs to find the contributions to ${{\boldsymbol{M}}}^{(3)}\_{\sigma}$ that contain the correlation function ⟨*c**σ**m*†*c**σ*ʹ*m*ʹ†*c**σ**n**c**σ*ʹ*n*ʹ⟩ and one has to evaluate this correlation function in perturbation theory. In order to identify all those terms in Eq.  that do not contain the matrix ${{\boldsymbol{T}}}$, i.e., the terms ${{\boldsymbol{V}}}^{\rm H}{{\boldsymbol{V}}}^{\rm H}{{\boldsymbol{V}}}^{\rm H}$, ${{\boldsymbol{V}}}^{\rm X}{{\boldsymbol{V}}}^{\rm H}{{\boldsymbol{V}}}^{\rm H}$, ${{\boldsymbol{V}}}^{\rm H}{{\boldsymbol{V}}}^{\rm X}{{\boldsymbol{V}}}^{\rm H}$, ${{\boldsymbol{V}}}^{\rm H}{{\boldsymbol{V}}}^{\rm H}{{\boldsymbol{V}}}^{\rm X}$, ${{\boldsymbol{V}}}^{\rm H}{{\boldsymbol{V}}}^{\rm X}{{\boldsymbol{V}}}^{\rm X}$, ${{\boldsymbol{V}}}^{\rm X}{{\boldsymbol{V}}}^{\rm H}{{\boldsymbol{V}}}^{\rm X}$, ${{\boldsymbol{V}}}^{\rm X}{{\boldsymbol{V}}}^{\rm X}{{\boldsymbol{V}}}^{\rm H}$, and ${{\boldsymbol{V}}}^{\rm X}{{\boldsymbol{V}}}^{\rm X}{{\boldsymbol{V}}}^{\rm X}$, one needs to check the expressions that contain the correlation function ⟨*c**σ**m*†*c**σ*ʹ*m*ʹ†*c**σ*ʺ*t*†*c**σ**n**c**σ*ʹ*n*ʹ*c**σ*ʺ*t*ʹ⟩ and one has to evaluate this correlation function in perturbation theory. When one evaluates the correlator ⟨*c**σ**n*†*c**σ*ʹ*m*†*c**σ**n*ʹ*c**σ*ʹ*m*ʹ⟩ in perturbation theory in order to extract the terms discussed above, e.g. ${{\boldsymbol{V}}}^{\rm H}{{\boldsymbol{V}}}^{\rm H}{{\boldsymbol{T}}}$, one may use Eq.  like for the second moment. This procedure generates a group of terms in ${{\boldsymbol{M}}}\_{\sigma}^{(3+)}$ that contain ⟨⟨*c**σ**n*†*c**σ*ʹ*m*†*c**σ**n*ʹ*c**σ*ʹ*m*ʹ⟩⟩. Similarly, it is convenient to define [eqperturbisplit6ops] &cm c’ m’c” t cn c’ n’c” t’ &= cm c’ m’c” t cn c’ n’c” t’ &-cmcn c’ m’c” t c’ n’c” t’ &+cm c’ n’c’ m’c” t cn c” t’ &-cmc” t’ c’ m’c” t cn c’ n’ and to use Eq.  in order to replace the correlators of the type ⟨*c**σ*ʹ*m*ʹ†*c**σ*ʺ*t*†*c**σ*ʹ*n*ʹ*c**σ*ʺ*t*ʹ⟩ on the right-hand side of Eq.  by ⟨⟨*c**σ*ʹ*m*ʹ†*c**σ*ʺ*t*†*c**σ*ʹ*n*ʹ*c**σ*ʺ*t*ʹ⟩⟩ and the simpler correlators ⟨*c**σ*ʹ*m*ʹ†*c**σ*ʹ*n*ʹ⟩. When we use this procedure to express ⟨*c**σ**m*†*c**σ*ʹ*m*ʹ†*c**σ*ʺ*t*†*c**σ**n**c**σ*ʹ*n*ʹ*c**σ*ʺ*t*ʹ⟩ in terms of the correlators ⟨*c**σ*ʹ*m*ʹ†*c**σ*ʹ*n*ʹ⟩ and thereby extract the terms discussed above, e.g. ${{\boldsymbol{V}}}^{\rm H}{{\boldsymbol{V}}}^{\rm H}{{\boldsymbol{V}}}^{\rm H}$, we generate additional groups of terms in ${{\boldsymbol{M}}}\_{\sigma}^{(3+)}$, which contain ⟨⟨*c**σ*ʹ*m*ʹ†*c**σ*ʺ*t*†*c**σ*ʹ*n*ʹ*c**σ*ʺ*t*ʹ⟩⟩ or ⟨⟨*c**σ**m*†*c**σ*ʹ*m*ʹ†*c**σ*ʺ*t*†*c**σ**n**c**σ*ʹ*n*ʹ*c**σ*ʺ*t*ʹ⟩⟩. The remaining contributions to ${{\boldsymbol{M}}}\_{\sigma}^{(3+)}$ may be split into groups of formally similar expressions. The first group of two terms in ${{\boldsymbol{M}}}\_{\sigma}^{(3+)}$ contains two matrices ${{\boldsymbol{T}}}$ and the correlation function ⟨*c**σ*ʹ*m*†*c**σ*ʹ*m*ʹ⟩: [eqm3pone] M nm(3+,1)=n’tt’’Vnn’tt’ Ttm S’ t’n’, which is spin-independent, and [eqm3ptwo] Mnm(3+,2)=-n’tt’Vnn’t’t Ttm St’n’, where S’ t’ n’=m’is the commutator between the matrix ${{\boldsymbol{T}}}$ and the density matrix. It is desirable to rewrite Eq.  in a form that permits a numerically efficient evaluation of this term, because the direct computation of Eq.  will often be numerically demanding due to the four indices of the Coulomb matrix element. We may exploit that the indices *t*ʹ and *n*ʹ couple to the Coulomb matrix element in a way that allows us to identify the density matrix. Therefore we may define ()=n’t’’\*n’()t’() S’ t’ n’, from which we compute the Hartree-type integral ()=d3 r2. Eq.  may now be written as M nm(3+,1)=tntTtm, where F*n**t* are the matrix elements of the Hartree-type potential $\mathcal{F}({{\boldsymbol{r}}})$. Similarly, in order to evaluate Eq.  we may exploit that the indices *t*ʹ and *n*ʹ couple to the Coulomb matrix element in a way that allows us to identify a non-local potential. Therefore we may define (1,2)=n’t’ S’ t’ n’. Eq.  may now be written as Mnm(3+,2)=-tntTtm, where L*σ**n**t* are the matrix elements of the non-local Fock-type potential $\mathcal{L}\_{\sigma}({{\boldsymbol{r}}}\_{1},{{\boldsymbol{r}}}\_{2})$. The next group of two terms in ${{\boldsymbol{M}}}\_{\sigma}^{(3+)}$ does not contain the matrix ${{\boldsymbol{T}}}$, but it contains the correlation function ⟨*c**σ*ʹ*m*†*c**σ*ʹ*m*ʹ⟩: [eqm3p3] Mnm(3+,3)=’n’tt’zz’m’ Vnn’tt’Vtt’zz’Vz’zm’mc’ n’ c’ m’, which does not depend on the spin, and [eqm3p4] Mnm(3+,4)=-n’tt’zz’m’ Vnn’tt’Vtt’z’zVz’zm’mcn’ cm’. In Appendix [secappcompare] we show that the sum of Eq.  and Eq.  turns into the very simple result *U*⟨*n*− *σ*⟩ in the case of the single-band Hubbard model. However, for realistic many-band systems, the direct evaluation of these expressions may be numerically demanding due to the four indices of the Coulomb matrix element. Therefore, we may try to use concepts from DFT to simplify the calculations. In order to approximate these contributions by local functionals, we may use the recipe described above in Eq. , Eq. , and Eq. . In Appendix [feg3p] we evaluate the corresponding contractions for the uniform electron gas. ${{\boldsymbol{M}}}^{(3+)}$ contains several additional groups of terms that we have not discussed yet. One group of terms contains three factors of the Coulomb matrix element and the correlator ⟨*c**σ**m*†*c**σ*ʹ*m*ʹ†*c**σ*ʺ*t*†*c**σ**n**c**σ*ʹ*n*ʹ*c**σ*ʺ*t*ʹ⟩, but the indices are not connected in a way that terms such as ${{\boldsymbol{H}}}{{\boldsymbol{H}}}{{\boldsymbol{H}}}$ or ${{\boldsymbol{H}}}{{\boldsymbol{X}}}{{\boldsymbol{H}}}$ arise, which we have already discussed above. An example from this group of terms is [eqexample5] Mnm(3+,5)&= -n’tt’ uu’ zz’m’ Vnn’tt’ Vt’m’uu’ Vzz’m’m &cn’ czcz’ ct cucu’. The first index *t*ʹ of *V**t*ʹ*m*ʹ*u**u*ʹ is shared with *V**n**n*ʹ*t**t*ʹ, while the second index *m*ʹ of *V**t*ʹ*m*ʹ*u**u*ʹ is shared with *V**z**z*ʹ*m*ʹ*m*. The last two indices, *u* and *u*ʹ are contracted with the correlation function ⟨*c**σ**n*ʹ†*c**σ**z*†*c**σ**z*ʹ†*c**σ**t**c**σ**u**c**σ**u*ʹ⟩. In this term, it is therefore not possible to express *V**t*ʹ*m*ʹ*u**u*ʹ through the matrices ${{\boldsymbol{H}}}$ or ${{\boldsymbol{X}}}$ when perturbation theory is used. Other terms in this group differ from Eq.  for example due to different spin quantum numbers in the correlation function, e.g.⟨*c*− *σ**n*ʹ†*c**σ**z*†*c**σ**z*ʹ†*c**σ**t**c*− *σ**u**c**σ**u*ʹ⟩, or they differ due to different indices of the Coulomb matrix elements, e.g. *V**n**t*ʹ*u**u*ʹ*V**m*ʹ*n*ʹ*t**t*ʹ*V**z**z*ʹ*m*ʹ*m*. There is a second group of terms that contains three factors of the Coulomb matrix element as well. However, it contains the correlator ⟨*c**σ**m*†*c**σ*ʹ*m*ʹ†*c**σ**n**c**σ*ʹ*n*ʹ⟩ instead. An example from this group of terms is [eqexample6] Mnm(3+,6)&= -n’tt’ zz’uu’ Vnn’tt’ Vtt’zz’ Vz’uu’m &cn’ cu cz cu’. Since *V**t**t*ʹ*z**z*ʹ couples to the correlation function only through the index *z*, it cannot be expressed through the matrices ${{\boldsymbol{H}}}$ or ${{\boldsymbol{X}}}$ when perturbation theory is used. Similar to the previous group, the other members in this group differ from this example due to different spin quantum numbers in the correlation function, or due to different indices of the Coulomb matrix elements. Another group of terms contains the matrix ${{\boldsymbol{T}}}$ once, the Coulomb matrix elements twice, and the correlation function ⟨*c**σ**m*ʹ†*c**σ**n*ʹ⟩. An example from this group of terms is [eqexample7] Mnm(3+,7)= -tt’ n’m’z Vnn’tt’ Vtt’zm’Tm’m cn’ cz. Similar to the previous two groups, the other members in this group differ from the example of Eq.  due to different spin quantum numbers in the correlator, i.e., ⟨*c*− *σ**n*ʹ†*c*− *σ**z*⟩, and due to different indices in the Coulomb matrix elements. A similar group of terms contains the correlator ⟨*c**σ*ʹ*z*ʹ†*c**σ**u*†*c**σ*ʹ*z**c**σ**t*ʹ⟩ instead of ⟨*c**σ**n*ʹ†*c**σ**z*⟩. An example is given by [eqexample8] Mnm(3+,8)= tt’ n’m’z Vntm’t’ Vz’uzt Tm’m c-z’cu c-zct’. Construction of the spectral function from the spectral moments =============================================================== In Ref.  we have shown that the spectral function of 4 spectral moment matrices of size *N* × *N* may be obtained by solving 4*N*2 coupled non-linear equations. While this approach is efficient for small *N*, it may become inefficient for large *N*. The reason may be understood from the amount of computer memory needed to store the Jacobian of the system of non-linear equations. The size of the Jacobian scales like 16*N*4. In contrast, the size of the KS Hamiltonian matrix used in DFT codes scales like *N*2 with the number *N* of basis functions. Therefore, we describe an alternative algorithm in this section, which is more efficient than solving systems of coupled non-linear equations when *N* is large. Since we discuss in Ref.  that finding the spectral function from non-commuting spectral matrices can be interpreted as a generalization of matrix diagonalization, it is perhaps not surprising that the new algorithm that we describe in this section uses such concepts. In the following we describe the algorithm to construct the spectral function from the spectral moment matrices ${{\boldsymbol{M}}}^{(1)}$, ${{\boldsymbol{M}}}^{(2)}$, ${{\boldsymbol{M}}}^{(3)}$, where we assume that the zeroth spectral moment matrix is simply the unit matrix. Only the final result is described here, while the detailed proof is given in Appendix [secappdiag]. First, construct the hermitean *N* × *N* matrix (2+)=(2)-(1)(1). Next, diagonalize ${{\boldsymbol{M}}}^{(2+)}$: (2+)=, where ${{\boldsymbol{U}}}$ is a unitary matrix and ${{\boldsymbol{D}}}$ is a diagonal matrix. Using ${{\boldsymbol{D}}}$ and ${{\boldsymbol{U}}}$ construct the matrix [eqb1] 1=. Employ the inverse of its hermitean adjoint together with the moment matrices to compute the matrix [eqb2] 2=[(3)-(2) (1)][1]-1. Use it to obtain the matrix [eqd1] 1=1-1[2-(1)1]. Finally, take ${{\boldsymbol{B}}}\_{1}$, ${{\boldsymbol{D}}}\_{1}$ and the first moment matrix to construct the 2*N* × 2*N* matrix (1)=( cc (1) &1 1 &1 ) and diagonalize it: [eqb1ududag] (1)=. The unitary matrix ${{\boldsymbol{\mathcal{U}}}}$ contains the normalized eigenvectors of ${{\boldsymbol{\mathcal{B}}}}^{(1)}$ as its columns. Compute the spectral weight of state *j* from [eqspecwei] aj=i=1N i j [i j]\*. Note that *a**j* may be smaller than one, because the summation over the index *i* goes only from 1 to *N* and not from 1 to 2*N*. Therefore, spectral weights smaller than 1 may occur when bands split into lower and upper Hubbard bands. Construct the *N* × 2*N* matrix ${{\boldsymbol{\mathcal{V}}}}$ according to [eqstatevecs] i j=. Note that *i* = 1, ..., *N*, i.e., only the first *N* entries of the *j*-th column of ${{\boldsymbol{\mathcal{U}}}}$ are used, while every column of ${{\boldsymbol{\mathcal{U}}}}$ has of course 2*N* entries in total. The spectral function is given by [eqfinalspec] =l=12Nalil\*jl(E-El), where *E**l* is the *l*-th diagonal element of ${{\boldsymbol{\mathcal{D}}}}$, i.e., *E**l* = D*l**l*. Here, 1 ≤ *i*, *j* ≤ *N*, because in Eq.  we utilize only the first *N* × 2*N* block of the 2*N* × 2*N* matrix ${{\boldsymbol{\mathcal{U}}}}$ to construct the matrix ${{\boldsymbol{\mathcal{V}}}}$. Note that in this paper we do not use the grand canonical Hamiltonian H = *H* − *μ**N̂*, where *μ* is the chemical potential, but instead we use *H*, because most DFT codes do not work with H = *H* − *μ**N̂*, as the chemical potential *μ* is typically redetermined only before the end of every iteration in the selfconsistency loop to achieve matching between electronic and nuclear charge, i.e., charge neutrality. When comparing our result Eq.  to the literature, one therefore needs to be aware of this difference by *μ* in the expressions for the spectral function. Choice of the moment functionals ================================ In Appendix [feg2p] we have shown that [eqpoorfirstmom] (2+)()=+…and in Appendix [feg3p] we have found [eqpoorsecmom] (3+)()=+…, where [eqdimlessdenparam] rs()= ( )= ( ) is the dimensionless density parameter. The corresponding matrix elements of the moments are obtained from these potentials according to Mnm(2+)=d3 r (2+)() n\*()m() and Mnm(3+)=d3 r (3+)() n\*()m(). While it might be tempting to use these expansions, Eq.  and Eq. , to compute the moment functionals it is instructive to recall first the parameterization of the correlation energy of the uniform electron gas. [plotvwnec] Plot of the square of *V**c**r**s* vs. *r**s*. *r**s* is the dimensionless density parameter defined in Eq. . In order to construct an accurate analytic representation of the correlation energy of the uniform electron gas one considers the high-density expansion, the low-density expansion, and Green’s-function Monte Carlo data . In the low-density expansion, the leading order is *r**s*− 1 for the exact correlation energy. In the high-density limit one considers instead the parameterization $c\_{0}(\zeta){\rm ln} r\_s -c\_1(\zeta) +c\_2(\zeta)r\_s {\rm ln} r\_s$. Since these two functional forms for the low and high density limits differ considerably, we cannot expect good results, if we construct moment functionals only based on the parameterizations Eq.  and Eq. , which describe the case of low density. To give an impression of the deviation of the correlation energy density *ε**c* from the low-density behavior  ∝ *r**s*− 1, we plot in Fig. [plotvwnec] the quantity [*V**c**r**s*]2, where Vc=. In order to take into account Monte Carlo simulations in the construction of the moment functionals, we would need such calculations for correlation functions such as Eq.  through Eq.  for the uniform electron gas. Since these data are currently not available in the literature, we nevertheless use the parameterization Eq.  in our applications below. As in this paper we present our first tests of the MFbSDFT-method, this slightly crude approach is justified, because the development of accurate moment functionals will probably take similarly long as the development of the modern functionals used in KS-DFT calculations. Therefore, it is important to demonstrate the feasibility of the method before developing accurate moment functionals. Additionally, we test the following strategy to find more elaborated moment functionals: Eq.  suggests that the leading order at low density is *r**s*− 2. Since the leading order of the correlation energy is *r**s*− 1 in this limit, we try to replace *r**s*− 2 in Eq.  by the square of the correlation potential *V**c*, i.e., [eqfirstmomelab] (2+)()=d(2+)[Vc(rs)]2 and similarly [eqsecmomelab] (3+)()=d(3+)[Vc(rs)]3. This strategy should yield better results, because the *r**s*− 2 and *r**s*− 3 of the low-density expansion are hereby replaced by a more realistic functional form at high density. In Appendix [feg2p] we have estimated Eq.  through Eq.  based on zeroth order perturbation theory. According to this estimate, the prefactor of *r**s*− 2 is of the order of 10[Ry]2. The prefactor of *r**s*− 1 in the low-density expansion of the correlation potential is of the order of 1.2[Ry]. When we use the square of the correlation energy we have to choose the prefactor *d**σ*(2 + ) of the square of the correlation energy so that *d**σ*(2 + )(1.2*r**s*− 1)2 becomes comparable to 10*r**s*− 2. We therefore expect *d**σ*(2 + ) to be of the order of 10. At this order of magnitude of *d**σ*(2 + ) we indeed find a strong satellite peak in Ni (see Sec. [secresults]). Second variation approach ========================= In this section we describe the implementation of our MFbSDFT method within a second variation approach. By second variation we mean that first a standard KS Hamiltonian is diagonalized at a given *k*-point and only part of its eigenvectors are used to compute the matrix elements of the moment functionals. The computation of the state vector matrix ${{\boldsymbol{\mathcal{V}}}}$ and of the energies *E**l* = D*l**l* may therefore be considered as a second variation step. The size of the KS Hamiltonian matrix depends on the number of basis functions $N\_{\rm B}$. We do not compute all eigenvectors, but only as many eigenvectors as we need to describe the occupied bands and a fraction of the unoccupied bands. We call this number $N<N\_{\rm B}$. At a given *k* point we additionally compute the $N\_{\rm B}\times N\_{\rm B}$ matrices ${{\boldsymbol{M}}}^{(2+)}$ and ${{\boldsymbol{M}}}^{(3+)}$ and project them onto the *N* eigenstates. By $\bar{{{\boldsymbol{M}}}}^{(2+)}$ and $\bar{{{\boldsymbol{M}}}}^{(3+)}$ we denote these projections: |(2+)=|(2+)| and |(3+)=|(3+)|, where $\bar{{{\boldsymbol{U}}}}$ is a $N\_{\rm B}\times N$ matrix, which holds the *N* eigenvectors in its *N* columns. The implementation of the moments ${{\boldsymbol{M}}}^{(2+)}$ and ${{\boldsymbol{M}}}^{(3+)}$ is easy to do: In the subroutines computing the standard KS-Hamiltonian one needs to switch off the kinetic energy contribution such that only the computation of the matrix elements of the potential remains. If one additionally replaces the exchange-correlation potential by the moment functional potential for ${{\boldsymbol{M}}}^{(2+)}$ or ${{\boldsymbol{M}}}^{(3+)}$, the subroutine computes the corresponding moment matrix. The first and zeroth moments in the basis of the *N* eigenstates are diagonal matrices: [eqfirstmomsecvar] |Mnm(1)=EHFnnm and |Mnm(0)=nm. Note that in contrast to a standard KS-DFT calculation, the KS-Hamiltonian used in the first variation step does not use the full exchange-correlation potential, but only the local or non-local first-order exchange, i.e., either Eq.  or Eq. . Therefore, we denote the band energies from the first variation step by $E^{\rm HF}\_{n}$ in Eq. . Moments and band energies depend additionally on the *k*-point if periodic boundary conditions are used, but we suppress again the *k* index in the moments and also in the band energy, i.e., instead of $E^{\rm HF}\_{{{\boldsymbol{k}}}n}$ we write $E^{\rm HF}\_{n}$. The second and third moments are given by [eqbarmom2] |(2)=|(1)|(1) + |(2+) and [eqbarmom3] |(3)= |(1)|(1)|(1) + |(3+) respectively. The size of the matrices $\bar{{{\boldsymbol{M}}}}^{(0)}$, $\bar{{{\boldsymbol{M}}}}^{(1)}$, $\bar{{{\boldsymbol{M}}}}^{(2)}$, and $\bar{{{\boldsymbol{M}}}}^{(3)}$ is *N* × *N* and typically $N \ll N\_{\rm B}$. Therefore, the second variation approach is fast. Close to the end of the selfconsistency cycle the Fermi energy is determined such that the total electronic charge compensates the nuclear charge. Typically, the subroutine computing the Fermi energy makes use of the eigenvalues and of weights, which are determined by the multiplicities of the *k* points, when symmetries are used. In order to include the spectral weights Eq.  into the calculation of the Fermi energy, one only needs to multiply the *k*-point weights with these spectral weights. Similarly, the spectral weights need to be considered when computing the charge density from the matrix ${{\boldsymbol{\mathcal{V}}}}$ of state vectors, Eq. , according to [eqchargedensfinaleq] n()=n m j \*n() m() aj \*nj mjf(Ej), which may be derived from Eq.  by using the spectral theorem  to express the correlator ⟨*c**σ**n*†*c**σ**m*⟩ in terms of the spectral function. We illustrate the selfconsistency loop by the flowchart in Fig. [mfbsdftflowchart]. All results presented in Sec. [secresults] have been obtained according to the flowchart in Fig. [mfbsdftflowchart]. width= [node distance=2cm] (start) [mybox] Input Charge Density $n\_{\rm inp}({{\boldsymbol{r}}})$; (hartree) [mybox3, right of=start, xshift=6cm] Hartree and Exchange potentials $V^{\rm H}({{\boldsymbol{r}}})$, $V^{\rm X}({{\boldsymbol{r}}})$ (Eq. , Eq. , Eq. ); (hamil) [mybox, right of=hartree,xshift=7cm] Diagonalize $H=\left[-\frac{\hbar^2}{2m}\Delta+V({{\boldsymbol{r}}})+V^{\rm H}({{\boldsymbol{r}}})+V^{\rm X}({{\boldsymbol{r}}})\right]$; (mompot) [mybox2, below of=hartree] Moment potentials $\mathcal{V}^{(2+)}({{\boldsymbol{r}}})$, $\mathcal{V}^{(3+)}({{\boldsymbol{r}}})$ Eq. , Eq. , Eq. , Eq. ; (start) – (hartree); ([xshift=4cm]start) |- (mompot); (hartree) – (hamil); (moments) [mybox2,below of=hamil] Compute moments $\bar{{{\boldsymbol{M}}}}^{(1)}$, $\bar{{{\boldsymbol{M}}}}^{(2)}$, $\bar{{{\boldsymbol{M}}}}^{(3)}$ Eq. , Eq. , Eq. ; (mompot) – (moments); (hamil) – (moments); (spectral) [mybox3,below of=moments] Compute spectral poles *E**l* = D*l**l* (Eq. ) Compute state vectors ${{\boldsymbol{\mathcal{V}}}}$ (Eq. ) Compute spectral weights *a**j* (Eq. ) ; (moments) – (spectral); (outputdens) [mybox,below of=mompot] Compute output charge density $n\_{\rm out}({{\boldsymbol{r}}})$ (Eq. ); (spectral) – (outputdens); (mixing) [mybox2,left of=outputdens,xshift=-6cm] Mix charges to obtain new charge density for next iteration; (outputdens) – (mixing); (mixing) – (start); ([xshift=-20cm]start) – ([xshift=-1cm]mixing); KS-DFT is so constructed that it may be used to obtain the total energy and the charge density in principle exactly. However, in practice the exact exchange correlation potential is not known and therefore the charge density computed in KS-DFT is an approximation. Since considerable progress has been made in the construction of exchange correlation potentials, the KS charge density is a very good approximation in many cases. Whenever the KS charge density is sufficiently correct, one may run MFbSDFT in a simplified mode: The converged KS charge density is used as starting density in Fig. [mfbsdftflowchart] and only one iteration is performed, i.e., the output charge density is not computed but instead the results are calculated immediately from the state vector matrix ${{\boldsymbol{\mathcal{V}}}}$, from the spectral poles, and from the spectral weights. Applications ============ In this section we apply the MFbSDFT method to several well-studied materials that show genuine many-body effects such as satellite peaks. According to the literature, the details of the spectral function of these materials depend strongly on the theoretical model used to study them. According to our discussion in Sec. [secfunc] the parameterizations that we use for the moment functionals should be considered only as a first step towards the development of accurate moment functionals. Consequently, if the results shown below are more similar to one theoretical model than they are to another one this does not imply at all that MFbSDFT confirms one particular theoretical model, because accurate moment functionals remain to be developed. The main purpose of this section is therefore to show that MFbSDFT is able to reproduce spectral features qualitatively that have been identified as genuine correlation effects before. However, beyond validating the concept of MFbSDFT, the results shown also hint at a practical perspective for MFbSDFT already at this early stage of its development: Since the MFbSDFT reproduces spectral features of correlated materials, it may be used to compute response properties  such as the anomalous Hall effect, which is likely to require considerably less computer time than LDA+DMFT. For such an application one would finetune the parameters in the parameterizations of the moment functionals to match the spectral function known from LDA+DMFT or photoemission. While this approach is not parameter-free, it is similar to many applications of LDA+*U*, where the *U* parameter is chosen to reproduce a material property. fcc Ni ------ The DOS obtained in KS-DFT with the PBE functional is shown in Fig. [DOSNickelferromagpbe]. The valence DOS starts to become significant starting from 5 eV below the Fermi energy and the exchange splitting is around 0.75 eV. In contrast, the width of the main bands found experimentally is significantly smaller than 5 eV, namely only 3 eV. Additionally, a much smaller exchange splitting of around 0.3 eV is found in photoemission experiments . Moreover, the satellite peak observed in experiments at around 6 eV below the Fermi energy is absent in the KS-DFT spectrum. [DOSNickelferromagpbe] DOS of Ni vs. energy *E* in the ferromagnetic state as obtained in KS-DFT. $E\_{\rm F}$ is the Fermi energy. Next, we discuss the MFbSDFT-spectrum obtained with $\mathcal{V}^{(2+)}\_{\sigma}({{\boldsymbol{r}}})=15\zeta\_{\sigma}^{(5/3)}[V\_{c}(r\_s)]^2$ and $\mathcal{V}^{(3+)}\_{\sigma}({{\boldsymbol{r}}})=0$. Here *ζ**σ* = (1 − *σ*(*n*↑ − *n*↓)/*n*). We use *N* = 36. With this choice of parameters the magnetic moment computed self-consistently in MFbSDFT is 0.58 $\mu\_{\rm B}$. The resulting DOS is presented in Fig. [DOSNickelferromag]. The exchange splitting of around 0.3 eV is strongly reduced compared to the KS-DFT calculation and close to the experiments . Additionally, the main bands are much narrower than in KS-DFT and therefore in much better agreement with experiments. Moreover, satellite peaks are found close to 6 eV. However, the spectral weight of these satellite peaks is smaller than what is found in experiments and in LDA+DMFT calculations (see e.g.Fig. 9 in Ref. , Fig. 2 in Ref. , and Fig. 2 in Ref. ). [DOSNickelferromag] DOS of Ni vs. energy *E* in the ferromagnetic state as obtained in MFbSDFT when the moment functional is constructed according to Eq. . $E\_{\rm F}$ is the Fermi energy. Finally, we discuss the MFbSDFT-spectrum obtained with $\mathcal{V}^{(2+)}\_{\sigma}({{\boldsymbol{r}}})=0.015 \zeta\_{\sigma}^{(7/3)}r\_s^{-2}$ [Ry]2 and $\mathcal{V}^{(3+)}\_{\sigma}({{\boldsymbol{r}}})=-0.00472 \zeta\_{-\sigma}^{(1/3)}r\_s^{-3}$ [Ry]3. We use *N* = 36. With this choice of parameters the magnetic moment is 0.63 $\mu\_{\rm B}$. Fig. [DOSNickelferromagrs2] shows the density of states (DOS) of Ni in the ferromagnetic state as computed selfconsistently in MFbSDFT. While the exchange splitting is similar to KS-DFT, the width of the main bands is reduced, leading to a slightly better agreement with experiment. Around 6 eV below the Fermi energy satellite peaks appear with a spectral weight of a similar order of magnitude like in experiment and LDA+DMFT. However, the spin polarization of the satellite peak structure differs from both experiment and LDA+DMFT, which both predict the minority satellite to be strongly suppressed (see e.g.Fig. 9 in Ref. , Fig. 2 in Ref. , and Fig. 2 in Ref. ). In contrast, in Fig. [DOSNickelferromagrs2] the satellites of the majority and minority band are comparable in magnitude and only shifted in energy. [DOSNickelferromagrs2] DOS of Ni vs. energy *E* in the ferromagnetic state as obtained in MFbSDFT when the moment functional is constructed according to Eq.  and Eq. . $E\_{\rm F}$ is the Fermi energy. As discussed in Sec. [secfunc] the parameters employed in $\mathcal{V}^{(2+)}\_{\sigma}({{\boldsymbol{r}}})=15\zeta\_{\sigma}^{(5/3)}[V\_{c}(r\_s)]^2$ (used to generate the DOS shown in Fig. [DOSNickelferromag]) are of the order of magnitude expected from the estimate given in Sec. [secfunc]. In contrast, we determined the parameters employed in $\mathcal{V}^{(2+)}\_{\sigma}({{\boldsymbol{r}}})=0.015 \zeta\_{\sigma}^{(7/3)}r\_s^{-2}$ [Ry]2 (used to generate the DOS shown in Fig. [DOSNickelferromagrs2]) only based on try-out, because it is unclear how to renormalize the parameters of the low-density expansion so that it effectively describes the regimes of intermediate and high densities as well. However, Fig. [DOSNickelferromagrs2] is useful nevertheless, because it shows that satellite peaks with the correct order of magnitude of spectral weight can be produced by MFbSDFT. Taken together with the result of Fig. [DOSNickelferromag], which shows that band widths, exchange splittings and location of the satellite peaks are predicted very well if the correlation potential is used to construct the moment functionals, the overall conclusion from this finding is that it is likely that assessing Eq.  through Eq.  in the low-density and the high-density regimes and using Monte Carlo results to interpolate between these limits will allow us to formulate a moment functional that predicts the spectral features in Ni quite well. SrVO3 ----- In Fig. [SrVO3t2gegpbe] we show the contributions of the V-3*d* *e**g* and *t*2*g* states to the DOS of SrVO3, as obtained with KS-DFT using the PBE functional. The KS spectrum is not in good agreement with experiment. It has been shown that the agreement with experiment is improved significantly , when DMFT is used to supplement these bands with correlation effects. The detailed rearrangement of the spectral features obtained from LDA+DMFT depends on the details of the modelling of the correlation effects by the Hubbard model. Ref.  includes only the *t*2*g* states into the Hubbard model. In this case the DOS of the *t*2*g* states obtained from LDA+DMFT is distributed into three pronounced spectral peaks: A dominant central peak roughly 0.5 eV above the Fermi energy, a lower Hubbard band around 2 eV below the Fermi energy and an additional upper Hubbard band around 3 eV above the Fermi energy. The position of these peaks is in good agreement with experiments, which find peaks roughly at -1.7 eV, 0.5 eV, and 2.4 eV . These spectral features are also observed in Ref. , which includes the *e**g* states, however they strongly depend on the parameters, and the intensities of the lower and upper Hubbard bands are much smaller for some parameters. Additionally, the intensities of the lower and upper Hubbard bands depend strongly on the double counting correction. In Fig. [SrVO3mfbsdftvc] we present the contributions of the V-3*d* *e**g* and *t*2*g* states to the DOS, as obtained with MFbSDFT when we use Eq.  and Eq. , where we set *d**σ*(2 + ) = 100 and *d**σ*(3 + ) =  − 200. We use *N* = 200. The total V-*d* DOS is in good agreement with both the experimental spectrum and the LDA+DMFT spectrum (See e.g. Fig. 7 in Ref.  for comparison. Ref.  uses a broadening of 0.36 eV in order to reproduce the experimental resolution. We use 0.36 eV in our Fig. [SrVO3mfbsdftvc] as well.). However, in our case the peak between 2 eV and 2.5 eV stems mainly from the *e**g* states, which are not included into the Hubbard model in Ref. . Ref.  includes the *e**g* states, but still finds a small peak from the upper Hubbard band for the *t*2*g* states at around 3 eV above the Fermi energy. Such a small peak is consistent with our Fig. [SrVO3mfbsdftvc], where a shoulder in the V-3d(t2*g*) is clearly visible between 2 eV and 2.5 eV. Moreover, Ref.  finds a large contribution from the *e**g* states to the DOS at this energy as well. In this regard, our Fig. [SrVO3mfbsdftvc] resembles closely Fig. 8 in Ref.  when the energy is above the Fermi energy. However, Fig. 8 in Ref.  does not find a strong V-*d* DOS at around 2 eV below the Fermi energy. In contrast, we find a strong V-*d* DOS at around 2 eV below the Fermi energy with a dominant part from the *e**g* states and a small contribution from the *t*2*g* states. [SrVO3t2gegpbe] Contributions of the V-3*d* *e**g* and *t*2*g* states to the DOS in SrVO3. Results from KS-DFT using the PBE functional. [SrVO3mfbsdftvc] Contributions of the V-3*d* *e**g* and *t*2*g* states to the DOS in SrVO3. Results obtained within MFbSDFT when the moment functionals are constructed according to Eq.  and Eq. . [SrVO3mfbsdftr2] Contributions of the V-3*d* *e**g* and *t*2*g* states to the DOS in SrVO3. Results obtained within MFbSDFT when the moment functionals are constructed according to Eq.  and Eq. . In Fig. [SrVO3mfbsdftr2] we show the contributions of the V-3*d* *e**g* and *t*2*g* states to the DOS, as obtained with MFbSDFT when we take *N* = 200 and use Eq.  and Eq. , where we set *c**σ*(2 + ) = 1.1 [Ry]2, and *c**σ*(3 + ) =  − 1.5 [Ry]3. The peak between 2 eV and 2.5 eV is very pronounced and both *e**g* and *t*2*g* bands contribute to it. In contrast, the peak around -2 eV in Fig. [SrVO3mfbsdftvc] is significantly smaller in Fig. [SrVO3mfbsdftr2] and shifted to lower energy between -2.5 eV and -3 eV. Several main features of the *t*2*g* band in Fig. [SrVO3mfbsdftr2] resemble those obtained from a LDA+DMFT calculation with a Hubbard *U* of 6 eV (see Fig. 8 in Ref. ). Notably, the *t*2*g* band, which ends around 2 eV in Fig. [SrVO3t2gegpbe], is expanded to higher energies like in LDA+DMFT. Overall, the total V-*d* DOS in Fig. [SrVO3mfbsdftr2] is qualitatively similar to the one in Fig. 8 of Ref. , which displays pronounced peaks close to 1 eV and close to 2.5 eV, while the V-*d* DOS close to -2 eV is small, in agreement with our result in Fig. [SrVO3mfbsdftr2]. However, the peak close to 2.5 eV is much larger in our Fig. [SrVO3mfbsdftr2]. [SrVO3mfbsdftr2smallparam] Contributions of the V-3*d* *e**g* and *t*2*g* states to the DOS in SrVO3. Results obtained within MFbSDFT when the moment functionals are constructed according to Eq.  and Eq. . In contrast to Fig. [SrVO3mfbsdftr2] the parameters *c**σ*(2 + ) and *c**σ*(3 + ) are reduced by 18 % and 33 %, respectively. We may reduce the intensity of this peak at 2.5 eV by reducing the parameters *c**σ*(2 + ) and *c**σ*(3 + ). In Fig. [SrVO3mfbsdftr2smallparam] we show the DOS obtained with the parameters *c**σ*(2 + ) = 0.9[Ry]2 and *c**σ*(3 + ) =  − 1[Ry]3. Indeed, the peak intensity is reduced and it is now significantly smaller than the intensity of the main peak at around 1 eV, but relative to the main peak it is still more pronounced than in Fig. 8 of Ref. . Moreover, the peak is shifted from 2.5 eV in our Fig. [SrVO3mfbsdftr2] to higher energies and lies now between 3 eV and 3.5 eV. However, overall the total V-*d* DOS, the V-3*d* (t2*g*) DOS, and the V-3*d* (e*g*) DOS in Fig. 8 of Ref.  is in better agreement with our Fig. [SrVO3mfbsdftr2smallparam] than with our Fig. [SrVO3mfbsdftr2]. Discussion and outlook ====================== In the previous section we have shown that MFbSDFT reproduces features such as satellite peaks and spectral weight shifts which are usually obtained by solving the correlated electron problem directly, e.g. by means of LDA+DMFT. While these first MFbSDFT results look therefore very promising there is a large number of open questions and obvious possibilities to improve this method further. First, accurate moment functionals are required. The construction of accurate moment functionals should be possible based on Monte Carlo data of correlation functions such as those given in Eq.  through Eq. . Second, it is desirable to derive gradient approximations for the moment functionals. Using local functionals that depend only on the spin densities for MFbSDFT misses effects related to their spatial inhomogeneity. Like GGA is an improvement over LDA, we expect that MFbSDFT will become more accurate by adding gradient corrections to the functionals. Third, in this work we do not explore the calculation of total energies and atomic forces for structural relaxation. However, since force calculations in correlated materials are possible within LDA+DMFT we expect that total energies and forces may also be obtained within our MFbSDFT approach. Fourth, one may use more than the first 4 moments. While the first four moments are sufficient to reproduce the quasi-particle band structure qualitatively correctly in the strong-correlation regime , the precision of MFbSDFT is expected to increase with the number of moments used. The moments become increasingly more complicated with increasing order. However, one may use computer algebra systems in order to derive the higher-order moments and to assess them for the uniform electron gas. This seems feasible since the complexity is probably comparable to higher-order perturbation theory in QED, where high-order contributions have to be tackled by computer algebra. Assuming that computer algebra systems can manage the complexity of the higher-order moments the question remains if the spectral function can be found for more than 2 or 4 moments. In Ref.  we give an argument that the spectral function may be found from the first 4 moments, which is based on counting the number of available equations and the number of parameters that determine the spectral function and showing that these numbers match. In the present paper we have explicitly constructed the spectral function from the first 4 moments in Sec. [secconstruspecfun]. We may generalize the argument given in Ref.  and show that from the first 2*P* moments (*P* = 1, 2, …) one may construct the spectral function. This generalization is discussed in App. [secappmore]. If one uses only Delta-functions in the expression for the spectral function (as in Eq. ) one misses lifetime effects, which may be accommodated by employing Gaussians instead . When 4 moments are required to put the spectral peaks, such as satellite peaks, at the right energies, it is clear that more than 4 moments are generally required to correct the spectral widths of these spectral features by lifetime effects. Fifth, perhaps the method of spectral moments may contribute to the understanding of SIE, because it is remarkable that setting $\mu\_{c}({{\boldsymbol{r}}})=0$ in Eq. (2.22) of Ref.  leads to a HF-type method that suffers from the same SIE and is equivalent to the method of spectral moments derived from the first two moments (see Sec. [sectheory]). We suspect that increasing the number of moments used will ultimately eliminate the SIE. However, it is an open question, how the self-interaction correction (SIC) takes place exactly within the method of spectral moments. Within the KS-DFT framework the explanation of SIC is that $\mu\_{c}({{\boldsymbol{r}}})$ in Eq. (2.22) of Ref.  has to eliminate SIE when the exact exchange correlation functional is used. However, within the spectral moment method a valid explanation of SIE seems to be that using only the first two moments produces an error, which may be eliminated by using more moments. Of course, the precise moment functionals are expected to be necessary in order to remove SIE. However, Eq.  through Eq.  provide explicit expressions, which may be used for the construction of the moment functionals. Sixth, it is an important open question how to extend the MFbSDFT approach to finite temperatures. In Ref.  we have shown how to generalize the spectral moment method so that it can be applied to many-band Hamiltonians. Since the method of Ref.  computes the correlation functions from the spectral theorem, which involves the Fermi function and the actual excitation energies, it naturally includes finite temperature effects. As the spectral theorem is not used for the higher-order correlation functions in MFbSDFT, which are obtained from moment functionals, it is currently unknown how to accomodate finite temperatures accurately in this method. While accurate moment functionals are currently not available yet, the MFbSDFT method may also be used in practice in a way similar to LDA+*U*: In LDA+*U* the *U* and *J* parameters are usually chosen for a given material in order to add correlation effects that are not described by LDA. Similarly, one may use parameterizations of the moment functionals similar to the ones that we discussed in Sec. [secfunc] and choose the coefficients in the functional in order to optimize spectral features. Summary ======= We describe the concept of moment functionals, which allow us to obtain the spectral moments from functionals of the charge density. These functionals play a similar role in MFbSDFT as the exchange correlation functional does in KS-DFT. We derive explicit expressions for the moment functionals and use perturbation theory to investigate their scaling with the charge density. We describe an efficient algorithm to obtain the spectral function from the first four spectral moments. We demonstrate that MFbSDFT allows us to reproduce spectral features such as satellite peaks in Ni and lower and upper Hubbard bands in SrVO3. At this stage of its development, MFbSDFT may be used in a way similar to LDA+*U*: The parameters in the moment functionals are chosen such that spectral features found in experiments are reproduced. Acknowledgments =============== We acknowledge funding by SPP 2137 ``Skyrmionics" of the DFG and by Sino-German research project DISTOMAT (DFG project MO ). We gratefully acknowledge financial support from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program (Grant No. 856538, project “3D MAGiC”). The work was also supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation)  −  TRR 288  −  422213477 (project B06). We also gratefully acknowledge the Jülich Supercomputing Centre and RWTH Aachen University for providing computational resources under project No. jiff40. Comparison of the spectral moments of the many-band case to the spectral moments of the single-band Hubbard model ================================================================================================================= The spectral moments of the many-band case contain many new contributions that do not have counterparts in the single-band Hubbard model. In this appendix we discuss which of the many-band terms have a correspondence in the single-band Hubbard model. In the single band Hubbard model the Coulomb matrix element is simply Vnn’tt’=Unn’tt’nt, where *U* is the Hubbard-*U*. For the single band Hubbard model the first four moments are: [eqsibahuzero] (0)= ljei(l-j) + =1, [eqsibahuone] &(1)= ljei(l-j) -,cj]+ &=()+Un-, [eqsibahutwo] &(2)= ljei(l-j) -, [H,cj]- ]+ &=(())2 +2 Un- () + U2n-, and [eqsinglebandmoment3] &(3)= ljei(l-j) -,H]-, [H,cjs]- ]+ &=[()]3 +3Un- 2 &+2 U2 () n- +2 U2 t00n- +U3 n- &-U2 ljei(l-j)tlj c-lc-jc-lc-j &+U2 lj tlj (2nl-1)c-lc-j &+U2 ljei(l-j)tlj cjc-lclc-j &+U2 ljei(l-j)tlj cjc-jclc-l. Here, N is the number of ${{\boldsymbol{k}}}$ points. Clearly, Eq.  and Eq.  turn into Eq.  and Eq. , respectively, when one evaluates them for the single-band Hubbard model and performs a Fourier transformation. The following contributions to *M*(2 + ) are zero for the single-band Hubbard model: *M*(2 + , 3), *M*(2 + , 4), *M*(2 + , 5), *M*(2 + , 6), *M*(2 + , 7). The sum *M*(2 + , 1) + *M*(2 + , 2) turns into *U*2⟨*n*− *σ*⟩ in the single-band case, which is the last term in Eq. . The sum ${{\boldsymbol{T}}}{{\boldsymbol{V}}}^{\rm H}+{{\boldsymbol{T}}}{{\boldsymbol{V}}}^{\rm X}\_{\sigma}+{{\boldsymbol{V}}}^{\rm H}{{\boldsymbol{T}}}+{{\boldsymbol{V}}}^{\rm X}\_{\sigma}{{\boldsymbol{T}}}$, which contributes to Eq. , evaluates to $2U\langle n\_{-\sigma}\rangle\epsilon({{\boldsymbol{k}}})$ in the case of the single-band Hubbard model. This is the middle term in Eq. . For the single-band Hubbard model the sum of *M**n**m*(3 + , 3) (Eq. ) and *M**σ**n**m*(3 + , 4) (Eq. ) is *U*3⟨*n*− *σ*⟩, which is the last term in the third line of Eq. . Evaluation of *M**σ**n**m*(2 + , *j*) ===================================== In order to keep the notation simple, we discuss the contractions C*σ*(2 + , *j*), Eq. , for the uniform electron gas without spin-polarization. We evaluate the contraction of Eq.  by transforming it into the momentum representation, where we obtain at zero temperature [eqc2p2] &(2+,2)=- v()v(4-5+)n5-n4+ &=-A2d3 q d3 k4 d3 k5. Here, kF=(3 2 n)1/3 is the Fermi wave number, Θ(*k*) is the Heaviside step function, v()=[Ry][aB] is the Coulomb potential expressed in terms of the Bohr radius $a\_{\rm B}$, ${\rm Ry}=13.6$ eV, and A2=[Ry]2[aB]2. Scaling all momenta in Eq.  by the factor *ξ*, we observe that this integral is proportional to *ξ*2, i.e., it is proportional to $k\_{\rm F}^2$. In this scaling analysis we took into account that *n* depends on $k\_{\rm F}$ as well: $n=k\_{\rm F}^3/(3\pi^2)$. It is convenient to express C*σ*(2 + , 2) in terms of the dimensionless density parameter rs= ( ). According to the scaling analysis above, it is sufficient to evaluate the integral for a single density parameter, e.g. *r*ʹ*s* = 1, because (2+,2)(rs)=. The integral can be performed numerically using the VEGAS  package for the Monte-Carlo integration of high-dimensional integrals. We obtain (2+,2)(rs)=[Ry]2. Next, we consider C(2 + , 16). This integral is given by [eqc2p16] &(2+,16)= v() &v(4-5+)n5-n4+n5 &=A2d3 q d3 k4 d3 k5 (kF-|5|) &, which also scales like *r**s*− 2, which is easy to see with a scaling analysis. Evaluating this integral with the VEGAS  package gives (2+,16)(rs)=[Ry]2. For the contraction C(2 + , 17) we need to compute the integral [eqc2p17] &(2+,17)= v() &v(4-5+)n5-n4+n4 &=A2d3 q d3 k4 d3 k5 (kF-|4|) &. Using a scaling analysis, we find that this integral also scales like *r**s*− 2. Employing the VEGAS  package yields (2+,17)(rs)=[Ry]2. While the contractions C(2 + , 2), C(2 + , 16), and C(2 + , 17) above are straightforward to evaluate with ${\tt VEGAS}$ , the contributions C(2 + , 1), C(2 + , 6), C(2 + , 8), C(2 + , 12), C(2 + , 14), and C(2 + , 15) require more care, because their integrands contain factors $(v({{\boldsymbol{q}}}))^2$, which lead to a strong divergence of the integrands in the limit *q* → 0. In contrast, the integrals of the contractions C(2 + , 2), C(2 + , 16), and C(2 + , 17) contain only a single factor $v({{\boldsymbol{q}}})$, which does not produce a divergence, because it is compensated by the *q*2 of *d*3*q* = *q*2sin(*θ*)*d**θ**d**ϕ*. However, these contributions may be grouped into pairs of two, where the two partners in a pair differ in sign. When we replace the Coulomb potential by v()=[Ry][aB], we observe that the limit *η* → 0 is finite for the pair, while both partners in a pair diverge in this limit. Consider for example the pair composed of C(2 + , 1) and C(2 + , 14). Evaluating the integrals [eqc2p1] &(2+,1)=2 v() v()n5-n4+ &=2A2d3 q d3 k4 d3 k5 (kF-|4+|) & and [eqc2p14] &(2+,14)=-2 v() v()n5-n4+n5 &=2A2d3 q d3 k4 d3 k5 (kF-|5|) & with the VEGAS  package we obtain 0=2[Ry]2. We explicitly left the spin-degeneracy factor 2 in this equation. Evaluation of *M**σ**n**m*(3 + , *j*) ===================================== In order to keep the notation simple, we discuss the contractions C*σ*(3 + , *j*) for the uniform electron gas without spin-polarization. Transforming Eq.  into the momentum representation, we obtain the following expression for C*σ*(3 + , 4) in terms of an integral: [eqc3p4] &(3+,4)= v()v(’)v(4-5+-’) &n5-n4+ &=A3 d3 q d3 q’ d3 k4 d3 k5 &, where A3=[Ry]3 [aB]3. Scaling all momenta in Eq.  by the factor *ξ* one may easily find that $\mathcal{C}^{(3+,4)}\_{\sigma }\propto k\_{\rm F}^3\propto r\_s^{-3}$. Using VEGAS  we obtain (3+,4)(rs)=-[Ry]3. Algorithm to construct the spectral function from non-commuting spectral moment matrices ======================================================================================== In this section we provide the derivation of the algorithm described in Sec. [secconstruspecfun] for the construction of the spectral function from the first four *N* × *N* spectral moment matrices, ${{\boldsymbol{M}}}^{(0)}$, ${{\boldsymbol{M}}}^{(1)}$, ${{\boldsymbol{M}}}^{(2)}$, and ${{\boldsymbol{M}}}^{(3)}$, where ${{\boldsymbol{M}}}^{(0)}$ is the unit matrix. Assume that we manage to find hermitean 2*N* × 2*N* matrices (1)=( cc (1) &1 1 &1 ), (2)=( cc (2) &2 2 &2 ), and (3)=( cc (3) &3 3 &3 ), which mutually commute, i.e., [appeqb1b2b3commut] &[(1),(2)]-=0, &[(1),(3)]-=0, &[(2),(3)]-=0, and which satisfy [appeqb2fact] (2)=(1)(1), and [appeqb3fact] (3)=(1)(2)=(1)(1)(1). Note that Eq.  is satisfied if Eq.  and Eq.  are satisfied. We will therefore solve only Eq.  and Eq.  below. The matrices ${{\boldsymbol{M}}}^{(I)}$, ${{\boldsymbol{B}}}\_{i}$ and ${{\boldsymbol{D}}}\_{i}$ have the size *N* × *N*. The matrices ${{\boldsymbol{M}}}^{(I)}$ are the given hermitean spectral moment matrices, while ${{\boldsymbol{B}}}\_{i}$ and ${{\boldsymbol{D}}}\_{i}$ are matrices that need to be determined such that Eq.  and Eq.  are satisfied. While ${{\boldsymbol{D}}}\_{i}$ is required to be hermitean, ${{\boldsymbol{B}}}\_{i}$ is not. If we manage to find these matrices ${{\boldsymbol{\mathcal{B}}}}^{(1)}$, ${{\boldsymbol{\mathcal{B}}}}^{(2)}$, and ${{\boldsymbol{\mathcal{B}}}}^{(3)}$, we know that they possess a common system of eigenvectors, i.e., they may be diagonalized by the same unitary transformation, because they are hermitean and they commute mutually. Consequently, we may find a unitary transformation ${{\boldsymbol{\mathcal{U}}}}$ so that [appeqb1diag] (1)=, where ${{\boldsymbol{\mathcal{D}}}}$ is a diagonal matrix. Using ${{\boldsymbol{\mathcal{U}}}}$ and ${{\boldsymbol{\mathcal{D}}}}$ we may write [appeqb2diag] (2)=2 and [appeqb3diag] (3)=3. In Ref.  we have shown that the eigenvalue problems Eq. , Eq. , and Eq.  may be rewritten in the form |(I)=|(I), where *I* = 1, 2, 3 (see Eq. (13) in Ref. ). When we denote the representation of the unit matrix as a column vector by $\bar{{{\boldsymbol{\mathcal{{{\boldsymbol{B}}}}}}}}^{(0)}$, we may combine Eq. , Eq. , and Eq.  into the compact expression [appeqcompactwa=b] |=|, where =[(0),(1),(2),(3)] and |=[|(0),|(1),|(2),|(3)]. Next, we rewrite $\bar{{{\boldsymbol{\mathcal{{{\boldsymbol{B}}}}}}}}$ as |=( cc |Low ) and |=( cc |Low ), where ${{\boldsymbol{\mathcal{{{\boldsymbol{M}}}}}}}$ and ${{\boldsymbol{\mathcal{{{\boldsymbol{W}}}}}}}$ are the matrices defined in Ref.  (see Eq. (7) and Eq. (8) in Ref. ). ${{\boldsymbol{\mathcal{{{\boldsymbol{M}}}}}}}$ is a *N*2 × 4 matrix, $\bar{{{\boldsymbol{\mathcal{{{\boldsymbol{B}}}}}}}}\_{\rm Low}$ is a 3*N*2 × 4 matrix, ${{\boldsymbol{\mathcal{{{\boldsymbol{W}}}}}}}$ is a *N*2 × 2*N* matrix, and $\bar{{{\boldsymbol{\mathcal{{{\boldsymbol{W}}}}}}}}\_{\rm Low}$ is a 3*N*2 × 2*N* matrix. Thus, we may rewrite Eq.  as two equations: [appeqwa=mupper] = and |Low=|Low. Eq.  is identical to the Eq. (9) in Ref. , which needs to be solved to obtain the spectral function. Thus, we may solve Eq.  by determining the matrices ${{\boldsymbol{B}}}\_{1}$ and ${{\boldsymbol{D}}}\_{1}$, and by diagonalizing the matrix ${{\boldsymbol{\mathcal{B}}}}^{(1)}$. Therefore, in order to prove the algorithm in Sec. [secconstruspecfun] it remains to show that the matrices ${{\boldsymbol{B}}}\_{1}$ and ${{\boldsymbol{D}}}\_{1}$ may be found by solving Eq.  and Eq. . From Eq.  we obtain the following equation for ${{\boldsymbol{B}}}\_{1}$: [appeqbbdag] 11=(2)-(1)(1). Since ${{\boldsymbol{B}}}\_{1}{{\boldsymbol{B}}}\_{1}^{\dagger}$ is a hermitean matrix, it may be diagonalized: 11=, where ${{\boldsymbol{U}}}$ is a unitary matrix and ${{\boldsymbol{D}}}$ is a diagonal matrix. If ${{\boldsymbol{B}}}\_{1}{{\boldsymbol{B}}}\_{1}^{\dagger}$ is positive definite, we obtain 1=, which is Eq.  in the main text. If ${{\boldsymbol{B}}}\_{1}{{\boldsymbol{B}}}\_{1}^{\dagger}$ is not positive definite, the algorithm described in this section cannot be used. However, in all applications discussed in this paper, ${{\boldsymbol{B}}}\_{1}{{\boldsymbol{B}}}\_{1}^{\dagger}$ is positive definite. We suspect that the reason for this is that ${{\boldsymbol{M}}}^{(2+)}$ is generally positive definite. From Eq.  we obtain the following equation for ${{\boldsymbol{B}}}\_{2}$: [appeqb2] 2=-1. This is Eq.  in the main text. From Eq.  we obtain the following equation for ${{\boldsymbol{D}}}\_{1}$: [appeqd1] 1=1-1. This is Eq.  in the main text. ${{\boldsymbol{D}}}\_{1}$ is required to be hermitean, which is not directly obvious from Eq. . However, making use of Eq.  and Eq.  it is straightforward to show that [eqd1hermitean] 1-1=0. At this point we have completely determined the matrix ${{\boldsymbol{\mathcal{B}}}}^{(1)}$, from which the spectral function may be constructed using its eigenvalues, which are contained in the diagonal matrix ${{\boldsymbol{\mathcal{D}}}}$, and the unitary transformation ${{\boldsymbol{\mathcal{U}}}}$ defined in Eq. . However, it remains to show that all those additional equations that follow from Eq.  and Eq.  but that we did not use to derive the expressions for ${{\boldsymbol{B}}}\_{1}$ and ${{\boldsymbol{D}}}\_{1}$ can be satisfied as well. From Eq.  we obtain [appeqremain1] 2=11+11, and from Eq.  we obtain [appeqremain2] 3=(1)2+12 and [appeqremain3] 3=12+12. ${{\boldsymbol{D}}}\_2$ as given by Eq.  is hermitean, because ${{\boldsymbol{D}}}\_1$ is hermitean according to Eq. . Thus, it does not violate any of the equations above. ${{\boldsymbol{B}}}\_3$ as given by Eq.  does not violate any of the equations above either. ${{\boldsymbol{D}}}\_3$ as given by Eq.  should be hermitean, which is not directly obvious. However, using Eq. , Eq. , and Eq. , it is straightforward to show that 3-3=0. Eq.  in the main text follows from Eq. , Eq. , and Ref. . Generalization to more moments ============================== We may generalize the argument given in Ref.  and show that from the first 2*P* moments (*P* = 1, 2, …) one may construct the spectral function: We may map each moment $\tilde{{{\boldsymbol{M}}}}^{(I)}$ (where $\tilde{{{\boldsymbol{M}}}}^{(I)}$ denotes the moment computed from the nested commutator expression – as opposed to the moment obtained from the explicit energy integration) onto an *N*2-dimensional real-valued vector ${{\boldsymbol{\mathcal{M}}}}^{(I)}$, because *N*2 real-valued parameters fully define a hermitean *N* × *N* matrix. We introduce the *N*2 × 2*P* matrix ${{\boldsymbol{\mathcal{M}}}}$ by =[(0),…,(2P-1)]. We try to approximate the spectral function by [eqspectralmatrixgeneral] =p=1P=1N ap p\*p (E-Ep), because we expect that *P**N* bands can be computed from the first 2*P* spectral moment matrices. Inserting this approximation into Eq.  yields [eqspectralmatrixgeneraleneint] M(I)= p=1P=1N ap p [Ep]I, where we defined W*α**β**γ**p* = V*α**γ**p*V*β**γ**p*\*. We may consider W*α**β**γ**p* as the row-*α* column-*β* element of a hermitean matrix ${{\boldsymbol{\mathcal{W}}}}\_{\gamma p}$. Since *γ* = 1, …, *N* and *p* = 1, ..., *P*, there are *P**N* such matrices. As the hermitean *N* × *N* matrix ${{\boldsymbol{\mathcal{W}}}}\_{\gamma p}$ is equivalent to a *N*2-dimensional real-valued vector $\tilde{{{\boldsymbol{\mathcal{W}}}}}\_{\gamma p}$, we define the *N*2 × *P**N* matrix ${{\boldsymbol{\mathcal{W}}}}=[\tilde{{{\boldsymbol{\mathcal{W}}}}}\_{11}\dots \tilde{{{\boldsymbol{\mathcal{W}}}}}\_{NP}]$. Additionally, we construct the *P**N* × 2*P* matrix ${{\boldsymbol{\mathcal{A}}}}$ by setting the element A*γ**p**m* in row (*γ*, *p*) and column *m* to *a**γ**p*(*E**γ**p*)*m* − 1. The requirements ${{\boldsymbol{M}}}^{(I)}=\tilde{{{\boldsymbol{M}}}}^{(I)}$ with *I* = 0, 1, …2*P* − 1 (where $\tilde{{{\boldsymbol{M}}}}^{(I)}$ are the moments computed from the nested commutator expressions) can now be formulated in compact form by [eqcompactmeqm] =. This is the generalization of Eq. (9) in Ref.  for the first 2*P* moments. The form of the equation is the same, only the sizes of the matrices are different. Since the matrix ${{\boldsymbol{\mathcal{M}}}}$ contains 2*P**N*2 elements, Eq.  defines 2*P**N*2 nonlinear equations. Each vector ${{\boldsymbol{\mathcal{V}}}}\_{\gamma p}$ has *N* components and there are *P**N* such vectors. ${{\boldsymbol{\mathcal{V}}}}\_{\gamma p}$ is required to be normalized and the gauge-transformation ${{\boldsymbol{\mathcal{V}}}}\_{\gamma p}\rightarrow e^{i\Phi}{{\boldsymbol{\mathcal{V}}}}\_{\gamma p} $ does not affect W*α**β**γ**p* = V*α**γ**p*V*β**γ**p*\*. Thus, every ${{\boldsymbol{\mathcal{V}}}}\_{\gamma p}$ is determined by 2(*N* − 1) real-valued unknowns, i.e., 2*P*(*N*2 − *N*) unknown coefficients need to be found to determine all vectors ${{\boldsymbol{\mathcal{V}}}}\_{\gamma p}$. Additionally, we need to find the *P**N* energies *E**γ**p* as well as the *P**N* spectral weights *a**γ**p*. Consequently, Eq.  is a system of 2*P**N*2 nonlinear equations for 2*P**N*2 unknowns. Thus, one may expect that it should be possible to compute *P**N* bands from the first 2*P* spectral moment matrices of size *N* × *N*, because the number of unknowns matches the number of available nonlinear equations.
arxiv_0000058
Aperture effects on the oxygen abundance determinations from CALIFA data ======================================================================== This paper aims at providing aperture corrections for emission lines in a sample of spiral galaxies from the Calar Alto Legacy Integral Field Area Survey (CALIFA) database. In particular, we explore the behavior of the log([] *λ*5007/H*β*)/([] *λ*6583/H*α*) (O3N2) and log[] *λ*6583/H*α* (N2) flux ratios since they are closely connected to different empirical calibrations of the oxygen abundances in star forming galaxies. We compute median growth curves of H*α*, H*α*/H*β*, O3N2 and N2 up to 2.5*R*50 and 1.5 disk $R\_{\rm eff}$. These distances cover most of the optical spatial extent of the CALIFA galaxies. The growth curves simulate the effect of observing galaxies through apertures of varying radii. We split these growth curves by morphological types and stellar masses to check if there is any dependence on these properties. The median growth curve of the H*α* flux shows a monotonous increase with radius with no strong dependence on galaxy inclination, morphological type and stellar mass. The median growth curve of the H*α*/H*β* ratio monotonically decreases from the center towards larger radii, showing for small apertures a maximum value of  ≈ 10% larger than the integrated one. It does not show any dependence on inclination, morphological type and stellar mass. The median growth curve of N2 shows a similar behavior, decreasing from the center towards larger radii. No strong dependence is seen with the inclination, morphological type and stellar mass. Finally, the median growth curve of O3N2 increases monotonically with radius, and it does not show dependence with the inclination. However, at small radii it shows systematically higher values for galaxies of earlier morphological types and for high stellar mass galaxies. Applying our aperture corrections to a sample of galaxies from the SDSS survey at 0.02 ≤ *z* ≤ 0.3 shows that the average difference between fiber-based and aperture corrected oxygen abundances, for different galaxy stellar mass and redshift ranges, reaches typically to  ≈ 11%, depending on the abundance calibration used. This average difference is found to be systematically biased, though still within the typical uncertainties of oxygen abundances derived from empirical calibrations. Caution must be exercised when using observations of galaxies for small radii (e.g. below 0.5 $R\_{\rm eff}$) given the high dispersion shown around the median growth curves. Thus, the application of these median aperture corrections to derive abundances for individual galaxies is not recommended when their fluxes come from radii much smaller than either *R*50  or $R\_{\rm eff}$. Introduction ============ Bidimensional spectroscopy is gaining more and more importance as a powerful observational technique capable to provide new results on the properties of galaxies. Many instruments (commonly known as Integral Field Spectrographs, IFS) have been developed in the last years to produce bidimensional spectroscopy, most of them based on arrays of fibers that collect light from the sky area of interest and drive it through a dispersor. The main limitation of these fiber-fed spectrograps is their limited field-of-view, which makes impossible the blind observation of large areas of the sky to observe large amounts of galaxies as it is the case of the large scale surveys like SDSS (York et al. 2000), 2dFGRS (Colless et al. 2001), VVDS (Le Fèvre et al. 2005), z-COSMOS (Lilly et al. 2007) and DEEP/DEEP2 (Davis et al. 2003). The best option for IFS surveys is thus to focus on selected individual objects, but this is very time consuming and prevents the study of large samples of galaxies. So far, there are only a few surveys employing IFS, from which we highlight: SAURON (Bacon et al. 2001), ATLAS-3D (Cappellari et al. 2011), PINGS (Rosales-Ortega et al. 2010), VENGA (Blanc et al. 2010), VIXENS (Heiderman et al. 2011) and CALIFA (Sánchez et al. 2012a). The CALIFA Survey (Sánchez et al. 2012a) is observing a statistically well-defined sample of 600 galaxies in the local Universe with the Potsdam Multi Aperture Spectrograph in the PPAK mode (Roth et al. 2005) at the 3.5m telescope at Calar Alto Observatory. The survey benefits from the wide field-of-view of PPAK (about 1 arcmin2) compared to similar instruments at other telescopes. Thus, CALIFA galaxies are mapped over most of their optical spatial extent, and so far it has allowed up to date complete bidimensional studies of galaxy properties like the star formation histories (Cid Fernandes et al. 2013, González-Delgado et al. 2014), the properties of the ionized gas in early-type galaxies (Kehrig et al. 2012, Papaderos et al. 2013, Singh et al. 2013, Gomes et al. 2015a), the properties of large samples of regions (Sánchez et al. 2013,2014), or the effects of the spatial resolution at different redshifts (Mast et al. 2014) among others. In addition to this, the CALIFA database allows the study of the biases introduced when galaxies are observed through small and size-limited apertures, which usually are single-fiber spectrographs. Several studies have already notices the existence of such aperture effects in the properties of early and late-type galaxies (e.g. Kehrig et al. 2013, Gomes et al. 2015b). Aperture effects on galaxy properties have been previously addressed by following different approaches (e.g. Hopkins et al. 2003, Brinchmann et al. 2004, Ellis et al. 2005, Kewley et al. 2005, Salim et al. 2007, Gerssen et al. 2012, Zahid et al. 2013). On top of that, a recent study by Richards et al. (2015) based on part of the SAMI Galaxy Survey (Allen et al. 2015; Bryant et al. 2015; Sharp et al. 2015) concludes that biases in the estimation of the total instantaneous star formation rate of a galaxy arise when the aperture correction is built only from spectra of the nuclear region of galaxies. A preliminary study of the aperture effects based on CALIFA data was presented in Iglesias-Páramo et al. (2013, IP13) and was devoted to the H*α* and H*β* emission line fluxes. In the present work we extend their analysis using a much larger sample of galaxies which allows us to perform a more detailed study. We try to go beyond and focus on observational parameters related to the derivation of the oxygen abundances, like the (widely used in the literature) N2 and O3N2 parameters. This is a crucial point since spiral galaxies are known to show radial abundance gradients, which means that observing them through a reduced aperture must not necessarily provide a complete information on the spatial abundance distribution. Regarding this point, it has been reported in the literature that the oxygen abundance derived for the integrated fluxes of emission lines of spiral galaxies equals the corresponding abundance of their Hii regions at a typical galactocentric distance of 0.4 × *R**o**p**t* (Pilyugin et al. 2004, Moustakas et al. 2006). This result linking the abundances of Hii regions at a fixed galactocentric radius and the abundance obtained for the integrated emission line flux ratios from the whole disk of spiral galaxies, give some support to adopting a reference characteristic value for the abundance of a spiral galaxy. Nonetheless, we should bear in mind that the integrated emission of the spiral disks represents in fact a composite spectrum, including different Hii regions (plus diffuse ionized ISM) with varying physical conditions and chemical compositions. Although there is a classical method to derive oxygen abundances of bright star forming regions based on atomic data and the fluxes of emision lines ([Oii]*λ*3727Å, [Oiii]*λ*4363Å, [Oiii]*λ*5007Å), known as the direct method, it cannot be used for more distant or intrinsically fainter galaxies since some of these lines are usually not detected. For this reason, abundance calibrations based on theoretical or empirical methods based only on the brightest emission lines must be used. Significant differences among the different calibrations commonly used in the literature (see Kewley & Ellison 2008 for an extensive discussion on this topic) have been reported. Moreover, in some cases different calibrations are used for a single observed emission line flux ratio. For these two reasons care must be taken when comparing relations involving oxygen abundances derived from different methods since the differences could be due to the choice of the calibration rather than to the real abundance. In this work we study the effect of the aperture on two observational quantities, namely N2 (log[Nii]*λ*6583Å/H*α*) and O3N2 (log([Oiii]*λ*5007Å/H*β*)/([Nii]*λ*6583Å/H*α*)), that have been widely used in the literature to estimate oxygen abundanes of star formation galaxies (e.g. Pettini & Pagel 2004, Pérez-Montero & Contini 2009, Marino et al. 2013). The applicability intervals of these indicators are  − 2.5 < N2 <  − 0.3 and O3N2 < 2. Marino et al. (2013) found dispersions of *σ* ≈ 0.16 and 0.18 dex when fitting temperature-based abundances to abundances derived from the N2 and O3N2 methods using the following calibrations: $$12 + \log {\rm O/H} = 8.743 + 0.462 \times {\rm N2}$$ and $$12 + \log {\rm O/H} = 8.533 - 0.214 \times {\rm O3N2}$$ Our study is performed for these two observational quantities, so the effect of the aperture on the abundances depends on the choice of the calibration based on any of these observed quantities. A further advantage of this larger sample, compared to the one used in IP13, is that it allows us to study the effect of parameters like inclination, morphological type and stellar masses, on the derived average aperture corrections. The paper is organized as follows: Section 2 explains the selection of the sample. The main results from our analysis are detailed in Section 3. Section 4 contains a discussion on the implications of the aperture effects on the abundance determination and an example to illustrate this interesting point. Finally, the conclusions of the paper are enumerated in Section 5. Sample selection ================ The sample of galaxies used for this work has been selected from the CALIFA database updated to December 2013, reduced with the v1.4 version of the pipeline (García-Benito et al. 2015). Details on the instrumental setup and properties of the spectra can be found in Sánchez et al. (2012a), where the survey is presented. We started with an initial sample of 402 galaxies, the ones observed up to December 2013, out of the 937 galaxies comprising the total CALIFA mother sample (Walcher et al. 2014). Then we removed the elliptical and lenticular galaxies (E, S0, S0a) in order to keep only spiral galaxies. As we are interested in covering the galaxy disks as much as possible within the CALIFA aperture ( ≈ 36ʺ radius), we have to choose a proper scale related to the spatial extent of the galaxies. For this, two possibilities, related to different structural component of the galaxies, arise as the most interesting: * The Petrosian radius in the SDSS-*r*ʹ band within a circular aperture containing 50% of the total Petrosian flux (SDSS *p**e**t**r**o**R*50\_*r*, and hereafter *R*50): this scale takes into account the stellar emission from both the bulge and the disk inside this circular aperture. Thus, this spatial scale is sensitive to light coming from stars of different ages, and has the great advantage of its availability for a huge number of galaxies, in particular the whole CALIFA sample. * The effective radius encompasing 50% of the light coming from the disk component (hereafter $R\_{\rm eff}$): this scale is computed after a morphological decomposition bulge/disk of the galaxy SDSS g’-band surface brightness profile, removing the light coming from the bulge and assuming that the disk has an exponential profile of the form $I(r) = I\_{0} e^{-1.678 r/R\_{\rm eff}}$. Details about the procedure can be found in Sánchez et al. (2014). Figure [comparad] shows the comparison of *R*50 and $R\_{\rm eff}$ with the optical radii[1](#fn1), which correspond to the major semi-axis of the elliptical aperture at *μ**B* = 25 mag arcsec− 2 isophote (hereafter $R\_{\rm op}$) of the CALIFA spirals. As the figure shows, for most spiral galaxies 2*R*50 $\leq R\_{\rm op} \leq$ 4*R*50, with the exception of the Sa-Sab galaxies for most of which $R\_{\rm op}$ seems to be lower than 4*R*50. This differential behavior is likely due to the fact that Sa-Sab galaxies present promiment bulges that result in reduced values of *R*50. In addition, it is shown that $R\_{\rm eff}$ $\leq R\_{\rm op} \leq$ 3$R\_{\rm eff}$, and this relation holds in the same way for all morphological types. In this case, all the morphological types behave similarly because $R\_{\rm eff}$  is estimated by using only the light from the disk. In the subsequent analysis we will present the results of the growth curves (as a function of *R*50 and $R\_{\rm eff}$) of several observable fluxes or flux ratios based on a selected set of emission lines. They will be used as indicators of the bias induced when estimating galaxy properties from fluxes measured within reduced apertures instead of taking the integrated values of these fluxes. Hereafter we will refer to the median growth curves of the H*α* flux, and the H*α*/H*β*, N2 and O3N2 ratios as a function of *R*50  as *x*(*α*50), *x*(*α**β*50), $x({\rm N}2\_{50})$  and $x({\rm O}3{\rm N}2\_{50})$  respectively. These curves represent the value of the parameter measured within a circular aperture of a given radius normalized to the value of the parameter measured within 36”, which is the radius of the largest circular aperture considered. This way, the value of all parameters within a circular aperture of radius 36” will be taken as the integrated value of this property. In the case of N2 and O3N2, the growth curves are logarithmic. Thus, *x*(*α*50)*r* = *f*(H*α*)*r*/*f*(H*α*)36ʺ, *x*(*α**β*50)*r* = (*f*(H*α*)*r*/*f*(H*β*)*r*)/(*f*(H*α*)36ʺ/*f*(H*β*)36ʺ), $x({\rm N}2\_{50})$*r* = log[*f*([Nii]6583)/*f*(H*α*)]*r* − log[*f*([Nii]6583)/*f*(H*α*)]36ʺ, and $x({\rm O}3{\rm N}2\_{50})$*r* = log[(*f*([Oiii]5007)/*f*(H*β*))/(*f*([Nii]6583)/*f*(H*α*))]*r* −   − log[(*f*([Oiii]5007)/*f*(H*β*))/(*f*([Nii]6583)/*f*(H*α*))]36ʺ. Correspondingly, we will refer to the growth curves of the H*α* flux, and the H*α*/H*β*, N2 and O3N2 ratios as a function of $R\_{\rm eff}$  as $x(\alpha\_{\rm eff})$, $x(\alpha\beta\_{\rm eff})$, $x({\rm N}2\_{\rm eff})$  and $x({\rm O}3{\rm N}2\_{\rm eff})$  respectively. The procedure followed to produce the growth curves is similar to the one described in IP13. For each galaxy, unidimensional spectra are constructed by adding all the pixels within circular apertures of radii varying from 3” to 36”, in steps of 3”. In order to obtain the pure emission line spectra, we use a single stellar population (SSP) fitting to remove the contribution of the underlying continuum of the stellar population. We apply a linear combination of two SSP synthesis models of Vazdekis et al. (2010) based on the MILES stellar library (Sánchez-Blázquez et al. 2006) and a Kroupa IMF (Kroupa 2001). The ages of the SSPs used range from 0.10 to 0.79 Gyr for the case of the young stellar population and from 2.00 to 14.13 Gyr for the old stellar population. Five different metallicities are considered for each age ([M/H] values equal to 0.00, 0.20, -0.40, -0.71 and -1.31 dex offset from the solar value). Once the underlying stellar continuum is removed, the fluxes of the emission lines were obtained from gaussian fits to the residual spectra. The procedure followed to get the fits is as follows: first we fit the triplet H*α*+[Nii] assuming a common recession velocity and FWHM for the three lines, and the usual line ratio *f*(6548)/*f*(6583)=0.333. No broad components in H*α* were considered because we have removed AGNs from our samples and thus only narrow emission lines are expected. Then we produced individual fits for H*β* and [Oiii] leaving free the recession velocity and the FWHM but using as first guess the values obtained for H*α*+[Nii]. After that, for each galaxy we have 12-element vectors containing the fluxes of the emission lines contained within each of the circular apertures. The median growth curves as a function of *R*50 or$R\_{\rm eff}$ where then produced by combining the properly interpolated vectors. In addition to this, given that we focus on star-forming galaxies, for the subsequent analysis we remove those CALIFA galaxies classified as AGN, according to the prescriptions of Best et al. (2012), Kewley et al. (2001, 2006) and Cid Fernandes et al. (2011), and to the NED[2](#fn2) database. In order to keep only galaxies with good quality data we will remove those galaxies for which *f*(H*α*)  > 0 and *σ*(H*α*)/*f*(H*α*)  ≤ 0.333 in all the circular apertures. We note that the results of this study are the same if we apply a 2-*σ* cut in signal-to-noise, resulting in a significantly larger sample, instead of the 3-*σ* applied in what follows. As a first step we investigate the distribution of the H*α* emission in the CALIFA galaxies. Figure [ha40] shows *x*(*α*50)  and $x(\alpha\_{\rm eff})$  for the spirals covered in the CALIFA aperture up to 4*R*50 and 2.2$R\_{\rm eff}$ respectively. These limits were imposed with the compromise of covering as much as possible the disks of the CALIFA galaxies, but keeping samples with reasonable numbers of galaxies. In both cases, *x*(*α*50)  and $x(\alpha\_{\rm eff})$  seem to saturate at 4*R*50 and 2.2$R\_{\rm eff}$ respectively, suggesting that the bulk of the H*α* emission of spiral galaxies is contained within these apertures. But the numbers of galaxies available for this test are very low for a detailed study of the aperture effects including the role of the stellar mass and morphological type. For this reason, we chose two less conservative limits, namely 2.5*R*50 and 1.5$R\_{\rm eff}$, that contain on average 85%[3](#fn3) and 90% of the total H*α* emission in spiral galaxies (see figure [ha40]). After imposing this last condition, we end up with 165 galaxies covered up to 2.5*R*50 and 133 galaxies covered up to 1.5$R\_{\rm eff}$, which is the case for galaxies with *R*50   ≤  14.4 and $R\_{\rm eff}$   ≤  24, respectively[4](#fn4). These two subsamples (*S*50 and $S\_{\rm eff}$) will be the basis of our statistical study of aperture effects, taking into account that *S*50 galaxies will be studied up to  ≈ 2.5*R*50, which contains on average 85% of their H*α* emission, and that $S\_{\rm eff}$ galaxies will be be studied up to  ≈ 1.5$R\_{\rm eff}$, which contains on average 90% of their H*α* emission. Table [samples] shows the number of galaxies of each subsample after imposing each of the filters previously mentioned. Also, Table [samplesprop] contains some basic properties of the two subsamples. Results ======= Corrections for fixed angular apertures --------------------------------------- ### H*α* growth curves We start discussing the behavior of the growth curves of *x*(*α*50)  and $x(\alpha\_{\rm eff})$  of our galaxies. For this we first compare *x*(*α*50)  of IP13 with that obtained in this work. As in IP13, we produce median growth curves and a confidence interval equivalent to 1*σ*, consistent in that one containing 68.2% of the individual growth curves at each radius. Table [compa] lists the values of *x*(*α*50)  of IP13 and the one produced with the present sample. As it can be seen from Table [compa], both curves are consistent within 1*σ* along the probed range. The increased number of galaxies in our current sample will make possible a more detailed analysis of the aperture corrections split by morphological types or stellar masses. Figures [haincli] to [hamass] show the H*α* growth curves split by galaxy inclination, morphological type and stellar mass. Inclination is defined as the ratio of SDSS *i**s**o**b*\_*r*/*i**s**o**a*\_*r* and we consider face-on galaxies those with inclination larger than 0.4, the rest being edge-on galaxies. Concerning the morphological types, two bins were defined as follows: Sa to Sbc, and Sc to Sdm. This splits our sample into early spirals and late spirals. Finally, the stellar mass was also split in two ranges, separated at log*M*\*/*M*⊙ = 10.3. This limit was imposed since after removing E-S0 galaxies and AGNs, it splits the sample in two subsamples of similar numbers of galaxies. As it is clear from the figures, the H*α* growth curves do not seem to show any dependence on inclination, morphological type and stellar mass. The only exception, for which a slight dependence is seen, is $x(\alpha\_{\rm eff})$ when different inclinations are taken into account. In this case, the growth curves for edge-on and face-on galaxies look different, although still consistent with each other within the 1*σ* limits shown in the figure. ### H*α*/H*β* growth curves Figure [hahbincli] shows *x*(*α**β*50)  and  $x(\alpha\beta\_{\rm eff})$  for different inclinations. Both curves are almost coincident irrespective of the inclination of the galaxies. It is remarkable that both *x*(*α**β*50) and $x(\alpha\beta\_{\rm eff})$ show a very mild decline from the central regions to the outskirts, with a maximum value of  ≈ 1.1, which corresponds to an increase of 0.20 dex in *c*(H*β*) or 0.43 mag in *A**V* at the center of the galaxies. This means that observing galaxies through a small aperture results in values of the extinction larger than those obtained with the integrated values of the H*α* and H*β* fluxes. Figure [hahbtype] shows *x*(*α**β*50) and $x(\alpha\beta\_{\rm eff})$  for different morphological types. *x*(*α**β*50) is slightly higher for Sa-Sbc than for Sc-Sm galaxies, reaching values of  ≈ 1.15 at the center of the galaxies in the first case. This difference is not observed for $x(\alpha\beta\_{\rm eff})$. Figure [hahbmass] shows *x*(*α**β*50) and $x(\alpha\beta\_{\rm eff})$  for different stellar masses. As in the previous case, *x*(*α**β*50) is slightly higher for galaxies with log*M*\*/*M*⊙ > 10.3 than for galaxies with log*M*\*/*M*⊙ ≤ 10.3 for small apertures, but this difference almost disappears in $x(\alpha\beta\_{\rm eff})$. However, the differences observed in the median growth curves for galaxies of different morphological types and stellar masses are always lower than the dispersions around these median growth curves, which as the figures show, increase as the radii of the apertures decrease. This slight increase in the H*α*/H*β* growth curve observed for small apertures with respect to the integrated values can be compared with the variation in this Balmer line ratio as a function of electron temperature. Assuming case B recombination and low-density limit, a substantial change, from 20000 K to 5000 K, in electron temperature should be present to see a 10% change in H*α*/H*β*. This change in electronic temperature is not observed in typical Hii regions in the disks of spirals (Pérez-Montero & Contini 2009). For this reason, a radial gradient on the dust content of star forming regions, likely related to the abundance gradient measured in spiral galaxies, would explain this aperture effect. ### N2 growth curves Figure [n2haincli] shows $x({\rm N}2\_{50})$  and $x({\rm N}2\_{\rm eff})$  for different inclinations. $x({\rm N}2\_{50})$ and $x({\rm N}2\_{\rm eff})$ behave in a similar way as *x*(*α**β*50) and $x(\alpha\beta\_{\rm eff})$, that is, they show a mild decline from the inner parts of the galaxies and do not show clear differences related to the inclination of galaxies. $x({\rm N}2\_{50})$ and $x({\rm N}2\_{\rm eff})$ show central values reach maxima of  ≈ 1.2 − 1.3 and larger dispersions around these central values in the inner regions of the galaxies compared to the outskirts. The meaning of this trend is that observing galaxies through small apertures results in values of N2 larger than the ones obtained with the integrated values. Figure [n2hatype] shows $x({\rm N}2\_{50})$ and $x({\rm N}2\_{\rm eff})$  for different morphological types. The behavior of $x({\rm N}2\_{50})$ and $x({\rm N}2\_{\rm eff})$ is similar to the one previously described. Figure [n2hamass] shows $x({\rm N}2\_{50})$ and $x({\rm N}2\_{\rm eff})$  for different stellar masses which shows a marginal difference for small apertures with more massive galaxies showing larger values of $x({\rm N}2\_{50})$  and $x({\rm N}2\_{\rm eff})$  than less massive ones. However, again this difference is much smalles than the dispersions around the median values of $x({\rm N}2\_{50})$  and $x({\rm N}2\_{\rm eff})$. As we will show in the next section, the radial change of the N2 growth curves reflects in a radial change in the oxygen abundances. ### O3N2 growth curves Figure [o3n2incli] shows $x({\rm O}3{\rm N}2\_{50})$  and $x({\rm O}3{\rm N}2\_{\rm eff})$  for different inclinations. Following this figure, $x({\rm O}3{\rm N}2\_{50})$ and $x({\rm O}3{\rm N}2\_{\rm eff})$ show an almost linear increasing trend from the inner regions to the outskirts of the galaxies, and no difference with the inclination of the galaxy is apparent. This means that observing galaxies through small apertures results in values of O3N2 smaller than those obtained with the integrated values. Figure [o3n2type] shows $x({\rm O}3{\rm N}2\_{50})$ and $x({\rm O}3{\rm N}2\_{\rm eff})$  for different morphological types. In this case, Sa-Sbc galaxies show larger values of $x({\rm O}3{\rm N}2\_{50})$ and $x({\rm O}3{\rm N}2\_{\rm eff})$  for apertures with *R*/*R*50  ≤ 1.2 and *R*/$R\_{\rm eff}$  ≤ 0.7 respectively than Sc-Sm galaxies, although the difference is lower than the dispersions around the median values. Figure [o3n2mass] shows $x({\rm O}3{\rm N}2\_{50})$ and $x({\rm O}3{\rm N}2\_{\rm eff})$  for different stellar masses. Similarly to the previous case, galaxies with log*M*\*/*M*⊙ > 10.3 present larger values of $x({\rm O}3{\rm N}2\_{50})$ and $x({\rm O}3{\rm N}2\_{\rm eff})$  for apertures with *R*/*R*50  ≤ 1.2 and *R*/$R\_{\rm eff}$  ≤ 0.7 respectively than galaxies with log*M*\*/*M*⊙ ≤ 10.3, but as commented above, this difference is smalles than the dispersions around the median values of $x({\rm O}3{\rm N}2\_{50})$  and $x({\rm O}3{\rm N}2\_{\rm eff})$. In this case also the radial change of the O3N2 growth curves can be interpreted in terms of a radial change of the oxygen abundanes. Tables [hatypetab] to [o3n2massretab] show the numerical values of the median growth curves and the 1*σ* confidence intervals corresponding to the growth curves previously shown. These tables also contain the fits to all spiral galaxies considered in this work. As a summary of the present section we show in Table [polyfits] the coefficients of the fits to 5th order polynomials of the aperture corrections previously discussed. Corrections for fixed physical apertures ---------------------------------------- The large field-of-view covered by the CALIFA data allows us to predict the fraction of flux enclosed within a given angular aperture with respect to the flux enclosed within a fixed physical aperture at different redshifts. In our case, we have focused on the SDSS and SAMI fields-of-view. As the median value of $R\_{\rm eff}$  of our sample (spirals from Sa to Sdm excluding AGNs) is  ≈ 6.68 kpc, and based on $x(\alpha\_{\rm eff})$  shown in figure [ha40], we will estimate the quantities of interest within two circular apertures, one containing most of the flux of the galaxy (10 kpc radius, containing on average 90% of the total H*α* flux) and another one enclosing the flux in the central region (3.3 kpc radius, containing on average 30% of the total H*α* flux). Then for each of the CALIFA spirals we measure each of the relevant quantities within the circular aperture subtended by the SDSS fiber if the galaxy was placed at different redshifts. Figure [aper12sdss] gives the median values of the aperture corrections with respect to an aperture of 10 kpc radius for the *f*(H*α*), *f*(H*α*)/*f*(H*β*), N2 and O3N2. The SDSS fiber covers an aperture of  ≈ 10 kpc at a redshift of *z* ≈ 0.6. The trends with redshift shown by the aperture corrections are similar to the ones obtained in previous sections as a function of *R*50  and $R\_{\rm eff}$. Figure [aper6sdss] also provides corrections for a physical aperture of 3.3 kpc radius. In this case, the SDSS fiber covers this aperture at a redshift of *z* ≈ 0.12. We also provide corrections with respect to these two fixed apertures for the SAMI case (a circular bundle of fibers of 15 arcsec diameter). The results are shown in Figs. [aper12sami] and [aper6sami]. The larger size of the SAMI aperture compared to the SDSS one reduces the range of applicability of our aperture corrections since it results in complete coverage of the 10 and 3.3 kpc apertures at redshifts of *z* ≈ 0.07 and 0.02 respectively. Tables [aper12sdsshatab] to [aper12sdsso3n2tab] list the median values and 1-*σ* dispersions of the aperture corrections for fixed apertures of 10 kpc and 3.3 kpc diameter, focused on the cases of the SDSS and SAMI surveys. Discussion ========== In previous sections we have shown the growth curves of some emission line fluxes and line ratios, relevant for the estimation of Star Formation Rates (SFRs), extinction and characteristic oxygen abundances of spiral galaxies. These growth curves provide with aperture corrections for these quantities that should be considered in a statistical sense, i.e. applicable to large statistical samples of spiral galaxies. The effect of the aperture on H*α* luminosity, H*α*/H*β* ratio, N2 and O3N2 is found to be statistically uncorrelated to the inclination, morphological type and stellar mass of galaxies, except for some marginal differences always below the typical dispersions of the median growth curves for small apertures. It is interesting noting that the aperture effect found for N2 and O3N2 can be interpreted as an aperture effect in the oxygen abundances. As a first step to illustrate the aperture effect on the oxygen abundances we show in Fig. [ohtypecompameta] the median growth curves of the logarithmic oxygen abundance as a function of *R*50 , $x({\rm OH}\_{50})$[5](#fn5), derived using the calibrations by Marino et al. (2013, M13), Pettini & Pagel (2004, PP04) and Pérez-Montero & Contini (2009, PMC09), for N2 and O3N2 respectively. As it can be seen, the effect of the aperture marginally depends on the calibration used both for N2 and O3N2 showing a maximum difference of  ≈ 0.02 − 0.03 dex, which corresponds to 5-7.5%. Although the dispersions around the median values are not shown in the figures (for clarity), they are much lower than this difference. Figures [ohn2hatype] to  [oho3n2mass] show $x({\rm OH}\_{50})$ and $x({\rm OH}\_{\rm eff})$  estimated from N2 and O3N2 using the M13 calibrations. As shown in the figures, the effect of the aperture for both indicators is maximal for small apertures and only marginal differences (maximum values of  ≈ 0.04 dex) with morphological type and stellar mass are seeing, much lower than the typical dispersions around the median values. However, what is more relevant is the dispersions around the median values, than can reach maximal values of up to  + 25% for small apertures when deriving oxygen abundances of early-type and/or high mass spirals using N2. We keep in mind that using other calibrations for the oxygen abundances could result in even larger dispersions around the median values. In particular, as shown in figure [ohtypecompameta], using for example the PMC09 calibrations would result in dispersions  ≈ 5% larger than using the M13 calibrations. This means that when the fluxes are obtained through very small apertures, the aperture corrected oxygen abundances for individual galaxies could be very uncertain and far from the real values. We stress again that the median aperture corrections for oxygen abundances (and other relevant quantities reported in this work) must be applied to large samples of galaxies and interpreted in a statistical sense. A sample case: SDSS galaxies at 0.02 ≤ *z* ≤ 0.3 ------------------------------------------------ In this subsection we apply the results of our work on a sample of galaxies from the SDSS survey and quantify the average effect of the aperture resulting when estimating the oxygen abundance from the flux measured through the SDSS fiber. For this we will make use of the corrections derived using *R*50  and split in two bins of stellar masses, since these two parameters are available for all the SDSS galaxies. The sample of galaxies is the MPA-JHU sample (Kauffmann et al. 2003, Brinchmann et al. 2004, Tremonti et al. 2004, and Salim et al. 2007), that provides stellar masses and uses spectroscopy from SDSS-DR7 (Abazajian et al. 2009), and photometry complemented from SDSS-DR12 (Alam et al. 2015), satisfying the following criteria: * Redshift in the range 0.02 ≤ *z* ≤ 0.3. * Signal-to-noise ratio of the emission lines H*α*, H*β*, [] *λ*6583 and [] *λ*5007 larger than 3. * Stellar mass in the range 8.5 ≤ log*M*\*/*M*⊙ ≤ 11.5. * Signal-to-noise ratio of the half-light Petrosian radius in the *r*ʹ band (*R*50) larger than 3. Next we remove those galaxies classified as QSO by SDSS. We also remove those galaxies not classified as star-forming according to the BPT diagram (Baldwin et al. 1981) of Kauffmann et al. (2003) and the condition EW(H*α*)  ≥ 3 Å as indicated in the WHAN diagram of Cid Fernandes et al. (2011). Finally, we keep only the galaxies for which 0.3 ≤ 1.5ʺ/*R*50 ≤ 2.5, as it is the range of applicability of the corrections as indicated in IP13. Figure [sdssbpt] shows the BPT diagram of the SDSS sample. In order to be as much realistic as possible, we want to take into account the large dispersion observed around the median values of the growth curves for small apertures. For this, we first construct approximate cumulative distribution functions (CDFs) of the distribution of the individual O/H growth curves at each radius for the two stellar mass bins previously considered in Figs. [ohn2hamass] and [oho3n2mass]. Figures [cdfohn2ha] and [cdfoho3n2] show the CDFs at three different radii normalized to *R*50. As it can be seen, the larger the radius normalized with respect to *R*50, the closer the distribution remains to zero, as it was observed in the previous section. Several Monte Carlo tests were produced to verify that the random distributions following these CDFs are consistent with being drawn from the same parental distribution as the original distributions from the CALIFA galaxies. Then we are ready to quantify the aperture effect with a sample of galaxies from SDSS for which the stellar mass, *R*50, and the fluxes of the required emission lines are well known. To each of the galaxies we will apply a value of the correction corresponding to its stellar mass and coverage of the SDSS fiber (normalized to *R*50) taken at random from the corresponding CDF. The difference between the oxygen abundances estimated from the flux of the SDSS fiber and that obtained using the aperture correction as described above was computed for each galaxy using the two indicators N2 and O3N2. Then, the median value of these differences was computed for five stellar mass bins: log*M*\*/*M*⊙ ∈  [8.5,9.1], [9.1,9.7], [9.7,10.3], [10.3,10.9] and [10.9,11.5]. We repeated this process a total of 25 times, and computed the average value of the median of the differences for each of these four mass bins. Table [abunmonte] shows the results of this simulation split in six redshift bins: *z* ∈  [0.02,0.05], [0.05,0.10], [0.10,0.15], [0.15,0.20], [0.20,0.25] and [0.25,0.30]. In this study the aperture effect depends on two different aspects: on the one side, the fraction of galaxy covered by the SDSS fiber is partly driven by the redshift of the galaxies. On the other hand, the aperture effect depends on the stellar mass of the galaxy. Thus, table [abunmonte] must show a combination of these effects. The first thing we note is that the maximum average aperture effect measured with these two calibrations at the redshifts and stellar mass ranges probed is of the order of  ≈ 0.047 dex ( ≈ 11%), which as previously said, is much smaller than the typical uncertainties of any of the two empirical calibrations. The average corrections when using the N2 calibration tend to increase with stellar mass at all redshift ranges probed, and for a given stellar mass, the average correction decreases whn redshift increases. Regarding the O3N2 calibration, it reaches a maximum for stellar masses 9.7 < log*M*\*/*M*⊙ < 10.3 at all redshift ranges, and for a given mass range it tends to decrease as redshift increases. We also remark that in this work we have studied the effect of the aperture on the indicators used to estimate the oxygen abundance (N2 and O3N2). However, it is out of the scope of this paper enter in the discussion about the goodness of the integrated values of these two indicators as proxies for the oxygen abundance, since they contain emission not only from star-forming regions but also from the diffuse ionized gas of the disks of spiral galaxies. It has been reported in the literature that the oxygen abundance estimated from integrated fluxes of emission lines approximately equals the corresponding abundance of star-forming regions at the galactocentric distance 0.4 × *R**o**p**t* (Pilyugin et al. 2004; Moustakas et al. 2006). Although this relation links these two quantities, still further information is required in order to get information about the chemical evolution of spiral galaxies. Conclusions =========== This paper presents the growth curves taken through circular apertures of several emission line fluxes for a sample of spiral galaxies from the CALIFA project. These curves allow to study the effect of estimating galactic properties from the light of a circular aperture covering only partially the spatial extent of a galaxy instead of using the integrated light. The main conclusions arising from this work are the following: * Whereas the median H*α* growth curves are insensitive to inclination, morphology and total stellar mass of the studied galaxies, our analysis documents a strong dependence of the registered H*α* luminosity on the aperture size (consequently, also on redshift). * The median H*α*/H*β* growth curves are insensitive to the inclination, morphological types and stellar masses of the galaxies, and shows a mild trend with a maximum for small apertures smoothly decreasing towards large apertures, which means that the extinction is overestimated on average when observing galaxies through small apertures. * The median N2 growth curves are also insensitive to the inclinations, morphological types and stellar masses of the galaxies, showing a decreasing trend towards large apertures. This means that the oxygen abundance is overestimated when observing through small apertures. * The median O3N2 growth curves are insensitive to the inclinations, and show higher values for galaxies of earlier morphological types and log*M*\*/*M*⊙ > 10.3 at small radii apertures. Again, this means that estimating oxygen abundances through small apertures tends to overestimate the abundances. * When applying our aperture corrections to a sample of SDSS galaxies with 0.02 ≤ *z* ≤ 0.3, the average corrected oxygen abundances derived from the SDSS fluxes may result enhanced by a maximum of  ≈ 11% with respect to the fiber-based ones. This work has shown that although the median aperture corrections for oxygen abundance are always small, the dispersions around these median values become very large for small apertures, thus preventing the use of aperture corrections for studies of individual galaxies unless a very large uncertainty is assumed. A more detailed study on the aperture effects on some individual emission line ratios will be presented in a forthcoming paper. This study made use of the data provided by the Calar Alto Legacy Integral Field Area (CALIFA) survey (). The CALIFA collaboration would like to thank the IAA-CSIC and MPIA-MPG as major partners of the observatory, and CAHA itself, for the unique access to telescope time and support in manpower and infrastructures. The CALIFA collaboration also thanks the CAHA staff for the dedication to this project. Based on observations collected at the Centro Astronómico Hispano Alemán (CAHA) at Calar Alto, operated jointly by the Max-Planck-Institut für Astronomie and the Instituto de Astrofísica de Andalucía (CSIC). We thank the Viabilidad, Diseño, Acceso y Mejora funding program ICTS-2009-10, for supporting the initial developement of this project. JIP, JVM, CK, EPM and SDP acknowledge financial support from the Spanish MINECO under grant AYA2010-21887-C04-01, and from Junta de Andalucía Excellence Project PEX2011-FQM7058. CCT and ACM also thank the support from the Plan Nacional de Investigación y Desarrollo funding program AYA2013-46724-P. JMG acknowledges support by Fundação para a Ciência e a Tecnologia (FCT) through the Fellowship SFRH/BPD/66958/2009 and POPH/FSE (EC) by FEDER funding through the program Programa Operacional de Factores de Competitividade (COMPETE). PP is supported by FCT through the Investigador FCT Contract No. IF/01220/2013 and POPH/FSE (EC) by FEDER funding through the program COMPETE. JMG&PP also acknowledge support by FCT under project FCOMP-01-0124-FEDER-029170 (Reference FCT PTDC/FIS-AST/3214/2012), funded by FCT-MEC (PIDDAC) and FEDER (COMPETE). They also acknowledge support by the exchange programme ‘Study of Emission-Line Galaxies with Integral-Field Spectroscopy’ (SELGIFS, FP7-PEOPLE-2013-IRSES-612701), funded by the EU through the IRSES scheme. FFRO acknowledges the exchange programme ‘Study of Emission-Line Galaxies with Integral-Field Spectroscopy’ (SELGIFS, FP7-PEOPLE-2013-IRSES-612701), funded by the EU through the IRSES scheme. Support for LG is provided by the Ministry of Economy, Development, and Tourism’s Millennium Science Initiative through grant IC120009, awarded to The Millennium Institute of Astrophysics, MAS. LG acknowledges support by CONICYT through FONDECYT grant 3140566. This research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. We acknowledge the usage of the HyperLeda database (). Funding for SDSS-III has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, and the U.S. Department of Energy Office of Science. SDSS-III is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS-III Collaboration including the University of Arizona, the Brazilian Participation Group, Brookhaven National Laboratory, Carnegie Mellon University, University of Florida, the French Participation Group, the German Participation Group, Harvard University, the Instituto de Astrofísica de Canarias, the Michigan State/Notre Dame/JINA Participation Group, Johns Hopkins University, Lawrence Berkeley National Laboratory, Max Planck Institute for Astrophysics, Max Planck Institute for Extraterrestrial Physics, New Mexico State University, New York University, Ohio State University, Pennsylvania State University, University of Portsmouth, Princeton University, the Spanish Participation Group, University of Tokyo, University of Utah, Vanderbilt University, University of Virginia, University of Washington, and Yale University. The SDSS-III web site is. Abazajian, K. N., Adelman-McCarthy, J. K., Agüeros, M. A., et al. 2009,, 182, 543 Alam, S., Albareti, F. D., Allende Prieto, C., et al. 2015,, 219, 12 Allen, J. T., Croom, S. M., Konstantopoulos, I. S., et al. 2015,, 446, 1567 Bacon, R., Copin, Y., Monnet, G., et al. 2001,, 326, 23 Baldwin, J. A., Phillips, M. M., & Terlevich, R. 1981,, 93, 5 Best, P. N., & Heckman, T. M. 2012,, 421, 1569 Blanc, G. A., Gebhardt, K., Heiderman, A., et al. 2010, New Horizons in Astronomy: Frank N. Bash Symposium 2009, 432, 180 Brinchmann, J., Charlot, S., White, S. D. M., et al. 2004,, 351, 1151 Bryant, J. J., Owers, M. S., Robotham, A. S. G., et al. 2015,, 447, 2857 Cappellari, M., Emsellem, E., Krajnović, D., et al. 2011,, 413, 813 Cid Fernandes, R., Stasińska, G., Mateus, A., & Vale Asari, N. 2011,, 413, 1687 Cid Fernandes, R., Pérez, E., García Benito, R., et al. 2013,, 557, A86 Colless, M., Dalton, G., Maddox, S., et al. 2001,, 328, 1039 Davis, M., Faber, S. M., Newman, J., et al. 2003,, 4834, 161 Ellis, S. C., Driver, S. P., Allen, P. D., et al. 2005,, 363, 1257 García-Benito, R., Zibetti, S., Sánchez, S. F., et al. 2015,, 576, A135 Gerssen, J., Wilman, D. J., & Christensen, L. 2012,, 420, 197 Gomes, J. M., Papaderos, P., Kehrig, C., et al. 2015a, arXiv:1511.02191 Gomes, J. M., Papaderos, P., Vílchez, J. M., et al. 2015b, arXiv:1511.01300 González Delgado, R. M., Pérez, E., Cid Fernandes, R., et al. 2014,, 562, A47 Heiderman, A. L., Evans, N. J., II, Gebhardt, K., et al. 2011, New Horizons in Astronomy, Proceedings of the Frank N. Bash Symposium 2011, held October 9-11, 2011 Hopkins, A. M., Miller, C. J., Nichol, R. C., et al. 2003,, 599, 971 Iglesias-Páramo, J., Vílchez, J. M., Galbany, L., et al. 2013,, 553, L7 Kauffmann, G., Heckman, T. M., Tremonti, C., et al. 2003,, 346, 1055 Kehrig C., et al., 2012, A&A, 540, A11 Kehrig C., et al., 2013, MNRAS, 432, 2731 Kewley, L. J., Dopita, M. A., Sutherland, R. S., Heisler, C. A., & Trevena, J. 2001,, 556, 121 Kewley, L. J., Jansen, R. A., & Geller, M. J. 2005,, 117, 227 Kewley, L. J., Groves, B., Kauffmann, G., & Heckman, T. 2006,, 372, 961 Kewley, L. J., & Ellison, S. L. 2008,, 681, 1183 Kroupa, P. 2001,, 322, 231 Le Fèvre, O., Vettolani, G., Garilli, B., et al. 2005,, 439, 845 Lilly, S. J., Le Fèvre, O., Renzini, A., et al. 2007,, 172, 70 Makarov, D., Prugniel, P., Terekhova, N., Courtois, H., & Vauglin, I. 2014,, 570, A13 Marino, R. A., Rosales-Ortega, F. F., Sánchez, S. F., et al. 2013,, 559, A114 Mast, D., Rosales-Ortega, F. F., Sánchez, S. F., et al. 2014,, 561, A129 Moustakas, J., & Kennicutt, R. C., Jr. 2006,, 651, 155 Osterbrock, D. E. 1989, Research supported by the University of California, John Simon Guggenheim Memorial Foundation, University of Minnesota, et al. Mill Valley, CA, University Science Books, 1989, 422 p., Papaderos, P., Gomes, J. M., Vílchez, J. M., et al. 2013,, 555, L1 Pérez-Montero, E., & Contini, T. 2009,, 398, 949 Pettini, M., & Pagel, B. E. J. 2004,, 348, L59 Pilyugin, L. S., Contini, T., & Vílchez, J. M. 2004,, 423, 427 Richards, S. N., Bryant, J., Croom, S., et al. 2015, arXiv:1510.06038 Rosales-Ortega, F. F., Kennicutt, R. C., Sánchez, S. F., et al. 2010,, 405, 735 Roth, M. M., Kelz, A., Fechner, T., et al. 2005,, 117, 620 Salim, S., Rich, R. M., Charlot, S., et al. 2007,, 173, 267 Sánchez, S. F., Kennicutt, R. C., Gil de Paz, A., et al. 2012a,, 538, A8 Sánchez, S. F., Rosales-Ortega, F. F., Jungwiert, B., et al. 2013,, 554, A58 Sánchez, S. F., Rosales-Ortega, F. F., Iglesias-Páramo, J., et al. 2014,, 563, A49 Sánchez-Blázquez, P., Peletier, R. F., Jiménez-Vicente, J., et al. 2006,, 371, 703 Sharp, R., Allen, J. T., Fogarty, L. M. R., et al. 2015,, 446, 1551 Singh, R., van de Ven, G., Jahnke, K., et al. 2013,, 558, A43 Tremonti, C. A., Heckman, T. M., Kauffmann, G., et al. 2004,, 613, 898 Vazdekis, A., Sánchez-Blázquez, P., Falcón-Barroso, J., et al. 2010,, 404, 1639 Walcher, C. J., Wisotzki, L., Bekeraité, S., et al. 2014,, 569, A1 York, D. G., Adelman, J., Anderson, J. E., Jr., et al. 2000,, 120, 1579 Zahid, H. J., Yates, R. M., Kewley, L. J., & Kudritzki, R. P. 2013,, 763, 92 ccccc *r*/*R*50& *x*(*α*50)& *σ*(*x*(*α*50)) & *x*(*α*50)& *σ*(*x*(*α*50)) & & 0.3 & 0.091 & 0.074 & 0.085 & 0.060 0.5 & 0.194 & 0.119 & 0.182 & 0.102 0.7 & 0.301 & 0.156 & 0.283 & 0.126 0.9 & 0.419 & 0.158 & 0.397 & 0.132 1.1 & 0.530 & 0.146 & 0.506 & 0.134 1.3 & 0.629 & 0.130 & 0.620 & 0.121 1.5 & 0.718 & 0.117 & 0.715 & 0.121 1.7 & 0.798 & 0.101 & 0.801 & 0.104 1.9 & 0.854 & 0.079 & 0.865 & 0.078 2.1 & 0.917 & 0.070 & 0.920 & 0.050 2.3 & 0.961 & 0.039 & 0.965 & 0.027 llll & Initial sample & 402 & Initial sample & 394 †  Excluding Es and S0s & 301 & Excluding Es and S0s & 296 Excluding AGNs & 212 & Excluding AGNs & 208 *R*50  ≤ 14.4” & 171 & $R\_{\rm eff}$  ≤ 24” & 138 *f*(H*α*)  ≥ 0, *σ*(H*α*)/*f*(H*α*)  ≤ 0.333 & 165 & *f*(H*α*)  ≥ 0, *σ*(H*α*)/*f*(H*α*)  ≤ 0.333 & 133  †  The initial number of galaxies in $S\_{\rm eff}$ is different than that in *S*50 because $R\_{\rm eff}$ could not be determined for 8 galaxies, and thus they are excluded from the analysis. ccc & *S*50 & $S\_{\rm eff}$ ⟨log*M*\*/*M*⊙⟩ & 10.25 & 10.25 ⟨*R*50⟩ (”) & 11.04 & ⟨$R\_{\rm eff}$⟩ (”) & & 20.31 Sa + Sab + Sb + Sbc & 107 & 85 Sc + Scd + Sd + Sdm + Sm & 85 & 48 cccccccccc *r*/*R*50& & & &  − *σ* & *x*(*α*50)&  + *σ* &  − *σ* & *x*(*α*50)&  + *σ* &  − *σ* & *x*(*α*50)&  + *σ* 0.00 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 0.10 & 0.007 & 0.021 & 0.066 & 0.009 & 0.018 & 0.032 & 0.008 & 0.020 & 0.057 0.20 & 0.022 & 0.046 & 0.131 & 0.027 & 0.045 & 0.073 & 0.024 & 0.046 & 0.114 0.30 & 0.040 & 0.082 & 0.200 & 0.053 & 0.086 & 0.135 & 0.049 & 0.085 & 0.169 0.40 & 0.072 & 0.130 & 0.262 & 0.084 & 0.132 & 0.213 & 0.077 & 0.130 & 0.246 0.50 & 0.105 & 0.182 & 0.333 & 0.128 & 0.187 & 0.284 & 0.111 & 0.182 & 0.316 0.60 & 0.140 & 0.226 & 0.388 & 0.165 & 0.244 & 0.361 & 0.148 & 0.237 & 0.375 0.70 & 0.176 & 0.282 & 0.456 & 0.212 & 0.297 & 0.435 & 0.193 & 0.283 & 0.445 0.80 & 0.223 & 0.335 & 0.515 & 0.273 & 0.358 & 0.488 & 0.244 & 0.340 & 0.505 0.90 & 0.264 & 0.386 & 0.560 & 0.327 & 0.423 & 0.542 & 0.288 & 0.397 & 0.553 1.00 & 0.312 & 0.441 & 0.610 & 0.371 & 0.484 & 0.608 & 0.329 & 0.454 & 0.609 1.10 & 0.354 & 0.491 & 0.653 & 0.419 & 0.543 & 0.669 & 0.385 & 0.506 & 0.654 1.20 & 0.408 & 0.545 & 0.693 & 0.480 & 0.594 & 0.718 & 0.449 & 0.567 & 0.697 1.30 & 0.457 & 0.606 & 0.731 & 0.544 & 0.638 & 0.770 & 0.504 & 0.620 & 0.746 1.40 & 0.498 & 0.662 & 0.775 & 0.607 & 0.687 & 0.818 & 0.552 & 0.673 & 0.789 1.50 & 0.545 & 0.703 & 0.809 & 0.663 & 0.743 & 0.858 & 0.592 & 0.715 & 0.833 1.60 & 0.583 & 0.737 & 0.838 & 0.698 & 0.788 & 0.884 & 0.645 & 0.759 & 0.862 1.70 & 0.641 & 0.780 & 0.873 & 0.729 & 0.829 & 0.907 & 0.682 & 0.801 & 0.890 1.80 & 0.701 & 0.820 & 0.902 & 0.777 & 0.863 & 0.927 & 0.727 & 0.834 & 0.912 1.90 & 0.747 & 0.853 & 0.923 & 0.815 & 0.898 & 0.937 & 0.773 & 0.865 & 0.930 2.00 & 0.798 & 0.890 & 0.941 & 0.848 & 0.918 & 0.953 & 0.817 & 0.895 & 0.944 2.10 & 0.850 & 0.915 & 0.958 & 0.880 & 0.939 & 0.967 & 0.861 & 0.920 & 0.961 2.20 & 0.891 & 0.940 & 0.971 & 0.913 & 0.957 & 0.982 & 0.899 & 0.943 & 0.975 2.30 & 0.928 & 0.961 & 0.984 & 0.943 & 0.973 & 0.991 & 0.934 & 0.965 & 0.988 2.40 & 0.965 & 0.983 & 0.993 & 0.973 & 0.987 & 0.996 & 0.968 & 0.984 & 0.995 2.50 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 cccccccccc *r*/$R\_{\rm eff}$& & & &  − *σ* & $x(\alpha\_{\rm eff})$&  + *σ* &  − *σ* & $x(\alpha\_{\rm eff})$&  + *σ*  − *σ* & $x(\alpha\_{\rm eff})$&  + *σ* 0.00 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 0.10 & 0.016 & 0.038 & 0.142 & 0.018 & 0.040 & 0.080 & 0.017 & 0.039 & 0.107 0.20 & 0.049 & 0.109 & 0.237 & 0.057 & 0.100 & 0.189 & 0.054 & 0.107 & 0.227 0.30 & 0.103 & 0.210 & 0.364 & 0.113 & 0.184 & 0.310 & 0.103 & 0.195 & 0.335 0.40 & 0.173 & 0.284 & 0.471 & 0.189 & 0.271 & 0.425 & 0.177 & 0.280 & 0.457 0.50 & 0.254 & 0.388 & 0.589 & 0.264 & 0.365 & 0.531 & 0.256 & 0.385 & 0.561 0.60 & 0.337 & 0.478 & 0.670 & 0.333 & 0.455 & 0.623 & 0.336 & 0.468 & 0.664 0.70 & 0.427 & 0.577 & 0.724 & 0.403 & 0.535 & 0.728 & 0.417 & 0.568 & 0.727 0.80 & 0.536 & 0.658 & 0.781 & 0.505 & 0.634 & 0.804 & 0.535 & 0.655 & 0.782 0.90 & 0.613 & 0.715 & 0.834 & 0.583 & 0.735 & 0.869 & 0.612 & 0.722 & 0.835 1.00 & 0.700 & 0.785 & 0.880 & 0.669 & 0.799 & 0.896 & 0.684 & 0.794 & 0.889 1.10 & 0.775 & 0.853 & 0.922 & 0.750 & 0.848 & 0.934 & 0.769 & 0.851 & 0.925 1.20 & 0.845 & 0.905 & 0.945 & 0.828 & 0.897 & 0.967 & 0.842 & 0.902 & 0.958 1.30 & 0.901 & 0.940 & 0.976 & 0.905 & 0.939 & 0.984 & 0.903 & 0.940 & 0.981 1.40 & 0.956 & 0.972 & 0.991 & 0.956 & 0.977 & 0.994 & 0.956 & 0.974 & 0.992 1.50 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 ccccccc *r*/*R*50& & &  − *σ* & *x*(*α*50)&  + *σ* &  − *σ* & *x*(*α*50)&  + *σ* 0.00 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 0.10 & 0.009 & 0.020 & 0.039 & 0.007 & 0.020 & 0.066 0.20 & 0.027 & 0.046 & 0.083 & 0.021 & 0.048 & 0.132 0.30 & 0.052 & 0.085 & 0.146 & 0.039 & 0.084 & 0.202 0.40 & 0.083 & 0.133 & 0.231 & 0.070 & 0.130 & 0.263 0.50 & 0.120 & 0.186 & 0.298 & 0.104 & 0.182 & 0.331 0.60 & 0.164 & 0.237 & 0.372 & 0.138 & 0.231 & 0.379 0.70 & 0.211 & 0.286 & 0.455 & 0.175 & 0.281 & 0.421 0.80 & 0.260 & 0.352 & 0.516 & 0.218 & 0.338 & 0.483 0.90 & 0.309 & 0.412 & 0.564 & 0.254 & 0.389 & 0.547 1.00 & 0.363 & 0.463 & 0.629 & 0.309 & 0.443 & 0.593 1.10 & 0.420 & 0.515 & 0.694 & 0.347 & 0.496 & 0.649 1.20 & 0.478 & 0.579 & 0.731 & 0.401 & 0.562 & 0.687 1.30 & 0.532 & 0.624 & 0.766 & 0.444 & 0.617 & 0.729 1.40 & 0.587 & 0.679 & 0.818 & 0.491 & 0.668 & 0.764 1.50 & 0.630 & 0.725 & 0.856 & 0.545 & 0.707 & 0.804 1.60 & 0.658 & 0.769 & 0.883 & 0.583 & 0.752 & 0.837 1.70 & 0.710 & 0.822 & 0.906 & 0.634 & 0.794 & 0.872 1.80 & 0.743 & 0.852 & 0.927 & 0.695 & 0.829 & 0.897 1.90 & 0.796 & 0.884 & 0.941 & 0.746 & 0.862 & 0.920 2.00 & 0.834 & 0.907 & 0.955 & 0.797 & 0.891 & 0.938 2.10 & 0.867 & 0.928 & 0.970 & 0.849 & 0.919 & 0.956 2.20 & 0.903 & 0.947 & 0.981 & 0.891 & 0.943 & 0.971 2.30 & 0.939 & 0.965 & 0.989 & 0.929 & 0.965 & 0.984 2.40 & 0.970 & 0.984 & 0.995 & 0.964 & 0.984 & 0.994 2.50 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 ccccccc *r*/$R\_{\rm eff}$& & &  − *σ* & $x(\alpha\_{\rm eff})$&  + *σ* &  − *σ* & $x(\alpha\_{\rm eff})$&  + *σ* 0.00 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 0.10 & 0.019 & 0.040 & 0.091 & 0.016 & 0.035 & 0.114 0.20 & 0.057 & 0.110 & 0.229 & 0.046 & 0.096 & 0.226 0.30 & 0.116 & 0.203 & 0.375 & 0.103 & 0.180 & 0.332 0.40 & 0.191 & 0.293 & 0.464 & 0.165 & 0.263 & 0.452 0.50 & 0.273 & 0.404 & 0.587 & 0.235 & 0.366 & 0.557 0.60 & 0.357 & 0.498 & 0.667 & 0.322 & 0.465 & 0.638 0.70 & 0.441 & 0.590 & 0.732 & 0.415 & 0.561 & 0.718 0.80 & 0.525 & 0.657 & 0.815 & 0.536 & 0.643 & 0.775 0.90 & 0.612 & 0.744 & 0.871 & 0.612 & 0.713 & 0.831 1.00 & 0.683 & 0.803 & 0.905 & 0.685 & 0.784 & 0.867 1.10 & 0.768 & 0.851 & 0.933 & 0.771 & 0.850 & 0.922 1.20 & 0.841 & 0.906 & 0.966 & 0.843 & 0.898 & 0.938 1.30 & 0.903 & 0.946 & 0.982 & 0.902 & 0.938 & 0.970 1.40 & 0.956 & 0.977 & 0.994 & 0.957 & 0.973 & 0.990 1.50 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 cccccccccc *r*/*R*50& & & &  − *σ* & *x*(*α**β*50)&  + *σ* &  − *σ* & *x*(*α**β*50)&  + *σ* &  − *σ* & *x*(*α**β*50)&  + *σ* 0.00 & 1.006 & 1.144 & 1.391 & 0.965 & 1.087 & 1.234 & 0.992 & 1.123 & 1.333 0.10 & 0.996 & 1.153 & 1.404 & 0.962 & 1.092 & 1.244 & 0.985 & 1.123 & 1.351 0.20 & 0.998 & 1.149 & 1.399 & 0.964 & 1.094 & 1.241 & 0.984 & 1.126 & 1.352 0.30 & 1.010 & 1.153 & 1.369 & 0.987 & 1.092 & 1.233 & 0.993 & 1.118 & 1.328 0.40 & 1.007 & 1.132 & 1.304 & 0.995 & 1.079 & 1.220 & 0.996 & 1.104 & 1.284 0.50 & 0.992 & 1.113 & 1.260 & 0.994 & 1.059 & 1.200 & 0.993 & 1.083 & 1.234 0.60 & 0.977 & 1.108 & 1.240 & 0.998 & 1.045 & 1.188 & 0.993 & 1.070 & 1.221 0.70 & 0.973 & 1.099 & 1.220 & 1.007 & 1.044 & 1.154 & 0.977 & 1.072 & 1.207 0.80 & 0.952 & 1.086 & 1.191 & 0.993 & 1.036 & 1.147 & 0.972 & 1.061 & 1.175 0.90 & 0.944 & 1.069 & 1.166 & 0.980 & 1.035 & 1.117 & 0.972 & 1.055 & 1.162 1.00 & 0.956 & 1.059 & 1.147 & 0.970 & 1.036 & 1.104 & 0.960 & 1.041 & 1.143 1.10 & 0.966 & 1.049 & 1.138 & 0.972 & 1.025 & 1.099 & 0.968 & 1.034 & 1.131 1.20 & 0.967 & 1.037 & 1.126 & 0.975 & 1.016 & 1.096 & 0.972 & 1.027 & 1.120 1.30 & 0.969 & 1.034 & 1.128 & 0.966 & 1.014 & 1.079 & 0.968 & 1.023 & 1.109 1.40 & 0.966 & 1.033 & 1.121 & 0.958 & 1.010 & 1.078 & 0.963 & 1.024 & 1.109 1.50 & 0.953 & 1.028 & 1.106 & 0.964 & 1.006 & 1.070 & 0.959 & 1.022 & 1.087 1.60 & 0.952 & 1.026 & 1.092 & 0.968 & 1.004 & 1.050 & 0.960 & 1.016 & 1.084 1.70 & 0.960 & 1.018 & 1.079 & 0.959 & 1.002 & 1.039 & 0.959 & 1.012 & 1.071 1.80 & 0.960 & 1.016 & 1.078 & 0.953 & 0.999 & 1.043 & 0.958 & 1.011 & 1.059 1.90 & 0.972 & 1.014 & 1.070 & 0.951 & 0.999 & 1.046 & 0.964 & 1.007 & 1.064 2.00 & 0.977 & 1.015 & 1.058 & 0.958 & 0.998 & 1.051 & 0.972 & 1.010 & 1.052 2.10 & 0.982 & 1.014 & 1.046 & 0.966 & 1.001 & 1.038 & 0.972 & 1.010 & 1.046 2.20 & 0.982 & 1.009 & 1.037 & 0.969 & 1.001 & 1.034 & 0.975 & 1.007 & 1.037 2.30 & 0.986 & 1.007 & 1.029 & 0.970 & 1.000 & 1.024 & 0.980 & 1.005 & 1.025 2.40 & 0.992 & 1.002 & 1.016 & 0.986 & 0.999 & 1.015 & 0.990 & 1.001 & 1.016 2.50 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 cccccccccc *r*/$R\_{\rm eff}$& & & &  − *σ* & $x(\alpha\beta\_{\rm eff})$&  + *σ* &  − *σ* & $x(\alpha\beta\_{\rm eff})$&  + *σ* &  − *σ* & $x(\alpha\beta\_{\rm eff})$&  + *σ* 0.00 & 0.968 & 1.114 & 1.369 & 0.940 & 1.101 & 1.196 & 0.965 & 1.109 & 1.320 0.10 & 0.974 & 1.116 & 1.387 & 0.933 & 1.102 & 1.192 & 0.958 & 1.110 & 1.330 0.20 & 0.966 & 1.110 & 1.311 & 0.941 & 1.093 & 1.208 & 0.954 & 1.104 & 1.295 0.30 & 0.968 & 1.085 & 1.216 & 0.974 & 1.067 & 1.210 & 0.969 & 1.076 & 1.213 0.40 & 0.963 & 1.080 & 1.201 & 0.965 & 1.047 & 1.123 & 0.965 & 1.064 & 1.180 0.50 & 0.981 & 1.064 & 1.173 & 0.970 & 1.044 & 1.107 & 0.975 & 1.055 & 1.140 0.60 & 0.969 & 1.058 & 1.149 & 0.961 & 1.034 & 1.092 & 0.965 & 1.041 & 1.137 0.70 & 0.960 & 1.044 & 1.134 & 0.953 & 1.022 & 1.091 & 0.957 & 1.032 & 1.129 0.80 & 0.965 & 1.041 & 1.112 & 0.947 & 1.014 & 1.101 & 0.959 & 1.021 & 1.109 0.90 & 0.957 & 1.032 & 1.116 & 0.952 & 1.015 & 1.060 & 0.953 & 1.019 & 1.110 1.00 & 0.963 & 1.026 & 1.092 & 0.970 & 1.007 & 1.067 & 0.965 & 1.016 & 1.085 1.10 & 0.955 & 1.018 & 1.086 & 0.954 & 1.009 & 1.055 & 0.955 & 1.012 & 1.076 1.20 & 0.956 & 1.018 & 1.069 & 0.961 & 1.004 & 1.058 & 0.960 & 1.009 & 1.068 1.30 & 0.964 & 1.008 & 1.058 & 0.967 & 1.000 & 1.042 & 0.964 & 1.005 & 1.053 1.40 & 0.980 & 1.005 & 1.033 & 0.983 & 1.001 & 1.021 & 0.982 & 1.002 & 1.025 1.50 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 ccccccc *r*/*R*50& & &  − *σ* & *x*(*α**β*50)&  + *σ* &  − *σ* & *x*(*α**β*50)&  + *σ* 0.00 & 0.989 & 1.102 & 1.257 & 1.005 & 1.178 & 1.441 0.10 & 0.981 & 1.100 & 1.267 & 0.998 & 1.181 & 1.461 0.20 & 0.982 & 1.101 & 1.263 & 1.000 & 1.181 & 1.461 0.30 & 0.993 & 1.101 & 1.255 & 1.003 & 1.167 & 1.430 0.40 & 0.996 & 1.081 & 1.216 & 1.005 & 1.149 & 1.340 0.50 & 0.995 & 1.063 & 1.199 & 0.992 & 1.124 & 1.287 0.60 & 0.999 & 1.046 & 1.197 & 0.969 & 1.117 & 1.257 0.70 & 0.985 & 1.047 & 1.175 & 0.972 & 1.104 & 1.233 0.80 & 0.980 & 1.040 & 1.158 & 0.953 & 1.098 & 1.229 0.90 & 0.977 & 1.040 & 1.134 & 0.954 & 1.075 & 1.214 1.00 & 0.969 & 1.037 & 1.129 & 0.958 & 1.062 & 1.182 1.10 & 0.968 & 1.029 & 1.111 & 0.969 & 1.057 & 1.160 1.20 & 0.963 & 1.021 & 1.105 & 0.981 & 1.041 & 1.150 1.30 & 0.961 & 1.014 & 1.090 & 0.977 & 1.041 & 1.138 1.40 & 0.957 & 1.014 & 1.082 & 0.967 & 1.038 & 1.126 1.50 & 0.961 & 1.011 & 1.079 & 0.953 & 1.036 & 1.109 1.60 & 0.960 & 1.011 & 1.054 & 0.961 & 1.032 & 1.092 1.70 & 0.958 & 1.003 & 1.048 & 0.965 & 1.025 & 1.091 1.80 & 0.948 & 1.002 & 1.045 & 0.965 & 1.022 & 1.079 1.90 & 0.952 & 1.000 & 1.044 & 0.976 & 1.020 & 1.088 2.00 & 0.958 & 1.000 & 1.044 & 0.979 & 1.022 & 1.068 2.10 & 0.966 & 1.002 & 1.037 & 0.982 & 1.017 & 1.049 2.20 & 0.969 & 1.002 & 1.036 & 0.981 & 1.011 & 1.040 2.30 & 0.971 & 1.001 & 1.024 & 0.984 & 1.008 & 1.030 2.40 & 0.987 & 1.000 & 1.014 & 0.993 & 1.002 & 1.016 2.50 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 ccccccc *r*/$R\_{\rm eff}$& & &  − *σ* & $x(\alpha\beta\_{\rm eff})$&  + *σ* &  − *σ* & $x(\alpha\beta\_{\rm eff})$&  + *σ* 0.00 & 0.941 & 1.094 & 1.226 & 0.969 & 1.145 & 1.370 0.10 & 0.941 & 1.099 & 1.239 & 0.975 & 1.146 & 1.393 0.20 & 0.941 & 1.084 & 1.219 & 0.984 & 1.126 & 1.320 0.30 & 0.964 & 1.057 & 1.203 & 0.995 & 1.096 & 1.232 0.40 & 0.956 & 1.047 & 1.144 & 0.979 & 1.090 & 1.210 0.50 & 0.973 & 1.044 & 1.115 & 0.981 & 1.073 & 1.179 0.60 & 0.963 & 1.036 & 1.093 & 0.968 & 1.064 & 1.166 0.70 & 0.956 & 1.030 & 1.093 & 0.959 & 1.039 & 1.145 0.80 & 0.958 & 1.020 & 1.102 & 0.959 & 1.023 & 1.122 0.90 & 0.962 & 1.018 & 1.102 & 0.952 & 1.023 & 1.117 1.00 & 0.971 & 1.016 & 1.080 & 0.962 & 1.020 & 1.097 1.10 & 0.967 & 1.010 & 1.071 & 0.952 & 1.018 & 1.093 1.20 & 0.959 & 1.009 & 1.065 & 0.960 & 1.009 & 1.069 1.30 & 0.972 & 1.004 & 1.051 & 0.958 & 1.006 & 1.054 1.40 & 0.986 & 1.003 & 1.025 & 0.977 & 1.002 & 1.028 1.50 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 & 1.000 cccccccccc *r*/*R*50& & & &  − *σ* & $x({\rm N}2\_{50})$&  + *σ* &  − *σ* & $x({\rm N}2\_{50})$&  + *σ* &  − *σ* & $x({\rm N}2\_{50})$&  + *σ* 0.00 & 0.005 & 0.075 & 0.249 & 0.031 & 0.087 & 0.152 & 0.006 & 0.081 & 0.221 0.10 & 0.006 & 0.074 & 0.254 & 0.032 & 0.091 & 0.148 & 0.008 & 0.082 & 0.223 0.20 & 0.006 & 0.074 & 0.255 & 0.033 & 0.090 & 0.151 & 0.008 & 0.082 & 0.223 0.30 & 0.004 & 0.072 & 0.248 & 0.034 & 0.082 & 0.165 & 0.007 & 0.080 & 0.200 0.40 & 0.001 & 0.072 & 0.226 & 0.034 & 0.072 & 0.163 & 0.004 & 0.072 & 0.189 0.50 & 0.000 & 0.064 & 0.196 & 0.028 & 0.062 & 0.143 & 0.005 & 0.063 & 0.160 0.60 & 0.001 & 0.053 & 0.179 & 0.010 & 0.058 & 0.118 & 0.006 & 0.056 & 0.129 0.70 & -0.003 & 0.042 & 0.161 & 0.009 & 0.054 & 0.094 & 0.004 & 0.044 & 0.124 0.80 & -0.008 & 0.036 & 0.127 & 0.010 & 0.051 & 0.087 & -0.000 & 0.039 & 0.114 0.90 & -0.011 & 0.030 & 0.113 & 0.010 & 0.044 & 0.080 & -0.002 & 0.034 & 0.101 1.00 & -0.010 & 0.025 & 0.102 & 0.006 & 0.042 & 0.075 & 0.002 & 0.031 & 0.090 1.10 & -0.005 & 0.024 & 0.092 & 0.004 & 0.043 & 0.070 & 0.002 & 0.028 & 0.077 1.20 & -0.005 & 0.022 & 0.083 & 0.003 & 0.039 & 0.068 & -0.002 & 0.025 & 0.072 1.30 & -0.004 & 0.019 & 0.070 & 0.001 & 0.032 & 0.067 & -0.002 & 0.022 & 0.069 1.40 & -0.005 & 0.018 & 0.059 & 0.000 & 0.025 & 0.060 & -0.002 & 0.019 & 0.059 1.50 & -0.005 & 0.016 & 0.049 & 0.001 & 0.019 & 0.054 & -0.002 & 0.016 & 0.050 1.60 & -0.002 & 0.012 & 0.039 & 0.000 & 0.013 & 0.048 & -0.001 & 0.013 & 0.041 1.70 & -0.003 & 0.011 & 0.033 & -0.000 & 0.010 & 0.036 & -0.002 & 0.011 & 0.035 1.80 & -0.002 & 0.009 & 0.023 & -0.001 & 0.008 & 0.030 & -0.002 & 0.008 & 0.024 1.90 & -0.001 & 0.007 & 0.019 & -0.000 & 0.007 & 0.026 & -0.001 & 0.007 & 0.021 2.00 & -0.002 & 0.005 & 0.017 & -0.004 & 0.006 & 0.021 & -0.003 & 0.006 & 0.019 2.10 & -0.004 & 0.004 & 0.014 & -0.005 & 0.005 & 0.019 & -0.004 & 0.005 & 0.014 2.20 & -0.002 & 0.003 & 0.010 & -0.004 & 0.003 & 0.017 & -0.003 & 0.003 & 0.011 2.30 & -0.002 & 0.002 & 0.008 & -0.003 & 0.002 & 0.009 & -0.002 & 0.002 & 0.008 2.40 & -0.001 & 0.001 & 0.004 & -0.001 & 0.001 & 0.003 & -0.001 & 0.001 & 0.004 2.50 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 cccccccccc *r*/$R\_{\rm eff}$& & & &  − *σ* & $x({\rm N}2\_{\rm eff})$&  + *σ* &  − *σ* & $x({\rm N}2\_{\rm eff})$&  + *σ* &  − *σ* & $x({\rm N}2\_{\rm eff})$&  + *σ* 0.00 & 0.008 & 0.073 & 0.252 & 0.004 & 0.066 & 0.149 & 0.006 & 0.072 & 0.170 0.10 & 0.007 & 0.076 & 0.259 & 0.005 & 0.074 & 0.150 & 0.006 & 0.075 & 0.178 0.20 & 0.008 & 0.068 & 0.236 & 0.001 & 0.062 & 0.141 & 0.005 & 0.064 & 0.171 0.30 & 0.002 & 0.052 & 0.169 & -0.005 & 0.049 & 0.120 & 0.001 & 0.051 & 0.150 0.40 & -0.006 & 0.045 & 0.127 & -0.007 & 0.044 & 0.093 & -0.006 & 0.044 & 0.104 0.50 & -0.010 & 0.032 & 0.102 & -0.004 & 0.039 & 0.083 & -0.008 & 0.033 & 0.092 0.60 & -0.010 & 0.023 & 0.074 & 0.001 & 0.036 & 0.073 & -0.007 & 0.029 & 0.074 0.70 & -0.009 & 0.020 & 0.063 & 0.001 & 0.030 & 0.065 & -0.008 & 0.021 & 0.064 0.80 & -0.008 & 0.017 & 0.048 & -0.007 & 0.022 & 0.056 & -0.007 & 0.017 & 0.051 0.90 & -0.008 & 0.016 & 0.034 & -0.008 & 0.018 & 0.054 & -0.008 & 0.016 & 0.041 1.00 & -0.005 & 0.012 & 0.030 & -0.008 & 0.013 & 0.041 & -0.006 & 0.013 & 0.032 1.10 & -0.003 & 0.009 & 0.024 & -0.005 & 0.009 & 0.035 & -0.004 & 0.009 & 0.026 1.20 & -0.004 & 0.005 & 0.017 & -0.003 & 0.006 & 0.022 & -0.003 & 0.006 & 0.017 1.30 & -0.002 & 0.003 & 0.010 & -0.001 & 0.003 & 0.014 & -0.002 & 0.003 & 0.012 1.40 & -0.001 & 0.002 & 0.006 & -0.001 & 0.002 & 0.007 & -0.001 & 0.002 & 0.006 1.50 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 ccccccc *r*/*R*50& & &  − *σ* & $x({\rm N}2\_{50})$&  + *σ* &  − *σ* & $x({\rm N}2\_{50})$&  + *σ* 0.00 & 0.006 & 0.075 & 0.145 & 0.013 & 0.081 & 0.258 0.10 & 0.003 & 0.076 & 0.147 & 0.014 & 0.085 & 0.266 0.20 & 0.004 & 0.076 & 0.146 & 0.014 & 0.083 & 0.269 0.30 & 0.004 & 0.075 & 0.141 & 0.014 & 0.084 & 0.264 0.40 & 0.008 & 0.064 & 0.132 & 0.002 & 0.083 & 0.249 0.50 & 0.005 & 0.057 & 0.129 & 0.004 & 0.071 & 0.218 0.60 & 0.004 & 0.047 & 0.120 & 0.008 & 0.059 & 0.191 0.70 & -0.003 & 0.041 & 0.096 & 0.007 & 0.050 & 0.167 0.80 & -0.002 & 0.038 & 0.088 & 0.003 & 0.042 & 0.138 0.90 & -0.003 & 0.035 & 0.080 & 0.002 & 0.033 & 0.122 1.00 & 0.001 & 0.033 & 0.072 & 0.002 & 0.031 & 0.111 1.10 & 0.001 & 0.030 & 0.065 & 0.001 & 0.027 & 0.099 1.20 & 0.001 & 0.027 & 0.063 & -0.007 & 0.023 & 0.083 1.30 & -0.000 & 0.023 & 0.057 & -0.006 & 0.020 & 0.069 1.40 & -0.001 & 0.019 & 0.052 & -0.005 & 0.018 & 0.059 1.50 & -0.001 & 0.016 & 0.051 & -0.006 & 0.017 & 0.050 1.60 & -0.001 & 0.013 & 0.041 & -0.001 & 0.015 & 0.042 1.70 & -0.002 & 0.010 & 0.033 & -0.005 & 0.013 & 0.035 1.80 & -0.001 & 0.008 & 0.024 & -0.004 & 0.010 & 0.029 1.90 & -0.001 & 0.006 & 0.020 & -0.002 & 0.008 & 0.024 2.00 & -0.002 & 0.006 & 0.016 & -0.003 & 0.006 & 0.019 2.10 & -0.003 & 0.005 & 0.013 & -0.005 & 0.004 & 0.016 2.20 & -0.002 & 0.003 & 0.010 & -0.003 & 0.003 & 0.012 2.30 & -0.002 & 0.002 & 0.007 & -0.002 & 0.002 & 0.008 2.40 & -0.001 & 0.001 & 0.004 & -0.001 & 0.001 & 0.004 2.50 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 ccccccc *r*/$R\_{\rm eff}$& & &  − *σ* & $x({\rm N}2\_{\rm eff})$&  + *σ* &  − *σ* & $x({\rm N}2\_{\rm eff})$&  + *σ* 0.00 & 0.003 & 0.068 & 0.148 & 0.009 & 0.077 & 0.293 0.10 & 0.003 & 0.073 & 0.151 & 0.007 & 0.078 & 0.300 0.20 & 0.004 & 0.058 & 0.134 & 0.008 & 0.077 & 0.249 0.30 & 0.000 & 0.049 & 0.113 & -0.000 & 0.056 & 0.188 0.40 & -0.005 & 0.044 & 0.092 & -0.008 & 0.042 & 0.131 0.50 & -0.002 & 0.039 & 0.085 & -0.012 & 0.029 & 0.101 0.60 & 0.003 & 0.033 & 0.074 & -0.012 & 0.020 & 0.074 0.70 & 0.003 & 0.030 & 0.065 & -0.012 & 0.019 & 0.064 0.80 & -0.002 & 0.022 & 0.055 & -0.010 & 0.015 & 0.048 0.90 & -0.005 & 0.017 & 0.051 & -0.009 & 0.013 & 0.031 1.00 & -0.003 & 0.015 & 0.039 & -0.007 & 0.012 & 0.026 1.10 & -0.001 & 0.010 & 0.034 & -0.006 & 0.008 & 0.019 1.20 & -0.002 & 0.006 & 0.023 & -0.005 & 0.005 & 0.014 1.30 & -0.001 & 0.004 & 0.016 & -0.003 & 0.002 & 0.008 1.40 & -0.001 & 0.002 & 0.007 & -0.001 & 0.001 & 0.005 1.50 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 cccccccccc *r*/*R*50& & & &  − *σ* & $x({\rm O}3{\rm N}2\_{50})$&  + *σ* &  − *σ* & $x({\rm O}3{\rm N}2\_{50})$&  + *σ* &  − *σ* & $x({\rm O}3{\rm N}2\_{50})$&  + *σ* 0.00 & -0.275 & -0.091 & 0.083 & -0.459 & -0.234 & -0.084 & -0.368 & -0.127 & 0.063 0.10 & -0.269 & -0.098 & 0.090 & -0.459 & -0.254 & -0.080 & -0.385 & -0.133 & 0.061 0.20 & -0.274 & -0.095 & 0.092 & -0.465 & -0.250 & -0.080 & -0.379 & -0.128 & 0.060 0.30 & -0.268 & -0.081 & 0.085 & -0.448 & -0.225 & -0.088 & -0.348 & -0.123 & 0.063 0.40 & -0.241 & -0.089 & 0.077 & -0.415 & -0.196 & -0.100 & -0.299 & -0.125 & 0.061 0.50 & -0.217 & -0.080 & 0.078 & -0.371 & -0.199 & -0.066 & -0.251 & -0.111 & 0.052 0.60 & -0.196 & -0.073 & 0.064 & -0.333 & -0.172 & -0.058 & -0.225 & -0.094 & 0.044 0.70 & -0.194 & -0.067 & 0.053 & -0.304 & -0.140 & -0.046 & -0.210 & -0.089 & 0.030 0.80 & -0.193 & -0.060 & 0.038 & -0.280 & -0.133 & -0.034 & -0.208 & -0.078 & 0.022 0.90 & -0.182 & -0.067 & 0.033 & -0.252 & -0.118 & -0.038 & -0.208 & -0.074 & 0.019 1.00 & -0.175 & -0.062 & 0.024 & -0.222 & -0.102 & -0.053 & -0.195 & -0.069 & 0.013 1.10 & -0.166 & -0.058 & 0.023 & -0.206 & -0.093 & -0.052 & -0.182 & -0.073 & 0.008 1.20 & -0.153 & -0.055 & 0.018 & -0.188 & -0.089 & -0.051 & -0.167 & -0.069 & -0.002 1.30 & -0.139 & -0.049 & 0.009 & -0.171 & -0.070 & -0.040 & -0.156 & -0.061 & -0.003 1.40 & -0.125 & -0.050 & 0.007 & -0.167 & -0.055 & -0.026 & -0.128 & -0.051 & -0.006 1.50 & -0.114 & -0.046 & 0.003 & -0.128 & -0.047 & -0.017 & -0.117 & -0.047 & -0.002 1.60 & -0.105 & -0.039 & 0.001 & -0.095 & -0.043 & -0.011 & -0.096 & -0.040 & -0.003 1.70 & -0.094 & -0.032 & -0.000 & -0.082 & -0.040 & -0.003 & -0.087 & -0.032 & -0.001 1.80 & -0.077 & -0.029 & 0.002 & -0.075 & -0.027 & -0.001 & -0.075 & -0.028 & 0.001 1.90 & -0.067 & -0.025 & 0.004 & -0.068 & -0.019 & -0.001 & -0.068 & -0.024 & 0.003 2.00 & -0.055 & -0.021 & 0.005 & -0.063 & -0.009 & -0.001 & -0.057 & -0.020 & 0.004 2.10 & -0.046 & -0.017 & 0.003 & -0.054 & -0.006 & 0.002 & -0.048 & -0.016 & 0.003 2.20 & -0.040 & -0.011 & 0.003 & -0.042 & -0.007 & 0.003 & -0.040 & -0.011 & 0.003 2.30 & -0.029 & -0.008 & 0.005 & -0.025 & -0.007 & 0.004 & -0.026 & -0.008 & 0.004 2.40 & -0.013 & -0.005 & 0.001 & -0.012 & -0.004 & 0.002 & -0.013 & -0.004 & 0.002 2.50 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 cccccccccc *r*/$R\_{\rm eff}$& & & &  − *σ* & $x({\rm O}3{\rm N}2\_{\rm eff})$&  + *σ* &  − *σ* & $x({\rm O}3{\rm N}2\_{\rm eff})$&  + *σ* &  − *σ* & $x({\rm O}3{\rm N}2\_{\rm eff})$&  + *σ* 0.00 & -0.291 & -0.092 & 0.154 & -0.434 & -0.214 & -0.082 & -0.338 & -0.140 & 0.072 0.10 & -0.300 & -0.093 & 0.157 & -0.452 & -0.224 & -0.081 & -0.355 & -0.145 & 0.071 0.20 & -0.278 & -0.079 & 0.146 & -0.422 & -0.204 & -0.085 & -0.310 & -0.137 & 0.081 0.30 & -0.258 & -0.093 & 0.103 & -0.396 & -0.186 & -0.079 & -0.264 & -0.117 & 0.074 0.40 & -0.229 & -0.067 & 0.093 & -0.324 & -0.155 & -0.058 & -0.234 & -0.101 & 0.020 0.50 & -0.201 & -0.065 & 0.051 & -0.278 & -0.123 & -0.061 & -0.213 & -0.091 & 0.009 0.60 & -0.189 & -0.063 & 0.057 & -0.219 & -0.095 & -0.057 & -0.192 & -0.081 & -0.016 0.70 & -0.166 & -0.070 & 0.025 & -0.184 & -0.086 & -0.043 & -0.172 & -0.079 & -0.001 0.80 & -0.152 & -0.070 & 0.004 & -0.153 & -0.081 & -0.014 & -0.152 & -0.070 & 0.003 0.90 & -0.123 & -0.067 & 0.014 & -0.120 & -0.063 & 0.005 & -0.123 & -0.065 & 0.010 1.00 & -0.104 & -0.056 & 0.015 & -0.104 & -0.048 & -0.007 & -0.102 & -0.050 & 0.005 1.10 & -0.088 & -0.038 & 0.011 & -0.077 & -0.028 & -0.008 & -0.085 & -0.035 & 0.000 1.20 & -0.058 & -0.028 & 0.001 & -0.047 & -0.023 & -0.005 & -0.057 & -0.024 & -0.003 1.30 & -0.034 & -0.015 & 0.008 & -0.051 & -0.011 & 0.000 & -0.036 & -0.014 & 0.004 1.40 & -0.020 & -0.007 & 0.014 & -0.016 & -0.006 & 0.001 & -0.020 & -0.006 & 0.007 1.50 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 ccccccc *r*/*R*50& & &  − *σ* & $x({\rm O}3{\rm N}2\_{50})$&  + *σ* &  − *σ* & $x({\rm O}3{\rm N}2\_{50})$&  + *σ* 0.00 & -0.445 & -0.228 & -0.024 & -0.203 & -0.055 & 0.139 0.10 & -0.459 & -0.237 & -0.021 & -0.209 & -0.059 & 0.146 0.20 & -0.462 & -0.232 & -0.022 & -0.204 & -0.059 & 0.144 0.30 & -0.437 & -0.215 & -0.004 & -0.199 & -0.050 & 0.134 0.40 & -0.403 & -0.196 & -0.018 & -0.196 & -0.057 & 0.120 0.50 & -0.362 & -0.188 & -0.042 & -0.187 & -0.038 & 0.117 0.60 & -0.333 & -0.157 & -0.023 & -0.173 & -0.026 & 0.087 0.70 & -0.307 & -0.131 & -0.010 & -0.179 & -0.037 & 0.074 0.80 & -0.283 & -0.111 & -0.027 & -0.186 & -0.044 & 0.044 0.90 & -0.255 & -0.107 & -0.031 & -0.172 & -0.045 & 0.042 1.00 & -0.221 & -0.097 & -0.028 & -0.158 & -0.055 & 0.027 1.10 & -0.204 & -0.089 & -0.024 & -0.153 & -0.051 & 0.024 1.20 & -0.190 & -0.078 & -0.026 & -0.151 & -0.049 & 0.018 1.30 & -0.174 & -0.065 & -0.029 & -0.135 & -0.047 & 0.010 1.40 & -0.148 & -0.055 & -0.025 & -0.121 & -0.046 & 0.007 1.50 & -0.125 & -0.049 & -0.013 & -0.108 & -0.045 & 0.008 1.60 & -0.111 & -0.043 & -0.006 & -0.095 & -0.039 & 0.009 1.70 & -0.100 & -0.040 & -0.006 & -0.084 & -0.032 & 0.012 1.80 & -0.086 & -0.030 & -0.010 & -0.074 & -0.026 & 0.005 1.90 & -0.073 & -0.028 & -0.004 & -0.062 & -0.021 & 0.006 2.00 & -0.065 & -0.022 & -0.001 & -0.050 & -0.017 & 0.006 2.10 & -0.052 & -0.017 & 0.003 & -0.044 & -0.015 & 0.003 2.20 & -0.042 & -0.011 & 0.002 & -0.039 & -0.010 & 0.004 2.30 & -0.026 & -0.007 & 0.003 & -0.030 & -0.008 & 0.006 2.40 & -0.013 & -0.004 & 0.002 & -0.014 & -0.005 & 0.001 2.50 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 ccccccc *r*/$R\_{\rm eff}$& & &  − *σ* & $x({\rm O}3{\rm N}2\_{\rm eff})$&  + *σ* &  − *σ* & $x({\rm O}3{\rm N}2\_{\rm eff})$&  + *σ* 0.00 & -0.407 & -0.215 & 0.005 & -0.177 & -0.059 & 0.171 0.10 & -0.427 & -0.236 & 0.008 & -0.189 & -0.064 & 0.178 0.20 & -0.407 & -0.211 & 0.004 & -0.177 & -0.058 & 0.176 0.30 & -0.347 & -0.185 & 0.009 & -0.161 & -0.053 & 0.121 0.40 & -0.314 & -0.151 & -0.027 & -0.173 & -0.054 & 0.098 0.50 & -0.276 & -0.125 & -0.032 & -0.157 & -0.050 & 0.073 0.60 & -0.221 & -0.102 & -0.041 & -0.169 & -0.058 & 0.058 0.70 & -0.206 & -0.098 & -0.025 & -0.159 & -0.064 & 0.027 0.80 & -0.157 & -0.088 & 0.002 & -0.130 & -0.063 & 0.004 0.90 & -0.124 & -0.068 & 0.014 & -0.111 & -0.044 & 0.004 1.00 & -0.121 & -0.053 & 0.004 & -0.094 & -0.042 & 0.012 1.10 & -0.084 & -0.033 & -0.000 & -0.084 & -0.035 & -0.000 1.20 & -0.056 & -0.024 & -0.003 & -0.059 & -0.029 & -0.002 1.30 & -0.045 & -0.013 & 0.004 & -0.034 & -0.014 & 0.005 1.40 & -0.018 & -0.007 & 0.009 & -0.020 & -0.005 & 0.006 1.50 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 & 0.000 lcrrrrrr Aperture & Selection & *a*0 & *a*1 & *a*2 & *a*3 & *a*4 & *a*5 correction & criterion & & & & & & *x*(*α*50)& All & 0.0000 & 0.1599 & 0.5027 &  − 0.2276 & 0.0167 & 0.0037 & Sa-Sbc & 0.0000 & 0.1588 & 0.5059 &  − 0.2649 & 0.0459 &  − 0.0022 & Sc-Sdm & 0.0000 & 0.1192 & 0.6799 &  − 0.3951 & 0.0812 &  − 0.0056 & log*M*\*/*M*⊙ ≤ 10.3 & 0.0000 & 0.1661 & 0.4937 &  − 0.2044 & 0.0024 & 0.0061 & 10.3 ≤ log*M*\*/*M*⊙ & 0.0000 & 0.1691 & 0.4630 &  − 0.1956 & 0.0079 & 0.0044 *x*(*α**β*50)& All & 1.1253 & 0.0274 &  − 0.3125 & 0.2959 &  − 0.1091 & 0.0143 & Sa-Sbc & 1.1453 & 0.1087 &  − 0.4826 & 0.4127 &  − 0.1432 & 0.0179 & Sc-Sdm & 1.0912 & 0.0346 &  − 0.2795 & 0.2680 &  − 0.1026 & 0.0142 & log*M*\*/*M*⊙ ≤ 10.3 & 1.1053 &  − 0.0108 &  − 0.2144 & 0.2275 &  − 0.0916 & 0.0132 & 10.3 ≤ log*M*\*/*M*⊙ & 1.1801 & 0.0548 &  − 0.4810 & 0.4594 &  − 0.1721 & 0.0229 $x({\rm N}2\_{50})$& All & 0.0812 & 0.0477 &  − 0.2779 & 0.2715 &  − 0.1075 & 0.0153 & Sa-Sbc & 0.0809 & 0.0843 &  − 0.3771 & 0.3646 &  − 0.1439 & 0.0204 & Sc-Sdm & 0.0923 &  − 0.0394 &  − 0.0350 & 0.0353 &  − 0.0137 & 0.0021 & log*M*\*/*M*⊙ ≤ 10.3 & 0.0769 & 0.0129 &  − 0.1737 & 0.1749 &  − 0.0705 & 0.0102 & 10.3 ≤ log*M*\*/*M*⊙ & 0.1021 & 0.0944 &  − 0.4838 & 0.4857 &  − 0.1957 & 0.0281 $x({\rm O}3{\rm N}2\_{50})$& All &  − 0.1299 &  − 0.0366 & 0.2645 &  − 0.2624 & 0.1092 &  − 0.0164 & Sa-Sbc &  − 0.0947 & 0.0003 & 0.1096 &  − 0.1220 & 0.0561 &  − 0.0090 & Sc-Sdm &  − 0.2433 &  − 0.0711 & 0.5757 &  − 0.5465 & 0.2132 &  − 0.0304 & log*M*\*/*M*⊙ ≤ 10.3 &  − 0.2318 &  − 0.0840 & 0.6316 &  − 0.6166 & 0.2423 &  − 0.0342 & 10.3 ≤ log*M*\*/*M*⊙ &  − 0.0616 & 0.0597 &  − 0.0420 &  − 0.0329 & 0.0401 &  − 0.0090 $x(\alpha\_{\rm eff})$& All & 0.0000 & 0.2469 & 1.8195 &  − 1.9620 & 0.8182 &  − 0.1297 & Sa-Sbc & 0.0000 & 0.2338 & 2.0857 &  − 2.6054 & 1.3520 &  − 0.2758 & Sc-Sdm & 0.0000 & 0.3275 & 1.0776 &  − 0.4843 &  − 0.2743 & 0.1458 & log*M*\*/*M*⊙ ≤ 10.3 & 0.0000 & 0.1685 & 2.4921 &  − 3.2276 & 1.7208 &  − 0.3527 & 10.3 ≤ log*M*\*/*M*⊙ & 0.0000 & 0.1504 & 2.0411 &  − 2.1685 & 0.9098 &  − 0.1455 $x(\alpha\beta\_{\rm eff})$& All & 1.1100 & 0.0648 &  − 0.8553 & 1.3690 &  − 0.8714 & 0.1986 & Sa-Sbc & 1.1153 & 0.0497 &  − 0.6904 & 1.0995 &  − 0.7145 & 0.1672 & Sc-Sdm & 1.1027 & 0.0477 &  − 0.8742 & 1.5044 &  − 1.0126 & 0.2426 & log*M*\*/*M*⊙ ≤ 10.3 & 1.0977 &  − 0.0051 &  − 0.6129 & 1.1462 &  − 0.8099 & 0.2003 & 10.3 ≤ log*M*\*/*M*⊙ & 1.1474 &  − 0.0245 &  − 0.5907 & 0.8897 &  − 0.5065 & 0.1027 $x({\rm N}2\_{\rm eff})$& All & 0.0729 & 0.0320 &  − 0.5459 & 0.9304 &  − 0.6278 & 0.1508 & Sa-Sbc & 0.0771 & 0.0762 &  − 0.7109 & 1.1317 &  − 0.7310 & 0.1698 & Sc-Sdm & 0.0690 & 0.0123 &  − 0.3467 & 0.5777 &  − 0.3970 & 0.0991 & log*M*\*/*M*⊙ ≤ 10.3 & 0.0706 &  − 0.0218 &  − 0.2287 & 0.4115 &  − 0.2904 & 0.0735 & 10.3 ≤ log*M*\*/*M*⊙ & 0.0837 & 0.1841 &  − 1.3430 & 2.2246 &  − 1.4944 & 0.3581 $x({\rm O}3{\rm N}2\_{\rm eff})$& All &  − 0.1420 &  − 0.0977 & 1.0274 &  − 1.8753 & 1.4059 &  − 0.3702 & Sa-Sbc &  − 0.0920 &  − 0.0559 & 0.6785 &  − 1.4892 & 1.2567 &  − 0.3538 & Sc-Sdm &  − 0.2147 &  − 0.2356 & 1.8838 &  − 2.9887 & 1.9851 &  − 0.4784 & log*M*\*/*M*⊙ ≤ 10.3 &  − 0.2192 &  − 0.2647 & 2.1138 &  − 3.5033 & 2.4170 &  − 0.5995 & 10.3 ≤ log*M*\*/*M*⊙ &  − 0.0615 & 0.0185 & 0.0830 &  − 0.3739 & 0.4270 &  − 0.1386 cccccc Ang. Diam. & Lin. Diam. & *z* &  − *σ* & ($\alpha\_{\rm Ang}$/$\alpha\_{\rm Lin}$)50 &  + *σ* (``) & (kpc) & & & & 3.0 & 10. & 0.010 & 0.030 & 0.061 & 0.207 3.0 & 10. & 0.011 & 0.030 & 0.060 & 0.206 3.0 & 10. & 0.012 & 0.030 & 0.059 & 0.205 3.0 & 10. & 0.013 & 0.029 & 0.058 & 0.204 3.0 & 10. & 0.014 & 0.029 & 0.057 & 0.204 3.0 & 10. & 0.015 & 0.029 & 0.056 & 0.203  † Where the 3“ aperture is for SDSS and the 15” aperture is for SAMI. cccccc Ang. Diam. & Lin. Diam. & *z* &  − *σ* & ($\alpha\beta\_{\rm Ang}$/$\alpha\beta\_{\rm Lin}$)50 &  + *σ* (``) & (kpc) & & & & 3.0 & 10. & 0.010 & 0.985 & 1.120 & 1.287 3.0 & 10. & 0.011 & 0.985 & 1.120 & 1.288 3.0 & 10. & 0.012 & 0.985 & 1.120 & 1.288 3.0 & 10. & 0.013 & 0.984 & 1.120 & 1.289 3.0 & 10. & 0.014 & 0.984 & 1.120 & 1.289 3.0 & 10. & 0.015 & 0.984 & 1.120 & 1.290  † Where the 3“ aperture is for SDSS and the 15” aperture is for SAMI. cccccc Ang. Diam. & Lin. Diam. & *z* &  − *σ* & (N2$\_{\rm Ang}$  −  N2$\_{\rm Lin}$)50 &  + *σ* (``) & (kpc) & & & & 3.0 & 10. & 0.010 & 0.012 & 0.079 & 0.234 3.0 & 10. & 0.011 & 0.011 & 0.079 & 0.234 3.0 & 10. & 0.012 & 0.011 & 0.079 & 0.234 3.0 & 10. & 0.013 & 0.011 & 0.079 & 0.234 3.0 & 10. & 0.014 & 0.011 & 0.079 & 0.234 3.0 & 10. & 0.015 & 0.011 & 0.080 & 0.233  † Where the 3“ aperture is for SDSS and the 15” aperture is for SAMI. cccccc Ang. Diam. & Lin. Diam. & *z* &  − *σ* & (O3N2$\_{\rm Ang}$  −  O3N2$\_{\rm Lin}$)50
arxiv_0000059
**Gravitational multipole momentsfor asymptotically de Sitter spacetimes** ========================================================================== We provide a prescription to compute the gravitational multipole moments of compact objects for asymptotically de Sitter spacetimes. Our prescription builds upon a recent definition of the gravitational multipole moments in terms of Noether charges associated to specific vector fields, within the residual harmonic gauge, dubbed multipole symmetries. We first derive the multipole symmetries for spacetimes which are asymptotically de Sitter; we also show that these symmetry vector fields eliminate the non-propagating degrees of freedom from the linearized gravitational wave equation in a suitable gauge. We then apply our prescription to the Kerr-de Sitter black hole and compute its multipole structure. Our result recovers the Geroch-Hansen moments of the Kerr black hole in the limit of vanishing cosmological constant. Introduction and summary of the results ======================================= The multipole moments associated with a gravitational field have always been relevant and important in the study of various solutions arising from General Relativity, since its early days. These studies involving multipole moments have impacted many areas of research ranging from mathematical physics to astrophysics. The study of multipole moments in various contexts has become even more timely, since the discovery of gravitational waves. The observation of gravitational waves from coalescence of binary compact objects can have potential applications in order to address questions about the nature of compact objects, such as black holes, neutron stars and the binary systems thereof. Future space-based gravitational wave detectors, besides mass and spin, will be also able to measure the quadrupole mass moment of a supermassive object in a binary system with great accuracy, as well as put bounds on four leading order multipole moments in the high mass ratio ( ∼ 10) limit. In General Relativity, the gravitational field is decomposed in two sets of multipole moments — the mass and the spin moments. Using the Penrose’s conformal completion technique, a geometrical definition of the multipole moments for static, asymptotically flat spacetimes was pioneered by Geroch. Later, Geroch’s definition was generalised to the stationary case by Hansen. Subsequently, Beig, Simon and Kundu had developed further properties of the multipole moments for stationary and asymptotically flat spacetimes. On the other hand, imposing no incoming radiation for linearized radiating gravitational fields, Thorne provided a definition of multipole moments for asymptotically flat spacetimes. Thorne’s moments are defined within the harmonic gauge, upon further fixing the residual gauge, where the gravitational field is expanded in spherical harmonics and the multipole moments are read from such an expansion. Thorne’s definition applies also to stationary non-linear configurations in General Relativity and it is shown to be equivalent to the Geroch–Hansen’s definition for stationary spacetimes, up to a choice of normalization. More recent developments about gravitational multipole moments in asymptotically flat spacetime can be found in. Intriguingly, the multipole moments of a black hole have very simple structures: *e.g.*, the multipole moments of a Kerr black hole only depends on its mass and spin. However, such is not the situation for neutron stars or for the exotic compact objects. The multipole structures of these objects are much more complex and are very much distinct from one another. Thus the study of the multipole structure of a compact object may reveal its true nature and interesting properties. For example, for a Kerr black hole all the odd mass moments and even spin moments are identically zero, however for certain fuzzball states odd mass moments and even spin moments do exist. Therefore, if the gravitational wave observations tell us that the odd mass moments and even spin moments of the merging compact objects are indeed non-zero, then the fuzzball models will become particularly relevant. This shows the importance of understanding the multipole moments of compact objects in greater detail. As emphasized earlier, the multipole moments of gravitational fields are well understood for asymptotically flat spacetimes, however they remain largely unexplored for asymptotically non-flat and in particular for asymptotically de Sitter spacetimes. This is because the Geroch-Hansen as well as the formalism of Thorne depend crucially on the asymptotic flatness of the spacetime. Though there are instances where the formalism can be applied for asymptotically non-flat spacetimes: this is the case for spacetimes with NUT charge, because the relevant codimension-1 hypersurfaces are asymptotically flat even in the presence of NUT charge and the Geroch-Hansen formalism is readily applicable. On the other hand, for asymptotically de Sitter spacetimes these codimension-1 hypersurfaces are non-flat, rendering the known analysis of multipole moments inapplicable. The aim of this work is to start filling this gap in the literature and provide a definition of gravitational multipole moments for asymptotically de Sitter spacetimes. The mass and spin multipole moments have been recently defined for a generic metric theory of gravity in terms of the Noether charges associated with the so-called multipole symmetries,[4](#fn4) that are specific residual transformations in the harmonic gauge. This definition also agrees with the previous results of Thorne for linearized radiating spacetimes and the Geroch-Hansen formalism for stationary spacetimes. At this outset, let us briefly summarize the main strategy to compute multipole moments in. The main idea is to extract the multipole moments from the radial expansion of the metric tensor. For asymptotically flat spacetimes, following Thorne, one has to reach the harmonic gauge as explained earlier. Upon fixing the harmonic gauge, one is left with its residual gauge transformations. In, it was demonstrated that the multipole symmetries are actually residual gauge transformations preserving the asymptotic behaviour of the lapse function and shift vector. The next step in the analysis of is to compute the Noether charges associated with the multipole symmetries. Explicit expressions for the surface charges in the four-dimensional General Relativity can be found in. The multipole moments of the solution are identified from the Noether charges upon a regularization procedure. In this paper, we wish to extend the method of, as outlined above, for asymptotically de Sitter spacetimes. More specifically, we achieve three main results — (a) Our first result is to write down the de Sitter spacetime in harmonic gauge; surprisingly, de Sitter spacetime was never written in the harmonic gauge before, to the best of our knowledge. Expressing the de Sitter spacetime in the harmonic gauge is instrumental to derive the residual harmonic gauge transformations and, among these, the multipole symmetries preserving certain fall-off conditions for asymptotically de Sitter spacetimes. These fall-off conditions are such that they preserve the lapse function and the shift vector, analogously as in. (b) We also provide an alternative derivation of the multipole symmetries to be those residual gauge transformations that eliminate the non-propagating degrees of freedom in the linearized outgoing wave solutions around de Sitter background. This different interpretation of multipole symmetries sheds more light over the physics of the multipole symmetries and complements the original derivation carried out in. (c) Finally, in order to apply our findings on a concrete example, we compute the multipole structure of the Kerr-de Sitter black hole. In particular, we provide explicit computation of the first few mass and spin multipole moments. Our expressions reproduce the well-known mass and angular momentum of the Kerr-de Sitter black hole and, in the limit of asymptotically flat spacetime, we recover the Geroch-Hansen’s multipole moments of the Kerr black hole. The paper is organized as follows. In section [dS], we briefly present different coordinate systems for the de Sitter spacetime, which will be relevant for our purpose. Section [Harmonic] deals with writing down the de Sitter metric in the harmonic gauge. In section [Sec.4], exploiting the residual harmonic gauge freedom, we obtain the multipole symmetries for asymptotically de Sitter spacetimes. In section [sec:5], we provide the linearized gravitational wave equation in de Sitter background, and we obtain the multipole symmetries from the residual gauge transformations of the linearized theory. Finally, in section [Sec:KdS], we compute the multipole moments of the Kerr de Sitter spacetime. We conclude with a summary and perspectives for future works. Two appendices contain — (a) the technical derivation of converting the de Sitter metric to harmonic coordinates and (b) the exact analytic expressions of the first few mass and spin multipole moments of the Kerr-de Sitter black hole. *Notation and conventions:* We adopt the mostly positive signature convention, *i.e.*, the Minkowski metric in the Cartesian coordinate system takes the form: diag( − ,  + ,  + ,  + ). The Greek indices run over all the spacetime coordinates, while the Latin indices run over the spatial coordinates. Furthermore, we set the fundamental constants, such that *c* = 1 = *G*. Brief review of the de Sitter spacetime ======================================= We briefly review the de Sitter spacetime to point out the key features and the coordinate charts we will be using in this work. Unlike the Minkowski spacetime, which admits a natural, global Cartesian coordinate chart, the de Sitter spacetime has several charts appropriate for different situations — the global, Poincaré and static patches. Furthermore, the de Sitter spacetime is a solution to the equations: *R**μ**ν* = Λ*g**μ**ν*, where Λ is referred to as the cosmological constant term. The de Sitter spacetime is most naturally defined as the hyperboloid in the five dimensional Minkowski spacetime, which also introduces a set of coordinates covering the full de Sitter spacetime, known as the *global patch*. The global patch is covered by the coordinates (*τ*, *χ*, *θ*, *ϕ*), and the de Sitter metric is given by [dSglobal] ds2=-d2+2(H) , where $H:=\sqrt{(\Lambda/3)}$ is the Hubble constant. As evident from the above metric, the global topology of the de Sitter spacetime is R × *S*3; see Fig. [GlobalChartFig]. In the global chart, the axial symmetry of the de Sitter spacetime is guaranteed by the Killing vector (∂/∂*ϕ*). However, unlike the Minkowski spacetime, (∂/∂*τ*) is not a Killing vector in the global coordinates. ABCD denotes the *global chart*, ABD is a *Poincaré patch*, while AED is a *static patch*. The angular coordinates, *θ* and *ϕ*, are suppressed. An observer, represented by the world line DA, has its causal future *J*+ spanning the region DBA and is one of the *Poincaré patches*. Its spacelike boundary, denoted by the line AB, is the *future “null” infinity*, ${\cal J}^+$. [GlobalChartFig] Among the other coordinate systems associated with the de Sitter spacetime, the *Poincaré patch*, which constitutes the causal future (past) of observers and covers “half” of the global chart is of much interest. As we will see, in the cosmological context and for our current purpose of studying compact sources in the de Sitter background, the Poincaré patch will turn out to be very useful. There are two natural coordinate charts for the Poincaré patch: (a) the *conformal chart* with coordinates (*η*, *x**i*), and (b) the *cosmological chart* with coordinates (*t*, *x**i*). In the conformal chart (*η*, *x**i*), the de Sitter metric takes the form $$\begin{aligned} \label{ConformallyFlat} ds^{2}=a^{2}(\eta)~\left(-d\eta^{2}+\delta\_{ij}dx^idx^{j}\right)~,\end{aligned}$$ where *a*(*η*) =  − (*H**η*)− 1 with *η* ∈ ( − ∞, 0). A drawback of the conformal coordinates is that they are not suitable for studying the flat limit Λ → 0 (or, equivalently *H* → 0); see, *e.g.*, (however also see, ). Thus one is led to adopt the cosmological coordinates, with the cosmic time *t* being related to the conformal time *η* via *η* :  =  − (1/*H*)*e*− *H**t*. In these coordinates (*t*, *x**i*), the line element for the de Sitter spacetime becomes $$\begin{aligned} \label{PlanarMetric} ds^2=-dt^2+e^{2Ht}\left(\delta\_{ij}dx^idx^{j}\right)~.\end{aligned}$$ This is the most well-known form for the de Sitter metric and it will be used extensively in this work. The Poincaré patch has a seven dimensional isometry group — *three* spatial rotations, *three* spatial translations and *one* time translation. In order to find out the Killing vector field of time translations, we let *t* → *t* + *δ**t* in [PlanarMetric], where *δ**t* is taken to be infinitesimal. This time translation changes the de Sitter line element and hence in order to make the de Sitter metric in the Poincaré patch invariant, we must have the following transformation for the spatial coordinates: *x**i* → *x**i* − *H**x**i**δ**t*. Hence, the Killing vector field generating the time translation in the cosmological coordinate system is *t**μ*∂*μ* = ∂*t* − *H**x**i*∂*i* . This vector will also play an important role in the study of multipole symmetries of de Sitter spacetime. The third patch, corresponding to “half” of the Poincaré patch, is referred to as the *static patch*; see Fig. [GlobalChartFig]. This is a natural patch for an isolated body or a black hole with a stationary neighbourhood. This patch of de Sitter spacetime can be covered by static coordinates (*T*, *R*, *θ*, *ϕ*). In these coordinates, the line element of the de Sitter spacetime becomes [dSstatic] ds2=-(1-H2R2) dT2 ++R2(d 2+2d2). The existence of two Killing vector fields, (∂/∂*T*) and (∂/∂*ϕ*), is clear from the line element above; correspondingly, the spacetime has both axial and time-translational symmetries. It should be emphasized that the Killing[5](#fn5) vector field (∂/∂*T*) becomes null on the cosmological horizon of the de Sitter spacetime located at *R* = *H*− 1. Let us conclude this section with a side remark about the global isometry group in the three patches of the de Sitter spacetime. The *ten* dimensional isometry group of the full de Sitter spacetime is reduced to a *seven* dimensional subgroup in the Poincaré patch, and to a *four* dimensional subgroup in the static patch. The symmetry reduction of the global isometry group can be best understood along the following lines: the null hyperplane BD (see Fig. [GlobalChartFig]) can be thought of as adding an additional boundary to the full de Sitter spacetime. Hence, the symmetry generators which are not tangential to BD are absent in the Poincaré patch. Similarly for the static patch, the Killing fields which are not tangential to the cosmological horizon are not symmetry generators in the static patch. de Sitter metric in the harmonic gauge ====================================== The harmonic gauge, also known as the de Donder gauge, plays a crucial role in solving the Einstein’s equations. The harmonic gauge is usually the favourite gauge where to read off the multipole moments from the spherical harmonic decomposition of the metric tensor. In addition, the residual harmonic gauge symmetry is intimately connected with the nature of the multipole moments of any compact object. Given the metric tensor *g**μ**ν*, the harmonic gauge condition is given by (g)=0 . To impose the harmonic gauge condition on the metric *g**μ**ν*, we first perform the coordinate transformation *x*ʹ*μ* = *f**μ*(*x*). Subsequently, imposing the harmonic gauge condition on the metric in the *x*ʹ*μ* coordinate system amounts to ’(g’ ’)= g’ x’=0 , where ▫*g*ʹ :  = *g**μ**ν*∇*μ*∇*ν*. Hence, choosing *each* coordinate *x*ʹ*μ* to be harmonic in the metric *g**μ**ν*, *i.e.*, satisfying ▫*g*ʹ*x*ʹ*μ* = 0, ensures that the harmonic gauge condition holds in the new coordinates. We will employ this procedure to transform the de Sitter metric to harmonic coordinates in all the three coordinate patches discussed in the previous section. Intriguingly, despite being one of the most familiar solution to the Einstein’s equations with a cosmological constant, to our knowledge, harmonic coordinates for de Sitter spacetime do *not* exist in the literature. Even though we can provide the transformation of all the three coordinate patches to harmonic coordinates, here we provide the explicit coordinate transformation from the cosmological coordinates to the harmonic coordinates, since it will be of relevance in what follows. We relegate to Appendix [App:harm] the explicit coordinate transformations that bring the de Sitter metric in global and static coordinates to the harmonic form. As discussed in the previous section, the Poincaré patch of the de Sitter spacetime can be described by the cosmological coordinates (*t*, *x**i*), with the associated line element given by [PlanarMetric], for which $$\sqrt{-g}g^{\mu \nu} = \mbox{diag}\left(-a(t)^{3}, a(t), a(t),a(t)\right),$$ where *a*(*t*) :  = *e**H**t*. Therefore, it follows that the cosmological coordinates are not harmonic since $\partial\_{\mu}(\sqrt{-g}g^{\mu 0})=-3H e^{3Ht}\neq 0$. In order to obtain the harmonic coordinates for de Sitter in the Poincaré patch, we introduce a new set of coordinates *x̄**α*, such that ▫*g**x̄**α* = 0, where ▫*g* is associated with the metric given in [PlanarMetric]. Expanding out the above harmonic gauge condition, we obtain the following differential equation for the new coordinates |x=0 . [9XII20.3] We now have to choose an appropriate coordinate transformation, such that the above differential equation can be satisfied. For our purpose it will suffice to perform the following coordinate transformation |x=x,|y=y,|z=z,|t=f(t) , where *f*(*t*) is an arbitrary function of the cosmic time. It is straightforward to verify that [9XII20.3] is indeed obeyed for all the spatial coordinates, while for the time coordinate it yields the following differential equation ()+3H()=0 . Solving this differential equation for the function *f*(*t*), equipped with the boundary conditions *f*(*t* = 0) = 0 and (*d**f*/*d**t*)(*t* = 0) = 1 (these boundary conditions are necessary to have a smooth flat spacetime limit), we obtain, |t:=f(t)=(1-e-3Ht) . Therefore, the line element of the Poincaré patch of the de Sitter spacetime takes the following form in the harmonic coordinates [dSCosmoHarmonic] ds2=- + (1-3H|t)-2/3(dx2+dy2+dz2). It can again be verified that, with somewhat long but straightforward algebra, that the de Sitter metric in the above coordinate system indeed satisfies the harmonic gauge condition. Having determined the harmonic coordinates for the de Sitter spacetime in the Poincaré patch, we will proceed to find out the residual gauge symmetry and the vector fields generating these symmetries. As we will demonstrate, these vector fields will be crucial in obtaining the multipole symmetries and hence the multipole moments of compact objects living in an asymptotically de Sitter spacetime. We will determine these multipole symmetry vector fields in the next section. Residual harmonic gauge and multipole symmetries ================================================ We are now in the position to discuss the residual harmonic gauge transformations and hence derive the vector fields from which the multipole moments, in terms of the Noether charges, can be associated. We choose an operational definition of asymptotically de Sitter spacetimes such that in harmonic gauge our metric matches with that of [PlanarMetric] in the asymptotic regime. For our purpose it will suffice to consider a constant cosmological time *t* such that the asymptotic regime corresponds to the *r* → ∞ limit, which is also identical to the $r\_{\rm phys}:= re^{Ht}\to \infty$. Since this corresponds to the point *B* in Fig. [GlobalChartFig], which is the analogue to spatial infinity *i*0 in asymptotically flat spacetime, we will use this limit to define the multipole moments. Therefore, following, here also we demand the following asymptotic conditions on the metric components [25XII20.1] g0=|g0+(1/rphys) Here *ḡ**μ**ν* denotes the background de Sitter spacetime. It should also be noticed that [25XII20.1] is a non-tensorial relation and hence must be used in the harmonic coordinates, such that *ḡ**μ**ν* satisfies the harmonic gauge condition (note that the radial distance remains the same in both cosmological and harmonic coordinates). The vector field *ξ**μ* will preserve this asymptotic conditions given in [25XII20.1] if it satisfies *asymptotically* the following relation [25XII20.2] £ |g0:= |g0+|g 0+|g0 =(1/rphys) . For the harmonic coordinates in the Poincaré patch and for the component *μ* = 0, [25XII20.2] becomes *ξ**α*∂*α**g*00 + 2*g*0*α*∂0*ξ**α* = 0. This yields the following solution for the time component of the vector field *ξ**μ* in harmonic coordinates [27XII20.1] [19V21.2] 0=(1-3H |t) ()+(1/rphys) , where $\epsilon(\bm{x})$ is an arbitrary function of the spatial coordinates $\bm{x}$. Similarly, for the spatial components of the asymptotic condition given in [25XII20.2], we obtain the following differential equation for the vector field *ξ**α* [19V21.1] g0i+g0 i+gi 0 =(1/rphys) , Using [19V21.2] and [19V21.1], the solution for the spatial components, in the harmonic coordinates, is given by [killingspatial] i=ij j () + i()+(1/rphys) . Thus, it follows that under the diffeomorphism *x**μ* → *x**μ* + *ξ**μ*, the change in the de Sitter metric satisfies the necessary asymptotic boundary condition given by [25XII20.1], provided the time and spatial components of *ξ**μ* are given by [27XII20.1] and [killingspatial], respectively. The unknown functions, $\epsilon(\bm{x})$ and $\zeta^{i}(\bm{x})$, appearing in the components of the vector field *ξ**μ* above, are arbitrary as of now. However the vector field should satisfy one more additional condition, namely the harmonic gauge condition ▫*ḡ**ξ**μ* = 0 in the background de Sitter spacetime. We must emphasize that ▫*ḡ**ξ**μ* = 0 should be understood as four scalar equations, for each one of the four functions *ξ**μ* (for a detailed discussion along these lines, see ). The time component of the above equation, ▫*ḡ**ξ**α* = 0, demands that the function $\epsilon(\bm{x})$ must satisfy the following differential equation[6](#fn6) [27XII20.2] ij i j ()=0 . While the spatial components of the equation, ▫*ḡ**ξ**α* = 0, yields the following differential equation for *ξ**i*, in the harmonic coordinates [27XII20.3] -|t2 i+(1-3H |t)-4/3 kl k l i=0 . Substituting for the spatial components of the vector field *ξ**i* from [killingspatial] and using [27XII20.2], we obtain the following differential equation for the spatial vector field *ζ**i* [veczeta] kl k l i() - Hij j ()=0 , whose solution can be written as [solzeta] = -r 1()+2() - Hr + V, where $\epsilon\_{1}(\bm{x})$ and $\epsilon\_{2}(\bm{x})$ are two harmonic functions satisfying [27XII20.2]. Moreover, we have defined $\boldsymbol{\nabla}\epsilon:=\delta^{ij}\partial\_{i} \epsilon(\bm{x})\partial\_{j}$, and  ×  denotes the cross product defined in the three dimensional Euclidean space. Further, the vector field ${\bm V}$ is the inhomogeneous part of [veczeta] satisfied by *ζ**i*. This vector field is discarded since it does not play any role in General Relativity. A similar analysis can be performed for the global and the static patch of the de Sitter spacetime as well. However, in our subsequent discussions, we will restrict ourselves to the harmonic coordinates of the de Sitter metric in the Poincaré patch, since this will turn out to be the most useful for the ensuing discussions. For this reason, we wish to express the above vector field *ξ**μ* in the cosmological coordinates, rather than in the harmonic coordinates. A simple coordinate transformation, following section [Harmonic], between the two sets of coordinates, yields [killingFRW] =()(t)+(i) . Substituting the above solution for *ζ**i*, as in [solzeta], in the above expression for the residual gauge vector field *ξ**μ* in the cosmological coordinates, we obtain, $$\begin{aligned} \label{killing\_FRW2} \xi^{\mu}&=\epsilon(\bm{x})\left(\partial\_t\right)^{\mu} \nonumber \\ &\hskip 1 cm +\left[\frac{1}{2H}\left(1-e^{-2Ht}\right)\delta^{ij} \partial\_{j}\epsilon(\bm{x})+\left(-\bm{r}\times \boldsymbol{\nabla}\epsilon(\bm{x})+\boldsymbol{\nabla}\epsilon(\bm{x})-H\bm{r}\right)^{i}\right]\left(\partial\_i\right)^{\mu}~,\end{aligned}$$ where $\epsilon(\bm{x})$ satisfies the equation $\nabla^{2}\epsilon(\bm{x})=0$, with ∇2 being the three-dimensional Laplacian operator. This vector field *ξ**μ* is the multipole symmetry vector field. In particular, we can decompose the above multipole symmetry vector field into three sets, namely [MultipoleSymm] $$\begin{aligned} \label{MultipoleSymmK} {\bm K\_{\epsilon}}&:=\epsilon(\bm{x})\partial\_{t}+\frac{1}{2H}\left(1-e^{-2Ht}\right){\bm\nabla} \epsilon(\bm{x}) -Hx^i \partial\_i~, \\ \label{MultipoleSymmL} {\bm L\_{\epsilon}}&:=-{\bm r}\times {\bm \nabla} \epsilon(\bm{x})~, \\ {\bm P\_{\epsilon}}&:={\bm \nabla} \epsilon(\bm{x})~.\end{aligned}$$ Most importantly, in the limit, *H* → 0, the above multipole symmetry vectors reduce to those of the asymptotically flat spacetime. Hence, following the flat spacetime analogy, one can identify the vector field $\bm{K}\_{\epsilon}$ as the generator of the mass multipole moments, $\bm{L}\_{\epsilon}$ as the generator of the spin multipole moments and $\bm{P}\_{\epsilon}$ as the generator of the momentum multipole moments. Notice that, except for $\bm{K}\_{\epsilon}$, both $\bm{L}\_{\epsilon}$ and $\bm{P}\_{\epsilon}$ are identical to their flat spacetime counterparts. This is expected, since the isometry group of the Poincaré patch includes both rotation and spatial translation symmetries as that of the flat spacetime, but the mass multipole symmetry vector gets modified by the presence of the cosmological constant. As evident from the above structure of the vector fields $\bm{K}\_{\epsilon}$, $\bm{L}\_{\epsilon}$ and $\bm{P}\_{\epsilon}$, the multipole symmetries [MultipoleSymm] depend on the harmonic function $\epsilon(\bm{x})$, whose decomposition consists of irregular and regular solid spherical harmonics. These are given by, *r*− (ℓ + 1)*Y*ℓ*m*(*θ*, *ϕ*) and *r*ℓ*Y*ℓ*m*(*θ*, *ϕ*), respectively. The first branch, which are irregular at *r* = 0, are simply gauge transformations and hence discarded[7](#fn7); the second branch, instead, is used to decompose the harmonic function $\epsilon(\bm{x})$ as ()==0m=-mrYm(,) , where *ε*ℓ*m* are arbitrary coefficients. We notice that for the *l* = 0 and *l* = 1 modes, the multipole symmetries in [MultipoleSymm] reduce to the background symmetries of the de Sitter spacetime, discussed in section [dS]. We will now demonstrate, that the vector associated with the residual gauge symmetry, also arises from the gauge freedom of the linear gravitational perturbations around de Sitter background. Linearized perturbation of de Sitter spacetime ============================================== In this section, we will consider linear gravitational perturbation of the de Sitter background in the cosmological coordinates. This will enable us to provide the perturbation equations in the cosmological coordinates along with the associated gauge choice simplifying the equations. It turns out that there is still a residual gauge freedom left in these perturbation equations, which enables us to eliminate the non-dynamical degrees of freedom and yields a symmetry vector identical to the one derived in section [Sec.4]. This provides a completely independent way of deriving the multipole symmetries, further bolstering our claims in the previous section. We start by fixing the gauge condition associated with the linear perturbation equations. Fixing the wave gauge: evolution of the linear gravitational perturbations -------------------------------------------------------------------------- The gravitational perturbation around the de Sitter background is often considered in the conformal coordinates (*η*, *x*, *y*, *z*). However, for our current purpose, it will prove useful to consider the gravitational perturbations around de Sitter spacetime in the cosmological coordinates (*t*, *x*, *y*, *z*). This is primarily because the residual gauge symmetry has the cleanest expression in the cosmological coordinates and can be compared to the corresponding expression for the flat spacetime with ease. The de Sitter metric in the cosmological coordinates has already been presented in [PlanarMetric], for which the non-zero components of the Christoffel connections are 0 ij= H e2Ht ij , i0j= H ij . Given the above expressions for the non-zero connection components and the fact that the background de Sitter spacetime is maximally symmetric, one can determine the differential equation satisfied by the gravitational perturbation *h**μ**ν*. To linear order in *h**μ**ν*, one obtains the following wave equations in the de Sitter background [waveeqdS] |-- (-|g)=-16T , where, as in the previous section, the “bar” denotes quantities evaluated for the background de Sitter spacetime. Also in the above expression, we have used the trace-reversed perturbation $\widetilde{h}\_{\mu\nu}:= h\_{\mu\nu}-(1/2)\bar{g}\_{\mu\nu}h$ and its covariant divergence $B\_{\mu}:=\bar{\nabla}\_{\alpha}\widetilde{h} ^{\alpha}\_{\mu}$. The above equation for the gravitational perturbation looks complicated and thus we need to impose an appropriate gauge condition in order to simplify it further. In the case of asymptotically flat background, one often chooses the Lorenz gauge condition, *B**μ* = 0, to simplify the perturbation equations. However, in the present context it is possible to choose another gauge, which will simplify the above wave equation considerably. For that purpose, let us concentrate on the wave equation associated with the spatial components $\widetilde{h}\_{ij}$ of the gravitational perturbation. We choose the following wave gauge condition $$\label{wave gauge} B\_{\mu}:=f(t)\widetilde{h}\_{0\mu}~,$$ where *f*(*t*) is an arbitrary function. For this choice of *B**μ*, taking a cue from [waveeqdS], the linearized wave equation for $\widetilde{h}\_{ij}$ becomes $$\begin{aligned} \nn \label{15VII20.1} -\partial\_{0}^{2} \widetilde{h}\_{ij}&+e^{-2Ht} \left(\delta^{mk} \partial\_{m} \partial\_{k} \widetilde{h}\_{ij}\right)+ H \partial\_{0} \widetilde{h}\_{ij}+ 2 H^{2} \widetilde{h}\_{ij}-\delta\_{ij}\Big[-2H^{2}e^{2Ht} \bar{g}^{kl} \widetilde{h}\_{kl} \nonumber \\ &+f(t)He^{2Ht}\widetilde{h}\_{00}+e^{2Ht}(df/dt)\widetilde{h}\_{00}+ e^{2Ht} f(t) \partial\_{0}\widetilde{h}\_{00}- f(t)\partial^{k} \widetilde {h}\_{0k}\Big] \nonumber \\ &-\left(f(t)+2H\right)\left(\partial\_{i} \widetilde{h}\_{0j}+\partial\_{j} \widetilde{h}\_{0i}\right)=-16\pi T\_{ij}~.\end{aligned}$$ We observe that *f*(*t*) =  − 2*H* will simplify it considerably. Writing down this gauge condition explicitly in terms of the trace reversed gravitational perturbation, we obtain $\bar{\nabla}\_{\alpha}\widetilde{h}^{\alpha}\_{\mu}=-2H \widetilde{h}\_{0\mu}$. Imposing this condition, finally the wave equation for $\widetilde{h}\_{ij}$ reads as -02ij+H0 ij+e-2Ht(mk mk ij)+2H2ij=-16 Tij . Note that the wave equation for $\widetilde{h}\_{ij}$ decouples from the other components of the gravitational perturbation. Further, expanding out $(\bar{\square}\widetilde{h}\_{00})$ for the background de Sitter spacetime in the cosmological coordinates and using the gauge condition $B\_{\mu}=-2H\widetilde{h}\_{0\mu}$, introduced above, the evolution equation for the time-time component of the gravitational perturbation $\widetilde{h}\_{00}$ becomes [15VII20.2] -02 00 + e-2Ht ij (i j 00) -3H0 00-2H2 00 - 2H2 e-2Ht ij ij=-16T00 . Unfortunately, unlike the wave equation for the spatial part of the gravitational perturbation, the above wave equation for the $\widetilde{h}\_{00}$ is a coupled differential equation. However, it is possible to decouple the spatial and the temporal part of the gravitational perturbation by introducing a redefined perturbation variable in favour of $\widetilde{h}\_{00}$. The first step is to take the trace of [15VII20.1] with respect to the flat spatial metric, which yields [15VII20.3] -02 (e-2Htij ij)-3H0(e-2Htij ij) +e-2Htkk (e-2Htij ij)=-16 e-2Ht ijTij . As a next step, we define a new perturbation variable $$\widetilde{\mathcal{H}}:= \widetilde{h}\_{00}+ {e^{-2Ht}} (\delta^{ij}\widetilde{h}\_{ij})~.$$ Subsequently, summing up [15VII20.2] and [15VII20.3], and using the definition for the gravitational perturbation $\widetilde{\mathcal{H}}$ given above, we obtain the following wave equation for $\widetilde{\mathcal{H}}$ [30XI20.2] -02 + e-2Ht ij i j -3 H 0 -2H2 =-16(T00+e-2Htij Tij) . It is clear that the above wave equation for $\widetilde{\mathcal{H}}$ is decoupled, *i.e.*, it depends on $\widetilde{\mathcal{H}}$ alone and not on other perturbation variables, as desired. Finally, the evolution equation for the temporal-spatial part of the perturbation, *i.e.*, for $\widetilde{h}\_{0i}$, takes the following form [26XI20.1] -02 0i + e-2Htjk jk 0i - H 00i=-16T0i. Therefore, we have decoupled all the components of the gravitational perturbation, *i.e.*, purely spatial part $\widetilde{h}\_{ij}$, spatial-temporal part $\widetilde{h}\_{0i}$ and $\widetilde{\mathcal{H}}$, a combination of purely temporal part and spatial part. This provides the desired wave equations for the gravitational perturbations around the de Sitter spacetime in cosmological coordinates. Concluding, as an aside remark, the wave gauge condition $\bar{{\nabla}}\_{\alpha} \widetilde{h}^{\alpha}\_{\mu}=-2H \widetilde{h}\_{0\mu}$ in the cosmological coordinates becomes [wavegauge1] -00+e-2Htj j-H0 |gkl kl- H 0=0 . [9VII20.1] As it will turn out, it will play an important role in the subsequent section, where we would like to determine the corresponding residual gauge and highlight its connection with the multipole symmetry vector fields. Residual gauge transformations and multipole symmetries ------------------------------------------------------- In section [Sec.4], we have derived the multipole symmetry vector field *ξ**μ*, see [killingFRW2], in the cosmological coordinates, which respect the harmonic gauge condition of the de Sitter spacetime and preserves the relevant asymptotic fall-off condition, as in [25XII20.1]. It is instructive to see any connection between the multipole symmetries and the residual diffeomorphism vector fields preserving the wave gauge condition [wave gauge]-[wavegauge1], used in deriving the wave equations for linear gravitational perturbation in the cosmological coordinates. Suppose the wave gauge condition $\bar{\nabla}\_{\alpha} \widetilde{h}^{\alpha}\_{\mu}=-2H \widetilde{h}\_{0\mu}$ is preserved by the diffeomorphism generating vector field *ξ**μ*,[8](#fn8) whose form we would like to determine. First of all, note that under the transformation, *x**μ* → *x**μ* + *ξ**μ*, the trace-reversed gravitational perturbation transforms as [30XI20.1] =|+| -|g(| ) . The covariant derivatives appearing in the above expression are in the background de Sitter spacetime; expanding these derivatives in the cosmological coordinate yields =+-2H(0+ 0 )+H 0|g+ 2H 0 0 0-|g |g  . Thus, the wave gauge condition ${\bar \nabla}\_{\alpha} \widetilde{h}^{\alpha}\_{\mu}=-2H\widetilde{h}\_{0\mu}$, will also be modified under the above diffeomorphism. As we want to preserve it, the vector field *ξ**μ* must satisfy the following differential equation |+ 4 H0+2H2 -2H2 00 +2H 0|g-() 0 ij ij=0 . Using explicitly the metric for the de Sitter spacetime in the cosmological coordinates, the above differential equation can be reduced to a partial differential equation [30XI20.3] |g +H0+2 H2 -2H20 0-2H000=0 . In addition to the above condition on *ξ**μ*, we would like to see if this vector field can also be used to eliminate the $\widetilde{h}\_{0i}$ and $\widetilde{\mathcal{H}}$ components of the gravitational perturbation. The $\widetilde{h}\_{0i}$ component of the gravitational perturbation transforms under diffeomorphism as [h0ichange] 0i=0i+i0-2Hi . In order to proceed further, we have to take various derivatives of the residual wave gauge condition. Firstly, taking the time derivative of the spatial component of the residual gauge condition in [30XI20.3], we obtain [30XI20.4] |g 0 i=2H|gklkl i-H02i-2H2 0i . Similarly, taking the spatial derivative of the temporal component of the residual gauge condition in [30XI20.3], we obtain [30XI20.5] |g i0=H0i0 . Hence, we obtain the following result for the change in the spatial-temporal component of the gravitational perturbation under diffeomorphism $$\begin{aligned} \label{30XI20.6} \bar{g}^{\mu\nu}\partial\_{\mu}\partial\_{\nu}\delta\_{\xi}\widetilde{h}\_{0i}&= \bar{g}^{\mu\nu}\partial\_{\mu}\partial\_{\nu} (\partial\_{0}\xi\_{i}+\partial\_{i}\xi\_{0}-2H\xi\_{i}) \nonumber \\ &=2H\bar{g}^{kl}\partial\_{k}\partial\_{l} \xi\_{i}-H\partial\_{0}^{2}\xi\_{i}-2H^{2} \partial\_{0}\xi\_{i}+H\partial\_{0}\partial\_{i}\xi\_{0}+2H\left(H\partial\_{0}\xi\_{i}+2 H^{2} \xi\_{i}\right) \nonumber \\ &=H\partial\_{0}^{2}\xi\_{i}+H\partial\_{0}\partial\_{i}\xi\_{0}-2H^{2}\partial\_{0}\xi^{i}~,\end{aligned}$$ where in arriving at the first line we have used the identities derived in [30XI20.4] and [30XI20.5], respectively. Further, in the second line we have used the result *ḡ**k**l*∂*k*∂*l**ξ**i* = ∂02*ξ**i* − *H*∂0*ξ**i* − 2*H*2*ξ**i*, which follows from [30XI20.3]. Thus, using [h0ichange], we finally obtain the differential equation for the change in the spatial-temporal component of the gravitational perturbation under diffeomorphism |g(0i)-H0 (0i)=0 , which is the same of the wave equation satisfied by $\widetilde{h}\_{0i}$ in the absence of any source; see [26XI20.1]. Hence using the diffeomorphism vector field *ξ**α*, which preserves the wave gauge condition, we can set $\widetilde{h}\_{0i}=0$, outside the matter source. Again, under diffeomorphism, the combination $\widetilde{\mathcal{H}}$ of purely temporal and purely spatial part of the gravitational perturbation, transforms as &=&00+|gijij=400 , where we have used [30XI20.1] to compute ${\delta \widetilde h\_{00}}$ and $\bar{g}^{ij}\delta{\widetilde h\_{ij}}$. Further, using the differential equation satisfied by *ξ**α*, preserving the wave gauge condition, it can also be shown that $\delta\_{\xi}\widetilde{\mathcal{H}}$ satisfies the following differential equation |g() -3H0( )-2H2=0 , which, upon comparison with [30XI20.2], turns out to be identical to the wave equation satisfied by the perturbation variable $\widetilde{\mathcal{H}}$ outside the source. Thus, we can also set $\widetilde{\mathcal{H}}=0$ outside the source using the diffeomorphism *ξ**μ*, while preserving the gauge condition. To summarize, in addition to [30XI20.3], we also demand the conditions $\widetilde{h}\_{0i}=0=\widetilde{\mathcal{H}}$ should be preserved under diffeomorphism *ξ**μ*, namely $\delta\_{\xi} \widetilde{\mathcal{H}}=0$ and $\delta\_{\xi} \widetilde{h}\_{0i}=0$, to obtain $$\begin{aligned} \partial\_{0}\xi\_{0}&=0~, \\ \partial\_{0} \xi\_{i}+\partial\_{i} \xi\_{0}-2H \xi\_{i}&=0~.\end{aligned}$$ These differential equations can be immediately solved, yielding $$\begin{aligned} \xi\_{0}&=-\epsilon(\bm{x})~, \\ \xi\_{i}&=e^{2Ht}\Big[\zeta\_{i}(\bm{x})+\frac{1}{2H}\left(1-e^{-2Ht}\right)\partial\_{i}\epsilon(\bm{x})\Big]~.\end{aligned}$$ Raising the indices of both the components of the vector field *ξ**μ*, we immediately obtain 0=() ,i=ij j()+(1-e-2Ht)iji() . As one can explicitly verify, this is identical to the multipole symmetry vector derived earlier in the context of harmonic gauge; see [killingFRW2]. This demonstrates the internal consistency of our analysis and the relevance of the multipole symmetry vector field. They do not only obey the asymptotic fall-off conditions and preserve the harmonic gauge condition, but they further can be used to eliminate the time-space component $\widetilde{h}\_{0i}$ and $\widetilde{\mathcal{H}}$, a suitable combination of the time-time and space-space component of the gravitational perturbation around the de Sitter background in cosmological coordinates. Multipole structure of the Kerr-de Sitter black hole ==================================================== The formalism for computing the multipole moments of a compact object in asymptotically de Sitter spacetime follows closely the method in for asymptotically flat spacetimes. In this section, we will compute the multipole moments of the Kerr-de Sitter (KdS) black hole spacetime, which will allow us to show explicitly how the mass and spin multipole moments for the KdS black hole differ from the Geroch-Hansen moments for Kerr black hole. We believe that the explicit expressions for the mass and spin multipole moments of the KdS black hole are new in the literature. The key object to compute the multipole moments is the Barnich-Brandt charge, or equivalently the Abbott-Deser charge. In four-dimensional General Relativity, the infinitesimal surface charge *δ*Q*ξ*[*h*; *ḡ*] associated to the vector field *ξ* and the linearized solution *h**μ**ν* around a background spacetime *ḡ**μ**ν* is $$\label{BBcharge} \delta \mathcal{Q}\_{\xi}[h;\bar{g}]:=\frac{1}{8\pi }\int\_S \bm{k}\_{\xi}[h; \bar{g}]=\frac{1}{32\pi }\int\_S \sqrt{-\bar{g}}~ k^{\mu\nu}\_{\xi}[h; \bar{g}]~ \epsilon\_{\mu\nu\alpha \beta}~dx^{\alpha}\wedge dx^{\beta}~,$$ where, the surface charge density *k**ξ**μ**ν* is given by $$\label{chargedensity} k^{\mu\nu}\_{\xi}[h; \bar{g}]:=\xi^{\nu}\left(\bar{\nabla}^{\mu}h-\bar{\nabla}\_{\sigma}h^{\mu\sigma} \right) + \xi\_{\sigma}\bar{\nabla}^{\nu}h^{\mu\sigma} + \frac{1}{2} h \bar{\nabla}^{\nu}\xi^{\mu} - h^{\rho\nu}\bar{\nabla}\_{\rho}\xi^{\mu} + \frac{1}{2} h^{\sigma \nu}\left(\bar{\nabla}^{\mu}\xi\_{\sigma} + \bar{\nabla}\_{\sigma}\xi^{\mu} \right)~.$$ In the case of the KdS black hole, the background metric *ḡ**μ**ν* is the metric for the de Sitter spacetime, while *h**μ**ν* is the linearized solution obtained by varying the parameters of the KdS black hole. We will first provide a brief review of the KdS black hole spacetime, since it will be extremely useful for our subsequent computation of the multipole moments from the charges associated with multipole symmetry vector fields. Kerr-de Sitter black hole ------------------------- KdS black hole is an exact solution to the Einstein’s equations with a positive cosmological constant Λ = 3*H*2 > 0. The metric of the KdS black hole spacetime reads as[9](#fn9) $$\label{metric:KdS} ds^2 = -\frac{\Delta\_r}{\rho^2}\left(dt - \frac{a}{\Xi}\sin^2 \theta d\phi \right)^2 + \frac{\rho^2}{\Delta\_r}dr^2+ \frac{\rho^2}{\Delta\_{\theta}}d\theta^2 +\frac{\Delta\_{\theta}}{\rho^2}\sin^2 \theta\left(adt -\frac{r^2+a^2}{\Xi}d\phi \right)^2,$$ where the functions in the metric components are given by $$\begin{aligned} \Delta\_r &= \left(r^2+a^2\right)\left(1-H^2 r^2 \right)-2Mr,\\ \Delta\_{\theta} &= 1+H^2 a^2 \cos^2 \theta,\\ \rho^2 &= r^2 + a^2 \cos^2 \theta,\\ \Xi &= 1+a^2H^2.\end{aligned}$$ The KdS black hole is described by three parameters: the mass *M*, the spin *a*, and the Hubble constant *H*. While we allow the mass *M* and the spin *a* to vary, we keep the Hubble constant *H* fixed. The de Sitter spacetime (*i.e.*, the background spacetime) is recovered for *M* = 0 = *a*, while the Kerr black hole is recovered for *H* = 0. The KdS black hole spacetime is considered as a two-parameter family, since as the parameters *M* and *a* change, the metric changes its configuration. The linearized perturbation *h**μ**ν*, necessary for the computation of the charges can be considered as tangent to the space of the metric configurations and computed as *δ**g**μ**ν* :  = *h**μ**ν* = (∂*M**g**μ**ν*)*δ**M* + (∂*a**g**μ**ν*)*δ**a*. This sets the stage for our subsequent application of our prescription to the KdS black hole and hence to compute its mass and spin multipole moments. Multipole moments of the Kerr-de Sitter black hole -------------------------------------------------- In what follows, we will show how to compute in practice the multipole moments of the KdS black hole. Here are the main steps: * First of all, one starts by decomposing in spherical harmonics the multipole symmetry vectors in [MultipoleSymm]. The vector fields $\xi=\{\bm{K}\_{\epsilon},\bm{L}\_{\epsilon}\}$ are named, respectively, the mass and spin multipole symmetries because they generate the mass and spin multipole moments of the compact object in the asymptotically de Sitter spacetimes[10](#fn10); * Then, we will choose *h**μ**ν* to be the linearized perturbation of the KdS metric with *ḡ**μ**ν* being the dS background and shall compute the surface charge density, using [chargedensity]. Integration of the same over a generic 2-sphere, yields the surface charges, according to [BBcharge]. One thus gets the (*l*, *m*) modes of the infinitesimal charge *δ*Q*ξ**l**m*; * The infinitesimal charge mode *δ*Q*ξ**l**m*, derived above, is, in general, a function of *t* and *r*. One considers a *t* = constant time slice and computes the large radius expansion of the mode *δ*Q*ξ**l**m*. Only its finite part (FP) is retained, namely only the coefficient of O(*r*0) is kept; * In the final step, one integrates the FP *δ*Q*ξ**l**m* over the solution parameters, *i.e.*, over the parameters *m* and *a*, to get the multipole moments *M**ξ**l**m* of the solution. In formulæ, $$\label{mult\_mom} M^{lm}\_{\xi} = \underset{\begin{subarray}{c} r\to\infty \\ t=\text{constant} \end{subarray}}{\text{FP}} \mathcal{Q}^{lm}\_{\xi}.$$ ### Spin multipole moments The first step in the computation of the spin multipole moments is to recall the symmetry vector associated with spin multipole symmetry, $\bm{L}\_{\epsilon} = -{\bm r} \times {\bm \nabla} \epsilon({\bm x})$, written in the same coordinates as that of the KdS metric. We call $\bm{L}\_{\epsilon}$ the spin multipole symmetry since it will generate the spin multipole moments. The spherical harmonic decomposition of the spin multipole symmetry vector field is given by[11](#fn11) $$\label{spinL} \bm{L}\_{lm} = \mathcal{L}\_l~ r^{l-1}\left(~^{\rm B}Y^{\theta}\_{lm}\partial\_{\theta} + \frac{1}{\sin \theta} ~^{\rm B}Y^{\phi}\_{lm}\partial\_{\phi}\right)~,$$ where the magnetic-type harmonic vector field $~^{\rm B}\bm{Y}\_{lm}$ is defined by $$~^{\rm B}\bm{Y}\_{lm} = \frac{1}{\sqrt{l(l+1)}} ~\bm{r} \times \bm{\nabla} Y\_{lm}, \quad Y\_{lm} = (-1)^m \sqrt{\frac{2l+1}{4\pi} \frac{(l-m)!}{(l+m)!}} e^{im\phi}P\_{lm}(\theta)~,$$ with *P**l**m*(*θ*) being the associated Legendre polynomials and *Y**l**m* the spherical harmonics. The normalization factor, L*l*, at this stage of the discussion, is not fixed. However, we decide to adjust it in such a way that we recover the spin multipole moments of the Kerr black hole in the limit *H* → 0. Such requirement implies that the normalization factor takes the following form $$\mathcal{L}\_{l}=\frac{8\sqrt{\pi}}{3}\sqrt{\frac{l(2l+1)}{l+1}}\frac{(2l-1)!!}{(l+1)!}~.$$ We now use the expression of the surface charge from [BBcharge] and the explicit expression of the spin multipole symmetry from [spinL] to compute the spin multipole moments S*l* of the KdS black hole, using [multmom]. It is important to recall that the charge mode Q*L**l**m* is non-vanishing for *l* odd and *m* = 0, and so is the spin multipole moment S*l*. The angular momentum, *i.e.* the spin dipole moment *l* = 1, reads as $$\mathcal{S}\_{1} = \frac{Ma}{\left(1+a^2H^2\right)^2}~,$$ in agreement with the known results in the literature. Moreover, in the limit *H* → 0, we recover the angular momentum of the Kerr black hole. The higher spin multipole moments are computationally more involved. We report the exact expressions of the first spin multipole moments in Appendix [Appspin]. It is more instructive to compute their expressions for small values of the Hubble constant: $$\begin{aligned} \mathcal{S}\_{3} &= -M a^3 \left[1 -\frac{28}{15} a^2 H^2 + \frac{25}{9} a^4 H^4 + \mathcal{O}(H^6)\right]~,\\ \mathcal{S}\_{5} &= +M a^5 \left[1 - \frac{118}{63} a^2 H^2 + \frac{2891}{1053} a^4 H^4 + \mathcal{O}(H^6)\right]~,\\ \mathcal{S}\_{7} &= -M a^7 \left[1 - \frac{17}{9} a^2 H^2 + \frac{513}{187} a^4 H^4+ \mathcal{O}(H^6)\right]~,\\ \mathcal{S}\_{9} &= +M a^9 \left[1 - \frac{314}{165} a^2 H^2 + \frac{3751}{1365} a^4 H^4+ \mathcal{O}(H^6)\right]~.\end{aligned}$$ For *H* → 0, they reproduce the well-known Geroch-Hansen’s formulæ  for the spin multipole moments of the Kerr black hole, namely, $$\underset{H \to 0}{\lim} ~\mathcal{S}\_{2l+1} = (-1)^l M a^{2l+1}~.$$ Thus the spin multipole moments derived here satisfies the result that all even spin moments are identically zero. This is because of the reflection symmetry of the KdS spacetime about the equatorial plane. Note that all the corrections over and above the multipole moments of the Kerr black hole are dependent on the dimensionless combination (*a**H*), which will be small if we consider *H*− 1 as the age of our universe. Thus, the effect of the cosmological constant on the spin multipole moments are negligible. We will analyze the situation for the mass multipole moments in a while. ### Mass multipole moments The mass multipole symmetry vector reads as $\bm{K}\_{\epsilon} = \epsilon(\bm{x}) \partial\_{t}+(1/2H)(1-e^{-2Ht})\bm{\nabla}\epsilon(\bm{x})-Hx^{i}\partial\_{i}$, which admits a spherical harmonic decomposition of the form $$\label{Kharmonic} \bm{K}\_{lm}=\mathcal{K}\_{l}~r^l\left(Y\_{lm}\partial\_t + \frac{1}{r}\chi^{r}\_{lm}\partial\_r + \frac{1}{r^2}\chi^{\theta}\_{lm}\partial\_{\theta} + \frac{1}{r^2 \sin \theta}\chi^{\phi}\_{lm}\partial\_\phi \right) -H\delta\_{l1}\partial\_r~.$$ The spatial vector field $\bm{\chi}\_{lm}$, whose components appear explicitly in the above expression, is a certain linear combination of the electric-type vector harmonic, defined as $~^{\rm E}\bm{Y}\_{lm} = r {\bm \nabla} Y\_{lm} $, and the radial-type vector harmonic, $~^{\rm R}\bm{Y}\_{lm} = \bm{n}Y\_{lm}$, such that $$\bm{\chi}\_{lm}=\frac{1}{2H}\left(1-e^{-2Ht}\right)\left(\sqrt{l(l+1)}~^{\rm E}\bm{Y}\_{lm}+l ~^{\rm R}\bm{Y}\_{lm} \right)~.$$ After computing the surface charge given in [BBcharge], associated to the mass multipole symmetry, the mass multipole moments are non-vanishing for *l* even and *m* = 0. Thus the mass multipole moments are denoted by M*l*. Moreover, we demand that the normalization factor K*l* in the mass multipole symmetry is such that we recover the Geroch-Hansen’s mass moments for Kerr black hole. This implies that $$\label{Kl} \mathcal{K}\_l=\sqrt{\pi}\sqrt{2l+1}\left(\frac{2^{l/2+1}}{N\_{l/2+1}}\right)\frac{(2l-1)!!}{l!}~,$$ and *N**l* obeys the following recursive relation for *l* ≥ 2, $$N\_{l+2} = -2\frac{(17-12l)N\_{l+1}+10(l-2)N\_l}{7l-9}~,$$ with initial conditions *N*1 = 1 and *N*2 = 4. Following this strategy outlined above, the mass of the KdS black hole, *i.e.*, the monopole *l* = 0 mode takes the form $$\mathcal{M}\_0 = \frac{M}{1+a^2H^2}~,$$ which reproduces the de Sitter analogue of the mass for the Kerr-AdS black hole obtained in,[12](#fn12) and gives the correct expression for the mass of the Kerr black holes for *H* = 0. The mass quadrupole, as expected, is time independent and it is given by $$\mathcal{M}\_2 = -\frac{Ma^2}{1+a^2H^2}~.$$ The exact expressions of the higher mass multipole moments can be found in Appendix [Appmass]. For small values of the Hubble constant, they read as $$\begin{aligned} \mathcal{M}\_4 &= +Ma^4\left[ 1 + \frac{4}{5}Ht - \frac{47a^2 + 44t^2}{55} H^2 + \mathcal{O}(H^3)\right]~,\\ \mathcal{M}\_6 &= -Ma^6 \left[1+\frac{6}{7} Ht -\frac{7a^2+30t^2}{35}H^2 + \mathcal{O}(H^3) \right]~,\\ \mathcal{M}\_8 &= +Ma^8 \left[ 1+\frac{8}{9} Ht +\frac{9a^2-152t^2}{171}H^2 + \mathcal{O}(H^3) \right]~.\end{aligned}$$ As evident from the above expressions, for *H* → 0, we recover the well-known Geroch-Hansen’s formulæ  for the mass multipole moments of the Kerr black hole $$\underset{H \to 0}{\lim} ~\mathcal{M}\_{2l} = (-1)^{l}Ma^{2l}.$$ Notice that, owing to the reflection symmetry of the KdS spacetime about the equatorial plane, only the even order mass moments are non-zero. We conclude this section with an interesting outcome from the mass multipole moments computation. While the *H* → 0 limit, or the static limit, of M2*l* gives the Hansen’s mass moments for the Kerr black hole, there exists another limit, which may be of interest. It is indeed true that except for the first two, all the other higher order mass moments are time dependent and the time variable enters the expressions via the exponential function *e*− 2*H**t*, typical of the de Sitter dynamics; see Appendix [Appmass]. One can therefore perform the late-time limit, *i.e.*, *H**t* → ∞. In this case, one has $$\begin{aligned} \underset{Ht \to \infty}{\lim} \mathcal{M}\_2 &= -\frac{Ma^2}{1+a^2H^2}~,\\ \underset{Ht \to \infty}{\lim} \mathcal{M}\_4 &= +\frac{7}{5}\frac{Ma^4}{1+a^2H^2}~,\\ \underset{Ht \to \infty}{\lim} \mathcal{M}\_6 &= -\frac{10}{7}\frac{Ma^6}{1+a^2H^2}~,\\ \underset{Ht \to \infty}{\lim} \mathcal{M}\_8 &= +\frac{13}{9}\frac{Ma^8}{1+a^2H^2}~.\end{aligned}$$ While the quadrupole mass moment is the same in both the static and late-time limits, higher mass multipole moments differ by a numerical factor. More precisely, they can be recast as $$\underset{Ht \to \infty}{\lim} \mathcal{M}\_{2l}=\frac{\tilde{\mathcal{K}}\_{l}}{2l+1}\frac{(-1)^l Ma^{2l}}{1+a^2H^2},$$ with $\tilde{\mathcal{K}}\_{l}$ satisfying the recursive relation: $\tilde{\mathcal{K}}\_{l+2} = -2\tilde{\mathcal{K}}\_{l+1} - \tilde{\mathcal{K}}\_{l}$, with initial conditions $\tilde{\mathcal{K}}\_{1}=-3$, $\tilde{\mathcal{K}}\_{2}=7$, $\tilde{\mathcal{K}}\_{3}=-10$, for *l* ≥ 4. It is straightforward to define a different normalization factor in the mass multipole symmetry (see [Kl]), such that $\mathcal{K}\_l \to (2l+1)\mathcal{K}\_l/\tilde{\mathcal{K}}\_{l}$ to get rid of the numerical factor and obtain, in the late-time limit, the Hansen’s mass moments rescaled by the factor Ξ = 1 + *a*2*H*2. In the asymptotically flat case, for which *H* → 0, the Hansen’ mass moments are recovered. Discussion and concluding remarks ================================= We have addressed the problem of computing the gravitational multipole moments of a compact object, living in an asymptotically de Sitter spacetime. Since the standard approaches of computing the gravitational multipole moments relies heavily on the asymptotic flatness, they could not be used in computing the moments for asymptotically de Sitter spacetimes. We achieved our results by using the method proposed in to compute the multipole moments by means of Noether charges associated with specific residual harmonic gauge transformations. The application of the Noether charge technique to compute gravitational multipole moments in asymptotically de Sitter spacetime requires one to implement the following steps: (a) Expressing the de Sitter spacetime in harmonic gauge, (b) Finding out the symmetry vector field, generating residual gauge transformations in the de Sitter spacetime, expressed in harmonic gauge, (c) Checking the consistency of the symmetry vector field with the corresponding one associated with linear gravitational perturbations around the de Sitter background. All these lead to a unique vector field that depends on the spherical harmonic decomposition of a function $\epsilon(\bm{x})$, satisfying Laplace’s equation. This vector field can be decomposed into three parts — (i) $\bm{K}\_{\epsilon}$, generating mass multipole moments, (ii) $\bm{L}\_{\epsilon}$, generating spin multipole moments and (iii) $\bm{P}\_{\epsilon}$, which does not provide any further independent multipole moments in General Relativity. Following this strategy, in section [Harmonic] we provide the transformation of the de Sitter metric expressed in the cosmological coordinates to the harmonic coordinates. To our knowledge, such a transformation of the de Sitter metric to the harmonic coordinates has not been attempted before. Having transformed the de Sitter metric to the harmonic form, we derive the diffeomorphism vector field respecting the harmonic gauge as well as the asymptotic fall-off condition [25XII20.1]. The expression for the resulting multipole symmetries are given in [MultipoleSymm]. In addition, this vector fields can also be used to eliminate the non-dynamical components of the linear gravitational perturbation around the de Sitter background in cosmological coordinates. Incidentally, we provide an analysis of the linear gravitational perturbation of the de Sitter spacetime in the cosmological coordinates and the associated appropriate gauge condition, referred to as the wave gauge. The above formalism allows us to compute the gravitational multipole moments of any compact object in asymptotically de Sitter spacetime. The charges associated to the above-mentioned multipole symmetries are the key objects from which we can extract the multipole moments. As an example of this procedure, we consider the case of the Kerr-de Sitter black hole spacetime and following the prescription outlined in section [Sec:KdS], we compute the mass and spin multipole moments of the Kerr-de Sitter black hole. It turns out that the mass (monopole mass moment) and the angular momentum (dipole spin moment) reproduce earlier results in the literature. However, the higher order mass and spin multipole moments for Kerr-de Sitter spacetime do not exist in the literature, and are discussed in this work for the first time. It turns out that the spin moments involve additional corrections over and above the Kerr moments, depending on the dimensionless combination *a**H*. Since for the present epoch *H* ∼ (age of universe)− 1, the corrections depending on various powers of the combination *a**H* are supposed to be negligible. On the other hand, except for the quadrupole mass moment, all the higher order mass moments are time dependent and depends on various powers of the combination *a**H* as well as *H**t*, over and above the Kerr mass moments. It is assuring that in the *H* → 0 limit, we recover the mass and spin moments of the Kerr spacetime. The higher order mass moments, though complicated, in the late time limit (*H**t* → ∞) takes a very simple form. Notice that the mass moments of the Kerr-de Sitter black hole in the late time limit is very much related to the Kerr mass moments, except for some overall normalization factor. It is worthwhile to mention that, even though *a**H* is small, since in the present epoch *H**t* ∼ O(1), there can be significant departure of the mass moments from that of the Kerr black hole. Since these moments directly affect various gravitational wave observables, *e.g.*, the energy emitted by gravitational waves, there can be some observational consequences, which we wish to explore in a future work. Let us finally conclude with some future directions. To date, the geometrical Geroch-Hansen formalism is applicable only to stationary and asymptotically flat spacetimes. A few attempts exist in the literature to extend the formalism to asymptotic non-flat cases, see *e.g.*, where multipole moments of spacetimes with NUT charges has been computed. In line with these developments, it would be interesting to develop a formalism à la Geroch-Hansen for asymptotically (anti-)de Sitter spacetimes. Additionally, a method to compute the multipole moments for radiating spacetimes, following the approach of Thorne, is also non-existent in literature for asymptotically (anti-)de Sitter spacetimes. A possible extension of the same to the (anti-)de Sitter spacetimes will prove very useful for various gravitational-wave-related implications. These will also provide other independent methods, than the one presented in this work, to compute the multipole structure of the Kerr-(anti-)de Sitter black hole spacetime. It is also important to ask, whether the formalism developed here can be successfully applied to investigate the complex multipole structure, and its related properties, of neutron stars and exotic objects, such as boson stars and fuzzball configurations. Finally, it will be very interesting to investigate the relation between the multipole symmetries for asymptotically de Sitter spacetimes and the adiabatic modes in cosmology; see, *e.g.*,. We will come back to these further applications elsewhere. Acknowledgements ================ The authors thank Geoffrey Compère, Ali Seraj and the anonymous referee for useful comments and feedback on the manuscript. Research of S.C. is funded by the INSPIRE Faculty fellowship from the DST, Government of India (Reg. No. DST/INSPIRE/04 /2018/000893) and by the Start-Up Research Grant from SERB, DST, Government of India (Reg. No. SRG/2020/000409). The research of J.H. is supported in part by the Czech Science Foundation Grant 19-01850S. The research of R.O. is funded by the European Structural and Investment Funds (ESIF) and the Czech Ministry of Education, Youth and Sports (MSMT), Project CoGraDS - CZ.02.1.01/0.0/0.0/15003/0000437. de Sitter in harmonic coordinates ================================= In this Appendix, we provide the explicit coordinate transformations from static and global coordinates to harmonic ones for the de Sitter spacetime. From static to harmonic coordinates ----------------------------------- The de Sitter spacetime has already been expressed in the static coordinates (*T*, *R*, *θ*, *ϕ*) in [dSstatic]: ds2= -(1-H2R2) dT2 ++R2(d 2+2d2). As one can explicitly check, the metric indeed solves the Einstein’s field equations with a positive cosmological constant term, Λ ≡ 3*H*2. It is also straightforward to check that the de Sitter metric in the static coordinates does not obey the harmonic gauge condition, *i.e.*, $\partial\_{\mu}\left( \sqrt{-g}g^{\mu\nu}\right)\neq 0$. In what follows, we will depict an explicit coordinate transformation taking the de Sitter spacetime from static coordinates to the harmonic ones. The strategy to arrive at the harmonic coordinates is to first introduce the Cartesian coordinates, *i.e.*, consider the following coordinate transformation *x* = *R*cos*ϕ*sin*θ*,  *y* = *R*sin*ϕ*sin*θ*,  *z* = *R*cos*θ*,  so that the line element becomes, $$\label{dS\_Static\_01} ds^{2}=-\left(1-H^2R^2\right)dT^2+\left(\delta\_{ij} + \frac{H^2R^2}{1-H^2R^2}n\_i n\_j\right)dx^i dx^j,$$ where *n**i* = *x**i*/*R* is the radial unit vector. The second and the final step consists of a coordinates transformation *x**μ* → *x̄**μ*(*x**α*), such that □*g**x̄**μ* = 0, where *g**μ**ν* is the metric given in [dSStatic01]. To achieve the second step, we introduce a new radial coordinate *r̄* = *f*(*R*), such that $$\begin{aligned} \bar{t}&=T~, \quad \bar{x}=f(R)\cos\phi \sin\theta~, \quad \bar{y}=f(R) \sin\phi \sin\theta~,\quad \bar{z}= f(R) \cos \theta.\end{aligned}$$ The harmonic gauge condition □*g**x̄**μ* = 0, when applied on the above set of coordinates, yields the following second order ordinary differential equation for the function *f*(*r*) $$\frac{d}{dR}\left[R^2\left( 1-H^2 R^2\right)\frac{df(R)}{dR}\right] - 2f(R)=0,$$ whose most general solution is $$f(R) = \alpha \left(1+\frac{1}{H^2R^2}\right) + \frac{\beta}{4}\left[\left(1+\frac{1}{H^2R^2}\right) \tanh^{-1} \left( H R\right) -\frac{1}{HR}\right], \quad \alpha, \beta \in \mathbb{R}.$$ In order fix the constants of integration *α* and *β*, we need to impose appropriate boundary conditions. We demand the following condition on the function *f*(*R*), namely f(R=1/H) = 1/H, which provides the following expressions for the coefficients: *α* = 1/(2*H*) and *β* = 0. Thus, in the new system of coordinate (*t̄*, *x̄*, *ȳ*, *z̄*), the de Sitter metric in the static patch reads, $$ds^2 = -2\frac{\left( H\bar{r}-1\right)}{2H \bar{r}-1}d\bar{t}^2+ \frac{1}{2}\frac{d\bar{r}^2}{\left( H\bar{r}-1\right)\left(2 H\bar{r}-1\right)^2} + \frac{1}{H^2\bar{r}^2 \left( 2H\bar{r}-1\right)}\left( \delta\_{ij} - \bar{n}\_i\bar{n}\_j\right)d\bar{x}^i d\bar{x}^j.$$ As one can explicitly check that, through a lengthy but straightforward algebra, the above metric element indeed satisfies the harmonic gauge condition. From global to harmonic coordinates ----------------------------------- Another useful coordinate system for de Sitter spacetime is the global coordinate system (*τ*, *χ*, *θ*, *ϕ*) with the line element given by [dSglobal]: ds2=-d2 +2(H). In global coordinates, we obtain $\sqrt{-g}=(1/H^{3})\cosh^{3}(H\tau)\sin^{2}\chi \sin\theta$ and hence one gets $$\begin{aligned} \sqrt{-g}g^{\mu \nu}=&\textrm{diag}\Big(-H^{-3}\cosh^{3}(H\tau)\sin^{2}\chi \sin\theta,H^{-1}\cosh(H\tau)\sin^{2}\chi \sin\theta, \nonumber \\ &\quad \qquad H^{-1}\cosh(H\tau)\sin \theta,H^{-1}\cosh(H\tau)\textrm{cosec}\theta \Big).\end{aligned}$$ As one can check, the global coordinate system defined above does not satisfy the harmonic gauge condition, *i.e.*, $\partial\_{\mu}(\sqrt{-g}g^{\mu \nu})\neq 0$. Thus, we need to find out a new set of coordinates which satisfies the harmonic gauge condition and for that purpose, the following coordinate transformation becomes useful $$\begin{aligned} \bar{\tau}=f(\tau), \quad \bar{x}=g(\chi)\sin \theta \cos \phi, \quad \bar{y}=g(\chi)\sin \theta \sin \phi, \quad \bar{z}=g(\chi)\cos\theta.\end{aligned}$$ In order for these coordinates to obey the harmonic gauge condition, we need them to satisfy the following differential equation: $(1/\sqrt{-g})\partial\_{\alpha}(\sqrt{-g}g^{\alpha \beta})\partial\_{\beta}\bar{x}^{\mu}=0$, where *g**α**β* is the metric introduced in [dSglobal]. From the time component of the above differential equation, we obtain the following ordinary differential equation for the function *f*(*τ*) $$\begin{aligned} \frac{d^{2}f}{d\tau^{2}}+3H\tanh(H\tau)\frac{df}{d\tau}=0.\end{aligned}$$ The first integral of the above differential equation yields, (*d**f*/*d**τ*) = *A* sech3(*H**τ*), whose further integration yields $$\begin{aligned} \bar{\tau}=B+A\left[\frac{1}{H}\tan^{-1}\left\{\tanh\left(\frac{H\tau}{2}\right)\right\}+\frac{1}{2H}\textrm{sech}(H\tau)\tanh(H\tau)\right].\end{aligned}$$ If we impose the condition *τ̄* = 0, when *τ* = 0, then it follows that we can set the coefficients such that *A* = 1 and *B* = 0, thereby providing the desired relation between the harmonic gauge time coordinate *τ̄* and the global time coordinate *τ*. On the other hand, imposing the harmonic gauge condition on the coordinate *x̄*, we obtain the following differential equation for the function *g*(*χ*) as $$\begin{aligned} \frac{d}{d\chi}\left(\sin^{2}\chi\frac{dg}{d\chi}\right)-2g=0.\end{aligned}$$ with solution *g*(*χ*) = *A*cosec2*χ* + *B*cosec2*χ*{(*χ*/2) − (1/4)sin(2*χ*)}. Setting *A* = 1 and *B* = 0, we obtain the following line element for de Sitter spacetime in the global patch in harmonic coordinate system $$\begin{aligned} ds^{2}=-\cosh^{6}(H\tau)d\bar{\tau}^{2}+\frac{\cosh^{2}(H\tau)}{H^{2}\bar{r}^{3}}\left[\delta\_{ij}+\left(\frac{4-3\bar{r}}{4(\bar{r}-1)}\right)\bar{n}\_{i}\bar{n}\_{j}\right]d\bar{x}^{i}d\bar{x}^{j},\end{aligned}$$ where, $\bar{r}=\sqrt{\bar{x}^{2}+\bar{y}^{2}+\bar{z}^{2}}$ and *n̄**i* = (1/*r̄*)*x̄**i*. One can check that the metric element above indeed satisfies the harmonic gauge condition. Thus we have found out the set of coordinates in which even the global patch of the de Sitter spacetime can be converted to harmonic gauge. Mass and spin moments of Kerr-de Sitter black hole ================================================== We list the exact analytic expressions of the first mass and spin multipole moments of the Kerr-de Sitter black hole. The function *L**i*2(*z*), appearing in the spin multipole moments, is the poly-logarithm function of order two. It can be represented by the power series *L**i*2(*z*) = ∑*k* = 1∞*z**k*/*k*2. The computation has been performed with the use of [*Riemanian Geometry and Tensor Calculus*](https://library.wolfram.com/infocenter/MathSource/4484/) package developed by Sotirios Bonanos and the [*Surface Charges*](https://ptm.ulb.be/gcompere/package.html) package developed by Geoffrey Compère. Spin multipole moments ---------------------- Here are the exact expressions of the spin multipole moments S2*l* + 1 for *l* = {1, 2, 3, 4}. $$\begin{aligned} \mathcal{S}\_3 &= -Ma^3 \Bigg\{ \frac{525 +1038a^2H^2+601a^4H^4}{96 a^4 H^4 \left(1+a^2H^2 \right)^2}\nonumber\\ &\qquad \qquad - \frac{\left(525+373 a^2H^2 \right)\arctan(aH)}{96 a^5 H^5} \nonumber\\ &\qquad \qquad - \frac{35i \left[Li\_2(i aH)-Li\_2(-iaH)\right]}{32 a^3 H^3}\Bigg\},\\ \mathcal{S}\_5 &= Ma^5 \Bigg\{ \frac{72765 +218295 a^2H^2+219255a^4H^4+74365 a^6 H^6 + 1664 a^8 H^8}{1152 a^8 H^8 (1 + a^2 H^2)^2} \nonumber\\ &\qquad \qquad- \frac{\left(8085 + 10780 a^2 H^2 + 3623 a^4 H^4 \right)\arctan(aH)}{128 a^9 H^9} \nonumber\\ &\qquad \qquad - \frac{385i \left[Li\_2(i aH)-Li\_2(-iaH)\right]}{64 a^5 H^5}\Bigg\},\\ \mathcal{S}\_7 &= -Ma^7 \Bigg\{ \frac{7}{49152 a^{12} H^{12} \big(1 + a^2 H^2)^2} (5521230 + 18758025 a^2 H^2 + 24083631 a^4 H^4\nonumber\\ &\qquad \qquad \qquad+ 13962747 a^6 H^6 + 3105671 a^8 H^8 - 8192 a^{10} H^{10} + 8192 a^{12} H^{12}\big) \nonumber\\ &\qquad \qquad- \frac{5 (2576574 + 4459455 a^2 H^2 + 2432430 a^4 H^4 + 460013 a^6 H^6)\arctan(aH)}{16384 a^{13} H^{13}} \nonumber\\ &\qquad \qquad - \frac{225225i \left[Li\_2(i aH)-Li\_2(-iaH)\right]}{8192 a^7 H^7}\Bigg\},\\ \mathcal{S}\_9 &= Ma^9 \Bigg\{ \frac{1}{17203200 a^{16} H^{16} \big(1 + a^2 H^2)^2}\times\nonumber\\ &\qquad \qquad \qquad \times (176849597925 + 683124917475 a^2 H^2 + 1032894122260 a^4 H^4\nonumber\\ &\qquad \qquad \qquad +768754011740 a^6 H^6 + 287149590415 a^8 H^8 + 45062550345 a^{10} H^{10} \nonumber\\ &\qquad \qquad \qquad+ 38535168 a^{12} H^{12} - 5505024 a^{14} H^{14} + 19038208 a^{16} H^{16}\big) \nonumber\\ &\qquad \qquad - \frac{1}{98304 a^{17} H^{17}}\big(1010569131 + 2219289072 a^2 H^2 + 1664466804 a^4 H^4 \nonumber\\ &\qquad \qquad \qquad + 512143632 a^6 H^6 + 66269153 a^8 H^8\big)\arctan(aH) \nonumber\\ &\qquad \qquad - \frac{969969i \left[Li\_2(i aH)-Li\_2(-iaH)\right]}{8192 a^9 H^9}\Bigg\},\end{aligned}$$ Mass multipole moments ---------------------- Here are the exact expressions of the mass multipole moments M2*l* for *l* = {2, 3, 4}. $$\begin{aligned} \mathcal{M}\_4 &= Ma^4 \Big\{\frac{7/5}{1+a^2H^2} + \frac{21}{32}\frac{e^{-2Ht}}{a^9 H^9}\times \nonumber\\ &\qquad\qquad \times\left[ 5aH \left( 21+11a^2H^2\right)-3\left(35+30a^2H^2 +3a^4H^4\right)\arctan(aH)\right]\Big\},\\ \mathcal{M}\_6 &= -Ma^6 \Big\{\frac{10/7}{1+a^2H^2}-\frac{1287}{1280}\frac{e^{-2Ht}}{a^{13} H^{13}}(1-a^2H^2)\times \nonumber\\ &\quad \quad\quad \quad \quad\times \big[-7aH \left( 165+170a^2H^2+33a^4H^4\right) \nonumber\\ &\quad \quad \quad \quad\quad \quad \quad +5 \left(231+5a^2H^2(63+21a^2H^2+a^4H^4) \right)\arctan(aH)\big] \Big\},\\ \mathcal{M}\_8 &= Ma^8 \Bigg\{\frac{13/9}{1+a^2H^2}-\frac{2431}{28672}\frac{e^{-2Ht}}{a^{17} H^{17}}(1-a^2H^2+a^4H^4)\times \nonumber\\ &\quad\times \bigg[-a H (225225 + 345345 a^2 H^2 + 147455 a^4 H^4 + 15159 a^6 H^6) \nonumber\\ &\qquad+ 35 (6435 + 12012 a^2 H^2 + 6930 a^4 H^4 + 1260 a^6 H^6 + 35 a^8 H^8)\arctan(aH)\bigg] \Bigg\}.\end{aligned}$$ --- 1. [email protected][↩](#fnref1) 2. [email protected][↩](#fnref2) 3. [email protected][↩](#fnref3) 4. The concept of multipole symmetries and the relation with multipole moments has been originally introduced in in the context of Maxwell electrodynamics.[↩](#fnref4) 5. It is immediate to show that the Killing vector, defined in [TimeTrans] in the cosmological coordinates, transforms to (∂/∂*T*) in the static coordinates. For that purpose one simply has to note that ∂/∂*t* = (1 − *H*2*R*2)(∂/∂*T*) + *H**R*(∂/∂*R*) and *r*(∂/∂*r*) = *R*(∂/∂*R*) + *H**R*2/(1 − *H*2*R*2)(∂/∂*T*), such that, (∂/∂*t*) − *H**r*(∂/∂*r*) = (∂/∂*T*).[↩](#fnref5) 6. As we are interested in computing multipole symmetry vector in asymptotic regime, we neglect $\mathcal{O}(1/r\_{\rm phys})$ in the rest of the section to avoid cluttering in notation.[↩](#fnref6) 7. Another reason to discard the branch of irregular solid spherical harmonics is the following : we want to probe the multipole moments, that naively are the coefficients of the 1/*r* expansion of the metric tensor. In order to extract such coefficients, one needs a vector field that – after being contracted with the metric tensor and derivative thereof – gives us access to the *r*− *l* component of the metric tensor. This is achieved by the regular branch of the solid spherical harmonics.[↩](#fnref7) 8. As of now, this vector field is completely different from the multipole symmetry vector in [killingFRW2]. Though, for convenience, we are using the same symbol to denote the diffeomorphism vector field in the present context as well.[↩](#fnref8) 9. See, *e.g.* the Λ > 0 counterpart of the metric in and references therein for a complete account of the Kerr-dS and Kerr-AdS black hole thermodynamics. Sometimes, the KdS metric is written with the time rescaled by Ξ; see, *e.g.*,.[↩](#fnref9) 10. We also assume that, in the asymptotic regime, the coordinates of the KdS black hole coincide with that of the cosmological coordinates.[↩](#fnref10) 11. We choose the convention that $\bm{L}\_{10} = - \partial\_{\phi}$.[↩](#fnref11) 12. Notice that we could have also rescaled the time coordinate by Ξ in the KdS metric, see [metric:KdS], and obtain M0 = *M*/(1 + *a*2*H*2)2 to match with some earlier results in the literature. This is a trivial modification in the computation and it does not affect the validity of our prescription. Moreover, though the *H* → 0 limit is left untouched, any time-dependent rescaling affects O(*H**n*) corrections to the Geroch-Hansen’s mass moments.[↩](#fnref12) **Gravitational multipole momentsfor asymptotically de Sitter spacetimes** ========================================================================== We provide a prescription to compute the gravitational multipole moments of compact objects for asymptotically de Sitter spacetimes. Our prescription builds upon a recent definition of the gravitational multipole moments in terms of Noether charges associated to specific vector fields, within the residual harmonic gauge, dubbed multipole symmetries. We first derive the multipole symmetries for spacetimes which are asymptotically de Sitter; we also show that these symmetry vector fields eliminate the non-propagating degrees of freedom from the linearized gravitational wave equation in a suitable gauge. We then apply our prescription to the Kerr-de Sitter black hole and compute its multipole structure. Our result recovers the Geroch-Hansen moments of the Kerr black hole in the limit of vanishing cosmological constant. Introduction and summary of the results ======================================= The multipole moments associated with a gravitational field have always been relevant and important in the study of various solutions arising from General Relativity, since its early days. These studies involving multipole moments have impacted many areas of research ranging from mathematical physics to astrophysics. The study of multipole moments in various contexts has become even more timely, since the discovery of gravitational waves. The observation of gravitational waves from coalescence of binary compact objects can have potential applications in order to address questions about the nature of compact objects, such as black holes, neutron stars and the binary systems thereof. Future space-based gravitational wave detectors, besides mass and spin, will be also able to measure the quadrupole mass moment of a supermassive object in a binary system with great accuracy, as well as put bounds on four leading order multipole moments in the high mass ratio ( ∼ 10) limit. In General Relativity, the gravitational field is decomposed in two sets of multipole moments — the mass and the spin moments. Using the Penrose’s conformal completion technique, a geometrical definition of the multipole moments for static, asymptotically flat spacetimes was pioneered by Geroch. Later, Geroch’s definition was generalised to the stationary case by Hansen. Subsequently, Beig, Simon and Kundu had developed further properties of the multipole moments for stationary and asymptotically flat spacetimes. On the other hand, imposing no incoming radiation for linearized radiating gravitational fields, Thorne provided a definition of multipole moments for asymptotically flat spacetimes. Thorne’s moments are defined within the harmonic gauge, upon further fixing the residual gauge, where the gravitational field is expanded in spherical harmonics and the multipole moments are read from such an expansion. Thorne’s definition applies also to stationary non-linear configurations in General Relativity and it is shown to be equivalent to the Geroch–Hansen’s definition for stationary spacetimes, up to a choice of normalization. More recent developments about gravitational multipole moments in asymptotically flat spacetime can be found in. Intriguingly, the multipole moments of a black hole have very simple structures: *e.g.*, the multipole moments of a Kerr black hole only depends on its mass and spin. However, such is not the situation for neutron stars or for the exotic compact objects. The multipole structures of these objects are much more complex and are very much distinct from one another. Thus the study of the multipole structure of a compact object may reveal its true nature and interesting properties. For example, for a Kerr black hole all the odd mass moments and even spin moments are identically zero, however for certain fuzzball states odd mass moments and even spin moments do exist. Therefore, if the gravitational wave observations tell us that the odd mass moments and even spin moments of the merging compact objects are indeed non-zero, then the fuzzball models will become particularly relevant. This shows the importance of understanding the multipole moments of compact objects in greater detail. As emphasized earlier, the multipole moments of gravitational fields are well understood for asymptotically flat spacetimes, however they remain largely unexplored for asymptotically non-flat and in particular for asymptotically de Sitter spacetimes. This is because the Geroch-Hansen as well as the formalism of Thorne depend crucially on the asymptotic flatness of the spacetime. Though there are instances where the formalism can be applied for asymptotically non-flat spacetimes: this is the case for spacetimes with NUT charge, because the relevant codimension-1 hypersurfaces are asymptotically flat even in the presence of NUT charge and the Geroch-Hansen formalism is readily applicable. On the other hand, for asymptotically de Sitter spacetimes these codimension-1 hypersurfaces are non-flat, rendering the known analysis of multipole moments inapplicable. The aim of this work is to start filling this gap in the literature and provide a definition of gravitational multipole moments for asymptotically de Sitter spacetimes. The mass and spin multipole moments have been recently defined for a generic metric theory of gravity in terms of the Noether charges associated with the so-called multipole symmetries,[4](#fn4) that are specific residual transformations in the harmonic gauge. This definition also agrees with the previous results of Thorne for linearized radiating spacetimes and the Geroch-Hansen formalism for stationary spacetimes. At this outset, let us briefly summarize the main strategy to compute multipole moments in. The main idea is to extract the multipole moments from the radial expansion of the metric tensor. For asymptotically flat spacetimes, following Thorne, one has to reach the harmonic gauge as explained earlier. Upon fixing the harmonic gauge, one is left with its residual gauge transformations. In, it was demonstrated that the multipole symmetries are actually residual gauge transformations preserving the asymptotic behaviour of the lapse function and shift vector. The next step in the analysis of is to compute the Noether charges associated with the multipole symmetries. Explicit expressions for the surface charges in the four-dimensional General Relativity can be found in. The multipole moments of the solution are identified from the Noether charges upon a regularization procedure. In this paper, we wish to extend the method of, as outlined above, for asymptotically de Sitter spacetimes. More specifically, we achieve three main results — (a) Our first result is to write down the de Sitter spacetime in harmonic gauge; surprisingly, de Sitter spacetime was never written in the harmonic gauge before, to the best of our knowledge. Expressing the de Sitter spacetime in the harmonic gauge is instrumental to derive the residual harmonic gauge transformations and, among these, the multipole symmetries preserving certain fall-off conditions for asymptotically de Sitter spacetimes. These fall-off conditions are such that they preserve the lapse function and the shift vector, analogously as in. (b) We also provide an alternative derivation of the multipole symmetries to be those residual gauge transformations that eliminate the non-propagating degrees of freedom in the linearized outgoing wave solutions around de Sitter background. This different interpretation of multipole symmetries sheds more light over the physics of the multipole symmetries and complements the original derivation carried out in. (c) Finally, in order to apply our findings on a concrete example, we compute the multipole structure of the Kerr-de Sitter black hole. In particular, we provide explicit computation of the first few mass and spin multipole moments. Our expressions reproduce the well-known mass and angular momentum of the Kerr-de Sitter black hole and, in the limit of asymptotically flat spacetime, we recover the Geroch-Hansen’s multipole moments of the Kerr black hole. The paper is organized as follows. In section [dS], we briefly present different coordinate systems for the de Sitter spacetime, which will be relevant for our purpose. Section [Harmonic] deals with writing down the de Sitter metric in the harmonic gauge. In section [Sec.4], exploiting the residual harmonic gauge freedom, we obtain the multipole symmetries for asymptotically de Sitter spacetimes. In section [sec:5], we provide the linearized gravitational wave equation in de Sitter background, and we obtain the multipole symmetries from the residual gauge transformations of the linearized theory. Finally, in section [Sec:KdS], we compute the multipole moments of the Kerr de Sitter spacetime. We conclude with a summary and perspectives for future works. Two appendices contain — (a) the technical derivation of converting the de Sitter metric to harmonic coordinates and (b) the exact analytic expressions of the first few mass and spin multipole moments of the Kerr-de Sitter black hole. *Notation and conventions:* We adopt the mostly positive signature convention, *i.e.*, the Minkowski metric in the Cartesian coordinate system takes the form: diag( − ,  + ,  + ,  + ). The Greek indices run over all the spacetime coordinates, while the Latin indices run over the spatial coordinates. Furthermore, we set the fundamental constants, such that *c* = 1 = *G*. Brief review of the de Sitter spacetime ======================================= We briefly review the de Sitter spacetime to point out the key features and the coordinate charts we will be using in this work. Unlike the Minkowski spacetime, which admits a natural, global Cartesian coordinate chart, the de Sitter spacetime has several charts appropriate for different situations — the global, Poincaré and static patches. Furthermore, the de Sitter spacetime is a solution to the equations: *R**μ**ν* = Λ*g**μ**ν*, where Λ is referred to as the cosmological constant term. The de Sitter spacetime is most naturally defined as the hyperboloid in the five dimensional Minkowski spacetime, which also introduces a set of coordinates covering the full de Sitter spacetime, known as the *global patch*. The global patch is covered by the coordinates (*τ*, *χ*, *θ*, *ϕ*), and the de Sitter metric is given by [dSglobal] ds2=-d2+2(H) , where $H:=\sqrt{(\Lambda/3)}$ is the Hubble constant. As evident from the above metric, the global topology of the de Sitter spacetime is R × *S*3; see Fig. [GlobalChartFig]. In the global chart, the axial symmetry of the de Sitter spacetime is guaranteed by the Killing vector (∂/∂*ϕ*). However, unlike the Minkowski spacetime, (∂/∂*τ*) is not a Killing vector in the global coordinates. ABCD denotes the *global chart*, ABD is a *Poincaré patch*, while AED is a *static patch*. The angular coordinates, *θ* and *ϕ*, are suppressed. An observer, represented by the world line DA, has its causal future *J*+ spanning the region DBA and is one of the *Poincaré patches*. Its spacelike boundary, denoted by the line AB, is the *future “null” infinity*, ${\cal J}^+$. [GlobalChartFig] Among the other coordinate systems associated with the de Sitter spacetime, the *Poincaré patch*, which constitutes the causal future (past) of observers and covers “half” of the global chart is of much interest. As we will see, in the cosmological context and for our current purpose of studying compact sources in the de Sitter background, the Poincaré patch will turn out to be very useful. There are two natural coordinate charts for the Poincaré patch: (a) the *conformal chart* with coordinates (*η*, *x**i*), and (b) the *cosmological chart* with coordinates (*t*, *x**i*). In the conformal chart (*η*, *x**i*), the de Sitter metric takes the form $$\begin{aligned} \label{ConformallyFlat} ds^{2}=a^{2}(\eta)~\left(-d\eta^{2}+\delta\_{ij}dx^idx^{j}\right)~,\end{aligned}$$ where *a*(*η*) =  − (*H**η*)− 1 with *η* ∈ ( − ∞, 0). A drawback of the conformal coordinates is that they are not suitable for studying the flat limit Λ → 0 (or, equivalently *H* → 0); see, *e.g.*, (however also see, ). Thus one is led to adopt the cosmological coordinates, with the cosmic time *t* being related to the conformal time *η* via *η* :  =  − (1/*H*)*e*− *H**t*. In these coordinates (*t*, *x**i*), the line element for the de Sitter spacetime becomes $$\begin{aligned} \label{PlanarMetric} ds^2=-dt^2+e^{2Ht}\left(\delta\_{ij}dx^idx^{j}\right)~.\end{aligned}$$ This is the most well-known form for the de Sitter metric and it will be used extensively in this work. The Poincaré patch has a seven dimensional isometry group — *three* spatial rotations, *three* spatial translations and *one* time translation. In order to find out the Killing vector field of time translations, we let *t* → *t* + *δ**t* in [PlanarMetric], where *δ**t* is taken to be infinitesimal. This time translation changes the de Sitter line element and hence in order to make the de Sitter metric in the Poincaré patch invariant, we must have the following transformation for the spatial coordinates: *x**i* → *x**i* − *H**x**i**δ**t*. Hence, the Killing vector field generating the time translation in the cosmological coordinate system is *t**μ*∂*μ* = ∂*t* − *H**x**i*∂*i* . This vector will also play an important role in the study of multipole symmetries of de Sitter spacetime. The third patch, corresponding to “half” of the Poincaré patch, is referred to as the *static patch*; see Fig. [GlobalChartFig]. This is a natural patch for an isolated body or a black hole with a stationary neighbourhood. This patch of de Sitter spacetime can be covered by static coordinates (*T*, *R*, *θ*, *ϕ*). In these coordinates, the line element of the de Sitter spacetime becomes [dSstatic] ds2=-(1-H2R2) dT2 ++R2(d 2+2d2). The existence of two Killing vector fields, (∂/∂*T*) and (∂/∂*ϕ*), is clear from the line element above; correspondingly, the spacetime has both axial and time-translational symmetries. It should be emphasized that the Killing[5](#fn5) vector field (∂/∂*T*) becomes null on the cosmological horizon of the de Sitter spacetime located at *R* = *H*− 1. Let us conclude this section with a side remark about the global isometry group in the three patches of the de Sitter spacetime. The *ten* dimensional isometry group of the full de Sitter spacetime is reduced to a *seven* dimensional subgroup in the Poincaré patch, and to a *four* dimensional subgroup in the static patch. The symmetry reduction of the global isometry group can be best understood along the following lines: the null hyperplane BD (see Fig. [GlobalChartFig]) can be thought of as adding an additional boundary to the full de Sitter spacetime. Hence, the symmetry generators which are not tangential to BD are absent in the Poincaré patch. Similarly for the static patch, the Killing fields which are not tangential to the cosmological horizon are not symmetry generators in the static patch. de Sitter metric in the harmonic gauge ====================================== The harmonic gauge, also known as the de Donder gauge, plays a crucial role in solving the Einstein’s equations. The harmonic gauge is usually the favourite gauge where to read off the multipole moments from the spherical harmonic decomposition of the metric tensor. In addition, the residual harmonic gauge symmetry is intimately connected with the nature of the multipole moments of any compact object. Given the metric tensor *g**μ**ν*, the harmonic gauge condition is given by (g)=0 . To impose the harmonic gauge condition on the metric *g**μ**ν*, we first perform the coordinate transformation *x*ʹ*μ* = *f**μ*(*x*). Subsequently, imposing the harmonic gauge condition on the metric in the *x*ʹ*μ* coordinate system amounts to ’(g’ ’)= g’ x’=0 , where ▫*g*ʹ :  = *g**μ**ν*∇*μ*∇*ν*. Hence, choosing *each* coordinate *x*ʹ*μ* to be harmonic in the metric *g**μ**ν*, *i.e.*, satisfying ▫*g*ʹ*x*ʹ*μ* = 0, ensures that the harmonic gauge condition holds in the new coordinates. We will employ this procedure to transform the de Sitter metric to harmonic coordinates in all the three coordinate patches discussed in the previous section. Intriguingly, despite being one of the most familiar solution to the Einstein’s equations with a cosmological constant, to our knowledge, harmonic coordinates for de Sitter spacetime do *not* exist in the literature. Even though we can provide the transformation of all the three coordinate patches to harmonic coordinates, here we provide the explicit coordinate transformation from the cosmological coordinates to the harmonic coordinates, since it will be of relevance in what follows. We relegate to Appendix [App:harm] the explicit coordinate transformations that bring the de Sitter metric in global and static coordinates to the harmonic form. As discussed in the previous section, the Poincaré patch of the de Sitter spacetime can be described by the cosmological coordinates (*t*, *x**i*), with the associated line element given by [PlanarMetric], for which $$\sqrt{-g}g^{\mu \nu} = \mbox{diag}\left(-a(t)^{3}, a(t), a(t),a(t)\right),$$ where *a*(*t*) :  = *e**H**t*. Therefore, it follows that the cosmological coordinates are not harmonic since $\partial\_{\mu}(\sqrt{-g}g^{\mu 0})=-3H e^{3Ht}\neq 0$. In order to obtain the harmonic coordinates for de Sitter in the Poincaré patch, we introduce a new set of coordinates *x̄**α*, such that ▫*g**x̄**α* = 0, where ▫*g* is associated with the metric given in [PlanarMetric]. Expanding out the above harmonic gauge condition, we obtain the following differential equation for the new coordinates |x=0 . [9XII20.3] We now have to choose an appropriate coordinate transformation, such that the above differential equation can be satisfied. For our purpose it will suffice to perform the following coordinate transformation |x=x,|y=y,|z=z,|t=f(t) , where *f*(*t*) is an arbitrary function of the cosmic time. It is straightforward to verify that [9XII20.3] is indeed obeyed for all the spatial coordinates, while for the time coordinate it yields the following differential equation ()+3H()=0 . Solving this differential equation for the function *f*(*t*), equipped with the boundary conditions *f*(*t* = 0) = 0 and (*d**f*/*d**t*)(*t* = 0) = 1 (these boundary conditions are necessary to have a smooth flat spacetime limit), we obtain, |t:=f(t)=(1-e-3Ht) . Therefore, the line element of the Poincaré patch of the de Sitter spacetime takes the following form in the harmonic coordinates [dSCosmoHarmonic] ds2=- + (1-3H|t)-2/3(dx2+dy2+dz2). It can again be verified that, with somewhat long but straightforward algebra, that the de Sitter metric in the above coordinate system indeed satisfies the harmonic gauge condition. Having determined the harmonic coordinates for the de Sitter spacetime in the Poincaré patch, we will proceed to find out the residual gauge symmetry and the vector fields generating these symmetries. As we will demonstrate, these vector fields will be crucial in obtaining the multipole symmetries and hence the multipole moments of compact objects living in an asymptotically de Sitter spacetime. We will determine these multipole symmetry vector fields in the next section. Residual harmonic gauge and multipole symmetries ================================================ We are now in the position to discuss the residual harmonic gauge transformations and hence derive the vector fields from which the multipole moments, in terms of the Noether charges, can be associated. We choose an operational definition of asymptotically de Sitter spacetimes such that in harmonic gauge our metric matches with that of [PlanarMetric] in the asymptotic regime. For our purpose it will suffice to consider a constant cosmological time *t* such that the asymptotic regime corresponds to the *r* → ∞ limit, which is also identical to the $r\_{\rm phys}:= re^{Ht}\to \infty$. Since this corresponds to the point *B* in Fig. [GlobalChartFig], which is the analogue to spatial infinity *i*0 in asymptotically flat spacetime, we will use this limit to define the multipole moments. Therefore, following, here also we demand the following asymptotic conditions on the metric components [25XII20.1] g0=|g0+(1/rphys) Here *ḡ**μ**ν* denotes the background de Sitter spacetime. It should also be noticed that [25XII20.1] is a non-tensorial relation and hence must be used in the harmonic coordinates, such that *ḡ**μ**ν* satisfies the harmonic gauge condition (note that the radial distance remains the same in both cosmological and harmonic coordinates). The vector field *ξ**μ* will preserve this asymptotic conditions given in [25XII20.1] if it satisfies *asymptotically* the following relation [25XII20.2] £ |g0:= |g0+|g 0+|g0 =(1/rphys) . For the harmonic coordinates in the Poincaré patch and for the component *μ* = 0, [25XII20.2] becomes *ξ**α*∂*α**g*00 + 2*g*0*α*∂0*ξ**α* = 0. This yields the following solution for the time component of the vector field *ξ**μ* in harmonic coordinates [27XII20.1] [19V21.2] 0=(1-3H |t) ()+(1/rphys) , where $\epsilon(\bm{x})$ is an arbitrary function of the spatial coordinates $\bm{x}$. Similarly, for the spatial components of the asymptotic condition given in [25XII20.2], we obtain the following differential equation for the vector field *ξ**α* [19V21.1] g0i+g0 i+gi 0 =(1/rphys) , Using [19V21.2] and [19V21.1], the solution for the spatial components, in the harmonic coordinates, is given by [killingspatial] i=ij j () + i()+(1/rphys) . Thus, it follows that under the diffeomorphism *x**μ* → *x**μ* + *ξ**μ*, the change in the de Sitter metric satisfies the necessary asymptotic boundary condition given by [25XII20.1], provided the time and spatial components of *ξ**μ* are given by [27XII20.1] and [killingspatial], respectively. The unknown functions, $\epsilon(\bm{x})$ and $\zeta^{i}(\bm{x})$, appearing in the components of the vector field *ξ**μ* above, are arbitrary as of now. However the vector field should satisfy one more additional condition, namely the harmonic gauge condition ▫*ḡ**ξ**μ* = 0 in the background de Sitter spacetime. We must emphasize that ▫*ḡ**ξ**μ* = 0 should be understood as four scalar equations, for each one of the four functions *ξ**μ* (for a detailed discussion along these lines, see ). The time component of the above equation, ▫*ḡ**ξ**α* = 0, demands that the function $\epsilon(\bm{x})$ must satisfy the following differential equation[6](#fn6) [27XII20.2] ij i j ()=0 . While the spatial components of the equation, ▫*ḡ**ξ**α* = 0, yields the following differential equation for *ξ**i*, in the harmonic coordinates [27XII20.3] -|t2 i+(1-3H |t)-4/3 kl k l i=0 . Substituting for the spatial components of the vector field *ξ**i* from [killingspatial] and using [27XII20.2], we obtain the following differential equation for the spatial vector field *ζ**i* [veczeta] kl k l i() - Hij j ()=0 , whose solution can be written as [solzeta] = -r 1()+2() - Hr + V, where $\epsilon\_{1}(\bm{x})$ and $\epsilon\_{2}(\bm{x})$ are two harmonic functions satisfying [27XII20.2]. Moreover, we have defined $\boldsymbol{\nabla}\epsilon:=\delta^{ij}\partial\_{i} \epsilon(\bm{x})\partial\_{j}$, and  ×  denotes the cross product defined in the three dimensional Euclidean space. Further, the vector field ${\bm V}$ is the inhomogeneous part of [veczeta] satisfied by *ζ**i*. This vector field is discarded since it does not play any role in General Relativity. A similar analysis can be performed for the global and the static patch of the de Sitter spacetime as well. However, in our subsequent discussions, we will restrict ourselves to the harmonic coordinates of the de Sitter metric in the Poincaré patch, since this will turn out to be the most useful for the ensuing discussions. For this reason, we wish to express the above vector field *ξ**μ* in the cosmological coordinates, rather than in the harmonic coordinates. A simple coordinate transformation, following section [Harmonic], between the two sets of coordinates, yields [killingFRW] =()(t)+(i) . Substituting the above solution for *ζ**i*, as in [solzeta], in the above expression for the residual gauge vector field *ξ**μ* in the cosmological coordinates, we obtain, $$\begin{aligned} \label{killing\_FRW2} \xi^{\mu}&=\epsilon(\bm{x})\left(\partial\_t\right)^{\mu} \nonumber \\ &\hskip 1 cm +\left[\frac{1}{2H}\left(1-e^{-2Ht}\right)\delta^{ij} \partial\_{j}\epsilon(\bm{x})+\left(-\bm{r}\times \boldsymbol{\nabla}\epsilon(\bm{x})+\boldsymbol{\nabla}\epsilon(\bm{x})-H\bm{r}\right)^{i}\right]\left(\partial\_i\right)^{\mu}~,\end{aligned}$$ where $\epsilon(\bm{x})$ satisfies the equation $\nabla^{2}\epsilon(\bm{x})=0$, with ∇2 being the three-dimensional Laplacian operator. This vector field *ξ**μ* is the multipole symmetry vector field. In particular, we can decompose the above multipole symmetry vector field into three sets, namely [MultipoleSymm] $$\begin{aligned} \label{MultipoleSymmK} {\bm K\_{\epsilon}}&:=\epsilon(\bm{x})\partial\_{t}+\frac{1}{2H}\left(1-e^{-2Ht}\right){\bm\nabla} \epsilon(\bm{x}) -Hx^i \partial\_i~, \\ \label{MultipoleSymmL} {\bm L\_{\epsilon}}&:=-{\bm r}\times {\bm \nabla} \epsilon(\bm{x})~, \\ {\bm P\_{\epsilon}}&:={\bm \nabla} \epsilon(\bm{x})~.\end{aligned}$$ Most importantly, in the limit, *H* → 0, the above multipole symmetry vectors reduce to those of the asymptotically flat spacetime. Hence, following the flat spacetime analogy, one can identify the vector field $\bm{K}\_{\epsilon}$ as the generator of the mass multipole moments, $\bm{L}\_{\epsilon}$ as the generator of the spin multipole moments and $\bm{P}\_{\epsilon}$ as the generator of the momentum multipole moments. Notice that, except for $\bm{K}\_{\epsilon}$, both $\bm{L}\_{\epsilon}$ and $\bm{P}\_{\epsilon}$ are identical to their flat spacetime counterparts. This is expected, since the isometry group of the Poincaré patch includes both rotation and spatial translation symmetries as that of the flat spacetime, but the mass multipole symmetry vector gets modified by the presence of the cosmological constant. As evident from the above structure of the vector fields $\bm{K}\_{\epsilon}$, $\bm{L}\_{\epsilon}$ and $\bm{P}\_{\epsilon}$, the multipole symmetries [MultipoleSymm] depend on the harmonic function $\epsilon(\bm{x})$, whose decomposition consists of irregular and regular solid spherical harmonics. These are given by, *r*− (ℓ + 1)*Y*ℓ*m*(*θ*, *ϕ*) and *r*ℓ*Y*ℓ*m*(*θ*, *ϕ*), respectively. The first branch, which are irregular at *r* = 0, are simply gauge transformations and hence discarded[7](#fn7); the second branch, instead, is used to decompose the harmonic function $\epsilon(\bm{x})$ as ()==0m=-mrYm(,) , where *ε*ℓ*m* are arbitrary coefficients. We notice that for the *l* = 0 and *l* = 1 modes, the multipole symmetries in [MultipoleSymm] reduce to the background symmetries of the de Sitter spacetime, discussed in section [dS]. We will now demonstrate, that the vector associated with the residual gauge symmetry, also arises from the gauge freedom of the linear gravitational perturbations around de Sitter background. Linearized perturbation of de Sitter spacetime ============================================== In this section, we will consider linear gravitational perturbation of the de Sitter background in the cosmological coordinates. This will enable us to provide the perturbation equations in the cosmological coordinates along with the associated gauge choice simplifying the equations. It turns out that there is still a residual gauge freedom left in these perturbation equations, which enables us to eliminate the non-dynamical degrees of freedom and yields a symmetry vector identical to the one derived in section [Sec.4]. This provides a completely independent way of deriving the multipole symmetries, further bolstering our claims in the previous section. We start by fixing the gauge condition associated with the linear perturbation equations. Fixing the wave gauge: evolution of the linear gravitational perturbations -------------------------------------------------------------------------- The gravitational perturbation around the de Sitter background is often considered in the conformal coordinates (*η*, *x*, *y*, *z*). However, for our current purpose, it will prove useful to consider the gravitational perturbations around de Sitter spacetime in the cosmological coordinates (*t*, *x*, *y*, *z*). This is primarily because the residual gauge symmetry has the cleanest expression in the cosmological coordinates and can be compared to the corresponding expression for the flat spacetime with ease. The de Sitter metric in the cosmological coordinates has already been presented in [PlanarMetric], for which the non-zero components of the Christoffel connections are 0 ij= H e2Ht ij , i0j= H ij . Given the above expressions for the non-zero connection components and the fact that the background de Sitter spacetime is maximally symmetric, one can determine the differential equation satisfied by the gravitational perturbation *h**μ**ν*. To linear order in *h**μ**ν*, one obtains the following wave equations in the de Sitter background [waveeqdS] |-- (-|g)=-16T , where, as in the previous section, the “bar” denotes quantities evaluated for the background de Sitter spacetime. Also in the above expression, we have used the trace-reversed perturbation $\widetilde{h}\_{\mu\nu}:= h\_{\mu\nu}-(1/2)\bar{g}\_{\mu\nu}h$ and its covariant divergence $B\_{\mu}:=\bar{\nabla}\_{\alpha}\widetilde{h} ^{\alpha}\_{\mu}$. The above equation for the gravitational perturbation looks complicated and thus we need to impose an appropriate gauge condition in order to simplify it further. In the case of asymptotically flat background, one often chooses the Lorenz gauge condition, *B**μ* = 0, to simplify the perturbation equations. However, in the present context it is possible to choose another gauge, which will simplify the above wave equation considerably. For that purpose, let us concentrate on the wave equation associated with the spatial components $\widetilde{h}\_{ij}$ of the gravitational perturbation. We choose the following wave gauge condition $$\label{wave gauge} B\_{\mu}:=f(t)\widetilde{h}\_{0\mu}~,$$ where *f*(*t*) is an arbitrary function. For this choice of *B**μ*, taking a cue from [waveeqdS], the linearized wave equation for $\widetilde{h}\_{ij}$ becomes $$\begin{aligned} \nn \label{15VII20.1} -\partial\_{0}^{2} \widetilde{h}\_{ij}&+e^{-2Ht} \left(\delta^{mk} \partial\_{m} \partial\_{k} \widetilde{h}\_{ij}\right)+ H \partial\_{0} \widetilde{h}\_{ij}+ 2 H^{2} \widetilde{h}\_{ij}-\delta\_{ij}\Big[-2H^{2}e^{2Ht} \bar{g}^{kl} \widetilde{h}\_{kl} \nonumber \\ &+f(t)He^{2Ht}\widetilde{h}\_{00}+e^{2Ht}(df/dt)\widetilde{h}\_{00}+ e^{2Ht} f(t) \partial\_{0}\widetilde{h}\_{00}- f(t)\partial^{k} \widetilde {h}\_{0k}\Big] \nonumber \\ &-\left(f(t)+2H\right)\left(\partial\_{i} \widetilde{h}\_{0j}+\partial\_{j} \widetilde{h}\_{0i}\right)=-16\pi T\_{ij}~.\end{aligned}$$ We observe that *f*(*t*) =  − 2*H* will simplify it considerably. Writing down this gauge condition explicitly in terms of the trace reversed gravitational perturbation, we obtain $\bar{\nabla}\_{\alpha}\widetilde{h}^{\alpha}\_{\mu}=-2H \widetilde{h}\_{0\mu}$. Imposing this condition, finally the wave equation for $\widetilde{h}\_{ij}$ reads as -02ij+H0 ij+e-2Ht(mk mk ij)+2H2ij=-16 Tij . Note that the wave equation for $\widetilde{h}\_{ij}$ decouples from the other components of the gravitational perturbation. Further, expanding out $(\bar{\square}\widetilde{h}\_{00})$ for the background de Sitter spacetime in the cosmological coordinates and using the gauge condition $B\_{\mu}=-2H\widetilde{h}\_{0\mu}$, introduced above, the evolution equation for the time-time component of the gravitational perturbation $\widetilde{h}\_{00}$ becomes [15VII20.2] -02 00 + e-2Ht ij (i j 00) -3H0 00-2H2 00 - 2H2 e-2Ht ij ij=-16T00 . Unfortunately, unlike the wave equation for the spatial part of the gravitational perturbation, the above wave equation for the $\widetilde{h}\_{00}$ is a coupled differential equation. However, it is possible to decouple the spatial and the temporal part of the gravitational perturbation by introducing a redefined perturbation variable in favour of $\widetilde{h}\_{00}$. The first step is to take the trace of [15VII20.1] with respect to the flat spatial metric, which yields [15VII20.3] -02 (e-2Htij ij)-3H0(e-2Htij ij) +e-2Htkk (e-2Htij ij)=-16 e-2Ht ijTij . As a next step, we define a new perturbation variable $$\widetilde{\mathcal{H}}:= \widetilde{h}\_{00}+ {e^{-2Ht}} (\delta^{ij}\widetilde{h}\_{ij})~.$$ Subsequently, summing up [15VII20.2] and [15VII20.3], and using the definition for the gravitational perturbation $\widetilde{\mathcal{H}}$ given above, we obtain the following wave equation for $\widetilde{\mathcal{H}}$ [30XI20.2] -02 + e-2Ht ij i j -3 H 0 -2H2 =-16(T00+e-2Htij Tij) . It is clear that the above wave equation for $\widetilde{\mathcal{H}}$ is decoupled, *i.e.*, it depends on $\widetilde{\mathcal{H}}$ alone and not on other perturbation variables, as desired. Finally, the evolution equation for the temporal-spatial part of the perturbation, *i.e.*, for $\widetilde{h}\_{0i}$, takes the following form [26XI20.1] -02 0i + e-2Htjk jk 0i - H 00i=-16T0i. Therefore, we have decoupled all the components of the gravitational perturbation, *i.e.*, purely spatial part $\widetilde{h}\_{ij}$, spatial-temporal part $\widetilde{h}\_{0i}$ and $\widetilde{\mathcal{H}}$, a combination of purely temporal part and spatial part. This provides the desired wave equations for the gravitational perturbations around the de Sitter spacetime in cosmological coordinates. Concluding, as an aside remark, the wave gauge condition $\bar{{\nabla}}\_{\alpha} \widetilde{h}^{\alpha}\_{\mu}=-2H \widetilde{h}\_{0\mu}$ in the cosmological coordinates becomes [wavegauge1] -00+e-2Htj j-H0 |gkl kl- H 0=0 . [9VII20.1] As it will turn out, it will play an important role in the subsequent section, where we would like to determine the corresponding residual gauge and highlight its connection with the multipole symmetry vector fields. Residual gauge transformations and multipole symmetries ------------------------------------------------------- In section [Sec.4], we have derived the multipole symmetry vector field *ξ**μ*, see [killingFRW2], in the cosmological coordinates, which respect the harmonic gauge condition of the de Sitter spacetime and preserves the relevant asymptotic fall-off condition, as in [25XII20.1]. It is instructive to see any connection between the multipole symmetries and the residual diffeomorphism vector fields preserving the wave gauge condition [wave gauge]-[wavegauge1], used in deriving the wave equations for linear gravitational perturbation in the cosmological coordinates. Suppose the wave gauge condition $\bar{\nabla}\_{\alpha} \widetilde{h}^{\alpha}\_{\mu}=-2H \widetilde{h}\_{0\mu}$ is preserved by the diffeomorphism generating vector field *ξ**μ*,[8](#fn8) whose form we would like to determine. First of all, note that under the transformation, *x**μ* → *x**μ* + *ξ**μ*, the trace-reversed gravitational perturbation transforms as [30XI20.1] =|+| -|g(| ) . The covariant derivatives appearing in the above expression are in the background de Sitter spacetime; expanding these derivatives in the cosmological coordinate yields =+-2H(0+ 0 )+H 0|g+ 2H 0 0 0-|g |g  . Thus, the wave gauge condition ${\bar \nabla}\_{\alpha} \widetilde{h}^{\alpha}\_{\mu}=-2H\widetilde{h}\_{0\mu}$, will also be modified under the above diffeomorphism. As we want to preserve it, the vector field *ξ**μ* must satisfy the following differential equation |+ 4 H0+2H2 -2H2 00 +2H 0|g-() 0 ij ij=0 . Using explicitly the metric for the de Sitter spacetime in the cosmological coordinates, the above differential equation can be reduced to a partial differential equation [30XI20.3] |g +H0+2 H2 -2H20 0-2H000=0 . In addition to the above condition on *ξ**μ*, we would like to see if this vector field can also be used to eliminate the $\widetilde{h}\_{0i}$ and $\widetilde{\mathcal{H}}$ components of the gravitational perturbation. The $\widetilde{h}\_{0i}$ component of the gravitational perturbation transforms under diffeomorphism as [h0ichange] 0i=0i+i0-2Hi . In order to proceed further, we have to take various derivatives of the residual wave gauge condition. Firstly, taking the time derivative of the spatial component of the residual gauge condition in [30XI20.3], we obtain [30XI20.4] |g 0 i=2H|gklkl i-H02i-2H2 0i . Similarly, taking the spatial derivative of the temporal component of the residual gauge condition in [30XI20.3], we obtain [30XI20.5] |g i0=H0i0 . Hence, we obtain the following result for the change in the spatial-temporal component of the gravitational perturbation under diffeomorphism $$\begin{aligned} \label{30XI20.6} \bar{g}^{\mu\nu}\partial\_{\mu}\partial\_{\nu}\delta\_{\xi}\widetilde{h}\_{0i}&= \bar{g}^{\mu\nu}\partial\_{\mu}\partial\_{\nu} (\partial\_{0}\xi\_{i}+\partial\_{i}\xi\_{0}-2H\xi\_{i}) \nonumber \\ &=2H\bar{g}^{kl}\partial\_{k}\partial\_{l} \xi\_{i}-H\partial\_{0}^{2}\xi\_{i}-2H^{2} \partial\_{0}\xi\_{i}+H\partial\_{0}\partial\_{i}\xi\_{0}+2H\left(H\partial\_{0}\xi\_{i}+2 H^{2} \xi\_{i}\right) \nonumber \\ &=H\partial\_{0}^{2}\xi\_{i}+H\partial\_{0}\partial\_{i}\xi\_{0}-2H^{2}\partial\_{0}\xi^{i}~,\end{aligned}$$ where in arriving at the first line we have used the identities derived in [30XI20.4] and [30XI20.5], respectively. Further, in the second line we have used the result *ḡ**k**l*∂*k*∂*l**ξ**i* = ∂02*ξ**i* − *H*∂0*ξ**i* − 2*H*2*ξ**i*, which follows from [30XI20.3]. Thus, using [h0ichange], we finally obtain the differential equation for the change in the spatial-temporal component of the gravitational perturbation under diffeomorphism |g(0i)-H0 (0i)=0 , which is the same of the wave equation satisfied by $\widetilde{h}\_{0i}$ in the absence of any source; see [26XI20.1]. Hence using the diffeomorphism vector field *ξ**α*, which preserves the wave gauge condition, we can set $\widetilde{h}\_{0i}=0$, outside the matter source. Again, under diffeomorphism, the combination $\widetilde{\mathcal{H}}$ of purely temporal and purely spatial part of the gravitational perturbation, transforms as &=&00+|gijij=400 , where we have used [30XI20.1] to compute ${\delta \widetilde h\_{00}}$ and $\bar{g}^{ij}\delta{\widetilde h\_{ij}}$. Further, using the differential equation satisfied by *ξ**α*, preserving the wave gauge condition, it can also be shown that $\delta\_{\xi}\widetilde{\mathcal{H}}$ satisfies the following differential equation |g() -3H0( )-2H2=0 , which, upon comparison with [30XI20.2], turns out to be identical to the wave equation satisfied by the perturbation variable $\widetilde{\mathcal{H}}$ outside the source. Thus, we can also set $\widetilde{\mathcal{H}}=0$ outside the source using the diffeomorphism *ξ**μ*, while preserving the gauge condition. To summarize, in addition to [30XI20.3], we also demand the conditions $\widetilde{h}\_{0i}=0=\widetilde{\mathcal{H}}$ should be preserved under diffeomorphism *ξ**μ*, namely $\delta\_{\xi} \widetilde{\mathcal{H}}=0$ and $\delta\_{\xi} \widetilde{h}\_{0i}=0$, to obtain $$\begin{aligned} \partial\_{0}\xi\_{0}&=0~, \\ \partial\_{0} \xi\_{i}+\partial\_{i} \xi\_{0}-2H \xi\_{i}&=0~.\end{aligned}$$ These differential equations can be immediately solved, yielding $$\
arxiv_0000062
. However, for a resonant frequency response of a floating structure, estimating some tail aspects may be less critical. Other responses such as fatigue are not governed by extremes of the environment in general. ### Linearising the failure surface Due to the complexity of solving Equation [E:pF], approximate approaches have been sought, including the use of environmental contours. For example, IFORM typically adopts a hierarchical model outlined in Section [Sct:EstCnt:IFORM] to estimate $f\_{{\boldsymbol{X}}}$ and hence contours of constant probability density in the transformed Gaussian ${\boldsymbol{U}}$-space, with given probabilities of non-exceedance. The direct sampling method of Section [Sct:EstCnt:JntExcCnt:DrcSmpCnt] generates contours in the original ${\boldsymbol{X}}$-space of environmental variables with given non-exceedance probability. Solving Equation [E:pF] is still not easy, since failure surface $g\_{{\boldsymbol{X}}}$ is unknown. To overcome this, the direct sampling and IFORM methods make the assumption that a linear approximation to the failure surface at the design point is appropriate. This approximation is made on the original ${\boldsymbol{X}}$-scale for direct sampling, and on the transformed ${\boldsymbol{U}}$-scale for IFORM; this is the key difference between the methods. There is no a-priori physical reason for assuming that linearisation of the failure surface is appropriate, and the assumption must be justified on engineering grounds on a case-by-case basis. In certain applications, for example of wave loads on fixed structures, extreme loads typically correspond to severe sea states; in this situation, we might assume load to be dominated by significant wave height *H**S*. It is then probably reasonable to assume that the set ${\boldsymbol{x}}$ of values (including *H**S*) such that $g\_{{\boldsymbol{X}}}({\boldsymbol{x}})<0$ (and the corresponding set ${\boldsymbol{u}}$ of values such that $g\_U({\boldsymbol{u}})<0$) is *convex*. In this case, assuming $g\_{{\boldsymbol{X}}}$ (or $g\_{{\boldsymbol{U}}}$) to be linear leads to a *conservative overestimate* of the probability of failure associated with a given contour. However, we emphasise that there is no guarantee that either $g\_{{\boldsymbol{X}}}$ or $g\_{{\boldsymbol{U}}}$ is convex in general. Hence there is no guarantee that linearising the failure surface is reasonable, and that the probability of failure will be smaller than that associated with the environmental contour. ### Finding governing conditions In a typical IFORM analysis, once the environmental contour $\{{\boldsymbol{u}}(\theta)\}$ (the surface of a hypersphere, for *θ* ∈ C) is estimated in ${\boldsymbol{U}}$-space, we find the point ${\boldsymbol{u}}(\theta^\*) \in \{{\boldsymbol{u}}(\theta)\}$ corresponding to the largest values of response (and hence the lowest value of structural failure *p**F* for given structural strength *S*). Then ${\boldsymbol{u}}(\theta^\*)$ is transformed back to a corresponding ${\boldsymbol{x}}(\theta^\*)$ (in terms of the original variables) which is taken as the design set corresponding to the specified failure probability. Other points of interest, for example the whole contour $\{{\boldsymbol{u}}(\theta)\}$ in the transformed space, can be similarly transformed to the original space. In a direct sampling analysis, the point ${\boldsymbol{x}}(\theta^\*)$ can be identified directly in ${\boldsymbol{X}}$-space. generalised IFORM to dynamic systems. describe the use of the second-order reliability method (SORM) for contour estimation. Adjusting contours for model mis-specification and short-term variation ----------------------------------------------------------------------- We use the *N*-year environmental contour for the set ${\boldsymbol{X}}$ to provide a computationally-fast but potentially biased estimate of the *N*-year response for the structure discussed in Section [Sct:EstCnt:Cnt2Rsp]. A typical approach is to estimate the distribution of the maximum response (in any 3-hour sea state, corresponding to a specified return period) given values of environmental variables $\{{\boldsymbol{x}}(\theta)\}$ on the contour to identify a “design point” ${\boldsymbol{x}}(\theta^\*) \in \{{\boldsymbol{x}}(\theta)\}$ yielding the largest structural response. Then a quantile *q**C* with non-exceedance probability *p**C* (for example typically the mode, median or mean) of the distribution $F\_{R|{\boldsymbol{X}}}(r|{\boldsymbol{x}}(\theta^\*))$ is used to estimate some (possibly different) quantile *q**R* with non-exceedance probability *p**R* of the distribution *F**M**R*, *N*(*r*) of the *N*-year maximum response. Quantile *q**C* is used as an estimate for *q**R*, where FR|(qC|(\*)) = pC and FMR,N(qR) = pR. There is no guarantee that *q**C* and *q**R* will coincide. Even if *p**C* = *p**R*, since $F\_{R|{\boldsymbol{X}}}(r|{\boldsymbol{x}})$ has a long right-hand tail for any environment ${\boldsymbol{x}}$, it is usually possible for “short-term variation” in “less extreme” sea states to contribute to the distribution *F**M**R*, *N*, whereas this is by definition not possible for the corresponding distribution $F\_{R|{\boldsymbol{X}}}(q\_C|{\boldsymbol{x}}(\theta^\*))$ from a single sea state ${\boldsymbol{x}}(\theta^\*)$. Mis-specification of the environmental model, or violation of assumptions concerning the relationship between environment and response, may also lead to disagreement between *q**C* and *q**R*. For this reason, it is useful to define an inflation factor Δ such that qR = qC, where we might expect Δ > 1 for $p\_C \approxeq p\_R$. The factor Δ can be used to inflate the whole environmental contour if desired. Standard makes recommendation for appropriate choices of *q**R*, *q**C* and the corresponding Δ. Section [Sct:Prc] provides illustrations of contour adjustment for simple simulation models. It is apparent that, in situations where short-term variability is relatively large compared with long-term variability, contour-based approaches should be use with great caution. Case studies: contours in practice ================================== Section [Sct:EstCnt] outlines various forms of an environmental contour. In the absence of a unified approach to defining and applying contours (see Section [Sct:Cnc] and Section [Sct:Int:Srv]), it is informative to consider the practicalities of environmental contour estimation. Our objective in Section [Sct:Prc] is to quantify how well estimates of extreme responses (in a three-hour sea state, for a particular return period) on a contour compare with estimates obtained by direct simulation of the response. In this sense we replicate the typical approach to application of contours: looking at response for a small set of environmental conditions, in the hope that this analysis approximates the characteristics of maximum response for that return period. In doing so, we discuss the key challenges in applying contours, including choice of contour, sampling along the contour and contour inflation. As opposed to typical applications, we perform our analysis for four responses, whose relationship to the environment is quantified entirely in terms of *H**S* and *T**P*, and is known to us. We are therefore able to simulate from the known distributions to estimate the correct characteristics of response, and hence to quantify the performance of contour-based estimated for maximum response. Then, in Section [Sct:DscCnc], we summarise our findings regarding the estimation and application of environmental contours for metocean design, with a particular focus on the appropriate use of contours, given the extent of knowledge about the response: environmental contours are clearly useful under certain conditions, but these conditions need to be carefully defined so that the user knows when environmental contours are likely to be a good option. For simplicity in the case studies below, we define the environment in terms of a large historical sample of sea-state *H**S* and *T**P* for a typical northern North Sea environment for the period 1979-2013, from the NORA10-WAM hindcast (). NORA10 (Norwegian ReAnalysis 10km grid) is a 58-year hindcast that has been developed by the Norwegian Meteorological Institute. It is a regional HIRLAM (atmosphere) and WAM Cycle-4 (wave) hindcast covering Northern European waters. The regional model uses wind and wave boundary conditions from the ERA-40 reanalysis (1958-2002) and is extended using the ERA-Interim reanalysis from 2002 onwards. NORA10 produces three-hourly wave and wind fields at 10km resolution. We isolate storm peak events using the procedure of. We then estimate structural responses using *known* non-linear functions of environmental variables corresponding to each storm event. To construct an environmental contour, we require a statistical model for the environment. Here, we achieve this by means of a conditional extremes model (Section [Sct:MdlEnv:Cnd]) for the historical sample, using a penalised piecewise constant (`PPC`) extreme value model () and software (outlined in [SctApp:PPC]). We choose the conditional extremes model because of its generality and flexibility to model different forms of extremal dependence (e.g. ). The `PPC` extreme value model allows the estimation of non-stationary marginal and conditional extremes for peaks over threshold using a simple description of non-stationarity with respect to covariates in marginal and dependence models. We use the `PPC` model to estimate a number of the environmental contours discussed in Section [Sct:EstCnt] and investigate their characteristics, in particular their relationship to extremes of structural response. Because of its recent popularity, we consider the direct sampling contour (Section [Sct:EstCnt:JntExcCnt:DrcSmpCnt]) in case studies 1 and 2. In case study 2, we also consider the joint exceedance contour outlined in Section [Sct:EstCnt:JntExcCnt:JntEA] and the isodensity contour (Section [Sct:EstCnt:IsoDnsCnt]). To estimate any of these contour methods requires a (*H**S*, *T**P*) sample simulated under the environmental model. The isodensity contour is similar to the approach of recommended in the standard. Case study 1 ------------ In this case study we consider the direct sampling contour of Section [Sct:EstCnt:JntExcCnt:DrcSmpCnt] only. The objective of the case study is to examine the general correspondence between estimates for the distribution of the 100-year maximum response *M**R*, 100. We compare an estimate from direct simulation of *R* (taken to be accurate) and one generated from combinations of *H**S* and *T**P* on the 100-year environmental contour. We make this comparison for a number of different responses. The procedure we use is intended to reflect common practice in industry. Once the contour is estimated, we identify a “frontier” interval of the contour which we think might be informative for estimation of response. In the current work, we assume that the “frontier” corresponds to the whole interval of the environmental contour lying close to pairs of *H**S*, *T**P* values present in the sample. Then we consider two possibilities: (a) that only a single combination of *H**S* and *T**P* corresponding to the maximum value *H**S*max of *H**S* on the contour is informative for estimating *M**R*, 100, and (b) that the whole frontier interval is informative. Then, for scenario (a), we estimate the distribution of maximum 100-year response $f\_{M\_{R,100}\text{Point}}(r) = f\_{R|{\boldsymbol{X}}}(r|H\_S^{\text{max}})$. For scenario (b), we estimate the ensemble distribution fMR,100(r) = k fR|(r|100(k)) where $\{{\boldsymbol{x}}\_{100}(\theta\_i)\}\_{k=1}^L$ defines a set of equally-spaced points on the 100-year contour on the frontier interval. We can then compare quantiles of the distributions from scenarios (a) and (b) with quantiles estimated from direct simulation of *M**R*, 100. notes a trade-off between the number of points on the contour used to evaluate the response, the quality of the estimate of response and the computation time required. A total of four responses *R*1, *R*2, *R*3, *R*4 were considered. Two responses correspond to output of a structural response simulator for maximum base shear (*R*1, for a typical fixed structure) and maximum heave (*R*2, for a floating structure), as a function of *H**S* and *T**P* for a three-hour sea state. These response simulators assume that the most probable value of maximum response in a sea state can be written as a closed form expression in terms of a number of sea state variables, including sea state *H**S* and *T**P*. The actual value of maximum response is then simulated from a Rayleigh distribution with the most probable maximum response as scale parameter. A further two synthetic responses are defined, which are simple deterministic functions of *H**S* and *T**P*, using the following equation $$\begin{aligned} R\_{i} = \frac{\alpha\_{i}H\_{S}}{(1+\beta\_{i}(T\_{p}-T\_{p0,i})^2)} \text{ for } i=3,4,\end{aligned}$$ [Eqn:SynRsp] where *T**p*0, *i* (in seconds) is the resonant peak period for response *R**i*. The values of {*α**i*, *β**i*, *T**p*0, *i*} are {2, 0.007, 7} and {2, 0.005, 26} for *i* = 3, 4 respectively. These combinations of parameters were chosen to provide large responses at different neighbourhoods of the environmental space, and hence to correspond to different frontier intervals. The distribution of maximum response *M**R*, 100 for synthetic responses *R*3, *R*4 was estimated by generating multiple environmental simulations corresponding to periods of 100 years, calculating response per sea state and storing only the maximum response observed and the values of *H**S* and *T**P* responsible for it. For responses *R*1, *R*2, `PPC` was used to extend the environmental model to include response; simulation under the model was then again used to accumulate the distribution of *M**R*, 100. For each response in turn, the mean value *M̄**R*, 100 of the maximum 100-year response *M**R*, 100 is plotted in Figure [Fgr:ToyScatter1], and coloured by the value of *M**R*, 100. Also plotted in the figure are direct sampling contours corresponding to 20, 30, 40, 50, 70, 100 and 200 years. Note that for each response *R**i*, only combinations of *H**S* and *T**P* giving rise to a least one occurrence of *M**R**i*, *N* appear in the figure. ![Mean 100-year maximum responses \{\bar{M}_{R_i,100}\} as a function of H_S and T_P estimated using 1000 realisations (of length 100 years) of H_S and T_P. Points are coloured by the local mean value of maximum response estimated on a lattice of values for H_S and T_P. Also shown are N-year (H_S,T_P) direct sampling environmental contours for different values of N; contours are coloured yellow to dark brown by return period, in order of N = \{20,30,40,50,70,100,200\} years. Panels on top row correspond to historic responses R_1 (left) and R_2 (right); panels on bottom row correspond to synthetic responses R_3 (left) and R_4 (right).](Fig1CntMeanRsp.jpg "fig:") [Fgr:ToyScatter1] The figure shows typical features of the different responses. Synthetic response *R*3 shows resonance effects *T**P* ≈ 13s. Maximum base sheer (*R*1) and synthetic response *R*4 increase with increasing *H**S* and *T**P*. This is true in general for maximum heave (*R*2), but there are clearly large values of *M**R*2, 100 within even the 20-year environmental contour. That is, there are relatively benign environmental conditions, not even exceeding the 20-year contour, which sometimes generate the 100-year maximum response. For contours to be useful, we would expect to see the largest values of 100-year maximum response lying outside the 100-year contour, and smallest values of response within it. This is approximately the case for all responses, but certainly not always true for *R*2. The extent to which the maximum response on the 100-year environmental contour agrees with the actual distribution of *M**R*, *N* from simulation can be assessed by comparing an estimate for the distribution of the true response against that evaluated for conditions on the contour, as illustrated in Figure [Fgr:ToyDensAll]. It shows kernel density estimates for {*M**R**i*, 100} estimated by direct simulation (in dashed blue; which can be regarded as “the truth”). The figure also shows corresponding kernel density estimates for *f**M**R*, 100Frontier of *M**R*, 100 from combinations of (*H**S*, *T**P*) lying on the contour frontier (scenario (b), shown in Figure [Fgr:ToyScatter1]), for a range of choices of *N*. ![Kernel density estimates for 100-year maximum responses \{M_{R_i,100}\}. Estimates based on direct simulation of response are shown in dashed blue. Other density estimates (solid lines) are calculated from (H_S, T_P) combinations lying near the corresponding N-year direct sampling contour shown in Figure [Fgr:ToyScatter1], for N = \{20,30,40,50,70,100,200\}. Coloured crosses indicate the location of the quantile of the response distribution with non-exceedance probability \exp(-1) along each contour; the blue dot gives the corresponding \exp(-1) true response from the blue curve. The factor \Delta by which the \exp(-1) response of the 100-year environmental contour would need to be inflated to give the true \exp(-1) 100-year response is given in the title to each panel.](Fig2AllCntDensMostPrb.jpg "fig:") [Fgr:ToyDensAll] There is an obvious ordering of response density estimates with increasing return period, particularly for responses *R*3 and *R*4 as might be expected. Further, the location of densities estimated from different *N*-year environmental contours agrees to some degree with that of the true density of *M**R*, 100. Moreover, for most cases the location of the quantile (see Section [Sct:EstCnt:Adj] with *p**C* = *p**R*) of the distribution of maximum response with non-exceedance probability exp( − 1) (henceforth the “exp( − 1)” value) of the density estimate from the 100-year environmental contour is in reasonable agreement with the location of the true exp( − 1) value. Following Section [Sct:EstCnt:Adj], defining Δ as the ratio of the exp( − 1) quantile of the true 100-year maximum response to the exp( − 1) quantile of the distribution from the 100-year environmental contour, we see that the environmental contour approach underestimates the exp( − 1) response by between 1% and 9% for these examples. We next perform a similar comparison of response distributions, this time using only (*H**S*, *T**P*) combinations near the point on the contour with maximum *H**S* (that is, scenario (a), to estimate *f**M**R*, 100Point). Results, shown in Figure [Fgr:ToyDensPnt], have similar general characteristics to those of Figure [Fgr:ToyDensAll]. Values of Δ in the interval (0.98, 1.04) are estimated. In the current illustrations, therefore, it appears that both scenarios (a) and (b) provide reasonable estimates for the exp( − 1) quantile of *M**R*, 100. ![Kernel density estimates for 100-year maximum responses \{M_{R_i,100}\}. Estimates based on direct simulation of response are shown in dashed blue. Other density estimates (solid lines) are calculated from (H_S, T_P) combinations near the point on the corresponding N-year direct sampling contour (Figure [Fgr:ToyScatter1]) corresponding to maximum H_S, for N = \{20,30,40,50,70,100,200\}. Coloured crosses indicate the location of the quantile of the response distribution with non-exceedance probability \exp(-1) along each contour; the blue dot gives the corresponding \exp(-1) true response from the blue curve. The factor \Delta by which the \exp(-1) response of the 100-year environmental contour would need to be inflated to give the true \exp(-1) 100-year response is given in the title to each panel.](Fig3CntPntDensMostPrb.jpg "fig:") [Fgr:ToyDensPnt] Case study 2 ------------ Here we extend the study of Section [Sct:Prc:Cas1] for responses *R*1 (maximum base shear) and *R*2 (maximum heave), specifically to make a comparison of direct sampling contours, joint exceedance contours (Section [Sct:EstCnt:JntExcCnt:JntEA]), and isodensity contours (from a conditional extremes analysis in Section [Sct:MdlEnv:Cnd]). For brevity, these approaches are henceforth referred to as “direct sampling”, “joint exceedance” and “empirical density” respectively in this section. Figure [Fgr:Toy1] shows minima {*M**R**i*, 100min} and maxima {*M**R**i*, 100max} values of maximum responses {*M**R**i*, 100} from the same 1000 simulations used to generate Figure [Fgr:ToyScatter1]. The colour of each disc in the top row indicates the value of the *minimum* 100-year maximum response seen for that combination of *H**S* and *T**P*, using the same algorithm as for Figure [Fgr:ToyScatter1] to identify near neighbours. The bottom row shows corresponding values of *maximum* 100-year maximum response. It is clear that there is considerable variability in response for a given pair of values for *H**S* and *T**P*. 100-year environmental contours from each of the direct sampling, joint exceedance and empirical density methods are also shown in the figure. All contours have a similar frontier interval. There is good agreement between the direct sampling and joint exceedance contours in particular on the frontier interval; this is not surprising since the underlying methods have similar motivations. For the same simulation size, the empirical density contour is more difficult to estimate without applying considerable smoothing. Comparing the top and bottom rows of the figure, it also appears that the natural variability in the response (near the frontier interval of the contours) dominates any variability in the value of response *along* the contours. In this case, therefore, none of the contours is particularly preferable; any of them would give approximately the same quality of estimate for *M**R*, 100. It is interesting and intuitively appealing that the (yellow) area of largest values of maximum response (on the bottom row of the figure) is centred approximately on the frontier interval of the contours. However, for synthetic response *R*3 in Figure [Fgr:ToyScatter1], we see that the frontier interval is offset (to lower *T**P*) from part of the environmental contour corresponding to largest *H**S*: focussing on an interval on the contour corresponding to largest *H**S* to estimate *M**R*, 100 would seem particularly suspect in this case, regardless of the choice of contour method. Overall, it appears that the key to success is ensuring that the response is quantified (using time-domain simulation or other) at a sufficient number of (*H**S*, *T**P*) combinations on or near the frontier interval of any reasonably well-defined contour. ![Minima (\{M_{R_i,100}^{\text{min}}\}, top row) and maxima (\{M_{R_i,100}^{\text{max}}\}, bottom row) of the 100-year maximum response, as a function of H_S and T_P estimated using 1000 realisations (of length 100 years) of H_S and T_P for responses R_1 (left) and R_2 (right). Points are coloured by the local minimum (top) or maximum (bottom) value of maximum response estimated on a lattice of values for H_S and T_P. Each panel also shows 100-year environmental contours from each of the direct sampling (black dot-dashed), joint exceedance (black dashed) and CE density (black solid) methods.](Fig4CntCompare.jpg "fig:") [Fgr:Toy1] Discussion and conclusions ========================== As shown in Section [Sct:EstCnt], environmental contours provide useful characterisations of the extent of the joint distribution of environmental variables. Some contour methods assume particular parametric forms for the (conditional) distributions of environmental variables; other methods generate convex contours on particular scales; other contour approaches are only defined on part of the domain of environmental variables. There is concern in the user community that a contour should “look right”, closely hugging the boundaries of scatter plots of historic or simulated environmental variables. The usual motivation for applying a contour approach in ocean engineering is to find environmental conditions efficiently (for a return period of *N* years say) which will generate approximately the *N*-year maximum response. Environmental contours therefore provide a means of reducing the burden of running full long-term response analysis for a wide range of environmental conditions. Different types of environmental contours find favour based on their ability to estimate the *N*-year maximum response from the *N*-year environmental contour. An environmental contour is estimated with no regard whatsoever to structural details. Since environmental contours are independent of structural specifics, they can then be used in principle to study different structures in a given environment provided that the underlying assumptions linking environment and structure are not violated. There is no fundamental link between points on an environmental contour and structural response in general, and no reasonable expectation therefore that points on the *N*-year environmental contour should yield the *N*-year maximum response. Attempts have been made to compare different environmental contour methods using a response-based criterion, although it is mathematically obvious and generally recognised in the user community that no single approach is appropriate for all structure and response types, and that considerable ambiguity will always remain. The manner in which an environmental contour relates to extreme response depends on the specifics of the structure. However, for typical *H**S*-driven structures, empirical evidence suggests the responses generated from points along the environmental contour in (*H**S*, *T**P*)-space for a given return period are reasonable estimates of the actual maximum response corresponding to the same return period. In the presence of resonant response and non-extreme values of *T**P*, using points from the contour near the maximum *H**S* can be misleading, since the response is not completely *H**S*-dominated. It is critical therefore that the dominant environmental variables are included in the estimation of environmental contour. It is apparent from physical considerations that extreme occurrences of some structural responses should not coincide with those of extreme environmental variables; the *N*-year environmental contour is unlikely to provide any guidance regarding the *N*-year maximum response for such responses. We also note methods to adjust (or inflate) contours to correct for sources of bias (for estimation of extreme response) including the effects short-term variability, violation of (marginal and dependence) modelling assumptions, uncertainty in parameter estimates, etc. There is some debate within the user community regarding the relative merits of using observations of the environmental variables for serially-dependent sea states, compared with near-independent storm peak characteristics. Given that the rates of occurrence of events are taken into account, both can provide useful estimates of joint models for the environment and hence environmental contours. The advantage of using sea state data is that sample size is large, potentially allowing a more detailed description of the joint distribution of environmental parameters to be estimated. However, because sea state data is serially-correlated, naive estimates of uncertainties for model parameters and inferences under the model will be too small, but can be corrected (for example, using sandwich estimators or bootstrap resampling). Multi-modal distributions of environmental variables can be caused by different physical processes or by covariate effects (for example, fetch length as a function of direction); in these cases, isodensity contours may be more reasonable summaries of the joint distribution than those based on joint exceedance, since there may be regions of low probability between modes. Further it may be useful to partition the environmental space by covariate and perform separate analyses per partition. Alternatively, it is also possible to estimate joint models incorporating covariate effects. Basing design conditions on the *N*-year environmental contour alone neglects sources of bias and variability in estimation of the *N*-year maximum response, including short-term variability in response. Approximate methods to inflate the environmental contour, or adjust its return period, are available. It is always possible, given fully-specified environments and structural responses, to estimate inflation factors which map some quantile of the distribution of *N*-year maximum response onto the corresponding quantile of the distribution of maximum response given sea states on the environmental contour. However, the value of inflation factor in general will be a function of quantile level, the structure and the response. It is likely that estimating inflation factors (or adjusting contour return periods) based on comparing central characteristics (for example, mean, median or mode) of response distributions will prove more stable, since Monte Carlo simulations of a given size provide better estimates of central characteristics than those of tails. In some applications, it may be that environmental contours will be used to estimate multiple correlated responses. In such cases, care needs to be taken that the contour is used to estimate the responses *jointly* corresponding to a given return period, rather than estimating independent marginal return values. As can be seen from the end-user survey in [SctApp:SrvFnd], there are valid concerns regarding a sensible definition of an environmental contour, its estimation and adjustment in relation to structural response. There are many articles in the literature comparing different contour methods, inflation factors etc. However, in reality, most sensibly-proposed contour methods (including the direct sampling, joint exceedance and empirical density examined here) locate the frontier interval of the contour within the same region of environmental space. Uncertainties due to the details of the structure, and even when the structure is defined to the natural variability of structural response given the environment, are in general issues of far greater concern. Any reasonable choice of contour, given the considerations explored in this paper, will suffice. Methods such as IFORM and the direct sampling method are advantageous in that they impose a link between environment and structure by make assumptions about the characteristics of failure surfaces as a function of the environmental variables. Given these assumptions, it is possible to link the exceedance probability associated with a given environmental contour with structural failure probability. Although conditions from an *N*-year environmental contour need not result exactly in *N*-year responses, IFORM and direct sampling provide at least some understanding of how an *N*-year environmental contour is related to the *N*-year maximum response. Both IFORM and direct sampling approaches assume a linearised failure boundary. The basic difference between the approaches arises from the fact that linearisation for IFORM is performed in the transformed ${\boldsymbol{U}}$-space, and in direct sampling approach in the original ${\boldsymbol{X}}$-space of environmental variables (e.g. and references therein). For both IFORM and direct sampling contours, the relationship established is between the exceedance probability associated with the contour (on some scale) and the probability of structural failure. This does not guarantee however that searching along an IFORM or direct sampling contour for return period *T* will isolate the key features of the *T*-year maximum response; the relative performance of IFORM and direct sampling in estimating extreme responses is application-dependent. For better specification of design conditions, a response model is necessary. To determine the frontier interval of a design contour within the domain of environmental variables, inflation factors for contours etc., some knowledge of, or working assumption regarding the response is required. As specification of the response evolves, sufficient, for example, to estimate the frontier interval and inflation factors, arguably there is already sufficient information (or supposition) to use better models to describe approximate responses and their uncertainties. From a statistical perspective, it is hard to avoid the impression that, in terms of estimating extreme structural response, an environmental contour is just an approximation to a sample from the tail of the distribution of environmental variables. In fact, for most purposes, an appropriate sample from the tail of the distribution of environmental variables would be preferable. Furthermore, with the advent of methods such as statistical emulation (used widely in approximating complex physical systems, including metocean design e.g. ), a computationally-efficient approximate response model can be estimated, along with its uncertainty, for many if not all applications. Given this, the emulator would provide a mechanism for response-based design in all situations, avoiding the need for environmental contours completely. It is apparent that the coastal engineering community is already moving in this direction (e.g. ); a key assumption with statistical emulation is that a representative set of cases is available to quantify all important relationships between environment and structural loading. Nevertheless, part of the user-cited strength of the environmental contour method is its relative simplicity and computational efficiency. There is clearly a trade-off between a thorough probabilistic response-driven analysis (which may be technically more involved and less familiar to the practitioner) compared to a simple approximate approach (with which the user is relatively familiar and confident). Given the above considerations, and comments in the end-user survey, to assist the practitioner in deciding when and where to use an environmental contour approach, we present the following brief check-list. We recommend the use of environmental contours in the following circumstances. When to use environmental contours ---------------------------------- **Nature of responses and environmental variables are known**: The dominant structural responses are all known. The dominant environmental variables driving each structural response are all known, and the value of response is dominated by long-term variability of the environmental variables: extreme environments produce extreme responses. The influence of short-term environmental variability is relatively small. [There are some responses (for example, fatigue) that are not dominated by extremes of the environment; environmental contour methods are therefore not appropriate.] **Response-based analysis is not possible**: (a) There are no adequate computationally-efficient structural response models available. [If these models are available, a response-based analysis should be performed.] (b) There are computationally-demanding structural response models available, but no time or expertise to develop approximate structural response models (for example, generic load models, statistical emulators) using these. [If these models can be estimated, a response-based analysis should be performed.] **At outline design stage**: The specifics of the structure may not be clear at outline design. For this reason, the environmental contour may provide a useful source of extreme sea states suitable for evaluating a range of different structures. Once it has been decided that an environmental contour approach is suitable, the following are then recommended. How to use environmental contours wisely ---------------------------------------- **Reality check**: Remember that environmental contours are approximate methods that can only provide approximations to extreme responses. The use of contour approaches may need to be supported in final design by full long-term analysis. **Sufficient environmental data available**: There are sufficient historical data available to estimate the joint distribution of all these environmental variables adequately using, for example, a method from Section [Sct:MdlEnv]. **Estimate more than one environmental model, and consider the sensitivity of the model to arbitrary modelling choices**: The sensitivity of environmental contour estimates to arbitrary choices made when estimating a model for the joint distribution of environmental parameters should be investigated. [When different equally-plausible environmental models provide different contour estimates, the current research suggests that all contours should be considered valid and used together for choice of environmental values corresponding to extreme responses. Equally, the user might well be concerned when two different environmental models provide materially different contour estimates (using a common contouring approach).] **If unsure which contour to use, estimate more than one type**: Each type of environmental contour is seeking to achieve different objectives. If you are not clear which contour is most suitable for your application, consider estimating contours of different types, and establish approximate consistency of inferences from different contours. [When different equally-plausible contours give materially different results, all contours should be considered valid and used together for choice of environmental values corresponding to extreme responses.] **Choose multiple points from the environmental contours for response evaluation**: Multiple combinations of values of environmental variables falling on or near the frontier interval of the environmental contour should be used. [When the frontier interval is not known, a wide set of combinations of values of environmental variables on or near the environmental contour should be used. If in doubt, choose more points and choose points more widely.] **Consider other sources of uncertainty**: (a) How influential are the effects of covariates (directionality, seasonality) [If important, fit a non-stationary environmental model (for example, using PPC from Section [Sct:Prc])]; (b) Have all environmental variables influencing the response been considered in the environmental model and contours? [If not, consider estimating environmental models and contours in higher dimensions]. Acknowledgement =============== This work was part-funded by the European Union ERA-NET project entitled “Environmental Contours for SAfe DEsign of Ships and other marine structures (ECSADES)”. The `PPC` software and user guide is freely-available as MATLAB code from the authors. We thanks Kevin Ewans for useful comments on the manuscript. Supporting information ====================== Survey findings --------------- Further details of the survey summarised in Section [Sct:Int:Srv] are given here. For brevity, the respondents full answers are summarised here. The full (anonymised) survey responses can however be provided on request. The survey questions were distributed to industry/consulting contacts of those involved in the ECSADES project as well as authors of key literature in the area. We do not claim that this sample is representative of the user community for contours, but hopefully it is informative. Of the 19 respondents, a large number were based in Norway with contributions also from the US, the Netherlands, the UK and Australia; with 7 from academia and 12 from industry/consulting. 1. How many times have you used environmental contours as part of your work in the last 3 years? * Different frequencies of use: from daily to hardly ever * Geographic variability: frequent use cited by respondents based in Norway, less frequent in the U.S. * Contours form an integral part of reliability assessment / design practices * Academically, contours garner less interest. Appearing mainly in post-graduate projects with industrial applications. 2. What kind of environmental contours do you use (for example, derived from FORM, or the conditional extremes model of Heffernan & Tawn, or other)? * Large user group for FORM/IFORM * Conditional extremes approach of * Marginal and dependence modelling based on threshold exceedance distributions * AND and JOIN-SEA cited for coastal applications * Kernel density estimation * The need for contours in any form also questioned in the event that direct response-based analysis is possible 3. For what purpose do you use environmental contours? Can you describe as precisely as possible the information you take from the environmental contour, and how you use this? Do you use the whole contour or just part of it? Which part of the contour, and why? Do you actually need information about the whole contour? Do you take any steps to account for uncertainties and potential biases? * Two dimensional contour representing “N-year” environmental conditions is typically estimated (never higher dimensions) * Response analysis performed on a subset of points along the environmental contour * Region of the contour over which this subset of points is focussed can be motivated by physical understanding of mechanisms driving response, for example, wave-dominated or resonant response * Sometimes only “short term” response analysis performed; additional extrapolation from “short-term” to “long-term” necessary. * N-year structural response is estimated from the responses resulting from conditions defined by points on the environmental contour by: + Inflating the response somehow: 90% quantile, median  ×  1.3 or other ad-hoc method, to adjust for short-term variability + Inflate “environment”: that is, explore responses for a contour representing a longer return period (N) * Bias, (epistemic) uncertainty and inherent (aleatory) randomness is almost never incorporated 4. What do you think are the advantages of using environmental contours? * Environmental contours provide a reasonable approximate approach to estimate N-year responses * Industry-accepted * Calculated without any knowledge of structure being designed; independent of the response; same contour can be used for range of responses * Simple to use, inexpensive, quick - especially for typical cases (for example, Hs-Tp) * Computational efficiency compared full time-domain analysis 5. What, in your opinion, are the disadvantages or problems associated with using environmental contours? Are there specific circumstances in which you find that environmental contours do not work well or are difficult to use? * Concern about details of choosing response from a subset estimated along the contour: using arbitrary quantiles and scale factors etc. * Simulations using short-term responses used to estimate long-term response; high quantile necessary. * Environmental contour method does not involve the response directly (so arbitrary calibration of response will always be necessary); “response-based” analysis is a better approach when possible * Clients do not understand what the contour represents and how it has been calculated, can therefore sometimes be a “hard sell” * Lots of approaches to defining the contour, not clear which (if any) is better * Naïve application of statistical methods can yield physically unreasonable contours (for example, non-physical wave steepness) * Approaches to extending to higher dimensions (3D and above) or to multiple responses are more ad-hoc, incorporating correct physics even more problematic * Need to check conclusions using full-scale simulation / long-term analysis * No obvious way to estimate the *distributional* properties of N-year response * Uncertainties are rarely quantified 6. Do you use design guidelines or other literature sources to guide your use of environmental contours? Could you list these? * Statoil procedures * Norsok N003 * DNV-RP-C205 (2014) * FORM/iFORM: * JOIN-SEA: * * Shell LSM model for response-based design `PPC` model ----------- The penalised piecewise constant (`PPC`) extreme value model allows the estimation of non-stationary marginal and conditional extremes for peaks over threshold using a simple description of non-stationarity with respect to covariates in marginal and dependence models. An early deployment of the `PPC` model as software is described in ; the current version of the `PPC` software, developed as part of the ECSADES project (see Section [Sct:Ack]), is freely available from the authors. For each observation ${\boldsymbol{x}}\_i$ in the sample $\{{\boldsymbol{x}}\_i\}$, we assume that an associated (potentially vector) covariate ${\boldsymbol{\theta}}\_i$ is available. The value of covariate ${\boldsymbol{\theta}}\_i$ is used to allocate the observation to one and only one of *n**C* covariate intervals {*C**k*}*k* = 1*n**C* by means of an allocation vector *A* such that *k* = *A*(*i*) and $\{{\boldsymbol{x}}\_i\}=\bigcup C\_k$. For each *k*, all observations in the set $\{{\boldsymbol{x\_{i'}}}\}\_{A(i')=k}$ with the same covariate interval *C**k* are assumed to have common extreme marginal and dependence characteristics. Non-stationary marginal extreme value characteristics of each variate *X**j* are then estimated in turn using a generalised Pareto model and cross-validated roughness-penalised maximum likelihood estimation. For variable *X**j* and covariate interval *C**k*, the extreme value threshold *ψ**j**k* > 0 is assumed to be a quantile of the empirical distribution of the variate in that interval, with specified non-exceedance probability *τ**j* ∈ (0, 1), with *τ**j* constant across intervals, and estimated by counting. Threshold exceedances are assumed to follow the generalised Pareto distribution with shape *ξ**j* ∈ R and scale *σ**j**k* > 0. *ξ**j* is assumed constant (but unknown) across covariate intervals, and the reasonableness of the assumption assessed by inspection of diagnostic plots. Parameters *ξ**j*, {*σ**j**k*} are estimated by maximising the predictive performance of a roughness-penalised model, optimally regulating the extent to which {*σ**j**k*} varies across interval, using a cross-validation procedure. After marginal fitting, the sample $\{{\boldsymbol{x}}\_i\}$ is transformed to standard Laplace scale as $\{{\boldsymbol{\tilde{x}}}\_i\}$ and the conditional extremes model outlined in Section [Sct:MdlEnv:Cnd] fitted. Following the notation of that section, for each choice of conditioning variate *X**q*, linear parameters ${\boldsymbol{\alpha}}\_{-q}$ for the conditioned variates vary across covariate bins with variation regularised using cross-validation to optimise predictive performance for each response in turn. The corresponding value of exponent parameters ${\boldsymbol{\beta}}\_{-k}$ is assumed constant with respect to covariates. Sets of residuals R− *q* from the fit, for each choice of *q* are also isolated. Simulation under the fitted model, and transformation of realisations to their original marginal scale is then possible, corresponding to a return period of arbitrary length. In particular, samples simulated under the model can be used to estimate environmental contours of different types, and explore their relative characteristics. In practice, the full `PPC` modelling procedure is repeated for *n**U* bootstrap resamples of the original sample to capture sampling uncertainty, each resample using a different choice of marginal and dependence thresholds to capture threshold uncertainty. Estimates for marginal and dependence parameters therefore correspond to *n**U* arrays of values capturing sampling and threshold specification uncertainty, which are used to propagate uncertainty from these sources into simulations under the model. We use the `PPC` model to estimate a number of the environmental contours discussed in Section [Sct:EstCnt] and investigate their characteristics, in particular their relationship to extremes of structural response. Specifically, because of its recently popularity, we consider the direct sampling contour (Section [Sct:EstCnt:JntExcCnt:DrcSmpCnt]); we also consider the joint exceedance contour outlined in Section [Sct:EstCnt:JntExcCnt:JntEA]. Both these contour methods use a simulated sample under the environmental model as starting point. Further we consider an isodensity contour (Section [Sct:EstCnt:IsoDnsCnt] based on the conditional extremes model of Section [Sct:MdlEnv:Cnd]; this approach infers the contour directly from the properties of the environmental model with no need for simulation. References ========== On environmental contours for marine and coastal design. ======================================================== Environmental contours are used in structural reliability analysis of marine and coastal structures as an approximate means to locate the boundary of the distribution of environmental variables, and hence sets of environmental conditions giving rise to extreme structural loads and responses. Outline guidance concerning the application of environmental contour methods is given in recent design guidelines from many organisations. However there is lack of clarity concerning the differences between approaches to environmental contour estimation reported in the literature, and regarding the relationship between the environmental contour, corresponding to some return period, and the extreme structural response for the same period. Hence there is uncertainty about precisely when environmental contours should be used, and how they should be used well. This article seeks to provide some assistance in understanding the fundamental issues regarding environmental contours and their use in structural reliability analysis. Approaches to estimating the joint distribution of environmental variables, and to estimating environmental contours based on that distribution, are described. Simple software for estimation of the joint distribution, and hence environmental contours, is illustrated (and is freely available from the authors). Extra assumptions required to relate the characteristics of environmental contour to structural failure are outlined. Alternative response-based methods not requiring environmental contours are summarised. The results of an informal survey of the metocean user community regarding environmental contours are presented. Finally, recommendations about when and how environmental contour methods should be used are made. extreme, structural reliability, return value, environmental contour, structural response, joint probability, IFORM. Introduction ============ Metocean design --------------- Currently, numerous approaches for establishing design criteria for metocean loads and responses of marine structures and coastal facilities are used by different practitioners. Some of these are included in marine industry standards and guidelines; others are internal standards of different organisations active in ocean engineering. Rigorous comparison of some approaches has been reported in the literature, but there is still uncertainty in the user community regarding the relative merits of different approaches. Within the marine industry, estimation of a joint metocean description has been considered for more than thirty years. It was shown that typically, environmental forces on marine structures may be reduced by 5% to 40% by accounting for the lack of complete dependence between metocean variables (wind, wave, current, etc.) traditionally used in design (e.g., ). Development of reliability methods (e.g. ) and their implementation by some parts of the industry in the 1980s brought joint probabilities into focus: they are required for a consistent treatment of the loading in Level III reliability analysis and for assessment of the relative importance of various metocean variables during extreme load and response conditions, fatigue damage and at failure. Until the middle of the 1990s, very few metocean data sets of sufficient quality were available, limiting development of joint probability models: this has changed during the last twenty years. Comprehensive hindcast data are now available for locations world-wide, including simultaneous values for wind, waves, current, sea water level, ice and snow of sufficient quality and duration. Today joint probabilities are referenced in industry standards and guidelines (e.g.,,, ). They are required for application of the Formal Safety Assessment (FSA) methodology in rule development, providing risk based goal-oriented regulations that are well balanced with respect to acceptable risk levels and economic considerations, as recommended in. Different standards describing the application of joint probability methods exist., and suggest that joint probability methods should be applied if reliable simultaneous data exist. further recommends that the duration of a data should be sufficiently long to capture the probability of occurrence for all combinations of importance regarding predictions of metocean actions and action effects. Further, in a case of wind, wave and current it recommends that at least three years of simultaneous data is required to characterise the lack of complete dependence between these variables reliably in design. Given that it is possible to establish a model for the “short-term” distribution of response given sea state parameters (for example, for a three hour sea state), states that the designer has essentially three different risk-based approaches to estimating the “long-term” distribution of response (corresponding to hundreds or thousands of years): a) the so-called “all short-term conditions” (or “all sea state”) approach, b) the ``storm event`` approach, and c) the environmental contour method, an approximate method using only short-term analysis. There are two distinct joint probability approaches in widespread use in coastal engineering practice in the UK (e.g. ). These are (a) a simplified method that involves the use of joint probability contours (JPC) and (b) a risk-based statistical method. Both approaches are implemented within the widely-used JOIN-SEA software system (). Environmental contours ---------------------- The environmental contour defines a set of extreme sea state conditions, and can be used to approximate extreme values of long-term structural response extremes by considering only a few short-term metocean conditions. Environmental contours are appealing since they can be specified for a given metocean environment independently of any structure; they are also linked to a well-established approach to structural design, familiar to practitioners. To establish them, joint probabilities of metocean parameters, historically in the metocean community in the form of tables, are needed. The idea behind the method is to define contours in the metocean space (for example, *H**S*, *T**P*) along which extreme responses with given return period should lay. It is a simplified and approximate method compared with full long-term response analysis but requires less computational effort. All environmental contour methods have a common goal of summarising the tail of the joint distribution of environmental variables, with a view to learning about the distribution of extreme structural response within a prescribed return period. This is achieved by identifying combinations of environmental variables (sometimes referred to as *governing conditions*) responsible for extreme structural response. Structural responses for combinations of environmental conditions lying on the contour can be used to estimate the extreme response due to a sea state with the same return period as the contour. Importantly, only combinations of metocean parameters lying on the contour need be considered. With additional a priori knowledge of the response, it is possible to limit the interval of the contour over which to evaluate structural response, substantially reducing the computational effort for calculating extreme response. An underlying assumption is that the extreme *N*-year response is governed by sea state conditions on the *N*-year environmental contour. Contours estimated using different methods will be different in general, since each method makes different assumptions in characterising the environment, or seeks to summarise the environment in a different way. Hence, the choice of environmental contour method will influence the estimation of the distribution of extreme response. Nevertheless, the environmental contour approach may be useful for early phase concept evaluation. For example, as stated by, if the application under consideration is of a very non-linear nature, an extensive model test program may be necessary to model the short-term variability for all important metocean conditions; the environmental contour approach can help identify those conditions. Some approaches to estimation of environmental contours (for example IFORM, ) make additional explicit assumptions regarding the nature of structural failure surfaces expressed in terms of (potentially transformed) environmental variables. When these assumptions are valid, statements regarding the relative magnitude of the exceedance probability of the *N*-year environmental contour and the *N*-year structural failure probability can be made more reasonably. However it is not always clear that the additional assumptions are satisfied for a given application. The environmental contour procedure as given by can be summarised as: (a) Establish environmental contours of the metocean characteristics (e.g. *H**S*, *T**P*) corresponding to some annual non-exceedance probability 1 − 1/*T* (for a *T*-year return period); (b) Identify the worst metocean condition along the contour for the response under consideration; (c) For this sea state, determine the distribution function for the appropriate three-hour (or possible one-hour) extreme value for the response under consideration; and (d) Estimate the value of the response (corresponding to the same annual non-exceedance probability 1 − 1/*T*) using the quantile of the distribution of response (from (c)) with non-exceedance probability *α*. A value *α* = 0.9 is recommended for ULS (the ultimate limit state), and *α* = 0.95 for ALS (the accidental limit state). The standard provides some guidance as to the adequacy of the approach in terms of the width of the distribution of response in (c). Although this standard discusses the environmental contour method for sea states of length three hours, the procedure can be applied to sea states of any appropriate length (for example, 30 minutes); we use three-hour sea states here for consistency and ease of explanation. recommends two environmental contour approaches: IFORM (, procedure similar to above), and a constant probability density approach (). The procedure for the latter can be summarised as: (a) Estimate a joint environmental model of sea state variables of interest; (b) Estimate the extreme value for the governing variable for the prescribed return period, and associated values for other variables (for example, 100-year *H**S* and conditional median *T**P*); and (c) Develop a contour line from the joint model or scatter diagram as the contour of constant probability density going through the parameter combination mentioned above. Using the environmental contour, an estimate of the extreme response is obtained by searching along the contour for the condition giving maximum characteristic extreme response. The contour method is affected by uncertainties related to metocean data and adopted joint models and has its own limitations which are pointed out by and. It will tend to underestimate extreme response levels because it neglects short term variability of response between different realisations of sea states. Both standards recommend approaches based on to account for this, including (a) increasing the return period corresponding to the contour, and hence *inflating* the environmental contours; (b) replacing the stochastic response by a fixed fractile level higher than the median value; or (c) applying multipliers of the median extreme response estimates, to introduce more conservatism. In the coastal engineering community, contours of joint exceedance probability of environmental variables are estimated using the JPC method (a) to find design events that form the boundary conditions for numerical and physical models for the purposes of structural design; and (b) to estimate return values of overtopping and overflow rates corresponding to some return period for use in flood mapping and risk analysis (). A series of combinations of values of environmental variables from the contour are tested in order to find the *worst case* value of the response. This worst case value is then assumed to have the same return period as the return period associated with the environmental contour. Since again, without further assumptions, there is no link between environmental contour and structural response, there are obvious short-comings to this approach, which are well recognised (e.g. ). The performance of different environmental contour methods has been investigated in several studies, including work by some of the current authors (including ). After consideration of the fundamental mathematical differences between different contour methods, it is unreasonable in general to expect to find any consistent trends in comparisons of contour methods across different applications. The characteristics of different environmental contour methods must be assessed on an application-by-application basis. There are a number of more fundamental reviews of environmental contour methods, including excellent recent work by. End-user survey --------------- As part of the ECSADES research project (see Section [Sct:Ack]), a survey was conducted on end-user practice in the use of environmental contours. The survey, receiving 19 respondents from industry and academia, consisted of questions aimed at establishing the following: (a) Frequency of use of environmental contours in structural reliability and design; (b) Variety in methods used to define the contour and popular sources of guidance and literature; (c) Variety in application of contours (how is information from the contour used); and (d) Perceived advantages and disadvantages of using environmental contours. Further details of the survey can be found in [SctApp:SrvFnd]. Two key insights resulted from the survey. Firstly, though respondents cited varying frequency of application of contours, they did appear to agree on contours forming an integral part of reliability assessment and design. Respondents cited contours as an “industry-accepted approach to approximating *N*-year responses quickly” (when compared to long-term time-domain methods), especially when there is little or no knowledge about the structure being designed - the same environmental contour potentially being applicable to a range of responses. Secondly, respondents cited that there is no single standard approach to defining the contour nor to applying it in the estimation of responses. Seven key sources of guidelines were cited, part of a collection of over fifty relevant papers collected by the authors of this paper. Further, respondents expressed concern over a lack of understanding of the meaning of the contours and the risks associated with naive application of statistical methods leading to physically unreasonable contours. The level of interest in application offshore is greater in Norway than other locations, although coastal practitioners use contours widely. These insights highlight the need for clarity both on the modelling choices available when defining contours, and on the applicability of contours given the information available for a given structure and environment. We have attempted to address the majority of comments emerging from the survey in this paper Objective and outline --------------------- The objective of this article is to (a) Overview the statistical ideas underpinning environmental contour methods, (b) Highlight fundamental differences between methods, (c) Explain the link between environmental contour and structural failure probability claimed by some approaches, (d) Provide simple software to allow a metocean practitioner to estimate a sensible model for general settings (based on extreme value analysis of a historical sample from the environment), and (e) Provide basic guidance regarding *when* and *how* environmental contour methods should be used sensibly. The layout of the article is as follows. Section [Sct:Cnc] discusses fundamental issues regarding the meaning and definition of return value in a multivariate setting, and lack of invariance of a probability density function under transformation of variables. It also discusses the procedure typically used to estimate the distribution of so-called “long-term” statistics (such as the *N*-year maximum response) from “short-term” statistics (such as the distribution of maximum response in a three-hour sea state). Section [Sct:MdlEnv] provides a description of different families of models for the joint distribution of environmental variables. Section [Sct:EstCnt] outlines the different kinds of environmental contours discussed in the literature. It also outlines a rationale to relate the characteristics of an environmental contour with a structural failure surface for some response, and describes an approach to modify environmental contours to account for the short-term variability of maximum response in a three-hour sea state. Section [Sct:Prc] provides a discussion of case studies used to illustrate the competing characteristics of different environmental contour methods, and the challenges of linking contour with response. Section [Sct:DscCnc] provides discussion, and a concluding protocol to aid the practitioner in deciding when and how to apply environmental contour approaches reasonably. Return values, transformation of variables and long-term statistics =================================================================== In this section, we start by considering the definition of a univariate return value, and consider the issues in extending this concept to the multivariate case. We illustrate the sensitivity of the probability density function for the joint distribution of environmental variables to transformation of variables. We also describe how the characteristics of a response for a sea state (“short-term”) can be used to estimate its characteristics over an extended period of time (“long-term”). Univariate return values ------------------------ Estimation of extremes for a single variable *X* is relatively straightforward and has been studied extensively (e.g. ). Given a representative set of independent observations of *X* spanning many years, extreme value analysis can be used to estimate the distribution *F**M**X*, 1 of the annual maximum *M**X*, 1. This in turn can be used to estimate the *T*-year return value *x**T* by solving the equation (MX,1>xT)=1-FMX,1(xT)=1/T. [E:RV1D] The *T*-year return value *x**T* for a single variable is therefore well-defined in terms of the tail of the cumulative distribution function *F**M**X*, 1 of the annual maximum *M**X*, 1 of *X*. Multivariate return values -------------------------- Unfortunately, the joint return value for two or more variables (*X*, *Y*) cannot be uniquely defined (e.g. ). For example, in the case of two variables (*X*, *Y*), we could define the return value (*x**T*, *y**T*) in terms of the joint distribution *F**M**X*, 1, *M**Y*, 1 of the annual maxima *M**X*, 1, *M**Y*, 1 using (MX,1>xT,MY,1>yT)=1-FMX,1,MY,1(xT,yT)=1/T. [E:RV2D] However, it is immediately apparent that there is no unique solution to this equation; given any pair (*x**T*, *y**T*) which satisfies the equation, we can increase *x**T* slightly, and reduce *y**T* such that the equation is still satisfied. That is, there is a continuum of solutions to the equation, which we can write as the set {*x**T*(*θ*), *y**T*(*θ*)} indexed by parameter *θ* ∈ C. As we vary *θ*, the pair (*x**T*(*θ*), *y**T*(*θ*)) maps out a contour (corresponding to {*x**T*(*θ*), *y**T*(*θ*)}) of constant exceedance probability 1/*T* in (*x*, *y*)-space. We note for clarity, that a contour refers to a closed curve in some space ((*x*, *y*)-space here), on which the value of some function is constant. We can rewrite the last equation as (MX,1>xT,MY,1>yT)=(MY,1>yT|MX,1>xT)(MX,1>xT)=1/T, motivating a definition of return value for the pair in terms of a marginal return value of *M**X*, 1 and a conditional return value of *M**Y*, 1 given *M**X*, 1. Two boundary cases exist: (a) when *M**X*, 1 and *M**Y*, 1 are perfectly correlated, points (*x**T*, *y**T*) on the solution contour would satisfy Pr(*M**X*, 1 > *x**T*) = 1/*T* since Pr(*M**Y*, 1 > *y**T*∣*M**X*, 1 > *x**T*) = 1 in this case, and (b) when *M**X*, 1 and *M**Y*, 1 are independent, solutions (*x**T*, *y**T*) would satisfy Pr(*M**Y*, 1 > *y**T*)Pr(*M**X*, 1 > *x**T*) = 1/*T* since now Pr(*M**Y*, 1 > *y**T*∣*M**X*, 1 > *x**T*) = Pr(*M**Y*, 1 > *y**T*). In general, the extent of dependence between pair *M**X*, 1 and *M**Y*, 1 will be somewhere between perfect dependence (a) or independence (b). To illustrate the importance of dependence in practice, we could for example choose to use the 100-year maximum wave height, wind speed and current speed to estimate the environmental loading with a return period of 100 years. If winds, waves and currents are perfectly correlated, the probability of this combination of variables occurring would be 10− 2 per annum as required. But if the variables were independent, the probability of this combination of variables occurring would be 10− 6 per annum, considerably less than 10− 2. Some design codes and guidelines suggest taking the 100-year return period of one (dominant) variable together with the values of associated variables corresponding to shorter return periods to accommodate dependence between variables (e.g. ). The DNV recommended practice for on-bottom stability of pipelines suggests the combination of the 100-year return condition for waves combined with the 10-year return condition for current or vice-versa, when detailed information about the joint probability of waves and current is not available. In summary: the concept of return value is not uniquely defined for more than one variable. In order to specify design values for more than one variable rationally, we need to understand and exploit the joint distribution of the variables. This leads naturally to consideration of joint probabilities, of environmental contours, and of structure variables (such as structural response to environmental loading) which capture the important joint characteristics of (a multivariate) environment in terms of a single “structure” or response variable (see Section [Sct:EstCnt]). ### Transformation of variables Suppose we describe the environment in terms of variables (*X*, *Y*) and a distribution with joint density *f**X*, *Y*(*x*, *y*). Suppose further we choose also to describe the environment in terms of transformed variables (*a*(*X*), *b*(*Y*)). Then the corresponding density *f**a*(*X*), *b*(*Y*)(*a*(*x*), *b*(*y*)) is given by fa(X),b(Y)(a(x),b(y)) &=& fX,Y(x,y) || && fX,Y(x,y) because of the influence of the final Jacobian term on the right hand side of the first line. As a result, if the set of values {*x*(*θ*), *y*(*θ*)} (for *θ* ∈ C, say) yield constant density in (*X*, *Y*)-space, the transformed set {*a*(*x*(*θ*)), *b*(*y*(*θ*))} will not do so in (*a*(*X*), *b*(*Y*))-space (if the absolute value of the Jacobian is not unity). Thus, contours of constant probability density on one scale will not be so on a different scale. However, contours defined in terms of cumulative distribution functions are invariant to monotonic transformations of variables: for example, a bivariate contour estimated in terms of *X* and *Y* would be equivalent to that estimated in terms of *a*(*X*) and *a*(*Y*). ### Distribution of the *N*-year maximum response We can formally evaluate the distribution *F**M**R*, *N* of the *N*-year maximum response *M**R*, *N* (in any 3-hour sea state, e.g., ) using FMR,N(r) = where *λ* is the expected number of storms per annum, and *F**R*(*r*) is the distribution of maximum response *R* in a random storm, given by FR(r) = s { 1 2... s d1 d2... ds } fS(s) ds where $f\_{\{{\boldsymbol{X}}\_i\}|S}(\{{\boldsymbol{x}}\_i\}|s)$ is the joint density of sea state variables for a storm of *s* sea states, and *f**S*(*s*) is the density of the number of sea states in a storm. $F\_{R|\{{\boldsymbol{X}}\_i\}}(r|\{{\boldsymbol{x}}\_i\})$ is the distribution of maximum response in a storm consisting of *s* sea states with variables $\{{\boldsymbol{X}}\_i\}\_{i=1}^s$, which can be written as FR|{i}(r|{i}) = i=1s FR|i(r|i) where $F\_{R|{\boldsymbol{X}}}(r|{\boldsymbol{x}})$ is the distribution of maximum response in a sea states with variables ${\boldsymbol{X}}$. If we have access to all of the distributions above, we can use Monte Carlo simulation, numerical integration, importance sampling or similar to estimate *F**M**R*, *N*. When this calculation is feasible, we can directly estimate the joint distribution $F\_{{\boldsymbol{X}}|M\_{R,N}}$ of the environmental variables given the occurrence of a *N*-year maximum response (in a 3-hour sea state); these values are often called *associated values* for the environmental variables given the *N*-year response, and can be represented as environmental contours using the methods of Section [Sct:EstCnt:JntExcCnt]-[Sct:EstCnt:IsoDnsCnt]. However, this calculation can be computationally complex, for example when the evaluation of $F\_{R|{\boldsymbol{X}}}$ is demanding, potentially involving time-domain simulation of environment-structure interaction, finite element analysis, etc.; other approximate approaches, including those exploiting environmental contours, are then appealing, as explained in Section [Sct:Prc]. In the coastal engineering literature, discussion tends to be in terms of “risk-based” estimation as opposed to estimation of “long term response”, since key concerns are the annual probability of failure or expected annual damage, over a particular epoch (for example, “present day” or “2050”). Modelling the joint distribution of environmental variables =========================================================== Given a sample $\{{\boldsymbol{x}}\_i\}\_{i=1}^n$ with ${\boldsymbol{x}}\_i=\{x\_{i1},x\_{i2},...,x\_{im}\}$ from the joint distribution of *m* environmental variables ${\boldsymbol{X}}$  = (*X*1, *X*2, ..., *X**m*), a number of different models for the joint distribution have been reported in the literature. Models can be categorised as being parametric (adopting a functional form for the density of the joint distribution) or non-parametric (typically using kernels for density estimation). Non-parametric models --------------------- The simplest form of non-parametric density estimation is kernel density estimation. The joint probability density function $f\_{{\boldsymbol{X}}}({\boldsymbol{x}})$ of ${\boldsymbol{X}}$ evaluated at ${\boldsymbol{x}}$ takes the form f()=i=1n k(;i,) for *n* kernel functions *k* with common parameters P centred at each of $\{{\boldsymbol{x}}\_i\}\_{i=1}^n$ such that for any ${\boldsymbol{x}}'$ k(;’,) d=1. A typical kernel choice might be the multivariate normal density $\phi({\boldsymbol{x}};{\boldsymbol{x}}',{\boldsymbol{\Sigma}})$ with mean ${\boldsymbol{x}}'$ and covariance matrix ${\boldsymbol{\Sigma}}$. Some of the parameters P can be set prior to estimation, and the remainder estimated by maximum likelihood estimation. For example, in the case of the multivariate normal density, we might set ${\boldsymbol{\Sigma}}=h^2 I\_m$ (where *I**m* is the *m* × *m* identify matrix) so that the model-fitting problem is reduced to estimating a single kernel width parameter *h*. Kernel density models are suitable in general for the description of the body of a distribution, and the choice of kernel parameters P tends not to be too critical to estimate central characteristics. In contrast, kernel density models are not suitable to describe tails of distributions, since the tail (away for locations of observations $\{{\boldsymbol{x}}\_i\}\_{i=1}^n$) is strongly influenced by the choice of kernel *k* and kernel parameters P. Copula models ------------- Consider a two-dimensional (*H**S*, *T**P*) environment. We might try to describe the joint density of *H**S* and *T**P* as the product of marginal densities *f**H**S*(*h*), *f**T**P*(*t*) for *H**S* and *T**P*, and a function *ρ*2(*F**H**S*(*h*), *F**T**P*(*t*)) describing their dependence fHS,TP(h,t) = fHS(h) fTP(t) 2(FHS(h),FTP(t)) where *F**H**S*(*h*) and *F**T**P*(*t*) are marginal cumulative distribution functions for *H**S* and *T**P*. *ρ*2 is the probability density function of a two-dimensional copula, a multivariate probability distribution for which the marginal probability distribution of each variable is uniform. Copula models are useful since they focus on describing the dependence structure between random variables. Before estimating the copula model, we fit marginal distributions to *H**S* and *T**P*; tails of marginal distributions can be estimated using extreme value models. In general for set ${\boldsymbol{X}}=\{X\_1,X\_2,...,X\_m\}$ of environmental variables, the copula density takes the form f() = m(FX1(x1), FX1(x2),..., FXm(xm)) where *ρ**m* is the density of an *m*-dimensional copula. There is a huge literature on copulas (e.g., ), and there are many families of copulas (including the Gaussian and Archimedian), and some (so-called *max-stable* or inverted max-stable copulas) more suited to the descriptions of extreme environments. illustrate the specification and estimation of multivariate extreme value models using copulas. discusses the special class of extreme value (or max-stable) copulas appropriate for describing joint tails of distributions of component-wise maxima. provides an excellent review of extreme value copulas and their relationship to max-stable processes. Copula methods have received some attention in the ocean engineering literature. discuss the use of copulas in metocean design. discuss the estimation of environmental contours using copula methods. and propose bivariate extreme value models incorporating non-stationary marginal and dependence inference. Asymmetric copula models were found to be necessary to model *H**S*, *T**Z* by. Hierarchical conditional models ------------------------------- In a hierarchical model, the structure of the dependence between environmental parameters takes on a particularly advantageous form. Again consider the case of *H**S* and *T**P*; the joint density *f**H**S*, *T**P*(*h*, *t*) can be written in the form fHS,TP(h,t)=fTP|HS(t|h)fHS(h). It is always possible to factorise the joint density into the product of densities. For *H**S* and *T**P*, because of their physical characteristics, the densities *f**T**P*∣*H**S* and *f**H**S* are relatively simple: a Weibull distribution for *H**S* has been used for many years, and a log-normal distribution for *T**P*∣*H**S* (e.g.,, ). These are combined to estimate the joint density *f**H**S*, *T**P*. More generally, for example in the case of three environmental variables {*X*1, *X*2, *X*3}, it is always possible to factorise the joint density as fX1,X2,X3(x1,x2,x3)=fX3|X1,X2(x3|x1,x2)fX2|X1(x2|x1)fX1(x1) with equivalent factorisations for permutations of the three variables. Estimating the joint distribution on the left hand side therefore reduces to estimating all of the distributions on the right hand side. Depending on the statistical characteristics of {*X*1, *X*2, *X*3}, estimating all the distribution on the right hand side may be more straightforward to achieve in practice, in which case the factorisation is useful. Specifying a physically-realistic and useful conditional structure for *m* variables {*X*1, *X*2, ..., *X**m*} becomes increasingly problematic as *m* increases;,, provide examples with different levels of complexity of dependence structure. The conditional structure can often be usefully expressed as a graphical model (e.g. ). Once a useful conditional structure is established, we need to estimate the densities involved. In general, different functional forms are considered based on inspection of the data. For the tail of the distribution of a random variable, a standard tail distribution (for example, Weibull, Gumbel, Frechet, generalised Pareto, generalised extreme value) would seem to be a reasonable choice. Choice of a suitable generic density for conditional densities for (say) *X*2∣*X*1 or *X*3∣*X*2, *X*1 is less obvious. propose a two-parameter Weibull distribution to describe the conditional distribution of wind speed given *H**S*. applies the approach to specify the joint distribution of a relatively large number of environmental variables. Ideas from hierarchical graphical and copula models can be combined, as illustrated by in a metocean context. Conditional extremes model -------------------------- The conditional extremes model is motivated by the existence of an asymptotic form for the limiting conditional distribution of one or more conditioned random variables given a large value of a conditioning variable for a large class of distributions (e.g. ), on particular standard marginal scales. For a set ${\boldsymbol{X}}=\{X\_1,X\_2,...,X\_m\}$ of environmental variables, it provides a flexible framework to estimate the joint distribution of ${\boldsymbol{X}}\_{-k}=\{X\_1,X\_2,...,X\_{k-1},X\_{k+1},...X\_m\}$ given that *X**k* (*k* = 1, 2, ..., *m*) is extreme in its marginal distribution. The modelling procedure proceeds in four steps: (a) marginal extreme value modelling of each of {*X**j*} independently, followed by (b) marginal transformation of each of {*X**j*} independently to the corresponding variable in {*X̃**j*} with standard Laplace marginal distribution, (c) dependence modelling of ${\boldsymbol{\tilde{X}}}\_{-k}|\tilde{X}\_k>\psi$ for large *ψ* for each *k*, and (d) simulation under the estimated model to estimate return values, environmental contours, etc. The conditional extremes model with parameters ${\boldsymbol{\alpha}}\_{-k} \in [0,1]^{m-1}$ and ${\boldsymbol{\beta}}\_{-k} \in (-\infty,1]^{m-1}$ is given by -k|{k=k} = -k k + k-k -k k>, k where ${\boldsymbol{W}}\_{-k}$ is a residual process assumed to be distributed as ${\boldsymbol{W}}\_{-k} \sim \texttt{MVN}({\boldsymbol{\mu}}\_{-k}, \texttt{diag}({\boldsymbol{\zeta}}\_{-k}))$ with mean ${\boldsymbol{\mu}}\_{-k}$ and variance ${\boldsymbol{\zeta}}\_{-k}$ (elements of which positive) for model estimation only. Threshold *ψ* ∈ R is defined as the quantile of the standard Laplace distribution with appropriately high non-exceedance probability *κ* ∈ (0, 1). provide additional constraints on the parameters of the conditional extremes model. An outline of the approach is given by in application to wave spectral characteristics. Extensions incorporating covariates (), to conditioning on multiple locations (), to modelling the evolution of time-series () and the spatial distribution of extremes () have recently been reported. The main advantage of the conditional extremes model compared with copula or hierarchical models is that it incorporates a full class of asymptotic extremal dependence (e.g. ), and also allows relatively straightforward extension to higher dimensions. Of course, the conditional extremes method relies on having sufficient sample to be able to estimate the marginal and conditional tails adequately. Estimating contours =================== In this section, we outline the different types of contour (in Sections [Sct:EstCnt:JntExcCnt] and [Sct:EstCnt:IsoDnsCnt]) typically used to describe the environment, and then discuss how response-based design (Section [Sct:EstCnt:Cnt2Rsp]) also yields joint distributions of *associated* environmental variables which can be usefully summarised using a contour. First, we provide a brief summary of some of the literature on joint modelling of the ocean environment leading to contour estimation. Overview of literature ---------------------- Joint modelling of environmental variables, and the construction of environmental contours, has a long history., present joint models for environmental variables from which environmental contours can be estimated. introduces the IFORM method, motivated by transformation of the joint distribution of environmental variables to standard multivariate Normal using the Rosenblatt transformation. provides a comparison of stochastic process models for definition of design contours. and present methods for estimation of joint exceedance contours based on direct Monte Carlo simulation under a model for the joint distribution of environmental variables. estimate highest density contours, again using a random sample simulated under a model for the joint distribution of environmental variables. also provides an illuminating discussion of the characteristics of different approaches to environmental contour estimation. The approaches of, and seek only to find contours which describe the distribution of environmental variables. The methods of and, with extra assumptions, provide a direct link between the characteristics of the environmental contour and structural failure. and provide comparisons of different approaches to contour estimation. Other literature (e.g.,, ) discusses how joint models for the environment can be combined with simple models for structural responses given environment, to estimate the characteristics of response directly. As a result, the joint distribution of environmental variables corresponding to an extreme response can be estimated. discusses incorporating the effects of direction and other sources of non-stationarity or homogeneity in design contour estimation. More generally, it is interesting also to consider the influence of different structural responses (and modes of failure) active for a particular structure on the desired characteristics of corresponding environmental contours. For example, if there are *n**R* *independent* structural responses, and the structure designed so that the probability of failure with respect to each response is *p**F*, then the overall failure probability is *n**R**p**F*; yet if the responses are perfectly correlated the overall failure probability is still only *p**F*. It would seem reasonable in general to design to the overall failure probability, and to adjust failure probabilities for individual responses to account for dependence; this in general would also require inflation of environmental contours. We note recent developments which seek to estimate buffered environmental contours () which incorporate not just structural failure, but the extent of structural failure. Because of its prevalence in ocean engineering practice, we start our overview of methods for contour estimation using the IFORM approach. Then (in Section [Sct:EstCnt:JntExcCnt] and Section [Sct:EstCnt:IsoDnsCnt]) we describe related approaches to estimating joint exceedance and isodensity contours. Finally, we consider direct estimation of the distribution of long-term response (in Section [Sct:EstCnt:Cnt2Rsp]). IFORM contours -------------- The IFORM method of typically assumes a hierarchical model (Section [Sct:MdlEnv:Hrr]) for the joint distribution of the environmental variables. We assume that we can describe the joint distribution of variables ${\boldsymbol{X}}$ sufficiently well that a transformation of variables is possible, so that the joint probability distribution of the transformed variables takes on a particularly simple form. The transformation is achieved by re-expressing the set ${\boldsymbol{X}}=\{X\_1, X\_2,..., X\_m\}$ as a set of (independent) conditional random variables ${\boldsymbol{\tilde{X}}}=\{{{\tilde{X}}}\_1, {{\tilde{X}}}\_2,..., {{\tilde{X}}}\_m\}$. For example, for appropriately ordered variables we can write *X̃*1 = *X*1, *X̃*2 = *X*2∣*X*1 and *X̃*3 = *X*3∣*X*1, *X*2 and so on, with cumulative distribution functions *F**X̃*1, *F**X̃*2, … such that F()=j=1m Fj(xj). That is, by design, the random variables ${\boldsymbol{\tilde{X}}}$ are independent of each other, and can hence be transformed independently to standard Gaussian random variables ${\boldsymbol{U}}$  = {*U*1, *U*2, ..., *U**m*} via the probability integral transform Fj(x) = (uj) j=1,2,...,m where Φ is the cumulative distribution function of the standard Gaussian distribution. Isodensity contours in ${\boldsymbol{U}}$-space, with a given non-exceedance probability, can be back-transformed to the original physical space. It should be noted (Section [Sct:Cnc:TrnVrb]) that isodensity in ${\boldsymbol{U}}$-space does not correspond to isodensity in ${\boldsymbol{X}}$-space however. The non-exceedance probability corresponding to the IFORM contour can be related to the probability of structural failure, given certain assumptions, as explained in Section [Sct:EstCnt:Cnt2Rsp]. We also note recent work by on constructing inverse second-order (ISORM) contours. Joint exceedance contours ------------------------- The equations in Section [Sct:Cnc] illustrate the key characteristic of a *T*-year return value: namely that it defines a region A with closed boundary $\{{\boldsymbol{x}}(\theta)\}$ for *θ* ∈ C of the domain over which variables are defined associated with probability 1 − 1/*T* per annum, and a complementary set with “exceedance” probability 1/*T*. In a multivariate setting, for a pair of variables for simplicity, Equation [E:RV2D] shows one way to define A using the joint cumulative distribution function, leading to the so-called *joint exceedance* contour. However, A could be defined quite arbitrarily, provided that it corresponds to the desired non-exceedance probability. In practice, A might even correspond to the union of disjoint sets; the only requirement is that the probability *p* associated with A is 1 − 1/*T*. The IFORM procedure of Section [Sct:EstCnt:IFORM] provides a specific approach to the estimation of region A and hence of joint exceedance contours. IFORM is used typically for offshore applications. Joint exceedance contours are also widely used in coastal applications(e.g., ), and their limitations in terms of naive estimation of extreme responses recognised for some time. ### Direct sampling contours () The IFORM-method (Section [Sct:EstCnt:IFORM]) produces a contour where the probability of any convex failure region in the transformed Gaussian ${\boldsymbol{U}}$-space which do not overlap with the interior of the contour is less than or equal to a given target probability. When this contour is transformed back to the environmental ${\boldsymbol{X}}$-space, however, this probabilistic interpretation is no longer valid (as explained in Section [Sct:Cnc:TrnVrb]). The direct sampling contour () is constructed so that it has the same probabilistic properties in the environmental space as the IFORM contour has in transformed space. This implies that the region A enclosed by the direct sampling contour is always convex; that is, for any two points in A, the straight line joining them would also be in A. Estimating the direct sampling contour in two-dimensions is relatively easy, based on a simulation under a model for the joint distribution of variables *X*1 and *X*2 (see Section [Sct:MdlEnv]). For probability level *α*, following, we first find the function *C*(*θ*), the (1 − *α*)-quantile of the distribution of the projection *X*1cos(*θ*) + *X*2sin(*θ*) for each value of *θ* ∈ [0, 2*π*) C()={C:()>C)=}. Then we estimate the contour C = {(*x*1(*θ*), *x*2(*θ*)) : *θ* ∈ [0, 2*π*)} using x1() &=& C() () - (), x2() &=& C() () + (), and potentially further smooth C as a function of *θ*. Following, for at *T*-year return period, it is recommended that the value of *α* be set to 1/*T*. Generalisations to higher dimensions is mathematically straightforward; three-dimensional contours based on the direct sampling approach are presented in. ### Joint exceedance contours () propose joint exceedance contours for which a particular probability at any point on the contour is constant. Specifically, in two-dimensions, the closed contour {(*x*1(*θ*), *x*2(*θ*)) : *θ* ∈ [0, 2*π*)} is defined by (j=12 (rj(; r\*) Xj > rj(; r\*) xj())) = for *α* ∈ (0, 1), where *r*(*θ*; *r*\*) = {*r*1(*θ*; *r*\*), *r*2(*θ*; *r*\*)} is defined by *r*(*θ*; *r*\*) = *x*(*θ*) − *r*\* and *r*\* is a reference location for the distribution under consideration. In some cases, it is appropriate that *r*\* refer to some central feature (for example, mean, median or mode). In other situations, when we are interested solely in the large values of a variable *X*1 (say), it might be appropriate to set *r*1\* = 0. We estimate the contour using simulation under a model for the joint distribution of *X*1, *X*2; but we might choose to estimate the contour for any transformation of variables, in particular to independent standard normals *U*1, *U*2. The probability *p* associated with region A enclosed by the contour is a (generally unknown) function of *α*. The value of *p* can be set to approximately 1 − 1/*T* by iteration over different choices of *α*. Again, we can potentially further smooth the contour as a function of *θ*. Isodensity contours ------------------- Another obvious approach would be to define contours using the joint probability density function $f\_{{\boldsymbol{X}}}$ of the environmental variables instead of its cumulative distribution function $F\_{{\boldsymbol{X}}}$. If we assume to start that $f\_{{\boldsymbol{X}}}$ is uni-modal, we might choose a value *τ* such that f()=defines a closed contour $\{{\boldsymbol{x}}\_\tau(\theta)\}$ for *θ* ∈ C enclosing a set A in ${\boldsymbol{x}}$-space such that $f\_{{\boldsymbol{X}}}>\tau$ within A. This defines an *isodensity* contour (or a contour of constant probability density); see e.g.. The approach can be extended to include multi-modal $f\_{{\boldsymbol{X}}}$ (, ). Kernel density estimation (Section [Sct:MdlEnv:NnP]) is a popular choice for estimation of isodensity contours, but this choice is problematic for estimating the tails of distributions since the relatively arbitrary choice of kernel function and its width tend to dominate the shape of the tail; we would prefer that the shape of the tail was informed more directly by the data. Relating environmental contours to response ------------------------------------------- If the objective of a study is to estimate environmental conditions corresponding to extreme structural responses, the obvious approach is direct estimation of the characteristics of the *N*-year maximum response and the environments which generate it. In contrast to Section [Sct:EstCnt:JntExcCnt]-[Sct:EstCnt:IsoDnsCnt], the purpose of this analysis is not characterisation of extreme environments, but rather of environments related to extreme responses. Response-based methods obviously require at least some information about the response function. Of course, if structural response is monotonically related to a dominant environmental driver variable, then the resulting contours may be quite similar; but this is not always the case. A number of different approaches relying on some knowledge of the response have been developed and applied over the past thirty years (e.g. ). ### Approximating the response When evaluation of $R|{\boldsymbol{X}}$ is demanding (see Section [Sct:Cnc:Str2Lng]), an alternative approach is to adopt a simple approximation to the response function which can be easily evaluated. Sometimes, the functional form is relatively apparent from physical considerations (for example, the semi-empirical Morison equation () for the drag and inertial forces on a body); in this case, it is usually necessary to set the parameters of the response function to correspond with the structure of interest. Nevertheless, once set, structural loads can be estimated quickly for given environment ${\boldsymbol{x}}$. This is the basis of, for example, the *generic load model* of, and the response-based joint probability model in coastal applications (e.g. ). More generally, a statistical model (known as an *emulator*) can be used to estimate $R|{\boldsymbol{X}}$. The emulator is estimated by evaluating $R|\{{\boldsymbol{X}}={\boldsymbol{x}}\}$ for points ${\boldsymbol{x}}$ drawn from a set of representative environments ${\boldsymbol{X}}$ (which itself can be a computationally demanding analysis), and then fitting a statistical model such as a response surface to explain response in terms of the environmental variables. Once estimated, the emulator provides rapid evaluation of $R|\{{\boldsymbol{X}}={\boldsymbol{x}}\}$ for any ${\boldsymbol{x}}$, and hence of associated environmental values corresponding to the *N*-year maximum response (in Section [Sct:Cnc:Str2Lng]). It is a natural framework for error propagation and uncertainty quantification. ### Sampling from environmental contours The direct estimation of the distribution *F**M**R*, *N* of the *N*-year maximum response *M**R*, *N*, and hence of the joint distribution $F\_{{\boldsymbol{X}}|M\_{R,N}}$ of environmental variables associated with it is often computationally complex. In such cases, reducing the number of evaluations of $R|{\boldsymbol{X}}$ made is advantageous. It is intuitive therefore that we should focus on values ${\boldsymbol{x}}$ of environmental variables (to evaluate $R|\{{\boldsymbol{X}}={\boldsymbol{x}}\}$) corresponding to extreme environments to achieve this; the *N*-year environmental contour provides one approach to identifying those environments. We can also associate the *N*-year environmental contour with the probability of structural failure in the same period, as described in Section [Sct:Int:EnvCnt] and outlined here. ### Reliability theory Engineering codes stipulate that marine structures should be designed to exceed specific levels of reliability, usually expressed in terms of an annual probability of failure. Reliability theory provides an approach to estimating required structural strength and environmental design conditions causing failure. Structural failure is assumed to occur when structural loads *R* exceed structural resistance or strength *S*, expressed in terms of the equation S-R=g()<0, where $g\_{{\boldsymbol{X}}}({\boldsymbol{X}})$ is a limit state expression for a particular failure mechanism (or “failure surface” for brevity) and the vector ${\boldsymbol{X}}$ represents all of environmental, hydraulic loading and structural variables appropriate to a particular problem, with joint probability density function $f\_{{\boldsymbol{X}}}({\boldsymbol{x}})$. In the context of floating structures and ships, *R* might correspond to a motion response of the vessel (such as roll) and *S* to a limiting value for roll at which structural integrity is deemed impaired. The probability *p**F* of structural failure can then be evaluated using pF = (g() < 0) = g() < 0 f()  d. [E:pF] For a given environment, *p**F* can clearly be reduced by increasing *S*, since then the region of ${\boldsymbol{x}}$ space for which *S* − *R* < 0 is reduced. In this way, structural strength can be adjusted to achieve desired *p**F*. Solving Equation [E:pF] however presents multiple challenges of first specifying $g\_{{\boldsymbol{X}}}$ and $f\_{{\boldsymbol{X}}}$ adequately, and then performing the computation reasonably. We note that the form of $g\_{{\boldsymbol{X}}}$ is in general quite arbitrary, and that estimating *g* adequately for a full-scale structure is likely to be problematic. If we suspect that extreme environments ${\boldsymbol{X}}$ produce extreme responses *R*, adequate characterisation of the joint tails of $f\_{{\boldsymbol{X}}}$ will be necessary to estimate *p**F* well, requiring careful multivariate extreme value analysis of the environment. However, for a resonant frequency response of a floating structure, estimating some tail aspects may be less critical. Other responses such as fatigue are not governed by extremes of the environment in general. ### Linearising the failure surface Due to the complexity of solving Equation [E:pF], approximate approaches have been sought, including the use of environmental contours. For example, IFORM typically adopts a hierarchical model outlined in Section [Sct:EstCnt:IFORM] to estimate $f\_{{\boldsymbol{X}}}$ and hence contours of constant probability density in the transformed Gaussian ${\boldsymbol{U}}$-space, with given probabilities of non-exceedance. The direct sampling method of Section [Sct:EstCnt:JntExcCnt:DrcSmpCnt] generates contours in the original ${\boldsymbol{X}}$-space of environmental variables with given non-exceedance probability. Solving Equation [E:pF] is still not easy, since failure surface $g\_{{\boldsymbol{X}}}$ is unknown. To overcome this, the direct sampling and IFORM methods make the assumption that a linear approximation to the failure surface at the design point is appropriate. This approximation is made on the original ${\boldsymbol{X}}$-scale for direct sampling, and on the transformed ${\boldsymbol{U}}$-scale for IFORM; this is the key difference between the methods. There is no a-priori physical reason for assuming that linearisation of the failure surface is appropriate, and the assumption must be justified on engineering grounds on a case-by-case basis. In certain applications, for example of wave loads on fixed structures, extreme loads typically correspond to severe sea states; in this situation, we might assume load to be dominated by significant wave height *H**S*. It is then probably reasonable to assume that the set ${\boldsymbol{x}}$ of values (including *H**S*) such that $g\_{{\boldsymbol{X}}}({\boldsymbol{x}})<0$ (and the corresponding set ${\boldsymbol{u}}$ of values such that $g\_U({\boldsymbol{u}})<0$) is *convex*. In this case, assuming $g\_{{\boldsymbol{X}}}$ (or $g\_{{\boldsymbol{U}}}$) to be linear leads to a *conservative overestimate* of the probability of failure associated with a given contour. However, we emphasise that there is no guarantee that either $g\_{{\boldsymbol{X}}}$ or $g\_{{\boldsymbol{U}}}$ is convex in general. Hence there is no guarantee that linearising the failure surface is reasonable, and that the probability of failure will be smaller than that associated with the environmental contour. ### Finding governing conditions In a typical IFORM analysis, once the environmental contour $\{{\boldsymbol{u}}(\theta)\}$ (the surface of a hypersphere, for *θ* ∈ C) is estimated in ${\boldsymbol{U}}$-space, we find the point ${\boldsymbol{u}}(\theta^\*) \in \{{\boldsymbol{u}}(\theta)\}$ corresponding to the largest values of response (and hence the lowest value of structural failure *p**F* for given structural strength *S*). Then ${\boldsymbol{u}}(\theta^\*)$ is transformed back to a corresponding ${\boldsymbol{x}}(\theta^\*)$ (in terms of the original variables) which is taken as the design set corresponding to the specified failure probability. Other points of interest, for example the whole contour $\{{\boldsymbol{u}}(\theta)\}$ in the transformed space, can be similarly transformed to the original space. In a direct sampling analysis, the point ${\boldsymbol{x}}(\theta^\*)$ can be identified directly in ${\boldsymbol{X}}$-space. generalised IFORM to dynamic systems. describe the use of the second-order reliability method (SORM) for contour estimation. Adjusting contours for model mis-specification and short-term variation ----------------------------------------------------------------------- We use the *N*-year environmental contour for the set ${\boldsymbol{X}}$ to provide a computationally-fast but potentially biased estimate of the *N*-year response for the structure discussed in Section [Sct:EstCnt:Cnt2Rsp]. A typical approach is to estimate the distribution of the maximum response (in any 3-hour sea state, corresponding to a specified return period) given values of environmental variables $\{{\boldsymbol{x}}(\theta)\}$ on the contour to identify a “design point” ${\boldsymbol{x}}(\theta^\*) \in \{{\boldsymbol{x}}(\theta)\}$ yielding the largest structural response. Then a quantile *q**C* with non-exceedance probability *p**C* (for example typically the mode, median or mean) of the distribution $F\_{R|{\boldsymbol{X}}}(r|{\boldsymbol{x}}(\theta^\*))$ is used to estimate some (possibly different) quantile *q**R* with non-exceedance probability *p**R* of the distribution *F**M**R*, *N*(*r*) of the *N*-year maximum response. Quantile *q**C* is used as an estimate for *q**R*, where FR|(qC|(\*)) = pC and FMR,N(qR) = pR. There is no guarantee that *q**C* and *q**R* will coincide. Even if *p**C* = *p**R*, since $F\_{R|{\boldsymbol{X}}}(r|{\boldsymbol{x}})$ has a long right-hand tail for any environment ${\boldsymbol{x}}$, it is usually possible for “short-term variation” in “less extreme” sea states to contribute to the distribution *F**M**R*, *N*, whereas this is by definition not possible for the corresponding distribution $F\_{R|{\boldsymbol{X}}}(q\_C|{\boldsymbol{x}}(\theta^\*))$ from a single sea state ${\boldsymbol{x}}(\theta^\*)$. Mis-specification of the environmental model, or violation of assumptions concerning the relationship between environment and response, may also lead to disagreement between *q**C* and *q**R*. For this reason, it is useful to define an inflation factor Δ such that qR = qC, where we might expect Δ > 1 for $p\_C \approxeq p\_R$. The factor Δ can be used to inflate the whole environmental contour if desired. Standard makes recommendation for appropriate choices of *q**R*, *q**C* and the corresponding Δ. Section [Sct:Prc] provides illustrations of contour adjustment for simple simulation models. It is apparent that, in situations where short-term variability is relatively large compared with long-term variability, contour-based approaches should be use with great caution. Case studies: contours in practice ================================== Section [Sct:EstCnt] outlines various forms of an environmental contour. In the absence of a unified approach to defining and applying contours (see Section [Sct:Cnc] and Section [Sct:Int:Srv]), it is informative to consider the practicalities of environmental contour estimation. Our objective in Section [Sct:Prc] is to quantify how well estimates of extreme responses (in a three-hour sea state, for a particular return period) on a contour compare with estimates obtained by direct simulation of the response. In this sense we replicate the typical approach to application of contours: looking at response for a small set of environmental conditions, in the hope that this analysis approximates the characteristics of maximum response for that return period. In doing so, we discuss the key challenges in applying contours, including choice of contour, sampling along the contour and contour inflation. As opposed to typical applications, we perform our analysis for four responses, whose relationship to the environment is quantified entirely in terms of *H**S* and *T**P*, and is known to us. We are therefore able to simulate from the known distributions to estimate the correct characteristics of response, and hence to quantify the performance of contour-based estimated for maximum response. Then, in Section [Sct:DscCnc], we summarise our findings regarding the estimation and application of environmental contours for metocean design, with a particular focus on the appropriate use of contours, given the extent of knowledge about the response: environmental contours are clearly useful under certain conditions, but these conditions need to be carefully defined so that the user knows when environmental contours are likely to be a good option. For simplicity in the case studies below, we define the environment in terms of a large historical sample of sea-state *H**S* and *T**P* for a typical northern North Sea environment for the period 1979-2013, from the NORA10-WAM hindcast (). NORA10 (Norwegian ReAnalysis 10km grid) is a 58-year hindcast that has been developed by the Norwegian Meteorological Institute. It is a regional HIRLAM (atmosphere) and WAM Cycle-4 (wave) hindcast covering Northern European waters. The regional model uses wind and wave boundary conditions from the ERA-40 reanalysis (1958-2002) and is extended using the ERA-Interim reanalysis from 2002 onwards. NORA10 produces three-hourly wave and wind fields at 10km resolution. We isolate storm peak events using the procedure of. We then estimate structural responses using *known* non-linear functions of environmental variables corresponding to each storm event. To construct an environmental contour, we require a statistical model for the environment. Here, we achieve this by means of a conditional extremes model (Section [Sct:MdlEnv:Cnd]) for the historical sample, using a penalised piecewise constant (`PPC`) extreme value model () and software (outlined in [SctApp:PPC]). We choose the conditional extremes model because of its generality and flexibility to model different forms of extremal dependence (e.g. ). The `PPC` extreme value model allows the estimation of non-stationary marginal and conditional extremes for peaks over threshold using a simple description of non-stationarity with respect to covariates in marginal and dependence models. We use the `PPC` model to estimate a number of the environmental contours discussed in Section [Sct:EstCnt] and investigate their characteristics, in particular their relationship to extremes of structural response. Because of its recent popularity, we consider the direct sampling contour (Section [Sct:EstCnt:JntExcCnt:DrcSmpCnt]) in case studies 1 and 2. In case study 2, we also consider the joint exceedance contour outlined in Section [Sct:EstCnt:JntExcCnt:JntEA] and the isodensity contour (Section [Sct:EstCnt:IsoDnsCnt]). To estimate any of these contour methods requires a (*H**S*, *T**P*) sample simulated under the environmental model. The isodensity contour is similar to the approach of recommended in the standard. Case study 1 ------------ In this case study we consider the direct sampling contour of Section [Sct:EstCnt:JntExcCnt:DrcSmpCnt] only. The objective of the case study is to examine the general correspondence between estimates for the distribution of the 100-year maximum response *M**R*, 100. We compare an estimate from direct simulation of *R* (taken to be accurate) and one generated from combinations of *H**S* and *T**P* on the 100-year environmental contour. We make this comparison for a number of different responses. The procedure we use is intended to reflect common practice in industry. Once the contour is estimated, we identify a “frontier” interval of the contour which we think might be informative for estimation of response. In the current work, we assume that the “frontier” corresponds to the whole interval of the environmental contour lying close to pairs of *H**S*, *T**P* values present in the sample. Then we consider two possibilities: (a) that only a single combination of *H**S* and *T**P* corresponding to the maximum value *H**S*max of *H**S* on the contour is informative for estimating *M**R*, 100, and (b) that the whole frontier interval is informative. Then, for scenario (a), we estimate the distribution of maximum 100-year response $f\_{M\_{R,100}\text{Point}}(r) = f\_{R|{\boldsymbol{X}}}(r|H\_S^{\text{max}})$. For scenario (b), we estimate the ensemble distribution fMR,100(r) = k fR|(r|100(k)) where $\{{\boldsymbol{x}}\_{100}(\theta\_i)\}\_{k=1}^L$ defines a set of equally-spaced points on the 100-year contour on the frontier interval. We can then compare quantiles of the distributions from scenarios (a) and (b) with quantiles estimated from direct simulation of *M**R*, 100. notes a trade-off between the number of points on the contour used to evaluate the response, the quality of the estimate of response and the computation time required. A total of four responses *R*1, *R*2, *R*3, *R*4 were considered. Two responses correspond to output of a structural response simulator for maximum base shear (*R*1, for a typical fixed structure) and maximum heave (*R*2, for a floating structure), as a function of *H**S* and *T**P* for a three-hour sea state. These response simulators assume that the most probable value of maximum response in a sea state can be written as a closed form expression in terms of a number of sea state variables, including sea state *H**S* and *T**P*. The actual value of maximum response is then simulated from a Rayleigh distribution with the most probable maximum response as scale parameter. A further two synthetic responses are defined, which are simple deterministic functions of *H**S* and *T**P*, using the following equation $$\begin{aligned} R\_{i} = \frac{\alpha\_{i}H\_{S}}{(1+\beta\_{i}(T\_{p}-T\_{p0,i})^2)} \text{ for } i=3,4,\end{aligned}$$ [Eqn:SynRsp] where *T**p*0, *i* (in seconds) is the resonant peak period for response *R**i*. The values of {*α**i*, *β**i*, *T**p*0, *i*} are {2, 0.007, 7} and {2, 0.005, 26} for *i* = 3, 4 respectively. These combinations of parameters were chosen to provide large responses at different neighbourhoods of the environmental space, and hence to correspond to different frontier intervals. The distribution of maximum response *M**R*, 100 for synthetic responses *R*3, *R*4 was estimated by generating multiple environmental simulations corresponding to periods of 100 years, calculating response per sea state and storing only the maximum response observed and the values of *H**S* and *T**P* responsible for it. For responses *R*1, *R*2, `PPC` was used to extend the environmental model to include response; simulation under the model was then again used to accumulate the distribution of *M**R*, 100. For each response in turn, the mean value *M̄**R*, 100 of the maximum 100-year response *M**R*, 100 is plotted in Figure [Fgr:ToyScatter1], and coloured by the value of *M**R*, 100. Also plotted in the figure are direct sampling contours corresponding to 20, 30, 40, 50, 70, 100 and 200 years. Note that for each response *R**i*, only combinations of *H**S* and *T**P* giving rise to a least one occurrence of *M**R**i*, *N* appear in the figure. ![Mean 100-year maximum responses \{\bar{M}_{R_i,100}\} as a function of H_S and T_P estimated using 1000 realisations (of length 100 years) of H_S and T_P. Points are coloured by the local mean value of maximum response estimated on a lattice of values for H_S and T_P. Also shown are N-year (H_S,T_P) direct sampling environmental contours for different values of N; contours are coloured yellow to dark brown by return period, in order of N = \{20,30,40,50,70,100,200\} years. Panels on top row correspond to historic responses R_1 (left) and R_2 (right); panels on bottom row correspond to synthetic responses R_3 (left) and R_4 (right).](Fig1CntMeanRsp.jpg "fig:") [Fgr:ToyScatter1] The figure shows typical features of the different responses. Synthetic response *R*3 shows resonance effects *T**P* ≈ 13s. Maximum base sheer (*R*1) and synthetic response *R*4 increase with increasing *H**S* and *T**P*. This is true in general for maximum heave (*R*2), but there are clearly large values of *M**R*2, 100 within even the 20-year environmental contour. That is, there are relatively benign environmental conditions, not even exceeding the 20-year contour, which sometimes generate the 100-year maximum response. For contours to be useful, we would expect to see the largest values of 100-year maximum response lying outside the 100-year contour, and smallest values of response within it. This is approximately the case for all responses, but certainly not always true for *R*2. The extent to which the maximum response on the 100-year environmental contour agrees with the actual distribution of *M**R*, *N* from simulation can be assessed by comparing an estimate for the distribution of the true response against that evaluated for conditions on the contour, as illustrated in Figure [Fgr:ToyDensAll]. It shows kernel density estimates for {*M**R**i*, 100} estimated by direct simulation (in dashed blue; which can be regarded as “the truth”). The figure also shows corresponding kernel density estimates for *f**M**R*, 100Frontier of *M**R*, 100 from combinations of (*H**S*, *T**P*) lying on the contour frontier (scenario (b), shown in Figure [Fgr:ToyScatter1]), for a range of choices of *N*. ![Kernel density estimates for 100-year maximum responses \{M_{R_i,100}\}. Estimates based on direct simulation of response are shown in dashed blue. Other density estimates (solid lines) are calculated from (H_S, T_P) combinations lying near the corresponding N-year direct sampling contour shown in Figure [Fgr:ToyScatter1], for N = \{20,30,40,50,70,100,200\}. Coloured crosses indicate the location of the quantile of the response distribution with non-exceedance probability \exp(-1) along each contour; the blue dot gives the corresponding \exp(-1) true response from the blue curve. The factor \Delta by which the \exp(-1) response of the 100-year environmental contour would need to be inflated to give the true \exp(-1) 100-year response is given in the title to each panel.](Fig2AllCntDensMostPrb.jpg "fig:") [Fgr:ToyDensAll] There is an obvious ordering of response density estimates with increasing return period, particularly for responses *R*3 and *R*4 as might be expected. Further, the location of densities estimated from different *N*-year environmental contours agrees to some degree with that of the true density of *M**R*, 100. Moreover, for most cases the location of the quantile (see Section [Sct:EstCnt:Adj] with *p**C* = *p**R*) of the distribution of maximum response with non-exceedance probability exp( − 1) (henceforth the “exp( − 1)” value) of the density estimate from the 100-year environmental contour is in reasonable agreement with the location of the true exp( − 1) value. Following Section [Sct:EstCnt:Adj], defining Δ as the ratio of the exp( − 1) quantile of the true 100-year maximum response to the exp( − 1) quantile of the distribution from the 100-year environmental contour, we see that the environmental contour approach underestimates the exp( − 1) response by between 1% and 9% for these examples. We next perform a similar comparison of response distributions, this time using only (*H**S*, *T**P*) combinations near the point on the contour with maximum *H**S* (that is, scenario (a), to estimate *f**M**R*, 100Point). Results, shown in Figure [Fgr:ToyDensPnt], have similar general characteristics to those of Figure [Fgr:ToyDensAll]. Values of Δ in the interval (0.98, 1.04) are estimated. In the current illustrations, therefore, it appears that both scenarios (a) and (b) provide reasonable estimates for the exp( − 1) quantile of *M**R*, 100. ![Kernel density estimates for 100-year maximum responses \{M_{R_i,100}\}. Estimates based on direct simulation of response are shown in dashed blue. Other density estimates (solid lines) are calculated from (H_S, T_P) combinations near the point on the corresponding N-year direct sampling contour (Figure [Fgr:ToyScatter1]) corresponding to maximum H_S, for N = \{20,30,40,50,70,100,200\}. Coloured crosses indicate the location of the quantile of the response distribution with non-exceedance probability \exp(-1) along each contour; the blue dot gives the corresponding \exp(-1) true response from the blue curve. The factor \Delta by which the \exp(-1) response of the 100-year environmental contour would need to be inflated to give the true \exp(-1) 100-year response is given in the title to each panel.](Fig3CntPntDensMostPrb.jpg "fig:") [Fgr:ToyDensPnt] Case study 2 ------------ Here we extend the study of Section [Sct:Prc:Cas1] for responses *R*1 (maximum base shear) and *R*2 (maximum heave), specifically to make a comparison of direct sampling contours, joint exceedance contours (Section [Sct:EstCnt:JntExcCnt:JntEA]), and isodensity contours (from a conditional extremes analysis in Section [Sct:MdlEnv:Cnd]). For brevity, these approaches are henceforth referred to as “direct sampling”, “joint exceedance” and “empirical density” respectively in this section. Figure [Fgr:Toy1] shows minima {*M**R**i*, 100min} and maxima {*M**R**i*, 100max} values of maximum responses {*M**R**i*, 100} from the same 1000 simulations used to generate Figure [Fgr:ToyScatter1]. The colour of each disc in the top row indicates the value of the *minimum* 100-year maximum response seen for that combination of *H**S* and *T**P*, using the same algorithm as for Figure [Fgr:ToyScatter1] to identify near neighbours. The bottom row shows corresponding values of *maximum* 100-year maximum response. It is clear that there is considerable variability in response for a given pair of values for *H**S* and *T**P*. 100-year environmental contours from each of the direct sampling, joint exceedance and empirical density methods are also shown in the figure. All contours have a similar frontier interval. There is good agreement between the direct sampling and joint exceedance contours in particular on the frontier interval; this is not surprising since the underlying methods have similar motivations. For the same simulation size, the empirical density contour is more difficult to estimate without applying considerable smoothing. Comparing the top and bottom rows of the figure, it also appears that the natural variability in the response (near the frontier interval of the contours) dominates any variability in the value of response *along* the contours. In this case, therefore, none of the contours is particularly preferable; any of them would give approximately the same quality of estimate for *M**R*, 100. It is interesting and intuitively appealing that the (yellow) area of largest values of maximum response (on the bottom row of the figure) is centred approximately on the frontier interval of the contours. However, for synthetic response *R*3 in Figure [Fgr:ToyScatter1], we see that the frontier interval is offset (to lower *T**P*) from part of the environmental contour corresponding to largest *H**S*: focussing on an interval on the contour corresponding to largest *H**S* to estimate *M**R*, 100 would seem particularly suspect in this case, regardless of the choice of contour method. Overall, it appears that the key to success is ensuring that the response is quantified (using time-domain simulation or other) at a sufficient number of (*H**S*, *T**P*) combinations on or near the frontier interval of any reasonably well-defined contour. ![Minima (\{M_{R_i,100}^{\text{min}}\}, top row) and maxima (\{M_{R_i,100}^{\text{max}}\}, bottom row) of the 100-year maximum response, as a function of H_S and T_P estimated using 1000 realisations (of length 100 years) of H_S and T_P for responses R_1 (left) and R_2 (right). Points are coloured by the local minimum (top) or maximum (bottom) value of maximum response estimated on a lattice of values for H_S and T_P. Each panel also shows 100-year environmental contours from each of the direct sampling (black dot-dashed), joint exceedance (black dashed) and CE density (black solid) methods.](Fig4CntCompare.jpg "fig:") [Fgr:Toy1] Discussion and conclusions ========================== As shown in Section [Sct:EstCnt], environmental contours provide useful characterisations of the extent of the joint distribution of environmental variables. Some contour methods assume particular parametric forms for the (conditional) distributions of environmental variables; other methods generate convex contours on particular scales; other contour approaches are only defined on part of the domain of environmental variables. There is concern in the user community that a contour should “look right”, closely hugging the boundaries of scatter plots of historic or simulated environmental variables. The usual motivation for applying a contour approach in ocean engineering is to find environmental conditions efficiently (for a return period of *N* years say) which will generate approximately the *N*-year maximum response. Environmental contours therefore provide a means of reducing the burden of running full long-term response analysis for a wide range of environmental conditions. Different types of environmental contours find favour based on their ability to estimate the *N*-year maximum response from the *N*-year environmental contour. An environmental contour is estimated with no regard whatsoever to structural details. Since environmental contours are independent of structural specifics, they can then be used in principle to study different structures in a given environment provided that the underlying assumptions linking environment and structure are not violated. There is no fundamental link between points on an environmental contour and structural response in general, and no reasonable expectation therefore that points on the *N*-year environmental contour should yield the *N*-year maximum response. Attempts have been made to compare different environmental contour methods using a response-based criterion, although it is mathematically obvious and generally recognised in the user community that no single approach is appropriate for all structure and response types, and that considerable ambiguity will always remain. The manner in which an environmental contour relates to extreme response depends on the specifics of the structure. However, for typical *H**S*-driven structures, empirical evidence suggests the responses generated from points along the environmental contour in (*H**S*, *T**P*)-space for a given return period are reasonable estimates of the actual maximum response corresponding to the same return period. In the presence of resonant response and non-extreme values of *T**P*, using points from the contour near the maximum *H**S* can be misleading, since the response is not completely *H**S*-dominated. It is critical therefore that the dominant environmental variables are included in the estimation of environmental contour. It is apparent from physical considerations that extreme occurrences of some structural responses should not coincide with those of extreme environmental variables; the *N*-year environmental contour is unlikely to provide any guidance regarding the *N*-year maximum response for such responses. We also note methods to adjust (or inflate) contours to correct for sources of bias (for estimation of extreme response) including the effects short-term variability, violation of (marginal and dependence) modelling assumptions, uncertainty in parameter estimates, etc. There is some debate within the user community regarding the relative merits of using observations of the environmental variables for serially-dependent sea states, compared with near-independent storm peak characteristics. Given that the rates of occurrence of events are taken into account, both can provide useful estimates of joint models for the environment and hence environmental contours. The advantage of using sea state data is that sample size is large, potentially allowing a more detailed description of the joint distribution of environmental parameters to be estimated. However, because sea state data is serially-correlated, naive estimates of uncertainties for model parameters and inferences under the model will be too small, but can be corrected (for example, using sandwich estimators or bootstrap resampling). Multi-modal distributions of environmental variables can be caused by different physical processes or by covariate effects (for example, fetch length as a function of direction); in these cases, isodensity contours may be more reasonable summaries of the joint distribution than those based on joint exceedance, since there may be regions of low probability between modes. Further it may be useful to partition the environmental space by covariate and perform separate analyses per partition. Alternatively, it is also possible to estimate joint models incorporating covariate effects. Basing design conditions on the *N*-year environmental contour alone neglects sources of bias and variability in estimation of the *N*-year maximum response, including short-term variability in response. Approximate methods to inflate the environmental contour, or adjust its return period, are available. It is always possible, given fully-specified environments and structural responses, to estimate inflation factors which map some quantile of the distribution of *N*-year maximum response onto the corresponding quantile of the distribution of maximum response given sea states on the environmental contour. However, the value of inflation factor in general will be a function of quantile level, the structure and the response. It is likely that estimating inflation factors (or adjusting contour return periods) based on comparing central characteristics (for example, mean, median or mode) of response distributions will prove more stable, since Monte Carlo simulations of a given size provide better estimates of central characteristics than those of tails. In some applications, it may be that environmental contours will be used to estimate multiple correlated responses. In such cases, care needs to be taken that the contour is used to estimate the responses *jointly* corresponding to a given return period, rather than estimating independent marginal return values. As can be seen from the end-user survey in [SctApp:SrvFnd], there are valid concerns regarding a sensible definition of an environmental contour, its estimation and adjustment in relation to structural response. There are many articles in the literature comparing different contour methods, inflation factors etc. However, in reality, most sensibly-proposed contour methods (including the direct sampling, joint exceedance and empirical density examined here) locate the frontier interval of the contour within the same region of environmental space. Uncertainties due to the details of the structure, and even when the structure is defined to the natural variability of structural response given the environment, are in general issues of far greater concern. Any reasonable choice of contour, given the considerations explored in this paper, will suffice. Methods such as IFORM and the direct sampling method are advantageous in that they impose a link between environment and structure by make assumptions about the characteristics of failure surfaces as a function of the environmental variables. Given these assumptions, it is possible to link the exceedance probability associated with a given environmental contour with structural failure probability. Although conditions from an *N*-year environmental contour need not result exactly in *N*-year responses, IFORM and direct sampling provide at least some understanding of how an *N*-year environmental contour is related to the *N*-year maximum response. Both IFORM and direct sampling approaches assume a linearised failure boundary. The basic difference between the approaches arises from the fact that linearisation for IFORM is performed in the transformed ${\boldsymbol{U}}$-space, and in direct sampling approach in the original ${\boldsymbol{X}}$-space of environmental variables (e.g. and references therein). For both IFORM and direct sampling contours, the relationship established is between the exceedance probability associated with the contour (on some scale) and the probability of structural failure. This does not guarantee however that searching along an IFORM or direct sampling contour for return period *T* will isolate the key features of the *T*-year maximum response; the relative performance of IFORM and direct sampling in estimating extreme responses is application-dependent. For better specification of design conditions, a response model is necessary. To determine the frontier interval of a design contour within the domain of environmental variables, inflation factors for contours etc., some knowledge of, or working assumption regarding the response is required. As specification of the response evolves, sufficient, for example, to estimate the frontier interval and inflation factors, arguably there is already sufficient information (or supposition) to use better models to describe approximate responses and their uncertainties. From a statistical perspective, it is hard to avoid the impression that, in terms of estimating extreme structural response, an environmental contour is just an approximation to a sample from the tail of the distribution of environmental variables. In fact, for most purposes, an appropriate sample from the tail of the distribution of environmental variables would be preferable. Furthermore, with the advent of methods such as statistical emulation (used widely in approximating complex physical systems, including metocean design e.g. ), a computationally-efficient approximate response model can be estimated, along with its uncertainty, for many if not all applications. Given this, the emulator would provide a mechanism for response-based design in all situations, avoiding the need for environmental contours completely. It is apparent that the coastal engineering community is already moving in this direction (e.g. ); a key assumption with statistical emulation is that a representative set of cases is available to quantify all important relationships between environment and structural loading. Nevertheless, part of the user-cited strength of the environmental contour method is its relative simplicity and computational efficiency. There is clearly a trade-off between a thorough probabilistic response-driven analysis (which may be technically more involved and less familiar to the practitioner) compared to a simple approximate approach (with which the user is relatively familiar and confident). Given the above considerations, and comments in the end-user survey, to assist the practitioner in deciding when and where to use an environmental contour approach, we present the following brief check-list. We recommend the use of environmental contours in the following circumstances. When to use environmental contours ---------------------------------- **Nature of responses and environmental variables are known**: The dominant structural responses are all known. The dominant environmental variables driving each structural response are all known, and the value of response is dominated by long-term variability of the environmental variables: extreme environments produce extreme responses. The influence of short-term environmental variability is relatively small. [There are some responses (for example, fatigue) that are not dominated by extremes of the environment; environmental contour methods are therefore not appropriate.] **Response-based analysis is not possible**: (a) There are no adequate computationally-efficient structural response models available. [If these models are available, a response-based analysis should be performed.] (b) There are computationally-demanding structural response models available, but no time or expertise to develop approximate structural response models (for example, generic load models, statistical emulators) using these. [If these models can be estimated, a response-based analysis should be performed.] **At outline design stage**: The specifics of the structure may not be clear at outline design. For this reason, the environmental contour may provide a useful source of extreme sea states suitable for evaluating a range of different structures. Once it has been decided that an environmental contour approach is suitable, the following are then recommended. How to use environmental contours wisely ---------------------------------------- **Reality check**: Remember that environmental contours are approximate methods that can only provide approximations to extreme responses. The use of contour approaches may need to be supported in final design by full long-term analysis. **Sufficient environmental data available**: There are sufficient historical data available to estimate the joint distribution of all these environmental variables adequately using, for example, a method from Section [Sct:MdlEnv]. **Estimate more than one environmental model, and consider the sensitivity of the model to arbitrary modelling choices**: The sensitivity of environmental contour estimates to arbitrary choices made when estimating a model for the joint distribution of environmental parameters should be investigated. [When different equally-plausible environmental models provide different contour estimates, the current research suggests that all contours should be considered valid and used together for choice of environmental values corresponding to extreme responses. Equally, the user might well be concerned when two different environmental models provide materially different contour estimates (using a common contouring approach).] **If unsure which contour to use, estimate more than one type**: Each type of environmental contour is seeking to achieve different objectives. If you are not clear which contour is most suitable for your application, consider estimating contours of different types, and establish approximate consistency of inferences from different contours. [When different equally-plausible contours give materially different results, all contours should be considered valid and used together for choice of environmental values corresponding to extreme responses.] **Choose multiple points from the environmental contours for response evaluation**: Multiple combinations of values of environmental variables falling on or near the frontier interval of the environmental contour should be used. [When the frontier interval is not known, a wide set of combinations of values of environmental variables on or near the environmental contour should be used. If in doubt, choose more points and choose points more widely.] **Consider other sources of uncertainty**: (a) How influential are the effects of covariates (directionality, seasonality) [If important, fit a non-stationary environmental model (for example, using PPC from Section [Sct:Prc])]; (b) Have all environmental variables influencing the response been considered in the environmental model and contours? [If not, consider estimating environmental models and contours in higher dimensions]. Acknowledgement =============== This work was part-funded by the European Union ERA-NET project entitled “Environmental Contours for SAfe DEsign of Ships and other marine structures (ECSADES)”. The `PPC` software and user guide is freely-available as MATLAB code from the authors. We thanks Kevin Ewans for useful comments on the manuscript. Supporting information ====================== Survey findings --------------- Further details of the survey summarised in Section [Sct:Int:Srv] are given here. For brevity, the respondents full answers are summarised here. The full (anonymised) survey responses can however be provided on request. The survey questions were distributed to industry/consulting contacts of those involved in the ECSADES project as well as authors of key literature in the area. We do not claim that this sample is representative of the user community for contours, but hopefully it is informative. Of the 19 respondents, a large number were based in Norway with contributions also from the US, the Netherlands, the UK and Australia; with 7 from academia and 12 from industry/consulting. 1. How many times have you used environmental contours as part of your work in the last 3 years? * Different frequencies of use: from daily to hardly ever * Geographic variability: frequent use cited by respondents based in Norway, less frequent in the U.S. * Contours form an integral part of reliability assessment / design practices * Academically, contours garner less interest. Appearing mainly in post-graduate projects with industrial applications. 2. What kind of environmental contours do you use (for example, derived from FORM, or the conditional extremes model of Heffernan & Tawn, or other)? * Large user group for FORM/IFORM * Conditional extremes approach of * Marginal and dependence modelling based on threshold exceedance distributions * AND and JOIN-SEA cited for coastal applications * Kernel density estimation * The need for contours in any form also questioned in the event that direct response-based analysis is possible 3. For what purpose do you use environmental contours? Can you describe as precisely as possible the information you take from the environmental contour, and how you use this? Do you use the whole contour or just part of it? Which part of the contour, and why? Do you actually need information about the whole contour? Do you take any steps to account for uncertainties and potential biases? * Two dimensional contour representing “N-year” environmental conditions is typically estimated (never higher dimensions) * Response analysis performed on a subset of points along the environmental contour * Region of the contour over which this subset of points is focussed can be motivated by physical understanding of mechanisms driving response, for example, wave-dominated or resonant response * Sometimes only “short term” response analysis performed; additional extrapolation from “short-term” to “long-term” necessary. * N-year structural response is estimated from the responses resulting from conditions defined by points on the environmental contour by: + Inflating the response somehow: 90% quantile, median  ×  1.3 or other ad-hoc method, to adjust for short-term variability + Inflate “environment”: that is, explore responses for a contour representing a longer return period (N) * Bias, (epistemic) uncertainty and inherent (aleatory) randomness is almost never incorporated 4. What do you think are the advantages of using environmental contours? * Environmental contours provide a reasonable approximate approach to estimate N-year responses * Industry-accepted * Calculated without any knowledge of structure being designed; independent of the response; same contour can be used for range of responses * Simple to use, inexpensive, quick - especially for typical cases (for example, Hs-Tp) * Computational efficiency compared full time-domain analysis 5. What, in your opinion, are the disadvantages or problems associated with using environmental contours? Are there specific circumstances in which you find that environmental contours do not work well or are difficult to use? * Concern about details of choosing response from a subset estimated along the contour: using arbitrary quantiles and scale factors etc. * Simulations using short-term responses used to estimate long-term response; high quantile necessary. * Environmental contour method does not involve the response directly (so arbitrary calibration of response will always be necessary); “response-based” analysis is a better approach when possible * Clients do not understand what the contour represents and how it has been calculated, can therefore sometimes be a “hard sell” * Lots of approaches to defining the contour, not clear which (if any) is better * Naïve application of statistical methods can yield physically unreasonable contours (for example, non-physical wave steepness) * Approaches to extending to higher dimensions (3D and above) or to multiple responses are more ad-hoc, incorporating correct physics even more problematic * Need to check conclusions using full-scale simulation / long-term analysis * No obvious way to estimate the *distributional* properties of N-year response * Uncertainties are rarely quantified 6. Do you use design guidelines or other literature sources to guide your use of environmental contours? Could you list these? * Statoil procedures * Norsok N003 * DNV-RP-C205 (2014) * FORM/iFORM: * JOIN-SEA: * * Shell LSM model for response-based design `PPC` model ----------- The penalised piecewise constant (`PPC`) extreme value model allows the estimation of non-stationary marginal and conditional extremes for peaks over threshold using a simple description of non-stationarity with respect to covariates in marginal and dependence models. An early deployment of the `PPC` model as software is described in ; the current version of the `PPC` software, developed as part of the ECSADES project (see Section [Sct:Ack]), is freely available from the authors. For each observation ${\boldsymbol{x}}\_i$ in the sample $\{{\boldsymbol{x}}\_i\}$, we assume that an associated (potentially vector) covariate ${\boldsymbol{\theta}}\_i$ is available. The value of covariate ${\boldsymbol{\theta}}\_i$ is used to allocate the observation to one and only one of *n**C* covariate intervals {*C**k*}*k* = 1*n**C* by means of an allocation vector *A* such that *k* = *A*(*i*) and $\{{\boldsymbol{x}}\_i\}=\bigcup C\_k$. For each *k*, all observations in the set $\{{\boldsymbol{x\_{i'}}}\}\_{A(i')=k}$ with the same covariate interval *C**k* are assumed to have common extreme marginal and dependence characteristics. Non-stationary marginal extreme value characteristics of each variate *X**j* are then estimated in turn using a generalised Pareto model and cross-validated roughness-penalised maximum likelihood estimation. For variable *X**j* and covariate interval *C**k*, the extreme value threshold *ψ**j**k* > 0 is assumed to be a quantile of the empirical distribution of the variate in that interval, with specified non-exceedance probability *τ**j* ∈ (0, 1), with *τ**j* constant across intervals, and estimated by counting. Threshold exceedances are assumed to follow the generalised Pareto distribution with shape *ξ**j* ∈ R and scale *σ**j**k* > 0. *ξ**j* is assumed constant (but unknown) across covariate intervals, and the reasonableness of the assumption assessed by inspection of diagnostic plots. Parameters *ξ**j*, {*σ**j**k*} are estimated by maximising the predictive performance of a roughness-penalised model, optimally regulating the extent to which {*σ**j**k*} varies across interval, using a cross-validation procedure. After marginal fitting, the sample $\{{\boldsymbol{x}}\_i\}$ is transformed to standard Laplace scale as $\{{\boldsymbol{\tilde{x}}}\_i\}$ and the conditional extremes model outlined in Section [Sct:MdlEnv:Cnd] fitted. Following the notation of that section, for each choice of conditioning variate *X**q*, linear parameters ${\boldsymbol{\alpha}}\_{-q}$ for the conditioned variates vary across covariate bins with variation regularised using cross-validation to optimise predictive performance for each response in turn. The corresponding value of exponent parameters ${\boldsymbol{\beta}}\_{-k}$ is assumed constant with respect to covariates. Sets of residuals R− *q* from the fit, for each choice of *q* are also isolated. Simulation under the fitted model, and transformation of realisations to their original marginal scale is then possible, corresponding to a return period of arbitrary length. In particular, samples simulated under the model can be used to estimate environmental contours of different types, and explore their relative characteristics. In practice, the full `PPC` modelling procedure is repeated for *n**U* bootstrap resamples of the original sample to capture sampling uncertainty, each resample using a different choice of marginal and dependence thresholds to capture threshold uncertainty. Estimates for marginal and dependence parameters therefore correspond to *n**U* arrays of values capturing sampling and threshold specification uncertainty, which are used to propagate uncertainty from these sources into simulations under the model. We use the `PPC` model to estimate a number of the environmental contours discussed in Section [Sct:EstCnt] and investigate their characteristics, in particular their relationship to extremes of structural response. Specifically, because of its recently popularity, we consider the direct sampling contour (Section [Sct:EstCnt:JntExcCnt:DrcSmpCnt]); we also consider the joint exceedance contour outlined in Section [Sct:EstCnt:JntExcCnt:JntEA]. Both these contour methods use a simulated sample under the environmental model as starting point. Further we consider an isodensity contour (Section [Sct:EstCnt:IsoDnsCnt] based on the conditional extremes model of Section [Sct:MdlEnv:Cnd]; this approach infers the contour directly from the properties of the environmental model with no need for simulation. References ==========
arxiv_0000063
Parametrized Homology via Zigzag Persistence ============================================ This paper introduces parametrized homology, a continuous-parameter generalization of levelset zigzag persistent homology that captures the behavior of the homology of the fibers of a real-valued function on a topological space. This information is encoded as a ‘barcode’ of real intervals, each corresponding to a homological feature supported over that interval; or, equivalently, as a persistence diagram. Points in the persistence diagram are classified algebraically into four classes; geometrically, the classes identify the distinct ways in which homological features perish at the boundaries of their interval of persistence. We study the conditions under which spaces fibered over the real line have a well-defined parametrized homology; we establish the stability of these invariants; and we show how the four classes of persistence diagram correspond to the four diagrams that appear in the theory of extended persistence. Introduction ============ Persistent homology is one of the key topological methods used in data analysis; as such it deserves substantial credit for the emergence of applied topology as a field. A common theme in this history has been the introduction of a method, motivated by applications or computation, that is encumbered by restrictive theoretical assumptions. The original persistent homology  required discretization of the input, an assumption that was lifted as the theory became better understood . The celebrated stability result  had strong tameness assumptions that were relaxed over a sequence of papers . Viewed in this context, our paper is another rung on the climb to a transparent theory of persistence, free of unnecessary restrictions. The specific goal of this paper is to generalize levelset zigzag persistence  to the continuous case, lifting the restriction that the spaces under consideration have discrete structure. Our main tools are the theory of rectangular measures and a graphical notation for quiver representation calculations; both taken from . On the algebraic side there are some technical requirements, regarding choice of homology theory, that we work through in detail. On the geometric side, we study the different phenomena recorded by our invariants. Finally, we generalize the equivalence  between levelset persistence and extended persistence  to the continuous case; and we discuss parametrized cohomology. The general set-up is this. Let *X* be a topological space and let *f* : *X* → R be a continuous function. Such a pair X = (*X*, *f*) is commonly called a *space fibered over the real line*; in this paper, we use the convenient term *R-space*. We can view an R-space as a collection of topological spaces X*a**a* = *f*− 1(*a*),   *a* ∈ R called the *levelsets* of X, where the topology on the total space *X* bestows upon this collection of spaces the structure of a ‘family’. In particular, the *interlevelsets* X*a**b* = *f*− 1[*a*, *b*],   *a*, *b* ∈ R,  *a* ≤ *b* provide cobordism-style relationships between the levelsets. The basic question is to understand the homological invariants of X. In particular, how does the homology of X*a**a* vary with *a*? Taking the family structure into account, this question demands a richer answer than simply recording the homology of each X*a**a* separately. What we seek is a reasonable theory for taking an R-space and decomposing its homological information into discrete features supported over intervals. To shed light on the meaning of ‘reasonable’, we highlight some desired properties. Such a theory would: * retrieve all obvious homological information stored in (*X*, *f*); * be manifestly symmetric with respect to reversal of the real line R; * be widely applicable, free from excessively strong finiteness assumptions. We return to the question of what we mean by ‘all obvious homological information’. First, we consider four examples of existing theories, indicating why they do not fully satisfy these properties. [standard persistent homology] The classical theory of persistence  is defined in terms of the *sublevelsets* X*a* = *f*− 1( − ∞, *a*] of the R-space (*X*, *f*). We begin by choosing a finite set of values *a*0 < *a*1 < … < *a**n*. This could be the set of critical values in the case of a manifold with a Morse function; or it could simply be an arbitrary discretization of the real line. We then form the diagram of topological spaces $$\begin{tikzpicture}[xscale=1.35,yscale=1] \draw (0,0) node(00){${\mathbb{X}}^{a\_0}$} (1,0) node(10){${\mathbb{X}}^{a\_1}$} (2,0) node(20){$\ldots$} (3,0) node(30){${\mathbb{X}}^{a\_n}$,} ; \draw[->] (00) -- (10); \draw[->] (10) -- (20); \draw[->] (20) -- (30); \end{tikzpicture}$$ where the arrows denote the canonical inclusion maps. By applying a homology functor H with field coefficients, we get a diagram of vector spaces and linear maps $$\begin{tikzpicture}[xscale=1.75,yscale=1] \draw (0,0) node(00){${\mathrm{H}}({\mathbb{X}}^{a\_0})$} (1.1,0) node(10){${\mathrm{H}}({\mathbb{X}}^{a\_1})$} (2,0) node(20){$\ldots$} (3,0) node(30){${\mathrm{H}}({\mathbb{X}}^{a\_n})$.} ; \draw[->] (00) -- (10); \draw[->] (10) -- (20); \draw[->] (20) -- (30); \end{tikzpicture}$$ The structure of such a diagram is described by its *barcode* or *persistence diagram* (Section [subsec:zigzag]). The resulting collection of barcodes captures some of the information that we are seeking in the present work. Standard persistent homology doesn’t satisfy all our desired properties. Although it is possible to get rid of the finite discretization of the real line , the first two properties are not satisfied. Most obviously, the construction is asymmetric when reversing the real line. For instance, let *X* be the cone on a topological space *Y* *X* = (*Y* × [0, 1])/(*Y* × {0}) and let *f*([*x*, *t*]) = *t* be the cone height function. Then the persistent homology of (*X*, *f*) is indistinguishable from the persistent homology of a 1-point R-space ( \* , 0). On the other hand, the persistent homology of (*X*,  − *f*) detects the homology of *Y* over the interval [ − 1, 0). One might imagine that the persistent homology of (*X*, *f*) and (*X*,  − *f*) together capture all information of interest. The next example shows that there is, in fact, more information to be gathered. [extended persistent homology] The theory of ‘extended persistence’ introduced by Cohen-Steiner et al.  has similar goals to ours, but addresses them under the restriction that X be ‘tame’ in the sense of having finitely many critical values and cylindrical behavior, i.e. ‘Morse-like’ behavior, between those critical values. Adding *superlevelsets* X*a* = *f*− 1[*a*,  + ∞),  Cohen-Steiner et al. consider the sequence of spaces and pairs $$\begin{tikzpicture}[xscale=1.35,yscale=1] \draw (0,0) node(00){${\mathbb{X}}^{a\_0}$} (1,0) node(10){$\ldots$} (2,0) node(20){${\mathbb{X}}^{a\_n}$} (3,0) node(30){$X$} (4.3,0) node(40){$(X, {\mathbb{X}}\_{a\_n})$} (5.6,0) node(50){$\ldots$} (6.9,0) node(60){$(X, {\mathbb{X}}\_{a\_0})$,} ; \draw[->] (00) -- (10); \draw[->] (10) -- (20); \draw[->] (20) -- (30); \draw[->] (30) -- (40); \draw[->] (40) -- (50); \draw[->] (50) -- (60); \end{tikzpicture}$$ where *a*0 < … < *a**n* is the set of critical values. The extended persistence of X is the persistent homology of this sequence $$\begin{tikzpicture}[xscale=1.75,yscale=1] \draw (0,0) node(00){${\mathrm{H}}({\mathbb{X}}^{a\_0})$} (1,0) node(10){$\ldots$} (2,0) node(20){${\mathrm{H}}({\mathbb{X}}^{a\_n})$} (3.22,0) node(30){${\mathrm{H}}(X)$} (4.6,0) node(40){${\mathrm{H}}(X, {\mathbb{X}}\_{a\_n})$} (5.75,0) node(50){$\ldots$} (6.9,0) node(60){${\mathrm{H}}(X, {\mathbb{X}}\_{a\_0})$} ; \draw[->] (00) -- (10); \draw[->] (10) -- (20); \draw[->] (20) -- (30); \draw[->] (30) -- (40); \draw[->] (40) -- (50); \draw[->] (50) -- (60); \end{tikzpicture}$$ obtained by applying a homology functor H with field coefficients. If we fix the homology theory and the field of coefficients, and vary the homological dimension, then it turns out  that the resulting collection of barcodes captures all the information that we are seeking in the present work. There are four types of bars identified in , each having a different geometric significance; this is explored in some detail by Bendich et al. , as part of a broader program to understand homological stability of the fibers of an R-space. Two of of the four types can be matched to the standard persistence of (*X*, *f*) and (*X*,  − *f*). The other two types provide new information. The symmetry of this theory is, however, not at all obvious: there is no immediately manifest relationship between the extended persistence barcodes of (*X*, *f*) and (*X*,  − *f*). The existence of such a symmetry was conjectured by Cohen-Steiner et al.  on the basis of results obtained for closed manifolds using duality theorems. The matter was resolved in , which establishes a precise symmetry between the two sets of barcodes, via calculations in zigzag persistent homology. The symmetry requires considering homology in more than one dimension at once, since the correspondence between the barcodes involves dimension shifts. Finally, we note that it is relatively straightforward to use rectangle measures to generalize extended persistence to the continuous case; the procedure is outlined in . We will say more about extended persistence in Section [sec:extended-persistence]. [interval persistent homology] Dey and Wenger  proposed a theory of ‘interval persistence’. They consider interlevelsets X*a**b*, seeking maximal intervals [*a*, *b*] such that the sequence H(X*a**a*) → H(X*a**b* − *ε*) → H(X*a**b*),  supports a summand over the first two vector spaces, but not the third. In other words, they look for classes in the levelsets that vanish in interlevelsets. Although interval persistent homology still does not satisfy all our desired properties, it does suggest additional homological information that we want to recover from an R-space. Building on this work, Burghelea, Dey and Haller have developed an analogous program to study the persistent homology of spaces fibered over the circle. Extended and interval persistence hint at what we mean by ‘all obvious homological information’, and invite us to adopt a categorical perspective. Let Int denote the category of closed intervals [*a*, *b*] in the real line; the morphisms are the inclusions [*a*, *b*] ⊆ [*c*, *d*]. Then an R-space X = (*X*, *f*) can be thought of as a functor X : Int → Top that carries each interval [*a*, *b*] to the corresponding interlevelset X*a**b*; the morphism associated to an inclusion [*a*, *b*] ⊆ [*c*, *d*] is the inclusion X*a**b* ⊆ X*c**d*. We are interested, then, in understanding the composite functors $$\begin{tikzpicture}[xscale=1.5,yscale=1] \draw (0,0) node(00){${\mathsf{Int}}$} ; \draw (1.5,0) node(10) {${\mathsf{Top}}$} ; \draw (3,0) node(02) {${\mathsf{Vect}}$} ; \draw[->] (00) to node[above]{${\mathbb{X}}$} (10); \draw[->] (10) to node[above]{${\mathrm{H}}$} (02); \end{tikzpicture}$$ where H is a homology functor with coefficients in a field, and Vect is the category of vectors spaces over that field. The following can be viewed as a preliminary attempt to understand this functor: [levelset zigzag persistent homology] In , Carlsson et al. proposed the following protocol for studying an R-space X = (*X*, *f*). Suppose X is Morse-like, with critical values *a*1 < *a*2 < … < *a**n*. Let *s*0 < *s*1 < … < *s**n* be a collection of ‘intercritical values’, interleaved between the critical values in the sense that *s**i* − 1 < *a**i* < *s**i*. Then the zigzag diagram of topological spaces (and inclusion maps) $$\begin{tikzpicture}[xscale=1.2,yscale=1.2] \draw (1,1) node(11) {${\mathbb{X}}\_{s\_0}^{s\_1}$} (3,1) node(31) {${\mathbb{X}}\_{s\_1}^{s\_2}$} (5,1) node(51) {$\cdots$} (7,1) node(71) {${\mathbb{X}}\_{s\_{n-1}}^{s\_n}$} ; \draw (0,0) node(00) {${\mathbb{X}}\_{s\_0}^{s\_0}$} (2,0) node(20) {${\mathbb{X}}\_{s\_1}^{s\_1}$} (4,0) node(40) {${\mathbb{X}}\_{s\_2}^{s\_2}$} (6,0) node(60) {$\cdots$} (8,0) node(80) {${\mathbb{X}}\_{s\_n}^{s\_n}$} ; \draw[->] (00) -- (11); \draw[->] (20) -- (11); \draw[->] (20) -- (31); \draw[->] (40) -- (31); \draw[->] (40) -- (51); \draw[->] (60) -- (71); \draw[->] (80) -- (71); \end{tikzpicture}$$ gives rise to a zigzag diagram of vector spaces (and linear maps) $$\begin{tikzpicture}[xscale=1.2,yscale=1.2] \draw (1,1) node(11) {${\mathrm{H}}({\mathbb{X}}\_{s\_0}^{s\_1})$} (3,1) node(31) {${\mathrm{H}}({\mathbb{X}}\_{s\_1}^{s\_2})$} (5,1) node(51) {$\cdots$} (7,1) node(71) {${\mathrm{H}}({\mathbb{X}}\_{s\_{n-1}}^{s\_n})$} ; \draw (0,0) node(00) {${\mathrm{H}}({\mathbb{X}}\_{s\_0}^{s\_0})$} (2,0) node(20) {${\mathrm{H}}({\mathbb{X}}\_{s\_1}^{s\_1})$} (4,0) node(40) {${\mathrm{H}}({\mathbb{X}}\_{s\_2}^{s\_2})$} (6,0) node(60) {$\cdots$} (8,0) node(80) {${\mathrm{H}}({\mathbb{X}}\_{s\_n}^{s\_n})$} ; \draw[->] (00) -- (11); \draw[->] (20) -- (11); \draw[->] (20) -- (31); \draw[->] (40) -- (31); \draw[->] (40) -- (51); \draw[->] (60) -- (71); \draw[->] (80) -- (71); \end{tikzpicture}$$ whose indecomposable summands are recorded as the levelset zigzag barcode of X. There are four types of bars, according as the ends of the summand lie in the top row or the bottom row of the diagram. Each bar is then associated with an open, closed, or half-closed real interval with endpoints in the set of critical values; the Morse-like assumption ensures that the interval is precisely the interval of persistence of the corresponding homological feature. In  it is shown that the levelset zigzag barcode carries exactly the same information as the extended persistence barcodes of (*X*, *f*) and of (*X*,  − *f*), as well as another related object called the ‘up-down persistence’ barcode. The advantage of levelset zigzag over the other, equivalent, theories is that it is manifestly symmetrical with respect to symmetries of the real line. Moreover, fiberwise homological features are expressed in the correct dimension in this theory; no dimension shifts take place. The main weakness of levelset zigzag persistence is that it is stubbornly discrete, in the sense that it is a forbidding prospect to try to take a continuous limit of the zigzag diagrams used in the theory. Parametrized homology is our response to this weakness. We take advantage of the theory of rectangle measures from  to define four continuous-parameter persistence diagrams, corresponding to the four types of bars in the levelset zigzag barcode. Each diagram represents a set of homological features and carries information about how they perish at both ends of the interval over which they are defined. The diagrams are stable with respect to perturbation of the function *f*. One advantage of using rectangle measures is that the proofs, in a certain sense, become ‘bounded’. In the levelset zigzag framework, in order to prove anything, one has to consider zigzag diagrams of arbitrary length. In the parametrized homology framework, result can be expressed as statements about rectangle measures, and can be proved using specific diagrams of a fixed size. The proofs are generally very straightforward, once the appropriate ‘diagram calculus’ has been mastered. Outline ------- In Section [sec:tools], we review the algebraic machinery needed to define parametrized homology: zigzag modules, quiver representation diagrams, rectangle measures. Section [sec:PH] comprises the main body of this paper. In Section [subsec:4-measures], we provisionally define four rectangle measures that will eventually yield the four persistence diagrams of parametrized homology. A certain homological tautness property is required for these measures to be additive; this is treated in Sections [sec:tautness] and [sec:additivity]. Section [sec:finiteness] identifies conditions under which the measures are finite. Under these favourable conditions, the construction of the four persistence diagrams in Section [paramhom] is immediate. In Section [subsec:LZZ], we show that parametrized homology exactly emulates levelset zigzag persistence in the discrete Morse-like case. Section [sec:16] is devoted to geometric considerations. Each of the four diagrams may contain features supported over open, closed and half-open intervals. We illustrate the sixteen possible behaviours, and show that only four of them occur in the compact case. Finally, in Section [subsec:stability] we prove the stability theorem, and in Section [sec:extended-persistence] establish the relationship with continuous-parameter extended persistence. A brief discussion of parametrized cohomology, in Section [sec:parcoho], concludes the paper. Algebraic Tools =============== In this section, we review the tools from and  that we use to develop parametrized homology invariants. Throughout this paper, vector spaces are taken to be over an arbitrary field $\mathbf k$. In certain instances, the field is specified. Zigzag modules -------------- A *zigzag module* V of length *n* (see ) is a sequence of vector spaces and linear maps between them $$\begin{tikzpicture}[xscale=1.2,yscale=1.2] \draw (1,1) node(11) {$V\_1$} (1.97,1) node(21) {$V\_2$} (2.94,1) node(31){$\ldots$} (4,1) node(41) {$V\_n.$}(4.15,1); \draw[<->] (11) to node[above]{$$} (21); \draw[<->] (21) to node[above]{$$} (31); \draw[<->] (31) to node[above]{$$} (41); \end{tikzpicture}$$ Each [xscale=0.8,yscale=1] (1,1) node(11) (2,1) node(21); (11) – (21); represents either a forward map [xscale=0.8,yscale=1] (1,1) node(11) (2,1) node(21); (11) – (21); or a backward map $\begin{tikzpicture}[xscale=0.8,yscale=1] \draw (1,1) node(11){} (2,1) node(21){}; \draw[<-] (11) -- (21); \end{tikzpicture}$. The particular choice of directions for a given zigzag module is called its *shape*. If every map is a forward map the zigzag module is called a *persistence module* . The basic building blocks of zigzag modules are the interval modules. Fix a shape of length *n*. The interval module I[*p*, *q*] of that shape is the zigzag module $$\begin{tikzpicture}[xscale=1.2,yscale=1.2] \draw (1.05,1) node(11) {$I\_1$} (1.97,1) node(21) {$I\_2$} (2.93,1) node(31){$\ldots$} (3.95,1) node(41) {$I\_n$}(4.15,1); \draw[<->] (11) to node[above]{$$} (21); \draw[<->] (21) to node[above]{$$} (31); \draw[<->] (31) to node[above]{$$} (41); \end{tikzpicture}$$ where $I\_i =\mathbf k$ for *p* ≤ *i* ≤ *q*, and *I**i* = 0 otherwise, and where every or is the identity map. Let ${\mathbb{V}}\_{\{1, 2, 3\}} = {\raisebox{-0.5em}{\begin{tikzpicture}[xscale=1.15,yscale=1.15] \draw (1,1) node(11) {$V1$} (2,1) node(21) {$V2$} (3,1) node(31){$V3$}; \draw[->] (11) to node[above]{$$} (21); \draw[->] (21) to node[above]{$$} (31); \end{tikzpicture}}}$. The six interval modules over V may be represented pictorially as follows: $$\begin{aligned} {3} {\mathbb{I}}[1,3] &= { \raisebox{-0.25ex}{\includegraphics[scale=1.8,page=6]{onof}}} \qquad & {\mathbb{I}}[2,3] &= { \raisebox{-0.25ex}{\includegraphics[scale=1.8,page=5]{onof}}} \qquad & {\mathbb{I}}[3,3] &= { \raisebox{-0.25ex}{\includegraphics[scale=1.8,page=3]{onof}}} \\ {\mathbb{I}}[1,2] &= { \raisebox{-0.25ex}{\includegraphics[scale=1.8,page=4]{onof}}} \qquad & {\mathbb{I}}[2,2] &= { \raisebox{-0.25ex}{\includegraphics[scale=1.8,page=2]{onof}}} \\ {\mathbb{I}}[1,1] &= { \raisebox{-0.25ex}{\includegraphics[scale=1.8,page=1]{onof}}}\end{aligned}$$ Each dark green node represents a copy of the field **k** and each light pink node represents a copy of the zero vector space. Identity maps are represented by thickened green lines. A theorem of Gabriel  implies that any finite-dimensional zigzag module can be decomposed as a direct sum of interval modules. The extension to infinite-dimensional zigzag modules follows from a theorem of Auslander . The list of summands that appear in the decomposition is an isomorphism invariant of V by the Krull–Schmidt–Azumaya theorem . We call this isomorphism invariant the *zigzag persistence* of V. [zigps] Consider a zigzag diagram X of topological spaces and continuous maps between them: $$\begin{tikzpicture}[xscale=1.25,yscale=1.2] \draw (1,1) node(11) {$X\_1$} (2,1) node(21) {$X\_2$} (3,1) node(31){$\ldots$} (4,1) node(41) {$X\_n$}; \draw[<->] (11) to node[above]{$$} (21); \draw[<->] (21) to node[above]{$$} (31); \draw[<->] (31) to node[above]{$$} (41); \end{tikzpicture}$$ We get a zigzag module HX by applying a homology functor H = H*j*( − ; **k**) to this diagram. Decomposing the diagram, we can write $$\begin{tikzpicture}[xscale=1.63,yscale=1.2] \draw (0.75,1) node(11) {${\mathrm{H}}\_j(X\_1)$} (2,1) node(21) {${\mathrm{H}}\_j(X\_2)$} (3,1) node(31){$\ldots$} (4,1) node(41) {${\mathrm{H}}\_j(X\_n)$} (4.85,1) node(51){$\cong$} (6,1) node(61){$\bigoplus\_{i\in I}{\mathbb{I}}[p\_i, q\_i].$}; \draw[<->] (11) to node[above]{$$} (21); \draw[<->] (21) to node[above]{$$} (31); \draw[<->] (31) to node[above]{$$} (41); \end{tikzpicture}$$ The zigzag persistent homology of X (for the functor H) is then the multiset of intervals [*p**i*, *q**i*] in the interval decomposition. The *multiplicity* of an interval [*p*, *q*] in a zigzag module V is the number of copies of I[*p*, *q*] that occur in the interval decomposition of V. This number is written ⟨[*p*, *q*] ∣ V⟩ and takes values in the set {0, 1, 2, …, ∞}. (For our purposes we do not need to distinguish different infinite cardinals.) Finally, the *persistence diagram* of V is the multiset Dgm(V) in {(*p*, *q*) ∣ 1 ≤ *p* ≤ *q* ≤ *n*} defined by the multiplicity function (*p*, *q*) ↦ ⟨[*p*, *q*] ∣ V⟩. We will often use pictorial notation for these multiplicities. For example, given a persistence module $ {\mathbb{V}}= {\raisebox{-0.5em}{\begin{tikzpicture}[xscale=1.2,yscale=1.2] \draw (1,1) node(11) {$V1$} (2,1) node(21) {$V2$} (3,1) node(31){$V3$}; \draw[->] (11) to node[above]{$$} (21); \draw[->] (21) to node[above]{$$} (31); \end{tikzpicture}}} $ we may write $$\langle [2,3] \mid {\mathbb{V}}\rangle \quad \text{or} \quad \langle { \raisebox{-0.25ex}{\includegraphics[scale=1.8,page=5]{onof}}} \mid {\mathbb{V}}\rangle \quad \text{or simply} \quad \langle { \raisebox{-0.25ex}{\includegraphics[scale=1.8,page=5]{onof}}} \rangle$$ for the multiplicity of I[2, 3] in V. Two calculation principles -------------------------- There are two methods from  that we repeatedly use to calculate multiplicities: the Restriction Principle and the Diamond Principle. [Restriction Principle][RP] Let V be a zigzag module with two consecutive maps in the same direction $$\begin{tikzpicture}[xscale=1.2,yscale=1] \draw (0.85,1) node(11) {$V\_1$} (1.9,1) node(21) {$V\_2$} (2.95,1) node(31){$\ldots$} (4.15,1) node(41) {$V\_{k-1}$} (5.30,1) node(51){$V\_k$} (6.30,1) node(61){$V\_{k+1}$} (7.35,1) node(71){$\ldots$} (8.45,1) node(81){$V\_n$} ; \draw[<->] (11) to node[above]{$$} (21); \draw[<->] (21) to node[above]{$$} (31); \draw[<->] (31) to node[above]{$$} (41); \draw[->] (41) to node[above]{$\scriptstyle g$} (51); \draw[->] (51) to node[above]{$\scriptstyle h$} (61); \draw[<->] (61) to node[above]{$$} (71); \draw[<->] (71) to node[above]{$$} (81); \end{tikzpicture}$$ and let W be the zigzag module $$\begin{tikzpicture}[xscale=1.2,yscale=1] \draw (0.85,1) node(11) {$V\_1$} (1.9,1) node(21) {$V\_2$} (2.95,1) node(31){$\ldots$} (4.15,1) node(41) {$V\_{k-1}$} (6.30,1) node(61){$V\_{k+1}$} (7.35,1) node(71){$\ldots$} (8.45,1) node(81){$V\_n$} ; \draw[<->] (11) to node[above]{$$} (21); \draw[<->] (21) to node[above]{$$} (31); \draw[<->] (31) to node[above]{$$} (41); \draw[->] (41) to node[above]{$\scriptstyle hg$} (61); \draw[<->] (61) to node[above]{$$} (71); \draw[<->] (71) to node[above]{$$} (81); \end{tikzpicture}$$ obtained by combining those maps into a single composite map and deleting the intermediate vector space *V**k*. Let [*p*, *q*] be an interval over the index set for W (so *p*, *q* ≠ *k*). Then ⟨[*p*, *q*] ∣ W⟩ = ∑[*p̂*, *q̂*]⟨[*p̂*, *q̂*] ∣ V⟩ where the sum is over those intervals [*p̂*, *q̂*] over the index set for V that restrict to [*p*, *q*] over the index set of W. Take an arbitrary interval decomposition of V. This induces an interval decomposition of W. Summands of W of type [*p*, *q*] arise precisely from summands of V of types [*p̂*, *q̂*] that restrict to [*p*, *q*] over the index set of W. Consider a zigzag module $${\mathbb{V}}= {\raisebox{-0.5em}{\begin{tikzpicture}[xscale=1.1,yscale=1] \draw (1,1) node(11) {$V\_1$} (2,1) node(21) {$V\_2$} (3,1) node(31){$V\_3$}(4, 1) node(41){$V\_4$} (5, 1)node(51){$V\_5$}; \draw[->] (11) to node[above]{$$} (21); \draw[->] (21) to node[above]{$$} (31); \draw[<-] (31) to node[above]{$$} (41); \draw[<-] (41) to node[above]{$$} (51); \end{tikzpicture}}}$$ and its restrictions $$\begin{aligned} {\mathbb{V}}\_{1,2,3,5} &= {\raisebox{-0.5em}{\begin{tikzpicture}[xscale=1.1,yscale=1] \draw (1,1) node(11) {$V\_1$} (2,1) node(21) {$V\_2$} (3,1) node(31){$V\_3$} (5,1) node(51){$V\_5$}; \draw[->] (11) to node[above]{$$} (21); \draw[->] (21) to node[above]{$$} (31); \draw[<-] (31) to node[above]{$$} (51); \end{tikzpicture}}} \\ {\mathbb{V}}\_{1,3,4,5} &= {\raisebox{-0.5em}{\begin{tikzpicture}[xscale=1.1,yscale=1] \draw (1,1) node(11) {$V\_1$} (3,1) node(31){$V\_3$} (4,1) node(41){$V\_4$} (5,1) node(51){$V\_5$}; \draw[->] (11) to node[above]{$$} (31); \draw[<-] (31) to node[above]{$$} (41); \draw[<-] (41) to node[above]{$$} (51); \end{tikzpicture}}}\end{aligned}$$ obtained in the manner described above. Then $$\begin{aligned} \langle { \raisebox{-0.25ex}{\includegraphics[scale=1.8,page=2]{example}}} \mid {\mathbb{V}}\_{1,2,3,5} \, \rangle &= \langle { \raisebox{-0.25ex}{\includegraphics[scale=1.8,page=1]{example}}} \mid {\mathbb{V}}\, \rangle, \\ \langle { \raisebox{-0.25ex}{\includegraphics[scale=1.8,page=4]{example}}} \mid {\mathbb{V}}\_{1,3,4,5} \, \rangle &= \langle { \raisebox{-0.25ex}{\includegraphics[scale=1.8,page=1]{example}}} \mid {\mathbb{V}}\, \rangle + \langle { \raisebox{-0.25ex}{\includegraphics[scale=1.8,page=6]{example}}} \mid {\mathbb{V}}\, \rangle.\end{aligned}$$ The extra term occurs when the interval for the restricted module abuts the long edge on either side (so there is both a clear node and a filled node at that edge). There are then two possible intervals which restrict to it. The Diamond Principle relates the interval multiplicities of zigzag modules that are related by a different kind of local change. The principle is most sharply expressed in terms of the reflection functors of Bernstein, Gelfand and Ponomarev . We make do with a simpler non-functorial statement. We say that a diamond-shaped commuting diagram of vector spaces $$\begin{tikzpicture}[xscale=1,yscale=1] \draw (1.5,1.5) node(11) {$C$} (-1.5,1.5) node(011) {$B$} (0,3) node(02) {$D$} ; \draw (0,0) node(00){$A$} ; \draw[->] (00) to node[below]{$~i\_2$} (11); \draw[->] (00) to node[below]{$i\_1$} (011); \draw[->] (011) to node[left]{$j\_1~$} (02); \draw[->] (11) to node[right]{$~j\_2$} (02); \end{tikzpicture}$$ is *exact* if the sequence $$\begin{tikzpicture}[xscale=1.8,yscale=1] \draw (1.5,0) node(10) {$B\oplus C$} (3,0) node(02) {$D$} ; \draw (0,0) node(00){$A$} ; \draw[->] (00) to node[above]{{\small$i\_1\oplus i\_2$}} (10); \draw[->] (10) to node[above]{$j\_1-j\_2$} (02); \end{tikzpicture}$$ is exact at *B* ⊕ *C*. This means that a pair of vectors *β* ∈ *B*, *γ* ∈ *C* satisfies *j*1(*β*) = *j*2(*γ*) if and only if there exists *α* ∈ *A* such that *β* = *i*1(*α*) and *γ* = *i*2(*α*). [Diamond Principle ] [DP] Consider a diagram of vector spaces [xscale=1,yscale=1] (-2.3,0) node(00)*V*1 ; (-1,0) node(10)… ; (0.5,0) node(110)*V**k* − 2 ; (2,0) node(20)*V**k* − 1 ; (3,1) node(31)*V**k*+ ; (3,-1) node(301)*V**k*−; (4,0) node(40)*V**k* + 1; (5.5,0) node(550)*V**k* + 2; (7,0) node(50)…; (8.3,0) node(60)*V**n*; (00) to node[above] $$} (10); \draw[<->] (10) to node[above]{$$ (110); (110) to node[above] $$} (20); \draw[->] (20) to node[left]{} (31); \draw[->] (40) to node[right]{} (31); \draw[->] (301) to node[left]{} (20); \draw[->] (301) to node[right]{} (40); \draw[<->] (40) to node[above]{$$ (550); (550) to node[above] $$} (50); \draw[<->] (50) to node[above]{$$ (60); where the middle diamond is exact. Let V+, V− respectively denote the upper zigzag module (containing *V**k*+) and the lower zigzag module (containing *V**k*−) in this diagram. Then the following multiplicities are equal. (i) If the interval [*p*, *q*] does not meet {*k* − 1, *k*, *k* + 1} then ⟨[*p*, *q*] ∣ V+⟩ = ⟨[*p*, *q*] ∣ V−⟩. (ii) If the interval [*p*, *q*] completely contains {*k* − 1, *k*, *k* + 1} then ⟨[*p*, *q*] ∣ V+⟩ = ⟨[*p*, *q*] ∣ V−⟩. (iii) For *p* ≤ *k* − 1 we have $$\begin{aligned} {4} &\langle [p,k] &&\mid {\mathbb{V}}^+ \rangle &&= \langle [p,k-1] &&\mid {\mathbb{V}}^- \rangle, \\ &\langle [p,k-1] &&\mid {\mathbb{V}}^+ \rangle &&= \langle [p,k] &&\mid {\mathbb{V}}^- \rangle.\end{aligned}$$ (iv) For *q* ≥ *k* + 1 we have $$\begin{aligned} {4} &\langle [k,q] &&\mid {\mathbb{V}}^+ \rangle &&= \langle [k+1,q] &&\mid {\mathbb{V}}^- \rangle, \\ &\langle [k+1,q] &&\mid {\mathbb{V}}^+ \rangle &&= \langle [k,q] &&\mid {\mathbb{V}}^- \rangle.\end{aligned}$$ The diagrams ![image](diamond-principle)![image](diamond-principle)![image](diamond-principle)![image](diamond-principle)![image](diamond-principle) express the last three of these rules pictorially. The theorem gives no information about ⟨[*k*, *k*] ∣ V+⟩ or ⟨[*k*, *k*] ∣ V−⟩. These quantities are independent of each other and of all other multiplicities. We use the Diamond Principle frequently in the following situation. Consider a diagram of topological spaces of the following form: [xscale=1,yscale=1] (-2.3,0) node(00)*X*1 ; (-1,0) node(10)… ; (0.5,0) node(110)*X**k* − 2 ; (2,0) node(20)*A* ; (3,1) node(31)*A* ∪ *B* ; (3,-1) node(301)*A* ∩ *B*; (4,0) node(40)*B*; (5.5,0) node(550)*X**k* + 2; (7,0) node(50)…; (8.3,0) node(60)*X**n*; (00) to node[above] $$} (10); \draw[<->] (10) to node[above]{$$ (110); (110) to node[above] $$} (20); \draw[->] (20) to node[left]{} (31); \draw[->] (40) to node[right]{} (31); \draw[->] (301) to node[left]{} (20); \draw[->] (301) to node[right]{} (40); \draw[<->] (40) to node[above]{$$ (550); (550) to node[above] $$} (50); \draw[<->] (50) to node[above]{$$ (60); Here *A*, *B* are subspaces of some common ambient space. Applying a homology functor H, we obtain an upper zigzag diagram V∪ and a lower zigzag diagram V∩. The exactness of the diamond is precisely the exactness of the central term in the following excerpt from the Mayer–Vietoris sequence: $$\begin{tikzpicture}[xscale=3,yscale=1] \draw (0.3,1) node(01) {$\ldots$} (1,1) node(11) {$ {\mathrm{H}}(A \cap B)$} (2,1) node(21) {$ {\mathrm{H}}(A)\oplus {\mathrm{H}}(B)$} (3,1) node(31){$ {\mathrm{H}}(A \cup B)$} (3.7,1) node(41) {$\ldots$}; \draw[->] (01) to node[above]{$$} (11); \draw[->] (11) to node[above]{$$} (21); \draw[->] (21) to node[above]{$$} (31); \draw[->] (31) to node[above]{$$} (41); \end{tikzpicture}$$ In situations where the Mayer–Vietoris theorem holds, we can use the Diamond Principle to compare the interval summands of V∪ and V∩. The reader is reminded that the Mayer–Vietoris theorem is not always applicable. We treat this matter carefully in Section [sec:tautness]. Persistence diagrams and measures --------------------------------- As we discussed in Section 2.1, a zigzag module with a finite index set decomposes into interval modules, the list of summands being unique up to reordering. There are finitely many interval module types, so the structure of the zigzag module is determined by a finite list of multiplicities. On the other hand, the objects we are studying are spaces parametrized over the real line; and so we will want to define continuous-parameter persistence diagrams. The motivating heuristic is that each topological feature will be supported over some interval of R. These intervals may be open, closed or half-open, so we follow Chazal et al.  in describing their endpoints as real numbers *decorated* with a + or − superscript. The superscript \* may be used for an unspecified decoration. Here are the four options: | | | | | --- | --- | --- | | (*p*, *q*) | (*p*+, *q*−) | | | (*p*, *q*] | (*p*+, *q*+) | | | [*p*, *q*) | (*p*−, *q*−) | | | [*p*, *q*] | (*p*−, *q*+) | | Except for the degenerate interval [*p*, *p*] = (*p*−, *p*+), we require *p* < *q*. For infinite intervals, we allow *p* =  − ∞ and *q* =  + ∞ and their decorated forms *p*\* =  − ∞+ and *q*\* =  + ∞−. Given a collection (i.e. multiset) of such intervals, we can form a persistence diagram by drawing each (*p*\*, *q*\*) as a point in the plane with a tick to indicate the decorations. The tick convention is self-explanatory. The diagram resides in the extended half-plane H = {(*p*, *q*) ∣  − ∞ ≤ *p* < *q* ≤ ∞} which we can draw schematically as a triangle. If we omit the ticks (i.e. forget the decorations), what remains is an *undecorated* persistence diagram. Our main mechanism for defining and studying continuous-parameter persistence modules is taken from : a finite measure theory designed for this task. Define Rect(H) = {[*a*, *b*] × [*c*, *d*] ⊂ H ∣  − ∞ ≤ *a* < *b* < *c* < *d* ≤  + ∞}. This consists of finite rectangles, horizontal semi-infinite strips, vertical semi-infinite strips and infinite quadrants in H. A rectangle measure or *r-measure* on H is a function $$\begin{tikzpicture}[xscale=1.3,yscale=1] \draw (0.1,0)node(11) {$\mu \colon {\operatorname{Rect}}({\mathcal{H}})$} (3,0) node(21){$ \{0, 1, 2, 3, \ldots \}\cup \{\infty \}$}; \draw[->] (11) -- (21); \end{tikzpicture}$$ that is additive with respect to splitting a rectangle horizontally or vertically into two rectangles. Explicitly, we require $$\begin{aligned} {2} \mu([a,b] \times [c,d]) &= \mu([a,p] \times [c,d]) + \mu([p,b] \times [c,d]) && \qquad \text{\footnotesize (horizontal split)} \\ \mu([a,b] \times [c,d]) &= \mu([a,b] \times [c,q]) + \mu([a,b] \times [q,d]) && \qquad \text{\footnotesize (vertical split)}\end{aligned}$$ whenever *a* < *p* < *b* < *c* < *q* < *d* (see Figure [fig1]). By iterating these formulas, it follows that *μ* must be additive with respect to arbitrary tilings of a rectangle by other rectangles. This implies, in particular, that *μ* is monotone with respect to inclusion of rectangles. [fig1] The ‘atoms’ for this measure theory are decorated points rather than points; when a rectangle is split in two, points along the split line have to be assigned to one side or the other and this is done using the tick. We write (*p*\*, *q*\*) ∈ *R* to mean that (*p*, *q*) lies in *R* with the tick pointing into the interior of *R* (this is automatic for interior points). A decorated point (*p*\*, *q*\*) is contained in *R* if and only if (*p*, *q*) is contained in *R* and the tick points into the interior. [] [equivextended] There is a bijective correspondence between * Finite r-measures *μ* on H; and * Locally finite multisets *A* of decorated points in H. Here ‘finite’ means that *μ*(*R*) < ∞ for all *R*, and ‘locally finite’ means that card(*A*∣*R*) < ∞ for all *R*. Explicitly, a multiset *A* corresponds to the measure *μ* defined by the formula *μ*(*R*) = card(*A*∣*R*),  (the cardinality of the multiset of decorated points of *A* that belong to *R*); and, conversely, a measure *μ* corresponds to the multiset *A* with multiplicity function m*A*(*p*\*, *q*\*) = min{*μ*(*R*)  ∣  *R* ∈ Rect(H) \rm such that (*p*\*, *q*\*) ∈ *R*}. In other words, finite r-measures correspond exactly to decorated persistence diagrams. Since r-measures are monotone, the ‘min’ in the formula for m*A* can be calculated as a limit. For example $$\begin{aligned} \mathrm{m}\_A(p^+,q^-) &= \lim\_{\epsilon \to 0} \mu([p, p+\epsilon] \times [q-\epsilon, q]),\end{aligned}$$ with similar formulas for the other choices of decoration for (*p*\*, *q*\*) and for points at infinity. Since the expression inside the ‘lim’ takes values in the natural numbers and decreases as *ε* decreases, it necessarily stabilizes for sufficiently small *ε*. The multiset *A* corresponding to a finite r-measure *μ* is its *decorated diagram*, written Dgm(*μ*). We obtain the *undecorated diagram* ${\operatorname{Dgm}\_{\rm u}}(\mu)$ by forgetting the decorations. This is a multiset in H. When the r-measure is not finite, the *finite support* is defined in  to be the set of decorated points in H that are contained in some rectangle of finite measure. Within the finite support there is a well-defined decorated persistence diagram which characterizes the r-measure as above, with the proviso that rectangles which extend beyond the finite support have infinite measure. In particular, the undecorated diagram can be thought of as a locally finite multiset defined in some open set F ⊆ H and deemed to have infinite multiplicity everywhere else in the extended plane. Parametrized Homology ===================== In this section we define ‘parametrized homology’ invariants for R-spaces. Given an R-space X = (X, *f*) and a homology functor H with field coefficients, we define four persistence diagrams $${\operatorname{Dgm}}\!\!{}^{{^{\scriptscriptstyle{{\slash\!\backslash}}}}}({\mathrm{H}}{\mathbb{X}}),\; {\operatorname{Dgm}}\!\!{}^{{^{\scriptscriptstyle{{\backslash\!\backslash}}}}}({\mathrm{H}}{\mathbb{X}}),\; {\operatorname{Dgm}}\!\!{}^{{^{\scriptscriptstyle{{\slash\!\slash}}}}}({\mathrm{H}}{\mathbb{X}}),\; {\operatorname{Dgm}}\!\!{}^{{^{\scriptscriptstyle{{\backslash\!\slash}}}}}({\mathrm{H}}{\mathbb{X}})$$ that detect topological features exhibiting four different behaviors. We will need to impose conditions on H and X to guarantee that the r-measures used to define these diagrams are additive and finite. Four measures ------------- Let X = (*X*, *f*) be a R-space and let H be a homology functor with field coefficients. Given a rectangle *R* = [*a*, *b*] × [*c*, *d*],    − ∞ ≤ *a* < *b* < *c* < *d* ≤  + ∞,  we wish to count the homological features of X that are supported over the closed interval [*b*, *c*] but do not reach either end of the open interval (*a*, *d*). Accordingly, consider the diagram $${\mathbb{X}}\_{\{a, b, c, d\}}: \raisebox{-4.25ex}{ \begin{tikzpicture}[xscale=1.05,yscale=1.05] \draw (1,1) node(11) {${\mathbb{X}}\_a^b$} (3,1) node(31) {${\mathbb{X}}\_b^c$} (5,1) node(51) {${\mathbb{X}}\_c^d$} ; \draw (0,0) node(00){${\mathbb{X}}\_a^a$} (2,0) node(20){${\mathbb{X}}\_b^b$} (4,0) node(40){${\mathbb{X}}\_c^c$} (6,0) node(60){${\mathbb{X}}\_d^d$} ; \draw[->] (00) -- (11); \draw[->] (20) -- (11); \draw[->] (20) -- (31); \draw[->] (40) -- (31); \draw[->] (40) -- (51); \draw[->] (60) -- (51); \end{tikzpicture}}$$ of spaces and inclusion maps, where X*a**b* = *f*− 1[*a*, *b*]. We assume X− ∞− ∞ and X+ ∞+ ∞ to be empty if they occur. Apply H to obtain a diagram $${\mathrm{H}}{\mathbb{X}}\_{\{a, b, c, d\}}: \raisebox{-4.25ex}{ \begin{tikzpicture}[xscale=1.1,yscale=1.1] \draw (1,1) node(11) {${\mathrm{H}}({\mathbb{X}}\_a^b)$} (3,1) node(31) {${\mathrm{H}}({\mathbb{X}}\_b^c)$} (5,1) node(51) {${\mathrm{H}}({\mathbb{X}}\_c^d)$} ; \draw (0,0) node(00){${\mathrm{H}}({\mathbb{X}}\_a^a)$} (2,0) node(20){${\mathrm{H}}({\mathbb{X}}\_b^b)$} (4,0) node(40){${\mathrm{H}}({\mathbb{X}}\_c^c)$} (6,0) node(60){${\mathrm{H}}({\mathbb{X}}\_d^d)$} ; \draw[->] (00) -- (11); \draw[->] (20) -- (11); \draw[->] (20) -- (31); \draw[->] (40) -- (31); \draw[->] (40) -- (51); \draw[->] (60) -- (51); \end{tikzpicture}}$$ of vector spaces and linear maps. Decomposing this zigzag module into interval modules, four of the multiplicities are of interest to us. Define four quantities as follows: $$\begin{aligned} {\mu^{{^{\scriptscriptstyle{{\slash\!\backslash}}}}}}\_{{\mathrm{H}}{\mathbb{X}}}(R) &= \langle{ \raisebox{-0.5ex}{ \includegraphics[scale=1.8]{zzud} }}\, |\, {\mathrm{H}}{\mathbb{X}}\_{\{a, b, c, d\}} \rangle \\ {\mu^{{^{\scriptscriptstyle{{\backslash\!\backslash}}}}}}\_{{\mathrm{H}}{\mathbb{X}}}(R) &= \langle { \raisebox{-0.5ex}{ \includegraphics[scale=1.8]{zzdd} }}\, |\, {\mathrm{H}}{\mathbb{X}}\_{\{a, b, c, d\}} \rangle \\ {\mu^{{^{\scriptscriptstyle{{\slash\!\slash}}}}}}\_{{\mathrm{H}}{\mathbb{X}}}(R) &= \langle { \raisebox{-0.5ex}{ \includegraphics[scale=1.8]{zzuu} }}\, |\, {\mathrm{H}}{\mathbb{X}}\_{\{a, b, c, d\}} \rangle \\ {\mu^{{^{\scriptscriptstyle{{\backslash\!\slash}}}}}}\_{{\mathrm{H}}{\mathbb{X}}}(R) &= \langle { \raisebox{-0.5ex}{ \includegraphics[scale=1.8]{zzdu} }}\, |\, {\mathrm{H}}{\mathbb{X}}\_{\{a, b, c, d\}} \rangle.\end{aligned}$$ Each of these counts topological features of a certain type, supported over [*b*, *c*] but not outside (*a*, *d*). Under favorable circumstances, these four functions of *R* turn out to be finite r-measures and therefore their behavior can be completely described by a decorated persistence diagram in the extended half-space. We will identify such circumstances in later parts of this chapter. The distinction between the four behaviors is seen in Figure [fig:four]. Consider 0-dimensional singular homology H = H0( − ; **k**). In each example HX*b**b* ≅ HX*b**c* ≅ HX*c**c* have rank two whereas HX*a**a*, HX*d**d* each have rank one. The way in which the second feature (i.e. the second connected component) perishes at each end is determined by the ranks of the maps $$\raisebox{-1.4ex}{\begin{tikzpicture}[xscale=1.6,yscale=1] \draw (0,0) node(00){${\mathrm{H}}{\mathbb{X}}\_a^b$} (1,0) node(10){${\mathrm{H}}{\mathbb{X}}\_b^b$}; \draw[->] (10) -- (00); \end{tikzpicture}} \quad\text{and}\quad \raisebox{-1.4ex}{\begin{tikzpicture}[xscale=1.6,yscale=1] \draw (0,0) node(00){${\mathrm{H}}{\mathbb{X}}\_c^c$} (1,0) node(10){${\mathrm{H}}{\mathbb{X}}\_c^d$}; \draw[->] (00) -- (10); \end{tikzpicture}}.$$ If the rank is two, then the feature has simply *expired* at that end: it is no longer there at X*a**a* or X*d**d*. If the rank is one, that means the feature has been *killed* by some 1-cell that has appeared in X*a**b* or X*c**d*. In terms of zigzag summands, the situation looks like this: $$\begin{aligned} {2} \raisebox{0.5ex}{\text{\scriptsize{is killed}}} &\; { \raisebox{-0.5ex}{ \includegraphics[scale=1.8]{zzud} }}\; \raisebox{0.5ex}{\text{\scriptsize{is killed}}} \qquad & \raisebox{0.5ex}{\text{\scriptsize{is killed}}} &\; { \raisebox{-0.5ex}{ \includegraphics[scale=1.8]{zzuu} }}\; \raisebox{0.5ex}{\text{\scriptsize{expires}}} \\ \raisebox{0.5ex}{\text{\scriptsize{expires}}} &\; { \raisebox{-0.5ex}{ \includegraphics[scale=1.8]{zzdd} }}\; \raisebox{0.5ex}{\text{\scriptsize{is killed}}} \qquad & \raisebox{0.5ex}{\text{\scriptsize{expires}}} &\; { \raisebox{-0.5ex}{ \includegraphics[scale=1.8]{zzdu} }}\; \raisebox{0.5ex}{\text{\scriptsize{expires}}}\end{aligned}$$ Our definitions associate the four symbols ${\protect\raisebox{0.35ex}{$\scriptstyle{{{\slash\!\backslash}}}$}}, {\protect\raisebox{0.35ex}{$\scriptstyle{{{\backslash\!\backslash}}}$}}, {\protect\raisebox{0.35ex}{$\scriptstyle{{{\slash\!\slash}}}$}}, {\protect\raisebox{0.35ex}{$\scriptstyle{{{\backslash\!\slash}}}$}}$ with these four behaviors. An unspecified behavior may be indicated by the symbol ${\protect\raisebox{0.35ex}{$\scriptstyle{{{\slash\!\slash\!\!\!\!\!\backslash\!\backslash}}}$}}$. [fig:four] The four behaviors have ‘coordinate-reversal’ symmetry. Specifically, suppose X = (*X*, *f*) and *R* = [*a*, *b*] × [*c*, *d*]. If we define the coordinate reversals $\overline{\mathbb{X}}= (X,-f)$ and $\overline{R} = [-d,-c] \times [-b,-a]$ then the relations $$\begin{aligned} {2} {\mu^{{^{\scriptscriptstyle{{\slash\!\backslash}}}}}}\_{{\mathrm{H}}\overline{{\mathbb{X}}}} (\overline{R}) &= {\mu^{{^{\scriptscriptstyle{{\slash\!\backslash}}}}}}\_{{\mathrm{H}}{\mathbb{X}}} (R) \qquad& {\mu^{{^{\scriptscriptstyle{{\slash\!\slash}}}}}}\_{{\mathrm{H}}\overline{{\mathbb{X}}}} (\overline{R}) &= {\mu^{{^{\scriptscriptstyle{{\backslash\!\backslash}}}}}}\_{{\mathrm{H}}{\mathbb{X}}} (R) \\ {\mu^{{^{\scriptscriptstyle{{\backslash\!\backslash}}}}}}\_{{\mathrm{H}}\overline{{\mathbb{X}}}} (\overline{R}) &= {\mu^{{^{\scriptscriptstyle{{\slash\!\slash}}}}}}\_{{\mathrm{H}}{\mathbb{X}}} (R) \qquad& {\mu^{{^{\scriptscriptstyle{{\backslash\!\slash}}}}}}\_{{\mathrm{H}}\overline{{\mathbb{X}}}} (\overline{R}) &= {\mu^{{^{\scriptscriptstyle{{\backslash\!\slash}}}}}}\_{{\mathrm{H}}{\mathbb{X}}} (R)\end{aligned}$$ follow immediately. Our next step is to identify when the four functions $\mu^{{^{\scriptscriptstyle{{\slash\!\slash\!\!\!\!\!\backslash\!\backslash}}}}}\_{{\mathrm{H}}{\mathbb{X}}}(R)$ are finite r-measures. We consider additivity first (Sections [sec:tautness] and [sec:additivity]), then finiteness (Section [sec:finiteness]). Tautness -------- In proving additivity and other identities, we will make much use of the Diamond Principle. For *p* < *q* < *r* < *s*, consider the following diamonds: $$\raisebox{-8ex}{ \begin{tikzpicture}[xscale=1.2,yscale=1.2] \draw (1,1) node(11) {${\mathrm{H}}\_k({\mathbb{X}}\_p^s)$} (0,0) node(00){${\mathrm{H}}\_k({\mathbb{X}}\_p^r)$} (2,0) node(20){${\mathrm{H}}\_k({\mathbb{X}}\_q^s)$} (1,-1) node(101){${\mathrm{H}}\_k({\mathbb{X}}\_q^r)$} ; \draw[->] (00) -- (11); \draw[->] (20) -- (11); \draw[<-] (20) -- (101); \draw[<-] (00) -- (101); \end{tikzpicture} } \qquad \text{and} \qquad \raisebox{-8ex}{ \begin{tikzpicture}[xscale=1.2,yscale=1.2] \draw (1,1) node(11) {${\mathrm{H}}\_k({\mathbb{X}}\_p^r)$} (0,0) node(00){${\mathrm{H}}\_k({\mathbb{X}}\_p^q)$} (2,0) node(20){${\mathrm{H}}\_k({\mathbb{X}}\_q^r)$} (1,-1) node(101){${\mathrm{H}}\_k({\mathbb{X}}\_q^q)$} ; \draw[->] (00) -- (11); \draw[->] (20) -- (11); \draw[<-] (20) -- (101); \draw[<-] (00) -- (101); \end{tikzpicture} }$$ The exactness of the left diamond is guaranteed by the Mayer–Vietoris theorem, which applies because the relative interiors of X*p**r*, X*q**s* contain the sets *f*− 1[*p*, *r*), *f*− 1(*q*, *s*] which cover X*p**s*. In contrast, there is no such guarantee for the right diamond: the relative interiors of X*p**q*, X*q**r* do not cover X*p**r*. We identify a local condition on the embedding of X*q**q* in X, in terms of the homology theory H, which gives us exactness of all such diamonds. Let *U* be any neighborhood of X*q**q* (such as X*p**r*). It splits into two parts: a lower-neighborhood *A* = *U* ∩ X*q* = *U* ∩ *f*− 1( − ∞, *q*],  and an upper-neighborhood *B* = *U* ∩ X*q* = *U* ∩ *f*− 1[*q*,  + ∞). Then *U* = *A* ∪ *B* and X*q**q* = *A* ∩ *B*, and we desire the exactness of $$\tag{$\diamond\_{AB}$} \raisebox{-3.5pc}{ \begin{tikzpicture}[xscale=1.2,yscale=1.2] \draw (1,1) node(11) {${\mathrm{H}}\_k(U)$} ; \draw (0,0) node(00){${\mathrm{H}}\_k(A)$} (2,0) node(20){${\mathrm{H}}\_k(B)$} (1,-1) node(101){${\mathrm{H}}\_k({\mathbb{X}}\_q^q)$} ; \draw[->] (00) -- (11); \draw[->] (20) -- (11); \draw[<-] (20) -- (101); \draw[<-] (00) -- (101); \end{tikzpicture} }$$ in whichever dimension *k* we are considering. Here are two criteria. The levelset X*q**q* is *H*k*-taut in *U** if the map (induced by inclusion) *α**k* + 1: H*k* + 1(*A*, X*q**q*) → H*k* + 1(*U*, *B*) is an epimorphism, and the map (induced by inclusion) *α**k*: H*k*(*A*, X*q**q*) → H*k*(*U*, *B*) is a monomorphism. The levelset X*q**q* is *H*k*-taut in *U** if the map (induced by inclusion) *β**k* + 1: H*k* + 1(*B*, X*q**q*) → H*k* + 1(*U*, *A*) is an epimorphism, and the map (induced by inclusion) *β**k*: H*k*(*B*, X*q**q*) → H*k*(*U*, *A*) is a monomorphism. The maps *α*\*, *β*\* are excision maps, and they would automatically be isomorphisms if the excision axiom applied to them. For the axiom to apply we would need $$\begin{aligned} \operatorname{closure}(B - {\mathbb{X}}\_q^q) &\subseteq \operatorname{interior}(B) \\ \operatorname{closure}(A - {\mathbb{X}}\_q^q) &\subseteq \operatorname{interior}(A)\end{aligned}$$ for *α*\*, *β*\* respectively, and this is not true in general. The two criteria are equivalent. We show that the statements for *α**k* + 1, *α**k* together imply the statements for *β**k* + 1, *β**k* (the converse being symmetric). The following commutative diagram is obtained by criss-crossing the long exact sequences for the triples (*U*, *A*, X*q**q*) and (*U*, *B*, X*q**q*): $$\begin{tikzpicture}[xscale=1.4,yscale=1.3] \draw (-0.1,0) node(00){${\mathrm{H}}\_{k+1}(B, {\mathbb{X}}\_q^q)$} (2,0) node(20){${\mathrm{H}}\_{k+1}(U, A)$} (4,0) node(40){${\mathrm{H}}\_{k}(A, {\mathbb{X}}\_q^q)$} (5.9,0) node(60){${\mathrm{H}}\_{k}(U, B)$} (1,1) node(11){${\mathrm{H}}\_{k+1}(U, {\mathbb{X}}\_q^q)$}; \draw (-0.1,2) node(02){${\mathrm{H}}\_{k+1}(A, {\mathbb{X}}\_q^q)$} (2,2) node(22){${\mathrm{H}}\_{k+1}(U, B)$} (4,2) node(42){${\mathrm{H}}\_{k}(B, {\mathbb{X}}\_q^q)$} (5.9,2) node(62){${\mathrm{H}}\_{k}(U, A)$} (5,1) node(51){${\mathrm{H}}\_{k}(U, {\mathbb{X}}\_q^q)$}; \draw[->] (00) to node[below]{$\scriptstyle \beta\_{k+1}$} (20); \draw[->] (00) -- (11); \draw[<-] (20) -- (11); \draw[->] (20) to node[below]{$\scriptstyle \partial$} (40); \draw[->] (40) to node[below]{$\scriptstyle \alpha\_k$} (60); \draw[->] (02) to node[above]{$\scriptstyle \alpha\_{k+1}$} (22); \draw[->] (22) to node[above]{$\scriptstyle \partial$} (42); \draw[->] (42) to node[above]{$\scriptstyle \beta\_k$} (62); \draw[->] (11) -- (22); \draw[->] (02) -- (11); \draw[->] (51) -- (60); \draw[->] (40) -- (51); \draw[->] (51) -- (62); \draw[->] (42) -- (51); \end{tikzpicture}$$ Note that *α**k* + 1 being an epimorphism implies that the upper ∂ is zero, and *α**k* being a monomorphism implies that the lower ∂ is zero. With that in mind, it becomes a routine diagram-chase to show that *β**k* + 1 is an epimorphism and *β**k* is a monomorphism. We use the term *normal neighborhood* to refer to a neighborhood which contains a closed neighborhood. In a normal topological space (such as a compact Hausdorff space), all neighborhoods of a closed set are normal. Closed neighborhoods are trivially normal. If the levelset X*q**q* is H*k*-taut in some normal neighborhood, then it is H*k*-taut in any normal neighborhood. Since any two normal neighborhoods contain a closed neighborhood in common, it is enough to show that $$\text{${\mathbb{X}}\_q^q$ is ${\mathrm{H}}\_k$-taut in~$U$} \quad \Leftrightarrow \quad \text{${\mathbb{X}}\_q^q$ is ${\mathrm{H}}\_k$-taut in~$W$}$$ whenever *U* ⊆ *W* are neighborhoods and *U* is closed. Writing *U* = *A* ∪ *B* and *W* = *A*ʹ ∪ *B*ʹ as usual, we also consider *V* = *A* ∪ *B*ʹ. Criterion A gives the same result for *U* as for *V*, by considering $$\begin{tikzpicture}[xscale=1.45,yscale=1.3] \draw (-0.1,0) node(00){${\mathrm{H}}\_{\*}(A, {\mathbb{X}}\_q^q)$} (1.8,0) node(20){${\mathrm{H}}\_{\*}(A\cup B, A)$} (4,0) node(40){${\mathrm{H}}\_{k}(A \cup B', B')$}; \draw[->] (00) -- (20); \draw[->] (20) to node[above]{$\scriptstyle \simeq$} (40); \end{tikzpicture}$$ The right-hand map is an isomorphism by the excision axiom, which applies in this situation because *A* ∪ *B* is a closed neighborhood of *A* in *A* ∪ *B*ʹ. Criterion B gives the same result for *V* as for *W*, by considering $$\begin{tikzpicture}[xscale=1.45,yscale=1.3] \draw (-0.1,0) node(00){${\mathrm{H}}\_{\*}(B', {\mathbb{X}}\_q^q)$} (1.8,0) node(20){${\mathrm{H}}\_{\*}(A\cup B', A)$} (4,0) node(40){${\mathrm{H}}\_{k}(A' \cup B', A')$}; \draw[->] (00) -- (20); \draw[->] (20) to node[above]{$\scriptstyle \simeq$} (40); \end{tikzpicture}$$ The right-hand map is an isomorphism by excision, since *A* ∪ *B*ʹ is a closed neighborhood of *B*ʹ in *A*ʹ ∪ *B*ʹ. The result follows. Accordingly, we say that the levelset X*q**q* is *H*k*-taut* if it is H*k*-taut in some, and therefore every, normal neighborhood. [taut] We say that the levelset X*q**q* is *H-taut* if it is H*k*-taut in all dimensions *k*. This means that for every normal neighborhood *U*, the maps *α**k* : H*k*(*A*, X*q**q*) → H*k*(*U*, *B*) are isomorphisms for all *k*, or equivalently *β**k* : H*k*(*B*, X*q**q*) → H*k*(*U*, *A*) are isomorphisms for all *k*. If the levelset X*q**q* is H*k*-taut, then the diagram (⋄*A**B*) is exact for any normal neighborhood *U* = *A* ∪ *B*. Using Criterion B, say, this is a straightforward chase on the diagram $$\begin{tikzpicture}[xscale=1.2,yscale=0.8] \draw (-0.1,0) node(00){${\mathrm{H}}\_{k+1}(B, {\mathbb{X}}\_q^q)$} (2,0) node(20){${\mathrm{H}}\_{k}({\mathbb{X}}\_q^q)$} (4,0) node(40){${\mathrm{H}}\_{k}(B)$} (5.9,0) node(60){${\mathrm{H}}\_{k}(B, {\mathbb{X}}\_q^q)$}; \draw (-0.1,2) node(02){${\mathrm{H}}\_{k+1}(U, A)$} (2,2) node(22){${\mathrm{H}}\_{k}(A)$} (4,2) node(42){${\mathrm{H}}\_{k}(U)$} (5.9,2) node(62){${\mathrm{H}}\_{k}(U, A)$}; \draw[->] (00) -- (20); \draw[->] (20) -- (40); \draw[->] (40) -- (60); \draw[->] (02) --(22); \draw[->] (22) -- (42); \draw[->] (42) -- (62); \draw[->] (00) to node[left]{\footnotesize{epi}} (02); \draw[->] (20) -- (22); \draw[->] (40) -- (42); \draw[->] (60) to node[right]{\footnotesize{mono}} (62); \end{tikzpicture}$$ for the map of long exact sequences induced by the inclusion (*B*, X*q**q*) → (*U*, *A*). This completes our treatment of tautness. Here are some examples. [prop:taut-ex] The R-space X = (*X*, *f*) has H-taut levelsets under any of the following circumstances: (i) *X* is locally compact, *f* is proper, and H is Steenrod–Sitnikov homology . (ii) Each X*q**q* is a deformation retract of some closed neighborhood in X*q* or X*q*. (iii) *X* is a smooth manifold and *f* is a proper Morse function. (iv) *X* is a locally compact polyhedron and *f* is a proper piecewise-linear map. (v) *X* ⊆ R*n* × R is a closed definable set in some o-minimal structure  and *f* is the projection onto the second factor. In particular, this applies when *X* is semialgebraic . *(i)* Steenrod–Sitnikov homology satisfies a strengthened form of the excision axiom  that does not require any restriction on the subspaces under consideration. Therefore maps in Definition [taut] are isomorphisms for any levelset X*q**q*. *(ii)* Let *C*1 be a closed neighborhood of X*q**q*. We know X*q**q* is a deformation retract of a closed neighborhood *C*2 in X*q*. We may assume without loss of generality that *C*2 ⊆ *C*1. Let *C* = *C*2 ∪ (*C*1 ∩ X*q*). The homology groups H*k*(*C*2, X*q**q*) and H*k*(*C*, *C* ∩ X*q*) are trivial for every *k* and therefore isomorphic, implying that X*q**q* is H-taut. *(iii)*, *(iv)* and *(v)* follow from *(ii)*. In particular, we prove *(v)* by applying. We occasionally need to consider Mayer–Vietoris diamonds in relative homology. We establish their exactness individually as they occur. Additivity ---------- We are now ready to prove that the four measures $\mu^{{^{\scriptscriptstyle{{\slash\!\slash\!\!\!\!\!\backslash\!\backslash}}}}}\_{{\mathrm{H}}{\mathbb{X}}}$ are additive. [thm:additivity] Let H be a homology functor with field coefficients and let X = (*X*, *f*) be an R-space whose levelsets are H-taut. Then ${\mu^{{^{\scriptscriptstyle{{\slash\!\backslash}}}}}}\_{{\mathrm{H}}{\mathbb{X}}}$, ${\mu^{{^{\scriptscriptstyle{{\backslash\!\backslash}}}}}}\_{{\mathrm{H}}{\mathbb{X}}}$, ${\mu^{{^{\scriptscriptstyle{{\slash\!\slash}}}}}}\_{{\mathrm{H}}{\mathbb{X}}}$, and ${\mu^{{^{\scriptscriptstyle{{\backslash\!\slash}}}}}}\_{{\mathrm{H}}{\mathbb{X}}}$ are additive. Let *R* = [*a*, *b*] × [*c*, *d*] and consider a horizontal split *R*1 = [*a*, *p*] × [*c*, *d*],   *R*2 = [*p*, *b*] × [*c*, *d*],  so *a* < *p* < *b* < *c* < *d*. The diagram $$\begin{tikzpicture}[xscale=1.2,yscale=1.2] \draw (1,1) node(11) {${\mathbb{X}}\_b^c$} ; \draw (0,0) node(00){${\mathbb{X}}\_b^b$}; \draw (-1,1) node(011){${\mathbb{X}}\_p^b$} ; \draw (2,0) node(20) {${\mathbb{X}}\_c^c$} ; \draw (4,0) node(40) {${\mathbb{X}}\_d^d$} ; \draw (3,1) node(31) {${\mathbb{X}}\_c^d$} ; \draw (-2,0) node(020) {${\mathbb{X}}\_p^p$} ; \draw (-2,2) node(022) {${\mathbb{X}}\_a^b$} ; \draw (-4,0) node(040) {${\mathbb{X}}\_a^a$} ; \draw (-3,1) node(031) {${\mathbb{X}}\_a^p$} ; \draw (0,2) node(02) {${\mathbb{X}}\_p^c$} ; \draw[->] (00) -- (011); \draw[->] (20) -- (11); \draw[->] (00) -- (11); \draw[->] (040) -- (031); \draw[->] (020) -- (031); \draw[->] (020) -- (011); \draw[->] (20) -- (31); \draw[->] (40) -- (31); \draw[->] (11) -- (02); \draw[->] (011) -- (02); \draw[->] (031) -- (022); \draw[->] (011) -- (022); \end{tikzpicture}$$ contains the zigzags X{*a*, *b*, *c*, *d*}, X{*a*, *p*, *c*, *d*}, X{*p*, *b*, *c*, *d*} for all three rectangles. When we apply H, the two diamonds in the resulting diagram are exact since the levelsets X*p**p*, X*b**b* are H-taut. We calculate: $$\begin{array}{ll} {\mu^{{^{\scriptscriptstyle{{\backslash\!\slash}}}}}}\_{{\mathrm{H}}{\mathbb{X}}}(R) = {\left\langle \raisebox{-1.25ex}{\includegraphics[scale=1.8,page=1]{zz-uu-horiz-split2}} \right\rangle} & = {\left\langle \raisebox{-1.25ex}{\includegraphics[scale=1.8,page=3]{zz-uu-horiz-split2}} \right\rangle} + {\left\langle \raisebox{-1.25ex}{\includegraphics[scale=1.8,page=2]{zz-uu-horiz-split2}} \right\rangle} \\ & = {\left\langle \raisebox{-1.25ex}{\includegraphics[scale=1.8,page=6]{zz-uu-horiz-split2}} \right\rangle} + {\left\langle \raisebox{-1.25ex}{\includegraphics[scale=1.8,page=4]{zz-uu-horiz-split2}} \right\rangle} \\ & = {\left\langle \raisebox{-1.25ex}{\includegraphics[scale=1.8,page=7]{zz-uu-horiz-split2}} \right\rangle} + {\left\langle \raisebox{-1.25ex}{\includegraphics[scale=1.8,page=5]{zz-uu-horiz-split2}} \right\rangle} = {\mu^{{^{\scriptscriptstyle{{\backslash\!\slash}}}}}}\_{{\mathrm{H}}{\mathbb{X}}}(R\_1) + {\mu^{{^{\scriptscriptstyle{{\backslash\!\slash}}}}}}\_{{\mathrm{H}}{\mathbb{X}}}(R\_2). \end{array}$$ In the first line we add two extra nodes to refine the 7-term zigzag to a 9-term zigzag and use the Restriction Principle. In the second line we use the Diamond Principle twice. In the third line we drop two nodes in each term and use the Restriction Principle again. Similar calculations establish the additivity of ${\mu^{{^{\scriptscriptstyle{{\slash\!\backslash}}}}}}\_{{\mathrm{H}}{\mathbb{X}}}$, ${\mu^{{^{\scriptscriptstyle{{\backslash\!\backslash}}}}}}\_{{\mathrm{H}}{\mathbb{X}}}$ and ${\mu^{{^{\scriptscriptstyle{{\slash\!\slash}}}}}}\_{{\mathrm{H}}{\mathbb{X}}}$ under horizontal splitting. Additivity under vertical splitting follows by coordinate-reversal symmetry. Finiteness ---------- We now consider the finiteness of the four r-measures $\mu^{{^{\scriptscriptstyle{{\slash\!\slash\!\!\!\!\!\backslash\!\backslash}}}}}\_{{\mathrm{H}}{\mathbb{X}}}$. As discussed in Section [subsec:measures], finiteness of an r-measure implies that its decorated persistence diagram is defined everywhere in H; in general the diagram is defined in the finite support of the r-measure. It turns out to be essentially the same issue as the finiteness of the *well groups* . Well groups measure that part of the homology of a fiber H(X*m**m*) of an R-space that is stable under *ε*-perturbations of the coordinate. One defines $${\mathrm{W}}({\mathrm{H}}{\mathbb{X}}; m, \epsilon) = \bigcap\_g \operatorname{image} \big[ {\mathrm{H}}(g^{-1}(q)) \longrightarrow {\mathrm{H}}{\mathbb{X}}\_{q-\epsilon}^{q+\epsilon} \big]$$ where the intersection is taken over all *ε*-perturbations *g* of the coordinate *f*, perhaps in a suitable regularity class. Considering the perturbations *g* = *f* ± *ε*, it follows that the well group is contained in[1](#fn1) $$\;\operatorname{image}\! \big[ {\mathrm{H}}{\mathbb{X}}\_{q-\epsilon}^{q-\epsilon} \longrightarrow {\mathrm{H}}{\mathbb{X}}\_{q-\epsilon}^{q+\epsilon} \big] \,\cap\, \operatorname{image}\! \big[ {\mathrm{H}}{\mathbb{X}}\_{q+\epsilon}^{q+\epsilon} \longrightarrow {\mathrm{H}}{\mathbb{X}}\_{q-\epsilon}^{q+\epsilon} \big]$$ and therefore its rank is bounded by $$\langle { \raisebox{-0.25ex}{\includegraphics[scale=1.8,page=6]{onof}}} \mid {\mathrm{H}}\_{q-\epsilon}^{q-\epsilon} \longrightarrow {\mathrm{H}}\_{q-\epsilon}^{q+\epsilon} \longleftarrow {\mathrm{H}}\_{q+\epsilon}^{q+\epsilon} \rangle = \langle { \raisebox{-0.25ex}{\includegraphics[scale=1.8,page=6]{onof}}} \mid {\mathrm{H}}{\mathbb{X}}\_{\{q-\epsilon, q+\epsilon\}} \rangle.$$ This takes the same form as the term that we need to bound. [bendichprop] Let X = (*X*, *f*) be an R-space and H be a homology functor. For any rectangle *R* = [*a*, *b*] × [*c*, *d*] with *a* < *b* < *c* < *d* we have $${\mu^{{^{\scriptscriptstyle{{\slash\!\backslash}}}}}}\_{{\mathrm{H}}{\mathbb{X}}}(R) + {\mu^{{^{\scriptscriptstyle{{\backslash\!\backslash}}}}}}\_{{\mathrm{H}}{\mathbb{X}}}(R) + {\mu^{{^{\scriptscriptstyle{{\slash\!\slash}}}}}}\_{{\mathrm{H}}{\mathbb{X}}} (R) + {\mu^{{^{\scriptscriptstyle{{\backslash\!\slash}}}}}}\_{{\mathrm{H}}{\mathbb{X}}} (R) \leq \langle { \raisebox{-0.5ex}{ \includegraphics[scale=1.8]{wud} }}\mid {\mathrm{H}}{\mathbb{X}}\_{\{a, b, c, d\}}\rangle = \langle { \raisebox{-0.25ex}{\includegraphics[scale=1.8,page=6]{onof}}} \mid {\mathrm{H}}{\mathbb{X}}\_{\{b, c\}}\rangle.$$ By the Restriction Principle $$\begin{aligned} \langle { \raisebox{-0.5ex}{ \includegraphics[scale=1.8]{wud} }}\rangle &\geq \langle { \raisebox{-0.5ex}{ \includegraphics[scale=1.8]{zzud} }}\rangle + \langle { \raisebox{-0.5ex}{ \includegraphics[scale=1.8]{zzdd} }}\rangle + \langle { \raisebox{-0.5ex}{ \includegraphics[scale=1.8]{zzuu} }}\rangle + \langle { \raisebox{-0.5ex}{ \includegraphics[scale=1.8]{zzdu} }}\rangle \\ &= ({\mu^{{^{\scriptscriptstyle{{\slash\!\backslash}}}}}}\_{{\mathrm{H}}{\mathbb{X}}} + {\mu^{{^{\scriptscriptstyle{{\backslash\!\backslash}}}}}}\_{{\mathrm{H}}{\mathbb{X}}} + {\mu^{{^{\scriptscriptstyle{{\slash\!\slash}}}}}}\_{{\mathrm{H}}{\mathbb{X}}} + {\mu^{{^{\scriptscriptstyle{{\backslash\!\slash}}}}}}\_{{\mathrm{H}}{\mathbb{X}}})(R). \qedhere\end{aligned}$$ [prop:finite1] Let X = (*X*, *f*). Then ${\mu^{{^{\scriptscriptstyle{{\backslash\!\slash}}}}}}\_{{\mathrm{H}}{\mathbb{X}}}$, ${\mu^{{^{\scriptscriptstyle{{\slash\!\slash}}}}}}\_{{\mathrm{H}}{\mathbb{X}}}$, ${\mu^{{^{\scriptscriptstyle{{\slash\!\backslash}}}}}}\_{{\mathrm{H}}{\mathbb{X}}}$ and ${\mu^{{^{\scriptscriptstyle{{\backslash\!\backslash}}}}}}\_{{\mathrm{H}}{\mathbb{X}}}$ are finite for any H under any of the following circumstances: (i) *X* is a locally compact polyhedron and *f* a proper continuous map. (iii) *X* is a smooth manifold and *f* is a proper Morse function. (iv) *X* is a locally compact polyhedron and *f* is a proper piecewise-linear map. (v) *X* ⊆ R*n* × R is a closed definable set in some o-minimal structure and *f* is the projection onto the second factor. In cases (iii), (iv), (iv) each slice X*b**c* has the homotopy type of a finite cell complex, and therefore has finite-dimensional homology. The proof of (i) is a little more involved. Let *R* = [*a*, *b*] × [*c*, *d*]. Choose *m* and *ε* > 0 such that *b* + 2*ε* < *m* < *c* − 2*ε*, and approximate *f* with a piecewise-linear map *g*: *X* → R for which ∥*g* − *f*∥ ≤ *ε*. Then *g* is also proper, and *Y* = *g*− 1(*m*) is triangulable as a finite simplicial complex and is H-taut as a fiber of (*X*, *g*). We can split the neighborhood X*b**c* into lower- and upper-neighborhoods of *Y* by defining *U* = X*b**c* ∩ *g*− 1( − ∞, *m*],  *V* = X*b**c* ∩ *g*− 1[*m*,  + ∞). Thus X*b**c* = *U* ∪ *V* and *Y* = *U* ∩ *V*. Since ∣∣*g* − *f*∣∣ ≤ *ε*, we also have X*b**b* ⊆ *U* and X*c**c* ⊆ *V*. Consider the following diagram of spaces and maps: $$\begin{tikzpicture}[xscale=1.2,yscale=1.2, font=\small] \draw (1,1) node(11) {${\mathrm{H}}({\mathbb{X}}\_{b}^{c})$} ; \draw (0,0) node(00){${\mathrm{H}}(U)$} (2,0) node(20){${\mathrm{H}}(V)$} (1,-1) node(101){${\mathrm{H}}(Y)$} (-1,-1) node(001){${\mathrm{H}}({\mathbb{X}}\_{b}^{b})$} (-3,-1) node(0301){${\mathrm{H}}({\mathbb{X}}\_{a}^{a})$} (-2,0) node(020){${\mathrm{H}}({\mathbb{X}}\_{a}^{b})$} (3,-1) node(201){${\mathrm{H}}({\mathbb{X}}\_{c}^{c})$} (5,-1) node(501){${\mathrm{H}}({\mathbb{X}}\_{d}^{d})$} (4,0) node(40){${\mathrm{H}}({\mathbb{X}}\_{c}^{d})$}; \draw[->] (0301) -- (020); \draw[->] (001) -- (020); \draw[->] (001) -- (00); \draw[->] (201) -- (20); \draw[->] (00) -- (11); \draw[->] (20) -- (11); \draw[<-] (20) -- (101); \draw[<-] (00) -- (101); \draw[->] (501) -- (40); \draw[->] (201) -- (40); \end{tikzpicture}$$ By the Restriction and Diamond Principles (since *Y* is H-taut) we have $$\left\langle { \raisebox{-0.5ex}{\includegraphics[scale=1.8,page=1]{spider}}} \right\rangle =\left \langle { \raisebox{-0.5ex}{\includegraphics[scale=1.8,page=2]{spider}}}\right\rangle = \left\langle { \raisebox{-0.5ex}{\includegraphics[scale=1.8,page=3]{spider}}} \right\rangle \leq {\mathrm{H}}(Y) < \infty.$$ The result now follows from Lemma [bendichprop]. The Four Diagrams of Parametrized Homology ------------------------------------------ Let X = (*X*, *f*) be an R-space and let H be a homology functor with field coefficients. Quantities ${\mu^{{^{\scriptscriptstyle{{\backslash\!\backslash}}}}}}\_{\mathbb{X}}$, ${\mu^{{^{\scriptscriptstyle{{\backslash\!\slash}}}}}}\_{\mathbb{X}}$, ${\mu^{{^{\scriptscriptstyle{{\slash\!\backslash}}}}}}\_{\mathbb{X}}$, and ${\mu^{{^{\scriptscriptstyle{{\slash\!\slash}}}}}}\_{\mathbb{X}}$ capture the way topological features of X perish at endpoints. When they are r-measures, each defines a persistence diagram via the Equivalence Theorem. We denote these four decorated persistence diagrams by ${\operatorname{Dgm}}^{{^{\scriptscriptstyle{{\backslash\!\backslash}}}}}({\mathbb{X}})$, ${\operatorname{Dgm}}^{{^{\scriptscriptstyle{{\backslash\!\slash}}}}}({\mathbb{X}})$, ${\operatorname{Dgm}}^{{^{\scriptscriptstyle{{\slash\!\backslash}}}}}({\mathbb{X}})$, and ${\operatorname{Dgm}}^{{^{\scriptscriptstyle{{\slash\!\slash}}}}}({\mathbb{X}})$. These, collectively, comprise the *parametrized homology* of X with respect to the homology functor H. [definepar] We can define parametrized homology of X = (*X*, *f*) when: (i) *X* is a locally compact polyhedron, *f* is proper, and H is Steenrod–Sitnikov homology. (iii) *X* is a smooth manifold and *f* is a proper Morse function. (iv) *X* is a locally compact polyhedron and *f* is a proper piecewise-linear map. (v) *X* ⊆ R*n* × R is a closed definable set in some o-minimal structure and *f* is the projection onto the second factor. Additivity follows from Proposition [prop:taut-ex] and finiteness from Proposition [prop:finite1]. Levelset Zigzag Persistence --------------------------- In some situations finite zigzag diagrams carry all the needed information. Let X = (*X*, *f*) be an R-space constructed as follows. There is a finite set of real-valued indices *S* = {*a*1, ..., *a**n*} (listed in increasing order), called the *critical values* of X. Then: * For 1 ≤ *i* ≤ *n*, *V**i* is a locally path-connected compact space; * For 1 ≤ *i* ≤ *n* − 1, *E**i* is a locally path-connected compact space; * For 1 ≤ *i* ≤ *n* − 1, *l**i*: *E**i* → *V**i* and *r**i*: *E**i* → *V**i* + 1 are continuous maps. Let *X* be the quotient space obtained from the disjoint union of the spaces *V**i* × {*a**i*} and *E**i* × [*a**i*, *a**i* + 1] by making the identifications (*l**i*(*x*), *a**i*) ∼ (*x*, *a**i*) and (*r**i*(*x*), *a**i* + 1) ∼ (*x*, *a**i* + 1) for all *i* and all *x* ∈ *E**i*. Let *f*: *X* → R be the projection onto the second factor. In this paper, we follow Carlsson et al.  in calling such an X = (*X*, *f*) a *Morse type* R-space. (In they are called *constructible R-spaces*.) Such R-spaces include X = (*X*, *f*), where *X* is a compact manifold and *f* a Morse function, and *X* a compact polyhedron and *f* piecewise linear. We can track the appearance and disappearance of topological features using *levelset zigzag persistence* construction . Given X = (*X*, *f*) of Morse type, select a set of indices *s**i* which satisfy  − ∞ < *s*0 < *a*1… < *a**n* < *s**n* < ∞,  and build a zigzag diagram that serves as a model for X: [xscale=1.05,yscale=1.05] (-2, 0.5)node(010)X{*s*0, …, *s**n*} :  (1,1) node(11) X*s*0*s*1 (3,1) node(31) X*s*1*s*2 (5,1) node(51) … (7,1) node(71) X*s**n* − 2*s**n* − 1 (9,1) node(91) X*s**n* − 1*s**n* ; (0,0) node(00)X*s*0*s*0 (2,0) node(20)X*s*1*s*1 (4,0) node(40)X*s*2*s*2 (6,0) node(60)X*s**n* − 2*s**n* − 2 (8,0) node(80)X*s**n* − 1*s**n* − 1 (10,0) node(100)X*s**n**s**n*. ; (00) – (11); (20) – (11); (20) – (31); (40) – (31); (40) – (51); (60) – (51); (80) – (71); (60) – (71); (80) – (91); (100) – (91); Apply homology functor H to obtain: [xscale=1.05,yscale=1.05] (-2, 0.5)node(010)HX{*s*0, …, *s**n*} :  (1,1) node(11) H(X*s*0*s*1) (3,1) node(31) H(X*s*1*s*2) (5,1) node(51) … (7,1) node(71) H(X*s**n* − 2*s**n* − 1) (9,1) node(91) H(X*s**n* − 1*s**n*) ; (0,0) node(00)H(X*s*0*s*0) (2,0) node(20)H(X*s*1*s*1) (4,0) node(40)H(X*s*2*s*2) (6,0) node(60)H(X*s**n* − 2*s**n* − 2) (8,0) node(80)H(X*s**n* − 1*s**n* − 1) (10,0) node(100)H(X*s**n**s**n*). ; (00) – (11); (20) – (11); (20) – (31); (40) – (31); (40) – (51); (60) – (51); (80) – (71); (60) – (71); (80) – (91); (100) – (91); This quiver representation is decomposable by Gabriel’s Theorem . We translate between the notation of intervals that appear in the levelset zigzag persistence of X and critical values as follows: $$\begin{aligned} {4} [{\mathrm{H}}({\mathbb{X}}\_{s\_{i-1}}^{s\_i}), {\mathrm{H}}({\mathbb{X}}\_{s\_{j-1}}^{s\_j})] & \text{ corresponds to } & [a\_i, a\_j] & \text{ for }1\leq i \leq j \leq n,\\ [{\mathrm{H}}({\mathbb{X}}\_{s\_{i-1}}^{s\_i}), {\mathrm{H}}({\mathbb{X}}\_{s\_{j-1}}^{s\_j})] & \text{ corresponds to }&[a\_i, a\_j) & \text{ for }1\leq i < j \leq n+1,\\ [{\mathrm{H}}({\mathbb{X}}\_{s\_i}^{s\_i}), {\mathrm{H}}({\mathbb{X}}\_{s\_{j-1}}^{s\_j})] &\text{ corresponds to } & (a\_i, a\_j] & \textrm{ for }1\leq i \leq j \leq n,\\ [{\mathrm{H}}({\mathbb{X}}\_{s\_i}^{s\_i}), {\mathrm{H}}({\mathbb{X}}\_{s\_{j-1}}^{s\_{j-1}})] &\text{ corresponds to } & (a\_i, a\_j) & \textrm{ for }1\leq i < j \leq n+1.\\\end{aligned}$$ We interpret *a*0 as  − ∞ and *a**n* + 1 as ∞. The collection of these pairs of critical values, taken with multiplicity and labelled by the interval type is called the levelset zigzag persistence diagram of X and denoted by DgmZZ(HX). The four quantities defined in Section [subsec:4-measures], ${\mu^{{^{\scriptscriptstyle{{\slash\!\backslash}}}}}}\_{{\mathrm{H}}{\mathbb{X}}}$, ${\mu^{{^{\scriptscriptstyle{{\backslash\!\backslash}}}}}}\_{{\mathrm{H}}{\mathbb{X}}}$, ${\mu^{{^{\scriptscriptstyle{{\slash\!\slash}}}}}}\_{{\mathrm{H}}{\mathbb{X}}}$ and ${\mu^{{^{\scriptscriptstyle{{\backslash\!\slash}}}}}}\_{{\mathrm{H}}{\mathbb{X}}}$, are measures when X is of Morse type. Additivity follows from (ii) of Proposition [prop:taut-ex], while finiteness from the assumption that all interlevelsets and levelsets have finite dimensional homology groups. In fact, parametrized homology and levelset zigzag persistence of a Morse type R-space carry the same information, as the following theorem demonstrates. [16to4Morse] If X is an R-space of Morse type with critical values *a*1 < *a*2 < … < *a**n*,  then the levelset zigzag persistence diagram of X, DgmZZ(HX), contains the same information as the four diagrams ${\operatorname{Dgm}}^{{^{\scriptscriptstyle{{\backslash\!\backslash}}}}}({\mathbb{X}})$, ${\operatorname{Dgm}}^{{^{\scriptscriptstyle{{\backslash\!\slash}}}}}({\mathbb{X}})$, ${\operatorname{Dgm}}^{{^{\scriptscriptstyle{{\slash\!\backslash}}}}}({\mathbb{X}})$, and ${\operatorname{Dgm}}^{{^{\scriptscriptstyle{{\slash\!\slash}}}}}({\mathbb{X}})$. To be more precise, $$\begin{aligned} {3} (a\_i, a\_j) \in {\operatorname{Dgm}}^{{^{\scriptscriptstyle{{\slash\!\backslash}}}}}\!({\mathrm{H}}{\mathbb{X}}) & \;\;\text{if and only if}\;\; & (a\_i^+, a\_j^-)\in {{\operatorname{Dgm}}^{\mathrm{ZZ}}}({\mathrm{H}}{\mathbb{X}}) \\ [a\_i, a\_j) \in {\operatorname{Dgm}}^{{^{\scriptscriptstyle{{\backslash\!\backslash}}}}}\!({\mathrm{H}}{\mathbb{X}}) & \;\;\text{if and only if}\;\; & (a\_i^-, a\_j^-)\in {{\operatorname{Dgm}}^{\mathrm{ZZ}}}({\mathrm{H}}{\mathbb{X}}) \\ (a\_i, a\_j]\in {\operatorname{Dgm}}^{{^{\scriptscriptstyle{{\slash\!\slash}}}}}\!({\mathrm{H}}{\mathbb{X}}) & \;\;\text{if and only if}\;\; & (a\_i^+, a\_j^+)\in {{\operatorname{Dgm}}^{\mathrm{ZZ}}}({\mathrm{H}}{\mathbb{X}}) \\ [a\_i, a\_j]\in {\operatorname{Dgm}}^{{^{\scriptscriptstyle{{\backslash\!\slash}}}}}\!({\mathrm{H}}{\mathbb{X}}) & \;\;\text{if and only if}\;\; & (a\_i^-, a\_j^+) \in {{\operatorname{Dgm}}^{\mathrm{ZZ}}}({\mathrm{H}}{\mathbb{X}}). \end{aligned}$$ Diagrams ${\operatorname{Dgm}}^{{^{\scriptscriptstyle{{\slash\!\backslash}}}}}\!({\mathrm{H}}{\mathbb{X}})$, ${\operatorname{Dgm}}^{{^{\scriptscriptstyle{{\backslash\!\backslash}}}}}\!({\mathrm{H}}{\mathbb{X}})$, ${\operatorname{Dgm}}^{{^{\scriptscriptstyle{{\slash\!\slash}}}}}\!({\mathrm{H}}{\mathbb{X}})$ and ${\operatorname{Dgm}}^{{^{\scriptscriptstyle{{\backslash\!\slash}}}}}\!({\mathrm{H}}{\mathbb{X}})$ contain no decorated points with nonzero multiplicity other than those specified above. First we prove that if [*a**i*, *a**j*] with multiplicity *m*, *m* ≥ 1, is contained in the levelset zigzag persistence diagram of X, then $\textrm{m}\_{{\operatorname{Dgm}}^{{^{\scriptscriptstyle{{\backslash\!\slash}}}}}({\mathbb{X}})}(a\_i^-, a\_j^+)= m$. We select a set of indices *s**i* which satisfy  − ∞ < *s*0 < *a*1 < *s*1 < *a*2 < … < *s**n* − 1 < *a**n* < *s**n* < ∞. By definition [*a**i*, *a**j*] appears in the levelset zigzag persistence diagram with multiplicity *m* if and only if ⟨[H(X*s**i* − 1*s**i*), H(X*s**j* − 1*s**j*)] ∣ HX{*s*0, …, *s**n*}⟩ = *m*. By the Diamond and the Restriction Principle $$\begin{array}{lcl} \langle [{\mathrm{H}}({\mathbb{X}}\_{s\_{i-1}}^{s\_i} ), {\mathrm{H}}({\mathbb{X}}\_{s\_{j-1}}^{s\_j})] \mid {\mathrm{H}}{\mathbb{X}}\_{\{s\_0, \ldots, s\_n\}} \rangle &= &\langle [{\mathrm{H}}({\mathbb{X}}\_{s\_{i-1}}^{s\_i} ), {\mathrm{H}}({\mathbb{X}}\_{s\_{j-1}}^{s\_j})] \mid {\mathrm{H}}{\mathbb{X}}\_{\{s\_{i-1}, s\_i, s\_{j-1}, s\_j\}} \rangle. \\ \end{array}$$ Choose $\epsilon < \frac{1}{2}\min \{ a\_i-s\_{i-1}, s\_j-a\_{j}\}$. Observe the diagram below. $$\begin{tikzpicture}[xscale=1,yscale=1.2] \draw (1,1) node(11) {${\mathrm{H}}({\mathbb{X}}\_{s\_{i-1}}^{a\_i-\epsilon})$} (3,1) node(31) {${\mathrm{H}}({\mathbb{X}}\_{a\_i}^{a\_i-\epsilon})$} (5,1) node(51) {${\mathrm{H}}({\mathbb{X}}\_{a\_i}^{s\_{i}})$} ; \draw (4,2) node(42) {${\mathrm{H}}({\mathbb{X}}\_{a\_i-\epsilon}^{s\_{i}})$} ; \draw (2,2) node(22){${\mathrm{H}}({\mathbb{X}}\_{s\_{i-1}}^{a\_{i}})$}; \draw (3,3) node(33){${\mathrm{H}}({\mathbb{X}}\_{s\_{i-1}}^{s\_{i}})$}; \draw (0,0) node(00){${\mathrm{H}}({\mathbb{X}}\_{s\_{i-1}}^{s\_{i-1}})$} (2,0) node(20){${\mathrm{H}}({\mathbb{X}}\_{a\_i-\epsilon}^{a\_i-\epsilon})$} (4,0) node(40){${\mathrm{H}}({\mathbb{X}}\_{a\_i}^{a\_i})$} (6,0) node(60){${\mathrm{H}}({\mathbb{X}}\_{s\_{i}}^{s\_{i}})$} (6,2) node(62){${\mathrm{H}}({\mathbb{X}}\_{a\_{i}}^{s\_{j-1}})$} (8,0) node(80){${\mathrm{H}}({\mathbb{X}}\_{s\_{j-1}}^{s\_{j-1}})$} (10,0) node(100){${\mathrm{H}}({\mathbb{X}}\_{a\_{j}}^{a\_{j}})$} (12,0) node(120){${\mathrm{H}}({\mathbb{X}}\_{a\_{j}+\epsilon}^{a\_{j}+\epsilon})$} (14,0) node(140){${\mathrm{H}}({\mathbb{X}}\_{s\_{j}}^{s\_{j}})$} (7,3) node(73){${\mathrm{H}}({\mathbb{X}}\_{a\_{i}}^{a\_{j}})$} (9,1) node(91){${\mathrm{H}}({\mathbb{X}}\_{s\_{j-1}}^{a\_{j}})$} (11,1) node(111){${\mathrm{H}}({\mathbb{X}}\_{a\_{j}}^{a\_{j+\epsilon}})$} (13,1) node(131){${\mathrm{H}}({\mathbb{X}}\_{a\_{j}+\epsilon}^{s\_{j}})$} (12,2) node(122){${\mathrm{H}}({\mathbb{X}}\_{a\_{j}}^{s\_{j}})$} (11,3) node(113){${\mathrm{H}}({\mathbb{X}}\_{s\_{j-1}}^{s\_{j}})$} (10,2) node(102){${\mathrm{H}}({\mathbb{X}}\_{s\_{j-1}}^{a\_{j}+\epsilon})$} (7,1) node(71){${\mathrm{H}}({\mathbb{X}}\_{s\_{i}}^{s\_{j-1}})$} (8,2) node(82){${\mathrm{H}}({\mathbb{X}}^{a\_j}\_{s\_{i}})$} ; \draw[->] (91) -- (102); \draw[->] (102) -- (113); \draw[->] (122) -- (113); \draw[->] (111) -- (122); \draw[->] (111) -- (102); \draw[->] (131) -- (122); \draw[->] (140) -- (131); \draw[->] (120) -- (131); \draw[->] (120) -- (111); \draw[->] (100) -- (111); \draw[->] (91) -- (82); \draw[->] (100) -- (91); \draw[->] (80) -- (91); \draw[->] (82) -- (73); \draw[->] (62) -- (73); \draw[->] (42) -- (33); \draw[->] (22) -- (33); \draw[->] (31) -- (22); \draw[->] (31) -- (42); \draw[->] (11) -- (22); \draw[->] (00) -- (11); \draw[->] (20) -- (11); \draw[->] (20) -- (31); \draw[->] (40) -- (31); \draw[->] (60) -- (71); \draw[->] (80) -- (71); \draw[->] (60) -- (51); \draw[->] (51) -- (42); \draw[->] (40) -- (51); \draw[->] (51) -- (62); \draw[->] (71) -- (62); \draw[->] (71) -- (82); \end{tikzpicture}$$ Using the Diamond Principle and the Restriction Principle we calculate: $$\begin{array}{ll} \langle [{\mathrm{H}}({\mathbb{X}}\_{s\_{i-1}}^{s\_i} ), {\mathrm{H}}({\mathbb{X}}\_{s\_{j-1}}^{s\_j})] \mid {\mathrm{H}}{\mathbb{X}}\_{\{s\_{i-1}, s\_i, s\_{j-1}, s\_j\}} \rangle &= {\left\langle \raisebox{-1.25ex}{\includegraphics[scale=1.8,page=1]{morsetype.pdf}}\right\rangle}\\ & = {\left\langle \raisebox{-1.25ex}{\includegraphics[scale=1.8,page=2]{morsetype.pdf}}\right\rangle} \\ & = {\left\langle \raisebox{-1.25ex}{\includegraphics[scale=1.8,page=3]{morsetype.pdf}}\right\rangle} \\ & = {\left\langle \raisebox{-1.25ex}{\includegraphics[scale=1.8,page=4]{morsetype.pdf}}\right\rangle} \\ &= {\left\langle \raisebox{-1.25ex}{\includegraphics[scale=1.8,page=5]{morsetype.pdf}}\right\rangle} \\ &\\ &= {\mu^{{^{\scriptscriptstyle{{\backslash\!\slash}}}}}}\_{{\mathrm{H}}{\mathbb{X}}} ([a\_i-\epsilon, a\_i]\times [a\_j, a\_j+\epsilon]). \end{array}$$ In the second line we used the fact that X is of Morse type. This implies X*s**i* − 1*s**i* − 1 is homotopy equivalent to X*s**i* − 1*a**i* − *ε*, X*s**i* − 1*a**i* to X*s**i* − 1*s**i*, X*a**j* + *ε**s**j* to X*s**j**s**j* and X*a**j**s**j* to X*s**j* − 1*s**j* for all sufficiently small *ε*. Therefore $$\textrm{m}\_{{\operatorname{Dgm}}^{{^{\scriptscriptstyle{{\backslash\!\slash}}}}}({\mathbb{X}})}(a\_i^-, a\_j^+)= \lim\_{\epsilon\to 0} {\mu^{{^{\scriptscriptstyle{{\backslash\!\slash}}}}}}\_{{\mathrm{H}}{\mathbb{X}}} ([a\_i-\epsilon, a\_i]\times [a\_j, a\_j+\epsilon]) = m.$$ We must now show that ${\operatorname{Dgm}}^{{^{\scriptscriptstyle{{\backslash\!\slash}}}}}({\mathbb{X}})$ contains only points of the type (*a**i*−, *a**j*+), where *a**i* and *a**j* are critical values of X. For any *p* ∈ R, an *ε* > 0 exists such that X*p**p* + *ε* and X*p* − *ε**p* strongly deformation retracts to X*p**p*. This means that H(X*p**p* + *ε*) ≅ H(X*p**p*) ≅ H(X*p* − *ε**p*), forcing $$\left\langle { \raisebox{-0.75ex}{\includegraphics[scale=1.8,page=1]{critical}}}\, |\, {\mathrm{H}}{\mathbb{X}}\_{\{p-\epsilon, p\}} \right\rangle = \left\langle { \raisebox{-0.75ex}{\includegraphics[scale=1.8,page=4]{critical}}}\, |\, {\mathrm{H}}{\mathbb{X}}\_{\{p, p+\epsilon\}} \right\rangle = 0.$$ For *ε* small enough $${ \raisebox{-0.5ex}{ \includegraphics[scale=1.8]{zzud} }}, { \raisebox{-0.5ex}{ \includegraphics[scale=1.8]{zzdd} }}, { \raisebox{-0.5ex}{ \includegraphics[scale=1.8]{zzuu} }}$$ all appear with 0 multiplicity for any *p* and *q* in the quiver decomposition of HX{*p* − *ε*, *p*, *q*, *q* + *ε*}. This holds since by the restriction principle $$0 \leq \left\langle { \raisebox{-0.5ex}{ \includegraphics[scale=1.8]{zzud} }}\, |\, {\mathrm{H}}{\mathbb{X}}\_{\{p-\epsilon, p, q, q+\epsilon\}}) \right\rangle, \left\langle { \raisebox{-0.5ex}{ \includegraphics[scale=1.8]{zzuu} }}\, |\, {\mathrm{H}}{\mathbb{X}}\_{\{p-\epsilon, p, q, q+\epsilon\}}) \right\rangle \leq \left\langle { \raisebox{-0.75ex}{\includegraphics[scale=1.8,page=1]{critical}}}\, |\, {\mathrm{H}}{\mathbb{X}}\_{\{p-\epsilon, p\}} \right\rangle=0$$ and $$0 \leq \left\langle { \raisebox{-0.5ex}{ \includegraphics[scale=1.8]{zzdd} }}\, |\, {\mathrm{H}}{\mathbb{X}}\_{\{p-\epsilon, p, q, q+\epsilon\}}) \right\rangle \leq \left\langle { \raisebox{-0.75ex}{\includegraphics[scale=1.8,page=4]{critical}}}\, |\, {\mathrm{H}}{\mathbb{X}}\_{\{q, q+\epsilon\}} \right\rangle =0.$$ So ${\operatorname{Dgm}}^{{^{\scriptscriptstyle{{\backslash\!\slash}}}}}({\mathbb{X}})$ contains exactly points that correspond to intervals of type [*a**i*, *a**j*] in DgmZZ(HX). We prove the statement for other measures similarly. Sixteen behaviors ----------------- Let X be an R-space. Depending on the way a feature perishes and whether the corresponding interval is closed or open at endpoints, there are sixteen different cases that can occur (see  Figure [fig:16]). [fig:16] For a Morse type R-space X = (*X*, *f*), where *X* is compact, this number drops down to four (highlighted green in Figure [fig:16]) as demonstrated by Theorem [16to4Morse]. Something similar occurs when *X* is a locally compact polyhedron, *f* a proper continuous map and H the Steenrod–Sitnikov homology functor. The following theorem, inspired by Frosini et al. , relies heavily on the continuity property of Čech homology . For a wide variety of coefficient groups (infinitely divisible; finite exponent)  Čech homology coincides with Steenrod–Sitnikov homology. In particular, this is the case for some of the more common fields we may be interested in: **F***p*, **Q**, **R**. [frosinithm] Let X = (*X*, *f*). We assume that *X* is a locally compact polyhedron, *f* is a proper continuous map, and H is the Steenrod–Sitnikov homology functor with coefficients in **F***p*, **Q** or **R**. Then: $$\begin{aligned} {4} & {\operatorname{Dgm}}^{{^{\scriptscriptstyle{{\slash\!\backslash}}}}}\!({\mathrm{H}}{\mathbb{X}}) && \;\;\text{contains only points of type} && \raisebox{-1.4ex}{ \includegraphics[scale=1.25]{plusminus.pdf}} \!= (p^+,q^-) && = (p,q) \\ & {\operatorname{Dgm}}^{{^{\scriptscriptstyle{{\backslash\!\backslash}}}}}\!({\mathrm{H}}{\mathbb{X}}) && \;\;\text{contains only points of type} && \raisebox{-1.4ex}{ \includegraphics[scale=1.25]{minusminus.pdf}} \!= (p^-,q^-) && = [p,q) \\ & {\operatorname{Dgm}}^{{^{\scriptscriptstyle{{\slash\!\slash}}}}}\!({\mathrm{H}}{\mathbb{X}}) && \;\;\text{contains only points of type} && \raisebox{-1.4ex}{ \includegraphics[scale=1.25]{plusplus.pdf}} \!= (p^+,q^+) && = (p,q] \\ & {\operatorname{Dgm}}^{{^{\scriptscriptstyle{{\backslash\!\slash}}}}}\!({\mathrm{H}}{\mathbb{X}}) && \;\;\text{contains only points of type} && \raisebox{-1.4ex}{ \includegraphics[scale=1.25]{minusplus.pdf}} \!= (p^-,q^+) && = [p,q]\end{aligned}$$ In other words, the four possible decorations correspond exactly to the four ways in which a feature can perish at the ends of its interval. Let *a* < *b* < *m* < *c* < *d*. We fix a piecewise-linear structure on *X*, and approximate *f*: *X* → R with a piecewise-linear map *g*: *X* → R for which $||g - f||\leq \min \{\frac{c-m}{2}, \frac{m-b}{2}\}$. The preimage *Y* = *g*− 1(*m*) is a finite simplicial complex. Let *V**q* = *g*− 1((∞, *m*]) ∩ X*q*  and *U**q* = *g*− 1([*m*, ∞)) ∩ X*q* for *q* ∈ R. In the proof of Theorem [frosinithm] we will make use of diagrams of this type: $$\begin{tikzpicture}[xscale=1.2,yscale=1.2] \draw (-2,2) node(022) {${\mathrm{H}}{\mathbb{X}}\_{\{a, b, c, d\}}^B\colon$} ; \draw (1,1) node(11) {${\mathrm{H}}({\mathbb{X}}\_{a}^{b})$} (3,1) node(31) {${\mathrm{H}}(V\_{b})$} (5,1) node(51) {${\mathrm{H}}(U^{c})$} ; \draw (4,2) node(42) {${\mathrm{H}}({\mathbb{X}}\_{b}^c)$} ; \draw (2,2) node(22){${\mathrm{H}}(V\_{a})$} (-1,1) node(011){$0$} (-1,3) node(013){$0$} (1,3) node(13){${\mathrm{H}}(V\_{a}, {\mathbb{X}}\_{a}^{a})$} (0,4) node(04){${\mathrm{H}}(V\_{a}, {\mathbb{X}}\_{a}^{b})$} (0,2) node(02){${\mathrm{H}}({\mathbb{X}}\_{a}^{b}, {\mathbb{X}}\_{a}^{a})$}; \draw (0,0) node(00){${\mathrm{H}}({\mathbb{X}}\_{a}^{a})$} (2,0) node(20){${\mathrm{H}}({\mathbb{X}}\_{b}^{b})$} (4,0) node(40){${\mathrm{H}}(Y)$} (6,0) node(60){${\mathrm{H}}({\mathbb{X}}\_{c}^{c})$} (6,2) node(62){${\mathrm{H}}(U^{d})$} (8,0) node(80){${\mathrm{H}}({\mathbb{X}}\_{d}^{d})$} (7,1) node(71){${\mathrm{H}}({\mathbb{X}}\_{c}^{d})$} (8,2) node(82){${\mathrm{H}}({\mathbb{X}}^d\_c, {\mathbb{X}}\_{d}^{d})$} (9,1) node(91){$0$} (7,3) node(73){${\mathrm{H}}(U^d, {\mathbb{X}}\_{d}^{d})$} (
arxiv_0000064
})\right) + \text{div\,}\alpha(\mathbf{x}))/2 + C,\end{aligned}$$ where $C := \| \nabla\log\pi(\hat{\mathbf{x}})\|^2 /2 + \Delta \log\pi(\hat{\mathbf{x}})/2$ is a constant. Letting A now be the law of $I,J\overset{\text{iid}}{\sim} {\text{U}}\{0, \ldots,n\}$ the following unbiased estimator of *ϕ* can be constructed, $$\begin{aligned} \mathbbm{E}\_{\mathcal{A}}\big[\underbrace{(\tilde{\alpha}\_I(\mathbf{x}))^{T}\left(2\nabla\log\pi(\hat{\mathbf{x}})+ \tilde{\alpha}\_J(\mathbf{x})\right) + \text{div\,}\tilde{\alpha}\_I(\mathbf{x}))/2 + C}\_{=:\tilde{\phi}\_A(\mathbf{x}) }\big] = \phi(\mathbf{x}). \label{eq:subsamplelambda}\end{aligned}$$ The estimators *α̃**I*(**x**) and *ϕ̃**A*(**x**) are nothing more than classical *control variate* estimators, albeit in a fairly elaborate setting, and henceforth we shall refer to these accordingly. The construction of the estimator requires evaluation of the constants $\nabla\log\pi(\hat{\mathbf{x}})$ and $\Delta \log\pi(\hat{\mathbf{x}})$. Although both are O(*n*) evaluations they only have to be computed once, and furthermore, as mentioned above, can be calculated entirely in parallel. The unbiased estimators *α̃**I*(**x**) and *ϕ̃**A*(**x**) use (respectively) single and double draws from {1, …, *n*} although it is possible to replace these by averaging over multiple draws (sampled with replacement), although this is not studied theoretically in the present paper and is exploited only in Section [s:examp:2] of the empirical study. Embedding our sub-sampling estimator described above within the QSMC algorithm of, results in termed the *Scalable Langevin Exact algorithm (ScaLE)*. A similar modification could be made to the rejection sampling version, R-QSMC, which was discussed in and detailed in. This variant is termed the *Rejection Scalable Langevin Exact algorithm (R-ScaLE)* and full algorithmic details are provided in. [h] [alg:subscale3] [alg:scale] 1. Choose $\hat{\mathbf{x}}$ and compute $\nabla\log\pi(\hat{\mathbf{x}})$, $ \Delta \log\pi(\hat{\mathbf{x}})$. 2. On calling, 1. Replace *L***X**(*i*), *U***X**(*i*) in with *L̃***X**(*i*), *Ũ***X**(*i*). 2. Replace with: *τ**i*: If *ξ**j* = *τ**i*, set *i* = *i* + 1, and return to. Else simulate *A**j* = (*I**j*, *J**j*), with $I\_j,J\_j\overset{\text{iid}}{\sim} {\text{U}}\{0, \ldots,n\}$, and set *w**ξ**j*\* = *w**ξ**j*\* ⋅ (*Ũ***X**(*i*) − *ϕ̃**A**j*(**X***ξ**j*))/(*Ũ***X**(*i*) − *L̃***X**(*i*)) (where *ϕ̃**A**j* is defined as in ([eq:subsamplelambda])) and return to. Implementation Details ---------------------- In this section we detail some simple choices of the various algorithmic parameters which lead to a concrete implementation of the ScaLE algorithm. These choices have been made on the bases of parsimony and convenience and are certainly not optimal. In practice, we are likely to want to employ a suitable preconditioning transformation: $\bX'=\precon^{-1}\bX$ before applying the algorithm in order to roughly equate scales for different components. If we did not do this, it is likely that some components would mix particularly slowly. Obtaining a suitable $\hat{\mathbf{x}}$ and $\precon$ is important. One concrete approach, and that used throughout our empirical study except where otherwise stated, is as follows. Divide a data set into a number of batches which are small enough to be processed using standard maximum likelihood estimation approaches and estimate the MLE and observed Fisher information for each batch; $\hat{\mathbf{x}}$ can then be chosen to be the mean of these MLEs and $\precon^{-1}$ to be a diagonal matrix with elements equal to the square root of the sum of the diagonal elements of the estimated information matrices. Better performance would generally be obtained using a non-diagonal matrix, but this serves to illustrate a degree of robustness to the specification of these parameters. The constants required within the control variate can then be evaluated. For a given hypercube, H, a bound, $\widetilde{K}\_{\mathbf{X}}$, on the dominating Poisson process intensity can then be obtained by simple analytic arguments facilitated by extending that hypercube to include $\hat{\mathbf{x}}$ and obtaining bounds on the modulus of continuity of $\widetilde{\phi}\_A$. In total, two passes of the full dataset are required to obtain the necessary algorithmic parameters and to fully specify the control variate. As discussed in, it is necessary to choose an execution time, *T*, for the algorithm and an auxiliary mesh (*t*0 :  = 0, *t*1, …, *t**m* :  = *T*) on which to evaluate *g* in the computation of the QSMC estimator ([eq:occupation3]). Note that within the algorithm the particle set is evolving according to killed Brownian motion with a preconditioning matrix $\precon^{-1}$ chosen to approximately match the square root of the information matrix of the target posterior. As such, *T* should be chosen to match the time taken for preconditioned Brownian motion to explore such a space, which in the examples considered in this paper ranged from *T* ≈ 1 to *T* ≈ 100. The number of temporal mesh points, *m*, was chosen with computational considerations in mind — increasing *m* increases the cost of evaluating the estimator and leads to greater correlation between the particle set at consecutive mesh points, but ensures when running the algorithm on a multiple user cluster that the simulation is periodically saved and reduces the variance of the estimator. As the computational cost of the algorithm is entirely determined by the bounds on the discussed modulus of continuity of *ϕ̃**A*, in each of the examples we later consider our mesh size was loosely determined by the inverse of this quantity and ranged from (*t**i* − *t**i* − 1) ≈ 10− 3 to (*t**i* − *t**i* − 1) ≈ 10− 6. The initial distribution *f***x**0 is not *too* critical, provided that it is concentrated reasonably close (within a neighbourhood of size O(*n*− 1/2)) to the mode of the distribution. The stability properties of the SMC implementation ensure that the initial conditions will be forgotten (see Chapter 7 of for a detailed discussion). The empirical results presented below were obtained by choosing as *f***x**, either a singular distribution concentrated at $\hat{\mathbf{x}}$ or a normal distribution centred at that location with a covariance matrix matching $\precon \precon^T$; results were found to be insensitive to the particular choice. Complexity of ScaLE =================== The computational cost of ScaLE will be determined by two factors: the speed at which *μ**t* approaches *π* and the computational cost of running the algorithm per unit algorithm time. Throughout the exposition of this paper, the proposal process is simple Brownian motion. Due to posterior contraction, as *n* grows this proposal Brownian motion moves increasingly rapidly through the support of *π*. On the other hand, as *n* grows, killing rates will grow. In this subsection we shall explore in detail how the computational cost of ScaLE varies with *n* (its complexity) while bringing out explicitly the delicate link to the rate of posterior contraction and the effect of the choice of ${\hat\bx}$. We start by examining the speed of convergence of *μ**t* and in particular its dependence on posterior contraction. Being more explicit about posterior contraction, we say that {*π**n*} are O(*n*− *η*/2) or have *contraction rate *η*/2* for some *η* > 0 to a limit $\bx \_0$ if for all *ε* > 0 there exists *K* > 0 such that when $\bX \_n \sim \pi \_n$, $ {\Bbb P} (|\bX\_n - \bx \_0|>Kn^{-\eta /2})<\epsilon $. It is necessary to extend the definition of *μ**t* to a setting where *n* increases, hence define $$\label{e:mute} \mu \_t^{n,{\mathbf{u}}}(d\bx ) := \mathbbm{P}(\bX\_t \in d\bx \mid \zeta >t, \bX\_0=\bx\_0+n^{-\eta /2} {\mathbf{u}}).$$ Since we are dealing with Markov processes that are essentially never uniformly ergodic, it is impossible to control convergence times uniformly. The specification of the initial value as $\bX\_0=\bx\_0+n^{-\eta /2} {\mathbf{u}}$, which, as *n* increases, remains close to the centre of the posterior as specified through the contraction rate, goes as far as we can before incurring additional computational costs for bad starting values. Set *T**n*, **u**, *ε* = inf{*t* ≥ 0;  ∥*μ**t**n*, **u** − *π**n*∥ < *ε*} where ∥ ⋅ ∥ represents total variation distance. It will be necessary to make the following technical assumption. For all *ε*, *K* > 0 limsup*n* → ∞sup∣**u**∣ < *K**n**η**T**n*, **u**, *ε* < ∞ At first sight, assumption ([e:unifqsconv]) may seem strong, but it is very natural and is satisfied in reasonable situations. For example suppose we have a contraction scaling limit: $\pi \_n (dx) \approx h\left({\bx - \bx \_0 \over n^{\eta /2}}\right)$. (A special case of this is the Bernstein–von Mises theorem with *η* = 1 and *h* being Gaussian, but our set up is far broader.) If $\{ \bX \_t ^n\}$ denotes ScaLE on *π**n*, then by simple scaling and time change properties of Brownian motion it is easily checked that if $\bY\_t = \bX \_{n^{-\eta } t}$ then $\bY $ is (approximately) ScaLE on *h* which is clearly independent of *n*. Thus to obtain a process which converges in O(1) we need to *slow down* $\bX $ by a time scaling factor of $$\label{e:timefactor} \hbox{time factor }=n^{\eta }\.$$ Similar arguments have been used for scaling arguments of other Monte Carlo algorithms that use similar control variates, see for instance the concurrent work of. While posterior contraction has a positive effect on computational cost, it is also the case that for large *n* the rate at which a likelihood subsample needs to be calculated, as measured by *λ̃*, needs to increase. Since *λ̃* depends on the current location in the state space, where we need to be precise we shall set *λ̃**n*, *K* to be an available bound which applies uniformly for $|\bx - \bx \_0| <Kn^{-\eta /2}$. The following notion of *convergence cost* will be required: setting $$\iterC = \iterC(n, K,\epsilon ) = T\_{n,K, \epsilon} \cdot \tilde{\lambda }\_{n,K}$$ ScaLE is said to have *iteration complexity* $n^{\varpi}$ or, equivalently, is $\mathcal{O} (n^{\varpi})$ if $\iterC (n, K,\epsilon )$ is $\mathcal{O} (n^{\varpi})$ for all *K*, *ε* > 0. Therefore to understand iteration complexity of ScaLE it is necessary to understand the rate at which *λ̃**n*, *K* grows with *n*. A general way to do this is to use global, or local, bounds on the second-derivatives of the log-likelihood for each datum. To simplify the following exposition a global bound is assumed, so that $$\begin{aligned} \rho (\nabla^2\log f\_I(\mathbf{x}))\leq P\_{n}, \label{eq:hessiancon}\end{aligned}$$ for some *P**n* > 0, where *ρ*( ⋅ ) represents the spectral radius and ∇2 is the Hessian matrix. For smooth densities with Gaussian and heavier tails, the Hessian of the log-likelihood is typically uniformly bounded (in both data and parameter). In such cases such a global bound would be expected, and in fact *P**n* would be constant in *n*. Recalling the layer construction of for a single trajectory of killed Brownian motion, we can ensure that over any finite time interval we have **x** ∈ H, some hypercube. Let the centre of the hypercube be **x**\*. In this section, eventually the assumption that the posterior contracts at a rate *n*− *η*/2 will be made, i.e. that {*n**η*/2(**x** − **x**0), *n* = 1, 2, …} is tight. The so-called *regular case* corresponds to the case where *η* = 1, although there is no need to make any explicit assumptions about normality in the following. The practitioner has complete freedom to choose H, and it makes sense to choose this so that ∥**x** − **x**\*∥ < *C*\**n*− *η*/2 for some *C*\* > 0 and for all **x** ∈ H. It is possible to bound *ϕ̃**A*(**x**) both above and below if we can bound ∣*ϕ̃**A*(**x**)∣ over A-almost all possible realisations of *A*. To bound ∣*ϕ̃**A*(**x**)∣, the approach here is to first consider the elementary estimator in ([eq:alphatilde]). By imposing the condition in ([eq:hessiancon]) we can then obtain $$\begin{aligned} \max\_{\mathbf{x} \in \mathcal{H},I\in\{0,\ldots{},n\}}|\tilde{\alpha}\_I(\mathbf{x})| \leq (n+1)\cdot P\_{{n}} \cdot \max\_{\mathbf{x} \in \mathcal{H}}\|\mathbf{x}-\hat{\mathbf{x}}\|.\end{aligned}$$ Thus it is possible to bound the estimator in ([eq:subsamplelambda]) as follows $$\begin{aligned} & 2\max\_{\mathbf{x} \in \mathcal{H},A\in\mathcal{A}}|\tilde{\phi}\_A(\mathbf{x})- C | \leq \nonumber \\ & {(n+1)P\_n \max\_{\mathbf{x} \in \mathcal{H}} \|\mathbf{x}-\hat{\mathbf{x}}\| \left[ |2\nabla\log\pi(\hat{\mathbf{x}})| +P\_{{n}} (n+1) \max\_{\mathbf{x} \in \mathcal{H}} \|\mathbf{x}-\hat{\mathbf{x}}\| \right] +P\_{{n}} d (n+1) }.\end{aligned}$$ We can use the fact that $\max\_{\mathbf{x} \in \mathcal{H}}\|\mathbf{x}-\hat{\mathbf{x}}\| \leq \|\mathbf{x}^\*-\hat{\mathbf{x}}\|+C^\*n^{-\eta /2}$ to bound the terms in this expression. We now directly consider the iteration complexity of ScaLE. We note that the appropriate killing rate to ensure mixing in time O(1) involves slowing down by the time factor given in [e:timefactor], and is therefore just *n*− *η**λ̃*. Assuming *η* ≤ 1, and using the bound on ∣*ϕ̃**A*(**x**) − *C*∣ for the hypercube centred on **x**\*, we have that whilst we remain within the hypercube, $$\begin{aligned} \label{e:lamtibd} \frac{1}{n^{\eta }}\tilde{\lambda} & = \mathcal{O}\Big( P\_{{n}} n^{1-3\eta /2} \big( P\_{{n}} n^{1-\eta /2} +|\nabla \log \pi (\hat{\mathbf{x}}) |\big) \Big). \end{aligned}$$ Here the assumption has been made that at stationarity **x**\* will be a draw from the support of the posterior, so that under the assumption of posterior contraction at the *n*− *η*/2 rate, then $\|\mathbf{x}^\*-\hat{\mathbf{x}}\|=\mathcal{O}\_p(n^{-\eta /2})$. This discussion is summarised in the following result. [prop:contraction] Suppose that ([e:unifqsconv]) and ([eq:hessiancon]) hold, posterior contraction occurs at rate *n*− *η*/2 for *η* ≤ 1, *P**n* is O(1) and that $|\nabla \log \pi (\hat{\mathbf{x}}) | = \mathcal{O} (n^\iota )$ for some *ι* > 0. Then the iterative complexity of ScaLE is $\mathcal{O} (n^{\varpi})$ where $$\varpi := \max(1-\eta/2,\iota) +1 - 3\eta /2.$$ In particular, where *ι* ≤ 1 − *η*/2, $ \varpi = 2- 2\eta. $ If *η* = 1, then it follows that $\varpi =0$ and the iterative complexity of ScaLE is O(1). This result also illuminates the role played by $|\nabla \log \pi (\hat{\mathbf{x}}) |$ in the efficiency of the algorithm. In the following discussion it is assumed that *η* = 1. It is worth noting that while a completely arbitrary starting value for $\hat{\mathbf{x}} $ might make $|\nabla \log \pi (\hat{\mathbf{x}}) |$ an ${\mathcal O}(n)$ quantity leading to an iterative complexity of the algorithm which is ${\mathcal O}(n^{1/2})$. To obtain ${\mathcal O}(1)$ it is simply required that $|\nabla \log \pi (\hat{\mathbf{x}}) |$ be ${\mathcal O}(n^{1/2})$ which gives considerable leeway for any initial explorative algorithm to find a good value for ${\hat{\bx}}$. Note that given bounds on the third derivatives, ([e:lamtibd]) can be improved by linearising the divergence term in ([eq:subsamplelambda]). This idea is exploited later in a logistic regression example (see Sections [s:examp:ulr], [s:examp:alr], [s:examp:1]). In the absence of a global bound on the second derivatives, it is possible to replace *P**n* in the above arguments by any constant that bounds the second-derivatives for all **x** such that $\|\mathbf{x}-\hat{\mathbf{x}}\|\leq \max\_{\mathbf{x} \in \mathcal{H}} \|\mathbf{x}-\hat{\mathbf{x}}\|$. In this case, the *most extreme* rate at which ${\tilde \lambda}$ can grow is logarithmically with *n*, for instance for light-tailed models where the data really comes from the model being used. Where the tails are mis-specified and light-tailed models are being used, the algorithmic complexity can be considerably worse. There is considerable scope for more detailed analyses of these issues in future work. The above arguments give insight into the impact of our choice of $\hat{\mathbf{x}}$. It affects the bound on *λ̃*, and hence the computational efficiency of ScaLE, through the terms $\|\mathbf{x}^\*-\hat{\mathbf{x}}\|$. Furthermore the main term in the order of *λ̃* is the square of this distance. If $\hat{\mathbf{x}}$ is the posterior mean, then the square of this distance will, on average, be the posterior variance. By comparison, if $\hat{\mathbf{x}}$ is *k* posterior standard deviations away from the posterior mean, then on average the square distance will be *k*2 + *a* times the posterior variance (for some constant *a*), and the computational cost of ScaLE will be increased by a factor of roughly *k*2 + *a*. Overall complexity ------------------ Here we will briefly discuss the overall complexity of ScaLE. The general setup of Theorem [prop:contraction] describes the iteration complexity of ScaLE on the assumption that $|\nabla \log \pi (\hat{\mathbf{x}}) |$ grows no worse than O(*n**ι*). However there is a substantial initial computational cost in locating $\hat{\mathbf{x}}$ and calculating $\nabla \log \pi (\hat{\mathbf{x}}) $ which is likely to be O(*n*) as there are *n* terms in the calculation of the latter. Therefore the *overall complexity* of ScaLE can be described as $$\mathcal{C} = \initC + \iterC = \mathcal{O}(n)+ \mathcal{O}(n^{\varpi } t)$$ where *t* represents algorithm time. This is in contrast to an MCMC algorithm for which iteration cost would be O(*n*) leading to overall complexity *t**n*. A Laplace approximation will involve an initial cost that is (at very least) O(*n*) but no further computation. Since they both involve full likelihood calculations, finding the posterior mode and finding ${\hat x}$ are both likely to be O(*n*) calculations. This can be shown to be the case for strongly log-concave posterior densities, though the cost may be higher if the log-posterior is not concave. On the other hand, the above discussion shows that in order to achieve O(1) scaling with data we typically only need to find ${\hat x}$ within O(*n*− 1/2) of the posterior model. Thus finding ${\hat x}$ is certainly no harder than finding the posterior mode, as we can use the same mode-finding algorithm, e.g., but have the option of stopping earlier. If *n* is sufficiently large that the initialisation cost dominates the iteration cost, ScaLE will computationally be no more expensive to implement than the Laplace approximation. In this case we obtain an *exact approximate* algorithm (ScaLE) for at most the computational complexity of an approximate method (Laplace). These complexity considerations are summarised in Table [t:complex]. [t:complex] | | $\initC$ | $\iterC$ | C | | --- | --- | --- | --- | | MCMC | 0 | *t**n* | *t**n* | | Laplace approximation | *n* | 0 | *n* | | ScaLE | *n* | $tn^{\varpi}$ | $n+tn^{\varpi}$ | | ScaLE when *η* = 1 | *n* | *t* | *n* + *t* | Theoretical Properties ====================== SMC algorithms in both discrete and continuous time have been studied extensively in the literature (for related theory for approximating a fixed-point distribution, including for algorithms with resampling implemented in continuous-time, see ; a discrete-time algorithm to approximate a fixed-point distribution in a different context was considered by ). In order to avoid a lengthy technical diversion, we restrict ourselves here to studying a slightly simplified version of the problem in order to obtain the simplest and most interpretable possible form of results. The technical details of this construction are deferred to and give here only a qualitative description intended to guide intuition and the key result: that the resulting estimator satisfies a Gaussian central limit theorem with the usual Monte Carlo rate. Consider a variant of the algorithm in which (multinomial) resampling occurs at times *k**h* for *k* ∈ N where *h* is a time step resolution specified in advance and consider the behaviour of estimates obtained at these times. Extension to resampling at a random subset of these resampling times would be possible using the approach of, considering precisely the QSMC algorithm presented in and the ScaLE algorithm in would require additional technical work somewhat beyond the scope of this paper; no substantial difference in behaviour was observed. In order to employ standard results for SMC algorithms it is convenient to consider a discrete time embedding of the algorithms described. We consider an abstract formalism in which between the specified resampling times the trajectory of the Brownian motion is sampled, together with such auxiliary random variables as are required in any particular variant of the algorithm. Provided the potential function employed to weight each particle prior to resampling has conditional expectation (given the path) proportional to the exact killing rate integrated over these discrete time intervals a valid version of the ScaLE algorithm is recovered. This discrete time formalism allows for results on more standard SMC algorithms to be applied directly to the ScaLE algorithm. We provide in the following proposition a straightforward corollary to a result in Chapter 9 of, which demonstrates that estimates obtained from a single algorithmic time slice of the ScaLE algorithm satisfy a central limit theorem. [Central Limit Theorem] [prop:3] In the context described, under mild regularity conditions (see references given in ): $$\begin{aligned} \lim\_{N \to \infty} \sqrt{N} \left[\frac{1}{N} \sum\_{i=1}^N \varphi(X^i\_{hk}) - \mathbb{E}\_{\mathbb{K}\_{hk}^x}\left[\varphi(X^i\_{hk})\right] \right] \Rightarrow \sigma\_k(\varphi) Z\end{aligned}$$ where, *φ* : R*d* → R, *Z* is a standard normal random variable,  ⇒  denotes convergence in distribution, and *σ**k*(*φ*) depends upon the precise choice of sub-sampling scheme as well as the test function of interest and is specified in along with the law K*h**k**x*. Examples ======== In this section we present five example applications of the methodology developed in this paper, each highlighting a different aspect of ScaLE and contrasted with appropriate competing algorithms. In particular: in we consider a simple pedagogical example which has a skewed target distribution, contrasted with MCMC; considers the performance of a logistic regression model in which significantly less information is available from the data about one of the covariates than the others; in we apply both ScaLE and SGLD to a regression problem based upon the ASA Data Expo Airline On-time Performance data, which is of moderately large size ( ≈ 108); considers ScaLE applied to a very large logistic regression problem, with a data set of size *n* = 234 ≈ 1010.2, along with consideration of scalability with respect to data size; Finally, in parameter inference for a contaminated regression example is given, motivated by a big data application with *n* = 227 ≈ 108.1, and illustrating the potential of an *approximate* implementation of ScaLE even when mis-initialised. All simulations were conducted in R on an Xeon X5660 CPU running at 2.8 GHz. Note that for the purposes of presenting the ScaLE methodology as cleanly as possible, in each example no prior has been specified. In practice, a prior can be simply included within the methodology as described in. Details of data and code used to produce the output within this section can be found in Journal of the Royal Statistical Society: Series B (Statistical Methodology) Datasets. Skewed Target Distribution -------------------------- In order to illustrate ScaLE applied to a simple non-Gaussian target distribution, we constructed a small data set of size *n* = 10, to which we applied a logistic regression model $$\label{eq:logistic} y\_i = \left\lbrace \begin{array}{ll} 1 & \quad\text{with probability } \dfrac{\exp\{\mathbf{x}^T\_i \mathbf{\beta}\}}{1+\exp\{\mathbf{x}^T\_i \mathbf{\beta}\}}, \\\*[10pt] 0 & \quad\text{otherwise}. \end{array} \right.$$ The data was chosen to induce a skewed target, with **y***T* = (1, 1, 0, …, 0) and **x***i**T* = (1, ( − 1)*i*/*i*). We used the glm R package to obtain the MLE (*β*\* ≈ ( − 1.5598,  − 1.3971)) and observed Fisher information, in order to (mis-)initialise the particles in the ScaLE algorithm. In total *N* = 210 particles were used, along with a sub-sampling mechanism of size 2 and a control variate computed as in by setting $\hat{\mathbf{x}}=\beta^\*$. For comparison we ran random walk Metropolis on the same example initialised at *β*\* using the MCMClogit function provided by MCMCpack, computed the posterior marginals based on 1,000,000 iterations thinned to 100,000 and after discarding a burn-in of 10,000 iterations, and overlaid them with together with those estimated by ScaLE in. These are accompanied by the glm fit used to mis-initialise ScaLE. It is clear from that the posterior obtained by simulating ScaLE matches that of MCMC, and both identify the skew which would be over-looked by a simple normal approximation. The particle set in ScaLE quickly recovers from its mis-initialisation, and only a modest burn-in period is required. In practice, we would of course not advocate using ScaLE for such a small data setting — the computational and implementational complexity of ScaLE does not compete with MCMC in this example. However, as indicated in and the subsequent examples, ScaLE is robust to increasing data size whereas simple MCMC will scale at best linearly. ![Upper plots are trace trajectories of ScaLE applied to the skewed target distribution example of. Solid red lines on the trace plots mark the parameter values fitted using the glm R package, and dashed red lines are 95% confidence intervals imputed using the covariance matrix estimated from the glm package. Lower plots are marginal densities obtained by ScaLE (black lines). Overlaid on the marginal plots are the normal approximation from the glm R package (red dashed line), and from MCMC run (green dashed lines).](skewed_adam.png "fig:") [fig:skewedExample] Heterogeneous Logistic Regression --------------------------------- For this example a synthetic data set of size *n* = 107 was produced from the logistic regression model in ([eq:logistic]). Each record contained three covariates, in addition to an intercept. The covariates were simulated independently from a three-dimensional normal distribution with identity covariance truncated to [ − 0.001, 0.001] × [ − 1,  + 1] × [ − 1,  + 1], and with the true **\beta** = (0, 2,  − 2, 2) (where the first coordinate corresponds to the intercept). The specification of this data set is such that significantly less information is available from the data about the second covariate than the others. Data was then generated from ([eq:logistic]) using the simulated covariates. As before, the glm R package was used to obtain the MLE and observed Fisher information, which was used within ScaLE to set $\mathbf{\beta}^\star = \hat{\mathbf{x}}\approx (2.3581\times 10^{-4},\allowbreak 2.3407,\allowbreak -2.0009,\allowbreak 1.9995)$ and $\precon \approx \textrm{diag}(7.6238\times 10^{-4}, \allowbreak 1.3202,\allowbreak 1.5137\times10^{-3},\allowbreak 1.5138\times10^{-3})$ respectively. For the control variate $\nabla\log\pi(\hat{\mathbf{x}}) \approx (2.0287\times10^{-9},\allowbreak 2.2681 \times 10^{-9},\allowbreak -2.3809 \times 10^{-6},\allowbreak -2.3808 \times 10^{-6})$ was calculated using the full data set, and as expected (and required for computational considerations) is extremely small, along with $\Delta\log\pi(\hat{\mathbf{x}})$. ScaLE was then applied to this example using *N* = 210 particles initialised using a normal approximation given by the computed $\hat{\mathbf{x}}$ and $\precon$, and a subsampling mechanism of size 2. The simulation was run for 20 hours, in which time 84,935,484 individual records of the data set were accessed (equivalent to roughly 8.5 full data evaluations). Trace plots for the simulation can be found in, along with posterior marginals given by the output (after discarding as burn-in a tenth of the simulation). The posterior marginals are overlaid with the normal approximation given by the R glm fit. The estimated means and standard deviations for the regression parameters were ${\bf\mathbf{x}} \approx (-2.3194\times 10^{-4},\allowbreak 2.3197,\allowbreak -2.0009,\allowbreak 1.9995)$, and $\sigma\_{\bf x} \approx (7.6703 \times 10^{-4},\allowbreak 1.3296,\allowbreak 1.6386 \times 10^{-4},\allowbreak 1.6217 \times 10^{-4})$ respectively. This is in contrast with *β*\* and standard deviations of $\approx (7.6238 \times 10^{-4},\allowbreak 1.3203,\allowbreak 1.6237 \times 10^{-4},\allowbreak 1.6233 \times 10^{-4})$ from the glm output. To assess the quality of the output we adopted a standard method for estimating effective sample size (ESS) for a single parameter. In particular, we first estimated a marginal ESS associated with the particles from ScaLE at a single time-point, with this defined as the average of the ratio of the variance of the estimator of the parameter using these particles to the posterior variance of the parameter. To calculate the overall ESS, the dependence of these estimators over-time is accounted for by modelling this dependence as an AR(1) process. Full details of this approach are given in Appendix [app:ESS]. The resulting average ESS per parameter using this approach was found to be 352. The ScaLE output is highly stable and demonstrates that despite the heterogeneity in the information for different parameters, the Bernstein–von Mises limit (Laplace approximation) proves here to be an excellent fit. Although the GLM fit is therefore excellent in this case, ScaLE can be effectively used to verify this. This is in contrast to the example in Section 7.1 where ScaLE demonstrates that the GLM-Laplace approximation is a poor approximation of the posterior distribution. ![Upper plots are trace plots of ScaLE applied to the heterogeneous logistic regression example of. Solid red lines on the trace plots mark the parameter values fitted using the glm R package, and dashed red lines are 95% confidence intervals imputed using the covariance matrix estimated from the glm package. Lower plots are marginal densities obtained by ScaLE (black lines). Overlaid on the marginal plots are the normal approximation from the glm R package (red dotted lines).](uninformative_adam.png "fig:") [fig:ulr] Airline Dataset --------------- To demonstrate our methodology applied to a real (and moderately large) dataset we consider the ‘Airline on-time performance’ dataset which was used for the 2009 American Statistical Association (ASA) Data Expo, and can be obtained from <http://stat-computing.org/dataexpo/2009/>. The ‘Airline’ data set consists in its entirety of a record of all flight arrival and departure details for all commercial flights within the USA from October 1987 to April 2008. In total the data set comprises 123,534,969 such flights together with 29 covariates. For the purposes of this example we selected a number of covariates to investigate what effect (if any) they may have on whether a flight is delayed. The Federal Aviation Administration (FAA) considers an arriving flight to be late if it arrives more than 15 minutes later than its scheduled arrival time. As such we take the *flight arrival delay* as our observed data (given by **ArrDelay** in the Airline data) and treat it as a binary taking a value of one for any flight delayed in excess of the FAA definition. In addition to an intercept, we determine three further covariates which may reasonably affect flight arrival: a *weekend* covariate, which we obtain by treating **DayOfWeek** as a binary taking a value of one if the flight operated on a Saturday or Sunday; a *night flight* covariate, which we obtain by taking **DepTime** (Departure Time) and treating it as a binary taking a value of one if the departure is between 8pm and 5am; and flight *distance*, which we obtain by taking **Distance** and normalising by subtracting the minimum distance and dividing by the range. The resulting data set obtained by the above process contained a number of missing entries, and so all such flights were omitted from the data set (in total 2,786,730 rows), leaving *n* = 120,748,238 rows. We performed logistic regression taking the *flight arrival delay* variable as the response and treating all other variables as covariates. To allow computation of $\hat{\mathbf{x}}$ and $\precon$ as required by ScaLE the data was first divided into 13 subsets each of size 9,288,326 and the MLE and observed information matrix for each was obtained using the R glm package. It should be noted that the Airline data set is highly structured, and so for robustness the order of the flight records was first permuted before applying glm to the data subsets. An estimate for the MLE and observed information matrix for the full data set was obtained by simply taking the mean for each coefficient of the subset MLE fits, and summing the subset information matrices. The centring point $\hat{\mathbf{x}} \approx (-1.5609,\allowbreak -0.1698,\allowbreak 0.2823,\allowbreak 0.9865)$ was chosen to be the computed MLE fit, and for simplicity $\precon^{-1}$ was chosen to be the square root of the diagonal of the computed information matrix ($\precon \approx \text{diag}(2.309470\times 10^{-4},\allowbreak 4.632830\times 10^{-4},\allowbreak 6.484359\times 10^{-4},\allowbreak 1.2231\times 10^{-5}))$. As before, and as detailed in, we use the full data set in order to compute $\nabla\log\pi(\hat{\mathbf{x}}) \approx (0.00249,\allowbreak 0.0018,\allowbreak 0.0021,\allowbreak 0.0029)$ (which again is small as suggested by the theory, and required for efficient implementation of ScaLE) and $\Delta\log\pi(\hat{\mathbf{x}}) \approx -3.999$. The ScaLE algorithm was initialised using the normal approximation available from the glm fit. In total *N* = 212 particles were used in the simulation, and for the purposes of computing the unbiased estimator *ϕ̃**A*(**x**) we used a sub-sample of size 2. The algorithm was executed so that *n* individual records of the data set were accessed (i.e. a single access to the full data set), which took 36 hours of computational time. The first tenth of the simulation trajectories were discarded as burn-in, and the remainder used to estimate the posterior density. The trace plots and posterior densities for each marginal for the simulation can be found in. For comparison, we also ran stochastic gradient Langevin diffusion (SGLD; ). This algorithm approximately simulates from a Langevin diffusion which has the posterior distribution as its stationary distribution. The approximation comes from both simulating an Euler discretised version of the Langevin diffusion and from approximating gradients of the log posterior at each iteration. The approximation error can be controlled by tuning the step-size of the Euler discretisation — with smaller step-sizes meaning less approximation but slower mixing. We implemented SGLD using a decreasing step-size, as recommended by the theoretical results of ; and used pilot runs to choose the smallest scale for the step-size schedule which still led to a well-mixing algorithm. As such, the pre-processing expenditure matched that of ScaLE. The accuracy of the estimate of the gradient is also crucial to the performance of SGLD, and we employed an estimator that used control variates (similar to that developed in ScaLE) and a mini-batch size of 1000, following the guidance of. For comparable results we ensured that SGLD had the same number of log-likelihood evaluations as ScaLE (i.e. equivalent to one single access to the full data set), and initiated SGLD from the centring value used for the control variates. In total the SGLD simulation took 4 hours to execute. The first tenth was discarded as burn-in and the remainder was used to estimate the marginal posteriors, which are overlaid with those estimated by ScaLE in. As can be seen in, SGLD estimates seem to be unstable here, with the algorithm struggling to mix effectively under the decreasing step size constraint, particularly for the fourth covariate. Indeed, the marginal posteriors should be convex and SGLD deviates strongly from this. This unstable behaviour was confirmed in replicate SGLD runs, and indeed it would be difficult to separate out bias from Monte Carlo error for SGLD without much longer runs. This is in contrast with ScaLE which produces far more stable output in this example. ![Upper plots are trace plots of ScaLE applied to the Airline data set. Solid red lines on the trace plots mark the parameter values fitted using the glm R package, and dashed red lines are 95% confidence intervals imputed using the covariance matrix estimated from the glm package. Lower plots are marginal densities obtained by ScaLE (black lines). Overlaid on the marginal plots are the normal approximation from the glm R package (red dotted lines), and from the comparable SGLD run (green dotted lines).](airline_adam.png "fig:") [fig:airlineanalysis] Large Data Scenario ------------------- In this subsection we consider an application of ScaLE to a 5-dimensional logistic regression model, considering data sets of up to size *n* = 234 ≈ 1010.2. Logistic regression is a model frequently employed within big data settings, and here the scalability of ScaLE is illustrated for this canonical model. In this example, we generate a data set of size 234 from this model ([eq:logistic]) by first constructing a design matrix in which the *i*th entry ${\bx}\_{i} := [1,\zeta\_{i,1},\ldots{},\zeta\_{i,4}]^T$, where *ζ*1, 1, …, *ζ**n*, 4 are i.i.d. truncated normal random variables with support [ − 1, 1]. In the big data setting it is natural to assume such control on the extreme entries of the design matrix, either through construction or physical limitation. Upon simulating the design matrix, binary observations are obtained by simulation using the parameters ${\bf \beta} = [1,1,-1,2,-2]^T$. Due to the extreme size of the data we realised observations only as they were required to avoid storing the entire data set; see code provided for implementation details. First considering the data set of size *n* = 234, then following the approach outlined in, $\hat{\mathbf{x}}$ and $\precon$ were chosen by breaking the data into a large number of subsets, fitting the R glm package to each subset, then appropriately pooling the fitted MLE and observed Fisher information matrices. In total the full data set was broken into 213 subsets of size 221, and the glm fitting and pooling was conducted entirely in parallel on a network of 100 cores. Consequently, $\hat{\mathbf{x}} = \beta^\* \approx (0.9999943, \allowbreak 0.9999501,\allowbreak -0.9999813,\allowbreak 1.999987,\allowbreak -1.999982)$ and $\precon \approx \textrm{diag}(1.9710\times 10^{-5},\allowbreak 3.6921\times 10^{-5},\allowbreak 3.6910\times 10^{-5},\allowbreak 3.8339\times 10^{-5},\allowbreak 3.8311\times 10^{-5})$. Upon computing $\hat{\mathbf{x}}$ an additional pass of the 213 subsets of the data of size 221 was conducted in parallel in order to compute $\nabla\log\pi(\hat{\mathbf{x}}) \approx (-0.0735,\allowbreak -0.0408,\allowbreak 0.0428,\allowbreak -0.09495,\allowbreak 0.0987)$ and $\Delta\log\pi(\hat{\mathbf{x}}) \approx -5.006$ for construction of the control variate. Fully utilising the 100 cores available the full suite of pre-processing steps required for executing ScaLE (i.e. the computation of both the glm fit and control variate) took 27 hours of wall-clock time. ScaLE was applied to this example using *N* = 210 particles initialised using a normal approximation given by the available glm fit, and a subsampling mechanism of size 2. The simulation was run for 70 hours, in which time 49,665,450 individual records of the data set were accessed (equivalent to roughly 0.0029 full data evaluations). Trace plots for the simulation can be found in. The first tenth of the simulation trajectories were discarded as burn-in, and the remainder used to estimate the posterior density of each marginal. These can also be found in, together with the normal approximation to the posterior marginals given by the R glm fit, is again very accurate here, agreeing closely with the ScaLE output. Using the ESS approach described in and Appendix [app:ESS], the average ESS per parameter was found to be 553. ![Upper plots are trace plots of ScaLE applied to the large data set of size 2^{34} example of. Solid red lines on the trace plots mark the parameter values fitted using the glm R package, and dashed red lines are 95% confidence intervals imputed using the covariance matrix estimated from the glm package. Lower plots are marginal densities obtained by ScaLE (black lines). Overlaid on the marginal plots are the normal approximation from the glm R package (red dotted lines).](large_adam.png "fig:") [fig:largeexample] To investigate scaling with data size for this example, we considered the same model using the same process as outlined above with data sets varying in size by a factor of 2 from *n* = 221 to *n* = 233. Computing explicitly the dominating intensity *λ̃**n*, *K* over the support of the density the relative cost of ScaLE for each data set with respect to the data set of size *n* = 234 can be inferred. This is shown in. [fig:largeexample2] Contaminated Mixture -------------------- In this subsection we consider parameter inference for a contaminated mixture model. This is motivated by big data sets obtained from internet applications, in which the large data sets are readily available, but the data is of low quality and corrupted with noisy observations. In particular, in our example each datum comprises two features and a model is fitted in which the likelihood of an individual observation (*y**i*) is, $$\begin{aligned} F\_i := \dfrac{1-p}{\sqrt{2\pi\sigma^2}}\exp\left\{-\dfrac{1}{2\sigma^2}\left(\alpha\cdot x\_{i,1} + \beta \cdot x\_{i,2} -y\_i\right)^2\right\} + \dfrac{p}{\sqrt{2\pi\phi^2}}\exp\left\{-\dfrac{1}{2\phi^2}y\_i^2\right\}. \label{eq:contamlike}\end{aligned}$$ In this model *p* represents the level of corruption and *ϕ* the variance of the corruption. A common approach uses MCMC with data augmentation. However, for large data sets this is not feasible as the dimensionality of the auxiliary variable vector will be O(*n*). For convenience a transformation of the likelihood was made so that each transformed parameter is on $\mathbbm{R}$. The details are omitted, and the results presented are given under the original parametrisation. A data set of size *n* = 227 ≈ 108.1 was generated from the model with parameters **\mu** = [*α*, *β*, *σ*, *ϕ*, *p*] = [2, 5, 1, 10, 0.05]. To illustrate a natural future direction for the ScaLE methodology, in this example we instead implemented an approximate version of ScaLE (as opposed to the exact version illustrated in the other examples of ). In particular, the primary implementational and computational bottleneck in ScaLE is the formal ‘localization procedure’ to obtain almost sure bounds on the killing rate by constraining Brownian motion to a hypercube (as fully detailed in and ). Removing the localization procedure results in the Brownian motion trajectories being unconstrained, and the resulting dominating intensity *λ̃* being infinite. However, in practice the probability of such an excursion by Brownian motion outside a suitably chosen hypercube can be made vanishingly small (along with the consequent impact on the Monte Carlo output) by simply adjusting the temporal resolution at which the ergodic average is obtained from the algorithm (noting Brownian motion scaling is $\mathcal{O}(\sqrt{t}$), and inflating the bounds on the Hessian for computing the intensity. The resulting ‘approximate’ algorithm is approximate in a different (more controllable and monitorable) sense than for instance SGLD, but results in substantial (10x-50x) computational speed-ups over the available (but expensive) ‘exact’ ScaLE. In contrast with the other examples of, rather fitting an approximate model in order to initialise the algorithm, instead in this example a single point mass to initialise the algorithm was chosen ($\mu = [2.00045,\allowbreak 5.00025,\allowbreak 0.999875,\allowbreak 10.005\,\allowbreak 0.0499675]$), and this was also used as the point to compute our control variate (described in ). The pre-processing for executing ScaLE took approximately 6 hours of computational time (and is broadly indicative of the length of time a single iteration of an alternative MCMC scheme such as MALA would require). Note that as discussed in, this ‘mis-initialisation’ impacts the efficiency of the algorithm by a constant factor, but is however representative of what one in practice may conceivably be able to do (i.e. find by means of an optimisation scheme a point within the support of the target posterior close to some mode, and conduct a single O(*n*) calculation). The forgetting of this initialisation is shown in. Applying ScaLE for this application we used a particle set of size *N* = 211, and run the algorithm for diffusion time of *T* = 200, with observations of each trajectory at a resolution of *t**i* − *t**i* − 1 = 0.1. Again, the choice of *N* was made as in as it provided the required stability. The choice of *T* was made as it corresponded approximately to a computational budget of one week. Each particle trajectory at each time *t* ∈ [0, *T*] was associated with a sub-sample of the full data set of size 32, rather than 2, with the resulting likelihood estimates combined by simple averaging. This size was chosen as it provided balance with other components of the algorithm, but allowed stabilisation of the importance weights which was beneficial for the approximate algorithm. In total the entire run required accessing 500 million individual data points, which corresponds to approximately 4 full evaluations of the data set. An example of a typical run can be found in. A burn-in period of 100 was chosen, and alongside the trace plots in an estimate of the marginal density of the parameters is provided using the occupation measure of the trajectories in the interval *t* ∈ [100, 200]. To assess the quality of the simulation, the same batch mean method is employed to estimate the marginal ESS for the run post burn-in as detailed in. The mean ESS per dimension for this run was around 930. An analysis of MALA (for a necessarily much smaller run), indicated it is possible to achieve an ESS of around *T*/3, where *T* corresponds to the run length subsequent to burn-in. As indicated above, and neglecting burn-in, this would mean an achievable ESS for a comparable computational budget for MALA would be around 10-15. ![Upper plots are trace trajectories of ScaLE applied to the contaminated mixture data set, lower plots are marginal densities. In both sets of plots, blue lines mark the parameter values used to initialise the algorithm, green lines mark the parameter values the associated data set was generated from, and red lines mark the mean of the marginal densities.](contam_adam.png "fig:") [fig:contamanalysis] Conclusions =========== In this paper we have introduced a new class of *Quasi-Stationary Monte Carlo (QSMC)* methods which are genuinely continuous-time algorithms for simulating from complex target distributions. We have emphasised its particular effectiveness in the context of *big data* by developing novel sub-sampling approaches and the *Scalable Langevin Exact (ScaLE)* algorithm. Unlike its immediate competitors, our sub-sampling approach within ScaLE is essentially computationally free and does not result in any approximation to the target distribution. Our methodology is embedded within an SMC framework, supported by underpinning theoretical results. In addition, examples to which ScaLE is applied demonstrate its robust scaling properties for large data sets. We show through theory and examples that computational cost of ScaLE is more stable to data set size than *gold standard* MCMC approaches. Moreover we have seen it substantially outperform other approaches such as SGLD which are designed to be robust to data-size at the cost of bias and serial correlation. ScaLE can both confirm that simpler approaches such as Laplace approximation are accurate, and identify when such approximations are poor (as we see in the examples). We see this as a first step in a fruitful new direction for Computational Statistics. Many ideas for variations and extensions to our implementation exist and will stimulate further investigation. Firstly, the need to simulate a quasi-stationary distribution creates particular challenges. Although quasi-stationarity is underpinned by an elegant mathematical theory, the development of numerical methods for quasi-stationarity is understudied. We have presented an SMC methodology for this problem, but alternatives exist. For instance, suggest alternative approaches. Even within an SMC framework for extracting the quasi-stationary distribution, there are interesting alternatives we have not explored. For example, by a modification of our re-weighting mechanism it is possible to relate the target distribution of interest to the limiting *smoothing* distribution of the process, as opposed to the filtering distribution as we do here. Within the quasi-stationary literature this is often termed the type II quasi-stationary distribution. As such, the rich SMC literature offers many other variations on the procedures adopted here. Using SMC benefits from the rich theory it possesses. However the use of quasi-stationary Monte Carlo actually demands new questions of SMC. gives convergence as *T* → ∞, while gives a precise description of the limit as the number of particles *N* increases. There are theoretical and practical questions associated with letting both *N* and *T* tend to ∞ together. Within the examples in this paper *ad hoc* rules are used to assign computational effort to certain values of *N* and *T*. However the general question of how to choose these parameters seems completely open. Throughout the paper, we have concentrated on so-called *exact approximate* quasi-stationary Monte Carlo methods. Of course in many cases good approximations are good enough and frequently computationally less demanding. However, for many approximate methods it will be difficult to quantify the systematic error being created by the approximation. Moreover, we emphasise that there are different strategies for creating effective approximations that emanate directly from exact approximate methods, and where the approximation error can be well-understood. We have given an example of this in [s:examp:2] but other options are possible also. There are interesting options for parallel implementation of SMC algorithms in conjunction with ScaLE. For instance an appealing option would be to implement the island particle filter which could have substantial effects on the efficiency of our algorithms where large numbers of particles are required. Alternatively one could attempt to embed our scheme in other divide and conquer schemes as described in the introduction. The approach in this paper has concentrated solely on killed (or re-weighted) Brownian motion, and this strategy has been demonstrated to possess robust convergence properties. However, given existing methodology for the exact simulation of diffusions in, there is scope to develop methods which use proposal measures which much better *mimic* the shape of the posterior distribution. The sub-sampling and control variate approaches developed here offer dramatic computational savings for tall data as we see from the examples and from the theory of results like Theorem [prop:contraction]. We have not presented the ScaLE algorithm as a method for high-dimensional inference, and the problem of large *n* and *d* will inevitably lead to additional challenges. However there may be scope to extend the ideas of ScaLE still further in this direction. For instance, it might be possible to sub-sample dimensions and thus reduce the dimensional complexity for implementing each iteration. We conclude by noting that as a by-product, the theory behind our methodology offers new insights into problems concerning the existence of quasi-stationary distributions for diffusions killed according to a state-dependent hazard rate, complementing and extending current state-of-the-art literature. Acknowledgments =============== This work was supported by the EPSRC (grant numbers EP/D002060/1, EP/K014463/1, EP/N031938/1, EP/R018561/1, EP/R034710/1) and two Alan Turing Institute programmes; the Lloyd’s Register Foundation Programme on Data-Centric Engineering; and the UK Government’s Defence & Security Programme. The authors would like to thank Louis Aslett, Tigran Nagapetyan and Andi Q. Wang for useful discussions on aspects of this paper. In addition, we would like to thank the referees for a number of very helpful suggestions and questions which have improved the exposition of this paper. Proof of ======== Here we present a proof of. However, we first formally state the required regularity conditions. We suppose that $$\label{eq:Q0} \pi(x)\hbox{ is bounded,}$$ and defining *ν*(*x*) = *π*2(*x*), we further assume that, for some *γ* > 0, $$\begin{aligned} \label{eq:Q1} \liminf\_{\mathbf{x} \to \infty } \left( {\Delta \nu (\bx) \over \nu ^{\gamma + 1/2}(\bx ) } - { \gamma \| \nabla \nu (\bx)\|^2 \over \nu ^{\gamma + 3/2}(\bx)} \right) >0, \end{aligned}$$ where Δ represents the Laplacian. [Proof (Theorem [thm:qsd])] Consider the diffusion with generator given by $$\mathfrak{A} f (\bx )= {1\over 2} \Delta f (\bx) + {1\over 2} \nabla \log \nu (\bx ) \cdot \nabla f (\bx ).$$ As *ν* is bounded, we assume without loss of generality that its upper bound is 1. Our proof shall proceed by checking the conditions of Corollary 6 of, which establishes the result. In particular, we need to check that the following are satisfied: 1. For all *δ* > 0, the discrete time chain {*X**n**δ*, *n* = 0, 1, 2, …} is irreducible; 2. All closed bounded sets are petite; 3. We can find a drift function $V(\bx ) = \nu(\bx )^{-\gamma}$ for some *γ* > 0, that satisfies the condition $$\begin{aligned} \label{eq:drift} {\mathfrak{A}} V^{\eta }(\bx ) \le - c\_\eta V(\bx)^{\eta - \alpha} \end{aligned}$$ for $\bx $ outside some bounded set, for each *η* ∈ [*α*, 1] with associated positive constant *c**η*, and where *α* = 1 − (2*γ*)− 1. The first condition holds for any regular diffusion since the diffusion possesses positive continuous transition densities over time intervals *t* > 0; and positivity and continuity of the density also implies the second condition. For the final condition we require that $$\begin{aligned} \label{eq:ratiodrift} \limsup\_{|\bx | \to \infty } {{\mathfrak{A}} V^{\eta }(\bx ) \over V^{\eta - \alpha } (\bx)} <0.\end{aligned}$$ Now by direct calculation $$\begin{aligned} {\mathfrak{A}} V^{\eta }(\bx ) = {\eta \gamma \over 2} \nu (\bx )^{-\gamma \eta -2} \left[ {\eta \gamma \|\nabla \nu (\bx ) \|^2 } - \nu (\bx ) \Delta \nu (\bx ) \right],\end{aligned}$$ so that $$\begin{aligned} {{\mathfrak{A}} V^{\eta }(\bx )\over V(\bx )^{\eta - \alpha}} = {\eta \gamma \nu (\bx )^{-3/2-\gamma } \over 2} \left[ {\eta \gamma \|\nabla \nu (\bx ) \|^2 } - \nu (\bx ) \Delta \nu (\bx ) \right].\end{aligned}$$ Therefore ([eq:ratiodrift]) will hold whenever ([eq:Q1]) is true since we have the constraint that *η* ≤ 1 and $\|\nabla \nu (\bx ) \|^2$ is clearly non-negative. As such the result holds as required. Note that the condition in ([eq:Q1]) is essentially a condition on the tail of *ν*. This will hold even for heavy-tailed distributions, and we show this is the case for a class of 1-dimension target densities in. Polynomial tails ================ In this appendix we examine condition ([eq:Q1]) which we use within. This is essentially a condition on the tail of *ν*, and so we examine the critical case in which the tails of *ν* are heavy. More precisely, we demonstrate that for polynomial tailed densities in one-dimension that ([eq:Q1]) essentially amounts to requiring that *ν*1/2 is integrable. Recall that by construction *ν*1/2 will be integrable as we have chosen *ν*1/2 = *π*. For simplicity, suppose that *ν* is a density on [1, ∞) such that *ν*(*x*) = *x*− *p*. In this case we can easily compute that for *p* > 1, $$\begin{aligned} \nabla \nu (x) & = -px^{-p-1} \nonumber\\ \Delta \nu (x) & = p(p+1) x^{-p-2} \nonumber\end{aligned}$$ from which we can easily compute the quantity whose limit is taken in ([eq:Q1]) as $$\begin{aligned} x^{p(\gamma - 1/2) -2} [p(p+1) - \gamma p^2]. \nonumber\end{aligned}$$ As such, we have that condition ([eq:Q1]) holds if and only if *p* + 1 − *γ**p* > 0 and $$\begin{aligned} \label{eq:pol2} p(\gamma - 1/2) - 2 \ge 0.\end{aligned}$$ Now we shall demonstrate that we can find such *γ* for all *p* > 2. For instance, suppose that *p* = 2 + *ε*. The case *ε* ≥ 2 can be handled by just setting *γ* = 1, so suppose otherwise and set *γ* = 3/2 − *ε*/4. In this case, ([eq:pol2]) just gives *ε*/2 − *ε*2/4 ≥ 0. Moreover the expression in ([eq:pol1]) becomes 3*ε*/2 + *ε*2 > 0, completing our argument. Simulation of a Path-Space Layer and Intermediate Points ======================================================== [apx:fpttheory] In this appendix we present the methodology and algorithms required for simulating an individual proposal trajectory of (layered) killed multivariate Brownian motion, which is what is required in. Our exposition is as follows: In we present the work of, in which a highly efficient rejection sampler is developed (based on the earlier work of ) for simulating the first passage time for univariate standard Brownian motion for a given symmetric boundary, extending it to consider the case of the univariate first passage times of *d*-dimensional standard Brownian motion with non-symmetric boundaries. This construction allows us to determine an interval (given by the first, first passage time) and layer (a hypercube inscribed by the user specified univariate boundaries) in which the sample path is almost-surely constrained, and by application of the strong Markov property can be applied iteratively to find, for any interval of time, a layer (a concatenation of hypercubes) which almost-surely constrains the sample path; In we present a rejection sampler enabling the simulation of constrained univariate standard Brownian motion as developed in, at any desired intermediate point. As motivated in these intermediate points may be at some random time (corresponding to a proposed killing point of the proposed sample path), or a deterministic time (in which the sample path is extracted for inclusion within the desired Monte Carlo estimator of QSMC ([eq:occupation2])); Finally, in we present the full methodology required in Sections [s:emcfd] and [s:scale] in which we simulate multivariate Brownian motion at any desired time marginal, with *d*-dimensional hypercubes inscribing intervals of the state space in which the sample path almost surely lies. Simulating the first passage times of univariate and multivariate standard Brownian motion ------------------------------------------------------------------------------------------ To begin with we restrict our attention to the (*i*th) dimension of multivariate standard Brownian motion initialised at 0, and the first passage time of the level *θ*(*i*) (which is specified by the user). In particular we denote, $$\begin{aligned} \tau^{(i)} := \inf\{t\in\mathbbm{R}+\,:\,|W^{(i)}\_t-W^{(i)}\_0| \geq \theta^{(i)}\}.\end{aligned}$$ Recalling the self similarity properties of Brownian motion (), we can further restrict our attention to the simulation of the first passage time of univariate Brownian motion of the level 1, noting that $\tau^{(i)}\overset{\mathcal{D}}{=}\left(\theta^{(i)}\right)^2\bar{\tau}$ where, $$\begin{aligned} \bar{\tau} := \inf\{t\in\mathbbm{R}+\,:\,|W\_t-W\_0| \geq 1\},\end{aligned}$$ noting that at this level, $$\begin{aligned} \mathbbm{P}(W\_\tau=W\_0+1)=\mathbbm{P}(W\_\tau=W\_0-1)=\dfrac{1}{2}.\end{aligned}$$ Denoting by *f**τ̄* the density of *τ̄* (which cannot be evaluated point-wise), the approach outlined in for drawing random samples from *f**τ̄* is a series sampler. In particular, an accessible dominating density of *f**τ̄* is found (denoted *g**τ̄*) from which exact proposals can be made, then upper and lower monotonically convergent bounding functions are constructed (lim*n* → ∞*f**τ̄*, *n*↑ → *f**τ̄* and lim*n* → ∞*f**τ̄*, *n*↓ → *f**τ̄* such that for any $t\in\mathbbm{R}\_+$ and *ε* > 0 ∃ *n*\*(*t*, *ε*) such that ∀ *n* ≥ *n*\*(*t*, *ε*) we have *f**τ̄*, *n*↑(*t*) − *f**τ̄*, *n*↓(*t*) < *ε*), and then evaluated to sufficient precision such that acceptance or rejection can be made while retaining exactness. A minor complication arises in that no known, tractable dominating density is uniformly efficient on $\mathbbm{R}\_+$, and furthermore no single representation of the bounding function converge monotonically to the target density point-wise on $\mathbbm{R}\_+$. As such, the strategy deployed by is to exploit a dual representation of *f**τ̄* given by in order to construct a hybrid series sampler, using one representation of *f**τ̄* for the construction of a series sampler on the interval (0, *t*1] and the other representation for the interval [*t*2, ∞) (fortunately we have *t*1 > *t*2, and so we have freedom to choose a threshold *t*\* ∈ [*t*2, *t*1] in which to splice the series samplers). In particular, as shown in *f**τ̄*(*t*) = *π*∑*k* = 0∞( − 1)*k**a**k*(*t*) where, the elements of the two expansions are given by $$a\_k(t) = \left\lbrace \begin{array}{ll} \left(\dfrac{2}{\pi t} \right)^{{3}/{2}} \left(k+\frac{1}{2} \right) \exp \left\{-\dfrac{2}{t}(k+\frac{1}{2})^2\right\}, & \quad\quad(1) \\\*[10pt] \left(k+\frac{1}{2} \right)\exp \left\{-\dfrac{1}{2}(k+\frac{1}{2})^2 \pi^2 t\right\}, & \quad\quad (2) \label{eq:alt1and2} \end{array} \right.$$ and so by consequence upper and lower bounding sequences can be constructed by simply taking either representation and truncating the infinite sum to have an odd or even number of terms respectively (and thresholding to between zero and the proposal, *g**τ̄*, introduced below). More precisely, $$\begin{aligned} f^{\downarrow}\_{\bar{\tau},n}(t) :=\left(\pi\sum\_{k=0}^{2n+1} (-1)^k a\_k(t)\right)\_+, & \quad \text{}\quad f^{\uparrow}\_{\bar{\tau},n}(t) :=\left[\pi\sum\_{k=0}^{2n} (-1)^k a\_k(t)\right]\wedge g\_{\bar{\tau}}(t).\end{aligned}$$ As shown in Lemma 1 of, the bounding sequences based on the representation of *f**τ̄*(*t*) in ([eq:alt1and2].1) are monotonically converging for *t* ∈ (0, 4/log(3)], and for ([eq:alt1and2].2) monotonically converging for *t* ∈ [log(3)/*π*2, ∞). After choosing a suitable threshold *t*\* ∈ [4/log(3), log(3)/*π*2] for which to splice the series samplers, then by simply taking the first term in each representation of *f**τ̄*(*t*) a dominating density can be constructed as follows, $$\begin{aligned} f\_{\bar{\tau}} (t) \leq g\_{\bar{\tau}} (t) \propto\underbrace{\dfrac{2}{\pi t^{3/2}}\exp \left\{-\dfrac{1}{2t} \right\}\cdot \mathbbm{1}\_{t \leq t^\*}}\_{\propto g^{(1)}\_{\bar{\tau}}(t)} + \underbrace{\dfrac{\pi}{2} \exp \left\{-\dfrac{\pi^2 t}{8} \right\} \cdot \mathbbm{1}\_{t \geq t^\*}}\_{\propto g^{(2)}\_{\bar{\tau}}(t)}. \label{eq:gtau}\end{aligned}$$ empirically optimises the choice of *t*\* = 0.64 so as to minimise the normalising constant of ([eq:gtau]). With this choice *M*1 :  = ∫*g**τ̄*(1)(*t*) d*t* ≈ 0.422599 (to 6 d.p.) and *M*2 :  = ∫*g**τ̄*(2)(*t*) d*t* ≈ 0.578103 (to 6 d.p.), and so we have a normalising constant *M* = *M*1 + *M*2 ≈ 1.000702 (to 6 d.p.) which equates to the expected number of proposal random samples drawn from *g**τ̄* before one would expect an accepted draw (the algorithmic ‘outer loop’). Now considering the iterative algorithmic ‘inner loop’ – in which the bounding sequences are evaluated to precision sufficient to determine acceptance or rejection – as shown in, the exponential convergence of the sequences ensures that the number of iterations required is uniformly bounded in expectation by 3. Simulation from *g**τ̄* is possible by either simulating *τ̄* ∼ *g**τ̄*(1) with probability *M*1/*M*, else *τ̄* ∼ *g**τ̄*(2). Simulating *τ̄* ∼ *g**τ̄*(1) can be achieved by noting that, for $t \sim g\_{\bar\tau}^1$, $t\overset{\mathcal{D}}{=}t^\*+8X/\pi^2$, where *X* ∼ Exp(1). Simulating *τ̄* ∼ *g**τ̄*(2) can be achieved by noting that as outlined in, for $t \sim g\_{\bar\tau}^2$, $t\overset{\mathcal{D}}{=}t^\*/(1+t^\*X)^2$, where $X:=\inf\_{i}\{\left\{X\_i\right\}^\infty\_{i=1}\overset{\text{iid}}{\sim}{\text{Exp}}(1):(X\_i)^2\leq 2X\_{i+1}/t^\*,(i-1)/2\in\mathbbm{Z}\}$. A summary of the above for simulating jointly the first passage time and location of the *i*th dimension of Brownian motion of the threshold level *θ*(*i*) is provided in. [h] [alg:devroyefpg] 1. Input *W*0(*i*) and *θ*(*i*). 2. *g**τ̄*: Simulate *u* ∼ U[0, 1], [alg:st:proposal] 1. *g**τ̄*(1): If *u* ≤ *M*1/*M*, then simulate *X* ∼ Exp(1) and set *τ̄* :  = *t*\* + 8*X*/*π*2. 2. *g**τ̄*(2): If *u* > *M*1/*M*, then set $X:=\inf\_{i}\{\left\{X\_i\right\}^\infty\_{i=1}\overset{\text{iid}}{\sim}{\text{Exp}}(1):(X\_i)^2\leq 2X\_{i+1}/t^\*,(i-1)/2\in\mathbbm{Z}\}$ and set *τ̄* :  = *t*\*/(1 + *t*\**X*)2. 3. *u*: Simulate *u* ∼ U[0, 1] and set *n* = 0. 4. *f**τ̄*, *n*⋅: While *u* ⋅ *g**τ̄*(*τ̄*) ∈ (*f**τ̄*, *n*↓(*τ̄*), *f**τ̄*, *n*↑(*τ̄*)), set *n* = *n* + 1. 5. *f**τ̄*: If *u* ⋅ *g**τ̄*(*τ̄*) ≤ *f**τ̄*, *n*↓(*τ̄*) accept, else reject and return to. 6. *τ*: Set *τ* :  = (*θ*(*i*))2*τ̄*. 7. *W**τ*(*i*): With probability 1/2 set *W**τ*(*i*) = *W*0(*i*) + *θ*(*i*), else set *W**τ*(*i*) = *W*0(*i*) − *θ*(*i*). 8. Return (*τ*, *W**τ*(*i*)). Note that generalising to the case where we are interested in the first passage time of Brownian motion of a non-symmetric barrier, in particular for $\ell^{(i)},\upsilon^{(i)}\in\mathbbm{R}\_+$, $$\begin{aligned} \tau^{(i)} := \inf\{t\in\mathbbm{R}+\,:\,W^{(i)}\_t-W^{(i)}\_0 \notin (W^{(i)}\_0-\ell^{(i)},W^{(i)}\_0+\upsilon^{(i)})\},\end{aligned}$$ is trivial algorithmically. In particular, using the strong Markov property we can iteratively apply setting *θ*(*i*) :  = min(ℓ(*i*), *υ*(*i*)) and simulating intermediate first passage times of lesser barriers, halting whenever the desired barrier is attained. We suppress this (desirable) flexibility in the remainder of the paper to avoid the resulting notational complexity. Simulating intermediate points of multivariate standard Brownian motion conditioned on univariate first passage times --------------------------------------------------------------------------------------------------------------------- Clearly in addition to being able to simulate the first passage times of a single dimension of Brownian motion, we want to be able simulate the remainder of the dimensions of Brownian motion at that time, or indeed the sample path at times other than its first passage times. As the dimensions of standard Brownian motion are independent (and so Brownian motion can be composed by considering each dimension separately), we can restrict our attention to simulating a single dimension of the sample path for an intermediate time *q* ∈ [*s*, *τ*] given *W**s*, the extremal value *W**τ*, and constrained such that ∀*u* ∈ [*s*, *τ*], *W**u* ∈ [*W**s* − *θ*, *W**s* + *θ*]. Furthermore, as we are interested in only the forward simulation of Brownian motion, by application of the strong Markov property we need consider only the simulation of a single intermediate point (although by application of simulation at times conditional on future information is possible). To proceed, note that (as outlined in ) the law of a univariate Brownian motion sample path in the interval [*s*, *τ*] (where *s* < *τ*) initialised at (*s*, *W**s*) and constrained to attain its extremal value at (*τ*, *W**τ*), is simply the law of a three-dimensional Bessel bridge. We require the additional constraint that ∀*u* ∈ [*s*, *τ*], *W**u* ∈ [*W**s* − *θ*, *W**s* + *θ*], which can be imposed in simulation by deploying a rejection sampling scheme in which a Bessel bridge sample path is simulated at a single required point (as above) and accepted if it meets the imposed constraint at either side of the simulated point, and rejected otherwise. As presented in, the law of a Bessel bridge sample path (parametrised as above) coincides with that of an appropriate time rescaling of three independent Brownian bridge sample paths of unit length conditioned to start and end at the origin (denoted by {*b*(*i*)}*i* = 13). Supposing we require the realisation of a Bessel bridge sample path at some time *q* ∈ [*s*, *τ*], then by simply realising three independent Brownian bridge sample paths at that time marginal ({*b**q*(*i*)}*i* = 13), we have, $$\begin{aligned} W\_q & = W\_s + (-1)^{\mathbbm{1}(W\_\tau<W\_s)} \sqrt{(\tau-s)\left[\left(\dfrac{\theta(\tau-q)}{(\tau-s)^{3/2}} + b^{(1)}\_q\right)^2 + (b^{(2)}\_q)^2 +(b^{(3)}\_q)^2\right]}.\end{aligned}$$ The method by which the proposed Bessel bridge intermediate point is accepted or rejected (recall, to impose the constraint that ∀*u* ∈ [*s*, *τ*], *W**u* ∈ [*W**s* − *θ*, *W**s* + *θ*]) is non-trivial as there does not exist a closed form representation of the required probability (which we will denote in this appendix by *p*). Instead, as shown in, a representation for *p* can be found as the product of two infinite series, which as a consequence of this form cannot be evaluated directly in order to make the typical acceptance-rejection comparison (i.e. determining whether *u* ≤ *p* or *u* > *p*, where *u* ∼ U[0, 1]). The strategy we deploy to retain exactness and accept with the correct probability *p* is that of a retrospective Bernoulli sampler. In particular, in we construct monotonically convergent upper and lower bounding probabilities (*p**n*↑ and *p**n*↓ respectively) with the property that lim*n* → ∞*p**n*↑ → *p* and lim*n* → ∞*p**n*↓ → *p* such that for any *u* ∈ [0, 1] and *ε* > 0 ∃ *n*\*(*t*) such that ∀ *n* ≥ *n*\*(*t*) we have *p**n*↑ − *p**n*↓ < *ε*, which are then evaluated to sufficient precision to make the acceptance-rejection decision, taking almost surely finite computational time. [thm:besselaccprob] The probability that a three-dimensional Bessel bridge sample path $W\sim \left.\mathbbm{W}^{W\_s,W\_\tau}\_{s,\tau}\,{\middle\vert}\,(W\_\tau, W\_q)\right.$ for *s* < *q* < *τ* attaining its boundary value at (*τ*, *W**τ*), remains in the interval [*W**s* − *θ*, *W**s* + *θ*], can be represented by the following product of infinite series (where we denote by $m:=\mathbbm{1}(W\_\tau>W\_s)-\mathbbm{1}(W\_\tau<W\_s)$), $$\begin{aligned} & \mathbbm{P}\left(W\_{[s,\tau]}\in [W\_s\!-\!\theta,W\_s\!+\!\theta]|W\_s,W\_q,W\_\tau\right) \nonumber\\ & \quad\quad\quad = \underbrace{\left(\dfrac{1 -\sum^{\infty}\_{j=1}\big[\varsigma\_{q-s}(j;W\_s-W\_q,\theta)-\varphi\_{q-s}(j;W\_s-W\_q,\theta)\big]}{1-\exp\left\{-2\theta[m(W\_s-W\_q)+\theta]/(q-s)\right\}} \right)}\_{=:p\_1} \nonumber\\ & \quad\quad\quad\quad\quad\quad \cdot \underbrace{\left(1 + \sum^{\infty}\_{j=1}\big[ \psi\_{\tau-q}(j;W\_q-W\_\tau,\theta,m) + \chi\_{\tau-q}(j;W\_q-W\_\tau,\theta,m)\big] \right)}\_{=:p\_2}, \end{aligned}$$ where, $$\begin{aligned} \varsigma\_{\Delta}(j;\delta,\theta) & := 2\cdot\exp\left\{-\dfrac{2\theta^2(2j-1)^2}{\Delta}\right\}\cdot\cosh\left(\dfrac{2(2j-1)\theta \delta}{\Delta}\right),\end{aligned}$$ $$\begin{aligned} \varphi\_{\Delta}(j;\delta,\theta) & := 2\cdot\exp\left\{-\dfrac{8\theta^2j^2}{\Delta}\right\}\cdot\cosh\left\{\dfrac{4\theta\delta j}{\Delta}\right\},\end{aligned}$$ $$\begin{aligned} \psi\_{\Delta}(j;\delta,\theta,m) := \chi\_{\Delta}(j;\delta,\theta,-m) & := \dfrac{(4\theta j +m\delta)}{m\delta}\cdot\exp\left\{-\dfrac{4\theta j}{\Delta}(2\theta j +m\delta)\right\}.\end{aligned}$$ Begin by noting that that the strong Markov property allows us to decompose our required probability as follows, $$\begin{aligned} &\mathbbm{P}\left(W\_{[s,\tau]}\in[W\_s-\theta,W\_s+\theta]|W\_s,W\_q,W\_\tau\right) \nonumber\\ &\,\, = \underbrace{\mathbbm{P}\left(W\_{[s,q]}\in[W\_s-\theta,W\_s+\theta]|W\_s,W\_q\right)}\_{p\_1}\cdot \underbrace{\mathbbm{P}\left(W\_{[q,\tau]}\in[W\_s-\theta,W\_s+\theta]|W\_q,W\_\tau\right)}\_{p\_2}.\end{aligned}$$ Relating the decomposition to the statement of the theorem, *p*1 follows directly from the parametrisation given and the representation in of the result in. *p*2 similarly follows from the representation found in. [cor:besselacrej] Letting $p:=\mathbbm{P}\left(W\_{[s,\tau]}\in [W\_s\!-\!\theta,W\_s\!+\!\theta]\right)$, monotonically convergent upper and lower bounding probabilities (*p**n*↑ and *p**n*↓ respectively) with the property that lim*n* → ∞*p**n*↑ → *p* and lim*n* → ∞*p**n*↓ → *p* can be found (where $n\_0:=\lceil\sqrt{(\tau-q)+4\theta^2}/4\theta\rceil$), $$\begin{aligned} & p^{\downarrow}\_n := \left(\dfrac{1 -\sum^{n}\_{j=1}\varsigma\_{q-s}(j;W\_s-W\_q,\theta)+\sum^{n-1}\_{j=1}\varphi\_{q-s}(j;W\_s-W\_q,\theta)}{1-\exp\left\{-2\theta[m(W\_s-W\_q)+\theta]/(q-s)\right\}} \right)\nonumber\\ & \quad\quad\quad \cdot \left(1 + \sum^{n\_0+n}\_{j=1}\psi\_{\tau-q}(j;W\_q-W\_\tau,\theta,m) + \sum^{n\_0+n-1}\_{j=1}\chi\_{\tau-q}(j;W\_q-W\_\tau,\theta,m)\big] \right), \label{eq:pdownarrow}\end{aligned}$$ $$\begin{aligned} & p^{\uparrow}\_n := \left(\dfrac{1 -\sum^{n}\_{j=1}\varsigma\_{q-s}(j;W\_s-W\_q,\theta)+\sum^{n}\_{j=1}\varphi\_{q-s}(j;W\_s-W\_q,\theta) }{1-\exp\left\{-2\theta[m(W\_s-W\_q)+\theta]/(q-s)\right\}}\right)\nonumber\\ & \quad\quad\quad \cdot \left(1 + \sum^{n\_0+n}\_{j=1}\psi\_{\tau-q}(j;W\_q-W\_\tau,\theta,m) + \sum^{n\_0+n}\_{j=1}\chi\_{\tau-q}(j;W\_q-W\_\tau,\theta,m)\big] \right). \label{eq:puparrow}\end{aligned}$$ Furthermore we have $$\begin{aligned} \dfrac{p^\uparrow\_n-p^\downarrow\_n}{p^\uparrow\_{n-1}-p^\downarrow\_{n-1}} =: r\_n \leq r\in (0,1), \label{eq:geobound}\end{aligned}$$ and so, $$\begin{aligned} \bar{K} := \sum^\infty\_{i=1} |p^\uparrow\_i-p^\downarrow\_i| = (p^\uparrow\_1-p^\downarrow\_1) + \sum^\infty\_{i=2} \prod\_{j=2}^i r\_j \leq \sum^\infty\_{i=0} r^i = \dfrac{1}{1-r} <\infty. \label{eq:geosum}\end{aligned}$$ The summations in the left hand brackets of the sequences ([eq:pdownarrow]) and ([eq:puparrow]) follows from and. The summations in the right hand brackets of the sequences ([eq:pdownarrow]) and ([eq:puparrow]), and the necessary condition on *n*0, follows from. The validity of the product form of ([eq:pdownarrow]) and ([eq:puparrow]) follows from. The bound on the ratio of subsequent bound ranges of *p* in ([eq:geobound]) follows from the exponential decay in *n* of $\varsigma(n)$, *φ*(*n*), *ψ*(*n*) and *χ*(*n*) of, and as shown in the proof of and. ([eq:geosum]) follows directly from ([eq:geobound]). Having established and we can now construct a (retrospective) rejection sampler in which we simulate *W**q* (as per the law of a Bessel bridge) and, by means of an algorithmic loop in which the bounding sequences of the acceptance probability are evaluated to sufficient precision, we make the determination of acceptance or rejection. This is summarised in, further noting that although the embedded loop is of random length, by we know that it halts in finite expected time (*K̄* can be interpreted as the expected computational cost of the nested loop, noting that $\mathbbm{E}[\text{iterations}] := \sum^\infty\_{i=0} i\mathbbm{P}(\text{halt at step i}) = \sum^\infty\_{i=0} \mathbbm{P}(\text{halt at step i or later}) =\bar{K} $). [h] [alg:qWqsim] 1. {*b**q*(*i*)}*i* = 13: Simulate $b^{(1)}\_q,b^{(2)}\_q,b^{(3)}\_q \overset{\text{iid}}{\sim} {\text{N}}\left(0,\dfrac{|\tau-q|\cdot|q-s|}{(\tau-s)^2}\right)$.[alg:qWqsim:proposal] 2. *W**q*: Set $W\_{q}:=W\_\tau + (-1)^{\mathbbm{1}(W\_\tau<W\_s)} \sqrt{(\tau-s)\left[\left(\dfrac{\theta(\tau-q)}{(\tau-s)^{3/2}} + b^{(1)}\_q\right)^2 + (b^{(2)}\_q)^2 +(b^{(3)}\_q)^2\right]}$. 3. *u*: Simulate *u* ∼ U[0, 1] and set *n* = 1. 4. *p*⋅↓, *p*⋅↑: While $u\notin [p^\downarrow\_n,p^\uparrow\_n]$, set *n* = *n* + 1. 5. *p*: If *u* ≤ *p**n*↓ accept, else reject and return to. 6. Return (*q*, *W**q*). Simulation of a single trajectory of constrained Brownian motion ---------------------------------------------------------------- We now have the constituent elements for, in which we simulate multivariate Brownian motion at any desired time marginal, with *d*-dimensional hypercubes inscribing intervals of the state space in which the sample path almost surely lies (layers, more formally defined in ). Recall from that the killing times are determined by a random variable whose distribution depends upon the inscribed layers, and so the presentation of necessitates a loop in which the determination of whether the stopping time occurs in the interval is required. We require the user-specified vector **\theta** in order to determine the default hypercube inscription size. In practice, as with other MCMC methods, we might often apply a preconditioning matrix to the state space before applying the algorithm. Further note that, due to the strong Markov property, it is user preference whether this algorithm is run in its entirety for every required time marginal, or whether it resets layer information whenever any one component breaches its boundary and re-initialises from that time on according to. [h] [alg:scaleiterate] 1. Input **W***s* and **\theta**. 2. **\tau**: For *i* ∈ {1, …, *d*}, simulate (*τ*(*i*), *W**τ*(*i*)) as per. 3. *τ̂*: Set *τ̂* :  = inf*i*{*τ*(*i*)}, set *j* :  = {*i* ∈ {1, …, *d*} : *τ*(*i*) = *τ̂*}. [algtauhat] 4. *t*: If required, simulate *t* as outlined in. 5. *t*: If $t\notin[s,\hat{\tau}]$, 1. (*τ̂*, *W**τ̂*( ⋅ )): For *i* ∈ {1, …, *d*} \ *j*, simulate (*τ̂*, *W**τ̂*(*i*)) as per. 2. (*τ*(*j*), *W**τ*(*j*)): Simulate (*τ*(*j*), *W**τ*(*j*)) as per. [alg:restart] 3. *s*: Set *s* :  = *τ̂*, and return to Step [algtauhat]. 6. (*t*, *W**t*( ⋅ )): For *i* ∈ {1, …, *d*}, simulate (*t*, *W**t*(*i*)) as per. 7. Return (*t*, *W**t*). Path-space Rejection Sampler (PRS) for *μ**T* ============================================= A path-space rejection sampler for *μ**T* can be constructed by drawing from Brownian motion measure, $\mathbf{X}\sim\mathbbm{W}^{\mathbf{x}}\_T$, accepting with probability *P*(**X**) given by $$\begin{aligned} {P}(\mathbf{X}) & = \underbrace{\exp\left\{\Phi T -\sum^{n\_R}\_{i=1} L^{(i)}\_{\mathbf{X}}\cdot[(\tau\_i\wedge T)\!-\!\tau\_{i-1}]\right\}}\_{=:P^{(1)}(\mathbf{X}) \in [0,1]} \cdot \prod^{n\_R}\_{i=1}\Bigg[\underbrace{\exp\left\{-\!\!\int^{\tau\_i\wedge T}\_{\tau\_{i-1}}\!\left(\phi(\mathbf{X}\_s)\!-\!L^{(i)}\_{\mathbf{X}}\right){\,\mathrm{d}}s\right\}}\_{=:P^{(2,i)}(\mathbf{X})}\Bigg] \label{eq:p1decomp} \\ & = \prod^{n\_R}\_{i=1} \Bigg[\underbrace{\exp\left\{(\Phi - L^{(i)}\_{\mathbf{X}})\cdot[(\tau\_i\wedge T)\!-\!\tau\_{i-1}]\right\}}\_{=:P^{(1,i)}(\mathbf{X}) \in [0,1]} \cdot \underbrace{\exp\left\{-\int^{\tau\_i\wedge T}\_{\tau\_{i-1}}\!\left(\phi(\mathbf{X}\_s)\!-\!L^{(i)}\_{\mathbf{X}}\right){\,\mathrm{d}}s\right\}}\_{=:P^{(2,i)}(\mathbf{X})}\Bigg]. \label{eq:p1decomp2}\end{aligned}$$ The algorithmic pseudo-code for this approach is thus presented in. [h] [alg:altuea] 1. Input: **X**0. 2. *R*: Simulate layer information *R* ∼ R as per. [alg:altuea:start][alg:cuea:layer] 3. *P*(1): With probability 1 − exp{Φ*T* − ∑*i* = 1*n**R**L***X**(*i*) ⋅ [(*τ**i* ∧ *T*)  −  *τ**i* − 1]} reject and return to. [alg:cuea:pre] 4. *n**R*: For *i* in 1 → *n**R*, 1. $\mathbbm{U}^{(i)}\_R$: Set *j* = 0, *κ**i* = 0, *ξ*0(*i*) :  = *τ**i* − 1 and *E*1(*i*) ∼ Exp(*U***X**(*i*) − *L***X**(*i*)). While ∑*j**E**j*(*i*) < [(*τ**i* ∧ *T*)  −  *τ**i* − 1], 1. *ξ**j*(*i*): Set *j* = *j* + 1 and *ξ**j*(*i*) = *ξ**j* − 1(*i*) + *E**j*(*i*). 2. **X***ξ**j*(*i*): Simulate **X***ξ**j*(*i*) ∼ MVN(**X***ξ**j* − 1(*i*), (*ξ**j*(*i*) − *ξ**j* − 1(*i*)))∣*R***X**(*i*). 3. *P*(2, *i*, *j*): With probability $1-[U^{(i)}\_\mathbf{X}\!-\!\phi\big(\mathbf{X}\_{\xi^{(i)}\_j}\big)]/[U^{(i)}\_\mathbf{X}\!-\!L^{(i)}\_\mathbf{X}]$, reject path and return to. 4. *E**j* + 1(*i*): Simulate *E**j* + 1(*i*) ∼ Exp(*U***X**(*i*) − *L***X**(*i*)). 2. **X***τ**i* ∧ *T*: Simulate **X***τ**i* ∧ *T* ∼ MVN(**X***ξ**j*(*i*), [(*τ**i* ∧ *T*) − *ξ**j*(*i*)])∣*R***X**(*i*). Crucially, determination of acceptance is made using only a path *skeleton* (as introduced in, a path *skeleton* is a finite-dimensional realisation of the sample path, including a *layer* constraining the sample path, sufficient to recover the sample path at any other finite collection of time points without error as desired). The PRS for *μ**T* outputs the skeleton composed of all intermediate simulations, $$\begin{aligned} \mathcal{S}\_\text{PRS}\left(\mathbf{X}\right) :=\left\{\mathbf{X}\_0,\left(\left(\xi^{(i)}\_j,\mathbf{X}\_{\xi^{(i)}\_j} \right)^{\kappa\_i}\_{j=1},R^{(i)}\_\mathbf{X}\right)^{n\_R}\_{i=1}\right\}, \label{eq:altueaskel}\end{aligned}$$ which is sufficient to simulate any finite-dimensional subset of the remainder of the sample path (denoted by **X**rem) as desired without error (as outlined in and ), $$\begin{aligned} {{\mathbf{X}}^{\text{rem}}}\_{(0,T)} & \sim\left.\otimes^{n\_R}\_{i=1}\left(\otimes^{\kappa\_i}\_{j=1} \mathbbm{W}^{\mathbf{X}[\xi^{(i)}\_{j-1},\xi^{(i)}\_j]}\_{\xi^{(i)}\_{j-1},\xi^{(i)}\_j}\right){\middle\vert}R^{(i)}\_\mathbf{X}\right..\end{aligned}$$ Killed Brownian Motion (KBM) ============================ In we detailed an approach to simulate the killing time and location, (*τ̄*, **X***τ̄*), for killed Brownian motion. To avoid unnecessary algorithmic complexity, note that we can recover the pair (*τ̄*, **X***τ̄*) by a simple modification of in which we set ∀*i* *L***X**(*i*) :  = Φ and return the first rejection time. This is presented in. A variant in which *L***X**(*i*) is incorporated would achieve greater efficiency, but is omitted for notational clarity. [h] [alg:kbm] 1. Initialise: Set *i* = 1, *j* = 0, *τ*0 = 0. Input initial value **X**0. 2. *R*: Simulate layer information *R***X**(*i*) ∼ R as per, obtaining *τ**i*, *U***X**(*i*). [alg:kbm:layer] 3. *E*: Simulate *E* ∼ Exp(*U***X**(*i*) − Φ). [alg:kbm:loop] 4. *ξ**j*: Set *j* = *j* + 1 and *ξ**j* = (*ξ**j* − 1 + *E*) ∧ *τ**i*. 5. **X***ξ**j*: Simulate **X***ξ**j* ∼ MVN(**X***ξ**j* − 1, (*ξ**j* − *ξ**j* − 1))∣*R***X**(*i*). 6. *τ**i*: If *ξ**j* = *τ**i*, set *i* = *i* + 1 and return to. 7. *P*: With probability $[U^{(i)}\_\mathbf{X}\!-\!\phi\big(\mathbf{X}\_{\xi\_j}\big)]/[U^{(i)}\_\mathbf{X}-\Phi]$ return to. [alg:omit1] 8. (*τ̄*, **X***τ̄*): Return (*τ̄*, **X***τ̄*) = (*ξ**j*, **X***ξ**j*), *i**τ̄* = *i*, *j**τ̄* = *j*. [alg:omit2] As in the PRS for *μ**T* presented in, in KBM () we can recover in the interval [0, *τ̄*) the remainder of the sample path as desired without error as follows (where for clarity we have suppressed full notation, but can be conducted as described in ), $$\begin{aligned} \mathcal{S}\_\text{KBM}\left(\mathbf{X}\right) :=\left\{\mathbf{X}\_0,(\xi\_j,\mathbf{X}\_{\xi\_j})^{j\_{\bar{\tau}}}\_{j=1},(R^{(i)}\_\mathbf{X})^{i\_{\bar{\tau}}}\_{i=1}\right\}, & \quad\quad\quad {{\mathbf{X}}^{\text{rem}}}\_{(0,T)} \sim\mathbbm{W} | \mathcal{S}\_\text{KBM}. \label{eq:kbmskel}\end{aligned}$$ Rejection Sampling based QSMC Algorithm ======================================= In we considered the embedding of IS-KBM of within SMC. A similar embedding for the rejection sampling variant (KBM) of is considered here as the probability of the killed Brownian motion trajectory of remaining alive becomes arbitrarily small as diffusion time increases. As such, if one wanted to approximate the law of the process conditioned to remain alive until large *T* it would have prohibitive computational cost. Considering the KBM algorithm presented in, in which we simulate trajectories of killed Brownian motion, the most natural embedding of this within an SMC framework is to assign each particle constant un-normalised weight while alive, and zero weight when killed. Resampling in this framework simply consists of sampling killed particles uniformly at random from the remaining alive particle set. The manner in which we have constructed allows us to conduct this resampling in continuous time, and so we avoid the possibility of at any time having an alive particle set of size zero. We term this approach (Continuous Time) Rejection Quasi-Stationary Monte Carlo (R-QSMC), and present it in. In we denote by *m*(*k*) as a count of the number of killing events of particle trajectory *k* in the time elapsed until the *m*th iteration of the algorithm. [h] [alg:ctr-qsmc] 1. **Initialisation Step (*m* = 0)** 1. Input: Starting value, $\hat{\mathbf{x}}$, number of particles, *N*. 2. **X**0( ⋅ ): For *k* in 1 to *N* set $\mathbf{X}^{(1:N)}\_{t\_0}=\hat{\mathbf{x}}$ and *w**t*0(1 : *N*) = 1/*N*. 3. *τ̄*1( ⋅ ): For *k* in 1 to *N*, simulate $\!\left.\left(\bar{\tau}^{(k)}\_1,\mathbf{X}^{(k)}\_{\bar{\tau}\_1}\right){\middle\vert}\left(t\_0^{(k)},\mathbf{X}^{(k)}\_{t\_0}\right)\right.\!$ as per. [r-scale:sub1] 2. **Iterative Update Steps (*m* = *m* + 1)** 1. $\bar{\bar{\tau}}\_m$: Set $\bar{\bar{\tau}}\_m:=\inf\{\{\bar{\tau}^{(k)}\_{m(k)}\}^N\_{k=1}\}$, $\bar{k}:=\{k:\bar{\bar{\tau}}\_m=\bar{\tau}^{(k)}\_{m(k)}\}$. 2. *K*: Simulate *K* ∼ U{{1, …, *n*} \ *k̄*}. 3. $\mathbf{X}^{(\cdot)}\_{\bar{\bar{\tau}}\_m}$: Simulate $\mathbf{X}^{(\bar{k})}\_{\bar{\bar{\tau}}\_m}\sim \mathbbm{W}| \mathcal{S}^{(K)}\_\text{KBM}$ as given by ([eq:kbmskel]) and as per. 4. $\bar{\bar{\tau}}\_{m+1}$: Simulate $\!\left.\left(\bar{\tau}^{(\bar{k})}\_{m(\bar{k})+1},\mathbf{X}^{(\bar{k})}\_{\bar{\tau}\_{m(\bar{k})+1}}\right){\middle\vert}\left(\bar{\bar{\tau}}\_m,\mathbf{X}^{(\bar{k})}\_{\bar{\bar{\tau}}\_m}\right)\right.\!$ as per. [r-scale:sub2] Iterating the R-QSMC algorithm beyond some time *t*\* at which point we believe we have obtained convergence, and halting at time *T* > *t*\*, we can approximate the law of the killed process by the weighted occupation measures of the trajectories (where ∀*t* *w**t*( ⋅ ) = 1/*N*), $$\begin{aligned} \pi(\!{\,\mathrm{d}}\mathbf{x}) \approx \hat{\pi}(\!{\,\mathrm{d}}\mathbf{x}) & := \dfrac{1}{T-t^\*}\int^T\_{t^\*} \sum^N\_{k=1}w^{(k)}\_t\cdot \delta\_{\mathbf{X}^{(k)}\_{t}}(\!{\,\mathrm{d}}\mathbf{x}) {\,\mathrm{d}}t. \label{eq:occupation1}\end{aligned}$$ In some instances the tractable nature of Brownian motion will admit an explicit representation of ([eq:occupation1]). If not, one can simply sample the trajectories exactly at equally spaced points to find an unbiased approximation of ([eq:occupation1]), by means detailed in and. In particular, if we let *t*0 :  = 0 < *t*1 < … < *t**m* :  = *T* such that *t**i* − *t**i* − 1 :  = *T*/*m*, then we can approximate the law of the killed process as we did in ([eq:occupation2]), where *w**t* \*  : *T*(1 : *N*) = 1/*N*. Rejection sampling Scalable Langevin Exact (R-ScaLE) algorithm ============================================================== In we noted that the survival probability of a proposal Brownian motion sample path was related to the estimator *P*(**X**) of and in ([s:repest]) where we develop a replacement estimator. The construction of control variates in allows us to construct the replacement estimator such that it has good scalability properties. In a similar fashion to the embedding of this estimator within QSMC () resulting in ScaLE (), we can embed this estimator with the rejection sampling variant R-QSMC () resulting in the *Rejection Scalable Langevin Exact algorithm (R-ScaLE)* which we present in. Note as presented in we may also be concerned with the absolute growth of $\tilde{\Phi}$ (relative to Φ) as a function of *n* in order to study its computational complexity. Note however, as remarked upon in, if this growth is not favourable one can modify to incorporate the additional path-space bound *L̃***X**(*i*) for each layer. Details of this modification are omitted for notational clarity. [h] [alg:subscale3] [alg:r-scale] 1. Choose $\hat{\mathbf{x}}$ and compute $\nabla\log\pi(\hat{\mathbf{x}})$, $ \Delta \log\pi(\hat{\mathbf{x}})$. $\tilde{\Phi}$. 2. On calling 1. Replace Φ with $\tilde{\Phi}$. 2. Replace *U***X**(*i*) in with *Ũ***X**(*i*). 3. Replace with: Simulate $I,J\overset{\text{iid}}{\sim} {\text{U}}\{0, \ldots,n\}$, and with probability $[\tilde{U}^{(i)}\_\mathbf{X}\!-\!\tilde{\phi}\big(\mathbf{X}\_{\xi\_j}\big)]/[\tilde{U}^{(i)}\_\mathbf{X}-\Phi]$ return to. 3. As Step [r-scale:sub1]. Discrete Time Sequential Monte Carlo Construction ================================================= Considering the discrete time system with state space ${\ensuremath{E}\_{k}\xspace} = ({C}(h(k-1),hk],\mathcal{Z}\_k)$ at discrete time *k*, with the process denoted ${\ensuremath{\mathfrak{X}\_{k}}\xspace} = (X\_{(h(k-1),hk]},{\ensuremath{\mathfrak{Z}\_{k}}\xspace})$ in which the auxiliary variables ${\ensuremath{\mathfrak{Z}\_{k}}\xspace}$ take values in some space Z*k*. The ScaLE Algorithm, with resampling conducted deterministically at times *h*, 2*h*, … coincides exactly with the mean field particle approximation of a discrete time Feynman-Kac flow, in the sense and notation of, with transition kernel $$M\_k({\ensuremath{\mathfrak{X}\_{k-1}}\xspace},d{\ensuremath{\mathfrak{X}\_{k}}\xspace}) = \mathbb{W}^{X\_{h(k-1)}}\_{h(k-1),hk}(dX\_{(h(k-1),hk]}) Q\_k(X\_{(h(k-1),hk]},d{\ensuremath{\mathfrak{Z}\_{k}}\xspace})$$ and a potential function $G\_k({\ensuremath{\mathfrak{X}\_{k}}\xspace})$, which is left intentionally unspecified to allow a broad range of variants of the algorithm to be included, the property which it must possess to lead to a valid form of ScaLE Algorithm is specified below. Allowing $$\overline{\mathbb{W}}^{x}\_{0,hk}({\ensuremath{\mathfrak{X}\_{1:k}}\xspace}) = \mathbb{W}\_x^{0,hk}(dX\_{0:hk}) \prod\_{i=1}^k Q\_i(X\_{(h(i-1),hi]},d{\ensuremath{\mathfrak{Z}\_{i}}\xspace})$$ and specifying an extended version of the killed process via $$\frac{d\overline{\mathbb{K}}^x\_{0,hk}}{d\overline{\mathbb{W}}^x\_{0,hk}} ({\ensuremath{\mathfrak{X}\_{1:k}}\xspace}) \propto \prod\_{i=1}^k G({\ensuremath{\mathfrak{X}\_{i}}\xspace}).$$ The validity of such a ScaLE Algorithm depends upon the following identity holding: $$\frac{d\mathbb{K}^x\_{0,hk}}{d\mathbb{W}^x\_{0,hk}}(X\_{0:hk}) \propto \mathbb{E}\_{\mathbb{W}^x\_{0,hk}}\left[\left.\prod\_{i=1}^k G\_i({\ensuremath{\mathfrak{X}\_{i}}\xspace})\right|X\_{0:hk}\right].$$ It is convenient to define some simplifying notation. We define the law of a discrete time process (in which is embedded a continuous time process taking values in *C*[0, ∞)): $$\overline{\mathbb{W}}^x(d{\ensuremath{\mathfrak{X}\_{}}\xspace}) = \overline{\mathbb{W}}^x\_{0,h}(d{\ensuremath{\mathfrak{X}\_{1}}\xspace}) \prod\_{k=1}^\infty \overline{\mathbb{W}}^{X\_{h(k-1)}}\_{h(k-1),hk}(d{\ensuremath{\mathfrak{X}\_{k}}\xspace})$$ and of a family of processes indexed by *k*, $\overline{\mathbb{K}}^x\_k$, again incorporating a continuous time process taking values in *C*[0, ∞), via: $$\frac{d\overline{\mathbb{K}}^x\_{k}}{d\overline{\mathbb{W}}\_x}({\ensuremath{\mathfrak{X}\_{}}\xspace}) \propto \prod\_{i=1}^k G\_i({\ensuremath{\mathfrak{X}\_{i}}\xspace}).$$ With a slight abuse of notation, we use the same symbol to refer to the associated finite-dimensional distributions, with the intended distribution being indicated by the argument. We also define the *marginal* laws, W*x* and K*k**x* via: $$\begin{aligned} \mathbb{W}^x(dX) =& \overline{\mathbb{W}}^x(dX \times (\otimes\_{p=1}^{\infty} \mathcal{Z}\_p))\\ \mathbb{K}^x\_k(dX) =& \overline{\mathbb{K}}^x\_k(dX \times (\otimes\_{p=1}^{\infty} \mathcal{Z}\_p)).\end{aligned}$$ Under mild regularity conditions (cf. ), for any *φ* : R*d* → R, any algorithm within the framework described admits a central limit in that: $$\begin{aligned} \lim\_{N \to \infty} \sqrt{N} \left[\frac{1}{N} \sum\_{i=1}^N \varphi(X^i\_{hk}) - \mathbb{K}\_k^x(\varphi(X^i\_{hk})) \right] \Rightarrow \sigma\_{k,G}(\varphi) Z\end{aligned}$$ where, *Z* is a standard normal random variable,  ⇒  denotes convergence in distribution, and: $$\begin{aligned} \sigma\_k^2(\varphi) =& \mathbb{E}\_{\overline{\mathbb{W}}} \left[ \left( \frac{G\_1({\ensuremath{\mathfrak{X}\_{1}}\xspace}) \mathbb{E}\_{\overline{\mathbb{W}}^x}\left[\prod\_{i=2}^k G({\ensuremath{\mathfrak{X}\_{i}}\xspace})|{\ensuremath{\mathfrak{X}\_{1}}\xspace} \right]}{\overline{\mathbb{W}}^x(\prod\_{i=1}^k G({\ensuremath{\mathfrak{X}\_{i}}\xspace}))} \right)^2 \mathbb{E}\_{{\mathbb{K}}\_{k}^x} \left[\left. \left( \varphi(X\_{hk}) - \mathbb{K}\_{k}^x(\varphi(X\_{hk})) \right)^2 \right\vert X\_{h} \right] \right]+\\ & \sum\limits\_{p=2}^{k-1} \mathbb{E}\_{\overline{\mathbb{K}}\_{p-1}^x} \left[ \left( \frac{\overline{\mathbb{W}}^x(\prod\_{i=0}^{p-1} G({\ensuremath{\mathfrak{X}\_{i}}\xspace}))}{\overline{\mathbb{W}}^x(\prod\_{i=0}^k G({\ensuremath{\mathfrak{X}\_{i}}\xspace}))} G({\ensuremath{\mathfrak{X}\_{p}}\xspace}) \mathbb{E}\_{\mathbb{W}^x}\left[\left.\prod\_{i={p+1}}^k G({\ensuremath{\mathfrak{X}\_{i}}\xspace}) \right| X\_{hp}\right] \right)^2 \mathbb{E}\_{\mathbb{K}\_{k}^x} \left[\left. \left(\varphi(X\_{hk}) - \mathbb{K}\_{k}^x(\varphi(X\_{k})) \right)^2 \right| X\_{hp}\right] \right]+\\ & \mathbb{E}\_{\overline{\mathbb{K}}\_{k-1}^x} \left[ \left( \frac{\overline{\mathbb{W}}^x(\prod\_{i=0}^{k-1} G({\ensuremath{\mathfrak{X}\_{i}}\xspace}))}{\overline{\mathbb{W}}^x(\prod\_{i=0}^k G({\ensuremath{\mathfrak{X}\_{i}}\xspace}))} G({\ensuremath{\mathfrak{X}\_{k}}\xspace}) \right)^2 \left(\varphi(X\_{hk}) - \overline{\mathbb{K}}\_{k}^x(\varphi(X\_{hk})) \right)^2 \right]\end{aligned}$$ [Proof Outline] It follows by a direct application of the argument underlying the Proposition of (which itself follows from simple but lengthy algebraic manipulations from the results of ) that for any test function, *φ* : R*d* → R satisfying mild regularity conditions (cf. ) that $$\begin{aligned} \lim\_{N \to \infty} \sqrt{N} \left[\frac{1}{N} \sum\_{i=1}^N \varphi(X^i\_{hk}) - \mathbb{K}\_k^x(\varphi(X^i\_{hk})) \right] \Rightarrow \sigma\_{k,G}(\varphi) Z\end{aligned}$$ where, *Z* is a standard normal random variable,  ⇒  denotes convergence in distribution, and: $$\begin{aligned} \sigma\_{k,G}^2(\varphi) =& \mathbb{E}\_{\overline{\mathbb{W}}} \left[ \left( \frac{d\overline{\mathbb{K}}\_{k}^x}{d\overline{\mathbb{W}}^x} (X\_{(0,h]},{\ensuremath{\mathfrak{Z}\_{1}}\xspace}) \right)^2 \mathbb{E}\_{\overline{\mathbb{K}}\_{k}^x} \left[\left. \left( \varphi(X\_{hk}) - \overline{\mathbb{K}}\_{k}^x(\varphi(X\_{hk})) \right)^2 \right\vert \overline{\mathcal{F}}\_{1} \right] \right]+\\ & \sum\limits\_{p=2}^{k-1} \mathbb{E}\_{\overline{\mathbb{K}}\_{p-1}} \left[\left( \frac{d\overline{\mathbb{K}}\_{k}^x}{d\overline{\mathbb{K}}\_{p-1}^x} (X\_{(0,hp]},{\ensuremath{\mathfrak{Z}\_{1:p}}\xspace}) \right)^2 \mathbb{E}\_{\overline{\mathbb{K}}\_{k}^x} \left[\left. \left(\varphi(X\_{hk}) - \overline{\mathbb{K}}\_{k}^x(\varphi(X\_{k})) \right)^2 \right\vert \overline{\mathcal{F}}\_{p}\right] \right]+\\ & \mathbb{E}\_{\overline{\mathbb{K}}\_{k-1}} \left[\left( \frac{d\overline{\mathbb{K}}\_{k}^x}{d\overline{\mathbb{K}}\_{k-1}} (X\_{(0,hk]},{\ensuremath{\mathfrak{Z}\_{1:k}}\xspace}) \right)^2 \left(\varphi(X\_{hk}) - \overline{\mathbb{K}}\_{k}^x(\varphi(X\_{hk})) \right)^2 \right]\end{aligned}$$ with $\{\overline{\mathcal{F}}\_p\}\_{p\geq0}$ being the natural filtration associated with $\overline{\mathbb{W}}^x$. This can be straightforwardly simplified to: $$\begin{aligned} \sigma\_k^2(\varphi) =& \mathbb{E}\_{\overline{\mathbb{W}}} \left[ \left( \frac{G\_1({\ensuremath{\mathfrak{X}\_{1}}\xspace}) \mathbb{E}\_{\overline{\mathbb{W}}^x}\left[\prod\_{i=2}^k G({\ensuremath{\mathfrak{X}\_{i}}\xspace})|{\ensuremath{\mathfrak{X}\_{1}}\xspace} \right]}{\overline{\mathbb{W}}^x(\prod\_{i=1}^k G({\ensuremath{\mathfrak{X}\_{i}}\xspace}))} \right)^2 \mathbb{E}\_{{\mathbb{K}}\_{k}^x} \left[\left. \left( \varphi(X\_{hk}) - \mathbb{K}\_{k}^x(\varphi(X\_{hk})) \right)^2 \right\vert X\_{h} \right] \right]+\\ & \sum\limits\_{p=2}^{k-1} \mathbb{E}\_{\overline{\mathbb{K}}\_{p-1}^x} \left[ \left( \frac{\overline{\mathbb{W}}^x(\prod\_{i=0}^{p-1} G({\ensuremath{\mathfrak{X}\_{i}}\xspace}))}{\overline{\mathbb{W}}^x(\prod\_{i=0}^k G({\ensuremath{\mathfrak{X}\_{i}}\xspace}))} G({\ensuremath{\mathfrak{X}\_{p}}\xspace}) \mathbb{E}\_{\mathbb{W}^x}\left[\left.\prod\_{i={p+1}}^k G({\ensuremath{\mathfrak{X}\_{i}}\xspace}) \right| X\_{hp}\right] \right)^2 \mathbb{E}\_{\mathbb{K}\_{k}^x} \left[\left. \left(\varphi(X\_{hk}) - \mathbb{K}\_{k}^x(\varphi(X\_{k})) \right)^2 \right| X\_{hp}\right] \right]+\\ & \mathbb{E}\_{\overline{\mathbb{K}}\_{k-1}^x} \left[ \left( \frac{\overline{\mathbb{W}}^x(\prod\_{i=0}^{k-1} G({\ensuremath{\mathfrak{X}\_{i}}\xspace}))}{\overline{\mathbb{W}}^x(\prod\_{i=0}^k G({\ensuremath{\mathfrak{X}\_{i}}\xspace}))} G({\ensuremath{\mathfrak{X}\_{k}}\xspace}) \right)^2 \left(\varphi(X\_{hk}) - \overline{\mathbb{K}}\_{k}^x(\varphi(X\_{hk})) \right)^2 \right]\end{aligned}$$ We conclude with the following corollary, showing that the particular combination of sub-sampling scheme and path space sampler fits into this framework and providing its particular asymptotic variance expression. Such a CLT is satisfied in particular: 1. If no sub-sampling is used and one evaluates the *exact* (intractable) killing rate (as described in ). 2. If sub-sampling is employed within the construct of the layered path-space rejection sampler (as described in ). Both claims follow directly by the above argument with the appropriate identifications. (a) is established by setting: $$\begin{aligned} \mathcal{Z}\_k =& \emptyset & G\_k({\ensuremath{\mathfrak{X}\_{k}}\xspace}) =& G(X\_{[h(k-1),hk]})\\ && \propto& \frac{d\mathbb{K}^{X\_{h(k-1)}}\_{h(k-1),hk}}{d\mathbb{W}^{X\_{h(k-1)}}\_{h(k-1),hk}}(X\_{(h(k-1):hk]}) \end{aligned}$$ (b) is established by setting (where we denote by *c* the number of pairs of data points employed by the subsampling mechanism; *c* = 1 for the examples in this paper): $$\begin{aligned} \mathcal{Z}\_k =& \cup\_{m\_k=1}^\infty \otimes\_{p=1}^{m\_k} R(\tau\_{k,p-1},\tau\_{k,p})\\ R(s,t) =& \cup\_{\kappa=0}^\infty \{\kappa\} \times (s,t]^\kappa \times \{1,\ldots,n\}^{2c\kappa} \\ {\ensuremath{\mathfrak{Z}\_{k}}\xspace} =& (r\_{k,1},\ldots,r\_{k,m\_k}) \\ r\_{k,p} =&( \kappa\_{k,p}, \xi\_{k,p,1},\ldots,\xi\_{k,p,\kappa\_{k,p}},s\_{k,j,1,1:2c},\ldots,s\_{k,p,\kappa\_{k,p},1:2c}) \\ G\_k({\ensuremath{\mathfrak{X}\_{k}}\xspace}) =&\exp\left(-\sum\_{p=1}^{m\_k} L\_\theta(X\_{\tau\_k,p-1})(\tau\_{k,p} - \tau\_{k,p-1})\right)\\ &\qquad\cdot \prod\_{p=1}^{m\_k} \prod\_{j=1}^{\kappa\_{k,p}} \left[ \frac{U\_\theta(X\_{\tau\_{k,p-1}}) - \widetilde{\phi}(X\_{\xi\_{k,p,j}},s\_{k,p,j,1:2c})}{U\_\theta(X\_{\tau\_{k,p-1}}) - L\_\theta(X\_{\tau\_{k,p-1}})} \right] \\ Q\_k(X\_{(h(k-1),hk]},d{\ensuremath{\mathfrak{Z}\_{k}}\xspace}) =& \prod\_{p=1}^{m\_k} \left[ \textsf{PP}(d\xi\_{k,p,1:\kappa\_{k,p}}; ( U\_\theta(X\_{\tau\_{k,p-1}}) - L\_\theta(X\_{\tau\_{k,p-1}}) ), [\tau\_{k,p-1},\tau\_{k,p}])\right.\\ & \left. \qquad \cdot \prod\_{j=1}^{\kappa\_{k,p}} \frac{1}{n^{2c}} \prod\_{l=1}^{2c} \delta\_{\{1,\ldots,n\}} (ds\_{k,j,l})) \right] \end{aligned}$$ where $\textsf{PP}(\cdot;\lambda,[a,b])$ denotes the law of a homogeneous Poisson process of rate *λ* over interval [*a*, *b*], *δ*{1, …, *n*} denotes the counting measure over the first *n* natural numbers and a number of variables which correspond to deterministic transformations of the *X* process have been defined to lighten notation: $$\begin{aligned} \tau\_{k,p} = \left\{ \begin{array}{ll} (k-1)h & p = 0\\ \inf \{ t : | X\_t - X\_{\tau\_{k,p-1}}| \geq \theta\} & p=1,\ldots,m\_k-1\\ kh & p=m\_k \end{array}\right. \end{aligned}$$ and *m**k* is the number of distinct layer pairs employed in interval *k* of the discrete time embedding of the algorithm (i.e. it is the number of first passage times simulated within the continuous time algorithm after time (*k* − 1)*h* until one of them exceeds *k**h*; as detailed in Appendices [ss:fpt] and [ss:fptinter]). Estimation of Effective Sample Size =================================== Assume QSMC (or ScaLE) has been run for an execution (diffusion) time of length *T*, and that the weighted particle set (of size *N*) is to be used at the following auxiliary mesh times *t*\*, …, *t**m* :  = *T* (recalling from that *t*\* ∈ (*t*0, …, *t**m*) is a user selected quasi-stationary burn-in time) for computation of the Monte Carlo estimators ([eq:occupation2], [eq:occupation3]). The posterior mean for the parameters at time *t**i* ∈ [*t*\*, *T*] is simply estimated using the particle set by $\hat{\mathbf{X}}\_{t\_i}=\sum\_{k=1}^N w^{(k)}\_{t\_i}\cdot \mathbf{X}^{(k)}\_{t\_i}$. An overall estimate of the posterior mean and variance can be computed as follows: $$\begin{aligned} \bar{\mathbf{X}} & =\dfrac{1}{m(T-t^\*)/T}\sum\_{i=m(T-t^\*)/T}^m \hat{\mathbf{X}}\_{t\_i}, \\ \hat{\sigma}^2\_{\mathbf{X}} & = \dfrac{1}{m(T-t^\*)/T}\sum\_{i=m(T-t^\*)/T}^m \sum\_{k=1}^N w^{(k)}\_{t\_i}\Big(\mathbf{X}^{(k)}\_{t\_i}- \bar{\mathbf{X}}\Big)^2,\end{aligned}$$ The marginal ESS for particles at a single time point can be estimated as the ratio of the variance of $\hat{\mathbf{X}}\_{t}$ to the estimate of the posterior variance, $$\begin{aligned} \mbox{ESS}\_{M}=\hat{\sigma}^2\_{\mathbf{X}}\left(\frac{1}{m(T-t^\*)/T} \sum\_{t=m(T-t^\*)/T}^m \Big(\hat{\mathbf{X}}\_{t\_i}-\bar{\mathbf{X}}\Big)^2 \right)^{-1}.\end{aligned}$$ Although in total we have (*m*(*T* − *t*\*)/*T*) sets of particles (after burn-in), these will be correlated. This is accounted for using the lag-1 auto correlation of $\hat{\mathbf{X}}\_{t^\*},\ldots,\hat{\mathbf{X}}\_{T}$, which we denote *ρ̂*. Our overall estimated ESS is, $$\begin{aligned} \mbox{ESS} := (m(T-t^\*)/T)\cdot \dfrac{1-\hat{\rho}}{1+\hat{\rho}} \cdot \mbox{ESS}\_{M}.\end{aligned}$$
arxiv_0000068
– a computer programme for the dynamical processing of the initial binary star population ========================================================================================== [firstpage] The first version of the Binary Population Synthesizer () is made publicly available. It allows to efficiently calculate binary distribution functions after the dynamical processing of a realistic population of binary stars during the first few Myr in the hosting embedded star cluster. Instead of time-consuming N-body simulations, uses the stellar dynamical operator $\Omega\_{\rm dyn}^{\rho\_{\rm ecl}}(\log\_{10}(E\_{\rm b}),t)$, which determines the fraction of surviving binaries depending on the binding energy of the binaries, $E\_{\rm b}$. The Ω-operator depends on the initial star cluster density, $\rho\_{\rm ecl}$, as well as the time, *t*, until the residual gas of the star cluster is expelled. has also a galactic-field mode, in order to synthesize the stellar population of a whole galaxy. At the time of gas expulsion, the dynamical processing of the binary population is assumed to effiently end due to the subsequent expansion of the star cluster. While has been used previously unpublished, here we demonstrate its use in the modelling of the binary populations in the Orion Nebula Cluster, in OB associations and as an input for simulations of globular clusters. galaxies: kinematics and dynamics, software: public release, binaries: general, stars: pre-main-sequence, stars: statistics, methods: numerical Introduction ============ Stars do not only come as single stars, but they are often bound to a partner star by their gravity. These gravitationally bound stellar systems are called binaries. If they are not disturbed from the outside, they are gravitationally, or dynamically stable. Also systems of more than two stars can be stable over long periods of time, if they are built up hierarchically. There is in principle no limit to the number of hierarchies such a stellar system can have, but important is that on each hierarchy level, the system can be approximated well as two dominant point masses. Thus, the stellar system can be treated as a binary on each level of hierarchy. Star clusters on the other hand consist also of more than two stars, but they lack the hierachy on the top level of their build-up. Thus, star clusters show the chaotic behaviour of many-body dynamics instead of the regular behaviour of two-body dynamics of hierarchical multiples. It is thought that most, if not all stars are born in embedded star clusters (e.g. ). However, embedded star clusters dissolve, first through the removal of their residual gas and then, if they survive, through processes of energy equipartition. Over time, they all leave single stars, binaries and hierarchical multiples in their galaxies. Observations of the stars in the Galactic Field (GF), that is the stars that are not part of a star cluster any more, allow to fix the ratio of single stars to binaries and to hierachical binaries in the Solar neighbourhood. It turns out that about half of the centre-of-mass stellar systems are single stars. The hiercharchical binaries on the other hand, that is triples, quadruples, quintuples and so on, are comparatively rare against the binaries (e.g. ). Thus, to first order, the population of centre-of-mass stellar systems in the field can be assumed to consist only of single stars and binaries, which will be done in this paper. If all stars are born in embedded star clusters, then the field population of the Milky Way must consist of dissolved star clusters, and thus be an aged and dynamically evolved stellar population. This poses the question what the single, binary and hierarchical multiple content of a newly born stellar population is. A good place to look for almost primeval stellar populations are T-Tauri stars, which are stars with a mass $\apprle 3 \, {\rm M}\_{\odot}$ and an age of $\apprle 10^7$years, and therefore have not reached the main sequence yet. They are usually found in or near a star-forming gas cloud, that is currently forming embedded star clusters. For instance, observed the Taurus star forming region and found a multiplicity, that is the number of binaries and hierachical binaries over the total number of systems, of 42.5 ± 4.9 per cent in the range of apparent separations, *s*, from 0.13 to 13 arcsec. This is 1.9 times the multiplicity of in terms of mass and separation comparable binaries in the Galactic Field of 23.5 per cent. The Scopius-Centaurus OB association has a slightly lower multiplicity than the Taurus star forming region, but still notably higher than in the Galactic field. An observation of the *ρ* Ophiuchi Dark Cloud shows however a multiplicity of 29.1 ± 4.3 per cent in the range of apparent separations from 0.13 to 6.4 arcsec. This corresponds to 18 to 900 AU at a distance of 140 pc, which is about the distance to the *ρ* Ophiuchi dark cloud and which is at the same distance as the Taurus star-forming region according to. Thus, the binaries observed in the *ρ* Ophiuchus dark cloud are comparable to those observed in the Galactic field, but their binary fraction is about 1.2 times as high as the binary fraction in the galactic field. Binaries in young stellar populations are however not restricted to T-Tauri stars, but encompass young stars (and thus also young star clusters) in general. For instance, the Orion Nebula Cluster (ONC) is younger than 1–2 Myr and its binary population with periods between 104.8 and 106.5 days (170 and 6700 years) is comparable to the Galactic Field. When observing wide binaries with periods of 107 to 108.1 days (2.7 × 104 to 3.4 × 105 years) in the ONC, a lot less binaries than in the Galactic Field are found. However, the ONC is also very dense with  > 104 stars/pc3, which implies significant dynamical evolution through stellar interactions. Thus, a probably more primeval population can be found in less dense star clusters. Such a cluster is for instance NGC 2024, which has about the same age as the ONC, but is a factor 10 less dense, and has a binary fraction much larger than in the Galactic Field in the period range between 105.7 and 107.1 days (1400 and 3.4 × 104 years). On the other, the cluster IC348 has similar parameters regarding radius and mass as NGC2024, but is about 3–5 times as old, and has again a binary fraction indistinguishable from the Galactic Field in a period range between 105.7 and 107.1 days (1400 and 3.4 × 104 years, ). Thus, observations show that stellar populations of all ages contain a sizable fraction of binaries. However, are star clusters born with binaries or do these form later? By-chance encounters of three single stars, where two stars form a binary and the remaining star carries away the excess energy, would require very high densities of about $10^8 \ {\rm M}\_{\odot}/{\rm pc}^3$ to be efficient. Those densities are not observed in the Local Universe (figure 4 in ), but may exist for a short time in the most extreme star bursts. However, such extreme starbursts are the exception and cannot be responsible for the binary content of the Galactic Field, even if every single star in an extreme star burst would be turned into a binary. When taking into account that stars are not point masses, tidal captures may play a role. In tidal captures, some of the orbital energy of two stars is converted into tides in a close encounter, and the stars become a binary because of that. However, showed with numerical simulations of star clusters made up entirely of single stars that tidal captures are also very inefficient to spawn binaries in them. This is somewhat higher in dense environments like the cores of globular clusters, but with $\approx 10^{-7} {\rm yr}^{-1}$ still too low to explain the observed abundance of binaries even in young clusters. Thus, processes that turn single stars into binaries are too inefficient to explain present binary numbers. Therefore, a large number of stars must be born as binaries or hierarchical multiples. The hierarchical multiples can be neglected at the birth of a stellar population. They form anyway in the evolution of binary populations, and may reach values consistent with the values observed today in the Galactic Field. Thus, proposed that all stars are born in binaries, which are parts of star clusters. A binary fraction of 100 percent is probably a simpification. However, what can be said is that the initial binary fraction is very high, and indistinguishable from being at 100 percent at birth in all observations [4](#fn4). In more detail,, and proceeded as follows. starts out with a population of binaries of 100 per cent binaries at birth. They are born in star clusters and have formed with a mass function as in. The mass function is modelled based on the luminosity function by, with an extension to very low-mass stars described in Section (4.2) of. The stars of the binaries are paired at random, because of the lack of evidence for a correlation for low-mass stars treated in and. Low-mass stars, that is stars below a mass of $1 \, {\rm M}\_{\odot}$, are however the majority with the mass function from, and are the only stars discussed in, and. The binaries are assumed to be in statistical equilibrium, and thus have a thermal eccentricity distribution (see for a profound treatise). Finally, the binaries have a flat distribution of semi-major axes between $a\_{\rm min}=1.69$ AU and $a\_{\rm max}=1690$ AU. This distribution of semimajor axes is equivalent to a flat distribution of periods with $\log\_{10}(P\_{\rm min}/{\rm days})=3.0$ as the minimum period and $\log\_{10}(P\_{\rm max}/{\rm days})=7.5$ as the maximum period for a mean system mass of $0.64 \, {\rm M}\_{\odot}$. The lower and upper limits to the periods are consistent with the observations for pre-main sequence binaries shown in figure (1) of. The flatness of the distribution is however an assumption. This assumption only becomes justifiable, because credible assumptions on binary evolution lead to observable populations of main sequence binaries (see below, or and for more details). The evolution of the binaries is separated into two modes, namely internal binary evolution in close binaries through the partner star (eigenevolution, ) and external binary evolution through interactions with other binaries, and later on also with other single stars (stimulated evolution, ). Internal binary evolution, or pre-main sequence eigenevolution, transforms originally eccentric orbits into circular orbits and sometimes feeds the secondary star from the mass of the primary star. It is especially effective on pre-main sequence binaries with small semi-major axes or strongly eccentric orbits. In both types of orbits, the stars come close to their partner stars, allowing for efficient mass transfer from one star to the other. The reason why it works especially on pre-main sequence binaries is that pre-main sequence stars have larger radii than main sequence stars of the same mass, and are thus more easily disturbed by their companions. estimate that it takes 105 years to circularize pre-main sequence binaries. therefore assumes (and quantifies) that internal binary evolution, or pre-main sequence eigenevolution, takes place in binaries with short semi-major axes and/or high eccentricities and finishes in the pre-main sequence phase of star clusters. He furthermore distinguishes between the birth population, that is the stellar population with parameters as detailed above, and the initial population, that is the stellar population changed by internal binary evolution, or pre-main sequence internal eigenevolution. The binary evolution of the initial binary population can then be followed with an N-body programme over a longer time-span, as done in with the N-Body code NBODY5. As a result of internal and external binary evolution, or pre-main sequence eigenevolution and stimulated evolution, a binary population consistent with the galactic field regarding binary fraction, mass ratio distribution, semi-major axis distribution and eccentricity distribution is obtained after one Gyr. The condition for this to be achieved is that all binaries come from star clusters that had 200 binaries on a half-mass radius of 0.8 pc after internal binary evolution (pre-main sequence eigenevolution), or from star clusters that produce the same binary spectrum after their dissolution. Star clusters that have also 200 binaries, but have a noticeably larger or smaller half-mass radius, produce a different binary spectrum. For these reasons, calls a star cluster that has initially 200 binaries distributed on a half-mass radius of 0.8 pc the *dominant-mode star cluster*, and star clusters that produce the same binary spectrum like the dominant-mode star cluster *dynamically equivalent*. and showed that also the different binary population of well observed star clusters (that is at that time the Plejades, the Hyades, and within observational limitations also the ONC) can be reproduced with the methods devised in. slightly improved the physics of the internal binary evolution, or pre-main sequence eigenevolution. They showed consistency with the present-day population in globular clusters, which must have formed with the same (universal) birth binary formulation. Thus, in short, Kroupa’s method of dynamically equivalent star clusters can be used to characterize the binary populations known so far. Figure 3 in shows that most binaries that dissolve into single stars do so in the first Myr, and after that the binary fraction is nearly constant. Motivated by this, M. Marks developed the first version of a computer programme, which he called (nary pulation ynthesizer, version 1). It is implicitely introduced already in. In this programme, the binary population goes first through internal binary evolution, or pre-main sequence eigenevolution (cf. ). Then, the effect of external binary evolution, or stimulated evolution, is calculated with a stellar dynamical operator, $\Omega\_{\rm dyn}^{\rho\_{\rm ecl}}(\log\_{10}(E\_{\rm b}),t)$. This operator was introduced in and gives the survival fraction of binaries as a function of the binding energy of the binaries, $E\_{\rm b}$, and the time, *t*, for which binary evolution takes place. It depends on the initial density, $\rho\_{\rm ecl}$, as determined by the embedded cluster mass at the birth of the star cluster. This density can equivalently be expressed by the embedded mass of the star cluster, $M\_{\rm ecl}$, and its half-mass radius at that time, $r\_{\rm h}$, which is the approach chosen in. The operator $\Omega\_{\rm dyn}^{\rho\_{\rm ecl}}(\log\_{10}(E\_{\rm b}),t)$ has been gauged in to N-body simulations of a set of star clusters with Nbody6 for times of dynamical evolution of 1, 3, and 5 Myr. makes use of these fitted values and is therefore much faster than the more exact, but also much more time-consuming calculation with an N-body programme. This is because the wider binaries are destroyed quite quickly, but until that happens, they use much of the computing time of the N-body programme. Hence, provides a shortcut for simulating the first few Myr of evolution of a star cluster, and simulations may become feasible that would otherwise take too long because of the prominent appearance of wide binaries at the beginning of the simulation. In short, tells the user, in dependency of some parameters, which binaries to keep for eventual further processing with an N-body programme. An important test for theories of star formation is that the successful theory needs to reproduce the fraction of binaries in dependency of the mass of the primary stars. Most observations have shown that the fraction of binaries decreases as the mass of the primary star decreases. This appears to be in contradiction to the constancy of the fraction of binaries near 100 percent for all primary-star masses at birth, which is assumed here. The observed correlation however arises naturally from this constancy through dynamical processing, as has been shown explicitly in previous work (figure 7 in ; figure 6 in ; and as a basis of prediction in figure 3 in ). The purpose of this paper is to introduce and how it works in more detail. For this, Section ([sec:theory]) lays out the fundamental equations with which works. Section ([sec:implementation]) describes how the equations from Section ([sec:theory]) are implemented into, Section ([sec:working]) deals with actually running the programme from the command line and Section ([sec:examples]) gives examples for running the programme. Section ([sec:discussion]) is a discussion of some results and Section ([sec:summary]) concludes the paper. can be downloaded at GitHub under the web address `https://github.com/JDabringhausen/BiPoS1`. The theory behind ================= In the following sections, binary distribution functions (BDFs) of some orbital parameter *x* are used, where *x* is, for instance, the period, *P*, in days, the semi-major axis, *a*, in AU or the eccentricity *e*. They are defined as $$\label{eq:bdf} \Phi\_x=\frac{d f\_{\rm bin}(x)}{dx}=\frac{1}{N\_{\rm cms}}\frac{d N\_{\rm b}(x)}{dx},$$ such that $$\label{eq:bdf-nb} f\_{\rm bin}=\int\Phi\_x(m\_1) \, dx.$$ In equations ([eq:bdf]) and ([eq:bdf-nb]), $f\_{\rm bin}=N\_{\rm b}/N\_{\rm cms}$ is the fraction of binaries, $N\_{\rm b}$ is the number of binaries, $N\_{\rm cms}$ the number of centre-of-mass systems (that is singles and binaries), and *m*1 is the mass of the primary of a binary, i.e. the more massive star. The parameters in the BDFs are for simplicity assumed to be separable upon the formation of the binaries, that is one parameter in the BDF does not depend on any of the others. The observable correlations of the BDFs later on (for example, binaries with short periods, *P*, have low eccentricities, *e*) are due to the subsequent internal binary evolution, or pre-main sequence eigenevolution, leading to the initial binary population (see or Section [sec:ibp]). Synthesizing the initial binary population from the birth population -------------------------------------------------------------------- ### The binary population at its birth The initial binary population (IBP) stems from a birth binary population. The birth binary population undergoes binary-internal evolution, while being in the formation process, which termed *pre-main sequence eigenevolution*. The birth binary population is not observable, but is a mathematical model which allows the initial binary population to be calculated. The initial binary population is the population which an observer would construct from an observed very young population of stars, if every star could be traced back to its origin, such that all binary systems were reconstructed to an individual age of 0.1 Myr. This can in principle be done in high-resolution radiation hydrodynamical simulations of star formation, as pioneered by. Thus, also the initial binary population is a theoretical construct. The periods of the birth binary population, *P*, are selected from a universal period BDF. How this period BDF follows from physical laws is unknown. However, it must fullfill certain requirements, such as that it allows for the binary periods which are observed, or that the integral over all periods equals 1. find that the function $$\label{eq:dist-a} \Phi\_{\rm P,birth}=\delta \frac{\log\_{10}P-\log\_{10}P\_{\rm min}}{\eta+(\log\_{10}P-\log\_{10}P\_{\rm min})^2}$$ with the period generating function $$\log\_{10} P (X)=\log\_{10}P\_{\rm min}+[\delta(e^{2X/\eta}-1)]^{1/2}$$ with *X* ∈ [0, 1] meets these requirements, while it can easily be integrated. choose *δ* = 2.5, *η* = 45 and $\log\_{10}P\_{\rm min}=1$. $\log\_{10}P\_{\rm max}=8.43$ follows then from the condition that the integration over the whole period range must be unity. These parameters are also adapted in. For the finding of Equation ([eq:dist-a]) and its parameters, only primary stars with masses $\le 1 \, {\rm M}\_{\odot}$ were used. More massive stars are likely to show somewhat different period distributions, see for instance equation (3) in, and for a review. Also, massive stars are likely not born with 100 percent binaries (or indinguishably close to it), but have also a substantial fraction of primordial triples, which is potentially connected with the different period functions. However, 84 percent of the stars have a mass $\le 1 \, {\rm M}\_{\odot}$ according to the canonical IMF (see equation [eq:imf] below) which is used in. Moreover, equation ([eq:dist-a]) gains credibility because it leads to observable period distributions today, if the conditions that follow in Section ([sec:ibp-2]) regarding the the internal binary evolution, or pre-main sequence eigenevolution, are met. The primary and secondary component masses for stars with $m<5 \, {\rm M}\_{\odot}$ are selected randomly from the canonical stellar initial mass function, $$\xi\_{\rm IMF}(m)\propto ka\_im^{-\alpha\_i} \left\{ \begin{matrix} \alpha\_1=1.3 & 0.08<m<0.5 \\ \alpha\_2=2.3 & 0.5<m<150 \end{matrix} \right., \label{eq:imf}$$ where all masses are in ${\rm M}\_{\odot}$, and *k* and *a**i* are coefficients, which normalize equation ([eq:imf]) to unity and ensure continuity. For stars more massive than $5 \, {\rm M}\_{\odot}$, secondary masses are selected such that the mass ratio *q* = *m*2/*m*1 is larger than 0.9 (i.e. close to Unity). Thus, stars with masses $<5 \, {\rm M}\_{\odot}$ follow *random pairing* and stars with masses $>5 \, {\rm M}\_{\odot}$ follow *ordered pairing* (see also ). It is important to note that in the case of ordered pairing masses selected from the IMF are not discarded if they do not fullfil the *q*-criterion. Instead they are saved for later use in order to preserve the shape of the IMF. Thus, the shape of the stellar initial mass function (IMF) is heavily restricted. In practice however, this means little limitations, because the shape of the IMF in most young star clusters is given by equation ([eq:imf]), or some equation that is observationally indistingushable from it. Only the upper mass limit changes in low- to mid-mass clusters (see figure 1 in, figure 2 in ; ). In consequence, the user may change it also in by setting any value for the upper mass limit, `MHIGH`, provided that it is higher than the lower mass limit, `MLOW` (see Section [sec:GenBinaries]). The galaxy-wide IMFs may have different slopes than their star clusters, but that is covered in by the IGIMF-theory, according to which a galaxy is built up by many embedded star clusters with different upper mass limits for their stars. The upper mass limit for stars depends on the mass of the embedded star cluster, and the distribution of embedded star cluster masses mainly (but not only) depends on the star formation rate of the galaxy. What is not covered by are IMFs that are flatter than *α*2 = 2.3 in the high-mass range. Such IMFs may occur in the most massive star clusters, and conseqently also in galaxies with the highest star formation rates. The binary eccentricities at birth are selected from a thermal BDF, $$\label{eq:thermal-dist} \Phi\_{e,\rm birth}=2e.$$ This is the thermal distribution function for eccentricties; see Section (2.2) in for a profound treatise, and e.g. for its applications. ### Internal binary evolution, or pre-main sequence eigenevolution The binary population at birth thereby obtained is then subjected to internal binary evolution, or *pre-main sequence eigenevolution*. Internal binary evolution comprises of two aspects: The circularisation of the of the orbits and mass transfer from the (more massive) donor star to the receptor star. Note that both aspects only become relevant when the two stars that make up a binary come close enough together, that is when the binary is very thight, very eccentric, or both. The driver of the circularisation are the tides that act on eccentric orbits, because at the pericentre, the stars are more strongly harrassed by the gravity of the other star than at their apocentre. (The pericentre, and apocentre, respectively, are the points on the elliptic orbit of a star where the distance to the other star is at the minimum, and maximum, respectively.) The tides heat up the stars, which radiate the energy away. In effect, the apocentre approaches the pericentre as the stars orbit each other, while the pericentre does almost not change. When pericentre and apocentre are equal, the orbit is circular. The stars then have constantly the maximum distortion they originally had anywhere on their orbit. However, with the reason for the tides on the orbit gone, the orbit is stable. To calculate the change in eccentricity due to pre-main sequence eigenevolution, or internal binary evolution, first the pericenter distance is calculated where eigenevolution is expected to be significant. It is given by $$R\_{\rm peri}=(1-e\_{\rm birth})P^{2/3}(m\_1+m\_2)^{1/3},$$ in which $e\_{\rm birth}$ is the eccentricity at birth, *P* is the period measured in years and *m*1 and *m*2 are the masses of the stars measured in ${\rm M}\_{\odot}$. The initial eccentricity may then be calculated from $$\ln e\_{\rm initial} = -T +\ln e\_{\rm birth}, \notag$$ where $$T = \left(\frac{\lambda R\_{\odot}}{R\_{\rm peri}}\right)^\chi \notag$$ is a measure of the duration of internal binary evolution, or pre-main sequence eigenevolution. The parameters *λ* = 28 and *χ* = 0.75 for internal binary evolution, or pre-main sequence eigenevolution, measure the length-scale over which internal binary evolution of the orbital elements occurs during the proto-stellar phase, and the ‘interaction strength’ between the two protostars in the binary system, respectively. For the mass transfer, it is important to note that it affects stars during their pre-main sequence phase, that is when they still gather a significant amount of mass from their surroundings. Thus, mass transfer does not necessarily diminish the mass of one star by the amount the other star grows, but the stars can feed from the matter that is not yet part of any star. In fact, proposed that the less massive star could feed on the larger circum-stellar disk of the more massive star. This process would stop at the lastest when the less massive star reaches the mass of the more massive star. On the other hand, noted that a model, which kept the mass of the binary stars constant in total, proved unsatisfactory when compared with the observational data for binaries with short periods. Thus, adopted a feeding model, which is given by $$q\_{\rm initial} = q\_{\rm birth}+(1-q\_{\rm birth}) T^\*, \notag$$ where $$T^\* = \left\{ \begin{matrix} T, & T\leq1 \\ 1, & T>1 \end{matrix} \right.. \notag$$ The initial mass of the secondary component may then be calculated from $$m\_{2,\rm initial} = q\_{\rm initial} \, m\_{1,\rm birth},\notag$$ and $m\_{1,\rm initial}=m\_{1,\rm birth}$ is assumed not to change. This model is also taken in. From the new parameters, the initial period is calculated according to $$P\_{\rm initial}=\left(\frac{m\_{1,\rm birth}+m\_{2,\rm birth}}{m\_{1,\rm initial}+m\_{2,\rm initial}}\right)^{\frac{1}{2}}\left(\frac{1-e\_{\rm birth}}{1-e\_{\rm initial}}\right)^{\frac{3}{2}}.\notag$$ The semi-major axis, energy and angular momentum may now be calculated from the pre-main sequence eigenevolved masses, eccentricities and periods. Note that with this implementation of internal binary evolution, or pre-main sequence eigenevolution, the periods can only be shortened. However, binaries with periods of $\log\_{10}(P/ {\rm days})<-1$ are not allowed in, but small minority at best anyway. In reality, such binaries would likely merge to single stars. For a comparison of a stellar population before and after internal binary evolution in, see figure ([birth+init]) in this paper, and for the effects in general, see figure (2) in. Synthesizing a star cluster --------------------------- We assume that the embedded cluster is a result of monolithic collapse of a molecular cloud core, because have shown that the observed very young clusters (ONC, NGC3603, R136) are too smooth, compact and young to allow significant sub-structured initial conditions. That is, initial conditions where the final cluster starts forming from the collapse of sub-clusters are constrained to be compact. The sub-clusters are therefore initially so close that the whole structure is dynamically and morphologically next to identical to the assumed monolithic smooth initial conditions (modelled as a Plummer phase-space distribution function). Essentially, sub-clustered initial conditions take too long to collapse and virialise to be consistent with the observed very young clusters. To calculate the binding energies of the binaries, $E\_{\rm b}$, of an evolved star cluster with initial embedded mass in stars, $M\_{\rm ecl}$, and initial half-mass radius, $r\_{\rm h}$, at first the initial binary energy distribution, $\Phi\_{{\log\_{10}(E)},\rm init}$ needs to be constructed. do this as specified in Section ([sec:ibp]), and then they calculate the binding energies of the binaries in the star cluster with $$E\_{\rm b}=2^{-\frac{1}{3}}\left(\frac{\pi m\_1m\_2}{P}\right)^{\frac{2}{3}}.\notag$$ After that, perform N-body simulations, using Nbody6, to transform the initial binding energies into the final ones. They use an array of star clusters of different $M\_{\rm ecl}$, $r\_{\rm h}$ and ages, *t*, for this step. The result is an array of evolved binary energy distribution functions, $\Phi\_{\log\_{10}(E\_{\rm b}),\rm evolved}$, which depend on $M\_{\rm ecl}$, $r\_{\rm h}$, and the age, *t*, of the star cluster. Alternatively, $M\_{\rm ecl}$ and $r\_{\rm h}$ can be replaced with a single parameter, $\rho\_{\rm ecl}$, that is the average density of the star cluster within $r\_{\rm h}$, because it has been demonstrated in that star clusters with identical crossing times, $t\_{\rm cross}$, develop identical binary fractions, $f\_{\rm b}$. And since $t\_{\rm cross} \propto \rho\_{\rm ecl}^{-0.5}$, the initial density is the only relevant parameter to determine the resulting binary population in their computations. When the initial binary population (IBP) is placed inside a star cluster, the IBP will evolve due to interactions between systems in which energy and angular momentum is transferred. This is called external binary evolution, or stimulated evolution. Generally, external binary evolution, or stimulated evolution, removes binaries from the population with time, and the amount to which this happens depends on the binding energy of the binary. The change of the IBP due to external binary evolution, or stimulated evolution, until the time *t* can be written down as a stellar dynamical operator, $\Omega\_{\rm dyn}^{M\_{\rm ecl},r\_{\rm h}}$, which acts on the initial energy BDF. Thus, $$\label{eq:operator} \Phi\_{\log\_{10}(E\_{\rm b}),\rm evolved} = \Omega\_{\rm dyn}^{M\_{\rm ecl},r\_{\rm h}}(\log\_{10}(E\_{\rm b}),t) \times \Phi\_{\log\_{10}(E\_{\rm b}), \rm initial}.$$ $E\_{\rm b}$ is the binding energy of the binaries, and the superscipts $M\_{\rm ecl}$ and $r\_{\rm h}$ on the operator signify its dependence on $M\_{\rm ecl}$, and $r\_{\rm h}$, respectively, of the star cluster in question. Equivalently, $\Omega\_{\rm dyn}^{M\_{\rm ecl},r\_{\rm h}}(\log\_{10}(E\_{\rm b}),t)$ can be replaced by $\Omega\_{\rm dyn}^{\rho\_{\rm ecl}}(\log\_{10}(E\_{\rm b}),t)$, where $\rho\_{\rm ecl}$ is the average initial density of the embedded star cluster, corresponding to the above combination of $M\_{\rm ecl}$ and $r\_{\rm h}$. The next step is to characterize $\Omega\_{\rm dyn}^{M\_{\rm ecl},r\_{\rm h}}(\log\_{10}(E\_{\rm b}),t)$, or $\Omega\_{\rm dyn}^{\rho\_{\rm ecl}}(\log\_{10}(E\_{\rm b}),t)$, respectively. find that $\Omega\_{\rm dyn}^{\rho\_{\rm ecl}}(\log\_{10}(E\_{\rm b}),t)$ can be described as the upper half of a sigmoidal curve, which is given as $$\begin{gathered} \label{eq:sigmoidal} \Omega\_{\rm dyn}^{\rho\_{\rm ecl}}(\log\_{10}(E\_{\rm b}),t)=\\ \frac{{\cal A}(t)}{1+\exp[{\cal S}(t)(\log\_{10}(E\_{\rm b})-\log\_{10}(E\_{\rm b,cut}))]}-\frac{{\cal A}(t)}{2}.\end{gathered}$$ The parameters ${\cal A}$, ${\log\_{10} (E\_{\rm b,cut}})$ and ${\cal S}$ can be interpolated as functions of $\rho\_{\rm ecl}$ and *t*. They are given as $$\label{eq:operator2} {\cal A}(t) = \left\{ \begin{matrix} a(t)+b(t)\log\_{10}\rho\_{\rm ecl} & \mbox{if result}>-3.2 \\ -3.2 & \mbox{otherwise} \end{matrix} \right.,$$ $$\label{eq:operator3} \log\_{10}(E\_{\rm b,cut}) = \left\{ \begin{matrix} c(t)+d(t)\log\_{10}\rho\_{\rm ecl} & \mbox{if result}\leq2 \\ 2 & \mbox{otherwise} \end{matrix} \right.,$$ and $$\label{eq:operator4} {\cal S}(t) = -\frac{1}{\exp[e(t) \times (\log\_{10}\rho\_{\rm ecl}-f(t))]}-g(t).$$ The time-dependent coefficients *a*(*t*) to *g*(*t*) in equations ([eq:operator2]) to ([eq:operator4]) are listed in table (2) in for the ages of of 1, 3, and $5\, {\rm Myr}$. These ages are also the available choices for. They can be interpreted in this context as the times for which external binary evolution, or stimulated evolution, acts on the star clusters. Note that the simulations on which $\Omega\_{\rm dyn}^{\rho\_{\rm ecl}}(\log\_{10}(E\_{\rm b}),t)$ is gauged do not include gas expulsion. The expansion of the star clusters, which inhibits binary-binary interactions, and later on also binary-single interactions, is caused by the energy that is set free by these processes themselves. Thus, $\Omega\_{\rm dyn}^{\rho\_{\rm ecl}}(\log\_{10}(E\_{\rm b}),t)$ saturates, and after $5 \, {\rm Myr}$ at the latest, star clusters have expanded so much that encounters between binaries happen only rarely. However, also gas expulsion will happen in the first few Myr, and also lead to an expansion or even a dissolution of the star clusters. The binary populations of such star clusters do not evolve much beyond this point in time, but become ‘frozen in’. In principle, binaries of a certain binding energy can still have the full range of eccentricities, *e*, between 0 (circular orbit) and 1 (radial orbit). Radial orbits with a certain binding energy, $E\_{\rm b}$, are easier to destroy by encounters with other stars than circular orbits with the same $E\_{\rm b}$. The reason is that the radial orbits are more lightly bound than the average value of $E\_{\rm b}$ for most of the orbital period spent at larger distances. The circular orbits are, in contrast, always bound with the average $E\_{\rm b}$. In practice however, the effect of different eccentricties is secondary next to the effect of different $E\_{\rm b}$, see figure 8 in. Also that internal binary evolution, or pre-main sequence eigenevolution (see Section [sec:ibp]), is making eccentric orbits more circular diminishes the problem. High eccentricities remain after internal binary evolution, or pre-main sequence eigenevolution, only for the wide binaries, which are weakly bound and are therefore easy to destroy with external binary evolution, or stimulated evolution, independent of their *e*. Thus, only considering $E\_{\rm b}$ proves to be sufficient for the purpose of. could also be adapted for highly substructured star clusters, and not just for the monolithic case considered here. For this, the coefficients *a*(*t*) to *g*(*t*) obtained by the fits of equations ([eq:operator2]) to ([eq:operator4]) would potentially be different, but the general code of would remain the same. Note however that run computations of both substructured (clumpy) and rather spherical star cluster setups with initially 100 percent binaries and different binary distribution functions. Among them is also the IBP by. They find that the resulting binary fractions are a weak function of ‘clumpiness’: Substructured clusters produce up to  ≈ 10 percent lower binary fractions after 10 Myr of dynamical evolution when compared to more spherical setups. However, a population initially dominated by binaries is processed strongly in both types of clusters. While in a spherical setup, the processing of binaries depends on their density in their core, the driver of breaking up the binaries is in clumpy clusters the density in the substructures. Although has been gauged in simulations from spherical cluster setups, this finding allows to interpret the cluster density passed to as the density in such clumps in the substructure of a star cluster. The clumps in substructured star cluster later merge, e.g. after a cool collapse, to form the cluster population seen nowadays. Synthesizing a galactic field ----------------------------- Adding up the stellar populations from dynamically evolved and eventually dispersed star clusters yields a galactic field population. Young star clusters follow an embedded cluster mass function (ECMF) described by power-law index *β*, $$\label{eq:ECMF} \xi\_{\rm ECMF}(M\_{\rm ecl}) \propto M\_{\rm ecl}^{-\beta}.$$ An integrated galaxy-wide field BDF (IGBDF) is then arrived at evaluating $$\label{eq:IGIMF} \Phi\_{x}^{\rm field} = \int\_{M\_{\rm ecl,min}}^{M\_{\rm ecl,max}({\rm SFR})} \Phi\_{x,\rm evolved}^{\rm cluster} \xi\_{\rm ECMF}(M\_{\rm ecl}) d M\_{\rm ecl},$$ where the *x* stands for the observed orbital parameter (for instance *q*, *e*, and so on). The limits $M\_{\rm ecl,min}$ and $M\_{\rm ecl,max}({\rm SFR})$ are the mass of the star cluster with the lowest stellar mass and star cluster with the highest stellar mass, respectively, in the star cluster system (SCS). The maximum mass $M\_{\rm ecl,max}$ depends on the star formation rate (SFR) with which the SCS has formed and is calculated from $$\label{eq:SFR-clustermass} \frac{M\_{\rm ecl,max}}{{\rm M}\_{\odot}}=84793\times\left(\frac{{\rm SFR}}{{\rm M}\_{\odot} \, yr^{-1}}\right)^{0.75}$$ according to. Each cluster selected from the above ECMF will contribute its own final binary population depending on its initial density, where lower-mass clusters, on average, will retain a larger binary population, which is contributed to the field population. The individual binary populations that contribute to the field population of a galaxy are found for each star cluster following the recipe in Section [sec:omega]. Implementation in ================= Generating a library of binaries -------------------------------- Before calculating the final properties of a population of binaries, needs to create a birth binary population and calculate the initial binary population from it. The shape of the initial stellar mass function (IMF) is set to the canonical IMF ( and equation [eq:imf] in this paper), but the user can choose the lower and upper mass limit of the IMF, `MLOW` and `MHIGH`, as well as the total number of binaries to be generated, `Nlib`. The part of the programme responsible for creating the initial binary population is archived in the file `Library.c` For creating the birth binary population, chooses 2 × `Nlib` stellar masses, that are randomly selected from the IMF (eq. [eq:imf]), and stores them into an array. The IMF is interpreted here as a pure probabilistic function in the mass interval $[\texttt{MLOW} \ge 0.08 \ {\rm M}\_{\odot}, \texttt {MHIGH} \le 150 \, {\rm M}\_{\odot}]$, where `MLOW` < `MHIGH` [5](#fn5). Thus, together with the canonical IMF (equation [eq:imf]), the normalisation condition 1 = *k**a**i*∫`MLOW``MHIGH`(*m*ʹ)− *α**i* *d**m*ʹ is used to determine the coefficients such that the IMF is continuous. The constant *k* is a normalisation that guarantees that, together with the choices for the *a**i*, the integration of the right side of equation ([eq:imf]) equals 1. In order to select a mass from the IMF determined from equation ([eq:imf]) and ([eq:imfnorm]), the cumulative initial mass distribution is mapped to a uniform random variate *X*(*m*) ∈ [0 : 1], such that *X*(*m*) = *k**a**i*∫`MLOW`*m*(*m*ʹ)− *α**i* *d**m*ʹ. Thus, by integrating equation ([eq:randomIMF]) and solving it for *m*, the stellar mass *m* in dependency of the uniform random variate *X*(*m*) is obtained. By doing this for the *X*(*m*) values, the distribution of the 2 `Nlib` values for *m* is consistent with the canonical IMF. then pairs the stars to binaries by looking at the mass of the next star at first in the array. If the star is less massive than $5 \, {\rm M}\_{\odot}$, it is simply paired with the following star in the list that is also less massive than $5 \, {\rm M}\_{\odot}$, while stars more massive than $5 \, {\rm M}\_{\odot}$ are overlooked. This produces random pairing of the stars less massive than $5 \, {\rm M}\_{\odot}$, because the stars in the array are not sorted. If the star is more massive than $5 \, {\rm M}\_{\odot}$, then the programme searches for the next star that together with the first star has a mass ratio *q* > 0.9. Thus, stars more massive than $5 \, {\rm M}\_{\odot}$ usually have partner stars of almost the same mass, which corresponds to ordered pairing. When for a star with a mass larger than $5 \, {\rm M}\_{\odot}$ no companion star is found such that *q* > 0.9, then simply the next star in the array is picked, independent of its mass, as in the random pairing procedure. This is necessary as one cannot simply add more massive stars to fullfill the mass-ratio criterion since this would change the underlying IMF. However, the effect of random pairing of a few massive stars becomes negligible for a large `Nlib`, because almost every star with a mass $>5 \, {\rm M}\_{\odot}$ finds a suitable partner star in that case. The programme goes through the array of stars until every star is part of a binary. then assigns the period *P* using equation ([eq:dist-a]). The programme proceeds analoguous to the creation of stars from the IMF, that is the inverse of the integral of equation ([eq:dist-a]) is formed, and then uniform random variates between 0 and 1 are mapped onto this function. A random variate *X* ∈ [0, 1] thereby produces the correct distribution in *P* (see chapter 7.2 in for a more profound treatise of the topic). The eccetricities *e*, which are at birth distributed according to equation ([eq:thermal-dist]), are assigned here analogously. By default, the birth values for *e*, *P*, *m*1 and *m*2 are then subjected to internal binary evolution, or pre-main sequence eigenevolution (see or Section[sec:ibp]) to arrive at the initial values. The logarithmic binding energies log10(*E**b*) (in $\log\_{10}[{\rm M}\_{\odot} \, {\rm km}^2 \, {\rm s}^{-2}]$), the logarithmic angular momenta log10(*L*) (in $\log\_{10}[\rm cm^2 \, s^{-1}]$), and the logarithmic semi-major axes log10(*a*) (in $\log\_{10}[\rm AU]$) follow from *P*, *m*1 and *m*2. The apparent separations *s* are projections on the sky of the correspondent *a* that are obtained as follows. There is a vector **r** = (*r*, *ϕ*, cos(*θ*)) that determines the orientation of each binary with respect to the observer. The component of the vector determining the radius *r* is set fixed to 1, but for the angles *ϕ* and *θ*, random variates are chosen, so that the values for *ϕ* are distributed uniformly between 0 and 2*π*, and the values for *θ* are distributed uniformly between  − *π* and *π*. Thus, the semimajor axis rotates with *ϕ* between 0 and 2*π*, and *θ* determines whether the observer sees the binary face-on (cos(*θ*) =  ± 1), egde-on (cos(*θ*) = 0), or somewhere in between. The projected separations are obtained by projecting the 3-dimensional vectors **r** onto an arbitrary, but then fixed plane; for instance the *x* − *y*-plane. Also the quantities $t/{\cal T}$ and *φ* are listed in the output file. These parameters tell the user the orbital-time and phase in which the binary can currently be found, and are distributed uniformly between 0 and 1, and 0 and 2*π*, respectively. About internal binary evolution, or pre-main sequence eigenevolution, note that it can in principle lower the fraction of binaries born in a star cluster. For this, the orbits become so tight that the stars merge to single stars. However, this happens rarely, as can be seen in Section ([sec:birth-initial]). Thus, the binary fraction remains here where it is during internal binary evolution, or pre-main sequence eigenevolution; that is at 100 percent. The only way to lower this value is external binary evolution, or stimulated evolution, which will be dealt with in Sections ([sec:impCluster]) and ([sec:impField]). The user may, besides making choices for `MLOW`, `MHIGH` and `Nlib`, also turn off pre-main sequence eigenevolution and ordered pairing for stars more massive than $5 \, {\rm M}\_{\odot}$ in the creation of the file of initial binary properties. The programme then delivers the birth population of binaries (pre-main sequence eigenevolution off) and pairs stars over the whole mass range completely at random to binaries (ordered pairing off) to test their influence on the resulting populations. However, this is not recommended in the light of the literature concerning pre-main sequence eigenevolution (see ) and ordered pairing of massive binaries (see e.g. ). This should therefore only be done for comparisons to the more realistic populations, where eigenevolution and ordered pairing for massive stars are turned on (the default). Implementing external binary evolution, or stimulated evolution --------------------------------------------------------------- ### General remarks The user has now two possibilities to proceed when has created an initial stellar population from a birth stellar population, namely by assuming that the initial population is a single star cluster (see Section [sec:impCluster]), or by assuming that the initial population is the basis for a field population made up from multiple dissolved embedded star clusters (see Section [sec:impField]). However, some general remarks first. Generally, each encounter of a binary either destroys the binary, or it transforms its set of initial binary parameters into a new set. Thus, in reality, also the encounters that leave the binary intact would change its orbital parameters. however concentrates only on how many binaries per energy bin are destroyed. Figures (8) and (10) in show that the data obtained with N-body simulations agree very well with those obtained analytically (that is with ). This gives confidence that the approximation, that makes, is valid. Thus, if lets a binary survive, it does so with all initial binary parameters unchanged. If the user decides to use in the star cluster mode, the user can specify a cluster mass from the command line. This might seem like a double effort at first, since the user has already set up a library of binaries, corresponding of a specific total mass. However, the purpose of the library is predominantely to control statistical uncertainties occuring especially when dealing with small star clusters. Thus, if the user is interested in the properties of an *average* small star cluster, the user is still encouraged to set the size of the library to $\apprge 10^7$ binaries (also the default in ), even though the total mass of the star cluster will be much smaller. The user should instead choose `MLOW` and `MHIGH` according to the problem to be covered. For instance, if the user decides to investigate very small star clusters of $10 \, {\rm M}\_{\odot}$, the library should definetely not hold stars more massive than $10 \, {\rm M}\_{\odot}$, and according to the actual highest stellar mass is much lower still. However, the programme would run in any case smoothly through the whole process, also when the library contains stars with masses up to $150 \, {\rm M}\_{\odot}$. The mass of the star cluster does, however, matter for its evolution. This can be seen for instance by comparing star clusters of the same half-mass radius, the same evolution time and the same library of binaries, but with different masses. The less massive star clusters will then have more surviving binaries because of their, on average, lower densities. Thus, ultimately the fraction of surviving binaries decreases with increasing $\rho\_{\rm ecl}$. The part of, which is responsible for external binary evolution, or stimulated evolution, is archived in the file `Synth.c`. As the programme proceeds, `Synth.c` calls a number of functions, which are archived in `InitFinDistr.c`. finally stores the requested information into table(s) in the folder `output/`; that is the requested, binned binary parameters of the surviving binaries. Note that even though only the energy distribution is considered when running, the output files will contain the requested orbital parameters, i.e. also different from $E\_{\rm b}$ (see Section [sec:impCluster]). Also the default number of bins for the requested parameter(s) may be (chosen) different from the default for energy bins. All these output tables have three columns. The first column contains the centres of the bins of the chosen orbital parameter. The second column holds the normalised binary fractions, that is the binary fraction divided by the bin width. The third column stores the absolute number of binaries per bin, if in an observation `N` targets have been searched for multiplicity. This number results from a multiplication of the second column with `N` and the bin width. The default value for `N` is 100; a different value may be set by the user using the command-line (see Section [sec:command-line]). Using the default value, the last column will hold the percentage of the stars which are part of a binary, that is the binary fraction in percent. ### Implementing star clusters The function `Synthesize(...)` in the file `Synth.c` performs the tasks required to synthesize a single star cluster population (this section) or the population in a whole galactic field (Section [sec:impField]). It starts by reading the textfile `flE_eigenevolved.dat`, which is included in the package. The textfile `flE_eigenevolved.dat` contains the initial energy distribution, that is the one after internal binary evolution, or pre-main-sequence eigenevolution. It lists binary fractions of $\log\_{10}(E\_{\rm b})$ into an array named `ledf[]`. The small logarithmic energy-intervals in `flE_eigenevolved.dat` range from $\log\_{10}(E\_{\rm b})=-6.0$ to $\log\_{10}(E\_{\rm b})=6.0$ in steps of 0.01 and in units of $\log\_{10}[{\rm M}\_{\odot} \, {\rm km}^2 \, {\rm s}^{-2}]$. They are read into the array and normalised to the width of the equidistant bins through a call of the function `edf(...)` located in the file `InitFinDistr.c`. The array `ledf[]` then contains the initial energy distribution on which the stellar dynamical operator acts (eq. [eq:operator]). then retrieves the values of the parameters ${\cal A}$, ${\log\_{10} (E\_{\rm b,cut}})$ and ${\cal S}$, which enter the stellar dynamical operator $\Omega\_{\rm dyn}^{\rho\_{\rm ecl}}(\log\_{10}(E\_{\rm b}),t)$ (eq. [eq:sigmoidal]) using eqs. ([eq:operator2]), ([eq:operator3]) and ([eq:operator4]). These equations result directly from the time, *t*, for stimulated evolution and the initial cluster density within the half-mass radius, *ρ**e**c**l*, provided by the user through the command-line. The $\Omega\_{\rm dyn}^{\rho\_{\rm ecl}}(\log\_{10}(E\_{\rm b}),t)$ operator can now be evaluated and its function values are stored in another array named `sigm[]` of the same size as the energy array generated previously. Through a simple element-by-element multiplication of the `sigm[]` and `ledf[]` arrays (cf. eq. [eq:operator]), the final energy array `res[]` arises as a result, which contains the normalised binary fractions after external binary evolution, or stimulated evolution. In the function `distribute_nrgs(...)`, the final distribution is then transformed into an energy array `nrgspect_fin[]`. The array `nrgspect_fin[]` contains the *number* of binaries in each bin, through a multiplication of each element in `res[]` with the binwidth and the final number of centre-of-mass systems, that is the added numbers of singles and binaries after stimulated evolution. A similar array `nrgspect_in[]`, which holds the numbers of initial binaries per bin, is created for the initial energy distribution contained in `ledf[]` and the initial number of centre-of-mass systems. The initial number of systems is simply half the number of stars selected from an IMF in a population of 100% binaries. The final number of systems is calculated from the final binary fraction. This final binary fraction is the sum of the entries in `res[]` multiplied by the binwidth, and the average number of stars in the considered cluster. The latter is calculated from the division of the initial cluster mass, *M**e**c**l*, and the average mass of the canonical IMF.[6](#fn6) The array `nrgspect_fin[]` contains no information whatsoever about other orbital parameters distributions. In order to extract other parameters from the the resulting energy distribution for binaries, the library of binaries created before, as described in Section ([sec:GenBinaries]), comes in. Calling the function `populate_initial_energy_distribution(...)`, `Synthesize(...)` has read the user-generated library at its start into an array `fE_in[]`, which is coded in the file `InitFinDistr.c`. The function `Synthesize(...)` passes this array alongside `nrgspect_in[]` and `nrgspect_fin[]` to the function `orbital_parameter_distributions()` contained in the same file, in order to extract different orbital parameters. The idea is to break up dissolved binaries in the user-generated library into its constituents and retain only a number of binaries in this library that correspond to the energy distribution after external binary evolution, or stimulated evolution. Information about the retained binaries is contained in `nrgspect_fin[]`. Since the number of binaries in `nrgspect_in[]`, however, does in no way resemble the number of binaries in the user-generated library contained in `fE_in[]`, each entry, *i*, in the final energy array `nrgspect_fin[]` is scaled by the factor `fE_in[i]/nrgspect_in[i]`. The programme then reads the user-generated library (again) and loops through each of its entries. The binding energy of the binary is compared to `nrgspect_fin[]` until the corresponding energy bin has been found. Is the number of binaries in this bin larger than 0, the binary in the library is left intact and the number of binaries in the `nrgspect_fin[]` bin is reduced by 1. Does the number of binaries in this bin equal 0, all binaries in the final energy spectrum after stimulated evolution have been distributed and the binary in the library is broken up. Dissolved binaries are counted as two single stars and two centre-of-mass systems, while an intact binary counts as one binary and one centre-of-mass system. This procedure is continued until all entries in the `nrgspect_fin[]` array contain zeroes without exception. Thus, leaves the first binaries in the library intact and destroys those coming last in the library. However, for a sufficiently large library, the biases concerning surviving and destroyed binaries are negligible. For each orbital parameter in the library of binaries an individual array has been created. If a binary did not dissolve during the procedure before, a binary is added (+1) in the corresponding bin of the respective orbital parameter array. This way distributions for all other orbital parameters for surviving binaries can be extracted. finally stores the requested orbital parameter distributions into table(s) in the folder `output` (see the end of Section [sec:impGenRemarks]). ### Implementing galactic fields works for galactic fields in principle the same as for star clusters. The only difference is that for galactic fields, additionally deals with an embedded star cluster mass function (ECMF) instead of a single star cluster. To handle this problem, assumes that a star cluster population (SCP) is fully populated after a certain time *δ**t*. Thus, $$\label{eq:deltat} M\_{\rm scp}=\delta t \times SFR,$$ where $M\_{\rm scp}$ is the mass of the star cluster population and the star formation rate (SFR) is set by the user. found a universal value for *δ**t*, which they give as  ≈ 10 Myr. This is the timescale over which the interstellar medium of a galaxy forms a new population of embedded star clusters of combined stellar mass $M\_{\rm scp}$ (see also ). Therefore, *δ**t* = 10 Myr is also set as a fixed value for. Thus, for example, a galaxy like the Milky Way with a SFR of $3 \, {\rm M}\_{\odot} \, {\rm yr}^{-1}$ (consistent with, and the default for the SFR in ) has always $M\_{\rm scp}=3\times 10^7 \, {\rm M}\_{\odot}$, independent of the changeable parameters in. One of these parameters the user may change is the exponent *β* of the embedded cluster mass function (ECMF; e.g. or equation [eq:ECMF] in this paper). The default value for *β* set by is *β* = 2, as proposed by and. The ECMF is needed to estimate its normalisation factor $k\_{\rm ecl}$. For this, using $$\label{eq:normalisation} M\_{\rm scp}=k\_{\rm ecl} \int\_{M\_{\rm ecl,min}}^{M\_{\rm ecl,max}(SFR)} M\_{\rm ecl}^{- \beta} dM\_{\rm ecl}$$ is used, where $M\_{\rm ecl}$ is the stellar mass of the embedded star clusters, $M\_{\rm ecl,min}$ is the lowest mass for embedded star clusters and $M\_{\rm ecl,max}(SFR)$ the highest mass for embedded star clusters. Thus, the normalisation also depends on *β*. With higher values for *β*, the ECMF has more low-mass clusters that are not as efficient in destroying binaries. Note that $M\_{\rm scp}$ is the same value as in equation ([eq:deltat]), that is the mass of a fully populated star cluster population. However, not only *β*, but also the value for $M\_{\rm ecl,min}$ is set directly by the user in. The default value for $M\_{\rm ecl,min}$ is $5 \, {\rm M}\_{\odot}$, corresponding to Taurus-Auriga-like embedded star-clusters (e.g. ). On the other hand, $M\_{\rm ecl,max}$ is set indirectly by the user by the choice of the SFR, because $M\_{\rm ecl,max}$ is calculated with equation ([eq:SFR-clustermass]) from the SFR. Note that only produces output tables for $M\_{\rm ecl,min} < M\_{\rm ecl,max}$ and leaves a failure notice on the computer screen otherwise. If however $M\_{\rm ecl,min} < M\_{\rm ecl,max}$, evaluates $\Omega\_{\rm dyn}^{\rho\_{\rm ecl}}(\log\_{10}(E\_{\rm b}),t)$, that is the operator that determines how many binaries survive in every energy bin. For this, chooses $M\_{\rm ecl,min}$ as the initial mass of a star cluster, and calculates $\Omega\_{\rm dyn}^{\rho\_{\rm ecl}}(\log\_{10}(E\_{\rm b}),t)$ for it. then increases the mass of the star cluster in small steps of $10 \, {\rm M}\_{\odot}$ and calculates $\Omega\_{\rm dyn}^{\rho\_{\rm ecl}}(\log\_{10}(E\_{\rm b}),t)$ iteratively until $M\_{\rm ecl,max}$ is reached. The embedded star cluster half-mass radius, *r**h*, is taken from $$\label{eq:rh-mass} \frac{r\_{\rm h}}{{\rm pc}}=0.10 \times \left( \frac{M\_{\rm ecl}}{{\rm M}\_{\odot}} \right)^{0.13},$$ which is a weak mass-radius relation for embedded star clusters taken from. Equation ([eq:rh-mass]) is required to calculate $\rho\_{\rm ecl}$, which enters the stellar-dynamical operator. Thus, ultimately, $\rho\_{\rm ecl}$ increases with increasing $M\_{\rm ecl}$. The density $\rho\_{\rm ecl}$ is however the only parameter relevant in determining $\Omega\_{\rm dyn}^{\rho\_{\rm ecl}}$ and, thus, the resulting binary fraction. The user may choose to deviate from this default behaviour by setting a constant initial value for *r**h* for all star cluster masses (Section [sec:working]). Equation ([eq:rh-mass]) is a theoretical result, which is needed for the dynamical population synthesis. An analogy is the stellar IMF, which is also not observable, but a mathematical workaround needed for calculations of stellar populations. The reation between radii and stellar masses at birth can be physically interpreted as the state of deepest cloud-core collapse. This corresponds to the assumption that all binaries materialised simultaneously at time zero, which is when the N-body simulation of the embedded cluster with the initial binary population begins, that is the binary population after internal binary population, or pre-main sequence eigenevoulution. Interesting is that the implied relation between densities and masses is in good agreement with the observed cloud core densities (figure 6 in ). In order to account for the different numbers of embedded star clusters in each interval $[M\_{\rm ecl},M\_{\rm ecl}+10 \, {\rm M}\_{\odot}]$, the resulting populations for a star cluster of a given mass are weighted with the ECMF (equation [eq:ECMF]). The numbers in each interval vary depending on the slope *β* – the larger *β*, the fewer embedded star clusters with increasing $M\_{\rm ecl}$ in the ECMF. The period of time, for which $\Omega\_{\rm dyn}^{\rho\_{\rm ecl}}(\log\_{10}(E\_{\rm b}),t)$ is evaluated, is fixed to a value of 3 Myr, instead of leaving the choice between 1 ,3 and 5 Myr to the user, as possible in the star cluster mode in. The idea behind this is that most star clusters do not survive gas expulsion, that is the event that turns an embedded cluster into an open cluster, and gas expulsion has usually taken place before 5 Myr. Thus, after 3 Myr, most binaries are in fact part of the field of the galaxy in question, and do not interact with each other any more (see e.g. the models calculated by ). But even star clusters that do survive gas expulsion do not loose many binaries any more, because the star clusters expand during gas expulsion, such that close interactions between binaries, which are the reason for the destruction of binaries, happen quite rarely afterwards. Also, most of the dynamical evolution of the binaries is over after 3 Myr anyway, especially for more massive embedded star clusters (see and Section [sec:SCexample]). Note that the early application of the galactic field mode of led to the prediction that low-mass dwarf irregular galaxies should have field binary fractions of about 80 precent, while massive elliptical galaxies should have about 30 percent. If $0.1 \, {\rm pc} \apprle r\_{\rm h} \apprle \, 0.2 \, {\rm pc}$ for the half-mass radii of star clusters in the Milky Way initially, then the binary faction of the Milky Way today is about 50 percent, as is observed by. Working with ============ Setup ----- The program files of can be downloaded from GitHub under the web address `https://github.com/JDabringhausen/BiPoS1`. The user should take care that the folder structure is preserved when extracting, because the folders `Lib/` and `output/` are vital for the storage of the output data, and are not created during runtime. The program is compiled by typing `make BiPoS` into the command line of a terminal in the directory where the user has stored the programme files. is started afterwards by typing `./BiPoS` into the command line. Working with from the command line ----------------------------------- After typing `./BiPoS` into the command line, the user is directed to the help menu. The syntax to be used for working with is explained in detail under the different items of that menu. These items are: * `./BiPoS\ genlib help`. With `./BiPoS\ genlib (...)`, the user can create a library of binary systems, which is used in later calls of. First-time users have to generate a library of binaries first, to which can refer in the following steps. * `./BiPoS\ clust help`. With `./BiPoS\ clust (...)`, the user can synthesize a binary population in a single star cluster. * `./BiPoS\ field help`. With `./BiPoS\ field (...)`, the user can synthesize a galaxy-wide field population. * `./BiPoS\ SpT help`. Typing this into the command line shows the default mass ranges for spectral types and a way to change them. Proficient users may skip the help menu and type in directly the syntax for the binary population they want to synthesize. Some remarks are useful when working with : * In principle, the commands `./BiPoS\ clust` and `./BiPoS\ field` suffice to put to work, after a standard library of binaries has been created calling `./BiPoS\ genlib`. It then produces the binary population of a single default star cluster, or default galactic field, respectively, where all parameters the user may specify are set to their defaults. The default for a specific value is overridden by a specification by the user. * understands only the decimal notation for numbers. Thus, for example, the user will have to use `1000` instead of `1e3`, `1e+03`, or similar notations. In the previous example, will break reading at the `e`; that is it interprets `1e3` or `1e+03` as `1`. * ignores parts of a command it does not understand and replaces them by the default values. It does not produce an error message when a command is wrong. Thus, the user should check the spelling of the part of the command if continues to use a default value. * The order of the parts of a command does not matter in. Thus, for instance, the user could type `./BiPoS\ clust OPD=P SpT=A libname=X` or `./BiPoS\ clust libname=X SpT=A OPD=P`, and would return the distribution of the periods of A-star binaries after *t*=3 Myr of dynamical evolution in a star cluster with $M\_{\rm ecl}= 10^3 \, {\rm M}\_{\odot}$ and $r\_{\rm h}=0.24 \, {\rm pc}$ (that is the defaults for *t*, $M\_{\rm ecl}$ and $r\_{\rm h}$; the half-mass radius results from eq. [eq:rh-mass]) in both cases, while using the library of binaries `X` as a reference. Also the ranges and the number of bins in which the binary parameters (like the periods, the semimajor axes, and so on) are divided are pre-defined by default values. However, the user can override the default for any parameter considered in by typing `constrain=par,x,y,z`. In this command, `par` defines the parameter to be constrained, `x` and `y` are the lower and the upper limits of the constraint, respectively, and `z` gives the number of (equal-sized) bins that the constrained zone is divided into. The parameter constrained needs not to be equal to the parameter that is outputted. Thus, for instance, the combination `OPD=a constrain=q,0.2,0.4,4` is possible. In this case, the programme outputs the distribution of semi-major axes *a*, but only for those binaries that additionally have mass-ratios ranging from 0.2 to 0.4. The *z* = 4 bins in *q* would only impact an outputted mass-ratio distribution, but has no effect on the here requested semi-major axis distribution. In this example an arbitrary value can be used. The user can constrain multiple parameters at the same time by using `constrain=par,x,y,z` several times in one command. The output can thus be taylored precisely to the observational constraints of the survey the results of are to be compared to. The user then lets create tables by typing `constrain=par,x,y,z` once for each parameter `par` to be constrained. A similar scheme exists also for the spectral types of stars. By default, returns the binary populations of the requested parameters for stars of all spectral types in one table. If the user was instead only interested in the periods of, say, the G-stars ($0.8 \, {\rm M}\_{\odot} <m \le 1 \, {\rm M}\_{\odot}$) and K-stars ($0.5 \, {\rm M}\_{\odot} <m \le 0.8 \, {\rm M}\_{\odot}$) instead of all stars, the user would add `SpT=GK` to the command. then returns two tables for each requested parameter, namely one containing the distribution of just the G-stars for each requested parameter, and one with the same values for just the K-stars. The user may also choose a user-defined mass range by typing `SpT=u mmin=low mmax=up`. Here, the user would replace `low` with the lower mass limit for stars in ${\rm M}\_{\odot}$, and `up` with the upper mass limit in ${\rm M}\_{\odot}$ for the stars to be considered. Note that constraining the sample to the values an observer has actually targeted may be the solution when finds by default unexpectedly many binaries, or unexpectedly few binaries. For instance, by default, considers binaries with semi-major axes of $-5 \le \, \log\_{10}(a / {\rm AU})\le 5$. If however the observational constrains are such that, say, binaries with $\log\_{10}(a / {\rm AU}) \le -1$ cannot be resolved, while the observational field is too crowded to identify binaries with $\log\_{10}(a / {\rm AU}) \ge 3$ reliably, then the user may add `constrain=a,-1,3,z` to `OPD=a` in the command which sets off the calculation of *a* with. then returns only the binaries with $-1 \le \, \log\_{10}(a / {\rm AU})\le 3$ in *z* equal-sized bins, while it does not consider binaries with $\log\_{10}(a / {\rm AU}) < -1$ or $\log\_{10}(a / {\rm AU}) > 3$. Likewise, the user may not be able to see (very faint) M-dwarfs below, say, $0.3 \, {\rm M}\_{\odot}$ while stars more massive than, say, $2 \, {\rm M}\_{\odot}$ have already evolved into remnants. However, the library the user uses has been generated for, say, stars from $0.1 \, {\rm M}\_{\odot}$ to $10 \, {\rm M}\_{\odot}$, meant to resemble the initial population of the cluster or galaxy field under consideration. If the user does not use any constrains, the full library from $0.1 \, {\rm M}\_{\odot}$ to $10 \, {\rm M}\_{\odot}$ is used for creating the resulting orbital-parameter distributions. If however the user would add `SpT=u mmin=0.3 mmax=2` to the command which sets off, only binaries with companion masses between $0.3 \, {\rm M}\_{\odot}$ and $2 \, {\rm M}\_{\odot}$ are considered in the requested semi-major axis distribution. If furthermore the user would also add `scale=N` to the command which sets off, where `N` is the total number of targets observed and searched for components, then the total number of binaries with primary masses between $0.3 \, {\rm M}\_{\odot}$ and $2 \, {\rm M}\_{\odot}$ expected in the model is returned in the third column of the output table (see end of Section [sec:impGenRemarks]). Examples ======== Examples for the earliest usage of can be found in for galactic fields and in for star clusters. The papers also compare the results from with observed data. However, those papers do not mention explicitly. Therefore, it is introduced here, including some further examples with an emphasis on the usage of. Birth binary population and initial binary population ----------------------------------------------------- [birth+init] The binary fraction per bin in percent over the periods of the binaries for the birth population (red x-symbols), and the population it evolves into through internal binary evolution, or pre-main sequence eigenevolution, namely the primordial population (blue circles). The top panel shows the birth population and the primordial population for binaries with primaries from $0.08 \, {\rm M}\_{\odot}$ to $150 \, {\rm M}\_{\odot}$ (that is all stars), the middle panel the birth population and the primordial population for only the binaries with primaries below $5 \, {\rm M}\_{\odot}$ (that is the binaries with random pairing) and the bottom panel the birth population and the primordial population for only the binaries with primaries above $5 \, {\rm M}\_{\odot}$ (that is the binaries with ordered pairing). We want to test the effects of internal binary evolution, or pre-main sequence eigenevolution in. This is equivalent with checking for the difference between the birth binary population and its evolutionary descendant, the initial binary population (see also figure 1 in ). To do this, a library of binaries has to be generated. The default values for this are 107 binaries with primary masses between $0.08 \, {\rm M}\_{\odot}$ and $150 \, {\rm M}\_{\odot}$, which are the values which we choose here. Thus, we enter `./BiPoS\ genlib libname=lib1.dat -eigen` for the birth binary population and `./BiPoS\ genlib libname=lib2.dat` for the initial binary population. Note that the initial binary population is the default in, because this is usually the basis for the external binary evolution, or stimulated evolution of binaries, that is the standard application of. Also note that `libname=libN.dat` lets write the contents of the library to a file stored in `Lib/libN.dat`. The default is that the library is stored in `Bin_lib.dat`. Also, we want to test the effect of internal binary evolution, or pre-main sequence eigenevolution, for only the binaries with primary masses of $m\_1 < 5 \, {\rm M}\_{\odot}$ and only the binaries with primary masses of $m\_1 > 5 \, {\rm M}\_{\odot}$. This is the limit where switches from random sampling of the binaries ($m\_1 < 5 \, {\rm M}\_{\odot}$) to ordered sampling ($m\_1 > 5 \, {\rm M}\_{\odot}$) by default. This is done by giving the commands `./BiPoS\ genlib libname=lib3.dat mmax=5.0 -eigen`, and `./BiPoS\ genlib libname=lib4.dat mmax=5.0`, respectively, for the library with only primary masses of $m\_1 < 5 \, {\rm M}\_{\odot}$, and `./BiPoS\ genlib libname=lib5.dat mmin=5.0 -eigen`, and `./BiPoS\ genlib libname=lib6.dat mmin=5.0`, respectively, for the library with only primary masses of $m\_1 > 5 \, {\rm M}\_{\odot}$. Note that all library sizes are still 107 binaries. The next step is to output the orbital-period distributions from the libraries before and after eigenevolution, that is to plot the birth and initial distributions. We do this by, for instance, giving the command `./BiPoS\ clust mecl=100000 OPD=P constrain=P,-1,8.5,50 +init -evolve libname=lib2.dat` for the library that contains 107 internally evolved, or pre-main sequence eigenevolved, binaries with primary masses $0.08 \, {\rm M}\_{\odot} \le m\_1 \le 150 \, {\rm M}\_{\odot}$. This command lets put 105 binaries into 50 equal-sized bins with periods from $\log\_{10}(P/{\rm days})=-1$ to $\log\_{10}(P/{\rm days})=8.5$. By putting `+init` into the command, returns the datafiles from which it starts, that is without any external binary evolution, or stimulated evolution. By putting `-evolve` into the command, suppresses the datafiles with external binary evolution, or stimulated evolution. We use the same command also for the other libraries that we mention above. Figure ([birth+init]) shows the comparisons between the birth binary populations and the initial binary populations of all binaries (top panel), only the binaries with pimary masses $m\_1 < 5 \, {\rm M}\_{\odot}$ (middle panel), and only the binaries with primary masses $m\_1 > 5 \, {\rm M}\_{\odot}$ (bottom panel). The scaling of the binaries is such that the summing over all bins gives 100 percent binaries for both the birth binary population and the initial binary population. An alternative interpretation of this scaling is that each of the 50 bins shows the binary fraction that has periods between *P* and *P* + Δ*P* in percent, so that the sum over all binaries is 100 percent. Attentive observers may find that the internal binary evolution, or pre-main sequence eigenevolution, acts more strongly on the randomly sampled binaries (middle panel) than on the binaries with ordered sampling (bottom panel). However, when observing the full canonical IMF, random sampling overwhelms ordered sampling, because of the sparsity of stars with $m > 5 \, {\rm M}\_{\odot}$ over all stars in the canonical IMF. Also, the initial binary population is slightly larger than zero at the bin the most to the left. This corresponds to binaries with periods slightly larger than 0.1 days. This hints at the possibility that also binaries with periods below 0.1 days could exist, but 0.1 days is the lower limit for periods in. However, binaries this close would probably merge to single stars. But this concerns only a minority of binaries anyway. Finally, after a steady rise in binary numbers with increasing periods, there is a large drop in the right-most bin. This may be surprising, because the function from which the periods stem (see equation [eq:dist-a]) only rises over the whole range where it is defined. However, the last bin in Figure ([birth+init]) stretch over the definition range of equation ([eq:dist-a]), and therefore are not fully filled with binaries. But this issue can be remediated by additionally constraining the upper limit in accordingly. Binary period distributions of star clusters in dependency of their mass and age -------------------------------------------------------------------------------- [mass-age] The binary fraction per bin in per cent over the periods of the binaries for different ages. The panels show from top to bottom a star cluster with an embedded cluster mass in stars of $M\_{\rm ecl}=3\times 10^4 \, {\rm M}\_{\odot}$, $M\_{\rm ecl}=3\times 10^6 \, {\rm M}\_{\odot}$ and $M\_{\rm ecl}=3\times 10^8 \, {\rm M}\_{\odot}$. In each panel, the histogramms show which binary fraction survived external binary evolution, or stimulated evolution, for 0 Myr (that is 100 per cent over all bins), 1 Myr, 3 Myr and 5 Myr. [diff-mass] The binary fraction per bin in per cent over the periods of the binaries for different masses. The top curve is the initial binary distribution, which is for all star clusters the same due to the universality of not only the IMF (except under extreme conditions, see e.g. ), but also of the binary distribution function (see Section [sec:discussion] for a discussion). The curves below show from top to bottom how the binary population has evolved after 3 Myr, if it is in an embedded cluster mass of$ M\_{\rm ecl}=3\times 10^4 \, {\rm M}\_{\odot}$, $M\_{\rm ecl}=3\times 10^5 \, {\rm M}\_{\odot}$, $M\_{\rm ecl}=3\times 10^6 \, {\rm M}\_{\odot}$, $M\_{\rm ecl}=3\times 10^7 \, {\rm M}\_{\odot}$ and $M\_{\rm ecl}=3\times 10^8 \, {\rm M}\_{\odot}$ from top to bottom. We want to check the dynamical evolution of the periods of the binary population in star clusters with initial total embedded stellar masses, $M\_{\rm ecl}$, from $3\times 10^4 \, {\rm M}\_{\odot}$ to $3\times 10^8 \, {\rm M}\_{\odot}$. Such star clusters are thought to be the precursors of globular clusters. We assume no peculiarities about the lower mass limit of the stars in star clusters, $m\_{\rm min}$, so that it is at $0.08 \, {\rm M}\_{\odot}$ for all of them (e.g. ). The upper stellar mass limit of star clusters, $m\_{\rm max}$, saturates at about $150 \, {\rm M}\_{\odot}$ for star clusters with total masses $M\_{\rm ecl} \apprge 10^4 \, {\rm M}\_{\odot}$ (e.g. ). Therefore, we generate the default library in (see Section [sec:birth-initial]) and call it `GCs.dat`. Note that the library size (107 binaries) is smaller than required for the largest cluster masses here. With the full canonical IMF, like it is chosen here, the star clusters with $M\_{\rm ecl}=10^8 \, {\rm M}\_{\odot}$ must also have more stars than there are stars in the library of binaries. However, this is no issue in, because calulates the ratio of surviving binaries over the initial binaries in every binary bin, and then selects the binaries from the library until that ratio is reached. Furthermore, we assume the average half-mass radius for each star cluster is given by equation ([eq:rh-mass]). This is equation 7 in and the default assumed by. Finally, we calculate the surviving binary fractions for all three ages that offers, that is 1, 3 and 5 Myr. Thus, for instance, calculates the periods of the binary fraction for a cluster with $M\_{\rm ecl}=3\times 10^4 \, {\rm M}\_{\odot}$, a radius following from equation ([eq:rh-mass]) and a binary evolution time of 3 Myr after typing `./BiPoS\ clust OPD=P mecl=30000 libname=GCs.dat` into the command line. We request a distribution of periods by typing `OPD=P`, for a star cluster with $M\_{\rm ecl}=3\times 10^4 \, {\rm M}\_{\odot}$ by typing `mecl=30000`, and add `libname=GCs.dat` for the name we have given the library of binaries we have created before. For the other values, we are content with the default values. Figures ([mass-age]) and ([diff-mass]) compare the period distribution functions of the binary stars for the calculated star clusters with each other. Figure ([mass-age]) shows a star cluster with an embedded cluster mass of $M\_{\rm ecl}=3\times 10^4 \, {\rm M}\_{\odot}$ (upper panel), $M\_{\rm ecl}=3\times 10^6 \, {\rm M}\_{\odot}$ (middle panel) and $M\_{\rm ecl}=3\times 10^8 \, {\rm M}\_{\odot}$ (lower panel) for 0, 1, 3 and 5 Myr of external binary evolution, or stimulated evolution, of the periods of the binaries. The y-axis shows the percentage of the binaries per period bin. Thus, the sum over all bins together shows the initial fraction of binary stars, which is 100 percent in. It is shown here as the uppermost red lines. Also, the red lines are identical for all three panels, which is a consequence of the universality of the binary distribution function, which is adopted in. The sum over all bins for 1, 3, and 5 Myr of external binary evolution, or stimulated evolution, show the percentage of binaries which survive this time of dynamical evolution. This is 37 percent of the binaries for a cluster with $M\_{\rm ecl}=3\times 10^4 \, {\rm M}\_{\odot}$, 30 percent of the binaries for a cluster with $M\_{\rm ecl}=3\times 10^5 \, {\rm M}\_{\odot}$, 24 percent of the binaries for a cluster with $M\_{\rm ecl}=3\times 10^6 \, {\rm M}\_{\odot}$, 19 percent of the binaries for a cluster with $M\_{\rm ecl}=3\times 10^7 \, {\rm M}\_{\odot}$ and 15 percent of the binaries for a cluster with $M\_{\rm ecl}=3\times 10^8 \, {\rm M}\_{\odot}$ for an age of *t* = 3 Myr, but in fact almost independent of which one of the three ages was chosen. This indicates that almost all external binary evolution, or stimulated evolution, for the shown star clusters happens before the smallest choosable dynamical evolution time, which is 1 Myr. Figure ([diff-mass]) thus only shows the period distributions of clusters with different $M\_{\rm ecl}$ after 3 Myr of external binary evolution, or stimulated evolution. That the binary distributions are independent of whether 1 Myr, 3 Myr or 5 Myr was chosen as the age of the star cluster is however not true for small star clusters, where the external binary evolution takes much longer until it is stopped by gas expulsion, as can be seen in. While asks for $M\_{\rm ecl}$ and $r\_{\rm h}$ on the input, it uses the initial density, $\rho\_{\rm ecl}$, to calculate its output. The combinations of $M\_{\rm ecl}$ and $r\_{\rm h}$ in this example imply a rising $\rho\_{\rm ecl}$ with raising $M\_{\rm ecl}$. A higher density in turn implies a more efficient binary destruction, until only the most tightly bound binaries survive. Thus, with all the other parameters the same, that is, the same library of binaries was used, and always assumes a binary fraction of 100 percent for the initial population, the density of a star cluster decides which percentage of binaries survive dynamical evolution for a time *t*. This happens in a time span of less than a Myr in the case of massive and dense star clusters, like here. In consequence, N-body simulations of young GCs with a very low binary content like the ones in are justified, as is demonstrated here with. Close binaries in the Orion Nebula Cluster ------------------------------------------ ![[fig:duchene2018-0] The observed binary fractions for the ONC normalized by the bin-width over apparent separation. The data from are shown as grey x-symbols and the data from is shown as grey circle. The red dotted lines are the observations for M-type field binaries by and the green dotted lines are the observations for G-type field binaries by. The dashed lines are the field populations according to \ for M-type primaries in green, and for G-type primaries in red.](fig4-duchene2018-0)[fig:duchene2018-0] The observed binary fractions for the ONC normalized by the bin-width over apparent separation. The data from are shown as grey x-symbols and the data from is shown as grey circle. The red dotted lines are the observations for M-type field binaries by and the green dotted lines are the observations for G-type field binaries by. The dashed lines are the field populations according to for M-type primaries in green, and for G-type primaries in red. ![[fig:duchene2018-1] The observed binary fractions for the ONC normalized by the bin-width over apparent separation. The data from are shown as grey x-symbols and the data from are shown as grey circle. They are compared to different binary fractions calculated with. In the upper panel, the binary fractions according to \ are the field population of the M-type primaries indicated as green dashed line, and the field-population of G-type primaries indicated as red dashed line (like in Fig. [fig:duchene2018-0]). The dotted lines are the binary fractions for the ONC with present-day parameters according to ; again for M-type primaries in green and for G-type primaries in red. The blue dotted line is also for the ONC with present-day parameters, but for primary masses of 0.3 \, {\rm M}_{\odot} le m_1 \le 2 \, {\rm M}_{\odot}. It is also shown in the lower panel, but here indicated with the initial density \rho_{\rm ecl}=1050 \, {\rm M}_{\odot} \, {\rm pc}^{-3}, as these are the present-day parameters for the cluster mass and half-mass radius from converted into density. Density is the parameter that is actually relevant for. This is compared to a initial density of \rho_{\rm ecl}=68000 \, {\rm M}_{\odot} \, {\rm pc}^{-3}, which is indicated by a violet dotted curve. \rho_{\rm ecl}=68000 \, {\rm M}_{\odot} \, {\rm pc}^{-3} is the most likely initial density for the ONC according to.](fig5-duchene2018-1)[fig:duchene2018-1] The observed binary fractions for the ONC normalized by the bin-width over apparent separation. The data from are shown as grey x-symbols and the data from are shown as grey circle. They are compared to different binary fractions calculated with. In the upper panel, the binary fractions according to are the field population of the M-type primaries indicated as green dashed line, and the field-population of G-type primaries indicated as red dashed line (like in Fig. [fig:duchene2018-0]). The dotted lines are the binary fractions for the ONC with present-day parameters according to ; again for M-type primaries in green and for G-type primaries in red. The blue dotted line is also for the ONC with present-day parameters, but for primary masses of $0.3 \, {\rm M}\_{\odot} le m\_1 \le 2 \, {\rm M}\_{\odot}$. It is also shown in the lower panel, but here indicated with the initial density $\rho\_{\rm ecl}=1050 \, {\rm M}\_{\odot} \, {\rm pc}^{-3}$, as these are the present-day parameters for the cluster mass and half-mass radius from converted into density. Density is the parameter that is actually relevant for. This is compared to a initial density of $\rho\_{\rm ecl}=68000 \, {\rm M}\_{\odot} \, {\rm pc}^{-3}$, which is indicated by a violet dotted curve. $\rho\_{\rm ecl}=68000 \, {\rm M}\_{\odot} \, {\rm pc}^{-3}$ is the most likely initial density for the ONC according to. state that the Orion Nebula Cluster (ONC) has a companion star fraction in close binaries that is about twice as high as in comparable field stars. While this is true, it cannot be concluded that the initial binary function is not universal, as we demonstrate in this section. Note that in figures ([fig:duchene2018-0]) and ([fig:duchene2018-1]) in this Section, the binary fractions per decade are shown on the y-axis, and not just the plain binary fractions like in Sections ([sec:birth-initial]), ([sec:SCexample]) and ([sec:small]). Thus, summing over all bins does not give the total fraction of binaries over centre-of-mass systems here. The advantage is however that also companion star fractions that were obtained for different bin-widths can be directly compared to each other. Also, these are the numbers that appear in the second columns of the output from for this reason. Figure ([fig:duchene2018-0]) shows the datapoint for the companion star fraction between 10 and 60 AU in the ONC from (grey circle), and the datapoints from (grey x-symbols). The datapoints from are also for companion star fractions in the ONC, but they have wider separations. Also the observed binary fractions for M-type field stars and G-type field stars are shown in this figure. They are compared in this figure with the according results from, which were obtained by entering the command `./BiPoS\ field SpT=M OPD=s constrain=s,0.222,4.112,5` into. Here, generates apparent separations, *s*, of a field population of binaries with M-type stars as primaries. The part `constrain=s,0.222,4.112,5` of the command lets divide the range in *s* from log(*s*/*A**U*) = 0.222 to log(*s*/*A**U*) = 4.112 into five equal-sized bins. This was inspired from the filled data point in figure (6) in, or the grey circle in Fig. ([fig:duchene2018-0]) here. It is matched perfectly by the second bin here, while the other bins cover the full range of data in *s*. The same command was also given for G-stars, except that `SpT=M` was replaced by `SpT=G` for them. thus only picks stars with masses $0.08 \, {\rm M}\_{\odot}< m\_1 \le 0.45 \, {\rm M}\_{\odot}$ for the M-stars from the user-generated library, and with masses $0.8 \, {\rm M}\_{\odot}< m\_1 \le 1 \, {\rm M}\_{\odot}$ for the G-stars. Observationally, it is doubtful whether the stars are complete in, and in, respectively. On the other hand, the sample in contains also some low-mass K-stars, and the sample in contains also some F-stars according to the definitions used in the papers and in. Thus, it cannot be expected that the comparisons between the observations and would match precisely, but they can at least give a hint. Nevertheless, the data from indeed match the observed data from, and, respectively, very well. Also the match of the observed data from with the M-type field binaries is good, while they are a bit too low for G-type field stars. The match with M-type stars is remarkable, because the data from is actually taken in the ONC, and not the field. The data point from on the other hand lies much higher than both the observed and calculated values for field stars. We now enter the information that the ONC is a star cluster with specific properties into. According to, the ONC has an $r\_{\rm h}$ of 0.8 pc *today*, a total mass of about 4500 ${\rm M}\_{\odot}$ and is $\apprle$1 Myr old. This mass implies a density within $r\_{\rm h}$ of 4400 ${\rm M}\_{\odot} \, {\rm pc}^{-3}$. Thus, we enter `./BiPoS\ clust SpT=M OPD=s contrain=s,0.222,4.112,5 mecl=4500 rh=0.8 t=1` for M-stars into. Note that `field` of the above command was replaced by `clust`, in order to tell to synthesize a star cluster population. $M\_{\rm ecl}$, $r\_{\rm h}$ and age of the star cluster are chosen as close as possible to the data in. Also here, the command was entered again for G-stars. Moreover, have observed stars from $0.3 \, {\rm M}\_{\odot}$ to $2 \, {\rm M}\_{\odot}$, and not just M-type stars ($0.08 \, {\rm M}\_{\odot}$ to $0.5 \, {\rm M}\_{\odot}$) or just G-type stars ($0.8 \, {\rm M}\_{\odot}$ to $1 \, {\rm M}\_{\odot}$). This can be accomodated for in by replacing `SpT=M` with `SpT=u mmin=0.3 mmax=2`, which lets look for primaries with masses from $0.3 \, {\rm M}\_{\odot}$ to $2 \, {\rm M}\_{\odot}$ instead of M-stars. Note that also in this case, the data for observed stars is hardly complete, in contrast to the stars with $0.3 \, {\rm M}\_{\odot}< m\_1 \le 2 \, {\rm M}\_{\odot}$ in. However suprisingly, the resulting histogram very much resembles the histogram for just the G-stars, even though the range of stars encompasses also K-stars and bright M-stars at low stellar masses, and F-stars and A-stars at high stellar masses. Finally, the ONC may be in the process of gas expulsion, and was therefore very likely denser when it formed. According to figure (3) in, the most likely initial density of the ONC was 68000 ${\rm M}\_{\odot} \, {\rm pc}^{-3}$, and not 1050 ${\rm M}\_{\odot} \, {\rm pc}^{-3}$, as it is today. Thus, we replace `mecl=4500 rh=0.8` with `rho=68000` in the above command. The results with for the most likely parameters of the ONC are seen as the violet curve in the lower panel of Fig. ([fig:duchene2018-1]). It is still more than 1*σ* below the observational value, but definitively much less than 2.7*σ*, as for the M-type field stars observed by. This is not a problem by itself, and could happen by pure chance, even if the model in were correct. However, noted that also other star clusters show an excess of close, very low-mass binaries, if the observations are compared to. This may be understood with brown dwarfs interspersed to the stars at low stellar masses. The brown dwarfs would essentially form as very massive planets in close vicinity of their primary star, while actual formation of low-mass stars would take place farther away. This problem does not exist with wider binaries, because then also for very low-mass binaries, the wider ‘star-like’ formation mode predominates over the closer ‘brown-dwarf-like’ formation mode. assumes however that all stars, also those with extremely low masses, form in the ‘star-like’ mode. The ONC was not observed at separations comparable with closer star clusters until. However, now that it is, the binary excess appears also here. We return to the issue of the ‘brown-dwarf-like’ stellar population in Section ([sec:BD]). On the other hand, the data for the observational binary fractions at wider separations from fit the results from taylored to the observations by very well. The ONC has also been found to be more complex with the discovery that it consists of three subpopulations of different ages. This is supported by with data from Gaia. also discovered evidence for wide binaries in the ONC, which most likely belong to the youngest, just-born population. Also, much more stellar dynamical modelling is needed, which takes the removal of photo-ionising stars through ejection into account. Two small star-forming regions in Orion --------------------------------------- ![[fig:OB1a+OB1b] The observed binary fractions per bin of the stellar associations OB1a (green circles) and OB1b (blue exes) according to over apparent separation. Shown as horizontal lines of the same colours are correspondent field populations according to. The top (red) lines are the initial binary distribution, which is for both stellar associations almost the same. Seen in the observed range are about 32.6 percent of all initial binaries (=100 percent).](fig6-OB1a+OB1b)[fig:OB1a+OB1b] The observed binary fractions per bin of the stellar associations OB1a (green circles) and OB1b (blue exes) according to over apparent separation. Shown as horizontal lines of the same colours are correspondent field populations according to. The top (red) lines are the initial binary distribution, which is for both stellar associations almost the same. Seen in the observed range are about 32.6 percent of all initial binaries (=100 percent). OB1a and OB1b are stellar associations in Orion (see and for details on them). consider in their paper 658 centre-of-mass (CMS) systems in OB1a and 363 CMS systems in OB1b. These CMS systems can either be single pre-main-sequence (PMS) stars or they can have a companion star. The primary stars (whether they turn out to be singles or binaries) are in the mass range $0.3 \, {\rm M}\_{\odot} <m\_1 <0.9 \, {\rm M}\_{\odot}$. We assume for simplicity that the primaries are complete in this mass range, although the truth is probably more complicated than that: state that PMS stars with $0.4 \, {\rm M}\_{\odot} <m\_1 <0.8 \, {\rm M}\_{\odot}$ dominate in their figure (6), even though the figure shows PMS stars with $0.3 \, {\rm M}\_{\odot} <m\_1 <0.9 \, {\rm M}\_{\odot}$, and they furthermore say that the mass estimates of PMS stars are highly uncertain anyway. detected companions to 74 PMS stars in OB1a and to 61 PMS stars in OB1b. They are thus binaries[7](#fn7). have also set the detection limit for a binary to Δ*J* < 3 mag, where Δ*J* is the magnitude difference of the two paired stars in the *J*-band of the 2MASS-survey. If the mass ratio of the stars, *q*, is calculated from their luminosities as *q* ≈ 10− 0.3Δ*J*, like in, 0.13 < *q* < 1 for 3 > Δ*J* > 0. This means that almost the complete range of values between 0 and 1 for *q* is covered, and if the distribution of the values for *q* follow from a random selection of the companion stars from the IMF in this mass range, almost every binary has been detected. Random pairing of stars from the IMF in that mass range is indeed proposed in, and, and is also implemented as default in. Thus, we assume that the 658 CMS systems in OB1a are 658+74=732 PMS stars and the 363 CMS systems in OB1b are 363+61=424 PMS stars in the mass range $0.3 \, {\rm M}\_{\odot} <m <0.9 \, {\rm M}\_{\odot}$ (now primary and secondary stars together). A normalisation value *k* for the IMF in OB1a and OB1b, respectively, over the considered mass range can be calculated over *N* = *k*∫0.30.9*ξ*(*m*) *d**m*,  where *N* is the number of stars in OB1a and OB1b with masses $0.3 \, {\rm M}\_{\odot} <m <0.9 \, {\rm M}\_{\odot}$, respectively. Assuming that the PMS stars in OB1a and OB1b follow the canonical IMF, this leads to *k* = 617.4 for OB1a and to *k* = 357.6 for OB1b. If it is furthermore assumed that OB1a and OB1b formed stars over the full range of stellar masses from $0.08 \, {\rm M}\_{\odot}$ to $150 \, {\rm M}\_{\odot}$, then OB1a formed 2441 stars and OB1b formed 1414 stars. Given that the average mass of a star is $0.59 \, {\rm M}\_{\odot}$ for the canonial IMF for stellar masses from $0.08 \, {\rm M}\_{\odot}$ to $150 \, {\rm M}\_{\odot}$, the total mass of OB1a would be roughly $M\_{\rm ecl}=1400 \, {\rm M}\_{\odot}$ and the total mass of OB1b roughly $M\_{\rm ecl}=800 \, {\rm M}\_{\odot}$. The above numbers for *N* and *M* of OB1a and OB1b are only rough approximations, because OB1a and OB1b are stellar associations. This means that even before their dissolution, they were complexes of multiple small, young star clusters, and not single star clusters. For this reason, they are rather created with the field mode than with the star cluster mode in. We will do this in the following, but we still use the calculated values for *M* and *N*, where needed. According to, OB1a has an age of  ≈ 11 Myr and OB1b an age of  ≈ 5 Myr, which we take also as durations for star formation in them. Thus, OB1a formed its total stellar mass of 1400 ${\rm M}\_{\odot}$ with an average star formation rate (SFR) of $1.27\times 10^{-4} \, {\rm M}\_{\odot} \, {\rm yr}^{-1}$, and OB1b its total stellar mass of 800 ${\rm M}\_{\odot}$ with an average SFR of $1.6\times 10^{-4} \, {\rm M}\_{\odot} \, {\rm yr}^{-1}$. This can be translated into maximum stellar masses that were formed with those SFRs with the theory of integrated galactic stellar initial mass functions (IGIMFs). According to equation (14) in, the result is $m\_{\rm max}=9.9 \, {\rm M}\_{\odot}$ for the most massive star in OB1a and $m\_{\rm max}=10.6 \, {\rm M}\_{\odot}$ for the most massive star in OB1b. Thus, in retrospective, the choice of $M\_{\rm scp}=1400 \, {\rm M}\_{\odot}$ for OB1a and $M\_{\rm scp}=800 \, {\rm M}\_{\odot}$ for OB1b proves to be appropriate, because only about 0.5 per cent of the stars which formed with the full canonical IMF are more massive than $10 \, {\rm M}\_{\odot}$, and they account for about 20 per cent of the total mass of the full canonical IMF. We create binary libraries with by typing `./BiPoS\ genlib libname=OB1a.dat mmax=9.9` for OB1a and `./BiPoS\ genlib libname=OB1b.dat mmax=10.6` for OB1b into the command line. For the lower limits of the libraries, we use the default values of $0.08 \, {\rm M}\_{\odot}$ by not specifying any other value, and `OB1a.dat` and `OB1b.dat` are the chosen library names (see also Section [sec:birth-initial] for the creation of binary libraries). The quantities that searched for are the apparent projected separations of their binaries. They find values between 0.6 and 19.2 arcsec. The mean distances of OB1a and OB1b are 363 pc and 388 pc respectively, so that not much information is lost if both associations are assumed to be 370 pc away. Hence, this distance is adopted here as well, and the measured apparent projected separations of the binaries turn into real projected distances between 222 AU and 7104 AU. divide this range into five equal-sized bins in logarithmic distance. Thus, as a first step, can predict the binary population of OB1a with the command `./BiPoS\ field OPD=s constrain=s,2.34,3.85,5 constrain=q,0.1,1,z SpT=u mmin=0.3 mmax=0.9 sfr=0.000127 libname=OB1a.dat`. The command-line argument `OPD=s` tells to put the projected separations in the output file. The argument `constrain=s,2.34,3.85,5` tells not to use the default values for the binning of the projected separations *s*, but to make 5 equal-sized bins for them from $\log\_{10}(s/{\rm AU})=2.34$ to $\log\_{10}(s/{\rm AU})=3.85$. The argument `constrain=q,0.1,1,z` limits the mass ratios from 0 < *q* ≤ 1 to 0.1 ≤ *q* ≤ 1. Note that the letter `z` is a dummy, because there are no output files from for *q*. Internally, interprets this letter as that the values for *q* are not to be sorted into bins[8](#fn8). The argument `SpT=u` tells to use a user-defined interval for stellar masses. This interval is defined with the next two parameters, `mmin=0.3` and `mmax=0.9`, which give the lower mass limit and the upper mass limit in Solar units. The argument `sfr=0.000127` defines the SFR, which according to equation (6) in is equivalent to setting the upper mass limit for star clusters. Finally, the argument `libname=OB1a.dat` tells to use the library `OB1a.dat` in the folder `Lib/`, where the library of that name is stored. For the binary population of OB1b, the same syntax is used, except that `sfr=0.000127` is replaced by `sfr=0.00016` and `libname=OB1a.dat` is used instead of by `libname=OB1b.dat`. The results are in both cases a total binary fraction of about 20 percent with, while found total binary fraction of 11.2 ± 1.3 percent in OB1a and 16.8 ± 2.2. percent in OB1b. Thus, the results from do not only show a significant difference, but are also notably lower. Now, let us suppose that the star formation time scale was shorter than originally presumed, so that the SFR was higher, and conseqently more massive star clusters could form. However, even if the SFR was as high that $1400 \, {\rm M}\_{\odot}$, and $800 \, {\rm M}\_{\odot}$ in stars, respectively, could form in just 1 Myr, the total binary fractions become 17.0 percent for OB1a and 17.5 percent for OB1b. Let us then suppose that not all stars which formed in OB1a and OB1b are seen, so that the SFR was higher still. For the default version of selecting the $r\_{\rm h}$ of the star clusters in, namely with equation (7) in, we arrive at the observed binary fractions for an SFR of $2\times 10^{-3} \, {\rm M}\_{\odot} \, {\rm yr}^{-1}$ in OB1b and at an SFR of $4 \, {\rm M}\_{\odot} \, {\rm yr}^{-1}$ in OB1a. In the case of OB1a, this would be more than the actual SFR of the whole Milky Way according to! Thus, we discard this possibility. Also changing the slope, *β*, of the ECMF is as ineffective as raising the SFR for solving the problem. This is no surprise, because for effectively changing the ratio between high-mass clusters and low-mass clusters, high-mass clusters have to form in the first place. High-mass star clusters form however only at a high SFR (see equation 6 in ). Thus, we go with the standard value in, which is *β* = 2. This leaves changing $r\_{\rm h}$ of the star clusters as the only viable option for effectivly changing the binary fraction of OB1a and OB1b. This is not to say that all star clusters in OB1a formed with this $r\_{\rm h}$ and all star clusters in OB1b formed with another, smaller $r\_{\rm h}$. What is said, though, is that the star formation in the said stellar associations was equivalent to all stars forming in star clusters of that radius, while the star clusters can still have their individual $r\_{\rm h}$. This notion is not in tension with equation (7) in, or equation ([eq:rh-mass]) here, the default choice for $r\_{\rm h}$ with $M\_{\rm ecl}$ in. But it does indicate that the said equation has not only a most likely value, but also a large variance. For instance, a star cluster with $M\_{\rm ecl}=1000 \, {\rm M}\_{\odot}$ has a most likely value of $r\_{\rm h}=0.25 \, {\rm pc}$ according to equation ([eq:rh-mass]), but also values between $r\_{\rm h}=0.11 \, {\rm pc}$ and $r\_{\rm h}=0.55 \, {\rm pc}$ are possible within 1*σ* of that equation. This implies significant changes for the surviving binary fractions, and shows that leaving the user the possibility to change the average $r\_{\rm h}$ is an advantage. This is especially so when dealing with rather small populations, like in this section. The results of fits-by-eye for a varying $r\_{\rm h}$ can be seen in Fig. ([fig:OB1a+OB1b]). In this figure, `rh=0.07` was added to the command for `OB1a.dat`, and `rh=0.13` was added to the command for `OB1b.dat`, to fix $r\_{\rm h}$ to 0.07 pc, and 0.13 pc, respectively. We compare this with the most likely $r\_{\rm h}$ of a star cluster with $M\_{\rm ecl}=100 \, {\rm M}\_{\odot}$, that is about the mass of the most massive star cluster with a SFR of $\approx 1.5\times 10^{-4} \, {\rm M}\_{\odot} \, {\rm yr}^{-1}$. It would be about 0.18 pc according to equation ([eq:rh-mass]). On the other hand, the most likely value for $r\_{\rm h}$ according to equation ([eq:rh-mass]) for a star cluster with $M\_{\rm ecl}=5 \, {\rm M}\_{\odot}$ is 0.12 pc. Thus, the actual average value for $r\_{\rm h}$ is still roughly consistent with the most likely value for $r\_{\rm h}$ according to equation ([eq:rh-mass]) in the case of OB1a, but definitely below the most likely value for $r\_{\rm h}$ in the case of OB1b. The red lines in Fig. ([fig:OB1a+OB1b]) indicate the initial binary distribution, that is before external evolution, or stimulated evolution, of the binaries. It is almost identical for OB1a and OB1b, as the SFRs in both associations are almost the same. In general however, while both binary distributions would have a binary fraction of 100 per cent in total, their distributions would be different because of the different $m\_{\rm max}$ of the stars. At the moment, there is however no reason to assume that the initial binary populations of OB1a and OB1b were different in the sense that, for example, OB1b formed with a binary fraction of 100 percent and OB1a did not. The difference they show nowadays can also be explained by them forming both with 100 percent binaries and having credible variations of their parameters at their births. These parameters are the radii and masses of the embedded clusters which contributed to the associations; see for example for the type of embedded clusters which are creating the currently forming new OB association in Orion. Discussion ========== Are binaries created in? ------------------------- It is stated in Section ([sec:impGenRemarks]) that can only destroy binaries, but yet there are more binaries after 5 Myr than after 3 Myr of evolving the same star clusters with in some cases in Section ([sec:SCexample]). This difference is only about 1 per cent or less, when 100 per cent stands for the initial number of binaries, but undeniable. This seeming contradiction is resolved in Section ([sec:omega]), where it is stated that the user may choose between 3 options for the considered time, and then looks up the parameters accordingly. The three sets of parameters *a*(*t*) to *g*(*t*) (see equation [eq:operator2] to [eq:operator4] for those parameters) are howewer fits to direct *N*-body simulations. In general, *N*-body simulations can also form binaries from single stars (see e.g. ). This is however unlikely to be the case here, because the *N*-body simulations on which is based started at a binary fraction of 100 percent. Thus, binaries can only be destroyed in the beginning. When the first single stars appear in the *N*-body simulations, binaries can also be created, but they continue to be overwhelmed by the ongoing destruction of binaries. Later on, an equilibrium between binary destruction and creation develops in *N*-body simulations, that is the binary fraction does not diminish any more, but it also cannot grow. Thus, the eventual growths by one percent in the binary fractions from 3 Myr to 5 Myr in are probably due to uncertainties in the fits, rather than to the actual creation of binaries in the *N*-body simulations to which are fitted. The most massive star clusters ------------------------------ The IMF of star clusters with the highest masses may have been top-heavy, that is over-abundant in high-mass stars, compared to the canonical IMF (e.g. ). The mass-range for star clusters where the canonical IMF changes to an increasingly top-heavy IMF has been placed roughly above $10^6 \, {\rm M}\_{\odot}$. The slopes of the IMF in on the other hand are fixed to the ones of the canonical IMF and the user may only change the lower and the upper mass limit for stars. Thus, it may seem questionable how far the results in Section ([sec:SCexample]) for $M\_{\rm ecl}=3\times 10^7 \, {\rm M}\_{\odot}$, and especially those for $M\_{\rm ecl}=3\times 10^8 \, {\rm M}\_{\odot}$, can be trusted. Moreover, $\Omega\_{\rm dyn}^{\rho\_{\rm ecl}}(\log\_{10}(E\_{\rm b}),t)$ in has been fitted to *N*-body simulations of star clusters in. They have embedded cluster masses from $M\_{\rm ecl}=10 \, {\rm M}\_{\odot}$ to $M\_{\rm ecl}=3000 \, {\rm M}\_{\odot}$ and initial (embedded) half-light radii from $r\_{\rm h}=0.1$ pc to $r\_{\rm h}=0.8$ pc. This corresponds to densities within $r\_{\rm h}$ between $2.3 \, {\rm M}\_{\odot} \, {\rm pc}^{-3}$ and $3.6 \times 10^5 \, {\rm M}\_{\odot} \, {\rm pc}^{-3}$, as calculates the binary survival rates with the densities, rather than relying on combinations of $M\_{\rm ecl}$ and $r\_{\rm h}$. The most massive star cluster in Section ([sec:SCexample]) has however $M\_{\rm ecl}=3\times 10^8 \, {\rm M}\_{\odot}$, and has consequently by default an average density within $r\_{\rm h}$ of $1.8 \times 10^7 \, {\rm M}\_{\odot} \, {\rm pc}^{-3}$, or about 50 times larger than the maximum density in. More broadly speaking, places no limits on the densities it allows, but the more the densities in are below or above the densities covered in, the more caution is advised. However, $\Omega\_{\rm dyn}^{\rho\_{\rm ecl}}(\log\_{10}(E\_{\rm b}),t)$ saturates above a certain $\rho\_{\rm ecl}$, so even higher densities make no large differences any more. The meaning of the initial distribution functions ------------------------------------------------- What do the initial distribution functions (IMF, birth binary distribution functions) and values (initial half-mass radius) applied as defaults in mean? Such distribution functions are needed in order to model stellar populations, because it is not possible to generate realistic populations of stars and binaries on star-cluster- and galaxy-scales from radiation hydrodynamical star-formation simulations. The IMF is well known not to be observable but it is a mathematical tool needed to synthesize stellar populations across space and time. The mathematical model of the IMF can be constructed from star counts by statistically accounting for all stars formed in one embedded cluster. The IMF shows an insignificant variation with physical conditions in the present-day star-forming activity observed in the Local Group, but has been found to vary systematically with metallicity and density. Likewise, the birth binary distribution functions are also not observable since no time exists when the fully assembled initial population can be observed because the binary systems evolve upon formation with their formation times not being equal. This can be seen in and. In his hydodynamical simulations of the formation of a star cluster, binaries begin to evolve and even to dissolve well before star formation in general has come to an end. But, just as with the IMF, mathematical formulations of the distribution functions can be constructed from the observational data by taking this evolution into account, as detailed in Section ([sec:theory]). The birth binary distribution functions are, like the IMF, largely invariant for present-day star-formation in the Local Group. But they are expected to vary with extreme conditions, just as the IMF, since both are causally connected. Thus, it is likely that the very metal-poor populations in the Milky Way that formed at a high redshift may have formed with somewhat different distribution functions (IMF, birth binary properties), and that subtle differences may exist between populations formed with different metallicities in the present-day. A simple argument explains why the vast majority of stars should have been be born as binaries: First of all, when a proto-stellar cloud core collapses it will always have too much angular momentum which constitutes a barrier opposing gas infall towards a single centre (e.g. ). If the collapse would in most cases form a stellar system containing more than two stars, then such a system decays on a crossing time scale, which is 104 − 105 yr. Thus, a triple system would lead to a binary and a single star while a quadruple would decay to a binary and two single stars in two successive decay steps. But low-density (dynamically not significantly processed) star-forming regions show a binary fraction of near unity at an age of about 1 Myr. This means that largely only binaries form, with a certain fraction of hierarchical higher-order systems (inner tight binary with a third companion on a wider orbit, say). The birth half-mass radius of an embedded cluster, $r\_{\rm h}$, has been derived by by using the observed binary populations in star clusters to infer their initial density. The derived birth half-mass radii are not observable and represent a mathematical model which is equivalent to the physical reality of the actual formed stellar population. The birth half-mass radii can be understood to represent the instant in time when the entire stellar population appears instantaneously in the deeply embedded cluster. Brown Dwarfs? ------------- In, only the binary evolution of stars is treated, but brown dwarfs are missing. If brown dwarfs are understood as objects which have too little mass to ignite hydrogen burning, but are the same as stars in all other respects, including their formation, this poses a potential problem at the lower mass limit of the IMF. The reason is that the IMF in has $m\_{\rm min}=0.08 \, {\rm M}\_{\odot}$, that is about the mass required for hydrogen burning, as a lower limit. However, if the objects below $0.08 \, {\rm M}\_{\odot}$ are numerous and continue to be star-like, there is no reason why a star above the hydrogen-burning limit should not have a star below the hydrogen-burning limit as a companion. Such binaries, though, are not produced in. However, using the pairing properties of brown dwarfs and of stars, concluded that brown dwarfs follow an IMF$\_{\rm BD}$, which is related to, but different to that of stars. Following up on this, normalised the theoretical IMFs from and to the observed one, such that they produce an equal number of objects in the stellar regime. The shape of the theoretical IMFs and the observed IMF fit in the stellar mass range above $m=0.2 \, {\rm M}\_{\odot}$. If they however subtract the theoretical IMFs from the observed one, they find many objects unaccounted for below a mass of $\approx 0.2 \, {\rm M}\_{\odot}$. Hence, they conclude that below a mass of $\approx 0.2 \, {\rm M}\_{\odot}$, there must be also a process besides direct fragmentation of a gas cloud, which they assume is the process responsible for the production of stars. This could be the production of brown dwarfs from dynamically pre-processed gas, for example through the fragmentation of circumstellar disks. The hydrodynamical modelling of leads indeed to an impressive agreement with observations of the mass distribution of formed brown dwarfs, as well as their binary distributions. Therefore, argue that a star-like population and a brown-dwarf-like population of objects below stellar masses of $0.2 \, {\rm M}\_{\odot}$, while above that mass only the star-like population exists. The star-like population forms objects just like conventional stars, that is stars massive enough to support hydrogen burning. Indeed, those stars are part of the star-like population, but the star-like population continues also to masses with lower masses. The brown-dwarf-like population on the other hand forms its objects differently than the star-like population, but like the star-like population on both sides of the hydrogen burning limit. Note that it is not important here how exactly objects in the two populations form; it suffices that they form differently. Thus, at a mass of $\approx 0.1 \, {\rm M}\_{\odot}$, both populations are present. There is of course no reason why two different populations should have the same properties. Indeed, state that the binary fraction at birth of the star-like population is near 100 percent, while the binary fraction at birth of the brown-dwarf-like population is about 20 percent. This makes clear why mingling the two populations in should be done with extreme care, as assumes that they all are star-like objects and therefore all have a companion at the beginning. Potential trouble with this can be avoided by only considering stars with a mass larger than $\approx 0.2 \, {\rm M}\_{\odot}$, that is by excluding brown-dwarf-like objects. However it makes no sense to extend the range of considered objects in below $0.08 \, {\rm M}\_{\odot}$, because the objects rarely pair at their birth due to their different origin. This is known as the brown dwarf desert (e.g. ). In the future, can be extended to also process the brown dwarf population dynamically. Hardened binaries ----------------- currently computes the disruption of binaries in embedded clusters, but the stellar-dynamical processing of a binary population also include the hardening of orbits. A small fraction of hard binaries (those with an orbital velocity larger than the average velocity dispersion in the embedded cluster) will, on average, gain binding energy in an encounter and thus exit the encounter with a smaller semi-major axis and a shorter period and typically a larger eccentricity. The hardening of binary star orbits through stellar encounters can be included into in the future using statistical rate equations and a hardening operator, which quantifies which orbits harden and by how much in a dynamically processed population. The hardened binaries appear in the eccentricity-period diagramme as ‘forbidden binaries’ before they rapidly circularise and tighten further with possible merging. They thus comprise a potentially powerful tool to study the interiors of stars. Summary and Conclusion ====================== According to and, all stars are born in embedded star clusters, although many of these dissolve within a few Myr. A property of stars is that many of them have a companion star, that is that they are binaries. Because the binaries settle into more stable configurations by themselves (pre-main sequence eigenevolution, or internal binary evolution), and are especially in the dense environments of star clusters disturbed by other stars and binaries (stimulated evolution, or external binary evolution), a correct treatment of stellar evolution and stellar dynamics involves a treatment of binary dynamics as well. While the existence of binaries is a common feature of all stellar populations, the exact number of binary stars per centre-of-mass systems (that is single stars and multiple stars together) can be quite different from environment to environment. Many stars are born as binaries, but how high is the binary fraction at birth? and argue that it was extremely high in the beginning and declined afterwards through stellar encounters, and that it can be set for all practical purposes to 100 percent in the beginning. However, chances to observe such a very young population with nearly 100 percent binaries are very small, because the widest binaries dissolve almost instantly to single stars. The quick dissolution of wide binaries is exactly the point where the computer programme comes in. Rather than calculating a star cluster from its birth with 100 percent binaries in a very expensive direct N-body simulation (e.g. Nbody6; ), it allows the user to skip the first Myrs of evolution, and set up the star cluster correctly. solves the problem of the binary evolution within the first Myr based on a stellar-dynamical operator, $\Omega\_{\rm dyn}^{\rho\_{\rm ecl}}$, introduced in and. This operator is basically a function which determines how many binaries survive dynamical evolution in dependency of the binding energy of the binary. It depends on the initial stellar density, $\rho\_{\rm ecl}$, within the half-mass radius, $r\_{\rm h}$, as well as the time, *t*, for which the dynamical evolution of the binaries proceeds. The latter can be set equal to the time of gas expulsion, which happens after a few Myr and leads to the dramatic expansion, if not the dissolution of the star cluster. Thus, the environment for efficient external evolution of binaries, or stimulated evolution, is destroyed by gas expulsion. Equivalently to $\rho\_{\rm ecl}$, the user may also choose the half-mass radius, $r\_{\rm h}$, and the initial embedded cluster mass in stars, $M\_{\rm ecl}$, before starts its calculation. The operator $\Omega\_{\rm dyn}^{\rho\_{\rm ecl}}$ has been fitted in to numerical N-body simulations done with Nbody6. started with star clusters with 100 per cent binaries and evaluated $\Omega\_{\rm dyn}^{\rho\_{\rm ecl}}$ for 1, 3, and 5 Myr. Thus, these are the values for *t* available in. calculates the requested orbital parameter(s), such as periods, semimajor-axes, apparent separations, and many more, sorted into bins of binary fraction in intervals of those parameters. The benefit of using is that it provides an extremely fast and good approximation (solutions within seconds or minutes) to the dynamical processing of a population of binary stars, compared to the exact, but much more time-consuming solution with an N-body programme. can also generate galactic field populations, that is stellar populations that are not in star clusters any more. For this, relies on the assumptions that all stars were born in embedded star clusters, and that these embedded star clusters follow a power law with a slope of *β* ≈ 2. Thus, follows essentially the theory of the IGIMF for the stellar population (e.g. ) when it calculates its binary content. assumes here that gas expulsion sets in after 3 Myr. It efficiently stops further binary processing in the cluster through its expansion and dissolution, and sheds its evolved population into the field. In the output of, there are single stars and binaries only, while higher-oder multiples do not appear. They exist in reality though, and their percentage is surprisingly high in massive stars, see for instance. However, low-mass stars overwhelm high mass stars for in the IMF taken in (see equation [eq:imf]), and for all stars, the number of higher-order multiples is small compared to singles and binaries. For instance, find 2743 single stars and 2508 binaries in the stellar population of the Galactic Field, but only 23 triples and 14 quadruples. Therefore, higher-order multiples are neglected in, and the success of with reproducing the observed binary populations lends credence to the initial assumptions on which is based, namely the shape of the initial period distribution function of the binaries, and the initial binarity of 100 percent. For the treatise of higher-order multiples, recently published the Multiple Stellar Evolution code (). takes a single higher-order multiple system as its input, and evolves it, including its stellar evolution and post-Newtonian terms, besides evolution of its sub-binaries. However, does not, unlike, perform dynamical processing in embedded clusters of an entire birth population of stars. It will be an important task in the future to combine both codes to improve the population synthesis of galactic field populations. It is also important to study how changing the initial conditions affect the outcome of. In particular, it will be very interesting to investigate degeneracies: for example, a larger birth half-mass radius might be compensated by a larger star formation efficiency, which leads to the same dynamically processed binary-star population. In this case, the initial conditions would be ´dynamically equivalent’. Note that there are also other, earlier successful attempts to explain binary populations with, although it was never properly introduced until this paper. This concerns the papers by, and, even though adapted versions of BiPoS were used for some of them. There are two caveats regarding the ranges in which is applied. The first caveat is for star clusters with masses $M\_{\rm ecl}>10^6 \, {\rm M}\_{\odot}$ in the beginning. This is because GCs with masses $M \apprge 10^6 \, {\rm M}\_{\odot}$ today show evidence that their IMF was not canonical, but top-heavy. The IMF in is however fixed to the canonical slopes. The second caveat is for stars with masses $m \apprle 0.2 \, {\rm M}\_{\odot}$. In this mass range, an additional brown-dwarf-like mass function with different binary properties comes in besides the star-like mass function, which is the single mass function above $m= 0.2 \, {\rm M}\_{\odot}$. However, assumes that all stars belong to the star-like population (see ). A few examples of how to operate with are also given in this paper, namely 1.) a comparison between the birth population of binaries with the initial population after internal binary evolution, or pre-main sequence eigenevolution, 2.) a comparison of the binary survival fractions of progenitor models for globular clusters over the range of their embedded stellar masses, 3.) a discussion of the high binary fraction at low apparent separation discovered by in the Orion Nebula Cluster, and 4.) a demonstration of the galactic field mode in with two associations of O- and B-stars. Future developements of BiPoS might include adding brown dwarfs, hardening of binaries, a variable IMF, and an update of the pre-main sequence eigenevolution, or internal binary evolution. Acknowledgements ================ Jörg Dabringhausen and Pavel Kroupa acknowledge support from the Grant Agency of the Czech Republic under grant number 20-21855S. Data availability ================= All data in this paper is either available in the cited literature, or was generated with the computer programme, using the commands detailed in this Paper. can be downloaded at GitHub under the web adress `https://github.com/JDabringhausen/BiPoS1`. [lastpage] --- 1. E-mail: [email protected][↩](#fnref1) 2. [email protected][↩](#fnref2) 3. [email protected][↩](#fnref3) 4. That all stars are born in star clusters with a binarity of 100 percent is also for this paper taken for granted. Thus, stars are not born into a galactic field in this paper, but released into a galactic field from a star cluster. A field population is thus always an evolved population, and its binarity is lower than the 100 percent binarity at its birth. In other words, galactic fields are *not* the dynamically unevolved upper limits for the binary fractions. See Section ([sec:field]) for the theory, Section ([sec:impField]) for the implementation into, and Section ([sec:small]) for an example.[↩](#fnref4) 5. The IMF can also be interpreted as an *optimal distribution function*. In this case, stars can be paired randomly from an array of optimally sampled masses.[↩](#fnref5) 6. For the cluster mode in the number of systems used to calculate the number of binaries per energy bin can in principle be any number and does not need to be calculated from the initial cluster mass, since `nrgspect_fin[]` is scaled in the next step, anyway. Here this procedure has been chosen since the relative numbers of stars in clusters of different mass become important when is run in the field-mode (Section [sec:impField]).[↩](#fnref6) 7. In fact, do not report 74+61=135 binaries, as is assumed here, but 127 binaries and 4 triples. The 4 triples can however formally be converted into 8 binaries, consisting of 4 inner binaries with the comparatively far away third star being neglegted, and 4 outer binaries with the inner two stars being treated as one star. This again leads to 135 binaries.[↩](#fnref7) 8. Actually, to leave out the argument `constrain=q,0.1,1,z` would lead to almost the same results, because *q* covers almost the complete range of values, as argued earlier.[↩](#fnref8) – a computer programme for the dynamical processing of the initial binary star population ========================================================================================== [firstpage] The first version of the Binary Population Synthesizer () is made publicly available. It allows to efficiently calculate binary distribution functions after the dynamical processing of a realistic population of binary stars during the first few Myr in the hosting embedded star cluster. Instead of time-consuming N-body simulations
arxiv_0000074
In this article, we combine Bhargava’s geometry-of-numbers methods with the dynamical point-counting methods of Eskin–McMullen and Benoist–Oh to develop a new technique for counting integral points on symmetric varieties lying within fundamental domains for coregular representations. As applications, we study the distribution of the 2-torsion subgroup of the class group in thin families of cubic number fields, as well as the distribution of the 2-Selmer groups in thin families of elliptic curves over $\Q$. For example, our results suggest that the existence of a generator of the ring of integers with small norm has an increasing effect on the average size of the 2-torsion subgroup of the class group, relative to the Cohen–Lenstra predictions. Introduction ============ The foundational heuristics of Cohen and Lenstra , formulated in the 1980s, constitute a system of conjectures that predict the distribution of the class groups of quadratic fields. These heuristics are based on the philosophy that algebraic objects occur in nature with probability inversely proportional to the sizes of their automorphism groups. In the years since, the Cohen–Lenstra heuristics have enjoyed an extensive series of generalisations and elaborations. Some notable examples include the works of Cohen–Martinet , who generalized the heuristics to extensions of number fields of arbitrary degree, the work of Malle , who realized that adjustments to the Cohen–Lenstra–Martinet heuristics were needed to account for roots of unity in the base field, and the very recent work of Wood–Sawin , which proposes the necessary adjustments. See also the related works of Gerth , Wood , Liu–Wood–Zureick-Brown , Bartel–Lenstra , Despite these significant advances on the conjectural side, proofs have been discovered for only a handful of the predictions made by the Cohen–Lenstra–Martinet heuristics. In the 1970s (predating the Cohen–Lenstra heuristics!), the seminal work of Davenport and Heilbronn determined the average size of the 3-torsion subgroups of the class groups of quadratic number fields ordered by discriminant. Some three decades later, the landmark work of Bhargava determined the average size of the 2-torsion subgroups of the class groups of cubic number fields  ordered by discriminant. In fact, as shown in subsequent works of Bhargava–Varma , the aforementioned averages remain stable under the imposition of general sets of *local* conditions. More recently, the stunning work of Lemke Oliver–Wang–Wood  determined the average size of the 3-torsion subgroups of certain relative class groups associated to *G*-extensions of *K*, where *G* ⊂ *S*2*m* is any transitive permutation 2-group containing a transposition and *K* is any number field; their result is also expected to be insensitive to the imposition of local conditions. On the other hand, the question of how *global* conditions impact class group distributions is much less well-understood and has developed into an active area of research in recent years. In particular, an intriguing picture has emerged of the behaviour of the 2-primary part of the class group under the condition of being *monogenised* — i.e., having ring of integers generated by one element as a $\Z$-algebra. First considered in work of Bhargava–Hanke–Shankar and further studied in works of Siad, Swaminathan, and Bhargava–Shankar–Swaminathan, the upshot of this line of inquiry is as follows: when considered as a “random variable” valued in finite abelian 2-groups, the distribution of the 2-primary parts of the class groups over the thin family of monogenised fields appears to be different from the distribution predicted by the Cohen–Lenstra–Martinet heuristics. In this paper, we study how the distribution of the 2-torsion subgroups of the class groups of *cubic* number fields is impacted by the condition of being *unit-monogenised* — i.e., having ring of integers generated over $\Z$ by a unit — among other related conditions. Our work suggests that imposing these global conditions leads to class group distributions that are different from the corresponding distributions for both the full family of cubic fields and the subfamily of monogenised fields. We also prove analogous results concerning the distribution of the 2-Selmer groups in thin families of elliptic curves (see §[sec-selrel]), which indicate that these distributions sometimes differ from those predicted by Poonen–Rains. To accomplish this, we introduce tools from dynamics into arithmetic statistics. Specifically, we combine Bhargava’s geometry-of-numbers methods — which give a systematic procedure for counting integral orbits of representations — with the works of Eskin–McMullen  and Benoist–Oh  — which utilize the exponential mixing of a semisimple Lie group on its finite volume quotient to count integral points on symmetric varieties. Results on unit-monogenised cubic fields ---------------------------------------- A *unit-monogenised* cubic field is a pair (*K*, *α*) consisting of a cubic field $K/\Q$ with a *unit monogeniser* *α* of its ring of integers — i.e., an $\alpha \in \mc{O}\_K^\times$ such that $\O\_K = \Z[\alpha]$. A natural way to order such pairs (*K*, *α*) is by the *balanced height* $\rmH\_{\on{bal}}(K,\alpha)$, which is defined to be the maximum of the sizes of the coefficients of the minimal polynomial of *α*. The balanced height gives a natural ordering since it is comparable to the discriminant Δ; indeed, one checks that for 100% of unit-monogenised cubic fields, we have $|\Delta(K)|\asymp \rmH\_{\on{bal}}(K,\alpha)^4$.[1](#fn1) For unit-monogenised cubic fields ordered by balanced height, we prove the following theorem, which describes the average size of the 2-torsion subgroups of their class groups: [first main] When totally real unit-monogenised cubic fields are ordered by balanced height, the average size of the 2-torsion subgroups of their class groups is at most 2. Although we only prove an upper bound in Theorem [first main], we expect that the average is in fact *equal* to 2; see Remark [rmk:unif estimate]. In this regard, Theorem [first main] suggests that the average size of the *non-trivial* 2-torsion in the class groups of totally real unit-monogenised cubic fields is equal to 1. This represents a *4-fold increase* in the average nontrivial 2-torsion relative to the full family of totally real cubic fields, where the average was determined to be 1/4 by Bhargava , and a *2-fold increase* relative to the family of totally real monogenised cubic fields, where the average was determined to be 1/2 by Bhargava–Hanke–Shankar ; see Table [tab-avgs]. We restrict to totally real fields in Theorem [first main] because when unit-monogenised cubic fields are ordered by balanced height, 100% of them are totally real; in fact, Theorem [first main] continues to hold with the “totally real” condition removed. To study complex unit-monogenised cubic fields, we need to order pairs (*K*, *α*) in such a way that the complex fields occur with positive proportion. A natural way to do this is to order pairs (*K*, *α*) by the “size” of *α* in $\R \otimes\_{\Q} K$. More precisely, given a pair (*K*, *α*), let *f*(*x*) = *x*3 + *a**x*2 + *b**x* ± 1 be the minimal polynomial of *α*; then we define the *weighted height* of (*K*, *α*) by $\rmH\_{\on{wei}}(K,\alpha) \defeq \max\{|a|,\sqrt{|b|}\}$. For unit-monogenised cubic fields ordered by weighted height, we prove the following analogue of Theorem [first main]: [first main2] Let *K* run through all unit-monogenised cubic fields 1. Over totally real *K*, the average size of the 2-torsion subgroup of the class group of *K* is at most 2. 2. Over complex *K*, the average size of the 2-torsion subgroup of the class group of *K* is at most 3 + 3/14. Once again, we expect that the average sizes in Theorem [first main2] are in fact *equal* to 2 and 3 + 3/14; see Remark [rmk:unif estimate]. Theorem [first main2] suggests a phenomenon that, to the authors’ knowledge, has not previously been observed for families of *S**n*-number fields: there is a lack of conformity between the averages in the totally real and complex cases. This constitutes a deviation from the Cohen–Lenstra–Martinet philosophy, which predicts that the class group of a random totally real cubic field behaves like the quotient by a random element of the class group of a random complex cubic field. If this were true for unit-monogenised cubic fields, then the average *nontrivial* 2-torsion in the class group would be equal to 2 in the complex case — i.e., a 4*-fold increase* relative to the full family of complex cubic fields, where the corresponding average was found to be 1/2 by Bhargava  — rather than 2 + 3/14; see Table [tab-avgs]. |c||\*4l| & Full family & 1 + 1/2*r*1 + *r*2 − 1 Monogenised family & 1 + 2/2*r*1 + *r*2 − 1 Unit-monogenised family & 1 + 4/2*r*1 + *r*2 − 1 + 3*r*2/14 [tab-avgs] Results on more general thin families of cubic fields ----------------------------------------------------- Our new approach also allows us to study the distribution of the 2-torsion in the class groups of more general thin families of cubic fields. Before we state our results in this direction, we first generalise the notion of a unit-monogenised cubic field. For this purpose, let *U* be the affine scheme over $\Z$ whose *R*-points consist of binary cubic forms over *R* for any ring *R*. To each form *f* ∈ *U*(*R*) is a naturally associated cubic extension *R**f*/*R* (see §[sec-rep]). When $R = \Z$ or $\Z\_p$ for a prime *p*, we say that *f* ∈ *U*(*R*) is *maximal* if *f* has nonzero discriminant and the corresponding cubic ring *R**f* is integrally closed in $\on{Frac}(R) \otimes\_R R\_f$. Next, take $a,d \in \Z \smallsetminus \{0\}$, and let *U**a*, *d* ⊂ *U* be the subscheme of *U* consisting of forms with *x*3-coefficient *a* and *y*3-coefficient *d*. Then the set of unit-monogenised cubic fields (*K*, *α*) is in bijection with the set of maximal irreducible forms $f \in U\_{1,\pm1}(\Z)$ via the map that sends *α* to the homogenisation of its characteristic polynomial. Thus, Theorems [first main] and [first main2] may be interpreted as giving the average size of the 2-torsion in the class group of *R**f* over the family $\{f \in U\_{1,\pm 1}(\Z) : \text{$f$ is maximal, irreducible}\}$. In this section, we generalise these theorems to the family $U\_{a,d}(\Z)\_{\max} \defeq \{f \in U\_{a,d}(\Z) : \text{$f$ is maximal, irreducible}\}$, where $a,d \in \Z \smallsetminus \{0\}$ are any fixed nonzero integers. We use the following generalisations of the balanced and weighted heights defined in §[sec-unitresult]: given $f(x,y) = ax^3 + bx^2y+cxy^2+dy^3 \in U\_{a,d}(\R)$, the *balanced height* is $\rmH\_\bx(f) \defeq \max \{|b|,|c|\}$ and the *weighted height* is $\rmH\_{\wei}(f) \defeq \max \{ \left|b\right|, \left|c\right|^{1/2}\}$. In fact, we go one step further by stating our results for subfamilies of $U\_{a,d}(\Z)\_{\max}$ defined by quite general infinite sets of local conditions. To this end, we pick a sign *ɛ* ∈ { ± } and define $$U\_{a,d}(\R)^{\varepsilon} \defeq \{f \in U\_{a,d}(\R) : \varepsilon\on{Disc}(f) > 0\}.$$ For each prime prime *p*, we pick an open subset $\Sigma\_p\subset U\_{a,d}(\Z\_p)$ whose boundary has measure 0. To this collection *ɛ*,  (Σ*p*)*p*, we associate the subset $\Sigma=U\_{a,d}(\R)^{\varepsilon}\cap\bigcap\_p\Sigma\_p$ of $U(\Z)$. A set of binary cubic forms arising this way is said to be *acceptable* if $\Sigma\_p \supset \{f \in U\_{a,d}(\Z\_p) : p^2 \nmid \on{Disc}(f)\}$ for each *p* ≫ 1. Write *a* = *a**k**a**m*2 and *d* = *d**k**d**m*2, where $a\_k, a\_m, d\_k, d\_m \in \Z$ with *a**k*, *d**k* squarefree. A form $f \in U\_{a,d}(\Z\_p)\_{\max}$ is said to be *right-* (resp., *left-*) *sufficiently-ramified* at *p* if *p* ∣ *a**k* (resp., *p* ∣ *d**k*) and $f(x,1) \equiv \alpha \cdot g(x)^2 \pmod p$ (resp., $f(1,y) \equiv \alpha \cdot g(y)^2 \pmod p$) for some $\alpha \in \F\_p^\times$ and $g \in \F\_p[x]$ (resp., $g \in \F\_p[y]$). For an acceptable family $\Sigma \subset U\_{a,d}(\Z)\_{\max}$, the *right-* (resp., *left-*) *sufficient-ramification density* of Σ at *p* is denoted *ρ*Σ(*p*) (resp., *λ*Σ(*p*)) and defined as the density of right- (resp., left-) sufficiently-ramified forms in Σ*p*. Set $${\rho}\_\Sigma \defeq \prod\_{p \mid a\_k} {\rho}\_\Sigma(p);\quad\quad {\lambda}\_\Sigma \defeq \prod\_{p \mid d\_k} {\lambda}\_\Sigma(p).$$ Given the above setup, we have the following theorems, which generalize the results of §[sec-unitresult] to cubic rings associated to binary cubic forms in an acceptable family in $U\_{a,d}(\Z)\_{\max}$: [second main real] Let $\rmH \in \{\on{H}\_{\on{bal}}, \on{H}\_{\on{wei}}\}$, and let $\Sigma \subset U\_{a,d}(\Z)\_{\max}$ be an acceptable family of forms of positive discriminant. Then we have $$\begin{aligned} {3} &\limsup\_{X \to \infty} \underset{\substack{f \in \Sigma \\ \rmH(f) < X}}{\on{Avg}}\,\, \#\on{Cl}(R\_f)[2] \hspace{1ex} & \le\hspace{1ex} & \frac{5}{4} + \frac{{\rho}\_\Sigma + {\lambda}\_\Sigma + \chi\_{a,d} \,\, {\rho}\_\Sigma {\lambda}\_\Sigma }{4},\end{aligned}$$ where *χ**a*, *d* = 1 if gcd(*a**k*, *d**k*) = 1 and [second main complex] Let $\Sigma \subset U\_{a,d}(\Z)\_{\max}$ be an acceptable family of forms of negative discriminant. Then we have $$\begin{aligned} {3} &\limsup\_{X \to \infty} \underset{\substack{f \in \Sigma \\ \rmH\_{\on{wei}}(f) < X}}{\on{Avg}}\,\, \#\on{Cl}(R\_f)[2] \hspace{1ex} & \le\hspace{1ex} & \frac{3}{2} + \frac{{\rho}\_\Sigma + {\lambda}\_\Sigma + \chi\_{a,d} \,\, {\rho}\_\Sigma {\lambda}\_\Sigma }{2} + \delta\_\Sigma,\end{aligned}$$ where *δ*Σ is the density defined in §[sec-dist]. As was the case for Theorems [first main] and [first main2], we expect that the bounds in Theorems [second main real] and [second main complex] are in fact the exact averages (i.e., the theorems should hold with “limsup” replaced by “lim” and “ ≤ ” replaced by “ = ”); see Remark [rmk:unif estimate]. The averages given in Theorems [second main real] and [second main complex] may be evaluated explicitly in various cases of interest. For example, take $\Sigma = U\_{a,d}(\Z)\_{\max}$ to consist of all maximal cubic orders in $U\_{a,d}(\Z)$. Formulas for the associated sufficient-ramification densities *ρ*Σ and *λ*Σ are determined in Appendix [appendix-localcomputationsat2]; see . From these formulas, one readily observes that the average over all $a \in \Z \smallsetminus \{0\}$ (resp., $d \in \Z \smallsetminus \{0\}$) of *ρ*Σ (resp., *λ*Σ) is equal to zero. This observation has two noteworthy consequences. The first consequence is that, on average over all $a,d \in \Z \smallsetminus \{0\}$, the averages given in Theorems [second main real] and [second main complex] respectively converge to 5/4 and 3/2. As is to be expected, these are precisely the averages determined by Bhargava  (cf. Ho–Shankar–Varma ) for the The second consequence is that, on average over all $d \in \Z \smallsetminus \{0\}$, the averages given in Theorems [second main real] and [second main complex] respectively converge to 5/4 + 1/4*σ*(*a**k*) and 3/2 + 1/2*σ*(*a**k*), where *σ* denotes the divisor function. Unsurprisingly, these are precisely the averages determined by Bhargava–Hanke–Shankar  for the family of cubic fields corresponding to binary cubic forms with *x*3-coefficient equal to *a*. As for small *a* and *d*, explicit values of the corresponding averages are presented in Table [fig-tabs], along with computational evidence for comparison. |l||\*5c|&*a* = 1&*a* = 2&*a* = 3 &*a* = 4&*a* = 5 *d* = 1 & $\begin{array}{c}\textcolor[rgb]{0.86,0.07,0.23}{2.009} \,\,[\textcolor[rgb]{0.86,0.07,0.23}{2.000}] \\ \textcolor[rgb]{0,0,0.54}{3.216} \,\,[\textcolor[rgb]{0,0,0.54}{3.214}] \end{array}$ & $\begin{array}{c}\textcolor[rgb]{0.86,0.07,0.23}{1.740} \,\,[\textcolor[rgb]{0.86,0.07,0.23}{1.714}] \\ \textcolor[rgb]{0,0,0.54}{2.551} \,\,[\textcolor[rgb]{0,0,0.54}{2.536}] \end{array}$ & $\begin{array}{c}\textcolor[rgb]{0.86,0.07,0.23}{1.656} \,\,[\textcolor[rgb]{0.86,0.07,0.23}{1.640}] \\ \textcolor[rgb]{0,0,0.54}{2.332} \,\,[\textcolor[rgb]{0,0,0.54}{2.340}] \end{array}$ &$\begin{array}{c}\textcolor[rgb]{0.86,0.07,0.23}{1.978} \,\,[\textcolor[rgb]{0.86,0.07,0.23}{2.000}] \\ \textcolor[rgb]{0,0,0.54}{3.096} \,\,[\textcolor[rgb]{0,0,0.54}{3.167}] \end{array}$ & $\begin{array}{c}\textcolor[rgb]{0.86,0.07,0.23}{1.578} \,\,[\textcolor[rgb]{0.86,0.07,0.23}{1.587}] \\ \textcolor[rgb]{0,0,0.54}{2.221} \,\,[\textcolor[rgb]{0,0,0.54}{2.211}] \end{array}$ *d* = 2 & $\begin{array}{c}\textcolor[rgb]{0.86,0.07,0.23}{1.732} \,\,[\textcolor[rgb]{0.86,0.07,0.23}{1.714}] \\ \textcolor[rgb]{0,0,0.54}{2.538} \,\,[\textcolor[rgb]{0,0,0.54}{2.536}] \end{array}$ & $\begin{array}{c}\textcolor[rgb]{0.86,0.07,0.23}{1.413} \,\,[\textcolor[rgb]{0.86,0.07,0.23}{1.417}] \\ \textcolor[rgb]{0,0,0.54}{1.835} \,\,[\textcolor[rgb]{0,0,0.54}{1.833}] \end{array}$ & $\begin{array}{c}\textcolor[rgb]{0.86,0.07,0.23}{1.471} \,\,[\textcolor[rgb]{0.86,0.07,0.23}{1.457}] \\ \textcolor[rgb]{0,0,0.54}{1.942} \,\,[\textcolor[rgb]{0,0,0.54}{1.944}] \end{array}$ & $\begin{array}{c}\textcolor[rgb]{0.86,0.07,0.23}{1.499} \,\,[\textcolor[rgb]{0.86,0.07,0.23}{1.500}] \\ \textcolor[rgb]{0,0,0.54}{2.000} \,\,[\textcolor[rgb]{0,0,0.54}{2.000}] \end{array}$ & $\begin{array}{c}\textcolor[rgb]{0.86,0.07,0.23}{1.430} \,\,[\textcolor[rgb]{0.86,0.07,0.23}{1.388}] \\ \textcolor[rgb]{0,0,0.54}{1.863} \,\,[\textcolor[rgb]{0,0,0.54}{1.857}] \end{array}$ *d* = 3 & $\begin{array}{c}\textcolor[rgb]{0.86,0.07,0.23}{1.643} \,\,[\textcolor[rgb]{0.86,0.07,0.23}{1.640}] \\ \textcolor[rgb]{0,0,0.54}{2.334} \,\,[\textcolor[rgb]{0,0,0.54}{2.340}] \end{array}$ & $\begin{array}{c}\textcolor[rgb]{0.86,0.07,0.23}{1.458} \,\,[\textcolor[rgb]{0.86,0.07,0.23}{1.457}] \\ \textcolor[rgb]{0,0,0.54}{1.950} \,\,[\textcolor[rgb]{0,0,0.54}{1.944}] \end{array}$ & $\begin{array}{c}\textcolor[rgb]{0.86,0.07,0.23}{1.373} \,\,[\textcolor[rgb]{0.86,0.07,0.23}{1.375}] \\ \textcolor[rgb]{0,0,0.54}{1.749} \,\,[\textcolor[rgb]{0,0,0.54}{1.750}] \end{array}$ & $\begin{array}{c}\textcolor[rgb]{0.86,0.07,0.23}{1.666} \,\,[\textcolor[rgb]{0.86,0.07,0.23}{1.640}] \\ \textcolor[rgb]{0,0,0.54}{2.344} \,\,[\textcolor[rgb]{0,0,0.54}{2.327}] \end{array}$ & $\begin{array}{c}\textcolor[rgb]{0.86,0.07,0.23}{1.378} \,\,[\textcolor[rgb]{0.86,0.07,0.23}{1.372}] \\ \textcolor[rgb]{0,0,0.54}{1.756} \,\,[\textcolor[rgb]{0,0,0.54}{1.761}] \end{array}$ *d* = 4 & $\begin{array}{c}\textcolor[rgb]{0.86,0.07,0.23}{2.000} \,\,[\textcolor[rgb]{0.86,0.07,0.23}{2.000}] \\ \textcolor[rgb]{0,0,0.54}{3.208} \,\,[\textcolor[rgb]{0,0,0.54}{3.167}] \end{array}$ & $\begin{array}{c}\textcolor[rgb]{0.86,0.07,0.23}{1.495} \,\,[\textcolor[rgb]{0.86,0.07,0.23}{1.500}] \\ \textcolor[rgb]{0,0,0.54}{2.008} \,\,[\textcolor[rgb]{0,0,0.54}{2.000}] \end{array}$ & $\begin{array}{c}\textcolor[rgb]{0.86,0.07,0.23}{1.647} \,\,[\textcolor[rgb]{0.86,0.07,0.23}{1.640}] \\ \textcolor[rgb]{0,0,0.54}{2.336} \,\,[\textcolor[rgb]{0,0,0.54}{2.327}] \end{array}$ & $\begin{array}{c}\textcolor[rgb]{0.86,0.07,0.23}{2.024} \,\,[\textcolor[rgb]{0.86,0.07,0.23}{2.000}] \\ \textcolor[rgb]{0,0,0.54}{3.231} \,\,[\textcolor[rgb]{0,0,0.54}{3.250}] \end{array}$ & $\begin{array}{c}\textcolor[rgb]{0.86,0.07,0.23}{1.593} \,\,[\textcolor[rgb]{0.86,0.07,0.23}{1.587}] \\ \textcolor[rgb]{0,0,0.54}{2.209} \,\,[\textcolor[rgb]{0,0,0.54}{2.202}] \end{array}$ *d* = 5 & $\begin{array}{c}\textcolor[rgb]{0.86,0.07,0.23}{1.577} \,\,[\textcolor[rgb]{0.86,0.07,0.23}{1.587}] \\ \textcolor[rgb]{0,0,0.54}{2.205} \,\,[\textcolor[rgb]{0,0,0.54}{2.211}] \end{array}$ & $\begin{array}{c}\textcolor[rgb]{0.86,0.07,0.23}{1.421} \,\,[\textcolor[rgb]{0.86,0.07,0.23}{1.388}] \\ \textcolor[rgb]{0,0,0.54}{1.862} \,\,[\textcolor[rgb]{0,0,0.54}{1.857}] \end{array}$ & $\begin{array}{c}\textcolor[rgb]{0.86,0.07,0.23}{1.382} \,\,[\textcolor[rgb]{0.86,0.07,0.23}{1.372}] \\ \textcolor[rgb]{0,0,0.54}{1.763} \,\,[\textcolor[rgb]{0,0,0.54}{1.761}] \end{array}$ & $\begin{array}{c}\textcolor[rgb]{0.86,0.07,0.23}{1.590} \,\,[\textcolor[rgb]{0.86,0.07,0.23}{1.587}] \\ \textcolor[rgb]{0,0,0.54}{2.186} \,\,[\textcolor[rgb]{0,0,0.54}{2.202}] \end{array}$ & $\begin{array}{c}\textcolor[rgb]{0.86,0.07,0.23}{1.321} \,\,[\textcolor[rgb]{0.86,0.07,0.23}{1.354}] \\ \textcolor[rgb]{0,0,0.54}{1.672} \,\,[\textcolor[rgb]{0,0,0.54}{1.667}] \end{array}$ [fig-tabs] We conclude this subsection by noting that the number of times each cubic field arises from a form in $U\_{a,d}(\Z)\_{\max}$ is bounded in terms of *a* and *d*. Indeed, it follows from Baker’s work  on the effective solution of an arbitrary Thue equation that the fibres of the map taking a form $f \in U\_{a,d}(\Z)\_{\max}$ to its corresponding cubic field are finite and bounded in a way that depends only on *a* (or on *d*). Furthermore, *a* = 1 or *d* = 1, it follows from work of Bennett  that the fibres of this map are of size at most 10. Even more can be said for complex fields when it follows from work of Nagell  and Delone  that the fibres have size at most 5, and in fact at most 3 for all but finitely many cubic fields! Results on 2-Selmer groups of elliptic curves --------------------------------------------- From the perspective of arithmetic invariant theory, the 2-torsion in the class group of a monogenised cubic field (*K*, *α*) is closely related to the 2-Selmer group of the elliptic curve *E**f*: *y*2 = *f*(*x*, 1), where *f* is the homogenisation of the minimal polynomial of *α* (see §[sec-parametrisations]). In this regard, our methods for proving Theorems [first main]–[second main complex] may be adapted to obtain similar theorems about the average size of the 2-Selmer groups in families of elliptic curves of the form *y*2 = *x*3 + *b**x*2 + *c**x* + *d*, where *d* is fixed. Indeed, we prove the following Selmer-group analogues of Theorems [second main real] and [second main complex]: [third main] Let $\rmH \in \{\rmH\_{\on{bal}},\rmH\_{\on{wei}}\}$. Suppose $d \equiv 1 \pmod 8$, and let $\Sigma \subset U\_{1,d}(\Z)\_{\max}$ be an acceptable family of forms of positive discriminant such that $\Sigma\_2 = \{f \in U\_{1,d}(\Z\_2) : f(x,y) \equiv x^3 + x^2y + y^3 \pmod8\}$. $$\limsup\_{X \to \infty} \underset{\substack{f \in \Sigma \\ \rmH(f) < X}}{\on{Avg}}\,\, \#\on{Sel}\_2(E\_f) \hspace{1ex} \le \begin{cases} 3 + 3{\lambda}\_\Sigma, & \text{ if $d > 0$,} \\ 3 + 2\Pi\_{d,\rmH}{\lambda}\_\Sigma, & \text{ if $d < 0$,} \end{cases}$$ where $\Pi\_{d,\rmH}$ is the real asymptotic volume given in . [third main2] Let $\Sigma \subset U\_{1,d}(\Z)\_{\max}$ be an acceptable family of forms of negative discriminant such that $\Sigma\_2 = \{f \in U\_{1,d}(\Z\_2) : f(x,y) \equiv x^3 + x^2y + y^3 \pmod8\}$. Then we have $$\limsup\_{X \to \infty} \underset{\substack{f \in \Sigma \\ \rmH\_{\on{wei}}(f) < X}}{\on{Avg}}\,\, \#\on{Sel}\_2(E\_f) \hspace{1ex} \le 3+ 3{\lambda}\_\Sigma.$$ As was the case for Theorems [first main]–[second main complex], we expect that the bounds in Theorems [third main] and [third main2] are in fact the exact averages (i.e., the theorems should hold with “limsup” replaced by “lim” and “ ≤ ” replaced by “ = ”); see Remark [rmk:unif estimate]. The condition on Σ2 that occurs in the statements of Theorems [third main] and [third main2] may be interpreted as fixing the mod-2 reduction-type of the elliptic curves in our family. We impose this condition for the sake of simplicity; with further computation, one can use our method of proof to obtain similar results for any family of elliptic curves with discriminant having bounded 2-adic valuation. We note that our methods do not apply when *d* = 0, in which case the average 2-Selmer size was shown to be infinite in work of Klagsbrun–Lemke Oliver . The determination of precise asymptotics for the count of 2-Selmer elements in the case *d* = 0 is the subject of current work-in-progress of Alpöge and Ho (private communication). Just like with Theorems [second main real] and [second main complex], the averages given in Theorems [third main] and [third main2] may be evaluated explicitly in various cases of interest. When $\Sigma = U\_{a,d}(\Z)\_{\max}$ the average over all $d \in \Z \smallsetminus \{0\}$ of *λ*Σ is equal to zero. Thus, on average over all $d \in \Z \smallsetminus \{0\}$, the averages given in Theorems [third main] and [third main2] both converge to 3. This is, of course, precisely the average determined by Bhargava–Shankar  for the full family of elliptic curves. On the other hand, when *d* is a perfect square, the averages given in Theorems [third main] and [third main2] are both equal to 6. This doubling of the average size of the 2-Selmer group makes sense, as *d* = *f*(0, 1) being a square forces *E**f* to have an extra rational point; furthermore, this accords with the work of Bhargava–Ho , who obtain an average of 6 for the family of elliptic curves of the form *y*2 = *x*3 + *b**x*2 + *c**x* + *d*, where *d* runs through nonzero perfect squares (cf., the related work of Ananth Shankar , which generalizes this work to higher genus curves). The Poonen–Rains heuristics  make predictions for the distributions of Selmer groups in quite general families of elliptic curves. In particular, these heuristics predict that the average size of the 2-Selmer group in quite general families of elliptic curves is 3; this was verified for the universal family in the aforementioned work of Bhargava–Shankar. They also predict that an average of 6 for families with a marked point (as was shown in the aforementioned work of Bhargava–Ho). Moreover, these averages have been shown to be stable under the imposition of sets of local conditions. The averages obtained in Theorem [third main], when *d* > 0, and in Theorem [third main2] seem to be consistent with these findings: the family Σ can be partitioned into two subfamilies, one on which the average is 3 (namely the subfamily of cubics with no sufficient ramification), and one on which the average is 6 (namely the subfamily of cubics with sufficient ramification, in which there appears to be a marked nontrivial 2-Selmer element). On the other hand, the averages obtained in Theorem [third main] with *d* < 0 are more mysterious: in this case, it is unclear how to partition the family Σ into subfamilies on which the average is either 3 or 6 (in particular, no local conditions achieve this). This suggests an example of a *global* condition under which a family of elliptic curves behaves differently from the full family of elliptic curves and from the family of elliptic curves with a marked point (or marked nontrivial 2-Selmer element). Discussion of the proof ----------------------- In this subsection, we discuss the structure of the proof of Theorem [first main2], and we explain the modifications needed to obtain the other main theorems. To prove Theorem [first main2], we need an asymptotic upper bound for the number of 2-torsion classes in unit-monogenised cubic fields of bounded weighted height. As is the case for several other results in arithmetic statistics, such asymptotics are derived by first parametrising the arithmetic objects of interest in terms of integral orbits for the action of a reductive group *G* on a scheme *W* over $\Z$, and by then counting these orbits, which amounts to counting integer points within a fundamental domain for the action of $G(\Z)$ on $W(\R)$. In our setting, we use a result of Bhargava, which parametrises the dual of the 2-torsion in the class group of a maximal cubic order *R**f* by the set of $\SL\_3(\Z)$-orbits on the set of pairs (*A*, *B*) of integral ternary quadratic forms with 4det(*A**x* − *B**y*) = *f*(*x*, *y*) (see §[sec-params]).[2](#fn2) Using this parametrization, Bhargava determined the average size of the 2-torsion subgroup in the class groups of all cubic fields . To do so, he developed a general “averaging method,” which applies to any (*G*, *W*) as above. This method transforms the problem of determining asymptotics for the number of $G(\Z)$-orbits on $W(\Z)$ of bounded height into one of evaluating the integral over elements $\gamma\in G(\Z)\backslash G(\R)$ of the number of integral points lying in the *γ*-translates of a fixed bounded region in $W(\R)$. For our applications, applying Bhargava’s averaging method requires us to count pairs of ternary quadratic forms (*A*, *B*) lying within regions skewed by elements in $\FF\defeq\SL\_3(\Z)\backslash\SL\_3(\R)$, satisfying the additional conditions det*A* = det*B* = 1. A linearisation trick (used to handle the family of monogenised cubic fields considered in ) can be applied to fix the choice of *A*, but that still leaves us with the problem of counting integral points on the degree-3 hypersurface cut out by det*B* = 1 in the 6-dimensional space $V(\R)$ of ternary quadratic forms over $\R$. This problem falls outside the range of applicability of the circle method, which has recently been applied to great effect in arithmetic statistics; see, e.g., the works of Ruth, Alpöge, Sanjaya–Wang, and Alpöge–Bhargava–Shnidman. Since the set of forms $B \in V(\Z)$ with det*B* = 1 is a union of finitely many $\SL\_3(\Z)$-orbits of forms $B\_i \in V(\Z)$, our problem is reduced to one of counting integral points in regions of each of the finitely many symmetric $\R$-varieties $\SL\_3(\R)/{\mathrm{SO}}\_{B\_i}(\R)$. A key difficulty that arises when counting integral points in regions of symmetric $\R$-varieties is as follows: the volume of a thickening of the boundary of such a region is of the same order of magnitude as the volume of the region itself. Consequently, some input about the equidistribution of integral points near the boundary is needed to obtain the desired asymptotic point counts. There are at least three methods to prove these equidistribution results in the literature. The first, by Duke–Rudnick–Sarnak, uses automorphic forms; the second, by Eskin–McMullen, builds on the seminal work of Margulis in his thesis (see, e.g., ) by using the mixing property of the geodesic flow on Lie groups; and the third, by Eskin–Mozes–Shah, uses Ratner theory. The aforementioned work of Eskin–McMullen was subsequently effectivised by Benoist–Oh, whose work can be used to count the number of unit-determinant integral ternary quadratic forms in $X\cdot \mc{R}$, for sufficiently nice regions $\mc{R}\subset V(\R)$, with a power saving error term. More precisely, if we denote the set of unit-determinant elements in $X\cdot \mc{R}$ by $\mc{R}\_X$, then Benoist–Oh prove that the number of lattice points in $\mc{R}\_X$ is equal to $\on{Vol}(\mc{R}\_X)$, up to an error of $O(\on{Vol}(\mc{R}\_X)^{1-\delta})$, for some positive constant *δ*. For our application, we must additionally understand how the error term in the above asymptotic changes when the regions $\{\mc{R}\_X\}\_{X > 0}$ is skewed by $\gamma\in\FF$. To this end, we apply the results of Benoist–Oh to deduce that the number of lattice points in $\gamma\mc{R}\_X$ is $$\label{eq-skewasymp} \Vol(\mc{R}\_X)+O\left(h(\gamma)^M\Vol(\mc{R}\_X)^{1-\delta}\right),$$ where *h*(*γ*) denotes the maximum of the absolute values of the entries of *γ* and *γ*− 1, and where *M* > 0 is a constant (see Theorem [main effective counting]). This asymptotic formula  serves as a viable substitute for Davenport’s Lemma (used in the case when counting in affine space) in our setting as long as *h*(*γ*) is small — this corresponds to *γ* lying in the “main body” of the fundamental domain $\mc{F}$. When *h*(*γ*) is large — this corresponds to *γ* lying in the “cusp” of $\mc{F}$. In this regime, the asymptotic  is insufficient. We find an interesting phenomenon in our analysis of the cusp. Following Bhargava’s counting techniques in, we partition the cusp into two pieces, a shallow cusp and a deep cusp. Similar to the situation in, every relevant integral element in the deep cusp corresponds to the identity element in the class group of the corresponding cubic field. However, the shallow cusp behaves very differently. In particular, we find a positive proportion of non-identity elements in the shallow cusp. We refer to these “new” elements as *Δ-distinguished*, and we count them separately by hand. Interestingly, these Δ-distinguished orbits only appear for complex cubic fields and never for totally real fields. In fact, these orbits explain the aforementioned deviation from the Cohen–Lenstra philosophy that was observed in Theorem [first main2]. To control the contribution from the non-Δ-distinguished elements in the shallow cusp, we prove a suitable *upper bound* for the number of lattice points in skewed regions in $\gamma \mc{B}\_X$ for symmetric $\R$-varieties of the form $S\_B(\R)$ (see Theorem [th:skbound]). Our proof of this upper bound is completely elementary, and it exploits the particular structure of the formula for the determinant of a ternary quadratic form. We note that our methods are widely applicable for counting away from the cusp (i.e., in the main body); in particular, they work for any symmetric $\R$-variety, not just the variety of unit-determinant quadratic forms. It is only our handling of the cusp that is specific to our situation. In §[sec-countonsymmetricvarieties] and §[sec-countpairsternaryfixeddet], we combine Bhargava’s averaging method with our adaptation of the Benoist–Oh technique as described above. This allows us to obtain an upper bound on the average size of the 2-torsion in the class group of cubic fields in our families, as well as the average size of the 2-Selmer groups of elliptic curves in our families. However, the obtained upper bound is not explicit but instead expressed as a product of local masses. In order to compute these masses, it is necessary to compute the joint distribution of the Hasse invariants (at each place) of the pairs of ternary quadratic forms (*A*, *B*) that arise in our count. In §[sec-proofofmainresults] and §[sec-selproofs], we compute this distribution for the cases of cubic fields and elliptic curves, respectively, thus obtaining our main results. Acknowledgments --------------- Shankar was supported by an NSERC discovery grant and a Sloan fellowship. Siad was supported by a QEII/Steve Halperin GSST from the University of Toronto, a Postdoctoral Fellowship from the NSERC, Princeton University, the Institute for Advanced Study, and NSF Grant No. DMS-1926686. Swaminathan was supported by the NSF, under the Graduate Research Fellowship as well as Award No. 2202839. We are very grateful to Iman Setayesh for many helpful conversations, and in particular for helping us understand many of the key ideas behind the work of Eskin–McMullen. It is a pleasure to thank Manjul Bhargava for many helpful discussions and for sharing class group data for these thin families; Table [fig-tabs] was generated by adapting code originally written by him. We thank Hee Oh immensely for explaining how her joint work with Benoist  immediately implies Theorem [main effective counting], simplifying our previous proof. We are grateful to Manjul Bhargava, Hee Oh, Akshay Venkatesh, and Melanie Matchett Wood for providing us with numerous helpful comments and suggestions on earlier drafts of this paper, as well as for suggesting further lines of inquiry. We also have the pleasure of thanking Levent Alpöge, Wei Ho, Pouya Honaryar, Aaron Landesman, Lillian Pierce, Peter Sarnak, James Tao, Yuri Tschinkel, and Orbit parametrisations ====================== Due to work of Bhargava  (resp., Bhargava–Gross ), the 2-torsion subgroups of the class groups of cubic fields (resp., 2-Selmer groups of elliptic curves) may be parametrised in terms of the integral orbits of the representation $2 \otimes \on{Sym}^3(2)$ (resp., $2 \otimes \on{Sym}\_3(2)$) of the group $\on{SL}\_3$. In §[sec-rep], we define these representations, and in §§[sec-params]–[sec-sel], we recall the parametrisations of 2-class groups and 2-Selmer groups. Then, in §[sec-dist], we discuss the notions of distinguished and Δ-distinguished orbits mentioned in §[sec-discuss]. The representations $2 \otimes \on{Sym}\_2(3)$ and $2 \otimes \on{Sym}^2(3)$ of $\on{SL}\_3$ -------------------------------------------------------------------------------------------- As in §[sec-stat], let *U* be the affine scheme over $\Z$ whose *R*-points are given by binary cubic forms over *R* for any ring *R*. The algebraic group $\GL\_2$ acts on *U* via a twisted action $\gamma\cdot f(x,y) \defeq (\det(\gamma)^{-1}f((x,y)\cdot\gamma)$, and the work of Delone–Faddeev and Gan–Gross–Savin yields a bijection between $\on{GL}\_2(\Z)$-orbits on $U(\Z)$ and isomorphism classes of cubic rings. Denote the cubic ring corresponding to $f\in U(\Z)$ under Next, let *V* be the affine scheme over $\Z$ whose *R*-points are given by ternary quadratic forms over *R* for any ring *R*, let $W=V \times\_{\Z} V$ denote the space of pairs of ternary quadratic forms, and let *W*∨ ⊂ *W* be the subscheme of pairs of integer-matrix ternary quadratic forms. We represent elements in *V* (resp., *W*) by their Gram matrices (resp., pairs of Gram matrices). The algebraic group $\SL\_3$ acts on *V* via *γ* ⋅ *A* = *γ**A**γ**T*, and this action induces an action of $\on{SL}\_3$ on *W*. Let *R* be a ring. Given (*A*, *B*) ∈ *W*(*R*), we define the *resolvent* of (*A*, *B*) to be the binary cubic form 4det(*x**A* − *y**B*) ∈ *U*(*R*). Let $\on{Res} \colon W \to U$ be the function that takes a pair of ternary quadratic forms to its resolvent; then one readily verifies that the map $\on{Res}$ is $\on{SL}\_3$-invariant and that its coefficients generate the ring of polynomial invariants for the action of $\on{SL}\_3$ on *W*. Parametrisation of 2-torsion subgroups of the class groups of cubic fields -------------------------------------------------------------------------- Let *K* be an *S*3-cubic field, and let *L* be an unramified quadratic extension of *K*. Heilbronn  proved that the Galois closure of *L* is a degree-24 extension of $\Q$ with Galois group *S*4, thus yielding a quartic field up to conjugacy. In combination with Bhargava’s parametrisation of quartic fields in , this gives the following parametrisation of elements in the 2-torsion of the dual of the class groups of cubic fields in terms of $\on{SL}\_3(\Z)$ orbits on $W(\Z)$ (see for more details): [thm-dual2torsparam] Let *K* be an *S*3-cubic field and let $f \in U(\Z)$ be an (irreducible) binary cubic form corresponding to the ring of integers of *K* under the Delone–Faddeev parametrisation. Then * If Δ(*K*) > 0, then there is a canonical bijection between the elements of the dual group $\on{Cl}^+(K)[2]^\*$ and $\on{SL}\_3(\Z)$-orbits of pairs $(A,B) \in W(\Z)$ with $\on{Res}(A,B) = f$. Under this bijection, the elements of the dual group $\on{Cl}(K)[2]^\* \subset \on{Cl}^+(K)[2]^\*$ correspond to $\on{SL}\_3(\Z)$-orbits of pairs $(A,B) \in W(\Z)$ such that *A* and *B* have a common zero over $\R$. * If Δ(*K*) < 0, there is a canonical bijection between the elements of the dual group $\on{Cl}^+(K)[2]^\* = \on{Cl}(K)[2]^\*$ and $\on{SL}\_3(\Z)$-orbits of pairs $(A,B) \in W(\Z)$ with $\on{Res}(A,B)=f$. Parametrisation of 2-Selmer groups of elliptic curves ----------------------------------------------------- We now recall how the orbits of $\on{SL}\_3(\Z)$ on $W^\vee(\Z)$ parametrise 2-Selmer elements (i.e., 2-covers that have rational points over every completion of $\Q$) of elliptic curves. Let $K = \Q$ or $\Q\_v$ for a place *v* of $\Q$, and let *f* ∈ *U*(*K*) be monic with nonzero discriminant. Let $\mc{A} \in V(\Z)$ be the anti-diagonal matrix with anti-diagonal entries given by 1,  − 1, and 1. We start by explaining how to construct a 2-cover of the elliptic curve *E**f* from a pair of the form $(\mc{A},B) \in \on{Res}^{-1}(4f) \cap W^\vee(K)$. Given such a pair $(\mc{A},B)$, we may add a *K*-rational multiple of $\mc{A}$ to *B* = (*b**i**j*) to arrange that *b*13 = 0. Then it follows from  and  that the genus-1 curve with equation $$\label{eq-associate2cover} z^2 = \frac{b\_{11}}{4} x^4 + b\_{12} x^3y + b\_{22} x^2y^2 + 2b\_{23} x y^3 + b\_{33} y^4$$ is a 2-cover of *E**f*. If (*A*ʹ, *B*ʹ) ∈ *W*(*K*) is $\on{SL}\_3(K)$-equivalent to a pair of the form $(\cA, B)$ with *b*13 = 0, we define the binary quartic form on the right-hand side of  to be *binary quartic form associated to the pair (*A*ʹ, *B*ʹ)*. The 2-cover  is trivial if and only if the corresponding binary quartic has a root over *K*. A pair $(\mc{A},B) \in \on{Res}^{-1}(4f) \cap W^\vee(K)$ is said to be *soluble* over *K* if the associated 2-cover  has a *K*-point. If $K = \Q$, we say that $(\mc{A},B)$ is *locally soluble* if the associated 2-cover is soluble over $\Q\_v$ for every place *v* of $\Q$. The following parametrisation result is due to Bhargava–Gross (see ): [thm-2selparam] For a monic binary form $f \in U(\Z)$ of nonzero discriminant, consider the corresponding elliptic curve *E**f*: *y*2 = *f*(*x*, 1). Then there is a natural bijection between elements in $\Sel\_2(E\_f)$ and $\on{SL}\_3(\Q)$-equivalence classes on the set of locally soluble pairs $(\cA,B)\in \on{Res}^{-1}(4f) \cap W^\vee(\Z)$. Furthermore, it follows from that if *f* is maximal, then the locally soluble $\SL\_3(\Q)$-equivalence classes on $\Res^{-1}(4f)\cap W^\vee(\Z)$ are the same as the $\SL\_3(\Z)$-orbits on $\Res^{-1}(4f)\cap W^\vee(\Z)$. More precisely, every element in $\Res^{-1}(4f)\cap W^\vee(\Z)$ is locally soluble, and no two $\SL\_3(\Z)$-orbits on $\Res^{-1}(4f)\cap W^\vee(\Z)$ collapse into the same $\SL\_3(\Q)$-equivalence class. Distinguished and Δ-distinguished orbits ---------------------------------------- We start by definining the notions of distinguished and Δ-distinguished orbits: [def-disting] Let *K* be a field of characteristic not 2, and let *f* ∈ *U*(*K*) of nonzero discriminant. We say that an $\on{SL}\_3(K)$-orbit on $\on{Res}^{-1}(f) \cap W(K)$ is: *distinguished* if it contains a representative (*A*, *B*) with *a*11 = *b*11 = 0; and Δ*-distinguished* if it contains a representative (*A*, *B*) with *a*11*a*22 − *a*122 = *b*11*b*22 − *b*122 = 0. Here, we denote by *m**i**j* the *i**j*-matrix entry of *M* ∈ *W*(*K*) if *i* = *j* and twice the *i**j*-matrix entry if *i* ≠ *j*. It is easy to check that there exists a unique on both *W*(*K*) and *W*∨(*K*) with resolvent *f*; for *W*(*K*), the orbit is represented by the following pair: $$(A,B) = \left(\begin{pmatrix} 0 & 0 & 1/2\\ 0 & -a & 0\\ 1/2 & 0 & -c \end{pmatrix}, \begin{pmatrix} 0 & 1/2 & 0\\ 1/2 & b & 0\\ 0 & 0 & d \end{pmatrix}\right)$$ It is likewise easy to check that there exists a unique on both *W*(*K*) and *W*∨(*K*) with resolvent *f*; for *W*(*K*), the orbit is represented by the following pair: $$\label{eq-datsdeldist} (A,B) = \left(\begin{pmatrix} -a & 0 & 0\\ 0 & 0 & 1/2\\ 0 & 1/2 & b/4ad \end{pmatrix}, \begin{pmatrix} 0 & 0 & 1/2\\ 0 & d & 0\\ 1/2 & 0 & -c/4ad \end{pmatrix}\right)$$ The next result classifies distinguished orbits over maximal binary cubic forms. [thm-howmanydist] Let $f(x,y) = ax^3 + bx^2y + cxy^2 + dy^3 \in U(\Z)\_{\max}$. Then the distinguished $\on{SL}\_3(\Q)$-orbit on $\on{Res}^{-1}(f) \cap W(\Q)$ and on $\on{Res}^{-1}(4f) \cap W^\vee(\Q)$ contains a unique $\on{SL}\_3(\Z)$-orbit. We omit the proof because the result is well-known. The analogous result for Δ-distinguished orbits is significantly more complicated. In the following pair of theorems, we show that while the Δ-distinguished orbit over a maximal binary cubic form does not always have an integral representative, although such a representative is unique up to $\on{SL}\_3(\Z)$-action if it exists. We postpone the proofs to the appendix (see §[sec-distdensities]). [th:deltadistex] Let $f(x,y) = ax^3 + bx^2y + cxy^2 + dy^3 \in U(\Z)\_{\max}$, and as usual, factor *a* and *d* as *a* = *a**k**a**m*2 and *d* = *d**k**d**m*2. Then the Δ-distinguished $\on{SL}\_3(\Q)$-orbit on $\on{Res}^{-1}(f) \cap W(\Q)$ contains an integral representative if the following conditions are satisfied: * gcd(*a**k*, *d*) = gcd(*a*, *d**k*) = 1; * $b^2 - 4ac \equiv 0 \pmod{d\_k}$ and $c^2 - 4bd \equiv 0 \pmod{a\_k}$; and * If $2 \nmid a\_kd\_k$, then the mod-4 residue of (*b*, *c*) occurs in the following table, where the left column indicates the mod-4 residue of (*a**k*, *d**k*), and the top row indicates the mod-2 residue of (*a**m*, *d**m*) :  |l||\*4c| &(0, 0)&(0, 1)&(1, 0) &(1, 1) (1, 1) & (1, 1) & (1, 3) & (3, 1) & (0, 0),  (1, 2),  (2, 1) (1, 3) & (3, 1) & (3, 3) & (1, 1) & (0, 0),  (2, 1),  (3, 2) (3, 3) & (3, 3) & (3, 1) & (1, 3) & (0, 0),  (2, 3),  (3, 2) On the other hand, if 2 ∣ *d**k*, then the mod-8 residue of (*b*, *c*) occurs in the following table, where the left column indicates the mod-8 residue of (*a**k*, *d**k*), and the top row indicates the mod-2 residue of (*a**m*, *d**m*) :  |l||\*2c| &(1, 0)&(1, 1) (1, 2) & (0, 1),  (4, 1) & (0, 0),  (2, 1),  (2, 4),  (4, 4),  (6, 0),  (6, 1) (1, 6) & (0, 1),  (4, 1) & (0, 0),  (2, 0),  (2, 1),  (4, 4),  (6, 1),  (6, 4) (3, 2) & (0, 3),  (4, 3) & (0, 0),  (2, 3),  (2, 4),  (4, 4),  (6, 0),  (6, 3) (3, 6) & (0, 3),  (4, 3) & (0, 0),  (2, 0),  (2, 3),  (4, 4),  (6, 3),  (6, 4) (5, 2) & (0, 5),  (4, 5) & (0, 0),  (2, 4),  (2, 5),  (4, 4),  (6, 0),  (6, 5) (5, 6) & (0, 5),  (4, 5) & (0, 0),  (2, 0),  (2, 5),  (4, 4),  (6, 4),  (6, 5) (7, 2) & (0, 7),  (4, 7) & (0, 0),  (2, 4),  (2, 7),  (4, 4),  (6, 0),  (6, 7) (7, 6) & (0, 7),  (4, 7) & (0, 0),  (2, 0),  (2, 7),  (4, 4),  (6, 4),  (6, 7) The entries corresponding to the values of (*a**k*, *d**k*) not represented in the above tables may be deduced by symmetry. If such an integral representative exists, it is unique up to $\on{SL}\_3(\Z)$-action. [th:deltadistex2] Let $f(x,y) = ax^3 + bx^2y + cxy^2 + dy^3 \in U(\Z)\_{\max}$, and as usual, factor *a* and *d* as *a* = *a**k**a**m*2 and *d* = *d**k**d**m*2. Then the Δ-distinguished $\on{SL}\_3(\Q)$-orbit on $\on{Res}^{-1}(4f) \cap W^\vee(\Q)$ contains an integral representative if and only if the following conditions are satisfied: * gcd(*a**k*, *d*) = gcd(*a*, *d**k*) = 1; and * $b^2 - 4ac \equiv 0 \pmod{d\_k}$ and $c^2 - 4bd \equiv 0 \pmod{a\_k}$. If such an integral representative exists, it is unique up to $\on{SL}\_3(\Z)$-action. It is possible for an $\on{SL}\_3(\Z)$-orbit on $W(\Z)$ to be both distinguished and Δ-distinguished. However, this happens quite rarely; for instance, we prove the following result: [prop-coincidenot] When binary cubic forms $f \in U\_{a,d}(\Z)$ are ordered by $\on{H}\_{\on{bal}}$ or $\on{H}\_{\on{wei}}$, the density of *f* such that the distinguished and Δ-distinguished $\on{SL}\_3(\Q)$-orbits on $\on{Res}^{-1}(f) \cap W(\Q)$ coincide is zero. Let $f \in U\_{a,d}(\Z)$ be of nonzero discriminant, and suppose that the distinguished and Δ-distinguished $\on{SL}\_3(\Q)$-orbits on $\on{Res}^{-1}(f) \cap W(\Q)$ coincide. Then the same must also be true of the distinguished and Δ-distinguished $\on{SL}\_3(\Q(\sqrt{a}))$-orbits with resolvent *a*− 1*f*. Because *a*− 1*f* is monic, a pair $(A,B) \in \on{Res}^{-1}(a^{-}f) \cap W(\Q(\sqrt{a}))$ is distinguished if and only if the associated binary quartic form *g* is reducible over $\Q(\sqrt{a})$ (see §[sec-sel]). For the explicit representative of the Δ-distinguished $\on{SL}\_3(\Q(\sqrt{a}))$-orbit with resolvent *a*− 1*f* given in , the associated binary quartic form is as follows: $$\label{eq-binquartdel} \frac{d}{4a}x^4 - \frac{b}{2a}x^2y^2 + 2xy^3 + \left( \frac{b^2}{16ad} - \frac{c}{4ad} \right)y^4.$$ Since the binary quartic form in  is irreducible over the function field $\Q(\sqrt{a})(b,c)$, the proposition follows from Hilbert’s Irreducibility Theorem. We now describe Δ-distinguished orbits over $\R$: [prop-zeroset] Let $f \in U\_{a,d}(\R)$ be of nonzero discriminant. Then any pair (*A*, *B*) lying in the Δ-distinguished $\on{SL}\_3(\R)$-orbit on $\on{Res}^{-1}(f) \cap W(\R)$ is such that the zero sets of *A* and *B* have at most 2 $\R$-points of intersection. Let $\wt{f}(x,y) \defeq a^{-1}f(x-by/3a,y/\sqrt[3]{a^{-1}f(-b/3a,1)})$. The desired property holds for the Δ-distinguished orbit on $\on{Res}^{-1}(f) \cap W(\R)$ if and only if it holds for the Δ-distinguished orbit on $\on{Res}^{-1}(\wt{f}) \cap W(\R)$. Because $\wt{f}$ is monic, the desired property holds for $(A,B) \in \on{Res}^{-1}(\wt{f}) \cap W(\R)$ if and only if the associated binary quartic form *g* has at most 2 real roots. If we write $\wt{f}(x,y) = x^3 + cx \pm 1$, then for the explicit representative of the Δ-distinguished $\on{SL}\_3(\R)$-orbit with resolvent $\wt{f}$ given in , the associated binary quartic form is given by  to be the following: $$\label{eq-binquartdel2} \pm \frac{1}{4} x^4 + 2xy^3 - \frac{c}{4}y^4.$$ It follows from Descarte’s rule of signs that the binary quartic form  has at most 2 real zeros, from which the proposition follows. The final result of this section, which describes the $\R$-solubility of the Δ-distinguished orbit, is a corollary of (the proof of) Proposition [prop-zeroset]: [cor-rsol] Let $f \in U(\R)$ be monic. Then the Δ-distinguished $\on{SL}\_3(\R)$-orbit on $\on{Res}^{-1}(4f) \cap W^\vee(\R)$ is $\R$-soluble if and only if $\on{Disc}(f) < 0$ or $\on{Disc}(f) > 0$ and *f*(0, 1) > 0. The associated binary quartic form  has 2 real roots precisely when $\on{Disc}(f) < 0$; on the other hand, when $\on{Disc}(f) > 0$, this quartic is definite, and it is positive-definite if and only if *d* = *f*(0, 1) > 0. Counting integral points on symmetric varieties =============================================== In this section, we start by defining the notion of a symmetric variety (see §[sec-notset]). We then obtain an asymptotic formula with effective error term for the number of lattice points in skews of families of homogeneously expanding regions in certain symmetric varieties (see §[sec-maincounter]). We also prove a version of this formula for lattice points satisfying sets of congruence conditions (see §[sec-congrconds]). Notation and setup ------------------ Let *G* be a connected almost simple algebraic group defined over $\Z$, and let *V* be a finite-dimensional representation of *G*, defined over $\Z$, with finite kernel. A sub-algebraic-group *H* ⊂ *G* defined over $\Z$ is said to be a *symmetric $\Z$-subgroup* if the identity component of *H* is the identity component of the fixed points of some involution *σ* of *G* defined over $\Z$. Let $v\in V(\Z)$ be such that the stabilizer of *v* in *G* is a symmetric $\Z$-subgroup *H**v* ⊂ *G*. Then $\mc{S}\_v \defeq G/H \simeq Gv$ is said to be a *symmetric variety*; note that $\mc{S}\_v$ is in fact a subscheme of *V*. Let *S**v* be the functor that assigns to a ring *R* the quotient $S\_v(R) \defeq G(R)/H\_v(R)$. Then *S**v*(*R*) = *G*(*R*)/*H**v*(*R*) ≃ *G*(*R*)*v* for any *R*; when *R* = *K* is a field, *S**v*(*K*) is said to be a *symmetric *K*-variety*. Note that the functor *S**v* is *not* to be confused with the functor of points of $\mc{S}\_v$. Indeed, regard *G*, *H**v*, and $\mc{S}\_v$ as sheaves on the category of *R*-algebras via their functors of points. Then the short exact sequence $$\label{eq-exactgroups} 0 \to H\_v \overset{\iota}\longrightarrow G \overset{\pi}\longrightarrow \mc{S}\_v \to 0$$ of sheaves induces the following short exact sequence in fppf cohomology: $$\label{eq-galois} 0 \to S\_v(R) \to \mc{S}\_v(R) \to \ker\big(H\_{\on{fppf}}^1(\on{Spec} R, H\_v) \to H\_{\on{fppf}}^1(\on{Spec} R, G)\big) \to 0.$$ We see that the set $\ker\big(H\_{\on{fppf}}^1(\on{Spec} R, H\_v) \to H\_{\on{fppf}}^1(\on{Spec} R, G)\big)$ is in bijection with the set of orbits of the action of *G*(*R*) on $\mc{S}\_v(R)$, which is not necessarily transitive, and the subset $S\_v(R) \subset \mc{S}\_v(R)$ constitutes just *one* of these orbits, namely the orbit containing *v*. It is a remarkable fact, due to Borel and Harish-Chandra , that when $R = \R$, and even when $R = \Z$, the set of *G*(*R*)-orbits on $\mc{S}\_v(R)$ is finite. Let d*g* and d*h* respectively denote *volume forms* (i.e., generators of the 1-dimensional vector space of left-invariant top-degree differential forms) on *G* and *H**v* defined over $\Q$. This naturally yields a *G*-invariant quotient measure d*s* on $\mc{S}\_v$; in what follows, we often restrict this quotient measure to the subset *S**v*(*R*) ≃ *G*(*R*)*v*, where $R = \R$ or $\Z\_p$ for a prime *p*. We write $\tau\_{G,\infty} \defeq \on{Vol}(G(\R)/G(\Z))$ and $\tau\_{v,\infty} \defeq \on{Vol}(H\_v(\R)/H\_v(\Z))$, and we write $\tau\_{G,p} \defeq \on{Vol}(G(\Z\_p))$ and $\tau\_{v,p} \defeq \on{Vol}(H\_v(\Z\_p))$. Main counting results --------------------- The main theorem of this section is the following: [][main effective counting] Let $\Gamma\subset G(\R)$ be a lattice, denote $\Gamma\cap H\_v(\R)$ by Γ*H**v*, and assume $\on{Vol}(H\_v(\R)/\Gamma\_{H\_v}) < \infty$. Let $\cB\subset V(\R)$ be a nonempty open bounded set with smooth boundary. Let $X\cB$ be dilation of $\cB$ by *X* > 0, and write $\cB\_X(v) = X\cB \cap G(\R)v$. Then for any $\gamma \in G(\R)$, we have $$\#(\Gamma v\cap \gamma \cB\_X )=\frac{\Vol(H\_v(\R)/\Gamma\_{H\_v})}{\Vol( G(\R)/\Gamma)}\Vol\big(\cB\_X(v)\big)+E\_v(X,\gamma),$$ where $E\_v(X,\gamma) \defeq O(\Vol(\cB\_X(v))^{1-\delta}h(\gamma)^M)$ for some *δ*, *M* > 0. Here, *h*(*γ*) denotes the maximum of the absolute values of the entries of *γ* and *γ*− 1, considered as elements of $\GL(V)$, and the implied constant is independent of *γ* and *X*.[3](#fn3) The theorem follows directly from the work of Benoist–Oh, as we now explain. We first recall the necessary notation from. Set $\cB\_X \defeq \cB\_X(v)$ for the sake of brevity. Let *ɛ* > 0, and let $B\_\varepsilon \subset G(\R)$ be the closed ball of radius *ɛ* centered at $1 \in G(\R)$. Let $(\gamma\cB\_X)\_\varepsilon^+ \defeq B\_{\varepsilon}\gamma \cB\_X$ and $(\gamma\cB\_X)\_\varepsilon^- \defeq \bigcap\_{g \in B\_{\varepsilon}} g\gamma\cB\_X$. Then it is shown in that there exists *p* > 0 such that, for all sufficiently small *ɛ*, the desired asymptotic holds up to an error bounded by $$\label{eq-initbound} \varepsilon^{-p} \int\_{s \in (\gamma \mc{B}\_X)\_{\varepsilon}^+} \lVert s \rVert^{-k} \mathrm{d}s + \Vol\big((\gamma\cB\_X)\_\varepsilon^+ \smallsetminus (\gamma\cB\_X)\_\varepsilon^-\big),$$ for any *k* > 0; here, ∣∣ − ∣∣ denotes any fixed Euclidean norm on $V(\R)$. Now, the family $\cB\_X$ is effectively well-rounded (see ). Indeed, effective well-roundedness is verified in  in the special case where $\mc{B}$ is a Euclidean ball centered at the origin; the general case can be proven by adapting this argument and applying the asymptotic expansion for $\Vol(\cB\_X)$ provided by. As a consequence of effective well-roundedness, we have that $$\int\_{s \in (\gamma \cB\_X)\_\varepsilon^+} \lVert s \rVert^{-k} \mathrm{d}s \le \int\_{s \in \gamma(\cB\_X)\_{\varepsilon'}^+} \lVert s \rVert^{-k} \mathrm{d}s \le \lVert \gamma^{-1} \rVert^{k} \int\_{s \in (\cB\_X)\_{\varepsilon'}^+} \lVert s \rVert^{-k} \mathrm{d}s \ll h(\gamma)^{M'}\on{Vol}(\mc{B}\_X)^{1- \delta'},$$ for some *M*ʹ, *δ*ʹ > 0; here, *ɛ*ʹ ≪ *h*(*γ*)*M*ʺ*ɛ* is chosen so that $(\gamma \mc{B}\_X)\_{\varepsilon}^+ \subset \gamma(\mc{B}\_X)\_{\varepsilon'}^+$ and $(\gamma \mc{B}\_X)\_{\varepsilon}^- \supset \gamma(\mc{B}\_X)\_{\varepsilon'}^-$. This gives a bound on the first term in ; as for the second term, we have $(\gamma\cB\_X)\_\varepsilon^+ \smallsetminus (\gamma\cB\_X)\_\varepsilon^- \subset \gamma \left( (\cB\_X)\_{\varepsilon'}^+ \smallsetminus (\cB\_X)\_{\varepsilon'}^- \right)$, so it follows that $$\Vol\big((\gamma\cB\_X)\_\varepsilon^+ \smallsetminus (\gamma\cB\_X)\_\varepsilon^-\big) \ll \on{Vol}\big((\cB\_X)\_{\varepsilon'}^+ \smallsetminus (\cB\_X)\_{\varepsilon'}^-\big).$$ As another consequence of effective well-roundedness, we have for some *κ* > 0 that $$\on{Vol}\big((\cB\_X)\_{\varepsilon'}^+ \smallsetminus (\cB\_X)\_{\varepsilon'}^-\big) \ll {\varepsilon'}^\kappa \on{Vol}(\cB\_X).$$ Taking $\varepsilon = h(\gamma)^{-\frac{\kappa (M''-M')}{p+\kappa}}\on{Vol}(\mc{B}\_X)^{-\frac{\delta'}{p + k}}$ then yields the theorem. We note that Theorem [main effective counting] was proven in the case *γ* = 1 by Benoist–Oh, who effectivised the earlier work of Eskin–McMullen. We adapt their proof to the setting of arbitrary *γ*, and we obtain an effective dependence on *γ* in the error term. Before we give the proof, we recall the main technical result that we need from , known as *effective equidistribution*: [] [effective equi] Fix a Euclidean norm ∥ ⋅ ∥ on $V(\R)$. Then there exists *m* ∈ N and *r* > 0 such that for any compact subset $E \subset G(\R)/\Gamma$, any function $\psi \in C\_c^\infty(G(\R)/\Gamma)$ with support in *E*, and any element $g\_0 \in G(\R)$, we have $$\left| \int\_{h \in H\_v(\R)/\Gamma\_{H\_v}} \psi(g\_0h) \mathrm{d}h - \int\_{g \in G(\R)/\Gamma} \psi(g) \mathrm{d}g \right| \ll \big(M\_\psi + S\_m(\psi)\big) \lVert vg\_0 \rVert^{-r},$$ where *M**ψ* = max(∥*ψ*∥∞, *C**ψ*) with *C**ψ* the Lipschitz constant of *ψ*, where *S**m*( ⋅ ) denotes the Sobolev norm of order *m*, and where the implied constant is independent of the choices. This follows immediately from the proof of , where the bound is stated without the term *M**ψ*, but with an implied constant depending on *E*. However, their dependence on *E* stems from their use of the inequality *M**ψ* ≪ *E**S*2*m*(*ψ*). Adding the additional error term of *M**ψ* to the right hand side of Theorem [effective equi] removes the dependence on *E*. [Proof of Theorem [main effective counting]] We adapt the proof given by Benoist–Oh  in the case *γ* = 1. Set $\cB\_X \defeq \cB\_X(v)$ for ease of notation, and let $F\_{\gamma\cB\_X} \colon G(\R)/\Gamma \to \mathbb{N}$ be the function that sends the class of $g \in G(\R)$ to $\#(g\Gamma v \cap \gamma\cB\_X)$. Then we want to derive an asymptotic formula for $F\_{\gamma \cB\_X}(1)$. Let *ɛ* > 0, and let $B\_\varepsilon \subset G(\R)$ be the closed ball of radius *ɛ* centered at $1 \in G(\R)$. Choose *ɛ* to be small enough that *B**ɛ* is contained within a fundamental domain for $G(\R)/\Gamma$. By , there exists *p* > 0 and a smooth function $\alpha\_\varepsilon \colon G(\R) \to \R$ supported in *B**ɛ* such that $\int\_{g \in G(\R)} \alpha\_\varepsilon(g) \mathrm{d}g = 1$ and *S**m*(*α**ɛ*) ≤ *ɛ*− *p*. Now, define $B\_{\varepsilon,\gamma} \defeq \gamma B\_\varepsilon \gamma^{-1}$ and $\alpha\_{\varepsilon,\gamma}(x) \defeq \alpha(\gamma^{-1} x \gamma)$, and define $(\gamma\cB\_X)\_\varepsilon^+ \defeq B\_{\varepsilon,\gamma}\gamma \cB\_X$ and $(\gamma\cB\_X)\_\varepsilon^- \defeq \bigcap\_{g \in B\_{\varepsilon,\gamma}} g\gamma\cB\_X$. Note that $$(\gamma\cB\_X)\_\varepsilon^+ = \gamma B\_{\varepsilon}\cB\_X = \gamma (\cB\_X)\_\varepsilon^+$$ and $$(\gamma\cB\_X)\_\varepsilon^- = \bigcap\_{g \in B\_\varepsilon} (\gamma g \gamma^{-1})(\gamma\cB\_X) = \bigcap\_{g \in B\_\varepsilon} \gamma g \cB\_X = \gamma \bigcap\_{g \in B\_\varepsilon} g \cB\_X = \gamma (\cB\_X)\_\varepsilon^{-}.$$ In particular, the volumes $\Vol((\gamma\cB\_X)\_\varepsilon^\pm) = \Vol((\cB\_X)\_\varepsilon^\pm)$ are independent of *γ*. By analogy with $F\_{\gamma\mc{B}\_X}$, let $F\_{\gamma \cB\_X}^{\varepsilon,\pm} \colon G(\R)/\Gamma \to \mathbb{N}$ be defined by $g \mapsto \#(g\Gamma v \cap (\gamma \cB )\_\varepsilon^\pm)$. Define $$\langle F\_{\gamma \cB\_X}^{\varepsilon,\pm}, \alpha\_{\varepsilon,\gamma} \rangle \defeq \int\_{g \in G(\R)/\Gamma} F\_{\gamma \cB\_X}^{\varepsilon,\pm}(g) \alpha\_{\varepsilon,\gamma}(g) \mathrm{d}g = \int\_{s \in (\gamma \cB\_X)\_\varepsilon^\pm} \Big(\int\_{h \in H\_v(\R)/\Gamma\_{H\_v}} \alpha\_{\varepsilon,\gamma}(sh) \mathrm{d}h \Big) \mathrm{d}s.$$ We have that $F\_{\gamma \cB\_X}^{\varepsilon,-}(g) \le F\_{\gamma\cB\_X}(1) \le F\_{\gamma \cB\_X}^{\varepsilon,+}(g)$ for all *g* ∈ *B**ɛ*, *γ*, and it follows that $$\label{eq-fundamental} \langle F\_{\gamma\cB\_X}^{\varepsilon,-},\alpha\_{\varepsilon,\gamma} \rangle \le F\_{\gamma\cB\_X}(1) \le \langle F\_{\gamma\cB\_X}^{\varepsilon,+},\alpha\_{\varepsilon,\gamma} \rangle.$$ Given the inequalities , it now suffices to prove an upper bound of the form $$\big|\langle F\_{\gamma\cB\_X,\varepsilon}^{\pm}, \alpha\_{\varepsilon,\gamma} \rangle - \Vol(\gamma\cB\_X)\big| \ll \Vol(\cB\_X)^{1-\delta} h(\gamma)^M.$$ An application of effective equidistribution, as stated in Theorem [effective equi], yields the following estimate: $$\begin{aligned} & \big|\langle F\_{\gamma\cB\_X,\varepsilon}^{\pm}, \alpha\_{\varepsilon,\gamma} \rangle - \Vol(\gamma\cB\_X)\big| \ll \nonumber \\ & \qquad \big(M\_{\alpha\_{\varepsilon,\gamma}} + S\_m(\alpha\_{\varepsilon,\gamma})\big)\int\_{s \in (\gamma\cB\_X)\_\varepsilon^+} \lVert s \rVert^{-k} \mathrm{d}s + \Vol\big((\gamma\cB\_X)\_\varepsilon^+\big) - \Vol\big((\gamma\cB\_X)\_\varepsilon^-\big) = \nonumber \\ & \qquad\qquad \big(M\_{\alpha\_{\varepsilon,\gamma}} + S\_m(\alpha\_{\varepsilon,\gamma})\big)\int\_{s \in (\cB\_X)\_\varepsilon^+} \lVert \gamma s \rVert^{-k} \mathrm{d}s + \Vol\big((\cB\_X)\_\varepsilon^+\big) - \Vol\big((\cB\_X)\_\varepsilon^-\big) \label{eq-fourterms}\end{aligned}$$ We proceed by separately bounding each of the four terms *M**α**ɛ*, *γ*, *S**m*(*α**ɛ*, *γ*), $\int\_{s \in (\cB\_X)\_\varepsilon^+} \lVert \gamma s \rVert^{-k} \mathrm{d}s$, and $\Vol((\cB\_X)\_\varepsilon^+) - \Vol((\cB\_X)\_\varepsilon^-)$ that appear in . To bound the second and fourth terms, we use the fact that the family $\{\cB\_X\}$ is *effectively well-rounded*, which means that the following two properties are satisfied: * There exists *κ* > 0 such that uniformly for all *X* > 0 and all *ɛ* ∈ (0, 1) we have $$\Vol\big((\cB\_X)\_\varepsilon^+\big) - \Vol\big((\cB\_X)\_\varepsilon^-\big) \ll \varepsilon^\kappa \Vol(\cB\_X).$$ * For any *k* > 0, there exists *δ* > 0 such that uniformly for all *X* > 0 and *ɛ* ∈ (0, 1) we have $$\int\_{ s\in (\cB\_X)\_\varepsilon^+} \lVert s \rVert^{-k} \mathrm{d}s \ll \Vol(\cB\_X)^{1-\delta}.$$ Effective well-roundedness is verified in  in the special case where $\mc{B}$ is a Euclidean ball centered at the origin; the general case can be proven by adapting this argument and applying the asymptotic expansion for $\Vol(\cB\_X)$ provided by. We are now ready to bound each of the four terms in , which we handle in order as follows: 1. We have ∥*α**ɛ*, *γ*∥∞ = ∥*α**ɛ*∥∞ and *C**α**ɛ*, *γ* ≪ *h*(*γ*)*C**α**ɛ*. Thus, *M**α**ɛ*, *γ* ≪ *h*(*γ*)*M**α**ɛ*, and using the bound , we deduce that *M**α**ɛ*, *γ* ≪ *h*(*γ*)*S*2*m*(*α**ɛ*) ≪ *h*(*γ*)*ɛ*− *p*.[4](#fn4) 2. We have $\int\_{s \in (\cB\_X)\_\varepsilon^+} \lVert \gamma s \rVert^{-k} ds \ll h(\gamma)^{M'} \Vol(\cB\_X)^{1-\delta}$, because for some *M*ʹ > 0 we have the bound ∥*γ**s*∥− 1 ≪ *h*(*γ*)*M*ʹ∥*s*∥− 1, and because of the effective well-roundedness of $\cB\_X$. 3. We claim that *S**m*(*α**ɛ*, *γ*) ≪ *h*(*γ*)*M*ʺ*S**m*(*α**ɛ*) for some *M*ʺ > 0. To prove this claim, we work locally using partitions of unity. Set *y* = *γ**x**γ*− 1, and write *y* = (*y**i**j*), *x* = (*x**i**j*), *γ* = (*γ**i**j*), and $\gamma^{-1} = (\widetilde{\gamma}\_{ij})$. Using Einstein notation, we have $y\_{kl} = \gamma\_{ki}x\_{ij}\widetilde{\gamma}\_{jl}$, and it follows that $$\left.\frac{\d\alpha\_{\varepsilon,\gamma}}{\d x\_{ij}}\right|\_{x} = \left.\frac{\d \alpha\_\varepsilon}{\d y\_{kl}}\right|\_{\gamma x\gamma^{-1}} \left.\frac{\d y\_{kl}}{\d x\_{ij}}\right|\_{x} = \gamma\_{ki} \widetilde{\gamma}\_{jl}\left.\frac{\d \alpha\_\varepsilon}{\d y\_{kl}}\right|\_{\gamma x \gamma^{-1}}.$$ Thus, the partial derivatives of *α**ɛ*, *γ* of any given order ℓ, evaluated at *x*, depend linearly on the partial derivatives of *α**ɛ* of order ℓ, evaluated at *γ**x**γ*− 1, with the coefficients being homogeneous polynomials of degree 2ℓ in the entries of *γ* and *γ*− 1. Taking *M*ʺ = 2ℓ, we see that *S**m*(*α**ɛ*, *γ*) ≪ *h*(*γ*)*M*ʺ*S**m*(*α**ɛ*), as claimed; furthermore, we have *S**m*(*α**ɛ*, *γ*) ≪ *h*(*γ*)*M*ʺ*ɛ*− *p*. 4. We have $\Vol((\cB\_X)\_\varepsilon^+) - \Vol((\cB\_X)\_\varepsilon^-) \ll \varepsilon^\kappa \Vol(\cB\_X)$ by the effective well-roundedness of $\cB\_X$. Substituting the four bounds obtained above into , we find that $$\big|\langle F\_{\gamma\cB\_X,\varepsilon}^{\pm}, \alpha\_{\varepsilon,\gamma} \rangle - \Vol(\gamma\cB\_X)\big| \ll h(\gamma)^M \varepsilon^{-p} \Vol(\cB\_X)^{1-\delta}+\varepsilon^\kappa \Vol(\cB\_X).$$ Taking *δ*0 = *δ*/(1 + *κ*/*p*) and $\varepsilon = \Vol(\cB\_X)^{-\delta/\kappa}$ and applying  yields the theorem. Take $G = \SL\_n$; take *V* = Sym2(*n*) to be the space of *n* × *n* symmetric matrices with the action given by *g* ⋅ *A*ʹ = *g**A*ʹ*g**T*; take $A \in V(\Z)$ of nonzero determinant; and take $H = \on{SO}\_A$, the orthogonal group scheme associated to *A*. As is well-known (see, e.g., ), these choices fit into the setup outlined in §[sec-notset]. Indeed, $\SL\_n$ is a connected almost simple algebraic group over $\Z$, and *H**A* = SO*A* is the fixed points of the involution *σ*(*g*) = *A*(*g*− 1)*T**A*− 1 on $\SL\_n$. Let *a* = det*A*, and denote by *V**a* ⊂ *V* the subvariety of matrices with determinant *a*. The following lemma shows that the subscheme $\mc{S}\_A \subset V$ is precisely *V**a*: The map of $\Z$-schemes $\on{SL}\_3 \to V\_a$ defined by *g* ↦ *g**A**g**T* induces an isomorphism of $\Z$-schemes $\mc{S} \simeq V\_a$. As $\mc{S}\_A$ and *V**a* are flat over $\Z$, it suffices to prove the desired isomorphism after basechanging to *K*, where $K = \Q$ or $\F\_p$ for a prime *p*. Fix an algebraic closure $\ol{K}$ of *K*. As the map $\on{SL}\_3 \to V\_a$ is defined by $\on{SO}\_A$-invariant polynomial functions, it induces a map $\mc{S}\_A \to V\_a$. We claim that this induced map is a bijection on $\ol{K}$-points. Indeed, the map $\on{SL}\_3(\ol{K}) \to V\_a(\ol{K})$ is surjective since $\on{SL}\_3(\ol{K})$ acts transitively on $V\_a(\ol{K})$, and two elements $g\_1, g\_2 \in \on{SL}\_3(\ol{K})$ have the same image if and only if $g\_1^{-1}g\_2 \in \on{SO}\_A(\ol{K})$, so we obtain a bijection $S\_A(\ol{K}) \to V\_a(\ol{K})$. But by the exact sequence , we have $\mc{S}\_A(\ol{K}) = S\_A(\ol{K})$, as claimed. It now follows from Zariski’s Main Theorem  that the map $\mc{S}\_A \to V\_a$ defines an isomorphism of $\ol{K}$-varieties; then, the theory of descent  implies that the map $\mc{S}\_A \to V\_a$ is also an isomorphism at the level of *K*-varieties. By the result of Borel and Harish-Chandra referenced in §[sec-notset], obtaining asymptotics for the number of integral points of $\mc{S}\_A(\Z) = V\_a(\Z)$ that lie in skewed expanding domains of $V(\R)$ reduces to obtaining asymptotics for $\#\{ \SL\_n(\Z) A\cap \gamma X\cB \}$ for arbitrary elements $A\in V\_a(\Z)$, which is precisely the problem solved in Theorem [main effective counting]. We thus obtain the following corollary to the theorem: [cor:nqf] For any $a \in \Z \smallsetminus \{0\}$, we have $$\displaystyle\#\big(V\_a(\Z) \cap \gamma X\cB\big)=\displaystyle \sum\_{A\in\frac{V\_a(\Z)}{\SL\_n(\Z)}} \frac{\tau\_{A,\infty}}{\tau\_{n,\infty}}\Vol\big(\cB\_X(A)\big) + E\_{A}(X,\gamma).$$ Congruence conditions --------------------- Assume now that *G* is smooth over $\Z$. We next prove a version of Theorem [main effective counting] for counting lattice points defined by finitely many congruence conditions. Given a $\Z$-scheme *S* equipped with a nowhere vanishing top-degree differential form defined over $\Q$, a function $\phi \colon S(\Z) \to [0,1]$ is said to be *defined by congruence conditions* if, for every prime *p*, there exists a corresponding function $\phi\_p \colon S(\Z\_p) \to [0,1]$ such that: 1. For every $s \in S(\Z)$, the product ∏*p**ϕ**p*(*s*) converges and is equal to *ϕ*(*s*); 2. For every *p*, the function *ϕ**p* is locally constant outside a closed subset of $S(\Z\_p)$ of volume zero with respect to the chosen differential form. If *ϕ**p* ≡ 1 for all but finitely many *p*, then we say that *ϕ* is *defined by finitely many congruence conditions*. We then have the following result: [prop-singcong] Let $\phi \colon \mc{S}\_v(\Z) \to [0,1]$ be a function defined by finitely many congruence conditions. Then we have $$\sum\_{v' \in \mc{S}\_v(\Z) \cap \gamma X\cB}\phi(v') = \sum\_{v'\in\frac{\mc{S}\_v(\Z)}{G(\Z)}} \frac{\tau\_{H\_{v'},\infty}}{\tau\_{G,\infty}}\Vol\big(\cB\_X(v')\big) \Vol\big(S\_{v'}(\widehat{\Z})\big)^{-1}\int\_{s \in S\_{v'}(\widehat{\Z})} \phi(s) \mathrm{d}s +E\_{v'}(X,\gamma).$$ We assume that *ϕ* is defined modulo *m* for some integer *m* > 0, and we define $\Gamma\subset G(\Z)$ to be the congruence subgroup of matrices reducing to the identity modulo *m*. In particular, the action of Γ on $\mc{S}\_v(\Z)$ preserves *ϕ*. Given $v' \in V(\Z)$ and $g \in G(\Z)$, we can choose the measures on *H**v* and *H**g**v* to correspond to each other via the map *H**v* → *H**g**v* defined by conjugation *h* ↦ *g**h**g*− 1. This implies in particular that *τ**v*, *ν* = *τ**γ**v*, *ν* for all places *ν* of $\Q$. Then an application of Theorem [main effective counting] yields that $$\begin{aligned} \sum\_{v'\in \mc{S}\_v(\Z) \cap \gamma X\cB}\phi(v') &= \sum\_{v'\in \frac{\mc{S}\_v(\Z)}{\Gamma}}\phi(v')\times \#(\Gamma v' \cap \gamma X) \\ &= \sum\_{v'\in \frac{\mc{S}\_v(\Z)}{\Gamma}}\phi(v') \frac{\Vol(H\_{v'}(\R)/\Gamma\_{H\_{v'}})}{\Vol(G(\R)/\Gamma)}\Vol\big(\cB\_X(v')\big)+E\_{v'}(X,\gamma) \\ &= \sum\_{v'\in\frac{\mc{S}\_v(\Z)}{G(\Z)}} \frac{\tau\_{H\_{v'},\infty}}{\tau\_{G,\infty}}\Vol\big(\cB\_X(v')\big) \sum\_{g\in G(\Z)/\langle \Gamma, H\_{v'}(\Z) \rangle} \phi(g v')\frac{[H\_{g v'}(\Z):\Gamma\_{H\_{g v'}}]}{[G(\Z):\Gamma]} +E\_{v'}(X,\gamma).\end{aligned}$$ Next, since *ϕ* is defined modulo *m*, we have $$\Vol\big(S\_{v'}(\widehat{\Z})\big)^{-1}\int\_{s\in S\_{v'}(\widehat{\Z})}\phi(s)\mathrm{d}s=\big(\#S\_{v'}(\Z/m\Z)\big)^{-1}\sum\_{s\in S\_{v'}(\Z/m\Z)}\phi(s).$$ Writing $S\_{v'}(\Z/m\Z)$ as $G(\Z/m\Z)/H\_{v'}(\Z/m\Z)$, we see that it now suffices to prove that $$\label{eq:tempcong} \sum\_{g \in G(\Z)/\langle \Gamma, H\_{v'}(\Z) \rangle} \phi(g v')[H\_{g v'}(\Z): \Gamma\_{H\_{g v'}}]= \sum\_{g\in G(\Z/m\Z)/H\_{v'}(\Z/m\Z)}\phi(g v') \times \#H\_{g v'}(\Z/m\Z).$$ but this equality follows since $[H\_{g v'}(\Z):\Gamma\_{H\_{g v'}}]$ and $\#H\_{g v'}(\Z/m\Z)$ are independent of *g* and since we have $G(\Z)/\langle \Gamma, H\_{v'}(\Z) \rangle=G(\Z/m\Z)/(H\_{v'}(\Z)/\Gamma\_{H\_{v'}})$. We now specialize to the case when $G=\SL\_n$ and *V* = Sym2(*n*) and prove the following analogue of Corollary [cor:nqf] for counting symmetric matrices of fixed determinant that satisfy finitely many congruence conditions: Let $a \in \Z \smallsetminus \{0\}$, and let $\phi \colon V\_a(\Z) \to [0,1]$ be a function defined by finitely many congruence conditions. Then we have $$\sum\_{A \in V\_a(\Z) \cap \gamma X\cB}\phi(A) = \sum\_{A\in\frac{V\_a(\Z)}{\SL\_n(\Z)}} \frac{\tau\_{A,\infty}}{\tau\_{\on{SL}\_n,\infty}}\Vol\big(\cB\_X(A)\big) \Vol\big(S\_{A}(\widehat{\Z})\big)^{-1}\int\_{s \in S\_{A}(\widehat{\Z})} \phi(s) \mathrm{d}s +E\_{A}(X,\gamma).$$ Counting integral orbits on pairs of ternary quadratic forms with fixed determinants ==================================================================================== Fix $a,d \in \Z \smallsetminus \{0\}$, and let *W**a*, *d* ⊂ *W* denote the subvariety of pairs (*A*, *B*) satisfying det*A* = *a*/4 and det*B* =  − *d*/4. Recall from §[sec-stat] that we define two heights on $U\_{a,d}(\R)$, the balanced height and the weighted height. We lift these heights to $W\_{a,b}(\R)$ by setting $\rmH\_\circ(A,B) \defeq \rmH\_\circ(\Res(A,B))$ for each $\circ \in \{\on{bal},\on{wei}\}$. For a real number *X* > 0 and a subset $T\subset U\_{a,d}(\R)$ or $T\subset W\_{a,d}(\R)$, we denote the set of elements in *T* with balanced height (resp., weighted height) less than *X* by $T^\bx\_X$ (resp., $T^\wei\_X$). In this section, our goal is to determine asymptotics for the number of non-distinguished $\SL\_3(\Z)$-orbits on $W\_{a,d}(\Z)$ having bounded balanced (resp., weighted) height. To state the result precisely, we first need to define two notions, namely splitting types over $\R$ and local masses. **Splitting types over $\R$.** The analogue of an $\SL\_3(\Z\_p)$-invariant weight function at the infinite place is given by the notion of a splitting type. For a set $T\subset U(\R)$, let $T^{\pm} \defeq \{f\in T : \pm\on{Disc}(f)>0\}$. We say that a pair $(A,B)\in W(\R)$ with resolvent of nonzero discriminant has *splitting type* (1111), (112), or (22) if the zero loci in $\P^2(\R)$ of the quadratic forms *A* and *B* intersect in four real points, two real points and one set of complex conjugate points, or two sets of complex conjugate points respectively. Note that the splitting type of $(A,B) \in W(\R)$ remains unchanged under the action of $\on{SL}\_3(\R)$. From, we obtain the following useful description of the splitting types of the orbits of $\on{SL}\_3(\R)$ on $W(\R)$ having resolvent $f^{\pm} \in U(\R)^{\pm}$: We have the following two points: * The set of elements in $W(\R)$ having resolvent *f*+ consists of three $\SL\_3(\R)$-orbits, one with splitting type (1111) and three with splitting type (22). The stabilizer in $\SL\_3(\R)$ of each element in any of these orbits has size *σ*+ = 4. * The set of elements in $W(\R)$ having resolvent *f*− consists of a single $\SL\_3(\R)$-orbit with splitting type (112). The stabilizer in $\SL\_3(\R)$ of each element in this orbit has size *σ*− = 2. Moreover, from, we may canonically distinguish between the three (22)-orbits. The first consists of pairs $(A,B)\in W(\R)$, where *A* is anisotropic; the second consists of pairs (*A*, *B*), where *A* is isotropic and *B* takes positive values on the zeroes of *A*; and the third consists of pairs (*A*, *B*), where *A* is isotropic and *B* takes negative value on the zeroes of *A*. We denote these refinements of the splitting type (22) by (22#), (22 + ), and (22 − ), respectively. For any set $T\subset W(\R)$ and splitting type (*i*), denote the set of elements in *T* with splitting type (*i*) by *T*(*i*). **Local masses.** Given $A,B\in V(\Z)$ of nonzero determinant, write $S\_{AB} \defeq S\_A\times S\_B$. Next, we introduce the notion of local masses of pairs (*A*, *B*) over binary cubic forms. For each $f\in U\_{a,d}(\Z)$, each $(A,B)\in W\_{a,d}(\Z)$, each splitting type (*i*) at infinity, and each $\SL\_3(\Z)$-invariant function $\phi \colon W\_{a,d}(\Z) \to [0,1]$ defined by congruence conditions, define $$\begin{array}{rcl} \on{Mass}\_\infty\big(f;A,B;(i)\big)&:=&\displaystyle \sum\_{\substack{w\in\frac{S\_{AB}(\R)^{(i)} \cap \on{Res}^{-1}(f)}{\SL\_3(\R)}}}\frac{1}{\#\operatorname{Stab}\_{\SL\_3(\R)}(w)}; \\[.2in] \on{Mass}\_p\big(f;A,B;\phi\big)&:=&\displaystyle \sum\_{\substack{w\in\frac{S\_{AB}(\Z\_p)\cap \on{Res}^{-1}(f)}{\SL\_3(\Z\_p)}}}\frac{\phi\_p(w)}{\#\operatorname{Stab}\_{\SL\_3(\Z\_p)}(w)}. \end{array}$$ Given $k \in \Z \smallsetminus \{0\}$, let $\mc{G}\_k$ denote the set of genera of quadratic forms in $V\_k(\Z)$. Note that the masses defined above depend only on the genera of *A* and *B*. We are now in position to state the main result of this section, which is as follows: [th:sec4main] Let $\circ \in \{\on{bal},\on{wei}\}$, let (*i*) be a splitting type, and let $\phi \colon W\_{a,d}(\Z) \to [0,1]$ be an $\SL\_3(\Z)$-invariant function defined by finitely many congruence conditions. Then we have $$\sum\_{\substack{w\in \frac{W(\Z)^{(i),\circ}\_X}{\SL\_3(\Z)}\\w\text{ \rm non-dist.}}}\phi(w)\sim 4\sum\_{\substack{A\in \mc{G}\_a\\B\in \mc{G}\_{-d}}}\int\_{f\in U\_{a,d}(\R)^{\pm,\circ}\_X}\on{Mass}\_\infty\big(f;A,B;(i)\big)\mathrm{d}f\cdot\prod\_p\int\_{f\in U\_{a,d}(\Z\_p)}\on{Mass}\_p(f;A,B;\phi)\mathrm{d}f.$$ We now briefly outline the proof of Theorem [th:sec4main] in the setting of balanced height; the proof in the weighted height case is similar. Proving Theorem [th:sec4main] amounts to counting lattice points $(A,B) \in W\_{a,d}(\Z)$ in fundamental sets for the action of $\on{SL}\_3(\Z)$ on $W(\R)$. In §[sec-reductave], we define the relevant fundamental sets and apply Bhargava’s averaging method to express the count as an integral over $\gamma \in \on{SL}\_3(\Z) \backslash \on{SL}\_3(\R)$ of the number of non-distinguished points in a certain region $\mc{R}\_\gamma \subset \gamma W\_{a,d}(\R)^{(i)}$. In §[sec-cuspcut], we prove that the contribution to this integral from the cuspidal regions is negligible, and in §[sec-dister], we prove that, on average over *γ*, the region $\mc{R}\_\gamma$ contains negligibly many distinguished and Δ-distinguished points. Using the results of §[sec-countonsymmetricvarieties], we express the number of integer elements in $\mc{R}\_\gamma$ in terms of the volume of $\mc{R}\_\gamma$. We compute this volume via a Jacobian change-of-variables argument in §[sec-jacobianchvals]. Finally, we complete the proof of Theorem [th:sec4main] in §[sec-congr]. Reduction theory and averaging ------------------------------ There are two approaches to counting $\on{SL}\_3(\Z)$-orbits on $W\_{a,d}(\Z)$. We could do so directly, or we could realize the set of $\on{SL}\_3(\Z)$-orbits on $W\_{a,d}(\Z)$ as the union over $A \in V\_a(\Z)$ of the set of $\on{SO}\_A(\Z)$-orbits on $\{A\} \times V\_{-d}(\Z)$. We shall use both approaches; the first is amenable to working with the balanced height, whereas the second is amenable to working with the weighted height. Fix $A\in V\_a(\Z)$. The following proposition follows immediately from and gives fundamental sets for the action of $\on{SL}\_3(\R)$ (resp., $\on{SO}\_A(\R)$) on subsets of $W(\R)$ (resp., $\{A\} \times V(\R)$) having given splitting type: There exists continuous sections $$\begin{array}{rcl} s^{(i)}\_\bx:U(\R)^+&\to&W(\R)^{(i)} \;\;{\rm for}\; i\in\{0,2\#,2+,2-\}, \\[.1in] s^{(1)}\_\bx:U(\R)^-&\to&W(\R)^{(1)}, \\[.1in] s\_A^{(i)}:U\_a(\R)^+&\to& \big(\{A\} \times V(\R)\big)^{(i)} \;\;\;{\rm for}\; i\in\{0,2+,2-\} \mbox{ when $A$ is isotropic}, \\[.1in] s^{(2\#)}\_A:U\_a(\R)^+&\to& \big(\{A\} \times V(\R)\big)^{(2\#)} \mbox{ \,when $A$ is anisotropic}, \\[.1in] s^{(1)}\_A:U\_a(\R)^-&\to& \big(\{A\} \times V(\R)\big)^{(1)} \;\;\;\mbox{when $A$ is isotropic}, \end{array}$$ such that the following properties hold for all *f*: $\Res(s\_\bx(f))=f$ and $\Res(s\_A(f))=f$; the coefficients of $s\_\bx(f)$ are $\ll \rmH\_\bx(f)^{1/3}$; and the coefficients of *s**A*(*f*) are $\ll \rmH\_\wei(f)$. We denote the image of $s\_\bx^{(i)}$ by $R\_\bx^{(i)}$ and the image of *s**A*(*i*) by *R**A*(*i*). Next, we construct fundamental domains for the action of $\on{SL}\_3(\Z)$ on $\on{SL}\_3(\R)$ (resp., $\on{SO}\_A(\Z)$ on $\on{SO}\_A(\R)$). Consider the Iwasawa decomposition $\SL\_3(\R)=N\_3T\_3K\_3$, where *N*3 is the subgroup of lower-triangular unipotent matrices, *T*3 is the subgroup of diagonal matrices with nonnegative entries, and $K\_3 = {\mathrm{SO}}\_3(\R)$. We write elements in *T*3 as (*s*1, *s*2), where (*s*1, *s*2) corresponds to the diagonal matrix with diagonal entries *s*1− 2*s*2− 1, *s*1*s*2− 1, and *s*1*s*22. By a well-known result of Minkowski, we may choose a fundamental domain $\FF\_3$ for the action of $\SL\_3(\Z)$ on $\SL\_3(\R)$ that lies in a Siegel domain *N*3ʹ*T*3ʹ*K*3, where *N*3ʹ ⊂ *N*3 is a bounded subset and *T*3ʹ = {(*s*1, *s*2) ∈ *T*3ʹ : *s*1, *s*2 > *c*} for some constant *c* > 0. Then for *i* ∈ {0, 1, 2#, 2 + , 2 − }, the multiset $\FF\_3\cdot R\_\bx^{(i)}$ is generically a *σ**i*-fold cover of a fundamental set for the action of $\SL\_3(\Z)$ on $W(\R)^{(i)}$, where *σ*0 = *σ*2 + = *σ*2 − = *σ*2# = *σ*+ = 4, and *σ*1 = *σ*− = 2. Following, we choose a fundamental domain $\FF\_A$ for the action of ${\mathrm{SO}}\_A(\Z)$ on ${\mathrm{SO}}\_A(\R)$. When *A* is anisotropic over $\Q$, this domain $\FF\_A$ may be chosen to be compact. On the other hand, when *A* is isotropic, then *A* is $\SL\_3(\Q)$-equivalent to the element $A\_a \in S(\Z)$, where *A**a* is antidiagonal with entries 1/2,  − *a*, and 1/2. That is, there exists $g\_A\in\SL\_3(\Q)$ with *g**A**A**g**A**T* = *A**a*. We thus obtain the following isomorphisms: $$\begin{array}{rclrcl} m\_A\colon \{A\} \times V(\R)&\to& \{A\_a\} \times V(\R),\quad &(A,B)&\mapsto & (A\_a,g\_A Bg\_A^T); \\[.1in] m\_A'\colon {\mathrm{SO}}\_A(\R)&\to&{\mathrm{SO}}\_{A\_a}(\R),\quad &g&\mapsto& g\_A gg\_A^{-1}. \end{array}$$ Consider the Iwasawa decomposition $\on{SO}\_{A\_a}(\R) = N\_aT\_aK\_a$, where *N**a* is the subgroup of lower-triangular unipotent matrices, *T**a* is the subgroup of diagonal matrices with nonnegative entries, and *K**a* is a maximal compact subgroup. We write elements of *T**a* as *t*, where *t* corresponds to the diagonal matrix diagonal entries *t*− 2, 1, and *t*2. Choose a suitable Siegel domain S*a* = *N**a*ʹ*T**a*ʹ*K**a* for the action of ${\mathrm{SO}}\_{A\_a}(\Z)$ on ${\mathrm{SO}}\_{A\_a}(\R)$, where *N**a*ʹ ⊂ *N**a* is a bounded subset and *T**a*ʹ = {*t* ∈ *T**a* : *t* > *c*} for some constant *c* > 0. Let $\mathscr{S}\_A \defeq {m\_A'}^{-1}(\mathscr{S}\_a)$. By, there exists a fundamental domain $\FF\_A$ for the action of ${\mathrm{SO}}\_A(\Z)$ on ${\mathrm{SO}}\_A(\R)$ contained within a finite union of translates ⋃*i**g**i*S*A*, where each *g**i* belongs to ${\mathrm{SO}}\_A(\Q)$. That is, we may write $$\label{eq:FcA} \FF\_A\subset \bigcup\_{i}g\_ig\_A^{-1}\mathscr{S}\_ag\_A=g\_A^{-1}h\_i\mathscr{S}\_ag\_A,$$ where $h\_i \defeq m\_A'(g\_i)$. Then for *i* ∈ {0, 1, 2#, 2 + , 2 − }, the multiset $\FF\_A\cdot R\_A^{(i)}$ is generically a *σ**i*-fold cover of a fundamental set for the action of ${\mathrm{SO}}\_A(\Z)$ on $(\{A\} \times V(\R))^{(i)}$. Now, for any $\SL\_3(\Z)$-invariant subset $L \subset W\_{a,d}(\Z)$, define $$N\_{a,d}^{\bx}(L,X) \defeq \#\bigl(\SL\_3(\Z)\backslash L^{\nd,\bx}\_X\bigr),\quad\quad N\_{a,d}^{\wei}(L,X) \defeq \#\bigl(\SL\_3(\Z)\backslash L^{\nd,\wei}\_X\bigr),$$ where $L^\nd \subset L$ denotes the subset of non-distinguished elements, with resolvent having Galois group *S*3. Let $G\_A\subset {\mathrm{SO}}\_A(\R)$ be a nonempty open bounded subset with measure-0 boundary, and let $G\_0\subset \SL\_3(\Z)$ be a subset satisfying the same conditions. Upon combining the above discussion with Bhargava’s averaging method (see, e.g., ), we obtain the following result: [prop:avg] Fix *i* ∈ {0, 1, 2#, 2 + , 2 − }. Let $L\subset W\_{a,d}(\Z)^{(i)}$ be an $\SL\_3(\Z)$-invariant subset, and let $L\_A \defeq L\cap \{A\} \times V(\Z)$. Then we have $$\begin{array}{rcl} N\_{a,d}^{\bx}(L,X)&=&\displaystyle \frac{1}{\sigma\_{(i)}\Vol(G\_0)}\int\_{\gamma\in\FF\_3}\#\Bigl(L^\nd \cap \gamma G\_0\bigl(R\_\bx^{(i)}\bigr)^\bx\_X\Bigl)\mathrm{d}\gamma; \\[.2in] N\_{a,d}^{\wei}(L,X)&=&\displaystyle \sum\_{A\in\frac{V\_a(\Z)}{\SL\_3(\Z)}} \frac{1}{\sigma\_{(i)}\Vol(G\_A)}\int\_{\gamma\in\FF\_A}\#\Bigl(L\_A^\nd \cap \gamma G\_A\bigl(R\_A^{(i)}\bigr)^\wei\_{X}\Bigl)\mathrm{d}\gamma. \end{array}$$ Cutting off the cusp -------------------- Now take *A* to be isotropic. For *δ* > 0, let $\FF\_{3}^{(\delta)} \defeq \{n(s\_1,s\_2)k \in \mc{F}\_3 : \max\{s\_1, s\_2\} > X^\delta\}$, and let $\FF\_{A}^{(\delta)} \defeq \{ntk \in \mc{F}\_A : t > X^\delta\}$. We think of $\FF\_3^{(\delta)}$ and $\FF\_A^{(\delta)}$ as the *cuspidal regions* of the fundamental domains $\FF\_{3}$ and $\FF\_{A}$, respectively. In this subsection, we prove that the contribution to Proposition [prop:avg] from the cuspidal regions is negligible. [th:cuspcut] We have $$\begin{array}{rcl} \displaystyle \int\_{\gamma\in\FF\_3^{(\delta)}}\#\Bigl(W\_{a,d}(\Z)^\nd \cap \gamma G\_0\bigl(R\_\bx^{(i)}\bigr)^\bx\_{X}\Bigl)\mathrm{d}\gamma &\ll& X^{2-\delta\_1}, \\[.2in] \displaystyle \int\_{\gamma\in\FF\_A^{(\delta)}}\#\Bigl(W\_{a,d}(\Z)^\nd \cap \gamma G\_A\bigl(R\_A^{(i)}\bigr)^\wei\_{X}\Bigl)\mathrm{d}\gamma &\ll& X^{3-\delta\_2}, \end{array}$$ for some *δ*1, *δ*2 > 0. As it happens, the asymptotic derived in §[sec-countonsymmetricvarieties] using dynamical techniques is insufficient to prove Theorem [th:cuspcut], as bounding the contribution from the cuspidal regions requires us to have a more explicit understanding of the error dependence on the skewing factor *γ*. Instead of making this dependence more explicit, we take a different approach: using a completely elementary argument, we obtain a tight upper bound on the number of ternary quadratic forms in skewed boxes having fixed determinant *k*/4 for $k \in \Z \smallsetminus \{0\}$. To this end, write the rows of a matrix $A\in V\_k(\Z)$ as (*a*, *b*/2, *d*/2), (*b*/2, *c*, *e*/2), and (*d*/2, *e*/2, *f*). Let $\cB\subset V(\R)$ be a bounded subset. We write the equation det*A* = *k*/4 as  − *k* =  − 4det(*A*) = *f*(*b*2 − 4*a**c*) + (*a**e*2 − *b**e**d* + *c**d*2). Given a triple $(a,b,c) \in \Z^3$, let $Q(x,y) \defeq Q\_{a,b,c}(x,y)$ denote the quadratic form *a**x*2 − *b**x**y* + *c**y*2, which has discriminant $\Delta \defeq b^2 - 4ac$. Then our equation  takes the cleaner form  − *k* = *f*Δ + *Q*(*e*, *d*). Now, for *s*1, *s*2 ≫ 1, put $$\begin{array}{lcl} \displaystyle N(s\_1,s\_2;Y)& \defeq & \#\bigl( V\_k(\Z) \cap (s\_1,s\_2)Y\cB\bigr); \\[.1in]\displaystyle N^\*(s\_1,s\_2;Y)& \defeq & \#\bigl\{A\in V\_k(\Z) \cap (s\_1,s\_2)Y\cB : a\neq 0\bigr\}. \end{array}$$ and subdivide these counts as follows: $$\begin{array}{rcl} N(s\_1,s\_2;Y)&=&N\_{\Delta\neq 0}(s\_1,s\_2;Y)+N\_{\Delta= 0}(s\_1,s\_2;Y),\\[.1in] N^\*(s\_1,s\_2;Y)&=&N^\*\_{\Delta\neq 0}(s\_1,s\_2;Y)+N^\*\_{\Delta= 0}(s\_1,s\_2;Y), \end{array}$$ where *N*Δ ≠ 0(*s*1, *s*2; *Y*) is the contribution to *N*(*s*1, *s*2; *Y*) from ternary quadratic forms *A* with Δ ≠ 0, and *N*Δ = 0(*s*1, *s*2; *Y*) is the contribution from forms *A* with Δ = 0, with the analogous convention for the counts *N*\*. Then we have the following result: [th:skbound] For any *Y* > 1, we have $$\begin{array}{lclll} \displaystyle N\_{\Delta\neq 0}(s\_1,s\_2;Y)&\ll\_\varepsilon& s\_1^3Y^{3+\varepsilon};\quad\quad N\_{\Delta= 0}(s\_1,s\_2;Y)&\ll\_\varepsilon& s\_2^3Y^{3+\varepsilon}+s\_1^4s\_2^5Y^{2+\varepsilon}; \\[.1in]\displaystyle N^\*\_{\Delta\neq 0}(s\_1,s\_2;Y)&\ll\_\varepsilon& Y^{3+\varepsilon};\quad\quad\,\,\,\,\,\, N^\*\_{\Delta= 0}(s\_1,s\_2;Y)&\ll\_\varepsilon& s\_2^3Y^{3+\varepsilon}. \end{array}$$ [Proof of Theorem [th:skbound]] We denote the range of each matrix entry *α* in the set $(s\_1,s\_2)Y\cB$ by *R**α*. Note that we have $$\begin{array}{rcl} &R\_a\ll s\_1^{-4}s\_2^{-2}Y;\quad R\_b\ll s\_1^{-1}s\_2^{-2}Y;\quad R\_c\ll s\_1^{2}s\_2^{-2}Y;& \\[.05in] &R\_d\ll s\_1^{-1}s\_2Y;\quad R\_e\ll s\_1^{2}s\_2Y;& \\[.05in] &R\_f\ll s\_1^{2}s\_2^{4}Y.& \end{array}$$ We now split into four cases depending on whether or not *a* and Δ are zero: We start by bounding *N*Δ ≠ 0\*(*s*1, *s*2; *Y*). For fixed $a,b,c \in \Z$ with *a*, Δ ≠ 0, let *N**a*, *b*, *c*(*k*) denote the number of matrices $A \in V\_k(\Z) \cap (s\_1,s\_2)X\cB$ where the 11-, 12-, and 22-matrix entries of *A* are *a*, *b*/2, and *c*, respectively. Then we have $$\begin{array}{rcl} N\_{a,b,c}(k)&\ll&\#\bigl\{(d,e)\in\Z^2:|d|\ll R\_d,\,|e|\ll R\_e,\,Q(e,d)\equiv -k\pmod\Delta\bigr\} \\[.1in]&\ll& \#\bigl\{(d,e)\in\Z^2:|d|\ll R\_d,\,|e|\ll R\_e,\,4aQ(e,d)\equiv -4ak\pmod\Delta\bigr\} \\[.1in]&\ll& \#\bigl\{(d,e)\in\Z^2:|d|\ll R\_d,\,|e|\ll R\_e,\,(2ae-bd)^2\equiv -4ak\pmod\Delta\bigr\}, \end{array}$$ since $4aQ(x,y)\equiv (2ax-by)^2\pmod\Delta$. This forces (2*a**e* − *b**d*)2 to lie within the set *T* of squareroots of  − 4*a**k* modulo Δ. Write Δ = *s*Δ2*q*Δ, where *q*Δ is squarefree. Then #*T* ≪ *Y**ɛ**s*Δ. For any ℓ ∈ *T*, two solutions (*e*1, *d*1) and (*e*2, *d*2) to $2ax-by\equiv\ell \pmod \Delta$ gives a solution (*e*ʹ = *e*1 − *e*2, *d*ʹ = *d*1 − *d*2) to $2ax-by\equiv 0\pmod\Delta$; note that we have ∣*d*ʹ∣ ≪ *R**d* and ∣*e*ʹ∣ ≪ *R**e*. It follows that $$N\_{a,b,c}(k)\ll Y^\varepsilon s\_\Delta \#\bigl\{(d',e')\in\Z^2:|d'|\ll R\_d,\,|e'|\ll R\_e,\, 2ae'-bd'\equiv 0\pmod{\Delta}\bigr\}.$$ We now fibre over *g*, *b* ≠ 0, and Δ, where *g* = gcd(*b*, Δ). With these fixed, we also fibre over *σ*, the possible values of 2*a**e*ʹ − *b**d*ʹ, which must be a multiple of Δ. This gives a contribution of  ≪  $$\begin{array}{ll} &\displaystyle\sum\_{\substack{g\geq 1}}\sum\_{\substack{|b|\ll R\_b,|\Delta|\ll R\_b^2\\ \gcd(b,\Delta)=g}}\sum\_{\substack{a,c\\b^2-ac=\Delta
arxiv_0000076
723944 & 3.6427 & 0.1949 15 & rs16832560 & 3 & 181564785 & SOX2-OT & 3.6316 & 0.1060 16 & rs818055 & 9 & 131092123 & LAMC3 & 3.6123 & 0.2010 17 & rs10018350 & 4 & 6320374 & PPP2R2C & 3.5524 & 0.1383 18 & rs949292 & 18 & 60501574 & & 3.5417 & 0.3216 19 & rs6878253 & 5 & 132277600 & LOC124901063 & 3.5211 & 0.0858 20 & rs7593557 & 2 & 233955144 & TRPM8 & 3.5197 & 0476 [!htb] | c| c| c| c| c |c| c | Rank & SNP & Chr & Position(BP) & Gene & & & & & & *M**C**E**M**D**S**P* & *L**I**M**E**D**S**P* 1 & rs9294284 & 6 & 83954925 & CYB5R4 & 279.1653 & 0.4817 2 & rs4660212 & 1 & 42030728 & HIVEP3 & 6.6689 & 0.4603 3 & rs6485742 & 11 & 12432529 & PARVA & 6.4115 & 0.3281 4 & rs7835497 & 8 & 39765231 & ADAM2 & 5.61404 & 0.1476 5 & rs8130490 & 21 & 18355236 & TMPRSS15 & 5.5795 & 0.4691 6 & rs5022654 & 9 & 4766378 & & 4.8936 & 0.0844 7 & rs12306837 & 12 & 4571302 & DYRK4 & 4.6017 & 0.4845 8 & rs7928271 & 11 & 126918754 & KIRREL3 & 4.5619 & 0.4643 9 & rs3751020 & 11 & 119608340 & & 4.5395 & 0.3099 10 & rs4961 & 4 & 2904980 & ADD1 & 4.4915 & 0.1957 11 & rs12197569 & 6 & 2164221 & GMDS & 4.3880 & 0.4700 12 & rs12703608 & 7 & 144547846 & TPK1 & 4.3071 & 0.0997 13 & rs3811515 & 2 & 228018313 & SPHKAP & 4.2972 & 0.2384 14 & rs4814626 & 20 & 17580396 & DSTN & 4.2546 & 0.1779 15 & rs17594685 & 13 & 41679215 & VWA8 & 4.1531 & 0.1108 16 & rs7129737 & 11 & 124888741 & ROBO4 & 4.1144 & 0.1369 17 & rs12986413 & 19 & 2170955 & DOT1L & 4.0802 & 0.4825 18 & rs10807372 & 6 & 47681838 & ADGRF2 & 4.066 & 0.3564 19 & rs10188832 & 2 & 16621901 & CYRIA & 4.0573 & 0.2162 20 & rs11595441 & 10 & 15010629 & & 4.0397 & 0.4536 Monte Carlo Expectation-Maximization algorithm to detect imprinting and maternal effects for discordant sib-pair data ===================================================================================================================== Numerous statistical methods have been developed to explore genomic imprinting and maternal effects, which are causes of parent-of-origin patterns in complex human diseases. Most of the methods, however, either only model one of these two confounded epigenetic effects, or make strong yet unrealistic assumptions about the population to avoid over- parameterization. A recent partial likelihood method (*L**I**M**E**D**S**P*) can identify both epigenetic effects based on discordant sibpair family data without those assumptions. Theoretical and empirical studies have shown its validity and robustness. As *L**I**M**E**D**S**P* method obtains parameter estimation by maximizing partial likelihood, it is interesting to compare its efficiency with full likelihood maximizer. To overcome the difficulty in over-parameterization when using full likelihood, this study proposes a discordant sib-pair design based Monte Carlo Expectation Maximization (*M**C**E**M**D**S**P*) method to detect imprinting and maternal effects jointly. Those unknown mating type probabilities, the nuisance parameters, are considered as latent variables in EM algorithm. Monte Carlo samples are used to numerically approximate the expectation function that cannot be solved algebraically. Our simulation results show that though this *M**C**E**M**D**S**P* algorithm takes longer computation time, it can generally detect both epigenetic effects with higher power, which demonstrates that it can be a good complement of *L**I**M**E**D**S**P* method. missing heritability; imprinting effect; maternal effect; discordant sib-pair design; Monte Carlo Expectation Maximization algorithm Introduction ============ A genome-wide association study (GWAS) is an observational study of a genome-wide set of genetic variants in different individuals to see if any variant is associated with a trait. When applied to human data, GWAS compares participants’ DNA with varying phenotypes for a particular trait or disease. GWAS is a powerful tool for identifying hundreds of genetic variants associated with complex human diseases and traits and has provided valuable insights into their genetic architecture. However, most of the variants identified have explained only a small proportion of complex diseases or traits, leading many to question “missing heritability”, which refers to the fact that single genetic variations cannot account for much of the heritability of diseases, behaviors, and other phenotypes. Other mechanisms may be also involved in this process, such as epigenetic modifications and transcriptional/ translational regulation. In biology, epigenetics studies the heritable phenotype changes that do not involve alterations in the DNA sequence. Thus, the epigenetic factors, including imprinting and maternal genotype effects, have become a research focus to detect missing heritability. Genomic imprinting is an effect of an epigenetic process involving methylation and histone modifications that can silence the expression of a gene inherited from a particular parent without altering the genetic sequence. In diploid organisms, the somatic cells possess two copies of the genome, one inherited from the father and one from the mother. Each autosomal gene is therefore represented by two copies of alleles, with one copy inherited from each parent at fertilization. In mammals, however, a small proportion (<1%) of genes are imprinted, which means gene expression occurs from only one allele partially or completely. The expressed allele is dependent upon its parental origin, i.e., unequal expression of a heterozygous genotype depending on whether the imprinted variant is inherited from the mother (maternal imprinting) or from the father (paternal imprinting). The imprinting effect is considered as a critical factor in understanding the interplay between the epigenome and genome. For many diploid genes, even if the copy inherited from one parent is defective, there is a substitute allele from the other parent. However, in the case of imprinting, even though there are two copies of the gene, it behaves as a haploid (having a single set of unpaired chromosomes) for this gene because only one copy is expressed. In other words, no substitute allele makes imprinted genes more vulnerable to the adverse effects of mutations. Additionally, genes and mutations that might be generally recessive can be expressed if a gene is imprinted and the dominant allele is silenced. Therefore, the disease can occur due to deletions or mutations in imprinting genes.. Most common conditions involving imprinting include Beckwith-Wiedemann syndrome, Silver-Russell syndrome, pseudohypoparathyroidism, and transient neonatal diabetes mellitus can also involve imprinting. Maternal effect is the situation where the phenotype of an organism is determined not only by the environment it experiences and its genotype, but also by the environment and genotype of its mother. In genetics, the maternal effects occur when the genotype of its mother determines the phenotype of an organism, irrespective of its own genotype. This effect often occurs because the mother supplies mRNA or proteins during pregnancy. Maternal effect can be considered as a significant reason for varieties of diseases such as those that are related to pregnancy outcomes, childhood cancers and congenital disabilities, certain psychiatric illnesses, and some pregnancy complications. Parent-of-origin effects occur when the phenotypic effect of an allele depends on whether it is inherited from an individual’s mother or father. Since both imprinting and maternal effects have parent-of-origin patterns , they can be confounded with each other. For example, paternal imprinting can mimic maternal effect. Therefore, to identify imprinting and maternal effect, family-based data are needed to find the inheritance path, and it is better to detect the two epigenetic effects jointly. Existing methods for detecting Imprinting and Maternal effects -------------------------------------------------------------- Both parametric and non-parametric methods are suggested to study the imprinting and maternal effects. A Parental-Asymmetry Test (PAT) with case-parents trio data was proposed to detect the imprinting effect in nuclear families in the absence of maternal effect. It was extended to accommodate general pedigree data by proposing Pedigree Parental-Asymmetry Test (PPAT), which uses all informative family trios. In addition, to accommodate pedigree data with missing genotypes, a Monte Carlo sampling based PPAT (MCPPAT) method was further developed. The non-parametric methods are mainly used for detecting imprinting effect while assuming there are no maternal effect. Though these non-parametric methods may be more powerful if there is indeed no maternal effect, under the existence of maternal effects, these methods suffer from potential confounding between imprinting and maternal effects, that can inflate false positive or false negative rates. Numerous parametric methods can identify imprinting and maternal effects jointly based on full likelihood. Case-parent triads and case-mother pairs are two popular study designs. Almost all these methods, however, depend on strong yet unrealistic assumptions concerning mating type probabilities (nuisance parameters) to avoid over-parameterization, with the Logarithm Likelihood Ratio test (LL-LRT) as a classic example. A partial Likelihood method for detecting Imprinting and Maternal effects (LIME) was then proposed as an exception. LIME can avoid over-parameterization by deriving a partial likelihood that is free of the nuisance parameters through matching the case families with control families of the same familial genotypes. LIME is powerful and robust, but it requires the recruitment of separate control families which can be hard to obtain. Therefore, a LIME method based on a Discordant Sib-Pair design *L**I**M**E**D**S**P* was proposed to receive the benefit of LIME without the requirement of separate control families. Discordant Sibpair design refers to that a nuclear family is recruited if there are at least an affected sibling and an unaffected sibling. Similarly as LIME method, *L**I**M**E**D**S**P* derives partial likelihood by matching affected proband-parent triads with unaffected proband-parent triads and factoring out common terms involving mating type probabilities. Theoretical and empirical studies have shown the validity and robustness of LIME and *L**I**M**E**D**S**P*. The two methods, however, might lack of efficiency in parameter estimation, as they estimate parameters by maximizing partial likelihood, rather than full likelihood. Therefore, it is interesting to compare the estimation efficiency of these partial likelihood methods with a full likelihood method. To avoid over-parameterization problem in full likelihood, a Monte Carlo EM algorithm was proposed earlier for case - control family data. The results showed that the MCEM algorithm can detect both epigenetic effects with higher power and smaller standard error compared with the LIME method. In this study, we will propose a discordant sibpair design based MCEM algorithm (*M**C**E**M**D**S**P*) to detect both epigenetic effects and compare its performance with *L**I**M**E**D**S**P*. MCEM algorithm -------------- When missing values or latent variables exist, the direct computation of the Maximum Likelihood Estimate (MLE) is not usually achievable. Instead, we use expectation maximization algorithm, which can deal with models that depend on unobservable latent variables. The EM algorithm considers the fact that the distribution of the complete data, which include both observed data and latent variables, can be simpler to deal with. Each EM iteration consists of two steps as the Expectation (E) and the Maximization (M) steps. The E - step computes the expected log-likelihood density of the complete data, conditionally on the observed data and the current parameter value. The parameter is updated in the M - step by maximizing the conditional expectation. The Monte Carlo EM (MCEM) algorithm is a modification of the EM algorithm which is used when the E - step in EM algorithm is not available in closed form. In MCEM, Monte Carlo simulations are used to generate realizations of the conditional hidden data through Markov Chain Monte Carlo (MCMC) routines such as the Gibbs and Metropolis - Hasting sampler. Then the expectation in the E - step is replaced by the empirical mean of the complete log-likelihood. This is the principle of MCEM algorithm proposed by Wei and Tanner (1990). In our study to detect imprinting effect and maternal effect, as we have nine unobservable mating type probabilities involved in the full likelihood, we will overcome the over-parameterization problem by using the nuisance parameters as latent variables in MCEM algorithm to get maximum likelihood estimation. In this dissertation, we seek to propose Monte Carlo Expectation Maximization algorithm (*M**C**E**M**D**S**P*) to detect imprinting and maternal effect for discordant sibpair data. First, we have applied *M**C**E**M**D**S**P* algorithm for discordant sibpair data we generated and compared the results with *L**I**M**E**D**S**P*, which is an existing method for detecting imprinting and maternal effects based on partial likelihood. A major difficulty of using *M**C**E**M**D**S**P* is the high computation time. Secondly, we propose importance sampling based *M**C**E**M**D**S**P* as solution for the higher computation time of *M**C**E**M**D**S**P*. It can give the similar results as *M**C**E**M**D**S**P* in shorter time. Finally, we applied *M**C**E**M**D**S**P* method to Framingham Heart Study (FHS) data to illustrate the utility of *M**C**E**M**D**S**P*. Chapter 02 contains general introduction for EM and MCEM algorithm with a Gaussian mixture data as an application example. Then we demonstrate the over-parameterization problem in the full likelihood for detecting the epigenetic effects based on discordant sibpair data, which is the reason for the assumptions concerning mating type probabilities in most existing methods. Then we propose the *M**C**E**M**D**S**P* method to overcome the over-pamameterization problem by using mating type probabilities as the latent variables. To increase the time efficiency, we further develop an importance sampling based *M**C**E**M**D**S**P* method. Chapter 03 contains extensive simulations of discordant family data with and without additional siblings for eight disease models and eight scenarios. We apply *M**C**E**M**D**S**P* and *L**I**M**E**D**S**P* to the simulated data, and compare the type I error, power, and relative bias for parameter estimation. In addition, we also use the simulation data to demonstrate the time efficiency for importance sampling based *M**C**E**M**D**S**P*. Chapter 04 contains real data analysis. We apply *L**I**M**E**D**S**P* and *M**C**E**M**D**S**P* to Framingham Heart Study data to identify SNPs with association, imprinting or maternal effects related to hypertension. MCEM algorithm ============== Expectation Maximization algorithm ---------------------------------- In this section, we summarize the expectation maximization algorithm which is known as the EM algorithm. This is a method to get likelihood maximizer when there are missing data or latent variables. Let *Y* be the random vector corresponding to the observed data *y* with p.d.f. postulated as $g(y,\boldsymbol{\psi})$, where $\boldsymbol{\psi}$ = $\big(\psi\_1,...., \psi\_d\big)^d$ is a vector of unknown parameter with parameter space Ω. In the context of EM algorithm, we consider the observed data y as incomplete and the complete data may contain some variables that are never observable. We define *x* as complete data and we let *z* denote the vector containing the additional data, referred to as the latent variables or missing data. Consider the EM algorithm as follows: 1. Let $g\_c(x,\boldsymbol{\psi})$ denote the p.d.f. of the random vector *X* corresponding to the complete data vector *X* = (*Y*, *Z*), where *Y* is the observed data, and *Z* is missing data or latent variable. Then we have the following complete-data log likelihood function as: $$log L\_c(\boldsymbol{\psi}) = g\_c(x,\boldsymbol{\psi})$$ 2. **E-Step:** Let $\boldsymbol{\psi}^{(k)}$ be values for $\boldsymbol{\psi}$ at the iteration *k*. Then, the E-step requires the calculation of: $$Q\big(\boldsymbol{\psi},\boldsymbol{\psi}^{(k)}\big) = E\_{\boldsymbol{\psi}^{(k)}}[log L\_c(\boldsymbol{\psi})|Y]$$ Above equation simply says that at iteration *k*, we need to calculate the expectation of complete likelihood given the information we have for the observable data *y* and values $\boldsymbol{\psi}^{(k)}$ for the unknown parameter $\boldsymbol{\psi}$. 3. **M-Step:** In M-step, we calculate the maximization of $Q\big(\boldsymbol{\psi},\boldsymbol{\psi}^{(k)}\big)$ with respect to $\boldsymbol{\psi}$ over the parameter space Ω and denote it as $\boldsymbol{\psi}^{(k+1)}$. It can be shown that, $$Q\big(\boldsymbol{\psi},\boldsymbol{\psi}^{(k+1)}\big) \geq Q\big(\boldsymbol{\psi},\boldsymbol{\psi}^{(k)}\big)$$ and $$L(\boldsymbol{\psi}^{(k+1)},Y) \geq L(\boldsymbol{\psi}^{(k)},Y)$$ for all $\boldsymbol{\psi}$  ∈  Ω. We continue with the E-step and M-step until the difference $\boldsymbol{\psi}^{(k+1)} - \boldsymbol{\psi}^{(k)}$ changes by an arbitrary small amount. In practice, a EM sequence will converge to a compact connected set of local maxima of the likelihood function; this limit set may or may not consist of a single point. Monte Carlo Expectation Maximization algorithm ---------------------------------------------- As in the previous section, in E-step of EM algorithm we need to take the expectation with respect to the conditional distribution of the missing data *Z* given the observed data *Y* = *y*. In many cases, however, evaluating the expectation in E-step of the EM algorithm does not yield closed form of the solutions.To overcome this problem, we can simulate random draws from target conditional distribution *Z* and approximate Q - function by Monte Carlo integration. To proceed this, let *z*1, ...., *z**m* be random samples from $f(z|y,\boldsymbol{\psi}^{(k)})$. Then we can use a Monte Carlo approximation to the expectation of EM algorithm as follow: $$Q\_{MC}(\boldsymbol{\psi}|\boldsymbol{\psi}^{(k)};y) = \frac{1}{m} \sum\_{t=1}^{m}\log L\_c(\boldsymbol{\psi}|Y,Z^{(t)})$$ The update $\boldsymbol{\psi}^{(k+1)}$ is the value of $\boldsymbol{\psi}$ that maximizes $Q\_{MC}(\boldsymbol{\psi}|\boldsymbol{\psi}^{(k)};y)$. Convergence of MCEM algorithm ----------------------------- The expectation-maximization algorithm is an algorithm for maximizing likelihood functions, specially when there are missing data or latent variables exist. If EM algorithm converges, it converges to a stationary point of the likelihood function. Even when the EM algorithm converges, there is no guarantee in general that it converges to a global maximum. However, the limit of an EM sequence can be a local maximum or a saddle point. Further, the convergence rate of the EM algorithm cannot be super linear. When EM works well, the output of the MCEM algorithm is a sequence of parameter values that converges to the maximum likelihood estimate (MLE). For a given suitable initial value, a sequence of parameter values generated by the Monte Carlo Expectation Maximization algorithm will get arbitrarily close to a maximizer of the observed likelihood with a high probability. Metropolis-Hastings Algorithm ----------------------------- The Monte Carlo EM algorithm is a stochastic version of the EM algorithm using MCMC methods to approximate the conditional distribution of the hidden data. A general method for generating samples from posterior distribution is Metropolis-Hastings algorithm. ### **Algorithm**: 1. Sample a candidate value *Z*\* from a proposal distribution *g*(.∣*z*(*t*)) 2. Compute Metropolis-Hastings ratio *R*(*z*(*t*), *Z*\*) $$R(u,v) = \frac{f(v)g(u|v)}{f(u)g(v|u)},$$ where *f*(.) is the probability density function of the stationary distribution. 3. Sample a value for *Z*(*t* + 1) according to the following: $$Z^{(t+1)} = \begin{cases} Z^\* & \textrm{ with probability } {\min [R(Z^{(t)},Z^{\\*})]}\\ Z^{(t)} & \textrm{ otherwise}\\ \end{cases}$$ 4. Increment t and return to step 1 until it reaches to the stopping rule. Importance Sampling Method -------------------------- We’ll consider the situation where the sample of the latent variables *Z*1, ...*Z**m* in the E-step is obtained from a MCMC routine such as Gibbs sampler or Metropolis-Hasting algorithm with stationary distribution $g(Z|Y,\boldsymbol{\psi})$. Drawing an MCMC sample in each iteration of the MCEM algorithm could be very time consuming. We can use importance sampling to improve the computational expenses of the MCEM algorithm. We initialize the algorithm with a sample *Z*1, *Z*2, ..., *Z**m* from the distribution $g(Z|Y,\boldsymbol{\psi}^{(0)})$, where $\boldsymbol{\psi}^{(0)}$ is the initial value of the parameter $\boldsymbol{\psi}$ at the start of EM algorithm. Then, at each iteration r, rather than obtaining a new sample from $g(Z|Y,\hat{\boldsymbol{\psi}}^{(r)})$, with the most recent iterate, we can importance weight the original sample through the updated distribution $g(u|y,\boldsymbol{\psi}^{(r)})$That is, $$Q\_m\big(\boldsymbol{\psi} | \hat{\boldsymbol{\psi}}^{(r)} \big) = \frac{\sum\_{t=1}^{m} w\_t \ln f(Y,Z\_t | \boldsymbol{\psi})}{\sum\_{t=1}^{m}w\_t}$$ where $$w\_t = \frac{g(Z\_t|Y,\hat{\boldsymbol{\psi}}^{(r)})}{g(Z\_t|Y,\hat{\boldsymbol{\psi}}^{(0)})}$$ In addition, we can show that $$w\_t = \frac{L(\hat{\boldsymbol{\psi}}^{(r)}|Z\_t,Y) / L(\hat{\boldsymbol{\psi}}^{(r)}|Y)}{L(\hat{\boldsymbol{\psi}}^{(0)}|Z\_t,Y) / L(\hat{\boldsymbol{\psi}}^{(0)}|Y)}$$ The likelihood $L(\boldsymbol{\psi}|y)$ can be canceled in formula (2.4.3) and, $$Q\_m(\boldsymbol{\psi} | \hat{\boldsymbol{\psi}}^{(r)}) = \frac{\sum\_{t=1}^{m} w\_t^{'} \ln f(Y,Z\_t | \boldsymbol{\psi})}{\sum\_{t=1}^{m}w\_t^{'}}$$ where, $$w\_t^{'} = \frac{L(\hat{\boldsymbol{\psi}}^{(r)} | Z\_t,Y)}{L(\hat{\boldsymbol{\psi}}^{(0)} | Z\_t,Y)}$$ This choice will not affect the EM algorithm because the unknown normalizing constant $$\frac{L(\hat{\boldsymbol{\psi}}^{(0)}|Y)}{L(\hat{\boldsymbol{\psi}}^{(r)}|Y)}$$ does not depend on the unknown value of $\boldsymbol{\psi}$, and it does not come into play in the maximization step. ### **Algorithm:** 1. Initialize $\boldsymbol{\psi}^{(0)}$. 2. Generate $Z\_1,....Z\_m \sim g(Z | Y,\boldsymbol{\psi}^{(o)})$ via an MCMC algorithm. 3. At iteration *r* + 1 compute the importance weights $$w\_t^{(r)} = \frac{L(\hat{\boldsymbol{\psi}}^{(r)} | Z\_t,Y)}{L(\hat{\boldsymbol{\psi}}^{(0)} | Z\_t,Y)}$$ 4. *E - step:* Estimate $Q(\boldsymbol{\psi} | \hat{\boldsymbol{\psi}}^{(r)})$ by $$Q\_m(\boldsymbol{\psi} | \hat{\boldsymbol{\psi}}^{(r)}) = \frac{\sum\_{t=1}^{m}w\_t^{(r)} \ln f(Z\_t,Y | \boldsymbol{\psi})}{\sum\_{t=1}^{m}w\_t^{(r)}}$$ 5. *M - step:* Maximize $Q\_m(\boldsymbol{\psi} | \hat{\boldsymbol{\psi}}^{(r)})$ to obtain $\hat{\boldsymbol{\psi}}^{(r+1)}$ 6. Repeat Steps 3 through 5 until convergence. There can be some drawbacks for importance sampling estimators. If the importance density $g(Z\_t | Y,\hat{\boldsymbol{\psi}}^{(r)})$ is not close enough to $g(Z\_t | Y,\hat{\boldsymbol{\psi}}^{(0)})$, then the weights will vary widely giving many samples little weight and allowing for a few variates to be overinfluetial. Also, if the initial values $\boldsymbol{\psi}^{(0)}$ are poor, the estimator $Q\_m\big(\boldsymbol{\psi} | \hat{\boldsymbol{\psi}}^{(r)}\big)$ will take a long time to converge. We can alleviate this problem by initiating burn-in whereby for the first few iterations and a new sample is obtained from $g(Z\_t | Y,\hat{\boldsymbol{\psi}}^{(r)})$ rather than importance weighting. 1. Initialize $\boldsymbol{\psi}^{(0)}$, and run burn-in. 1. Set importance weights *w**t* = 1 for all *t* = 1, ..., *m*. At iteration b, 2. Generate $Z\_1,...,Z\_m \sim g(Z | Y,\boldsymbol{\psi}^{(b)})$ via an MCMC algorithm. 3. Run E and M steps above with *r* = *b*. 4. Repeat Steps 1(b) and 1(c) for B burn-in iterations. 5. Reinitialize $\boldsymbol{\psi}^{(0)} = \boldsymbol{\psi}^{(B)}$. Applying Expectation Maximization Algorithm to detecting imprinting and Maternal Effects based on discordant sibpair design =========================================================================================================================== Notation and Genetic Model -------------------------- Consider a candidate genetic marker with two alleles *M*1 and *M*2, which may code for disease susceptibility or epigenetic effect. We define M and F as the number of variant allele *M*1 carried by the mother and the father in a nuclear family. They can take values 0,1 or 2, corresponding to genotype *M*2*M*2, *M*1*M*2 or *M*1*M*1, respectively. Let *C**i* be the random variable denoting the number of *M*1 alleles carried by the child *i*, *i* = 1, 2, ...., *k*. Specifically, *C*1 and *C*2 are designated for the affected and unaffected probands, respectively, through which the discordant sib-pair family is recruited, whereas *C**i*, *i* = 3, 4, .., *k* are for the additional siblings. The indicator variable *D**i*, *i* = 1, 2, ..., *k* denotes the disease status of children where 1 denotes being affected and 0 denotes being normal. Thus, for the affected child and unaffected child *D*1 = 1 and *D*2 = 0, respectively. Assume the disease penetrance follows a multiplicative relative risk model *P*(*D* = 1∣*M* = *m*, *F* = *f*, *C* = *c*) = *δ**R*1*I*(*c* = 1)*R*2*I*(*c* = 2)*R**i**m**I*(*c* = 1*m*)*S*1*I*(*m* = 1)*S*2*I*(*m* = 2) where *R*1 and *R*2 denote the relative risk due to one or two copies of individual’s own variant allele, *R**i**m* denotes relative risk due to imprinting effect, *S*1 and *S*2 denote the relative risk due to one or two copies of the mother’s variant allele, and *δ* denote the disease phenocopy rate. The notation *c* = 1*m* indicates that the child’s genotype is *M*1*M*2 and the variant allele is inherited from mother. We are interested in the model parameter estimation, collectively denoted as $\boldsymbol{\theta} = (\delta, R\_1, R\_2, R\_{im}, S\_1, S\_2)$. Note that, all parameters are positive. Further, *R**i**m* > 1,  < 1,  and  = 1 represent paternal, maternal, and no imprinting effect, respectively. Although there is no restriction placed on maternal effects, *S*1 and *S*2, they are typically  ≥ 1, where the equality denotes no maternal effect. The situation when *μ**i**j*s are fixed unknown parameters ---------------------------------------------------------- *Y* = {*n**m**f**c*1*c*2*c*3...*c**k*}; *k* ≥ 2 Denote *μ**m**f* as the mating type probability of (M,F) = (*m,f*), that is, probability of parental pairs in which the mothers carry m copies and the fathers f of the variant allele. *n**m**f**c*1*c*2*c*3...*c**k* for *k* > 2 represent the number of families with certain familial genotype combinations. Let $\boldsymbol{\mu} = (\mu\_{00},\mu\_{01},\mu\_{02}, \mu\_{10}, \mu\_{11}, \mu\_{12}, \mu\_{20}, \mu\_{21}, \mu\_{22})$ denote the mating type probabilities which are fixed and unknown. We can define full likelihood as $$\begin{aligned}[b] P(Y;\boldsymbol{\theta},& \boldsymbol{\mu}) = \prod\_{i=1}^{n} \big[P(M^{(i)}=m,F^{(i)}=f,C\_1^{(i)} = c\_1,C\_2^{(i)}=c\_2,C\_3^{(i)}=c\_3,\\ &...,C\_k^{(i)}=c\_k,D\_3^{(i)},...D\_k^{(i)}|D\_1^{(i)}=1,D\_2^{(i)}=0)\big] \\ &= \prod\_{i=1}^{n}\Big[P\big(M^{(i)}=m,F^{(i)}=f,C\_1^{(i)} = c\_1,C\_2^{(i)}=c\_2)|D\_1^{(i)}=1,D\_2^{(i)}=0\big)\\ &.P\big(C\_3^{(i)} = c\_3, C\_4^{(i)} = c\_4,...,C\_k^{(i)} = c\_k, D\_3^{(i)}, D\_4^{(i)},...,D\_k^{(i)}|M^{(i)}=m,F^{(i)}=f\big)\Big] \\ &= \prod\_{i=1}^{n}\Bigg[\bigg(\frac{ P(M^{(i)}=m,F^{(i)}=f,C\_1^{(i)}=c\_1,C\_2^{(i)}=c\_2,D\_1^{(i)}=1,D\_2^{(i)}=0)}{P(D^{(i)}\_1=1,D^{(i)}\_2=0)}\bigg)\\ &.P\big(C\_3^{(i)} = c\_3, C\_4^{(i)} = c\_4,...,C\_k^{(i)} = c\_k, D\_3^{(i)}, D\_4^{(i)},...,D\_k^{(i)}|M^{(i)}=m,F^{(i)}=f\big)\Bigg]\\ \end{aligned}$$ Here, we consider *k* ≥ 2, since there should be at least one affected child and one unaffected child. The disease status for additional siblings can be arbitrary. When there are no additional siblings, the second term of the equation will equal to 1. The numerator of the first term on the right hand side of the equation can be expanded as $$\begin{aligned}[b] P(M^{(i)}=m &,F^{(i)} =f,C\_1^{(i)} =c\_1,C\_2^{(i)}=c\_2,D\_1^{(i)}=1,D\_2^{(i)}=0) \\ & = P(M=m,F=f).P(C\_1=c\_1|M=m,F=f)\\ &.P(C\_2=c\_2|M=m,F=f).P(D\_1=1|M=m,F=f,C\_1=c\_1)\\ &.P(D\_2=0|M=m,F=f,C\_2=c\_2)\\ & = \mu\_{mf}.P(C\_1=c\_1|M=m,F=f).P(C\_2=c\_2|M=m,F=f)\\ &.P(D\_1=1|M=m,F=f,C\_1=c\_1).P(D\_2=0|M=m,F=f,C\_2=c\_2) \end{aligned}$$ where *P*(*D*1 = 1∣*M* = *m*, *F* = *f*, *C*1 = *c*1) = *δ**R*1*I*(*c*1 = 1)*R*2*I*(*c*1 = 2)*R**i**m**I*(*c*1 = 1*m*)*S*1*I*(*M* = 1)*S*2*I*(*M* = 2) *P*(*D*2 = 0∣*M* = *m*, *F* = *f*, *C*2 = *c*2) = 1 − *δ**R*1*I*(*c*2 = 1)*R*2*I*(*c*2 = 2)*R**i**m**I*(*c*2 = 1*m*)*S*1*I*(*M* = 1)*S*2*I*(*M* = 2) are obtained according to disease penetrance model. In total, there are 29 possible combinations of genotypes for parents (*M*, *F*), affected child (*C*1) and unaffected child (*C*2). Table 3.2 lists the corresponding joint probability of mother, father, affected child, unaffected child genotypes and proband disease status *P*(*M* = *m*, *F* = *f*, *C*1 = *c*1, *C*2 = *c*2, *D*1 = 1, *D*2 = 0). (See Appendix 2) In the expressions in Table 3.2, the *μ**m**f*’s (*m* = 0, 1, 2, *f* = 0, 1, 2) are the mating type probabilities, i.e., *μ**m**f* = *P*(*M* = *m*, *F* = *f*). Note that, we do not make any assumptions about the mating symmetry, and therefore, *μ**m**f* is not necessarily equal to *μ**f**m*. [!htb] | Type | m | f | *c*1 | *c*2 | *P*(*M* = *m*, *F* = *f*, *C*1 = *c*1, *C*2 = *c*2, *D*1 = 1, *D*2 = 0) | | --- | --- | --- | --- | --- | --- | | 1 | 0 | 0 | 0 | 0 | *μ*00*δ*(1 − *δ*) | | 2 | 0 | 1 | 0 | 0 | $\mu\_{01}\frac{1}{4}\delta(1-\delta)$ | | 3 | 0 | 1 | 1 | 0 | $\mu\_{01}\frac{1}{4}\delta r\_1(1-\delta)$ | | 4 | 0 | 1 | 0 | 1 | $\mu\_{01}\frac{1}{4}\delta(1-\delta r\_1)$ | | 5 | 0 | 1 | 1 | 1 | $\mu\_{01}\frac{1}{4}\delta r\_1(1-\delta r\_1)$ | | 6 | 0 | 2 | 1 | 1 | *μ*02*δ**r*1(1 − *δ**r*1) | | 7 | 1 | 0 | 0 | 0 | $\mu\_{10}\frac{1}{4}\delta s\_1 (1-\delta s\_1 )$ | | 8 | 1 | 0 | 1 | 0 | $\mu\_{10}\frac{1}{4}\delta r\_1 s\_1 r\_{im}(1-\delta s\_1 r\_{im})$ | | 9 | 1 | 0 | 0 | 1 | $\mu\_{10}\frac{1}{4}\delta s\_1 r\_{im}(1-\delta r\_1 s\_1 r\_{im})$ | | 10 | 1 | 0 | 1 | 1 | $\mu\_{10}\frac{1}{4}\delta r\_1 s\_1 r\_{im}(1-\delta r\_1 s\_1 r\_{im})$ | | 11 | 1 | 1 | 0 | 0 | $\mu\_{11}\frac{1}{16}\delta s\_1(1-\delta s\_1)$ | | 12 | 1 | 1 | 1 | 0 | $\mu\_{11}\frac{1}{16}\delta r\_1 s\_1(1-\delta s\_1)(1 + r\_{im})$ | | 13 | 1 | 1 | 0 | 1 | $\mu\_{11}\frac{1}{16}\delta s\_1 [2 - \delta r\_1 s\_1(1 + r\_{im})]$ | | 14 | 1 | 1 | 1 | 1 | $\mu\_{11}\frac{1}{16}\delta r\_1 s\_1(1-\delta s\_1)(1 + r\_{im}) [2 - \delta r\_1 s\_1(1 + r\_{im})]$ | | 15 | 1 | 1 | 2 | 0 | $\mu\_{11}\frac{1}{16}\delta r\_2 s\_1(1-\delta s\_1)$ | | 16 | 1 | 1 | 0 | 2 | $\mu\_{11}\frac{1}{16}\delta s\_1 (1-\delta r\_2 s\_1)$ | | 17 | 1 | 1 | 2 | 2 | $\mu\_{11}\frac{1}{16}\delta r\_2 s\_1(1 - \delta r\_2 s\_1)$ | | 18 | 1 | 1 | 1 | 2 | $\mu\_{11}\frac{1}{16}\delta r\_1 s\_1 (1 + r\_{im}) (1-\delta r\_2 s\_1)$ | | 19 | 1 | 1 | 2 | 1 | $\mu\_{11}\frac{1}{16}\delta r\_2 s\_1 [2 - \delta r\_1 s\_1(1 + r\_{im})]$ | | 20 | 1 | 2 | 1 | 1 | $\mu\_{12}\frac{1}{4}\delta r\_1 s\_1(1-\delta r\_1 s\_1)$ | | 21 | 1 | 2 | 1 | 2 | $\mu\_{12}\frac{1}{4}\delta r\_1 s\_1(1-\delta r\_2 s\_1)$ | | 22 | 1 | 2 | 2 | 1 | $\mu\_{12}\frac{1}{4}\delta r\_2 s\_1(1-\delta r\_1 s\_1)$ | | 23 | 1 | 2 | 2 | 2 | $\mu\_{12}\frac{1}{4}\delta r\_2 s\_1(1-\delta r\_2 s\_1)$ | | 24 | 2 | 0 | 1 | 1 | $\mu\_{12}\frac{1}{4}\delta r\_2 s\_1(1-\delta r\_2 s\_2)$ | | 25 | 2 | 1 | 1 | 1 | *μ*20*δ**r*1*s*2*r**i**m*(1 − *δ**r*1*s*2*r**i**m*) | | 26 | 2 | 1 | 2 | 1 | $\mu\_{21}\frac{1}{4}\delta r\_1 s\_2 r\_{im}(1-\delta r\_1 s\_2 r\_{im})$ | | 27 | 2 | 1 | 1 | 2 | $\mu\_{21}\frac{1}{4}\delta r\_2 s\_2(1-\delta r\_1 s\_2 r\_{im})$ | | 28 | 2 | 1 | 2 | 2 | $\mu\_{21}\frac{1}{4}\delta r\_2 s\_2(1-\delta r\_2 s\_2)$ | | 29 | 2 | 2 | 2 | 2 | *μ*22*δ**r*2*s*2(1 − *δ**r*2*s*2) | The denominator of the first term of the right hand side of the equation and the second term can be expanded as follows. $$\begin{aligned}[b] P(D^{(i)}\_1=1 &,D^{(i)}\_2=0) \\ & = \sum\_{m,f,c\_1,c\_2}P(M=m,F=f,C\_1=c\_1,C\_2=c\_2,D\_1=1,D\_2=0) \end{aligned}$$ and $$\begin{aligned}[] P\big(C\_3^{(i)} = c\_3 &, C\_4^{(i)} = c\_4,...,C\_k^{(i)} = c\_k, D\_3^{(i)}, D\_4^{(i)},...,D\_k^{(i)}|M^{(i)}=m,F^{(i)}=f\big)\\ & = P(C\_3 = c\_3|M=m,F=f)....P(C\_k = c\_k|M=m,F=f)\\ &.P(D\_3|M=m,F=f,C\_3 = c\_3)...P(D\_k|M=m,F=f,C\_k = c\_k) \end{aligned}$$ Therefore, we can see that the full likelihood includes both parameter vector $\boldsymbol{\theta}$ of interest and nuisance parameter vector $\boldsymbol{\mu}$. It is hard to maximize the full likelihood function directly with the over-parameterization problem. This is the reason why many methods need to make assumptions concerning mating type probabilities, such as mating symmetry,i.e., *μ**i**j* = *μ**j**i*, to reduce the number of parameters. To overcome this problem of over-parameterization without making those strong and unrealistic assumptions, we propose *M**C**E**M**D**S**P* algorithm to get the likelihood maximizer with the nuisance parameters used as latent variables. Applying *M**C**E**M**D**S**P* ------------------------------ Let the random vector X be complete data that consist of the random vectors Y and Z where *Z**i**j* = *μ**i**j* *i* = 0,1,2 *j* = 0,1,2 *Y* = {*n**m**f**c*1*c*2*c*3...*c**k*}; *k* ≥ 2 In this study, we consider the mating type probabilities are latent variables. We use the Expectation Maximization (EM) algorithm to compute maximum likelihood estimates in the problems with latent variables. We can define *P*(*Y*∣*Z* = *μ*) = *P*(*Y*; *μ*) as equation 2.5.2, where the only difference is mating type probabilities are viewed as latent random variables, rather than fixed parameters. We consider the mating type probabilities *μ**i**j*’s, are random values that follow Dirichlet distribution *π*(*Z*). $$\pi(Z) = \frac{1}{B(\boldsymbol{\alpha})}\prod\_{m=0}^{2}\prod\_{f=0}^{2}{\mu\_{mf}^{\alpha\_{mf}}}^{-1}$$ Here, $\boldsymbol{\alpha} = \big(\alpha\_{00}, \alpha\_{01}, \alpha\_{02}, \alpha\_{10}, \alpha\_{11}, \alpha\_{12}, \alpha\_{20}, \alpha\_{21}, \alpha\_{22} \big)$ is the concentration parameter. To apply EM algorithm, the complete log likelihood function can be defined as $$\begin{aligned}[b] log L\_c(\psi) & = log\{P(Y|Z)P(Z)\}\\ & = log \Biggl\{\Big[\prod\_{i=1}^{n} \big(P(M^{(i)}=m,F^{(i)}=f,C\_1^{(i)} = c\_1,C\_2^{(i)}=c\_2,C\_3^{(i)}=c\_3, \\ &...,C\_k^{(i)}=c\_k,D\_3^{(i)},...D\_k^{(i)}|D\_1^{(i)}=1,D\_2^{(i)}=0)\big)\Big].\pi(Z)\Biggr\}\\ & = \sum\_{i=1}^{n} \Biggl\{ log \Bigg[ \frac{ P(M^{(i)}=m,F^{(i)}=f,C\_1^{(i)}=c\_1,C\_2^{(i)}=c\_2,D\_1^{(i)}=1,D\_2^{(i)}=0)}{P(D^{(i)}\_1=1,D^{(i)}\_2=0)}\Bigg] \\ & + \log \bigg(P\big(C\_3^{(i)} = c\_3, C\_4^{(i)} = c\_4,...,C\_k^{(i)} = c\_k, D\_3^{(i)}, D\_4^{(i)}\\ &,...,D\_k^{(i)}|M^{(i)}=m,F^{(i)}=f\big)\bigg)\Biggr\} + log \pi(Z)\\ \end{aligned}$$ We define $\boldsymbol{\psi}$ as $\boldsymbol{\psi}$ = ($\boldsymbol{\theta}$, $\boldsymbol{\alpha}$) where $\boldsymbol{\theta}$ = (*δ*, *R*1, *R*2, *R**i**m*, *S*1, *S*2) and $\boldsymbol{\alpha}$ = (*α*00, *α*01, *α*02, *α*10, *α*11, *α*12, *α*20, *α*21, *α*22) The initial values for $\boldsymbol{\psi}^0 = (\boldsymbol{\theta}^0, \boldsymbol{\alpha}^0)$ are taken in the way that $\boldsymbol{\theta}^0$ is from the results for *L**I**M**E**D**S**P* method and $\boldsymbol{\alpha^0 = \Bigg[(\frac{n\_{mf}}{\sum n\_{mf}})\times 100\Bigg] + 1} $. ### **E-Step**: For E-step, we calculate $Q(\boldsymbol{\psi};\boldsymbol{\psi}^k)$ as follows $$\begin{aligned}[b] & Q( \boldsymbol{\psi} ;\boldsymbol{\psi}^k) = E\_{\boldsymbol{\psi}^k}\{log L\_c(\boldsymbol{\psi})|Y\} \\ & = E\_{\boldsymbol{\psi}^k}\Bigg(\sum\_{i=1}^{n} \Bigg\{log \Bigg[\frac{ P(M^{(i)}=m,F^{(i)}=f,C\_1^{(i)}=c\_1,C\_2^{(i)}=c\_2,D\_1^{(i)}=1,D\_2^{(i)}=0)}{P(D\_1^{(i)}=1,D\_2^{(i)}=0)}\Bigg] \\ & + \log \bigg(P\big(C\_3^{(i)} = c\_3, C\_4^{(i)} = c\_4,...,C\_k^{(i)} = c\_k, D\_3^{(i)}, D\_4^{(i)},...,D\_k^{(i)}|M^{(i)}=m,F^{(i)}=f\big)\bigg)\Bigg\}\\ & + log \pi(Z) |Y\Bigg)\\ \end{aligned}$$ where $$\begin{aligned}[b] E\_{\boldsymbol{\psi}^k} & \Bigg(\sum\_{i=1}^{n} \Bigg\{log \Bigg[\frac{ P(M^{(i)}=m,F^{(i)}=f,C\_1^{(i)}=c\_1,C\_2^{(i)}=c\_2,D\_1^{(i)}=1,D\_2^{(i)}=0)}{P(D\_1^{(i)}=1,D\_2^{(i)}=0)}\Bigg]\\ & + \log \bigg(P\big(C\_3^{(i)} = c\_3, C\_4^{(i)} = c\_4,...,C\_k^{(i)} = c\_k, D\_3^{(i)}, D\_4^{(i)}\\ &,...,D\_k^{(i)}|M^{(i)}=m,F^{(i)}=f\big)\bigg)\Bigg\}|Y\Bigg) \\ & = \int\sum\_{i=1}^{n} \Bigg\{log \Bigg[\frac{ P(M^{(i)}=m,F^{(i)}=f,C\_1^{(i)}=c\_1,C\_2^{(i)}=c\_2,D\_1^{(i)}=1,D\_2^{(i)}=0)}{P(D\_1^{(i)}=1,D\_2^{(i)}=0)}\Bigg] \\ & + \log \bigg(P\big(C\_3^{(i)} = c\_3, C\_4^{(i)} = c\_4,...,C\_k^{(i)} = c\_k, D\_3^{(i)}, D\_4^{(i)}\\ &,...,D\_k^{(i)}|M^{(i)}=m,F^{(i)}=f\big)\bigg)\Bigg\}.f\_{\psi^k}(Z|Y).dZ \end{aligned}$$ and $$\begin{aligned}[b] E\_{\boldsymbol{\psi}^k}(log \pi(Z) |Y) & = E\_{\boldsymbol{\psi}^k}\Bigg(log\Bigg\{\frac{1}{B(\boldsymbol{\alpha})}\prod\_{m=0}^{2}\prod\_{f=0}^{2}{Z\_{mf}^{\alpha\_{mf}}}^{-1}\Bigg\}|Y\Bigg)\\ & = E\_{\boldsymbol{\psi}^k}\Bigg(-logB(\boldsymbol{\alpha}) + (\alpha\_{mf} - 1)\sum\_{m=0}^{2}\sum\_{f=0}^{2}log Z\_{mf}\Bigg)\\ & = \int\Bigg(-logB(\boldsymbol{\alpha}) + (\alpha\_{mf} - 1)\sum\_{m=0}^{2}\sum\_{f=0}^{2}log Z\_{mf}\Bigg).f\_{\psi^k}(Z|Y).dZ\\ \end{aligned}$$ Since we cannot calculate the integrals in the above equations explicitly, we use Metropolis-Hastings algorithm to take samples from $f\_{\boldsymbol{\psi}^k}(Z|Y)$ and get the Monte Carlo estimation of $Q(\boldsymbol{\psi},\boldsymbol{\psi}^{(k)})$. Here we use proposal distribution $g(Z) = \pi^{(\boldsymbol{\alpha}^k)}(Z)$. The stationary distribution is $f(Z) = P^{(\boldsymbol{\psi}^k)}(Z|y)$. Here, $$\pi^{(\boldsymbol{\alpha}^k)}(Z) = \frac{1}{B(\boldsymbol{\alpha}^k)}\prod\_{m=0}^{2}\prod\_{f=0}^{2}{Z\_{mf}^{\boldsymbol{\alpha}\_{mf}^k-1}}$$ and we consider $\boldsymbol{\alpha}^k$ are obtained from the last EM iteration. In the first step we take a sample *Z*\* from *π*(*α**k*)(*Z*) distribution. Then we take turns to update two elements of *Z* random vector at a time to improve the acceptance rate of EM algorithm. Suppose we want to update the first two element of the sample based on the conditional distribution of *Z**A*1∣*Z**A*2 where *Z**A*1 = (*Z*1, *Z*2) and *Z**A*2 = (*Z*3, ..., *Z*9). Then we can show the conditional distribution is as follows $$\begin{aligned}[b] f\_{Z\_1,Z\_2|Z\_3,...,z\_9}(z\_1,z\_2|z\_3,...,z\_9) & & = \frac{\Gamma(\sum\_{i=1}^{2}\alpha\_i)}{\prod\_{i=1}^{2}\Gamma(\alpha\_i)} \prod\_{i=1}^{2} \Bigg[z\_i \Bigg(1 - \sum\_{j=3}^{k}z\_j\Bigg)^{-1}\Bigg]^{\alpha\_i - 1} \Bigg(1 - \sum\_{j=3}^{k}z\_j\Bigg)^{-1} \end{aligned}$$ Then, we can calculate Metropolis-Hastings ratio as follow: $$\begin{aligned}[b] R(Z^{(t)},Z^\*) = \frac{f(Z^\*)\pi^{(\boldsymbol{\alpha}^k)}(Z^{(t)})}{f(Z^{(t)})\pi^{(\boldsymbol{\alpha}^{(k)})}(Z^\*)} & = \frac{P^{(\boldsymbol{\psi}^k)}(Z^\*|Y)\pi^{(\boldsymbol{\alpha}^k)}(Z^{(t)})}{P^{(\boldsymbol{\psi}^k)}(Z^{(t)}|Y)\pi^{(\boldsymbol{\alpha}^k)}(Z^\*)}\\ &\propto\frac{P^{(\boldsymbol{\theta}^k)}(Y|Z^\*)\pi^{(\boldsymbol{\alpha}^k)}(Z^\*)\pi^{(\boldsymbol{\alpha}^k)}(Z^{(t)})}{P^{(\boldsymbol{\theta}^k)}(Y|Z^{(t)})\pi^{(\boldsymbol{\alpha}^k)}(Z^{(t)})\pi^{(\boldsymbol{\alpha}^k)}(Z^\*)} = \frac{P^{(\boldsymbol{\theta}^k)}(Y|Z^\*)}{P^{(\boldsymbol{\theta}^k)}(Y|Z^{(t)})}\\ \end{aligned}$$ where $\boldsymbol{\theta}^k$ represents the values for the parameters *δ*, *R*1, *R*2, *R**i**m*, *S*1, *S*2 from previous EM iteration. Finally, we can find sample *Z**t* + 1 as follows: $$Z^{(t+1)} = \begin{cases} Z^\* & \textrm{with probability min}{(R(Z^{(t)},Z^\*),1)} \\ Z^{(t)} & \textrm{otherwise}\\ \end{cases}$$ In the *k**t**h* iteration of the MCEM algorithm, we use MH algorithm to generate 10000 sample points *Z**k**m**f*(1), *Z**k**m**f*(2),...,*Z**k**m**f*(10000) from probability distribution $f\_{\boldsymbol{\psi}^{(k)}}\big(Z|Y\big)$ as shown above. We determine 10000 sample points are good enough for MH algorithm to converge, as Brooks Gelman scale reduction factor is very close to 1 for 10000 sample size in simulation. Then we use Monte-Carlo method to estimate integration in E -step as follows $$\begin{aligned}[b] &\hat{E}\_{\boldsymbol{\psi}^{(k)}}\{log P\_{\boldsymbol{\theta}}\big(Y|Z\big)|Y\} \\ & = \frac{1}{10000}\sum\_{t = 1}^{n} \sum\_{j=1}^{10000}\bigg[log \bigg(\frac{P(M^{(i)}=m,F^{(i)}=f,C\_1^{(i)}=c\_1,C\_2^{(i)}=c\_2,D\_1^{(i)}=1,D\_2^{(i)}=0)}{P(D^{(i)}\_1=1,D^{(i)}\_2=0)}\bigg) \\ & + \log \bigg(P\big(C\_3^{(i)} = c\_3, C\_4^{(i)} = c\_4,...,C\_k^{(i)} = c\_k, D\_3^{(i)}, D\_4^{(i)},...,D\_k^{(i)}|M^{(i)}=m,F^{(i)}=f\big)\bigg)\bigg] \end{aligned}$$ where $$\begin{aligned}[b] P(M^{(i)}=m &,F^{(i)}=f,C\_1^{(i)}=c\_1,C\_2^{(i)}=c\_2,D\_1^{(i)}=1,D\_2^{(i)}=0)\\ & = Z^{(j)}\_{kmf}P(C\_1=c\_1|M=m,F=f).P(C\_2=c\_2|M=m,F=f)\\ &.P(D\_1=1|M=m,F=f,C\_1=c\_1).P(D\_2=0|M=m,F=f,C\_2=c\_2) \end{aligned}$$ and $$\begin{aligned}[b] \big(C\_3^{(i)} & = c\_3, C\_4^{(i)} = c\_4,...,C\_k^{(i)} = c\_k, D\_3^{(i)}, D\_4^{(i)},...,D\_k^{(i)}|M^{(i)}=m,F^{(i)}=f\big)\\ & = P(C\_3 = c\_3|M^{(i)}=m,F^{(i)}=f)....P(C\_k = c\_k|M^{(i)}=m,F^{(i)}=f)\\ &.P(D\_3|M^{(i)}=m,F^{(i)}=f,C\_3 = c\_3)...P(D\_k|M^{(i)}=m,F^{(i)}=f,C\_k = c\_k) \end{aligned}$$ $$E\_{\boldsymbol{\psi}^{(k)}}\{log P\_{\boldsymbol{\alpha}}\big(Z\big)|Y\} = \frac{1}{10000} \sum\_{j=1}^{10000}\bigg[-log B\big(\boldsymbol{\alpha}\big) + \big(\alpha\_{m\_i,f\_i} - 1\big)\sum\_{m=0}^{2}\sum\_{f=0}^{2}logZ^{(j)}\_{kmf}\bigg]$$ Then we can define $Q\_{MC}(\boldsymbol{\psi};\boldsymbol{\psi}^{(k)})$ as follows $$Q\_{MC}(\boldsymbol{\psi},\boldsymbol{\psi}^{(k)}) = \hat{E}\_{\boldsymbol{\psi}^{(k)}}\{log P\_{\boldsymbol{\theta}}\big(Y|Z\big)|Y\} + E\_{\boldsymbol{\psi}^{(k)}}\{log P\_{\boldsymbol{\alpha}}\big(Z\big)|Y\}$$ ### **M Step:** For the maximization step, we maximize $Q\_{MC}\big(\boldsymbol{\psi};\boldsymbol{\psi}^{(k)}\big)$ which is the sum of above two equations with respect to the parameters $\boldsymbol{\psi}$ = (*δ*, *R*1, *R*2, *R**i**m*, *S*1, *S*1, *α*00, *α*01, *α*02, *α*10, *α*11, *α*12, *α*20, *α*21, *α*22) Note that the first term of $Q\_{MC}\big(\boldsymbol{\psi};\boldsymbol{\psi}^{(k)}\big)$ is only related to $\boldsymbol{\theta}$, and the second term of $Q\_{MC}\big(\boldsymbol{\psi};\boldsymbol{\psi}^{(k)}\big)$ is only related to $\boldsymbol{\alpha}$. Therefore, to maximize $Q\_{MC}\big(\boldsymbol{\psi};\boldsymbol{\psi}^{(k)}\big)$, we can separately maximize the first term with respect to $\boldsymbol{\theta}$, and maximize the second term with respect to $\boldsymbol{\alpha}$. Applying Importance sampling based on *M**C**E**M**D**S**P* algorithm --------------------------------------------------------------------- It is very time consuming to generate *Z**m**f* from posterior distribution using *M**C**E**M**D**S**P*, so we propose to use importance sampling method as the solution for it. Let *k* be the iteration number of EM algorithm. Then up to 10*t**h* iteration we generate *Z**m**f* using the posterior distribution $f(Z|Y;\boldsymbol{\psi}^{(k)})$ as in the *M**C**E**M**D**S**P* method. At *k* ≥ 11 iterations, we’ll use the importance weights to calculate *Q**M**C*(*ψ*; *ψ**k*) as follows. $$w\_t^{(k)} = \frac{L(\hat{\boldsymbol{\psi}}^{(k)} | Z\_{10}^{(t)},Y)}{L(\hat{\boldsymbol{\psi}}^{(0)} | Z\_{10}^{(t)},Y)} = \frac{P(Y|Z\_{10}^{(t)};\boldsymbol{\theta}^k).P(Z\_{10}^{(t)};\boldsymbol{\alpha}^k) }{P(Y|Z\_{10}^{(t)};\boldsymbol{\theta}^{10}).P(Z\_{10}^{(t)};\boldsymbol{\alpha}^{10}) } = \frac{P(Y|Z\_{10}^{(t)};\boldsymbol{\theta}^k).\pi^{\boldsymbol{\alpha}^k}(Z\_{10})}{P(Y|Z\_{10}^{(t)};\boldsymbol{\theta}^{10}).\pi^{\boldsymbol{\alpha}^{10}}(Z\_{10})}$$ Here, $P(Y|Z\_{10}^{(t)};\boldsymbol{\theta}^k)$ the probability density function of Y conditional on *t**t**h* monte carlo sample of Z at 10*t**h* iteration with parameter $\boldsymbol{\theta}=\boldsymbol{\theta}^k$ the values from the last importance sampling based *M**C**E**M**D**S**P* iteration for the parameters *δ*, *R*1, *R*2, *R**i**m*, *S*1, *S*2. Then we can compute the Q function in E step as $$Q\_{MCIm}(\boldsymbol{\psi};\boldsymbol{\psi}^k) = \frac{\sum\_{t=1}^{m}w\_t^{(k)} \ln f(Z\_t,Y | \boldsymbol{\psi})}{\sum\_{t=1}^{m}w\_t^{(k)}}$$ For maximization, since ∑*t* = 1*k**w**t**k* does not depend on unknown parameters, we can simply maximize the numerator with respect to $\boldsymbol{\theta}$ and $\boldsymbol{\alpha}$ to update *ψ*. $$\sum\_{t=1}^{m}w\_t^{(k)} \ln f(Z\_t,Y | \psi) = \sum\_{t=1}^{m}w\_t^{(k)} \ln f(Y|Z\_t;\boldsymbol{\theta}) + \sum\_{t=1}^{m}w\_t^{(k)} \ln f(Z\_t| \boldsymbol{\alpha})$$ The first term of the numerator of *Q**M**C**I**m* is only related to $\boldsymbol{\theta}$ and the second term of *Q**M**C**I**m* is only related to $\boldsymbol{\alpha}$. So we can maximize *Q**M**C**I**m* by separately maximizing the first part with respect to $\boldsymbol{\theta}$ and maximizing the second part with respect to $\boldsymbol{\alpha}$. Hypothesis Testing ------------------ To test whether the candidate marker has association effect, imprinting effect, and maternal effect with the disease. We need to consider the follower four models. 1. Full model In this model, we consider child’s own genotype effect, imprinting effect, and maternal effect are all possibly related to the disease. *P*(*D* = 1∣*M* = *m*, *F* = *f*, *C* = *c*) = *δ**R*1*I*(*c* = 1)*R*2*I*(*c* = 2)*R**i**m**I*(*c* = 1*m*)*S*1*I*(*m* = 1)*S*2*I*(*m* = 2) 2. Null model For null model, we consider there is no effect from child’s own genotype effect, imprinting effect, or maternal effect in the model, i.e., the disease penetrance is the same as the phenocopy rate. *P*(*D* = 1∣*M* = *m*, *F* = *f*, *C* = *c*) = *δ* 3. Non-Imprinting effect model In this model there is no imprinting effect, i.e., *R**i**m* = 1. *P*(*D* = 1∣*M* = *m*, *F* = *f*, *C* = *c*) = *δ**R*1*I*(*c* = 1)*R*2*I*(*c* = 2)*S*1*I*(*m* = 1)*S*2*I*(*m* = 2) 4. Non-Maternal effect model In this model, there is no maternal effect, i.e., *S*1 and *S*2 are both equal to 1. *P*(*D* = 1∣*M* = *m*, *F* = *f*, *C* = *c*) = *δ**R*1*I*(*c* = 1)*R*2*I*(*c* = 2)*R**i**m**I*(*c* = 1*m*) ### Association effect For testing association effect, we check the following hypothesis. *H*0 : *R*1 = *R*2 = *R**i**m* = *S*1 = *S*2 = 1 (NULL model) vs. *H**a* : at least one of these parameters is not equal to 1 (Full model ) The test statistic is, $$T\_1 = \frac{1}{10000}\sum\_{t=1}^{10000}2\big[log P\_{FULL}\big(Y|Z^{(t)}\big) - logP\_{NULL}\big(Y|Z^{(t)}\big)\big]$$ Here, $P\_{FULL}\big(Y|Z^{(t)}\big)$ is the likelihood value for the full model and $P\_{NULL}\big(Y|Z^{(t)}\big)$ is the likelihood for NULL model. When there are additional siblings, the test statistic asymptotically follows *χ*52, while when there are no additional siblings, the test statistic asymptotically follow *χ*62. This is because in log*P*(*Y*∣*Z*) formula, the discordant sibpair part is free of all the parameters in $\boldsymbol{\theta}$ (*δ*(1 − *δ*) appear in both numerator and denominator and can be cancelled), while additional sibling part includes *δ*. $$\begin{aligned}[b] & log P(Y|Z) = \sum\_{i=1}^{n} \Biggl\{ log \Bigg[ \frac{ P(M^{(i)}=m,F^{(i)}=f,C\_1^{(i)}=c\_1,C\_2^{(i)}=c\_2,D\_1^{(i)}=1,D\_2^{(i)}=0)}{P(D^{(i)}\_1=1,D^{(i)}\_2=0)}\Bigg] \\ & + \log \bigg(P\big(C\_3^{(i)} = c\_3, C\_4^{(i)} = c\_4,...,C\_k^{(i)} = c\_k, D\_3^{(i)}, D\_4^{(i)},...,D\_k^{(i)}|M^{(i)}=m,F^{(i)}=f\big)\bigg)\Biggr\}\\ & = \sum\_{i=1}^{n} \Biggl\{ log \Bigg[ \frac{ \mu\_{mf}.P(C\_1 = c\_1|M=m,F=f).P(C\_2 = c\_1|M=m,F=f)\delta(1-\delta) }{\sum\_{m,f,c\_1,c\_2} \mu\_{mf}.P(C\_1 = c\_1|M=m,F=f).P(C\_2 = c\_1|M=m,F=f)\delta(1-\delta)}\Bigg] \\ & + \log \bigg(P(C\_3 = c\_3|M=m,F=f)....P(C\_k = c\_k|M=m,F=f)\\ &.P(D\_3|M=m,F=f,C\_3 = c\_3)...P(D\_k|M=m,F=f,C\_k = c\_k)\bigg)\Biggr\}\\ & = \sum\_{i=1}^{n} \Biggl\{ log \Bigg[ \frac{ \mu\_{mf}.P(C\_1 = c\_1|M=m,F=f).P(C\_2 = c\_1|M=m,F=f)}{\sum\_{m,f,c\_1,c\_2} \mu\_{mf}.P(C\_1 = c\_1|M=m,F=f).P(C\_2 = c\_1|M=m,F=f)}\Bigg] \\ & + \log \bigg(P(C\_3 = c\_3|M=m,F=f)....P(C\_k = c\_k|M=m,F=f)\\ &.\delta^{\sum\_{l = 3}^{k}D\_l}(1 - \delta)^{k-2-\sum\_{l=3}^{k}D\_l}\bigg)\Biggr\}\\ \end{aligned}$$ ### Imprinting effect To test the imprinting effect, we consider the following hypothesis. *H*0 : *R**i**m* = 1 (Non-imprinting model) vs. *H**a* : *R**i**m* ≠ 1 (Full model) The test statistic is $$T\_2 = \frac{1}{10000}\sum\_{t=1}^{10000}2\big[log P\_{FULL}\big(Y|Z^{(t)}\big) - logP\_{Non-Imprinting}\big(Y|Z^{(t)}\big)\big] \sim \chi^2\_1$$ under null hypothesis. ### Maternal effect There exists maternal effect means *S*1 ≠ 1 or *S*2 ≠ 1. Thus, the hypothesis for the maternal effect is *H*0 : *S*1 = *S*2 = 1 (Non-Maternal effect model) vs. *H**a* : *S*1 or *S*2 is not equal to 1 (Full model) The test statistic for check the above hypothesis is, $$T\_3 = \frac{1}{10000}\sum\_{t=1}^{10000}2\big[log P\_{FULL}\big(Y|Z^{(t)}\big) - logP\_{Non-Maternal}\big(Y|Z^{(t)}\big)\big] \sim \chi^2\_2$$ under null hypothesis. For all of above three cases, p-value is calculated based on the test statistic and then compared with significance level. Simulation ========== To examine the performance of *M**C**E**M**D**S**P* method, we consider eight disease models and eight scenarios. [!htb] c c c c c c c c c c c c c & & & & & & Settings & *R*1 & *R*2 & *R**i**m* & *S*1 & *S*2 & &Settings & MAF & PREV & HWE 1 & 1 & 1 & 1 & 1 & 1 & & 1 & 0.1 & 0.05 & 0 2 & 2 & 3 & 1 & 1 & 1 & & 2 & 0.3 & 0.05 & 0 3 & 1 & 3 & 1 & 1 & 1 & & 3 & 0.1 & 0.15 & 0 4 & 1 & 3 & 1 & 2 & 2 & & 4 & 0.3 & 0.15 & 0 5 & 1 & 3 & 3 & 1 & 1 & & 5 & 0.1 & 0.05 & 1 6 & 3 & 3 & 1/3 & 1 & 1 & & 6 & 0.3 & 0.05 & 1 7 & 1 & 3 & 3 & 2 & 2 & & 7 & 0.1 & 0.15 & 1 8 & 3 & 3 & 1/3 & 2 & 2 & & 8 & 0.3 & 0.15 & 1 In table 3.1, the first setting for the disease model corresponds to null model with no genetic effect. The settings 2 and 3 correspond to the disease models that there is neither imprinting nor maternal effect. Setting 4 only has maternal effect. Settings 5 and 6 only have imprinting effect. Settings 7 and 8 have both imprinting and maternal effects. Under each model, we investigate eight different scenarios in which there are three factor values: minor allele frequency (MAF) {0.1, 0.3}, disease prevalence *P*(*D* = 1) (PREV) {0.05(*r**a**r**e*), 0.15(*c**o**m**m**o**n*)} and condition for Hardy-Weinberg equilibrium (HWE) {1 = *Y**e**s*, 0 = *N**o*}. Note that, *p* is the frequency for the minor allele and (1 − *p*) is wild allele frequency. As a result of population HWE, allelic exchangeability and mating symmetry will be implied. In cases that HWE is violated, we use a common assumption in population genetics and assume that a common cause of non-random matings is inbreeding. Then, in such cases, the probability of genotypes containing 0, 1, and 2 minor alleles are (1 − *p*)2(1 − *ζ*) + (1 − *p*)*ζ*, 2*p*(1 − *p*)(1 − *ζ*) and *p*2(1 − *ζ*) + *p**ζ*, respectively, where *ζ* is the inbreeding parameter. When HWE holds, *ζ* = 0. *ζ* is set to be 0.1 and 0.3 for males and females, respectively, when HWE does not hold. Note that with the specification of each scenario and a disease model, the penetrance probability is fully specified. We generate 500 replications under each of these 64 settings. To check the influence of sample size, each replicate consists of 100 or 500 discordant sibpair families. Firstly, parental genotypes are generated based on MAF and HWE. Then, the genotypes of their proband children are created according to the transmission probability assuming no recombination. Affection status D of the probands are determined by a Bernoulli trial, with the success probability calculated based on disease penetrance model. A family with an affected child and an unaffected child is recruited as a discordant sibpair family. The process of generating M, F,C, and D is repeated until we have collected sufficient numbers of discordant sibpair families to meet the preset sample size. We further simulate the genotype and disease status of a third child for each family as an additional sibling. We denote the data without additional sibling as “DS”, and denote the data with one additional sibling in each family as “DS+1”. Results ======= Type I error and power ---------------------- Under eight disease models and eight scenarios, we compare the type I error and power of *M**C**E**M**D**S**P* and *L**I**M**E**D**S**P* for data types “SD” and “SD+1”. Each figure from Figures 3.1 and 3.2 represent first two scenarios for sample size 100 (Appendix figures 1 - 6 are for the other scenarios). In the figures, the three rows represent association effect, imprinting effect, and maternal effect. Within each row, there are eight clusters of four bars representing the 8 disease models. The four bars in each cluster represent type I error (denoted as “E”) or power (denoted as “P”) of *L**I**M**E**D**S**P* and *M**C**E**M**D**S**P* methods on “SD” data, and *L**I**M**E**D**S**P* and *M**C**E**M**D**S**P* methods on “SD+1” data, respectively. According to the disease models, type I error for association effect can be obtained from disease model 1; type I error for imprinting effect can be obtained from disease model 1-4; type I error for maternal effect can be obtained from disease models 1,2, 3, 4, and 6. Powers can be obtained in other disease models. ![Bar charts for compare type I error and power of MCEM_{DSP} and LIME_{DSP} methods with and without additional siblings when HWE = 0, maf = 0.1 and prev = 0.05 with 100 families](Power_withNewAssociation_1.png)Bar charts for compare type I error and power of *M**C**E**M**D**S**P* and *L**I**M**E**D**S**P* methods with and without additional siblings when HWE = 0, maf = 0.1 and prev = 0.05 with 100 families ![Bar charts for compare type I error and power of MCEM_{DSP} and LIME_{DSP} methods with and without additional siblings when HWE = 0, maf = 0.3 and prev = 0.05 with 100 families](Power_withNewAssociation_2.png)Bar charts for compare type I error and power of *M**C**E**M**D**S**P* and *L**I**M**E**D**S**P* methods with and without additional siblings when HWE = 0, maf = 0.3 and prev = 0.05 with 100 families The figures show that *M**C**E**M**D**S**P* method can control all the type I error at around 0.05 nominal values, even when HWE does not hold. This means *M**C**E**M**D**S**P* is very robust to violation of HWE assumption. We can also see that no matter whether the discordant sibpair families have additional siblings or not, *M**C**E**M**D**S**P* method is generally more powerful than *L**I**M**E**D**S**P*. As expected, the powers of the methods with additional siblings are higher than the methods without additional siblings. We can also see that for some settings and scenarios, such as setting 5 and 7 when disease prevalence is 0.05, the imprinting effect power of *M**C**E**M**D**S**P* for the data without additional siblings can be even higher than that of *L**I**M**E**D**S**P* for the data with one additional sibling. The figures show that the power to detect maternal effects are generally very low for both *M**C**E**M**D**S**P* and *L**I**M**E**D**S**P* methods, especially for the data without additional siblings. The main reason is in discordant sibpair design the affected proband and unaffected proband share the same mother leading to little contrast to detect maternal effect. ![Bar charts for compare type I error and power of MCEM_{DSP} and LIME_{DSP} methods with and without additional siblings when HWE = 0, maf = 0.1 and prev = 0.05 with 500 families](Power_withNewAssociation_n500_1.png)Bar charts for compare type I error and power of *M**C**E**M**D**S**P* and *L**I**M**E**D**S**P* methods with and without additional siblings when HWE = 0, maf = 0.1 and prev = 0.05 with 500 families ![Bar charts for compare type I error and power of MCEM_{DSP} and LIME_{DSP} methods with and without additional siblings when HWE = 0, maf = 0.3 and prev = 0.05 with 500 families](Power_withNewAssociation_n500_2.png)Bar charts for compare type I error and power of *M**C**E**M**D**S**P* and *L**I**M**E**D**S**P* methods with and without additional siblings when HWE = 0, maf = 0.3 and prev = 0.05 with 500 families Each figure from Figures 3.3 and 3.4 represent first two scenarios when sample size is 500 (Appendix figures 7 - 12 are for the other scenarios). We can see that the powers are generally higher compared to that when sample size is 100, but the results of power comparison between different methods and data types are similar. The type I error, however, show different patterns. Type I errors are generally close to nominal value 0.05, except for imprinting effect when minor allele frequency is low and there are no additional siblings, which implies that under this situation, *M**C**E**M**D**S**P* algorithm cannot converge to the true parameter values, as small minor allele frequency might lead to flat complete log likelihood function. This inflation of type I error might be also related to the confounding between imprinting effect and maternal effect. As we mentioned, maternal effect can mimic paternal imprinting effect. *M**C**E**M**D**S**P* cannot differentiate the two confounding epigenetic effects in disease model 4 ((*R*1, *R*2, *R**i**m*, *S*1, *S*2) = (1, 3, 1, 2, 2)), where there is maternal effect, but no imprinting effect under the scenario MAF=0.1, sample size = 500, and data type=“SD”. Relative bias of the parameter estimation ----------------------------------------- We define the relative bias as $\frac{(\hat{\theta} - \theta)}{\theta}$. The relative bias is calculated under each scenario and disease model for discordant sib-pair families with and without additional siblings, and shown in boxplots 3.5 and 3.6 (Appendix figures 13 - 18 are for the other scenarios) when sample size is 100. There are five rows in each of these boxplot figure, representing the relative bias for *R*1, *R*2, *R**i**m*, *S*1, and *S*2, respectively. The eight clusters of boxplots represent the eight disease models. The four boxplots in each cluster represent the results for LIMEdsp and MCEMdsp for “SD” data and LIMEdsp and MCEMdsp for “SD+1” data, respectively. We can see that the relative bias medians for the parameters are all around 0 for both *M**C**E**M**D**S**P* and *L**I**M**E**D**S**P* under all settings. The interquartile range for *R*1, *R*2, *R**i**m*, *S*2 are generally smaller for *M**C**E**M**D**S**P*. The interquartile range for *S*1 is bigger for *M**C**E**M**D**S**P* when there are no additional sibling, but it becomes smaller than *L**I**M**E**D**S**P* when there are additional siblings. ![Box plots for biases of parameters from MCEM_{DSP} and LIME_{DSP} methods when HWE = 0, maf = 0.1 and PREV = 0.05 with 100 families](NEWRelativeBias_n100_without910_1.png)Box plots for biases of parameters from *M**C**E**M**D**S**P* and *L**I**M**E**D**S**P* methods when HWE = 0, maf = 0.1 and PREV = 0.05 with 100 families ![Box plots for biases of parameters from MCEM_{DSP} and LIME_{DSP} methods when HWE = 0, maf = 0.3 and PREV = 0.05 with 100 families](NEWRelativeBias_n100_without910_2.png)Box plots for biases of parameters from *M**C**E**M**D**S**P* and *L**I**M**E**D**S**P* methods when HWE = 0, maf = 0.3 and PREV = 0.05 with 100 families ![Box plots for biases of parameters from MCEM_{DSP} and LIME_{DSP} methods when HWE = 0, maf = 0.1 and PREV = 0.05 with 500 families](NEWRelativeBias_n500_without910_1.png)Box plots for biases of parameters from *M**C**E**M**D**S**P* and *L**I**M**E**D**S**P* methods when HWE = 0, maf = 0.1 and PREV = 0.05 with 500 families ![Box plots for biases of parameters from MCEM_{DSP} and LIME_{DSP} methods when HWE = 0, maf = 0.3 and PREV = 0.05 with 500 families](NEWRelativeBias_n500_without910_2.png)Box plots for biases of parameters from *M**C**E**M**D**S**P* and *L**I**M**E**D**S**P* methods when HWE = 0, maf = 0.3 and PREV = 0.05 with 500 families The Figures 3.7 and 3.8 represent first two scenarios when sample size is 500 (Appendix figures 19 - 24 are for the other scenarios). The relative bias comparison results between the two methods are similar for the two samples sizes, but the interquartile ranges are all smaller when sample size is larger. We can also see that both *M**C**E**M**D**S**P* and *L**I**M**E**D**S**P* have smaller proportion of wild estimation for the data with larger sizes or with additional siblings. Importance Sampling method -------------------------- Though *M**C**E**M**D**S**P* estimation gives a tractable solution to the problem arising when E-step is not available in closed form, the computation time cost is a major issue when implementing the *M**C**E**M**D**S**P* routine. Let’s consider the first disease model in Table 3.1 as an example. [!htb] | | | | | | | --- | --- | --- | --- | --- | | $\small{LIME\_{dsp}}$ | | | | | | $\small{MCEM\_{dsp}}$ | | | | | | | | | | | We apply importance sampling based *M**C**E**M**D**S**P* method for eight disease models when Hardy-Weinberg equilibrium (HWE) is 0, disease prevalence (prev) is 0.05 and minor allele frequency (MAF) is set to be 0.1 with sample size 500 for the families with one additional sibling. The simulation shows that, the type I error and the power of importance sampling based *M**C**E**M**D**S**P* are comparable to *M**C**E**M**D**S**P* method. For some disease models, the power of the importance sampling based *M**C**E**M**D**S**P* is a little lower than *M**C**E**M**D**S**P*, but it is still generally higher than *L**I**M**E**D**S**P* method. What’s more, the important sampling based *M**C**E**M**D**S**P* is much more time efficient than *M**C**E**M**D**S**P* method. ![Comparison of power and type I error of LIME_{DSP}, MCEM_{DSP} and importance sampling methods ](ImportanceSampling_withnewAssociation_1.png)Comparison of power and type I error of *L**I**M**E**D**S**P*, *M**C**E**M**D**S**P* and importance sampling methods Real Data Analysis ================== To illustrate application of *M**C**E**M**D**S**P* method we apply it to Framingham Heart Study (FHS). FHS is a long-term, ongoing cardiovascular risk study to examine the epidemiology of cardiovascular diseases on cohorts of residents of Framingham, Massachusetts. This study is considered as the first prospective study on cardiovascular disease which identified risk factors and their joint effects. In FHS several cardiovascular disease conditions were considered including coronary heart disease, stroke, hypertension, peripheral arterial disease, and congestive heart failure. In this study, we focus on hypertension, a multifactorial complex trait, which can increase the risk of coronary heart disease. A person is categorized as hypertensive, if her/his systolic blood pressure is  ≥  140mmHg or diastolic blood pressure is  ≥  90mmHg or he/she has taken medication to control blood pressure. In this study, we considered 48071 SNPs in 22 autosomal chromosomes for 263 discordant sib-pair families with 229 additional siblings. As in our simulations, *M**C**E**M**D**S**P* can control type I error well for all the settings when there are additional siblings, we believe *M**C**E**M**D**S**P* is also valid in this real data analysis. Many of the top SNPs identified to be associated with the hypertensive trait by *M**C**E**M**D**S**P* (Table 3.4) have been previously showed in the literature related to hypertension, cardiovascular disorder or other complex disease. Specifically, SNP rs12117125, inhabiting in the gene FCGR2A on chromosome 1,is found to be associated with Kawasaki disease. It has been identified in association with damage to the coronary arteries, making it a notable contributor to pediatric acquired heart disease. The potential impact on the coronary arteries underscores the significance of addressing and managing this condition in children.. SNP rs12416299 is another SNP identified to be associated with blood pressure, presence of hypertension, and age at onset of hypertension . It is located in the gene SORBS1 on chromosome 10. The gene SVEP1 in chromosome 9 (containing rs1327533) is associated with hypertension and type 2 diabetes. It is worth to discussing several genes found to have an imprinting effect on hypertension (Table 3.4). Previous research suggests that, the gene CD44 in chromosome 11 (containing rs353637) is related to pulmonary arterial hypertension. SNP rs16832560 in gene SOX2-OT in chromosome 3, is also diagnosed as related to pulmonary arterial hypertension which regulates the proliferation, migration, anti-apoptosis, and inflammation of pulmonary artery smooth muscle cells. The gene TRPM8 in chromosome 2 (containing rs7593557) is found to be associated with renin - antiotensin - aldosterone system - mediated hypertension, which subsequently induces small molecule and fluid extravasation, increases plasma Ig levels, and elicits immunosuppression. Also, according to literature, the gene ATP11A in chromosome 13 (containing rs381583) may act as a susceptibility gene for pulmonary fibrosis and an indicator of pulmonary hypertension. When considering the maternal effect, there are several genes that harbor SNPs which are found to have a significant effect on the hypertensive trait by *M**C**E**M**D**S**P* (Table 3.4). The previous literature shows that, SNP rs4961, inhabiting in gene ADD1 in chromosome 4, is associated with hypertension. It was found to be significantly associated with increased prevalence of peripheral arterial disease and incidence of coronary heart disease among hypertensive individuals. As another example, rs3811515 within gene SPHKAP (chromosome 2) is associated with hypertension. The gene ROBO4 in chromosome 11 (containing rs7129737) is identified that it has an association with pulmonary artery hypertension. Gene DOT1L in chromosome 19 (containing rs12986413) was found to have a strong association with greater systolic and diastolic blood pressure. However, when considering the top genes, almost all of them have higher minor allele frequency ( ≥ 0.2). A full list of the top SNPs identified by *M**C**E**M**D**S**P* can be found in Appendix Table 1,2, and 3. We also apply *L**I**M**E**D**S**P* method to the FHS data, and calculate the Bonferroni method adjusted p-values. We find that at 0.05 significance level, no SNPs have association, imprinting, or maternal effects based on *L**I**M**E**D**S**P* method. *M**C**E**M**D**S**P*, on the other hand, can identify 45 SNPs with association, 4 SNPs with imprinting effect, and 4 SNPs with maternal effect at 0.05 significance level. This result is consistent with our conclusion in simulations that *M**C**E**M**D**S**P* is generally more powerful than *L**I**M**E**D**S**P*. [!htb] | Effect | SNP | Chr | Position(BP) | Gene |  − *l**o**g*10(P-value) | | --- | --- | --- | --- | --- | --- | | Association | rs12117125 | 1 | 161522658 | FCGR2A | 12.3787 | | | rs12416299 | 10 | 95542074 | SORBS1 | 15.4775 | | | rs1327533 | 9 | 110368883 | SVEP1 | 12.0632 | | Imprinting | rs353637 | 11 | 35163005 | CD44 | 3.6998 | | | rs16832560 | 3 | 181564785 | SOX2-OT | 3.6316 | | | rs7593557 | 2 | 233955144 | TRPM8 | 3.5197 | | Maternal | rs4961 | 4 | 2904980 | ADD1 | 4.4915 | | | rs3811515 | 2 | 228018313 | SPHKAP | 4.2972 | | | rs7129737 | 11 | 124888741 | ROBO4 | 4.1144 | | | rs12986413 | 19 | 2170955 | DOT1L | 4.0802 | Concluding Remarks and Summary ============================== Both imprinting and maternal effects are important sources of missing heritability in complex human diseases that cannot be explained by genome-wide association studies. Among all the existing methods to detect these two confounded epigenetic effects, almost all the full likelihood -based methods rely on strong yet unrealistic assumptions concerning mating type probabilities to avoid over parameterization. Two partial likelihood – based methods, LIME and *L**I**M**E**D**S**P*, can exceptionally overcome the over-parameterization problem without making those assumptions. LIME requires case families and control families, while *L**I**M**E**D**S**P* only requires families with discordant sibpair that is easier to recruit. Empirical and theoretical results have shown their validity and robustness, but it is likely that these partial likelihood – based methods are less efficient than full-likelihood based methods in term of parameter estimation. An MCEM algorithm was developed to find full likelihood maximizer based on case families and control families and was found to be more powerful than LIME. In this study, we further propose an *M**C**E**M**D**S**P* algorithm to find full likelihood maximizer based on discordant sibpair design. This *M**C**E**M**D**S**P* algorithm can detect the two epigenetic effects jointly and can accommodate discordant sibpair families with an arbitrary number of additional siblings. Expectation maximization method is often used for finding full likelihood maximizer when there are missing data or unobservable latent random variables. In our study, we use mating type probabilities (the nuisance parameters) as latent random variables in the EM algorithm to get full likelihood maximizer without making assumptions about them. As the expectation in E-step is not in closed format, monte carlo simulations are used to estimate the expectation. Log likelihood ratio test is used to test association, imprinting effect and maternal effect related to the disease of interest. Extensive simulations under different disease models and scenarios demonstrate that the *M**C**E**M**D**S**P* method can generally control type I error under most scenarios and is more powerful than *L**I**M**E**D**S**P*. For some simulation settings, the power gain by using *M**C**E**M**D**S**P* method instead of *L**I**M**E**D**S**P* method can be even higher than the power gain by recruiting one more additional sibling in each family based on *L**I**M**E**D**S**P*. In addition, the parameter estimation based on *M**C**E**M**D**S**P* method has similar relative bias and generally smaller standard error compared to *L**I**M**E**D**S**P* method. To illustrate the utility of *M**C**E**M**D**S**P* method, we apply it to Framingham Heart Study data. The results show that many of our findings are consistent with those in the literature, but potential novel genes also emerged. Despite its advantages, *M**C**E**M**D**S**P* has several limitations. Firstly, simulations show that the empirical type I error for imprinting effect is inflated when the data include 500 discordant sibpairs without any additional siblings and minor allele frequency is low (0.01). This is probably because the low minor allele frequency leads to a flatter expectation function in the E-step of the algorithm, such that the M-step cannot identify the global likelihood maximizer. Secondly, *M**C**E**M**D**S**P* algorithm takes much longer time than *L**I**M**E**D**S**P*, due to the time cost in generating monte carlo samples in each iteration. To alleviate this problem, we propose an importance sampling based *M**C**E**M**D**S**P* algorithm, which can reweight and repeatedly use the monte carlo samples in E-step to greatly reduce the time consumption. Besides, simulations show that importance sampling based *M**C**E**M**D**S**P* algorithm has similar performance as the original *M**C**E**M**D**S**P*. In future works, we want to explore whether any modification of current *M**C**E**M**D**S**P* algorithm can correct the type I error inflation under some situations as in the simulations. In addition, we also want to check the possibility of applying *M**C**E**M**D**S**P* to the data of discordant sibpair families with father’s genotype missing. It is well known that fathers are usually much harder to recruit than mothers for a genetic study, and thus a study design with discordant sibpair and their mother may be easier to meeting its target sample size. Nevertheless, with father’s genotype missing, when both mother’s and child’s are heterogeneous, we cannot determine the parental origin of child’s minor allele, thus the related penetrance probability calculation will become more complex. *L**I**M**E**D**S**P* actually fails to deal with this type of data, as the related partial likelihood will become no longer free of the nuisance parameters to get rid of the over-parameterization problem. As *M**C**E**M**D**S**P* algorithm is based on full likelihood, we expect that it can accommodate this type of incomplete family data. Conditional distribution of subvector of a Dirichlet random variable ==================================================================== Let, *Z* = (*Z*1, ..., *Z**k*) follows a Dirichlet distribution with parameters (*α*1, ..., *α**k*). Then, let (1, ..., *k*) = (*A*1, *A*2) and *Z* = (*Z**A*1, *Z**A*2) By the definition of joint pdf of (*Z*1, ..., *Z**k*); $$f\_{Z\_1,...Z\_k}(z\_1,...,z\_k) = \frac{\Gamma (\sum\_{i=1}^{k}\alpha\_i)}{\prod\_{i=1}^{k} \Gamma(\alpha\_i)} \prod\_{i=1}^{k} z\_i^{\alpha\_i-1}, z\_i \in (0,1), \sum\_{i=1}^{k} z\_i = 1$$ Similarly, the joint pdf of *Z**A*2 is; $$f\_{Z\_{A\_2}}(z\_{A\_2}) = \frac{\Gamma (\sum\_{i=1}^{k}\alpha\_i)}{\Gamma(\alpha\_0) \prod\_{i \in A\_2} \Gamma(\alpha\_i)} \prod\_{i \in A\_2} z\_i^{\alpha\_i - 1} \Bigg(1 - \sum\_{i \in A\_2} z\_i\Bigg)^{\alpha\_0 - 1}$$ where, *α*0 = ∑*i* ∈ *A*1*α**i*. Then, the conditional pdf of *Z**A*1∣*Z**A*2 is given by; $$\begin{aligned}[b] f\_{Z\_{A\_1}|Z\_{A\_2}}(z\_{A\_1}|z\_{A\_2}) & = \frac{\Gamma(\sum\_{i \in A\_1}\alpha\_i)}{\prod\_{i \in A\_1}\Gamma(\alpha\_i)} \prod\_{i \in A\_1} z\_i^{\alpha\_i-1} \Bigg(1 - \sum\_{i \in A\_2} z\_i\Bigg)^{-(\alpha\_0 - 1)}\\ & = \frac{\Gamma(\sum\_{i \in A\_1}\alpha\_i)}{\prod\_{i \in A\_1}\Gamma(\alpha\_i)} \prod\_{i \in A\_1} \Bigg[z\_i \Bigg(1 - \sum\_{i \in A\_2}z\_i\Bigg)^{-1}\Bigg]^{\alpha\_i - 1} \Bigg(1 - \sum\_{i \in A\_2} z\_i\Bigg)^{-(m-1)} \end{aligned}$$ When *m* = length(*A*1), *Z**A*1∣*Z**A*2 = *z**A*2 has a scaled Dirichlet distribution; $$\frac{1}{1 - \textbf{1}^T z\_{A\_2}} Z\_{A\_1}|Z\_{A\_2} = z\_{A\_2} \sim Dir(\alpha\_{A\_1})$$ where **1** is the vector with length *k* − *m* and all entries equal to 1. Calculation of Probabilities in Table 3.2 ========================================= In Table 3.2, the formula to calculate the joint probability is as follows: $$\begin{aligned} P(M & = m,F = f, C\_1 = c\_1,C\_2 = c\_2,D\_1 = 1,D\_2 = 0)\\ & = P(M = m,F = f).P(C\_1 = c\_1|M = m,F = f).P(C\_2 = c\_2|M = m,F = f)\\ &\times P(D\_1 = 1|M = m,F = f,C\_1 = c\_1).P(D\_2 = 0|M = m,F = f,C\_2 = c\_2). \end{aligned}$$ For all types other than types 12, 13, 14, 18 and 19 (Table 3.2), if a child has one copy of the variant allele, the parental origin can be unambiguously identified, and hence the joint probability can be easily obtained by extracting the relevant factors from the relative risk models for disease prevalence: *P*(*D*1 = 1∣*M* = *m*, *F* = *f*, *C*1 = *c*1) = *δ**R*1*I*(*c*1 = 1)*R*2*I*(*c*1 = 2)*R**i**m**I*(*c*1 = 1*m*)*S*1*I*(*M* = 1)*S*2*I*(*M* = 2),  *P*(*D*2 = 0∣*M* = *m*, *F* = *f*, *C*2 = *c*2) = 1 − *δ**R*1*I*(*c*2 = 1)*R*2*I*(*c*2 = 2)*R**i**m**I*(*c*2 = 1*m*)*S*1*I*(*M* = 1)*S*2*I*(*M* = 2),  For example, in the familial genotype combination (*m*, *f*, *c*1, *c*2) = (2, 0, 1, 1), $$\begin{aligned} P(M = 2,F = 0,C\_1 = 1,C\_2 & = 1,D\_1 = 1,D\_2 = 0)= P(M = 2,F = 0)\\ &.P(C\_1 = 1|M = 2,F = 0).P(C\_2 = 1|M = 2,F = 0)\\ &.P(D\_1 = 1|M = 2,F = 0,C\_1 = 1)\\ &.P(D\_2 = 0|M = 2,F = 0,C\_2 = 1)\\ & = \mu\_{20} \delta r\_1 s\_2 r\_{im} (1 - \delta r\_1 s\_2 r\_{im}).\\ \end{aligned}$$ For type 12, in which (*m*, *f*, *c*1, *c*2) = (1, 1, 1, 0), as the variant allele carried by the affected child can be inherited either from the mother or the father with equal probabilities, the penetrance probability ends up being a equally weighted summation. That is, $$\begin{aligned} P(D\_1 = 1|M = 2,F = 0,C\_1 = 1) & = \frac{1}{2} [\delta r\_1 s\_1 + \delta r\_1 r\_{im} s\_1]\\ & = \frac{1}{2} \delta r\_1 s\_1 (1 + r\_{im}).\\ \end{aligned}$$ Then the joint probability can be written as, $$\begin{aligned} P(M = 1,F = 1,C\_1 = 1&,C\_2 = 0,D\_1 = 1,D\_2 = 0) = P(M = 1,F = 1)\\ &.P(C\_1 = 1|M = 1,F = 1).P(C\_2 = 0|M = 1,F = 1)\\ &.P(D\_1 = 1|M = 1,F = 1,C\_1 = 1)\\ &.P(D\_2 = 1|M = 1,F = 1,C\_2 = 0)\\ & = \mu\_{11}. \frac{1}{2}. \frac{1}{4}. \frac{1}{2} \delta r\_1 s\_1 (1 + r\_{im}). (1 - \delta s\_1)\\ & = \mu\_{11} \frac{1}{16} \delta r\_1 s\_1 (1 + r\_{im}) (1 - \delta s\_1). \\ \end{aligned}$$ For type 13, in which (*m*, *f*, *c*1, *c*2) = (1, 1, 0, 1), the variant allele carried by the unaffected child can be inherited either from the mother or the father with equal probabilities and, as such, the penetrance probability ends up being the summation of two penetrance probabilities weighted equally. That is, $$\begin{aligned} P(D\_2 = 0|M = 1,F = 1,C\_2 = 1) & = \frac{1}{2} [(1 - \delta r\_1 s\_1) + (1 - \delta r\_1 s\_1 r\_{im})]\\ & = \frac{1}{2} [2 - \delta r\_1 s\_1 (1 + r\_{im})]. \end{aligned}$$ Then, the joint probability can be written as, $$\begin{aligned} P(M = 1,F = 1,C\_1 = 0 &,C\_2 = 1,D\_1 = 1,D\_2 = 0) = P(M = 1,F = 1)\\ &.P(C\_1 = 0|M = 1,F = 1).P(C\_2 = 1|M = 1,F = 1)\\ &.P(D\_1 = 1|M = 1,F = 1,C\_1 = 0)\\ &.P(D\_2 = 0|M = 1,F = 1,C\_2 = 1)\\ & = \mu\_{11}. \frac{1}{4}. \frac{1}{2}. \frac{1}{2} \delta s\_1. \frac{1}{2} [2 - \delta r\_1 s\_1 (1 + r\_{im})]\\ & = \mu\_{11} \frac{1}{16} \delta s\_1 [2 - \delta r\_1 s\_1 (1 + r\_{im})].\\ \end{aligned}$$ For type 14, in which (*m*, *f*, *c*1, *c*2) = (1, 1, 1, 1), the variant allele carried by the both affected and unaffected child can be inherited either from the mother or the father with equal probabilities and, as such, the penetrance probability ends up being a equally weighted summation. $$\begin{aligned} P(M = 1,F = 1,C\_1 = 1&,C\_2 = 1,D\_1 = 1,D\_2 = 0) = P(M = 1,F = 1)\\ &.P(C\_1 = 1|M = 1,F = 1).P(C\_2 = 1|M = 1,F = 1)\\ &.P(D\_1 = 1|M = 1,F = 1,C\_1 = 1)\\ &.P(D\_2 = 1|M = 1,F = 1,C\_2 = 1)\\ & = \mu\_{11}. \frac{1}{2}. \frac{1}{2}. \frac{1}{2} \delta r\_1 s\_1 (1 + r\_{im}). \frac{1}{2} [2 - \delta r\_1 s\_1 (1 + r\_{im})]\\ & = \mu\_{11} \frac{1}{16} \delta r\_1 s\_1 (1 + r\_{im}) [2 - \delta r\_1 s\_1 (1 + r\_{im})].\\ \end{aligned}$$ We can follow similar procedures to derive the joint probabilities for other settings. Results ======= ![Bar charts for compare type I error and power of MCEM_{DSP} and LIME_{DSP} methods with and without additional siblings when HWE = 0, maf = 0.1 and prev = 0.15 with 100 families](Power_withNewAssociation_3.png)Bar charts for compare type I error and power of *M**C**E**M**D**S**P* and *L**I**M**E**D**S**P* methods with and without additional siblings when HWE = 0, maf = 0.1 and prev = 0.15 with 100 families ![Bar charts for compare type I error and power of MCEM_{DSP} and LIME_{DSP} methods with and without additional siblings when HWE = 0, maf = 0.3 and prev = 0.15 with 100 families](Power_withNewAssociation_4.png)Bar charts for compare type I error and power of *M**C**E**M**D**S**P* and *L**I**M**E**D**S**P* methods with and without additional siblings when HWE = 0, maf = 0.3 and prev = 0.15 with 100 families ![Bar charts for compare type I error and power of MCEM_{DSP} and LIME_{DSP} methods with and without additional siblings when HWE = 1, maf = 0.1 and prev = 0.05 with 100 families](Power_withNewAssociation_5.png)Bar charts for compare type I error and power of *M**C**E**M**D**S**P* and *L**I**M**E**D**S**P* methods with and without additional siblings when HWE = 1, maf = 0.1 and prev = 0.05 with 100 families ![Bar charts for compare type I error and power of MCEM_{DSP} and LIME_{DSP} methods with and without additional siblings when HWE = 1, maf = 0.3 and prev = 0.05 with 100 families](Power_withNewAssociation_6.png)Bar charts for compare type I error and power of *M**C**E**M**D**S**P* and *L**I**M**E**D**S**P* methods with and without additional siblings when HWE = 1, maf = 0.3 and prev = 0.05 with 100 families ![Bar charts for compare type I error and power of MCEM_{DSP} and LIME_{DSP} methods with and without additional siblings when HWE = 1, maf = 0.1 and prev = 0.15 with 100 families](Power_withNewAssociation_7.png)Bar charts for compare type I error and power of *M**C**E**M**D**S**P* and *L**I**M**E**D**S**P* methods with and without additional siblings when HWE = 1, maf = 0.1 and prev = 0.15 with 100 families ![Bar charts for compare type I error and power of MCEM_{DSP} and LIME_{DSP} methods with and without additional siblings when HWE = 1, maf = 0.3 and prev = 0.15 with 100 families](Power_withNewAssociation_8.png)Bar charts for compare type I error and power of *M**C**E**M**D**S**P* and *L**I**M**E**D**S**P* methods with and without additional siblings when HWE = 1, maf = 0.3 and prev = 0.15 with 100 families ![Bar charts for compare type I error and power of MCEM_{DSP} and LIME_{DSP} methods with and without additional siblings when HWE = 0, maf = 0.1 and prev = 0.15 with 500 families](Power_withNewAssociation_n500_3.png)Bar charts for compare type I error and power of *M**C**E**M**D**S**P* and *L**I**M**E**D**S**P* methods with and without additional siblings when HWE = 0, maf = 0.1 and prev = 0.15 with 500 families ![Bar charts for compare type I error and power of MCEM_{DSP} and LIME_{DSP} methods with and without additional siblings when HWE = 0, maf = 0.3 and prev = 0.15 with 500 families](Power_withNewAssociation_n500_4.png)Bar charts for compare type I error and power of *M**C**E**M**D**S**P* and *L**I**M**E**D**S**P* methods with and without additional siblings when HWE = 0, maf = 0.3 and prev = 0.15 with 500 families ![Bar charts for compare type I error and power of MCEM_{DSP} and LIME_{DSP} methods with and without additional siblings when HWE = 1, maf = 0.1 and prev = 0.05 with 500 families](Power_withNewAssociation_n500_5.png)Bar charts for compare type I error and power of *M**C**E**M**D**S**P* and *L**I**M**E**D**S**P* methods with and without additional siblings when HWE = 1, maf = 0.1 and prev = 0.05 with 500 families ![Bar charts for compare type I error and power of MCEM_{DSP} and LIME_{DSP} methods with and without additional siblings when HWE = 1, maf = 0.3 and prev = 0.05 with 500 families](Power_withNewAssociation_n500_6.png)Bar charts for compare type I error and power of *M**C**E**M**D**S**P* and *L**I**M**E**D**S**P* methods with and without additional siblings when HWE = 1, maf = 0.3 and prev = 0.05 with 500 families ![Bar charts for compare type I error and power of MCEM_{DSP} and LIME_{DSP} methods with and without additional siblings when HWE = 1, maf = 0.1 and prev = 0.15 with 500 families](Power_withNewAssociation_n500_7.png)Bar charts for compare type I error and power of *M**C**E**M**D**S**P* and *L**I**M**E**D**S**P* methods with and without additional siblings when HWE = 1, maf = 0.1 and prev = 0.15 with 500 families ![Bar charts for compare type I error and power of MCEM_{DSP} and LIME_{DSP} methods with and without additional siblings when HWE = 1, maf = 0.3 and prev = 0.15 with 500 families](Power_withNewAssociation_n500_8.png)Bar charts for compare type I error and power of *M**C**E**M**D**S**P* and *L**I**M**E**D**S**P* methods with and without additional siblings when HWE = 1, maf = 0.3 and prev = 0.15 with 500 families ![Box plots for biases of parameters from MCEM_{DSP} and LIME_{DSP} methods when HWE = 0, maf = 0.1 and PREV = 0.15 with 100 families](NEWRelativeBias_n100_without910_3.png)Box plots for biases of parameters from *M**C**E**M**D**S**P* and *L**I**M**E**D**S**P* methods when HWE = 0, maf = 0.1 and PREV = 0.15 with 100 families ![Box plots for biases of parameters from MCEM_{DSP} and LIME_{DSP} methods when HWE = 0, maf = 0.3 and PREV = 0.15 with 100 families](NEWRelativeBias_n100_without910_4.png)Box plots for biases of parameters from *M**C**E**M**D**S**P* and *L**I**M**E**D**S**P* methods when HWE = 0, maf = 0.3 and PREV = 0.15 with 100 families ![Box plots for biases of parameters from MCEM_{DSP} and LIME_{DSP} methods when HWE = 1, maf = 0.1 and PREV = 0.05 with 100 families](NEWRelativeBias_n100_without910_5.png)Box plots for biases of parameters from *M**C**E**M**D**S**P* and *L**I**M**E**D**S**P* methods when HWE = 1, maf = 0.1 and PREV = 0.05 with 100 families ![Box plots for biases of parameters from MCEM_{DSP} and LIME_{DSP} methods when HWE = 1, maf = 0.3 and PREV = 0.05 with 100 families](NEWRelativeBias_n100_without910_6.png)Box plots for biases of parameters from *M**C**E**M**D**S**P* and *L**I**M**E**D**S**P* methods when HWE = 1, maf = 0.3 and PREV = 0.05 with 100 families ![Box plots for biases of parameters from MCEM_{DSP} and LIME_{DSP} methods when HWE = 1, maf = 0.1 and PREV = 0.15 with 100 families](NEWRelativeBias_n100_without910_7.png)Box plots for biases of parameters from *M**C**E**M**D**S**P* and *L**I**M**E**D**S**P* methods when HWE = 1, maf = 0.1 and PREV = 0.15 with 100 families ![Box plots for biases of parameters from MCEM_{DSP} and LIME_{DSP} methods when HWE = 1, maf = 0.3 and PREV = 0.15 with 100 families](NEWRelativeBias_n100_without910_8.png)Box plots for biases of parameters from *M**C**E**M**D**S**P* and *L**I**M**E**D**S**P* methods when HWE = 1, maf = 0.3 and PREV = 0.15 with 100 families ![Box plots for biases of parameters from MCEM_{DSP} and LIME_{DSP} methods when HWE = 0, maf = 0.1 and PREV = 0.15 with 500 families](NEWRelativeBias_n500_without910_3.png)Box plots for biases of parameters from *M**C**E**M**D**S**P* and *L**I**M**E**D**S**P* methods when HWE = 0, maf = 0.1 and PREV = 0.15 with 500 families ![Box plots for biases of parameters from MCEM_{DSP} and LIME_{DSP} methods when HWE = 0, maf = 0.3 and PREV = 0.15 with 500 families](NEWRelativeBias_n500_without910_4.png)Box plots for biases of parameters from *M**C**E**M**D**S**P* and *L**I**M**E**D**S**P* methods when HWE = 0, maf = 0.3 and PREV = 0.15 with 500 families ![Box plots for biases of parameters from MCEM_{DSP} and LIME_{DSP} methods when HWE = 1, maf = 0.1 and PREV = 0.05 with 500 families](NEWRelativeBias_n500_without910_5.png)Box plots for biases of parameters from *M**C**E**M**D**S**P* and *L**I**M**E**D**S**P* methods when HWE = 1, maf = 0.1 and PREV = 0.05 with 500 families ![Box plots for biases of parameters from MCEM_{DSP} and LIME_{DSP} methods when HWE = 1, maf = 0.3 and PREV = 0.05 with 500 families](NEWRelativeBias_n500_without910_6.png)Box plots for biases of parameters from *M**C**E**M**D**S**P* and *L**I**M**E**D**S**P* methods when HWE = 1, maf = 0.3 and PREV = 0.05 with 500 families ![Box plots for biases of parameters from MCEM_{DSP} and LIME_{DSP} methods when HWE = 1, maf = 0.1 and PREV = 0.15 with 500 families](NEWRelativeBias_n500_without910_7.png)Box plots for biases of parameters from *M**C**E**M**D**S**P* and *L**I**M**E**D**S**P* methods when HWE = 1, maf = 0.1 and PREV = 0.15 with 500 families ![Box plots for biases of parameters from MCEM_{DSP} and LIME_{DSP} methods when HWE = 1, maf = 0.3 and PREV = 0.15 with 500 families](NEWRelativeBias_n500_without910_8.png)Box plots for biases of parameters from *M**C**E**M**D**S**P* and *L**I**M**E**D**S**P* methods when HWE = 1, maf = 0.3 and PREV = 0.15 with 500 families Real data Analysis ================== [!htb] | c| c| c| c| c |c| c | Rank & SNP & Chr & Position(BP) & Gene & & & & & & *M**C**E**M**D**S**P* & *L**I**M**E**D**S**P* 1 & rs5760711 & 22 & 24894886 & SGSM1 & 15.9546 & 0.4676 2 & rs12416299 & 10 & 95542074 & SORBS1 & 15.4775 & 0.2365 3 & rs340719 & 2 & 16222132 & & 14.8754 & 0.4882 4 & rs2973566 & 5 & 73852656 & ARHGEF28 & 14.8404 & 0.4387 5 & rs11539522 & 15 & 49625193 & DTWD1 & 14.6122 & 0.4878 6 & rs1146920 & 13 & 77760082 & SLAIN1 & 12.6497 & 0.4622 7 & rs12117125 & 1 & 161522658 & FCGR2A & 12.3787 & 0.4241 8 & rs2666954 & 6 & 380789 & & 12.0756 & 0.4285 9 & rs6076157 & 20 & 23882207 & & 12.0674 & 0.4477 10 & rs1327533 & 9 & 110368883 & SVEP1 & 12.0632 & 0.2198 11 & rs12306837 & 12 & 4571302 & DYRK4 & 11.7532 & 0.4845 12 & rs2281954 & 10 & 125796084 & UROS & 11.5686 & 0.4690 13 & rs11033793 & 11 & 4769244 & MMP26 & 11.4601 & 0.1714 14 & rs12345874 & 9 & 68776727 & PIP5K1B & 10.5720 & 0.4487 15 & rs3829736 & 12 & 47982720 & COL2A1 & 10.2677 & 0.3841 16 & rs9367018 & 6 & 12173899 & HIVEP1 & 9.6447 & 0.4840 17 & rs12197569 & 6 & 2164221 & GMDS & 9.3021 & 0.4700 18 & rs7124812 & 11 & 3439368 & LOC124902616 & 8.8759 & 0.3451 19 & rs9382736 & 6 & 57436863 & PRIM2 & 8.8020 & 0.4176 20 & rs16843643 & 2 & 241413353 & FARP2 & 8.6881 & 0.4622 [!htb] | c| c| c| c| c |c| c | Rank & SNP & Chr & Position(BP) & Gene & & & & & & *M**C**E**M**D**S**P* & *L**I**M**E**D**S**P* 1 & rs2281954 & 10 & 125796084 & UROS & 12.7598 & 0.4690 2 & rs7124812 & 11 & 3439368 & LOC124902616  & 8.3917 & 0.3451 3 & rs7928271 & 11 & 126918754 & KIRREL3  & 5.9070 & 0.4653 4 & rs6543298 & 2 & 98131244 & VWA3B & 4.9663 & 0.4565 5 & rs2860161 & 7 & 142824583 & & 4.8961 & 0.2271 6 & rs6886844 & 5 & 105942041 & & 4.8306 & 0.3260 7 & rs6485742 & 11 & 12432529 & PARVA & 4.7765 & 0.3281 8 & rs2059928 & 15 & 22152346 & & 4.7728 & 0.3815 9 & rs3734945 & 7 & 138621037 & SVOPL & 4.7018 & 0.3498 10 & rs1729086 & 2 & 231343344 & ARMC9 & 4.0955 & 0.1001 11 & rs2282537 & 11 & 120317262 & POU2F3 & 3.9273 & 0.1291 12 & rs1106337 & 1 & 167166482 & LOC105371601 & 3.7137 & 0.0351 13 & rs353637 & 11 & 35163005 & CD44 & 3.6998 & 0.1287 14 & rs13209741 & 6 & 1528900 & LOC102723944 & 3.6427 & 0.1949 15 & rs16832560 & 3 & 181564785 & SOX2-OT & 3.6316 & 0.1060 16 & rs818055 & 9 & 131092123 & LAMC3 & 3.6123 & 0.2010 17 & rs10018350 & 4 & 6320374 & PPP2R2C & 3.5524 & 0.1383 18 & rs949292 & 18 & 60501574 & & 3.5417 & 0.3216 19 & rs6878253 & 5 & 132277600 & LOC124901063 & 3.5211 & 0.0858 20 & rs7593557 & 2 & 233955144 & TRPM8 & 3.5197 & 0476 [!htb] | c| c| c| c| c |c| c | Rank & SNP & Chr & Position(BP) & Gene & & & & & & *M**C**E**M**D**S**P* & *L**I**M**E**D**S**P* 1 & rs9294284 & 6 & 83954925 & CYB5R4 & 279.1653 & 0.4817 2 & rs4660212 & 1 & 42030728 & HIVEP3 & 6.6689 & 0.4603 3 & rs6485742 & 11 & 12432529 & PARVA & 6.4115 & 0.3281 4 & rs7835497 & 8 & 39765231 & ADAM2 & 5.61404 & 0.1476 5 & rs8130490 & 21 & 18355236 & TMPRSS15 & 5.5795 & 0.4691 6 & rs5022654 & 9 & 4766378 & & 4.8936 & 0.0844 7 & rs12306837 & 12 & 4571302 & DYRK4 & 4.6017 & 0.4845 8 & rs7928271 & 11 & 126918754 & KIRREL3 & 4.5619 & 0.4643 9 & rs3751020 & 11 & 119608340 & & 4.5395 & 0.3099 10 & rs4961 & 4 & 2904980 & ADD1 & 4.4915 & 0.1957 11 & rs12197569 & 6 & 2164221 & GMDS & 4.3880 & 0.4700 12 & rs12703608 & 7 & 144547846 & TPK1 & 4.3071 & 0.0997 13 & rs3811515 & 2 & 228018313 & SPHKAP & 4.2972 & 0.2384 14 & rs4814626 & 20 & 17580396 & DSTN & 4.2546 & 0.1779 15 & rs17594685 & 13 & 41679215 & VWA8 & 4.1531 & 0.1108 16 & rs7129737 & 11 & 124888741 & ROBO4 & 4.1144 & 0.1369 17 & rs12986413 & 19 & 2170955 & DOT1L & 4.0802 & 0.4825 18 & rs10807372 & 6 & 47681838 & ADGRF2 & 4.066 & 0.3564 19 & rs10188832 & 2 & 16621901 & CYRIA & 4.0573 & 0.2162 20 & rs11595441 & 10 & 15010629 & & 4.0397 & 0.4536
arxiv_0000077
On the Windfall and Price of Friendship:Inoculation Strategies on Social Networks ================================================================================= This article investigates selfish behavior in games where players are embedded in a social context. A framework is presented which allows us to measure the *Windfall of Friendship*, i.e., how much players benefit (compared to purely selfish environments) if they care about the welfare of their friends in the social network graph. As a case study, a virus inoculation game is examined. We analyze the corresponding Nash equilibria and show that the Windfall of Friendship can never be negative. However, we find that if the valuation of a friend is independent of the total number of friends, the social welfare may not increase monotonically with the extent to which players care for each other; intriguingly, in the corresponding scenario where the relative importance of a friend declines, the Windfall is monotonic again. This article also studies convergence of best-response sequences. It turns out that in social networks, convergence times are typically higher and hence constitute a price of friendship. While such phenomena may be known on an anecdotal level, our framework allows us to quantify these effects analytically. Our formal insights on the worst case equilibria are complemented by simulations shedding light onto the structure of other equilibria. Game Theory, Social Networks, Equilibria, Virus Propagation, Windfall of Friendship Introduction ============ Social networks have existed for thousands of years, but it was not until recently that researchers have started to gain scientific insights into phenomena like the *small world property*. The rise of the Internet has enabled people to connect with each other in new ways and to find friends sharing the same interests from all over the planet. A social network on the Internet can manifest itself in various forms. For instance, on *Facebook*, people maintain virtual references to their friends. The contacts stored on mobile phones or email clients form a social network as well. The analysis of such networks—both their static properties as well as their evolution over time—is an interesting endeavor, as it reveals many aspects of our society in general. A classic tool to model human behavior is *game theory*. It has been a fruitful research field in economics and sociology for many years. Recently, computer scientists have started to use game theory methods to shed light onto the complexities of today’s highly decentralized networks. Game theoretic models traditionally assume that people act autonomously and are steered by the desire to maximize their benefits (or utility). Under this assumption, it is possible to quantify the performance loss of a distributed system compared to situations where all participants collaborate perfectly. A widely studied measure which captures this loss of social welfare is the *Price of Anarchy* (PoA). Even though these concepts can lead to important insights in many environments, we believe that in some situations, the underlying assumptions do not reflect reality well enough. One such example are social networks: most likely people act less selfishly towards their friends than towards complete strangers. Such altruistic behavior is typically not considered in game-theoretic models. In this article, we propose a game theoretic framework for social networks. Social networks are not only attractive to their participants, e.g., it is well-known that the user profiles are an interesting data source for the PR industry to provide tailored advertisements. Moreover, social network graphs can also be exploited for attacks, e.g., email viruses using the users’ address books for propagating, worms spreading on mobile phone networks and over the Internet telephony tool Skype have been reported (e.g., ). This article investigates rational inoculation strategies against such viruses from our game theoretic perspective, and studies the propagation of such viruses on the social network. Our Contribution ---------------- This article makes a first step to combine two active threads of research: social networks and game theory. We introduce a framework taking into consideration that people may care about the well-being of their friends. In particular, we define the *Windfall of Friendship* (WoF) which captures to what extent the social welfare improves in social networks compared to purely selfish systems. In order to demonstrate our framework, as a case study, we provide a game-theoretic analysis of a *virus inoculation game*. Concretely, we assume that the players have the choice between inoculating by buying anti-virus software and risking infection. As expected, our analysis reveals that the players in this game always benefit from caring about the other participants in the social network rather than being selfish. Intriguingly, however, we find that the Windfall of Friendship may not increase monotonically with stronger relationships. Despite the phenomenon being an “ever-green” in political debates, to the best of our knowledge, this is the first article to quantify this effect formally. This article derives upper and lower bounds on the Windfall of Friendship in simple graphs. For example, we show that the Windfall of Friendship in a complete graph is at most 4/3; this is tight in the sense that there are problem instances where the situation can indeed improve this much. Moreover, we show that in star graphs, friendship can help to eliminate undesirable equilibria. Generally, we discover that even in simple graphs the Windfall of Friendship can attain a large spectrum of values, from constant ratios up to Θ(*n*), *n* being the network size, which is asymptotically maximal for general graphs. Also an alternative friendship model is discussed in this article where the relative importance of an individual friend declines with a larger number of friends. While the Windfall of Friendship is still positive, we show that the non-monotonicity result is no longer applicable. Moreover, it is proved that in both models, computing the best and the worst friendship Nash equilibrium is NP-hard. The paper also initiates the discussion of implications on convergence. We give a potential function argument to show convergence of best-response sequences in various models and for simple, cyclic graphs. Moreover, we report on our simulations which indicate that the convergence times are typically higher in social contexts, and hence constitute a certain price of friendship. Finally, to complement our formal analysis of the worst equilibria, simulation results for average case equilibria are discussed. Organization ------------ The remainder of this article is organized as follows. Section [sec:relwork] reviews related work and Section [sec:model] formally introduces our model and framework. The Windfall of Friendship on general graphs and on special graphs is studied in Sections [sec:general] and [sec:cliquestar] respectively. Section [sec:relative] discusses an alternative model where the relative importance of a friend declines if the total number of friends increases. Aspects of best-response convergence and implications are considered in Section [sec:convergence]. We report on simulations in Section [sec:simulations]. Finally, we conclude the article in Section [sec:conclusion]. Related Work ============ Social networks are a fascinating topic not only in social sciences, but also in ethnology, and psychology. The advent of social networks on the Internet, e.g., *Facebook*, *LinkedIn*, *MySpace*, *Orkut*, or *Xing*, to name but a few, heralded a new kind of social interactions, and the mere scale of online networks and the vast amount of data constitute an unprecedented treasure for scientific studies. The topological structure of these networks and the dynamics of the user behavior has a mathematical and algorithmic dimension, and has raised the interest of mathematicians and engineers accordingly. The famous *small world experiment*  conducted by Stanley Milgram 1967 has gained attention by the algorithm community  and inspired research on topics such as decentralized search algorithms , routing on social networks  and the identification of communities . The dynamics of epidemic propagation of information or diseases has been studied from an algorithmic perspective as well . Knowledge on effects of this cascading behavior is useful to understand phenomena as diverse as word-of-mouth effects, the diffusion of innovation, the emergence of bubbles in a financial market, or the rise of a political candidate. It can also help to identify sets of influential players in networks where marketing is particularly efficient (*viral marketing*). For a good overview on economic aspects of social networks, we refer the reader to, which, i.a., compares random graph theory with game theoretic models for the formation of social networks. Recently, game theory has also received much attention by computer scientists. This is partly due to the various actors and stake-holders who influence the decentralized growth of the Internet: game theory is a useful tool to gain insights into the Internet’s socio-economic complexity. Many aspects have been studied from a game-theoretic point of view, e.g., *routing*, *multicast transmissions*, or *network creation*. Moreover, computer scientists are interested in the algorithmic problems offered by game theory, e.g., on the existence of pure equilibria . This article applies game theory to social networks where players are not completely selfish and autonomous but have friends about whose well-being they care to some extent. We demonstrate our mathematical framework with a virus inoculation game on social graphs. There is a large body of literature on the propagation of viruses . Miscellaneous misuse of social networks has been reported, e.g., *email viruses*[1](#fn1) have used address lists to propagate to the users’ friends. Similar vulnerabilities have been exploited to spread worms on the *mobile phone network*  and on the Internet telephony tool *Skype*[2](#fn2). There already exists interesting work on game theoretic and epidemic models of propagation in social networks. For instance, Montanari and Saberi  attend to a game theoretic model for the diffusion of an innovation in a network and characterize the rate of convergence as a function of graph structure. The authors highlight crucial differences between game theoretic and epidemic models and find that the spread of viruses, new technologies, and new political or social beliefs do not have the same viral behavior. The articles closest to ours are . Our model is inspired by Aspnes et al. . The authors apply a classic game-theoretic analysis and show that selfish systems can be very inefficient, as the Price of Anarchy is Θ(*n*), where *n* is the total number of players. They show that computing the social optimum is NP-hard and give a reduction to the combinatorial problem *sum-of-squares partition*. They also present a *O*(log2*n*) approximation. Moscibroda et al.  have extended this model by introducing malicious players in the selfish network. This extension facilitates the estimation of the robustness of a distributed system to malicious attacks. They also find that in a non-oblivious model, intriguingly, the presence of malicious players may actually *improve* the social welfare. In a follow-up work  which generalizes the social context of  to arbitrary bilateral relationships, it has been shown that there is no such phenomenon in a simple network creation game. The *Windfall of Malice* has also been studied in the context of congestion games  by Babaioff et al. In contrast to these papers, our focus here is on social graphs where players are concerned about their friends’ benefits. There is other literature on game theory where players are influenced by their neighbors. In *graphical economics*, an undirected graph is given where an edge between two players denotes that free trade is allowed between the two parties, where the absence of such an edge denotes an embargo or an other restricted form of direct trade. The payoff of a player is a function of the actions of the players in its neighborhood only. In contrast to our work, a different equilibrium concept is used and no social aspects are taken into consideration. Note that the nature of game theory on social networks also differs from *cooperative games* (e.g., ) where each coalition *C* ⊆ 2*V* of players *V* has a certain characteristic cost or payoff function *f* : 2*V* → R describing the collective payoff the players can gain by forming the coalition. In contrast to cooperative games, the “coalitions” are fixed, and a player participates in the “coalitions” of all its neighbors. A preliminary version of this article appeared at ACM EC 2008 , and there have been several interesting results related to our work since then. For example,  studies auctions with spite and altruism among bidders, and presents explicit characterizations of Nash equilibria for first-price auctions with random valuations and arbitrary spite/altruism matrices, and for first and second price auctions with arbitrary valuations and so-called regular social networks (players have same out-degree). By rounding a natural linear program with region-growing techniques, Chen et al.  present a better, *O*(log*z*)-approximation for the best vaccination strategy in the original model of , where *z* is the support size of the outbreak distribution. Moreover, the effect of autonomy is investigated: a benevolent authority may suggest which players should be vaccinated, and the authors analyze the “Price of Opting Out” under partially altruistic behavior; they show that with positive altruism, Nash equilibria may not exist, but that the price of opting out is bounded. We extend the conference version of this article  in several respects. The two most important additions concern *relative friendship* and *convergence*. We study an additional model where the relative importance of a neighbor declines with the total number of friends and find that while friendship is still always beneficial, the non-monotonicity result no longer applies: unlike in the absolute friendship model, the Windfall of Friendship can only increase with stronger social ties. In addition, we initiate the study of convergence issues in the social network. It turns out that it takes longer until an equilibrium is reached compared to purely selfish environments and hence constitutes a price of friendship. We present a potential function argument to prove convergence in some simple cyclic networks, and complement our study with simulations on Kleinberg graphs. We believe that the existence of and convergence to social equilibria are exciting questions for future research (see also the related fields of *player-specific utilities*  and *local search complexity* ). Finally, there are several minor changes, e.g., we improve the bound in Theorem [thm:monotone] from *n* > 7 to *n* > 3. Model ===== This section introduces our framework. In order to gain insights into the Windfall of Friendship, we study a virus inoculation game on a social graph. We present the model of this game and we show how it can be extended to incorporate social aspects. Virus Inoculation Game ---------------------- The virus inoculation game was introduced by . We are given an undirected network graph *G* = (*V*, *E*) of *n* = ∣*V*∣ players (or nodes) *p**i* ∈ *V*, for *i* = 1, …, *n*, who are connected by a set of edges (or *links*) *E*. Every player has to decide whether it wants to *inoculate* (e.g., purchase and install anti-virus software) which costs *C*, or whether it prefers saving money and facing the risk of being infected. We assume that being infected yields a damage cost of *L* (e.g., a computer is out of work for *L* days). In other words, an instance *I* of a game consists of a graph *G* = (*V*, *E*), the inoculation cost *C* and a damage cost *L*. We introduce a variable *a**i* for every player *p**i* denoting *p**i*’s chosen *strategy*. Namely, *a**i* = 1 describes that player *p**i* is protected whereas for a player *p**j* willing to take the risk, *a**j* = 0. In the following, we will assume that *a**j* ∈ {0, 1}, that is, we do not allow players to *mix* (i.e., use probabilistic distributions over) their strategies. These choices are summarized by the *strategy profile*, the vector *a⃗* = (*a*1, …, *a**n*). After the players have made their decisions, a virus spreads in the network. The propagation model is as follows. First, one player *p* of the network is chosen uniformly at random as a starting point. If this player is inoculated, there is no damage and the process terminates. Otherwise, the virus infects *p* and all unprotected neighbors of *p*. The virus now propagates recursively to their unprotected neighbors. Hence, the more insecure players are connected, the more likely they are to be infected. The vulnerable region (set of players) in which an insecure player *p**i* lies is referred to as *p**i*’s *attack component*. We only consider a limited region of the parameter space to avoid trivial cases. If the cost *C* is too large, no player will inoculate, resulting in a totally insecure network and therefore all players eventually will be infected. On the other hand, if *C* <  < *L*, the best strategy for all players is to inoculate. Thus, we will assume that *C* ≤ *L* and *C* > *L*/*n* in the following. In our game, a player has the following expected cost: [Actual Individual Cost][actualcost]$\\$ The *actual individual cost* of a player *p**i* is defined as $$c\_a(i,\vec{a}) = a\_{i} \cdot C + (1-a\_i) L \cdot \frac{k\_i}{n}$$ where *k**i* denotes the size of *p**i*’s attack component. If *p**i* is inoculated, *k**i* stands for the size of the attack component that would result if *p**i* became insecure. In the following, let *c**a*0(*i*, *a⃗*) refer to the actual cost of an insecure and *c**a*1(*i*, *a⃗*) to the actual cost of a secure player *p**i*. The total *social cost* of a game is defined as the sum of the cost of all participants: *C**a*(*a⃗*) = ∑*p**i* ∈ *V**c**a*(*i*, *a⃗*). Classic game theory assumes that all players act selfishly, i.e., each player seeks to minimize its individual cost. In order to study the impact of such selfish behavior, the solution concept of a *Nash equilibrium* (NE) is used. A Nash equilibrium is a strategy profile where no selfish player can unilaterally reduce its individual cost given the strategy choices of the other players. We can think of Nash equilibria as the stable strategy profiles of games with selfish players. We will only consider pure Nash equilibria in this article, i.e., players cannot use random distributions over their strategies but must decide whether they want to inoculate or not. In a pure Nash equilibrium, it must hold for each player *p**i* that given a strategy profile *a⃗* ∀*p**i* ∈ *V*,  ∀*a**i* :  *c**a*(*i*, *a⃗*) ≤ *c**a*(*i*, (*a*1, …, 1 − *a**i*, …, *a**n*)), implying that player *p**i* cannot decrease its cost by choosing an alternative strategy 1 − *a**i*. In order to quantify the performance loss due to selfishness, the (not necessarily unique) Nash equilibria are compared to the optimum situation where all players collaborate. To this end we consider the *Price of Anarchy* (PoA), i.e., the ratio of the social cost of the worst Nash equilibrium divided by the optimal social cost for a problem instance *I*. More formally, *P**o**A*(*I*) = max*N**E**C**N**E*(*I*)/*C**O**P**T*(*I*). Social Networks --------------- Our model for social networks is as follows. We define a *Friendship Factor* *F* which captures the extent to which players care about their *friends*, i.e., about the players *adjacent* to them in the social network. More formally, *F* is the factor by which a player *p**i* takes the individual cost of its neighbors into account when deciding for a strategy. *F* can assume any value between 0 and 1. *F* = 0 implies that the players do not consider their neighbors’ cost at all, whereas *F* = 1 implies that a player values the well-being of its neighbors to the same extent as its own. Let Γ(*p**i*) denote the set of neighbors of a player *p**i*. Moreover, let Γ*s**e**c*(*p**i*) ⊆ Γ(*p**i*) be the set of inoculated neighbors, and $\Gamma\_{\overline{sec}}(p\_i)=\Gamma(p\_i)\setminus \Gamma\_{sec}(p\_i)$ the remaining insecure neighbors. We distinguish between a player’s *actual cost* and a player’s *perceived cost*. A player’s actual individual cost is the expected cost arising for each player defined in Definition [actualcost] used to compute a game’s social cost. In our social network, the decisions of our players are steered by the players’ *perceived cost*. [Perceived Individual Cost][perceived cost]$\\$ The *perceived individual cost* of a player *p**i* is defined as *c**p*(*i*, *a⃗*) = *c**a*(*i*, *a⃗*) + *F* ⋅ ∑*p**j* ∈ Γ(*p**i*)*c**a*(*j*, *a⃗*). In the following, we write *c**p*0(*i*, *a⃗*) to denote the perceived cost of an insecure player *p**i* and *c**p*1(*i*, *a⃗*) for the perceived cost of an inoculated player. This definition entails a new notion of equilibrium. We define a *friendship Nash equilibrium* (FNE) as a strategy profile *a⃗* where no player can reduce its *perceived* cost by unilaterally changing its strategy given the strategies of the other players. Formally, ∀*p**i* ∈ *V*, ∀*a**i* : *c**p*(*i*, *a⃗*) ≤ *c**p*(*i*, (*a*1, …, 1 − *a**i*, …, *a**n*)). Given this equilibrium concept, we define the *Windfall of Friendship* *Υ*. [Windfall of Friendship (WoF)][def:WFDef] The *Windfall of Friendship* *Υ*(*F*, *I*) is the ratio of the social cost of the worst Nash equilibrium for *I* and the social cost of the worst friendship Nash equilibrium for *I*: $$\Upsilon(F,I)=\frac{\max\_{NE}{C\_{NE}(I)}}{\max\_{FNE}{C\_{FNE}(F,I)}}$$ *Υ*(*F*, *I*) > 1 implies the existence of a real windfall in the system, whereas *Υ*(*F*, *I*) < 1 denotes that the social cost can become *greater* in social graphs than in purely selfish environments. General Analysis ================ In this section we characterize friendship Nash equilibria and derive general results on the Windfall of Friendship for the virus propagation game in social networks. It has been shown  that in classic Nash equilibria (*F* = 0), an attack component can never consist of more than *C**n*/*L* insecure players. A similar characteristic also holds for friendship Nash equilibria. As every player cares about its neighbors, the maximal attack component size in which an insecure player *p**i* still does not inoculate depends on the number of *p**i*’s insecure neighbors and the size of their attack components. Therefore, it differs from player to player. We have the following helper lemma. [lemma:ac-size] The player *p**i* will inoculate if and only if the size of its attack component is $$k\_i > \frac{Cn/L + F \cdot \sum\_{p\_j \in{\Gamma\_{\overline{sec}}(p\_i)}}{k\_j}} {1+F|\Gamma\_{\overline{sec}}(p\_i)|},$$ where the *k**j*s are the attack component sizes of *p**i*’s insecure neighbors assuming *p**i* is secure. Player *p**i* will inoculate if and only if this choice lowers the perceived cost. By Definition [perceived cost], the perceived individual cost of an inoculated player is $$c\_p^1(i,\vec{a}) = C + F \left( |\Gamma\_{sec}(p\_i)|{C} + \sum\_{p\_j \in \Gamma\_{\overline{sec}}(p\_i)}{L \frac{k\_j}{n}} \right)$$ and for an insecure player we have $$c\_p^0(i,\vec{a}) = L \frac{k\_i}{n} + F \left( |\Gamma\_{sec}(p\_i)|{C} + |\Gamma\_{\overline{sec}}(p\_i)|{L \frac{k\_i}{n}} \right).$$ For *p**i* to prefer to inoculate it must hold that $$\begin{aligned} c\_p^0(i,\vec{a}) &>& c\_p^1(i,\vec{a}) ~~\Leftrightarrow \\ L\frac{k\_i}{n} + F \cdot |\Gamma\_{\overline{sec}}(p\_i)|{L \frac{k\_i}{n}} &>& C + F \cdot \sum\_{p\_j \in \Gamma\_{\overline{sec}}(p\_i)}{L \frac{k\_j}{n}} ~~\Leftrightarrow \\ L \frac{k\_i}{n} (1 + F |\Gamma\_{\overline{sec}}(p\_i)|) &>& C + \frac{FL}{n} \cdot \sum\_{p\_j \in \Gamma\_{\overline{sec}}(p\_i)}{k\_j} ~~\Leftrightarrow \\ k\_i (1 + F |\Gamma\_{\overline{sec}}(p\_i)|) &>& Cn/L + F \cdot \sum\_{p\_j \in \Gamma\_{\overline{sec}}(p\_i)}{k\_j} ~~\Leftrightarrow\\ k\_i&>& \frac{Cn/L + F \cdot \sum\_{p\_j \in \Gamma\_{\overline{sec}}(p\_i)}{k\_j}}{1 + F |\Gamma\_{\overline{sec}}(p\_i)|}.\end{aligned}$$ A pivotal question is of course whether social networks where players care about their friends yield better equilibria than selfish environments. The following theorem answers this questions affirmatively: the worst FNE costs never more than the worst NE. [FFneversmaller1] For all instances of the virus inoculation game and 0 ≤ *F* ≤ 1, it holds that 1 ≤ *Υ*(*F*, *I*) ≤ *PoA*(*I*) The proof idea for *Υ*(*F*, *I*) ≥ 1 is the following: for an instance *I* we consider an arbitrary FNE with *F* > 0. Given this equilibrium, we show the existence of a NE with larger social cost (according to , our best response strategy always converges). Let *a⃗* be any (e.g., the worst) FNE in the social model. If *a⃗* is also a NE in the same instance with *F* = 0 then we are done. Otherwise there is at least one player *p**i* that prefers to change its strategy. Assume *p**i* is insecure but favors inoculation. Therefore *p**i*’s attack component has on the one hand to be of size at least *k**i*ʹ > *C**n*/*L*  and on the other hand of size at most $k\_i'' = (Cn/L + F \cdot \sum\_{p\_j \in \Gamma\_{\overline{sec}}(p\_i)}{k\_j}) / (1+F |\Gamma\_{\overline{sec}}(p\_i)|) \leq (Cn/L + F |\Gamma\_{\overline{sec}}(p\_i)| (k\_i'' - 1) ) / (1+F |\Gamma\_{\overline{sec}}(p\_i)|) ~~\Leftrightarrow k\_i'' \leq Cn/L - F |\Gamma\_{\overline{sec}}(p\_i)|$ (cf Lemma [lemma:ac-size]). This is impossible and yields a contradiction to the assumption that in the selfish network, an additional player wants to inoculate. It remains to study the case where *p**i* is secure in the FNE but prefers to be insecure in the NE. Observe that, since every player has the same preference on the attack component’s size when *F* = 0, a newly insecure player cannot trigger other players to inoculate. Furthermore, only the players inside *p**i*’s attack component are affected by this change. The total cost of this attack component increases by at least $$\begin{aligned} x &=& \underbrace{\frac{k\_i}{n}L - C}\_{p\_i} + \underbrace{\sum\_{p\_j \in \Gamma\_{\overline{sec}}(p\_i)}\left(\frac{k\_i}{n}L -\frac{k\_j}{n}L\right)}\_{\textit{$p\_i$'s insecure neighbors}}\\ &=& \frac{k\_i}{n}L - C + \frac{L}{n} (|\Gamma\_{\overline{sec}}(p\_i)|k\_i - \sum\_{p\_j \in \Gamma\_{\overline{sec}}(p\_i)}{k\_j}).\end{aligned}$$ Applying Lemma [lemma:ac-size] guarantees that $$\sum\_{p\_j \in \Gamma\_{\overline{sec}}(p\_i)} k\_j \leq \frac{k\_i(1+F|\Gamma\_{\overline{sec}}(p\_i)|)-C n/L}{F}.$$ This results in $$\begin{aligned} x &\geq& \frac{L}{n} \left(|\Gamma\_{\overline{sec}}(p\_i)|k\_i - \frac{ k\_i(1+F|\Gamma\_{\overline{sec}}(p\_i)|)-C n/L}{F}\right)\\ &=& \frac{k\_i L}{n}(1-\frac{1}{F}) - C (1-\frac{1}{F})>0,\end{aligned}$$ since a player only gives up its protection if $C>\frac{k\_i L}{n}$. If more players are unhappy with their situation and become vulnerable, the cost for the NE increases further. In conclusion, there exists a NE for every FNE with *F* ≥ 0 for the same instance which is at least as expensive. The upper bound for the WoF, i.e., $\textit{PoA($I$)} \geq \Upsilon(F,I)$, follows directly from the definitions: while the PoA is the ratio of the NE’s social cost divided by the social optimum, *Υ*(*F*, *I*) is the ratio between the cost of the NE and the FNE. As the FNE’s cost must be at least as large as the social optimum cost the claim follows. Note that Aspnes et al.  proved that the Price of Anarchy never exceeds the size of the network, i.e., $n \geq \textit{PoA($I$)}$. Consequently, the Windfall of Friendship cannot be larger than *n* due to Theorem 4.2. The above result leads to the question of whether the Windfall of Friendship grows monotonically with stronger social ties, i.e., with larger friendship factors *F*. Intriguingly, this is not the case. [thm:monotone] For all networks with more than three players, there exist game instances where *Υ*(*F*, *I*) does not grow monotonically in *F*. We give a counter example for the star graph *S**n* which has one center player and *n* − 1 leaf players. Consider two friendship factors, *F**l* and *F**s* where *F**l* > *F**s*. We show that for the large friendship factor *F**l*, there exists a FNE, *F**N**E*1, where only the center player and one leaf player remain insecure. For the same setting but with a small friendship factor *F**s*, at least two leaf players will remain insecure, which will trigger the center player to inoculate, yielding a FNE, *F**N**E*2, where only the center player is secure. Consider *F**N**E*1 first. Let *c* be the insecure center player, let *l*1 be the insecure leaf player, and let *l*2 be a secure leaf player. In order for *F**N**E*1 to constitute a Nash equilibrium, the following conditions must hold: $$\textit{player } c: ~~ \frac{2L}{n}+\frac{2F\_lL}{n} < C + \frac{F\_lL}{n}$$ $$\textit{player } l\_{1}: ~~ \frac{2L}{n}+\frac{2F\_lL}{n} < C + \frac{F\_lL}{n}$$ $$\textit{player } l\_{2}: ~~ C+\frac{2F\_lL}{n} < \frac{3L}{n} + \frac{3F\_lL}{n}$$ For *F**N**E*2, let *c* be the insecure center player, let *l*1 be one of the two insecure leaf players, and let *l*2 be a secure leaf player. In order for the leaf players to be happy with their situation but for the center player to prefer to inoculate, it must hold that: $$\textit{player } c: ~~ C+\frac{2F\_sL}{n} < \frac{3L}{n} + \frac{6F\_sL}{n}$$ $$\textit{player } l\_{1}: ~~ \frac{3L}{n}+\frac{3F\_sL}{n} < C + \frac{2F\_sL}{n}$$ $$\textit{player } l\_{2}: ~~ C+\frac{3F\_sL}{n} < \frac{4L}{n} + \frac{4F\_sL}{n}$$ Now choose *C* :  = 5*L*/(2*n*) + *F**l**L*/*n* (note that due to our assumption that *n* > 3, *C* < *L*). This yields the following conditions: *F**l* > *F**s* + 1/2, *F**l* < *F**s* + 3/2, and *F**l* < 4*F**s* + 1/2. These conditions are easily fulfilled, e.g., with *F**l* = 3/4 and *F**s* = 1/8. Observe that the social cost of the first FNE (for *F**l*) is *C**o**s**t*(*S**n*, *a⃗**F**N**E*1) = (*n* − 2)*C* + 4*L*/*n*, whereas for the second FNE (for *F**s*) *C**o**s**t*(*S**n*, *a⃗**F**N**E*2) = *C* + (*n* − 1)*L*/*n*. Thus, *C**o**s**t*(*S**n*, *a⃗**F**N**E*1) − *C**o**s**t*(*S**n*, *a⃗**F**N**E*2) = (*n* − 3)*C* − (*n* − 5)*L*/*n* > 0 as we have chosen *C* > 5*L*/(2*n*) and as, due to our assumption, *n* > 3. This concludes the proof. Reasoning about best and worst Nash equilibria raises the question of how difficult it is to compute such equlibria. We can generalize the proof given in  and show that computing the most economical and the most expensive FNE is hard for any friendship factor. [NP-completeness] Computing the best and the worst pure FNE is NP-complete for any *F* ∈ [0, 1]. We prove this theorem by a reduction from two NP-hard problems, Vertex Cover  and Independent Dominating Set . Concretely, for the decision version of the problem, we show that answering the question whether there exists a FNE costing less than *k*, or more than *k* respectively, is at least as hard as solving vertex cover or independent dominating set. Note that verifying whether a proposed solution is correct can be done in polynomial time, hence the problems are indeed in NP. Fix some graph *G* = (*V*, *E*) and set *C* = 1 and *L* = *n*/1.5. We show that the following two conditions are necessary and sufficient for a FNE: (a) all neighbors of an insecure player are secure, and (b) every inoculated player has at least one insecure neighbor. Due to our assumption that *C* > *L*/*n*, condition (b) is satisfied in all FNE. To see that condition (a) holds as well, assume the contrary, i.e., an attack component of size at least two. An insecure player *p**i* in this attack component bears the cost $\frac{k\_i}{n} L + F(|\Gamma\_{sec}(p\_i)| C + |\Gamma\_{\overline{sec}}(p\_i)| \frac{k\_i}{n} L)$. Changing its strategy reduces its cost by at least $\Delta\_{i} = \frac{k\_i}{n} L + F |\Gamma\_{\overline{sec}}(p\_i)| \frac{k\_i}{n} L - C - F |\Gamma\_{\overline{sec}}(p\_i)| \frac{k\_i-1}{n} L = \frac{k\_i}{n} L + F |\Gamma\_{\overline{sec}}(p\_i)| \frac{1}{n} L - C$. By our assumption that *k**i* ≥ 2, and hence $|\Gamma\_{\overline{sec}}(p\_i)| \geq 1$, it holds that Δ*i* > 0, resulting in *p**i* becoming secure. Hence, condition (a) holds in any FNE as well. For the opposite direction assume that an insecure player wants to change its strategy even though (a) and (b) are true. This is impossible because in this case (b) would be violated because this player does not have any insecure neighbors. An inoculated player would destroy (a) by adopting another strategy. Thus (a) and (b) are sufficient for a FNE. We now argue that *G* has a vertex cover of size *k* if and only if the virus game has a FNE with *k* or fewer secure players, or equivalently an equilibrium with social cost at most *C**k* + (*n* − *k*)*L*/*n*, as each insecure player must be in a component of size 1 and contributes exactly *L*/*n* expected cost. Given a minimal vertex cover *V*ʹ ⊆ *V*, observe that installing the software on all players in *V*ʹ satisfies condition (a) because *V*ʹ is a vertex cover and (b) because *V*ʹ is minimal. Conversely, if *V*ʹ is the set of secure players in a FNE, then *V*ʹ is a vertex cover by condition (a) which is minimal by condition (b). For the worst FNE, we consider an instance of the independent dominating set problem. Given an independent dominating set *V*ʹ, installing the software on all players except the players in *V*ʹ satisfies condition (a) because *V*ʹ is independent and (b) because *V*ʹ is a dominating set. Conversely, the insecure players in any FNE are independent by condition (a) and dominating by condition (b). This shows that *G* has an independent dominating set of size at most *k* if and only if it has a FNE with at least *n* − *k* secure players. Windfall for Special Graphs =========================== While the last section has presented general results on equilibria in social networks and the Windfall of Friendship, we now present upper and lower bounds on the Windfall of Friendship for concrete topologies, namely the *complete graph* *K**n* and the *star graph* *S**n*. Complete Graphs --------------- In order to initiate the study of the Windfall of Friendship, we consider a very simple topology, the complete graph *K**n* where all players are connected to each other. First consider the classic setting where players do not care about their neighbors (*F* = 0). We have the following result: [cliqueNE] In the graph *K**n*, there are two Nash equilibria with social cost $$\begin{aligned} \textit{~~NE$\_1$: } Cost(K\_n,\vec{a}\_{\textit{NE1}}) &=& C(n-\lceil Cn/L \rceil + 1) + L/n(\lceil Cn/L \rceil - 1 )^2,\\ \textrm{and~~~~~~~~~~~~~~~~~~~~~~~~~~}\\ \textit{~~NE$\_2$: } Cost(K\_n,\vec{a}\_{\textit{NE2}}) &=& C(n-\lfloor Cn/L \rfloor) + L/n(\lfloor Cn/L \rfloor)^2.\end{aligned}$$ If ⌈*C**n*/*L*⌉ − 1 = ⌊*C**n*/*L*⌋, there is only one Nash equilibrium. Let *a⃗* be a NE. Consider an inoculated player *p**i* and an insecure player *p**j*, and hence *c**a*(*i*, *a⃗*) = *C* and $c\_a(j,\vec{a}) = L \frac{k\_j}{n}$, where *k**j* is the total number of insecure players in *K**n*. In order for *p**i* to remain inoculated, it must hold that *C* ≤ (*k**j* + 1)*L*/*n*, so *k**j* ≥ ⌈*C**n*/*L* − 1⌉; for *p**j* to remain insecure, it holds that *k**j**L*/*n* ≤ *C*, so *k**j* ≤ ⌊*C**n*/*L*⌋. As the total social cost in *K**n* is given by *C**o**s**t*(*K**n*, *a⃗*) = (*n* − *k**j*)*C* + *k**j*2*L*/*n*, the claim follows. Observe that the equilibrium size of the attack component is roughly twice the size of the attack component of the social optimum, as *C**o**s**t*(*K**n*, *a⃗*) = (*n* − *k**j*)*C* + *k**j*2*L*/*n* is minimized for *k**j* = *C**n*/2*L*. [cliqueopt] In the social optimum for *K**n*, the size of the attack component is either $\lfloor \frac{1}{2} Cn/L \rfloor$ or $\lceil \frac{1}{2} Cn/L \rceil$, yielding a total social cost of $$Cost(K\_n,\vec{a}\_{\textit{OPT}}) = (n-\lfloor \frac{1}{2} Cn/L \rfloor)C + (\lfloor \frac{1}{2} Cn/L \rfloor)^2 \frac{L}{n}$$ or $$Cost(K\_n,\vec{a}\_{\textit{OPT}}) = (n-\lceil \frac{1}{2} Cn/L \rceil)C + (\lceil \frac{1}{2} Cn/L \rceil)^2 \frac{L}{n}.$$ In order to compute the Windfall of Friendship, the friendship Nash equilibria in social networks have to be identified. [cliqueFNE] In *K**n*, there are two friendship Nash equilibria with social cost $$\begin{aligned} \textit{FNE$\_1$: } Cost(K\_n,\vec{a}\_{\textit{FNE1}}) &=& C \left(n-\left\lceil\frac{Cn/L - 1}{1 + F}\right\rceil\right) + L/n\left(\left\lceil\frac{Cn/L - 1}{1 + F}\right\rceil\right)^2, \\ \textrm{and~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~}\\ \textit{FNE$\_2$: } Cost(K\_n,\vec{a}\_{\textit{FNE2}}) &=& C\left(n-\left\lfloor\frac{Cn/L + F}{1 + F}\right\rfloor\right) + L/n\left(\left\lfloor\frac{Cn/L + F}{1 + F}\right\rfloor\right)^2.\end{aligned}$$ If ⌈(*C**n*/*L* − 1)/(1 + *F*)⌉ = ⌊(*C**n*/*L* + *F*)/(1 + *F*)⌋, there is only one FNE. According to Lemma [lemma:ac-size], in a FNE, a player *p**i* remains secure if otherwise the component had size at least *k**i* = *k**j* + 1 ≥ (*C**n*/*L* + *F**k**j*2)/(1 + *F**k**j*) where *k**j* is the number of insecure players. This implies that *k**j* ≥ ⌈(*C**n*/*L* − 1)/(1 + *F*)⌉. Dually, for an insecure player *p**j* it holds that *k**j* ≤ (*C**n*/*L* + *F*(*k**j* − 1)2)/(1 + *F*(*k**j* − 1)) and therefore *k**j* ≤ ⌊(*C**n*/*L* + *F*)/(1 + *F*)⌋. Given these bounds on the total number of insecure players in a FNE, the social cost can be obtained by substituting *k**j* in *C**o**s**t*(*K**n*, *a⃗*) = (*n* − *k**j*)*C* + *k**j*2*L*/*n*. As the difference between the upper and the lower bound for *k**j* is at most 1, there are at most two equilibria and the claim follows. Given the characteristics of the different equilibria, we have the following theorem. [4/3] In *K**n*, the Windfall of Friendship is at most *Υ*(*F*, *I*) = 4/3 for an arbitrary network size. This is tight in the sense that there are indeed instances where the worst FNE is a factor 4/3 better than the worst NE. *Upper Bound.* We first derive the upper bound on *Υ*(*F*, *I*). $$\begin{aligned} \Upsilon(F,I) &=& \frac{Cost(K\_n,\vec{a}\_{\textit{NE}})}{Cost(K\_n,\vec{a}\_{\textit{FNE}})}\\ &\leq& \frac{Cost(K\_n,\vec{a}\_{\textit{NE}})}{Cost(K\_n,\vec{a}\_{\textit{OPT}})}\\ &\leq& \frac{(n - \lceil Cn/L - 1 \rceil) C + (\lfloor Cn/L \rfloor)^2 \frac{L}{n}}{(n - \frac{1}{2} Cn/L) C + (\frac{1}{2} Cn/L)^2 \frac{L}{n}}\end{aligned}$$ as the optimal social cost (cf Lemma [cliqueopt]) is smaller or equal to the social cost of any FNE. Simplifying this expression yields $$\begin{aligned} \Upsilon(F,I) &\leq& \frac{n(1-C/L)C + C^2 n/L}{n(1 - \frac{1}{2}C/L)C + \frac{1}{4} C^2 n/L}=\frac{1}{1-\frac{1}{4}C/L}.\end{aligned}$$ This term is maximized for *L* = *C*, implying that *Υ*(*F*, *I*) ≤ 4/3, for arbitrary *n*. *Lower Bound.* We now show that the ratio between the equilibria cost reaches 4/3. There exists exactly one social optimum of cost *L**n*/2 + (*n*/2)2*L*/*n* = 3*n**L*/4 for even *n* and *C* = *L* by Lemma [cliqueopt]. For *F* = 1 this is also the only friendship Nash equilibrium due to Lemma [cliqueFNE]. In the selfish game however the Nash equilibrium has fewer inoculated players and is of cost *n**L* (see Lemma [cliqueNE]). Since these are the only Nash equilibria they constitute the worst equilibria and the ratio is $$\Upsilon(F,I) = \frac{Cost(K\_n,\vec{a}\_{\textit{NE}})}{Cost(K\_n,\vec{a}\_{\textit{FNE}})}=\frac{nL}{3/4nL}=4/3.$$ To conclude our analysis of *K**n*, observe that friendship Nash equilibria always exist in complete graphs, and that in environments where one player at a time is given the chance to change its strategy in a best response manner quickly results in such an equilibrium as all players have the same preferences. Star ---- While the analysis of *K**n* was simple, it turns out that already slightly more sophisticated graphs are challenging. In the following, we investigate the Windfall of Friendship in star graphs *S**n*. Note that in *S**n*, the social welfare is maximized if the center player inoculates and all other players do not. The total inoculation cost then is *C* and the attack components are all of size 1, yielding a total social cost of *C**o**s**t*(*S**n*, *a⃗**OPT*) = *C* + (*n* − 1)*L*/*n*. [starOPT] In the social optimum of the star graph *S**n*, only the center player is inoculated. The social cost is *C**o**s**t*(*S**n*, *a⃗**OPT*) = *C* + (*n* − 1)*L*/*n*. The situation where only the center player is inoculated also constitutes a NE. However, there are more Nash equilibria. [starNE] In the star graph *S**n*, there are at most three Nash equilibria with social cost $$\begin{aligned} \textit{NE$\_1$: }Cost(S\_n,\vec{a}\_{\textit{NE1}}) &=& C+(n-1)L/n,\\ \textit{NE$\_2$: }Cost(S\_n,\vec{a}\_{\textit{NE2}}) &=& C(n-\lceil Cn/L \rceil +1 ) + L/n(\lceil Cn/L \rceil -1 )^2,\\ \textit{and~~~~~~~~~~~~~~~~~~~~~~~~~~~}\\ \textit{NE$\_3$: }Cost(S\_n,\vec{a}\_{\textit{NE3}}) &=& C(n-\lfloor Cn/L \rfloor) + L/n(\lfloor Cn/L \rfloor)^2.\end{aligned}$$ If $Cn/L\notin \mathbb{N}$, only two equilibria exist. If the center player is the only secure player, changing its strategy costs *L* but saves only *C*. When a leaf player becomes secure, its cost changes from *L*/*n* to *C*. These changes are unprofitable, and the social cost of this NE is *C**o**s**t*(*S**n*, *a⃗**NE1*) = *C* + (*n* − 1)*L*/*n*. For the other Nash equilibria the center player is not inoculated. Let the number of insecure leaf players be *n*0. In order for a secure player to remain secure, it must hold that *C* ≤ (*n*0 + 2)*L*/*n*, and hence *n*0 ≥ ⌈*C**n*/*L* − 2⌉. For an insecure player to remain insecure, it must hold that (1 + *n*0)*L*/*n* ≤ *C*, thus *n*0 ≤ ⌊*C**n*/*L* − 1⌋. Therefore, we can conclude that there are at most two Nash equilibria, one with ⌈*C**n*/*L* − 1⌉ and one with ⌊*C**n*/*L*⌋ many insecure players. The total social cost follows by substituting *n*0 in the total social cost function. Finally, observe that if *C**n*/*L* ∈ N and *C**n*/*L* > 3, all three equilibria exist in parallel; if $Cn/L \notin \mathbb{N}$, NE2 and NE3 become one equilibrium. Let us consider the social network scenario again. [starFNE] In *S**n*, there are at most three friendship Nash equilibria with social cost $$\begin{aligned} \textit{FNE$\_1$: }Cost(S\_n,\vec{a}\_{\textit{FNE1}}) &=& C+(n-1)L/n,\\ \textit{FNE$\_2$: }Cost(S\_n,\vec{a}\_{\textit{FNE2}}) &=& C(n-\lceil Cn/L -F \rceil +1 ) + L/n(\lceil Cn/L -F \rceil -1 )^2,\\ \textit{and~~~~~~~~~~~~~~~~~~~~~~~~~~~}\\ \textit{FNE$\_3$: }Cost(S\_n,\vec{a}\_{\textit{FNE3}}) &=& C(n-\lfloor Cn/L -F\rfloor) + L/n(\lfloor Cn/L-F \rfloor)^2.\end{aligned}$$ If $Cn/L-F\notin \mathbb{N}$, at most 2 friendship Nash equilibria exist. First, observe that having only an inoculated center player constitutes a FNE. In order for the center player to remain inoculated, it must hold that $C + F(n-1)L\frac{1}{n} \leq nL/n + F(n-1)L\frac{n}{n} = L + F(n-1)L$. All leaf players remain insecure as long as *L*/*n* + *F**C* ≤ *C* + *F**C*   ⇔ *L*/*n* ≤ *C*. These conditions are always true, and we have *C**o**s**t*(*S**n*, *a⃗**FNE1*) = *C* + (*n* − 1)*L*/*n*.If the center player is not inoculated, we have *n*0 insecure and *n* − *n*0 − 1 inoculated leaf players. In order for a secure leaf player to remain secure, it is necessary that $C + F \frac{n\_0+1}{n}L \leq \frac{n\_0+2}{n}L + F \frac{n\_0+2}{n}L$, so *n*0 ≥ ⌈*C**n*/*L* − *F*⌉ − 2. For an insecure leaf player, it must hold that $\frac{n\_0+1}{n}L + F \frac{n\_0+1}{n}L \leq C + F \frac{n\_0}{n}L$, so *n*0 ≤ ⌊*C**n*/*L* − *F*⌋ − 1. The claim follows by substitution. Note that there are instances where FNE1 is the only friendship Nash equilibrium. We already made use of this phenomenon in Section [sec:general] to show that *Υ*(*F*, *I*) is not monotonically increasing in *F*. The next lemma states under which circumstances this is the case. [converge to best NE][lemma:uniqueFNE] In *S**n*, there is a unique FNE equivalent to the social optimum if and only if $$\lfloor Cn/L-F \rfloor - \lfloor \frac{1}{2F} (\sqrt{1-4F(1-Cn/L)} -1 ) \rfloor -2 \geq 0$$ *S**n* has only one FNE if and only if every (insecure) leaf player is content with its chosen strategy but the insecure center player would rather inoculate. In order for an insecure leaf player to remain insecure we have *n*0 ≤ ⌊*C**n*/*L* − 1 − *F*⌋ and the insecure center player wants to inoculate if and only if $$\begin{aligned} C + F(n-n\_0-1)C + Fn\_0 \frac{1}{n}L < (n\_0+1)\frac{L}{n} + F(n-n\_0-1)C + Fn\_0 \frac{n\_0+1}{n}L,\end{aligned}$$ which is equivalent to *F**n*02 + *n*0 + 1 − *C**n*/*L* > 0. This implies that $n\_0 \geq \lfloor \frac{1}{2F} (\sqrt{1-4F(1-Cn/L)} -1 ) +1 \rfloor.$ Therefore there is only one FNE if and only if there exists an integer *n*0 such that $\lfloor \frac{1}{2F} (\sqrt{1-4F(1-Cn/L)} -1 ) +1 \rfloor \leq n\_0 \leq \lfloor Cn/L - 1 - F \rfloor$. Given the characterization of the various equilibria, the Windfall of Friendship can be computed. [thm:starFF] If $\lfloor \frac{1}{2F} (\sqrt{1-4F(1-Cn/L)} -1 ) \rfloor +2-\lfloor Cn/L-F \rfloor \leq 0$, the Windfall of Friendship is $$\Upsilon(F,I)\geq \frac{(n-2)C+L/n}{C+(n-1)L/n}, \textit{ ~~else~~ }\Upsilon(F,I)\leq \frac{n+1}{n-3}.$$ According to Lemma [lemma:uniqueFNE], the friendship Nash equilibrium is unique and hence equivalent to the social optimum if $$\lfloor Cn/L-F \rfloor - \lfloor \frac{1}{2F} (\sqrt{1-4F(1-Cn/L)} -1 ) \rfloor -2 \geq 0.$$ On the other hand, observe that there always exist sub-optimal Nash equilibria where the center player is not inoculated. Hence, we have $$\begin{aligned} \Upsilon(F,I) &=& \frac{Cost(S\_n,\vec{a}\_{\textit{NE}})}{Cost(S\_n,\vec{a}\_{\textit{FNE}})}= \frac{Cost(S\_n,\vec{a}\_{\textit{NE}})}{Cost(S\_n,\vec{a}\_{\textit{OPT}})}\\ &\geq& \frac{(n-\lfloor Cn/L-1 \rfloor)C + (\lceil Cn/L \rceil - 1)^2 L/n}{C+(n-1)L/n}\\ &\geq& \frac{C(n-2)+L/n}{C+(n-1)L/n}.\end{aligned}$$ Otherwise, i.e., if there exist friendship Nash equilibria with an insecure center player, an upper bound for the WoF can be computed $$\begin{aligned} \Upsilon(F,I) &=& \frac{Cost(S\_n,\vec{a}\_{\textit{NE}})}{Cost(S\_n,\vec{a}\_{\textit{FNE}})}\\ &\leq& \frac{(n-\lceil Cn/L-1 \rceil)C + (\lfloor Cn/L \rfloor)^2 L/n}{(n-\lfloor Cn/L - F \rfloor)C + (\lceil Cn/L - 1 - F \rceil)^2 L/n}\\ &\leq& \frac{(n+1)C}{nC+FC-2C(1+F)+(1+F)^2 L/n}\\ &<& \frac{(n+1)C}{C(n+F-2(1+F))}~~~<~~~ \frac{n+1}{n-3}.\end{aligned}$$ Theorem [thm:starFF] reveals that caring about the cost incurred by friends is particularly helpful to reach more desirable equilibria. In large star networks, the social welfare can be much higher than in Nash equilibria: in particular, the Windfall of Friendship can increase linearly in *n*, and hence indeed be asymptotically as large as the Price of Anarchy. However, if $\lfloor Cn/L-F \rfloor - \lfloor \frac{1}{2F} (\sqrt{1-4F(1-Cn/L)} -1 ) \rfloor-2 \geq 0$ does not hold, social networks are not much better than purely selfish systems: the maximal gain is constant. Finally observe that in stars friendship Nash equilibria always exist and can be computed efficiently (in linear time) by any best response strategy. Discussion ---------- This section has focused on a small set of very simple topologies only and we regard the derived results as a first step towards more complex graph classes such as Kleinberg graphs featuring the small-world property. Interestingly, however, our findings have implications for general topologies. We could show that even in simple graphs such as the star graph, the Windfall of Friendship can assume all possible values, from constant ratios up to ratios linear in *n*. This is asymptotically maximal for general graphs as well since the Price of Anarchy is bounded by *n* . On Relative Equilibria ====================== In the model we have studied so far, the actual cost of each friend—weighted by a factor *F*—is added to a player’s perceived cost. This describes a situation where friends are taken into account individually and independently of each other. However, one could imagine scenarios where the relative importance of a friend decreases with the total number of friends, that is, a player with many friends may care less about the welfare of a specific friend compared to a player who only has one or two friends. This motivates an alternative approach to describe perceived costs: [Relative Perceived Cost][relperceived cost]$\\$ The *relative perceived individual cost* of a player *p**i* is defined as $$c\_r(i,\vec{a}) = c\_a(i,\vec{a}) + F \cdot \frac{\sum\_{p\_j \in \Gamma(p\_i)} c\_a(j,\vec{a})}{|\Gamma(p\_i)|}.$$ In the following, we write *c**r*0(*i*, *a⃗*) to denote the relative perceived cost of an insecure player *p**i* and *c**r*1(*i*, *a⃗*) for the relative perceived cost of an inoculated player. We will refer to an FNE equilibrium with respect to relative perceived costs by *rFNE*. It turns out that while relative equilibria have similar properties as regular friendship equilibria and most of our techniques are still applicable, there are some crucial differences. Let us first consider the size of the attack components under rFNE. [lemma:ac-size-rel] The player *p**i* will inoculate if and only if the size of its attack component is $$k\_i > \frac{|\Gamma(p\_i)|\cdot Cn/L + F \cdot \sum\_{p\_j \in \Gamma\_{\overline{sec}}(p\_i)}{k\_j}}{|\Gamma(p\_i)| + F |\Gamma\_{\overline{sec}}(p\_i)|},$$ where the *k**j*s are the attack component sizes of *p**i*’s insecure neighbors assuming *p**i* is secure. Player *p**i* will inoculate if and only if this choice lowers the relative perceived individual cost. By Definition [relperceived cost], the relative perceived individual costs of an inoculated player are $$c\_r^1(i,\vec{a}) = C + F/|\Gamma(p\_i)| \cdot \left( |\Gamma\_{sec}(p\_i)|{C} + \sum\_{p\_j \in \Gamma\_{\overline{sec}}(p\_i)}{L \frac{k\_j}{n}} \right)$$ and for an insecure player we have $$c\_p^0(i,\vec{a}) = L \frac{k\_i}{n} + F/|\Gamma(p\_i)|\cdot \left( |\Gamma\_{sec}(p\_i)|{C} + |\Gamma\_{\overline{sec}}(p\_i)|{L \frac{k\_i}{n}} \right).$$ Thus, for *p**i* to prefer to inoculate it must hold that $$\begin{aligned} c\_p^0(i,\vec{a}) &>& c\_p^1(i,\vec{a}) ~~\Leftrightarrow \\ k\_i&>& \frac{Cn/L + F/|\Gamma(p\_i)| \cdot \sum\_{p\_j \in \Gamma\_{\overline{sec}}(p\_i)}{k\_j}}{1 + F/|\Gamma(p\_i)|\cdot |\Gamma\_{\overline{sec}}(p\_i)|}.\end{aligned}$$ Not surprisingly, we can show that friendship is always beneficial also with respect to relative perceived costs. [FFneversmaller1relative] For all instances of the virus inoculation game and 0 ≤ *F* ≤ 1, it holds that 1 ≤ *Υ*(*F*, *I*) ≤ *PoA*(*I*) also in the relative cost model. Again, the upper bound for the WoF, i.e., $\textit{PoA($I$)} \geq \Upsilon(F,I)$, follows directly from the definitions (see also proof of Lemma [FFneversmaller1]). For *Υ*(*F*, *I*) ≥ 1 we start from a rFNE *a⃗* (defined with relative costs) with *F* > 0 and show that a best response execution yields a Nash equilibrium *a⃗*ʹ with cost *C**a*(*a⃗*) ≤ *C**a*(*a⃗*ʹ). If *a⃗* is also a NE in the same instance with *F* = 0 then we are done. Otherwise there is at least one player *p**i* that prefers to change its strategy. If *p**i* is insecure but favors inoculation, *p**i*’s attack component has on the one hand to be of size at least *k**i*ʹ > *C**n*/*L*  (otherwise there is not reason for *p**i* to become secure) and on the other hand of size at most $k\_i'' = (|\Gamma(p\_i)|\cdot Cn/L + F \cdot \sum\_{p\_j \in \Gamma\_{\overline{sec}}(p\_i)}{k\_j}) / (|\Gamma(p\_i)|+F\cdot |\Gamma\_{\overline{sec}}(p\_i)|) \leq (|\Gamma(p\_i)|\cdot Cn/L + F\cdot |\Gamma\_{\overline{sec}}(p\_i)| (k\_i'' - 1) ) / (|\Gamma(p\_i)|+F\cdot|\Gamma\_{\overline{sec}}(p\_i)|)$ so $k\_i'' \leq |\Gamma(p\_i)|\cdot Cn/L - F|\Gamma\_{\overline{sec}}(p\_i)|$ (cf Lemma [lemma:ac-size-rel]), yielding a contradiction. What if *p**i* is secure in the rFNE but prefers to be insecure in the NE? Since every player has the same preference on the attack component’s size when *F* = 0, a newly insecure player cannot trigger other players to inoculate. Furthermore, only the players inside *p**i*’s attack component are affected by this change. The total cost of this attack component increases by at least (see also the proof of Lemma [FFneversmaller1]) $$\begin{aligned} x = \frac{k\_i}{n}L - C + \frac{L}{n} (|\Gamma\_{\overline{sec}}(p\_i)|k\_i - \sum\_{p\_j \in \Gamma\_{\overline{sec}}(p\_i)}{k\_j}).\end{aligned}$$ Applying Lemma [lemma:ac-size-rel] guarantees that $$\sum\_{p\_j \in \Gamma\_{\overline{sec}}(p\_i)} k\_j \leq \frac{k\_i(1+F/|\Gamma(p\_i)|\cdot|\Gamma\_{\overline{sec}}(p\_i)|)-C n/L}{F/|\Gamma(p\_i)|}.$$ This results in $$\begin{aligned} x &\geq& \frac{L}{n} \left(|\Gamma\_{\overline{sec}}(p\_i)|k\_i - \frac{ k\_i(1+F/|\Gamma(p\_i)|\cdot|\Gamma\_{\overline{sec}}(p\_i)|)-C n/L}{F/|\Gamma(p\_i)|\cdot}\right)\\ &=& \frac{k\_i L}{n}\left(1-\frac{1}{F/|\Gamma(p\_i)|\cdot}\right) - C \left(1-\frac{1}{F/|\Gamma(p\_i)|\cdot}\right)>0,\end{aligned}$$ since a player only gives up its protection if $C>\frac{k\_i L}{n}$. If more players are unhappy with their situation and become vulnerable, the cost for the NE increases further. In conclusion, there exists a NE for every FNE with *F* ≥ 0 for the same instance which is at least as expensive. Interestingly, however, the phenomenon of a non-monotonic welfare increase with larger *F* does no longer hold in the star graph *S**n*. To see this, note that there are only at most two distinct rFNE in *S**n* (apart from the trivial situations where all players are either insecure or secure): the “good equilibrium” where the center player is secure and all the leave players insecure, and the “bad equilibrium” where the center is insecure and a fraction of the leaves secure. The following theorem shows that the example of Theorem [thm:monotone] for FNE is no longer true for rFNE. [thm:rel-monotone] The Windfall of Friendship is monotonic in *F* for *S**n* under the relative cost model. Consider a friendship factor *F*. Clearly, the equilibrium where only the center player is secure always exists (w.l.o.g., we focus on “reasonable values” *C* and *L*). When is there an equilibrium where the center is insecure? Consider such an equilibrium where *x* leave players are insecure. In order for this to constitute an equilibrium, it must hold for the center player that: $$\frac{(x+1)L}{n}+\frac{F}{n-1}\cdot \frac{(x+1)L}{n}+\frac{F\cdot C\cdot (n-x-1)}{n-1}< C + \frac{F}{n-1}\cdot \frac{x\cdot L}{n}+\frac{F\cdot C\cdot (n-x-1)}{n-1}$$ $$\Leftrightarrow \frac{(x+1)L}{n}+\frac{F}{n-1}\cdot \frac{L}{n}<C$$ On the other hand, for an insecure leaf player we have: $$\frac{(x+1)L}{n}+\frac{FL(x+1)}{n}< C + \frac{FLx}{n}$$ $$\Leftrightarrow \frac{(x+1)L}{n}+\frac{FL}{n}<C$$ Unlike in the FNE scenario, the center player is less likely to inoculate, i.e., leaf players inoculate first. Thus, a larger *F* can only render the existence of such an equilibrium more unlikely. Finally, note that the hardness result of Theorem [NP-completeness] is also applicable to relative FNEs. [rNP-completeness] Computing the best and the worst pure rFNE is NP-complete for any *F* ∈ [0, 1]. (*Sketch*) Again, deciding the existence of a rFNE with cost less than *k* or more than *k* is at least as hard as solving the *vertex cover* or *independent dominating set* problem, respectively. Note that verifying whether a proposed solution is correct can be done in polynomial time, hence the problems are indeed in NP. The proof is similar to Theorem [NP-completeness], and we only point out the difference for condition (a): an insecure player *p**i* in the attack component bears the cost $k\_i/n\cdot L + F|\Gamma\_{sec}(p\_i)| C + |\Gamma\_{\overline{sec}}(p\_i)| \cdot (k\_iL/n)/|\Gamma(p\_i)|$, and changing its strategy reduces the cost by at least $\Delta\_{i} = k\_iL/n+ F |\Gamma\_{\overline{sec}}(p\_i)| k\_i L/ (|\Gamma(p\_i)| n) - C - F |\Gamma\_{\overline{sec}}(p\_i)| (k\_i-1) L/(|\Gamma(p\_i)| n) = k\_iL/n - C + F L |\Gamma\_{\overline{sec}}(p\_i)|/(|\Gamma(p\_i)| n)$. By our assumption that *k**i* ≥ 2, and hence $|\Gamma\_{\overline{sec}}(p\_i)| \geq 1$, it holds that Δ*i* > 0, resulting in *p**i* becoming secure. Convergence =========== According to Lemma [FFneversmaller1] and Lemma [FFneversmaller1relative], the social context can only improve the overall welfare of the players, both in the absolute and the relative friendship model. However, there are implications beyond the players’ welfare in the equilibria: in social networks, the dynamics of how the equilibria are reached is different. In , Aspnes et al. have shown that best-response behavior quickly leads to some pure Nash equilibrium, from any initial situation. Their potential function argument however relies on a “symmetry” of the players in the sense insecure players in the same attack component have the same cost. This no longer holds in the social context where different players take into account their neighborhood: a player with four insecure neighbors is more likely to inoculate than a player with just one, secure neighbor. Thus, the distinction between “big” and “small” components used in  cannot be applied, as different players require a different threshold. Nevertheless, convergence can be shown in certain scenarios. For example, the hardness proofs of Lemmas [NP-completeness] and [rNP-completeness] imply that equilibria always exist in the corresponding areas of the parameter space, and it is easy to see that the equilibria are also reached by best-response sequences. Similarly, in the star and complete networks, best-response sequences converge in linear time. Linear convergence time also happens in more complex, cyclic graphs. For example, consider the cycle graph *C**n* where each player is connected to one left and one right neighbor in a circular fashion. To prove best response convergence from arbitrary initial states, we distinguish between an initial phase where certain structural invariants are established, and a second phase where a potential function argument can be applied with respect to the view of only one type of players. Each event when one player is given the chance to perform a best response is called a *round*. [thm:cycconv] From any initial state and in the cycle graph *C**n*, a best response round-robin sequence results in an equilibrium after *O*(*n*) changes, both in case of absolute and relative friendship equilibria. After two round-robin phases where each player is given the chance to make a best response twice (at most 2*n* changes or rounds), it holds that an insecure player *p*1 which is adjacent to a secure player *p*2 cannot become secure: since *p*1 preferred to be insecure at some time *t*, the only reason to become secure again is the event that a player *p*3 becomes insecure in *p*1’s attack component at time *t*ʹ > *t*; however, since *p*1 has a secure neighbor *p*2 and hence *p*3 can only have more insecure neighbors than *p*1, *p*3 cannot prefer a larger attack component than *p*1, which yields a contradiction to the assumption that *p*1 becomes secure while its neighbor *p*2 is still secure. Moreover, by the same arguments, there cannot be three consecutive secure players. Therefore, in the best response rounds after the two initial phases, there are the following cases. Case (A): a secure player having two insecure neighbors becomes insecure; Case (B): a secure player with one secure neighbor becomes insecure; and Case (C): an insecure player with two insecure neighbors becomes secure. In order to prove convergence, the following potential function Φ is used: Φ(*a⃗*) = ∑*A* ∈ S*b**i**g*(*a⃗*)∣*A*∣ − ∑*A* ∈ S*s**m**a**l**l*(*a⃗*)∣*A*∣ where the attack components *A* in S*b**i**g* contain more than *t* = *n**C*/(*F**L*) − *L*/*F* + 1 players and the attack components *A* in S*s**m**a**l**l* contain at most *t* players in case of absolute friendship equilibria; for relative friendship equilibria we use *t* = 2*C**n*/(*F**L*) − 2*L*/*F* + 1. In other words, the threshold *t* to distinguish between small and big components is chosen with respect to players having *two insecure neighbors*; in case of absolute FNEs: $$C+F\cdot\frac{L\cdot(t-1)}{n}=\frac{t\cdot L}{n}+2F\cdot\frac{L\cdot t}{n} \Leftrightarrow \frac{Cn}{FL}-\frac{L}{F}+1=t$$ and in case of relative FNEs: $$C+F/2\cdot\frac{L\cdot(t-1)}{n}=\frac{t\cdot L}{n}+F\cdot\frac{L\cdot t}{n} \Leftrightarrow \frac{2Cn}{FL}-\frac{2L}{F}+1=t$$ Note that it holds that  − *n* ≤ Φ(*a⃗*) ≤ *n*, ∀ *a⃗*. We now show that Case (A) and (C) reduce Φ(*a⃗*) by at least one unit in each best response. Moreover, Case (B) can increase the potential by at most one. However, since we have shown that Case (B) incurs less than *n* times, the claim follows by an amortization argument. *Case (A):* In this case, a new insecure player *p*1 is added to an attack component in S*s**m**a**l**l*. *Case (B):* A new insecure player *p*1 is added to an attack component in S*s**m**a**l**l* or to an attack component in S*b**i**g* (since *p*1 is “on the edge” of the attack component, it prefers a larger attack component). *Case (C):* An insecure player is removed from an attack component in S*b**i**g*. The proof of Theorem [thm:cycconv] can be adapted to show linear convergence in general 2-degree networks where players have degree at most two. In order to gain deeper insights into the convergence behavior, we conducted several experiments. Simulations =========== This section briefly reports on the simulations conducted on Kleinberg graphs (using clustering exponent *α* = 2). Although the existence of equilibria and the best-response convergence time complexity for general graphs remain an open question, during the thousands of experiments, we did not encounter a single instance which did not converge. Moreover, our experiments indicate that the initial configuration (i.e., the set of secure and insecure players) as well as the relationship of *L* to *C* typically has a negligible effect on the convergence time, and hence, unless stated otherwise, the following experiments assume an initially completely insecure network and *C* = 1 and *L* = 4. All experiments are repeated 100 times over different Kleinberg graphs. All our experiments showed a positive Windfall of Friendship that increases monotonically in *F*, both for the relative and the absolute friendship model. Figure [fig:socialcost] shows a typical result. Maybe surprisingly, it turns out that the windfall of friendship is often not due to a higher fraction of secure players, but rather the fact that the secure players are located at strategically more beneficial locations (see also Figure [fig:numsec]). We can conclude that there is a windfall of friendship not only for the worst but also for “average equilibria”. [ht] ![image](SocialCost) [fig:socialcost] [ht] ![image](boxplotSecureNodesL16) [fig:numsec] The box plots in Figure [fig:boxplotcost] give a more detailed picture of the cost for *F* ∈ {0, 1}. The overall cost of pure NE is typically higher than the cost of rFNE which is in turn higher than the cost of FNE. [ht] ![image](boxplotsCost) [fig:boxplotcost] Besides social cost, we are mainly interested in convergence times. We find that while the convergence time typically increases already for a small *F* > 0, the magnitude of *F* plays a minor role. Figure [fig:convf] shows the typical convergence times as a function of *F*. Notice that the convergence time more than doubles when changing from the selfish to the social model but is roughly constant for all values of *F*. [ht] ![image](ConvergenceFBoxPlot) [fig:convf] Conclusion ========== This article presented a framework to study and quantify the effects of game-theoretic behavior in social networks. This framework allows us to formally describe and understand phenomena which are often well-known on an anecdotal level. For instance, we find that the Windfall of Friendship is always positive, and that players embedded in a social context may be subject to longer convergence times. Moreover, interestingly, we find that the Windfall of Friendship does not always increase monotonically with stronger social ties. We believe that our work opens interesting directions for future research. We have focused on a virus inoculation game, and additional insights must be gained by studying alternative and more general games such as potential games, or games that do and do not exhibit a Braess paradox. Also the implications on the games’ dynamics need to be investigated in more detail, and it will be interesting to take into consideration behavioral models beyond equilibria (e.g., ). Finally, it may be interesting to study scenarios where players care not only about their friends but also, to a smaller extent, about friends of friends. What about practical implications? One intuitive takeaway of our work is that in case of large benefits of social behavior, it may make sense to design distributed systems where neighboring players have good relationships. However, if the resulting convergence times are large and the price of the dynamics higher than the possible gains, such connections should be discouraged. Our game-theoretic tools can be used to compute these benefits and convergence times, and may hence be helpful during the design phase of such a system. Acknowledgments =============== We would like to thank Yishay Mansour and Boaz Patt-Shamir from Tel Aviv University and Martina Hüllmann and Burkhard Monien from Paderborn University for interesting discussions on relative friendship equilibria and aspects of convergence. 10 V. Anantharam.. In *Proc. 43rd IEEE Conference on Decision and Control*, 2004. J. Aspnes, K. Chang, and A. Yampolskiy.. In *Proc. 16th ACM-SIAM Annual Symposium on Discrete Algorithms (SODA)*, 2005. M. Babaioff, R. Kleinberg, and C. H. Papadimitriou.. In *Proc. 8th ACM Conference on Electronic Commerce (EC)*, 2007. Also appeared in *Games and Economic Behavior (GEB), Volume 67, Number 1*, 2007. N. T. Bailey.. Hafner Press, 1975. J. M. Bilbao.. Springer, 2000. R. Blundell, W. Newey, and T. Persson (Eds).. 2006. P.-A. Chen, M. David, and D. Kempe.. In *Proc. 11th ACM Conference on Electronic Commerce (EC)*, 2010. P.-A. Chen, and D. Kempe.. In *Proc. 2nd International Symposium on Algorithmic Game Theory (SAGT)*, 2009. A. Fabrikant, A. Luthra, E. Maneva, C. H. Papadimitriou, and S. Shenker.. In *Proc. 22nd Annual Symposium on Principles of Distributed Computing (PODC)*, 2003. J. Feigenbaum, C. H. Papadimitriou, and S. Shenker.., 63(1):21–41, 2001. G. W. Flake, S. Lawrence, C. L. Giles, and F. M. Coetzee.., 35(3):66–71, 2002. C. Fleizach, M. Liljenstam, P. Johansson, G. M. Voelker, and A. Mehes.. In *Proc. 2007 ACM Workshop on Recurring Malcode (WORM)*, 2007. P. Fraigniaud, C. Gavoille, and C. Paul.. In *Proc. 23rd Annual ACM Symposium on Principles of Distributed Computing (PODC)*, 2004. J. C. Frauenthal.. Springer-Verlag, 1980. M. R. Garey and D. S. Johnson.., 1979. S. M. Kakade, M. Kearns, and L. E. Ortiz.. In *Proc. 17th Annual Conference on Learning Theory (COLT)*, 2004. R. M. Karp.., pages 85–103, 1972. M. Kearns, M. Littman, and S. Singh.. In *Proc. Conference on Uncertainty in Artificial Intelligence (UAI)*, 2001. J. O. Kephart and S. R. White.. In *Proc. IEEE Symposium on Security and Privacy*, 1993. J. O. Kephart, S. R. White, and D. M. Chess.., 30(5):20–26, 1993. J. Kleinberg.. In *Proc. 32nd ACM Symposium on Theory of Computing (STOC)*, 2000. J. Kleinberg.. In *Proc. International Congress of Mathematicians (ICM)*, 2006. J. Kleinberg.., Cambridge University Press, 2007. P. Kuznetsov, and S. Schmid.., 2010. J. Leskovec, L. A. Adamic, and B. A. Huberman.., 2007. D. Liben-Nowell, J. Novak, R. Kumar, P. Raghavan, and A. Tomkins.. In *Proc. National Academy of Sciences*, number 33, pages 11623–11628. National Acad Sciences, 2005. G. S. Manku, M. Naor, and U. Wieder.. In *Proc. 36th ACM Symposium on Theory of Computing (STOC)*, 2004. D. Meier, Y. A. Oswald, S. Schmid, and R. Wattenhofer.. In *Proc. 9th ACM Conference on Electronic Commerce (EC)*, 2008. S. Milgram.., pages 60–67, 1967. A. Montanari and A. Saberi.., pages 60–67, 2009. T. Moscibroda, S. Schmid, and R. Wattenhofer.. In *Proc. 25th Annual ACM Symposium on Principles of Distributed Computing (PODC)*, 2006. T. Moscibroda, S. Schmid, and R. Wattenhofer.. In *Proc. 25th Annual ACM Symposium on Principles of Distributed Computing (PODC)*, 2006. Also appeared in *Journal Internet Mathematics (IM), Volume 6, Number 2*, 2009. M. E. J. Newman and M. Girvan.., 69, 2004. Ch. Papadimitriou.., 2001. T. Roughgarden.. MIT Press, 2005. T. Roughgarden and Éva Tardos., 49(2):236–259, 2002. C. Wang, J. C. Knight, and M. C. Elder.. In *Proc. 16th Annual Computer Security Applications Conference (ACSAC)*, 2000. J. Wright and K. Leyton-Brown.. In *Proc. 24th AAAI Conference on Artificial Intelligence (AAAI)*, 2010. M. Yannakakis.. In *Computer Science Review*, 3(2):71–85 2009. --- 1. E.g., the Outlook worm *Worm.ExploreZip*.[↩](#fnref1) 2. See http://news.softpedia.com/news/Skype-Attacked-By-Fast-Spreading-Virus-52039.shtml.[↩](#fnref2) On the Windfall and Price of Friendship:Inoculation Strategies on Social Networks ================================================================================= This article investigates selfish behavior in games where players are embedded in a social context. A framework is presented which allows us to measure the *Windfall of Friendship*, i.e., how much players benefit (compared to purely selfish environments) if they care about the welfare of their friends in the social network graph. As a case study, a virus inoculation game is examined. We analyze the corresponding Nash equilibria and show that the Windfall of Friendship can never be negative. However, we find that if the valuation of a friend is independent of the total number of friends, the social welfare may not increase monotonically with the extent to which players care for each other; intriguingly, in the corresponding scenario where the relative importance of a friend declines, the Windfall is monotonic again. This article also studies convergence of best-response sequences. It turns out that in social networks, convergence times are typically higher and hence constitute a price of friendship. While such phenomena may be known on an anecdotal level, our framework allows us to quantify these effects analytically. Our formal insights on the worst case equilibria are complemented by simulations shedding light onto the structure of other equilibria. Game Theory, Social Networks, Equilibria, Virus Propagation, Windfall of Friendship Introduction ============ Social networks have existed for thousands of years, but it was not until recently that researchers have started to gain scientific insights into phenomena like the *small world property*. The rise of the Internet has enabled people to connect with each other in new ways and to find friends sharing the same interests from all over the planet. A social network on the Internet can manifest itself in various forms. For instance, on *Facebook*, people maintain virtual references to their friends. The contacts stored on mobile phones or email clients form a social network as well. The analysis of such networks—both their static properties as well as their evolution over time—is an interesting endeavor, as it reveals many aspects of our society in general. A classic tool to model human behavior is *game theory*. It has been a fruitful research field in economics and sociology for many years. Recently, computer scientists have started to use game theory methods to shed light onto the complexities of today’s highly decentralized networks. Game theoretic models traditionally assume that people act autonomously and are steered by the desire to maximize their benefits (or utility). Under this assumption, it is possible to quantify the performance loss of a distributed system compared to situations where all participants collaborate perfectly. A widely studied measure which captures this loss of social welfare is the *Price of Anarchy* (PoA). Even though these concepts can lead to important insights in many environments, we believe that in some situations, the underlying assumptions do not reflect reality well enough. One such example are social networks: most likely people act less selfishly towards their friends than towards complete strangers. Such altruistic behavior is typically not considered in game-theoretic models. In this article, we propose a game theoretic framework for social networks. Social networks are not only attractive to their participants, e.g., it is well-known that the user profiles are an interesting data source for the PR industry to provide tailored advertisements. Moreover, social network graphs can also be exploited for attacks, e.g., email viruses using the users’ address books for propagating, worms spreading on mobile phone networks and over the Internet telephony tool Skype have been reported (e.g., ). This article investigates rational inoculation strategies against such viruses from our game theoretic perspective, and studies the propagation of such viruses on the social network. Our Contribution ---------------- This article makes a first step to combine two active threads of research: social networks and game theory. We introduce a framework taking into consideration that people may care about the well-being of their friends. In particular, we define the *Windfall of Friendship* (WoF) which captures to what extent the social welfare improves in social networks compared to purely selfish systems. In order to demonstrate our framework, as a case study, we provide a game-theoretic analysis of a *virus inoculation game*. Concretely, we assume that the players have the choice between inoculating by buying anti-virus software and risking infection. As expected, our analysis reveals that the players in this game always benefit from caring about the other participants in the social network rather than being selfish. Intriguingly, however, we find that the Windfall of Friendship may not increase monotonically with stronger relationships. Despite the phenomenon being an “ever-green” in political debates, to the best of our knowledge, this is the first article to quantify this effect formally. This article derives upper and lower bounds on the Windfall of Friendship in simple graphs. For example, we show that the Windfall of Friendship in a complete graph is at most 4/3; this is tight in the sense that there are problem instances where the situation can indeed improve this much. Moreover, we show that in star graphs, friendship can help to eliminate undesirable equilibria. Generally, we discover that even in simple graphs the Windfall of Friendship can attain a large spectrum of values, from constant ratios up to Θ(*n*), *n* being the network size, which is asymptotically maximal for general graphs. Also an alternative friendship model is discussed in this article where the relative importance of an individual friend declines with a larger number of friends. While the Windfall of Friendship is still positive, we show that the non-monotonicity result is no longer applicable. Moreover, it is proved that in both models, computing the best and the worst friendship Nash equilibrium is NP-hard. The paper also initiates the discussion of implications on convergence. We give a potential function argument to show convergence of best-response sequences in various models and for simple, cyclic graphs. Moreover, we report on our simulations which indicate that the convergence times are typically higher in social contexts, and hence constitute a certain price of friendship. Finally, to complement our formal analysis of the worst equilibria, simulation results for average case equilibria are discussed. Organization ------------ The remainder of this article is organized as follows. Section [sec:relwork] reviews related work and Section [sec:model] formally introduces our model and framework. The Windfall of Friendship on general graphs and on special graphs is studied in Sections [sec:general] and [sec:cliquestar] respectively. Section [sec:relative] discusses an alternative model where the relative importance of a friend declines if the total number of friends increases. Aspects of best-response convergence and implications are considered in Section [sec:convergence]. We report on simulations in Section [sec:simulations]. Finally, we conclude the article in Section [sec:conclusion]. Related Work ============ Social networks are a fascinating topic not only in social sciences, but also in ethnology, and psychology. The advent of social networks on the Internet, e.g., *Facebook*, *LinkedIn*, *MySpace*, *Orkut*, or *Xing*, to name but a few, heralded a new kind of social interactions, and the mere scale of online networks and the vast amount of data constitute an unprecedented treasure for scientific studies. The topological structure of these networks and the dynamics of the user behavior has a mathematical and algorithmic dimension, and has raised the interest of mathematicians and engineers accordingly. The famous *small world experiment*  conducted by Stanley Milgram 1967 has gained attention by the algorithm community  and inspired research on topics such as decentralized search algorithms , routing on social networks  and the identification of communities . The dynamics of epidemic propagation of information or diseases has been studied from an algorithmic perspective as well . Knowledge on effects of this cascading behavior is useful to understand phenomena as diverse as word-of-mouth effects, the diffusion of innovation, the emergence of bubbles in a financial market, or the rise of a political candidate. It can also help to identify sets of influential players in networks where marketing is particularly efficient (*viral marketing*). For a good overview on economic aspects of social networks, we refer the reader to, which, i.a., compares random graph theory with game theoretic models for the formation of social networks. Recently, game theory has also received much attention by computer scientists. This is partly due to the various actors and stake-holders who influence the decentralized growth of the Internet: game theory is a useful tool to gain insights into the Internet’s socio-economic complexity. Many aspects have been studied from a game-theoretic point of view, e.g., *routing*, *multicast transmissions*, or *network creation*. Moreover, computer scientists are interested in the algorithmic problems offered by game theory, e.g., on the existence of pure equilibria . This article applies game theory to social networks where players are not completely selfish and autonomous but have friends about whose well-being they care to some extent. We demonstrate our mathematical framework with a virus inoculation game on social graphs. There is a large body of literature on the propagation of viruses . Miscellaneous misuse of social networks has been reported, e.g., *email viruses*[1](#fn1) have used address lists to propagate to the users’ friends. Similar vulnerabilities have been exploited to spread worms on the *mobile phone network*  and on the Internet telephony tool *Skype*[2](#fn2). There already exists interesting work on game theoretic and epidemic models of propagation in social networks. For instance, Montanari and Saberi  attend to a game theoretic model for the diffusion of an innovation in a network and characterize the rate of convergence as a function of graph structure. The authors highlight crucial differences between game theoretic and epidemic models and find that the spread of viruses, new technologies, and new political or social beliefs do not have the same viral behavior. The articles closest to ours are . Our model is inspired by Aspnes et al. . The authors apply a classic game-theoretic analysis and show that selfish systems can be very inefficient, as the Price of Anarchy is Θ(*n*), where *n* is the total number of players. They show that computing the social optimum is NP-hard and give a reduction to the combinatorial problem *sum-of-squares partition*. They also present a *O*(log2*n*) approximation. Moscibroda et al.  have extended this model by introducing malicious players in the selfish network. This extension facilitates the estimation of the robustness of a distributed system to malicious attacks. They also find that in a non-oblivious model, intriguingly, the presence of malicious players may actually *improve* the social welfare. In a follow-up work  which generalizes the social context of  to arbitrary bilateral relationships, it has been shown that there is no such phenomenon in a simple network creation game. The *Windfall of Malice* has also been studied in the context of congestion games  by Babaioff et al. In contrast to these papers, our focus here is on social graphs where players are concerned about their friends’ benefits. There is other literature on game theory where players are influenced by their neighbors. In *graphical economics*, an undirected graph is given where an edge between two players denotes that free trade is allowed between the two parties, where the absence of such an edge denotes an embargo or an other restricted form of direct trade. The payoff of a player is a function of the actions of the players in its neighborhood only. In contrast to our work, a different equilibrium concept is used and no social aspects are taken into consideration. Note that the nature of game theory on social networks also differs from *cooperative games* (e.g., ) where each coalition *C* ⊆ 2*V* of players *V* has a certain characteristic cost or payoff function *f* : 2*V* → R describing the collective payoff the players can gain by forming the coalition. In contrast to cooperative games, the “coalitions” are fixed, and a player participates in the “coalitions” of all its neighbors. A preliminary version of this article appeared at ACM EC 2008 , and there have been several interesting results related to our work since then. For example,  studies auctions with spite and altruism among bidders, and presents explicit characterizations of Nash equilibria for first-price auctions with random valuations and arbitrary spite/altruism matrices, and for first and second price auctions with arbitrary valuations and so-called regular social networks (players have same out-degree). By rounding a natural linear program with region-growing techniques, Chen et al.  present a better, *O*(log*z*)-approximation for the best vaccination strategy in the original model of , where *z* is the support size of the outbreak distribution. Moreover, the effect of autonomy is investigated: a benevolent authority may suggest which players should be vaccinated, and the authors analyze the “Price of Opting Out” under partially altruistic behavior; they show that with positive altruism, Nash equilibria may not exist, but that the price of opting out is bounded. We extend the conference version of this article  in several respects. The two most important additions concern *relative friendship* and *convergence*. We study an additional model where the relative importance of a neighbor declines with the total number of friends and find that while friendship is still always beneficial, the non-monotonicity result no longer applies: unlike in the absolute friendship model, the Windfall of Friendship can only increase with stronger social ties. In addition, we initiate the study of convergence issues in the social network. It turns out that it takes longer until an equilibrium is reached compared to purely selfish environments and hence constitutes a price of friendship. We present a potential function argument to prove convergence in some simple cyclic networks, and complement our study with simulations on Kleinberg graphs. We believe that the existence of and convergence to social equilibria are exciting questions for future research (see also the related fields of *player-specific utilities*  and *local search complexity* ). Finally, there are several minor changes, e.g., we improve the bound in Theorem [thm:monotone] from *n* > 7 to *n* > 3. Model ===== This section introduces our framework. In order to gain insights into the Windfall of Friendship, we study a virus inoculation game on a social graph. We present the model of this game and we show how it can be extended to incorporate social aspects. Virus Inoculation Game ---------------------- The virus inoculation game was introduced by . We are given an undirected network graph *G* = (*V*, *E*) of *n* = ∣*V*∣ players (or nodes) *p**i* ∈ *V*, for *i* = 1, …, *n*, who are connected by a set of edges (or *links*) *E*. Every player has to decide whether it wants to *inoculate* (e.g., purchase and install anti-virus software) which costs *C*, or whether it prefers saving money and facing the risk of being infected. We assume that being infected yields a damage cost of *L* (e.g., a computer is out of work for *L* days). In other words, an instance *I* of a game consists of a graph *G* = (*V*, *E*), the inoculation cost *C* and a damage cost *L*. We introduce a variable *a**i* for every player *p**i* denoting *p**i*’s chosen *strategy*. Namely, *a**i* = 1 describes that player *p**i* is protected whereas for a player *p**j* willing to take the risk, *a**j* = 0. In the following, we will assume that *a**j* ∈ {0, 1}, that is, we do not allow players to *mix* (i.e., use probabilistic distributions over) their strategies. These choices are summarized by the *strategy profile*, the vector *a⃗* = (*a*1, …, *a**n*). After the players have made their decisions, a virus spreads in the network. The propagation model is as follows. First, one player *p* of the network is chosen uniformly at random as a starting point. If this player is inoculated, there is no damage and the process terminates. Otherwise, the virus infects *p* and all unprotected neighbors of *p*. The virus now propagates recursively to their unprotected neighbors. Hence, the more insecure players are connected, the more likely they are to be infected. The vulnerable region (set of players) in which an insecure player *p**i* lies is referred to as *p**i*’s *attack component*. We only consider a limited region of the parameter space to avoid trivial cases. If the cost *C* is too large, no player will inoculate, resulting in a totally insecure network and therefore all players eventually will be infected. On the other hand, if *C* <  < *L*, the best strategy for all players is to inoculate. Thus, we will assume that *C* ≤ *L* and *C* > *L*/*n* in the following. In our game, a player has the following expected cost: [Actual Individual Cost][actualcost]$\\$ The *actual individual cost* of a player *p**i* is defined as $$c\_a(i,\vec{a}) = a\_{i} \cdot C + (1-a\_i) L \cdot \frac{k\_i}{n}$$ where *k**i* denotes the size of *p**i*’s attack component. If *p**i* is inoculated, *k**i* stands for the size of the attack component that would result if *p**i* became insecure. In the following, let *c**a*0(*i*, *a⃗*) refer to the actual cost of an insecure and *c**a*1(*i*, *a⃗*) to the actual cost of a secure player *p**i*. The total *social cost* of a game is defined as the sum of the cost of all participants: *C**a*(*a⃗*) = ∑*p**i* ∈ *V**c**a*(*i*, *a⃗*). Classic game theory assumes that all players act selfishly, i.e., each player seeks to minimize its individual cost. In order to study the impact of such selfish behavior, the solution concept of a *Nash equilibrium* (NE) is used. A Nash equilibrium is a strategy profile where no selfish player can unilaterally reduce its individual cost given the strategy choices of the other players. We can think of Nash equilibria as the stable strategy profiles of games with selfish players. We will only consider pure Nash equilibria in this article, i.e., players cannot use random distributions over their strategies but must decide whether they want to inoculate or not. In a pure Nash equilibrium, it must hold for each player *p**i* that given a strategy profile *a⃗* ∀*p**i* ∈ *V*,  ∀*a**i* :  *c**a*(*i*, *a⃗*) ≤ *c**a*(*i*, (*a*1, …, 1 − *a**i*, …, *a**n*)), implying that player *p**i* cannot decrease its cost by choosing an alternative strategy 1 − *a**i*. In order to quantify the performance loss due to selfishness, the (not necessarily unique) Nash equilibria are compared to the optimum situation where all players collaborate. To this end we consider the *Price of Anarchy* (PoA), i.e., the ratio of the social cost of the worst Nash equilibrium divided by the optimal social cost for a problem instance *I*. More formally, *P**o**A*(*I*) = max*N**E**C**N**E*(*I*)/*C**O**P**T*(*I*). Social Networks --------------- Our model for social networks is as follows. We define a *Friendship Factor* *F* which captures the extent to which players care about their *friends*, i.e., about the players *adjacent* to them in the social network. More formally, *F* is the factor by which a player *p**i* takes the individual cost of its neighbors into account when deciding for a strategy. *F* can assume any value between 0 and 1. *F* = 0 implies that the players do not consider their neighbors’ cost at all, whereas *F* = 1 implies that a player values the well-being of its neighbors to the same extent as its own. Let Γ(*p**i*) denote the set of neighbors of a player *p**i*. Moreover, let Γ*s**e**c*(*p**i*) ⊆ Γ(*p**i*) be the set of inoculated neighbors, and $\Gamma\_{\overline{sec}}(p\_i)=\Gamma(p\_i)\setminus \Gamma\_{sec}(p\_i)$ the remaining insecure neighbors. We distinguish between a player’s *actual cost* and a player’s *perceived cost*. A player’s actual individual cost is the expected cost arising for each player defined in Definition [actualcost] used to compute a game’s social cost. In our social network, the decisions of our players are steered by the players’ *perceived cost*. [Perceived Individual Cost][perceived cost]$\\$ The *perceived individual cost* of a player *p**i* is defined as *c**p*(*i*, *a⃗*) = *c**a*(*i*, *a⃗*) + *F* ⋅ ∑*p**j* ∈ Γ(*p**i*)*c**a*(*j*, *a⃗*). In the following, we write *c**p*0(*i*, *a⃗*) to denote the perceived cost of an insecure player *p**i* and *c**p*1(*i*, *a⃗*) for the perceived cost of an inoculated player. This definition entails a new notion of equilibrium. We define a *friendship Nash equilibrium* (FNE) as a strategy profile *a⃗* where no player can reduce its *perceived* cost by unilaterally changing its strategy given the strategies of the other players. Formally, ∀*p**i* ∈ *V*, ∀*a**i* : *c**p*(*i*, *a⃗*) ≤ *c**p*(*i*, (*a*1, …, 1 − *a**i*, …, *a**n*)). Given this equilibrium concept, we define the *Windfall of Friendship* *Υ*. [Windfall of Friendship (WoF)][def:WFDef] The *Windfall of Friendship* *Υ*(*F*, *I*) is the ratio of the social cost of the worst Nash equilibrium for *I* and the social cost of the worst friendship Nash equilibrium for *I*: $$\Upsilon(F,I)=\frac{\max\_{NE}{C\_{NE}(I)}}{\max\_{FNE}{C\_{FNE}(F,I)}}$$ *Υ*(*F*, *I*) > 1 implies the existence of a real windfall in the system, whereas *Υ*(*F*, *I*) < 1 denotes that the social cost can become *greater* in social graphs than in purely selfish environments. General Analysis ================ In this section we characterize friendship Nash equilibria and derive general results on the Windfall of Friendship for the virus propagation game in social networks. It has been shown  that in classic Nash equilibria (*F* = 0), an attack component can never consist of more than *C**n*/*L* insecure players. A similar characteristic also holds for friendship Nash equilibria. As every player cares about its neighbors, the maximal attack component size in which an insecure player *p**i* still does not inoculate depends on the number of *p**i*’s insecure neighbors and the size of their attack components. Therefore, it differs from player to player. We have the following helper lemma. [lemma:ac-size] The player *p**i* will inoculate if and only if the size of its attack component is $$k\_i > \frac{Cn/L + F \cdot \sum\_{p\_j \in{\Gamma\_{\overline{sec}}(p\_i)}}{k\_j}} {1+F|\Gamma\_{\overline{sec}}(p\_i)|},$$ where the *k**j*s are the attack component sizes of *p**i*’s insecure neighbors assuming *p**i* is secure. Player *p**i* will inoculate if and only if this choice lowers the perceived cost. By Definition [perceived cost], the perceived individual cost of an inoculated player is $$c\_p^1(i,\vec{a}) = C + F \left( |\Gamma\_{sec}(p\_i)|{C} + \sum\_{p\_j \in \Gamma\_{\overline{sec}}(p\_i)}{L \frac{k\_j}{n}} \right)$$ and for an insecure player we have $$c\_p^0(i,\vec{a}) = L \frac{k\_i}{n} + F \left( |\Gamma\_{sec}(p\_i)|{C} + |\Gamma\_{\overline{sec}}(p\_i)|{L \frac{k\_i}{n}} \right).$$ For *p**i* to prefer to inoculate it must hold that $$\begin{aligned} c\_p^0(i,\vec{a}) &>& c\_p^1(i,\vec{a}) ~~\Leftrightarrow \\ L\frac{k\_i}{n} + F \cdot |\Gamma\_{\overline{sec}}(p\_i)|{L \frac{k\_i}{n}} &>& C + F \cdot \sum\_{p\_j \in \Gamma\_{\overline{sec}}(p\_i)}{L \frac{k\_j}{n}} ~~\Leftrightarrow \\ L \frac{k\_i}{n} (1 + F |\Gamma\_{\overline{sec}}(p\_i)|) &>& C + \frac{FL}{n} \cdot \sum\_{p\_j \in \Gamma\_{\overline{sec}}(p\_i)}{k\_j} ~~\Leftrightarrow \\ k\_i (1 + F |\Gamma\_{\overline{sec}}(p\_i)|) &>& Cn/L + F \cdot \sum\_{p\_j \in \Gamma\_{\overline{sec}}(p\_i)}{k\_j} ~~\Leftrightarrow\\ k\_i&>& \frac{Cn/L + F \cdot \sum\_{p\_j \in \Gamma\_{\overline{sec}}(p\_i)}{k\_j}}{1 + F |\Gamma\_{\overline{sec}}(p\_i)|}.\end{aligned}$$ A pivotal question is of course whether social networks where players care about their friends yield better equilibria than selfish environments. The following theorem answers this questions affirmatively: the worst FNE costs never more than the worst NE. [FFneversmaller1] For all instances of the virus inoculation game and 0 ≤ *F* ≤ 1, it holds that 1 ≤ *Υ*(*F*, *I*) ≤ *PoA*(*I*) The proof idea for *Υ*(*F*, *I*) ≥ 1 is the following: for an instance *I* we consider an arbitrary FNE with *F* > 0. Given this equilibrium, we show the existence of a NE with larger social cost (according to , our best response strategy always converges). Let *a⃗* be any (e.g., the worst) FNE in the social model. If *a⃗* is also a NE in the same instance with *F* = 0 then we are done. Otherwise there is at least one player *p**i* that prefers to change its strategy. Assume *p**i* is insecure but favors inoculation. Therefore *p**i*’s attack component has on the one hand to be of size at least *k**i*ʹ > *C**n*/*L*  and on the other hand of size at most $k\_i'' = (Cn/L + F \cdot \sum\_{p\_j \in \Gamma\_{\overline{sec}}(p\_i)}{k\_j}) / (1+F |\Gamma\_{\overline{sec}}(p\_i)|) \leq (Cn/L + F |\Gamma\_{\overline{sec}}(p\_i)| (k\_i'' - 1) ) / (1+F |\Gamma\_{\overline{sec}}(p\_i)|) ~~\Leftrightarrow k\_i'' \leq Cn/L - F |\Gamma\_{\overline{sec}}(p\_i)|$ (cf Lemma [lemma:ac-size]). This is impossible and yields a contradiction to the assumption that in the selfish network, an additional player wants to inoculate. It remains to study the case where *p**i* is secure in the FNE but prefers to be insecure in the NE. Observe that, since every player has the same preference on the attack component’s size when *F* = 0, a newly insecure player cannot trigger other players to inoculate. Furthermore, only the players inside *p**i*’s attack component are affected by this change. The total cost of this attack component increases by at least $$\begin{aligned} x &=& \underbrace{\frac{k\_i}{n}L - C}\_{p\_i} + \underbrace{\sum\_{p\_j \in \Gamma\_{\overline{sec}}(p\_i)}\left(\frac{k\_i}{n}L -\frac{k\_j}{n}L\right)}\_{\textit{$p\_i$'s insecure neighbors}}\\ &=& \frac{k\_i}{n}L - C + \frac{L}{n} (|\Gamma\_{\overline{sec}}(p\_i)|k\_i - \sum\_{p\_j \in \Gamma\_{\overline{sec}}(p\_i)}{k\_j}).\end{aligned}$$ Applying Lemma [lemma:ac-size] guarantees that $$\sum\_{p\_j \in \Gamma\_{\overline{sec}}(p\_i)} k\_j \leq \frac{k\_i(1+F|\Gamma\_{\overline{sec}}(p\_i)|)-C n/L}{F}.$$ This results in $$\begin{aligned} x &\geq& \frac{L}{n} \left(|\Gamma\_{\overline{sec}}(p\_i)|k\_i - \frac{ k\_i(1+F|\Gamma\_{\overline{sec}}(p\_i)|)-C n/L}{F}\right)\\ &=& \frac{k\_i L}{n}(1-\frac{1}{F}) - C (1-\frac{1}{F})>0,\end{aligned}$$ since a player only gives up its protection if $C>\frac{k\_i L}{n}$. If more players are unhappy with their situation and become vulnerable, the cost for the NE increases further. In conclusion, there exists a NE for every FNE with *F* ≥ 0 for the same instance which is at least as expensive. The upper bound for the WoF, i.e., $\textit{PoA($I$)} \geq \Upsilon(F,I)$, follows directly from the definitions: while the PoA is the ratio of the NE’s social cost divided by the social optimum, *Υ*(*F*, *I*) is the ratio between the cost of the NE and the FNE. As the FNE’s cost must be at least as large as the social optimum cost the claim follows. Note that Aspnes et al.  proved that the Price of Anarchy never exceeds the size of the network, i.e., $n \geq \textit{PoA($I$)}$. Consequently, the Windfall of Friendship cannot be larger than *n* due to Theorem 4.2. The above result leads to the question of whether the Windfall of Friendship grows monotonically with stronger social ties, i.e., with larger friendship factors *F*. Intriguingly, this is not the case. [thm:monotone] For all networks with more than three players, there exist game instances where *Υ*(*F*, *I*) does not grow monotonically in *F*. We give a counter example for the star graph *S**n* which has one center player and *n* − 1 leaf players. Consider two friendship factors, *F**l* and *F**s* where *F**l* > *F**s*. We show that for the large friendship factor *F**l*, there exists a FNE, *F**N**E*1, where only the center player and one leaf player remain insecure. For the same setting but with a small friendship factor *F**s*, at least two leaf players will remain insecure, which will trigger the center player to inoculate, yielding a FNE, *F**N**E*2, where only the center player is secure. Consider *F**N**E*1 first. Let *c* be the insecure center player, let *l*1 be the insecure leaf player, and let *l*2 be a secure leaf player. In order for *F**N**E*1 to constitute a Nash equilibrium, the following conditions must hold: $$\textit{player } c: ~~ \frac{2L}{n}+\frac{2F\_lL}{n} < C + \frac{F\_lL}{n}$$ $$\textit{player } l\_{1}: ~~ \frac{2L}{n}+\frac{2F\_lL}{n} < C + \frac{F\_lL}{n}$$ $$\textit{player } l\_{2}: ~~ C+\frac{2F\_lL}{n} < \frac{3L}{n} + \frac{3F\_lL}{n}$$ For *F**N**E*2, let *c* be the insecure center player, let *l*1 be one of the two insecure leaf players, and let *l*2 be a secure leaf player. In order for the leaf players to be happy with their situation but for the center player to prefer to inoculate, it must hold that: $$\textit{player } c: ~~ C+\frac{2F\_sL}{n} < \frac{3L}{n} + \frac{6F\_sL}{n}$$ $$\textit{player } l\_{1}: ~~ \frac{3L}{n}+\frac{3F\_sL}{n} < C + \frac{2F\_sL}{n}$$ $$\textit{player } l\_{2}: ~~ C+\frac{3F\_sL}{n} < \frac{4L}{n} + \frac{4F\_sL}{n}$$ Now choose *C* :  = 5*L*/(2*n*) + *F**l**L*/*n* (note that due to our assumption that *n* > 3, *C* < *L*). This yields the following conditions: *F**l* > *F**s* + 1/2, *F**l* < *F**s* + 3/2, and *F**l* < 4*F**s* + 1/2. These conditions are easily fulfilled, e.g., with *F**l* = 3/4 and *F**s* = 1/8. Observe that the social cost of the first FNE (for *F**l*) is *C**o**s**t*(*S**n*, *a⃗**F**N**E*1) = (*n* − 2)*C* + 4*L*/*n*, whereas for the second FNE (for *F**s*) *C**o**s**t*(*S**n*, *a⃗**F**N**E*2) = *C* + (*n* − 1)*L*/*n*. Thus, *C**o**s**t*(*S**n*, *a⃗**F**N**E*1) − *C**o**s**t*(*S**n*, *a⃗**F**N**E*2) = (*n* − 3)*C* − (*n* − 5)*L*/*n* > 0 as we have chosen *C* > 5*L*/(2*n*) and as, due to our assumption, *n* > 3. This concludes the proof. Reasoning about best and worst Nash equilibria raises the question of how difficult it is to compute such equlibria. We can generalize the proof given in  and show that computing the most economical and the most expensive FNE is hard for any friendship factor. [NP-completeness] Computing the best and the worst pure FNE is NP-complete for any *F* ∈ [0, 1]. We prove this theorem by a reduction from two NP-hard problems, Vertex Cover  and Independent Dominating Set . Concretely, for the decision version of the problem, we show that answering the question whether there exists a FNE costing less than *k*, or more than *k* respectively, is at least as hard as solving vertex cover or independent dominating set. Note that verifying whether a proposed solution is correct can be done in polynomial time, hence the problems are indeed in NP. Fix some graph *G* = (*V*, *E*) and set *C* = 1 and *L* = *n*/1.5. We show that the following two conditions are necessary and sufficient for a FNE: (a) all neighbors of an insecure player are secure, and (b) every inoculated player has at least one insecure neighbor. Due to our assumption that *C* > *L*/*n*, condition (b) is satisfied in all FNE. To see that condition (a) holds as well, assume the contrary, i.e., an attack component of size at least two. An insecure player *p**i* in this attack component bears the cost $\frac{k\_i}{n} L + F(|\Gamma\_{sec}(p\_i)| C + |\Gamma\_{\overline{sec}}(p\_i)| \frac{k\_i}{n} L)$. Changing its strategy reduces its cost by at least $\Delta\_{i} = \frac{k\_i}{n} L + F |\Gamma\_{\overline{sec}}(p\_i)| \frac{k\_i}{n} L - C - F |\Gamma\_{\overline{sec}}(p\_i)| \frac{k\_i-1}{n} L = \frac{k\_i}{n} L + F |\Gamma\_{\overline{sec}}(p\_i)| \frac{1}{n} L - C$. By our assumption that *k**i* ≥ 2, and hence $|\Gamma\_{\overline{sec}}(p\_i)| \geq 1$, it holds that Δ*i* > 0, resulting in *p**i* becoming secure. Hence, condition (a) holds in any FNE as well. For the opposite direction assume that an insecure player wants to change its strategy even though (a) and (b) are true. This is impossible because in this case (b) would be violated because this player does not have any insecure neighbors. An inoculated player would destroy (a) by adopting another strategy. Thus (a) and (b) are sufficient for a FNE. We now argue that *G* has a vertex cover of size *k* if and only if the virus game has a FNE with *k* or fewer secure players, or equivalently an equilibrium with social cost at most *C**k* + (*n* − *k*)*L*/*n*, as each insecure player must be in a component of size 1 and contributes exactly *L*/*n* expected cost. Given a minimal vertex cover *V*ʹ ⊆ *V*, observe that installing the software on all players in *V*ʹ satisfies condition (a) because *V*ʹ is a vertex cover and (b) because *V*ʹ is minimal. Conversely, if *V*ʹ is the set of secure players in a FNE, then *V*ʹ is a vertex cover by condition (a) which is minimal by condition (b). For the worst FNE, we consider an instance of the independent dominating set problem. Given an independent dominating set *V*ʹ, installing the software on all players except the players in *V*ʹ satisfies condition (a) because *V*ʹ is independent and (b) because *V*ʹ is a dominating set. Conversely, the insecure players in any FNE are independent by condition (a) and dominating by condition (b). This shows that *G* has an independent dominating set of size at most *k* if and only if it has a FNE with at least *n* − *k* secure players. Windfall for Special Graphs =========================== While the last section has presented general results on equilibria in social networks and the Windfall of Friendship, we now present upper and lower bounds on the Windfall of Friendship for concrete topologies, namely the *complete graph* *K**n* and the *star graph* *S**n*. Complete Graphs --------------- In order to initiate the study of the Windfall of Friendship, we consider a very simple topology, the complete graph *K**n* where all players are connected to each other. First consider the classic setting where players do not care about their neighbors (*F* = 0). We have the following result: [cliqueNE] In the graph *K**n*, there are two Nash equilibria with social cost $$\begin{aligned} \textit{~~NE$\_1$: } Cost(K\_n,\vec{a}\_{\textit{NE1}}) &=& C(n-\lceil Cn/L \rceil + 1) + L/n(\lceil Cn/L \rceil - 1 )^2,\\ \textrm{and~~~~~~~~~~~~~~~~~~~~~~~~~~}\\ \textit{~~NE$\_2$: } Cost(K\_n,\vec{a}\_{\textit{NE2}}) &=& C(n-\lfloor Cn/L \rfloor) + L/n(\lfloor Cn/L \rfloor)^2.\end{aligned}$$ If ⌈*C**n*/*L*⌉ − 1 = ⌊*C**n*/*L*⌋, there is only one Nash equilibrium. Let *a⃗* be a NE. Consider an inoculated player *p**i* and an insecure player *p**j*, and hence *c**a*(*i*, *a⃗*) = *C* and $c\_a(j,\vec{a}) = L \frac{k\_j}{n}$, where *k**j* is the total number of insecure players in *K**n*. In order for *p**i* to remain inoculated, it must hold that *C* ≤ (*k**j* + 1)*L*/*n*, so *k**j* ≥ ⌈*C**n*/*L* − 1⌉; for *p**j* to remain insecure, it holds that *k**j**L*/*n* ≤ *C*, so *k**j* ≤ ⌊*C**n*/*L*⌋. As the total social cost in *K**n* is given by *C**o**s**t*(*K**n*, *a⃗*) = (*n* − *k**j*)*C* + *k**j*2*L*/*n*, the claim follows. Observe that the equilibrium size of the attack component is roughly twice the size of the attack component of the social optimum, as *C**o**s**t*(*K**n*, *a⃗*) = (*n* − *k**j*)*C* + *k**j*2*L*/*n* is minimized for *k**j* = *C**n*/2*L*. [cliqueopt] In the social optimum for *K**n*, the size of the attack component is either $\lfloor \frac{1}{2} Cn/L \rfloor$ or $\lceil \frac{1}{2} Cn/L \rceil$, yielding a total social cost of $$Cost(K\_n,\vec{a}\_{\textit{OPT}}) = (n-\lfloor \frac{1}{2} Cn/L \rfloor)C + (\lfloor \frac{1}{2} Cn/L \rfloor)^2 \frac{L}{n}$$ or $$Cost(K\_n,\vec{a}\_{\textit{OPT}}) = (n-\lceil \frac{1}{2} Cn/L \rceil)C + (\lceil \frac{1}{2} Cn/L \rceil)^2 \frac{L}{n}.$$ In order to compute the Windfall of Friendship, the friendship Nash equilibria in social networks have to be identified. [cliqueFNE] In *K**n*, there are two friendship Nash equilibria with social cost $$\begin{aligned} \textit{FNE$\_1$: } Cost(K\_n,\vec{a}\_{\textit{FNE1}}) &=& C \left(n-\left\lceil\frac{Cn/L - 1}{1 + F}\right\rceil\right) + L/n\left(\left\lceil\frac{Cn/L - 1}{1 + F}\right\rceil\right)^2, \\ \textrm{and~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~}\\ \textit{FNE$\_2$: } Cost(K\_n,\vec{a}\_{\textit{FNE2}}) &=& C\left(n-\left\lfloor\frac{Cn/L + F}{1 + F}\right\rfloor\right) + L/n\left(\left\lfloor\frac{Cn/L + F}{1 + F}\right\rfloor\right)^2.\end{aligned}$$ If ⌈(*C**n*/*L* − 1)/(1 + *F*)⌉ = ⌊(*C**n*/*L* + *F*)/(1 + *F*)⌋, there is only one FNE. According to Lemma [lemma:ac-size], in a FNE, a player *p**i* remains secure if otherwise the component had size at least *k**i* = *k**j* + 1 ≥ (*C**n*/*L* + *F**k**j*2)/(1 + *F**k**j*) where *k**j* is the number of insecure players. This implies that *k**j* ≥ ⌈(*C**n*/*L* − 1)/(1 + *F*)⌉. Dually, for an insecure player *p**j* it holds that *k**j* ≤ (*C**n*/*L* + *F*(*k**j* − 1)2)/(1 + *F*(*k**j* − 1)) and therefore *k**j* ≤ ⌊(*C**n*/*L* + *F*)/(1 + *F*)⌋. Given these bounds on the total number of insecure players in a FNE, the social cost can be obtained by substituting *k**j* in *C**o**s**t*(*K**n*, *a⃗*) = (*n* − *k**j*)*C* + *k**j*2*L*/*n*. As the difference between the upper and the lower bound for *k**j* is at most 1, there are at most two equilibria and the claim follows. Given the characteristics of the different equilibria, we have the following theorem. [4/3] In *K**n*, the Windfall of Friendship is at most *Υ*(*F*, *I*) = 4/3 for an arbitrary network size. This is tight in the sense that there are indeed instances where the worst FNE is a factor 4/3 better than the worst NE. *Upper Bound.* We first derive the upper bound on *Υ*(*F*, *I*). $$\begin{aligned} \Upsilon(F,I) &=& \frac{Cost(K\_n,\vec{a}\_{\textit{NE}})}{Cost(K\_n,\vec{a}\_{\textit{FNE}})}\\ &\leq& \frac{Cost(K\_n,\vec{a}\_{\textit{NE}})}{Cost(K\_n,\vec{a}\_{\textit{OPT}})}\\ &\leq& \frac{(n - \lceil Cn/L - 1 \rceil) C + (\lfloor Cn/L \rfloor)^2 \frac{L}{n}}{(n - \frac{1}{2} Cn/L) C + (\frac{1}{2} Cn/L)^2 \frac{L}{n}}\end{aligned}$$ as the optimal social cost (cf Lemma [cliqueopt]) is smaller or equal to the social cost of any FNE. Simplifying this expression yields $$\begin{aligned} \Upsilon(F,I) &\leq& \frac{n(1-C/L)C + C^2 n/L}{n(1 - \frac{1}{2}C/L)C + \frac{1}{4} C^2 n/L}=\frac{1}{1-\frac{1}{4}C/L}.\end{aligned}$$ This term is maximized for *L* = *C*, implying that *Υ*(*F*, *I*)�
arxiv_0000079
Charged Particle and Photon Multiplicity, and Transverse Energy Production in High-Energy Heavy-Ion Collisions ============================================================================================================== We review the charged particle and photon multiplicity, and transverse energy production in heavy-ion collisions starting from few GeV to TeV energies. The experimental results of pseudorapidity distribution of charged particles and photons at different collision energies and centralities are discussed. We also discuss the hypothesis of limiting fragmentation and expansion dynamics using the Landau hydrodynamics and the underlying physics. Meanwhile, we present the estimation of initial energy density multiplied with formation time as a function of different collision energies and centralities. In the end, the transverse energy per charged particle in connection with the chemical freeze-out criteria is discussed. We invoke various models and phenomenological arguments to interpret and characterize the fireball created in heavy-ion collisions. This review overall provides a scope to understand the heavy-ion collision data and a possible formation of a deconfined phase of partons via the global observables like charged particles, photons and the transverse energy measurement. INTRODUCTION ============ At extreme temperatures and energy density, hadronic matter undergoes a phase transition to partonic phase called as Quark-Gluon Plasma (QGP). The main goal of heavy-ion collision experiments is to study the QGP by creating such extreme conditions by colliding heavy nuclei at relativistic energies. During the last decade, there are many heavy-ion collision experiments carried out at SPS, RHIC and LHC to create and study QGP in the laboratory. Global observables like transverse energy ($E\_{\rm T}$), particle multiplicities ($N\_{\gamma}, N\_{\rm {ch}}$ etc.), $p\_{\rm T}$-spectra of the produced particles and their pseudorapidity distributions ($dE\_{\rm T}/d\eta, dN/d\eta$), with different colliding species and beam energies provide insight about the dynamics of the system and regarding the formation of QGP. It is also proposed that the correlation of mean transverse momentum, $\langle p\_{\rm T} \rangle$ and the multiplicity of the produced particles may serve as a probe for the Equation of State (EoS) of hot hadronic matter. In a thermodynamic description of the produced system, the rapidity density (*d**N*/*d**y*) reflects the entropy and the mean transverse momentum ($\langle p\_{\rm T} \rangle$), corresponds to the temperature of the system. Except at the phase transition points, the rapidity density linearly scales with $\langle p\_{\rm T} \rangle$. If the phase transition is of first order, then the temperature remains constant at the co-existence of the hadron gas and the QGP phase, thereby increasing the entropy density. In such a scenario, $\langle p\_{\rm T} \rangle$ shows a plateau with increase of entropy, thereby characterizing the phase transition associated with the time evolution of the system. Hence, the global observables like, *d**N*/*d**y* and $\langle p\_{\rm T} \rangle$, give indication of a possible existence of a QGP phase and the order of phase transition. $dE\_{\rm T}/d\eta$ gives the maximum energy density produced in the collision process which is necessary to understand the reaction dynamics. The formation of QGP may also change the shape of the pseudorapidity distribution. The event multiplicity distribution gives information of the centrality and energy density of the collision. The scaling of multiplicity with number of participant nucleons ($N\_{\rm {part}}$) reflects the particle production due to soft processes (low-$p\_{\rm T}$). Whereas, at high energy when hard processes (high-$p\_{\rm T}$) dominate, it is expected that the multiplicity will scale with the number of nucleon-nucleon collisions ($N\_{\rm {coll}}$). There are models to explain the particle production taking a linear combination of $N\_{\rm {part}}$ and $N\_{\rm {coll}}$ (called two-component model). The most viable way of studying QGP is via the particles produced in the collision in their respective domain of proposed methods. Then one of the most fundamental questions arises about the mechanism of particle production and how they are related with the initial energy density, gluon density in the first stage of the collision evolution and entropy of the system. Similarly, question can be put to figure out the role of soft and hard process of particle productions. It is proposed that the charged particle multiplicity or technically called as the pseudorapidity density distributions of charged particles, $dN\_{\rm{ch}}/d\eta$, can be used to address the above questions. Here the pseudorapidity, *η* =  − *l**n* *t**a**n* *θ*/2, where *θ* is the polar angle, the produced particles make with the detector. So *d**N**c**h*/*d**η* is called as one of the global variables to characterize the system produced in the heavy-ion collisions. Experimentally, it is more easy to estimate this quantity as most of the detectors are capable of detecting charged particles and it involves only kinematics of the charged particles. In this review, in Section-II, we discuss the method of experimental determination of collision centrality, which is followed by discussions on the midrapidity pseudorapidity density distributions of charged particles for different collision energies, collision species and centralities in Section-III. In this section, we discuss about the longitudinal scaling and factorization of charged particles. The expansion dynamics of the system is discussed using the pseudorapidity density distributions of charged particles and the Landau-Carruthers hydrodynamics. In subsequent subsections, the scaling of total charged particles with collision centrality and its energy dependence are discussed. This follows with similar discussions on the photon pseudorapidity density at forward rapidities in Section-IV, which includes longitudinal scaling of photons. Subsequently, in Section-V, discussions are made on the production of transverse energy and its use for centrality determination. Section-VI includes discussions on collision energy dependence of transverse energy, which is followed by discussions on the centrality dependence in Section-VII. Section-VIII includes discussions on estimation of initial energy density in Bjorken hydrodynamic scenario, and its energy and centrality dependences. Further we correlate the energy and centrality dependence of transverse energy per charged particle with chemical freeze-out criteria in Section-IX. In Section-X, we summarize the review with conclusions. Appendix discusses on the important properties of Gamma and Negative Binomial Distributions. Centrality determination ======================== In heavy-ion collisions, the event centrality is of utmost importance to understand the underlying physics of the collision. The event centrality is related to the impact parameter, defined as the distance between the centroids of the two colliding nuclei in a plane transverse to the beam axis, of the collision. The impact parameter tells about the overlap volume of the two nuclei. This overlap volume determines the geometrical parameters, like number of participant nucleons ($N\_{\rm{part}}$), number of spectator nucleons ($N\_{\rm{spec}}$) and the number of binary collisions ($N\_{\rm{coll}}$). The impact parameter can not be determined experimentally. However, the global observables, like total charged particles (*N**c**h*), transverse energy (*E**T*) or energy deposited in ZDC (*E**z**d**c*) etc., are related to this geometrical quantity. By combining the experimental observables with simulation, one can estimate the impact parameter and hence, the centrality of the event class. The centrality is expressed as the percentile (*c*) of the total hadronic interaction cross section corresponding to the charged particle multiplicity above certain threshold and is given by, $$c = \frac{1}{\sigma\_{AA}}\int\_{0}^{b} \frac{d\sigma}{db^{\prime}}db^{\prime} ~~.$$ In Eq (1), *σ**A**A* is the total nuclear interaction cross section of A+A collision. Assuming constant luminosity, the cross section can be replaced by the number of observed events after the trigger efficiency correction. But at very high energy, when these two nuclei pass by each other, there is a large QED cross section because of the electromagnetic field. This QED cross section is much larger than the hadronic cross section and this contaminates the most peripheral events. That is why the centrality determination is restricted to some percentile where the QED contribution is negligible. The fraction of hadronic events excluded by such cut as well as the trigger efficiency can be estimated by using a Glauber model simulation. For a given impact parameter, the $N\_{\rm{part}}$ and $N\_{\rm{coll}}$ can be estimated by Glauber Monte Carlo method. The parametrized Negative Binomial Distribution (NBD) can be used to describe the nucleon-nucleon collisions. For heavy-ion collisions, $N\_{\rm{part}}$ and $N\_{\rm{coll}}$ are used to generate the number of charged particles by incorporating two-component model in the following way: *N**a**n**c**e**s**t**o**r**s* = *f* × *N**p**a**r**t* + (1 − *f*) × *N**c**o**l**l*. This $N\_{\rm{ancestors}}$ refers to the “independent emitting source”. The two-component model given in Eq (2) incorporates the soft and hard interactions. Soft process is related to the $N\_{\rm{part}}$ and hard process is related to $N\_{\rm{coll}}$. The functional form of NBD distribution is given by, $$P(\mu, k, n) = \frac{\Gamma(n+k)}{\Gamma(n+1)\Gamma(k)}. \frac{(\mu/k)^{n}}{(\mu/k+1)^{n+k}}.$$ Eq (3) represents the probability of measuring *n* hits per ancestor. Here, *μ* represents the mean multiplicity per ancestor and *k* controls the width of the distribution. In *p* + *p*(*p̄*) collision, a Negative Binomial Distribution (NBD) with a fixed value of *μ* and *k* well describes the charged particle multiplicity data. The charged particle multiplicity for nucleus-nucleus collisions with a given impact parameter is generated by sampling $N\_{\rm{ancestors}}$ times the *p* + *p* multiplicity, which is generated by using NBD. Finally, a *χ*2 minimisation is done by fitting the Glauber Monte Carlo generated multiplicity and the charged particle multiplicity obtained from the collision data. The *χ*2 minimization will give us the value of *f*, *μ* and *k*. This gives a connection between an experimental observable and a Glauber Monte Carlo. From this one can have access to $N\_{\rm{part}}$ and $N\_{\rm{coll}}$ for a given class of centrality by NBD-Glauber fit. For example, the centrality determination in ALICE using V0 amplitude is given in Figure [centrality]. The two component model is fitted with the V0 amplitude in Figure [centrality] to find out the $N\_{\rm{part}}$ and $N\_{\rm{coll}}$ values for a corresponding centrality. [centrality] PSEUDORAPIDITY DENSITY DISTRIBUTION OF CHARGED PARTICLES (*d**N**c**h*/*d**η*) ============================================================================== Energy dependence of *d**N**c**h*/*d**η* for different collision species ------------------------------------------------------------------------ [dNchdEtaCuCu] [dNchdEtaAuAu] [dNchdEtaPbPb] The $dN\_{\rm{ch}}/d\eta$ distributions as a function of pseudorapidity of most central events for Cu+Cu collisions at $\sqrt{s\_{\rm{NN}}}$ = 22.4 GeV, 62.4 GeV and 200 GeV are given in Figure [dNchdEtaCuCu]. Similarly, the $dN\_{\rm{ch}}/d\eta$ distributions for Au+Au system at $\sqrt{s\_{\rm{NN}}}$ = 19.6 GeV, 62.4 GeV, 130 GeV and 200 GeV are given in Figure [dNchdEtaAuAu]. Both the collision systems, i.e. Cu+Cu and Au+Au, data are from PHOBOS experiment which has maximum pseudorapidity coverage of ∣*η*∣ < 5.3 at RHIC. In Figure [dNchdEtaPbPb], the charged particle pseudorapidity distributions of Pb+Pb collisions at different energies are presented. The filled circles and star markers correspond to the fixed target experiment for beam energies 40 AGeV and 158 AGeV, respectively. For the fixed target experiment, the *x*-axis is $\eta-\eta\_{\rm{peak}}$. Here, $\eta\_{\rm{peak}}$ corresponds to the peak position of the $dN\_{\rm{ch}}/d\eta$ distribution in the fixed target experiment. Theoretically, for fixed target environment, the $\eta\_{\rm{peak}}= \eta\_{\rm{mid}} = y\_{\rm{beam}}/2$ = 2.24 at 40 AGeV and 2.91 at 158 AGeV for Pb+Pb collisions, respectively. In experiment, $\eta\_{\rm{peak}}$ is obtained by fitting a Gaussian function to the $dN\_{\rm{ch}}/d\eta$ distribution. From the fitting, the $\eta\_{\rm{peak}}$ comes out to be 2.43 and 3.12 for 40 AGeV and 158 AGeV, respectively. The hollow points at 40 AGeV correspond to the mirror reflection around the *η**p**e**a**k*. The Pb+Pb data at 40 AGeV and 158 AGeV taken from NA50 have pseudorapidity coverage (*η* − *η**p**e**a**k*) < 1 and $(\eta - \eta\_{\rm{peak}}) \leq 2$, respectively. In Figure [dNchdEtaPbPb], the circles represent the beam energy of 40 AGeV, the star markers represents the beam energy of 158 AGeV. The $dN\_{\rm {ch}}/d\eta$ values of Pb+Pb collisions at $\sqrt{s\_{\rm{NN}}}$=2.76 TeV which are represented by squares and taken from ALICE experiment. ALICE has more wider pseudorapidity coverage (∣*η*∣ < 5.25). The data shown in Figure [dNchdEtaCuCu], [dNchdEtaAuAu] and [dNchdEtaPbPb] correspond to the most central events in the midrapidity. It is observed from Figure [dNchdEtaCuCu], [dNchdEtaAuAu] and [dNchdEtaPbPb] that the distribution is symmetric around *η* = 0. It is also found that with the increase of collision energy, the width and amplitude of the $\frac{dN\_{\rm{ch}}}{d\eta}$ increase. Similarly, width of the central plateau region also increases with increase of energy. Moreover, the plateau region converts into a dip for Pb+Pb collisions at $\sqrt{s\_{\rm{NN}}}$=2.76 TeV as shown in Figure [dNchdEtaPbPb]. This can be addressed by the particles compositions which is directly related to the chemistry of the QGP. The pseudorapidity distributions of kaon has more dip than pion and proton has more dip than pion and kaons at *η* = 0. This is because of the mass of the particles. In other way, heavier is the particles, the more is the dip in its pseudorapidity distribution. In the mean time, the transverse momentum spectra of identified particles show that the total proton+anti-proton production cross section is higher at LHC than at RHIC. That explains the dip observed for the Pb+Pb collisions at LHC energy. The multiplicity distribution at LHC can be well described by the double Gaussian function as follows, $$A\_{1}e^{\frac{-\eta^2}{\sigma\_{1}^{2}}} - A\_{2}e^{\frac{-\eta^2}{\sigma\_{2}^{2}}}$$ It is reported in Ref that the values of *A*1/*A*2, *σ*1, *σ*2 are same within the errors for each measured centrality bin. To test this whether it is valid for other systems and energies, we tried to fit this double Gaussian function to other multiplicity distributions of Au+Au and Cu+Cu systems measured at $\sqrt{s\_{\rm{NN}}}$= 200 and 130 GeV. To check the consistency, we considered $dN\_{\rm{ch}}/d\eta$ distributions of three centralities: 0-6%, 6-15% and 15-25%. The *χ*2 of the fitting, the fitting parameters *A*1, *A*2, the ratio of *A*1/*A*2, *σ*1, *σ*2 are given in Table [fitvalue1], [fitvalue2] and [fitvalue3]. It can be seen from the tabulated values that the values of *A*1/*A*2, *σ*1, *σ*2 are same within the errors for different centralities at a particular energy. Hence, this observation for RHIC energies agree with the observation made at LHC energy. It can be seen from Figure [dNchdEtaCuCu], [dNchdEtaAuAu] and [dNchdEtaPbPb] that with increase of energy, the width of pseudorapidity distribution increases. This can be related to the longitudinal flow and velocity of sound of the system (*c**s*) using Landau hydrodynamic model. It is observed that with increase of energy, the velocity of sound increases and can be understood in the rapidity space as follows. $$\sigma^{2}\_{y} = \frac{8}{3} \frac{c\_{s}^{2}}{1-c\_{s}^{4}} ln(\sqrt{s\_{NN}}/2m\_{p}).$$ where *m**p* is mass of proton and *σ**y* is the width of rapidity distribution of charged particles and *c**s*2 is the square of the velocity of sound, which equals to 1/3 for an ideal gas. [!h] |c | c | c | c | c | c | c | Centrality(%) & *χ*2/*n**d**f* & *A*1 & *A*2 & *A*1/*A*2 & *σ*1 & *σ*2 0-6& 2.787/48 & 1130 ± 60.52 & 951.2 ± 56.7 & 1.19 & 2.94 ± 0.06& 2.62 ± 0.08 6-15&1.238/48 & 821.7 ± 36.66& 682.5 ± 36.67 & 1.20 & 3.0 ± 0.08& 2.65 ± 0.09 15-25 & 0.913/48 &789.9 ± 26.18 & 670.7 ± 525 & 1.18 & 3.02 ± 0.113 & 2.77 ± 0.12 [fitvalue1] [!h] |c | c | c | c | c | c | c | Centrality(%) & *χ*2/*n**d**f* & *A*1 & *A*2 & *A*1/*A*2 & *σ*1 & *σ*2 0-6& 2.574/48 & 1987 ± 106 & 1461 ± 86.48 & 1.36 & 2.96 ± 0.04& 2.28 ± 0.06 6-15& 1.591/48 & 1831 ± 183.9 & 1344 ± 186.9 & 1.36 & 2.99 ± 0.08& 2.42 ± 0.08 15-25 & 1.427/48 & 1488 ± 116.1 & 1125 ± 78.8 & 1.32 & 3.01 ± 0.50 & 2.53 ± 0.06 [fitvalue2] [!h] |c | c | c | c | c | c | c | Centrality(%) & *χ*2/*n**d**f* & *A*1 & *A*2 & *A*1/*A*2 & *σ*1 & *σ*2 0-6& 4.987/48 & 1451 ± 132.1 & 904.8 ± 143.1 & 1.61 & 2.89 ± 0.06& 2.04 ± 0.10 6-15& 3.47/48 & 1128 ± 24.3 & 699.3 ± 88.9 & 1.61 & 2.97 ± 0.08& 2.13 ± 0.08 15-25 & 1.674/48 & 898 ± 9.9 & 600 ± 60.7 & 1.51 & 2.99 ± 0.03 & 2.3 ± 0.08 [fitvalue3] Longitudinal Scaling -------------------- Charged particle production in the higher rapidity region are subject of interest in terms of hypothesis of limiting fragmentation. According to this hypothesis, the observed pseudorapidity density of particle as a function of $\eta^{\prime} =\eta - y\_{\rm{beam}}$ approaches a limiting value in the fragmentation region even if the colliding energy is increased. Here $y\_{\rm{beam}} = ln \left( \sqrt{s\_{\rm{NN}}}/m\_{\rm{p}} \right)$. This can be explained by considering the whole heavy-ion collision process in laboratory frame of one of the nuclei. The hypothesis can be represented as follows. In the laboratory frame, out of the produced particles, some of them will have velocity increasing with the increase of collision energy. But some of them will have fixed velocity (or pseudorapidity) as collision energy increases which is postulated as they have approached a limiting distribution. This can be explained as follows. In the frame of the target nucleus, the projectile is Lorentz contracted and appears like a disk, collides and produces particles. As colliding energy is increased, the target will observe that a more contracted disk is colliding with it. However, the momentum transfer process between the projectile and target does not change with respect to the contraction rate. This leads to the limiting distribution of produced particles in the fragmentation region even if the collision energy is increased. One of the advantages of this observation is that it can be seen both in rapidity as well as in pseudorapidity distributions of the particles because at large forward rapidity region, $\eta \sim y - ln(p\_{\rm T}/m\_{\rm T})$. [LongScalCuCu] [LongScalAuAu] [LongScalPbPb] The normalized charged particle multiplicity density per participant pair as a function of $\eta^{\prime} = \eta - y\_{\rm{beam}}$ for different collision systems and different energies are shown in Figure [LongScalCuCu], [LongScalAuAu] and [LongScalPbPb]. In Figure [LongScalCuCu], the data are shown for Cu+Cu collisions at $\sqrt{s\_{NN}}$ = 22.4 GeV, 62.4 GeV and 200 GeV. In Figure [LongScalAuAu], the data are shown for Au+Au collisions at $\sqrt{s\_{NN}}$ = 19.6 GeV, 62.4 GeV, 130 GeV and 200 GeV. Similarly, in Figure [LongScalPbPb], the data are shown for Pb+Pb collisions at beam energies of 40 AGeV, 158 AGeV and at $\sqrt{s\_{NN}}$ = 2.76 GeV. The charged particle numbers for Pb+Pb collisions at $\sqrt{s\_{NN}}$ = 2.76 TeV at forward rapidity are estimated by extrapolating the double Gaussian function used to explain the charged particle distribution. Figure [LongScalCuCu], [LongScalAuAu] and [LongScalPbPb] show the saturation or limiting nature of charged particle density at very high value of *η* − *y**b**e**a**m* even if the energy of the projectile is increased. It is also observed in high energy *e*+ + *e*−, *p* + *p* and *d* + Au collisions. The hypothesis of limiting fragmentation assumes that the hadronic cross section approaches an asymptotic value at high energy. That means the hadronic excitation and breakup probability are almost independent of projectile energy. But later it is found that the hadronic cross section increases with increase of center of mass energy. The most spectacular fact of this hypothesis is that still this phenomenon is observed for a wide range of collision energies. Later this limiting fragmentation was tried to explain through Color Glass Condensate (CGC) model. The gluon saturation picture at very small *x* is used to understand this phenomenon. The charged particle multiplicity density normalized to participant pair obtained from CGC model is compared with the RHIC data at different energies. This CGC based model calculations provides reasonable description of the data at the fragmentation region for *p* + *p* and *A* + *A* collisions systems by considering different scale parameters and initial conditions. However, more precise modelling of the impact parameter dependence of the “unintegrated” gluon distribution functions is demanded in these models. In addition to this, the precise estimation of final state effects and inclusion of quark distributions into this frameworks are needed to explain the whole spectrum of data. In the framework of statistical thermal model, the extended longitudinal scaling can also be explained upto RHIC energies. It is also predicted that the LHC data will not show the longitudinal scaling. In Ref, the string percolation model predictions were also used to support their predictions. However, the recent LHC data violate the predictions from thermal model and follow the universal longitudinal scaling. It indicates that at LHC, some non-equilibrium phenomenon may be playing a role, which needs to be understood. [RcpAu] [RcpCu] [RcpPb] It is reported in Ref that the shape of the scaled pseudorapidity density in the rest frame of the projectiles is independent of the beam energy. However, this shape differs when it is studied as a function of different centralities. This centrality dependence mainly because of an excess of particles at high *η* and narrowing of the width of the pseudorapidity distribution in peripheral *A* + *A* collisions. The excess particles basically originate from nuclear remnant in the peripheral collisions. So it is realized that the shape is mainly a function of collision geometry. To cancel out the geometry effect, it is argued in Ref that ratio of $dN\_{\rm{ch}}/d\eta$ normalized to $N\_{\rm{part}}$ of central to peripheral events (*R**P**C*) can be used to ensure the observations on the energy-independence of the shape called as longitudinal scaling in the forward rapidities. In Ref, variable *R**P**C* is defined as follows $$R\_{PC}\left(\eta^{\prime}, 35-45\%\right) = \frac{\left ( dN\_{ch}/d\eta \right )^{35-45\%}/N\_{part}^{35-45\%}}{\left( dN\_{ch}/d\eta\right )^{0-6\%} / N\_{part}^{0-6\%}}$$ was introduced which shows the energy independence behaviour for Au+Au collisions at $\sqrt{s\_{\rm{NN}}}$ = 19.6, 130 and 200 GeV. This is shown in Figure [RcpAu]. The similar observation is tried for other collision systems. The $R\_{\rm{PC}}$ as a function of $\eta - y\_{\rm{beam}}$ for Cu+Cu collisions at $\sqrt{s\_{\rm{NN}}}$= 22.4, 62.4 and 200 GeV are shown in Figure [RcpCu]. Similarly, in Figure [RcpPb], values of $R\_{\rm{PC}}$ of Pb+Pb collisions at beam energies of 40 AGeV, 158 AGeV and $\sqrt{s\_{\rm{NN}}}$ = 2.76TeV are shown. Very interestingly, we observe both for Au+Au ( Figure [RcpAu]) and Cu+Cu (Figure [RcpCu]) collision data that $R\_{\rm{PC}}$ is independent of collision energy. For Pb+Pb collisions at 2.76 TeV, the peripheral events corresponding to 20-30% centrality and central events of 0-5% centrality. For 158 AGeV and 40 AGeV Pb+Pb collisions, peripheral events correspond to 25-35% centrality and central events correspond to 0-5% centrality. From Figure [RcpPb], it is difficult to conclude about the Pb+Pb collision data for the three energies as the data are not available for the whole pseudorapidity range as far as this discussion is concerned. However, the trend of the *R**P**C* values as a function of *η* − *y**b**e**a**m* in Figure [RcpPb] goes in line with the observations at RHIC. Factorization ------------- In a typical heavy-ion collision process, the nucleons in the overlap zones are called as participant nucleons which must have suffered at least one inelastic collision. Hence, the charged particles produced in the collision may have some relation with the number of participant nucleons in the reaction zone as well as the number of binary collisions. A nucleus-nucleus collision can be thought of superposition of many individual *p* + *p* collisions. So the final charged particle density should have some empirical relationship with the $\langle N\_{\rm{part}} \rangle$ and number of binary collisions ($N\_{\rm{coll}}$). In the framework of “wounded nucleon model”, it is observed that the $\frac{dN\_{ch}}{d\eta}$ scales with some power of $N\_{\rm{part}}$ upto the SPS energy. That is called as power law fit and is given by, $$\frac{dN\_{ch}}{d\eta} \propto N\_{part}^{\alpha}$$ where *α* is found to be  ∼ 1 for SPS energies. This linear relation with $N\_{\rm{part}}$ is interpreted as that the particle production upto SPS energies is mainly from the soft processes. However, the particle multiplicity at RHIC energies could not be explained by the above relationship. Then a two-component model was adopted which incorporate both the contribution of soft and hard processes by considering the $\langle N\_{\rm{part}} \rangle$ and $\langle N\_{\rm{coll}} \rangle$ to describe the final state hadron multiplicity. The two-component model is given as, $$\frac{dN\_{ch}}{d\eta} = (1-x) n\_{pp} \frac{\langle N\_{part} \rangle}{2} + x n\_{pp} \langle N\_{coll} \rangle.$$ where *n**p**p* is the measured multiplicity density in *p* + *p* collisions due to *x* fraction of hard processes and (1 − *x*) fraction represents the soft process. Number of binary collisions is proportional to nucleon-nucleon inelastic cross section ($\sigma^{\rm{NN}}\_{\rm{inel}}$). With increase of collision energy, the $\sigma^{\rm{NN}}\_{\rm{inel}}$ also increases. This results in dramatic increase of $N\_{\rm{coll}}$ with the increase of collision energy and therefore, the contribution of hard process will be dominant for particles production. So it is expected that there will be a strong centrality dependence of pseudorapidity distributions at higher energies. This can be tested by taking the ratio of scaled yield at the respective centralities at different energies. It is reported in Ref that the centrality dependence of particle production in the midrapidity exhibits factorization of beam energy and collision centrality as follows, $$\frac{2}{\langle N\_{part} \rangle} \frac{dN\_{ch}}{d\eta} = f(s) ~g(N\_{part}) \label{eqn:factorization}$$ Eq (9) basically illustrates the energy-centrality factorization. In the right hand side of Eq (9), the first term, i.e. *f*(*s*), depends on the energy and the second term, i.e. $g(N\_{\rm{part}})$, depends on the $\langle N\_{\rm{part}} \rangle$. In the midrapidity, the charged particle multiplicity density normalized to the participant pair, ($\langle N\_{\rm{part}} \rangle/2$) at different energies is shown in Figure [fact1]. The collision data are fitted with the parametrized form of right hand side of Eq (9). For Au+Au collision, the parametrized forms of *f*(*s*) and $g(N\_{\rm{part}})$ found from Ref are as follows, *f*(*s*) = 0.0147[*l**n*(*s*)]2 + 0.6 *g*(*N**p**a**r**t*) = 1 + 0.095*N**p**a**r**t*1/3 [fact1] [fact2] [fact3] Similarly, for Cu+Cu collisions, the co-efficients of *f*(*s*) does not change. However, the co-efficient of $N\_{\rm{part}}^{1/3}$ in $g(N\_{\rm{part}})$ changes, which is given by, *g*(*N**p**a**r**t*) = 1 + 0.129*N**p**a**r**t*1/3 In Figure [fact2], ratios of the charged particle multiplicity density normalized to the participant pair of Pb+Pb collisions at $\sqrt{s\_{NN}} = 2.76 $ TeV and Au+Au collisions data at different energies are shown as a function of $\langle N\_{\rm{part}} \rangle$. This observation implies that the pseudorapidity density of particles in the midrapidity normalized per participant pair can be factorized. However, when the collision system changes, the $N\_{\rm{part}}^{1/3}$ dependence comes into picture. We tried to fit the parametrized form of Eq (11) with the LHC data. We keep the form of *f*(*s*) same and set one parameter free of $g(N\_{\rm{part}})$. However, it doesn’t fit the data. This is shown in Figure [fact3]. Contrary, when both the parameters of Eq (11) are set free, then it fits well to the data. This observation contradicts the observation at RHIC. The RHIC data show that only the co-efficient of $N\_{\rm{part}}^{1/3}$ changes when collision system changes at the same collision energy. However, at LHC energy, the energy as well as the system size changes. After *χ*2 minimization, for better fit, we get the following form of $g(N\_{\rm{part}})$ for LHC data. It can be inferred that some other factor is playing a role for the particle production at LHC energy in addition to the RHIC energy. *g*(*N**p**a**r**t*) = 0.833 + 0.142*N**p**a**r**t*1/3 Expansion dynamics ------------------ The space-time evolution of the fireball created in the heavy-ion collisions can be explained by relativistic hydrodynamical approach which assumes that the medium is continuously flowing. The elliptic flow measurements, the two-particle correlations and transverse momentum spectra results at RHIC have given ample evidence of a strongly interacting medium created in the laboratory. There are many proposed statistical as well as hydrodynamical models in the past to explain the multiplicity and expansion dynamics of the systems. Landau hydrodynamics model is one of them, which is widely used to explain the expansion of the system produced in the collision, like *e*+ + *e*−, *p* + *p* and *A* + *A*. It has successfully explained the low energy collision data including the charged pion data at RHIC. The form of Landau hydrodynamics has been evolved with time to explain the global particle multiplicity and the differential rapidity distribution. The width of the charged particle density distribution in the midrapidity can shed some light on the longitudinal expansion dynamics of the system, velocity of sound and initial and final state rescattering. A detailed analysis about these are given in Ref.. It can also be used to define the degree of stopping or transparency in the heavy-ion collision reactions. The form of Landau-Carruthers rapidity distributions is given as, $$\frac{1}{N} \frac{dN}{d\lambda} = \frac{ exp( -\lambda^{2}/2L) } { ( 2\pi L)^{1/2}}$$ where *λ* = *η* =  − *l**n* *t**a**n*(*θ*/2) and $L = ln~\gamma = \frac{1}{2} ln (s/4m^2)$ and *γ*, which is equal to $\sqrt{s\_{NN}}/2m\_{p} $, is the Lorentz contraction factor. Here *m**p* is the mass of the proton. The Gaussian form of it is given as, $$\frac{dN}{dy} \propto exp(- y^{2}/2L).$$ Later in Ref, the pseudorapidity variable is substituted by rapidity to describe the distribution appropriately (the rapidity distribution of charged particles differs from pseudorapidity distribution at the smaller rapidity region). Then the rapidity distribution is given as, $$\frac{dN}{dy} \propto exp( \sqrt{y\_{b}^{2} - y^2} ),$$ where the beam rapidity, *y**b*, in the center of mass frame is $\text{cosh}^{-1} (\sqrt{s\_{NN}}/2m\_{p}) = ln(\sqrt{s\_{NN}}/m\_{p})$. Then Ref connects the total entropy of the system with the number density such that their ratio is constant for a thermally equilibrated system. [HydroCu] [HydroAu] [HydroPb] [WidthRatio] It is found that when the transformation of the distribution is made to rest frame of one of the colliding nuclei, the Gaussian form as given in Eq (15) shows the limiting fragmentation behaviour. And surprisingly, by setting some parameters, it also matches multiplicity distributions with the CGC calculations. In this review, we have tried to see the agreement of pseudorapidity distributions of charged particles by Landau-Carruthers function. The advantage of fitting Landau-Carruthers form to the data is that the *λ* variable used in the function has similar form as the pseudorapidity. The multiplicity distribution of Cu+Cu collision data as a function of rapidity for different energies are shown in Figure [HydroCu]. The Cu+Cu collision data are fitted with the Landau-Carruthers functions. The multiplicity distribution of Au+Au collisions as a function of rapidity for different energies are shown in Figure [HydroAu]. Similarly, the rapidity distributions of charged particles of Pb+Pb collisions at different energies are shown in Figure [HydroPb]. The $dN\_{\rm{ch}}/d\eta$ distribution of charged particles are also fitted with double-Gaussian functions. The width of the distributions obtained from the data and the models are divided and shown as a function of collision energy in Figure [WidthRatio]. It is observed from Figure [WidthRatio] that Landau-Carruthers hydrodynamics form explains the data starting from AGS, SPS to RHIC as the ratio is closed to one. However, the LHC data is far from one which implies that Landau hydrodynamics does not explain the expansion dynamics at LHC energy. Energy dependence of multiplicity density ----------------------------------------- [EngDep] The energy dependence of charged particle multiplicity density distribution per participant pair for most central collisions in heavy-ion collisions at midrapidity and for nucleon-nucleon non-single diffractive (NSD) and inelastic (INEL) collisions, as a function of collision energy is shown in Figure [EngDep]. The data points are from different energy and different collision species. To explain the normalized particle distribution in the midrapidity, different phenomenological functions are fitted. Upto top RHIC energy $\frac{dN\_{\rm{ch}}}{d\eta}$ for heavy-ion collisions is well described by a logarithmic function. However, the LHC data is underestimated by logarithmic function upto 26%. On the other hand, a power law fit seems to overestimate the low energy data for nucleus-nucleus collisions while explaining the high energy data upto LHC energies. Looking at the low-energy and high-energy behaviours of charge particle production being well-described by a logarithmic function and power-law functions, respectively, we have tried to fit a hybrid function (a combination of both) and find a very good agreement with the nucleus-nucleus data at all energies upto LHC 2.76 TeV. The physics motivation of the hybrid function can be explained by considering the result by Wolschin *et al.* which states that at high energy, charged particle multiplicity can be explained by a combination of midrapidity gluonic source predicted by the power law function and a fragmentation source predicted by logarithmic function. The predictions from IP-saturation model for the top RHIC energy and higher are also shown for a comparison with the corresponding nucleus-nucleus experimental data. For a direct comparison with A+A data, we have put together the *p* + *p*(*p̄*) NSD and INEL data. Both the data seem to fit to a power-law behaviour with the power decreasing while going from A+A to *p* + *p*(*p̄*) collisions. Scaling of $N\_{\rm{ch}}^{\rm{total}}$ with $N\_{\rm{part}}$ ------------------------------------------------------------ It is observed that the particle multiplicity at midrapidity does not scale with the number of participant nucleons, i.e. $N\_{\rm{part}}$. It is observed from Ref that the total charged particle measured over a wide range of pseudorapidity, when normalized per participant pair, scales with $N\_{\rm{part}}$. We considered different collision energies and collision systems to see the scaling behaviour of total charged particles. The normalized $N\_{\rm{ch}}^{\rm{total}}$ per participant pair as a function of $N\_{\rm{part}}$ are shown in Figure [NtotCu] and Figure [NtotAu] for Cu+Cu and Au+Au collisions, respectively. The error bars shown in the figures are statistical only. [NtotCu] [NtotAu] It is observed from Figure [NtotCu] and Figure [NtotAu] that the participant pair normalized $N\_{\rm{ch}}^{\rm{total}}$ scales perfectly with $N\_{\rm{part}}$ within the statistical uncertainties. Both for Cu+Cu and Au+Au systems, the normalized value of $N\_{\rm{ch}}^{\rm{total}}$ with respect to $N\_{\rm{part}}$ is constant as a function of $N\_{\rm{part}}$ and increases with increase of collision energy. It implies that modifications to particle production at forward rapidities are strongly correlated with compensating changes at midrapidity. Energy dependence of *N**c**h**t**o**t**a**l* --------------------------------------------- As discussed earlier, the total charged particles normalized par participant pair ($N\_{\rm{ch}}^{\rm{total}}/0.5~\langle N\_{\rm{part}} \rangle$ ) for Cu+Cu, Au+Au systems at different collision energies are independent of centrality. In addition to this, the $N\_{\rm{ch}}^{\rm{total}}$ value increases with increase of energy for all centralities. The energy dependence of $N\_{\rm{ch}}^{\rm{total}}$ from AGS to LHC is shown in Figure [engDepNch]. [engDepNch] It is observed that the $dN\_{\rm{ch}}/d\eta$ distribution in the midrapidity is almost flat and the width of the distribution decreases with decrease of collision energy. The fragmentation region can be explained by $dN\_{\rm{ch}}/d\eta = \alpha ( y\_{\rm{beam}} + \eta\_{0} - \eta)$. So the overall $dN\_{\rm{ch}}/d\eta$ distribution now can be thought of a trapezoid and hence, the total charged particles can be given by the trapezoidal rule as follows. $$N\_{ch}^{tpz} = \frac{dN\_{ch}|\_{0}}{d\eta} \left( 2\eta\_{0} + 2y\_{beam} - \frac{\langle N\_{part} \rangle}{2\alpha} \frac{dN\_{ch}|\_{0}}{d\eta} \right) \label{eqnTrap}$$ As $y\_{\rm{beam}} \simeq \frac{1}{2} \text{ln} s\_{\rm{NN}} - \text{ln}(m\_{0}c^{2})$ for $\sqrt{s\_{\rm{NN}}} \gg m\_{0}$, *m*0 is the mass of the nucleon, Eq. [eqnTrap] reduces to $$\frac{N\_{ch}^{tpz}}{0.5~\langle N\_{part} \rangle} \simeq \text{A} + \text{B}~\text{ln} ~s\_{NN} +\text{C} {(\text{ln}~s\_{NN})}^{2}. \label{eqnTrap1}$$ To explain the evolution of $N\_{\rm{ch}}^{\rm{total}}/(0.5~\langle N\_{\rm{part}} \rangle)$ with respect to $\sqrt{s\_{\rm{NN}}}$, parametrized form of Eq. [eqnTrap1] is used and fitted with the collision data which is shown by a dashed line in Figure [engDepNch]. It is found that this equation explains the PHOBOS Cu+Cu and Au+Au data at RHIC. However, it fails to explain the data at lower energies. Only after considering the leading term (${\text{ln}~s\_{\rm{NN}}})^{2}$ in Eq. [eqnTrap1], it explains the whole spectrum of energy dependence of total charged particles very nicely starting from $\sqrt{s\_{\rm{NN}}}$ = 2.4 GeV to $\sqrt{s\_{\rm{NN}}}$ = 200 GeV. The general form is, $$\frac{N\_{ch}^{tpz}}{0.5~\langle N\_{part} \rangle} =\text{A} + \text{C}~ {(\text{ln}~s\_{NN})}^{2}. \label{eqnTrap2}$$ The fitting of Eq. [eqnTrap2] to the data point is shown in Figure [engDepNch] by the solid line. It can be seen from Figure [engDepNch] that derived form of trapezoidal rule given by Eq. [eqnTrap1] and Eq. [eqnTrap2] underestimate the $N\_{\rm{ch}}^{\rm{total}}$ of Pb+Pb collisions at $\sqrt{s\_{\rm{NN}}}$ = 2.76 TeV measured by the ALICE experiment. This is because the $dN\_{\rm{ch}}/d\eta$ distribution of Pb+Pb data has a dip which in principle deviate from a trapezoidal shape. Measuring $N\_{\rm{ch}}^{\rm{total}}$ as a function of ${(\sqrt{s\_{\rm{NN}}})}^{1/2}$ is important in terms of Landau hydrodynamics. According to Landau hydrodynamics, the ratio of entropy density to the number density for a thermally equilibrated system is constant. In other words, the number density is proportional to the entropy density and hence the total number of particles is proportional to the total entropy. To be noted that during the hydrodynamic expansion of the system, the total entropy remains constant. So by measuring the total observed particles, the initial entropy can be determined and vice versa. For a system which is in local thermal equilibrium, the entropy density is proportional to the energy density and under this assumption, we can arrive at this relationship of $N\_{\rm{ch}}^{\rm{total}}$ with respect to the center of mass energy $\sqrt{s\_{\rm{NN}}}$ as follows. $$\frac{N\_{ch}^{total}}{0.5~\langle N\_{part} \rangle} = K {(\sqrt{s\_{NN}}/GeV)}^{1/2} \label{landau}$$ The parametrized form of Eq. [landau] is obtained for PHOBOS Au+Au data, which is given by $$\frac{N\_{ch}^{total}}{0.5~\langle N\_{part} \rangle} = 1.135 + 2.019 {(\sqrt{s\_{NN}}/GeV)}^{1/2}$$ and in general can be written as, $$\frac{N\_{ch}^{total}}{0.5~\langle N\_{part} \rangle} = \text{A} + \text{B} {(\sqrt{s\_{NN}}/GeV)}^{1/2} \label{landau1}$$ We have tried to fit Eq. [landau1] to the $N\_{\rm{ch}}^{\rm{total}}/(0.5~\langle N\_{\rm{part}} \rangle)$ data as a function of $({\sqrt{s\_{\rm{NN}}}/\rm{GeV}})^{1/2}$ obtained from AGS to LHC experiments which is shown by the dotted line in Figure [engDepNchLandau]. [engDepNchLandau] It is observed that Eq. [landau1] fails to explain the LHC data, as it over predicts. This observation goes inline with the measurement as shown in Figure [WidthRatio], *i.e.* the width of $dN\_{\rm{ch}}/d\eta$ of Pb+Pb data at $\sqrt{s\_{\rm{NN}}}$ = 2.76 TeV is more than the expectation of Landau hydrodynamics. It is seen that the hybrid function nicely describes the whole $dN\_{\rm{ch}}/d\eta$ distribution as a function of $\sqrt{s\_{\rm{NN}}}$ and Landau hydrodynamics can’t explain the LHC data. With this motivation, we tried to fit a hybrid form as given in Eq. [NtotHy] to fit the $N\_{\rm{ch}}^{\rm{total}}/(0.5~\langle N\_{\rm{part}} \rangle)$ as a function of $\sqrt{s\_{\rm{NN}}}$. $$\frac{N\_{ch}^{total}}{0.5~\langle N\_{part} \rangle}= \text{A} + \text{B} ~\text{ln}~(\sqrt{s\_{NN}}) + \text{C} {(\sqrt{s\_{NN}})}^{n} \label{NtotHy}$$ It is found that this hybrid function can explain the whole range of the data upto the LHC energy as shown in Figure [NtotPred]. The extrapolation of this function for the upcoming LHC Pb+Pb collisions at $\sqrt{s\_{\rm{NN}}}$ = 5.5 TeV is shown by the filled circle in Figure [NtotPred]. [NtotPred] PSEUDORAPIDITY DENSITY DISTRIBUTION OF PHOTONS ============================================== Photons are produced from every phase of the fireball expansion, like from hard scattering to the decay of hadrons in heavy-ion collision experiment. Photons hardly interact with the medium. So when photons get thermalized, their mean free paths become same as the system size and they leave the system unaffected. Thus it is believed that the photons carry the information of the thermalized system at all stages of the evolution of the produced fireball. Direct photons created from the QCD process are treated as golden probe to measure the thermodynamic parameters like initial temperature of the fireball. The inclusive photon spectra contain all photons including the photons produced from particle’s decay, e.g. *π*0 and *η*0. So photons can be used to estimate the degree of thermalization of the system. It is also proposed that as majority of photons are produced from *π*0 decay, so they can be used as a complementary measurement to the charged pion measurements. Photons can be used to study the anisotropic flow of the system. Photons can be used as a precursor for the measurement of pseudorapidity density distribution of charged particles. It is proposed that simultaneous measurement of charged particles with photons can be used in the search for Dis-oriented Chiral Condensate (DCC). Keeping the importance of measurement of photons in mind as a probe for QGP, we will be discussing the pseudorapidity distribution of photons. In this review, the pseudorapidity density of photons for different collision systems and at different energies are discussed. Then the expansion hydrodynamics of photons are discussed by invoking Landau hydrodynamics along with its advanced forms. In the forward rapidity, longitudinal scaling of photons are discussed. At the end, the scaling of total measured photons as a function of $\langle N\_{\rm{part}} \rangle$ is discussed for two collision systems. System size and energy dependence of photon distributions (*d**N**γ*/*d**η*) ---------------------------------------------------------------------------- The energy dependence of pseudorapidity distributions of photons are shown for Cu+Cu, Au+Au systems in Figure [dndetaCuGama] and [dndetaAuGama]. In Figure [dndetaCuGama], the pseudorapidity distribution of photons for Cu+Cu collisions at $\sqrt{s\_{NN}}$= 62.4 and 200 GeV are shown. In Figure [dndetaAuGama], *d**N**γ*/*d**η* for Au+Au collisions at $\sqrt{s\_{NN}}$= 62.4 and 200 GeV are shown. The Cu+Cu and Au+Au collision data are taken from STAR experiment at RHIC. The pseudorapidity distribution of photons of S+Au collisions data at 19.3 GeV and Pb+Pb collisions at 17.6 GeV are shown in Figure [dndetaPbGama]. The S+Au collision data and Pb+Pb collision data are taken from Ref and, respectively. Data collected are at the forward rapidity. However, to get the photon distribution in the backward rapidities, a reflection of the data about the midrapidity is done assuming that the *d**N**γ*/*d**η* is symmetric about *η* = 0 for collider experiments, e.g. Cu+Cu and Au+Au collisions. For fixed target experiments, like S+Au and Pb+Pb, the reflection is carried out with respect to the $\eta\_{\rm{peak}}$. The closed markers represent the mirror reflection of the data recorded by the detectors. The *d**N**γ*/*d**η* distributions are fitted with a double Gaussian and Landau-Carruthers functions to understand the expansion dynamics of the system. To see the extent of the Landau hydrodynamics is applicable to the system, the ratio of width of the *d**N**γ*/*d**η* of data and the width obtained from Landau-Carruthers fitting are shown as a function of collision energies in Figure [RatioGama]. It can be observed that at lower energy, it deviates from 1, but at RHIC energy, it agrees with Landau-Carruthers hydrodynamical model. It would be interesting to have corresponding LHC data to look into the validity of Landau hydrodynamics for photons at forward rapidities. [dndetaCuGama] [dndetaAuGama] [dndetaPbGama] [RatioGama] Longitudinal Scaling of Photon ------------------------------ [dGamadEtaCu] [dGamadEtaAu] In the previous section, the longitudinal scaling of charged particles in the forward rapidities are discussed. Is this longitudinal scaling a global phenomena of the heavy-ion collision or only specific to charged particle productions? To confirm this phenomena, the longitudinal scaling of photons is studied separately for two different collision species. In Figure [dGamadEtaCu], the *d**N**γ*/*d**η* as a function of *η*′ for Cu+Cu collision data at $\sqrt{s\_{NN}}$= 62.4 and 200 GeV are shown. In Figure [dGamadEtaAu], the *d**N**γ*/*d**η* for Au+Au collisions at $\sqrt{s\_{NN}}$ = 62.4, 200 GeV and Pb+Pb collision data at beam energy 158 AGeV as a function of *η*′ are shown. The *d**N**γ*/*d**η* data are available for only small pseudorapidity coverage. Still, the nature of the *d**N**γ*/*d**η* distribution as a function of *η*′ show the longitudinal scaling behaviour as a consequences of limiting fragmentation. It is observed from the Figure [dGamadEtaCu] and [dGamadEtaAu] that photon also shows the energy independent limiting fragmentation behaviour. It is seen that the limiting fragmentation of pions are same as the photon and independent of centrality unlike charged hadrons. It is also reported in Ref that the limiting fragmentation behaviour of photons for *p* + *p̄* collisions at $\sqrt{s}$ = 540 GeV are in close agreement with the measured photon yield in Au+Au collisions at $\sqrt{s\_{NN}}$ = 62.4 GeV unlike the charged particle results. Study from HIJING event generator indicates that about 93-96% of measured photons are from *π*0 decays. Hence, the centrality independent behaviour of photons is interpreted as indirect measure of meson limiting fragmentation. This contrasting behaviour of photon results of the limiting fragmentation with respect to charged hadrons may be due to nuclear remnants and baryon stopping. It indicates that mesons are not affected by baryon transport at forward rapidities. The study of identified charged particles with photon results done in Ref clearly indicates that net-proton results violate the energy independent behaviour of limiting fragmentation: a clear indication of baryon-meson anomaly. The centrality and energy independence behaviour of mesons contrary to inclusive charged hadrons and identified baryons implies that baryon transport plays an important role in particle production at forward rapidities. It is argued that although the baryon stopping is different for different collision energies, the mesons are not affected by it. In the context of baryon junction picture, baryons would have shown the energy independent limiting fragmentation behaviour at forward rapidities, if they carry the valance quarks like the mesons produced from the valance quarks. This suggests that baryon number resides in a nonperturbative configuration of gluon fields rather in the valence quarks. The longitudinal scaling behaviour observed for charged particles and photons ensure the universality of hypothesis of limiting fragmentation and put forward many deeper questions about the actual processes behind it. [RcpCuGamma] [RcpAuGamma] During the discussion of extended longitudinal scaling of charged particles, we have encountered that this is independent of energy but shows some dependence of collision geometry, i.e. centrality. Then $R\_{\rm{PC}}$ variable was adopted to deal with this issue. But in the limiting fragmentation of photons, it is found to be centrality independent. But to see the consistency, we tried to do the same exercise for photons by evaluating the $R\_{\rm{PC}}$ for different collision systems at different energies. The $R\_{\rm{PC}}$ is defined as given in Eq. (4). For the RHIC energies, the peripheral events correspond to 30-40% centrality and central events correspond to 0-5% centrality. For WA98 experiment, 25-35% centrality and 0-5% events are considered as peripheral events and central events, respectively. In Figure [RcpCuGamma], $R\_{\rm{PC}}$ for Cu+Cu collision data at $\sqrt{s\_{\rm{NN}}}$ = 62.4 and 200 GeV are shown as a function of *η*′. Similarly, in Figure [RcpAuGamma], $R\_{\rm{PC}}$ of Au+Au collision data at $\sqrt{s\_{\rm{NN}}}$ = 62.4 and 200 GeV superimposed with Pb+Pb data at beam energy 158 AGeV are shown as a function of *η*′. The error bars shown in Figure [RcpCuGamma] and [RcpAuGamma] are of statistical only. We observe from Figure [RcpCuGamma] and [RcpAuGamma] that within error bars, the $R\_{\rm{PC}}$ is constant and equal to one as a function of *η*′ irrespective of collision energies. This observation strengthen our argument that extended longitudinal scaling is a global phenomena for charged particles as well as for photons produced in the heavy-ion collision experiments. Scaling of $N\_{\gamma}^{\rm{total}}$ with $N\_{\rm{part}}$ ----------------------------------------------------------- Like the scaling of total charged particles with $N\_{\rm{part}}$, the total photons normalized per participant pairs as a function of average participant pairs are shown for Cu+Cu and Au+Au collision systems at 62.4, 200 GeV in Figure [NtotGamaCu] and [NtotGamaAu], respectively. Both the data scale nicely and the normalized *N**γ* values increase with increase of collision energy. Note that *N**γ* is the value of total number of photons measured within the detector acceptance ( − 3.7 < *η* <  − 2.3). [NtotGamaCu] [NtotGamaAu] From Figure [NtotGamaCu] and [NtotGamaAu], we observed that *N**γ* scales with the collision centrality like charged particles. [ET-min] Transverse energy and Collision Cross section ============================================= The transverse energy is one of the important global observables used to characterize the system formed in heavy-ion collisions at extreme conditions of temperature and energy density, where the formation of Quark-Gluon Plasma (QGP) is expected. The transverse energy ($E\_{\rm T}$) is the energy produced transverse to the beam direction and is closely related to the collision geometry. $E\_{\rm T}$ is an event-by-event variable defined as $${E\_T = \sum\_i E\_i sin\theta\_i ~~\rm{and}~~ \frac{dE\_T(\eta)}{d\eta} = sin\theta(\eta)\frac{dE(\eta)}{d\eta}.}$$ The sum is taken over all particles produced in an event within the detector acceptance. *E**i* and *θ**i* are the energy and polar angle of the final state particles. The energy of the individual particle can be determined by knowing their momenta and particle identification using tracking detectors and/or the total energy deposited in a calorimeter. The source of transverse energy production could be ``soft“ multi-particle production and/or the ``hard” scattering jets, depending on the collision energy. The transverse energy distribution is related to the multiplicity distribution by $$\frac{dE\_T}{d\eta} \sim \left\langle p\_T\right\rangle \times \frac{dN}{d\eta} \label{EtPt}$$ To probe the early stages of the produced fireball, it is ideal to take transverse observables like $E\_{\rm T}$, $p\_{\rm T}$ etc. This is because, before the collision of two nuclei, the longitudinal phase space is filled by the beam particles whereas the transverse phase space is empty. The $E\_{\rm T}$ is produced due to the initial scattering of the partonic constituents of the incoming nuclei and also by the re-scattering among the produced partons and hadrons. The $E\_{\rm T}$ production tells about the explosiveness of the interaction. Additionally, in the framework of boost-invariant hydrodynamics, the measurement of $E\_{\rm T}$ helps in the quantitative estimation of the initial energy density produced in the interaction. A comparison of this initial energy density with that of estimated by the lattice QCD (lQCD) calculations, gives indication of a possible formation of QGP in the corresponding heavy-ion interactions. However, there are several competing processes to make a difference between the initially generated and finally observed $E\_{\rm T}$. In an ideal case, if the fireball of the produced quanta, namely the partons or the hadrons depending on the case, break apart instantaneously without significant interactions, the observed transverse energy per unit rapidity $dE\_{\rm T}/dy$ will be the same as that was generated initially. On the other hand, if the system interacts very strongly achieving an early thermal equilibrium, which is maintained though out the system expansion, $dE\_{\rm T}/dy$ would decrease significantly due to the longitudinal work done by the hydrodynamic pressure. This decrease may however, be moderated by the build up of transverse hydrodynamic flow, which increases $E\_{\rm T}$. At higher collision energies, the difference between initially generated and finally observed $E\_{\rm T}$ may be reduced because of the gluon saturation in the wave function of the colliding heavy nuclei. This delays the onset of hydrodynamic flow and hence reduces the effective pressure, which decides the above difference. The collision centralities can be estimated by using the minimum bias $E\_{\rm T}$ distribution in a way it is done using the charged particle minimum bias distribution. This is shown in Figure [ET-min]. The shaded area in the figure corresponds to the most central (0 − 5%) collisions having the highest transverse energy. This corresponds to the 5% of the total cross section. Different centralities are defined by the percentages of total cross sections and are shown in the same figure. Each centrality class follows a Gaussian type of distribution with different mean and variance following the central limit theorem. The lower edge of the minimum bias distribution shows a peak, which corresponds to the most peripheral collisions. For the most central collisions corresponding to largest values of $E\_{\rm T}$, the shape of the distribution is mainly governed by the statistical fluctuations and the experimental acceptance. For larger acceptances, the fall off with increasing $E\_{\rm T}$ is very sharp. Ref. gives a very interesting account of addressing a fundamental question like if $E\_{\rm T}$ or the multiplicity is primary. In other words, whether $E\_{\rm T}$ production is primary, followed by fragmentation to final state particles, or whether $E\_{\rm T}$ is a random product of the particle multiplicity and the $p\_{\rm T}$ distribution. The method as discussed in the above report is as follows. If one assumes that the $E\_{\rm T}$ production is a result of the creation of the particles according to the semi-inclusive multiplicity distribution followed by the random assignment of transverse momentum to each particle in accordance with the single-particle semi-inclusive $p\_{\rm T}$ distribution, the process could be described by the equation, $$\frac{d\sigma}{dE\_T} = \sigma \sum\_{n=1}^{n\_{max}} f\_{\rm{NBD}}(n,1/k,\mu) ~ f\_{\Gamma}(E\_T,np,b), \label{Et-NBD}$$ where the multiplicity distribution in A+A collisions is represented by a Negative Binomial Distribution (NBD), $f\_{\rm{NBD}}(n,1/k,\mu)$. The $E\_{\rm T}$ distribution for *n* particles in the final state is represented by a Gamma function, *f*Γ(*E**T*, *n**p*, *b*), where *p* and *b* are the parameters of the $E\_{\rm T}$ distribution for a single particle. The details of the NBD and Gamma distributions with their properties are given in the Appendix. If we assume that the $E\_{\rm T}$ spectra for individual particles are independent of each other and in addition, it is also independent of the multiplicity *n*, then the $E\_{\rm T}$ spectrum for *n* particles is the $n^{\rm{th}}$ convolution of the single particle spectrum. As one finds difficulty in the convergence of fits to Eq. [Et-NBD], NBD was restricted to Poisson by fixing 1/*k* = 0, which in turn makes the convergence easier. If one assumes a simpler proportionality between $E\_{\rm T}$ and *n*, so that the number of particles in an event, *n* with transverse energy $E\_{\rm T}$ are related by *n* = *E**T*/⟨*p**T*⟩ (the nearest integer). The plot of ⟨*E**T*⟩*d**σ*/*d**E**T* in barn as a function of *E**T*/⟨*E**T*⟩ is fitted by the function given by Eq. [EtPt] and to the NBD given by $$\frac{d\sigma}{dE\_T} = \sigma f\_{\rm{NBD}}(E\_{\rm T}/\langle p\_{\rm T} \rangle,1/k,\mu). \label{Et-NBD-mod}$$ Note that the above NBD is now modified because of the simple relationship of $E\_{\rm T}$ and the multiplicity *n*, given by the Eq. [EtPt]. It is observed that the trend of the data leads to a better fit of single-Gamma distribution at higher values of $E\_{\rm T}$ compared to NBD and the reverse at lower values of $E\_{\rm T}$. Usually fitting functions with more number of parameters give flexibility to the fitting leading to a better fit. However, in this case, complicated functions like Eq. [Et-NBD] with more number of parameters give worse fit compared to simpler functions like Eq. [Et-NBD-mod]. Single-Gamma distribution fitting to the above distribution is better than the other two functions. If multiplicity were the primary quantity compared to transverse energy, which leads to the form of Eqs. [Et-NBD] and [Et-NBD-mod], then one would expect these equations to fit better than the single-Gamma distribution. It is interesting and compelling to speculate on the implications of these results for a detailed relationship of multiplicity with transverse energy and the effect of hadronization. However, it remains as an open question to be addressed by more controlled experiments. Collision energy dependence of Transverse energy ================================================ [EtVsCOM] Collision energy dependence of midrapidity $\frac{1}{N\_{\rm {part}}/2}\frac{dE\_{\rm{T}}}{d\eta}$. Shown are different phenomenological fitting functions to explain the transverse energy production. Figure [EtVsCOM] shows the collision energy dependence of $\frac{1}{N\_{\rm {part}}/2}\frac{dE\_{\rm{T}}}{d\eta}$ for central collisions at midrapidity. A logarithmic growth of transverse energy upto the top RHIC energy underestimates the LHC measurement, which is better described by a power-law fit. However, the later overestimates the low energy measurements. A hybrid function, which is a combination of logarithmic and power law, motivated by midrapidity gluonic source and a fragmentation source seems to explain the data for wide range of energies starting from few GeV to TeV. $\frac{1}{N\_{\rm {part}}/2}\frac{dE\_{\rm{T}}}{d\eta}$ increases by a factor of 3.07 from $\sqrt{s\_{\rm NN}} = 200$ GeV to 2.76 TeV. The CMS experiment estimate of $\frac{dE\_{\rm{T}}}{d\eta} = 2007 \pm 100$ GeV and $\frac{dN\_{\rm{ch}}}{d\eta} = 1612 \pm 55$ for top 5% central Pb+Pb collisions at 2.76 TeV. Division of both leads to transverse energy per charged particle of 1.25 ± 0.08 GeV at $\sqrt{s\_{\rm NN}} = 2.76$ TeV, for top 5% central Pb+Pb collisions, which is almost 42% higher than its corresponding value at top RHIC energy (0.88 ± 0.07 GeV). Centrality dependence of Transverse energy ========================================== [EtVsCent] Centrality dependence of midrapidity $\frac{1}{N\_{\rm {part}}/2}\frac{dE\_{\rm{T}}}{d\eta}$. Figure [EtVsCent] shows the centrality dependence of $\frac{1}{N\_{\rm {part}}/2}\frac{dE\_{\rm{T}}}{d\eta}$ at midrapidity. Various lower energy measurements are multiplied with some numbers to look into the similarity in the shape at higher energies. Except extreme peripheral events, within experimental uncertainties the centrality shows a universal shape for all energies. The value of $\frac{1}{N\_{\rm {part}}/2}\frac{dE\_{\rm{T}}}{d\eta}$ shows a monotonic increase with collision centrality. One of the goals of heavy-ion collision experiments is to create QGP in the laboratory and a pre-requisite of this is to ensure that sufficiently large energy density has been produced in the heavy-ion collisions. To ensure this, the estimation of the initial energy density through the measurement of the final state multiplicity and transverse energy is done through Bjorken hydrodynamic model. Numerical simulations on lattice give a lower bound for the initial energy density for the formation of a Quark Gluon Plasma, which is of the order of $1~ {\rm GeV/fm^3}$. A comparison of the estimated energy density from Bjorken model may establish the possible formation of QGP in heavy-ion collisions at a given collision energy. A schematic diagram of energy density as a function of fireball evolution time is given in Figure [EngDensityVsTime]. In general one can think of three different energy density estimates and two different time scales: 1. The peak *general energy density*, which is achieved when the incoming nuclei overlap with each other. 2. The peak *formed energy density*, which involves the produced particles at a proper time $\tau\_{\rm form}$. 3. The peak *thermalized energy density*, present at proper time $\tau\_{\rm therm}$, when local thermal equilibrium is first achieved, assuming that this occurs. In this review we shall restrict ourselves to the discussion on the formed energy density estimated through Bjorken boost invariant hydrodynamics. However, for a detailed discussions one can refer to Ref.. [EngDensityVsTime] Bjorken Hydrodynamics and Initial Energy Density ================================================ . The energy density, in general is defined as the ratio of total mass-energy within some region of space and the volume of that region, as seen at some instant of time in some Lorentz frame. As discussed in Ref, this definition is not satisfactory, as one can easily raise any energy density by viewing the system from a different frame of reference. For example, a gold or lead nucleus with constant energy density *ρ*0, when viewed in a boosted frame will appear to have energy density *γ*2*ρ*0, where *γ* is the value of the Lorentz boost factor. In a region having total momentum zero, one can meaningfully calculate the energy density as ratio of mass-energy and volume. Considering symmetric heavy-ion collisions (A+A) in collider experiments, with an overlapping of two nuclei, viewed in the center of momentum frame, the total energy density in the overlapping region is given by ⟨*ε*⟩ = 2*ρ*0*γ*2. If we take the normal nuclear matter density, $\rho\_0 = 0.14 ~{\rm {GeV/fm^3}}$, for a nucleus at rest and *γ* = 106 at $\sqrt{s\_{\rm {NN}}} = 200$ GeV, then the general energy density is $\langle \epsilon \rangle = 3150 ~{\rm {GeV/fm^3}}$ at RHIC. For LHC $\sqrt{s\_{\rm {NN}}} = 2.76$ TeV Pb+Pb collisions, *γ* = 1471.2, which leads to $\langle \epsilon \rangle = 606053 ~{\rm {GeV/fm^3}}$. As these numbers are spectacularly high, when compared to the lQCD predicted value of $1 ~ {\rm {GeV/fm^3}}$ energy density as a condition for the formation of a QGP phase, seem to be absurd. Hence, our interest would be to consider the energy density of the produced particles in order to infer about the possible formation of a QGP phase. This is done through the measurement of transverse energy at midrapidity and further the estimation of initial energy density in Bjorken hydrodynamic model. In the framework of Bjorken boost invariant hydrodynamic model, in any frame where the two incoming nuclei have very high energies, the region when/where the nuclei overlap will be very thin in the longitudinal direction and very short in duration. In this scenario, it is fair to describe that all produced particles are created at the same time and radiated out from a thin disk. This is the Bjorken hydrodynamic picture of nucleus-nucleus collision. Once the Lorentz contracted beam “pancakes” recede after their initial overlap, the region between them is occupied by secondaries at intermediate rapidities. We can calculate the local energy densities of these created particles, if we assume the secondaries are formed at some proper time $\tau\_{\rm {form}}$. Our region of interest, in any frame, will be a slab perpendicular to the beam direction, with longitudinal thickness *d**z*, with one face of the “source” plane in this frame, and the transverse overlap area *A*. The region described here corresponds to half the shaded region shown in Figure [bjorken]. Since *β*∥ ≃ 0 for particles near the source location, this is an appropriate region over which we can calculate a meaningful energy density. At time $t = \tau\_{\rm {form}}$, this volume will contain all the (now-formed) particles with longitudinal velocities $0 \leq \beta\_{\parallel} \leq dz/\tau\_{\rm{form}}$ (since we assume particles can’t scatter before they are formed!). Then we can write this number of particles as $dN = (dz/\tau\_{\rm{form}})\frac{dN}{d\beta\_{\parallel}}$, or equivalently $dN = (dz/\tau\_{\rm{form}})\frac{dN}{dy}$, where *y* is longitudinal rapidity, since *d**y* = *d**β*∥ at *y* = *β*∥ = 0. If these particles have an average total energy $\langle m\_{\rm T} \rangle$ in this frame (*E* = *m**T* for particles with no longitudinal velocity), then the total energy divided by the total volume of the slab at $t = \tau\_{\rm{form}}$ is $$\begin{aligned} \langle \epsilon(\tau\_{form})\rangle &=& \frac{dN \langle m\_T \rangle}{dz A} \nonumber \\ &=& \frac{dN(\tau\_{form})} {dy} \frac{\langle m\_T \rangle}{\tau\_{form} A} \nonumber \\ &=& \frac{1}{\tau\_{form} A} \frac{dE\_T(\tau\_{form})}{dy}, \label{bj}\end{aligned}$$ where, we have equated $\frac{dE\_T}{dy} = \langle m\_{\rm T} \rangle \frac{dN}{dy} \approx \langle m\_{\rm T} \rangle \frac{3}{2}\frac{dN\_{ch}}{dy}$ and emphasized that Eq. ([bj]) is true for the transverse energy density present at time $t = \tau\_{\rm{form}}$. The factor 3/2 compensates for the neutral particles. Eq. ([bj]) is referred as *B**j**o**r**k**e**n* *e**n**e**r**g**y* *d**e**n**s**i**t**y*,  *ε**B**j*. It is a valid measure of peak energy density in created particles, on very general grounds and in all frames, as long as two conditions are satisfied: (1) A finite formation time $\tau\_{\rm{form}}$ can meaningfully be defined for the created secondaries; and (2) The thickness/“crossing time” of the source disk is small compared to $\tau\_{\rm{form}}$, that is, $\tau\_{\rm{form}} >> 2R/\gamma$. Here *R* is the rest-frame radius of the nucleus and *γ* is the Lorentz factor. In particular, the validity of Eq. ([bj]) is completely independent of the shape of the $dE\_T(\tau\_{\rm{form}})/dy$ distribution to the extent that *β*∥ is infinitesimally small in a co-moving frame; a plateau in *d**E**T*/*d**y* is not required. For practical purposes at RHIC, we will consider condition (2) above to be satisfied as long as $\tau\_{\rm{form}} > 2R/\gamma$ is true. Historically, *ε**B**j* has been calculated using the final state *d**E**T*/*d**y* and simply inserting a nominal value of 1 fm/*c* for $\tau\_{\rm{form}}$. In addition, fixed target experiments have been using *d**E**T*/*d**η* as an estimate for *d**E**T*/*d**y*, which is a good approximation for these experiments. For collider experiments, a correction is made for the Jacobian *d**y*/*d**η*: ($\sqrt{1-m^2/\langle m\_{\rm T} \rangle ^2}\frac{dN}{dy} = J\frac{dN}{dy} = \frac{dN}{d\eta}$). However, we can’t take *ε**B**j* as an exact estimate of energy density without some justification for the value of 1 fm/*c* taken for $\tau\_{\rm{form}}$. Hence, we term it as *ε**B**j**N**o**m**i**n**a**l*. An indication of potential problems with this choice arises immediately when considering AGS Au+Au and SPS Pb+Pb collisions, where the center of mass “crossing times” 2*R*/*γ* are 5.3 fm/*c* and 1.6 fm/*c* respectively, which implies that this choice for $\tau\_{\rm{form}} = 1$ fm/*c* actually violates the validity condition $\tau\_{\rm{form}} > 2R/\gamma$ we set for the use of Eq.([bj]). So we will deprecate the use of *ε**B**j**N**o**m**i**n**a**l* as a quantitative estimate of actual produced energy density and instead treat it only as a compact way of comparing *d**E**T*/*d**η* measurements across different systems, centralities and beam energies. The Bjorken energy density obtained in this framework is given by $$\begin{aligned} \epsilon\_{Bj} &=& \frac{dE\_{T}}{dy} ~\frac{1}{\tau\_0 \pi R^2} \\ &=& \frac{dE\_{T}}{d\eta} J(y,\eta)~\frac{1}{\tau\_0 \pi R^2} \\ &\simeq & \langle m\_{\rm T} \rangle \frac{3}{2} \frac{dN\_{ch}}{dy} ~\frac{1}{\tau\_0 \pi R^2} \label{bjEqn}\end{aligned}$$ where, *τ*0 is the formation time, usually assumed to be 1 fm/*c* and *π**R*2 is the transverse overlap area of the colliding nuclei. The formation time is usually estimated from model calculations and has been a matter of debate. There are different ways to estimate the transverse overlap area. It goes like $N\_{\rm{part}}^{2/3}$ in an approach which accounts for only the common area of colliding nucleons but not the nuclei (chosen by STAR). In this approach, the transverse overlap area *F* = *π**R*2, where *R* = *R*0*A*1/3. When we replace *A* with the number of participants by, $A=N\_{\rm{part}}/2$, *F* becomes, $${F = \pi R\_0^2 ~(\frac{N\_{part}}{2})^{2/3} } \label{FEqn}$$ [phenixTransA] In the other approach (adopted by PHENIX), the transverse overlap area of the colliding species, *F*, is estimated in the following way. The Woods-Saxon parametrization for the nuclear density profile is given by $${\rho(r) = \frac{1}{(1+e^{(r-r\_n)/d})}}, \label{woods}$$ where *ρ*(*r*) is the nuclear density profile, *r**n* is the nuclear radius and *d* is a diffuseness parameter. Based on the measurements of electron scattering from Au nuclei, *r**n* is set to (6.38 ± 0.27) fm and *d* to (0.54 ± 0.01) fm. A Monte Carlo-Glauber model with *F* ∼ *σ**x**σ**y*, (where *σ**x* and *σ**y* are the widths of *x* and *y* position distributions of the participating nucleons in the transverse plane) is used to estimate the transverse overlap area of two colliding nuclei. In this approach, *F* is the transverse overlap area of two colliding nuclei, not the participating nucleons. The normalization to *π**R*2, where *R* is the sum of *r**n* and *d* parameters in the Woods-Saxon parametrization (given by Eq. [woods]), is done for most central collisions at the impact parameter *b* = 0. The results obtained in these two methods, as shown in Figure [phenixTransA], are different only in the peripheral bins. The results obtained by STAR agree with PHENIX results within systematic errors. However, STAR data show a smaller rate of increase of the energy density with $N\_{\rm{part}}$. As can be seen from the figure, the results agree rather well within uncertainties for central collisions, where we expect a deconfinement of quarks and gluons to take place. In the estimation of *ε**B**j*.*τ*, one uses the energy and rapidity dependent Jacobian factor, *J*(*y*, *η*), for the conversion of pseudorapidity to rapidity phase space. The value of the Jacobian is smaller at higher energies, as the average transverse momentum of particles increases with beam energy. STAR collaboration uses a factor of 1.18 for *η* → *y*-phase space conversion, as compared to 1.25 used by PHENIX for the estimation of Bjorken energy density at 200 GeV. The value of *ε**B**j* for Au+Au collisions at $\sqrt{s\_{NN}} =$ 19.6, 130 and 200 GeV are 2.2 ± 0.2,  4.7 ± 0.5 and $4.9 \pm 0.3~~ {\rm GeV/fm^3}$ ($5.4 \pm 0.6 ~ {\rm GeV/fm^3}$, PHENIX), respectively. Compared to this, *ε**B**j* at SPS for Pb+Pb collisions at $\sqrt{s\_{NN}} =$ 17.2 GeV is found to be $3.2 ~{\rm {GeV/fm^3}}$. This value of *ε**B**j* is much higher than the same for Au+Au collisions at the SPS-like energy i.e $\sqrt{s\_{NN}} =$ 19.6 GeV at RHIC. The CMS collaboration has estimated $\epsilon\_{Bj} = 14 ~ {\rm GeV/fm^3}$ with transverse overlap area of $A= \pi \times (7~{\rm fm}^2)$ and *J*(*y*, *η*) = 1.09 for top 5% central Pb+Pb collisions at $\sqrt{s\_{NN}} = 2.76$ TeV. As all these estimations assume the same formation time of 1 fm/*c*, there is an over estimation of *ε**B**j* at SPS. In any case these energy densities are significantly larger than the energy density ($\sim ~ 1~ {\rm {GeV/fm^3}}$) predicted by lattice QCD calculations for a transition to a deconfined quark gluon plasma phase. Following the deconfinement transition, there is a hydrodynamic expansion. Subsequently local equilibrium is achieved at *τ*0 ∼ 1 fm/*c*. This picture is indeed valid, if we compare the RHIC data for elliptic flow to the hydrodynamic calculations. [bjLHC] [bjNpart] Taking all *ε**B**j* measured for heavy-ion collisions at different energies and colliding species, we show *ε**B**j*.*τ* as a function of collision energy in Figure [bjLHC]. This is done using the Eq. [bjEqn]. The dashed line is a logarithmic fit. The logarithmic extrapolation of *ε**B**j*.*τ* for Pb+Pb collisions at $\sqrt{s\_{\rm NN}} = 2.76$ TeV at LHC is around $7.17 ~{\rm GeV~fm^{-2}~c^{-1}}$. However, the experimental estimation gives a value of $\epsilon\_{Bj}.\tau = 14 ~ {\rm GeV~fm^{-2}~c^{-1}}$, showing almost 50% underestimation by the logarithmic trend of the data. On the other hand, a hybrid fitting function, which is a combination of logarithmic and power law functions in center of mass energy, describes the data from few GeV to TeV energies. Fitting a power law function overestimates the low energy measurements. It should be noted that the formation time at LHC will be much less than 1 fm. The above value sets a lower bound to the initial energy density formed at LHC. Going from top RHIC energy to LHC 2.76 TeV, *ε**B**j*.*τ* increases almost 3 times. Figure [bjNpart] shows the estimate of the product of the Bjorken energy density and the formation time (*ε**B**j*.*τ*) as a function of the centrality of the collision in terms of $N\_{\rm{part}}$. As expected there is a monotonic increase in *ε**B**j*.*τ* with increasing centrality of the collision. While comparing the results from different experiments, related to the initial energy density, one needs to take care of the following factors: (i) value of the formation time taken into the calculations, (ii) the procedure of estimation of the transverse overlap area and (iii) the value of the Jacobian used to transform *η* to *y* phase space. The Formation Time ------------------ Is it possible to justify a better estimate for $\tau\_{\rm{form}}$? From general quantum mechanical arguments, in a frame where it’s motion is entirely transverse, a particle of energy *m**T* can be considered to have “formed” after a time *t* = ℏ/*m**T*. To estimate the average transverse mass, we can use the final state *d**E**T*/*d**η* to estimate $dE\_T(\tau\_{\rm{form}})/dy$ and, correspondingly, use the final state *d**N*/*d**η* as an estimate for $dN(\tau\_{\rm{form}})/dy$ to obtain $${\langle m\_T \rangle = \frac{dE\_T(\tau\_{form})/dy}{dN(\tau\_{form})/dy} \simeq \frac{dE\_T/d\eta}{dN/d\eta}~~(Final ~state).} \label{mt}$$ It has been observed experimentally that the ratio of final state transverse energy density to charge particle density, each per unit pseudorapidity is constant at about 0.85 GeV for central Au+Au collisions at top RHIC energy. This value is constant for a wide range of centrality and shows a very little change with beam energy, decreasing to 0.7 GeV, when $\sqrt{s\_{NN}}$ is decreased by a order of magnitude down to 19.6 GeV. However, at LHC, its observed value is 1.25 ± 0.08 GeV, which will be discussed in the next section. If we approximate *d**N**c**h*/*d**η* = (2/3)*d**N*/*d**η* in the final state, then Eq.([mt]) would imply $\langle m\_{\rm T} \rangle \simeq 0.57$ GeV and corresponding $\tau\_{\rm{form}} \simeq 0.35$ fm/*c*, a value shorter than the “nominal” 1 fm/*c* but long enough to satisfy the given validity condition $\tau\_{\rm{form}} > 2R/\gamma$ at RHIC. With *R* = 7 fm for Au+Au collisions and Lorentz factor $\gamma = \frac{\sqrt{s\_{\rm {NN}}}}{2m\_{\rm p}} = 106.6$, at $\sqrt{s\_{\rm {NN}}} = 200$ GeV, 2*R*/*γ* = 0.13 fm/*c*. For LHC, at $\sqrt{s\_{\rm {NN}}} = 2.76$ TeV, the observed $\langle p\_{\rm T} \rangle \sim$ 0.678 GeV for Pb+Pb collisions. Taking pion mass, one gets $\langle m\_{\rm T} \rangle \sim$ 0.81 GeV, which leads to $\tau\_{\rm{form}} \simeq 0.25$ fm/*c*. For Pb+Pb collisions at $\sqrt{s\_{\rm {NN}}} = 2.76$ TeV, taking *R* = 7.1 fm, Lorentz factor, *γ* = 1471.22, we get 2*R*/*γ* = 0.01 fm/*c*. Hence, the condition of $\tau\_{\rm{form}} > 2R/\gamma$ is also satisfied at LHC. It’s worth noting that the value of energy density obtained by Eq. ([bj]) represents a conservative lower limit on the actual $\langle \epsilon(\tau\_{\rm{form}})\rangle$ achieved at RHIC. This follows from two observations: (1) The final state measured *d**E**T*/*d**η* is a solid lower limit on the $dE\_T(\tau\_{\rm{form}})/dy$ present at formation time; and (2) The final state ratio (*d**E**T*/*d**η*)/(*d**N*/*d**η*) is a good lower limit on $\langle m\_{\rm T} \rangle$ at formation time, and so yields a good upper limit on $\tau\_{\rm{form}}$. The justification of these statements could be realized as follows. There are several known mechanisms that will decrease *d**E**T*/*d**y* as the collision system evolves after the initial particle formation, while no mechanism is known that can cause it to increase (for *y* = 0, at least). Therefore, its final state value should be a solid lower limit on its value at any earlier time. A list of mechanisms through which *d**E**T*/*d**y* will decrease after $t = \tau\_{\rm{form}}$ includes: (i) The initially formed secondaries in any local transverse “slab” will, in a co-moving frame, have all their energy in transverse motion and none in longitudinal motion; if they start to collide and thermalize, at least some of their *E**T* will be converted to longitudinal modes in the local frame; (ii) Should near local thermal equilibrium be obtained while the system’s expansion is still primarily longitudinal, then each local fluid element will lose internal energy through *p**d**V* work and so its *E**T* will decrease; (iii) If there are pressure gradients during a longitudinal hydrodynamic expansion then some fluid elements may be accelerated to higher or lower rapidities; these effects are complicated to predict, but we can state generally that they will always tend to *d**e**c**r**e**a**s**e* *d**E**T*/*d**y* where it has its maximum, namely at *y* = 0. Given that we have strong evidence of thermalization and hydrodynamic evolution at RHIC collisions, it’s likely that all these effects are present to some degree, and so we should suspect that final state *d**E**T*/*d**η* is substantially lower than $dE\_T(\tau\_{\rm{form}})/dy$ at midrapidity. Coming to the estimate of $\tau\_{\rm{form}}$, the assumption that $\tau\_{\rm{form}} = \hbar/\langle m\_{\rm T} \rangle$ can’t be taken as exact, even if the produced particles’ *m**T*’s are all identical, since “formed” is not an exact concept. However, if we accept the basic validity of this uncertainty principle argument, then we can see that the approximation in Eq. ([mt]) provides a lower limit on $\langle m\_{\rm T} \rangle$. First, the numerator *d**E**T*/*d**η* is a lower limit on $dE\_T(\tau\_{\rm{form}})/dy$, as above. Second, the argument is often made on grounds of entropy conservation that the local number density of particles can never decrease , which would make the final state denominator in Eq. ([mt]) an upper limit on its early-time value. Transverse energy per Charged particle ($E\_{\rm{T}}/N\_{\rm{ch}}$) and Freeze-out Criteria =========================================================================================== ht c|c| c| c $\sqrt{s\_{\rm{NN}}}$ (GeV) & Coll. Species & $E\_{\rm{T}}/N\_{\rm{ch}}$ (GeV) & Reference [0.5ex] 2.05 & Au+Au & 0.13  ±  0.03 & 3.81 & Au+Au & 0.598  ±  0.060 & 4.27 & Au+Au & 0.634  ±  0.063 & 4.84 & Au+Au & 0.680  ±  0.068 & 8.7 & Pb+Pb & 0.760  ±  0.060 & 12.4 & Pb+Pb & 0.780  ±  0.060 & 17.2 & Pb+Pb & 0.810  ±  0.060 & 19.6 & Au+Au & 0.738  ±  0.070 & 62.4 & Au+Au & 0.867  ±  0.121 & 130 & Au+Au & 0.869  ±  0.066 & 200 & Au+Au & 0.881  ±  0.071 & 2760 & Pb+Pb & 1.283  ±  0.085 & [1ex] [table-excitation] [ht] c c c c c c c c c c c c c c c $\sqrt{s\_{\rm{NN}}}$ (GeV) & 0-5 & 5-10 & 10-15 & 15-20 & 20-25 & 25-30 & 30-35 & 35-40 & 40-45 & 45-50 & 50-55 & 55-60 & 60-65 & 65-70 7.7 & 0.70& 0.73 & 0.74 & 0.77 & 0.78 & 0.81 & 0.84 & 0.87 & 0.90 & 0.94 19.6 & 1.07 & 1.11 & 1.14 & 1.17 & 1.18 & 1.22 & 1.24 & 1.27 & 1.30 & 1.36 27 & 1.22 & 1.24 & 1.27 & 1.30 & 1.34 & 1.38 & 1.40 & 1.44 & 1.47 & 1.54 39 & 1.41 & 1.45 & 1.47 & 1.51 & 1.54 & 1.58 & 1.61 & 1.66 & 1.71 & 1.69 62.4 & 1.21 & 1.26 & 1.32 & 1.40 & 1.49 & 1.57 & 1.67 & 1.76 & 1.84 & 1.92 & 2.01 & 2.07 & 2.17 & 2.23 130 & 1.95 & 2.02 & 2.05 & 2.12 & 2.21 & 2.31 & 2.38 & 2.48 & 2.56 & 2.65 & 2.73 & 2.80 & 2.89 & 3.01 200 & 1.93 & 2.06 & 2.18 & 2.31 & 2.46 & 2.57 & 2.66 & 2.76 & 2.87 & 2.95 & 3.04 & 3.16 & 3.29 & 3.44 $\sqrt{s\_{\rm{NN}}}$ (TeV) & 0-2.5 & 2.5-5 & 5-7.5 & 7.5-10 & 10-20 & 20-30 & 30-40 & 40-50 & 50-60 & 60-70 & 70-80 & & & 2.76 & 1.26 & 1.25 & 1.24 & 1.23 & 1.22 & 1.21 & 1.19 & 1.17 & 1.16 & 1.15 & 1.05 [table-etCent] The ratio of pseudorapidity densities of transverse energy and number of charged particles at midrapidity, i.e. $\frac{dE\_{\rm T}}{d\eta}/\frac{dN\_{\rm {ch}}}{d\eta}(\equiv E\_{\rm{T}}/N\_{\rm{ch}}$) has been studied both experimentally and phenomenologically to understand the underlying particle production mechanism. This observable is known as global barometric measure of the internal pressure in the ultra-dense matter produced in heavy-ion collisions. This quantity depends on the initial state of the collision and the viscosity of the matter as it expands to its final state, when it is observed by the detectors. This observable when studied as a function of collision energy (as shown in Figure [etEch] and the values are given in Table [table-excitation]), shows three regions of interest. The first one from the lower SIS energies to SPS energies shows a steep increase of $E\_{\rm{T}}/N\_{\rm{ch}}$ values, thereby indicating that the mean energy of the system increases (at midrapidity, $\langle E \rangle \sim \langle m\_{\rm T} \rangle$). In the second region, from SPS to top RHIC energy, $E\_{\rm{T}}/N\_{\rm{ch}}$ shows a very weak collision energy dependence, i.e. like a saturation behaviour. In this region the mean energy doesn’t increase, whereas the collision energy increases. This may indicate that the increase in collision energy results in new particle production in this energy domain, which is consistent with higher particle multiplicity observed at these energies. This behaviour has been well described in the context of a statistical hadron gas model (SHGM). In the framework of SHGM, it has been predicted that $E\_{\rm{T}}/N\_{\rm{ch}}$ would saturate at energies higher to that of top RHIC energy with a limiting value of 0.83 GeV. Here a static fireball is assumed at the freeze-out. However, a value of 1.25 ± 0.08 GeV is observed at the LHC 2.76 TeV center of mass energy recently, by the CMS collaboration. This creates a third region in the excitation function of $E\_{\rm{T}}/N\_{\rm{ch}}$, showing a jump from top RHIC to LHC energies. In this region, along with particle multiplicity, the mean energy per particle also increases, which needs to be understood from theoretical models taking the dynamics of the time evolution of the created fireball. It is however, observed that models based on final state gluon saturation (CGC like) seems to explain this behaviour in the excitation function of $E\_{\rm{T}}/N\_{\rm{ch}}$. The RHIC Beam Energy Scan (BES) data seem to follow the overall trend of the collision energy dependence of $E\_{\rm{T}}/N\_{\rm{ch}}$. It has been seen in one of the previous works of one of us (RS) that various freeze-out criteria like constant energy per particle (⟨*E*⟩/⟨*N*⟩ = 1.08 GeV), fixed baryon+anti-baryon density (*n**B* + *n**B̄* ≃ 0.12 *f**m*− 3), fixed entropy density per *T*3 ($\frac{s}{T^3} \simeq 7$) seem to describe the qualitative energy dependent behaviour of $E\_{\rm{T}}/N\_{\rm{ch}}$ quite consistently upto RHIC energies. As shown in the figure, a hybrid function which is a combination of logarithmic and power law in center of mass energy, seems to describe the data quite well. At very high energies, the creation and annihilation of gluons balances out leading to gluon saturation. In the framework of gluon saturation models, the high energy behaviour of this observable is well described. Figure [etEchCent] (upper panel) shows the centrality dependence of $E\_{\rm{T}}/N\_{\rm{ch}}$ from $\sqrt{s\_{\rm {NN}}}$ = 7.7 GeV to 2.76 TeV energy. These data are enlisted in Table [table-etCent]. Since the centrality definitions by the CMS experiment for $\frac{dN\_{\rm {ch}}}{d\eta}$ and $\frac{dE\_{\rm T}}{d\eta}$ are different, fitting the centrality dependent $\frac{dN\_{\rm {ch}}}{d\eta}$ by a function, $\frac{1}{0.5 N\_{\rm{part}}}\frac{dN\_{\rm {ch}}}{d\eta} = A ~ N\_{\rm{part}}^{\alpha}$, with *A* = 2.63 ± 0.24 and *α* = 0.19 ± 0.02, we have evaluated the $\frac{dN\_{\rm {ch}}}{d\eta}$ values corresponding to the $N\_{\rm{part}}$ values used to define the centrality classes for $\frac{dE\_{\rm T}}{d\eta}$. Then we have estimated the LHC values of $E\_{\rm{T}}/N\_{\rm{ch}}$ at different centralities, which are given in Table [table-etCent] and are shown in Figure [etEchCent]. Within the systematic errors, $E\_{\rm{T}}/N\_{\rm{ch}}$ for all energies upto top RHIC energy, show a weak centrality dependence with a modest increase from most peripheral collisions to $N\_{\rm {part}} = 100$, reaching a roughly constant value of around 0.8 GeV towards central collisions. The LHC data also show a similar behaviour but the constant value of $E\_{\rm{T}}/N\_{\rm{ch}}$ is around 1.25 GeV. This centrality dependence of $E\_{\rm{T}}/N\_{\rm{ch}}$ is shown to be equivalent to the behaviour of $\langle p\_{\rm T}\rangle$ as a function of centrality for top RHIC energy and for $\sqrt{s\_{\rm{NN}}} = 2.76$ TeV at LHC. This is shown in the lower panel of Figure [etEchCent]. The value of $\langle p\_{\rm T}\rangle = 0.678 \pm 0.007$ GeV/c at $\sqrt{s\_{\rm{NN}}} = 2.76$ TeV, which is almost 36% increase when compared with its value ( ∼ 0.5 GeV/c) at top RHIC energy. The value of $E\_{\rm{T}}/N\_{\rm{ch}}$ increases almost 45% from top RHIC to LHC energy. This shows that not only particle multiplicity increases while going from top RHIC to LHC energy, the $\langle p\_{\rm T}\rangle$ also increases, making a third region in the excitation function of $E\_{\rm{T}}/N\_{\rm{ch}}$. The near centrality independent behaviour of $E\_{\rm{T}}/N\_{\rm{ch}}$ is explained by statistical hadron gas model (SHGM) with a static fireball approximation at freeze-out. However, to explain the energy dependent behaviour of $E\_{\rm{T}}/N\_{\rm{ch}}$ in the whole range of energies upto LHC, one needs to consider the dynamical effects during the time evolution of fireball. Irrespective of the collisions species, the center of mass energies and the collision centrality, starting from the lower energies to top RHIC energy, the system evolves to the same final state, which could be characterized by a constancy in chemical freeze-out temperature. On the other hand, LHC data shows a different trend of $E\_{\rm{T}}/N\_{\rm{ch}}$, while the chemical freeze-out temperature doesn’t change that much from RHIC to LHC. This needs to be understood from the thermodynamics point of view. A theoretical description of the time evolution of the produced fireball in heavy-ion collisions is difficult, as it involves different degrees of freedoms at different points. The SHGM uses hadronic degrees of freedom at later times, when the chemical composition of the matter is frozen (known as chemical freeze-out). Then the particles mean free path becomes higher than the system size, which forbids the elastic collision of the constituents and the system is said to be kinetically frozen (known as thermal or kinetic freeze-out). In general, freeze-out could be a complicated process involving duration in time and a hierarchy where different types of particles and reactions switch-off at different times. This gives to the concept of *``differential freeze-out"*. From kinetic arguments, it is expected that reactions with lower cross-sections switch-off at higher densities/temperature compared to reactions with higher cross-sections. Hence, the chemical freeze-out, which corresponds to inelastic reactions occurs earlier in time compared to the kinetic freeze-out, which corresponds to elastic reactions. In accordance with the above discussions, one can think of strange or charmed particles decoupling from the system earlier than the lighter hadrons. A series of freeze-outs could be thought of corresponding to particular reaction channels. However, in general one focuses on chemical and kinetic freeze-outs, considering the freeze-out to be an instantaneous process. At higher energies, when *μ**B* ∼ 0, the transverse energy production is mainly due to the meson content of the system. The experimental observations go in line with the above fact, when we observe the ratio of *p̄*/*p* ∼ 1 at higher energies. The intersection points of lines of constant $E\_{\rm{T}}/N\_{\rm{ch}}$ and the freeze-out line give the values of $E\_{\rm{T}}/N\_{\rm{ch}}$ at the chemical freeze-out. The ratio of $\frac{dE\_T}{d\eta}$ and $\frac{dN\_{ch}}{d\eta}$ at midrapidity, as a function of center of mass energy. Experimental data are compared to the predictions from thermal model, gluon saturation model and the estimations obtained in the framework of the hybrid model fitting to transverse energy and charged particle data.[etEch] [etEchCent] SUMMARY AND CONCLUSIONS ======================= Pseudorapidity distribution of charged particles is proposed to be one of the important global observables to characterize the hot and dense medium produced in the heavy-ion collisions. We reviewed the charged particle and photon multiplicity density distribution results obtained by various heavy-ion collision experiments starting from AGS energies to top RHIC and LHC energies. Before going to the results, a brief introduction on determination of centrality is given. Centrality determination is important in terms of relating theoretical observables, like impact parameters and numbers of nucleon participant ($N\_{\rm{part}}$), to the collision geometry and observed particle multiplicity. To correlate them, for example, two-component model with NBD is fitted with the V0 amplitude and the centrality percentile is evaluated to classify the events into different centralities. In the mean time the respective $N\_{\rm{part}}$, $N\_{\rm{coll}}$ and impact parameters are estimated by Monte Carlo Glauber model. The $dN\_{\rm{ch}}/d\eta$ spectra are discussed for Cu+Cu, Au+Au and Pb+Pb collisions at different energies. It is observed that the width and amplitude of the distribution increases with increase of collision energy. A double-Gaussian function is fitted with the distribution and it is found that the ratio of amplitudes, the widths are similar from one centrality to other in their respective collision energy. More interestingly that more dip in the mid rapidity is observed in Pb+Pb collisions at $\sqrt{s\_{\rm{NN}}}$= 2.76 TeV. This is an indication of different hadro-chemistry at LHC energy than RHIC. Still this needs to be understood in details. Similarly, the energy dependence of *d**N**γ*/*d**η* of Cu+Cu, Au+Au, S+Au and Pb+Pb collisions are discussed. Then the limiting fragmentation behaviour of charged particles as well as photons are discussed for Cu+Cu, Au+Au and Pb+Pb collisions at different energies. The compilation of various experimental data go in line with the hypothesis of limiting fragmentation. Moreover, after observing the centrality dependence of longitudinal scaling of charged particles, $R\_{\rm{PC}}$ is used to confirm the scaling behaviour and the scaling seems to be valid for a wide range of energies. In contrast to charged particles, photons do not show any centrality dependence. It is interpreted as majority of photons in the forward rapidities are coming form *π*0 decays. Hence, mesons are not affected by baryon stopping as they are originated from valence quarks. CGC model has successfully explained the limiting fragmentation upto some extent. However, it needs more development and complete understanding of the final state effect and inclusion of quark distribution. This longitudinal scaling of hadrons still needs more insight to understand the physics process and its predicted violation at LHC energies in the frame work of SHGM and the validity from experimental data are to be understood from theoretical considerations. During the discussion of factorization, it is also observed that the centrality dependence of $dN\_{\rm{ch}}/d\eta$ can be factorized to beam energy and collision centrality. By taking the ratio of $dN\_{\rm{ch}}/d\eta$ of Pb+Pb collision at $\sqrt{s\_{\rm{NN}}}$= 2.76 TeV to other collision energies show a scaling behaviour as a function of $N\_{\rm{part}}$. To understand the expansion dynamics of the system, the $dN\_{\rm{ch}}/d\eta$ of charged particles and photons are fitted with Landau-Carruther and Gaussian functions. By taking the ratio of widths of data to the Landau-Carruther’s function, it is found that the system is expanding more or less like a Landau hydrodynamic fluid upto the RHIC energy. But the LHC data deviates from the Landau hydrodynamic model. Similarly, photons at RHIC energies also obey the Landau hydrodynamics. It is observed that the $N\_{\rm{ch}}^{\rm{total}}$ normalized to $N\_{\rm{part}}$ scales with centrality. It is to be noted here that in the midrapidity, $dN\_{\rm{ch}}/d\eta$ normalized to $N\_{\rm{part}}$ doesn’t scale with centrality, whereas the total charged particles do. This is because of the modification of charged particles at forward rapidities are strongly correlated with compensating changes at midrapidity. *d**N**γ*/*d**η* also shows similar scaling. It is found that trapezoidal rule can be used to explain the $N\_{\rm{ch}}^{\rm{total}}$ normalized to participant pair from AGS energies to RHIC energies. However, it fails at LHC energy. When a hybrid function constructed by adding the power law and logarithmic of $\sqrt{s\_{\rm{NN}}}$, it explains the whole range of data indicating that the charged particle production is a combined process of midrapidity gluonic source (power law) and fragmentation process (logarithmic). The transverse energy measurement and the estimation of initial energy density in the framework of Bjorken boost invariant hydrodynamics are presented for collision energies ranging from few GeV to TeV. In this energy domain, the centrality and energy dependence of $\frac{dE\_{\rm T}}{d\eta}$ and Bjorken energy density multiplied with formation time, $\epsilon\_{\rm{Bj}}.\tau$ have been studied. A comparison of $\epsilon\_{\rm{Bj}}$ with that of lQCD value, indicates the formation of a QGP phase both at RHIC and LHC energies. The barometric observable, *i.e.* transverse energy per charged particle, is related to the chemical freeze-out. Various freeze-out criteria seem to describe the energy dependent behaviour of $E\_{\rm{T}}/N\_{\rm{ch}}$ starting from few GeV to top RHIC energies. A static fireball approximation at freeze-out, however, fails to reproduce the corresponding data at LHC and necessitates the inclusion of fireball evolution dynamics in space and time in order to describe the behaviour for the whole range of energies. The similarity in the centrality dependence upto the top RHIC energy indicates that irrespective of the collision species and center of mass energies, the system evolves to a similar final state at freeze-out. **Note-** *In this review, we have made an attempt to give the developments in heavy-ion collisions towards the measurements of charged particle and photon multiplicities alongwith transevrse energy production from few GeV to TeV energies. Although we have tried to cover in some details, it is not an easy task and we can never assume the task to be complete. However, we believe that the references mentioned in this review shall guide the readers in the related fields. We apologize to those authors whose valuable contributions in this area have not been mentioned properly.* The Gamma and Negative Binomial Distributions ============================================= The Gamma distribution represents the probability density for a continuous variable *x*, and has two parameters *b* and *p*. This is given by $$f(x) ~=~f\_{\Gamma}(x,p,b) ~=~ \frac{b}{\Gamma(p)}(bx)^{p-1}~e^{-bx}, \label{gammaFn}$$ where *p* > 0,   *b* > 0,     0 ≤ *x* ≤ ∞, Γ(*p*)  =  (*p* − 1)! is the Gamma function if *p* is an integer, and *f*(*x*) is normalized, ∫0∞*f*(*x*) *d**x*  =  1. The first few moments of the distribution are $$\mu \equiv \langle x \rangle ~=~ \frac{p}{b}, ~~~~~ \sigma \equiv \sqrt{\langle x^2 \rangle -\langle x \rangle ^2}~=~ \frac{\sqrt{p}}{b}, ~~~~ \frac{\sigma^2}{\mu^2} ~=~ \frac{1}{p}. \label{gammaMoments}$$ The Negative Binomial Distribution (NBD) of an integer *m* is defined as $$P(m) = \frac{(m+k-1)!}{m! (k-1)!}~~ \frac{\left( \frac{\mu}{k}\right)^m}{\left( 1+\frac{\mu}{k}\right)^{m+k}}, \label{NBDFn}$$ where *P*(*m*) is normalized for 0 ≤ *m* ≤ ∞, *μ* ≡ ⟨*m*⟩, and some of the higher moments are $$\sigma ~=~ \sqrt{\mu \left(1+\frac{\mu}{k}\right)}, ~~ \frac{\sigma^2}{\mu^2} = \frac{1}{\mu}+\frac{1}{k}. \label{NBDMoments}$$ The NBD is having an additional parameter *k* compared to a Poisson distribution. In the limit *k* → ∞ NBD becomes a Poissonian distribution. With *k* equals to a negative integer (hence the name) becomes NBD. The NBD is strongly correlated with Gamma distribution, and hence becomes Gamma distribution in the limit *μ* ≫ *k* > 1. Usually Gamma distributions are replaced with NBD to prove various theorems. One important difference between NBD and Gamma distributions is in the limit *m* or *x* → 0: for *p* > 1 the limit is always zero for a Gamma distribution, whereas for the NBD it is always finite. The Gamma Distribution has got potential applications as under *convolution* it shows an important property. Define the *n* − fold convolution of a distribution with itself as *f**n*(*x*)  =  ∫0*x* *d**y**f*(*y*) *f**n* − 1(*x* − *y*);  then for a Gamma distribution given by Eq. [gammaFn], the *n* − fold convolution is simply given by the function $$f\_n(x) ~=~ \frac{b}{\Gamma (np)}(bx)^{np-1}e^{-bx} ~=~ f\_{\Gamma}(x, np, b), \label{GammaConvolute}$$ i.e., *p* → *n**p* and *b* remains unchanged. Note that the mean *μ**n* and the standard deviation *σ**n* of the *n* − fold convolution obey the familiar rule $$\mu\_n ~=~ n\mu ~=~ \frac{np}{b}, ~~~~ \sigma\_n ~=~ \sigma \sqrt{n} ~=~ \frac{\sqrt{np}}{b}, ~~ \frac{\sigma\_n}{\mu\_n} ~=~ \frac{1}{\sqrt{np}}. \label{GammaConvoluteRule}$$ The convolution property of the Gamma distribution also holds good for the NBD, with *μ**n* → *n**μ*,  *k* → *n**k*, so that *μ*/*k* remains constant. Note that the charged particle multiplicity distribution in proton-proton collisions obey a NBD, whereas the Gamma distribution fits to $E\_{\rm T}$ distributions. -0.75cm 90  J. C. Collins and M. J. Perry,*Phy. Rev. Lett.* 34, 1353 (1975). J.D. Bjorken,, 140 (1983). F. Karsch,, 199c (2002). M. Kataja *et al.*,, 2755 (1986). L. van Hove,, 138 (1982). S. Sarkar *et al.*,, 318 (1995).[Erratum:ibid **C51**, 2845 (1995).] A. Dumitru *et al.*,, 187 (1995). D. Kharzeev and M. Nardi,, 121 (2001).  B. B. Back *et al.* [PHOBOS Collaboration], *Phys. Rev. C* **74**, 021901 (2006).  M. C. Abreu *et al.* [NA50 Collaboration], *Phys. Lett. B* **530**, 43 (2002).  E. Abbas *et al.* [ALICE Collaboration], *Phys. Lett. B* **726**, 610 (2013).  B. B. Back *et al.* [PHOBOS Collaboration], *Phys. Rev. C* **70**, 021902 (2004).  B. Alver *et al.* [PHOBOS Collaboration], *Phys. Rev. C* **83**, 024913 (2011).  G. Aad *et al.* [ATLAS Collaboration],, 363 (2012). S. Chatrchyan *et al.*, [CMS Collaboration],, 141 (2011).  B. Abelev *et al.* [ALICE Collaboration], *Phys. Rev. C* **88**, 044909 (2013).  B. Abelev *et al.* [ALICE Collaboration], *Phys. Rev. Lett.* **109**, 252302 (2012).  K. Aamodt *et al.* [ALICE Collaboration], *Phys. Rev. Lett.* **106**, 032301 (2011).  B. Alver, B. B. Back, M. D. Baker, M. Ballintijn, D. S. Barton, R. R. Betts, R. Bindel and W. Busza *et al.*, *Phys. Rev. Lett.* **102**, 142301 (2009).  B. B. Back, M. D. Baker, D. S. Barton, R. R. Betts, M. Ballintijn, A. A. Bickley, R. Bindel and A. Budzanowski *et al.*, *Phys. Rev. Lett.* **91**, 052303 (2003).  H. Petersen and M. Bleicher, *PoS CPOD* **2006**, 025 (2006) [nucl-th/0611001].  B. Mohanty and J. Alam,, 064903 (2003).  J. Benecke, T. T. Chou, C. -N. Yang and E. Yen,*Phys. Rev.* **188**, 2159 (1969).  G. J. Alner et al. [UA5 Collaboration], *Z. Phys. C* **33**, 1 (1986).  B. B. Back *et al.* [PHOBOS Collaboration],*Phys. Rev. C* **72**, 031901 (2005); B. B. Back *et al.* [PHOBOS Collaboration], *Phys. Rev. C* **74**, 021902 (2006).  G. Antchev *et al.* [TOTEM Collaboration], *Eur. Phys. Letts.* **101**, 21004 (2013).  J. Jalilian-Marian, *Phys. Rev. C* **70**, 027902 (2004).  F. Gelis, A. M. Stasto and R. Venugopalan, *Eur. Phys. J. C* **48**, 489 (2006).  J. Cleymans, J. Strumpfer and L. Turko, *Phys. Rev. C* **78** 017901 (2008).   Brogueira, J. Dias de Deus, and C. Pajares, *Phys. Rev. C* **75**, 054908 (2007). R. Sahoo and A. N. Mishra, *Int. J. Mod. Phys. E* **23**, 1450024 (2014).  D. Kharzeev and M. Nardi, *Phys. Lett. B* **507**, 121 (2001).  X. -N. Wang and M. Gyulassy, *Phys. Rev. Lett.* **86**, 3496 (2001).  L. D. Landau, Izv. Akad. *Nauk Ser. Fiz.* **17**, 51 (1953).  P. Steinberg,*PoS CPOD* **2006**, 036 (2006), [nucl-ex/0702019].  P. Carruthers and Minh Doung-van, *Phys. Rev. D* **8**, 859 (1973).  C. Y. Wong,*Phys. Rev. C* **78**, 054902 (2008).  P. K. Netrakanti and B. Mohanty,*Phys. Rev. C* **71**, 047901 (2005) G. Wolschin, (1999) 85.  J. D. Bjorken,*Int. J. Mod. Phys. A* **7**, 4189 (1992); J.D. Bjorken, K.L. Kowalski, C.C. Taylor, Baked Alaska, SLAC- PUB-6109; K. Rajagopal and F. Wilczek, *Nucl. Phys. B* **399**, 395 (1993)  B. I. Abelev *et al.* [STAR Collaboration], *Nucl. Phys. A* **832**, 134 (2010);  J. Adams *et al.* [STAR Collaboration],*Phys. Rev. Lett.* **95**, 062301 (2005).  M. M. Aggarwal *et al.* [WA93 Collaboration],*Phys. Rev. C* **58**, 1146 (1998)  M. M. Aggarwal *et al.* [WA98 Collaboration],*Phys. Lett. B* **458**, 422 (1999).  J. Adams *et al.* [STAR Collaboration], *Phys. Rev. Lett.* **95**, 062301 (2005) J. Adams *et al*. [STAR Collaboration],, 054907 (2004). B.I Abelev *et al*. [STAR Collaboration],, 134 (2010). M. Jacob and P.V. Landshoff,, 657 (1986). X.N. Wang,, 287 (1997). M. Gyulassy and T. Matsui,, 419 (1984). K.J. Eskola, K. Kajantie, P.V. Ruuskanen, and K. Tuominen,, 379 (2000). P.F. Kolb, U. Heinz, P. Huovinen, K.J. Eskola, and K. Tuominen,, 197 (2001). A. Dumitru and M. Gyulassy,, 215 (2000). T. Abbott *et al*. [E-802 Collaboration],, 064602 (2001). T. Abbott *et al.*, [E802 Collaboration],, 2663 (1995). P. Ghosh,, 054017 (2012). L. Ahle *et al.*, [E802 Collaboration],, 2173 (1999). Raghunath Sahoo and Aditya Nath Mishra,, 1450024 (2014). S. Chatrchyan *et al.*, [CMS Collaboration],, 152303 (2012). F. Karsch,, 503 (2009); Z. Fodor and S. Katz,, 050 (2004). K. Adcox *et al.*, [PHENIX Collaboration],, 184 (2005). D. Kharzeev and M. Nardi,, 121 (2001). S.S. Adler *et al*., [PHENIX Collaboration],, 034908 (2005). B. Hahn *et al*.,, 1131 (1956); C.W. De Jager *et al*., At. Data Nucl. Data Tables **24**, 479 (1974). K. Adcox *et al*., [PHENIX Collaboration],, 052301 (2001). T. Alber *et al*.,, 3814 (1995). T.S. Ullrich,, 399c (2003). J. Adams *et al*., [STAR Collaboration],, 052302 (2004). P.F. Kolb and U. Heinz, nucl-th/0305084 (2003). B. Abelev *et al.*, [ALICE Collaboration],, 371 (2013). A. Krasnitz, Y. Nara, R. Venugopalan,, 268 (2003). W. Reisdorf, *et al.*, [FOPI Collaboration],, 493 (1997). M. van Leeuwen [NA49 Collaboration],, 161 (2003). S.V. Afanasiev *et al.*, [NA49 Collaboration],, 054902 (2002). J. B$\ddot{a}$chler *et al.*, [NA49 Collaboration],, 45 (1999). R. Sahoo, PhD Thesis (Utkal University, 2007), arXiv:0804.1800; R. Sahoo [STAR Collaboration],, 897 (2011). J. Cleymans, R. Sahoo, D.P. Mahapatra, D.K. Srivastava and S. Wheaton,, 172 (2008). J. Cleymans, R. Sahoo, D.P. Mahapatra, D.K. Srivastava and S. Wheaton,, 104147 (2008). D. Prorok, (2005) 194c. J. Cleymans and K. Redlich,, 5284 (1998). P. Braun-Munzinger and J. Stachel,, 1971 (2002). A. Tawfik,, S1105 (2005). J. Cleymans, H. Oeschler, K. Redlich, and S. Wheaton, 50 (2005). B. Abelev *et al*., [ALICE Collaboration],, 371 (2013). A. Adronic, P. Braun-Munzinger, J. Stachel and H. Stocker,, 203 (2011). B. Abelev *et al.* [ALICE Collaboration],, 252301 (2012). A. Baran, W. Broniowski and W. Florkowski,, 779 (2004). A Bialas and R. Peschanski,, 59 (1988). --- 1. Corresponding Author, Email: [email protected][↩](#fnref1) Charged Particle and Photon Multiplicity, and Transverse Energy Production in High-Energy Heavy-Ion Collisions ============================================================================================================== We review the charged particle and photon multiplicity, and transverse energy production in heavy-ion collisions starting from few GeV to TeV energies. The experimental results of pseudorapidity distribution of charged particles and photons at different collision energies and centralities are discussed. We also discuss the hypothesis of limiting fragmentation and expansion dynamics using the Landau hydrodynamics and the underlying physics. Meanwhile, we present the estimation of initial energy density multiplied with formation time as a function of different collision energies and centralities. In the end, the transverse energy per charged particle in connection with the chemical freeze-out criteria is discussed. We invoke various models and phenomenological arguments to interpret and characterize the fireball created in heavy-ion collisions. This review overall provides a scope to understand the heavy-ion collision data and a possible formation of a deconfined phase of partons via the global observables like charged particles, photons and the transverse energy measurement. INTRODUCTION ============ At extreme temperatures and energy density, hadronic matter undergoes a phase transition to partonic phase called as Quark-Gluon Plasma (QGP). The main goal of heavy-ion collision experiments is to study the QGP by creating such extreme conditions by colliding heavy nuclei at relativistic energies. During the last decade, there are many heavy-ion collision experiments carried out at SPS, RHIC and LHC to create and study QGP in the laboratory. Global observables like transverse energy ($E\_{\rm T}$), particle multiplicities ($N\_{\gamma}, N\_{\rm {ch}}$ etc.), $p\_{\rm T}$-spectra of the produced particles and their pseudorapidity distributions ($dE\_{\rm T}/d\eta, dN/d\eta$), with different colliding species and beam energies provide insight about the dynamics of the system and regarding the formation of QGP. It is also proposed that the correlation of mean transverse momentum, $\langle p\_{\rm T} \rangle$ and the multiplicity of the produced particles may serve as a probe for the Equation of State (EoS) of hot hadronic matter. In a thermodynamic description of the produced system, the rapidity density (*d**N*/*d**y*) reflects the entropy and the mean transverse momentum ($\langle p\_{\rm T} \rangle$), corresponds to the temperature of the system. Except at the phase transition points, the rapidity density linearly scales with $\langle p\_{\rm T} \rangle$. If the phase transition is of first order, then the temperature remains constant at the co-existence of the hadron gas and the QGP phase, thereby increasing the entropy density. In such a scenario, $\langle p\_{\rm T} \rangle$ shows a plateau with increase of entropy, thereby characterizing the phase transition associated with the time evolution of the system. Hence, the global observables like, *d**N*/*d**y* and $\langle p\_{\rm T} \rangle$, give indication of a possible existence of a QGP phase and the order of phase transition. $dE\_{\rm T}/d\eta$ gives the maximum energy density produced in the collision process which is necessary to understand the reaction dynamics. The formation of QGP may also change the shape of the pseudorapidity distribution. The event multiplicity distribution gives information of the centrality and energy density of the collision. The scaling of multiplicity with number of participant nucleons ($N\_{\rm {part}}$) reflects the particle production due to soft processes (low-$p\_{\rm T}$). Whereas, at high energy when hard processes (high-$p\_{\rm T}$) dominate, it is expected that the multiplicity will scale with the number of nucleon-nucleon collisions ($N\_{\rm {coll}}$). There are models to explain the particle production taking a linear combination of $N\_{\rm {part}}$ and $N\_{\rm {coll}}$ (called two-component model). The most viable way of studying QGP is via the particles produced in the collision in their respective domain of proposed methods. Then one of the most fundamental questions arises about the mechanism of particle production and how they are related with the initial energy density, gluon density in the first stage of the collision evolution and entropy of the system. Similarly, question can be put to figure out the role of soft and hard process of particle productions. It is proposed that the charged particle multiplicity or technically called as the pseudorapidity density distributions of charged particles, $dN\_{\rm{ch}}/d\eta$, can be used to address the above questions. Here the pseudorapidity, *η* =  − *l**n* *t**a**n* *θ*/2, where *θ* is the polar angle, the produced particles make with the detector. So *d**N**c**h*/*d**η* is called as one of the global variables to characterize the system produced in the heavy-ion collisions. Experimentally, it is more easy to estimate this quantity as most of the detectors are capable of detecting charged particles and it involves only kinematics of the charged particles. In this review, in Section-II, we discuss the method of experimental determination of collision centrality, which is followed by discussions on the midrapidity pseudorapidity density distributions of charged particles for different collision energies, collision species and centralities in Section-III. In this section, we discuss about the longitudinal scaling and factorization of charged particles. The expansion dynamics of the system is discussed using the pseudorapidity density distributions of charged particles and the Landau-Carruthers hydrodynamics. In subsequent subsections, the scaling of total charged particles with collision centrality and its energy dependence are discussed. This follows with similar discussions on the photon pseudorapidity density at forward rapidities in Section-IV, which includes longitudinal scaling of photons. Subsequently, in Section-V, discussions are made on the production of transverse energy and its use for centrality determination. Section-VI includes discussions on collision energy dependence of transverse energy, which is followed by discussions on the centrality dependence in Section-VII. Section-VIII includes discussions on estimation of initial energy density in Bjorken hydrodynamic scenario, and its energy and centrality dependences. Further we correlate the energy and centrality dependence of transverse energy per charged particle with chemical freeze-out criteria in Section-IX. In Section-X, we summarize the review with conclusions. Appendix discusses on the important properties of Gamma and Negative Binomial Distributions. Centrality determination ======================== In heavy-ion collisions, the event centrality is of utmost importance to understand the underlying physics of the collision. The event centrality is related to the impact parameter, defined as the distance between the centroids of the two colliding nuclei in a plane transverse to the beam axis, of the collision. The impact parameter tells about the overlap volume of the two nuclei. This overlap volume determines the geometrical parameters, like number of participant nucleons ($N\_{\rm{part}}$), number of spectator nucleons ($N\_{\rm{spec}}$) and the number of binary collisions ($N\_{\rm{coll}}$). The impact parameter can not be determined experimentally. However, the global observables, like total charged particles
arxiv_0000080
magnitude gravitational wave inspiral time. Gravitational radiation is the relevant loss term (as opposed to, for example, tides) because the orbits limited by this criteria are in the gravitational wave dominated regime of pericenter distance (Figure [fig:terms]). The combination of these limits on the captured WD population ensures that these WDs will interact primarily with the MBH over the course of their orbital inspiral. In the next subsections, we describe how interactions with the MBH transform the captured distribution. Modeling the Evolution of Captured WD orbits -------------------------------------------- ![ Phase space of encounters between WDs and MBHs. Gravitational waves are the dominant orbital evolution term above the solid lines (shown for M_{\rm wd} = 0.2, 0.6, and 1.0M_\odot). Tidal excitation is the dominant orbital-energy loss term for pericenter distances below the solid line. To the right of the dashed lines, the pericenter distance is within the MBH’s Schwarzschild radius and the WD would be swallowed whole. The gray shaded area (valid to the left of the dashed lines) shows the region in which tidal forcing at pericenter is strong enough to produce mass loss. Progressive shadings show the onset of mass loss, and \Delta M = 10%, 50% and 100% of the WD mass, from top to bottom, respectively. ](f5 "fig:") [fig:terms] To model the subsequent evolution of the WD orbits under the influence of both tides and gravitational radiation, we have developed an orbit-averaged code that can rapidly trace these inspirals. The effects of gravitational radiation from the orbit are applied following the prescription of, which is equivalent to the 2.5-order post-Newtonian approximation. Tidal excitation is computed following the model of. shows that the exchange between orbital and oscillation energy depends on the amplitude and phase of the WD’s oscillation as it encounters the MBH. This process leads to a ``memory" of previous interactions, and orbits that evolve chaotically as a given interaction can lead to either a positive or negative change in orbital energy and angular momentum. To model the fiducial change in orbital energy (for an unperturbed star) we follow the prescription given by [dE] Et = 0.7 -1 G mwd2/rwd, where *ϕ* = *η*− 1exp(2.74(*η* − 1)), with the dimensionless variable *η* a parameterization of pericenter distance $\eta^2 = \left({r\_{\rm p}}/r\_{\rm wd}\right)^3 \left( m\_{\rm wd}/{M\_{\rm bh}}\right)$. This expression is a fit to results computed following the method of and, where the overlap of *l* = 2 fundamental oscillation mode with the tidal forcing is integrated along the orbital trajectory. We compared Equation with numerical results derived computing such an integral and found at most a few percent difference as a function of ${r\_{\rm p}}$, and thus we adopt this simplifying form. The orbital energy lost through tides goes into the quadrupole fundamental mode of the WD, which oscillates with an eigenfrequency $\omega\_f \approx 1.445 G M\_{\rm wd} R\_{\rm wd}^{-3}$. The angular momentum exchange with oscillations is related to the energy loss, Lt = 2 Et/ f. Finally, we allow gravitational radiation to carry away oscillation energy from the tidally-excited WD. The luminosity of gravitational radiation scales with the oscillation energy, resulting in a constant decay time of tdec = 1.5 102 ( Mwd / M)-3 ( Rwd /10-2 R)4, which corresponds to $t\_{\rm dec} = 6447\text{ yr}$ for the 0.5*M*⊙ WD example used here. We terminate the evolution when one of several criteria are reached: 1. The pericenter distance is less than the radius at which mass loss occurs $({r\_{\rm p}}< 2 {r\_{\rm t}})$. 2. The accumulated oscillation energy of the WD exceeds its binding energy, Eosc > 3 10-2n G MWD2 RWD with *n* = 3/2. 3. The orbit circularizes. In the code this is when *e* < 0.1. Further evolution is traceable via the gravitational wave inspiral as impulsive excitation of tidal oscillations no longer occurs. These termination criteria correspond roughly to the categories of interactions between WDs and MBHs outlined in Section [sec:sig]. When criteria 1 is met, either a single-passage tidal disruption or multiple passage mass transfer episode can be initiated depending on the orbital eccentricity, *e*. When criteria 2 is met, eccentric-orbit mass transfer ensues. When criteria 3 is met, the WD’s orbit evolves to eventual Roche-lobe overflow. In the next subsection, we use these termination criteria to examine the distribution of orbits at the onset of mass transfer – after they have been transformed by their tidal and gravitational wave driven inspiral. Distributions at the Onset of Mass Transfer ------------------------------------------- ![Orbital distributions of WDs captured from split binaries at the onset of mass transfer to the MBH. Initial distributions are shown filled, final distributions are shown as lines. The upper panel shows the semi-major axis (blue) and pericenter distance (red) along with their corresponding initial distributions. The middle panels show the corresponding eccentricity and orbital period distributions. Orbits are evolved under the influence of gravitational waves and tidal excitation until the l=2 oscillation energy grows to reach the WD binding energy, at which point mass will be stripped from the WD envelope. The lower panel shows the number of orbits the WD survives before the onset of mass transfer, N_{\rm orb}. ](f6 "fig:") [fig:orbdist] In Figure [fig:orbdist], we show how captured 0.5*M*⊙ WDs from split binaries are eventually disrupted by a 105*M*⊙ MBH. The distribution of captured orbits is shown filled, while the final distribution is shown with lines. We find that in all cases, the deposition of orbital energy into tidal oscillation energy of the WD eventually reaches and exceeds the WD binding energy. We terminate our calculations at this point (termination criteria 2) as this represents the onset of tidally-forced mass loss from the WD to the MBH. We find no cases of complete circularization in our simulations (termination criteria 3). Circularization without tidal disruption requires larger initial pericenter distances, where tidal excitation is minimal, and correspondingly longer isolation times in order to allow the gravitational wave inspiral to complete. The circularization and inspiral times are similar under the influence of gravitational radiation. As a result, we find no cases of termination criteria 1 in our isolated evolutions. Instead, when the pericenter distance drops to within the radius at which tides are the dominant Δ*E* (Figure [fig:terms]), the tidal oscillation energy tends to rapidly grow to exceed the WD’s binding energy leading termination criteria 2 to be met. The number of orbits elapsed after capture and before the onset of mass transfer and termination is shown in the lower panel of Figure [fig:orbdist]. Following the onset of mass loss, tidal stripping and eventual disruption over repeated pericenter passages proceeds as described in Section [sec:sig]. We find that most WDs are disrupted with moderate eccentricity and a broad range of orbital periods between 103 and 106 s. The eccentricity distribution shows no nearly-circular orbits, but many orbits with $e<e\_{\rm crit}$, equation. The orbital period is particularly important in the case of eccentric encounters because it sets the timescale for repetition between subsequent pericenter mass-stripping episodes. After the onset of mass transfer, the WD can be expected to survive for at most tens of passages (Section [sec:sig]), thus the repetition time also fixes the range of possible total event durations. Detecting High Energy Signatures of White Dwarf Disruption ========================================================== In this Section, we compare the relative rates and expected luminosities of different classes of transients associated with WD-MBH interactions to discuss their detectability. We show that although rare, WD transients should outnumber their main-sequence counterparts in high energy detections because of their substantially higher peak luminosities. We then calculate that the rate of these events is sufficiently high to allow their detection by instruments such as *Swift*. Main sequence disruptions significantly outnumber WD disruptions and mass transfer interactions. In the upper panel of Figure [fig:rates], we compare the rate of main-sequence star tidal disruptions to that of WD tidal disruptions, and to repeating flares resulting from mass transfer from captured WDs. The disruption rates of stars and binaries are computed by integrating the flux into the loss cone given the cluster properties outlined in [sec:rates], equation. In the case of repeating transients, our disruption rate calculation is supplemented by the Monte Carlo simulation that traces orbits to the onset of mass transfer, described in [sec:inspiral]. To compute the values shown in the Figure, we assume a binary fraction of $f\_{\rm bin}=0.1$, and a WD fraction of $f\_{\rm wd}=0.1$ that applies both within the cluster and within binaries. We represent the remaining stars as main sequence stars that are sun-like, with *R*\* = *R*⊙ and *M*\* = *M*⊙. White dwarf interactions display a cut-off in MBH mass where most events transition from producing flares (if ${r\_{\rm p}}\gtrsim r\_{\rm ISCO} \approx 4 {r\_{\rm s}}$) to being consumption events with little or no electromagnetic signature. For the 0.5*M*⊙ WDs plotted, this cutoff occurs at black hole masses very near 105*M*⊙. Interestingly, the progressive disruption of WDs in eccentric orbits extends to slightly higher MBH masses, since the WD is disrupted gradually, over a number of orbits, without actually penetrating all the way to the tidal radius. These limits in black hole mass are flexible depending on the spin parameter and orientation of the MBH’s spin, since the general relativistic geodesic deviates substantially from a Newtonian trajectory in such deeply-penetrating encounters. If oriented correctly with respect to a maximally rotating Kerr hole, a 0.5*M*⊙ WD could, marginally, be disrupted by a 106*M*⊙ black hole. A realistic spectrum of WD masses would also contribute to softening this transition from flaring to consumption. While the lowest mass WDs are expected to be rare in nuclear clusters due to the effects of mass segregation, they are less dense than their more massive counterparts and could be disrupted by slightly more massive black holes. For example, a 0.1*M*⊙ WD could be disrupted by a 3 × 105*M*⊙ black hole. Although rare, relativistic WD transients significantly outshine their main sequence counterparts. In the lower panel of Figure [fig:rates], we combine the relative rates of different tidal interactions with their expected peak luminosities as a function of MBH mass. We allow the beamed luminosity of all of these jetted transients to trace the mass supply to the black hole, $L\propto \dot M c^2$, as in Figure [fig1] and assume that the degree of collimation is similar for each of the different classes of events. Given a population of MBHs with masses ${M\_{\rm bh}}\lesssim 10^5 M\_\odot$, WD tidal disruptions should be more easily detected than main sequence disruptions. Eccentric disruptions over the course of multiple orbits favor slightly higher black hole masses. Their rarity compared to single-passage WD tidal disruptions implies that although they have similar peak luminosities they represent a fractional contribution to the range of detectible events. This result suggests that WD disruptions, rather than main sequence disruptions, should serve as the most telling signpost to MBHs with masses less than 105*M*⊙. In the following subsection, we discuss how high energy emission can be produced in these transients. ![Rates of different interaction channels per galaxy, \dot N_{\rm gal}, as a function of {M_{\rm bh}}. The black line is the disruption of sun-like stars. Blue is the disruption of WDs, Green is the capture of WDs by split binaries into inspiralling orbits. Top: The disruption of MS stars per galactic center greatly outnumbers that of WDs. WD disruptions peak at lower {M_{\rm bh}} and are consumed whole by MBHs with masses {M_{\rm bh}}\gtrsim 10^5 M_\odot. Repeating flares extend to slightly higher {M_{\rm bh}} because they are disrupted progressively with pericenter distances moderately outside the tidal radius. Bottom: When weighted by their relative luminosities, disruptions of WDs appear more common than disruptions of MS stars. This panel is normalized to the MS value, and assumes similar f_{\rm beam} for all classes of events. Repeating flares are also quite luminous, but their relative rarity implies that they should make only a fractional contribution to the population of relativistic MS disruptions. ](f7 "fig:") [fig:rates] Dissipation and Emission Mechanisms ----------------------------------- Internal dissipation leading to a non-thermal spectrum, to be most effective, must occur when the jet is optically thin. Otherwise it will suffer adiabatic cooling before escaping, and could be thermalized. The comoving density in the jet propagating with a Lorentz factor Γ is $n'\approx L\_{\rm j}/(4\pi r^2 m\_p c^3 \Gamma^2)$, and using the definition of the Thomson optical depth in a continuous outflow $\tau\_{\rm j}\approx n'\sigma\_{\rm T} (r/\Gamma)$ we find the location of the photosphere $$r\_\tau={\dot{M} \sigma\_{\rm T} \over 4\pi m\_p c \Gamma^2}= 10^{13}\left({L\_{\rm j} \over 10^{49}\;{\rm erg/s}}\right)\left({\Gamma \over 10}\right)^{-3}\;{\rm cm}.$$ If the value of Γ at the jet base increases by at least a factor 2 over a timescale *δ**t*, then the later ejecta will catch up and dissipate a significant fraction of their kinetic energy at some distance given by $$r\_\iota\approx c \delta t \Gamma^2= 3 \times 10^{13} \left({\delta t \over 10\;{\rm s}} \right) \left({\Gamma \over 10} \right)^2\;{\rm cm}.$$ Outside *r**τ*, where radiation has decoupled from the plasma, the relativistic internal motions in the comoving frame will lead to shocks in the gas. This implies the following lower limit on $$\Gamma \gtrsim \Gamma\_{\rm c}= 7.5 \left({L\_{\rm j} \over 10^{49}\;{\rm erg/s}}\right)^{1/5}\left({\delta t \over 10\;{\rm s}}\right)^{-1/5}.$$ When $\Gamma \leq \Gamma\_{\rm c}$, the dissipation occurs when the outflow is optically thick and an almost thermal transient is expected to emanate from the jet’s photosphere. When $\Gamma \geq \Gamma\_{\rm c}$, dissipation takes place when the jet is optically thin. In the presence of turbulent magnetic fields built up behind the internal shocks, the accelerated electrons within this region can produce a synchrotron power-law radiation spectrum similar to that observed in GRBs. The resulting non-thermal flare from an internal shock collision will arrive at a detector at a time $\Delta t\_{\rm obs} \approx r\_\iota /(c \Gamma^2) \approx \delta t$. Thus, the relative time of flare variability at the detector will have a close one-to-one relationship with the time variability within the jet. Alternatively, high-energy emission can be produced as the jet propagates through the accretion disk region while interacting with very dense soft photon emission with typical energy $\Theta\_{\rm disk}=k T\_{\rm disk} /(m\_e c^2)$. A fraction $\approx \min(1,\tau\_{\rm j})$ of the photons are scattered by the inverse Compton effect to energies $\approx 2\Gamma^2\Theta\_{\rm disk}$, where we have assumed that a constant Γ has been attained. Each seed photon is boosted by  ≈ Γ2 in frequency, yielding a boosted accretion disk spectrum. The observed variability time scale, in this case, is primarily related to changes in the accretion disk luminosity. Due to relativistic aberration, the scattered photons propagate in a narrow 1/Γ beam. The Compton drag process can be very efficient in extracting energy from the jet and can limit its maximum speed of expansion so that $\Gamma^2 L\_{\rm Edd} \lesssim L\_{\rm j}$. Typical bulk Lorentz factors range from Γ ≈ 10 in quasars to Γ > 102 in GRBs. Transients that have so far been associated with tidal disruptions of stars have been mildly relativistic, with typical Lorentz factors of a few. In the case of *Swift* J1644+57, and inferred Γ ≈ 2.2. find the that $\Gamma \gtrsim 2.1$ is required in *Swift* J2058+05. In both cases, the observed spectrum can be explain by both internal dissipation and Compton drag. Event Rates ----------- We can estimate the detectable event rate by considering the space density of dwarf galaxies that might host these black holes. We estimate that a lower limit on the number density of dwarf galaxies is  ∼ 107Gpc− 3 although recent work has shown that it may be up to a factor of  ∼ 30 higher. If we assume that the MBH occupation fraction of these galaxies is $f\_{\rm MBH}$, and adopt a per MBH rate of $\dot N\_{\rm gal} \sim 10^{-6} \text{ yr}^{-1}$, then the rate of WD tidal disruptions per volume is $\dot N\_{\rm vol} \sim 10 f\_{\rm MBH} \text{ Gpc}^{-3} \text{ yr}^{-1}$. Note that this rate is approximately a factor of 100 smaller than the rate estimate of, because they adopt a higher $\dot N\_{\rm gal}$ that is derived by combining the tidal disruption rate normalization of an isothermal sphere (*ν*\* ∝ *r*− 2) with the fraction of disrupted WDs from N-body simulations of globular clusters. Considering their high luminosity, these transients may be detected out to cosmological distances. As an example, the annual event rate for transients with *z* < 1 is $ \dot N\_{z<1} \sim 1500 f\_{\rm MBH} \text{ yr}^{-1}, $ where we have used the fact that in an $\Omega\_{\rm m}=0.3$, *H*0 = 70 cosmology, *z* < 1 encloses a comoving volume of approximately 150 Gpc3. Because the emission is beamed, only a fraction $f\_{\rm beam}$ are detectable from our perspective due to the random orientation of the jet column. Thus we arrive at a potentially observable event rate of Nz<1, obs ~1500 fbeam fMBH -1. If $f\_{\rm beam} = 0.1$, then of order $150 f\_{\rm MBH} $ events are theoretically detectable per year. The fraction of these that would have triggered *Swift* in the past is still not completely understood. From Figure [fig2], typical peak timescales are thousands of seconds. suggest that <10% of exposures have sufficiently long-duration trigger applied to detect a longer event duration event like a WD-MBH interaction. Assuming that 10% of the theoretically observable events are found $\left(f\_{\rm Swift}=0.1\right)$, that leaves a *Swift* rate of $\dot N\_{\rm Swift} \sim 15 f\_{\rm MBH} \text{ yr}^{-1}$. This rate is low compared to the typical GRB rate detected by *Swift*, but potentially high enough to build a sample of events over a several year observing window with some long-cadence observations tailored to trigger on transients of this duration. Discussion ========== The MBH mass function --------------------- For MBH masses of $\lesssim 10^5 M\_\odot$, jetted transients associated with WD tidal disruptions are extremely luminous and fuel the black hole above the Eddington limit for nearly a year. These events offer a promising observational signature of quiescent black holes in this lower mass range due to their high luminosities. Unbeamed emission from the accretion flow is roughly Eddington-limited, and therefore will be at least three orders of magnitude fainter than the beamed emission. While previous *Swift* trigger criteria catered to much shorter-duration events, with increasing focus on long duration events recently, the fraction of transients that would trigger *Swift*, $f\_{\rm Swift}$, is likely to increase or at least become better constrained in future observations. With a *Swift* detection rate of order of $\dot N\_{\rm Swift} \sim 15 f\_{\rm MBH} \left(f\_{\rm Swift}/0.1\right) \left(f\_{\rm beam}/0.1\right) \text{ yr}^{-1}$, it should be possible to constrain the occupation fraction, $f\_{\rm MBH}$. More than one event per year would result if $f\_{\rm MBH}\gtrsim0.1$, and thus it is most likely possible to constrain $f\_{\rm MBH}$ to that level or larger. In placing such a limit, there remains some degeneracy, for example, $f\_{\rm MBH}= 0.1$ in the above expression could either mean that 10% of dense nuclei harbor MBHs, or that 10% of MBHs are surrounded by stellar systems. Even so, with knowledge of the expected signatures, the detection or non-detection of WD-disruption transients can place interesting constraints on the population of MBHs in this mass range with current facilities. Non-detections of events, therefore, would argue against the presence of MBHs or the presence of stellar cusps for this mass range. Ultra-long GRBs as WD Tidal Disruptions? ---------------------------------------- There is tantalizing evidence that tidal disruptions of WDs by MBHs have already been detected, under the guise of ultra-long GRBs. elaborate on the properties of several members of the newly emerging class of ultra-long GRBs: GRB 101225A, GRB 111209A, and GRB 121027A. All of these GRBs reach peak X-ray luminosities of  ∼ 1049erg s− 1 and non-thermal spectra reminiscent of relativistically beamed emission. At times greater than 104 seconds all of these bursts exhibit luminosities that are more than a factor of a hundred higher than typical long GRBs. Astrometrically, the two bursts for which data is available (GRB 101225A are GRB 111209A) are coincident with their host galaxy’s nuclear regions, suggesting compatibility with the idea that these transients originated through interaction with a central MBH. However, it is worth noting that if these events are associated with dwarf or satellite galaxies, they might appear offset from a more luminous central galaxy despite being coincident with the central regions of a fainter host, a clear-cut example being the transient source HLX-1. discuss a long-duration x-ray transient, XRT 000519, with a faint optical counterpart and quasi-periodic precursor emission. The source is located near M86. If it is at the distance of M86, the luminosity is similar to the Eddington limit of a 104*M*⊙ MBH. If it is, instead, a background object, the emission could be beamed and have a luminosity of up to  ∼ 1048 erg s− 1. Might such events be tidal disruptions of WDs by MBHs? Further evidence is certainly needed to ascertain the origin of these bursts, but the properties, including luminosities and decay timescales are in line with those we have reviewed for disruptions of WDs by MBHs. Figure [fig:phasespace] augments the phase space diagram of, showing characteristic luminosities and decay times for single-passage tidal disruptions of WDs and MBHs (blue shaded region). In Figure [fig:phasespace], we plot the peak timescale and luminosity of peak for the disruptions, for MBH masses from 103 to 105*M*⊙, and WD masses of 0.25 - 1*M*⊙. Other relevant timescales include $t\_{\rm Edd}$, the time above the MBH’s Eddington limit, plotted in Figure [fig2], and *t*90, as plotted for the GRB and soft gamma-ray repeater (SGR) sources, which is a factor of  ≈ 30 greater than $t\_{\rm peak}$. The peaks in the lightcurve of *Swift* J1644+57 have been associated with periodic spikes in the mass supply from a gradually disrupting WD in an eccentric orbit by. The suggested repetition time is *P* ∼ 5 × 104s. In our ${M\_{\rm bh}}=10^5 M\_\odot$, $M\_{\rm wd} = 0.5M\_\odot$ example of Figure [fig:orbdist],  ∼ 40% of the captured population initiates mass transfer with orbital periods $10^4{\rm s}<P<10^5{\rm s}$, thus, reproducing this repetition time does seem to be possible. Our inspiral simulations suggest that such repeating encounters are approximately an order of magnitude less common than their single-passage WD-dispruption counterparts. More importantly for determining the origin of *Swift* J1644+57, by comparison to Figure [fig:rates] we expect that repeating encounters with these sorts of repetition times would be detected at  ∼ 10% the rate of jetted main-sequence disruptions from these same MBH masses. However, single-passage WD disruptions, repeating encounters, and main-sequence disruptions each originate from different range of characteristic MBH masses (as shown in the lower panel of Figure [fig:rates]). If there is a strong cutoff in the low end of the MBH mass function we might expect this to truncate one class of events but not another. One remaining mystery is the shape of the lightcurve of *Swift* J1644+57 during the plateau phase. Variability could originate in modulated mass transfer or from the accretion flow and jet column itself, as described in Section [sec:detection],. If the jetted luminosity traces the mass accretion rate, $L \propto \dot M c^2$, as we have assumed here, we would expect the peaks in *Swift* J1644+57’s lightcurve to trace the exponentiating mass loss from the WD – instead of the observed plateau. If, however, this simplifying assumption proves incorrect (or incomplete) it does appear to be possible to produce events with plateau and super-Eddington timescales comparable to *Swift* J1644+57 with multi-passage disruptions of WDs. Detailed simulations of disk-assembly in multi-passage encounters offer perhaps the best hope to further constrain the electromagnetic signatures of these events. In WD disruptions, the jetted component is significantly more luminous than the Eddington-limited accretion disk component, and thus we have pursued the beamed high-energy signatures of these events in this paper. With the advent of LSST, however, detecting the corresponding disk emission signatures may become more promising. In a fraction of events that pass well within the tidal radius, a detonation might be ignited upon compression of the WD. In this scenario, maximum tidal compression can cause the shocked white dwarf material to exceed the threshold for pycnonuclear reactions so that thermonuclear runaway ensues. The critical *β* appears to be $\gtrsim 3$, so perhaps $\lesssim 1/3$ of the high-energy transients plotted in Figure [fig:phasespace] are expected to be accompanied by an optical counterpart in the form of an atypical type I supernova. Robustly separating ultra-long GRBs into core collapse and tidal disruption alternatives remains a challange. The central engines of ultra-long GRBs are essentially masked by high-energy emission with largely featureless spectra, revealing little more than the basic energetics of the relativistic outflow. Several distinguishing characteristics are, however, available. Variability timescales should be different (as they would be associated with compact objects of very different mass, see Section [sec:detection]). Significantly, the evolution of the prompt and afterglow emission at high energy and at radio wavelengths would be expected to deviate from that of a canonical impulsive blast wave in tidal disruption events due to long-term energy injection from the central engine. Disk emission, if detected in optical or UV observations, would present strong evidence of tidal disruption origin. While the bulk of WD disruptions would lack a coincident supernova, a minority would be accompanied by atypical type I supernovae. Optical signatures of a core-collapse event are uncertain, perhaps involving emission from the cocoon, accretion disk wind, or type IIP-like lightcurves but the detection of hydrogen lines in an accompanying supernova spectrum would point to a core-collapse origin. emphasize that one way to tackle these observational challenges in the near term is looking statistically at the astrometric positions of ultra-long bursts and whether they coincide with galactic centers. ![Luminosity versus duration adapted from. The WD+MBH region is the region of peak timescale and luminosity for a range of WD-MBH single-passage disruptive encounters. In the shaded region, MBH masses range from 10^3 to 10^5 M_\odot, while the WD masses plotted are 0.25-1M_\odot. For the GRB and SGR sources, t_{90} is plotted. If L\propto \dot M in the WD disruptions, t_{90} is a factor \approx 30 greater than t_{\rm peak}. The timescales and durations of WD-MBH interactions are well removed from typical long GRBs, but coincide with those of the emerging class of ultra-long GRBs, such as GRB 101225A, GRB 111209A, and GRB 121027A.](f8 "fig:") [fig:phasespace] Prospects for simultaneous Electromagnetic and Gravitational Wave Detection --------------------------------------------------------------------------- A primary source of interest in WD-MBH interactions has been their potential as sources of both electromagnetic and gravitational wave emission, especially as these events, if observed, would constrain the MBH mass function at low masses. Chirp waveforms have been computed for single, disruptive passages and should be detectible only if the source is within  ∼ 1Mpc given a 105*M*⊙ MBH. Potentially less restrictive are longer-lived periodic signals. The longest-lived transient, and that with the most uniform periodicity, would occur if a WD were overflowing its Roche lobe and transferring mass to the MBH from a circular orbit. However, we see no such circularization events in our orbit evolution simulations. Instead, the build-up of tidal oscillation energy in the WD leads to its disruption before the orbit circularizes, even in cases where gravitational radiation is the dominant term in the orbit evolution. In these eccentric cases, the gravitational wave signature reminiscent would be of a series of roughly-periodically spaced chirps associated with the pericenter passages. It is worth noting that these passages should not be strictly periodic because the orbital period wanders chaotically as successive passages pump energy into and out of the WD oscillations depending on the oscillation phase with which it encounters the MBH. Summary ======= In this paper we have discussed the role that orbital dynamics plays in shaping the transients that result from interactions between WDs and MBHs. WDs most commonly encounter black holes in single passages. Multiple passages from an eccentric orbit are about an order of magnitude less common, but would have characteristic repetition timescales of 104 − 106 s. The relative paucity of repeating events in our calculations, combined with the small range of MBH masses in which they appear to occur, suggests that the likelihood that *Swift* J1644+57 could form via the repeating disruption channel, as outlined shortly after the event by, is $\lesssim 10$%. We find no instances of mass transfer from a circular orbit. The consequence of these encounters is a mass supply that greatly exceeds the MBH’s Eddington limit. We expect the resulting thick accretion flow should amplify a poloidal magnetic field and lunch a jet. The relativistically beamed emission from these events may be more readily detectable than beamed emission from disruptions of main sequence stars. We therefore argue that the best prospects to constraining the lower-mass end of the MBH mass function lie in searching for the high-energy signatures of WD disruption events. The possibility of collecting a sample of such events in coming years with *Swift* appears promising. The detection or non-detection of these transients should offer strong constraints on the population of MBHs with masses ${M\_{\rm bh}}\lesssim 10^5 M\_\odot$ and the nature of the stellar clusters that surround them. 137 natexlab#1#1 Alexander, T. 2005, Physics Reports, 419, 65 Amaro-Seoane, P., Gair, J. R., Freitag, M., Miller, M. C., Mandel, I., Cutler, C. J., & Babak, S. 2005, Class. Quantum Grav., 631, R113 Amaro-Seoane, P., Miller, M. C., & Kennedy, G. F. 2012,, 425, 2401 Antonini, F., Faber, J., Gualandris, A., & Merritt, D. 2010,, 713, 90 Antonini, F., Lombardi, J. C. J., & Merritt, D. 2011,, 731, 128 Bahcall, J. N., & Wolf, R. A. 1976,, 209, 214 —. 1977,, 216, 883 Barack, L., & Cutler, C. 2004, Phys. Rev. D, 69, 082005 Baumgardt, H., Hopman, C., Portegies Zwart, S., & Makino, J. 2006,, 372, 467 Baumgardt, H., Makino, J., & Ebisuzaki, T. 2004,, 613, 1133 —. 2004,, 613, 1143 Begelman, M., Blandford, R., & Rees, M. 1984, Rev. Mod. Phys., 56, 255 Beloborodov, A. M. 1999, Hot and Cool: Bridging Gaps in Massive Star Evolution, 161, 295 Berger, E., Zauderer, A., Pooley, G. G., Soderberg, A. M., Sari, R., Brunthaler, A., & Bietenholz, M. F. 2012,, 748, 36 Berry, C. P. L., & Gair, J. R. 2013,, 433, 3572 Binney, J., & Tremaine, S. 2008, Galactic Dynamics: Second Edition (Princeton University Press) Blanton, M. R., Lupton, R. H., Schlegel, D. J., Strauss, M. A., Brinkmann, J., Fukugita, M., & Loveday, J. 2005,, 631, 208 Bloom, J. S., et al. 2011, Science, 333, 203 Boer, M., Gendre, B., & Stratta, G. 2013, eprint arXiv:1310.4944 Bromley, B. C., Kenyon, S. J., Geller, M. J., Barcikowski, E., Brown, W. R., & Kurtz, M. J. 2006,, 653, 1194 Burrows, D. N., et al. 2011,, 476, 421 Byun, Y. I., et al. 1996,, 111, 1889 Cannizzo, J. K., & Gehrels, N. 2009,, 700, 1047 Cannizzo, J. K., Troja, E., & Lodato, G. 2011,, 742, 32 Carter, B., & Luminet, J. P. 1982,, 296, 211 Cenko, S. B., et al. 2012,, 753, 77 Cheng, R. M., & Evans, C. R. 2013, Phys. Rev. D, 87, 104010 Cohn, H., & Kulsrud, R. M. 1978,, 226, 1087 Coughlin, E. R., & Begelman, M. C. 2014,, 781, 82 Dai, L., & Blandford, R. 2013,, 434, 2948 Dai, L., Blandford, R. D., & Eggleton, P. P. 2013,, 434, 2940 Dai, L., Escala, A., & Coppi, P. 2013,, 775, L9 De Colle, F., Guillochon, J., Naiman, J., & Ramirez-Ruiz, E. 2012,, 760, 103 de Freitas Pacheco, J. A., Filloux, C., & Regimbau, T. 2006, Phys. Rev. D, 74, 023001 Faber, S. M., et al. 1997,, 114, 1771 Farrell, S. A., Webb, N. A., Barret, D., Godet, O., & Rodrigues, J. M. 2009,, 460, 73 Ferrarese, L., & Merritt, D. 2000,, 539, L9 Frank, J. 1978,, 184, 87 Freitag, M. 2003,, 583, L21 Gebhardt, K., et al. 2000,, 539, L13 Gehrels, N., Ramirez-Ruiz, E., & Fox, D. B. 2009,, 47, 567 Gendre, B., et al. 2013,, 766, 30 Gezari, S., et al. 2009,, 698, 1367 Giannios, D., & Metzger, B. D. 2011,, 416, 2102 Gill, M., Trenti, M., Miller, M. C., van der Marel, R., Hamilton, D., & Stiavelli, M. 2008,, 686, 303 Goodman, J. 1986,, 308, L47 Graham, A. W., & Scott, N. 2013,, 764, 151 Greene, J. E., & Ho, L. C. 2004,, 610, 722 —. 2007,, 670, 92 —. 2007,, 667, 131 Guillochon, J., Manukian, H., & Ramirez-Ruiz, E. 2014,, 783, 23 Guillochon, J., & Ramirez-Ruiz, E. 2013,, 767, 25 Guillochon, J., Ramirez-Ruiz, E., & Lin, D. 2011,, 732, 74 Guillochon, J., Ramirez-Ruiz, E., Rosswog, S., & Kasen, D. 2009,, 705, 844 Gültekin, K., Miller, M. C., & Hamilton, D. P. 2004,, 616, 221 Gültekin, K., et al. 2009,, 698, 198 Haas, R., Shcherbakov, R. V., Bode, T., & Laguna, P. 2012,, 749, 117 Hayasaki, K., Stone, N., & Loeb, A. 2013,, 434, 909 Hills, J. G. 1975,, 254, 295 —. 1988,, 331, 687 Hils, D., & Bender, P. L. 1995,, 445, L7 Holcomb, C., Guillochon, J., De Colle, F., & Ramirez-Ruiz, E. 2013,, 771, 14 Hopman, C., & Alexander, T. 2005,, 629, 362 Ivanov, P. B., & Papaloizou, J. C. B. 2007,, 476, 121 Jonker, P. G., et al. 2013,, 779, 14 Kashiyama, K., Nakauchi, D., Suwa, Y., Yajima, H., & Nakamura, T. 2013,, 770, 8 Kelly, B. C., & Merloni, A. 2012, Advances in Astronomy, 2012, 1 Kepler, S. O., Kleinman, S. J., Nitta, A., Koester, D., Castanheira, B. G., Giovannini, O., Costa, A. F. M., & Althaus, L. 2007,, 375, 1315 Kesden, M. 2012, Phys. Rev. D, 85 Kormendy, J., & Ho, L. C. 2013,, 51, 511 Krolik, J. H., & Piran, T. 2011,, 743, 134 —. 2012,, 749, 92 Lauer, T. R., et al. 1995,, 110, 2622 Lee, H. M., & Ostriker, J. P. 1986,, 310, 176 Levan, A. J., et al. 2011, Science, 333, 199 —. 2014,, 781, 13 Lien, A., Sakamoto, T., Gehrels, N., Palmer, D. M., Barthelmy, S. D., Graziani, C., & Cannizzo, J. K. 2014,, 783, 24 Lightman, A. P., & Shapiro, S. L. 1977,, 211, 244 Lithwick, Y., & Sari, R. 2001,, 555, 540 Lopez-Camara, D., Lee, W. H., & Ramirez-Ruiz, E. 2009,, 692, 804 Luminet, J. P., & Pichon, B. 1989,, 209, 103 MacFadyen, A. I., & Woosley, S. E. 1999,, 524, 262 MacLeod, M., Guillochon, J., & Ramirez-Ruiz, E. 2012,, 757, 134 MacLeod, M., Ramirez-Ruiz, E., Grady, S., & Guillochon, J. 2013,, 777, 133 Magorrian, J., & Tremaine, S. 1999,, 309, 447 Magorrian, J., et al. 1998,, 115, 2285 Maoz, D., Badenes, C., & Bickerton, S. J. 2012,, 751, 143 Mardling, R. A. 1995,, 450, 722 —. 1995,, 450, 732 Marsh, T. R., Nelemans, G., & Steeghs, D. 2004,, 350, 113 Merritt, D. 2013, Dynamics and Evolution of Galactic Nuclei (Princeton University Press) Merritt, D., Schnittman, J. D., & Komossa, S. 2009,, 699, 1690 Meszaros, P., Ramirez-Ruiz, E., Rees, M. J., & Zhang, B. 2002,, 578, 812 Miller, B. P., Gallo, E., Greene, J. E., Kelly, B. C., Treu, T., Woo, J.-H., & Baldassare, V. 2014, eprint arXiv:1403.4246 Miller, M. C. 2005,, 626, L41 Nakauchi, D., Kashiyama, K., Suwa, Y., & Nakamura, T. 2013,, 778, 67 O’Leary, R. M., & Loeb, A. 2009,, 395, 781 —. 2012,, 421, 2737 Paschalidis, V., MacLeod, M., Baumgarte, T. W., & Shapiro, S. L. 2009, Phys. Rev. D, 80, 24006 Peters, P. 1964, Phys. Rev., 136, B1224 Phinney, E. S. 1982,, 198, 1109 Pilla, R. P., & Loeb, A. 1998,, 494, L167 Piro, L., et al. 2014, eprint arXiv:1405.2897 Press, W. H., & Teukolsky, S. A. 1977,, 213, 183 Pruet, J., Thompson, T. A., & Hoffman, R. D. 2004,, 606, 1006 Ramirez-Ruiz, E. 2004,, 349, L38 —. 2005, Monthly Notices RAS Letters, 363, L61 Ramirez-Ruiz, E., Celotti, A., & Rees, M. J. 2002,, 337, 1349 Ramirez-Ruiz, E., & Lloyd-Ronning, N. M. 2002, New Astronomy, 7, 197 Ramirez-Ruiz, E., & Rosswog, S. 2009,, 697, L77 Rashkov, V., & Madau, P. 2013,, 780, 187 Rees, M. J. 1988,, 333, 523 Rees, M. J., & Meszaros, P. 1994,, 430, L93 Reines, A. E., Greene, J. E., & Geha, M. 2013,, 775, 116 Reines, A. E., Sivakoff, G. R., Johnson, K. E., & Brogan, C. L. 2011,, 470, 66 Richstone, D., et al. 1998,, 395, 14 Rosswog, S., Ramirez-Ruiz, E., & Hix, W. R. 2008,, 679, 1385 —. 2009,, 695, 404 Rosswog, S., Ramirez-Ruiz, E., Hix, W. R., & Dan, M. 2008, Computer Physics Communications, 179, 184 Saxton, C. J., Soria, R., Wu, K., & Kuin, N. P. M. 2012,, 422, 1625 Sesana, A., Vecchio, A., Eracleous, M., & Sigurdsson, S. 2008,, 391, 718 Seth, A. C. 2010,, 725, 670 Shakura, N. I., & Sunyaev, R. A. 1973,, 24, 337 Shcherbakov, R. V., Pe’er, A., Reynolds, C. S., Haas, R., Bode, T., & Laguna, P. 2013,, 769, 85 Shen, R.-F., & Matzner, C. D. 2013, eprint arXiv:1305.5570 Stratta, G., et al. 2013,, 779, 66 Syer, D., & Ulmer, A. 1999,, 306, 35 Tremaine, S., et al. 2002,, 574, 740 Wang, J., & Merritt, D. 2004,, 600, 149 Wang, X., & Loeb, A. 2014, eprint arXiv:1402.5975 Wheeler, J. A. 1966,, 4, 393 Wright, E. L. 2006,, 118, 1711 Yu, Y. B., Wu, X. F., Huang, Y. F., Coward, D. M., Stratta, G., Gendre, B., & Howell, E. J. 2013, eprint arXiv:1312.0794 Zalamea, I., Menou, K., & Beloborodov, A. M. 2010,, 409, L25 Zauderer, B. A., Berger, E., Margutti, R., Pooley, G. G., Sari, R., Soderberg, A. M., Brunthaler, A., & Bietenholz, M. F. 2013,, 767, 152 Zauderer, B. A., et al. 2011,, 476, 425 Zhang, B.-B., Zhang, B., Murase, K., Connaughton, V., & Briggs, M. S. 2014,, 787, 66 Illuminating Massive Black Holes With White Dwarfs: Orbital Dynamics and High Energy Transients from Tidal Interactions ======================================================================================================================= White dwarfs (WDs) can be tidally disrupted only by massive black holes (MBHs) with masses less than  ∼ 105*M*⊙. These tidal interactions feed material to the MBH well above its Eddington limit, with the potential to launch a relativistic jet. The corresponding beamed emission is a promising signpost to an otherwise quiescent MBH of relatively low mass. We show that the mass transfer history, and thus the lightcurve, are quite different when the disruptive orbit is parabolic, eccentric, or circular. The mass lost each orbit exponentiates in the eccentric-orbit case leading to the destruction of the WD after several tens of orbits. We examine the stellar dynamics of clusters surrounding MBHs to show that single-passage WD disruptions are substantially more common than repeating encounters. The 1049 erg s− 1 peak luminosity of these events makes them visible to cosmological distances. They may be detectible at rates of as many as tens per year by instruments like *Swift*. In fact, WD-disruption transients significantly outshine their main-sequence star counterparts, and are the most likely tidal interaction to be detected arising from MBHs with masses less than 105*M*⊙. The detection or non-detection of such WD-disruption transients by *Swift* is, therefore, a powerful tool to constrain lower end of the MBH mass function. The emerging class of ultra-long gamma ray bursts all have peak luminosities and durations reminiscent of WD disruptions, offering a hint that WD-disruption transients may already be present in existing datasets. Introduction ============ Tidal disruption events have been studied theoretically since their prediction by. They are expected to produce luminous but short-lived accretion flares as debris from the disrupted star streams back to the massive black hole (MBH). The characteristic pericenter distance for tidal disruption to occur is the tidal radius, ${r\_{\rm t}}= ({M\_{\rm bh}}/M\_\*)^{1/3} R\_\*$, which is defined by the average density of the star and by the MBH mass. The tidal radius scales differently with MBH mass than the MBH’s Schwarzschild radius, ${r\_{\rm s}}= 2 G {M\_{\rm bh}}/ c^2$. As a result, for a given black hole mass, some stellar types may be vulnerable to tidal disruption while others would instead pass through the MBH horizon whole. Of particular interest to this study is the fact that MBHs more massive than $\sim 10^5 M\_\sun$ swallow typical white dwarfs (WDs) whole, while those of lower mass can produce tidal disruptions of WDs. Tidal disruptions of WDs, therefore, uniquely probe the long-debated existence of MBHs with masses less than 105*M*⊙. The kinematic traces of such black holes are difficult to resolve spatially due to their relatively small radii of gravitational influence, even with the *Hubble Space Telescope*, which has proven a powerful tool for probing more massive nuclei. While current observational constraints suggest that black holes are ubiquitous in giant galaxies, their presence is more uncertain in dwarf galaxies. Determination of the galactic center black hole mass function has traditionally focused on active galaxies, for which we can directly infer the black hole mass, and work by, and has shown intriguing possibilities for active galactic center black holes with masses similar to 105*M*⊙. But, observations of tidal disruption events probe the mass function of otherwise quiescent black holes, offering a powerful check on mass functions derived from their active counterparts. With this motivation, predicting the signatures of tidal interactions between WDs and MBHs has been the subject of substantial effort. Studies find that the resulting mass transfer nearly always exceeds the MBH’s Eddington limit mass accretion rate, $\dot M\_{\rm Edd} = 2 \times 10^{-3} ({M\_{\rm bh}}/10^5M\_\odot)\ M\_\odot \text{ yr}^{-1}$, where we have used $L\_{\rm Edd} = 4\pi G {M\_{\rm bh}}m\_{\rm p} c / \sigma\_{\rm T}$ and a 10% radiative efficiency, $L=0.1 \dot M c^2$. Accretion disk emission from these systems is at most $\sim L\_{\rm Edd}$, which is increasingly faint for smaller MBHs. However, we expect that these systems also launch relativistic jets as a result of the extremely rapid mass supply. The observed luminosity of these jetted transients is likely to be proportional to $\dot M c^2$ and thus may greatly exceed $L\_{\rm Edd}$ when $\dot M \gg \dot M\_{\rm Edd}$. While disk emission may peak at ultraviolet or soft x-ray frequencies, the jetted emission can be either produced by internal dissipation or by Compton-upscattering the disk photon field to higher frequencies. We turn our attention to these luminous high-energy jetted transients arising from WD-MBH interactions in this paper. Despite the general feature of high accretion rates, theoretical studies predict a wide diversity of signatures depending on the orbital parameters with which the WD encounters the MBH. Single, strongly-disruptive passages are thought to produce quick-peaking lightcurves with power-law decay tails as debris slowly falls back to the MBH. It has also been suggested that these sufficiently deeply-passing encounters may result in detonations of the WD, and thereby accompany the accretion flare with a simultaneous type I supernova. Multiple passage encounters result in lightcurves modulated by the orbital period, and recent work by,, and has shown that the mass fallback properties from eccentric orbits should be quite different from those in near-parabolic encounters. have suggested that tidal stripping of a WD might explain the variability in the lightcurve of *Swift* J1644+57. Finally, and have shown that the Roche lobe overflow of a WD in a circular orbit around a MBH will produce stable mass transfer, and a long-lived accretion flare. Transients in which the WD completes many orbits are of particular interest as they are persistent gravitational radiation sources with simultaneous electromagnetic counterparts. We review the properties of transients produced by tidal interactions between WDs and MBHs, with particular emphasis on the role that the orbit may play in shaping the ensuing mass transfer from the WD to the MBH in Section [sec:sig]. We focus on cases where the supply of material to MBH is above the hole’s Eddington limit and launches a relativistically-beamed jet component. In Section [sec:rates], we discuss our assumptions about the nature of stellar clusters surrounding MBHs. We model the tidal and gravitational wave-driven capture of WDs into bound orbits in order to predict the orbital distribution and rates of eccentric and circular mass transfer scenarios in Section [sec:inspiral]. We find that these events are likely outnumbered by single-passage disruptions. In Section [sec:detection], we illustrate that although they are rare, WD disruptions may sufficiently outshine MS disruptions in jetted transients that they should be easily detectible. In Section [sec:discussion], we argue that the detection or non-detection of these transients should place strong limits on the existence of MBHs with masses less than 105*M*⊙. Finally, we show that WD-MBH interaction transients bear similarities in peak luminosity and timescale to the newly-identified ultra-long gamma ray bursts. Phenomenology of White Dwarf Tidal Interactions =============================================== We can distinguish between WD-MBH encounters based on the orbit with which the WD begins to transfer mass to the MBH. This section reviews some of the expected signatures of these encounters, emphasizing the role of the orbital eccentricity at the onset of mass transfer. In Figure [fig1], we show representative light curves, calculated by assuming that $L=0.1 \dot M c^2$ for each of the orbital classes we will consider below. Transients produced range from a slow, smooth decline for Roche lobe overflow to multiple short-timescale flares in the eccentric tidal stripping case. We presume a WD mass of 0.5*M*⊙ and a MBH mass of 105*M*⊙ in Figure [fig1]. In all of the following we will assume that the WD mass radius relationship is described by [eq:massradius] Rwd = 0.013 R( )1/3 ( 1- )0.447, from. Where relevant, we will further assume that the internal structure of the WDs is described by that of a *n* = 3/2 polytrope. This is strictly most relevant at low WD masses, but because low-mass white dwarfs are the most common and also those most vulnerable to tidal interactions, we suggest this may be a reasonable approximation for most astrophysically relevant cases. ![ Accretion-powered flares that result from tidal interactions between 0.5 M_\odot WDs and a 10^5 M_\odot MBH, calculated assuming that L = 0.1 \dot M c^2. A tidal disruption event with r_p = r_t is shown in blue, a repeating flare due to tidal stripping of the WD in an eccentric orbit is shown in green, and Roche lobe overflow (RLOF) and the ensuing stable mass transfer is shown in red. For comparison, the gray line shows disruption of a sun-like star and the dashed line shows the Eddington luminosity for a 10^5 M_\odot black hole. Tidal disruption \dot M(t) curves are from. A wide diversity of flare characteristics are achieved with differing orbital parameters. ](f1 "fig:") [fig1] Near-parabolic orbit tidal disruption ------------------------------------- Typical tidal disruption events occur when stars are scattered by two-body relaxation processes in orbital angular momentum into orbits that pass close to the black hole at pericenter. We will parameterize the strength of the encounter with $\beta\equiv{r\_{\rm t}}/{r\_{\rm p}}$, such that higher *β* correspond to deeper encounters as compared to the tidal radius. Simulations of WD disruptions have been performed recently by several authors, and we describe some of the salient features here. The vast majority of these orbits originate from quite far from the MBH, where star-star scatterings become substantial. As a result, typical orbits are characterized by *e* ≈ 1. The aftermath of a disruption has been well determined in the limit that the spread in binding energy across the star at pericenter is large compared to its original orbital energy. The critical orbital eccentricity above which the parabolic approximation holds is [ecrit] e > ecrit 1-2 ( M\* Mbh)1/3. For a *β* = 1 encounter between a 0.5*M*⊙ WD and a 105*M*⊙ MBH, $e\_{\rm crit} \approx 0.97$. If $e > e\_{\rm crit}$, about half of the debris of tidal disruption is bound to the MBH, while the other half is ejected on unbound orbits. The initial fallback of the most bound debris sets the approximate timescale of peak of the lightcurve, which scales as ${\tau\_{\rm fb}}\propto {M\_{\rm bh}}^{1/2}M\_\*^{-1}R\_\*^{3/2}$. The peak accretion rate, which is proportional to $\dot M\_{\rm peak} \propto \Delta M / {\tau\_{\rm fb}}$, thus scales as $\dot M\_{\rm peak} \propto {M\_{\rm bh}}^{-1/2}M\_\*^{2}R\_\*^{-3/2}$. The fallback curves typically feature a fast rise to peak, and then a long, power-law decay with asymptotic slope similar to *t*− 5/3. Since the orbital time at the tidal radius is much shorter than that of the most bound debris, it is usually assumed that the accretion rate onto the MBH tracks the rate of fallback. In Figure [fig2], we estimate typical properties for encounters between WDs of various masses and MBHs of 104 and 104.5*M*⊙. To construct this Figure, we draw on results of hydrodynamic simulations of tidal disruption of *n* = 3/2 polytropic stars performed by. We plot colored lines corresponding to ten different impact parameters, where the WD would lose a fraction 0.1 − 1 of its mass in intervals of 0.1 in an encounter with a 104.5*M*⊙ MBH. We plot a single dot-dashed line for a 50% disruptive encounter between a WD and a MBH. All of these events fuel rapid accretion to the MBH with typical accretion rates ranging from hundreds to thousands of solar masses per year. Typical peak timescales for the accretion flares are hours. The long-term fallback fuels accretion above the Eddington limit for a period of months, after which one might expect the jet to shut off, terminating the high-energy transient emission. In the upper left panel, we compare pericenter distance to both ${r\_{\rm s}}$ and $r\_{\rm ISCO} \approx 4{r\_{\rm s}}$. Simulations of tidal encounters in general relativistic gravity, for example those of and indicate that if the pericenter distances ${r\_{\rm p}}\sim {r\_{\rm s}}$, relativistic precession becomes extremely important and free-particle trajectories deviate substantially from Newtonian trajectories. We expect, therefore, that encounters with ${r\_{\rm p}}\lesssim r\_{\rm ISCO}$ will experience strong general relativistic corrections to the orbital motion of tidal debris. The result is likely to be prompt swallowing of the bulk of the tidal debris rather than circularization and a prolonged accretion flare. For that reason we will use $r\_{\rm ISCO}$ as a point of comparison for determining when stars are captured whole or produce a tidal disruption flare in this paper. Future simulations of these extreme encounters will help distinguish where the exact cutoff between capture and flaring lies. [tbp] ![ The properties of tidal disruptions of WDs with masses 0.2-1.0 M_\odot encountering MBHs with masses of 10^4 and 10^{4.5} M_\odot. Colored lines represent encounters with a 10^{4.5} M_\odot MBH in which the WD loses a fraction between 0.1 and 1 of its total mass, in intervals of 0.1. Dot-dashed lines represent encounters in which half of the WD mass is stripped in an encounter with a 10^4 M_\odot MBH. The upper left panel shows that disruptive encounters occur outside the MBH’s Schwarzschild radius for the range of masses considered, but many close passages have {r_{\rm p}}<r_{\rm ISCO}, which may be a more appropriate cutoff for determining whether an accretion flare or prompt swallowing results from a given encounter. The remaining panels draw on simulation results from for n=3/2 polytropes to show the peak \dot M, timescale of peak, t_{\rm peak}, and time spent above the Eddington limit, t_{\rm Edd}. ](f2 "fig:") [fig2] Tidal stripping in an eccentric orbit ------------------------------------- From an eccentric orbit, if $e<e\_{\rm crit}$, equation, all of the debris of tidal disruption is bound to the MBH. If it is only partially disrupted, the remnant itself will return for further passages around the MBH and perhaps additional mass-loss episodes. We explore the nature of the accretion that results from the progressive tidal stripping of a WD in this section. In Sections [sec:rates] and [sec:inspiral], we will elaborate on the stellar dynamical processes that can lead a WD to be captured into such an orbit. Tightly-bound orbits around the MBH are very well described by Keplerian motion in the MBH’s gravitational field and the WD should be only scattered weakly each orbit. In other words, its orbital parameters diffuse slowly in response to any background perturbations (see Section [sec:rates] for more discussion of the stellar dynamics of tightly-bound stars). Thus, when such a WD first enters the tidal radius, it would do so only grazingly, losing a small fraction of its mass. We suggest in Sections [sec:rates] and [sec:inspiral] that a more typical process may be progressive tidal forcing to the point of disruption. In this picture, a WD in an initially non-disruptive orbit is eventually disrupted by the build up of tidal oscillation energy. Over many passages orbital energy is deposited into *l* = 2 mode oscillation energy of the WD. Eventually, the oscillation energy exceeds the WD’s gravitational binding energy and mass is stripped from the WD envelope. After the onset of mass transfer between the WD and the MBH, the WD will expand in radius and decrease in density following equation. The strength of subsequent encounters increases until the WD is completely destroyed by the MBH. At each encounter, we calculated the new *β* parameter based on the adjusted mass, and in turn the corresponding Δ*M*. The exact extent of mass loss may be modulated through the superposition of the WD’s oscillation phase and tidal forcing at pericenter. Unlike, for example, a giant star being tidally stripped, as long as degeneracy is not lifted the internal structure of the WD remains polytropic. We find that, over the course of tens of orbits, the mass loss episodes escalate from  < 10− 2*M*⊙ until the remaining portion of the WD is destroyed. This is in contrast to the calculation of, who, as a result of using a more approximate formula for Δ*M*(*β*) with shallower *β*-dependence, predict that the tidal stripping episode will persist for  ∼ 104 orbits. If additional heating occurs near the surface of the WD due to interaction between oscillations and marginally-bound material, as, for example, observed in simulations of WD and giant-star disruptions, the degeneracy of the outermost layers of the WD may be lifted, leading to an even more rapid exponentiation of mass-loss episodes. The example in Figure [fig1] shows a WD being stripped in an orbit with a period of 104.5 seconds. This timescale sets the repetition time of the flares, and corresponds to $e\approx0.97\approx e\_{\rm crit}$. One consequence of orbits with lower eccentricity is that the fallback of the bound material happens on very rapid timescales, potentially more rapidly than material that circularizes at the tidal radius may be viscously accreted. The ratio of fallback time (here estimated by the orbital period) to viscous time at pericenter is approximately, 2 ( )3/2 ( )-1 ( )-2, where *α**ν* is the Shakura-Sunayev viscosity parameter. When the viscous time is longer than the fallback time, the lightcurve will represent the viscous accretion of the nearly-impulsively assembled torus of tidal debris. To illustrate the accretion rate that may be expected, we employ a super-Eddington disk model proposed by and employed by to describe super-Eddington accretion in the case of *Swift* 1644+57. In this simple model, the accretion rate of an impulsively assembled disk is mediated by the rate of viscous expansion of the material. Quoting from, the peak accretion rate is M0 &=& 273 ( ) ( )1/2 && ( )-3/2 ( ) M-1. The behavior in time is then M(t) = M0 ( )-4/3, where $t\_0 = 4/9 \alpha\_\nu^{-1} {r\_{\rm p}}^{3/2}(G {M\_{\rm bh}})^{-1/2}$, approximately the viscous time at pericenter for a thick disk. The *t*− 4/3 proportionality is similar to that derived in the case of zero wind losses for impulsively assembled disks by. go on to show if the fraction of material carried away in a wind is, for example, 1/2, then the time proportionality steepens to  − 5/3. This indicates that the *t*− 4/3 power-law decay plotted in Figure [fig1] is at the shallow end of the range of possible behavoir. Any degree of wind-induced mass loss from the disk would steepen the slope of the falloff in time, further reducing the luminosity between flaring peaks. have recently proposed a new thick disk and jet launching model for super-Eddington accretion phases of tidal disruption events. Their ZEBRA model can capture the rise and peak phases in the lightcurve, not just the late-time decay behavior. These characteristics will be essential in making constraining comparisons to potential future observations. Roche-lobe overflow from a circular orbit ----------------------------------------- If the WD reaches the tidal radius in a circular orbit, mass transfer will proceed stably. The rate at which gravitational radiation carries away orbital angular momentum, JGR = - Jorb is balanced by the exchange of mass from the WD to MBH, and the corresponding widening of the orbit. The resulting equilibrium mass transfer rate is then given by Mwd = -1 Jorb Mwd where $\zeta\_{\rm wd}$ and $\zeta\_{\rm rl}$ are the coefficients of expansion of the WD and Roche lobe, respectively, in response to mass change, *ζ* = *d*ln*r*/*d*ln*M*,. For low mass WDs, $\zeta\_{\rm wd} \approx -1/3$, while the Roche lobe is well described by $\zeta\_{\rm rl} \approx 1/3$. As a result of the stability, mass transfer between a WD and a MBH would persist above the Eddington limit for multiple years. They would also radiate a persistent gravitational wave signal, with frequencies of order $\sim (G {M\_{\rm bh}}/ {r\_{\rm t}}^3)^{1/2}$, or about 0.2Hz for a 105*M*⊙ MBH and 0.5*M*⊙ WD. Such frequencies would place these objects within the sensitivity range of the proposed *LISA* and *eLISA* missions. Stellar Clusters Surrounding MBHs ================================= The properties of the stellar systems that surround MBHs determine the the orbital parameters with which WDs encounter MBHs. The nature of the stellar systems that surround MBHs with masses less than 106*M*⊙ remains observationally unconstrained. However, dense stellar clusters appear to almost universally surround known galactic center MBHs, which typically span the mass range of 106-109*M*⊙. Even in galaxies that lack nuclear activity, dense stellar clusters in galactic nuclei with centrally-peaked velocity dispersion profiles strongly suggest the presence of central massive objects. That MBHs should be surrounded by stars is not entirely unexpected. With a mass much greater than the average mass of surrounding stars, a MBH sinks rapidly to the dynamical center of the stellar system in which it resides. There may also exist a population of nearly ``naked" MBHs only surrounded by a hyper-compact stellar cluster. Such systems originate in dynamical interactions that lead to the high velocity ejection of MBHs from their host nuclei. A Simple Cluster Model ---------------------- In what follows, we adopt a simplified stellar cluster model in which the gravitational potential is Keplerian (dominated by the black hole), and the stellar density is a simple power-law with radius. Our approach is very similar to that of. In Figure [fig:clscales], and in the following paragraphs, we introduce the relevant scales that describe the orbital dynamics of such a system. MBHs embedded in stellar systems are the dominant gravitational influence over a mass of stars similar to their own mass. At larger radii within the galactic nucleus, the combined influence of the MBH and all of the stars describes stellar orbits. Keplerian motion around the MBH is energetically dominant within the MBH’s radius of influence [rh] rh = = 0.43 ()0.54  pc, where $\sigma\_{\rm h}$ is the velocity dispersion of the surrounding stellar system. We will assume that the velocity dispersion of stellar systems surrounding MBHs can be approximated by the ${M\_{\rm bh}}-\sigma$ relation. This assumption, by necessity, involves extrapolating the ${M\_{\rm bh}}-\sigma$ relation to lower black hole masses than those for which it was derived. We’ve adopted $\sigma\_{\rm h} = 2.3 \times 10^5 (M\_{\rm bh}/M\_\odot)^{1/4.38} \ {\rm cm \ s^{-1}}$. To normalize the mean density of the stellar cluster, we assume that the enclosed stellar mass within $r\_{\rm h}$ is equal to the MBH mass, $ M\_{\rm enc}(r\_{\rm h}) = {M\_{\rm bh}}$. Despite the uncertainty in extrapolating the ${M\_{\rm bh}}-\sigma$ relation, this exercise can provide a telling estimate of the order of magnitude rates of interactions between WDs and MBHs should the ${M\_{\rm bh}}-\sigma$ relation actually extend to lower masses. This calculation more robustly constrains the WD interaction rate relative to other interactions that are also based on the density of the stellar cluster, like main-sequence star disruptions. In energetic equilibrium, stars within this radius of influence distribute according to a power-law density profile in radius. We will show following equation that the energetic relaxation time for stellar clusters (of the masses we consider) is short compared to their age. Thus, the assumption of an equilibrated stellar density profile is realistic. The slope of this power-law depends on the mass of stars considered as compared to the average stellar mass. We adopt a stellar number density profile *ν*\* ∝ *r*− *α* with *α* = 3/2 in Figure [fig:clscales] and the examples that follow. As a result, the enclosed mass as a function of radius is $ M\_{\rm enc} = {M\_{\rm bh}}\left(r/r\_{\rm h}\right)^{3/2} $. If we assume that the angular momentum distribution is isotropic, then this radial density profile also defines the distribution function of stars in orbital binding energy, *ɛ*,, f() = ( 2 h2 )-3/2 \*(rh) ( h2 )-3 2. This density profile also sets the local one-dimensional velocity dispersion, 2 =, in the region $r\ll r\_{\rm h}$. If the outermost radius of the cluster is defined by the radius of influence, then the characteristic inner radius is the distance from the MBH at which the enclosed stellar mass is similar to the mass of a single star. This scale provides insight into the expected binding energy of the most bound star in the system. As a result, the radius that encloses a single stellar mass is [rmbar] r|m\* = ( |m\* Mbh )2/3 rh, where ${\bar m}\_\*$ is the average stellar mass. For simplicity, we adopt $\bar m\_\* = 1 M\_\odot$. In reality, mass segregation will create a gradient in which the average mass may vary substantially as a function of radius. It is possible that objects as large as 10*M*⊙, for example stellar-mass black holes, may be the dominant component at very small radii. However, because we adopt a radius-independent value of $\bar m\_\*$, we take 1*M*⊙ as representative of the turnoff mass of a  ∼ 12 Gyr-old stellar population. We plot the radii that enclose 1, 10, and 100 *M*⊙ in Figure [fig:clscales]. ![ Characteristic scales for WD-MBH interactions in stellar cusps surrounding MBHs given the cluster properties described in Section [sec:clmodel]. Shown, from bottom to top, are: 1) the Schwarzschild radius, {r_{\rm s}}, and the radius of the Innermost Stable Circular Orbit \sim 4 {r_{\rm s}} (black solid and dotted, respectively), 2) the tidal radius, {r_{\rm t}} for WDs (0.5M_\odot) and MS (sun-like) stars (yellow solid and dashed), 3) the radii that enclose 1, 10, and 100 M_\odot (gray dot-dashed), 4) The characteristic orbital semi-major axis that marks the transition from the empty (smaller a) to the full (larger a) loss cone regimes (red solid and dashed lines labeled \Delta J = {J_{\rm lc}}), and 5) the MBH radius of influence, r_{\rm h}. Filled shading denotes the region that is, on average, populated by stars. Filling colors denote the primary orbital relaxation mechanism with general-relativistic resonant relaxation (purple), mass-precession resonant relaxation (cyan), and finally non-resonant relaxation (green) being dominant from small to large radii, respectively. ](f3 "fig:") [fig:clscales] Orbital Relaxation ------------------ Within a dense stellar system, stars orbit under the combined influence of the MBH and all of the other stars. As a result, their orbital trajectories are constantly subject to perturbations, and deviate from closed, Keplerian ellipses. The magnitude of these perturbations may be estimated by comparing the orbital period, *P*, to the orbital relaxation time, $t\_{\rm r}$. For most stars, two-body relaxation drives orbital perturbations [tNRR] tNRR =, equation 3.2 of, also see. We adopt a value for the Coulomb logarithm of $\ln \Lambda = \ln \left({M\_{\rm bh}}/\bar m\_\* \right)$, the natural log of the number of stars within the sphere of influence. Under these assumptions, a cluster with a ${M\_{\rm bh}}= 10^5 M\_\odot$ MBH and $\bar m\_\* = 1M\_\odot$ would have undergone approximately 160 relaxation times within the age of the universe. Only when ${M\_{\rm bh}}\approx 10^{6.75}$ does the relaxation time equal the Hubble time, suggesting that the choice of a relaxed, power-law distribution of stellar number density is appropriate for the MBH masses we consider. Tightly bound stellar orbits are also perturbed by secular torques from the orbits of nearby stars. An important aspect of estimating the ``resonant relaxation" evolution time of a star’s orbit in response to these torques is estimating the coherence time of the background orbital distribution. The coherence time is the typical timescale on which neighboring orbits precess, and thus depends on the mechanism driving the precession. When this coherence time is determined by Newtonian advance of the argument of periastron, or mass precession, the incoherent resonant relaxation time is a factor of $ {M\_{\rm bh}}/ \bar m\_\* $ greater than the orbital period, [tRRN] tRR,M = ( Mbh|m\* ) P, and $t\_{{\rm coh},M} = {M\_{\rm bh}}P/M\_{\rm enc}$. The orbital period *P* is defined as *P*(*a*), the Keplerian orbital period for a semi-major axis, *a*, equal to *r*. Where general-relativistic precession determines the coherence time, the incoherent resonant relaxation time is [tRRGR] tRR,GR = 3 2 rg a ( Mbh|m\* )2 P Nenc, for a coherence time of $t\_{{\rm coh},GR} = a P/(12 r\_{\rm g})$, where $r\_{\rm g} = G {M\_{\rm bh}}/c^2$ and $N\_{\rm enc} = M\_{\rm enc}/ \bar m\_\*$. Either equation or determines the resonant relaxation timescale, $t\_{\rm RR}$ depending on which coherence time is shorter. We take the relaxation time, $t\_{\rm r}$, to be the minimum of the resonant and non-resonant relaxation timescales, tr = (tNRR,tRR). The background shading in Figure [fig:clscales] shows the dominant relaxation mechanism as a function of semi-major axis. First general relativistic resonant relaxation, then mass-precession resonant relaxation, and finally non-resonant relaxation dominate from small to large radii. Scattering to the Loss Cone --------------------------- Within a relaxation time, stellar orbits exhibit a random walk in orbital energy and angular momentum. Orbits deviate by of order their energy and the corresponding circular angular momentum in this time. Thus, the root-mean-square change in angular momentum per orbit is $\Delta J = {J\_{\rm c}}\sqrt{ P / {t\_{\rm r}}} $. The characteristic angular momentum of orbits that encounter the black hole is the loss cone angular momentum, ${J\_{\rm lc}}\approx \sqrt{ 2 G {M\_{\rm bh}}r\_{\rm p} }$, where $r\_{\rm p}$ is the the larger of the tidal radius, ${r\_{\rm t}}$ and the black hole Schwartzschild radius ${r\_{\rm s}}$. This loss cone angular momentum is significantly smaller than the circular angular momentum, ${J\_{\rm lc}}\ll {J\_{\rm c}}$. Thus, the timescale for the orbital angular momentum to change of order the loss cone angular momentum is typically much less than the relaxation time. As a result, stars tend to be scattered into disruptive orbits via their random walk in angular momentum rather than in energy. A comparison between the loss cone angular momentum, ${J\_{\rm lc}}$, and the mean scatter, Δ*J*, gives insight into the ability of orbital relaxation to repopulate the phase space of stars destroyed through interactions with the black hole. Where $\Delta J \gg {J\_{\rm lc}}$ the loss cone is often described as full. Orbital relaxation easily repopulates the orbits of stars that encounter the black hole. Conversely, where $\Delta J \ll {J\_{\rm lc}}$, the loss cone is, on average, empty. The transition between the full and empty loss cone regimes is typical of the semi-major axes from which most stars will be scattered to the black hole. From Figure [fig:clscales], we can see that this radius of transition lies well within the MBH radius of influence, where the enclosed mass is small. The flux of objects into the loss cone, and thus their disruption rate, is calculated based these criteria of whether the loss cone is full or empty at a given radius (or equivelently, orbital binding energy, *ɛ*). The number of stars in a full loss cone is $N\_{\rm lc}({\varepsilon})=4\pi^2 f({\varepsilon}) P({\varepsilon}) {J\_{\rm lc}}^2({\varepsilon}) d{\varepsilon}$. The rate at which they enter the loss cone is mediated by their orbital period defines a loss cone flux ${F\_{\rm lc}}({\varepsilon}) = F\_{\rm full}({\varepsilon}) = N\_{\rm lc}({\varepsilon}) / P({\varepsilon})$. In regions where the loss cone is not full, somewhat fewer objects populate the loss cone phase space and ${F\_{\rm lc}}({\varepsilon}) < F\_{\rm full}({\varepsilon})$. The exact expressions we use for ${F\_{\rm lc}}$ in the empty loss cone regime are not reprinted here for brevity but are given in equations (24-26) of and come from the model of. Once the loss cone flux as a function of energy, ${F\_{\rm lc}}$, is defined, the overall rate may be integrated [lcrate] Nlc = minmax Flc() d, where we take the limits of integration to be the orbital binding energy corresponding to $r\_{\rm h}$ $({\varepsilon}\_{\rm min})$ and that corresponding to the $M\_{\rm enc} = 10 M\_\odot$ radius $({\varepsilon}\_{\rm max})$. For MBH masses that can disrupt WDs (where ${r\_{\rm t}}> r\_{\rm ISCO} \approx 4 {r\_{\rm s}}$), most of the typically populated orbits are in the full loss cone limit. Most WDs, therefore, are scattered toward MBHs with mean $\Delta J \gtrsim {J\_{\rm lc}}$. In Figure [fig:scatter], we plot the cumulative distribution function of $\dot N\_{\rm lc}$ with respect to $\Delta J/{J\_{\rm lc}}$. This shows that for MBHs with ${M\_{\rm bh}}< 10^5 M\_\odot$, all WDs that reach the loss cone have mean $\Delta J > 0.3 {J\_{\rm lc}}$. This has implications for the ability of objects to undergo multiple passages with $J \approx {J\_{\rm lc}}$ as we discuss in the next section. ![ Fraction of the total loss cone flux for which the mean \Delta J < x, or the cumulative distribution function of loss cone flux with respect to \Delta J. This is computed following equation where we set {\varepsilon}_{\rm max} based on \Delta J and we assume 0.5M_\odot WDs. We plot lines for MBH masses between 10^3 and 10^5 M_\odot. For all black hole masses shown, 100% of the loss cone flux originates from regions in the cluster where \Delta J \gtrsim 0.3 {J_{\rm lc}}, and the bulk of \dot N_{\rm lc} originates from the full loss cone regime where \Delta J \gg {J_{\rm lc}}. Because stars receive substantial per-orbit scatter, it is unlikely that they will complete multiple close passages by the MBH with J\approx{J_{\rm lc}}. ](f4 "fig:") [fig:scatter] WD Capture and Inspiral ======================= Here, we focus on the stellar dynamics of the capture of WDs into tightly bound orbits from which they can transfer mass to the MBH. We show that WDs are placed into tightly bound orbits primarily through binary splitting by the MBH. These orbits then evolve under the influence of tides and gravitational radiation until the WD begins to interact with the MBH. In modeling this process, we adopt aspects of the pioneering work by. Binary Splitting and WD Capture ------------------------------- A key requirement for stars to undergo multiple-passage interactions with a MBH is that the per orbit scatter in angular momentum be sufficiently small that the pericenter distance remains similar between passages. A star in the full loss cone limit $\left(\Delta J \gg {J\_{\rm lc}}\right)$ that survives an encounter is very likely to be scattered away from its closely-plunging orbit before it undergoes another encounter. As we demonstrate in the previous section and in Figure [fig:scatter], most WDs are in regions where the per-orbit scatter is large relative to ${J\_{\rm lc}}$. Instead, in this section, we focus on the disruption of binary stars scattered toward the MBH, which can leave one star tightly bound to the MBH while the other is ejected on a hyperbolic orbit. Disruptions of binary stars lead WDs to be deposited into orbits from which they are hierarchically isolated from the remainder of the stellar system. These hierarchically isolated objects have an orbital semi-major axis that is smaller than the region that typically contains stars, $a < r\_{\bar m\_\*}$. This is the region inside the shaded region of Figure [fig:clscales] as determined by equation. Given such an initial orbit, the WD may undergo many close passages with the MBH without suffering significant scattering from the other cluster stars. To estimate the rates and distribution of captured orbits, we follow the analytic formalism of, which is motivated by results derived from three-body scattering experiments. equations (1)-(5) describe the probability of splitting a binary star as a function of impact parameter, as well as the mean and dispersion in the velocity of the ejected component. We use these expressions to construct a Monte Carlo distribution of binary disruptions. We let the binaries be scattered toward the black hole with rate according to their tidal radius, $r\_{\rm t, bin} = \left({M\_{\rm bh}}/m\_{\rm bin}\right)^{1/3} a$, where $m\_{\rm bin}$ is the mass of the binary. We use WD masses of 0.5*M*⊙ and companion masses of 1*M*⊙ in this example. We let the binaries originate from the same stellar density distribution described in Section [sec:rates], with a radially-constant binary fraction of $f\_{\rm bin}=0.1$. For simplicity, we distribute this population of binaries such that there is an equal number in each decade of semi-major axis *d**N*/*d**a* ∝ *a*− 1, within a range  − 3 < log(*a*/au) <  − 1, although see for a more detailed consideration of the separation distribution of field WD binaries. In our simulations, the most tightly bound binaries contribute most to the population that evolves to transfer mass the the MBH, and thus this limit most strongly affects the normalization of our results. The distribution of pericenter distances is chosen given a full loss cone of binaries, such that $dN/d{r\_{\rm p}}= \text{constant}$, and we ignore the small fraction of events with ${r\_{\rm p}}< a$. A sampling prior is placed based on the likelihood of a particular encounter occurring. This is estimated by integrating the flux of binaries to the loss cone from the portion of the cluster for which the full loss cone regime applies. This rate is $f\_{\rm bin} f\_{\rm WD}$ times the nominal loss-cone flux, ${F\_{\rm lc}}$, integrated from the radius of transition between the full and empty loss cone regimes for a given binary separation outward to $r\_{\rm h}$. This calculation is done following equation with ${\varepsilon}\_{\rm max}$ determined by the binding energy at which $\Delta J = {J\_{\rm lc}}$. Binaries that diffuse toward the black hole gradually from the empty loss cone regime are more likely to undergo a complex series of multiple encounters, the outcome of which is less easily predicted in an analytical formalism. Therefore we do not include the diffusion of binaries toward the black hole from the empty loss cone regime in our estimate. If the captured star has sufficiently small semi-major axis, it will be hierarchically separated from the rest of the steller cluster as the most bound star. The requirement for this condition is that [aheir] a < r|m\*, where $r\_{\bar m\_\*}$ is given by equation and $\bar m\_\*=1M\_\odot$. When selecting orbits that may undergo many passages, we require them to be hierarchically isolated following equation. As can be seen in Figure [fig:clscales], less bound stars (in the full loss cone regime) are subject to major perturbations $\Delta J \gtrsim {J\_{\rm lc}}$ each orbit, and thus could not undergo a multiple passage encounter with the MBH. The most tightly bound star, by contrast, evolves in relative isolation from the rest of the cluster until it is subject to a major disturbance. Another criteria we place on WD-capture orbits is that their gravitational radiation inspiral time be less than their isolation time. A chance close encounter is possible, but more likely is that another star is captured into a similarly tightly bound orbit. This unstable configuration can persist only as long as the stellar orbits avoid intersection. Eventually, this out-of-equilibrium configuration is destroyed, and one or both stars are scattered to more loosely bound orbits (or perhaps even a tidal disruption). Thus, we take the isolation time for a captured WD to be the inverse of the rate at which new binaries are split and deposit WDs into orbits with $a<r\_{\bar m\_\*}$. This is, of course, an approximation and Nbody simulations offer the possibility to determine the time between exchanges of most-bound cluster stars – with the effects of mass segregation almost certainly playing a role. We therefore make a final cut that requires $t\_{\rm insp} < t\_{\rm iso}$, where $t\_{\rm iso}$ is the isolation time as described above, and $t\_{\rm insp}$ is approximated as tinsp, the order of magnitude gravitational wave inspiral time. Gravitational radiation is the relevant loss term (as opposed to, for example, tides) because the orbits limited by this criteria are in the gravitational wave dominated regime of pericenter distance (Figure [fig:terms]). The combination of these limits on the captured WD population ensures that these WDs will interact primarily with the MBH over the course of their orbital inspiral. In the next subsections, we describe how interactions with the MBH transform the captured distribution. Modeling the Evolution of Captured WD orbits -------------------------------------------- ![ Phase space of encounters between WDs and MBHs. Gravitational waves are the dominant orbital evolution term above the solid lines (shown for M_{\rm wd} = 0.2, 0.6, and 1.0M_\odot). Tidal excitation is the dominant orbital-energy loss term for pericenter distances below the solid line. To the right of the dashed lines, the pericenter distance is within the MBH’s Schwarzschild radius and the WD would be swallowed whole. The gray shaded area (valid to the left of the dashed lines) shows the region in which tidal forcing at pericenter is strong enough to produce mass loss. Progressive shadings show the onset of mass loss, and \Delta M = 10%, 50% and 100% of the WD mass, from top to bottom, respectively. ](f5 "fig:") [fig:terms] To model the subsequent evolution of the WD orbits under the influence of both tides and gravitational radiation, we have developed an orbit-averaged code that can rapidly trace these inspirals. The effects of gravitational radiation from the orbit are applied following the prescription of, which is equivalent to the 2.5-order post-Newtonian approximation. Tidal excitation is computed following the model of. shows that the exchange between orbital and oscillation energy depends on the amplitude and phase of the WD’s oscillation as it encounters the MBH. This process leads to a ``memory" of previous interactions, and orbits that evolve chaotically as a given interaction can lead to either a positive or negative change in orbital energy and angular momentum. To model the fiducial change in orbital energy (for an unperturbed star) we follow the prescription given by [dE] Et = 0.7 -1 G mwd2/rwd, where *ϕ* = *η*− 1exp(2.74(*η* − 1)), with the dimensionless variable *η* a parameterization of pericenter distance $\eta^2 = \left({r\_{\rm p}}/r\_{\rm wd}\right)^3 \left( m\_{\rm wd}/{M\_{\rm bh}}\right)$. This expression is a fit to results computed following the method of and, where the overlap of *l* = 2 fundamental oscillation mode with the tidal forcing is integrated along the orbital trajectory. We compared Equation with numerical results derived computing such an integral and found at most a few percent difference as a function of ${r\_{\rm p}}$, and thus we adopt this simplifying form. The orbital energy lost through tides goes into the quadrupole fundamental mode of the WD, which oscillates with an eigenfrequency $\omega\_f \approx 1.445 G M\_{\rm wd} R\_{\rm wd}^{-3}$. The angular momentum exchange with oscillations is related to the energy loss, Lt = 2 Et/ f. Finally, we allow gravitational radiation to carry away oscillation energy from the tidally-excited WD. The luminosity of gravitational radiation scales with the oscillation energy, resulting in a constant decay time of tdec = 1.5 102 ( Mwd / M)-3 ( Rwd /10-2 R)4, which corresponds to $t\_{\rm dec} = 6447\text{ yr}$ for the 0.5*M*⊙ WD example used here. We terminate the evolution when one of several criteria are reached: 1. The pericenter distance is less than the radius at which mass loss occurs $({r\_{\rm p}}< 2 {r\_{\rm t}})$. 2. The accumulated oscillation energy of the WD exceeds its binding energy, Eosc > 3 10-2n G MWD2 RWD with *n* = 3/2. 3. The orbit circularizes. In the code this is when *e* < 0.1. Further evolution is traceable via the gravitational wave inspiral as impulsive excitation of tidal oscillations no longer occurs. These termination criteria correspond roughly to the categories of interactions between WDs and MBHs outlined in Section [sec:sig]. When criteria 1 is met, either a single-passage tidal disruption or multiple passage mass transfer episode can be initiated depending on the orbital eccentricity, *e*. When criteria 2 is met, eccentric-orbit mass transfer ensues. When criteria 3 is met, the WD’s orbit evolves to eventual Roche-lobe overflow. In the next subsection, we use these termination criteria to examine the distribution of orbits at the onset of mass transfer – after they have been transformed by their tidal and gravitational wave driven inspiral. Distributions at the Onset of Mass Transfer ------------------------------------------- ![Orbital distributions of WDs captured from split binaries at the onset of mass transfer to the MBH. Initial distributions are shown filled, final distributions are shown as lines. The upper panel shows the semi-major axis (blue) and pericenter distance (red) along with their corresponding initial distributions. The middle panels show the corresponding eccentricity and orbital period distributions. Orbits are evolved under the influence of gravitational waves and tidal excitation until the l=2 oscillation energy grows to reach the WD binding energy, at which point mass will be stripped from the WD envelope. The lower panel shows the number of orbits the WD survives before the onset of mass transfer, N_{\rm orb}. ](f6 "fig:") [fig:orbdist] In Figure [fig:orbdist], we show how captured 0.5*M*⊙ WDs from split binaries are eventually disrupted by a 105*M*⊙ MBH. The distribution of captured orbits is shown filled, while the final distribution is shown with lines. We find that in all cases, the deposition of orbital energy into tidal oscillation energy of the WD eventually reaches and exceeds the WD binding energy. We terminate our calculations at this point (termination criteria 2) as this represents the onset of tidally-forced mass loss from the WD to the MBH. We find no cases of complete circularization in our simulations (termination criteria 3). Circularization without tidal disruption requires larger initial pericenter distances, where tidal excitation is minimal, and correspondingly longer isolation times in order to allow the gravitational wave inspiral to complete. The circularization and inspiral times are similar under the influence of gravitational radiation. As a result, we find no cases of termination criteria 1 in our isolated evolutions. Instead, when the pericenter distance drops to within the radius at which tides are the dominant Δ*E* (Figure [fig:terms]), the tidal oscillation energy tends to rapidly grow to exceed the WD’s binding energy leading termination criteria 2 to be met. The number of orbits elapsed after capture and before the onset of mass transfer and termination is shown in the lower panel of Figure [fig:orbdist]. Following the onset of mass loss, tidal stripping and eventual disruption over repeated pericenter passages proceeds as described in Section [sec:sig]. We find that most WDs are disrupted with moderate eccentricity and a broad range of orbital periods between 103 and 106 s. The eccentricity distribution shows no nearly-circular orbits, but many orbits with $e<e\_{\rm crit}$, equation. The orbital period is particularly important in the case of eccentric encounters because it sets the timescale for repetition between subsequent pericenter mass-stripping episodes. After the onset of mass transfer, the WD can be expected to survive for at most tens of passages (Section [sec:sig]), thus the repetition time also fixes the range of possible total event durations. Detecting High Energy Signatures of White Dwarf Disruption ========================================================== In this Section, we compare the relative rates and expected luminosities of different classes of transients associated with WD-MBH interactions to discuss their detectability. We show that although rare, WD transients should outnumber their main-sequence counterparts in high energy detections because of their substantially higher peak luminosities. We then calculate that the rate of these events is sufficiently high to allow their detection by instruments such as *Swift*. Main sequence disruptions significantly outnumber WD disruptions and mass transfer interactions. In the upper panel of Figure [fig:rates], we compare the rate of main-sequence star tidal disruptions to that of WD tidal disruptions, and to repeating flares resulting from mass transfer from captured WDs. The disruption rates of stars and binaries are computed by integrating the flux into the loss cone given the cluster properties outlined in [sec:rates], equation. In the case of repeating transients, our disruption rate calculation is supplemented by the Monte Carlo simulation that traces orbits to the onset of mass transfer, described in [sec:inspiral]. To compute the values shown in the Figure, we assume a binary fraction of $f\_{\rm bin}=0.1$, and a WD fraction of $f\_{\rm wd}=0.1$ that applies both within the cluster and within binaries. We represent the remaining stars as main sequence stars that are sun-like, with *R*\* = *R*⊙ and *M*\* = *M*⊙. White dwarf interactions display a cut-off in MBH mass where most events transition from producing flares (if ${r\_{\rm p}}\gtrsim r\_{\rm ISCO} \approx 4 {r\_{\rm s}}$) to being consumption events with little or no electromagnetic signature. For the 0.5*M*⊙ WDs plotted, this cutoff occurs at black hole masses very near 105*M*⊙. Interestingly, the progressive disruption of WDs in eccentric orbits extends to slightly higher MBH masses, since the WD is disrupted gradually, over a number of orbits, without actually penetrating all the way to the tidal radius. These limits in black hole mass are flexible depending on the spin parameter and orientation of the MBH’s spin, since the general relativistic geodesic deviates substantially from a Newtonian trajectory in such deeply-penetrating encounters. If oriented correctly with respect to a maximally rotating Kerr hole, a 0.5*M*⊙ WD could, marginally, be disrupted by a 106*M*⊙ black hole. A realistic spectrum of WD masses would also contribute to softening this transition from flaring to consumption. While the lowest mass WDs are expected to be rare in nuclear clusters due to the effects of mass segregation, they are less dense than their more massive counterparts and could be disrupted by slightly more massive black holes. For example, a 0.1*M*⊙ WD could be disrupted by a 3 × 105*M*⊙ black hole. Although rare, relativistic WD transients significantly outshine their main sequence counterparts. In the lower panel of Figure [fig:rates], we combine the relative rates of different tidal interactions with their expected peak luminosities as a function of MBH mass. We allow the beamed luminosity of all of these jetted transients to trace the mass supply to the black hole, $L\propto \dot M c^2$, as in Figure [fig1] and assume that the degree of collimation is similar for each of the different classes of events. Given a population of MBHs with masses ${M\_{\rm bh}}\lesssim 10^5 M\_\odot$, WD tidal disruptions should be more easily detected than main sequence disruptions. Eccentric disruptions over the course of multiple orbits favor slightly higher black hole masses. Their rarity compared to single-passage WD tidal disruptions implies that although they have similar peak luminosities they represent a fractional contribution to the range of detectible events. This result suggests that WD disruptions, rather than main sequence disruptions, should serve as the most telling signpost to MBHs with masses less than 105*M*⊙. In the following subsection, we discuss how high energy emission can be produced in these transients. ![Rates of different interaction channels per galaxy, \dot N_{\rm gal}, as a function of {M_{\rm bh}}. The black line is the disruption of sun-like stars. Blue is the disruption of WDs, Green is the capture of WDs by split binaries into inspiralling orbits. Top: The disruption of MS stars per galactic center greatly outnumbers that of WDs. WD disruptions peak at lower {M_{\rm bh}} and are consumed whole by MBHs with masses {M_{\rm bh}}\gtrsim 10^5 M_\odot. Repeating flares extend to slightly higher {M_{\rm bh}} because they are disrupted progressively with pericenter distances moderately outside the tidal radius. Bottom: When weighted by their relative luminosities, disruptions of WDs appear more common than disruptions of MS stars. This panel is normalized to the MS value, and assumes similar f_{\rm beam} for all classes of events. Repeating flares are also quite luminous, but their relative rarity implies that they should make only a fractional contribution to the population of relativistic MS disruptions. ](f7 "fig:") [fig:rates] Dissipation and Emission Mechanisms ----------------------------------- Internal dissipation leading to a non-thermal spectrum, to be most effective, must occur when the jet is optically thin. Otherwise it will suffer adiabatic cooling before escaping, and could be thermalized. The comoving density in the jet propagating with a Lorentz factor Γ is $n'\approx L\_{\rm j}/(4\pi r^2 m\_p c^3 \Gamma^2)$, and using the definition of the Thomson optical depth in a continuous outflow $\tau\_{\rm j}\approx n'\sigma\_{\rm T} (r/\Gamma)$ we find the location of the photosphere $$r\_\tau={\dot{M} \sigma\_{\rm T} \over 4\pi m\_p c \Gamma^2}= 10^{13}\left({L\_{\rm j} \over 10^{49}\;{\rm erg/s}}\right)\left({\Gamma \over 10}\right)^{-3}\;{\rm cm}.$$ If the value of Γ at the jet base increases by at least a factor 2 over a timescale *δ**t*, then the later ejecta will catch up and dissipate a significant fraction of their kinetic energy at some distance given by $$r\_\iota\approx c \delta t \Gamma^2= 3 \times 10^{13} \left({\delta t \over 10\;{\rm s}} \right) \left({\Gamma \over 10} \right)^2\;{\rm cm}.$$ Outside *r**τ*, where radiation has decoupled from the plasma, the relativistic internal motions in the comoving frame will lead to shocks in the gas. This implies the following lower limit on $$\Gamma \gtrsim \Gamma\_{\rm c}= 7.5 \left({L\_{\rm j} \over 10^{49}\;{\rm erg/s}}\right)^{1/5}\left({\delta t \over 10\;{\rm s}}\right)^{-1/5}.$$ When $\Gamma \leq \Gamma\_{\rm c}$, the dissipation occurs when the outflow is optically thick and an almost thermal transient is expected to emanate from the jet’s photosphere. When $\Gamma \geq \Gamma\_{\rm c}$, dissipation takes place when the jet is optically thin. In the presence of turbulent magnetic fields built up behind the internal shocks, the accelerated electrons within this region can produce a synchrotron power-law radiation spectrum similar to that observed in GRBs. The resulting non-thermal flare from an internal shock collision will arrive at a detector at a time $\Delta t\_{\rm obs} \approx r\_\iota /(c \Gamma^2) \approx \delta t$. Thus, the relative time of flare variability at the detector will have a close one-to-one relationship with the time variability within the jet. Alternatively, high-energy emission can be produced as the jet propagates through the accretion disk region while interacting with very dense soft photon emission with typical energy $\Theta\_{\rm disk}=k T\_{\rm disk} /(m\_e c^2)$. A fraction $\approx \min(1,\tau\_{\rm j})$ of the photons are scattered by the inverse Compton effect to energies $\approx 2\Gamma^2\Theta\_{\rm disk}$, where we have assumed that a constant Γ has been attained. Each seed photon is boosted by  ≈ Γ2 in frequency, yielding a boosted accretion disk spectrum. The observed variability time scale, in this case, is primarily related to changes in the accretion disk luminosity. Due to relativistic aberration, the scattered photons propagate in a narrow 1/Γ beam. The Compton drag process can be very efficient in extracting energy from the jet and can limit its maximum speed of expansion so that $\Gamma^2 L\_{\rm Edd} \lesssim L\_{\rm j}$. Typical bulk Lorentz factors range from Γ ≈ 10 in quasars to Γ > 102 in GRBs. Transients that have so far been associated with tidal disruptions of stars have been mildly relativistic, with typical Lorentz factors of a few. In the case of *Swift* J1644+57, and inferred Γ ≈ 2.2. find the that $\Gamma \gtrsim 2.1$ is required in *Swift* J2058+05. In both cases, the observed spectrum can be explain by both internal dissipation and Compton drag. Event Rates ----------- We can estimate the detectable event rate by considering the space density of dwarf galaxies that might host these black holes. We estimate that a lower limit on the number density of dwarf galaxies is  ∼ 107Gpc− 3 although recent work has shown that it may be up to a factor of  ∼ 30 higher. If we assume that the MBH occupation fraction of these galaxies is $f\_{\rm MBH}$, and adopt a per MBH rate of $\dot N\_{\rm gal} \sim 10^{-6} \text{ yr}^{-1}$, then the rate of WD tidal disruptions per volume is $\dot N\_{\rm vol} \sim 10 f\_{\rm MBH} \text{ Gpc}^{-3} \text{ yr}^{-1}$. Note that this rate is approximately a factor of 100 smaller than the rate estimate of, because they adopt a higher $\dot N\_{\rm gal}$ that is derived by combining the tidal disruption rate normalization of an isothermal sphere (*ν*\* ∝ *r*− 2) with the fraction of disrupted WDs from N-body simulations of globular clusters. Considering their high luminosity, these transients may be detected out to cosmological distances. As an example, the annual event rate for transients with *z* < 1 is $ \dot N\_{z<1} \sim 1500 f\_{\rm MBH} \text{ yr}^{-1}, $ where we have used the fact that in an $\Omega\_{\rm m}=0.3$, *H*0 = 70 cosmology, *z* < 1 encloses a comoving volume of approximately 150 Gpc3. Because the emission is beamed, only a fraction $f\_{\rm beam}$ are detectable from our perspective due to the random orientation of the jet column. Thus we arrive at a potentially observable event rate of Nz<1, obs ~1500 fbeam fMBH -1. If $f\_{\rm beam} = 0.1$, then of order $150 f\_{\rm MBH} $ events are theoretically detectable per year. The fraction of these that would have triggered *Swift* in the past is still not completely understood. From Figure [fig2], typical peak timescales are thousands of seconds. suggest that <10% of exposures have sufficiently long-duration trigger applied to detect a longer event duration event like a WD-MBH interaction. Assuming that 10% of the theoretically observable events are found $\left(f\_{\rm Swift}=0.1\right)$, that leaves a *Swift* rate of $\dot N\_{\rm Swift} \sim 15 f\_{\rm MBH} \text{ yr}^{-1}$. This rate is low compared to the typical GRB rate detected by *Swift*, but potentially high enough to build a sample of events over a several year observing window with some long-cadence observations tailored to trigger on transients of this duration. Discussion ========== The MBH mass function --------------------- For MBH masses of $\lesssim 10^5 M\_\odot$, jetted transients associated with WD tidal disruptions are extremely luminous and fuel the black hole above the Eddington limit for nearly a year. These events offer a promising observational signature of quiescent black holes in this lower mass range due to their high luminosities. Unbeamed emission from the accretion flow is roughly Eddington-limited, and therefore will be at least three orders of magnitude fainter than the beamed emission. While previous *Swift* trigger criteria catered to much shorter-duration events, with increasing focus on long duration events recently, the fraction of transients that would trigger *Swift*, $f\_{\rm Swift}$, is likely to increase or at least become better constrained in future observations. With a *Swift* detection rate of order of $\dot N\_{\rm Swift} \sim 15 f\_{\rm MBH} \left(f\_{\rm Swift}/0.1\right) \left(f\_{\rm beam}/0.1\right) \text{ yr}^{-1}$, it should be possible to constrain the occupation fraction, $f\_{\rm MBH}$. More than one event per year would result if $f\_{\rm MBH}\gtrsim0.1$, and thus it is most likely possible to constrain $f\_{\rm MBH}$ to that level or larger. In placing such a limit, there remains some degeneracy, for example, $f\_{\rm MBH}= 0.1$ in the above expression could either mean that 10% of dense nuclei harbor MBHs, or that 10% of MBHs are surrounded by stellar systems. Even so, with knowledge of the expected signatures, the detection or non-detection of WD-disruption transients can place interesting constraints on the population of MBHs in this mass range with current facilities. Non-detections of events, therefore, would argue against the presence of MBHs or the presence of stellar cusps for this mass range. Ultra-long GRBs as WD Tidal Disruptions? ---------------------------------------- There is tantalizing evidence that tidal disruptions of WDs by MBHs have already been detected, under the guise of ultra-long GRBs. elaborate on the properties of several members of the newly emerging class of ultra-long GRBs: GRB 101225A, GRB 111209A, and GRB 121027A. All of these GRBs reach peak X-ray luminosities of  ∼ 1049erg s− 1 and non-thermal spectra reminiscent of relativistically beamed emission. At times greater than 104 seconds all of these bursts exhibit luminosities that are more than a factor of a hundred higher than typical long GRBs. Astrometrically, the two bursts for which data is available (GRB 101225A are GRB 111209A) are coincident with their host galaxy’s nuclear regions, suggesting compatibility with the idea that these transients originated through interaction with a central MBH. However, it is worth noting that if these events are associated with dwarf or satellite galaxies, they might appear offset from a more luminous central galaxy despite being coincident with the central regions of a fainter host, a clear-cut example being the transient source HLX-1. discuss a long-duration x-ray transient, XRT 000519, with a faint optical counterpart and quasi-periodic precursor emission. The source is located near M86. If it is at the distance of M86, the luminosity is similar to the Eddington limit of a 104*M*⊙ MBH. If it is, instead, a background object, the emission could be beamed and have a luminosity of up to  ∼ 1048 erg s− 1. Might such events be tidal disruptions of WDs by MBHs? Further evidence is certainly needed to ascertain the origin of these bursts, but the properties, including luminosities and decay timescales are in line with those we have reviewed for disruptions of WDs by MBHs. Figure [fig:phasespace] augments the phase space diagram of, showing characteristic luminosities and decay times for single-passage tidal disruptions of WDs and MBHs (blue shaded region). In Figure [fig:phasespace], we plot the peak timescale and luminosity of peak for the disruptions, for MBH masses from 103 to 105*M*⊙, and WD masses of 0.25 - 1*M*⊙. Other relevant timescales include $t\_{\rm Edd}$, the time above the MBH’s Eddington limit, plotted in Figure [fig2], and *t*90, as plotted for the GRB and soft gamma-ray repeater (SGR) sources, which is a factor of  ≈ 30 greater than $t\_{\rm peak}$. The peaks in the lightcurve of *Swift* J1644+57 have been associated with periodic spikes in the mass supply from a gradually disrupting WD in an eccentric orbit by. The suggested repetition time is *P* ∼ 5 × 104s. In our ${M\_{\rm bh}}=10^5 M\_\odot$, $M\_{\rm wd} = 0.5M\_\odot$ example of Figure [fig:orbdist],  ∼ 40% of the captured population initiates mass transfer with orbital periods $10^4{\rm s}<P<10^5{\rm s}$, thus, reproducing this repetition time does seem to be possible. Our inspiral simulations suggest that such repeating encounters are approximately an order of magnitude less common than their single-passage WD-dispruption counterparts. More importantly for determining the origin of *Swift* J1644+57, by comparison to Figure [fig:rates] we expect that repeating encounters with these sorts of repetition times would be detected at  ∼ 10% the rate of jetted main-sequence disruptions from these same MBH masses. However, single-passage WD disruptions, repeating encounters, and main-sequence disruptions each originate from different range of characteristic MBH masses (as shown in the lower panel of Figure [fig:rates]). If there is a strong cutoff in the low end of the MBH mass function we might expect this to truncate one class of events but not another. One remaining mystery is the shape of the lightcurve of *Swift* J1644+57 during the plateau phase. Variability could originate in modulated mass transfer or from the accretion flow and jet column itself, as described in Section [sec:detection],. If the jetted luminosity traces the mass accretion rate, $L \propto \dot M c^2$, as we have assumed here, we would expect the peaks in *Swift* J1644+57’s lightcurve to trace the exponentiating mass loss from the WD – instead of the observed plateau. If, however, this simplifying assumption proves incorrect (or incomplete) it does appear to be possible to produce events with plateau and super-Eddington timescales comparable to *Swift* J1644+57 with multi-passage disruptions of WDs. Detailed simulations of disk-assembly in multi-passage encounters offer perhaps the best hope to further constrain the electromagnetic signatures of these events. In WD disruptions, the jetted component is significantly more luminous than the Eddington-limited accretion disk component, and thus we have pursued the beamed high-energy signatures of these events in this paper. With the advent of LSST, however, detecting the corresponding disk emission signatures may become more promising. In a fraction of events that pass well within the tidal radius, a detonation might be ignited upon compression of the WD. In this scenario, maximum tidal compression can cause the shocked white dwarf material to exceed the threshold for pycnonuclear reactions so that thermonuclear runaway ensues. The critical *β* appears to be $\gtrsim 3$, so perhaps $\lesssim 1/3$ of the high-energy transients plotted in Figure [fig:phasespace] are expected to be accompanied by an optical counterpart in the form of an atypical type I supernova. Robustly separating ultra-long GRBs into core collapse and tidal disruption alternatives remains a challange. The central engines of ultra-long GRBs are essentially masked by high-energy emission with largely featureless spectra, revealing little more than the basic energetics of the relativistic outflow. Several distinguishing characteristics are, however, available. Variability timescales should be different (as they would be associated with compact objects of very different mass, see Section [sec:detection]). Significantly, the evolution of the prompt and afterglow emission at high energy and at radio wavelengths would be expected to deviate from that of a canonical impulsive blast wave in tidal disruption events due to long-term energy injection from the central engine. Disk emission, if detected in optical or UV observations, would present strong evidence of tidal disruption origin. While the bulk of WD disruptions would lack a coincident supernova, a minority would be accompanied by atypical type I supernovae. Optical signatures of a core-collapse event are uncertain, perhaps involving emission from the cocoon, accretion disk wind, or type IIP-like lightcurves but the detection of hydrogen lines in an accompanying supernova spectrum would point to a core-collapse origin. emphasize that one way to tackle these observational challenges in the near term is looking statistically at the astrometric positions of ultra-long bursts and whether they coincide with galactic centers. ![Luminosity versus duration adapted from. The WD+MBH region is the region of peak timescale and luminosity for a range of WD-MBH single-passage disruptive encounters. In the shaded region, MBH masses range from 10^3 to 10^5 M_\odot, while the WD masses plotted are 0.25-1M_\odot. For the GRB and SGR sources, t_{90} is plotted. If L\propto \dot M in the WD disruptions, t_{90} is a factor \approx 30 greater than t_{\rm peak}. The timescales and durations of WD-MBH interactions are well removed from typical long GRBs, but coincide with those of the emerging class of ultra-long GRBs, such as GRB 101225A, GRB 111209A, and GRB 121027A.](f8 "fig:") [fig:phasespace] Prospects for simultaneous Electromagnetic and Gravitational Wave Detection --------------------------------------------------------------------------- A primary source of interest in WD-MBH interactions has been their potential as sources of both electromagnetic and gravitational wave emission, especially as these events, if observed, would constrain the MBH mass function at low masses. Chirp waveforms have been computed for single, disruptive passages and should be detectible only if the source is within  ∼ 1Mpc given a 105*M*⊙ MBH. Potentially less restrictive are longer-lived periodic signals. The longest-lived transient, and that with the most uniform periodicity, would occur if a WD were overflowing its Roche lobe and transferring mass to the MBH from a circular orbit. However, we see no such circularization events in our orbit evolution simulations. Instead, the build-up of tidal oscillation energy in the WD leads to its disruption before the orbit circularizes, even in cases where gravitational radiation is the dominant term in the orbit evolution. In these eccentric cases, the gravitational wave signature reminiscent would be of a series of roughly-periodically spaced chirps associated with the pericenter passages. It is worth noting that these passages should not be strictly periodic because the orbital period wanders chaotically as successive passages pump energy into and out of the WD oscillations depending on the oscillation phase with which it encounters the MBH. Summary ======= In this paper we have discussed the role that orbital dynamics plays in shaping the transients that result from interactions between WDs and MBHs. WDs most commonly encounter black holes in single passages. Multiple passages from an eccentric orbit are about an order of magnitude less common, but would have characteristic repetition timescales of 104 − 106 s. The relative paucity of repeating events in our calculations, combined with the small range of MBH masses in which they appear to occur, suggests that the likelihood that *Swift* J1644+57 could form via the repeating disruption channel, as outlined shortly after the event by, is $\lesssim 10$%. We find no instances of mass transfer from a circular orbit. The consequence of these encounters is a mass supply that greatly exceeds the MBH’s Eddington limit. We expect the resulting thick accretion flow should amplify a poloidal magnetic field and lunch a jet. The relativistically beamed emission from these events may be more readily detectable than beamed emission from disruptions of main sequence stars. We therefore argue that the best prospects to constraining the lower-mass end of the MBH mass function lie in searching for the high-energy signatures of WD disruption events. The possibility of collecting a sample of such events in coming years with *Swift* appears promising. The detection or non-detection of these transients should offer strong constraints on the population of MBHs with masses ${M\_{\rm bh}}\lesssim 10^5 M\_\odot$ and the nature of the stellar clusters that surround them. 137 natexlab#1#1 Alexander, T. 2005, Physics Reports, 419, 65 Amaro-Seoane, P., Gair, J. R., Freitag, M., Miller, M. C., Mandel, I., Cutler, C. J., & Babak, S. 2005, Class. Quantum Grav., 631, R113 Amaro-Seoane, P., Miller, M. C., & Kennedy, G. F. 2012,, 425, 2401 Antonini, F., Faber, J., Gualandris, A., & Merritt, D. 2010,, 713, 90 Antonini, F., Lombardi, J. C. J., & Merritt, D. 2011,, 731, 128 Bahcall, J. N., & Wolf, R. A. 1976,, 209, 214 —. 1977,, 216, 883 Barack, L., & Cutler, C. 2004, Phys. Rev. D, 69, 082005 Baumgardt, H., Hopman, C., Portegies Zwart, S., & Makino, J. 2006,, 372, 467 Baumgardt, H., Makino, J., & Ebisuzaki, T. 2004,, 613, 1133 —. 2004,, 613, 1143 Begelman, M., Blandford, R., & Rees, M. 1984, Rev. Mod. Phys., 56, 255 Beloborodov, A. M. 1999, Hot and Cool: Bridging Gaps in Massive Star Evolution, 161, 295 Berger, E., Zauderer, A., Pooley, G. G., Soderberg, A. M., Sari, R., Brunthaler, A., & Bietenholz, M. F. 2012,, 748, 36 Berry, C. P. L., & Gair, J. R. 2013,, 433, 3572 Binney, J., & Tremaine, S. 2008, Galactic Dynamics: Second Edition (Princeton University Press) Blanton, M. R., Lupton, R. H., Schlegel, D. J., Strauss, M. A., Brinkmann, J., Fukugita, M., & Loveday, J. 2005,, 631, 208 Bloom, J. S., et al. 2011, Science, 333, 203 Boer, M., Gendre, B., & Stratta, G. 2013, eprint arXiv:1310.4944 Bromley, B. C., Kenyon, S. J., Geller, M. J., Barcikowski, E., Brown, W. R., & Kurtz, M. J. 2006,, 653, 1194 Burrows, D. N., et al. 2011,, 476, 421 Byun, Y. I., et al. 1996,, 111, 1889 Cannizzo, J. K., & Gehrels, N. 2009,, 700, 1047 Cannizzo, J. K., Troja, E., & Lodato, G. 2011,, 742, 32 Carter, B., & Luminet, J. P. 1982,, 296, 211 Cenko, S. B., et al. 2012,, 753, 77 Cheng, R. M., & Evans, C. R. 2013, Phys. Rev. D, 87, 104010 Cohn, H., & Kulsrud, R. M. 1978,, 226, 1087 Coughlin, E. R., & Begelman, M. C. 2014,, 781, 82 Dai, L., & Blandford, R. 2013,, 434, 2948 Dai, L., Blandford, R. D., & Eggleton, P. P. 2013,, 434, 2940 Dai, L., Escala, A., & Coppi, P. 2013,, 775, L9 De Colle, F., Guillochon, J., Naiman, J., & Ramirez-Ruiz, E. 2012,, 760, 103 de Freitas Pacheco, J. A., Filloux, C., & Regimbau, T. 2006, Phys. Rev. D, 74, 023001 Faber, S. M., et al. 1997,, 114, 1771 Farrell, S. A., Webb, N. A., Barret, D., Godet, O., & Rodrigues, J. M. 2009,, 460, 73 Ferrarese, L., & Merritt, D. 2000,, 539, L9 Frank, J. 1978,, 184, 87 Freitag, M. 2003,, 583, L21 Gebhardt, K., et al. 2000,, 539, L13 Gehrels, N., Ramirez-Ruiz, E., & Fox, D. B. 2009,, 47, 567 Gendre, B., et al. 2013,, 766, 30 Gezari, S., et al. 2009,, 698, 1367 Giannios, D., & Metzger, B. D. 2011,, 416, 2102 Gill, M., Trenti, M., Miller, M. C., van der Marel, R., Hamilton, D., & Stiavelli, M. 2008,, 686, 303 Goodman, J. 1986,, 308, L47 Graham, A. W., & Scott, N. 2013,, 764, 151 Greene, J. E., & Ho, L. C. 2004,, 610, 722 —. 2007,, 670, 92 —. 2007,, 667, 131 Guillochon, J., Manukian, H., & Ramirez-Ruiz, E. 2014,, 783, 23 Guillochon, J., & Ramirez-Ruiz, E. 2013,, 767, 25 Guillochon, J., Ramirez-Ruiz, E., & Lin, D. 2011,, 732, 74 Guillochon, J., Ramirez-Ruiz, E., Rosswog, S., & Kasen, D. 2009,, 705, 844 Gültekin, K., Miller, M. C., & Hamilton, D. P. 2004,, 616, 221 Gültekin, K., et al. 2009,, 698, 198 Haas, R., Shcherbakov, R. V., Bode, T., & Laguna, P. 2012,, 749, 117 Hayasaki, K., Stone, N., & Loeb, A. 2013,, 434, 909 Hills, J. G. 1975,, 254, 295 —. 1988,, 331, 687 Hils, D., & Bender, P. L. 1995,, 445, L7 Holcomb, C., Guillochon, J., De Colle, F., & Ramirez-Ruiz, E. 2013,, 771, 14 Hopman, C., & Alexander, T. 2005,, 629, 362 Ivanov, P. B., & Papaloizou, J. C. B. 2007,, 476, 121 Jonker, P. G., et al. 2013,, 779, 14 Kashiyama, K., Nakauchi, D., Suwa, Y., Yajima, H., & Nakamura, T. 2013,, 770, 8 Kelly, B. C., & Merloni, A. 2012, Advances in Astronomy, 2012, 1 Kepler, S. O., Kleinman, S. J., Nitta, A., Koester, D., Castanheira, B. G., Giovannini, O., Costa, A. F. M., & Althaus, L. 2007,, 375, 1315 Kesden, M. 2012, Phys. Rev. D, 85 Kormendy, J., & Ho, L. C. 2013,, 51, 511 Krolik, J. H., & Piran, T. 2011,, 743, 134 —. 2012,, 749, 92 Lauer, T. R., et al. 1995,, 110, 2622 Lee, H. M., & Ostriker, J. P. 1986,, 310, 176 Levan, A. J., et al. 2011, Science, 333, 199 —. 2014,, 781, 13 Lien, A., Sakamoto, T., Gehrels, N., Palmer, D. M., Barthelmy, S. D., Graziani, C., & Cannizzo, J. K. 2014,, 783, 24 Lightman, A. P., & Shapiro, S. L. 1977,, 211, 244 Lithwick, Y., & Sari, R. 2001,, 555, 540 Lopez-Camara, D., Lee, W. H., & Ramirez-Ruiz, E. 2009,, 692, 804 Luminet, J. P., & Pichon, B. 1989,, 209, 103 MacFadyen, A. I., & Woosley, S. E. 1999,, 524, 262 MacLeod, M., Guillochon, J., & Ramirez-Ruiz, E. 2012,, 757, 134 MacLeod, M., Ramirez-Ruiz, E., Grady, S., & Guillochon, J. 2013,, 777, 133 Magorrian, J., & Tremaine, S. 1999,, 309, 447 Magorrian, J., et al. 1998,, 115, 2285 Maoz, D., Badenes, C., & Bickerton, S. J. 2012,, 751, 143 Mardling, R. A. 1995,, 450, 722 —. 1995,, 450, 732 Marsh, T. R., Nelemans, G., & Steeghs, D. 2004,, 350, 113 Merritt, D. 2013, Dynamics and Evolution of Galactic Nuclei (Princeton University Press) Merritt, D., Schnittman, J. D., & Komossa, S. 2009,, 699, 1690 Meszaros, P., Ramirez-Ruiz, E., Rees, M. J., & Zhang, B. 2002,, 578, 812 Miller, B. P., Gallo, E., Greene, J. E., Kelly, B. C., Treu, T., Woo, J.-H., & Baldassare, V. 2014, eprint arXiv:1403.4246 Miller, M. C. 2005,, 626, L41 Nakauchi, D., Kashiyama, K., Suwa, Y., & Nakamura, T. 2013,, 778, 67 O’Leary, R. M., & Loeb, A. 2009,, 395, 781 —. 2012,, 421, 2737 Paschalidis, V., MacLeod, M., Baumgarte, T. W., & Shapiro, S. L. 2009, Phys. Rev. D, 80, 24006 Peters, P. 1964, Phys. Rev., 136, B1224 Phinney, E. S. 1982,, 198, 1109 Pilla, R. P., & Loeb, A. 1998,, 494, L167 Piro, L., et al. 2014, eprint arXiv:1405.2897 Press, W. H., & Teukolsky, S. A. 1977,, 213, 183 Pruet, J., Thompson, T. A., & Hoffman, R. D. 2004,, 606, 1006 Ramirez-Ruiz, E. 2004,, 349, L38 —. 2005, Monthly Notices RAS Letters, 363, L61 Ramirez-Ruiz, E., Celotti, A., & Rees, M. J. 2002,, 337, 1349 Ramirez-Ruiz, E., & Lloyd-Ronning, N. M. 2002, New Astronomy, 7, 197 Ramirez-Ruiz, E., & Rosswog, S. 2009,, 697, L77 Rashkov, V., & Madau, P. 2013,, 780, 187 Rees, M. J. 1988,, 333, 523 Rees, M. J., & Meszaros, P. 1994,, 430, L93 Reines, A. E., Greene, J. E., & Geha, M. 2013,, 775, 116 Reines, A. E., Sivakoff, G. R., Johnson, K. E., & Brogan, C. L. 2011,, 470, 66 Richstone, D., et al. 1998,, 395, 14 Rosswog, S., Ramirez-Ruiz, E., & Hix, W. R. 2008,, 679, 1385 —. 2009,, 695, 404 Rosswog, S., Ramirez-Ruiz, E., Hix, W. R., & Dan, M. 2008, Computer Physics Communications, 179, 184 Saxton, C. J., Soria, R., Wu, K., & Kuin, N. P. M. 2012,, 422, 1625 Sesana, A., Vecchio, A., Eracleous, M., & Sigurdsson, S. 2008,, 391, 718 Seth, A. C. 2010,, 725, 670 Shakura, N. I., & Sunyaev, R. A. 1973,, 24, 337 Shcherbakov, R. V., Pe’er, A., Reynolds, C. S., Haas, R., Bode, T., & Laguna, P. 2013,, 769, 85 Shen, R.-F., & Matzner, C. D. 2013, eprint arXiv:1305.5570 Stratta, G., et al. 2013,, 779, 66 Syer, D., & Ulmer, A. 1999,, 306, 35 Tremaine, S., et al. 2002,, 574, 740 Wang, J., & Merritt, D. 2004,, 600, 149 Wang, X., & Loeb, A. 2014, eprint arXiv:1402.5975 Wheeler, J. A. 1966,, 4, 393 Wright, E. L. 2006,, 118, 1711 Yu, Y. B., Wu, X. F., Huang, Y. F., Coward, D. M., Stratta, G., Gendre, B., & Howell, E. J. 2013, eprint arXiv:1312.0794 Zalamea, I., Menou, K., & Beloborodov, A. M. 2010,, 409, L25 Zauderer, B. A., Berger, E., Margutti, R., Pooley, G. G., Sari, R., Soderberg, A. M., Brunthaler, A., & Bietenholz, M. F. 2013,, 767, 152 Zauderer, B. A., et al. 2011,, 476, 425 Zhang, B.-B., Zhang, B., Murase, K., Connaughton, V., & Briggs, M. S. 2014,, 787, 66
arxiv_0000085
Cutoff for the Glauber dynamics of the lattice free field ========================================================= The Gaussian Free Field (GFF) is a canonical random surface in probability theory generalizing Brownian motion to higher dimensions. In two dimensions, it is critical in several senses, and is expected to be the universal scaling limit of a host of random surface models in statistical physics. It also arises naturally as the stationary solution to the stochastic heat equation with additive noise. Focusing on the dynamical aspects of the corresponding universality class, we study the mixing time, i.e., the rate of convergence to stationarity, for the canonical prelimiting object, namely the discrete Gaussian free field (DGFF), evolving along the (heat-bath) Glauber dynamics. While there have been significant breakthroughs made in the study of cutoff for Glauber dynamics of random curves, analogous sharp mixing bounds for random surface evolutions have remained elusive. In this direction, we establish that on a box of side-length *n* in $\mathbb Z^2$, when started out of equilibrium, the Glauber dynamics for the DGFF exhibit cutoff at time $\frac{2}{\pi^2}n^2 \log n$. Introduction ============ Stochastic interfaces and naturally associated dynamics are mathematical models for various natural phenomena. Classical examples include ballistic deposition, crystal growth, and the evolution of boundaries separating thermodynamic phases. Often, the fluctuation theory of the interface is expected to be governed by one of a few canonical stochastic partial differential equations (SPDEs). Key details of the models determine the relevant SPDE, and in turn the relevant universality class, and in particular the Gaussian or non-Gaussian nature of the fluctuations. This paper is concerned with the dynamical evolution of the Gaussian Free Field (GFF), perhaps the most canonical model of a random surface. Just as Brownian motion (simply the GFF in 1D) is the universal scaling limit of diffusive random curves, the GFF in dimension *d* = 2 is expected to arise as the universal scaling limit of a large class of random surfaces in lattice statistical mechanics. Before discussing examples of the latter and their associated dynamics that have been the object of intense investigation over the years, let us mention that there are various other perspectives to which the GFF is central including in conformal field theory, and in Coulomb gas theory and the study of log-correlated fields: see the surveys  for more on these angles. A canonical dynamical evolution of the GFF is best described by a *height function* $h(x,t):\Lambda \times \mathbb R\_+ \to \mathbb R$ for some domain $\Lambda \subset \mathbb R^d$, that is evolving stochastically with a local smoothening force; this is encoded by the stochastic heat equation with additive noise (SHE), ∂*t**h*(*x*, *t*) = Δ*x**h*(*x*, *t*) + *η*(*x*, *t*) ,  where the (spatial) Laplacian Δ*x* is the local smoothening operator and *η* is a space-time white noise. The stochastic heat equation is a widely studied SPDE, and it is a classical fact that the GFF is its stationary solution (see e.g. ). In two spatial dimensions, the SHE and its stationary solution, the GFF, form the central member of what is sometimes referred to as the Edwards–Wilkinson (EW) universality class of surface growth, named after a physical model for granular surface aggregation introduced in . The EW universality class is expected to encompass both the equilibrium and off-equilibrium fluctuations of a host of random surface models arising in statistical physics; by way of example, this class is expected to include the Ginzburg–Landau ∇*φ* (GL) interface model, the height functions of the dimer and six-vertex models, the solid-on-solid model at high temperatures, and the interface between phases in the 3D Ising model in its roughening regime. In the case of the dimer model and the GL model the equilibrium fluctuations are known to converge to the GFF (e.g.,  and  respectively), and in some of the other cases, it is at least known that the variance of the fluctuations is of the same order as that of the GFF . In light of the above, it is of much interest to study dynamical aspects of this universality class. We will focus on the central lattice model identifying this class: the 2D *discrete Gaussian free field* (DGFF). This is a discretization in space of the GFF, where *h*(*x*, *t*) lives on $\Lambda\_n\times \mathbb R\_+$, where Λ*n* is a box of side-length *n* in the integer lattice $\mathbb Z^2$. While there are various examples of natural discrete dynamic evolutions mimicking  with the Laplacian replaced by the discrete Laplacian, and *η* living on $\Lambda\_n \times \mathbb R\_+$, we will consider the heat-bath Glauber dynamics, also known as the Gibbs sampler, for the DGFF. This is the canonical local update Markov process where at any time, the height at a site of Λ*n* is updated according to , i.e., taking value given by the average of its neighboring values plus an independent standard Gaussian. Of particular interest is the analysis of this dynamics when the surface is initialized far from equilibrium, say at an atypically high height of *n*. Under a macroscopic rescaling, the random surface evolution becomes non-random admitting a hydrodynamic limit to a non-linear parabolic PDE: such results have been shown for certain dynamics of the form of  in  for the GL model (of which the DGFF is a special case), in  for the height function of the dimer model, and in  for the zero-temperature Ising interface in $\mathbb Z^2$. However, one cannot hope to extract information about convergence of the random process to equilibrium from the deterministic large-scale behavior alone. This finer resolution is the focus of this paper. [b].31 [scale = 1] at (0,0) ; at (0,-3.5) ; [b].31 [scale = 1] at (0,0) ; at (0,-3.5) ; [b].31 [scale = 1] at (0,0) ; at (0,-3.5) ; [fig:DGFF-Glauber-evolution] Setting up the right framework to study such questions naturally brings us into the world of mixing times of Markov processes. This field has been the subject of intense investigation over the last several decades, and is too large to do justice to in this introduction, so we simply mention that much attention has been paid to mixing times of Glauber dynamics for lattice statistical mechanics models, including some of the aforementioned models belonging to the EW universality class. Often in high-dimensional Markov chains, a remarkable phenomenon, known as *cutoff*, occurs where the total-variation distance of the system to equilibrium dramatically drops from 1 to 0 in a window of time that is smaller order than the mixing time itself. First identified in seminal works for the random walk on the hypercube and the symmetric group (see ), establishing cutoff and its location has become the ``best possible" result in the analysis of fast mixing Markov chains. We refer to the texts  for a detailed account of the literature on the cutoff phenomenon. In our context,  developed a general framework for showing that the mixing time for Glauber dynamics of random curves of length *n* (i.e., *d* = 1) following  is of the order of *n*2log*n*. Subsequently,  established cutoff at $\frac{1}{\pi^2} n^2\log n$ for the key example of the height function associated to the simple symmetric exclusion process on a line segment of length *n*. This has since been extended to a general theory of cutoff for dynamics of random curves following  in . When *d* ≥ 2, however, the approaches followed in these papers face significant barriers, and sharp mixing time results for random surface evolutions are very limited. This state of affairs motivates the present work, where we consider the 2D DGFF and establish cutoff for its Glauber dynamics. To the best of our knowledge, this is the first cutoff result for a random surface Glauber dynamics in any dimension *d* > 1. Postponing further discussion, let us now formalize our main result. Main result ----------- Let $\Lambda\_n = {\llbracket}1,n-1\rrb^2 = \{1,...,n-1\}^2$ be the 2-dimensional lattice cube of side-length *n* with nearest neighbor edges, and let $\partial \Lambda\_n = \{v\in \mathbb Z^2 \setminus \Lambda\_n: d(v,\Lambda\_n) = 1\}$ be its (outer) boundary. Let $\overline \Lambda\_n = \Lambda\_n \cup \partial \Lambda\_n = {\llbracket}0,n\rrb^2$ and write *x* ∼ *y* if *d*(*x*, *y*) = 1. For a fixed boundary condition $\eta: {\partial \Lambda\_n} \to \mathbb R$, the DGFF on Λ*n* is the Gaussian process ${{\ensuremath{\vec{h}}}}=(h(x))\_{x\in \overline \Lambda\_n}$ with *h*(*x*) = *η*(*x*) for all *x* ∈ ∂Λ*n*,  whose density *π**n* is proportional to $$\exp\Big(-{\frac 1{8}}\sum\_{x,y\in \overline \Lambda\_n: x\sim y} (h(x)-h(y))^2\Big)\,.$$ For convenience take *η* to be identically zero (to see that this does not lead to any loss of generality cf. Remark [rem:bc]), and view any *h⃗* taking value zero on ∂Λ*n* simply as a function on Λ*n*. The continuous-time Glauber dynamics for the DGFF, denoted *h⃗*(*t*) = (*h*(*x*, *t*))*x* ∈ Λ*n* is defined as follows. Assign each *x* ∈ Λ*n* a rate-1 Poisson clock; if the clock at *x* rings at time *t*, leave *h* unchanged everywhere except at *x* where it updates to a unit variance Gaussian whose mean is equal to the average of *h*(*y*, *t*) for $y\in \overline \Lambda\_n: y\sim x$ (see Definition [def:surface-RW-coupling] for a formal definition). Next, we formally introduce the notions of mixing time and cutoff. Recall the total variation distance $\|\mu - \nu\|\_{{\textsc{tv}}}$ between probability measures *μ* and *ν* defined on the same space. Let *P**t**δ**h⃗*(0) be the law of *h⃗*(*t*), when initialized from some *h⃗*(0) taking value zero on ∂Λ*n*. Note that in order for the mixing time (the time it takes for the total variation distance to *π**n* to be less than 1/4) to be finite, we need to restrict our attention to initializations of finite height; in analogy with interfaces from dimer and Ising models, a natural such choice is *h⃗*(0) such that ∥*h⃗*(0)∥∞ ≤ *n*. Define the (worst case) total-variation distance of the DGFF Glauber dynamics at time *t* to be $$d\_{{{\textsc{tv}}}}^{(n)}(t):=\max\_{{{\ensuremath{\vec{h}}}}(0): \|{{\ensuremath{\vec{h}}}}(0)\|\_\infty \le n} \| P\_t \delta\_{{{\ensuremath{\vec{h}}}}(0)} - \pi\_n \|\_{{\textsc{tv}}}\,.$$ [def:cutoff] We say the Glauber dynamics for the DGFF on Λ*n* with initialization *h⃗*(0) such that ∥*h⃗*(0)∥∞ ≤ *n* undergoes cutoff at time *t**n* with window *s**n* = *o*(*t**n*) if $$\begin{aligned} \lim\_{\lambda\to\infty} \liminf\_{n\to\infty} d\_{{{\textsc{tv}}}}^{(n)}(t\_n - \lambda s\_n) = 1\,, \quad \mbox{and}\quad \lim\_{\lambda \to\infty} \limsup\_{n\to\infty} d\_{{\textsc{tv}}}^{(n)} (t\_n + \lambda s\_n) = 0\,.\end{aligned}$$ With the above preparation, we now state our main result. [thm:main-revised] The Glauber dynamics for the DGFF on Λ*n* with initializations *h⃗*(0) : ∥*h⃗*(0)∥∞ ≤ *n* and zero boundary conditions, exhibits cutoff with window *O*(*n*2loglog*n*) at $$\begin{aligned} \label{eq:t-star} t\_\star & := \frac{2}{\pi^2} n^2 \log n\,.\end{aligned}$$ [rem:general-initializations] We chose the total-variation to be maximized over ∥*h⃗*(0)∥∞ ≤ *n* for concreteness, but if we chose to instead maximize over ∥*h⃗*(0)∥∞ ≤ *a**n* for any sequence *a**n* ≥ 0, then we would find that the mixing time maximized over such initializations is *t*⋆(*a**n*) + *O*(*n*2loglog*n*), where $$\begin{aligned} \label{eq:t-star-an} t\_\star(a\_n) := \frac{2}{\pi^2} n^2 \log a\_n\,. \end{aligned}$$ This implies the cutoff phenomenon if $\frac{\log a\_n}{\log \log n} \to \infty$. If one focuses on the special case of the flat initialization *h⃗*(0) ≡ 0, then we deduce that its mixing time is in fact *O*(*n*2loglog*n*); this specific initialization would be expected to mix in time *O*(*n*2) with no cutoff, but even establishing a quantitative *o*(*n*2log*n*) bound on its mixing time is challenging in *d* = 1. For the total-variation lower bounds in the above results, we can go further and obtain sharp lower bounds for the cutoff profile by leveraging the Gaussian nature of the process. This cutoff profile is expected to be sharp, but we are unable to get a matching upper bound due to serious technical obstructions: see Section [subsec:proof-ideas] and Remark [rem:tv-Gaussians] in particular for more. In the below, let erf(*x*) be the error function, i.e., the probability that a mean-zero Gaussian with variance 1/2 falls in the range [ − *x*, *x*]. [thm:main-lower-bound] The Glauber dynamics for the DGFF on Λ*n* with initializations *h⃗*(0) having ∥*h⃗*(0)∥∞ ≤ *a**n* satisfying $\frac{a\_n}{\log n} \to\infty$ and zero boundary conditions satisfies $$\begin{aligned} \liminf\_{n\to\infty} d\_{{\textsc{tv}}}^{(n)}(t\_\star(a\_n) + sn^2) \ge \operatorname{erf}\Big( \frac{2}{\pi} e^{ - \pi^2 s/2}\Big)\,. \end{aligned}$$ [rem:bc] Though we focused on the case of zero boundary data, our results hold for general Dirichlet boundary data *η*. In fact the measure *π**n* as well as the Glauber dynamics with boundary condition *η* is obtained from the one with zero boundary conditions by a deterministic *η*-dependent translation. See Lemma [lem:boundary-conditions] for the formal statement. Moreover, though our results were formulated for the continuous-time Glauber dynamics, it will be clear in the proofs that we could just as well have written them for the discrete-time Glauber dynamics, and the cutoff location would become *n*2*t*⋆ with *O*(*n*4loglog*n*) window. Related work ------------ Before describing the key ideas in the paper, we briefly recount related cutoff results in one dimension, and the more limited results on random surface dynamics in higher dimensions. The most basic 1D interface model is a  ± 1 random walk path pinned at height zero at points 0 and *n*. In the associated Glauber dynamics, a local minimum  ∨  (resp. local maximum  ∧ ) at a point $x\in {\llbracket}1,n-1\rrb$ flips with probability 1/2 at the corresponding clock rings. This seemingly innocuous dynamics encodes many apparently more complicated Markov chains including the adjacent transposition walk on the symmetric group, and the symmetric exclusion process on ${\llbracket}1,n-1\rrb$. The famous work of Wilson  gave upper and lower bounds of matching order *n*2log*n* for the mixing time of these processes; a key contribution of that work was noticing that the expected value of the height at a point evolves according to the discrete heat equation, whose spectral theory on the line segment is of course well-understood. The bounds of  remained the state of the art until the breakthrough work of Lacoin  establishing cutoff for the above-described dynamics. That proof relied on a delicate combination of tools from the analysis of monotone Markov chains, together with martingale techniques. Related ideas and refinements have been subsequently developed in e.g.,  leading to a robust theory of cutoff for a large class of 1D interface models in the universality class of . In its most general form , cutoff at $\frac 1{\pi^2} n^2 \log n$ was established for Glauber dynamics of 1D interface models with Gibbs distribution exp( − *V*(∣∇*h*∣)) for convex *V*. The DGFF in dimension one fits directly into this framework via the choice *V*(*x*) = *x*2/4. Moving on to higher dimensions, arguments similar to those of  can be used to see that the mixing time of the DGFF Glauber dynamics is of order *n*2log*n*. A straightforward generalization of the argument in  (which itself extended the discrete argument from  to a continuum setup) would allow one to identify the spectral gap of the DGFF dynamics as 1 − cos(*π*/*n*). However, a key step in all proofs of cutoff from  breaks down as soon as *d* ≠ 1. Namely, all those proofs relied on fast mixing on an *O*(1)-sized skeleton to split the segment into a finite number of smaller scale segments between the points of the skeleton, which mix independently. Martingale techniques based on connections to the discrete heat equation can be combined with monotonicity arguments to obtain useful *one-point* mixing time bounds, sufficient for the mixing time of the skeleton provided the skeleton size is *O*(1). However this will only be true in dimension one, and any such approach thus runs into a serious barrier when *d* ≥ 2. For random surfaces whose heights take integer values, the situation in *d* ≥ 2 is even worse, as the expected height no longer follows the discrete heat equation due to integer effects. A general methodology was used in  to prove *n*2(log*n*)*C* bounds on mixing times of random surfaces by trying to mimic the mean-curvature flow evolution of the surface, but this required a key local input bounding the mixing time from initializations whose heights are on the same scale as the fluctuations. Such an input is only available in a few settings, notably the zero-temperature Ising interface, and the height function of the dimer model . We also mention recent progress by  establishing the order *n*− 2 spectral gap for a hierarchical approximation to the discrete Gaussian model (the DGFF constrained to take integer valued heights). For many other important models of random surfaces like the solid-on-solid model, and the low-temperature Ising interface in three dimensions, there do not even exist sub-exponential bounds on the mixing time. Idea of the proof ----------------- Here, we describe our approach to proving Theorem [thm:main-revised]. ### Backwards-in-time random walk representation The starting point of our proof is the observation that the Glauber dynamics for the DGFF admits an exact representation in terms of backwards-in-time random walks on Λ*n* killed at ∂Λ*n*. This can be viewed as a discrete version of the Feynman–Kac representation for the SHE. A similar graphical representation was used to prove ergodicity and bound covariance growth in infinite volume in , and a synchronous version of this appeared in a recent work of Chatterjee  for more general random surface evolutions. Formally, the random walk representation goes as follows. First consider the following standard graphical construction of the Glauber dynamics (*h*(*x*, *t*))*x* ∈ Λ*n*, *t* ≥ 0. Begin by sampling a space-time Poisson process ${\ensuremath{\mathcal P}}$ on Λ*n* × [0, ∞]. Further for each point $(x,t)\in {\ensuremath{\mathcal P}}$ let there be an associated standard Gaussian *Z*(*x*, *t*). Given this, the conditional (on ${\ensuremath{\mathcal P}}$) process $(h^{\ensuremath{\mathcal P}}(x,t))\_{x,t}$ is constructed by at time $(x,t)\in {\ensuremath{\mathcal P}}$, updating the value at *x* to $\frac{1}{4}\sum\_{y\sim x} h(y,t^-) + Z(x,t)$. Now for any fixed *t*, we can define the random walk started at (*x*, *t*) going back in time, that whenever it encounters a point in $(y,s)\in{\ensuremath{\mathcal P}}$ jumps to a uniformly random neighbor of *y*, and gets frozen (killed) when it hits ∂Λ*n*. For any such random walk trajectory $(S^{x,{\ensuremath{\mathcal P}}}\_s)\_{s\in [0,t]}$ initialized at $S^{x,{\ensuremath{\mathcal P}}}\_t=x$, let $Z\_t(S\_{[0,t]}^{x,{\ensuremath{\mathcal P}}})$ be the sum of the Gaussian variables *Z*(*y*, *s*) encountered along the trajectory. The random walk representation of $(h^{\ensuremath{\mathcal P}}(x,t))\_{x,t}$ is the following exact equality: for any *t* > 0 and any realization of ${\ensuremath{\mathcal P}}$, h$\mathcal P$(x,t)=E, where the expectation is taken only over the random trajectory of $S^{x,{\ensuremath{\mathcal P}}}$. In particular, the distribution of $h^{\ensuremath{\mathcal P}}(\cdot,t)$ given ${\ensuremath{\mathcal P}}$ is a multivariate Gaussian and hence the unconditional law of the Glauber dynamics is a mixture, over the Poisson clock ring times, of Gaussian processes. We refer to Theorem [thm:rw-representation-exact-equality] and Corollary [cor:random-walk-representation] for the formal statements. [rem:tv-Gaussians] At this point, one could hope to leverage exact formulae for the total-variation between high-dimensional Gaussian distributions. If for most realizations of ${\ensuremath{\mathcal P}}$, $h^{\ensuremath{\mathcal P}}(t)$ is mixed after time *t*⋆, then the desired mixing time upper bound would follow. Using the BTRW representation, it is possible to show that for typical ${\ensuremath{\mathcal P}}$, for *t* ≥ *t*⋆, the covariance matrix of $h^{\ensuremath{\mathcal P}}(t)$ can be assumed to be the Green’s function (up to negligible error terms). In that case, bounding the total-variation distance between $h^{\ensuremath{\mathcal P}}(t)$ and stationarity can be found, by reduction to a 1D Gaussian computation, to be equivalent to bounding $$\begin{aligned} \label{eq:m-Laplacian-m} (m^{\ensuremath{\mathcal P}}\_t)^{\mathsf T} \Delta m^{\ensuremath{\mathcal P}}\_t \qquad \mbox{where} \quad (m^{\ensuremath{\mathcal P}}\_t)\_x = \mathbb E[h^{\ensuremath{\mathcal P}}(x,t)]\,.\end{aligned}$$ However, the mean vector $m\_t^{\ensuremath{\mathcal P}}$—essentially the probability that the BTRW from *x* did not hit the boundary ∂Λ*n* in time [0, *t*]—may be quite oscillatory for fixed (typical) ${\ensuremath{\mathcal P}}$ (even if averaged over ${\ensuremath{\mathcal P}}$ it is very smooth). This possibility is captured by the fact that the number of jumps taken by independent BTRWs from neighboring sites *x* and *y*, may differ by $\sqrt{t}\approx n\sqrt{\log n}$. Ignoring the correlations between the number of jumps and the trajectory of the BTRW in ${\ensuremath{\mathcal P}}$, this would indeed induce oscillations in $m\_t^{\ensuremath{\mathcal P}}$ that are too large to naively bound. Though unlikely, these oscillations could in principle align with high modes of the Laplacian, and contribute non-negligibly to the quantity in . Decoupling the dependencies between ${\ensuremath{\mathcal P}}$ and the survival probability of the BTRW appears to be a challenging task, though this could yield a path to obtaining a cutoff profile upper bound to match Theorem [thm:main-lower-bound]. ### A two-stage coupling Given this, the upper bound of Theorem [thm:main-revised] takes up most of the work of the paper; our approach to this upper bound entails coupling the evolution started from the maximal configuration of all *n* and any generic configuration *g* with ∥*g*∥∞ ≤ *n*. Let us denote the corresponding Glauber dynamics by *h**n⃗*(*x*, *t*) and *h**g⃗*(*x*, *t*) respectively. The coupling will proceed by stitching together two stages of monotone couplings so that we have *h**n⃗*(*x*, *t*) ≥ *h**g⃗*(*x*, *t*) for all *t* ≥ 0. As a result of this, the coalescence time of the coupling time is upper bounded by the hitting time of zero by the volume process *V**t* = ∑*x**h**n⃗*(*x*, *t*) − *h**g⃗*(*x*, *t*). We now describe the different steps by which this hitting time is bounded by *t*⋆ + *O*(*n*2loglog*n*). Such a two-stage coupling was already used to great effect in the 1D evolutions studied in ; in the below we overview the approach for pedagogical reasons, then emphasize the places where significant innovations were required to push those arguments to higher dimension. * The first phase of the coupling of *h**n⃗*(*x*, *t*) and *h**g⃗*(*x*, *t*) uses the same underlying Poisson process determining the update sequence, and the same standard Gaussian random variables *Z*(*y*, *s*) to carry out the updates at each site. This is called the identity coupling. (The specifics of the coupling beyond monotonicity are actually not important in the first stage, but the identity coupling is the simplest such choice.) This stage is used to smoothly bring the mean of the process *V**t* down to slightly below (e.g., by a factor of (log*n*)− 5) its ``equilibrium scale" of *O*(*n*2). This occurs in time that is *t*⋆ + *O*(*n*2loglog*n*) because the expectation of the volume process *V**t* shrinks from its initial value of *n*3 with exponential rate $\lambda\_{\mathbf{1}}= \frac{\pi^2}{2n^2}+O(n^{-4})$, the smallest eigenvalue of the Laplacian associated to the standard random walk on Λ*n*. This follows from the BTRW representation, by recognizing that the mean discrepancy at a site *x* is connected to the probability that the BTRW $S\_{-t}^{x,{\ensuremath{\mathcal P}}}$ has not been absorbed into ∂Λ*n*—what we call the survival probability of the BTRW. By means of this correspondance, we get both quenched and annealed in ${\ensuremath{\mathcal P}}$ estimates on the mean of the DGFF dynamics, and thus the mean discrepancy at each site. * If we were to continue using the identity coupling, these discrepancies would continue decaying with rate *λ***1**, and the time it would take to reach *V**t* = *o*(1) so that a ``union bound" over all sites would suffice for coalescence, is a further $\frac{4}{\pi^2}n^2 \log n$—destroying any chance at establishing cutoff. We therefore switch at this point to a coupling called *the sticky coupling*, which again uses the same Poisson clocks for the two chains, but upon an update at a site *x*, couples the resulting values of the update at *x* optimally (maximizing the probability that the discrepancy at *x* becomes zero). The argument then proceeds by establishing that under the sticky coupling, in a further *O*(*n*2) time, *V**t* hits zero. For this, we leverage the fact that *V**t* is a super martingale, then perform a multi-scale analysis, at each scale bounding the time it takes to drop by a multiplicative factor of 1/log*n*. The *i*’th such scale reduction will take time at most 2− *i**n*2 so that this can be summed over *i* and cumulatively contributes a time of *O*(*n*2). Unsurprisingly, this analysis relies on lower bounding the angle bracket process corresponding to *V**t* (for the reader unfamiliar with the notion, the precise definition appears in Definition [def:supermartingale-angle-bracket]), which in turn upper bounds the hitting times of lower scales. While this is a rather technical part of the proof and bears similarities to the arguments in, adapting the bound in  completely breaks when the dimension is increased to two; this is because of a use of Cauchy–Schwarz to lower bound the growth rate of the angle bracket—governed by the ℓ2-norm of the discrepancy process *h**n⃗*(*x*, *t*) − *h**g⃗*(*x*, *t*)—by the volume process *V**t*. That lower bound incurs a factor of 1/∣Λ*n*∣, which in dimension two (and higher) is too small to still bound the hitting times of smaller scales by *o*(*n*2log*n*). We refer the reader to Remark [rem:quadratic-variation-bound-difficulty] for a precise description of this obstruction. Our workaround to this problem is to use a tight bound on the number of sites at which the discrepancy is non-zero. To do so, we rely on the observation that the sticky coupling, when it fails to couple two updates, tends to yield a discrepancy that is at least of order one. Leveraging such a stochastic domination, we establish a concentration inequality (Lemma [lem:two-norm-to-one-norm]) comparing the ℓ2 norm of the discrepancy process to its number of non-zero discrepancies, and then in turn comparing that to its ℓ1 norm, *V**t*, that holds throughout the second stage of the coupling process. This comparison also uses as an input an ℓ∞ bound on the maximal discrepancy through the process, which is achieved by means of the BTRW representation and our knowledge of the (quenched) Gaussian law of the DGFF dynamics. Obtaining regularity estimates of this form that hold throughout the evolution, is often an esential source of difficulty in analysis of random surface dynamics. [rem:intro-higher-d] Let us remark briefly on the trouble this argument would face if considering the DGFF in dimensions *d* ≥ 3. There, even under optimal assumptions on the quadratic variation accumulated by the *V**t* super martingale under the sticky coupling, the coalescence time of the sticky coupling would be of order $\frac{2d-2}{\pi^2}n^2 \log n$, instead of the expected $\frac{d}{\pi^2} n^2 \log n$. For more details, see Remark [rem:higher-d-sticky-coupling]. Thus, any proof of cutoff for dimensions *d* ≥ 3 must go either by a completely different dynamical coupling, or by more analytic means like those hinted at in Remark [rem:tv-Gaussians]. ### Sharp lower bound on the cutoff profile For the lower bounds of Theorem [thm:main-revised] and [thm:main-lower-bound], we use as a test function the inner product of the first eigenfunction of the Laplacian with *h*, when initialized from a deterministic shift of a sample from stationarity. Conditioned on ${\ensuremath{\mathcal P}}$, the law of this function is Gaussian, whose mean and covariance we can describe precisely, up to *o*(1) error terms. This test function captures the right cutoff profile because the higher Fourier modes of *h* should ``mix" in time that is *o*(*t*⋆) when started out of equilibrium, and to first order, the mean of *h* is simply a multiple of the first eigenfunction of the Laplacian. In , the cutoff profile was obtained for the symmetric exclusion process on the circle (*d* = 1) and similarly came from the inner product of the height with the first eigenfunction of the corresponding Laplacian. We note that the above intuition is difficult to quantify into an upper bound on the cutoff profile in our setting, however, because the *o*(1) fluctuations in the mean of *h* can in principle align with high modes of the Laplacian, in which case they would contribute significantly to the total-variation distance between two DGFF evolutions. Related questions ----------------- We conclude the introduction with a brief account of possible extensions and other related themes of interest. Given our exact usage of the inner product with the first eigenfunction of the Laplacian to get the cutoff profile lower bound, it may be of interest to obtain a limit theory for the evolution of the Fourier coefficients of the height function *h⃗*(*t*). This could offer a path for controlling high modes of the random surface, and obtaining an upper bound on the cutoff profile that matches Theorem [thm:main-lower-bound]. If we were studying a Langevin diffusion process emulating , then the problem would diagonalize, and the Fourier coefficients would truly evolve according to *independent* Ornstein–Uhlenbeck (OU) processes whose mean-reversion factors depend on the index of their Fourier mode. The tensorization would simplify the study of its mixing time and cutoff greatly (c.f. the recent paper  where cutoff for *interacting* OU processes was studied in detail). Since our motivation comes from the study of random surface dynamics, where the allowed heights are often discrete, our preference is for the Glauber dynamics, where this tensorization does not hold. But it would be interesting to understand to what extent correlations induced by ${\ensuremath{\mathcal P}}$ in the Fourier coefficient evolutions disappear in the limit. Beyond the Glauber dynamics of the DGFF, of course even obtaining *O*(*n*2log*n*) mixing time bounds, let alone cutoff, for lattice models of random surfaces belonging to the EW universality class (e.g., the height function of the dimer model, the solid-on-solid model, and the interface of the Ising model) would be of significant interest. A modified setting of the DGFF that one could explore is the mixing time of random surface models in the presence of a hard floor (conditioned on being non-negative), inducing an entropic repulsion effect. While in dimension one, sharp mixing time bounds, including cutoff at $\frac{1}{\pi^2} n^2 \log n$ have been extended to this setting in , in dimension two, the situation is more complicated. In fact, for certain integer-valued random surface models (e.g., the solid-on-solid model) the mixing time can slow down exponentially due to such a floor effect . Another natural next step is to move beyond the 2D setting to the DGFF on more general families of graphs, in particular subsets of $\mathbb Z^d$ for *d* ≥ 3. There, we do not expect the sticky coupling to coalesce in *o*(*n*2log*n*) time, and one would likely need to rely on the quenched Gaussianity of the interface together with the Fourier techniques hinted at above to demonstrate cutoff. Lastly, we mention that there is an entirely different class of random surface growth where the growth has a a non-linear dependence on the local gradient. This universality class is dictated by the stochastic PDE known as the *KPZ equation*  which one obtains by adding the non-linear term ∣∇*h*(*x*, *t*)∣2 to the right-hand side of . In contrast to the SHE, the non-linearity is expected to give rise to non-Gaussian fluctuations. In one dimension, the KPZ equation has seen enormous attention and integrable features of certain models have led to remarkable progress (see  for a survey). In the mixing time direction, cutoff was recently shown for the asymmetric exclusion process (a natural Markov chain belonging to the KPZ class) on the line segment in . In higher dimensions much less is known, and in fact some recent works show that under certain renormalizations, the KPZ equation may even admit EW limits and Gaussian fluctuations in sub-critical regimes . Showing that the KPZ equation yields a non-Gaussian fluctuation theory under some renormalization in dimensions *d* ≥ 2 remains a major open problem. In light of the above advances, studying mixing times and cutoff in random surface growth models belonging to the KPZ universality class is an exciting avenue for future research. Acknowledgements ---------------- The authors thank the anonymous referees for their careful reading and suggestions. The authors thank H. Lacoin and P. Diaconis for useful suggestions. S.G. was partially supported by NSF grant DMS-1855688, NSF Career grant DMS-1945172, and a Sloan Fellowship. R.G. thanks the Miller Institute for Basic Research in Science for its support. A random walk representation of the DGFF evolution ================================================== In this section, we formally construct the random walk representation of the Glauber dynamics for the DGFF; this serves as a key source of regularity estimates for the height process over time, as alluded to in Section [subsec:proof-ideas]. Formal definition of the Glauber dynamics ----------------------------------------- We begin by giving a formal definition of the continuous-time Glauber dynamics for the DGFF on Λ*n*. [def:surface-RW-coupling] Fix a realization of Poisson noise ${\ensuremath{\mathcal Q}}= {\ensuremath{\mathcal Q}}\_n$ on Λ*n* × [0, *t*], enumerate the set of all times of ${\ensuremath{\mathcal Q}}$ in increasing order as $$\begin{aligned} 0<\sigma\_1 < \sigma\_2 < \cdots < \sigma\_{N(t)}<t\,. \end{aligned}$$ and let *ν*1, ..., *ν**N*(*t*) be the vertices at which these clock rings occur. For convenience, let *σ*0 = 0. Let (*G*1, ..., *G**N*(*t*)) be a sequence of i.i.d. standard normal random variables. Given initial data *h⃗*(0) = (*h*(*x*, 0))*x* ∈ Λ*n*, construct ${{\ensuremath{\vec{h}}}}^{{\ensuremath{\mathcal Q}}}(t) = (h^{\mathcal Q}(x,t))\_{x\in \Lambda\_n}$ as follows. * For all *s* ∈ [0, *σ*1), let $h^{\mathcal Q}(x,s) = h(x,0)$ for all *x* ∈ Λ*n*. * For every *j* ≥ 1, for all *s* ∈ [*σ**j*, *σ**j* + 1), let $$\begin{aligned} h^{{\ensuremath{\mathcal Q}}}(x, s) = \begin{cases} h^{{\ensuremath{\mathcal Q}}}(x,\sigma\_{j-1}) & x\ne \nu\_{j} \\ G\_{j} + \frac{1}{4} \sum\_{w\sim \nu\_{j}} h^{{\ensuremath{\mathcal Q}}}(w, \sigma\_{j-1}) & x = \nu\_{j} \end{cases}\,. \end{aligned}$$ Notice that the update rule is the same as resampling the value of $h^{{\ensuremath{\mathcal Q}}}$ at the site *ν**j* conditionally on its values on the neighbors of *ν**j*. This is because the distribution of the DGFF at a vertex *v*, conditionally on the values of its neighbors is exactly distributed as a unit variance Gaussian whose mean is the average of its neighbors’ values. [rem:identity-coupling] Definition [def:surface-RW-coupling] naturally induces a coupling of DGFF dynamics chains with different initializations by use of the same Poisson update sequence ${\ensuremath{\mathcal Q}}$ and the same Gaussian random variables *G**j*. This coupling will be called the *identity coupling*, and can easily be seen to be a monotone coupling. Let us pause to use the formal representation of the above definition to reason that it suffices for us to consider the DGFF evolution with identically zero boundary conditions, as alluded to in Remark [rem:bc]. The following shows that the dynamics with any other boundary conditions can be coupled to simply be a vertical shift of the one with the zero boundary data. For a function *η* on ∂Λ*n*, let Harm*η* be the discrete harmonic extension of *η*, i.e., the unique function on Λ*n* ∪ ∂Λ*n* that takes values *η* on ∂Λ*n* and is discrete harmonic on Λ*n*. [lem:boundary-conditions] Let *h⃗* be the Glauber dynamics with zero boundary conditions, and let *h⃗*1 be the Glauber dynamics with boundary conditions $\eta\_1 \in \mathbb R^{\partial \Lambda\_n}$. Initialize *h⃗*1(0) = Harm*η*1 + *h⃗*(0). Then, $$\begin{aligned} {{\ensuremath{\vec{h}}}}\_1(t) - \operatorname{Harm}\_{\eta\_1} \stackrel{d}= {{\ensuremath{\vec{h}}}}(t)\,. \end{aligned}$$ It suffices to show that the left and right-hand sides agree with probability one if they are constructed by Definition [def:surface-RW-coupling] using the same ${\ensuremath{\mathcal Q}}$ and the same Gaussians (*G**j*)*j*. Fix almost any ${\ensuremath{\mathcal Q}}$, and enumerate the times of ${\ensuremath{\mathcal Q}}$ in increasing order as $$\begin{aligned} 0<\sigma\_1<\sigma\_2< \cdots <\sigma\_{N(t)}<t\,. \end{aligned}$$ We will prove inductively over *j* that $$\begin{aligned} {{\ensuremath{\vec{h}}}}\_1(\sigma\_j) - \operatorname{Harm}\_{\eta\_1} = {{\ensuremath{\vec{h}}}}(\sigma\_j)\,. \end{aligned}$$ The base case of *σ*0 :  = 0 is evident by construction, as *h⃗*1(0) − Harm*η*1 = *h⃗*(0). Suppose now that for the first *j* − 1 updates, the equality holds. Then, at time *σ**j*, suppose a vertex *v**j* is updated. The equality on all vertices other than *v**j* evidently carry over from *σ**j* − 1. The relevant updates made at *v**j* are as follows: $$\begin{aligned} {{\ensuremath{\vec{h}}}}\_1(v\_j,\sigma\_j) & = \frac{1}{4}\sum\_{z\sim v\_j} {{\ensuremath{\vec{h}}}}\_1(z,\sigma\_{j-1}) + G\_j\,, \qquad \mbox{and}\qquad {{\ensuremath{\vec{h}}}}(v\_j,\sigma\_j) = \frac{1}{4}\sum\_{z\sim v\_j} {{\ensuremath{\vec{h}}}}(z,\sigma\_{j-1}) + G\_j\,. \end{aligned}$$ Then, by the inductive hypothesis, $$\begin{aligned} {{\ensuremath{\vec{h}}}}\_1(v\_j,\sigma\_j) - {{\ensuremath{\vec{h}}}}(v\_j,\sigma\_j) = \frac{1}{4}\sum\_{z \sim v\_j} \operatorname{Harm}\_{\eta\_1}(z)\,, \end{aligned}$$ which by harmonicity of Harm*η*1, is exactly Harm*η*1(*v**j*). Random walk representation of the DGFF evolution ------------------------------------------------ Given the construction of Definition [def:surface-RW-coupling], we now wish to define a backwards-in-time random walk (BTRW) through a Poisson noise field, i.e., it jumps whenever it encounter a point in the latter, through which we will give an alternative representation of the surface evolution *h⃗*(*t*). Let ${\ensuremath{\mathcal P}}\_{\mathbb Z^2} = (p\_{v,i})\_{v\in \mathbb Z^2, i\ge 1}$ be a Poisson field in the space-time slab $\mathbb Z^2 \times (-\infty, 0]$, i.e., for each $v\in \mathbb Z^2$, assign an independent Poisson clock, with clock rings at (*t**v*, *i*)*i* ≥ 1; then let $p\_{v,i} = (v,-t\_{v,i})\in \mathbb Z^2 \times (-\infty,0]$. For ease of notation, let ${\ensuremath{\mathcal P}}\_{n} = {\ensuremath{\mathcal P}}\_{\Lambda\_n}$ be the restriction of ${\ensuremath{\mathcal P}}\_{\mathbb Z^2}$ to Λ*n* × ( − ∞, 0], and thus let ${\ensuremath{\mathcal P}}\_\infty = {\ensuremath{\mathcal P}}\_{\mathbb Z^2}$. When understood from context, we will drop the *n* notation so that ${\ensuremath{\mathcal P}}= {\ensuremath{\mathcal P}}\_n$. [def:backwards-rw] Fix a realization of the Poisson field ${\ensuremath{\mathcal P}}\_\infty$ such that (*t**v*, *i*)*v*, *i* are all distinct (this event has probability one). Then, we can easily identify the times $t\in \{t\_{v,i}: v\in \mathbb Z^2, i\ge 1\}$ with the unique space-time point $p\in {\ensuremath{\mathcal P}}\_\infty$ such that *p* is at time  − *t*, and vice versa. For every *x* ∈ Λ*n*, define the BTRW in ${\ensuremath{\mathcal P}}= {\ensuremath{\mathcal P}}\_n$ started from *x*, denoted $(S\_{-s}^{x,{\ensuremath{\mathcal P}}})\_{s \ge 0}$, as follows: 1. Let $(\mathcal E^x\_{j})\_{j\ge 1}$ be a sequence of i.i.d. jumps uniform at random among { ± *e*1, ...,  ± *e**d*}. 2. Construct the discrete-time random walk (*Y**k*)*k* ≥ 0 by initializing *Y*0 = *x* and for each *i* ≥ 1, letting $Y\_i = Y\_{i-1} + {\ensuremath{\mathcal E}}\_i^x$. 3. Initialize $S\_{-s}^{x,{\ensuremath{\mathcal P}}} = Y\_0$ for all  − *s* ∈ [ − *T*1*x*,  − *T*0*x*] where  − *T*1*x* =  − *t**x*, 1 and *T*0*x* = 0. 4. For each *i* ≥ 1, let S-sx,$\mathcal P$ = Yi, *w**h**e**r**e* -Ti+1x = {-t< -Tix: -t{-tYi,j}j1}. We denote the random walk *trajectory* $(S\_{-s}^{x,{\ensuremath{\mathcal P}}})\_{-s\in [-t,0]}$ by $S\_{[-t,0]}^{x,{\ensuremath{\mathcal P}}}$. We view this as a right-continuous function from $[-t,0] \to \mathbb Z^2$. (While right-continuity in time is more typical of forward-in-time random walks, we constructed the BTRW as right-continuous to match the right-continuity of the Glauber dynamics.) The BTRW trajectory is in direct correspondence with the sequence of Poisson points it collects $\{p\in {\ensuremath{\mathcal P}}: p\in S\_{[-t,0]}^{x,{\ensuremath{\mathcal P}}}\}$ (and almost surely in direct correspondence with the sequence of times { − *T**i**x* ≥  − *t*}). We therefore use $S\_{[-t,0]}^{x,{\ensuremath{\mathcal P}}}$ both to denote the function from $[-t,0] \to \mathbb Z^2$ and the set of Poisson points or times it passes through. Observe that using ${\ensuremath{\mathcal P}}\_n$ to generate the random walk is the same as absorbing the walk at ∂Λ*n*, since as soon as it jumps onto a vertex of ∂Λ*n* it will never see another clock ring, and therefore never move again. We refer the reader to Figure [fig:rw-rep] for a depiction of BTRW trajectories in ${\ensuremath{\mathcal P}}$. (1,0)–(9,0); (0,0)–(10,0); iin 0,...,10(i,0)–(i,-5);; (0,-5)–(10,-5); (0,0)–(0,-5); (10,0)–(10,-5); at (5,.35) Λ*n*; (A0) at (10.35,0) 0; (At) at (10.35,-5)  − *t*; (A0)–(At); =[color = red, opacity = 1] iin 1,...,5(A1i) at (1,-Mod(1.311(10+i),5)) ; at (A1i)  × ;; iin 1,...,4(A2i) at (2,-Mod(1.414(10+2\*i),5)) ; at (A2i)  × ;; iin 1,...,5(A3i) at (3,-Mod(1.514(9+2\*i),5)) ; at (A3i)  × ;; iin 1,...,5(A4i) at (4,-Mod(1.624(10+2\*i),5)) ; at (A4i)  × ;; iin 1,...,6(A5i) at (5,-Mod(1.148(10+i),5)) ; at (A5i)  × ;; iin 1,...,4(A6i) at (6,-Mod(1.244(10+i),5)) ; at (A6i)  × ;; iin 1,...,3(A7i) at (7,-Mod(1.344(10+i),5)) ; at (A7i)  × ;; iin 1,...,6(A8i) at (8,-Mod(1.777(10+i),5)) ; at (A8i)  × ;; iin 1,...,6(A9i) at (9,-Mod(1.555(9+i),5)) ; at (A9i)  × ;; (1.975,0)– (1.975,-Mod(1.414(10+2\*3),5)) – ++(-.975,0) – (A12) – ++(1,0)–(A22) – ++(1,0) –(A33)–++(-1,0)–(A21)–++(-1,0)–(A14)–++(-1,0)–(0,-5); (2.025,0)– (2.025,-Mod(1.414(10+2\*3),5)) – ++(.975,0) – (A31)–++(1,0)–(A41)–++(1,0)–(A54)–++(-1,0)–(A42)–++(1,0)–(A55)–++(1,0)–(A62)–++(-1,0)–(A56)–++(-1,0)–(4,-5); [fig:rw-rep] The main result of this section is the following representation of the Glauber dynamics for the DGFF surface on Λ*n* in term of expectations of the above random walk trajectories through the Poisson noise field ${\ensuremath{\mathcal P}}\_n$. Assign every $p\in {\ensuremath{\mathcal P}}\_\infty$ an independent $\mathcal N(0,1)$ random-variable *Z**p* alternatively denoted *Z*− *t* if *p* is at time  − *t* (almost surely all points of ${\ensuremath{\mathcal P}}\_\infty$ are at distinct times). We will show that conditionally on the clock rings of the Glauber dynamics between times [0, *t*], there is a corresponding Poisson noise field on Λ*n* × [ − *t*, 0], in which the Glauber dynamics is expressed as averages of the Gaussians (*Z**p*) collected by the BTRW trajectory at times [ − *t*, 0]. Let ${\ensuremath{\mathcal Q}}$ be a Poisson noise field on $\mathbb Z^2 \times [0,t]$, and let $h^{\ensuremath{\mathcal Q}}(t)$ be the Glauber dynamics whose sequence of clock rings is exactly ${\ensuremath{\mathcal Q}}$. Let $\vartheta\_r {\ensuremath{\mathcal P}}$ be the unit intensity Poisson process on Λ*n* × [0, *r*] obtained by time-shifting ${\ensuremath{\mathcal P}}$ by *r* and then restricting to non-negative times, i.e., it simply considers ${\ensuremath{\mathcal P}}$ restricted to Λ*n* × [ − *r*, 0] shifted in the time coordinate by *r* to fit Λ*n* × [0, *r*]. [thm:rw-representation-exact-equality] Fix almost any Poisson noise process ${\ensuremath{\mathcal P}}= {\ensuremath{\mathcal P}}\_n$, and let ${\ensuremath{\mathcal Q}}\_t = \vartheta\_t{\ensuremath{\mathcal P}}$. Let $(Z\_p)\_{p\in {\ensuremath{\mathcal P}}}$ be i.i.d. standard Gaussian random variables. For any initialization *h⃗*(0), for all *t* > 0, $$\begin{aligned} {{\ensuremath{\vec{h}}}}^{{\ensuremath{\mathcal Q}}\_t}(t) \stackrel{d}= \bigg(\mathbb E\_{\mathcal E^x} \Big[h\big(S\_{-t}^{x,{\ensuremath{\mathcal P}}},0\big) + \sum\_{p \in {{\ensuremath{S}}}\_{[-t,0]}^{x,{\ensuremath{\mathcal P}}}} Z\_p \Big]\bigg)\_{x\in \Lambda\_n}\,. \end{aligned}$$ By averaging over ${\ensuremath{\mathcal Q}}$ and ${\ensuremath{\mathcal P}}$ and noting that the shift *ϑ**t* is a measure preserving transformation for the Poisson process, we obtain the following corollary. [cor:random-walk-representation] For every initialization *h⃗*(0), for all *t* > 0, we have $$\begin{aligned} {{\ensuremath{\vec{h}}}}(t) \stackrel{d}= \Bigg(\mathbb E\_{\mathcal E^x} \Big[h\big(S\_{-t}^{x,{\ensuremath{\mathcal P}}},0\big) + \sum\_{p \in {{\ensuremath{S}}}\_{[-t,0]}^{x,{\ensuremath{\mathcal P}}}} Z\_p \Big]\Bigg)\_{x\in \Lambda\_n}\,, \end{aligned}$$ where the randomness on the right-hand side is over both ${\ensuremath{\mathcal P}}= {\ensuremath{\mathcal P}}\_n$ and $(Z\_p)\_{p\in {\ensuremath{\mathcal P}}}$. [***Proof of Theorem [thm:rw-representation-exact-equality]***] The proof will follow from an explicit coupling between the process on the left-hand side as defined in Definition [def:surface-RW-coupling] and the process on the right-hand side, defined in Definition [def:backwards-rw]. Fix almost any ${\ensuremath{\mathcal P}}$. Enumerate the set of all times in ${\ensuremath{\mathcal P}}= {\ensuremath{\mathcal P}}\_n$ as $$\begin{aligned} 0> -\tau\_1 > -\tau\_2> \cdots\,. \end{aligned}$$ For any *s*, let *N*(*s*) = max{*j* :  − *τ**j* >  − *s*}. We claim that for every *s*, for any initial data *h⃗*(0), $$\begin{aligned} \label{eq:rw-representation-exact-equality} h^{\vartheta\_s {\ensuremath{\mathcal P}}}(x,s) = \mathbb E\_{\mathcal E^x} \Big[h\big(S\_{-s}^{x,{\ensuremath{\mathcal P}}},0\big) + \sum\_{p \in {{\ensuremath{S}}}\_{[-s,0]}^{x,{\ensuremath{\mathcal P}}}} Z\_p \Big] \qquad \mbox{for all $x\in \Lambda\_n$}\,, \end{aligned}$$ holds if $h^{\vartheta\_{s}{\ensuremath{\mathcal P}}}(x,s)$ is generated using Definition [def:surface-RW-coupling] with $$\begin{aligned} \label{eq:Gaussian-G-Z-coupling} G\_j = Z\_{(v\_{N(s) + 1-j}, -\tau\_{N(s)+ 1 -j})}\qquad \mbox{for all $j\le N(s)$}\,, \end{aligned}$$ where *v**l* is the location of the clock ring at time  − *τ**l*. This is of course a valid coupling since both the (*G**j*)*j* and (*Z**p*)*p* are sequences of i.i.d. standard Gaussians. The desired result of the theorem would follow from establishing  with the choice *s* = *t* for any realization of $(Z\_p)\_{p\in {\ensuremath{\mathcal P}}}$. In the rest of this proof, both ${\ensuremath{\mathcal P}}$ and $(Z\_p)\_{p\in {\ensuremath{\mathcal P}}}$ will be fixed, so all expectations should be understood to be only over the jump sequence ${\ensuremath{\mathcal E}}^x$, and we drop that subscript from the notation. By right-continuity of the quantities on either side of , it suffices to prove that equality for all $s\notin \{\tau\_j\}\_j$. Towards that, divide ( − ∞, 0] \ {*τ**j*}*j* into intervals *I*0 = ( − *τ*1, 0] and *I**k* = ( − *τ**k* + 1,  − *τ**k*). We will prove inductively over *k* that  holds for all *s* ∈ *I**k* and all initial data *h⃗*(0). If *k* = 0, then evidently for every *s* ∈ *I*0, $\vartheta\_s {\ensuremath{\mathcal P}}$ will not contain any clock rings, and $h^{\vartheta\_s {\ensuremath{\mathcal P}}}(x,s) = h(x,0)$ for all *x*. On the right-hand side the random walk ${{\ensuremath{S}}}\_{[-s,0]}^{x,{\ensuremath{\mathcal P}}}$ will not contain any jumps from *x*, and the right-hand also yields *h*(*x*, 0) verifying the base case. Now suppose that the equality of  holds for all *s* ∈ ⋃*l* < *k**I**l*; we will show it holds for all *s* ∈ *I**k*. By the inductive hypothesis, we have for all *s* ∈ *I**k* − 1 $$\begin{aligned} h^{\vartheta\_s {\ensuremath{\mathcal P}}}(x,s) = \mathbb E \Big[h(S\_{-s}^{x,{\ensuremath{\mathcal P}}},0) + \sum\_{p\in {{\ensuremath{S}}}\_{[-s,0]}^{x,{\ensuremath{\mathcal P}}}} Z\_p\Big]\qquad \mbox{for all $x\in \Lambda\_n$}\,. \end{aligned}$$ Now notice that for *r* > *s*, ${{\ensuremath{\vec{h}}}}^{\vartheta\_r{\ensuremath{\mathcal P}}}(r)$ initialized from *h⃗*(0) can be generated by letting $${{\ensuremath{\vec{g}}}}\_r = {{\ensuremath{\vec{h}}}}^{\vartheta\_r {\ensuremath{\mathcal P}}}(r-s)\,,$$ and then using *g⃗**r* as the initial data for a new Glauber dynamics chain ${{\ensuremath{\vec{h}}}}^{\vartheta\_s {\ensuremath{\mathcal P}}}(s)$. By this reasoning, for all *r* > *s*, we have $$\begin{aligned} h^{\vartheta\_{r}{\ensuremath{\mathcal P}}}(x,r) = \mathbb E\Big[g\_r(S\_{-s}^{x,{\ensuremath{\mathcal P}}}) + \sum\_{p\in {{\ensuremath{S}}}\_{[-s,0]}^{x,{\ensuremath{\mathcal P}}}} Z\_p\Big]\qquad \mbox{for all $x\in \Lambda\_n$}. \end{aligned}$$ Let *r* ∈ *I**k*, and note that *N*(*s*) + 1 = *N*(*r*) = *k*. Since under the coupling for $h^{\vartheta\_r {\ensuremath{\mathcal P}}}$, we have *G*1 = *Z*(*v**k*,  − *τ**k*), we get $$\begin{aligned} g\_r(y) = \begin{cases} h(y,0) & y\ne v\_k \\ Z\_{(v\_k,-\tau\_{k})} + \frac{1}{4} \sum\_{w\sim v\_{k}} h(w,0) & y = v\_k \end{cases}\,. \end{aligned}$$ Plugging in the expression for *g**r*(*y*) in the display above it, we see that $$\begin{aligned} \label{eq:E-tilde-h-expansion} \mathbb E [ g\_r(S\_{-s}^{x,{\ensuremath{\mathcal P}}})] & = \sum\_{y\in \Lambda\_n} g\_r(y) \mathbb P(S\_{-s}^{x,{\ensuremath{\mathcal P}}} = y) \nonumber \\ & = \sum\_{y\ne v\_{k}} h (y,0) \mathbb P(S\_{-s}^{x,{\ensuremath{\mathcal P}}} = y) + \mathbb P(S\_{-s}^{x,{\ensuremath{\mathcal P}}} = v\_{k}) Z\_{(v\_k,-\tau\_{k})} + \frac{1}{4} \mathbb P(S\_{-s}^{x,{\ensuremath{\mathcal P}}} = v\_{k}) \sum\_{w\sim v\_k} h(w,0)\,. \end{aligned}$$ Now notice that if $S\_{-s}^{x,{\ensuremath{\mathcal P}}} =y$ for a *y* ≠ *v**k*, then since the only clock ring in Λ*n* in the time interval ( − *r*,  − *s*) occurs at the site *v**k*, we have that $S\_{-r}^{x,{\ensuremath{\mathcal P}}} = S\_{-s}^{x,{\ensuremath{\mathcal P}}}$. On the other hand, if $S\_{-s}^{x,{\ensuremath{\mathcal P}}} = v\_k$, then the probability that $S\_{-r}^{x,{\ensuremath{\mathcal P}}} = y$ is $\frac{1}{4}$ if *y* ∼ *v**k*, and zero otherwise. Therefore, $$\begin{aligned} \mathbb P(S\_{-r}^{x,{\ensuremath{\mathcal P}}} = y) = \begin{cases} 0 & y = v\_{k} \\ \mathbb P(S\_{-s}^{x,{\ensuremath{\mathcal P}}} = y) + \frac{1}{4}\mathbb P(S\_{-s}^{x,{\ensuremath{\mathcal P}}} = v\_k) & y\sim v\_k \\ \mathbb P(S\_{-s}^{x,{\ensuremath{\mathcal P}}} = y) & \mbox{else} \end{cases}\,. \end{aligned}$$ It is then evident that the first and third terms together in the expansion of  are exactly the quantity $\mathbb E[h(S\_{-r}^{x,{\ensuremath{\mathcal P}}},0)]$. It remains to show that $$\begin{aligned} \mathbb P(S\_{-s}^{x,{\ensuremath{\mathcal P}}} = v\_k) Z\_{(v\_k,-\tau\_k)} = \mathbb E\Big[ \sum\_{p\in {{\ensuremath{S}}}\_{[-r,0]}^{x,{\ensuremath{\mathcal P}}}} Z\_p - \sum\_{p\in {{\ensuremath{S}}}\_{[-s,0]}^{x,{\ensuremath{\mathcal P}}}} Z\_p\Big]\,. \end{aligned}$$ To see this, notice that the right-hand side is exactly $$\begin{aligned} \mathbb E\Big[ \sum\_{p\in {{\ensuremath{S}}}\_{[-r,-s]}^{x,{\ensuremath{\mathcal P}}}} Z\_p\Big] = \mathbb E\Big[ Z\_{(v\_k,-\tau\_{k})} \mathbf 1\{S\_{-s}^{x,{\ensuremath{\mathcal P}}} =v\_k\}\Big]\,, \end{aligned}$$ as the only clock ring in the interval [ − *r*,  − *s*] is the one at (*v**k*,  − *τ**k*). Altogether, we have obtained the desired equality of . Consequences of the random walk representation ============================================== Having established the BTRW representation of Theorem [thm:rw-representation-exact-equality], we now derive some fundamental consequences of this representation. Before we proceed, let us introduce a change of notation that will prove to be convenient. Because the process on the left-hand side of Theorem [thm:rw-representation-exact-equality] requires one to keep shifting ${\ensuremath{\mathcal P}}$ by time *t* to obtain ${\ensuremath{\mathcal Q}}\_t$, and this is inconvenient to think about, let us solely work with the process on the right-hand side. Abusing notation, for each ${\ensuremath{\mathcal P}}= {\ensuremath{\mathcal P}}\_n$, define the random process $$\begin{aligned} \label{eq:h-P} {{\ensuremath{\vec{h}}}}^{\ensuremath{\mathcal P}}(t) = \big(h^{\ensuremath{\mathcal P}}(x,t)\big)\_{x\in \Lambda\_n} = \bigg(\mathbb E\_{{\ensuremath{\mathcal E}}^x}\Big[h(S\_{-t}^{x,{\ensuremath{\mathcal P}}},0) + \sum\_{p\in S\_{[-t,0]}^{x,{\ensuremath{\mathcal P}}}} Z\_p\Big]\bigg)\_{x\in \Lambda\_n}\,. \end{aligned}$$ Recall from Theorem [thm:rw-representation-exact-equality] that for any fixed *t*, there is a direct coupling of this process to ${{\ensuremath{\vec{h}}}}^{{\ensuremath{\mathcal Q}}\_t}(t)$, where ${\ensuremath{\mathcal Q}}\_t = \vartheta\_t {\ensuremath{\mathcal P}}$, but this coupling depends on *t*. Thus, we prefer to use the distributional equality of Corollary [cor:random-walk-representation] to view the Glauber dynamics *h⃗*(*t*) at time *t* as the average over ${\ensuremath{\mathcal P}}$ of ${{\ensuremath{\vec{h}}}}^{{\ensuremath{\mathcal P}}}(t)$. Gaussianity, and mean and covariance expressions ------------------------------------------------ We begin by establishing the Gaussianity of the process ${{\ensuremath{\vec{h}}}}^{{\ensuremath{\mathcal P}}}(t)$ and giving random-walk characterizations of its mean and covariance. [prop:mean-and-variance-of-h] Fix almost any ${\ensuremath{\mathcal P}}$. For every *h⃗*(0), and every *t* > 0, the law of ${{\ensuremath{\vec{h}}}}^{{\ensuremath{\mathcal P}}}(t)$ is a multivariate Gaussian. Its mean and covariance can be expressed as follows: $$\begin{aligned} \mathbb E[h^{{\ensuremath{\mathcal P}}}(x,t)] & = \mathbb E\_{\mathcal E^x} \Big[h(S\_{-t}^{x,{\ensuremath{\mathcal P}}},0)\Big]\quad \mbox{for all $x\in \Lambda\_n$}\,, \\ \operatorname{Cov}[h^{{\ensuremath{\mathcal P}}}(x,t), h^{{\ensuremath{\mathcal P}}}(y,t)] & = \mathbb E\_{\mathcal E^x,\mathcal E^y}\Big[\big|\big\{p\in {{\ensuremath{S}}}\_{[-t,0]}^{x,{\ensuremath{\mathcal P}}} \cap {{\ensuremath{S}}}\_{[-t,0]}^{y,{\ensuremath{\mathcal P}}}\big\}\big|\Big]\quad \mbox{for all $x,y\in \Lambda\_n$}\,. \end{aligned}$$ Here $\mathbb E\_{{\ensuremath{\mathcal E}}^x,{\ensuremath{\mathcal E}}^y}$ is the expectation with respect to the product measure $\mathbb P\_{{\ensuremath{\mathcal E}}^x}\otimes \mathbb P\_{{\ensuremath{\mathcal E}}^y}$. We start by expressing ${{\ensuremath{\vec{h}}}}^{{\ensuremath{\mathcal P}}}(t)$ as a linear transformation of a sequence of independent standard Gaussians $(Z\_p)\_{p\in {\ensuremath{\mathcal P}}}$. First of all, for a fixed ${\ensuremath{\mathcal P}}$, for each *t*,  let *N*(*t*) be the total number of clock rings in the spacetime slab Λ*n* × [ − *t*, 0]. Let  − *τ**N*(*t*) <  − *τ**N*(*t*) − 1 < ... <  − *τ*1 < 0 be these clock rings ordered in time, and recall that the *Z**p*s can be labeled according to which time *p* corresponds to. Let *Z⃗* = (*Z*− *τ*1, ..., *Z*− *τ**N*(*t*)) be this *N*(*t*) × 1 standard Gaussian vector. By construction, the ∣Λ*n*∣ × 1 random vector ${{\ensuremath{\vec{h}}}}^{{\ensuremath{\mathcal P}}}(t) = (h^{{\ensuremath{\mathcal P}}}(x,t))\_{x\in \Lambda\_n}$ can be expressed as $$\begin{aligned} h^{{\ensuremath{\mathcal P}}}(x,t) & = \mathbb E\_{{\ensuremath{\mathcal E}}^x}[h(S\_{-t}^{x,{\ensuremath{\mathcal P}}},0)] + \sum\_{{{\ensuremath{S}}}'\_{[-t,0]}}\Big[ \mathbb P\_{{\ensuremath{\mathcal E}}^x}({{\ensuremath{S}}}\_{[-t,0]}^{x,{\ensuremath{\mathcal P}}} = {{\ensuremath{S}}}'\_{[-t,0]}) \big[\sum\_{j: -\tau\_j \in {{\ensuremath{S}}}'\_{[-t,0]}} Z\_{-\tau\_j}\big]\Big] \\ & = \mathbb E\_{{\ensuremath{\mathcal E}}^x}[h(S\_{-t}^{x,{\ensuremath{\mathcal P}}},0)] + \sum\_{1\le j\le N(t)} Z\_{-\tau\_j} \Big[\sum\_{{{\ensuremath{S}}}'\_{[-t,0]}\ni -\tau\_j} \mathbb P\_{{\ensuremath{\mathcal E}}^x}({{\ensuremath{S}}}\_{[-t,0]}^{x,{\ensuremath{\mathcal P}}} = {{\ensuremath{S}}}'\_{[-t,0]})\Big]\,. \end{aligned}$$ where the sums are over feasible BTRW trajectories *S*ʹ[ − *t*, 0] starting from *x* (naturally identified with the times of the Poisson points it collects). As such, we can express $${{\ensuremath{\vec{h}}}}^{{\ensuremath{\mathcal P}}}(t) = {{\ensuremath{\vec{m}}}}\_t^{{\ensuremath{\mathcal P}}} + {{\ensuremath{A}}}\_t^{{\ensuremath{\mathcal P}}} \vec{Z}\,,$$ where ${{\ensuremath{\vec{m}}}}\_t^{{\ensuremath{\mathcal P}}} = (m\_t^{\ensuremath{\mathcal P}}(x))\_{x\in \Lambda\_n}$ is a ∣Λ*n*∣ × 1 vector and ${{\ensuremath{A}}}\_t^{{\ensuremath{\mathcal P}}}$ is a ∣Λ*n*∣ × *N*(*t*) matrix given as follows: $$\begin{aligned} {{\ensuremath{\vec{m}}}}\_t^{{\ensuremath{\mathcal P}}}(x) = \mathbb E\_{{\ensuremath{\mathcal E}}^x}[h(S\_{-t}^{x,{\ensuremath{\mathcal P}}},0)] \qquad \mbox{and}\qquad {{\ensuremath{A}}}\_t^{{\ensuremath{\mathcal P}}} (x,j) = \mathbb P\_{{\ensuremath{\mathcal E}}^x}(-\tau\_j \in {{\ensuremath{S}}}\_{[-t,0]}^{x,{\ensuremath{\mathcal P}}})\,. \end{aligned}$$ Observe that for fixed ${\ensuremath{\mathcal P}}$, neither ${{\ensuremath{\vec{m}}}}\_t^{{\ensuremath{\mathcal P}}}$ nor ${{\ensuremath{A}}}\_t^{\ensuremath{\mathcal P}}$ are random. This expression for ${{\ensuremath{\vec{h}}}}^{\mathcal P}(t)$ as a linear transformation of *Z⃗* thus implies that the former is distributed as a Gaussian random vector having $$\begin{aligned} \mathbb E[h^{{\ensuremath{\mathcal P}}}(t)] = {{\ensuremath{\vec{m}}}}\_t^{{\ensuremath{\mathcal P}}} \qquad \mbox{and covariance matrix} \qquad {{\ensuremath{\Sigma}}}\_t^{{\ensuremath{\mathcal P}}} = {{\ensuremath{A}}}\_t^{{\ensuremath{\mathcal P}}} ({{\ensuremath{A}}}\_t^{{\ensuremath{\mathcal P}}})^{\mathsf{T}}\,. \end{aligned}$$ (The latter expression uses the fact that the covariance matrix of *Z⃗* is the identity.) The desired expression for the mean therefore follows immediately from the above. The expression for the covariance can be seen by writing $$\begin{aligned} {{\ensuremath{\Sigma}}}\_t^{{\ensuremath{\mathcal P}}}(x,y) = \sum\_j \mathbb P\_{{\ensuremath{\mathcal E}}^x}(-\tau\_j\in {{\ensuremath{S}}}\_{[-t,0]}^{x,{\ensuremath{\mathcal P}}}) \mathbb P\_{{\ensuremath{\mathcal E}}^y}(-\tau\_j\in {{\ensuremath{S}}}\_{[-t,0]}^{y,{\ensuremath{\mathcal P}}})\,. \end{aligned}$$ This is exactly the expected number of clock rings shared by the two walks, under a product measure over the jumps ${\ensuremath{\mathcal E}}^x,{\ensuremath{\mathcal E}}^y$ yielding the expression desired by the proposition. Let *G**n*( ⋅ ,  ⋅ ) be the Green’s function for the random walk on Λ*n* killed at ∂Λ*n*, i.e., $$\begin{aligned} \label{eq:Greens-function} G\_n(x,y) = \mathbb E\Big[\sum\_{k\ge 0} \mathbf 1\{Y^x\_k= y\}\Big]\,, \end{aligned}$$ where (*Y**k**x*)*k* ≥ 0 is the simple random walk on Λ*n* ∪ ∂Λ*n* killed upon hitting ∂Λ*n*. For convenience, we drop *n* from the notation when understood from context and denote *G**n* by *G*. Recall that *G*( ⋅ ,  ⋅ ) is the covariance matrix for the DGFF on Λ*n*. We will use ${{\ensuremath{H}}}\_t^{\ensuremath{\mathcal P}}$ to denote the heat kernel matrix for the BTRW $S\_{-t}^{\cdot,{\ensuremath{\mathcal P}}}$, i.e., for fixed ${\ensuremath{\mathcal P}}= {\ensuremath{\mathcal P}}\_n$, for every *x*, *y* ∈ Λ*n*, $$\begin{aligned} \label{eq:H-t-P} {{\ensuremath{H}}}\_t^{\ensuremath{\mathcal P}}(x,y) = \mathbb P\_{{\ensuremath{\mathcal E}}^x}(S\_{-t}^{x,{\ensuremath{\mathcal P}}} = y)\,. \end{aligned}$$ While Proposition [prop:mean-and-variance-of-h] included explicit expressions for the mean and the covariance in terms of averages of BTRW trajectories, the following result compares the same to their equilibrium counterparts, namely 0 and *G*. [prop:monotonicity-of-mean-variance] For almost every ${\ensuremath{\mathcal P}}$, we have for every *x*, *y* ∈ Λ*n* $$\begin{aligned} \lim\_{t\to\infty} \mathbb E[h^{{\ensuremath{\mathcal P}}}(x,t)] = 0\qquad \mbox{and} \quad \lim\_{t\to\infty} \operatorname{Cov}[h^{{\ensuremath{\mathcal P}}}(x,t)h^{{\ensuremath{\mathcal P}}}(y,t)] = {{\ensuremath{G}}}(x,y)\,. \end{aligned}$$ Furthermore, for every *t* > 0, $$\begin{aligned} \Big({{\ensuremath{G}}}(x,y) - \operatorname{Cov}[h^{{\ensuremath{\mathcal P}}}(x,t),h^{{\ensuremath{\mathcal P}}}(y,t)]\Big)\_{x,y} = \Big({{\ensuremath{H}}}\_t^{{\ensuremath{\mathcal P}}} {{\ensuremath{G}}}({{\ensuremath{H}}}\_t^{{\ensuremath{\mathcal P}}})^{\mathsf{T}}\Big)\_{x,y}\,. \end{aligned}$$ [***Proof of Proposition [prop:monotonicity-of-mean-variance]***] Fix any two initializations *h⃗*1(0), *h⃗*2(0), and let *h⃗*1(*t*), *h⃗*2(*t*) be their respective time evolutions. Use the same Poisson noise field ${\ensuremath{\mathcal P}}$ and the same Gaussians (*Z**p*)*p* for both dynamical realizations. Let *g*(*x*, 0) = *h*2(*x*, 0) − *h*1(*x*, 0) for all *x* ∈ Λ*n*. By Proposition [prop:mean-and-variance-of-h] and linearity of expectation, $$\begin{aligned} \big|\mathbb E[h\_2^{{\ensuremath{\mathcal P}}}(x,t)]- \mathbb E[h\_1^{{\ensuremath{\mathcal P}}}(x,t)]\big| = \big|\mathbb E\_{{\ensuremath{\mathcal E}}^x}[g(S\_{-t}^{x,{\ensuremath{\mathcal P}}},0)]\big| \le \|g(\cdot,0)\|\_\infty \mathbb P\_{{\ensuremath{\mathcal E}}^x}(S\_{-t}^{x,{\ensuremath{\mathcal P}}}\notin \partial \Lambda\_n)\,. \end{aligned}$$ Now observe that as long as ${\ensuremath{\mathcal P}}\_{\{v\}}$ is infinite for every *v* (an event which has probability one), the probability above goes to zero as *t* → ∞ (recalling that the set ∂Λ*n* is absorbing). This implies the convergence of the expectation, since if *h⃗*2(0) ≡ 0, then $\mathbb E[{{\ensuremath{\vec{h}}}}\_2^{\ensuremath{\mathcal P}}(t)] \equiv 0$ for all *t* > 0. Now consider the difference in covariances. For fixed ${\ensuremath{\mathcal P}}$ and using the same $(Z\_p)\_{p\in {\ensuremath{\mathcal P}}}$, by the representation of Theorem [thm:rw-representation-exact-equality], we have $$\begin{aligned} h\_2^{{\ensuremath{\mathcal P}}}(x,t) - \mathbb E[h\_2(S\_{-t}^{x,{\ensuremath{\mathcal P}}},0)] = h\_1^{{\ensuremath{\mathcal P}}}(x,t) - \mathbb E[h\_1(S\_{-t}^{x,{\ensuremath{\mathcal P}}},0)] = \mathbb E\_{{\ensuremath{\mathcal E}}^x} \Big[\sum\_{p\in {{\ensuremath{S}}}\_{[-t,0]}^{x,{\ensuremath{\mathcal P}}}} Z\_p\Big]\,. \end{aligned}$$ The right-hand side quantity is exactly $h\_3^{{\ensuremath{\mathcal P}}}(x,t)$ (viewed as a random variable in (*Z**p*)*p*), initialized from *h*3(*x*, 0) = 0 for all *x*. We can then rewrite $$h\_2^{{\ensuremath{\mathcal P}}}(x,t) = \mathbb E\_{{\ensuremath{\mathcal E}}^x}[h\_2(S\_{-t}^{x,{\ensuremath{\mathcal P}}},0)] + h\_3^{\ensuremath{\mathcal P}}(x,t)\,.$$ Let *h⃗*2(0) be drawn from *π* (independently of ${\ensuremath{\mathcal P}}$ and (*Z**p*) and hence of $h\_3^{\ensuremath{\mathcal P}}(x,t)$). Under both the randomness of *π* and the randomness of (*Z**p*)*p* (but conditionally on ${\ensuremath{\mathcal P}}$), by definition of the Glauber dynamics, ${{\ensuremath{\vec{h}}}}\_2^{\ensuremath{\mathcal P}}(t)$ is distributed as *π* for all *t*. Therefore, we get for almost every fixed ${\ensuremath{\mathcal P}}$, $$\begin{aligned} {{\ensuremath{G}}}(x,y) & = \operatorname{Cov}[h\_2^{{\ensuremath{\mathcal P}}}(x,t), h\_2^{{\ensuremath{\mathcal P}}}(y,t)] \\ & = \operatorname{Cov}\_\pi\big[\mathbb E\_{{\ensuremath{\mathcal E}}^x}[h\_2(S\_{-t}^{x,{\ensuremath{\mathcal P}}},0)], \mathbb E\_{{\ensuremath{\mathcal E}}^y}[h\_2(S\_{-t}^{y,{\ensuremath{\mathcal P}}},0)]\big] + \operatorname{Cov}[h\_3^{{\ensuremath{\mathcal P}}}(x,t), h\_3^{{\ensuremath{\mathcal P}}}(y,t)] \\ & = \operatorname{Cov}\_\pi\big[\mathbb E\_{{\ensuremath{\mathcal E}}^x}[h\_2(S\_{-t}^{x,{\ensuremath{\mathcal P}}},0)], \mathbb E\_{{\ensuremath{\mathcal E}}^y}[h\_2(S\_{-t}^{y,{\ensuremath{\mathcal P}}},0)]\big] + \operatorname{Cov}[h\_1^{{\ensuremath{\mathcal P}}}(x,t), h\_1^{{\ensuremath{\mathcal P}}}(y,t)]\,, \end{aligned}$$ where we used the fact that a deterministic shift in the initialization does not change the covariance in (*Z**p*) to change $h\_3^{\ensuremath{\mathcal P}}$ to $h\_1^{\ensuremath{\mathcal P}}$. Subtracting the second term from both sides, and then swapping expectations in the first term, we get $$\begin{aligned} {{\ensuremath{G}}}(x,y) - \operatorname{Cov}[h\_1^{{\ensuremath{\mathcal P}}}(x,t), h\_1^{{\ensuremath{\mathcal P}}}(y,t)] & = \mathbb E\_{\pi}\big[\mathbb E\_{{\ensuremath{\mathcal E}}^x,{\ensuremath{\mathcal E}}^y}[h\_2(S\_{-t}^{x,{\ensuremath{\mathcal P}}},0) h\_2(S\_{-t}^{y,{\ensuremath{\mathcal P}}},0)]\big] \\ & = \mathbb E\_{{\ensuremath{\mathcal E}}^x,{\ensuremath{\mathcal E}}^y}[{{\ensuremath{G}}}({S\_{-t}^{x,{\ensuremath{\mathcal P}}},S\_{-t}^{y,{\ensuremath{\mathcal P}}})}]\,, \end{aligned}$$ where the expectation $\mathbb E\_{{\ensuremath{\mathcal E}}^x,{\ensuremath{\mathcal E}}^y}$ is under the product distribution $\mathbb E\_{{\ensuremath{\mathcal E}}^x}$ and $\mathbb E\_{{\ensuremath{\mathcal E}}^y}$. This last expectation is then easily seen to be exactly the *x*, *y*’th entry of the matrix ${{\ensuremath{H}}}\_{t}^{{\ensuremath{\mathcal P}}}{{\ensuremath{G}}}({{\ensuremath{H}}}\_{t}^{{\ensuremath{\mathcal P}}})^{\mathsf{T}}$. Notice that this quantity is non-negative since all entries of *G*(*x*, *y*) are non-negative. The last thing to show is that this quantity goes to zero as *t* → ∞. This follows by observing that $$\begin{aligned} \mathbb E\_{{\ensuremath{\mathcal E}}^x,{\ensuremath{\mathcal E}}^y} [{{\ensuremath{G}}}({S\_{-t}^{x,{\ensuremath{\mathcal P}}},S\_{-t}^{y,{\ensuremath{\mathcal P}}})}] \le \|{{\ensuremath{G}}}\|\_\infty \mathbb P\_{{\ensuremath{\mathcal E}}^x}(S\_{-t}^{x,{\ensuremath{\mathcal P}}}\notin \partial \Lambda\_n)\mathbb P\_{{\ensuremath{\mathcal E}}^y}(S\_{-t}^{y,{\ensuremath{\mathcal P}}}\notin \partial \Lambda\_n)\,, \end{aligned}$$ and recalling that almost surely both probabilities on the right-hand side go to zero as *t* → ∞. Exponential decay rate of the mean process ========================================== In the previous section, we expressed the mean process of *h⃗*, when conditioned on ${\ensuremath{\mathcal P}}$, in terms of its BTRW representation. If the initial data is uniform (e.g., the all-*n* initialization), then the mean $\mathbb E[h^{\ensuremath{\mathcal P}}(x,t)]$ is evidently governed by the probability that $S\_{-t}^{x,{\ensuremath{\mathcal P}}}$ has not been absorbed into ∂Λ*n*—what we call the survival probability of the BTRW. Harmonic measure estimates for the BTRW in a fixed Poisson noise (i.e., quenched) can be quite different from those in discrete or continuous time; for instance if at time  − *t*, the most recent clock ring were at site *v*, the probability that $S\_{-t}^{x,{\ensuremath{\mathcal P}}} = v$ is zero. Nonetheless, the survival probability is a sufficiently averaged quantity that for most ${\ensuremath{\mathcal P}}$ it behaves as it would for the usual discrete random walk, for which the spectral theory, and therefore the quenched survival probability after *t* steps, can be sharply characterized on the grid graph. That is the aim of this section. In what follows, the randomness will often come from the jump sequences $({\ensuremath{\mathcal E}}^x)\_{x\in \Lambda\_n}$ of the BTRW, with ${\ensuremath{\mathcal P}}\_\infty$ fixed. We will denote probabilities only over this randomness by $\mathbb P\_{{\ensuremath{\mathcal E}}}$. The use of $\mathbb P$ in the context of the BTRW will be over both sources of randomness, ${\ensuremath{\mathcal P}}\_\infty$ and $({\ensuremath{\mathcal E}}^x)$. **Other notational disclaimers.** Throughout the rest of the paper, all constants in *O*(1), Ω(1) and *o*(1) etc should be understood to hold with constants that are independent of *n*. In particular, all statements that follow should be understood to hold uniformly over *n* sufficiently large. For ease of notation, we will use *c*, *C* to denote the existence of constants independent of *n* for which the relevant claim holds; these letters can denote different constants from line to line. Finally, for readability, we will ignore rounding issues and integer effects as it will be clear how to handle with minimal modifications. Preliminaries for discrete-time random walks in Λ*n* ---------------------------------------------------- We begin by collecting (standard) preliminaries on the spectrum of the Laplacian on Λ*n* and the harmonic measure and Green’s function of the discrete-time simple random walk on Λ*n*. We refer to  for more details, and to Appendix [sec:appendix] for precise references for and proofs of these facts. Consider the (normalized) discrete Laplace operator Δ*n* on Λ*n*, given by the ∣Λ*n*∣ × ∣Λ*n*∣ matrix $$\begin{aligned} \label{eq:Laplacian} \Delta\_n(x,y) = \begin{cases} -1 & x=y \\ \frac{1}{4} & x \sim y \\ 0 & \mbox{else} \end{cases}\,. \end{aligned}$$ Let *λ***1** be the smallest eigenvalue of  − Δ*n*, and let *φ***1** denote the corresponding normalized eigenvector. Then $$\begin{aligned} \label{eq:top-eigenvalue} \lambda\_{\mathbf{1}}= 1-\cos\Big(\frac{\pi}{n}\Big)\,, \end{aligned}$$ and for *x* = (*x*1, *x*2) ∈ Λ*n*, $$\begin{aligned} \label{eq:top-eigenvector} \varphi\_{\mathbf{1}}(x) = \prod\_{j=1,2} \bigg(\sqrt{\frac{2}{n}} \sin\Big(\frac{x\_j \pi}{n}\Big)\bigg)\,. \end{aligned}$$ With that, we can also define a different normalization of this top eigenvector: $$\widehat \varphi\_{\mathbf{1}}(x) := \varphi\_{\mathbf{1}}(x) \sum\_{y\in \Lambda\_n} \varphi\_{\mathbf{1}}(y)\,.$$ [fact:order-of-magnitudes] For *λ***1** and *φ***1** as above, $\lambda\_{\mathbf{1}}= \frac{\pi^2}{2n^2} + O(n^{-4})$ and max*x* ∈ Λ*n**φ***1**(*x*) = *O*(*n*− 1). Further, $\max\_{x\in \Lambda\_n}\widehat \varphi\_{\mathbf{1}}(x) = O(1)$ and $\min\_{x:d(x,\partial \Lambda\_n) \ge n/4}\widehat \varphi\_{\mathbf{1}}(x) = \Omega(1)$. Let (*Y**k**x*)*k* ≥ 0 be the discrete-time simple random walk on Λ*n* that is killed as soon as it hits ∂Λ*n*. Notice that if *P* is the transition matrix for *Y**k*, then  − Δ*n* = *I* − *P*. Using this relation, we obtain the following sharp bound on the survival probability of (*Y**k**x*)*k* ≥ 0 in Λ*n*. [lem:survival-probability-srw] For all *k* such that *n*2 = *o*(*k*), and all *x* ∈ Λ*n*, we have $$\begin{aligned} \mathbb P(Y\_k^x\notin \partial \Lambda\_n) = \widehat \varphi\_{\mathbf{1}}(x) (1-\lambda\_{\mathbf{1}})^k + o(e^{ - \lambda\_{\mathbf{1}}k})\,. \end{aligned}$$ If $(\widetilde Y\_t)\_{t\ge 0}$ is the rate-1 continuous time version of (*Y**k*), then for all *x* ∈ Λ*n* and *t* such that *n*2 = *o*(*t*), $$\begin{aligned} \mathbb P(\widetilde Y\_t^x \notin \partial \Lambda\_n) = \widehat \varphi\_{\mathbf{1}}(x) e^{ - \lambda\_{\mathbf{1}}t} + o(e^{ - \lambda\_{\mathbf{1}}t})\,. \end{aligned}$$ We also recall the size of the Green’s function *G* = *G**n* on Λ*n* which one can find, for instance, in Sections 4 and 8 of . [fact:green-function-srw] For every *x*, *y* ∈ Λ*n*, we have $$\begin{aligned} G(x,y) \le C\log \Big(\frac{n}{|x-y|\vee 1}\Big)\,. \end{aligned}$$ Number of clock rings collected by the random walk in ${\ensuremath{\mathcal P}}$ --------------------------------------------------------------------------------- The first estimates we require are on the number of Poisson points picked up, i.e., the number of jumps made, by the BTRW. Denote the number of such jumps by time  − *t* as $$\begin{aligned} N\_{-t}^{x,{\ensuremath{\mathcal P}}} = \Big|\Big\{s\in {\ensuremath{\mathcal P}}: s\in {{\ensuremath{S}}}\_{[-t,0]}^{x,{\ensuremath{\mathcal P}}}\Big\}\Big|\,. \end{aligned}$$ [lem:number-of-clock-rings] Fix *t* > 0 and *r* > 0. With probability 1 − *C**e*− *r*/*C*, the Poisson field ${\ensuremath{\mathcal P}}\_\infty = {\ensuremath{\mathcal P}}\_{\mathbb Z^2}$ is such that, $$\begin{aligned} \mathbb P\_{\ensuremath{\mathcal E}}\Big( N\_{-t}^{x,{\ensuremath{\mathcal P}}\_\infty} \notin \big(t - r \sqrt{t}, t+ r \sqrt t\big) \mbox{ for some $x\in \Lambda\_n$}\Big) \le C n^2 e^{ - (r^2 \wedge r\sqrt{t})/C}\,. \end{aligned}$$ We will first bound the probability averaged over both the Poisson field and the random-walk jumps and then simply use Markov’s inequality. Namely, we first establish that for every *x* ∈ Λ*n*, $$\begin{aligned} \label{wts:number-of-jumps} \mathbb P\Big(({\ensuremath{\mathcal P}}\_\infty,{\ensuremath{\mathcal E}}^x): N\_{-t}^{x,{\ensuremath{\mathcal P}}\_\infty}\notin (t-r\sqrt{t}, t+ r\sqrt{t})\Big)\le C e^{ - (r^2 \wedge r\sqrt{t})/C}\,. \end{aligned}$$ This will follow from the observation that the law of ${{\ensuremath{S}}}\_{[-t,0]}^{x,{\ensuremath{\mathcal P}}}$, when averaged over ${\ensuremath{\mathcal P}}$, is simply that of a standard continuous-time random walk in Λ*n*. Formally we will construct a coupling of the clock rings the random walk collects, and a Poisson process of intensity 1 on [0, ∞) denoted $\mathcal T = (\mathcal T\_1,{\ensuremath{\mathcal T}}\_2,...)$. We claim that the law of the sequence (*T*1*x*, *T*2*x*, ...) of times of the Poisson points collected by the random walk $S^{x,{\ensuremath{\mathcal P}}\_\infty}$ is identical to the law of $({\ensuremath{\mathcal T}}\_{1},{\ensuremath{\mathcal T}}\_2,...)$. To see this, let ${\ensuremath{\mathcal F}}\_t$ denote the filtration generated by ${\ensuremath{\mathcal P}}\_{\infty,t}$ together with the trajectory $S\_{[-t,0]}^{x,{\ensuremath{\mathcal P}}\_\infty}$, i.e., the restriction of ${\ensuremath{\mathcal P}}\_\infty$ to times in [ − *t*, 0] and the jumps ${\ensuremath{\mathcal E}}^x$ that are used up to time  − *t*. Then observe that conditionally on ${\ensuremath{\mathcal F}}\_{T\_1^x}$, *T**j**x* − *T**j* − 1*x*, is, by the memoryless property of exponential random variables (as well as the independence of the Poisson processes on the different columns {*v*} × ( − ∞, 0] and independence from ${\ensuremath{\mathcal E}}^x$), simply distributed as an Exponential(1), which is the same as the distribution of ${\ensuremath{\mathcal T}}\_j -{\ensuremath{\mathcal T}}\_{j-1}$. With that equality in distribution in hand, $N\_{-t}^{x,{\ensuremath{\mathcal P}}\_\infty}$ is equal in distribution to the number of elements of ${\ensuremath{\mathcal T}}$ that are smaller than *t*, i..e., the number of points of an intensity 1 Poisson process in an interval [0, *t*] which is a Pois(*t*) random variable. Thus follows by the Chernoff bound, $$\begin{aligned} \mathbb P\big(\mbox{Pois}(t) \notin (t-r\sqrt{t}, t+r \sqrt{t})\big) \le \begin{cases} Ce^{ - r^2/C} & r\le \sqrt{t} \\ Ce^{- r\sqrt{t}/C} & r>\sqrt{t} \end{cases}\,. \end{aligned}$$ By a union bound, $$\begin{aligned} \mathbb P\Big(({\ensuremath{\mathcal P}}\_\infty,({\ensuremath{\mathcal E}}^x)\_x): \bigcup\_{x\in \Lambda\_n} \big\{N\_{-t}^{x,{\ensuremath{\mathcal P}}\_\infty}\notin (t-r\sqrt{t}, t+ r\sqrt{t})\big\}\Big)\le C n^2 e^{ - (r^2 \wedge r\sqrt{t})/C}\,. \end{aligned}$$ Now using Markov’s inequality, we find that $$\begin{aligned} \mathbb P\Big({\ensuremath{\mathcal P}}\_\infty: \mathbb P\Big( \bigcup\_{x\in \Lambda\_n} N\_{-t}^{x,{\ensuremath{\mathcal P}}\_\infty}\notin (t-r\sqrt{t}, & t+r\sqrt{t})\Big) >\sqrt{C} n^2 e^{ - (r^2 \wedge r\sqrt{t})/2C} \Big) \\ & \le \frac{C n^2 e^{- (r^2 \wedge r\sqrt{t})/C}}{\sqrt{C}n^2 e^{ - (r^2 \wedge r\sqrt{t})/2C}} \le \sqrt{C}e^{-(r^2 \wedge r\sqrt{t})/2C}\,. \end{aligned}$$ This implies the desired for some other choice of constant *C*. The following lemma establishes that the random-walk through the Poisson noise field ${\ensuremath{\mathcal P}}\_n$ is a time-change of a discrete-time simple random walk, but where the time-change is trajectory dependent. The proof is immediate by taking the coupling of the internal clock of the random walk in Poisson noise, to an independent rate 1 Poisson clock, and then using the same sequence of jumps ${\ensuremath{\mathcal E}}^x$ to generate the coupled simple random walk. [lem:time-change-of-srw] Let *Y**k**x* be the discrete-time simple random walk started from *x* with jump sequence given by ${\ensuremath{\mathcal E}}^x$. Let $\tau\_0(k)= \inf \{t: N\_{-t}^{x,{\ensuremath{\mathcal P}}} = k\}$. Then for any ${\ensuremath{\mathcal P}}$, we have $$\begin{aligned} (S\_{-\tau\_0(k)}^{x,{\ensuremath{\mathcal P}}})\_{k\ge 0} = (Y\_{k}^x)\_{k\ge 0}\,. \end{aligned}$$ In particular, for every *k*, we have $\mathbb P\_{\ensuremath{\mathcal E}}(S\_{-\tau\_0(k)}^{x,{\ensuremath{\mathcal P}}}\in \cdot) = \mathbb P(Y\_k^x\in \cdot)$. Survival probability of random walk in ${\ensuremath{\mathcal P}}$ ------------------------------------------------------------------ In this subsection, we use the bound on the number of steps taken by the random walk in the Poisson noise field from Lemma [lem:number-of-clock-rings] to prove the following sharp bound on the absorption probability of $S\_{-t}^{x,{\ensuremath{\mathcal P}}}$. Recall the top eigenvalue *λ***1** and corresponding eigenvector *φ***1** of the Laplacian on Λ*n* from –, and recall their orders of magnitude from Fact [fact:order-of-magnitudes]. [prop:mean-of-h] Fix any *t* = Θ(*n*2log*n*). With probability 1 − *o*(*n*− 4), ${\ensuremath{\mathcal P}}\_\infty$ is such that for any *x* ∈ Λ*n*, $$\begin{aligned} \mathbb P\_{\ensuremath{\mathcal E}}\big(S\_{-t}^{x,{\ensuremath{\mathcal P}}}\notin \partial \Lambda\_n\big) = (\widehat \varphi\_{\mathbf{1}}(x) +o(1)) e^{ - \lambda\_{\mathbf{1}}t}\,, \end{aligned}$$ recalling that $\widehat \varphi\_{\mathbf{1}}(x) = \varphi\_{\mathbf{1}}(x) \sum\_{y\in \Lambda\_n} \varphi\_{\mathbf{1}}(y)$. For fixed ${\ensuremath{\mathcal P}}\_\infty$, define the event $$\begin{aligned} E\_{t,x,r}^{{\ensuremath{\mathcal P}}\_\infty} = \big\{ N\_{-t}^{x,{\ensuremath{\mathcal P}}\_\infty} \in (t-r\sqrt{t}, t+r\sqrt{t})\big\}\,, \end{aligned}$$ and let $$\begin{aligned} \Gamma = \Big\{{\ensuremath{\mathcal P}}\_\infty: \mathbb P\_{\ensuremath{\mathcal E}}\big( \bigcup\_{x\in \Lambda\_n} (E\_{t,x,r}^{{\ensuremath{\mathcal P}}\_\infty})^c\big)\le Cn^2 e^{ - r/C} \Big\}\,. \end{aligned}$$ Let *r* = (log*n*)2, and notice that by Lemma [lem:number-of-clock-rings], Γ has probability 1 − *o*(*n*− 4). Now fix any ${\ensuremath{\mathcal P}}\_\infty\in \Gamma$, and notice that since ∂Λ*n* is absorbing for the random walk in ${\ensuremath{\mathcal P}}$, $$\begin{aligned} \mathbb P\_{\ensuremath{\mathcal E}}\big( S\_{-t}^{x,{\ensuremath{\mathcal P}}}\notin \partial \Lambda\_n\big) \le \mathbb P\_{\ensuremath{\mathcal E}}\big({{\ensuremath{S}}}\_{[-t,0]}^{x,{\ensuremath{\mathcal P}}\_\infty} \cap \partial \Lambda\_n = \emptyset, E\_{t,x,r}^{{\ensuremath{\mathcal P}}\_\infty}\big) + \mathbb P\_{\ensuremath{\mathcal E}}((E\_{t,x,r}^{{\ensuremath{\mathcal P}}\_\infty})^c)\,. \end{aligned}$$ The second term on the right-hand side is at most *C**n*2*e*− *r*/*C* by definition of Γ. Now consider the first term on the right-hand side and observe that the event $\{{{\ensuremath{S}}}\_{[-t,0]}^{x,{\ensuremath{\mathcal P}}\_\infty}\cap \partial \Lambda\_n = \emptyset\}$ is decreasing in *t*. Thus, we can maximize its probability by replacing *t* with the random time $$\tau\_0 (t-r\sqrt{t})= \inf \{s: N\_{-s}^{x,{\ensuremath{\mathcal P}}\_\infty} = t-r \sqrt{t}\}$$ (as on the event $E\_{x,r,t}^{{\ensuremath{\mathcal P}}\_\infty}$ we necessarily have $\tau\_{0}(t-r\sqrt{t})<t$). In particular, $$\begin{aligned} \mathbb P\_{\ensuremath{\mathcal E}}\big({{\ensuremath{S}}}\_{[-t,0]}^{x,{\ensuremath{\mathcal P}}\_\infty}\cap \partial \Lambda\_n = \emptyset, E\_{t,x,r}^{{\ensuremath{\mathcal P}}\_\infty}\big) & \le \mathbb P\_{\ensuremath{\mathcal E}}\big({{\ensuremath{S}}}\_{[-\tau\_0(t-r\sqrt{t}),0]}^{x,{\ensuremath{\mathcal P}}\_\infty} \cap \partial \Lambda\_n= \emptyset\big) \\ & = \mathbb P\_{\ensuremath{\mathcal E}}\big( S\_{-\tau\_0(t-r\sqrt{t})}^{x,{\ensuremath{\mathcal P}}} \notin \partial \Lambda\_n\big)\,. \end{aligned}$$ Recall from Lemma [lem:time-change-of-srw] that the distribution of $S\_{-\tau\_0(t-r\sqrt{t})}^{x,{\ensuremath{\mathcal P}}}$ is exactly that of $t-r\sqrt{t}$ steps of a simple (discrete-time) random walk (*Y**k**x*)*k* ≥ 0 absorbed at ∂Λ*n*. Then, by Lemma [lem:survival-probability-srw], if we let *r* = (log*n*)2 and *t* = *O*(*n*2log*n*) so that $n^2 = o(t - r\sqrt{t})$, we have $$\begin{aligned} \mathbb P\_{\ensuremath{\mathcal E}}\big(S\_{-\tau\_{0}(t-r\sqrt{t})}^{x,{\ensuremath{\mathcal P}}}\notin \partial \Lambda\_n\big) = \mathbb P (Y\_{t-r\sqrt{t}}^x\notin \partial \Lambda\_n) = (\widehat \varphi\_{\mathbf{1}}(x) +o(1)) e^{ - \lambda\_{\mathbf{1}}(t- r\sqrt{t})}\,. \end{aligned}$$ At this point using the fact that *λ***1** = Θ(*n*− 2) and $\widehat \varphi\_{\mathbf{1}}(x) = O(1)$, the above yields the desired $$\begin{aligned} \mathbb P\_{\ensuremath{\mathcal E}}\big( S\_{-t}^{x,{\ensuremath{\mathcal P}}}\notin \partial \Lambda\_n\big) & \le (\widehat \varphi\_{\mathbf{1}}(x)+o(1)) e^{ - \lambda\_{\mathbf{1}}(t-r\sqrt{t})} + Cn^2 e^{ - r/C} \\ & \le (\widehat \varphi\_{\mathbf{1}}(x)+o(1)) e^{ - \lambda\_{\mathbf{1}}t}\,. \end{aligned}$$ Turning to the matching lower bound, we have $$\begin{aligned} \mathbb P\_{\ensuremath{\mathcal E}}\big( S\_{-t}^{x,{\ensuremath{\mathcal P}}}\notin \partial \Lambda\_n\big) & \ge \mathbb P\_{\ensuremath{\mathcal E}}\big({{\ensuremath{S}}}\_{[-t,0]}^{x,{\ensuremath{\mathcal P}}\_\infty} \cap \partial \Lambda\_n = \emptyset, E\_{t,x,r}^{{\ensuremath{\mathcal P}}\_\infty}\big) \\ & \ge \mathbb P\_{\ensuremath{\mathcal E}}\big(S\_{-\tau\_0(t+r\sqrt{t})}^{x,{\ensuremath{\mathcal P}}}\notin \partial\Lambda\_n = \emptyset, E\_{t,x,r}^{{\ensuremath{\mathcal P}}\_\infty}\big)\,, \end{aligned}$$ using the fact that on $E\_{t,x,r}^{{\ensuremath{\mathcal P}}\_\infty}$, we have $\tau\_0(t+r\sqrt{t}) > t$ (and the event $S\_{[-t,0]}^{x,{\ensuremath{\mathcal P}}\_\infty}\cap \partial \Lambda\_n = \emptyset$ is decreasing in *t*). Fixing any ${\ensuremath{\mathcal P}}\_\infty\in \Gamma$, this is in turn at least $$\begin{aligned} \mathbb P\_{\ensuremath{\mathcal E}}\big(S\_{-\tau\_0(t+r\sqrt{t})}^{x,{\ensuremath{\mathcal P}}}\notin \partial\Lambda\_n = \emptyset\big)- \mathbb P\_{\ensuremath{\mathcal E}}((E\_{t,x,r}^{{\ensuremath{\mathcal P}}\_\infty})^c) \ge \mathbb P (Y\_{t+r\sqrt{t}}^x \notin \partial \Lambda\_n) - Cn^2 e^{ - r/C}\,. \end{aligned}$$ By Lemma [lem:survival-probability-srw], and the choices *r* = (log*n*)2 and *t* = *O*(*n*2log*n*), this is at least $$\begin{aligned} (\widehat \varphi\_{\mathbf{1}}(x) -o(1)) e^{ - \lambda\_{\mathbf{1}}(t+r\sqrt{t})} - Cn^2 e^{ - r/C}\,, \end{aligned}$$ which, using the choices of *r*, *t* and the fact that *λ***1** = Θ(*n*− 2) is $(\widehat \varphi\_{\mathbf{1}}(x)-o(1)) e^{ - \lambda\_{\mathbf{1}}t}$ as desired. Annealed survival probability estimates --------------------------------------- We conclude this section by using the above quenched estimates to also obtain an annealed estimate on the mean decay for the DGFF Glauber dynamics for arbitrary initialization. [lem:annealed-expectation-against-random-walk] For any non-negative function $f:\Lambda\_n \to\mathbb R\_+$, for every *t* ≥ *n*2 we have $$\begin{aligned} \mathbb E\_{{\ensuremath{\mathcal P}}}\Big[\sum\_{x\in \Lambda\_n} \mathbb E\_{{\ensuremath{\mathcal E}}}\big[f(S\_{-t}^{x,{\ensuremath{\mathcal P}}})]\Big] \le Ce^{ - \lambda\_{\mathbf{1}}(t-n^2)} \sum\_{x\in \Lambda\_n} f(x)\,. \end{aligned}$$ where $\mathbb E\_{{\ensuremath{\mathcal P}}}$ denotes expectation with respect to the Poisson update sequence. By linearity of expectation, the left-hand side is $$\begin{aligned} \mathbb E\_{{\ensuremath{\mathcal P}}}\Big[\sum\_{x\in \Lambda\_n} \mathbb E\_{{\ensuremath{\mathcal E}}}\big[f(S\_{-t}^{x,{\ensuremath{\mathcal P}}})]\Big] = \sum\_{x\in \Lambda\_n} \sum\_{y\in \Lambda\_n} f(y) \mathbb E\_{{\ensuremath{\mathcal P}}} \mathbb P\_{{\ensuremath{\mathcal E}}}(S\_{-t}^{x,{\ensuremath{\mathcal P}}}= y)\,. \end{aligned}$$ The expectation over ${\ensuremath{\mathcal P}}$ of the probability $\mathbb P\_{{\ensuremath{\mathcal E}}}(S\_{-t}^{x,{\ensuremath{\mathcal P}}} = y)$ is, by the reasoning presented in Lemma [lem:number-of-clock-rings], exactly the probability for a standard continuous-time random walk $(\widetilde Y\_t^x)\_{t\ge 0}$ with rate-one Poisson jump times and absorbed at ∂Λ*n*. It therefore suffices to show that $$\begin{aligned} \label{eq:nts-one-point-hitting-probability} \mathbb P\_{{\ensuremath{\mathcal E}}}(\widetilde Y\_t^x =y) \le \frac{C}{n^2}e^{ - \lambda\_{\mathbf{1}}t}\,. \end{aligned}$$ To see this, observe that by the Markov property, we can bound the left-hand side as $$\begin{aligned} \mathbb P\_{{\ensuremath{\mathcal E}}}(\widetilde Y\_{t-n^2}^x \notin \partial \Lambda\_n ) \max\_{z\in \Lambda\_n} \mathbb P\_{{\ensuremath{\mathcal E}}}(\widetilde Y\_{n^2}^z = y)\,. \end{aligned}$$ The first term here is $\widehat \varphi\_{\mathbf{1}}(x) (1+o(1)) e^{ - \lambda\_{\mathbf{1}}(t-n^2)}$ per Lemma [lem:survival-probability-srw]. The second term is upper bounded by, say, $\mathbb P\_{{\ensuremath{\mathcal E}}}(\bar Y\_{n^2}^z = y)$ where $(\bar Y\_t^z)$ is not killed at ∂Λ*n*; by a local central limit theorem, this probability is at most $\frac{C}{n^2}$ for some universal constant *C* as claimed. The volume supermartingale ========================== The bounds of Proposition [prop:mean-of-h] and Lemma [lem:annealed-expectation-against-random-walk] imply an exponential decay with rate Θ(*n*− 2) for the mean of the DGFF dynamics per Proposition [prop:mean-and-variance-of-h]. This would already be essentially sufficient to deduce an *O*(*n*2log*n*) upper bound on the mixing time. Indeed this follows from what is essentially a union bound on the probability of not coupling in one sweep (a time period in which every site gets updated at least once) after the mean $\mathbb E[{{\ensuremath{\vec{h}}}}(t)]$ is *o*(*n*− 2) everywhere. However, this would not attain the correct mixing time of *t*⋆ + *o*(*n*2log*n*), which corresponds to the time when this mean process is simply *o*(1) everywhere. In order to show the right mixing time upper bound, we switch to a martingale based argument building on that appearing in . The argument aims to couple two DGFF chains, one initialized from the maximal, all-*n*, configuration, with one initialized from some *g⃗* for some arbitrary *g⃗* having ∥*g⃗*∥∞ ≤ *n*. Since in this section, we will be tracking the evolution of these chains simultaneously, and aiming to couple them to one another, we emphasize the initialization dependence in our notation, using *h⃗**n⃗* to indicate the former chain, and *h⃗**g⃗* to indicate the latter. The fundamental quantity in the argument is the *volume supermartingale* given by $$\begin{aligned} \label{eq:V-t} V\_t = \sum\_{x\in \Lambda\_n} dh\_x(t)\,, \qquad \mbox{where} \qquad dh\_x(t) = h^{\vec{n}}(x,t) - h^{{{\ensuremath{\vec{g}}}}}(x,t)\,. \end{aligned}$$ The terminology of calling this process a *supermartingale* is justified by Claim [clm:V-t-supermartingale]. For each *g⃗*, we construct a coupling of *h**n⃗*(*t*) and *h**g⃗*(*t*) such that *d**h**x*(*t*) ≥ 0 holds for all *x* ∈ Λ*n* and all *t* ≥ 0, and such that at some *t* = *t*⋆ + *O*(*n*2loglog*n*), we have $$\begin{aligned} \mathbb P(V\_t \ne 0) = o(1)\,. \end{aligned}$$ This will straightforwardly imply that the mixing time maximized over all initializatons ∥*g⃗*∥∞ ≤ *n* is at most *t*⋆ + *O*(*n*2loglog*n*) as claimed in Theorem [thm:main-revised]. Our approach to proving this is to stitch together two different couplings of the dynamics that preserve the monotonicity of the dynamics. The first of these couplings is the identity coupling defined by Remark [rem:identity-coupling]. This will be used for a time of *t*⋆ + *C*1*n*2loglog*n* to get the volume supermartingale down to a size of *o*(*n*2(log*n*)− 5), corresponding to an average discrepancy of *o*((log*n*)− 5) per site. [lem:stage-1] Under the identity coupling, $$\begin{aligned} \mathbb P\big( dh\_x(t\_\star + sn^2) \ge Ce^{-\pi^2 s/2} \mbox{ for some $x\in \Lambda\_n$}\big) = o(1)\,. \end{aligned}$$ In particular, we have $$\begin{aligned} \mathbb P\big(V\_{t\_\star + sn^2}\le n^2 e^{ - \pi^2 s/2}) = 1-o(1)\,. \end{aligned}$$ However, given the continuous nature of the state space, the identity coupling will never actually result in coalescence of the two Markov chain realizations. Moreover, even getting *V**t* to *o*(1) scales so that the *total* discrepancy is truly negligible, takes a further Θ(*n*2log*n*) time under that coupling, destroying any chance of proving cutoff. To address this, we use a second coupling, called the sticky coupling, that will be used to make the two chains coalesce in only a further *O*(*n*2loglog*n*) time. [def:two-stage-coupling] For two chains *h⃗**n⃗* and *h⃗**g⃗* with ∥*g⃗*∥∞ ≤ *n*, the *two-stage coupling* of (*h⃗**n⃗*(*t*), *h⃗**g⃗*(*t*))*t* ≥ 0 is the following. 1. For $T\_0 := t\_\star + \frac{12}{\pi^2}n^2 \log \log n$, couple (*h⃗**n⃗*(*t*), *h⃗**g⃗*(*t*))*t* ∈ [0, *T*0] using the identity coupling. 2. Then, for all *t* ≥ *T*0, evolve (*h⃗**n⃗*(*t*), *h⃗**g⃗*(*t*)) using the sticky coupling which we will define shortly in Definition [def:DGFF-sticky-coupling]. With the two-stage coupling defined, the main result of this section is the following. [prop:V-t-down-to-polylog-scales] There exists *C* such that the following holds. For every *g⃗* such that ∥*g⃗*∥∞ ≤ *n*, under the two-stage coupling, $$\begin{aligned} \mathbb P\big( V\_{t\_\star + Cn^2 \log \log n} \le (\log n)^{5}\big) = 1-o(1)\,. \end{aligned}$$ Note that there is still a remaining piece of the argument to get *V**t* to actually hit zero. That step is reasonably straightforward and we will return to it in the following section; the remainder of this section will be dedicated to introducing the *sticky coupling* and then establishing Proposition [prop:V-t-down-to-polylog-scales]. The sticky coupling ------------------- We begin by describing the *sticky coupling* of two Gaussian random variables, which is an *optimal coupling* in the sense that it maximizes the probability that the two random variables agree. For two Gaussian random variables $X\sim {\ensuremath{\mathcal N}}(-\mu,1)$ and $Y\sim {\ensuremath{\mathcal N}}(\mu,1)$, the sticky coupling P*μ* is given as follows. Define the sub-probability measures $$\begin{aligned} \label{eq:nu-1} d\nu\_0^\mu(x) = (2\pi)^{-1/2} \big(e^{ - (x-\mu)^2/2}\wedge e^{ - (x+\mu)^2/2}\big)dx\,, \end{aligned}$$ and $$\begin{aligned} \label{eq:nu-2-3} d\nu\_1^\mu(x)= (2\pi)^{-1/2}\big(e^{ - (x-\mu)^2/2} - e^{ - (x+\mu)^2/2}\big){\mathbf{1}}\_{\{x\ge 0\}}dx\,. \end{aligned}$$ (Note that *d**ν*0*μ* + *d**ν*1*μ* is the law of ${\ensuremath{\mathcal N}}(\mu,1)$.) Then draw (*X*, *Y*) ∼ P*μ* as follows: $$\begin{aligned} \begin{cases} X=Y \sim \nu\_0^\mu & \quad \mbox{w.prob. } \nu\_0^\mu(\mathbb R) \\ -X = Y \sim \nu\_1^\mu & \quad \mbox{w.prob. } \nu\_1^\mu(\mathbb R) \end{cases}\,. \end{aligned}$$ It can be easily seen both that this coupling is monotone in the sense that *Y* ≥ *X*, and that this coupling is an *optimal* coupling of two Gaussians, i.e., $\mathbb P(X\ne Y) = \|N(-\mu,1) - N(\mu,1)\|\_{{\textsc{tv}}}$, which we recall is exactly $\operatorname{erf}(\mu/\sqrt{2})$. (Actually the details of how (*X*, *Y*) are coupled on the complement of *ν*0*μ* is not important but the choice we make is for concreteness.) Further, the above can be immediately generalized to the following. [def:Gaussian-sticky-coupling] The sticky coupling P between two Gaussian random variables *Z**a* ∼ *N*(*a*, 1) and *Z**b* ∼ *N*(*b*, 1) for *a* ≤ *b* is defined by taking (*X*, *Y*) ∼ P(*b* − *a*)/2 and drawing $Z\_a = X+\frac{a+b}{2}$ and $Z\_b = Y+\frac{a+b}{2}$. With this in hand, we can now define a sticky coupling of the Glauber dynamics for the DGFF, using P as its building block. [def:DGFF-sticky-coupling] The sticky coupling P between two DGFF Glauber dynamics chains *h⃗**g⃗* and *h⃗**g⃗*ʹ is defined by using the same Poisson noise for the update sequence, and whenever the height at a site *x* ∈ Λ*n* is being updated, using the sticky coupling of Definition [def:Gaussian-sticky-coupling] on the two Gaussian updates corresponding to the two dynamics, at *x*. By monotonicity of P*μ*, the sticky coupling of the DGFF chains is monotone i.e., if *g⃗* ≤ *g⃗*ʹ pointwise, then *h⃗**g⃗*(*t*) ≤ *h⃗**g⃗*ʹ(*t*) pointwise for all *t*. In particular, under the sticky coupling of *h⃗**n⃗* and *h⃗**g⃗*, we have that *d**h**x*(*t*) and *V**t* are non-negative for all *t*. At this point we make a simple but useful observation. Unlike the identity coupling, the sticky coupling lends itself to discrepancies *d**h**x*(*t*) that are either zero or order-one-sized. We will use the following simple calculus exercise to prove this. [lem:domination-of-bernoullis] Let *ν*1*μ* be as in . There exists a universal constant 0 < *ζ* ≤ 1 such that if *Y* is drawn as $\bar \nu\_1^\mu$ (*ν*1*μ* normalized to be a probability measure), then $$\begin{aligned} Y\succeq \zeta \cdot \mbox{Ber}(\zeta) \qquad \mbox{for all $\mu>0$}\,, \end{aligned}$$ where  ≽  denotes stochastic domination. Evidently, $\bar \nu\_1^\mu$ is supported on $\mathbb R\_+$. It therefore suffices for us to show that there exist *η*, *δ* > 0 such that $$\begin{aligned} \label{eq:nu-1-density-lower-bound} \inf\_{x\in [\mu+\eta, \mu+2\eta]} \frac{d\bar\nu\_1^\mu(x)}{dx} \ge \delta \qquad \mbox{uniformly over all $\mu>0$}\,. \end{aligned}$$ This is sufficient via the choice of *ζ* = min{*η*, *δ**η*} by non-negativity of *μ*. We first consider *μ* small. Consider the scaling of the normalization factor $\nu\_1^\mu(\mathbb R\_+)$ in *μ*. For a standard normal random variable *Z*, we have $$\begin{aligned} \nu\_1^\mu (\mathbb R\_+) = \mathbb P(Z\ge -\mu) - \mathbb P (Z\ge \mu) = \operatorname{erf}(\frac{\mu}{\sqrt 2}) = \frac{\sqrt{2}}{\sqrt{\pi}}\mu(1+o\_\mu(1))\, \end{aligned}$$ where the *o**μ*(1) term goes to zero as *μ* ↓ 0. By differentiating, we simultaneously have that, as long as *x* ∈ *K* for some compact set *K*, $$\begin{aligned} \frac{1}{\mu}(e^{ - (x-\mu)^2/2}- e^{ - (x+\mu)^2/2}) = xe^{-x^2/2}(1+o\_\mu(1)) \end{aligned}$$ where the *o**μ*(1) term is uniform over all *x* ∈ *K*. Therefore, $$\begin{aligned} \inf\_{x\in [\mu+\eta, \mu+2\eta]} d\bar \nu\_1^\mu(x) = \frac 12(1+o\_\mu(1)) xe^{- x^2/2} \ge \frac 12(1+o\_\mu(1)) \eta e^{ -2\eta^2}\,. \end{aligned}$$ As a consequence, for any *η*, there exists a choice of *δ* such that uniformly over all *μ* < *μ*0 for some *μ*0, the right-hand side is at least that *δ*. Let us now consider *μ* ≥ *μ*0. For all such *μ*, we have for all *x* ∈ [*μ* + *η*, *μ* + 2*η*], $$\begin{aligned} d\bar \nu\_1^\mu(x) \ge d\nu\_1^\mu(x) & = (2\pi)^{-1/2} (e^{-2\eta^2} - e^{ -2\mu^2}) \end{aligned}$$ at which point as long as *η* < *μ*0/2, there will be a uniform choice of *δ* such that this is at least *δ* uniformly over *μ* ≥ *μ*0. Together, these imply the claimed . Regularity estimates -------------------- Before using the above property of the sticky coupling to contract the volume supermartingale in an *O*(*n*2loglog*n*) time, we will need some a priori regularity estimates on the various processes *h⃗**n⃗*, *h⃗**g⃗* and *d**h⃗* that will hold for all *t* ≥ *t*⋆. We begin by proving the claimed upper bound on *V**t* after the identity coupling phase. [***Proof of Lemma [lem:stage-1]***] For any two initializations *h⃗*1(0) and *h⃗*2(0), if the corresponding DGFF dynamics ${{\ensuremath{\vec{h}}}}\_i^{\ensuremath{\mathcal P}}(t)$ are coupled via the identity coupling, then by, $$\begin{aligned} {{\ensuremath{\vec{h}}}}\_1^{{\ensuremath{\mathcal P}}}(t) - \mathbb E[{{\ensuremath{\vec{h}}}}\_1^{{\ensuremath{\mathcal P}}}(t)] = {{\ensuremath{\vec{h}}}}\_2^{{\ensuremath{\mathcal P}}}(t) - \mathbb E[{{\ensuremath{\vec{h}}}}\_2^{{\ensuremath{\mathcal P}}}(t)]\,. \end{aligned}$$ Thus, conditional on any Poisson update sequence ${\ensuremath{\mathcal P}}$, under the identity coupling, we have $$\begin{aligned} dh\_x^{\ensuremath{\mathcal P}}(t) = h\_x^{{{\
arxiv_0000086
 1}, while *S* = *T**H**b**h**v*, so [*T* : *S*] = 2. Since *p*, *q* and *r* have odd order, they must be elements of *S*. It is routine to verify that the involution *t*− 1*h*2 ∈ *S* and ⟨*p*, *q*, *r*, *t*− 1*h*2⟩ ≃ *C*33 : *C*2, so *S* = ⟨*p*, *q*, *r*, *t*− 1*h*2⟩. Now *S* fixes the vertex *H**b**h**b**h**v* ∈ Γ(*H**b**h**v*) \ {*H*}, so Γ is not 7-CH. Next we prove that Γ is 6-CH. Since Γ is 7-arc-transitive, we only need to consider those connected induced subgraphs of Γ of order at most 6 that are not path graphs; these are described in Figure [fig:6CH]. For each induced subgraph Δ of Γ in Figure [fig:6CH], ![The graphs for 6-connected-homogeneity](config "fig:") [fig:6CH] the vertices of some connected induced subgraph Σ of order ∣*V*Δ∣ − 1 have been labelled with elements of *V*Γ. To show that Γ is 6-CH, by Lemma [lemma:detkCH], it suffices to verify the following: for each Δ in Figure [fig:6CH], the pointwise stabiliser *P* in *G* of *V*Σ is transitive on Γ(*H*) \ *V*Σ. Cases (b) and (d) are trivial since ∣Γ(*H*) \ *V*Σ∣ = 1. In all remaining cases, Γ(*H*) \ *V*Σ = {*H**b**h**v*, *H**b**h**u*− 1}. For cases (a), (c), (e) and (f), the element *r*− 1*s*− 1*t*− 1*u**v* lies in *P* and maps *H**b**h**u*− 1 to *H**b**h**v*. In case (g), the element (*t*− 1*h*2)(*t*− 1*u**v*) lies in *P* and maps *H**b**h**u*− 1 to *H**b**h**v*. If Γ is the incidence graph of the split Cayley hexagon of order (3, 3)—i.e., the generalised hexagon associated with the group *G*2(3)—then Γ is 7-arc-transitive with valency 4 , so Γ is 6-CH but not 7-CH by Proposition [prop:7trans]. In order to prove Theorem [thm:girth5], we first establish two more detailed results: Theorem [thm:girth5diam2] concerns those graphs with diameter 2, while Theorem [thm:girth5strong] concerns those graphs with diameter at least 3. Observe that if Γ is a finite connected graph with valency 2, then Γ ≃ *C**n* for some *n*, so in what follows, we focus on the case where Γ has valency at least 3. First we consider the case where Γ has diameter 2. The Hoffman-Singleton graph is a strongly regular graph with parameters (50, 7, 0, 1) and girth 5; it has automorphism group PΣU3(5) and point stabiliser *S*7. See for several constructions of this graph. [thm:girth5diam2] Let Γ be a finite connected graph with girth at least 5, valency at least 3, and diameter 2. If Γ is 3-CH, then one of the following holds. * Γ is the Petersen graph. Here Γ is CH. * Γ is the Hoffman-Singleton graph. Here Γ is 5-CH but not 6-CH. Since Γ has girth at least 5, it is locally (*t* + 1) ⋅ *K*1 for some *t* ≥ 2, and *c*2(Γ) = 1. Since Γ has diameter 2, it follows that *a*2(Γ) = *t*. In particular, Γ has girth 5, so Γ is a Moore graph (see ). Since Γ is 3-CH, it is distance-transitive, so either *t* = 2 and Γ is the Petersen graph, in which case (i) holds by Theorem [thm:CH], or *t* = 6 and Γ is the Hoffman-Singleton graph . Suppose that Γ is the Hoffman-Singleton graph. Let *u* ∈ *V*Γ, write Γ(*u*) = {*v*, *x*, *x*1…, *x*5}, and let *w*1, *w*2 ∈ Γ2(*u*) ∩ Γ(*v*). Let Δ*i* be the subgraph of Γ induced by {*u*, *v*, *x*, *w*1, *w*2, *x**i*} for *i* ∈ {1, …, 5}. For each *i*, the graph Δ*i* is isomorphic to the tree on 6 vertices with two vertices of valency 3. However, using Magma , we determine that there exists *i* ∈ {1, …, 5} such that the pointwise stabiliser in Aut(Γ) of {*u*, *v*, *x*, *w*1, *w*2} also fixes *x**i*, so Γ is not 6-CH. Using , it is routine to verify that Γ is 5-CH (see Remark [remark:magma]). Next we consider the case where Γ has diameter at least 3. For *n* ≥ 4, the odd graph *O**n* (see ) has girth 6, valency *n* and diameter *n* − 1, and this graph is 4-CH but not 5-CH, as it is not 4-arc-transitive. On the other hand, we will see shortly that the valency of a 5-CH graph with girth at least 5 is very restricted. We will also see that the only 5-CH graphs with girth at least 5 and diameter 3 are the incidence graphs of the projective planes PG2(*q*) for 2 ≤ *q* ≤ 4. Note that the incidence graph of the Fano plane PG2(2) is often called the Heawood graph. Recall the definition of an *s*-transitive graph from §[s:defn]. [thm:girth5strong] Let Γ be a finite connected 5-CH graph with girth at least 5, valency *n* ≥ 3, and diam(Γ) ≥ 3. Then 3 ≤ *n* ≤ 5, Γ is 4-arc-transitive, and one of the following holds. * diam(Γ) = 3 and Γ is the incidence graph of the projective plane PG2(*q*) for 2 ≤ *q* ≤ 4. Here Γ is 6-CH but not 7-CH. * diam(Γ) = 4 and Γ is the incidence graph of the generalised quadrangle *W*3(*q*) for *q* = 2 or 4. Here Γ is 6-CH but not 7-CH. * diam(Γ) ≥ 5 and Γ is not 7-CH. Further, if Γ is 6-CH, then either Γ is 5-transitive and *n* = 3 or 5, or Γ is 7-transitive and *n* = 4. Let *G* :  = Aut(Γ), let *u* ∈ *V*Γ and let *H* :  = *G**u*Γ(*u*). The graph Γ is locally (*t* + 1) ⋅ *K*1 for some *t* ≥ 2. Now *H* is an *r*-transitive permutation group of degree *t* + 1 where *r* :  = min(*t* + 1, 4), so by Theorem [thm:4trans], either *A**t* + 1 ⊴ *H* ≤ *S**t* + 1, or *H* is the simple group M*t* + 1 where *t* + 1 ∈ {11, 12, 23, 24}. Let (*u*0, …, *u*4) be a 4-arc. Since Γ has diameter at least 3, it has girth at least 6 by Lemma [lemma:c2=1](i), so the subgraph induced by {*u*0, …, *u*4} is a path graph with 5 vertices. Thus Γ is 4-arc-transitive. In particular, Γ is *s*-transitive for some *s* ≥ 4, so by , PSL2(*t*) ⊴ *H* and *t* is a power of a prime ℓ such that either *s* = 4, or *s* = 2ℓ + 1 and ℓ ≤ 3. If *t* ≥ 5, then *H* has a unique non-abelian composition factor, namely *A**t* + 1 or *M**t* + 1, but neither of these groups is isomorphic to the simple group PSL2(*t*), a contradiction. Thus *t* ∈ {2, 3, 4} and *n* ∈ {3, 4, 5}. If diam(Γ) = 3, then Γ is distance-transitive with intersection array {*t* + 1, *t*, *t*; 1, 1, *c*3} where *c*3 = 1 or *t* + 1 by Lemma [lemma:c2=1]. We claim that Γ is the incidence graph of the projective plane PG2(*t*). If *t* = 2, then Γ is the Heawood graph by , so the claim holds. If *t* = 3 or 4, then Γ is the point graph of a generalised hexagon of order (1, *t*) by , so Γ is the incidence graph of a projective plane of order *t* (see ); since PG2(*t*) is the unique projective plane of order *t* when *t* = 3 or 4, the claim holds. Next we show that Γ is not 7-CH. Choose a point *p* of PG2(*t*), and let ℓ1, ℓ2 and ℓ3 be pairwise distinct lines on *p*. Let *q*1 and *q*2 be points on ℓ1 \ {*p*} and ℓ2 \ {*p*} respectively. Let ℓ be the unique line on *q*1 and *q*2, and let *q*3 be the unique point on ℓ and ℓ3. For *x* ∈ ℓ3 \ {*p*}, let Δ*x* be the subgraph of Γ induced by {*p*, ℓ1, ℓ2, ℓ3, *q*1, *q*2, *x*}, and observe that Δ*x* ≃ Δ*y* for all *y* ∈ ℓ3 \ {*p*}. Observe also that *q*3 ∈ ℓ3 \ {*p*} and ∣ℓ3 \ {*p*}∣ = *t* ≥ 2. Suppose that *g* ∈ Aut(Γ) fixes {*p*, ℓ1, ℓ2, ℓ3, *q*1, *q*2} pointwise. Note that *g* is a collineation of Γ; that is, *g* maps points to points and lines to lines. Since *q*1 and *q*2 are distinct points on ℓ, we must have ℓ*g* = ℓ. Now *q*3*g* lies on ℓ*g* = ℓ and ℓ3*g* = ℓ3, so *q*3*g* = *q*3. Thus Γ is not 7-CH. It is routine to verify that Γ is 6-CH, so (i) holds. If diam(Γ) = 4, then Γ is distance-transitive with intersection array {*t* + 1, *t*, *t*, *t*; 1, 1, 1, *c*4} by Lemma [lemma:c2=1], so Γ is the incidence graph of the generalised quadrangle *W*3(*t*) where *t* ∈ {2, 4} by . We may view the points and lines of *W*3(*t*) as points and lines respectively of PG2(*t*), and the above proof shows that Γ is not 7-CH (even though the line ℓ is not a line of *W*3(*t*)). It is routine to verify that Γ is 6-CH, so (ii) holds. We may therefore assume that diam(Γ) ≥ 5. Now *a**i* = 0 and *c**i* = 1 for 1 ≤ *i* ≤ 3 by Lemma [lemma:c2=1], so Γ has girth at least 8. Thus the subgraph induced by a 5-arc (or 6-arc) is a path graph with 6 (or 7) vertices. If Γ is 6-CH, it follows that Γ is 5-arc-transitive, and since *s* = 2ℓ + 1 where 2 ≤ ℓ ≤ 3 and *t* is a power of ℓ, we conclude that either Γ is 5-transitive and *n* = 3 or 5, or Γ is 7-transitive and *n* = 4. Similarly, if Γ is 7-CH, then Γ is 6-arc-transitive and therefore 7-arc-transitive with valency 4, but no such graph exists by Proposition [prop:7trans]. Thus (iii) holds. [Proof of Theorem $\ref{thm:girth5}$] (i) Let Γ be a finite 7-CH graph with girth at least 5. By Remark [remark:connected], we may assume that Γ is connected. If Γ has valency 0 or 1, then Γ is CH. If Γ has valency 2, then Γ ≃ *C**n* for some *n* ≥ 5, so Γ is CH. If Γ has valency at least 3, then by Theorems [thm:girth5diam2] and [thm:girth5strong], Γ is the Petersen graph and Γ is CH. (ii) By , there are infinitely many finite connected quartic 7-arc-transitive graphs, all of which have girth at least 12 by Lemma [lemma:girth], and by Proposition [prop:7trans], any such graph is 6-CH but not 7-CH. (iii) Let Γ be a finite graph with valency 4 and girth at least 7. Note that Γ is 7-arc-transitive if and only if Γ is a disjoint union of connected 7-arc-transitive graphs, all of which are isomorphic, so by Remark [remark:connected], we may assume that Γ is connected. Note that the Petersen graph has girth 5, and the incidence graph of PG2(*q*) has girth 6. If Γ is 6-CH, then by Theorems [thm:girth5diam2] and [thm:girth5strong], Γ is 7-arc-transitive. Conversely, if Γ is 7-arc-transitive, then Γ is 6-CH by Proposition [prop:7trans]. We finish this section with the following observation. [prop:4trans] Any finite cubic 4-arc-transitive graph is 5-CH. Let Γ be a finite 4-arc-transitive graph with valency 3. By Lemma [lemma:girth], Γ has girth at least 6, so the only connected induced subgraphs of Γ with order at most 5 that are not *s*-arcs are the trees with order 4 and 5 that contain a vertex of valency 3. Since Γ has valency 3, it follows from Lemma [lemma:detkCH] that Γ is 5-CH. Locally disconnected graphs with girth 3 and *c*2 = 1 ===================================================== Recall that a finite 3-CH graph Γ is locally disconnected with girth 3 if and only if Γ is locally (*t* + 1) ⋅ *K**s* for some integers *t* ≥ 1 and *s* ≥ 2 (see Lemma [lemma:3CH]). In this section, we consider such graphs Γ for which *c*2 = 1. In §[ss:t=1], we consider the case where *t* = 1; in particular, we prove Theorem [thm:2Ks]. In §[ss:general], we briefly consider the case where *t* ≥ 2. The case where *t* = 1 ---------------------- [lemma:linegraph] Let Γ be a finite connected graph. For *s* ≥ 2, the following are equivalent. * Γ is locally 2 ⋅ *K**s* and *c*2(Γ) = 1. * Γ is the line graph of a finite connected graph with girth at least 5 and valency *s* + 1. If (i) holds, then (ii) holds by . Conversely, if (ii) holds, then it is routine to verify that (i) holds. Let Γ1 and Γ2 be finite connected graphs where $E\Gamma\_1\neq\varnothing$. If *φ* : Γ1 → Γ2 is an isomorphism, then there is a natural isomorphism *φ̂* : *L*(Γ1) → *L*(Γ2) defined by {*u*, *v*} ↦ {*u**φ*, *v**φ*} for all {*u*, *v*} ∈ *E*Γ1. Conversely, if Γ1 and Γ2 have at least five vertices, and if *ψ* : *L*(Γ1) → *L*(Γ2) is an isomorphism, then there exists a unique isomorphism *φ* : Γ1 → Γ2 such that *ψ* = *φ̂* by . It is routine to verify that this also holds when Γ1 and Γ2 have at least three vertices and do not contain any triangles. In particular, if Γ is a finite connected regular graph with girth at least 5 and valency at least 3, then Γ contains a cycle with length at least 5, so there is a group isomorphism of Aut(Γ) onto Aut(*L*(Γ)) defined by *g* ↦ *ĝ* for all *g* ∈ Aut(Γ). [lemma:linegraphCH] Let Γ be a finite connected regular graph with girth at least 5 and valency at least 3. For *k* ≥ 2, the following are equivalent. * The line graph *L*(Γ) is *k*-CH. * Γ is (*k* + 1)-CH and has girth at least *k* + 2. Suppose that *L*(Γ) is *k*-CH. The graph Γ has girth at least 5 and valency at least 3, so there exists an induced subgraph Δ of Γ such that Δ is a path graph with 5 vertices. Write *E*Δ = {*e*1, *e*2, *e*3, *e*4} where *e**i* is incident with *e**i* + 1 for 1 ≤ *i* ≤ 3. Now *L*(Δ) is a path graph with 4 vertices that is an induced subgraph of *L*(Γ), so if diam(*L*(Γ)) = 2, then there exists *e* ∈ *E*Γ such that *e* is incident with *e*1 and *e*4, but then *e* ∈ *E*Δ, a contradiction. Thus diam(*L*(Γ)) ≥ 3. Let *r* be the girth of Γ. Now Γ has an induced subgraph Δʹ such that Δʹ ≃ *C**r*, so *L*(Δʹ) is an induced subgraph of *L*(Γ) that is isomorphic to *C**r*. Let Γ have valency *s* + 1, and note that *s* ≥ 2. By Lemma [lemma:linegraph], *L*(Γ) is locally 2 ⋅ *K**s* and *c*2(*L*(Γ)) = 1. Thus *r* ≥ *k* + 2 by Lemma [lemma:cyclebig]. Now we prove that Γ is (*k* + 1)-CH. Let Δ1 and Δ2 be connected induced subgraphs of Γ where 3 ≤ ∣*V*Δ1∣ ≤ *k* + 1, and let *φ* : Δ1 → Δ2 be an isomorphism. Since Γ has girth at least *k* + 2, the graph Δ1 is a tree and therefore has at most *k* edges. Now *L*(Δ1) and *L*(Δ2) are connected induced subgraphs of *L*(Γ) with order at most *k*, and *φ̂* : *L*(Δ1) → *L*(Δ2) is an isomorphism, so there exists *g* ∈ Aut(Γ) such that *φ̂* extends to *ĝ* ∈ Aut(*L*(Γ)), whence *φ* extends to *g*, as desired. In particular, we have proved that Γ is 2-arc-transitive; since Γ has valency at least 2, it follows that Γ is 2-CH and therefore (*k* + 1)-CH. Thus (ii) holds. Conversely, suppose that Γ is (*k* + 1)-CH and has girth at least *k* + 2. Let Δ1 and Δ2 be connected induced subgraphs of *L*(Γ) with order at most *k*, and let *ψ* : Δ1 → Δ2 be an isomorphism. It is clear that *L*(Γ) is vertex-transitive, so we may assume that ∣Δ1∣ ≥ 2. Let Σ*i* be the subgraph of Γ induced by the edges in *V*Δ*i*. Now Σ*i* is connected and *L*(Σ*i*) = Δ*i*. In particular, there exists an isomorphism *φ* : Σ1 → Σ2 such that *ψ* = *φ̂*. Since Σ*i* has ∣*V*Δ*i*∣ edges and Γ has girth at least *k* + 2, it follows that Σ*i* is a tree, so ∣*V*Σ*i*∣ = ∣*V*Δ*i*∣ + 1. If Σ*i* is not a (vertex) induced subgraph of Γ, then the graph induced by *V*Σ*i* contains a cycle of length at most *k* + 1, a contradiction. Hence there exists *g* ∈ Aut(Γ) such that *φ* extends to *g*, in which case *ψ* = *φ̂* extends to *ĝ* ∈ Aut(*L*(Γ)). Thus (i) holds. By Lemma [lemma:linegraphCH], any result from §[s:girth5] can be reinterpreted for locally 2 ⋅ *K**s* graphs where *s* ≥ 2 and *c*2 = 1. For example, if Γ is the line graph of the incidence graph of PG2(*q*) for 2 ≤ *q* ≤ 4, then Γ is 4-CH but not 5-CH (since the incidence graph of PG2(*q*) has girth 6). Similarly, if Γ is the line graph of the incidence graph of either *W*3(*q*) for *q* = 2 or 4, or the split Cayley hexagon of order (3, 3), then Γ is 5-CH but not 6-CH. The following result is an easy consequence of Theorems [thm:girth5diam2] and [thm:girth5strong]. [thm:2Ksplus] Let Γ be a finite connected graph that is locally 2 ⋅ *K**s* where *s* ≥ 2 and *c*2(Γ) = 1. Then Γ is not 6-CH. If Γ is 4-CH, then *s* ∈ {2, 3, 4} and Γ is the line graph of a finite connected 4-arc-transitive graph with valency *s* + 1. By Lemma [lemma:linegraph], Γ = *L*(Σ) for some finite connected graph Σ with girth at least 5 and valency *s* + 1. If Γ is 4-CH, then Σ is 5-CH with girth at least 6 by Lemma [lemma:linegraphCH], so by Theorems [thm:girth5diam2] and [thm:girth5strong], Σ is 4-arc-transitive and *s* ∈ {2, 3, 4}. If Γ is 6-CH, then Σ is 7-CH and has girth at least 8 by Lemma [lemma:linegraphCH], but no such graph exists by Theorems [thm:girth5diam2] and [thm:girth5strong]. [Proof of Theorem $\ref{thm:2Ks}$] By Theorem [thm:2Ksplus] and Remark [remark:connected], Theorem [thm:2Ks](i) holds. By , there are infinitely many finite connected 7-arc-transitive graphs with valency 4, all of which are 6-CH by Proposition [prop:7trans], so Theorem [thm:2Ks](ii) follows from Lemmas [lemma:girth], [lemma:linegraph] and [lemma:linegraphCH]. The line graph of a finite connected cubic 4-arc-transitive graph is 4-CH. Let Γ be a finite connected 4-arc-transitive graph with valency 3. Then Γ is 5-CH by Proposition [prop:4trans], and Γ has girth at least 6 by Lemma [lemma:girth], so *L*(Γ) is 4-CH by Lemma [lemma:linegraphCH]. The case where *t* ≥ 2 ---------------------- For finite connected locally (*t* + 1) ⋅ *K**s* graphs Γ with *t* ≥ 2, *s* ≥ 2 and *c*2 = 1, our only general results are Lemmas [lemma:c2=1] and [lemma:cyclebig]. Note that Kantor proved (without using the CFSG) that no finite connected 3-CH graph with girth 3 and *c*2 = 1 is strongly regular. In the following, we determine those 4-CH graphs Γ that are distance-transitive with valency at most 13. The point graph of the Hall-Janko near octagon (see ) is locally 5 ⋅ *K*2 and distance-transitive with intersection array {10, 8, 8, 2; 1, 1, 4, 5}. It has automorphism group J2 : 2, where J2 denotes the Hall-Janko sporadic simple group. [prop:badsmallval] Let Γ be a finite connected 4-CH graph that is locally (*t* + 1) ⋅ *K**s* where *t* ≥ 2, *s* ≥ 2 and *c*2(Γ) = 1. If Γ is distance-transitive with valency at most 13, then one of the following holds. * Γ is the point graph of the dual of the split Cayley hexagon of order (2, 2). Here Γ is 4-CH but not 5-CH. * Γ is the point graph of the Hall-Janko near octagon. Here Γ is 4-CH but not 5-CH. By , Γ is not strongly regular, so *d* :  = diam(Γ) ≥ 3. By Lemma [lemma:c2=1], Γ has intersection array {*s*(*t* + 1), *s**t*, *s**t*, *b*3, …, *b**d* − 1; 1, 1, *c*3, …, *c**d*}. In particular, *b*0 = *s*(*t* + 1) ≥ 6 and *b*1 = *b*2 = *s**t* ≥ 4. Further, neither *b*0 nor *b*1 is prime, and *b*0 − *b*1 = *s* ≥ 2. By , one of the following holds. * Γ is the point graph of a generalised hexagon of order (2, 2). Here Γ has intersection array {6, 4, 4; 1, 1, 3} and is locally 3 ⋅ *K*2 with order 63. * Γ is the point graph of the Hall-Janko near octagon. Here Γ has intersection array {10, 8, 8, 2; 1, 1, 4, 5} and is locally 5 ⋅ *K*2 with order 315. * Γ is the point graph of a generalised octagon of order (2, 4). Here Γ has intersection array {10, 8, 8, 8; 1, 1, 1, 5} and is locally 5 ⋅ *K*2 with order 1755. * Γ is the point graph of a generalised hexagon of order (3, 3). Here Γ has intersection array {12, 9, 9; 1, 1, 4} and is locally 4 ⋅ *K*3 with order 364. * Γ is the point graph of a generalised octagon of order (4, 2). Here Γ has intersection array {12, 8, 8, 8; 1, 1, 1, 3} and Γ is locally 3 ⋅ *K*4 with order 2925. If (b) holds, then Γ is not 5-CH by Lemma [lemma:c2=1], and using Magma  and  (see Remark [remark:magma]), it is routine to verify that Γ is 4-CH, so (ii) holds. Thus we may assume that one of (a), (c), (d) or (e) holds. Now Γ is the point graph of a distance-transitive generalised *n*-gon S for some *n*. By , one of the following holds: in (a), S is the split Cayley hexagon of order (2, 2) or its dual; in (c), S is the Ree-Tits octagon of order (2, 4); in (d), S is the split Cayley hexagon of order (3, 3) (since this generalised hexagon is self-dual); and in (e), S is the dual of the Ree-Tits octagon of order (2, 4). If (c) or (d) holds, then using , it is routine to verify that Γ is not 4-CH (the tree of order 4 with a vertex of valency 3 fails). Similarly, if (e) holds, then using , it is routine to verify that Γ is not 3-CH (the cycle of length 3 fails). Lastly, suppose that (a) holds. Neither graph is 5-CH by Lemma [lemma:c2=1]. In order to differentiate between the split Cayley hexagon and its dual, here is a construction of the former: its points are the one-dimensional totally singular subspaces of a quadratic space on *V*7(2), and its 63 lines are an orbit of *G*2(2) ≤ PΩ7(2) on the two-dimensional totally singular subspaces. (See for an explicit description of the lines.) Using Magma , it is routine to verify that the point graph of the split Cayley hexagon is not 4-CH (the tree of order 4 with a vertex of valency 3 fails), while the point graph of the dual of the split Cayley hexagon is 4-CH. Thus (i) holds. On *k*-connected-homogeneous graphs =================================== A graph Γ is *k*-connected-homogeneous (*k*-CH) if *k* is a positive integer and any isomorphism between connected induced subgraphs of order at most *k* extends to an automorphism of Γ, and connected-homogeneous (CH) if this property holds for all *k*. Locally finite, locally connected graphs often fail to be 4-CH because of a combinatorial obstruction called the unique *x* property; we prove that this property holds for locally strongly regular graphs under various purely combinatorial assumptions. We then classify the locally finite, locally connected 4-CH graphs. We also classify the locally finite, locally disconnected 4-CH graphs containing 3-cycles and induced 4-cycles, and prove that, with the possible exception of locally disconnected graphs containing 3-cycles but no induced 4-cycles, every finite 7-CH graph is CH. introduction ============ A (simple undirected) graph Γ is *homogeneous* if any isomorphism between finite induced subgraphs extends to an automorphism of Γ. An analogous definition can be made for any relational structure, and the study of these highly symmetric objects dates back to Fraïssé . The finite and countably infinite homogeneous graphs have been classified , and very few families of graphs arise (see Theorem [thm:hom]). Consequently, various relaxations of homogeneity have been considered. For example, a graph Γ is *k**-homogeneous* if *k* is a positive integer and any isomorphism between induced subgraphs of order at most *k* extends to an automorphism of Γ. Every locally finite 5-homogeneous graph is homogeneous —remarkably, this result does not rely upon the classification of the finite simple groups (CFSG)—but for each *k*, there are uncountably many countable *k*-homogeneous graphs that are not (*k* + 1)-homogeneous . Further, for 2 ≤ *k* ≤ 4, the locally finite *k*-homogeneous graphs have been classified using the CFSG  (see §[s:results]). We require the following definition: for a graph (or graph property) *X*, we say that a graph Γ is *locally X* if the neighbourhood of any vertex in Γ is non-empty and induces a graph that is isomorphic to (or has property) *X*; see §[s:defn] for other unexplained terms. Here is another way to relax the concept of homogeneity: a graph is *k**-connected-homogeneous*, or *k**-CH*, if *k* is a positive integer and any isomorphism between connected induced subgraphs of order at most *k* extends to an automorphism of the graph, and *connected-homogeneous*, or *CH*, if it is *k*-CH for all *k*. The locally finite CH graphs have been classified  (see Theorem [thm:CH]), as have the countably infinite CH graphs , and Gray  proved that any infinite locally finite 3-CH graph with more than one end is CH. The 1-CH and 2-CH graphs are precisely the vertex-transitive and regular arc-transitive graphs, respectively. The 3-CH graphs with girth at least 4 are 2-arc-transitive, and it is infeasible to classify such graphs in general, even in the finite case. Indeed, for a classification of the finite 2-arc-transitive graphs,  dealing with just one of the cases arising from the reduction of the third author in would require a classification of all transitive actions of all finite simple groups for which a point stabiliser acts 2-transitively on one of its orbits. Finite 3-CH graphs with girth 3 are studied using different terminology in ; see also. In this paper, we investigate locally finite *k*-CH graphs for *k* ≥ 4. [remark:connected] A graph is *k*-CH if and only if it is a disjoint union of connected *k*-CH graphs, all of which are isomorphic. Thus we will often restrict our attention to connected graphs. The study of *k*-CH graphs (with non-zero valency) naturally divides into the locally connected case and the locally disconnected case. First we consider the locally connected case. (Definitions of the graphs that arise below may be found in §[s:graphs].) [thm:connected] Let Γ be a locally finite, connected, locally connected graph. If Γ is 4-CH, then one of the following holds. * Γ is *K**n* where *n* ≥ 2 or *K**m*[*r*] where *m* ≥ 3 and *r* ≥ 2. Here Γ is homogeneous. * Γ is the Schläfli graph. Here Γ is 4-homogeneous but not 5-CH. * Γ is the McLaughlin graph. Here Γ is 4-CH but not 5-CH. [cor:connected] Any locally finite, connected, locally connected 5-CH graph is homogeneous. Any disconnected 2-homogeneous graph must be a disjoint union of complete graphs with the same order (see Lemma [lemma:2homsimple]). Since a 3-CH graph is locally 2-homogeneous (see Lemma [lemma:localhom]), it follows that a locally finite 3-CH graph is locally disconnected if and only if it is locally (*t* + 1) ⋅ *K**s* for some positive integers *t* and *s*. For such graphs, one important parameter is the number of common neighbours of two vertices at distance two; this number is constant for a 3-CH graph Γ, and we denote it by *c*2(Γ), or *c*2 when context permits. Note that a locally (*t* + 1) ⋅ *K**s* graph has girth 3 precisely when *s* > 1, and girth at least 5 precisely when *s* = *c*2 = 1. Further, it has no induced 4-cycles precisely when *c*2 = 1 (see Lemma [lemma:notinduced]). We divide the locally disconnected case into four cases: girth 3 with *c*2 > 1, girth 4, girth at least 5, and girth 3 with *c*2 = 1. In our next result, we classify the locally finite 4-CH graphs in the first of these cases. [thm:girth3quad] Let Γ be a locally finite, connected, locally disconnected graph with girth 3 for which *c*2 > 1. If Γ is 4-CH, then one of the following holds. * Γ is *K**n*□*K**n* where *n* ≥ 3. Here Γ is CH. * Γ is the point graph of the generalised quadrangle *Q*4(3), *Q*5−(3) or *Q*5−(4). Here Γ is 4-CH but not 5-CH. * Γ is the point graph of the generalised quadrangle *Q*5−(2). Here Γ is 5-CH but not 6-CH. Note that the point graph of *Q*5−(2) and its complement the Schläfli graph, which arose in the locally connected case, are the only two locally finite 4-homogeneous graphs that are not homogeneous . The following is an immediate consequence of Corollary [cor:connected] and Theorem [thm:girth3quad]. [cor:girth3quad] Any locally finite 6-CH graph with girth 3 and *c*2 > 1 is CH. For the three remaining cases, we do not have classifications of the locally finite 4-CH graphs. Instead, we consider finite *k*-CH graphs for slightly larger *k*. For the girth 4 case, we use the classification of the finite 4-transitive permutation groups (a well-known consequence of the CFSG) together with some results from  to prove the following; note that families of finite CH graphs with girth 4 do exist (see Theorems [thm:CH] or [thm:girth4plus]). [thm:girth4] * Any finite 5-CH graph with girth 4 is CH. * There are infinitely many finite connected 4-CH graphs with girth 4 that are not 5-CH. For *k* ≥ 4, finite *k*-CH graphs with girth at least 5 are *s*-arc-transitive for some *s* ≥ 3, and such graphs have been studied extensively. In particular, Weiss proved that if Γ is a finite *s*-arc-transitive graph with valency at least 3, then *s* ≤ 7, and if *s* = 7, then Γ has valency 3*e* + 1 for some positive integer *e*. Further, Conder and Walker  have constructed infinitely many finite connected quartic 7-arc-transitive graphs. Using these results, as well as the classification of the finite 4-transitive permutation groups, we obtain our next theorem; see Theorems [thm:girth5diam2] and [thm:girth5strong] for more details. Note that finite CH graphs with girth at least 5 do exist: these include the Petersen graph and any cycle with at least 5 vertices. [thm:girth5] * Any finite 7-CH graph with girth at least 5 is *C**H*. * There are infinitely many finite connected quartic 6-CH graphs with girth at least 12 that are not 7-CH. * A finite quartic graph with girth at least 7 is 6-CH if and only if it is 7-arc-transitive. The case where Γ is locally disconnected with girth 3 but *c*2 = 1 seems to be more difficult. Here Γ is locally (*t* + 1) ⋅ *K**s* where *t* ≥ 1 and *s* ≥ 2. When *t* = 1, we can make some progress, for Γ is the line graph of a graph Σ with valency *s* + 1 and girth at least 5, and it turns out that Γ is *k*-CH precisely when Σ is (*k* + 1)-CH with girth at least *k* + 2 (see Lemma [lemma:linegraphCH]). Thus results about *k*-CH graphs with girth at least 5 can be interpreted for locally 2 ⋅ *K**s* graphs with *c*2 = 1. In particular, we obtain the following; see Theorem [thm:2Ksplus] for more details. Note that the line graph of the regular tree of valency *s* + 1 is an infinite CH locally 2 ⋅ *K**s* graph with *c*2 = 1. [thm:2Ks] * For *s* ≥ 2, there are no finite 6-CH locally 2 ⋅ *K**s* graphs with *c*2 = 1. * There are infinitely many finite connected 5-CH locally 2 ⋅ *K*3 graphs with *c*2 = 1. Thus, with the possible exception of locally (*t* + 1) ⋅ *K**s* graphs where *t* ≥ 2, *s* ≥ 2 and *c*2 = 1, there exists an absolute constant *A* such that every finite *A*-CH graph is CH, and *A* = 7 is the best possible constant. We summarise this result here. [cor:all] For a finite 7-CH graph Γ, one of the following holds. * Γ is locally *m* ⋅ *K**n* for some *m* ≥ 3 and *n* ≥ 2, and *c*2 = 1. * Γ is CH. Further, there are infinitely many finite connected 6-CH graphs that are not 7-CH. For *k*-CH graphs (*k* ≥ 4) that satisfy the condition of Corollary [cor:all](i), we only have partial results; see Proposition [prop:badsmallval] and Lemmas [lemma:c2=1] and [lemma:cyclebig]. In particular, there are examples of 4-CH graphs in this case: the point graph of the dual of the split Cayley hexagon of order (2, 2) is 4-CH but not 5-CH, as is the point graph of the Hall-Janko near octagon. This leads us to the following open problem; note that there are infinite locally finite CH graphs that satisfy the condition of Corollary [cor:all](i), but there are no such finite graphs (see Theorem [thm:CH]). Determine whether there exists an absolute constant *A* for which there are no finite *A*-CH graphs satisfying the condition of Corollary $\ref{cor:all}$(i). Our approach for the locally connected case is to use the classification of the finite 3-homogeneous graphs (see Theorem [thm:3hom]) together with the observation that any 4-CH graph is locally a 3-homogeneous graph (see Lemma [lemma:localhom]). Further, we will show that there is a combinatorial property of graphs—the unique *x* property—that acts as an obstruction for 4-connected-homogeneity (see Definition [defn:x] and Lemma [lemma:xplus]); we will prove that this property holds for locally strongly regular graphs under various combinatorial assumptions (see §[s:uniquex]). This paper is organised as follows. In §[s:prelim], we provide some notation and definitions (§[s:defn]-[s:graphs]), state some basic results (§[s:other]), and state some classification theorems (§[s:results]). In §[s:kCH], we establish some properties of *k*-CH graphs, and in §[s:uniquex], we consider graphs with the unique *x* property. In §[s:LC], we consider locally connected graphs and prove Theorem [thm:connected]; in §[s:GQ], we consider locally disconnected graphs with girth 3 and *c*2 > 1 and prove Theorem [thm:girth3quad]; in §[s:girth4], we consider graphs with girth 4 and prove Theorem [thm:girth4]; in §[s:girth5], we consider graphs with girth at least 5 and prove Theorem [thm:girth5]; and in §[s:LDbad], we consider locally disconnected graphs with girth 3 and *c*2 = 1 and prove Theorem [thm:2Ks]. Note that the proofs of our main results depend upon the CFSG. [remark:magma] We sometimes use Magma  to determine whether a graph is *k*-CH. These computations are routine: we construct a *G*-arc-transitive graph Γ using a representation of a group *G* provided by  or standard techniques, and we analyse Γ using Lemma [lemma:detkCH]. This analysis could be performed using various software packages; we find Magma the most convenient, and there is extensive online documentation for the commands we use to implement Lemma [lemma:detkCH]. Preliminaries ============= All graphs in this paper are undirected, simple (no multiple edges or loops) and have non-empty vertex sets, but they need not be finite or even locally finite. Basic graph theoretical terminology not given here may be found in the appendix of . All group actions and graph isomorphisms are written on the right, and basic group theoretic terminology may be found in . The notation used to denote the finite simple groups (and their automorphism groups) is consistent with . Notation and definitions ------------------------ A graph Γ consists of a non-empty vertex set *V*Γ and an edge set *E*Γ, which is a set of 2-subsets of *V*Γ. The *order* of Γ is ∣*V*Γ∣. For a non-empty subset *X* of *V*Γ, we often abuse notation and write *X* for the subgraph of Γ induced by *X*. The *girth* of a graph Γ is the length of a shortest cycle in Γ (or infinity when Γ has no cycles). We write $\overline{\Gamma}$ for the complement of Γ. When *E*Γ is non-empty, the *line graph* of Γ, denoted by *L*(Γ), has vertex set *E*Γ, and two vertices of *L*(Γ) are adjacent whenever the corresponding edges of Γ have a common vertex in Γ. We denote the distance between *u*, *v* ∈ *V*Γ by *d*Γ(*u*, *v*), and the diameter of Γ by diam(Γ). When Γ is connected and bipartite, a *halved graph* of Γ is one of the two connected components of Γ2, where Γ2 is the graph with vertex set *V*Γ in which *u* and *v* are adjacent whenever *d*Γ(*u*, *v*) = 2. For *u* ∈ *V*Γ and any integer *i* ≥ 0, let Γ*i*(*u*) :  = {*v* ∈ *V*Γ : *d*Γ(*u*, *v*) = *i*}. We write Γ(*u*) for the *neighbourhood* Γ1(*u*). The cardinality of Γ(*u*) is the *valency* of *u*, and the graph induced by Γ(*u*) (when non-empty) is a *local graph* of Γ. When every vertex of Γ has the same valency, we say that Γ is *regular* and refer to the *valency* of Γ. A graph is *locally finite* if every vertex has finite valency; this extends the definition given in the introduction to include graphs with valency 0 (such as *K*1). Note that, by definition, a graph with valency 0 is neither locally connected nor locally disconnected. For a positive integer *s*, a *path of length* *s* in Γ is a sequence of vertices (*u*0, …, *u**s*) such that *u**i* is adjacent to *u**i* + 1 for 0 ≤ *i* < *s*, an *s-arc* is a path (*u*0, …, *u**s*) where *u**i* − 1 ≠ *u**i* + 1 for 0 < *i* < *s*, and an *arc* is a 1-arc. An (*s**-*)*geodesic* is a path (*u*0, …, *u**s*) where *d*Γ(*u*0, *u**s*) = *s*. A *path graph* is a tree *T* with ∣*T*(*u*)∣ ≤ 2 for all *u* ∈ *V**T*. A *μ**-graph* of Γ is a graph that is induced by Γ(*u*) ∩ Γ(*w*) for some *u*, *w* ∈ *V*Γ with *d*Γ(*u*, *w*) = 2. For this paragraph, assume that Γ is locally finite, and let *i* be a non-negative integer. We write *k**i*(Γ) or *k**i* for ∣Γ*i*(*u*)∣ whenever ∣Γ*i*(*u*)∣ does not depend on the choice of *u*, or to indicate that we are assuming this. For *u*, *v* ∈ *V*Γ such that *d*Γ(*u*, *v*) = *i*, let *c**i*(*u*, *v*) :  = ∣Γ*i* − 1(*u*) ∩ Γ(*v*)∣ (when *i* ≥ 1), *a**i*(*u*, *v*) :  = ∣Γ*i*(*u*) ∩ Γ(*v*)∣ and *b**i*(*u*, *v*) :  = ∣Γ*i* + 1(*u*) ∩ Γ(*v*)∣. Whenever *c**i*(*u*, *v*) does not depend on the choice of *u* and *v*, or to indicate that we are assuming this, we write *c**i*(Γ) or *c**i*, and similarly for *a**i*(*u*, *v*) and *b**i*(*u*, *v*). A graph Γ is *strongly regular* with parameters (*v*, *k*, *λ*, *μ*) if it is finite with order *v* and regular with valency *k* where 0 < *k* < *v* − 1, and if any two adjacent vertices have *λ* = *λ*(Γ) common neighbours and any two non-adjacent vertices have *μ* = *μ*(Γ) common neighbours. Note that complete graphs and edgeless graphs are not strongly regular. The complement of a strongly regular graph is again strongly regular with parameters (*v*, *v* − *k* − 1, *v* − 2*k* + *μ* − 2, *v* − 2*k* + *λ*). Let *G* be a group acting on a set Ω. We denote the permutation group induced by this action by *G*Ω, and the pointwise stabiliser in *G* of *u*1, …, *u**n* ∈ Ω by *G**u*1, …, *u**n*. When *G* is transitive on Ω, any orbit of *G* on Ω × Ω is an *orbital*; the *trivial* orbital is {(*u*, *u*) : *u* ∈ Ω}, and the other orbitals are *non-trivial*. The *rank* of *G* is the number of orbitals. Equivalently, for any *u* ∈ Ω, the rank of *G* is the number of orbits of *G**u* on Ω. The group *G* is *primitive* if it is transitive and there are no non-trivial *G*-invariant equivalence relations on Ω (the trivial ones are {(*u*, *u*) : *u* ∈ Ω} and Ω × Ω). The group *G* is *k**-transitive* if 1 ≤ *k* ≤ ∣Ω∣ and *G* acts transitively on the set of *k*-tuples of pairwise distinct elements of Ω. A graph Γ is *G**-vertex-transitive* (or *G**-arc-transitive*) if *G* is a subgroup of the automorphism group Aut(Γ) and *G* acts transitively on *V*Γ (or the arcs of Γ); we omit the prefix *G* when *G* = Aut(Γ). We say that Γ is *s**-arc-transitive* if Aut(Γ) acts transitively on the set of *s*-arcs, and *s**-transitive* if Γ is *s*-arc-transitive but not (*s* + 1)-arc-transitive. (Note that we have defined two different concepts of *n*-transitivity: one for groups above, and one for graphs here.) When Aut(Γ) acts transitively on ordered pairs of vertices at distance *i* for each integer *i* ≥ 0, we say that Γ is *distance-transitive*, and when Γ is also finite with diameter *d*, it has *intersection array* {*b*0, *b*1, …, *b**d* − 1; *c*1, *c*2, …, *c**d*}. We say that Γ is (*G*, *k*)*-homogeneous* (or (*G*, *k*)*-CH*) when *k* is a positive integer, *G* ≤ Aut(Γ), and any isomorphism between (connected) induced subgraphs of Γ with order at most *k* extends to an element of *G*. Note that a (*G*, 2)-CH graph is *G*-vertex-transitive and *G*-arc-transitive. A *partial linear space* is a pair (P, L) where P is a non-empty set of *points* and L is a collection of subsets of P called *lines* such that two distinct points are in at most one line, and every line contains at least two points. The *incidence graph* of a partial linear space is the bipartite graph whose vertices are the points and lines, where a point *p* is adjacent to a line ℓ whenever *p* ∈ ℓ. The *point graph* of a partial linear space is the graph whose vertices are the points, where two points are adjacent whenever they are collinear. For a positive integer *n*, a *generalised n-gon* is a partial linear space whose incidence graph Γ has diameter *n* with *c**i* = 1 (i.e., *c**i*(*u*, *v*) = 1 for all *u*, *v* ∈ *V*Γ such that *d*Γ(*u*, *v*) = *i*) for *i* < *n*. A generalised *n*-gon is *thick* when every line has size at least three and every point is contained in at least three lines, and it has *order* (*s*, *t*) when there exist positive integers *s* and *t* such that every line has size *s* + 1 and every point is contained in *t* + 1 lines. It is routine to prove that any thick generalised *n*-gon has order (*s*, *t*) for some *s* and *t*. A generalised *n*-gon is *distance-transitive* if its point graph is distance-transitive. When *n* = 4, 6 or 8, a generalised *n*-gon is a *generalised quadrangle*, *generalised hexagon* or *generalised octagon* respectively. A generalised quadrangle satisfies the *GQ Axiom*: for each point *p* and line ℓ such that $p\notin\ell$, there is a unique *q* ∈ ℓ such that *p* is collinear with *q*. Conversely, any partial linear space with at least two lines that satisfies the GQ Axiom is a generalised quadrangle. See for more details. We denote a finite field of order *q* by F*q* and a *d*-dimensional vector space over F*q* by *V**d*(*q*). We will use the following terminology concerning forms. See for more information. A *symplectic*, *unitary* or *quadratic* space is a pair (*V**d*(*q*), *κ*) where *κ* is, respectively, a non-degenerate symplectic, unitary or quadratic form on *V**d*(*q*). In a symplectic or unitary space (*V*, *f*), a vector *v* ∈ *V* is *singular* if *f*(*v*, *v*) = 0, and in a quadratic space (*V*, *Q*), a vector *v* ∈ *V* is *singular* if *Q*(*v*) = 0. In a symplectic, unitary or quadratic space (*V*, *κ*), a subspace *W* of *V* is *totally singular* if every vector in *W* is singular. For a positive integer *m*, a quadratic space (*V*2*m*(*q*), *Q*) has *plus type* when the maximal totally singular subspaces of *V*2*m*(*q*) have dimension *m*, and *minus type* when the maximal totally singular subspaces of *V*2*m*(*q*) have dimension *m* − 1; we also say that the quadratic space has type *ɛ* where *ɛ* ∈ { + ,  − }. Families of graphs ------------------ Let *m* and *n* be positive integers. We denote the complete graph with *n* vertices by *K**n*, the cycle with *n* vertices by *C**n*, the complete multipartite graph with *n* parts of size *m* by *K**n*[*m*], and its complement (the disjoint union of *n* copies of *K**m*) by *n* ⋅ *K**m*. We also write *K**n*, *n* for the complete bipartite graph *K*2[*n*]. The *grid graph* *K**n*□*K**m* has vertex set *V**K**n* × *V**K**m*, where distinct vertices (*u*1, *u*2) and (*v*1, *v*2) are adjacent whenever *u*1 = *v*1 or *u*2 = *v*2. We denote the complement of this graph by *K**n* × *K**m*. In the literature, the graph *K*2 × *K**n* is often described as the graph *K**n*, *n* with the edges of a perfect matching removed. The *n-cube* *Q**n* has vertex set F2*n*, where two vertices are adjacent whenever they differ in exactly one coordinate. The *folded n-cube* □*n* is obtained from *Q**n* by identifying those vertices *u* and *v* for which *u* + *v* = (1, …, 1). The *affine polar graph* *V**O*2*m**ɛ*(*q*) has vertex set *V*2*m*(*q*), and vectors *u* and *v* are adjacent whenever *Q*(*u* − *v*) = 0, where (*V*2*m*(*q*), *Q*) is a quadratic space with type *ɛ*. We will be interested in the point or incidence graphs of the following classical generalised quadrangles where *q* is a power of a prime: *W*3(*q*), *H*3(*q*2), *H*4(*q*2), *Q*4(*q*) for *q* odd, and *Q*5−(*q*). These are defined as follows. In each case, we have a symplectic, unitary or quadratic space (*V**d*(*s*), *κ*), the points are the one-dimensional totally singular subspaces of *V**d*(*s*) with respect to *κ*, and the lines are the two-dimensional totally singular subspaces (which we may of course view as sets of points). For *W*3(*q*), we take a symplectic space with (*d*, *s*) = (4, *q*); for *H*3(*q*2) or *H*4(*q*2), a unitary space with (*d*, *s*) = (4, *q*2) or (5, *q*2) respectively; for *Q*4(*q*), a quadratic space with (*d*, *s*) = (5, *q*) and *q* odd; and for *Q*5−(*q*), a quadratic space of minus type with (*d*, *s*) = (6, *q*). We will also be interested in certain other generalised *n*-gons. The *split Cayley hexagon* is a generalised hexagon of order (*q*, *q*) whose automorphism group contains the exceptional group of Lie type *G*2(*q*). The *Ree-Tits octagon* is a generalised octagon of order (*q*, *q*2) whose automorphism group contains the exceptional group of Lie type 2*F*4(*q*). See  for more details. The *Clebsch graph* is the halved 5-cube (see §[s:defn] for the definition of a halved graph). We caution the reader that some authors define the Clebsch graph to be the complement of the halved 5-cube, which is isomorphic to □5 and *V**O*4−(2); our definition is consistent with  and Seidel . The *Petersen graph* is the complement of the local graph of the halved 5-cube. The *Higman-Sims* graph is a strongly regular graph with parameters (100, 22, 0, 6) whose automorphism group is HS : 2, where HS denotes the Higman-Sims group, a sporadic simple group (see ). Similarly, the *McLaughlin graph* is a strongly regular graph with parameters (275, 112, 30, 56) whose automorphism group is McL : 2, where McL denotes the McLaughlin group, another sporadic simple group (see ). The *Schläfli graph* is the complement of the point graph of the generalised quadrangle *Q*5−(2). For positive integers *t* and *s*, the *biregular tree* *T**t* + 1, *s* + 1 is an (infinite) tree with bipartition (*V**t*, *V**s*) such that the vertices in *V**t* have valency *t* + 1, and the vertices in *V**s* have valency *s* + 1. The halved graph of *T**t* + 1, *s* + 1 with vertex set *V**t* is locally (*t* + 1) ⋅ *K**s*, while the halved graph with vertex set *V**s* is locally (*s* + 1) ⋅ *K**t*. We will see in §[s:results] that the halved graphs of *T**t* + 1, *s* + 1 for *t*, *s* ≥ 1 are precisely the infinite locally finite connected CH graphs. To obtain the complete list of such graphs, it suffices to either consider only those halved graphs with vertex set *V**t*, or assume that *t* ≥ *s*, but we allow this redundancy in the notation for simplicity. Basic results ------------- Almost by definition, we obtain the following useful observation concerning 2-homogeneous graphs; see. [] [lemma:2homsimple] A 2-homogeneous graph is either a disjoint union of complete graphs with the same order, or connected with diameter 2. Note that we permit the disjoint union to contain only one complete graph. The following is immediate from Lemma [lemma:2homsimple]. [lemma:2homsimplelf] If Γ is a locally finite 2-homogeneous graph, then either Γ is a (possibly infinite) disjoint union of finite complete graphs with the same order, or Γ is finite with diameter 2. Note that Lemma [lemma:2homsimplelf] is often used to derive a result about locally finite 2-homogeneous graphs from the analogous result for the finite case. We will sometimes use a stronger form of Lemma [lemma:2homsimple]. Recall the definition of a (*G*, *k*)-homogeneous graph from §[s:defn]. If Γ is a non-complete (*G*, 2)-homogeneous graph that contains an edge, then the non-trivial orbitals of *G* on *V*Γ are the sets of adjacent pairs and distinct non-adjacent pairs, so *G* is transitive of rank 3 on *V*Γ. If *G* also preserves a non-trivial equivalence relation  ≡  on *V*Γ, then the set of pairs of distinct vertices *u* and *v* such that *u* ≡ *v* must be either the set of adjacent pairs, or the set of distinct non-adjacent pairs, so Γ is either a disjoint union of complete graphs, or a complete multipartite graph. Thus we have the following result. [lemma:2hom] Let Γ be a (*G*, 2)-homogeneous graph. Then exactly one of the following holds. * Γ or $\overline{\Gamma}$ is a disjoint union of complete graphs with the same order. * diam(Γ) = 2 and *G* is primitive of rank 3 on *V*Γ. Next we give a sufficient condition for the local action of an arc-transitive graph to be faithful; this will be useful in the locally connected case. [lemma:faithful] Let Γ be a connected *G*-arc-transitive graph. If there exists *x* ∈ *V*Γ and *y* ∈ Γ(*x*) such that the pointwise stabiliser of Γ(*x*) ∩ Γ(*y*) in *G**x*, *y* also fixes Γ(*x*) pointwise, then the action of *G**u* on Γ(*u*) is faithful for all *u* ∈ *V*Γ. Suppose that there exists *x* ∈ *V*Γ and *y* ∈ Γ(*x*) such that the pointwise stabiliser of Γ(*x*) ∩ Γ(*y*) in *G**x*, *y* also fixes Γ(*x*) pointwise. Since *G* acts transitively on the arcs of Γ, it follows that for any *u* ∈ *V*Γ and *v* ∈ Γ(*u*), if *h* ∈ *G**u*, *v* fixes Γ(*u*) ∩ Γ(*v*) pointwise, then *h* fixes Γ(*u*) pointwise. Suppose that *g* ∈ *G**u* fixes Γ(*u*) pointwise. Let *w* ∈ *V*Γ. There is a path (*u*0, …, *u*ℓ) in Γ with *u*0 = *u* and *u*ℓ = *w*. If *g* fixes {*u**i*} ∪ Γ(*u**i*) pointwise for some integer *i* ≥ 0, then *g* fixes *u**i* + 1, *u**i* and Γ(*u**i* + 1) ∩ Γ(*u**i*) pointwise, so, by the above observation, *g* fixes {*u**i* + 1} ∪ Γ(*u**i* + 1) pointwise. By induction, *g* fixes *u*ℓ = *w*. Thus *g* = 1. The proof of the following is routine. [lemma:notinduced] Let Γ be a locally (*t* + 1) ⋅ *K**s* graph for positive integers *t* and *s*. Then no induced subgraph of Γ is isomorphic to the complete graph *K*4 with one edge removed. The next result is a well-known property of quadratic spaces; see the proof of , for example. [lemma:quadratic] Let (*V*, *Q*) be a quadratic space, and let *f* be the bilinear form associated with *Q*. For any non-zero singular vector *v*, there exists *w* ∈ *V* such that *Q*(*w*) = 0 and *f*(*v*, *w*) = 1. We finish this section with a result of Tutte; see  for a proof. [lemma:girth] An *s*-arc-transitive graph with valency at least 3 has girth at least 2*s* − 2. Classification theorems ----------------------- To begin, we state the classification of the finite homogeneous graphs; this was obtained independently by Gardiner  (using the work of Sheehan ) and Gol’Fand and Klin . In fact, this classification immediately implies that the locally finite homogeneous graphs are known (as Gardiner notes in ) since such graphs are either disjoint unions of complete graphs with the same order, or finite with diameter 2 (see Lemma [lemma:2homsimplelf]). However, we will only state the classification in the finite case for simplicity. [] [thm:hom] A finite graph Γ is homogeneous if and only if Γ is listed below. * (*t* + 1) ⋅ *K**s* where *t* ≥ 0 and *s* ≥ 1. * *K**m*[*r*] where *m* ≥ 2 and *r* ≥ 2. * *C*5 or *K*3□*K*3. In the introduction, we stated two important results concerning *k*-homogeneous graphs: first, every locally finite 5-homogeneous graph is homogeneous , and second, the only locally finite 4-homogeneous graphs that are not 5-homogeneous are the point graph of *Q*5−(2) and its complement the Schläfli graph . We also alluded to the fact that the locally finite 2- and 3-homogeneous graphs are known. We now give some more details about these classifications. By Lemmas [lemma:2homsimplelf] and [lemma:2hom], in order to classify the locally finite 2-homogeneous graphs, it suffices to consider those finite graphs Γ for which diam(Γ) = 2 and Aut(Γ) is primitive of rank 3. Using the CFSG, the finite primitive permutation groups of rank 3 were classified in a series of papers (see ). A non-trivial orbital *X* of such a group *G* is the adjacency relation of a graph Γ with *G* ≤ Aut(Γ) precisely when *X* is self-paired (i.e., symmetric), and this occurs precisely when ∣*G*∣ is even. Thus the locally finite 2-homogeneous graphs are known as an immediate consequence of the classification of the finite primitive rank 3 groups. Note that any locally finite connected non-complete 2-homogeneous graph is finite of diameter 2 and is therefore distance-transitive and strongly regular. The locally finite *k*-homogeneous graphs for *k* ≥ 3 are therefore also known, but these graphs were in fact enumerated (using the CFSG for *k* ≤ 4) before the classification of the finite primitive rank 3 groups was available. We now state Cameron and Macpherson’s classification of the finite 3-homogeneous graphs; the locally finite classification then follows from Lemma [lemma:2homsimplelf]. [] [thm:3hom] A finite graph Γ is 3-homogeneous if and only if Γ or $\overline{\Gamma}$ is listed below. * (*t* + 1) ⋅ *K**s* where *t* ≥ 0 and *s* ≥ 1. * *K**n*□*K**n* where *n* ≥ 3. * *V**O*2*m**ɛ*(2) where *m* ≥ 3 and *ɛ* ∈ { + ,  − }. * The point graph of *Q*5−(*q*) where *q* is a power of a prime. * *C*5, the Clebsch graph, the Higman-Sims graph, or the McLaughlin graph. Our statement of Theorem [thm:3hom] may appear to differ from, but it does describe the same set of graphs, and it is routine to verify that these graphs are indeed 3-homogeneous. In (iii), we impose the restriction *m* ≥ 3 since *V**O*2+(2) ≃ *K*2, 2, $VO^-\_2(2)\simeq \overline{K\_4}$ and *V**O*4+(2) ≃ *K*4 × *K*4, all of which arise in (i) or (ii), and *V**O*4−(2) is isomorphic to the complement of the Clebsch graph. The graphs we list in (iv) are isomorphic to those of (iv) since *Q*5−(*q*) is isomorphic to the point-line dual of *H*3(*q*2). Next we state the classification of the locally finite CH graphs. This classification implies, in particular, that the only infinite, locally finite, connected CH graphs are the halved graphs of the biregular tree *T**t* + 1, *s* + 1 for positive integers *t* and *s*. Note that the halved graphs of *T**t* + 1, 2 are the regular tree *T**t* + 1 and its line graph *L*(*T**t* + 1). Gardiner mistakenly claimed in  that these are the only infinite, locally finite, connected CH graphs, but this was later corrected by Enomoto  to include the halved graphs of *T**t* + 1, *s* + 1 for *s* ≥ 2. [] [thm:CH] A locally finite connected graph Γ is CH if and only if Γ is listed below. * *K**n* where *n* ≥ 1 or *K**m*[*r*] where *m* ≥ 2 and *r* ≥ 2. * *C**n* where *n* ≥ 5. * *K**n*□*K**n* where *n* ≥ 3. * *K*2 × *K**n* where *n* ≥ 4. * A halved graph of the biregular tree *T**t* + 1, *s* + 1 where *t* ≥ 1 and *s* ≥ 1. * The Petersen graph, or the folded 5-cube □5. The finite distance-transitive generalised quadrangles were classified by Buekenhout and Van Maldeghem  using the CFSG. In §[s:GQ], we will use the following consequence of their work. [] [thm:GQ] Let Q be a finite thick distance-transitive generalised quadrangle of order (*s*, *t*) where *s* divides *t*. Let *G* :  = Aut(Q). Then one of the following holds. * Q is *W*3(*q*) for a prime power *q*. Here (*s*, *t*) = (*q*, *q*) and *G* = PΓSp4(*q*). * Q is *Q*4(*q*) for an odd prime power *q*. Here (*s*, *t*) = (*q*, *q*) and *G* = PΓO5(*q*). * Q is *Q*5−(*q*) for a prime power *q*. Here (*s*, *t*) = (*q*, *q*2) and *G* = PΓO6−(*q*). * Q is *H*4(*q*2) for a prime power *q*. Here (*s*, *t*) = (*q*2, *q*3) and *G* = PΓU5(*q*). By, Q is one of *W*3(*q*), *Q*4(*q*) for *q* odd, *Q*5−(*q*), *H*3(*q*2), *H*4(*q*2), the dual of *H*4(*q*2), or a generalised quadrangle with order (3, 5). Note that *H*3(*q*2) has order (*q*2, *q*), and the dual of *H*4(*q*2) has order (*q*3, *q*2). Since Q has order (*s*, *t*) where *s* divides *t*, one of (i)-(iv) holds. The following is a well-known consequence of the CFSG (see ). [CFSG] [thm:4trans] The only finite 4-transitive permutation groups of degree *n* are *S**n* for *n* ≥ 4, *A**n* for *n* ≥ 6, and the Mathieu groups M*n* for *n* ∈ {11, 12, 23, 24}. Properties of *k*-CH graphs =========================== We begin with a result that provides the basic approach for studying *k*-CH graphs in the locally connected case. Note that many of the arguments in this section apply to infinite graphs, including those that are not locally finite. Recall the definition of a (*G*, *k*)-CH graph from §[s:defn]. [lemma:localhom] If Γ is a (*G*, *k*)-CH graph with non-zero valency for some *k* ≥ 2, then for each *u* ∈ *V*Γ, the graph induced by Γ(*u*) is (*G**u*Γ(*u*), *k* − 1)-homogeneous. Let *u* ∈ *V*Γ. Let Δ1 and Δ2 be induced subgraphs of Γ(*u*) of order at most *k* − 1, and suppose that *φ* : Δ1 → Δ2 is a graph isomorphism. For each *i*, let Σ*i* denote the subgraph of Γ induced by *V*Δ*i* ∪ {*u*}. Define *φ*\* : Σ1 → Σ2 by *u* ↦ *u* and *v* ↦ *v**φ* for all *v* ∈ *V*Δ1. Now *φ*\* is an isomorphism between connected graphs of order at most *k*, so there exists *g* ∈ *G* such that *v**g* = *v**φ*\* for all *v* ∈ *V*Σ1. Since *g* ∈ *G**u*, it preserves Γ(*u*) and therefore induces an automorphism of Γ(*u*) that extends *φ*, as desired. [lemma:3CH] If Γ is a locally finite 3-CH graph with non-zero valency, then Γ is either locally (*t* + 1) ⋅ *K**s* for some *t* ≥ 0 and *s* ≥ 1, or locally a graph with diameter 2. Apply Lemmas [lemma:2homsimple] and [lemma:localhom]. Our next result provides a method for determining whether a (*k* − 1)-CH graph is *k*-CH. This requires some additional terminology: a graph Γ is (*G*, Δ)*-homogeneous* if *G* ≤ Aut(Γ) and Δ is a finite graph such that any isomorphism between induced subgraphs of Γ that are isomorphic to Δ extends to an automorphism of Γ. Note that Γ is (*G*, *k*)-CH if and only if Γ is (*G*, Δ)-homogeneous for all connected graphs Δ of order at most *k*. [lemma:detkCH] Let Γ be a graph, let Δ be an induced subgraph of Γ of order *k* ≥ 2, and let Σ be an induced subgraph of Δ of order *k* − 1. Let *u*0 be the unique vertex in *V*Δ \ *V*Σ, and let *X* :  = {*u* ∈ *V*Γ \ *V*Σ : Γ(*u*) ∩ *V*Σ = Γ(*u*0) ∩ *V*Σ}. If Γ is (*G*, Σ)-homogeneous, then the following are equivalent. * Γ is (*G*, Δ)-homogeneous. * The pointwise stabiliser in *G* of *V*Σ is transitive on *X*. Let *P* be the pointwise stabiliser in *G* of *V*Σ. First suppose that (ii) does not hold. Let *u*1 and *u*2 be elements of *X* in different orbits of *P*. Let Δ*i* be the graph induced by *V*Σ ∪ {*u**i*} for each *i*. Then Δ1 ≃ Δ2 ≃ Δ, and there is an isomorphism *φ* : Δ1 → Δ2 that fixes *V*Σ pointwise and maps *u*1 to *u*2. Since *φ* cannot be extended to *G*, (i) does not hold. Conversely, suppose that (ii) holds. Let *φ* : Δ1 → Δ2 be an isomorphism between induced subgraphs Δ1 and Δ2 of Γ such that Δ1 is isomorphic to Δ. There exists an isomorphism *φ*1 : Δ1 → Δ. Hence *φ*2 :  = *φ*− 1*φ*1 : Δ2 → Δ is also an isomorphism. Fix *i* ∈ {1, 2}. Let Σ*i* :  = Σ*φ**i*− 1 and *w**i* :  = *u*0*φ**i*− 1, so that *V*Δ*i* = *V*Σ*i* ∪ {*w**i*}, and let *φ**i*ʹ :  = *φ**i*∣*V*Σ*i*. Now *φ**i*ʹ : Σ*i* → Σ is an isomorphism of induced subgraphs of Γ of order *k* − 1, so there exists *g**i* ∈ *G* that extends *φ**i*ʹ. Let *u**i* :  = *w**i**g**i*. Now *u**i* ∈ *X* since Γ(*u*0) ∩ *V*Σ = (Γ(*w**i*) ∩ *V*Σ*i*)*φ**i*ʹ = (Γ(*w**i*) ∩ *V*Σ*i*)*g**i* = Γ(*u**i*) ∩ *V*Σ. By assumption, there exists *g* ∈ *P* such that *u*1*g* = *u*2. Further, *g* extends the isomorphism *g*1− 1*φ**g*2 : Δ1*g*1 → Δ2*g*2 since (*V*Δ*i*)*g**i* = *V*Σ ∪ {*u**i*}. Thus *g*1*g**g*2− 1 extends *φ*, and (i) holds. Note that for any connected graph Δ of order *k* ≥ 2, we can always find a connected induced subgraph Σ of Δ of order *k* − 1: choose *u*, *v* ∈ *V*Δ such that *d*Δ(*u*, *v*) = diam(Δ) and take Σ to be the graph induced by *V*Δ \ {*u*}. The next two results will be instrumental in our proof of Theorem [thm:connected]. [lemma:x] Let Γ be a (*G*, 4)-CH graph. If there exists *u* ∈ *V*Γ, *v* ∈ Γ(*u*) and *w* ∈ Γ2(*u*) ∩ Γ(*v*) such that *G**u*, *v*, *w* fixes some *x* ∈ Γ(*u*) ∩ Γ2(*v*), then the following hold. * The graph induced by Γ(*u*) ∩ Γ2(*v*) is either edgeless or complete. * Γ(*u*) ∩ Γ2(*v*) ∩ Γ(*w*) is either {*x*} or (Γ(*u*) ∩ Γ2(*v*)) \ {*x*}. Let *X* :  = Γ(*u*) ∩ Γ2(*v*). If *y*, *z* ∈ *X* ∩ Γ(*w*), then there is an isomorphism between the connected graphs induced by {*u*, *v*, *w*, *y*} and {*u*, *v*, *w*, *z*} that maps *y* to *z* and fixes *u*, *v* and *w*, so *G**u*, *v*, *w* acts transitively on *X* ∩ Γ(*w*). Similarly, *G**u*, *v*, *w* acts transitively on *X* \ Γ(*w*). Thus *G**u*, *v*, *w* has at most two orbits on *X*. By assumption, *G**u*, *v*, *w* ≤ *G**u*, *v*, *x*, so *G**u*, *v*, *x* has at most two orbits on *X*. If *G**u*, *v*, *x* is transitive on *X*, then *X* = {*x*}, so (i) and (ii) hold. Otherwise, *G**u*, *v*, *x* and *G**u*, *v*, *w* have the same orbits on *X*, namely {*x*} and *X* \ {*x*}, so (ii) holds. Further, since *G**u*, *v*, *x* is transitive on *X* \ {*x*}, either *x* has no neighbours in *X* \ {*x*}, or *x* is adjacent to every vertex in *X* \ {*x*}. Since *G**u*, *v* acts transitively on *X*, it follows that (i) holds. The following property will enable us to state a useful consequence of Lemma [lemma:x]. [defn:x] A non-complete graph Γ has the *unique *x* property* if for some *u* ∈ *V*Γ, *v* ∈ Γ(*u*) and *w* ∈ Γ2(*u*) ∩ Γ(*v*), there exists a unique *x* ∈ Γ(*u*) ∩ Γ2(*v*) such that Γ(*u*) ∩ Γ(*v*) ∩ Γ(*w*) = Γ(*u*) ∩ Γ(*v*) ∩ Γ(*x*). [lemma:xplus] Let Γ be a graph in which Γ(*u*ʹ) ∩ Γ2(*v*ʹ) is neither edgeless nor complete for some *u*ʹ ∈ *V*Γ and *v*ʹ ∈ Γ(*u*ʹ). If Γ has the unique *x* property, then Γ is not 4-CH. Suppose that Γ is 4-CH and has the unique *x* property. There exist *u* ∈ *V*Γ, *v* ∈ Γ(*u*), *w* ∈ Γ2(*u*) ∩ Γ(*v*) and a unique *x* ∈ Γ(*u*) ∩ Γ2(*v*) such that Γ(*u*) ∩ Γ(*v*) ∩ Γ(*w*) = Γ(*u*) ∩ Γ(*v*) ∩ Γ(*x*). By assumption, Γ(*u*ʹ) ∩ Γ2(*v*ʹ) is neither edgeless nor complete for some *u*ʹ ∈ *V*Γ and *v*ʹ ∈ Γ(*u*ʹ), and since Γ is 2-CH, it follows that Γ(*u*) ∩ Γ2(*v*) is neither edgeless nor complete. If *g* ∈ Aut(Γ)*u*, *v*, *w*, then *x**g* ∈ Γ(*u*) ∩ Γ2(*v*) and Γ(*u*) ∩ Γ(*v*) ∩ Γ(*w*) = Γ(*u*) ∩ Γ(*v*) ∩ Γ(*x**g*), so *x**g* = *x*, but this is impossible by Lemma [lemma:x]. When the local graph of Γ has diameter 2, Lemma [lemma:xplus] says the following: if the *μ*-graph in Γ(*v*) of some *u*, *w* ∈ Γ(*v*) is also the *μ*-graph in Γ(*u*) of *v* and exactly one other vertex *x* ∈ Γ(*u*), then either Γ(*u*) ∩ Γ2(*v*) is edgeless or complete, or Γ is not 4-CH. One of the immediate consequences of the definition of 2-homogeneity is that every connected non-complete 2-homogeneous graph has diameter 2 (see Lemma [lemma:2homsimple]). However, this need not be the case for 4-CH graphs: using Lemma [lemma:detkCH], it is routine to verify that the *n*-cube is a 4-CH graph with diameter *n*. We now establish some sufficient conditions for a connected 4-CH graph to have diameter 2. [lemma:1fordiam2] Let Γ be a connected 4-CH graph. If there exists *u* ∈ *V*Γ, *v* ∈ Γ(*u*) and *w* ∈ Γ2(*u*) ∩ Γ(*v*) such that $\Gamma(u)\cap\Gamma\_2(v)\cap\Gamma(w)\neq \varnothing$ and the graph induced by Γ(*u*) ∩ Γ2(*v*) is connected, then diam(Γ) = 2. Let *G* :  = Aut(Γ). Suppose for a contradiction that Γ contains a 3-geodesic. By assumption, there exists *x* ∈ Γ(*u*) ∩ Γ2(*v*) ∩ Γ(*w*). Now *G* acts transitively on the set of 2-geodesics in Γ, and (*v*, *w*, *x*) is a 2-geodesic, so there exists *y* ∈ *V*Γ such that (*v*, *w*, *x*, *y*) is a 3-geodesic. Suppose that *z* is a neighbour of *x* in Γ(*u*) ∩ Γ2(*v*). If *z* is not adjacent to *w*, then the subgraphs induced by {*v*, *w*, *x*, *y*} and {*v*, *w*, *x*, *z*} are isomorphic, so there exists *g* ∈ *G**v*, *w*, *x* such that *y**g* = *z*, but *d*Γ(*v*, *y*) = 3 while *d*Γ(*v*, *z*) = 2, a contradiction. Since the graph induced by Γ(*u*) ∩ Γ2(*v*) is connected, it follows that *w* is adjacent to every vertex in Γ(*u*) ∩ Γ2(*v*). Thus Γ(*u*) ∩ Γ2(*v*) ⊆ Γ(*w*) ∩ Γ2(*v*). Since (*u*, *v*, *w*) is a 2-geodesic, there exists *g* ∈ *G**v* such that *u**g* = *w* and *w**g* = *u*, and it follows that Γ(*u*) ∩ Γ2(*v*) = Γ(*w*) ∩ Γ2(*v*). Further, there exists *y*ʹ ∈ *V*Γ such that (*u*, *v*, *w*, *y*ʹ) is a 3-geodesic, but then *y*ʹ ∈ (Γ(*w*) ∩ Γ2(*v*)) \ Γ(*u*), a contradiction. Note that a graph Γ has an induced 4-cycle if and only if there exists *u* ∈ *V*Γ, *v* ∈ Γ(*u*) and *w* ∈ Γ2(*u*) ∩ Γ(*v*) such that $\Gamma(u)\cap\Gamma\_2(v)\cap\Gamma(w)\neq \varnothing$ (as in the statement of Lemma [lemma:1fordiam2]), and this occurs precisely when some *μ*-graph of Γ is not complete. [lemma:2fordiam2] Let Γ be a 3-CH graph where $\Gamma(u)\cap\Gamma\_2(v)\cap\Gamma(w)=\varnothing$ for some *u* ∈ *V*Γ, *v* ∈ Γ(*u*) and *w* ∈ Γ2(*u*) ∩ Γ(*v*). Then every *μ*-graph of Γ(*u*) is complete. By assumption, *v* is adjacent to every vertex in Γ(*u*) ∩ Γ(*w*) \ {*v*}, but Aut(Γ)*u*, *w* is transitive on Γ(*u*) ∩ Γ(*w*), so the graph induced by Γ(*u*) ∩ Γ(*w*) is complete. It follows that every *μ*-graph of Γ is complete. Let Σ be the graph induced by Γ(*u*), and let *x* ∈ *V*Σ and *y* ∈ Σ2(*x*). Now Γ(*x*) ∩ Γ(*y*) induces a complete graph, so Σ(*x*) ∩ Σ(*y*) does as well. [lemma:diam2] Let Γ be a connected 4-CH locally Σ graph, where Σ is a connected graph for which there exists *v* ∈ *V*Σ and *x* ∈ Σ2(*v*) such that the graph induced by Σ2(*v*) is connected and the graph induced by Σ(*v*) ∩ Σ(*x*) is not complete. Then diam(Γ) = 2. Let *u* ∈ *V*Γ, and view *v* as a vertex in Γ(*u*). There exists *w* ∈ Γ2(*u*) ∩ Γ(*v*). Some *μ*-graph of Σ is not complete by assumption, so Lemma [lemma:2fordiam2] implies that $\Gamma(u)\cap\Gamma\_2(v)\cap\Gamma(w)\neq \varnothing$. Since Σ has diameter 2 by Lemma [lemma:3CH], the graphs induced by Σ2(*v*) and Γ(*u*) ∩ Γ2(*v*) are isomorphic, so Γ(*u*) ∩ Γ2(*v*) induces a connected graph. Thus diam(Γ) = 2 by Lemma [lemma:1fordiam2]. Lemma [lemma:diam2] does not hold for 3-CH graphs: using Lemma [lemma:detkCH], it is routine to verify that the halved *n*-cube is a 3-CH graph with diameter ⌊*n*/2⌋ whose local graph satisfies the conditions of Lemma [lemma:diam2] for *n* ≥ 4. If Γ is a locally finite connected 4-CH graph whose local graph Σ satisfies the conditions of Lemma [lemma:diam2], then Γ is a finite 2-homogeneous graph and therefore known (see §[s:results]). In fact, it turns out that most finite 3-homogeneous graphs satisfy the conditions on Σ in Lemma [lemma:diam2], so we could prove Theorem [thm:connected] using a case-by-case analysis of the finite 2-homogeneous graphs. However, there are many more families of finite 2-homogeneous graphs than finite 3-homogeneous graphs (see ), and we prefer the more direct and elementary approach provided by Lemma [lemma:x]. Next we have some results that will be useful in the locally disconnected case when *c*2 = 1. Note that in both of these results, we are not necessarily assuming that diam(Γ) is finite, but since *k* is an integer, the parameter *m* is also an integer. [lemma:ctrick] Let Γ be a connected *k*-CH graph that is locally (*t* + 1) ⋅ *K**s* for positive integers *t* and *s*. Let *m* :  = min(diam(Γ), *k* − 1) and suppose that *c**i*(Γ) = 1 for some 1 < *i* < *m* (so *k* ≥ 4). Let *u* ∈ *V*Γ, *v* ∈ Γ*i*(*u*) and *y* ∈ Γ*i* − 1(*u*) ∩ Γ(*v*). Then Γ*i*(*u*) ∩ Γ(*v*) = Γ(*v*) ∩ Γ(*y*). There exists a unique clique C of size *s* + 1 containing *v* and *y*, and Γ*i* − 1(*u*) ∩ Γ(*v*) = {*y*} since *c**i* = 1, so Γ(*v*) ∩ Γ(*y*) = C \ {*y*, *v*} ⊆ Γ*i*(*u*) ∩ Γ(*v*). There exists a path (*u*0, …, *u**i*) where *u*0 = *u*, *u**i* − 1 = *y* and *u**i* = *v*, and since diam(Γ) > *i* and Γ is (*i* + 1)-CH, there exists *w* ∈ Γ*i* + 1(*u*) ∩ Γ(*v*). If there exists *x* ∈ Γ*i*(*u*) ∩ Γ(*v*) \ Γ(*y*), then {*u*0, …, *u**i*, *x*} and {*u*0, …, *u**i*, *w*} induce isomorphic subgraphs of Γ with order *i* + 2, and *i* + 2 ≤ *k*, so there exists *g* ∈ Aut(Γ)*u*0, …, *u**i* such that *x**g* = *w*, but *d*Γ(*u*, *x*) = *i* while *d*Γ(*u*, *w*) = *i* + 1, a contradiction. Thus Γ*i*(*u*) ∩ Γ(*v*) = Γ(*v*) ∩ Γ(*y*). [lemma:c2=1] Let Γ be a connected *k*-CH graph that is locally (*t* + 1) ⋅ *K**s* for positive integers *t* and *s* where *k* ≥ 3 and *c*2(Γ) = 1. Let *m* :  = min(diam(Γ), *k* − 1). Then the following hold. * *c**i*(Γ) = 1 and *a**i*(Γ) = *s* − 1 for 1 ≤ *i* < *m*. * For *u*, *v* ∈ *V*Γ such that *d*Γ(*u*, *v*) = *m*, the set Γ*m* − 1(*u*) ∩ Γ(*v*) induces *c**m*(Γ) ⋅ *K*1. * If diam(Γ) ≤ *k* − 1, then Γ is distance-transitive. * If diam(Γ) < *k* − 1, then either *c**m*(Γ) = 1, or *s* = 1 and *c**m*(Γ) = *t* + 1. * If 3 ≤ diam(Γ) < *k* − 2, then *s* = 1, and either Γ ≃ *C*2*m* + 1 or *c**m*(Γ) = *t* + 1. Let *G* :  = Aut(Γ). Since Γ is (*m* + 1)-CH, the parameters *a**i* and *c**i* are defined for 1 ≤ *i* ≤ *m*, and (iii) holds. By assumption, *a*1 = *s* − 1 and *c*1 = *c*2 = 1. First we prove that (i) holds. Suppose for a contradiction that *c**i* − 1 = 1 for some 2 < *i* < *m* (so *k* ≥ 5) but *c**i* ≠ 1. Then there exists *u* ∈ *V*Γ, *v* ∈ Γ*i*(*u*), *w* ∈ Γ*i* + 1(*u*) ∩ Γ(*v*), distinct *x*, *y* ∈ Γ*i* − 1(*u*) ∩ Γ(*v*), and a path (*u*0, …, *u**i*) where *u*0 = *u*, *u**i* − 1 = *y* and *u**i* = *v*. Since *c*2 = 1, *x* is not adjacent to *u**i* − 2. Further, *x* is not adjacent to *y* since Γ*i* − 1(*u*) ∩ Γ(*y*) = Γ(*y*) ∩ Γ(*u**i* − 2) by Lemma [lemma:ctrick]. Hence {*u*0, …, *u**i*, *x*} and {*u*0, …, *u**i*, *w*} induce path graphs with order *i* + 2, and *i* + 2 ≤ *k*, so there exists *g* ∈ *G**u*0, …, *u**i* such that *x**g* = *w*, a contradiction. Thus *c**i* = 1 for 1 ≤ *i* < *m*. In particular, *a**i* = *s* − 1 for 1 < *i* < *m* by Lemma [lemma:ctrick]. Since *a*1 = *s* − 1, (i) holds. For the remainder of the proof, let (*u*0, *u*1, …, *u**m*) be a geodesic in Γ. Let *u* :  = *u*0, *v* :  = *u**m* and *y* :  = *u**m* − 1. Note that *y* ∈ Γ*m* − 1(*u*) ∩ Γ(*v*). First we claim that *y* has no neighbours in Γ*m* − 1(*u*) ∩ Γ(*v*), in which case (ii) holds since ∣Γ*m* − 1(*u*) ∩ Γ(*v*)∣ = *c**m*. If *m* = 2, then the claim is trivial since *c*2 = 1. Suppose instead that *m* ≥ 3. By (i), *c**m* − 1 = 1, so Γ*m* − 1(*u*) ∩ Γ(*y*) = Γ(*y*) ∩ Γ(*u**m* − 2) by Lemma [lemma:ctrick]. Suppose for a contradiction that *x* is a neighbour of *y* in Γ*m* − 1(*u*) ∩ Γ(*v*). Now *x* is adjacent to *u**m* − 2, but then *u**m* − 2 and *v* are vertices at distance 2 in Γ with common neighbours *y* and *x*, contradicting our assumption that *c*2 = 1. Thus the claim holds. Next we prove that (iv) holds. Suppose that diam(Γ) < *k* − 1 and *c**m* > 1. Now there exists *x* ∈ Γ*m* − 1(*u*) ∩ Γ(*v*) such that *x* ≠ *y*. Observe that {*u*0, …, *u**m*, *x*} induces a path graph by (ii) and the fact that *c*2 = 1. If there exists *w* ∈ (Γ*m*(*u*) ∩ Γ(*v*)) \ Γ(*y*), then {*u*0, …, *u**m*, *w*} also induces a path graph, so there exists *g* ∈ *G**u*0, …, *u**m* such that *x**g* = *w*, a contradiction. Thus Γ*m*(*u*) ∩ Γ(*v*) ⊆ Γ(*y*). Similarly, Γ*m*(*u*) ∩ Γ(*v*) ⊆ Γ(*x*), but Γ(*y*) ∩ Γ(*x*) = {*v*} since *d*Γ(*x*, *y*) = 2, so $\Gamma\_m(u)\cap \Gamma(v)=\varnothing$. It then follows from (ii) that *s* = 1 and *c**m* = *t* + 1, so (iv) holds. Finally, we prove that (v) holds. Suppose that 3 ≤ diam(Γ) < *k* − 2. If *s* = 1 and *c**m* = *t* + 1, then (v) holds, so we may assume otherwise. Then *c**m* = 1 by (iv). Now there exists *x* ∈ (Γ(*v*) ∩ Γ*m*(*u*)) \ Γ(*y*). Let *z* be the unique vertex in Γ*m* − 1(*u*) ∩ Γ(*x*). Since *c**m* = 1 and *c*2 = 1, the set {*y*, *v*, *x*, *z*} induces a path graph. By Lemma [lemma:ctrick], Γ2(*v*) ∩ Γ(*u**m* − 2) = Γ(*u**m* − 2) ∩ Γ(*y*), so *z* is not adjacent to *u**m* − 2. Thus {*u*0, …, *u**m*, *x*, *z*} induces a path graph. If *s* ≥ 2, then there exists *w* ∈ Γ(*x*) ∩ Γ(*z*), and *w* ∈ Γ*m*(*u*), but *w* is not adjacent to *v* or *y* since *c*2 = 1, so {*u*0, …, *u**m*, *x*, *w*} also induces a path graph, in which case there exists *g* ∈ *G**u*0, …, *u**m*, *x* such that *z**g* = *w*, a contradiction. Thus *s* = 1. If *t* ≥ 2, then there exists *w* ∈ Γ(*x*) \ {*v*, *z*}. Again *w* ∈ Γ*m*(*u*) and *w* is not adjacent to *v* or *y*, so {*u*0, …, *u**m*, *x*, *w*} induces a path graph, a contradiction as above. Hence *t* = 1, so Γ ≃ *C**n* for some *n*. Since Γ has diameter *m* and *c**i* = 1 for 1 ≤ *i* ≤ *m*, it follows that *n* = 2*m* + 1. One consequence of the classification of the locally finite CH graphs  (see Theorem [thm:CH]) is that the only locally finite, connected, locally disconnected CH graphs with girth 3 and *c*2 = 1 are halved graphs of the biregular tree *T**t* + 1, *s* + 1. In particular, no such graph is finite. For graphs with diameter at least 3, these facts can be deduced directly from Lemma [lemma:c2=1]. [lemma:cyclebig] Let Γ be a connected *k*-CH graph that is locally (*t* + 1) ⋅ *K**s* where *t* ≥ 1, *s* ≥ 2, *c*2(Γ) = 1 and diam(Γ) ≥ 3. If some induced subgraph of Γ is isomorphic to *C**r* for some *r* > 3, then *r* ≥ *k* + 2. Let Δ be an induced subgraph of Γ that is isomorphic to *C**r* where *r* > 3. Since *c*2(Γ) = 1, *r* ≥ 5. If *k* ≤ 3, then *r* ≥ *k* + 2, as desired, so we assume that *k* ≥ 4. Write *r* = 2*n* + 1 or 2*n* + 2 where *n* ≥ 2. Label the vertices of *V*Δ as follows: choose *u* ∈ *V*Δ, and write Δ*i*(*u*) = {*x**i*, *y**i*} for 1 ≤ *i* ≤ *n*, where *x**i* is adjacent to *x**i* + 1 for 1 ≤ *i* ≤ *n* − 1, and therefore *y**i* is adjacent to *y**i* + 1 for 1 ≤ *i* ≤ *n* − 1. If *r* = 2*n* + 1, then *x**n* is adjacent to *y**n*, while if *r* = 2*n* + 2, then Δ*n* + 1(*u*) = {*z*} where *z* is adjacent to *x**n* and *y**n*. Let *m* :  = min(diam(Γ), *k* − 1), and recall that *m* ≥ 3. Since *s* ≥ 2 and diam(Γ) ≥ 3, Lemma [lemma:c2=1](iv) and (v) imply that either *m* ≥ *k* − 1, or *m* = *k* − 2 and *c**m*(Γ) = 1 (in which case *k* ≥ 5). In particular, 2*m* + 1 ≥ *k* + 2. Hence if *n* ≥ *m*, then *r* ≥ 2*m* + 1 ≥ *k* + 2, as desired, so we assume instead that *n* < *m*. Observe that *x*2, *y*2 ∈ Γ2(*u*) since *V*Δ induces *C**r* where *r* > 3. Suppose that *x**i* ∈ Γ*i*(*u*) for some 2 ≤ *i* < *n*. Now *x**i* − 1 ∈ Γ*i* − 1(*u*) ∩ Γ(*x**i*). Further, *c**i*(Γ) = 1 by Lemma [lemma:c2=1](i), so $x\_{i+1}\notin \Gamma\_{i-1}(u)$, and if *x**i* + 1 ∈ Γ*i*(*u*), then *x**i* + 1 ∈ Γ*i*(*u*) ∩ Γ(*x**i*) = Γ(*x**i*) ∩ Γ(*x**i* − 1) by Lemma [lemma:ctrick], a contradiction since *V*Δ induces *C**r*. Thus *x**i* + 1 ∈ Γ*i* + 1(*u*). It follows that *x**j*, *y**j* ∈ Γ*j*(*u*) for 1 ≤ *j* ≤ *n*. By Lemma [lemma:c2=1](i), *c**n*(Γ) = 1. If *r* = 2*n* + 1, then *y**n* is adjacent to *x**n*, but then *y**n* is adjacent to *x**n* − 1 by Lemma [lemma:ctrick], a contradiction. Thus *r* = 2*n* + 2, and by a similar argument, $z\notin \Gamma\_n(u)$. Since *c**n*(Γ) = 1, it follows that *z* ∈ Γ*n* + 1(*u*). In particular, *c**n* + 1(Γ) ≥ 2. If *n* + 1 < *m*, then *c**n* + 1(Γ) = 1 by Lemma [lemma:c2=1](i), a contradiction. Thus *m* = *n* + 1. We saw above that either *m* ≥ *k* − 1, or *m* = *k* − 2 and *c**m*(Γ) = 1. Since *c**m*(Γ) ≠ 1, we conclude that *m* ≥ *k* − 1. Thus *r* = 2*n* + 2 = 2*m* ≥ 2(*k* − 1) ≥ *k* + 2, as desired. Families of graphs with the unique *x* property =============================================== In this section, we give five different sets of combinatorial conditions on the local structure of a graph Γ which guarantee that Γ has the unique *x* property (see Definition [defn:x]). These will be used to prove Theorem [thm:connected] in conjunction with Lemma [lemma:xplus]. Recall that, by our definition, all strongly regular graphs are finite non-complete graphs. In particular, a strongly regular graph with *μ* > 0 is connected with diameter 2. [lemma:2K1] Let Γ be a locally Σ graph where Σ is a strongly regular graph with *μ*(Σ) > 0 in which every *μ*-graph is 2 ⋅ *K*1. Then Γ has the unique *x* property. Let *u* ∈ *V*Γ, *v* ∈ Γ(*u*) and *w* ∈ Γ2(*u*) ∩ Γ(*v*). Since the graph induced by Γ(*v*) has diameter 2, it follows that the graph Δ induced by Γ(*u*) ∩ Γ(*v*) ∩ Γ(*w*) is a *μ*-graph of Γ(*v*) and is therefore isomorphic to 2 ⋅ *K*1. Let Σ denote the graph induced by Γ(*u*). Let *V*Δ = {*y*, *z*}, and note that *d*Σ(*y*, *z*) = 2. Now *v* ∈ Σ(*y*) ∩ Σ(*z*) and Σ(*y*) ∩ Σ(*z*) ≃ 2 ⋅ *K*1, so Σ(*y*) ∩ Σ(*z*) = {*v*, *x*} for some *x* ∈ Σ2(*v*). Further, Σ(*v*) ∩ Σ(*x*) is also isomorphic to 2 ⋅ *K*1, so Σ(*v*) ∩ Σ(*x*) = {*y*, *z*}. Thus *x* ∈ Γ(*u*) ∩ Γ2(*v*) and *V*Δ = {*y*, *z*} = Γ(*u*) ∩ Γ(*v*) ∩ Γ(*x*), and *x* is the unique such vertex. [lemma:k2=1] Let Γ, Σ and Δ be graphs with the following properties. * Γ is locally Σ, and every *μ*-graph of Σ is isomorphic to Δ. * Σ is strongly regular with *μ*(Σ) > 0. * Δ is regular with diameter 2 and *k*2(Δ) = 1. Then Γ has the unique *x* property. Let *u* ∈ *V*Γ, *v* ∈ Γ(*u*) and *w* ∈ Γ2(*u*) ∩ Γ(*v*). The graph Γ(*v*) has diameter 2, so Γ(*u*) ∩ Γ(*v*) ∩ Γ(*w*) is a *μ*-graph of Γ(*v*) and is therefore isomorphic to Δ. For the remainder of the proof, we write Σ for the graph induced by Γ(*u*) and Δ for the graph induced by Γ(*u*) ∩ Γ(*v*) ∩ Γ(*w*). Let *y* ∈ *V*Δ. Since *k*2(Δ) = 1, there exists a unique *z* ∈ Δ2(*y*). Now Σ(*y*) ∩ Σ(*z*) ≃ Δ and *v* ∈ Σ(*y*) ∩ Σ(*z*), so there is a unique *x* at distance 2 from *v* in Σ(*y*) ∩ Σ(*z*). Then Σ(*v*) ∩ Σ(*x*) ≃ Δ, and *y* and *z* are at distance 2 in Σ(*v*) ∩ Σ(*x*). Note that Δ is finite since Σ is finite by definition. Since Δ is regular, ∣Σ(*v*) ∩ Σ(*y*) ∩ Σ(*x*)∣ = ∣Δ(*y*)∣ = ∣Σ(*v*) ∩ Σ(*y*) ∩ Σ(*z*)∣. Further, since *k*2(Δ) = 1, both Σ(*v*) ∩ Σ(*y*) ∩ Σ(*x*) and Δ(*y*) are subsets of Σ(*v*) ∩ Σ(*y*) ∩ Σ(*z*). Thus Δ(*y*) = Σ(*v*) ∩ Σ(*y*) ∩ Σ(*x*). It follows that *V*Δ = {*y*, *z*} ∪ Δ(*y*) = Σ(*v*) ∩ Σ(*x*) = Γ(*u*) ∩ Γ(*v*) ∩ Γ(*x*). It remains to show that *x* is the unique vertex in Γ(*u*) ∩ Γ2(*v*) such that *V*Δ = Γ(*u*) ∩ Γ(*v*) ∩ Γ(*x*). If *x*ʹ is such a vertex, then *x*ʹ ∈ Σ2(*v*) ∩ Σ(*y*) ∩ Σ(*z*) = {*x*}. [lemma:polar] Let Γ, Σ and Δ be graphs with the following properties. * Γ is locally Σ, and every *μ*-graph of Σ is isomorphic to Δ. * Σ is strongly regular with *λ*(Σ) = *μ*(Σ) − 2. * Δ has diameter 3 with *k*1(Δ) = *k*2(Δ) and *k*3(Δ) = 1. * For any distinct non-adjacent *y*, *z* ∈ *V*Σ, if *v* and *x* are vertices at distance 3 in the graph induced by Σ(*y*) ∩ Σ(*z*), then *y* and *z* are at distance 3 in the graph induced by Σ(*v*) ∩ Σ(*x*). Then Γ has the unique *x* property. Let *u* ∈ *V*Γ, *v* ∈ Γ(*u*) and *w* ∈ Γ2(*u*) ∩ Γ(*v*). The graph Γ(*v*) has diameter 2, so Γ(*u*) ∩ Γ(*v*) ∩ Γ(*w*) is a *μ*-graph of Γ(*v*) and is therefore isomorphic to Δ. For the remainder of the proof, we write Σ for the graph induced by Γ(*u*) and Δ for the graph induced by Γ(*u*) ∩ Γ(*v*) ∩ Γ(*w*). Let *y* ∈ *V*Δ. Since *k*3(Δ) = 1, there exists a unique *z* ∈ Δ3(*y*). Now Σ(*y*) ∩ Σ(*z*) ≃ Δ and *v* ∈ Σ(*y*) ∩ Σ(*z*), so there is a unique *x* at distance 3 from *v* in the graph induced by Σ(*y*) ∩ Σ(*z*). Then Σ(*v*) ∩ Σ(*x*) ≃ Δ, and *y* and *z* are at distance 3 in the graph induced by Σ(*v*) ∩ Σ(*x*) by assumption. We claim that Δ(*y*) = Σ(*v*) ∩ Σ(*y*) ∩ Σ(*x*). Since *v* and *y* are adjacent, ∣Σ(*v*) ∩ Σ(*y*)∣ = *λ*(Σ). Now 2 + 2∣Δ(*y*)∣ = ∣*V*Δ∣ = *μ*(Σ) and *λ*(Σ) = *μ*(Σ) − 2, so ∣Δ(*y*)∣ = *λ*(Σ)/2. Since Δ is regular, ∣Σ(*v*) ∩ Σ(*y*) ∩ Σ(*x*)∣ = *λ*(Σ)/2 = ∣Σ(*v*) ∩ Σ(*y*) ∩ Σ(*z*)∣. Since Σ has diameter 2, any vertex in Σ(*v*) ∩ Σ(*y*) lies in Σ(*z*) or Σ2(*z*). Thus ∣Σ(*v*) ∩ Σ(*y*) ∩ Σ2(*z*)∣ = *λ*(Σ)/2. Now Σ(*v*) ∩ Σ(*y*) ∩ Σ(*x*) and Δ(*y*) are both subsets of Σ(*v*) ∩ Σ(*y*) ∩ Σ2(*z*), so the claim follows. By exchanging the roles of *y* and *z* in the above proof, we also obtain Δ(*z*) = Σ(*v*) ∩ Σ(*z*) ∩ Σ(*x*). Thus *V*Δ = {*y*, *z*} ∪ Δ(*y*) ∪ Δ(*z*) = Σ(*v*) ∩ Σ(*x*) = Γ(*u*) ∩ Γ(*v*) ∩ Γ(*x*). It remains to show that *x* is the unique vertex in Γ(*u*) ∩ Γ2(*v*) such that *V*Δ = Γ(*u*) ∩ Γ(*v*) ∩ Γ(*x*). If *x*ʹ is another such vertex, then *V*Δ ⊆ Σ(*x*ʹ), so *x*ʹ ∈ Σ(*y*) ∩ Σ(*z*) \ ({*x*, *v*} ∪ Σ(*v*)), in which case *x*ʹ ∈ Σ(*x*), but then ∣Σ(*x*) ∩ Σ(*x*ʹ)∣ = *λ*(Σ) < *μ*(Σ) = ∣*V*Δ∣, contradicting *V*Δ ⊆ Σ(*x*) ∩ Σ(*x*ʹ). [lemma:polarcomp] Let Γ, Σ and Δ be graphs with the following properties. * Γ is locally Σ, and every *μ*-graph of Σ is isomorphic to Δ. * Σ is strongly regular with *μ*(Σ) > 0. * Δ is regular and for any *y* ∈ *V*Δ, there exists a unique *z* ∈ Δ2(*y*) such that Δ(*y*) = Δ(*z*). * Σ has valency *μ*(Σ) + 2*λ*(Σ) − 2*k*1(Δ). Then Γ has the unique *x* property. Let *u* ∈ *V*Γ, *v* ∈ Γ(*u*) and *w* ∈ Γ2(*u*) ∩ Γ(*v*). The graph Γ(*v*) has diameter 2, so Γ(*u*) ∩ Γ(*v*) ∩ Γ(*w*) is a *μ*-graph of Γ(*v*) and is therefore isomorphic to Δ. For the remainder of the proof, we write Σ for the graph induced by Γ(*u*); Δ for the graph induced by Γ(*u*) ∩ Γ(*v*) ∩ Γ(*w*); and Π for the graph induced by Σ(*v*). Let *y* ∈ *V*Δ. By assumption there exists a unique *z* ∈ Δ2(*y*) such that Δ(*y*) = Δ(*z*). Now the graph Λ induced by Σ(*y*) ∩ Σ(*z*) is isomorphic to Δ, and *v* ∈ Σ(*y*) ∩ Σ(*z*), so there exists a unique *x* ∈ Λ2(*v*) such that Λ(*v*) = Λ(*x*). Then the graph Θ induced by Σ(*v*) ∩ Σ(*x*) is isomorphic to Δ, and Δ(*y*) ⊆ Λ(*v*) ⊆ Θ(*y*) ∩ Θ(*z*), so Δ(*y*) = Λ(*v*) = Θ(*y*) = Θ(*z*). In particular, Π(*y*) ∩ Π(*z*) = Σ(*v*) ∩ Σ(*y*) ∩ Σ(*z*) = Λ(*v*) = Δ(*y*). Let *X* :  = *V*Θ \ ({*y*, *z*} ∪ Θ(*y*)), *Y* :  = *V*Δ \ ({*y*, *z*} ∪ Δ(*y*)) and *Z* :  = *V*Π \ ({*y*, *z*} ∪ Π(*y*) ∪ Π(*z*)). Note that *X*, *Y* ⊆ *Z*. Now ∣*X*∣ = *μ*(Σ) − (*k*1(Δ) + 2) = ∣*Y*∣ and ∣*Z*∣ = ∣*V*Π∣ − (2*λ*(Σ) − *k*1(Δ) + 2), but ∣*V*Π∣ = *μ*(Σ) + 2*λ*(Σ) − 2*k*1(Δ) by assumption, so ∣*X*∣ = ∣*Y*∣ = ∣*Z*∣. Thus *X* = *Z* = *Y*, so *V*Δ = *V*Θ = Γ(*u*) ∩ Γ(*v*) ∩ Γ(*x*). It remains to show that *x* is the unique vertex in Γ(*u*) ∩ Γ2(*v*) such that *V*Δ = Γ(*u*) ∩ Γ(*v*) 
arxiv_0000094
Rank Bounds for Design Matrices with Applications to Combinatorial Geometry and Locally Correctable Codes ========================================================================================================= A (*q*, *k*, *t*)-design matrix is an *m* × *n* matrix whose pattern of zeros/non-zeros satisfies the following design-like condition: each row has at most *q* non-zeros, each column has at least *k* non-zeros and the supports of every two columns intersect in at most *t* rows. We prove that for *m* ≥ *n*, the rank of any (*q*, *k*, *t*)-design matrix over a field of characteristic zero (or sufficiently large finite characteristic) is at least $$n - \left(\frac{qtn}{2k}\right)^2.$$ Using this result we derive the following applications: Impossibility results for 2-query LCCs over large fields. A 2-query locally correctable code (LCC) is an error correcting code in which every codeword coordinate can be recovered, probabilistically, by reading at most two other code positions. Such codes have numerous applications and constructions (with exponential encoding length) are known over finite fields of small characteristic. We show that infinite families of such linear 2-query LCCs *do not exist* over fields of characteristic zero or large characteristic regardless of the encoding length. Generalization of known results in combinatorial geometry. We prove a quantitative analog of the Sylvester-Gallai theorem: Let *v*1, …, *v**m* be a set of points in $\C^d$ such that for every *i* ∈ [*m*] there exists at least *δ**m* values of *j* ∈ [*m*] such that the line through *v**i*, *v**j* contains a third point in the set. We show that the dimension of {*v*1, …, *v**m*} is at most *O*(1/*δ*2). Our results generalize to the high dimensional case (replacing lines with planes, etc.) and to the case where the points are colored (as in the Motzkin-Rabin Theorem). Introduction ============ In this work we study what *combinatorial* properties of matrices guarantee high algebraic *rank*, where a property is combinatorial if it depends only on the zero/non-zero pattern of the matrix, and not on the values of its entries. This question has a rich history in mathematics (see Section [sec: rel work]), and some computer science motivations: Locally correctable codes. A *locally correctable code* is an error correcting code in which for every codeword *y*, given a corrupted version $\Tilde{y}$ of *y* and an index *i*, one can recover the correct value of *y**i* from $\Tilde{y}$ by looking only at very few coordinates of $\Tilde{y}$. It is an open question in coding theory to understand the tradeoffs between the fraction of errors, locality (number of coordinates read) and rate (ratio of message length to codeword length) of such codes, with very large gaps between the known upper bounds and lower bounds (see the survey ). The question is open even for linear codes, where the condition of being locally correctable turns out to be equivalent to the existence of low weight codewords in the dual codewords that are “well-spread” in some precise technical sense (see Section [sec-LCC]). Because of the relation between the rate of the code and its dual, the question becomes equivalent to asking whether this combinatorial “well-spreadness” condition guarantees high rank. Matrix rigidity. A longstanding question is to come up with an explicit matrix that is *rigid* in the sense that its rank cannot be reduced by changing a small number of its entries. Random matrices are extremely rigid, and sufficiently good explicit constructions will yield lower bounds for arithmetic circuits , though we are still very far from achieving this (see the survey ). One can hope that a combinatorial property guaranteeing large rank will be robust under small perturbations, and hence a matrix satisfying such a property will automatically be rigid. In both these cases it is crucial to obtain bounds on the rank that depend solely on the zero/non-zero pattern of the matrix, without placing any restrictions on the non-zero coefficients. For example, there are very strong bounds known for matrix rigidity under the restriction that the non-zero coefficients have bounded magnitude (see Chapter 3 in ), but they only imply lower bounds in a very restricted model. In fact, there is a relation between the two questions, and sufficiently good answers for the first question will imply answers for the second one . We stress that these two examples are in no way exhaustive. The interplay between combinatorial and algebraic properties of matrices is a fascinating question with many potential applications that is still very poorly understood. Our Results ----------- In this work we give a combinatorial property of complex matrices that implies high rank. While not strong enough to prove rigidity results, we are able to use it to obtain several applications in combinatorial geometry and locally correctable codes. Our main result is the following theorem, giving a lower bound on the rank of matrix whose non-zero pattern forms has certain combinatorial-design like properties in the sense that the sets of non-zero entries in each column have small intersections. (This theorem is restated as Theorem [thm-rankdesign].) [Rank bound for design matrices] [ithm:main] Let *m* ≥ *n*. We say that an *m* × *n* complex matrix *A* is a (*q*, *k*, *t*)-design matrix if every row of *A* has at most *q* non-zero entries, every column of *A* has at least *k* non-zeroes entries, and the supports of every two columns intersect in at most *t* rows. For every such *A*, $$\rank(A) \geq n - \left(\frac{q\cdot t \cdot n}{2k}\right)^2.$$ We also show that Theorem [ithm:main], and in fact any result connecting the zero/non-zero pattern to rank, can be made to hold over arbitrary characteristic zero fields and also over fields of sufficiently large (depending on *m*, *n*) finite characteristic. ### Applications to Combinatorial Geometry Our most immediate applications of Theorem [ithm:main] are to questions regarding line-point incidences. Results on line-point incidences have recently found use in the area of computational complexity in relation to pseudo-randomness and de-randomization. In this setting we have an arrangement of a finite number of points in real or complex space. Every such arrangement gives rise to a set of lines, namely, those lines that pass through at least two of the points in the arrangement. Information about these lines can be converted, in some cases, into information about the dimension of the set of points (i.e. the dimension of the space the points span). Our rank theorem can be used to derive generalizations for two well-known theorems in this area: the Sylvester-Gallai theorem and the Motzkin-Rabin theorem. #### Generalizing the Sylvester-Gallai Theorem. The Sylvester-Gallai (SG for short) theorem says that if *m* distinct points $v\_1,\ldots,v\_m \in \R^d$ are not collinear, then there exists a line that passes through exactly two of them. In its contrapositive form the SG theorem says that if for every *i* ≠ *j* the line through *v**i* and *v**j* passes through a third point *v**k*, then dim{*v*1, …, *v**m*} ≤ 1, where dim{*v*1, …, *v**m*} is the dimension of the smallest affine subspace containing the points. This theorem was first conjectured by Sylvester in 1893, proved (in dual form) by Melchior in 1940, and then independently conjectured by Erdos in 1943 and proved by Gallai in 1944. The SG theorem has several beautiful proofs and many generalizations, see the survey. Over the complex numbers the (tight) bound on the dimension is 2 instead of 1. The complex version was first proven by Kelly using a deep results from algebraic geometry, and more recently, an elementary proof was found by Elkies, Pretorius and Swanepoel who also proved it over the quaternions with an upper of 4 on the dimension. We say that the points *v*1, …, *v**m* (in $\R^d$ or $\C^d$) form a *δ*-SG configuration if for every *i* ∈ [*m*] there exists at least *δ**m* values of *j* ∈ [*m*] such that the line through *v**i*, *v**j* contains a third point in the set. Szemeredi and Trotter  showed that, when *δ* is larger than some absolute constant close to 1, then the dimension of a *δ*-SG configuration is at most one (over the reals). We show the following generalization of their result to arbitrary *δ* > 0 (and over the complex numbers). [Quantitative SG theorem][ithm:SG] If $v\_1,\ldots,v\_m \in \C^d$ is a *δ*-SG configuration then dim{*v*1, …, *v**m*} < 13/*δ*2. We note that one cannot replace the bound 13/*δ*2 of Theorem [ithm:SG] with 1 or even with any fixed constant, as one can easily create a *δ*-SG configuration of dimension roughly 2/*δ* by placing the points on 1/*δ* lines. This is analogous to error correcting codes, where once the fraction *δ* of agreement between the original and corrupted codeword drops below half there can be no unique decoding. In that sense our result can be thought of as a *list decoding* variant of the SG theorem, whereas the result of is its unique decoding variant. We also show an “average case” version of the SG theorem, proving a bound on the dimension of a large subset of the points under the assumption that there are many collinear triples (see Theorem [thm-SGav]). We also prove a version of Theorem [thm-deltaSG] with lines replaced by *k*-flats (*k*-dimensional affine subspaces). This generalizes a theorem of Hansen which deals with the case *α* = 1. The statement of this result is technical and so we give it in Section [sec-highdim] where it is also proven. Since our proofs use elementary (and purely algebraic) reductions to the rank theorem, they hold over arbitrary fields of characteristic zero or of sufficiently large finite characteristic. This is in contrast to many of the known proofs of such theorems which often rely on specific properties of the real (or complex) numbers. However, we currently do not recover the full version of the original SG theorem, in the sense that even for *δ* = 1 we do not get a bound of 1 (or 2 for complex numbers) on the dimension. (However, the term 13/*δ*2 can be improved a bit in the *δ* = 1 case to obtain a bound of 9 on the dimension.) #### Generalizing the Motzkin-Rabin Theorem. The Motzkin-Rabin (MR for short) theorem (see e.g. ) is an interesting variant of the Sylvester-Gallai theorem that states that if points $v\_1,\ldots,v\_m \in \R^d$ are colored either red or blue and there is no monochromatic line passing through at least two points, then they are all collinear. As in the SG theorem, we obtain a quantitative generalization of the MR theorem such that (letting *b* and *r* be the numbers of blue and red points respectively), if for every blue (resp. red) point *v*, there are *δ**b* blue (resp. *δ**r* red) points *v*ʹ where the line through *v* and *v*ʹ passes through a red (resp. blue) point, then dim{*v*1, …, *v**m*} ≤ *O*(1/*δ*4). We also prove a three colors variant of the MR theorem, showing that if *v*1, …, *v**m* are colored red, blue and green, and all lines are not monochromatic, then dim{*v*1, …, *v**m*} is at most some absolute constant. ### Locally Correctable Codes A (linear) *q* query locally correctable code ((*q*, *δ*)-LCC for short) over a field $\F$ is a subspace $C \subseteq \F^n$ such that, given an element $\Tilde{y}$ that disagrees with some *y* ∈ *C* in at most *δ**n* positions and an index *i* ∈ [*n*], one can recover *y**i* with, say, probability 0.9, by reading at most *q* coordinates of $\Tilde{y}$. Over the field of two elements $\F\_2$ the standard Hadamard code construction yields a (2,*δ*)-query LCC with dimension Ω(log(*n*)) for constant *δ* > 0 (see the survey ). In contrast we show that for every constant *δ* > 0 there do not exist infinite family of such codes over the complex numbers: [Impossibility of 2-query LCCs over $\C$] [ithm:lcc] If *C* is a 2-query LCC for *δ* fraction of errors over $\C$, then dim(*C*) ≤ *O*(1/*δ*9). We note that the Hadamard construction does yield a *locally decodable code* over the complex numbers with dimension Ω(log*n*). Locally decodable codes are the relaxation of a locally correctable codes where one only needs to be able to recover the coordinates of the original message as opposed to the codeword. Thus over the complex numbers, there is a very strong separation between the notions of locally decodable and locally correctable codes, whereas it is consistent with our knowledge that for, say, $\F\_2$ the rate/locality tradeoffs of both notions are the same. Related Work ------------ The idea to use matrix scaling to study structural properties of matrices was already present in. This work, which was also motivated by the problem of matrix rigidity, studies the presence of short cycles in the graphs of non-zero entries of a square matrix. A related line of work on the rank of ‘design’ matrices is the work emerging from Hamada’s conjecture. (See for a recent result and more references.) Here, a design matrix is defined using stricter conditions (each row/column has exactly the same number of non-zeros and the intersections are also all of the same size) which are more common in the literature dealing with combinatorial designs. In order to be completely consistent with this line of work we should have called our matrices ‘approximate-design’ matrices. We chose to use the (already overused) word ‘design’ to make the presentation more readable. We also note that considering approximate designs only makes our results stronger. Hamada’s conjecture states that of all zero/one matrices whose support comes from a design (in the stricter sense), the minimal rank is obtained by matrices coming from geometric designs (in our language, Reed-Muller codes). In contrast to this paper, the emphasis in this line of works is typically on small finite fields. We note here that the connection between Hamada’s conjecture and LCCs was already observed by Barkol, Ishai and Weinreb who also conjectured (over small fields) the ‘approximate-design’ versions which we prove here for large fields. Another place where the support of a matrix is connected to its rank is in graph theory where we are interested in minimizing the rank of a (square, symmetric) real matrix which has the same support as the adjacency matrix of a given graph. This line of work goes back for over fifty years and has many applications in graph theory. See for a recent survey on this topic. Over the reals we can also ask about the minimal rank of matrices with certain *sign-pattern*. That is, given a matrix over {1,  − 1}, what is the minimal rank of a matrix which has the same sign-pattern. This minimal rank is called the *sign-rank* of a matrix. The question of coming up with (combinatorial or otherwise) properties that imply high sign-rank is one of major importance and has strong connections to communication complexity, learning theory and circuit complexity, among others. For a recent work with plenty of references see. In particular we would like to mention a connection to the work of Forster on the sign-rank of the Hadamard matrix. (An earlier version of this work used a variant of a lemma from instead of the results of on matrix scaling to obtain our main result.) Organization ------------ In Section [sec-techniques] we give a high level overview of our techniques. In Section [sec-rank] we prove our main result on the rank of design matrices. In Section [sec-SG] we prove our quantitative variants of the Sylvester-Gallai theorem. In Section [sec-highdim] we prove the high-dimensional analog of Theorem [thm-deltaSG] where lines are replaced with flats. In Section [sec-MR] we prove our generalizations of the Motzkin-Rabin theorem. In Section [sec-LCC] we prove our results on locally correctable codes. In Section [sec-finite] we show how our results extend to other fields. We conclude in Section [sec-open] with a discussion of open problems. Our Techniques ============== We now give high-level proof overviews for some of our results. Rank Lower Bounds for Design Matrices ------------------------------------- Theorem [ithm:main] – the rank lower bound for design matrices – is proved in two steps. We now sketch the proof, ignoring some subtleties and optimizations. The proof starts with the observation that, as in the case of matrix rigidity and similar questions, the result is much easier to prove given a bound on the *magnitude* of the non-zero entries. Indeed, if *A* is a (*q*, *k*, *t*)-design matrix and all of its non-zero entries have absolute value in [1/*c*, 1] for some constant *c*, then the *n* × *n* matrix *M* = *A*\**A* is *diagonally dominant*, in the sense that for all *i* ≠ *j*, *m**i**i* ≥ *k*/*c*2 but ∣*m**i**j*∣ ≤ *t*. (Here *A*\* denotes the conjugate transpose of *A*.) Thus one can use known results on such matrices (e.g. ) to argue that $\rank(A) \geq \rank(M) \geq n - (ntc^2/k)^2$. Our main idea is to reduce to this case where the non-zero coefficients of *A* are (roughly) bounded using *matrix scaling*. A *scaling* $\Hat{A}$ of a matrix *A* is obtained by multiplying for all *i*, *j*, the *i*’th row of *A* by some positive number *ρ**i* and the *j*’th column of *A* by some positive number *γ**j*. Clearly, *A* and $\Hat{A}$ share the same rank and zero/non-zero pattern. We use known matrix-scaling results to show that every (*q*, *k*, *t*)-design matrix *A* has a scaling in which every entry has magnitude at most (roughly) 1 but its columns have norm at least (roughly) $\sqrt{k/q}$. We note that the typical application of matrix-scaling was with respect to the ℓ1-norm of the rows and columns. Here we take a different path: We use scaling with respect to ℓ2-norm. We defer the description of this step to Section [sec-rank] but the high level idea is to use a theorem of that shows that such a scaling exists (in fact without the dependence on *q*) if *A* had the property of not containing any large all-zero sub-matrix. While this property cannot be in general guaranteed, we show that by repeating some rows of *A* one can obtain a matrix *B* that has this property, and a scaling of *B* can be converted into a scaling of *A*. Since our lower bound on the entry *m**i**i* in the bounded coefficient case (where again *M* = *A*\**A*) only used the fact that the columns have large norms, we can use the same argument as above to lower bound the rank of *M*, and hence of *A*. Generalized Sylvester-Gallai Theorem ------------------------------------ Recall that the quantitative SG theorem (Theorem [ithm:SG]) states that every *δ*-SG configuration *v*1, …, *v**n*, has dimension at most 13/*δ*2. Our proof of Theorem [ithm:SG] uses Theorem [ithm:main] as follows. Suppose for starters that every one of these lines passed through *exactly* three points. Each such line induces an equation of the form *α**v**i* + *β**v**j* + *γ**v**k* = 0. Now for *m* = *δ**n*2, let *A* be the *m* × *n* matrix whose rows correspond to these equations. Since every two points participate in only one line, *A* will be a (3, *δ**n*, 1) design matrix, meaning that according to Theorem [ithm:main], *A*’s rank is at least $n-\left(\tfrac{3}{2\delta}\right)^2$. Since *A* times the matrix whose rows are *v*1, …, *v**n* is zero we have $\dim\{ v\_1,\ldots, v\_n \} \leq n - \rank(A)$. We thus get an upper bound of ⌊9/4⌋ = 2 on this dimension. To handle the case when some lines contain more than three points, we choose in some careful way from each line ℓ containing *r* points a subset of the $\binom{r}{3}$ equations of the form above that it induces on its points. We show that at some small loss in the parameters we can still ensure the set of equations forms a design, hence again deriving a lower bound on its rank via Theorem [ithm:main]. Our method extend also to an “average case” SG theorem (Theorem [thm-SGav]), where one only requires that the set of points supports many (i.e., Ω(*n*2)) collinear triples and that each pair of points appear together in a few collinear triples. In this case we are able to show that there is a subset of Ω(*n*) points whose span has dimension *O*(1). See Section [sec-SG] for more details. Our generalizations of the Motzkin-Rabin theorem follow from our theorem on *δ*-SG configurations via simple reductions (see Section [sec-MR]). Locally Correctable Codes ------------------------- At first sight, Theorem [ithm:lcc] – non existence of 2 query locally correctable codes over $\C$ – seems like it should be an immediate corollary of Theorem [ithm:SG]. Suppose that a code *C* maps $\C^d$ to $\C^n$, and let *v*1, …, *v**n* denote the rows of its generating matrix. That is, the code maps a message $x \in \C^d$ to the vector (⟨*v*1, *x*⟩, …, ⟨*v**n*, *x*⟩). The fact that *C* is a 2 query LCC for *δ* errors implies that for every such row *v**i*, there are roughly *δ**n* pairs *j*, *k* such that *v**i* is in the span of {*v**j*, *v**k*}. Using some simple scaling/change of basis, this gives precisely the condition of being a *δ*-SG configuration, save for one caveat: In a code there is no guarantee that all the vectors *v*1, …, *v**n* are distinct. That is, the code may have repeated coordinates that are always identical. Intuitively it seems that such repetitions should not help at all in constructing LCCs but proving this turned out to be elusive. In fact, our proof of Theorem [ithm:lcc] is rather more complicated than the proof Theorem [ithm:SG], involving repeated applications of Theorem [ithm:main] which result also in somewhat poorer quantitative bounds. The idea behind the proof to use a variant of the “average case” SG theorem to repeatedly find Ω(*n*) points among *v*1, …, *v**n* whose span has *O*(1) dimension, until there are no more points left. We defer all details to Section [sec-LCC]. Given Theorem [ithm:main], one may have expected that Theorem [ithm:lcc] could be extended for LCCs of any constant number *q* of queries. After all, the condition of *C* being an LCC intuitively seems like only a slight relaxation of requiring that the dual code of *C* has a generating matrix whose non-zero pattern is a combinatorial design, and indeed in known constructions of LCCs, the dual code does form a design. We are not, however, able to extend our results to 3 and more queries. A partial explanation to our inability is that 3 query LCCs give rise to configuration of planes (instead of lines) and point and planes exhibit much more complicated combinatorial properties than lines. Rank of Design Matrices ======================= In this section we prove our main result which gives a lower bound on the rank of matrices whose zero/non-zero pattern satisfies certain properties. We start by defining these properties formally. [Design matrix][def-designmatrix] Let *A* be an *m* × *n* matrix over some field. For *i* ∈ [*m*] let *R**i* ⊂ [*n*] denote the set of indices of all non-zero entries in the *i*’th row of *A*. Similarly, let *C**j* ⊂ [*m*], *j* ∈ [*n*], denote the set of non-zero indices in the *j*’th column. We say that *A* is a *(*q*, *k*, *t*)-design matrix* if 1. For all *i* ∈ [*m*], ∣*R**i*∣ ≤ *q*. 2. For all *j* ∈ [*n*], ∣*C**j*∣ ≥ *k*. 3. For all *j*1 ≠ *j*2 ∈ [*n*], ∣*C**j*1 ∩ *C**j*2∣ ≤ *t*. [Restatement of Theorem [ithm:main] – rank of design matrices][thm-rankdesign] Let *A* be an *m* × *n* complex matrix. If *A* is a (*q*, *k*, *t*)-design matrix then $$\rank(A) \geq n - \left(\frac{q\cdot t \cdot n}{2k}\right)^2.$$ The proof of the theorem actually holds under a slightly weaker condition on the sizes of the intersections. Instead of requiring that ∣*C**j*1 ∩ *C**j*2∣ ≤ *t* for all pairs of columns *j*1 ≠ *j*2, it is enough to ask that ∑*j*1 ≠ *j*2∣*C**j*1 ∩ *C**j*2∣2 ≤ *n*2 ⋅ *t*2. That is, there could be some pairs with large intersection as long as the average of the squares is not too large. The proof of the theorem is given below, following some preliminaries. Preliminaries for the Proof of Theorem [thm-rankdesign] ------------------------------------------------------- #### Notation: For a set of real vectors $V \in \C^n$ we denote by $\rank(V)$ the dimension of the vector space spanned by elements of *V*. We denote the ℓ2-norm of a vector *v* by ∥*v*∥. We denote by *I**n* the *n* × *n* identity matrix. We start with definitions and results on matrix scaling. [def-scaling][Matrix scaling] Let *A* be an *m* × *n* complex matrix. Let $\rho \in \C^{m}, \gamma \in \C^n$ be two complex vectors with all entries non-zero. We denote by $$\SC(A,\rho,\gamma)$$ the matrix obtained from *A* by multiplying the (*i*, *j*)’th element of *A* by *ρ**i* ⋅ *γ**j*. We say that two matrices *A*, *B* of the same dimensions are a scaling of each other if there exist non-zero vectors *ρ*, *γ* such that $B = \SC(A,\rho,\gamma)$. It is easy to check that this is an equivalence relation. We refer to the elements of the vector *ρ* as the *row scaling coefficients* and to the elements of *γ* as the *column scaling coefficients*. Notice that two matrices which are a scaling of each other have the same rank and the same pattern of zero and non-zero entries. Matrix scaling originated in a paper of Sinkhorn and has been widely studied since (see for more background). The following is a special case of a theorem from that gives sufficient conditions for finding a scaling of a matrix which has certain row and column sums. [Property-*S*] Let *A* be an *m* × *n* matrix over some field. We say that *A* satisfies *Property-*S** if for every zero sub-matrix of *A* of size *a* × *b* it holds that $$\label{eq-propS} \frac{a}{m} + \frac{b}{n} \leq 1.$$ [Matrix scaling theorem, Theorem 3 in ][thm-scaling] Let *A* be an *m* × *n* real matrix with non-negative entries which satisfies Property-*S*. Then, for every *ε* > 0, there exists a scaling *A*ʹ of *A* such that the sum of each row of *A*ʹ is at most 1 + *ε* and the sum of each column of *A*ʹ is at least *m*/*n* − *ε*. Moreover, the scaling coefficients used to obtain *A*ʹ are all positive real numbers. The proof of the theorem is algorithmic : Start by normalizing *A*’s rows to have sum 1, then normalize *A*’s columns to have sum *m*/*n*, then go back to normalizing the rows the have sum 1, and so forth. It can be shown (using a suitable potential function) that this process eventually transforms *A* to the claimed form (since *A* has Property-*S*). We will use the following easy corollary of the above theorem. [ℓ22-scaling][cor-l2scale] Let *A* = (*a**i**j*) be an *m* × *n* complex matrix which satisfies Property-*S*. Then, for every *ε* > 0, there exists a scaling *A*ʹ of *A* such that for every *i* ∈ [*m*] ∑*j* ∈ [*n*]∣*a**i**j*∣2 ≤ 1 + *ε* and for every *j* ∈ [*n*] ∑*i* ∈ [*m*]∣*a**i**j*∣2 ≥ *m*/*n* − *ε*. Let *B* = (*b**i**j*) = (∣*a**i**j*∣2). Then *B* is a real non-negative matrix satisfying Property-*S*. Applying Theorem [thm-scaling] we get that for all *ε* > 0 there exists a scaling $B' = \SC(B, \rho, \gamma)$, with *ρ*, *γ* positive real vectors, which has row sums at most 1 + *ε* and column sums at least *m*/*n* − *ε*. Letting $\rho'\_i = \sqrt{\rho\_i}$ and $\gamma'\_i = \sqrt{\gamma\_i}$ we get a scaling $\SC(A,\rho',\gamma')$ of *A* with the required properties. We will use a variant of a well known lemma (see for example ) which provides a bound on the rank of matrices whose diagonal entries are much larger than the off-diagonal ones. [lem-diagdom] Let *A* = (*a**i**j*) be an *n* × *n* complex hermitian matrix and let 0 < ℓ < *L* be integers. Suppose that *a**i**i* ≥ *L* for all *i* ∈ [*n*] and that ∣*a**i**j*∣ ≤ ℓ for all *i* ≠ *j*. Then $$\rank(A) \geq \frac{n}{1+ n\cdot (\ell/L)^2} \geq n - (n\ell/L)^2.$$ We can assume w.l.o.g. that *a**i**i* = *L* for all *i*. If not, then we can make the inequality into an equality by multiplying the *i*’th row and column by (*L*/*a**i**i*)1/2 < 1 without changing the rank or breaking the symmetry. Let $r = \rank(A)$ and let *λ*1, …, *λ**r* denote the non-zero eigenvalues of *A* (counting multiplicities). Since *A* is hermitian we have that the *λ**i*’s are real. We have $$\begin{aligned} n^2 \cdot L^2 &=& \tr(A)^2 = \left( \sum\_{i=1}^r \lambda\_i \right)^2 \leq r \cdot \sum\_{i=1}^r \lambda\_i^2 = r \cdot \sum\_{i,j=1}^n |a\_{ij}|^2 \\ &\leq& r \cdot ( n\cdot L^2 + n^2 \cdot \ell^2).\end{aligned}$$ Rearranging we get the required bound. The second inequality in the statement of the lemma follows from the fact that 1/(1 + *x*) ≥ 1 − *x* for all *x*. Proof of Theorem [thm-rankdesign] --------------------------------- To prove the theorem we will first find a scaling of *A* so that the norms (squared) of the columns are large and such that each entry is small. Our first step is to find an *n**k* × *n* matrix *B* that will satisfy Property-*S* and will be composed from rows of *A* s.t. each row is repeated with multiplicity between 0 and *q*. To achieve this we will describe an algorithm that builds the matrix *B* iteratively by concatenating to it rows from *A*. The algorithm will *mark* entries of *A* as it continues to add rows. Keeping track of these marks will help us decide which rows to add next. Initially all the entries of *A* are *unmarked*. The algorithm proceeds in *k* steps. At step *i* (*i* goes from 1 to *k*) the algorithm picks *n* rows from *A* and adds them to *B*. These *n* rows are chosen as follows: For every *j* ∈ {1, …, *n*} pick a row that has an unmarked non-zero entry in the *j*’th column and mark this non-zero entry. The reason why such a row exists at all steps is that each column contains at least *k* non-zero entries, and in each step we mark at most one non-zero entry in each column. [cla-matrixB] The matrix *B* obtained by the algorithm has Property-*S* and each row of *A* is added to *B* at most *q* times. The *n* rows added at each of the *k* steps form an *n* × *n* matrix with non-zero diagonal. Thus they satisfy Property-*S*. It is an easy exercise to verify that a concatenation of matrices with Property-*S* also has this property. The bound on the number of times each row is added to *B* follows from the fact that each row has at most *q* non-zero entries and each time we add a row to *B* we mark one of its non-zero entries. Our next step is to obtain a scaling of *B* and, from it, a scaling of *A*. Fix some *ε* > 0 (which will later tend to zero). Applying Corollary [cor-l2scale] we get a scaling *B*ʹ of *B* such that the ℓ2-norm of each row is at most $\sqrt{1+{\epsilon}}$ and the ℓ2-norm of each column is at least $\sqrt{nk/n - {\epsilon}}= \sqrt{k -{\epsilon}}$. We now obtain a scaling *A*ʹ of *A* as follows: The scaling of the columns are the same as for *B*ʹ. For the rows of *A* appearing in *B* we take the maximal scaling coefficient used for these rows in *B*ʹ, that is, if row *i* in *A* appears as rows *i*1, *i*2, …, *i**q*ʹ in *B*, then the scaling coefficient of row *i* in *A*ʹ is the maximal scaling coefficient of rows *i*1, *i*2, …, *i**q*ʹ in *B*ʹ. For rows *not* in *B*, we pick scaling coefficients so that their ℓ2 norm (in the final scaling) is equal to 1. The matrix *A*ʹ is a scaling of *A* such that each row has ℓ2-norm at most $\sqrt{1+{\epsilon}}$ and each column has ℓ2-norm at least $\sqrt{(k-{\epsilon})/q}$. The fact that the row norms are at most $\sqrt{1+{\epsilon}}$ is trivial. To argue about the column norms observe that a column of *B*ʹ is obtained from repeating each non-zero element in the corresponding column of *A*ʹ at most *q* times (together with some zeros). Therefore, if we denote by *c*1, …, *c**s* the non-zero entries in some column of *A*ʹ, we have that ∑*i* = 1*s**m**i* ⋅ ∣*c**i*∣2 ≥ *k* − *ε*,  where the *m**i*’s are integers between 0 and *q*. In this last inequality we also relied on the fact that we chose the maximal row scaling coefficient among all those that correspond to the same row in *A*. Therefore, ∑*i* = 1*s*∣*c**i*∣2 ≥ (*k* − *ε*)/*q*,  as required. Our final step is to argue about the rank of *A*ʹ (which is the same as the rank of *A*). To this end, consider the matrix *M* = (*A*ʹ)\* ⋅ *A*ʹ,  where (*A*ʹ)\* is *A*ʹ transposed conjugate. Then *M* = (*m**i**j*) is an *n* × *n* hermitian matrix. The diagonal entries of *M* are exactly the squares of the ℓ2-norm of the columns of *A*ʹ. Therefore, *m**i**i* ≥ (*k* − *ε*)/*q* for all *i* ∈ [*n*]. We now upper bound the off-diagonal entries. The off-diagonal entries of *M* are the inner products of different columns of *A*ʹ. The intersection of the support of each pair of different columns is at most *t*. The norm of each row is at most $\sqrt{1+{\epsilon}}$. For every two real numbers *α*, *β* so that *α*2 + *β*2 ≤ 1 + *ε* we have ∣*α* ⋅ *β*∣ ≤ 1/2 + *ε*ʹ, where *ε*ʹ tends to zero as *ε* tends to zero. Therefore ∣*m**i**j*∣ ≤ *t* ⋅ (1/2 + *ε*ʹ) for all *i* ≠ *j* ∈ [*n*]. Applying Lemma [lem-diagdom] we get that $$\rank(A) = \rank(A') \geq n - \left(\frac{q\cdot t(1/2+{\epsilon}') \cdot n}{k-{\epsilon}}\right)^2.$$ Since this holds for all *ε* > 0 it holds also for *ε* = 0, which gives the required bound on the rank of *A*. Sylvester-Gallai Configurations =============================== In this section we prove the quantitative Sylvester-Gallai (SG) Theorem. We will be interested with point configurations in real and complex space. These are finite sets of distinct points *v*1, …, *v**n* in $\R^d$ or $\C^d$. The dimension of a configuration is defined to be the dimension of the smallest affine subspace containing all points. [Special and ordinary lines] Let $v\_1,\ldots,v\_n \in \C^d$ be a set of *n* distinct points in *d*-dimensional complex space. A line ℓ passing through at least three of these points is called a *special* line. A line passing through exactly two points is called an *ordinary* line. [*δ*-SG configuration] [def: d SG] Let *δ* ∈ [0, 1]. A set of *n* distinct points $v\_1,\ldots,v\_n \in \C^d$ is called a **δ*-SG configuration* if for every *i* ∈ [*n*], there exists a family of special lines *L**i* all passing through *v**i* and at least *δ**n* of the points *v*1, …, *v**n* are on the lines in *L**i*. (Note that each collection *L**i* may cover a different subset of the *n* points.) The main result of this section bounds the dimension of *δ*-SG configurations for all *δ* > 0. Since we can always satisfy the definition by spreading the points evenly over 1/*δ* lines we know that the dimension can be at least 2/*δ* (and in fact in complex space at least 3/*δ*). We prove an upper bound of *O*(1/*δ*2). [Restatement of Theorem [ithm:SG] – quantitative SG theorem][thm-deltaSG] Let *δ* ∈ (0, 1]. Let $v\_1,\ldots,v\_n \in \C^d$ be a *δ*-SG configuration. Then $$\dimension\{v\_1,\ldots,v\_n\} < 13 / \delta^2.$$ Moreover, the dimension of a 1-SG configuration is at most 10. The constants in the proof have been optimized to the best of our abilities. Notice that in the above theorem *δ* can be dependant on *n*. For example, a (1/log(*n*))-SG configuration of *n* points can have rank at most *O*(log(*n*)2). Preliminaries to the Proof of Theorem [thm-deltaSG] --------------------------------------------------- The notion of a latin square will turn out useful in the proof: [Latin squares] An *r* × *r* *latin square* is an *r* × *r* matrix *D* such that *D**i*, *j* ∈ [*r*] for all *i*, *j* and every number in [*r*] appears exactly once in each row and in each column. A latin square *D* is called *diagonal* if *D**i*, *i* = *i* for all *i* ∈ [*r*]. [][thm-diagonalsquare] For every *r* ≥ 3 there exists a diagonal *r* × *r* latin square. We note that we use diagonal latin squares only to optimize constant factors. If one does not care about such factors then there is a simple construction that serves the same goal. The following lemma is an easy consequence of the above theorem. [lem-triples] Let *r* ≥ 3. Then there exists a set *T* ⊂ [*r*]3 of *r*2 − *r* triples that satisfies the following properties: 1. Each triple (*t*1, *t*2, *t*3) ∈ *T* is of three distinct elements. 2. For each *i* ∈ [*r*] there are exactly 3(*r* − 1) triples in *T* containing *i* as an element. 3. For every pair *i*, *j* ∈ [*r*] of distinct elements there are at most 6 triples in *T* which contain both *i* and *j* as elements. Let *D* be an *r* × *r* diagonal latin square which we know exists from Theorem [thm-diagonalsquare]. Define *T* ⊂ [*r*]3 to be the set of all triples (*i*, *j*, *k*) ∈ [*r*]3 with *i* ≠ *j* such that *D**i*, *j* = *k*. The number of such triples is *r*2 − *r*. Property 1 holds by the definition of diagonal latin square— we cannot have *D**i*, *j* = *i* for *j* ≠ *i* since *D**i*, *i* = *i* and every row in *D* has distinct as the (*i*, *i*) entry in *D* is labeled *i* for all *i* ∈ [*r*], and similarly we cannot have *D**i*, *j* = *j* for *i* ≠ *j*. Let *i* ∈ [*r*]. By construction, there are *r* − 1 triples in *T* which have *i* as their first entry, and *r* − 1 triples that have *i* as their second entry. There are also *r* − 1 triples in *T* which have *i* as their last entry, since for every one of the *r* − 1 rows *i*ʹ ≠ *i* there is exactly one location *j*ʹ ≠ *i*ʹ in which the label *i* appears, and that contributes the triple (*i*ʹ, *j*ʹ, *i*) to *T*. This proves Property 2. To prove Property 3 observe that two triples in *T* can agree in at most one place. For example, knowing the row and column determines the label, knowing the row and label determines the column, and so forth. Therefore, a pair (*i*, *j*) cannot appear in more than 6 triples since otherwise there would have been at least two triples with *i*, *j* at the same places, and these triples would violate the above rule. Proof of Theorem [thm-deltaSG] ------------------------------ Let *V* be the *n* × *d* matrix whose *i*’th row is the vector *v**i*. Assume w.l.o.g. that *v*1 = 0. Thus $$\dimension\{v\_1,\ldots,v\_n\} = \rank(V).$$ The overview of the proof is as follows. We will first build an *m* × *n* matrix *A* that will satisfy *A* ⋅ *V* = 0. Then, we will argue that the rank of *A* is large because it is a design matrix. This will show that the rank of *V* is small. Consider a special line ℓ which passes through three points *v**i*, *v**j*, *v**k*. This gives a linear dependency among the three vectors *v**i*, *v**j*, *v**k* (we identify a point with its vector of coordinates in the standard basis). In other words, this gives a vector *a* = (*a*1, …, *a**n*) which is non-zero only in the three coordinates *i*, *j*, *k* and such that *a* ⋅ *V* = 0. If *a* is not unique, choose an arbitrary vector *a* with these properties. Our strategy is to pick a family of collinear triples among the points in our configuration and to build the matrix *A* from rows corresponding to these triples in the above manner. Let $\L$ denote the set of all special lines in the configuration (i.e. all lines containing at least three points). Then each *L**i* is a subset of $\L$ containing lines passing through *v**i*. For each $\ell \in \L$ let *V*ℓ denote the set of points in the configuration which lie on the line ℓ. Then ∣*V*ℓ∣ ≥ 3 and we can assign to it a family of triples *T*ℓ ⊂ *V*ℓ3, given by Lemma [lem-triples] (we identify *V*ℓ with [*r*], where *r* = ∣*V*ℓ∣ in some arbitrary way). We now construct the matrix *A* by going over all lines $\ell \in \L$ and for each triple in *T*ℓ adding as a row of *A* the vector with three non-zero coefficients *a* = (*a*1, …, *a**n*) described above (so that *a* is the linear dependency between the three points in the triple). Since the matrix *A* satisfies *A* ⋅ *V* = 0 by construction, we only have to argue that *A* is a design matrix and bound its rank. The matrix *A* is a (3, 3*k*, 6)-design matrix, where $k \triangleq \lfloor \delta n \rfloor - 1$. By construction, each row of *A* has exactly 3 non-zero entries. The number of non-zero entries in column *i* of *A* corresponds to the number of triples we used that contain the point *v**i*. These can come from all special lines containing *v**i*. Suppose there are *s* special lines containing *v**i* and let *r*1, …, *r**s* denote the number of points on each of those lines. Then, since the lines through *v**i* have only the point *v**i* in common, we have that ∑*j* = 1*s*(*r**j* − 1) ≥ *k*. The properties of the families of triples *T*ℓ guarantee that there are 3(*r**j* − 1) triples containing *v**i* coming from the *j*’th line. Therefore there are at least 3*k* triples in total containing *v**i*. The size of the intersection of columns *i*1 and *i*2 is equal to the number of triples containing the points *v**i*1, *v**i*2 that were used in the construction of *A*. These triples can only come from one special line (the line containing these two points) and so, by Lemma [lem-triples], there can be at most 6 of those. Applying Theorem [thm-rankdesign] we get that $$\begin{aligned} \rank(A) & \geq & n - \left(\frac{3 \cdot 6 \cdot n}{2 \cdot 3k}\right)^2 \geq n - \left(\frac{3 \cdot n}{\delta n - 2}\right)^2 \\ & \geq & n - \left(\frac{3 \cdot n \cdot 13 }{ 11 \cdot \delta n} \right)^2 > n - 13/\delta^2,\end{aligned}$$ where the third inequality holds as *δ**n* ≥ 13 since otherwise the theorem trivially holds. Since *A* ⋅ *V* = 0 we have that $$\rank(A) + \rank(V) \leq n.$$ This implies that $$\rank(V) < 13/\delta^2,$$ which completes the proof. For *δ* = 1, the calculation above yields $\rank(V) < 11$. Average-Case Version -------------------- In this section we use Theorem [thm-deltaSG] to argue about the case where we only know that there are many collinear triples in a configuration. [Average-case SG theorem][thm-SGav] Let $V = \{v\_1,\ldots,v\_m\} \subset \C^d$ be a set of *m* distinct points. Let *T* be the set of (unordered) collinear triples in *V*. Suppose ∣*T*∣ ≥ *α**m*2 and that every two points *v*, *v*ʹ in *V* appear in at most *c* triples in *T*, then there exists a subset *V*ʹ ⊂ *V* such that ∣*V*ʹ∣ ≥ *α**m*/(2*c*) and $\dimension(V') \leq O(1/\alpha^2)$. Notice that the bound on the number of triples containing a fixed pair of points is necessary for the theorem to hold. If we remove this assumption than we could create a counter-example by arranging the points so that *m*2/3 of them are on a line and the rest span the entire space. [lem: comb lem] Let *H* be a 3-regular hypergraph with vertex set [*m*] and *α**m*2 edges of co-degree at most *c* (i.e. for every *i* ≠ *j* in [*m*], the set {*i*, *j*} is contained in at most *c* edges). Then there is a subset *M* ⊆ [*m*] of size ∣*M*∣ ≥ *α**m*/(2*c*) so that the minimal degree of the sub-graph of *H* induced by *M* is at least *α**m*/2. We describe an iterative process to find *M*. We start with *M* = [*m*]. While there exists a vertex of degree less than *α**m*/2, remove this vertex from *M* and remove all edges containing this vertex from *H*. Continuing in this fashion we conclude with a set *M* such that every point in *M* has degree at least *α**m*/2. This process removed in total at most *m* ⋅ *α**m*/2 edges and thus the new *H* still contains at least *α**m*2/2 edges. As the co-degree is at most *c*, every vertex appears in at most *c**m* edges. Thus, the size of *M* is of size at least *α**m*/(2*c*). [Proof of Theorem [thm-SGav]] The family of triples *T* defines a 3-regular hypergraph on *V* of co-degree at most *c*. Lemma [lem: comb lem] thus implies that there is a subset *V*ʹ ⊆ *V* of size ∣*V*ʹ∣ ≥ *α**m*/(2*c*) that is an (*α*/2)-SG configuration. By Theorem [thm-deltaSG], *V*ʹ has dimension at most *O*(1/*α*2). Robust SG Theorem for *k*-Flats =============================== In this section we prove two high-dimensional analogs of the SG theorem. Let $\fl(v\_1,\ldots,v\_k)$ (fl for ‘flat’) denote the affine span of *k* points (i.e. the points that can be written as linear combinations with coefficients that sum to one). We call *v*1, …, *v**k* *independent* if their flat is of dimension *k* − 1 (dimension means affine dimension), and say that *v*1, …, *v**k* are *dependent* otherwise. A **k*-flat* is an affine subspace of dimension *k*. In the following *V* is a set of *n* distinct points in complex space $\C^d$. A *k*-flat is called *ordinary* if its intersection with *V* is contained in the union of a (*k* − 1)-flat and a single point. A *k*-flat is *elementary* if its intersection with *V* has exactly *k* + 1 points. Notice that for *k* = 1 (lines) the two notions of ordinary and elementary coincide. For dimensions higher than one, there are two different definitions that generalize that of $\SG$ configuration. The first definition is based on ordinary *k*-flats (though in a slightly stronger way which will be more useful in the proofs to come). The second definition (which is less restricted than the first one) uses elementary *k*-flats. The set *V* is a *δ*-$\SG\_k^\*$ configuration if for every independent *v*1, …, *v**k* ∈ *V* there are at least *δ**n* points *u* ∈ *V* s.t. either $u \in \fl(v\_1,\ldots,v\_k)$ or the *k*-flat $\fl(v\_1,\ldots,v\_k,u)$ contains a point *w* outside $\fl(v\_1,\ldots,v\_k) \cup \{ u\}$. The set *V* is a *δ*-$\SG\_k$ configuration if for every independent *v*1, …, *v**k* ∈ *V* there are at least *δ**n* points *u* ∈ *V* s.t. either $u \in \fl(v\_1,\ldots,v\_k)$ or the *k*-flat $\fl(v\_1,\ldots,v\_k,u)$ is not elementary. Both definitions coincide with that of $\SG$ configuration when *k* = 1: Indeed, $\fl(v\_1) = v\_1$ and $\fl(v\_1,u)$ is the line through *v*1, *u*. Therefore, *u* is never in $\fl(v\_1)$ and the line $\fl(v\_1,u)$ is not elementary iff it contains at least one point *w* ∉ {*v*1, *u*}. We prove two high-dimensional versions of the SG theorem, each corresponding to one of the definitions above. The first uses the more restricted ‘star’ definition and gives a strong upper bound on dimension. The second uses the less restricted definition and gives a weaker bound on dimension. [thm-weak] Let *V* be a *δ*-$\SG\_k^\*$ configuration. Then *d**i**m*(*V*) ≤ *f*(*δ*, *k*) with *f*(*δ*, *k*) = *O*((*k*/*δ*)2). [thm-strong] Let *V* be a *δ*-$\SG\_k$ configuration. Then *d**i**m*(*V*) ≤ *g*(*δ*, *k*) with *g*(*δ*, *k*) = 2*C**k*/*δ*2 with *C* > 1 a universal constant. The proofs of the two theorems are below. Theorem [thm-weak] follows by an appropriate induction on the dimension, using the (one-dimensional) robust SG theorem. Theorem [thm-strong] follows by reduction to Theorem [thm-weak]. Before proving the theorems we set some notations. Fix some point *v*0 ∈ *V*. By a *normalization w.r.t. *v*0* we mean an affine transformation $N : \C^d \mapsto \C^d$ which first moves *v*0 to zero, then picks a hyperplane *H* s.t. no point in *V* (after the shift) is parallel to *H* (i.e has inner product zero with the orthogonal vector to *H*) and finally multiplies each point (other than zero) by a constant s.t. it is in *H*. [clm: prop of N] For such a mapping *N* we have that *v*0, *v*1, …, *v**k* are dependent iff *N*(*v*1), …, *N*(*v**k*) are dependent. Since translation and scaling does not affect dependence, w.l.o.g. we assume that *v*0 = 0 and that the distance of the hyperplane *H* from zero is one. Let *h* be the unit vector orthogonal to *H*. For all *i* ∈ [*k*] we have *N*(*v**i*) = *v**i*/⟨*v**i*, *h*⟩. Assume that *v*0, *v*1, …, *v**k* are dependent, that is, w.l.o.g. *v**k* = ∑*i* ∈ [*k* − 1]*a**i**v**i* for some *a*1, …, *a**k* − 1. For all *i* ∈ [*k* − 1] define *b**i* = *a**i*⟨*v**i*, *h*⟩/⟨*v**k*, *h*⟩. Thus *N*(*v**k*) = ∑*i* ∈ [*k* − 1]*a**i**v**i*/⟨*v**k*, *h*⟩ = ∑*i* ∈ [*k* − 1]*b**i**N*(*v**i*) where ∑*i* ∈ [*k* − 1]*b**i* = 1, which means that *N*(*v*1), …, *N*(*v**k*) are dependent. Since the map *a**i* ↦ *b**i* is invertible, the other direction of the claim holds as well. We first prove the theorem for *δ*-$\SG\_k^\*$ configurations. [Proof of Theorem [thm-weak]] The proof is by induction on *k*. For *k* = 1 we know *f*(*δ*, 1) ≤ *c**δ*− 2 with *c* > 1 a universal constant. Suppose *k* > 1. We separate into two cases. The first case is when *V* is an (*δ*/(2*k*))-$\SG\_1$ configuration and we are done using the bound on *k* = 1. In the other case there is some point *v*0 ∈ *V* s.t. the size of the set of points on special lines through *v*0 is at most *δ*/(2*k*) (a line is special if it contains at least three points). Let *S* denote the set of points on special lines through *v*0. Thus ∣*S*∣ < *δ**n*/(2*k*). Let $N : \C^d \mapsto \C^d$ be a normalization w.r.t. *v*0. Notice that for points *v* ∉ *S* the image *N*(*v*) determines *v*. Similarly, all points on some special line map to the same point via *N*. Our goal is to show that *V*ʹ = *N*(*V* \ {*v*0}) is a ((1 − 1/(2*k*))*δ*)-*S**G**k* − 1\* configuration (after eliminating multiplicities from *V*ʹ). This will complete the proof since dim(*V*) ≤ dim(*V*ʹ) + 1. Indeed, if this is the case we have *f*(*δ*, *k*) ≤ max{4*c*(*k*/*δ*)2, *f*((1 − 1/(2*k*))*δ*, *k* − 1) + 1}. and by induction we have *f*(*δ*, *k*) ≤ 4*c*(*k*/*δ*)2. Fix *v*ʹ1, …, *v*ʹ*k* − 1 ∈ *V*ʹ to be *k* − 1 independent points (if no such tuple exists then *V*ʹ is trivially a configuration). Let *v*1, …, *v**k* − 1 ∈ *V* be points s.t. *N*(*v**i*) = *v*ʹ*i* for *i* ∈ [*k* − 1]. Claim [clm: prop of N] implies that *v*0, *v*1, …, *v**k* − 1 are independent. Thus, there is a set *U* ⊂ *V* of size at least *δ**n* s.t. for every *u* ∈ *U* either $u \in \fl(v\_0,v\_1,\ldots,v\_{k-1})$ or the *k*-flat $\fl(v\_0,v\_1,\ldots,v\_{k-1},u)$ contains a point *w* outside $\fl(v\_0,v\_1,\ldots,v\_{k-1}) \cup \{ u \}$. Let $\tilde U = U \setminus S$ so that *N* is invertible on $\tilde U$ and $$|\tilde U| \geq |U| - |S| \geq (1-1/(2k)) \delta n.$$ Suppose $u \in \tilde U$ and let *u*ʹ = *N*(*u*). By Claim [clm: prop of N] if $u \in \fl(v\_0,v\_1,\ldots,v\_{k-1})$ then *u*ʹ is in $\fl(v'\_1,\ldots,v'\_{k-1})$. Otherwise, $\fl(v\_0,v\_1,\ldots,v\_{k-1},u)$ contains a point *w* outside $\fl(v\_0,v\_1,\ldots,v\_{k-1}) \cup \{ u \}$. Let *w*ʹ = *N*(*w*). We will show that *w*ʹ is (a) contained in the (*k* − 1)-flat $\fl(v'\_1,\ldots,v'\_{k-1},u')$ and (b) is outside $\fl(v'\_1,\ldots,v'\_{k-1}) \cup \{ u' \}$. Property (a) follows from Claim [clm: prop of N] since *v*0, *v*1, …, *v**k* − 1, *u*, *w* are dependent and so *v*ʹ1, …, *v*ʹ*k* − 1, *u*ʹ, *w*ʹ are also dependent. To show (b) observe first that by Claim [clm: prop of N] the points *v*ʹ1, …, *v*ʹ*k* − 1, *u*ʹ are independent (since *v*0, *v*1, …, *v**k* − 1, *u* are independent) and so *u*ʹ is not in $\fl(v'\_1,\ldots,v'\_{k-1})$. We also need to show that *w*ʹ ≠ *u*ʹ but this follows from the fact that *u* ≠ *w* and so *w*ʹ = *N*(*w*) ≠ *N*(*u*) = *u*ʹ since *N* is invertible on $\tilde U$ and $u \in \tilde U$. Since $$|N(\tilde U)| = |\tilde U| \geq (1-1/(2k)) \delta n \geq (1-1/(2k)) \delta |V'|$$ the proof is complete. We can now prove the theorem for *δ*-$\SG\_k$ configurations. [Proof of Theorem [thm-strong]] The proof follows by induction on *k* (the case *k* = 1 is given by Theorem [thm-deltaSG]). Suppose *k* > 1. Suppose that dim(*V*) > *g*(*δ*, *k*). We want to show that there exist *k* independent points *v*1, …, *v**k* s.t. for at least 1 − *δ* fraction of the points *w* ∈ *V* we have that *w* is not in $\fl(v\_1,\ldots,v\_k)$ **and** the flat $\fl(v\_1,\ldots,v\_k,w)$ is elementary (i.e. does not contain any other point). Let *k*ʹ = *g*(1, *k* − 1). By choice of *g* we have *g*(*δ*, *k*) > *f*(*δ*, *k*ʹ + 1) with *f* from Theorem [thm-weak]. Thus, by Theorem [thm-weak], we can find *k*ʹ + 1 independent points *v*1, …, *v**k*ʹ + 1 s.t. there is a set *U* ⊂ *V* of size at least (1 − *δ*)*n* s.t. for every *u* ∈ *U* we have that *u* is not in $\fl(v\_1,\ldots,v\_{k'+1})$ **and** the (*k*ʹ + 1)-flat $\fl(v\_1,\ldots,v\_{k'+1},u)$ contains only one point, namely *u*, outside $\fl(v\_1,\ldots,v\_{k'+1})$. We now apply the inductive hypothesis on the set $V \cap \fl(v\_1,\ldots,v\_{k'+1})$ which has dimension at least *k*ʹ = *g*(1, *k* − 1). This gives us *k* independent points *v*ʹ1, …, *v*ʹ*k* that define an elementary (*k* − 1)-flat $\fl(v'\_1,\ldots,v'\_k)$. (Saying that *V* is not 1-$\SG\_{k-1}$ is the same as saying that it contains an elementary (*k* − 1)-flat). Joining any of the points *u* ∈ *U* to *v*ʹ1, …, *v*ʹ*k* gives us an elementary *k*-flat and so the theorem is proved. Generalizations of the Motzkin-Rabin Theorem ============================================ In this section we prove two variants of the Motzkin-Rabin Theorem. The first is a quantitative analog in the spirit of Theorem [thm-deltaSG]. The second is a variant in which the number of colors is three (instead of two). A Quantitative Variant ---------------------- [*δ*-MR configuration] Let *V*1, *V*2 be two disjoint finite subsets of $\C^d$. Points in *V*1 are of *color* 1 and points in *V*2 are of *color* 2. A line is called *bi-chromatic* if it contains at least one point from each of the two colors. We say that *V*1, *V*2 are a **δ*-MR* configuration if for every *i* ∈ [2] and for every point *p* ∈ *V**i*, the bi-chromatic lines through *p* contain at least *δ*∣*V**i*∣ points. Let $V\_1,V\_2 \subset \C^d$ be a *δ*-MR configuration. Then $$\dimension(V\_1,V\_2) \leq O(1/\delta^4).$$ We will call a line passing through exactly two points in *V*1 (resp. *V*2) a *V*1-ordinary (resp. *V*2-ordinary) line. W.l.o.g. assume ∣*V*1∣ ≤ ∣*V*2∣. We seperate the proof into two cases: Case I is when *V*2 is a (*δ*/2)-SG configuration. Then, by Theorem [thm-deltaSG], $\dimension(V\_2) \leq O(1/\delta^2)$. If in addition $$\dimension(V\_1) \leq 13/(\delta/2)^2$$ then we are done. Otherwise, by Theorem [thm-deltaSG], there exists a point *a*0 ∈ *V*1 such that there are at least (1 − *δ*/2)∣*V*1∣    *V*1-ordinary lines through *a*0. Let *a*1, …, *a**k* denote the points in *V*1 that belong to these lines with *k* ≥ (1 − *δ*/2)∣*V*1∣. We now claim that *V*2 ∪ {*a*0} spans all the points in *V*1. This will suffice since, in this case, $\dimension(V\_2) \leq O(1/\delta^2)$. Let *a* ∈ *V*1. Then, since *V*1, *V*2 is a *δ*-MR configuration, there are at least *δ*∣*V*1∣ points in *V*1 such that the line through them and *a* contains a point in *V*2. One of these points must be among *a*1, …, *a**k*, say it is *a*1. Since *a* is in the span of *V*2 and *a*1 and since *a*1 is in the span of *V*2 and *a*0 we are done. Case II is when *V*2 is not a (*δ*/2)-SG configuration. In this case, there is a point *b* ∈ *V*2 such that there are at least (1 − *δ*/2)∣*V*2∣   *V*2-ordinary lines through *b*. From this fact and from the *δ*-MR property, we get that ∣*V*1∣ ≥ (*δ*/2)∣*V*2∣ (there are at least (*δ*/2)∣*V*2∣   *V*2-ordinary lines through *b* that have an additional point from *V*1 on them). This implies that the union *V*1 ∪ *V*2 is a (*δ*2/4)-SG configuration and the result follows by applying Theorem [thm-deltaSG]. A Three Colors Variant ---------------------- [3MR configuration] Let *V*1, *V*2, *V*3 be three pairwise disjoint finite subsets of $\C^d$, each of distinct points. We say that *V*1, *V*2, *V*3 is a *3MR*-configuration if every line ℓ so that ℓ ∩ (*V*1 ∪ *V*2 ∪ *V*3) has more than one point intersects at least two of the sets *V*1, *V*2, *V*3. Let *V*1, *V*2, *V*3 be a 3MR configuration and denote *V* = *V*1 ∪ *V*2 ∪ *V*3. Then $$\dimension(V) \leq O(1).$$ Assume w.l.o.g. that *V*1 is not smaller than *V*2, *V*3. Let *α* = 1/16. There are several cases to consider: 1. *V*1 is an *α*-SG configuration. By Theorem [thm-deltaSG], the dimension of *V*1 is at most *d*1 = *O*(1/*α*2). Consider the two sets $$V'\_2 = V\_2 \setminus \span(V\_1) \ \ \text{and} \ \ V'\_3 = V\_3 \setminus \span(V\_1),$$ each is a set of distinct points in $\C^{d}$. Assume w.l.o.g. that ∣*V*ʹ2∣ ≥ ∣*V*ʹ3∣. 1.1. *V*ʹ2 is an *α*-SG configuration. By Theorem [thm-deltaSG], the dimension of *V*ʹ2 is at most *d*2 = *O*(1/*α*2). Fix a point *v*3 in *V*ʹ3. For every point *v* ≠ *v*3 in *V*ʹ3 the line through *v*3, *v* contains a point from $\span(V\_1) \cup V'\_2$. Therefore, $$\dimension(V) \leq d\_1 + d\_2 + 1 \leq O(1).$$ 1.2. *V*ʹ2 is not an *α*-SG configuration. There is a point *v*2 in *V*ʹ2 so that for *k* ≥ ∣*V*ʹ2∣/2 of the points *v* ≠ *v*2 in *V*ʹ2 the line through *v*2, *v* does not contain any other point from *V*ʹ2. If $V'\_2 = \span(V\_1,v\_2)$ then the dimension of *V*1 ∪ *V*2 is at most *d*1 + 1 and we are done as in the previous case. Otherwise, there is a point *v*ʹ2 in $V'\_2 \setminus \span(V\_1,v\_2)$. We claim that in this case ∣*V*ʹ3∣ ≥ *k*/2. Denote by *P*2 the *k* points *v* ≠ *v*2 in *V*ʹ2 so that the line through *v*2, *v* does not contain any other point from *V*ʹ2. For every *v* ∈ *P*2 there is a point *V*1, 3(*v*) in *V*1 ∪ *V*3 that is on the line through *v*, *v*2 (the point *v*2 is fixed). There are two cases to consider. The first case is that for at least *k*/2 of the points *v* in *P*2 we have *V*1, 3(*v*) ∈ *V*3. In this case clearly ∣*V*3∣ ≥ *k*/2. The second case is that for at least *k*/2 of the points *v* in *P*2 we have *V*1, 3(*v*) ∈ *V*1. Fix such a point *v* ∈ *P*2 (which is in $\span(V\_1,v\_2)$). The line through *v*ʹ2, *v* contains a point *v*ʹ from *V*1 ∪ *V*3. The point *v*ʹ is not in $\span(V\_1)$, as if it was then *v*ʹ2 would be in $\span(v,v') \subseteq \span(V\_1,v)$. Therefore *v*ʹ is in *V*3. This also implies that ∣*V*ʹ3∣ ≥ *k*/2. Denote *V*ʹ = *V*2 ∪ *V*3ʹ. So we can conclude that for every *v*ʹ in *V*ʹ the special lines through *v*ʹ contain at least ∣*V*ʹ∣/8 of the points in *V*1 ∪ *V*2 ∪ *V*3. As in the proof of Theorem [thm-deltaSG], we can thus define a family of triples *T*, each triple of three distinct collinear points in *V*, so that each *v*ʹ in *V*ʹ belongs to at least ∣*V*ʹ∣/8 triples in *T* and each two distinct *v*ʹ, *v*ʺ in *V*ʹ belong to at most 6 triples. By a slight abuse of notation, we also denote by *V* the matrix with rows defined by the points in *V*. Let *V*1 be the submatrix of *V* with row defined by points in $\span(V\_1) \cap V$ and *V*ʹ be the submatrix of *V* with row defined by points in *V*ʹ. Use the triples in *T* to construct a matrix *A* so that *A* ⋅ *V* = 0. Let *A*1 be the submatrix of *A* consisting of the columns that correspond to $\span(V\_1) \cap V$ and *A*ʹ be the submatrix of *A* consisting of the columns that correspond to *V*ʹ. Therefore, *A*ʹ ⋅ *V*ʹ =  − *A*1 ⋅ *V*1 which implies $$\rank(A' \cdot V') \leq \rank(A\_1 \cdot V\_1) \leq d\_1.$$ By the above discussion *A*ʹ is a (3, ∣*V*ʹ∣/8, 6)-design matrix and thus, by Theorem [thm-rankdesign], has rank at least ∣*V*ʹ∣ − *O*(1) and so $$\dimension(V') \leq O(1) + d\_1 \leq O(1).$$ We can finally conclude that $$\dimension(V) \leq d\_1 + \dimension(V') \leq O(1).$$ 2. *V*1 is not an *α*-SG configuration. There is a point *v*1 in *V*1 so that for at least ∣*V*1∣/2 of the points *v* ≠ *v*1 in *V*1 the line through *v*1, *v* does not contain any other point from *V*1. Assume w.l.o.g. that ∣*V*2∣ ≥ ∣*V*3∣. This implies that ∣*V*2∣ ≥ ∣*V*1∣/4. 2.1. ∣*V*3∣ < ∣*V*2∣/16. In this case the configuration defined by *V*1 ∪ *V*2 is an *α*-SG configuration. By Theorem [thm-deltaSG], the dimension of *V*1 ∪ *V*2 is at most *d*1, 2 = *O*(1/*α*2). Fix a point *v*3 in *V*3. For every point *v* ≠ *v*3 in *V*3 the line through *v*3, *v* contains a point from *V*1 ∪ *V*2. Therefore, $$\dimension(V) \leq d\_{1,2} + 1 \leq O(1).$$ 2.1. ∣*V*3∣ ≥ ∣*V*2∣/16. In this case *V* is an *α*-SG configuration. By Theorem [thm-deltaSG], the dimension of *V* is thus at most *O*(1/*α*2). Two-Query Locally Correctable Codes =================================== We now prove the non-existence of 2-query (linear) locally correctable codes (LCC) over $\C$. We start by formally defining locally correctable codes: [Linear locally correctable code (LCC)][def-lcc] Let $\F$ be some field. A (*q*, *δ*)-LCC over $\F$ is a linear subspace $C \subset \F^m$ such that there exists a randomized decoding procedure $D : \F^m \times [m] \mapsto \F$ with the following properties: 1. For all *x* ∈ *C*, for all *i* ∈ [*m*] and for all $v \in \F^m$ with *w*(*v*) ≤ *δ**m* we have that *D*(*x* + *v*, *i*) = *x**i* with probability at least 3/4 (the probability is taken only over the internal randomness of *D*). 2. For every $y \in \F^m$ and *i* ∈ [*m*], the decoder *D*(*y*, *i*) reads at most *q* positions in *y*. The *dimension* of an LCC is simply its dimension as a subspace of $\F^m$. In the above definition we allow the algorithm *D* to perform operations over the field $\F$. Since we do not care about the running time of *D* we do not discuss issues of representation of field elements and efficiency of handling them. (In any case, it turns out that for linear codes in the small number of queries and low error case, one can assume w.l.o.g. that the decoder is also linear, see Lemma [lem-LCC] below.) Our result on locally decodable codes is the following: [Restatement of Theorem [ithm:lcc]— non-existence of 2 query LCCs over $\C$][thm-lcc] Let $C \subset \C^m$ be a (2, *δ*)-LCC over $\C$. Then dim(*C*) ≤ *O*(1/*δ*9). As in Theorem [thm-deltaSG], also in this theorem, *δ* can be an arbitrary function of *m*. To make the connection between LCCs and *S**G*-configurations explicit, we define the notion of a *δ*-LCC configuration. [*δ*-LCC Configuration] A list of non-zero points (*v*1, …, *v**m*) in $\C^d$ (not necessarily distinct) is called a *δ*-LCC configuration if for every subset Δ ⊂ [*m*] of size at most *δ**m* and for every *i* ∈ [*m*], there exist *j*, *k* ∈ [*m*] \ Δ such that either *v**i* ∈ {*v**j*, *v**k*} (in which case *v**i* can be recovered by its own copies), or *v**i*, *v**j*, *v**k* are three distinct collinear points (in which case *v**i* is recovered by two other coordinates). The following lemma shows the connection between these two notions. [lem-LCC] If there exists a (2, *δ*)-LCC of dimension *n* over $\C$ then there exists a *δ*-LCC configuration of dimension at least *n* − 1 over $\C$. To prove the lemma we will use the following definition. [Generating set] Let $C \subset \F^m$ be a subspace. We say that a list of vectors *V* = (*v*1, …, *v**m*) in $\F^n$ is a *generating set* for *C* if $$C = \left\{ \left({\langle y,v\_1 \rangle}, {\langle y,v\_2 \rangle}, \ldots, {\langle y,v\_m \rangle} \right) \,\,|\,\, y \in \F^n \right\},$$ where ⟨*y*, *v*⟩ is the standard inner product over $\F$. [Proof of Lemma [lem-LCC]] Let *V* = (*v*1, …, *v**m*) be a generating set for *C* with $\dimension(V) \geq n-1$. We might lose 1 since we defined $\dimension(V)$ as the dimension of the smallest *affine* subspace containing *V*. When the local decoder for *C* reads two positions in a codeword, it is actually reading ⟨*y*, *v**j*⟩, ⟨*y*, *v**k*⟩ for some vector $y \in \C^n$ (or noisy versions of them). In order to be able to recover ⟨*y*, *v**i*⟩ from ⟨*y*, *v**j*⟩, ⟨*y*, *v**k*⟩ with positive probability it must be that $v\_i \in \span\{v\_j,v\_k\}$. (If we choose *y* as Gaussian and *v**i* is not in the span of *v**j*, *v**k* then even conditioned on the values of ⟨*y*, *v**j*⟩, ⟨*y*, *v**k*⟩ the r.v. ⟨*y*, *v**i*⟩ takes any specific value with probability zero.) Applying an invertible linear transformation on *V* preserves properties such as one vector being in the span of another set. So we can assume w.l.o.g. that the first coordinate in all elements of *V* is non-zero. Scaling each *v**i* by a non-zero scalar also preserves the properties of spans and so we can assume w.l.o.g. that the first coordinate in each *v**i* is equal to 1. Now, for *v**i* to be in the span of *v**j*, *v**k* it must be that either *v**i* ∈ {*v**j*, *v**k*} or *v**i* is on the line passing through *v**j*, *v**k* (and they are all distinct). Thus, we have a *δ*-LCC configuration with dimension *n* − 1. In view of this lemma, in order to prove Theorem [thm-lcc] it is enough to prove: [thm-lccconf] Let $V = (v\_1,\ldots,v\_m)\in (\C^d)^m$ be a *δ*-LCC configuration. Then $$\dimension(V) \leq O(1/\delta^9).$$ Proof of Theorem [thm-lccconf] ------------------------------ Let *V* = (*v*1, …, *v**m*) be the list of *m* points in $\C^d$. The main difficulty in proving the theorem is that some of these points may be the same. That is, two points *v**i*, *v**j* can actually correspond to the same vector in $\C^d$. In this case we say that *v**i*, *v**j* are *copies* of each other. Otherwise, we say that *v**i*, *v**j* are *distinct*. If *v* is a point in the list *V*, we let the *multiplicity* of *v*, denoted *M*(*v*), be the number of times that (a copy of) *v* occurs in *V*. We note that while repetitions make the proof of Theorem [thm-lccconf] more complicated, we do not know if they actually help in constructing LCCs with better parameters. Our proof will proceed in an iterative way, at each step identifying a sufficiently large sublist with small dimension and removing it. The key step will be the following theorem: [thm-lowranksub] There exists an integer *K*1 > 0 s.t. the following holds. Let $V = (v\_1,\ldots,v\_m) \in (\C^d)^m$ be a *δ*-LCC configuration. Then there exists a sublist *V*ʹ ⊂ *V* of size at least *δ*3*m*/*K*1 and dimension at most *K*1/*δ*6. If there exists a point *v* ∈ *V* with multiplicity larger that *δ**m*/10 then the theorem is true by taking *V*ʹ to be all copies of this point. This avoids the case where a point is recovered mostly by its own copies. For the rest of the proof we can, thus, assume the following. [fact: v many triples] For all *v* ∈ *V* and for every sublist Δ of *V* of size at most *δ**m*/2 there is a collinear triple containing *v* such that the other two points in the triple are not in Δ (and are distinct from *v*). We will describe a (probabilistic) construction of a family of collinear triples and build a design matrix from it. We call a triple of points in *V* *good* if it contains three distinct collinear points. We define a family *T* of good triples as follows: For every line ℓ that has at least three distinct points in *V* we will define (randomly) a family *T*ℓ of good triples (later we will fix the randomness). The family *T* will be the union of all these sets. The construction of *T* we present is probabilistic. It is possible to construct *T* explicitly and achieve similar properties. We choose to present the probabilistic construction as it is simpler and less technical. Let ℓ be such a line with *r* points on it (counting multiplicities). Denote by *V*(ℓ) the sublist of *V* containing all points that lie on ℓ. We first take the family *F* of triples on [*r*] given by Lemma [lem-triples] and then pick a random one-to-one mapping *ρ* : [*r*] ↦ *V*(ℓ). For a triple *t* in *F* we denote by *ρ*(*t*) the triple of points in *V*(ℓ) that is the image of *t* under *ρ*. We take *T*ℓ to be the set of all triples *ρ*(*t*) with *t* ∈ *F* and such that *ρ*(*t*) is good (i.e., it ‘hits’ three distinct points). Intuitively, we will have many good triples on a line (in expectation) if there are no two points whose copies cover most of the line (then the probability of hitting three distinct points is small). We will later show that this cannot happen on too many lines. The next proposition shows that there is a way to fix the randomness so that *T* contains a quadratic number of triples. [prop-expectation] The expectation of ∣*T*∣ is at least *α**m*2 with *α* = (*δ*/15)3. We will prove this proposition later in Section [sec-proofprop] and will continue now with the proof of the theorem. Fix *T* to be a family of triples that has size at least the expectation of ∣*T*∣. By construction and Lemma [lem-triples], the family *T* contains only good triples and each pair of points appears in at most 6 different triples (since every two distinct points define a single line and two non-distinct points never appear in a triple together). The family *T* thus defines a 3-regular hypergraph with vertex set [*m*] and at least *α**m*2 edges and of co-degree at most 6. Lemma [lem: comb lem] thus implies that there is a sublist *V*ʹ of *V* of size at least ∣*V*ʹ∣ = *m*ʹ ≥ *α**m*/12 ≥ (*δ*/45)3*m* with the following property: Let *T*ʹ be the subfamily of *T* that *V*ʹ induces. Every *v*ʹ in *V*ʹ is contained in at least *α**m*/2 triples in *T*ʹ. By a slight abuse of notation, we also denote by *V*ʹ the *m*ʹ × *d* matrix with rows defined by the points in *V*ʹ (including repetitions). We now use the triples in *T*ʹ to construct a matrix *A*ʹ so that *A*ʹ ⋅ *V*ʹ = 0. By the above discussion *A*ʹ is a (3, *α**m*/2, 6)-design matrix and thus, by Theorem [thm-rankdesign], has rank at least $$m' - \left( \frac{18 m'}{\alpha m}\right)^2 \geq m' - (18/\alpha)^2$$ and so $$\dimension(V') \leq (18/ \alpha)^2 \leq (60/\delta)^6$$ as was required. The next proposition shows how the above theorem can be used repeatedly on a given LCC. [prop-repeat] There exist an integer *K*2 > 0 s.t. the following holds: Let $V = (v\_1,\ldots,v\_m)\in (\C^d)^m$ be a *δ*-LCC configuration and let *U*, *W* be a partition of *V* into two disjoint sublists such that $W \cap \span(U) = \emptyset$. Then there exists a new partition of *V* to two sublists *U*ʹ and *W*ʹ such that $W' \cap \span(U') = \emptyset$ and such that 1. ∣*U*ʹ∣ ≥ ∣*U*∣ + *δ*3*m*/*K*2, and 2. $\dimension(U') \leq \dimension(U) + K\_2/\delta^6$. First, we can assume that all points in *W* have multiplicity at most *δ**m*/2 (otherwise we can add one point from *W* with high multiplicity to *U* to get *U*ʹ). Thus, for all points *v* and all sublists Δ of size at most *δ**m*/2 there is a collinear triple of three distinct points containing *v* and two other points outside Δ. Again, this is to avoid points that are recovered mostly by copies of themselves. For a point *w* ∈ *W* we define three disjoint sublists of points *U*(*w*), *P*1(*w*) and *P*2(*w*). The first list, *U*(*w*), will be the list of all points in *U* that are on special lines through *w* (that is, lines containing *w* and at least two other distinct points). Notice that, since $w \not\in \span(U)$, each line through *w* can contain at most one point from *U*. The second list, *P*1(*w*), will be the list of points in *W* \ {*w*} that are on a line containing *w* and a point from *U*. The third list, *P*2(*w*), will be of all other points on special lines through *w* (that is, on special lines that do not intersect *U*). These three lists are indeed disjoints, since *w* is the only common point between two lines passing through it. By the above discussion we have that ∣*P*1(*w*)∣ + ∣*P*2(*w*)∣ ≥ *δ**m*/2 for all *w* ∈ *W* (since removing these two lists destroys all collinear triples with *w*). We now separate the proof into two cases: #### Case I : There exists *w* ∈ *W* with ∣*P*1(*w*)∣ > *δ**m*/4. In this case we can simply take *U*ʹ to be the points in *V* that are also in the span of {*w*} ∪ *U*. This new *U*ʹ will include all points in *P*1(*w*) and so will grow by at least *δ**m*/4 points. Its dimension will grow by at most one and so we are done. #### Case II : For all *w* ∈ *W*, ∣*P*2(*w*)∣ ≥ *δ**m*/4. Denote *m*ʹ = ∣*W*∣. In this case *W* itself is a *δ*ʹ-LCC configuration with $$\delta' = \frac{\delta m}{8m'}.$$ Applying Theorem [thm-lowranksub] we get a sublist *U*ʺ ⊂ *W* of size at least $$\frac{(\delta')^3 m'}{K\_1} \geq (\delta/8)^3 \cdot \frac{m}{K\_1}$$ and dimension at most $$\frac{K\_1}{(\delta')^{6}} \leq K\_1 (8/\delta)^6.$$ We can thus take *U*ʹ to be the points in *V* that are in the span of *U* ∪ *U*ʺ and the proposition is proved. [Proof of Theorem [thm-lccconf]] We apply Proposition [prop-repeat] on *V*, starting with the partition *U* = ∅, *W* = *V* and ending when *U* = *V*, *W* = ∅. We can apply the proposition at most *K*2/*δ*3 times and in each step add at most *K*2/*δ*6 to the dimension of *A* (which is initially zero). Therefore, the final list *U* = *V* will have dimension at most *O*(1/*δ*9). Proof of Proposition [prop-expectation] --------------------------------------- Order the points in *V* so that all copies of the same point are consecutive and so that *M*(*v**i*) ≤ *M*(*v**j*) whenever *i* ≤ *j*. Let *S* ⊂ *V* be the sublist containing the first *δ**m*/10 points in this ordering (we may be splitting the copies of a single point in the middle but this is fine). We will use the following simple fact later on: [fact-S] If *v* ∈ *S* and *M*(*v*ʹ) < *M*(*v*) then *v*ʹ ∈ *S*. For a point *v* ∈ *V* we denote by *T*(*v*) the set of (ordered) triples in *T* containing *v* and for a line ℓ by *T*ℓ(*v*) the set of (ordered) triples in *T*ℓ containing *v*. Recall that these are all random variables determined by the choice of the mappings *ρ* for each line ℓ. The proposition will follow by the following lemma. [lem: T(v) large] Let *v* ∈ *S*. Then the expectation of ∣*T*(*v*)∣ is at least (*δ*/10)2*m*. The lemma completes the proof of the proposition: summing over all points in *S* we get $$\begin{aligned} \operatorname{\mathbb{E}}[|T|] &\geq& \operatorname{\mathbb{E}}\left[ (1/3)\sum\_{v \in V} |T(v)| \right]\,\,\,\,\, \text{\small (each triple is counted at most three times)}\\ & \geq& (1/3)\sum\_{v \in S} \operatorname{\mathbb{E}}[ |T(v)| ] \\ &\geq& (1/3) \cdot (\delta m /10) \cdot ( (\delta / 10)^2 m ) \geq (\delta / 15)^3 m^2.\end{aligned}$$ [Proof of Lemma [lem: T(v) large]] Denote by *L*(*v*) the set of all special lines through *v*. To prove the lemma we will identify a subfamily *L*ʹ(*v*) of *L*(*v*) that contributes many triples to *T*(*v*). To do so, we need the following definitions. For a set $\gamma \subset \C^d$ denote by *P*(*γ*) the set of distinct points in *V* that are in *γ*. Denote *M*(*γ*) = ∑*v* ∈ *P*(*γ*)*M*(*v*). Denote by $P(\bar S)$ the set of distinct points not in *S*. [Degenerate line] Let ℓ ∈ *L*(*v*). We say that ℓ ∈ *L*(*v*) is *degenerate* if either 1. The size of $P(\ell) \cap P(\bar S)$ is at most one. That is, ℓ contains at most one distinct point outside *S*. Or, 2. There exists a point *v*ℓ ∈ *P*(ℓ), distinct from *v*, such that *M*(*v*ℓ) ≥ (1 − *δ*/10)*M*(ℓ). A degenerate line satisfying the first (second) property above will be called a degenerate line of the first (second) kind. Define *L*ʹ(*v*) as the set of line ℓ in *L*(*v*) that are not degenerate. We will continue by proving two claims. The first claim shows that every line in *L*ʹ(*v*) contributes many triples in expectation to *T*(*v*). [cla-expline] For every ℓ ∈ *L*ʹ(*v*) we have $\operatorname{\mathbb{E}}[ |T\_\ell(v)| ] \geq \delta M(\ell) /10$. Denote *r* = ∣*M*(ℓ)∣. The family of triples *T*ℓ is obtained by taking a family of *r*(*r* − 1) triples *F* on [*r*] (obtained from Lemma [lem-triples]) and mapping it randomly to ℓ, omitting all triples that are not good (those that do not have three distinct points). For each triple *t* ∈ *F* the probability that *ρ*(*t*) will be in *T*ℓ(*v*) can be lower bounded by $$\frac{3}{r} \cdot \frac{2r}{3(r-1)} \cdot \frac{\delta}{20} = \frac{\delta}{10(r-1)}$$ The factor of 3/*r* comes from the probability that one of the three entries in *t* maps to *v* (these are disjoint events so we can sum their probabilities). The next factor, 2*r*/(3(*r* − 1)), comes from the probability that the second entry in *t* (in some fixed order) maps to a point distinct from *v*. Indeed since $|P(\ell) \cap P(\bar S)| \geq 2$ and using Fact [fact-S] we know that there are at least two distinct points *v*ʹ, *v*ʺ on ℓ with *M*(*v*ʹ) ≥ *M*(*v*) and *M*(*v*ʺ) ≥ *v*. Since *M*(*v*) + *M*(*v*ʹ) + *M*(*v*ʺ) ≤ *r*, we get that *M*(*v*) ≤ *r*/3, and so there are at least 2*r*/3 “good” places for the second point to map to. The last factor, *δ*/20, comes from the probability that the third element of the triple will map to a point distinct from the first two. The bound of *δ*/20 will follow from the fact that ℓ does not satisfy the second property in the definition of a degenerate line. To see why, let *v*2 be the image of the second entry in *t*. Since ℓ is not degenerate, $r' \triangleq r - M(v\_2) > \delta r / 10$. Since $|P(\ell) \cap P(\bar S)| \geq 2$, there is a point *v*ʹ in $P(\bar S)$ not in {*v*, *v*2}, and hence, by Fact [fact-S], *M*(*v*) ≤ *M*(*v*ʹ). Since *M*(*v*) + *M*(*v*ʹ) ≤ *r*ʹ, we get that *M*(*v*) ≤ *r*ʹ/2. Thus *r*ʹ − *M*(*v*) ≥ *r*ʹ/2 ≥ *δ**r*/20. But *r*ʹ − *M*(*v*) is exactly the number of ‘good’ places that the third entry can map to that are from *v* and *v*2. Using linearity of expectation we can conclude $$\operatorname{\mathbb{E}}[ |T\_\ell(v)| ] \geq r(r-1) \cdot \frac{\delta}{10(r-1)} = \delta r /10.$$ The second claim shows that there are many points on lines in *L*ʹ(*v*). [cla-manylines] With the above notations, we have: ∑ℓ ∈ *L*ʹ(*v*)*M*(ℓ) ≥ *δ**m*/10. Assume in contradiction that ∑ℓ ∈ *L*ʹ(*v*)*M*(ℓ) < *δ**m*/10. Let Δʹ denote the sub-list of *V* containing all points that lie on lines in *L*ʹ(*v*) so that ∣Δʹ∣ ≤ *δ**m*/10. We will derive a contradiction by finding a small sublist Δ of *V* (containing Δʹ and two other small sub-lists) that would violate Fact [fact: v many triples]. That is, if we remove Δ from *V*, we destroy all collinear triples containing *v*. Let ℓ be a degenerate line of the second kind. Then there is a point *v*ℓ on it that is distinct from *v* and has multiplicity at least (1 − *δ*/10)*M*(ℓ). For every such line let Δℓ denote the sublist of *V* containing all of the at most (*δ*/10)*M*(ℓ) − *M*(*v*) points on this line that are distinct from both *v* and *v*ℓ. Let Δ2 denote the union of these lists Δℓ over all degenerate lines of the second kind. We now have that ∣Δ2∣ ≤ *δ**m*/10 since ∑ℓ(*M*(ℓ) − *M*(*v*)) ≤ *m* and in each line ℓ we have ∣Δℓ∣ ≤ (*δ*/10)*M*(ℓ) − *M*(*v*) ≤ (*δ*/10)(*M*(ℓ) − *M*(*v*)). Notice that, removing the points in Δ2 destroys all collinear triples on degenerate lines of the second kind. Finally, let Δ*S* denote the sublist of *V* containing all points that have a copy in *S*. Thus Δ*S* contains the list *S* (of at most *δ**m*/10 elements), plus all of the at most *δ**m*/10 copies of the last point in *S*, meaning that ∣Δ*S*∣ ≤ *δ**m*/5. Removing Δ*S* destroys all collinear triples on degenerate lines of the first kind. Define Δ as the union of the three sublists Δʹ, Δ2 and Δ*S*. From the above we have that removing Δ from *V* destroys all collinear triples containing *V* and that ∣Δ∣ ≤ 4(*δ*/10)*m* < *δ**m*/2. This contradicts Fact [fact: v many triples]. Combining the two claims we get that for all *v* ∈ *S*, $$\operatorname{\mathbb{E}}[ |T(v)| ]\geq \sum\_{\ell \in L'(v)} \operatorname{\mathbb{E}}[ |T\_\ell(v) |] \geq \sum\_{\ell \in L'(v)} \delta M(\ell) / 10 \geq (\delta / 10) \cdot (\delta m / 10) = (\delta / 10)^2 m.$$ This completes the proof of Lemma [lem: T(v) large]. Extensions to Other Fields ========================== In this section we show that our results can be extended from the complex field to fields of characteristic zero, and even to fields with very large positive characteristic. The argument is quite generic and relies on Hilbert’s Nullstellensatz. [*T*-matrix] Let *m*, *n* be integers and let *T* ⊂ [*m*] × [*n*]. We call an *m* × *n* matrix *A* a **T*-matrix* if all entries of *A* with indices in *T* are non-zero and all entries with indices outside *T* are zero. [Effective Hilbert’s Nullstellensatz ] Let $g\_1,\ldots,g\_s \in \Z[y\_1,\ldots,y\_t]$ be degree *d* polynomials with coefficients in {0, 1} and let $$Z \triangleq \{ y \in \C^t \,|\, g\_i(y)=0 \,\, \forall i \in [s] \}.$$ Suppose $h \in \Z[z\_1,\ldots,z\_t]$ is another polynomial with coefficients in {0, 1} which vanishes on *Z*. Then there exist positive integers *p*, *q* and polynomials $f\_1, \ldots, f\_s \in \Z[y\_1,\ldots,y\_t]$ such that ∑*i* = 1*s**f**i* ⋅ *g**i* ≡ *p* ⋅ *h**q*. Furthermore, one can bound *p* and the maximal absolute value of the coefficients of the *f**i*’s by an explicit function *H*0(*d*, *t*, *s*). [thm-finitefield] Let *m*, *n*, *r* be integers and let *T* ⊂ [*m*] × [*n*]. Suppose that all complex *T*-matrices have rank at least *r*. Let $\F$ be a field of either characteristic zero or of finite large enough characteristic *p* > *P*0(*n*, *m*), where *P*0 is some explicit function of *n* and *m*. Then, the rank of all *T*-matrices over $\F$ is at least *r*. Let $g\_1,\ldots,g\_s \in \C[\{x\_{ij} \ | \ i \in [m], j \in [n]\} ]$ be the determinants of all *r* × *r* sub-matrices of an *m* × *n* matrix of variables *X* = (*x**i**j*). The statement “all *T*-matrices have rank at least *r*” can be phrased as “if *x**i**j* = 0 for all (*i*, *j*) ∉ *T* and *g**k*(*X*) = 0 for all *k* ∈ [*s*] then ∏(*i*, *j*) ∈ *T**x**i**j* = 0.” That is, if all entries outside *T* are zero and *X* has rank smaller than *r* then it must have at least one zero entry also inside *T*. From Nullstellensatz we know that there are integers *α*, *λ* > 0 and polynomials *f*1, …, *f**s* and *h**i**j*, (*i*, *j*) ∉ *T*, with integer coefficients such that *α* ⋅ (∏(*i*, *j*) ∈ *T**x**i**j*)*λ* ≡ ∑(*i*, *j*) ∉ *T**x**i**j* ⋅ *h**i**j*(*X*) + ∑*k* = 1*s**f**i*(*X*) ⋅ *g**i*(*X*). This identity implies the high rank of *T*-matrices also over any field $\F$ in which *α* ≠ 0. Since we have a bound on *α* in terms of *n* and *m* the result follows. Discussion and Open Problems ============================ Our rank bound for design matrices has a dependence on *q*, the number of non-zeros in each row. Can this dependency be removed? This might be possible since a bound on *q* follows indirectly from specifying the bound on *t*, the sizes of the intersections. Removing this dependency might also enable us to argue about square matrices. Our results so far are interesting only in the range of parameters where the number of rows is much larger than the number of columns. With respect to Sylvester-Gallai configurations, the most obvious open problem (discussed in the introduction) is to close the gap between our bound of *O*(1/*δ*2) on the dimension of *δ*-SG configuration and the trivial lower bound of Ω(1/*δ*) obtained by a simple partition of the points into 1/*δ* lines. Another interesting direction is to explore further the connection between design-matrices and LCCs. The most natural way to construct an LCC is by starting with a low-rank design matrix and then defining the code by taking the matrix to be its parity-check matrix. Call such codes *design-LCCs*. Our result on the rank of design matrices shows, essentially, that design-LCCs over the complex numbers cannot have good parameters in general (even for large query complexity). It is natural to ask whether there could exist LCCs that do not originate from designs. Or, more specifically, whether any LCC defines another LCC (with similar parameters) which is a design-LCC. This question was already raised in. Answering this question over the complex numbers will, using our results, give bounds for general LCCs. It is not out of the question to hope for bounds on LCCs with query complexity as large as polynomial in *m* (the encoding length). This would be enough to derive new results on rigidity via the connection made in. In particular, our results on design matrices still give meaningful bounds (on design-LCCs) in this range of parameters. More formally, our results suggest a bound of roughly $\poly(q,1/\delta)$ on the dimension of (*q*, *δ*)-LCCs that arise from designs. A strong from of a conjecture from says that an LCC $C \subset \F^n$ with *q* = *n**ε* queries and error *δ* = *n*− *ε*, for some constant *ε* > 0, cannot have dimension 0.99 ⋅ *n*. This conjecture, if true, would lead to new results on rigidity. Thus, showing that any LCC defines a design (up to some polynomial loss of parameters), combined with our results, would lead to new results on rigidity. Acknowledgements ================ We thank Moritz Hardt for many helpful conversations. We thank Jozsef Solymosi for helpful comments. --- 1. Microsoft Research New England. Email: `[email protected]`. Most of the work done while at Princeton University and supported by NSF grants CNS-0627526, CCF-0426582 and CCF-0832797, and the Packard and Sloan fellowships.[↩](#fnref1) 2. Department of Computer Science, Princeton University. Email: `[email protected]`. Research partially supported by NSF grant CCF-0832797 and by the Packard fellowship.[↩](#fnref2) 3. School of Mathematics, Institute for Advanced Study. Email: `[email protected]`. Research partially supported by NSF grants CCF-0832797 and DMS-0835373.[↩](#fnref3) 4. Department of Mathematics, Technion - IIT. Email: `[email protected]`. Most of the work done while at the IAS. Research partially supported by NSF grants CCF-0832797 and DMS-0835373.[↩](#fnref4) Rank Bounds for Design Matrices with Applications to Combinatorial Geometry and Locally Correctable Codes ========================================================================================================= A (*q*, *k*, *t*)-design matrix is an *m* × *n* matrix whose pattern of zeros/non-zeros satisfies the following design-like condition: each row has at most *q* non-zeros, each column has at least *k* non-zeros and the supports of every two columns intersect in at most *t* rows. We prove that for *m* ≥ *n*, the rank of any (*q*, *k*, *t*)-design matrix over a field of characteristic zero (or sufficiently large finite characteristic) is at least $$n - \left(\frac{qtn}{2k}\right)^2.$$ Using this result we derive the following applications: Impossibility results for 2-query LCCs over large fields. A 2-query locally correctable code (LCC) is an error correcting code in which every codeword coordinate can be recovered, probabilistically, by reading at most two other code positions. Such codes have numerous applications and constructions (with exponential encoding length) are known over finite fields of small characteristic. We show that infinite families of such linear 2-query LCCs *do not exist* over fields of characteristic zero or large characteristic regardless of the encoding length. Generalization of known results in combinatorial geometry. We prove a quantitative analog of the Sylvester-Gallai theorem: Let *v*1, …, *v**m* be a set of points in $\C^d$ such that for every *i* ∈ [*m*] there exists at least *δ**m* values of *j* ∈ [*m*] such that the line through *v**i*, *v**j* contains a third point in the set. We show that the dimension of {*v*1, …, *v**m*} is at most *O*(1/*δ*2). Our results generalize to the high dimensional case (replacing lines with planes, etc.) and to the case where the points are colored (as in the Motzkin-Rabin Theorem). Introduction ============ In this work we study what *combinatorial* properties of matrices guarantee high algebraic *rank*, where a property is combinatorial if it depends only on the zero/non-zero pattern of the matrix, and not on the values of its entries. This question has a rich history in mathematics (see Section [sec: rel work]), and some computer science motivations: Locally correctable codes. A *locally correctable code* is an error correcting code in which for every codeword *y*, given a corrupted version $\Tilde{y}$ of *y* and an index *i*, one can recover the correct value of *y**i* from $\Tilde{y}$ by looking only at very few coordinates of $\Tilde{y}$. It is an open question in coding theory to understand the tradeoffs between the fraction of errors, locality (number of coordinates read) and rate (ratio of message length to codeword length) of such codes, with very large gaps between the known upper bounds and lower bounds (see the survey ). The question is open even for linear codes, where the condition of being locally correctable turns out to be equivalent to the existence of low weight codewords in the dual codewords that are “well-spread” in some precise technical sense (see Section [sec-LCC]). Because of the relation between the rate of the code and its dual, the question becomes equivalent to asking whether this combinatorial “well-spreadness” condition guarantees high rank. Matrix rigidity. A longstanding question is to come up with an explicit matrix that is *rigid* in the sense that its rank cannot be reduced by changing a small number of its entries. Random matrices are extremely rigid, and sufficiently good explicit constructions will yield lower bounds for arithmetic circuits , though we are still very far from achieving this (see the survey ). One can hope that a combinatorial property guaranteeing large rank will be robust under small perturbations, and hence a matrix satisfying such a property will automatically be rigid. In both these cases it is crucial to obtain bounds on the rank that depend solely on the zero/non-zero pattern of the matrix, without placing any restrictions on the non-zero coefficients. For example, there are very strong bounds known for matrix rigidity under the restriction that the non-zero coefficients have bounded magnitude (see Chapter 3 in ), but they only imply lower bounds in a very restricted model. In fact, there is a relation between the two questions, and sufficiently good answers for the first question will imply answers for the second one . We stress that these two examples are in no way exhaustive. The interplay between combinatorial and algebraic properties of matrices is a fascinating question with many potential applications that is still very poorly understood. Our Results ----------- In this work we give a combinatorial property of complex matrices that implies high rank. While not strong enough to prove rigidity results, we are able to use it to obtain several applications in combinatorial geometry and locally correctable codes. Our main result is the following theorem, giving a lower bound on the rank of matrix whose non-zero pattern forms has certain combinatorial-design like properties in the sense that the sets of non-zero entries in each column have small intersections. (This theorem is restated as Theorem [thm-rankdesign].) [Rank bound for design matrices] [ithm:main] Let *m* ≥ *n*. We say that an *m* × *n* complex matrix *A* is a (*q*, *k*, *t*)-design matrix if every row of *A* has at most *q* non-zero entries, every column of *A* has at least *k* non-zeroes entries, and the supports of every two columns intersect in at most *t* rows. For every such *A*, $$\rank(A) \geq n - \left(\frac{q\cdot t \cdot n}{2k}\right)^2.$$ We also show that Theorem [ithm:main], and in fact any result connecting the zero/non-zero pattern to rank, can be made to hold over arbitrary characteristic zero fields and also over fields of sufficiently large (depending on *m*, *n*) finite characteristic. ### Applications to Combinatorial Geometry Our most immediate applications of Theorem [ithm:main] are to questions regarding line-point incidences. Results on line-point incidences have recently found use in the area of computational complexity in relation to pseudo-randomness and de-randomization. In this setting we have an arrangement of a finite number of points in real or complex space. Every such arrangement gives rise to a set of lines, namely, those lines that pass through at least two of the points in the arrangement. Information about these lines can be converted, in some cases, into information about the dimension of the set of points (i.e. the dimension of the space the points span). Our rank theorem can be used to derive generalizations for two well-known theorems in this area: the Sylvester-Gallai theorem and the Motzkin-Rabin theorem. #### Generalizing the Sylvester-Gallai Theorem. The Sylvester-Gallai (SG for short) theorem says that if *m* distinct points $v\_1,\ldots,v\_m \in \R^d$ are not collinear, then there exists a line that passes through exactly two of them. In its contrapositive form the SG theorem says that if for every *i* ≠ *j* the line through *v**i* and *v**j* passes through a third point *v**k*, then dim{*v*1, …, *v**m*} ≤ 1, where dim{*v*1, …, *v**m*} is the dimension of the smallest affine subspace containing the points. This theorem was first conjectured by Sylvester in 1893, proved (in dual form) by Melchior in 1940, and then independently conjectured by Erdos in 1943 and proved by Gallai in 1944. The SG theorem has several beautiful proofs and many generalizations, see the survey. Over the complex numbers the (tight) bound on the dimension is 2 instead of 1. The complex version was first proven by Kelly using a deep results from algebraic geometry, and more recently, an elementary proof was found by Elkies, Pretorius and Swanepoel who also proved it over the quaternions with an upper of 4 on the dimension. We say that the points *v*1, …, *v**m* (in $\R^d$ or $\C^d$) form a *δ*-SG configuration if for every *i* ∈ [*m*] there exists at least *δ**m* values of *j* ∈ [*m*] such that the line through *v**i*, *v**j* contains a third point in the set. Szemeredi and Trotter  showed that, when *δ* is larger than some absolute constant close to 1, then the dimension of a *δ*-SG configuration is at most one (over the reals). We show the following generalization of their result to arbitrary *δ* > 0 (and over the complex numbers). [Quantitative SG theorem][ithm:SG] If $v\_1,\ldots,v\_m \in \C^d$ is a *δ*-SG configuration then dim{*v*1, …, *v**m*} < 13/*δ*2. We note that one cannot replace the bound 13/*δ*2 of Theorem [ithm:SG] with 1 or even with any fixed constant, as one can easily create a *δ*-SG configuration of dimension roughly 2/*δ* by placing the points on 1/*δ* lines. This is analogous to error correcting codes, where once the fraction *δ* of agreement between the original and corrupted codeword drops below half there can be no unique decoding. In that sense our result can be thought of as a *list decoding* variant of the SG theorem, whereas the result of is its unique decoding variant. We also show an “average case” version of the SG theorem, proving a bound on the dimension of a large subset of the points under the assumption that there are many collinear triples (see Theorem [thm-SGav]). We also prove a version of Theorem [thm-deltaSG] with lines replaced by *k*-flats (*k*-dimensional affine subspaces). This generalizes a theorem of Hansen which deals with the case *α* = 1. The statement of this result is technical and so we give it in Section [sec-highdim] where it is also proven. Since our proofs use elementary (and purely algebraic) reductions to the rank theorem, they hold over arbitrary fields of characteristic zero or of sufficiently large finite characteristic. This is in contrast to many of the known proofs of such theorems which often rely on specific properties of the real (or complex) numbers. However, we currently do not recover the full version of the original SG theorem, in the sense that even for *δ* = 1 we do not get a bound of 1 (or 2 for complex numbers) on the dimension. (However, the term 13/*δ*2 can be improved a bit in the *δ* = 1 case to obtain a bound of 9 on the dimension.) #### Generalizing the Motzkin-Rabin Theorem. The Motzkin-Rabin (MR for short) theorem (see e.g. ) is an interesting variant of the Sylvester-Gallai theorem that states that if points $v\_1,\ldots,v\_m \in \R^d$ are colored either red or blue and there is no monochromatic line passing through at least two points, then they are all collinear. As in the SG theorem, we obtain a quantitative generalization of the MR theorem such that (letting *b* and *r* be the numbers of blue and red points respectively), if for every blue (resp. red) point *v*, there are *δ**b* blue (resp. *δ**r* red) points *v*ʹ where the line through *v* and *v*ʹ passes through a red (resp. blue) point, then dim{*v*1, …, *v**m*} ≤ *O*(1/*δ*4). We also prove a three colors variant of the MR theorem, showing that if *v*1, …, *v**m* are colored red, blue and green, and all lines are not monochromatic, then dim{*v*1, …, *v**m*} is at most some absolute constant. ### Locally Correctable Codes A (linear) *q* query locally correctable code ((*q*, *δ*)-LCC for short) over a field $\F$ is a subspace $C \subseteq \F^n$ such that, given an element $\Tilde{y}$ that disagrees with some *y* ∈ *C* in at most *δ**n* positions and an index *i* ∈ [*n*], one can recover *y**i* with, say, probability 0.9, by reading at most *q* coordinates of $\Tilde{y}$. Over the field of two elements $\F\_2$ the standard Hadamard code construction yields a (2,*δ*)-query LCC with dimension Ω(log(*n*)) for constant *δ* > 0 (see the survey ). In contrast we show that for every constant *δ* > 0 there do not exist infinite family of such codes over the complex numbers: [Impossibility of 2-query LCCs over $\C$] [ithm:lcc] If *C* is a 2-query LCC for *δ* fraction of errors over $\C$, then dim(*C*) ≤ *O*(1/*δ*9). We note that the Hadamard construction does yield a *locally decodable code* over the complex numbers with dimension Ω(log*n*). Locally decodable codes are the relaxation of a locally correctable codes where one only needs to be able to recover the coordinates of the original message as opposed to the codeword. Thus over the complex numbers, there is a very strong separation between the notions of locally decodable and locally correctable codes, whereas it is consistent with our knowledge that for, say, $\F\_2$ the rate/locality tradeoffs of both notions are the same. Related Work ------------ The idea to use matrix scaling to study structural properties of matrices was already present in. This work, which was also motivated by the problem of matrix rigidity, studies the presence of short cycles in the graphs of non-zero entries of a square matrix. A related line of work on the rank of ‘design’ matrices is the work emerging from Hamada’s conjecture. (See for a recent result and more references.) Here, a design matrix is defined using stricter conditions (each row/column has exactly the same number of non-zeros and the intersections are also all of the same size) which are more common in the literature dealing with combinatorial designs. In order to be completely consistent with this line of work we should have called our matrices ‘approximate-design’ matrices. We chose to use the (already overused) word ‘design’ to make the presentation more readable. We also note that considering approximate designs only makes our results stronger. Hamada’s conjecture states that of all zero/one matrices whose support comes from a design (in the stricter sense), the minimal rank is obtained by matrices coming from geometric designs (in our language, Reed-Muller codes). In contrast to this paper, the emphasis in this line of works is typically on small finite fields. We note here that the connection between Hamada’s conjecture and LCCs was already observed by Barkol, Ishai and Weinreb who also conjectured (over small fields) the ‘approximate-design’ versions which we prove here for large fields. Another place where the support of a matrix is connected to its rank is in graph theory where we are interested in minimizing the rank of a (square, symmetric) real matrix which has the same support as the adjacency matrix of a given graph. This line of work goes back for over fifty years and has many applications in graph theory. See for a recent survey on this topic. Over the reals we can also ask about the minimal rank of matrices with certain *sign-pattern*. That is, given a matrix over {1,  − 1}, what is the minimal rank of a matrix which has the same sign-pattern. This minimal rank is called the *sign-rank* of a matrix. The question of coming up with (combinatorial or otherwise) properties that imply high sign-rank is one of major importance and has strong connections to communication complexity, learning theory and circuit complexity, among others. For a recent work with plenty of references see. In particular we would like to mention a connection to the work of Forster on the sign-rank of the Hadamard matrix. (An earlier version of this work used a variant of a lemma from instead of the results of on matrix scaling to obtain our main result.) Organization ------------ In Section [sec-techniques] we give a high level overview of our techniques. In Section [sec-rank] we prove our main result on the rank of design matrices. In Section [sec-SG] we prove our quantitative variants of the Sylvester-Gallai theorem. In Section [sec-highdim] we prove the high-dimensional analog of Theorem [thm-deltaSG] where lines are replaced with flats. In Section [sec-MR] we prove our generalizations of the Motzkin-Rabin theorem. In Section [sec-LCC] we prove our results on locally correctable codes. In Section [sec-finite] we show how our results extend to other fields. We conclude in Section [sec-open] with a discussion of open problems. Our Techniques ============== We now give high-level proof overviews for some of our results. Rank Lower Bounds for Design Matrices ------------------------------------- The
arxiv_0000099
Asymptotic Existence of Fair Divisions for Groups ================================================= The problem of dividing resources fairly occurs in many practical situations and is therefore an important topic of study in economics. In this paper, we investigate envy-free divisions in the setting where there are multiple players in each interested party. While all players in a party share the same set of resources, each player has her own preferences. Under additive valuations drawn randomly from probability distributions, we show that when all groups contain an equal number of players, a welfare-maximizing allocation is likely to be envy-free if the number of items exceeds the total number of players by a logarithmic factor. On the other hand, an envy-free allocation is unlikely to exist if the number of items is less than the total number of players. In addition, we show that a simple truthful mechanism, namely the random assignment mechanism, yields an allocation that satisfies the weaker notion of approximate envy-freeness with high probability. Introduction ============ Dividing resources among interested parties in a fair manner is a problem that commonly occurs in real-world situations and is consequently of fundamental importance. Countries negotiate over international issues, as Egypt and Israel did in 1978 over interests in the Sinai Peninsula and the U.S. and Panama in 1994 over those in the Panama Canal. Likewise, divorced couples negotiate over their marital property, airlines over flight routes, and Internet clients over bandwidth and storage space. On a smaller scale, typical everyday tasks involving fair division include distributing household tasks, splitting a taxi fare, and sharing apartment rent. Given its far-reaching and often critical applications, it should not come as a surprise that fair division has long been a popular topic of study in economics. To reason about fair division, we must carefully define what we mean for a division to be “fair”. Several notions of fairness have been proposed in the literature. For example, a division is said to be *proportional* if every player values her allocation at least 1/*n* times her value for the whole set of items, where *n* denotes the number of players. Another commonly-used notion of fairness, and one that we will work with throughout the paper, is that of *envy-freeness*. A division of a set of items is said to be *envy-free* if each player values the set of items that she receives at least as much as the set of items that any other player receives. When utilities are additive, an envy-free division is proportional and moreover satisfies another important fairness notion called the *maximin share criterion*.[1](#fn1) While procedures for finding envy-free divisions have been proposed, an envy-free division does not necessarily exist in arbitrary settings. This can most easily be seen in the case where there are two players and one item that both players value positively, or more generally when the number of players exceeds the number of items and players have positive values for items. The fair division literature has for the most part assumed that each interested party consists of a single player (or equivalently, several players represented by a single common preference). However, this assumption is too restrictive for many practical situations. Indeed, an outcome of a negotiation between countries may have to be approved by members of the cabinets of each country. Since valuations of outcomes are usually subjective, one can easily imagine a situation in which one member of a cabinet of a country thinks that the division is fair while another member disagrees. Similarly, in a divorce case, different members of the family on the husband side and the wife side may have varying opinions on a proposed settlement. Another example is a large company or university that needs to divide its resources among competing groups of agents (e.g., departments in a university), since the agents in each group can have misaligned interests. Indeed, the professors who perform theoretical research may prefer more whiteboards and open space in the department building, while those who engage in experimental work are more likely to prefer laboratories. In this paper, we study envy-free divisions when there are multiple players in each group. Every player has her own preferences, and players in the same group can have very different preferences. In this generalized setting, we consider a division to be envy-free if every player values the set of items assigned to her group at least as much as that assigned to any other group. Our Contributions ----------------- In Section [sec:asymptotic], we investigate the asymptotic existence and non-existence of envy-free divisions using a probabilistic model, previously used in the setting with one player per group. We show that under additive valuations and other mild technical conditions, when all groups contain an equal number of players, an envy-free division is likely to exist if the number of goods exceeds the total number of players by a logarithmic factor, no matter whether the players are distributed into several groups of small size or few groups of large size (Theorem [thm:existence]). In particular, any allocation that maximizes social welfare is likely to be envy-free. In addition, when there are two groups with possibly unequal numbers of players and the distribution on the valuation of each item is symmetric, an envy-free division is likely to exist if the number of goods exceeds the total number of players by a logarithmic factor as well (Theorem [thm:existencesymmetric]). Although it might not be surprising that a welfare-maximizing allocation is envy-free with high probability when there are sufficiently many items, we find the fact that only an extra logarithmic factor is required to be rather unexpected. Indeed, as the number of players in each group increases, it seems as though the independence between the preferences of each player would make it much harder to satisfy all of them simultaneously, since they all need to be allocated the same items. To complement our existence results, we show on the other hand that we cannot get away with a much lower number of items and still have an envy-free division with high probability. In particular, if the number of items is less than the total number of players by a superconstant factor, or if the number of items is less than the total number of players and the number of groups is large, we show that the probability that an envy-free division exists is low (Corollaries [cor:nonexistence1] and [cor:nonexistence2]). This leaves the gap between asymptotic existence and non-existence of envy-free divisions at a mere logarithmic factor. While the techniques used to show asymptotic existence of envy-free divisions in Section [sec:asymptotic] give rise to mechanisms that compute such divisions with high probability, these mechanisms are unfortunately not truthful. In other words, implementing these mechanisms in the presence of strategic players can lead to undesirable outcomes. In Section [sec:existenceapprox], we tackle the issue of truthfulness and show that a simple truthful mechanism, namely the random assignment mechanism, is **α*-approximate envy-free* with high probability for any constant *α* ∈ [0, 1) (Theorem [thm:existenceapprox]). Approximate envy-freeness means that even though a player may envy another player in the resulting division, the values of the player for her own allocation and for the other player’s allocation differ by no more than a multiplicative factor of *α*. In other words, the player’s envy is relatively small compared to her value for her own allocation. The number of items required to obtain approximate envy-freeness with high probability increases as we increase *α*. Our result shows that it is possible to achieve truthfulness and approximate envy-freeness simultaneously in a wide range of random instances, and improves upon the previous result for the setting with one player per group in several ways. Related Work ------------ Our results in Section [sec:asymptotic] can be viewed as generalizations of previous results by Dickerson et al., who showed asymptotic existence and non-existence under a similar model but in a more limited setting where each group has only one player. In particular, these authors proved that under certain technical conditions on the probability distributions, an allocation that maximizes social welfare is envy-free with high probability if the number of items is larger than the number of players by a logarithmic factor. In fact, their result also holds when the number of players stay constant, as long as the number of items goes to infinity. Similarly, we show that a welfare-maximizing allocation is likely to be envy-free if the number of items exceeds the number of players by a logarithmic factor. While we require that the number of player per group goes to infinity, the number of groups can stay small, even constant. On the non-existence front, Dickerson et al. showed that if the utility for each item is independent and identically distributed across players, then envy-free allocations are unlikely to exist when the number of items is larger than the number of players by a linear fraction. On the other hand, our non-existence results apply to the regime where the number of items is smaller than the number of players. Note that while this regime is uninteresting in Dickerson et al.’s setting since envy-free allocations cannot exist, in our generalized setting an envy-free allocation can already exist when the number of items is at least the number of *groups*. Besides the asymptotic results on envy-free divisions, results of this type have also been shown for other fairness notions, including proportionality and the maximin share criterion. These two notions are weaker than envy-freeness when utilities are additive. Suksompong showed that proportional allocations exist with high probability if the number of goods is a multiple of the number of players or if the number of goods grows asymptotically faster than the number of players. Kurokawa et al. showed that if either the number of players or the number of items goes to infinity, then an allocation satisfying the maximin share criterion is likely to exist as long as each probability distribution has at least constant variance. Amanatidis et al. analyzed the rate of convergence for the existence of allocations satisfying the maximin share criterion when the utilities are drawn from the uniform distribution over the unit interval. Another common approach for circumventing the potential nonexistence of divisions satisfying certain fairness concepts, which we do not discuss in this paper, is by showing approximation guarantees for worst-case instances. In particular, Suksompong investigated approximation guarantees for groups of agents using the maximin share criterion. Finally, a model was recently introduced that incorporates the element of resource allocation for groups. The model concerns the problem of finding a small “agreeable” subset of items, i.e., a small subset of items that a group of players simultaneously prefer to its complement. Nevertheless, in that model the preferences of only one group of players are taken into account, whereas in our work we consider the preferences of multiple groups at the same time. Preliminaries ============= Let a set *N* of *n* :  = *g**n*ʹ players be divided into *g* ≥ 2 groups *G*1, …, *G**g* of *n*ʹ players each, and let *M* :  = {1, 2, …, *m*} denote the set of items. Let *g*(*i*) be the index of the group containing player *i* (i.e., player *i* belongs to the group *G**g*(*i*)). Each item will be assigned to exactly one group, to be shared among the members of the group. We assume that each player *i* ∈ *N* has a cardinal utility *u**i*(*j*) for each item *j* ∈ *M*. We may suppose without loss of generality that *u**i*(*j*) ∈ [0, 1], since otherwise we can scale down all utilities by their maximum. We will also make a very common assumption that utilities are *additive*, i.e., *u**i*(*M*ʹ) = ∑*j* ∈ *M*ʹ*u**i*(*j*) for any player *i* ∈ *N* and any subset of items *M*ʹ ⊆ *M*. The *social welfare* of an assignment is the sum of the utilities of all *n* players from the assignment. We are now ready to define the notion of envy-freeness. Denote the subsets of items that are assigned to the *g* groups by *M*1, …, *M**g*, respectively. Player *i* in group *G**g*(*i*) regards her allocation *M**g*(*i*) as *envy-free* if *u**i*(*M**g*(*i*)) ≥ *u**i*(*M**j*) for every group *G**j* ≠ *G**g*(*i*). The assignment of the subsets *M*1, …, *M**g* to the *g* groups is called *envy-free* if every player regards her allocation as envy-free. Next, we list two probabilistic results that will be used in our proofs. We first state the Chernoff bound, which gives us an upper bound on the probability that a sum of independent random variables is far away from its expected value. [Chernoff bound] [lem:chernoff] Let *X*1, …, *X**r* be independent random variables that are bounded in an interval [0, 1], and let *S* :  = *X*1 + ⋯ + *X**r*. We have $$\begin{aligned} \Pr[S \geq (1 + \delta)\operatorname\*{\mathbb{E}}[S]] &\leq \exp{\left(\frac{-\delta^2\operatorname\*{\mathbb{E}}[S]}{3}\right)},\end{aligned}$$ and, $$\begin{aligned} \Pr[S \leq (1 - \delta)\operatorname\*{\mathbb{E}}[S]] &\leq \exp{\left(\frac{-\delta^2\operatorname\*{\mathbb{E}}[S]}{2}\right)}\end{aligned}$$ for every *δ* ≥ 0. Another lemma that we will use is the Berry-Esseen theorem. In short, it states that a sum of a sufficiently large number of independent random variables behaves similarly to a normal distribution. On the surface, this sounds like the central limit theorem. However, the Berry-Esseen theorem relies on a slightly stronger assumption and delivers a more concrete bound, which is required for our purposes. [Berry-Esseen theorem ] [lem:berry] Let *X*1, …, *X**r* be *r* independent and identically distributed random variables, each of which has mean *μ*, variance *σ*2, and third moment[2](#fn2) *ρ*. Let *S* :  = *X*1 + ⋯ + *X**r*. There exists an absolute constant *C**BE* such that $$\begin{aligned} \left|\Pr[S \leq x] - \Pr\_{y \sim \mathcal{N}(\mu r, \sigma^2 r)}[y \leq x]\right| \leq \frac{\rho C\_\mathit{BE}}{\sigma^3 \sqrt{r}}\end{aligned}$$ for every *x* ∈ R. Note that N(*μ**r*, *σ*2*r*) is the normal distribution with mean *μ**r* and variance *σ*2*r*, i.e., its probability density function is $$f(x) = \frac{1}{\sigma \sqrt{2\pi r}}e^\frac{-(x - \mu r)^2}{2\sigma^2 r}.$$ Let us now state two assumptions on distributions of utilities; in Section [sec:asymptotic] we will work with the first and in Section [sec:existenceapprox] with the second. * For each item *j* ∈ *M*, the utilities *u**i*(*j*) ∈ [0, 1] for *i* ∈ *N* are drawn independently at random from a distribution D*j*. Each distribution D*j* is *non-atomic*, i.e., Pr[*u**i*(*j*) = *x*] = 0 for every *x* ∈ [0, 1]. Moreover, the variances of the distributions are bounded away from zero, i.e., there exists a constant *σ**m**i**n* > 0 such that the variances of D1, …, D*m* are at least *σ**m**i**n*2. * For each *i* ∈ *N* and *j* ∈ *M*, the utility *u**i*(*j*) ∈ [0, 1] is drawn independently at random from a probability distribution D*i*, *j*. The mean of each distribution is bounded away from zero, i.e., there exists a constant *μ**m**i**n* > 0 such that $\operatorname\*{\mathbb{E}}[u\_i(j)] \geq \mu\_{min}$ for every *i* ∈ *N*, *j* ∈ *M*. Note that assumption [A2] is weaker than [A1]. Indeed, in [A2] we do not require D*i*, *j* to be the same for every *i*. In addition, since *u**i*(*j*) ∈ [0, 1] for all *i* ∈ *N* and *j* ∈ *M*, we have $\operatorname\*{\mathbb{E}}[u\_i(j)]\geq \operatorname\*{\mathbb{E}}[u\_i(j)^2]\geq \operatorname\*{\mathbb{E}}[u\_i(j)^2]-\operatorname\*{\mathbb{E}}[u\_i(j)]^2=\text{Var}(u\_i(j)).$ Hence, the condition that the means of the distributions are bounded away from zero follows from the analogous condition on the variances. In Section [sec:existenceapprox], we consider the notion of *approximate envy-freeness*, which means that for each player, there is no allocation of another group for which the player’s utility is a certain (multiplicative) factor larger than the utility of the player for the allocation of her own group. The notion is defined formally below. We write $M\_p \succsim\_i^\alpha M\_q$ for *α* ∈ [0, 1] if and only if *u**i*(*M**p*) ≥ *α**u**i*(*M**q*). Player *i* considers an assignment *M*1, …, *M**g* of items to the *g* groups **α*-approximate envy-free* if $M\_{g(i)} \succsim\_i^\alpha M\_p$ for every group *p* ∈ {1, …, *g*}. We say that an assignment is **α*-approximate envy-free* if it is *α*-approximate envy-free for every player *i*. Finally, we give the definition of a truthful mechanism, which we will use in Section [sec:existenceapprox]. A *mechanism* is a function that takes as input the utility of player *i* for item *j* for all *i* ∈ *N* and *j* ∈ *M*, and outputs a (possibly random) assignment of items to the groups. A mechanism is said to be *truthful* if every player always obtains the highest possible (expected) utility by submitting her true utilities to the mechanism, regardless of the utilities that the remaining players submit. Asymptotic Existence and Non-Existence of Fair Divisions ======================================================== In this section, we study the existence and non-existence of fair divisions. First, we show that, when *m* is Ω(*n*log*n*), where Ω( ⋅ ) hides a sufficiently large constant, there exists an envy-free division with high probability (Theorem [thm:existence]). In particular, we prove that a welfare-maximizing allocation is likely to be envy-free. This gives rise to a simple algorithm that finds such a fair division with high probability. We also extend our existence result to the case where there are two groups but the groups need not have the same number of players; we show a similar result in this case, provided that each distribution D*j* satisfies an additional symmetry condition (Theorem [thm:existencesymmetric]). Moreover, on the non-existence front, we prove that when *m* is smaller than *n*, the probability that a fair division exists is at most 1/*g**n* − *m* (Theorem [thm:nonexistence]). This has as consequences that if the number of items is less than the total number of players by a superconstant factor, or if the number of items is less than the total number of players and the number of groups is large, then the probability that an envy-free division exists is low (Corollaries [cor:nonexistence1] and [cor:nonexistence2]). We begin with our main existence result. [thm:existence] Assume that [A1] holds. For any fixed *σ**m**i**n* > 0, there exists a constant *C* > 0 such that, for any sufficiently large *n*ʹ, if *m* > *C**n*log*n*, then there exists an envy-free assignment with high probability. In fact, we not only prove that an envy-free assignment exists but also give a simple greedy algorithm that finds one such assignment with high probability. The algorithm idea is simple; we greedily assign each item to the group that maximizes the total utility of the item with respect to the players in that group. This yields an allocation that maximizes social welfare. The allocation is therefore Pareto optimal, i.e., there exists no other allocation in which every player is weakly better off and at least one player is strictly better off. The pseudocode of the algorithm is shown below. [greedy] [1] let *M*1 = ⋯ = *M**g* = ∅. choose *k*\* from $\operatorname\*{arg\,max}\_{k = 1, \dots, g} \sum\_{p \in G\_k} u\_p(j)$ let *M**k*\* ← *M**k*\* ∪ {*j*} The analysis of the algorithm contains similarities to that of the corresponding result in the setting with one player per group. However, significantly more technical care will be required to handle our setting in which each group contains multiple players. This is reflected by our use of the Berry-Esseen theorem (Lemma [lem:berry]). Here we provide a proof sketch that contains all the high-level ideas but leaves out some tedious details, especially calculations; the full proof can be found in the appendix. [Proof sketch of Theorem [thm:existence]] We will first bound Pr[*M**g*ʹ ≻ *i**M**g*(*i*)] for each player *i* and each group *G**g*ʹ ≠ *G**g*(*i*); we then use the union bound at the end to conclude Theorem [thm:existence]. To bound Pr[*M**g*ʹ ≻ *i**M**g*(*i*)], we define a random variable *A**i*, *j* to be *u**i*(*j*) if item *j* is assigned to group *G**g*(*i*) and zero otherwise. Similarly, define *B**i*, *j**g*ʹ to be *u**i*(*j*) if the item is assigned to group *G**g*ʹ and zero otherwise. Intuitively, with respect to player *i*, *A**i*, *j* is the utility contribution of item *j* to the group *G**g*(*i*). On the other hand, *B**i*, *j**g*ʹ is the utility that is “lost” to group *G**g*ʹ. In other words, *M**g*ʹ ≻ *i**M**g*(*i*) if and only if *S**A* < *S**B*, where *S**A* = ∑*j* ∈ *M**A**i*, *j* and *S**B* = ∑*j* ∈ *M**B**i*, *j**g*ʹ. We will use the Chernoff bound to estimate the probability of this event. To do so, we first need to bound $\operatorname\*{\mathbb{E}}[A\_{i, j}]$ and $\operatorname\*{\mathbb{E}}[B\_{i, j}^{g'}]$. From symmetry between different groups, the probability that item *j* is assigned to each group is 1/*g*. Thus, we have $\operatorname\*{\mathbb{E}}[A\_{i, j}] = \frac{1}{g}\operatorname\*{\mathbb{E}}\left[u\_i(j) \mid \text{item } j \text{ is assigned to } G\_{g(i)}\right]$ and $\operatorname\*{\mathbb{E}}[B\_{i, j}^{g'}] = \frac{1}{g}\operatorname\*{\mathbb{E}}\left[u\_i(j) \mid \text{item } j \text{ is assigned to } G\_{g'}\right]$. It is now fairly easy to see that $\operatorname\*{\mathbb{E}}[B\_{i, j}^{g'}] \leq \mu\_j/g$, where *μ**j* is the mean of D*j*; the reason is that the expected value of *u**i*(*j*) when *j* is not assigned to *G**g*(*i*) is clearly at most *μ**j*. For convenience, we will assume in this proof sketch that $\operatorname\*{\mathbb{E}}[B\_{i, j}^{g'}]$ is roughly *μ**j*/*g*. Now, we will bound the expected value of *A**i*, *j*. For each *p* = 1, …, *g*, let *X**p* denote the sum of the utilities of item *j* with respect to all players in *G**p*. Due to symmetry among players within the same group, we have $$\begin{aligned} \operatorname\*{\mathbb{E}}[A\_{i, j}] &= \frac{1}{n'g} \operatorname\*{\mathbb{E}}\left[X\_{g(i)} \mid X\_{g(i)}=\max\{X\_1, \dots, X\_g\}\right] \\ &= \frac{1}{n'g} \operatorname\*{\mathbb{E}}[\max\{X\_1, \dots, X\_g\}].\end{aligned}$$ The latter equality comes from the symmetry between different groups. Now, we use the Berry-Esseen theorem (Lemma [lem:berry]), which tells us that each of *X*1, …, *X**g* is close to N(*μ**j**n*ʹ, Ω(*σ**m**i**n*2*n*ʹ)). With simple calculations, one can see that the expectation of the maximum of *g* identically independent random variables sampled from N(*μ**j**n*ʹ, Ω(*σ**m**i**n*2*n*ʹ)) is $\mu\_j n' + \Omega(\sigma\_{min}\sqrt{n'})$. Roughly speaking, we also have $$\operatorname\*{\mathbb{E}}\left[A\_{i, j}\right] = \frac{\mu\_j}{g} + \Omega\left(\frac{\sigma\_{min}}{g\sqrt{n'}}\right).$$ Having bounded the expectations of *A**i*, *j* and *B**i*, *j**g*ʹ, we are ready to apply the Chernoff bound. Let $\delta = \Theta\left(\frac{\sigma\_{min}}{\mu\_j \sqrt{n'}}\right)$ where Θ( ⋅ ) hides some sufficiently small constant. When *n*ʹ is sufficiently large, we can see that $(1 + \delta)\operatorname\*{\mathbb{E}}[B\_{i, j}^{g'}] < (1 - \delta)\operatorname\*{\mathbb{E}}[A\_{i, j}]$, which implies that $(1 + \delta)\operatorname\*{\mathbb{E}}[S\_B] < (1 - \delta)\operatorname\*{\mathbb{E}}[S\_A]$. Using the Chernoff bound (Lemma [lem:chernoff]) on *S**A* and *S**B*, we have $$\begin{aligned} \Pr[S\_A \leq (1 - \delta)\operatorname\*{\mathbb{E}}[S\_A]] &\leq \exp{\left(\frac{-\delta^2\operatorname\*{\mathbb{E}}[S\_A]}{2}\right)},\end{aligned}$$ and, $$\begin{aligned} \Pr[S\_B \geq (1 + \delta)\operatorname\*{\mathbb{E}}[S\_B]] &\leq \exp{\left(\frac{-\delta^2\operatorname\*{\mathbb{E}}[S\_B]}{3}\right)}.\end{aligned}$$ Thus, we have $$\begin{aligned} \Pr[S\_A < S\_B] &\leq \exp{\left(\frac{-\delta^2\operatorname\*{\mathbb{E}}[S\_A]}{2}\right)} + \exp{\left(\frac{-\delta^2\operatorname\*{\mathbb{E}}[S\_B]}{3}\right)} \\ &\leq 2\exp{\left(-\Omega\left(\frac{\sigma\_{min}^2 m }{n \mu\_j}\right)\right)} \\ (\text{Since } \mu\_j \leq 1) &\leq 2\exp{\left(-\Omega\left(\frac{\sigma\_{min}^2 m }{n}\right)\right)}.\end{aligned}$$ Recall that Pr[*M**g*ʹ ≻ *i**M**g*(*i*)] = Pr[*S**A* < *S**B*]. Using the union bound for all *i* and all *g*ʹ ≠ *g*(*i*), the probability that the assignment output by the algorithm is not envy-free is at most $$\begin{aligned} 2n(g - 1)\exp{\left(-\Omega\left(\frac{\sigma\_{min}^2 m }{n}\right)\right)},\end{aligned}$$ which is at most 1/*m* when *m* ≥ *C**n*log*n* for some sufficiently large *C*. This completes the proof sketch for the theorem. Unfortunately, the algorithm in Theorem [thm:existence] cannot be extended to give a proof for the case where the groups do not have the same number of players. However, in a more restricted setting where there are only two groups with potentially different numbers of players and an additional symmetry condition on D1, …, D*m* is enforced, a result similar to that in Theorem [thm:existence] can be shown, as stated in the theorem below. [thm:existencesymmetric] Assume that [A1] holds. Suppose that there are only two groups but not necessarily with the same number of players; let *n*1, *n*2 denote the numbers of players of the first and second group respectively (so *n* = *n*1 + *n*2). Assume also that D1, …, D*m* are symmetric (around 1/2)[3](#fn3), i.e., $$\Pr\_{X \sim \mathcal{D}\_j}\left[X \leq \frac{1}{2}-x\right] = \Pr\_{X \sim \mathcal{D}\_j}\left[X \geq \frac{1}{2}+x\right]$$ for all *x* ∈ [0, 1/2]. For any fixed *σ**m**i**n* > 0, there exists a constant *C* > 0 such that, for any sufficiently large *n*1 and *n*2, if *m* > *C**n*log*n*, then there exists an envy-free assignment with high probability. The algorithm is similar to that in Theorem [thm:existence]; the only difference is that, instead of assigning each item to the group with the highest *total* utility over its players, we assign the item to the group with the highest *average* utility, as seen in the pseudocode of Algorithm [greedy2]. [greedy2] [1] let *M*1 = *M*2 = ∅. choose *k*\* from $\operatorname\*{arg\,max}\_{k = 1, 2} \frac{\sum\_{p \in G\_k} u\_p(j)}{n\_k}$ let *M**k*\* ← *M**k*\* ∪ {*j*} The proof is essentially the same as that of Theorem [thm:existence] after the random variables defined are changed corresponding to the modification in the algorithm. For instance, *A**i*, *j* is now defined as $$\begin{aligned} A\_{i, j} = u\_i(j) \cdot \mathbf{1}\left[g(i) = \operatorname\*{arg\,max}\_{k = 1, 2} \frac{\sum\_{p \in G\_k} u\_p(j)}{n\_k}\right]\end{aligned}$$ where **1**[*E*] denotes an indicator variable for event *E*. Due to the similarities between the two proofs, we will not repeat the whole proof. Instead, we would like to point out that all the arguments from Theorem [thm:existence] work here save for only one additional fact that we need to prove: Let *X*1 and *X*2 denote ∑*p* ∈ *G*1*u**p*(*j*)/*n*1 and ∑*p* ∈ *G*2*u**p*(*j*)/*n*2 respectively. Then, $$\begin{aligned} \Pr[X\_1 \geq X\_2] = \frac{1}{2}.\end{aligned}$$ To show this, observe first that, since D*j* is symmetric over 1/2, the distributions of *X*1 and *X*2 are also symmetric over 1/2. Let *f*1 and *f*2 be the probability density functions of *X*1 and *X*2 respectively, we have $$\begin{aligned} \Pr[X\_1 \geq X\_2] &= \int\_0^1 \int\_0^x f\_1(x)f\_2(y) dy dx \\ &= \int\_0^1 \int\_0^x f\_1(1-x)f\_2(1-y) dy dx \\ &= \int\_0^1 \int\_x^1 f\_1(x)f\_2(y) dy dx \\ &= \Pr[X\_2 \geq X\_1].\end{aligned}$$ Hence, Pr[*X*1 ≥ *X*2] = Pr[*X*2 ≥ *X*1] = 1/2, as desired. Next, we state and prove an upper bound for the probability that an envy-free assignment exists when the number of players exceeds the number of items. Such an assignment obviously does not exist under this condition if every group contains only one player. In fact, the theorem holds even without the assumption that the variances of D1, …, D*m* are at least *σ**m**i**n*2 > 0. [thm:nonexistence] Assume that [A1] holds. If *m* < *n*, then there exists an envy-free assignment with probability at most 1/*g**n* − *m*. Suppose that *m* ≤ *n* − 1, and fix an assignment *M*1, …, *M**g*. We will bound the probability that this assignment is envy-free. Consider any player *i* in group *G**g*(*i*). The probability that the assignment is envy-free for this particular player is the probability that the total utility of the player for the bundle *M**g*(*i*) is no less than that for other bundles *M**j*. This can be written as follows: $$\begin{aligned} \Pr\_{u\_i(1) \in \mathcal{D}\_1, \dots, u\_i(m) \in \mathcal{D}\_m}\left[\sum\_{l \in M\_{g(i)}} u\_i(l) = \max\_{k = 1, \dots, g}{\sum\_{l \in M\_k} u\_i(l)}\right].\end{aligned}$$ For each *j* = 1, …, *g*, define *p**j* as $$\begin{aligned} p\_j = \Pr\_{x\_1 \in \mathcal{D}\_1, \dots, x\_m \in \mathcal{D}\_m}\left[\sum\_{l \in M\_j} x\_l = \max\_{k = 1, \dots, g}{\sum\_{l \in M\_k} x\_l}\right].\end{aligned}$$ Notice that the probability that the assignment is envy-free for player *i* is *p**g*(*i*). Since *u**i*(1), …, *u**i*(*m*) is chosen independently of *u**i*ʹ(1), …, *u**i*ʹ(*m*), for every *i*ʹ ≠ *i*, the probability that this assignment is envy-free for every player is simply the product of the probability that the assignment is envy-free for each player, i.e., $$\begin{aligned} \prod\_{i = 1}^{n} p\_{g(i)} &= \prod\_{j=1}^{g} p\_{j}^{n'}.\end{aligned}$$ Using the inequality of arithmetic and geometric means, we arrive at the following bound: $$\begin{aligned} \prod\_{j=1}^{g} p\_{j}^{n'} \leq \left(\frac{1}{g} \sum\_{j=1}^{g} p\_j\right)^{n'g}.\end{aligned}$$ Recall our assumption that the distributions D*j* are non-atomic. Hence we may assume that the events ∑*l* ∈ *M**j**x**l* = max*k* = 1, …, *g*∑*l* ∈ *M**k**x**l* are disjoint for different *j*. This implies that ∑*j* = 1*g**p**j* = 1. Thus, the probability that this fixed assignment is envy-free is at most $$\begin{aligned} \left(\frac{1}{g} \sum\_{j=1}^{g} p\_j\right)^{n'g} = \left(\frac{1}{g}\right)^{n'g} = \frac{1}{g^n}.\end{aligned}$$ Finally, since each assignment is envy-free with probability at most 1/*g**n* and there are *g**m* possible assignments, by union bound the probability that there exists an envy-free assignment is at most 1/*g**n* − *m*. This completes the proof of the theorem. The following corollaries can be immediately derived from Theorem [thm:nonexistence]. They say that an envy-free allocation is unlikely to exist when the number of items is less than the number of players by a superconstant factor, or when the number of items is less than the number of players and the number of groups is large. [cor:nonexistence1] Assume that [A1] holds. When *m* = *n* − *ω*(1), the probability that there exists an envy-free assignment converges to zero as *n* → ∞. [cor:nonexistence2] Assume that [A1] holds. When *m* < *n*, the probability that there exists an envy-free assignment converges to zero as *g* → ∞. Truthful Mechanism for Approximate Envy-Freeness ================================================ While the algorithms in Section [sec:asymptotic] translate to mechanisms that yield with high probability envy-free divisions that are compatible with social welfare assuming that players are truth-telling, the resulting mechanisms suffer from the setback that they are easily manipulable. Indeed, since they aim to maximize (total or average) welfare, strategic players will declare their values for the items to be high, regardless of what the actual values are. This presents a significant disadvantage: Implementing these mechanisms in most practical situations, where we do not know the true valuations of the players and have no reason to assume that they will reveal their valuations in a honest manner, can lead to potentially undesirable outcomes. In this section, we work with the weaker notion of approximate envy-freeness and show that a simple truthful mechanism yields an approximately envy-free assignment with high probability. In particular, we prove that the random assignment mechanism, which assigns each item to a player chosen uniformly and independently at random, is likely to produce such an assignment. In the setting where each group consists of only one player, Amanatidis et al. showed that when the distribution is as above and the number of items *m* is large enough compared to *n*, the random assignment mechanism yields an approximately envy-free assignment with high probability. Our statement is an analogous statement for the case where each group can have multiple players. [thm:existenceapprox] Assume that [A2] holds. For every *α* ∈ [0, 1), there exists a constant *C* depending only on *α* and *μ**m**i**n* such that, if *m* > *C**g*log*n*, then the random assignment, where each item *j* ∈ *M* is assigned independently and uniformly at random to a group, is *α*-approximate envy-free with high probability. Before we prove Theorem [thm:existenceapprox], we note some ways in which our result is stronger than that of Amanatidis et al.’s apart from the fact that multiple players per group are allowed in our setting. First, Amanatidis et al. required D*i*, *j* to be the same for all *j*, which we do not assume here. Next, they only showed that the random assignment is likely to be *approximately proportional*, a weaker notion that is implied by approximate envy-freeness. Moreover, in their result, *m* needs to be as high as Ω(*n*2), whereas in our case it suffices for *m* to be in the range Ω(*g*log*n*). Finally, we also derive a stronger probabilistic bound; they showed a “success probability” of the algorithm of 1 − *O*(*n*2/*m*), while our success probability is 1 − exp( − Ω(*m*/*g*)). [Proof of Theorem [thm:existenceapprox]] For each player *i* ∈ *N*, each item *j* ∈ *M* and each *p* ∈ {1, …, *g*}, let *A**i*, *j**p* be a random variable representing the contribution of item *j*’s utility with respect to player *i* to group *G**p*, i.e., *A**i*, *j**p* is *u**i*(*j*) if item *j* is assigned to group *G**p* and is zero otherwise. Define *S**i**p* :  = ∑*j* ∈ *M**A**i*, *j**p*. Observe that each player *i* considers the assignment to be *α*-approximate envy-free if and only if *S**i**g*(*i*) ≥ *α**S**i**p* for every *p*. Let $\delta = \frac{1 - \alpha}{1 + \alpha}$; from this choice of *δ* and since $\operatorname\*{\mathbb{E}}[S\_i^p]$ is equal for every *p*, we can conclude that *S**i**g*(*i*) ≥ *α**S**i**p* is implied by $S\_i^{g(i)} \geq (1 - \delta) \operatorname\*{\mathbb{E}}[S\_i^{g(i)}]$ and $S\_i^p \leq (1 + \delta)\operatorname\*{\mathbb{E}}[S\_i^p]$. In other words, we can bound the probability that the random assignment is not *α*-approximate envy-free as follows. $$\begin{aligned} &\Pr[\exists i \in N, p \in \{1, \dots, g\}: S\_i^{g(i)} < \alpha S\_i^p] \\ &\phantom{{}=6}\leq \sum\_{i \in N, p \in \{1, \dots, g\}} \Pr[S\_i^{g(i)} < \alpha S\_i^p] \\ &\phantom{{}=6}\leq \sum\_{i \in N, p \in \{1, \dots, g\}} \Pr[S\_i^{g(i)} < (1 - \delta) \operatorname\*{\mathbb{E}}[S\_i^{g(i)}] \vee S\_i^p > (1 + \delta)\operatorname\*{\mathbb{E}}[S\_i^{g(i)}]] \\ &\phantom{{}=6}\leq \sum\_{i \in N, p \in \{1, \dots, g\}} (\Pr[S\_i^{g(i)} < (1 - \delta) \operatorname\*{\mathbb{E}}[S\_i^{g(i)}]] + \Pr[S\_i^p > (1 + \delta)\operatorname\*{\mathbb{E}}[S\_i^{p}]]). \end{aligned}$$ Since *S**i**p* = ∑*j* ∈ *M**A**i*, *j**p* and *A**i*, *j**p*’s are independent and lie in [0, 1], we can use Chernoff bound (Lemma [lem:chernoff]) to upper bound the last terms. Hence, the probability that the alloaction is not *α*-approximate envy-free is at most $$\begin{aligned} \sum\_{i \in N, p \in \{1, \cdots, g\}} \exp\left(\frac{-\delta^2 \operatorname\*{\mathbb{E}}[S\_i^{g(i)}]]}{2}\right) + \exp\left(\frac{-\delta^2 \operatorname\*{\mathbb{E}}[S\_i^p]]}{3}\right).\end{aligned}$$ Finally, observe that $$\operatorname\*{\mathbb{E}}[S\_i^p] = \sum\_{j \in M} \operatorname\*{\mathbb{E}}[A\_{i, j}^p] = \sum\_{j \in M} \frac{1}{g} \operatorname\*{\mathbb{E}}[u\_i(j)] \geq \frac{m \mu\_{min}}{g}.$$ This means that the desired probability is bounded above by $$\begin{aligned} &\sum\_{i \in N, p \in \{1, \dots, g\}} \exp\left(\frac{-\delta^2 m \mu\_{min}}{2g}\right) + \exp\left(\frac{-\delta^2 m \mu\_{min}}{3g}\right) \\ &\phantom{{}=6}\leq 2ng \exp\left(\frac{-\delta^2 m \mu\_{min}}{3g}\right) \\ &\phantom{{}=6}\leq \exp\left(-\frac{\delta^2 m \mu\_{min}}{3g} + 3\log n\right).\end{aligned}$$ When $m > \left(\frac{10}{\mu\_{min}\delta^2}\right)g \log n$, the above expression is at most exp( − Ω(*m*/*g*)), concluding our proof. Concluding Remarks ================== In this paper, we study a generalized setting for fair division that allows interested parties to contain multiple players, possibly with highly differing preferences. This setting allows us to model several real-world cases of fair division that cannot be done under the traditional setting. We establish almost-tight bounds on the number of players and items under which a fair division is likely or unlikely to exist. Furthermore, we consider the issue of truthfulness and show that a simple truthful mechanism produces an assignment that is approximately envy-free with high probability. While the assumptions of additivity and independence are somewhat restrictive and might not apply fully to settings in the real world, our results give indications as to what we can expect if the assumptions are relaxed, such as if a certain degree of dependence is introduced. An interesting future direction is to generalize the results to settings with more general valuations. In particular, if the utility functions are low-degree polynomials, then one could try applying the invariance principle, which is a generalization of the Berry-Esseen theorem that we use. We end the paper with some questions that remain after this work. A natural question is whether we can generalize our existence and non-existence results (Theorems [thm:existence] and [thm:nonexistence]) to the setting where the groups do not contain the same number of players. This non-symmetry between the groups seems to complicate the approaches that we use in this paper. For example, it breaks the greedy algorithm used in Theorem [thm:existence]. Nevertheless, it might still be possible to prove existence of an envy-free division using other algorithms or without relying on a specific algorithm. Another direction for future research is to invent procedures for computing envy-free divisions, whenever such divisions exist, for the general setting where each group contains multiple players and players have arbitrary (not necessarily additive) preferences. Even procedures that only depend on rankings of single items do not appear to extend easily to this setting. Indeed, if a group contains two players whose preferences are opposite of each other, it is not immediately clear what we should assign to the group. It would be useful to have a procedure that produces a desirable outcome, even for a small number of players in each group. Lastly, one could explore the limitations that arise when we impose the condition of truthfulness, an important property when we implement the mechanisms in practice. For instance, truthful allocation mechanisms have recently been characterized in the case of two players, and it has been shown that there is a separation between truthful and non-truthful mechanisms for approximating maximin shares. In our setting, a negative result on the existence of a truthful mechanism that yields an envy-free division with high probability would provide such a separation as well, while a positive result in this direction would have even more consequences for practical applications. Acknowledgments =============== The authors thank the anonymous reviewers for their helpful feedback. Warut Suksompong acknowledges support from a Stanford Graduate Fellowship. Appendix ======== Proof of Theorem [thm:existence] -------------------------------- First we list the following well-known fact, which allows us to easily determine the mean of a random variable from its cumulative density function. [prop:meanbymass] Let *X* be a non-negative random variable. Then $$\begin{aligned} \operatorname\*{\mathbb{E}}[X] = \int\_0^\infty \Pr[X \geq x] dx.\end{aligned}$$ To analyze the algorithm, consider any player *i* and any group *g*ʹ ≠ *g*(*i*). We will first bound the probability that *M**g*ʹ ≻ *i**M**g*(*i*). To do this, for each item *j* ∈ *M*, define *A**i*, *j* as $$\begin{aligned} A\_{i, j} = u\_i(j)\cdot\textbf{1}\left[g(i) = \operatorname\*{arg\,max}\_{k = 1, \dots, g} \sum\_{p \in G\_k} u\_p(j)\right],\end{aligned}$$ where $1\left[g(i) = \operatorname\*{arg\,max}\_{k = 1, \dots, g} \sum\_{p \in G\_k} u\_p(j)\right]$ is an indicator random variable that indicates whether $g(i) = \operatorname\*{arg\,max}\_{k = 1, \dots, g} \sum\_{p \in G\_k} u\_p(j)$. Similarly, define *B**i*, *j**g*ʹ as $$\begin{aligned} B\_{i, j}^{g'} = u\_i(j)\cdot\textbf{1}\left[g' = \operatorname\*{arg\,max}\_{k = 1, \dots, g} \sum\_{p \in G\_k} u\_p(j)\right].\end{aligned}$$ Moreover, suppose that D*j* has mean *μ**j* and variance *σ**j*2. Notice that, with respect to player *i*, *A**i*, *j* is the utility that item *j* contributes to *g*(*i*) whereas *B**i*, *j**g*ʹ is the utility that item *j* contributes to *g*ʹ. In other words, *M**g*ʹ ≻ *i**M**g*(*i*) if and only if ∑*j* ∈ *M**A**i*, *j* < ∑*j* ∈ *M**B**i*, *j**g*ʹ. To bound Pr[*M**g*ʹ ≻ *i**M**g*(*i*)], we will first bound $\operatorname\*{\mathbb{E}}\left[A\_{i, j}\right]$ and $\operatorname\*{\mathbb{E}}\left[B\_{i, j}^{g'}\right]$. Then, we will use the Chernoff bound to bound Pr[∑*j* ∈ *M**A**i*, *j* < ∑*j* ∈ *M**B**i*, *j**g*ʹ]. Observe that, due to symmetry, we can conclude that $$\begin{aligned} \operatorname\*{\mathbb{E}}\left[u\_i(j)\cdot\textbf{1}\left[g' = \operatorname\*{arg\,max}\_{k = 1, \dots, g} \sum\_{p \in G\_k} u\_p(j)\right]\right] = \operatorname\*{\mathbb{E}}\left[u\_i(j)\cdot\textbf{1}\left[g'' = \operatorname\*{arg\,max}\_{k = 1, \dots, g} \sum\_{p \in G\_k} u\_p(j)\right]\right]\end{aligned}$$ for any *g*ʺ ≠ *g*(*i*). Thus, we can now rearrange *B**i*, *j**g*ʹ as follows: $$\begin{aligned} \operatorname\*{\mathbb{E}}\left[B\_{i, j}^{g'}\right] &= \frac{1}{g - 1} \left(\sum\_{g'' \ne g(i)} \operatorname\*{\mathbb{E}}\left[u\_i(j)\cdot\textbf{1}\left[g'' = \operatorname\*{arg\,max}\_{k = 1, \dots, g} \sum\_{p \in G\_k} u\_p(j)\right]\right]\right) \\ &= \frac{1}{g - 1} \left(\operatorname\*{\mathbb{E}}\left[u\_i(j) \sum\_{g'' \ne g(i)}\textbf{1}\left[g'' = \operatorname\*{arg\,max}\_{k = 1, \dots, g} \sum\_{p \in G\_k} u\_p(j)\right]\right]\right) \\ &= \frac{1}{g - 1} \left(\operatorname\*{\mathbb{E}}\left[u\_i(j) \left(1 - \textbf{1}\left[g(i) = \operatorname\*{arg\,max}\_{k = 1, \dots, g} \sum\_{p \in G\_k} u\_p(j)\right]\right)\right]\right).\end{aligned}$$ Hence, we have $$\begin{aligned} \label{eq:AtoB} \operatorname\*{\mathbb{E}}\left[B\_{i, j}^{g'}\right] = \frac{1}{g - 1} \left(\mu\_j - \operatorname\*{\mathbb{E}}\left[A\_{i, j}\right]\right).\end{aligned}$$ Now, consider *A**i*, *j*. Again, due to symmetry, we have $$\begin{aligned} \operatorname\*{\mathbb{E}}\left[A\_{i, j}\right] &= \frac{1}{n'} \left(\sum\_{i' \in G\_{g(i)}} \operatorname\*{\mathbb{E}}\left[u\_{i'}(j)\cdot\textbf{1}\left[g(i) = \operatorname\*{arg\,max}\_{k = 1, \dots, g} \sum\_{p \in G\_k} u\_p(j)\right]\right]\right) \\ &= \frac{1}{n'} \operatorname\*{\mathbb{E}}\left[\left(\sum\_{i' \in G\_{g(i)}} u\_{i'}(j)\right)\cdot\textbf{1}\left[g(i) = \operatorname\*{arg\,max}\_{k = 1, \dots, g} \sum\_{p \in G\_k} u\_p(j)\right]\right].\end{aligned}$$ Let S denote the distribution of the sum of *n*ʹ independent random variables, each drawn from D*j*. It is obvious that ∑*p* ∈ *G**k**u**p*(*j*) is drawn from S independently for each *k*. In other words, $\operatorname\*{\mathbb{E}}\left[A\_{i, j}\right]$ can be written as $$\begin{aligned} \operatorname\*{\mathbb{E}}\left[A\_{i, j}\right] &= \frac{1}{n'} \operatorname\*{\mathbb{E}}\left[X\_1 \cdot\textbf{1}\left[X\_1 = \max\{X\_1, \dots, X\_g\}\right]\right].\end{aligned}$$ The expectation on the right is taken over *X*1, …, *X**g* sampled independently from S. From symmetry among *X*1, …, *X**g*, we can further derive the following: $$\begin{aligned} \operatorname\*{\mathbb{E}}\left[A\_{i, j}\right] &= \frac{1}{n'} \Pr\left[X\_1 = \max\{X\_1, \dots, X\_g\}\right] \operatorname\*{\mathbb{E}}\left[X\_1 \mid X\_1 = \max\{X\_1, \dots, X\_g\}\right] \\ &= \frac{1}{n'g} \operatorname\*{\mathbb{E}}\left[X\_1 \mid X\_1 = \max\{X\_1, \dots, X\_g\}\right] \\ &= \frac{1}{n'g} \operatorname\*{\mathbb{E}}\left[\max\{X\_1, \dots, X\_g\}\right].\end{aligned}$$ Consider the distribution of max{*X*1, …, *X**g*}. Let us call this distribution Y. Notice that $\operatorname\*{\mathbb{E}}\left[\max\{X\_1, \dots, X\_g\}\right]$ is just the mean of Y, i.e., $$\begin{aligned} \label{eq:meanA} \operatorname\*{\mathbb{E}}\left[A\_{i, j}\right] = \frac{1}{n' g} \operatorname\*{\mathbb{E}}\_{Y \sim \mathcal{Y}}[Y].\end{aligned}$$ To bound this, let *F**S* and *F**Y* be the cumulative density functions of S and Y respectively. Notice that *F**Y*(*x*) = *F**S*(*x*)*g* for all *x*. Applying Proposition [prop:meanbymass] to S and Y yields the following: $$\begin{aligned} \operatorname\*{\mathbb{E}}\_{S \sim \mathcal{S}}[S] = \int\_0^\infty (1 - F\_S(x))dx,\end{aligned}$$ and, $$\begin{aligned} \operatorname\*{\mathbb{E}}\_{Y \sim \mathcal{Y}}[Y] = \int\_0^\infty (1 - F\_S(x)^g)dx.\end{aligned}$$ By taking the difference of the two, we have $$\begin{aligned} \operatorname\*{\mathbb{E}}\_{Y \sim \mathcal{Y}}[Y] = \operatorname\*{\mathbb{E}}\_{S \sim \mathcal{S}}[S] + \int\_0^\infty F\_S(x)\left(1 - F\_S(x)^{g - 1}\right)dx.\end{aligned}$$ To bound the right hand side, recall that S is just the distribution of the sum of *n*ʹ independent random variables sampled according to D*j*. Note that the third moment of D*j* is at most 1 because it is bounded in [0, 1]. Thus, by applying the Berry-Esseen Theorem (Lemma 2), we have $$\begin{aligned} \left|F\_S(x) - \Pr\_{y \sim \mathcal{N}(\mu\_j n', \sigma\_j^2 n')}[y \leq x]\right| \leq \frac{C\_{BE}}{\sigma\_j^3 \sqrt{n'}}.\end{aligned}$$ for all *x* ∈ R. When *n*ʹ is sufficiently large, the right hand side is at most 0.1. Moreover, it is easy to check that Pr*y* ∼ N(*μ**j**n*ʹ, *σ**j*2*n*ʹ)[*y* ≤ *x*] ∈ [0.5, 0.85] for every $x \in \left[\mu\_j n', \mu\_j n' + \sigma\_j \sqrt{n'}\right]$. Hence, *F**S*(*x*) ∈ [0.4, 0.95] for every $x \in \left[\mu\_j n', \mu\_j n' + \sigma\_j \sqrt{n'}\right]$. Now, we can bound $\operatorname\*{\mathbb{E}}\_{Y \sim \mathcal{Y}}[Y]$ as follows: $$\begin{aligned} \operatorname\*{\mathbb{E}}\_{Y \sim \mathcal{Y}}[Y] &= \operatorname\*{\mathbb{E}}\_{S \sim \mathcal{S}}[S] + \int\_0^\infty F\_S(x)\left(1 - F\_S(x)^{g - 1}\right)dx \\ &= \mu\_j n' + \int\_0^\infty F\_S(x)\left(1 - F\_S(x)^{g - 1}\right)dx \\ (\text{Since } F\_S(x)\left(1 - F\_S(x)^{g - 1}\right) \geq 0) &\geq \mu\_j n' + \int\_{\mu\_j n'}^{\mu\_j n' + \sigma\_j \sqrt{n'}} F\_S(x)\left(1 - F\_S(x)^{g - 1}\right)dx \\ &\geq \mu\_j n' + \int\_{\mu\_j n'}^{\mu\_j n' + \sigma\_j \sqrt{n'}} (0.4)(0.05)dx \\ &= \mu\_j n' + \sigma\_j \sqrt{n'}/50 \\ (\text{Since } \sigma\_j \geq \sigma\_{min}) &\geq \mu\_j n' + \sigma\_{min} \sqrt{n'}/50.\end{aligned}$$ Plugging the above inequality into equation ([eq:meanA]), we can conclude that $$\begin{aligned} \operatorname\*{\mathbb{E}}\left[A\_{i, j}\right] = \frac{1}{n'g} \operatorname\*{\mathbb{E}}\_{Y \in \mathcal{Y}}[Y] \geq \frac{\mu\_j}{g} + \frac{\sigma\_{min}}{50 g \sqrt{n'}}.\end{aligned}$$ From this and equation ([eq:AtoB]), we have $$\begin{aligned} \operatorname\*{\mathbb{E}}\left[B\_{i, j}^{g'}\right] = \frac{1}{g - 1} \left(\mu\_j - \operatorname\*{\mathbb{E}}\left[A\_{i, j}\right]\right) \leq \frac{1}{g - 1} \left(\mu\_j - \frac{\mu\_j}{g}\right) = \frac{\mu\_j}{g}.\end{aligned}$$ Now, define *C**i*, *j**g*ʹ as $C\_{i, j}^{g'} = B\_{i, j}^{g'} + \left(\mu\_j/g - \operatorname\*{\mathbb{E}}\left[B\_{i, j}^{g'}\right]\right)$. Notice $\operatorname\*{\mathbb{E}}\left[C\_{i, j}^{g'}\right] = \mu\_j/g$. As stated earlier, *M**g*ʹ ≻ *i**M**g*(*i*) if and only if ∑*j* ∈ *M**A**i*, *j* < ∑*j* ∈ *M**B**i*, *j**g*ʹ. Let *S**A* = ∑*j* ∈ *M**A**i*, *j*, *S**B* = ∑*j* ∈ *M**B**i*, *j**g*ʹ, *S**C* = ∑*j* ∈ *M**C**i*, *j**g*ʹ and let $\delta = \frac{\sigma\_{min}}{200\mu\_j \sqrt{n'}}$. Notice that, since we assume that the variance of D*j* is positive, *μ**j* is also non-zero, which means that *δ* is well-defined. Using Chernoff bound (Lemma 1) on *S**A* and *S**C*, we have $$\begin{aligned} \Pr[S\_A \leq (1 - \delta)\operatorname\*{\mathbb{E}}[S\_A]] &\leq \exp{\left(\frac{-\delta^2\operatorname\*{\mathbb{E}}[S\_A]}{2}\right)},\end{aligned}$$ and, $$\begin{aligned} \Pr[S\_C \geq (1 + \delta)\operatorname\*{\mathbb{E}}[S\_C]] &\leq \exp{\left(\frac{-\delta^2\operatorname\*{\mathbb{E}}[S\_C]}{3}\right)}.\end{aligned}$$ Moreover, when *n*ʹ is large enough, we have $(1 - \delta)\operatorname\*{\mathbb{E}}[S\_A] \geq (1 + \delta)\operatorname\*{\mathbb{E}}[S\_C]$. Thus, we have $$\begin{aligned} \Pr[S\_A < S\_C] &\leq \exp{\left(\frac{-\delta^2\operatorname\*{\mathbb{E}}[S\_A]}{2}\right)} + \exp{\left(\frac{-\delta^2\operatorname\*{\mathbb{E}}[S\_C]}{3}\right)} \\ &\leq \exp{\left(\frac{-\delta^2 m \mu\_j}{2g}\right)} + \exp{\left(\frac{-\delta^2m \mu\_j}{3g}\right)} \\ &\leq 2\exp{\left(\frac{-\sigma\_{min}^2 m }{120000g n' \mu\_j}\right)} \\ (\text{Since } \mu\_j \leq 1) &\leq 2\exp{\left(\frac{-\sigma\_{min}^2 m }{120000n}\right)}.\end{aligned}$$ Due to how *C**i*, *j**g*ʹ is defined, we have Pr[*S**A* < *S**C*] ≥ Pr[*S**A* < *S**B*] = Pr[*M**g*ʹ ≻ *i**M**g*(*i*)]. Using the union bound for all *i* and all *g*ʹ ≠ *g*(*i*), the probability that the assignment output by the algorithm is not envy-free is at most $$\begin{aligned} 2n(g - 1)\exp{\left(\frac{-\sigma\_{min}^2 m }{120000n}\right)},\end{aligned}$$ which is at most 1/*m* when *m* ≥ *C**n*log*n* for some sufficiently large *C*. This completes the proof for the theorem. --- 1. See, e.g., for the definition of the maximin share criterion.[↩](#fnref1) 2. The third moment of a random variable *X* is defined as $\operatorname\*{\mathbb{E}}[|X - \operatorname\*{\mathbb{E}}[X]|^3]$.[↩](#fnref2) 3. There is nothing special about the number 1/2; a similar result holds if the distributions are supported on a subset of an interval [*a*, *b*] and are symmetric around (*a* + *b*)/2, for some 0 < *a* < *b*.[↩](#fnref3) Asymptotic Existence of Fair Divisions for Groups ================================================= The problem of dividing resources fairly occurs in many practical situations and is therefore an important topic of study in economics. In this paper, we investigate envy-free divisions in the setting where there are multiple players in each interested party. While all players in a party share the same set of resources, each player has her own preferences. Under additive valuations drawn randomly from probability distributions, we show that when all groups contain an equal number of players, a welfare-maximizing allocation is likely to be envy-free if the number of items exceeds the total number of players by a logarithmic factor. On the other hand, an envy-free allocation is unlikely to exist if the number of items is less than the total number of players. In addition, we show that a simple truthful mechanism, namely the random assignment mechanism, yields an allocation that satisfies the weaker notion of approximate envy-freeness with high probability. Introduction ============ Dividing resources among interested parties in a fair manner is a problem that commonly occurs in real-world situations and is consequently of fundamental importance. Countries negotiate over international issues, as Egypt and Israel did in 1978 over interests in the Sinai Peninsula and the U.S. and Panama in 1994 over those in the Panama Canal. Likewise, divorced couples negotiate over their marital property, airlines over flight routes, and Internet clients over bandwidth and storage space. On a smaller scale, typical everyday tasks involving fair division include distributing household tasks, splitting a taxi fare, and sharing apartment rent. Given its far-reaching and often critical applications, it should not come as a surprise that fair division has long been a popular topic of study in economics. To reason about fair division, we must carefully define what we mean for a division to be “fair”. Several notions of fairness have been proposed in the literature. For example, a division is said to be *proportional* if every player values her allocation at least 1/*n* times her value for the whole set of items, where *n* denotes the number of players. Another commonly-used notion of fairness, and one that we will work with throughout the paper, is that of *envy-freeness*. A division of a set of items is said to be *envy-free* if each player values the set of items that she receives at least as much as the set of items that any other player receives. When utilities are additive, an envy-free division is proportional and moreover satisfies another important fairness notion called the *maximin share criterion*.[1](#fn1) While procedures for finding envy-free divisions have been proposed, an envy-free division does not necessarily exist in arbitrary settings. This can most easily be seen in the case where there are two players and one item that both players value positively, or more generally when the number of players exceeds the number of items and players have positive values for items. The fair division literature has for the most part assumed that each interested party consists of a single player (or equivalently, several players represented by a single common preference). However, this assumption is too restrictive for many practical situations. Indeed, an outcome of a negotiation between countries may have to be approved by members of the cabinets of each country. Since valuations of outcomes are usually subjective, one can easily imagine a situation in which one member of a cabinet of a country thinks that the division is fair while another member disagrees. Similarly, in a divorce case, different members of the family on the husband side and the wife side may have varying opinions on a proposed settlement. Another example is a large company or university that needs to divide its resources among competing groups of agents (e.g., departments in a university), since the agents in each group can have misaligned interests. Indeed, the professors who perform theoretical research may prefer more whiteboards and open space in the department building, while those who engage in experimental work are more likely to prefer laboratories. In this paper, we study envy-free divisions when there are multiple players in each group. Every player has her own preferences, and players in the same group can have very different preferences. In this generalized setting, we consider a division to be envy-free if every player values the set of items assigned to her group at least as much as that assigned to any other group. Our Contributions ----------------- In Section [sec:asymptotic], we investigate the asymptotic existence and non-existence of envy-free divisions using a probabilistic model, previously used in the setting with one player per group. We show that under additive valuations and other mild technical conditions, when all groups contain an equal number of players, an envy-free division is likely to exist if the number of goods exceeds the total number of players by a logarithmic factor, no matter whether the players are distributed into several groups of small size or few groups of large size (Theorem [thm:existence]). In particular, any allocation that maximizes social welfare is likely to be envy-free. In addition, when there are two groups with possibly unequal numbers of players and the distribution on the valuation of each item is symmetric, an envy-free division is likely to exist if the number of goods exceeds the total number of players by a logarithmic factor as well (Theorem [thm:existencesymmetric]). Although it might not be surprising that a welfare-maximizing allocation is envy-free with high probability when there are sufficiently many items, we find the fact that only an extra logarithmic factor is required to be rather unexpected. Indeed, as the number of players in each group increases, it seems as though the independence between the preferences of each player would make it much harder to satisfy all of them simultaneously, since they all need to be allocated the same items. To complement our existence results, we show on the other hand that we cannot get away with a much lower number of items and still have an envy-free division with high probability. In particular, if the number of items is less than the total number of players by a superconstant factor, or if the number of items is less than the total number of players and the number of groups is large, we show that the probability that an envy-free division exists is low (Corollaries [cor:nonexistence1] and [cor:nonexistence2]). This leaves the gap between asymptotic existence and non-existence of envy-free divisions at a mere logarithmic factor. While the techniques used to show asymptotic existence of envy-free divisions in Section [sec:asymptotic] give rise to mechanisms that compute such divisions with high probability, these mechanisms are unfortunately not truthful. In other words, implementing these mechanisms in the presence of strategic players can lead to undesirable outcomes. In Section [sec:existenceapprox], we tackle the issue of truthfulness and show that a simple truthful mechanism, namely the random assignment mechanism, is **α*-approximate envy-free* with high probability for any constant *α* ∈ [0, 1) (Theorem [thm:existenceapprox]). Approximate envy-freeness means that even though a player may envy another player in the resulting division, the values of the player for her own allocation and for the other player’s allocation differ by no more than a multiplicative factor of *α*. In other words, the player’s envy is relatively small compared to her value for her own allocation. The number of items required to obtain approximate envy-freeness with high probability increases as we increase *α*. Our result shows that it is possible to achieve truthfulness and approximate envy-freeness simultaneously in a wide range of random instances, and improves upon the previous result for the setting with one player per group in several ways. Related Work ------------ Our results in Section [sec:asymptotic] can be viewed as generalizations of previous results by Dickerson et al., who showed asymptotic existence and non-existence under a similar model but in a more limited setting where each group has only one player. In particular, these authors proved that under certain technical conditions on the probability distributions, an allocation that maximizes social welfare is envy-free with high probability if the number of items is larger than the number of players by a logarithmic factor. In fact, their result also holds when the number of players stay constant, as long as the number of items goes to infinity. Similarly, we show that a welfare-maximizing allocation is likely to be envy-free if the number of items exceeds the number of players by a logarithmic factor. While we require that the number of player per group goes to infinity, the number of groups can stay small, even constant. On the non-existence front, Dickerson et al. showed that if the utility for each item is independent and identically distributed across players, then envy-free allocations are unlikely to exist when the number of items is larger than the number of players by a linear fraction. On the other hand, our non-existence results apply to the regime where the number of items is smaller than the number of players. Note that while this regime is uninteresting in Dickerson et al.’s setting since envy-free allocations cannot exist, in our generalized setting an envy-free allocation can already exist when the number of items is at least the number of *groups*. Besides the asymptotic results on envy-free divisions, results of this type have also been shown for other fairness notions, including proportionality and the maximin share criterion. These two notions are weaker than envy-freeness when utilities are additive. Suksompong showed that proportional allocations exist with high probability if the number of goods is a multiple of the number of players or if the number of goods grows asymptotically faster than the number of players. Kurokawa et al. showed that if either the number of players or the number of items goes to infinity, then an allocation satisfying the maximin share criterion is likely to exist as long as each probability distribution has at least constant variance. Amanatidis et al. analyzed the rate of convergence for the existence of allocations satisfying the maximin share criterion when the utilities are drawn from the uniform distribution over the unit interval. Another common approach for circumventing the potential nonexistence of divisions satisfying certain fairness concepts, which we do not discuss in this paper, is by showing approximation guarantees for worst-case instances. In particular, Suksompong investigated approximation guarantees for groups of agents using the maximin share criterion. Finally, a model was recently introduced that incorporates the element of resource allocation for groups. The model concerns the problem of finding a small “agreeable” subset of items, i.e., a small subset of items that a group of players simultaneously prefer to its complement. Nevertheless, in that model the preferences of only one group of players are taken into account, whereas in our work we consider the preferences of multiple groups at the same time. Preliminaries ============= Let a set *N* of *n* :  = *g**n*ʹ players be divided into *g* ≥ 2 groups *G*1, …, *G**g* of *n*ʹ players each, and let *M* :  = {1, 2, …, *m*} denote the set of items. Let *g*(*i*) be the index of the group containing player *i* (i.e., player *i* belongs to the group *G**g*(*i*)). Each item will be assigned to exactly one group, to be shared among the members of the group. We assume that each player *i* ∈ *N* has a cardinal utility *u**i*(*j*) for each item *j* ∈ *M*. We may suppose without loss of generality that *u**i*(*j*) ∈ [0, 1], since otherwise we can scale down all utilities by their maximum. We will also make a very common assumption that utilities are *additive*, i.e., *u**i*(*M*ʹ) = ∑*j* ∈ *M*ʹ*u**i*(*j*) for any player *i* ∈ *N* and any subset of items *M*ʹ ⊆ *M*. The *social welfare* of an assignment is the sum of the utilities of all *n* players from the assignment. We are now ready to define the notion of envy-freeness. Denote the subsets of items that are assigned to the *g* groups by *M*1, …, *M**g*, respectively. Player *i* in group *G**g*(*i*) regards her allocation *M**g*(*i*) as *envy-free* if *u**i*(*M**g*(*i*)) ≥ *u**i*(*M**j*) for every group *G**j* ≠ *G**g*(*i*). The assignment of the subsets *M*1, …, *M**g* to the *g* groups is called *envy-free* if every player regards her allocation as envy-free. Next, we list two probabilistic results that will be used in our proofs. We first state the Chernoff bound, which gives us an upper bound on the probability that a sum of independent random variables is far away from its expected value. [Chernoff bound] [lem:chernoff] Let *X*1, …, *X**r* be independent random variables that are bounded in an interval [0, 1], and let *S* :  = *X*1 + ⋯ + *X**r*. We have $$\begin{aligned} \Pr[S \geq (1 + \delta)\operatorname\*{\mathbb{E}}[S]] &\leq \exp{\left(\frac{-\delta^2\operatorname\*{\mathbb{E}}[S]}{3}\right)},\end{aligned}$$ and, $$\begin{aligned} \Pr[S \leq (1 - \delta)\operatorname\*{\mathbb{E}}[S]] &\leq \exp{\left(\frac{-\delta^2\operatorname\*{\mathbb{E}}[S]}{2}\right)}\end{aligned}$$ for every *δ* ≥ 0. Another lemma that we will use is the Berry-Esseen theorem. In short, it states that a sum of a sufficiently large number of independent random variables behaves similarly to a normal distribution. On the surface, this sounds like the central limit theorem. However, the Berry-Esseen theorem relies on a slightly stronger assumption and delivers a more concrete bound, which is required for our purposes. [Berry-Esseen theorem ] [lem:berry] Let *X*1, …, *X**r* be *r* independent and identically distributed random variables, each of which has mean *μ*, variance *σ*2, and third moment[2](#fn2) *ρ*. Let *S* :  = *X*1 + ⋯ + *X**r*. There exists an absolute constant *C**BE* such that $$\begin{aligned} \left|\Pr[S \leq x] - \Pr\_{y \sim \mathcal{N}(\mu r, \sigma^2 r)}[y \leq x]\right| \leq \frac{\rho C\_\mathit{BE}}{\sigma^3 \sqrt{r}}\end{aligned}$$ for every *x* ∈ R. Note that N(*μ**r*, *σ*2*r*) is the normal distribution with mean *μ**r* and variance *σ*2*r*, i.e., its probability density function is $$f(x) = \frac{1}{\sigma \sqrt{2\pi r}}e^\frac{-(x - \mu r)^2}{2\sigma^2 r}.$$ Let us now state two assumptions on distributions of utilities; in Section [sec:asymptotic] we will work with the first and in Section [sec:existenceapprox] with the second. * For each item *j* ∈ *M*, the utilities *u**i*(*j*) ∈ [0, 1] for *i* ∈ *N* are drawn independently at random from a distribution D*j*. Each distribution D*j* is *non-atomic*, i.e., Pr[*u**i*(*j*) = *x*] = 0 for every *x* ∈ [0, 1]. Moreover, the variances of the distributions are bounded away from zero, i.e., there exists a constant *σ**m**i**n* > 0 such that the variances of D1, …, D*m* are at least *σ**m**i**n*2. * For each *i* ∈ *N* and *j* ∈ *M*, the utility *u**i*(*j*) ∈ [0, 1] is drawn independently at random from a probability distribution D*i*, *j*. The mean of each distribution is bounded away from zero, i.e., there exists a constant *μ**m**i**n* > 0 such that $\operatorname\*{\mathbb{E}}[u\_i(j)] \geq \mu\_{min}$ for every *i* ∈ *N*, *j* ∈ *M*. Note that assumption [A2] is weaker than [A1]. Indeed, in [A2] we do not require D*i*, *j* to be the same for every *i*. In addition, since *u**i*(*j*) ∈ [0, 1] for all *i* ∈ *N* and *j* ∈ *M*, we have $\operatorname\*{\mathbb{E}}[u\_i(j)]\geq \operatorname\*{\mathbb{E}}[u\_i(j)^2]\geq \operatorname\*{\mathbb{E}}[u\_i(j)^2]-\operatorname\*{\mathbb{E}}[u\_i(j)]^2=\text{Var}(u\_i(j)).$ Hence, the condition that the means of the distributions are bounded away from zero follows from the analogous condition on the variances. In Section [sec:existenceapprox], we consider the notion of *approximate envy-freeness*, which means that for each player, there is no allocation of another group for which the player’s utility is a certain (multiplicative) factor larger than the utility of the player for the allocation of her own group. The notion is defined formally below. We write $M\_p \succsim\_i^\alpha M\_q$ for *α* ∈ [0, 1] if and only if *u**i*(*M**p*) ≥ *α**u**i*(*M**q*). Player *i* considers an assignment *M*1, …, *M**g* of items to the *g* groups **α*-approximate envy-free* if $M\_{g(i)} \succsim\_i^\alpha M\_p$ for every group *p* ∈ {1, …, *g*}. We say that an assignment is **α*-approximate envy-free* if it is *α*-approximate envy-free for every player *i*. Finally, we give the definition of a truthful mechanism, which we will use in Section [sec:existenceapprox]. A *mechanism* is a function that takes as input the utility of player *i* for item *j* for all *i* ∈ *N* and *j* ∈ *M*, and outputs a (possibly random) assignment of items to the groups. A mechanism is said to be *truthful* if every player always obtains the highest possible (expected) utility by submitting her true utilities to the mechanism, regardless of the utilities that the remaining players submit. Asymptotic Existence and Non-Existence of Fair Divisions ======================================================== In this section, we study the existence and non-existence of fair divisions. First, we show that, when *m* is Ω(*n*log*n*), where Ω( ⋅ ) hides a sufficiently large constant, there exists an envy-free division with high probability (Theorem [thm:existence]). In particular, we prove that a welfare-maximizing allocation is likely to be envy-free. This gives rise to a simple algorithm that finds such a fair division with high probability. We also extend our existence result to the case where there are two groups but the groups need not have the same number of players; we show a similar result in this case, provided that each distribution D*j* satisfies an additional symmetry condition (Theorem [thm:existencesymmetric]). Moreover, on the non-existence front, we prove that when *m* is smaller than *n*, the probability that a fair division exists is at most 1/*g**n* − *m* (Theorem [thm:nonexistence]). This has as consequences that if the number of items is less than the total number of players by a superconstant factor, or if the number of items is less than the total number of players and the number of groups is large, then the probability that an envy-free division exists is low (Corollaries [cor:nonexistence1] and [cor:nonexistence2]). We begin with our main existence result. [thm:existence] Assume that [A1] holds. For any fixed *σ**m**i**n* > 0, there exists a constant *C* > 0 such that, for any sufficiently large *n*ʹ, if *m* > *C**n*log*n*, then there exists an envy-free assignment with high probability. In fact, we not only prove that an envy-free assignment exists but also give a simple greedy algorithm that finds one such assignment with high probability. The algorithm idea is simple; we greedily assign each item to the group that maximizes the total utility of the item with respect to the players in that group. This yields an allocation that maximizes social welfare. The allocation is therefore Pareto optimal, i.e., there exists no other allocation in which every player is weakly better off and at least one player is strictly better off. The pseudocode of the algorithm is shown below. [greedy] [1] let *M*1 = ⋯ = *M**g* = ∅. choose *k*\* from $\operatorname\*{arg\,max}\_{k = 1, \dots, g} \sum\_{p \in G\_k} u\_p(j)$ let *M**k*\* ← *M**k*\* ∪ {*j*} The analysis of the algorithm contains similarities to that of the corresponding result in the setting with one player per group. However, significantly more technical care will be required to handle our setting in which each group contains multiple players. This is reflected by our use of the Berry-Esseen theorem (Lemma [lem:berry]). Here we provide a proof sketch that contains all the high-level ideas but leaves out some tedious details, especially calculations; the full proof can be found in the appendix. [Proof sketch of Theorem [thm:existence]] We will first bound Pr[*M**g*ʹ ≻ *i**M**g*(*i*)] for each player *i* and each group *G**g*ʹ ≠ *G**g*(*i*); we then use the union bound at the end to conclude Theorem [thm:existence]. To bound Pr[*M**g*ʹ ≻ *i**M**g*(*i*)], we define a random variable *A**i*, *j* to be *u**i*(*j*) if item *j* is assigned to group *G**g*(*i*) and zero otherwise. Similarly, define *B**i*, *j**g*ʹ to be *u**i*(*j*) if the item is assigned to group *G**g*ʹ and zero otherwise. Intuitively, with respect to player *i*, *A**i*, *j* is the utility contribution of item *j* to the group *G**g*(*i*). On the other hand, *B**i*, *j**g*ʹ is the utility that is “lost” to group *G**g*ʹ. In other words, *M**g*ʹ ≻ *i**M**g*(*i*) if and only if *S**A* < *S**B*, where *S**A* = ∑*j* ∈ *M**A**i*, *j* and *S**B* = ∑*j* ∈ *M**B**i*, *j**g*ʹ. We will use the Chernoff bound to estimate the probability of this event. To do so, we first need to bound $\operatorname\*{\mathbb{E}}[A\_{i, j}]$ and $\operatorname\*{\mathbb{E}}[B\_{i, j}^{g'}]$. From symmetry between different groups, the probability that item *j* is assigned to each group is 1/*g*. Thus, we have $\operatorname\*{\mathbb{E}}[A\_{i, j}] = \frac{1}{g}\operatorname\*{\mathbb{E}}\left[u\_i(j) \mid \text{item } j \text{ is assigned to } G\_{g(i)}\right]$ and $\operatorname\*{\mathbb{E}}[B\_{i, j}^{g'}] = \frac{1}{g}\operatorname\*{\mathbb{E}}\left[u\_i(j) \mid \text{item } j \text{ is assigned to } G\_{g'}\right]$. It is now fairly easy to see that $\operatorname\*{\mathbb{E}}[B\_{i, j}^{g'}] \leq \mu\_j/g$, where *μ**j* is the mean of D*j*; the reason is that the expected value of *u**i*(*j*) when *j* is not assigned to *G**g*(*i*) is clearly at most *μ**j*. For convenience, we will assume in this proof sketch that $\operatorname\*{\mathbb{E}}[B\_{i, j}^{g'}]$ is roughly *μ**j*/*g*. Now, we will bound the expected value of *A**i*, *j*. For each *p* = 1, …, *g*, let *X**p* denote the sum of the utilities of item *j* with respect to all players in *G**p*. Due to symmetry among players within the same group, we have $$\begin{aligned} \operatorname\*{\mathbb{E}}[A\_{i, j}] &= \frac{1}{n'g} \operatorname\*{\mathbb{E}}\left[X\_{g(i)} \mid X\_{g(i)}=\max\{X\_1, \dots, X\_g\}\right] \\ &= \frac{1}{n'g} \operatorname\*{\mathbb{E}}[\max\{X\_1, \dots, X\_g\}].\end{aligned}$$ The latter equality comes from the symmetry between different groups. Now, we use the Berry-Esseen theorem (Lemma [lem:berry]), which tells us that each of *X*1, …, *X**g* is close to N(*μ**j**n*ʹ, Ω(*σ**m**i**n*2*n*ʹ)). With simple calculations, one can see that the expectation of the maximum of *g* identically independent random variables sampled from N(*μ**j**n*ʹ, Ω(*σ**m**i**n*2*n*ʹ)) is $\mu\_j n' + \Omega(\sigma\_{min}\sqrt{n'})$. Roughly speaking, we also have $$\operatorname\*{\mathbb{E}}\left[A\_{i, j}\right] = \frac{\mu\_j}{g} + \Omega\left(\frac{\sigma\_{min}}{g\sqrt{n'}}\right).$$ Having bounded the expectations of *A**i*, *j* and *B**i*, *j**g*ʹ, we are ready to apply the Chernoff bound. Let $\delta = \Theta\left(\frac{\sigma\_{min}}{\mu\_j \sqrt{n'}}\right)$ where Θ( ⋅ ) hides some sufficiently small constant. When *n*ʹ is sufficiently large, we can see that $(1 + \delta)\operatorname\*{\mathbb{E}}[B\_{i, j}^{g'}] < (1 - \delta)\operatorname\*{\mathbb{E}}[A\_{i, j}]$, which implies that $(1 + \delta)\operatorname\*{\mathbb{E}}[S\_B] < (1 - \delta)\operatorname\*{\mathbb{E}}[S\_A]$. Using the Chernoff bound (Lemma [lem:chernoff]) on *S**A* and *S**B*, we have $$\begin{aligned} \Pr[S\_A \leq (1 - \delta)\operatorname\*{\mathbb{E}}[S\_A]] &\leq \exp{\left(\frac{-\delta^2\operatorname\*{\mathbb{E}}[S\_A]}{2}\right)},\end{aligned}$$ and, $$\begin{aligned} \Pr[S\_B \geq (1 + \delta)\operatorname\*{\mathbb{E}}[S\_B]] &\leq \exp{\left(\frac{-\delta^2\operatorname\*{\mathbb{E}}[S\_B]}{3}\right)}.\end{aligned}$$ Thus, we have $$\begin{aligned} \Pr[S\_A < S\_B] &\leq \exp{\left(\frac{-\delta^2\operatorname\*{\mathbb{E}}[S\_A]}{2}\right)} + \exp{\left(\frac{-\delta^2\operatorname\*{\mathbb{E}}[S\_B]}{3}\right)} \\ &\leq 2\exp{\left(-\Omega\left(\frac{\sigma\_{min}^2 m }{n \mu\_j}\right)\right)} \\ (\text{Since } \mu\_j \leq 1) &\leq 2\exp{\left(-\Omega\left(\frac{\sigma\_{min}^2 m }{n}\right)\right)}.\end{aligned}$$ Recall that Pr[*M**g*ʹ ≻ *i**M**g*(*i*)] = Pr[*S**A* < *S**B*]. Using the union bound for all *i* and all *g*ʹ ≠ *g*(*i*), the probability that the assignment output by the algorithm is not envy-free is at most $$\begin{aligned} 2n(g - 1)\exp{\left(-\Omega\left(\frac{\sigma\_{min}^2 m }{n}\right)\right)},\end{aligned}$$ which is at most 1/*m* when *m* ≥ *C**n*log*n* for some sufficiently large *C*. This completes the proof sketch for the theorem. Unfortunately, the algorithm in Theorem [thm:existence] cannot be extended to give a proof for the case where the groups do not have the same number of players. However, in a more restricted setting where there are only two groups with potentially different numbers of players and an additional symmetry condition on D1, …, D*m* is enforced, a result similar to that in Theorem [thm:existence] can be shown, as stated in the theorem below. [thm:existencesymmetric] Assume that [A1] holds. Suppose that there are only two groups but not necessarily with the same number of players; let *n*1, *n*2 denote the numbers of players of the first and second group respectively (so *n* = *n*1 + *n*2). Assume also that D1, …, D*m* are symmetric (around 1/2)[3](#fn3), i.e., $$\Pr\_{X \sim \mathcal{D}\_j}\left[X \leq \frac{1}{2}-x\right] = \Pr\_{X \sim \mathcal{D}\_j}\left[X \geq \frac{1}{2}+x\right]$$ for all *x* ∈ [0, 1/2]. For any fixed *σ**m**i**n* > 0, there exists a constant *C* > 0 such that, for any sufficiently large *n*1 and *n*2, if *m* > *C**n*log*n*, then there exists an envy-free assignment with high probability. The algorithm is similar to that in Theorem [thm:existence]; the only difference is that, instead of assigning each item to the group with the highest *total* utility over its players, we assign the item to the group with the highest *average* utility, as seen in the pseudocode of Algorithm [greedy2]. [greedy2] [1] let *M*1 = *M*2 = ∅. choose *k*\* from $\operatorname\*{arg\,max}\_{k = 1, 2} \frac{\sum\_{p \in G\_k} u\_p(j)}{n\_k}$ let *M**k*\* ← *M**k*\* ∪ {*j*} The proof is essentially the same as that of Theorem [thm:existence] after the random variables defined are changed corresponding to the modification in the algorithm. For instance, *A**i*, *j* is now defined as $$\begin{aligned} A\_{i, j} = u\_i(j) \cdot \mathbf{1}\left[g(i) = \operatorname\*{arg\,max}\_{k = 1, 2} \frac{\sum\_{p \in G\_k} u\_p(j)}{n\_k}\right]\end{aligned}$$ where **1**[*E*] denotes an indicator variable for event *E*. Due to the similarities between the two proofs, we will not repeat the whole proof. Instead, we would like to point out that all the arguments from Theorem [thm:existence] work here save for only one additional fact that we need to prove: Let *X*1 and *X*2 denote ∑*p* ∈ *G*1*u**p*(*j*)/*n*1 and ∑*p* ∈ *G*2*u**p*(*j*)/*n*2 respectively. Then, $$\begin{aligned} \Pr[X\_1 \geq X\_2] = \frac{1}{2}.\end{aligned}$$ To show this, observe first that, since D*j* is symmetric over 1/2, the distributions of *X*1 and *X*2 are also symmetric over 1/2. Let *f*1 and *f*2 be the probability density functions of *X*1 and *X*2 respectively, we have $$\begin{aligned} \Pr[X\_1 \geq X\_2] &= \int\_0^1 \int\_0^x f\_1(x)f\_2(y) dy dx \\ &= \int\_0^1 \int\_0^x f\_1(1-x)f\_2(1-y) dy dx \\ &= \int\_0^1 \int\_x^1 f\_1(x)f\_2(y) dy dx \\ &= \Pr[X\_2 \geq X\_1].\end{aligned}$$ Hence, Pr[*X*1 ≥ *X*2] = Pr[*X*2 ≥ *X*1] = 1/2, as desired. Next, we state and prove an upper bound for the probability that an envy-free assignment exists when the number of players exceeds the number of items. Such an assignment obviously does not exist under this condition if every group contains only one player. In fact, the theorem holds even without the assumption that the variances of D1, …, D*m* are at least *σ**m**i**n*2 > 0. [thm:nonexistence] Assume that [A1] holds. If *m* < *n*, then there exists an envy-free assignment with probability at most 1/*g**n* − *m*. Suppose that *m* ≤ *n* − 1, and fix an assignment *M*1, …, *M**g*. We will bound the probability that this assignment is envy-free. Consider any player *i* in group *G**g*(*i*). The probability that the assignment is envy-free for this particular player is the probability that the total utility of the player for the bundle *M**g*(*i*) is no less than that for other bundles *M**j*. This can be written as follows: $$\begin{aligned} \Pr\_{u\_i(1) \in \mathcal{D}\_1, \dots, u\_i(m) \in \mathcal{D}\_m}\left[\sum\_{l \in M\_{g(i)}} u\_i(l) = \max\_{k = 1, \dots, g}{\sum\_{l \in M\_k} u\_i(l)}\right].\end{aligned}$$ For each *j* = 1, …, *g*, define *p**j* as $$\begin{aligned} p\_j = \Pr\_{x\_1 \in \mathcal{D}\_1, \dots, x\_m \in \mathcal{D}\_m}\left[\sum\_{l \in M\_j} x\_l = \max\_{k = 1, \dots, g}{\sum\_{l \in M\_k} x\_l}\right].\end{aligned}$$ Notice that the probability that the assignment is envy-free for player *i* is *p**g*(*i*). Since *u**i*(1), …, *u**i*(*m*) is chosen independently of *u**i*ʹ(1), …, *u**i*ʹ(*m*), for every *i*ʹ ≠ *i*, the probability that this assignment is envy-free for every player is simply the product of the probability that the assignment is envy-free for each player, i.e., $$\begin{aligned} \prod\_{i = 1}^{n} p\_{g(i)} &= \prod\_{j=1}^{g} p\_{j}^{n'}.\end{aligned}$$ Using the inequality of arithmetic and geometric means, we arrive at the following bound: $$\begin{aligned} \prod\_{j=1}^{g} p\_{j}^{n'} \leq \left(\frac{1}{g} \sum\_{j=1}^{g} p\_j\right)^{n'g}.\end{aligned}$$ Recall our assumption that the distributions D*j* are non-atomic. Hence we may assume that the events ∑*l* ∈ *M**j**x**l* = max*k* = 1, …, *g*∑*l* ∈ *M**k**x**l* are disjoint for different *j*. This implies that ∑*j* = 1*g**p**j* = 1. Thus, the probability that this fixed assignment is envy-free is at most $$\begin{aligned} \left(\frac{1}{g} \sum\_{j=1}^{g} p\_j\right)^{n'g} = \left(\frac{1}{g}\right)^{n'g} = \frac{1}{g^n}.\end{aligned}$$ Finally, since each assignment is envy-free with probability at most 1/*g**n* and there are *g**m* possible assignments, by union bound the probability that there exists an envy-free assignment is at most 1/*g**n* − *m*. This completes the proof of the theorem. The following corollaries can be immediately derived from Theorem [thm:nonexistence]. They say that an envy-free allocation is unlikely to exist when the number of items is less than the number of players by a superconstant factor, or when the number of items is less than the number of players and the number of groups is large. [cor:nonexistence1] Assume that [A1] holds. When *m* = *n* − *ω*(1), the probability that there exists an envy-free assignment converges to zero as *n* → ∞. [cor:nonexistence2] Assume that [A1] holds. When *m* < *n*, the probability that there exists an envy-free assignment converges to zero as *g* → ∞. Truthful Mechanism for Approximate Envy-Freeness ================================================ While the algorithms in Section [sec:asymptotic] translate to mechanisms that yield with high probability envy-free divisions that are compatible with social welfare assuming that players are truth-telling, the resulting mechanisms suffer from the setback that they are easily manipulable. Indeed, since they aim to maximize (total or average) welfare, strategic players will declare their values for the items to be high, regardless of what the actual values are. This presents a significant disadvantage: Implementing these mechanisms in most practical situations, where we do not know the true valuations of the players and have no reason to assume that they will reveal their valuations in a honest manner, can lead to potentially undesirable outcomes. In this section, we work with the weaker notion of approximate envy-freeness and show that a simple truthful mechanism yields an approximately envy-free assignment with high probability. In particular, we prove that the random assignment mechanism, which assigns each item to a player chosen uniformly and independently at random, is likely to produce such an assignment. In the setting where each group consists of only one player, Amanatidis et al. showed that when the distribution is as above and the number of items *m* is large enough compared to *n*, the random assignment mechanism yields an approximately envy-free assignment with high probability. Our statement is an analogous statement for the case where each group can have multiple players. [thm:existenceapprox] Assume that [A2] holds. For every *α* ∈ [0, 1), there exists a constant *C* depending only on *α* and *μ**m**i**n* such that, if *m* > *C**g*log*n*, then the random assignment, where each item *j* ∈ *M* is assigned independently and uniformly at random to a group, is *α*-approximate envy-free with high probability. Before we prove Theorem [thm:existenceapprox], we note some ways in which our result is stronger than that of Amanatidis et al.’s apart from the fact that multiple players per group are allowed in our setting. First, Amanatidis et al. required D*i*, *j* to be the same for all *j*, which we do not assume here. Next, they only showed that the random assignment is likely to be *approximately proportional*, a weaker notion that is implied by approximate envy-freeness. Moreover, in their result, *m* needs to be as high as Ω(*n*2), whereas in our case it suffices for *m* to be in the range Ω(*g*log*n*). Finally, we also derive a stronger probabilistic bound; they showed a “success probability” of the algorithm of 1 − *O*(*n*2/*m*), while our success probability is 1 − exp( − Ω(*m*/*g*)). [Proof of Theorem [thm:existenceapprox]] For each player *i* ∈ *N*, each item *j* ∈ *M* and each *p* ∈ {1, …, *g*}, let *A**i*, *j**p* be a random variable representing the contribution of item *j*’s utility with respect to player *i* to group *G**p*, i.e., *A**i*, *j**p* is *u**i*(*j*) if item *j* is assigned to group *G**p* and is zero otherwise. Define *S**i**p* :  = ∑*j* ∈ *M**A**i*, *j**p*. Observe that each player *i* considers the assignment to be *α*-approximate envy-free if and only if *S**i**g*(*i*) ≥ *α**S**i**p* for every *p*. Let $\delta = \frac{1 - \alpha}{1 + \alpha}$; from this choice of *δ* and since $\operatorname\*{\mathbb{E}}[S\_i^p]$ is equal for every *p*, we can conclude that *S**i**g*(*i*) ≥ *α**S**i**p* is implied by $S\_i^{g(i)} \geq (1 - \delta) \operatorname\*{\mathbb{E}}[S\_i^{g(i)}]$ and $S\_i^p \leq (1 + \delta)\operatorname\*{\mathbb{E}}[S\_i^p]$. In other words, we can bound the probability that the random assignment is not *α*-approximate envy-free as follows. $$\begin{aligned} &\Pr[\exists i \in N, p \in \{1, \dots, g\}: S\_i^{g(i)} < \alpha S\_i^p] \\ &\phantom{{}=6}\leq \sum\_{i \in N, p \in \{1, \dots, g\}} \Pr[S\_i^{g(i)} < \alpha S\_i^p] \\ &\phantom{{}=6}\leq \sum\_{i \in N, p \in \{1, \dots, g\}} \Pr[S\_i^{g(i)} < (1 - \delta) \operatorname\*{\mathbb{E}}[S\_i^{g(i)}] \vee S\_i^p > (1 + \delta)\operatorname\*{\mathbb{E}}[S\_i^{g(i)}]] \\ &\phantom{{}=6}\leq \sum\_{i \in N, p \in \{1, \dots, g\}} (\Pr[S\_i^{g(i)} < (1 - \delta) \operatorname\*{\mathbb{E}}[S\_i^{g(i)}]] + \Pr[S\_i^p > (1 + \delta)\operatorname\*{\mathbb{E}}[S\_i^{p}]]). \end{aligned}$$ Since *S**i**p* = ∑*j* ∈ *M**A**i*, *j**p* and *A**i*, *j**p*’s are independent and lie in [0, 1], we can use Chernoff bound (Lemma [lem:chernoff]) to upper bound the last terms. Hence, the probability that the alloaction is not *α*-approximate envy-free is at most $$\begin{aligned} \sum\_{i \in N, p \in \{1, \cdots, g\}} \exp\left(\frac{-\delta^2 \operatorname\*{\mathbb{E}}[S\_i^{g(i)}]]}{2}\right) + \exp\left(\frac{-\delta^2 \operatorname\*{\mathbb{E}}[S\_i^p]]}{3}\right).\end{aligned}$$ Finally, observe that $$\operatorname\*{\mathbb{E}}[S\_i^p] = \sum\_{j \in M} \operatorname\*{\mathbb{E}}[A\_{i, j}^p] = \sum\_{j \in M} \frac{1}{g} \operatorname\*{\mathbb{E}}[u\_i(j)] \geq \frac{m \mu\_{min}}{g}.$$ This means that the desired probability is bounded above by $$\begin{aligned} &\sum\_{i \in N, p \in \{1, \dots, g\}} \exp\left(\frac{-\delta^2 m \mu\_{min}}{2g}\right) + \exp\left(\frac{-\delta^2 m \mu\_{min}}{3g}\right) \\ &\phantom{{}=6}\leq 2ng \exp\left(\frac{-\delta^2 m \mu\_{min}}{3g}\right) \\ &\phantom{{}=6}\leq \exp\left(-\frac{\delta^2 m \mu\_{min}}{3g} + 3\log n\right).\end{aligned}$$ When $m > \left(\frac{10}{\mu\_{min}\delta^2}\right)g \log n$, the above expression is at most exp( − Ω(*m*/*g*)), concluding our proof. Concluding Remarks ================== In this paper, we study a generalized setting for fair division that allows interested parties to contain multiple players, possibly with highly differing preferences. This setting allows us to model several real-world cases of fair division that cannot be done under the traditional setting. We establish almost-tight bounds on the number of players and items under which a fair division is likely or unlikely to exist. Furthermore, we consider the issue of truthfulness and show that a simple truthful mechanism produces an assignment that is approximately envy-free with high probability. While the assumptions of additivity and independence are somewhat restrictive and might not apply fully to settings in the real world, our results give indications as to what we can expect if the assumptions are relaxed, such as if a certain degree of dependence is introduced. An interesting future direction is to generalize the results to settings with more general valuations. In particular, if the utility functions are low-degree polynomials, then one could try applying the invariance principle, which is a generalization of the Berry-Esseen theorem that we use. We end the paper with some questions that remain after this work. A natural question is whether we can generalize our existence and non-existence results (Theorems [thm:existence] and [thm:nonexistence]) to the setting where the groups do not contain the same number of players. This non-symmetry between the groups seems to complicate the approaches that we use in this paper. For example, it breaks the greedy algorithm used in Theorem [thm:existence]. Nevertheless, it might still be possible to prove existence of an envy-free division using other algorithms or without relying on a specific algorithm. Another direction for future research is to invent procedures for computing envy-free divisions, whenever such divisions exist, for the general setting where each group contains multiple players and players have arbitrary (not necessarily additive) preferences. Even procedures that only depend on rankings of single items do not appear to extend easily to this setting. Indeed, if a group contains two players whose preferences are opposite of each other, it is not immediately clear what we should assign to the group. It would be useful to have a procedure that produces a desirable outcome, even for a small number of players in each group. Lastly, one could explore the limitations that arise when we impose the condition of truthfulness, an important property when we implement the mechanisms in practice. For instance, truthful allocation mechanisms have recently been characterized in the case of two players, and it has been shown that there is a separation between truthful and non-truthful mechanisms for approximating maximin shares. In our setting, a negative result on the existence of a truthful mechanism that yields an envy-free division with high probability would provide such a separation as well, while a positive result in this direction would have even more consequences for practical applications. Acknowledgments =============== The authors thank the anonymous reviewers for their helpful feedback. Warut Suksompong acknowledges support from a Stanford Graduate Fellowship. Appendix ======== Proof of Theorem [thm:existence] -------------------------------- First we list the following well-known fact, which allows us to easily determine the mean of a random variable from its cumulative density function. [prop:meanbymass] Let *X* be a non-negative random variable. Then $$\begin{aligned} \operatorname\*{\mathbb{E}}[X] = \int\_0^\infty \Pr[X \geq x] dx.\end{aligned}$$ To analyze the algorithm, consider any player *i* and any group *g*ʹ ≠ *g*(*i*). We will first bound the probability that *M**g*ʹ ≻ *i**M**g*(*i*). To do this, for each item *j* ∈ *M*, define *A**i*, *j* as $$\begin{aligned} A\_{i, j} = u\_i(j)\cdot\textbf{1}\left[g(i) = \operatorname\*{arg\,max}\_{k = 1, \dots, g} \sum\_{p \in G\_k} u\_p(j)\right],\end{aligned}$$ where $1\left[g(i) = \operatorname\*{arg\,max}\_{k = 1, \dots, g} \sum\_{p \in G\_k} u\_p(j)\right]$ is an indicator random variable that indicates whether $g(i) = \operatorname\*{arg\,max}\_{k = 1, \dots, g} \sum\_{p \in G\_k} u\_p(j)$. Similarly, define *B**i*, *j**g*ʹ as $$\begin{aligned} B\_{i, j}^{g'} = u\_i(j)\cdot\textbf{1}\left[g' = \operatorname\*{arg\,max}\_{k = 1, \dots, g} \sum\_{p \in G\_k} u\_p(j)\right].\end{aligned}$$ Moreover, suppose that D*j* has mean *μ**j* and variance *σ**j*2. Notice that, with respect to player *i*, *A**i*, *j* is the utility that item *j* contributes to *g*(*i*) whereas *B**i*, *j**g*ʹ is the utility that item *j* contributes to *g*ʹ. In other words, *M**g*ʹ ≻ *i**M**g*(*i*) if and only if ∑*j* ∈ *M**A**i*, *j* < ∑*j* ∈ *M**B**i*, *j**g*ʹ. To bound Pr[*M**g*ʹ ≻ *i**M**g*(*i*)], we will first bound $\operatorname\*{\mathbb{E}}\left[A\_{i, j}\right]$ and $\operatorname\*{\mathbb{E}}\left[B\_{i, j}^{g'}\right]$. Then, we will use the Chernoff bound to bound Pr[∑*j* ∈ *M**A**i*, *j* < ∑*j* ∈ *M**B**i*, *j**g*ʹ]. Observe that, due to symmetry, we can conclude that $$\begin{aligned} \operatorname\*{\mathbb{E}}\left[u\_i(j)\cdot\textbf{1}\left[g' = \operatorname\*{arg\,max}\_{k = 1, \dots, g} \sum\_{p \in G\_k} u\_p(j)\right]\right] = \operatorname\*{\mathbb{E}}\left[u\_i(j)\cdot\textbf{1}\left[g'' = \operatorname\*{arg\,max}\_{k = 1, \dots, g} \sum\_{p \in G\_k} u\_p(j)\right]\right]\end{aligned}$$ for any *g*ʺ ≠ *g*(*i*). Thus, we can now rearrange *B**i*, *j**g*ʹ as follows: $$\begin{aligned} \operatorname\*{\mathbb{E}}\left[B\_{i, j}^{g'}\right] &= \frac{1}{g - 1} \left(\sum\_{g'' \ne g(i)} \operatorname\*{\mathbb{E}}\left[u\_i(j)\cdot\textbf{1}\left[g'' = \operatorname\*{arg\,max}\_{k = 1, \dots, g} \sum\_{p \in G\_k} u\_p(j)\right]\right]\right) \\ &= \frac{1}{g - 1} \left(\operatorname\*{\mathbb{E}}\left[u\_i(j) \sum\_{g'' \ne g(i)}\textbf{1}\left[g'' = \operatorname\*{arg\,max}\_{k = 1, \dots, g} \sum\_{p \in G\_k} u\_p(j)\right]\right]\right) \\ &= \frac{1}{g - 1} \left(\operatorname\*{\mathbb{E}}\left[u\_i(j) \left(1 - \textbf{1}\left[g(i) = \operatorname\*{arg\,max}\_{k = 1, \dots, g} \sum\_{p \in G\_k} u\_p(j)\right]\right)\right]\right).\end{aligned}$$ Hence, we have $$\begin{aligned} \label{eq:AtoB} \operatorname\*{\mathbb{E}}\left[B\_{i, j}^{g'}\right] = \frac{1}{g - 1} \left(\mu\_j - \operatorname\*{\mathbb{E}}\left[A\_{i, j}\right]\right).\end{aligned}$$ Now, consider *A**i*, *j*. Again, due to symmetry, we have $$\begin{aligned} \operatorname\*{\mathbb{E}}\left[A\_{i, j}\right] &= \frac{1}{n'} \left(\sum\_{i' \in G\_{g(i)}} \operatorname\*{\mathbb{E}}\left[u\_{i'}(j)\cdot\textbf{1}\left[g(i) = \operatorname\*{arg\,max}\_{k = 1, \dots, g} \sum\_{p \in G\_k} u\_p(j)\right]\right]\right) \\ &= \frac{1}{n'} \operatorname\*{\mathbb{E}}\left[\left(\sum\_{i' \in G\_{g(i)}} u\_{i'}(j)\right)\cdot\textbf{1}\left[g(i) = \operatorname\*{arg\,max}\_{k = 1, \dots, g} \sum\_{p \in G\_k} u\_p(j)\right]\right].\end{aligned}$$ Let S denote the distribution of the sum of *n*ʹ independent random variables, each drawn from D*j*. It is obvious that ∑*p* ∈ *G**k**u**p*(*j*) is drawn from S independently for each *k*. In other words, $\operatorname\*{\mathbb{E}}\left[A\_{i, j}\right]$ can be written as $$\begin{aligned} \operatorname\*{\mathbb{E}}\left[A\_{i, j}\right] &= \frac{1}{n'} \operatorname\*{\mathbb{E}}\left[X\_1 \cdot\textbf{1}\left[X\_1 = \max\{X\_1, \dots, X\_g\}\right]\right].\end{aligned}$$ The expectation on the right is taken over *X*1, …, *X**g* sampled independently from S. From symmetry among *X*1, …, *X**g*, we can further derive the following: $$\begin{aligned} \operatorname\*{\mathbb{E}}\left[A\_{i, j}\right] &= \frac{1}{n'} \Pr\left[X\_1 = \max\{X\_1, \dots, X\_g\}\right] \operatorname\*{\mathbb{E}}\left[X\_1 \mid X\_1 = \max\{X\_1, \dots, X\_g\}\right] \\ &= \frac{1}{n'g} \operatorname\*{\mathbb{E}}\left[X\_1 \mid X\_1 = \max\{X\_1, \dots, X\_g\}\right] \\ &= \frac{1}{n'g} \operatorname\*{\mathbb{E}}\left[\max\{X\_1, \dots, X\_g\}\right].\end{aligned}$$ Consider the distribution of max{*X*1, …, *X**g*}. Let us call this distribution Y. Notice that $\operatorname\*{\mathbb{E}}\left[\max\{X\_1, \dots, X\_g\}\right]$ is just the mean of Y, i.e., $$\begin{aligned} \label{eq:meanA} \operatorname\*{\mathbb{E}}\left[A\_{i, j}\right] = \frac{1}{n' g} \operatorname\*{\mathbb{E}}\_{Y \sim \mathcal{Y}}[Y].\end{aligned}$$ To bound this, let *F**S* and *F**Y* be the cumulative density functions of S and Y respectively. Notice that *F**Y*(*x*) = *F**S*(*x*)*g* for all *x*. Applying Proposition [prop:meanbymass] to S and Y yields the following: $$\begin{aligned} \operatorname\*{\mathbb{E}}\_{S \sim \mathcal{S}}[S] = \int\_0^\infty (1 - F\_S(x))dx,\end{aligned}$$ and, $$\begin{aligned} \operatorname\*{\mathbb{E}}\_{Y \sim \mathcal{Y}}[Y] = \int\_0^\infty (1 - F\_S(x)^g)dx.\end{aligned}$$ By taking the difference of the two, we have $$\begin{aligned} \operatorname\*{\mathbb{E}}\_{Y \sim \mathcal{Y}}[Y] = \operatorname\*{\mathbb{E}}\_{S \sim \mathcal{S}}[S] + \int\_0^\infty F\_S(x)\left(1 - F\_S(x)^{g - 1}\right)dx.\end{aligned}$$ To bound the right hand side, recall that S is just the distribution of the sum of *n*ʹ independent random variables sampled according to D*j*. Note that the third moment of D*j* is at most 1 because it is bounded in [0, 1]. Thus, by applying the Berry-Esseen Theorem (Lemma 2), we have $$\begin{aligned} \left|F\_S(x) - \Pr\_{y \sim \mathcal{N}(\mu\_j n', \sigma\_j^2 n')}[y \leq x]\right| \leq \frac{C\_{BE}}{\sigma\_j^3 \sqrt{n'}}.\end{aligned}$$ for all *x* ∈ R. When *n*ʹ is sufficiently large, the right hand side is at most 0.1. Moreover, it is easy to check that Pr*y* ∼ N(*μ**j**n*ʹ, *σ**j*2*n*ʹ)[*y* ≤ *x*] ∈ [0.5, 0.85] for every $x \in \left[\mu\_j n', \mu\_j n' + \sigma\_j \sqrt{n'}\right]$. Hence, *F**S*(*x*) ∈ [0.4, 0.95] for every $x \in \left[\mu\_j n', \mu\_j n' + \sigma\_j \sqrt{n'}\right]$. Now, we can bound $\operatorname\*{\mathbb{E}}\_{Y \sim \mathcal{Y}}[Y]$ as follows: $$\begin{aligned} \operatorname\*{\mathbb{E}}\_{Y \sim \mathcal{Y}}[Y] &= \operatorname\*{\mathbb{E}}\_{S \sim \mathcal{S}}[S] + \int\_0^\infty F\_S(x)\left(1 - F\_S(x)^{g - 1}\right)dx \\ &= \mu\_j n' + \int\_0^\infty F\_S(x)\left(1 - F\_S(x)^{g - 1}\right)dx \\ (\text{Since } F\_S(x)\left(1 - F\_S(x)^{g - 1}\right) \geq 0) &\geq \mu\_j n' + \int\_{\mu\_j n'}^{\mu\_j n' + \sigma\_j \sqrt{n'}} F\_S(x)\left(1 - F\_S(x)^{g - 1}\right)dx \\ &\geq \mu\_j n' + \int\_{\mu\_j n'}^{\mu\_j n' + \sigma\_j \sqrt{n'}} (0.4)(0.05)dx \\ &= \mu\_j n' + \sigma\_j \sqrt{n'}/50 \\ (\text{Since } \sigma\_j \geq \sigma\_{min}) &\geq \mu\_j n' + \sigma\_{min} \sqrt{n'}/50.\end{aligned}$$ Plugging the above inequality into equation ([eq:meanA]), we can conclude that $$\begin{aligned} \operatorname\*{\mathbb{E}}\left[A\_{i, j}\right] = \frac{1}{n'g} \operatorname\*{\mathbb{E}}\_{Y \in \mathcal{Y}}[Y] \geq \frac{\mu\_j}{g} + \frac{\sigma\_{min}}{50 g \sqrt{n'}}.\end{aligned}$$ From this and equation ([eq:AtoB]), we have $$\begin{aligned} \operatorname\*{\mathbb{E}}\left[B\_{i, j}^{g'}\right] = \frac{1}{g - 1} \left(\mu\_j - \operatorname\*{\mathbb{E}}\left[A\_{i, j}\right]\right) \leq \frac{1}{g - 1} \left(\mu\_j - \frac{\mu\_j}{g}\right) = \frac{\mu\_j}{g}.\end{aligned}$$ Now, define *C**i*, *j**g*ʹ as $C\_{i, j}^{g'} = B\_{i, j}^{g'} + \left(\mu\_j/g - \operatorname\*{\mathbb{E}}\left[B\_{i, j}^{g'}\right]\right)$. Notice $\operatorname\*{\mathbb{E}}\left[C\_{i, j}^{g'}\right] = \mu\_j/g$. As stated earlier, *M**g*ʹ ≻ *i**M**g*(*i*) if and only if ∑*j* ∈ *M**A**i*, *j* < ∑*j* ∈ *M**B**i*, *j**g*ʹ. Let *S**A* = ∑*j* ∈ *M**A**i*, *j*, *S**B* = ∑*j* ∈ *M**B**i*, *j**g*ʹ, *S**C* = ∑*j* ∈ *M**C**i*, *j**g*ʹ and let $\delta = \frac{\sigma\_{min}}{200\mu\_j \sqrt{n'}}$. Notice that, since we assume that the variance of D*j* is positive, *μ**j* is also non-zero, which means that *δ* is well-defined. Using Chernoff bound (Lemma 1) on *S**A* and *S**C*, we have $$\begin{aligned} \Pr[S\_A \leq (1 - \delta)\operatorname\*{\mathbb{E}}[S\_A]] &\leq \exp{\left(\frac{-\delta^2\operatorname\*{\mathbb{E}}[S\_A]}{2}\right)},\end{aligned}$$ and, $$\begin{aligned} \Pr[S\_C \geq (1 + \delta)\operatorname\*{\mathbb{E}}[S\_C]] &\leq \exp{\left(\frac{-\delta^2\operatorname\*{\mathbb{E}}[S\_C]}{3}\right)}.\end{aligned}$$ Moreover, when *n*ʹ is large enough, we have $(1 - \delta)\operatorname\*{\mathbb{E}}[S\_A] \geq (1 + \delta)\operatorname\*{\mathbb{E}}[S\_C]$. Thus, we have $$\begin{aligned} \Pr[S\_A < S\_C] &\leq \exp{\left(\frac{-\delta^2\operatorname\*{\mathbb{E}}[S\_A]}{2}\right)} + \exp{\left(\frac{-\delta^2\operatorname\*{\mathbb{E}}[S\_C]}{3}\right)} \\ &\leq \exp{\left(\frac{-\delta^2 m \mu\_j}{2g}\right)} + \exp{\left(\frac{-\delta^2m \mu\_j}{3g}\right)} \\ &\leq 2\exp{\left(\frac{-\sigma\_{min}^2 m }{120000g n' \mu\_j}\right
arxiv_0000100
masses of magnetars. 140mm ccccccc *q* & $Bp[\rm{G}]$ & $Bc[\rm{G}]$ & *M*[*M*⊙] & $P[\rm{ms}]$ & $R[\rm{km}]$ & h 0.90&6.73E+16&2.63E+17&1.99E+01&1.25E+00&13.1E+00&0.367E+00 0.90&7.32E+16&2.80E+17&3.01E+01&1.06E+00&14.9E+00&0.326E+00 | | | | --- | --- | | | | | | | | | | | --- | --- | | | | | | | Conclusion ========== In this paper, we have investigated equilibrium sequences of magnetized rotating stars with four kinds of realistic equations of state (EOSs) of SLy et al.), FPS, Shen, and LS. We have employed the Tomimura-Eriguchi scheme to construct the equilibrium configurations. At first, we have obtained the solution in the regime of Newtonian gravity. Basic properties of the magnetized Newtonian stars are summarized as follows: (1) For the non-rotating sequences, there exist nearly toroidal configurations, *q**m**i**n* ∼ 0, irrespective of the EOSs. The magnetic energy stored in the stars increases with the degree of deformation being larger. (2) For the rotating sequences, we have categorized the sequences with four kinds of EOSs as rotation-dominated (RD) type, magnetic-dominated (MD) one, and nearly toroidal one. As a result, the sequences with softer EOSs of SLy and FPS are found to belong to RD or MD type and the sequences with stiffer EOSs of Shen and LS can belong also to the nearly toroidal one. (3) We have also focused on the structure of equilibrium configurations. Reflecting the stiffness of EOSs, the density distributions of stars with SLy or FPS EOSs concentrate more at the center than those of the stars with Shen or LS EOSs. The toroidal fields for SLy and FPS EOSs are found to distribute in relatively wider regions in the vicinity of the equatorial plane than for Shen and LS EOSs. The poloidal magnetic fields are also affected by the toroidal fields because poloidal fields are highly distorted where the toroidal fields are strong. Regardless of the difference of the EOSs, a global configuration of magnetic field line is found to be universal, namely the tori of twisted field lines around the symmetry axis inside the star and the untwisted poloidal field lines, which penetrate the stellar surface to continue to the vacuum. Then, adding the GR correction to the gravity, we have performed the quantitative investigation of the strong magnetized equilibrium stars. As a result, we find that the difference due to the EOSs becomes small because all the employed EOSs become sufficiently stiff for the large maximum density, typically greater than $10^{15}\rm{g}~\rm{cm}^{-3}$. We have investigated the relation between the baryon mass, the magnetic field at pole, the rotational period, the equatorial radius, and the maximum magnetic field as a function of the maximum density. The typical magnetic fields at pole are about $10^{16} \rm{G}$, the periods are about several milliseconds, the radii are about ten kilometers and the maximum magnetic fields are about $10^{17} \rm{G}$. The maximum mass is found to be 3.0*M*⊙ for SLy EOS, 2.6*M*⊙ for FPS EOS, 3.5*M*⊙ for Shen EOS and 2.7*M*⊙ for LS EOS for *q* = 0.7 configurations, respectively. These values are about twenty percents increasing for that of the spherical stars. Finally, we have computed equilibrium sequences at finite temperature for the Shen and LS EOSs aiming to construct the equilibrium configurations of the proto-magnetars. As a result, it is found that the features appearing in the cold magnetized equilibrium configurations above are still valid despite of the incursion of the finite temperature effect on the EOSs. Since the masses of the proto-magnetars are highly dependent on the EOSs, we have speculated that one may obtain information about the EOSs from the observation of the masses of magnetars. It is true that our treatment for the general relativistic effect is nothing but a crude approximation. Thus we consider this study as a prelude to a fully general relativistic study, however hoping for the moment that our results could serve as the initial condition for the hydrodynamic evolutionary studies of newly-born magnetars including the microphysical EOSs. Acknowledgments =============== We express thanks to S. Yoshida for fruitful discussions and to H. Ono, H. Suzuki, and K. Sumiyoshi for providing us the tabulated table for Lattimer-Swesty EOS. Kiuchi thanks to K. i. Maeda and Kotake to K. Sato for continuing encouragements. Kotake also thanks to S. Yamada and E. Müller for informative discussions. This work was supported in part by the Japan Society for Promotion of Science(JSPS) Research Fellowships (Kiuchi), Grants-in-Aid for the Scientific Research from the Ministry of Education, Science and Culture of Japan (No.1840044 and S19104006). 99 Akmal, A., Pandharipande, V. R., & Ravenhall, D. G. 1998, PRC, 58, 1804 Baym, G., & Pethick, C. 1979, Ann. Rev. Astron. Astrophys., 17, 415 Bocquet, M., Bonazzola, S., Gourgoulhon, E., & Novak, J. 1995, A&A, 301, 757 Bonazzola, S., & Gourgoulhon, E. 1996, A&A 312, 675 Braithwaite, J. & Spruit, H. C. 2004, Nature, 431, 819 Braithwaite, J. & Spruit, H. C. 2006, A&A, 450, 1097 Chandrasekhar, S., 1956, ApJ 124, 232 Chandrasekhar, S., & Fermi, E. 1953, ApJ 118, 116 Cowling, T. G. 1965, in Stellar Structure, ed. L. H. Allen & D. B. McLaughlin (Chicago: Univ. Chicago Press), 425 Douchin, F., & Haensel, P. 2001, A&A, 380, 151 Duncan, R. C. & Thompson, C., 1992, ApJ, 392, L9 Ferrario, L. & Wickramasinghe, D., 2007 Mon. Not. Roy. Astron. Soc.  375, 1009 Ferraro, V. C. A. 1937, MNRAS 97, 458 Ferraro, V. C. A. 1954, ApJ 119, 407 Friedman, B., & Pandharipande, V. R. 1981, Nuclear Physics A, 361, 502 Glendenning, N. K. 2001, Physics. Rep., 342, 393 Hachisu, I. 1986, ApJ 61, 479 Harding, A. K., & Lai, D. 2006, Reports of Progress in Physics, 69, 2631 Hurley, K., 1999, arXiv:astro-ph/9912061. Ioka, K. 2001, MNRAS 327, 639 Ioka, K. & Sasaki, M. 2003, PRD 67, 124026 Ioka, K. & Sasaki, M. 2004, ApJ 600, 296 Kaspi, V. M. 2004, Young Neutron Stars and Their Environments, 218, 231 Konno, K., Obata, T.& Kojima, Y. 1999, A&A, 352, 211 Kotake, K., Sawai, H., Yamada, S., & Sato, K. 2004, ApJ, 608, 391 Kotake, K., Yamada, S., Sato, K., Sumiyoshi, K., Ono, H., & Suzuki, H. 2004 PRD 69, 124004 Kouveliotou, C., et al. 1998, Nature, 393, 235 Kotake, K., Sato, K., & Takahashi, K. 2006, Reports of Progress in Physics, 69, 971 Lovelace, R. V. E., Mehanian, C. M.,  & Sulkanen, M. E. 1986, ApJS,  62, 1 Lattimer, J. M., & Douglas Swesty, F. 1991, Nuclear Physics A, 535, 331 Lattimer, J. M. & Prakash, M. 2006 astro-ph/0612440. Livne, E., Dessart, L., Burrows, A., & Meakin, C. A. 2007, ApJS, 170, 187 Marek, A., Dimmelmeier, H., Janka, H., Mueller, E., & Buras, R. 2006, A&A, 445, 273 Markey, P., & Taylar. R. J. 1973 MNRAS, 163, 77 Markey, P., & Taylar. R. J. 1974 MNRAS, 168, 505 Mereghetti, S. arXiv:astro-ph/9911252. Mestel, L. 1961, MNRAS, 122, 473 Miketinac, M. J. 1975, Ap&SS, 35, 349 Moiseenko, S. G., Bisnovatyi-Kogan, G. S., & Ardeljan, N. V. 2006, MNRAS, 370, 501 Monaghan, J. J. 1965, MNRAS, 131, 105 Monaghan, J. J. 1966, MNRAS, 134, 275 Morrison, I. A., Baumgarte, T. W., & Shapiro, S. L. 2004, ApJ, 610, 941 Nozawa, T., Stergioulas, N., Gourgoulhon, E., & Eriguchi, Y. 1998, A&A Sup. Ser., 132, 431 Obergaulinger, M., Aloy, M. A., Müller, E. 2006, A&A, 450, 1107 Oppenheimer, J. R., & Volkoff, G. 1939, Phys. Rev., 55, 374 Ostriker, J. P. & Hartwick, F. D. A. 1968, ApJ, 153, 797 Ostriker, J. P. & Mark, J. W-K. 1968, ApJ, 151, 1075 Pandharipande, V. R., & Ravenhall, D. G. 1989, NATO ASIB Proc. 205: Nuclear Matter and Heavy Ion Collisions, 103 Prakash, M., Bombaci, I., Prakash, M., Ellis, P. J., Lattimer, J. M., and Knorren, R. 1997, Phys. Rept.  280, 1 Prendergast, K. H. 1956, ApJ, 123, 498 Rampp, M., & Janka, H.-T. 2002, A&A, 396, 361 Roberts, P. H. 1955, ApJ, 122, 508 Roxburgh, I. W. 1966, MNRAS, 132, 347 Sawai, H., Kotake, K., & Yamada, S. 2005 ApJ 631, 446 Shapiro, S. L., & Teukolsky, S. A. 1983, Research supported by the National Science Foundation. New York, Wiley-Interscience, 1983, 663 p., Shibata, M., Taniguchi, K., & Uryu, K. 2005 PRD 71, 084021 Shen, H., Toki, H., Oyamatsu, K., Sumiyoshi, K. 1998, Nuclear Physics, A637, 43, 109, 301 Shen, H., Toki, H., Oyamatsu, K., & Sumiyoshi, K. 1998, Progress of Theoretical Physics, 100, 1013 Shibata, M., Liu, Y. T., Shapiro, S. L., & Stephens, B. C. 2006 PRD 74, 104026 Sumiyoshi, K., Yamada, S., Suzuki, H., Shen, H., Chiba, S., & Toki, H. 2005, ApJ, 629, 922 Takiwaki, T., Kotake, K., Nagataki, S., & Sato, K. 2004, ApJ, 616, 1086 Thompson, C. & Duncan, R. C. 1993, ApJ, 408, 194 Thompson, C. & Duncan, R. C. 1995, Mon. Not. Roy. Astron. Soc.  275, 255 Thompson, C. & Duncan, R. C. 1996, ApJ 473, 322 Tomiumra, Y. & Eriguchi, Y., 2005 MNRAS, 359, 1117 Watts, A. 2006, 36th COSPAR Scientific Assembly, 36, 168 Wiringa, R. B., Fiks, V., & Fabrocini, A. 1988, PRC, 38, 1010 Woltjer, L. 1960, ApJ, 131, 227 Woods, P. M. & Thompson, C. 2004, arXiv:astro-ph/0406133. Wright, G. A. E. 1973, MNRAS, 162, 339 Yamada, S., & Sawai, H. 2004, ApJ, 608, 907 Yoshida, S. & Eriguchi, Y. 2006, ApJS, 164, 156 Yoshida, S., Yoshida, S., & Eriguchi, Y. 2006, ApJ, 651, 462 Code Test ========= To check our code ability, we calculate the same sequences shown in with polytropic equations of state. We actually obtain two kinds of sequences, non-rotating sequence and rotating-sequence. In Table [non-rot], some physical quantities of the non-rotating sequence with the polytropic index *N* = 3.0, *k* = 0.1 and *a* = 15 (see equation ([eq26]), ([eq27])) are shown. For the axis ratio *q*, the upper row corresponds to the result by and the down one do to our result. Table [rig-rot] is a result of the rotating sequence with polytropic index *N* = 0.5, *a* = 20, and *μ̂* = 0.05. In all sequences, the relative errors for the each quantities with ’s result are less than one percent and VC is 10− 4 ∼ 10− 5. Therefore, we confirm that our code works well. 140mm | q | *H*/∣*W*∣ | *U*/∣*W*∣ | ∣*Ŵ*∣ | *μ̂* | *Ĉ* | *β̂* | *M̂* | VC | | --- | --- | --- | --- | --- | --- | --- | --- | --- | | 0.98 | 0.0050 | 0.332 | 7.310E-4 | 0.140 | -6.348E-3 | 5.276E-3 | 0.078 | | | | 0.0050 | 0.332 | 7.310E-4 | 0.140 | -6.348E-3 | 5.276E-3 | 0.078 | 2.55E-4 | | 0.90 | 0.0275 | 0.324 | 7.922E-4 | 0.299 | -7.163E-3 | 5.303E-2 | 0.082 | | | | 0.0275 | 0.324 | 7.922E-4 | 0.299 | -7.163E-3 | 5.303E-2 | 0.082 | 2.54E-4 | | 0.80 | 0.0635 | 0.312 | 9.004E-4 | 0.408 | -8.526E-3 | 5.335E-2 | 0.088 | | | | 0.0635 | 0.312 | 9.004E-4 | 0.408 | -8.526E-3 | 5.335E-2 | 0.088 | 2.50E-4 | | 0.70 | 0.1131 | 0.296 | 1.097E-3 | 0.476 | -1.064E-2 | 5.400E-3 | 0.099 | | | | 0.1131 | 0.296 | 1.097E-3 | 0.476 | -1.064E-2 | 5.400E-3 | 0.099 | 2.37E-4 | | 0.60 | 0.1856 | 0.272 | 1.560E-3 | 0.501 | -1.456E-2 | 5.576E-3 | 0.121 | | | | 0.1856 | 0.272 | 1.560E-3 | 0.501 | -1.456E-2 | 5.575E-3 | 0.121 | 2.07E-4 | | 0.50 | 0.3044 | 0.232 | 4.087E-3 | 0.447 | -2.721E-2 | 6.524E-3 | 0.211 | | | | 0.3042 | 0.232 | 4.079E-3 | 0.447 | -2.718E-2 | 6.520E-3 | 0.210 | 1.21E-4 | | 0.40 | 0.4087 | 0.197 | 7.624E-3 | 0.279 | -3.845E-2 | 6.188E-3 | 0.320 | | | | 0.4086 | 0.197 | 7.637E-3 | 0.279 | -3.849E-2 | 6.190E-3 | 0.321 | 6.59E-6 | | 0.30 | 0.4219 | 0.193 | 4.626E-3 | 0.247 | -3.022E-2 | 4.713E-3 | 0.254 | | | | 0.4218 | 0.193 | 4.638E-3 | 0.247 | -3.026E-2 | 4.716E-3 | 0.254 | 8.43E-6 | | 0.20 | 0.4287 | 0.190 | 3.434E-3 | 0.237 | -2.635E-2 | 4.003E-3 | 0.220 | | | | 0.4287 | 0.190 | 3.453E-3 | 0.237 | -2.643E-2 | 4.009E-3 | 0.221 | 3.12E-6 | | 0.10 | 0.4326 | 0.189 | 2.900E-3 | 0.233 | -2.443E-2 | 3.648E-3 | 0.203 | | | | 0.4326 | 0.189 | 2.926E-3 | 0.233 | -2.454E-2 | 3.658E-3 | 0.204 | 1.52E-5 | | 0.01 | 0.4338 | 0.189 | 2.743E-3 | 0.233 | -2.384E-2 | 3.539E-3 | 0.197 | | | | 0.4338 | 0.189 | 2.771E-3 | 0.232 | -2.396E-2 | 3.550E-3 | 0.198 | 1.96E-5 | 140mm | q | *H*/∣*W*∣ | *U*/∣*W*∣ | *T*/∣*W*∣ | ∣*Ŵ*∣ | $\hat{\Omega}^2$ | *Ĉ* | *β̂* | *M̂* | VC | | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | | $\hat{\Omega}^2<0$ | - | - | - | - | - | - | - | - | - | | 0.98 | 2.75E-3 | 0.331 | 2.04E-3 | 2.68E-1 | 1.50E-3 | -0.181 | 8.61E-2 | 2.24 | | | | 2.74E-3 | 0.331 | 2.04E-3 | 2.68E-1 | 1.50E-3 | -0.181 | 8.61E-2 | 2.24 | 2.19E-4 | | 0.90 | 2.88E-3 | 0.317 | 2.37E-2 | 2.31E-1 | 1.64E-2 | -0.175 | 7.78E-2 | 2.05 | | | | 2.89E-3 | 0.317 | 2.37E-2 | 2.31E-1 | 1.64E-2 | -0.175 | 7.77E-2 | 2.05 | 1.37E-4 | | 0.80 | 3.10E-3 | 0.297 | 5.33E-2 | 1.86E-1 | 3.40E-2 | -0.166 | 6.71E-2 | 1.80 | | | | 3.10E-3 | 0.297 | 5.33E-2 | 1.86E-1 | 3.40E-2 | -0.166 | 6.71E-2 | 1.80 | 1.25E-4 | | 0.70 | 3.37E-3 | 0.275 | 8.60E-2 | 1.44E-1 | 4.98E-2 | -0.155 | 5.63E-2 | 1.55 | | | | 3.37E-3 | 0.274 | 8.60E-2 | 1.44E-1 | 4.98E-2 | -0.155 | 5.63E-2 | 1.55 | 1.35E-4 | | 0.60 | 3.72E-3 | 0.251 | 1.22E-1 | 1.04E-1 | 6.29E-2 | -0.141 | 4.55E-2 | 1.29 | | | | 3.73E-3 | 0.251 | 1.22E-1 | 1.04E-1 | 6.29E-2 | -0.141 | 4.55E-2 | 1.29 | 1.22E-4 | | 0.50 | 4.21E-3 | 0.225 | 1.61E-1 | 6.75E-2 | 7.19E-2 | -0.123 | 3.46E-2 | 1.01 | | | | 4.22E-3 | 0.225 | 1.61E-1 | 6.76E-2 | 7.19E-2 | -0.123 | 3.46E-2 | 1.01 | 8.79E-5 | | 0.43 | 3.64E-3 | 0.210 | 1.83E-1 | 4.06E-2 | 7.46E-2 | -0.101 | 2.67E-2 | 0.750 | | | | 3.67E-3 | 0.210 | 1.84E-1 | 4.07E-2 | 7.46E-2 | -0.102 | 2.67E-2 | 0.750 | 9.53E-5 | | MS | - | - | - | - | - | - | - | - | - | General Relativistic Correction =============================== We start from the metric of spherical symmetry space time, $$\begin{aligned} ds^2 = - c^2e^{2\Phi\_{GR}}dt^2 + e^{2\Lambda}dr^2 + r^2 ( d\theta^2 + \sin^2 \theta d\varphi^2 ).\label{metric}\end{aligned}$$ Matter is assumed as a perfect fluid, thus the energy momentum tensor is $$\begin{aligned} T^{\mu\nu} = \left( \rho ( 1 + e ) + \frac{P}{c^2} \right) u^\mu u^\nu + P g^{\mu\nu}, \label{pfluid}\end{aligned}$$ where *ρ*, *e*, and *P* are a density, specific internal energy, and pressure. TOV equation  is $$\begin{aligned} &&\frac{dm}{dr} = 4 \pi r^2 \rho ( 1 + e ),\label{TOV1}\\ &&\frac{dP}{dr} = - G \frac{\rho h}{r^2}( m + \frac{4\pi r^3 P}{c^2} ) /( 1 - \frac{2Gm}{c^2r} ),\label{TOV2},\end{aligned}$$ where *h* = 1 + *e* + *P*/*ρ**c*2 is relativistic enthalpy and *m* is mass function defined as $$\begin{aligned} e^{2\Lambda} = \frac{1}{1-\frac{2Gm}{c^2r}}.\end{aligned}$$ Equation for the potential Φ*G**R* is $$\begin{aligned} \frac{d\Phi\_{GR}}{dr} = \frac{G}{c^2r^2}\left( m + \frac{4\pi r^3 P}{c^2} \right)/( 1 - \frac{2Gm}{c^2r} ).\label{TOV3}\end{aligned}$$ This potential, of course, reduces to the gravitational potential Φ in the Newtonian limit. To make clear a general relativistic contribution in the potential, we perform the Taylor expansion of equation ([TOV3]) with respect to *x*(*r*) ≡ 2*G**m*(*r*)/*r**c*2, which is guaranteed to be small than unity in star. Result is written in the form, $$\begin{aligned} &&\frac{d\Phi\_{GR}}{dr} = \frac{G}{c^2 r^2} \int^r\_0 4\pi r^2 \rho dr' \nonumber\\ &&+ \frac{1}{r} \Big[ \frac{G}{c^2 r}\int^r\_0 4 \pi r^2 \rho e dr' \nonumber\\ &&+ \frac{1}{2}\sum^\infty\_{n=2} x(r)^n + \frac{4\pi G r^2 P}{c^4} \sum^\infty\_{n=0} x(r)^n \Big]. \label{TOV4}\end{aligned}$$ Note that the first term is a contribution from Newtonian gravitational potential. So, we define a general relativistic correction in the potential as *δ*Φ*G**R* ≡ *c*2Φ*G**R* − Φ*N*. Boundary condition is derived from a requirement that the outer vacuum space time is equivalent to the Schwarzschild space time, $$\begin{aligned} \Phi\_{GR} = \frac{1}{2} \ln \left( 1 - \frac{2GM}{c^2 R} \right),\label{TOV5a}\end{aligned}$$ where *R* and *M* are radius and ADM mass of the star. From the Taylor expansion with respect to $x(R)=\frac{2GM}{c^2R}$, the boundary condition for the general relativistic correction term *δ*Φ*G**R* is derived. As a result, *δ*Φ*G**R* is expressed in an integral form, $$\begin{aligned} &&\delta \Phi\_{GR}(r) = - \frac{c^2}{2} \sum^\infty\_{n=2} \frac{x(R)^n}{n} - \frac{G}{r} \int^r\_0 4 \pi \tilde{r}^2 \rho e d\tilde{r} \nonumber\\ &&- \int^R\_r d\tilde{r} \frac{1}{\tilde{r}}\Big[ 4 \pi G \tilde{r}^2 \rho e + \frac{c^2}{2} \sum^\infty\_{n=2} x(\tilde{r})^n \nonumber\\ &&+ \frac{4\pi G r^2 P}{c^2} \sum^\infty\_{n=0} x(\tilde{r})^n \Big].\label{TOV5b}\end{aligned}$$ This spherically symmetric effective potential is also applied in our 2D axisymmetric configuration code, where we first compute angular averages of the relevant hydrodynamical variables. These are then used to calculate the spherical general correction term *δ*Φ*G**R*1*D*(*r*) in equation ([TOV5b]). Consequently, we modify the 2D Newtonian potential Φ*N*2*D*(*r*, *θ*) to obtain the two dimensional general relativistic potential $$\begin{aligned} \Phi^{2D}\_{GR}(r,\theta) = \Phi^{2D}\_{N}(r,\theta) + \delta \Phi^{1D}\_{GR}(r). \label{TOV5c}\end{aligned}$$ The method shown in has an ambiguity in definition of potential and imposes boundary condition of their potential at infinity. However, in our method the general relativistic correction in the potential is explicitly written down and its boundary condition is imposed at the stellar surface. So, the two method is a little bit different. Verification of our method is shown in the last in this appendix. Subsequently, we focus on the Bernoulli equation, which is ordinary used to obtain axisymmetric configuration (e.g. ([eq22])). Under the metric form ([TOV1]), four velocity is given by $$\begin{aligned} u^\mu = ( e^{-\Phi\_{GR}},0,0,0).\label{TOV6}\end{aligned}$$ The relativistic Bernoulli equation with spherically symmetric is also written as $$\begin{aligned} c^2 \ln h = \ln u^t + C,\label{TOV7}\end{aligned}$$ where *h* and C are relativistic enthalpy, integral constant, respectively. Combining equations ([TOV6])-([TOV7]), we find $$\begin{aligned} c^2 \ln h = - (\Phi\_N + \delta \Phi\_{GR}) + C.\label{TOV8}\end{aligned}$$ Finally we modify the Bernoulli equation of magnetized rotating case as $$\begin{aligned} c^2 \ln h = - \Phi^{2D}\_{GR} + \frac{1}{2} R^2 \Omega^2 + \int^{\Psi} \mu(u) du + C. \label{TOV9}\end{aligned}$$ Note that this treatment may be speculation. To verify our treatment, we calculate spherical symmetric stars with four kinds of EOS in both 1D TOV code and 2D axisymmetric code. Relations of mass, both ADM and baryon mass, with central density in these stars are displayed in Figure [GR1D]. We confirm our treatment works well. | | | | --- | --- | | | | | | | --- 1. E-mail: [email protected][↩](#fnref1) 2. E-mail:[email protected][↩](#fnref2) Equilibrium Configurations of Strongly Magnetized Neutron Stars with Realistic Equations of State ================================================================================================= We investigate equilibrium sequences of magnetized rotating stars with four kinds of realistic equations of state (EOSs) of SLy (Douchin et al.), FPS (Pandharipande et al.), Shen (Shen et al.), and LS (Lattimer & Swesty). Employing the Tomimura-Eriguchi scheme to construct the equilibrium configurations. we study the basic physical properties of the sequences in the framework of Newton gravity. In addition we newly take into account a general relativistic effect to the magnetized rotating configurations. With these computations, we find that the properties of the Newtonian magnetized stars, e.g., structure of magnetic field, highly depends on the EOSs. The toroidal magnetic fields concentrate rather near the surface for Shen and LS EOSs than those for SLy and FPS EOSs. The poloidal fields are also affected by the toroidal configurations. Paying attention to the stiffness of the EOSs, we analyze this tendency in detail. In the general relativistic stars, we find that the difference due to the EOSs becomes small because all the employed EOSs become sufficiently stiff for the large maximum density, typically greater than $10^{15}\rm{g}~\rm{cm}^{-3}$. The maximum baryon mass of the magnetized stars with axis ratio *q* ∼ 0.7 increases about up to twenty percents for that of spherical stars. We furthermore compute equilibrium sequences at finite temperature, which should serve as an initial condition for the hydrodynamic study of newly-born magnetars. Our results suggest that we may obtain information about the EOSs from the observation of the masses of magnetars. stars: magnetic fields - stars: rotation Introduction ============ Recently there is a growing evidence of the supermagnetized neutron stars with the magnetic fields of  ∼ 1014 − 1015 G, the so-called magnetars (e.g., ). Although such stars are estimated to be only a subclass ( ∼ 10%) of the canonical neutron stars with  ∼ 1012 − 1013 G, much attention has been paid to this objects because there remain many astrophysically exciting but unresolved problems. Giant flaring activities observed during the past  ∼ 20 years have given us good opportunities to study the coupling of the interior to the magnetospheric structures of magnetars, but the relationship between the crustal fraction and the subsequent starquarks has not yet been clarified (see references in ). The origin of the large magnetic field is also a big problem, whether generated at post-collapse in the rapidly neutron star or descend from the main sequence stars. Assuming the large magnetic fields before core-collapse, extensive magnetohydrodynamic (MHD) stellar collapse simulations have been done recently in order to understand the formation mechanism of magnetars with the different levels of the sophistication in the treatment of the equations of state (EOSs), the neutrino transport, and the general relativity (see for a review). Here it is worth mentioning that the gravitational waves from magnetars could be a useful tool giving us an information about the magnetar interiors. While in a microscopic point of view, the effects of magnetic field larger than the so-called QED limit of $\sim 4.4\times 10^{13}~\rm{G}$, on the EOSs (e.g., ) and the radiation processes have been also extensively investigated (see for a review). For the understanding of the formation and evolution of the magnetars, it is indispensable to unify these macroscopic and microscopic studies together, albeit not an easy job. When one tries to study the above interesting issues, the construction of an equilibrium configuration of magnetars may be one of the most fundamental problems, thus has been extensively investigated since the pioneering study by. and studied the equilibrium configurations of an incompressible star with the poloidal fields. then succeeded to take into account the toroidal fields. extended Prendergast’s work to the compressible one. and improved a boundary condition of the poloidal fields. Note that all these works mentioned above were based on the perturbation approach in which one assumes the magnetic field is sufficiently weak. and developed techniques to treat a strong magnetic field beyond the perturbative approach with the Newtonian gravity., and developed the scheme with the general relativistic framework. Here it should be noted that the studies mentioned above have contained two drawbacks to be modified. In,, and, the magnetic fields are assumed to vanish at the stellar surface and such a condition may be somewhat unrealistic. In,,,, and, only poloidal fields have been considered. It has been pointed out that such a purely poloidal field will be unstable and will decay within several Alfvén times. In fact, recently find by the 3D numerical simulations that the pure poloidal fields certainly decay within several Alfvén times and the configurations consisted both of poloidal and toroidal ones with comparable strength can be only stable. Recently, established a new non-perturbative method, which has conquered the above drawbacks. In the scheme, one can obtain a solution with a natural boundary condition, in which the magnetic field vanishes at the infinity. Moreover, one can treat both poloidal and toroidal fields simultaneously. More recently, calculated a number of sequences of magnetized rotating star and discussed their basic properties. Extending the Tomimura-Eriguchi scheme to treat the differential rotation, found that the magnetic fields can be in a twisted-torus equilibrium structure and discussed the universality of such structures. There may still remain a room to be sophisticated in their studies that the polytropic EOSs have been used for simplicity. In general, the central density of the neutron stars is considered to be higher than the nuclear density. Since we still do not have an answer about the EOS in such a higher regime, many kinds of the nuclear EOSs have been proposed (e.g., ) depending on the descriptions of the nuclear forces and the possible appearance of the exotic physics (e.g., ). While the stiffness of the polytropic EOS has been treated to be globally constant inside the star, it should depend on the density in the realistic EOSs. Remembering that the equilibrium configurations are achieved by the subtle and local balance of the gravitational force, centrifugal force, the Lorentz force, and the pressure gradient, it is a nontrivial problem how the equilibrium configurations change for the realistic EOSs. Therefore the first purpose in this paper is to extend the studies of and to incorporate the realistic EOSs and discuss the equilibrium properties. Four kinds of EOSs of SLy, FPS, Shen, and Lattimer-Swesty are adopted, which are often employed in the recent literatures of the MHD studies mentioned above. In contrast to the case of the polytropic EOS, a maximum density enters as a new parameter. At first, we set this value as comparable as the nuclear density of $\rho\_{\rm nuc} \approx 2.8\times 10^{14} \rm{g}~\rm{cm}^{-3}$, because the general relativistic (GR) corrections are rather small at this density regime as will be mentioned. If a maximum density of star is much larger than the nuclear density, we must take account into a GR effect. However, the fully GR approach to the magnetized equilibrium configuration has not been established yet except for the purely poloidal fields. Therefore, we consider here a new approach to take into account a GR effect approximately and see their effects on the equilibrium configurations. This is the second purpose of this paper. Applying this method, we furthermore compute equilibrium sequences at finite temperature, which should serve as an initial condition for the hydrodynamic evolutionary studies of newly-born magnetars. This paper is organized as follows. In Section [basic], we shall briefly summarize the basic equations, the numerical scheme and the employed equations of state. Numerical results and their analysis are presented in Section [result]. Summary and discussion follow in Section [conc]. Numerical Scheme and Models =========================== Basic equations --------------- The basic equations describing an equilibrium state of a perfect conductive fluid are summarized according to : 1. Maxwell’s equations: $$\begin{aligned} \nabla\_a E^a = 4 \pi \rho\_e,\label{eq1}&&\\ \epsilon^{abe}\nabla\_b B\_e = \frac{4\pi}{c}j^a,\label{eq2}&&\\ \epsilon^{abe}\nabla\_b E\_e = 0,\label{eq3}&&\\ \nabla\_a B^a = 0,\label{eq4}&&\end{aligned}$$ where we use Gaussian units, with the charge and current densities *ρ**e* and *j**a* both measured in electrostatic units. Here *E**a*, *B**a*, and *c* are the electric field, magnetic field, and the light velocity, respectively, ∇*a* stands for the covariant derivative and *ε**a**b**e* for the Levi-Civita tensor in flat three-dimensional space. 2. Perfect conductivity equations: $$\begin{aligned} && E\_a + \epsilon\_{abe} \frac{v^b}{c} B^e = 0,\label{eq5}\end{aligned}$$ where *v**a* denotes the fluid velocity. 3. Mass conservation: $$\begin{aligned} && \nabla\_a(\rho v^a ) = 0,\label{eq6}\end{aligned}$$ where *ρ* is the mass density. 4. Euler equations: $$\begin{aligned} && \rho v^b \nabla\_b v\_a + \nabla\_a p + \rho \nabla\_a \Phi -\frac{1}{c} \epsilon\_{abe} j^b B^e = 0,\label{eq7}\end{aligned}$$ where *p* is the pressure and Φ is the gravitational potential. Note that the last term on the left-hand side of equation ([eq7]) represents the Lorentz force being exerted on the perfectly conductive fluid, which can, due to equation ([eq2]), be rewritten in terms of *B**a* as $$\begin{aligned} \frac{1}{c}\epsilon\_{abe} j^b B^e = \frac{1}{4\pi}( B^b \nabla\_b B\_a - B^b \nabla\_a B\_b ), \label{eq9}\end{aligned}$$ 5. Poisson equation for the gravitational potential: $$\begin{aligned} && \nabla^a \nabla\_a \Phi = 4\pi G \rho,\label{eq10}\end{aligned}$$ where *G* is the gravitational constant. In order to obtain stationary states of magnetized stars, the following assumptions are adopted in this frame work:(1) The magnetic axis and the rotation axis are aligned. (2) The matter and the magnetic field distributions are axisymmetric about this axis. (3) The sources of the magnetic fields, i.e., the current distributions are confined to within the stars. For the sake of simplicity we require two additional conditions that are not necessary to have stationary solutions. (4) The meridional circulation of the fluid is neglected. Consequently, the fluid flow can be described by the angular velocity Ω only, which is given by *v**a* = Ω*φ**a*, where *φ* stands for the rotational Killing vector. Due to this assumptions and the axisymmetric assumption, the mass conservation equation ([eq6]) is automatically satisfied. (5) The barotropic equation of state, *p* = *p*(*ρ*), is adopted. Although we employ realistic equations of state in this paper, this barotropic condition can be maintained as explained in subsection [EOS]. Under these assumptions, the basic equations for determining strictures of magnetized rotating stars can be derived. For axisymmetric configurations, the divergence-free magnetic fields are automatically satisfied by introducing Ψ of *R* and *z* as follows: $$\begin{aligned} && B^R = - \frac{1}{R}\frac{\partial}{\partial z} \Psi,\label{eq11}\\ && B^z = \frac{1}{R}\frac{\partial}{\partial R} \Psi.\label{eq12}\end{aligned}$$ The function Ψ is sometimes called the flux function. Here we have employed the cylindrical coordinates (*R*, *ϕ*, *z*), in terms of which the line element and the Levi-Civita tensor are, respectively, given by $$\begin{aligned} && ds^2 = g\_{ab} dx^a dx^b = dR^2 + R^2 d\phi^2 + dz^2,\label{eq13}\\ && \epsilon\_{R\phi z} = R.\label{eq14}\end{aligned}$$ Introducing the vector potential *A**a*, defined as $$\begin{aligned} && B^a = \epsilon^{abe}\nabla\_b A\_e,\label{eq15}\end{aligned}$$ we can confirm that the flux function Ψ is nothing but the *ϕ* −  component of the vector potential *A**ϕ*. Equation ([eq5]) gives us the electric fields as $$\begin{aligned} && E\_a = - \frac{\Omega}{c} \nabla\_a \Psi.\label{eq16}\end{aligned}$$ Note that *E**ϕ* = 0, because Ψ is an axisymmetric function. From equations ([eq3]) and ([eq16]) the angular velocity must be expressed as $$\begin{aligned} && \Omega = \Omega(\Psi).\label{eq17}\end{aligned}$$ This relation is sometimes called Ferraro’s law of isorotation . The integrability conditions for equation ([eq7]) result in the following relations: $$\begin{aligned} && B^\phi = \frac{S(\Psi)}{R^2},\label{eq18}\\ && \frac{j^a}{c} = \frac{\kappa}{4\pi}B^a + \rho(\mu + R^2\Omega\Omega')\varphi^a,\label{eq19}\end{aligned}$$ where *S* and *μ* are arbitrary functions of Ψ, and $$\begin{aligned} && \kappa(\Psi) \equiv S'(\Psi).\label{eq20}\end{aligned}$$ Here the prime denotes the differentiation with respect to Ψ. The functional relation *S*(Ψ) is sometimes called the dynamical torque-free condition . From equations ([eq2]) and ([eq19]) we obtain $$\begin{aligned} R\frac{\partial}{\partial R} \left(\frac{1}{R}\frac{\partial \Psi}{\partial R}\right) + \frac{\partial^2 \Psi}{\partial z^2} = - 4 \pi \rho R^2 [R^2\Omega\Omega' + \mu(\Psi)]\nonumber\\ - \kappa(\Psi)S(\Psi),\label{eq21}\end{aligned}$$ which is generalized Grad-Shafranov equation. The first integral of the Euler equation can be expressed as $$\begin{aligned} \int \frac{dP}{\rho} = - \Phi + \frac{1}{2}R^2\Omega^2 + \int^\Psi \mu(u)du + C,\label{eq22}\end{aligned}$$ where *C* is a constant of integration. Following, we convert partial differential equations ([eq10]) and ([eq21]) into integral equations, given by $$\begin{aligned} &&\Phi(r) = - G\int\frac{\rho(\tilde{r})}{|r-\tilde{r}|}d^3\tilde{r}, \label{eq23}\\ &&A\_{\phi}(r)\sin \phi = \int\frac{\kappa S + 4\pi (\mu + \tilde{R}^2\Omega \Omega' )\rho \tilde{R}^2}{4\pi \tilde{R}|r-\tilde{r}|} \sin\tilde{\phi}d^3\tilde{r},\label{eq24}\end{aligned}$$ where all boundary conditions for the potentials Φ and Ψ, i.e., that the potentials are regular everywhere, are included in the integral expressions, and we need not consider them any further. Concerning the arbitrary function *κ*(Ψ), since the current vector *j**a* has to vanish outside the star as mentioned before, the function *κ* needs to vanish outside the star. Concerning the functions characterizing the magnetic field distributions, we choose the following forms: $$\begin{aligned} && \mu(u) = \mu,\label{eq25}\\ && S(u) = \frac{a}{k+1}(u-u\_{max})^{k+1}\theta(u-u\_{max}),\label{eq26}\\ && \kappa(u) = S' = a(u-u\_{max})^k\theta(u-u\_{max}),\label{eq27}\end{aligned}$$ where *μ*, *a*, and *k* are arbitrary constants and *u**m**a**x* is the maximum value of Ψ in the vacuum region. Here *θ*(*x*) stands for the Heaviside’s step function, and we have assumed that *k* ≥ 0 in order to have finite *κ*. It should be noted that, as shown below, *κ*, defined by equation ([eq27]), satisfied the required condition, i.e., *κ* = 0 outside of the star, at least for the stars in the present study. Since we have not known what rotation law is realized in actual magnetized stars, in the present investigation we adopt the rigid rotation, $$\begin{aligned} \Omega(u) = \Omega\_0. \label{eq28}\end{aligned}$$ In order to carry out numerical computations properly, we introduce the following non-dimensional variables by using the maximum density *ρ**m**a**x* and the equatorial radius *r**e*: $$\begin{aligned} && \hat{\rho} \equiv \rho/\rho\_{max}\label{eq30},\\ && \hat{r} \equiv r/r\_e = r/\sqrt{\frac{1}{\beta}\frac{p\_{max}} {4\pi G \rho\_{max}^2}}\label{eq31},\\ && \hat{\Omega} \equiv \Omega/\sqrt{4\pi G \rho\_{max}}\label{eq32},\\ && \hat{C} \equiv C / ( 4\pi G \rho\_{max} r\_e^2 )\label{eq33},\\ && \hat{\kappa} \equiv \kappa/ \left(\frac{1}{r\_e}\right)\label{eq34},\\ && \hat{\mu} \equiv \mu /\left(\frac{\sqrt{4 \pi G}}{r\_e}\right)\label{eq35},\\ && \hat{A}\_{\phi} \equiv A\_{\hat{\phi}} / (\sqrt{4 \pi G }\rho\_{max} r\_e^2) \label{eq36},\\ && \hat{B}^a \equiv B\_{\hat{a}}/(\sqrt{4 \pi G }\rho\_{max} r\_e),\label{eq37}\end{aligned}$$ where *β* is a numerical factor that is introduced to fix the non-dimensional equatorial radius to be unity. Here *B**â* denotes the orthonormal component of *B**a*, which is frequently convenient because it has a physical dimension. By using these variables, global physical quantities characterizing an equilibrium state, the gravitational potential energy *W*, the rotational energy *T*, the internal energy *U*, the magnetic energy *H*, the mass *M* can be expressed as follows: $$\begin{aligned} && \hat{W} = W/( 4\pi G \rho\_{max}^2 r\_e^5 ),\label{eq38}\\ && \hat{T} = T/( 4\pi G \rho\_{max}^2 r\_e^5 ),\label{eq39}\\ && \hat{U} = U/( 4\pi G \rho\_{max}^2 r\_e^5 ),\label{eq40}\\ && \hat{H} = H/( 4\pi G \rho\_{max}^2 r\_e^5 ),\label{eq41}\\ && \hat{M} = M/( \rho\_{max} r\_e^3 ),\label{eq42}\end{aligned}$$ where $$\begin{aligned} && W \equiv \frac{1}{2}\int\_{star} \Phi \rho d^3 r,\label{eq43}\\ && T \equiv \frac{1}{2}\int\_{star} \rho (R\Omega)^2 d^3r,\label{eq44}\\ && U \equiv \int\_{star} p d^3 r,\label{eq45}\\ && H \equiv - \frac{1}{c}\int\_{all~space}x^a \epsilon\_{abe} j^b B^e d^3r, \label{eq46}\\ && M \equiv \int\_{star} \rho d^3r.\label{eq47}\end{aligned}$$ We employ the Hachisu self-consistent field(HSCF) scheme . In the HSCF scheme, one of the model parameters characterizing an equilibrium star is the axis ratio *q*, defined as *q* ≡ *r**p*/*r**e*, where *r**e* is the smallest distance to the surface from the origin. With the HSCF scheme, *ρ*, *A**ϕ*, *β*, *C*, and Ω0 for rotating configurations (for non-rotating configurations) are iteratively solved, and during iteration cycles, *q* and other model parameters are fixed. By changing the axis ratio *q*, and fixing an appropriate set of the parameters, we follow one model sequence of equilibrium configurations. In actual calculations, we divide the interval [0, 1] in the *r̂* -direction into 100 meshes and the interval [0, *π*/2]in the *θ*- direction into 200 meshes. Note that it is enough to calculate solutions for the interval [0, *π*/2] in the *θ*-directions because we impose the equatorial plane symmetry. The accuracies of the numerical solutions are checked with an assessment by the normalized virial equation (e.g.,  ), defined as $$\begin{aligned} VC = | 2T + W + 3U + H |/|W|.\label{virial}\end{aligned}$$ For later convenience, here we like to explain about the qualitative meanings of the parameters *a* and *μ̂*. The parameter *μ̂* is directly involved in the Bernoulli equation ([eq22]) and plays a role to determine the matter distribution. Increasing of *a* enhances magnitude of magnetic field Ψ through the generalized Grad-Shafranov equation ([eq21]). Therefore, increasing of both *a* and *μ̂* results in an enhancement in the Lorentz force exerting on the conductive fluids. Note that *a* = 0 means that the magnetic field has the poloidal component only, however *μ̂* = 0 *does not mean* that there exists only toroidal component (see equation ([eq11]), ([eq12]) and ([eq18])). Equations of State ------------------ As mentioned in Section [intro], equation of state (EOS) is an important ingredient for determining the equilibrium configurations. Before conducting an extensive study as done before for the studies of rotating equilibrium configurations in which more variety of EOSs were employed (e.g., and references therein), we adopt here four kinds of EOSs of SLy, FPS, Shen, and Lattimer-Swesty which are often employed in the recent MHD studies relevant for magnetars. In the study of cold neutron stars, the *β*-equilibrium condition with respect to beta decays of the form *e*− + *p* ↔ *n* + *ν**e* and *n* ↔ *p* + *e*− + *ν̄**e*, can be well validated as for the static properties. Since neutrinos and antineutrinos escape from the star their chemical potentials vanish at zero-temperature *T* = 0 with *T* being the temperature, and the equilibrium condition is $\mu\_{\rm n} = \mu\_{\rm e} + \mu\_{\rm p}$, with $\mu\_{\rm n}$, $\mu\_{\rm e}$, and $\mu\_{\rm p}$ being the chemical potentials of neutron, electron, and proton, respectively. With the charge neutrality condition, we can determine the three independent thermodynamic variables, (for example the pressure as *P*(*ρ*, *Y**e*, *T*) with *Y**e* being the electron fraction), can only be determined by a single variable, which we take to be the density, namely *P*(*ρ*, *Y**e*(*ρ*)), noting here that *T* = 0 is assumed for the case of the cold neutron stars. Thanks to this, we can use the formalism mentioned in Section 2 without violating the barotropic condition of the EOSs. At the maximum densities higher than $\sim 2 \rho\_{\rm nuc}$, muons can appear, and higher than $\sim 3 \rho\_{\rm nuc}$, one may take into possible appearance of hyperons. However, since the muon contribution to pressure at the higher density has been pointed to be very small, and we still do not have detailed knowledge of the hyperon interactions, we prefer to employ the above neutron star matter, namely *e*−, *n*, *p*, model to higher densities. In the following, we shortly summarize features of the EOSs employed here. Lattimer-Swesty EOS is the one which has been used these years as a standard in the research field of core-collapse supernova explosions and the subsequent neutron star formations (see references in ), which is based on the compressible drop model for nuclei together with dripped nucleons. The values of nuclear parameters are chosen according to nuclear mass formulae and other theoretical studies with the Skyrme interaction. Shen EOS is the rather modern one currently often used in the research field, which is based on the relativistic mean field theory with a local density approximations, which has been constructed to reproduce the experimental data of masses and radii of stable and unstable nuclei (see references in ). FPS are modern version of an earlier microscopic EOS calculations by, which employs both two body (U14) and three-body interactions (TNI). In the SLy EOS, neutron-excess dependence is added to the FPS EOS, which is more suitable for the neutron star interiors. As for the FPS and SLy EOSs, we use the fitting formulae presented in. In the upper panel of Figure [adindex], we plot the pressure *P* as a function of the rest-mass density *ρ* for the EOSs. In the bottom panel, we plot the adiabatic index, defined by $$\begin{aligned} \Gamma = \frac{d \ln P}{d \ln \rho}\label{eq:adindex},\end{aligned}$$ which can be a useful tool not only to characterize the stiffness but also to see their effects on the equilibrium configurations. As seen, the adiabatic index as a function of the density is far away from constant as in the case of the polytropic EOS, showing an increasing (but sometimes zigzag) trend up to the several nuclear density (for example $\sim 2 \rho\_{\rm nuc}$ of SLy) and decreases slowly at higher densities due to the interplay of the density dependence of the nuclear interactions and of increasing proton fractions. Note that the relatively large discrepancies of the pressure below $\sim 10^{14}~\rm{g}~\rm{cm}^{-3}$ and the zigzag features of the adiabatic indices are due to the difference in the treatment of the inhomogeneous matter consisted of electrons, protons, free nucleons, and a species of heavy nuclei, and these are pointed out to have little effects on determining the equilibrium configurations which predominantly determined by the behavior of the EOSs at higher densities. Numerical Results ================= As mentioned, the first purpose of this study is to investigate the basic properties of magnetized rotating stars with the four kinds of EOSs in Newtonian gravity. In all the calculations, the value of *k* (equation ([eq26])) is set to be 0.1 because this choice makes the maximum toroidal fields to be comparable to the poloidal ones. It should be mentioned that one should specify the maximum density when using the realistic EOSs, while the degree of freedom of this choice can be eliminated in case of the polytropic EOSs because one can choose the polytropic constant freely. We actually fix its value as $3\times 10^{14}~\rm{g}~\rm{cm}^{-3}$ close to the nuclear density. Before going to the main results, we note that we have checked the accuracy of our newly developed code by the test problems and their results are summarized in appendix [code test]. Newtonian Case -------------- For clarity, we categorize the equilibrium configurations to two types of non-rotating type and rotating one. For non-rotating sequence, the parameters, axis ratio *q* and *a*, are input and kept fixed during calculation and then the parameters, *β*, *Ĉ*, and *μ̂* are obtained as output. For rotating sequence, *q*, *a*, and *μ̂* are input and kept fixed, then *β*, *Ĉ*, and $\hat{\Omega}\_0$ are obtained as outcome. ### Static Magnetized Configurations [SMC] We first concentrate on the non-rotating sequences, in which the anisotropic magnetic stress is only the agent to deform the stars. In Table [real-non-rot-mix], various physical quantities corresponding to the previous studies are given for the four kinds of EOSs. As a model parameter, we set *a* = 12 because we are interested in the combination effects of poloidal and toroidal fields. Note again that the choice of *a* = 0 leads to the unstable pure poloidal configurations as mentioned. As shown in the table, the values of *H*/∣*W*∣ for the sequences become large enough to be 0.1 when the stars sufficiently deform typically *q* of the axis ratio to be $ \lesssim 0.7$. In such a region, the perturbative approach may break down and a non-perturbative taken here is valid. Lowering the axis ratio, it can be seen that all sequences can achieve nearly toroidal density configuration, *q* ∼ 0 due to the strong Lorentz forces. In fact, *H*/∣*W*∣ increases as the axis ratio *q* decreases as shown in Figure [q-H]. This feature does not depend on the EOSs. Furthermore we find the values of *H*/∣*W*∣ for SLy or FPS sequences is greater than the those for Shen or LS sequences for any axis ratio. This tendency is explained with respect to the stiffness of the EOSs. As seen from the bottom panel of Figure 1, Shen or LS EOSs are more stiffer than SLy or FPS EOSs in the density region near and below $3\times 10^{14}~\rm{g}~\rm{cm}^{-3}$. Therefore, if the stars should have the same axis ratio, the force driven by the pressure for the Shen or LS stars is greater than the forces for the SLy or FPS stars. Consequently, the SLy or FPS stars need much Lorentz force than the Shen or LS stars to have the same degree of the deformation. 140mm ccccccccc q & *H*/∣*W*∣ & *U*/∣*W*∣ & ∣*Ŵ*∣ & *μ̂* & *Ĉ* & *β̂* & *M̂* & VC 0.99&0.004&0.332&0.140E-01&0.079&-0.311E-01&0.256E-01&0.387&0.125E-03 0.90&0.040&0.320&0.168E-01&0.224&-0.374E-01&0.259E-01&0.431&0.130E-03 0.80&0.091&0.303&0.212E-01&0.288&-0.468E-01&0.260E-01&0.495&0.112E-03 0.70&0.151&0.283&0.277E-01&0.319&-0.597E-01&0.256E-01&0.582&0.828E-04 0.60&0.212&0.263&0.376E-01&0.334&-0.772E-01&0.243E-01&0.700&0.690E-04 0.50&0.268&0.244&0.460E-01&0.342&-0.935E-01&0.218E-01&0.795&0.813E-04 0.40&0.328&0.224&0.377E-01&0.342&-0.905E-01&0.184E-01&0.733&0.755E-04 0.30&0.353&0.216&0.245E-01&0.309&-0.739E-01&0.148E-01&0.600&0.105E-03 0.20&0.359&0.214&0.182E-01&0.290&-0.639E-01&0.127E-01&0.520&0.149E-03 0.10&0.364&0.212&0.154E-01&0.282&-0.591E-01&0.116E-01&0.479&0.143E-03 0.01&0.366&0.211&0.145E-01&0.280&-0.577E-01&0.112E-01&0.466&0.133E-03 0.99&0.004&0.332&0.394E-02&0.104&-0.145E-01&0.155E-01&0.180&0.231E-03 0.90&0.040&0.320&0.577E-02&0.282&-0.198E-01&0.170E-01&0.226&0.199E-03 0.80&0.095&0.302&0.916E-02&0.344&-0.286E-01&0.187E-01&0.299&0.161E-03 0.70&0.166&0.278&0.152E-01&0.357&-0.424E-01&0.199E-01&0.406&0.114E-03 0.60&0.238&0.254&0.256E-01&0.351&-0.623E-01&0.202E-01&0.557&0.748E-04 0.50&0.298&0.234&0.353E-01&0.348&-0.809E-01&0.187E-01&0.682&0.632E-04 0.40&0.361&0.213&0.277E-01&0.338&-0.763E-01&0.156E-01&0.618&0.678E-04 0.30&0.373&0.209&0.163E-01&0.292&-0.585E-01&0.120E-01&0.482&0.105E-03 0.20&0.381&0.206&0.121E-01&0.277&-0.509E-01&0.102E-01&0.418&0.129E-03 0.10&0.386&0.205&0.102E-01&0.271&-0.471E-01&0.931E-02&0.385&0.139E-03 0.01&0.388&0.204&0.966E-02&0.270&-0.459E-01&0.903E-02&0.374&0.145E-03 0.99&0.002&0.332&0.161&0.051&-0.132&0.701E-01&1.65&0.218E-02 0.90&0.027&0.324&0.149&0.157&-0.134&0.645E-01&1.58&0.207E-02 0.80&0.059&0.313&0.137&0.220&-0.137&0.580E-01&1.50&0.197E-02 0.70&0.096&0.301&0.127&0.266&-0.141&0.510E-01&1.43&0.185E-02 0.60&0.138&0.287&0.120&0.301&-0.147&0.436E-01&1.38&0.169E-02 0.50&0.185&0.271&0.111&0.328&-0.151&0.365E-01&1.32&0.156E-02 0.40&0.234&0.255&0.091&0.345&-0.146&0.308E-01&1.20&0.140E-02 0.30&0.279&0.240&0.070&0.348&-0.134&0.262E-01&1.05&0.135E-02 0.20&0.303&0.232&0.054&0.338&-0.120&0.230E-01&0.932&0.144E-02 0.10&0.305&0.231&0.048&0.325&-0.113&0.215E-01&0.878&0.146E-02 0.01&0.306&0.231&0.046&0.322&-0.110&0.210E-01&0.860&0.148E-02 0.99&0.003&0.331&0.100E+00&0.054&-0.010&0.570E-01&1.25&0.357E-02 0.90&0.031&0.322&0.965E-01&0.166&-0.104&0.532E-01&1.22&0.313E-02 0.80&0.067&0.310&0.931E-01&0.229&-0.110&0.485E-01&1.19&0.295E-02 0.70&0.108&0.296&0.910E-01&0.273&-0.117&0.434E-01&1.17&0.277E-02 0.60&0.154&0.281&0.909E-01&0.307&-0.126&0.377E-01&1.18&0.221E-02 0.50&0.203&0.265&0.887E-01&0.331&-0.134&0.320E-01&1.16&0.209E-02 0.40&0.255&0.248&0.738E-01&0.345&-0.130&0.270E-01&1.07&0.184E-02 0.30&0.299&0.233&0.553E-01&0.343&-0.118&0.229E-01&0.928&0.221E-02 0.20&0.316&0.227&0.428E-01&0.327&-0.105&0.201E-01&0.822&0.202E-02 0.10&0.317&0.227&0.372E-01&0.314&-0.098&0.187E-01&0.769&0.213E-02 0.01&0.318&0.227&0.355E-01&0.311&-0.096&0.182E-01&0.751&0.191E-02 *H*/∣*W*∣ as function of *q* for the non rotating sequences shown in Table  [real-non-rot-mix].[q-H] ### Rotating Magnetized Configurations Tables [real-rot-mix] and [real-mag-mix] are devoted to the magnetized sequences with rotation for the four kinds of EOSs. By setting *a* = 16 and *μ̂* = 0.2 in Table [real-rot-mix], *a* = 16 and *μ̂* = 0.3 in Table [real-mag-mix], respectively, we can compute the equilibrium configurations with comparable toroidal and poloidal fields. By changing *μ̂* in two values, we can see clearly the effects of stronger magnetic field. From the tables, it can be seen that there exist maximum (*q**m**a**x*) and minimum (*q**m**i**n*) values in the axis ratio *q*. This feature does not appear in the sequence only with the magnetic fields (see Subsection [SMC]), in which we can find the physical solutions for any values of *q* (see Table [real-non-rot-mix]). Equilibrium configurations which have larger axis ratio than *q**m**a**x* have $\hat{\Omega}^2 < 0$ and such configurations are of course unphysical. Too strong magnetic field causes such a configuration, in which ”anti”-centrifugal force is needed. On the other hand, configurations with smaller *q* than *q**m**i**n* belong to following two types: (1) Due to too strong Lorentz forces, the converged solutions have $\hat{\Omega}^2<0$ as explained above. (2) the mass of the star sheds from the equatorial surface because the centrifugal forces are too strong (MS in the table indicates the so-called mass-shedding). Following, we call the sequence ending in the former and the latter type as magnetic-field-dominated (**MD**) and rotation-dominated (**RD**) sequence, respectively. It is noted that larger values of *μ̂* make *q**m**a**x* smaller (compare Table [real-rot-mix] with [real-mag-mix]) because *q**m**a**x* is determined by the magnetic field strength. So, all the sequences shown in Table [real-rot-mix] belong to the RD sequence regardless of the EOSs, but not for the sequences in Table [real-mag-mix]. The sequences with SLy or FPS EOSs belong to RD type and those with Shen or LS EOSs belong to MD type. As like this, even if we set the same parameter of *a* and *μ̂*, it is found that the type of the sequence depends on the EOSs. In order to see this feature more clearly, we perform a parameter search in *a* − *μ̂* parameter space and classify the sequences. In Figure [Phase], we show a set of phase diagrams on *a* − *μ̂* plane with different EOSs. In the figure, the region with $\hat{\Omega}^2<0$ shows that we cannot obtain any solutions except one with negative angular velocity. “*q**m**i**n* = 0.01” indicates a parameter region in which nearly toroidal configurations exist. So we find the equilibrium sequences with SLy or FPS EOSs are classified into either RD or MD type. However, the sequences with Shen or LS has another type of sequence, in which nearly toroidal configurations *q* ∼ 0 exits as in the case of magnetized configurations without rotation (see Section [SMC]). Such a configuration never appears in the model with SLy or FPS EOSs. Looking at the stiffness of EOSs in Figure [adindex] again, it can be seen that the Shen and LS EOSs are stiffer than SLy and FPS EOSs near the central density adopted here ($\rho\_{max} = 3 \times 10^{14} \rm{g}~\rm{cm}^{-3}$). Thus the sequence with a nearly toroidal configuration is found to appear for the stiffer EOSs. This qualitative feature was also noticed in the polytropic studies . Then we move on to investigate the structures of the equilibrium configurations. Figures [SLyNew]-[LSNew] show the distributions of density, and toroidal/poloidal magnetic fields of models characterized by *q* = 0.7, *a* = 16, *μ̂* = 0.2 for SLy, FPS, Shen, and LS EOSs, respectively. The reason why these values of the parameters are chosen is that we want to fix *q* for the different EOS models. For example, there does not exist the common value of *q* if we set the value of *μ̂* as 0.3 (see Table [real-mag-mix]). *q* = 0.7 taken here is the smallest common value in the sequences with *μ̂* = 0.2 (see Table [real-rot-mix]). It is seen that the density distributions of stars with SLy or FPS EOSs are more concentrated at the center than those of the stars with Shen or LS EOSs. These feature are due to the stiffness of EOSs as mentioned above. We also find the toroidal fields for SLy or FPS EOSs distribute in relatively wider regions in the vicinity of the equatorial plane than for Shen or LS EOSs, in which the toroidal fields concentrate near the stellar surfaces. It is noted that the toroidal fields only exist in the interior of the stars, which is due to the choice of the functional form of *κ* (see equation ([eq27])). Near the rotational axis, the poloidal magnetic field behaves like a dipole field, where the toroidal fields are weak. However in the vicinity of the equatorial plane where the toroidal fields become strong, the poloidal fields become distorted. The region where the magnetic field is mixed depends on the EOSs. The stars with the softer EOS (SLy or FPS) tend to have wider mixed region than those with the stiffer EOS (Shen or LS). The magnetic field lines Ψ may be good a tool to see the structures, showing the tori of twisted field lines around the symmetry axis inside the star and the untwisted poloidal field lines, which penetrate the stellar surface to continue to the vacuum. It is noted that this universality of the twisted-torus structures of the magnetic fields was seen also in the polytropic stars. Therefore our results for the realistic EOSs may be regarded as a further generalization of their results. 140mm ccccccccccc q & *H*/∣*W*∣ & *U*/∣*W*∣ & *T*/∣*W*∣ & ∣*Ŵ*∣ &$\hat{\Omega}^2$ & *Ĉ* & *β̂* & *M̂* & h & VC $\hat{\Omega}^2<0$&-&-&-&-&-&-&-&-&- 0.90&0.350E-1&0.321&0.855E-3&0.147E-1&0.536E-3&-0.347E-1&0.249E-1&0.399&0.320&0.128E-3 0.80&0.283E-1&0.316&0.117E-1&0.898E-2&0.720E-2&-0.290E-1&0.202E-1&0.297&0.270&0.167E-3 0.70&0.198E-1&0.315&0.176E-1&0.452E-2&0.108E-1&-0.220E-1&0.154E-1&0.197&0.210&0.216E-3 0.63&0.146E-1&0.316&0.184E-1&0.251E-2&0.114E-1&-0.172E-1&0.122E-1&0.138&0.199&0.261E-3 MS&-&-&-&-&-&-&-&-&- $\hat{\Omega}^2<0$&-&-&-&-&-&-&-&-&- 0.94&0.168E-1&0.328&0.374E-3&0.417E-2&0.247E-3&-0.157E-1&0.156E-1&0.187&0.189&0.225E-3 0.90&0.152E-1&0.327&0.234E-2&0.341E-2&0.154E-2&-0.146E-1&0.143E-1&0.166&0.180&0.242E-3 0.80&0.113E-1&0.326&0.583E-2&0.191E-2&0.383E-2&-0.116E-1&0.114E-1&0.117&0.166&0.306E-3 0.70&0.783E-2&0.326&0.729E-2&0.941E-3&0.480E-2&-0.866E-2&0.858E-2&0.765E-1&0.160&0.402E-3 0.65&0.644E-2&0.326&0.742E-2&0.632E-3&0.489E-2&-0.735E-2&0.733E-2&0.603E-1&0.168&0.477E-3 MS&-&-&-&-&-&-&-&-&- $\hat{\Omega}^2<0$&-&-&-&-&-&-&-&-&- 0.80&0.605E-1&0.311&0.216E-2&0.132&0.138E-2&-0.134&0.573E-1&0.146E+1&0.424&0.197E-2 0.70&0.672E-1&0.289&0.315E-1&0.106&0.183E-1&-0.129&0.485E-1&0.129E+1&0.458&0.182E-2 0.60&0.756E-1&0.265&0.644E-1&0.814E-1&0.335E-1&-0.122&0.397E-1&0.111E+1&0.497&0.166E-2 0.50&0.860E-1&0.237&0.101&0.596E-1&0.461E-1&-0.113&0.309E-1&0.929&0.537&0.148E-2 0.40&0.991E-1&0.204&0.144&0.406E-1&0.550E-1&-0.101&0.224E-1&0.750&0.582&0.124E-2 MS&-&-&-&-&-&-&-&-&- $\hat{\Omega}^2<0$&-&-&-&-&-&-&-&-&- 0.81&0.669E-1&0.309&0.143E-2&0.881E-1&0.883E-3&-0.106&0.482E-1&0.115E+1&0.502&0.289E-2 0.80&0.675E-1&0.307&0.400E-2&0.862E-1&0.246E-2&-0.106&0.475E-1&0.114E+1&0.505&0.291E-2 0.70&0.749E-1&0.287&0.313E-1&0.675E-1&0.176E-1&-0.101&0.401E-1&0.988&0.535&0.275E-2 0.60&0.834E-1&0.264&0.615E-1&0.495E-1&0.312E-1&-0.934E-1&0.326E-1&0.826&0.565&0.242E-2 0.50&0.905E-1&0.240&0.929E-1&0.304E-1&0.422E-1&-0.799E-1&0.246E-1&0.622&0.584&0.238E-2 0.48&0.865E-1&0.239&0.972E-1&0.245E-1&0.439E-1&-0.732E-1&0.226E-1&0.547&0.567&0.236E-2 MS&-&-&-&-&-&-&-&-&- 140mm ccccccccccc q & *H*/∣*W*∣ & *U*/∣*W*∣ & *T*/∣*W*∣ & ∣*Ŵ*∣ &$\hat{\Omega}^2$ & *Ĉ* & *β̂* & *M̂* & h & VC $\hat{\Omega}^2<0$&-&-&-&-&-&-&-&-&-&- 0.70&0.147&0.282&0.283E-2&0.195E-1&0.144E-2&-0.494E-1&0.232E-1&0.474&0.471&0.820E-4 0.60&0.425E-1&0.305&0.214E-1&0.323E-2&0.125E-1&-0.206E-1&0.128E-1&0.161&0.203&0.241E-3 MS&-&-&-&-&-&-&-&-&-&- $\hat{\Omega}^2<0$&-&-&-&-&-&-&-&-&-&- 0.86&0.492E-1&0.317&0.301E-3&0.525E-2&0.187E-3&-0.195E-1&0.162E-1&0.214&0.217&0.207E-3 0.80&0.372E-1&0.318&0.483E-2&0.326E-2&0.304E-2&-0.158E-1&0.135E-1&0.161&0.179&0.255E-3 0.70&0.226E-1&0.321&0.793E-2&0.135E-2&0.507E-2&-0.107E-1&0.967E-2&0.950E-1&0.159&0.361E-3 0.63&0.161E-1&0.323&0.819E-2&0.700E-3&0.529E-2&-0.802E-2&0.750E-2&0.640E-1&0.167&0.459E-3 MS&-&-&-&-&-&-&-&-&-&- $\hat{\Omega}^2<0$&-&-&-&-&-&-&-&-&-&- 0.52&0.190&0.269&0.518E-3&0.106&0.254E-3&-0.146&0.367E-1&0.129E+01&0.491&0.153E-2 0.50&0.195&0.264&0.635E-2&0.103&0.304E-2&-0.146&0.350E-1&0.127E+01&0.496&0.154E-2 0.40&0.222&0.237&0.332E-1&0.850E-1&0.136E-1&-0.142&0.275E-1&0.115E+01&0.525&0.135E-2 0.30&0.260&0.216&0.459E-1&0.654E-1&0.151E-1&-0.131&0.222E-1&0.101E+01&0.565&0.124E-2 0.20&0.305&0.213&0.273E-1&0.505E-1&0.713E-2&-0.117&0.201E-1&0.897&0.610&0.139E-2 0.11&0.331&0.222&0.110E-2&0.457E-1&0.259E-3&-0.110&0.200E-1&0.858&0.615&0.148E-2 $\hat{\Omega}^2<0$&-&-&-&-&-&-&-&-&-&- $\hat{\Omega}^2<0$&-&-&-&-&-&-&-&-&-&- 0.51&0.217&0.259&0.243E-2&0.813E-1&0.112E-2&-0.128&0.311E-1&0.111E+01&0.569&0.205E-2 0.50&0.220&0.256&0.512E-2&0.805E-1&0.233E-2&-0.128&0.304E-1&0.110E+01&0.571&0.194E-2 0.40&0.251&0.230&0.289E-1&0.695E-1&0.113E-1&-0.127&0.241E-1&0.102E+01&0.600&0.171E-2 0.30&0.298&0.212&0.322E-1&0.533E-1&0.981E-2&-0.117&0.199E-1&0.905&0.648&0.191E-2 0.21&0.350&0.216&0.842E-3&0.418E-1&0.204E-3&-0.104&0.186E-1&0.811&0.690&0.105E-2 $\hat{\Omega}^2<0$&-&-&-&-&-&-&-&-&-&- | | | | --- | --- | | | | | | | | | | | --- | --- | | | | | | | | | | | --- | --- | | | | | | | | | | | --- | --- | | | | | | | | | | | --- | --- | | | | | | | General Relativistic Case ------------------------- As mentioned in Section [intro], when one tries to perform a qualitative investigation on the equilibrium configurations of magnetized rotating stars, the general relativistic effect must be taken into account. Since a full treatment is beyond scope of our paper, we here try to include the effect by adding a spherically general relativistic correction to the Newtonian gravitational potential. Although this method is similar to the approach reported in, the definition of general relativistic potential and its boundary condition are modified to be appropriate for this study. Details of our approach and its verification are shown in appendix [GR correct]. In this subsection, we report the models with the correction. ### Basic property In Table [gr1], physical quantities of magnetized stars, such as magnetic field at pole *B**p*, one at center *B**c*, baryon mass *M*, rotation period *P*, and radius *R*, are shown with *a* = 20 and *μ̂* = 0.1. Maximum density is taken to be $10^{15}\rm{g}~\rm{cm}^{-3}$, typically where the general relativistic correction cannot be negligible. Since our treatment for the correction may break down if the equilibrium star is highly deformed, we only focus on the equilibrium sequences with mildly strong magnetic field with the comparable strength of poloidal and toroidal fields (see *h* in Table [gr1]). From the table, it is found that there exist *q**m**a**x* and *q**m**i**n* as same in the Newtonian sequences. We find that all the sequences belong to the rotation-dominated type, irrespective of EOSs, with the maximum and minimum value of axis ratio being almost the same, typically *q**m**a**x* ∼ 0.9 and *q**m**i**n* ∼ 0.6. More interestingly, we find the GR effect does not change the basic property of the magnetized stars drastically. The values of the magnetic field at pole and center are also comparable, typically $B\_p \sim 10^{16} [\rm{G}] $ and $B\_c \sim 10^{17} [\rm{G}]$. The baryon masses of these stars are in the range between 1.7*M*⊙ - 3.6*M*⊙ and the stars with FPS EOS tend to have lighter masses and those with Shen EOS. Masses of the stars with SLy and LS EOSs are shown to be comparable. Comparing with the observations of magnetars, the obtained magnetic fields are a bit larger and the rotation periods are an order of milliseconds, which are too rapid in comparison with the observed ones of seconds. We discuss this point in the Subsection [sec:proto-mag]. Figures [GRSLy], [GRFPS], [GRShen], and [GRLS] are the density and magnetic field distributions of the equilibrium configurations with *q* = 0.7 in Table [gr1]. Note again that this value is the smallest common value in the configurations with the different EOSs. It can be seen that the configurations of density and the magnetic fields are rather similar regardless of the EOSs. The toroidal components of the magnetic fields concentrate near the stellar surface. The poloidal fields are distorted in the region, in which the toroidal fields are strong, and its shapes are dipole like near the rotation axis. Despite of the incursion of the general relativistic correction, the structures of the magnetic field lines are twisted tori as same in the Newtonian case. The reason of these similarity is that the employed EOSs become sufficiently stiff (Figure [adindex]) below the maximum density of $10^{15}\rm{g}~\rm{cm}^{-3}$ and that the effects on the configurations become smaller. It is seen that Shen and LS EOSs become softer at the density regime due to the increasing proton fraction. However it is noted that the equilibrium configurations are affected whether or not the EOSs become already stiffened at the smaller density regimes than the maximum density. In the two EOSs, this indeed occurs at $\sim 10^{14}\rm{g}~\rm{cm}^{-3}$. After the stiffening, the effects of the difference of the EOSs on the configurations become small with increasing the maximum density. 140mm ccccccc *q* & $Bp[\rm{G}]$ & $Bc[\rm{G}]$ & *M*[*M*⊙] & $P[\rm{ms}]$ & $R[\rm{km}]$ & h $\hat{\Omega}^2<0$&-&-&-&-&-&- 0.97&0.663E+17&0.273E+18&0.174E+01&0.336E+01&0.117E+02&0.393E+00 0.90&0.723E+17&0.277E+18&0.185E+01&0.120E+01&0.124E+02&0.429E+00 0.80&0.826E+17&0.282E+18&0.203E+01&0.831E+00&0.135E+02&0.491E+00 0.70&0.913E+17&0.282E+18&0.222E+01&0.711E+00&0.152E+02&0.560E+00 0.63&0.908E+17&0.267E+18&0.228E+01&0.684E+00&0.167E+02&0.563E+00 MS&-&-&-&-&-&- $\hat{\Omega}^2<0$&-&-&-&-&-&- 0.97&0.581E+17&0.251E+18&0.135E+01&0.464E+01&0.111E+02&0.425E+00 0.90&0.635E+17&0.255E+18&0.143E+01&0.136E+01&0.117E+02&0.459E+00 0.80&0.732E+17&0.259E+18&0.156E+01&0.924E+00&0.128E+02&0.516E+00 0.70&0.813E+17&0.258E+18&0.170E+01&0.783E+00&0.143E+02&0.572E+00 0.63&0.818E+17&0.245E+18&0.176E+01&0.748E+00&0.158E+02&0.574E+00 MS&-&-&-&-&-&- $\hat{\Omega}^2<0$&-&-&-&-&-&- 0.98&0.670E+17&0.277E+18&0.282E+01&0.372E+01&0.140E+02&0.293E+00 0.90&0.732E+17&0.280E+18&0.301E+01&0.106E+01&0.149E+02&0.326E+00 0.80&0.816E+17&0.282E+18&0.328E+01&0.761E+00&0.163E+02&0.380E+00 0.70&0.876E+17&0.276E+18&0.353E+01&0.668E+00&0.183E+02&0.459E+00 0.65&0.855E+17&0.263E+18&0.358E+01&0.656E+00&0.197E+02&0.463E+00 MS&-&-&-&-&-&- $\hat{\Omega}^2<0$&-&-&-&-&-&- 0.98&0.613E+17&0.259E+18&0.186E+01&0.998E+01&0.123E+02&0.333E+00 0.90&0.673E+17&0.263E+18&0.199E+01&0.125E+01&0.131E+02&0.367E+00 0.80&0.762E+17&0.266E+18&0.217E+01&0.870E+00&0.143E+02&0.419E+00 0.70&0.834E+17&0.263E+18&0.237E+01&0.747E+00&0.160E+02&0.483E+00 0.62&0.816E+17&0.245E+18&0.244E+01&0.719E+00&0.180E+02&0.491E+00 MS&-&-&-&-&-&- | | | | --- | --- | | | | | | | | | | | --- | --- | | | | | | | | | | | --- | --- | | | | | | | | | | | --- | --- | | | | | | | ### Relations between physical quantities and maximum density The relation between the physical quantities and the maximum density is an important information and it sometimes helps us to understand the stability of equilibrium configuration. Figure [rho-mass] shows maximum density and baryon mass relations in the magnetized rotating stars for the parameters, *a* = 12 and *μ̂* = 0.1 for the different EOSs. Long-dashed, short-dashed, and dotted lines represent the relations for *q* = 0.90, *q* = 0.80, and *q* = 0.70, respectively. As a reference relation, we also show one in the spherical configuration as *q* = 1.0. All sequences reach the mass-shedding limit at *q* ∼ 0.60. Closed circles correspond to the maximum mass points. As the axis ratios become small, the maximum mass points shift to left, i.e. lower density region. We find that the increase of the maximum mass is up to about twenty percents for the every EOS employed here. In Figure [rho-Bp], the magnetic fields at the pole are shown as a function of the maximum densities in the same sequences. For the sequences with SLy, FPS, and LS EOSs, the magnetic field at pole is an increasing function of the maximum density for any axis ratio. On the other hand, it is found that only for the sequences with Shen EOS, these exist maximum of the polar magnetic fields around $\rho \sim 2 \times 10^{15} \rm{g}~\rm{cm}^{-3}$. How about relations between other physical quantities and the maximum mass? Figures [rho-P], [rho-R], and [rho-Bmax] are the relations between the rotation periods, equatorial radii, and maximum magnetic field strength with the maximum density. Figure [rho-P] shows the rotation periods are order of millisecond and they decrease with decreasing a value of *q*. This is because these sequences are rotation-dominated type (see Table [gr1]) and more centrifugal force is needed to deform the equilibrium star largely. These features do not depend on the EOSs. From Figure [rho-R], the equatorial radii are order of ten kilometer and decreasing functions of the maximum density. We also find equatorial radii increase with decreasing of *q*. The maximum magnetic field strength are roughly $10^{17}[\rm{G}]$ and are increasing functions of maximum density in Figure [rho-Bmax]. Note that their values do not depend on *q* because these sequences belong to rotation-dominated type as mentioned above. As we have already shown, the magnetic field near the rotational axis is like a dipole decaying as *B**m**a**x*/*r*3 for *r* → ∞. Now, since we find that *B**m**a**x* is almost insensitive to the values of *q* with fixing the maximum density, *B**p* should then be proportional to 1/*R**p*3, where *R**p* is a polar radius. For instance in the sequences with SLy EOS, *R**p* changes only from $11.2[\rm{km}]$ to $10.6[\rm{km}]$ with decreasing of *q* from 0.9 to 0.7. On the other hand, it is found that the change in the equatorial radius is much larger up to about twenty percents for every EOS employed here when changing *q* from 0.9 to 0.7 (see Figure [rho-R]). For completeness, we finally show *M* − *R* relation in Figure [M-R], where *M* and *R* are baryon mass and equatorial radius, respectively. It is noted that the relation is also valid for the spherical stars, implying that a hierarchy of the stiffness becomes so . | | | | --- | --- | | | | | | | | | | | --- | --- | | | | | | | | | | | --- | --- | | | | | | | | | | | --- | --- | | | | | | | | | | | --- | --- | | | | | | | | | | | --- | --- | | | | | | | Model of Proto-Magnetar ----------------------- In the previous section, we find that the obtained equilibrium configurations are difficult to be applied for the observed magnetar. Here we decide to construct an equilibrium configuration of a proto-magnetar bearing in mind a hypothesis by that the magnetars could be originated from the proto-neutron-stars with a surface magnetic field of order $10^{16}[\rm{G}]$ and with its rotation period of an order of milliseconds. The models computed here should serve as examples of initial models of the hydrodynamic/evolutionary simulations of the proto-magnetar. As mentioned in Section [EOS], we can construct a model of proto-magnetar with finite temperature by employing Shen and LS EOSs. In the actual calculation, the maximum density, the parameters associated with magnetic field, axis ratio and entropy per baryon (in unit of Boltzmann’s constant) are fixed as $(\rho\_{max},a,\hat{\mu},q,s)=(10^{15}\rm{g}~\rm{cm}^{-3},20,0.1,0.9,2)$. Here we take the value of the entropy per baryon in the proto-neutron according to indicating its distributions to be nearly uniform as a result of the convection. Results are summarized in Table [hotmag]. Note that magnetic field strength at pole is order of $10^{16}[\rm{G}]$. Matter and magnetic field structure of these stars are depicted in Figure [Shen-protomag] and [LS-protomag]. As a result, it is found that the features appearing in the cold magnetized equilibrium configurations above are still valid despite of the incursion of the finite temperature effect on the EOSs. More interestingly, the masses of magnetars are also dependent on the EOSs as shown in Table [hotmag]. They are predicted to be as large as 3.0*M*⊙ for the LS EOS, such neutron stars have never been observed. Although we have little observational information about the masses of magnetar so far, our results suggest that we may obtain information about the EOS from the observation of the masses of magnetars. 140mm ccccccc *q* & $Bp[\rm{G}]$ & $Bc[\rm{G}]$ & *M*[*M*⊙] & $P[\rm{ms}]$ & $R[\rm{km}]$ & h 0.90&6.73E+16&2.63E+17&1.99E+01&1.25E+00&13.1E+00&0.367E+00 0.90&7.32E+16&2.80E+17&3.01E+01&1.06E+00&14.9E+00&0.326E+00 | | | | --- | --- | | | | | | | | | | | --- | --- | | | | | | | Conclusion ========== In this paper, we have investigated equilibrium sequences of magnetized rotating stars with four kinds of realistic equations of state (EOSs) of SLy et al.), FPS, Shen, and LS. We have employed the Tomimura-Eriguchi scheme to construct the equilibrium configurations. At first, we have obtained the solution in the regime of Newtonian gravity. Basic properties of the magnetized Newtonian stars are summarized as follows: (1) For the non-rotating sequences, there exist nearly toroidal configurations, *q**m**i**n* ∼ 0, irrespective of the EOSs. The magnetic energy stored in the stars increases with the degree of deformation being larger. (2) For the rotating sequences, we have categorized the sequences with four kinds of EOSs as rotation-dominated (RD) type, magnetic-dominated (MD) one, and nearly toroidal one. As a result, the sequences with softer EOSs of SLy and FPS are found to belong to RD or MD type and the sequences with stiffer EOSs of Shen and LS can belong also to the nearly toroidal one. (3) We have also focused on the structure of equilibrium configurations. Reflecting the stiffness of EOSs, the density distributions of stars with SLy or FPS EOSs concentrate more at the center than those of the stars with Shen or LS EOSs. The toroidal fields for SLy and FPS EOSs are found to distribute in relatively wider regions in the vicinity of the equatorial plane than for Shen and LS EOSs. The poloidal magnetic fields are also affected by the toroidal fields because poloidal fields are highly distorted where the toroidal fields are strong. Regardless of the difference of the EOSs, a global configuration of magnetic field line is found to be universal, namely the tori of twisted field lines around the symmetry axis inside the star and the untwisted poloidal field lines, which penetrate the stellar surface to continue to the vacuum. Then, adding the GR correction to the gravity, we have performed the quantitative investigation of the strong magnetized equilibrium stars. As a result, we find that the difference due to the EOSs becomes small because all the employed EOSs become sufficiently stiff for the large maximum density, typically greater than $10^{15}\rm{g}~\rm{cm}^{-3}$. We have investigated the relation between the baryon mass, the magnetic field at pole, the rotational period, the equatorial radius, and the maximum magnetic field as a function of the maximum density. The typical magnetic fields at pole are about $10^{16} \rm{G}$, the periods are about several milliseconds, the radii are about ten kilometers and the maximum magnetic fields are about $10^{17} \rm{G}$. The maximum mass is found to be 3.0*M*⊙ for SLy EOS, 2.6*M*⊙ for FPS EOS, 3.5*M*⊙ for Shen EOS and 2.7*M*⊙ for LS EOS for *q* = 0.7 configurations, respectively. These values are about twenty percents increasing for that of the spherical stars. Finally, we have computed equilibrium sequences at finite temperature for the Shen and LS EOSs aiming to construct the equilibrium configurations of the proto-magnetars. As a result, it is found that the features appearing in the cold magnetized equilibrium configurations above are still valid despite of the incursion of the finite temperature effect on the EOSs. Since the masses of the proto-magnetars are highly dependent on the EOSs, we have speculated that one may obtain information about the EOSs from the observation of the masses of magnetars. It is true that our treatment for the general relativistic effect is nothing but a crude approximation. Thus we consider this study as a prelude to a fully general relativistic study, however hoping for the moment that our results could serve as the initial condition for the hydrodynamic evolutionary studies of newly-born magnetars including the microphysical EOSs. Acknowledgments =============== We express thanks to S. Yoshida for fruitful discussions and to H. Ono, H. Suzuki, and K. Sumiyoshi for providing us the tabulated table for Lattimer-Swesty EOS. Kiuchi thanks to K. i. Maeda and Kotake to K. Sato for continuing encouragements. Kotake also thanks to S. Yamada and E. Müller for informative discussions. This work was supported in part by the Japan Society for Promotion of Science(JSPS) Research Fellowships (Kiuchi), Grants-in-Aid for the Scientific Research from the Ministry of Education, Science and Culture of Japan (No.1840044 and S19104006). 99 Akmal, A., Pandharipande, V. R., & Ravenhall, D. G. 1998, PRC, 58, 1804 Baym, G., & Pethick, C. 1979, Ann. Rev. Astron. Astrophys., 17, 415 Bocquet, M., Bonazzola, S., Gourgoulhon, E., & Novak, J. 1995, A&A, 301, 757 Bonazzola, S., & Gourgoulhon, E. 1996, A&A 312, 675 Braithwaite, J. & Spruit, H. C. 2004, Nature, 431, 819 Braithwaite, J. & Spruit, H. C. 2006, A&A, 450, 1097 Chandrasekhar, S., 1956, ApJ 124, 232 Chandrasekhar, S., & Fermi, E. 1953, ApJ 118, 116 Cowling, T. G. 1965, in Stellar Structure, ed. L. H. Allen & D. B. McLaughlin (Chicago: Univ. Chicago Press), 425 Douchin, F., & Haensel, P. 2001, A&A, 380, 151 Duncan, R. C. & Thompson, C., 1992, ApJ, 392, L9 Ferrario, L. & Wickramasinghe, D., 2007 Mon. Not. Roy. Astron. Soc.  375, 1009 Ferraro, V. C. A. 1937, MNRAS 97, 458 Ferraro, V. C. A. 1954, ApJ 119, 407 Friedman, B., & Pandharipande, V. R. 1981, Nuclear Physics A, 361, 502 Glendenning, N. K. 2001, Physics. Rep., 342, 393 Hachisu, I. 1986, ApJ 61, 479 Harding, A. K., & Lai, D. 2006, Reports of Progress in Physics, 69, 2631 Hurley, K., 1999, arXiv:astro-ph/9912061. Ioka, K. 2001, MNRAS 327, 639 Ioka, K. & Sasaki, M. 2003, PRD 67, 124026 Ioka, K. & Sasaki, M. 2004, ApJ 600, 296 Kaspi, V. M. 2004, Young Neutron Stars and Their Environments, 218, 231 Konno, K., Obata, T.& Kojima, Y. 1999, A&A, 352, 211 Kotake, K., Sawai, H., Yamada, S., & Sato, K. 2004, ApJ, 608, 391 Kotake, K., Yamada, S., Sato, K., Sumiyoshi, K., Ono, H., & Suzuki, H. 2004 PRD 69, 124004 Kouveliotou, C., et al. 1998, Nature, 393, 235 Kotake, K., Sato, K., & Takahashi, K. 2006, Reports of Progress in Physics, 69, 971 Lovelace, R. V. E., Mehanian, C. M.,  & Sulkanen, M. E. 1986, ApJS,  62, 1 Lattimer, J. M., & Douglas Swesty, F. 1991, Nuclear Physics A, 535, 331 Lattimer, J. M. & Prakash, M. 2006 astro-ph/0612440. Livne, E., Dessart, L., Burrows, A., & Meakin, C. A. 2007, ApJS, 170, 187 Marek, A., Dimmelmeier, H., Janka, H., Mueller, E., & Buras, R. 2006, A&A, 445, 273 Markey, P., & Taylar. R. J. 1973 MNRAS, 163, 77 Markey, P., & Taylar. R. J. 1974 MNRAS, 168, 505 Mereghetti, S. arXiv:astro-ph/9911252. Mestel, L. 1961, MNRAS, 122, 473 Miketinac, M. J. 1975, Ap&SS, 35, 349 Moiseenko, S. G., Bisnovatyi-Kogan, G. S., & Ardeljan, N. V. 2006, MNRAS, 370, 501 Monaghan, J. J. 1965, MNRAS, 131, 105 Monaghan, J. J. 1966, MNRAS, 134, 275 Morrison, I. A., Baumgarte, T. W., & Shapiro, S. L. 2004, ApJ, 610, 941 Nozawa, T., Stergioulas, N., Gourgoulhon, E., & Eriguchi, Y. 1998, A&A Sup. Ser., 132, 431 Obergaulinger, M., Aloy, M. A., Müller, E. 2006, A&A, 450, 1107 Oppenheimer, J. R., & Volkoff, G. 1939, Phys. Rev., 55, 374 Ostriker, J. P. & Hartwick, F. D. A. 1968, ApJ, 153, 797 Ostriker, J. P. & Mark, J. W-K. 1968, ApJ, 151, 1075 Pandharipande, V. R., & Ravenhall, D. G. 1989, NATO ASIB Proc. 205: Nuclear Matter and Heavy Ion Collisions, 103 Prakash, M., Bombaci, I., Prakash, M., Ellis, P. J., Lattimer, J. M., and Knorren, R. 1997, Phys. Rept.  280, 1 Prendergast, K. H. 1956, ApJ, 123, 498 Rampp, M., & Janka, H.-T. 2002, A&A, 396, 361 Roberts, P. H. 1955, ApJ, 122, 508 Roxburgh, I. W. 1966, MNRAS, 132, 347 Sawai, H., Kotake, K., & Yamada, S. 2005 ApJ 631, 446 Shapiro, S. L., & Teukolsky, S. A. 1983, Research supported by the National Science Foundation. New York, Wiley-Interscience, 1983, 663 p., Shibata, M., Taniguchi, K., & Uryu, K. 2005 PRD 71, 084021 Shen, H., Toki, H., Oyamatsu, K., Sumiyoshi, K. 1998, Nuclear Physics, A637, 43, 109, 301 Shen, H., Toki, H., Oyamatsu, K., & Sumiyoshi, K. 1998, Progress of Theoretical Physics, 100, 1013 Shibata, M., Liu, Y. T., Shapiro, S. L., & Stephens, B. C. 2006 PRD 74, 104026 Sumiyoshi, K., Yamada, S., Suzuki, H., Shen, H., Chiba, S., & Toki, H. 2005, ApJ, 629, 922 Takiwaki, T., Kotake, K., Nagataki, S., & Sato, K. 2004, ApJ, 616, 1086 Thompson, C. & Duncan, R. C. 1993, ApJ, 408, 194 Thompson, C. & Duncan, R. C. 1995, Mon. Not. Roy. Astron. Soc.  275, 255 Thompson, C. & Duncan, R. C. 1996, ApJ 473, 322 Tomiumra, Y. & Eriguchi, Y., 2005 MNRAS, 359, 1117 Watts, A. 2006, 36th COSPAR Scientific Assembly, 36, 168 Wiringa, R. B., Fiks, V., & Fabrocini, A. 1988, PRC, 38, 1010 Woltjer, L. 1960, ApJ, 131, 227 Woods, P. M. & Thompson, C. 2004, arXiv:astro-ph/0406133. Wright, G. A. E. 1973, MNRAS, 162, 339 Yamada, S., & Sawai, H. 2004, ApJ, 608, 907 Yoshida, S. & Eriguchi, Y. 2006, ApJS, 164, 156 Yoshida, S., Yoshida, S., & Eriguchi, Y. 2006, ApJ, 651, 462 Code Test ========= To check our code ability, we calculate the same sequences shown in with polytropic equations of state. We actually obtain two kinds of sequences, non-rotating sequence and rotating-sequence. In Table [non-rot], some physical quantities of the non-rotating sequence with the polytropic index *N* = 3.0, *k* = 0.1 and *a* = 15 (see equation ([eq26]), ([eq27])) are shown. For the axis ratio *q*, the upper row corresponds to the result by and the down one do to our result. Table [rig-rot] is a result of the rotating sequence with polytropic index *N* = 0.5, *a* = 20, and *μ̂* = 0.05. In all sequences, the relative errors for the each quantities with ’s result are less than one percent and VC is 10− 4 ∼ 10− 5. Therefore, we confirm that our code works well. 140mm | q | *H*/∣*W*∣ | *U*/∣*W*∣ | ∣*Ŵ*∣ | *μ̂* | *Ĉ* | *β̂* | *M̂* | VC | | --- | --- | --- | --- | --- | --- | --- | --- | --- | | 0.98 | 0.0050 | 0.332 | 7.310E-4 | 0.140 | -6.348E-3 | 5.276E-3 | 0.078 | | | | 0.0050 | 0.332 | 7.310E-4 | 0.140 | -6.348E-3 | 5.276E-3 | 0.078 | 2.55E-4 | | 0.90 | 0.0275 | 0.324 | 7.922E-4 | 0.299 | -7.163E-3 | 5.303E-2 | 0.082 | | | | 0.0275 | 0.324 | 7.922E-4 | 0.299 | -7.163E-3 | 5.303E-2 | 0.082 | 2.54E-4 | | 0.80 | 0.0635 | 0.312 | 9.004E-4 | 0.408 | -8.526E-3 | 5.335E-2 | 0.088 | | | | 0.0635 | 0.312 | 9.004E-4 | 0.408 | -8.526E-3 | 5.335E-2 | 0.088 | 2.50E-4 | | 0.70 | 0.1131 | 0.296 | 1.097E-3 | 0.476 | -1.064E-2 | 5.400E-3 | 0.099 | | | | 0.1131 | 0.296 | 1.097E-3 | 0.476 | -1.064E-2 | 5.400E-3 | 0.099 | 2.37E-4 | | 0.60 | 0.1856 | 0.272 | 1.560E-3 | 0.501 | -1.456E-2 | 5.576E-3 | 0.121 | | | | 0.1856 | 0.272 | 1.560E-3 | 0.501 | -1.456E-2 | 5.575E-3 | 0.121 | 2.07E-4 | | 0.50 | 0.3044 | 0.232 | 4.087E-3 | 0.447 | -2.721E-2 | 6.524E-3 | 0.211 | | | | 0.3042 | 0.232 | 4.079E-3 | 0.447 | -2.718E-2 | 6.520E-3 | 0.210 | 1.21E-4 | | 0.40 | 0.4087 | 0.197 | 7.624E-3 | 0.279 | -3.845E-2 | 6.188E-3 | 0.320 | | | | 0.4086 | 0.197 | 7.637E-3 | 0.279 | -3.849E-2 | 6.190E-3 | 0.321 | 6.59E-6 | | 0.30 | 0.4219 | 0.193 | 4.626E-3 | 0.247 | -3.022E-2 | 4.713E-3 | 0.254 | | | | 0.4218 | 0.193 | 4.638E-3 | 0.247 | -3.026E-2 | 4.716E-3 | 0.254 | 8.43E-6 | | 0.20 | 0.4287 | 0.190 | 3.434E-3 | 0.237 | -2.635E-2 | 4.003E-3 | 0.220 | | | | 0.4287 | 0.190 | 3.453E-3 | 0.237 | -2.643E-2 | 4.009E-3 | 0.221 | 3.12E-6 | | 0.10 | 0.4326 | 0.189 | 2.900E-3 | 0.233 | -2.443E-2 | 3.648E-3 | 0.203 | | | | 0.4326 | 0.189 | 2.926E-3 | 0.233 | -2.454E-2 | 3.658E-3 | 0.204 | 1.52E-5 | | 0.01 | 0.4338 | 0.189 | 2.743E-3 | 0.233 | -2.384E-2 | 3.539E-3 | 0.197 | | | | 0.4338 | 0.189 | 2.771E-3 | 0.232 | -2.396E-2 | 3.550E-3 | 0.198 | 1.96E-5 | 140mm | q | *H*/∣*W*∣ | *U*/∣*W*∣ | *T*/∣*W*∣ | ∣*Ŵ*∣ | $\hat{\Omega}^2$ | *Ĉ* | *β̂* | *M̂* | VC | | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | | $\hat{\Omega}^2<0$ | - | - | - | - | - | - | - | - | - | | 0.98 | 2.75E-3 | 0.331 | 2.04E-3 | 2.68E-1 | 1.50E-3 | -0.181 | 8.61E-2 | 2.24 | | | | 2.74E-3 | 0.331 | 2.04E-3 | 2.68E-1 | 1.50E-3 | -0.181 | 8.61E-2 | 2.24 | 2.19E-4 | | 0.90 | 2.88E-3 | 0.317 | 2.37E-2 | 2.31E-1 | 1.64E-2 | -0.175 | 7.78E-2 | 2.05 | | | | 2.89E-3 | 0.317 | 2.37E-2 | 2.31E-1 | 1.64E-2 | -0.175 | 7.77E-2 | 2.05 | 1.37E-4 | | 0.80 | 3.10E-3 | 0.297 | 5.33E-2 | 1.86E-1 | 3.40E-2 | -0.166 | 6.71E-2 | 1.80 | | | | 3.10E-3 | 0.297 | 5.33E-2 | 1.86E-1 | 3.40E-2 | -0.166 | 6.71E-2 | 1.80 | 1.25E-4 | | 0.70 | 3.37E-3 | 0.275 | 8.60E-2 | 1.44E-1 | 4.98E-2 | -0.155 | 5.63E-2 | 1.55 | | | | 3.37E-3 | 0.274 | 8.60E-2 | 1.44E-1 | 4.98E-2 | -0.155 | 5.63E-2 | 1.55 | 1.35E-4 | | 0.60 | 3.72E-3 | 0.251 | 1.22E-1 | 1.04E-1 | 6.29E-2 | -0.141 | 4.55E-2 | 1.29 | | | | 3.73E-3 | 0.251 | 1.22E-1 | 1.04E-1 | 6.29E-2 | -0.141 | 4.55E-2 | 1.29 | 1.22E-4 | | 0.50 | 4.21E-3 | 0.225 | 1.61E-1 | 6.75E-2 | 7.19E-2 | -0.123 | 3.46E-2 | 1.01 | | | | 4.22E-3 | 0.225 | 1.61E-1 | 6.76E-2 | 7.19E-2 | -0.123 | 3.46E-2 | 1.01 | 8.79E-5 | | 0.43 | 3.64E-3 | 0.210 | 1.83E-1 | 4.06E-2 | 7.46E-2 | -0.101 | 2.67E-2 | 0.750 | | | | 3.67E-3 | 0.210 | 1.84E-1 | 4.07E-2 | 7.46E-2 | -0.102 | 2.67E-2 | 0.750 | 9.53E-5 | | MS | - | - | - | - | - | - | - | - | - | General Relativistic Correction =============================== We start from the metric of spherical symmetry space time, $$\begin{aligned} ds^2 = - c^2e^{2\Phi\_{GR}}dt^2 + e^{2\Lambda}dr^2 + r^2 ( d\theta^2 + \sin^2 \theta d\varphi^2 ).\label{metric}\end{aligned}$$ Matter is assumed as a perfect fluid, thus the energy momentum tensor is $$\begin{aligned} T^{\mu\nu} = \left( \rho ( 1 + e ) + \frac{P}{c^2} \right) u^\mu u^\nu + P g^{\mu\nu}, \label{pfluid}\end{aligned}$$ where *ρ*, *e*, and *P* are a density, specific internal energy, and pressure. TOV equation  is $$\begin{aligned} &&\frac{dm}{dr} = 4 \pi r^2 \rho ( 1 + e ),\label{TOV1}\\ &&\frac{dP}{dr} = - G \frac{\rho h}{r^2}( m + \frac{4\pi r^3 P}{c^2} ) /( 1 - \frac{2Gm}{c^2r} ),\label{TOV2},\end{aligned}$$ where *h* = 1 + *e* + *P*/*ρ**c*2 is relativistic enthalpy and *m* is mass function defined as $$\begin{aligned} e^{2\Lambda} = \frac{1}{1-\frac{2Gm}{c^2r}}.\end{aligned}$$ Equation for the potential Φ*G**R* is $$\begin{aligned} \frac{d\Phi\_{GR}}{dr} = \frac{G}{c^2r^2}\left( m + \frac{4\pi r^3 P}{c^2} \right)/( 1 - \frac{2Gm}{c^2r} ).\label{TOV3}\end{aligned}$$ This potential, of course, reduces to the gravitational potential Φ in the Newtonian limit. To make clear a general relativistic contribution in the potential, we perform the Taylor expansion of equation ([TOV3]) with respect to *x*(*r*) ≡ 2*G**m*(*r*)/*r**c*2, which is guaranteed to be small than unity in star. Result is written in the form, $$\begin{aligned} &&\frac{d\Phi\_{GR}}{dr} = \frac{G}{c^2 r^2} \int^r\_0 4\pi r^2 \rho dr' \nonumber\\ &&+ \frac{1}{r} \Big[ \frac{G}{c^2 r}\int^r\_0 4 \pi r^2 \rho e dr' \nonumber\\ &&+ \frac{1}{2}\sum^\infty\_{n=2} x(r)^n + \frac{4\pi G r^2 P}{c^4} \sum^\infty\_{n=0} x(r)^n \Big]. \label{TOV4}\end{aligned}$$ Note that the first term is a contribution from Newtonian gravitational potential. So, we define a general relativistic correction in the potential as *δ*Φ*G**R* ≡ *c*2Φ*G**R* − Φ*N*. Boundary condition is derived from a requirement that the outer vacuum space time is equivalent to the Schwarzschild space time, $$\begin{aligned} \Phi\_{GR} = \frac{1}{2} \ln \left( 1 - \frac{2GM}{c^2 R} \right),\label{TOV5a}\end{aligned}$$ where *R* and *M* are radius and ADM mass of the star. From the Taylor expansion with respect to $x(R)=\frac{2GM}{c^2R}$, the boundary condition for the general relativistic correction term *δ*Φ*G**R* is derived. As a result, *δ*Φ*G**R* is expressed in an integral form, $$\begin{aligned} &&\delta \Phi\_{GR}(r) = - \frac{c^2}{2} \sum^\infty\_{n=2} \frac{x(R)^n}{n} - \frac{G}{r} \int^r\_0 4 \pi \tilde{r}^2 \rho e d\tilde{r} \nonumber\\ &&- \int^R\_r d\tilde{r} \frac{1}{\tilde{r}}\Big[ 4 \pi G \tilde{r}^2 \rho e + \frac{c^2}{2} \sum^\infty\_{n=2} x(\tilde{r})^n \nonumber\\ &&+ \frac{4\pi G r^2 P}{c^2} \sum^\infty\_{n=0} x(\tilde{r})^n \Big].\label{TOV5b}\end{aligned}$$ This spherically symmetric effective potential is also applied in our 2D axisymmetric configuration code, where we first compute angular averages of the relevant hydrodynamical variables. These are then used to calculate the spherical general correction term *δ*Φ*G**R*1*D*(*r*) in equation ([TOV5b]). Consequently, we modify the 2D Newtonian potential Φ*N*2*D*(*r*, *θ*) to obtain the two dimensional general relativistic potential $$\begin{aligned} \Phi^{2D}\_{GR}(r,\theta) = \Phi^{2D}\_{N}(r,\theta) + \delta \Phi^{1D}\_{GR}(r). \label{TOV5c}\end{aligned}$$ The method shown in has an ambiguity in definition of potential and imposes boundary condition of their potential at infinity. However, in our method the general relativistic correction in the potential is explicitly written down and its boundary condition is imposed at the stellar surface. So, the two method is a little bit different. Verification of our method is shown in the last in this appendix. Subsequently, we focus on the Bernoulli equation, which is ordinary used to obtain axisymmetric configuration (e.g. ([eq22])). Under the metric form ([TOV1]), four velocity is given by $$\begin{aligned} u^\mu = ( e^{-\Phi\_{GR}},0,0,0).\label{TOV6}\end{aligned}$$ The relativistic Bernoulli equation with spherically symmetric is also written as $$\begin{aligned} c^2 \ln h = \ln u^t + C,\label{TOV7}\end{aligned}$$ where *h* and C are relativistic enthalpy, integral constant, respectively. Combining equations ([TOV6])-([TOV7]), we find $$\begin{aligned} c^2 \ln h = - (\Phi\_N + \delta \Phi\_{GR}) + C.\label{TOV8}\end{aligned}$$ Finally we modify the Bernoulli equation of magnetized rotating case as $$\begin{aligned} c^2 \ln h = - \Phi^{2D}\_{GR} + \frac{1}{2} R^2 \Omega^2 + \int^{\Psi} \mu(u) du + C. \label{TOV9}\end{aligned}$$ Note that this treatment may be speculation. To verify our treatment, we calculate spherical symmetric stars with four kinds of EOS in both 1D TOV code and 2D axisymmetric code. Relations of mass, both ADM and baryon mass, with central density in these stars are displayed in Figure [GR1D]. We confirm our treatment works well. | | | | --- | --- | | | | | | | --- 1. E-mail: [email protected][↩](#fnref1) 2. E-mail:[email protected][↩](#fnref2)
arxiv_0000101
Bachelor Thesis **Engineering Faster Sorters** **for Small Sets of Items** Jasper Anton Marianczuk Date: May 09, 2019 | | | | --- | --- | | Supervisors: | Prof. Dr. Peter Sanders | | | Dr. Timo Bingmann | Institute of Theoretical Informatics, Algorithmics Department of Informatics Karlsruhe Institute of Technology --- Hiermit versichere ich, dass ich diese Arbeit selbständig verfasst und keine anderen, als die angegebenen Quellen und Hilfsmittel benutzt, die wörtlich oder inhaltlich übernommenen Stellen als solche kenntlich gemacht und die Satzung des Karlsruher Instituts für Technologie zur Sicherung guter wissenschaftlicher Praxis in der jeweils gültigen Fassung beachtet habe. Karlsruhe, den 09.05.2019 **Abstract** Sorting a set of items is a task that can be useful by itself or as a building block for more complex operations. That is why a lot of effort has been put into finding sorting algorithms that sort large sets as efficiently as possible. But the more sophisticated and fast the algorithms become asymptotically, the less efficient they are for small sets of items due to large constant factors. A relatively simple sorting algorithm that is often used as a base case sorter is insertion sort, because it has small code size and small constant factors influencing its execution time. This thesis aims to determine if there is a faster way to sort these small sets of items to provide an efficient base case sorter. We looked at sorting networks, at how they can improve the speed of sorting few elements, and how to implement them in an efficient manner by using conditional moves. Since sorting networks need to be implemented explicitly for each set size, providing networks for larger sizes becomes less efficient due to increased code sizes. To also enable the sorting of slightly larger base cases, we modified Super Scalar Sample Sort and created Register Sample Sort, to break down those larger sets into sizes that can in turn be sorted by sorting networks. From our experiments we found that when sorting only small sets, the sorting networks outperform insertion sort by at least 25% for any array size between 2 and 16. When integrating sorting networks as a base case sorter into quicksort, we achieved far less performance improvements over using insertion sort, which is due to the networks having a larger code size and cluttering the L1 instruction cache. The same effect occurs when including Register Sample Sort as a base case sorter for IPS4o. But for computers that have a larger L1 instruction cache of 64 KiB or more, we obtained speed-ups of 6.4% when using sorting networks as a base case sorter in quicksort, and of 9.2% when integrating Register Sample Sort as a base case sorter into IPS4o, each in comparison to using insertion sort as the base case sorter. In conclusion, the desired improvement in speed could only be achieved under special circumstances, but the results clearly show the potential of using conditional moves in the field of sorting algorithms. **Zusammenfassung** Das Sortieren einer Menge von Elementen ist ein Prozess der für sich alleine nützlich sein kann oder als Baustein für komplexere Operationen dient. Deswegen wurde in den Entwurf von Sortieralgorithmen, die eine große Menge an Elementen effizient sortieren, bereits großer Aufwand investiert. Doch je ausgefeilter und schneller die Algorithmen asymptotisch sind, desto ineffizienter werden sie beim Sortieren kleinerer Mengen aufgrund hoher konstanter Faktoren. Ein relativ einfacher Sortieralgorithmus, der oft als Basisfall Sortierer genutzt wird, ist Insertion Sort, weil dessen Code kurz ist und er kleine konstante Faktoren hat. Diese Bachelorarbeit hat das Ziel herauszufinden ob es einen schnelleren Algorithmus gibt um solche wenigen Elemente zu sortieren, damit dieser als effizienter Basisfall Sortierer genutzt werden kann. Wir haben uns dazu Sortiernetzwerke angeschaut, wie man durch sie das Sortieren kleiner Listen beschleunigen kann und wie man sie effizient implementiert: Durch das Ausnutzen von konditionellen `move`-Befehlen. Weil Sortiernetzwerke für jede Listengröße explizit implementiert werden müssen, nimmt die Effizienz des Sortierens mittels Sortiernetwerken wegen erhöhter Codegröße ab je größer die Listen sind, die sortiert werden sollen. Um auch das Sortieren etwas größerer Basisfälle zu ermöglichen haben wir Super Scalar Sample Sort modifiziert und Register Sample Sort entworfen, welcher eine größere Liste in mehrere kleine Listen zerteilt, die dann von den Sortiernetzwerke sortiert werden können. In unseren Experimenten sind wir zu dem Ergebnis gekommen, dass, wenn nur kleine Mengen sortiert werden, die Sortiernetzwerke um mindestens 25% schneller sind als Insertion Sort, für alle Listen, die zwischen 2 und 16 Elementen enthalten. Beim Integrieren der Sortiernetzwerke als Basisfall Sortierer in Quicksort haben wir weit weniger Geschwindigkeitszuwachs gegenüber der Benutzung von Insertion Sort erhalten, was daran liegt, dass der Code der Netzwerke mehr Platz benötigt und den Code für Quicksort aus dem L1 Instruktionscache verdrängt. Derselbe Effekt tritt auch beim Benutzen von Register Sample Sort as Basisfall Sortierer für IPS4o auf. Allerdings konnten wir uns bei Rechnern, die über einen größeren L1 Instruktionscache von 64 KiB oder mehr verfügen, mit Sortiernetzwerken bei Quicksort um 6,4% und mit Register Sample Sort bei IPS4o um 9,2% gegenüber Insertion Sort als Basisfall Sortierer verbessern. Zusammenfassend haben wir die angestrebte Verbesserung nur unter besonderen Bedingungen erreicht, aber die Ergebnisse weisen deutlich darauf hin, dass die konditionellen `move`-Befehle Potential im Anwendungsbereich von Sortieralgorithmen haben. Introduction ============ Motivation ---------- Sorting, that is rearranging the elements in a set to be in a specific order, is one of the basic algorithmic problems. In school and university, basic sorting algorithms like bubble sort, insertion sort, and merge sort, as well as a simple variant of quicksort are taught at first. These algorithms are rated by the number of comparisons they require to sort a set of items. This amount of comparisons is put into relation to the input size and looked at on an asymptotic level. Only later one realizes that what looks good on paper does not have to work well in practice, so factors like average cases, cache effects, hardware setups, and constant factors need to be taken into consideration, too. A sophisticated choice on which sorting algorithm to use (for a particular use case) should be influenced by all of these factors. Complex sorting algorithms aim to sort a large number of items quickly, and a lot of them follow the divide-and-conquer idea of designing an algorithm. However, sorting small sets of items, e.g. with 16 elements or less, is usually fast enough that investing a lot of effort into optimizing sorting algorithms for those cases results in very small gains, looking at the absolute amount of time saved. The complex sorters do not perform as well when sorting small sets of items, having good asymptotic properties but larger constant factors that become more important for the small sizes. Because of that the *base case* of sorting small enough subsets is performed using a simpler algorithm, which is often insertion sort. It has a worst-case run-time of O(*n*2), but small constant factors that make it suitable to use for small *n*. If this sorter is executed many times as base case of a larger sorter though, the times do sum up to contribute to a substantial part of the sorting time. The guiding question of this thesis is: Is there a faster way to sort sets of up to 16 elements than insertion sort? When sorting a set of uniformly distributed random numbers, the chance of any number being greater than another is on average 50%. Therefore, whenever a conditional branch is influenced by one element’s relation to another, one in two of those branches will be mispredicted, which leads to an overall performance penalty. This is a problem that has already been looked at by Michael Codish, Luís Cruz-Filipe, Markus Nebel and Peter Schneider-Kamp in “Optimizing sorting algorithms by using sorting networks” in 2017, and this thesis has taken a great deal of inspiration from it. Overview of the Thesis ---------------------- We will first look at sorting networks in section [section:networks]. Section [section:preliminaries:networks] gives a basis of sorting networks and assembly code. After that, we look at different ways of implementing sorting networks efficiently in C++ in section [section:implementation-networks]. For that we focused on elements that consist of a key and an additional reference value. This enables the sorting of complex items, not being limited to integers. In section [section:samplesort] we will take a small detour to look at Super Scalar Sample Sort and develop an efficient modified version for sets with 256 elements or less by holding the splitters in general purpose registers instead of an array. After that section [section:results] discusses the results and improvements of using sorting networks we achieved in our experiments, measuring the performance of the sorting networks and sample sort individually, and also including them as base cases into quicksort and IPS4o . After that we conclude the results of this thesis in section [section:conclusion]. Sorting Networks ================ Preliminaries ------------- Sorting algorithms can generally be classified into two groups: Those of which the behaviour depends on the input, e.g. quicksort where the sorting speed depends on how well the chosen pivot partitions the set into equally-sized halves, and those of which the behaviour is not influenced by the configuration of the input. The latter are also called *data-oblivious*. One example of a data-oblivious sorting algorithm is the sorting network. A sorting network of size *n* consists of a number of *n* so-called channels numbered 1 to *n*, each representing one of the inputs, and connections between the channels, called comparators. Where two channels are connected by a comparator it means that the values are to be compared, and if the channel with the lower number currently holds a value that is greater than the value of the channel with the higher number, the values are to be exchanged between the channels. The comparators are given in a fixed order that determines the sequence of executing these *conditional swaps*, so that in the end 1. the channels contain a permutation of the original input, and 2. the values held by the channels are in nondecreasing order. Sorting networks are data-oblivious because all the comparisons are always performed, and in the same order, no matter which permutation of an input is given. For any sorting network, two metrics can be used to quantify it: the *length* and the *depth*. A network’s length refers to the number of comparators it contains, and a network’s depth describes the minimal amount of levels a network can be divided into. Where two comparators are ordered one after the other, and no channel is used by both comparators, they can be combined into a level. In other words: the result of the second comparator does not depend upon the result of the first. Inductively, any comparator can be merged into a level that executes right before or after it, if its channels are not already used by any comparator in the level. Since now all the comparators in a level are independent from one another, they can be executed in parallel. ### Networks in Practice * **Best known networks:** For networks of up to size 16 there exist proven optimal lengths and proven optimal depths. For example, the network for 10 elements with optimal length 29 has depth 9, the one with optimal depth 7 has length 31. For networks of greater size there only exist currently known lowest numbers of length or depth. Those best networks are acquired through optimizations that were initially done by hand and nowadays are realized e.g. with the help of computers and evolutionary algorithms. * **Recursive networks:** For creating sorting networks there also exist algorithms that work in a recursive divide-and-conquer way: split the input into two parts, sort each part recursively, and merge the two parts together in the end. Representatives for this kind of approach are the constructions of R.J. Nelson and B.C. Bose and the algorithm by K.E. Batcher. Bose and Nelson split the input sequence into first and second half, while Batcher partitions into elements with an even index and elements with an odd index. The advantage of those recursive networks over the specially optimized ones is that they can easily be created even for large network sizes. While the generated networks may have more comparators than the best known networks, the number of comparators in a network acquired from either Bose-Nelson or Batcher of size *n* has an upper bound of O(*n* (log*n*)2). ![image](Network_Size_6_BoseNelsonNetworks_Locality_cropped.jpg) [network:bosenelson:6] Sorting networks are usually depicted by using horizontal lines for the channels, and vertical connections between these lines for the comparators. A network by Bose and Nelson for 6 elements displayed like that can be seen in figure [network:bosenelson:6]. ### Improving the Speed of Sorting through Sorting Networks An important question to ask is how sorting networks can improve the sorting speed on a set of elements (on average), if they can not take any shortcuts for “good” inputs, like an insertion sort that would leverage an already sorted input and do one comparison per element. The answer to this question is *branching*. Because the compiler knows in advance which comparisons are going to be executed in which order, the control flow does not contain conditional branches, in particular getting rid of expensive branch mispredictions. On uniformly distributed random inputs, the chances that any number is smaller than another is 50% on average, making branches unpredictable. In the case of insertion sort that means not knowing in advance with how many elements the next one has to be compared until it is inserted into the right place. Even though with sorting networks the compiler knows in advance when to execute which comparator, implementing the compare-and-swap operation in a naive way (as seen in [section:preliminaries:compare-and-swap]) the compiler might still generate branches. In that case, the sorting networks are no faster than insertion sort, or even slower. ### Compare-and-Swap For sorting networks, the basic operation used is to compare two values against each other. If they are in the wrong order (the “smaller” element occurs after the “bigger” one in the sequence), they are swapped. Intuitively, one might implement the operation in C++ like this: ``` void ConditionalSwap(TValueType& left, TValueType& right) { if (left > right) { std::swap(left, right); } } ``` Here `TValueType` is a template typename and can be instantiated with any type that implements the `>` operator. As suggested in, the same piece of code can be rewritten like this: ``` void ConditionalSwap2(TValueType& left, TValueType& right) { TValueType temp = left; if (temp > right) { left = right; } if (temp > right) { right = temp; } } ``` At first glance it looks like we now have two branches that can be taken. But the code executed if the condition is true now only consists of a single assignment each, which can be expressed in x86-Architecture through a *conditional move* instruction. In AT&T syntax (see section [section:preliminaries:asm]), a conditional move (`cmov a,b`) will write a value in register `a` into register `b`, if a condition is met. If the condition is not met, no operation takes place (still taking the same number of CPU cycles as the move operation would have). Since the address of the next instruction no longer depends upon the previously evaluated condition, the control flow now does not contain branches. The only downside of the conditional move is that it can take longer than a normal move instruction on certain architectures, and can only be executed when the comparison has performed and its result is available. When the elements to be sorted are only integers, some compilers do generate code with conditional moves for those operations. When the elements are more general (in this thesis we will look at pairs of an unsigned 64 bit integer key and an unsigned 64 bit reference value, which could be a pointer or an address in an array), gcc 7.3.0, the compiler used for the experiments, does not generate conditional moves. To force the usage of conditional moves, a feature of gcc was used that allows the programmer to specify small amounts of assembly code to be inserted into the regular machine code generated by gcc, called inline assembly. This mechanic and the notation is further explained in section [section:preliminaries:asm]. ### Assembly Code Assembly code represents the machine instructions executed by the CPU. It can be given as the actual opt-codes or as human-readable text. There are two different conventions for the textual representation, the Intel syntax or MASM syntax and the AT&T syntax. The main differences are: c | p6.2cm | p6.2cm & & Operand size & The size of the operand does not have to be specified & The size of the operand is appended to the instruction: `b` (byte = 8 bit), `l` (long = 32 bit), `q` (quad-word = 64 bit) Parameter order & The destination is written first, then the source of the value: `mov dest,src` & The source is written first, then the destination: `movq src,dest` In this thesis only the AT&T syntax will be used. The gcc C++ compiler has a feature that allows the programmer to write assembly instructions in between regular C++ code, called “inline assembly” (`asm`). A set of assembly instructions to be executed must be given, followed by a definition for input and output variables and a list of clobbered registers. This extra information is there to communicate to the compiler what is happening inside the `asm` block. Gcc itself does not parse or optimize the given assembly statements, they are only after compilation added into the generated assembly code by the GNU Assembler. A variable being in the output list means that the value will be modified, a clobbered register is one where gcc cannot assume that the value it held before the `asm` block will be the same as after the block. In this thesis, the clobbered registers will almost always be the conditional-codes registers (`cc`), which include the carry-flag, zero-flag and the signed-flag, which are modified during a compare-instruction. This way of specifying the input, output and clobbered registers is also called *extended asm*. Taking the code from [section:preliminaries:compare-and-swap], and assuming `TValueType = uint64_t`, the statement ``` if (temp > right) { left = right; } ``` can now be written as ``` __asm__( "cmpq %[temp],%[right]\n\t" //performs right - temp internally "cmovbq %[right],%[left]\n\t" //left = right, if right < temp : [left] "=&r"(left) //output : "0"(left), [right] "r"(right), [temp] "r"(temp) //input : "cc" //clobber ); ``` In extended `asm`, one can define C++ variables as input or output operands, and gcc will assign a register for that value (if it has the `"r"` modifier), and also write the value in an output register back to the given variable after the `asm` block. Note that the names in square brackets are symbolic names only valid in the context of the assembly instructions and independent from the names in the C++ code before. The link between the C++ names and the symbolic names happens in the input and output declarations. With the conditional moves it is important to properly declare the input and output variables, because they perform a task that is a bit unusual: an output variable may be overwritten, and also may not. For the output register for `left`, two things must apply: 1. if the condition is false, it must hold the value of `left`, and [cmov:condition:1] 2. if the condition is true, it must hold the value of `right`. For optimizations purposes, the compiler might reduce the number of registers used by placing the output of one operation into a register that previously held the input for some other operation. To prevent this, the declaration for the output `[left] "=&r"(left)` has the `"&"` modifier added to it, meaning it is an “early clobber” register and that no other input can be placed in that register. In combination with `"0"(left)` in the input line, it is also tied to an input, so that the previous value of `left` is loaded into the register beforehand, to comply with constraint [cmov:condition:1]. Because we already declared it as output, instead of giving it a new symbolic name we tie it to the output by referencing its index in the output list, which since it is the first output variable is `"0"`. The `"="` in the output declaration solely means that this register will be written to. Any output needs to have the `"="` modifier. We see that each assembly instruction is postfixed with `\n\t`. That is because the instruction strings are appended into a single instruction string during compilation and `\n\t` tells the GNU assembler where one instruction ends and the next begins. The `cmov` instruction is postfixed with a `b` in this example, which stands for “below”. So the `cmov` will be executed if `right` is below `temp` (unsigned comparison `right < temp`). Apart from below we will also see not equal (`ne`) and carry (`c`) as a postfix. In addition to that, both the `cmp` and the `cmovb` are postfixed with a `q` (quad-word) to indicate that the operands are 64-bit values. When a subtraction `minuend` − `subtrahend` is performed and `subtrahend` is larger than `minuend` (interpreted as unsigned numbers), the operation causes an underflow which results in the carry flag being set to 1. The check for that carry flag being 1 can be used as a condition by itself, and the carry flag influences other condition checks like *below*. This property of the comparison setting the carry flag will be used in section [section:samplesort:impl]. Implementation of Sorting Networks ---------------------------------- ### Providing the Network Frame ![Best network with optimal length for 16 elements](Network_Size_16_BestNetworks.jpg "fig:") [network:best:16] ![Bose Nelson network for 16 elements optimizing locality](Network_Size_16_BoseNelsonNetworks_Locality.jpg "fig:") [network:bosenelson:16] ![Bose Nelson network for 16 elements optimizing parallelism](Network_Size_16_BoseNelsonNetworks_Parallelism.jpg "fig:") [network:bosenelson:parl:16] The best networks for sizes of up to 16 elements were taken from John Gamble’s Website and are length-optimal. The Bose Nelson networks have been generated using the instructions from their paper. For sizes of 8 and below the best and generated networks have the same amount of comparators and levels. For sizes larger than 8 the generated networks are at a disadvantage because they have more comparators and/or levels. As a trade-off their recursive structure makes it possible to leverage a different trait: locality. Instead of optimizing them to sort as parallel as possible, we can first sort the first half of the set, then the second half, and then apply the merger. This way, chances are higher that all $\frac{n}{2}$ elements of the first half might fit into the processor’s general purpose registers. During this part of the sorting routine, no accesses to memory or cache are required. To determine if there is a visible speed-up, the networks were generated optimizing (a) locality and (b) parallelism. As an extra idea, the Bose Nelson networks were generated in a way that one can pass the elements as separate parameters instead of as an array. That way one can sort elements that are not contiguously placed in memory. Because the networks were implemented as method calls to the smaller sorters and merge methods, there would be a large overhead in placing many elements onto the call stack for each method call. While we hoped this would make a difference by reducing code size, the overhead for the method call was too large. That is why all the methods are declared `inline` which results in the same flat sequence of swaps for each size the networks optimizing locality have. Examples of networks for 16 elements can be seen in figures [network:best:16], [network:bosenelson:16] and [network:bosenelson:parl:16]. All networks are implemented so that they have an entry method that takes a pointer to an array `A` and an array size `n` as input and delegates the call to the specific method for that number of elements, which in turn executes all the comparators. To measure different implementations for the conditional swaps, the network methods and the swap are templated, so that when calling the network with an array of a specific type the respective specialized conditional-swap implementation will be used. Our approach differs from the work in in the type of elements that were sorted. While they measured the sorting of `int`s, which are usually 32-bit sized integers, we made the decision to sort elements that consist of a 64-bit integer key and a 64-bit integer reference value, enabling not only the sorting of numbers but also the sorting of complex elements, when giving a pointer or an array index as the reference value to the original set. This was implemented by creating `struct`s that contain a key and reference value each, having the following structure: ``` struct SortableRef { uint64_t key, reference; } ``` They also define the operators  `>, >=, ==, <, <=`  and  `!=`  for reasons of usability, and templated methods `uint64_t GetKey(TSortable)` and `uint64_T GetReference(TSortable)` are available. ### Implementing the Conditional Swap The `ConditionalSwap` is implemented as a templated method like this: ``` template <typename TValueType> inline void ConditionalSwap(TValueType& left, TValueType& right) { //body } ``` The following variants will represent the body of one specialization of the template function for a specific struct. Each of them was given a three letter abbreviation to name them in the results. We implemented the following approaches: * using std::swap (`Def`) * using inline if statements (`QMa`) * using std::tie and std::tuple (`Tie`) * using jmp and xchg (`JXc`) * using four cmovs and temp variables (`4Cm`) * using four cmovs split from one another and temp variables (`4CS`) * using six cmovs and temp variables (`6Cm`) * moving pointers with cmov instead of values (`Cla`) * moving pointers and supporting a predicate (`CPr`) The details of implementation can be seen in the following paragraphs. #### using std::swap (`Def`) The default implementation for the template makes use of the defined `<` operator: ``` if (right < left) std::swap(left, right); ``` This is the intuitive way of writing the conditional swap we already saw in section [section:preliminaries:compare-and-swap], without any inline assembly. #### using inline if statements (`QMa`) ``` bool r = (left > right); auto temp = left; left = r? right : left; right = r? temp : right; ``` Here it was attempted to convince the compiler to generate conditional moves by using the *inline if*-statements with trivial values in the else part. #### using std::tie and std::tuple (`Tie`) ``` std::tie(left, right) = (right < left)? std::make_tuple(right, left) : std::make_tuple(left, right); ``` This approach uses assignable tuples (tie). #### using jmp and xchg (`JXc`) ``` __asm__( "cmpq %[left_key],%[right_key]\n\t" "jae %=f\n\t" "xchg %[left_key],%[right_key]\n\t" "xchg %[left_reference],%[right_reference]\n\t" "%=:\n\t" : [left_key] "=&r"(left.key), [right_key] "=&r"(right.key), [left_reference] "=&r"(left.reference), [right_reference] "=&r"(right.reference) : "0"(left.key), "1"(right.key), "2"(left.reference), "3"(right.reference) : "cc" ); ``` The `%=` generates a unique label for each instance of the `asm` statement, so that the jumps go where they belong. #### using four cmovs and temp variables (`4Cm`) ``` uint64_t tmp = left.key; uint64_t tmpRef = left.reference; __asm__( "cmpq %[left_key],%[right_key]\n\t" "cmovbq %[right_key],%[left_key]\n\t" "cmovbq %[right_reference],%[left_reference]\n\t" "cmovbq %[tmp],%[right_key]\n\t" "cmovbq %[tmp_ref],%[right_reference]\n\t" : [left_key] "=&r"(left.key), [right_key] "=&r"(right.key), [left_reference] "=&r"(left.reference), [right_reference] "=&r"(right.reference) : "0"(left.key), "1"(right.key), "2"(left.reference), "3"(right.reference), [tmp] "r"(tmp), [tmp_ref] "r"(tmpRef) : "cc" ); ``` #### using four cmovs split from one another and temp variables (`4CS`) ``` uint64_t tmp = left.key; uint64_t tmpRef = left.reference; __asm__ volatile ( "cmpq %[left_key],%[right_key]\n\t" : : [left_key] "r"(left.key), [right_key] "r"(right.key) : "cc" ); __asm__ volatile ( "cmovbq %[right_key],%[left_key]\n\t" : [left_key] "=&r"(left.key) : "0"(left.key), [right_key] "r"(right.key) : ); __asm__ volatile ( "cmovbq %[right_reference],%[left_reference]\n\t" : [left_reference] "=&r"(left.reference) : "0"(left.reference), [right_reference] "r"(right.reference) : ); __asm__ volatile ( "cmovbq %[tmp],%[right_key]\n\t" : [right_key] "=&r"(right.key) : "0"(right.key), [tmp] "r"(tmp) : ); __asm__ volatile ( "cmovbq %[tmp_ref],%[right_reference]\n\t" : [right_reference] "=&r"(right.reference) : "0"(right.reference), [tmp_ref] "r"(tmpRef) : ); ``` Because we split the `asm` blocks, they have to be declared `volatile` so that the optimizer does not move them around or out of order. Without declaring them `volatile`, some of the networks were not sorting correctly. The blocks were split because we hoped the compiler would be able to insert operations that do not affect the conditional codes and are unrelated to the current conditional swap between the `cmp`-instruction and the conditional moves, to reduce the amount of wait cycles that have to be performed. This was successful as can be seen in the experimental results in section [section:experiments:normal]. #### using six cmovs and temp variables (`6Cm`) ``` uint64_t tmp; uint64_t tmpRef; __asm__ ( "cmpq %[left_key],%[right_key]\n\t" "cmovbq %[left_key],%[tmp]\n\t" "cmovbq %[left_reference],%[tmp_ref]\n\t" "cmovbq %[right_key],%[left_key]\n\t" "cmovbq %[right_reference],%[left_reference]\n\t" "cmovbq %[tmp],%[right_key]\n\t" "cmovbq %[tmp_ref],%[right_reference]\n\t" : [left_key] "=&r"(left.key), [right_key] "=&r"(right.key), [left_reference] "=&r"(left.reference), [right_reference] "=&r"(right.reference), [tmp] "=&r"(tmp), [tmp_ref] "=&r"(tmpRef) : "0"(left.key), "1"(right.key), "2"(left.reference), "3"(right.reference), "4"(tmp), "5"(tmpRef) : "cc" ); ``` #### moving pointers with cmov instead of values (`Cla`) This idea came from a result created by the clang compiler from the special code as seen in the ConditionalSwap2 method in [section:preliminaries:compare-and-swap]. For the transformation to gcc, we took only the minimal necessary instructions concerning the conditional move into the `asm` block: ``` SortableRef_ClangVersion* leftPointer = &left; SortableRef_ClangVersion* rightPointer = &right; uint64_t rightKey = right.key; SortableRef_ClangVersion tmp = left; __asm__ volatile( "cmpq %[tmp_key],%[right_key]\n\t" "cmovbq %[right_pointer],%[left_pointer]\n\t" : [left_pointer] "=&r"(leftPointer) : "0"(leftPointer), [right_pointer] "r"(rightPointer), [tmp_key] "m"(tmp.key), [right_key] "r"(rightKey) : "cc" ); left = *leftPointer; leftPointer = &tmp; __asm__ volatile( "cmovbq %[left_pointer],%[right_pointer]\n\t" : [right_pointer] "=&r"(rightPointer) : "0"(rightPointer), [left_pointer] "r"(leftPointer) : ); right = *rightPointer; ``` #### moving pointers and supporting a predicate (`CPr`) Instead of performing the comparison inside the `asm` block, which requires knowledge of the datatype of the key, it can also be done over a predicate, using the result of that comparison inside the inline assembly: ``` SortableRef_ClangPredicate* leftPointer = &left; SortableRef_ClangPredicate* rightPointer = &right; SortableRef_ClangPredicate temp = left; int predicateResult = (int) (right < temp); __asm__ volatile( "cmp $0,%[predResult]\n\t" "cmovneq %[right_pointer],%[left_pointer]\n\t" : [left_pointer] "=&r"(leftPointer) : "0"(leftPointer), [right_pointer] "r"(rightPointer), [predResult] "r"(predicateResult) : "cc" ); left = *leftPointer; leftPointer = &temp; __asm__ volatile( "cmovneq %[left_pointer],%[right_pointer]\n\t" : [right_pointer] "=&r"(rightPointer) : "0"(rightPointer), [left_pointer] "r"(leftPointer) : ); right = *rightPointer; ``` For the `Cla` implementation the `b` in `cmovb` was used to execute the conditional move if `right_key` was smaller than `temp_key`. If that is the case, the predicate will return true, or as an int a value not equal to zero. When comparing this result to 0, the `cmov` is to be executed if the result was any value other than zero, so the postfix here is `ne` (not equal). Note that while the knowledge of how to compare the elements is still present by doing the comparison directly (`right < temp`), the compiler now needs to take the result from the comparison, and put it into an integer that is then used in the `asm` block. The only addition to make it completely independent from the sorted elements would be to pass a predicate to do the comparison, which would also involve modifying the network frame to take and pass the predicate. To measure on the same network frame we took this shortcut of doing the comparison using the `<` operator. Register Sample Sort ==================== Preliminaries ------------- Sample sort is a sorting algorithm that follows the divide-and-conquer principle. The input is separated into *k* subsets, that each contain elements within an interval of the total ordering, with the intervals being distinct from one another. That is done by first choosing a subset *S* of *a* ⋅ *k* elements and sorting *S*. Afterwards the splitters {*s*0, *s*1, …, *s**k* − 1, *s**k*} = { − ∞, *S**a*, *S*2*a*, …, *S*(*k* − 1)*a*, ∞} are taken from *S*. The parameter *a* denotes the oversampling factor. Oversampling is used to get a better sample of splitters to achieve more evenly-sized partitions, trading for the time that is required to sort the larger sample. With the splitters the elements *e**i* are then *classified*, placing them into buckets *b**j*, where *j* ∈ {1, …, *k*} and *s**j* − 1 < *e**i* ≤ *s**j*. For *k* being a power of 2, this placement can be achieved by viewing the splitters as a binary tree, with *s**k*/2 being the root, all *s**l* with *l* < *k*/2 representing the left subtree and those with *l* > *k*/2 the right one. To place an element, one must only traverse this binary tree, resulting in a binary search instead of a linear one. Quicksort is therefore a specialization of sample sort with fixed parameter *k* = 2, having only one splitter, the pivot, and splitting the input into two partitions. Implementing Sample Sort for Medium-Sized Sets ---------------------------------------------- The motivation to look at sample sort was that we wanted to see how well the sorting networks perform when using them as a base case for the *In-Place Parallel Super Scalar Samplesort* (IPS4o) by Michael Axtmann, Sascha Witt, Daniel Ferizovic and Peter Sanders. The problem that occured is that IPS4o can go into the base case with sizes larger than 16, while the networks we looked at only sort sets of up to 16 elements. To close that gap, we created a sequential version of Super Scalar Sample Sort that can reduce base case sizes of up to 256 down to blocks of 16 or less in an efficient manner. Since the total size was expected to not be much greater than 256, not much effort was made to keep the algorithm in-place. The central idea was to place the splitters not into an array, as described in, but to hold them in general purpose registers for the whole duration of the element classification. The question now arose as to which splitter an element needs to be compared to after the first comparison with the middle splitter. When the splitters are organized in a binary heap in an array, that can be done by using array indices, the children of splitter *j* being at positions 2*j* and 2*j* + 1. If an element is smaller than *s**j*, it would afterwards be compared to *s*2*j*, otherwise to *s*2*j* + 1. But this way of accessing the splitters does not work when they are placed in registers. The solution was to create a copy of the left subtree, and to conditionally overwrite that with the right subtree, should the element be greater than the root node. The next comparison is then made against the root of the temporary tree that now contains the correct splitters to compare that element against. For 3 splitters that requires 1 conditional move, and for 7 splitters would require 3 conditional moves after the first comparison and 1 more after the second comparison, per element. After finding the correct splitters to compare to, we are left with one more problem: How to know in which bucket the element is to be placed into at the end. In this was done by making use of the calculated index determining the next splitter to compare to. We chose an approach similar to creating this index, using the correlation between binary numbers and the tree-like structure of the splitters. We will be viewing the splitters not as a binary heap but just as a list where the middle of the list represents the root node of the tree, its children being the middle element of the left and the middle element of the right list. If an element *e**i* is larger than the first splitter *s**k*/2 (with *k* − 1 being the number of splitters), it must be placed in a bucket *b**j* with $j \geq \frac{k}{2}$ (assuming 0-based indexing for *b*). That also means that the index of that bucket, represented as a binary number, must have its bit at position $l := \log \frac{k}{2}$ set to 1. That way, the result of the comparison (*e**i* > *s**k*/2) can be interpreted as an integer (1 for `true`, 0 for `false`) and added to j. If that was not the last comparison, *j* is then multiplied by 2 (meaning its bits are shifted left by one position). This means the bit from the first comparison makes its way “left” in the binary representation while the comparison traverses down the tree, and so forth with the other comparisons. After traversing the splitter tree to the end, *e**i* will have been compared to the correct splitters and *j* will hold the index of the bucket that *e**i* belongs into. These operations can be implemented without branches by making use of the way comparisons are done: At the end of section [section:preliminaries:asm] we explained that when comparing (unsigned) numbers (which is nothing but a subtraction), and the `subtrahend` being greater than the `minuend`, the operation causes an underflow and the carry flag is set. We also notice that when converting the result of the predicate (*e**i* > *s**k*/2) to an integer value, the integer will be 1 for `true` and 0 for `false`. So in assembly code, we can compare the result from evaluating the predicate to the value 0: `cmp %[predResult],%[zero]` where `zero` is just a register that holds the value 0. This trick is needed because the `cmp` instruction needs the second operand to be a register. This will execute 0 − `predResult`, which underflows for the predicate returning true. This way we can postfix the `cmov` needed for moving the next splitters with a `c` checking for a set carry flag. The second instruction we make use of is the *rotate carry left* (`rcl`) instruction, which performs a *rotate left* instruction on *j*, but includes the carry flag as an additional bit after the least significant bit of the integer. This exactly takes the predicate result and puts it at the bottom of *j*, with the previous content being shifted one to the left beforehand. That means it performs two necessary operations at once. As an addition to the efficient classification, while looping over the elements we allow to place multiple elements into buckets per loop, allowing for all the registers in the machine to be used. This additional parameter is called *blockSize*. There is one downside to this approach: The keys of the splitters (since we only need a splitter’s key for classifying an element) must be small enough to fit into a general purpose register. Needing more than one register per key would mean either running out of registers or spending extra time to conditionally move the splitter keys around. For three splitters the needed number of registers for block sizes 1 to 5 are as seen in table [table:samplesort:registerusage]. We can see that the trade-off for classifying multiple elements at the same time is the amount of registers needed. If we were to use 7 splitters instead of three, the number of registers required for classifying just 1 element at a time would go up to 15. Also, with 8 buckets, if we get recursive subproblems with sizes just over 16, classifying into 8 buckets again would be greatly inefficient, resulting in many buckets containing very few. This is why we decided to only use three splitters for this particular sorter. Pseudocode to implement the classification can be seen as an example for an array of integers and `blockSize = 1` in algorithm [algo:samplesort:cversion]. *j* is here called `state`, and the temporary subtree consists of one splitter which we gave the name `splitterx`. For the branchless implementation we used the `cmovc` for line [algo:samplesort:cmov] and the `rcl` instruction for line [algo:samplesort:rcl]. At the last level of classification no more moving of splitters is required, so instead of doing another comparison against the predicate result and using `rcl`, we can just shift `state` left by one position and add the predicate’s result to it (line [algo:samplesort:laststate]). Alternatively we could use a bitwise `OR` or `XOR` after the shift, which would have the same result. But we decided that adding the predicate result was more readable. For sorting the splitter sample, the same sorting method can be used as for the base case. [!h] r | c c c c c | c c c c c & & & & & 1 & 2 & 3 & 4 & 5 & 1 & 2 & 3 & 4 & 5 splitters & 3 & 3 & 3 & 3 & 3 & 7 & 7 & 7 & 7 & 7 buckets pointer & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 current element index & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 element count & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 state & 1 & 2 & 3 & 4 & 5 & 1 & 2 & 3 & 4 & 5 predicate result & 1 & 2 & 3 & 4 & 5 & 1 & 2 & 3 & 4 & 5 splitterx & 1 & 2 & 3 & 4 & 5 & 3 & 6 & 9 & 12 & 15 sum & 9 & 12 & 15 & 18 & 21 & 15 & 20 & 25 & 30 & 35 [table:samplesort:registerusage] [!h] [algo:samplesort:cversion] int splitter0, splitter1, splitter2  ←  determineSplitters() int state, predicateResult, splitterx int\* b0, b1, b2, b3  ←  allocateBuckets(elementCount) Experimental Results ==================== In the tests we ran, different sorting algorithms and conditional-swap implementations were compared. For the details about the different sorters and swaps refer to section [section:implementation-networks]. The names of the sorters are built in an abbrevatory way that matches the following format: 1. It starts with an `I` or an `N`, indicating if the used algorithm is insertion sort or a sorting network. [enumeration:experimentnaming:algtype] * In case of sorting networks, if it is a `Best` network or a Bose Nelson network (`BoNe`). + For a Bose Nelson network whether it was optimized for Locality (`L`), Parallelism (`P`) or generated to take the items as single parameters `M` (see section [section:implementation-networks]) 2. Then follows the type of benchmark, `-N` for sorting one set of items (“normal sort”, section [section:experiments:normal]), `-I` for sorting many continuous sets of items (“inrow sort”, section [section:experiments:inrow]), `-S` for sorting with Sample Sort (section [section:experiments:samplesort]), `-Q` for sorting with quicksort (section [section:experiments:quicksort]) and `-4` for sorting with IPS4o  (section [section:experiments:ipso]). * In case of Sample Sort, the parameters `numberOfSplitters`, `oversamplingFactor` and `blockSize` are appended as numbers 3. Lastly, the name of the `struct` used for the template specialization is appended (see section [section:preliminaries:compare-and-swap] for the abbreviations for conditional swaps) as well as a single `K` for elements that have only a key and `KR` for those that have a key and a reference value. Where for comparison `std::sort` was run, the name in step [enumeration:experimentnaming:algtype] is `StdSort`. For example, when measuring sample sort with parameters 332 and a Bose Nelson network optimizing parallelism as the base case with conditional swap `4CS`, the sorter name would be `N BoNeP -S332 KR 4CS`. Environment ----------- [!h] | Machine Name | | | | | --- | --- | --- | --- | | | 2 x Intel Xeon 8-core | 2 x Intel Xeon 12-core | AMD Ryzen 8-core | | | E5-2650 v2 2.6 GHz | E5-2670 v3 2.3 GHz | 1800X 3.6 GHz | | RAM | 128 GiB DDR3 | 128 GiB DDR4 | 32GB DDR4 | | L1 Cache (per Core) | 32 KiB I + 32 KiB D | 32 KiB I + 32 KiB D | 64 KiB I + 32 KiB D | | L2 Cache (per Core) | 256 KiB | 256 KiB | 512 KiB | | L3 Cache (total) | 20 MiB | 30 MiB | 16 MiB [8 MiB] | [table:machines] As compiler the gcc C++ compiler in version 7.3.0 was used with the `-O3` flag. The measurements were done with only essential processes running on the machine apart from the measurement. To prevent the process from being swapped to another core during execution it was run with `taskset 0x1`. In total, three different machines were used to do the measurements. Their hardware properties can be seen in table [table:machines]. “I” and “D” refer to dedicated Instruction and Data caches. Also note that while the AMD Ryzen’s L3 cache has a total size of 16 MiB, it is divided into two 8 MiB caches that are exclusive to 4 cores each. Since all measurements were done on a single core, the L3 cache size in brackets is the one available to the program. The operating system on all machine was Ubuntu 18.04. Generating Plots ---------------- Due to the high number of dimensions in the measurements (machine the measurement is run on, type of network, conditional swap implementation, array size) the results could not always be plotted two-dimensionally. We used box-plots where applicable to show more than just an average value for a measurement. The box incloses all values between the first quartile (`1Q`) and third quartile (`3Q`). The line in the middle shows the median. Further the inter-quartile-range (`IQR`) is calculated as the distance between first and third quartile. The lines (called whiskers) left and right of the boxes go until the smallest value greater than `1Q` − 1.5 ⋅ `IQR` and the greatest value smaller than `3Q` + 1.5 ⋅ `IQR` respectively. Values below these ranges are called outliers and shown as individual dots. Conducting the Measurements --------------------------- #### Random Numbers In order to measure the time needed to sort some data, one has to have data first. For these measurements, the data consisted of pairs of a 64-bit unsigned integer key and a 64-bit unsigned integer reference value. Those were generated as uniformly distributed random numbers by a lightweight implementation of the std::minstd\_rand generator from the C++ <random> library that works as follows: First a `seed` is set, taken e.g. from the current time. When a new random number is requested, the generator calculates `seed` = `seed`  ⋅  48271 % 2147483647 and returns the current `seed`. The numbers generated like that do not use all 64 bits available, which is only for practicality with the permutation check as will be seen below. For each measurement *i*, a new `seed`*i* is taken from the current time. The same `seed`*i* is then set before the execution of each sorter, to provide all sorters with the same random inputs. #### Measuring The actual measuring was done via linux’s PERF\_EVENT interface that allows to do fine-grained measurements. Here, the number of cpu cycles spent on sorting was the unit of measurement. That also means that the results do not depend on clock speeds (e.g. when overclocking), but only on the CPU’s architecture. #### Compilation When we started this project, it was only a single source file (.cpp) with an increasing amount of headers that were all included in that single file. That is also due to the fact that templated methods cannot be placed in source files because they need to be visible to all including files at compile time. The increasing amount of code and the many different templates brought the compiler to a point where it took over a minute to compile the project. The problem we encountered was that the compiler only gives itself a limited amount of time for compiling a single source file. In order to stay within the time boundaries for a single file, the optimization became poor. We saw measurements being slower for no apparent reason. To solve that problem, we used code generation to create source files that contain an acceptable amount of methods that initiate part of a measurement in a wrapper method. This way, from the main source file we only need to call the correct wrapper methods to perform the measurements, and this way we achieved results that were more stable and reproducible. For compilation, the flag `-O3` was used to achieve high optimization and speed. That also means that, without using the sorted data in some way, the compiler would deem the result unimportant and skip the sorting altogether. That is why after each sort, to generate a side-effect, the set is checked for two properties: That it is sorted, and that it is a permutation of the previously generated set. The first can easily be done by checking for each value that it is not greater than the value before it. #### Permutation Check The permutation check is done probabilistically: At design time, a (preferably large) prime number *p* is chosen. Before sorting, $v = \prod\_{i = 1}^{n} (z - a\_i) \mod p$ is calculated for a number *z* and values *a* = {*a*1, …, *a**n*}. To check the permutation after sorting and obtaining *a*ʹ = {*a*ʹ1, …, *a*ʹ*n*}, $w = \prod\_{i = 1}^{n} (z - a'\_i) \mod p$ is calculated. If *v* ≠ *w*, *a*ʹ cannot be a permutation of *a*. If *v* = *w*, we claim that *a*ʹ is a permutation of *a*. To minimize the chances of *a*ʹ not being a permutation of *a*, but *v* being equal to *w*, *v* = 0 was disallowed in the first step. If *v* is zero, *z* is incremented by one and the product calculated again, until *v* ≠ 0. #### Benchmarks The benchmark seen in algorithm [algo:normal] was used for most of the measurements. To reduce the chance of cache misses at the beginning of the measurement, one warmup run of random generation, sorting and sorted checking is done beforehand (lines [algo:normal:warmup:start] to [algo:normal:warmup:end]). The array is then sorted `numberOfIterations` times and checked for the sorted and permutation properties. After that only the generation of the random numbers and the sorted and permutation checking is measured, to later subtract the time from the previously measured one, resulting in the time needed for the sorting alone. Since this is not deterministic in time, and both measurements are subjects to their own deviation, it can occasionally happen that the second measurement takes longer than the first, even though less work has been done. We get those negative times more often for the sorters with small array sizes, where the sorting itself takes relatively little time compared to the random generation and sorted checking. The negative times show up as outliers in the results. The function `simulateCheckSorted` checks the permutation like `checkSorted`, but since randomly generated arrays are rarely ordered, instead of checking for each element if it is smaller than its predecessor, it checks for equality. That should never happen with the random number generator used, and thus run for the same amount of cycles. The function MeasureSorting is called a total of `numberOfMeasures` times for each `arraySize` that is sorted. For the measurements shown in section [section:experiments:inrow] the benchmark was slightly modified as can be seen in algorithm [algo:inrow]. Here the goal was to look at cache- and memory-effects by creating an array that does not fit into the CPU’s L3-cache, and then filling the cache with something else, in this case the reference array. We then split the original array into many blocks of size `arraySize` and sort each independently. Because we have to create the whole array at the beginning, we can generate the numbers before and check for correct sorting after measuring, so there is no need to do a second measurement like in the first benchmark (lines [algo:normal:random:start] to [algo:normal:random:end] in algorithm [algo:normal]). Here, instead of giving a `numberOfIterations` parameter to indicate how often the sorting is to be executed, we provide a `numberOfArrays` value that says how many arrays of size `arraySize` are to be created contiguously. This parameter is chosen for each `arraySize` in a way that `numberOfArrays`  ×  `arraySize` does not fit into the L3 cache of the machine the measurement is performed on. [!p] [algo:normal] [!p] [algo:inrow] Sorting One Set of 2-16 items ----------------------------- The benchmark from algorithm [algo:normal] was used with parameters * `numberOfIterations` = 100 * `numberOfMeasures` = 500 * `arraySize` ∈ {2, …, 16}. The results seen in tables [table:normalsort:avg:A], [table:normalsort:avg:B] and [table:normalsort:avg:C] contain the name of the sorter and the average number of cycles per iteration, over the total of all measurements, for machines A, B and C. The algorithm that performed best in a column is marked in bold font, and for each column the value relative to the best in that column was calculated. For each row the geometric mean is calculated over the relative values and from that the rank is determined. Table [table:normalsort:avg:all] contains the geometric mean and rank taking the results from all three machines into consideration. Here it becomes visible that the implementations that have conditional branches and those that do not are clearly separated by rank, the former occupy the lower share of the ranks, while the latter get all the higher ranks. We see that the claim from section [section:implementation-conditionalswap] for the `4CS` conditional swap is true for machines A and B, but not for machine C. We also see in table [table:normalsort:avg:all] that the first three ranks have the same geometric mean, so the Bose Nelson networks can compete with the optimized networks that have fewer comparators due to their locality. The boxplots for array size 8 are given for each machine in figures [plot:normal:8:A], [plot:normal:8:B] and [plot:normal:8:C], showing that these higher-ranked implementations are not only faster on average, but that their distribution is almost entirely faster than any of the insertion sort implementations, together with a lower variance. To improve readability, the variants `JXc`, `6Cm` and `QMa` are omitted. Also one outlier was removed from dataset of machine B for the `’N BoNeL -N KR Cla’` sorter with value  − 42.6 so that the plot has a scale similar to those of the other two machines, to improve comparability. The result set for machine A contains a lot of outliers that we did not want to exclude. To be able to compare it easily with the other two plots we added an additional axis at the top that shows the CPU cycles per iteration as percentages where the average of the best insertion sort is 100%. To see a trend in increasing array size, we chose a few Conditional Swap implementations that do best for more than one network and array size on all machines. Their average sorting times can be seen in figures [plot:normal:lineplot:A], [plot:normal:lineplot:B] and [plot:normal:lineplot:C]. For visibility reasons, we omitted the Bose Nelson Parameter networks in these plot. What we already saw from the tables is here visible as well, the `4Cm` and `4CS` implementations have good performance and are almost always faster on average than insertion sort (apart from `arraySize = 2` on machine A). These results indicate that there is potential in using sorting networks, showing an improvement of 32% of the best network over the best insertion sort, on average, for any array size. Problems with this way of measurement are that the same space in memory is sorted over and over again, which is rarely a use case when sorting a base case. Because of this, the measurements probably reflect unrealistic conditions regarding cache accesses and cache misses. To get a bit closer to actual base case sorting, the next section has a different approach to not sort the same space in memory twice. [!p] l | r @   r | r@  r@  r@  r@  r@  r@  r@  r@  r@  r@  r@  r@  r@  r@  r@  r| & & & Rank & GeoM & 2&3&4&5&6&7&8&9&10&11&12&13&14&15&16 `I -N KR POp` & 22 & 1.85 & 15.21&37.82&80.52&124.17&166.14&204.37&250.08&282.97&323.87&369.31&417.57&437.93&509.37&520.20&579.44 `I -N KR STL` & 26 & 1.92 & 13.82&39.99&83.75&128.44&178.50&213.03&257.62&287.15&346.35&382.08&434.47&455.29&532.36&554.80&614.56 `I -N KR Def` & 33 & 2.07 & 17.23&40.53&84.80&132.12&178.09&220.69&277.29&311.27&378.66&412.51&475.48&499.27&595.68&614.74&693.90 `I -N KR AIF` & 36 & 2.21 & 16.55&50.76&90.56&148.37&202.86&252.22&307.29&342.66&400.98&442.48&485.63&518.62&593.07&609.29&672.98 `N Best -N KR 4CS` & 1 & 1.07 & 11.59&**24.11**&**34.35**&66.58&82.54&96.95&125.56&134.92&**183.73**&**218.14**&**254.95**&278.53&356.75&353.24&395.58 `N Best -N KR 4Cm` & 5 & 1.12 & 16.33&24.21&38.05&54.80&74.74&**85.23**&127.40&141.99&201.85&238.39&279.15&301.47&375.39&399.06&450.54 `N Best -N KR Cla` & 8 & 1.23 & 8.91&31.22&40.41&86.73&110.04&144.05&163.77&188.67&220.54&240.00&298.76&285.42&347.36&**349.35**&400.98 `N Best -N KR CPr` & 10 & 1.27 & **8.13**&32.20&46.88&87.99&112.46&146.10&164.91&190.61&208.00&256.03&296.58&301.14&382.17&381.71&438.18 `N Best -N KR 6Cm` & 13 & 1.37 & 17.20&25.57&46.86&64.68&96.36&107.56&146.34&176.34&259.56&289.24&339.02&392.64&490.05&502.11&585.95 `N Best -N KR Def` & 21 & 1.84 & 20.09&37.08&73.94&113.14&144.92&179.09&248.33&268.15&302.44&338.76&417.69&429.27&555.59&552.84&712.22 `N Best -N KR Tie` & 25 & 1.90 & 20.47&38.06&63.58&98.92&139.21&182.82&238.59&271.77&316.34&369.96&477.94&519.14&597.69&639.07&753.42 `N Best -N KR JXc` & 32 & 2.04 & 18.44&36.50&68.67&113.06&167.99&207.12&264.82&293.56&347.16&409.67&506.11&522.21&680.63&711.20&791.41 `N Best -N KR QMa` & 37 & 2.60 & 17.72&44.69&96.02&149.03&207.73&252.95&341.19&397.81&438.98&573.64&681.13&700.09&832.27&910.77&1057.45 `N BoNeL -N KR 4CS` & 2 & 1.08 & 11.45&24.99&35.82&67.94&82.41&98.05&128.16&**132.68**&186.46&224.66&262.69&**275.88**&**344.76**&352.46&**387.59** `N BoNeL -N KR 4Cm` & 3 & 1.11 & 13.53&25.18&38.33&55.62&76.06&86.06&132.02&142.06&193.99&232.70&284.86&302.12&383.21&386.96&423.71 `N BoNeL -N KR 6Cm` & 15 & 1.42 & 15.96&28.18&45.90&73.18&90.18&115.24&148.93&214.27&278.32&298.28&365.05&414.27&493.26&508.85&560.16 `N BoNeL -N KR Cla` & 16 & 1.42 & 8.72&31.04&40.71&82.99&112.46&143.75&163.76&239.80&270.36&325.30&354.56&403.83&452.67&493.90&550.58 `N BoNeL -N KR CPr` & 17 & 1.44 & 9.03&33.01&47.21&88.37&113.62&147.21&166.12&238.67&265.87&321.12&347.10&401.81&446.13&482.28&536.51 `N BoNeL -N KR Tie` & 27 & 1.93 & 20.87&40.57&64.35&99.65&137.68&173.18&231.53&265.85&343.47&383.88&472.81&513.56&636.85&676.79&782.22 `N BoNeL -N KR Def` & 28 & 1.94 & 20.11&40.52&78.60&102.58&139.42&170.73&237.18&265.27&366.93&372.58&478.09&481.87&637.88&636.35&764.29 `N BoNeL -N KR JXc` & 31 & 2.04 & 18.95&36.18&68.86&108.92&160.18&196.61&256.54&314.83&368.23&427.12&504.82&570.41&642.64&662.73&789.99 `N BoNeL -N KR QMa` & 38 & 2.67 & 18.16&45.95&93.38&143.03&196.39&241.82&326.07&406.05&514.55&578.58&685.08&776.92&912.15&998.34&1163.99 `N BoNeM -N KR 4Cm` & 7 & 1.22 & 16.08&27.11&38.84&54.86&82.51&94.36&119.90&214.28&252.78&251.85&284.48&318.74&415.51&394.31&500.68 `N BoNeM -N KR 4CS` & 11 & 1.28 & 11.79&24.70&43.96&73.23&83.76&130.14&**115.48**&204.19&273.27&286.72&281.43&314.56&487.85&464.53&518.94 `N BoNeM -N KR 6Cm` & 18 & 1.51 & 15.65&27.98&52.71&93.07&85.76&113.21&134.35&223.11&340.69&340.96&393.85&434.99&575.98&499.30&630.78 `N BoNeM -N KR Cla` & 19 & 1.63 & 13.05&32.46&54.07&90.32&116.27&149.86&175.80&278.17&314.13&348.62&395.85&448.91&559.24&566.87&639.09 `N BoNeM -N KR CPr` & 20 & 1.67 & 15.10&33.91&46.42&103.89&120.13&153.62&203.55&287.35&322.22&347.58&374.67&423.75&521.46&565.28&726.59 `N BoNeM -N KR Def` & 29 & 1.94 & 18.38&39.34&75.91&113.68&157.87&199.64&237.50&259.60&352.19&369.31&455.58&479.48&596.34&633.54&748.67 `N BoNeM -N KR Tie` & 30 & 2.01 & 21.34&38.64&62.66&96.05&135.19&172.29&229.37&265.31&368.96&452.05&554.50&544.95&769.84&767.68&788.78 `N BoNeM -N KR JXc` & 35 & 2.18 & 19.47&38.29&70.01&108.73&147.29&184.76&252.61&358.55&454.97&478.92&594.31&589.14&769.05&751.62&924.53 `N BoNeM -N KR QMa` & 39 & 2.71 & 24.28&54.14&100.79&136.42&204.57&251.56&325.24&403.05&480.82&547.65&651.81&739.13&864.35&966.34&1092.30 `N BoNeP -N KR 4CS` & 4 & 1.11 & 11.41&24.83&35.67&61.12&84.51&96.59&130.30&159.81&199.35&230.24&271.39&302.50&363.86&388.34&422.56 `N BoNeP -N KR 4Cm` & 6 & 1.14 & 13.00&25.08&38.14&**53.95**&**74.04**&94.47&119.49&151.09&209.73&237.70&291.55&334.29&398.82&426.06&468.23 `N BoNeP -N KR Cla` & 9 & 1.25 & 8.81&31.30&41.47&80.30&100.54&130.65&147.00&211.21&233.02&265.58&286.03&320.45&363.05&385.07&438.35 `N BoNeP -N KR CPr` & 12 & 1.28 & 9.68&33.04&47.46&82.36&94.76&116.33&147.20&212.97&223.40&267.75&289.20&337.12&401.57&417.11&468.35 `N BoNeP -N KR 6Cm` & 14 & 1.41 & 15.17&27.86&45.69&66.43&86.38&102.07&151.31&207.81&275.71&317.69&375.96&418.47&505.54&545.92&617.54 `N BoNeP -N KR Tie` & 23 & 1.88 & 20.56&37.00&64.49&97.05&131.03&171.37&223.33&270.21&324.68&395.20&462.10&534.33&590.53&642.31&744.88 `N BoNeP -N KR Def` & 24 & 1.88 & 19.80&41.86&74.79&111.41&144.15&174.45&237.91&265.89&325.01&350.58&420.83&473.65&563.51&599.15&727.96 `N BoNeP -N KR JXc` & 34 & 2.08 & 20.22&36.90&69.50&109.37&150.28&188.40&251.74&293.71&379.19&439.07&525.51&585.25&719.42&744.81&844.24 `N BoNeP -N KR QMa` & 40 & 2.79 & 24.23&52.09&99.01&148.85&192.75&263.79&338.27&395.25&517.71&584.97&677.76&794.20&938.78&1006.31&1166.80 [table:normalsort:avg:A] [!p] l | r @   r | r@  r@  r@  r@  r@  r@  r@  r@  r@  r@  r@  r@  r@  r@  r@  r| & & & Rank & GeoM & 2&3&4&5&6&7&8&9&10&11&12&13&14&15&16 `I -N KR POp` & 25 & 1.84 & 12.56&36.78&73.37&111.91&151.52&183.05&221.86&263.07&302.35&353.36&399.83&439.54&475.60&508.11&550.21 `I -N KR STL` & 29 & 1.93 & 10.97&37.26&78.04&122.75&161.05&201.27&247.30&280.78&321.94&376.15&420.50&461.52&493.07&519.96&559.69 `I -N KR Def` & 32 & 1.98 & 14.03&40.14&76.93&117.68&157.59&198.50&242.60&280.73&322.72&375.54&422.67&465.00&511.51&558.47&599.95 `I -N KR AIF` & 36 & 2.32 & 14.98&54.78&92.20&135.59&195.79&245.14&292.73&327.82&380.85&438.38&489.94&538.38&573.31&612.57&656.83 `N Best -N KR 4CS` & 1 & 1.06 & 8.00&22.10&35.70&60.36&71.61&93.40&112.30&144.17&**171.34**&**214.03**&238.42&285.88&316.71&344.10&364.40 `N Best -N KR 4Cm` & 3 & 1.10 & 8.75&**20.34**&**34.47**&**54.70**&70.88&90.27&115.48&**137.92**&189.67&222.14&262.03&307.09&353.77&391.72&432.70 `N Best -N KR CPr` & 8 & 1.19 & **5.23**&25.15&38.71&71.13&92.24&124.12&145.15&188.31&194.53&234.72&266.15&307.63&343.99&372.53&409.52 `N Best -N KR Cla` & 9 & 1.20 & 6.86&28.30&38.70&75.69&92.84&129.24&146.96&190.31&196.30&233.05&261.56&**278.43**&**306.94**&**335.57**&363.08 `N Best -N KR 6Cm` & 14 & 1.36 & 9.52&24.23&39.97&69.73&88.59&111.52&136.40&174.35&228.98&283.96&321.57&402.13&458.39&499.02&553.25 `N Best -N KR Def` & 21 & 1.78 & 16.38&33.75&64.57&95.65&125.05&164.26&204.86&246.37&265.34&336.21&399.28&428.57&508.38&550.37&631.51 `N Best -N KR Tie` & 22 & 1.82 & 16.04&33.74&57.37&88.86&114.24&165.85&203.09&250.32&280.61&352.47&435.29&508.75&539.74&598.76&685.05 `N Best -N KR JXc` & 33 & 2.00 & 16.52&33.44&63.44&98.88&138.36&179.19&218.44&282.01&309.11&385.21&480.67&541.52&626.83&665.20&766.23 `N Best -N KR QMa` & 37 & 2.53 & 14.98&42.43&86.31&140.41&184.73&235.35&301.62&364.06&383.06&539.44&623.15&659.71&744.50&846.43&932.15 `N BoNeL -N KR 4CS` & 2 & 1.08 & 8.79&23.49&37.04&62.21&72.40&93.37&112.70&142.31&173.74&218.67&**237.75**&281.41&309.13&342.61&**349.19** `N BoNeL -N KR 4Cm` & 4 & 1.11 & 9.06&21.77&35.45&55.14&72.10&90.98&116.37&142.51&181.85&223.36&263.63&319.19&355.23&380.37&404.78 `N BoNeL -N KR CPr` & 13 & 1.34 & 6.54&27.47&39.79&72.94&93.51&125.24&145.99&215.92&243.90&285.57&315.19&374.44&404.65&434.14&454.70 `N BoNeL -N KR Cla` & 15 & 1.39 & 7.30&29.84&39.35&76.71&92.91&130.63&147.23&226.02&247.07&293.28&324.55&393.59&415.47&460.03&477.18 `N BoNeL -N KR 6Cm` & 16 & 1.43 & 10.65&25.90&40.56&70.90&87.99&113.19&137.92&217.54&260.44&292.33&353.98&427.97&475.74&503.83&529.54 `N BoNeL -N KR Tie` & 26 & 1.87 & 17.59&33.75&57.62&87.79&115.16&157.70&198.00&245.89&307.31&367.01&444.25&506.00&579.13&658.81&722.07 `N BoNeL -N KR Def` & 27 & 1.89 & 16.64&36.37&67.23&90.58&120.61&158.18&200.27&250.83&320.55&355.67&453.90&487.58&575.63&625.45&688.18 `N BoNeL -N KR JXc` & 31 & 1.97 & 16.28&33.15&62.43&91.86&133.58&171.44&214.22&279.84&338.71&398.70&456.86&536.44&615.27&660.74&748.77 `N BoNeL -N KR QMa` & 38 & 2.63 & 14.51&43.23&82.70&134.53&178.88&225.65&290.45&391.15&462.34&547.55&646.34&742.97&835.85&940.49&1057.89 `N BoNeM -N KR 4Cm` & 7 & 1.17 & 9.86&21.49&34.89&55.43&72.68&**89.56**&**106.83**&212.18&224.25&233.20&260.33&325.45&378.21&405.97&438.89 `N BoNeM -N KR 4CS` & 12 & 1.28 & 8.90&23.66&41.74&72.84&72.88&129.91&108.65&206.26&248.00&262.31&262.53&321.93&431.00&445.25&452.85 `N BoNeM -N KR 6Cm` & 18 & 1.56 & 11.72&25.59&48.66&95.36&84.11&110.15&133.02&234.05&318.24&343.21&382.43&447.82&527.23&528.53&599.74 `N BoNeM -N KR CPr` & 19 & 1.58 & 11.92&29.69&39.66&93.65&94.49&144.63&168.80&258.49&301.06&331.26&341.15&432.34&444.36&531.04&566.47 `N BoNeM -N KR Cla` & 20 & 1.60 & 12.11&33.08&47.86&88.18&94.52&149.64&147.27&249.49&280.22&330.82&350.09&443.00&490.15&546.82&537.15 `N BoNeM -N KR Def` & 28 & 1.92 & 15.74&35.40&67.25&105.51&137.72&179.21&211.66&250.63&320.06&350.38&443.83&481.71&549.25&621.84&689.81 `N BoNeM -N KR Tie` & 30 & 1.96 & 16.95&33.72&56.67&86.72&115.31&154.85&198.75&245.12&339.09&443.65&526.52&553.74&695.92&741.39&723.53 `N BoNeM -N KR JXc` & 35 & 2.14 & 16.46&35.71&60.93&95.72&123.50&167.92&218.13&359.78&433.33&449.88&539.73&577.80&689.96&729.62&862.97 `N BoNeM -N KR QMa` & 39 & 2.66 & 20.64&48.64&90.61&126.40&185.62&229.86&300.87&373.85&431.10&518.01&615.57&735.18&780.40&895.37&951.65 `N BoNeP -N KR 4CS` & 5 & 1.12 & 8.46&23.34&37.09&57.80&73.68&92.70&113.69&159.60&190.06&219.94&252.55&307.79&328.41&377.27&392.42 `N BoNeP -N KR 4Cm` & 6 & 1.14 & 8.95&22.69&35.57&55.35&**69.60**&90.41&111.13&149.56&197.46&225.24&271.98&335.76&362.40&414.60&447.03 `N BoNeP -N KR CPr` & 10 & 1.22 & 5.97&26.99&39.44&67.44&80.12&111.68&129.78&190.36&211.08&251.35&278.01&336.65&370.57&405.37&436.74 `N BoNeP -N KR Cla` & 11 & 1.23 & 7.10&29.97&39.96&73.81&84.93&116.40&131.89&200.48&213.50&243.83&265.25&317.23&337.10&377.03&402.82 `N BoNeP -N KR 6Cm` & 17 & 1.44 & 10.32&25.62&40.47&69.19&84.75&110.24&134.78&210.52&252.52&313.76&368.79&434.89&481.18&543.99&580.31 `N BoNeP -N KR Def` & 23 & 1.83 & 16.91&38.39&65.18&97.07&120.63&156.82&195.89&242.43&288.94&336.14&403.45&458.41&522.17&589.12&656.89 `N BoNeP -N KR Tie` & 24 & 1.84 & 17.37&32.93&57.10&86.35&114.55&156.12&193.53&248.25&299.72&369.58&429.28&509.05&566.42&628.05&716.46 `N BoNeP -N KR JXc` & 34 & 2.04 & 16.51&32.47&60.17&95.97&137.22&171.44&221.18&283.43&339.45&422.44&492.87&576.73&655.46&723.94&784.93 `N BoNeP -N KR QMa` & 40 & 2.75 & 20.44&48.92&87.15&142.84&174.98&248.01&298.49&385.09&462.66&544.00&626.85&763.33&847.00&940.79&1027.04 [table:normalsort:avg:B] [!p] l | r @   r | r@  r@  r@  r@  r@  r@  r@  r@  r@  r@  r@  r@  r@  r@  r@  r| & & & Rank & GeoM & 2&3&4&5&6&7&8&9&10&11&12&13&14&15&16 `I -N KR POp` & 21 & 2.45 & 11.39&36.05&77.28&128.96&181.15&227.59&265.00&298.11&335.56&370.76&404.10&450.91&488.88&529.10&582.69 `I -N KR Def` & 33 & 2.90 & 15.33&48.24&95.20&148.27&195.18&249.58&305.80&347.41&395.45&434.98&479.17&524.59&575.01&628.20&688.63 `I -N KR STL` & 35 & 2.97 & 16.64&52.56&102.10&154.79&194.80&248.41&306.96&346.42&393.33&436.37&486.46&536.80&580.03&634.94&702.84 `I -N KR AIF` & 36 & 3.23 & 16.77&56.45&109.56&160.66&215.97&268.93&332.20&379.01&432.62&479.89&540.53&601.36&643.93&708.10&776.50 `N Best -N KR 4Cm` & 2 & 1.11 & 7.31&**16.27**&29.45&47.03&70.03&82.22&**96.48**&126.39&**143.46**&169.87&176.58&233.61&268.71&310.79&345.07 `N Best -N KR 4CS` & 5 & 1.19 & 7.07&18.10&34.24&55.89&67.86&93.54&110.54&137.94&146.50&175.16&195.19&239.20&274.19&315.00&343.90 `N Best -N KR 6Cm` & 8 & 1.25 & 6.85&17.84&33.82&62.44&79.27&97.94&116.81&142.67&165.77&184.17&206.94&256.17&295.75&316.26&352.90 `N Best -N KR CPr` & 11 & 1.43 & **3.09**&30.86&36.06&73.57&89.71&124.63&140.12&190.47&199.37&251.14&265.94&292.43&335.35&364.68&389.62 `N Best -N KR Cla` & 12 & 1.45 & 4.73&26.60&37.62&75.06&87.90&116.55&142.33&194.22&196.00&251.06&260.73&294.26&326.31&356.21&384.08 `N Best -N KR Tie` & 26 & 2.75 & 15.76&38.05&73.01&109.62&144.16&211.29&266.32&314.66&362.63&424.85&500.30&595.58&687.34&733.87&897.51 `N Best -N KR JXc` & 27 & 2.79 & 15.56&42.27&75.44&114.30&147.68&205.71&254.78&309.67&375.45&433.90&491.17&605.14&703.28&774.28&901.10 `N Best -N KR Def` & 30 & 2.85 & 15.40&37.48&80.60&126.48&158.41&222.60&294.91&346.24&358.13&456.70&514.14&561.78&694.25&758.85&846.88 `N Best -N KR QMa` & 38 & 3.79 & 11.05&54.53&110.76&180.78&239.95&310.40&389.88&468.14&494.45&665.40&741.91&783.56&920.78&1000.01&1110.08 `N BoNeL -N KR 4Cm` & 1 & 1.07 & 6.05&16.77&29.70&47.32&70.14&83.57&96.87&**125.01**&149.19&164.25&**174.84**&**216.16**&**244.09**&**266.22**&**288.18** `N BoNeL -N KR 4CS` & 4 & 1.15 & 6.51&18.51&33.64&57.84&69.08&90.70&112.78&134.28&153.70&182.76&185.91&234.77&256.08&272.71&299.99 `N BoNeL -N KR 6Cm` & 9 & 1.31 & 8.37&20.04&35.32&67.72&77.64&99.97&120.89&156.16&177.52&199.67&217.96&260.26&282.87&316.31&348.27 `N BoNeL -N KR CPr` & 17 & 1.58 & 3.27&32.08&37.08&73.85&91.57&124.20&141.78&205.08&251.50&290.99&312.06&369.32&405.85&437.23&462.54 `N BoNeL -N KR Cla` & 18 & 1.59 & 4.48&27.21&37.60&76.08&89.28&117.83&141.78&200.79&240.48&289.48&306.41&371.91&403.27&443.11&457.93 `N BoNeL -N KR JXc` & 22 & 2.56 & 15.24&35.21&66.96&100.40&131.09&189.59&235.21&304.23&364.99&418.31&472.06&536.86&655.17&704.77&777.83 `N BoNeL -N KR Tie` & 24 & 2.69 & 15.15&37.39&64.10&112.33&139.39&191.19&240.18&314.21&375.78&435.35&500.68&610.78&682.65&765.66&882.64 `N BoNeL -N KR Def` & 31 & 2.89 & 14.65&39.25&78.65&115.49&145.62&206.33&273.29&347.40&409.92&460.74&560.31&611.72&725.22&827.99&932.90 `N BoNeL -N KR QMa` & 39 & 3.81 & 10.33&50.08&97.42&168.02&223.17&286.64&359.03&484.01&578.52&663.41&766.77&906.75&1002.35&1112.73&1223.32 `N BoNeM -N KR 4Cm` & 7 & 1.23 & 7.29&17.72&**29.35**&**45.03**&**61.88**&82.52&98.43&202.36&212.83&200.65&243.93&238.92&308.91&320.33&364.00 `N BoNeM -N KR 6Cm` & 15 & 1.50 & 7.00&21.07&40.85&84.36&80.01&99.31&117.10&197.20&266.33&235.02&258.16&323.18&415.37&352.34&399.50 `N BoNeM -N KR 4CS` & 16 & 1.51 & 6.96&17.60&40.66&80.80&70.93&145.19&112.51&208.82&246.55&256.60&238.25&282.13&411.16&416.42&433.64 `N BoNeM -N KR CPr` & 19 & 1.95 & 8.86&33.28&38.15&94.93&98.62&147.79&186.70&251.71&298.36&323.52&341.95&417.32&448.73&548.35&592.07 `N BoNeM -N KR Cla` & 20 & 1.96 & 13.01&32.55&47.51&90.72&93.87&154.40&146.24&232.55&276.11&316.92&340.40&421.70&482.01&533.05&533.90 `N BoNeM -N KR JXc` & 28 & 2.82 & 16.39&37.79&66.83&110.69&134.03&194.48&243.91&385.89&444.32&496.26&538.42&586.56&705.07&767.01&877.98 `N BoNeM -N KR Tie` & 29 & 2.84 & 17.52&41.29&65.86&105.65&145.14&198.25&259.15&322.99&398.23&468.78&547.83&627.10&796.34&798.66&842.88 `N BoNeM -N KR Def` & 34 & 2.95 & 11.71&37.08&84.63&136.06&176.03&231.43&289.16&351.31&413.91&456.64&542.57&641.53&720.54&826.37&908.10 `N BoNeM -N KR QMa` & 37 & 3.77 & 17.63&58.32&107.62&142.40&237.64&264.96&384.33&435.37&500.00&574.65&726.09&825.43&913.20&980.12&1129.61 `N BoNeP -N KR 4Cm` & 3 & 1.15 & 6.86&17.92&29.84&49.01&61.90&**79.90**&106.49&129.82&156.20&**163.84**&201.19&251.63&287.49&319.13&345.12 `N BoNeP -N KR 4CS` & 6 & 1.21 & 6.63&20.99&33.47&50.87&69.91&89.23&114.03&132.59&155.05&197.48&204.60&262.52&285.92&321.02&342.46 `N BoNeP -N KR 6Cm` & 10 & 1.31 & 7.80&20.58&34.62&60.07&83.27&101.94&126.02&143.48&174.07&207.35&228.25&262.75&287.39&344.92&355.76 `N BoNeP -N KR CPr` & 13 & 1.48 & 3.24&31.77&37.47&74.46&84.18&123.89&146.83&196.26&213.12&254.61&274.11&323.22&347.05&384.41&414.97 `N BoNeP -N KR Cla` & 14 & 1.48 & 4.77&27.88&38.17&69.83&75.02&116.28&141.25&191.49&221.49&250.44&274.70&325.92&350.47&394.49&424.23 `N BoNeP -N KR JXc` & 23 & 2.69 & 13.90&35.16&65.39&105.11&139.71&197.48&255.87&294.85&364.97&461.81&505.71&605.46&686.42&814.11&919.46 `N BoNeP -N KR Tie` & 25 & 2.72 & 15.69&34.72&68.55&112.03&142.03&209.70&257.04&312.52&367.93&434.85&509.64&578.63&702.64&752.57&892.05 `N BoNeP -N KR Def` & 32 & 2.90 & 14.81&39.78&78.61&127.92&181.05&231.43&274.24&346.11&379.57&453.44&491.92&610.98&708.39&773.08&859.50 `N BoNeP -N KR QMa` & 40 & 3.99 & 19.09&56.49&99.26&180.09&222.01&298.15&371.18&475.00&549.86&660.00&745.95&881.24&994.07&1082.76&1196.13 [table:normalsort:avg:C] [!p] l | r @   r | r@  r@  r@  r@  r@  r@  r@  r@  r@  r@  r@  r@  r@  r@  r@  r| & & & Rank & GeoM & 2&3&4&5&6&7&8&9&10&11&12&13&14&15&16 `I -N KR POp` & 21 & 1.97 & 13.31&36.84&77.52&121.73&166.39&204.93&245.35&281.53&320.20&364.03&407.29&443.03&491.19&519.29&570.38 `I -N KR STL` & 29 & 2.18 & 13.99&44.13&88.72&136.05&178.19&221.54&271.65&306.46&354.48&399.11&448.19&485.94&536.15&570.51&626.51 `I -N KR Def` & 34 & 2.22 & 15.49&43.53&85.92&133.04&177.05&222.94&275.77&313.38&366.00&407.47&459.20&496.26&561.57&600.20&659.81 `I -N KR AIF` & 36 & 2.48 & 16.19&53.96&98.14&148.52&205.27&255.57&311.14&350.39&404.94&454.16&505.94&553.38&604.04&645.66&703.20 `N Best -N KR 4CS` & 1 & 1.08 & 9.49&21.14&34.61&61.39&74.12&93.88&116.89&138.90&**166.86**&**201.46**&**229.80**&267.67&318.35&338.59&367.90 `N Best -N KR 4Cm` & 4 & 1.10 & 11.80&**20.24**&**33.80**&**51.24**&71.67&**85.75**&113.85&**135.38**&178.33&208.90&238.10&278.08&333.61&367.57&413.45 `N Best -N KR Cla` & 8 & 1.26 & 7.07&28.71&38.97&80.04&96.97&129.58&150.98&191.32&204.88&242.45&273.71&286.36&326.97&347.76&382.99 `N Best -N KR CPr` & 9 & 1.27 & **5.62**&29.14&40.79&78.30&98.99&133.63&150.52&189.73&200.78&247.67&278.47&300.13&353.26&373.06&412.66 `N Best -N KR 6Cm` & 12 & 1.31 & 11.86&21.84&40.18&65.61&87.83&105.09&133.64&163.41&217.18&252.83&286.92&353.76&417.93&429.41&497.07 `N Best -N KR Tie` & 23 & 2.07 & 17.79&36.53&64.86&99.31&132.75&186.83&236.91&278.75&320.47&382.78&471.00&542.17&608.62&658.26&779.34 `N Best -N KR Def` & 24 & 2.07 & 17.67&36.25&73.23&111.95&143.00&189.50&250.12&287.76&308.47&378.63&443.90&474.71&587.42&626.30&731.54 `N Best -N KR JXc` & 33 & 2.19 & 17.11&37.58&69.36&108.46&151.93&197.17&245.69&295.11&343.64&409.43&492.56&556.92&670.48&716.70&818.76 `N Best -N KR QMa` & 37 & 2.86 & 14.49&47.71&97.71&157.50&211.46&266.79&344.29&410.69&439.47&592.67&682.56&714.70&833.13&919.82&1034.29 `N BoNeL -N KR 4Cm` & 2 & 1.08 & 9.98&21.36&34.70&51.99&73.39&87.23&116.70&135.64&174.36&205.56&240.54&275.67&330.05&346.83&375.29 `N BoNeL -N KR 4CS` & 3 & 1.08 & 9.11&22.19&35.65&62.92&75.16&94.38&119.00&136.72&173.22&211.53&232.70&**265.93**&**304.89**&**326.46**&**348.16** `N BoNeL -N KR 6Cm` & 14 & 1.37 & 12.10&24.61&41.19&70.46&84.83&108.23&135.38&196.49&245.98&259.64&310.46&366.85&429.46&437.78&485.76 `N BoNeL -N KR Cla` & 16 & 1.43 & 6.67&29.23&39.13&79.28&99.19&130.78&150.76&222.19&252.77&303.20&328.71&389.41&422.62&465.09&493.86 `N BoNeL -N KR CPr` & 17 & 1.43 & 6.24&30.74&41.61&79.86&100.76&134.10&151.81&219.75&254.20&299.55&327.56&382.07&421.80&452.26&485.46 `N BoNeL -N KR Tie` & 25 & 2.08 & 18.14&37.39&61.98&100.26&130.55&174.16&223.09&275.50&342.71&395.67&472.43&544.67&633.90&701.33&794.77 `N BoNeL -N KR JXc` & 27 & 2.12 & 17.03&34.89&65.91&100.48&142.81&186.18&235.30&299.63&356.66&414.47&478.47&548.58&637.53&677.42&771.92 `N BoNeL -N KR Def` & 28 & 2.15 & 17.48&38.62&74.44&103.11&135.11&178.95&237.15&288.96&365.87&397.72&498.02&529.04&646.10&699.94&796.40 `N BoNeL -N KR QMa` & 38 & 2.92 & 14.43&46.63&90.93&149.35&199.66&251.64&325.05&427.89&518.88&596.66&699.04&810.17&917.14&1017.55&1148.28 `N BoNeM -N KR 4Cm` & 7 & 1.19 & 11.67&22.36&34.35&51.89&72.19&89.01&**109.41**&208.81&231.50&227.94&263.31&294.41&367.71&374.03&431.20 `N BoNeM -N KR 4CS` & 13 & 1.32 & 9.54&21.89&42.09&76.18&77.20&136.41&112.08&206.68&256.74&270.60&260.43&306.41&441.37&442.21&467.06 `N BoNeM -N KR 6Cm` & 18 & 1.49 & 11.67&24.90&47.78&90.53&83.76&107.69&128.11&219.37&311.29&297.59&345.98&401.83&503.23&457.17&541.58 `N BoNeM -N KR Cla` & 19 & 1.68 & 12.87&32.68&50.31&89.81&103.25&151.29&157.12&253.04&290.36&332.44&361.42&437.62&510.11&548.59&572.66 `N BoNeM -N KR CPr` & 20 & 1.69 & 12.25&32.26&42.04&98.50&105.12&149.09&186.85&265.49&308.33&334.27&355.31&424.85&476.11&548.73&628.61 `N BoNeM -N KR Def` & 30 & 2.18 & 15.33&37.26&76.07&119.07&157.49&203.52&246.69&289.45&362.59&392.89&482.09&536.33&621.77&697.33&783.48 `N BoNeM -N KR Tie` & 31 & 2.18 & 18.97&38.05&61.85&96.35&131.68&175.20&229.28&278.30&368.86&455.01&542.54&576.28&753.78&769.33&784.82 `N BoNeM -N KR JXc` & 35 & 2.30 & 17.73&37.10&65.93&104.73&135.12&182.40&237.82&368.48&444.19&474.94&558.74&584.25&722.61&749.45&889.37 `N BoNeM -N KR QMa` & 39 & 2.92 & 21.00&53.83&99.47&135.04&209.36&248.78&337.02&403.97&470.19&546.77&664.62&768.21&853.46&946.93&1058.06 `N BoNeP -N KR 4Cm` & 5 & 1.12 & 10.10&21.80&34.53&52.94&**69.20**&88.78&112.80&143.67&188.17&210.52&252.70&303.77&355.74&392.22&419.03 `N BoNeP -N KR 4CS` & 6 & 1.13 & 9.31&23.11&35.51&56.29&77.11&92.99&120.16&151.19&179.63&217.68&244.75&292.50&328.03&362.46&387.20 `N BoNeP -N KR Cla` & 10 & 1.28 & 6.86&29.69&39.90&74.93&87.15&122.21&139.99&201.27&222.60&253.53&275.37&321.67&350.16&385.47&421.30 `N BoNeP -N KR CPr` & 11 & 1.30 & 6.55&30.51&42.04&75.27&86.68&117.72&140.35&199.82&216.17&258.35&280.93&331.44&373.93&402.71&440.35 `N BoNeP -N KR 6Cm` & 15 & 1.38 & 11.51&24.77&40.86&65.58&84.98&105.34&137.82&187.49&235.82&276.33&324.97&372.66&435.94&470.59&525.82 `N BoNeP -N KR Tie` & 22 & 2.06 & 18.15&34.93&63.42&98.77&129.21&179.81&224.98&277.16&331.25&399.71&466.61&540.55&620.63&675.01&785.68 `N BoNeP -N KR Def` & 26 & 2.11 & 17.48&39.97&72.50&112.59&149.29&187.89&236.08&284.89&332.48&380.62&439.68&515.16&600.16&656.77&749.71 `N BoNeP -N KR JXc` & 32 & 2.19 & 17.01&34.81&65.09&103.38&143.31&185.32&242.27&290.60&361.36&441.20&508.06&589.21&687.21&762.41&849.86 `N BoNeP -N KR QMa` & 40 & 3.05 & 21.39&52.56&94.71&158.02&196.77&270.19&335.88&419.22&509.74&596.41&684.62&813.48&926.26&1010.03&1129.40 [table:normalsort:avg:all] [plot:normal:8:A] [plot:normal:8:B] [plot:normal:8:C] [plot:normal:lineplot:A] [plot:normal:lineplot:B] [plot:normal:lineplot:C] Sorting Many Continuous Sets of 2-16 Items ------------------------------------------ Here the benchmark shown in algorithm [algo:inrow] was used. Instead of sorting a single array multiple times, multiple arrays are created adjacent to each other and sorted in series. The number of arrays used is chosen in a way that all of them do not fit into the CPU’s L3 cache. Since the reference array is sorted before the measurement, the original array should not be present in the cache, causing a cache miss on every access. The results are similar to the previous ones. A difference we can see when comparing figures [plot:inrow:lineplot:A], [plot:inrow:lineplot:B] and [plot:inrow:lineplot:C] to figures [plot:normal:lineplot:A], [plot:normal:lineplot:B] and [plot:normal:lineplot:C] from the single sort measurement is that the `CPr` swap that operates on pointers and moves values around in memory became worse compared to the `4Cm` and `4CS` implementations for array sizes greater than 2. Here the values can probably get pre-loaded for the next conditional swap while the current one is finishing, while `CPr` accesses the element’s reference value only when the destination address is calculated, which results in less pre-loading that can be done. The complete overview over the average values of each sorter across all three machines can be seen in table [table:inrowsort:avg:all]. We see speed-ups for using the sorting networks from 25% at array size 2 all the way up to 59% at array size 15. [plot:inrow:lineplot:A] [plot:inrow:lineplot:B] [plot:inrow:lineplot:C] [!tbp] l | r @   r | r@  r@  r@  r@  r@  r@  r@  r@  r@  r@  r@  r@  r@  r@  r@  r| & & & Rank & GeoM & 2&3&4&5&6&7&8&9&10&11&12&13&14&15&16 `I -I KR POp` & 26 & 2.39 & 20.67&44.67&72.23&104.33&142.53&182.47&222.07&262.90&302.13&342.53&382.87&425.57&468.73&513.80&556.07 `I -I KR Def` & 27 & 2.42 & 22.17&47.30&74.60&106.70&143.40&183.47&223.37&263.23&302.53&342.47&383.63&425.73&467.77&511.97&554.67 `I -I KR STL` & 34 & 2.57 & 27.07&53.47&82.40&114.07&150.63&191.53&232.87&273.00&313.67&355.40&398.73&441.80&484.63&529.97&573.47 `I -I KR AIF` & 35 & 2.57 & 23.67&50.00&79.53&111.97&149.87&193.77&237.77&280.87&324.37&367.43&412.23&456.13&499.43&544.10&589.47 `N Best -I KR 4CS` & 2 & 1.05 & 15.97&25.00&35.67&48.73&57.60&70.90&79.30&96.50&**111.23**&133.43&148.70&175.07&222.03&226.00&261.80 `N Best -I KR 4Cm` & 5 & 1.08 & 15.97&**23.67**&34.87&47.20&56.30&70.20&82.20&**95.07**&114.20&**131.30**&**146.00**&205.50&236.97&271.73&299.80 `N Best -I KR 6Cm` & 8 & 1.29 & 15.97&25.70&37.03&55.13&66.80&85.50&94.37&117.63&139.07&160.90&181.33&270.23&301.70&347.80&379.53 `N Best -I KR Cla` & 11 & 1.40 & 14.00&25.30&39.90&70.83&87.67&114.43&128.97&169.67&172.10&201.00&218.60&231.20&254.33&278.03&295.50 `N Best -I KR CPr` & 13 & 1.43 & **13.90**&28.37&41.07&74.03&91.33&117.27&130.77&167.40&165.93&201.17&220.90&233.23&259.73&288.53&307.23 `N Best -I KR Def` & 22 & 2.33 & 23.67&44.33&72.00&96.07&129.27&167.40&205.97&245.73&262.57&328.70&373.57&408.03&478.33&531.47&605.10 `N Best -I KR Tie` & 23 & 2.34 & 25.97&43.77&69.00&92.00&119.37&166.30&200.83&235.50&274.13&324.33&380.43&435.33&486.90&555.63&639.40 `N Best -I KR JXc` & 32 & 2.52 & 24.60&45.30&70.63&99.83&137.83&173.73&213.70&253.73&295.23&363.17&418.60&479.23&560.80&638.80&696.60 `N Best -I KR QMa` & 37 & 3.10 & 24.67&46.00&83.13&130.07&176.83&223.30&279.90&333.30&357.03&500.63&575.63&580.67&684.17&773.97&845.93 `N BoNeL -I KR 4CS` & 1 & 1.04 & 16.00&25.00&35.63&48.50&57.10&69.30&78.83&100.57&115.27&136.13&150.63&**174.33**&200.53&**207.23**&**247.57** `N BoNeL -I KR 4Cm` & 3 & 1.06 & 15.97&24.00&34.97&47.70&57.00&70.50&79.10&100.80&117.30&136.77&148.63&180.77&**198.23**&245.17&268.67 `N BoNeL -I KR 6Cm` & 9 & 1.32 & 16.00&26.00&37.60&55.70&66.93&85.33&94.57&124.93&145.57&173.90&194.77&269.73&328.20&350.90&355.43 `N BoNeL -I KR Cla` & 17 & 1.65 & 13.93&25.13&40.00&70.97&87.63&114.33&129.00&200.47&223.80&268.67&286.87&344.30&365.33&400.97&416.60 `N BoNeL -I KR CPr` & 18 & 1.67 & 14.60&28.40&41.10&74.00&91.40&117.00&130.73&197.03&229.17&265.43&282.93&335.90&362.13&389.33&407.03 `N BoNeL -I KR Tie` & 25 & 2.36 & 25.97&43.63&68.33&92.87&120.40&166.40&200.63&237.40&282.20&331.43&384.17&444.20&494.67&562.53&639.57 `N BoNeL -I KR Def` & 28 & 2.42 & 24.10&45.40&74.37&95.90&128.67&166.37&205.67&248.33&306.17&330.20&414.60&438.80&525.63&575.17&630.03 `N BoNeL -I KR JXc` & 31 & 2.47 & 24.33&45.33&70.67&99.67&137.83&173.80&213.67&260.80&303.70&347.07&402.80&459.87&533.17&576.37&640.67 `N BoNeL -I KR QMa` & 39 & 3.27 & 25.00&46.50&83.20&130.23&177.13&223.37&280.17&371.97&434.07&505.10&582.43&688.13&756.13&833.53&945.70 `N BoNeM -I KR 4Cm` & 7 & 1.19 & 16.00&23.93&**34.33**&47.33&55.90&66.60&**76.57**&170.50&177.00&167.90&194.00&194.70&253.57&262.33&307.97 `N BoNeM -I KR 4CS` & 12 & 1.41 & 16.00&24.63&41.20&63.03&55.77&111.00&77.50&165.30&203.97&204.63&200.43&294.33&321.33&349.40&382.30 `N BoNeM -I KR 6Cm` & 16 & 1.52 & 16.00&25.93&36.90&53.23&65.30&125.07&92.47&193.63&244.90&239.67&236.83&344.47&348.43&383.13&363.77 `N BoNeM -I KR Cla` & 19 & 1.86 & 16.67&29.33&41.20&83.67&89.00&135.63&128.30&220.20&259.97&297.53&311.73&402.10&427.67&484.07&477.33 `N BoNeM -I KR CPr` & 20 & 1.93 & 16.27&29.73&41.90&92.67&91.70&137.23&157.13&240.17&275.77&304.70&309.10&399.97&404.70&479.83&531.63 `N BoNeM -I KR Tie` & 29 & 2.45 & 26.00&44.33&69.03&92.70&121.00&162.47&197.57&239.17&305.27&373.97&427.80&516.20&569.87&566.87&648.47 `N BoNeM -I KR Def` & 30 & 2.46 & 23.33&45.40&73.73&105.90&144.73&182.37&212.07&246.03&305.07&329.70&410.60&432.07&501.03&575.37&633.07 `N BoNeM -I KR JXc` & 36 & 2.64 & 24.40&42.67&70.63&123.73&130.33&168.57&212.20&281.13&333.73&432.47&458.37&492.10&583.77&632.10&741.07 `N BoNeM -I KR QMa` & 38 & 3.20 & 29.00&52.00&89.77&126.97&177.13&225.23&278.20&346.27&400.33&467.37&533.50&611.23&702.33&774.00&878.87 `N BoNeP -I KR 4CS` & 4 & 1.08 & 16.00&24.97&35.70&**45.70**&**54.77**&66.70&78.73&99.60&122.90&145.37&156.00&190.20&210.63&263.43&279.30 `N BoNeP -I KR 4Cm` & 6 & 1.09 & 16.00&23.80&36.73&46.93&54.97&**65.80**&78.73&101.50&119.97&138.13&155.37&218.03&211.63&285.87&305.37 `N BoNeP -I KR 6Cm` & 10 & 1.35 & 16.33&26.00&37.30&52.43&63.17&79.57&93.27&124.70&151.13&174.90&241.00&287.13&329.40&385.30&411.77 `N BoNeP -I KR Cla` & 14 & 1.45 & 14.63&25.03&40.03&66.00&81.47&108.33&116.67&178.70&188.40&215.53&227.33&269.33&285.97&316.60&333.97 `N BoNeP -I KR CPr` & 15 & 1.47 & 13.90&28.33&41.13&70.27&83.43&108.47&116.93&177.00&185.57&210.00&228.57&263.30&295.20&318.77&341.63 `N BoNeP -I KR Def` & 21 & 2.32 & 23.67&46.10&71.00&96.90&128.83&160.97&196.50&241.30&278.13&321.10&364.47&419.27&478.67&544.53&597.17 `N BoNeP -I KR Tie` & 24 & 2.35 & 25.50&44.67&69.63&90.33&124.73&160.10&193.67&236.70&284.33&327.73&383.80&440.30&500.37&557.47&633.60 `N BoNeP -I KR JXc` & 33 & 2.54 & 24.67&43.33&70.60&100.33&133.17&169.03&217.23&258.40&318.80&377.17&428.93&489.80&564.90&625.70&713.63 `N BoNeP -I KR QMa` & 40 & 3.29 & 26.33&49.67&87.57&130.00&165.33&224.03&274.07&367.53&428.20&509.07&570.77&684.67&776.17&849.17&927.27 [table:inrowsort:avg:all] Sorting a Large Set of Items with Quicksort ------------------------------------------- After seeing the first two results, we wanted to know how the base case sorters perform when used inside a scalable sorting algorithm. For that we modified introsort, a quicksort implementation from the STL library, as follows: Introsort calls insertion sort only once, right at the end. Since that is not possible with the sorting networks, they had to be called directly when the partitioning resulted in a partition of 16 elements or less. Also we determined the pivot using the 3-element Bose Nelson parameter network instead of using `if-else` and `std::swap`. The sorters were measured using benchmark [algo:normal] with parameters * `numberOfIterations` = 50 * `numberOfMeasures` = 200 * `arraySize` = 1024 × 16 = 16384 = 214. To have a basis of comparison we also measured sorting with `std::sort`. These times can be taken from figures [plot:quicksort:A], [plot:quicksort:B] and [plot:quicksort:C]. The `QSort -Q KR Def` sorter is just a direct copy of the STL sort doing a final insertion sort at the end. That was measured to see that our code copy does as well as `std::sort` before doing the modifications. [plot:quicksort:A] [plot:quicksort:B] [plot:quicksort:C] [!h] | | A:   `N Best -Q KR Cla` | B:   `N Best -Q KR Cla` | C:   `N Best -Q KR Cla` | | --- | --- | --- | --- | | `I -Q KR Def` | 1.76% | 2.1% | 8.76% | | `I -Q KR POp` | 3.99% | 2.58% | 6.47% | | `StdSort -Q` | 12.3% | 10.6% | 14% | [table:completesort:speedups] Speed-ups of including sorting networks into a sorting algorithm like quicksort can be seen in table [table:completesort:speedups]. What is notable is that the variants with insertion sort at the base are faster than the one with the final insertion sort, which should come from the fact that they are already specialized for the item they sort and do not require a predicate for the sorting. Also, the base case is called right after the partitioning is at a low enough level, which means that the elements are still present in the first- or second-level cache. That also explains why the `Cla` conditional swap performs the best with quicksort, while we saw in the last section that this is not necessarily the case when we have a cache miss. Recalling the results from the previous sections, we appeared to be achieving great improvements in reducing the time needed for sorting sets of 2-16 items. By measuring only the sorting of the small sets we have exploited the networks’ strength: not containing conditional branches. The results from the measurements with quicksort highlight the networks’ weakness: The larger code size. When integrating the sorting networks into quicksort for sorting the base cases, every time a partition results in one part having 16 elements or less, we switch from the code for quicksort to the code for the sorting network. Thus, the code for quicksort is partly removed from the L1 instruction cache and replaced with the code for the sorting network. Because the network’s code is just a flat sequence of conditional swaps, each line of code is accessed exactly once per sort. That means it caused a lot of quicksort’s code to be removed from the instruction cache without gaining a speed-up because its code is now in the cache, and will be removed again when quicksort is handed back the flow of control and loads its code back into the instruction cache. We can see that effect especially for machines A and B which have 32 KiB of L1 instruction cache, where the speed-up is hardly over 2% for the best network base case over the best insertion sort base case. Where we got a much more improvement is on machine C, which has double the space in its L1 instruction cache. Here we achieved a speed-up of almost 6.5% when making use of the best networks. It is no surprise that we do not see improvements similar to those in section [section:experiments:normal] or [section:experiments:inrow] because the partitioning that quicksort performs takes the same amount of time no matter which base case sorter is used, representing a part of the algorithm that is not optimizable through using sorting networks. [!h] | | A:   `N BoNeL -s332 4CS` | B:   `N BoNeL -s332 4CS` | C:   `N BoNeL -s332 4CS` | | --- | --- | --- | --- | | `I -s332 Def` | 17.4% | 17.5% | 29.2% | | `StdSort -s` | 43.6% | 43.49% | 51% | [table:samplesort:speedups] Sorting a Medium-Sized Set of Items with Sample Sort ---------------------------------------------------- Sample sort was measured using benchmark [algo:normal] with parameters: * `numberOfIterations` = 50 * `numberOfMeasures` = 200 * `arraySize` = 256. The measurements were done with two different goals in mind: The first was to see which parameters work best for the machines used and the array size set. This can be seen in figures [plot:samplesort:bonel:A], [plot:samplesort:bonel:B] and [plot:samplesort:bonel:C] for the Bose Nelson networks optimizing locality. To be able to compare the results on the different machines, the configurations were ordered based on the times from machine A, and are in the same order in the other two plots. An oversampling factor of 3 and block size of 2 performed best on machine A and B. That configuration also performs best when using the other networks or insertion sort as a base case. On machine C block sizes larger than 2 performed better (on average) along with an oversampling factors of 3 or greater. We measured larger variances and got a lot more outliers, so here choosing a “best” configuration was not so easy. When looking at the other networks and insertion sort as base case, consistently well performing parameters are an oversampling factor of 3 and a block size of 4, but with very little lead over other configurations. That is interesting to see because all three machines run x86 assembly instructions and have the same number
arxiv_0000102
Quantum and classical registers =============================== We present a generic theory of “registers” in imperative programs and instantiate it in the classical and quantum setting. Roughly speaking, a register is some mutable part of the program state. Mutable classical variables and quantum registers and wires in quantum circuits are examples of this. However, registers in our setting can also refer to subparts of other registers, or combinations of parts from different registers, or quantum registers seen in a different basis, etc. Our formalization is intended to be well suited for formalization in theorem provers and as a foundation for modeling quantum/classical variables in imperative programs. We study the quantum registers in greater detail and cover the infinite-dimensional case as well. We implemented a large part of our results (including a minimal quantum Hoare logic and an analysis of quantum teleportation) in the Isabelle/HOL theorem prover. Introduction ============ In this work, we are concerned with the semantics of “registers”, that is, “variables” that can occur on the left or right hand side of an assignment or similar operation in imperative programs. (E.g., *x* in $\assign x 5$. Sometimes known as lvalues in the classical setting.) Our main focus is on quantum programs, but we also consider classical programs (or classical variables in quantum programs). Our central aim is to give a simple modeling suitable for formalizing the semantics of imperative programs in proof assistants. #### What is a register? For the purpose of this work, a *register* refers to some part of the state of a program that can be mutated. For example, if the state is a triple (*x*, *y*, *z*) of integers, then $\assign x 5$ would change (*x*, *y*, *z*) into (5, *y*, *z*), so *x* is a register referring to the first component of a 3-tuple. Quantumly, any access to a variable mutates it. A typical example of a quantum register would be *Q* in the operation $\qapply UQ$ (meaning: apply the unitary *U* to the quantum state in *Q*). Thus a quantum register in our setting coincides with what is usually already informally referred to as a quantum register. To understand the difficulty in formalizing the semantics of registers – after all, isn’t “the *n*-th component of the tuple” already a more-or-less formal semantics? – we first look at some more complex examples of classical registers: $\assign{\pair xy} e$ (*pairing*). Here the result of computing *e* (an expression returning a tuple) is assigned to the pair of variables $\pair xy$. $\assign{\chain x1}5$ (*chaining*). Here *x* is assumed to be a variable of type *i**n**t* × *i**n**t* (pairs of integers), and we assign 5 to the first component of *x*. $\assign{f(x)}4$ (*mapping*). Here *f* is some bijection on the domain of *x*. E.g., if *f* is negation, then this command would be equivalent to $\assign{x}{-4}$. This allows us to get a different “view” on a variable. E.g., *f* could convert between sorted association lists and partial functions, depending on what is easier to use at a given moment. We might have a variable *x* that contains a comma-separated list of strings (or some other variable length encoding). And we might want to refer to the *i*-th string *x**i* in *x*. This situation is different from the previous ones in that before, all the registers referred to a continuous region of the program state (e.g., a sequence of addresses in a heap). But mutating *x**i* changes other parts of the memory, too, because the strings following *x**i* have to be shifted (and possibly even reencoded). We still wish to consider *x**i* as a register in that case. Combinations of the above: For example, $\assign{\pair{\chain x1}{y}}e$ would update the first component of *x* and the whole of *y*. In fact, *x**i* in the previous example could be expressed as $\chain{f(x)}i$ if *f* is a bijection between comma separated strings and tuples of strings. In all of the above cases, it is fairly easy to give denotational semantics to the command $\assign Xe$ where *X* is any expression involving variables and the operations above. We simply evaluate *e* and then recursively descend through *X* to figure out how the result needs to be put in the memory/program state. However, this is already somewhat unsatisfactory because we cannot cleanly separate the meaning of *X* from that of $\assign{}{}$. In other words, we would like some denotational semantics $\denot X$ that describes the register *X* without referring to its structure, and then we define $\assign Xe$ as: “compute $v:=\denot e$, update the state at position $\denot X$ with *v*”. Quantumly, the situation becomes even more challenging: $\qapplyOLD{\pair QR}U$ (*pairing*). Apply a unitary *U* to the pair $\pair QR$ of the registers *Q* and *R*. Classically, we can always work around the need to have tuples as registers. E.g., $\assign{\pair xy} e$ could be rewritten as $\assign ze$, $\assign x{\mathit{fst}(z)}$, $\assign y{\mathit{snd}(z)}$. Quantumly, this is not possible, we cannot break up *U* into two parts. Support for register tuples cannot be broken down to more elementary features. $\qapplyOLD{\chain Q1}U$ (*chaining*). This would mean to apply *U* to the first component (or first qubit) of *Q*. It is equivalent to $\qapplyOLD Q{U\otimes I}$ but defining a general rule for such a syntactic rewriting is troublesome in more complex situations (e.g., when both $\chain Q1$ and $\chain Q2$ occur in the same nested expression). $\qapplyOLD{UQ}{U'}$ (*mapping*). Here *U**Q* is the register *Q*, but seen in a different basis, via the basis transformation *U*. This is often seen in informal descriptions (e.g., “*Q* contains *x* in the diagonal basis”) but in a formal description we need to explicitly apply and unapply the basis transform *U*, obfuscating the meaning. Combinations of the above. Say, if *U* denotes the transformation between computational and Bell basis, we could write $\chain{(U{\pair QR})}1$ to refer to the first qubit of *Q**R* in the Bell basis. A motivating example from physics: Let *X* denote a quantum variable that describes the *position* of a single particle. If *U* is the Fourier transform, then *P* :  = *U**X* denotes a quantum variable that describes the *momentum* of the particle. Physically, none of the two variables is more “real” than the other. Hence it makes sense to treat both *P* and *X* as first-class quantum registers, instead of having to apply a Fourier transform to the position each time we want to access the momentum. But they cannot be completely separate registers, as performing an operation (unitary/measurement/initialization) on the position affects the momentum and vice versa. A complex motivating example (from cryptography) for those features is the following: In, Zhandry introduces *compressed oracles*, a proof technique for reasoning about adversaries/algorithms accessing quantum random oracles (a quantum random oracle is a uniformly randomly chosen function $h:\bits n\to\bits m$ where the algorithm has access to the unitary $U\_h\colon \ket{x,y}\to\ket{x,y\oplus h(x)}$). In that proof technique, a transformation is applied to the register *H* initially containing the superposition of all possible functions *h* to map it to an ordered sequence of pairs (*x*, *y*) where only those *x* occur such that *h*(*x*) is not a uniform superposition of *y*’s. (The compressed representation has the advantage that it can be represented with little quantum memory, even though representing a function *h* needs *m*2*n* qubits.) Working with this technique (and even understanding it) is quite difficult due to the fact that all reasoning needs to explicitly take into account the various encoding/decoding operations (or be very informal). Registers of the above kind could allow us to reason both about the register *H* as well as the register *D* that contains the compressed oracle without having to choose which one is the physically chosen one. And we can refer to registers such as $\chain Hx$, the register that contains the output *h*(*x*), even when we physically only have the register *D*. This makes the formal description of algorithms considerably simpler. To understand the difficulties in modeling quantum registers, we encourage the reader to come up with a clean and simple formalism in which a program such as, e.g., $\qapply U{\pair{\chain x2}{\CNOT\pair{y}{\chain x3}}}$ makes sense. (Here *x* is a three qubit and *y* a qubit register.) #### Existing approaches. What approaches exist for modeling classical and quantum registers? While we are not aware of an explicit theoretical treatment of what a register is, modeling registers is something that needs to be done implicitly whenever an imperative programming language or Hoare logic with variables is defined. [(i)] [item:depfun.c] *Dependent function (classical registers).* The state of the program with variables *V* is represented as a dependent function *m* : ∏*v* ∈ *V**T**v* that maps a variable *v* ∈ *V* to the set *T**v* (its domain or type). Assigning $\assign vx$ can then be formalized as changing the program state from *m* to *m*(*v* :  = *x*). (For simplicity, we assume here that *x* is a constant.) The advantage of this approach is that it is very simple and obvious. It has probably been used innumerable times in the literature. It is also very easy to formalize, especially in a logic that supports dependent functions.[1](#fn1) The disadvantage is that already the register $\pair xy$ is not a register in this sense. $\assign{\pair xy}x$ needs to be broken down in the definition of the semantics into several updates of the function *m*. *Avoiding mutable variables.* We can sidestep the whole question of variables by having only immutable variables (i.e., single-assignment variables), and a heap with mutable cells that can be referenced using integers. The advantage of this approach is that it is easily modeled using a state monad (with the heap as the state), the single-assignment variables arise naturally from the do-notation. This gives us effectively mutable variables by using a single-assignment variable that points at a datastructure in the heap. The advantage is that the modeling of this is that the modeling is simple and formally elegant, while it is still possible to build up more complex registers (e.g., a register-pair can be emulated using a variable that points to two addresses of parts of other datastructures). The disadvantage is that we always need to take explicit care about possible aliasing (e.g., using separation logic ), and that the “complex registers” cannot be used transparently; program code needs to explicitly follow pointer chains. Imperative HOL uses this approach. *Explicit tensor products with swapping and associativity (quantum registers).* Here, the state of the system is represented by a quantum state in $\calH\_A\otimes\calH\_B\otimes\calH\_C\otimes\dots$ where *A*, *B*, *C*, … are the variables/registers used in the program. A statement such as $\qapplyOLD BH$ then formally means to apply 1 ⊗ *H* ⊗ 1 ⊗ … to the state. Unfortunately, it becomes more complex when a unitary is applied to several registers. E.g., $\qapplyOLD{\pair CA}\CNOT$ means $(1\otimes \sigma \otimes 1 \otimes\dots)(\sigma\otimes 1\otimes\dots) \paren{\CNOT\otimes 1\otimes\dots} (\sigma\otimes 1\otimes\dots)(1\otimes \sigma \otimes 1 \otimes\dots)$ where *σ* is the swap unitary, i.e., *σ*(*ψ* ⊗ *ϕ*) = *ϕ* ⊗ *ψ*. And this is only if we assume that the tensor product is associative. (I.e., (*X* ⊗ *Y*) ⊗ *Z* = *X* ⊗ (*Y* ⊗ *Z*).) If not, we additionally need to insert occurrences of *α*, the canonical isomorphism between (*X* ⊗ *Y*) ⊗ *Z* and *X* ⊗ (*Y* ⊗ *Z*). We believe that this is what people usually implicitly have in mind when drawing a quantum circuit. (Essentially, a quantum circuit is just a program that specifies a sequence of unitaries applied to certain wires/registers.) The advantage is that this is quite a universal approach (any of the examples of registers we gave above can be represented by doing a suitable basis transform, then applying the unitary, and then transforming back). The disadvantage is that compiling a given application of a unitary into the correct sequence of operations is nontrivial, tedious, and hard to work with. This approach is taken by. (In a less abstract setting, by tensoring *n* × *n* matrices.) In the denotational semantics of Qwire formalized in Coq, the same approach is taken, on a list of qubit variables (called wires there). However, the language is made typesafe by hiding the wire indices from the user (by using variables to abstractly refer to the wire indices). *Diagrammatic approaches (quantum registers).* Following, quantum processes can be described by diagrams (basically, graphs in which the edges are quantum wires). In this approach, the problem of what a quantum register is seemingly does not occur. However, formally, a diagram actually corresponds to a morphism in a symmetric monoidal category, which means it would be expressed as a composition of quantum gates together with the canonical morphisms *σ* and *α* mentioned before. So at this level of formalization, we have the same situation as in the previous item. [item:depfun.q] *Dependent function (quantum registers).* We can reuse the idea from the classical case of using a dependent function: Say we have quantum variables $V=\braces{A,B,C,\dots}$ with corresponding Hilbert spaces $\setC^{T\_A},\setC^{T\_B},\setC^{T\_C},\dots$ for sets *T**A*, ….[2](#fn2) Here *T**A*, … are the “types” of the variables, e.g., $T\_A=\bit$ if *A* is a qubit. Then we define the state space of the program as $\calH:=\setC^{\paren{\prod\_{v\in V}T\_v}}$. I.e., the basis states of $\calH$ are indexed by dependent functions. (In other words, a quantum memory is a superposition of classical memories.) We can now define the canonical bijection $f\_A:\paren{\prod\_{v\in V}T\_v}\to T\_A\times \paren{\prod\_{v\in V\setminus\{A\}}T\_v}$. This can be lifted to an isomorphism $U\_A: \setC^{\paren{\prod\_{v\in V}T\_v}}\to \setC^{T\_A} \otimes \setC^{\paren{\prod\_{v\in V\setminus\{A\}}T\_v}}$ via $U\_A\ket{m}:=\ket{f\_A(m)}$. So $\qapplyOLD AH$ could be implemented as $\adj{U\_A}(H\otimes I)U\_A$. This approach also easily allows us to formalize $\qapplyOLD {(A,B,\dots)} H$ by defining $f\_{A,B,\dots}: \paren{\prod\_{v\in V}T\_v}\to \paren{\prod\_{v=A,B,\dots}T\_v} \times \paren{\prod\_{v\in V\setminus\{A,B,\dots\}}T\_v}$ and then *U**A*, *B*, … analogously. But note that this approach does not give us a unifying notion of registers; we have variables *v* ∈ *V*, and we have lists of variables which are different kind of object. The advantage of this approach is that is easier to handle than explicit insertion of swaps *σ* and associators *α*, and that it does not introduce the need to have a canonical ordering of the variables (*V* is seen as a set, not as a list). Also having an infinite set *V* of variables is simpler with this approach. One disadvantages is that it is unclear how this approach would work with variables whose spaces are not of the form $\setC^{T}$ or ℓ2(*T*), i.e., we always need a canonical basis of their spaces. (A basis always exists, but its existence requires the axiom of choice and fixing an “arbitrary basis” might make formalization more challenging, especially in constructive logic.) Another disadvantage is that we still have no general concept of what a register is: *A* and (*A*, *B*) are conceptually different things, we cannot apply, e.g., further operations such as tupling to (*A*, *B*), and we have no notion of *U*(*A*) for a register *A*. This approach has been taken in the *qrhl-tool*. (However, their formalization is only partial at this point.) *Via lists of dimensions (quantum).* A less abstract variant of the previous is to specify the available registers by a list of dimensions. (*d*1, …, *d**n*) means there are *n* quantum registers, each of dimension *d**i*. Variables are simply referred to by their indices 1, …, *n*. To apply a quantum operation to a list of quantum variables, the operator needs to be “lifted” by suitable reindexing and tensoring. The advantage is that this plays well with an approach where quantum states and operations on it are represented as traditional vectors/matrices (i.e., vectors/matrices indexed by integers, as opposed, e.g., elements of $\setC^X$ for some arbitrary set *X*). This may interact more easily with some existing formalizations of matrices (e.g., the one from ). The disadvantage is that we cannot construct registers from simpler registers with this approach. And it is also very low-level, with the need for index-calculations, and untyped variables. The Isabelle/HOL formalization of quantum Hoare logic from uses this approach. See in particular that describes the encoding/decoding for lifting operators. The low-level language SQIR formalized in Coq uses the same approach, except that there every “variable” is a single qubit. [item:splitting] *Via splitting isomorphisms (classical/quantum).* One possibility how to generically define what a register is is to specify an isomorphism/bijection between the state of the program and a bipartite state which consists of “the variable” and “the rest”. That is, in the classical case, a register *v* with domain *T* would be a bijection *v* : *M* → *T* ⊗ *M*ʹ for some *M*ʹ. (Here *M* is the set of program states.) The bijections *f**A*, *f**A*, *B*, … in are examples of this. Then the command $\assign x5$ would map a state *m* to $\letin{(t,m')}{x(m)}{x^{-1}(5,m')}$. Here *x* does not have to represent a single variable. Any of the examples of registers we gave can be expressed as such a function *a*. Similarly, a quantum register with space $\calH\_a$ would be a unitary $a:\calH\_M\to\calH\_a\otimes\calH'$ for some $\calH'$. *U**A*, *U**A*, *B*, … in are examples of this. And $\qapplyOLD aU$ would correspond to applying $\adj a(U\otimes I)a$ to the program state. We can even formulate the “chaining” (cf. ) of registers generically. If we have a quantum register *a* with space $\calH\_a$ defined for program states in $\calH\_M$, and another register *b* that is defined for program states in $\calH\_a$, then $\chain ab$ is simply (*b* ⊗ *I*)*a*. (And similarly for classical registers.) The advantage of this approach is that we now have a generic notion of registers that encompasses the examples above (and more). When specifying the semantics of a language, there is no need to treat registers and lists of registers etc. differently. The disadvantage is that while all the examples can be expressed as registers, the operations that construct those registers from simpler ones (in particular, pairing) cannot be expressed. What would be (*a*, *b*) for general *a* and *b*? Under which circumstances is (*a*, *b*) even well-defined? *Getter and setter functions (classical registers).* A register *a* with domain *T* can be characterized by two functions *g* : *M* → *T* (the *getter*) and *s* : *T* × *M* → *M* (the *setter*). Here *g*(*m*) intuitively retrieves the value of *a* in the program state/memory *m*, and *s*(*x*, *m*) updates the value of *a* in *m* to *x*. A register could thus be characterized as such a pair (*g*, *s*) satisfying some natural axioms. This has the same advantages as the approach, except that it only applies to the classical case. However, it turns out that it is possible to formulate all the constructions on registers in our examples in a generic way. (This follows from the fact that registers defined by getters/setters turn out to stand in 1-1 correspondance with the classical registers that we derive in from a more general theory.) The implementation of records in Isabelle/HOL uses this approach to model record fields. (A record can be seen as a program memory, and then the field names are the program variables.) As these examples illustrate, it is not easy to get a definition of a register that is at the same time easy to formalize, generic enough to allow us to construct complex registers from simple ones, and works in the classical and quantum setting. #### Our contribution. In this work, we present a solution to the problem of generically modeling registers. It is based on the general idea that a register with domain *A* is something that maps an “update” on *A* to an “update” on the whole program state, subject to some natural conditions. First () we present the approach in a general (category-theoretical but simple) way that encompasses both classical and quantum registers (and possibly could be adapted to other computational settings, e.g., computations in non-local models or in other physical models, see, e.g., for some examples). We show that in this generic setting, we can define constructions on registers as well as reason about elementary properties. In we instantiate this approach to quantum registers (both for finite- and for infinite-dimensional quantum systems), and in to classical registers. (Interestingly, the quantum case is conceptually somewhat simpler.) We then illustrate the use of quantum registers in by introducing a simple quantum Hoare logic and verifying the correctness of quantum teleportation. In, we introduce the additional concept of register categories with *complements* in which there is a well-defined notion of “everything outside of register *F*” (and we show that the quantum register category has complements). In, we study the quantum register category in more detail: We show that many different quantum mechanical concepts (such as quantum channels, measurements, mixed states, etc.) interoperate well with our notion of registers. A large fraction of the results presented in this paper have been formally verified in Isabelle/HOL, namely the everything from Sections [sec:generic]–[sec:complements] (though everything related to quantum registers is formalized only for finite-dimensional systems). We present the formalization in. #### Related work. A number of papers model quantum/classical registers implicitly or explicitly in different ways, some examples have been discussed above already. Various works use von Neumann algebras to model quantum (sub)systems, somewhat similar to what we do in the infinite-dimensional setting, e.g.,. The relationship to our work is subtle and discussed in. Registers, generically ====================== #### A register category. The basic idea behind our formalization of registers is that it transforms updates on the register’s domain into updates on the program state. For example, if we have a quantum register *F* with quantum space $\calH\_F$, and some unitary *U* on $\calH\_F$ (which constitutes a specification of how we want to update the content of *F*), we need to know how *U* operates on the whole program state, i.e., we want to transform *U* into a unitary *F*(*U*) on the program state space $\calH\_M$. Or classically, an update on a variable with domain *T* could be a function *T* ↦ *T*, and thus gets transformed into a function *M* ↦ *M* (*M* is the set of all program states). In the generic setting, we do not specify explicitly what updates are (e.g., functions, matrices, …). Instead, we only require that the set of updates form a monoid because we can compose (multiply) updates, and there is a “do nothing” update. (For any variable type, we have a different monoid of updates. The overall program state is treated no differently, so we also have a monoid of updates of program states.) Said concisely, we are in a category in which the objects are monoids. We will use bold letters $\mathbf A,\mathbf B,\dots$ for those objects. In this setting, a register is a function that transforms an update of one type **A** into an update of another type **B**. So the next step in our formalization is to specify functions between those monoids (i.e., transformations of updates). Thus in our category, the morphisms are (not necessarily all) functions between the monoids. For example, in the quantum case, this could just be linear functions mapping matrices to matrices. We use the term *pre-register* for these morphisms. (Since actual registers will need to satisfy some additional restrictions.) Pre-registers have to satisfy the usual axioms of categories as well as a few additional natural ones listed in the first half of. We furthermore require that the resulting category has a tensor product, having some (but not all[3](#fn3)) of the usual properties of a tensor product. (Notation:  ⊗  is right-associative. I.e., $\mathbf A\otimes \mathbf B\otimes\mathbf C$ means $\mathbf A\otimes (\mathbf B\otimes\mathbf C)$.) **General axioms:** ()[ax:monoids] The objects (a.k.a. *update monoids*) $\mathbf A,\mathbf B,\dots$ of $\calL$ are monoids. (Which monoids are objects depends on the specific register category.) ()[ax:preregs] The pre-registers are functions $\mathbf A\to \mathbf B$. (Which functions are pre-registers depends on the specific register category.) They satisfy the axioms for categories, i.e., they are closed under composition (if *F*, *G* are pre-registers, *F* ∘ *G* is a pre-register) and the identity is a pre-register. ()[ax:cdot-a] For any $a\in\mathbf A$, the functions *x* ↦ *a* ⋅ *x* and *x* ↦ *x* ⋅ *a* are pre-registers $\mathbf A\to\mathbf A$. $\calL$ has a tensor product such that: ()[ax:tensor] For all $\mathbf A,\mathbf B$, $\mathbf A \otimes \mathbf B$ is an object of $\calL$, and for $a\in\mathbf A,b\in\mathbf B$, *a* ⊗ *b* ∈ **A** ⊗ **B**. ()[ax:tensorext] For pre-registers $F,G:\mathbf A\otimes\mathbf B\to\mathbf C$, if ∀*a*, *b* : *F*(*a* ⊗ *b*) = *G*(*a* ⊗ *b*), then *F* = *G*. ()[ax:tensor.mult] The tensor product is distributive with respect to the monoid multiplication  ⋅ , i.e., (*a* ⊗ *b*) ⋅ (*c* ⊗ *d*) = (*a* ⋅ *c*) ⊗ (*b* ⋅ *d*). **Registers:** ()[ax:reg.prereg] Registers are pre-registers. (Which pre-registers are also registers depends on the specific register category.) ()[ax:reg.morphisms] Registers satisfy the axioms for morphisms in categories, i.e., they are closed under composition (if *F*, *G* are registers, *F* ∘ *G* is a register) and the identity is a register. ()[ax:reg.monhom] Registers are monoid homomorphisms. (*F*(1) = 1 and *F*(*a* ⋅ *b*) = *F*(*a*) ⋅ *F*(*b*).) ()[ax:tensor-1] *x* ↦ *x* ⊗ 1 and *x* ↦ 1 ⊗ *x* are registers. ()[ax:pairs] If registers $F:\mathbf A\to\mathbf C$ and $G:\mathbf B\to\mathbf C$ have commuting ranges (i.e., *F*(*a*), *G*(*b*) commute for all *a*, *b*), there exists a register $\pair FG:\mathbf A\otimes\mathbf B\to\mathbf C$ such that $\forall a\, b.\ \pair FG(a\otimes b) = F(a)\cdot G(b)$. [fig:axioms] Now we are ready to specify what registers are in our category. A register is a function that maps an update to an update, so the pre-registers in our category are potential candidates for registers. However, potentially not all pre-registers are valid registers. Instead, when instantiating of our theory we specify which pre-registers are registers. We require registers to satisfy a few simple properties: Registers are closed under composition. Intuitively, this means that if we can meaningfully transform an update on one type $\mathbf A$ into an update on $\mathbf B$, and one on $\mathbf B$ into one on $\mathbf C$, we can also transform from $\mathbf A$ to $\mathbf C$. Besides other things, this is needed to define the chaining of registers. Registers *F* are monoid homomorphisms. This means that doing nothing on a variable is translated to doing nothing on the whole program state (*F*(1) = 1). And doing *a* and then *b* on the register has the effect of doing *F*(*a*) and then *F*(*b*) on the whole program state (*F*(*b* ⋅ *a*) = *F*(*b*) ⋅ *F*(*a*)). Mapping an update *x* to *x* ⊗ 1 or 1 ⊗ *x* is a register. Intuitively, this means: if we can update some system *A* with *x*, then we can update a bipartite system *A**B* or *B**A*, too, by doing nothing on *B*. And we require axiom [ax:pairs] to guarantee the existence of “pairs”, see below. The precise axioms are given in. To summarize: if we want to instantiate the theory of registers in a concrete setting (e.g., classical or quantum registers), we have to specify which monoids constitute the objects (e.g., sets of matrices), which functions consitute the pre-registers (e.g., linear maps on matrices), and which subset of the pre-registers are the registers.[4](#fn4) Then, when giving the semantics of imperative programs, we would identify one object $\mathbf M$ that represents possible updates on the overall program state. And then the program variables can be modeled as registers $\mathbf A\to\mathbf M$ where $\mathbf A$ is the type of the variable. (For simplicity, we notationally conflate the type of a variable with the corresponding set of updates since the type determines the set of updates.) And then registers could be built up from “hardcoded” registers (e.g., corresponding to explicitly declared program variables) using the operations on registers we describe next. #### Operations on registers. The simplest operation on registers is *chaining*. For example, say we have a variable *X* of type $\mathbf A$ in some memory of type $\mathbf M$. Recall that for simplicity of exposition, we identify types with their corresponding update monoids. And assume that $\mathbf A$ is a structured type with fields *F*, *G*, …. Then *X* is a register $\mathbf A\to\mathbf M$. And *F* is a register $\mathbf B\to\mathbf A$ for some $\mathbf B$ because a record field can be seen as a variable inside a memory of type $\mathbf A$. Then it makes sense to consider the register $\chain XF$ that refers to *F* within *X*. Since *F* tells us how to transform an update in $\mathbf B$ into an update in $\mathbf A$, and *X* tells us how to transform that into an update in $\mathbf M$, we can simply define $\chain XF$ as *X* ∘ *F*. [Chaining] For registers $F:\mathbf B\to\mathbf C$, $G:\mathbf A\to\mathbf B$, let $\symbolindexmark\chain{\chain FG} := F\circ G$. Chaining is particularly useful combined with the registers $\symbolindexmark\Fst\Fst(a):=a\otimes 1$ and $\symbolindexmark\Snd\Snd(b):=1\otimes b$. These are then registers that refer to the first or second part of a pair. E.g., a register $X:(\mathbf A\otimes \mathbf B)\otimes \mathbf C\mapsto \mathbf M$ would refer to a three-tuple, and $\chain{\chain X\Fst}\Snd$ refers to the second component of *X*. Thus we can already generically reason about tuples. *Mapping* a register (see ) also can be done almost generically. A bijection *f* between two types $\mathbf A$ and $\mathbf B$ often gives rise to a map *F* to transform an update in $\mathbf B$ into an update in $\mathbf A$. (How exactly *f* is interpreted as *F*, and whether *F* is actually a register, depends on the concrete register category.) Then *f*(*X*) for a register of type $\mathbf A$ is nothing else than $\chain XF$. Finally, given two registers $F:\mathbf A\to\mathbf M$, $G:\mathbf B\to\mathbf M$, we want to construct the *pair* $\pair FG:\mathbf A\otimes\mathbf B\to \mathbf M$. What we want is that operating on the register $\pair FG$ is the same as operating on both *F* and *G*. E.g., in the classical setting, we want that $\assign{\pair FG}{(x,y)}$ is the same as $\assign Fx;\assign Gy$. And in the quantum setting $\qapplyOLD{\pair FG}{U\otimes V}$ should be the same as $\qapplyOLD FU;\qapplyOLD GV$. Now $\pair FG$ is not, in general, a meaningful construct. For example, if *F* is a quantum register, then it is not clear what meaning $\pair FF$ would have. Thus we need to come up with a notion of *compatibility* between registers so that we can pair them. This definition will need to be well-behaved. For example, if *F*, *G*, *H* are pairwise compatible, we would want that $\pair FG, H$ are also compatible, otherwise showing compatibility of derived registers would become too difficult. It turns out a good (and simple) definition of compatibility is to require that *F*(*a*), *G*(*b*) commute for all *a*, *b*. This intuitively means that *F* and *G* represent different parts of the memory. [Compatibility / pairs][def:pair] We call registers *F*, *G* *compatible* if ∀*a*, *b*. *F*(*a*) ⋅ *G*(*b*) = *G*(*b*) ⋅ *F*(*a*). We define the *pair* as the unique register such that *F*(*a* ⊗ *b*) = *F*(*a*) ⋅ *G*(*b*). (Notation: $\pair FG$ is right-associative. I.e., $\pairFFF FGH$ means $\pair{F\_1}{\pair{F\_2}{F\_3}}$ and $\pairs{F\_1}{F\_n}$ means $\pair{F\_1}{\pair{\dots}{\pair{\dots}{F\_n}}}$.) If the registers *F*, *G* are compatible, then $\pair FG$ exists, is uniquely defined, and is a register. Based on pairing, we can also define useful registers $\symbolindexmark\swap\swap:\mathbf A\otimes\mathbf B\to\mathbf B\otimes\mathbf A$ and $\symbolindexmark\assoc\assoc:(\mathbf A\otimes\mathbf B)\otimes\mathbf C \to \mathbf A\otimes(\mathbf B\otimes\mathbf C)$ and its inverse $\symbolindexmark\assocp\assocp$. And for registers $F:\mathbf A\to\mathbf{A}',G:\mathbf{B}\to\mathbf{B}'$, we can define a register $F\symbolindexmark\rtensor\rtensor G:\mathbf A\otimes\mathbf B\to\mathbf A'\otimes\mathbf B'$, the tensor product of registers.[5](#fn5) (Notation: $\rtensor$ is right-associative. I.e., $F\rtensor G\rtensor H$ means $F\rtensor (G\rtensor H)$.) Together with the chaining operation, this gives us the possibility to rewrite expressions involving registers. For example, the quantum program $\qapplyOLD{\pair FG}U;\ \qapplyOLD{\pair GF}V$ would perform $\pair GF(V)\cdot \pair FG(U)$ on the quantum memory. And say we have another quantum program $\qapplyOLD{\pair FG}W$, performing $\pair FG(W)$. And we want to find out whether those programs are denotationally equivalent. We can rewrite $\pair GF(V)\cdot \pair FG(U)$ into $\pair FG(\sigma(V)\cdot U)$ (without any recurse to the specific semantics of the quantum case, only using Axiom [ax:reg.monhom] and Law [law:pair.sigma]). Thus the two programs are denotationally equal if *σ*(*V*) ⋅ *U* = *W*. This important fact here is that we have reduced the denotational equivalence question to an expression involving only the matrices *U*, *V*, *W*. We do not need to know how *F*, *G* are defined, they might be elementary quantum registers or complex register-expressions (as long as they are compatible), but all the underlying complexity is handled by the register formalism. The laws for reasoning about registers are given in. We formalized the proofs of all laws in Isabelle/HOL (), so we do not give them here. **Tensor product.** (*F*, *G* are assumed to be registers.) ()[law:tensor3] If *F*(*a* ⊗ (*b* ⊗ *c*)) = *G*(*a* ⊗ (*b* ⊗ *c*)) for all *a*, *b*, *c*, then *F* = *G*.()[law:tensor.ab] $(F\rtensor G)(a\otimes b) = F(a)\otimes G(b)$ **Swaps and associators.** ()*σ*, *α*, *α*ʹ are registers. ()[law:sigma.ab] *σ*(*a* ⊗ *b*) = *b* ⊗ *a* ()$\chain\sigma\Fst=\Snd$ $\chain\sigma\Snd=\Fst$ ()[law:alpha.abc] $\alpha\pb\paren{(a\otimes b)\otimes c}=a\otimes(b\otimes c)$ ()[law:alpha’.abc] $\alpha'\pb\paren{a\otimes(b\otimes c)}=(a\otimes b)\otimes c$ **Compatibility.** (*F*, *G*, *H*, *L* are assumed to be registers.) ()$\Fst,\Snd$ are compatible. ()[law:compat3] If *F*, *G*, *H* are pairwise compatible, $\pair FG$ and *H* are compatible. ()If *F*, *G* are compatible, $\chain FH,G$ are compatible. ()If *F*, *G* are compatible, $\chain HF,\chain HG$ are compatible. ()If *F*, *H* are compatible, and *G*, *L* are compatible, then $F\rtensor G, H\rtensor L$ are compatible. **Pairs.** (*F*, *G*, *H* are assumed to be pairwise compatible registers, and *C*, *D* to be registers.) ()$\pair FG$ is a register. ()[law:pair.select] $\chain{\pair FG}\Fst=F$, $\chain{\pair FG}\Snd=G$, ()$\pair\Fst\Snd = \id$ ()$\pair\Snd\Fst = \sigma$ ()[law:pair.sigma] $\chain{\pair FG}\sigma = \pair GF$ ()[law:pair.alpha] $\chain{\pair F{\pair GH}}\alpha = \pair{\pair FG}H$ ()[law:pair.alpha’] $\chain{\pair{\pair FG}H}{\alpha'} = {\pair F{\pair GH}}$ ()[law:pair.chain] $\pair{\chain CF}{\chain CG} = \chain C{\pair FG}$ ()[law:pair.tensor] $\chain{\pair FG}{(C\rtensor D)}=\pair{\chain FC}{\chain GD}$ [fig:laws] #### Iso-registers and equivalence. Some simple categorical notion will come in handy later: We call a register *F* an *iso-register* iff there exists a register *G* such that $F\circ G=\id$ and $G\circ F=\id$. (I.e., an iso-register is an isomorphism in the category of registers.) Iso-registers have an important intuitive meaning: While registers typically give access only to part of the program memory, an iso-register gives access to the whole memory.[6](#fn6) We call two registers $F:\mathbf A\to\mathbf C$, $G:\mathbf B\to \mathbf C$ *equivalent* iff there exists an iso-register *I* such that *F* ∘ *I* = *G*. (In category-theoretic terms: they are isomorphic as objects of the slice category of the register category.) Intuitively, this means that one can perform the same updates on the program memory given *F* and given *G*. In particular, if we care only about the effect a register can have on the program memory, all we need to know is the register up to equivalence.[7](#fn7) Quantum registers ================= Instantiating the general theory in the quantum setting turns out to be very easy (at least when considering finite-dimensional spaces only). The objects $\mathbf A,\mathbf B,\dots$ have to be monoids that represent updates of quantum variables. Updates of quantum variables are unitary operations (and/or projectors etc.), i.e., linear operators on a finite-dimensional (but not zero-dimensional) complex Hilbert space (a.k.a. matrices).[8](#fn8) Thus: the objects of the category are the spaces of linear operators on $\calH\_A$ for finite dimensional Hilbert spaces $\calH\_A$. (Those are naturally monoids with the identity matrix 1.) Throughout this section, we will use the notational convention that $\mathbf A$ is the space of linear operators on $\calH\_A$. Similarly for $\mathbf B,\mathbf C,\dots$ Pre-registers $\mathbf A\to\mathbf B$ are defined to be the linear functions. It is then easy to check that all axioms from concerning pre-registers are satisfied. The tensor product is the usual tensor product of matrices from linear algebra. Finally, we define the registers in $\Lquantumfin$ as those linear maps $F:\mathbf A\to\mathbf B$ such that *F*(1) = 1, *F*(*a**b*) = *F*(*a*)*F*(*b*), and $F(\adj a)=\adj{F(a)}$. (The first two properties are required for registers by definition, and the third one makes registers more well-behaved, e.g., *F*(*U*) is unitary for a unitary *U*.) In other words, registers are unital  \* -homomorphisms. For this definition of $\Lquantumfin$, the axioms from can be shown to hold, and the laws from follow. We do not give the proofs in the finite-dimensional case here explicitly here. They can be found in our Isabelle/HOL formalization () and also arise as special cases of the infinite-dimensional proofs (see ). In addition, we can easily define elementary registers. E.g., if our program state is of the form $\calH\_M:=\calH\_A\otimes\calH\_B\otimes\calH\_C\otimes\dots$, then *B*(*b*) :  = 1 ⊗ *b* ⊗ 1 ⊗ … is a register $\mathbf B\to\mathbf M$ that refers to the second subsystem. Or, if we use the “dependent function” approach from example on, we have a program state of the form $\calH\_M := \setC^{\paren{\prod\_{v\in V} T\_v}}$. Then a variable *A* ∈ *V* has state space $\calH\_A:=\setC^{T\_A}$, and the corresponding register *A* is $A(a) := \adj{U\_A}(a\otimes 1)U\_A$ for the unitary *U**A* defined on. However, in addition to those elementary registers, we can construct tuples and more using the constructions from. It is also possible to keep things axiomatic and to perform an analysis simply stating that *A*, *B*, *C*, … are compatible quantum registers without specifying concretely into which program state they embed and how. This means that the result of the analysis holds for any choice of memory. (We do this in the teleportation example in.) Beside the laws inherited from, we get some useful laws specific to the quantum setting. For example, *F*(*a*) is a projector/unitary if *a* is a projector/unitary. If we define $\symbolindexmark\sandwich{\sandwich U}(a):=Ua\adj U$ (short for sandwich), then $\sandwich U$ is a register for any unitary *U*. This makes it possible to “map” quantum registers (see ), what was written *U**X* there would simply be $\chain X{\sandwich U}$. We have the law $F\pb\paren{\sandwich a(b)}=\sandwich{F(a)}\pb\paren{F(b)}$. Quantum registers as defined above allow us to transport unitaries or projectors from the space $\calH\_A$ of a register to the space $\calH\_M$ of the whole program state. However, when reasoning about quantum programs, we may also want to transport a subspace of $\calH\_A$ to a subspace of $\calH\_M$. For example, predicates in Hoare logic might be represented as subspaces, and we want to express predicates on the whole program state in terms of predicates about individual variables. This is easily achieved because a subspace can be represented by the unique projector onto that subspace. One common operation on subspaces in this context is the intersection. In general, there is no nice way to express the intersection of two subspaces in terms of their projectors. However, if the two subspaces are defined with respect to *compatible* registers *F*, *G*, we have the simple rule: If *a*, *b* are projectors, then *F*(*a*) ⋅ *G*(*b*) is a projector and $\im F(a) \cap \im G(b) = \im F(a)G(b)$. (If the two spaces we are intersecting are more complex expressions involving registers, we can first bring them into the form *F*(*a*) and *G*(*b*) by rewriting using the laws from.) We will make use of this fact in our Hoare logic example in. And similarly, the law $F(\psi\adj\psi)\cdot G(\phi\adj\phi) = \pair FG\pb\paren{(\psi\otimes\phi)\adj{(\psi\otimes\phi)}}$ allows us to join two rank-1 projectors on different registers into a single one. We formalized the proofs of all laws in Isabelle/HOL (), so we do not give them here. The infinite-dimensional case ----------------------------- Above, we instantiated the general theory of registers in the quantum case by considering finite-dimensional Hilbert spaces only. That is, this approach does not allow us to consider, e.g., registers of type $\setC^{\setZ}$, i.e., an integer in superposition (*quint*). Another example of something that does not fit into the finite-dimensional framework is the position/momentum example from the introduction. We now describe how the instantiation above needs to be changed to accommodate registers whose types correspond to arbitrary Hilbert spaces. In the classical case, the pairing axiom [ax:pairs] followed immediately from the universal property of the tensor product.[9](#fn9) The problem in the infinite-dimensional case is that this universal property does not, in general, hold.[10](#fn10) Fortunately, for the right choices of objects and morphisms, we can still prove the pairing axiom without the usual universal property. Note that the results in the present section have not been formally verified in Isabelle/HOL. #### The quantum register category $\symbolindexmark\Lquantum\Lquantum$. The objects (update monoids) are the sets of bounded operators on Hilbert spaces. That is, for every Hilbert space $\calH$ (the type of a given register), the space of bounded linear operators $\calH\to\calH$ is an object. Throughout this paper, in the quantum setting, we follow the notational convention that $\mathbf A,\mathbf B,\dots$ always denotes $B(\calH\_A),B(\calH\_B),\dots$ ($\bounded(\calH)$ is a monoid where the multiplication is the composition of operators and the unit is the identity operator 1.) The *pre-registers* $\mathbf A\to\mathbf B$ are the weak\*-continuous bounded linear maps from $\mathbf A$ to $\mathbf B$. (The *weak\*-topology* on the space of bounded operators is the coarsest topology where $x\mapsto \abs{\tr ax}$ is continuous for all trace-class operators *a* (beginning of §20 in ). In other words, a net *x**i* weak\*-converges to *x* iff for all trace class *a*, $\tr ax\_i\to\tr ax$. Weak\*-continuous bounded linear maps are also known as *normal* maps.) The tensor product on $\mathbf A,\mathbf B$ is the *tensor product of von Neumann algebras*, that is, $\mathbf A\symbolindexmark\tensor\tensor\mathbf B$ is the set $\bounded(\calH\_A\otimes\calH\_B)$ of bounded operators on $\calH\_A\otimes\calH\_B$. (In, this tensor product is defined in Definition IV.1.3 as “$\overline\otimes$”. The fact that $\mathbf A\otimes\mathbf B=\bounded(\calH\_A\otimes\calH\_B)$ is given by (10) in the same section.) For any $a\in\mathbf A$, $b\in\mathbf B$, *a* ⊗ *b* is defined as the unique operator such that (*a* ⊗ *b*)(*ψ* ⊗ *ϕ*) = *a**ψ* ⊗ *b**ϕ* for all $\psi\in\calH\_A,\phi\in\calH\_B$. (The tensor product on the rhs of this equation is the Hilbert space tensor product. Existence of *a* ⊗ *b* is shown in the discussion after that definition.) The *registers* are the weak\*-continuous unital \*-homomorphisms. (Unital means *F*(1) = 1,  \* -homomorphism means linear, *F*(*a**b*) = *F*(*a*)*F*(*b*) and $F(\adj a)=\adj{F(a)}$.) As we have not proven the results about infinite-dimensional quantum registers in Isabelle/HOL, we give the proofs that $\Lquantum$ is a register category in. Discussion ---------- #### Using other tensor products (in the infinite-dimensional case). In, we chose the tensor product of von Neumann algebras as the tensor product in our category. As a consequence, the morphisms we considered were *weak\*-continuous* bounded linear maps. (This is because the von Neumann tensor product is the weak\*-closure of the algebraic tensor product.) However, this is not the only possible tensor product on spaces of bounded operators. Other tensor products exist that are the closure of the algebraic tensor product with respect to other topologies. For example, the projective C\*-tensor product satisfies the property required for Axiom [ax:pairs] without requiring *weak\*-continuous* maps. So it is well conceivable that we can define a category of quantum registers also with this tensor product. Or we could forgo any topological considerations and try to use the algebraic tensor product, and allow all linear maps (not just bounded ones). The reason why we chose the tensor product of von Neumann algebras is that it satisfies $\bounded(\calH\_A)\otimes\bounded(\calH\_B)=\bounded(\calH\_A\otimes\calH\_B)$. This makes the resulting theory much more natural to use: If we have two registers *F*, *G*, with types $\calH\_A$ and $\calH\_B$, then it would be natural that the pair $\pair FG$ has type $\calH\_A\otimes\calH\_B$. E.g., a unitary $U:\calH\_A\otimes\calH\_B\to\calH\_A\otimes\calH\_B$ is something we expect to be able to apply to the register $\pair FG$. It seems that the tensor product of von Neumann algebras is the only one that gives us this property. However, we do not exclude that instantiations of the theory of registers with other tensor products might give us other advantages that we are not currently aware of. #### Mixed classical/quantum registers. Another interesting question is to extend the theory of quantum registers to mixed classical/quantum registers. For example, we could have a register that contains a list of qubits, where the length of the list is classical. Our current formalism does not allow that. (We can only model registers that contain lists of qubits where the length of the list is also in superposition.) In particular, with the current theory, classical registers (see ) are not a special case of quantum ones. At a first glance, it would seem that there is already a ready-made solution available for this problem: Mixed classical/quantum systems are oftentimes represented by von Neumann algebras (i.e., certain subsets of the sets of all bounded operators). For example, a system consisting of *n* qubits would be represented by $\bounded(\setC^{2^n})$, same as we do in the present paper. And a system of classical lists of qubits then would be the direct sum of all possible systems of *n*-qubits for *n* ≥ 0. That is, $\bigoplus\_{n=0}^\infty \bounded(\setC^{2^n})$. And a classical bit could be $\bounded(\setC)\oplus\bounded(\setC)$. In fact, classical systems are exactly the commutative von Neumann algebras! Unfortunately, this approach seems unsuitable in our setting. To see this, consider the special case of classical systems where the program state is purely classical. I.e., it is modeled by some commutative von Neumann algebra $\mathbf C$. But that means that for any two registers $F:\mathbf A\to\mathbf C$, $G:\mathbf B\to\mathbf C$, the ranges of *F* and *G* commute trivially. Hence any two registers in a program with classical state would be compatible. This shows us that we are on the wrong track. The intuitive meaning of compatibility is that *F*, *G* corresponds to different parts of the program state, and this should clearly not be the case for any two registers. (In particular, *F* should not be compatible with itself except in the degenerate case where *F* has unit type.) Furthermore, modeling classical registers as commutative von Neumann algebras also fails to represent the intuitive meaning of a “set of updates”. Updates on a classical register should in some way correspond to classical functions on some set *X*. In particular, different updates should not always commute. But in a commutative von Neumann algebra, everything commutes. (Note that we have only illustrated why the approach fails for classical data. But since classical data is a necessary special case of mixed quantum/classical data, this also implies that the approach does not work for mixed data.) What went wrong? And why is the von Neumann algebra formalism (as sketched above) successful in modeling mixed classical/quantum data in other settings? The reason, as we understand it, is that this von Neumann algebra formalism is based on a very different intuition: Elements of the von Neumann algebra represent *observables*. For example, in $\bounded(\calH)$, we find the Hermitian operators *a* on $\calH$, and each such operator gives rise to a function from quantum states to real numbers, via $\tr a\rho$ or $\adj\psi a\psi$, depending on whether we consider a mixed state *ρ* or a pure state *ψ*. Physically, this corresponds to the expected value of the real-valued measurement described by *a*. And a classical system with values *X* is described by the Neumann algebra consisting of all bounded complex sequences *a**x* (*x* ∈ *X*). For any classical “state”, namely a discrete[11](#fn11) probability distribution *μ* on *X*, any (real-valued) *a**x* also gives rise to a real number via ∑*x**μ*(*x*)*a**x*, namely the expectation of *a**x* where *x* is the state of the system. So an element of a von Neumann algebra (in this formalism) describes a *measurement* on the system. I.e., it tells us how to *read* from it. In contrast, in our setting, we also need to know how to *write* to the system (since we describe “updates”). This is basically the same difference as between rvalues and lvalues in classical programming languages. Knowing how to read a register is not, in general, sufficient for knowing how to write it.[12](#fn12) So we claim that the fact that our quantum register category has the same objects and morphisms as existing work that uses the von Neumann algebra formalism (e.g., to mention just a few) is coincidental and only happens when we consider purely quantum systems. Namely, in our setting, the objects are the linear span of all unitary transformations (a unitary is a natural representation of an “update”), while in the other work, the objects are the linear span of all Hermitian operators (a Hermitian is a natural representation of a real-valued measurement). Those two spans happen to coincide, in both cases we get all bounded operators. However, one should be careful to read too much into this coincidence. In particular, if we leave the purely quantum setting, we do not have any such correspondence anymore. (Consider for example the stark difference of our modeling of classical registers (partial functions on sets, see ) and the modeling of classical systems via commutative von Neumann algebras (complex sequences).)[13](#fn13) So in light of this, to formalize mixed classical/quantum registers, we will need to use a different category than that of von Neumann algebras. At this point, it is not clear yet what this category looks like. Classical registers =================== We come to the classical case. The most obvious approach would be to say that the updates on a register taking values in *A* are the functions *A* → *A*. However, it is not possible to satisfy the axioms of a register category for this choice.[14](#fn14) Instead, we define the updates on *A* as the *partial* functions $A\symbolindexmark\partialto\partialto A$. This choice has the additional advantage that it also directly provides support for partiality in the program semantics; the semantics of a deterministic partial program is often naturally described as a partial function. (An earlier version of this work formalized classical updates as relations on *A* instead. We believe that our new approach is more natural and easier to use. Both approaches are viable, however.) In this section, we use the notational convention that $\mathbf A,\mathbf B,\dots$ denote the sets of partial functions on *A*, *B*, …. To define registers $\mathbf A\to\mathbf B$, let us forget for a moment that a register should map updates to updates. Instead, we note that a classical register with values in *A* inside a memory of type *B* naturally allows us to perform two actions: We can read the value of the register using a function *g* : *B* → *A* (the *getter*). And we can set the value of the register using a function *s* : *A* × *B* → *B* (the *setter*). The getter and setter satisfy some natural conditions: We call (*g*, *s*) *valid* iff for all *a*, *a*ʹ ∈ *A* and *b* ∈ *B*, we have *b* = *s*(*g*(*b*), *b*) and *g*(*s*(*a*, *b*)) = *a* and *s*(*a*, *s*(*a*ʹ, *b*)) = *s*(*a*, *b*). A valid getter/setter pair is a natural candidate for formalizing classical registers. We show that this approach indeed fits in our formalism. A getter/setter pair (*g*, *s*) naturally gives rise to a function $F:\mathbf A\to\mathbf B$ via *F*(*a*)(*b*) :  = *s*(*a*(*g*(*b*)), *b*). (I.e., to apply *a* to the register, we retrieve its content *g*(*b*), apply *a* to it, and set the result as the new content of the register.) Note that *F*(*a*) can be a partial function if *a* is, but *F*(*a*) is total for total *a*. Thus the *registers* are the functions $F:\mathbf A\to\mathbf B$ such that there exist valid (*g*, *s*) with $\forall a\in\mathbf A, b\in B. \ F(a)(b) = s(a(g(b)),b)$. In order to define pre-registers, we somewhat relax these conditions. We cannot define the pre-registers $\mathbf A\to\mathbf B$ as all functions $\mathbf A\to\mathbf B$ since then Axiom [ax:tensorext] would not hold. Instead, we define *pre-registers* as the functions $F:\mathbf A\to\mathbf B$ such that there exist a total *g* : *B* → *A* and a partial $s:A\times B\partialto B$ with $\forall a\in\mathbf A, b\in B. \ F(a)(b) = s(a(g(b)),b)$. (Note: we removed the validity requirement, and we allow *s* to be partial. With total *s*, Axiom [ax:cdot-a] would not hold.) Finally, we need to define the tensor product of updates. $\mathbf A\otimes\mathbf B$ is the set of partial functions on *A* × *B*. (This is the only natural choice since we want a pair of registers taking values in *A*, *B*, respectively, to take values in *A* × *B*.) Then (*a* ⊗ *b*)(*x*, *y*) :  = (*a*(*x*), *b*(*y*)), defined iff both *a*(*x*), *b*(*y*) are defined, makes this into a tensor product satisfying all our axioms. Note that the pairs $\pair FG$ in this category are very natural, too: If $F:\mathbf A\to\mathbf C$, $G:\mathbf B\to\mathbf C$ are defined by the getters/setters (*g**F*, *s**F*) and (*g**G*, *s**G*), respectively, then $\pair FG$ is defined by the getter/setter (*g*, *s*) with *g*(*c*) = (*g**F*(*c*), *g**G*(*c*)) and *s*((*a*, *b*), *c*) = *s**F*(*a*, *s**G*(*b*, *c*)). With these choices, we get a register category $\symbolindexmark\Lclassical\Lclassical$ satisfying all axioms from and the laws from follow. We can also express mappings (see ) easily. For a bijection *f*, let $\hat f$ be the register defined by getter *g* :  = *f* and setter *s*(*a*, *b*) :  = *f*− 1(*a*). Then $\hat f$ is a register, and we can express *f*(*X*) simply as $\chain X{\hat f}$. Furthermore, for singleton sets *A*, have a *unit register* $u:\mathbf A\to\mathbf B$ that is compatible with all registers, and such that setting it has no effect. We formalized the proofs for this section in Isabelle/HOL (), so we do not give them here. Quantum Hoare logic =================== To illustrate the use and flexibility of our registers, we will now develop a very simple quantum Hoare logic. Since the main focus is to show the use of quantum variables (and constructions based upon these), the language we analyze will not contain any features that are orthogonal to the question of variables, e.g., loops. The ideas described can be easily extended to other Hoare logics that use subspaces as pre-/postconditions (i.e., von-Neumann-Birkhoff quantum logic ; used, e.g., in ). (And Hoare logics that use operators as pre-/postconditions should be even easier to adapt because registers naturally translate operators from variables to overall program states.) #### Language definition. Our minimalistic language has two commands: applying a unitary, and one-sided branching. Fix a space $\calH\_M$ for the program state. A *program* *P* is a sequence of commands *C* with $$P ::= C\_1;\dots;C\_n \qquad C ::= \qapplyOLD FU \ |\ \qif Gx$$ We assume that the following type constraints are satisfied: *U* is a unitary on some $\calH\_A$, *F* is a register $\mathbf A\to\mathbf M$ for the same $\calH\_A$, *x* ∈ *B* for some *B*, and *G* is a register $\mathbf B\to \mathbf M$ where $\calH\_B:=\setC^B$. (Here $\mathbf M$ is fixed throughout and corresponds to the program state.) $\symbolindexmark\skipprog\skipprog$ denotes the empty program (a.k.a. skip). The command applies the unitary *U* to the register *F*. Thus its denotational semantics is simply $\denot{\qapplyOLD FU}=F(U)$. And measures *G* in the computational basis and, if the outcome is *x*, continues the program. Otherwise it halts. (I.e., the program state is 0 in that case.) Thus its denotational semantics is $\denot{\qif Gx}=G\pb\paren{\selfbutter x}$ (projecting the state of *G* onto $\ket x$).[15](#fn15) The denotation of a program is $\symbolindexmark\denot{\denot{C\_1;\dots;C\_n}} := \denot{C\_n}\circ\dots\circ\denot{C\_1}$. In particular, the denotation of a program is a linear operator on $\calH\_M$. #### Hoare judgments. For subspaces *A*, *B* of $\calH\_M$ and a program *P*, we write to mean: for all *ψ* ∈ *A*, $\denot P\psi\in B$. The following rules follow directly from the semantics: $$\begin{gathered} \inferrule[Seq]{\hoare A{P\_1}B\\\hoare B{P\_2}C}{\hoare A{P\_1;P\_2}C} \qquad \inferrule[Weaken]{A \subseteq A' \\ B' \subseteq B \\ \hoare {A'}P{B'}}{\hoare APB} \qquad \inferrule[Skip]{A \subseteq B}{\hoare A\skipprog B} \\[4pt] \inferrule[Apply]{F(U)\cdot A \subseteq B}{\hoare A{\qapplyOLD FU}B} \qquad \inferrule[If]{G\pb\paren{\selfbutter x}\cdot A \subseteq B}{\hoare A{\qif Gx}B}\end{gathered}$$ Note how in the last two rules, registers are used not only in programs, but also to describe pre-/postconditions. We do not need to introduce any additional definitions for that since *F*(*U*) and $G\pb\paren{\selfbutter x}$ already have meaning as operators by definition of quantum registers. (And  ⋅  means the multiplication of an operator with a subspace.) We can also very easily express the predicate “register *F* is in state *ψ*” (short $F\quanteq\psi$). Being in state *ψ* is represented by the subspace $\Span\braces\psi$. Or equivalently, $\im\psi\adj\psi$. And *F* transports this to a subspace of $\calH\_M$. Namely: for a register $F:\mathbf A\to\mathbf M$ and a state $\psi\in\calH\_A$, we define $F\symbolindexmark\quanteq{\!\strut\quanteq} \psi$ as the subspace $\im F(\psi\adj\psi)$ of $\calH\_M$. Finally, we will often need the intersection of subspaces. See, for facts for reasoning about this in terms of registers and projectors. In the next section, we show what all of this looks like in a concrete situation. Teleportation ------------- We illustrate the use of quantum registers and of our minimal Hoare logic by analyzing quantum teleportation . As a quantum circuit, it is illustrated in left. The overall effect of this quantum circuit is to “move” the qubit from wire *X* to wire Φ2, assuming Φ1, Φ2 are preinitialized with an EPR pair $\symbolindexmark\bell\bell=\frac1{\sqrt2}\ket{00} + \frac1{\sqrt2}\ket{11}$. Note that there are also two wires *A*, *B* that represent any additional state that Alice and Bob may hold. ll [baseline=(current bounding box.center)] ($(\getWireCoord{Phi1}) + (-1mm,1mm)$) – ($(\getWireCoord{Phi2}) + (-1mm,-1mm)$) node [pos=0.5, anchor=east, xshift=-2mm] $\bell$; at () /; at () /; (cnot) ; (H) $\hada$; (MPhi1) $\centernot\frown$; (a) *a*; (MPhi1) – (a); (MX) $\centernot\frown$; (b) *b*; (MX) – (b); (X) $\pauliX$; (Z) $\pauliZ$; & [c]l $ \teleport ab := $ $\qapplyOLD {\pair X{\chain\Phi\Fst}} \CNOT;$ $ \qapplyOLD X \hada;$ $ \qif {\chain\Phi\Fst} a;$ $ \qif X b;$ $ \qapplyOLD {\chain\Phi\Snd} {\pauliX^a};$ $ \qapplyOLD {\chain\Phi\Snd} {\pauliZ^b}$ [fig:teleport] In our simplistic language, we model this with four programs for $a,b\in\bit$, see. Each of them corresponds to one possible outcome of the measurements. (I.e., instead of having if-then-else in our program, we use the one-sided branching and show the same postcondition in all four cases.) Note that in this modeling, for the fun of it we decided that instead of having two qubit variables Φ1, Φ2, we have a two-qubit variable Φ, and we access their subparts as $\chain\Phi\Fst,\chain\Phi\Snd$. We assume that *X*, *A*, *B*, Φ are pairwise compatible. Besides that, we make no assumptions about how they are embedded into the overall program memory. The claim is that teleportation “moves” a state *ψ* from *X* to $\chain\Phi\Snd$, if Φ is initialized with an EPR pair. Thus, with the notation from the previous section, this would be expressed as $\forall ab.\ \hoare{X\quanteq\psi \cap \Phi\quanteq\bell}{\teleport ab}{\chain\Phi\Snd\quanteq\psi}$. However, this only states that if *X* contains some state *ψ*, not entangled with anything else, then $\chain\Phi\Snd$ will contain it. To express a more general (and harder to analyze in Hoare logic) statement, we can say: If the joint state of *X*, *A*, *B* is *ψ*, then the joint state of $\chain\Phi\Snd,A,B$ is *ψ* afterwards. In other words, we want to show: $$\forall ab.\qquad \pb\hoare{\XAB\quanteq\psi \cap \Phi\quanteq\bell}{~\teleport ab~}{\PhiTwoAB\quanteq\psi} \label{eq:hoare.teleport}$$ We now prove. First, we have $${\XAB\quanteq\psi \,\cap\, \Phi\quanteq\bell} \quad=\quad \underbrace{\Phi\pb\paren{\bell\bell^\dagger}}\_{=:\,O\_1} \cdot \underbrace{\XAB\quanteq\psi}\_{=:\,\pre}$$ This follows from the definition of $\quanteq$ and the law $\im F(a) \cap \im G(b) = \im F(a)G(b)$ from. $$\begin{gathered} \pb\hoare {O\_1\cdot\pre} {~\qapply\CNOT\XPhiOne~} {\underbrace{\XPhiOne(\CNOT)\cdot O\_1}\_{=:\,O\_2}\cdot\,\pre} \\ \pb\hoare {O\_2\cdot\pre} {~\qapply\hada X~} {\underbrace{X(\hada)\cdot O\_2}\_{=:\,O\_3}\cdot\,\pre}\end{gathered}$$ These follow directly from the Hoare rule Apply. $$\begin{gathered} \pb\hoare {O\_3\cdot\pre} {~\qif\PhiOne a~} {\underbrace{\PhiOne(\selfbutter a)\cdot O\_3}\_{=:\,O\_4}\cdot\,\pre} \\ \pb\hoare {O\_4\cdot\pre} {~\qif Xb~} {\underbrace{X(\selfbutter b)\cdot O\_4}\_{=:\,O\_5}\cdot\,\pre}\end{gathered}$$ These follow directly from the Hoare rule If. We want to show that *O*5 is equal to the following operator: $$O\_5' := \tfrac12\, \PhiTwo\pb\paren{\XZadj} \cdot \XPhiTwo\pb\paren{\Uswap} \cdot \Phi\pb\paren{\ket{ab}\adj\bell}$$ Here $\symbolindexmark\Uswap\Uswap$ is the two-qubit swap unitary ($U\_\sigma\ket{a,b}=\ket{b,a}$). To show this, we first unfold the definition of *O*5 and rewrite it with the rules from into the following form that contains only a single application of a register (namely $\pair X\Phi$): $$O\_5 = \pair X\Phi \pB\paren{ \underbrace{ (\selfbutter b \otimes I\_2) \cdot (I\_1 \otimes \selfbutter a \otimes I\_1) \cdot (\hada \otimes I\_2) \cdot \alpha(\CNOT\otimes I\_1) \cdot (I\_1 \otimes \bell\adj\bell) }\_{{} =:\, M} }$$ (For readability, we annotate the identities *I* with the number of qubits they operate on.) Generally, we try to avoid bringing everything under one single combined register because the terms become unreadable. However, in this specific case, we want to do the next step by explicit computation of the matrix $M\in\setC^{8\times 8}$. Note, however, the following important fact: We did not involve the registers *A*, *B* in this computation. If we had done that, and gotten an expression of the form $\pair\XAB\Phi(M')$, then we would not be able to brute-force compute *M*ʹ since we do not have an upper bound on the dimension of *M*ʹ. (*A*, *B* are quantum registers of unspecified size.) Similarly, we can unfold *O*5ʹ: $$O\_5' \ = \ \pair X\Phi \pB\paren{ \underbrace{ (I\_1 \otimes I\_1 \otimes \tfrac12\XZadj) \cdot (\id \rtensor \sigma) (\alpha (U\_\sigma \otimes I\_1)) \cdot (I\_1 \otimes \ket{ab}\adj\bell) }\_{{} =:\, M'} }$$ By explicit calculation, we get that *M* = *M*ʹ.[16](#fn16) Thus *O*5 = *O*5ʹ. How do we interpret this result? The postcondition $O\_5\cdot\pre$ means that the result of applying the first four steps of $\teleport ab$ is the result of applying *O*5 to a program state with *ψ* in $\XAB$. Thus *O*5 = *O*5ʹ implies that the result of applying these steps is the result of applying *O*5ʹ to a program state with *ψ* in $\XAB$. And what does *O*5ʹ do? It applies some operator on Φ (we do not care which since Φ does not contain any part of our payload *ψ*). Then it swaps *X* and $\chain\Phi\Snd$. Now *ψ* changed from being the state of $\XAB$ to being the state of $\PhiTwoAB$. This is exactly what we want in the end. And finally, *O*5ʹ applies $\XZadj$ to $\PhiTwo$. This is something we need to undo in the last two commands of the program: $$\begin{gathered} \pb\hoare {O\_5'\cdot\pre} {~\qapply{\pauliX^a}\PhiTwo~} {\underbrace{\PhiTwo(\pauliX^a)\cdot O\_5'}\_{=:\,O\_6}\cdot\,\pre} \\ \pb\hoare {O\_6\cdot\pre} {~\qapply{\pauliZ^b}\PhiTwo~} {\underbrace{\PhiTwo(\pauliZ^b)\cdot O\_6}\_{=:\,O\_7}\cdot\,\pre}\end{gathered}$$ These follow directly from the Hoare rule Apply. Then $O\_7=\PhiTwo(\XZ)\cdot O\_5' = \frac12 \XPhiTwo\pb\paren{U\_\sigma} \cdot \Phi(\ket{ab}\adj\bell) $. And $$\begin{aligned} O\_7 \cdot \pre &\starrel= \XPhiTwo\pb\paren{U\_\sigma} \cdot \Phi\pb\paren{\ket{ab}\adj\bell} \cdot \XAB(\psi) \cdot\calH\_M \\&\starstarrel= \XPhiTwo\pb\paren{U\_\sigma} \cdot \XAB(\psi) \cdot \underbrace{ \Phi\pb\paren{\ket{ab}\adj\bell} \cdot\calH\_M }\_{\subseteq\,\calH\_M} \\[-3mm]&\subseteq \XPhiTwo\pb\paren{U\_\sigma} \cdot \XAB(\psi) \cdot\calH\_M \\&\tristarrel= \PhiTwoAB(\psi) \cdot \underbrace{\XPhiTwo\pb\paren{U\_\sigma} \cdot\calH\_M}\_{\subseteq\,\calH\_M} \\[-8pt]& \subseteq \PhiTwoAB(\psi) \cdot \calH\_M \\& =\ \pB\paren{ \PhiTwoAB \quanteq \psi }.\end{aligned}$$ Here ( \* ) is by definition of $\pre$ and $\quanteq$. The factor $\frac12$ vanishes because multiplying a subspace with a nonzero scalar has no effect. And ( \*  \* ) follows since $\XAB$ and Φ are compatible and hence $\XAB(\dots),\Phi(\dots)$ commute. And $(\*\mathord\*\*)$ follows by bringing both sides into the form $\pair{\pair{X}{\chain\Phi\Snd}}\dots$ and expanding $\swap=\sandwich\Uswap$. Combining all Hoare judgments, equalities, and inequalities, and using rules Seq and Weaken, we get which finishes the analysis of teleportation. Complements =========== When we think of a register *F* as region in memory, it makes sense to consider the complement $\compl F$ of *F*, namely everything in the memory that is not contained in *F*. A priori, there is no reason to assume that in a general register category, $\compl F$ is definable in a meaningful way, or that it constitutes a register. However, we will see that by requiring some additional axioms in a register category, we get a well-defined notion of complements. In particular, quantum registers have complements, see. (We suspect classical registers have complements, too, but we have not investigated that case.) Before we formalize complements, let us explain why complements are useful: * When formulating predicates referring to states on a complex system (e.g., the predicates that are used in the pre- and postconditions in the quantum Hoare logic in ), we can use predicates to refer to the state of “everything else”. For example, we could state that no part of a quantum memory except for explicitly listed register *F*1, …, *F**n* is modified by a program by stating that if $\compl{\pairs{F\_1}{F\_n}}$ is in state *ψ* before the execution, then $\compl{\pairs{F\_1}{F\_n}}$ is in state *ψ* afterwards. * Complements will be used numerous times in the constructions and proofs in where we describe how various quantum mechanical concepts interact with registers. For example, when describing the state of a complex system by giving the states of all subsystems (Sections [sec:mixed] and [sec:pure]), we have to specify the state of *all* registers. If an explicit list of all registers is not known, we can use complements to refer to “all other registers”. And both the construction of the partial trace into a register () as well as the construction how to lift a quantum channel on a register to a quantum channel on the whole space () use complements under the hood. * Last but not least, besides the practical use cases, it is a foundationally interesting and natural property: In a register category with complements, there is always a meaningful and well-behaved notion of “the rest of the system”. In particular, this applies to the quantum register category. #### Definitions. We say registers *F*, *G* are *complements* iff they are compatible and $\pair FG$ is an iso-register. We say a register category *has complements* iff For every register $F:\mathbf A\to\mathbf B$, there exists an object $\mathbf C$ and a register $G:\mathbf C\to \mathbf B$ such that *F*, *G* are complements.[17](#fn17) If *F*, *G* are complements and *F*, *H* are complements, then *G*, *H* are equivalent. The second condition justifies to talk about “the” complement $\symbolindexmark\compl{\compl F}$ of *F*. Strictly speaking, $\compl F$ is just one of the possible complements of *F*, determined only up to equivalence. (Recall that intuitively, two registers are equivalent when the correspond to the same part of the program memory.) In a register category with complements, the laws in hold. We proved all laws in Isabelle/HOL (see ). **Complements.** (*F*, *G*, *H* are assumed to be registers.) ()[law:compl.sym] *F*, *G* are complements iff *G*, *F* are complements. ()$\compl F$ is a register. ()[law:compl.is.compl] $F,\compl F$ are complements. ()[law:compl.equiv] Assume *F*, *G* are complements. Then *G*, *H* are equivalent iff *F*, *H* are complements. ()$F,\compl{\paren{\compl F}}$ are equivalent. ()[law:complement.pair] If *F*, *G* are compatible, *F*, $\pair G{\compl{\pair FG}}$ are complements (and also *G*, $\pair F{\compl{\pair FG}}$). ()[law:complement.chain] $\chain FG$ and $\pb\pair{\compl F}{\chain F{{\compl G}}}$ are complements. ($\chain F{{\compl G}}$ is to be read as $\chain F{\paren{\compl G}}$.) ()[law:complement.tensor] $F\rtensor G$ and $\compl F\rtensor\compl G$ are complements. **Unit registers.** (*F* is assumed to be a register, *U*, *U*ʹ unit registers, *I* an iso-register.) ()*U*, *F* are compatible. ()*F*, (*U*; *F*) are equivalent. ()*F* ∘ *U* is a unit register. ()*U* ∘ *I* is a unit register. ()*U*, *U*ʹ are equivalent. ()If $U:\mathbf A\to\mathbf C, U':\mathbf B\to\mathbf D$, then $\mathbf A,\mathbf B$ are isomorphic. ()$\compl\id$ is a unit register (with same codomain as $\id$). ()$\unitreg{\mathbf A}:\mathbf I\to\mathbf A$ is a unit register (and $\mathbf I$ does not depend on $\mathbf A$). ()If $U:\mathbf A\to\mathbf B$, then $\mathbf C$ and $\mathbf C\otimes\mathbf A$ are isomorphic (for every $\mathbf C$). ()[law:complement.iso.unit] *I* and *U* are complements. [fig:compl.laws] Note that we can easily extend the notion of complements to finite sets of registers: We say *F*1, …, *F**n* are a *partition* iff they are pairwise compatible and $\pairs{F\_1}{F\_n}$ is an iso-register. (Note that this definition does not depend on the order of the registers *F*1, …, *F**n* by Laws [law:pair.sigma], [law:pair.alpha], [law:pair.alpha’] and the fact that *σ*, *α*, *α*ʹ are iso-registers.[18](#fn18)) The intuition of being a partition is that the registers *F*1, …, *F**n* cover the whole memory. #### Unit registers. One interesting consequence of complements (that could, however, also be easily axiomatized on its own), are *unit registers*. Intuitively, a unit register is one that corresponds to an empty part of the program memory. E.g., a classical variable of type `unit` would be a unit register; writing into it has no effect. Formally, we say that a register $F:\mathbf A\to\mathbf B$ is a unit register iff *F* and $\id$ are complements. (Since $\id$ is a register that contains the whole memory, this intuitively means that *F* refers to an empty part of the memory.) Unit registers are useful in languages with side-effects (even if the side-effect is just possible non-termination). E.g., if *e* is an expression with side-effects and unit result type, we can write $\assign{()}e$ to evaluate *e* without storing the result in memory. (Here () denotes a unit register.) If unit registers did not exist, the language would need a distinct concept for evaluating expressions without storing the result. It turns out that a register category with complements has a unit register $\symbolindexmark\unitreg{\unitreg{\mathbf{A}}}:\mathbf I\to\mathbf A$ for every $\mathbf A$ (i.e., $\mathbf A$ represents the type of the memory). Roughly speaking, $\unitreg{\mathbf{A}}$ is defined as the complement of $\id:\mathbf A\to\mathbf A$. This is trivially a unit register according to our definition. However, this would lead to a different domain $\mathbf I$ for every $\mathbf A$. With some extra effort, we can show that all those domains $\mathbf I$ are isomorphic and then construct a unit register $\unitreg{\mathbf{A}}$ whose domain is independent of $\mathbf A$. However, we also need to show that our definition of unit registers “makes sense”. For example, we want that a unit register is compatible with all other registers (not just with $\id$). The properties we derive are listed in ; when proving these properties (see our Isabelle formalization, ) we heavily use the properties of complements. Note that for concrete register categories (such as the quantum register category), we may prefer not to use the unit register $\unitreg{\mathbf{A}}$ that exists generically but instead define it concretely. For example, in the quantum setting, we can define a unit register of type $\setC\mapsto\mathbf{A}$, see below. (Here $\setC$ is slight abuse of notation for the space of linear functions $\setC\to\setC$.) Having $\setC$ instead of some unspecified **I** as the domain means that the type of expressions that return unit values is nicer. However, the laws from still apply. (And the unit registers according to both definitions are equivalent.) Quantum registers ----------------- We show that the quantum register category $\Lquantum$ has complements. For the finite-dimensional case, we have shown this in Isabelle/HOL. Here, we show the more general infinite-dimensional case (cf. ). #### Existence. Fix a register $F:\mathbf A\to\mathbf B$. By (that was already used in the proof of Axiom [ax:pairs], ), there is a Hilbert space $\calH\_C$ and a unitary $U:\calH\_A\otimes\calH\_C\to\calH\_B$ such that $F(a)=U(a\otimes 1\_{\mathbf C})\adj U$. Let $G(c) := U(1\_{\mathbf{A}}\otimes c)\adj U$. *G* is a register (using Axiom [ax:tensor-1] and ). Define $I(x):=Ux\adj U$ which is a register (using ). We have $F(a)G(c) = U(a\otimes c)\adj U = G(c)F(a)$ since *U* is unitary. Thus *F*, *G* are compatible. Thus the pair $\pair FG$ exists, and $\pair FG(a\otimes c) = I(a\otimes c)$. By Axiom [ax:tensorext], this implies that $\pair FG=I$. Since $I^{-1}(y):=\adj UyU$ is also a register and the inverse of *I*, $\pair FG=I$ is an iso-register. Thus *F*, *G* are complements by definition. #### Uniqueness. Assume that $F:\mathbf A\to\mathbf D$, $G:\mathbf B\to\mathbf D$ are complements, and *F*, $H:\mathbf C\to\mathbf D$ are complements. By (), $G(\mathbf B)$ is the commutant of $F(\mathbf A)$. By the same lemma, $H(\mathbf C)$ is the commutant of $F(\mathbf A)$ as well. Thus $G(\mathbf B)=H(\mathbf C)$. By (), registers with the same range are equivalent. Thus *G*, *H* are equivalent. By definition of “has complements”, we now have: [theo:compl] $\Lquantum$ has complements. Hence the laws from hold for $\Lquantum$. Register categories with complements always have unit registers. However, in the quantum register category, we have a particularly natural unit register (with a simple and explicitly specified domain): [lemma:quantum.unit] $u:\setC\to\mathbf B$, *u*(*c*) :  = *c* ⋅ 1**B** is a unit register where we identify $\bounded(\setC)$ with $\setC$.[19](#fn19) We have $u(\setC)=\idmult\_\mathbf{B}$. By (), this means that *u* is a unit register. Lifting various quantum objects =============================== By definition, a quantum register $F:\mathbf A\to\mathbf B$ maps a bounded operator on the Hilbert space $\calH\_A$ (the space corresponding to the part of the memory that the register occupies) to a bounded operator on $\calH\_B$ (the space corresponding to the whole memory). We already mentioned in that *F* maps unitaries to unitaries. This allows us to interpret unitary operations on a register as unitaries on the whole memory, and thus to interpret a quantum circuit containing only unitary operations. But what about other operations/objects that occur in quantum computation? Can we interpret a quantum state on a register as a quantum state on the whole memory? How about a measurement on a quantum state? A quantum channel? In this section, we answer those questions. Elementary objects ------------------ Since unitaries and projectors *A* are bounded operators, a register *F* lifts them to the whole memory as *F*(*A*). The following lemma tells us that all relevant properties are preserved: Let $F:\mathbf A\to\mathbf B$ be a register. Fix *A* ∈ **A**. [(i)] If *A* is a unitary, then *F*(*A*) is a unitary. If *A* is an isometry, then *F*(*A*) is an isometry. If *A* is a projector, then *F*(*A*) is a projector. (By projector, we mean an *orthogonal* projector throughout this paper.) $\pb\norm{F(A)} = \norm A$. The proof is given in. Subspaces (quantum predicates) ------------------------------ Subspaces of a Hilbert space are important in quantum mechanics to describe the properties of quantum states (e.g., “state *ψ* is in subspace *S*”). E.g., in von-Neumann-Birkhoff quantum logic and in many quantum Hoare logics. Most often, we consider (topologically) *closed subspaces* only. (Otherwise, the limit of a sequence of quantum states satisfying a property does not necessarily satisfy that property. In the finite-dimensional case, every subspace is closed, so that requirement is usually not stated explicitly in works considering only finite-dimensional systems.) Closed subspaces stand in a natural 1-1 correspondence with projectors: If *S* is a closed subspace, there exists a unique projector *P**S* with image *S* (), and for any projector *P*, its image is a closed subspace. Since we already know how to lift projectors through registers (), it is now easy to lift closed subspaces: For a register $F:\mathbf A\to\mathbf B$, and a closed subspace $S\subseteq\calH\_A$, we define $\symbolindexmark\regsubspace{\regsubspace FS}$ as the image of *F*(*P**S*) where *P**S* denotes the projector with image *S*. For non-closed subspaces, we do not know if there is a simple way to lift them through registers.[20](#fn20) The following lemma shows that the lifting of subspaces performs as expected: Let $F:\mathbf A\to\mathbf B$ be a register. Let *S*, *T* be closed subspaces of $\calH\_A$. [(i)] $\regsubspace FS$ is a closed subspace of $\calH\_B$. $F(\ortho S)=\ortho{F(S)}$ where $\symbolindexmark\ortho{\ortho{S}}$ denotes the orthogonal complement of *S*. $F\pb\paren{\{0\}}=\{0\}$. $F(\calH\_A)=F(\calH\_B)$. *F*(*S*) ⊆ *F*(*T*) iff *S* ⊆ *T*. *F*(*S* ∩ *T*) = *F*(*S*) ∩ *F*(*T*). *F*(*S* + *T*) = *F*(*S*) + *F*(*T*) where *S* + *T* denotes the closure of {*ψ* + *ϕ* : *ψ* ∈ *T*, *ϕ* ∈ *S*}, or equivalently the smallest closed subspace containing *S*, *T*. The proof is given in. Mixed states (density operators) -------------------------------- A *mixed state* (roughly speaking, a probabilistic quantum state) is represented by a density operator. Formally, a *density operator* on $\calH\_A$ is a positive trace-class operator on $\calH\_A$ of trace 1. (A trace-class operator is one for which the trace is defined, see. In the finite-dimensional case, the requirement “trace-class” can be omitted. We write $\symbolindexmark\tracecl{\tracecl(\calH)}$ for the space of all trace-class operators on $\calH$, and $\symbolindexmark\trnorm{\trnorm\cdot}$ for the trace norm.) And a *subdensity operator* is a positive trace-class operator of trace  ≤ 1. Every (sub)density operator *ρ* can be written as $\rho=\sum\_i p\_i\psi\_i\adj{\psi\_i}$ with *p**i* ≥ 0 and ∑*i**p**i* = 1 (or  ≤ 1) and $\psi\_i\in\calH\_A$; the intuition is that the state is *ψ**i* with probability *p**i*. Since *ρ* is a bounded operator on $\calH\_A$, it is tempting to think that we can lift *ρ* to $\calH\_B$ as *F*(*ρ*). Unfortunately, this does not work. *F*(*ρ*) will, in general, not be a density operator.[21](#fn21) If we think about the situation, this is not surprising: If we only know what the state of the register *F* is, we cannot tell what the state of the overall system is since we do not know what the state of the remaining parts of the memory should be. (Note that if *F* is an *iso-register*, then *F*(*ρ*) is a density operator if *ρ* is.) However, if registers *F*1, *F*2, …, *F**n* “cover the whole memory” and *ρ*1, *ρ*2, …, *ρ**n* are mixed states on those registers, then it makes sense to talk about the state of the whole system. Formally, we do this as follows: And “*F*1, *F*2, …, *F**n* cover the whole memory” simply means that *F*1, *F*2, …, *F**n* are a partition (see ).[22](#fn22) By definition, this implies that $\pairs{F\_1}{F\_n}$ is an iso-register. Thus we can lift the states *ρ*1, *ρ*2, *ρ*3, … to become a state *ρ* on the whole memory defined as $\rho := \pairs{F\_1}{F\_n}\pb\paren{\rho\_1\otimes\dots\otimes\rho\_n}$. We write for this state *ρ*. (This is a slight abuse of notation, because $\mtensor$ is not actually a binary operation. The whole term $F\_1(\rho\_1)\mtensor\dots\mtensor F\_n(\rho\_n)$ is defined as a whole.) The following lemma ensures that this construction is well-defined (in particular, it produces a density operator), and that lifting density operators is compatible with the operations considered in the previous section. Let $F\_1:\mathbf A\_1\to\mathbf B$, …, $F\_n:\mathbf A\_n\to\mathbf B$ be a partition. Assume that *ρ**i* (*i* = 1, …, *n*) are trace-class operators on $\calH\_{A\_i}$. [(i)] If *ρ*1, …, *ρ**n* are (sub)density operators, then $F\_1(\rho\_1)\mtensor\dots\mtensor F\_n(\rho\_n)$ is a (sub)density operator. $\tr \pb\paren{F\_1(\rho\_1)\mtensor\dots\mtensor F\_n(\rho\_n)} = \tr\rho\_1\cdots\tr\rho\_n$. (And in particular, ${F\_1(\rho\_1)\mtensor\dots\mtensor F\_n(\rho\_n)}$ is trace-class.) The lifting of mixed states does not depend on the order of the registers and operators. That is, for a permutation *π* on 1, …, *n*, we have $F\_{\pi(1)}(\rho\_{\pi(1)})\mtensor F\_{\pi(2)}(\rho\_{\pi(2)})\mtensor\dots\mtensor F\_{\pi(n)}(\rho\_{\pi(n)}) = F\_1(\rho\_1)\mtensor F\_2(\rho\_2)\mtensor\dots\mtensor F\_n(\rho\_n)$. Let *U*, *V* be bounded operators on $\calH\_{A\_i}$. Then $F\_1(U\rho\_1 V)\mtensor F\_2(\rho\_2)\mtensor\dots\mtensor F\_n(\rho\_n) = F\_1(U) \pb\paren{F\_1(\rho\_1)\mtensor\dots\mtensor F\_n(\rho\_n)} F\_1(V)$.[23](#fn23) $F\_1(\rho\_1)\mtensor F\_2(\rho\_2)\mtensor\dots\mtensor F\_n(\rho\_n)$ has rank 1 iff all *ρ**i* have rank 1. $\Phi:\rho\_1,\dots,\rho\_n \mapsto F\_1(\rho\_1)\mtensor\dots\mtensor F\_n(\rho\_n)$ is bounded linear in each argument (with respect to the trace norm). $ \pair{F\_1}{F\_2}(\rho\_1\otimes\rho\_2)\mtensor F\_3(\rho\_3)\mtensor\dots\mtensor F\_n(\rho\_n) = F\_1(\rho\_1)\mtensor F\_2(\rho\_2)\mtensor\dots\mtensor F\_n(\rho\_n)$. Let *S**i* generate $\tracecl(\calH\_{A\_i})$ for *i* = 1, …, *n*. Then $\pb\braces{ F\_1(\rho\_1)\mtensor F\_2(\rho\_2)\mtensor\dots\mtensor F\_n(\rho\_n) : \rho\_i\in S\_i }$ generates $\tracecl(\calH\_B)$. (“Generate” refers to the linear span, closed with respect to the trace norm.) In particular, for quantum (sub)channels[24](#fn24) $\calE,\calF$, if $\calE \pb\paren{ F\_1(\rho\_1)\mtensor F\_2(\rho\_2)\mtensor\dots\mtensor F\_n(\rho\_n) } = \calF\pb\paren{ F\_1(\rho\_1)\mtensor F\_2(\rho\_2)\mtensor\dots\mtensor F\_n(\rho\_n) }$ for all *ρ**i* ∈ *S**i*, then $\calE=\calF$. Let $G\_1:\mathbf C\_1\to\mathbf A\_1$, …, $G\_m:\mathbf C\_m\to\mathbf A\_1$ be a partition, and *σ**i* trace-class operators on $\calH\_{C\_i}$. Then $ F\_1\pB\paren{G\_1(\sigma\_1)\mtensor\dots\mtensor G\_m(\sigma\_m)}\mtensor F\_2(\rho\_2)\mtensor\dots\mtensor F\_n(\rho\_n) = \chain{F\_1}{G\_1}(\sigma\_1) \mtensor\dots\mtensor \chain{F\_1}{G\_m}(\sigma\_m) \mtensor F\_2(\rho\_2)\mtensor\dots\mtensor F\_n(\rho\_n) $. The proof is given in. #### Example. We illustrate how to use lifted mixed states in a small example, to illustrate how the theory developed in this subsection can be used. We want to evaluate the following circuit: ; ; ; ; at () /; (HF) *H*; (cnot) ; (HG) *H*; The bottom wire represents any other quantum registers that are possibly present in the system, i.e., the complement $Z:=\compl{\pairFFF FGH}$. Registers *F* and *H* are initially in the classical state $\ket 0$ ($\selfbutter 0$ as a density operator). And register *G* is in the completely mixed state $\frac12\, 1=\frac12\selfbutter0+\frac12\selfbutter1$. The initial state in this circuit can be expressed as $F\pb\paren{\selfbutter 0} \mtensor G\pb\paren{\frac12\,1} \mtensor H\pb\paren{\selfbutter0} \mtensor Z(\rho\_\mathit{rest})$. And the circuit consists of the unitaries *F*(*H*), $\pair FG(\CNOT)$, *G*(*H*). (Which are to be applied from both sides to the current state, like $\rho\mapsto U\rho\adj U$.) Thus we can evaluate the circuit as follows: $$\begin{aligned} & F\pb\paren{\selfbutter 0} \mtensor G\pb\paren{\tfrac12\,1} \mtensor H\pb\paren{\selfbutter0} \mtensor Z(\rho\_\mathit{rest}) \\ \xmapsto{F(H)\textcolor{gray}{(\cdot)}\adj{F(H)}}\ & F\pb\paren{H\selfbutter 0\adj H} \mtensor G\pb\paren{\tfrac12\,1} \mtensor H\pb\paren{\selfbutter0} \mtensor Z(\rho\_\mathit{rest}) \\&\quad = F\pb\paren{\selfbutter +} \mtensor G\pb\paren{\tfrac12\,1} \mtensor H\pb\paren{\selfbutter0} \mtensor Z(\rho\_\mathit{rest}) \\& \quad \starrel= \pair GH\pb\paren{\tfrac12\,1 \otimes \selfbutter0} \mtensor F\pb\paren{\selfbutter +} \mtensor Z(\rho\_\mathit{rest}) \\ \xmapsto{\pair GH(\CNOT)\textcolor{gray}{(\cdot)}\adj{\pair GH(\CNOT)}}\ & \pair GH\pb\paren{\CNOT\paren{\tfrac12\,1 \otimes \selfbutter0}\adj\CNOT} \mtensor F\pb\paren{\selfbutter +} \mtensor Z(\rho\_\mathit{rest}) \\& \quad= \pair GH\pb\paren{\tfrac12\selfbutter{00}+\tfrac12\selfbutter{11}} \mtensor F\pb\paren{\selfbutter +} \mtensor Z(\rho\_\mathit{rest}) \\ \xmapsto{\pair GH(H\otimes 1)\textcolor{gray}{(\cdot)}\adj{\pair GH(H\otimes 1)}}\ & \pair GH\pb\paren{(H\otimes 1)\pb\paren{\tfrac12\selfbutter{00}+\tfrac12\selfbutter{11}}\adj{(H\otimes 1)}} \mtensor F\pb\paren{\selfbutter +} \mtensor Z(\rho\_\mathit{rest}) \\& \qquad= \pair GH\pb\paren{{\tfrac12\selfbutter{+0}+\tfrac12\selfbutter{-1}}} \mtensor F\pb\paren{\selfbutter +} \mtensor Z(\rho\_\mathit{rest}).\end{aligned}$$ Here ( \* ) is by,  which allow us to arbitrarily reorder and pair registers in order to put the registers involved in the next gate into the first position. Each of the $\xmapsto{\dots}$ is by which applies the current gate to the first tensor factor. Note that in the third $\xmapsto{\dots}$, we apply $\pair GH(H\otimes 1)$ instead of *G*(*H*). This is because we cannot rewrite the current state into the form $G(\dots)\mtensor\dots$ because the state of $\pair GH$ cannot be written as a tensor product anymore after the CNOT. However, $\pair GH(H\otimes 1)=G(H)$ by Axiom [ax:pairs], so we still apply the same gate, only written differently. So the final state can be written as $\ket +$ in the *F* register and $\ket{+0}$ in the *G*, *H* registers. (Due to the CNOT, we cannot separate the states in those registers *G*, *H* anymore.) Note that all the reorderings of registers necessary for applying the gates are quite straightforward and should be easy to automate in a proof assistant. (But we have not done so at this point.) Pure states ----------- Sometimes, it is simpler to work with pure states than mixed states. For example, when reasoning about a quantum circuit that contains only unitary operations, keeping track its behavior on pure states may be simpler since the added generality of mixed states might be overkill. To the best of our knowledge, there are two slightly different definitions of what a pure state on $\calH\_A$ is: A unit vector $\psi\in\calH\_A$. A unit vector $\psi\in\calH\_A$ modulo a global phase factor $c\in\setC$. Equivalently: a one-dimensional subspace of $\calH\_A$. Equivalently: a rank-1 density operator $\psi\adj\psi$. The second formalization is motivated by the fact that two pure states that differ only by a global phase factor (i.e., $\exists c\in\setC.\ \abs c=1\land\psi\_1=c\psi\_2$) are physically indistinguishable, i.e., there is no sequence of operations and measurements that can distinguish them. Since the second formalization is equivalent to restricting ourselves to mixed states of rank 1, we can simply use the mechanisms discussed in the previous section to work with these mixed states. Thus in this section, we will focus on the first formalization, i.e., a state being a unit vector $\psi\in\calH\_A$. First a word on why this formalization is even important (besides the fact that it is widely used). After all, if two states differing only in the global phase are physically indistinguishable, why do we even care about the difference between two such states? The reason (in our opinion) is that one important use case of pure states in quantum computation is to determine the behavior of a unitary by evaluating it on all pure states (or on a basis of pure states). Namely, if *U*1*ψ* = *U*2*ψ* on all $\psi\in\calH\_A$ (or on a basis), then *U*1 = *U*2. We cannot conclude that *U*1 = *U*2 if *U*1*ψ* = *U*2*ψ* only holds up to a phase factor.[25](#fn25) So to cover this use case, we need to see how to lift pure states $\psi\in\calH\_A$ through quantum registers. Unfortunately, while the lifting of all other quantum objects (unitaries, projectors, mixed states, superoperators, measurements) turns out to work very naturally, this is not the case with pure states. #### Definition. Like in the mixed state case (and for the same reasons), we assume a partition $F\_1:\mathbf A\_1\to\mathbf B$, …, $F\_n:\mathbf A\_n\to\mathbf B$, and pure states $\psi\_1\in\calH\_{A\_1}$, …, $\psi\_n\in\calH\_{A\_n}$ on those registers. We define the following auxiliary constructs: For every $\mathbf A$ (or equivalently, for every Hilbert space $\calH\_A$), we fix an arbitrary unit vector $\symbolindexmark\pureeta{\pureeta{\mathbf A}}\in\calH\_A$, with the constraint that $\eta\_{\mathbf A\otimes\mathbf B}=\eta\_{\mathbf A}\otimes\eta\_{\mathbf B}$. For every **B** and every non-zero *b* ∈ **B**, we fix an arbitrary unit vector $\symbolindexmark\purexi{\purexi b}$ in the span of *b*, with the additional constraint that if $\pureeta{\mathbf{B}}$ is in the span of *b*, then $\purexi b:=\pureeta{\mathbf{B}}$. Then we can lift the states *ψ*1, …, *ψ**n* through *F*1, …, *F**n* to $\calH\_B$ as $\psi := \pb\paren{F\_1(\psi\_1\adj{\pureeta{\mathbf A\_1}})\mtensor\dots\mtensor F\_n(\psi\_n\adj{\pureeta{\mathbf A\_n}})} \purexi b$ with $b:={\paren{F\_1(\pureeta{\mathbf{A}\_1}\adj{\pureeta{\mathbf A\_1}})\mtensor\dots\mtensor F\_n(\pureeta{\mathbf{A}\_n}\adj{\pureeta{\mathbf A\_n}})}}$. (Recall the notation $\mtensor$ from the previous section.) We write for this state $\psi\in\calH\_B$. (As for $\mtensor$, this is a slight abuse of notation because $\ptensor$ is not a binary operation.) We call a register $F:\mathbf A\to\mathbf B$ **η*-regular* iff there exists a *c* such that $(F;\compl F)(\pureeta{\mathbf A}\adj{\pureeta{\mathbf B}}\otimes c) = \pureeta{\mathbf B}$. (Intuitively, *η*-regular registers are registers that do not involve any “mappings” (see the introduction), e.g., registers constructed from pairs, $\Fst$, $\Snd$.) This is a technical notion needed for stating below. The following lemma shows that this indeed results in a pure state and shows that the approach is compatible with the lifting of unitaries etc. Assume that $F\_1:\mathbf A\_1\to\mathbf B$, …, $F\_n:\mathbf A\_n\to\mathbf B$ form a partition. Let $\psi\_1,\dots,\psi\_n\in\calH\_{A\_1},\dots,\calH\_{A\_n}$. [(i)] $\pB\norm{F\_1(\psi\_1)\ptensor\dots\ptensor F\_n(\psi\_n)} = \norm{\psi\_1}\cdots\norm{\psi\_n}$. In particular, if *ψ*1, …, *ψ**n* are pure states, so is $F\_1(\psi\_1)\ptensor\dots\ptensor F\_n(\psi\_n)$. The construction does not depend on the order of the registers and pure states. That is, for a permutation *π* on 1, …, *n*, we have $F\_{\pi(1)}(\psi\_{\pi(1)})\ptensor F\_{\pi(2)}(\psi\_{\pi(2)})\ptensor\dots\ptensor F\_{\pi(n)}(\psi\_{\pi(n)}) = F\_1(\psi\_1)\ptensor F\_2(\psi\_2)\ptensor\dots\ptensor F\_n(\psi\_n)$. $\psi\_1,\dots,\psi\_n \mapsto F\_1(\psi\_1)\ptensor F\_2(\psi\_2)\ptensor\dots\ptensor F\_n(\psi\_n)$ is bounded linear in each argument. $ \pair{F\_1}{F\_2}(\psi\_1\otimes\psi\_2)\ptensor F\_3(\psi\_3)\ptensor\dots\ptensor F\_n(\psi\_n) = F\_1(\psi\_1)\ptensor F\_2(\psi\_2)\ptensor\dots\ptensor F\_n(\psi\_n)$.[26](#fn26) Let *S**i* generate $\calH\_{A\_i}$ for *i* = 1, …, *n*. (“Generate” refers to the linear span, closed with respect to the Hilbert space norm.) Then $\pb\braces{ F\_1(\psi\_1)\ptensor\dots\ptensor F\_n(\psi\_n) : \psi\_i\in S\_i }$ generates $\calH\_B$. In particular, for bounded operators *U*, *V*, if $U \pb\paren{ F\_1(\psi\_1)\ptensor \dots\ptensor F\_n(\psi\_n) } = V\pb\paren{ F\_1(\psi\_1)\ptensor\dots\ptensor F\_n(\psi\_n) }$ for all *ψ**i* ∈ *S**i*, then *U* = *V*. Let $G\_1:\mathbf C\_1\to\mathbf A\_1$, …, $G\_m:\mathbf C\_m\to\mathbf A\_1$ be a partition. Assume that *G*1, …, *G**m* are *η*-regular. Then $ F\_1\pB\paren{G\_1(\gamma\_1)\ptensor\dots\ptensor G\_m(\gamma\_m)}\ptensor F\_2(\psi\_2)\ptensor\dots\ptensor F\_n(\psi\_n) = \chain{F\_1}{G\_1}(\gamma\_1) \ptensor\dots\ptensor \chain{F\_1}{G\_m}(\gamma\_m) \ptensor F\_2(\psi\_2)\ptensor\dots\ptensor F\_n(\psi\_n) $. $\Fst$, $\Snd$, $\swap$, $\assoc$, $\assoc'$ are *η*-regular. If *F*, *G* are *η*-regular and compatible, then $\pair FG$ is *η*-regular. If *F*, *G* are *η*-regular, then $\chain FG$ is *η*-regular. If *F*, *G* are *η*-regular, then $F\rtensor G$ is *η*-regular. Let *U* be a bounded operator on $\calH\_{A\_1}$. Then $F\_1(U\psi\_1)\ptensor F\_2(\psi\_i)\ptensor\dots\ptensor F\_n(\psi\_n) = F\_1(U) \pb\paren{F\_1(\psi\_1)\ptensor\dots\ptensor F\_n(\psi\_n)}$. Let $\mathord\lozenge(\psi):=\psi\adj\psi$. Then $F\_1(\mathord\lozenge(\psi\_1))\mtensor\dots\mtensor F\_n(\mathord\lozenge(\psi\_n)) = \mathord\lozenge\pB\paren{F\_1(\psi\_1)\ptensor\dots\ptensor F\_n(\psi\_n)}$. Let *S* be a closed subspace of $\calH\_{A\_1}$. Assume *ψ*2, …, *ϕ**n* ≠ 0. Then $F\_1(\psi\_1)\ptensor\dots\ptensor F\_n(\psi\_n) \in \regsubspace{F\_1}S$ iff *ψ*1 ∈ *S*. (See for the definition of $\regsubspace{F\_1}S$.) The proof is given in. #### Limitations of the approach. In contrast to the lifting of mixed states, there are a number of limitations to this approach of lifting pure states. First, the resulting definition does depend on the arbitrary choice of vectors $\pureeta{\mathbf A},\purexi{F}$. In fact, different choices of $\pureeta{\mathbf A},\purexi{F}$ will map the same pure states on *ψ*1, …, *ψ**n* to different states $F\_1(\psi\_1)\ptensor \dots\ptensor F\_n(\psi\_n)$. (Although it is easy to verify that different choices of $\pureeta{\mathbf A},\purexi{F}$ will lead to $F\_1(\psi\_1)\ptensor \dots\ptensor F\_n(\psi\_n)$ that are equal up to a global phase factor.) In contrast, the lifting of mixed states ($F\_1(\rho\_1)\mtensor\dots\mtensor F\_n(\rho\_n)$) does not depend on arbitrary choices. Second, there are some constraints concerning nested invocations of this construction. (I.e., when one of the states *ψ**i* in $F\_1(\psi\_1)\ptensor \dots\ptensor F\_n(\psi\_n)$ is again of the form $G\_1(\phi\_1)\ptensor \dots\ptensor G\_n(\phi\_n)$.) In this case, we can flatten the nested application of the construction only when *F*1 is *η*-regular (see ); *η*-regularity guarantees that the arbitrarily chosen $\pureeta{\mathbf A}$ are compatible with each other. In contrast, when lifting mixed states, we can flatten nested applications with no side-conditions (see ). Third, depending on the logical foundations we use, it might not even be able to globally fix arbitrary choices $\pureeta{\mathbf A},\purexi{F}$ throughout the category of Hilbert spaces. (While satisfying the required constraints between them, e.g., $\pureeta{\mathbf A\otimes\mathbf B}=\pureeta{\mathbf A}\otimes\pureeta{\mathbf B}$.) We might instead use, e.g., the category of Hilbert spaces $\calH\_A$ with a distinguished unit vector $\eta\_{\mathbf A}$. While this seems like a very tiny constraint, it still means that we change our formal foundation just to accommodate the lifting of pure states.[27](#fn27) Finding a formalization of quantum registers that does not pose the above challenges when lifting pure states (while still preserving the other features described in this paper) is an open problem. #### Example. We illustrate how to use lifted pure states in a small example: We will show that three CNOTs are equivalent to a qubit swap. (This is near trivial in terms of the matrices but the focus of the example is not to show a complex situation but to explain the basic process. In situations involving many different gates on different registers, the present method would shine more as we would avoid complicated chains of swaps and tensor products with the identity.) We want to show that the following two circuits evaluate the same unitary: $$\begin{tikzpicture}[baseline=(current bounding box.center)] \initializeCircuit \newWires{F,G,rest} \stepForward{3mm} \labelWire[\tiny$F$]{F} \labelWire[\tiny$G$]{G} \drawWire{rest} \node at (\getWireCoord{rest}) {\footnotesize/}; \stepForward{3mm} \labelWire[\tiny\rlap{other registers}]{rest} \stepForward{2mm} \node[cnot=G,control=F] (cnot) {}; \stepForward{5mm} \node[cnot=F,control=G] (cnot) {}; \stepForward{5mm} \node[cnot=G,control=F] (cnot) {}; \stepForward{5mm} \drawWires{F,G,rest} \end{tikzpicture} \qquad=\qquad \begin{tikzpicture}[baseline=(current bounding box.center)] \initializeCircuit \newWires{F,G,rest} \stepForward{3mm} \labelWire[\tiny$F$]{F} \labelWire[\tiny$G$]{G} \drawWire{rest} \node at (\getWireCoord{rest}) {\footnotesize/}; \stepForward{3mm} \labelWire[\tiny\rlap{other reg.s}]{rest} \stepForward{2mm} \drawWires{rest,F,G} \stepForward{6mm} \crossWire{F}{G} \crossWire{G}{F} \skipWires{F,G} \stepForward{6mm} \drawWires{F,G,rest} \end{tikzpicture}$$ In the language of registers, we express this as $$\pair FG(\CNOT) \cdot \pair GF(\CNOT) \cdot \pair FG(\CNOT) \qquad=\qquad \pair FG(\Uswap) \label{eq:triple.swap}$$ where $\Uswap$ is the qubit swap $\Uswap(\ket x\otimes\ket y)=\ket y\otimes\ket x$. Note that we did not need to explicitly refer to the “other registers” because no operation happens on them. But when we do wish to refer to them, we can refer to them as $\compl{\pair FG}$. There are many ways to show. For this example, we choose to show it by evaluating the circuit step by step on an initial state from the computational basis. That is, we show $$\begin{gathered} \pB\paren{\pair FG(\CNOT) \cdot \pair GF(\CNOT) \cdot \pair FG(\CNOT)} \pB\paren{F\pb\paren{\ket x}\ptensor G\pb\paren{\ket y}\ptensor \compl{\pair FG}\pb\paren{\ket z}} \\ ={} {\pb\pair FG(\Uswap)}\ \pB\paren{F\pb\paren{\ket x}\ptensor G\pb\paren{\ket y}\ptensor \compl{\pair FG}\pb\paren{\ket z}}. \label{eq:triple.swap.basis}\end{gathered}$$ The left hand side is computed as $$\begin{aligned} &F\pb\paren{\ket x}\ptensor G\pb\paren{\ket y}\ptensor \compl{\pair FG}\pb\paren{\ket z} \starrel={} \pair FG\pb\paren{\ket x\otimes\ket y}\ptensor \compl{\pair FG}\pb\paren{\ket z} \\ \xmapsto{\pair FG(\CNOT)}{} & \pair FG\pb\paren{\ket x\otimes\ket{y\oplus x}}\ptensor \compl{\pair FG}\pb\paren{\ket z} \starstarrel={} \pair GF\pb\paren{\ket{y\oplus x}\otimes\ket x}\ptensor \compl{\pair FG}\pb\paren{\ket z} \\ \xmapsto{\pair GF(\CNOT)}{} & \pair GF\pb\paren{\ket{y\oplus x}\otimes\ket y}\ptensor \compl{\pair FG}\pb\paren{\ket z} \starstarrel={} \pair FG\pb\paren{\ket y\otimes\ket{y\oplus x}}\ptensor \compl{\pair FG}\pb\paren{\ket z} \\ \xmapsto{\pair FG(\CNOT)}{} & \pair FG\pb\paren{\ket y\otimes\ket x}\ptensor \compl{\pair FG}\pb\paren{\ket z} \starrel={} F\pb\paren{\ket y}\ptensor G\pb\paren{\ket x}\ptensor \compl{\pair FG}\pb\paren{\ket z}.\end{aligned}$$ Here ( \* ) follows by. Note that in a more complex situation with more registers, we would only have to pull those registers into the first tensor factor that are involved in the operation we are applying *in this computation step*. All other registers in the tensor product can be left untouched (like $\compl{\pair FG}\pb\paren{\ket z}$ in our example). And ( \*  \* ) follows by,. (By first unwrapping the pair and then rewrapping in a different order.) Each application of the CNOTs is done using. (All these steps are very mechanical and should not be hard to automate. Our present Isabelle/HOL implementation, for example, proves each of the equalities by a simple tactic invocation.) And similarly, $$\begin{aligned} &F\pb\paren{\ket x}\ptensor G\pb\paren{\ket y}\ptensor \compl{\pair FG}\pb\paren{\ket z} = \pair FG\pb\paren{\ket x\otimes\ket y}\ptensor \compl{\pair FG}\pb\paren{\ket z} \\ \xmapsto{\pair FG(\Uswap)}{} & \pair FG\pb\paren{\ket y\otimes\ket x}\ptensor \compl{\pair FG}\pb\paren{\ket z} = F\pb\paren{\ket y}\ptensor G\pb\paren{\ket x}\ptensor \compl{\pair FG}\pb\paren{\ket z}.\end{aligned}$$ Thus the lhs and rhs of both evaluate to the same state. Hence we have shown. From, our initial goal follows by. Quantum channels ---------------- Operations on mixed states are commonly described by *quantum channels* (a.k.a. quantum operation or CPTPM). A quantum channel is a completely positive trace-preserving map $\calE:\tracecl(\calH\_A)\to\tracecl(\calH\_B)$. (In the finite-dimensional case, the set $\tracecl(\calH)$ of all trace-class operators on $\calH$ is simply the set of all linear operators on $\calH$.) In particular, $\calE(\rho)$ is a density operator if *ρ* is. When we work with subdensity operators, it is more natural to consider *quantum subchannels*. $\calE:\tracecl(\calH\_A)\to\tracecl(\calH\_B)$ is a quantum subchannel iff it is completely positive and $\tr\calE(\rho)\leq\tr\rho$ for every positive *ρ*. Quantum (sub)channels have a tensor product compatible with the tensor product of trace-class operators by (). An alternative definition of quantum channels, which we call Kraus-channels to avoid ambiguity, is the following: $\calE:\tracecl(\calH\_A)\to\tracecl(\calH\_B)$ is a *Kraus-channel* iff there exists a family of bounded operators $M\_i:\calH\_A\to\calH\_B$ such that $\calE(\rho)=\sum\_i M\_i\rho\adj{M\_i}$ and $\sum\_i \adj M\_i M\_i=I$.[28](#fn28) A Kraus-channel is always a quantum channel, and the notions coincide if $\calH\_A,\calH\_B$ are separable (i.e., countably dimensional) Hilbert spaces. A Kraus-subchannel is defined in the same way, except with the condition $\sum\_i \adj M\_i M\_i\leq I$ instead of $\sum\_i \adj M\_i M\_i=I$. A Kraus-subchannel is always a quantum subchannel, and the notions coincide if $\calH\_A,\calH\_B$ are separable Hilbert spaces. In the following, we will only consider quantum channels where the domain and codomain are the same space. (I.e., quantum channels $\tracecl(\calH\_A)\to\tracecl(\calH\_A)$). This is because we will consider quantum channels that operate *on* the content of a register, not quantum channels between registers. (However, that does not preclude an operation that, e.g., takes data from one register $F:\mathbf A\to \mathbf C$ and puts the result of the computation into another register $G:\mathbf B\to\mathbf C$. This would simply be described as a quantum channel on $\mathbf A\otimes\mathbf B$ applied to $\pair FG$.) #### Lifting quantum channels. If $F:\mathbf A\to\mathbf B$ is an iso-register, then there is a very natural way of lifting a quantum channel $\calE:\mathbf A\to\mathbf A$. By, $F(a)=Ua\adj U$ for some unitary $U:\calH\_A\to\calH\_B$. We can thus define $F(\calE):\mathbf B\to\mathbf B$ as $F(\calE)(\rho):=U\calE(\adj U\rho U)\adj U$. Or equivalently, $F(\calE):=F\circ \calE\circ F^{-1}$. (The latter gives us a different interpretation: An iso-register is a quantum channel (see below), so $F(\calE)$ is just the composition of three quantum channels.) For non-iso-registers *F*, this definition does not work. (*F* is not a quantum channel, nor can it be represented as $Ua\adj U$.) Fortunately, using complements, we can very easily extend the above idea to arbitrary registers. Let $\compl F:\mathbf A'\to\mathbf B$ be the complement of *F*. We can lift $\calE$ to become a quantum channel on $\mathbf A\otimes\mathbf A'$; the tensor product $\calE\otimes \id$ of quantum channels achieves this. And $\pair F{\compl F}$ is an iso-register $\mathbf A\otimes\mathbf A'\to\mathbf B$ by definition of the complement. So we can lift $\calE\otimes \id$ to a quantum channel on $\mathbf B$ through $\pair F{\compl F}$. Putting this together, we get the following definition: $\symbolindexmark\regchannel{F(\calE)} := \pair F{\compl F} \circ (\calE\otimes\id) \circ \pair F{\compl F}^{-1}$. Note that this definition also applies if $\calE$ is a subchannel. Note that this construction is only applicable when the domain and codomain of $\calE$ are the same. (I.e., we cannot apply it to $\calE:\mathbf A\to\mathbf B$ with $\mathbf A\neq\mathbf B$.) This is because $\regchannel F\calE$ intuitively represents “applying $\calE$ to *F*”, not “applying $\calE$ to *F* and storing the result in a different register”. The definition uses an arbitrarily chosen complement $\compl F$ of *F* (recall that the complement is only unique up to equivalence). The following lemma shows that the definition does not depend on the choice of complement, and that $F(\calE)$ interacts as expected with other constructions. Let $\calE$, $\calF$ be quantum subchannels. Let *F*, *G* be registers. [(i)] If $\calE$ is a quantum (sub)channel, so is $F(\calE)$. Let *F*, *G* be complements. Then $\regchannel F\calE = \pair FG \circ (\calE\otimes\id) \circ \pair FG^{-1}$. (This means the choice of complement is irrelevant.) $\regchannel F\calE\circ \regchannel F\calF = \regchannel F{\calE\circ\calF}$. If *F*, *G* are compatible, then $\pair FG(\calE\otimes\calF) = F(\calE)\circ G(\calF) = G(\calF)\circ F(\calE)$. $\regchannel F\id=\id$. For a partition *F*1, …, *F**n* and trace-class operators *ρ*1, …, *ρ**n*, $\pb\regchannel{F\_1}\calE\pb\paren{F\_1(\rho\_1)\mtensor\dots\mtensor F\_n(\rho\_n)} = F\_1(\calE(\rho\_1))\mtensor F\_2(\rho\_2)\mtensor\dots\mtensor F\_n(\rho\_n)$. [29](#fn29) $F\pb\paren{G(\calE)} = (\chain FG)(\calE)$. $\sigma(\calE\otimes\calF) = \calF\otimes\calE$. $\assoc\pb\paren{\calE\otimes(\calF\otimes\calG)} = (\calE\otimes\calF)\otimes\calG$ and $\assocp\pb\paren{(\calE\otimes\calF)\otimes\calG} = \calE\otimes(\calF\otimes\calG)$. If $F:\mathbf A\to\mathbf C$, $G:\mathbf B\to\mathbf D$ are iso-registers, then $F|\_{\tracecl(\calH\_A)}$ (*F* restricted to trace-class operators) is a bijective quantum channel, and $(F\rtensor G)|\_{\tracecl(\calH\_A\otimes\calH\_B)} = F|\_{\tracecl(\calH\_A)}\otimes G|\_{\tracecl(\calH\_B)}$. The proof is given in. #### Lifting Kraus-channels. A Kraus-channel $\calE$ can be written as $\calE(\rho)=
arxiv_0000116
/9606222]. G. D’Ambrosio, G. F. Giudice, G. Isidori and A. Strumia, “Minimal flavor violation: An Effective field theory approach,” Nucl. Phys. B **645**, 155 (2002) [hep-ph/0207036]. I. Z. Rothstein, “TASI lectures on effective field theories,” hep-ph/0308266. A. Rajaraman, W. Shepherd, T. M. P. Tait and A. M. Wijangco, “LHC Bounds on Interactions of Dark Matter,” Phys. Rev. D **84**, 095013 (2011) [arXiv:1108.1196 [hep-ph]]. T. Appelquist, H. -C. Cheng and B. A. Dobrescu, “Bounds on universal extra dimensions,” Phys. Rev. D **64**, 035002 (2001) [hep-ph/0012100]. H. -C. Cheng, K. T. Matchev and M. Schmaltz, “Bosonic supersymmetry? Getting fooled at the CERN LHC,” Phys. Rev. D **66**, 056006 (2002) [hep-ph/0205314]. K. Cranmer and I. Yavin, “RECAST: Extending the Impact of Existing Analyses,” JHEP **1104**, 038 (2011) [arXiv:1010.2506 [hep-ex]]. K. Rao and D. Whiteson, “Reinterpretion of Experimental Results with Basis Templates,” arXiv:1203.6642 [hep-ex]. M. Drees, H. Dreiner, D. Schmeier, J. Tattersall and J. S. Kim, “CheckMATE: Confronting your Favourite New Physics Model with LHC Data,” arXiv:1312.2591 [hep-ph]. M. Papucci, K. Sakurai, A. Weiler and L. Zeune, “Fastlim: a fast LHC limit calculator,” arXiv:1402.0492 [hep-ph]. J. Alwall, P. Schuster and N. Toro, “Simplified Models for a First Characterization of New Physics at the LHC,” Phys. Rev. D **79**, 075020 (2009) [arXiv:0810.3921 [hep-ph]]. D. Alves *et al.* [LHC New Physics Working Group Collaboration], “Simplified Models for LHC New Physics Searches,” J. Phys. G **39**, 105005 (2012) [arXiv:1105.2838 [hep-ph]]. N. Arkani-Hamed, P. Schuster, N. Toro, J. Thaler, L. -T. Wang, B. Knuteson and S. Mrenna, “MARMOSET: The Path from LHC Data to the New Standard Model via On-Shell Effective Theories,” hep-ph/0703088 [HEP-PH]. [ATLAS Collaboration], “Search for supersymmetry with jets and missing transverse momentum: Additional model interpretations,” ATLAS-CONF-2011-155. H. Okawa [ATLAS Collaboration], “Interpretations of SUSY Searches in ATLAS with Simplified Models,” arXiv:1110.0282 [hep-ex]. [ATLAS Collaboration], “Interpretation of same-sign dilepton events at ATLAS with a simplified SUSY model,” ATLAS-CONF-2011-091. S. Chatrchyan *et al.* [CMS Collaboration], “Interpretation of Searches for Supersymmetry with simplified Models,” Phys. Rev. D **88**, no. 5, 052017 (2013) [arXiv:1301.2175 [hep-ex]]. K. Kondo, “Dynamical Likelihood Method for Reconstruction of Events With Missing Momentum. 1: Method and Toy Models,” J. Phys. Soc. Jap.  **57**, 4126 (1988). K. Kondo, “Dynamical likelihood method for reconstruction of events with missing momentum. 2: Mass spectra for 2 → 2 processes,” J. Phys. Soc. Jap.  **60**, 836 (1991). K. Kondo, T. Chikamatsu and S. H. Kim, “Dynamical likelihood method for reconstruction of events with missing momentum. 3: Analysis of a CDF high p(T) e mu event as t anti-t production,” J. Phys. Soc. Jap.  **62**, 1177 (1993). R. H. Dalitz and G. R. Goldstein, “The Decay and polarization properties of the top quark,” Phys. Rev. D **45**, 1531 (1992). B. Abbott *et al.* [D0 Collaboration], “Measurement of the top quark mass in the dilepton channel,” Phys. Rev. D **60**, 052001 (1999) [hep-ex/9808029]. J. C. Estrada Vigil, “Maximal use of kinematic information for the extraction of the mass of the top quark in single-lepton t anti-t events at D0,” FERMILAB-THESIS-2001-07. M. F. Canelli, “Helicity of the *W* boson in single - lepton *t**t̄* events,” UMI-31-14921. V. M. Abazov *et al.* [D0 Collaboration], “A precision measurement of the mass of the top quark,” Nature **429**, 638 (2004) [hep-ex/0406031]. J. S. Gainer, J. Lykken, K. T. Matchev, S. Mrenna and M. Park, “The Matrix Element Method: Past, Present, and Future,” arXiv:1307.3546 [hep-ph]. P. Artoisenet and O. Mattelaer, “MadWeight: Automatic event reweighting with matrix elements,” PoS CHARGED **2008**, 025 (2008). P. Artoisenet, V. Lemaitre, F. Maltoni and O. Mattelaer, “Automation of the matrix element reweighting method,” JHEP **1012**, 068 (2010) [arXiv:1007.3300 [hep-ph]]. G. P. Lepage, “A New Algorithm for Adaptive Multidimensional Integration,” J. Comput. Phys.  **27**, 192 (1978). J. Alwall, M. Herquet, F. Maltoni, O. Mattelaer and T. Stelzer, “MadGraph 5 : Going Beyond,” JHEP **1106**, 128 (2011) [arXiv:1106.0522 [hep-ph]]. A. Belyaev, N. D. Christensen and A. Pukhov, “CalcHEP 3.4 for collider physics within and beyond the Standard Model,” Comput. Phys. Commun.  **184**, 1729 (2013) [arXiv:1207.6082 [hep-ph]]. A. Pukhov, E. Boos, M. Dubinin, V. Edneral, V. Ilyin, D. Kovalenko, A. Kryukov and V. Savrin *et al.*, “CompHEP: A Package for evaluation of Feynman diagrams and integration over multiparticle phase space,” hep-ph/9908288. E. Boos *et al.* [CompHEP Collaboration], “CompHEP 4.4: Automatic computations from Lagrangians to events,” Nucl. Instrum. Meth. A **534**, 250 (2004) [hep-ph/0403113]. L. Lonnblad, “ARIADNE version 4: A Program for simulation of QCD cascades implementing the color dipole model,” Comput. Phys. Commun.  **71**, 15 (1992). G. Corcella, I. G. Knowles, G. Marchesini, S. Moretti, K. Odagiri, P. Richardson, M. H. Seymour and B. R. Webber, “HERWIG 6: An Event generator for hadron emission reactions with interfering gluons (including supersymmetric processes),” JHEP **0101**, 010 (2001) [hep-ph/0011363]. M. Bahr, S. Gieseke, M. A. Gigg, D. Grellscheid, K. Hamilton, O. Latunde-Dada, S. Platzer and P. Richardson *et al.*, “Herwig++ Physics and Manual,” Eur. Phys. J. C **58**, 639 (2008) [arXiv:0803.0883 [hep-ph]]. T. Sjostrand, S. Mrenna and P. Z. Skands, “PYTHIA 6.4 Physics and Manual,” JHEP **0605**, 026 (2006) [hep-ph/0603175]. T. Sjostrand, S. Mrenna and P. Z. Skands, “A Brief Introduction to PYTHIA 8.1,” Comput. Phys. Commun.  **178**, 852 (2008) [arXiv:0710.3820 [hep-ph]]. T. Gleisberg, S. Hoeche, F. Krauss, A. Schalicke, S. Schumann and J. -C. Winter, “SHERPA 1. alpha: A Proof of concept version,” JHEP **0402**, 056 (2004) [hep-ph/0311263]. P. Z. Skands, ``Pythia and Vincia". A. J. Larkoski, J. J. Lopez-Villarejo and P. Skands, “Helicity-Dependent Showers and Matching with VINCIA,” Phys. Rev. D **87**, no. 5, 054033 (2013) [arXiv:1301.0933 [hep-ph]]. W. T. Giele, L. Hartgring, D. A. Kosower, E. Laenen, A. J. Larkoski, J. J. Lopez-Villarejo, M. Ritzmann and P. Skands, “The Vincia Parton Shower,” PoS DIS **2013**, 165 (2013) [arXiv:1307.1060]. R. Field, “Early LHC Underlying Event Data - Findings and Surprises,” arXiv:1010.3558 [hep-ph]. R. Field, “Min-Bias and the Underlying Event at the LHC,” Acta Phys. Polon. B **42**, 2631 (2011) [arXiv:1110.5530 [hep-ph]]. R. Field, “Min-Bias and the Underlying Event at the LHC,” arXiv:1202.0901 [hep-ph]. R. Field, “The underlying event in hadronic collisions,” Ann. Rev. Nucl. Part. Sci.  **62**, 453 (2012). P. Z. Skands, “Soft-QCD and UE spectra in pp collisions at very high CM energies (a Snowmass white paper),” arXiv:1308.2813 [hep-ph]. T. Kinoshita, “Mass singularities of Feynman amplitudes,” J. Math. Phys.  **3**, 650 (1962). T. D. Lee and M. Nauenberg, “Degenerate Systems and Mass Singularities,” Phys. Rev.  **133**, B1549 (1964). S. Albino, “The Hadronization of partons,” Rev. Mod. Phys.  **82**, 2489 (2010) [arXiv:0810.4255 [hep-ph]]. S. D. Ellis, J. Huston, K. Hatakeyama, P. Loch and M. Tonnesmann, “Jets in hadron-hadron collisions,” Prog. Part. Nucl. Phys.  **60**, 484 (2008) [arXiv:0712.2447 [hep-ph]]. G. P. Salam, “Towards Jetography,” Eur. Phys. J. C **67**, 637 (2010) [arXiv:0906.1833 [hep-ph]]. B. M. Waugh, H. Jung, A. Buckley, L. Lonnblad, J. M. Butterworth and E. Nurse, “HZTool and Rivet: Toolkit and Framework for the Comparison of Simulated Final States and Data at Colliders,” hep-ph/0605034. P. Z. Skands, “The Perugia Tunes,” arXiv:0905.3418 [hep-ph]. A. Buckley, H. Hoeth, H. Lacker, H. Schulz and J. E. von Seggern, “Systematic event generator tuning for the LHC,” Eur. Phys. J. C **65**, 331 (2010) [arXiv:0907.2973 [hep-ph]]. P. Z. Skands, “Tuning Monte Carlo Generators: The Perugia Tunes,” Phys. Rev. D **82**, 074018 (2010) [arXiv:1005.3457 [hep-ph]]. S. Catani, F. Krauss, R. Kuhn and B. R. Webber, “QCD matrix elements + parton showers,” JHEP **0111**, 063 (2001) [hep-ph/0109231]. F. Krauss, “Matrix elements and parton showers in hadronic interactions,” JHEP **0208**, 015 (2002) [hep-ph/0205283]. M. L. Mangano, M. Moretti, F. Piccinini and M. Treccani, “Matching matrix elements and shower evolution for top-quark production in hadronic collisions,” JHEP **0701**, 013 (2007) [hep-ph/0611129]. J. R. Dell’Aquila and C. A. Nelson, “*P* or CP Determination by Sequential Decays: V1 V2 Modes With Decays Into $\bar{\ell}$epton (A) ℓ(*B*) And/or *q̄* (A) *q*(*B*),” Phys. Rev. D **33**, 80 (1986). C. A. Nelson, “Correlation Between Decay Planes in Higgs Boson Decays Into *W* Pair (Into *Z* Pair),” Phys. Rev. D **37**, 1220 (1988). B. A. Kniehl, “The Higgs Boson Decay H → Z *g**g*,” Phys. Lett. B **244**, 537 (1990). A. Soni and R. M. Xu, “Probing CP violation via Higgs decays to four leptons,” Phys. Rev. D **48**, 5259 (1993) [hep-ph/9301225]. D. Chang, W. -Y. Keung and I. Phillips, “CP odd correlation in the decay of neutral Higgs boson into Z Z, W+ W-, or t anti-t,” Phys. Rev. D **48**, 3225 (1993) [hep-ph/9303226]. V. D. Barger, K. -m. Cheung, A. Djouadi, B. A. Kniehl and P. M. Zerwas, “Higgs bosons: Intermediate mass range at e+ e- colliders,” Phys. Rev. D **49**, 79 (1994) [hep-ph/9306270]. T. Arens and L. M. Sehgal, “Energy spectra and energy correlations in the decay *H* → *Z**Z* → *μ*+*μ*−*μ*+*μ*−,” Z. Phys. C **66**, 89 (1995) [hep-ph/9409396]. S. Y. Choi, D. J. Miller, M. M. Muhlleitner and P. M. Zerwas, “Identifying the Higgs spin and parity in decays to Z pairs,” Phys. Lett. B **553**, 61 (2003) [hep-ph/0210077]. B. C. Allanach, K. Odagiri, M. J. Palmer, M. A. Parker, A. Sabetfakhri and B. R. Webber, “Exploring small extra dimensions at the large hadron collider,” JHEP **0212**, 039 (2002) [hep-ph/0211205]. C. P. Buszello, I. Fleck, P. Marquard and J. J. van der Bij, “Prospective analysis of spin- and CP-sensitive variables in *H* → *Z**Z* → *l*(1) + *l*(1) − *l*(2) + *l*(2) −  at the LHC,” Eur. Phys. J. C **32**, 209 (2004) [hep-ph/0212396]. R. M. Godbole, D. J. Miller and M. M. Muhlleitner, “Aspects of CP violation in the H ZZ coupling at the LHC,” JHEP **0712**, 031 (2007) [arXiv:0708.0458 [hep-ph]]. V. A. Kovalchuk, “Model-independent analysis of CP violation effects in decays of the Higgs boson into a pair of the W and Z bosons,” J. Exp. Theor. Phys.  **107**, 774 (2008). W. -Y. Keung, I. Low and J. Shu, “Landau-Yang Theorem and Decays of a Z’ Boson into Two Z Bosons,” Phys. Rev. Lett.  **101**, 091802 (2008) [arXiv:0806.2864 [hep-ph]]. O. Antipin and A. Soni, “Towards establishing the spin of warped gravitons,” JHEP **0810**, 018 (2008) [arXiv:0806.3427 [hep-ph]]. Q. -H. Cao, C. B. Jackson, W. -Y. Keung, I. Low and J. Shu, “The Higgs Mechanism and Loop-induced Decays of a Scalar into Two Z Bosons,” Phys. Rev. D **81**, 015010 (2010) [arXiv:0911.3398 [hep-ph]]. Y. Gao, A. V. Gritsan, Z. Guo, K. Melnikov, M. Schulze and N. V. Tran, “Spin determination of single-produced resonances at hadron colliders,” Phys. Rev. D **81**, 075022 (2010) [arXiv:1001.3396 [hep-ph]]. A. De Rujula, J. Lykken, M. Pierini, C. Rogan and M. Spiropulu, “Higgs look-alikes at the LHC,” Phys. Rev. D **82**, 013003 (2010) [arXiv:1001.5300 [hep-ph]]. C. Englert, C. Hackstein and M. Spannowsky, “Measuring spin and CP from semi-hadronic ZZ decays using jet substructure,” Phys. Rev. D **82**, 114024 (2010) [arXiv:1010.0676 [hep-ph]]. A. Matsuzaki and H. Tanaka, “Determination of the Higgs CP property in Hadron Colliders,” arXiv:1101.2104 [hep-ph]. U. De Sanctis, M. Fabbrichesi and A. Tonero, “Telling the spin of the ’Higgs boson’ at the LHC,” Phys. Rev. D **84**, 015013 (2011) [arXiv:1103.1973 [hep-ph]]. H. E. Logan and J. Z. Salvail, “Model-independent Higgs coupling measurements at the LHC using the *H* → *Z**Z* → 4*l* lineshape,” Phys. Rev. D **84**, 073001 (2011) [arXiv:1107.4342 [hep-ph]]. J. S. Gainer, K. Kumar, I. Low and R. Vega-Morales, “Improving the sensitivity of Higgs boson searches in the golden channel,” JHEP **1111**, 027 (2011) [arXiv:1108.2274 [hep-ph]]. I. Low, P. Schwaller, G. Shaughnessy and C. E. M. Wagner, “The dark side of the Higgs boson,” Phys. Rev. D **85**, 015009 (2012) [arXiv:1110.4405 [hep-ph]]. C. Englert, M. Spannowsky and M. Takeuchi, “Measuring Higgs CP and couplings with hadronic event shapes,” JHEP **1206**, 108 (2012) [arXiv:1203.5788 [hep-ph]]. J. M. Campbell, W. T. Giele and C. Williams, “The Matrix Element Method at Next-to-Leading Order,” JHEP **1211**, 043 (2012) [arXiv:1204.4424 [hep-ph]]. J. M. Campbell, W. T. Giele and C. Williams, “Extending the Matrix Element Method to Next-to-Leading Order,” arXiv:1205.3434 [hep-ph]. N. Kauer and G. Passarino, “Inadequacy of zero-width approximation for a light Higgs boson signal,” JHEP **1208**, 116 (2012) [arXiv:1206.4803 [hep-ph]]. B. A. Kniehl and O. L. Veretin, “Low-mass Higgs decays to four leptons at one loop and beyond,” Phys. Rev. D **86**, 053007 (2012) [arXiv:1206.7110 [hep-ph]]. J. W. Moffat, “Identification of the 125 GeV Resonance as a Pseudoscalar Quarkonium Meson,” arXiv:1207.6015 [hep-ph]. B. Coleppa, K. Kumar and H. E. Logan, “Can the 126 GeV boson be a pseudoscalar?,” Phys. Rev. D **86**, 075022 (2012) [arXiv:1208.2692 [hep-ph]]. S. Bolognesi, Y. Gao, A. V. Gritsan, K. Melnikov, M. Schulze, N. V. Tran and A. Whitbeck, “On the spin and parity of a single-produced resonance at the LHC,” Phys. Rev. D **86**, 095031 (2012) [arXiv:1208.4018 [hep-ph]]. R. Boughezal, T. J. LeCompte and F. Petriello, “Single-variable asymmetries for measuring the ‘Higgs’ boson spin and CP properties,” arXiv:1208.4311 [hep-ph]. D. Stolarski and R. Vega-Morales, “Directly Measuring the Tensor Structure of the Scalar Coupling to Gauge Bosons,” Phys. Rev. D **86**, 117504 (2012) [arXiv:1208.4840 [hep-ph]]. P. Cea, “Comment on the evidence of the Higgs boson at LHC,” arXiv:1209.3106 [hep-ph]. J. Kumar, A. Rajaraman and D. Yaylali, “Spin Determination for Fermiophobic Bosons,” Phys. Rev. D **86**, 115019 (2012) [arXiv:1209.5432 [hep-ph]]. C. -Q. Geng, D. Huang, Y. Tang and Y. -L. Wu, “Note on 125 GeV Spin-2 particle,” Phys. Lett. B **719**, 164 (2013) [arXiv:1210.5103 [hep-ph]]. P. Avery, D. Bourilkov, M. Chen, T. Cheng, A. Drozdetskiy, J. S. Gainer, A. Korytov and K. T. Matchev *et al.*, “Precision studies of the Higgs boson decay channel *H* → *Z**Z* → 4*l* with MEKD,” Phys. Rev. D **87**, no. 5, 055006 (2013) [arXiv:1210.0896 [hep-ph]]. E. Masso and V. Sanz, “Limits on Anomalous Couplings of the Higgs to Electroweak Gauge Bosons from LEP and LHC,” Phys. Rev. D **87**, no. 3, 033001 (2013) [arXiv:1211.1320 [hep-ph]]. Y. Chen, N. Tran and R. Vega-Morales, “Scrutinizing the Higgs Signal and Background in the 2*e*2*μ* Golden Channel,” JHEP **1301**, 182 (2013) [arXiv:1211.1959 [hep-ph]]. T. Modak, D. Sahoo, R. Sinha and H. -Y. Cheng, “Inferring the nature of the boson at 125-126 GeV,” arXiv:1301.5404 [hep-ph]. S. Kanemura, M. Kikuchi and K. Yagyu, “Probing exotic Higgs sectors from the precise measurement of Higgs boson couplings,” Phys. Rev. D **88**, 015020 (2013) [arXiv:1301.7303 [hep-ph]]. J. S. Gainer, J. Lykken, K. T. Matchev, S. Mrenna and M. Park, “Geolocating the Higgs Boson Candidate at the LHC,” Phys. Rev. Lett.  **111**, 041801 (2013) [arXiv:1304.4936 [hep-ph]]. G. Isidori, A. V. Manohar and M. Trott, “Probing the nature of the Higgs-like Boson via *h* → *V**F* decays,” Physics Letters B 728C (2014), pp. 131-135 [arXiv:1305.0663 [hep-ph]]. J. Frank, M. Rauch and D. Zeppenfeld, “Higgs Spin Determination in the WW channel and beyond,” arXiv:1305.1883 [hep-ph]. B. Grinstein, C. W. Murphy and D. Pirtskhalava, “Searching for New Physics in the Three-Body Decays of the Higgs-like Particle,” JHEP **1310**, 077 (2013) [arXiv:1305.6938 [hep-ph]]. F. Caola and K. Melnikov, “Constraining the Higgs boson width with ZZ production at the LHC,” Phys. Rev. D **88**, 054024 (2013) [arXiv:1307.4935 [hep-ph]]. S. Banerjee, S. Mukhopadhyay and B. Mukhopadhyaya, “Higher dimensional operators and LHC Higgs data : the role of modified kinematics,” arXiv:1308.4860 [hep-ph]. Y. Sun, X. -F. Wang and D. -N. Gao, “CP mixed property of the Higgs-like particle in the decay channel *h* → *Z**Z*\* → 4*l*,” arXiv:1309.4171 [hep-ph]. I. Anderson, S. Bolognesi, F. Caola, Y. Gao, A. V. Gritsan, C. B. Martin, K. Melnikov and M. Schulze *et al.*, “Constraining anomalous HVV interactions at proton and lepton colliders,” arXiv:1309.4819 [hep-ph]. M. Chen, T. Cheng, J. S. Gainer, A. Korytov, K. T. Matchev, P. Milenovic, G. Mitselmakher and M. Park *et al.*, “The role of interference in unraveling the ZZ-couplings of the newly discovered boson at the LHC,” arXiv:1310.1397 [hep-ph]. G. Buchalla, O. Cata and G. D’Ambrosio, “Nonstandard Higgs Couplings from Angular Distributions in *h* → *Z*ℓ+ℓ−,” arXiv:1310.2574 [hep-ph]. Y. Chen and R. Vega-Morales, “Extracting Effective Higgs Couplings in the Golden Channel,” arXiv:1310.2893 [hep-ph]. J. M. Campbell, R. K. Ellis and C. Williams, “Bounding the Higgs width at the LHC using full analytic results for *g**g* → 2*e*2*μ*,” arXiv:1311.3589 [hep-ph]. Y. Chen, E. Di Marco, J. Lykken, M. Spiropulu, R. Vega-Morales and S. Xie, “8D Likelihood Effective Higgs Couplings Extraction Framework in the Golden Channel,” arXiv:1401.2077 [hep-ex]. M. Gonzalez-Alonso and G. Isidori, “The *h* → 4ℓ spectrum at low *m*34: Standard Model vs. light New Physics,” arXiv:1403.2648 [hep-ph]. J. S. Gainer, J. Lykken, K. T. Matchev, S. Mrenna and M. Park, “Beyond Geolocating: Constraining Higher Dimensional Operators in *H* → 4ℓ with Off-Shell Production and More,” arXiv:1403.4951 [hep-ph]. Y. Chen, R. Harnik and R. Vega-Morales, arXiv:1404.1336 [hep-ph]. S. Chatrchyan *et al.* [CMS Collaboration], “Study of the Mass and Spin-Parity of the Higgs Boson Candidate Via Its Decays to Z Boson Pairs,” Phys. Rev. Lett.  **110**, 081803 (2013) [arXiv:1212.6639 [hep-ex]]. G. Aad *et al.* [ATLAS Collaboration], “Evidence for the spin-0 nature of the Higgs boson using ATLAS data,” Phys. Lett. B **726**, 120 (2013) [arXiv:1307.1432 [hep-ex]]. S. Chatrchyan *et al.* [CMS Collaboration], “Measurement of the properties of a Higgs boson in the four-lepton final state,” arXiv:1312.5353 [hep-ex]. [ATLAS Collaboration], “Measurements of the properties of the Higgs-like boson in the four lepton decay channel with the ATLAS detector using 25 fb−1 of proton-proton collision data,” ATLAS-CONF-2013-013. [ATLAS Collaboration], “Study of the spin of the new boson with up to 25 fb− 1 of ATLAS data,” ATLAS-CONF-2013-040. [CMS Collaboration], “Combination of standard model Higgs boson searches and measurements of the properties of the new boson with a mass near 125 GeV,” CMS-PAS-HIG-13-005. L. Lyons, Cambridge, Uk: Univ. Pr. ( 1986) 226p P. C. Bhat, “Multivariate Analysis Methods in Particle Physics,” Ann. Rev. Nucl. Part. Sci.  **61**, 281 (2011). H. C. Cheng, K. T. Matchev and M. Schmaltz, “Radiative corrections to Kaluza-Klein masses,” Phys. Rev.  D **66**, 036005 (2002) [arXiv:hep-ph/0204342]. L. Edelhauser, K. T. Matchev and M. Park, “Spin effects in the antler event topology at hadron colliders,” JHEP **1211**, 006 (2012) [arXiv:1205.2054 [hep-ph]]. G. L. Bayatian et al. [CMS Collaboration], “CMS technical design report, volume II: Physics performance,” J. Phys. G G 34, 995 (2007). T. Han, I. W. Kim and J. Song, “Kinematic Cusps: Determining the Missing Particle Mass at Colliders,” Phys. Lett.  B **693**, 575 (2010) [arXiv:0906.5009 [hep-ph]]. M. Battaglia, A. Datta, A. De Roeck, K. Kong and K. T. Matchev, “Contrasting supersymmetry and universal extra dimensions at the CLIC multi-TeV e+ e- collider,” JHEP **0507**, 033 (2005) [hep-ph/0502041]. A. Datta, K. Kong and K. T. Matchev, “Discrimination of supersymmetry and universal extra dimensions at hadron colliders,” Phys. Rev. D **72**, 096006 (2005) [Erratum-ibid. D **72**, 119901 (2005)] [hep-ph/0509246]. P. Meade and M. Reece, “Top partners at the LHC: Spin and mass measurement,” Phys. Rev. D **74**, 015010 (2006) [hep-ph/0601124]. C. Athanasiou, C. G. Lester, J. M. Smillie and B. R. Webber, “Distinguishing Spins in Decay Chains at the Large Hadron Collider,” JHEP **0608**, 055 (2006) [hep-ph/0605286]. L. -T. Wang and I. Yavin, “Spin measurements in cascade decays at the LHC,” JHEP **0704**, 032 (2007) [hep-ph/0605296]. M. Burns, K. Kong, K. T. Matchev and M. Park, “A General Method for Model-Independent Measurements of Particle Spins, Couplings and Mixing Angles in Cascade Decays with Missing Energy at Hadron Colliders,” JHEP **0810**, 081 (2008) [arXiv:0808.2472 [hep-ph]]. A. J. Barr, “Measuring slepton spin at the LHC,” JHEP **0602**, 042 (2006) [hep-ph/0511115]. P. Konar, K. Kong and K. T. Matchev, “$\sqrt{\hat{s}}\_{min}$ : A Global inclusive variable for determining the mass scale of new physics in events with missing energy at hadron colliders,” JHEP **0903**, 085 (2009) [arXiv:0812.1042 [hep-ph]]. P. Konar, K. Kong, K. T. Matchev and M. Park, “RECO level $\sqrt{s}\_{min}$ and subsystem $\sqrt{s}\_{min}$: Improved global inclusive variables for measuring the new physics mass scale in ${\not \!\! E\_T}$ events at hadron colliders,” JHEP **1106**, 041 (2011) [arXiv:1006.0653 [hep-ph]]. I. Hinchliffe, F. E. Paige, M. D. Shapiro, J. Soderqvist and W. Yao, “Precision SUSY measurements at CERN LHC,” Phys. Rev. D **55**, 5520 (1997) [hep-ph/9610544]. A. Tricoli [ATLAS Collaboration], “Parton Densities at the LHC,” arXiv:0808.2579 [hep-ex]. --- 1. Ideally one would like to avoid theoretical prejudice completely, by going beyond SUSY and UED spin assignments.[↩](#fnref1) 2. *M**Z*2 has been shown to be a relatively sensitive variable for signal versus background discrimination and for measuring Higgs properties, see, for example .[↩](#fnref2) 3. The charged Higgs is a scalar and cannot “communicate” information about the helicity of the *b*-quark to the lepton, thus the shape of the distribution is as as in the pure phase space case.[↩](#fnref3) 4. The two samples must be mixed accounting for the relative number of events generated in regions N and $\overline{\mathcal{N}}$. This is an example of multi-channel integration with channels *I* and *J*.[↩](#fnref4) 5. This is another example of multi-channel integration.[↩](#fnref5) Exploring Theory Space with Monte Carlo Reweighting =================================================== Introduction ============ Monte Carlo (MC) simulation is an essential and ubiquitous tool in particle physics . For realistic studies, such simulation generally includes the modeling of the detector response as well as the underlying physical process. While theoretical studies generally require only “fast” detector simulation, such as that performed by PGS  or Delphes , the modeling of the detector response to a given physics event at the experimental level, using frameworks such as GEANT , is necessarily a time-consuming and computationally expensive process. The challenge of accommodating the current lack of direct evidence for new physics within frameworks that address the limitations of the Standard Model (SM) leads to models with many parameters, such as the general gauge mediation  and pMSSM  frameworks in low-energy Supersymmetry (SUSY). Even when no particularly compelling model of Beyond the SM (BSM) physics exists for a given process, it may be useful to parameterize potential deviations from the SM using an effective theory , which may involve many operators and hence many parameters. In addition to the situations above which involve many continuous parameters, there are also issues with discrete parameters; in particular there can be many choices for the spin assignments for particles in a given event topology. A well-known example of this is the fact that most if not all SUSY topologies are also produced in models of Universal Extra Dimensions (UED) . It is therefore desirable to study, at the very least, the UED counterparts of SUSY signal benchmark points.[1](#fn1) Clearly experimentalists at the CERN Large Hadron Collider (LHC) and similar experiments are faced with an overwhelming multitude of theoretical models which could be searched for. To determine the properties of a given signal in the detector generally requires detector-level MC simulation (fullsim) of that signal, e.g., with GEANT, which is a time-consuming process, as noted above. To repeat this procedure for the myriad possibilities suggested by theorists is simply not possible. Attempts have been made to solve this problem , perhaps most notably by encouraging theorists to test the predictions of their models against suitably chosen experimental results (e.g., limits on cross-sections times branching fractions as a function of the mass spectrum). However, an actual discovery will still require the use of fullsim samples to obtain realistic distributions for the kinematic variables of interest (such as test statistics). We propose and describe the following procedure for the experimental analysis of a large number of signal hypotheses: 1. Generate a large set of unweighted sample events at generator level, {*G**i*} for a specific model and point in that model’s parameter space, which we term $(A,\bm{\alpha})$ (where *A* describes the model and $\bm{\alpha}$ specifies a point in that model’s parameter space); a reasonable choice for the model, *A*, would be a simplified model . 2. Pass the generator-level events, {*G**i*} through realistic detector simulation. Select desired events (“apply cuts”) to obtain a set of detector level events {*D**i*}. (Every generator-level event *G**i* maps to some detector-level event, *D**i*, but not every detector-level event passes our cuts.) 3. Note that the cross section after acceptances and efficiencies, as well as the detector-level predictions for all distributions in signal hypothesis $(A,\bm{\alpha})$ can be obtained from the resulting event sample. 4. Choose some other model, *B*, and parameter space point, $\bm{\beta}$. Assign each detector-level event *D**i* a weight $w(B,\bm{\beta},G\_i)/ w(A,\bm{\alpha},G\_i)$, which *depends only on generator-level information* and uses, in particular, the “truth” values of particles such as neutrinos or neutralinos which are not observed in the detector. Because the reweighting involves only generator-level information, it is very fast compared with detector simulation. As we will show in the following section, where we derive and justify the method, this is not an approximation– in the limit of infinite MC samples, this method reproduces all distributions and quantities of interest exactly. Therefore, a relatively efficient method for examining the large “theory” space would be for experimentalists to provide theorists with the hard-process/ parton-level events which they have subjected to detector simulation. The theorists can then calculate the weights of these events under the original model/ parameter space point as well as the weights of these events for any desired point in model/ parameter space. The experimentalists can take these weights and obtain any desired distribution *at detector level* from reweighting their detector-level events. (Of course the “theorist” in this description could also be an experimentalist interested in the model being studied.) While, as we will discuss below, practical issues *may* arise for finite MC samples, in general this method could vastly extend the reach into theory space of LHC analyses. We reiterate that the MC events being shared here are “truth” events. The momenta of all “invisible” final state particles (such as neutrinos or neutralinos) are fully specified. Therefore the calculation of weights in different scenarios is numerically trivial; the time-consuming integration over undetermined momenta which is necessary in the Matrix Element Method , and which can be performed by tools such as MadWeight  is not necessary. In Section [sec:mc], we will provide a brief overview of MC simulation in particle physics. It is not our intent to provide a review (see, e.g., Refs. ). We wish only to remind the reader that a set of unweighted hard-process parton-level events $\bm{x}\_{i,{\rm true}}$ can be used to generate a set of unweighted events after showering, hadronization, detector simulation, object reconstruction, etc., which are given by $$\label{eq:T} \bm{x}\_{i,{\rm objects}} = T(\bm{x}\_{i,{\rm objects}}, \bm{x}\_{i,{\rm true}}) \bm{x}\_{i,{\rm true}}.$$ The important points about the generalized transfer function *T*, which incorporates the vast amount of physics briefly summarized in Section [sec:mc], are 1. It does not depend on the new physics model or parameters. 2. It is unitary. Every hard-process event simulated corresponds to a detector-simulated event with objects reconstructed, even if, e.g., due to the failure to reconstruct the physics objects corresponding to some partons, it will not be included in our calculations of distributions for the relevant final state. 3. As *T* is unitary, if we take a set of unweighted events $\bm{x}\_{i,{\rm true}}$ and simulate each event, $\bm{x}\_{i,{\rm true}}$, or, in equivalent language, choose a random value of $\bm{x}\_{\rm objects}$ according to the probability distribution function (PDF), $T(\bm{x}\_{i,{\rm objects}}, \bm{x}\_{i,{\rm true}})$ for each $\bm{x}\_{i,{\rm true}}$, then we will obtain an unweighted set of detector-simulated and object-reconstructed events $\{\bm{x}\_{i,\rm{objects}}\}$. If the reader is willing to accept these facts about *T*, then Section [sec:mc] may be safely skipped. Reweighting itself is described in Section [sec:reweighting]. To demonstrate its use, we then provide a few examples of reweighting in action. In Section [sec:h4l] we study a signal model with a multi-dimensional parameter space, while in Section [sec:antler] we use reweighting to analyze angular correlations needed to determine the spins of the new particles. In Section [sec:clustering] we demonstrate that reweighting works even in the presence of showering, hadronization, jet clustering, and detector simulation. We present some useful notes for practitioners of our method in Section [sec:practical]. Section [sec:conclusions] is reserved for our conclusions. Monte Carlo Overview ==================== Monte Carlo Basics ------------------ The integral *F* of an arbitrary function *f*(*x*) over some interval (*x*−, *x*+) is: *F* ≡ *F*(*x*+) − *F*(*x*−) = ∫*x*−*x*+*d**x**f*(*x*) = ⟨*f*⟩∫*x*−*x*+*d**x* = ⟨*f*⟩(*x*+ − *x*−),  where *F*(*x*) is the antiderivative of *f*(*x*) and ⟨*f*⟩ is the average value of *f*(*x*). The Monte Carlo technique for estimating ⟨*f*⟩ is to select *N* values of *x**i* = *x*− + *r**i*(*x*+ − *x*−), where *r**i* is a uniform random variate in the range (0,1), and take the average of *f*(*x*) at these points: $$\langle f \rangle = \frac{1}{N} \sum\_{i=0}^{i=N-1} f(x\_i). \label{faverage}$$ The points *x**i* used to evaluate this sum can be viewed as a set of ``events" with weights *f*(*x**i*). To compute the MC average of a different function, *g*(*x*), we similarly have: $$\langle g \rangle = \frac{1}{N} \sum\_{i=0}^{i=N-1} g(x\_i) = \frac{1}{N} \sum\_{i=0}^{i=N-1} f(x\_i)\, \frac{g(x\_i)}{f(x\_i)}. \label{gaverage}$$ Importance sampling considers a change in the measure of sampling in order to reduce the variance of the MC estimate: $$\int\_{x\_{-}}^{x\_{+}} dx f(x) = \int\_{x\_{-}}^{x\_{+}} dx f(x) \frac{g(x)}{g(x)}= \int\_{x\_{-}}^{x\_{+}} dx g(x) \frac{f(x)}{g(x)}\equiv \left\langle \frac{f}{g} \right\rangle\_{g} \int\_{x\_{-}}^{x\_{+}} g(x) dx. \label{impsamp}$$ In many applications, *g*(*x*) can be integrated analytically and has an antiderivative *G* with a inverse *G*− 1. Then *x**i* = *G*− 1(*r**i* + *G*(*x*−)). From ([faverage]) and ([impsamp]) it follows that: $$\left\langle \frac{f}{g} \right\rangle\_{g} = \frac{1}{N} \sum\_{i=0}^{i=N-1} \left.\frac{f(x\_i)}{g(x\_i)}\right\vert\_{x\_i = G^{-1}(r\_i + G(x\_{-}))},$$ where we emphasize that the average is with respect to the function *g*. It is not necessary to have a closed form for *g*, *G* or *G*− 1, and this will be the case for the practical application of these formulas discussed later. In fact, the selection of integration points from *g*(*x*) may be performed numerically using, for example, VEGAS. As a trivial example, assume a set of *N* unweighted events generated according to *g*(*x*), so that each event has a weight *σ*/*N*. Then, choosing *f*(*x*) = *g*(*x*), we have: $$\langle g \rangle = \left\langle 1 \right\rangle \int\_{x\_{-}}^{x\_{+}} dx g(x) \simeq \sum\_{i=0}^{N-1} \frac{\sigma}{N} = \sigma.$$ Even though the weights are uniform in this case, the values of *x**i* associated with each event are distributed according to *g*(*x*) and can be used to calculate averages or construct histograms of other quantities based on *x*. Parton-Level Event Generation ----------------------------- In the applications discussed here, we are interested in performing a reweighting of events at the parton-level. The parton-level is usually the first non-trivial level of approximation where physics beyond the Standard Model is needed. In the examples considered here, we focus on hadron-hadron collisions. The basic quantities of interest are the kinematic properties of events involving some new particles or interactions. We limit ourselves to processes of the type 1 + 2 → *i* + *j* + ⋯ + *n*. Then, the partonic cross section is proportional to $$f\_{1/h}(x\_1,Q^2) f\_{2/h}(x\_2,Q^2) \bigg\vert {\cal A}[1,2 \to i,j,\cdots,n] \bigg\vert^2,$$ where the amplitude for this process is $\cal A$. Imagine now, another process with the *exact* same initial and final state, but described by a different amplitude ${\cal A^\prime}$. The relative probability of this process with respect to the former is given simply by $|{\cal A^\prime}|^2/|{\cal A}|^2$. This parton-level description is often called the “hard-process” and describes the physics occurring at energies $Q \gg \Lambda\_{\rm QCD}$, or equivalently length/ time scales $\ll 1/\Lambda\_{\rm QCD}$. The expression for the partonic cross section can be cast in the form of a probability distribution that generates parton-level “events” or configurations of kinematic variables *x**i*. (In this heuristic description, we can think of e.g. the helicity and color of all final state partons as specified; in general, we need to generate all relevant final state helicities and color structures.) These events might be the output of a program such as MadGraph5 , CalcHEP , or CompHEP ; these programs can also provide code that can be used later in reweighting. Since our interest is in the reweighting of new physics signal events, the hard-processes we consider will involve some new physics model, *A*, described by some parameters $\bm{\alpha}$. We will term this “theory point” $(A,\bm{\alpha})$. Our goal is to obtain all cross sections and distributions for a different model *B*, for the parameter point, $\bm{\beta}$. Of course, it may be that we are scanning the parameter space of model *A*, in which case *B* = *A* and we are considering parameter points $\bm{\alpha}\_1, \bm{\alpha}\_2,...$. Also, for some processes, like Higgs boson production and decay, it may make sense for $(A,\bm{\alpha})$ to be the SM rather than a new physics model; we will then reweight to obtain distributions for the non-SM theory point $(B, \bm{\beta})$. Particle-Level Event Generation ------------------------------- For now we focus on perhaps the primary use of Monte Carlo event generators, which is to produce a sample of unweighted events of particles to be input to a realistic detector simulation. Generating events is important, because we are not interested only in the cross section, subject to arbitrary event selection (“cuts”), $$\label{eq:sigma} \sigma\_G = \int\_{\mathcal{D}\_G} \frac{d \sigma(A,\bm{\alpha}; \bm{x}\_{\rm true})}{d\bm{x}\_{\rm true}} d\bm{x}\_{\rm true},$$ but also in the kinematic distribution, *ρ*(*V*), for a kinematic variable of interest, $V(\bm{x}\_{\rm true})$, subject to certain cuts: $$\label{eq:rho} \rho(V^\prime) = \frac{1}{\sigma\_G} \int\_{\mathcal{D}\_G} \frac{d \sigma(A,\bm{\alpha}; \bm{x}\_{\rm true})}{d\bm{x}\_{\rm true}} \delta(V^\prime - V(\bm{x}\_{\rm true})) d\bm{x}\_{\rm true}.$$ Here D*G* is the space of events ($\bm{x}\_i$) which pass cuts. These are not the final, detector-level cuts. However, cuts are often applied at this stage, either to provide a cut-off for infrared divergent quantities or simply to reduce the time spent on detector-simulation of events that will not pass the triggers or the final, detector-level cuts. If we generate a sample of *N* unweighted events $\{\bm{x}\_i\}$ in the allowed region, and, in the course of generating the events determine the total cross section for the specified process to be $\sigma\_{G, {\rm MC}}$, then, of course, we can approximate the cross section after cuts by $\sigma\_G \approx \sigma\_{G, {\rm MC}}$, while the normalized distribution of an arbitrary kinematic variable, *ρ* (as in Eq. ([eq:rho])) can be approximated by forming the histogram $$\label{eq:rho-mc} \int\_{V\_{\rm min}}^{V\_{\rm max}} \rho(V) \, dV \approx \frac{1}{N} \sum\_i \left\{ \theta(V(\bm{x}\_i) - V\_{\rm min}) - \theta( V(\bm{x}\_i) - V\_{\rm max} ) \right\},$$ where $(V\_{\rm min},V\_{\rm max})$ is a bin interval. Note that $\theta(V(\bm{x}\_i) - V\_{\rm min}) - \theta( V(\bm{x}\_i) - V\_{\rm max})$ is 1 if $V\_{\rm min} < V(\bm{x}\_i) < V\_{\rm max}$ and 0 otherwise. The generalization of Eq. ([eq:rho]) and Eq. ([eq:rho-mc]) to multidimensional observables is obvious. The cross section in Eq. ([eq:sigma]) and the distributions *ρ*(*V*) in Eq. ([eq:rho]) and Eq. ([eq:rho-mc]) have been calculated considering only the physics from the hard process. To obtain accurate distributions for quantities measured in the detector, we must utilize general purpose MC generators and/or showering tools like ARIADNE, HERWIG, HERWIG++, PYTHIA 6, PYTHIA 8, SHERPA, or VINCIA. With these tools we must simulate initial and final state radiation (ISR and FSR), decay resonances with sufficiently short life times, and hadronize colored objects. While modeling the hadronic physics we must also consider, e.g., the physics of the underlying event. Formally, we can think of events at each level of the simulation as living in different spaces. Thus the initial, unweighted, hard-process parton-level event undergoes a transformation: $$\begin{aligned} \begin{CD} \bm{x}\_{i, {\rm true}} @>>{S:~\rm showering}> \bm{x}\_{i, {\rm showered}} @>>{H:~\rm hadronization}> \bm{x}\_{i, {\rm hadron}} \end{CD}.\end{aligned}$$ We note that the dimensions of each $\bm{x}\_i$ vector are in general quite different. Showering adds particles to the event via ISR and FSR, and decays replace resonances with two or more daughter particles. Also, colored partons, either before or after showering and decays, are obviously not in one-to-one correspondence with final state hadrons. The mappings *S* and *H* are unitary (provided we do not impose additional cuts at this stage). In the case of these QCD processes, this ability to separate physics at different length scales (and to consider the composition of probabilities rather than amplitudes) results both from factorization and from more specific results like the KLN theorem  (see e.g., Refs.  and  and references therein for more discussion of these points). Thus, for a particular event $\bm{x}\_{i,\rm{true}}$, $$\int d \bm{x}\_{\rm showered} S(\bm{x}\_{\rm showered}, \bm{x}\_{i,\rm{true}}) = 1.$$ Likewise, for some particular showered event $\bm{x}\_{i,~\rm{showered}}$ we would have, $$\int d \bm{x}\_{\rm hadron} H(\bm{x}\_{\rm hadron}, \bm{x}\_{i,\rm showered}) = 1.$$ Obviously, this implies that $$\int d \bm{x}\_{\rm showered} d\bm{x}\_{\rm hadron} H(\bm{x}\_{\rm hadron}, \bm{x}\_{\rm showered}) S(\bm{x}\_{\rm showered}, \bm{x}\_{i,\rm{true}} ) = 1.$$ It is clear that if we start with a set of unweighted hard-process generator-level events $\{\bm{x}\_{i,{\rm true}}\}$, and then, for each $\bm{x}\_{i,{\rm true}}$ choose a random value for $\bm{x}\_{i,\rm showered}$ according to the PDF $S(\bm{x}\_{\rm showered}, \bm{x}\_{i,\rm{true}})$ corresponding to $\bm{x}\_{i,\rm{true}}$, then we will obtain a set of unweighted, showered events $\{\bm{x}\_{i,{\rm showered}}\}$. We can obviously continue this procedure by choosing, $\bm{x}\_{i,\rm{hadron}}$ according the the PDFs $H(\bm{x}\_{\rm hadron}, \bm{x}\_{i,\rm{showered}})$, thereby obtaining a set of unweighted, hadron-level events $\{\bm{x}\_{i,{\rm hadron}}\}$. That we perform this random selection according to the PDFs $S(\bm{x}\_{\rm showered}, \bm{x}\_{i,\rm{true}})$ and $H(\bm{x}\_{\rm hadron}, \bm{x}\_{i,\rm{showered}})$ using a simulation program such as HERWIG or PYTHIA does not affect the argument. We remind the reader that only the generation of the initial set of hard-process parton-level events $\{\bm{x}\_{i,\rm{true}}\}$ had anything to do with the new physics theory point $(A, \bm{\alpha})$. The subsequent showering and hadronization depends only on the hard process event $\bm{x}\_{\rm true}$ (by which we include both the four momenta of the particles and the particular color structure which was generated) and SM parameters, such as *α**S*(*Q*). Detector-level Monte Carlo Simulation and Reconstruction -------------------------------------------------------- In this subsection we briefly describe the process of performing detector-level MC simulation, in order to explain why reweighting detector-level events based on the hard-process matrix elements works. Two further stages of event evolution, detector simulation and object reconstruction, also decompose into the product of PDFs: $$\begin{aligned} \begin{CD} \bm{x}\_{i, {\rm hadron}} @>>{D:~\rm detector~simulation}> \bm{x}\_{i, {\rm detector}} @>>{R:~\rm object~reconstruction}> \bm{x}\_{i, {\rm objects}} \end{CD}. \end{aligned}$$ Detector simulation refers to determining the tracks produced and energy deposited in various parts of the detector, using GEANT. So in general $\bm{x}\_{i, {\rm detector}}$ does not describe the four-momenta of particles, but the properties of tracks, energy in calorimeter cells or towers, etc. In order to do physics, we must take this raw detector output and map it into “physics objects”, such as “electrons”, “muons”, or “jets”. The reconstruction of jets, of course, is especially non-trivial; significant work has gone into the theoretical and experimental understanding of jet algorithms . For our purposes, it suffices to note that the standard simulation procedures are formally analogous to the showering and hadronization described in that they can be viewed as obtaining a set of unweighted detector-level events $\bm{x}\_{i, {\rm detector}}$ corresponding to hadron-level event $\bm{x}\_{i, {\rm hadron}}$, by choosing a random value of $\bm{x}\_{i, {\rm detector}}$ from the PDF $D(\bm{x}\_{\rm detector}, \bm{x}\_{i, {\rm hadron}})$ and obtaining an event consisting of reconstructed physics objects, $\bm{x}\_{i,\rm objects}$ from this detector event, $\bm{x}\_{i,\rm detector}$ using the PDF $R(\bm{x}\_{\rm objects}, \bm{x}\_{i,\rm detector})$. In fact, we can compose the four PDFs, *S*, *H*, *D*, and *R* into the generalized-transfer function *T* from Eq. [eq:T], which, as a product of normalized PDFs (as each step is unitary), is also a normalized PDF. While we did not discuss MC tuning , or pileup effects explicitly, we note that these effects contribute only to $T(\bm{x}\_{\rm objects}, \bm{x}\_{\rm true})$ and neither depend on the new physics parameters $(A, \bm{\alpha})$ nor change the interpretation of $T(\bm{x}\_{\rm objects}, \bm{x}\_{\rm true})$ as a normalized probability distribution. We note also that procedures like CKKW  or MLM  matching, which attempt to combine aspects of the parton shower with event generation from matrix elements do not change the overall thrust of our argument. At most, we must modify our reweighting procedure to use events $\bm{x}\_{i,{\rm matched}}$ and generalized transfer function $T(\bm{x}\_{i,{\rm objects}}, \bm{x}\_{i,{\rm matched}})$ in Eq. ([eq:T]); these modifications, and the associated modifications to the reweighting procedure for these events, are relatively straightforward. Reweighting =========== Given a set of unweighted hard-process events, $\bm{x}\_{i,~\rm{true}}$, generated with, e.g., MadGraph, and following the facts about the generalized transfer function *T* from Eq. ([eq:T]) listed in Section [sec:intro] and/or the discussion in Section [sec:mc], we can produce a set of unweighted events $\bm{x}\_{i,\rm{objects}}$ through MC simulation that correspond to the hard-process events $\bm{x}\_{i,\rm{true}}$. These events can be used to obtain detector-level cross sections and distributions, in a manner exactly analogous to that specified above (c.f. Eq. ([eq:rho-mc])); namely if $N\_{\rm detector}$ of the $N\_{\rm generator}$ events which we simulate satisfy our cuts on our detector-simulated reconstructed physics objects, then our cross section (times efficiencies and acceptances) is given by $$\label{eq:sigma-mc-detector} \sigma\_D = \sigma\_G \frac{N\_{\rm detector}}{N\_{\rm generator}}.$$ The content of the bin in the histogram for an arbitrary kinematic variable *V* containing values in the range $[V\_{\rm min}, V\_{\rm max}]$ is given by $$\label{eq:rho-mc-detector} \int\_{V\_{\rm min}}^{V\_{\rm max}} \rho(V)\, dV \approx \frac{1}{N} \sum\_i \left\{ \theta(V(\bm{x}\_{i, \rm objects}) - V\_{\rm min}) - \theta( V(\bm{x}\_{i, \rm objects}) - V\_{\rm max})\right\}.$$ Of course, these detector-level events correspond to the generator-level events $\{\bm{x}\_{i,\rm true}\}$, which were generated using the new physics theory point $(A, \bm{\alpha})$. This raises the question: what if we now wish to obtain detector-level cross section and observables for a different new physics model theory point, which we label $(B, \bm{\beta})$? Obviously we could simply perform the procedure described above on a new set of unweighted, parton-level events generated for $(B,\bm{\beta})$. This approach, however, can become impractical very quickly, as fullsim detector simulation is very slow (rates on the order of minutes per event are typical) relative to the rest of the simulation process. Instead, we note that the Eqs. ([eq:sigma-mc-detector]) and ([eq:rho-mc-detector]) can be thought of as the MC evaluation of cross sections or histograms using the differential cross section for $\bm{x}\_{\rm objects}$, i.e. $$\label{eq:P-A} \frac{d\sigma(A,\bm{\alpha}; \bm{x}\_{\rm objects})}{d \bm{x}\_{\rm objects}} = \int T(\bm{x}\_{\rm objects}, \bm{x}\_{\rm true}) \frac{d\sigma(A,\bm{\alpha}; \bm{x}\_{\rm true})}{d \bm{x}\_{\rm true}} d\bm{x}\_{\rm true}.$$ If we replace the theory point $(A,\bm{\alpha})$ with $(B,\bm{\beta})$, we have instead $$\begin{aligned} \label{eq:P-B} \frac{d\sigma(B,\bm{\beta}; \bm{x}\_{\rm objects})}{d \bm{x}\_{\rm objects}} & = & \int T(\bm{x}\_{\rm objects}, \bm{x}\_{\rm true}) \frac{d\sigma(B,\bm{\beta}; \bm{x}\_{\rm true})}{d \bm{x}\_{\rm true}} d\bm{x}\_{\rm true} \\ \nonumber &=& \int T(\bm{x}\_{\rm objects}, \bm{x}\_{\rm true}) \frac{d\sigma(A,\bm{\alpha}; \bm{x}\_{\rm true})}{d \bm{x}\_{\rm true}} \, R(A,\bm{\alpha},B,\bm{\beta}) \, d\bm{x}\_{\rm true},\end{aligned}$$ with $$R(A,\bm{\alpha},B,\bm{\beta}) \equiv \bigg( \frac{d\sigma(B,\bm{\beta}; \bm{x}\_{\rm true})}{d \bm{x}\_{\rm true}} \bigg/ \frac{d\sigma(A,\bm{\alpha}; \bm{x}\_{\rm true})}{d \bm{x}\_{\rm true}} \bigg). \label{eq:R}$$ When we, e.g., perform the integral in Eq. ([eq:rho-mc-detector]) by generating unweighted events for the hard-process and then simulating, we are really replacing some region of $\bm{x}\_{\rm true}$ space, with volume *V**i*, with the event $\bm{x}\_{i, \rm true}$. The weight of this region in the calculation of quantities, is, approximately $V\_i \frac{d\sigma(A,\bm{\alpha}; \bm{x}\_{\rm true})}{d \bm{x}\_{\rm true}}$. (These weights are equal for unweighted events, up to statistical fluctuations. *V**i* is small when the differential cross section is large; equivalently the number of events selected in a region of phase space is proportional, in the limit of large statistics, to the differential cross section in that region of phase space.) When we then perform an integral like that in Eq. ([eq:rho-mc-detector]) using MC methods, we therefore must replace the weights: $$\begin{aligned} (V\_i) \frac{d\sigma(A,\bm{\alpha}; \bm{x}\_{\rm true})}{d \bm{x}\_{\rm true}} \to (V\_i) \frac{d\sigma(B,\bm{\beta}; \bm{x}\_{\rm true})}{d \bm{x}\_{\rm true}} \equiv (V\_i) \frac{d\sigma(A,\bm{\alpha}; \bm{x}\_{\rm true})}{d \bm{x}\_{\rm true}} R(A,\bm{\alpha},B,\bm{\beta}).\end{aligned}$$ At the level of MC events, this replacement corresponds to weighting each event by the ratio of hard-process differential cross sections ([eq:R]). With this procedure we obtain, for cross sections $$\label{eq:sigma-mc-detector-weighted} \sigma\_D(B,\bm{\beta}) =\sigma\_G(A,\bm{\alpha}) \, \frac{1}{N\_{\rm generator}} \sum\_{i, \rm (accepted)} R(A,\bm{\alpha},B,\bm{\beta}).$$ and for histograms $$\begin{aligned} \label{eq:rho-mc-detector-weighted} \int\limits\_{V\_{\rm min}}^{V\_{\rm max}} \!\!\!\! \rho(V) dV \approx \!\!\!\!\!\! \sum\limits\_{i, \rm accepted} \!\!\!\! \frac{R(A,\bm{\alpha},B,\bm{\beta})}{N\_{\rm generator}} \biggl[\theta(V(\bm{x}\_{i, \rm objects}) - V\_{\rm min}) - \theta(V(\bm{x}\_{i, \rm objects})- V\_{\rm max} )\biggr]. \end{aligned}$$ Applications ============ ![image](FIG/reweighting) In this section, we demonstrate the use of reweighting methods through three example analyses. We also discuss the bin-by-bin errors in histograms obtained by reweighting. In Subsection [sec:h4l] we will show how to reweight events to study different coupling structures for Higgs to four-lepton events. This will be followed, in Subsection [sec:antler], by an investigation of reweighting to obtain distributions for a UED model from MC samples generated for a SUSY model. Finally, in Subsection [sec:clustering] we will study the effects of showering, hadronization, (fast) detector simulation, and jet reconstruction. Especially for the benefit of readers who may have skipped Section [sec:mc], in Fig. [fig:reweight] we present a cartoon which describes the reweighting procedure. Essentially we can obtain distributions in some physics model *J* by reweighting MC events generated for a model *I*. The reweighting factor ([eq:R]) is given by the ratio of differential cross sections evaluated for the “truth”-level parton-level MC event in each model. In the limit where model *I* and model *J* are produced from the same initial partons and have the same masses for final state particles, the parton distribution functions and phase space factors in this ratio cancel, and we are left with the ratio of squared matrix elements, as described in the caption to Fig. [fig:reweight]. The different colorations of the cartoons representing model *I* and model *J* in Fig. [fig:reweight] represent the different distributions of the squared matrix elements over the common phase space in the two models. In the examples provided, the factors used for reweighting were obtained using the “standalone” matrix element calculating code which can be generated automatically, in either Fortran or C++, from MadGraph5 . This code requires the external parton momenta, which should be provided by a short, user-supplied, “wrapper” code. One can also generate similar standalone code in C or Mathematica using CalcHEP or CompHEP . We encourage the authors of these and other similar automatic matrix element calculators to further increase the user-friendliness of their tools for the purpose of reweighting. Before proceeding, we pause to present a caveat about the sorts of models one can study using reweighting. Obviously we can only use a reweighting procedure if the event being reweighted is possible in both models. So the final state particles must have the same masses in both models. Additionally, when one or both models contain intermediate resonances, it is important that those resonances, if sufficiently narrow, have the same masses in each model. Otherwise, the resulting extreme differences in the density of events in the phase space of the two models can lead to undersampling of important regions of phase space. However, this is not, in principle, an insurmountable difficulty, and several practical approaches for dealing with undersampling will be discussed in Section [sec:practical]. Changing the Coupling Structure: Higgs to Four Leptons ------------------------------------------------------ ![image](FIG/feynman_diagrams_1) The “golden” Higgs to four-lepton channel (see Fig. [fig:golden]) is quite sensitive, both for the discovery and for the subsequent measurement of the spin and parity properties of the Higgs boson , and therefore plays an important role in the experimental study of the Higgs at the LHC . Assuming that the putative Higgs is spin-zero, we can then characterize its couplings to *Z* bosons, following , using the Lagrangian $$\begin{aligned} \mathcal{L} \supset \sum\_{i=1}^5 \kappa\_i \mathcal{O}\_i = && -\kappa\_1 \frac{M\_Z^2}{v} H Z\_\mu Z^\mu -\frac{\kappa\_2}{2 v} H F\_{\mu\nu} F^{\mu\nu} -\frac{\kappa\_3}{2 v} H F\_{\mu\nu} \tilde{F}^{\mu\nu} \nonumber \\ &&+\frac{\kappa\_4 M\_Z^2}{M\_X^2 v} \Box H Z\_\mu Z^\mu +\frac{2\kappa\_5}{v} H Z\_\mu \square Z^\mu. \label{Lag}\end{aligned}$$ Therefore, in general, the *H**Z**Z* couplings involve five parameters (up to lowest non-trivial dimension). Following the “Geolocating” approach , we can remove the degree of freedom associated with the partial width, leaving us with a “sphere” of four-dimensions. To explore this space fully, we may need to employ reweighting procedures, such as those described here. As an example of such analyses, in Fig. [fig:MZ2] we study the distribution of the smaller dilepton invariant mass[2](#fn2) *M**Z*2 = min(*M**e*+*e*−, *M**μ*+*μ*−)  in the 2*e*2*μ* final state. In each of the three panels in Fig. [fig:MZ2], the solid blue histogram is always the normalized *M**Z*2 distribution for the pure *κ*5 case, i.e. when *κ*5 ≠ 0 and all other *κ**i* (*i* = 1, …, 4) are zero. On the other hand, the solid magenta histograms correspond to the case when only one of the other *κ**i* couplings is turned on: *κ*1 ≠ 0 in panel (a), *κ*2 ≠ 0 in panel (b), and *κ*3 ≠ 0 in panel (c). Our reweighting procedure now allows us to obtain the shape of the solid blue histogram by reweighting the corresponding magenta plot. The results are shown by the dotted red lines in Fig. [fig:MZ2]. By comparing the solid blue and dotted red distributions, we observe a very good match, which validates our procedure. However, it is not sufficient to show that the central bin values for each histogram are reproduced by the reweighting procedure. To perform statistical analyses using the reweighted histograms, we need to understand the error (or uncertainty) on the histogram bin values. Following Ref. , if there are *N* events in a bin, each of which has weight *w**k*, then the value of that bin in the reweighted histogram is *T* = ∑*w**k* ≡ *N*⟨*w*⟩. The error on *T* is given by $$\label{eq:delta} \delta = \sqrt{\sum w\_k^2} = \sqrt{N} \sqrt{\langle w^2 \rangle}.$$ Clearly, in the special case of an unweighted histogram, *δ* yields $\sqrt{N}$ which reproduces the well-known expression from Poisson statistics. Combining Eqs. ([eq:T2]) and ([eq:delta]), and writing ⟨*w*2⟩ in terms of the variance on the weights, *σ**w*2 = ⟨*w*2⟩ − ⟨*w*⟩2,  we find the fractional error on the bin value in a weighted histogram to be given by $$\label{eq:weighted-error} \frac{\delta}{T} = \frac{1}{\sqrt{N}} \sqrt{1 + \frac{\sigma\_w^2}{\langle w \rangle^2}}.$$ For the case of interest to us, the *w**k* are the reweighting factors $R(A,\bm{\alpha},B,\bm{\beta})$ used to reweight events generated for model *A*, in order to obtain a histogram for model *B*. (Note that the value of *N* is the same for the two models.) Since in general the reweighting factor varies from event to event, the variance ([eq:variance]) is nonzero. Thus, when we reweight, the statistical error increases, and Eq. ([eq:weighted-error]) quantifies this effect. To provide a concrete demonstration of these effects, in Fig. [fig:error](a-c) we show the fractional error, as calculated using Eq. ([eq:weighted-error]), on each bin of the corresponding histograms from Fig. [fig:MZ2](a-c). In Fig. [fig:error], solid magenta lines denote the fractional error (which scales as $1/\sqrt{N}$) in the original model with unweighted events, while the red dotted lines show the error on the corresponding histogram obtained from the same events using reweighting. In accordance with Eq. ([eq:weighted-error]), the error is in general greater for the reweighted histogram, but not always — e.g., in Fig. [fig:error](a), the errors before and after reweighting are essentially the same. To better illustrate the origin of the errors after reweighting, in Fig. [fig:R] we provide two dimensional temperature plots of the reweighting factor, *R*, and the corresponding value of *M**Z*2 for each event. We see a clear correlation between the spread in the values of *R* and the magnitude of the increase in the fractional errors in Fig. [fig:error]. At this point, one might wonder how the errors on histograms which were reweighted from an initial model *A* to another model *B* compare to the errors on histograms built from unweighted events generated directly in model *B*. Since the error on the bin values in the unweighted histogram is $1/\sqrt{T}$, from Eqs. ([eq:T2]) and ([eq:weighted-error]) we get $$\label{eq:reweighting-error} \bigg(\frac{\delta}{T}\bigg) / \bigg(\frac{1}{\sqrt{T}}\bigg) = \sqrt{\langle w \rangle} \sqrt{1 + \frac{\sigma\_w^2}{\langle w \rangle^2}},$$ where we assume sufficiently large statistics, so that the number of unweighted events is close to the expected value *T*. We see that the second factor in Eq. ([eq:reweighting-error]) always leads to an increase in the error when reweighting, in agreement with Fig. [fig:error]. On the other hand, the first factor, $\sqrt{\langle w \rangle}$, can be either larger or smaller than 1, depending on the relative weights between models *A* and *B*. Using Eqs. ([eq:weighted-error]) and ([eq:reweighting-error]), one can quantify the error from reweighting in the region of interest. If that error turns out to be unacceptably large for the task at hand, one can apply the procedures discussed below in Sec. [sec:practical] to further reduce those errors. In Fig. [fig:MZ2] we presented distributions for the *M**Z*2 variable, which we studied further in Figs. [fig:error] and [fig:R]. However, we wish to emphasize that the variable whose distribution is obtained via reweighting could also be an optimized multivariate analysis-based variable such as the MELA KD , MEKD , or the output of a boosted decision tree or neural network analysis. Hence reweighting can be used in tandem with sophisticated and powerful multivariate analysis methods . Changing Spin Assignments: The Antler Topology ---------------------------------------------- ![image](FIG/feynman_diagrams_2) Extensive simplified model searches  have been performed by the ATLAS and CMS collaborations in various channels . When performing a simplified model analysis, one generally fixes the spins of the particles involved in the simplified event topology. One common choice is to use SUSY spin assignments, though another possibility is to decay all particles by pure phase space. It is important to be able to interpret the results from these searches in the context of other models, with different spin assignments. A particularly well motivated example of an alternative spin assignment is provided by the minimal UED model (MUED). To perform reinterpretations for different spin assignments, we need to recalculate cross sections, branching ratios, and efficiencies for the given study point in the new model. We can obtain cross sections and branching ratios for the new model point via theoretical formulae, but efficiencies generally must be obtained through MC simulation. We emphasize that we can recycle existing MC samples to determine these efficiencies for different new physics models in an efficient manner. In this subsection, we demonstrate the utilization of MC event samples generated for a SUSY model to perform a collider analysis of the MUED model, in the context of the “antler” topology depicted in Fig. [fig:antler]. In particular we show distributions of collider observables, which could be used to discriminate between SUSY and MUED . Numbering the leptons as 1 for the ℓ− and 2 for the ℓ+, these variables are: ![image](FIG/Energy_l) ![image](FIG/mll) * cos(*θ**B*): A variable calculated from the pseudorapidity difference between the two leptons , defined by $$\begin{aligned} \cos(\theta\_B) &=& \tanh\left(\frac{\Delta \eta\_{12}}{2}\right), \\ \Delta \eta\_{12} &=& \eta\_{1} -\eta\_{2}, \textrm{ with } \eta\_i = \frac{1}{2}\ln\left(\frac{P\_i+P\_{iz}}{P\_i -P\_{iz}}\right).\end{aligned}$$ * Δ*ϕ*: The azimuthal angular difference of the visible particles, $$\Delta \phi = \cos^{-1}\left(\frac{\vec P\_{1T} \cdot \vec P\_{2T}} {|\vec P\_{1T}|\,|\vec P\_{2T}|} \right).$$ * *E*ℓ: The energy of a lepton *E*ℓ. * *M*ℓ−ℓ+: The invariant mass of the two leptons. * $\sqrt{\hat s}\_{\min}$: An estimator of the mass scale of the hard scattering, proposed in , calculated for the antler event topology assuming a particular value for the $\tilde \chi\_1^0$ mass. Since in practice we would scan over this “trial” mass, for definiteness we use the correct or “truth” value of this quantity in the analyses presented here. The definition of $\sqrt{\hat s}\_{\min}$ is $$\sqrt{\hat s}\_{\min} = \sqrt{M\_{\ell^-\ell^+}^2+|\vec P\_{1T}+\vec P\_{2T}|^2}+\sqrt{4 m\_{\tilde \chi\_1^0}^2+{\not \!\! E\_T}^2}\,.$$ * $\sqrt{\hat s}$: The actual partonic center-of-mass energy. Distributions of the above observables are shown in Fig. [fig:antOBS]. For the various quantities, we show 1. (solid magenta lines) The distribution of the quantity obtained from MC simulation of the CMS LM6  SUSY parameter point, where the mass of the right-handed scalar lepton, $m\_{\tilde \ell\_R}$ is 176.62 GeV and the neutralino mass $m\_{\tilde \chi\_1^0}$ is 158.18 GeV. 2. (solid blue lines) The distribution of the quantity obtained from MC simulation of a MUED parameter point where the mass of the Kaluza-Klein (KK) lepton is 176.62 GeV, and the mass of the KK photon is 158.18 GeV, matching the masses of the SUSY particles in the LM6 scenario. 3. (dotted red lines) The distribution of the quantity that one would obtain for the MUED parameter point, evaluated by the reweighting of events generated for the SUSY parameter point. The MC samples used to generate these distributions consist of 100, 000 parton level events at the 14 TeV LHC. Fig. [fig:antOBS] suggests that MUED analyses can in fact be performed by reweighting SUSY MC events, allowing the efficient probing of non-standard spin assignments at the LHC. One could extend this technique beyond MUED to all other possible spin assignments for a given event topology with given masses. In particular, one could set robust and model independent limits on new physics in the following way. Among all possible spin assignments (or more generally, parameter values), identify the case which is most difficult to test experimentally, and use it to present limits from experimental searches. The conservative bounds set in this way will be valid for all other spin assignments as well. Jet Clustering, Event Selection, and Detector Response ------------------------------------------------------ ![image](FIG/reweighting_Detector) Parton level information can be distorted by various factors, including showering, hadronization, jet clustering, and detector response. All of these can “deform” the phase space of generated particles. But, as noted in Section [sec:mc], the relative weights in various models for a given observed event depend only on the parton level amplitudes for the event in each model. We present a conceptual picture of this reweighting procedure in Fig. [fig:detector]. To illustrate this point numerically, we simulate $t \bar t$ production at 14 TeV LHC, where the top-quark (anti-top quark) decays into a *W*+ (*W*−) boson or charged Higgs *ϕ*+ (*ϕ*−), see Fig. [fig:ttbar]. Our goal is to evaluate the performance of the reweighting procedure described in Fig. [fig:detector], including effects that we did not account for in the parton-level examples described in Sections [sec:h4l] and [sec:antler]. Specifically, we generated 100, 000 parton level events with MadGraph5(aMC) v.2.1.0, passed the events to PYTHIA 6.4, and performed detector simulation with Delphes v.3.0.12. For jet clustering, we used the anti-*k**t* algorithm with Δ*R* = 0.5. For the reconstruction of visible particles, we employ default smearing factors and lepton tagging efficiencies. We apply basic analysis cuts with *p**T*(*e*) > 10 GeV, *p**T*(*μ*) > 5 GeV and *p**T*(*j*) > 30 GeV and do not require *b*-tagging in our analysis. ![image](FIG/feynman_diagrams_3) When the top quark decays into a bottom quark and a charged Higgs boson, the invariant mass distribution of the bottom quark and the corresponding lepton from the charged Higgs boson decay will have a triangular shape (the solid blue line in Fig. [fig:invDET](a)).[3](#fn3) On the other hand, the invariant mass distribution of the *b*-quark and the corresponding lepton from a *W*± decay will have a non-trivial shape (solid magenta lines in Fig. [fig:invDET](a)). These two cases can be easily related by reweighting at the parton level, as shown by the red dotted histogram in Fig. [fig:invDET](a). At the detector level, the *b*-quark is reconstructed as a jet, thus we select the two highest *p**T* jets to be our *b* jets. In order to study the effects of combinatorics separately, in Fig. [fig:invDET](b) we show an intermediate (and unphysical) case where we use MC truth information to select the correct jet to be paired with the muon. We see that the reweighting procedure correctly reproduces the distributions of kinematic variables computed in terms of reconstructed objects. Finally, in Fig. [fig:invDET](c) we show the same three distributions, only this time ignoring the MC truth information and fully accounting for the combinatorics. In order to reduce the contamination from combinatorial pairing errors, we use the mixed event subtraction method , where we add both possible pairings in a given event, then subtract a pairing of the muon with a jet from a different event. We emphasize that the point of these analyses is that a distribution made through reweighting is identical to the actual distribution made with a MC events generated for the hypothesis in question. Possible extensions =================== As noted in Section [sec:applications] above, reweighting events generated using model *I* to obtain distributions in model *J* may become problematic for two main reasons. Firstly, in a phase space region where the cross section for the new model *J* is high, while the cross-section for the reference model *I* is low, the region is sampled by relatively few MC events. Secondly, the spread in the reweighting factors for events in the given bin may become large. Both of these effects are described by Eq. [eq:weighted-error]. We are obviously interested in minimizing the errors on histograms, but at the same time, we want to keep as low as possible the number of events which need to be processed by full MC simulation. We therefore suggest three practical procedures that can be used to mitigate issues related both to undersampling and to the spread on reweighting factors, and possibly to help understand systematic effects. 1. **Targeted Generation:** In this procedure, whenever the error in a reweighted histogram obtained for model *J* becomes greater than a certain threshold, one generates additional new events in the undersampled region that are phase space “neighbors” of the events already in the bin. The simplest way to think of this is to divide phase space P into two non-overlapping regions, N and $\overline{\mathcal{N}}$, where N describes a phase space region where additional event generation is desired, and $\overline{\mathcal{N}}$ is the rest of the phase space. Then we generate events for model *J* only in region N. Distributions for *J* will then be obtained by reweighting the events generated for model *I* in the $\overline{\mathcal{N}}$ and for events generated for model *J* in the region N.[4](#fn4) If one then wishes to obtain distributions for a third model, *K*, one will reweight events generated for model *I* in region $\overline{\mathcal{N}}$ and the events generated for model *J* in region N, in each case using a reweighting factor involving the appropriate models. 2. **Regions of Overlap:** A related approach is to generate large samples of unweighted events at several benchmarks in parameter space, which we label *I*1, *I*2, etc. One then obtains distributions *D*1, *D*2, etc. for model *J* by reweighting events generated for model *I*1, *I*2, etc.[5](#fn5) If the benchmarks are chosen appropriately, every important region in phase space for model *J* can be described with low errors by reweighted events generated for some model *I**j*, hence avoiding the undersampling issues described above. Also, the comparison of the values of the distribution obtained from different choices of unweighted events may allow for the calculation of systematic errors from this procedure. 3. **Optimized Sample:** A third, and quite distinct approach to optimizing reweighting procedures for large parameter space is to optimize the choice of model *I* to avoid undersampling in as much of the parameter space as possible. In general, this would be performed by maximizing an integral over the parameter space (possibly weighted by some prior or importance measure) of a function representing the errors that will be obtained on bins of the desired histograms. This appears to be a hard integral to perform in practice, so reasonable approaches will probably involve approximation to some degree. It is also unclear whether even an optimized model would be as efficient as generating events for at least the several benchmarks in parameter space described in the previous item. Conclusions =========== We have presented and described in detail a procedure aimed at surmounting the experimental challenges presented by the multiplicity of theoretical models, which may each, in turn, have large parameter spaces. We realize that the procedure presented here is not totally unknown to experimentalists (particularly in the special case of parton distribution function reweighting ). However, given the importance of this challenge, we felt it important to highlight the potential of this method, especially for the study of large BSM parameter spaces. Specifically, we have shown how reweighting events using “truth” information from generator-level Monte Carlo events, including the momenta of invisible particles, makes possible the detailed study of large signal parameter spaces and/or large numbers of signal models at the level of detail needed for experimental purposes. We demonstrated this in several motivated physics examples and also illustrated potential issues which can arise, including an explicit discussion of the errors on the weighted histograms generated from reweighting. There are several advantages of our method: * Given a large existing sample for some theory model *I*, we gain speed in generating an effective fullsim sample for another theory model *J* by avoiding detector simulation, hadronization, showering and fragmentation. * All existing fullsim samples can be resurrected for the study of new models. * The reweighting is computationally very quick and simple - for example, there is no need to integrate over the momenta of any invisible particles. * Reweighting also allows the interactions between theorists and experimentalists to be maximally efficient - the experimentalists are handling the fullsim MC generation, while the theorists are providing only the parton-level functions for reweighting the events. We conclude with the simple recommendation that in order to be able to implement the procedure described here, when generating fullsim samples, one should make sure to record and store the generator-level information which is needed for reweighting. JG, JL, KM, and SM thank their CMS colleagues for useful discussions, and in particular, G. Hamel de Monchenault for useful comments on a draft of this manuscript. MP appreciates a useful discussion with J. Wacker on the application of a matrix element reweighting method for simplified model searches at the “Coordinating a Simplified Models Effort” workshop, CERN 2013. All authors thank the anonymous referee for useful comments, especially on the treatment of uncertainties in weighted histograms and on using this technique to obtain model-independent limits. MP is supported by the World Premier International Research Center Initiative (WPI Initiative), MEXT, Japan. Work supported in part by U.S. Department of Energy Grant ER41990. Fermilab is operated by the Fermi Research Alliance under contract DE-AC02-07CH11359 with the U.S. Department of Energy. 99 G. Cowan, “Monte Carlo Techniques,” in Ref. . P. Nason and P.Z. Skands, “Monte Carlo Event Generators”, in Ref. . J. Beringer *et al.* [Particle Data Group Collaboration], “Review of Particle Physics (RPP),” Phys. Rev. D **86**, 010001 (2012). A. Buckley, J. Butterworth, S. Gieseke, D. Grellscheid, S. Hoche, H. Hoeth, F. Krauss and L. Lonnblad *et al.*, “General-purpose event generators for LHC physics,” Phys. Rept.  **504**, 145 (2011) [arXiv:1101.2599 [hep-ph]]. PGS 4, <physics.ucdavis.edu/~conway/research/software/pgs/pgs4-general.htm>. J. de Favereau *et al.* [DELPHES 3 Collaboration], “DELPHES 3, A modular framework for fast simulation of a generic collider experiment,” JHEP **1402**, 057 (2014) [arXiv:1307.6346 [hep-ex]]. S. Agostinelli *et al.* [GEANT4 Collaboration], “GEANT4: A Simulation toolkit,” Nucl. Instrum. Meth. A **506**, 250 (2003). G. F. Giudice and R. Rattazzi, “Theories with gauge mediated supersymmetry breaking,” Phys. Rept.  **322**, 419 (1999) [hep-ph/9801271]. L. M. Carpenter, M. Dine, G. Festuccia and J. D. Mason, “Implementing General Gauge Mediation,” Phys. Rev. D **79**, 035002 (2009) [arXiv:0805.2944 [hep-ph]]. K. A. Intriligator and M. Sudano, “Comments on General Gauge Mediation,” JHEP **0811**, 008 (2008) [arXiv:0807.3942 [hep-ph]]. P. Meade, N. Seiberg and D. Shih, Prog. Theor. Phys. Suppl. **177**, 143 (2009) [arXiv:0801.3278 [hep-ph]]. L. M. Carpenter, “Surveying the Phenomenology of General Gauge Mediation,” arXiv:0812.2051 [hep-ph]. M. Buican, P. Meade, N. Seiberg and D. Shih, JHEP **0903**, 016 (2009) [arXiv:0812.3668 [hep-ph]]. N. Craig, S. Knapen and D. Shih, “General Messenger Higgs Mediation,” JHEP **1308**, 118 (2013) [arXiv:1302.2642]. C. F. Berger, J. S. Gainer, J. L. Hewett and T. G. Rizzo, “Supersymmetry Without Prejudice,” JHEP **0902**, 023 (2009) [arXiv:0812.0980 [hep-ph]]. R. C. Cotta, J. S. Gainer, J. L. Hewett and T. G. Rizzo, “Dark Matter in the MSSM,” New J. Phys.  **11**, 105026 (2009) [arXiv:0903.4409 [hep-ph]]. S. S. AbdusSalam, B. C. Allanach, F. Quevedo, F. Feroz and M. Hobson, “Fitting the Phenomenological MSSM,” Phys. Rev. D **81**, 095012 (2010) [arXiv:0904.2548 [hep-ph]]. S. Sekmen, S. Kraml, J. Lykken, F. Moortgat, S. Padhi, L. Pape, M. Pierini and H. B. Prosper *et al.*, “Interpreting LHC SUSY searches in the phenomenological MSSM,” JHEP **1202**, 075 (2012) [arXiv:1109.5119 [hep-ph]]. A. Arbey, M. Battaglia and F. Mahmoudi, “Implications of LHC Searches on SUSY Particle Spectra: The pMSSM Parameter Space with Neutralino Dark Matter,” Eur. Phys. J. C **72**, 1847 (2012) [arXiv:1110.3726 [hep-ph]]. CMS Collaboration [CMS Collaboration], “Phenomenological MSSM interpretation of the CMS 2011 5fb-1 results,” CMS-PAS-SUS-12-030. S. P. Martin, “A Supersymmetry primer,” In \*Kane, G.L. (ed.): Perspectives on supersymmetry II\* 1-153 [hep-ph/9709356]. M. Drees, R. Godbole and P. Roy, “Theory and phenomenology of sparticles: An account of four-dimensional N=1 supersymmetry in high energy physics,” Hackensack, USA: World Scientific (2004) 555 p H. Baer and X. Tata, “Weak scale supersymmetry: From superfields to scattering events,” Cambridge, UK: Univ. Pr. (2006) 537 p M. Dine, “Supersymmetry and string theory: Beyond the standard model,” Cambridge, UK: Cambridge Univ. Pr. (2007) 515 p K. G. Wilson and J. B. Kogut, “The Renormalization group and the epsilon expansion,” Phys. Rept.  **12**, 75 (1974). H. Georgi, “An Effective Field Theory for Heavy Quarks at Low-energies,” Phys. Lett. B **240**, 447 (1990). H. Georgi, “Effective field theory,” Ann. Rev. Nucl. Part. Sci.  **43**, 209 (1993). D. B. Kaplan, “Effective field theories,” nucl-th/9506035. A. V. Manohar, “Effective field theories,” In \*Schladming 1996, Perturbative and nonperturbative aspects of quantum field theory\* 311-362 [hep-ph/9606222]. G. D’Ambrosio, G. F. Giudice, G. Isidori and A. Strumia, “Minimal flavor violation: An Effective field theory approach,” Nucl. Phys. B **645**, 155 (2002) [hep-ph/0207036]. I. Z. Rothstein, “TASI lectures on effective field theories,” hep-ph/0308266. A. Rajaraman, W. Shepherd, T. M. P. Tait and A. M. Wijangco, “LHC Bounds on Interactions of Dark Matter,” Phys. Rev. D **84**, 095013 (2011) [arXiv:1108.1196 [hep-ph]]. T. Appelquist, H. -C. Cheng and B. A. Dobrescu, “Bounds on universal extra dimensions,” Phys. Rev. D **64**, 035002 (2001) [hep-ph/0012100]. H. -C. Cheng, K. T. Matchev and M. Schmaltz, “Bosonic supersymmetry? Getting fooled at the CERN LHC,” Phys. Rev. D **66**, 056006 (2002) [hep-ph/0205314]. K. Cranmer and I. Yavin, “RECAST: Extending the Impact of Existing Analyses,” JHEP **1104**, 038 (2011) [arXiv:1010.2506 [hep-ex]]. K. Rao and D. Whiteson, “Reinterpretion of Experimental Results with Basis Templates,” arXiv:1203.6642 [hep-ex]. M. Drees, H. Dreiner, D. Schmeier, J. Tattersall and J. S. Kim, “CheckMATE: Confronting your Favourite New Physics Model with LHC Data,” arXiv:1312.2591 [hep-ph]. M. Papucci, K. Sakurai, A. Weiler and L. Zeune, “Fastlim: a fast LHC limit calculator,” arXiv:1402.0492 [hep-ph]. J. Alwall, P. Schuster and N. Toro, “Simplified Models for a First Characterization of New Physics at the LHC,” Phys. Rev. D **79**, 075020 (2009) [arXiv:0810.3921 [hep-ph]]. D. Alves *et al.* [LHC New Physics Working Group Collaboration], “Simplified Models for LHC New Physics Searches,” J. Phys. G **39**, 105005 (2012) [arXiv:1105.2838 [hep-ph]]. N. Arkani-Hamed, P. Schuster, N. Toro, J. Thaler, L. -T. Wang, B. Knuteson and S. Mrenna, “MARMOSET: The Path from LHC Data to the New Standard Model via On-Shell Effective Theories,” hep-ph/0703088 [HEP-PH]. [ATLAS Collaboration], “Search for supersymmetry with jets and missing transverse momentum: Additional model interpretations,” ATLAS-CONF-2011-155. H. Okawa [ATLAS Collaboration], “Interpretations of SUSY Searches in ATLAS with Simplified Models,” arXiv:1110.0282 [hep-ex]. [ATLAS Collaboration], “Interpretation of same-sign dilepton events at ATLAS with a simplified SUSY model,” ATLAS-CONF-2011-091. S. Chatrchyan *et al.* [CMS Collaboration], “Interpretation of Searches for Supersymmetry with simplified Models,” Phys. Rev. D **88**, no. 5, 052017 (2013) [arXiv:1301.2175 [hep-ex]]. K. Kondo, “Dynamical Likelihood Method for Reconstruction of Events With Missing Momentum. 1: Method and Toy Models,” J. Phys. Soc. Jap.  **57**, 4126 (1988). K. Kondo, “Dynamical likelihood method for reconstruction of events with missing momentum. 2: Mass spectra for 2 → 2 processes,” J. Phys. Soc. Jap.  **60**, 836 (1991). K. Kondo, T. Chikamatsu and S. H. Kim, “Dynamical likelihood method for reconstruction of events with missing momentum. 3: Analysis of a CDF high p(T) e mu event as t anti-t production,” J. Phys. Soc. Jap.  **62**, 1177 (1993). R. H. Dalitz and G. R. Goldstein, “The Decay and polarization properties of the top quark,” Phys. Rev. D **45**, 1531 (1992). B. Abbott *et al.* [D0 Collaboration], “Measurement of the top quark mass in the dilepton channel,” Phys. Rev. D **60**, 052001 (1999) [hep-ex/9808029]. J. C. Estrada Vigil, “Maximal use of kinematic information for the extraction of the mass of the top quark in single-lepton t anti-t events at D0,” FERMILAB-THESIS-2001-07. M. F. Canelli, “Helicity of the *W* boson in single - lepton *t**t̄* events,” UMI-31-14921. V. M. Abazov *et al.* [D0 Collaboration], “A precision measurement of the mass of the top quark,” Nature **429**, 638 (2004) [hep-ex/0406031]. J. S. Gainer, J. Lykken, K. T. Matchev, S. Mrenna and M. Park, “The Matrix Element Method: Past, Present, and Future,” arXiv:1307.3546 [hep-ph]. P. Artoisenet and O. Mattelaer, “MadWeight: Automatic event reweighting with matrix elements,” PoS CHARGED **2008**, 025 (2008). P. Artoisenet, V. Lemaitre, F. Maltoni and O. Mattelaer, “Automation of the matrix element reweighting method,” JHEP **1012**, 068 (2010) [arXiv:1007.3300 [hep-ph]]. G. P. Lepage, “A New Algorithm for Adaptive Multidimensional Integration,” J. Comput. Phys.  **27**, 192 (1978). J. Alwall, M. Herquet, F. Maltoni, O. Mattelaer and T. Stelzer, “MadGraph 5 : Going Beyond,” JHEP **1106**, 128 (2011) [arXiv:1106.0522 [hep-ph]]. A. Belyaev, N. D. Christensen and A. Pukhov, “CalcHEP 3.4 for collider physics within and beyond the Standard Model,” Comput. Phys. Commun.  **184**, 1729 (2013) [arXiv:1207.6082 [hep-ph]]. A. Pukhov, E. Boos, M. Dubinin, V. Edneral, V. Ilyin, D. Kovalenko, A. Kryukov and V. Savrin *et al.*, “CompHEP: A Package for evaluation of Feynman diagrams and integration over multiparticle phase space,” hep-ph/9908288. E. Boos *et al.* [CompHEP Collaboration], “CompHEP 4.4: Automatic computations from Lagrangians to events,” Nucl. Instrum. Meth. A **534**, 250 (2004) [hep-ph/0403113]. L. Lonnblad, “ARIADNE version 4: A Program for simulation of QCD cascades implementing the color dipole model,” Comput. Phys. Commun.  **71**, 15 (1992). G. Corcella, I. G. Knowles, G. Marchesini, S. Moretti, K. Odagiri, P. Richardson, M. H. Seymour and B. R. Webber, “HERWIG 6: An Event generator for hadron emission reactions with interfering gluons (including supersymmetric processes),” JHEP **0101**, 010 (2001) [hep-ph/0011363]. M. Bahr, S. Gieseke, M. A. Gigg, D. Grellscheid, K. Hamilton, O. Latunde-Dada, S. Platzer and P. Richardson *et al.*, “Herwig++ Physics and Manual,” Eur. Phys. J. C **58**, 639 (2008) [arXiv:0803.0883 [hep-ph]]. T. Sjostrand, S. Mrenna and P. Z. Skands, “PYTHIA 6.4 Physics and Manual,” JHEP **0605**, 026 (2006) [hep-ph/0603175]. T. Sjostrand, S. Mrenna and P. Z. Skands, “A Brief Introduction to PYTHIA 8.1,” Comput. Phys. Commun.  **178**, 852 (2008) [arXiv:0710.3820 [hep-ph]]. T. Gleisberg, S. Hoeche, F. Krauss, A. Schalicke, S. Schumann and J. -C. Winter, “SHERPA 1. alpha: A Proof of concept version,” JHEP **0402**, 056 (2004) [hep-ph/0311263]. P. Z. Skands, ``Pythia and Vincia". A. J. Larkoski, J. J. Lopez-Villarejo and P. Skands, “Helicity-Dependent Showers and Matching with VINCIA,” Phys. Rev. D **87**, no. 5, 054033 (2013) [arXiv:1301.0933 [hep-ph]]. W. T. Giele, L. Hartgring, D. A. Kosower, E. Laenen, A. J. Larkoski, J. J. Lopez-Villarejo, M. Ritzmann and P. Skands, “The Vincia Parton Shower,” PoS DIS **2013**, 165 (2013) [arXiv:1307.1060]. R. Field, “Early LHC Underlying Event Data - Findings and Surprises,” arXiv:1010.3558 [hep-ph]. R. Field, “Min-Bias and the Underlying Event at the LHC,” Acta Phys. Polon. B **42**, 2631 (2011) [arXiv:1110.5530 [hep-ph]]. R. Field, “Min-Bias and the Underlying Event at the LHC,” arXiv:1202.0901 [hep-ph]. R. Field, “The underlying event in hadronic collisions,” Ann. Rev. Nucl. Part. Sci.  **62**, 453 (2012). P. Z. Skands, “Soft-QCD and UE spectra in pp collisions at very high CM energies (a Snowmass white paper),” arXiv:1308.2813 [hep-ph]. T. Kinoshita, “Mass singularities of Feynman amplitudes,” J. Math. Phys.  **3**, 650 (1962). T. D. Lee and M. Nauenberg, “Degenerate Systems and Mass Singularities,” Phys. Rev.  **133**, B1549 (1964). S. Albino, “The Hadronization of partons,” Rev. Mod. Phys.  **82**, 2489 (2010) [arXiv:0810.4255 [hep-ph]]. S. D. Ellis, J. Huston, K. Hatakeyama, P. Loch and M. Tonnesmann, “Jets in hadron-hadron collisions,” Prog. Part. Nucl. Phys.  **60**, 484 (2008) [arXiv:0712.2447 [hep-ph]]. G. P. Salam, “Towards Jetography,” Eur. Phys. J. C **67**, 637 (2010) [arXiv:0906.1833 [hep-ph]]. B. M. Waugh, H. Jung, A. Buckley, L. Lonnblad, J. M. Butterworth and E. Nurse, “HZTool and Rivet: Toolkit and Framework for the Comparison of Simulated Final States and Data at Colliders,” hep-ph/0605034. P. Z. Skands, “The Perugia Tunes,” arXiv:0905.3418 [hep-ph]. A. Buckley, H. Hoeth, H. Lacker, H. Schulz and J. E. von Seggern, “Systematic event generator tuning for the LHC,” Eur. Phys. J. C **65**, 331 (2010) [arXiv:0907.2973 [hep-ph]]. P. Z. Skands, “Tuning Monte Carlo Generators: The Perugia Tunes,” Phys. Rev. D **82**, 074018 (2010) [arXiv:1005.3457 [hep-ph]]. S. Catani, F. Krauss, R. Kuhn and B. R. Webber, “QCD matrix elements + parton showers,” JHEP **0111**, 063 (2001) [hep-ph/0109231]. F. Krauss, “Matrix elements and parton showers in hadronic interactions,” JHEP **0208**, 015 (2002) [hep-ph/0205283]. M. L. Mangano, M. Moretti, F. Piccinini and M. Treccani, “Matching matrix elements and shower evolution for top-quark production in hadronic collisions,” JHEP **0701**, 013 (2007) [hep-ph/0611129]. J. R. Dell’Aquila and C. A. Nelson, “*P* or CP Determination by Sequential Decays: V1 V2 Modes With Decays Into $\bar{\ell}$epton (A) ℓ(*B*) And/or *q̄* (A) *q*(*B*),” Phys. Rev. D **33**, 80 (1986). C. A. Nelson, “Correlation Between Decay Planes in Higgs Boson Decays Into *W* Pair (Into *Z* Pair),” Phys. Rev. D **37**, 1220 (1988). B. A. Kniehl, “The Higgs Boson Decay H → Z *g**g*,” Phys. Lett. B **244**, 537 (1990). A. Soni and R. M. Xu, “Probing CP violation via Higgs decays to four leptons,” Phys. Rev. D **48**, 5259 (1993) [hep-ph/9301225]. D. Chang, W. -Y. Keung and I. Phillips, “CP odd correlation in the decay of neutral Higgs boson into Z Z, W+ W-, or t anti-t,” Phys. Rev. D **48**, 3225 (1993) [hep-ph/9303226]. V. D. Barger, K. -m. Cheung, A. Djouadi, B. A. Kniehl and P. M. Zerwas, “Higgs bosons: Intermediate mass range at e+ e- colliders,” Phys. Rev. D **49**, 79 (1994) [hep-ph/9306270]. T. Arens and L. M. Sehgal, “Energy spectra and energy correlations in the decay *H* → *Z**Z* → *μ*+*μ*−*μ*+*μ*−,” Z. Phys. C **66**, 89 (1995) [hep-ph/9409396]. S. Y. Choi, D. J. Miller, M. M. Muhlleitner and P. M. Zerwas, “Identifying the Higgs spin and parity in decays to Z pairs,” Phys. Lett. B **553**, 61 (2003) [hep-ph/0210077]. B. C. Allanach, K. Odagiri, M. J. Palmer, M. A. Parker, A. Sabetfakhri and B. R. Webber, “Exploring small extra dimensions at the large hadron collider,” JHEP **0212**, 039 (2002) [hep-ph/0211205]. C. P. Buszello, I. Fleck, P. Marquard and J. J. van der Bij, “Prospective analysis of spin- and CP-sensitive variables in *H* → *Z**Z* → *l*(1) + *l*(1) − *l*(2) + *l*(2) −  at the LHC,” Eur. Phys. J. C **32**, 209 (2004) [hep-ph/0212396]. R. M. Godbole, D. J. Miller and M. M. Muhlleitner, “Aspects of CP violation in the H ZZ coupling at the LHC,” JHEP **0712**, 031 (2007) [arXiv:0708.0458 [hep-ph]]. V. A. Kovalchuk, “Model-independent analysis of CP violation effects in decays of the Higgs boson into a pair of the W and Z bosons,” J. Exp. Theor. Phys.  **107**, 774 (2008). W. -Y. Keung, I. Low and J. Shu, “Landau-Yang Theorem and Decays of a Z’ Boson into Two Z Bosons,” Phys. Rev. Lett.  **101**, 091802 (2008) [arXiv:0806.2864 [hep-ph]]. O. Antipin and A. Soni, “Towards establishing the spin of warped gravitons,” JHEP **0810**, 018 (2008) [arXiv:0806.3427 [hep-ph]]. Q. -H. Cao, C. B. Jackson, W. -Y. Keung, I. Low and J. Shu, “The Higgs Mechanism and Loop-induced Decays of a Scalar into Two Z Bosons,” Phys. Rev. D **81**, 015010 (2010) [arXiv:0911.3398 [hep-ph]]. Y. Gao, A. V. Gritsan, Z. Guo, K. Melnikov, M. Schulze and N. V. Tran, “Spin determination of single-produced resonances at hadron colliders,” Phys. Rev. D **81**, 075022 (2010) [arXiv:1001.3396 [hep-ph]]. A. De Rujula, J. Lykken, M. Pierini, C. Rogan and M. Spiropulu, “Higgs look-alikes at the LHC,” Phys. Rev. D **82**, 013003 (2010) [arXiv:1001.5300 [hep-ph]]. C. Englert, C. Hackstein and M. Spannowsky, “Measuring spin and CP from semi-hadronic ZZ decays using jet substructure,” Phys. Rev. D **82**, 114024 (2010) [arXiv:1010.0676 [hep-ph]]. A. Matsuzaki and H. Tanaka, “Determination of the Higgs CP property in Hadron Colliders,” arXiv:1101.2104 [hep-ph]. U. De Sanctis, M. Fabbrichesi and A. Tonero, “Telling the spin of the ’Higgs boson’ at the LHC,” Phys. Rev. D **84**, 015013 (2011) [arXiv:1103.1973 [hep-ph]]. H. E. Logan and J. Z. Salvail, “Model-independent Higgs coupling measurements at the LHC using the *H* → *Z**Z* → 4*l* lineshape,” Phys. Rev. D **84**, 073001 (2011) [arXiv:1107.4342 [hep-ph]]. J. S. Gainer, K. Kumar, I. Low and R. Vega-Morales, “Improving the sensitivity of Higgs boson searches in the golden channel,” JHEP **1111**, 027 (2011) [arXiv:1108.2274 [hep-ph]]. I. Low, P. Schwaller, G. Shaughnessy and C. E. M. Wagner, “The dark side of the Higgs boson,” Phys. Rev. D **85**, 015009 (2012) [arXiv:1110.4405 [hep-ph]]. C. Englert, M. Spannowsky and M. Takeuchi, “Measuring Higgs CP and couplings with hadronic event shapes,” JHEP **1206**, 108 (2012) [arXiv:1203.5788 [hep-ph]]. J. M. Campbell, W. T. Giele and C. Williams, “The Matrix Element Method at Next-to-Leading Order,” JHEP **1211**, 043 (2012) [arXiv:1204.4424 [hep-ph]]. J. M. Campbell, W. T. Giele and C. Williams, “Extending the Matrix Element Method to Next-to-Leading Order,” arXiv:1205.3434 [hep-ph]. N. Kauer and G. Passarino, “Inadequacy of zero-width approximation for a light Higgs boson signal,” JHEP **1208**, 116 (2012) [arXiv:1206.4803 [hep-ph]]. B. A. Kniehl and O. L. Veretin, “Low-mass Higgs decays to four leptons at one loop and beyond,” Phys. Rev. D **86**, 053007 (2012) [arXiv:1206.7110 [hep-ph]]. J. W. Moffat, “Identification of the 125 GeV Resonance as a Pseudoscalar Quarkonium Meson,” arXiv:1207.6015 [hep-ph]. B. Coleppa, K. Kumar and H. E. Logan, “Can the 126 GeV boson be a pseudoscalar?,” Phys. Rev. D **86**, 075022 (2012) [arXiv:1208.2692 [hep-ph]]. S. Bolognesi, Y. Gao, A. V. Gritsan, K. Melnikov, M. Schulze, N. V. Tran and A. Whitbeck, “On the spin and parity of a single-produced resonance at the LHC,” Phys. Rev. D **86**, 095031 (2012) [arXiv:1208.4018 [hep-ph]]. R. Boughezal, T. J. LeCompte and F. Petriello, “Single-variable asymmetries for measuring the ‘Higgs’ boson spin and CP properties,” arXiv:1208.4311 [hep-ph]. D. Stolarski and R. Vega-Morales, “Directly Measuring the Tensor Structure of the Scalar Coupling to Gauge Bosons,” Phys. Rev. D **86**, 117504 (2012) [arXiv:1208.4840 [hep-ph]]. P. Cea, “Comment on the evidence of the Higgs boson at LHC,” arXiv:1209.3106 [hep-ph]. J. Kumar, A. Rajaraman and D. Yaylali, “Spin Determination for Fermiophobic Bosons,” Phys. Rev. D **86**, 115019 (2012) [arXiv:1209.5432 [hep-ph]]. C. -Q. Geng, D. Huang, Y. Tang and Y. -L. Wu, “Note on 125 GeV Spin-2 particle,” Phys. Lett. B **719**, 164 (2013) [arXiv:1210.5103 [hep-ph]]. P. Avery, D. Bourilkov, M. Chen, T. Cheng, A. Drozdetskiy, J. S. Gainer, A. Korytov and K. T. Matchev *et al.*, “Precision studies of the Higgs boson decay channel *H* → *Z**Z* → 4*l* with MEKD,” Phys. Rev. D **87**, no. 5, 055006 (2013) [arXiv:1210.0896 [hep-ph]]. E. Masso and V. Sanz, “Limits on Anomalous Couplings of the Higgs to Electroweak Gauge Bosons from LEP and LHC,” Phys. Rev. D **87**, no. 3, 033001 (2013) [arXiv:1211.1320 [hep-ph]]. Y. Chen, N. Tran and R. Vega-Morales, “Scrutinizing the Higgs Signal and Background in the 2*e*2*μ* Golden Channel,” JHEP **1301**, 182 (2013) [arXiv:1211.1959 [hep-ph]]. T. Modak, D. Sahoo, R. Sinha and H. -Y. Cheng, “Inferring the nature of the boson at 125-126 GeV,” arXiv:1301.5404 [hep-ph]. S. Kanemura, M. Kikuchi and K. Yagyu, “Probing exotic Higgs sectors from the precise measurement of Higgs boson couplings,” Phys. Rev. D **88**, 015020 (2013) [arXiv:1301.7303 [hep-ph]]. J. S. Gainer, J. Lykken, K. T. Matchev, S. Mrenna and M. Park, “Geolocating the Higgs Boson Candidate at the LHC,” Phys. Rev. Lett.  **111**, 041801 (2013) [arXiv:1304.4936 [hep-ph]]. G. Isidori, A. V. Manohar and M. Trott, “Probing the nature of the Higgs-like Boson via *h* → *V**F* decays,” Physics Letters B 728C (2014), pp. 131-135 [arXiv:1305.0663 [hep-ph]]. J. Frank, M. Rauch and D. Zeppenfeld, “Higgs Spin Determination in the WW channel and beyond,” arXiv:1305.1883 [hep-ph]. B. Grinstein, C. W. Murphy and D. Pirtskhalava, “Searching for New Physics in the Three-Body Decays of the Higgs-like Particle,” JHEP **1310**, 077 (2013) [arXiv:1305.6938 [hep-ph]]. F. Caola and K. Melnikov, “Constraining the Higgs boson width with ZZ production at the LHC,” Phys. Rev. D **88**, 054024 (2013) [arXiv:1307.4935 [hep-ph]]. S. Banerjee, S. Mukhopadhyay and B. Mukhopadhyaya, “Higher dimensional operators and LHC Higgs data : the role of modified kinematics,” arXiv:1308.4860 [hep-ph]. Y. Sun, X. -F. Wang and D. -N. Gao, “CP mixed property of the Higgs-like particle in the decay channel *h* → *Z**Z*\* → 4*l*,” arXiv:1309.4171 [hep-ph]. I. Anderson, S. Bolognesi, F. Caola, Y. Gao, A. V. Gritsan, C. B. Martin, K. Melnikov and M. Schulze *et al.*, “Constraining anomalous HVV interactions at proton and lepton colliders,” arXiv:1309.4819 [hep-ph]. M. Chen, T. Cheng, J. S. Gainer, A. Korytov, K. T. Matchev, P. Milenovic, G. Mitselmakher and M. Park *et al.*, “The role of interference in unraveling the ZZ-couplings of the newly discovered boson at the LHC,” arXiv:1310.1397 [hep-ph]. G. Buchalla, O. Cata and G. D’Ambrosio, “Nonstandard Higgs Couplings from Angular Distributions in *h* → *Z*ℓ+ℓ−,” arXiv:1310.2574 [hep-ph]. Y. Chen and R. Vega-Morales, “Extracting Effective Higgs Couplings in the Golden Channel,” arXiv:1310.2893 [hep-ph]. J. M. Campbell, R. K. Ellis and C. Williams, “Bounding the Higgs width at the LHC using full analytic results for *g**g* → 2*e*2*μ*,” arXiv:1311.3589 [hep-ph]. Y. Chen, E. Di Marco, J. Lykken, M. Spiropulu, R. Vega-Morales and S. Xie, “8D Likelihood Effective Higgs Couplings Extraction Framework in the Golden Channel,” arXiv:1401.2077 [hep-ex]. M. Gonzalez-Alonso and G. Isidori, “The *h* → 4ℓ spectrum at low *m*34: Standard Model vs. light New Physics,” arXiv:1403.2648 [hep-ph]. J. S. Gainer, J. Lykken, K. T. Matchev, S. Mrenna and M. Park, “Beyond Geolocating: Constraining Higher Dimensional Operators in *H* → 4ℓ with Off-Shell Production and More,” arXiv:1403.4951 [hep-ph]. Y. Chen, R. Harnik and R. Vega-Morales, arXiv:1404.1336 [hep-ph]. S. Chatrchyan *et al.* [CMS Collaboration], “Study of the Mass and Spin-Parity of the Higgs Boson Candidate Via Its Decays to Z Boson Pairs,” Phys. Rev. Lett.  **110**, 081803 (2013) [arXiv:1212.6639 [hep-ex]]. G. Aad *et al.* [ATLAS Collaboration], “Evidence for the spin-0 nature of the Higgs boson using ATLAS data,” Phys. Lett. B **726**, 120 (2013) [arXiv:1307.1432 [hep-ex]]. S. Chatrchyan *et al.* [CMS Collaboration], “Measurement of the properties of a Higgs boson in the four-lepton final state,” arXiv:1312.5353 [hep-ex]. [ATLAS Collaboration], “Measurements of the properties of the Higgs-like boson in the four lepton decay channel with the ATLAS detector using 25 fb−1 of proton-proton collision data,” ATLAS-CONF-2013-013. [ATLAS Collaboration], “Study of the spin of the new boson with up to 25 fb− 1 of ATLAS data,” ATLAS-CONF-2013-040. [CMS Collaboration], “Combination of standard model Higgs boson searches and measurements of the properties of the new boson with a mass near 125 GeV,” CMS-PAS-HIG-13-005. L. Lyons, Cambridge, Uk: Univ. Pr. ( 1986) 226p P. C. Bhat, “Multivariate Analysis Methods in Particle Physics,” Ann. Rev. Nucl. Part. Sci.  **61**, 281 (2011). H. C. Cheng, K. T. Matchev and M. Schmaltz, “Radiative corrections to Kaluza-Klein masses,” Phys. Rev.  D **66**, 036005 (2002) [arXiv:hep-ph/0204342]. L. Edelhauser, K. T. Matchev and M. Park, “Spin effects in the antler event topology at hadron colliders,” JHEP **1211**, 006 (2012) [arXiv:1205.2054 [hep-ph]]. G. L. Bayatian et al. [CMS Collaboration], “CMS technical design report, volume II: Physics performance,” J. Phys. G G 34, 995 (2007). T. Han, I. W. Kim and J. Song, “Kinematic Cusps: Determining the Missing Particle Mass at Colliders,” Phys. Lett.  B **693**, 575 (2010) [arXiv:0906.5009 [hep-ph]]. M. Battaglia, A. Datta, A. De Roeck, K. Kong and K. T. Matchev, “Contrasting supersymmetry and universal extra dimensions at the CLIC multi-TeV e+ e- collider,” JHEP **0507**, 033 (2005) [hep-ph/0502041]. A. Datta, K. Kong and K. T. Matchev, “Discrimination of supersymmetry and universal extra dimensions at hadron colliders,” Phys. Rev. D **72**, 096006 (2005) [Erratum-ibid. D **72**, 119901 (2005)] [hep-ph/0509246]. P. Meade and M. Reece, “Top partners at the LHC: Spin and mass measurement,” Phys. Rev. D **74**, 015010 (2006) [hep-ph/0601124]. C. Athanasiou, C. G. Lester, J. M. Smillie and B. R. Webber, “Distinguishing Spins in Decay Chains at the Large Hadron Collider,” JHEP **0608**, 055 (2006) [hep-ph/0605286]. L. -T. Wang and I. Yavin, “Spin measurements in cascade decays at the LHC,” JHEP **0704**, 032 (2007) [hep-ph/0605296]. M. Burns, K. Kong, K. T. Matchev and M. Park, “A General Method for Model-Independent Measurements of Particle Spins, Couplings and Mixing Angles in Cascade Decays with Missing Energy at Hadron Colliders,” JHEP **0810**, 081 (2008) [arXiv:0808.2472 [hep-ph]]. A. J. Barr, “Measuring slepton spin at the LHC,” JHEP **0602**, 042 (2006) [hep-ph/0511115]. P. Konar, K. Kong and K. T. Matchev, “$\sqrt{\hat{s}}\_{min}$ : A Global inclusive variable for determining the mass scale of new physics in events with missing energy at hadron colliders,” JHEP **0903**, 085 (2009) [arXiv:0812.1042 [hep-ph]]. P. Konar, K. Kong, K. T. Matchev and M. Park, “RECO level $\sqrt{s}\_{min}$ and subsystem $\sqrt{s}\_{min}$: Improved global inclusive variables for measuring the new physics mass scale in ${\not \!\! E\_T}$ events at hadron colliders,” JHEP **1106**, 041 (2011) [arXiv:1006.0653 [hep-ph]]. I. Hinchliffe, F. E. Paige, M. D. Shapiro, J. Soderqvist and W. Yao, “Precision SUSY measurements at CERN LHC,” Phys. Rev. D **55**, 5520 (1997) [hep-ph/9610544]. A. Tricoli [ATLAS Collaboration], “Parton Densities at the LHC,” arXiv:0808.2579 [hep-ex]. --- 1. Ideally one would like to avoid theoretical prejudice completely, by going beyond SUSY and UED spin assignments.[↩](#fnref1) 2. *M**Z*2 has been shown to be a relatively sensitive variable for signal versus background discrimination and for measuring Higgs properties, see, for example .[↩](#fnref2) 3. The charged Higgs is a scalar and cannot “communicate” information about the helicity of the *b*-quark to the lepton, thus the shape of the distribution is as as in the pure phase space case.[↩](#fnref3) 4. The two samples must be mixed accounting for the relative number of events generated in regions N and $\overline{\mathcal{N}}$. This is an example of multi-channel integration with channels *I* and *J*.[↩](#fnref4) 5. This is another example of multi-channel integration.[↩](#fnref5)
arxiv_0000120
* axis, thus increasing *α*R. On the contrary, for a given pressure, temperature increases the complex interplay between the electrons and the lattice, yielding a reduction of *α*R. These are the behaviors observed in the trivial phase [Fig. [FigRashbaPT](c), diamond markers]. The TI phase displays the opposite behavior [Fig. [FigRashbaPT](c), round markers]: *α*R is reduced by pressure and increased by temperature. To understand this behavior, we must analyze more carefully both the Rashba energy and momentum, as *α*R captures the interplay between those two quantities. On the one hand, for all pressures, the Rashba energy [Fig. [FigRashbaPT](a)] almost steadily increases with temperature, except for the CB in the trivial phase, which shows a very weak decrease with temperature. On the other hand, the Rashba momentum is almost unaffected by pressure and temperature in the TI phase [Fig. [FigRashbaPT](b), round markers], while it increases almost steadily with temperature for pressures below the TPT. The opposite trends of *α*R between the trivial and TI phase can, therefore, be understood by two observations. On the one hand, in both phases, the pressure dependence of *α*R is governed by the Rashba energy, which increases in the trivial phase and decreases in the TI phase. On the other, their opposite temperature dependence is caused by the Rashba momentum. In the trivial phase, both *E*R and *k*R increase with temperature, the latter being more significant than the former, hence the decrease of *α*R. In the TI phase, the increase of *α*R captures the change of the Rashba energy, as *k*R is scarcely affected by temperature. We cannot, however, determine if this behavior is linked to the TPT or merely a consequence of an increased bonding along *ĉ* induced by increased pressure. This question could be addressed by investigating the temperature dependence of the high-pressure Rashba splitting in other bismuth tellurohalides, BiTeBr and BiTeCl, which are not known to exhibit a pressure-induced TPT . Lastly, we verified that the trends presented in Fig. [FigRashbaPT] for the TI phase do not vary if the temperature-dependent band structure is evaluated in the *P*C1 plane rather than the *P*C2 plane, remaining in the A-L direction. Hence, it seems reasonable that these trends, including the quasi temperature-independence of the Rashba momentum, could be applied to any *k**z* between those planes. We also verified that the temperature-dependent values of *P*C2 are not significantly altered if considering the EPI and TE corrections computed in the *P*C1 plane. Temperature-dependent Weyl semimetal phase width ================================================ [TableWSM] |c||c|c|c||c|c|c||c|c|c| Temperature&&& (K) & *P*C1 & *P*C2 & width & *P*C1 & *P*C2 & width & *P*C1 & *P*C2 & width (GPa) Static & 2.08 & 2.28 & 0.20& 2.08 & 2.28 & 0.20& 2.08 & 2.28 & 0.20 0 & 2.08 & 2.31 & 0.23 &2.08 & 2.28 & 0.20 & 2.08 &2.31 &0.23 100 & 2.10 & 2.42 & 0.32 & 2.09 &2.39 &0.30 & 2.14 &2.53 &0.39 200 & 2.14 & 2.57 & 0.44 & 2.16 & 2.55 & 0.39 & 2.24 & 2.82 &0.58 300 & 2.17 & 2.73 & 0.56 & 2.24 & 2.70 & 0.46 & 2.30 & 3.09 & 0.79 400 & 2.20 & 2.89 & 0.69 & 2.33 & 2.83 & 0.50 & 2.32 & 3.33 & 1.01 500 & 2.22 & 3.05 & 0.83 &2.46 & 2.95& 0.51 & 2.30 & 3.55 & 1.25 In Table [TableWSM], we report the numerical values for the temperature-dependent WSM phase boundaries, namely *P*C1(*T*) and *P*C2(*T*). These values were extracted from Fig. [FigGapPT] using the extrapolation procedure described in Sect. [WSMResults] and are explicitly plotted in Fig. [FigPTDiagram]. For comparison, we also include the temperature-dependent critical pressures obtained by considering the EPI and TE corrections independently. 105ifxundefined [1] ifx#1 ifnum [1] #1firstoftwo secondoftwo ifx [1] #1firstoftwo secondoftwo ““#1””@noop [0]secondoftwosanitize@url [0]` 12`$12`&12`#12`12`\_12`%12@startlink[1]@endlink[0]@bib@innerbibempty   and   ,  [Rev. Mod. Phys.  **82**, 3045 ( 2010)](\doibase 10.1103/RevModPhys.82.3045)  ,  , and   ,  [Phys. Scr.  **T164**,  014001 ( 2015)](\doibase 10.1088/0031-8949/2015/T164/014001)  ,  ,  ,  ,  , ,  ,  ,  ,  ,  ,  and   ,  [Science  **332**,  560 ( 2011)](\doibase 10.1126/science.1201607)  ,  ,  ,  ,  ,  ,  ,  ,  and   ,  [Phys. Rev. B  **90**,  161202(R) ( 2014)](\doibase 10.1103/PhysRevB.90.161202)  ,  ,  ,  ,  ,  ,  ,  ,  , ,  ,  and   ,  [Nat. Mater.  **11**,  1023 ( 2012)](\doibase 10.1038/nmat3449)  ,  ,  ,  ,  , ,  and  ,  [Nature  **452**, 970 ( 2008)](\doibase 10.1038/nature06843)  ,  ,  ,  ,  ,  ,  ,  ,  and  ,  [Phys. Rev. Lett.  **111**,  155701 ( 2013)](\doibase 10.1103/PhysRevLett.111.155701)  ,  ,  ,  ,  , ,  and  ,  [Phys. Rev. Lett.  **110**,  107401 ( 2013)](\doibase 10.1103/PhysRevLett.110.107401)  ,  ,  ,  ,  and   ,  [J. Mater. Chem. C  **4**,  2243 ( 2016)](\doibase 10.1039/C6TC00020G)  ,  ,  ,  ,  ,  ,  , ,  and   ,  [Nat. Phys.  **10**,  294 ( 2014)](\doibase 10.1038/nphys2898)  ,  ,  ,  ,  and   ,  [Nano Lett. **15**,  1222 ( 2015)](\doibase 10.1021/nl5043769)  ,  ,  ,  ,  and   ,  [Phys. Rev. Lett.  **111**,  056803 ( 2013)](\doibase 10.1103/PhysRevLett.111.056803)  ,  ,  ,  and  ,  [Science  **342**,  453 ( 2013)](\doibase 10.1126/science.1239834)  ,  ,  ,  ,  and   ,  [Nat. Commun.  **8**, 13940 ( 2017)](\doibase 10.1038/ncomms13940)  ,  [Nat Phys  **11**,  5 ( 2015)](\doibase 10.1038/nphys3217)  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  and   ,  [Sci. Rep. **3**,  1757 ( 2013)](\doibase 10.1038/srep01757)  ,  ,  ,  ,  and   ,  [Sci. Rep.  **4**,  3841 ( 2014)](\doibase 10.1038/srep03841)  ,  [Phys. Rev. B  **81**,  125318 ( 2010)](\doibase 10.1103/PhysRevB.81.125318)   and   ,  [JETP Lett.  **39**,  78-81 ( 1984)](http://www.jetpletters.ac.ru/ps/1264/article_19121.pdf)  ,  ,  ,  ,  and   ,  [Nat. Mater. **14**,  871 ( 2015)](\doibase 10.1038/nmat4360)   and   ,  [Adv. Mater.  **29**,  1605911 ( 2017)](\doibase https://doi.org/10.1002/adma.201605911)  ,  ,  ,  ,  ,  ,  , ,  and   ,  [Science  **342**,  1490 ( 2013)](\doibase 10.1126/science.1242247)  ,  , and   ,  [Rev. Mod. Phys.  **76**,  323 ( 2004)](\doibase 10.1103/RevModPhys.76.323)  ,  ,  ,  and   ,  [Solid State Commun.  **119**,  207 ( 2001)](\doibase 10.1016/S0038-1098(01)00111-9)  ,  ,  ,  ,  ,  and   ,  [Nat. Mater. **10**,  521-526 ( 2011)](\doibase 10.1038/nmat3051)  ,  ,  ,  ,  ,  ,  ,  , ,  ,  ,  ,  ,  and   ,  [Phys. Rev. Lett.  **109**,  096803 ( 2012)](\doibase 10.1103/PhysRevLett.109.096803)  ,  ,  ,  ,  ,  ,  , ,  ,  ,  ,  ,  ,  ,  and   ,  [Phys. Rev. Lett.  **109**,  116403 ( 2012)](\doibase 10.1103/PhysRevLett.109.116403)  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  and   ,  [Phys. Rev. Lett.  **110**,  107204 ( 2013)](\doibase 10.1103/PhysRevLett.110.107204)  ,  , and   ,  [Phys. Rev. B  **84**,  041202(R) ( 2011)](\doibase 10.1103/PhysRevB.84.041202)  ,  ,  ,  ,  and   ,  [Phys. Rev. Lett.  **108**,  246802 ( 2012)](\doibase 10.1103/PhysRevLett.108.246802)  ,  ,  ,  and   ,  [Nat. Commun.  **3**,  679 ( 2012)](\doibase 10.1038/ncomms1679)  ,  ,  ,  ,  ,  ,  ,  ,  , ,  and   ,  [Phys. Rev. Lett.  **112**,  047402(R) ( 2014)](\doibase 10.1103/PhysRevLett.112.047402)  ,  ,  ,  ,  and  ,  [Opt. Spectrosc.  **117**,  764 ( 2014)](\doibase 10.1134/S0030400X14110125)  ,  ,  ,  ,  ,  ,  ,  , ,  ,  ,  ,  ,  ,  ,  ,  ,  and   ,  [Sci. Rep. **7**,  39699 ( 2017)](\doibase 10.1038/srep39699)  ,  ,  ,  ,  ,  ,  , ,  ,  ,  and  ,  [Adv. Mater.  **29**,  1605965 ( 2017)](\doibase 10.1002/adma.201605965)  ,  ,  ,  ,  ,  ,  and   ,  [Phys. Rev. B  **90**,  161107(R) ( 2014)](\doibase 10.1103/PhysRevB.90.161107)  ,  ,  ,  ,  and   ,  [Phys. Rev. B  **87**,  041104(R) ( 2013)](\doibase 10.1103/PhysRevB.87.041104)  ,  ,  ,  ,  ,  , ,  ,  and   ,  [Sci. Rep.  **5**,  15973 ( 2015)](\doibase 10.1038/srep15973)  ,  ,  ,  ,  ,  , ,  and  ,  [J. Phys.: Condens. Matter **26**,  342202 ( 2014)](\doibase 10.1088/0953-8984/26/34/342202)  ,  ,  ,  ,  and   ,  [JETP Lett.  **98**,  557 ( 2014)](\doibase 10.1134/S0021364013220074)   and   ,  [Phys. Rev. B  **90**,  155316 ( 2014)](\doibase 10.1103/PhysRevB.90.155316)  ,  ,  ,  ,  ,  and  ,  [New J. Phys.  **18**,  113003 ( 2016)](\doibase 10.1088/1367-2630/18/11/113003)  ,  ,  ,  ,  ,  and   ,  [Phys. Rev. Lett.  **121**,  246403 ( 2018)](\doibase 10.1103/PhysRevLett.121.246403)   and   ,  [Phys. Rev. B  **94**,  165203 ( 2016)](\doibase https://doi.org/10.1103/PhysRevB.94.165203)  ,  [*Electronic Structure: Basic Theory and Practical Methods*](\doibase 10.1017/CBO9780511805769) ( Cambridge University Press,  Cambridge,  2004)  ,  [Phys. Rev. Lett.  **110**,  046402 ( 2013)](\doibase 10.1103/PhysRevLett.110.046402)   and   , [Phys. Rev. B  **89**,  205103 ( 2014)](\doibase 10.1103/PhysRevB.89.205103)   and   ,  [Phys. Rev. B  **88**,  195133 ( 2013)](\doibase 10.1103/PhysRevB.88.195133)   and   ,  [Phys. Rev. Lett.  **117**,  246401 ( 2016)](\doibase 10.1103/PhysRevLett.117.246401)   and   ,  [Phys. Rev. Lett.  **117**,  226801 ( 2016)](\doibase 10.1103/PhysRevLett.117.226801)  ,  ,  ,  ,  ,   ,  ,  , ,  ,  ,  ,  ,  ,  ,  and   ,  [Phys. Rev. B  **89**,  075138 ( 2014)](\doibase 10.1103/PhysRevB.89.075138)  ,  ,  ,  ,  ,  ,  ,  and   ,  [Nat. Commun.  **6**,  8463 ( 2015)](\doibase 10.1038/ncomms9463)   and   ,  [Phys. Rev. Mater.  **1**,  054201 ( 2017)](\doibase 10.1103/PhysRevMaterials.1.054201)   and   ,  [Phys. Rev. B  **27**, 4760 ( 1983)](\doibase 10.1103/PhysRevB.27.4760)  , @noop *Many-Particle Physics*,  2nd ed. ( Plenum Press,  1990)  ,  ,  ,  ,  ,  and   ,  [Phys. Rev. B  **92**,  085137 ( 2015)](\doibase 10.1103/PhysRevB.92.085137)   and   , [J. Phys. C: Solid State Phys.  **9**, 2305 ( 1976)](\doibase 10.1088/0022-3719/9/12/013)   and   ,  [Phys. Rev. B  **23**, 1495 ( 1981)](\doibase 10.1103/PhysRevB.23.1495)  ,  ,  ,  ,  ,  ,  ,  and   ,  [Phys. Rev. B  **90**,  214304 ( 2014)](\doibase 10.1103/PhysRevB.90.214304)  ,  ,  ,  ,  ,  and   ,  [J. Chem. Phys.  **143**,  102813 ( 2015)](\doibase 10.1063/1.4927081)  ,  [Rev. Mod. Phys.  **89**,  015003 ( 2017)](\doibase 10.1103/RevModPhys.89.015003)  ,  , and   ,  [Phys. Rev. B  **91**,  224310 ( 2015)](\doibase 10.1103/PhysRevB.91.224310)  ,  , and   , [Ann. Phys.  **523**,  168 ( 2011)](\doibase 10.1002/andp.201000100)  , @noop *Thermophysical properties of materials*,  Selected topics in solid state physics No.  XVIII ( North-Holland Physics Publishing,  Amsterdam, 1986)  , @noop *The Physics of Phonons* ( A. Hilger,  New York, NY, USA,  1990)  ,  [Mod. Phys. Lett. B  **34** No.  2, 2050025( 2020)](\doibase 10.1142/S0217984920500256)  ,  [J. Phys. C: Solid State Phys.  **5**,  535 ( 1972)](\doibase 10.1088/0022-3719/5/5/005)   and   ,  [Phys. Rev. Lett.  **121**,  255901 ( 2018)](\doibase 10.1103/PhysRevLett.121.255901)  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  , ,  ,  ,  ,  ,  ,  ,  ,  ,  , ,  ,  ,  ,  ,  ,  ,  ,  , ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  , , , ,  ,  ,  ,  ,  ,  and  ,  [Comput. Phys. Commun.  **248**,  107042 ( 2019)](\doibase 10.1016/j.cpc.2019.107042)  ,  [Phys. Rev. B  **55**, 10337 ( 1997)](\doibase 10.1103/PhysRevB.55.10337)   and   , [Phys. Rev. B  **55**,  10355 ( 1997)](\doibase 10.1103/PhysRevB.55.10355)  ,  ,  ,  ,  ,  ,  ,  ,  and   ,  [Phys. Rev. B  **86**,  094302 ( 2012)](\doibase 10.1103/PhysRevB.86.094302)  ,  ,  and   ,  [Phys. Rev. B  **58**, 3641 ( 1998)](\doibase 10.1103/PhysRevB.58.3641)  ,  , and   ,  [Phys. Rev. Lett.  **77**,  3865 ( 1996)](\doibase 10.1103/PhysRevLett.77.3865)   and   ,  [Phys. Rev. B  **91**,  245204 ( 2015)](\doibase 10.1103/PhysRevB.91.245204)  ,  [PNAS  **30**,  244 ( 1944)](\doibase 10.1073/pnas.30.9.244)  ,  ,  ,  ,  ,  and   ,  [Phys. Rev. B  **97**,  115145 ( 2018)](\doibase 10.1103/PhysRevB.97.115145)  ,  ,  ,  and   ,  [Phys. Rev. B  **71**,  035117 ( 2005)](\doibase 10.1103/PhysRevB.71.035117)  ,  , and   , [Phys. Rev. B  **72**,  035105 ( 2005)](\doibase 10.1103/PhysRevB.72.035105)  ,  ,  ,  ,  ,  ,  and   ,  [Phys. Rev. B  **95**,  075146 ( 2017)](\doibase 10.1103/PhysRevB.95.075146)   and   ,  [Phys. Rev. B  **83**,  235401 ( 2011)](\doibase 10.1103/PhysRevB.83.235401)  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  and   ,  [J. Phys. Chem. C  **117**,  25677 ( 2013)](http://pubs.acs.org/doi/abs/10.1021/jp409824g)   and   , [Phys. Rev. B  **91**,  165140 ( 2015)](\doibase 10.1103/PhysRevB.91.165140)   and   ,  [Phys. Rev. Lett.  **113**,  076407 ( 2014)](\doibase 10.1103/PhysRevLett.113.076407)  ,  , and   , [Phys. Rev. Lett.  **113**, 076408 ( 2014)](\doibase 10.1103/PhysRevLett.113.076408)   and   , [Phys. Rev. B  **99**,  075157 ( 2019)](\doibase 10.1103/PhysRevB.99.075157)   and   , @noop “ Influence of the electron-phonon interaction on the topological phase transition in BiTeI,”  ( 2020),  Accepted for publication in Proc. of XIth International Symposium Quantum Theory and Symmetries, CRM Series on Mathematical Physics (Springer).   and   ,  [Rev. Mod. Phys.  **77**,  1173 ( 2005)](\doibase 10.1103/RevModPhys.77.1173)   and   ,  [Comput. Mater. Sci.  **167**,  257 ( 2019)](\doibase 10.1016/j.commatsci.2019.05.045)  ,  [Adv. Phys.  **3**,  325 ( 1954)](\doibase 10.1080/00018735400101213)   and   ,  [Phys. Rev. B  **94**, 115135 ( 2016)](\doibase 10.1103/PhysRevB.94.115135)  ,  ,  ,  ,  ,  ,  ,  and   , @noop “ Generalized Fröhlich model for real materials: predominance of non-adiabatic effects in zero-point renormalization of electronic energies,”  ( 2019),  Unpublished  ,  ,  ,  ,  ,  and  ,  [Phys. Rev. B  **87**,  205103 ( 2013)](\doibase 10.1103/PhysRevB.87.205103)  ,  ,  ,  ,  ,  and   ,  [JETP Lett.  **101**,  507 ( 2015)](\doibase 10.1134/S0021364015080147)  ,  ,  and   ,  [Phys. Rev. B  **96**,  155309 ( 2017)](\doibase 10.1103/PhysRevB.96.155309)  ,  ,  ,  ,  and   ,  [Phys. Rev. B  **94**,  205130 ( 2016)](\doibase 10.1103/PhysRevB.94.205130)  ,  ,  ,  and  ,  [J. Chem. Phys.  **132**,  154104 ( 2010)](\doibase 10.1063/1.3382344)   and   ,  [J. Chem. Phys.  **124**, 174104 ( 2006)](\doibase 10.1063/1.2190220)  ,  , and   ,  [J. Comput. Chem.  **32**,  1456 ( 2011)](\doibase 10.1002/jcc.21759)  ,  ,  and   ,  [Phys. Rev. B  **93**,  144304 ( 2016)](\doibase 10.1103/PhysRevB.93.144304)  ,  ,  ,  and  ,  [J. Solid State Chem.  **114**,  379 ( 1995)](\doibase https://doi.org/10.1006/jssc.1995.1058)  ,  ,  ,  ,  ,  ,  ,  and   ,  [Phys. Rev. Lett.  **107**,  117401 ( 2011)](\doibase 10.1103/PhysRevLett.107.117401)  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  and   ,  [Phys. Rev. B  **86**,  085204 ( 2012)](\doibase 10.1103/PhysRevB.86.085204)  ,  ,  ,  ,  ,  ,  ,  and  ,  [Phys. Rev. B  **89**,  195117 ( 2014)](\doibase 10.1103/PhysRevB.89.195117)  , ed., [*Spin physics in semiconductors*](\doibase 10.1007/978-3-319-65436-2),  2nd ed., Springer series in solid-state sciences ; 157 ( Springer, Berlin,  2017)  and   ,  [Phys. Rev. B  **101**, 121102(R) ( 2020)](\doibase 10.1103/PhysRevB.101.121102) Temperature dependence of the topological phase transition of BiTeI from first principles ========================================================================================= A topological phase transition from a trivial insulator to a Z2 topological insulator requires the bulk band gap to vanish. In the case of noncentrosymmetric materials, these phases are separated by a gapless Weyl semimetal phase. However, at finite temperature, the gap is affected by atomic motion, through electron-phonon interaction, and by thermal expansion of the lattice. As a consequence, the phase space of topologically nontrivial phases is affected by temperature. In this paper, the pressure and temperature dependence of the indirect band gap of BiTeI is investigated from first principles. We evaluate the contribution from both electron-phonon interaction and thermal expansion, and show that their combined effect drives the topological phase transition towards higher pressures with increasing temperature. Notably, we find that the sensitivity of both band extrema to pressure and topology for electron-phonon interaction differs significantly according to their leading orbital character. Our results indicate that the Weyl semimetal phase width is increased by temperature, having almost doubled by 100 K when compared to the static lattice results. Our findings thus provide a guideline for experimental detection of the nontrivial phases of BiTeI and illustrate how the phase space of the Weyl semimetal phase in noncentrosymmetric materials can be significantly affected by temperature. [intro]Introduction =================== Topological phases of matter have become a thriving field of condensed matter physics, for both fundamental and applied research . In three-dimensional (3D) materials, these phases are characterized by the existence of metallic surface states with peculiar properties, such as protection against backscattering, spin-momentum locking and dissipationless spin-polarized currents, as well as inverted bulk band gaps . The discovery of experimentally tunable topological phases, whether through stoichiometric doping , hydrostatic pressure , strain , external electric fields  or interaction with light , has led to a continually growing number of proposals for promising and innovative applications relying on the refined engineering of these robust states and their associated phase transitions . Another widely studied class of materials is the bulk Rashba semiconductors, in which a strong spin-orbit interaction combined with the absence of inversion symmetry leads to a splitting of electronic bands of opposite spin polarization . The band extrema are shifted away from the time-reversal invariant points in the Brillouin zone, both in energy and momentum, in the plane perpendicular to the potential gradient . This Rashba effect gives rise to many quantum phenomena, such as the Edelstein, spin Hall and spin galvanic effects, as well as noncentrosymmetric superconductivity [see, for example, Bahramy and Ogawa  and references therein]. Just like the topological surface states, the bulk Rashba split bands have their spin orientation locked perpendicular to their momentum, leading to a helical spin texture . This feature makes these materials promising candidates for realizing devices involving the active manipulation of the spin degree of freedom, for both spintronics  and quantum computing . Because it exhibits both of these phenomena, there is no wonder that BiTeI has attracted such a wide interest for the last decade, both from the experimental and first-principles communities. Besides displaying one of the largest Rashba splittings known so far , it was predicted to turn into a strong Z2 topological insulator under hydrostatic pressure . Following this prediction, BiTeI has been widely investigated through optical , electrical transport  and Shubnikov-de Haas oscillations experiments . Many experiments revealed signatures supporting the existence of the topological phase transition (TPT) . While at first it was thought that the TPT of BiTeI occurred at a single critical pressure *P**C*, it was later demonstrated that the lack of inversion symmetry imposed the existence of an intermediate Weyl semimetal (WSM) phase  between the trivial band insulator and topological insulator phases, yielding *two* critical pressures. Tight-binding  and first-principles calculations  predicted that this WSM phase could exist only within a narrow pressure range of 0.1-0.2 GPa, making its experimental detection technically challenging. The precise value of the critical pressure of BiTeI is still elusive to this day. Experiments have located it between 2.0 and 3.5 GPa , while first-principles calculations have predicted it in a slightly wider range of pressures, from 1.6 to 4.5 GPa . From a theoretical point of view, predicting the critical pressure comes down to finding the gap closing pressure, which is inherently dependent on the accuracy of the calculated band gap at ambient pressure, which can be biased by the well-established underestimation of the band gap by density functional theory (DFT) . On the other hand, one should not forget that most first-principles calculations are done under the assumption of a static lattice, while experiments are inherently done at finite temperature. This completely overlooks the fact that electrons can interact with thermally activated phonons, resulting in a temperature-dependent shift in the electronic eigenenergies. Moreover, at *T* = 0 K, it neglects the contribution of the zero-point renormalization (ZPR) to the eigenenergies, due to the zero-point motion of the ions. For narrow gap materials, this renormalization could, in principle, push the system towards a band inversion. In the case of BiTeI, this shift will directly affect *both* critical pressures at which the band gap will effectively close or reopen at a given temperature. The ability of electron-phonon interaction (EPI) to induce a change in the bulk topology was first demonstrated with model Hamiltonians , and later analyzed by first-principles calculations for BiTl(S1 − *x*Se*x*)2  and for the Bi2Se3 family . These studies were able to capture the whole complexity of the EPI, with fewer approximations. They demonstrated that, depending on the strength and sign of the different types of couplings, nontrivial topology could either be promoted or suppressed by temperature in real materials. The suppression of the topological surface state signatures at sufficiently high temperatures was observed experimentally in Pb1 − *x*Sn*x*Se . The temperature-dependent band structure of BiTeI was previously investigated by focusing on the variation of the Rashba splitting rather than on tracking the TPT . In this paper, we study the temperature dependence of the topological phase transition in BiTeI using first-principles methods, assessing for both the EPI and thermal expansion (TE) contributions. We explicitly assess the temperature dependence of the band-gap renormalization as a function of pressure, both in the trivial and the topological phases. We observe that the EPI contribution changes sign as the system undergoes the TPT, and link this behavior to the band inversion phenomenon, showing that the band extrema exhibit a distinct pressure dependence related to their leading orbital character. In turn, the TE contribution is unaffected by the leading orbital character of the band extrema, but manifests sensitivity to the topological invariant by changing sign as the system undergoes the TPT. We finally evaluate how the pressure width of the WSM phase is affected by temperature by extrapolating the total renormalization trends of each band towards the TPT, and find that the stronger renormalization of the band gap in the TI phase widens the WSM with increasing temperature. The remainder of this paper is organized as follows. After presenting the theoretical formalism used to obtain the EPI and TE contributions to the electronic structure in Sect. [theory], we summarize the computational details in Sect. [computation]. Sect. [staticresults] presents our analysis of the TPT and WSM phase for a static lattice at *T* = 0 K, while Sect. [tdepresults] and [WSMResults] focus on the effects of temperature on both the band gap and the TPT. We finally discuss the implications and limitations of our results in Sect. [discussion] and summarize our findings. [theory]Methodology =================== The temperature dependence of the band-gap energy, *E**g*, for a given constant pressure, can be approximated at first order by the sum of two distinct contributions , namely the thermal expansion, Δ*E**g*TE, and the renormalization of the electronic eigenenergies by electron-phonon interactions at constant volume, Δ*E**g*EPI : Δ*E**g* ≃ Δ*E**g*EPI + Δ*E**g*TE. Electron-phonon interaction --------------------------- In many-body perturbation theory , the temperature dependence of the electronic eigenenergies, *ɛ***k***n*, induced by electron-phonon interaction can be captured, under certain assumptions , by the real part of the electron-phonon self-energy, ΣEPI, evaluated at the quasiparticle energy: *ɛ***k***n*(*T*) = *ɛ***k***n*0 + Re[Σ**k***n*EPI(*ω* = *ɛ***k***n*(*T*))],  where *ɛ***k***n*0 is the unperturbed eigenvalue of a Kohn-Sham (KS) eigenstate with wavevector **k** and band index *n*, computed at the fixed, relaxed geometry. The Hartree atomic unit system is used throughout this paper (ℏ = *m**e* = *e* = 1). In the nonadiabatic Allen-Heine-Cardona (AHC) framework , the self-energy at the second-order of perturbation is the sum of the Fan and Debye-Waller (DW) contributions [Fig. [FigFanDW]]: Σ**k***n*EPI(*ω*, *T*) = Σ**k***n*Fan(*ω*, *T*) + Σ**k***n*DW(*T*),  where the former captures a frequency-dependent interaction with two first-order EPI vertices, and the latter captures a static one with a single second-order vertex. Evaluating the previous equation at absolute zero temperature yields the ZPR, which arises from the zero-point motion of the ions. For a detailed review of the AHC methodology, we refer our readers to the works of Poncé *et al.*  and to the extensive review by Giustino . [FigFanDW] Assuming that the quasiparticle energy is close to the unperturbed electronic energy, one can apply the on-the-mass-shell approximation , in which the Fan self-energy is evaluated at the poles of the Green’s function, namely at the frequency *ω* corresponding to the bare eigenvalue *ɛ***k***n*0, yielding *ɛ***k***n*(*T*) ≈ *ɛ***k***n*0 + Re[Σ**k***n*EPI(*ω* = *ɛ***k***n*0, *T*)]. Furthermore approximating the fully interacting electronic wavefunction by the noninteracting KS-DFT wavefunction, one obtains the standard result for the retarded Fan self-energy : $$\label{EqnSigmaFan} \begin{split} & \Sigma\_{\mathbf{k}n}^{\text{Fan}}(\omega,T) = \frac{1}{N\_\mathbf{q}} \sum\limits\_{\mathbf{q}\nu}^{\text{BZ}}\sum\limits\_{n'} \frac{1}{2\omega\_{\mathbf{q}\nu}}|\bra{\mathbf{k+q}n'}V^{(1)}\_{\mathbf{q} \nu}\ket{\mathbf{k}n}|^2 \times \\ & \left[ \frac{n\_{\mathbf{q}\nu}(T) + 1 - f\_{\mathbf{k+q}n'}(T)}{\omega-\varepsilon^0\_{\mathbf{k+q}n'}- \omega\_{\mathbf{q}\nu} + i\eta\_\mathbf{k}} + \frac{n\_{\mathbf{q}\nu}(T) + f\_{\mathbf{k+q}n'}(T)}{\omega-\varepsilon^0\_{\mathbf{k+q}n'}+ \omega\_{\mathbf{q}\nu} + i\eta\_\mathbf{k}} \right], \end{split}$$ in which the contributions from all phonon modes with frequency *ω***q***ν* are summed for all wavevectors **q** and branches *ν* in the Brillouin zone (BZ). The whole temperature dependence of this expression is captured by the bosonic and fermionic occupation factors, respectively *n***q***ν* and *f***k+q***n*ʹ. The parameter *η***k** = *η* sgn(*ɛ***k***n*0 − *μ*), where *η* is real and positive, preserves causality by correctly shifting the poles of the Green’s function in the complex plane. Equation ([EqnSigmaFan]) implies the limit of an infinite number of phonon wavevectors (*N***q** → ∞), leading to a vanishingly small value of *η*. *V***q***ν*(1) is the first-order self-consistent change of the local KS potential induced by the collective atomic displacement **R***κ**α**ν*(**q**) along a given phonon mode  : $$\label{EqnV1} V^{(1)}\_{\mathbf{q}\nu} = \sum\limits\_{\kappa\alpha}\frac{\partial V^{\text{KS}}}{\partial \textbf{R}^\nu\_{\kappa\alpha}(\mathbf{q})} = \sum\limits\_{\kappa,\alpha} V^{(1)}\_{\kappa\alpha}(\mathbf{q}\nu),$$ with $$\label{EqnDerivative} \frac{\partial}{\partial \textbf{R}^\nu\_{\kappa\alpha}(\mathbf{q})}= U\_{\nu,\kappa\alpha}(\mathbf{q})\sum\limits\_l\text{e}^{i\mathbf{q}\cdot \mathbf{R}\_l}\frac{\partial}{\partial \mathbf{R}\_{l\kappa\alpha}}.$$ In these expressions, **R***l**κ**α* denotes the displacement of atom *κ*, located in unit cell *l* with lattice vector **R***l*, in cartesian direction *α*; **R***κ**α**ν*(**q**) therefore describes the collective atomic displacement of along the **q***ν*-phonon mode. *V**κ**α*(1)(**q***ν*) refers to the contribution of a displacement of atom *κ* in direction *α* to the full first-order potential *V***q***ν*(1). *U**ν*, *κ**α*(**q**) is the **q***ν*-phonon displacement vector, which is related to the phonon eigenvector, *ξ**κ**α**ν*(**q**), through: $$\xi^\nu\_{\kappa\alpha}(\mathbf{q}) = \sqrt{M\_\kappa} U\_{\nu,\kappa\alpha}(\mathbf{q}),$$ with *M**κ* being the mass of atom *κ*. The static Debye-Waller self-energy is defined as the second derivative of the potential with respect to two atomic displacements, evaluated at the first order in perturbation theory  : $$\Sigma^{\text{DW}}\_{\mathbf{k}n} (T) = \frac{1}{N\_\mathbf{q}} \sum\limits\_{\mathbf{q}\nu}\frac{1}{2\omega\_{\mathbf{q}\nu}}\bra{\mathbf{k}n} V^{(2)}\_{\mathbf{q}\nu} \ket{\mathbf{k}n}\left[2n\_{\mathbf{q}\nu}(T) + 1\right],$$ where the second-order perturbation potential is $$V^{(2)}\_{\mathbf{q}\nu} = \frac{1}{2}\sum\limits\_{\substack{\kappa\alpha \\ \kappa'\alpha'}}\frac{\partial^2 V^{\text{KS}}}{\partial \textbf{R}^\nu\_{\kappa'\alpha'}(\mathbf{-q})\partial \textbf{R}^\nu\_{\kappa\alpha}(\mathbf{q})}\;,$$ with the derivatives defined as in Eq. ([EqnDerivative]). Only the phonon occupation factor contributes to the temperature dependence of this term since it does not involve any intermediate electronic state [as can be seen from Fig. [FigFanDW](b)]. The constant term inside the brackets accounts for the ZPR contribution. The numerical evaluation of this second-order derivative is a computational bottleneck of density-functional perturbation theory (DFPT) . It can, however, be circumvented by applying the rigid-ion approximation (RIA), which supposes that the potentials created by each nucleus are independent of each other. Within this approximation, thanks to the translational invariance of the lattice, *V***q***ν*(2) can be expressed in terms of the first-order derivatives entering the Fan self-energy  [see Eq. ([EqnV1])]. The consequences of this approximation on the ZPR of diamond have been discussed by Poncé *et al.* . In this framework, the Debye-Waller self-energy becomes : $$\label{EqnSigmaDW} \begin{split} \Sigma^{\text{DW,RIA}}\_{\mathbf{k}n} = &\frac{1}{N\_\mathbf{q}}\sum\limits\_{\mathbf{q}\nu}^{\text{BZ}} \sum\limits\_{n'}-\frac{1}{2\omega\_{\mathbf{q}\nu}} \frac{|g^{\text{DW}}\_{\mathbf{k}nn'}(\mathbf{q}\nu)|^2} {\varepsilon^0\_{\mathbf{k}n}-\varepsilon^0\_{\mathbf{k}n'}+i\eta}\\ &\times \left[n\_{\mathbf{q}\nu}(T) + \frac{1}{2}\right], \end{split}$$ with $$\begin{split} &|g^{\text{DW}}\_{\mathbf{k}nn'}(\mathbf{q}\nu)|^2 =\\ & \qquad \quad \sum\limits\_{\substack{\kappa\alpha \\ \kappa'\alpha'}} \left[U\_{\nu,\kappa\alpha}(\mathbf{q})U^\*\_{\nu,\kappa\alpha'} (\mathbf{q}) + U\_{\nu,\kappa'\alpha}(\mathbf{q})U^\*\_{\nu,\kappa'\alpha'} (\mathbf{q})\right] \\ &\qquad \quad \times \bra{\mathbf{k}n} V^{\text{DW}\*}\_{\kappa\alpha}\ket{\mathbf{k+q}n'} \bra{\mathbf{k+q}n'} V^{\text{DW}}\_{\kappa'\alpha'}\ket{\mathbf{k}n}, \end{split}$$ and *V**κ**α*DW = *V**κ**α*(1)(**q** = 0, *ν*). We finally evaluate the sum on band index *n*ʹ in Eq. ([EqnSigmaFan]) and ([EqnSigmaDW]) using the semi-static approach described in Poncé *et al.*  and Antonius *et al.* . We compute the full, nonadiabatic contribution for all bands up to a band index *M*, which we choose to be well-separated in energy (i.e., more than 10 eV) from the first conduction band. For the high-energy bands with band index *n*ʹ > *M*, the phonon frequencies in the denominators of Eq. ([EqnSigmaFan]) and ([EqnSigmaDW]) are negligible with respect to the electronic eigenenergy difference. These bands can, therefore, be treated within the adiabatic approximation, neglecting the phonon frequencies. Furthermore, this sum over an arbitrarily large number of empty states can be replaced by the solution of a linear Sternheimer equation for the subspace orthonormal to the low-energy states (*n*ʹ ≤ *M*), thus significantly reducing the numerical cost of the calculation . We estimate the relative error on the renormalization induced by this semi-static treatment of the high energy bands to be 1-2% at most. Thermal expansion ----------------- The thermal expansion contribution was evaluated through the quasi-harmonic approximation (QHA) . Within this framework, the only departure from harmonicity occurs through the explicit volume dependence acquired by the phonon frequencies. It should be understood that the purpose of the QHA is to deliver the leading order expression of the thermal expansion coefficient *α*, rather than capturing the full anharmonic effects present in the crystal , as would do nonperturbative methods . The thermal expansion coefficient, *α*, is a rank 2 tensor that relates a small temperature increment Δ*T* to the strain, *ε*, it induces on the lattice: *ε**i**j* = *α**i**j*Δ*T*,  where *i*, *j* = 1, 2, 3 are cartesian directions. Hence, the most general definition of *α**i**j* is $$\alpha\_{ij} = \left(\frac{\partial\epsilon\_{ij}}{\partial T}\right)\_\sigma,$$ where the derivative is evaluated at constant stress *σ*, which is usually taken to be a constant pressure *P*. The diagonal components of the strain tensor describe the relative change in the length of the lattice parameters: *ε**i**i* = Δ*a**i*/*a**i*. The volumic expansion coefficient *β* can therefore be obtained by taking the trace of *α*: $$\beta = \frac{\partial}{\partial T}\left(\frac{\Delta V}{V}\right)\_P = \sum\limits\_{i=1}^3 \alpha\_{ii}.$$ In the most simple case of cubic symmetry, all *α**i**i* are equal and can be expressed by  $$\label{EqnAlphaCubic} \alpha(T) = \frac{\beta}{3} = \frac{\kappa\_0} {3V\_0 N\_{\mathbf{q}}}\sum\limits\_{\mathbf{q}\nu}\gamma^V\_{\mathbf{q}\nu} c\_{\mathbf{q}\nu}(T),$$ where *κ*0 is the bulk compressibility at equilibrium volume, *V*0 is the primitive cell equilibrium volume, *N***q** is the number of phonons in the homogenous grid used to sample the BZ and *c***q***ν*(*T*) is the specific heat of the **q***ν*-phonon mode at temperature *T*. We have also introduced the volumic mode Grüneisen parameters, defined as  $$\gamma^V\_{\mathbf{q}\nu} = -\frac{\partial\text{ln}\omega\_{\mathbf{q}\nu}(V)}{\partial\text{ln}V}.$$ The bulk Grüneisen parameter, often referred to in the literature, is defined as $$\label{EqnBulkGru} \gamma^V = \frac{\sum\limits\_{\mathbf{q}\nu}\gamma^V\_{\mathbf{q}\nu} c\_{\mathbf{q}\nu}}{\sum\limits\_{\mathbf{q}\nu}c\_{\mathbf{q}\nu}}.$$ Thus, for cubic systems, the linear thermal expansion coefficient, *α*(*T*), Eq. ([EqnAlphaCubic]), is simply proportional to two scalar parameters, *γ**V* and *κ*0, the former governing the sign of the thermal expansion. For materials belonging to noncubic symmetry groups, one must consider the most general case, in which the mode Grüneisen parameters take a tensor form and capture the variation of the phonon frequencies with respect to a given strain: $$\label{EqnGammaij} \gamma^{ij}\_{\mathbf{q}\nu} = -\frac{\partial\text{ln}\omega\_{\mathbf{q}\nu}}{\partial\epsilon\_{ij}}.$$ One can also define bulk Grüneisen parameters *γ**i**j* following the same procedure as in Eq. ([EqnBulkGru]). For axial crystals, which include the case of BiTeI’s trigonal symmetry, the thermal expansion coefficient tensor has two distinct nonvanishing components : *α*11 = *α*22 and *α*33. Because of this anisotropy, the resulting linear thermal expansion coefficients along crystallographic directions *â* and *ĉ* capture a subtle interplay between the vibrational and elastic properties of the material : $$\label{EqnAlpha} \begin{split} \alpha\_a &= \alpha\_{11} = \frac{C\_\sigma}{V\_0}\left[(s\_{11}+s\_{12})\gamma^{a\_1} + s\_{13}\gamma^c\right],\\ \alpha\_c &= \alpha\_{33} = \frac{C\_\sigma}{V\_0}\left[2s\_{13}\gamma^{a\_1} + s\_{33}\gamma^c\right], \end{split}$$ where *C**σ* is the heat capacity at constant stress and *s**i**j* is the *i**j*-coefficient of the elastic compliance tensor. In these expressions, the subscripts *a* and *c* refer to the length of the unit cell along (*â*1, *â*2) and *ĉ*. *γ**a*1 formally refers to a strain applied in only *one* direction perpendicular in the crystallographic axis. From Eq. ([EqnGammaij]), the directional mode Grüneisen parameters *γ***q***ν**a*1 and *γ***q***ν**c* take the form $$\label{EqnGammaHex} \begin{split} \gamma^{a\_1}\_{\mathbf{q}\nu} &= \gamma^{11}\_{\mathbf{q}\nu} = -\frac{1}{2} \left(\frac{\partial\text{ln}\omega\_{\mathbf{q}\nu}} {\partial\text{ln}a}\right)\_c\;,\\ \gamma^{c}\_{\mathbf{q}\nu} &= \gamma^{33}\_{\mathbf{q}\nu} = - \left(\frac{\partial\text{ln}\omega\_{\mathbf{q}\nu}} {\partial\text{ln}c}\right)\_a\;.\\ \end{split}$$ In the previous expressions, the phonon frequency derivatives are evaluated with respect to the variation of only one lattice parameter, the other one remaining fixed at its static equilibrium value. One should finally note that, in contrast to Eq. ([EqnGammaij]), the derivative entering *γ***q***ν**a*1 is taken by changing the *a* cell dimension, which affects *both* lattice vectors in the basal plane, hence the factor 1/2. We finally note that the derivation of the Grüneisen formalism  neglects the zero-point energy of the phonons, such that there is no thermal expansion at *T* = 0 K. [computation]Computational details ================================== First-principles calculations ----------------------------- All first-principles calculations were performed using the ABINIT software package . The bulk electronic structure was obtained within DFT, while response function and electron-phonon coupling calculations were performed within DFPT . We used a maximum plane-wave energy of 35 hartree and sampled the BZ using a 8 × 8 × 8 Monkhorst-Pack **k**-point grid. Spin-orbit coupling was taken into account as it is necessary to obtain the Rashba effect and since it has been shown to strongly affect both electronic  and vibrational  properties of BiTeI. We used Hartwigsen-Goedecker-Hutter (HGH) fully relativistic norm-conserving pseudopotentials , including explicitly the semi-core 5*d* electrons for Bi. The electron-phonon self-energy was computed with the ElectronPhononCoupling module. Throughout this paper, we rely on the generalized gradient approximation of the Perdew-Burke-Ernzerhof (PBE-GGA) functional , although it has been shown to overestimate the lattice parameters at ambient pressure for this particular material . Since the purpose of this paper is to investigate the temperature dependence of the TPT, we chose the functional that gives us the best overall agreement with experiment for the bare band gap and the projected orbital character of the band extrema at ambient pressure, as well as for the predicted critical pressure *P*C1. For detailed results and a complete discussion about our choice of PBE-GGA functional, see Appendix [AppendixStructure]. Structural properties --------------------- The unit-cell geometry has been optimized until the resulting forces on all atoms were lower than 10− 5 hartree/bohr3. Due to the layered nature of BiTeI, the in-plane and normal lattice parameters of BiTeI do not vary isotropically under hydrostatic pressure : *c* decreases more rapidly than *a* within the first GPa applied. In order to allow for this nonhomogenous variation of the lattice parameters, the external pressure was modelized by applying an isotropic stress tensor. The lattice was fully optimized for nine different pressures between 0 and 5 GPa. The resulting cell volumes were then fitted with a Murnaghan equation of state (EOS)  in order to validate the theoretically applied pressure. For all pressures below 3.5 GPa, the discrepancy between the EOS pressure and the applied pressure was less than 0.1 GPa, and less than 0.06 GPa for pressures surrounding the TPT. The lattice structure at intermediate pressures was obtained by interpolating from the results of those nine calculations. The optimized lattice structure at all pressures considered in this paper can be found in Table [TableOpt] of Appendix [AppendixStructure]. Lattice dynamics and electron-phonon coupling --------------------------------------------- Phonon frequencies and electron-phonon matrix elements were computed with a 12 × 12 × 12 homogeneous, Γ-centered Monkhorst-Pack **q**-point grid. Explicit calculations were done at seven different pressures, spanning both the trivial (0.0, 0.5, 1.0, and 1.5 GPa) and topological (3.0, 3.5 and 5.0 GPa) phases. The pressure dependence of the phonon frequencies for the Raman-active modes is in good agreement with experimental data [see Tables [TablePhononfreqZ0] and [TablePhononfreqZ1] of Appendix [AppendixStructure]]. In a typical nonadiabatic AHC calculation, one aims for a value of *η* smaller than the highest phonon energy (typically around *η* = 0.01 eV, see, for example, Nery *et al.* ). This however requires a very dense **q**-point sampling. As discussed in the Appendix of Antonius *et al.* , when using a coarser **q**-point grid, one risks artificially emphasizing the contribution of a discrete number of terms in Eq. ([EqnSigmaFan]) where, for a given **q***ν*-phonon, the value of *ɛ***k***n*0 − *ɛ***k+q***n*ʹ0 ± *ω***q***ν* gets vanishingly small, hence inducing rapid fluctuations of the self-energy . These peaks are a numerical artefact of the **q**-point sampling, and the presence of such fluctuations in the vicinity of the bare electronic eigenvalue *ɛ***k***n*0 could lead to an overestimation of the renormalization. In contrast, increasing *η* without caution can lead to an unphysical flattening of the self-energy, suppressing physically relevant features. We therefore chose *η*=0.1 eV, which allows the self-energy to be a smooth function of frequency, without suppressing physically relevant features. We verified that this choice was suitable for the claims reported in this paper. Lastly, our first-principles methodology relies on the assumption that the electron-phonon interaction can modify the electronic eigenenergies, but cannot by itself promote an electron to an excited state. Such a possibility would modify the resulting electronic density and would require one to evaluate the nondiagonal elements of the self-energy, Σ**k***n*, *n*ʹ [see Eq. (4) of Antonius *et al.* ], allowing *n*ʹ ≠ *n*. This procedure was, however, out of reach for the present work. Thus, we restricted our investigation of electron-phonon coupling to pressures where the predicted bare band gap was greater than the highest phonon energy (E*g* > 20 meV). Thermal expansion ----------------- The directional Grüneisen parameters *γ**a*1 and *γ**c* were evaluated using an 8 × 8 × 8 Γ-centered Monkhorst-Pack grid. Note that the Grüneisen parameters require a smaller **q**-point grid than the EPI interaction since they only depend on the phonon frequencies, which converge faster with the **q**-point sampling. The derivatives in Eq. ([EqnGammaHex]) were computed by central finite difference using three different volumes per direction, per pressure. For each calculation, the internal atomic coordinates in the *ĉ* direction were optimized at fixed volume, using the same convergence criterion as for the equilibrium structure. The compliance constants *s**i**j* were obtained by computing the strain-strain derivatives of the total energy within DFPT . We used the so-called relaxed-ion compliance tensor, which takes into account the relaxation of the atomic coordinates within the unit cell. This typically lowers the components of the elastic stiffness tensor (usually referred to as the elastic constants), as the resulting stress on the atoms is reduced by the relaxation process. Since the compliance tensor is the inverse of the elastic stiffness tensor, it typically increases the elastic compliance when compared to the clamped-ion result . Results and discussion ====================== Topological phase transition in the static lattice -------------------------------------------------- [FigStructure] [FigFatbands] [FigBareGapP] BiTeI is a layered material composed of alternating Bi, Te and I planes stacked along the high-symmetry crystallographic *ĉ* axis. This small band-gap semiconductor belongs to the noncentrosymmetric trigonal space group P3m1 (no. 156, mp-22965). The crystal structure and first BZ are shown in Fig. [FigStructure]. A common signature of a TPT is an inversion of the orbital character of the band structure in the vicinity of the band gap . In the case of BiTeI, a schematic analysis of the band splitting showed that the inverted bands should be of *p**z* character . Fig. [FigFatbands] displays the leading orbital character of the last valence and first conduction bands, for different pressures throughout the studied range. We computed the *p**z* character by projecting the wavefunction on the *l* = 1, *m* = 0 spherical harmonic centered around the different atoms in the unit cell. To emphasize the band inversion, Fig. [FigFatbands] shows the *relative* projected character on each band, namely the normalized difference between the Bi-6p*z* and Te/I-5p*z* projections. At ambient pressure [Fig. [FigFatbands](a)], the indirect gap is located in the *k**z* = *π*/*c* plane, close to the A high-symmetry point, in the direction of H [on the red path of Fig. [FigStructure](b)]. In the H-A and A-L directions, which both exhibit Rashba splitting, the valence band maximum (VBM) is dominated by *p**z* states of Te and I (dark brown), whereas the conduction band minimum (CBM) has Bi-*p**z* (light yellow) character. It is a trivial band insulator, with topological index Z2=(0;000). The Z2 topological index was computed by tracking the evolution of the hybrid Wannier charge centers, using the Z2Pack software package . For more information about the definition of the Z2 topological index for 3D topological insulators, we refer to Sections -C and -A of the review by Hasan and Kane . As pressure is increased, the band gap progressively decreases until it closes at *P*C1=2.08 GPa [Fig. [FigFatbands](b)]. At this point, the BZ exhibits three pairs of Dirac cones, which are split into pairs of Weyl nodes upon further pressure increase; see Fig. 11 of Liu and Vanderbilt . These Weyl nodes migrate within the BZ until they annihilate each other, at *P*C2=2.28 GPa [Fig. [FigFatbands](c)]. Fortunately, from previous studies , it is known that the gap closing occurs along symmetry line H-A (with *k**z* = *π*/*c*), and that it will reopen along a symmetry line rotated by *π*/6, without symmetry constraints in the *k**z* direction (namely, in the A-L-M mirror plane). In the following, we define the *P*C2 plane as the plane parallel to the H-A-L plane, with *k**z* = 1.07698 *π*/*c*, which is the *k**z* coordinate of the second band touching point displayed in Fig. [FigFatbands](c), where the Weyl nodes’ annihilation occurred. This plane corresponds to the shaded plane in Fig. [FigStructure](b). In a similar fashion, we identify the H-A-L plane as the *P*C1 plane. In the vicinity of the TPT, the valence and conduction band extrema show a mixed character, with an almost equal contribution of Bi-p*z* and Te/I-p*z* states [Fig.[FigFatbands](b-c)], foreshadowing the band inversion [Fig. [FigFatbands](d)] which occurs for *P* > 2.28 GPa. The system is then a strong topological insulator, with Z2=(1;001). In the TI phase, the minimal band gap is shifted from the H-A-L plane (*k**z* = *π*/*c*) to the *P*C2 plane (*k**z* = 1.07698 *π*/*c*), where the Weyl nodes’ annihilation occurred. The full pressure dependence of the band gap for the static lattice is shown in Fig. [FigBareGapP]. For the static lattice at *T* = 0 K, we thus obtain a WSM phase width of 0.2 GPa, in good agreement with previous calculations . Our value for the first gap closing *P*C1=2.08 GPa agrees with many experimental estimations  but is slightly lower than those of other experimental  and computational works . For comparison with experiments, one should note that, as the TPT of BiTeI occurs under hydrostatic pressure, direct observation of the topological surface states with angle-resolved photoemission spectroscopy (ARPES) would be challenging from an experimental point of view. As a consequence, experiments rather focused on indirect signatures of these states, for example, a broad minimum of resistivity , which do not allow a precise estimation of *P**C*. The difference with other computational works can be explained by the use of experimental pressure-dependent lattice parameters instead of the DFT-relaxed ones , as well as by the choice of projector-augmented wave (PAW) pseudopotentials without semi-core states for Bi  and the use of the Heyd-Scuseria-Ernzerhof (HSE) hybrid functional  to compute the static gap at ambient pressure. Indeed, the calculated critical pressure is quite sensitive to the choice of functional, as it may significantly modify the starting gap at ambient pressure. One must also recall that the DFT gap has fundamentally no physical meaning; a proper, theoretically adequate calculation of the ambient pressure gap would require using a many-body methodology like GW. Nevertheless, for the scope of this paper, we are more interested in the *relative* variation of the critical pressures rather than in their absolute values. These slight discrepancies with other works do not, therefore, undermine our conclusions. Our value of *P*C1 corresponds to a relative volume (*V*/*V*0) compression of  ∼ 12%, in good agreement with other calculations relying on the PBE functional . We also verified that the relative volume compression within the WSM phase was consistent with the original prediction of Liu and Vanderbilt . Temperature dependence of the topological phase transition ---------------------------------------------------------- Within this paper, we track the temperature dependence of the critical pressures *P*C1 and *P*C2 by evaluating how the electron-phonon interaction and the thermal expansion will affect the indirect band-gap value. Our analysis is based on several assumptions. First, we suppose that the Z2 topological index computed at *T* = 0 K for the static lattice remains unchanged unless the renormalization closes the gap. Second, since the system lacks inversion symmetry regardless of temperature, we assume that Liu and Vanderbilt’s argument  is still valid, such that a gapless WSM phase still separates the two insulating phases. We finally suppose that the *k**z* value of the *P*C2 plane remains unchanged by temperature, as it is unlikely that the Weyl nodes’ annihilation point in the BZ changes significantly from the static result. We verified that the position of both band extrema along the line in the TI phase does not change significantly over the range of temperatures and pressures studied, between the *P*C1 and *P*C2 planes. A more thorough analysis of the temperature dependence of the band-gap location within the *P*C1 and *P*C2 planes, as well as the pressure-temperature dependence of the Rashba splitting, can be found in Appendix [AppendixRashba]. A more formal approach would also require one to redefine the topological invariant for finite temperatures, relying on the density matrix rather than on the static ground state wavefunction, since temperature modifies the occupation of the electronic eigenstates . The question of the persistence of quantum topological order at finite temperatures, compared to a nonquantized “classical” topological order, is still under investigation . When considering the effect of temperature on the TPT within our framework, there are thus two possible outcomes, which depend on the sign of the band-gap renormalization. In the first case, a negative correction implies that the band gap closes with increasing temperature, which can both drive a topologically trivial system towards a band inversion and stabilize an already inverted band structure, thus globally promoting the topological phase. On the other hand, a positive correction will favor the trivial phase: the band gap opens with increasing temperature, preventing the band inversion in a trivial system and destabilizing a TI band structure until the band inversion is reversed. A more detailed version of this argument can be found in our proceedings paper . In the case of a pressure-induced TPT with an intermediate WSM phase, both critical pressures, *P*C1 and *P*C2, will be modified by temperature: a negative renormalization favoring the topological phase will diminish the amount of pressure required to close the band gap, while a positive renormalization promoting the trivial phase will increase it. The WSM phase width will also be affected by the relative strength of the renormalization between the trivial and topological phases. The combined effect of the strength and sign of the renormalization will determine if the WSM phase is widened or narrowed by increasing temperature. ### Electron-phonon interaction [FigBandRenormEPI] In a typical semiconductor, with a nearly parabolic dispersion near well-defined band extrema and a sufficiently wide band gap, the Fan contribution usually dominates the self-energy. One then expects from Eq. ([EqnSigmaFan]) to observe a decrease of the band-gap energy with increasing temperature: the VBM is repelled by neighboring occupied states with similar but lower energy, leading to a positive renormalization. Similarly, the CBM is repelled by neighboring unoccupied states with higher energy, yielding a negative renormalization. Couplings between states with different occupation factors are disadvantaged by the large energy difference in the denominator, due to the presence of the gap. This behavior is sometimes referred to in the literature as the Varshni effect . In BiTeI, the temperature dependence of the gap cannot be inferred by such a heuristic argument, since, on the one hand, the small band gap emphasizes the weight of couplings between the subsets of occupied and unoccupied bands compared to a typical semiconductor, and, on the other, the Rashba splitting creates regions in the BZ where a phonon with finite, nonvanishing wavevector **q** can couple electronic states with very similar eigenenergies. The EPI contribution to the VBM (center panel), CBM (top panel) and total band gap (bottom panel) is shown in Fig. [FigBandRenormEPI]. In order to track any temperature-induced change to the gap location and to accurately capture the renormalization for the minimal gap, the EPI was computed in the H-A-L plane for the trivial phase, and in the *P*C2 plane for the TI phase, for electronic states along the solid red and dashed blue paths shown in Fig. [FigStructure](b). In the trivial insulator phase, both the VBM and the CBM are shifted towards higher energies. While the CBM renormalization is almost pressure independent for a given temperature, the VBM is strongly affected by pressure, its renormalization dropping by roughly a factor of 3 between 0 and 1 GPa. This behavior is most likely linked to the higher compressibility along the *c* axis at lower pressures . The strength of this variation could, however, be amplified by the overestimated compressibility predicted by the PBE functional for BiTeI . In our case, this results in a total gap renormalization that changes sign within the trivial phase, going from negative to positive shortly after 0.5 GPa. Nevertheless, the EPI contribution to the gap remains quite small compared to the bare gap energy at ambient pressure (at most 20 meV at all temperatures). As argued before, such an opening of the gap with temperature when approaching the TPT will delay the first gap closing, thus moving *P*C1 towards a slightly higher value. In the TI phase, the renormalization displays a different behavior: the VBM and CBM renormalizations have opposite signs, both contributing to a decrease of the gap energy with increasing temperature. Again, this behavior is not favorable for the nontrivial case as it works to revert the already inverted gap, driving the system back to the trivial phase. One should also note that, in this phase, it is the VBM that exhibits a pressure-independent behavior, while, for the CBM, the negative renormalization increases steadily with increasing pressure. This seemingly distinct behavior is simply a signature of the band inversion: it can be understood by recalling that the orbital decomposition of the state being corrected at the VBM in the TI phase has the same character as the CBM in the trivial phase, as was emphasized in Fig. [FigFatbands]. Similarly, the CBM in the trivial phase has the same leading orbital character as the VBM in the TI phase. From this point of view, the band extremum dominated by Bi-*p**z* character exhibits a quasi-pressure-independent renormalization, while the extremum with Te/I-*p**z* character is gradually shifted towards lower energies throughout the studied pressure range. It is also rather peculiar that only the Te/I states are affected by the change of topology, which can be seen in the temperature-dependent renormalization at a given pressure going from positive to negative as the system goes through the phase transition. A more thorough explanation of this sign change would require one to probe the EPI in the gap-closing region, which was not possible in the scope of this paper. Nevertheless, if considering the gap behavior in both phases, we can infer that the EPI globally promotes the trivial phase, pushing the TPT towards higher pressures as temperature increases. ### Thermal expansion [FigBandRenormTE] The TE contribution to the total renormalization is displayed in Fig. [FigBandRenormTE], for the CBM (top panel), VBM (center panel), and indirect band gap (bottom panel). When solely looking at the band renormalization, one finds a seemingly monotonic behavior, both bands being shifted towards lower absolute energies. The pressure-dependent renormalization at a given temperature does not seem affected by the leading orbital character of the band extrema, as was the case for EPI. In the TI phase, the renormalization rate at a given pressure is greater for the CBM than for the VBM, hence a global decrease of the gap energy with increasing temperature. In the trivial phase, the indirect band-gap renormalization shows a far more erratic behavior: while for most pressures the renormalization rate is greater for the VBM than for the CBM, hence a gap opening with increasing temperature, at 0.5 GPa we obtain the opposite trend. This is reminiscent of the sign change observed in this pressure range for the EPI contribution [see Fig. [FigBandRenormEPI], lower panel], which we again attribute to an overestimated compressibility in the low-pressure regime. For a layered structure like BiTeI, a higher compressibility will mainly enhance the *s*33 compliance constant entering the definition of *α**c* (Eq. ([EqnAlpha])), which will, in turn, modify the thermal expansion along this axis and the resulting gap renormalization. As discussed more thoroughly in Appendix [AppStructural], the lattice structure is reasonably well described by PBE for *P* ≥ 1 GPa. Thus, in the following section, we only consider the normalization trends for pressures of 1 GPa and higher to evaluate the temperature-dependent critical pressures. One should finally note that the TE contribution has the same order of magnitude as the EPI contribution, such that it cannot be neglected when investigating the temperature dependence of the gap, as it is often done for materials with light ions. We recall here that, as mentioned in Section [theoryTE], the Grüneisen formalism neglects the zero-point phonon vibrational energy, such that the *T* = 0 K lattice parameters are identical to the static ones. We estimated the missing zero-point thermal expansion contribution to the band-gap renormalization using the high-temperature extrapolation method displayed in Fig. 2 of Cardona and Thewalt . In this method, the renormalized lattice parameters at *T* = 0 K are evaluated by extrapolating the high-temperature slope of Δ*a*(*T*)/*a* and Δ*c*(*T*)/*c* to 0 K. This led to a nearly constant contribution of  ∼ 2 meV in the trivial phase and  ∼  − 5 meV in the topological phase. This missing contribution does not affect the qualitative trends previously described. The renormalization trends observed when approaching the TPT in both insulating phases, a gap opening in the trivial phase and a gap closing in the topological phase, are precisely the trends observed for the EPI contribution. Consequently, just like EPI, TE delays the TPT, moving both critical pressures towards higher values with increasing temperature. Excluding *P* < 1 GPa from the analysis for previously stated reasons, these trends agree qualitatively with Monserrat and Vanderbilt , despite being smaller in absolute value. This could partly be attributed to the fact that Monserrat and Vanderbilt minimized the Gibbs free energy independently for each lattice parameter, while our methodology [see Eq. ([EqnGammaHex]], correlates the expansion along a given axis with the “thermal pressure”  induced on the lattice by varying both lattice parameters, through *γ**a*1 and *γ**c*. This discrepancy could also be explained by anharmonic contributions to the phonon free energy, which our perturbative methodology does not capture. A more refined and numerically accurate calculation of the anisotropic thermal expansion coefficients would rely on the full temperature-dependent phonon-perturbed potential, taking into account all anharmonic interactions, as implemented in the Temperature Dependent Effective Potential method (TDEP) method . Weyl semimetal phase evolution ------------------------------ [FigGapPT] [FigPTDiagram] The temperature dependence of the WSM phase width was assessed by combining the results displayed in the two previous sections to evaluate the full gap energy renormalization at each pressure and temperature. For each **k** point on the paths considered [see Fig. [FigStructure](b)], the EPI and TE corrections were added to the static band structure before computing the resulting temperature-dependent band gap. Since, for technical reasons discussed in Section [staticresults], we could not directly probe the EPI in the gap closing region, we tracked the temperature dependence of *P*C1 and *P*C2 by extrapolating the total renormalization behavior for each band extrema from neighboring pressures towards the TPT in the trivial and topological phases. This correction was then added to the bare gap energy computed with the static lattice (black markers with denser pressure sampling), as shown by the colored dashed lines in Fig [FigGapPT]. The temperature-dependent critical pressures extracted from Fig. [FigGapPT] are explicitly shown in the pressure-temperature phase diagram of Fig. [FigPTDiagram] (black markers). For visual reference, the critical pressures obtained with the static lattice are shown with dotted lines. The numerical values for *P*C1(*T*) and *P*C2(*T*) are detailed in Table [TableWSM] of Appendix [AppendixWSM], as well as the WSM phase boundaries obtained by considering only the contribution from EPI or TE. As discussed in the previous sections, both EPI and TE are unfavorable to the topological phase. We therefore observe without surprise that both *P*C1 and *P*C2 are shifted towards higher pressures. However, as the renormalization from both contributions is stronger in magnitude in the topological phase, the temperature-dependent variation is much weaker for *P*C1 than for *P*C2. Thus, the WSM phase width increases with temperature. This effect is already sizable at *T* = 0 K, where the zero-point motion of the ions induces a 15% increase in the WSM phase width, compared to the static lattice. It has almost doubled by *T* = 100 K, despite the absolute WSM phase width remaining small at 0.39 GPa. The only deviation to this behavior is the slight decrease of *P*C1 above room temperature. This can be explained by the fact that, for *P* = 1 and 1.5 GPa, EPI and TE do not shift the band extrema in the same way in the BZ. Therefore, in this pressure range, the total renormalization does not sum up to the individual corrections. One should also take into consideration the fact that, at finite temperature, thermal excitations do not allow one to physically distinguish between an insulating and a gapless phase, if the band-gap energy is smaller than *k**B**T*. Thus, for each temperature, we defined a crossover region between topologically distinct phases by evaluating the pressure difference between a given static lattice critical pressure and the pressure at which *E**g* = *k*B*T* in the appropriate insulating phase, using the data from Fig. [FigBareGapP]. In Fig. [FigPTDiagram], these regions identify the phase space where, within our methodology, the topological index cannot be properly assessed. Lastly, we also added to Fig. [FigPTDiagram] a color intensity gradient in the high-temperature regime of the WSM phase, to emphasize the fact that our calculations did not explicitly treat the temperature dependence within this pressure region. Therefore, there remains some uncertainty about the topological nature of BiTeI in this region of the pressure-temperature phase space. From these results, we can conclude that temperature effects do not restrain the experimental detection of the WSM phase. In the low-temperature regime, where quantum effects can more easily be observed thanks to the reduction of thermal noise, the WSM phase widening remains however small, despite being sizable when compared to the static lattice phase width. The experimental signatures of the TI phase will be found at higher pressures with increasing temperature. These qualitative temperature dependencies can assist the experimental effort when designing experiments the purpose of which is to detect signatures of a topologically nontrivial phase. Moreover, they shed some light on the subtle interplay between EPI and TE and their effect on the phase space of topological phases in noncentrosymmetric materials. Since the presence of gapless modes characterizes both the WSM and Z2 TI phases, additional experiments focusing on observables that can physically distinguish between these two nontrivial phases, and that are feasible under isotropic hydrostatic pressure, could provide more insight about this phenomenon. Discussion ---------- The purpose of this paper was to determine the qualitative trend of the temperature-dependent renormalization of *P*C1 and *P*C2: the global behavior of the TPT, more than the precise numerical values of the critical pressures, was our main target. From this point of view, there are several limitations to our analysis, the first of them being the **q**-point sampling for the EPI calculation. BiTeI is a polar material, i.e., has nonvanishing Born effective charges, with infrared-active phonon modes. At long wavelength, ∣*q*∣ → 0, atomic displacements along longitudinal optical (LO) modes generate a macroscopic electric field throughout the material, which can couple to the electrons through dielectric interaction. This particular type of EPI, known as the Fröhlich interaction , has been shown to dominate the ZPR for polar materials  and to cause an unphysical divergence of the ZPR within the adiabatic AHC framework , in which Eq. ([EqnSigmaFan]) is approximated by neglecting the phonon frequencies in the denominators. Thus, for polar materials, evaluating the precise contribution of the long-range Fröhlich interaction to the ZPR within the nonadiabatic AHC framework requires a very dense **q**-point grid, which for technical reasons was not available to us at the time of this study. The methodology developed by Nery and Allen  to estimate the polar contribution missing from a coarse **q**-point grid calculation could also not be straightforwardly applied to BiTeI for two reasons. First, the band extrema are not located at the zone center and spin-orbit coupling is mandatory to describe Rashba splitting correctly. Second, the validity of the parabolic effective mass approximation near the band extrema is also questionable for this material, especially at higher pressures. Our results should consequently be understood as a lower bound to the full EPI contribution. The extrapolation technique used to evaluate the temperature dependence of the critical pressures is also by itself limited by the assumption that the renormalization behavior we observe in both insulating phases varies monotonically towards the TPT. This assumption could only be validated by an explicit calculation of the EPI in the close vicinity of the TPT, which would require one to lift the diagonal self-energy approximation, as excitations across the gap will no longer be negligible. The addition of these nondiagonal coupling terms to the self-energy, which we conjecture will have a significant contribution, could alter the general trends observed in Fig. [FigGapPT]. These extra terms will also be crucial to fully characterize the effect of EPI on the Te/I-*p**z* band extrema near the critical pressures (VBM for Z2 = 0, CBM for Z2 = 1; see Fig. [FigBandRenormEPI] and Section [EPIResults]). Lastly, our analysis supposed a perfect semiconductor crystal, while experimentally BiTeI has been shown to be naturally *n*-doped due to small deviations in stoichiometry . The presence of such doping charge could slightly affect the phonon-perturbed crystal potential, which could, in return, modify the matrix elements entering the EPI self-energy. We do not however expect this self-doping to have a strong impact on the qualitative trends observed throughout this paper, as long as the amount of defects remains small compared to substitutional doping. Conclusion ========== In the present paper, we have characterized the temperature dependence of the topological phase transition in BiTeI using first-principles methodologies based on DFPT. The electron-phonon interaction was obtained with the nonadiabatic AHC methodology, while the thermal expansion was computed using the Grüneisen parameters formalism. The indirect band-gap renormalization induced by EPI changes sign as the system undergoes the phase transition, opening the gap in the trivial phase and closing it in the TI phase. The band extremum with leading Bi-6*p**z* character displays a quasi pressure-independent behavior, while the Te/I-5*p**z* extremum manifests a strong sensitivity to both pressure and topology. The thermal expansion contribution to the band-gap renormalization has the same order of magnitude as the EPI contribution. At a given temperature, both band extrema are affected in a similar manner throughout the pressure range, regardless of their leading orbital character. The resulting band-gap renormalization nevertheless captures the change of topology: it mainly increases the indirect gap in the trivial phase and reduces it throughout the topological phase. The combined effect of EPI and TE globally moves the TPT of BiTeI towards higher pressures. The resulting temperature-dependent renormalization is stronger in magnitude in the TI phase compared to the trivial phase, such that the intermediate WSM phase is widened by temperature. Clear signatures of the Z2 topological insulator phase should, therefore, appear at higher pressures as temperature is increased. Overall, our results indicate that temperature effects are not negligible for materials with heavy ions and must be accounted for when engineering devices relying on topological properties. Our findings can also aid the search for experimental evidence of the topologically nontrivial phases of BiTeI, and more generally reveal how temperature can have a substantial influence on the phase space of topological phases of noncentrosymmetric materials. ***Note added.*** Recently, we learned of an article by Lihm and Park which is relevant to this research field. This research was financially supported by the Natural Sciences and Engineering Research Council of Canada (NSERC), under the Discovery Grants program grant No. RGPIN-2016-06666. Computations were made on the supercomputers Beluga and Briaree managed by Calcul Québec and Compute Canada. The operation of these supercomputers is funded by the Canada Foundation for Innovation (CFI), the Ministère de l’Économie, de la Science et de l’Innovation du Québec (MESI) and the Fonds de recherche du Québec - Nature et technologies (FRQ-NT). V.B.-C. acknowledges support by the NSERC Alexander Graham Bell Canada Graduate Scholarship doctoral program. Effect of Van der Waals dispersion correction ============================================= Since the first prediction of the TPT in BiTeI , numerous studies of ground-state properties of bismuth tellurohalides (BiTeX, X=Cl, Br, I) have appeared in the first-principles literature . Many of them have discussed the fact that PBE-GGA does not capture the Van der Walls bonding properly along the normal axis for this class of materials, resulting in a general overestimation of the lattice parameters , up to 7% for the *ĉ* axis . PBE-GGA was also criticized as yielding an unrealistic compressibility for BiTeI at low pressures . It has also been shown that dispersion-corrected DFT, which accounts for Van der Waals forces through a semi-empirical dispersion potential, reproduces more accurately the pressure dependence of the lattice structure for bismuth tellurohalides . These studies did not, however, investigate the impact of these corrections on the phonon frequencies, nor on the electron-phonon interaction. In the following, we include the Van der Waals interaction within the DFT framework, using dispersion-corrected DFT-D3BJ, where a semi-empirical dispersion potential is added  in conjunction with Becke-Jonhson damping  to avoid short-range diverging behavior. DFT-D3BJ was shown to be one of the most reliable dispersion-corrected methods to describe vibrational properties . Both the trivial (0, 1.5 GPa) and the topological (3.5, 5 GPa) phases are considered. Structural properties --------------------- [TableLattice] |c||c|c|c||c|c|c| && & Exp. & PBE & DFT-D3BJ & Exp. & PBE & DFT-D3BJ a (Å) & 4.34  & 4.44 & 4.36 & 4.15  & 4.23 & 4.12 c (Å) & 6.85  & 7.31 & 6.78 & 6.52  & 6.55 & 6.49 c/a ratio & 1.58  & 1.65 & 1.56 & 1.57  & 1.58 & 1.58 gap (eV) & 0.38 , 0.36 , 0.33  & 0.30 & 0.03 & & 0.16 & 0.30 Bi-Te *ĉ* (Å) & 2.10  & 1.70 & 1.73 & & 1.75 & 1.78 Bi-I *ĉ* (Å) & 1.72  & 2.11 & 2.15 & & 2.16 & 2.17 [TableOpt] |c||c|c|c|c|c|c|c| & 0 GPa & 0.5 GPa & 1 GPa & 1.5 GPa & 3 GPa & 3.5 GPa & 5 GPa a (Å) & 4.44 &4.41 &4.39 &4.37 &4.30 &4.28 & 4.23 c (Å) & 7.31 &7.05 &6.90 &6.81 &6.66 &6.62 & 6.55 Bi-Te *ĉ* (Å) & 1.70 & 1.71&1.71 &1.72 &1.74 &1.75 & 1.75 Bi-I *ĉ* (Å) & 2.11 &2.12 &2.13 &2.14 &2.15 &2.15 & 2.16 [TableRashba] |c|c|c|c| & Exp. & PBE &DFT-D3BJ ER (meV) & 100  108 & 103 & 181 *k*R (Å− 1) & 0.052 , 0.046 , 0.050 & 0.050&0.053 *α*R (eV ⋅ Å) & 3.85  4.3 & 4.13 & 6.86 The lattice parameters, bare band gap energies and equilibrium distance between the Te/I and Bi planes for 0 and 5 GPa are shown in Tables [TableLattice] and [TableRashba], with comparison to available experimental data. In good agreement with previous works , PBE-GGA overestimates the lattice parameters at ambient pressure more than what is typically expected from DFT calculations, at 2.3% for *a* and 6.7% for *c*, while the dispersion-corrected method reproduces more accurately the experimental structure. At 5 GPa, both methods agree within 2% of experimental value, for both *a* and *c*. The interplanar equilibrium distances at both pressures are in good agreement with experiment. Note that the positions of Te and I planes are inverted when compared to the experimental results ; this effect has been attributed to the fact that x-ray diffraction measurements cannot establish a clear distinction between the Te and I planes, due to their very similar atomic radii and charge . This inversion is systematic throughout the first-principles literature . Analyzing the full pressure dependence of the lattice parameters for PBE-GGA and DFT-D3BJ between 0 and 5 GPa (not included here) lead to similar results as those of Güler-Kiliç and Kiliç (see Figs. 3(g) and 3(h) of ), where the overestimation of *a* remains around 2% throughout the whole pressure range, while for *c* it drops rapidly under 2% after 1 GPa. Thus, for $P\gtrsim 1$ GPa, the lattice structure is similarly well described by both methods. The resulting lattice parameters and interatomic distances along *ĉ* obtained by the optimization procedure are presented in Table [TableOpt] for all pressures included in this work. The bulk band gap of BiTeI has been measured at ambient pressure via ARPES , optical spectroscopy  and optical spectral ellipsometry , yielding respectively 0.38, 0.36 and 0.33 eV. Another soft x-ray ARPES experiment obtained a smaller value of 0.26 eV ; they however argued that their measurement probes a region between the bulk and the subsurface, where some band bending effects could lower the measured band gap compared to the bulk value. Our results with PBE, 0.30 eV for the indirect gap and 0.33 eV for the direct gap, are in pleasantly good agreement with experiment, thanks to the overestimation of the lattice parameters counterbalancing PBE’s well-know gap underestimation . For DFT-D3BJ, the band gap is strongly underestimated: the system is almost metallic, with a band gap of about 30 meV at ambient pressure. We are not aware of any reference for experimental gap measurements in the topological phase. Beyond the crystal structure, the parameters characterizing the Rashba splitting can also be used to assess the level of agreement of the calculated electronic structure with experiment. The Rashba energy, ER, is defined as the energy difference between electronic states at the band extrema and at the time-reversal protected degeneracy point of spin-split bands, namely A for BiTeI [see Fig. [FigFatbands]]. The momentum offset between those two **k**-points is captured by the Rashba wavevector, **k**R. The Rashba parameter, $$\label{EqnRashba} \alpha\_\text{R} = \frac{2 E\_\text{R}}{k\_\text{R}},$$ is the strength of the spin-orbit interaction in the Rashba Hamiltonian. These parameters are clearly displayed in the left panel of Fig. 2a and Eq. (1) of Monserrat and Vanderbilt . Our calculations agree reasonably well with available experimental data [see Table [TableRashba]], with a clear advantage for PBE-GGA. Vibrational properties ---------------------- [TablePhononfreqZ0] |c||c|c|c|c||c|c|c| && Mode & Exp.;; ; & Calc.; & PBE & DFT-D3BJ& Exp.; & PBE & DFT-D3BJ E1 & 52.5; 55; 56; n/a & 59; 54 (73) & 53 (74) & 57 (77) & 60; n/a & 58 (77) & 62 (80) A11 & 90.5; 93; 92; 92 & 92; 88 (94) & 90 (94) & 90 (100) & 96; 96 & 92 (100) & 95 (106) E2 & 99; 102; 101; 101& 101; 96 (115) &97 (118) & 101 (117) & 106; 107 & 102 (119) & 106 (118) A12 & 146.5; 150; 148; 146 & 152; 141 (142) & 141 (142) & 146 (148) & 152; 153 & 147 (148) & 151 (152) [TablePhononfreqZ1] |c||c|c|c||c|c|c| && Mode & Exp.; & PBE & DFT-D3BJ& Exp. ;& PBE & DFT-D3BJ E1 & 66; n/a & 63 (81) & 67 (84) & 70*a*; n/a & 68 (84) & 71 (87) A11 & 105; 104 & 99 (108) &102 (111) & 107; 108 & 102 (114) & 109 (121) E2 & 112; 112 &108 (120) & 115 (120) & 115; 115 & 111 (122) & 115 (123) A12 & 162; 162 & 154 (155) & 159 (151) & 165; 167 & 159 (160) & 164 (166) At the zone center, Γ, BiTeI has *C*3*v* point group symmetry, yielding four Raman-active modes, whose irreducible representations are two doubly-degenerate *E* modes and two *A*1 modes . Since BiTeI has no inversion center, all modes are both Raman and infrared active . Phonon frequencies for these modes are compared to available experimental data and previous first-principles calculations in Tables [TablePhononfreqZ0] and [TablePhononfreqZ1]. At ambient pressure, we obtain a relative mean deviation with experiment of 4.1% for PBE and 1.6% for D3BJ. At 5 GPa, these deviations drop to 3.8% for PBE and 0.8% for D3BJ. We verified that the general pressure dependence of the Raman-active frequencies agreed well with experiment in both cases. Our calculated values for LO-TO splitting are in excellent agreement with previous calculations . A small feature measured at *ω*=118.5 cm− 1 at ambient pressure  can be associated with the LO-TO split E2 mode, thus validating our results. Discussion ---------- [FigVDW] The purpose of this work is to analyze the temperature dependence of the TPT in BiTeI. The numerical value of the critical pressures is directly related to the bare band gap value, which will be corrected by electron-phonon interaction. From Eq. ([EqnSigmaFan]) and ([EqnSigmaDW]), we can see that the phonon frequencies and the bare electronic eigenvalues are key physical quantities in this correction. This dependence goes beyond the simple electronic dispersion: as discussed in the main text, the band gap also plays a crucial role. Indeed, the denominator captures the “unlikelyhood” of couplings between two eigenstates with a large energy difference, rooted in perturbation theory. Furthermore, considering the renormalization of both band extrema forming the fundamental band gap, the gap energy will act as a “barrier”, giving a lower bound on the smallest value of the denominator for couplings with eigenstates within the subset of bands located across the gap. Therefore, a significant underestimation of the bare band gap will artificially emphasize interband couplings and bias the resulting zero-point renormalization. While one could argue that this effect should be small at *T* = 0 K, especially for a material with heavy atoms like BiTeI, it should not be forgotten that it will be amplified by the phonon occupation factor at higher temperatures, and thus could lead to significantly different conclusions. Lastly, since we are dealing with a TPT, the orbital character of the valence and conduction bands close to their extrema is subject to an inversion process. This phenomenon is captured by the wavefunction, such that a wrong band character could alter the different matrix elements entering the self-energy. While DFT-D3BJ provides the most accurate descriptions of the cell geometry and Raman-active phonon frequencies, our results indicate that in our case, it is not a suitable choice to track the temperature dependence of the TPT. On the one hand, despite its accurate description of Van der Waals bonding between the atomic planes, DFT-D3BJ delivers a severe underestimation of the band gap at ambient pressure. This can be understood by considering the van der Waals correction as acting like an “effective pressure” along the *ĉ* axis. As previously argued, such a small value of the band gap would not only flaw our bare *P*C predictions (by the same arguments as Rusinov *et al.* ) but could also artificially strengthen some interband couplings in the EPI self-energy. One could easily argue that this band gap underestimation could be overturned simply by applying a scissor shift operator to the eigenvalues entering the self-energy, or be compensated by evaluating the pressure shift that would bring the gap to the experimental value. We considered both these approaches. For the latter, we estimate that one must shift towards a negative pressure *P*0 ≃  − 2 GPa in order to bring the DFT-D3BJ gap close to the experimental value, thus bringing the lattice parameters very close to the PBE relaxed values at ambient pressure. About the former, we have investigated the temperature-dependent renormalization at 0, 1.5, 3.5 and 5 GPa, with and without the scissor shift operator. The results for *P* = 1.5 GPa shown in Fig. [FigVDW] are quite striking: PBE-GGA (red circles) and DFT-D3BJ with (green diamonds) and without (blue squares) the scissor shift disagree about the sign of the renormalization for the CBM. This sign disagreement is also reflected in the total gap renormalization. This discrepancy brings to light a fundamental aspect of the calculation, namely the fact that the EPI renormalization is affected by the system’s topology. From Z2 topological invariant calculations, we found that at 1.5 GPa, the DFT-D3BJ wavefunction already is in the *topological* state (yielding a *negative* correction), while for PBE-GGA it is still in the *trivial* state (where the correction is *positive*). These are *precisely* the trends observed in Fig. [FigBandRenormEPI]. Moreover, if we compare the DFT-D3BJ without scissor shift (blue squares) to the PBE-GGA result at 3.5 GPa (dashed yellow line, triangle markers), which is roughly *P* = 1.5 GPa − *P*0, we can see that those two calculations, with both wavefunctions in the topological state and with roughly the same lattice parameters and bare band gap, deliver almost identical results. Thus, even if DFT-D3BJ delivers the most accurate lattice parameters and Raman-active phonon frequencies at 1.5 GPa when compared to experiment, it predicts a Z2 topological index that contradicts all experimental evidences of the phase transition. Therefore, computing the renormalization with DFT-D3BJ and a scissor shift operator for a given pressure would roughly result in applying the electron-phonon contribution from a PBE-GGA calculation at another pressure, shifted by *P*0. This is especially crucial when investigating pressures neighboring the TPT, as it can lead to wrongly assign a renormalization computed with a TI wavefunction to a shifted band structure intended to describe the trivial phase. For pressures sufficiently far from the transition in the TI phase, we expect the trend of a DFT-D3BJ with scissor shift calculation to be correct, despite a discrepancy in the numerical value of the renormalization. For the trivial phase, a Z2 calculation showed that 0 < *P**C*1 < 0.5 GPa for DFT-D3BJ. Therefore, we cannot rely on such results to describe the ambient pressure renormalization since the resulting wavefunction does not predict the correct band character. For these reasons, we chose to rely on PBE-GGA functional: despite a clear overestimation of the lattice parameters, it does deliver a band gap, Rashba splitting and Raman-active phonon frequencies that are in reasonable overall agreement with experiment. Moreover and most crucially, it exhibits the right band characters at ambient pressure and predicts a critical pressure *P*C1 in agreement with experimental evidence. Pressure and temperature-dependent Rashba Splitting =================================================== [FigRashbaPT] As discussed in the main text, one of the assumptions underlying our work is that the Weyl nodes trajectory throughout the WSM phase is not qualitatively affected by temperature. In other words, we suppose that the *k**z* location of the *P*C1 and *P*C2 planes remains constant; hence we only investigated EPI and TE within these planes. Without having access to the pressure and temperature dependence of the band structure in the full BZ, we can nevertheless gain significant insight by tracking the pressure and temperature dependence of the different parameters characterizing Rashba splitting within these planes. The Rashba energy, *E*R, captures the steepness of the band’s slope in the vicinity of the band gap. The Rashba momentum, *k*R, tracks the distance between the band extrema and the degeneracy point at (0, 0, *k**z*), that is, the temperature-dependent band gap location within a given *k**z* plane. Finally, the Rashba parameter, *α*R, captures the strength of the Rashba interaction (Eq. ([EqnRashba])), which is related to the potential gradient along the *ĉ* axis . Fig. [FigRashbaPT] presents the results for the Rashba energy (a), Rashba momentum (b) and Rashba parameter (c), for both the valence (bottom panels) and conduction bands (upper panels). We emphasize that these parameters were computed for the minimal band gap, namely on the (H-A, *P*C1) line for the trivial phase and on the for the TI phase. Hence, the numerical values for ambient pressure in this figure should not be compared to experimental data, as available ARPES measurements were done on the line. The trends reported here nevertheless agree with those of Monserrat and Vanderbilt  for the direction. As can be seen in Fig. [FigRashbaPT], pressure has a stronger effect than temperature on the Rashba parameters. This behavior can be qualitatively understood as increasing temperature enhances atomic vibrations around their slightly shifted equilibrium positions, while pressure modifies more significantly these equilibrium positions by reducing the distance between the atomic planes. Hence, for a given temperature, increasing pressure enhances the potential gradient along the *ĉ* axis, thus increasing *α*R. On the contrary, for a given pressure, temperature increases the complex interplay between the electrons and the lattice, yielding a reduction of *α*R. These are the behaviors observed in the trivial phase [Fig. [FigRashbaPT](c), diamond markers]. The TI phase displays the opposite behavior [Fig. [FigRashbaPT](c), round markers]: *α*R is reduced by pressure and increased by temperature. To understand this behavior, we must analyze more carefully both the Rashba energy and momentum, as *α*R captures the interplay between those two quantities. On the one hand, for all pressures, the Rashba energy [Fig. [FigRashbaPT](a)] almost steadily increases with temperature, except for the CB in the trivial phase, which shows a very weak decrease with temperature. On the other hand, the Rashba momentum is almost unaffected by pressure and temperature in the TI phase [Fig. [FigRashbaPT](b), round markers], while it increases almost steadily with temperature for pressures below the TPT. The opposite trends of *α*R between the trivial and TI phase can, therefore, be understood by two observations. On the one hand, in both phases, the pressure dependence of *α*R is governed by the Rashba energy, which increases in the trivial phase and decreases in the TI phase. On the other, their opposite temperature dependence is caused by the Rashba momentum. In the trivial phase, both *E*R and *k*R increase with temperature, the latter being more significant than the former, hence the decrease of *α*R. In the TI phase, the increase of *α*R captures the change of the Rashba energy, as *k*R is scarcely affected by temperature. We cannot, however, determine if this behavior is linked to the TPT or merely a consequence of an increased bonding along *ĉ* induced by increased pressure. This question could be addressed by investigating the temperature dependence of the high-pressure Rashba splitting in other bismuth tellurohalides, BiTeBr and BiTeCl, which are not known to exhibit a pressure-induced TPT . Lastly, we verified that the trends presented in Fig. [FigRashbaPT] for the TI phase do not vary if the temperature-dependent band structure is evaluated in the *P*C1 plane rather than the *P*C2 plane, remaining in the A-L direction. Hence, it seems reasonable that these trends, including the quasi temperature-independence of the Rashba momentum, could be applied to any *k**z* between those planes. We also verified that the temperature-dependent values of *P*C2 are not significantly altered if considering the EPI and TE corrections computed in the *P*C1 plane. Temperature-dependent Weyl semimetal phase width ================================================ [TableWSM] |c||c|c|c||c|c|c||c|c|c| Temperature&&& (K) & *P*C1 & *P*C2 & width & *P*C1 & *P*C2 & width & *P*C1 & *P*C2 & width (GPa) Static & 2.08 & 2.28 & 0.20& 2.08 & 2.28 & 0.20& 2.08 & 2.28 & 0.20 0 & 2.08 & 2.31 & 0.23 &2.08 & 2.28 & 0.20 & 2.08 &2.31 &0.23 100 & 2.10 & 2.42 & 0.32 & 2.09 &2.39 &0.30 & 2.14 &2.53 &0.39 200 & 2.14 & 2.57 & 0.44 & 2.16 & 2.55 & 0.39 & 2.24 & 2.82 &0.58 300 & 2.17 & 2.73 & 0.56 & 2.24 & 2.70 & 0.46 & 2.30 & 3.09 & 0.79 400 & 2.20 & 2.89 & 0.69 & 2.33 & 2.83 & 0.50 & 2.32 & 3.33 & 1.01 500 & 2.22 & 3.05 & 0.83 &2.46 & 2.95& 0.51 & 2.30 & 3.55 & 1.25 In Table [TableWSM], we report the numerical values for the temperature-dependent WSM phase boundaries, namely *P*C1(*T*) and *P*C2(*T*). These values were extracted from Fig. [FigGapPT] using the extrapolation procedure described in Sect. [WSMResults] and are explicitly plotted in Fig. [FigPTDiagram]. For comparison, we also include the temperature-dependent critical pressures obtained by considering the EPI and TE corrections independently. 105ifxundefined [1] ifx#1 ifnum [1] #1firstoftwo secondoftwo ifx [1] #1firstoftwo secondoftwo ““#1””@noop [0]secondoftwosanitize@url [0]` 12`$12`&12`#12`12`\_12`%12@startlink[1]@endlink[0]@bib@innerbibempty   and   ,  [Rev. Mod. Phys.  **82**, 3045 ( 2010)](\doibase 10.1103/RevModPhys.82.3045)  ,  , and   ,  [Phys. Scr.  **T164**,  014001 ( 2015)](\doibase 10.1088/0031-8949/2015/T164/014001)  ,  ,  ,  ,  , ,  ,  ,  ,  ,  ,  and   ,  [Science  **332**,  560 ( 2011)](\doibase 10.1126/science.1201607)  ,  ,  ,  ,  ,  ,  ,  ,  and   ,  [Phys. Rev. B  **90**,  161202(R) ( 2014)](\doibase 10.1103/PhysRevB.90.161202)  ,  ,  ,  ,  ,  ,  ,  ,  , ,  ,  and   ,  [Nat. Mater.  **11**,  1023 ( 2012)](\doibase 10.1038/nmat3449)  ,  ,  ,  ,  , ,  and  ,  [Nature  **452**, 970 ( 2008)](\doibase 10.1038/nature06843)  ,  ,  ,  ,  ,  ,  ,  ,  and  ,  [Phys. Rev. Lett.  **111**,  155701 ( 2013)](\doibase 10.1103/PhysRevLett.111.155701)  ,  ,  ,  ,  , ,  and  ,  [Phys. Rev. Lett.  **110**,  107401 ( 2013)](\doibase 10.1103/PhysRevLett.110.107401)  ,  ,  ,  ,  and   ,  [J. Mater. Chem. C  **4**,  2243 ( 2016)](\doibase 10.1039/C6TC00020G)  ,  ,  ,  ,  ,  ,  , ,  and   ,  [Nat. Phys.  **10**,  294 ( 2014)](\doibase 10.1038/nphys2898)  ,  ,  ,  ,  and   ,  [Nano Lett. **15**,  1222 ( 2015)](\doibase 10.1021/nl5043769)  ,  ,  ,  ,  and   ,  [Phys. Rev. Lett.  **111**,  056803 ( 2013)](\doibase 10.1103/PhysRevLett.111.056803)  ,  ,  ,  and  ,  [Science  **342**,  453 ( 2013)](\doibase 10.1126/science.1239834)  ,  ,  ,  ,  and   ,  [Nat. Commun.  **8**, 13940 ( 2017)](\doibase 10.1038/ncomms13940)  ,  [Nat Phys  **11**,  5 ( 2015)](\doibase 10.1038/nphys3217)  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  and   ,  [Sci. Rep. **3**,  1757 ( 2013)](\doibase 10.1038/srep01757)  ,  ,  ,  ,  and   ,  [Sci. Rep.  **4**,  3841 ( 2014)](\doibase 10.1038/srep03841)  ,  [Phys. Rev. B  **81**,  125318 ( 2010)](\doibase 10.1103/PhysRevB.81.125318)   and   ,  [JETP Lett.  **39**,  78-81 ( 1984)](http://www.jetpletters.ac.ru/ps/1264/article_19121.pdf)  ,  ,  ,  ,  and   ,  [Nat. Mater. **14**,  871 ( 2015)](\doibase 10.1038/nmat4360)   and   ,  [Adv. Mater.  **29**,  1605911 ( 2017)](\doibase https://doi.org/10.1002/adma.201605911)  ,  ,  ,  ,  ,  ,  , ,  and   ,  [Science  **342**,  1490 ( 2013)](\doibase 10.1126/science.1242247)  ,  , and   ,  [Rev. Mod. Phys.  **76**,  323 ( 2004)](\doibase 10.1103/RevModPhys.76.323)  ,  ,  ,  and   ,  [Solid State Commun.  **119**,  207 ( 2001)](\doibase 10.1016/S0038-1098(01)00111-9)  ,  ,  ,  ,  ,  and   ,  [Nat. Mater. **10**,  521-526 ( 2011)](\doibase 10.1038/nmat3051)  ,  ,  ,  ,  ,  ,  ,  , ,  ,  ,  ,  ,  and   ,  [Phys. Rev. Lett.  **109**,  096803 ( 2012)](\doibase 10.1103/PhysRevLett.109.096803)  ,  ,  ,  ,  ,  ,  , ,  ,  ,  ,  ,  ,  ,  and   ,  [Phys. Rev. Lett.  **109**,  116403 ( 2012)](\doibase 10.1103/PhysRevLett.109.116403)  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  and   ,  [Phys. Rev. Lett.  **110**,  107204 ( 2013)](\doibase 10.1103/PhysRevLett.110.107204)  ,  , and   ,  [Phys. Rev. B  **84**,  041202(R) ( 2011)](\doibase 10.1103/PhysRevB.84.041202)  ,  ,  ,  ,  and   ,  [Phys. Rev. Lett.  **108**,  246802 ( 2012)](\doibase 10.1103/PhysRevLett.108.246802)  ,  ,  ,  and   ,  [Nat. Commun.  **3**,  679 ( 2012)](\doibase 10.1038/ncomms1679)  ,  ,  ,  ,  ,  ,  ,  ,  , ,  and   ,  [Phys. Rev. Lett.  **112**,  047402(R) ( 2014)](\doibase 10.1103/PhysRevLett.112.047402)  ,  ,  ,  ,  and  ,  [Opt. Spectrosc.  **117**,  764 ( 2014)](\doibase 10.1134/S0030400X14110125)  ,  ,  ,  ,  ,  ,  ,  , ,  ,  ,  ,  ,  ,  ,  ,  ,  and   ,  [Sci. Rep. **7**,  39699 ( 2017)](\doibase 10.1038/srep39699)  ,  ,  ,  ,  ,  ,  , ,  ,  ,  and  ,  [Adv. Mater.  **29**,  1605965 ( 2017)](\doibase 10.1002/adma.201605965)  ,  ,  ,  ,  ,  ,  and   ,  [Phys. Rev. B  **90**,  161107(R) ( 2014)](\doibase 10.1103/PhysRevB.90.161107)  ,  ,  ,  ,  and   ,  [Phys. Rev. B  **87**,  041104(R) ( 2013)](\doibase 10.1103/PhysRevB.87.041104)  ,  ,  ,  ,  ,  , ,  ,  and   ,  [Sci. Rep.  **5**,  15973 ( 2015)](\doibase 10.1038/srep15973)  ,  ,  ,  ,  ,  , ,  and  ,  [J. Phys.: Condens. Matter **26**,  342202 ( 2014)](\doibase 10.1088/0953-8984/26/34/342202)  ,  ,  ,  ,  and   ,  [JETP Lett.  **98**,  557 ( 2014)](\doibase 10.1134/S0021364013220074)   and   ,  [Phys. Rev. B  **90**,  155316 ( 2014)](\doibase 10.1103/PhysRevB.90.155316)  ,  ,  ,  ,  ,  and  ,  [New J. Phys.  **18**,  113003 ( 2016)](\doibase 10.1088/1367-2630/18/11/113003)  ,  ,  ,  ,  ,  and   ,  [Phys. Rev. Lett.  **121**,  246403 ( 2018)](\doibase 10.1103/PhysRevLett.121.246403)   and   ,  [Phys. Rev. B  **94**,  165203 ( 2016)](\doibase https://doi.org/10.1103/PhysRevB.94.165203)  ,  [*Electronic Structure: Basic Theory and Practical Methods*](\doibase 10.1017/CBO9780511805769) ( Cambridge University Press,  Cambridge,  2004)  ,  [Phys. Rev. Lett.  **110**,  046402 ( 2013)](\doibase 10.1103/PhysRevLett.110.046402)   and   , [Phys. Rev. B  **89**,  205103 ( 2014)](\doibase 10.1103/PhysRevB.89.205103)   and   ,  [Phys. Rev. B  **88**,  195133 ( 2013)](\doibase 10.1103/PhysRevB.88.195133)   and   ,  [Phys. Rev. Lett.  **117**,  246401 ( 2016)](\doibase 10.1103/PhysRevLett.117.246401)   and   ,  [Phys. Rev. Lett.  **117**,  226801 ( 2016)](\doibase 10.1103/PhysRevLett.117.226801)  ,  ,  ,  ,  ,   ,  ,  , ,  ,  ,  ,  ,  ,  ,  and   ,  [Phys. Rev. B  **89**,  075138 ( 2014)](\doibase 10.1103/PhysRevB.89.075138)  ,  ,  ,  ,  ,  ,  ,  and   ,  [Nat. Commun.  **6**,  8463 ( 2015)](\doibase 10.1038/ncomms9463)   and   ,  [Phys. Rev. Mater.  **1**,  054201 ( 2017)](\doibase 10.1103/PhysRevMaterials.1.054201)   and   ,  [Phys. Rev. B  **27**, 4760 ( 1983)](\doibase 10.1103/PhysRevB.27.4760)  , @noop *Many-Particle Physics*,  2nd ed. ( Plenum Press,  1990)  ,  ,  ,  ,  ,  and   ,  [Phys. Rev. B  **92**,  085137 ( 2015)](\doibase 10.1103/PhysRevB.92.085137)   and   , [J. Phys. C: Solid State Phys.  **9**, 2305 ( 1976)](\doibase 10.1088/0022-3719/9/12/013)   and   ,  [Phys. Rev. B  **23**, 1495 ( 1981)](\doibase 10.1103/PhysRevB.23.1495)  ,  ,  ,  ,  ,  ,  ,  and   ,  [Phys. Rev. B  **90**,  214304 ( 2014)](\doibase 10.1103/PhysRevB.90.214304)  ,  ,  ,  ,  ,  and   ,  [J. Chem. Phys.  **143**,  102813 ( 2015)](\doibase 10.1063/1.4927081)  ,  [Rev. Mod. Phys.  **89**,  015003 ( 2017)](\doibase 10.1103/RevModPhys.89.015003)  ,  , and   ,  [Phys. Rev. B  **91**,  224310 ( 2015)](\doibase 10.1103/PhysRevB.91.224310)  ,  , and   , [Ann. Phys.  **523**,  168 ( 2011)](\doibase 10.1002/andp.201000100)  , @noop *Thermophysical properties of materials*,  Selected topics in solid state physics No.  XVIII ( North-Holland Physics Publishing,  Amsterdam, 1986)  , @noop *The Physics of Phonons* ( A. Hilger,  New York, NY, USA,  1990)  ,  [Mod. Phys. Lett. B  **34** No.  2, 2050025( 2020)](\doibase 10.1142/S0217984920500256)  ,  [J. Phys. C: Solid State Phys.  **5**,  535 ( 1972)](\doibase 10.1088/0022-3719/5/5/005)   and   ,  [Phys. Rev. Lett.  **121**,  255901 ( 2018)](\doibase 10.1103/PhysRevLett.121.255901)  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  , ,  ,  ,  ,  ,  ,  ,  ,  ,  , ,  ,  ,  ,  ,  ,  ,  ,  , ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  , , , ,  ,  ,  ,  ,  ,  and  ,  [Comput. Phys. Commun.  **248**,  107042 ( 2019)](\doibase 10.1016/j.cpc.2019.107042)  ,  [Phys. Rev. B  **55**, 10337 ( 1997)](\doibase 10.1103/PhysRevB.55.10337)   and   , [Phys. Rev. B  **55**,  10355 ( 1997)](\doibase 10.1103/PhysRevB.55.10355)  ,  ,  ,  ,  ,  ,  ,  ,  and   ,  [Phys. Rev. B  **86**,  094302 ( 2012)](\doibase 10.1103/PhysRevB.86.094302)  ,  ,  and   ,  [Phys. Rev. B  **58**, 3641 ( 1998)](\doibase 10.1103/PhysRevB.58.3641)  ,  , and   ,  [Phys. Rev. Lett.  **77**,  3865 ( 1996)](\doibase 10.1103/PhysRevLett.77.3865)   and   ,  [Phys. Rev. B  **91**,  245204 ( 2015)](\doibase 10.1103/PhysRevB.91.245204)  ,  [PNAS  **30**,  244 ( 1944)](\doibase 10.1073/pnas.30.9.244)  ,  ,  ,  ,  ,  and   ,  [Phys. Rev. B  **97**,  115145 ( 2018)](\doibase 10.1103/PhysRevB.97.115145)  ,  ,  ,  and   ,  [Phys. Rev. B  **71**,  035117 ( 2005)](\doibase 10.1103/PhysRevB.71.035117)  ,  , and   , [Phys. Rev. B  **72**,  035105 ( 2005)](\doibase 10.1103/PhysRevB.72.035105)  ,  ,  ,  ,  ,  ,  and   ,  [Phys. Rev. B  **95**,  075146 ( 2017)](\doibase 10.1103/PhysRevB.95.075146)   and   ,  [Phys. Rev. B  **83**,  235401 ( 2011)](\doibase 10.1103/PhysRevB.83.235401)  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  and   ,  [J. Phys. Chem. C  **117**,  25677 ( 2013)](http://pubs.acs.org/doi/abs/10.1021/jp409824g)   and   , [Phys. Rev. B  **91**,  165140 ( 2015)](\doibase 10.1103/PhysRevB.91.165140)   and   ,  [Phys. Rev. Lett.  **113**,  076407 ( 2014)](\doibase 10.1103/PhysRevLett.113.076407)  ,  , and   , [Phys. Rev. Lett.  **113**, 076408 ( 2014)](\doibase 10.1103/PhysRevLett.113.076408)   and   , [Phys. Rev. B  **99**,  075157 ( 2019)](\doibase 10.1103/PhysRevB.99.075157)   and   , @noop “ Influence of the electron-phonon interaction on the topological phase transition in BiTeI,”  ( 2020),  Accepted for publication in Proc. of XIth International Symposium Quantum Theory and Symmetries, CRM Series on Mathematical Physics (Springer).   and   ,  [Rev. Mod. Phys.  **77**,  1173 ( 2005)](\doibase 10.1103/RevModPhys.77.1173)   and   ,  [Comput. Mater. Sci.  **167**,  257 ( 2019)](\doibase 10.1016/j.commatsci.2019.05.045)  ,  [Adv. Phys.  **3**,  325 ( 1954)](\doibase 10.1080/00018735400101213)   and   ,  [Phys. Rev. B  **94**, 115135 ( 2016)](\doibase 10.1103/PhysRevB.94.115135)  ,  ,  ,  ,  ,  ,  ,  and   , @noop “ Generalized Fröhlich model for real materials: predominance of non-adiabatic effects in zero-point renormalization of electronic energies,”  ( 2019),  Unpublished  ,  ,  ,  ,  ,  and  ,  [Phys. Rev. B  **87**,  205103 ( 2013)](\doibase 10.1103/PhysRevB.87.205103)  ,  ,  ,  ,  ,  and   ,  [JETP Lett.  **101**,  507 ( 2015)](\doibase 10.1134/S0021364015080147)  ,  ,  and   ,  [Phys. Rev. B  **96**,  155309 ( 2017)](\doibase 10.1103/PhysRevB.96.155309)  ,  ,  ,  ,  and   ,  [Phys. Rev. B  **94**,  205130 ( 2016)](\doibase 10.1103/PhysRevB.94.205130)  ,  ,  ,  and  ,  [J. Chem. Phys.  **132**,  154104 ( 2010)](\doibase 10.1063/1.3382344)   and   ,  [J. Chem. Phys.  **124**, 174104 ( 2006)](\doibase 10.1063/1.2190220)  ,  , and   ,  [J. Comput. Chem.  **32**,  1456 ( 2011)](\doibase 10.1002/jcc.21759)  ,  ,  and   ,  [Phys. Rev. B  **93**,  144304 ( 2016)](\doibase 10.1103/PhysRevB.93.144304)  ,  ,  ,  and  ,  [J. Solid State Chem.  **114**,  379 ( 1995)](\doibase https://doi.org/10.1006/jssc.1995.1058)  ,  ,  ,  ,  ,  ,  ,  and   ,  [Phys. Rev. Lett.  **107**,  117401 ( 2011)](\doibase 10.1103/PhysRevLett.107.117401)  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  and   ,  [Phys. Rev. B  **86**,  085204 ( 2012)](\doibase 10.1103/PhysRevB.86.085204)  ,  ,  ,  ,  ,  ,  ,  and  ,  [Phys. Rev. B  **89**,  195117 ( 2014)](\doibase 10.1103/PhysRevB.89.195117)  , ed., [*Spin physics in semiconductors*](\doibase 10.1007/978-3-319-65436-2),  2nd ed., Springer series in solid-state sciences ; 157 ( Springer, Berlin,  2017)  and   ,  [Phys. Rev. B  **101**, 121102(R) ( 2020)](\doibase 10.1103/PhysRevB.101.121102)
arxiv_0000123
Propagating left/right asymmetry in the zebrafish embryo: one-dimensional model =============================================================================== During embryonic development in vertebrates, left-right (L/R) asymmetry is reliably generated by a conserved mechanism: a L/R asymmetric signal is transmitted from the embryonic node to other parts of the embryo by the L/R asymmetric expression and diffusion of the TGF-*β* related proteins Nodal and Lefty via propagating gene expression fronts in the lateral plate mesoderm (LPM) and midline. In zebrafish embryos, Nodal and Lefty expression can only occur along 3 narrow stripes that express the co-receptor *one-eyed pinhead* (oep): Nodal along stripes in the left and right LPM, and Lefty along the midline. In wild-type embryos, Nodal is only expressed in the left LPM but not the right, because of inhibition by Lefty from the midline; however, bilateral Nodal expression occurs in loss-of-handedness mutants. A two-dimensional model of the zebrafish embryo predicts this loss of L/R asymmetry in oep mutants. In this paper, we simplify this two-dimensional picture to a one-dimensional model of Nodal and Lefty front propagation along the oep-expressing stripes. We represent Nodal and Lefty production by step functions that turn on when a linear function of Nodal and Lefty densities crosses a threshold. We do a parameter exploration of front propagation behavior, and find the existence of *pinned* intervals, along which the linear function underlying production is pinned to the threshold. Finally, we find parameter regimes for which spatially uniform oscillating solutions are possible. Received for publication xx yy 2011 and in final form xx yy 2012. Address reprint requests and inquiries to Christopher L. Henley, E-mail: [email protected] 2 Introduction ============ In vertebrates, organs such as the heart and brain develop in an invariant left-right (L/R) asymmetric fashion, with *situs inversus*, or complete organ mirror-reversal, being a rare occurrence. The best-understood mechanism by which L/R bias is first established is that special cilia around the embyronic node drive a leftwards fluid flow. Disrupting, or reversing this flow results in randomization, or reversal of organ asymmetry. The cilia appear to be conserved in vertebrates, but it is controversial whether the nodal flow mechanism is decisive in all vertebrates, since asymmetry of some other origin (involving electrical potentials due to polarized gap junctions) is observable before the cilia even begin moving, and this may determine organ asymmetry in *Xenopus*. Our concern in this paper is with the subsequent propagation of this L/R signal from the node to the rest of the embryo. It is first transmitted to the lateral plate mesoderm (LPM), resulting in L/R asymmetric expression of the TGF-*β* related signaling molecules Nodal and Lefty. Nodal is expressed exclusively in the left LPM, and Lefty in the midline, along gene expression fronts that propagate from the posterior to the anterior of the embryo from approximately the 12 to 20 somite stages. This propagation depends on the diffusion of Nodal and Lefty, as well as their production rates which are promoted or inhibited by these same molecules. Subsequently, downstream genes (such as Pitx2) are activated and direct L/R asymmetric organogenesis. Nodal and Lefty constitute a classic activator-inhibitor system that is highly conserved among vertebrates. Such a model was introduced and studied for the initiation of Nodal and Lefty expression in the LPM and midline, respectively, at the level of the node in the mouse. This was spatialy one-dimensional, representing the L/R axis, and did not attempt to model the longitudinal propagation of the L/R signal. In this paper, we extend the ideas in this model to the zebrafish embryo, in which Nodal and Lefty gene expression only occurs in cells carrying the co-receptor *one-eyed pinhead* (oep) found on narrow stripes 3-5 cells wide in the midline and LPM. (Note that the Nodal analogue in the zebrafish is called “Southpaw”, but we will call it “Nodal” for consistency with the terminology of Ref. ). An advantage of this system is that it is quasi-one-dimensional in the longitudinal direction, because the most important variables are Nodal and Lefty production on these 3 stripes, and the propagation of their gene expression fronts is clearly unidirectional in the posterior-to-anterior direction. A two-dimensional model of the Nodal/Lefty system in the zebrafish embryo has been set up and simulated , as we review in Section [sec:2D-model]. In Sections [sec:1D-model] and [sec:results], we work through the consequences of a one-dimensional idealization of it, representing not the L/R axis (as in ) but the anterior-posterior axis. We idealize Nodal and Lefty production to have step-function turn-ons, when a threshold function linear in their concentrations crosses a threshold. We classify all possible behaviors in the limit that Nodal is unaffected by Lefty, and demonstrate the existence of intervals along which the Lefty profile must be *pinned* to that of Nodal. In Section [sec:oscillating], we find parameter regimes that allow uniform spatial oscillations to occur. Two-dimensional model ===================== In this section, we outline what would be a minimal model for the two-dimensional Nodal/Lefty system. This will motivate the simplification we make, in section [sec:1D-model], to a one dimensional model of similar form. The degrees of freedom in the model we use here are only the concentrations *N*(*r*, *t*) of Nodal and *L*(*r*, *t*) of Lefty, as found unbound in the extracellular fluid. In this section we let *r* represent a two-component position (*x*, *y*) on the embryo, which is practically a flat two-dimensional layer enveloping the yolk cell; in later sections on the one-dimensional model it will be replaced by the variable *y*. Also, *t* is the time. The geometry is idealized so that the three oep-expressing stripes extend indefinitely for *y* > 0, and are straight, parallel, and of unvarying widths, as shown in Fig. [fig:layout]. That means that, away from the baseline at *y* = 0, the model system has a translational invariance. That permits one to talk in a mathematically precise way about a limiting behavior in the model, in particular *uniform motion of a front*. Whether this is pertinent to the actual embryo depends on whether initial transients persist for a short time compared to the stages in which the L/R signal propagates, and for a short distance compared to the inter-stripe spacing or to the embryo’s length, which we know to be the case. Lefty is produced at a rate *s**L*(*r*, *t*) [concentration per unit time per unit area] which can be nonzero only at points lying within the midline stripe, since only those cells can produce Lefty. The function *s**L*(*r*, *t*) depends, in our simplified model, only on the local, instantaneous concentrations of Nodal and Lefty; the exact functional dependence is specified in Sec. [sec:N-L-production]. More precisely, the production function depending on *N*(*r*, *t*) and *L*(*r*, *t*) should rather be the transcription rate of the Lefty mRNA, while the source rate *s**L*(*r*, *t*) of Lefty protein is in turn proportional to the mRNA concentration, and thus should lag the production function we use by roughly the mRNA lifetime. (Indeed, a model including that lag was successfully fitted to quantitative data in, Chapter 3.) But in this paper, we assume that the source rate is proportional to the instantaneous production function; that is equivalent to assuming the Lefty mRNA is transcribed copiously and decays quickly compared to other time scales. Similarly, Nodal is produced at a rate *s**N*(*r*, *t*) which is nonzero only within stripes in the left LPM and right LPM, being described by a production function of the same form as that of Lefty in the midline, and with the same production function in the right and left LPM, so *no prior asymmetry* is assumed. As with Lefty, we have made an approximation of short-lived mRNA so that the mRNA content is not treated as an independent variable. In the case of Nodal, there is a second place where *s**N*(*r*, *t*) is nonzero, which are the two tailbuds, located as shown in Figure [fig:layout]. This production is *not* modulated by *N*(*r*, *t*) or by *L*(*r*, *t*), and may be assumed constant in time for simplicity (in reality it gradually diminishes ). The tailbud production has some left-right asymmetry, which in our model is responsible for *all* downstream asymmetries. The origin of the asymmetry is imagined to be in the nearby Kupffer’s vesicle  (the organ in which the special cilia are found in zebrafish). Diffusion behavior ------------------ Once produced, Nodal and Lefty are assumed to be passive, noninteracting densities that satisfy a diffusion equation with diffusion constants *D**N* and *D**L* respectively. Furthermore, they are assumed to have degradation rates, respectively *τ**N*− 1 and *τ**L*− 1. Although one expects the two signaling molecules to have similar values of these parameters, we allow them to be different in general, because (as we derive in the results sections) this inequality induces qualitiatively *different* behaviors. (The differential equations incorporating these parameters are exactly like Eqs.  and  of the one-dimensional model, except the second derivative with respect to *y* is replaced by the two-dimensional divergence.) Characteristic length and velocity scales can be constructed from the diffusion constant and degradation times: $$l\_{N}=\sqrt{D\_{N}\tau\_{N}};$$ $$v\_{N}=\sqrt{\frac{D\_{N}}{\tau\_{N}}},$$ and we define *l**L* and *v**L* similarly. These scales determine the typical range in space that the signal molecules travel, and the typical speed of the production fronts we shall study. Conditions for Nodal and Lefty production ----------------------------------------- A key property determining the behavior is that expression (and production) of either Nodal or Lefty is *promoted* by the presence of Nodal and *inhibited* by the presence of Lefty. Thus, both production functions *s**N*(*r*, *t*) and *s**L*(*r*, *t*) are increasing as a function of *N* and decreasing as a function of *L*. As we explain next, one can consider several versions of our model with different functional forms for the production functions. Let us first lay out how production is actually regulated. Along the LPM and midline stripes, Nodal and Lefty protein bind to the plasma membrane with the help of oep co-receptors ; we may say ``oep" is in one of three states: unbound, Nodal-bound or Lefty-bound. We assume that only a negligible fraction of all Nodal and Lefty produced are thus sequestered by the receptors, so that uptake or release of the signaling molecules is not included in *s**N*(*r*, *t*) and *s**L*(*r*, *t*). In turn, Nodal-bound oep induces a signaling cascade that promotes transcription of Nodal (in the LPM) or Lefty (on the midline). The fraction of Nodal-bound oep, *ϕ*(*N*, *L*), is $$\phi(N,L)=\frac{N}{k^{-1}+N+L} \label{eq:phi}$$ The oep receptors on the midline are believed to be identical to those on the LPM stripes, so we use the *same* function *ϕ*(*N*, *L*) in both stripes with the same equilibrium constant *k* (where *k*− 1 has the units of concentration). The Nodal production function *s**N* is assumed to be proportional to a Hill function $$f\_N(\phi)=\frac{1}{1+ (\phi\_{\*N}/\phi)^{n\_H}}. \label{eq:Hill}$$ This is a rounded step from 0 to 1 centered at a threshold parameter *ϕ*\* *N*; the step gets sharper as the Hill exponent *n**H* grows large. A microscopic rationalization for the form in Eq.  would be that the Nodal gene’s regulatory region cooperatively binds *n**H* copies of the signal or transcription factor induced by Nodal-bound oep. (Ref.  used a different production function than that also approximates a step function with a parameter controlling the sharpness.) The actual production rate is *s**N*(*r*, *t*) = *s**N*0*f**N*(*ϕ*(*N*, *L*)) where *s**N*0 is the maximum, or saturated, production rate. A similar function *f**L*(*ϕ*), with (it is expected) a rather different threshold *ϕ*\* *L*, and a saturated production rate *s**L*0, controls Lefty production. Part of the motivation for including the intermediate function *ϕ* in the model is to represent mutants in which oep is under- or over-expressed. That would have the effect of multiplying *ϕ* in by a constant, or equivalently of dividing *ϕ*\* *N* and *ϕ*\* *L* by that constant . We adopt two simplifying assumptions, either of which could be made independently of the other. The first pertains to the threshold for turning production on or off. In place of the nonlinear *ϕ* function, we follow Ref.  and instead use two threshold functions linear in the concentrations: [eq:Cij] *C**N*(*N*, *L*) = *C**N**N**N* − *C**N**L**L* *C**L*(*N*, *L*) = *C**L**N**N* − *C**L**L**L* where *N* = *N*(*r*, *t*) and *L* = *L*(*r*, *t*) are Nodal and Lefty concentration, respectively, and *C**N**N*, *C**N**L*, *C**L**N*, and *C**L**L* are constants. Production of *N* or *L* respectively turns on when this function exceeds a threshold parameter *C**N* \* or *C**L* \*, which should be positive, since a certain concentration of Nodal is needed to turn on either production, even in the absence of Lefty. We set *C**N* \* ≡ *C**L* \* ≡ 1 without loss of generality by scaling the four coefficients in Eqs. . The relation of Eqs.  to the *ϕ* function in is that the condition to be over threshold, *ϕ* > *ϕ*\* *N* or *ϕ* > *ϕ*\* *L*, is (by inserting Eq. ) mathematically equivalent to [eq:phi-to-linear] *k*(*ϕ*\* *N*− 1 − 1)*N* − *k**L* > 1;  *k*(*ϕ*\* *L*− 1 − 1)*N* − *k**L* > 1. Clearly, we can (roughly) identify the coefficients in with those in, in particular this viewpoint implies *C**N**L* ≡ *C**L**L*. Note that although the condition to be at threshold is captured exactly by the linear functions, away from threshold the production will depend on *N* and *L* in a different way when *n**H* < ∞. The linear form of Eqs.  and  will be more convenient mathematically. The second simplification we make is to assume *n**H* → ∞ in, i.e. a step-function turn-on of the production. In this case, Nodal production *s**N*(*r*, *t*) and Lefty production *s**L*(*r*, *t*) are given by: [eq:prodNL] $$\begin{cases} s\_{N}(r,t)=0 &,\: C\_{N}<1\\ 0\leq s\_{N}(r,t)\leq s\_{N0} &,\: C\_{N}=1\\ s\_{N}(r,t)=s\_{N0} &,\: C\_{N}> 1; \end{cases}\label{eq:prod\_N}$$ and $$\begin{cases} s\_{L}(r,t)=0 &,\: C\_{L}<1\\ 0\leq s\_{L}(r,t)\leq s\_{L0} &,\: C\_{L}=1\\ s\_{L}(r,t)=s\_{L0} &,\: C\_{L}>1. \end{cases}\label{eq:prod\_L}$$ Behavior of the two-dimensional model and comparison with experiments --------------------------------------------------------------------- The model initializes with zero Nodal or Lefty concentration in the domain (Fig. [fig:layout]), and with the 2 tailbuds producing Nodal that diffuses to the base of the 3 stripes. The left tailbud produces more than the right, which is responsible for all downstream asymmetries. Successful initiation at the base of the stripes requires that the Nodal produced in the tailbuds is sufficient to start self-sustaining Nodal production in the left LPM, but not the right LPM; production begins when thresholds given in are exceeded. The times at which this occurs differs for all 3 stripes depending on the difference between *ϕ*\* *N* and *ϕ*\* *L*, the amounts of Nodal produced by the left and right tailbuds, as well as the position of the tailbuds (closer to or farther from the midline, in a L/R symmetric way). Lefty, once produced on the midline, diffuses to the left and right LPM in equal amounts. Thus, the threshold function is constrained such that Nodal production turns off in the right LPM due to Lefty inhibition, but stays on in the left because of greater *N* there. This occurs in simulations for widely-varying parameter sets, for differences in tailbud production rates of 30%. As this asymmetry is decreased, the coefficients in, as well as the geometry of the problem, must be tuned more finely for successful initiation to occur. Experiments in zebrafish find that after an initial phase the wavefronts of Nodal and Lefty gene expression proceed at a fixed speed, and indeed this is reproduced in simulations: once initiated in a self-sustaining way, Nodal production on the left LPM switches on from the base upwards, and the leading edge of production moves at an asymptotically fixed rate along the stripe, followed by a front of Lefty production, moving at the same asymptotic rate on the midline. The main role of Nodal and Lefty in this model is to transmit the L/R asymmetric signal from the tailbud. Lefty in particular acts as a “barrier” preventing initiation of Nodal production in the right stripe: the Lefty concentration there is sufficiently high that Nodal diffusing from the left LPM is insufficient to initiate production there. Another necessary condition for successful propagation is that the function must not fall below 1 along the midline, or else Lefty production will cease, and Nodal produced in the left LPM will not be prevented from diffusing to the right LPM and initiating production further up the right stripe, leading to loss of L/R asymmetric propagation. Indeed, in Lefty knockout mutants, bilateral Nodal expression is observed. We intended to model the space and time-dependent behaviors in this model to, ultimately, * confirm the correctness of the basic picture by its qualitative agreement with experiments; * explain, and predict, the phenotypes of various mutations involving this system; * characterize the robustness of how the initial bias in the tailbud is amplified (as is manifested in the error rate); * show how the numerical values of the various parameters can be inferred (or as least bounded) on the basis of experiments. However, while it is understood in principle how we get a moving front, the mathematical formulation involves convolutions in two dimensions and does not include closed forms of simple functions. Furthermore, simulations suggested various possible regimes in which production might turn off again after the initial passage of a pulse, or in which the midline Lefty production had a gradual onset in space, much less sharp than the Nodal front. In order to analytically represent the functional form, and to comprehensively classify all regimes of the asymptotic behavior, we turned to a one dimensional model, which is the main focus of the rest of this paper. One-dimensional model ===================== The 2D model may be simplified to 1D by making the following simplifications. Since Nodal and Lefty expression occur on narrow, parallel stripes, we may replace the finite width of each stripe by a 1D line. Also, the concentration profile on one stripe is transmitted (with a diffusive lag) to another through the intervening space; this space can be replaced by lines as well, and we end up with a 5-line 1D model with diffusion between stripes given by *N**i*(*y*, *t*) = *k**D**N**i* − 1(*y*, *t* − Δ*t*) + *k**D**N**i* + 1(*y*, *t* − Δ*t*) + ... where *N**i* and *N**i* ± 1 are Nodal concentrations on adjacent stripes, and *k**D* and Δ*t* describe the rate of diffusion. In this paper, we will only focus on the front propagation of Nodal and Lefty gene expression in the steady state, in which case we further neglect inter-stripe diffusion and thus simplify the system to 1D. Diffusion equations ------------------- The partial differential equations for Nodal and Lefty are: [eq:d/dt] $$\frac{\partial N(y,t)}{\partial t}=D\_{N}\frac{\partial^{2}N(y,t)}{\partial y^{2}}-\frac{1}{\tau\_{N}}N(y,t)+s\_{N}(y,t) \label{eq:dN/dt}$$ $$\frac{\partial L(y,t)}{\partial t}=D\_{L}\frac{\partial^{2}L(y,t)}{\partial y^{2}}-\frac{1}{\tau\_{L}}L(y,t)+s\_{L}(y,t) \label{eq:dL/dt}$$ The equations appear linear at first glance, but the *s**N* and *s**L* terms are in fact nonlinear, since (according to and ) they depend on (*N*, *L*) via step functions. Nevertheless, if either *C**N*(*y*) < 1 or *C**N*(*y*) > 1 throughout an interval, then *s**N*(*y*) is constant (either 0 or *s**N*0) in that interval, in which case the differential equation for *N*(*y*) may be solved, and similarly for *L*(*y*). Evidently, the key parameter in such a solution is the position *y* at which the threshold is crossed and production turns on or off. Surprisingly, it is also generically possible that the threshold function is pinned at 1 throughout a finite interval (see Sec. [sec:pinned]). Within such an interval, the production function is indefinite (the middle case in Eqs.  and ) and the diffusion equation can no longer be used to find the solution. However, the threshold condition becomes an equality interval and gives the solution in such intervals. The complications in our results have to do mostly with handling such “pinned” intervals. The simplest possible solution is for constant, homogeneous production, e.g. *s**N*(*y*, *t*) = *s**N*0 for all (*y*, *t*) for Nodal. The solution is *N*(*y*, *t*) = *N*0, or *L*(*y*, *t*) = *L*0 in the analogous case for Lefty, where *N*0 ≡ *τ**N**s**N*0;  *L*0 ≡ *τ**L**s**L*0. The middle of any interval of production looks locally like a piece of this homogeneous system, so it is not surprising that *N*0 and *L*0 serve as a reference level for all solutions. In particular, since *s**N*(*y*, *t*) ≤ *s**N*0 everywhere, any solution must have *N*(*y*, *t*) ≤ *N*0 everywhere, and similarly for *L*(*y*, *t*). The traveling wavefront solution -------------------------------- Experiments show the Nodal front is typically ahead of the Lefty front . If it is sufficiently far ahead, the Lefty concentration there is negligible, and it is sufficient to consider a Nodal-only situation, in which the front of Nodal production moves at constant speed *v*: $$s\_{N}(y,t)=\begin{cases} s\_{N0} &,\: y<vt\\ 0 &,\: y>vt. \end{cases}\label{eq:wave prod}$$ The front shows a constant shape to a viewer traveling at velocity *v*, i.e. *N*(*y*, *t*) = *N*(*y* − *v**t*). When this is inserted into Eq. , it becomes an ordinary, second-order, linear differential equation, which is solved by an exponential: $$N(y-vt)=\begin{cases} N\_{0}[1-g\_{N}e^{\kappa\_{N}(y-vt)}] &,\: y-vt<0\\ N\_{0}(1-g\_{N})e^{-\kappa'\_{N}(y-vt)} &,\: y-vt>0. \end{cases} \label{eq:N-for-step-function}$$ Substituting this ansatz into Eq. , we find that *κ**N* and *κ*ʹ*N* are given by [eq:kappaN] $$\kappa\_{N}^{2}+\frac{v}{D\_{N}}\kappa\_{N}-\frac{1}{l\_{N}^{2}}=0$$ $$\kappa\_{N}^{'2}-\frac{v}{D\_{N}}\kappa'\_{N}-\frac{1}{l\_{N}^{2}}=0$$ By matching the boundary conditions at *y* − *v**t* = 0, we find *g**N* is given by $$g\_{N}=\frac{\kappa\_{N}'}{\kappa\_{N}+\kappa\_{N}'}\label{eq:g\_N}$$ Essentially, this means that the shape of the concentration profile depends on the relative magnitudes of *v* and *v**N*. Physically, for *v* > 0 and *v* ≫ *v**N*, the wavefront is traveling forward too fast for there to be much diffusion from the producing region to the non-producing region, and so the concentration at the front is low (*g**N* → 1); conversely, for *v* < 0 and ∣*v*∣ ≫ *v**N*, the concentration at the front is high for the same reason. (It is reminiscent of the Doppler effect, except that the Nodal signal spreads by the diffusion equation rather than the wave equation.) There is in fact a symmetry relating solutions with *v* > 0 and *v* < 0, whereby the profiles of an advancing front of speed *v* and a retreating front (reflected in *y*) of speed  − *v* add up to a uniform profile, because adding the production of an advancing front and that of a retreating front simply obtains a uniform producing line. Furthermore, when *v* = 0 the height of the front is simply $\frac{1}{2}N\_{0}$, and the stationary concentration profile simplifies to: $$N(y)=\begin{cases} N\_{0}(1-\frac{1}{2}e^{y/l\_{N}}) &,\: y<0\\ \frac{1}{2}N\_{0}e^{-y/l\_{N}} &,\: y>0 \end{cases}\label{eq:N\_simple\_step}$$ (Notice that we set *y* = 0 to be the position of the front.) Since the coefficients in the threshold equations and are in general different, the Lefty wavefront is in general displaced by a distance Δ*y* from the Nodal one. We have $$L(y-vt)=\begin{cases} L\_{0}[1-g\_{L}e^{\kappa\_{L}(y-vt+\Delta y)}] &,\: y-vt<-\Delta y\\ L\_{0}(1-g\_{L})e^{-\kappa'\_{L}(y-vt+\Delta y)} &,\: y-vt>-\Delta y \end{cases}\label{eq:L(y-vt)}$$ where Δ*y* > 0 signifies that the Nodal front is ahead of the Lefty front, and *κ**L*, *κ*ʹ*L* and *g**L* are defined analogously to Eqs.  and. Finally, to find *v* and Δ*y*, we observe that *N* = *N*0(1 − *g**N*) at the Nodal front, and *L* = *L*0(1 − *g**L*) at the Lefty front, and substitute and into the threshold functions ; assuming that Δ*y* > 0: [eq:vDy] *C**N**N**N*0(1 − *g**N*) − *C**N**L**L*0(1 − *g**L*)*e*− *κ*ʹ*L*Δ*y* = 1 *C**L**N**N*0[1 − *g**N**e*− *κ**N*Δ*y*] − *C**L**L**L*0(1 − *g**L*) = 1 from which we can solve (non-trivially) for *v* and Δ*y*. Nondimensionalization --------------------- Certain combinations of parameter changes are trivial, in the sense that the solutions look the same apart from rescalings of the distance, time, or concentrations. Since we are faced with the difficulty of exploring a large parameter space, we wish to discover the minimum number of nontrivial parameters. To this end, we will scale all variables and parameters in the problem so as to make them dimensionless. First, we introduce the scaled Nodal and Lefty concentrations $$\tilde{N}(y)\equiv\frac{N}{N\_{0}},$$ $$\tilde{L}(y)\equiv\frac{L}{L\_{0}},$$ so that 0 ≤ *Ñ*(*y*),  *L̃*(*y*) ≤ 1. Correspondingly, *C̃**N**N*, *C̃**N**L*, *C̃**L**N* and *C̃**L**L* are simply the coefficients in the (scaled form of the) threshold functions and : [eq:scaled-Cij] $$\begin{aligned} \tilde{C}\_{NN} &\equiv& C\_{NN}N\_{0};\\ \tilde{C}\_{NL} &\equiv& C\_{NL}L\_{0};\\ \tilde{C}\_{LN} &\equiv& C\_{LN}N\_{0};\\ \tilde{C}\_{LL} &\equiv& C\_{LL}L\_{0}.\end{aligned}$$ For example, Nodal production turns on when *C̃**N**N**Ñ* − *C̃**N**L**L̃* > 1, and similarly for *L̃*. The parameters are pertinent even in a spatially uniform situation. The remaining parameters relate to time or length scales. We scale length and time such that the parameters for Nodal become unity: [eq:scaled-l-v] $$\begin{aligned} \tilde{l}\_L &\equiv& l\_{L}/ l\_{N}; \\ \tilde{v}\_L &\equiv& v\_{L}/ v\_{N}; \\ \tilde{v} &\equiv& v/v\_{N}.\end{aligned}$$ The last two parameters are relevant (in a moving steady state) if and only if *v* ≠ 0. Implicitly, Eqs.  also nondimensionalize the time scale: *τ̃**L* ≡ *l̃**L*/*ṽ**L* In total, we have seven nontrivial parameters for the one-dimensional problem, defined in Eqs.  and . By rescaling space and time with $y\rightarrow\frac{y}{l\_N}$ and $t\rightarrow\frac{t}{\tau\_N}$ we find that Eqs.  become [eq:tilded/dt] $$\frac{\partial \tilde{N}(y,t)}{\partial t}=\frac{\partial^{2}\tilde{N}(y,t)}{\partial y^{2}}-\tilde{N}(y,t)+\tilde{s}\_{N}(y,t) \label{eq:dtildeN/dt}$$ $$\frac{\partial \tilde{L}(y,t)}{\partial t}=\tilde{l}\_L\tilde{v}\_L\frac{\partial^{2}\tilde{L}(y,t)}{\partial y^{2}}- \frac{1}{\tilde{\tau}\_{L}}\tilde{L}(y,t)+\tilde{s}\_{L}(y,t) \label{eq:dtildeL/dt}$$ where 0 ≤ *s̃**N*(*y*, *t*) ≤ 1 and $0\leq\tilde{s}\_L(y,t)\leq\frac{1}{\tau\_L}$. The (*N*, *L*) plane: zero dimensional case ------------------------------------------- In Sec. [sec:traveling-wavefront], asymptotically in either direction both *N* and *L* approach constant, uniform values. Therefore, in order to classify possible wavefront solutions, it is helpful first to classify all possible uniform solutions. Fig. [fig:N-L-cases] shows the (*Ñ*, *L̃*) phase plane with lines representing the $\dot{\tilde{N}}=0$ and $\dot{\tilde{L}}=0$ isoclines. The most important feature of these diagrams is the relation of the two isoclines to each other and to the bounding lines: this determines the possible steady states. There are 4 qualitative cases for the relation of $\dot{\tilde{N}}$ to $\dot{\tilde{L}}$: 1. *N*-isocline is to left of *L*-isocline 2. *L*-isocline crosses *N*-isocline left to right 3. *L*-isocline is to left of *N*-isocline 4. *N*-isocline crosses *L*-isocline left to right. (There are other mathematical cases in which one of the isoclines does not pass through the square at all, but that is obviously to be discarded, since there would be no way ever to produce the corresponding signal.) These phase planes are generalizations of the wild-type scenario: as noted after Eq. , if we justify the linear threshold equations from the single-binding site receptor behavior, we necessarily find *C̃**N**L* = *C̃**L**L*; the geometrical expresion of this on the (*Ñ*, *L̃*) plane is that the two isoclines cross on the negative *L* axis, hence only cases (a) and (c) could be realized. However, our purpose here is to study the general behavior of these equations that could arise in any embryonic system with propagating Nodal and Lefty fronts (since this is conserved in all vertebrates), and perhaps even mathematically equivalent equations having a quite different biological interpretation. In general, binding of Nodal and Lefty might be cooperative at the receptor; furthermore, gene regulation further downstream might be cooperative. Thus, the general form of the isoclines is nonlinear and we expect that all these topologies are imaginable. There is always a stable fixed point at *Ñ* = *L̃* = 0; this is the only fixed point in case (iii), which is thus trivial. If the *Ñ*-isocline and the *L̃*-isocline both intersect the upper border, that is if *C̃**N**N* − *C̃**N**L* > 1 and *C̃**L**N* − *C̃**L**L* > 1, we have a stable fixed point with both *N* and *L* saturated, which underlies the basic wavefront behavior of Sec. [sec:traveling-wavefront] above. However, if the *L̃*-isocline intersects the right edge, to the right of the *Ñ*-isocline, i.e. when *C̃**L**N* − *C̃**L**L* < 1 as *may* happen in either case (i) or case (ii), then the stable fixed point at *Ñ* = 1 exists at a less than saturated value, which we will define *L̃*\*: $$\tilde{L}\_{\*}\equiv\frac{\tilde{C}\_{LN}-1}{\tilde{C}\_{LL}}\label{eq:L\*}$$ It is not obvious that *L̃* being *pinned* at *L̃*\* < 1 exists in the case with spatial variation, but we will in fact show in Sec. [sec:pinned] that analogous “pinned” intervals arise in 1D traveling wave solutions. Finally, in case (iv), there is a fixed point at which *both* *Ñ* and *L̃* are less than saturated; this may be stable, but also may be unstable to periodic oscillations around the fixed point, as elaborated in Sec. [sec:oscillating]. Results: steady-state fronts ============================ To explore the large parameter space of the 1D model, we will consider special cases; the aim is a taxonomy of the possible behaviors of the model, with the hope that any general case will be qualitatively similar to one of these behaviors seen in the special cases. In particular, we concern ourselves with the steady state limiting behaviors, after all transients have died out. Any nonzero Nodal and Lefty wavefronts will then be traveling at identical speed *v*. *C̃**N**L* = 0 and *v* = 0 -------------------------- For the largest part of our story, we consider the case that *C̃**N**L* = 0, that is, Nodal is unaffected by Lefty (i.e. Nodal is an autonomous variable). We first solve for the behaviors of the one-component system in which Nodal activates Nodal, which already has moving front solutions; then, we treat the Nodal concentration *N*(*y*, *t*) as if it were externally imposed and solve for another one-component problem representing the Lefty concentration field. The reasons for choosing what seems to be a major simplification are (i) the reverse approximation, in which Lefty is autonomous, has only the trivial solution with Lefty off (since Lefty only inhibits); (ii) experimentally, the Nodal front leads and the Lefty front follows; thus, it is plausible that the inhibition from Lefty has no important effect on the Nodal front (serving only to prevent the initiation of a front on the other side); (iii) in Section [sec:N-L-plane] we see that only the topology of the (*N*, *L*) plot really matters for the dependnce on coefficients *C**i**j*. The second simplification we make is to set *v* = 0. As mentioned, the Nodal and Lefty fronts are traveling at equal speed in the steady state, and are thus stationary in the comoving frame. In fact, the only difference between the *v* = 0 and *v* ≠ 0 cases is the steepness of their exponential profiles at the front. This does *not* qualitatively affect the results we present below. With *C̃**N**L* = 0 and *v* = 0, the Nodal concentration takes the form given by, or in dimensionless form, $$\tilde{N}(y)=\begin{cases} 1-\frac{1}{2}e^{y} &,\: y<0\\ \frac{1}{2}e^{-y} &,\: y>0 \end{cases}\label{eq:N\_simple\_step\_dimless}$$ and our 7-dimensional parameter space reduces to a 3-dimensional one, with parameters *l̃**L*, *C̃**L**L*, and *L̃*\* (which is more convenient for our purposes than *C̃**L**N*). Our goal is to classify all the possible types of behavior of the Lefty profile as these parameters are varied. Note that *C̃**N**N* and *C̃**L**N* are always greater than 1, or else N and L will never have their production turned on. Also, let the position where Lefty production turns on be *y* = *y**L*; it may be on either side of the independent Nodal front *y* = 0. “Pinned” intervals ------------------ It soon became apparent that the behavior of *L* did not only consist of intervals of full production *s**L* = *s*0*L*, where *C**L*(*y*) > 1, and zero production, where *C**L*(*y*) < 1, but also intervals where the threshold function is *pinned* at *C**L*(*y*) = 1. The clearest example of this is to consider what happens when *L̃*\* < 1 as *y* →  − ∞, where our background Nodal profile goes to a limit of *Ñ* → 1. Since we want *L* to be produced somewhere, *C̃**L**N* ≡ *C**L**N**N*0 > 1 necessarily (this is with *L* = 0). As *y* →  − ∞ and *Ñ* → 1, *L*(*y*) = 0 is mathematically inconsistent, and so Lefty production must turn on. Now, by definition *L̃*\* is the amount of *L* that turns off its own production when *N* = *N*0, so *L̃*\* < 1 means that before *L̃* → 1, Lefty will turn off its own production again. This is a “paradox” in which Lefty production cannot be fully on or fully off, and it is only resolved if we let the threshold function be *pinned* at 1, i.e. *C**L*(*y* →  − ∞) ≡ 1, such that 0 < *s̃**L* < 1. Constraining *C**L*(*y*) means that *L̃*(*y*) is completely determined by *Ñ*(*y*) in this *pinned* interval: $$\tilde{L}(y) \equiv \frac{1} {\tilde{C}\_{LL}}\label{eq:L\_pinned} \bigg(\tilde{C}\_{LN}\tilde{N}(y)-1\bigg)$$ Now, we explicitly work out the possible behaviors of *L* when *L̃*\*, *l̃**L* and *C̃**L**L* are varied. *L* will be described in terms of the types of production occurring along the line, from  − ∞ to ∞. Full production is denoted “1”, zero production denoted “0”, and the pinned interval denoted “p”. For example, the background Nodal profile is characterized as {1,0}, which is shorthand for an interval of full production adjoining one of zero production. In the following, we split up the cases into *L̃*\* < 1 and *L̃*\* > 1. We find that the most important parameter is the ratio of length scales, *l̃**L*, which determines how various pinned intervals arise in the behavior of Lefty. Note that the value of *ṽ**L* does not enter at all. Classification of 1D model -------------------------- ### Case *L̃*\* < 1: {p,0} and {p,1,0} $ \begin{array}{cc} \includegraphics[width=3.3in]{new\_p0.eps} \\ \includegraphics[width=3.3in]{new\_p10.eps} \end{array}$ As already shown, there is a pinned interval extending to *y* →  − ∞, where *L* is completely determined. The production *s**L*(*y*) is also completely determined, and is derived as follows. Substituting back into ([eq:dL/dt]): $$\frac{D\_{L}}{\tilde{C}\_{LL}}\frac{d^2\tilde{N}(y)}{dy^2}-\frac{1}{\tau\_{L}}\tilde{L}+\tilde{s}\_{L}(y)=0$$ Now using ([eq:dN/dt]) to eliminate *d*2*Ñ*(*y*)/*d**y*2, Lefty production in the pinned interval is given by: $$\tilde{s}\_{L}(y)=\frac{1}{\tilde{C}\_{LL}}(\tilde{C}\_{LN}[(1-\tilde{l}\_L^{2})\tilde{N}(y)+\tilde{l}\_L^{2}\tilde{s}\_{N}(y)]-1) \label{eq:production\_pinned}$$ where $\tilde{s}\_{N}(y)=\begin{cases} 0 &, y>0\\ 1 &, y<0 \end{cases}$. The point of finding this expression is that as *y* →  − ∞, we see that *s̃**L* → *L̃*\* < 1. This simply means that as *L̃*\* goes to zero, so does the asymptotic production of Lefty. The values of *l̃**L*, *C̃**L**N* or *C̃**L**L* in ([eq:productionpinned]) are free to vary so long as *L̃* in is between 0 and 1. On the other hand, the location of the front separating the pinned interval and the region of zero production must be determined from matching both *L̃* and *d**L̃*/*d**y* on either side of the boundary. It is possible to end up with a value of *y**L* where *s̃**L* lies outside [0,1]. Indeed, checking when when this occurs using ([eq:productionpinned]) demonstrates that {p,0} is inconsistent when *L̃*\**l̃**L* > 1 Here, there are three regions instead: pinned, full production, and no production (in short, {p,1,0}). As *L̃*\**l̃**L* increases beyond 1, the length of the fully producing interval increases. Simulations and solving for the boundary conditions validate this (see figure [fig:pin1]). ### Case *L̃*\* > 1: {1,0} and {1,p,0} $ \begin{array}{cc} \includegraphics[width=3.3in]{new\_10.eps} \\ \includegraphics[width=3.3in]{new\_1p0.eps} \end{array}$ The traveling wavefront given by ([eq:L(y-vt)]) is still not guaranteed when *L̃*\* > 1. We can see this by considering the limit at which *l̃**L* ≪ 1: at the front of Lefty production, *L̃* goes from 1 to 0 with a background of nearly constant *N*. Since *L* inhibits its own production, the apparent paradox is that *d**C**L*(*y*)/*d**y* > 0, i.e. the regions of zero and full production are reversed! Simulations show that as *l̃**L* is decreased and the slope of *L* made steeper, once the slope exceeds that of the “pinned” Lefty ([eq:Lpinned]), {1,0} will transition to {1,p,0}. Basically, requiring that *d**C**L*(*y*)/*d**y* < 0 at the front is equivalent to this condition. To write out explicit conditions for entering the {1,p,0} state, we should consider both cases *y**L* > 0 and *y**L* < 0, for which *Ñ* takes different forms given in. Thus, the condition for {1,p,0} is that *C**L*(*y*) crosses the threshold in the other direction: *d**C**L*(*y**L*)/*d**y* > 0, giving: $$\begin{aligned} y\_{L}&>0:\;\tilde{l}\_L^{-1}>&1+\frac{2}{\tilde{C}\_{LL}} \label{eq:{1,pin,0}-1} ;\\ y\_{L}&<0:\;\tilde{l}\_L^{-1}>& 2\tilde{L}\_\* - 1. \label{eq:{1,pin,0}-1-1}\end{aligned}$$ (Note that both conditions require *l̃**L* < 1, i.e. the Nodal length scale is greater than Lefty, as argued above in the limiting case.) If the inequalities are in the opposite direction, then we are back to the traveling wavefront given by ([eq:L(y-vt)]), or in our notation 1,0. Here we can solve for the Lefty concentration profile fully: using we find that Δ*y* > 0 when $$\frac{1}{2}\tilde{C}\_{LN} - \frac{1}{2}\tilde{C}\_{LL} < 1.$$ |c|c| Interval patterns & Parameter conditions{1, 0} & *L̃*\* > 1,  *d**C**L*(*y**L*)/*d**y* < 0{1, *p*, 0} & *L̃*\* > 1,  *d**C**L*(*y**L*)/*d**y* > 0{*p*, 0} & *L̃*\* < 1,  *l̃**L*− 1 > *L̃*\*{*p*, 1, 0} & *L̃*\* < 1,  *l̃**L*− 1 < *L̃*\* Solutions oscillating in time ============================= In the previous section(s), we explored the limiting case of an autonomous, Lefty-independent Nodal production. We now address new phenomena which are possible when Nodal is significantly affected by Lefty. The most striking of these is the possibility of a time-oscillating solution, which we will study in the special case that all concentrations are uniform in space. The experimentally pertinent motivation is whether it is possible to generate traveling solutions, consisting of pulses of Nodal and Lefty, periodic in space and in time; one expects such solutions to be possible only if uniform oscillations are possible in the model. By examining the geometry of the (*N*, *L*) plane [Figure [fig:N-L]], it can be seen that only case (d) has the possibility of oscillations; this is shown with more detail in Fig. [fig:N-L-osc](b). Note that the fixed point in the center, since it has *N**F* < *N*0 and *L**F* < *L*0, corresponds to production rates less than saturation (*s**N* < *s**N*0 and *s**L* < *s**L*0), so we are in a “pinned” regime for *both* *N* and *L*, which is self-consistent with being on the isoclines for both. Parts of each cycle ------------------- The period of an oscillation must consist of four phases [as shown in Fig. [fig:N-L-osc](a)]: * Nodal and Lefty both on; Lefty grows faster than Nodal, until Nodal turns off; * Nodal off and Lefty on; Nodal decays, until Lefty turns off; * Nodal and Lefty both off; Lefty decays faster than Nodal, and they remaining Nodal that has not yet decayed is sufficient to turn on Nodal again; * Nodal on and Lefty off; Nodal grows, until it is sufficient to turn Lefty on too (and back to phase 1). Evidently, a key condition is on the degradation times, which govern the growth rates of Nodal or Lefty : we need *τ̃**L* ≡ *τ**L*/*τ**N* < 1. (Of course, the actual magnitude of the degradation times simply sets an overall time scale.) We can describe the history with a series of times *t**i*, i=1,2,..., at each of which one of the signals turns on or off. Between these times, the production rates are constant (either zero or saturated), and the differential equation is solved by a simple exponential: $$\tilde{N}(t) = \begin{cases} \tilde{N}(t\_i) e^{-(t-t\_i)/\tau\_N} &, \: s\_N=0\\ 1 -[1-\tilde{N}(t\_i)] e^{-(t-t\_i)/\tau\_N} &, \: s\_N=s\_{N0}, \end{cases} \label{eq:N-t-constant-s}$$ and similarly for *L̃*(*t*). To visualize the dynamics, it is convenient to draw the trajectory of (*Ñ*(*t*), *L̃*(*t*)) in the (*Ñ*, *L̃*) plane; in our equations, the time derivatives $\dot{\tilde{N}}\equiv d\tilde{N}/dt$ and $\dot{\tilde{L}}\equiv d\tilde{L}/dt$ are fuctions only of (*Ñ*, *L̃*). The lines *C̃**N*(*Ñ*, *L̃*) = 0 and *C̃**L*(*Ñ*, *L̃*) = 0 on this plot are the *isoclines*, meaning the places where (respectively) $\dot{\tilde{N}}$ and $\dot{\tilde{L}}$ change sign. (Isoclines were also used in the analysis of Ref. .) Each time the trajectory intersects an isocline is one of the times *t**i*; let the concentrations at these times be *Ñ**i* ≡ *Ñ*(*t**i*), *L̃**i* ≡ *L̃*(*t**i*). We can find the trajectory curve, and thus find solutions of the dynamical equations, by eliminating time and considering only the discrete map (*Ñ**i*, *L̃**i*) → (*Ñ**i* + 1, *L̃**i* + 1). We next find the functional form of this map. Consider phase III of the cycle, during which [according to ] *Ñ*(*t*) = *Ñ**i**e*− (*t* − *t**i*)/*τ**N* and *L̃*(*t*) = *L̃**i**e*− (*t* − *t**i*)/*τ**L*. Eliminating time, we get [eq:N-L-phases] $$\frac{\tilde{L}}{\tilde{L}\_i} = \Biggl[\frac{\tilde{N}}{\tilde{N}\_i}\Biggr]^{1/{{\tilde{\tau}\_L}}}. \label{eq:N-L-phaseIII}$$ The point (*Ñ**i* + 1, *L̃**i* + 1) is the intersection of the curve with the isocline *C̃**N*(*Ñ*, *L̃*) = 0. In phase IV, the trajectory would be $$\frac{\tilde{L}}{\tilde{L}\_i} = \Biggl[\frac{1-\tilde{N}}{1-\tilde{N}\_i}\Biggr]^{1/{{\tilde{\tau}\_L}}}, \label{eq:N-L-phaseIV}$$ and similarly in the other four phases of the cycle. Notice that the curve can possibly intersect the isocline in phases I or III only if *τ̃**L* < 1. (For example, whehn *τ̃**L* = 1 the trajectory in each phase is a straight line to one corner of the square.) To give the cycling behavior, it is necessary (but not sufficient) that the isoclines intersect as shown in Figure [fig:N-L-cases](b), which depends on two inequalities. The $\dot{\tilde{N}}$ and $\dot{\tilde{L}}$ isoclines’ respective intercepts at *L̃* = 0 satisfy 1/*C̃**N**N* < 1/*C̃**L**N*; their intercepts at *L̃* = 1 must be in the reverse order, (1 + *C̃**N**L*)/*C̃**N**N* > (1 + *C**L**L*)/*C̃**L**N*. The two inequalities can be written together as $$\frac{\tilde{C}\_{LL}+1}{\tilde{C}\_{NL}+1} < \frac{\tilde{C}\_{LN}} {\tilde{C}\_{NN}} < 1 \label{eq:osc-fp-cond}$$ This tends to be satisfied When conditions are satisfied, the isoclines cross at a fixed point (*Ñ*\*, *L̃*\*) where $$\begin{aligned} \tilde{N}\_\* &\equiv& \frac {\tilde{C}\_{NL}-\tilde{C}\_{LL}} {\tilde{C}\_{LN} \tilde{C}\_{NL}- \tilde{C}\_{NN}\tilde{C}\_{LL}} \\ \tilde{L}\_\* &\equiv& \frac {\tilde{C}\_{NN}-\tilde{C}\_{LN}} {\tilde{C}\_{LN} \tilde{C}\_{NL}- \tilde{C}\_{NN}\tilde{C}\_{LL}}.\end{aligned}$$ Stability conditions -------------------- The trajectory always tends to spiral around (*Ñ*\*, *L̃*\*). as illustrated in Figure [fig:N-L-osc](b). However, there are different possibile limiting behaviors depending on whether the trajectory spirals inwards (stable fixed point) or outwards, and on where it ends up. There is always a stable fixed point at (*Ñ*, *L̃*) = (0, 0) [the empty system] and provided the isoclines intercept the upper edge of (*N*, *L*) space [as in Figure  [fig:N-L-osc](b)], there is also a stable fixed point at (*Ñ*, *L̃*) = (1, 1) [both Nodal and Lefty turned on and saturated]. Some of the possibilities: * The fixed point is stable; there is one unstable cycle, such that a trajectory starting inside it tends to (*Ñ*\*, *L̃*\*) and a trajectory starting outside it tends to (0, 0) or (1, 1). * There are no cycles (stable or unstable); the fixed point is unstable, and spirals out to (0, 0) or (1, 1). * The fixed point is unstable, and spirals out to a stable cycle (beyond which is an untable cycle, as in case (1)); this is the case of interest to us. To evaluate the stability, we must linearize near the fixed point in terms of *δ**Ñ* ≡ *Ñ* − *Ñ*\*, *δ**L̃* ≡ *L̃* − *L̃*\*. The trajectory (e.g. Eqs.  or ) becomes $\delta \tilde{L}=m\_{\rm ph}\delta \tilde{N}$, also $m\_{\rm ph}$ is the slope in each respective phase of the cycle, ${\rm ph}\to$ I,II,III, or IV. Thus, $$m\_{{\scriptstyle \rm III}}= \frac{1}{{{\tilde{\tau}\_L}}} \frac{\tilde{L}\_\*}{\tilde{N}\_\*};$$ $$m\_{{\scriptstyle \rm IV}}= - \frac{1}{{{\tilde{\tau}\_L}}} \Big(\frac{\tilde{L}\_\*}{1-\tilde{N}\_\*}\Big);$$ and similarly for the rest. A convenient viewpoint on the condition is that $$m\_{{\scriptstyle \rm ph}}= {{\tilde{\tau}\_L}}m\_{{{\scriptstyle \rm ph}},0}$$ where $m\_{{{\scriptstyle \rm ph}},0}$ is the slope of the line from (*Ñ*\*, *L̃*\*) to the appropriate corner of the square. Meanwhile, the slopes of the $\dot{\tilde{N}}=0$ and $\dot{\tilde{L}}=0$ isoclines are $$\begin{aligned} m\_N &\equiv& \tilde{C}\_{NN}/\tilde{C}\_{NL}, \\ m\_L &\equiv& \tilde{C}\_{LN}/\tilde{C}\_{LL}.\end{aligned}$$ Evidently, the additional (and sufficient) condition to get a spiraling behavior is that $m\_N < m\_{{\scriptstyle \rm III}}, m\_{{\scriptstyle \rm I}}< m\_L$. Since *τ̃**L* < 1 and $m\_{{\scriptstyle \rm ph}}$ is of order 1/*τ̃**L* for $\rm ph=I,II,III,IV$, it follows that *m**L* is typically large and hence *C**L**L* must be relatively small. Solving one linear equation for each phase of the cycle, we find after four isocline intersections that (*δ**Ñ**i* + 4, *δ**L̃**i* + 4) = Λ(*δ**Ñ**i*, *δ**L̃**i*), where $$\Lambda \equiv \frac{R\_{{\scriptstyle \rm I}}R\_{{\scriptstyle \rm III}}}{R\_{{\scriptstyle \rm II}}R\_{{\scriptstyle \rm IV}}}$$ where $$R\_{{\scriptstyle \rm ph}}\equiv \frac{m\_L-m\_{{\scriptstyle \rm ph}}}{m\_N-m\_{{\scriptstyle \rm ph}}}.$$ for $\rm ph= I,II,III, IV$. Thus, the fixed point (*Ñ*\*, *L̃*\*) is stable if and only if Λ < 1. An interesting special case is when (*Ñ*\*, *L̃*\*) = (1/2, 1/2): in that case, Λ ≡ 1 so the fixed point is always marginal. To get a better understanding of the stability, consider the case that $m\_N \ll m\_{{\scriptstyle \rm I}},..., m\_{{\scriptstyle \rm IV}}\ll m\_L$. Then $$R\_{{\scriptstyle \rm ph}}\approx - \frac{m\_L}{m\_{{\scriptstyle \rm ph}}} \Big(1- \frac{m\_{{\scriptstyle \rm ph}}}{m\_L} + \frac{m\_N}{m\_{{\scriptstyle \rm ph}}}\Big).$$ Noting that $m\_{{\scriptstyle \rm I}}m\_{{\scriptstyle \rm III}}/m\_{{\scriptstyle \rm II}}m\_{{\scriptstyle \rm IV}}\equiv 1$, we get $$\begin{aligned} \Lambda &\approx& 1- \frac{1}{{{\tilde{\tau}\_L}}m\_L} \Big(m\_{{{\scriptstyle \rm I}},0} + m\_{{{\scriptstyle \rm III}},0} - m\_{{{\scriptstyle \rm II}},0} - m\_{{{\scriptstyle \rm IV}},0}\Big) + \nonumber\\ &&+ {{\tilde{\tau}\_L}}m\_N \Big(m\_{{{\scriptstyle \rm I}},0}^{-1} + m\_{{{\scriptstyle \rm III}},0}^{-1} -m\_{{{\scriptstyle \rm II}},0}^{-1} - m\_{{{\scriptstyle \rm IV}},0}^{-1}\Big).\end{aligned}$$ The sums in parentheses do not have a factor of *τ̃**L*; they depend only on (*Ñ*\*, *L̃*\*). Both sums tend to be positive when the fixed point is farther from the line *L̃* = *Ñ* than it is from the line *L̃* = 1 − *Ñ*, or both negative in the opposite situation. Thus, in the former situation, the fixed point tends to be unstable when (*m**L**m**N*)1/2 < *τ̃**L*. In the opposite situation, the fixed point tends to be unstable when the inequality is in the other direction. The oscillation period *T* can be inferred from the orbit on the (*N*, *L*) plane by going back to equation. Note that $\dot{\tilde{N}}$ and $\dot{\tilde{L}}$ do not go to zero approaching the isocline line, but rather they approach a positive or negative constant coming from the respective sides. In other words, the four legs of the cycle on the (*Ñ*, *L̃*) plane are each traversed at a roughly constant speed. Hence a cycle that forms a small loop around (*Ñ*\*, *L̃*\*), as happens just beyond the instability, has its period *T* proportional to its oscillation amplitude. It follows that if the parameters are close to the instability threshold, we can construct a spatially periodic solution of intervals containing oscillations which are shifted in phase with respect to each other, provided that the spatial period is large compared to the distance that either signal can diffuse in time *T*: locally, this situation is equivalent to the uniform one, since the signal from contrasting intervals does not have time to propagate. It follows that periodic traveling waves are also possible, as well as a standing wave made up of alternating domains that each oscillate similar to the uniform oscillations, but in opposite phase to each other. Conclusion ========== In this work, we first sketched (Sec. [sec:2D-model]) the realistic two-dimensional model developed in, which exhibits a uniformly moving posterior-to-anterior front of Nodal production on one of the two stripes representing the lateral plate mesoderm, followed by a front of Lefty production on the midline, just like the experimental observation , and which is known to propagate the left/right asymmetry from the tailbuds through the entire embryo. We then set up a version of this model in one dimension (Sec. [sec:1D-model]), representing the anterior-to-posterior axis, which captures most properties of front propagration. The one-dimensional space represents a combination of the left LPM and the midline, and its main qualitative defect compared to the two-dimensional system is that it misses the lag time for the Nodal signal *N* to diffuse from the LPM to the midline, or for the Lefty signal *L* to diffuse from the midline to the LPM. The key parameters of the model were (1) coefficients [Sec. [sec:N-L-production]] quantifying how much Nodal promotes the two signals, and how much Lefty inhibits them, and (2) different diffusion constants and/or degradation times for the two signals, allowing us to define a typical length scale for either one, over which its concentration varies in front of and behind a sharp spatial step of the production; all in all, we found seven dimensionless ratios, each of which is a nontrivial parameter (Sec. [sec:non-dim]). We then set out to classify, analytically, the possible behaviors of the model in a steady state, and see how the depended on the parameter values. A key aid in identifying the regime is make a two dimensional plot (Sec. [sec:N-L-plane]) of how the homogeneous concentrations (*N*, *L*) would evolve: the very existence of a front depends on having a fixed point on this plane representing nonzero Nodal production, alongside the fixed point of zero production (which is always present). Our main focus was on uniformly moving fronts, but in fact one can specialize (without real loss of generality, and with considerable mathematical simplification) to the special case of zero front velocity [Sec. [sec:traveling-wavefront]]. The key special feature we found was the possibility that, instead of showing a sharp front where the production rapidly switches from zero to maximum, the Lefty production (on the midline in the real embryo) might be varying continuously over an extended interval (Sec. [sec:results]). This was due to a sort of feedback or buffering, in which the Nodal and Lefty concentrations compensate exactly such that every point along this interval is right at the threshold for Lefty production (“pinned”). Finally, in Sec. [sec:oscillating] we changed focus to the possibility of a temporally periodic behavior; this parameter regime allows both the possibility of a steady limiting state which is “pinned” (at threshold) for *both* Nodal and Lefty. One may get oscillations if Lefty degrades much faster than Nodal, and we characterized the exact mathematical criterion for the steady state to be unstable to developing oscillations. Having found the relationship of these behaviors to the parameter values, in future work we will be able to find the corresponding parameter regimes in the realistic two-dimensional model, with the ultimate goal of finding a set that gives the observed phenotypes. We obtained many behaviors that are *not* seen experimentally. (It is still uncertain whether the Lefty production has a sharp front or an extended one like the “pinned” case.) This is a positive feature of the model, for it will allow the exclusion of whole domains of parameter values. Our results might motivate further studies of the possible behaviors of propagating L/R asymmetry in the zebrafish and other embryos. The pertinent measurements in real systems that relate to phenomena in this model are * The measured front velocity constrains the velocity scale *v**N*. * The sharpness of the *N* and *L* concentration fronts gives the length scales *l**N* and *l**L* and/or indicates the possibility of the “pinned” interval; however, experiments that measure mRNA are in effect probing the production rate and do not access *l**N* and *l**L* directly. * The lag between the Nodal front and the Lefty front, gives information on the difference between Nodal and Lefty threshold parameters. Experimentally the Lefty front lags, but many parameter sets give a Lefty front that spatially leads (even though causally it is downstream from Nodal), so observation can limit parameters to a certain window. In addition, they suggest how changing a parameter (due to a mutation or some external perturbation) could give an exotic result, such as non-monotonic front. What the present study did *not* address is how the Nodal production is first initiated, and why it only happens along the left LPM. That depends on the *two-dimensional* geometry, e.g. the relative separations of the tailbud from the foot of the LPM stripes and the midline stripe of Oep-producing cells. Furthermore, it is possible the real system could propagate, e.g., a finite pulse of Nodal that terminates, but that will be seen only if such a signal is produced at the posterior end. These questions must be left to a proper two-dimensional study. SUPPLEMENTARY MATERIAL ====================== This work was supported by the Department of Energy grant DE-FG02-89ER-45405 (C.L.H.) and National Institute of Child Health & Human Development grant R01-HD048584 (B. X. and R.D.B.). 99 H. Hamada, C. Meno, D. Watanabe, and Y. Saijoh. Establishment of Vertebrate Left-right Asymmetry. Nature Reviews Genetics 3:103-113, 2002. M. Levin. Leftright asymmetry in embryonic development: a comprehensive review. Mechanisms of Development 122: 3-25, 2005. A. Raya and J.C.I. Belmonte. Leftright asymmetry in the vertebrate embryo: from early information to higher-level integration. Nature Reviews Genetics 7:283-293, 2006. J. McGrath, S. Somlo, S. Makova, X. Tian, and M. Brueckner. Two populations of node monocilia initiate left-right asymmetry in the mouse. Cell 114:61-73, 2003. S. Nonaka, et al. Randomization of LeftRight Asymmetry due to Loss of Nodal Cilia Generating Leftward Flow of Extraembryonic Fluid in Mice Lacking KIF3B Motor Protein. Cell 99:829-837, 1998. S. Nonaka, H. Shiratori, Y. Saijoh, and H. Hamada. Determination of leftright patterning of the mouse embryo by artificial nodal flow. Nature 418:96-99, 2002. J. J. Essner, et al. Leftright development: Conserved function for embryonic nodal cilia. Nature 418:37-38, 2002. D. S. Adams, K. R. Robinson, T. Fukumoto, S. Yuan, R. C. Albertson, P. Yelick, L. Kuo, M. McSweeney, and M. Levin. Early, *H*+-V-ATPase-dependent proton flux is necessary for consistent left-right patterning of non-mammalian vertebrates. Dev. 133:1657-1671, 2006. A. Kawasumi et al. Left-right asymmetry in the level of active Nodal protein produced in the node is translated into left-right asymmetry in the lateral plate of mouse embryos. Dev. Biol. 353:321-330, 2011. J. Brennan, D.P. Norris, and E.J. Robertson. Nodal activity in the node governs left-right asymmetry. Genes and Development 16: 2339-2344, 2002. X. Wang, and H.J. Yost. Initiation and Propagation of Posterior to Anterior (PA) Waves in Zebrafish Left-Right Development. Developmental Dynamics 237:3640-3647, 2008. A.K. Ryan, et al. W. Sabbagh, J. Greenwald, S. Choe, D. P. Norris, E. J. Robertson, R. M. Evans, M. G. Rosenfeld, and J. C. I. Belmonte. Pitx2 determines leftright asymmetry of internal organs in vertebrates. Nature 394: 545-551, 1998. T. Nakamura, et al. Generation of Robust Left-Right Asymmetry in the Mouse Embryo Requires a Self-Enhancement and Lateral-Inhibition System. Developmental Cell 11:495-507, 2006. B. Xu. “Systematic Analysis of Asymmetric Nodal Signaling in the Development of Zebrafish Left-Right Patterning.” (Ph. D. thesis, Princeton University, 2010) C.L. Henley, B. Xu, and B. Burdine. Simplified model for Nodal front propagation in Zebrafish embryo (unpublished). J. J. Essner, J. D. Amack, M. K. Nyholm, E. B. Harris, and H. J. Yost, Kupffer’s vesicle is a ciliated organ of asymmetry in the zebrafish embryo that initiates left-right development of the brain, heart and gut, Development 132:1247-1260 (2005r4). L. Wolpert, J. Smith, T. Jessell, P. Lawrence, E. Robertson, and E. Meyerowitz. Principles of Development (3rd ed.). OUP, New York, 2007. K. A. Smith, et al. Bmp and Nodal Independently Regulate lefty1 Expression to Maintain Unilateral Nodal Activity during Left-Right Axis Specification in Zebrafish. PLoS Genetics 7:e1002289, 2011. Propagating left/right asymmetry in the zebrafish embryo: one-dimensional model =============================================================================== During embryonic development in vertebrates, left-right (L/R) asymmetry is reliably generated by a conserved mechanism: a L/R asymmetric signal is transmitted from the embryonic node to other parts of the embryo by the L/R asymmetric expression and diffusion of the TGF-*β* related proteins Nodal and Lefty via propagating gene expression fronts in the lateral plate mesoderm (LPM) and midline. In zebrafish embryos, Nodal and Lefty expression can only occur along 3 narrow stripes that express the co-receptor *one-eyed pinhead* (oep): Nodal along stripes in the left and right LPM, and Lefty along the midline. In wild-type embryos, Nodal is only expressed in the left LPM but not the right, because of inhibition by Lefty from the midline; however, bilateral Nodal expression occurs in loss-of-handedness mutants. A two-dimensional model of the zebrafish embryo predicts this loss of L/R asymmetry in oep mutants. In this paper, we simplify this two-dimensional picture to a one-dimensional model of Nodal and Lefty front propagation along the oep-expressing stripes. We represent Nodal and Lefty production by step functions that turn on when a linear function of Nodal and Lefty densities crosses a threshold. We do a parameter exploration of front propagation behavior, and find the existence of *pinned* intervals, along which the linear function underlying production is pinned to the threshold. Finally, we find parameter regimes for which spatially uniform oscillating solutions are possible. Received for publication xx yy 2011 and in final form xx yy 2012. Address reprint requests and inquiries to Christopher L. Henley, E-mail: [email protected] 2 Introduction ============ In vertebrates, organs such as the heart and brain develop in an invariant left-right (L/R) asymmetric fashion, with *situs inversus*, or complete organ mirror-reversal, being a rare occurrence. The best-understood mechanism by which L/R bias is first established is that special cilia around the embyronic node drive a leftwards fluid flow. Disrupting, or reversing this flow results in randomization, or reversal of organ asymmetry. The cilia appear to be conserved in vertebrates, but it is controversial whether the nodal flow mechanism is decisive in all vertebrates, since asymmetry of some other origin (involving electrical potentials due to polarized gap junctions) is observable before the cilia even begin moving, and this may determine organ asymmetry in *Xenopus*. Our concern in this paper is with the subsequent propagation of this L/R signal from the node to the rest of the embryo. It is first transmitted to the lateral plate mesoderm (LPM), resulting in L/R asymmetric expression of the TGF-*β* related signaling molecules Nodal and Lefty. Nodal is expressed exclusively in the left LPM, and Lefty in the midline, along gene expression fronts that propagate from the posterior to the anterior of the embryo from approximately the 12 to 20 somite stages. This propagation depends on the diffusion of Nodal and Lefty, as well as their production rates which are promoted or inhibited by these same molecules. Subsequently, downstream genes (such as Pitx2) are activated and direct L/R asymmetric organogenesis. Nodal and Lefty constitute a classic activator-inhibitor system that is highly conserved among vertebrates. Such a model was introduced and studied for the initiation of Nodal and Lefty expression in the LPM and midline, respectively, at the level of the node in the mouse. This was spatialy one-dimensional, representing the L/R axis, and did not attempt to model the longitudinal propagation of the L/R signal. In this paper, we extend the ideas in this model to the zebrafish embryo, in which Nodal and Lefty gene expression only occurs in cells carrying the co-receptor *one-eyed pinhead* (oep) found on narrow stripes 3-5 cells wide in the midline and LPM. (Note that the Nodal analogue in the zebrafish is called “Southpaw”, but we will call it “Nodal” for consistency with the terminology of Ref. ). An advantage of this system is that it is quasi-one-dimensional in the longitudinal direction, because the most important variables are Nodal and Lefty production on these 3 stripes, and the propagation of their gene expression fronts is clearly unidirectional in the posterior-to-anterior direction. A two-dimensional model of the Nodal/Lefty system in the zebrafish embryo has been set up and simulated , as we review in Section [sec:2D-model]. In Sections [sec:1D-model] and [sec:results], we work through the consequences of a one-dimensional idealization of it, representing not the L/R axis (as in ) but the anterior-posterior axis. We idealize Nodal and Lefty production to have step-function turn-ons, when a threshold function linear in their concentrations crosses a threshold. We classify all possible behaviors in the limit that Nodal is unaffected by Lefty, and demonstrate the existence of intervals along which the Lefty profile must be *pinned* to that of Nodal. In Section [sec:oscillating], we find parameter regimes that allow uniform spatial oscillations to occur. Two-dimensional model ===================== In this section, we outline what would be a minimal model for the two-dimensional Nodal/Lefty system. This will motivate the simplification we make, in section [sec:1D-model], to a one dimensional model of similar form. The degrees of freedom in the model we use here are only the concentrations *N*(*r*, *t*) of Nodal and *L*(*r*, *t*) of Lefty, as found unbound in the extracellular fluid. In this section we let *r* represent a two-component position (*x*, *y*) on the embryo, which is practically a flat two-dimensional layer enveloping the yolk cell; in later sections on the one-dimensional model it will be replaced by the variable *y*. Also, *t* is the time. The geometry is idealized so that the three oep-expressing stripes extend indefinitely for *y* > 0, and are straight, parallel, and of unvarying widths, as shown in Fig. [fig:layout]. That means that, away from the baseline at *y* = 0, the model system has a translational invariance. That permits one to talk in a mathematically precise way about a limiting behavior in the model, in particular *uniform motion of a front*. Whether this is pertinent to the actual embryo depends on whether initial transients persist for a short time compared to the stages in which the L/R signal propagates, and for a short distance compared to the inter-stripe spacing or to the embryo’s length, which we know to be the case. Lefty is produced at a rate *s**L*(*r*, *t*) [concentration per unit time per unit area] which can be nonzero only at points lying within the midline stripe, since only those cells can produce Lefty. The function *s**L*(*r*, *t*) depends, in our simplified model, only on the local, instantaneous concentrations of Nodal and Lefty; the exact functional dependence is specified in Sec. [sec:N-L-production]. More precisely, the production function depending on *N*(*r*, *t*) and *L*(*r*, *t*) should rather be the transcription rate of the Lefty mRNA, while the source rate *s**L*(*r*, *t*) of Lefty protein is in turn proportional to the mRNA concentration, and thus should lag the production function we use by roughly the mRNA lifetime. (Indeed, a model including that lag was successfully fitted to quantitative data in, Chapter 3.) But in this paper, we assume that the source rate is proportional to the instantaneous production function; that is equivalent to assuming the Lefty mRNA is transcribed copiously and decays quickly compared to other time scales. Similarly, Nodal is produced at a rate *s**N*(*r*, *t*) which is nonzero only within stripes in the left LPM and right LPM, being described by a production function of the same form as that of Lefty in the midline, and with the same production function in the right and left LPM, so *no prior asymmetry* is assumed. As with Lefty, we have made an approximation of short-lived mRNA so that the mRNA content is not treated as an independent variable. In the case of Nodal, there is a second place where *s**N*(*r*, *t*) is nonzero, which are the two tailbuds, located as shown in Figure [fig:layout]. This production is *not* modulated by *N*(*r*, *t*) or by *L*(*r*, *t*), and may be assumed constant in time for simplicity (in reality it gradually diminishes ). The tailbud production has some left-right asymmetry, which in our model is responsible for *all* downstream asymmetries. The origin of the asymmetry is imagined to be in the nearby Kupffer’s vesicle  (the organ in which the special cilia are found in zebrafish). Diffusion behavior ------------------ Once produced, Nodal and Lefty are assumed to be passive, noninteracting densities that satisfy a diffusion equation with diffusion constants *D**N* and *D**L* respectively. Furthermore, they are assumed to have degradation rates, respectively *τ**N*− 1 and *τ**L*− 1. Although one expects the two signaling molecules to have similar values of these parameters, we allow them to be different in general, because (as we derive in the results sections) this inequality induces qualitiatively *different* behaviors. (The differential equations incorporating these parameters are exactly like Eqs.  and  of the one-dimensional model, except the second derivative with respect to *y* is replaced by the two-dimensional divergence.) Characteristic length and velocity scales can be constructed from the diffusion constant and degradation times: $$l\_{N}=\sqrt{D\_{N}\tau\_{N}};$$ $$v\_{N}=\sqrt{\frac{D\_{N}}{\tau\_{N}}},$$ and we define *l**L* and *v**L* similarly. These scales determine the typical range in space that the signal molecules travel, and the typical speed of the production fronts we shall study. Conditions for Nodal and Lefty production ----------------------------------------- A key property determining the behavior is that expression (and production) of either Nodal or Lefty is *promoted* by the presence of Nodal and *inhibited* by the presence of Lefty. Thus, both production functions *s**N*(*r*, *t*) and *s**L*(*r*, *t*) are increasing as a function of *N* and decreasing as a function of *L*. As we explain next, one can consider several versions of our model with different functional forms for the production functions. Let us first lay out how production is actually regulated. Along the LPM and midline stripes, Nodal and Lefty protein bind to the plasma membrane with the help of oep co-receptors ; we may say ``oep" is in one of three states: unbound, Nodal-bound or Lefty-bound. We assume that only a negligible fraction of all Nodal and Lefty produced are thus sequestered by the receptors, so that uptake or release of the signaling molecules is not included in *s**N*(*r*, *t*) and *s**L*(*r*, *t*). In turn, Nodal-bound oep induces a signaling cascade that promotes transcription of Nodal (in the LPM) or Lefty (on the midline). The fraction of Nodal-bound oep, *ϕ*(*N*, *L*), is $$\phi(N,L)=\frac{N}{k^{-1}+N+L} \label{eq:phi}$$ The oep receptors on the midline are believed to be identical to those on the LPM stripes, so we use the *same* function *ϕ*(*N*, *L*) in both stripes with the same equilibrium constant *k* (where *k*− 1 has the units of concentration). The Nodal production function *s**N* is assumed to be proportional to a Hill function $$f\_N(\phi)=\frac{1}{1+ (\phi\_{\*N}/\phi)^{n\_H}}. \label{eq:Hill}$$ This is a rounded step from 0 to 1 centered at a threshold parameter *ϕ*\* *N*; the step gets sharper as the Hill exponent *n**H* grows large. A microscopic rationalization for the form in Eq.  would be that the Nodal gene’s regulatory region cooperatively binds *n**H* copies of the signal or transcription factor induced by Nodal-bound oep. (Ref.  used a different production function than that also approximates a step function with a parameter controlling the sharpness.) The actual production rate is *s**N*(*r*, *t*) = *s**N*0*f**N*(*ϕ*(*N*, *L*)) where *s**N*0 is the maximum, or saturated, production rate. A similar function *f**L*(*ϕ*), with (it is expected) a rather different threshold *ϕ*\* *L*, and a saturated production rate *s**L*0, controls Lefty production. Part of the motivation for including the intermediate function *ϕ* in the model is to represent mutants in which oep is under- or over-expressed. That would have the effect of multiplying *ϕ* in by a constant, or equivalently of dividing *ϕ*\* *N* and *ϕ*\* *L* by that constant . We adopt two simplifying assumptions, either of which could be made independently of the other. The first pertains to the threshold for turning production on or off. In place of the nonlinear *ϕ* function, we follow Ref.  and instead use two threshold functions linear in the concentrations: [eq:Cij] *C**N*(*N*, *L*) = *C**N**N**N* − *C**N**L**L* *C**L*(*N*, *L*) = *C**L**N**N* − *C**L**L**L* where *N* = *N*(*r*, *t*) and *L* = *L*(*r*, *t*) are Nodal and Lefty concentration, respectively, and *C**N**N*, *C**N**L*, *C**L**N*, and *C**L**L* are constants. Production of *N* or *L* respectively turns on when this function exceeds a threshold parameter *C**N* \* or *C**L* \*, which should be positive, since a certain concentration of Nodal is needed to turn on either production, even in the absence of Lefty. We set *C**N* \* ≡ *C**L* \* ≡ 1 without loss of generality by scaling the four coefficients in Eqs. . The relation of Eqs.  to the *ϕ* function in is that the condition to be over threshold, *ϕ* > *ϕ*\* *N* or *ϕ* > *ϕ*\* *L*, is (by inserting Eq. ) mathematically equivalent to [eq:phi-to-linear] *k*(*ϕ*\* *N*− 1 − 1)*N* − *k**L* > 1;  *k*(*ϕ*\* *L*− 1 − 1)*N* − *k**L* > 1. Clearly, we can (roughly) identify the coefficients in with those in, in particular this viewpoint implies *C**N**L* ≡ *C**L**L*. Note that although the condition to be at threshold is captured exactly by the linear functions, away from threshold the production will depend on *N* and *L* in a different way when *n**H* < ∞. The linear form of Eqs.  and  will be more convenient mathematically. The second simplification we make is to assume *n**H* → ∞ in, i.e. a step-function turn-on of the production. In this case, Nodal production *s**N*(*r*, *t*) and Lefty production *s**L*(*r*, *t*) are given by: [eq:prodNL] $$\begin{cases} s\_{N}(r,t)=0 &,\: C\_{N}<1\\ 0\leq s\_{N}(r,t)\leq s\_{N0} &,\: C\_{N}=1\\ s\_{N}(r,t)=s\_{N0} &,\: C\_{N}> 1; \end{cases}\label{eq:prod\_N}$$ and $$\begin{cases} s\_{L}(r,t)=0 &,\: C\_{L}<1\\ 0\leq s\_{L}(r,t)\leq s\_{L0} &,\: C\_{L}=1\\ s\_{L}(r,t)=s\_{L0} &,\: C\_{L}>1. \end{cases}\label{eq:prod\_L}$$ Behavior of the two-dimensional model and comparison with experiments --------------------------------------------------------------------- The model initializes with zero Nodal or Lefty concentration in the domain (Fig. [fig:layout]), and with the 2 tailbuds producing Nodal that diffuses to the base of the 3 stripes. The left tailbud produces more than the right, which is responsible for all downstream asymmetries. Successful initiation at the base of the stripes requires that the Nodal produced in the tailbuds is sufficient to start self-sustaining Nodal production in the left LPM, but not the right LPM; production begins when thresholds given in are exceeded. The times at which this occurs differs for all 3 stripes depending on the difference between *ϕ*\* *N* and *ϕ*\* *L*, the amounts of Nodal produced by the left and right tailbuds, as well as the position of the tailbuds (closer to or farther from the midline, in a L/R symmetric way). Lefty, once produced on the midline, diffuses to the left and right LPM in equal amounts. Thus, the threshold function is constrained such that Nodal production turns off in the right LPM due to Lefty inhibition, but stays on in the left because of greater *N* there. This occurs in simulations for widely-varying parameter sets, for differences in tailbud production rates of 30%. As this asymmetry is decreased, the coefficients in, as well as the geometry of the problem, must be tuned more finely for successful initiation to occur. Experiments in zebrafish find that after an initial phase the wavefronts of Nodal and Lefty gene expression proceed at a fixed speed, and indeed this is reproduced in simulations: once initiated in a self-sustaining way, Nodal production on the left LPM switches on from the base upwards, and the leading edge of production moves at an asymptotically fixed rate along the stripe, followed by a front of Lefty production, moving at the same asymptotic rate on the midline. The main role of Nodal and Lefty in this model is to transmit the L/R asymmetric signal from the tailbud. Lefty in particular acts as a “barrier” preventing initiation of Nodal production in the right stripe: the Lefty concentration there is sufficiently high that Nodal diffusing from the left LPM is insufficient to initiate production there. Another necessary condition for successful propagation is that the function must not fall below 1 along the midline, or else Lefty production will cease, and Nodal produced in the left LPM will not be prevented from diffusing to the right LPM and initiating production further up the right stripe, leading to loss of L/R asymmetric propagation. Indeed, in Lefty knockout mutants, bilateral Nodal expression is observed. We intended to model the space and time-dependent behaviors in this model to, ultimately, * confirm the correctness of the basic picture by its qualitative agreement with experiments; * explain, and predict, the phenotypes of various mutations involving this system; * characterize the robustness of how the initial bias in the tailbud is amplified (as is manifested in the error rate); * show how the numerical values of the various parameters can be inferred (or as least bounded) on the basis of experiments. However, while it is understood in principle how we get a moving front, the mathematical formulation involves convolutions in two dimensions and does not include closed forms of simple functions. Furthermore, simulations suggested various possible regimes in which production might turn off again after the initial passage of a pulse, or in which the midline Lefty production had a gradual onset in space, much less sharp than the Nodal front. In order to analytically represent the functional form, and to comprehensively classify all regimes of the asymptotic behavior, we turned to a one dimensional model, which is the main focus of the rest of this paper. One-dimensional model ===================== The 2D model may be simplified to 1D by making the following simplifications. Since Nodal and Lefty expression occur on narrow, parallel stripes, we may replace the finite width of each stripe by a 1D line. Also, the concentration profile on one stripe is transmitted (with a diffusive lag) to another through the intervening space; this space can be replaced by lines as well, and we end up with a 5-line 1D model with diffusion between stripes given by *N**i*(*y*, *t*) = *k**D**N**i* − 1(*y*, *t* − Δ*t*) + *k**D**N**i* + 1(*y*, *t* − Δ*t*) + ... where *N**i* and *N**i* ± 1 are Nodal concentrations on adjacent stripes, and *k**D* and Δ*t* describe the rate of diffusion. In this paper, we will only focus on the front propagation of Nodal and Lefty gene expression in the steady state, in which case we further neglect inter-stripe diffusion and thus simplify the system to 1D. Diffusion equations ------------------- The partial differential equations for Nodal and Lefty are: [eq:d/dt] $$\frac{\partial N(y,t)}{\partial t}=D\_{N}\frac{\partial^{2}N(y,t)}{\partial y^{2}}-\frac{1}{\tau\_{N}}N(y,t)+s\_{N}(y,t) \label{eq:dN/dt}$$ $$\frac{\partial L(y,t)}{\partial t}=D\_{L}\frac{\partial^{2}L(y,t)}{\partial y^{2}}-\frac{1}{\tau\_{L}}L(y,t)+s\_{L}(y,t) \label{eq:dL/dt}$$ The equations appear linear at first glance, but the *s**N* and *s**L* terms are in fact nonlinear, since (according to and ) they depend on (*N*, *L*) via step functions. Nevertheless, if either *C**N*(*y*) < 1 or *C**N*(*y*) > 1 throughout an interval, then *s**N*(*y*) is constant (either 0 or *s**N*0) in that interval, in which case the differential equation for *N*(*y*) may be solved, and similarly for *L*(*y*). Evidently, the key parameter in such a solution is the position *y* at which the threshold is crossed and production turns on or off. Surprisingly, it is also generically possible that the threshold function is pinned at 1 throughout a finite interval (see Sec. [sec:pinned]). Within such an interval, the production function is indefinite (the middle case in Eqs.  and ) and the diffusion equation can no longer be used to find the solution. However, the threshold condition becomes an equality interval and gives the solution in such intervals. The complications in our results have to do mostly with handling such “pinned” intervals. The simplest possible solution is for constant, homogeneous production, e.g. *s**N*(*y*, *t*) = *s**N*0 for all (*y*, *t*) for Nodal. The solution is *N*(*y*, *t*) = *N*0, or *L*(*y*, *t*) = *L*0 in the analogous case for Lefty, where *N*0 ≡ *τ**N**s**N*0;  *L*0 ≡ *τ**L**s**L*0. The middle of any interval of production looks locally like a piece of this homogeneous system, so it is not surprising that *N*0 and *L*0 serve as a reference level for all solutions. In particular, since *s**N*(*y*, *t*) ≤ *s**N*0 everywhere, any solution must have *N*(*y*, *t*) ≤ *N*0 everywhere, and similarly for *L*(*y*, *t*). The traveling wavefront solution -------------------------------- Experiments show the Nodal front is typically ahead of the Lefty front . If it is sufficiently far ahead, the Lefty concentration there is negligible, and it is sufficient to consider a Nodal-only situation, in which the front of Nodal production moves at constant speed *v*: $$s\_{N}(y,t)=\begin{cases} s\_{N0} &,\: y<vt\\ 0 &,\: y>vt. \end{cases}\label{eq:wave prod}$$ The front shows a constant shape to a viewer traveling at velocity *v*, i.e. *N*(*y*, *t*) = *N*(*y* − *v**t*). When this is inserted into Eq. , it becomes an ordinary, second-order, linear differential equation, which is solved by an exponential: $$N(y-vt)=\begin{cases} N\_{0}[1-g\_{N}e^{\kappa\_{N}(y-vt)}] &,\: y-vt<0\\ N\_{0}(1-g\_{N})e^{-\kappa'\_{N}(y-vt)} &,\: y-vt>0. \end{cases} \label{eq:N-for-step-function}$$ Substituting this ansatz into Eq. , we find that *κ**N* and *κ*ʹ*N* are given by [eq:kappaN] $$\kappa\_{N}^{2}+\frac{v}{D\_{N}}\kappa\_{N}-\frac{1}{l\_{N}^{2}}=0$$ $$\kappa\_{N}^{'2}-\frac{v}{D\_{N}}\kappa'\_{N}-\frac{1}{l\_{N}^{2}}=0$$ By matching the boundary conditions at *y* − *v**t* = 0, we find *g**N* is given by $$g\_{N}=\frac{\kappa\_{N}'}{\kappa\_{N}+\kappa\_{N}'}\label{eq:g\_N}$$ Essentially, this means that the shape of the concentration profile depends on the relative magnitudes of *v* and *v**N*. Physically, for *v* > 0 and *v* ≫ *v**N*, the wavefront is traveling forward too fast for there to be much diffusion from the producing region to the non-producing region, and so the concentration at the front is low (*g**N* → 1); conversely, for *v* < 0 and ∣*v*∣ ≫ *v**N*, the concentration at the front is high for the same reason. (It is reminiscent of the Doppler effect, except that the Nodal signal spreads by the diffusion equation rather than the wave equation.) There is in fact a symmetry relating solutions with *v* > 0 and *v* < 0, whereby the profiles of an advancing front of speed *v* and a retreating front (reflected in *y*) of speed  − *v* add up to a uniform profile, because adding the production of an advancing front and that of a retreating front simply obtains a uniform producing line. Furthermore, when *v* = 0 the height of the front is simply $\frac{1}{2}N\_{0}$, and the stationary concentration profile simplifies to: $$N(y)=\begin{cases} N\_{0}(1-\frac{1}{2}e^{y/l\_{N}}) &,\: y<0\\ \frac{1}{2}N\_{0}e^{-y/l\_{N}} &,\: y>0 \end{cases}\label{eq:N\_simple\_step}$$ (Notice that we set *y* = 0 to be the position of the front.) Since the coefficients in the threshold equations and are in general different, the Lefty wavefront is in general displaced by a distance Δ*y* from the Nodal one. We have $$L(y-vt)=\begin{cases} L\_{0}[1-g\_{L}e^{\kappa\_{L}(y-vt+\Delta y)}] &,\: y-vt<-\Delta y\\ L\_{0}(1-g\_{L})e^{-\kappa'\_{L}(y-vt+\Delta y)} &,\: y-vt>-\Delta y \end{cases}\label{eq:L(y-vt)}$$ where Δ*y* > 0 signifies that the Nodal front is ahead of the Lefty front, and *κ**L*, *κ*ʹ*L* and *g**L* are defined analogously to Eqs.  and. Finally, to find *v* and Δ*y*, we observe that *N* = *N*0(1 − *g**N*) at the Nodal front, and *L* = *L*0(1 − *g**L*) at the Lefty front, and substitute and into the threshold functions ; assuming that Δ*y* > 0: [eq:vDy] *C**N**N**N*0(1 − *g**N*) − *C**N**L**L*0(1 − *g**L*)*e*− *κ*ʹ*L*Δ*y* = 1 *C**L**N**N*0[1 − *g**N**e*− *κ**N*Δ*y*] − *C**L**L**L*0(1 − *g**L*) = 1 from which we can solve (non-trivially) for *v* and Δ*y*. Nondimensionalization --------------------- Certain combinations of parameter changes are trivial, in the sense that the solutions look the same apart from rescalings of the distance, time, or concentrations. Since we are faced with the difficulty of exploring a large parameter space, we wish to discover the minimum number of nontrivial parameters. To this end, we will scale all variables and parameters in the problem so as to make them dimensionless. First, we introduce the scaled Nodal and Lefty concentrations $$\tilde{N}(y)\equiv\frac{N}{N\_{0}},$$ $$\tilde{L}(y)\equiv\frac{L}{L\_{0}},$$ so that 0 ≤ *Ñ*(*y*),  *L̃*(*y*) ≤ 1. Correspondingly, *C̃**N**N*, *C̃**N**L*, *C̃**L**N* and *C̃**L**L* are simply the coefficients in the (scaled form of the) threshold functions and : [eq:scaled-Cij] $$\begin{aligned} \tilde{C}\_{NN} &\equiv& C\_{NN}N\_{0};\\ \tilde{C}\_{NL} &\equiv& C\_{NL}L\_{0};\\ \tilde{C}\_{LN} &\equiv& C\_{LN}N\_{0};\\ \tilde{C}\_{LL} &\equiv& C\_{LL}L\_{0}.\end{aligned}$$ For example, Nodal production turns on when *C̃**N**N**Ñ* − *C̃**N**L**L̃* > 1, and similarly for *L̃*. The parameters are pertinent even in a spatially uniform situation. The remaining parameters relate to time or length scales. We scale length and time such that the parameters for Nodal become unity: [eq:scaled-l-v] $$\begin{aligned} \tilde{l}\_L &\equiv& l\_{L}/ l\_{N}; \\ \tilde{v}\_L &\equiv& v\_{L}/ v\_{N}; \\ \tilde{v} &\equiv& v/v\_{N}.\end{aligned}$$ The last two parameters are relevant (in a moving steady state) if and only if *v* ≠ 0. Implicitly, Eqs.  also nondimensionalize the time scale: *τ̃**L* ≡ *l̃**L*/*ṽ**L* In total, we have seven nontrivial parameters for the one-dimensional problem, defined in Eqs.  and . By rescaling space and time with $y\rightarrow\frac{y}{l\_N}$ and $t\rightarrow\frac{t}{\tau\_N}$ we find that Eqs.  become [eq:tilded/dt] $$\frac{\partial \tilde{N}(y,t)}{\partial t}=\frac{\partial^{2}\tilde{N}(y,t)}{\partial y^{2}}-\tilde{N}(y,t)+\tilde{s}\_{N}(y,t) \label{eq:dtildeN/dt}$$ $$\frac{\partial \tilde{L}(y,t)}{\partial t}=\tilde{l}\_L\tilde{v}\_L\frac{\partial^{2}\tilde{L}(y,t)}{\partial y^{2}}- \frac{1}{\tilde{\tau}\_{L}}\tilde{L}(y,t)+\tilde{s}\_{L}(y,t) \label{eq:dtildeL/dt}$$ where 0 ≤ *s̃**N*(*y*, *t*) ≤ 1 and $0\leq\tilde{s}\_L(y,t)\leq\frac{1}{\tau\_L}$. The (*N*, *L*) plane: zero dimensional case ------------------------------------------- In Sec. [sec:traveling-wavefront], asymptotically in either direction both *N* and *L* approach constant, uniform values. Therefore, in order to classify possible wavefront solutions, it is helpful first to classify all possible uniform solutions. Fig. [fig:N-L-cases] shows the (*Ñ*, *L̃*) phase plane with lines representing the $\dot{\tilde{N}}=0$ and $\dot{\tilde{L}}=0$ isoclines. The most important feature of these diagrams is the relation of the two isoclines to each other and to the bounding lines: this determines the possible steady states. There are 4 qualitative cases for the relation of $\dot{\tilde{N}}$ to $\dot{\tilde{L}}$: 1. *N*-isocline is to left of *L*-isocline 2. *L*-isocline crosses *N*-isocline left to right 3. *L*-isocline is to left of *N*-isocline 4. *N*-isocline crosses *L*-isocline left to right. (There are other mathematical cases in which one of the isoclines does not pass through the square at all, but that is obviously to be discarded, since there would be no way ever to produce the corresponding signal.) These phase planes are generalizations of the wild-type scenario: as noted after Eq. , if we justify the linear threshold equations from the single-binding site receptor behavior, we necessarily find *C̃**N**L* = *C̃**L**L*; the geometrical expresion of this on the (*Ñ*, *L̃*) plane is that the two isoclines cross on the negative *L* axis, hence only cases (a) and (c) could be realized. However, our purpose here is to study the general behavior of these equations that could arise in any embryonic system with propagating Nodal and Lefty fronts (since this is conserved in all vertebrates), and perhaps even mathematically equivalent equations having a quite different biological interpretation. In general, binding of Nodal and Lefty might be cooperative at the receptor; furthermore, gene regulation further downstream might be cooperative. Thus, the general form of the isoclines is nonlinear and we expect that all these topologies are imaginable. There is always a stable fixed point at *Ñ* = *L̃* = 0; this is the only fixed point in case (iii), which is thus trivial. If the *Ñ*-isocline and the *L̃*-isocline both intersect the upper border, that is if *C̃**N**N* − *C̃**N**L* > 1 and *C̃**L**N* − *C̃**L**L* > 1, we have a stable fixed point with both *N* and *L* saturated, which underlies the basic wavefront behavior of Sec. [sec:traveling-wavefront] above. However, if the *L̃*-isocline intersects the right edge, to the right of the *Ñ*-isocline, i.e. when *C̃**L**N* − *C̃**L**L* < 1 as *may* happen in either case (i) or case (ii), then the stable fixed point at *Ñ* = 1 exists at a less than saturated value, which we will define *L̃*\*: $$\tilde{L}\_{\*}\equiv\frac{\tilde{C}\_{LN}-1}{\tilde{C}\_{LL}}\label{eq:L\*}$$ It is not obvious that *L̃* being *pinned* at *L̃*\* < 1 exists in the case with spatial variation, but we will in fact show in Sec. [sec:pinned] that analogous “pinned” intervals arise in 1D traveling wave solutions. Finally, in case (iv), there is a fixed point at which *both* *Ñ* and *L̃* are less than saturated; this may be stable, but also may be unstable to periodic oscillations around the fixed point, as elaborated in Sec. [sec:oscillating]. Results: steady-state fronts ============================ To explore the large parameter space of the 1D model, we will consider special cases; the aim is a taxonomy of the possible behaviors of the model, with the hope that any general case will be qualitatively similar to one of these behaviors seen in the special cases. In particular, we concern ourselves with the steady state limiting behaviors, after all transients have died out. Any nonzero Nodal and Lefty wavefronts will then be traveling at identical speed *v*. *C̃**N**L* = 0 and *v* = 0 -------------------------- For the largest part of our story, we consider the case that *C̃**N**L* = 0, that is, Nodal is unaffected by Lefty (i.e. Nodal is an autonomous variable). We first solve for the behaviors of the one-component system in which Nodal activates Nodal, which already has moving front solutions; then, we treat the Nodal concentration *N*(*y*, *t*) as if it were externally imposed and solve for another one-component problem representing the Lefty concentration field. The reasons for choosing what seems to be a major simplification are (i) the reverse approximation, in which Lefty is autonomous, has only the trivial solution with Lefty off (since Lefty only inhibits); (ii) experimentally, the Nodal front leads and the Lefty front follows; thus, it is plausible that the inhibition from Lefty has no important effect on the Nodal front (serving only to prevent the initiation of a front on the other side); (iii) in Section [sec:N-L-plane] we see that only the topology of the (*N*, *L*) plot really matters for the dependnce on coefficients *C**i**j*. The second simplification we make is to set *v* = 0. As mentioned, the Nodal and Lefty fronts are traveling at equal speed in the steady state, and are thus stationary in the comoving frame. In fact, the only difference between the *v* = 0 and *v* ≠ 0 cases is the steepness of their exponential profiles at the front. This does *not* qualitatively affect the results we present below. With *C̃**N**L* = 0 and *v* = 0, the Nodal concentration takes the form given by, or in dimensionless form, $$\tilde{N}(y)=\begin{cases} 1-\frac{1}{2}e^{y} &,\: y<0\\ \frac{1}{2}e^{-y} &,\: y>0 \end{cases}\label{eq:N\_simple\_step\_dimless}$$ and our 7-dimensional parameter space reduces to a 3-dimensional one, with parameters *l̃**L*, *C̃**L**L*, and *L̃*\* (which is more convenient for our purposes than *C̃**L**N*). Our goal is to classify all the possible types of behavior of the Lefty profile as these parameters are varied. Note that *C̃**N**N* and *C̃**L**N* are always greater than 1, or else N and L will never have their production turned on. Also, let the position where Lefty production turns on be *y* = *y**L*; it may be on either side of the independent Nodal front *y* = 0. “Pinned” intervals ------------------ It soon became apparent that the behavior of *L* did not only consist of intervals of full production *s**L* = *s*0*L*, where *C**L*(*y*) > 1, and zero production, where *C**L*(*y*) < 1, but also intervals where the threshold function is *pinned* at *C**L*(*y*) = 1. The clearest example of this is to consider what happens when *L̃*\* < 1 as *y* →  − ∞, where our background Nodal profile goes to a limit of *Ñ* → 1. Since we want *L* to be produced somewhere, *C̃**L**N* ≡ *C**L**N**N*0 > 1 necessarily (this is with *L* = 0). As *y* →  − ∞ and *Ñ* → 1, *L*(*y*) = 0 is mathematically inconsistent, and so Lefty production must turn on. Now, by definition *L̃*\* is the amount of *L* that turns off its own production when *N* = *N*0, so *L̃*\* < 1 means that before *L̃* → 1, Lefty will turn off its own production again. This is a “paradox” in which Lefty production cannot be fully on or fully off, and it is only resolved if we let the threshold function be *pinned* at 1, i.e. *C**L*(*y* →  − ∞) ≡ 1, such that 0 < *s̃**L* < 1. Constraining *C**L*(*y*) means that *L̃*(*y*) is completely determined by *Ñ*(*y*) in this *pinned* interval: $$\tilde{L}(y) \equiv \frac{1} {\tilde{C}\_{LL}}\label{eq:L\_pinned} \bigg(\tilde{C}\_{LN}\tilde{N}(y)-1\bigg)$$ Now, we explicitly work out the possible behaviors of *L* when *L̃*\*, *l̃**L* and *C̃**L**L* are varied. *L* will be described in terms of the types of production occurring along the line, from  − ∞ to ∞. Full production is denoted “1”, zero production denoted “0”, and the pinned interval denoted “p”. For example, the background Nodal profile is characterized as {1,0}, which is shorthand for an interval of full production adjoining one of zero production. In the following, we split up the cases into *L̃*\* < 1 and *L̃*\* > 1. We find that the most important parameter is the ratio of length scales, *l̃**L*, which determines how various pinned intervals arise in the behavior of Lefty. Note that the value of *ṽ**L* does not enter at all. Classification of 1D model -------------------------- ### Case *L̃*\* < 1: {p,0} and {p,1,0} $ \begin{array}{cc} \includegraphics[width=3.3in]{new\_p0.eps} \\ \includegraphics[width=3.3in]{new\_p10.eps} \end{array}$ As already shown, there is a pinned interval extending to *y* →  − ∞, where *L* is completely determined. The production *s**L*(*y*) is also completely determined, and is derived as follows. Substituting back into ([eq:dL/dt]): $$\frac{D\_{L}}{\tilde{C}\_{LL}}\frac{d^2\tilde{N}(y)}{dy^2}-\frac{1}{\tau\_{L}}\tilde{L}+\tilde{s}\_{L}(y)=0$$ Now using ([eq:dN/dt]) to eliminate *d*2*Ñ*(*y*)/*d**y*2, Lefty production in the pinned interval is given by: $$\tilde{s}\_{L}(y)=\frac{1}{\tilde{C}\_{LL}}(\tilde{C}\_{LN}[(1-\tilde{l}\_L^{2})\tilde{N}(y)+\tilde{l}\_L^{2}\tilde{s}\_{N}(y)]-1) \label{eq:production\_pinned}$$ where $\tilde{s}\_{N}(y)=\begin{cases} 0 &, y>0\\ 1 &, y<0 \end{cases}$. The point of finding this expression is that as *y* →  − ∞, we see that *s̃**L* → *L̃*\* < 1. This simply means that as *L̃*\* goes to zero, so does the asymptotic production of Lefty. The values of *l̃**L*, *C̃**L**N* or *C̃**L**L* in ([eq:productionpinned]) are free to vary so long as *L̃* in is between 0 and 1. On the other hand, the location of the front separating the pinned interval and the region of zero production must be determined from matching both *L̃* and *d**L̃*/*d**y* on either side of the boundary. It is possible to end up with a value of *y**L* where *s̃**L* lies outside [0,1]. Indeed, checking when when this occurs using ([eq:productionpinned]) demonstrates that {p,0} is inconsistent when *L̃*\**l̃**L* > 1 Here, there are three regions instead: pinned, full production, and no production (in short, {p,1,0}). As *L̃*\**l̃**L* increases beyond 1, the length of the fully producing interval increases. Simulations and solving for the boundary conditions validate this (see figure [fig:pin1]). ### Case *L̃*\* > 1: {1,0} and {1,p,0} $ \begin{array}{cc} \includegraphics[width=3.3in]{new\_10.eps} \\ \includegraphics[width=3.3in]{new\_1p0.eps} \end{array}$ The traveling wavefront given by ([eq:L(y-vt)]) is still not guaranteed when *L̃*\* > 1. We can see this by considering the limit at which *l̃**L* ≪ 1: at the front of Lefty production, *L̃* goes from 1 to 0 with a background of nearly constant *N*. Since *L* inhibits its own production, the apparent paradox is that *d**C**L*(*y*)/*d**y* > 0, i.e. the regions of zero and full production are reversed! Simulations show that as *l̃**L* is decreased and the slope of *L* made steeper, once the slope exceeds that of the “pinned” Lefty ([eq:Lpinned]), {1,0} will transition to {1,p,0}. Basically, requiring that *d**C**L*(*y*)/*d**y* < 0 at the front is equivalent to this condition. To write out explicit conditions for entering the {1,p,0} state, we should consider both cases *y**L* > 0 and *y**L* < 0, for which *Ñ* takes different forms given in. Thus, the condition for {1,p,0} is that *C**L*(*y*) crosses the threshold in the other direction: *d**C**L*(*y**L*)/*d**y* > 0, giving: $$\begin{aligned} y\_{L}&>0:\;\tilde{l}\_L^{-1}>&1+\frac{2}{\tilde{C}\_{LL}} \label{eq:{1,pin,0}-1} ;\\ y\_{L}&<0:\;\tilde{l}\_L^{-1}>& 2\tilde{L}\_\* - 1. \label{eq:{1,pin,0}-1-1}\end{aligned}$$ (Note that both conditions require *l̃**L* < 1, i.e. the Nodal length scale is greater than Lefty, as argued above in the limiting case.) If the inequalities are in the opposite direction, then we are back to the traveling wavefront given by ([eq:L(y-vt)]), or in our notation 1,0. Here we can solve for the Lefty concentration profile fully: using we find that Δ*y* > 0 when $$\frac{1}{2}\tilde{C}\_{LN} - \frac{1}{2}\tilde{C}\_{LL} < 1.$$ |c|c| Interval patterns & Parameter conditions{1, 0} & *L̃*\* > 1,  *d**C**L*(*y**L*)/*d**y* < 0{1, *p*, 0} & *L̃*\* > 1,  *d**C**L*(*y**L*)/*d**y* > 0{*p*, 0} & *L̃*\* < 1,  *l̃**L*− 1 > *L̃*\*{*p*, 1, 0} & *L̃*\* < 1,  *l̃**L*− 1 < *L̃*\* Solutions oscillating in time ============================= In the previous section(s), we explored the limiting case of an autonomous, Lefty-independent Nodal production. We now address new phenomena which are possible when Nodal is significantly affected by Lefty. The most striking of these is the possibility of a time-oscillating solution, which we will study in the special case that all concentrations are uniform in space. The experimentally pertinent motivation is whether it is possible to generate traveling solutions, consisting of pulses of Nodal and Lefty, periodic in space and in time; one expects such solutions to be possible only if uniform oscillations are possible in the model. By examining the geometry of the (*N*, *L*) plane [Figure [fig:N-L]], it can be seen that only case (d) has the possibility of oscillations; this is shown with more detail in Fig. [fig:N-L-osc](b). Note that the fixed point in the center, since it has *N**F* < *N*0 and *L**F* < *L*0, corresponds to production rates less than saturation (*s**N* < *s**N*0 and *s**L* < *s**L*0), so we are in a “pinned” regime for *both* *N* and *L*, which is self-consistent with being on the isoclines for both. Parts of each cycle ------------------- The period of an oscillation must consist of four phases [as shown in Fig. [fig:N-L-osc](a)]: * Nodal and Lefty both on; Lefty grows faster than Nodal, until Nodal turns off; * Nodal off and Lefty on; Nodal decays, until Lefty turns off; * Nodal and Lefty both off; Lefty decays faster than Nodal, and they remaining Nodal that has not yet decayed is sufficient to turn on Nodal again; * Nodal on and Lefty off; Nodal grows, until it is sufficient to turn Lefty on too (and back to phase 1). Evidently, a key condition is on the degradation times, which govern the growth rates of Nodal or Lefty : we need *τ̃**L* ≡ *τ**L*/*τ**N* < 1. (Of course, the actual magnitude of the degradation times simply sets an overall time scale.) We can describe the history with a series of times *t**i*, i=1,2,..., at each of which one of the signals turns on or off. Between these times, the production rates are constant (either zero or saturated), and the differential equation is solved by a simple exponential: $$\tilde{N}(t) = \begin{cases} \tilde{N}(t\_i) e^{-(t-t\_i)/\tau\_N} &, \: s\_N=0\\ 1 -[1-\tilde{N}(t\_i)] e^{-(t-t\_i)/\tau\_N} &, \: s\_N=s\_{N0}, \end{cases} \label{eq:N-t-constant-s}$$ and similarly for *L̃*(*t*). To visualize the dynamics, it is convenient to draw the trajectory of (*Ñ*(*t*), *L̃*(*t*)) in the (*Ñ*, *L̃*) plane; in our equations, the time derivatives $\dot{\tilde{N}}\equiv d\tilde{N}/dt$ and $\dot{\tilde{L}}\equiv d\tilde{L}/dt$ are fuctions only of (*Ñ*, *L̃*). The lines *C̃**N*(*Ñ*, *L̃*) = 0 and *C̃**L*(*Ñ*, *L̃*) = 0 on this plot are the *isoclines*, meaning the places where (respectively) $\dot{\tilde{N}}$ and $\dot{\tilde{L}}$ change sign. (Isoclines were also used in the analysis of Ref. .) Each time the trajectory intersects an isocline is one of the times *t**i*; let the concentrations at these times be *Ñ**i* ≡ *Ñ*(*t**i*), *L̃**i* ≡ *L̃*(*t**i*). We can find the trajectory curve, and thus find solutions of the dynamical equations, by eliminating time and considering only the discrete map (*Ñ**i*, *L̃**i*) → (*Ñ**i* + 1, *L̃**i* + 1). We next find the functional form of this map. Consider phase III of the cycle, during which [according to ] *Ñ*(*t*) = *Ñ**i**e*− (*t* − *t**i*)/*τ**N* and *L̃*(*t*) = *L̃**i**e*− (*t* − *t**i*)/*τ**L*. Eliminating time, we get [eq:N-L-phases] $$\frac{\tilde{L}}{\tilde{L}\_i} = \Biggl[\frac{\tilde{N}}{\tilde{N}\_i}\Biggr]^{1/{{\tilde{\tau}\_L}}}. \label{eq:N-L-phaseIII}$$ The point (*Ñ**i* + 1, *L̃**i* + 1) is the intersection of the curve with the isocline *C̃**N*(*Ñ*, *L̃*) = 0. In phase IV, the trajectory would be $$\frac{\tilde{L}}{\tilde{L}\_i} = \Biggl[\frac{1-\tilde{N}}{1-\tilde{N}\_i}\Biggr]^{1/{{\tilde{\tau}\_L}}}, \label{eq:N-L-phaseIV}$$ and similarly in the other four phases of the cycle. Notice that the curve can possibly intersect the isocline in phases I or III only if *τ̃**L* < 1. (For example, whehn *τ̃**L* = 1 the trajectory in each phase is a straight line to one corner of the square.) To give the cycling behavior, it is necessary (but not sufficient) that the isoclines intersect as shown in Figure [fig:N-L-cases](b), which depends on two inequalities. The $\dot{\tilde{N}}$ and $\dot{\tilde{L}}$ isoclines’ respective intercepts at *L̃* = 0 satisfy 1/*C̃**N**N* < 1/*C̃**L**N*; their intercepts at *L̃* = 1 must be in the reverse order, (1 + *C̃**N**L*)/*C̃**N**N* > (1 + *C**L**L*)/*C̃**L**N*. The two inequalities can be written together as $$\frac{\tilde{C}\_{LL}+1}{\tilde{C}\_{NL}+1} < \frac{\tilde{C}\_{LN}} {\tilde{C}\_{NN}} < 1 \label{eq:osc-fp-cond}$$ This tends to be satisfied When conditions are satisfied, the isoclines cross at a fixed point (*Ñ*\*, *L̃*\*) where $$\begin{aligned} \tilde{N}\_\* &\equiv& \frac {\tilde{C}\_{NL}-\tilde{C}\_{LL}} {\tilde{C}\_{LN} \tilde{C}\_{NL}- \tilde{C}\_{NN}\tilde{C}\_{LL}} \\ \tilde{L}\_\* &\equiv& \frac {\tilde{C}\_{NN}-\tilde{C}\_{LN}} {\tilde{C}\_{LN} \tilde{C}\_{NL}- \tilde{C}\_{NN}\tilde{C}\_{LL}}.\end{aligned}$$ Stability conditions -------------------- The trajectory always tends to spiral around (*Ñ*\*, *L̃*\*). as illustrated in Figure [fig:N-L-osc](b). However, there are different possibile limiting behaviors depending on whether the trajectory spirals inwards (stable fixed point) or outwards, and on where it ends up. There is always a stable fixed point at (*Ñ*, *L̃*) = (0, 0) [the empty system] and provided the isoclines intercept the upper edge of (*N*, *L*) space [as in Figure  [fig:N-L-osc](b)], there is also a stable fixed point at (*Ñ*, *L̃*) = (1, 1) [both Nodal and Lefty turned on and saturated]. Some of the possibilities: * The fixed point is stable; there is one unstable cycle, such that a trajectory starting inside it tends to (*Ñ*\*, *L̃*\*) and a trajectory starting outside it tends to (0, 0) or (1, 1). * There are no cycles (stable or unstable); the fixed point is unstable, and spirals out to (0, 0) or (1, 1). * The fixed point is unstable, and spirals out to a stable cycle (beyond which is an untable cycle, as in case (1)); this is the case of interest to us. To evaluate the stability, we must linearize near the fixed point in terms of *δ**Ñ* ≡ *Ñ* − *Ñ*\*, *δ**L̃* ≡ *L̃* − *L̃*\*. The trajectory (e.g. Eqs.  or ) becomes $\delta \tilde{L}=m\_{\rm ph}\delta \tilde{N}$, also $m\_{\rm ph}$ is the slope in each respective phase of the cycle, ${\rm ph}\to$ I,II,III, or IV. Thus, $$m\_{{\scriptstyle \rm III}}= \frac{1}{{{\tilde{\tau}\_L}}} \frac{\tilde{L}\_\*}{\tilde{N}\_\*};$$ $$m\_{{\scriptstyle \rm IV}}= - \frac{1}{{{\tilde{\tau}\_L}}} \Big(\frac{\tilde{L}\_\*}{1-\tilde{N}\_\*}\Big);$$ and similarly for the rest. A convenient viewpoint on the condition is that $$m\_{{\scriptstyle \rm ph}}= {{\tilde{\tau}\_L}}m\_{{{\scriptstyle \rm ph}},0}$$ where $m\_{{{\scriptstyle \rm ph}},0}$ is the slope of the line from (*Ñ*\*, *L̃*\*) to the appropriate corner of the square. Meanwhile, the slopes of the $\dot{\tilde{N}}=0$ and $\dot{\tilde{L}}=0$ isoclines are $$\begin{aligned} m\_N &\equiv& \tilde{C}\_{NN}/\tilde{C}\_{NL}, \\ m\_L &\equiv& \tilde{C}\_{LN}/\tilde{C}\_{LL}.\end{aligned}$$ Evidently, the additional (and sufficient) condition to get a spiraling behavior is that $m\_N < m\_{{\scriptstyle \rm III}}, m\_{{\scriptstyle \rm I}}< m\_L$. Since *τ̃**L* < 1 and $m\_{{\scriptstyle \rm ph}}$ is of order 1/*τ̃**L* for $\rm ph=I,II,III,IV$, it follows that *m**L* is typically large and hence *C**L**L* must be relatively small. Solving one linear equation for each phase of the cycle, we find after four isocline intersections that (*δ**Ñ**i* + 4, *δ**L̃**i* + 4) = Λ(*δ**Ñ**i*, *δ**L̃**i*), where $$\Lambda \equiv \frac{R\_{{\scriptstyle \rm I}}R\_{{\scriptstyle \rm III}}}{R\_{{\scriptstyle \rm II}}R\_{{\scriptstyle \rm IV}}}$$ where $$R\_{{\scriptstyle \rm ph}}\equiv \frac{m\_L-m\_{{\scriptstyle \rm ph}}}{m\_N-m\_{{\scriptstyle \rm ph}}}.$$ for $\rm ph= I,II,III, IV$. Thus, the fixed point (*Ñ*\*, *L̃*\*) is stable if and only if Λ < 1. An interesting special case is when (*Ñ*\*, *L̃*\*) = (1/2, 1/2): in that case, Λ ≡ 1 so the fixed point is always marginal. To get a better understanding of the stability, consider the case that $m\_N \ll m\_{{\scriptstyle \rm I}},..., m\_{{\scriptstyle \rm IV}}\ll
arxiv_0000124
Orbital fitting of imaged planetary companions with high eccentricities and unbound orbits ========================================================================================== Application to Fomalhaut b and PZ Telecopii B[1](#fn1) ====================================================== ### Received....; Accepted.... For such orbits, standard MCMC codes assuming only bound orbits may be inappropriate. Our goal is to develop a new MCMC implementation able to handle bound and unbound orbits as well in a continuous manner, and to apply it to the cases of Fomalhaut b and PZ Tel B. We present here this code, based on the use of universal Keplerian variables and Stumpff functions. We present two versions of this code, the second one using a different set of angular variables designed to avoid degeneracies arising when the projected orbital motion is quasi-radial, as it is the case for PZ Tel B. We also present additional observations of PZ Tel B The code is applied to Fomalhaut b and PZ Tel B. Concerning Fomalhaut b, we confirm previous results, but we show that on the sole basis of the astrometric data, open orbital solutions are also possible. The eccentricity distribution nevertheless still peaks around  ∼ 0.9 in the bound regime. We present a first successful orbital fit of PZ Tel B, showing in particular that while both bound and unbound orbital solutions are equally possible, the eccentricity distribution presents a sharp peak very close to *e* = 1, meaning a quasi-parabolic orbit. It was recently suggested that the presence of unseen inner companions to imaged ones may lead orbital fitting algorithms to artificially give very high eccentricities. We show that this caveat is unlikely to apply to Fomalhaut b. Concerning PZ Tel B, we derive a possible solution involving an inner   ∼ 12*M*Jup companion that would mimic a *e* = 1 orbit despite a real eccentricity around 0.7, but a dynamical analysis reveals that such a system would not be stable. We thus conclude that our orbital fit is robust. Introduction ============ A growing number of substellar companions are nowadays regularly discovered and characterised by direct imaging. These objects are usually massive (larger than a few Jupiter masses) and orbit at wide separations (typically  < 20 AU). Some of them as sufficiently close to their host star to allow detection of their orbital motion. Astrometric follow up gives then access to constraints on their orbits . The use of Markov-Chain Monte-Carlo (MCMC) algorithms to fit orbits of substellar or planetary companions has become common now. This statistical approach is particularly well suited for directly imaged companions, as their orbit is usually only partially and unequally covered by observations. Some of the fitted orbits appear surprisingly very eccentric. This is for instance the case for Fomalhaut b and PZ Tel B. Fomalhaut b is an imaged planetary companion orbiting the A3V star Fomalhaut (*α* Psa, HD 216956, HIP 113368) at  ∼ 119AU. The physical nature of this object is still a matter of debate. It is commonly thought to be a low mass planet, but it also has been suggested that the Fomalhaut b image could represent starlight reflected by a cloud of dust grains, possibly bound to a real planet. The first attempts to fit Fomalhaut b’s orbit on the basis of the available astrometric positions reveal a very eccentric, possibly unbound orbit ($e\ga 0.8$). Subsequent dynamical studies on the past history of this planet and its interaction with the dust belt imaged around the star led to the conclusion that another, yet undiscovered planet must be present in this system to control the dynamics of the dust belt, and that Fomalhaut bmay have been formerly trapped in a mean-motion resonance with that planet before being scattered on its present day orbit. This however assumes that Fomalhaut b is actually a bound companion. While this is likely, unbound solutions might still be possible. According to, planets imaged of such small orbital arcs are compatible with bound orbital solutions as well as unbound ones due to unknown position and velocity along the line of sight. The case of PZ Tel B is even more complex. PZ Tel (HD 174429, HIP 92680), is a G5-K8 star member of the 24 ± 3 Myr old *β* Pic moving group. A sub-stellar companion, termed PZ Tel B, was independently discovered by and. Its mass is estimated  ∼ 20 *M*Jup and  ∼ 40 *M*Jup. It is therefore most likely a substellar object. Attempts to fit the orbit of this companion based on successive astrometric positions led to the conclusion that it must be close to edge-on and highly eccentric. The pair has been imaged regularly since 2007. PZ Tel B is moving away from the central star along a quasi straight line. Its distance to the star increased by  ∼ 60 % between 2007 and 2012. From the orbital standpoint, it is not clear whether PZ Tel B is a bound companion. But, in any case, its periastron must be small ($\la 1\,$AU), which is a major difference with Fomalhaut b. However, spectra of PZ Tel B obtained by indicate that it is a young object like the star itself. This strongly suggests that both objects are physically bound. MCMC Orbital fitting techniques are usually based on the assumption that the orbit to fit is elliptic, making use of corresponding Keplerian formulas. This can be problematic with orbits with eccentricities close to 1. This can either prevent convergence of the fit, or generate boundaries in the fitted orbit distributions that are not physical, but rather generated by the limitation of the method. Of course, on could design an independent MCMC code based on the use of open orbit formulas. Such a code would try to fit open orbits only. Our goal is to design a code that can handle both kinds of orbits in a continuous manner. Applied to the cases invoked above, this would help deriving for instance a robust estimate of a probability to be bound. This cannot be done using standard Keplerian variables, as the changes of formulas between bound and unbound orbits would generate enough noise to prevent the code to converge. We develop here a MCMC code based on the use of universal Keplerian variables, an elegant reformulation of Keplerian orbits that holds for bound and unbound orbits as well. The organisation of the paper is the following : In Sect. 2, we present new VLT/NACO observations of PZ Tel B that we will use together with order data in our fit. Then In Sect. 3, we present the fundamentals of our new code based on universal Keplerian variables. In Sects. 4 and 5, we present its application to the Fomalhaut b and PZ Tel B cases respectively. In Sect. 6, we present further modeling based on the suggestion by that highly eccentric companions could be actually less eccentric that they appear due to the presence of unseen additional inner companions. For the PZ Tel B case, we find one configuration that could indeed generate an apparent very high eccentricity, but we present subsequent dynamical modeling showing that it is in fact unstable. Our conclusions are then presented in Sect. 7. New observations of PZ Tel B ============================ Log of the observations ----------------------- PZ Tel B was observed with VLT/NaCo on September 26, 2010 in the L’ band filter (central wavelength=3.8 *μ*m, width=0.62*μ*m) in pupil-stabilized mode (P.I. Ehrenreich, Program 085.C-0277). The mode was used to subtract the stellar halo using the angular differential imaging (ADI) technique, despite the companion could be seen into our raw images. [ht] [Tab:Obs] @llllllllllll Object & Date & Band & Density filter & Camera & DIT & NDIT & $\mathrm{N\_\mathrm{exp}}$ & $\mathrm{\theta\_\mathrm{start}/\theta\_\mathrm{end}}$ & \langle Seeing \rangle & $\mathrm{\langle\tau\_{0}\rangle}$ & Notes &&&&& (s) &&& (∘) & () & (ms) & *θ* Orionis & 24/09/2010 & L’ &–& L27 & 0.3 & 60 & 7 &  − 132.922/ − 133.862 & 0.88 & 0.73 & Astrometric cal. PZ Tel & 26/09/2010 & L’ & ND\_long & L27 & 0.2 & 150 & 8 & 4.540/7.102 & 1.77 & 1.20& PSF PZ Tel & 26/09/2010 & L’ & – & L27 & 0.3 & 100 & 143 & 10.624/53.175 & 1.83 & 1.03 & ADI sequence PZ Tel & 26/09/2010 & L’ & ND\_long & L27 & 0.2 & 150 & 8 & 53.348/54.936 & 1.37 & 1.20 & PSF PZ Tel & 03/05/2011 & $\mathrm{K\_{s}}$ & ND\_short & S27 & 1.0 & 100 & 12 &  − 51.868/ − 43.155 & 0.50 & 5.54 & ADI sequence PZ Tel & 07/06/2011 & L’ & ND\_long & L27 & 0.2 & 150 & 8 &  − 71.868/ − 70.664 & 2.78 & 0.75 & PSF PZ Tel & 07/06/2011 & L’ & – & L27 & 0.2 & 150 & 96 &  − 70.317/ − 52.769 & 0.81 & 2.76 & ADI sequence PZ Tel & 07/06/2011 & L’ & ND\_long & L27 & 0.2 & 150 & 7 &  − 52.582/ − 50.922 & 0.78 & 2.61 & PSF The observation sequences, atmospheric conditions (seeing, coherence time *τ*0), and instrument setup are summarized in Table [Tab:Obs]. A continuous sequence of 1200 exposures was recorded, split into 8 cubes (*N*exp) of 150 images each (NDIT). The detector 512 × 512 pixel windowing mode was used to allow for having short data integration times (DIT=0.3s). The *ND\_long* neutral density (ND) was placed into the light path for the first and last 8 × 150 frames of the sequence to acquire unsaturated images of the star for the purpose of the calibration of the companion photometry and astrometry. The star core was in the non-linearity regime for the rest of the 143 exposures. The parallactic angle (*θ*) over the ND-free exposures ranges from $\theta\_\mathrm{start}=10.62\degr$ to $\theta\_\mathrm{end}=53.17\degr$, corresponding to an angular variation of 2.85 times the full-width at half maximum (FWHM) at 350mas. The system was re-observed on June 7, 2011 using the same instrument configuration (Program 087.C-0450, P.I. Ehrenreich). This sequence was recorded under unstable conditions. Nevertheless, the image angular resolution was high enough to resolve the brown-dwarf companion. The observation sequence is similar to the one of September 26, 2010, although the field rotation is reduced (17.55∘). It is summarized into Table [Tab:Obs]. To conclude, we recorded one additional epoch of pupil-stabilized observations of the PZ Tel system in the *K**s* band (central wavelength=2.18 *μ*m, width=0.35 *μ*m) with NaCo (P.I. Lagrange, Program 087.C-0431). We used the neutral density filter of the instrument (*ND\_Short*) adequate to this band to avoid saturating the star. The field rotation was not sufficient to take advantage of the angular differential imaging technique. Data Reduction -------------- The reduction of the L’-band observations was carried out by a pipeline developed in Grenoble. The pipeline first applied the basic cosmetic steps (bad pixel removal, nodded frame subtraction) to the raw images. The star was then registered into each individual frames of each cube using a 2D Moffat function adjusted onto the non-saturated portions of the images. We applied a frame selection inside each cube based on the maximum flux, and on the integrated flux over the PSF core. The cubes were then concatenated to create a master cube. Given the relative brightness of the companion and the amount of field rotation for the 2010 observations, we chose to apply the smart-ADI (SADI) flux-conservative algorithm to suppress the stellar halo. The algorithm builds for each frame of our observation sequence a model of the stellar halo from the other sequence images for which a companion located at a distance *R* from the star has moved angularly (because of the field rotation) by more than *n* ×  the FWHM (separation criterion). Only the *N**N* (*N**N* ∈ *R*) frames the closest in time (Depth parameter) and respecting the separation criterion are considered. We defer the reader to and for details. We found the parameters maximizing the detection significance of the companion (*R* = 13.6 pixels, depth=4, and 2 × FWHMs) throughout these intervals: 2 ≤ Depth ≤ 12, FWHM=1, 1.5, 2. Flux losses associated to the self-subtraction of the companion during the ADI process were estimated using either five artificial planets (AP) are injected at $\mathrm{PA}=179\degr$, $-61\degr$, $239\degr$, $210\degr$, and $270\degr$, or negative fake planets. These AP were built from the non-saturated exposures of the star taken before and after the ADI observations (see Table [Tab:Obs]). We derive a final contrasts of Δ*L*′ = 5.15 ± 0.15mag. The error accounts for uncertainties on the flux losses estimates, on the evolution of the PSF through the ADI sequence, and on the method used to extract the companion photometry. This new L’ band photometry was accounted for the up-to-date analysis of the spectral energy distribution of the companion. Images of the *θ* Orionis C astrometric field were acquired with an identical setup on September, 24 2010. They were properly reduced with the `Eclipse` software. The position of the stars on the detector were compared to their position on sky measured by to derive the instrument orientation to the North of $-0.36\pm0.11\degr$ and a detector plate scale of 27.13 ± 0.09mas. We used these measurements together with the position of PZ Tel B derived from the negative fake planet injection to find a PA=$59.9\pm0.7\degr$ and a separation of 374 ± 5mas for the companion. The second epoch of L’ band observations was reduced with the IPAG pipeline, but using the classical imaging (CADI) algorithm. The algorithm built a model of the halo valid for all the images of the sequence from the median of all images contained in the sequence. It is more appropriate than the smart-ADI algorithms here because of the small amount of field rotation. Indeed, it would not have been possible to built a model of the halo for each frames of the sequence while respecting a separation criterion of 1 FWHM for all the frames contained in our sequence. The flux losses were estimated in the same way as for the 2010 L’-band observations. We measure a contrast of Δ*L*′ = 4.6 ± 0.6mag. The photometry is less accurate because of the unstable conditions during the observations. The instrument orientation (TN=1.33 ± 0.05∘) and plate-scale (27.38 ± 0.08 mas/pixel) were measured onto the images of the binary star HD 113984 observed on September 2, 2011. We therefore derive a PA=60.0 ± 0.6∘ and separation of 390.0 ± 5.0 mas for the companion. We realigned each of the *K**s* band images to the North and median-combined them to create a final image of PZ Tel AB. The companion is seen in the images. We removed the stellar halo at the position of the companion making a axial symmetry of the halo around a axis inclined at PA=-45, 0, or 90∘. We integrated the flux of the star and companions into circular apertures of radii 135 mas (5 pixels) to derive a contrast ratio of *δ**K**s* = 5.46 ± 0.07. The error bars account for the dispersion of contrast found for the different choice of duplication axis for the stellar halo removal. This contrast ratio is consistent within error bars with the one derived by with the same instrumental setup. We measure a PA=60.0 ± 0.6∘ and a separation of 390.0 ± 5.0 mas for the system. This astrometry relies on the TN and plate-scale measured on March 03, 11, and reported in. The astrometry reported in this section assumes that the TN and plate-scale are stable between the observations of the astrometric fields and our observations of PZ Tel. This seems to be the case according to the measurements of. Fundamentals of MCMC for high eccentricity and open orbits ========================================================== MCMC for astrometric imaged companions -------------------------------------- The fundamentals of the MCMC method applied to planets detected with radial velocities are described in, and its application to imaged companions are for instance described in. The first requirement is to presuppose general probability distributions (commonly called priors) for the orbital elements. For bounds orbits, the usual orbital elements are the semi-major axis *a*, the eccentricity *e*, the inclination *i*, the longitude of ascending node Ω, the argument of periastron *ω*, and the time for periastron passage *t**p*. The priors for these elements are generally assumed uniform between natural bounds, except for *a* for which a logarithmic prior ( ∝ ln*a*) is often assumed, and for *i* for which assume a prior  ∝ sin*i* is also of standard use. Combined with uniform prior for Ω, this choice ensures a uniform probability distribution over the sphere for the direction of the orbital angular momentum vector. In the building process of a Markov chain, successive orbital solutions are generated from preceding ones taking steps on the orbital variables and selecting or rejecting the generated new orbits using the Metropolis-Hastings algorithm (hereafter MH). MCMC theory tells that whenever the chain grows, it is expected to stabilise and the final statistical distribution of orbits within the chain samples the posterior probabilistic distribution of orbital solutions. In practice several chains are run in parallel (we use 10), and Gelman-Rubin criterion on crossed variances is used to check for convergence. An important point to note is that the variables on which steps are taken with MH are not necessarily the orbital elements listed above themselves, but rather combinations of them. In (*β* Pic b) and (Fomalhaut b), the work is done on the parameter vector $$\begin{aligned} \vec{w\_1} & = & \left(\frac{\cos(\omega+\Omega+v\_0)}{P}, \frac{\sin(\omega+\Omega+v\_0)}{P},\frac{e\cos(\omega+\Omega)}{\sqrt{1-e^2}}, \right.\nonumber\\ &&\frac{e\sin(\omega+\Omega)}{\sqrt{1-e^2}}, (1-e^2)^{1/4}\sin\frac{i}{2}\cos(\omega-\Omega),\nonumber\\ &&\left.(1-e^2)^{1/4}\sin\frac{i}{2}\sin(\omega-\Omega)\right), \label{param1}\end{aligned}$$ where *P* is the orbital period and *v*0 is the true anomaly at a reference epoch (typically that of a specific data point of a time close to periastron passage). This choice was dictated by the following considerations : * As pointed out in, considering imaged companions, there is a degeneracy in orbital solutions concerning parameters Ω and *ω*. Considering one potential solution with (Ω, *ω*) values, the same solution but with (Ω + *π*, *ω* + *π*) exactly gives the same projected orbital motion. The only way to lift this degeneracy is to have independent information about which side of the projected orbit (or of the associated disk) is on the foreground, or to have radial velocity measurements. Hence taking steps on Ω and *ω* in MCMC can generate convergence difficulties with chains oscillating between two symmetric families of solutions. To avoid this difficulty, we consider angles *ω* + Ω and *ω* − Ω which are unambiguously determined. It is indeed easy to express the projected Keplerian model as a function of these angles only. * Whenever *i* = 0, angles Ω and *ω* are undefined (and so angle *ω* − Ω), while Ω + *ω* is still defined. Hence we take variables  ∝ sin(*i*/2)cos(*ω* − Ω) and  ∝ sin(*i*/2)sin(*ω* − Ω) to avoid a singularity whenever *i* → 0. * The same applies to eccentricity. When *e* vanishes, Ω + *ω* itself is undefined. This is why we consider variables  ∝ *e*cos(Ω + *ω*) and  ∝ *e*sin(Ω + *ω*). * *ω* + Ω + *v*0 is defined even when *e* = *i* = 0. This is why we chose it in the remaining variables. But as much as possible, we avoid directly taking steps on angular variables themselves, which can lead to convergence problems with jumps around 2*π* and similar values. This is why we use cos(*ω* + Ω + *v*0) and sin(*ω* + Ω + *v*0) in the remaining variables. As explained by, the assumed priors are taking into account in MH multiplying the basic probability by the Jacobian of the transformation from the linear vector (ln*a*, *e*, sin*i*, Ω, *ω*, *t**p*) to the parameters vectors. This Jacobian reads here $$J\_1=\frac{1}{2}\frac{e(1+e\cos v\_0)}{(1-e^2)^3P^2}\qquad.$$ Open orbits and universal variables ----------------------------------- The parameter vector $\vec{w\_1}$ ([param1]) is well suited to fit low eccentricity orbits. It has nevertheless proved efficiency for high eccentricity orbits as well. also gives alternate sets of parameters for such orbits. However, none of them can handle open orbits. Moreover, their validity of the fit close to the boundary *e* = 1 is questionable. Our goal is to allow MCMC fitting to account for bound or unbound orbits in a continuous manner as well. Some of the variables in Eqs. ([param1]), such as the orbital period *P* and $\sqrt{1-e^2}$ are clearly inappropriate and need to be changed. The periastron *q* is conversely always defined for any type of orbit. So changing *P* to *q* and eliminating $\sqrt{1-e^2}$ in Eqs. ([param1]) could be a first solution. We designed a code based on this idea, which turned out not to be efficient. Convergence of Markov Chains could not be reached after billions of iterations. The reason lies in the Keplerian model assumed. To be able to compute the position and velocity of an orbiting companion at a given time (and subsequently a *χ*2), one needs to solve Kepler’s equation for the eccentric anomaly *u* as a function the time *t*. Kepler’s equation depends on the type of orbit. For an elliptical, parabolic and hyperbolic orbit, this equation reads $$u-e\sin u = M,\qquad\frac{u}{2}+\frac{u^3}{6}=M,\quad\mbox{and} \quad e\sinh u-u = M \label{keps}$$ respectively. In the parabolic case, this equation is called Barker’s equation, and *u* = tan(*v*/2), where *v* is the true anomaly. *M* = *n*(*t* − *t**p*) is the mean anomaly, while *n* is the mean motion. This equation holds in all cases, but *n* is defined as $\sqrt{\mu/a^3}$ in the elliptic and hyperbolic case, and as $\sqrt{\mu/8q^3}$ in the parabolic case, where *μ* = *G**M* is the dynamical mass (*M* is the central mass). In the random walk process of a Markov chain, permanent switching between these equations lead to instabilities that prevent convergence. A good way to overcome this difficulty is to move to the universal variable formulation, which is a very elegant way to provide a unique and continuous alternative to Kepler’s equation valid for any kind of orbit. We first define the energy parameter *α* as $$\alpha=-2E=\frac{\mu}{q}\,(1-e)\qquad,$$ where *E* is the energy per unit mass. This expression is valid for any orbit. Elliptical orbits are characterized by *α* > 0, parabolic orbits by *α* = 0 and hyperbolic ones by *α* < 0. The eccentric anomaly *u* is then replaced by the universal variable *s*. For non-parabolic orbits, *s* is defined as $$s=\frac{u}{\sqrt{|\alpha|}}\qquad,$$ and as $$s=\frac{u}{2qn}$$ for parabolic orbits. It can be shown that in any case, we have $$s=\frac{1-e}{q}\,\left(t-t\_p\right)+\frac{eY}{\sqrt{q\mu(1+e)}}\qquad,$$ where *Y* is the *y*-coordinate in a local *O**X**Y* referential frame (*X* axis pointing towards periastron). This shows that the definition of *s* is continuous irrespective of the type of orbit. Then, show that Kepler’s equation can then be rewritten in any case as *μ**s*3*c*3(*α**s*2) + *q**s**c*1(*α**s*2) = *t* − *t**p*  ,  where *t* is the time, and the *c**k*’s are the Stumpff functions defined as $$c\_k(x)=\sum\_{n=0}^{+\infty}\frac{(-1)^n}{(2n+k)!}\,x^n\qquad. \label{ck}$$ This formulation of Kepler’s equation is valid for any orbit. Using *c**k*(0) = 1/*k*!, we see that for *α* = 0 (parabolic orbit), this equation is equivalent to Barker’s equation. Once this equation is solved for *s*, the heliocentric distance *r* and the rectangular coordinates *X* and *Y* read $$\begin{aligned} X & = & q-\mu s^2c\_2\left(\alpha s^2\right)\qquad,\\ Y & = & s\sqrt{q\mu(1+e)}\,c\_1\left(\alpha s^2\right)\qquad,\\ r & = & q+e\mu s^2c\_2\left(\alpha s^2\right) \qquad,\end{aligned}$$ these formulas applying for any type of orbit. This formalism was used by several authors for specific problems, such as for wide binaries in clusters, or to sole Keplerian problems with additional disturbing potentials  ∝ 1/*r*2. The Kepler advancing routines in the popular symplectic *N*-body packages Swift and Mercury are also coded this way for high eccentricity and open orbits. Very recently, proposed an alternate formalism that avoids the use of Stumpff functions. They claim it to be more efficient. We did not try to apply that yet and used the Stumpff functions theory instead. Based on the use of Stumpff functions, we designed a new MCMC code, using the following parameter vector $$\begin{aligned} \vec{w\_2} & = & \left(\rule[-1.5ex]{0pt}{3ex} q\cos(\omega+\Omega), q\sin(\omega+\Omega),\right. \nonumber\\ && \qquad\left.\sin\frac{i}{2}\cos(\omega-\Omega), \sin\frac{i}{2}\sin(\omega-\Omega),e,s\_0\right)\quad, \label{param2}\end{aligned}$$ where *s*0 is the universal variable at a given reference epoch. The priors are assumed uniform for Ω, *ω*, *e*, and *t**p*, logarithmic for *q* and  ∝ sin*i* for *i*. The Jacobian of the transformation from (ln*q*, *e*, sin*i*, Ω, *ω*, *t**p*) to $\vec{w\_2}$ reads $$J\_2=\frac{q^2}{2}\,\frac{{\mathrm{d}}s}{{\mathrm{d}}t}\qquad,$$ where d*s*/d*t* can be obtained as $$\frac{{\mathrm{d}}s}{{\mathrm{d}}t}=\frac{1}{\mu s^2c\_2\left(\alpha s^2\right) +qc\_0\left(\alpha s^2\right)}\qquad,$$ to be evaluated here at *s* = *s*0. Transformation of angles ------------------------ This new code was able to reach convergence in the Fomalhaut bcase. Convergence appeared however hard to reach in the PZ Tel case. This is due to the structure of the data (Table [pzteldata]). The astrometric data of PZ Tel B reveal indeed a quasi-linear motion nearly aligned with the central star. PZ Tel B’s orbit appears extremely eccentric, perhaps unbound, but with a periastron much smaller ($\la1\,$AU) than the measured projected distances. In this context, the local reference frame *O**X**Y**Z* may be not well defined. Only the line of apsides *O**X* is likely to be well constrained by the data, while the two other directions require an other nearly arbitrary angular variable to be fixed. The very bad constraint on that angular variable results into degeneracies in the constraints on angles (Ω, *ω*, *i*) that are sufficient to prevent convergence. It thus appears necessary to isolate the badly constrained angular variable into a specific variable. This require to change the parameter vector $\vec{w\_2}$ (Eq. [param2]). With respect to the sky reference frame (*x*-axis pointing towards north, *y*-axis towards east, and *z*-axis towards the Earth), the basis vectors ($\vec{e\_X},\vec{e\_Y},\vec{e\_Z}$) of the local *O**X**Y**Z* reference frame read $$\begin{aligned} \vec{e\_X}&&\left\{ \begin{array}{l} \cos\omega\cos\Omega-\cos i\sin\omega\sin\Omega\\ \cos\omega\sin\Omega+\cos i\sin\omega\cos\Omega\\ \sin\omega\sin i \end{array}\right.\qquad,\nonumber\\ \vec{e\_Y}&&\left\{ \begin{array}{l} -\sin\omega\cos\Omega-\cos i\cos\omega\sin\Omega\\ -\sin\omega\sin\Omega+\cos i\cos\omega\cos\Omega\\ \cos\omega\sin i \end{array}\right.\qquad,\nonumber\\ \vec{e\_Z}&&\left\{ \begin{array}{l} \sin i\sin\Omega\\ -\sin i\cos\Omega\\ \cos i \end{array}\right.\qquad. \label{exeyez}\end{aligned}$$ According to our analysis, in the case of PZ Tel B, only $\vec{e\_X}$ is well constrained by the data. This results in complex combined constraints on Ω, *ω* and *i*. For instance, if $\vec{e\_Z}$ was well constrained by the data, then *ω* would be the weakly constrained parameter, as $\vec{e\_Z}$ is only function of Ω and *i*. The idea is therefore to define new angles in a similar way as in Eqs. ([exeyez]), but in such a way that $\vec{e\_X}$ depends on two angles only. We thus introduce new angles *i*ʹ, Ωʹ and *ω*ʹ designed in such a way that $\vec{e\_X}$ is defined with respect to (*i*ʹ, Ωʹ, *ω*ʹ) in the same manner as $\vec{e\_Z}$ is defined with respect to (*i*, Ω, *ω*). Similarly $\vec{e\_Y}$ will be defined like $\vec{e\_X}$, and $\vec{e\_Z}$like $\vec{e\_Y}$. We thus write now $$\begin{aligned} \vec{e\_X}&&\left\{ \begin{array}{l} \sin i'\sin\Omega'\\[\jot] -\sin i'\cos\Omega'\\[\jot] \cos i' \end{array}\right.\qquad,\nonumber\\ \vec{e\_Y}&&\left\{ \begin{array}{l} \cos\omega'\cos\Omega'-\cos i'\sin\omega'\sin\Omega' \\ \cos\omega'\sin\Omega'+\cos i'\sin\omega'\cos\Omega' \\ \sin'\omega\sin i' \end{array}\right.\qquad,\nonumber\\ \vec{e\_Z}&&\left\{ \begin{array}{l} -\sin\omega'\cos\Omega'-\cos i'\cos\omega'\sin\Omega'\\ -\sin\omega'\sin\Omega'+\cos i'\cos\omega'\cos\Omega'\\ \cos\omega'\sin i' \end{array}\right.\qquad. \label{exeyez2}\end{aligned}$$ The comparison between formulas ([exeyez]) and ([exeyez2]) gives the correspondence between the two sets of angles. Now, the line of apsides is defined by *i*ʹ and Ωʹ only, and *ω*ʹ, the badly constrained angular variable, is undefined if *q* vanishes. It is therefore worth modifying vector $\vec{w\_2}$ according to this transformation. However, one should not forget that vector $\vec{w\_2}$ was designed to avoid the natural degeneracy of astrometric solution between (Ω, *ω*) and (Ω + *π*, *ω* + *π*). It can be seen from Eqs. ([exeyez]) and ([exeyez2]) that changing (Ω, *ω*) to (Ω + *π*, *ω* + *π*) is equivalent to changing (*i*ʹ, *ω*ʹ) to (*π* − *i*ʹ,  − *ω*ʹ) while leaving Ωʹ unchanged. This transformation leaves the first two components of $\vec{e\_X}$ and $\vec{e\_Y}$ unchanged (this explains the degeneracy of the projected orbit), as well as the third component of $\vec{e\_Z}$, all remaining components being changed to their opposites. The new parameter vector must remain unchanged with this transformation as well, to avoid convergence difficulties. We chose the following new parameter vector $$\begin{aligned} \vec{w\_3} & = & \left(\sin i'\cos\Omega', \sin i'\sin\Omega',\right. \nonumber\\ && \qquad\left. q\sin i'\cos\omega', q\cos i'\sin\omega',e,s\_0\right)\qquad. \label{param3}\end{aligned}$$ This new vector is invariant in the transformation (*i*ʹ, *ω*ʹ) → (*π* − *i*ʹ,  − *ω*ʹ). Its first two components define $\vec{e\_X}$ unambiguously. Its third and fourth components vanish when *q* = 0, i.e., when *ω*ʹ is undefined, which avoids singularities. Now, this vector can be expressed as a function of the original angles (*i*, Ω, *ω*) directly, so that the formal introduction of the new angles (*i*ʹ, Ωʹ, *ω*ʹ) is unnecessary. The same vector can be written $$\begin{aligned} \vec{w\_3} & = & \left(\rule[-2.5ex]{0pt}{5ex} -\cos\omega\sin\Omega-\sin\omega\cos i\cos\Omega,\right.\nonumber\\ &&\qquad\cos\omega\cos\Omega-\sin\omega\cos i\sin\Omega,\nonumber\\ &&\qquad\qquad\left.q\,\cos i,q\,\frac{\cos\omega\sin\omega\sin^2i} {\sqrt{1-\sin^2i\sin^2\omega}},e,s\_0\right)\qquad. \label{param3b}\end{aligned}$$ It can be checked that this vector is invariant in the transformation (Ω, *ω*) → (Ω + *π*, *ω* + *π*). This will be our parameter vector for a second version of the MCMC code. The new Jacobian of the transformation from (ln*q*, *e*, sin*i*, Ω, *ω*, *t**p*) to $\vec{w\_3}$ reads now $$J\_3=q^2\sin^2i\sin^2\omega{\sqrt{1-\sin^2i\sin^2\omega}}\,\frac{{\mathrm{d}}s}{{\mathrm{d}}t} \qquad.$$ This new version succeeded in reaching convergence for PZ Tel B. Implementation -------------- The two versions of our code have been written in Fortran 90, with an additional OPEN-MP parallel treatment of the computed Markov chains. Our basic strategy is the same is in. We first perform a least-square Levenberg-Marquardt fit of the orbit. This takes only a few seconds to converge towards a local *χ*2 minimum. Of course, this fit is made starting from a rough orbit guess. The same procedure is re-initiated many times varying the starting orbit. This allows to probe the variety of local *χ*2 minima. In all cases described below, among various minima, a main one, or a series of very similar ones was reached. This main minimum was selected as a starting point for the MCMC chains. This procedure turns out to speed up the convergence of the chains. This starting point is marked as red bars and black stars in the resulting MCMC posterior plots (see below). We also tried to run the MCMC starting from a random guess instead, and checked that the same posterior distributions were reached, but slower. We also checked in the posterior *χ*2 distributions derived from the runs that in all cases, the starting point initially derived with Levenberg-Marquardt actually achieves the best *χ*2 in the distribution. Strictly speaking, Levenberg-Marquardt works well to quickly derive the best *χ*2 minimum. This shows that using other least-square fitting algorithms like downhill simplex for instance would not lead to a better result, as all these methods aim at finding a *χ*2 minimum, possibly the best one. However, MCMC runs reveal afterwards that with sparsely sampled orbits as we are dealing with here, the very best *χ*2 minimum does not always correspond to a probability peak in the posterior distributions. This intrinsic fact is independent from the method used to get the minimum. The implementation of the universal variable formalism described above requires an efficient algorithm to compute the Stumpff functions. The series ([ck]) defining them efficiently converge only for sufficiently small *x*. We use a reduction algorithm described in, that makes use of the following set of formulas $$\begin{aligned} c\_0(4x)=2\left[c\_0(x)\right]^2,& \, & c\_1(4x)=c\_0(x)c\_1(x),\label{qu1}\\ c\_2(4x)=\frac{1}{2}\left[c\_2(x)\right]^2,&\, & c\_3(4x)=\frac{1}{4}c\_2(x)+\frac{1}{4}c\_0(x)c\_3(x)\;.\label{qu2}\end{aligned}$$ Any input argument *x* is first reduced by successive factors of 4 until it satisfies ∣*x*∣ < 0.1. Then the series up to order 6 are used to get *c*2(*x*) and *c*3(*x*) only. To compute *c*0 and *c*1, the following relations are used *c*0(*x*) = 1 − *x**c*2(*x*),   *c*1(*x*) = 1 − *x**c*3(*x*) . Equations ([qu1]) and ([qu2]) are then applied recursively to derive the Stumpff functions for the original argument *x*. demonstrated the efficiency of this algorithm. In the fitting routine, universal Kepler’s equation ([kepu]) must be solved numerically using a root-finding algorithm. We do it with Newton’s quartic method or with Laguerre-Conway’s method. To compute the derivatives of the Stumpff function, we use the following relation $$\frac{{\mathrm{d}}c\_n(x)}{{\mathrm{d}}x}=\frac{1}{2x}\left(c\_{n\_1}(x)-nc\_n(x)\right)\qquad,$$ or equivalently, if we define *ϕ**n*(*α*, *s*) = *s**n**c**n*(*α**s*2), $$\frac{\partial\phi\_n(\alpha,s)}{\partial s}=\phi\_{n-1}(\alpha,s)\qquad.$$ For the special case *n* = 0, we have $$\frac{\partial\phi\_o(\alpha,s)}{\partial s}=-\alpha\phi\_1(\alpha,s)\qquad.$$ The same algorithm is implemented in the symplectic *N*-body integrator Swift for high eccentricity orbits. Its use turns out to be only a few times (3–4) more computing time consuming than that of a standard Keplerian formalism based on sine and cosine functions. But this is worth applying it in MCMC to the case of very high eccentricity and open orbits, as the use of the universal Kepler’s equation ([kepu]) eliminates the instabilities due to the permanent switch between the various formulas ([keps]). Thus Markov chains converge more efficiently. Results for Fomalhaut b ======================= Fomalhaut b is known to have a very eccentric orbit, with an eccentricity in any case $\ga 0.5$ and most probably around 0.8–0.9. Whether it is actually bound to the central star may be questionable, especially due to the very small coverage fraction of its orbit. If bound, its orbital period is a matter of hundreds of years if not more, while the four available astrometric points span over a period of 8 years only. Therefore, as noted by, what is measured is basically a projected position and a projected velocity onto the sky plane, so that the *z*-coordinates (i.e., along the line of sight) of the position an velocity are unknown. As a matter of fact, use with these data a simple sampling method drawing random *z*-coordinates for the position on velocity. They find an eccentricity distribution very similar to that derived in with MCMC. However, in both cases, the orbit was supposed to be bound. The eccentricity distribution of nevertheless extends up to *e* = 1 (that of stops at *e* ≃ 0.98), showing that unbound solutions could exist as well. This justifies the use of our new code to check this possibility. The code in its first version (see above) was used with the available astrometric data from and listed in. Following the prescriptions by, 10 chains were run in parallel until the Gelman-Rubin parameters *R̂* and *T̂* reach repeatedly convergence criteria for all parameters in Eq. ([param2]), i.e., *R̂* < 1.01 and *T̂* > 1000. We had already used the same procedure in and. In, this convergence criteria were reached after 6.2 × 107 steps. Here it took 4.25 × 108 steps with the universal variable code, running on the same data. This illustrates how the possibility for Markov chains to extend in the unbound orbit domain increases the complexity of the problem. We also had to fix an arbitrary upper limit *e*max = 4 for the eccentricity to ensure convergence. Setting larger *e*max values results in more steps needed to reach convergence, but the assumed *e*max = 4 upper limit as some physical justification. A large eccentricity means that Fomalhaut b is a passing by object that is currently encountering a flyby in the Fomalhaut system. The eccentricity of a flyby orbit cannot be arbitrarily large. For an hyperbolic orbit, the eccentricity is directly linked to the relative velocity at infinity *v*\infty by the energy balance equation $$\frac{1}{2}v\_\mathrm{\infty}^2=\frac{GM(e-1)}{q}\qquad. \label{vinfty}$$ An upper limit to *v*\infty can be given considering a typical dispersion velocity in the solar neighborhood, i.e.,  ∼ 20 km\,s− 1. Assuming *q* = 25 AU, i.e., the most probable value for hyperbolic orbits in our distribution (see Fig. [mcmcmapfomb]), this immediately translates into an estimate of an upper limit for the eccentricity *e*max ≃ 4. [mcmcmapfomb] The global statistics of the posterior orbital distribution obtained from the run is shown in Fig. [mcmcmapfomb], where distribution histograms for all individual elements (*q*, *e*, *i*, Ω, *ω*, *t**p*) as well as density maps for all combinations of them are represented. Special enlargements of the plots concerning *e* and Ω are shown in Fig. [fombmcmc]. In all histogram (mono-dimensional) plots, the red vertical bar superimposed to the plot shows the corresponding value for the best fit (lowest *χ*2) orbit obtained independently via a least-square Levenberg-Marquardt procedure. The same orbital solution is marked with black stars in all off-diagonal density plots combining two orbital parameters. As explained above, a least-square fit is initiated prior to launching MCMC. The resulting best-fit model is then used as a starting point for the Markov chains, and posterior *χ*2 distributions show that this solution actually achieves the minimum of the distribution. We first compare these plots to the corresponding ones in, where the fit was made over the same data set, but limited to bound orbits only. The first striking fact is that the eccentricity distribution extends now beyond *e* = 1 well into the unbound regime. This shows as suspected that unbound orbital solutions for Fomalhaut b do exist. The best fit solution is itself an unbound orbit with *e* ≃ 1.9. We nevertheless note a strong peak in the distribution at *e* ≃ 0.94 that appears exactly at the same place as in. This clearly shows that for such a weakly constrained problem, MCMC is definitely superior to least-square. In fact, the whole eccentricity distribution below $e\la 0.96$ exactly matches the corresponding one in. This shows up in Fig. [fombmcmc] where the eccentricity histogram was intentionally cut at *e* = 1.5 to permit a better comparison. This first validates the present run (as the previous one was done with another code), and second shows that the cutoff at *e* ≃ 0.98 that appeared in the previous distribution was not physical, but rather due to the intrinsic limitation of the code used. The tail of the distribution extends now in the unbound regime up to the *e*max = 4 limitation that was fixed in the run. The shape of this tail can be fitted as with a *e*− 3/2 power law. We also note (Fig. [mcmcmapfomb]) that the periastron distribution closely matches that of, while extending further out towards larger values. This is clearly due to the contribution of unbound orbits, as can be seen in the *q*–*e* probability map. From this we can derive an estimate of the probability for Fomalhaut b’s orbit to be bound, just counting the number of bound orbits in the whole set. We find *p*bound = 0.23. This probability actually depends on the assumed limitation *e*max = 4. If we had let the eccentricity to take larger values, the number of unbound orbits in the whole set would have been larger, and subsequently *p*bound would have been smaller. It is nevertheless possible to estimate the ultimate *p*bound value that would be derived if we put no upper limit on the eccentricity. Taking into account the fact that the tail of the posterior eccentricity distribution roughly falls off as *e*− 3/2, we can extrapolate the distribution up to infinity, integrate it and reintroduce the missing orbits corresponding to *e* > 4 into the distribution. Our posterior sample of orbits contains 106 solutions. Extrapolating the distribution we can estimate that  ∼ 2.1 × 105 corresponding to *e* > 4 are missing in our sample. This changes our probability estimate to *p*bound = 0.19, which can be considered as a minimum value. However, as the *e*max = 4 threshold results from a physical consideration (see above), the first derived *p*bound value can be regarded as robust. Note also that it is not very much above the minimum value. This shows that the contribution of very high eccentricity solutions is minor. This probability is however just derived from a pure mathematical analysis without any likelihood consideration. Flybys are rare but not necessarily improbable. Looking now at the distributions of the other orbital elements, we see in Fig. [fombmcmc] that the location of the orbit in (Ω, *i*) still closely matches that of the observed dust disk (the white star in the plot), like in. In other works, there is still a strong suspicion of near-coplanarity between Fomalhaut b and the dust disk. This clearly favours a bound configuration rather than a flyby that would have no reason for being coplanar. Another possibility is that Fomalhaut b is just getting ejected today from the system. That last configuration nevertheless appears improbable regardless to the timescale of the ejection ( ∼ 1000yrs) compared to the age of the system. To conclude, these plausibility considerations combined with our *p*bound ≃ 0.23 estimate and the clear peak of the eccentricity distribution at *e* = 0.94 enable us to stress that Fomalhaut b is probably bound to the central star. The situation is less clear with the argument of periastron. In the *ω*–*e* plot (Fig. [mcmcmapfomb]), we see that depending on whether *e* < 1 or *e* > 1, the solutions exhibit different *ω* values. In, we noted that the observed dust disk corresponds to $\omega\_\mathrm{disk}=-148.9\degr$. This sill roughly matches the *ω* values for bound orbits, i.e., bound orbits are still apsidally aligned with the disk with a few tens of degrees. [fomborbs] The distribution of the time of periastron passage (*t**p*) is similar to that of, expect that we have here an additional tail after 1950 that corresponds to unbound orbits. Figure [fomborbs] shows finally a few orbital solution in projection onto the sky plane, with bound and unbound configurations. We note that all solutions fit the observed positions, while assuming very different shapes. We clearly see here the effect of the bad observational orbital coverage. PZ Tel B ======== Results ------- [pzteldata] @llllll Obs. date & Declination (*x*) & Right ascension (*y*) & Separation & Position Angle & Reference & (mas) & (mas) & (mas) & (mas) & Jun. 13, 2007 & 121.26 ± 1.20 & 225.01 ± 2.20 & 255.6 ± 2.5 & 61.68 ± 0.6 & Apr. 11, 2009 & 169.96 ± 8.57 & 282.87 ± 8.57 & 330.0 ± 10. & 59.0 ± 1.0 & Sep. 28, 2009 & 165.65 ± 0.59 & 293.02 ± 1.05 & 336.6 ± 1.2 & 60.52 ± 0.22 & May 07, 2010 & 175.52 ± 0.60 & 308.23 ± 1.04 & 354.7 ± 1.2 & 60.34 ± 0.21 & May 05, 2010 & 183.25 ± 1.53 & 309.87 ± 2.58 & 360.0 ± 3.0 & 59.4 ± 0.5 & Sep. 26, 2010 & 186.90 ± 4.10 & 313.52 ± 6.87 & 365.0 ± 8.0 & 59.2 ± 0.8 & This work Oct. 28, 2010 & 185.15 ± 0.55 & 319.53 ± 0.95 & 369.3 ± 1.1 & 59.91 ± 0.18 & Mar. 25, 2011 & 192.02 ± 0.51 & 330.46 ± 0.87 & 382.2 ± 1.0 & 59.84 ± 0.19 & May 03, 2011 & 194.61 ± 0.99 & 342.58 ± 1.74 & 394.0 ± 2.0 & 60.4 ± 0.2 & This work Jun. 06, 2011 & 195.97 ± 0.25 & 335.22 ± 0.43 & 388.3 ± 0.5 & 59.69 ± 0.1 & Jun. 07, 2011 & 195.00 ± 2.51 & 337.75 ± 4.33 & 390.0 ± 5.0 & 60.0 ± 0.6 & This work Apr. 05, 2012 & 196.09 ± 4.45 & 345.19 ± 7.83 & 397.0 ± 9.0 & 60.4 ± 0.2 & Jun. 08, 2012 & 212.41 ± 0.10 & 361.75 ± 0.13 & 419.5 ± 0.14 & 59.58 ± 0.22 & [mcmcmappztelb] [pztelbqe] As mentioned above, orbital solutions for PZ Tel B  all yields very eccentric orbits (*e* > 0.6). These orbital determinations were performed assuming a bound orbit, so that no test at *e* ≥ 1 was done. This assumption is however questionable given the orbital solutions found. We thus ran our universal variable MCMC code with the available astrometric data of PZ Tel B (Table [pzteldata]). As explained above (Sect. 2), we used the second version of the code with parameter vector $\vec{w\_3}$ (Eq. [param3b]) that ensures a better convergence. Even in that case, convergence was hard to reach. 10 chains were run in parallel. After 1.5 × 1010steps, the Gelman-Rubin parameter *R̂* values for the 6 variables in $\vec{w\_3}$ were ranging between 1.006 and 1.019, while the *T̂* parameter values were ranging between 260 and 800. The run was stopped there to save computing time, as reaching the demanded criteria (*R̂* < 1.01 and *T̂* > 1000 for all variables) would have demanded many more steps. The *R̂* and *T̂* values reached at the stopping point must nevertheless be considered as characteristic for an already very good convergence, so that we may trust the resulting posterior distribution. We checked indeed that posterior distributions that we could derive stopping the computation earlier, i.e. at a point when the *R̂* and *T̂* values were somewhat further away from convergence criteria, did not show significant differences with those presented below. Noticeably, nothing comparable in terms of convergence criteria was reached using the first version of the code using parameter vector $\vec{w\_2}$ (Eq. [param2]). The global statistics of the posterior orbital distribution obtained from the run is shown in Fig. [mcmcmappztelb], which was built with the same plotting conventions as Fig. [mcmcmapfomb] for Fomalhaut b. In particular the red bar and the black star indicate the best fit orbit obtained via Levenberg-Marquardt. Special enlargements concerning the periastron and the eccentricity are shown in Fig. [pztelbqe]. The most striking feature that shows up is the eccentricity distribution. As expected, PZ Tel B’s orbit appears extremely eccentric, but the eccentricity distribution is drastically different from that of Fomalhaut b. As pointed out by, the temporal coverage of Fomalhaut b’s orbit is so small that what is measured is basically a projected position and a projected velocity, with no information about position and velocity along the line of sight. Consequently, solutions with arbitrarily high eccentricities are mathematically possible. This is not the case for PZ Tel B. Table [pzteldata] shows that the motion followed over 5 years is quasi linear, but with a separation with the central star that increased by more than 60%. Even if the orbital coverage is still small (see Fig. [pztelb-orbits]), this is more than just a projected position and project velocity measurement. Consequently, no tail extending to arbitrarily large values is obtained with MCMC. All solutions naturally concentrate in the range 0.91 < *e* < 1.23. The eccentricity distribution appears extremely concentrated around *e* = 1. The enlargement in Fig. [pztelbqe] close to *e* = 1 reveals a peak near *e* = 1.00. The median of the distribution is at *e* = 1.001275; 67% and 95% confidence levels are 0.965 < *e* < 1.024 and 0.906 < *e* < 1.157 respectively. We compare the eccentricity distribution to that recently found by, who derived 0.622 < *e* < 0.9991 using a LSMC (Least-squares Monte-Carlo) approach, bur restricted to bound orbits. Of course our distribution now extends beyond *e* = 1, but our lower bound (*e* = 0.91) is significantly larger than theirs. This is due to our additional data points rather than to the method used. The periastron distribution shows a sharp peak around *q* = 0.07 AU. The *q*–*e* map (Fig. [mcmcmappztelb]) reveals actually two branches of solutions, one with bound solution and one with unbound solutions. But most solutions concentrate close to *e* = 1 and *q* = 0.07 AU. The inclination shows a peak at $i=98\degr$. This corresponds to a nearly edge-on configuration, and could explain the quasi-linear motion. But solutions up to $i=150\degr$ are also possible. The *i*–*e* map shows that the larger inclination solutions actually corresponds to those with *e* ≃ 1. All solutions have $i>90\degr$, showing that the orbit is viewed in a retrograde configuration from the Earth. The distribution of the time of periastron (*t**p*) exhibits a very sharp peak in 2002.5 that corresponds also to orbits with *e* ≃ 1. [pztelb-orbits] [pztelb-2003] Figure [pztelb-orbits] shows a few orbital solutions in projection onto the sky plane. We see that the solutions actually fit the data points, but that they all have a much smaller periastron. Figure [pztelb-2003] shows the best *χ*2 orbit in a similar way, but superimposed to a density map showing the predicted projected positions as of Jul. 22, 2003 (2003.556). report indeed a non-detection of PZ Tel B in a NACO image from that day. They conclude that no giant planet was present at a separation larger than 170mas from the star. We see on Fig. [pztelb-2003] that at the corresponding epoch, we predict that PZ Tel B was much closer to the star, in most cases short after periastron, but in any case closer to the star than 170mas. Our prediction is therefore in agreement with the non-detection by. Discussion ---------- The exact nature of PZ Tel B’s orbit around PZ Tel is still controversial. Our orbital analysis shows that both bound or unbound solutions are valid, but that the eccentricity distribution is strongly peaked around *e* = 1. If PZ Tel B is unbound to PZ Tel, it could be a passing by object (a flyby). This is hard to believe given the very small periastron value, and also considering the fact that both the star and the companion seem to be young objects. This cannot however be ruled out mathematically. From Eq. ([vinfty]) with *M* = 1.25 *M*⊙ and *v*∞ = 20 kms− 1, we derive *e* = 1.012 for *q* = 0.07 AU and *e* = 1.181 for *q* = 1 AU, which is still in the tail of the eccentricity distribution, although not in the main peak. We nevertheless think this possibility to be very unlikely. Still if PZ Tel B is unbound, it could be an escaping companion that was recently ejected by some gravitational perturbation. Two problems arise with this hypothesis. First, the ejection must have occurred *very* recently (a few years ago). Given the age of the star, the probability of witnessing such an event right today is very low, about  ∼ 10− 6 if we consider the timescale of the ejection ( ∼ 10 yrs) and the fact that such an ejection should not occur more than a few times in the history of the system. Second, to efficiently perturb a 40 *M*Jup companion, an additional object of comparable mass (at least) is required. As of yet, no such additional companion has been detected. So, PZ Tel B is presumably a bound companion. Then one must explain its extremely high eccentricity. It could result from secular perturbation processes such as the Kozai-Lidov mechanism. This is a likely mechanism for generating very high eccentricities. But here again, given the fairly high mass of PZ Tel B, this would require a more massive outer companion that would probably have been already discovered. Hidden inner companions? ========================= Fomalhaut b ----------- argue that imaged substellar companions that appear very eccentric with a first order orbital fit could actually be much less eccentric due to the presence of an unseen inner companion. The reason is that the measured astrometry is necessarily relative to the central star, while in the presence of a massive enough inner companion, the Keplerian motion of the imaged body should be considered around the center of mass of the system. develop a full analytic study showing how the presence of such an unseen companion could artificially enhance the fitted eccentricity. present a detailed study dedicated to the case of Fomalhaut b, concluding that in any case, Fomalhaut b must be significantly eccentric. According to them, in the best realistic configuration, a  ∼ 12*M*Jup companion orbiting Fomalhaut at 10AU could account for a  ∼ 10% overestimate of Fomalhaut b’s orbital eccentricity. This would for instance shift the eccentricity peak from *e* = 0.94 to *e* = 0.85. This possibility cannot be ruled out until the inner configuration of Fomalhaut’s planetary system remains unconstrained. It would also be compatible with the scenario outlined by to explain the origin of Fomalhaut b’s eccentricity. According to this model, Fomalhaut b should have formerly resided at  ∼ 60 AU in the 5:2 mean-motion resonance with another Jupiter-sized planet (termed Fomalhaut c) located at  ∼ 100 AU. Then, due to the resonant action, its eccentricity would have increased, and it would have been ejected towards its present-day orbit. This is still compatible with the hypothetical presence of another massive planet orbiting inside at 10AU. The only difficulty is that with *e* = 0.85, Fomalhaut b’s periastron is still as low as  ∼ 18 AU, which is still close enough to 10AU to raise the question of its orbital stability versus perturbations by the hidden companion. However, according to ’s scenario, Fomalhaut b’s orbit must already cross today that of the putative Fomalhaut c planet orbiting at 100AU. So in any case, Fomalhaut b must lie today on a metastable orbit. Adding another massive body deep inside the system does not change this conclusion. Consequently, the presence of an additional massive planet orbiting Fomalhaut at 10AU that would artificially enhance Fomalhaut b’s eccentricity by  ∼ 10% cannot be ruled out, as being still compatible with all observational constraints. Moreover it does not affect the dynamical scenario of. PZ Tel B -------- The case of PZ Tel B is more complex. The main difference with Fomalhaut b is that it is imaged over a more significant part of its orbit. As noted by, the detected astrometric motion of Fomalhaut b is basically compatible with a straight line at constant speed, so that what is measured is not much more than a projected position and a projected velocity. This is not the case for PZ Tel B, as the projected distance to the star is already able to vary significantly over the observation period (Table [pzteldata]). This is actually the reason why the fitted eccentricity distribution does not extend towards arbitrarily large values (Fig. [pztelbqe]). Consequently, an analytic study of the potential effect of an unseen companion on the fitted orbit is less easy. nevertheless calculated that a companion at least as massive as 130 *M*Jup orbiting PZ Tel at 5.5 AU is required to mimic PZ Tel B’s eccentricity. But recent imaging by exclude companions more massive than 26 *M*Jup at this distance. However, as for Fomalhaut b, an unseen companion could account only partially for PZ Tel B’s eccentricity. We decided to perform an automated search based on this idea. Our strategy is the following: We arbitrarily fix the characteristics of an unseen companion (mass and orbit) that we may call PZ Tel c. Given these characteristics, we calculate at each time the expected position of the center of mass of the system, and recompute the astrometric positions of PZ Tel B relative to this center of mass. Then we restart a least-square fit and check the eccentricity of the best *χ*2 solution obtained. This process is then automatically re-initiated many times, changing the characteristics of PZ Tel c until a solution yielding a least-square fit with the minimal eccentricity is found. Of course we do several attempts, varying the starting points. They showed that in any case, PZ Tel B’s eccentricity of the best *χ*2 solution never gets below  ∼ 0.7. For these most favourable configuration, the MCMC fit is re-launched to derive the statistical distributions of orbits. We present here one of these runs. [pztelc] @ll Mass & 12.041 ± 0.1 *M*Jup Semi-major axis & 3.514 ± 0.0004AU Eccentricity & 0.4691 ± 0.03 Inclination & $\sim 0\degr$ (4.924 × 10− 6 ± 3, degrees) Argument of periastron & 30.170 ± 15 degrees Longitude of ascending node & 81.660 ± 5 degrees Time of periastron passage & 1951.57 ± 0.25 AD [pzteldatamodif] @lll Obs. date & Declination (*x*) & Right ascension (*y*) & (mas) & (mas) Jun. 13, 2007 & 120.86 ± 1.20 & 225.83 ± 2.20 Apr. 11, 2009 & 169.36 ± 8.57 & 282.88 ± 8.57 Sep. 28, 2009 & 165.30 ± 0.59 & 292.76 ± 1.05 May 07, 2010 & 175.76 ± 0.60 & 307.99 ± 1.04 May 05, 2010 & 183.50 ± 1.53 & 309.64 ± 2.58 Sep. 26, 2010 & 187.34 ± 4.10 & 313.61 ± 6.87 Oct. 28, 2010 & 185.60 ± 0.55 & 319.70 ± 0.95 Mar. 25, 2011 & 192.43 ± 0.51 & 330.92 ± 0.87 May 03, 2011 & 194.99 ± 0.99 & 343.11 ± 1.74 Jun. 06, 2011 & 196.32 ± 0.25 & 335.80 ± 0.43 Jun. 07, 2011 & 195.35 ± 2.51 & 338.33 ± 4.33 Apr. 05, 2012 & 196.11 ± 4.45 & 346.04 ± 7.83 Jun. 08, 2012 & 212.36 ± 0.10 & 362.62 ± 0.13 [mcmcmappztelbdk3] The characteristics of the putative PZ Tel c corresponding to this run are listed in Table [pztelc] and the recomputed barycentric astrometry of PZ Tel B is given in Table [pzteldatamodif]. The main characteristics of the result of the run (histograms of periastron, eccentricity and inclination) are shown in Fig. [mcmcmappztelbdk3]. The first thing we note is that the putative PZ Tel c companion (12 *M*Jup at 3.5AU) is compatible with the current non-detection limits. Second, the difference between the computed barycentric astrometric data and the measured data is small, often within the error bars of Table [pzteldata]. Nonetheless, the difference in the orbital fit (Fig. [mcmcmappztelbdk3]) is striking. PZ Tel Bstill appears eccentric, but its eccentricity is now confined between 0.65 and 0.8, the best *χ*2 orbit (the red bar in the plots) having *e* ≃ 0.68. According to this analysis, PZ Tel Bwould clearly be a bound companion. Its periastron *q* ranges between 8 and 24 AU, but it must be noted that all orbits with *q* < 8 AU have been eliminated in the fitting procedure as being presumably highly unstable versus gravitational perturbations from the hypothetical PZ Tel ccompanion. Letting the periastron distribution extend towards lower values would generate solutions with larger eccentricities, but these would probably not be physical. Dynamical analysis ------------------ Figure [mcmcmappztelbdk3] also reveals that PZ Tel B’s orbital inclination is very close to $90\degr$, meaning an almost edge-on viewed orbit. Simultaneously, PZ Tel c’s inclination appears extremely low (Table [pztelc]), meaning a pole-on orbit. This allows to question the dynamical stability of such a system. According to our fit, PZ Tel B’s periastron is most probably around  ∼ 10 AU, which is  ∼  3 times larger than the semi-major axis of PZ Tel c’s semi-major axis. This is in principle marginally enough to ensure the dynamical stability of the whole. But here both orbits are nearly perpendicular. In this context, the inner orbit is likely to be trapped in the Kozai-Lidov mechanism characterized by huge eccentricity changes. This could trigger orbital instability. [pztelbchjs] We thus decided to numerically investigate the dynamical stability of this three body system, starting from the best *χ*2 solution for PZ Tel B in Fig. [mcmcmappztelbdk3], and with the orbital solution from Table [pztelc]. As the fitted orbit of PZ Tel B is barycentric, we used our HJS symplectic code that naturally works in Jacobi coordinates, which is the case here. The result is presented in Fig. [pztelbchjs], which shows the orbital evolution of the system over 106yr. The regular evolution pattern is an indication of stability. In fact, the integration was carried out up to 107yr, which reveals the same behaviour as in the first 106 yr. Moreover, the semi-major axes of both planets (not shown here) appeared to be remarkably stable, which confirms the stability. Nonetheless, PZ Tel c’s eccentricity exhibits large amplitude oscillations couple with oscillation of the mutual inclination between both orbital planes. This behaviour is characteristic for a strong Kozai resonance. It must be noted however that in high eccentricity phases, the mutual inclination does not drop down to 0 but up towards $180\degr$ (retrograde orbits). This is thus an example of retrograde Kozai resonance. This picture is however very probably erroneous. In fact, given the almost perfectly perpendicular initial configuration of the orbits, the Kozai cycles drive PZ Tel c up to very high eccentricity values. The peak eccentricity in the cycles is actually close to  ∼ 0.998! Considering that PZ Tel c’s semi-major axis is nearly constant, its periastron must drop down to very small values during peak eccentricity phases. The right plot of Fig. [pztelbchjs] confirms this fact. The minimum periastron value in the peaks ranges between 104 AU and 10− 3 AU. With a mass of 1.25 *M*⊙, PZ Tel’s radius can be estimated to 9 × 105 km, i.e, 6 × 10− 3 AU. PZ Tel c is thus potentially subject to collision with the central star. However, when the periastron decreases in high eccentricity phases, PZ Tel c is very probably affected by tides with the central star that may prevent physical collisions. Tides were not taken into account in the run described in Fig. [pztelbchjs], so that this picture does not hold. [pztelbctid] We thus re-computed the secular evolution of the same three-body system, but taking now tides between the central star and PZ Tel c into account. The computation was done using a special version of the HJS integrator to which we have added tides and relativistic post-Newtonian corrections. The details of this code are presented in, with an application to the GJ 436 system. Tides mainly depend of the assumed tidal dissipation parameter *Q**p* for the planet, a dimensionless parameter related to the rate of energy dissipated through tides per orbital period. The smaller *Q**p*, the more efficient the tidal dissipation is. *Q**p* is very hard to estimate, but typical values for giant planets range around 105 within one order or magnitude. Figure [pztelbctid] shows the result of such a simulation assuming *Q**p* = 105 for PZ Tel c. The difference with the previous run is striking. PZ Tel c’s semi-major axis appears to remain constant for   ∼ 3 × 104, yr and to suddenly drop after. During the first phase (before 3 × 104 yr), the eccentricity gradually increases before decreasing, and the mutual inclination decreases before stabilising. The explanation is the following: in the first phase, the periastron remains too large to allow tides to be active. But the Kozai mechanism causes an eccentricity increase until a point where tides act at periastron. The subsequent energy dissipation causes then a rapid decay of PZ Tel c’s orbit and a subsequent circularisation. This scenario is actually similar to the one depicted in for GJ 436 and by. In the latter cases, several Kozai-Lidov oscillations together with tidal friction first occur before the inner orbit starts to decay. Here this state is reached at the very first eccentricity peak, thanks to very strong tides when the periastron gets very close to the stellar surface. The consequence is that tides prevent the deduced orbital configuration between PZ Tel B and PZ Tel c from being stable, as the hypothetical PZ Tel c inevitably migrates to a much closer orbit after only 3 × 104 yr. This result obviously depends on the assumed *Q**p* value. We thus tried other runs with increased *Q**p* values (106 and above) to reduce the strength of tides. This appeared to only delay the time of the orbital decay without changing the basic scenario. In any case, PZ Tel c ends up on a tight orbit ( < 0.1 AU) on a timescale in any case much lower than the age of the system. Our conclusion is that the PZ Tel c scenario depicted here to account for the apparent very high eccentricity of PZ Tel B does not hold, as it would require an orbital configuration that cannot be stable. We are thus back to our conclusion that PZ Tel B’s orbit is really very close to parabolic state, as deduced from our initial MCMC analysis. Conclusion ========== We have developed a new MCMC code based on the use of universal Keplerian variables, dedicated to the orbital fit of imaged companions with very high eccentricities or unbound orbits. This code was successfully applied to the specific cases of Fomalhaut b and PZ Tel B. Concerning Fomalhaut b, we confirm our orbital determination of, but we show that the eccentricity distribution can extend above *e* = 1 in the unbound regime. This is in agreement with the analysis of who show that for such companions imaged over a very small orbital arc, the unknown radial velocity renders unbound orbits possible. We think however that Fomalhaut b is very probably a bound companion, although very eccentric. The case of PZ Tel B is more complex. Our code reveals a very different eccentricity distribution than for Fomalhaut b. PZ Tel B’s eccentric distribution exhibits indeed a very sharp peak around *e* = 1. According to, imaged companions can appear much more eccentric than they are actually, thanks to the presence of hidden inner companions that affect that astrometric data. already showed that this model cannot account for Fomalhaut b’s eccentricity. Concerning PZ Tel B, we show that a hidden ∼ 12 *M*Jup companion orbiting at  ∼ 3.5 AU in a pole-on configuration (contrary to the edge-on orbit of PZ Tel B) could mimic an almost unbound orbit despite a real eccentricity around  ∼ 0.7. However, due to the combination of tides and Kozai-Lidov mechanism, this configuration is dynamically unstable. We are thus back to the conclusion that PZ Tel Bis really a very high eccentricity, possible unbound companion. The dynamical origin of Fomalhaut b and its configuration relative to the dust disk orbiting Fomalhaut was recently investigated by. According to this model, the Fomalhaut system should harbour another, more massive planet that controls the shape of the disk. Fomalhaut b could have formerly resided in a mean-motion resonance with that planet have been put on its present day eccentricity via a gradual eccentricity increase and one or several close encounters. The case of PZ Tel B is more complex. According to our orbital determination, its eccentricity is in any case very close to 1 if not above. The planet passed through a very close ( < 0.1 AU) periastron in 2002, consistent with its non-detection in 2003 NaCo images. Highly eccentric orbits with very small periastron are usually triggered by Kozai-Lidov mechanism or by mean-motion resonance with a moderately eccentric outer companion. But given the estimated mass of PZ Tel B, this would require a perturbing companion in the low stellar mass regime. Such a companion would have already been detected. Apart from a very peculiar past encounter event, there is therefore no obvious explanation for the unusual orbital configuration of PZ Tel B. Further monitoring of this system will help better understanding it. Most of the computations presented in this paper were performed using the Froggy platform of the CIMENT infrastructure (https://ciment.ujf-grenoble.fr), which is supported by the Rhône-Alpes region (GRANT CPER07\_13 CIRA), the OSUG@2020 labex (reference ANR10 LABX56) and the Equip@Meso project (reference ANR-10-EQPX-29-01) of the programme Investissements d’Avenir supervised by the Agence Nationale pour la Recherche. The MCMC code used in this paper can be obtained freely upon request to `[email protected]`. Aarseth S.J., 1999, Celest. Mech. 73, 127 Barker A.J., Ogilvie G.I., 2009, MNRAS 395, 2268 Bell C.P.M., Mamajek E.E., Naylor T., 2015, MNRAS, in press, arXiv:1508.05955 Beust H., Morbidelli A., 2000, Icarus 143, 170 Beust H., 2003, A&A 400, 1129 Beust H., Bonfils X., Montagnier G., Delfosse X., Forveille T., 2012, A&A 545, A88 Beust H., Augereau J.-C., Bonsor A., et al., 2014, A&A 561, A43 Biller B.A., Liu M.C., Wahhaj Z., et al., 2013, ApJ 720, L82 Biller B.A., Liu M.C., Wahhaj Z., et al., 2013, ApJ 777, 160 Binks A.S., Jeffries R.D., 2014, MNRAS 438, L11 Bonnefoy M., Lagrange A.-M., Boccaletti A., et al., 2011, A&A 528, L15 Bonnefoy M., et al., 2014,?? Burkardt T.M., Danby J.M.A., 1983, Celest. Mech. 31, 317 Caballero J.A., Elipe A., 2001, Astron. Astrophys. Transactions, 19, 869 Chambers J.E., 1999, MNRAS 304, 793 Chauvin G., Lagrange A.-M., Beust H., et al., 2012, A&A 542, A41 Currie T., Debes J., Rodigas T.J., 2012, ApJ 760, L32 Danby J.M.A., 1987, Celest. Mech. 40, 303 Danby J.M.A., Burkardt T.M., 1983, Celest. Mech 31, 95 Devillard N., 1997, The Messenger 87, 19 Fabrycky D., Tremaine S., 2007, ApJ 669, 1298 Faramaz V., Beust H., Augereau J.-C., Kala s P., Graham J.R., 2005, A&A 573, A87 Ford E.B., Kozinsky B., Rasio F.A., 2000, ApJ 535, 385 Ford E.B., 2005, AJ 129, 1706 Ford E.B., 2006, ApJ 642, 505 Galicher R., Marois C., Zuckerman B., Macintosh B., 2013,  769, 42 Ginski C., Schmidt T.O.B., Mugrauer M., et al., 2014, MNRAS 444, 2280 Janson M., Carson J.C., Lafrenière D., et al., 2012, ApJ 747, 116 Kalas P., Graham J.R., Chiang E., et al., 2005, Nature 435, 1067 Kalas P., Graham J.R., Chiang E., et al., 2008, Science 322, 1345 Kalas P., Graham J.R., Fitzgerald, M.P., Clampin M., 2013, ApJ 775, 56 Kennedy G.M., Wyatt M.C., 2011, MNRAS 412, 2137 Krymolowski Y., Mazeh T., 1999, MNRAS 304, 720 Kozai Y., 1962, AJ 67, 591 Lenzen R., Hartung M., Brandner W., et al., 2003, in “Instrument Design and Performance for Optical/Infrared Ground-based Telescopes”, proc. SPIE, M. Iye, M. Moorwood (Eds.), 4841, 944 Levison H.F., Duncan M.J., 1994, Icarus 108, 18 McCaughrean M.J., Stauffer J.R., 1994, AJ 108, 1382 Malo L., Doyon R., Feiden G.A., et al., 2014, ApJ 792, 37 Maire A.-L., Bonnefoy M., Ginski C., et al., 2015, A&A, in press, arXiv:1511.04072 Mamajek E.E., 2012, ApJ 754, L20 Mamajek E.E., Bell C.P.M., 2014, MNRAS 445, 2169 Marois C., Lafrenière D., Doyon R., Macintosh B., Nadeau D., 2006, A&A 641, 556 Masciadri E., Mundt R., Henning T&, Alvarez C., Barrado y Nevascués D., 2005, Apj 625, 1004 Messina S., Desidera S., Turatto M.; Lanzafame A.C., Guinan E.F., 2010, A&A 520, A15 Mugrauer M., Vogt N., Neuhäuser R., Schmidt T.O.B., 2010, A&A 523, L1 Mugrauer M., Röll T., Ginski C., et al., 2012, MNRAS 424, 1714 Nelson B., Ford E.B., Payne M.J., 2014, ApJS 210, 11 Pearce T.D., Wyatt M.C., Kennedy G.M., 2014, MNRAS 437, 2686 Pearce T.D., Wyatt M.C., Kennedy G.M., 2015, MNRAS 448, 3679 Pueyo L., Soummer R., Hoffmann J., et al., 2015, ApJ 803, 31 Reche R., Beust H., Augereau J.-C., 2009, A&A 493, 661 Rousset G., Lacombe F., Puget P., et al., 2003, in “Adaptive Optical System Technologies II”, proc. SPIE, P.-L. Wizinowich, D. Bonaccini (Eds.), 4839, 140 Soummer R., Brendan Hagan J., Pueyo L., et al., 2011, ApJ 741, 55 Spencer Jones H., Jackson J., 1936, “Proper motions of stars in the zone catalogue of 20843 stars 1900”, London: HMSO Schmidt T.O.B., Mugrauer M., Neuhäuser R., et al., 2014, A&A 566, A85 Torres C.A.O., Quast G.R., da Silva L., et al., 2006, A&A 460, 695 van Dessel E., Sinachopoulos D., 1993, A&AS 100, 517 Wisdom J., Hernandez D.M., 2015, MNRAS 453, 3015 Zhang K, Hamilton D.P., 2008, Icarus 193, 267 Zuckerman B., Song I., Bessell M.S., Webb R.A., 2001, ApJ 562, L87 --- 1. Based on observations collected at the European Organisation for Astronomical Research in the Southern Hemisphere, Chile (Program ID: 085.C-0867(B) and 085.C-0277(B)).[↩](#fnref1) Orbital fitting of imaged planetary companions with high eccentricities and unbound orbits ========================================================================================== Application to Fomalhaut b and PZ Telecopii B[1](#fn1) ====================================================== ### Received....; Accepted.... For such orbits, standard MCMC codes assuming only bound orbits may be inappropriate. Our goal is to develop a new MCMC implementation able to handle bound and unbound orbits as well in a continuous manner, and to apply it to the cases of Fomalhaut b and PZ Tel B. We present here this code, based on the use of universal Keplerian variables and Stumpff functions. We present two versions of this code, the second one using a different set of angular variables designed to avoid degeneracies arising when the projected orbital motion is quasi-radial, as it is the case for PZ Tel B. We also present additional observations of PZ Tel B The code is applied to Fomalhaut b and PZ Tel B. Concerning Fomalhaut b, we confirm previous results, but we show that on the sole basis of the astrometric data, open orbital solutions are also possible. The eccentricity distribution nevertheless still peaks around  ∼ 0.9 in the bound regime. We present a first successful orbital fit of PZ Tel B, showing in particular that while both bound and unbound orbital solutions are equally possible, the eccentricity distribution presents a sharp peak very close to *e* = 1, meaning a quasi-parabolic orbit. It was recently suggested that the presence of unseen inner companions to imaged ones may lead orbital fitting algorithms to artificially give very high eccentricities. We show that this caveat is unlikely to apply to Fomalhaut b. Concerning PZ Tel B, we derive a possible solution involving an inner   ∼ 12*M*Jup companion that would mimic a *e* = 1 orbit despite a real eccentricity around 0.7, but a dynamical analysis reveals that such a system would not be stable. We thus conclude that our orbital fit is robust. Introduction ============ A growing number of substellar companions are nowadays regularly discovered and characterised by direct imaging. These objects are usually massive (larger than a few Jupiter masses) and orbit at wide separations (typically  < 20 AU). Some of them as sufficiently close to their host star to allow detection of their orbital motion. Astrometric follow up gives then access to constraints on their orbits . The use of Markov-Chain Monte-Carlo (MCMC) algorithms to fit orbits of substellar or planetary companions has become common now. This statistical approach is particularly well suited for directly imaged companions, as their orbit is usually only partially and unequally covered by observations. Some of the fitted orbits appear surprisingly very eccentric. This is for instance the case for Fomalhaut b and PZ Tel B. Fomalhaut b is an imaged planetary companion orbiting the A3V star Fomalhaut (*α* Psa, HD 216956, HIP 113368) at  ∼ 119AU. The physical nature of this object is still a matter of debate. It is commonly thought to be a low mass planet, but it also has been suggested that the Fomalhaut b image could represent starlight reflected by a cloud of dust grains, possibly bound to a real planet. The first attempts to fit Fomalhaut b’s orbit on the basis of the available astrometric positions reveal a very eccentric, possibly unbound orbit ($e\ga 0.8$). Subsequent dynamical studies on the past history of this planet and its interaction with the dust belt imaged around the star led to the conclusion that another, yet undiscovered planet must be present in this system to control the dynamics of the dust belt, and that Fomalhaut bmay have been formerly trapped in a mean-motion resonance with that planet before being scattered on its present day orbit. This however assumes that Fomalhaut b is actually a bound companion. While this is likely, unbound solutions might still be possible. According to, planets imaged of such small orbital arcs are compatible with bound orbital solutions as well as unbound ones due to unknown position and velocity along the line of sight. The case of PZ Tel B is even more complex. PZ Tel (HD 174429, HIP 92680), is a G5-K8 star member of the 24 ± 3 Myr old *β* Pic moving group. A sub-stellar companion, termed PZ Tel B, was independently discovered by and. Its mass is estimated  ∼ 20 *M*Jup and  ∼ 40 *M*Jup. It is therefore most likely a substellar object. Attempts to fit the orbit of this companion based on successive astrometric positions led to the conclusion that it must be close to edge-on and highly eccentric. The pair has been imaged regularly since 2007. PZ Tel B is moving away from the central star along a quasi straight line. Its distance to the star increased by  ∼ 60 % between 2007 and 2012. From the orbital standpoint, it is not clear whether PZ Tel B is a bound companion. But, in any case, its periastron must be small ($\la 1\,$AU), which is a major difference with Fomalhaut b. However, spectra of PZ Tel B obtained by indicate that it is a young object like the star itself. This strongly suggests that both objects are physically bound. MCMC Orbital fitting techniques are usually based on the assumption that the orbit to fit is elliptic, making use of corresponding Keplerian formulas. This can be problematic with orbits with eccentricities close to 1. This can either prevent convergence of the fit, or generate boundaries in the fitted orbit distributions that are not physical, but rather generated by the limitation of the method. Of course, on could design an independent MCMC code based on the use of open orbit formulas. Such a code would try to fit open orbits only. Our goal is to design a code that can handle both kinds of orbits in a continuous manner. Applied to the cases invoked above, this would help deriving for instance a robust estimate of a probability to be bound. This cannot be done using standard Keplerian variables, as the changes of formulas between bound and unbound orbits would generate enough noise to prevent the code to converge. We develop here a MCMC code based on the use of universal Keplerian variables, an elegant reformulation of Keplerian orbits that holds for bound and unbound orbits as well. The organisation of the paper is the following : In Sect. 2, we present new VLT/NACO observations of PZ Tel B that we will use together with order data in our fit. Then In Sect. 3, we present the fundamentals of our new code based on universal Keplerian variables. In Sects. 4 and 5, we present its application to the Fomalhaut b and PZ Tel B cases respectively. In Sect. 6, we present further modeling based on the suggestion by that highly eccentric companions could be actually less eccentric that they appear due to the presence of unseen additional inner companions. For the PZ Tel B case, we find one configuration that could indeed generate an apparent very high eccentricity, but we present subsequent dynamical modeling showing that it is in fact unstable. Our conclusions are then presented in Sect. 7. New observations of PZ Tel B ============================ Log of the observations ----------------------- PZ Tel B was observed with VLT/NaCo on September 26, 2010 in the L’ band filter (central wavelength=3.8 *μ*m, width=0.62*μ*m) in pupil-stabilized mode (P.I. Ehrenreich, Program 085.C-0277). The mode was used to subtract the stellar halo using the angular differential imaging (ADI) technique, despite the companion could be seen into our raw images. [ht] [Tab:Obs] @llllllllllll Object & Date & Band & Density filter & Camera & DIT & NDIT & $\mathrm{N\_\mathrm{exp}}$ & $\mathrm{\theta\_\mathrm{start}/\theta\_\mathrm{end}}$ & \langle Seeing \rangle & $\mathrm{\langle\tau\_{0}\rangle}$ & Notes &&&&& (s) &&& (∘) & () & (ms) & *θ* Orionis & 24/09/2010 & L’ &–& L27 & 0.3 & 60 & 7 &  − 132.922/ − 133.862 & 0.88 & 0.73 & Astrometric cal. PZ Tel & 26/09/2010 & L’ & ND\_long & L27 & 0.2 & 150 & 8 & 4.540/7.102 & 1.77 & 1.20& PSF PZ Tel & 26/09/2010 & L’ & – & L27 & 0.3 & 100 & 143 & 10.624/53.175 & 1.83 & 1.03 & ADI sequence PZ Tel & 26/09/2010 & L’ & ND\_long & L27 & 0.2 & 150 & 8 & 53.348/54.936 & 1.37 & 1.20 & PSF PZ Tel & 03/05/2011 & $\mathrm{K\_{s}}$ & ND\_short & S27 & 1.0 & 100 & 12 &  − 51.868/ − 43.155 & 0.50 & 5.54 & ADI sequence PZ Tel & 07/06/2011 & L’ & ND\_long & L27 & 0.2 & 150 & 8 &  − 71.868/ − 70.664 & 2.78 & 0.75 & PSF PZ Tel & 07/06/2011 & L’ & – & L27 & 0.2 & 150 & 96 &  − 70.317/ − 52.769 & 0.81 & 2.76 & ADI sequence PZ Tel & 07/06/2011 & L’ & ND\_long & L27 & 0.2 & 150 & 7 &  − 52.582/ − 50.922 & 0.78 & 2.61 & PSF The observation sequences, atmospheric conditions (seeing, coherence time *τ*0), and instrument setup are summarized in Table [Tab:Obs]. A continuous sequence of 1200 exposures was recorded, split into 8 cubes (*N*exp) of 150 images each (NDIT). The detector 512 × 512 pixel windowing mode was used to allow for having short data integration times (DIT=0.3s). The *ND\_long* neutral density (ND) was placed into the light path for the first and last 8 × 150 frames of the sequence to acquire unsaturated images of the star for the purpose of the calibration of the companion photometry and astrometry. The star core was in the non-linearity regime for the rest of the 143 exposures. The parallactic angle (*θ*) over the ND-free exposures ranges from $\theta\_\mathrm{start}=10.62\degr$ to $\theta\_\mathrm{end}=53.17\degr$, corresponding to an angular variation of 2.85 times the full-width at half maximum (FWHM) at 350mas. The system was re-observed on June 7, 2011 using the same instrument configuration (Program 087.C-0450, P.I. Ehrenreich). This sequence was recorded under unstable conditions. Nevertheless, the image angular resolution was high enough to resolve the brown-dwarf companion. The observation sequence is similar to the one of September 26, 2010, although the field rotation is reduced (17.55∘). It is summarized into Table [Tab:Obs]. To conclude, we recorded one additional epoch of pupil-stabilized observations of the PZ Tel system in the *K**s* band (central wavelength=2.18 *μ*m, width=0.35 *μ*m) with NaCo (P.I. Lagrange, Program 087.C-0431). We used the neutral density filter of the instrument (*ND\_Short*) adequate to this band to avoid saturating the star. The field rotation was not sufficient to take advantage of the angular differential imaging technique. Data Reduction -------------- The reduction of the L’-band observations was carried out by a pipeline developed in Grenoble. The pipeline first applied the basic cosmetic steps (bad pixel removal, nodded frame subtraction) to the raw images. The star was then registered into each individual frames of each cube using a 2D Moffat function adjusted onto the non-saturated portions of the images. We applied a frame selection inside each cube based on the maximum flux, and on the integrated flux over the PSF core. The cubes were then concatenated to create a master cube. Given the relative brightness of the companion and the amount of field rotation for the 2010 observations, we chose to apply the smart-ADI (SADI) flux-conservative algorithm to suppress the stellar halo. The algorithm builds for each frame of our observation sequence a model of the stellar halo from the other sequence images for which a companion located at a distance *R* from the star has moved angularly (because of the field rotation) by more than *n* ×  the FWHM (separation criterion). Only the *N**N* (*N**N* ∈ *R*) frames the closest in time (Depth parameter) and respecting the separation criterion are considered. We defer the reader to and for details. We found the parameters maximizing the detection significance of the companion (*R* = 13.6 pixels, depth=4, and 2 × FWHMs) throughout these intervals: 2 ≤ Depth ≤ 12, FWHM=1, 1.5, 2. Flux losses associated to the self-subtraction of the companion during the ADI process were estimated using either five artificial planets (AP) are injected at $\mathrm{PA}=179\degr$, $-61\degr$, $239\degr$, $210\degr$, and $270\degr$, or negative fake planets. These AP were built from the non-saturated exposures of the star taken before and after the ADI observations (see Table [Tab:Obs]). We derive a final contrasts of Δ*L*′ = 5.15 ± 0.15mag. The error accounts for uncertainties on the flux losses estimates, on the evolution of the PSF through the ADI sequence, and on the method used to extract the companion photometry. This new L’ band photometry was accounted for the up-to-date analysis of the spectral energy distribution of the companion. Images of the *θ* Orionis C astrometric field were acquired with an identical setup on September, 24 2010. They were properly reduced with the `Eclipse` software. The position of the stars on the detector were compared to their position on sky measured by to derive the instrument orientation to the North of $-0.36\pm0.11\degr$ and a detector plate scale of 27.13 ± 0.09mas. We used these measurements together with the position of PZ Tel B derived from the negative fake planet injection to find a PA=$59.9\pm0.7\degr$ and a separation of 374 ± 5mas for the companion. The second epoch of L’ band observations was reduced with the IPAG pipeline, but using the classical imaging (CADI) algorithm. The algorithm built a model of the halo valid for all the images of the sequence from the median of all images contained in the sequence. It is more appropriate than the smart-ADI algorithms here because of the small amount of field rotation. Indeed, it would not have been possible to built a model of the halo for each frames of the sequence while respecting a separation criterion of 1 FWHM for all the frames contained in our sequence. The flux losses were estimated in the same way as for the 2010 L’-band observations. We measure a contrast of Δ*L*′ = 4.6 ± 0.6mag. The photometry is less accurate because of the unstable conditions during the observations. The instrument orientation (TN=1.33 ± 0.05∘) and plate-scale (27.38 ± 0.08 mas/pixel) were measured onto the images of the binary star HD 113984 observed on September 2, 2011. We therefore derive a PA=60.0 ± 0.6∘ and separation of 390.0 ± 5.0 mas for the companion. We realigned each of the *K**s* band images to the North and median-combined them to create a final image of PZ Tel AB. The companion is seen in the images. We removed the stellar halo at the position of the companion making a axial symmetry of the halo around a axis inclined at PA=-45, 0, or 90∘. We integrated the flux of the star and companions into circular apertures of radii 135 mas (5 pixels) to derive a contrast ratio of *δ**K**s* = 5.46 ± 0.07. The error bars account for the dispersion of contrast found for the different choice of duplication axis for the stellar halo removal. This contrast ratio is consistent within error bars with the one derived by with the same instrumental setup. We measure a PA=60.0 ± 0.6∘ and a separation of 390.0 ± 5.0 mas for the system. This astrometry relies on the TN and plate-scale measured on March 03, 11, and reported in. The astrometry reported in this section assumes that the TN and plate-scale are stable between the observations of the astrometric fields and our observations of PZ Tel. This seems to be the case according to the measurements of. Fundamentals of MCMC for high eccentricity and open orbits ========================================================== MCMC for astrometric imaged companions -------------------------------------- The fundamentals of the MCMC method applied to planets detected with radial velocities are described in, and its application to imaged companions are for instance described in. The first requirement is to presuppose general probability distributions (commonly called priors) for the orbital elements. For bounds orbits, the usual orbital elements are the semi-major axis *a*, the eccentricity *e*, the inclination *i*, the longitude of ascending node Ω, the argument of periastron *ω*, and the time for periastron passage *t**p*. The priors for these elements are generally assumed uniform between natural bounds, except for *a* for which a logarithmic prior ( ∝ ln*a*) is often assumed, and for *i* for which assume a prior  ∝ sin*i* is also of standard use. Combined with uniform prior for Ω, this choice ensures a uniform probability distribution over the sphere for the direction of the orbital angular momentum vector. In the building process of a Markov chain, successive orbital solutions are generated from preceding ones taking steps on the orbital variables and selecting or rejecting the generated new orbits using the Metropolis-Hastings algorithm (hereafter MH). MCMC theory tells that whenever the chain grows, it is expected to stabilise and the final statistical distribution of orbits within the chain samples the posterior probabilistic distribution of orbital solutions. In practice several chains are run in parallel (we use 10), and Gelman-Rubin criterion on crossed variances is used to check for convergence. An important point to note is that the variables on which steps are taken with MH are not necessarily the orbital elements listed above themselves, but rather combinations of them. In (*β* Pic b) and (Fomalhaut b), the work is done on the parameter vector $$\begin{aligned} \vec{w\_1} & = & \left(\frac{\cos(\omega+\Omega+v\_0)}{P}, \frac{\sin(\omega+\Omega+v\_0)}{P},\frac{e\cos(\omega+\Omega)}{\sqrt{1-e^2}}, \right.\nonumber\\ &&\frac{e\sin(\omega+\Omega)}{\sqrt{1-e^2}}, (1-e^2)^{1/4}\sin\frac{i}{2}\cos(\omega-\Omega),\nonumber\\ &&\left.(1-e^2)^{1/4}\sin\frac{i}{2}\sin(\omega-\Omega)\right), \label{param1}\end{aligned}$$ where *P* is the orbital period and *v*0 is the true anomaly at a reference epoch (typically that of a specific data point of a time close to periastron passage). This choice was dictated by the following considerations : * As pointed out in, considering imaged companions, there is a degeneracy in orbital solutions concerning parameters Ω and *ω*. Considering one potential solution with (Ω, *ω*) values, the same solution but with (Ω + *π*, *ω* + *π*) exactly gives the same projected orbital motion. The only way to lift this degeneracy is to have independent information about which side of the projected orbit (or of the associated disk) is on the foreground, or to have radial velocity measurements. Hence taking steps on Ω and *ω* in MCMC can generate convergence difficulties with chains oscillating between two symmetric families of solutions. To avoid this difficulty, we consider angles *ω* + Ω and *ω* − Ω which are unambiguously determined. It is indeed easy to express the projected Keplerian model as a function of these angles only. * Whenever *i* = 0, angles Ω and *ω* are undefined (and so angle *ω* − Ω), while Ω + *ω* is still defined. Hence we take variables  ∝ sin(*i*/2)cos(*ω* − Ω) and  ∝ sin(*i*/2)sin(*ω* − Ω) to avoid a singularity whenever *i* → 0. * The same applies to eccentricity. When *e* vanishes, Ω + *ω* itself is undefined. This is why we consider variables  ∝ *e*cos(Ω + *ω*) and  ∝ *e*sin(Ω + *ω*). * *ω* + Ω + *v*0 is defined even when *e* = *i* = 0. This is why we chose it in the remaining variables. But as much as possible, we avoid directly taking steps on angular variables themselves, which can lead to convergence problems with jumps around 2*π* and similar values. This is why we use cos(*ω* + Ω + *v*0) and sin(*ω* + Ω + *v*0) in the remaining variables. As explained by, the assumed priors are taking into account in MH multiplying the basic probability by the Jacobian of the transformation from the linear vector (ln*a*, *e*, sin*i*, Ω, *ω*, *t**p*) to the parameters vectors. This Jacobian reads here $$J\_1=\frac{1}{2}\frac{e(1+e\cos v\_0)}{(1-e^2)^3P^2}\qquad.$$ Open orbits and universal variables ----------------------------------- The parameter vector $\vec{w\_1}$ ([param1]) is well suited to fit low eccentricity orbits. It has nevertheless proved efficiency for high eccentricity orbits as well. also gives alternate sets of parameters for such orbits. However, none of them can handle open orbits. Moreover, their validity of the fit close to the boundary *e* = 1 is questionable. Our goal is to allow MCMC fitting to account for bound or unbound orbits in a continuous manner as well. Some of the variables in Eqs. ([param1]), such as the orbital period *P* and $\sqrt{1-e^2}$ are clearly inappropriate and need to be changed. The periastron *q* is conversely always defined for any type of orbit. So changing *P* to *q* and eliminating $\sqrt{1-e^2}$ in Eqs. ([param1]) could be a first solution. We designed a code based on this idea, which turned out not to be efficient. Convergence of Markov Chains could not be reached after billions of iterations. The reason lies in the Keplerian model assumed. To be able to compute the position and velocity of an orbiting companion at a given time (and subsequently a *χ*2), one needs to solve Kepler’s equation for the eccentric anomaly *u* as a function the time *t*. Kepler’s equation depends on the type of orbit. For an elliptical, parabolic and hyperbolic orbit, this equation reads $$u-e\sin u = M,\qquad\frac{u}{2}+\frac{u^3}{6}=M,\quad\mbox{and} \quad e\sinh u-u = M \label{keps}$$ respectively. In the parabolic case, this equation is called Barker’s equation, and *u* = tan(*v*/2), where *v* is the true anomaly. *M* = *n*(*t* − *t**p*) is the mean anomaly, while *n* is the mean motion. This equation holds in all cases, but *n* is defined as $\sqrt{\mu/a^3}$ in the elliptic and hyperbolic case, and as $\sqrt{\mu/8q^3}$ in the parabolic case, where *μ* = *G**M* is the dynamical mass (*M* is the central mass). In the random walk process of a Markov chain, permanent switching between these equations lead to instabilities that prevent convergence. A good way to overcome this difficulty is to move to the universal variable formulation, which is a very elegant way to provide a unique and continuous alternative to Kepler’s equation valid for any kind of orbit. We first define the energy parameter *α* as $$\alpha=-2E=\frac{\mu}{q}\,(1-e)\qquad,$$ where *E* is the energy per unit mass. This expression is valid for any orbit. Elliptical orbits are characterized by *α* > 0, parabolic orbits by *α* = 0 and hyperbolic ones by *α* < 0. The eccentric anomaly *u* is then replaced by the universal variable *s*. For non-parabolic orbits, *s* is defined as $$s=\frac{u}{\sqrt{|\alpha|}}\qquad,$$ and as $$s=\frac{u}{2qn}$$ for parabolic orbits. It can be shown that in any case, we have $$s=\frac{1-e}{q}\,\left(t-t\_p\right)+\frac{eY}{\sqrt{q\mu(1+e)}}\qquad,$$ where *Y* is the *y*-coordinate in a local *O**X**Y* referential frame (*X* axis pointing towards periastron). This shows that the definition of *s* is continuous irrespective of the type of orbit. Then, show that Kepler’s equation can then be rewritten in any case as *μ**s*3*c*3(*α**s*2) + *q**s**c*1(*α**s*2) = *t* − *t**p*  ,  where *t* is the time, and the *c**k*’s are the Stumpff functions defined as $$c\_k(x)=\sum\_{n=0}^{+\infty}\frac{(-1)^n}{(2n+k)!}\,x^n\qquad. \label{ck}$$ This formulation of Kepler’s equation is valid for any orbit. Using *c**k*(0) = 1/*k*!, we see that for *α* = 0 (parabolic orbit), this equation is equivalent to Barker’s equation. Once this equation is solved for *s*, the heliocentric distance *r* and the rectangular coordinates *X* and *Y* read $$\begin{aligned} X & = & q-\mu s^2c\_2\left(\alpha s^2\right)\qquad,\\ Y & = & s\sqrt{q\mu(1+e)}\,c\_1\left(\alpha s^2\right)\qquad,\\ r & = & q+e\mu s^2c\_2\left(\alpha s^2\right) \qquad,\end{aligned}$$ these formulas applying for any type of orbit. This formalism was used by several authors for specific problems, such as for wide binaries in clusters, or to sole Keplerian problems with additional disturbing potentials  ∝ 1/*r*2. The Kepler advancing routines in the popular symplectic *N*-body packages Swift and Mercury are also coded this way for high eccentricity and open orbits. Very recently, proposed an alternate formalism that avoids the use of Stumpff functions. They claim it to be more efficient. We did not try to apply that yet and used the Stumpff functions theory instead. Based on the use of Stumpff functions, we designed a new MCMC code, using the following parameter vector $$\begin{aligned} \vec{w\_2} & = & \left(\rule[-1.5ex]{0pt}{3ex} q\cos(\omega+\Omega), q\sin(\omega+\Omega),\right. \nonumber\\ && \qquad\left.\sin\frac{i}{2}\cos(\omega-\Omega), \sin\frac{i}{2}\sin(\omega-\Omega),e,s\_0\right)\quad, \label{param2}\end{aligned}$$ where *s*0 is the universal variable at a given reference epoch. The priors are assumed uniform for Ω, *ω*, *e*, and *t**p*, logarithmic for *q* and  ∝ sin*i* for *i*. The Jacobian of the transformation from (ln*q*, *e*, sin*i*, Ω, *ω*, *t**p*) to $\vec{w\_2}$ reads $$J\_2=\frac{q^2}{2}\,\frac{{\mathrm{d}}s}{{\mathrm{d}}t}\qquad,$$ where d*s*/d*t* can be obtained as $$\frac{{\mathrm{d}}s}{{\mathrm{d}}t}=\frac{1}{\mu s^2c\_2\left(\alpha s^2\right) +qc\_0\left(\alpha s^2\right)}\qquad,$$ to be evaluated here at *s* = *s*0. Transformation of angles ------------------------ This new code was able to reach convergence in the Fomalhaut bcase. Convergence appeared however hard to reach in the PZ Tel case. This is due to the structure of the data (Table [pzteldata]). The astrometric data of PZ Tel B reveal indeed a quasi-linear motion nearly aligned with the central star. PZ Tel B’s orbit appears extremely eccentric, perhaps unbound, but with a periastron much smaller ($\la1\,$AU) than the measured projected distances. In this context, the local reference frame *O**X**Y**Z* may be not well defined. Only the line of apsides *O**X* is likely to be well constrained by the data, while the two other directions require an other nearly arbitrary angular variable to be fixed. The very bad constraint on that angular variable results into degeneracies in the constraints on angles (Ω, *ω*, *i*) that are sufficient to prevent convergence. It thus appears necessary to isolate the badly constrained angular variable into a specific variable. This require to change the parameter vector $\vec{w\_2}$ (Eq. [param2]). With respect to the sky reference frame (*x*-axis pointing towards north, *y*-axis towards east, and *z*-axis towards the Earth), the basis vectors ($\vec{e\_X},\vec{e\_Y},\vec{e\_Z}$) of the local *O**X**Y**Z* reference frame read $$\begin{aligned} \vec{e\_X}&&\left\{ \begin{array}{l} \cos\omega\cos\Omega-\cos i\sin\omega\sin\Omega\\ \cos\omega\sin\Omega+\cos i\sin\omega\cos\Omega\\ \sin\omega\sin i \end{array}\right.\qquad,\nonumber\\ \vec{e\_Y}&&\left\{ \begin{array}{l} -\sin\omega\cos\Omega-\cos i\cos\omega\sin\Omega\\ -\sin\omega\sin\Omega+\cos i\cos\omega\cos\Omega\\ \cos\omega\sin i \end{array}\right.\qquad,\nonumber\\ \vec{e\_Z}&&\left\{ \begin{array}{l} \sin i\sin\Omega\\ -\sin i\cos\Omega\\ \cos i \end{array}\right.\qquad. \label{exeyez}\end{aligned}$$ According to our analysis, in the case of PZ Tel B, only $\vec{e\_X}$ is well constrained by the data. This results in complex combined constraints on Ω, *ω* and *i*. For instance, if $\vec{e\_Z}$ was well constrained by the data, then *ω* would be the weakly constrained parameter, as $\vec{e\_Z}$ is only function of Ω and *i*. The idea is therefore to define new angles in a similar way as in Eqs. ([exeyez]), but in such a way that $\vec{e\_X}$ depends on two angles only. We thus introduce new angles *i*ʹ, Ωʹ and *ω*ʹ designed in such a way that $\vec{e\_X}$ is defined with respect to (*i*ʹ, Ωʹ, *ω*ʹ) in the same manner as $\vec{e\_Z}$ is defined with respect to (*i*, Ω, *ω*). Similarly $\vec{e\_Y}$ will be defined like $\vec{e\_X}$, and $\vec{e\_Z}$like $\vec{e\_Y}$. We thus write now $$\begin{aligned} \vec{e\_X}&&\left\{ \begin{array}{l} \sin i'\sin\Omega'\\[\jot] -\sin i'\cos\Omega'\\[\jot] \cos i' \end{array}\right.\qquad,\nonumber\\ \vec{e\_Y}&&\left\{ \begin{array}{l} \cos\omega'\cos\Omega'-\cos i'\sin\omega'\sin\Omega' \\ \cos\omega'\sin\Omega'+\cos i'\sin\omega'\cos\Omega' \\ \sin'\omega\sin i' \end{array}\right.\qquad,\nonumber\\ \vec{e\_Z}&&\left\{ \begin{array}{l} -\sin\omega'\cos\Omega'-\cos i'\cos\omega'\sin\Omega'\\ -\sin\omega'\sin\Omega'+\cos i'\cos\omega'\cos\Omega'\\ \cos\omega'\sin i' \end{array}\right.\qquad. \label{exeyez2}\end{aligned}$$ The comparison between formulas ([exeyez]) and ([exeyez2]) gives the correspondence between the two sets of angles. Now, the line of apsides is defined by *i*ʹ and Ωʹ only, and *ω*ʹ, the badly constrained angular variable, is undefined if *q* vanishes. It is therefore worth modifying vector $\vec{w\_2}$ according to this transformation. However, one should not forget that vector $\vec{w\_2}$ was designed to avoid the natural degeneracy of astrometric solution between (Ω, *ω*) and (Ω + *π*, *ω* + *π*). It can be seen from Eqs. ([exeyez]) and ([exeyez2]) that changing (Ω, *ω*) to (Ω + *π*, *ω* + *π*) is equivalent to changing (*i*ʹ, *ω*ʹ) to (*π* − *i*ʹ,  − *ω*ʹ) while leaving Ωʹ unchanged. This transformation leaves the first two components of $\vec{e\_X}$ and $\vec{e\_Y}$ unchanged (this explains the degeneracy of the projected orbit), as well as the third component of $\vec{e\_Z}$, all remaining components being changed to their opposites. The new parameter vector must remain unchanged with this transformation as well, to avoid convergence difficulties. We chose the following new parameter vector $$\begin{aligned} \vec{w\_3} & = & \left(\sin i'\cos\Omega', \sin i'\sin\Omega',\right. \nonumber\\ && \qquad\left. q\sin i'\cos\omega', q\cos i'\sin\omega',e,s\_0\right)\qquad. \label{param3}\end{aligned}$$ This new vector is invariant in the transformation (*i*ʹ, *ω*ʹ) → (*π* − *i*ʹ,  − *ω*ʹ). Its first two components define $\vec{e\_X}$ unambiguously. Its third and fourth components vanish when *q* = 0, i.e., when *ω*ʹ is undefined, which avoids singularities. Now, this vector can be expressed as a function of the original angles (*i*, Ω, *ω*) directly, so that the formal introduction of the new angles (*i*ʹ, Ωʹ, *ω*ʹ) is unnecessary. The same vector can be written $$\begin{aligned} \vec{w\_3} & = & \left(\rule[-2.5ex]{0pt}{5ex} -\cos\omega\sin\Omega-\sin\omega\cos i\cos\Omega,\right.\nonumber\\ &&\qquad\cos\omega\cos\Omega-\sin\omega\cos i\sin\Omega,\nonumber\\ &&\qquad\qquad\left.q\,\cos i,q\,\frac{\cos\omega\sin\omega\sin^2i} {\sqrt{1-\sin^2i\sin^2\omega}},e,s\_0\right)\qquad. \label{param3b}\end{aligned}$$ It can be checked that this vector is invariant in the transformation (Ω, *ω*) → (Ω + *π*, *ω* + *π*). This will be our parameter vector for a second version of the MCMC code. The new Jacobian of the transformation from (ln*q*, *e*, sin*i*, Ω, *ω*, *t**p*) to $\vec{w\_3}$ reads now $$J\_3=q^2\sin^2i\sin^2\omega{\sqrt{1-\sin^2i\sin^2\omega}}\,\frac{{\mathrm{d}}s}{{\mathrm{d}}t} \qquad.$$ This new version succeeded in reaching convergence for PZ Tel B. Implementation -------------- The two versions of our code have been written in Fortran 90, with an additional OPEN-MP parallel treatment of the computed Markov chains. Our basic strategy is the same is in. We first perform a least-square Levenberg-Marquardt fit of the orbit. This takes only a few seconds to converge towards a local *χ*2 minimum. Of course, this fit is made starting from a rough orbit guess. The same procedure is re-initiated many times varying the starting orbit. This allows to probe the variety of local *χ*2 minima. In all cases described below, among various minima, a main one, or a series of very similar ones was reached. This main minimum was selected as a starting point for the MCMC chains. This procedure turns out to speed up the convergence of the chains. This starting point is marked as red bars and black stars in the resulting MCMC posterior plots (see below). We also tried to run the MCMC starting from a random guess instead, and checked that the same posterior distributions were reached, but slower. We also checked in the posterior *χ*2 distributions derived from the runs that in all cases, the starting point initially derived with Levenberg-Marquardt actually achieves the best *χ*2 in the distribution. Strictly speaking, Levenberg-Marquardt works well to quickly derive the best *χ*2 minimum. This shows that using other least-square fitting algorithms like downhill simplex for instance would not lead to a better result, as all these methods aim at finding a *χ*2 minimum, possibly the best one. However, MCMC runs reveal afterwards that with sparsely sampled orbits as we are dealing with here, the very best *χ*2 minimum does not always correspond to a probability peak in the posterior distributions. This intrinsic fact is independent from the method used to get the minimum. The implementation of the universal variable formalism described above requires an efficient algorithm to compute the Stumpff functions. The series ([ck]) defining them efficiently converge only for sufficiently small *x*. We use a reduction algorithm described in, that makes use of the following set of formulas $$\begin{aligned} c\_0(4x)=2\left[c\_0(x)\right]^2,& \, & c\_1(4x)=c\_0(x)c\_1(x),\label{qu1}\\ c\_2(4x)=\frac{1}{2}\left[c\_2(x)\right]^2,&\, & c\_3(4x)=\frac{1}{4}c\_2(x)+\frac{1}{4}c\_0(x)c\_3(x)\;.\label{qu2}\end{aligned}$$ Any input argument *x* is first reduced by successive factors of 4 until it satisfies ∣*x*∣ < 0.1. Then the series up to order 6 are used to get *c*2(*x*) and *c*3(*x*) only. To compute *c*0 and *c*1, the following relations are used *c*0(*x*) = 1 − *x**c*2(*x*),   *c*1(*x*) = 1 − *x**c*3(*x*) . Equations ([qu1]) and ([qu2]) are then applied recursively to derive the Stumpff functions for the original argument *x*. demonstrated the efficiency of this algorithm. In the fitting routine, universal Kepler’s equation ([kepu]) must be solved numerically using a root-finding algorithm. We do it with Newton’s quartic method or with Laguerre-Conway’s method. To compute the derivatives of the Stumpff function, we use the following relation $$\frac{{\mathrm{d}}c\_n(x)}{{\mathrm{d}}x}=\frac{1}{2x}\left(c\_{n\_1}(x)-nc\_n(x)\right)\qquad,$$ or equivalently, if we define *ϕ**n*(*α*, *s*) = *s**n**c**n*(*α**s*2), $$\frac{\partial\phi\_n(\alpha,s)}{\partial s}=\phi\_{n-1}(\alpha,s)\qquad.$$ For the special case *n* = 0, we have $$\frac{\partial\phi\_o(\alpha,s)}{\partial s}=-\alpha\phi\_1(\alpha,s)\qquad.$$ The same algorithm is implemented in the symplectic *N*-body integrator Swift for high eccentricity orbits. Its use turns out to be only a few times (3–4) more computing time consuming than that of a standard Keplerian formalism based on sine and cosine functions. But this is worth applying it in MCMC to the case of very high eccentricity and open orbits, as the use of the universal Kepler’s equation ([kepu]) eliminates the instabilities due to the permanent switch between the various formulas ([keps]). Thus Markov chains converge more efficiently. Results for Fomalhaut b ======================= Fomalhaut b is known to have a very eccentric orbit, with an eccentricity in any case $\ga 0.5$ and most probably around 0.8–0.9. Whether it is actually bound to the central star may be questionable, especially due to the very small coverage fraction of its orbit. If bound, its orbital period is a matter of hundreds of years if not more, while the four available astrometric points span over a period of 8 years only. Therefore, as noted by, what is measured is basically a projected position and a projected velocity onto the sky plane, so that the *z*-coordinates (i.e., along the line of sight) of the position an velocity are unknown. As a matter of fact, use with these data a simple sampling method drawing random *z*-coordinates for the position on velocity. They find an eccentricity distribution very similar to that derived in with MCMC. However, in both cases, the orbit was supposed to be bound. The eccentricity distribution of nevertheless extends up to *e* = 1 (that of stops at *e* ≃ 0.98), showing that unbound solutions could exist as well. This justifies the use of our new code to check this possibility. The code in its first version (see above) was used with the available astrometric data from and listed in. Following the prescriptions by, 10 chains were run in parallel until the Gelman-Rubin parameters *R̂* and *T̂* reach repeatedly convergence criteria for all parameters in Eq. ([param2]), i.e., *R̂* < 1.01 and *T̂* > 1000. We had already used the same procedure in and. In, this convergence criteria were reached after 6.2 × 107 steps. Here it took 4.25 × 108 steps with the universal variable code, running on the same data. This illustrates how the possibility for Markov chains to extend in the unbound orbit domain increases the complexity of the problem. We also had to fix an arbitrary upper limit *e*max = 4 for the eccentricity to ensure convergence. Setting larger *e*max values results in more steps needed to reach convergence, but the assumed *e*max = 4 upper limit as some physical justification. A large eccentricity means that Fomalhaut b is a passing by object that is currently encountering a flyby in the Fomalhaut system. The eccentricity of a flyby orbit cannot be arbitrarily large. For an hyperbolic orbit, the eccentricity is directly linked to the relative velocity at infinity *v*\infty by the energy balance equation $$\frac{1}{2}v\_\mathrm{\infty}^2=\frac{GM(e-1)}{q}\qquad. \label{vinfty}$$ An upper limit to *v*\infty can be given considering a typical dispersion velocity in the solar neighborhood, i.e.,  ∼ 20 km\,s− 1. Assuming *q* = 25 AU, i.e., the most probable value for hyperbolic orbits in our distribution (see Fig. [mcmcmapfomb]), this immediately translates into an estimate of an upper limit for the eccentricity *e*max ≃ 4. [mcmcmapfomb] The global statistics of the posterior orbital distribution obtained from the run is shown in Fig. [mcmcmapfomb], where distribution histograms for all individual elements (*q*, *e*, *i*, Ω, *ω*, *t**p*) as well as density maps for all combinations of them are represented. Special enlargements of the plots concerning *e* and Ω are shown in Fig. [fombmcmc]. In all histogram (mono-dimensional) plots, the red vertical bar superimposed to the plot shows the corresponding value for the best fit (lowest *χ*2) orbit obtained independently via a least-square Levenberg-Marquardt procedure. The same orbital solution is marked with black stars in all off-diagonal density plots combining two orbital parameters. As explained above, a least-square fit is initiated prior to launching MCMC. The resulting best-fit model is then used as a starting point for the Markov chains, and posterior *χ*2 distributions show that this solution actually achieves the minimum of the distribution. We first compare these plots to the corresponding ones in, where the fit was made over the same data set, but limited to bound orbits only. The first striking fact is that the eccentricity distribution extends now beyond *e* = 1 well into the unbound regime. This shows as suspected that unbound orbital solutions for Fomalhaut b do exist. The best fit solution is itself an unbound orbit with *e* ≃ 1.9. We nevertheless note a strong peak in the distribution at *e* ≃ 0.94 that appears exactly at the same place as in. This clearly shows that for such a weakly constrained problem, MCMC is definitely superior to least-square. In fact, the whole eccentricity distribution below $e\la 0.96$ exactly matches the corresponding one in. This shows up in Fig. [fombmcmc] where the eccentricity histogram was intentionally cut at *e* = 1.5 to permit a better comparison. This first validates the present run (as the previous one was done with another code), and second shows that the cutoff at *e* ≃ 0.98 that appeared in the previous distribution was not physical, but rather due to the intrinsic limitation of the code used. The tail of the distribution extends now in the unbound regime up to the *e*max = 4 limitation that was fixed in the run. The shape of this tail can be fitted as with a *e*− 3/2 power law. We also note (Fig. [mcmcmapfomb]) that the periastron distribution closely matches that of, while extending further out towards larger values. This is clearly due to the contribution of unbound orbits, as can be seen in the *q*–*e* probability map. From this we can derive an estimate of the probability for Fomalhaut b’s orbit to be bound, just counting the number of bound orbits in the whole set. We find *p*bound = 0.23. This probability actually depends on the assumed limitation *e*max = 4. If we had let the eccentricity to take larger values, the number of unbound orbits in the whole set would have been larger, and subsequently *p*bound would have been smaller. It is nevertheless possible to estimate the ultimate *p*bound value that would be derived if we put no upper limit on the eccentricity. Taking into account the fact that the tail of the posterior eccentricity distribution roughly falls off as *e*− 3/2, we can extrapolate the distribution up to infinity, integrate it and reintroduce the missing orbits corresponding to *e* > 4 into the distribution. Our posterior sample of orbits contains 106 solutions. Extrapolating the distribution we can estimate that  ∼ 2.1 × 105 corresponding to *e* > 4 are missing in our sample. This changes our probability estimate to *p*bound = 0.19, which can be considered as a minimum value. However, as the *e*max = 4 threshold results from a physical consideration (see above), the first derived *p*bound value can be regarded as robust. Note also that it is not very much above the minimum value. This shows that the contribution of very high eccentricity solutions is minor. This probability is however just derived from a pure mathematical analysis without any likelihood consideration. Flybys are rare but not necessarily improbable. Looking now at the distributions of the other orbital elements, we see in Fig. [fombmcmc] that the location of the orbit in (Ω, *i*) still closely matches that of the observed dust disk (the white star in the plot), like in. In other works, there is still a strong suspicion of near-coplanarity between Fomalhaut b and the dust disk. This clearly favours a bound configuration rather than a flyby that
arxiv_0000125
10+12/10-10/10+0.01) rectangle +(0.08, 0.08); (4/10+3/10, 1/10+12/10-11/10+0.01) rectangle +(0.08, 0.08); (1/10+3/10, 1/10+12/10-12/10+0.01) rectangle +(0.08, 0.08); [shift=(1.4,0)] (1/10+3/10, 1/10+12/10-1/10+0.01) rectangle +(0.08, 0.08); (1/10+3/10, 1/10+12/10-2/10+0.01) rectangle +(0.08, 0.08); (1/10+3/10, 1/10+12/10-3/10+0.01) rectangle +(0.08, 0.08); (1/10+3/10, 1/10+12/10-4/10+0.01) rectangle +(0.08, 0.08); (1/10+3/10, 1/10+12/10-5/10+0.01) rectangle +(0.08, 0.08); (1/10+3/10, 1/10+12/10-6/10+0.01) rectangle +(0.08, 0.08); (1/10+3/10, 1/10+12/10-7/10+0.01) rectangle +(0.08, 0.08); (1/10+3/10, 1/10+12/10-8/10+0.01) rectangle +(0.08, 0.08); (1/10+3/10, 1/10+12/10-9/10+0.01) rectangle +(0.08, 0.08); (1/10+3/10, 1/10+12/10-10/10+0.01) rectangle +(0.08, 0.08); (1/10+3/10, 1/10+12/10-11/10+0.01) rectangle +(0.08, 0.08); (1/10+3/10, 1/10+12/10-12/10+0.01) rectangle +(0.08, 0.08); [shift=(1.8,0)] (1/10+3/10, 1/10+12/10-1/10+0.01) rectangle +(0.08, 0.08); (1/10+3/10, 1/10+12/10-2/10+0.01) rectangle +(0.08, 0.08); (1/10+3/10, 1/10+12/10-3/10+0.01) rectangle +(0.08, 0.08); (1/10+3/10, 1/10+12/10-4/10+0.01) rectangle +(0.08, 0.08); (1/10+3/10, 1/10+12/10-5/10+0.01) rectangle +(0.08, 0.08); (1/10+3/10, 1/10+12/10-6/10+0.01) rectangle +(0.08, 0.08); (1/10+3/10, 1/10+12/10-7/10+0.01) rectangle +(0.08, 0.08); (1/10+3/10, 1/10+12/10-8/10+0.01) rectangle +(0.08, 0.08); (1/10+3/10, 1/10+12/10-9/10+0.01) rectangle +(0.08, 0.08); (1/10+3/10, 1/10+12/10-10/10+0.01) rectangle +(0.08, 0.08); (1/10+3/10, 1/10+12/10-11/10+0.01) rectangle +(0.08, 0.08); (1/10+3/10, 1/10+12/10-12/10+0.01) rectangle +(0.08, 0.08); (a) at (1.0, 0.05) *A*; (b)  \* ; (c) *v*; (d)  → ; (e) *w*; (0, 0.70) – (2.5,.70); at (0.0, 1.05) node 0; at (0.0,.35) node 1; (0.2, 1.0) – (2.4, 1.0); (0.2, 0.4) – (2.4, 0.4); at (0.25, 1.25) p0; at (0.25, 0.95) p1; at (0.25, 0.65) p2; at (0.25, 0.35) p3; [fig:spmvnodeaware] A common approach to a SpMV is to compute the local portion of the SpMV with the on-process block of *A* while messages are exchanged to attain the off-process portions of *v* necessary for the local update of *w*. While this allows for the overlap of some communication and computation, it requires the exchange of many point-to-point messages, which still creates a large communication overhead (see ). The inefficiency of this standard approach is attributed to two redundancies shown in . First, many messages are injected into the network from each node. Some nodes are sending multiple messages to a single destination process on a separate node, creating a redundancy of messages. Secondly, processes send the necessary values from their local portion of *v* to any other process requiring that information for its local computation of *w*. However, the same information may already be available and received by a separate process on the same node, creating a redundancy of data being sent through the network. [fig:spmvbreakdown] [x=80pt,y=80pt] [shift=(3pt, 15pt)] =[>=latex,shorten >=5pt,shorten <=5pt,->,draw,thin,color=black] (p0) to (p6); (p1) to (p6); ([yshift=-10pt]p2.south) to (p6); (p3) to (p6); (p4) to (p6); (p5) to (p6); (p7) to (p6); at (85pt, -12.0pt) Sending multiple messages to Node 1; [shift=(200pt, 15pt)] =[>=latex,shorten >=5pt,shorten <=0pt,->,draw,thin,color=black] in -4,0,4 ([xshift=pt,yshift=5pt]p1) rectangle +(0.04, 0.04); (myp) at ([xshift=9pt,yshift=6pt] p1); (myp) to[bend left] (p4); (myp) to[bend left] (p5); (myp) to[looseness=.7, bend right=45] (p6); (myp) to[bend right=75,in=90,out=-45] (p7); at (85pt, -12.0pt) Sending duplicate data to Node 1; [fig:standardcomm] Node-aware communication techniques  mitigate these issues by considering node topology to further break down the off-process block into vector values of *v* that require on- or off-node communication; this decomposition is shown in . As a result, costly redundant messages are traded for faster, on-node communication, resulting in 2 different multi-step schemes, namely 3-step and 2-step. ### 3-step Node-Aware Communication 3-step node-aware communication eliminates both redundancies in standard communication by gathering all necessary data to be sent off-node in a single buffer. Efficient implementation of this method relies on pairing all processes with a receiving process on distinct nodes – ensuring that every process remains active throughout the entire communication scheme. First, all data to be sent to a separate node are gathered in a buffer by the single process paired with that node. Secondly, this process sends the data buffer to the paired process on the receiving node. Thirdly, the paired process on the receiving node redistributes the data locally to the correct destination processes on-node. An example of these steps is outlined in . [x=80pt,y=80pt] [shift=(3pt, 5pt)] [xshift=20pt,yshift=200pt] at (-10pt,40pt) Step 1; [xshift=100pt,yshift=100pt] at (-10pt,40pt) Step 2; [xshift=180pt,yshift=0pt] at (-10pt,40pt) Step 3; [fig:3step] ### 2-Step Node-Aware Communication 2-step node-aware communication eliminates the redundancy of sending duplicate data from standard communication and decreases the number of inter-node messages, but not to the same degree as 3-step communication. In this case, *each* process exchanges the information needed by the receiving node with their paired process directly. Then the receiving node redistributes the messages on-node as shown in . While multiple messages are sent to the same node, the duplicate data being sent through the network is eliminated. Hence, the number of bytes communicated with 3-step and 2-step node-aware schemes is the same, and often yields a significant reduction over the amount of data being sent through the network with standard communication. [x=80pt,y=80pt] [shift=(3pt, 25pt)] [xshift=0pt,yshift=0pt] at (90pt,-15pt) Step 1; [xshift=200pt,yshift=0pt] at (90pt,-15pt) Step 2; [fig:2step] ### Node-Aware Communication Models lll parameter & description & first use *p* & number of processes & §[sec:nodeawarecomm] `nnz`  & number of nonzeros in *A* & §[sec:performance] *α* & network latency & ([eq:maxrate]) *s* & maximum number of bytes sent by a process & ([eq:maxrate]) *m* & maximum number of messages sent by a process & ([eq:maxrate]) `ppn`  & processes per node & ([eq:maxrate]) *R**N* & network injection rate (B/s) & ([eq:maxrate]) *R**b* & network rate (B/s) & ([eq:maxrate]) *s*proc & maximum number of bytes sent by a process & ([eq:model2]) *s*node & maximum number of bytes injected by a node & ([eq:model2]) *m*proc$\rightarrow$node & maximum number of nodes to which a processor sends & ([eq:model2]) *m*node$\rightarrow$node & maximum number of messages between two nodes & ([eq:model3]) *s*node$\rightarrow$node & maximum size of a message between two nodes & ([eq:model3]) [tab:parameters] The *max-rate* model  is used to quantify the efficiency of node-aware communication throughout the remainder of . For clarity, all modeling parameters referenced throughout the remainder of the paper are defined in . The max-rate model is an improvement to the standard postal model of communication, accounting for injection limits into the network. The cost of sending messages from a symmetric multiprocessing (SMP) node is modeled as $$\label{eq:max\_rate} T = \alpha \cdot m + \max\left( \frac{{\texttt{ppn}}\cdot s}{R\_N}, \frac{s}{R\_b} \right)$$ where *α* is the latency, *m* is the number of messages sent by a single process on a given node, *s* is the number of bytes sent by a single process on a given SMP node, `ppn` is the number of processes per node, *R**N* is the rate at which a network interface card (NIC) can inject data into the network, and *R**b* is the rate at which a process can transport data. In the case of on-node messages, the injection rate is not present and the max-rate model reduces to the standard postal model for communication $$\label{eq:postal\_model} T = \alpha\_{\ell} \cdot m + \frac{s}{R\_{b,\ell}},$$ where *α*ℓ is the *local* or on-node latency and *R**b*, ℓ is the rate of sending a message on-node. In , the max-rate model is extended to 2-step and 3-step communication by splitting the model into inter-node and intra-node components. For 3-step, the communication model becomes $$\label{eq:model3} T\_{\text{total}} = \underbrace{ \alpha \cdot \frac{{m\_{\text{node$\rightarrow$node}}}}{{\texttt{ppn}}} + \max\left(\frac{{s\_{\text{node}}}}{R\_N}, \frac{{s\_{\text{proc}}}}{R\_b}\right) }\_{\text{inter-node}} + \underbrace{ 2\cdot\left( \alpha\_{\ell} \cdot ({\texttt{ppn}}- 1) + \frac{{s\_{\text{node$\rightarrow$node}}}}{R\_{b,\ell}} \right) }\_{\text{intra-node}}$$ where *m*node$\rightarrow$node is the maximum number of messages communicated between any two nodes and *s*node$\rightarrow$node is the size of messages communicated between any two nodes. For 2-step, this results in $$\label{eq:model2} T\_{\text{total}} = \underbrace{ \alpha \cdot {m\_{\text{proc$\rightarrow$node}}}+ \max\left(\frac{{s\_{\text{node}}}}{R\_N}, \frac{{s\_{\text{proc}}}}{R\_b}\right) }\_{\text{inter-node}} + \underbrace{ \alpha\_{\ell} \cdot ({\texttt{ppn}}- 1) + \frac{{s\_{\text{proc}}}}{R\_{b,\ell}} }\_{\text{intra-node}}$$ where *s*node and *s*proc represent the maximum number of bytes injected by a single NIC and communicated by a single process from an SMP node, respectively, and *m*proc$\rightarrow$node is the maximum number of nodes with which any process communicates. The latency to communicate between nodes, *α*, is often much higher than the intra-node latency, *α*ℓ, thus motivating a multi-step communication approach. In a 2-step method, having every process on-node communicate data minimizes the constant factor $\max\left(\frac{{s\_{\text{node}}}}{R\_N}, \frac{{s\_{\text{proc}}}}{R\_b}\right)$, which depends on the maximum amount of data being communicated to a separate node by a single process. In practice, a 3-step method often yields the best performance for a parallel SpMV since the amount of data being communicated by a single process is often small. As a result moving the data to be communicated off-node into a single buffer minimizes the first term in . These multi-step communication techniques minimize the amount of time spent in inter-node communication. We extend this idea to the block vector operation in . Performance Study of Enlarged Conjugate Gradient ================================================ In this section we detail the per-iteration performance and performance modeling of ECG. A communication efficient version of  is implemented in Raptor  and is based on the work in . Throughout this section and the remainder of the paper, we assume an *n* × *n* matrix *A* with `nnz` nonzeros is partitioned row-wise across a set of *p* processes. Each process contains at most $\frac{n}{p}$ contiguous rows of the matrix *A*. In the modeling that follows, we assume a equal number of nonzeros per partition. In addition, each block vector in  — *R*, *X*, *Z*, *A**Z*, *P*, and *A**P* — is partitioned row-wise, and with the same row distribution as *A*. The variables *c*, *d*, and *d*\tiny old are size *t* × *t* and a copy of each is stored locally on each process. Tests were performed on the Blue Waters Cray XE/XK machine at the National Center for Supercomputing Applications (NCSA) at University of Illinois. Blue Waters  contains a 3D torus Gemini interconnect; each Gemini consists of two nodes. The complete system contains 22 636 XE compute nodes; each with two AMD 6276 Interlagos processors, and additional XK compute nodes unused in these tests. Implementation -------------- The scalability of a direct implementation of  is limited , however, this is improved by fusing communication and by executing the system solve in  in  on each process. This is accomplished in  by decomposing the computation of *P* into several steps as described in . [!ht] *A**Z* ← *A* \* *Z* *Z**T**A**Z* ← *Z**T* \* *A**Z* *C**T**C* ← *Z**T**A**Z*[pkchol] *P* ← solve *P* \* *C* = *Z*[pkline] [calcpk] The *t* × *t* product *Z**T*(*A**Z*) is stored locally on every process in the storage space of *c*, as shown in . The Cholesky factorization on  of  is performed simultaneously on every process, yielding a (local) copy of *C*. Then each process performs a local triangular system solve using the local vector values of *Z* to construct the portion of *P* (see ). Similarly, an additional sparse matrix block vector product *A**P* = *A* \* *P* is avoided by noting that *A**P* is constructed using *A**P* ← `Triangular Solve with Multiple Right Sides of` *A**P* \* *C* = *A**Z* since the product *A**Z* and the previous iteration’s *A**P* = *A* \* *P* are already stored. [x=100pt,y=100pt] [shift=(10pt, 15pt)] (0, 0.70) – (3.8,.70); (0.2, 1.0) – (3.7, 1.0); (0.2, 0.4) – (3.7, 0.4); at (0.0, 1.05) node 0; at (0.0,.35) node 1; at (0.25, 1.25) p0; at (0.25, 0.95) p1; at (0.25, 0.65) p2; at (0.25, 0.35) p3; in 105,15,45,75 at (3.0, -0.05) *c*; at (3.3, -0.05) *d*; at (3.6, -0.065) $d\_{\textnormal{\tiny old}}$; at (0.6, -0.05) *R*; at (1.1, -0.05) *X*; at (1.6, -0.05) *Z*; at (2.15, -0.05) …; at (2.65, -0.05) *A**P*; [fig:vectorpartitioning] summarizes our implementation in terms of computational kernels, with the on process computation in terms of floating point operations along with the associated type of communication. The remainder of the calculations within ECG consist of local block vector updates, as well as block vector inner products for the values *c*, *d*, and *d*\tiny old. A straightforward approach is to compute these independently within the algorithm, resulting in four `MPI_Allreduce` global communications per iteration. However, since the input data required to calculate *c*, *d*, and *d*\tiny old are available on  in  when *c* is computed, a single global reduction is possible. The implementation described in  highlights a single call to `MPI_Allreduce` for all of these values and reducing them in the same buffer. This reduces the number of `MPI_Allreduce` calls to two per iteration. [!ht] SpMV Vector Initialization [ecgkernels] From , we note that computation and communication per iteration costs of ECG have increased over that of parallel CG. For our implementation, the number of collective communication calls to `MPI_Allreduce` has remained the same as CG (at two), but the number of values in the global reductions has increased from a single float in each of CG’s global reductions to *t*2 and 3*t*2. The singular SpMV from CG has increased to a sparse matrix block vector product (SpMBV), which does not increase the number of point-to-point messages, but does increase the size of the messages being communicated by the enlarging factor *t*. Additionally, the local computation for *each kernel* has increased by a factor of *t*. ECG uses these extra per iteration requirements to reduce the total number of iterations to convergence, resulting in fewer iterations than CG as seen in . [fig:mfemconvergence] Per Iteration Performance ------------------------- In  we decompose the performance of a single iteration of ECG for  into (local) computation, point-to-point communication, and collective communication. Performance tests were executed on Blue Waters . Each test is the average of 20 iterations of ECG; reported times are the maximum average time recorded for any single process. At small scales, local computation dominates performance, while at larger scales, the point-to-point communication in the single SpMBV kernel and the collective communication in the block vector inner products become the bottleneck in ECG. also shows the time spent in a single inner product. While we observe growth with the number of processes, as expected, the relative cost (and growth) within ECG remains low. Importantly, increasing *t* at high processor counts does not significantly contribute to cost. This is shown in , where the mean runtime for various block vector inner products all fall within each other’s confidence intervals. This suggests that increasing *t* to drive down the iteration count will have little affect on the per iteration cost of the two calls to `MPI_Allreduce`, and in fact, will result in fewer total calls to it due to the reduction in iterations. [fig:ecgprofiling] [fig:bvinnerprofiling] The remainder of this section focuses on accurately predicting the performance of a single iteration of ECG through robust performance models. In particularly, SpMBV communication is addressed in detail in  where we discuss new node-aware communication techniques for blocked data. Performance Modeling -------------------- To better understand the timing profiles in , we develop performance models. Below, we present two different models for the performance of communication within a single iteration of ECG. First, consider the standard postal model for communication, which represents the maximum amount of time required for communication by an individual process in a single iteration of ECG as $$\label{eq:ecg\_comm\_postal} T\_{\text{postal}} = \underbrace{ \alpha \cdot m + \frac{s \cdot t}{R\_b} }\_\text{point to point} + \underbrace{ 2\cdot\alpha \cdot \log(p) + \frac{f\cdot4\cdot t^2}{R\_b} }\_\text{collective}$$ where *f* is the number of bytes for a floating point number — e.g. *f* = 8. See  for a complete description of model parameters. As discussed in , this model presents a misleading picture on the performance of ECG at scale, particularly on current supercomputer architectures where SMP nodes encounter injection bandwidth limits when sending inter-node messages. [fig:maxrateproof] To improve the model we drop in the max-rate model for the point-to-point communication, resulting in $$\label{eq:ecg\_comm\_mr} T\_{MR} = \underbrace{ \alpha \cdot m + \max \left( \frac{{\texttt{ppn}}\cdot s\cdot t}{R\_N}, \frac{s\cdot t}{R\_b} \right) }\_\text{point to point} + \underbrace{ 2\cdot \alpha \cdot \log(p) + \frac{f\cdot4\cdot t^2}{R\_b} }\_\text{collective}.$$ shows that the max-rate model provides a more accurate upper bound on the time spent in point to point communication within ECG. The term $2\cdot \alpha \cdot \log(p) + \frac{f\cdot4\cdot t^2}{R\_b}$ remains unchanged in to represent the collective communication required for the two block vector inner products. Each block vector inner product incurs latency from requiring log(*p*) messages in an optimal implementation of the `MPI_Allreduce`. More accurate models for modeling the communication of the `MPI_Allreduce` in the inner product exist, such as the logP model  and logGP model , but optimization of the reduction is outside the scope of this paper, so we leave the postal model for representing the performance of the inner products. Modeling the computation within an iteration of ECG is straightforward. The computation for a single iteration of ECG is written as the sum of the kernel floating point operations in , which results in the following $$\label{eq:comp} T\_{comp} = \gamma \cdot\left((2 + 2 t) \frac{{\texttt{nnz}}}{p} + (4t + 4t^2) \frac{n}{p} + \frac{1}{2}t^2 + \frac{1}{6}t^3\right)$$ where *γ* is the time required to compute a single floating point operation. In total, we arrive at the following model for a single iteration of ECG $$\label{eq:ecg\_iter} T\_{ECG} = \alpha \cdot m + \max \left( \frac{{\texttt{ppn}}\cdot s\cdot t}{R\_N}, \frac{s\cdot t}{R\_b} \right) + 2\cdot\alpha \cdot \log(p) + \frac{f\cdot4\cdot t^2}{R\_b} + \gamma \cdot\left((2 + 2 t) \frac{{\texttt{nnz}}}{p} + (4t + 4t^2) \frac{n}{p} + \frac{1}{2}t^2 + \frac{1}{6}t^3\right).$$ Using this model, we can predict the reduction in the amount of time spent in point-to-point communication using the multi-step communication techniques presented in for a single iteration of ECG for  — see . c|cc|cc|cc|cc|cc|cc & & & & & & & & & & & & & & & & & & 5 & & 55.6 & & 70.0 & & 70.2 & & 75.5 & & 78.9 & & 76.6 10 & & 40.0 & & 56.2 & & 65.2 & 65.9 & 67.5 & & 69.1 & & 68.7 15 & & 37.6 & & 40.4 & & 66.7 & & 67.0 & & 58.5 & 60.5 & 63.8 20 & & 24.5 & & 38.3 & & 62.3 & & 47.6 & & 45.5 & & 53.6 [tab:ecgpercents2step] c|cc|cc|cc|cc|cc|cc & & & & & & & & & & & & & & & & & & 5 & & 55.6 & & 70.0 & & 70.2 & & 75.5 & & 78.9 & & 76.6 10 & & 40.0 & & 56.2 & & 65.2 & & 67.5 & 65.6 & 69.1 & 68.5 & 68.7 15 & & 37.6 & & 40.4 & & 66.7 & & 67.0 & & 58.5 & & 63.8 20 & & 24.5 & & 38.3 & & 62.3 & & 47.6 & & 45.5 & 48.8 & 53.6 [tab:ecgpercents3step] [tab:ecgmodelpercents] We see that ECG is still limited at large processor counts even when substituting the node-aware communication techniques to replace the costly point-to-point communication observed in . The models do predict a large amount of speedup for most cases, however, when using 3-step communication, suggesting that node-aware communication techniques can reduce the large point-to-point bottleneck observed in the performance study. While a large communication cost stems equally from the collective communication the `MPI_Allreduce` operations, their performance is dependent upon underlying `MPI` implementation and outside the scope of this paper. We address the point-to-point communication performance further in  by analyzing it through the lens of node-aware communication techniques, optimizing them to achieve the best possible performance at scale. Optimized Communication for Blocked Data ======================================== As discussed in , scalability for ECG is limited by the sparse matrix-block vector multiplication (SpMBV) kernel defined as *A* ⋅ *V* → *W*,  with *A* ∈ R*n* × *n* and *V*, *V* ∈ R*n* × *t*, where 1 < *t* ≪ *n*. Due to the block vector structure of *X*, each message in a SpMV is increased by a factor of *t* (see ). The larger messages associated with *t* > 1 increase the amount of time spent in the point-to-point communication at larger scales, making the SpMBV operation an ideal candidate for node-aware messaging approaches. [x=100pt,y=100pt] [shift=(5pt, 15pt)] in 1,..., 12 in 1,...,12 (, /10+0.01) rectangle +(0.08, 0.08); (1/10+3/10, 1/10+12/10-1/10+0.01) rectangle +(0.08, 0.08); (2/10+3/10, 1/10+12/10-2/10+0.01) rectangle +(0.08, 0.08); (3/10+3/10, 1/10+12/10-3/10+0.01) rectangle +(0.08, 0.08); (3/10+3/10, 1/10+12/10-1/10+0.01) rectangle +(0.08, 0.08); (2/10+3/10, 1/10+12/10-3/10+0.01) rectangle +(0.08, 0.08); (4/10+3/10, 1/10+12/10-4/10+0.01) rectangle +(0.08, 0.08); (5/10+3/10, 1/10+12/10-5/10+0.01) rectangle +(0.08, 0.08); (6/10+3/10, 1/10+12/10-6/10+0.01) rectangle +(0.08, 0.08); (5/10+3/10, 1/10+12/10-4/10+0.01) rectangle +(0.08, 0.08); (4/10+3/10, 1/10+12/10-5/10+0.01) rectangle +(0.08, 0.08); (6/10+3/10, 1/10+12/10-5/10+0.01) rectangle +(0.08, 0.08); (7/10+3/10, 1/10+12/10-7/10+0.01) rectangle +(0.08, 0.08); (8/10+3/10, 1/10+12/10-8/10+0.01) rectangle +(0.08, 0.08); (9/10+3/10, 1/10+12/10-9/10+0.01) rectangle +(0.08, 0.08); (7/10+3/10, 1/10+12/10-9/10+0.01) rectangle +(0.08, 0.08); (10/10+3/10, 1/10+12/10-10/10+0.01) rectangle +(0.08, 0.08); (11/10+3/10, 1/10+12/10-11/10+0.01) rectangle +(0.08, 0.08); (12/10+3/10, 1/10+12/10-12/10+0.01) rectangle +(0.08, 0.08); (11/10+3/10, 1/10+12/10-10/10+0.01) rectangle +(0.08, 0.08); (10/10+3/10, 1/10+12/10-12/10+0.01) rectangle +(0.08, 0.08); (6/10+3/10, 1/10+12/10-1/10+0.01) rectangle +(0.08, 0.08); (4/10+3/10, 1/10+12/10-2/10+0.01) rectangle +(0.08, 0.08); (5/10+3/10, 1/10+12/10-2/10+0.01) rectangle +(0.08, 0.08); (5/10+3/10, 1/10+12/10-3/10+0.01) rectangle +(0.08, 0.08); (1/10+3/10, 1/10+12/10-4/10+0.01) rectangle +(0.08, 0.08); (2/10+3/10, 1/10+12/10-5/10+0.01) rectangle +(0.08, 0.08); (1/10+3/10, 1/10+12/10-6/10+0.01) rectangle +(0.08, 0.08); (11/10+3/10, 1/10+12/10-7/10+0.01) rectangle +(0.08, 0.08); (12/10+3/10, 1/10+12/10-8/10+0.01) rectangle +(0.08, 0.08); (10/10+3/10, 1/10+12/10-9/10+0.01) rectangle +(0.08, 0.08); (9/10+3/10, 1/10+12/10-11/10+0.01) rectangle +(0.08, 0.08); (8/10+3/10, 1/10+12/10-12/10+0.01) rectangle +(0.08, 0.08); (10/10+3/10, 1/10+12/10-1/10+0.01) rectangle +(0.08, 0.08); (8/10+3/10, 1/10+12/10-2/10+0.01) rectangle +(0.08, 0.08); (9/10+3/10, 1/10+12/10-2/10+0.01) rectangle +(0.08, 0.08); (12/10+3/10, 1/10+12/10-2/10+0.01) rectangle +(0.08, 0.08); (7/10+3/10, 1/10+12/10-3/10+0.01) rectangle +(0.08, 0.08); (10/10+3/10, 1/10+12/10-3/10+0.01) rectangle +(0.08, 0.08); (9/10+3/10, 1/10+12/10-4/10+0.01) rectangle +(0.08, 0.08); (11/10+3/10, 1/10+12/10-5/10+0.01) rectangle +(0.08, 0.08); (9/10+3/10, 1/10+12/10-6/10+0.01) rectangle +(0.08, 0.08); (2/10+3/10, 1/10+12/10-7/10+0.01) rectangle +(0.08, 0.08); (4/10+3/10, 1/10+12/10-8/10+0.01) rectangle +(0.08, 0.08); (3/10+3/10, 1/10+12/10-9/10+0.01) rectangle +(0.08, 0.08); (5/10+3/10, 1/10+12/10-10/10+0.01) rectangle +(0.08, 0.08); (4/10+3/10, 1/10+12/10-11/10+0.01) rectangle +(0.08, 0.08); (1/10+3/10, 1/10+12/10-12/10+0.01) rectangle +(0.08, 0.08); (a) at (1.0, 0.05) *A*; (b)  \* ; (c) *V*; (d)  → ; (e) *W*; (0, 0.70) – (2.5,.70); at (0.0, 1.05) node 0; at (0.0,.35) node 1; (0.2, 1.0) – (2.4, 1.0); (0.2, 0.4) – (2.4, 0.4); at (0.25, 1.25) p0; at (0.25, 0.95) p1; at (0.25, 0.65) p2; at (0.25, 0.35) p3; [fig:nodeawarespmbv] Performance Modeling -------------------- Recalling the node-aware communication models from , we augment the 2-step and 3-step models with block vector size, *t*. As a result, the 2-step model in  becomes $$\label{eq:model2\_bv} T\_{total} = \underbrace{ \alpha \cdot {m\_{\text{proc$\rightarrow$node}}}+ \max\left( \frac{t\cdot{s\_{\text{node}}}}{R\_N}, \frac{t\cdot{s\_{\text{proc}}}}{R\_b} \right) }\_{\text{inter-node}} + \underbrace{ \alpha\_{\ell} \cdot ({\texttt{ppn}}- 1) + t \cdot \frac{{s\_{\text{proc}}}}{R\_{b,\ell}} }\_{\text{intra-node}}$$ while the 3-step model in  becomes $$\label{eq:model3\_bv} T\_{total} = \underbrace{ \alpha \cdot \frac{{m\_{\text{node$\rightarrow$node}}}}{{\texttt{ppn}}} + \max\left( \frac{t\cdot{s\_{\text{node}}}}{R\_N}, \frac{t\cdot{s\_{\text{proc}}}}{R\_b} \right) }\_{\text{inter-node}} + \underbrace{ 2\cdot\left( \alpha\_{\ell} \cdot ({\texttt{ppn}}- 1) + t \cdot \frac{{s\_{\text{node$\rightarrow$node}}}}{R\_{b,\ell}} \right) }\_{\text{intra-node}}.$$ In both models, *t* impacts maximum rate, a term that is relatively small for *t* = 1 and dominates the inter-node portion of the communication as *t* grows. Since messages are increased by a factor of *t*, the single buffer used in 3-step communication quickly reaches the network injection bandwidth limits. Using multiple buffers, as in 2-step communication, helps mitigate the issue, however more severe imbalance persists since the amount of data sent to different nodes is often widely varying. shows every inter-node message sent by a single process alongside the message size for  when performing the SpMBV kernel with 4096 processes and 16 processes per node. We present the message sizes for 3-step and 2-step communication, noting that the overall number of inter-node messages decreases when using 3-step communication, but the average message size sent by a single process increases. [fig:msgsizesp4096] For *t* = 20, the maximum message size nears 106 for 3-step communication, while the maximum message size barely reaches 105 for 2-step communication. Additionally, there is clear imbalance in the inter-node message sizes for 2-step communication with messages ranging in size from 103–105 Bytes. Profiling --------- We next apply the node-aware communication strategies presented for SpMVs and SpGEMMs in  to the SpMBV kernel within ECG. displays the performance of standard, 2-step, and 3-step communication when applied to the SpMBV kernel for with *t* = 5 and *t* = 20. The 2-step communication appears outperform in most cases. This is due to the large amount of data to be sent off-node that is split across many processes. We see 2-step communication performing better due to the term *α* ⋅ *m*proc$\rightarrow$node in  being smaller than the *α* ⋅ *m*node$\rightarrow$node term in the 3-step communication model  due to multiplication by the factor *t*. In fact, all counts measured in the ECG profiling section are now multiplied by the enlarging factor *t*. While the traditional SpMV shows speedup with 3-step communication, we now see that 2-step is generally the best fit for our methods as message size, and thus *t*, increases. [fig:mfemspmbvcomparison] Next we consider node-aware communication performance results for a subset of the largest matrices in the SuiteSparse matrix collection  (matrix details can be found in ). These were selected based on size and density to provide a variety of scenarios. lcccc matrix & rows/ cols & nnz & nnz/row & density audikw\_1 & & & 82.3 & 8.72e-5 Geo\_1438 & & & 41.9 & 2.91e-05 bone010 & & & 48.5 & 4.92e-05 Emilia\_923 & & & 43.7 & 4.74e-05 Flan\_1565 & & & 72.9 & 4.66e-05 Hook\_1498 & & & 39.6 & 2.65e-05 ldoor & & & 44.6 & 4.69e-05 Serena & & & 46.1 & 3.31e-05 thermal2 & & & 7.0 & 5.69e-06 [tab:matrices] While 2-step communication is effective in many instances, it is not always the most optimal communication strategy, as depicted in. Unlike the results for , 3-step and 2-step communication do not always outperform standard communication, and in some instances (4096 processes), for most values of *t* there is performance degradation. This is seen more clearly in. [fig:bwspmbvcomparison] [fig:bwspmbvcomparisonzoom] It is important to highlight cases where only a single node-aware communication technique results in performance deterioration over standard communication. Distinct examples include `Geo_1438` and `thermal2` on 4096 processes. Both of these matrices benefit from 2-step communication for *t* = 10 and 20, but performance degrades when using 3-step communication. Another example is the performance of `ldoor` on 8192 processes (seen in ). For *t* = 5 and 10, `ldoor` results in performance degradation using 2-step communication, but up to 5 ×  speedup over standard communication when using 3-step communication. These cases highlight that while one node-aware communication technique underperforms in comparison to standard communication, the other node aware technique is still much faster. Using this as the key motivating factor, we discuss an optimal node-aware communication technique for blocked data in . Optimal Communication for Blocked Data -------------------------------------- When designing an optimal communication scheme for the blocked data format, the main consideration is the impact the number of vectors within the block have on the size of messages communicated. The effects message size and message number have on performance can vary based on machine, hence we present results for Blue Waters alongside Lassen . Lassen, a 23-petaflop IBM system at Lawrence Livermore National Laboratory, consists of 792 nodes connected via a Mellanox 100 Gb/s Enhanced Data Rate (EDR) InfiniBand network. Each node on Lassen is dual-socket with 44 IBM Power9 cores and 4 NVIDIA Volta GPUs (which are unused in our tests.) In , we view the effects placement of data and amount of data being communicated has on performance times for two different machines. This figure shows the amount of time required to send various numbers of bytes between two processes when they are located on the same node but different sockets (blue) and on the same node and same socket (red) for the machines Blue Waters (left) and Lassen (right). It also shows the amount of time required to communicate between two processes on separate nodes when there are less than 4 (Blue Waters) or 5 (Lassen) active processes communicating through the network at the same time or greater than 4 or 5 processes communicating through the network at the same time. We see that as the number of bytes communicated between two processes increases, it becomes increasingly important whether those two processes are located on the same socket, node, or require communication through the network. [fig:commsplit] For both machines, inter-node communication is fastest when message sizes are small, and there are few messages being injected into the network. On Blue Waters, intra-node communication is the fastest, with the time being dependent on how physically close the processes are located. For instance, when two processes are on the same socket, communication is faster than when they are on the same node, but different sockets. This is true for Lassen, as well, but cross-socket intra-node communication is not always faster than communicating through the network. Once message sizes exceed 104 bytes, and there are fewer than five processes actively communicating, inter-node communication is faster than two cross-socket intra-node processes communicating. For both machines, however, we see that once a large enough communication volume is reached, it becomes faster to split the inter-node data being sent across a subset of the processes on a single node due to network contention as observed in . In addition to the importance of the placement of two communicating processes, the total message volume and number of actively communicating processes plays a key role in communication performance. While it is extremely costly for every process on a node to send 105 bytes as seen in, there are performance benefits when splitting a large communication volume across all processes on a node, depicted in. Blue Waters sees modest performance benefits when splitting large messages across multiple processes, whereas Lassen sees much greater performance benefits. [fig:nodepong] These observations help justify why 3-step communication would outperform 2-step communication, and vice versa in certain cases of the SpMBV profiling presented in . Sending all messages in a single buffer becomes impractical when the block size, *t*, is very large, but having each process communicate with a paired process also poses problems when some of the inter-node messages being sent are still very large, as seen in. Motivated by the results above, we introduce an optimized multi-step communication process that combines the aspects of both the 3-step and 2-step communication techniques, and using 3-step or 2-step communication when necessary. We reduce the number of inter-node messages and conglomerate messages to be sent off node for certain cases when the message sizes to be sent off-node are below a given threshold, and we split the messages to be sent off-node across multiple processes when the message size is larger than a given threshold. Hence, each node is determining the most optimal way to perform its inter-node communication. As a result, this nodal optimal communication eliminates the redundancies of data being injected into the network, just as 3-step and 2-step do, but in some cases, it does not reduce the number of inter-node messages as much as 3-step, and in fact can increase the number of inter-node messages injected by a single node to be larger than those injected by 2-step, but never exceeding the total number of active processes per node. The number of inter-node messages sent can be represented by the following inequality, *m*node$\rightarrow$node ≤ *n**o**p**t* ≤ max(*m*proc$\rightarrow$node, `ppn`) where *n**o**p**t* is the number of inter-node messages injected by a single process for the optimal node-aware communication, and it is bounded below by the worst-case number of messages communicated in 3-step communication and above by the worst-case number of messages communicated in 2-step communication or `ppn`, whichever is greater. The proposed process, excluding reducing the global communication strategy to 3-step or 2-step communication, is summarized in . This figure outlines the nodal optimal process of communicating data between two nodes, each with four local processes. Step 1: Each node conglomerates small messages to be sent off-node while retaining larger messages. Messages are assigned to processes in descending order of size to the first available process on node. This is done for every node simultaneously. Step 2: Buffers prepared in step 1 are sent to their destination node, specifically to the paired process on that node with the same local rank. P0 exchanges data with P4, while P1 sends data to P5, and P2 sends data to P6. Step 3: All processes on each individual node redistribute their received data to the correct destination processes on-node. In this step, all communication is local. [x=80pt,y=80pt] [shift=(5pt, 5pt)] [xshift=20pt,yshift=230pt] (-0.1,-0.1) rectangle +(2.35,2.5); (p1) to (p0); (p3) to (p1); [xshift=0pt,yshift=100pt] (p5) to (p4); (p7) to (p4); at (-14pt,95pt) Step 1; [xshift=60pt,yshift=120pt] at (-14pt,40pt) Step 2; ([yshift=8pt]p0) to[looseness=0.7,out=60, in=120] ([yshift=8pt]p4); ([yshift=-8pt]p2) to[looseness=0.7,out=-60, in=-120] ([yshift=-8pt]p6); ([yshift=-8pt]p1) to[looseness=0.9,out=-60, in=-120] ([yshift=-8pt]p5); [xshift=120pt,yshift=0pt] at (-14pt,40pt) Step 3; (p0) – (p1); (p1) – (p3); (p3) – (p2); (p2) – (p0); (p0) – (p3); (p2) – (p1); (p4) – (p5); (p5) – (p7); (p7) – (p6); (p6) – (p4); (p4) – (p7); (p5) – (p6); [fig:optstep] The resulting speedups for the two systems Blue Waters and Lassen are presented in. Here, the message size cutoff being used is the message size cutoff before the MPI implementation switches to the rendezvous protocol. For sending large messages between processes, the rendezvous protocol communicates an envelope first, then the remaining data is communicated after the receiving process allocates buffer space. It is necessary to use this protocol for large messages, but there is a slight slowdown over sending messages via the short and eager protocols which either include the data being sent as part of the envelope (short) or eagerly send the data if it does not fit into the envelope (eager). This cutoff is chosen because rendezvous is the slowest protocol and because the switch to the rendezvous protocol is approximately the crossover point when on-node messages become slower than network messages on Lassen (around 104 Bytes in.) [fig:bwlassenspmbvopt] Using this cutoff point, the method sees speedup for *some* test matrices and performance degradation for most on Blue Waters, which is consistent with the minimal speedup seen by splitting large messages across multiple processes in. Additionally, it is likely that network contention is playing a large role in the Blue Waters results as the messages sizes become large based on the findings in , and due to each node determining independently how to send its own data without consideration of the size or number of messages injected by other nodes. The nodal optimal communication performs better on Lassen than Blue Waters and aligns with expectations based on the combination of using 3-step for nodes with small inter-node messages to inject into the network where on-node communication is faster than network communication () and splitting large messages where the benefits are much greater (). While Blue Waters achieves higher speedups in some cases than Lassen, both systems see speedups as large as 60x. These results only present part of the overall communication picture, however. There are still cases where the global communication strategy should be reduced to 3-step communication, 2-step communication, or standard communication. This differs based on the two machines and the specific test matrix, but tuning between the techniques results in the most optimal communication strategy. Tuning comes at the minimal cost of performing four different SpMBVs during setup of the SpMBV communicator. Speedups over standard communication when using tuning to use the fastest communication technique are presented in. [fig:bwlassenspmbvmin] In the top plot of, Blue Waters benefits in 33% of the cases from including the nodal optimal multi-step communication strategy. These results are expected based on where the benefits of nodal optimal multi-step communication were less than ideal. In fact, for 20% of the cases on Blue Waters, standard communication is the most performant, consistent with matrices of low density. The matrices `Geo_1438` and `ldoor`, which have the smallest density of the test matrices (), see minimal benefits from the multi-step communication techniques as the number of processes is scaled up due to the minimal amount of data being communicated. We expected to see 2-step and nodal optimal communication perform the best on Lassen due to the faster inter-node communication for smaller sized messages () and the benefits of splitting messages across many processes on node (). These expectations are consistent with the results presented in the bottom plot of ; most test matrices saw the best SpMBV performance with nodal optimal communication (44% of the cases). The remainder of the test cases were divided almost equally between 2-step, 3-step, and standard communication for which technique was the most performant (approximately 18% of the cases, each). shows the benefits of using the tuned point-to-point communication over the standard communication in a single iteration of ECG for. Tuned communication reduces the percentage of time spent in point-to-point communication independent of system, though the performance benefits are typically best when more data is being communicated (*t* = 20 in and.) For Blue Waters, the new communication technique results in point-to-point communication taking 20% − 40% less of the total time compared to the percentage of time when using standard communication (corresponding to the blue highlighted values in ). Performance benefits are much greater on Lassen where the tuned communication results in a decrease of more than 40% of the total iteration time compared to an iteration time with standard communication in most cases. c|cc|cc|cc|cc|cc|cc & & & & & & & & & & & & & & & & & & 5 & 54.7 & 55.6 & & 70.0 & 66.2 & 70.2 & & 75.5 & & 78.9 & & 76.6 10 & & 40.0 & & 56.2 & & 65.2 & & 67.5 & & 69.1 & & 68.7 15 & & 37.6 & & 40.4 & & 66.7 & & 67.0 & & 58.5 & & 63.8 20 & & 24.5 & & 38.3 & & 62.3 & 46.0 & 47.6 & & 45.5 & & 53.6 [tab:ecgpercentsbw] c|cc|cc|cc|cc|cc & & & & & & & & & & & & & & & 5 & & 84.3 & & 88.0 & & 88.4 & & 87.2 & & 93.1 10 & & 87.2 & & 84.9 & & 84.7 & & 89.3 & & 91.3 15 & & 81.2 & & 85.7 & & 86.1 & & 90.0 & & 89.2 20 & & 74.2 & & 79.9 & & 81.7 & & 80.7 & & 79.5 [tab:ecgpercentslassen] [tab:ecgpercents] Conclusions =========== The enlarged conjugate gradient method (ECG) is an efficient method for solving large systems of equations designed to reduce the collective communication bottlenecks of the classical conjugate gradient method (CG.) Within ECG, block vector updates replace the single vector updates of CG, thereby reducing the overall number of iterations required for convergence and hence the overall amount of collective communication. In this paper, we performed a performance study and analysis of the effects of block vectors on the balance of collective communication, point-to-point communication, and computation within the iterations of ECG. We noted the increased volume of data communicated and its disproportionate affects on the performance of the point-to-point communication; the communication bottleneck of ECG shifted to be the point-to-point communication within the sparse matrix-block vector kernel (SpMBV). To address the new SpMBV bottleneck, we designed an optimal multi-step communication technique that builds on existing node-aware communication techniques and improves them for the emerging supercomputer architectures with greater numbers of processes per node and faster inter-node networks. Overall, this paper provides a comprehensive study of the performance of ECG in a distributed parallel environment, and introduces a novel point-to-point multi-step communication technique that provides consistent speedup over standard communication techniques independent of machine. Future work includes profiling and improving the performance of ECG on hybrid supercomputer architectures with computation offloaded to graphics processing units. The software used to generate the results in this paper is freely available in RAPtor . Acknowledgments =============== This material is based in part upon work supported by the Department of Energy, National Nuclear Security Administration, under Award Number *DE-NA0003963* and *DE-NA0003966*. This research is part of the Blue Waters sustained-petascale computing project, which is supported by the National Science Foundation (awards OCI0725070 and ACI-1238993) and the state of Illinois. Blue Waters is a joint effort of the University of Illinois at Urbana-Champaign and its National Center for Supercomputing Applications. This material is based in part upon work supported by the Department of Energy, National Nuclear Security Administration, under Award Number de-na0002374. Performance Analysis and Optimal Node-Aware Communication for Enlarged Conjugate Gradient Methods ================================================================================================= pdfsuppresswarningpagegroup=1 Krylov methods are a key way of solving large sparse linear systems of equations, but suffer from poor strong scalabilty on distributed memory machines. This is due to high synchronization costs from large numbers of collective communication calls alongside a low computational workload. Enlarged Krylov methods address this issue by decreasing the total iterations to convergence, an artifact of splitting the initial residual and resulting in operations on block vectors. In this paper, we present a performance study of an Enlarged Krylov Method, Enlarged Conjugate Gradients (ECG), noting the impact of block vectors on parallel performance at scale. Most notably, we observe the increased overhead of point-to-point communication as a result of denser messages in the sparse matrix-block vector multiplication kernel. Additionally, we present models to analyze expected performance of ECG, as well as, motivate design decisions. Most importantly, we introduce a new point-to-point communication approach based on node-aware communication techniques that increases efficiency of the method at scale. <ccs2012> <concept> <conceptid>10002950.10003705.10011686</conceptid> <conceptdesc>Mathematics of computing Mathematical software performance</conceptdesc> <conceptsignificance>500</conceptsignificance> </concept> <concept> <conceptid>10002950.10003705.10003707</conceptid> <conceptdesc>Mathematics of computing Solvers</conceptdesc> <conceptsignificance>500</conceptsignificance> </concept> <concept> <conceptid>10010147.10010919.10010172</conceptid> <conceptdesc>Computing methodologies Distributed algorithms</conceptdesc> <conceptsignificance>500</conceptsignificance> </concept> </ccs2012> Introduction ============ A significant performance limitation for sparse solvers on large scale parallel computers is the lack of computational work compared to the communication overhead . The iterative solution to large, sparse linear systems of the form *A**x* = *b* often requires *many* sparse matrix-vector multiplications and costly collective communication in the form of inner products; this is the case with the conjugate gradient (CG) method, and with Krylov methods in general. In this paper, we consider so-called *enlarged* Krylov methods , wherein block vectors are introduced to improve convergence, thereby reducing the amount of collective communication in exchange for denser point-to-point communication in the sparse matrix-block vector multiplication. We analyze the associated performance expectations and introduce efficient communication methods that render this class of methods more efficient at scale. There have been a number of suggested algorithms for addressing the imbalance in computation and communication within Krylov methods, including communication avoidance , overlapping communication and computation , and delaying communication at the cost of performing more computation . Most recently, there has been work on reducing iterations to convergence via increasing the amount of computation per iteration, and ultimately, the amount of data communicated . These approaches have been successful in reducing the number of global synchronization points; the current work is considered complementary in that the goal is reduction of the total *amount* of communication. In addition to reducing synchronization points, Enlarged Krylov methods such as enlarged conjugate gradient (ECG) reduce the number of sparse matrix-vector multiplications by improving the convergence of the method through an increase in the amount of computation per iteration. This is accomplished by using block vectors, which results in both an increase in (local) computational work, but also an increase in inter-process communication per iteration. Consequently, the focus of this paper is on analyzing the effects of block vectors on the performance of ECG and proposing optimal strategies to address the communication imbalances they introduce. There are two key contributions made in this paper. 1. A performance study and analysis of an enlarged Krylov method based on ECG, with an emphasis on the communication and computation of block vectors. Specifically, we note how they re-balance the point-to-point and collective communication within a single iteration of ECG, shifting the performance bottleneck to the point-to-point communication. 2. The development of a new communication technique for blocked data based on node-aware communication techniques that have shown to reduce time spent in communication within the context of sparse matrix-vector multiplication and algebraic multigrid . This new communication technique exhibits speedups as high as 60x for various large-scale test matrices on two different supercomputer systems, as well as, reduces the point-to-point communication bottleneck in ECG. These contributions are presented in  and , respectively. Background ========== The conjugate gradient (CG) method for solving a system of equations, *A**x* = *b*, exhibits poor parallel scalability in many situations . In particular, the *strong* scalability is limited due to the high volume of collective communication relative to the low computational requirements of the method. The enlarged conjugate gradient (ECG) method has a lower volume of collective communication and higher computational requirements per iteration compared to CG, thus exhibiting better strong scalability. In this section, we detail the basic structure of ECG, briefly outlining the method in terms of mathematical operations and highlighting the key differences from standard CG. A key computation kernel in both Krylov and enlarged Krylov methods is that of a sparse matrix-vector multiplication; we discuss node-aware communication techniques for this operation in . Throughout this section and the remainder of the paper, ECG performance is analyzed with respect to the problem described in . [ex:mfem] In this example, we consider a discontinuous Galerkin finite element discretization of the Laplace equation,  − Δ*u* = 1 on a unit square, with homogeneous Dirichlet boundary conditions. The problem is generated using MFEM  and the resulting sparse matrix consists of $\num{1310720}$ rows and $\num{104529920}$ nonzero entries. Graph partitioning is not used to reorder the entries, unless stated. Enlarged Krylov Subspace Methods -------------------------------- Similar to CG, ECG begins with an initial guess *x*0 and seeks an update as a solution to the problem *A**x* = *b*, with the initial residual given by *r*0 = *b* − *A**x*0. Unlike CG, which considers updates of the form *x**k* ∈ *x*0 + K*k*, where K*k* is the Krylov subspace defined as K*k* = span{*r*0, *A**r*0, *A*2*r*0, …, *A**k* − 1*r*0},  ECG targets *x**k* ∈ *x*0 + K*k*, *t* where K*k*, *t* is the *enlarged* Krylov space defined as K*k*, *t* = span{*T**r*0, *t*, *A**T**r*0, *t*, *A*2*T**r*0, *t*, …, *A**k* − 1*T**r*0, *t*},  with *T**r*, *t* representing a projection of the residual *r* (defined next). Notably, the enlarged Krylov subspace contains the traditional Krylov subspace: K*k* ⊂ K*k*, *t* . In , *T**r*, *t* defines a projection of the residual *r* (normally the initial residual *r*0) from R*n* → R*n* × *t*, by splitting *r* across *t* subdomains. The projection may be defined in a number of ways, with the caveat that the resulting columns of *T**r*, *t* are linearly independent and preserve the row-sum: *r* = ∑*i* = 0*t*(*T**r*, *t*)*i*,  where we denote the *i*th column of *T* as (*T*)*i*. An illustration of multiple permissible splittings is shown in . [x=100pt,y=100pt] [shift=(10pt, 15pt)] in 1,...,12 <4 (0, /10+0.01) rectangle +(0.08, 0.08); <7 (0, /10+0.01) rectangle +(0.08, 0.08); <10 (0, /10+0.01) rectangle +(0.08, 0.08); (0, /10+0.01) rectangle +(0.08, 0.08); (a) at (0.05, 0.0) *r*; (b)  → ; (c) *T**r*, 3; in 1, 2, 3 in 1,...,12 (, /10+0.01) rectangle +(0.08, 0.08); <4 =1 (, /10+0.01) rectangle +(0.08, 0.08); <7 =3 (, /10+0.01) rectangle +(0.08, 0.08); <10 =2 (, /10+0.01) rectangle +(0.08, 0.08); =1 (, /10+0.01) rectangle +(0.08, 0.08); [shift=(120pt,0)] in 1,...,12 <4 (0, /10+0.01) rectangle +(0.08, 0.08); <7 (0, /10+0.01) rectangle +(0.08, 0.08); <10 (0, /10+0.01) rectangle +(0.08, 0.08); (0, /10+0.01) rectangle +(0.08, 0.08); (a) at (0.05, 0.0) *r*; (b)  → ; (c) *T**r*, 3; in 1, 2, 3 in 1,...,12 (, /10+0.01) rectangle +(0.08, 0.08); <4 =1 (+, /10+0.01) rectangle +(0.08, 0.08); <7 =3 (+, /10+0.01) rectangle +(0.08, 0.08); <10 =2 (+, /10+0.01) rectangle +(0.08, 0.08); =1 (+, /10+0.01) rectangle +(0.08, 0.08); [shift=(240pt,0)] in 1,...,12 <4 (0, /10+0.01) rectangle +(0.08, 0.08); <7 (0, /10+0.01) rectangle +(0.08, 0.08); <10 (0, /10+0.01) rectangle +(0.08, 0.08); (0, /10+0.01) rectangle +(0.08, 0.08); (a) at (0.05, 0.0) *r*; (b)  → ; (c) *T**r*, 3; in 1, 2, 3 in 1,...,12 (, /10+0.01) rectangle +(0.08, 0.08); (1/10+3/10, 12/10+0.01) rectangle +(0.08, 0.08); (1/10+3/10, 11/10+0.01) rectangle +(0.08, 0.08); (2/10+3/10, 10/10+0.01) rectangle +(0.08, 0.08); (2/10+3/10, 9/10+0.01) rectangle +(0.08, 0.08); (3/10+3/10, 8/10+0.01) rectangle +(0.08, 0.08); (3/10+3/10, 7/10+0.01) rectangle +(0.08, 0.08); (1/10+3/10, 6/10+0.01) rectangle +(0.08, 0.08); (1/10+3/10, 5/10+0.01) rectangle +(0.08, 0.08); (2/10+3/10, 4/10+0.01) rectangle +(0.08, 0.08); (2/10+3/10, 3/10+0.01) rectangle +(0.08, 0.08); (3/10+3/10, 2/10+0.01) rectangle +(0.08, 0.08); (3/10+3/10, 1/10+0.01) rectangle +(0.08, 0.08); [fig:splitr] Increasing the number of subdomains, *t*, increases computation from single vector updates to block vector updates of size *n* × *t*. Additional uses of block vectors within ECG are outlined in detail in . On  a (small) linear system is solved to generate the *t* search directions. In addition, the sparse matrix-block vector product (SpMBV) *A**P**k* is performed at each iteration. The number of iterations to convergence is generally reduced from that required by CG, but the algorithm does not eliminate the communication overhead when the algorithm is performed at scale. Unlike CG, where the performance bottleneck is caused by the load imbalance incurred from each inner product in the iteration, ECG sees communication overhead at scale due to the communication associated with the SpMBV kernel (see  in ). We introduce a new communication method in  to improve this performance. [!ht] *r* :  = *b* − *A**x* *P* :  = 0, *R* :  = *T**r*, *t*, *Z* :  = *R* *x* :  = ∑*i* = 1*t*(*X*)*i* [ecg] Node-Aware Communication Techniques ----------------------------------- Sparse matrix-vector multiplication (SpMV), defined as *A* ⋅ *v* → *w* with *A* ∈ R*m* × *n* and *v*, *w* ∈ R*n*, is a common kernel in sparse iterative methods. It is known to lack strong scalability in a distributed memory parallel environment, a problem stemming from low computational requirements and the communication overhead associated with applying standard communication techniques to sparse matrix operations. Generally, *A*, *v*, and *w* are partitioned row-wise across *p* processes with contiguous rows stored on each process (see ). In addition, we split the rows of *A* on each process into 2 blocks, namely on-process and off-process. The on-process block is the diagonal block of columns corresponding to the on-process portion of rows in *v* and *w*, and the off-process block contains the matrix *A*’s nonzero values correlating to non-local rows of *v* and *w* stored off-process. This splitting is common practice, as it differentiates between the portions of a SpMV that require communication with other processes. [x=100pt,y=100pt] [shift=(10pt, 15pt)] in 1,..., 12 in 1,...,12 (, /10+0.01) rectangle +(0.08, 0.08); (1/10+3/10, 1/10+12/10-1/10+0.01) rectangle +(0.08, 0.08); (2/10+3/10, 1/10+12/10-2/10+0.01) rectangle +(0.08, 0.08); (3/10+3/10, 1/10+12/10-3/10+0.01) rectangle +(0.08, 0.08); (3/10+3/10, 1/10+12/10-1/10+0.01) rectangle +(0.08, 0.08); (2/10+3/10, 1/10+12/10-3/10+0.01) rectangle +(0.08, 0.08); (4/10+3/10, 1/10+12/10-4/10+0.01) rectangle +(0.08, 0.08); (5/10+3/10, 1/10+12/10-5/10+0.01) rectangle +(0.08, 0.08); (6/10+3/10, 1/10+12/10-6/10+0.01) rectangle +(0.08, 0.08); (5/10+3/10, 1/10+12/10-4/10+0.01) rectangle +(0.08, 0.08); (4/10+3/10, 1/10+12/10-5/10+0.01) rectangle +(0.08, 0.08); (6/10+3/10, 1/10+12/10-5/10+0.01) rectangle +(0.08, 0.08); (7/10+3/10, 1/10+12/10-7/10+0.01) rectangle +(0.08, 0.08); (8/10+3/10, 1/10+12/10-8/10+0.01) rectangle +(0.08, 0.08); (9/10+3/10, 1/10+12/10-9/10+0.01) rectangle +(0.08, 0.08); (7/10+3/10, 1/10+12/10-9/10+0.01) rectangle +(0.08, 0.08); (10/10+3/10, 1/10+12/10-10/10+0.01) rectangle +(0.08, 0.08); (11/10+3/10, 1/10+12/10-11/10+0.01) rectangle +(0.08, 0.08); (12/10+3/10, 1/10+12/10-12/10+0.01) rectangle +(0.08, 0.08); (11/10+3/10, 1/10+12/10-10/10+0.01) rectangle +(0.08, 0.08); (10/10+3/10, 1/10+12/10-12/10+0.01) rectangle +(0.08, 0.08); (6/10+3/10, 1/10+12/10-1/10+0.01) rectangle +(0.08, 0.08); (4/10+3/10, 1/10+12/10-2/10+0.01) rectangle +(0.08, 0.08); (5/10+3/10, 1/10+12/10-2/10+0.01) rectangle +(0.08, 0.08); (5/10+3/10, 1/10+12/10-3/10+0.01) rectangle +(0.08, 0.08); (1/10+3/10, 1/10+12/10-4/10+0.01) rectangle +(0.08, 0.08); (2/10+3/10, 1/10+12/10-5/10+0.01) rectangle +(0.08, 0.08); (1/10+3/10, 1/10+12/10-6/10+0.01) rectangle +(0.08, 0.08); (11/10+3/10, 1/10+12/10-7/10+0.01) rectangle +(0.08, 0.08); (12/10+3/10, 1/10+12/10-8/10+0.01) rectangle +(0.08, 0.08); (10/10+3/10, 1/10+12/10-9/10+0.01) rectangle +(0.08, 0.08); (9/10+3/10, 1/10+12/10-11/10+0.01) rectangle +(0.08, 0.08); (8/10+3/10, 1/10+12/10-12/10+0.01) rectangle +(0.08, 0.08); (10/10+3/10, 1/10+12/10-1/10+0.01) rectangle +(0.08, 0.08); (8/10+3/10, 1/10+12/10-2/10+0.01) rectangle +(0.08, 0.08); (9/10+3/10, 1/10+12/10-2/10+0.01) rectangle +(0.08, 0.08); (12/10+3/10, 1/10+12/10-2/10+0.01) rectangle +(0.08, 0.08); (7/10+3/10, 1/10+12/10-3/10+0.01) rectangle +(0.08, 0.08); (10/10+3/10, 1/10+12/10-3/10+0.01) rectangle +(0.08, 0.08); (9/10+3/10, 1/10+12/10-4/10+0.01) rectangle +(0.08, 0.08); (11/10+3/10, 1/10+12/10-5/10+0.01) rectangle +(0.08, 0.08); (9/10+3/10, 1/10+12/10-6/10+0.01) rectangle +(0.08, 0.08); (2/10+3/10, 1/10+12/10-7/10+0.01) rectangle +(0.08, 0.08); (4/10+3/10, 1/10+12/10-8/10+0.01) rectangle +(0.08, 0.08); (3/10+3/10, 1/10+12/10-9/10+0.01) rectangle +(0.08, 0.08); (5/10+3/10, 1/10+12/10-10/10+0.01) rectangle +(0.08, 0.08); (4/10+3/10, 1/10+12/10-11/10+0.01) rectangle +(0.08, 0.08); (1/10+3/10, 1/10+12/10-12/10+0.01) rectangle +(0.08, 0.08); [shift=(1.4,0)] (1/10+3/10, 1/10+12/10-1/10+0.01) rectangle +(0.08, 0.08); (1/10+3/10, 1/10+12/10-2/10+0.01) rectangle +(0.08, 0.08); (1/10+3/10, 1/10+12/10-3/10+0.01) rectangle +(0.08, 0.08); (1/10+3/10, 1/10+12/10-4/10+0.01) rectangle +(0.08, 0.08); (1/10+3/10, 1/10+12/10-5/10+0.01) rectangle +(0.08, 0.08); (1/10+3/10, 1/10+12/10-6/10+0.01) rectangle +(0.08, 0.08); (1/10+3/10, 1/10+12/10-7/10+0.01) rectangle +(0.08, 0.08); (1/10+3/10, 1/10+12/10-8/10+0.01) rectangle +(0.08, 0.08); (1/10+3/10, 1/10+12/10-9/10+0.01) rectangle +(0.08, 0.08); (1/10+3/10, 1/10+12/10-10/10+0.01) rectangle +(0.08, 0.08); (1/10+3/10, 1/10+12/10-11/10+0.01) rectangle +(0.08, 0.08); (1/10+3/10, 1/10+12/10-12/10+0.01) rectangle +(0.08, 0.08); [shift=(1.8,0)] (1/10+3/10, 1/10+12/10-1/10+0.01) rectangle +(0.08, 0.08); (1/10+3/10, 1/10+12/10-2/10+0.01) rectangle +(0.08, 0.08); (1/10+3/10, 1/10+12/10-3/10+0.01) rectangle +(0.08, 0.08); (1/10+3/10, 1/10+12/10-4/10+0.01) rectangle +(0.08, 0.08); (1/10+3/10, 1/10+12/10-5/10+0.01) rectangle +(0.08, 0.08); (1/10+3/10, 1/10+12/10-6/10+0.01) rectangle +(0.08, 0.08); (1/10+3/10, 1/10+12/10-7/10+0.01) rectangle +(0.08, 0.08); (1/10+3/10, 1/10+12/10-8/10+0.01) rectangle +(0.08, 0.08); (1/10+3/10, 1/10+12/10-9/10+0.01) rectangle +(0.08, 0.08); (1/10+3/10, 1/10+12/10-10/10+0.01) rectangle +(0.08, 0.08); (1/10+3/10, 1/10+12/10-11/10+0.01) rectangle +(0.08, 0.08); (1/10+3/10, 1/10+12/10-12/10+0.01) rectangle +(0.08, 0.08); (a) at (1.0, 0.05) *A*; (b)  \* ; (c) *v*; (d)  → ; (e) *w*; (0, 0.70) – (2.5,.70); at (0.0, 1.05) node 0; at (0.0,.35) node 1; (0.2, 1.0) – (2.4, 1.0); (0.2, 0.4) – (2.4, 0.4); at (0.25, 1.25) p0; at (0.25, 0.95) p1; at (0.25, 0.65) p2; at (0.25, 0.35) p3; [fig:spmvnodeaware] A common approach to a SpMV is to compute the local portion of the SpMV with the on-process block of *A* while messages are exchanged to attain the off-process portions of *v* necessary for the local update of *w*. While this allows for the overlap of some communication and computation, it requires the exchange of many point-to-point messages, which still creates a large communication overhead (see ). The inefficiency of this standard approach is attributed to two redundancies shown in . First, many messages are injected into the network from each node. Some nodes are sending multiple messages to a single destination process on a separate node, creating a redundancy of messages. Secondly, processes send the necessary values from their local portion of *v* to any other process requiring that information for its local computation of *w*. However, the same information may already be available and received by a separate process on the same node, creating a redundancy of data being sent through the network. [fig:spmvbreakdown] [x=80pt,y=80pt] [shift=(3pt, 15pt)] =[>=latex,shorten >=5pt,shorten <=5pt,->,draw,thin,color=black] (p0) to (p6); (p1) to (p6); ([yshift=-10pt]p2.south) to (p6); (p3) to (p6); (p4) to (p6); (p5) to (p6); (p7) to (p6); at (85pt, -12.0pt) Sending multiple messages to Node 1; [shift=(200pt, 15pt)] =[>=latex,shorten >=5pt,shorten <=0pt,->,draw,thin,color=black] in -4,0,4 ([xshift=pt,yshift=5pt]p1) rectangle +(0.04, 0.04); (myp) at ([xshift=9pt,yshift=6pt] p1); (myp) to[bend left] (p4); (myp) to[bend left] (p5); (myp) to[looseness=.7, bend right=45] (p6); (myp) to[bend right=75,in=90,out=-45] (p7); at (85pt, -12.0pt) Sending duplicate data to Node 1; [fig:standardcomm] Node-aware communication techniques  mitigate these issues by considering node topology to further break down the off-process block into vector values of *v* that require on- or off-node communication; this decomposition is shown in . As a result, costly redundant messages are traded for faster, on-node communication, resulting in 2 different multi-step schemes, namely 3-step and 2-step. ### 3-step Node-Aware Communication 3-step node-aware communication eliminates both redundancies in standard communication by gathering all necessary data to be sent off-node in a single buffer. Efficient implementation of this method relies on pairing all processes with a receiving process on distinct nodes – ensuring that every process remains active throughout the entire communication scheme. First, all data to be sent to a separate node are gathered in a buffer by the single process paired with that node. Secondly, this process sends the data buffer to the paired process on the receiving node. Thirdly, the paired process on the receiving node redistributes the data locally to the correct destination processes on-node. An example of these steps is outlined in . [x=80pt,y=80pt] [shift=(3pt, 5pt)] [xshift=20pt,yshift=200pt] at (-10pt,40pt) Step 1; [xshift=100pt,yshift=100pt] at (-10pt,40pt) Step 2; [xshift=180pt,yshift=0pt] at (-10pt,40pt) Step 3; [fig:3step] ### 2-Step Node-Aware Communication 2-step node-aware communication eliminates the redundancy of sending duplicate data from standard communication and decreases the number of inter-node messages, but not to the same degree as 3-step communication. In this case, *each* process exchanges the information needed by the receiving node with their paired process directly. Then the receiving node redistributes the messages on-node as shown in . While multiple messages are sent to the same node, the duplicate data being sent through the network is eliminated. Hence, the number of bytes communicated with 3-step and 2-step node-aware schemes is the same, and often yields a significant reduction over the amount of data being sent through the network with standard communication. [x=80pt,y=80pt] [shift=(3pt, 25pt)] [xshift=0pt,yshift=0pt] at (90pt,-15pt) Step 1; [xshift=200pt,yshift=0pt] at (90pt,-15pt) Step 2; [fig:2step] ### Node-Aware Communication Models lll parameter & description & first use *p* & number of processes & §[sec:nodeawarecomm] `nnz`  & number of nonzeros in *A* & §[sec:performance] *α* & network latency & ([eq:maxrate]) *s* & maximum number of bytes sent by a process & ([eq:maxrate]) *m* & maximum number of messages sent by a process & ([eq:maxrate]) `ppn`  & processes per node & ([eq:maxrate]) *R**N* & network injection rate (B/s) & ([eq:maxrate]) *R**b* & network rate (B/s) & ([eq:maxrate]) *s*proc & maximum number of bytes sent by a process & ([eq:model2]) *s*node & maximum number of bytes injected by a node & ([eq:model2]) *m*proc$\rightarrow$node & maximum number of nodes to which a processor sends & ([eq:model2]) *m*node$\rightarrow$node & maximum number of messages between two nodes & ([eq:model3]) *s*node$\rightarrow$node & maximum size of a message between two nodes & ([eq:model3]) [tab:parameters] The *max-rate* model  is used to quantify the efficiency of node-aware communication throughout the remainder of . For clarity, all modeling parameters referenced throughout the remainder of the paper are defined in . The max-rate model is an improvement to the standard postal model of communication, accounting for injection limits into the network. The cost of sending messages from a symmetric multiprocessing (SMP) node is modeled as $$\label{eq:max\_rate} T = \alpha \cdot m + \max\left( \frac{{\texttt{ppn}}\cdot s}{R\_N}, \frac{s}{R\_b} \right)$$ where *α* is the latency, *m* is the number of messages sent by a single process on a given node, *s* is the number of bytes sent by a single process on a given SMP node, `ppn` is the number of processes per node, *R**N* is the rate at which a network interface card (NIC) can inject data into the network, and *R**b* is the rate at which a process can transport data. In the case of on-node messages, the injection rate is not present and the max-rate model reduces to the standard postal model for communication $$\label{eq:postal\_model} T = \alpha\_{\ell} \cdot m + \frac{s}{R\_{b,\ell}},$$ where *α*ℓ is the *local* or on-node latency and *R**b*, ℓ is the rate of sending a message on-node. In , the max-rate model is extended to 2-step and 3-step communication by splitting the model into inter-node and intra-node components. For 3-step, the communication model becomes $$\label{eq:model3} T\_{\text{total}} = \underbrace{ \alpha \cdot \frac{{m\_{\text{node$\rightarrow$node}}}}{{\texttt{ppn}}} + \max\left(\frac{{s\_{\text{node}}}}{R\_N}, \frac{{s\_{\text{proc}}}}{R\_b}\right) }\_{\text{inter-node}} + \underbrace{ 2\cdot\left( \alpha\_{\ell} \cdot ({\texttt{ppn}}- 1) + \frac{{s\_{\text{node$\rightarrow$node}}}}{R\_{b,\ell}} \right) }\_{\text{intra-node}}$$ where *m*node$\rightarrow$node is the maximum number of messages communicated between any two nodes and *s*node$\rightarrow$node is the size of messages communicated between any two nodes. For 2-step, this results in $$\label{eq:model2} T\_{\text{total}} = \underbrace{ \alpha \cdot {m\_{\text{proc$\rightarrow$node}}}+ \max\left(\frac{{s\_{\text{node}}}}{R\_N}, \frac{{s\_{\text{proc}}}}{R\_b}\right) }\_{\text{inter-node}} + \underbrace{ \alpha\_{\ell} \cdot ({\texttt{ppn}}- 1) + \frac{{s\_{\text{proc}}}}{R\_{b,\ell}} }\_{\text{intra-node}}$$ where *s*node and *s*proc represent the maximum number of bytes injected by a single NIC and communicated by a single process from an SMP node, respectively, and *m*proc$\rightarrow$node is the maximum number of nodes with which any process communicates. The latency to communicate between nodes, *α*, is often much higher than the intra-node latency, *α*ℓ, thus motivating a multi-step communication approach. In a 2-step method, having every process on-node communicate data minimizes the constant factor $\max\left(\frac{{s\_{\text{node}}}}{R\_N}, \frac{{s\_{\text{proc}}}}{R\_b}\right)$, which depends on the maximum amount of data being communicated to a separate node by a single process. In practice, a 3-step method often yields the best performance for a parallel SpMV since the amount of data being communicated by a single process is often small. As a result moving the data to be communicated off-node into a single buffer minimizes the first term in . These multi-step communication techniques minimize the amount of time spent in inter-node communication. We extend this idea to the block vector operation in . Performance Study of Enlarged Conjugate Gradient ================================================ In this section we detail the per-iteration performance and performance modeling of ECG. A communication efficient version of  is implemented in Raptor  and is based on the work in . Throughout this section and the remainder of the paper, we assume an *n* × *n* matrix *A* with `nnz` nonzeros is partitioned row-wise across a set of *p* processes. Each process contains at most $\frac{n}{p}$ contiguous rows of the matrix *A*. In the modeling that follows, we assume a equal number of nonzeros per partition. In addition, each block vector in  — *R*, *X*, *Z*, *A**Z*, *P*, and *A**P* — is partitioned row-wise, and with the same row distribution as *A*. The variables *c*, *d*, and *d*\tiny old are size *t* × *t* and a copy of each is stored locally on each process. Tests were performed on the Blue Waters Cray XE/XK machine at the National Center for Supercomputing Applications (NCSA) at University of Illinois. Blue Waters  contains a 3D torus Gemini interconnect; each Gemini consists of two nodes. The complete system contains 22 636 XE compute nodes; each with two AMD 6276 Interlagos processors, and additional XK compute nodes unused in these tests. Implementation -------------- The scalability of a direct implementation of  is limited , however, this is improved by fusing communication and by executing the system solve in  in  on each process. This is accomplished in  by decomposing the computation of *P* into several steps as described in . [!ht] *A**Z* ← *A* \* *Z* *Z**T**A**Z* ← *Z**T* \* *A**Z* *C**T**C* ← *Z**T**A**Z*[pkchol] *P* ← solve *P* \* *C* = *Z*[pkline] [calcpk] The *t* × *t* product *Z**T*(*A**Z*) is stored locally on every process in the storage space of *c*, as shown in . The Cholesky factorization on  of  is performed simultaneously on every process, yielding a (local) copy of *C*. Then each process performs a local triangular system solve using the local vector values of *Z* to construct the portion of *P* (see ). Similarly, an additional sparse matrix block vector product *A**P* = *A* \* *P* is avoided by noting that *A**P* is constructed using *A**P* ← `Triangular Solve with Multiple Right Sides of` *A**P* \* *C* = *A**Z* since the product *A**Z* and the previous iteration’s *A**P* = *A* \* *P* are already stored. [x=100pt,y=100pt] [shift=(10pt, 15pt)] (0, 0.70) – (3.8,.70); (0.2, 1.0) – (3.7, 1.0); (0.2, 0.4) – (3.7, 0.4); at (0.0, 1.05) node 0; at (0.0,.35) node 1; at (0.25, 1.25) p0; at (0.25, 0.95) p1; at (0.25, 0.65) p2; at (0.25, 0.35) p3; in 105,15,45,75 at (3.0, -0.05) *c*; at (3.3, -0.05) *d*; at (3.6, -0.065) $d\_{\textnormal{\tiny old}}$; at (0.6, -0.05) *R*; at (1.1, -0.05) *X*; at (1.6, -0.05) *Z*; at (2.15, -0.05) …; at (2.65, -0.05) *A**P*; [fig:vectorpartitioning] summarizes our implementation in terms of computational kernels, with the on process computation in terms of floating point operations along with the associated type of communication. The remainder of the calculations within ECG consist of local block vector updates, as well as block vector inner products for the values *c*, *d*, and *d*\tiny old. A straightforward approach is to compute these independently within the algorithm, resulting in four `MPI_Allreduce` global communications per iteration. However, since the input data required to calculate *c*, *d*, and *d*\tiny old are available on  in  when *c* is computed, a single global reduction is possible. The implementation described in  highlights a single call to `MPI_Allreduce` for all of these values and reducing them in the same buffer. This reduces the number of `MPI_Allreduce` calls to two per iteration. [!ht] SpMV Vector Initialization [ecgkernels] From , we note that computation and communication per iteration costs of ECG have increased over that of parallel CG. For our implementation, the number of collective communication calls to `MPI_Allreduce` has remained the same as CG (at two), but the number of values in the global reductions has increased from a single float in each of CG’s global reductions to *t*2 and 3*t*2. The singular SpMV from CG has increased to a sparse matrix block vector product (SpMBV), which does not increase the number of point-to-point messages, but does increase the size of the messages being communicated by the enlarging factor *t*. Additionally, the local computation for *each kernel* has increased by a factor of *t*. ECG uses these extra per iteration requirements to reduce the total number of iterations to convergence, resulting in fewer iterations than CG as seen in . [fig:mfemconvergence] Per Iteration Performance ------------------------- In  we decompose the performance of a single iteration of ECG for  into (local) computation, point-to-point communication, and collective communication. Performance tests were executed on Blue Waters . Each test is the average of 20 iterations of ECG; reported times are the maximum average time recorded for any single process. At small scales, local computation dominates performance, while at larger scales, the point-to-point communication in the single SpMBV kernel and the collective communication in the block vector inner products become the bottleneck in ECG. also shows the time spent in a single inner product. While we observe growth with the number of processes, as expected, the relative cost (and growth) within ECG remains low. Importantly, increasing *t* at high processor counts does not significantly contribute to cost. This is shown in , where the mean runtime for various block vector inner products all fall within each other’s confidence intervals. This suggests that increasing *t* to drive down the iteration count will have little affect on the per iteration cost of the two calls to `MPI_Allreduce`, and in fact, will result in fewer total calls to it due to the reduction in iterations. [fig:ecgprofiling] [fig:bvinnerprofiling] The remainder of this section focuses on accurately predicting the performance of a single iteration of ECG through robust performance models. In particularly, SpMBV communication is addressed in detail in  where we discuss new node-aware communication techniques for blocked data. Performance Modeling -------------------- To better understand the timing profiles in , we develop performance models. Below, we present two different models for the performance of communication within a single iteration of ECG. First, consider the standard postal model for communication, which represents the maximum amount of time required for communication by an individual process in a single iteration of ECG as $$\label{eq:ecg\_comm\_postal} T\_{\text{postal}} = \underbrace{ \alpha \cdot m + \frac{s \cdot t}{R\_b} }\_\text{point to point} + \underbrace{ 2\cdot\alpha \cdot \log(p) + \frac{f\cdot4\cdot t^2}{R\_b} }\_\text{collective}$$ where *f* is the number of bytes for a floating point number — e.g. *f* = 8. See  for a complete description of model parameters. As discussed in , this model presents a misleading picture on the performance of ECG at scale, particularly on current supercomputer architectures where SMP nodes encounter injection bandwidth limits when sending inter-node messages. [fig:maxrateproof] To improve the model we drop in the max-rate model for the point-to-point communication, resulting in $$\label{eq:ecg\_comm\_mr} T\_{MR} = \underbrace{ \alpha \cdot m + \max \left( \frac{{\texttt{ppn}}\cdot s\cdot t}{R\_N}, \frac{s\cdot t}{R\_b} \right) }\_\text{point to point} + \underbrace{ 2\cdot \alpha \cdot \log(p) + \frac{f\cdot4\cdot t^2}{R\_b} }\_\text{collective}.$$ shows that the max-rate model provides a more accurate upper bound on the time spent in point to point communication within ECG. The term $2\cdot \alpha \cdot \log(p) + \frac{f\cdot4\cdot t^2}{R\_b}$ remains unchanged in to represent the collective communication required for the two block vector inner products. Each block vector inner product incurs latency from requiring log(*p*) messages in an optimal implementation of the `MPI_Allreduce`. More accurate models for modeling the communication of the `MPI_Allreduce` in the inner product exist, such as the logP model  and logGP model , but optimization of the reduction is outside the scope of this paper, so we leave the postal model for representing the performance of the inner products. Modeling the computation within an iteration of ECG is straightforward. The computation for a single iteration of ECG is written as the sum of the kernel floating point operations in , which results in the following $$\label{eq:comp} T\_{comp} = \gamma \cdot\left((2 + 2 t) \frac{{\texttt{nnz}}}{p} + (4t + 4t^2) \frac{n}{p} + \frac{1}{2}t^2 + \frac{1}{6}t^3\right)$$ where *γ* is the time required to compute a single floating point operation. In total, we arrive at the following model for a single iteration of ECG $$\label{eq:ecg\_iter} T\_{ECG} = \alpha \cdot m + \max \left( \frac{{\texttt{ppn}}\cdot s\cdot t}{R\_N}, \frac{s\cdot t}{R\_b} \right) + 2\cdot\alpha \cdot \log(p) + \frac{f\cdot4\cdot t^2}{R\_b} + \gamma \cdot\left((2 + 2 t) \frac{{\texttt{nnz}}}{p} + (4t + 4t^2) \frac{n}{p} + \frac{1}{2}t^2 + \frac{1}{6}t^3\right).$$ Using this model, we can predict the reduction in the amount of time spent in point-to-point communication using the multi-step communication techniques presented in for a single iteration of ECG for  — see . c|cc|cc|cc|cc|cc|cc & & & & & & & & & & & & & & & & & & 5 & & 55.6 & & 70.0 & & 70.2 & & 75.5 & & 78.9 & & 76.6 10 & & 40.0 & & 56.2 & & 65.2 & 65.9 & 67.5 & & 69.1 & & 68.7 15 & & 37.6 & & 40.4 & & 66.7 & & 67.0 & & 58.5 & 60.5 & 63.8 20 & & 24.5 & & 38.3 & & 62.3 & & 47.6 & & 45.5 & & 53.6 [tab:ecgpercents2step] c|cc|cc|cc|cc|cc|cc & & & & & & & & & & & & & & & & & & 5 & & 55.6 & & 70.0 & & 70.2 & & 75.5 & & 78.9 & & 76.6 10 & & 40.0 & & 56.2 & & 65.2 & & 67.5 & 65.6 & 69.1 & 68.5 & 68.7 15 & & 37.6 & & 40.4 & & 66.7 & & 67.0 & & 58.5 & & 63.8 20 & & 24.5 & & 38.3 & & 62.3 & & 47.6 & & 45.5 & 48.8 & 53.6 [tab:ecgpercents3step] [tab:ecgmodelpercents] We see that ECG is still limited at large processor counts even when substituting the node-aware communication techniques to replace the costly point-to-point communication observed in . The models do predict a large amount of speedup for most cases, however, when using 3-step communication, suggesting that node-aware communication techniques can reduce the large point-to-point bottleneck observed in the performance study. While a large communication cost stems equally from the collective communication the `MPI_Allreduce` operations, their performance is dependent upon underlying `MPI` implementation and outside the scope of this paper. We address the point-to-point communication performance further in  by analyzing it through the lens of node-aware communication techniques, optimizing them to achieve the best possible performance at scale. Optimized Communication for Blocked Data ======================================== As discussed in , scalability for ECG is limited by the sparse matrix-block vector multiplication (SpMBV) kernel defined as *A* ⋅ *V* → *W*,  with *A* ∈ R*n* × *n* and *V*, *V* ∈ R*n* × *t*, where 1 < *t* ≪ *n*. Due to the block vector structure of *X*, each message in a SpMV is increased by a factor of *t* (see ). The larger messages associated with *t* > 1 increase the amount of time spent in the point-to-point communication at larger scales, making the SpMBV operation an ideal candidate for node-aware messaging approaches. [x=100pt,y=100pt] [shift=(5pt, 15pt)] in 1,..., 12 in 1,...,12 (, /10+0.01) rectangle +(0.08, 0.08); (1/10+3/10, 1/10+12/10-1/10+0.01) rectangle +(0.08, 0.08); (2/10+3/10, 1/10+12/10-2/10+0.01) rectangle +(0.08, 0.08); (3/10+3/10, 1/10+12/10-3/10+0.01) rectangle +(0.08, 0.08); (3/10+3/10, 1/10+12/10-1/10+0.01) rectangle +(0.08, 0.08); (2/10+3/10, 1/10+12/10-3/10+0.01) rectangle +(0.08, 0.08); (4/10+3/10, 1/10+12/10-4/10+0.01) rectangle +(0.08, 0.08); (5/10+3/10, 1/10+12/10-5/10+0.01) rectangle +(0.08, 0.08); (6/10+3/10, 1/10+12/10-6/10+0.01) rectangle +(0.08, 0.08); (5/10+3/10, 1/10+12/10-4/10+0.01) rectangle +(0.08, 0.08); (4/10+3/10, 1/10+12/10-5/10+0.01) rectangle +(0.08, 0.08); (6/10+3/10, 1/10+12/10-5/10+0.01) rectangle +(0.08, 0.08); (7/10+3/10, 1/10+12/10-7/10+0.01) rectangle +(0.08, 0.08); (8/10+3/10, 1/10+12/10-8/10+0.01) rectangle +(0.08, 0.08); (9/10+3/10, 1/10+12/10-9/10+0.01) rectangle +(0.08, 0.08); (7/10+3/10, 1/10+12/10-9/10+0.01) rectangle +(0.08, 0.08); (10/10+3/10, 1/10+12/10-10/10+0.01) rectangle +(0.08, 0.08); (11/10+3/10, 1/10+12/10-11/10+0.01) rectangle +(0.08, 0.08); (12/10+3/10, 1/10+12/10-12/10+0.01) rectangle +(0.08, 0.08); (11/10+3/10, 1/10+12/10-10/10+0.01) rectangle +(0.08, 0.08); (10/10+3/10, 1/10+12/10-12/10+0.01) rectangle +(0.08, 0.08); (6/10+3/10, 1/10+12/10-1/10+0.01) rectangle +(0.08, 0.08); (4/10+3/10, 1/10+12/10-2/10+0.01) rectangle +(0.08, 0.08); (5/10+3/10, 1/10+12/10-2/10+0.01) rectangle +(0.08, 0.08); (5/10+3/10, 1/10+12/10-3/10+0.01) rectangle +(0.08, 0.08); (1/10+3/10, 1/10+12/10-4/10+0.01) rectangle +(0.08, 0.08); (2/10+3/10, 1/10+12/10-5/10+0.01) rectangle +(0.08, 0.08); (1/10+3/10, 1/10+12/10-6/10+0.01) rectangle +(0.08, 0.08); (11/10+3/10, 1/10+12/10-7/10+0.01) rectangle +(0.08, 0.08); (12/10+3/10, 1/10+12/10-8/10+0.01) rectangle +(0.08, 0.08); (10/10+3/10, 1/10+12/10-9/10+0.01) rectangle +(0.08, 0.08); (9/10+3/10, 1/10+12/10-11/10+0.01) rectangle +(0.08, 0.08); (8/10+3/10, 1/10+12/10-12/10+0.01) rectangle +(0.08, 0.08); (10/10+3/10, 1/10+12/10-1/10+0.01) rectangle +(0.08, 0.08); (8/10+3/10, 1/10+12/10-2/10+0.01) rectangle +(0.08, 0.08); (9/10+3/10, 1/10+12/10-2/10+0.01) rectangle +(0.08, 0.08); (12/10+3/10, 1/10+12/10-2/10+0.01) rectangle +(0.08, 0.08); (7/10+3/10, 1/10+12/10-3/10+0.01) rectangle +(0.08, 0.08); (10/10+3/10, 1/10+12/10-3/10+0.01) rectangle +(0.08, 0.08); (9/10+3/10, 1/10+12/10-4/10+0.01) rectangle +(0.08, 0.08); (11/10+3/10, 1/10+12/10-5/10+0.01) rectangle +(0.08, 0.08); (9/10+3/10, 1/10+12/10-6/10+0.01) rectangle +(0.08, 0.08); (2/10+3/10, 1/10+12/10-7/10+0.01) rectangle +(0.08, 0.08); (4/10+3/10, 1/10+12/10-8/10+0.01) rectangle +(0.08, 0.08); (3/10+3/10, 1/10+12/10-9/10+0.01) rectangle +(0.08, 0.08); (5/10+3/10, 1/10+12/10-10/10+0.01) rectangle +(0.08, 0.08); (4/10+3/10, 1/10+12/10-11/10+0.01) rectangle +(0.08, 0.08); (1/10+3/10, 1/10+12/10-12/10+0.01) rectangle +(0.08, 0.08); (a) at (1.0, 0.05) *A*; (b)  \* ; (c) *V*; (d)  → ; (e) *W*; (0, 0.70) – (2.5,.70); at (0.0, 1.05) node 0; at (0.0,.35) node 1; (0.2, 1.0) – (2.4, 1.0); (0.2, 0.4) – (2.4, 0.4); at (0.25, 1.25) p0; at (0.25, 0.95) p1; at (0.25, 0.65) p2; at (0.25, 0.35) p3; [fig:nodeawarespmbv] Performance Modeling -------------------- Recalling the node-aware communication models from , we augment the 2-step and 3-step models with block vector size, *t*. As a result, the 2-step model in  becomes $$\label{eq:model2\_bv} T\_{total} = \underbrace{ \alpha \cdot {m\_{\text{proc$\rightarrow$node}}}+ \max\left( \frac{t\cdot{s\_{\text{node}}}}{R\_N}, \frac{t\cdot{s\_{\text{proc}}}}{R\_b} \right) }\_{\text{inter-node}} + \underbrace{ \alpha\_{\ell} \cdot ({\texttt{ppn}}- 1) + t \cdot \frac{{s\_{\text{proc}}}}{R\_{b,\ell}} }\_{\text{intra-node}}$$ while the 3-step model in  becomes $$\label{eq:model3\_bv} T\_{total} = \underbrace{ \alpha \cdot \frac{{m\_{\text{node$\rightarrow$node}}}}{{\texttt{ppn}}} + \max\left( \frac{t\cdot{s\_{\text{node}}}}{R\_N}, \frac{t\cdot{s\_{\text{proc}}}}{R\_b} \right) }\_{\text{inter-node}} + \underbrace{ 2\cdot\left( \alpha\_{\ell} \cdot ({\texttt{ppn}}- 1) + t \cdot \frac{{s\_{\text{node$\rightarrow$node}}}}{R\_{b,\ell}} \right) }\_{\text{intra-node}}.$$ In both models, *t* impacts maximum rate, a term that is relatively small for *t* = 1 and dominates the inter-node portion of the communication as *t* grows. Since messages are increased by a factor of *t*, the single buffer used in 3-step communication quickly reaches the network injection bandwidth limits. Using multiple buffers, as in 2-step communication, helps mitigate the issue, however more severe imbalance persists since the amount of data sent to different nodes is often widely varying. shows every inter-node message sent by a single process alongside the message size for  when performing the SpMBV kernel with 4096 processes and 16 processes per node. We present the message sizes for 3-step and 2-step communication, noting that the overall number of inter-node messages decreases when using 3-step communication, but the average message size sent by a single process increases. [fig:msgsizesp4096] For *t* = 20, the maximum message size nears 106 for 3-step communication, while the maximum message size barely reaches 105 for 2-step communication. Additionally, there is clear imbalance in the inter-node message sizes for 2-step communication with messages ranging in size from 103–105 Bytes. Profiling --------- We next apply the node-aware communication strategies presented for SpMVs and SpGEMMs in  to the SpMBV kernel within ECG. displays the performance of standard, 2-step, and 3-step communication when applied to the SpMBV kernel for with *t* = 5 and *t* = 20. The 2-step communication appears outperform in most cases. This is due to the large amount of data to be sent off-node that is split across many processes. We see 2-step communication performing better due to the term *α* ⋅ *m*proc$\rightarrow$node in  being smaller than the *α* ⋅ *m*node$\rightarrow$node term in the 3-step communication model  due to multiplication by the factor *t*. In fact, all counts measured in the ECG profiling section are now multiplied by the enlarging factor *t*. While the traditional SpMV shows speedup with 3-step communication, we now see that 2-step is generally the best fit for our methods as message size, and thus *t*, increases. [fig:mfemspmbvcomparison] Next we consider node-aware communication performance results for a subset of the largest matrices in the SuiteSparse matrix collection  (matrix details can be found in ). These were selected based on size and density to provide a variety of scenarios. lcccc matrix & rows/ cols & nnz & nnz/row & density audikw\_1 & & & 82.3 & 8.72e-5 Geo\_1438 & & & 41.9 & 2.91e-05 bone010 & & & 48.5 & 4.92e-05 Emilia\_923 & & & 43.7 & 4.74e-05 Flan\_1565 & & & 72.9 & 4.66e-05 Hook\_1498 & & & 39.6 & 2.65e-05 ldoor & & & 44.6 & 4.69e-05 Serena & & & 46.1 & 3.31e-05 thermal2 & & & 7.0 & 5.69e-06 [tab:matrices] While 2-step communication is effective in many instances, it is not always the most optimal communication strategy, as depicted in. Unlike the results for , 3-step and 2-step communication do not always outperform standard communication, and in some instances (4096 processes), for most values of *t* there is performance degradation. This is seen more clearly in. [fig:bwspmbvcomparison] [fig:bwspmbvcomparisonzoom] It is important to highlight cases where only a single node-aware communication technique results in performance deterioration over standard communication. Distinct examples include `Geo_1438` and `thermal2` on 4096 processes. Both of these matrices benefit from 2-step communication for *t* = 10 and 20, but performance degrades when using 3-step communication. Another example is the performance of `ldoor` on 8192 processes (seen in ). For *t* = 5 and 10, `ldoor` results in performance degradation using 2-step communication, but up to 5 ×  speedup over standard communication when using 3-step communication. These cases highlight that while one node-aware communication technique underperforms in comparison to standard communication, the other node aware technique is still much faster. Using this as the key motivating factor, we discuss an optimal node-aware communication technique for blocked data in . Optimal Communication for Blocked Data -------------------------------------- When designing an optimal communication scheme for the blocked data format, the main consideration is the impact the number of vectors within the block have on the size of messages communicated. The effects message size and message number have on performance can vary based on machine, hence we present results for Blue Waters alongside Lassen . Lassen, a 23-petaflop IBM system at Lawrence Livermore National Laboratory, consists of 792 nodes connected via a Mellanox 100 Gb/s Enhanced Data Rate (EDR) InfiniBand network. Each node on Lassen is dual-socket with 44 IBM Power9 cores and 4 NVIDIA Volta GPUs (which are unused in our tests.) In , we view the effects placement of data and amount of data being communicated has on performance times for two different machines. This figure shows the amount of time required to send various numbers of bytes between two processes when they are located on the same node but different sockets (blue) and on the same node and same socket (red) for the machines Blue Waters (left) and Lassen (right). It also shows the amount of time required to communicate between two processes on separate nodes when there are less than 4 (Blue Waters) or 5 (Lassen) active processes communicating through the network at the same time or greater than 4 or 5 processes communicating through the network at the same time. We see that as the number of bytes communicated between two processes increases, it becomes increasingly important whether those two processes are located on the same socket, node, or require communication through the network. [fig:commsplit] For both machines, inter-node communication is fastest when message sizes are small, and there are few messages being injected into the network. On Blue Waters, intra-node communication is the fastest, with the time being dependent on how physically close the processes are located. For instance, when two processes are on the same socket, communication is faster than when they are on the same node, but different sockets. This is true for Lassen, as well, but cross-socket intra-node communication is not always faster than communicating through the network. Once message sizes exceed 104 bytes, and there are fewer than five processes actively communicating, inter-node communication is faster than two cross-socket intra-node processes communicating. For both machines, however, we see that once a large enough communication volume is reached, it becomes faster to split the inter-node data being sent across a subset of the processes on a single node due to network contention as observed in . In addition to the importance of the placement of two communicating processes, the total message volume and number of actively communicating processes plays a key role in communication performance. While it is extremely costly for every process on a node to send 105 bytes as seen in, there are performance benefits when splitting a large communication volume across all processes on a node, depicted in. Blue Waters sees modest performance benefits when splitting large messages across multiple processes, whereas Lassen sees much greater performance benefits. [fig:nodepong] These observations help justify why 3-step communication would outperform 2-step communication, and vice versa in certain cases of the SpMBV profiling presented in . Sending all messages in a single buffer becomes impractical when the block size, *t*, is very large, but having each process communicate with a paired process also poses problems when some of the inter-node messages being sent are still very large, as seen in. Motivated by the results above, we introduce an optimized multi-step communication process that combines the aspects of both the 3-step and 2-step communication techniques, and using 3-step or 2-step communication when necessary. We reduce the number of inter-node messages and conglomerate messages to be sent off node for certain cases when the message sizes to be sent off-node are below a given threshold, and we split the messages to be sent off-node across multiple processes when the message size is larger than a given threshold. Hence, each node is determining the most optimal way to perform its inter-node communication. As a result, this nodal optimal communication eliminates the redundancies of data being injected into the network, just as 3-step and 2-step do, but in some cases, it does not reduce the number of inter-node messages as much as 3-step, and in fact can increase the number of inter-node messages injected by a single node to be larger than those injected by 2-step, but never exceeding the total number of active processes per node. The number of inter-node messages sent can be represented by the following inequality, *m*node$\rightarrow$node ≤ *n**o**p**t* ≤ max(*m*proc$\rightarrow$node, `ppn`) where *n**o**p**t* is the number of inter-node messages injected by a single process for the optimal node-aware communication, and it is bounded below by the worst-case number of messages communicated in 3-step communication and above by the worst-case number of messages communicated in 2-step communication or `ppn`, whichever is greater. The proposed process, excluding reducing the global communication strategy to 3-step or 2-step communication, is summarized in . This figure outlines the nodal optimal process of communicating data between two nodes, each with four local processes. Step 1: Each node conglomerates small messages to be sent off-node while retaining larger messages. Messages are assigned to processes in descending order of size to the first available process on node. This is done for every node simultaneously. Step 2: Buffers prepared in step 1 are sent to their destination node, specifically to the paired process on that node with the same local rank. P0 exchanges data with P4, while P1 sends data to P5, and P2 sends data to P6. Step 3: All processes on each individual node redistribute their received data to the correct destination processes on-node. In this step, all communication is local. [x=80pt,y=80pt] [shift=(5pt, 5pt)] [xshift=20pt,yshift=230pt] (-0.1,-0.1) rectangle +(2.35,2.5); (p1) to (p0); (p3) to (p1); [xshift=0pt,yshift=100pt] (p5) to (p4); (p7) to (p4); at (-14pt,95pt) Step 1; [xshift=60pt,yshift=120pt] at (-14pt,40pt) Step 2; ([yshift=8pt]p0) to[looseness=0.7,out=60, in=120] ([yshift=8pt]p4); ([yshift=-8pt]p2) to[looseness=0.7,out=-60, in=-120] ([yshift=-8pt]p6); ([yshift=-8pt]p1) to[looseness=0.9,out=-60, in=-120] ([yshift=-8pt]p5); [xshift=120pt,yshift=0pt] at (-14pt,40pt) Step 3; (p0) – (p1); (p1) – (p3); (p3) – (p2); (p2) – (p0); (p0) – (p3); (p2) – (p1); (p4) – (p5); (p5) – (p7); (p7) – (p6); (p6) – (p4); (p4) – (p7); (p5) – (p6); [fig:optstep] The resulting speedups for the two systems Blue Waters and Lassen are presented in. Here, the message size cutoff being used is the message size cutoff before the MPI implementation switches to the rendezvous protocol. For sending large messages between processes, the rendezvous protocol communicates an envelope first, then the remaining data is communicated after the receiving process allocates buffer space. It is necessary to use this protocol for large messages, but there is a slight slowdown over sending messages via the short and eager protocols which either include the data being sent as part of the envelope (short) or eagerly send the data if it does not fit into the envelope (eager). This cutoff is chosen because rendezvous is the slowest protocol and because the switch to the rendezvous protocol is approximately the crossover point when on-node messages become slower than network messages on Lassen (around 104 Bytes in.) [fig:bwlassenspmbvopt] Using this cutoff point, the method sees speedup for *some* test matrices and performance degradation for most on Blue Waters, which is consistent with the minimal speedup seen by splitting large messages across multiple processes in. Additionally, it is likely that network contention is playing a large role in the Blue Waters results as the messages sizes become large based on the findings in , and due to each node determining independently how to send its own data without consideration of the size or number of messages injected by other nodes. The nodal optimal communication performs better on Lassen than Blue Waters and aligns with expectations based on the combination of using 3-step for nodes with small inter-node messages to inject into the network where on-node communication is faster than network communication () and splitting large messages where the benefits are much greater (). While Blue Waters achieves higher speedups in some cases than Lassen, both systems see speedups as large as 60x. These results only present part of the overall communication picture, however. There are still cases where the global communication strategy should be reduced to 3-step communication, 2-step communication, or standard communication. This differs based on the two machines and the specific test matrix, but tuning between the techniques results in the most optimal communication strategy. Tuning comes at the minimal cost of performing four different SpMBVs during setup of the SpMBV communicator. Speedups over standard communication when using tuning to use the fastest communication technique are presented in. [fig:bwlassenspmbvmin] In the top plot of, Blue Waters benefits in 33% of the cases from including the nodal optimal multi-step communication strategy. These results are expected based on where the benefits of nodal optimal multi-step communication were less than ideal. In fact, for 20% of the cases on Blue Waters, standard communication is the most performant, consistent with matrices of low density. The matrices `Geo_1438` and `ldoor`, which have the smallest density of the test matrices (), see minimal benefits from the multi-step communication techniques as the number of processes is scaled up due to the minimal amount of data being communicated. We expected to see 2-step and nodal optimal communication perform the best on Lassen due to the faster inter-node communication for smaller sized messages () and the benefits of splitting messages across many processes on node (). These expectations are consistent with the results presented in the bottom plot of ; most test matrices saw the best SpMBV performance with nodal optimal communication (44% of the cases). The remainder of the test cases were divided almost equally between 2-step, 3-step, and standard communication for which technique was the most performant (approximately 18% of the cases, each). shows the benefits of using the tuned point-to-point communication over the standard communication in a single iteration of ECG for. Tuned communication reduces the percentage of time spent in point-to-point communication independent of system, though the performance benefits are typically best when more data is being communicated (*t* = 20 in and.) For Blue Waters, the new communication technique results in point-to-point communication taking 20% − 40% less of the total time compared to the percentage of time when using standard communication (corresponding to the blue highlighted values in ). Performance benefits are much greater on Lassen where the tuned communication results in a decrease of more than 40% of the total iteration time compared to an iteration time with standard communication in most cases. c|cc|cc|cc|cc|cc|cc & & & & & & & & & & & & & & & & & & 5 & 54.7 & 55.6 & & 70.0 & 66.2 & 70.2 & & 75.5 & & 78.9 & & 76.6 10 & & 40.0 & & 56.2 & & 65.2 & & 67.5 & & 69.1 & & 68.7 15 & & 37.6 & & 40.4 & & 66.7 & & 67.0 & & 58.5 & & 63.8 20 & & 24.5 & & 38.3 & & 62.3 & 46.0 & 47.6 & & 45.5 & & 53.6 [tab:ecgpercentsbw] c|cc|cc|cc|cc|cc & & & & & & & & & & & & & & & 5 & & 84.3 & & 88.0 & & 88.4 & & 87.2 & & 93.1 10 & & 87.2 & & 84.9 & & 84.7 & & 89.3 & & 91.3 15 & & 81.2 & & 85.7 & & 86.1 & & 90.0 & & 89.2 20 & & 74.2 & & 79.9 & & 81.7 & & 80.7 & & 79.5 [tab:ecgpercentslassen] [tab:ecgpercents] Conclusions =========== The enlarged conjugate gradient method (ECG) is an efficient method for solving large systems of equations designed to reduce the collective communication bottlenecks of the classical conjugate gradient method (CG.) Within ECG, block vector updates replace the single vector updates of CG, thereby reducing the overall number of iterations required for convergence and hence the overall amount of collective communication. In this paper, we performed a performance study and analysis of the effects of block vectors on the balance of collective communication, point-to-point communication, and computation within the iterations of ECG. We noted the increased volume of data communicated and its disproportionate affects on the performance of the point-to-point communication; the communication bottleneck of ECG shifted to be the point-to-point communication within the sparse matrix-block vector kernel (SpMBV). To address the new SpMBV bottleneck, we designed an optimal multi-step communication technique that builds on existing node-aware communication techniques and improves them for the emerging supercomputer architectures with greater numbers of processes per node and faster inter-node networks. Overall, this paper provides a comprehensive study of the performance of ECG in a distributed parallel environment, and introduces a novel point-to-point multi-step communication technique that provides consistent speedup over standard communication techniques independent of machine. Future work includes profiling and improving the performance of ECG on hybrid supercomputer architectures with computation offloaded to graphics processing units. The software used to generate the results in this paper is freely available in RAPtor . Acknowledgments =============== This material is based in part upon work supported by the Department of Energy, National Nuclear Security Administration, under Award Number *DE-NA0003963* and *DE-NA0003966*. This research is part of the Blue Waters sustained-petascale computing project, which is supported by the National Science Foundation (awards OCI0725070 and ACI-1238993) and the state of Illinois. Blue Waters is a joint effort of the University of Illinois at Urbana-Champaign and its National Center for Supercomputing Applications. This material is based in part upon work supported by the Department of Energy, National Nuclear Security Administration, under Award Number de-na0002374.
arxiv_0000126
Shape asymmetry: a morphological indicator for automatic detection of galaxies in the post-coalescence merger stages. ===================================================================================================================== [firstpage] We present a new morphological indicator designed for automated recognition of galaxies with faint asymmetric tidal features suggestive of an ongoing or past merger. We use the new indicator, together with preexisting diagnostics of galaxy structure to study the role of galaxy mergers in inducing (post-)starburst spectral signatures in local galaxies, and investigate whether (post-)starburst galaxies play a role in the the build up of the ‘red sequence’. Our morphological and structural analysis of an evolutionary sample of 335 (post-)starburst galaxies in the SDSS DR7 with starburst ages 0 < *t**S**B* < 0.6 Gyr, shows that 45% of galaxies with young starbursts (*t**S**B* < 0.1 Gyr) show signatures of an ongoing or past merger. This fraction declines with starburst age, and we find a good agreement between automated and visual classifications. The majority of the oldest (post-)starburst galaxies in our sample (*t**S**B* ∼ 0.6 Gyr) have structural properties characteristic of early-type disks and are not as highly concentrated as the fully quenched galaxies commonly found on the ‘red sequence’ in the present day Universe. This suggests that, if (post-)starburst galaxies are a transition phase between active star-formation and quiescence, they do not attain the structure of presently quenched galaxies within the first 0.6 Gyr after the starburst. galaxies:evolution, galaxies:starburst, galaxies:interactions, galaxies:structure Introduction ============ The contrast between ‘blue-cloud’ and ‘red-sequence’ galaxies has long been known and examined in many studies both in the local Universe and at higher redshifts. The star-forming blue-cloud galaxies tend to have late-type morphologies, while the quiescent systems populating the red sequence are predominantly early-type. The origin of this bimodality remains unclear and has become one of the major conundrums in the field of extragalactic astronomy. As the stellar mass contained within the red sequence has doubled since *z* ∼ 1, contrary to that of the blue cloud, it is thought that blue galaxies can migrate to the red sequence once their star formation has quenched. While several mechanisms responsible for the star-formation quenching have been studied, the emerging consensus is that the transformation between star-forming and quiescent galaxies can occur through either a fast or a slow channel. The slow mode involves the secular evolution of galaxies, without the presence of external triggering mechanisms, where star-formation fades gradually as the gas supply is used up. This work’s focus is on the fast route, supported by the scarcity of galaxies in the ‘green valley’, stretching between the blue cloud and the red sequence in the colour-magnitude diagrams. Fast quenching is commonly associated with an escalated gas consumption during a merger-induced starburst, followed by removal of the remaining gas supply by subsequent stellar and/or AGN feedback. However, a recent study of local galaxies with signatures of a past starburst by has shown that they preserve substantial gas reservoirs which can potentially fuel star formation for an extended period of time, making the ‘fast’ nature of this quenching scenario debatable. Further study of the possible quenching mechanisms is required, and galaxies that show signatures of a recent starburst (post-starburst galaxies) as well as those that have undergone a recent merger (post-merger galaxies) may provide a unique insight to this ‘fast’ transition phase between the star-forming and the quiescent mode of galaxy evolution. Post-starburst (PSB) galaxies (also known as ‘k+a’ galaxies) have spectra featuring unusually strong Balmer lines in absorption - an attribute of A stars. Such features are thought to be a signature of a recent ( ∼  1 Gyr) truncation following either a short burst or normal-mode star formation activity within a galaxy. Distinguishing between the two events requires careful modelling, which shows that very strong Balmer lines are inconsistent with anything other than a recent starburst. First discovered over 30 years ago, PSB galaxies are found in various regions of the Universe: locally, they are rare and populate mainly the field and poor groups but their incidence increases at higher redshifts, where they seem to lie predominantly in clusters (e.g. but see for a contradictory conclusion). Whether PSB galaxies are a link between the star-forming and quiescent phases of galaxy evolution is still an ongoing debate. Results of some studies support this claim: at 0.5 < *z* < 1, the mass flux through the PSB phase is comparable to the mass growth of the red sequence, and quenched PSB galaxies tend to be early-type. However, other studies suggest the opposite: the incidence of PSB galaxies at 0.5 < *z* < 1 is inconsistent with a major channel for the red sequence growth in clusters, and the remaining gas reservoirs up to 0.5 − 1 billion years after the starburst suggest a non-quenched state for local PSB galaxies. The contradictory conclusions of the different studies may partially come from their differing selection techniques used to identify PSB signatures in galaxies, as well as the different environments and redshifts that each study focuses on. Clearly further study of this interesting class of galaxies is warranted. The origin of PSB signatures in galaxies has not yet been constrained. A possible mechanism, favoured by numerical simulations (e.g. ), is gas-rich major mergers, which can induce starbursts strong enough to rapidly consume a significant amount of the galaxy’s gas reservoir, and can transform late-type galaxies into early types both structurally and kinematically. Mergers of galaxies have been observed for over 50 years and their connection with PSB spectral signatures in galaxies has been investigated in some previous studies but with no definite conclusion so far. The fact that many local PSB galaxies lie in the field and poor groups suggests that they could be merger remnants, and the signs of morphological disturbance found in many PSB galaxies provides strong evidence supporting this claim (). Nonetheless, results reported by some other studies suggest otherwise, for example: not all PSB galaxies show signs of an interaction and the redshift-evolution of the PSB number density found by is  ∼ 100 times that of the major-merger rate estimated by. Furthermore, argue that the relative incidence of starburst, post-starburst, passive and star-forming galaxies found in five rich clusters suggests that less disruptive events, like minor mergers and accretion, are at play. The role of galaxy mergers in quenching of star-formation in galaxies is not well understood and it remains challenging to constrain due to the large variability of merger signatures and the resulting difficulties in defining robust sample selection criteria. Common methods include selection of close pairs or probing visual merger signatures in galactic structure. The latter approach can be done by means of either visual inspection or quantitative approach involving, for example: rotational asymmetry, lopsidedness the Gini − *M*20 parameter space or the recently introduced median-filtering of high-resolution images for detection of double nuclei. Such measures are useful for identifying galaxies in early stages of a merger, particularly prior to the coalescence of the galaxy nuclei but, they are less suitable for tracing low surface-brightness post-merger signatures. While visual classification offers great reliability and have recently been extended to large galaxy samples (The Galaxy Zoo Project, ; see for merger morphology), the need for a more efficient, automated selection of galaxies in post-coalescence stages of a merger increases in today’s era of extensive surveys. An automated approach also provides a more quantitative and easily reproducable description of galaxy morphology than visual classification. In this paper we introduce a new morphological indicator designed to probe the asymmetric structures in the outskirts of galaxies in the late stages of a merger. Through comparison with visual classification of galaxy images, we show that the indicator performs remarkably well in detecting faint signatures of tidal disruption in starburst and post-starburst galaxies. We use the new measure to identify post-merger (or, in few cases, ongoing-merger) candidates in a sample of 335 local galaxies with bulges, with strongest starburst and post-starburst signatures in the Sloan Digital Sky Survey (SDSS, ), selected using a sophisticated technique introduced by, and to investigate the evolution of the morphological disturbance in these galaxies as a function of the starburst age. We also use several standard measures found in literature to characterise the structure of galaxies evolving through the starburst and post-starburst phase and compare them with those characteristic of galaxies residing on the present-day red sequence. This paper is structured as follows: Section 2 contains a description of samples and their selection criteria; Section 3 - the methodology; Section 4 - calibration and testing; Section 5 - the results; Section 6 - a discussion; Section 7 - the summary of conclusions. We adopt a cosmology with Ω*m* = 0.30, ΩΛ = 0.70 and H0 =70kms− 1Mpc− 1. Sample selection ================ The following samples of galaxies were selected from the SDSS DR7 spectroscopic catalogue. [fig:vivplot] The test sample --------------- The test sample consists of 70 local galaxies at 0.01 < *z* < 0.07 with SDSS Petrosian magnitude within the range: 14.5 < *m**r* < 17.7. The sample was selected to represent systems with various degrees of morphological disturbance, which was determined based on visual inspection of the galaxy images. To find galaxies with disrupted morphologies, we pre-selected galaxies with spectroscopic starburst and post-starburst signatures, as further described in Section [sec:psbsample]. We then visually examined their images in *r*-band as well as the SDSS false colour images for the presence of morphological disturbance and tidal features, pointing to a past (and in a few cases, still ongoing) merger event[1](#fn1). The final sample of test galaxies (Figures [fig:morphsample1] and [fig:morphsample2]) contains: galaxies with highly disrupted morphologies, tidal features indicating a past major merger (images 1-20); galaxies with moderate morphological disturbance, with no prominent tidal features but slight deviations from regular appearance (images 21-30); galaxies with no signs of morphological disturbance (images 31-40). The notable fraction of ongoing mergers in the sample is a consequence of the selection criteria chosen for purposes of the method testing. The test sample is not representative of the entire population of local PSB galaxies. Finally, we include in the test sample a control subset of 30 normal galaxies (selected using the above constraints on redshift and magnitude) classified as early- (images 41-50, 61-65) and late-types (images 51-60, 66-70) based on the SDSS *fracDev* parameter with threshold values of 0.99 and 0.1, respectively. The *fracDev* parameter describes the fraction of the total galaxy light fit by the de Vaucouleurs profile (with the total light being represented by the model magnitude, computed from a linear combination of de Vaucouleurs and exponential fits to galaxy light profiles). We split the normal galaxies based on their inclination, into face-on (*b*/*a* > 0.5, images 41-60) and edge-on (*b*/*a* < 0.5, images 61-70) subsets. The (post-)starburst sample --------------------------- In this work we are interested in the whole sequence from starburst to quiescence, and wish to observe the decline in post-merger features that might be expected following the starburst event. We therefore select an evolutionary sample of galaxies at different stages of the starburst/post-starburst phase, which we collectively refer to as the ‘(post-)starburst’ or ‘SB/PSB’ sample. The sample consists of 400 local galaxies which have undergone a strong recent starburst selected from a parent sample of 70,000 galaxies with 0.01 < *z* < 0.07 and spectral SNR/pixel greater than 8 in the *g*-band, from the spectroscopic SDSS DR7 catalog. The galaxies within the parent sample were selected to be bulge dominated, with stellar surface mass density *μ* >  3  × 108M$\_{\sun}$ kpc− 2 [2](#fn2). This is similar to imposing a stellar mass limit of 1010M$\_{\sun}$. Our reason for using this sample is that it has already been studied in to measure a  ∼ 200 Myr offset between starburst and accretion onto the central supermassive black hole, and in to investigate the evolution of their dust and gas contents. The disadvantage of this pre-selection on stellar mass surface density is that galaxies with bulges may undergo stronger starbursts due to gravitational effects, and therefore not be representative of the full merging galaxy population. We intend on extending our analysis to further samples in later work. Figure [fig:vivplot] shows the distribution of two spectral indices that describe the shape of the spectrum (4000Å break and Balmer break strength) and Balmer absorption line strength for all 70,000 bulge-dominated galaxies. These spectral features constrain the recent star formation history of the galaxy. The indices are computed within the spectral region 3175Å-4150Å using a principal component analysis (PCA), which identifies and groups together features that vary simultaneously as the balance of stars in a spectrum changes. By combining multiple Balmer absorption lines together with information from the shape of the stellar continuum the PCA provides a much higher signal-to-noise ratio spectral index to identify an excess of A stars compared to the traditionally used H*β* or H*δ* absorption lines. In Figure [fig:vivplot], a large majority of the galaxies are found on the right side of the plot with strong 4000Å break (PC1), characteristic of the quiescent red-sequence galaxies. Lower values of PC1 point to ongoing star formation, therefore the less numerous blue-cloud galaxies appear on the left side of the plot. Starburst galaxies undergo brief episodes of enhanced star formation which reflect in extremely low values of PC1 as well as PC2 (deficit of Balmer absorption and weak 4000Å breaks), due to their stellar populations being dominated by short-lived O/B stars. As the dominant stellar populations within these galaxies change with time elapsed from the starburst event, the galaxy moves to the post-starburst phase, and the Balmer absorption features will increase in strength relative to the 4000Å break. The peak at highest PC2 values is where we find old post-starburst galaxies with dominant populations of A stars and starburst age (*t**S**B*)  ∼  1Gyr. To select an evolutionary sample of (post-)starburst galaxies, we selected galaxies from the extreme left of the PC1/2 distribution. These have undergone the strongest recent starburst in the sample, and therefore exhibit the most extreme spectral features. The starburst ages of those galaxies were estimated using Bayesian fitting of stellar population synthesis models to the spectral indices. First, starburst ages for all galaxies with PC1 <  − 4 or PC2 >  − 0.5 were estimated from the median of the posterior distribution, assuming a star formation history comprising an old stellar population and recent starburst. To identify a statistically complete sample of the strongest (post-)starburst galaxies we selected galaxies with the lowest PC1 value, such that we have 20 galaxies per 30Myr time interval, up to a starburst age of 600Myr. The reason for this age restriction is that, at older ages, there is a degeneracy between starburst age and burst strength using PC1 and PC2 alone (see ). The final sample of 400 SB/PSB galaxies are plotted in Figure [fig:vivplot], with starburst age coded by colour. The key difference between this selection method and those used in previous studies of post-starburst galaxies is that we select purely on stellar continuum features and do not remove galaxies with identifiable nebular emission lines. Traditionally galaxies are removed from the post-starburst samples if either H*α* or [O$\mbox{{\expandafter\@slowromancap\romannumeral 2@}}$] is visible, thereby ensuring that star-formation has completely shut off. We did not do this for two key reasons: (i) the traditional method actively selects against galaxies that host narrow line AGN, which are more prevalent in post-starburst galaxies than galaxies with other star-formation histories, resulting in incomplete samples ; (ii) the traditional method only selects old post-starburst galaxies, as starbursts themselves are not instantaneous, but rather have a decay time of  ∼ 0.3 Gyr. For further details of the sample selection, see and. Prior to the analysis, the final sample was reduced to 335 galaxies. This was done after visual inspection of the galaxy images (SDSS *r*-band) which revealed that in case of 65 galaxies, the presence of bright field stars in the images could contaminate the measurements of the structural parameters. As a result, those galaxies were discarded. This does not affect the statistical property of our sample as the discarded galaxies were distributed evenly throughout all starburst age bins. The control sample of star-forming and quiescent galaxies --------------------------------------------------------- In order to relate the structure and morphology of galaxies evolving through the SB/PSB phase to that of both star-forming and passively evolving galaxies, we also selected control samples of 49 and 53 galaxies from the blue-cloud and red-sequence regions of the PC1-PC2 spectral index space, respectively. For that purpose, we used the same bulge-dominated base sample as during the selection of the SB/PSB galaxies. The control samples contain random selections of galaxies from the regions centred on the most populated parts of the blue cloud and the red sequence (marked by the blue/red box in Figure [fig:vivplot]). To eliminate potential biases due to varying galaxy structure with the stellar mass within both the blue cloud and the red sequence, we ensured that the stellar mass distributions of the galaxies within our control samples match that of the oldest (0.5 Gyr < *t**S**B* < 0.6 Gyr) SB/PSB galaxies in our main galaxy sample. Methodology =========== To characterise the structure of the galaxies we applied a range of automated structural measures to the sky-subtracted *r*-band images of the (post-)starburst galaxies. We first defined a binary ‘detection mask’ containing the pixels to be included in the structure measurements (Section [sec:methodmask]). Then, we computed the standard measures of galaxy structure used in the literature (Section [sec:methodoldparams]) as well a new measure of morphology introduced in this work, designed for detecting asymmetric morphological features characteristic of post-mergers (Section [sec:methodashape]). [fig:masksradii] Binary detection mask --------------------- To create the binary detection mask we employed an 8-connected structure detection algorithm, with 8-connected neighbours defined as all pixels touching one of the edges or corners of the central pixel. The algorithm searches for such neighbouring pixels with intensities above a threshold level defined as a function of the background sky level and its standard deviation. The algorithm is specifically designed to identify faint features in the outskirts of galaxies. To enhance the detectability of such features without significantly lowering the detection threshold and potentially dropping to the noise level, we passed the images through a 3 × 3 running average (mean) filter prior to the detection process. This reduced the noise in the images by not only amplifying the signal from the regions with low surface-brightness but also diminishing the pixellation effect in those regions where individual pixels are too faint to be detected. Then, starting from the central pixel as given by the SDSS position, the detection mask was built by accepting all pixels that are 8-connected to previously accepted pixels and have intensities above the chosen threshold. After some experimentation we found the optimal threshold for a robust detection of faint tidal features is 1 standard deviation above the sky background level. For the SDSS *r*-band images of the SB/PSB sample, this corresponds to a mean limiting surface brightness of 24.7 mag/arcsec2. In Figure [fig:masksradii] we show examples of detection masks computed for SB/PSB galaxies with different levels of morphological disturbance, as classified by visual inspection of the SDSS *r*-band images (the exact criteria of the visual classification are explained in detail in Section [sec:resultspsbvisclass]). In the case of ongoing mergers, with double nuclei joined by a bridge (Figure [fig:morphsample1], obj. 7, 9, 13, 17), the detection mask will include both components of the merging system. The computation of the binary detection mask relies on a good estimate of the sky background level in the images. For a self-contained analysis, we estimated our own sky background levels rather than using SDSS values, although we find the two measurements agree well. We extracted pixels lying within a circular annulus with inner and outer radius of 20 and 40 times the full width half maximum (FWHM) of the light profile of each galaxy. This resulted in a large enough area and inner boundary sufficiently far from the central source to ensure a representative measurement of the sky background level. We estimated the sky background level by the mode of the flux histogram, clipped iteratively at 3*σ* until convergence of the mode (typically after a few iterations). We defined the mode as 2.5 × median - 1.5 × mean. From the binary detection masks we estimated the galaxy radius, *R**m**a**x*, as the distance between the centre (brightest pixel within the mask) and the most distant pixel from that centre. We find that this definition of galaxy radius is an improvement for our purposes over the commonly used Petrosian radius, *R**p*, in the case of galaxies with significant morphological disturbance. In the second row of Figure [fig:masksradii] apertures at the new radii (black) are compared to those at twice the Petrosian radii (red). The latter value is typically used to recover the majority of flux for galaxies with regular morphologies. The radii are similar for morphologically undisturbed galaxies but in the presence of significant morphological disturbance, *R**m**a**x* includes the extended faint structures in the outskirts of galaxies, which can be excluded by 2*R**p*. Finally, we recomputed the sky background level within a circular annulus with inner and outer radius of one and two times *R**m**a**x*. This final estimate of the sky background level was subtracted from the images prior to the measurement of the morphological parameters (although it differs from the original one by no more than 3 digital counts per pixel, we note a drop in the standard deviation by as much as 50%). Structural parameters --------------------- Below, we list and briefly describe the measures of galaxy structure used in this study. *Sérsic index, n:* For each galaxy, we fitted the Sérsic function to its surface brightness profile, extracted by considering the mean flux in pixel-thick concentric circular annuli defined by apertures centred at the brightest pixel of the galaxy image. $$I(R) = I\_{e}\mbox{exp}\bigg\{-b\_{n}\bigg[\bigg(\dfrac{R}{R\_{e}}\bigg)^{1/n}-1 \bigg]\bigg\}, \label{eqn:sersic}$$ where *I**e* stands for the intensity at the galaxy’s effective radius, *R**e*, enclosing half of the total galaxy light, and the constant *b**n* is determined by the Sérsic index, *n*. During the fitting procedure, the Sérsic index was restricted to fall between 0.5 and 6.0. *Concentration index, C:* We computed *C* from the growth curve radii *R*20 and *R*80, enclosing 20% and 80% of the galaxy’s total light, respectively, calculated within a circular aperture defined by *R**m**a**x*: $$C = 5\mbox{log}\_{10} \bigg(\frac{R\_{80}}{R\_{20}}\bigg). \label{eqn:c}$$ *Asymmetry, A:* We followed the standard definition of rotational asymmetry : $$A = \dfrac{\Sigma \mid I\_{0}-I\_{\theta} \mid}{2\Sigma \mid I\_{0} \mid } - A\_{bgr} \label{eqn:a}$$ where *I**θ* is the flux from an individual pixel in the galaxy image rotated 180*o* about a chosen centroid (selected to minimise the values of *A*) and *I**θ* is flux from a pixel at the same position in the original, un-rotated image. The subtraction of *A**b**g**r* (the asymmetry in the ‘background’) accounts for the effect of noise on *A*. We measured *A* within a circular region defined by *R**m**a**x* (rather than the commonly used 1.5 × *R**p*). We chose the rotation centroid out of a set of pixels accounting for the brightest 30% of the galaxy’s total light such as to minimise *A*: for every pixel being treated as the rotation centroid, we computed *A* and then selected the minimum value to represent our final measurement. This approach was taken to eliminate potential local asymmetry minima in the morphologically disturbed galaxies ( found no such minima but their study did not specifically focus on galaxies with significant morphological disturbance, where this effect could become apparent). *Outer asymmetry, *A**o*:* To enhance the signal from the low-surface-brightness regions in the outskirts of the galaxies we used a modified version of the asymmetry measure, which we call the ‘outer’ asymmetry, *A**o*. It is computed similarly to the standard asymmetry parameter (Equation [eqn:a]), but with an inner circular aperture containing the brightest 50% of the total flux excluded from the measurement. A comparable definition of ‘outer’ asymmetry was also recently developed by. [fig:masks] *Clumpiness, S:* We measured *S* following the standard definition : $$S = \dfrac{\Sigma \mid I\_{o} - I\_{\sigma} \mid}{\Sigma \mid I\_{o} \mid} \label{eqn:s}$$ where *I**σ* stands for pixel intensity in the smoothed galaxy image (using a Gaussian smoothing filter with width given by the radius containing 20% of the total galaxy flux, *R*20) and *I**o* is that in the original image. We measured *S* within an aperture defined by *R**m**a**x* and centred on the highest intensity pixel. We excluded the central circular aperture at *R*20 to eliminate the contribution from extremely bright centres present in some highly-concentrated galaxies (see e.g. ). *Gini index, G:* We used *G* to measure the degree of inequality in the light distribution within images of the galaxies : $$G = \dfrac{1}{2\overline{X}n(n-1)}\sum\_{i}^{n}(2i-n-1)\mid X\_{i}\mid \label{eqn:g}$$ with pixel intensities, *X**i*, in increasing order, *n* - the total number of pixels assigned to the galaxy (in this work, using the detection mask), and $\overline{X}$ - the mean over all intensities. *G* is independent of the position of the galaxy’s centre and is computed based on the rank-ordered cumulative distribution function of the pixel intensities within well specified boundaries. In this work, these boundaries were determined using the binary detection mask. *Moment of light, *M*20:* Following, we computed the second order moment of the flux-weighted distance of the brightest pixels containing 20% of the total flux (*M*20) from the centre of the galaxy, to measure the spatial extent of the brightest galaxy regions: $$\begin{split} M\_{20} = \mbox{log}\_{10} \bigg( \dfrac{\sum\_{i}M\_{i}}{M\_{tot}} \bigg), \mbox{while} \sum\_{i}f\_{i} < 0.2f\_{tot} \\ M\_{tot} = \sum\_{i}^{n}M\_{i} = \sum\_{i}^{n}f\_{i}[(x\_{i}-x\_{c})^{2}+(y\_{i}-y\_{c})^{2}] \end{split} \label{eqn:m}$$ with the individual pixel coordinates denoted by *x**i*, *y**i*, and the centroid’s coordinates given by *x**c* and *y**c*. We measured *M*20 within boundaries defined by the binary detection masks, with respect to a free centroid parameter, computed by minimising the second order moment of the flux-weighted distance of all pixels, *M**t**o**t*. In this work, the minimisation of *M**t**o**t* was performed using candidate centroid pixels within a region comprising the brightest 30% of the galaxy’s total flux. A new morphology measure: the ‘shape’ asymmetry ----------------------------------------------- We introduce a new measure of asymmetry designed specifically to detect galaxies with low surface-brightness tidal features. We call the new measure the ‘shape’ asymmetry (*A**S*). It is computed using the same mathematical expression as the standard asymmetry parameter (Equation [eqn:a]); however the measurement is performed using the binary detection masks rather than the images of galaxies. This allows for equal weighting of all galaxy parts during the measurement, regardless of their relative brightness. The key difference between *A**S* and the other asymmetry measures investigated in this work (*A*, *A**o*) is that the new parameter is purely a measure of morphological asymmetry and does not contain any information about the asymmetry of the light distribution within the galaxy. It is therefore not influenced by the presence of asymmetric bright galaxy components, like multiple nuclei, and it is highly sensitive to low surface-brightness regions of galaxies. This is illustrated in Figure [fig:masks]. As in the case of *A*, during the computation of *A**S* the images were rotated around the flux-weighted minimum asymmetry centroid, to ensure that the asymmetry is measured with respect to the galaxy core rather than an arbitrary region selected by minimising the shape asymmetry itself. The measurement was performed within an extraction region defined by a circular aperture at *R**m**a**x*. The noise correction term (*A**b**g**r*, Equation [eqn:a]) was omitted in the measurement of *A**S*: in contrast to the standard asymmetry measure, increasing the aperture size does not increase the amount of random noise as all ‘background’ pixels within the detection mask are set to 0. Calibration and testing ======================= In this section we present the results obtained using the test sample for calibration of the structural parameters with our new binary detection masks (Section [sec:resultsteststandard]) and testing of the newly introduced morphological measure (Section [sec:resultstestashape]). [fig:testall] The standard measures --------------------- The early- and late-type galaxies (ETGs and LTGs, respectively) in the test sample provide a good standard for the measures of galaxy structure as they are expected to show values of *n*, *C*, *A*, *S*, *G* and *M*20 within well established ranges. Based on previous studies: * LTGs galaxies tend to have *n* ∼ 1.0, while ETGs show a range of values, typically with $2.0\lesssim n \lesssim 4.0$ ; * *C* ranges between 2.0 and 5.0, with highest values characteristic of ETGs (*C* > 4.0) and decreasing to *C* ∼ 3 for LTGs ; * all normal galaxies tend to have low *A*, with values around 0.1 for ETGs, increasing towards later types to around 0.2 ; * *S* also tends to increase towards later types, ranging from around 0.1 to 0.3 ; * there is a clear separation in the values of *G* found for ETGs and LTGs, with *G* ∼ 0.6 for the former and *G* ∼ 0.4 for the latter ; * *M*20 takes on values lower than -2.0 for ETGs galaxies, and increases to around -1.5 for LTGs; galaxies with double nuclei have *M*20 ∼  − 1.0. Figure [fig:testall] shows the six standard structural parameters computed for all galaxies in the test sample. The values obtained for the early- and late-type galaxies (ETGs and LTGs, respectively) are in general agreement with previous studies. ETGs appear more concentrated ($2.0\lesssim n \lesssim 5.0$ and $3.5\lesssim C \lesssim 4.5$) and have smoother light distribution (*S* ∼ 0.1) than LTGs ($0.5\lesssim n \lesssim 2.0$, $2.5\lesssim C \lesssim 3.5$, *S* ∼ 0.2). Both galaxy types show high symmetry under a rotation about 180*o* ($0.05\lesssim A \lesssim 0.2$). ETGs and LTGs are also well separated from one another in the *G* − *M*20 space, with the former showing more unequal light distributions ($0.4\lesssim G \lesssim 0.6$) and the presence of a compact bright nucleus (M20 <  − 2), than the latter ($0.6\lesssim G \lesssim 0.8$, M20 >  − 2). The separation between early and late types in the *A* − *S* parameter space is not as strong as expected from the results of previous studies. This is likely a consequence of the relatively regular appearance and lack of prominent spiral arms in the LTG subsample (Appendix [app:test]), leading to low values of both *A* and *S*. Within the test sample, the morphologically disturbed galaxies are not easily distinguishable from galaxies with regular morphologies, when using the six standard structural parameters. Exceptions are galaxies with multiple bright nuclei (marked by circles in Figure [fig:testall]). In those cases, care should be taken when interpreting the values of *n* and *C*, measured within the *R**m**a**x*, as they correspond to the entire (still merging) system rather than its individual components. For most parameters, the values span the whole range characteristic of both ETGs and LTGs, with a slight bias toward the former. The exceptions are *G* and *A*. The values of *G* found for the morphologically disturbed galaxies are more similar to those found for highly-concentrated ETGs. We note that this tendency for such high values of *G* could be a result of the sample selection as all morphologically disturbed galaxies were selected from a parent sample of galaxies with central SB/PSB signatures. This is supported by the fact that the undisturbed SB/PSB galaxies also show high values of *G*. Additionally, in morphologically disturbed galaxies the values of *G* could potentially be enhanced by the presence of the faint tidal features which tend to add to the inequality in the light distributions in those galaxies (see ).However,the fact that equally high values of *G* were found in the morphologically undisturbed SB/PSB suggests that this contribution of faint tidal features to the measured values of *G* may not be substantial. The difference between morphologically disturbed galaxies and galaxies with regular morphologies is most noticeable in the values of *A*. However, only 35% (7/13) of the galaxies with the highest morphological disturbance in the test sample are well separated from the galaxies with regular appearance. Visual inspection of these galaxies shows that those with highest A generally have bright double nuclei. The values of *A* measured for galaxies with low morphological disturbance are comparable with those obtained for galaxies with regular morphologies. We conclude that, while the standard measures investigated in this work are suitable for the structural analysis of normal galaxies as well as for identifying galaxies with multiple nuclei, they are not as well-suited for identifying galaxies with faint tidal features observed in post-mergers. In the following section, we present results for our new modified asymmetry measure, designed specifically for identifying galaxies with such features. [fig:Amorph] The ‘shape’ asymmetry --------------------- In Figure [fig:Amorph] we compare the values of our new measure of morphology, *A**S*, with the two measures of structural asymmetry, *A* and *A**o*. The dotted lines represent a threshold value at 0.2 used to separate between galaxies with regular morphologies and those with high morphological disturbance. The threshold was chosen empirically based on the fact that all galaxies with regular morphologies in the test sample have *A**S* < 0.2 while *A**S* ≥ 0.2 is found only for galaxies with some level of morphological disturbance (detected by visual inspection). The situation is similar for *A* and *A**o*, (with an exception of one elliptical galaxy having a slightly higher *A**o*). We compare the performance of the three measures of asymmetry by considering this threshold. Figure [fig:Amorph] shows that the shape asymmetry performs significantly better at separating galaxies with highest degree of morphological disturbance from those with regular appearance, than the other two asymmetry measures. Considering the above threshold value, *A**S* ≥ 0.2 identifies 95% of the galaxies with most prominent tidal features (dark blue symbols), compared with 35% and 45% picked out by *A* and *A**o*, respectively. Visual inspection of the single galaxy omitted by *A**S* revealed that the features present in the galaxy’s structure form an azimuthally symmetric pattern, explaining the corresponding low value of *A**S*. We note that the new measure also recognises 60% of galaxies with lower level of morphological disturbance (violet symbols), while both the standard asymmetry parameter and the outer asymmetry fail to pick out any such objects. The ability of *A**S* to detect galaxies with asymmetric tidal features comes from the independence of the measure of the flux distribution within the galaxies. In contrast, both *A* and *A**o* are flux-weighted measurements and therefore can become dominated by the brightest regions within galaxies, especially in the case of the former, where significant contribution to the measurement can come from multiple bright nuclei found in galaxies in pre-coalescence stages of a merger. Visual inspection identifies 4/7 (57%) such galaxies in the test sample and all of them have *A* ≥ 0.2 (Figure [fig:Amorph]). In the case of *A**o*, the contribution from multiple bright nuclei will not be as significant as in the case of *A*, as the inner aperture containing the brightest 50% of the total light is omitted. The parameter should therefore be more sensitive to the low-surface brightness regions than *A*; however, as we show in Figure [fig:Amorph], it fails to identify more than half of the galaxies with highly disrupted morphologies. We also investigate outer asymmetries computed by cutting out inner apertures containing as much as 80% and 90% of the total flux, however, we find no significant improvement over *A**o* with the inner 50% cut out. This may stem from the differences in the light profiles of galaxies: while for some objects cutting out the inner circular aperture containing 50% of the total light may be optimal for separating its asymmetric outer features, it may prove insufficient or excessive for other galaxies. It is therefore not obvious how to define the inner circular aperture that would yield the most robust result. We also investigate an alternative approach to outer asymmetry by considering a central region with its geometry defined by the distribution of pixels accounting for a given fraction of the total galaxy light. However, we find that the irregularity of the inner region defined in such a way artificially increases the value of *A**o*. We conclude that the shape asymmetry is a much better indicator of the presence of asymmetric tidal features in galaxies than any of the standard structural parameters considered in this work, including the standard asymmetry parameter, *A*, and its modifications designed to measure the asymmetry of galaxy outskirts (*A**o*). We note that the shape asymmetry is purely a measure of galaxy morphology rather than structure. In order to gain information about the asymmetry of the light distribution within galaxies, the computation of the standard asymmetry parameter is a more suitable approach. ### Significance of rotation angles As shown in the study by, while normal galaxies tend to show strong 180*o*-symmetry, their projected shapes can introduce significant asymmetry under a 90*o*-rotation (*A*90), which can serve as a good approximation for the directly measured minor-to-major axes ratio for statistical galaxy samples. In particular, galaxies seen ‘face-on’ show *A*90 ∼ 0, while those observed at higher inclination angles tend to reveal higher values, with *A*90 reaching around 0.8 for the edge-on systems. We applied the concept of asymmetry under a 90*o*-rotation to the binary detection masks of galaxies within the test sample. As shown in Figure [fig:Amorph], spiral and elliptical galaxies are separated in *A**S*(90) according to their axial ratio: galaxies with *b*/*a* < 0.5 tend to show higher asymmetries under a 90*o* rotation. We find that *A**S*(90) is roughly correlated with *A**S*(180) for the morphologically disturbed galaxies, however the large scatter is caused by the variety of shapes of the tidal features. Within the test sample, galaxies with elongated tail-like features tend to have both higher *A**S*(180) and *A**S*(90), while those with shell-like features show lower *A**S*(90). Further tests of this aspect of the shape asymmetry will involve samples of galaxies with representative tidal features of different types (e.g. ), galaxies in earlier stages of a merger (e.g. ), as well as simulated galaxy mergers ). [fig:testdetThresh] ### Image quality effects As the binary detection masks are created by assembling connected pixels above a specific threshold, the measurement of *A**S* will depend on the choice of that threshold. Therefore, the limiting surface brightness of the images used to create the detection masks will be the measurement’s main limitation. For the SDSS *r*-band images, the optimal detection threshold was found to be 1*σ* above the sky background level, corresponding to *μ**l**i**m*= 24.7 mag/arcsec2. Figure [fig:testdetThresh] shows example results of tests of the dependence of *A**S* on the limiting surface brightness of the galaxy images by increasing the detection threshold. Upon inspection of the resulting binary masks and the corresponding values of the shape asymmetry we find that, unsurprisingly, the behaviour of *A**S* with limiting magnitude depends on the geometry of the features in the outskirts of the galaxies, as well as on their brightness relative to the central regions. For objects showing significant morphological disturbance, *A**S* changes substantially and in general follows a decreasing trend with decreasing *μ**l**i**m* (i.e. with decreasing image depth). Conversely, in the case of galaxies with regular morphologies, this effect is much less pronounced and the measurement of *A**S* shows stability against varying *μ**l**i**m*. To summarise, *A**S* provides a robust way of quantifying galaxy morphology and it is the most successful automated method to date at distinguishing between galaxies with asymmetric tidal features and those with regular morphologies. To accurately measure the shape asymmetry of galaxies, images of sufficient depth are required. The SDSS data used in this study are sufficiently deep in surface brightness to reveal faint structures in the outskirts of galaxies; however, using images of higher limiting magnitude could disclose further even fainter features in their morphology, consequently influencing their measured values of *A**S*. Therefore, great care must be taken when comparing measurements of *A**S* between different surveys, and between galaxies with different redshifts. Results ======= In this section we present the results of the structural and morphological analysis of the (post-)starburst galaxies. First, we summarise the results of a visual inspection of the galaxy images (Section [sec:resultspsbvisclass]), and then we present the main findings of the automated approach (Section [sec:resultspsbparams]). Qualitative description ----------------------- We examined the *r*-band images of the 335 SB/PSB galaxies for features suggesting a past merger event, with an aim to determine the dependance of the presence of such features on the starburst age. The visual inspection was carried out independently by five reviewers and involved examining images of the SB/PSB galaxies as well as an additional sample of 300 continuously star-forming galaxies. The star-forming galaxies were selected from the blue cloud, using the same spectral indices as the SB/PSB galaxies and within the same redshift and stellar mass range (in this case, blue-cloud galaxies were more appropriate than red-sequence, because, due to their star formation activity, they are more likely to show disturbance in their structure than galaxies in which the star formation has already quenched). All images were randomised prior to inspection. Each galaxy was then classified, according to a pre-agreed scheme, as an object with: * regular morphology, no signs of disturbance pointing to any kind of interaction (class 0), * low/moderate level of morphological disturbance suggesting a possible interaction (class 1), * high morphological disturbance (obvious post-merger) with elongated tidal features, e.g. tails (class 2), * high morphological disturbance (obvious post-merger) with shell-like features (class 3). Examples of the different classes of galaxies are presented in the top row of Figure [fig:masksradii], with the numeric classes assigned by the 5 classifiers. The visual inspection has shown that the galaxies in our SB/PSB sample have a variety of morphologies. Many, in particular those of young starburst ages, show signs of a past merger, including disturbed morphologies and presence of tidal features. Ongoing mergers with double bright nublei constitute only  ∼ 3% of the whole sample. [fig:visual] In Figure [fig:visual] we define *ζ* as the fraction of SB/PSB galaxies showing signatures of a past (or ongoing) merger (disturbance in morphology, presence of tidal features such as tails, arms or shells - i.e. classes 1, 2, 3), calculated independently in different age bins with a width of 0.1 Gyr. As visual classification is subjective, the individual classifications were not always in complete agreement. For a compromise of reliability and detectability, we discuss the ‘combined classification’, in which case *ζ* includes only those galaxies for which at least three reviewers agreed on the presence of morphological disturbance. As shown in Figure [fig:visual], about 45% of the youngest SB/PSB galaxies (t*S**B*  <  0.1 Gyr) in the sample show features characteristic of a post-merger. Furthermore, there is a decreasing trend in *ζ* with starburst age. For the older PSB galaxies in the sample (t*S**B*  >  0.3 Gyr), we find *ζ* ∼ 30%, which is consistent with the fraction of morphological disturbance found in our control sample of ordinary blue-sequence galaxies. Quantitative measures of structure and morphology ------------------------------------------------- In Figures [fig:nCGM20age] and [fig:SAAoAsage], we present all measures of galaxy structure and morphology considered in this work, computed for the SB/PSB sample as a function of starburst age (*t**S**B*), as well as for the control samples of star-forming blue-cloud and quiescent red-sequence galaxies (plotted with arbitrary ages for clarity). For each parameter, we show individual values per galaxy and median values calculated in 0.1-Gyr wide age-bins (left column). We also quote the Spearman rank correlation coefficient (*ρ*) and the corresponding *p*-value (two-sided significance of the coefficient’s deviation from zero). All values are summarised in Table [tab:table1], where we comment on the general characteristics of our sample, inferred from the individual parameters. Figures [fig:nCGM20age] and [fig:SAAoAsage] also show distributions of the parameter values: in the middle panel, for the youngest (*t**S**B* ≤ 0.1Gyr) and oldest (0.5 Gyr ≤ *t**S**B* ≤ 0.6 Gyr) (post-)starburst galaxies in the sample, and in the right panel, for the oldest (post-)starburst galaxies and the control sample of star-forming and quiescent galaxies. For quantitative comparison of the distributions, we performed a Kolmogorov-Smirnov (K-S) test. We quote the obtained values of the K-S statistics (*D*) and the corresponding probability of the null hypothesis (*p*) in the respective panels of Figure [fig:nCGM20age] and [fig:SAAoAsage]. [tab:table1] ccl Typical range & Trend & Comments & with *t**S**B* & (mean characteristics) & (Spearman) & & & - light profiles of $1.0 \lesssim n \lesssim 3.0$ & *ρ* ∼ 0.06 & early disks; & *p* ∼ 0.23 & - no significant trend & & with starburst age; & & - moderate $2.9\lesssim C \lesssim4.2$ & *ρ* ∼ 0.04 & concentration; & *p* ∼ 0.41 & - no significant trend & & with starburst age; & & - moderately extended $-2.4\lesssim M\_{20} \lesssim-1.4$ & *ρ* ∼  − 0.05 & single nucleus; & *p* ∼ 0.32 & - no significant trend & & with starburst age; & & - highly unequal light $0.55\lesssim G \lesssim0.8$ & *ρ* ∼  − 0.29 & distribution; & *p* < 1.0*e*− 4 & - moderate decline & & with starburst age; & & - moderately ‘clumpy’ $0.08\lesssim S \lesssim0.2$ & *ρ* ∼ 0.20 & light distribution; & *p* ∼ 1.0*e*− 4 & - moderate increase & & with starburst age; $0.05\lesssim A \lesssim0.25$ & *ρ* ∼  − 0.18 & - moderately declining & *p* ∼ 1.0*e*− 3 & asymmetry && with starburst age; $0.05\lesssim A\_{o} \lesssim0.25$ & *ρ* ∼  − 0.14 & - moderately declining & *p* ∼ 0.01 & asymmetry && with starburst age; $0.05\lesssim A\_{S} \lesssim0.35$ & *ρ* ∼  − 0.20 & - moderately declining & *p* ∼ 1.0*e*− 4 & asymmetry && with starburst age; [tab:table2] cccc Asymmetry & Outliers with & Outliers with & Trend in measure & *t**S**B* < 0.3 & *t**S**B* > 0.3 & the fraction & & & of outliers *A* & 16% & 8% & *ρ* ∼  − 0.71 &&& *p* ∼ 0.11 *A**o* & 27% & 14% &*ρ* ∼  − 0.77 &&& *p* ∼ 0.07 *A**S* & 43% & 21% & *ρ* ∼  − 0.94 &&& *p* ∼ 10− 3 [fig:nCGM20age] [fig:SAAoAsage] Considering the measures pertaining to the central concentration of light and the spatial extent of the galaxy nucleus, we find that most galaxies in the SB/PSB sample show characteristics of early-type disks: $1.0\lesssim n\lesssim 3.0$, $3.0\lesssim C\lesssim 4.0$, *M*20 peaking around -2.0 (see e.g. ). As suggested by the low values of the Spearman rank correlation coefficient, there is no significant trend in either *n*, *C* or *M*20 with starburst age (see left panel of Figure [fig:nCGM20age]). This points to little evolution of the central light concentration over the first 0.6 Gyr after the starburst. Direct comparison of the young and old (post-)starburst galaxies shows little difference in the distributions of all three parameters (middle panel, Figure [fig:nCGM20age]) and the K-S test suggests that they are likely to have been drawn from the same distribution. Comparison of the old subset with the control sample of red-sequence galaxies reveals that, after 0.6 Gyr since the last starburst, the (post-)starburst galaxies are not as highly concentrated as the galaxies populating the red sequence; rather, their structure (*n*, *C*, *M*20) bears closer resemblance to star-forming blue-cloud galaxies (right panel, Figure [fig:nCGM20age]). The values of the Gini index found for the youngest galaxies in the (post-)starburst sample are as high as those obtained for the red-sequence galaxies (right panel, Figure [fig:nCGM20age]). Contrary to *n*, *C* and *M*20, *G* is a measure independent of the position of the galaxy centre or assumptions of azimuthally symmetric apertures, and it is designed to pick out changes in the light distribution on a pixel scale. It should therefore be more sensitive to subtle variations in the light distribution of galaxies following a starburst than the other measures. Consequently, high *G* does not necessarily imply high central concentration. *G* shows the most pronounced correlation with the starburst age out of all measures considered in this work (*ρ* =  − 0.3, *p* < 1.0− 4). The decreasing tendency suggests that SB/PSB galaxies tend to have more uniform light distribution as they age, which could be a consequence of the decaying starburst as well as, to some extent, diminishing prominence of the faint tidal features. We do not attribute this effect to contribution from multiple nuclei as their presence is indicated only in a few galaxies by *M*20 (the dearth of objects with *M*20 ≥  − 1.0). This was confirmed during the visual inspection of the galaxy images (only  ∼ 3% classified as ongoing mergers). The fading central starburst could also be the reason for the increasing tendency in clumpiness as a function of starburst age (left panel, Figure [fig:SAAoAsage]). As the amount of light from the star-bursting region decreases, the less luminous off-centre star-forming regions are being uncovered, increasing the resulting values of *S* (although the parameter is designed to exclude the bright central regions from the measurement, the size of the excluded central aperture will not always coincide with that of the star-bursting region). All three measures of rotational asymmetry, *A*, *A**o* and *A**S*, show a decreasing tendency with the starburst age (left column, Figure [fig:SAAoAsage]) but the trend is most pronounced in the values of the shape asymmetry (*ρ* =  − 0.2, *p* ∼ 10− 4). We also find an excess of young galaxies with high *A**S* when comparing the distributions of *A**S* for the young and old subsets (middle panel, Figure [fig:SAAoAsage]) of the (post-)starburst sample. In Table [tab:table2], we compare the number of such ‘outliers’ and its evolution with starburst age for all three measures of asymmetry, using the threshold value of 0.2 defined in Section [sec:resultstestashape]. We find the highest number of outliers for the shape asymmetry: 43% of galaxies with *t**S**B* < 0.3 Gyr show *A**S* ≥ 0.2, and the fraction drops to 21% for galaxies with *t**S**B* > 0.3 Gyr. In the last column of Table [tab:table2], we quantify the trend in the fraction of galaxies with *A*, *A**o* and *A**S* greater than 0.2 as a function of the starburst age (in 0.1 Gyr-wide age bins), by means of the Spearman rank correlation coefficient and the corresponding *p*-value. We find a decreasing tendency for all three measures, but a particularly strong and statistically significant trend for the shape asymmetry (*ρ* ∼  − 0.9, *p* ∼ 10− 3). This declining number of the outliers suggests that, as (post-)starburst galaxies age, they are less likely to show the presence of asymmetric features in their structure. In the case of the standard asymmetry measure, the observed decline is likely to be associated with changes in the symmetry of the central bright galaxy regions. The decrease in the number of galaxies with high shape asymmetry, as a function of the starburst age, points directly to morphological transformations and implies that (post-)starburst galaxies are less likely to show the presence of tidal features, characteristic of post-mergers, as they age. To summarise, the quantitative analysis of the structure of local (post-)starburst galaxies with high stellar mass surface density, reveals moderate central concentration, light profiles and spatial extent of the brightest regions characteristic of early-type disk galaxies, as expected for the sample selection. We do not find any significant evolution of these properties during the first 0.6 Gyr following the most recent starburst. The distribution of light within the (post-)starburst galaxies shows a high level of inequality, as measured by the Gini index, (particularly for those with youngest starbursts), and this inequality decreases with the starburst age. This could be an effect of the decaying starburst and/or of the fading faint tidal features in the outskirts of those galaxies, however, the latter is not expected to be significant, as discussed in Section [sec:resultsteststandard]. Using the shape asymmetry we find that, within the first 0.6 Gyr after the most recent starburst, (post-)starburst galaxies are less likely to show the presence of tidal features characteristic of post-mergers as they age. [fig:visauto] Discussion ========== Both visual and automated analyses of the (post-)starburst galaxies point to a declining number of galaxies with high morphological disturbance as a function of starburst age. Below, we present a quantitative comparison of the visual vs. automated approaches used in this paper (Section [sec:discussionvisauto]). We then discuss the findings of this study in the context of galaxy evolution (Section [sec:discussiongalevo]). Detecting tidal features - visual versus automated approach ----------------------------------------------------------- In the left panel of Figure [fig:visauto] we compare the qualitative analysis of the (post-)starburst galaxies based on visual classification, with both the standard and new automated measures of asymmetry. As in Section [sec:resultspsbvisclass], we define *ζ* as the fraction of (post-)starburst galaxies classified as post-merger candidates in a given age bin. In the case of the visual approach, we show the combined classification, including only those galaxies for which at least three reviewers agreed on the presence of morphological disturbance. For the automated classification, we consider the threshold value introduced in Section [sec:resultstestashape] where an automatically selected post-merger candidate has *A**S* ≥ 0.2. The automated classification computed using the shape asymmetry reproduces almost exactly the result of the human visual classification, pointing to a declining trend in *ζ* with the starburst age. In the right panel of Figure [fig:visauto], we define *ζ*\* - the fraction of the *visually identified post-merger candidates* which have *A**S* ≥ 0.2, calculated in a given age bin. Above each data point, we quote the total number of galaxies (both with and without morphological disturbance) per age bin. The declining trend in *ζ*\* means that the agreement between visual and automated classification weakens with the starburst age. The reason for this is twofold: 1) the visual classification was computed by combining individual classifications obtained by different reviewers and is therefore affected by the differences in opinions between them; 2) the automated approach is designed to measure the degree of asymmetry of the tidal features present in galaxies, while the visual approach classifies galaxies according to the presence/absence of such features only. The second statement is supported by the declining trend in *A**S* as a function of the starburst age for the morphologically disturbed (post-)starburst galaxies (Figure [fig:bulgedistundist]a). Merger, starburst, post-merger, post-starburst,...red sequence? ---------------------------------------------------------------- In this section we discuss the implications of our findings on the role of galaxy mergers in inducing starburst events in massive star-forming galaxies, and that of (post-)starburst galaxies in the build up of the red sequence. ### From merger to (post-)starburst The results of this study are in a general agreement with previous works on the presence of tidal features in the structure of some galaxies with post-starburst stellar populations. We do not make direct quantitative comparisons due to our different selection criteria, and image limiting magnitudes, and consequently, different sample properties, compared to previous studies. Here, we consider an evolutionary sample of starburst and post-starburst galaxies and trace the changes in their morphology as a function of the starburst age, while the results of the previous studies refer to generally older post-starburst galaxies with no spectroscopic evidence of ongoing star-formation. We find that at least 45% of the youngest starburst galaxies in our sample show disturbed morphological features, and that this fraction decreases with the starburst age to about 25 − 30% at *t**S**B* > 0.3 Gyr (left panel, Figure [fig:visauto]). This latter fraction is consistent with the fraction of ordinary blue-sequence galaxies that are visually classified as being morphologically disturbed (Figure [fig:visual]), and therefore the morphological disturbance may not be directly related to the starburst event. We also find a declining trend in *A**S* in morphologically disturbed galaxies, with the youngest starbursts showing a range of values berween  ∼ 0.2 and 0.6, and most galaxies with *t**S**B* > 0.4 Gyr having *A**S* ∼ 0.2 (Figure [fig:bulgedistundist]a). This provides a qualitative fit to the galaxy merger picture emerging from numerical simulations where, after the coalescence of the two progenitors, the morphological disturbance of the system gradually fades away leading to formation of a morphologically undisturbed remnant (see e.g. ). To investigate the role of galaxy mergers in inducing starburst events in massive star-forming galaxies, we must focus on galaxies which have experienced a very recent starburst, so that at the time of observations their tidal features have not begun to vanish. Consequently, we consider the youngest age-bin in Figure [fig:visauto] (left panel) to deduce that, based on the investigated sample, we can attribute at least 45% of the (post-)starburst signatures to galaxy mergers. The remaining 55% could be a result of 1) either galaxy mergers/interactions under conditions that fail to produce tidal signatures in the remnants’ morphologies or, 2) some other internal processes. The fact that 30% of our control sample of blue-sequence galaxies are also visually identified as being morphologically disturbed, additionally indicates that either galaxy mergers in the local Universe can cause significant morphological disturbance without inducing a central starburst, or morphological disturbance is not a good predictor of a past merger event. In future work, we will extend our analysis to a larger sample of galaxies with young starbursts, more representative of the entire galaxy population in the nearby Universe. We will also aim to constrain the upper limit on the fraction of (post-)starburst galaxies which have originated from a merger by considering galaxy mergers found in cosmological simulations. [fig:firstsub] [fig:secondsub] [fig:thirdsub] [fig:bulgedistundist] ### From (post-)starburst to quiescence The values of *G* obtained for galaxies with the youngest starbursts (*t**S**B* < 0.1 Gyr) are as high or even higher than those found for the continuously star-forming and quiescent galaxies, and they tend to decrease with the starburst age. In the previous section, we attributed this effect primarily to the decaying nuclear starburst but we also noted that contributions from faint tidal features in galaxies may increase the corresponding values of *G*. Although our analysis of the test sample suggests that the presence of faint tidal features should have a marginal effect on *G* in (post-)starburst galaxies, here, we address this concept again by investigating the behaviour of the Gini index in the morphologically disturbed and undisturbed subsets of our (post-)starburst sample separately. In Figure [fig:bulgedistundist]b we present the evolution of *G* for (post-)starburst galaxies with regular morphologies as identified by visual inspection. In this case, we still observe a declining trend in *G* with starburst age (*ρ* ∼  − 0.26, *p* ∼ 10− 4), however slightly less pronounced than that found for the total sample (*ρ* ∼  − 0.29, *p* < 10− 4). This implies that the contribution from the tidal features is indeed not consequential and that the time-evolution of the inequality in the light distribution within (post-)starburst galaxies is predominantly an effect of the decaying starburst. This is further supported by Figure [fig:bulgedistundist]c, where we consider the values of *G* computed using images in the *g*-band. In this regime, there is more light from younger stars with shorter lifetimes than in the case of the *r*-band, and therefore, the effects of the decaying starburst should be more apparent. As expected, we find a stronger declining trend in *G* as a function of the starburst age in the *g*-band (*ρ* ∼  − 0.34, *p* < 10− 4). The observed trends in the values of *G* suggest that the photometric signatures of the most recent starburst, in both the *r*-band and the *g*-band, fade away after  ∼ 0.5 Gyr since the starburst event. Our analysis shows that the (post-)starburst galaxies typically have structural properties characteristic of early-type disks (reflected in the values of *n*, *C* and *M*20), with values similar to those found for the star-forming blue-cloud galaxies and somewhat lower than those measured for the quiescent red-sequence galaxies in our control samples. The average structural properties of the sample are largely a result of our sample selection, where we only considered galaxies with high stellar surface mass densities. More interestingly, the lack of evolution in these structural parameters with starburst age, and in particular the values of *n*, *C* and *M*20 for the oldest (post-)starbursts, leads us to conclude that the (post-)starburst galaxies do not attain the highly concentrated structure typical for red-sequence galaxies within the first 0.6 Gyr following a merger. This does not necessarily imply that they are not heading toward the red sequence; perhaps further structural evolution is required for the final transition to occur. A similar conclusion was found in a recent study of a subset of the (post-)starburst galaxies used in this work, by, who found a substantial amount of gas remaining in these galaxies, that could sustain star formation for at least 0.5-1 Gyr after the starburst, suggesting that the post-starburst galaxies are not yet fully quenched. We note the following caveats in our conclusions. The galaxies in our control sample of red-sequence galaxies to which we compare, have not necessarily quenched their star formation recently and could have undergone some structural evolution while residing on the red sequence. Furthermore, present-day red-sequence galaxies that experienced their star-formation quenching earlier in the history of the Universe, when the physical conditions were significantly different to the present day, may not be representative of the ‘future’ red-sequence which will form once the residual star-formation within the present-day (post-)starburst galaxies have ceased. Further study of (post-)starburst galaxies at higher redshifts, including those with *t**S**B* > 0.6 Gyr, as well as a more extended sample of red-sequence galaxies, and a quantitative comparison with previous studies are underway. Summary ======= It is thought that the observed bimodality in galaxy properties could be a result of a transformation from gas-rich actively star-forming galaxies to passively evolving galaxies devoid of gas, following some star-formation quenching event (e.g. Bell et al. 2004, Faber et al. 2007). In this work, we focused on one of the possible quenching routes, involving a transition through a post-starburst phase subsequent to a gas-rich merger event (e.g. Sanders et al. 1988, Wild et al. 2009, Yesuf et al. 2014). Our study involved the morphological and structural analysis of an evolutionary sample of 335 bulge-galaxies with the strongest (post-)starburst signatures within the SDSS DR7, with equal number of galaxies per unit starburst age (*t**S**B*) with *t**S**B* < 0.6 Gyr. The unique sample properties provided an opportunity for a statistical study of a sequence of galaxies evolving through the short-lasting phase following a nuclear starburst. For the analysis, we used standard measures of galaxy structure found in the literature as well as a newly introduced morphological indicator designed to trace faint asymmetric tidal features in the outskirts of the galaxies. Below, we list the main results of our study and their implications. * The new measure of morphological asymmetry introduced in this work provides a robust method for probing asymmetric tidal features in the outskirts of galaxies and allows for an effective detection of galaxies in late stages of a merger. Within our test sample, the shape asymmetry criterion *A**S* > 0.2 identifies 95% (19/20) of galaxies with faint tidal features and 60% (6/10) of those with low/moderate morphological disturbance. Applied to our (post-)starburst sample, *A**S* > 0.2 detects 45% of the galaxies with the youngest starbursts (*t**S**B* < 0.1 Gyr), a fraction which is in a remarkable agreement with that obtained by visual inspection of the images. * The fraction of (post-)starburst galaxies which show features characteristic of post-mergers declines with the starburst age. For the morphologically disturbed (post-)starburst galaxies, the measured values of *A**S* decrease with the starburst age, with *A**S* ∼ 0.2 for most galaxies with starbursts older than 0.5 Gyr.These trends fit qualitatively to the galaxy merger picture, where the morphological disturbance of the merger remnant gradually fades away as the remnant evolves through the dynamically-cold post-merger phase (see e.g. ). * Assuming that morphological disturbance in galaxies is an indicator of a past merger, both visual classification and the automated approach (using the shape asymmetry) suggest that at least 45% of the youngest (post-)starburst galaxies in the sample (*t**S**B* < 0.1 Gyr) have originated in a merger. * The insignificant age-evolution of their central concentration, light profiles or the extent of the brightest regions, suggests that (post-)starburst galaxies do not attain the structure of the fully-quenched galaxies within the first 0.6 Gyr after the starburst. Rather, their structure bears more resemblance to that of the continuously star-forming galaxies. Further structural evolution is required before the post-starburst galaxies can become comparable to galaxies populating the present-day red sequence. Acknowledgments =============== The authors would like to thank the anonymous referee for thorough reading of the manuscript and useful comments and suggestions, which led to clarification of several important points in the text. MMP and VW acknowledge support from the European Career Reintegration Grant Phiz-Ev (P.I. V. Wild). VW, KR and JM-A acknowledge support from the European Research Council Starting Grant SEDMorph (P.I. V. Wild). CJW acknowledges support through the Marie Curie Career Integration Grant 303912. PHJ acknowledges the support of the Academy of Finland grant 1274931. Funding for the SDSS and SDSS-II has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, the U.S. Department of Energy, the National Aeronautics and Space Administration, the Japanese Monbukagakusho, the Max Planck Society, and the Higher Education Funding Council for England. The SDSS Web Site is http://www.sdss.org/. The SDSS is managed by the Astrophysical Research Consortium for the Participating Institutions. The Participating Institutions are the American Museum of Natural History, Astrophysical Institute Potsdam, University of Basel, University of Cambridge, Case Western Reserve University, University of Chicago, Drexel University, Fermilab, the Institute for Advanced Study, the Japan Participation Group, Johns Hopkins University, the Joint Institute for Nuclear Astrophysics, the Kavli Institute for Particle Astrophysics and Cosmology, the Korean Scientist Group, the Chinese Academy of Sciences (LAMOST), Los Alamos National Laboratory, the Max-Planck-Institute for Astronomy (MPIA), the Max-Planck-Institute for Astrophysics (MPA), New Mexico State University, Ohio State University, University of Pittsburgh, University of Portsmouth, Princeton University, the United States Naval Observatory, and the University of Washington. 99 Abazajian K. N. et al., 2009, ApJS, 182, 543 Abraham R. G., Valdes F., Yee H. K. C., van den Bergh S., 1994, ApJ, 432, 75 Abraham R. G., Tanvir N. R., Santiago B. X., Ellis R. S., Glazebrook K., van den Bergh S., 1996, MNRAS, 279, L47 Abraham R. G, van den Bergh S., Nair P., 2003, ApJ, 588, 218 Arnouts S. et al., 2007, AAp, 476, 137 Arp H., 1966, ’Atlas of peculiar galaxies’, Pasadena: California Inst. Technology Atkinson A. M., Abraham R. G., Ferguson A. M. N., 2013, ApJ, 765, 28 Baldry I. K., Balogh M. L., Bower R. G., Glazebrook K., Nichol R. C., Bamford S. P., Budavari T., 2006, MNRAS, 373, 469 Balogh M. L., Morris S. L., Yee H. K., Carlberg R. G., Ellingson E., 1999, ApJ, 527, 54-79 Balogh M. L., Miller C., Nichol R., Zabludoff A., Goto T., 2005, MNRAS, 360, 587 Balogh M. L., Morris S. L., 2000, MNRAS, 318, 703 Barnes J. E., 1988, ApJ, 331, 699 Barnes J. E., Hernquist L., 1991, ApJL, 370, L65 Barnes J. E., Hernquist L., 1996, ApJ, 471, 115 Barton E. J., Geller M. J., Kenyon S. J., 2000, ApJ, 530, 660 Bekki K., Shioya Y., Couch W. J., 2001, ApJL, 547, L17 Bekki K., Couch W. J., Shioya Y., Vazdekis A., 2005, MNRAS, 359, 949 Bell E. F. et al., 2004, ApJ, 608, 752 Berry R., Burnell J., 2000, The handbook of astronomical image processing Bershady M. A., Jangren A., Conselice C. J., 2000, AJ, 119, 2645 Bertin E., Arnouts S., 1996, AApS, 117, 393 Birnboim Y., Dekel A., 2003, 345, 349 Blake C. et al., 2004, MNRAS, 355, 713 Blanton M. R. et al., 2001, AJ, 121, 2358 Bruzual G., Charlot S., 2003, MNRAS, 344, 1000 Bundy K. et al., 2005, ApJ, 651, 120 Cameron E., 2010, PASA, 28, 128 Conselice C., 2003, ApJS, 147, 1 Conselice C. J., Bershady M. A., Jangren A., 2000b, ApJ, 529, 886 Cortese L., Hughes T. M., 2009, MNRAS, 400, 1225 Couch W. J., Sharples R. M., 1987, MNRAS, 229, 423 Cox T. J., Jonsson P., Primack J. R., Somerville R. S., 2006, MNRAS, 373, 1013 Da Costa G. S., 1992, Astronomical Society of the Pacific Conference Series. ASP Conference Series, 23, 90 Darg D. W. et al., 2010, MNRAS, 401, 1552 de Lucia G., Poggianti B. M., Halliday C., Milvang-Jensen B., Noll S., Smail I., Zaritsky D., 2009, MNRAS, 400, 68 de Ravel L., Le Fèvre O., Tresse L., et al., 2009, AAp, 498, 379 Di Matteo T., Springel V., Hernquist L., 2005, Nature, 433, 604 Dressler A., Gunn J. E., 1983, ApJ, 270, 7 Dressler A., Smail I., Poggianti B. M., Butcher H., Couch W. J., Ellis R. S., Oemler Jr. A., 1999, ApJS, 122, 51 Dressler A., Oelmer Jr. A., Poggianti B. M., Gladders M. D., Abramson L., Vulcani B., 2013, ApJ, 770, 62 Ellison S. L., Mendel J. T., Patton D. R., Scudder J. M., 2013, MNRAS, 435, 3627 Faber S. M. et al., 2007, ApJ, 665, 265 Freedman Woods D., Geller M. J., Kurtz M. J., Westra E., Fabricant D. G., Dell’Antonio I., 2010, AJ, 139, 1857 Glasser, G. J., 1962, J. Am. Stat. Assoc., 57, 648 Goto T., 2005, MNRAS, 357, 937 Graham A. W., Driver S. P., 2005, 22, 118 Gunn J. E., Gott, J. R. III, 1972, ApJ, 176, 1 Hernández-Toledo H. M., Vázquez-Mata J. A., Martínez-Vázquez L. A., Avila Reese V., Méndez-Hernández H., Ortega-Esbrí S., Núñez J. P. M., 2008, AJ, 136, 2115 Hopkins P. F., Hernquist L., Cox T. J., Di Matteo T., Robertson B., Springel V., 2006, ApJS, 163, 1 Hopkins P. F., Bundy K., Hernquist L., Ellis R. S., 2007, ApJ, 659, 976 Hopkins P. F., Hernquist L., Cox T. J., Kereš, D., 2008, ApJS, 175, 356 Hopkins P. F., Hernquist L., 2010, MNRAS, 402, 985 Isserstedt J., Schindler R., 1986, A&A, 167, 11 Johansson P. H., Naab T., Burkert A., 2009, ApJ, 690, 802 Johansson P. H., Naab T., Ostriker J. P., 2012, ApJ, 574, 115 Kauffmann G. et al., 2003a, MNRAS, 341, 33 Kauffmann G. et al., 2003b, MNRAS, 341, 54 Kaviraj S., Kirkby L. A., Silk J., Sarzi M., 2007, MNRAS, 382, 960 Kent S. M., 1985, ApJS, 59, 115 Lackner C. N. et al., 2014, AJ, 148, 137 Lintott C. J. et al., 2008, MNRAS, 389, 1179 Lintott C. J. et al., 2011, MNRAS, 410, 166 Lotz J. M., Primack J., Madau P., 2004, AJ, 128, 163 Lotz J. M., Jonsson P., Cox T. J., Primack J. R., 2008, MNRAS, 391, 1137 Ma C. J., Ebeling H., Donovan D., Barrett E., 2008, ApJ, 688, 945 Martig M., Bournaud F., Teyssier R., Dekel A., 2009, 707, 250 Mendel J. T., Simard L., Ellison S. L., Patton D. R., 2013, MNRAS, 429, 2212 Naab T., Burkert A., 2003, ApJ, 597, 893 Noeske et al., 2007, ApJL, 660, L43 Nolan L. A., Raychaudhury S., Kabán A., 2007, MNRAS, 375, 381 Patton D. R. et al., 2002, ApJ, 565, 208 Petrosian V., 1976, ApJL, 209, L1 Poggianti B. M., Smail I., Dressler A., Couch W. J., Barger A. J., Butcher H., Ellis R. S., Oemler Jr. A., 1999, ApJ, 518, 576 Poggianti B. M. et al., 2009, ApJ, 693, 112 Quintero A. D. et al., 2003, ApJ, 602, 190 Reichard T. A., Heckman T. M., Rudnick G., Brinchmann J., Kauffmann G., 2008, ApJ, 677, 186 Reichard T. A., Heckman T. M., Rudnick G., Brinchmann J., Kauffmann G., Wild V., 2009, ApJ, 691, 1005 Rix H. W., Zaritsky D., 1995, ApJ, 447, 82 Rowlands K., Wild V., Nesvadba N., Sibthorpe B., Mortier A., Lehnert M., da Cunha E., 2015, MNRAS, 448, 258 Sanders D. B., Soifer B. T., Elias J. H., Madore B. F., Matthews K., Neugebauer G., Scoville N. Z., 1988, ApJ, 325, 74 Schawinski K. et al., 2014, MNRAS, 440, 889 Schweizer F., 1982, ApJ, 252, 455 Schweizer F., Whitmore B. C., Rubin V. C., 1983, AJ, 88, 909 Sersic J. L., 1963, Boletin de la Asociacion Argentina de Astronomia La Plata Argentina, 6, 41 Shade D. et al., 1995, ApJ, 451, L1 Simard L., Mendel J. T., Patton D. R., Ellison S. L., McConnachie A. W., 2011, ApJS, 196, 11 Snyder G. F., Cox T. J., Hayward C. C., Hernquist L., Jonsson P., 2011, ApJ, 741, 77 Strateva I. et al., 2001, AJ, 122, 1861 Takamiya M. 1999, ApJS, 122, 109 Toomre A., Toomre J., 1972, ApJ, 178, 623 Tran K. V. H., Franx M., Illingworth G. D., van Dokkum P., Kelson D. D., Magee D., 2004, ApJ, 609, 683 Villforth C., et al., 2014, MNRAS, 439, 3342 von der Linden A., Wild V., Kauffmann G., White S. D. M., Weinmann S., 2010, MNRAS, 404, 1231 van Dokkum P. G., 2005, AJ, 130, 2647 Vorontsov-Velyaminov B. A., 1959, ’Atlas and catalog of interacting galaxies’ Wen Z. Z., Zheng X. Z., Xia An F., 2014, ApJ, 787, 130 Wild V., Kauffmann G., Heckman T., Charlot S., Lemson G., Brinchmann J., Reichard T., Pasquali A., 2007, MNRAS, 381, 543 Wild V., Walcher C. J., Johansson P. H., Tresse L., Charlot S., Pollo A., Le Fèvre O., de Ravel L., 2009, MNRAS, 395, 144 Wild V. et al., Heckman T., Charlot S., 2010, MNRAS, 405, 933 Wong O. I. et al., 2012, MNRAS, 420, 1684 Yan R., Newman J., A., Faber S. M., Konidaris N., Koo D., Davis M., 2006, ApJ, 648, 281 Yang Y., Zabludoff A. I., Zaritsky D., Mihos J. C., 2008, ApJ, 688, 945 Yasuda N. et al., 2001, AJ, 122, 1104 Yesuf H. M., Faber S. M., Trump J. R., Koo D. C., Fang J. J., Liu F. S., Wild V., Hayward C.C., 2014, ApJ, 792, 84 Zabludoff A. I., Zaritsky D., Lin H., Tucker D., Hashimoto Y., Shectman S. A., Oemler A., Kirshner R. P., 1996, ApJ, 466, 104 Zepf S. E., Koo D. C., 1989, ApJ, 337, 34 Zwaan M. A., Kuntschner H., Pracy M. B., Couch W. J., 2013, MNRAS, 432, 492 The test sample =============== ![Three-colour images of the post-starburst galaxies in the test sample, including objects with: high morphological disturbance (1-20), low morphological disturbance (21-30) and regular morphology (31-35).](jpegs_morphsample1.jpg "fig:") [fig:morphsample1] ![Three-colour images of the remaining objects in the test sample: post-starburst galaxies with regular morphology (36-40) and normal galaxies, including early-types (41-50; 61-65) and late-types (51-60; 67-70).](jpegs_morphsample2.jpg)Three-colour images of the remaining objects in the test sample: post-starburst galaxies with regular morphology (36-40) and normal galaxies, including early-types (41 − 50; 61 − 65) and late-types (51 − 60; 67 − 70). [fig:morphsample2] [lastpage] --- 1. Troughout the text, we refer to both ongoing and past merger events collectively as past merger events or ‘post-mergers’.[↩](#fnref1) 2. *μ* = *M*\*/(2*π**r*2) with *r* being the physical size in kpc of the radius which contains 50% of the SDSS’ *z*-band Petrosian flux and the stellar mass measured from the 5-band SDSS photometry (J. Brinchmann, http://www.mpa-garching.mpg.de/SDSS)[↩](#fnref2) Shape asymmetry: a morphological indicator for automatic detection of galaxies in the post-coalescence merger stages. ===================================================================================================================== [firstpage] We present a new morphological indicator designed for automated recognition of galaxies with faint asymmetric tidal features suggestive of an ongoing or past merger. We use the new indicator, together with preexisting diagnostics of galaxy structure to study the role of galaxy mergers in inducing (post-)starburst spectral signatures in local galaxies, and investigate whether (post-)starburst galaxies play a role in the the build up of the ‘red sequence’. Our morphological and structural analysis of an evolutionary sample of 335 (post-)starburst galaxies in the SDSS DR7 with starburst ages 0 < *t**S**B* < 0.6 Gyr, shows that 45% of galaxies with young starbursts (*t**S**B* < 0.1 Gyr) show signatures of an ongoing or past merger. This fraction declines with starburst age, and we find a good agreement between automated and visual classifications. The majority of the oldest (post-)starburst galaxies in our sample (*t**S**B* ∼ 0.6 Gyr) have structural properties characteristic of early-type disks and are not as highly concentrated as the fully quenched galaxies commonly found on the ‘red sequence’ in the present day Universe. This suggests that, if (post-)starburst galaxies are a transition phase between active star-formation and quiescence, they do not attain the structure of presently quenched galaxies within the first 0.6 Gyr after the starburst. galaxies:evolution, galaxies:starburst, galaxies:interactions, galaxies:structure Introduction ============ The contrast between ‘blue-cloud’ and ‘red-sequence’ galaxies has long been known and examined in many studies both in the local Universe and at higher redshifts. The star-forming blue-cloud galaxies tend to have late-type morphologies, while the quiescent systems populating the red sequence are predominantly early-type. The origin of this bimodality remains unclear and has become one of the major conundrums in the field of extragalactic astronomy. As the stellar mass contained within the red sequence has doubled since *z* ∼ 1, contrary to that of the blue cloud, it is thought that blue galaxies can migrate to the red sequence once their star formation has quenched. While several mechanisms responsible for the star-formation quenching have been studied, the emerging consensus is that the transformation between star-forming and quiescent galaxies can occur through either a fast or a slow channel. The slow mode involves the secular evolution of galaxies, without the presence of external triggering mechanisms, where star-formation fades gradually as the gas supply is used up. This work’s focus is on the fast route, supported by the scarcity of galaxies in the ‘green valley’, stretching between the blue cloud and the red sequence in the colour-magnitude diagrams. Fast quenching is commonly associated with an escalated gas consumption during a merger-induced starburst, followed by removal of the remaining gas supply by subsequent stellar and/or AGN feedback. However, a recent study of local galaxies with signatures of a past starburst by has shown that they preserve substantial gas reservoirs which can potentially fuel star formation for an extended period of time, making the ‘fast’ nature of this quenching scenario debatable. Further study of the possible quenching mechanisms is required, and galaxies that show signatures of a recent starburst (post-starburst galaxies) as well as those that have undergone a recent merger (post-merger galaxies) may provide a unique insight to this ‘fast’ transition phase between the star-forming and the quiescent mode of galaxy evolution. Post-starburst (PSB) galaxies (also known as ‘k+a’ galaxies) have spectra featuring unusually strong Balmer lines in absorption - an attribute of A stars. Such features are thought to be a signature of a recent ( ∼  1 Gyr) truncation following either a short burst or normal-mode star formation activity within a galaxy. Distinguishing between the two events requires careful modelling, which shows that very strong Balmer lines are inconsistent with anything other than a recent starburst. First discovered over 30 years ago, PSB galaxies are found in various regions of the Universe: locally, they are rare and populate mainly the field and poor groups but their incidence increases at higher redshifts, where they seem to lie predominantly in clusters (e.g. but see for a contradictory conclusion). Whether PSB galaxies are a link between the star-forming and quiescent phases of galaxy evolution is still an ongoing debate. Results of some studies support this claim: at 0.5 < *z* < 1, the mass flux through the PSB phase is comparable to the mass growth of the red sequence, and quenched PSB galaxies tend to be early-type. However, other studies suggest the opposite: the incidence of PSB galaxies at 0.5 < *z* < 1 is inconsistent with a major channel for the red sequence growth in clusters, and the remaining gas reservoirs up to 0.5 − 1 billion years after the starburst suggest a non-quenched state for local PSB galaxies. The contradictory conclusions of the different studies may partially come from their differing selection techniques used to identify PSB signatures in galaxies, as well as the different environments and redshifts that each study focuses on. Clearly further study of this interesting class of galaxies is warranted. The origin of PSB signatures in galaxies has not yet been constrained. A possible mechanism, favoured by numerical simulations (e.g. ), is gas-rich major mergers, which can induce starbursts strong enough to rapidly consume a significant amount of the galaxy’s gas reservoir, and can transform late-type galaxies into early types both structurally and kinematically. Mergers of galaxies have been observed for over 50 years and their connection with PSB spectral signatures in galaxies has been investigated in some previous studies but with no definite conclusion so far. The fact that many local PSB galaxies lie in the field and poor groups suggests that they could be merger remnants, and the signs of morphological disturbance found in many PSB galaxies provides strong evidence supporting this claim (). Nonetheless, results reported by some other studies suggest otherwise, for example: not all PSB galaxies show signs of an interaction and the redshift-evolution of the PSB number density found by is  ∼ 100 times that of the major-merger rate estimated by. Furthermore, argue that the relative incidence of starburst, post-starburst, passive and star-forming galaxies found in five rich clusters suggests that less disruptive events, like minor mergers and accretion, are at play. The role of galaxy mergers in quenching of star-formation in galaxies is not well understood and it remains challenging to constrain due to the large variability of merger signatures and the resulting difficulties in defining robust sample selection criteria. Common methods include selection of close pairs or probing visual merger signatures in galactic structure. The latter approach can be done by means of either visual inspection or quantitative approach involving, for example: rotational asymmetry, lopsidedness the Gini − *M*20 parameter space or the recently introduced median-filtering of high-resolution images for detection of double nuclei. Such measures are useful for identifying galaxies in early stages of a merger, particularly prior to the coalescence of the galaxy nuclei but, they are less suitable for tracing low surface-brightness post-merger signatures. While visual classification offers great reliability and have recently been extended to large galaxy samples (The Galaxy Zoo Project, ; see for merger morphology), the need for a more efficient, automated selection of galaxies in post-coalescence stages of a merger increases in today’s era of extensive surveys. An automated approach also provides a more quantitative and easily reproducable description of galaxy morphology than visual classification. In this paper we introduce a new morphological indicator designed to probe the asymmetric structures in the outskirts of galaxies in the late stages of a merger. Through comparison with visual classification of galaxy images, we show that the indicator performs remarkably well in detecting faint signatures of tidal disruption in starburst and post-starburst galaxies. We use the new measure to identify post-merger (or, in few cases, ongoing-merger) candidates in a sample of 335 local galaxies with bulges, with strongest starburst and post-starburst signatures in the Sloan Digital Sky Survey (SDSS, ), selected using a sophisticated technique introduced by, and to investigate the evolution of the morphological disturbance in these galaxies as a function of the starburst age. We also use several standard measures found in literature to characterise the structure of galaxies evolving through the starburst and post-starburst phase and compare them with those characteristic of galaxies residing on the present-day red sequence. This paper is structured as follows: Section 2 contains a description of samples and their selection criteria; Section 3 - the methodology; Section 4 - calibration and testing; Section 5 - the results; Section 6 - a discussion; Section 7 - the summary of conclusions. We adopt a cosmology with Ω*m* = 0.30, ΩΛ = 0.70 and H0 =70kms− 1Mpc− 1. Sample selection ================ The following samples of galaxies were selected from the SDSS DR7 spectroscopic catalogue. [fig:vivplot] The test sample --------------- The test sample consists of 70 local galaxies at 0.01 < *z* < 0.07 with SDSS Petrosian magnitude within the range: 14.5 < *m**r* < 17.7. The sample was selected to represent systems with various degrees of morphological disturbance, which was determined based on visual inspection of the galaxy images. To find galaxies with disrupted morphologies, we pre-selected galaxies with spectroscopic starburst and post-starburst signatures, as further described in Section [sec:psbsample]. We then visually examined their images in *r*-band as well as the SDSS false colour images for the presence of morphological disturbance and tidal features, pointing to a past (and in a few cases, still ongoing) merger event[1](#fn1). The final sample of test galaxies (Figures [fig:morphsample1] and [fig:morphsample2]) contains: galaxies with highly disrupted morphologies, tidal features indicating a past major merger (images 1-20); galaxies with moderate morphological disturbance, with no prominent tidal features but slight deviations from regular appearance (images 21-30); galaxies with no signs of morphological disturbance (images 31-40). The notable fraction of ongoing mergers in the sample is a consequence of the selection criteria chosen for purposes of the method testing. The test sample is not representative of the entire population of local PSB galaxies. Finally, we include in the test sample a control subset of 30 normal galaxies (selected using the above constraints on redshift and magnitude) classified as early- (images 41-50, 61-65) and late-types (images 51-60, 66-70) based on the SDSS *fracDev* parameter with threshold values of 0.99 and 0.1, respectively. The *fracDev* parameter describes the fraction of the total galaxy light fit by the de Vaucouleurs profile (with the total light being represented by the model magnitude, computed from a linear combination of de Vaucouleurs and exponential fits to galaxy light profiles). We split the normal galaxies based on their inclination, into face-on (*b*/*a* > 0.5, images 41-60) and edge-on (*b*/*a* < 0.5, images 61-70) subsets. The (post-)starburst sample --------------------------- In this work we are interested in the whole sequence from starburst to quiescence, and wish to observe the decline in post-merger features that might be expected following the starburst event. We therefore select an evolutionary sample of galaxies at different stages of the starburst/post-starburst phase, which we collectively refer to as the ‘(post-)starburst’ or ‘SB/PSB’ sample. The sample consists of 400 local galaxies which have undergone a strong recent starburst selected from a parent sample of 70,000 galaxies with 0.01 < *z* < 0.07 and spectral SNR/pixel greater than 8 in the *g*-band, from the spectroscopic SDSS DR7 catalog. The galaxies within the parent sample were selected to be bulge dominated, with stellar surface mass density *μ* >  3  × 108M$\_{\sun}$ kpc− 2 [2](#fn2). This is similar to imposing a stellar mass limit of 1010M$\_{\sun}$. Our reason for using this sample is that it has already been studied in to measure a  ∼ 200 Myr offset between starburst and accretion onto the central supermassive black hole, and in to investigate the evolution of their dust and gas contents. The disadvantage of this pre-selection on stellar mass surface density is that galaxies with bulges may undergo stronger starbursts due to gravitational effects, and therefore not be representative of the full merging galaxy population. We intend on extending our analysis to further samples in later work. Figure [fig:vivplot] shows the distribution of two spectral indices that describe the shape of the spectrum (4000Å break and Balmer break strength) and Balmer absorption line strength for all 70,000 bulge-dominated galaxies. These spectral features constrain the recent star formation history of the galaxy. The indices are computed within the spectral region 3175Å-4150Å using a principal component analysis (PCA), which identifies and groups together features that vary simultaneously as the balance of stars in a spectrum changes. By combining multiple Balmer absorption lines together with information from the shape of the stellar continuum the PCA provides a much higher signal-to-noise ratio spectral index to identify an excess of A stars compared to the traditionally used H*β* or H*δ* absorption lines. In Figure [fig:vivplot], a large majority of the galaxies are found on the right side of the plot with strong 4000Å break (PC1), characteristic of the quiescent red-sequence galaxies. Lower values of PC1 point to ongoing star formation, therefore the less numerous blue-cloud galaxies appear on the left side of the plot. Starburst galaxies undergo brief episodes of enhanced star formation which reflect in extremely low values of PC1 as well as PC2 (deficit of Balmer absorption and weak 4000Å breaks), due to their stellar populations being dominated by short-lived O/B stars. As the dominant stellar populations within these galaxies change with time elapsed from the starburst event, the galaxy moves to the post-starburst phase, and the Balmer absorption features will increase in strength relative to the 4000Å break. The peak at highest PC2 values is where we find old post-starburst galaxies with dominant populations of A stars and starburst age (*t**S**B*)  ∼  1Gyr. To select an evolutionary sample of (post-)starburst galaxies, we selected galaxies from the extreme left of the PC1/2 distribution. These have undergone the strongest recent starburst in the sample, and therefore exhibit the most extreme spectral features. The starburst ages of those galaxies were estimated using Bayesian fitting of stellar population synthesis models to the spectral indices. First, starburst ages for all galaxies with PC1 <  − 4 or PC2 >  − 0.5 were estimated from the median of the posterior distribution, assuming a star formation history comprising an old stellar population and recent starburst. To identify a statistically complete sample of the strongest (post-)starburst galaxies we selected galaxies with the lowest PC1 value, such that we have 20 galaxies per 30Myr time interval, up to a starburst age of 600Myr. The reason for this age restriction is that, at older ages, there is a degeneracy between starburst age and burst strength using PC1 and PC2 alone (see ). The final sample of 400 SB/PSB galaxies are plotted in Figure [fig:vivplot], with starburst age coded by colour. The key difference between this selection method and those used in previous studies of post-starburst galaxies is that we select purely on stellar continuum features and do not remove galaxies with identifiable nebular emission lines. Traditionally galaxies are removed from the post-starburst samples if either H*α* or [O$\mbox{{\expandafter\@slowromancap\romannumeral 2@}}$] is visible, thereby ensuring that star-formation has completely shut off. We did not do this for two key reasons: (i) the traditional method actively selects against galaxies that host narrow line AGN, which are more prevalent in post-starburst galaxies than galaxies with other star-formation histories, resulting in incomplete samples ; (ii) the traditional method only selects old post-starburst galaxies, as starbursts themselves are not instantaneous, but rather have a decay time of  ∼ 0.3 Gyr. For further details of the sample selection, see and. Prior to the analysis, the final sample was reduced to 335 galaxies. This was done after visual inspection of the galaxy images (SDSS *r*-band) which revealed that in case of 65 galaxies, the presence of bright field stars in the images could contaminate the measurements of the structural parameters. As a result, those galaxies were discarded. This does not affect the statistical property of our sample as the discarded galaxies were distributed evenly throughout all starburst age bins. The control sample of star-forming and quiescent galaxies --------------------------------------------------------- In order to relate the structure and morphology of galaxies evolving through the SB/PSB phase to that of both star-forming and passively evolving galaxies, we also selected control samples of 49 and 53 galaxies from the blue-cloud and red-sequence regions of the PC1-PC2 spectral index space, respectively. For that purpose, we used the same bulge-dominated base sample as during the selection of the SB/PSB galaxies. The control samples contain random selections of galaxies from the regions centred on the most populated parts of the blue cloud and the red sequence (marked by the blue/red box in Figure [fig:vivplot]). To eliminate potential biases due to varying galaxy structure with the stellar mass within both the blue cloud and the red sequence, we ensured that the stellar mass distributions of the galaxies within our control samples match that of the oldest (0.5 Gyr < *t**S**B* < 0.6 Gyr) SB/PSB galaxies in our main galaxy sample. Methodology =========== To characterise the structure of the galaxies we applied a range of automated structural measures to the sky-subtracted *r*-band images of the (post-)starburst galaxies. We first defined a binary ‘detection mask’ containing the pixels to be included in the structure measurements (Section [sec:methodmask]). Then, we computed the standard measures of galaxy structure used in the literature (Section [sec:methodoldparams]) as well a new measure of morphology introduced in this work, designed for detecting asymmetric morphological features characteristic of post-mergers (Section [sec:methodashape]). [fig:masksradii] Binary detection mask --------------------- To create the binary detection mask we employed an 8-connected structure detection algorithm, with 8-connected neighbours defined as all pixels touching one of the edges or corners of the central pixel. The algorithm searches for such neighbouring pixels with intensities above a threshold level defined as a function of the background sky level and its standard deviation. The algorithm is specifically designed to identify faint features in the outskirts of galaxies. To enhance the detectability of such features without significantly lowering the detection threshold and potentially dropping to the noise level, we passed the images through a 3 × 3 running average (mean) filter prior to the detection process. This reduced the noise in the images by not only amplifying the signal from the regions with low surface-brightness but also diminishing the pixellation effect in those regions where individual pixels are too faint to be detected. Then, starting from the central pixel as given by the SDSS position, the detection mask was built by accepting all pixels that are 8-connected to previously accepted pixels and have intensities above the chosen threshold. After some experimentation we found the optimal threshold for a robust detection of faint tidal features is 1 standard deviation above the sky background level. For the SDSS *r*-band images of the SB/PSB sample, this corresponds to a mean limiting surface brightness of 24.7 mag/arcsec2. In Figure [fig:masksradii] we show examples of detection masks computed for SB/PSB galaxies with different levels of morphological disturbance, as classified by visual inspection of the SDSS *r*-band images (the exact criteria of the visual classification are explained in detail in Section [sec:resultspsbvisclass]). In the case of ongoing mergers, with double nuclei joined by a bridge (Figure [fig:morphsample1], obj. 7, 9, 13, 17), the detection mask will include both components of the merging system. The computation of the binary detection mask relies on a good estimate of the sky background level in the images. For a self-contained analysis, we estimated our own sky background levels rather than using SDSS values, although we find the two measurements agree well. We extracted pixels lying within a circular annulus with inner and outer radius of 20 and 40 times the full width half maximum (FWHM) of the light profile of each galaxy. This resulted in a large enough area and inner boundary sufficiently far from the central source to ensure a representative measurement of the sky background level. We estimated the sky background level by the mode of the flux histogram, clipped iteratively at 3*σ* until convergence of the mode (typically after a few iterations). We defined the mode as 2.5 × median - 1.5 × mean. From the binary detection masks we estimated the galaxy radius, *R**m**a**x*, as the distance between the centre (brightest pixel within the mask) and the most distant pixel from that centre. We find that this definition of galaxy radius is an improvement for our purposes over the commonly used Petrosian radius, *R**p*, in the case of galaxies with significant morphological disturbance. In the second row of Figure [fig:masksradii] apertures at the new radii (black) are compared to those at twice the Petrosian radii (red). The latter value is typically used to recover the majority of flux for galaxies with regular morphologies. The radii are similar for morphologically undisturbed galaxies but in the presence of significant morphological disturbance, *R**m**a**x* includes the extended faint structures in the outskirts of galaxies, which can be excluded by 2*R**p*. Finally, we recomputed the sky background level within a circular annulus with inner and outer radius of one and two times *R**m**a**x*. This final estimate of the sky background level was subtracted from the images prior to the measurement of the morphological parameters (although it differs from the original one by no more than 3 digital counts per pixel, we note a drop in the standard deviation by as much as 50%). Structural parameters --------------------- Below, we list and briefly describe the measures of galaxy structure used in this study. *Sérsic index, n:* For each galaxy, we fitted the Sérsic function to its surface brightness profile, extracted by considering the mean flux in pixel-thick concentric circular annuli defined by apertures centred at the brightest pixel of the galaxy image. $$I(R) = I\_{e}\mbox{exp}\bigg\{-b\_{n}\bigg[\bigg(\dfrac{R}{R\_{e}}\bigg)^{1/n}-1 \bigg]\bigg\}, \label{eqn:sersic}$$ where *I**e* stands for the intensity at the galaxy’s effective radius, *R**e*, enclosing half of the total galaxy light, and the constant *b**n* is determined by the Sérsic index, *n*. During the fitting procedure, the Sérsic index was restricted to fall between 0.5 and 6.0. *Concentration index, C:* We computed *C* from the growth curve radii *R*20 and *R*80, enclosing 20% and 80% of the galaxy’s total light, respectively, calculated within a circular aperture defined by *R**m**a**x*: $$C = 5\mbox{log}\_{10} \bigg(\frac{R\_{80}}{R\_{20}}\bigg). \label{eqn:c}$$ *Asymmetry, A:* We followed the standard definition of rotational asymmetry : $$A = \dfrac{\Sigma \mid I\_{0}-I\_{\theta} \mid}{2\Sigma \mid I\_{0} \mid } - A\_{bgr} \label{eqn:a}$$ where *I**θ* is the flux from an individual pixel in the galaxy image rotated 180*o* about a chosen centroid (selected to minimise the values of *A*) and *I**θ* is flux from a pixel at the same position in the original, un-rotated image. The subtraction of *A**b**g**r* (the asymmetry in the ‘background’) accounts for the effect of noise on *A*. We measured *A* within a circular region defined by *R**m**a**x* (rather than the commonly used 1.5 × *R**p*). We chose the rotation centroid out of a set of pixels accounting for the brightest 30% of the galaxy’s total light such as to minimise *A*: for every pixel being treated as the rotation centroid, we computed *A* and then selected the minimum value to represent our final measurement. This approach was taken to eliminate potential local asymmetry minima in the morphologically disturbed galaxies ( found no such minima but their study did not specifically focus on galaxies with significant morphological disturbance, where this effect could become apparent). *Outer asymmetry, *A**o*:* To enhance the signal from the low-surface-brightness regions in the outskirts of the galaxies we used a modified version of the asymmetry measure, which we call the ‘outer’ asymmetry, *A**o*. It is computed similarly to the standard asymmetry parameter (Equation [eqn:a]), but with an inner circular aperture containing the brightest 50% of the total flux excluded from the measurement. A comparable definition of ‘outer’ asymmetry was also recently developed by. [fig:masks] *Clumpiness, S:* We measured *S* following the standard definition : $$S = \dfrac{\Sigma \mid I\_{o} - I\_{\sigma} \mid}{\Sigma \mid I\_{o} \mid} \label{eqn:s}$$ where *I**σ* stands for pixel intensity in the smoothed galaxy image (using a Gaussian smoothing filter with width given by the radius containing 20% of the total galaxy flux, *R*20) and *I**o* is that in the original image. We measured *S* within an aperture defined by *R**m**a**x* and centred on the highest intensity pixel. We excluded the central circular aperture at *R*20 to eliminate the contribution from extremely bright centres present in some highly-concentrated galaxies (see e.g. ). *Gini index, G:* We used *G* to measure the degree of inequality in the light distribution within images of the galaxies : $$G = \dfrac{1}{2\overline{X}n(n-1)}\sum\_{i}^{n}(2i-n-1)\mid X\_{i}\mid \label{eqn:g}$$ with pixel intensities, *X**i*, in increasing order, *n* - the total number of pixels assigned to the galaxy (in this work, using the detection mask), and $\overline{X}$ - the mean over all intensities. *G* is independent of the position of the galaxy’s centre and is computed based on the rank-ordered cumulative distribution function of the pixel intensities within well specified boundaries. In this work, these boundaries were determined using the binary detection mask. *Moment of light, *M*20:* Following, we computed the second order moment of the flux-weighted distance of the brightest pixels containing 20% of the total flux (*M*20) from the centre of the galaxy, to measure the spatial extent of the brightest galaxy regions: $$\begin{split} M\_{20} = \mbox{log}\_{10} \bigg( \dfrac{\sum\_{i}M\_{i}}{M\_{tot}} \bigg), \mbox{while} \sum\_{i}f\_{i} < 0.2f\_{tot} \\ M\_{tot} = \sum\_{i}^{n}M\_{i} = \sum\_{i}^{n}f\_{i}[(x\_{i}-x\_{c})^{2}+(y\_{i}-y\_{c})^{2}] \end{split} \label{eqn:m}$$ with the individual pixel coordinates denoted by *x**i*, *y**i*, and the centroid’s coordinates given by *x**c* and *y**c*. We measured *M*20 within boundaries defined by the binary detection masks, with respect to a free centroid parameter, computed by minimising the second order moment of the flux-weighted distance of all pixels, *M**t**o**t*. In this work, the minimisation of *M**t**o**t* was performed using candidate centroid pixels within a region comprising the brightest 30% of the galaxy’s total flux. A new morphology measure: the ‘shape’ asymmetry ----------------------------------------------- We introduce a new measure of asymmetry designed specifically to detect galaxies with low surface-brightness tidal features. We call the new measure the ‘shape’ asymmetry (*A**S*). It is computed using the same mathematical expression as the standard asymmetry parameter (Equation [eqn:a]); however the measurement is performed using the binary detection masks rather than the images of galaxies. This allows for equal weighting of all galaxy parts during the measurement, regardless of their relative brightness. The key difference between *A**S* and the other asymmetry measures investigated in this work (*A*, *A**o*) is that the new parameter is purely a measure of morphological asymmetry and does not contain any information about the asymmetry of the light distribution within the galaxy. It is therefore not influenced by the presence of asymmetric bright galaxy components, like multiple nuclei, and it is highly sensitive to low surface-brightness regions of galaxies. This is illustrated in Figure [fig:masks]. As in the case of *A*, during the computation of *A**S* the images were rotated around the flux-weighted minimum asymmetry centroid, to ensure that the asymmetry is measured with respect to the galaxy core rather than an arbitrary region selected by minimising the shape asymmetry itself. The measurement was performed within an extraction region defined by a circular aperture at *R**m**a**x*. The noise correction term (*A**b**g**r*, Equation [eqn:a]) was omitted in the measurement of *A**S*: in contrast to the standard asymmetry measure, increasing the aperture size does not increase the amount of random noise as all ‘background’ pixels within the detection mask are set to 0. Calibration and testing ======================= In this section we present the results obtained using the test sample for calibration of the structural parameters with our new binary detection masks (Section [sec:resultsteststandard]) and testing of the newly introduced morphological measure (Section [sec:resultstestashape]). [fig:testall] The standard measures --------------------- The early- and late-type galaxies (ETGs and LTGs, respectively) in the test sample provide a good standard for the measures of galaxy structure as they are expected to show values of *n*, *C*, *A*, *S*, *G* and *M*20 within well established ranges. Based on previous studies: * LTGs galaxies tend to have *n* ∼ 1.0, while ETGs show a range of values, typically with $2.0\lesssim n \lesssim 4.0$ ; * *C* ranges between 2.0 and 5.0, with highest values characteristic of ETGs (*C* > 4.0) and decreasing to *C* ∼ 3 for LTGs ; * all normal galaxies tend to have low *A*, with values around 0.1 for ETGs, increasing towards later types to around 0.2 ; * *S* also tends to increase towards later types, ranging from around 0.1 to 0.3 ; * there is a clear separation in the values of *G* found for ETGs and LTGs, with *G* ∼ 0.6 for the former and *G* ∼ 0.4 for the latter ; * *M*20 takes on values lower than -2.0 for ETGs galaxies, and increases to around -1.5 for LTGs; galaxies with double nuclei have *M*20 ∼  − 1.0. Figure [fig:testall] shows the six standard structural parameters computed for all galaxies in the test sample. The values obtained for the early- and late-type galaxies (ETGs and LTGs, respectively) are in general agreement with previous studies. ETGs appear more concentrated ($2.0\lesssim n \lesssim 5.0$ and $3.5\lesssim C \lesssim 4.5$) and have smoother light distribution (*S* ∼ 0.1) than LTGs ($0.5\lesssim n \lesssim 2.0$, $2.5\lesssim C \lesssim 3.5$, *S* ∼ 0.2). Both galaxy types show high symmetry under a rotation about 180*o* ($0.05\lesssim A \lesssim 0.2$). ETGs and LTGs are also well separated from one another in the *G* − *M*20 space, with the former showing more unequal light distributions ($0.4\lesssim G \lesssim 0.6$) and the presence of a compact bright nucleus (M20 <  − 2), than the latter ($0.6\lesssim G \lesssim 0.8$, M20 >  − 2). The separation between early and late types in the *A* − *S* parameter space is not as strong as expected from the results of previous studies. This is likely a consequence of the relatively regular appearance and lack of prominent spiral arms in the LTG subsample (Appendix [app:test]), leading to low values of both *A* and *S*. Within the test sample, the morphologically disturbed galaxies are not easily distinguishable from galaxies with regular morphologies, when using the six standard structural parameters. Exceptions are galaxies with multiple bright nuclei (marked by circles in Figure [fig:testall]). In those cases, care should be taken when interpreting the values of *n* and *C*, measured within the *R**m**a**x*, as they correspond to the entire (still merging) system rather than its individual components. For most parameters, the values span the whole range characteristic of both ETGs and LTGs, with a slight bias toward the former. The exceptions are *G* and *A*. The values of *G* found for the morphologically disturbed galaxies are more similar to those found for highly-concentrated ETGs. We note that this tendency for such high values of *G* could be a result of the sample selection as all morphologically disturbed galaxies were selected from a parent sample of galaxies with central SB/PSB signatures. This is supported by the fact that the undisturbed SB/PSB galaxies also show high values of *G*. Additionally, in morphologically disturbed galaxies the values of *G* could potentially be enhanced by the presence of the faint tidal features which tend to add to the inequality in the light distributions in those galaxies (see ).However,the fact that equally high values of *G* were found in the morphologically undisturbed SB/PSB suggests that this contribution of faint tidal features to the measured values of *G* may not be substantial. The difference between morphologically disturbed galaxies and galaxies with regular morphologies is most noticeable in the values of *A*. However, only 35% (7/13) of the galaxies with the highest morphological disturbance in the test sample are well separated from the galaxies with regular appearance. Visual inspection of these galaxies shows that those with highest A generally have bright double nuclei. The values of *A* measured for galaxies with low morphological disturbance are comparable with those obtained for galaxies with regular morphologies. We conclude that, while the standard measures investigated in this work are suitable for the structural analysis of normal galaxies as well as for identifying galaxies with multiple nuclei, they are not as well-suited for identifying galaxies with faint tidal features observed in post-mergers. In the following section, we present results for our new modified asymmetry measure, designed specifically for identifying galaxies with such features. [fig:Amorph] The ‘shape’ asymmetry --------------------- In Figure [fig:Amorph] we compare the values of our new measure of morphology, *A**S*, with the two measures of structural asymmetry, *A* and *A**o*. The dotted lines represent a threshold value at 0.2 used to separate between galaxies with regular morphologies and those with high morphological disturbance. The threshold was chosen empirically based on the fact that all galaxies with regular morphologies in the test sample have *A**S* < 0.2 while *A**S* ≥ 0.2 is found only for galaxies with some level of morphological disturbance (detected by visual inspection). The situation is similar for *A* and *A**o*, (with an exception of one elliptical galaxy having a slightly higher *A**o*). We compare the performance of the three measures of asymmetry by considering this threshold. Figure [fig:Amorph] shows that the shape asymmetry performs significantly better at separating galaxies with highest degree of morphological disturbance from those with regular appearance, than the other two asymmetry measures. Considering the above threshold value, *A**S* ≥ 0.2 identifies 95% of the galaxies with most prominent tidal features (dark blue symbols), compared with 35% and 45% picked out by *A* and *A**o*, respectively. Visual inspection of the single galaxy omitted by *A**S* revealed that the features present in the galaxy’s structure form an azimuthally symmetric pattern, explaining the corresponding low value of *A**S*. We note that the new measure also recognises 60% of galaxies with lower level of morphological disturbance (violet symbols), while both the standard asymmetry parameter and the outer asymmetry fail to pick out any such objects. The ability of *A**S* to detect galaxies with asymmetric tidal features comes from the independence of the measure of the flux distribution within the galaxies. In contrast, both *A* and *A**o* are flux-weighted measurements and therefore can become dominated by the brightest regions within galaxies, especially in the case of the former, where significant contribution to the measurement can come from multiple bright nuclei found in galaxies in pre-coalescence stages of a merger. Visual inspection identifies 4/7 (57%) such galaxies in the test sample and all of them have *A* ≥ 0.2 (Figure [fig:Amorph]). In the case of *A**o*, the contribution from multiple bright nuclei will not be as significant as in the case of *A*, as the inner aperture containing the brightest 50% of the total light is omitted. The parameter should therefore be more sensitive to the low-surface brightness regions than *A*; however, as we show in Figure [fig:Amorph], it fails to identify more than half of the galaxies with highly disrupted morphologies. We also investigate outer asymmetries computed by cutting out inner apertures containing as much as 80% and 90% of the total flux, however, we find no significant improvement over *A**o* with the inner 50% cut out. This may stem from the differences in the light profiles of galaxies: while for some objects cutting out the inner circular aperture containing 50% of the total light may be optimal for separating its asymmetric outer features, it may prove insufficient or excessive for other galaxies. It is therefore not obvious how to define the inner circular aperture that would yield the most robust result. We also investigate an alternative approach to outer asymmetry by considering a central region with its geometry defined by the distribution of pixels accounting for a given fraction of the total galaxy light. However, we find that the irregularity of the inner region defined in such a way artificially increases the value of *A**o*. We conclude that the shape asymmetry is a much better indicator of the presence of asymmetric tidal features in galaxies than any of the standard structural parameters considered in this work, including the standard asymmetry parameter, *A*, and its modifications designed to measure the asymmetry of galaxy outskirts (*A**o*). We note that the shape asymmetry is purely a measure of galaxy morphology rather than structure. In order to gain information about the asymmetry of the light distribution within galaxies, the computation of the standard asymmetry parameter is a more suitable approach. ### Significance of rotation angles As shown in the study by, while normal galaxies tend to show strong 180*o*-symmetry, their projected shapes can introduce significant asymmetry under a 90*o*-rotation (*A*90), which can serve as a good approximation for the directly measured minor-to-major axes ratio for statistical galaxy samples. In particular, galaxies seen ‘face-on’ show *A*90 ∼ 0, while those observed at higher inclination angles tend to reveal higher values, with *A*90 reaching around 0.8 for the edge-on systems. We applied the concept of asymmetry under a 90*o*-rotation to the binary detection masks of galaxies within the test sample. As shown in Figure [fig:Amorph], spiral and elliptical galaxies are separated in *A**S*(90) according to their axial ratio: galaxies with *b*/*a* < 0.5 tend to show higher asymmetries under a 90*o* rotation. We find that *A**S*(90) is roughly correlated with *A**S*(180) for the morphologically disturbed galaxies, however the large scatter is caused by the variety of shapes of the tidal features. Within the test sample, galaxies with elongated tail-like features tend to have both higher *A**S*(180) and *A**S*(90), while those with shell-like features show lower *A**S*(90). Further tests of this aspect of the shape asymmetry will involve samples of galaxies with representative tidal features of different types (e.g. ), galaxies in earlier stages of a merger (e.g. ), as well as simulated galaxy mergers ). [fig:testdetThresh] ### Image quality effects As the binary detection masks are created by assembling connected pixels above a specific threshold, the measurement of *A**S* will depend on the choice of that threshold. Therefore, the limiting surface brightness of the images used to create the detection masks will be the measurement’s main limitation. For the SDSS *r*-band images, the optimal detection threshold was found to be 1*σ* above the sky background level, corresponding to *μ**l**i**m*= 24.7 mag/arcsec2. Figure [fig:testdetThresh] shows example results of tests of the dependence of *A**S* on the limiting surface brightness of the galaxy images by increasing the detection threshold. Upon inspection of the resulting binary masks and the corresponding values of the shape asymmetry we find that, unsurprisingly, the behaviour of *A**S* with limiting magnitude depends on the geometry of the features in the outskirts of the galaxies, as well as on their brightness relative to the central regions. For objects showing significant morphological disturbance, *A**S* changes substantially and in general follows a decreasing trend with decreasing *μ**l**i**m* (i.e. with decreasing image depth). Conversely, in the case of galaxies with regular morphologies, this effect is much less pronounced and the measurement of *A**S* shows stability against varying *μ**l**i**m*. To summarise, *A**S* provides a robust way of quantifying galaxy morphology and it is the most successful automated method to date at distinguishing between galaxies with asymmetric tidal features and those with regular morphologies. To accurately measure the shape asymmetry of galaxies, images of sufficient depth are required. The SDSS data used in this study are sufficiently deep in surface brightness to reveal faint structures in the outskirts of galaxies; however, using images of higher limiting magnitude could disclose further even fainter features in their morphology, consequently influencing their measured values of *A**S*. Therefore, great care must be taken when comparing measurements of *A**S* between different surveys, and between galaxies with different redshifts. Results ======= In this section we present the results of the structural and morphological analysis of the (post-)starburst galaxies. First, we summarise the results of a visual inspection of the galaxy images (Section [sec:resultspsbvisclass]), and then we present the main findings of the automated approach (Section [sec:resultspsbparams]). Qualitative description ----------------------- We examined the *r*-band images of the 335 SB/PSB galaxies for features suggesting a past merger event, with an aim to determine the dependance of the presence of such features on the starburst age. The visual inspection was carried out independently by five reviewers and involved examining images of the SB/PSB galaxies as well as an additional sample of 300 continuously star-forming galaxies. The star-forming galaxies were selected from the blue cloud, using the same spectral indices as the SB/PSB galaxies and within the same redshift and stellar mass range (in this case, blue-cloud galaxies were more appropriate than red-sequence, because, due to their star formation activity, they are more likely to show disturbance in their structure than galaxies in which the star formation has already quenched). All images were randomised prior to inspection. Each galaxy was then classified, according to a pre-agreed scheme, as an object with: * regular morphology, no signs of disturbance pointing to any kind of interaction (class 0), * low/moderate level of morphological disturbance suggesting a possible interaction (class 1), * high morphological disturbance (obvious post-merger) with elongated tidal features, e.g. tails (class 2), * high morphological disturbance (obvious post-merger) with shell-like features (class 3). Examples of the different classes of galaxies are presented in the top row of Figure [fig:masksradii], with the numeric classes assigned by the 5 classifiers. The visual inspection has shown that the galaxies in our SB/PSB sample have a variety of morphologies. Many, in particular those of young starburst ages, show signs of a past merger, including disturbed morphologies and presence of tidal features. Ongoing mergers with double bright nublei constitute only  ∼ 3% of the whole sample. [fig:visual] In Figure [fig:visual] we define *ζ* as the fraction of SB/PSB galaxies showing signatures of a past (or ongoing) merger (disturbance in morphology, presence of tidal features such as tails, arms or shells - i.e. classes 1, 2, 3), calculated independently in different age bins with a width of 0.1 Gyr. As visual classification is subjective, the individual classifications were not always in complete agreement. For a compromise of reliability and detectability, we discuss the ‘combined classification’, in which case *ζ* includes only those galaxies for which at least three reviewers agreed on the presence of morphological disturbance. As shown in Figure [fig:visual], about 45% of the youngest SB/PSB galaxies (t*S**B*  <  0.1 Gyr) in the sample show features characteristic of a post-merger. Furthermore, there is a decreasing trend in *ζ* with starburst age. For the older PSB galaxies in the sample (t*S**B*  >  0.3 Gyr), we find *ζ* ∼ 30%, which is consistent with the fraction of morphological disturbance found in our control sample of ordinary blue-sequence galaxies. Quantitative measures of structure and morphology ------------------------------------------------- In Figures [fig:nCGM20age] and [fig:SAAoAsage], we present all measures of galaxy structure and morphology considered in this work, computed for the SB/PSB sample as a function of starburst age (*t**S**B*), as well as for the control samples of star-forming blue-cloud and quiescent red-sequence galaxies (plotted with arbitrary ages for clarity). For each parameter, we show individual values per galaxy and median values calculated in 0.1-Gyr wide age-bins (left column). We also quote the Spearman rank correlation coefficient (*ρ*) and the corresponding *p*-value (two-sided significance of the coefficient’s deviation from zero). All values are summarised in Table [tab:table1], where we comment on the general characteristics of our sample, inferred from the individual parameters. Figures [fig:nCGM20age] and [fig:SAAoAsage] also show distributions of the parameter values: in the middle panel, for the youngest (*t**S**B* ≤ 0.1Gyr) and oldest (0.5 Gyr ≤ *t**S**B* ≤ 0.6 Gyr) (post-)starburst galaxies in the sample, and in the right panel, for the oldest (post-)starburst galaxies and the control sample of star-forming and quiescent galaxies. For quantitative comparison of the distributions, we performed a Kolmogorov-Smirnov (K-S) test. We quote the obtained values of the K-S statistics (*D*) and the corresponding probability of the null hypothesis (*p*) in the respective panels of Figure [fig:nCGM20age] and [fig:SAAoAsage]. [tab:table1] ccl Typical range & Trend & Comments & with *t**S**B* & (mean characteristics) & (Spearman) & & & - light profiles of $1.0 \lesssim n \lesssim 3.0$ & *ρ* ∼ 0.06 & early disks; & *p* ∼ 0.23 & - no significant trend & & with starburst age; & & - moderate $2.9\lesssim C \lesssim4.2$ & *ρ* ∼ 0.04 & concentration; & *p* ∼ 0.41 & - no significant trend & & with starburst age; & & - moderately extended $-2.4\lesssim M\_{20} \lesssim-1.4$ & *ρ* ∼  − 0.05 & single nucleus; & *p* ∼ 0.32 & - no significant trend & & with starburst age; & & - highly unequal light $0.55\lesssim G \lesssim0.8$ & *ρ* ∼  − 0.29 & distribution; & *p* < 1.0*e*− 4 & - moderate decline & & with starburst age; & & - moderately ‘clumpy’ $0.08\lesssim S \lesssim0.2$ & *ρ* ∼ 0.20 & light distribution; & *p* ∼ 1.0*e*− 4 & - moderate increase & & with starburst age; $0.05\lesssim A \lesssim0.25$ & *ρ* ∼  − 0.18 & - moderately declining & *p* ∼ 1.0*e*− 3 & asymmetry && with starburst age; $0.05\lesssim A\_{o} \lesssim0.25$ & *ρ* ∼  − 0.14 & - moderately declining & *p* ∼ 0.01 & asymmetry && with starburst age; $0.05\lesssim A\_{S} \lesssim0.35$ & *ρ* ∼  − 0.20 & - moderately declining & *p* ∼ 1.0*e*− 4 & asymmetry && with starburst age; [tab:table2] cccc Asymmetry & Outliers with & Outliers with & Trend in measure & *t**S**B* < 0.3 & *t**S**B* > 0.3 & the fraction & & & of outliers *A* & 16% & 8% & *ρ* ∼  − 0.71 &&& *p* ∼ 0.11 *A**o* & 27% & 14% &*ρ* ∼  − 0.77 &&& *p* ∼ 0.07 *A**S* & 43% & 21% & *ρ* ∼  − 0.94 &&& *p* ∼ 10− 3 [fig:nCGM20age] [fig:SAAoAsage] Considering the measures pertaining to the central concentration of light and the spatial extent of the galaxy nucleus, we find that most galaxies in the SB/PSB sample show characteristics of early-type disks: $1.0\lesssim n\lesssim 3.0$, $3.0\lesssim C\lesssim 4.0$, *M*20 peaking around -2.0 (see e.g. ). As suggested by the low values of the Spearman rank correlation coefficient, there is no significant trend in either *n*, *C* or *M*20 with starburst age (see left panel of Figure [fig:nCGM20age]). This points to little evolution of the central light concentration over the first 0.6 Gyr after the starburst. Direct comparison of the young and old (post-)starburst galaxies shows little difference in the distributions of all three parameters (middle panel, Figure [fig:nCGM20age]) and the K-S test suggests that they are likely to have been drawn from the same distribution. Comparison of the old subset with the control sample of red-sequence galaxies reveals that, after 0.6 Gyr since the last starburst, the (post-)starburst galaxies are not as highly concentrated as the galaxies populating the red sequence; rather, their structure (*n*, *C*, *M*20) bears closer resemblance to star-forming blue-cloud galaxies (right panel, Figure [fig:nCGM20age]). The values of the Gini index found for the youngest galaxies in the (post-)starburst sample are as high as those obtained for the red-sequence galaxies (right panel, Figure [fig:nCGM20age]). Contrary to *n*, *C* and *M*20, *G* is a measure independent of the position of the galaxy centre or assumptions of azimuthally symmetric apertures, and it is designed to pick out changes in the light distribution on a pixel scale. It should therefore be more sensitive to subtle variations in the light distribution of galaxies following a starburst than the other measures. Consequently, high *G* does not necessarily imply high central concentration. *G* shows the most pronounced correlation with the starburst age out of all measures considered in this work (*ρ* =  − 0.3, *p* < 1.0− 4). The decreasing tendency suggests that SB/PSB galaxies tend to have more uniform light distribution as they age, which could be a consequence of the decaying starburst as well as, to some extent, diminishing prominence of the faint tidal features. We do not attribute this effect to contribution from multiple nuclei as their presence is indicated only in a few galaxies by *M*20 (the dearth of objects with *M*20 ≥  − 1.0). This was confirmed during the visual inspection of the galaxy images (only  ∼ 3% classified as ongoing mergers). The fading central starburst could also be the reason for the increasing tendency in clumpiness as a function of starburst age (left panel, Figure [fig:SAAoAsage]). As the amount of light from the star-bursting region decreases, the less luminous off-centre star-forming regions are being uncovered, increasing the resulting values of *S* (although the parameter is designed to exclude the bright central regions from the measurement, the size of the excluded central aperture will not always coincide with that of the star-bursting region). All three measures of rotational asymmetry, *A*, *A**o* and *A**S*, show a decreasing tendency with the starburst age (left column, Figure [fig:SAAoAsage]) but the trend is most pronounced in the values of the shape asymmetry (*ρ* =  − 0.2, *p* ∼ 10− 4). We also find an excess of young galaxies with high *A**S* when comparing the distributions of *A**S* for the young and old subsets (middle panel, Figure [fig:SAAoAsage]) of
arxiv_0000132
An Introduction to Langer’s Theory ================================== This note provides a pedagogical introduction to Langer’s theory for activated rate processes in multiple dimensions at the high friction limit, with an emphasis on the connection between the theory and the property of the backward committor/splitting probability near the saddle point. The intended audience is assumed to have some familiarity with linear algebra and statistical mechanics while knowledge of stochastic processes is not strictly necessary. **Keywords:** Kramers’ theory, KLBS theory This note is intended as an introduction to Langer’s theory, the multidimensional extension of Kramers’ theory for activated rate processes in the high friction limit. The note is organized as follows: Section [reviewsection] briefly reviews some elements of the theory of stochastic process, which prepares for the introduction of Kramers’ theory in Section [Kramerstheory], where the flux-over-population method is demonstrated in the relatively simple context of a one-dimensional double-well potential. The majority of this note is devoted to the discussion of Langer’s theory in Section [Langertheory]. The derivation begins by relating the steady-state probability density in the flux-over-population method to the backward committor, followed by a detailed analysis of the behavior of the committor near the saddle point. This allows us to derive an expression for the probability flux vector field, culminating in the presentation of the multidimensional rate constant in equation. Lastly, Section [BSsimplification] discusses some more recent development by Berezhkovskii and Szabo that allows one to project Langer’s result back to one dimension. Together, the results presented in this note are sometimes referred to as the KLBS theory. Given the venerable age of Langer’s theory and the existence of several review articles and textbook partly devoted to this subject, one might wonder why this note is necessary. This introduction certainly draws heavily on the aforementioned works, but differs in two regards. First, almost all current accounts of Langer’s theory are quite concise. While this feature is excellent for experts who are in need of a quick refresher, it poses significant challenges for beginners, because the derivation of the theory can be quite tedious and involve some mathematical tricks that are not standard knowledge for a typical reader. One goal of this note, therefore, is to derive Langer’s theory while erring on the side of presenting too many, rather than too few, algebraic details. Second, an interesting feature of Langer’s theory is that the expression for the rate constant critically depends on several basic geometric features of the committor/splitting probability near the saddle point. This connection was not explicitly stated in Langer’s original paper, and is usually not fully developed in recent accounts of the theory. This is, again, unfortunate, since this connection provides a simple setting for gaining an intuitive understanding of the behavior of the committor near an idealized transition state. Given the prominent role the committor plays in modern rate theories, such as the transition path theory, such intuitions can be valuable in understanding the more recent development. Therefore, as described above, a second goal of this note is to make explicit the connection of Langer’s theory to the committor function and provide a detailed analysis thereof. The primary intended audience for this note are students and researchers in chemistry and biophysics interested in condensed-phase simulations of macromolecules. The reader should be familiar with linear algebra and have some exposure to classical statistical mechanics and some rate theory (e.g., Arrhenius equation); knowledge of stochastic processes is not strictly necessary although results from the theory will be invoked in some derivations. A brief review of stochastic process ==================================== In this section we briefly review some concepts from the theory of stochastic process relevant for our discussion. Readers who have never studied this subject should either consult standard references on stochastic processes and statistical mechanics, or simply take the results presented below as given and fill in their missing background knowledge later. The presentation in this section partially follows that in Chapter 15 of. Overdamped Langevin equation ---------------------------- Consider a (closed) system that is in contact with a large heat bath. In a typical (classical) simulation, the system consists of the macromolecule(s) of interest as well as some water molecules and counterions. The time evolution of the full system is determined by the Hamiltonian equations, and the equilibrium phase space distribution function is microcanonical. To avoid explicit representation of the bath, we “abstract away” the bath degrees of freedom, and the remaining system dynamics can be described via the generalized Langevin equation: $$\begin{aligned} r\_i'(t) &= \frac{p\_i(t)}{m\_i} \\ p\_i'(t) &= -\frac{\partial W(r)}{\partial r\_i} - \int\_0^t \sum\_j \sqrt{m\_im\_j}\gamma\_{ij}(t-\tau)r\_j'(\tau)\,d\tau + \xi\_i(t) \label{GLE\_v'}\end{aligned}$$ Here, *r**i*, *p**i*, and *m**i* are the position, momentum, and (renormalized) mass of the *i*th degree of freedom in the remaining system, and *W*(*r*) is the potential of mean force obtained from the full potential energy function by averaging over the bath degrees of freedom. The generalized Langevin equation can be derived via either the harmonic bath model or, more rigorously, via the Mori–Zwanzig theory. The generalized Langevin equations is reminiscent of Newton’s equation of motion, but the price we pay for doing away with the bath degrees of freedom is the difficulty of dealing with a set of stochastic integro-differential equations. Specifically, compared to Newton’s equations, there are two additional terms in that provide a coarse-grained model of the effect of the bath on the system. Here, we have “decomposed” the effect of the bath into two parts. First, the fluctuation term *ξ**i*(*t*) represents a random force (i.e., noise) acting on the system. Although the motion of the bath is fully deterministic, by ignoring the molecular details we model *ξ**i*(*t*) as a random, or stochastic, process. For a system solvated by a dense bath, such as liquid water, that affects the system dynamics through soft collisions (i.e., weak noise), a common model for *ξ**i*(*t*) is a Gaussian random process with zero mean. Second, the dissipation term ∫0*t**γ**i**j*(*t* − *τ*)*r**j*ʹ(*τ*) *d**τ* is a convolution integral that acts as a friction force slowing down the system. This integral is called a memory integral; the term *γ**i**j*(*t*) is known as the memory kernel, or dynamic friction kernel, and it encodes the “memory” of the motion of the system by the bath. Physically, the memory integral represents the fact that the bath requires a finite amount of time to respond to fluctuations in the system and this lag affects the motion of the system. At equilibrium, the fluctuation term and the dissipation term are related by the second fluctuation-dissipation theorem, $$\mathbb E[\xi\_i(t)\xi\_j(t')] = k\_BT\sqrt{m\_im\_j}\gamma\_{ij}(|t-t'|) \label{GLE\_colored\_noise}$$ Now, we make some further simplifying assumptions to make more analytically tractable. First, we assume that the bath responds instantaneously to the motion of the system; i.e., we assume that the memory integral decays instantaneously and the bath has no memory of the system history. This is a good model when the (renormalized) mass of the system is much larger than that of the bath. In this case, the memory kernel becomes *γ**i**j*(*t*) = 2*γ**i**j**δ*(*t*), where we have defined *γ**i**j* = ∫0∞*γ**i**j*(*t*) *d**t* as the static friction kernel, or simply the friction coefficient, and *δ*(*t*) is a Dirac delta function. The fluctuation-dissipation theorem now reads $$\mathbb E[\xi\_i(t)\xi\_j(t')] = 2k\_BT \sqrt{m\_im\_j}\gamma\_{ij}\delta(|t-t'|) \label{GLE\_white\_noise}$$ Stochastic processes with an autocorrelation function of this form are called white noises, and the terminology reflects the fact that the power spectral density of the process is a constant over all frequencies. With these assumptions, the generalized Langevin equation becomes $$\begin{aligned} r\_i'(t) &= \frac{p\_i(t)}{m\_i} \\ p\_i'(t) &= -\frac{\partial W(r)}{\partial r\_i} - \sum\_j\sqrt{m\_im\_j}\gamma\_{ij}r\_j'(t) + \xi\_i(t) \label{LE\_v'}\end{aligned}$$ which is known as the Langevin equation. Compared to the generalized Langevin equation, the Langevin equation describes stochastic processes that are Markovian (i.e., memoryless). Second, for dense solvent, such as water, the high friction and frequent collisions with the system leads to, on a short timescale (i.e., *t* < *γ*− 1), rapid fluctuations in the acceleration *p**i*ʹ(*t*). However, on a longer timescale the change in the time-averaged velocity will be small as the effect of collisions cancel out each other. This allows us to set the acceleration in to zero, which leads to $$\sum\_j\sqrt{m\_im\_j}\gamma\_{ij}r\_j'(t) = -\frac{\partial W(r)}{\partial r\_i} + \xi\_i(t) \label{overdamped\_LE}$$ The motion described by goes by many names, such as diffusion, Brownian motion, or overdamped Langevin dynamics; the equation itself is known as the overdamped Langevin equation. In some applications, the cross-correlation terms in the memory kernel are ignored, in which case takes on a simpler form $$m\_i\gamma\_{ii}r\_i'(t) = -\frac{\partial W(r)}{\partial r\_i} + \xi\_i(t)$$ Smoluchowski equation --------------------- For the purpose of describing the kinetics of activated rate processes at equilibrium, working directly with is inconvenient, since trajectories consistent with are individual realizations of the system dynamics, while we are more interested in the statistics of an ensemble of such realizations. In other words, we are more interested in *p*(*r*, *t*), the probability density of the system (strictly speaking, *p*(*r*, *t*) is a conditional probability density function more appropriated denoted as *p*(*r*, *t*∣*r*0, *t*0)). The time evolution of *p*(*r*, *t*) for processes governed by is given by $$\frac{\partial p(r, t)}{\partial t}=-\nabla\cdot J(r, t) \quad\text{with}\quad J(r, t) = -D\pi(r)\nabla\frac{p(r, t)}{\pi(r)} \label{Langer\_FP}$$ where *π*(*r*) is the Boltzmann distribution in the configuration space and the stationary solution to, *J*(*r*, *t*) is the (probability) flux, and *D* is the (position-independent) diffusion matrix, which we assume to be symmetric positive definite. Elements of the diffusion matrix are related to the static friction kernel defined in via the relation $$D\_{ij}=\frac{k\_BT}{\sqrt{m\_im\_j}\gamma\_{ij}}$$ Equations such as are known as the Smoluchowski equation, a special case of a class of partial differential equations known as the Fokker-Planck equations, or Kolmogorov’s forward equations. It is also common to refer to the first part of as the continuity equation for the probability density, and the second part of as a “constitutive” equation. For the following discussion it is also convenient to rewrite in two forms. First, we can rewrite component-wise: ∂*t**p*(*r*, *t*) = ∑*i**j*∂*i**D**i**j*[*β*∂*j**W*(*r*) + ∂*j*]*p*(*r*, *t*) where we have used a short-hand notation ∂*t* = ∂/∂*t* and ∂*i* = ∂/∂*r**i*. For readers not familiar with the component-wise notation, they should convince themselves that, e.g., the *i*th component of a matrix-vector product *A**x* can be expressed as ∑*j**A**i**j**x**j*. Second, we can rewrite as an equation involving the Fokker-Planck operator *L*†, $$\frac{\partial p(r, t)}{\partial t}=L^\dagger p(r, t) \quad\text{where}\quad L^\dagger=\nabla\cdot D\pi(r)\nabla\pi^{-1}(r) \label{Langer\_FP\_operator}$$ Here, the symbol  †  indicates that the Fokker-Planck operator *L*† is the adjoint of another operator *L* known as the generator, *L* = *π*− 1(*r*)∇ ⋅ *D**π*(*r*)∇ The concept of an adjoint operator is a generalization of the Hermitian transpose of a matrix. As shows, the Fokker-Planck operator dictates the time evolution of the probability density *p*(*r*, *t*). There is a similar interpretation for the generator: the generator dictates the time evolution of (conditional) ensemble averages, or observables, of the form $u(r, t)=\mathbb E[f(r(t))|r(0)=r\_0]$, for some suitable scalar function *f* defined on the configuration space; i.e., $$\frac{\partial u(r, t)}{\partial t}=Lu(r, t)$$ Kramers’ theory at the high friction limit ========================================== In this section we consider Kramers’ theory in the context of a one-dimensional Brownian particle whose motion is described by the one-dimensional version of the overdamped Langevin equation, $$m\gamma r'(t)=-\frac{dW}{dr}+\xi(t) \quad\text{where}\quad \mathbb E[\xi(t)]=0, \mathbb E[\xi(t)\xi(t')] = 2k\_BT m\gamma\delta(|t-t'|) \label{1D\_overdamp\_langevin}$$ From, it follows that the probability density *p*(*r*, *t*) satisfies the following Fokker-Planck equation $$\frac{\partial p(r, t)}{\partial t}=-\frac{\partial J(r, t)}{\partial r} \quad\text{with}\quad J(r, t)=-D\left[\beta\frac{dW}{dr}+\frac{\partial}{\partial r}\right]p(r, t) \label{Kramers\_FP}$$ where *D* = *k**B**T*/*m**γ* is the position-independent diffusion constant. The stationary solution to is the Boltzmann distribution *π*(*r*) = *e*− *β**W*(*r*)/*Z*, where *Z* is a configurational partition function. Here, we take *W*(*r*) to be an (asymmetric) double-well potential, where the minimum of the reactant well *A* is located at *r**A*, the minimum of the product well *B* is located at *r**B*, and the transition state *r*‡ is identified as the position of the peak of *W*(*r*) between the two wells; without loss of generality, we assume that *r**A* < *r*‡ < *r**B*. An example of such a potential is shown in Figure [fig:1dpmf]. Further, we assume that the height of the barrier *W*(*r*‡) is much larger than *k**B**T*, so that there is a separation of timescales between barrier crossing and within-well equilibration. A related assumption here is that barrier crossing is much slower than the correlation time of the dynamic friction kernel so that the static friction kernel approximation can be justified. Together, these assumptions amount to the situation where a single slow degree of freedom in the system *r* is sufficient for describing the reaction, while the rest of the degrees of freedom relax much faster than the timescale of barrier crossing along *r*; we should thus interpret “bath” as also including other degrees of freedom of the macromolecule that are not explicitly treated, and interpret *r* not necessarily as a position variable but more generally as a reaction coordinate (i.e., some function of the position variables *r**i*). We mention here in passing that Kramers also derived expressions for the rate constant in the weak and moderate-to-high friction regimes. We will not discuss these results since they are not as relevant for condensed-phase simulations. ![An example one-dimensional double-well potential of mean force. The potential is given by W(r)=-\ln\left(e^{-2(x+1)^2}+e^{-2(x-1)^2+1}\right).](1D_example.png "fig:") [fig:1dpmf] Kramers’ theory via the flux-over-population method --------------------------------------------------- Kramers derived an expression for the rate constant of the *A* → *B* reaction, *k**A**B*, using what is now known as the flux-over-population method. Consider a hypothetical procedure in which an ensemble of *n* particles are prepared in the reactant well *A* (usually we set *n* = 1), where they rapidly reach thermal equilibrium on a timescale much faster than that of escaping to the product well *B*. Whenever a particle escapes from the reactant well and reaches the product well, it is immediately removed and a new one is added to the reactant well, such that the reactant population is always maintained at *n*. As we show in the analysis below, the exact positions at which a particle is considered escaped and at which a new one is inserted are not very important. As the system reaches a non-equilibrium steady-state, there is a non-zero probability current *J*, or flux, into the product well, which is the reaction rate (i.e., the number of transitions from *A* to *B* per unit time). We normalize this reaction rate by the reactant population, which gives the rate constant, *k**A**B* = *J*/*n* hence the name “flux-over-population”. Before flushing out the algebraic details, let us pause for a moment and consider why the flux-over-population procedure is needed. In other words, why does the calculation of an equilibrium rate constant invoke the flux of a seemingly contrived non-equilibrium process? A simple answer is that the (net) flux between any two states (micro- or macro-) in equilibrium is zero, due to detailed balance, and thus gives no information about kinetics. To see why the flux-over-population procedure circumvent this problem, let us consider the behavior of particles at some point near the transition state and see how they contribute to the flux across that point at equilibrium. At any given moment, such a particle can be categorized into one of four groups, depending on its past and future behavior: 1. The particle came from *A* and will move to *B* before going back to *A*. 2. The particle came from *A* and will go back to *A* before moving to *B*. 3. The particle came from *B* and will move to *A* before going back to *B*. 4. The particle came from *B* and will go back to *B* before moving to *A*. Behavior described in group 2 and 4 are known as barrier recrossing and does not contribute to the flux, since each pair of crossing-recrossing cancel out each other (this is to be contrasted with the procedure in transition-state theory, where all (re)crossing events are counted towards the total flux across some dividing surface ). Because the net equilibrium flux is zero, this implies that the flux contributed by group 1 and 3 cancels as well. Note that, for ergodic dynamics, this categorization is exhaustive and it is not possible for a particle to cross the barrier once and stay in one of the two wells forever. By removing particles reaching the product well, the flux-over-population procedure removes the flux contribution from group 3 (and group 4), while the only remaining nonzero flux contribution from group 1 stays close to its equilibrium value at the steady state due to the separation of timescales. As such, the nonequilibrium steady-state flux *J* in this procedure gives the true *A* to *B* reaction rate. At steady state, the population distribution *p*ss(*r*) is time-independent, and thus by the flux *J* is now both time- and position-independent. The two quantities are related via $$J = -D\left[\beta\frac{dW}{dr}+\frac{\partial}{\partial r}\right]p\_\text{ss}(r)= -De^{-\beta W(r)}\frac{d}{dr}\left[e^{\beta W(r)}p\_\text{ss}(r)\right]$$ Now, divide both sides by *D**e*− *β**W*(*r*) and integrate both sides over the interval [*r**A*, *r**B*], which gives $$J\int\_{r\_A}^{r\_B}D^{-1}e^{\beta W(r)}\,dr = - e^{\beta W(r)}p\_\text{ss}(r)\big|\_{r=r\_A}^{r\_B} \label{Kramers\_flux}$$ We now consider the behavior of *p*ss(*r*) near the two boundaries: *r**A* and *r**B*. At *r* = *r**B*, we require *p*ss(*r**B*) = 0 to satisfy the absorbing boundary condition caused by particle removal. At positions in the reactant well away from the absorbing boundary condition at *r**B*, we assume that the system is close to equilibrium, i.e., *p*ss(*r*) ≈ *π*(*r*), because the particle insertion procedure maintains the reactant population *n* in the reactant well, and inserted particles thermalize much faster than the reaction timescale. This observation has two consequences. First, it implies that *p*ss(*r**A*) = *π*(*r**A*). Second, it implies that the population in the reactant well is *n* = ∫− ∞*r*‡*p*ss(*r*) *d**r* ≈ ∫− ∞*r*‡*π*(*r*) *d**r* Now, apply the boundary conditions at *p*ss(*r**A*) and *p*ss(*r**B*) to, then apply the flux-over-population formula, and we arrive at $$\begin{aligned} k\_{AB} &= \frac{e^{\beta W(r\_A)}p\_\text{ss}(r\_A) - e^{\beta W(r\_B)}p\_\text{ss}(r\_B)}{\int\_{r\_A}^{r\_B}D^{-1}e^{\beta W(r)}\,dr \int\_{-\infty}^{r^\ddagger}p\_\text{ss}(r)\,dr} \\ &= \frac{e^{\beta W(r\_A)}\pi(r\_A)}{\int\_{r\_A}^{r\_B}D^{-1}e^{\beta W(r)}\,dr \int\_{-\infty}^{r^\ddagger}\pi(r)\,dr} \\ &= \frac{1}{\int\_{r\_A}^{r\_B}D^{-1}e^{\beta W(r)}\,dr \int\_{-\infty}^{r^\ddagger}e^{-\beta W(r)}\,dr} \label{Kramers\_rate\_1}\end{aligned}$$ If we assume that *W*(*r*) is harmonic around *r*‡ and *r**A*, we write, for *W*(*r*) near *r*‡ and *r**A*, $$W(r)\approx\Delta W^\ddagger -\frac 1 2 \kappa^\ddagger\left(r-r^\ddagger\right)^2 \quad\text{and}\quad W(r)\approx \frac 1 2 \kappa\_A\left(r-r\_A\right)^2 \label{Kramers\_harmonic}$$ where we have defined Δ*W*‡ = *W*(*r*‡) − *W*(*r**A*) and set *W*(*r**A*) = 0 (resetting the zero of the potential of mean force has the effect of changing the partition function, which has no effect on our derivation). Here, *κ**A* and *κ*‡ are the force constants of the harmonic potentials; equivalently, *κ**A* and *κ*‡ can also be interpreted as the curvatures at *r**A* and *r*‡, respectively. Applying to gives the following approximate results $$\begin{aligned} \int\_{r\_A}^{r\_B}D^{-1}e^{\beta W(r)}\,dr &\approx \int\_{-\infty}^\infty D^{-1}e^{\beta\left(\Delta W^\ddagger - \kappa^\ddagger r^2/2\right)}\,dr = D^{-1}\sqrt{\frac{2\pi}{\beta\kappa^\ddagger}} e^{\beta\Delta W^\ddagger} \label{Kramers\_int\_saddle}\\ \int\_{-\infty}^{r^\ddagger}e^{-\beta W(r)}\,dr &\approx \int\_{-\infty}^\infty e^{-\beta\kappa\_Ar^2/2}\,dr = \sqrt{\frac{2\pi}{\beta\kappa\_A}} \label{Kramers\_int\_reactant}\end{aligned}$$ These results provide a good approximation whenever the region of validity for the harmonic approximation is large enough that the added probability mass by taking the integration limit to infinity is negligible. Substituting and back to gives $$k\_{AB}=\frac{\beta D}{2\pi}\sqrt{\kappa\_A\kappa^\ddagger} e^{-\beta\Delta W^\ddagger} \label{Kramers\_rate\_2}$$ This is the rate constant predicted by Kramers’ theory at the high friction limit. Since the dynamics at high friction is diffusive, it is also known as the spatial-diffusion-limited rate. The rate constant for the reverse reaction can be obtained analogously by replacing the reactant well *A* with the product well *B* in the preceding derivation. If the diffusion constant is position-dependent, the derivation up to (and include) is still valid. The diffusion constant can be “folded” into the potential of mean force *W*(*r*) as *e**β**W*(*r*)/*D*(*r*) = *e**β**W*(*r*) − ln*D*(*r*) = *e**β*[*W*(*r*) − *k**B**T*ln*D*(*r*)] This defines a new potential surface *W*(*r*) − *k**B**T*ln*D*(*r*), which may reach its maximum at a position other than *r*‡. In the literature it is also common to parameterize the harmonic potential near *r*‡ and *r**A* as $$W(r)\approx\Delta W^\ddagger -\frac 1 2 m {\omega^\ddagger}^2\left(r-r^\ddagger\right)^2 \quad\text{and}\quad W(r)\approx \frac 1 2 m \omega\_A^2\left(r-r\_A\right)^2 \label{Kramers\_harmonic\_alt}$$ Here, *ω**A* (and analogously, *ω*‡) is the angular frequency of a harmonic oscillator of mass *m* on the potential surface *W*(*r*) = *m**ω**A*2*r*2/2; *ω**A* is related to the force constant/curvature *κ**A* by *κ**A* = *m**ω**A*2. Together with the definition of the diffusion constant *D* = 1/*β**m**γ*, can be written alternatively as $$k\_{AB}=\frac{\omega\_A\omega^\ddagger}{2\pi\gamma}e^{-\beta\Delta W^\ddagger} \label{Kramers\_rate\_alt}$$ Kramers’ theory via MFPT ------------------------ There is an alternative approach to deriving using the mean first-passage time (MFPT). Let Ω be the reaction coordinate space of the system. For a subset *B* ⊆ Ω, the MFPT *m**B*(*r*) is the average time for the system initiated at *r* to reach *B*. It can be shown, using potential theory, that the MFPT satisfies the following (Dirichlet) boundary value problem, $$\begin{cases} Lm\_B(r)=-1, & r\in\Omega\backslash B \\ m\_B(r)=0, & r\in B \end{cases} \label{MFPT\_Dirichlet}$$ where the notation Ω ∖ *B* denotes the complement of *B* in Ω. For one-dimensional Brownian dynamics, the generator *L* defined in takes on the form $$L= \frac{d}{dr}D\left[\frac{d}{dr} - \beta W(r)\right]= D\frac{d^2}{dr^2} - D\beta\frac{dW}{dr}\frac{d}{dr}$$ Unfortunately, a full derivation of from scratch would take too long; interested readers should consult standard references on Markov processes for more details. Nevertheless, an “intuitive” justification of goes as follows: consider a hypothetical procedure where an ensemble of system trajectories are prepared, all with the same initial condition at position *r*(0) = *r*0 outside *B*; we can calculate *m**B*(*r*0) by taking the average first hitting time to the set *B* among the trajectories. For each given trajectory *r*(*t*), moving forward in time by a small amount *δ**t* reduces the first hitting time at *r*(*t* + *δ**t*) by the same amount of *δ**t*; as such, the time derivative of *m**B*(*r*0) should be  − 1. Since the generator acting on an observable *m**B*(*r*0) gives its time derivative, it follows that *L**m**B*(*r*0) =  − 1. On the other hand, if *r*0 is already in *B*, then the first hitting time to *B* is zero by construction, hence the boundary condition in. To determine the rate constant for the transition from the reactant well to the product well, let us consider the MFPT to the product well from some point *r* near the reactant well (as we will see, the particular choice of *r* is not very important). More specifically, let $\Omega=\mathbb R$ and consider the MFPT for the set *B* = [*r**B*, ∞). The differential equation *L**m**B*(*r*) =  − 1 can be solved using an integrating factor, $$\begin{aligned} D\frac{d^2m\_B}{dr^2} - D\beta\frac{dW}{dr}\frac{dm\_B}{dr} &= -1\\ \frac{d}{dr}\left[e^{-\beta W(r)}\frac{dm\_B}{dr}\right] &= -D^{-1}e^{-\beta W(r)} \\ \int\_{-\infty}^{t}\frac{d}{ds}\left[e^{-\beta W(s)}\frac{dm\_B}{ds}\right]\,ds &= -D^{-1}\int\_{-\infty}^t e^{-\beta W(s)}\,ds \\ \int\_r^{r\_B}\frac{dm\_B}{dt}\,dt &= -\int\_r^{r\_B} D^{-1}e^{\beta W(t)}\int\_{-\infty}^t e^{-\beta W(s)}\,dsdt \\ m\_B(r) &= \int\_r^{r\_B} D^{-1}e^{\beta W(t)}\int\_{-\infty}^t e^{-\beta W(s)}\,dsdt \label{Kramers\_MFPT}\end{aligned}$$ It is easy to check that, under the harmonic approximations of, the function *e**β**W*(*t*)*e*− *β**W*(*s*) has a unique global maximum at (*s* = *r**A*, *t* = *r*‡) within the domain of integration in (see Figure [fig:mfptintregion]). Since *e**β**W*(*t*)*e*− *β**W*(*s*) decays exponentially away from the global minimum, we can simply extend the domain of integration to $\mathbb R^2$, in which case the double integral in simplifies to the integrals considered in and. Note that the validity of this approximation implies that *m**B*(*r*) is a constant near and to the left of the reactant well, and that the particular choice for the boundaries of *B* is not important as long as its lower bound is sufficiently to the right of *r*‡. Taken together, we have shown that the rate constant derived from the flux-over-population method is equivalent to the inverse of the mean first-passage time; in fact, this equivalence is exact for a much broader range of stochastic processes than those considered in this note. Unfortunately, despite its conceptual simplicity, the MFPT cannot in general be calculated analytically for problems in more than one dimension and thus we will not consider this method in the following sections; however, see Section VII. D. of for an approximate approach to solving for the MFPT in higher dimensions using asymptotic methods. ![The region of integration for the double integral in. The region of integration is shown in green. The maxima of e^{\beta W(t)}e^{-\beta W(s)} are shown as intersections of the orange lines. Only one of the two maxima, (s=r_A, t=r^\ddagger), is located inside the integration limit.](int_region_plt.png "fig:") [fig:mfptintregion] Langer’s extension to higher dimensions ======================================= Now let us consider the extension of Kramers’ theory to an *N*-dimensional potential (of mean force) with the reactant and product wells separated by a saddle point; recall that the saddle point is a point on the potential energy surface *W*(*r*) where the gradient is zero and the Hessian has exactly one unstable mode. An example two-dimensional double-well potential is shown in Figure [fig:2dexample]A. Conceptually, the flux-over-population method employed in the one-dimensional case carries over with few changes. However, as we will see, the higher-dimensional setting of does pose some algebraic challenges. The derivation presented here loosely follows the presentation in. Similar to our treatment in one dimension, we assume a harmonic approximation at *r**A*, the bottom of the reactant well *A*, $$W(r)\approx \frac 1 2 (r-r\_A)^TH\_A(r-r\_A) \label{Langer\_harmonic\_reactant}$$ where *H**A* is the Hessian matrix of second-order derivatives evaluated at the minimum *r**A*, which we assume to be symmetric positive definite; note that the Hessian is also known as the force constant matrix (to see why, note that for a harmonic potential *κ**r*2/2 with force constant *κ*, the second-order derivative at the center *r* = 0 is simply *κ*). In terms of notations, in this note we will exclusively use notations of the form *x**T**y* to denote inner products in place of other common notations such as *x* ⋅ *y* or $\left<x, y\right>$; readers more familiar with the other notations should convince themselves that, e.g., the quadratic form in can be expressed in the angle bracket notation as $\left<r-r\_A, H\_A(r-r\_A)\right>/2$. In addition, we assume that there is a saddle point *r*‡ in between the reactant and product wells, which is the point with the minimum energy along the barrier ridge. A harmonic approximation at the saddle point gives $$W(r) \approx \Delta W^\ddagger + \frac 1 2 (r-r^\ddagger)^TH^\ddagger(r-r^\ddagger) \label{Langer\_harmonic\_saddle}$$ Here, we assume that *H*‡ has exactly one negative eigenvalue associated with the unstable mode at *r*‡, while the rest of the eigenvalues are assumed to be strictly positive. In general, it is possible for *H*‡ to possess one or more eigenvalues of zero, which correspond to some underlying symmetries of the system. For example, the Hessian matrix of a system of two one-dimensional particles with positions *x*1 and *x*2 whose distance is subject to the harmonic restraint *W*(*x*1, *x*2) = *κ*(*r*0 − (*x*1 − *x*2))2/2 will have a zero eigenvalue corresponding to the translational symmetry of the system. Such symmetries can oftentimes be eliminated (e.g., here by working with the reaction coordinate *r* = *x*1 − *x*2) and we shall not consider such cases for the following analysis. Lastly, we record here for future reference, ∂*i**W*(*r*) = ∑*j**H**i**j*‡(*r**j* − *r**j*‡) This result comes from the fact that *H*‡ is symmetric, and, for any quadratic form *x**T**A**x*, ∇*x**x**T**A**x* = (*A* + *A**T*)*x*. A relation between the steady-state probability and the backward committor -------------------------------------------------------------------------- ![An example two-dimensional double-well potential. A) Contour plot of the double-well potential; the potential is given by W(x,y)=\frac{5}{2}(x^2-1)^2+5y^2 (see also ). The reactant well A and the product well B are defined, somewhat arbitrarily, as disks of radius 1/4 centered at the global minima r_A and r_B. The white dashed vectors show the eigenvectors of H^\ddagger, while the white solid vectors show the eigenvectors of H^\ddagger D, with D=\{\{4, 1\}, \{1, 2\}\}. B) The exact backward committor q^-(x, y) and steady-state flux J(x, y). The committor is computed by solving numerically using the NDSolveValue function with the appropriate DirichletCondition in Mathematica 12.1 and is shown as a contour plot (with 0.1 contour increment). The steady-state flux is calculated using and represented as the orange vector field; the lengths of the streamline segments are proportional to the vector field magnitude. C) The approximate backward committor and steady-state flux from Langer’s theory. The committor is given by, and the steady-state flux is given by with \pi(s) replaced with its harmonic approximation at the saddle point. For all calculations \beta is set to one.](2D_example.png "fig:") [fig:2dexample] Instead of seeking a steady-state solution *p*ss(*r*) to directly, we take an indirect approach that relates *p*ss(*r*) to a function called the backward committor *q*−(*r*), which is the probability that, going back in time, a trajectory starting at *r* will reach the reactant well before the product well. As we will see, *p*ss(*r*), a nonequilibrium quantity, is related to the behavior of *q*−(*r*) at equilibrium by a surprisingly simple equation. Since Brownian dynamics under detailed balance is time-reversible (in the sense that a time-reversed process is statistically indistinguishable from the original one), it is more convenient in this context to think of *q*−(*r*) as representing the probability that, going forward in time, a trajectory starting at *r* will reach the reactant well before the product well. The choice of the superscript notation is intended to distinguish *q*− from *q*+, the forward committor, which is the probability that, going forward in time, a trajectory starting at *r* will reach the product well before the reactant well; for time-reversible processes at equilibrium the two quantities are related by *q*+(*r*) + *q*−(*r*) = 1. The committor is also known as splitting probability, or, in the protein folding literature, *p*fold. One should be careful not to confuse the definition of time reversibility in the theory of stochastic process with the concept of reversibility in Hamiltonian dynamics (i.e., flipping both time and momentum leaves the process invariant), the concept of reversibility in thermodynamics, or the concept of reversible reactions in kinetics. In this section we show that *q*−(*r*) = *p*ss(*r*)/*π*(*r*) First, as in the one-dimensional case, the steady-state distribution *p*ss(*r*) satisfies the following boundary value problem, $$\begin{cases} L^\dagger p\_\text{ss}(r)=0, & r\in\Omega\backslash(A\cup B) \\ p\_\text{ss}(r)=\pi(r), & r\in A \\ p\_\text{ss}(r)=0, & r\in B \end{cases} \label{Langer\_p\_ss\_eq}$$ Second, similar to the MFPT, the backward committor also satisfies a boundary value problem, $$\begin{cases} Lq^-(r)=0, & r\in\Omega\backslash(A\cup B) \\ q^{-}(r)=1, & r\in A \\ q^-(r)=0, & r\in B \end{cases} \label{Langer\_committor\_eq}$$ In general, the time evolution of the backward committor is actually dictated by *L**R*, the generator for the time-reversed process, but *L**R* = *L* for time-reversible processes. Again, we will not attempt to derive rigorously. An “intuitive” justification is as follows: consider the same hypothetical procedure as before, where an ensemble of system trajectories are prepared, all with the same initial condition at position *r*(0) = *r*0 outside of *A* or *B*; we can calculate *q*−(*r*0) by determining the fraction of these trajectories that reach *A* first before *B*. The fate of each individual realization of the system dynamics is time-independent: a trajectory either eventually reaches *A* first before it reaches *B*, or it doesn’t; therefore, the fraction *q*−(*r*0) is time-independent, and thus *L**q*−(*r*0), which gives the time derivative, is zero. If *r*0 is already in *A*, then *q*− = 1 by construction since it is not possible for the trajectory to reach *B* before *A*; similarly, if *r*0 is already in *B*, then *q*− = 0 since the trajectory is already in *B* before it can reach *A*. Comparing with, we see that the generator is related to the Fokker-Planck operator by *L* = *π*(*r*)− 1*L*†*π*(*r*) Substituting in gives *π*(*r*)− 1*L*†*π*(*r*)*q*−(*r*) = 0; this implies that *L*†*π*(*r*)*q*−(*r*) = 0 since *π*− 1(*r*) ∝ *e**β**W*(*r*) is strictly positive for all *r*. Furthermore, *π*(*r*)*q*−(*r*) = *π*(*r*) for *r* ∈ *A* and *π*(*r*)*q*−(*r*) = 0 for *r* ∈ *B*. Taken together, we see that *π*(*r*)*q*−(*r*) satisfies the same well-posed boundary value problem as *p*ss(*r*), which implies that they are the same function, as desired. We note here that this proof does not work for stochastic processes described by the Langevin equation outside the overdamped regime. An ansatz for the backward committor ------------------------------------ Given, the problem of determining *p*ss(*r*) now reduces to that of determining *q*−(*r*), the backward committor. In this section, we seek an analytical expression for the backward committor near the saddle point *r*‡. To make the derivation clearer, we first perform a change of variable *s* = *r* − *r*‡ to shift the origin to the saddle point; this procedure has no effect on the functional form of any preceding expressions except for changing the independent variable from *r* to *s*. At steady state, use the fact that *p*ss(*s*) is time-independent and *p*ss(*s*) = *q*−(*s*)*π*(*s*); becomes $$\begin{aligned} 0 &= \sum\_{ij}\partial\_iD\_{ij}\left[\beta\partial\_j W(s)+\partial\_j\right]q^-(s)\pi(s) \\ 0 &= \sum\_{ij}\partial\_i D\_{ij}\left[\beta(\partial\_jW(s))q^-(s)\pi(s) + (\partial\_j q^-(s))\pi(s) - \beta\pi q^-(s)\partial\_jW(s)\right] \\ 0 &= \sum\_{ij}D\_{ij}\partial\_i\left[(\partial\_j q^-(s))\pi(s)\right] \\ 0 &= \sum\_{ij}D\_{ij}\left[\partial\_{ij} q^-(s) -\beta\left(\partial\_iW(s)\right)\left(\partial\_j q^-(s)\right)\right] \label{Langer\_committor\_diff\_eq}\end{aligned}$$ In the second equality, we used the fact that ∂*i**π*(*s*) =  − *β**π*(*s*)∂*i**W*(*s*). To examine the solution to near the saddle point *r*‡, we first consider the one dimensional case. Here, becomes *W*(*s*) = Δ*W*‡ − *κ*‡*s*2/2 as in, with *κ*‡ > 0 representing the force constant for the unstable mode, and reduces to $$\frac{d^2}{ds^2}q^{-}(s) + \beta\kappa^\ddagger s\frac{d}{ds}q^{-}(s)=0 \label{Langer\_committor\_diff\_eq\_1D}$$ Equation can be solved by first solving a first-order ODE for *ν*(*s*) = *d**q*−/*d**s* using an integrating factor and then integrating both sides of the solution from 0 to *s*. After some algebra, we arrive at the general solution in the form of an error function $$q^-(s)=A+\frac{B}{\sqrt{2\pi}}\int\_0^{s\sqrt{\beta\kappa^\ddagger}} e^{-u^2/2}\,du \label{Langer\_committor\_1D\_sol}$$ where *A* and *B* are two free constants. Since *q*−(*s*) is a backward committor, we require that *q*− → 1 as *s* →  − ∞ (i.e., for *s* in the reactant well) and *q*− → 0 as *s* → ∞ (i.e., for *s* in the product well). Substituting these limits into gives *A* − *B*/2 = 1 and *A* + *B*/2 = 0; it should be obvious that *A* = 1/2 and *B* =  − 1, which means that the solution to that satisfies the appropriate boundary conditions is $$q^-(s)=\frac 1 2 -\frac{1}{\sqrt{2\pi}}\int\_0^{s\sqrt{\beta\kappa^\ddagger}} e^{-u^2/2}\,du = \frac{1}{\sqrt{2\pi}}\int\_{s\sqrt{\beta\kappa^\ddagger}}^\infty e^{-u^2/2}\,du = \sqrt{\frac{\beta\kappa^\ddagger}{2\pi}}\int\_s^\infty e^{-\beta\kappa^\ddagger u^2/2}\,du \label{Langer\_committor\_1D\_specific\_sol}$$ This solution indicates that the committor drops down from 1 to 0 with a sigmoidal functional form as *s* increases. In particular, at *s* = 0, *q*− = 1/2. This analysis of the one-dimensional backward committor provides important hints about the behavior of the committor in higher dimensions. In this setting, the set of all points *s* for which *q*−(*s*) has the same value constitutes an isocommittor surface (or curve, in two dimensions). Near the transition state, it is not unreasonable to assume that these isocommittor surfaces can be approximated as planes (or lines, in two dimensions) if *q*− is sufficiently smooth. Furthermore, within the region where the harmonic approximation is valid, this family of isocommittor planes needs to be parallel, since level surfaces cannot intersect for a well-defined function. These considerations motivate us to devise a solution to in terms of isocommittor planes. In general, a plane passing through the origin can be parameterized using its normal vector *n* as *n**T**s* = 0; for planes that do not cross the origin, they can still be parameterized as *n**T*(*s* − *s*0) = 0, where *s*0 is a displacement vector from some point on the plane to the origin. In light of this functional form, let us define a vector *v*+ that is normal to the isocommittor planes (the reason for this notation will become clear shortly); using this vector, the family of isocommittor planes can be parameterized as *v*+*T**s* = *u*, $u\in\mathbb R$. We orient *v*+ such that *q*− approaches 1 for *u* < 0 and approaches 0 for *u* > 0. Taken together, the analysis so far suggests that the solution to has the form $$q^-(s)=\frac{a}{\sqrt{2\pi}}\int\_{v\_+^Ts}^\infty e^{-(au)^2/2}\,du \label{Langer\_isocommittor\_guess}$$ where *a* > 0 is a free constant to be determined. It is easy to check that with this solution form, *q*−(*s*) is constant on each isocommittor plane, approaches 1 for *s* near the reactant well, and approaches 0 for *s* near to the product well. One particular isocommittor plane of interest is the one that crosses the saddle point; i.e., the plane parameterized by *u* = 0 and where *q*− = 1/2. This plane is sometimes known as the stochastic separatrix and often plays a special role in the analysis of reaction mechanisms: if we define the transition state (or more accurately, the transition state ensemble) as the set of configurations where the committor is 1/2, the transition state is the stochastic saparatrix. In Kramers’ theory, this plane reduces to the peak of the barrier separating the reactant and product state. Unfortunately, the simplicity of this equivalence between the transition state and the barrier peak often does not hold in practice, depending on the choice of the reaction coordinate and the theoretical framework under consideration; see, e.g.,. Properties of the vector normal to the isocommittor planes ---------------------------------------------------------- The ansatz in still leaves some questions unanswered. Specifically, 1. How does the direction of *v*+ relate to the geometry at the saddle point? 2. What is the magnitude of *v*+? 3. What is the value of the free constant *a* > 0? We address these questions in this section. ### v+ as a generalized eigenvector To understand the direction of *v*+, we first argue that *v*+ is actually an eigenvector of the matrix *H*‡*D*. In order to show this, we need to examine the solution of on the *q*− = 1/2 plane. First, using the Leibniz integral rule, we record here the first- and second-order derivatives of with respect to *s*: $$\partial\_i q^-(s)=-\frac{a}{\sqrt{2\pi}} e^{-\left(av\_+^Ts\right)^2/2}v\_{+i} \quad\text{and}\quad \partial\_{ij}q^-(s)=\frac{a^3}{\sqrt{2\pi}}\left(v\_+^Ts\right) e^{-\left(av\_+^Ts\right)^2/2}v\_{+i}v\_{+j} \label{Langer\_committor\_derivatives}$$ On the *q*− = 1/2 plane, *v*+*T**s* = 0 and reduces to $$\partial\_i q^-(s)=-\frac{a}{\sqrt{2\pi}}v\_{+i} \quad\text{and}\quad \partial\_{ij}q^-(s)=0 \quad\text{when}\quad v\_+^Ts=0 \label{Langer\_committor\_1/2\_derivatives}$$ Substituting in the derivative of *W*(*s*) in and the derivatives of *q*−(*s*) in, we see that can be further simplified on the *q*− = 1/2 plane as, $$\begin{aligned} 0&= \sum\_{ij} D\_{ij}\left[-\beta\left(\sum\_k H^\ddagger\_{ik}s\_k\right) \left(-\frac{a}{\sqrt{2\pi}}v\_{+j}\right)\right]\\ 0&= \sum\_k s\_k\left[\sum\_{ij}H^\ddagger\_{ki}D\_{ij}v\_{+j}\right] \\ 0&= s^T\left(H^\ddagger Dv\_+\right)\label{Langer\_eigenvector\_derivation}\end{aligned}$$ In the second equality, we used the fact that *H*‡ is symmetric (i.e., *H**i**k*‡ = *H**k**i*‡). This result indicates that the vector *H*‡*D**v*+ is orthogonal to *s*. Since *v*+ is orthogonal to *s* confined on the *q*− 1 = 1/2 plane, *H*‡*D**v*+ is parallel with *v*+. This shows that *v*+ is an eigenvector of *H*‡*D*. Anticipating the analysis in the next section, we denote the corresponding eigenvalue as  − *λ*+ (with *λ*+ > 0). Note that we can write the eigenvector equation *H*‡*D**v*+ =  − *λ*+*v*+ as *H*−  ‡*v*+ =  − *λ*+− 1*D**v*+ where, through an abuse of notation, we have written (*H*‡)− 1 = *H*−  ‡. Equations of this form are called generalized eigenvalue problems whenever *H*−  ‡ is symmetric and *D* is symmetric positive definite. The collection of all such generalized eigenvectors form a nonsingular matrix *V* that simultaneously diagonalizes *H*‡ and *D*, in the sense that *V**T**D**V* = *I* and *V**T**H*−  ‡*V* = Λ− 1 where Λ− 1 is a diagonal matrix containing the corresponding generalized eigenvalues. The vectors *v**i*’s in *V* are orthogonal with respect to the inner product induced by *D*; i.e., $$\left<v\_i, v\_j\right>\_D=v\_i^TDv\_j=\lambda\_jv\_i^TH^{-\ddagger} v\_j=\lambda\_j\Lambda\_{ij}^{-1}=\delta\_{ij}$$ where *δ**i**j* is the Kronecker delta function. ### The value of the free constant a The magnitudes of *a* and *v*+ are related; first we determine the value of *a* in terms of *v*+. To do so, we consider the solution to outside the committor 1/2 plane where *v*+*T**s* ≠ 0: $$\begin{aligned} 0 &= \sum\_{ij} D\_{ij}\left[\frac{a^3}{\sqrt{2\pi}}\left(v\_+^Ts\right) e^{-\left(av\_+^Ts\right)^2/2}v\_{+i}v\_{+j} - \beta\left(\sum\_k H^\ddagger\_{ik}s\_k\right) \left(-\frac{a}{\sqrt{2\pi}}e^{-(av\_+^Ts)^2/2}v\_{+j}\right)\right]\\ 0 &= \sum\_{ij} D\_{ij}\left[a^2\left(v\_+^Ts\right)v\_{+i}v\_{+j} + \beta\left(\sum\_k H^\ddagger\_{ik}s\_k\right)v\_{+j}\right]\\ 0 &= a^2(v\_+^Ts)\sum\_{ij}v\_{+i}D\_{ij}v\_{+j} + \beta\sum\_{jik}v\_{+j}D\_{ji}H^\ddagger\_{ik}s\_k\\ 0 &= a^2(v\_+^Ts)v\_+^TDv\_+ + \beta (v\_+^TDH^\ddagger)s \label{beta\_or\_no\_beta}\\ 0 &= a^2(v\_+^Ts)v\_+^TDv\_+ - \beta (\lambda\_+ v\_+^T)s \\ 0 &= a^2v\_+^TDv\_+ -\beta\lambda\_+ \\ a &= \sqrt{\beta\lambda\_+/v\_+^TDv\_+} \label{free\_const\_a}\end{aligned}$$ Applying to gives $$q^-(s)=\sqrt{\frac{\beta\lambda\_+}{2\pi v\_+^TDv\_+}}\int\_{v\_+^Ts}^\infty \exp\left(-\frac1 2\frac{\beta\lambda^+}{v\_+^TDv\_+}u^2\right)\,du \label{Langer\_isocommittor\_guess2}$$ Note that implies that  − *λ*+ is a negative eigenvalue of *H*‡*D*: because *β* > 0 and *v*+*T**D**v*+ > 0 (since *D* is symmetric positive definite), *λ*+ also needs to be positive so that *a* > 0. What does this result mean for the direction represented by the corresponding eigenvector *v*+? When *D* can be written in the form of *σ*2*I*, where *I* is an identity matrix, the diffusion on the potential (of mean force) is isotropic (i.e., same diffusivity in every direction). In this case, *v*+ is simply an eigenvector of *H*‡, *H*‡*v*+ =  − *λ*+*σ*− 2*v*+ that is, *v*+ is an eigenvector associated with the only negative eigenvalue of *H*‡ corresponding to the unstable mode separating the reactant and product well. In the case of anisotropic diffusion, *H*‡*D* still has a single unique unstable mode represented by *v*+: because *H*‡ is symmetric and *D* is symmetric positive definite, a result in linear algebra states that the eigenvalues of *H*‡*D* have the same signs as those of *H*‡. As such, *v*+ is still an eigenvector associated with the only negative eigenvalue of *H*‡*D* in the anisotropic case. However, the presence of the matrix *D* effectively stretches and rotates the dynamics on the surface of the potential; as such, the unstable mode of *H*‡*D* no longer necessarily points in the same direction as the unstable mode of *H*‡ (see, e.g., Figure [fig:2dexample]A). ### The magnitude of v+ The magnitude of *v*+ determines how fast the committor *q*−(*s*) decays to zero from the reactant well to the product well. To determine this quantity, we first note that the derivative of the one-dimensional backward committor in at the saddle point *s* = 0 is given by $$\frac{d}{ds}q^-(s)\Big|\_{s=0}=-\sqrt{\frac{\beta\kappa^\ddagger}{2\pi}}e^{-\beta\kappa^\ddagger s^2/2}\Big|\_{s=0}=-\sqrt{\frac{\beta\kappa^\ddagger}{2\pi}} \label{committor\_1D\_derivative}$$ While the gradient of the multi-dimensional backward committor in at the saddle point *s* = 0 is given by $$\nabla\_s q^-(s)\Big|\_{s=0} = -\sqrt{\frac{\beta\lambda\_+}{2\pi v\_+^TDv\_+}}\exp\left(-\frac1 2\frac{\beta\lambda^+}{v\_+^TDv\_+}(v\_+^Ts)^2\right)\Big|\_{s=0} v\_+=-\sqrt{\frac{\beta\lambda\_+}{2\pi v\_+^TDv\_+}}v\_+ \label{committor\_ND\_derivative}$$ Comparing these two expressions, we can identify *κ*‡ with *λ*+ since both are “eigenvalues” corresponding to the unstable mode at the saddle point. In order for the absolute value of and the norm of (induced by *D*) to be equal, we require that *v*+*T**D**v*+ = 1 Recall that the expression *v*+*T**D**v*+ is the inner product of *v*+ with itself induced by *D* and defines a vector norm via $\|v\_+\|\_D=\sqrt{v\_+^TDv\_+}$. Taken together, we see that the solution to is $$q^-(s)=\sqrt{\frac{\beta\lambda\_+}{2\pi}}\int\_{v\_+^Ts}^\infty e^{-\beta\lambda\_+u^2/2}\,du \label{Langer\_isocommittor\_solution}$$ See Figure [fig:2dexample]B and [fig:2dexample]C for a comparison of with the exact solution to for the example double-well potential shown in [fig:2dexample]A. Lastly, we note here that, with the help of (after setting $a=\sqrt{\beta\lambda\_+}$), it is fairly straightforward to check that is in fact an exact solution to under the harmonic approximation; i.e., satisfies ∑*i**j**D**i**j*∂*i**j**q*−(*s*) − *β*∑*i**j**k**s**k**H**k**i*‡*D**i**j*∂*j**q*−(*s*) = 0 in addition to satisfying the boundary conditions listed in. In other words, if the potential surface *W*(*s*) is exactly harmonic, the isocommittor surfaces are indeed parallel planes parametrized by *v*+*T**s* = *u* for $u\in\mathbb R$. Properties of the steady-state flux vector field ------------------------------------------------ Now with an expression for the backward committor in hand, we are ready to write down an expression for the steady-state flux and analyze its behavior near the saddle point. ### The flux vector field has a constant direction Substituting and in the multidimensional Fokker-Planck equation, we see that the steady-state flux is $$\begin{aligned} J(s) &= -D\pi(s)\nabla\_s q^-(s) \label{Langer\_ss\_flux\_def}\\ &= -D\pi(s)\left(-\sqrt{\frac{\beta\lambda\_+}{2\pi}}e^{-\beta\lambda\_+(v\_+^Ts)^2/2}\right)\nabla\_s(v\_+^Ts) \\ &= \sqrt{\frac{\beta\lambda\_+}{2\pi}}e^{-\beta\lambda\_+(v\_+^Ts)^2/2}\pi(s) D v\_+ \label{Langer\_ss\_flux}\end{aligned}$$ One immediate observation here is that near the saddle point, the flux has a constant direction *D**v*+. ### The flux vector field is constant along its streamlines Not only does *J*(*s*) have a constant direction *D**v*+, it also has a constant magnitude along that direction. To see why, we first parameterize *s* using an orthogonal decomposition *s*(*u*) = Δ*s* + *u**D**v*+ where Δ*s* is a constant that measures the displacement from the origin (i.e., the saddle point) orthogonal to the direction defined by *D**v*+, while $u\in\mathbb R$ is the independent variable that measures progression along the *D**v*+ direction. In, the only factor dependent on *s* is *e*− *β**λ*+(*v*+*T**s*)2/2*π*(*s*). Using the harmonic approximation of *π*(*s*) at near the saddle point in, this expression can be written as $$\exp\left[-\frac\beta 2 s^T\left(H^\ddagger +\lambda\_+v\_+v\_+^T\right)s\right]$$ Using the parameterization *s*(*u*), the quadratic form in this expression becomes (Δ*s* + *u**D**v*+)*T*(*H*‡ + *λ*+*v*+*v*+*T*)(Δ*s* + *u**D**v*+). This expression can be expanded, but some of the terms after the expansion are constants independent of *u*. Keeping only the *u*-dependent terms, we get $$\begin{gathered} u\Delta s^TH^\ddagger Dv\_+ + u\lambda\_+\Delta s^Tv\_+v\_+^TDv\_+ + u v\_+^TD^TH^\ddagger \Delta s \\ + u^2v\_+^TD^TH^\ddagger Dv\_+ + u\lambda\_+v\_+^TD^Tv\_+v\_+^T\Delta s + u^2\lambda\_+v\_+^TD^Tv\_+v\_+^TDv\_+\end{gathered}$$ Using the facts that *H*‡ and *D* are symmetric, *H*‡*D**v*+ =  − *λ*+*v*+, and *v*+*T**D**v*+ = 1, one can show that every term in this expression cancels, and thus *J*(*s*) is constant along the *D**v*+ direction. Taken together, we can visualize the flux as a flow of probability density along parallel streamlines near the saddle point, and the velocity of the probability current along each streamline is constant. See Figure [fig:2dexample]B and [fig:2dexample]C for a comparison of the streamlines of with the exact solution to for the example double-well potential shown in [fig:2dexample]A. ### Flux surface integrals are constant over dividing surfaces As in the one-dimensional case, we need the total flux from the reactant well to the product well, which gives the reaction rate. This task is made somewhat easy by a few considerations. First, we note that, for a double-well potential, the flux-over-population steady-state flux *J*(*s*) has exactly one source in the reactant well *A* and one sink in the product well *B*; that is, *J*(*s*) is a divergence-less vector field in Ω ∖ (*A* ∪ *B*) at the steady state (this should be obvious by setting time derivative to zero in ). A consequence of this property is that the total flux from *A* to *B* is given by the surface integral of *J*(*s*) over a dividing surface between *A* and *B* (i.e., a surface that partitions Ω into two sets, one of which contains *A* and the other contains *B*). Importantly, the particular choice of the dividing surface is irrelevant; if this integral differs on two dividing surfaces, it would imply that there is a net sink or source located in the in-between region. Second, since the probability flow is concentrated near the saddle point, one can restrict the flux surface integral further to regions close to the saddle point where is valid. With the help of, a result we will prove in Section [BSsimplification], a particularly simple choice of dividing surface is the *q*− = 1/2 plane. The total flux of over the stochastic separatrix is $$\begin{aligned} J &= \int \|v\_+\|^{-1}v\_+^TJ(s)\,d\sigma(s) \\ &\overset{1}{=} \int v\_+^TJ(s)\delta(v\_+^Ts)\,ds \\ &= \sqrt{\frac{\beta\lambda\_+}{2\pi}}v\_+^TD v\_+ Z^{-1}e^{-\beta\Delta W^\ddagger} \int e^{^-\beta s^TH^\ddagger s/2}\delta(v\_+^Ts)\,ds \\ &= \sqrt{\frac{\beta\lambda\_+}{2\pi}}Z^{-1}e^{-\beta\Delta W^\ddagger} \left[(2\pi/\beta)^{1-N} v\_+^TH^{-\ddagger}v\_+\det H^\ddagger\right]^{-1/2} \\ &\overset{2}{=} \left(\frac{2\pi}{\beta}\right)^{N/2}Z^{-1}e^{-\beta\Delta W^\ddagger}\frac{\beta\lambda\_+}{2\pi}\left|\det H^\ddagger\right|^{-1/2} \label{Langer\_flux\_integral\_sol\_q\_1/2}\end{aligned}$$ Here, equality (1) follows from the relation *d**σ*(*s*) = *δ*(*v*+*T**s*)∥∇*s**v*+*T**s*∥ *d**s*, which is a consequence of what is known as the coarea formula; note that ∥∇*s**v*+*T**s*∥ = ∥*v*+∥. The volume integrals are over the entire $\Omega=\mathbb R^N$ reaction coordinate space of the system; this will be the assumed domain of integration for the rest of the note unless otherwise specified. Equality (2) follows from the facts that *H*−  ‡*v*+ =  − *λ*+− 1*D**v*+ and det*H*‡ < 0. In the rest of this section, we show and confirm that the total flux can be obtained using a surface integral over an arbitrary plane near the the saddle point, which gives the same result as. Let us parameterize such a plane by *n**T**s* = *θ*, where *n* is the normal vector (not necessarily normalized) and $\theta\in\mathbb R$. The only condition we impose on the plane is that *n* is not orthogonal to *D**v*+ (i.e., the plane parameterized by *n**T**s* = *θ* is not parallel to *D**v*+, the direction of the flux). Furthermore, without loss of generality we assume that *n* is oriented such that *n**T**D**v*+ > 0. Using and the coarea formula, the integral of *J*(*s*) over this plane is given by $$\begin{aligned} J &= \int \|n\|^{-1}n^TJ(s)\,d\sigma(s) \\ &= \int n^TJ(s)\delta(\theta-n^Ts)\,ds \\ &= \sqrt{\frac{\beta\lambda\_+}{2\pi}}n^TD v\_+ Z^{-1}e^{-\beta\Delta W^\ddagger} \int e^{^-\beta s^T\left(H^\ddagger+\lambda\_+v\_+v\_+^T\right) s/2}\delta(\theta - n^Ts)\,ds \label{Langer\_flux\_integral}\end{aligned}$$ As with most Gaussian integrals, we seek a change of variable that makes the volume integral in separable. For a typical Gaussian integral with an integrand of the form *e*− *x**T**A**x*/2, where *A* is a symmetric matrix, there is an orthogonal matrix *Q* that diagonalizes *A* (i.e., *Q**T**A**Q* = *M* for some diagonal matrix *M*) and enables a change of variable *x* = *Q**y* that renders the integral separable. In the current case, recall our discussion of the generalized eigenvector problem in Section [v+aseigenvector], where we have defined the matrix *V* that simultaneously diagonalizes *H*‡ and *D* according to. Comparing *V* and *Q*, we introduce the change of variable *t* = *V**T**s* (as well as *m* = *V**T**n*). The Jacobian for this transformation is det*V*− 1, which is not necessarily one because *V* may not be orthogonal. After this transformation, *v*+ relates to a new basis vector *e*+ by *e*+ = *V*− 1*v*+; to see why, note that together with, *H*‡*D**v*+ =  − *λ*+*v*+ implies Λ*V*− 1*v*+ =  − *λ*+*V*− 1*v*+ that is, *V*− 1*v*+ is an eigenvector of the diagonal matrix Λ corresponding to its unique negative eigenvalue  − *λ*+. Since  − *λ*+ has an algebraic multiplicity of one, the eigenvector *V*− 1*v*+ must have only one nonzero element. Furthermore, the norm of *V*− 1*v*+ is fixed by ∥*V*− 1*v*+∥ = *v*+*V*− *T**V*− 1*v*+ = *v*+*T**D**v*+ = 1. Since *v*+ is a column vector of *V*, the nonzero element of *e*+ must be positive, otherwise *V**e*+ =  − *v*+. Together, we have shown that *e*+ is a normalized standard basis vector in the new coordinate system. With this change of variable, becomes $$\begin{aligned} J &= \sqrt{\frac{\beta\lambda\_+}{2\pi}}n^TD v\_+ Z^{-1}e^{-\beta\Delta W^\ddagger} \int e^{^-\beta t^TV^{-1}\left(H^\ddagger+\lambda\_+v\_+v\_+^T\right)V^{-T}t/2}\delta(\theta - m^TV^{-1}V^{-T}t)|\det V^{-1}|\,dt\\ &= \sqrt{\frac{\beta\lambda\_+}{2\pi}}n^TD v\_+ Z^{-1}e^{-\beta\Delta W^\ddagger}|\det V^{-1}| \int e^{^-\beta t^T\left(\Lambda+\lambda\_+e\_+e\_+^T\right)t/2}\delta(\theta - m^TDt)\,dt \label{Langer\_flux\_integral\_2}\end{aligned}$$ Before further simplifications of, a comment about the term *t**T*(Λ + *λ*+*e*+*e*+*T*)*t* is in order. In the new basis, *H*‡ becomes Λ, while *v*+*v*+*T* becomes *e*+*e*+*T*. In particular, Λ + *λ*+*e*+*e*+*T* is a diagonal matrix whose diagonal elements are the eigenvalues *λ**j*’s except for  − *λ*+, which has been deleted from Λ by *λ*+*e*+*e*+*T*. This observation should become especially obvious using the outer product form of the spectral theorem, which gives Λ = ∑*j**λ**j**e**j**e**j**T*; in other words, compared to Λ (or *H*‡), Λ + *λ*+*e*+*e*+*T* (or *H*‡ + *λ*+*v*+*v*+*T*) is rank-deficient because one of its eigenspaces corresponding to the unstable mode has been deleted. As a result, the quadratic form *t**T*(Λ + *λ*+*e*+*e*+*T*)*t* sums over all *λ**j**t**j*2 except for *j* corresponding to  − *λ*+*t*+2. In light of this analysis, we will use notations ∑*j* ≠  + and ∏*j* ≠  + to denote summation or product over *j*’s except for *j* corresponding to  − *λ*+ and *e*+. Let us denote the volume integral in as *I*. We will simplify *I* with the following strategy. First, we convert the Dirac delta function in into its integral representation; i.e., $$\delta(f(x))=\frac{1}{2\pi}\int\_{-\infty}^\infty e^{ikf(x)}\,dk \label{dirac\_delta\_integral}$$ for any sufficiently smooth test function *f*(*x*). Second, we integrate over all *t**j*’s for which *j* ≠  + , then over *k*, and then over *t*+. This order of operation ensures that every integral is an analytically tractable Gaussian integral with a linear term, which can be solved by first completing the squares: $$\begin{aligned} I&= \frac{1}{2\pi}\int\int e^{^-\beta t^T\left(\Lambda+\lambda\_+e\_+e\_+^T\right)t/2}e^{ik(\theta - m^TDt)}\,dtdk\\ &= \frac{1}{2\pi}\int\int e^{ik\theta}e^{-ik\sum\_l m\_lD\_{l+}t\_+}\prod\_{j\ne+}\left(\int e^{^-\beta \lambda\_jt\_j^2/2}e^{-ik\sum\_lm\_lD\_{lj}t\_j}\,dt\_j\right)\,dkdt\_+\\ &= \frac{1}{2\pi}\int\int e^{ik\theta}e^{-ik\sum\_l m\_lD\_{l+}t\_+}\prod\_{j\ne+}\left(\sqrt{\frac{2\pi}{\beta\lambda\_j}}e^{-k^2\left(\sum\_l m\_lD\_{lj}\right)^2/2\beta\lambda\_j}\right)\,dkdt\_+\\ &= \frac{1}{2\pi}\left(\prod\_{j\ne+}\sqrt{\frac{2\pi}{\beta\lambda\_j}}\right)\int e^{ik\theta}e^{-ik\sum\_l m\_lD\_{l+}t\_+}\left(\frac{2\pi\beta}{\sum\_{j\ne+}\lambda\_j^{-1}\left(\sum\_l m\_lD\_{lj}\right)^2}\right)^{1/2}e^{-\frac{\beta}{2}\frac{(\theta-\sum\_l m\_lD\_{l+}t\_+)^2}{\sum\_{j\ne+}\lambda\_j^{-1}\left(\sum\_l m\_lD\_{lj}\right)^2}}\,dt\_+\\ &= \frac{1}{2\pi}\left(\prod\_{j\ne+}\sqrt{\frac{2\pi}{\beta\lambda\_j}}\right)\left(\frac{2\pi\beta}{\sum\_{j\ne+}\lambda\_j^{-1}\left(\sum\_l m\_lD\_{lj}\right)^2}\right)^{1/2}\left(\frac{2\pi}{\beta}\frac{\sum\_{j\ne+}\lambda\_j^{-1}\left(\sum\_l m\_lD\_{lj}\right)^2}{\left(\sum\_l m\_lD\_{l+}\right)^2}\right)^{1/2}\\ &= \left(\prod\_{j\ne+}\sqrt{\frac{2\pi}{\beta\lambda\_j}}\right)|m^TDe\_+|^{-1} \label{Langer\_flux\_integral\_3}\end{aligned}$$ Substituting back into gives $$\begin{aligned} J &= \sqrt{\frac{\beta\lambda\_+}{2\pi}}\left(\prod\_{j\ne+}\sqrt{\frac{2\pi}{\beta\lambda\_j}}\right) Z^{-1}e^{-\beta\Delta W^\ddagger}|\det V^{-1}|\frac{n^TD v\_+}{|m^TDe\_+|}\\ &\overset{1}{=} \left(\frac{2\pi}{\beta}\right)^{N/2}\frac{\beta \lambda\_+}{2\pi}Z^{-1}e^{-\beta\Delta W^\ddagger}\frac{|\det V^{-1}|}{|\det H^\ddagger D|^{1/2}} \\ &\overset{2}{=} \left(\frac{2\pi}{\beta}\right)^{N/2}Z^{-1}e^{-\beta\Delta W^\ddagger}\frac{\beta \lambda\_+}{2\pi}|\det H^\ddagger|^{-1/2} \label{Langer\_flux\_integral\_sol}\end{aligned}$$ In equality (1), we have used the fact that *n**T**D**v*+ = *m**T**V*− 1*D**V**e*+ = *m**T**V*− 1(*V*− *T**V*− 1)*V**e*+ = *m**T*(*V*− 1*V*− *T*)*e*+ = *m**T**D**T**e*+ = *m**T**D**e*+ > 0 In equality (2), we have used the fact that ∣det*H*‡*D*∣ = ∣det*H*‡∣det*D* and det*D* = det*V*− *T**V*− 1 = (det*V*− 1)2 both of which follow directly from. It should be obvious that is independent of either *θ* or *n* used to parameterize the dividing plane and is thus an intrinsic property of the flux. The result in is the same as, as expected. However, the choice of *q*− = 1/2 in the derivation of is special in that the quadratic form *s**T*(*H*‡ + *λ*+*v*+*v*+*T*)*s* is simply *s**T**H*‡*s*, which is not rank-deficient. Langer’s rate constant ---------------------- According to the flux-over-population method, the only quantity left to determine for computing the rate constant is the steady-state population *n* in the reactant well. This is given by $$\begin{gathered} n=Z^{-1}\int e^{-\beta (r-r\_A)^TH\_A (r-r\_A)/2}\,dr = Z^{-1}\prod\_i \int\_{-\infty}^\infty e^{-\beta \mu\_i t\_i^2/2}\,dt\_i \\ =Z^{-1}\left(2\pi/\beta\right)^{N/2}\prod\_i\mu\_i^{-1/2}=Z^{-1}\left(2\pi/\beta\right)^{N/2}\left(\det H\_A\right)^{-1/2} \label{Langer\_ss\_pop}\end{gathered}$$ where *r* − *r**A* = *Q**t* with *Q* being an orthogonal matrix that diagonalizes *H**A*, and *μ**i*’s are the eigenvalues of *H**A*. Recall that *H**A* is assumed to be symmetric positive definite, and thus *μ**i* > 0 for all *i*’s. At last, combining and, we see that the rate constant is $$k\_{AB}=J/n=\frac{\beta\lambda\_+}{2\pi}\sqrt{\frac{\det H\_A}{\left|\det H^\ddagger\right|}}e^{-\beta\Delta W^\ddagger} \quad\text{where}\quad H^\ddagger Dv\_+=-\lambda\_+v\_+ \label{Langer\_rate\_constant}$$ Three brief comments about this result are in order. First, we note here that it is also common in the literature to define  − *λ*+ as an eigenvalue of the matrix *β**H*‡*D*, while the magnitude of *v*+ is still fixed by *v*+*T**D**v*+ = 1. One can retrace our derivations and see that, starting from, this has the effect of replacing all occurrences of *β**λ*+ by *λ*+, in which case the rate constant now reads $$k\_{AB}=\frac{\lambda\_+}{2\pi}\sqrt{\frac{\det H\_A}{\left|\det H^\ddagger\right|}}e^{-\beta\Delta W^\ddagger} \quad\text{where}\quad \beta H^\ddagger Dv\_+=-\lambda\_+v\_+$$ Second, it is easy to check that the multidimensional result can be reduced to Kramers’ rate constant in the case of a single reaction coordinate. By comparing to and, we identify *H**A* as *κ**A* and *H*‡ as  − *κ*‡. Next, since  − *λ*+ is the eigenvalue corresponding to the unstable mode of *H*‡*D*, and the eigenvalue of a 1 × 1 “matrix” is the matrix element itself, it follows that in the one-dimensional case where only the unstable mode is considered,  − *λ*+ can be identified as the product of *H*‡ (i.e.,  − *κ*‡) and *D*, the one-dimensional diffusion constant. Making these substitutions reduce to. Lastly, we briefly comment on some computational aspects of the theory. Using molecular dynamics simulations, the application of requires the determination of the activation free energy Δ*W*‡ and the diffusion matrix *D*. The activation free energy can be obtained through a variety of well-established equilibrium techniques such as umbrella sampling, or more recent nonequilibrium techniques based on Jarzynski equality. Some methods for computing the diffusion matrix are described in, e.g.,. Limitation of Langer’s theory ----------------------------- Some caution should be exercised in applying Langer’s theory; in this section we briefly discuss two such pitfalls. First, Langer’s result can fail in some unexpected cases where the assumption of separation of timescales implied by the flux-over-population procedure is violated. In the case of highly anisotropic diffusion, it is possible to arrange the relative positions of the reactant and product wells such that barrier (re)crossing over the saddle point can take place on a timescale much shorter than the time required to relax within either well due to slower within-well diffusion. If the diffusion anisotropy is extreme, the overall kinetics could become nonexponential, depending on the initial preparation of the system within the reactant well. In such cases, Langer’s theory predicts a rate constant that cannot properly account for the dynamics of within-well equilibration. This issue was first described in, and a corrected analytical expression for the rate constant in the two-dimensional case was derived in. More generally, anisotropy in the diffusion matrix and/or potential surface can lead to deviations from Langer’s theory; some case studies have been documented in, e.g.,. Second, for complex systems typically encountered in condensed-phase chemistry and biophysics, the difficulty with Langer’s theory is not the calculations entailed by the theory itself, but rather the question of how to select a small number of reaction coordinates that can provide a “complete” description of an activated rate process. This question is an active area of research (see, e.g., ), and in practice answering this question is often more akin to an art that relies on domain-specific knowledge and intuition. Typically, an incomplete description of the reaction leads to a loss of Markovianity of the projected stochastic process in the reaction coordinate space; mathematically, this prevents us from simplifying the generalized Langevin equation into the Langevin equation (see Section [reviewlangevin]), which was the starting point for the derivation of Kramers’ theory. When the memory kernel can be reasonably approximated, one approach to treating the non-Markovian dynamics in the reaction coordinate space is the Grote-Hynes theory based on the stable states picture, which is akin to Kramers’ theory but with the generalized Langevin equation as its starting point; a multidimensional generalization was developed in. Recent theoretical development has focused on alternative frameworks such as transition path theory, milestoning, weighted ensemble, and path sampling methods that circumvent these difficulties to various extent, usually at the expense of increased computational costs. Simplification of Langer’s theory by Berezhkovskii and Szabo ============================================================ It is often claimed that the committor is the ideal reaction coordinate to describe a reaction. In cases where Langer’s theory is adequate, this claim is supported by a striking result, which we will demonstrate in this section; namely, that projecting the system dynamics along the direction of *v*+ and then applying Kramers’ theory gives the same rate constant as Langer’s result, while projections along any other directions either give an overestimation or outright do not converge. Recall from our extensive discussion in Section [Langertheory] that *v*+ is the vector normal to the isocommittor planes near the saddle point and that *v*+ is the eigenvector of *H*‡*D* corresponding to the unique unstable mode at the saddle point. Gaussian surface integral over a plane -------------------------------------- Before we discuss the main result of this section, we take a detour here and show that for any vectors $x, x\_0, n\in \mathbb R^N$ and an invertible, symmetric matrix $A\in\mathbb R^{N\times N}$, $$\int\_\Omega\delta(\theta-n^Tx)e^{-\beta(x-x\_0)^TA(x-x\_0)/2}\,dx=\frac{e^{-\beta(\theta-n^Tx\_0)^2/2n^TA^{-1}n}}{\sqrt{(2\pi/\beta)^{1-N}(n^TA^{-1}n)\det A}} \label{gaussian\_delta\_integral}$$ whenever *n**T**A*− 1*n*det*A* > 0. This is a Gaussian integral restricted to a plane parameterized by *n**T**x* = *θ*, with *n* being the normal vector. Our strategy for evaluating is somewhat similar to the approach taken in Section [propertiesofJ(s)] for evaluating the surface integral of the steady-state flux vector; one major difference here is that does not involve a rank-deficient matrix. Again, we start the derivation by seeking a change of variable that makes the volume integral in separable. Since *A* is symmetric, there exists a diagonal matrix Λ and orthogonal matrix *Q* such that *Q**T**A**Q* = Λ. Here, the diagonal elements in Λ contain the eigenvalues of *A* and the columns of *Q* are the corresponding eigenvectors. Let us define the change of variable *x* = *Q**y*, along with *x*0 = *Q**y*0 and *n* = *Q**m*. The Jacobian of this transformation is det*Q* = 1. Before we apply the change of variable to, let us pause for a moment and consider the meaning of the condition *n**T**A*− 1*n*det*A* > 0. Since *A* is invertible and symmetric, all the eigenvalues of *A* are real and nonzero. If *A* is furthermore positive definite, then det*A* = ∏*λ**j* > 0 and *n**T**A*− 1*n* > 0 and thus converges for any nonzero $n\in\mathbb R^N$. We are more interested in the scenario where *A* has a single unstable mode. Let us denote this eigenvector as *v**j*ʹ and the corresponding negative eigenvalue as *λ**j*ʹ. In this case, det*A* < 0 and thus we require *n**T**A*− 1*n* < 0; this implies that *n* should not be too close to being perpendicular to *v**j*ʹ (i.e., the integration should not be over a plane close to being parallel to the unstable mode). To see why, let us write *n* = ∑*j**m**j**v**j* using the eigenvectors *v**j*’s in *Q* as the basis vectors. With this representation, *n**T**A*− 1*n* = ∑*j**λ**j*− 1*m**j*2; this sum is negative only if *m**j*ʹ is large enough such that ∣*λ**j*ʹ− 1∣*m**j*ʹ2 > ∑*j* ≠ *j*ʹ*λ**j*− 1*m**j*2. In the following analysis we will prove in the case of a single unstable mode. For readers not interested in the algebraic details, the rest of the section can be skipped without loss of continuity. Now, let us denote the integral in as *I*. With the change of variable, we see that $$\begin{aligned} I &= \int\delta(\theta-m^TQ^TQy)e^{-\beta (y-y\_0)^T\Lambda (y-y\_0)/2}\,dy\\ &= \frac{1}{2\pi}e^{-\beta y\_0^T\Lambda y\_0/2}\int\int e^{ik(\theta-m^Ty)}e^{-\beta y^T\Lambda y/2}e^{\beta y\_0^T\Lambda y}\,dydk\\ &= \frac{1}{2\pi}e^{-\beta y\_0^T\Lambda y\_0/2}\int e^{ik\theta}\int e^{-\beta y^T\Lambda y/2}e^{(\beta y\_0-ik\Lambda^{-1}m)^T\Lambda y}\,dydk \\ &= \frac{1}{2\pi}e^{-\beta y\_0^T\Lambda y\_0/2}\int e^{ik\theta} \prod\_j\int e^{-\frac{\lambda\_{j}}{2\beta}\left[\beta(y\_j-y\_{0j}) + ik\lambda\_{j}^{-1}m\_j\right]^2 + \frac{\lambda\_{j}}{2\beta}\left[ik\lambda\_{j}^{-1}m\_j-\beta y\_{0j}\right]^2}\,dy\_jdk \\ &= \frac{1}{2\pi}e^{-\beta y\_0^T\Lambda y\_0/2}\int e^{ik\theta} \prod\_je^{\frac{\lambda\_{j}}{2\beta}\left[ik\lambda\_{j}^{-1}m\_j-\beta y\_{0j}\right]^2} \left(\int e^{-\frac{\lambda\_{j}}{2\beta}\left[\beta(y\_j-y\_{0j}) + ik\lambda\_{j}^{-1}m\_j\right]^2}\,dy\_j\right)\,dk \label{all\_gaussian\_integrals}\end{aligned}$$ The second equality follows by using the integral representation of the Dirac delta function as in. At this point, we split the integrand of ∫*d**k* in into the product of two expressions. The first involves all *j*’s for which *λ**j* > 0, and the second consists of *j*ʹ for which *λ**j*ʹ < 0. The first group is $$\begin{gathered} \prod\_{j\ne j'}e^{\frac{\lambda\_{j}}{2\beta}\left[ik\lambda\_{j}^{-1}m\_j-\beta y\_{0j}\right]^2}\int e^{-\frac{\lambda\_{j}}{2\beta}\left[\beta(y\_j-y\_{0j}) + ik\lambda\_{j}^{-1}m\_j\right]^2}\,dy\_j = \prod\_{j\ne j'}e^{\frac{\lambda\_{j}}{2\beta}\left[ik\lambda\_{j}^{-1}m\_j-\beta y\_{0j}\right]^2}\sqrt{\frac{2\pi}{\beta\lambda\_j}}\\ = \left(\prod\_{j\ne j'}\sqrt{\frac{2\pi}{\beta\lambda\_{j}}}\right) e^{\frac{\beta}{2}\sum\_{j\ne j'}\lambda\_jy\_{0j}^2}e^{\sum\_{j\ne j'}\left(-k^2\lambda\_j^{-1}m\_j^2/2\beta-ikm\_jy\_{0j}\right)}\end{gathered}$$ While the second group is simply ∫*e**i**k**θ* − *i**k**m**j*ʹ*y**j*ʹ − *β**λ**j*ʹ*y**j*ʹ2/2 + *β**λ**j*ʹ*y*0*j*ʹ*y**j*ʹ *d**y**j*ʹ The original integral in can now be rewritten as $$\begin{aligned} I &= \frac{1}{2\pi}e^{-\frac{\beta}{2} y\_0^T\Lambda y\_0}\left(\prod\_{j\ne j'}\sqrt{\frac{2\pi}{\beta\lambda\_{j}}}\right) e^{\frac{\beta}{2}\sum\_{j\ne j'}\lambda\_jy\_{0j}^2}\int \int I(k)I(y\_{j'})\,dy\_{j'}dk\\ &= \frac{1}{2\pi}e^{-\frac{\beta}{2}\lambda\_{j'}y\_{0j'}^2}(2\pi/\beta)^{N-1}\left(\prod\_{j\ne j'}\lambda\_{j}^{-1/2}\right)\int I(y\_{j'})\int I(k)\,dkdy\_{j'}\end{aligned}$$ with $$\begin{aligned} I(k) &= e^{ik\theta - ikm\_{j'}y\_{j'} + \sum\_{j\ne j'}\left(-k^2\lambda\_j^{-1}m\_j^2/2\beta-ikm\_jy\_{0j}\right)} \\ I(y\_{j'}) &= e^{-\frac{\beta}{2}\lambda\_{j'}y\_{j'}^2+\beta\lambda\_{j'}y\_{0j'}y\_{j'}}\end{aligned}$$ To further simplify this expression, we first integrate over *k*, which is a Gaussian integral with a linear term, $$\int I(k)\,dk = \left(\frac{2\pi\beta}{\sum\_{j\ne j'}\lambda\_j^{-1}m\_j^2}\right)^{1/2} e^{-\frac{\beta}{2}\frac{\left(\sum\_{j\ne j'}m\_jy\_{0j}+m\_{j'}y\_{j'}-\theta\right)^2}{\sum\_{j\ne j'}\lambda\_j^{-1}m\_j^2}}$$ Then we integrate over *y**j*ʹ, which is again a Gaussian integral with a linear term, $$\begin{aligned} \int I(y\_{j'})\int I(k)\,dkdy\_{j'} &= \left(\frac{2\pi\beta}{\sum\_{j\ne j'}\lambda\_j^{-1}m\_j^2}\right)^{1/2} \int e^{-\frac{\beta}{2}\lambda\_{j'}y\_{j'}^2 + \beta\lambda\_{j'}y\_{0j'}y\_{j'} -\frac{\beta}{2}\frac{\left(\sum\_{j\ne j'}m\_jy\_{0j}+m\_{j'}y\_{j'}-\theta\right)^2}{\sum\_{j\ne j'}\lambda\_j^{-1}m\_j^2}}\,dy\_{j'}\\ &= \left(\frac{2\pi\beta}{\sum\_{j\ne j'}\lambda\_j^{-1}m\_j^2} \frac{2\pi\lambda\_{j'}^{-1}\sum\_{j\ne j'} \lambda\_j^{-1}m\_j^2}{\beta\sum\_j\lambda\_{j}^{-1}m\_j^2}\right)^{1/2} e^{\frac{\beta}{2}\lambda\_{j'}y\_{0j'}^2} e^{-\frac{\beta}{2}\frac{\left(\theta - \sum\_j m\_j y\_{0j}\right)^2}{\sum\_j \lambda\_j^{-1}m\_j^2}}\\ &= 2\pi \left(\lambda\_{j'}n^TA^{-1}n\right)^{-1/2} e^{\frac{\beta}{2}\lambda\_{j'}y\_{0j'}^2} e^{-\beta(\theta-n^Tx\_0)^2/2n^TA^{-1}n}\end{aligned}$$ The last equality follows from the fact that *n**T**A*− 1*n* = *m**T*Λ− 1*m* = ∑*j**λ**j*− 1*m**j*2. Taken together, we see that $$\begin{aligned} I &= e^{-\frac{\beta}{2}\lambda\_{j'}y\_{0j'}^2}(2\pi/\beta)^{N-1}\left(\prod\_{j\ne j'}\lambda\_{j}^{-1/2}\right)\left(\lambda\_{j'}n^TA^{-1}n\right)^{-1/2} e^{\frac{\beta}{2}\lambda\_{j'}y\_{0j'}^2} e^{-\beta(\theta-n^Tx\_0)^2/2n^TA^{-1}n}\\ &= (2\pi/\beta)^{N-1}\left(\prod\_{j}\lambda\_{j}^{-1/2}\right)\left(n^TA^{-1}n\right)^{-1/2} e^{-\beta(\theta-n^Tx\_0)^2/2n^TA^{-1}n} \\ &= \frac{e^{-\beta(\theta-n^Tx\_0)^2/2n^TA^{-1}n}}{\sqrt{(2\pi/\beta)^{1-N}(n^TA^{-1}n)\det A}}\end{aligned}$$ as desired. Projection of Langer’s result to one dimension ---------------------------------------------- Let us consider the same *N*-dimensional potential of mean force *W*(*r*) as in Section [Langertheory]. Here, we are interested in further projecting *W*(*r*) to a one-dimensional potential of mean force. Let us denote this direction by a vector *n*. Integrating away the degrees of freedom orthogonal to *n* is equivalent to performing a surface integral of the probability density *π*(*r*) over a family of planes that are orthogonal to *n*. Let us parameterize this family of planes as *n**T**r* = *θ* for $\theta\in\mathbb R$. After having worked through our derivation of Langer’s result, it should be obvious to the reader that the one-dimensional potential of mean force is *e*− *β**F*(*θ*) = *Z*− 1∫*δ*(*θ* − *n**T**r*)*e*− *β**W*(*r*) *d**r* Adapting Kramers’ result in to *F*(*θ*), the one-dimensional rate constant is $$k(n)= \left(\int\_{\theta\_A}^{\theta\_B}\frac{e^{\beta F(\theta)}}{n^TDn}\,d\theta \int\_{-\infty}^{\theta^\ddagger}e^{-\beta F(\theta)}\,d\theta\right)^{-1} \label{Kramers\_rate\_BS\_proj}$$ where *θ**A* and *θ**B* are the positions of free energy minima at the reactant and product well, respectively, and *θ*‡ is the position of the saddle point in the one-dimensional projection. The expression *n**T**D**n* is the one-dimensional diffusion constant along *n*. To understand this expression, first note that since the diffusion matrix represents a physical property of the system, the representation of *D* should change with coordinate transformations in such a way as to leave the underlying physics invariant. This makes *D* a tensor, specifically a second-order contravariant tensor. Under the transformation *r* ↦ *θ*, the tensor *D**i**j* transforms correspondingly to a zeroth-order tensor (i.e., a constant) by $$\sum\_{ij}D\_{ij}\frac{\partial\theta}{\partial r\_i}\frac{\partial\theta}{\partial r\_j}=\sum\_{ij}n\_iD\_{ij}n\_j=n^TDn$$ as desired. Using and the harmonic approximations at the saddle point, the first integral in evaluates to $$\begin{aligned} \int\_{\theta\_A}^{\theta\_B}\frac{e^{\beta F(\theta)}}{n^TDn}\,d\theta &\approx Z\frac{e^{\beta\Delta W^\ddagger}}{n^TDn} \int\left(\int \delta(\theta-n^Tr)e^{-\beta(r-r^\ddagger)^TH^\ddagger(r-r^\ddagger)/2}\,dr\right)^{-1}d\theta\\ &= Z\frac{e^{\beta\Delta W^\ddagger}}{n^TDn}\sqrt{(2\pi/\beta)^{1-N}(n^TH^{-\ddagger}n)\det H^\ddagger}\sqrt{(2\pi/\beta) |n^TH^{-\ddagger}n|}\\ &= (2\pi/\beta)^{1-N/2}Ze^{\beta\Delta W^\ddagger}\frac{|n^TH^{-\ddagger}n|}{n^TDn}|\det H^\ddagger|^{1/2}\end{aligned}$$ Note that since det*H*‡ < 0, the use of requires that *n**T**H*−  ‡*n* < 0; otherwise the integral diverges as we discussed in the previous section. Similarly, with the harmonic approximation at the reactant well, the second integral in evaluates to $$\begin{aligned} \int\_{-\infty}^{\theta^\ddagger}e^{-\beta F(\theta)}\,d\theta &\approx Z^{-1} \int\int \delta(\theta-n^Tr)e^{-\beta(r-r\_A)^TH\_A(r-r\_A)/2}\,drd\theta \\ &= Z^{-1}\sqrt{\frac{(2\pi/\beta) (n^TH^{-1}\_An)}{(2\pi/\beta)^{1-N}(n^TH^{-1}\_An)\det H\_A}}\\ &= Z^{-1}(2\pi/\beta)^{N/2}\left(\det H\_A\right)^{-1/2}\end{aligned}$$ The application of here does not impose any further conditions on *n*, since *H**A* is symmetric positive definite. Together, the rate constant is $$k(n)= \frac{\beta}{2\pi}\sqrt{\frac{\det H\_A}{|\det H^\ddagger|}}\frac{n^TDn}{|n^TH^{-\ddagger}n|}e^{-\beta\Delta W^\ddagger} \label{BS\_rate\_constant}$$ The one-dimensional rate constant in is reminiscent of Langer’s result in, except with *λ*+ replaced by *n**T**D**n*/∣*n**T**H*−  ‡*n*∣. In fact, *n**T**D**n*/∣*n**T**H*−  ‡*n*∣ = *λ*+ when *n* is parallel to *v*+ (this should be clear from the fact that *H*−  ‡*v*+ =  − *λ*+− 1*D**v*+.) Furthermore *k*(*v*+) is the minimum of *k*(*n*) (more strictly speaking, any vector proportional to *v*+ will do). To see why, recall the earlier discussion related to the simultaneous diagonalization of *H*‡ and *D* by *V* in. With a change of variable *n* = *V**m*, $$\frac{n^TDn}{|n^TH^{-\ddagger}n|} = \frac{m^TV^{T}DVm}{|m^TV^TH^{-\ddagger}Vm|} = \frac{m^Tm}{|m^T\Lambda^{-1} m|} = \frac{\|m\|^2}{|-\lambda\_+^{-1}m\_+^2 + \sum\_{j\ne+}\lambda\_j^{-1} m\_j^2|}$$ The expression *m**T*Λ− 1*m*/*m**T**m* is an example of a Rayleigh quotient. In the absence of any constraint, the quotient is bounded between the largest and smallest eigenvalues of Λ− 1. This bound is unfortunately not helpful, since we require that *n**T**H*−  ‡*n* (and thus *m**T*Λ− 1*m*) be negative. For any fixed length ∥*m*∥, the expression ∣ − *λ*+− 1*m*+2 + ∑*j* ≠  +*λ**j*− 1*m**j*2∣ in the denominator is maximized under this constraint whenever the positive terms in the sum ∑*j* ≠  +*λ**j*− 1*m**j*2 are minimized. This is achieved by setting all *m**j* = 0 except for ∣*m*+∣ = ∥*m*∥; in other words, *m* is proportional to *e*+, and thus *n* is proportional to *v*+ (the change of variable described here is the same as that in Section [fluxsurfaceintegral], but one should be careful not to confuse the vectors *n* and *m* defined in this section with those defined in Section [fluxsurfaceintegral]). Taken together, we have shown that *n* = *v*+ minimizes and the minimum is equivalent to, as desired. Acknowledgement =============== I would like to thank Robert Alberstein for his critical reading of the note, and I would like to acknowledge support through Chan Zuckerberg Biohub Investigator funds to Dr. Tanja Kortemme (UCSF). An Introduction to Langer’s Theory ================================== This note provides a pedagogical introduction to Langer’s theory for activated rate processes in multiple dimensions at the high friction limit, with an emphasis on the connection between the theory and the property of the backward committor/splitting probability near the saddle point. The intended audience is assumed to have some familiarity with linear algebra and statistical mechanics while knowledge of stochastic processes is not strictly necessary. **Keywords:** Kramers’ theory, KLBS theory This note is intended as an introduction to Langer’s theory, the multidimensional extension of Kramers’ theory for activated rate processes in the high friction limit. The note is organized as follows: Section [reviewsection] briefly reviews some elements of the theory of stochastic process, which prepares for the introduction of Kramers’ theory in Section [Kramerstheory], where the flux-over-population method is demonstrated in the relatively simple context of a one-dimensional double-well potential. The majority of this note is devoted to the discussion of Langer’s theory in Section [Langertheory]. The derivation begins by relating the steady-state probability density in the flux-over-population method to the backward committor, followed by a detailed analysis of the behavior of the committor near the saddle point. This allows us to derive an expression for the probability flux vector field, culminating in the presentation of the multidimensional rate constant in equation. Lastly, Section [BSsimplification] discusses some more recent development by Berezhkovskii and Szabo that allows one to project Langer’s result back to one dimension. Together, the results presented in this note are sometimes referred to as the KLBS theory. Given the venerable age of Langer’s theory and the existence of several review articles and textbook partly devoted to this subject, one might wonder why this note is necessary. This introduction certainly draws heavily on the aforementioned works, but differs in two regards. First, almost all current accounts of Langer’s theory are quite concise. While this feature is excellent for experts who are in need of a quick refresher, it poses significant challenges for beginners, because the derivation of the theory can be quite tedious and involve some mathematical tricks that are not standard knowledge for a typical reader. One goal of this note, therefore, is to derive Langer’s theory while erring on the side of presenting too many, rather than too few, algebraic details. Second, an interesting feature of Langer’s theory is that the expression for the rate constant critically depends on several basic geometric features of the committor/splitting probability near the saddle point. This connection was not explicitly stated in Langer’s original paper, and is usually not fully developed in recent accounts of the theory. This is, again, unfortunate, since this connection provides a simple setting for gaining an intuitive understanding of the behavior of the committor near an idealized transition state. Given the prominent role the committor plays in modern rate theories, such as the transition path theory, such intuitions can be valuable in understanding the more recent development. Therefore, as described above, a second goal of this note is to make explicit the connection of Langer’s theory to the committor function and provide a detailed analysis thereof. The primary intended audience for this note are students and researchers in chemistry and biophysics interested in condensed-phase simulations of macromolecules. The reader should be familiar with linear algebra and have some exposure to classical statistical mechanics and some rate theory (e.g., Arrhenius equation); knowledge of stochastic processes is not strictly necessary although results from the theory will be invoked in some derivations. A brief review of stochastic process ==================================== In this section we briefly review some concepts from the theory of stochastic process relevant for our discussion. Readers who have never studied this subject should either consult standard references on stochastic processes and statistical mechanics, or simply take the results presented below as given and fill in their missing background knowledge later. The presentation in this section partially follows that in Chapter 15 of. Overdamped Langevin equation ---------------------------- Consider a (closed) system that is in contact with a large heat bath. In a typical (classical) simulation, the system consists of the macromolecule(s) of interest as well as some water molecules and counterions. The time evolution of the full system is determined by the Hamiltonian equations, and the equilibrium phase space distribution function is microcanonical. To avoid explicit representation of the bath, we “abstract away” the bath degrees of freedom, and the remaining system dynamics can be described via the generalized Langevin equation: $$\begin{aligned} r\_i'(t) &= \frac{p\_i(t)}{m\_i} \\ p\_i'(t) &= -\frac{\partial W(r)}{\partial r\_i} - \int\_0^t \sum\_j \sqrt{m\_im\_j}\gamma\_{ij}(t-\tau)r\_j'(\tau)\,d\tau + \xi\_i(t) \label{GLE\_v'}\end{aligned}$$ Here, *r**i*, *p**i*, and *m**i* are the position, momentum, and (renormalized) mass of the *i*th degree of freedom in the remaining system, and *W*(*r*) is the potential of mean force obtained from the full potential energy function by averaging over the bath degrees of freedom. The generalized Langevin equation can be derived via either the harmonic bath model or, more rigorously, via the Mori–Zwanzig theory. The generalized Langevin equations is reminiscent of Newton’s equation of motion, but the price we pay for doing away with the bath degrees of freedom is the difficulty of dealing with a set of stochastic integro-differential equations. Specifically, compared to Newton’s equations, there are two additional terms in that provide a coarse-grained model of the effect of the bath on the system. Here, we have “decomposed” the effect of the bath into two parts. First, the fluctuation term *ξ**i*(*t*) represents a random force (i.e., noise) acting on the system. Although the motion of the bath is fully deterministic, by ignoring the molecular details we model *ξ**i*(*t*) as a random, or stochastic, process. For a system solvated by a dense bath, such as liquid water, that affects the system dynamics through soft collisions (i.e., weak noise), a common model for *ξ**i*(*t*) is a Gaussian random process with zero mean. Second, the dissipation term ∫0*t**γ**i**j*(*t* − *τ*)*r**j*ʹ(*τ*) *d**τ* is a convolution integral that acts as a friction force slowing down the system. This integral is called a memory integral; the term *γ**i**j*(*t*) is known as the memory kernel, or dynamic friction kernel, and it encodes the “memory” of the motion of the system by the bath. Physically, the memory integral represents the fact that the bath requires a finite amount of time to respond to fluctuations in the system and this lag affects the motion of the system. At equilibrium, the fluctuation term and the dissipation term are related by the second fluctuation-dissipation theorem, $$\mathbb E[\xi\_i(t)\xi\_j(t')] = k\_BT\sqrt{m\_im\_j}\gamma\_{ij}(|t-t'|) \label{GLE\_colored\_noise}$$ Now, we make some further simplifying assumptions to make more analytically tractable. First, we assume that the bath responds instantaneously to the motion of the system; i.e., we assume that the memory integral decays instantaneously and the bath has no memory of the system history. This is a good model when the (renormalized) mass of the system is much larger than that of the bath. In this case, the memory kernel becomes *γ**i**j*(*t*) = 2*γ**i**j**δ*(*t*), where we have defined *γ**i**j* = ∫0∞*γ**i**j*(*t*) *d**t* as the static friction kernel, or simply the friction coefficient, and *δ*(*t*) is a Dirac delta function. The fluctuation-dissipation theorem now reads $$\mathbb E[\xi\_i(t)\xi\_j(t')] = 2k\_BT \sqrt{m\_im\_j}\gamma\_{ij}\delta(|t-t'|) \label{GLE\_white\_noise}$$ Stochastic processes with an autocorrelation function of this form are called white noises, and the terminology reflects the fact that the power spectral density of the process is a constant over all frequencies. With these assumptions, the generalized Langevin equation becomes $$\begin{aligned} r\_i'(t) &= \frac{p\_i(t)}{m\_i} \\ p\_i'(t) &= -\frac{\partial W(r)}{\partial r\_i} - \sum\_j\sqrt{m\_im\_j}\gamma\_{ij}r\_j'(t) + \xi\_i(t) \label{LE\_v'}\end{aligned}$$ which is known as the Langevin equation. Compared to the generalized Langevin equation, the Langevin equation describes stochastic processes that are Markovian (i.e., memoryless). Second, for dense solvent, such as water, the high friction and frequent collisions with the system leads to, on a short timescale (i.e., *t* < *γ*− 1), rapid fluctuations in the acceleration *p**i*ʹ(*t*). However, on a longer timescale the change in the time-averaged velocity will be small as the effect of collisions cancel out each other. This allows us to set the acceleration in to zero, which leads to $$\sum\_j\sqrt{m\_im\_j}\gamma\_{ij}r\_j'(t) = -\frac{\partial W(r)}{\partial r\_i} + \xi\_i(t) \label{overdamped\_LE}$$ The motion described by goes by many names, such as diffusion, Brownian motion, or overdamped Langevin dynamics; the equation itself is known as the overdamped Langevin equation. In some applications, the cross-correlation terms in the memory kernel are ignored, in which case takes on a simpler form $$m\_i\gamma\_{ii}r\_i'(t) = -\frac{\partial W(r)}{\partial r\_i} + \xi\_i(t)$$ Smoluchowski equation --------------------- For the purpose of describing the kinetics of activated rate processes at equilibrium, working directly with is inconvenient, since trajectories consistent with are individual realizations of the system dynamics, while we are more interested in the statistics of an ensemble of such realizations. In other words, we are more interested in *p*(*r*, *t*), the probability density of the system (strictly speaking, *p*(*r*, *t*) is a conditional probability density function more appropriated denoted as *p*(*r*, *t*∣*r*0, *t*0)). The time evolution of *p*(*r*, *t*) for processes governed by is given by $$\frac{\partial p(r, t)}{\partial t}=-\nabla\cdot J(r, t) \quad\text{with}\quad J(r, t) = -D\pi(r)\nabla\frac{p(r, t)}{\pi(r)} \label{Langer\_FP}$$ where *π*(*r*) is the Boltzmann distribution in the configuration space and the stationary solution to, *J*(*r*, *t*) is the (probability) flux, and *D* is the (position-independent) diffusion matrix, which we assume to be symmetric positive definite. Elements of the diffusion matrix are related to the static friction kernel defined in via the relation $$D\_{ij}=\frac{k\_BT}{\sqrt{m\_im\_j}\gamma\_{ij}}$$ Equations such as are known as the Smoluchowski equation, a special case of a class of partial differential equations known as the Fokker-Planck equations, or Kolmogorov’s forward equations. It is also common to refer to the first part of as the continuity equation for the probability density, and the second part of as a “constitutive” equation. For the following discussion it is also convenient to rewrite in two forms. First, we can rewrite component-wise: ∂*t**p*(*r*, *t*) = ∑*i**j*∂*i**D**i**j*[*β*∂*j**W*(*r*) + ∂*j*]*p*(*r*, *t*) where we have used a short-hand notation ∂*t* = ∂/∂*t* and ∂*i* = ∂/∂*r**i*. For readers not familiar with the component-wise notation, they should convince themselves that, e.g., the *i*th component of a matrix-vector product *A**x* can be expressed as ∑*j**A**i**j**x**j*. Second, we can rewrite as an equation involving the Fokker-Planck operator *L*†, $$\frac{\partial p(r, t)}{\partial t}=L^\dagger p(r, t) \quad\text{where}\quad L^\dagger=\nabla\cdot D\pi(r)\nabla\pi^{-1}(r) \label{Langer\_FP\_operator}$$ Here, the symbol  †  indicates that the Fokker-Planck operator *L*† is the adjoint of another operator *L* known as the generator, *L* = *π*− 1(*r*)∇ ⋅ *D**π*(*r*)∇ The concept of an adjoint operator is a generalization of the Hermitian transpose of a matrix. As shows, the Fokker-Planck operator dictates the time evolution of the probability density *p*(*r*, *t*). There is a similar interpretation for the generator: the generator dictates the time evolution of (conditional) ensemble averages, or observables, of the form $u(r, t)=\mathbb E[f(r(t))|r(0)=r\_0]$, for some suitable scalar function *f* defined on the configuration space; i.e., $$\frac{\partial u(r, t)}{\partial t}=Lu(r, t)$$ Kramers’ theory at the high friction limit ========================================== In this section we consider Kramers’ theory in the context of a one-dimensional Brownian particle whose motion is described by the one-dimensional version of the overdamped Langevin equation, $$m\gamma r'(t)=-\frac{dW}{dr}+\xi(t) \quad\text{where}\quad \mathbb E[\xi(t)]=0, \mathbb E[\xi(t)\xi(t')] = 2k\_BT m\gamma\delta(|t-t'|) \label{1D\_overdamp\_langevin}$$ From, it follows that the probability density *p*(*r*, *t*) satisfies the following Fokker-Planck equation $$\frac{\partial p(r, t)}{\partial t}=-\frac{\partial J(r, t)}{\partial r} \quad\text{with}\quad J(r, t)=-D\left[\beta\frac{dW}{dr}+\frac{\partial}{\partial r}\right]p(r, t) \label{Kramers\_FP}$$ where *D* = *k**B**T*/*m**γ* is the position-independent diffusion constant. The stationary solution to is the Boltzmann distribution *π*(*r*) = *e*− *β**W*(*r*)/*Z*, where *Z* is a configurational partition function. Here, we take *W*(*r*) to be an (asymmetric) double-well potential, where the minimum of the reactant well *A* is located at *r**A*, the minimum of the product well *B* is located at *r**B*, and the transition state *r*‡ is identified as the position of the peak of *W*(*r*) between the two wells; without loss of generality, we assume that *r**A* < *r*‡ < *r**B*. An example of such a potential is shown in Figure [fig:1dpmf]. Further, we assume that the height of the barrier *W*(*r*‡) is much larger than *k**B**T*, so that there is a separation of timescales between barrier crossing and within-well equilibration. A related assumption here is that barrier crossing is much slower than the correlation time of the dynamic friction kernel so that the static friction kernel approximation can be justified. Together, these assumptions amount to the situation where a single slow degree of freedom in the system *r* is sufficient for describing the reaction, while the rest of the degrees of freedom relax much faster than the timescale of barrier crossing along *r*; we should thus interpret “bath” as also including other degrees of freedom of the macromolecule that are not explicitly treated, and interpret *r* not necessarily as a position variable but more generally as a reaction coordinate (i.e., some function of the position variables *r**i*). We mention here in passing that Kramers also derived expressions for the rate constant in the weak and moderate-to-high friction regimes. We will not discuss these results since they are not as relevant for condensed-phase simulations. ![An example one-dimensional double-well potential of mean force. The potential is given by W(r)=-\ln\left(e^{-2(x+1)^2}+e^{-2(x-1)^2+1}\right).](1D_example.png "fig:") [fig:1dpmf] Kramers’ theory via the flux-over-population method --------------------------------------------------- Kramers derived an expression for the rate constant of the *A* → *B* reaction, *k**A**B*, using what is now known as the flux-over-population method. Consider a hypothetical procedure in which an ensemble of *n* particles are prepared in the reactant well *A* (usually we set *n* = 1), where they rapidly reach thermal equilibrium on a timescale much faster than that of escaping to the product well *B*. Whenever a particle escapes from the reactant well and reaches the product well, it is immediately removed and a new one is added to the reactant well, such that the reactant population is always maintained at *n*. As we show in the analysis below, the exact positions at which a particle is considered escaped and at which a new one is inserted are not very important. As the system reaches a non-equilibrium steady-state, there is a non-zero probability current *J*, or flux, into the product well, which is the reaction rate (i.e., the number of transitions from *A* to *B* per unit time). We normalize this reaction rate by the reactant population, which gives the rate constant, *k**A**B* = *J*/*n* hence the name “flux-over-population”. Before flushing out the algebraic details, let us pause for a moment and consider why the flux-over-population procedure is needed. In other words, why does the calculation of an equilibrium rate constant invoke the flux of a seemingly contrived non-equilibrium process? A simple answer is that the (net) flux between any two states (micro- or macro-) in equilibrium is zero, due to detailed balance, and thus gives no information about kinetics. To see why the flux-over-population procedure circumvent this problem, let us consider the behavior of particles at some point near the transition state and see how they contribute to the flux across that point at equilibrium. At any given moment, such a particle can be categorized into one of four groups, depending on its past and future behavior: 1. The particle came from *A* and will move to *B* before going back to *A*. 2. The particle came from *A* and will go back to *A* before moving to *B*. 3. The particle came from *B* and will move to *A* before going back to *B*. 4. The particle came from *B* and will go back to *B* before moving to *A*. Behavior described in group 2 and 4 are known as barrier recrossing and does not contribute to the flux, since each pair of crossing-recrossing cancel out each other (this is to be contrasted with the procedure in transition-state theory, where all (re)crossing events are counted towards the total flux across some dividing surface ). Because the net equilibrium flux is zero, this implies that the flux contributed by group 1 and 3 cancels as well. Note that, for ergodic dynamics, this categorization is exhaustive and it is not possible for a particle to cross the barrier once and stay in one of the two wells forever. By removing particles reaching the product well, the flux-over-population procedure removes the flux contribution from group 3 (and group 4), while the only remaining nonzero flux contribution from group 1 stays close to its equilibrium value at the steady state due to the separation of timescales. As such, the nonequilibrium steady-state flux *J* in this procedure gives the true *A* to *B* reaction rate. At steady state, the population distribution *p*ss(*r*) is time-independent, and thus by the flux *J* is now both time- and position-independent. The two quantities are related via $$J = -D\left[\beta\frac{dW}{dr}+\frac{\partial}{\partial r}\right]p\_\text{ss}(r)= -De^{-\beta W(r)}\frac{d}{dr}\left[e^{\beta W(r)}p\_\text{ss}(r)\right]$$ Now, divide both sides by *D**e*− *β**W*(*r*) and integrate both sides over the interval [*r**A*, *r**B*], which gives $$J\int\_{r\_A}^{r\_B}D^{-1}e^{\beta W(r)}\,dr = - e^{\beta W(r)}p\_\text{ss}(r)\big|\_{r=r\_A}^{r\_B} \label{Kramers\_flux}$$ We now consider the behavior of *p*ss(*r*) near the two boundaries: *r**A* and *r**B*. At *r* = *r**B*, we require *p*ss(*r**B*) = 0 to satisfy the absorbing boundary condition caused by particle removal. At positions in the reactant well away from the absorbing boundary condition at *r**B*, we assume that the system is close to equilibrium, i.e., *p*ss(*r*) ≈ *π*(*r*), because the particle insertion procedure maintains the reactant population *n* in the reactant well, and inserted particles thermalize much faster than the reaction timescale. This observation has two consequences. First, it implies that *p*ss(*r**A*) = *π*(*r**A*). Second, it implies that the population in the reactant well is *n* = ∫− ∞*r*‡*p*ss(*r*) *d**r* ≈ ∫− ∞*r*‡*π*(*r*) *d**r* Now, apply the boundary conditions at *p*ss(*r**A*) and *p*ss(*r**B*) to, then apply the flux-over-population formula, and we arrive at $$\begin{aligned} k\_{AB} &= \frac{e^{\beta W(r\_A)}p\_\text{ss}(r\_A) - e^{\beta W(r\_B)}p\_\text{ss}(r\_B)}{\int\_{r\_A}^{r\_B}D^{-1}e^{\beta W(r)}\,dr \int\_{-\infty}^{r^\ddagger}p\_\text{ss}(r)\,dr} \\ &= \frac{e^{\beta W(r\_A)}\pi(r\_A)}{\int\_{r\_A}^{r\_B}D^{-1}e^{\beta W(r)}\,dr \int\_{-\infty}^{r^\ddagger}\pi(r)\,dr} \\ &= \frac{1}{\int\_{r\_A}^{r\_B}D^{-1}e^{\beta W(r)}\,dr \int\_{-\infty}^{r^\ddagger}e^{-\beta W(r)}\,dr} \label{Kramers\_rate\_1}\end{aligned}$$ If we assume that *W*(*r*) is harmonic around *r*‡ and *r**A*, we write, for *W*(*r*) near *r*‡ and *r**A*, $$W(r)\approx\Delta W^\ddagger -\frac 1 2 \kappa^\ddagger\left(r-r^\ddagger\right)^2 \quad\text{and}\quad W(r)\approx \frac 1 2 \kappa\_A\left(r-r\_A\right)^2 \label{Kramers\_harmonic}$$ where we have defined Δ*W*‡ = *W*(*r*‡) − *W*(*r**A*) and set *W*(*r**A*) = 0 (resetting the zero of the potential of mean force has the effect of changing the partition function, which has no effect on our derivation). Here, *κ**A* and *κ*‡ are the force constants of the harmonic potentials; equivalently, *κ**A* and *κ*‡ can also be interpreted as the curvatures at *r**A* and *r*‡, respectively. Applying to gives the following approximate results $$\begin{aligned} \int\_{r\_A}^{r\_B}D^{-1}e^{\beta W(r)}\,dr &\approx \int\_{-\infty}^\infty D^{-1}e^{\beta\left(\Delta W^\ddagger - \kappa^\ddagger r^2/2\right)}\,dr = D^{-1}\sqrt{\frac{2\pi}{\beta\kappa^\ddagger}} e^{\beta\Delta W^\ddagger} \label{Kramers\_int\_saddle}\\ \int\_{-\infty}^{r^\ddagger}e^{-\beta W(r)}\,dr &\approx \int\_{-\infty}^\infty e^{-\beta\kappa\_Ar^2/2}\,dr = \sqrt{\frac{2\pi}{\beta\kappa\_A}} \label{Kramers\_int\_reactant}\end{aligned}$$ These results provide a good approximation whenever the region of validity for the harmonic approximation is large enough that the added probability mass by taking the integration limit to infinity is negligible. Substituting and back to gives $$k\_{AB}=\frac{\beta D}{2\pi}\sqrt{\kappa\_A\kappa^\ddagger} e^{-\beta\Delta W^\ddagger} \label{Kramers\_rate\_2}$$ This is the rate constant predicted by Kramers’ theory at the high friction limit. Since the dynamics at high friction is diffusive, it is also known as the spatial-diffusion-limited rate. The rate constant for the reverse reaction can be obtained analogously by replacing the reactant well *A* with the product well *B* in the preceding derivation. If the diffusion constant is position-dependent, the derivation up to (and include) is still valid. The diffusion constant can be “folded” into the potential of mean force *W*(*r*) as *e**β**W*(*r*)/*D*(*r*) = *e**β**W*(*r*) − ln*D*(*r*) = *e**β*[*W*(*r*) − *k**B**T*ln*D*(*r*)] This defines a new potential surface *
arxiv_0000146
Extractor-Based Time-Space Lower Bounds for Learning ==================================================== A matrix $M: \AA \times \XX \rightarrow \{-1,1\}$ corresponds to the following learning problem: An unknown element $x \in \XX$ is chosen uniformly at random. A learner tries to learn *x* from a stream of samples, (*a*1, *b*1), (*a*2, *b*2)…, where for every *i*, $a\_i \in \AA$ is chosen uniformly at random and *b**i* = *M*(*a**i*, *x*). Assume that *k*, ℓ, *r* are such that any submatrix of *M* of at least 2− *k* ⋅ ∣*A*∣ rows and at least 2− ℓ ⋅ ∣*X*∣ columns, has a bias of at most 2− *r*. We show that any learning algorithm for the learning problem corresponding to *M* requires either a memory of size at least Ω(*k* ⋅ ℓ), or at least 2Ω(*r*) samples. The result holds even if the learner has an exponentially small success probability (of 2− Ω(*r*)). In particular, this shows that for a large class of learning problems, any learning algorithm requires either a memory of size at least Ω((log∣*X*∣) ⋅ (log∣*A*∣)) or an exponential number of samples, achieving a tight Ω((log∣*X*∣) ⋅ (log∣*A*∣)) lower bound on the size of the memory, rather than a bound of Ω(min{(log∣*X*∣)2, (log∣*A*∣)2}) obtained in previous works . Moreover, our result implies all previous memory-samples lower bounds, as well as a number of new applications. Our proof builds on  that gave a general technique for proving memory-samples lower bounds. Introduction ============ Can one prove unconditional lower bounds on the number of samples needed for learning, under memory constraints? The study of the resources needed for learning, under memory constraints was initiated by Shamir  and by Steinhardt, Valiant and Wager . While the main motivation for studying this question comes from learning theory, the problem is also relevant to computational complexity and cryptography . Steinhardt, Valiant and Wager conjectured that any algorithm for learning parities of size *n* requires either a memory of size Ω(*n*2) or an exponential number of samples. This conjecture was proven in , showing for the first time a learning problem that is infeasible under super-linear memory constraints. Building on , it was proved in  that learning parities of sparsity ℓ is also infeasible under memory constraints that are super-linear in *n*, as long as ℓ ≥ *ω*(log*n*/loglog*n*). Consequently, learning linear-size DNF Formulas, linear-size Decision Trees and logarithmic-size Juntas were all proved to be infeasible under super-linear memory constraints  (by a reduction from learning sparse parities). Can one prove similar memory-samples lower bounds for other learning problems? As in , we represent a learning problem by a matrix. Let $\XX$, $\AA$ be two finite sets of size larger than 1 (where $\XX$ represents the concept-class that we are trying to learn and $\AA$ represents the set of possible samples). Let $M: \AA \times \XX \rightarrow \{-1,1\}$ be a matrix. The matrix *M* represents the following learning problem: An unknown element $x \in \XX$ was chosen uniformly at random. A learner tries to learn *x* from a stream of samples, (*a*1, *b*1), (*a*2, *b*2)…, where for every *i*, $a\_i \in \AA$ is chosen uniformly at random and *b**i* = *M*(*a**i*, *x*). Let $n = \log|\XX|$ and $n' = \log|\AA|$. A general technique for proving memory-samples lower bounds was given in . The main result of  shows that if the norm of the matrix *M* is sufficiently small, then any learning algorithm for the corresponding learning problem requires either a memory of size at least Ω((min{*n*, *n*ʹ})2), or an exponential number of samples. This gives a general memory-samples lower bound that applies for a large class of learning problems. Independently of , Moshkovitz and Moshkovitz also gave a general technique for proving memory-samples lower bounds . Their initial result was that if *M* has a (sufficiently strong) mixing property then any learning algorithm for the corresponding learning problem requires either a memory of size at least 1.25 ⋅ min{*n*, *n*ʹ} or an exponential number of samples . In a recent subsequent work , they improved their result, and obtained a theorem that is very similar to the one proved in . (The result of  is stated in terms of a combinatorial mixing property, rather than matrix norm. The two notions are closely related (see in particular Corollary 5.1 and Note 5.1 in )). ### Our Results The results of  and  gave a lower bound of at most Ω((min{*n*, *n*ʹ})2) on the size of the memory, whereas the best that one could hope for, in the information theoretic setting (that is, in the setting where the learner’s computational power is unbounded), is a lower bound of Ω(*n* ⋅ *n*ʹ), which may be significantly larger in cases where *n* is significantly larger than *n*ʹ, or vice versa. In this work, we build on  and obtain a general memory-samples lower bound that applies for a large class of learning problems and shows that for every problem in that class, any learning algorithm requires either a memory of size at least Ω(*n* ⋅ *n*ʹ) or an exponential number of samples. Our result is stated in terms of the properties of the matrix *M* as a two-source extractor. Two-source extractors, first studied by Santha and Vazirani  and Chor and Goldreich , are central objects in the study of randomness and derandomization. We show that even a relatively weak two-source extractor implies a relatively strong memory-samples lower bound. We note that two-source extractors have been extensively studied in numerous of works and there are known techniques for proving that certain matrices are relatively good two-source extractors. Our main result can be stated as follows (Corollary [cor:main1]): Assume that *k*, ℓ, *r* are such that any submatrix of *M* of at least 2− *k* ⋅ ∣*A*∣ rows and at least 2− ℓ ⋅ ∣*X*∣ columns, has a bias of at most 2− *r*. Then, any learning algorithm for the learning problem corresponding to *M* requires either a memory of size at least Ω(*k* ⋅ ℓ), or at least 2Ω(*r*) samples. The result holds even if the learner has an exponentially small success probability (of 2− Ω(*r*)). A more detailed result, in terms of the constants involved, is stated in Theorem [thm:TM-main1] in terms of the properties of *M* as an *L*2-Extractor, a new notion that we define in Definition [definition:l2-extractor], and is closely related to the notion of two-source extractor. (The two notions are equivalent up to small changes in the parameters.) All of our results (and all applications) hold even if the learner is only required to *weakly learn* *x*, that is to output a hypothesis *h* : *A* → { − 1, 1} with a non-negligible correlation with the *x*-th column of the matrix *M*. We prove in Theorem [thm:TM-main2] that even if the learner is only required to output a hypothesis that agrees with the *x*-th column of *M* on more than a 1/2 + 2− Ω(*r*) fraction of the rows, the success probability is at most 2− Ω(*r*). As in , we model the learning algorithm by a *branching program*. A branching program is the strongest and most general model to use in this context. Roughly speaking, the model allows a learner with infinite computational power, and bounds only the memory size of the learner and the number of samples used. As mentioned above, our result implies all previous memory-samples lower bounds, as well as new applications. In particular: 1. **Parities:** A learner tries to learn *x* = (*x*1, …, *x**n*) ∈ {0, 1}*n*, from random linear equations over F2. It was proved in  (and follows also from ) that any learning algorithm requires either a memory of size Ω(*n*2) or an exponential number of samples. The same result follows by Corollary [cor:main1] and the fact that inner product is a good two-source extractor . 2. **Sparse parities:** A learner tries to learn *x* = (*x*1, …, *x**n*) ∈ {0, 1}*n* of sparsity ℓ, from random linear equations over F2. In Section [sec:sparse-parities], we reprove the main results of . In particular, any learning algorithm requires: 1. Assuming ℓ ≤ *n*/2: either a memory of size Ω(*n* ⋅ ℓ) or 2Ω(ℓ) samples. 2. Assuming ℓ ≤ *n*0.9: either a memory of size Ω(*n* ⋅ ℓ0.99) or ℓΩ(ℓ) samples. 3. **Learning from sparse linear equations:** A learner tries to learn *x* = (*x*1, …, *x**n*) ∈ {0, 1}*n*, from random sparse linear equations, of sparsity ℓ, over F2. In Section [sec:sparse-equations], we prove that any learning algorithm requires: 1. Assuming ℓ ≤ *n*/2: either a memory of size Ω(*n* ⋅ ℓ) or 2Ω(ℓ) samples. 2. Assuming ℓ ≤ *n*0.9: either a memory of size Ω(*n* ⋅ ℓ0.99) or ℓΩ(ℓ) samples. 4. **Learning from low-degree equations:** A learner tries to learn *x* = (*x*1, …, *x**n*) ∈ {0, 1}*n*, from random multilinear polynomial equations of degree at most *d*, over F2. In Section [sec:low-deg-equations], we prove that if *d* ≤ 0.99 ⋅ *n*, any learning algorithm requires either a memory of size $\Omega\left( \binom{n}{\le d} \cdot n/d \right)$ or 2Ω(*n*/*d*) samples. 5. **Low-degree polynomials:** A learner tries to learn an *n*ʹ-variate multilinear polynomial *p* of degree at most *d* over F2, from random evaluations of *p* over F2*n*ʹ. In Section [sec:low-degree-polynomials], we prove that if *d* ≤ 0.99 ⋅ *n*ʹ, any learning algorithm requires either a memory of size $\Omega\left( \binom{n'}{\le d} \cdot n'/d \right)$ or 2Ω(*n*ʹ/*d*) samples. 6. **Error-correcting codes:** A learner tries to learn a codeword from random coordinates: Assume that $M: \AA \times \XX \rightarrow \{-1,1\}$ is such that for some $|\XX|^{-1} \le \epsilon < 1$, any pair of different columns of *M*, agree on at least $\tfrac{1-\epsilon}{2} \cdot |A|$ and at most $\tfrac{1+\epsilon}{2} \cdot |A|$ coordinates. In Section [sec:sq], we prove that any learning algorithm for the learning problem corresponding to *M* requires either a memory of size $\Omega\big( (\log|\XX|) \cdot (\log(1/\epsilon))\big)$ or $\big(\tfrac{1}{\epsilon}\big)^{\Omega(1)}$ samples. We also point to a relation between our results and statistical-query dimension . 7. **Random matrices:** Let $\XX, \AA$ be finite sets, such that, $|\AA| \geq (2\log|\XX|)^{10}$ and $|\XX| \geq (2\log|\AA|)^{10}$. Let $M: \AA \times \XX \rightarrow \{-1,1\}$ be a random matrix. Fix $k = \tfrac{1}{2}\log |\AA|$ and $\ell = \tfrac{1}{2}\log |\XX|$. With very high probability, any submatrix of *M* of at least 2− *k* ⋅ ∣*A*∣ rows and at least 2− ℓ ⋅ ∣*X*∣ columns, has a bias of at most 2− Ω(min{*k*, ℓ}). Thus, by Corollary [cor:main1], any learning algorithm for the learning problem corresponding to *M* requires either a memory of size $\Omega\left((\log|\XX|) \cdot (\log|\AA|) \right)$, or $\big(\min\{|\XX|,|\AA|\}\big)^{\Omega(1)}$ samples. We note also that our results about learning from sparse linear equations have applications in bounded-storage cryptography. This is similar to , but in a different range of the parameters. In particular, for every *ω*(log*n*) ≤ ℓ ≤ *n*, our results give an encryption scheme that requires a private key of length *n*, and time complexity of *O*(ℓlog*n*) per encryption/decryption of each bit, using a random access machine. The scheme is provenly and unconditionally secure as long as the attacker uses at most *o*(*n*ℓ) memory bits and the scheme is used at most 2*o*(ℓ) times. ### Techniques Our proof follows the lines of the proof of  and builds on that proof. The proof of  considered the norm of the matrix *M*, and thus essentially reduced the entire matrix to only one parameter. In our proof, we consider the properties of *M* as a two-source extractor, and hence we have three parameters (*k*, ℓ, *r*), rather than one. Considering these three parameters, rather than one, enables a more refined analysis, resulting in a stronger lower bound with a slightly simpler proof. A proof outline is given in Section [sec:overview]. ### Motivation and Discussion Many previous works studied the resources needed for learning, under certain information, communication or memory constraints (see in particular  and the many references given there). A main message of some of these works is that for some learning problems, access to a relatively large memory is crucial. In other words, in some cases, learning is infeasible, due to memory constraints. From the point of view of human learning, such results may help to explain the importance of memory in cognitive processes. From the point of view of machine learning, these results imply that a large class of learning algorithms cannot learn certain concept classes. In particular, this applies to any bounded-memory learning algorithm that considers the samples one by one. In addition, these works are related to computational complexity and have applications in cryptography. ### Related Work Independently of our work, Beame, Oveis Gharan and Yang also gave a combinatorial property of a matrix *M*, that holds for a large class of matrices and implies that any learning algorithm for the corresponding learning problem requires either a memory of size Ω((log∣*X*∣) ⋅ (log∣*A*∣)) or an exponential number of samples (when ∣*A*∣ ≤ ∣*X*∣) . Their property is based on a measure of how matrices amplify the 2-norms of probability distributions that is more refined than the 2-norms of these matrices. Their proof also builds on . They also show, as an application, tight time-space lower bounds for learning low-degree polynomials, as well as other applications. Preliminaries ============= Denote by $\U\_X: \XX \rightarrow \Reals^+$ the uniform distribution over $\XX$. Denote by log the logarithm to base 2. Denote by $\binom{n}{\le k} = \binom{n}{0} + \binom{n}{1} + \ldots + \binom{n}{k}$. For a random variable *Z* and an event *E*, we denote by $\P\_Z$ the distribution of the random variables *Z*, and we denote by $\P\_{Z|E}$ the distribution of the random variable *Z* conditioned on the event *E*. ### Viewing a Learning Problem as a Matrix Let $\XX$, $\AA$ be two finite sets of size larger than 1. Let $n = \log\_2|\XX|$. Let $M: \AA \times \XX \rightarrow \{-1,1\}$ be a matrix. The matrix *M* corresponds to the following learning problem: There is an unknown element $x \in \XX$ that was chosen uniformly at random. A learner tries to learn *x* from samples (*a*, *b*), where $a \in \AA$ is chosen uniformly at random and *b* = *M*(*a*, *x*). That is, the learning algorithm is given a stream of samples, (*a*1, *b*1), (*a*2, *b*2)…, where each *a**t* is uniformly distributed and for every *t*, *b**t* = *M*(*a**t*, *x*). ### Norms and Inner Products Let *p* ≥ 1. For a function $f: \XX \rightarrow \Reals$, denote by ∥*f*∥*p* the ℓ*p* norm of *f*, with respect to the uniform distribution over $\XX$, that is: $${\left\lVertf\right\rVert}\_{p} = \left( {\mathop{\bf E\/}}\_{x \in\_R \XX} \left[ |f(x)|^{p} \right] \right)^{1/p}.$$ For two functions $f,g: \XX \rightarrow \Reals$, define their inner product with respect to the uniform distribution over *X* as $$\langle f,g \rangle = {\mathop{\bf E\/}}\_{x \in\_R \XX} [ f(x) \cdot g(x) ].$$ For a matrix $M: \AA \times \XX \to \Reals$ and a row $a \in \AA$, we denote by $M\_a: \XX \to \Reals$ the function corresponding to the *a*-th row of *M*. Note that for a function $f: \XX \to \Reals$, we have ${\langleM\_a, f\rangle} = \frac{(M \cdot f)\_a}{|X|}$. ### *L*2-Extractors and *L*∞-Extractors ***L*2-Extractor:** [definition:l2-extractor] Let $\XX,\AA$ be two finite sets. A matrix $M: \AA \times \XX \to \{-1,1\}$ is a (*k*, ℓ)-*L*2-Extractor with error 2− *r*, if for every non-negative $f : \XX \to \Reals$ with $\frac{{\left\lVertf\right\rVert}\_2}{{\left\lVertf\right\rVert}\_1} \le 2^{\ell}$ there are at most 2− *k* ⋅ ∣*A*∣ rows *a* in *A* with $$\frac{|{\langleM\_a,f\rangle}|}{{\left\lVertf\right\rVert}\_1} \ge 2^{-r}\;.$$ Let Ω be a finite set. We denote a distribution over Ω as a function $f:\Omega \to \Reals^{+}$ such that ∑*x* ∈ Ω*f*(*x*) = 1. We say that a distribution $f:\Omega \to \Reals^{+}$ has min-entropy *k* if for all *x* ∈ Ω, we have *f*(*x*) ≤ 2− *k*. ***L*∞ − Extractor:** [definition:min-extractor] Let $\XX,\AA$ be two finite sets. A matrix $M:\AA\times \XX\rightarrow \{-1,1\}$ is a (*k*, ℓ ∼ *r*)-*L*∞-Extractor if for every distribution $p\_x: \XX \to \Reals^{+}$ with min-entropy at least $(\log(|\XX|)-\ell)$ and every distribution $p\_a: \AA \to \Reals^{+}$ with min-entropy at least $(\log(|\AA|)-k)$, $$\bigg|\sum\_{a'\in \AA} \sum\_{x' \in \XX} p\_a(a') \cdot p\_x(x') \cdot M(a',x')\bigg| \le 2^{-r}.$$ ### Branching Program for a Learning Problem In the following definition, we model the learner for the learning problem that corresponds to the matrix *M*, by a *branching program*. **Branching Program for a Learning Problem:** A branching program of length *m* and width *d*, for learning, is a directed (multi) graph with vertices arranged in *m* + 1 layers containing at most *d* vertices each. In the first layer, that we think of as layer 0, there is only one vertex, called the start vertex. A vertex of outdegree 0 is called a leaf. All vertices in the last layer are leaves (but there may be additional leaves). Every non-leaf vertex in the program has $2|\AA|$ outgoing edges, labeled by elements $(a,b) \in \AA \times \{-1,1\}$, with exactly one edge labeled by each such (*a*, *b*), and all these edges going into vertices in the next layer. Each leaf *v* in the program is labeled by an element $\tilde{x}(v) \in \XX$, that we think of as the output of the program on that leaf. **Computation-Path:** The samples $(a\_1, b\_1), \ldots, (a\_m, b\_m) \in \AA \times \{-1,1\}$ that are given as input, define a computation-path in the branching program, by starting from the start vertex and following at step *t* the edge labeled by (*a**t*, *b**t*), until reaching a leaf. The program outputs the label *x̃*(*v*) of the leaf *v* reached by the computation-path. **Success Probability:** The success probability of the program is the probability that *x̃* = *x*, where *x̃* is the element that the program outputs, and the probability is over *x*, *a*1, …, *a**m* (where *x* is uniformly distributed over $\XX$ and *a*1, …, *a**m* are uniformly distributed over $\AA$, and for every *t*, *b**t* = *M*(*a**t*, *x*)). Overview of the Proof ===================== The proof follows the lines of the proof of  and builds on that proof. Assume that *M* is a (*k*, ℓ)-*L*2-extractor with error 2− *r*ʹ, and let *r* = min{*k*, ℓ, *r*ʹ}. Let *B* be a branching program for the learning problem that corresponds to the matrix *M*. Assume for a contradiction that *B* is of length *m* = 2*ε**r* and width *d* = 2*ε**k*ℓ, where *ε* is a small constant. We define the *truncated-path*, $\T$, to be the same as the computation-path of *B*, except that it sometimes stops before reaching a leaf. Roughly speaking, $\T$ stops before reaching a leaf if certain “bad” events occur. Nevertheless, we show that the probability that $\T$ stops before reaching a leaf is negligible, so we can think of $\T$ as almost identical to the computation-path. For a vertex *v* of *B*, we denote by *E**v* the event that $\T$ reaches the vertex *v*. We denote by Pr(*v*) = Pr(*E**v*) the probability for *E**v* (where the probability is over *x*, *a*1, …, *a**m*), and we denote by $\P\_{x|v} = \P\_{x|E\_v}$ the distribution of the random variable *x* conditioned on the event *E**v*. Similarly, for an edge *e* of the branching program *B*, let *E**e* be the event that $\T$ traverses the edge *e*. Denote, Pr(*e*) = Pr(*E**e*), and $\P\_{x|e} = \P\_{x|E\_e}$. A vertex *v* of *B* is called *significant* if $${\left\lVert\P\_{x|v}\right\rVert}\_{2} > 2^{\ell} \cdot 2^{-n}.$$ Roughly speaking, this means that conditioning on the event that $\T$ reaches the vertex *v*, a non-negligible amount of information is known about *x*. In order to guess *x* with a non-negligible success probability, $\T$ must reach a significant vertex. Lemma [lemma-main1] shows that the probability that $\T$ reaches any significant vertex is negligible, and thus the main result follows. To prove Lemma [lemma-main1], we show that for every fixed significant vertex *s*, the probability that $\T$ reaches *s* is at most 2− Ω(*k*ℓ) (which is smaller than one over the number of vertices in *B*). Hence, we can use a union bound to prove the lemma. The proof that the probability that $\T$ reaches *s* is extremely small is the main part of the proof. To that end, we use the following functions to measure the progress made by the branching program towards reaching *s*. Let *L**i* be the set of vertices *v* in layer-*i* of *B*, such that Pr(*v*) > 0. Let Γ*i* be the set of edges *e* from layer-(*i* − 1) of *B* to layer-*i* of *B*, such that Pr(*e*) > 0. Let $${\cal Z}\_i = \sum\_{v \in L\_i} \Pr(v) \cdot \langle \P\_{x|v},\P\_{x|s} \rangle^{k},$$ $${\cal Z}'\_i = \sum\_{e \in \Gamma\_i} \Pr(e) \cdot \langle \P\_{x|e},\P\_{x|s} \rangle^{k}.$$ We think of ${\cal Z}\_i, {\cal Z}'\_i$ as measuring the progress made by the branching program, towards reaching a state with distribution similar to $\P\_{x|s}$. We show that each ${\cal Z}\_i$ may only be negligibly larger than ${\cal Z}\_{i-1}$. Hence, since it’s easy to calculate that ${\cal Z}\_0 = 2^{-2nk}$, it follows that ${\cal Z}\_i$ is close to 2− 2*n**k*, for every *i*. On the other hand, if *s* is in layer-*i* then ${\cal Z}\_i$ is at least $\Pr(s) \cdot \langle \P\_{x|s},\P\_{x|s}\rangle^k$. Thus, $\Pr(s) \cdot \langle \P\_{x|s},\P\_{x|s}\rangle^k$ cannot be much larger than 2− 2*n**k*. Since *s* is significant, $\langle \P\_{x|s},\P\_{x|s}\rangle^k > 2^{\ell k} \cdot 2^{-2nk}$ and hence Pr(*s*) is at most 2− Ω(*k*ℓ). The proof that ${\cal Z}\_i$ may only be negligibly larger than ${\cal Z}\_{i-1}$ is done in two steps: Claim [claim-p2] shows by a simple convexity argument that ${\cal Z}\_i \leq {\cal Z}'\_i$. The hard part, that is done in Claim [claim-p0] and Claim [claim-p1], is to prove that ${\cal Z}'\_i$ may only be negligibly larger than ${\cal Z}\_{i-1}$. For this proof, we define for every vertex *v*, the set of edges Γ*o**u**t*(*v*) that are going out of *v*, such that Pr(*e*) > 0. Claim [claim-p0] shows that for every vertex *v*, $$\sum\_{e \in \Gamma\_{out}(v)} \Pr(e) \cdot \langle \P\_{x|e},\P\_{x|s} \rangle^{k}$$ may only be negligibly higher than $$\Pr(v) \cdot \langle \P\_{x|v},\P\_{x|s} \rangle^{k}.$$ For the proof of Claim [claim-p0], which is the hardest proof in the paper, and the most important place where our proof deviates from (and simplifies) the proof of , we consider the function $\P\_{x|v} \cdot \P\_{x|s}$. We first show how to bound ${\left\lVert\P\_{x|v} \cdot \P\_{x|s}\right\rVert}\_2$. We then consider two cases: If ${\left\lVert\P\_{x|v} \cdot \P\_{x|s}\right\rVert}\_1$ is negligible, then $\langle \P\_{x|v},\P\_{x|s} \rangle^{k}$ is negligible and doesn’t contribute much, and we show that for every *e* ∈ Γ*o**u**t*(*v*), $\langle \P\_{x|e},\P\_{x|s} \rangle^{k}$ is also negligible and doesn’t contribute much. If ${\left\lVert\P\_{x|v} \cdot\P\_{x|s}\right\rVert}\_1$ is non-negligible, we use the bound on ${\left\lVert\P\_{x|v} \cdot\P\_{x|s}\right\rVert}\_2$ and the assumption that *M* is a (*k*, ℓ)-*L*2-extractor to show that for almost all edges *e* ∈ Γ*o**u**t*(*v*), we have that $\langle \P\_{x|e},\P\_{x|s} \rangle^k$ is very close to $\langle \P\_{x|v},\P\_{x|s} \rangle^k$. Only an exponentially small (2− *k*) fraction of edges are “bad” and give a significantly larger $\langle \P\_{x|e},\P\_{x|s} \rangle^k$. The reason that in the definitions of ${\cal Z}\_i$ and ${\cal Z}'\_i$ we raised $\langle \P\_{x|v},\P\_{x|s} \rangle$ and $\langle \P\_{x|e},\P\_{x|s} \rangle$ to the power of *k* is that this is the largest power for which the contribution of the “bad” edges is still small (as their fraction is 2− *k*). This outline oversimplifies many details. Let us briefly mention two of them. First, it is not so easy to bound ${\left\lVert\P\_{x|v} \cdot \P\_{x|s}\right\rVert}\_2$. We do that by bounding ${\left\lVert\P\_{x|s}\right\rVert}\_2$ and ${\left\lVert\P\_{x|v}\right\rVert}\_{\infty}$. In order to bound ${\left\lVert\P\_{x|s}\right\rVert}\_2$, we force $\T$ to stop whenever it reaches a significant vertex (and thus we are able to bound ${\left\lVert\P\_{x|v}\right\rVert}\_2$ for every vertex reached by $\T$). In order to bound ${\left\lVert\P\_{x|v}\right\rVert}\_{\infty}$, we force $\T$ to stop whenever $\P\_{x|v}(x)$ is large, which allows us to consider only the “bounded” part of $\P\_{x|v}$. (This is related to the technique of *flattening* a distribution that was used in ). Second, some edges are so “bad” that their contribution to ${\cal Z}'\_i$ is huge so they cannot be ignored. We force $\T$ to stop before traversing any such edge. (This is related to an idea that was used in  of analyzing separately paths that traverse “bad” edges). We show that the total probability that $\T$ stops before reaching a leaf is negligible. Main Result =========== [thm:TM-main1] Let $\tfrac{1}{100}< c <\tfrac{2}{3}$. Fix *γ* to be such that $\tfrac{3c}{2} < \gamma^2 < 1$. Let $\XX$, $\AA$ be two finite sets. Let $n = \log\_2|\XX|$. Let $M: \AA \times \XX \rightarrow \{-1,1\}$ be a matrix which is a (*k*ʹ, ℓʹ)-*L*2-extractor with error 2− *r*ʹ, for sufficiently large[4](#fn4) *k*ʹ, ℓʹ and *r*ʹ, where ℓʹ ≤ *n*. Let $$\label{eq:param setting} {r}:= \min\left\{ \tfrac{r'}{2}, \tfrac{(1-\gamma)k'}{2}, \tfrac{(1-\gamma)\ell'}{2} -1 \right\}.$$ Let *B* be a branching program of length at most 2*r* and width at most 2*c* ⋅ *k*ʹ ⋅ ℓʹ for the learning problem that corresponds to the matrix *M*. Then, the success probability of *B* is at most *O*(2− *r*). Let *k* :  = *γ**k*ʹ  and  ℓ :  = *γ*ℓʹ/3. Note that by the assumption that *k*ʹ, ℓʹ and *r*ʹ are sufficiently large, we get that *k*, ℓ and *r* are also sufficiently large. Since ℓʹ ≤ *n*, we have $\ell + {r}\le \tfrac{\gamma \ell'}{3} +\tfrac{(1-\gamma)\ell'}{2} < \tfrac{\ell'}{2} \le \tfrac{n}{2}$. Thus, *r* < *n*/2 − ℓ. Let *B* be a branching program of length *m* = 2*r* and width *d* = 2*c* ⋅ *k*ʹ ⋅ ℓʹ for the learning problem that corresponds to the matrix *M*. We will show that the success probability of *B* is at most *O*(2− *r*). The Truncated-Path and Additional Definitions and Notation ---------------------------------------------------------- We will define the **truncated-path**, $\T$, to be the same as the computation-path of *B*, except that it sometimes stops before reaching a leaf. Formally, we define $\T$, together with several other definitions and notations, by induction on the layers of the branching program *B*. Assume that we already defined the truncated-path $\T$, until it reaches layer-*i* of *B*. For a vertex *v* in layer-*i* of *B*, let *E**v* be the event that $\T$ reaches the vertex *v*. For simplicity, we denote by Pr(*v*) = Pr(*E**v*) the probability for *E**v* (where the probability is over *x*, *a*1, …, *a**m*), and we denote by $\P\_{x|v} = \P\_{x|E\_v}$ the distribution of the random variable *x* conditioned on the event *E**v*. There will be three cases in which the truncated-path $\T$ stops on a non-leaf *v*: 1. If *v* is a, so called, significant vertex, where the ℓ2 norm of $\P\_{x|v}$ is non-negligible. (Intuitively, this means that conditioned on the event that $\T$ reaches *v*, a non-negligible amount of information is known about *x*). 2. If $\P\_{x|v} (x)$ is non-negligible. (Intuitively, this means that conditioned on the event that $\T$ reaches *v*, the correct element *x* could have been guessed with a non-negligible probability). 3. If $(M \cdot \P\_{x|v}) (a\_{i+1})$ is non-negligible. (Intuitively, this means that $\T$ is about to traverse a “bad” edge, which is traversed with a non-negligibly higher or lower probability than other edges). Next, we describe these three cases more formally. ### Significant Vertices We say that a vertex *v* in layer-*i* of *B* is **significant** if $${\left\lVert\P\_{x|v}\right\rVert}\_{2} > 2^{\ell} \cdot 2^{-n}.$$ ### Significant Values Even if *v* is not significant, $\P\_{x|v}$ may have relatively large values. For a vertex *v* in layer-*i* of *B*, denote by $\Sig(v)$ the set of all $x' \in \XX$, such that, $$\P\_{x|v}(x') > 2^{2\ell +2{r}} \cdot 2^{-n}.$$ ### Bad Edges For a vertex *v* in layer-*i* of *B*, denote by $\Bad(v)$ the set of all $\alpha \in \AA$, such that, $$\left| (M \cdot\P\_{x|v})(\alpha) \right| \geq 2^{-r'}.$$ ### The Truncated-Path $\T$ We define $\T$ by induction on the layers of the branching program *B*. Assume that we already defined $\T$ until it reaches a vertex *v* in layer-*i* of *B*. The path $\T$ stops on *v* if (at least) one of the following occurs: 1. *v* is significant. 2. $x \in \Sig(v)$. 3. $a\_{i+1} \in \Bad(v)$. 4. *v* is a leaf. Otherwise, $\T$ proceeds by following the edge labeled by (*a**i* + 1, *b**i* + 1) (same as the computational-path). Proof of Theorem [thm:TM-main1] ------------------------------- Since $\T$ follows the computation-path of *B*, except that it sometimes stops before reaching a leaf, the success probability of *B* is bounded (from above) by the probability that $\T$ stops before reaching a leaf, plus the probability that $\T$ reaches a leaf *v* and *x̃*(*v*) = *x*. The main lemma needed for the proof of Theorem [thm:TM-main1] is Lemma [lemma-main1] that shows that the probability that $\T$ reaches a significant vertex is at most *O*(2− *r*). [lemma-main1] The probability that $\T$ reaches a significant vertex is at most *O*(2− *r*). Lemma [lemma-main1] is proved in Section [section:mainlemma]. We will now show how the proof of Theorem [thm:TM-main1] follows from that lemma. Lemma [lemma-main1] shows that the probability that $\T$ stops on a non-leaf vertex, because of the first reason (i.e., that the vertex is significant), is small. The next two lemmas imply that the probabilities that $\T$ stops on a non-leaf vertex, because of the second and third reasons, are also small. [claim-A0] If *v* is a non-significant vertex of *B* then $$\Pr\_{x} [x \in \Sig(v) \; | \; E\_v] \leq 2^{-2{r}}.$$ Since *v* is not significant, $${\mathop{\bf E\/}}\_{x' \sim \P\_{x|v}} \left[ \P\_{x|v}(x') \right] = \sum\_{x' \in \XX} \left[ \P\_{x|v}(x')^{2} \right] = 2^n \cdot {\mathop{\bf E\/}}\_{x' \in\_R \XX} \left[ \P\_{x|v}(x')^{2} \right] \leq 2^{2\ell} \cdot 2^{-n}.$$ Hence, by Markov’s inequality, $$\Pr\_{x' \sim \P\_{x|v}} \left[ \P\_{x|v}(x') > 2^{2{r}} \cdot 2^{2\ell} \cdot 2^{-n} \right] \leq 2^{-2{r}}.$$ Since conditioned on *E**v*, the distribution of *x* is $\P\_{x|v}$, we obtain $$\Pr\_{x} \left[x \in \Sig(v) \; \big| \; E\_v\right] = \Pr\_{x} \left[ \left(\P\_{x|v}(x) > 2^{2{r}} \cdot 2^{2\ell} \cdot 2^{-n}\right) \; \big|\; E\_v\; \right] \leq 2^{-2{r}}.\qedhere$$ [claim-A2] If *v* is a non-significant vertex of *B* then $$\Pr\_{a\_{i+1}} [a\_{i+1} \in \Bad(v)] \leq 2^{-2{r}}.$$ Since *v* is not significant, ${\left\lVert\P\_{x|v}\right\rVert}\_2 \le 2^{\ell} \cdot 2^{-n}$. Since $\P\_{x|v}$ is a distribution, ${\left\lVert\P\_{x|v}\right\rVert}\_1 = 2^{-n}$. Thus, $$\frac{{\left\lVert\P\_{x|v}\right\rVert}\_2}{{\left\lVert\P\_{x|v}\right\rVert}\_1} \le 2^{\ell} \le 2^{\ell'}.$$ Since *M* is a (*k*ʹ, ℓʹ)-*L*2-extractor with error 2− *r*ʹ, there are at most 2− *k*ʹ ⋅ ∣*A*∣ elements $\alpha \in \AA$ with $$\left|{\langleM\_{\alpha}, \P\_{x|v}\rangle}\right| \ge 2^{-r'} \cdot {{\left\lVert\P\_{x|v}\right\rVert}\_1} = 2^{-r'} \cdot 2^{-n}$$ The claim follows since *a**i* + 1 is uniformly distributed over $\AA$ and since *k*ʹ ≥ 2*r* (Equation ). We can now use Lemma [lemma-main1], Claim [claim-A0] and Claim [claim-A2] to prove that the probability that $\T$ stops before reaching a leaf is at most *O*(2− *r*). Lemma [lemma-main1] shows that the probability that $\T$ reaches a significant vertex and hence stops because of the first reason, is at most *O*(2− *r*). Assuming that $\T$ doesn’t reach any significant vertex (in which case it would have stopped because of the first reason), Claim [claim-A0] shows that in each step, the probability that $\T$ stops because of the second reason, is at most 2− 2*r*. Taking a union bound over the *m* = 2*r* steps, the total probability that $\T$ stops because of the second reason, is at most 2− *r*. In the same way, assuming that $\T$ doesn’t reach any significant vertex (in which case it would have stopped because of the first reason), Claim [claim-A2] shows that in each step, the probability that $\T$ stops because of the third reason, is at most 2− 2*r*. Again, taking a union bound over the 2*r* steps, the total probability that $\T$ stops because of the third reason, is at most 2− *r*. Thus, the total probability that $\T$ stops (for any reason) before reaching a leaf is at most *O*(2− *r*). Recall that if $\T$ doesn’t stop before reaching a leaf, it just follows the computation-path of *B*. Recall also that by Lemma [lemma-main1], the probability that $\T$ reaches a significant leaf is at most *O*(2− *r*). Thus, to bound (from above) the success probability of *B* by *O*(2− *r*), it remains to bound the probability that $\T$ reaches a non-significant leaf *v* and *x̃*(*v*) = *x*. Claim [claim-A1] shows that for any non-significant leaf *v*, conditioned on the event that $\T$ reaches *v*, the probability for *x̃*(*v*) = *x* is at most 2− *r*, which completes the proof of Theorem [thm:TM-main1]. [claim-A1] If *v* is a non-significant leaf of *B* then Pr[*x̃*(*v*) = *x* ∣ *E**v*] ≤ 2− *r*. Since *v* is not significant, $${\mathop{\bf E\/}}\_{x' \in\_R \XX} \left[ \P\_{x|v}(x')^{2} \right] \leq 2^{2\ell} \cdot 2^{-2n}.$$ Hence, for every $x' \in \XX$, $$\Pr [x=x' \; | \; E\_v]= \P\_{x|v}(x') \leq 2^{\ell} \cdot 2^{-n/2} \le 2^{- {r}}$$ since *r* ≤ *n*/2 − ℓ (Equation ). In particular, $$\Pr [\tilde{x}(v) = x \; | \; E\_v] \le 2^{- {r}}.\qedhere$$ This completes the proof of Theorem [thm:TM-main1]. Proof of Lemma [lemma-main1] ---------------------------- We need to prove that the probability that $\T$ reaches any significant vertex is at most *O*(2− *r*). Let *s* be a significant vertex of *B*. We will bound from above the probability that $\T$ reaches *s*, and then use a union bound over all significant vertices of *B*. Interestingly, the upper bound on the width of *B* is used only in the union bound. ### The Distributions $\P\_{x|v}$ and $\P\_{x|e}$ Recall that for a vertex *v* of *B*, we denote by *E**v* the event that $\T$ reaches the vertex *v*. For simplicity, we denote by Pr(*v*) = Pr(*E**v*) the probability for *E**v* (where the probability is over *x*, *a*1, …, *a**m*), and we denote by $\P\_{x|v} = \P\_{x|E\_v}$ the distribution of the random variable *x* conditioned on the event *E**v*. Similarly, for an edge *e* of the branching program *B*, let *E**e* be the event that $\T$ traverses the edge *e*. Denote, Pr(*e*) = Pr(*E**e*) (where the probability is over *x*, *a*1, …, *a**m*), and $\P\_{x|e} = \P\_{x|E\_e}$. [claim-d0] For any edge *e* = (*v*, *u*) of *B*, labeled by (*a*, *b*), such that Pr(*e*) > 0, for any $x' \in \XX$, $$\P\_{x|e} (x') = \left\{ \begin{array}{ccccc} 0 & \;\;\;\; \mbox{if } & x' \in \Sig(v) & \mbox{or} & M(a,x') \neq b \\ \P\_{x|v} (x') \cdot c\_e^{-1} & \;\;\;\; \mbox{if } & x' \not \in \Sig(v)& \mbox{and} & M(a,x') = b \end{array} \right.$$ where *c**e* is a normalization factor that satisfies, $$c\_e \geq \tfrac{1}{2} - 2\cdot 2^{-2{r}}.$$ Let *e* = (*v*, *u*) be an edge of *B*, labeled by (*a*, *b*), and such that Pr(*e*) > 0. Since Pr(*e*) > 0, the vertex *v* is not significant (as otherwise $\T$ always stops on *v* and hence Pr(*e*) = 0). Also, since Pr(*e*) > 0, we know that $a \not \in \Bad(v)$ (as otherwise $\T$ never traverses *e* and hence Pr(*e*) = 0). If $\T$ reaches *v*, it traverses the edge *e* if and only if: $x \not \in \Sig(v)$ (as otherwise $\T$ stops on *v*) and *M*(*a*, *x*) = *b* and *a**i* + 1 = *a*. Therefore, for any $x' \in \XX$, $$\P\_{x|e} (x') = \left\{ \begin{array}{ccccc} 0 & \;\;\;\; \mbox{if } & x' \in \Sig(v) & \mbox{or} & M(a,x') \neq b \\ \P\_{x|v} (x') \cdot c\_e^{-1} & \;\;\;\; \mbox{if } & x' \not \in \Sig(v)& \mbox{and} & M(a,x') = b \end{array} \right.$$ where *c**e* is a normalization factor, given by $$c\_e= \sum\_{\left\{ x' \; : \; x' \not \in \Sig(v) \; \wedge \; M(a,x') = b \right\} } \P\_{x|v} (x') \; = \; \Pr\_x[(x \not \in \Sig(v)) \wedge (M(a,x) = b) \; | \;E\_v].$$ Since *v* is not significant, by Claim [claim-A0], $$\Pr\_{x} [x \in \Sig(v) \; | \; E\_v] \leq 2^{-2{r}}.$$ Since $a \not \in \Bad(v)$, $$\left| \Pr\_{x} [M(a,x) = 1 \; | \; E\_v] - \Pr\_{x} [M(a,x) = -1 \; | \; E\_v] \right| = \left| (M \cdot\P\_{x|v})(a) \right| \leq 2^{-r'},$$ and hence $$\Pr\_{x} [M(a,x) \neq b \; | \; E\_v] \leq \tfrac{1}{2} + 2^{-r'}.$$ Hence, by the union bound, $$c\_e= \Pr\_x[(x \not \in \Sig(v)) \wedge (M(a,x) = b) \; | \;E\_v] \geq \tfrac{1}{2} - 2^{-r'} - 2^{-2{r}} \geq \tfrac{1}{2} - 2\cdot 2^{-2{r}}$$ (where the last inequality follows since *r* ≤ *r*ʹ/2, by Equation ). ### Bounding the Norm of $\P\_{x|s}$ We will show that ${\left\lVert\P\_{x|s}\right\rVert}\_{2}$ cannot be too large. Towards this, we will first prove that for every edge *e* of *B* that is traversed by $\T$ with probability larger than zero, ${\left\lVert\P\_{x|e}\right\rVert}\_{2}$ cannot be too large. [claim-b0] For any edge *e* of *B*, such that Pr(*e*) > 0, $${\left\lVert\P\_{x|e}\right\rVert}\_{2} \leq 4 \cdot 2^{\ell} \cdot 2^{-n}.$$ Let *e* = (*v*, *u*) be an edge of *B*, labeled by (*a*, *b*), and such that Pr(*e*) > 0. Since Pr(*e*) > 0, the vertex *v* is not significant (as otherwise $\T$ always stops on *v* and hence Pr(*e*) = 0). Thus, $${\left\lVert\P\_{x|v}\right\rVert}\_{2} \leq 2^{\ell} \cdot 2^{-n}.$$ By Claim [claim-d0], for any $x' \in \XX$, $$\P\_{x|e} (x') = \left\{ \begin{array}{ccccc} 0 & \;\;\;\; \mbox{if } & x' \in \Sig(v) & \mbox{or} & M(a,x') \neq b \\ \P\_{x|v} (x') \cdot c\_e^{-1} & \;\;\;\; \mbox{if } & x' \not \in \Sig(v)& \mbox{and} & M(a,x') = b \end{array} \right.$$ where *c**e* satisfies, $$c\_e \geq \tfrac{1}{2} - 2\cdot 2^{-2{r}} > \tfrac{1}{4}$$ (where the last inequality holds because we assume that *k*ʹ, ℓʹ, *r*ʹ and thus *r* are sufficiently large.) Thus, $${\left\lVert\P\_{x|e}\right\rVert}\_{2} \leq c\_e^{-1} \cdot {\left\lVert\P\_{x|v}\right\rVert}\_{2} \leq 4 \cdot 2^{\ell} \cdot 2^{-n}\qedhere$$ [claim-b1] $${\left\lVert\P\_{x|s}\right\rVert}\_{2} \leq 4 \cdot 2^{\ell} \cdot 2^{-n}.$$ Let Γ*i**n*(*s*) be the set of all edges *e* of *B*, that are going into *s*, such that Pr(*e*) > 0. Note that ∑*e* ∈ Γ*i**n*(*s*)Pr(*e*) = Pr(*s*). By the law of total probability, for every $x' \in \XX$, $$\P\_{x|s} (x') = \sum\_{e \in \Gamma\_{in}(s)} \tfrac{\Pr(e)}{\Pr(s)} \cdot \P\_{x|e} (x'),$$ and hence by Jensen’s inequality, $$\P\_{x|s} (x')^2 \leq \sum\_{e \in \Gamma\_{in}(s)} \tfrac{\Pr(e)}{\Pr(s)} \cdot \P\_{x|e} (x')^2.$$ Summing over $x' \in \XX$, we obtain, $${\left\lVert\P\_{x|s}\right\rVert}\_{2}^2 \leq \sum\_{e \in \Gamma\_{in}(s)} \tfrac{\Pr(e)}{\Pr(s)} \cdot {\left\lVert\P\_{x|e}\right\rVert}\_{2}^2.$$ By Claim [claim-b0], for any *e* ∈ Γ*i**n*(*s*), $${\left\lVert\P\_{x|e}\right\rVert}\_{2}^2 \leq \left( 4 \cdot 2^{\ell} \cdot 2^{-n} \right)^2.$$ Hence, $${\left\lVert\P\_{x|s}\right\rVert}\_{2}^2 \leq \left( 4 \cdot 2^{\ell} \cdot 2^{-n} \right)^2.\qedhere$$ ### Similarity to a Target Distribution Recall that for two functions $f,g: \XX \rightarrow \Reals^+$, we defined $$\langle f,g \rangle = {\mathop{\bf E\/}}\_{z \in\_R \XX} [ f(z) \cdot g(z) ].$$ We think of ⟨*f*, *g*⟩ as a measure for the similarity between a function *f* and a target function *g*. Typically *f*, *g* will be distributions. [claim-s1] $$\langle \P\_{x|s},\P\_{x|s} \rangle > 2^{2\ell} \cdot 2^{-2n}.$$ Since *s* is significant, $$\langle \P\_{x|s},\P\_{x|s} \rangle = {\left\lVert\P\_{x|s}\right\rVert}^2\_{2} > 2^{2\ell} \cdot 2^{-2n}.\qedhere$$ [claim-s2] $$\langle \U\_X,\P\_{x|s} \rangle = 2^{-2n},$$ where $\U\_X$ is the uniform distribution over $\XX$. Since $\P\_{x|s}$ is a distribution, $$\langle \U\_X,\P\_{x|s} \rangle = 2^{-2n} \cdot \sum\_{z \in \XX} \P\_{x|s}(z) = 2^{-2n}.\qedhere$$ ### Measuring the Progress For *i* ∈ {0, …, *m*}, let *L**i* be the set of vertices *v* in layer-*i* of *B*, such that Pr(*v*) > 0. For *i* ∈ {1, …, *m*}, let Γ*i* be the set of edges *e* from layer-(*i* − 1) of *B* to layer-*i* of *B*, such that Pr(*e*) > 0. Recall that *k* = *γ**k*ʹ (Equation ). For *i* ∈ {0, …, *m*}, let $${\cal Z}\_i = \sum\_{v \in L\_i} \Pr(v) \cdot \langle \P\_{x|v},\P\_{x|s} \rangle^{{k}}.$$ For *i* ∈ {1, …, *m*}, let $${\cal Z}'\_i = \sum\_{e \in \Gamma\_i} \Pr(e) \cdot \langle \P\_{x|e},\P\_{x|s} \rangle^{{k}}.$$ We think of ${\cal Z}\_i, {\cal Z}'\_i$ as measuring the progress made by the branching program, towards reaching a state with distribution similar to $\P\_{x|s}$. For a vertex *v* of *B*, let Γ*o**u**t*(*v*) be the set of all edges *e* of *B*, that are going out of *v*, such that Pr(*e*) > 0. Note that ∑*e* ∈ Γ*o**u**t*(*v*)Pr(*e*) ≤ Pr(*v*). (We don’t always have an equality here, since sometimes $\T$ stops on *v*). The next four claims show that the progress made by the branching program is slow. [claim-p0] For every vertex *v* of *B*, such that Pr(*v*) > 0, $$\sum\_{e \in \Gamma\_{out}(v)} \tfrac{\Pr(e)}{\Pr(v)} \cdot \langle \P\_{x|e},\P\_{x|s} \rangle^{{k}} \leq \langle \P\_{x|v},\P\_{x|s} \rangle^{{k}} \cdot \left( 1 + 2^{-{r}}\right)^{{k}} + \left( 2^{-2n +2} \right)^{{k}}.$$ If *v* is significant or *v* is a leaf, then $\T$ always stops on *v* and hence Γ*o**u**t*(*v*) is empty and thus the left hand side is equal to zero and the right hand side is positive, so the claim follows trivially. Thus, we can assume that *v* is not significant and is not a leaf. Define $P: \XX \rightarrow \Reals^+$ as follows. For any $x' \in \XX$, $$P (x') = \left\{ \begin{array}{ccc} 0 & \;\;\;\; \mbox{if } & x' \in \Sig(v) \\ \P\_{x|v} (x') & \;\;\;\; \mbox{if } & x' \not \in \Sig(v) \end{array} \right.$$ Note that by the definition of $\Sig(v)$, for any $x' \in \XX$, *P*(*x*ʹ) ≤ 22ℓ + 2*r* ⋅ 2− *n*. Define $f: \XX \rightarrow \Reals^+$ as follows. For any $x' \in \XX$, $$f(x') = P(x') \cdot \P\_{x|s}(x').$$ By Claim [claim-b1] and Equation , $$\label{e12} {\left\lVertf\right\rVert}\_{2} \leq 2^{2\ell + 2{r}} \cdot 2^{-n} \cdot {\left\lVert\P\_{x|s}\right\rVert}\_{2} \leq 2^{2\ell + 2{r}} \cdot 2^{-n} \cdot 4 \cdot 2^{\ell} \cdot 2^{-n} = 2^{3\ell + 2{r}+2} \cdot 2^{-2n}.$$ By Claim [claim-d0], for any edge *e* ∈ Γ*o**u**t*(*v*), labeled by (*a*, *b*), for any $x' \in \XX$, $$\P\_{x|e} (x') = \left\{ \begin{array}{ccc} 0 & \;\;\;\; \mbox{if } & M(a,x') \neq b \\ P(x') \cdot c\_e^{-1} & \;\;\;\; \mbox{if } & M(a,x') = b \end{array} \right.$$ where *c**e* satisfies, $$c\_e \geq \tfrac{1}{2} - 2\cdot 2^{-2{r}}.$$ Therefore, for any edge *e* ∈ Γ*o**u**t*(*v*), labeled by (*a*, *b*), for any $x' \in \XX$, $$\P\_{x|e} (x') \cdot \P\_{x|s} (x') = \left\{ \begin{array}{ccc} 0 & \;\;\;\; \mbox{if } & M(a,x') \neq b \\ f(x') \cdot c\_e^{-1} & \;\;\;\; \mbox{if } & M(a,x') = b \end{array} \right.$$ and hence, we have $$\begin{aligned} \nonumber \langle \P\_{x|e},\P\_{x|s} \rangle &= {\mathop{\bf E\/}}\_{x' \in\_R \XX} [ \P\_{x|e}(x') \cdot \P\_{x|s}(x') ] = {\mathop{\bf E\/}}\_{x' \in\_R \XX} [f(x') \cdot c\_e^{-1} \cdot \mathbf{1}\_{\{x' \in X \; : \; M(a,x')=b\}}] \\ \nonumber &= {\mathop{\bf E\/}}\_{x' \in\_R \XX} \left[f(x') \cdot c\_e^{-1} \cdot \tfrac{(1+b\cdot M(a,x'))}{2}\right] = \left({\left\lVertf\right\rVert}\_1 + b \cdot {\langleM\_a, f\rangle}\right) \cdot (2c\_e)^{-1} \\ \label{e13} &< \left( {\left\lVertf\right\rVert}\_1+ |{\langleM\_a, f\rangle}| \right) \cdot \left(1 + 2^{-2{r}+ 3}\right)\end{aligned}$$ (where the last inequality holds by the bound that we have on *c**e*, because we assume that *k*ʹ, ℓʹ, *r*ʹ and thus *r* are sufficiently large). We will now consider two cases: ### Case I: ∥*f*∥1 < 2− 2*n* In this case, we bound ∣⟨*M**a*, *f*⟩∣ ≤ ∥*f*∥1 (since *f* is non-negative and the entries of *M* are in { − 1, 1}) and (1 + 2− 2*r* + 3) < 2 (since we assume that *k*ʹ, ℓʹ, *r*ʹ and thus *r* are sufficiently large) and obtain for any edge *e* ∈ Γ*o**u**t*(*v*), $$\langle \P\_{x|e},\P\_{x|s} \rangle < 4 \cdot 2^{-2n}.$$ Since $\sum\_{e \in \Gamma\_{out}(v)} \tfrac{\Pr(e)}{\Pr(v)} \leq 1$, Claim [claim-p0] follows, as the left hand side of the claim is smaller than the second term on the right hand side. ### Case II: ∥*f*∥1 ≥ 2− 2*n* For every $a \in \AA$, define $$t(a) = \frac{|{\langleM\_a, f\rangle}|}{{\left\lVertf\right\rVert}\_1}.$$ By Equation , $$\label{e14} \langle \P\_{x|e},\P\_{x|s} \rangle^{{k}} < {\left\lVertf\right\rVert}\_1^{{k}} \cdot \left( 1 + t(a) \right)^{{k}} \cdot \left( 1 + 2^{-2{r}+ 3}\right)^{{k}}.$$ Note that by the definitions of *P* and *f*, $${\left\lVertf\right\rVert}\_1 = {\mathop{\bf E\/}}\_{x' \in\_R \XX} [f(x')] = \langle P,\P\_{x|s} \rangle \leq \langle \P\_{x|v},\P\_{x|s} \rangle.$$ Note also that for every $a \in \AA$, there is at most one edge *e*(*a*, 1) ∈ Γ*o**u**t*(*v*), labeled by (*a*, 1), and at most one edge *e*(*a*,  − 1) ∈ Γ*o**u**t*(*v*), labeled by (*a*,  − 1), and we have $$\tfrac{\Pr(e\_{(a,1)})}{\Pr(v)} + \tfrac{\Pr(e\_{(a,-1)})}{\Pr(v)} \leq \tfrac{1}{|A|},$$ since $\tfrac{1}{|A|}$ is the probability that the next sample read by the program is *a*. Thus, summing over all *e* ∈ Γ*o**u**t*(*v*), by Equation , $$\label{e15} \sum\_{e \in \Gamma\_{out}(v)} \tfrac{\Pr(e)}{\Pr(v)} \cdot \langle \P\_{x|e},\P\_{x|s} \rangle^{{k}} < \langle \P\_{x|v},\P\_{x|s} \rangle ^{{k}} \cdot {\mathop{\bf E\/}}\_{a \in\_R \AA} \left[ \left( 1 + t(a) \right)^{{k}} \right] \cdot \left( 1 + 2^{-2{r}+3}\right)^{{k}}.$$ It remains to bound $$\label{e16} {\mathop{\bf E\/}}\_{a \in\_R \AA} \left[ \left( 1 + t(a) \right)^{{k}} \right],$$ using the properties of the matrix *M* and the bounds on the ℓ2 versus ℓ1 norms of *f*. By Equation , the assumption that ∥*f*∥1 ≥ 2− 2*n*, Equation  and Equation , we get $$\frac{{\left\lVertf\right\rVert}\_{2}}{{\left\lVertf\right\rVert}\_1} \leq 2^{3\ell + 2{r}+2} \le 2^{\ell'}\;.$$ Since *M* is a (*k*ʹ, ℓʹ)-*L*2-extractor with error 2− *r*ʹ, there are at most 2− *k*ʹ ⋅ ∣*A*∣ rows $a\in \AA$ with $t(a) = \frac{|{\langleM\_a,f\rangle}|}{{\left\lVertf\right\rVert}\_1} \ge 2^{-r'}$. We bound the expectation in Equation , by splitting the expectation into two sums $$\label{e18} {\mathop{\bf E\/}}\_{a \in\_R \AA} \left[ \left( 1 + {t(a)} \right)^{{k}} \right] = \tfrac{1}{|A|} \cdot \sum\_{a \;: \; t(a) \leq 2^{-r'}} \left( 1 + t(a) \right)^{{k}} + \tfrac{1}{|A|} \cdot \sum\_{a \;: \; t(a) > 2^{-r'}} \left( 1 + t(a) \right)^{{k}}.$$ We bound the first sum in Equation  by (1 + 2− *r*ʹ)*k*. As for the second sum in Equation , we know that it is a sum of at most $2^{-k'} \cdot |\AA|$ elements, and since for every $a \in \AA$, we have *t*(*a*) ≤ 1, we have $$\tfrac{1}{|A|} \cdot \sum\_{a \;: \; t(a) > 2^{-r'}} \left( 1 + t(a) \right)^{{k}} \leq 2^{-k'} \cdot 2^{{k}} \le 2^{-2{r}}\;$$ (where in the last inequality we used Equations  and ). Overall, using Equation  again, we get $$\label{e18.5} {\mathop{\bf E\/}}\_{a \in\_R \AA} \left[ \left( 1 + t(a) \right)^{{k}} \right] \le (1+2^{-r'})^{{k}} + 2^{-2{r}} \le (1+2^{-2{r}})^{{k}+1}.$$ Substituting Equation  into Equation , we obtain $$\begin{aligned} \sum\_{e \in \Gamma\_{out}(v)} \tfrac{\Pr(e)}{\Pr(v)} \cdot \langle \P\_{x|e},\P\_{x|s} \rangle^{{k}} &< \langle \P\_{x|v},\P\_{x|s} \rangle ^{{k}} \cdot \left(1 + 2^{-2{r}}\right)^{{k}+1} \cdot \left( 1 + 2^{-2{r}+3}\right)^{{k}} \\ &< \langle \P\_{x|v},\P\_{x|s} \rangle ^{{k}} \cdot \left( 1 + 2^{-{r}}\right)^{{k}}\end{aligned}$$ (where the last inequality uses the assumption that *r* is sufficiently large). This completes the proof of Claim [claim-p0]. [claim-p1] For every *i* ∈ {1, …, *m*}, $${\cal Z}'\_i \leq {\cal Z}\_{i-1} \cdot \left( 1 + 2^{-{r}}\right)^{{k}} + \left( 2^{-2n +2} \right)^{{k}}.$$ By Claim [claim-p0], $$\begin{aligned} {\cal Z}'\_i = \sum\_{e \in \Gamma\_i} \Pr(e) \cdot \langle \P\_{x|e},\P\_{x|s} \rangle^{{k}} &= \sum\_{v \in L\_{i-1}} \Pr(v) \cdot \sum\_{e \in \Gamma\_{out}(v)} \tfrac{\Pr(e)}{\Pr(v)} \cdot \langle \P\_{x|e},\P\_{x|s} \rangle^{{k}} \\&\leq \sum\_{v \in L\_{i-1}} \Pr(v) \cdot \left( \langle \P\_{x|v},\P\_{x|s} \rangle^{{k}} \cdot \left( 1 + 2^{-{r}}\right)^{{k}} + \left( 2^{-2n +2} \right)^{{k}} \right) \\&= {\cal Z}\_{i-1} \cdot \left( 1 + 2^{-{r}}\right)^{{k}} + \sum\_{v \in L\_{i-1}} \Pr(v) \cdot \left( 2^{-2n +2} \right)^{{k}} \\&\leq {\cal Z}\_{i-1} \cdot \left( 1 + 2^{-{r}}\right)^{{k}} + \left( 2^{-2n +2} \right)^{{k}}\qedhere\end{aligned}$$ [claim-p2] For every *i* ∈ {1, …, *m*}, $${\cal Z}\_{i} \leq {\cal Z}'\_i.$$ For any *v* ∈ *L**i*, let Γ*i**n*(*v*) be the set of all edges *e* ∈ Γ*i*, that are going into *v*. Note that ∑*e* ∈ Γ*i**n*(*v*)Pr(*e*) = Pr(*v*). By the law of total probability, for every *v* ∈ *L**i* and every $x' \in \XX$, $$\P\_{x|v} (x') = \sum\_{e \in \Gamma\_{in}(v)} \tfrac{\Pr(e)}{\Pr(v)} \cdot \P\_{x|e} (x'),$$ and hence $$\langle \P\_{x|v},\P\_{x|s} \rangle = \sum\_{e \in \Gamma\_{in}(v)} \tfrac{\Pr(e)}{\Pr(v)} \cdot \langle \P\_{x|e},\P\_{x|s} \rangle.$$ Thus, by Jensen’s inequality, $$\langle \P\_{x|v},\P\_{x|s} \rangle ^{{k}} \leq \sum\_{e \in \Gamma\_{in}(v)} \tfrac{\Pr(e)}{\Pr(v)} \cdot \langle \P\_{x|e},\P\_{x|s} \rangle ^{{k}}.$$ Summing over all *v* ∈ *L**i*, we get $${\cal Z}\_{i} = \sum\_{v \in L\_{i}} \Pr(v) \cdot \langle \P\_{x|v},\P\_{x|s} \rangle ^{{k}} \leq \sum\_{v \in L\_{i}} \Pr(v) \cdot \sum\_{e \in \Gamma\_{in}(v)} \tfrac{\Pr(e)}{\Pr(v)} \cdot \langle \P\_{x|e},\P\_{x|s} \rangle ^{{k}}$$ $$= \sum\_{e \in \Gamma\_i} \Pr(e) \cdot \langle \P\_{x|e},\P\_{x|s} \rangle^{{k}} = {\cal Z}'\_i.\qedhere$$ [claim-p3] For every *i* ∈ {1, …, *m*}, $${\cal Z}\_i \leq 2^{4{k}+ 2{r}} \cdot 2^{-2{k}\cdot n}.$$ By Claim [claim-s2], ${\cal Z}\_0 = (2^{-2n})^{{k}}$. By Claim [claim-p1] and Claim [claim-p2], for every *i* ∈ {1, …, *m*}, $${\cal Z}\_i \leq {\cal Z}\_{i-1} \cdot \left( 1 + 2^{-{r}}\right)^{{k}} + \left( 2^{-2n +2} \right)^{{k}}.$$ Hence, for every *i* ∈ {1, …, *m*}, $${\cal Z}\_i \leq \left( 2^{-2n +2} \right)^{{k}} \cdot m \cdot \left( 1 + 2^{-{r}}\right)^{{k}m}.$$ Since *m* = 2*r*, $${\cal Z}\_i \leq 2^{-2{k}\cdot n} \cdot 2^{2{k}} \cdot 2^{{r}} \cdot e^{{k}} \leq 2^{-2{k}\cdot n} \cdot 2^{4{k}+ 2{r}}.\qedhere$$ ### Proof of Lemma [lemma-main1] We can now complete the proof of Lemma [lemma-main1]. Assume that *s* is in layer-*i* of *B*. By Claim [claim-s1], $${\cal Z}\_i \geq \Pr(s) \cdot \langle \P\_{x|s},\P\_{x|s} \rangle ^{{k}} > \Pr(s) \cdot \left( 2^{2\ell} \cdot 2^{-2n} \right)^{{k}} = \Pr(s) \cdot 2^{2\ell \cdot {k}} \cdot 2^{-2{k}\cdot n}.$$ On the other hand, by Claim [claim-p3], $${\cal Z}\_i \leq 2^{4{k}+ 2{r}} \cdot 2^{-2{k}\cdot n}.$$ Thus, using Equation  and Equation , we get Pr(*s*) ≤ 24*k* + 2*r* ⋅ 2− 2ℓ ⋅ *k* ≤ 24*k*ʹ ⋅ 2− (2*γ*2/3) ⋅ (*k*ʹℓʹ). Recall that we assumed that the width of *B* is at most 2*c**k*ʹℓʹ for some constant *c* < 2/3, and that the length of *B* is at most 2*r*. Recall that we fixed *γ* such that 2*γ*2/3 > *c*. Taking a union bound over at most 2*r* ⋅ 2*c**k*ʹℓʹ ≤ 2*k*ʹ ⋅ 2*c**k*ʹℓʹ significant vertices of *B*, we conclude that the probability that $\T$ reaches any significant vertex is at most 2− Ω(*k*ʹℓʹ). Since we assume that *k*ʹ and ℓʹ are sufficiently large, 2− Ω(*k*ʹℓʹ) is certainly at most 2− *k*ʹ, which is at most 2− *r*. Lower Bounds for Weak Learning ------------------------------ In this section, we show that under the same conditions of Theorem [thm:TM-main1], the branching program cannot even weakly-learn the function. That is, we show that the branching program cannot output a hypothesis $h : \AA \to \{-1,1\}$ with a non-negligible correlation with the function defined by the true unknown *x*. We change the definition of the branching program and associate with each leaf *v* a hypothesis $h\_v: \AA \to \{-1,1\}$. We measure the success as the correlation between *h**v* and the function defined by the true unknown *x*. Formally, for any $x\in \XX$, let $M^{(x)}: \AA \to \{-1,1\}$ be the function corresponding to the *x*-th column of *M*. We define the value of the program as ${\mathop{\bf E\/}}\left[\left|{\langleh\_v, M^{(x)}\rangle}\right|\right]$, where the expectation is over *x*, *a*1, …, *a**m* (recall that *x* is uniformly distributed over $\XX$ and *a*1, …, *a**m* are uniformly distributed over $\AA$, and for every *t*, *b**t* = *M*(*a**t*, *x*)). The following claim bounds the expected correlation between *h**v* and *M*(*x*), conditioned on reaching a non-significant leaf. [claim:low-correlation] If *v* is a non-significant leaf, then $${\mathop{\bf E\/}}\_{x}\Big[ \left|{\langleh\_v, M^{(x)}\rangle}\right| \;\Big|\; E\_v\Big] \le O(2^{-r/2}).$$ We expand the expected correlation between *h**v* and *M*(*x*), squared: $$\begin{aligned} {\mathop{\bf E\/}}\_{x}\Big[\left|{\langleh\_v,M^{(x)}\rangle}\right|\;\Big|\;E\_v\Big]^2 &\le {\mathop{\bf E\/}}\_{x}\Big[{\langleh\_v,M^{(x)}\rangle}^2\;\Big|\;E\_v\Big] = \sum\_{x' \in \XX}{\P\_{x|v}(x') \cdot {\langleh\_v,M^{(x')}\rangle}^2}\\ &= \sum\_{x'\in \XX} \P\_{x|v}(x')\cdot {\mathop{\bf E\/}}\_{a,a' \in\_R \AA} [h\_v(a) \cdot M(a,x') \cdot h\_v(a') \cdot M(a',x')]\\ &= {\mathop{\bf E\/}}\_{a,a'\in\_R \AA} \bigg[h\_v(a) \cdot h\_v(a') \cdot \sum\_{x' \in \XX} \P\_{x|v}(x')\cdot M(a,x') \cdot M(a',x')\bigg]\\ &\le {\mathop{\bf E\/}}\_{a,a'\in\_R \AA} \left[\left|\sum\_{x' \in \XX} \P\_{x|v}(x')\cdot M(a,x') \cdot M(a',x')\right|\right]\\ &= {\mathop{\bf E\/}}\_{a\in\_R \AA} \left[{\mathop{\bf E\/}}\_{a' \in\_R \AA}\left[\left|\sum\_{x' \in \XX} \P\_{x|v}(x')\cdot M(a,x') \cdot M(a',x')\right|\right]\right]\;.\end{aligned}$$ Next, we show that ${\mathop{\bf E\/}}\_{a' \in\_R \AA}\left[\left|\sum\_{x' \in \XX} \P\_{x|v}(x')\cdot M(a,x') \cdot M(a',x')\right|\right]\ \le 4\cdot 2^{-r}$ for any $a\in \AA$. Fix $a \in \AA$. Let $q\_{a}: \XX \to \Reals$ be the function defined by $q\_{a}(x') = \P\_{x|v}(x') \cdot M(a,x')$ for $x' \in \XX$. Since $|q\_a(x')|=|\P\_{x|v}(x')|$ for any $x' \in \XX$ and since *v* is a non-significant vertex, we get $${\left\lVertq\_a\right\rVert}\_2 = {\left\lVert\P\_{x|v}\right\rVert}\_2 \le 2^{\ell} \cdot 2^{-n} \qquad\text{and}\qquad {\left\lVertq\_a\right\rVert}\_1 = {\left\lVert\P\_{x|v}\right\rVert}\_1 = 2^{-n}.$$ Hence, $\frac{{\left\lVertq\_a\right\rVert}\_2}{{\left\lVertq\_a\right\rVert}\_1}\le 2^{\ell}$. We would like to use the fact that *M* is a (*k*ʹ, ℓʹ)-*L*2-extractor with error 2− *r*ʹ to show that there aren’t many rows of *M* with a large inner product with *q**a*. However, *q**a* can get negative values and the definition of *L*2-extractors only handles non-negative functions $f: \XX \to \Reals^{+}$. To solve this issue, we use the following lemma, proved in Section [sec:useful]. [lem:negative] Suppose that $M: \AA \times \XX \to \{-1,1\}$ is a (*k*ʹ, ℓʹ)-*L*2-extractor with error at most 2− *r*. Let $f: \XX \to \Reals$ be any function (i.e., *f* can get negative values) with $\frac{{\left\lVertf\right\rVert}\_2}{{\left\lVertf\right\rVert}\_1} \le 2^{\ell'-r}$. Then, there are at most 2 ⋅ 2− *k*ʹ ⋅ ∣*A*∣ rows *a* ∈ *A* with $\frac{|{\langleM\_a, f\rangle}|}{{\left\lVertf\right\rVert}\_1} \ge 2\cdot 2^{-r}$. Since *M* is a (*k*ʹ, ℓʹ)-*L*2-extractor with error at most 2− *r*ʹ, and since *r* < *r*ʹ, we have that *M* is also a (*k*ʹ, ℓʹ)-*L*2-extractor with error at most 2− *r*. Since $\frac{{\left\lVertq\_a\right\rVert}\_2}{{\left\lVertq\_a\right\rVert}\_1}\le 2^{\ell} \le 2^{\ell'-r}$, we can apply Lemma [lem:negative] with *f* = *q**a*, and error 2− *r*. We get that there are at most 2 ⋅ 2− *k*ʹ ⋅ ∣*A*∣ rows $a' \in \AA$ with $\frac{\left|{\langleq\_a, M\_{a'}\rangle}\right|}{{\left\lVertq\_a\right\rVert}\_1} \ge 2\cdot 2^{-r}$. Thus, $${\mathop{\bf E\/}}\_{a' \in\_R \AA} \left[ \left|\sum\_{x' \in \XX} q\_{a}(x') \cdot M(a',x')\right| \right] = {\mathop{\bf E\/}}\_{a' \in\_R \AA} \left[ \frac{\left|{\langleq\_a, M\_{a'}\rangle}\right|}{{\left\lVertq\_a\right\rVert}\_1} \right] \le 2\cdot 2^{-k'} + 2\cdot 2^{-r} \le 4\cdot 2^{-{r}}\;.$$ Overall, we get that ${\mathop{\bf E\/}}\_{x}\big[|{\langleh\_v,M^{(x)}\rangle}|\;\big|\;E\_v\big]^2 \le 4\cdot 2^{-{r}}$. Taking square roots of both sides of the last inequality completes the proof. Lemma [lemma-main1], Claim [claim-A0] and Claim [claim-A2] show that the probability that $\T$ stops before reaching a leaf is at most *O*(2− *r*). Combining this with Claim [claim:low-correlation] we get that (under the same conditions of Theorem [thm:TM-main1]) $${\mathop{\bf E\/}}[ \left|{\langleh\_v, M^{(x)}\rangle}\right|] \le \Pr[\T\text{~stops}] + O(2^{-r/2}) \le O(2^{-r/2}),$$ where the expectation and probability are taken over $x\in\_R \XX$ and $a\_1, \ldots,a\_m\in\_R \AA$. We get the following theorem as a conclusion. [thm:TM-main2] Let $\tfrac{1}{100}< c <\tfrac{2}{3}$. Fix *γ* to be such that $\tfrac{3c}{2} < \gamma^2 < 1$. Let $\XX$, $\AA$ be two finite sets. Let $n = \log\_2|\XX|$. Let $M: \AA \times \XX \rightarrow \{-1,1\}$ be a matrix which is a (*k*ʹ, ℓʹ)-*L*2-extractor with error 2− *r*ʹ, for sufficiently large[5](#fn5) *k*ʹ, ℓʹ and *r*ʹ, where ℓʹ ≤ *n*. Let $${r}:= \min\left\{ \tfrac{r'}{2}, \tfrac{(1-\gamma)k'}{2}, \tfrac{(1-\gamma)\ell'}{2} -1 \right\}.$$ Let *B* be a branching program of length at most 2*r* and width at most 2*c* ⋅ *k*ʹ ⋅ ℓʹ for the learning problem that corresponds to the matrix *M*. Then, $${\mathop{\bf E\/}}[ \left| {\langleh\_v, M^{(x)}\rangle}\right|] \le O(2^{-r/2})\;.$$ In particular, the probability that the hypothesis agrees with the function defined by the true unknown *x*, on more than 1/2 + 2− *r*/4 of the inputs, is at most *O*(2− *r*/4). Main Corollary -------------- [cor:main1] There exists a (sufficiently small) constant *c* > 0, such that: Let $\XX$, $\AA$ be two finite sets. Let $M: \AA \times \XX \rightarrow \{-1,1\}$ be a matrix. Assume that $k,\ell, r \in \N$ are such that any submatrix of *M* of at least 2− *k* ⋅ ∣*A*∣ rows and at least 2− ℓ ⋅ ∣*X*∣ columns, has a bias of at most 2− *r*. Let *B* be a branching program of length at most 2*c* ⋅ *r* and width at most 2*c* ⋅ *k* ⋅ ℓ for the learning problem that corresponds to the matrix *M*. Then, the success probability of *B* is at most 2− Ω(*r*). By Lemma [lemma:error-minent] (stated and proved below), there exist *k*ʹ = *k* + Ω(*r*),  ℓʹ = ℓ + Ω(*r*), and *r*ʹ = Ω(*r*), such that: any submatrix of *M* of at least 2− *k*ʹ ⋅ ∣*A*∣ rows and at least 2− ℓʹ ⋅ ∣*X*∣ columns, has a bias of at most 2− *r*ʹ. By Lemma [lemma:min-extractor] (stated and proved below), *M* is an (Ω(*k*) + Ω(*r*), Ω(ℓ) + Ω(*r*))-*L*2-extractor with error 2− Ω(*r*). The corollary follows by Theorem [thm:TM-main1]. Applications ============ Some Useful Lemmas ------------------ ### Handling Negative Functions In the following lemma, we show that up to a small loss in parameters an *L*2-extractor has similar guarantees for any function $f: \XX \to \Reals$ with bounded ℓ2-vs-ℓ1-norm regardless of whether or not *f* is non-negative. Suppose that $M: \AA \times \XX \to \{-1,1\}$ is a (*k*ʹ, ℓʹ)-*L*2-extractor with error at most 2− *r*. Let $f: \XX \to \Reals$ be any function with $\frac{{\left\lVertf\right\rVert}\_2}{{\left\lVertf\right\rVert}\_1} \le 2^{\ell'-r}$. Then, there are at most 2 ⋅ 2− *k*ʹ ⋅ ∣*A*∣ rows *a* ∈ *A* with $\frac{|{\langleM\_a, f\rangle}|}{{\left\lVertf\right\rVert}\_1} \ge 2\cdot 2^{-r}$. Let $f\_{+}, f\_{-}: \XX \to \Reals^{+}$ be the non-negative functions defined by $$f\_{+}(x) = \begin{cases} f(x), & f(x)>0\\ 0,& \mbox{otherwise} \end{cases} \qquad\qquad f\_{-}(x) = \begin{cases} |f(x)|, & f(x)<0\\ 0,& \mbox{otherwise} \end{cases}$$ for $x \in \XX$. We have *f*(*x*) = *f*+(*x*) − *f*−(*x*) for all $x \in \XX$. We split into two cases: 1. If ∥*f*+∥1 < 2− *r* ⋅ ∥*f*∥1, then ∣⟨*M**a*, *f*+⟩∣ ≤ ∥*f*+∥1 < 2− *r* ⋅ ∥*f*∥1 for all $a\in \AA$. 2. If ∥*f*+∥1 ≥ 2− *r* ⋅ ∥*f*∥1, then *f*+ is a non-negative function with $$\frac{{\left\lVertf\_{+}\right\rVert}\_2}{{\left\lVertf\_{+}\right\rVert}\_1} \le \frac{{\left\lVertf\right\rVert}\_2}{{\left\lVertf\right\rVert}\_1 \cdot 2^{-r}} \le 2^{\ell'}\;.$$ Thus, we may use the assumption that *M* is an *L*2-extractor to deduce that there are at most 2− *k*ʹ ⋅ ∣*A*∣ rows $a\in \AA$ with ∣⟨*M**a*, *f*+⟩∣ ≥ ∥*f*+∥1 ⋅ 2− *r*. In both cases, there are at most 2− *k*ʹ ⋅ ∣*A*∣ rows $a\in \AA$ with ∣⟨*M**a*, *f*+⟩∣ ≥ ∥*f*∥1 ⋅ 2− *r*. Similarly, there are at most 2− *k*ʹ ⋅ ∣*A*∣ rows $a\in \AA$ with ∣⟨*M**a*, *f*−⟩∣ ≥ ∥*f*∥1 ⋅ 2− *r*. Thus, for all but at most 2 ⋅ 2− *k*ʹ ⋅ ∣*A*∣ of the rows $a\in \AA$ we have $$|{\langleM\_a, f\rangle}| \le |{\langleM\_a, f\_{+}\rangle}| + |{\langleM\_a, f\_{-}\rangle}| < 2\cdot {\left\lVertf\right\rVert}\_1 \cdot 2^{-r}\;.\qedhere$$ ### Error vs. Min-Entropy [lemma:error-minent] Let $M: \AA \times \XX \rightarrow \{-1,1\}$ be a matrix. Let *k*, ℓ, *r* be such that any submatrix of *M* of at least 2− *k* ⋅ ∣*A*∣ rows and at least 2− ℓ ⋅ ∣*X*∣ columns, has a bias of at most 2− *r*. Then, there exist *k*ʹ = *k* + Ω(*r*),  ℓʹ = ℓ + Ω(*r*), and *r*ʹ = Ω(*r*), such that: any submatrix of *M* of at least 2− *k*ʹ ⋅ ∣*A*∣ rows and at least 2− ℓʹ ⋅ ∣*X*∣ columns, has a bias of at most 2− *r*ʹ. Assume without loss of generality that *k*, ℓ, *r* are larger than some sufficiently large absolute constant. We will show that there exists *k*ʹ = *k* + Ω(*r*), such that, any submatrix of *M* of at least 2− *k*ʹ ⋅ ∣*A*∣ rows and at least 2− ℓ ⋅ ∣*X*∣ columns, has a bias of at most 2− Ω(*r*). The proof of the lemma then follows by applying the same claim again on the transposed matrix. Let $k' = k + \tfrac{r}{10}$. Assume for a contradiction that there exist *T* ⊆ *A* of size at least 2− *k*ʹ ⋅ ∣*A*∣ and *S* ⊆ *X* of size at least 2− ℓ ⋅ ∣*X*∣, such that the bias of *T* × *S* is larger than, say, 2− *r*/2. By the assumption of the lemma, ∣*T*∣ < 2− *k* ⋅ ∣*A*∣. Let *T*ʹ be an arbitrary set of 2− *k* ⋅ ∣*A*∣ rows in *A* \ *T*. By the assumption of the lemma, the bias of *T*ʹ × *S* is at most 2− *r*. Therefore, the bias of (*T*ʹ ∪ *T*) × *S* is at least $$\tfrac{|T|}{|T' \cup T|} \cdot 2^{-r/2} - \tfrac{|T'|}{|T' \cup T|} \cdot 2^{-r} \geq \tfrac{1}{2} \cdot 2^{-r/10} \cdot 2^{-r/2} - 2^{-r} > 2^{-r}.$$ Thus, (*T*ʹ ∪ *T*) × *S* contradicts the assumption of the lemma. ### *L*2-Extractors and *L*∞-Extractors We will show that *M* being an *L*2-Extractor is equivalent to *M* being an *L*∞-Extractor (barring constants). [lem:L2toLinf] If a matrix $M:\AA\times \XX\rightarrow \{-1,1\}$ is a (*k*, ℓ)-*L*2-Extractor with error 2− *r*, then *M* is also a (*k* − *ξ*, 2ℓ ∼ (min{*r*, *ξ*} − 1))-*L*∞-Extractor, ∀0 < *ξ* < *k*. Taking $\xi=\frac{k}{2}$, we get that if *M* is a (*k*, ℓ)-*L*2-Extractor with error 2− *r*, then *M* is also a (Ω(*k*), Ω(ℓ) ∼ (Ω(min{*r*, *k*})))-*L*∞-Extractor. We pick a *ξ* (0 < *ξ* < *k*). To prove that *M* is a (*k* − *ξ*, 2ℓ ∼ (min{*r*, *ξ*} − 1))-*L*∞-Extractor, it suffices to prove the statement of the *L*∞-Extractors for any two uniform distributions over subsets $A\_1 \subseteq \AA$ and $X\_1 \subseteq \XX$ of size at least $\frac{|\AA|}{2^{k-\xi}}$ and $\frac{|\XX|}{2^{2\ell}}$ respectively. This follows from the fact that any distribution with min-entropy at least *h* can be written as a convex combination of uniform distributions on sets of size at least 2*h* . For a distribution *p**x*, which is uniform over a subset $X\_1 \subseteq \XX$ of size at least $\frac{|\XX|}{2^{2\ell}}$, $$\frac{{\left\lVertp\_x\right\rVert}\_2}{{\left\lVertp\_x\right\rVert}\_1}= \left(\frac{|\XX|}{|X\_1|}\right)^{\frac{1}{2}}\le 2^{\ell}.$$ Using the fact that *M* is a (*k*, ℓ)-*L*2-Extractor with error 2− *r*, we know that there are at most $\frac{|\AA|}{2^k}$ rows *a* with ∣(*M* ⋅ *p**x*)*a*∣ ≥ 2− *r*. Using the fact that *p**a* is a uniform distribution over a set *A*1 of size at least $\frac{|\AA|}{2^{k-\xi}}$, we get $$\begin{aligned} \left|\sum\_{a' \in \AA}\sum\_{x'\in \XX} {p\_a(a') \cdot p\_x(x') \cdot M(a',x')}\right| &\le\frac{1}{|A\_1|} \cdot \sum\_{a'\in A\_1} \left|(M\cdot p\_x)\_{a'}\right|\\ &\le\frac{1}{|A\_1|} \cdot \left(\frac{|\AA|}{2^{k}} + |A\_1|\cdot 2^{-r} \right) \le 2^{-\xi} + 2^{-r}\end{aligned}$$ This proves that *M* is a (*k* − *ξ*, 2ℓ ∼ (min{*r*, *ξ*} − 1))-*L*∞-Extractor, ∀0 < *ξ* < *k*. [lemma:min-extractor] If a matrix $M:\AA\times \XX\rightarrow \{-1,1\}$ is a (*k*, ℓ ∼ *r*)-*L*∞-Extractor, then *M* is also a $\left(k-1,\frac{\ell-\xi-1}{2}\right)$-*L*2-Extractor with error 2− *r* + 2− *ξ* + 1, ∀1 ≤ *ξ* ≤ ℓ − 1. Taking $\xi=\frac{\ell}{2}$, we get that if *M* is a (*k*, ℓ ∼ *r*)-*L*∞-Extractor, then *M* is also a (Ω(*k*), Ω(ℓ))-*L*2-Extractor with error 2− Ω(min{*r*, ℓ}). In this proof, we use the following notation. For two non-negative functions $P,Q:X~\rightarrow~\Reals$, we denote by dist(*P*, *Q*) the ℓ1-distance between the two functions, that is dist(*P*, *Q*) = ∑*x* ∈ *X*∣*P*(*x*) − *Q*(*x*)∣ . Note that dist(*P*, *Q*) = ∥*P* − *Q*∥1 ⋅ ∣*X*∣. We want to prove that for any 1 ≤ *ξ* ≤ ℓ − 1, and any non-negative function $f:\XX\rightarrow \Reals$ with $\frac{{\left\lVertf\right\rVert}\_2}{{\left\lVertf\right\rVert}\_1}\le 2^{\frac{\ell-\xi-1}{2}}$, there are at most $2\cdot 2^{-k}\cdot |\AA|$ rows $a\in \AA$ with $\frac{|{\langleM\_a,f\rangle}|}{{\left\lVertf\right\rVert}\_1}\ge 2^{-r}+2^{-\xi+1}$. Let’s assume that there exists a non-negative function $f:\XX\rightarrow \Reals$ for which the last statement is not true. Let *f**p* be a probability distribution on $\XX$ defined by $f\_p(x)=\frac{f(x)}{\sum\_{x}f(x)}=\frac{f(x)}{|\XX| \cdot {\left\lVertf\right\rVert}\_1}$. Then, $${\left\lVertf\_p\right\rVert}\_2=\frac{{\left\lVertf\right\rVert}\_2}{|\XX|\cdot {\left\lVertf\right\rVert}\_1}\le \frac{2^{\frac{\ell-\xi-1}{2}}}{|\XX|}$$ $$\implies \left(\frac{\sum\_xf\_p(x)^2}{|\XX|}\right)^{\frac{1}{2}}\le \frac{2^{\frac{\ell-\xi-1}{2}}}{|\XX|}$$ $$\implies \sum\_xf\_p(x)^2 \le 2^{\ell-\xi-1-\log(|\XX|)}$$ Thus, there is strictly less than 2− *ξ* probability mass on elements *x* with $f\_p(x)> 2^{\ell-\log(|\XX|)-1}$. Let $\bar{f\_p}:\XX\rightarrow \Reals$ be the trimmed function that takes values *f**p*(*x*) at *x* when $f\_p(x)\le 2^{\ell-\log(|\XX|)-1}$ and 0 otherwise. We define a new probability distribution $p\_x:\XX\rightarrow [0,1]$ as $$p\_x(x')=\bar{f\_p}(x')+\frac{1-\sum\_{x'}\bar{f\_p}(x')}{|\XX|}.$$ Informally, we are just redistributing the probability mass removed from *f**p*. It is easy to see that the new probability distribution *p**x* has min-entropy at least $\log(|\XX|)-\ell$, and dist(*p**x*, *f**p*) < 2− *ξ* + 1 as $\mathrm{dist}(p\_x,f\_p)\le \mathrm{dist}(p\_x,\bar{f\_p}) + \mathrm{dist}(\bar{f\_p},f\_p)< 2^{-\xi}+2^{-\xi}$. Let *A*bad be the set of rows $a\in \AA$ with $\frac{|{\langleM\_a,f\rangle}|}{{\left\lVertf\right\rVert}\_1}=|(M\cdot f\_p)\_a|\ge 2^{-r}+2^{-\xi+1}$. By our assumption, $|A\_{\text{bad}}|\ge 2\cdot 2^{-k}|\AA|$. Let *A*1 and *A*2 be the set of rows *a* with (*M* ⋅ *f**p*)*a* ≥ 2− *r* + 2− *ξ* + 1 and (*M* ⋅ *f**p*)*a* ≤  − (2− *r* + 2− *ξ* + 1) respectively. As *A*bad = *A*1 ∪ *A*2, w.l.o.g. $|A\_1|\ge |A\_{\text{bad}}|/2\ge 2^{-k}|\AA|$ (else we can work with *A*2 and the rest of the argument follows similarly). Let *p**a* be a uniform probability distribution over the set *A*1. Clearly *p**a* has min-entropy at least $\log(|\AA|)-k$. As (*M* ⋅ *f**p*)*a* ≥ 2− *r* + 2− *ξ* + 1 for the entire support of *p**a*, we get $$\label{eq:1} \left|{\mathop{\bf E\/}}\_{a\in\_R A\_1}[ (M \cdot f\_p)\_a]\right| \ge 2^{-r}+2^{-\xi+1}.$$ As the entries of *M* have magnitude at most 1, we have $$\label{eq:2} \left|{\mathop{\bf E\/}}\_{a\in\_R A\_1}\left[ (M \cdot (p\_x-f\_p))\_a\right]\right| \le {\mathop{\bf E\/}}\_{a\in\_R A\_1}\left[ \sum\_{x'\in \XX}{|p\_x(x')-f\_p(x')|}\right] = \mathrm{dist}(p\_x,f\_p)\;.$$ Combining Equations , and  together gives $$\left|{\mathop{\bf E\/}}\_{a\in\_R A\_1}[ (M \cdot p\_x)\_a]\right| \;\ge\; 2^{-r}+2^{-\xi+1}-\mathrm{dist}(p\_x,f\_p) \;>\; 2^{-r}$$ Thus, we have two distributions *p**a* and *p**x* with min-entropy at least $\log(|\AA|)-k$ and $\log(|\XX|)-\ell$ respectively contradicting the fact that *M* is a (*k*, ℓ ∼ *r*)-*L*∞-Extractor. Hence no such *f* exists and *M* is a $(k-1,\frac{\ell-\xi-1}{2})$-*L*2-Extractor with error 2− *r* + 2− *ξ* + 1. ### Transpose [lemma:transpose] If a matrix $M:\AA\times \XX\rightarrow \{-1,1\}$ is a (*k*, ℓ)-*L*2-Extractor with error 2− *r*, then the transposed matrix *M**t* is an (Ω(ℓ), Ω(*k*))-*L*2-Extractor with error 2− Ω(min{*r*, *k*}). As *M* is a (*k*, ℓ)-*L*2-Extractor with error 2− *r*, using Lemma [lem:L2toLinf], *M* is also a (Ω(*k*), Ω(ℓ) ∼ (Ω(min{*r*, *k*})))-*L*∞-Extractor. The definition of *L*∞-Extractor is symmetric in its rows and columns and hence, *M**t* is also a (Ω(ℓ), Ω(*k*) ∼ (Ω(min{*r*, *k*})))-*L*∞-Extractor. Now, using Lemma [lemma:min-extractor] on *M**t*, we get that *M**t* is also a (Ω(ℓ), Ω(*k*))-*L*2-Extractor with error 2− Ω(min{*r*, *k*}). ### Lower Bounds for Almost Orthogonal Vectors In this section, we show that a matrix $M : \AA \times \XX \to \{-1,1\}$ whose rows are almost orthogonal is a good *L*2-extractor. A similar technique was used in many previous works (see for example ). Motivated by the applications (e.g., learning sparse parities and learning from low-degree equations) in which some pairs of rows are not almost orthogonal, we relax this notion and only require that almost all pairs of rows are almost orthogonal. We formalize this in the definition of (*ε*, *δ*)-almost orthogonal vectors. [def:eps-delta] **(*ε*, *δ*)-almost orthogonal vectors:** Vectors *v*1, …, *v**m* ∈ { − 1, 1}*X* are (*ε*, *δ*)-almost orthogonal if for any *i* ∈ [*m*] there are at most *δ* ⋅ *m* indices *j* ∈ [*m*] with ∣⟨*v**i*, *v**j*⟩∣ > *ε*. Definition [def:eps-delta] generalizes the definition of an (*ε*, *δ*)-biased set from . [def:eps-delta-T] **(*ε*, *δ*)-biased set ():** A set *T* ⊆ {0, 1}*n* is (*ε*, *δ*)-biased if there are at most *δ* ⋅ 2*n* elements *a* ∈ {0, 1}*n* with $\left|{\mathop{\bf E\/}}\_{x\in\_R T}[(-1)^{a \cdot x}]\right|> {\epsilon}$, (where *a* ⋅ *x* denotes the inner product of *a* and *x*, modulo 2). Definition [def:eps-delta-T] is a special case of Definition [def:eps-delta], where the vectors corresponding to a set *T* ⊆ {0, 1}*n* are defined as follows. With every *a* ∈ {0, 1}*n*, we associate the vector *v**a* of length ∣*T*∣, whose *x*-th entry equals ( − 1)*a* ⋅ *x* for any *x* ∈ *T*. Indeed, *T* is (*ε*, *δ*)-biased iff the vectors {*v**a* : *a* ∈ {0, 1}*n*} are (*ε*, *δ*)-almost orthogonal. [Generalized Johnson’s Bound][lemma:Johnson] Let *M* ∈ { − 1, 1}*A* × *X* be a matrix. Assume that {*M**a*}*a* ∈ *A* are (*ε*, *δ*)-almost orthogonal vectors. Then, for any $\gamma > \sqrt{{\epsilon}}$ and any non-negative function *f* : *X* → R+, we have at most $(\tfrac{\delta}{\gamma^2-{\epsilon}}) \cdot |A|$ rows *a* ∈ *A* with ∣⟨*M**a*, *f*⟩∣ ≥ *γ* ⋅ ∥*f*∥2. In particular, fixing $\gamma = \sqrt{{\epsilon}+ \delta^{1/2}}$, we have that *M* is a (*k*, ℓ)-*L*2-extractor with error 2− *r*, for $k = \frac12 \log(1/\delta)$, and $\ell = r = \Omega\big(\min\{\log(1/{\epsilon}), \log(1/\delta)\}\big)$. Fix $\gamma>\sqrt{{\epsilon}}$. Let *I*+ (respectively, *I*−) be the rows in *A* with high correlation (respectively, anti-correlation) with *f*. More precisely: $$\begin{aligned} I\_{+} &:= \{i \in A: \;\; {\langleM\_i,f\rangle} > \gamma \cdot \|f\|\_2\}\;,\\ I\_{-} &:= \{i \in A: \;\; -{\langleM\_i,f\rangle} > \gamma \cdot \|f\|\_2\}\;. \end{aligned}$$ Let *I* = *I*+ ∪ *I*−. Define *z* = ∑*i* ∈ *I*+*M**i* − ∑*i* ∈ *I*−*M**i*. We consider the inner product of *f* and *z*. We have $$\begin{aligned} (|I| \cdot \gamma \cdot \|f\|\_2)^2 < {\langlef,z\rangle}^2 &= \Bigg({\mathop{\bf E\/}}\_{x \in\_R X}\bigg[ f(x) \cdot \Big(\sum\_{i\in I\_{+}}{M\_{i,x}}- \sum\_{i\in I\_{-}}{M\_{i,x}}\Big)\bigg]\Bigg)^2\\ &\le {\mathop{\bf E\/}}\_{x\in\_R X}\Big[ f(x)^2\Big] \cdot {\mathop{\bf E\/}}\_{x \in\_R X}\Bigg[\Big(\sum\_{i\in I\_{+}} M\_{i,x} -\sum\_{i\in I\_{-}} M\_{i,x}\Big)^2\Bigg]\tag{Cauchy-Schwarz}\\ &\le \|f\|\_2^2 \cdot \sum\_{i\in I} \sum\_{i' \in I}{|{\langleM\_{i},M\_{i'}\rangle}|}. \end{aligned}$$ For any fixed *i* ∈ *I*, we break the inner-sum ∑*i*ʹ ∈ *I*∣⟨*M**i*, *M**i*ʹ⟩∣ according to whether or not ∣⟨*M**i*, *M**i*ʹ⟩∣ > *ε*. By the assumption on *M*, there are at most *δ* ⋅ ∣*A*∣ rows *i*ʹ for which the inner-product is larger than *ε*. For these rows, the inner-product is at most 1. Thus, we get $$\begin{aligned} (|I| \cdot \gamma \cdot \|f\|\_2)^2 &<\|f\|\_2^2 \cdot \sum\_{i \in I} \sum\_{i' \in I}{|{\langleM\_i,M\_{i'}\rangle}|}\le \|f\|\_2^2 \cdot |I|\cdot ( |A|\cdot \delta + {\epsilon}\cdot|I|). \end{aligned}$$ That is, $$\begin{aligned} |I| \cdot \gamma^2 < |A|\cdot \delta + {\epsilon}\cdot|I|. \end{aligned}$$ Rearranging gives $$|I| < \left(\frac{\delta}{\gamma^2 - {\epsilon}}\right) \cdot |A|,$$ which completes the first part of the proof. We turn to the in particular part. Assume that $\frac{{\left\lVertf\right\rVert}\_2}{{\left\lVertf\right\rVert}\_1} \le 2^{\ell}$. Thus, we proved that there are at most $\left(\frac{\delta}{\gamma^2 - {\epsilon}}\right) \cdot |A|$ rows *a* ∈ *A*, such that, ∣⟨*M**a*, *f*⟩∣ ≥ *γ* ⋅ 2ℓ ⋅ ∥*f*∥1. Fixing $\gamma = \sqrt{{\epsilon}+ \delta^{1/2}}$, *k* = log(1/*δ*1/2), and $\ell = r = \frac12 \log(1/\gamma)$, we get that *M* is a (*k*, ℓ)-*L*2-extractor with error 2− *r* (Definition [definition:l2-extractor]). Finally, note that $\ell = r = \Omega\big(\min\{\log(1/\delta), \log(1/{\epsilon})\}\big)$, which completes the proof. Learning Sparse Parities ------------------------ As an application of Lemma [lemma:Johnson] and Theorem [thm:TM-main1], we reprove the main result in. [lemma:sparse] Let *T* ⊆ {0, 1}*n* be an (*ε*, *δ*)-biased set, with *ε* ≥ *δ*. Define the matrix *M* : {0, 1}*n* × *T* → { − 1, 1} by *M*(*a*, *x*) = ( − 1)*a* ⋅ *x*. Then, the learning task associated with *M* (“parity learning over *T*”) requires either at least Ω(log(1/*ε*) ⋅ log(1/*δ*)) memory bits or at least $\poly(1/{\epsilon})$ samples. The rows {*M**a*}*a* ∈ {0, 1}*n* are (*ε*, *δ*)-almost orthogonal vectors. Thus, by Lemma [lemma:Johnson], we get that *M* is a (*k*, ℓ)-*L*2-extractor with error 2− *r*, for *k* = Ω(log(1/*δ*)) and *r* = ℓ = Ω(log(1/*ε*)) (assuming *ε* ≥ *δ*). By Theorem [thm:TM-main1], we get the required memory-samples lower bound. [][lemma:KRT] There exists a (sufficiently small) constant *c* > 0 such that the following holds. Let *T*ℓ = {*x* ∈ {0, 1}*n* : ∑*i**x**i* = ℓ}. For any *ε* > (8ℓ/*n*)ℓ/2, *T*ℓ is an (*ε*, *δ*)-biased set for *δ* = 2 ⋅ *e*− *ε*2/ℓ ⋅ *n*/8. In particular, *T*ℓ is an (*ε*, *δ*)-biased set for 1. *ε* = 2− *c*ℓ, *δ* = 2− *c**n*, assuming ℓ ≤ *c**n*. 2. *ε* = ℓ− *c*ℓ, *δ* = 2− *c**n*/ℓ0.01, assuming ℓ ≤ *n*0.9. Let *c* > 0 be the constant mentioned in Lemma [lemma:KRT]. The following lemma complements Lemma [lemma:KRT] to the range of parameters *c**n* ≤ ℓ ≤ *n*/2. It shows that *T*ℓ is (2− Ω(*n*), 2− Ω(*n*))-biased in this case. The proof is a simple application of Parseval’s identity (see ). [] [lemma:KRT2] Let *T* ⊆ {0, 1}*n* be any set. Then, *T* is an (*ε*, *δ*)-biased set for $\delta = \frac{1}{|T|\cdot {\epsilon}^2}$. In particular, *T* is (∣*T*∣− 1/3, ∣*T*∣− 1/3)-biased. We get the following as an immediate corollary. Let *T*ℓ = {*x* ∈ {0, 1}*n* : ∑*i**x**i* = ℓ}. 1. Assuming ℓ ≤ *n*/2, parity learning over *T*ℓ requires either at least Ω(*n* ⋅ ℓ) memory bits or at least 2Ω(ℓ) samples. 2. Assuming ℓ ≤ *n*0.9, parity learning over *T*ℓ requires either at least Ω(*n* ⋅ ℓ0.99) memory bits or at least ℓΩ(ℓ) samples. Learning from Sparse Linear Equations ------------------------------------- Lemma [lemma:transpose] and the proof of Lemma [lemma:sparse] gives the following immediate corollary. [lemma:sparse-transpose] Let *T* ⊆ {0, 1}*n* be an (*ε*, *δ*)-biased set, with *ε* ≥ *δ*. Then, the matrix *M* : *T* × {0, 1}*n* → { − 1, 1}, defined by *M*(*a*, *x*) = ( − 1)*a* ⋅ *x* is a (*k*, ℓ)-*L*2-extractor with error 2− *r*, for ℓ = Ω(log(1/*δ*)) and *k* = *r* = Ω(log(1/*ε*)). Thus, the learning task associated with *M* (“learning from equations in *T*”) requires either at least Ω(log(1/*ε*) ⋅ log(1/*δ*)) memory bits or at least $\poly(1/{\epsilon})$ samples. We get the following as an immediate corollary of Lemmas [lemma:KRT], [lemma:KRT2] and [lemma:sparse-transpose]. Let *T*ℓ = {*x* ∈ {0, 1}*n* : ∑*i**x**i* = ℓ}. 1. Assuming ℓ ≤ *n*/2, learning from equations in *T*ℓ requires either at least Ω(*n* ⋅ ℓ) memory bits or at least 2Ω(ℓ) samples. 2. Assuming ℓ ≤ *n*0.9, learning from equations in *T*ℓ requires either at least Ω(*n* ⋅ ℓ0.99) memory bits or at least ℓΩ(ℓ) samples. Learning from Low Degree Equations ---------------------------------- In the following, we consider multilinear polynomials in F2[*x*1, …, *x**n*] of degree at most *d*. We denote by *P**d* the linear space of all such polynomials. We denote the bias of a polynomial *p* ∈ F2[*x*1, …, *x**n*] by $${\mathrm{bias}}(p) := {\mathop{\bf E\/}}\_{x\in {\mathbb{F}}\_2^n}[(-1)^{p(x)}].$$ We rely on the following result of Ben-Eliezer, Hod and Lovett, showing that random low-degree polynomials have very small bias with very high probability. [] Let *d* ≤ 0.99 ⋅ *n*. Then, $$\Pr\_{p\in\_R P\_d}[|{\mathrm{bias}}(p)| > 2^{-c\_1 \cdot n/d}] \le 2^{-c\_2 \cdot \binom{n}{\le d}}$$ where 0 < *c*1, *c*2 < 1 are absolute constants. [cor:low-deg-equations] Let $d, n \in \N$, with *d* ≤ 0.99 ⋅ *n*. Let *M* : *P**d* × F2*n* → { − 1, 1} be the matrix defined by *M*(*p*, *x*) = ( − 1)*p*(*x*) for any *p* ∈ *P**d* and *x* ∈ F2*n*. Then, the vectors {*M**p* : *p* ∈ *P**d*} are (*ε*, *δ*)-almost orthogonal, for *ε* = 2− *c*1*n*/*d* and $\delta = 2^{-c\_2\binom{n}{\le d}}$, (where 0 < *c*1, *c*2 < 1 are absolute constants). In particular, *M* is a (*k*, ℓ)-*L*2-extractor with error 2− *r*, for $k = \Omega\big(\binom{n}{\le d}\big)$ and *r* = ℓ = Ω(*n*/*d*). Thus, the learning task associated with *M* (“learning from degree-*d* equations”) requires either at least $\Omega\left( \binom{n}{\le d} \cdot n/d \right) \ge \Omega((n/d)^{d+1})$ memory bits or at least 2Ω(*n*/*d*) samples. We reinterpret. Since *P**d* is a linear subspace, for any fixed *p* ∈ *P**d* and a uniformly random *q* ∈ *R**P**d*, we have that *p* + *q* is a uniformly random polynomial in *P**d*. Thus, for any fixed *p* ∈ *P**d*, at most $2^{-c\_2 \cdot \binom{n}{\le d}}$ fraction of the polynomials *q* ∈ *P**d* have ∣bias(*p* + *q*)∣ ≥ 2− *c*1 ⋅ *n*/*d*. In other words, we get that {*M**p* : *p* ∈ *P**d*} are (*ε*, *δ*)-almost orthogonal vectors for *ε* = 2− *c*1 ⋅ *n*/*d* and $\delta = 2^{-c\_2\cdot \binom{n}{\le d}}$. We apply Lemma [lemma:Johnson] to get the “in particular” part, noting that in our case $\Omega\big(\min\{\log(1/{\epsilon}), \log(1/\delta)\}\big) = \Omega(n/d)$. We apply Theorem [thm:TM-main1] to get the “thus” part. Learning Low Degree Polynomials ------------------------------- Lemma [lemma:transpose] and Corollary [cor:low-deg-equations] gives the following immediate corollary. [cor:low-deg] Let $d, n \in \N$, with *d* ≤ 0.99 ⋅ *n*. Let *M* : F2*n* × *P**d* → { − 1, 1} be the matrix defined by *M*(*a*, *p*) = ( − 1)*p*(*a*) for any *p* ∈ *P**d* and *a* ∈ F2*n*. Then, *M* is a (*k*, ℓ)-*L*2-extractor with error 2− *r*, for $\ell = \Omega\big(\binom{n}{\le d}\big)$ and *k* = *r* = Ω(*n*/*d*). Thus, the learning task associated with *M* (“learning degree-*d* polynomials”) requires either at least $\Omega\left( \binom{n}{\le d} \cdot n/d \right) \ge \Omega((n/d)^{d+1})$ memory bits or at least 2Ω(*n*/*d*) samples. Relation to Statistical-Query-Dimension --------------------------------------- Let $\mathcal C$ be a class of functions mapping $\AA$ to { − 1, 1}. The Statistical-Query-Dimension of $\mathcal C$, denoted $\mathrm{SQdim}(\mathcal C)$, is defined to be the maximal *m* such that there exist functions *f*1, …, *f**m* ∈ C with ∣⟨*f**i*, *f**j*⟩∣ ≤ 1/*m* for all *i* ≠ *j* . As a corollary of Lemma [lemma:transpose] and Lemma [lemma:Johnson], we get the following. Let $\mathcal C$ be a class of functions mapping $\AA$ to { − 1, 1}. Let $\mathrm{SQdim}(\mathcal C) = m$. Let *f*1
arxiv_0000147
The Orthologic of Epistemic Modals ================================== ### Forthcoming in *Journal of Philosophical Logic*. Epistemic modals have peculiar logical features that are challenging to account for in a broadly classical framework. For instance, while a sentence of the form *p* ∧ ◇¬*p* (‘*p*, but it might be that not *p*’) appears to be a contradiction, ◇¬*p* does not entail ¬*p*, which would follow in classical logic. Likewise, the classical laws of distributivity and disjunctive syllogism fail for epistemic modals. Existing attempts to account for these facts generally either under- or over-correct. Some theories predict that *p* ∧ ◇¬*p*, a so-called *epistemic contradiction*, is a contradiction only in an etiolated sense, under a notion of entailment that does not always allow us to replace *p* ∧ ◇¬*p* with a contradiction; these theories underpredict the infelicity of embedded epistemic contradictions. Other theories savage classical logic, eliminating not just rules that intuitively fail, like distributivity and disjunctive syllogism, but also rules like non-contradiction, excluded middle, De Morgan’s laws, and disjunction introduction, which intuitively remain valid for epistemic modals. In this paper, we aim for a middle ground, developing a semantics and logic for epistemic modals that makes epistemic contradictions genuine contradictions and that invalidates distributivity and disjunctive syllogism but that otherwise preserves classical laws that intuitively remain valid. We start with an *algebraic semantics*, based on ortholattices instead of Boolean algebras, and then propose a *possibility semantics*, based on partial possibilities related by compatibility. Both semantics yield the same consequence relation, which we axiomatize. We then show how to lift an arbitrary possible worlds model for a non-modal language to a possibility model for a language with epistemic modals. The goal throughout is to retain what is desirable about classical logic while accounting for the non-classicality of epistemic vocabulary. **Keywords:** epistemic modals, negation, connectives, ortholattices, algebraic semantics, possibility semantics Introduction ============ Exploration of epistemic modals in the last decades has shown that they do not fit easily into the framework of classical modal logic. Following Yalcin, we can characterize the problem as follows. There is strong evidence that sentences of the form *p* ∧ ◊¬*p* or ¬*p* ∧ ◊*p* (as well as variants that reverse the order) are contradictory: not only are they unassertable, but also they embed like contradictions across the board. To get a sense for the evidence, note the infelicity of the following: . [butler]. # The butler is the murderer, but he might not be.. # Suppose that the butler is the murderer, but he might not be.. #Everyone who is tall might not be tall. In light of these data, it is natural to consider a logic that makes sentences of the form *p* ∧ ◊¬*p* contradictory. The problem is that adding $ p\wedge\lozenge \neg p\vDash \bot$ as an entailment to any modal logic that extends classical logic immediately yields the entailment $\lozenge \neg p\vDash \neg p$. But ◊¬*p* plainly does not entail ¬*p*. That the butler might not be the murderer does not entail that the butler *is not* the murderer. A profusion of responses to this problem have been developed in recent years. These responses either reject the assumption that *p* ∧ ◊¬*p* is truly a contradiction or hold that it is a contradiction only under a non-classical notion of $\vDash$, one that blocks the reasoning above. For a prominent example, consider the dynamic system of, wherein *p* ∧ ◊¬*p* is a contradiction,[1](#fn1) but conjunction and entailment are interpreted non-classically, so that we cannot conclude that $\lozenge \neg p\vDash \neg p$ (). The non-classicality of this approach, however, goes very deep: for instance, the classical laws of non-contradiction and excluded middle also fail in the dynamic system, but we know of no evidence for their failure from modal language. Our goal in this paper is to develop a more minimalist non-classical approach to epistemic modals that validates $ p\wedge\lozenge\neg p\vDash \bot$ without validating $\lozenge \neg p\vDash \neg p$. In particular, we block the inference from $ p\wedge\lozenge\neg p\vDash \bot$ to $\lozenge \neg p\vDash \neg p$ by treating negation, algebraically speaking, as an *orthocomplementation* (the complement operation characteristic of ortholattices) but not necessarily a *pseudocomplementation* (the complement operation of Heyting and Boolean algebras, obeying the law that *a* ∧ *b* = 0 implies *b* ≤ ¬*a*). However, we aim to depart from classical logic in a minimal way, by invalidating only those classical inference patterns that have intuitive counterexamples involving epistemic modals. We note in particular that those intuitive counterexamples always involve combinations of sentences with different levels of epistemic iteration: for example, when *p* itself is modal-free, *p* ∧ ◊¬*p* conjoins a sentence *p* with no epistemic modality and a sentence ◊¬*p* with one level of epistemic modality. Hence we develop a system that is fully classical when reasoning with sentences of the same “epistemic level.” On this picture, classical reasoning is dangerous only when it crosses different epistemic levels. We start in § [des] by drawing out intuitive desiderata for a logic of epistemic modals. We then characterize an *epistemic orthologic* that meets those desiderata, first algebraically in § [algebraic] and then using a possibility semantics in § [PossSem]. Both semantics yield the same consequence relation. We provide a sound and complete axiomatization for this epistemic orthologic, following the call in to axiomatize proposed consequence relations in natural language semantics. In § [Epistemicization], we show how to lift an arbitrary possible worlds model for a non-modal language to a possibility model for a language with epistemic modals, which yields an implementation of our possibility semantics built on the more familiar primitives of possible worlds semantics. Finally, we compare our approach to existing approaches in § [Comparisons] and conclude in § [Conclusion]. An associated online repository (<https://github.com/wesholliday/ortho-modals>) contains a Jupyter notebook with code to use our logic with the Natural Language Toolkit’s interface to the Prover9 theorem prover and Mace4 model builder, as well as a Jupyter notebook with code for using the possibility semantics of § [PossSem]. Desiderata ========== We will begin by bringing out the key desiderata for the logic of epistemic modals. In particular, our goal in this section and the next is to identify properties of an entailment relation appropriate to a language with epistemic modality. As a methodological preliminary, let us say a bit more about the kind of entailment relation we aim to characterize. Our target consequence relation is a classical one in the sense that it aims to capture universal preservation of truth. This has two upshots worth noting. First, *φ* entails *ψ* iff *φ* is semantically equivalent to *φ* ∧ *ψ*, where two formulas *φ*, *ψ* are semantically equivalent iff for any formulas *χ* and *ρ*, $\chi[\nicefrac{\varphi}{\rho}]$ is true iff $\chi[\nicefrac{\psi}{\rho}]$ is true (where $\chi[\nicefrac{\varphi}{\rho}]$ is the sentence that results from uniformly substituting *φ* for *ρ* in *χ*; we will ignore potential issues about hyperintensionality here). Second, if *φ* entails *ψ*, then the probability of *ψ* must be at least that of *φ* (since on any natural notion of probability, the probability of semantically equivalent propositions will be equal, and the probability of a conjunct will always be at least as great as the probability of a conjunction). We highlight these two upshots of the classical consequence relation because they yield two ways to empirically test our target notion of entailment. For a brief illustration, consider the inference from *φ* to □*φ*, which is valid in many logics for epistemic modals. We will invalidate this inference, and indeed, it is invalid in a logic that aims to capture inferences that universally preserve truth. For if *φ* entailed □*φ* in this sense, then the probability of □*φ* would always have to be at least as great as that of *φ*, but this is wrong. Consider a fair coin that was just flipped; suppose we do not know the outcome of the flip. Plausibly, the probability that it landed heads is around.5, but the probability that it *must* have landed heads is much lower, around 0. Second, since the present notion of entailment contraposes (given classical assumptions about De Morgan’s laws and negation that we will not question here), if $\varphi\vDash\Box \varphi$, then $\neg\Box\varphi\vDash\neg\varphi$; but the latter is plainly implausible, since ¬□*φ* is equivalent to ◊¬*φ*, and clearly ◊¬*φ* does not entail ¬*φ* (from ‘It might not be raining’ it does not follow that it is not raining). Finally, given that most accept that □*φ* entails *φ*, if *φ* also entailed □*φ* then they would be logically equivalent. But then they would be everywhere substitutable for one another, salva veritate. However, they are clearly not: ¬*φ* and ¬□*φ* are not equivalent (compare ‘it’s not raining’ to ‘it’s not the case that it *must* be raining’). There are other kinds of entailment relations that we can also characterize: for instance, prominently, we might develop a logic characterizing the inferences that preserve rational acceptance in all cases (,,, ). These logics are well worth studying but are not our central target here. In the conclusion, we briefly discuss how to characterize a logic of acceptance in our framework. In such a logic—unlike ours—the inference from *φ* to □*φ* might reasonably be seen to be valid. However, we do not see any sense in which our project (characterizing a logic of truth preservation) and the project of characterizing a logic of acceptance are in conflict. There is a purely terminological question about which of these deserves the name of ‘logic’, a question whose interest we do not see; we are happy to use ‘logic’ in the traditional way for the set of truth-preserving inferences, but if others prefer different usages, feel free to substitute your preferred terminology. There could also be a substantive question if someone thought that we should *only* characterize the logic of acceptance and that the logic of truth-preservation is uninteresting or irrelevant. But such a position has little merit and (unsurprisingly) has not been defended in the literature to our knowledge. For even if we had an adequate characterization of the logic of acceptance, there would remain the further question of which inferences preserve probability and of which sentences are always substitutable salva veritate. These questions are not answered by a logic of acceptance (since we might have, e.g., that *φ* entails □*φ* on that notion, but the inference does not preserve probability, nor are *φ* and □*φ* substitutable). So everyone interested in core semantic notions needs to characterize the logic of truth preservation. It is natural to think that a logic of rational acceptance will then supervene on this logic, together with a notion of rational acceptance (and that is indeed how such a logic is defined in, e.g., ). But even if you reject this supervenience claim, you still need a logic of truth preservation, since it is not plausible that the logic of truth preservation supervenes on the logic of rational acceptance. We take these points to be uncontroversial, but questions about our target notion of entailment have frequently been raised in conversation and are particularly pertinent to the literature on epistemic modals (where it is often claimed that *φ* entails □*φ*—which, again, we are happy to accept in one sense, but not in our target sense of entailment). Let us now turn to our substantive desiderata for our target logic. Epistemic contradictions ------------------------ First, we give more evidence that sentences of the form *p* ∧ ◊¬*p* and ¬*p* ∧ ◊*p* are indeed contradictory, drawing on observations in,,, and. Yalcin calls sentences of this form *epistemic contradictions*. calls epistemic contradictions and variants that reverse their order (that is, sentences of the form ◊¬*p* ∧ *p* or ◊*p* ∧ ¬*p*) *Wittgenstein sentences*. Although this is somewhat controversial (and runs counter to the prominent dynamic approach mentioned above), we think that order does not matter to our central points, so we intend the claims we make here to apply to all Wittgenstein sentences. However, for brevity, we often use epistemic contradictions as a stand-in for all Wittgenstein sentences. To start, note that Wittgenstein sentences are generally unassertable. It is hard to think of circumstances in which any of the following can be asserted: .. #It’s raining, but it might not be.. #It might be raining, but it isn’t.. #The cat might be under the bed, but she is on the couch.. #I’ll make lasagna, but I might not make lasagna.. #Sue isn’t the winner but she might be. Unassertability, however, could plausibly be due to either pragmatic or semantic factors. Indeed, these sentences are superficially similar to Moore sentences (as observed), and the latter are standardly explained on a pragmatic basis: you cannot assert, say, ‘It’s raining, but I don’t know that’, because you cannot know this, on pain of contradiction, and you need to know what you assert.[2](#fn2) To test whether the unassertability of Wittgenstein sentences is pragmatic or semantic, we can look at how they embed in various environments, comparing their behavior with embedded Moore sentences on the one hand and embedded classical contradictions on the other, building on the structure of arguments of. So consider the three-way comparisons below, with first the Moore sentence, then the Wittgenstein sentence, and finally the contradiction: .. Suppose Sue is the winner but I don’t know it.. # Suppose Sue is the winner but she might not be.. # Suppose Sue is the winner and isn’t the winner. .. If Sue is the winner but I don’t know it, I will chastise her for not telling me.. # If Sue is the winner but she might not be, I will chastise her for not telling me.. # If Sue is the winner and isn’t the winner, I will chastise her for not telling me. . [drekm]. It could be that Sue is the winner but I don’t know it.. # It could be that Sue is the winner and might not be.. # It could be that Sue is the winner and isn’t the winner. . [drek]. Either Sue is the winner but I don’t know it, or she isn’t the winner but I don’t know it.. # Either Sue is the winner but might not be, or she isn’t the winner but might be.. # Either Sue is the winner and isn’t the winner, or she isn’t the winner and is the winner. . [definiteec]. The winner is, for all I know, not the winner.. # The winner might not be the winner.. # The winner isn’t the winner. . [indefiniteec]. Someone is the winner and for all I know isn’t the winner.. # Someone is the winner and might not be the winner.. # Someone is the winner and isn’t the winner. In all these cases, the first variant, embedding a Moore sentence, is felicitous, supporting the idea that the infelicity of Moore sentences is pragmatic. By contrast, both of the latter variants are infelicitous, suggesting that the infelicity of Wittgenstein sentences is not pragmatic, after all, but instead due to the fact that they are genuine contradictions. This is not to say that Wittgenstein sentences and classical contradictions pattern in exactly the same ways. It is well known that classical contradictions are often judged by ordinary speakers to be interpretable (‘Sue is the winner and isn’t the winner’ could be used to express that Sue is the winner in one sense but not in some other sense), and Wittgenstein sentences, embedded or not, can similarly be coerced (though perhaps with more difficulty) into slightly different communicative functions. Nonetheless, their behavior across the board is strikingly like that of contradictions. We conclude that they are indeed contradictions and that any residual differences between *p* ∧ ¬*p* and *p* ∧ ◊¬*p* should be accounted for on the basis of their underlying compositional semantics. This presents us with our first desideratum: a logic on which Wittgenstein sentences are contradictions. That is, we want $ \varphi\wedge\lozenge \neg \varphi\vDash \bot$ (and likewise $ \neg \varphi\wedge\lozenge \varphi\vDash \bot$, $\lozenge \varphi\wedge \neg \varphi\vDash \bot$, and $\lozenge \neg \varphi\wedge \varphi\vDash \bot$). In fact, we need more than this: we need a system in which *φ* ∧ ◇¬*φ* can always be replaced with *φ* ∧ ¬*φ* *salva veritate*. Otherwise, we would not have an immediate account of the data above, even if $ \varphi\wedge\lozenge\neg \varphi\vDash \bot$. This is worth mentioning since some systems, like domain semantics (), predict that $ \varphi\wedge\lozenge \neg \varphi\vDash \bot$ but not that *φ* ∧ ◇¬*φ* can always be replaced with *φ* ∧ ¬*φ* *salva veritate* and hence miss some of the data above (see § [Comparisons] for further discussion). Call a logic an *-logic* (suggesting *epistemic*) if it satisfies these first two desiderata. We have already seen why an -logic is hard to obtain: given classical assumptions, treating Wittgenstein sentences as contradictions would make ◊¬*p* entail ¬*p*. So, in particular, we want a logic that is an -logic but that does not yield the absurd conclusion that $\lozenge\neg p\vDash\neg p$. Lest this problem be treated as having to do essentially with the peculiarities of ‘and’, we should note that the problem can equally be formulated without involving conjunction at all. If we take the evidence above to show that *p* and ◊¬*p* are jointly inconsistent, then what we need is a logic on which $\{\lozenge\neg p,p\}\vDash\bot$. Classical reasoning would yield $\lozenge \neg p\vDash\neg p$. So we need a logic that blocks this reasoning. Of course, evidence that *p* and ◊¬*p* are jointly inconsistent is somewhat less direct than the evidence that *p* ∧ ◊¬*p* is itself inconsistent. After all, we cannot embed a pair of propositions under a unary sentential operator, as we did with the corresponding conjunction. But the sentences above already yield indirect evidence that it is not just the conjunction that is inconsistent, but also its conjuncts are jointly inconsistent, since the infelicity persists when the conjuncts are distributed across the restrictor and scope of quantifiers, as in [definiteec] and [indefiniteec]. Similar evidence comes from pairs of attitude ascriptions, where again the infelicity persists when the conjuncts in question are distributed under distinct attitude predicates: .. # Liam believes that Sue is the winner. Liam also believes that Sue might not be the winner.. # Suppose that Sue is the winner. Suppose, further, that Sue might not be the winner.. # I hope Susie wins. I also hope she might not win. Finally, another way to see the puzzle of epistemic contradictions, also emphasized by, is to approach it from intuitions about meaning and synonymy rather than intuitions about logic. It is very natural to think that ◊*p* means something like ‘For all we know, *p* is true’. This is roughly the meaning it has in the influential approach of, for instance. But if that is right, then *p* ∧ ◊¬*p* should have the meaning of a Moore sentence and should be embeddable just like Moore sentences are. But the examples above clearly show that this is not the case. So apparently ◊*p* does not mean even roughly the same thing as ‘For all we know, *p* is true’. But then what does it mean? Distributivity -------------- We have seen one reason to think that an adequate treatment of epistemic modals calls for revision to classical logic. A second reason, brought out in, has to do with the distributive law, according to which *φ* ∧ (*ψ* ∨ *χ*) is logically equivalent to (*φ* ∧ *ψ*) ∨ (*φ* ∧ *χ*) for any sentences *φ*, *ψ*, *χ*. This is, of course, a law of classical logic. But while it is intuitively valid for modal-free fragments, it is not intuitively valid once epistemic modals are on the scene; for compare: .. [suea] Sue might be the winner and she might not be, and either she is the winner or she isn’t.. [suec] #Sue might not be the winner and she is the winner, or else Sue might be the winner and she isn’t the winner. While [suea] sounds simply like a long-winded avowal of ignorance, [suec] is very strange. But if distributivity is valid, then [suea] entails [suec]! (In fact, given distributivity, [suea] and [suec] are logically *equivalent* provided that *p* entails ◇*p*, on which more shortly.) Such examples motivate us to look for a logic that invalidates distributivity for the modal fragment. The ramifications of the failure of distributivity are plausibly very wide. For instance, these failures might also help explain a puzzling observation, which is based on discussion in and related discussion in. Consider a fair lottery where at least one ticket will win, but not all will win. The winning ticket(s) have been drawn, but we do not know which tickets won. Then the following seems true: . Every ticket might not be a winning ticket.[everymight] But we cannot conclude: . Some winning ticket might not be a winning ticket.[winmight] The winning tickets, however, are among the tickets, and so if every ticket might not be a winning ticket, it seems that we should be able to conclude that some winning ticket might not be a winning ticket. We can reconstruct this inference as follows. Suppose we have rigid designators for all the tickets: *t*1, …, *t**n*. Then [everymight] can be taken to be the conjunction ◊¬*W*(*t*1) ∧ … ∧ ◊¬*W*(*t**n*), where *W* stands for ‘is a winner’. We also know the disjunction *W*(*t*1) ∨ … ∨ *W*(*t**n*). Putting these two together, we have $$\big(\lozenge \neg W(t\_1)\wedge \dots \wedge\lozenge \neg W(t\_n)\big)\wedge \big(W(t\_1)\vee\dots\vee W(t\_n)\big);$$ in short, every ticket might not be a winner, but some ticket is a winner. Distributivity would allow us to infer $$\big(W(t\_1)\wedge \lozenge \neg W(t\_1)\big) \vee\dots \vee \big(W(t\_n)\wedge \lozenge \neg W(t\_n)\big),$$ that is, ‘Some winning ticket might not be a winning ticket’, as in [winmight]. However, if distributivity is not valid, we cannot conclude [winmight] from [everymight]. We will set aside quantification in this paper, but this example illustrates the importance of distributive reasoning—and identifying its failures—across a variety of examples involving epistemic modals. Disjunctive syllogism --------------------- A closely related point is that disjunctive syllogism intuitively fails for epistemic modals (). Disjunctive syllogism says that $\{p\vee q, \neg q\}\vDash p$. Hence the following inference, varying an example from Klinedinst and Rothschild, is valid if disjunctive syllogism is: .. Either the dog is inside or it must be outside.[doga]. It’s not the case that the dog must be outside.[dogb]. Therefore, the dog is inside. [dogc] This inference is intuitively invalid. For [doga] feels a lot like the corresponding tautology: . The dog is inside or outside. And indeed, in any -logic that validates De Morgan’s laws (plus double negation elimination and duality for ◊ and □), [doga] is a logical truth: *p* ∨ □¬*p* is equivalent to ¬(¬*p* ∧ ◊*p*), which, if the logic is an -logic, is equivalent to ¬⊥. Again, this seems intuitive: that is, sentences of the form *p* ∨ □¬*p* do feel a lot like the corresponding tautologies *p* ∨ ¬*p*, as in [fatima] (an intuition that our system will capture by predicting that *p* ∨ □¬*p* is indeed a tautology): .[fatima]. Either Fatima is home or else she must be somewhere else.. Either John must be the culprit or else it isn’t him. Thus, we know [doga] just on the basis of the logic of epistemic modals. And [dogb] (which is equivalent, by duality, to ‘the dog might be inside’) can be true without the dog in fact being inside. (Given that [doga] has the status of a logical truth, and given duality, this is just the point, again, that ◊*p* does not entail *p*.) Hence disjunctive syllogism is not valid for a fragment including epistemic modals. Given very weak assumptions, disjunctive syllogism follows from distributivity. So the intuitive failure of disjunctive syllogism gives us another reason to invalidate distributivity. Orthomodularity --------------- Another law of classical logic—and indeed of the weaker system of quantum logic (see, e.g., )—that we have reason to invalidate is orthomodularity, according to which if $\varphi\vDash \psi$, then $ \psi\vDash \varphi\vee(\neg \varphi\wedge \psi) $.[3](#fn3) It is easy to see that this is classically valid (indeed, we have $ \psi\vDash \varphi\vee(\neg \varphi\wedge \psi) $ in classical logic, though the assumption of $\varphi\vDash\psi$ is needed in quantum logic). But, building on observations in, we argue that it is invalid for epistemic modals.[4](#fn4) We assume, again, that $p\vDash \lozenge p$. By contraposition of $\vDash$ and duality, this follows from the assumption that ‘must’ is *factive*, i.e., that $\Box p\vDash p$, which is widely (though not universally) accepted; see for a variety of arguments for factivity, and see Footnote [ptomight] for direct arguments that $p\vDash\lozenge p$.[5](#fn5) Given that $p\vDash \lozenge p$, orthomodularity says that $\Diamond p\vDash p\vee (\neg p\wedge\Diamond p)$. But, if our logic is an -logic, the right disjunct of *p* ∨ (¬*p* ∧ ◇*p*) is contradictory, so the disjunction is equivalent to *p*; likewise, (¬*p* ∧ ◇*p*) ∨ *p* is predicted to be equivalent to *p*. For example, consider [dorrpair]: . [dorrpair]. Either it isn’t the butler but it might be, or else it’s the butler.[dorr]. It’s the butler. [dorrb] Intuitions here are somewhat unclear, because [dorr] sounds infelicitous—as we would expect a disjunction with a contradictory disjunct to sound. Nonetheless, as observe, [dorr] feels intuitively equivalent to [dorrb]. However, orthomodularity predicts that ‘It might be the butler’ entails [dorr]. Hence, given the contradictoriness of ¬*p* ∧ ◊*p*, orthomodularity would let us infer *p* from ◊*p*, which again is unacceptable. Distributivity entails orthomodularity, so the intuitive failure of the latter gives us yet another reason to reject distributivity. Conservativity -------------- Our last desideratum is a kind of methodological conservatism. Where there is not a specific argument for the failure of a classically valid inference pattern, we should validate it, on the presumption that it is indeed valid. It is of course harder to argue for the validity of a schema than against it; arguing that a principle is invalid only requires a single convincing counterexample, whereas we cannot ever look at every instance of a schema by way of arguing for its validity. Still, we think that as a methodological principle it is sensible to proceed by minimally altering classical logic in light of counterexamples and examining the result. Thus, we will look for a minimal variation on classical (modal) logic that can meet the desiderata above—that is, which is an -logic and invalidates distributivity and even orthomodularity for the modal fragment, but still validates these, along with all other classical laws, for the non-modal fragment. Across the modal fragment, it will retain many of the central classical principles, including non-contradiction, excluded middle, the introduction and elimination rules for conjunction and disjunction, and De Morgan’s laws, all of which have been invalidated by various theories of epistemic modals. Moreover, our logic will be fully classical over parts of the modal fragment restricted to the same “epistemic level”—essentially, fragments built up out of sentences with uniform levels of nesting of modal operators—limiting non-classicality to the part of the modal fragment that involves combinations of sentences across different epistemic levels. To be clear, we do not have an argument that the logic we present is *the* minimal variation on classical logic that is an -logic and invalidates the patterns that intuitively fail for epistemic modals. There are -logics which validate strictly more classical patterns than ours; and there may turn out to be a case for adopting some such extension of our logic (as we will briefly discuss in § [EpExtSection]).[6](#fn6) But as we will discuss in § [Comparisons], our approach preserves a lot more of classical logic than any other -logic we know of in the literature. Among other things, we hope that our discussion will spur exploration of other -logics extending ours. Algebraic semantics[algebraic] ============================== In this section, we begin our development of formal semantics for epistemic modals. We start with a rather abstract *algebraic semantics*. A model in such a semantics associates with each formula of the formal language an element in an *algebra of propositions*. This is familiar from *possible world semantics* for classical propositional logic, wherein a model associates with each formula a set of possible worlds, i.e., an element of the powerset algebra P(*W*) arising from the set *W* of worlds. The meanings of ‘and’, ‘or’, and ‘not’ are given by the operations of intersection, union, and complementation, respectively, in the powerset algebra P(*W*). The only difference between possible world semantics and algebraic semantics for classical propositional logic is that in the latter, we allow Boolean algebras other than powerset algebras. Instead of associating each formula with an element of P(*W*) for some set *W*, we associate with each formula an element of an arbitrary Boolean algebra *B*, which comes equipped with operations ¬,  ∧ , and  ∨  used to interpret ‘not’, ‘and’, and ‘or’ (see § [AlgOrtho]). The powerset algebra P(*W*) is just one concrete example of a Boolean algebra. But for the purposes of classical propositional logic, there is no loss of generality in working only with powerset algebras. Possible world semantics for normal modal logic can also be viewed as a concrete version of an algebraic semantics. A set *W* together with a binary accessibility relation *R* on *W* gives us not only the powerset algebra P(*W*) but also a modal operator ◇ on P(*W*) defined for *A* ∈ P(*W*) by ◇*A* = {*w* ∈ *W* ∣ ∃*v* : *w**R**v*and *v* ∈ *A*}. A more abstract algebraic semantics uses an arbitrary Boolean algebra equipped with a modal operator, called a *Boolean algebra with operator* (BAO). In the context of modal logic, there *is* a loss of generality in working only with powerset algebras, as not all normal modal logics can be given possible world semantics as above (see and references therein). But the normal modal logics discussed in connection with natural language semantics typically can be handled by possible world semantics. The logic we are after in this paper is *non-classical*. Thus, we cannot use powerset algebras, as in possible world semantics, or Boolean algebras more generally, as in algebraic semantics for classical logic. Instead, we will use a more general class of algebras, known as *ortholattices*, in which it is possible to invalidate classical laws such as distributivity. We will add modal operators to ortholattices to give an algebraic semantics for an *epistemic orthologic*. Then in § [PossSem], we will give a more concrete semantics, known as *possibility semantics*, that stands to ortholattice semantics as possible world semantics stands to Boolean algebraic semantics. Review of algebraic semantics for orthologic -------------------------------------------- In this section, we review ortholattices and their associated *orthologic*. Ortholattices have long been important objects of study in lattice theory (see ), examples of which arise as algebras of events in quantum mechanics (). We will argue that ortholattices also arise as algebras of propositions in a language with epistemic modals. Those familiar with basic lattice theory and ortholattices may skip straight to § [AddEpistemics1], where we add epistemic modals to the picture. ### Ortholattices First, we recall the definition of a lattice, where we think of *A* as a set of propositions and  ∨  and  ∧  as the operations of disjunction and conjunction, respectively. [BoundLat] A *lattice* is a tuple *L* = ⟨*A*,  ∨ ,  ∧ ⟩ where *A* is a nonempty set and  ∨  and  ∧  are binary operations on *A* such that the following equations hold for all *a*, *b*, *c* ∈ *A* and  ∘  ∈ { ∨ ,  ∧ }: 2.75in * idempotence: *a* ∘ *a* = *a*; * commutativity: *a* ∘ *b* = *b* ∘ *a*; 2.75in * associativity: *a* ∘ (*b* ∘ *c*) = (*a* ∘ *b*) ∘ *c*; * absorption: *a* ∧ (*a* ∨ *b*) = *a* ∨ (*a* ∧ *b*) = *a*. We define a binary relation  ≤  on *A*, called the *lattice order of *L**, by: *a* ≤ *b* if and only if *a* = *a* ∧ *b*. It is easy to check that *a* ≤ *b* is also equivalent to *a* ∨ *b* = *b* (using absorption and commutativity). Moreover,  ≤  is a partial order (i.e., reflexive, transitive, and antisymmetric). Indeed, let us recall the order-theoretic definition of a lattice as a partially ordered set. Given a partial order  ≤  on a set *A*, * an *upper bound* of a subset *X* ⊆ *A* is a *y* ∈ *A* such that *x* ≤ *y* for every *x* ∈ *X*; * a *least upper bound* of *X* is an upper bound *y* of *X* such that *y* ≤ *z* for every upper bound *z* of *X*. If there is a least upper bound of *X*, it is unique by the antisymmetry of  ≤ . The notion of a *greatest lower bound* is defined dually. Then a partially ordered set is a lattice if every nonempty finite subset has both a least upper bound and a greatest lower bound. From such a partially ordered set, we obtain a lattice ⟨*A*,  ∨ ,  ∧ ⟩ in the sense of Definition [BoundLat] by defining *a* ∨ *b* to be the least upper bound of {*a*, *b*} and *a* ∧ *b* to be the greatest lower bound of {*a*, *b*}. Conversely, given a lattice in the sense of Definition [BoundLat], the partially ordered set ⟨*A*,  ≤ ⟩ with  ≤  defined as in Definition [BoundLat] is a lattice in the order-theoretic sense. Hence we may think of lattices in terms of either the equational or order-theoretic definition. Finally, a lattice is *complete* if every subset has both a least upper bound and a greatest lower bound. Every finite lattice is complete but there are infinite lattices that are not complete. We will assume that our lattices have bounds, corresponding to the contradictory proposition ⊥ and the trivial proposition ⊤. A *bounded lattice* is a tuple *L* = ⟨*A*,  ∨ , 0,  ∧ , 1⟩ where ⟨*A*,  ∨ ,  ∧ ⟩ is a lattice and 0 and 1 are elements of *A* such that for all *a* ∈ *A*, we have * boundedness: *a* ∨ 0 = *a* and *a* ∧ 1 = *a*. Order-theoretically, a lattice is bounded if its order has a minimum element and a maximum element. Adding an operation ¬ for negation finally brings us to the definition of an ortholatticce. [OrtholatticeDef] An *ortholattice* is a tuple ⟨*A*,  ∨ , 0,  ∧ , 1, ¬⟩ where ⟨*A*,  ∨ , 0,  ∧ , 1⟩ is a bounded lattice and ¬ is a unary operation on *A*, called an *orthocomplementation*, that satisfies: 1. [OrthoDef1] complementation: for all *a* ∈ *A*, *a* ∨ ¬*a* = 1 and *a* ∧ ¬*a* = 0; 2. [OrthoDef2] involution: for all *a* ∈ *A*, ¬¬*a* = *a*; 3. [OrthoDef3] order-reversal: for all *a*, *b* ∈ *A*, if *a* ≤ *b*, then ¬*b* ≤ ¬*a*. An equivalent definition replaces [OrthoDef3] with (either one of) *De Morgan’s laws*: * for all *a*, *b* ∈ *A*, ¬(*a* ∨ *b*) = ¬*a* ∧ ¬*b*; * for all *a*, *b* ∈ *A*, ¬(*a* ∧ *b*) = ¬*a* ∨ ¬*b*. The difference between arbitrary ortholattices and the Boolean algebras of classical logic is that ortholattices need not obey the distributive law, as we will see in Examples [OrthoExample] and [KeyEx]. A *Boolean algebra* is a tuple ⟨*A*,  ∨ , 0,  ∧ , 1, ¬⟩ where ⟨*A*,  ∨ , 0,  ∧ , 1⟩ is a bounded lattice, ¬ is a unary operation on *A* satisfying complementation as in Definition [OrtholatticeDef].[OrthoDef1], and the *distributive laws* hold: * for all *a*, *b*, *c* ∈ *A*, *a* ∧ (*b* ∨ *c*) = (*a* ∧ *b*) ∨ (*a* ∧ *c*); * for all *a*, *b*, *c* ∈ *A*, *a* ∨ (*b* ∧ *c*) = (*a* ∨ *b*) ∧ (*a* ∨ *c*).[7](#fn7) It is straightforward to prove that every Boolean algebra is an ortholattice. The following weakening of distributivity is important in the study of ortholattices arising in quantum mechanics (see, e.g., ). An *orthomodular lattice* is an ortholattice satisfying the orthomodular law: * for all *a*, *b* ∈ *A*, *a* ∨ (¬*a* ∧ (*a* ∨ *b*)) = *a* ∨ *b*. Or equivalently, for all *a*, *c* ∈ *A*, if *a* ≤ *c*, then *c* ≤ *a* ∨ (¬*a* ∧ *c*) (equivalently, *c* = *a* ∨ (¬*a* ∧ *c*)). Associated with the difference between Boolean algebras and arbitrary ortholattices concerning distributivity is a difference concerning the interaction of conjunction, contradiction, and negation, given by the following standard fact. [PseudoLem] In a Boolean algebra, ¬ is *pseudocomplementation*: for all *a*, *b* ∈ *A*, *a* ∧ *b* = 0 implies *b* ≤ ¬*a*, so ¬*a* is the greatest element with respect to  ≤  of the set {*b* ∈ *A* ∣ *a* ∧ *b* = 0}.[8](#fn8) Crucially, the orthocomplementation in an ortholattice is not necessarily pseudocomplementation. In fact, the orthocomplementation being pseudocomplementation implies that the ortholattice is Boolean. [PseudoToBoole] For any ortholattice *L*, the following are equivalent: 1. [PseudoToBoole1] *L* is a Boolean algebra; 2. [PseudoToBoole1.5] *L* is distributive; 3. [PseudoToBoole2] the orthocomplementation operation in *L* is pseudocomplementation. The equivalence of [PseudoToBoole1] and [PseudoToBoole1.5] is straightforward, as noted above. From [PseudoToBoole1.5] to [PseudoToBoole2], we show that distributivity implies pseudocomplementation over ortholattices. Suppose *a* ∧ *b* = 0, so *a* ∧ *b* ≤ ¬*b*. Then since *a* ∧ ¬*b* ≤ ¬*b*, we have (*a* ∧ *b*) ∨ (*a* ∧ ¬*b*) ≤ ¬*b*. Finally, by distributivity, *a* ∧ (*b* ∨ ¬*b*) ≤ (*a* ∧ *b*) ∨ (*a* ∧ ¬*b*), so *a* ≤ ¬*b*. From [PseudoToBoole2] to [PseudoToBoole1.5], we show that pseudocomplementation implies distributivity over ortholattices. First note that pseudocomplementation implies disjunctive syllogism: (*x* ∨ *y*) ∧ ¬*x* ≤ *y*. For (*x* ∨ *y*) ∧ ¬*x* ∧ ¬*y* ≤ 0 by De Morgan’s laws and complementation, so (*x* ∨ *y*) ∧ ¬*x* ≤ *y* by pseudocomplementation and involution. Now for distributivity, we have $$\begin{aligned} && a \wedge( b\vee c)\wedge (\neg a \vee \neg b)\leq a\wedge(b\vee c)\wedge \neg b\leq a\wedge c \quad\mbox{using disjunctive syllogism twice}\\ &\Rightarrow & a\wedge(b\vee c)\wedge(\neg a\vee\neg b)\wedge\neg (a\wedge c)\leq \neg(a\wedge c)\wedge (a\wedge c) \leq 0 \quad\mbox{} \\ &\Rightarrow& a\wedge (b\vee c) \leq \neg ((\neg a\vee\neg b)\wedge\neg (a\wedge c))\quad\mbox{by pseudocomplementation} \\ &\Rightarrow& a\wedge (b\vee c) \leq (a\wedge b)\vee (a \wedge c) \quad\mbox{by De Morgan's and involution}.\qedhere\end{aligned}$$ Thus, we obtain a three-way equivalence for any ortholattice *L* between being a Boolean algebra, being distributive, and having its orthocomplementation be pseudocomplementation. [OrthoExample] Figure [OrthoEx] shows Hasse diagrams of the ortholattices **O**6 and **MO**2. Recall that in a Hasse diagram of a lattice with lattice order  ≤ , a line segment going *upward* from *x* to *y* means that *x* ≤ *y* and there is no third element *z* with *x* ≤ *z* ≤ *y*. Observe that **O**6 is not orthomodular and hence not distributive. For *a* ≤ *b* and yet *a* ∨ (¬*a* ∧ *b*) = *a* ∨ 0 = *a* ≠ *b*. Also note that the orthocomplementation is not pseudocomplementation: ¬*a* ∧ *b* = 0 and yet $ b\not\leq \neg\neg a=a$. **MO**2 is orthomodular[9](#fn9) but it is not distributive: *a* ∧ (¬*a* ∨ *b*) = *a* ∧ 1 = *a* ≠ 0 = 0 ∨ 0 = (*a* ∧ ¬*a*) ∨ (*a* ∧ *b*). Also note that orthocomplementation is not pseudocomplementation: *a* ∧ *b* = 0 and yet $b\not\leq \neg a$. [->,>=stealth’,shorten >=1pt,shorten <=1pt, auto,node distance=2cm,semithick,every loop/.style=<-,shorten <=1pt] =[fill=gray!20,draw=none,text=black] (0) at (0,0) ; (a) at (-.75,.75) ; (b) at (.75,.75) ; (1l) at (-.75,1.5) ; (1r) at (.75,1.5) ; (new1) at (0,2.25) ; (new1) edge[-] node (1l); (new1) edge[-] node (1r); (1l) edge[-] node (a); (1r) edge[-] node (b); (a) edge[-] node (0); (b) edge[-] node (0); (1) at (6,2) ; (x) at (4,1) ; (y) at (5,1) ; (y’) at (7,1) ; (z) at (8,1) ; (0) at (6,0) ; (1) edge[-] node (y); (1) edge[-] node (y’); (1) edge[-] node (x); (1) edge[-] node (z); (x) edge[-] node (0); (y) edge[-] node (0); (y’) edge[-] node (0); (z) edge[-] node (0); [OrthoEx] ### Language and consequence We can use ortholattices to interpret a basic propositional logical language. [LDef] Let L be the language generated by the grammar *φ* :  :  = ⊤ ∣ *p* ∣ ¬*φ* ∣ (*φ* ∧ *φ*) where *p* belongs to a countably infinite set Prop of propositional variables. We define *φ* ∨ *ψ* :  = ¬(¬*φ* ∧ ¬*ψ*), a definition justified by the fact that ortholattices satisfy De Morgan’s laws and involution. Note that we use the same symbols for the connectives of L and the operations in ortholattices, trusting that no confusion will arise. As usual in algebraic semantics, we interpret propositional variables as elements of an algebra and then extend the interpretation to all formulas of the language recursively. A *valuation* on an ortholattice ⟨*A*,  ∨ , 0,  ∧ , 1, ¬⟩ is a map *θ* : Prop → *A*. Such a *θ* extends to *θ̃* : L → *A* by: *θ̃*(⊤) = 1, *θ̃*(¬*φ*) = ¬*θ̃*(*φ*), and *θ̃*(*φ* ∧ *ψ*) = *θ̃*(*φ*) ∧ *θ̃*(*ψ*). Also as usual, we say that *ψ* is a semantic consequence of *φ* if the semantic value of *φ* is always below the semantic value of *ψ* in the lattice order  ≤  (which one may now view as an entailment relation, just as the subset relation is viewed as an entailment relation between propositions in possible world semantics). [AlgCon0] Given a class **C** of ortholattices, we define the semantic consequence relation $\vDash\_\mathbf{C}$, a binary relation on L, as follows: $\varphi\vDash\_\mathbf{C}\psi$ if for all *L* ∈ **C** and valuations *θ* on *L*, we have *θ̃*(*φ*) ≤ *θ̃*(*ψ*), where  ≤  is the lattice order of *L*. We can similarly define a consequence relation between a set of premises on the left and a single conclusion on the right: $\Gamma\vDash\_\mathbf{C}\psi$ if for all *L* ∈ **C**, valuations *θ* on *L*, and *a* ∈ *L*, if *a* is a lower bound of {*θ̃*(*φ*) ∣ *φ* ∈ Γ}, then *a* ≤ *θ̃*(*ψ*). But for simplicity we will only consider finite sets of premises here, in which case a single premise on the left suffices given that L contains a conjunction interpreted as meet. ### Logic, soundness and completeness Axiomatizing the semantic consequence relation of Definition [AlgCon0] is straightforward. [][OrthoLogicDef] An *orthologic* is a binary relation  ⊢  on the set L of formulas such that for all *φ*, *ψ*, *χ* ∈ L: | | | | --- | --- | | 1. *φ* ⊢ ⊤; | 6. ¬¬*φ* ⊢ *φ*; | | 2. *φ* ⊢ *φ*; | 7. *φ* ∧ ¬*φ* ⊢ *ψ*; | | 3. *φ* ∧ *ψ* ⊢ *φ*; | 8. if *φ* ⊢ *ψ* and *ψ* ⊢ *χ*, then *φ* ⊢ *χ*; | | 4. *φ* ∧ *ψ* ⊢ *ψ*; | 9. if *φ* ⊢ *ψ* and *φ* ⊢ *χ*, then *φ* ⊢ *ψ* ∧ *χ*; | | 5. *φ* ⊢ ¬¬*φ*; | 10. if *φ* ⊢ *ψ*, then ¬*ψ* ⊢ ¬*φ*. | A *theorem* of the orthologic  ⊢  is a formula *φ* such that ⊤ ⊢ *φ*. As the intersection of orthologics is clearly an orthologic, there is a smallest orthologic, denoted or  ⊢ O. Note that with *φ* ∨ *ψ* defined as ¬(¬*φ* ∧ ¬*ψ*), we get the introduction and elimination rules for disjunction: * *φ* ⊢ *φ* ∨ *ψ*. * if *φ* ⊢ *χ* and *ψ* ⊢ *χ*, then *φ* ∨ *ψ* ⊢ *χ*. For the first, by rule 3, we have ¬*φ* ∧ ¬*ψ* ⊢ ¬*φ*, so by rule 10, we have ¬¬*φ* ⊢ *φ* ∨ *ψ*, which with rules 5 and 8 yields *φ* ⊢ *φ* ∨ *ψ*. For the second, if *φ* ⊢ *χ* and *ψ* ⊢ *χ*, then ¬*χ* ⊢ ¬*φ* and ¬*χ* ⊢ ¬*ψ* by rule 10, which implies ¬*χ* ⊢ ¬*φ* ∧ ¬*ψ* by rule 9, which implies *φ* ∨ *ψ* ⊢ ¬¬*χ* by rule 10 and hence *φ* ∨ *ψ* ⊢ *χ* by rules 6 and 8. However, we do *not* have pseudocomplementation or distributivity, either of which would collapse to classical logic (recall Proposition [PseudoToBoole]). Nor do we have what might be called *proof by cases with side assumptions*: * if *ξ* ∧ *φ* ⊢ *χ* and *ξ* ∧ *ψ* ⊢ *χ*, then *ξ* ∧ (*φ* ∨ *ψ*) ⊢ *χ*. This would allow the derivation of distributivity, since disjunction introduction and proof by cases with side assumptions yield *ξ* ∧ *φ* ⊢ (*ξ* ∧ *φ*) ∨ (*ξ* ∧ *ψ*)and *ξ* ∧ *ψ* ⊢ (*ξ* ∧ *φ*) ∨ (*ξ* ∧ *ψ*), so *ξ* ∧ (*φ* ∨ *ψ*) ⊢ (*ξ* ∧ *φ*) ∨ (*ξ* ∧ *ψ*). A completeness theorem can now be proved using standard techniques of algebraic logic. [AlgComp1] The logic O is sound and complete with respect to the class **O** of all ortholattices according to the consequence relation of Definition [AlgCon0]: for all *φ*, *ψ* ∈ L, we have *φ* ⊢ O*ψ* if and only if $\varphi\vDash\_\mathbf{O}\psi$. For soundness, it is easy to check that *φ* ⊢ O*ψ* implies $\varphi\vDash\_\mathbf{O}\psi$. For completeness, recall the construction of the *Lindenbaum-Tarski algebra *L* of O*: the underlying set of *L* is the set of all equivalence classes of formulas of L, where *φ* and *ψ* are equivalent if *φ* ⊢ *ψ* and *ψ* ⊢ *φ*; then let 0 = [¬⊤] and 1 = [⊤], and given equivalence classes [*φ*] and [*ψ*], define [*φ*] ∨ [*ψ*] = [*φ* ∨ *ψ*], [*φ*] ∧ [*ψ*] = [*φ* ∧ *ψ*], ¬[*φ*] = [¬*φ*] (that the choice of representatives does not matter is easily shown using the principles of ). Then where  ≤  is the lattice order of *L*, we have [*φ*] ≤ [*ψ*] iff *φ* ⊢ O*ψ*. It is easy to check that *L* is an ortholattice. Moreover, where *θ* is the valuation with *θ*(*p*) = [*p*] for each *p* ∈ Prop, an obvious induction shows *θ̃*(*φ*) = [*φ*] for each *φ* ∈ L. Now if $\varphi\not\vdash\_\mathsf{O}\psi$, then $[\varphi]\not\leq [\psi]$ and hence $\tilde{\theta}(\varphi)\not\leq \tilde{\theta}(\psi)$, so $\varphi\nvDash\_\mathbf{O}\psi$. [remarkfitch] A Fitch-style natural deduction system for the minimal orthologic in the signature ⊤, ¬,  ∧ ,  ∨  can be obtained from a Fitch-style natural deduction system for classical logic in the same signature (cf. ) by (i) restricting the rule of *Reiteration* so that the formula occurrence to be reiterated and its reiterates must belong to the same column and the same subproofs,[10](#fn10) (ii) keeping the introduction and elimination rules for  ∧  and  ∨  the same,[11](#fn11) and (iii) allowing ¬ Introduction and Reductio ad Absurdum to apply when there is a contradiction between a formula derived in a subproof and a formula previously derived in the column in which the subproof immediately occurs. Such a proof system is shown in Figure [FitchRules]. The motivation for restricting Reiteration can easily be seen when we consider epistemic modal propositions, to be added in the next section, as in the following example: $$\begin{nd} \hypo [1] {1} {\Diamond p} \hypo [2] {2} {(p\vee\neg p)} \open \hypo [3] {3} {p} \have [4] {4} {p\vee (\neg p\wedge\Diamond p)} \oi{3} \close \open \hypo [5] {5} {\neg p} \have [6] {6} {\Diamond p} \r{1} \have [7] {7} {\neg p \wedge \Diamond p} \ai{5,6} \have [8] {8} {p\vee (\neg p \wedge \Diamond p)} \oi{7} \close \have [9] {9} {p\vee (\neg p\wedge\Diamond p)} \oe{2,3-4, 5-8} \end{nd}$$ The application of Reiteration on line 6 is obviously suspect: we should not be allowed to reiterate the assumption that *might *p** into a subproof where we have just supposed *not* *p*! Moreover, if *q* is a genuine propositional variable, standing in for any proposition (cf. § [DistinguishSec]), including an epistemic proposition such as *might *p**, then we cannot accept the analogous proof with *q* in place of ◇*p*. However, if we restrict the Reiteration rule as suggested above, then all is well: where *φ* ⊢ FitchO*ψ* means that there is a Fitch-style proof of the conclusion *ψ* from the assumption *φ*, Theorem [AlgComp1] holds with  ⊢ FitchO in place of  ⊢ O. For soundness, an easy induction shows that if there is a subproof or proof beginning with *φ* and ending with *ψ*, then—thanks to the restriction on Reiteration—*ψ* is in fact a semantic consequence of *φ*, i.e., $\varphi\vDash\_\mathbf{O}\psi$. For completeness, it is easy to check that the principles of Definition [OrthoLogicDef], as well as the equivalence of *φ* ∨ *ψ* and ¬(¬*φ* ∧ ¬*ψ*), hold for  ⊢ FitchO, in which case the same style of proof as for Theorem [AlgComp1] applies. 2.25in $$\begin{nd} \have [\vdots] {1} {\vdots} \have [i] {2} {\varphi} \have [\vdots] {3} {\vdots} \have [j] {5} {\varphi} \lr{2} \end{nd}$$ 2.25in $$\begin{nd} \have [\vdots] {4} {\vdots} \have [i] {5} {\varphi\_s} \have [\vdots] {8} {\vdots} \have [j] {9} {\varphi\_t} \have [\vdots] {10} {\vdots} \have [k] {11} {\varphi\_1\wedge\varphi\_2}\ai{5,9} \end{nd}\quad\;\;\;$$ 2.25in $$\begin{nd} \have [\vdots] {4} {\vdots} \have [i] {5} {\varphi\_1\wedge\varphi\_2} \have [\vdots] {8} {\vdots} \have [j] {9} {\varphi\_s} \ae{5} \end{nd}\qquad\quad\,$$ 2.25in $$\begin{nd} \have [\vdots] {1} {\vdots} \have [i] {2} {\top} \topi{} \end{nd}\qquad\qquad\quad\,$$ 2.25in $$\begin{nd} \have [\vdots] {1} {\vdots} \have [i] {2} {\varphi\_s} \have [\vdots] {3} {\vdots} \have [j] {4} {\varphi\_1\vee\varphi\_2}\oi{2} \end{nd}\qquad\;\;\;$$ 2.25in $$\begin{nd} \have [\vdots] {} {\vdots} \have [i] {0} {\varphi\vee\psi} \have [\vdots] {1} {\vdots} \open \hypo [j] {2} {\varphi} \have [\vdots] {3} {\vdots} \have [k] {5} {\chi} \close \open \hypo [l] {7} {\psi} \have [\vdots] {8} {\vdots} \have [m] {9} {\chi} \close \have [n] {n} {\chi} \oe{0,2-5,7-9} \end{nd}$$ 2.25in $$\begin{nd} \have [\vdots] {0} {\vdots} \have [i] {3} {\psi} \have [\vdots] {} {\vdots} \open \hypo [j] {1} {\varphi} \have [\vdots] {4} {\vdots} \have [k] {5} {\neg\psi} \close \have [l] {6} {\neg\varphi} \ni{1-5,3} \end{nd}\,$$ 2.25in $$\begin{nd} \have [\vdots] {0} {\vdots} \have [i] {3} {\psi} \have [\vdots] {4} {\vdots} \open \hypo[j] {1} {\neg \varphi} \have [\vdots] {} {\vdots} \have [k] {5} {\neg\psi} \close \have [l]{6} {\varphi} \RAA{1-5,3} \end{nd}$$ 2.25in $$\begin{nd} \have [\vdots] {4} {\vdots} \have [i] {5} {\varphi} \have [\vdots] {6} {\vdots} \have [j] {7} {\neg\varphi} \have [\vdots] {8} {\vdots} \have [k] {9}{\psi} \ne{5, 7} \end{nd}\qquad\qquad\,$$ [FitchRules] Adding epistemic modality ------------------------- In this section, we extend the algebraic semantics of § [AlgOrtho] to interpret the modals ‘must’ and ‘might’. ### Modal and epistemic ortholattices We begin by extending ortholattices with a unary operation □. First, we define a baseline notion of a *modal* ortholattice and then add additional conditions for epistemic ortholattices. A *modal ortholattice* is a tuple ⟨*A*,  ∨ , 0,  ∧ , 1, ¬, □⟩ where ⟨*A*,  ∨ , 0,  ∧ , 1, ¬⟩ is an ortholattice and □ is a unary operation on *A* satisfying: * □(*a* ∧ *b*) = □*a* ∧ □*b* for all *a*, *b* ∈ *A*, and □1 = 1. For *a* ∈ *A*, we define ◇*a* = ¬□¬*a*. The following is easy to check. [DiamondLem] In any modal ortholattice, we have: * ◇(*a* ∨ *b*) = ◇*a* ∨ ◇*b* for all *a*, *b* ∈ *A*, and ◇0 = 0. Indeed, we could have defined modal ortholattices as algebras ⟨*A*,  ∨ , 0,  ∧ , 1, ¬, ◇⟩ where ⟨*A*,  ∨ , 0,  ∧ , 1, ¬⟩ is an ortholattice and ◇ is a unary operation on *A* satisfying the conditions of Lemma [DiamondLem]. But later it will turn out to be more convenient to have □ as our primitive. Now we can consider additional constraints on the □ operation, just as modal logic considers additional axioms on a □ modality (see, e.g., ). In particular, in order to view □ and ◇ as ‘must’ and ‘might’, we adopt two further constraints. First, corresponding to the factivity of ‘must’, we have the following constraint. A *T modal ortholattice* is a modal ortholattice also satisfying * □*a* ≤ *a* for all *a* ∈ *A*. Next comes the crucial constraint corresponding to Wittgenstein sentences being contradictions. An *epistemic ortholattice* is a T modal ortholattice also satisfying * : ¬*a* ∧ ◇*a* = 0 for all *a* ∈ *A*. By the involution property of ¬ in an ortholattice, is equivalent to *a* ∧ ◇¬*a* = 0; and by the commutativity of  ∧  in a lattice, it is also equivalent to ◇*a* ∧ ¬*a* = 0 (and ◇¬*a* ∧ *a* = 0), in contrast to the consistency of ◇*a* ∧ ¬*a* (and ◇¬*a* ∧ *a*) in some dynamic systems (e.g., in ; see ). Some philosophers of language have argued for *iteration principles* for ‘must’ and ‘might’, leading to the following additional constraints. An *S5 modal ortholattice* is a T modal ortholattice also satisfying: * □*a* ≤ □□*a* for all *a* ∈ *A*; * ◇*a* ≤ □◇*a* for all *a* ∈ *A*. A *S5 epistemic ortholattice* is an S5 modal ortholattice that is also an epistemic ortholattice. [->,>=stealth’,shorten >=1pt,shorten <=1pt, auto,node distance=2.5cm,semithick] =[fill=gray!20,draw=none,text=black] (0) at (0,0) ; (d) at (0,2) ; (a) at (-2.25,2) ; (Nc) at (2.25,2) ; (b) at (-2.25,4) ; (Nb) at (2.25,4) ; (c) at (-2.25,6) ; (Na) at (2.25,6) ; (Nd) at (0,6) ; (1) at (0,8) ; (d) edge[-] node (c); (d) edge[-] node (Na); (d) edge[-] node (0); (1) edge[-] node (Nd); (Nd) edge[-] node (a); (Nd) edge[-] node (Nc); (1) edge[-] node (c); (1) edge[-] node (Na); (a) edge[-] node (b); (b) edge[-] node (c); (Nc) edge[-] node (Nb); (Nb) edge[-] node (Na); (a) edge[-] node (0); (Nc) edge[-] node (0); (Na) edge[loop right,blue] node (Na); (d) edge[loop left,blue] node (d); (Nd) edge[loop left,blue] node (Nd); (1) edge[loop left,blue] node (1); (c) edge[loop left,blue] node (c); (a) edge[loop left,blue] node (a); (Nc) edge[loop right,blue] node (Nc); (0) edge[loop left,blue] node (0); (b) edge[->,blue,bend right] node (a); (Nb) edge[->,blue,bend left] node (Nc); [->,>=stealth’,shorten >=1pt,shorten <=1pt, auto,node distance=2.5cm,semithick] =[fill=gray!20,draw=none,text=black] (0) at (0,0) ; (d) at (0,2) ; (a) at (-2.25,2) ; (Nc) at (2.25,2) ; (b) at (-2.25,4) ; (Nb) at (2.25,4) ; (c) at (-2.25,6) ; (Na) at (2.25,6) ; (Nd) at (0,6) ; (1) at (0,8) ; (d) edge[-] node (c); (d) edge[-] node (Na); (d) edge[-] node (0); (1) edge[-] node (Nd); (Nd) edge[-] node (a); (Nd) edge[-] node (Nc); (1) edge[-] node (c); (1) edge[-] node (Na); (a) edge[-] node (b); (b) edge[-] node (c); (Nc) edge[-] node (Nb); (Nb) edge[-] node (Na); (a) edge[-] node (0); (Nc) edge[-] node (0); (Na) edge[loop right,blue] node (Na); (d) edge[loop left,blue] node (d); (Nd) edge[loop left,blue] node (Nd); (1) edge[loop left,blue] node (1); (c) edge[loop left,blue] node (c); (a) edge[loop left,blue] node (a); (Nc) edge[loop right,blue] node (Nc); (0) edge[loop left,blue] node (0); (b) edge[->,blue,bend right] node (a); (Nb) edge[->,blue,bend left] node (Nc); [Fig1] [KeyEx] Figure [Fig1] displays the Hasse diagram of an S5 epistemic ortholattice *L*, labeled in two ways. Recall that a line segment going *upward* from *x* to *y* means that *x* ≤ *y* and there is no third element *z* with *x* ≤ *z* ≤ *y*. The □ operation is depicted by the blue arrows. Note the failure of distributivity: (*p* ∨ ¬*p*) ∧ (◇*p* ∧ ◇¬*p*) = 1 ∧ (◇*p* ∧ ◇¬*p*) = ◇*p* ∧ ◇¬*p* ≠ 0 and yet (*p* ∧ ◇¬*p*) ∨ (¬*p* ∧ ◇*p*) = 0 ∨ 0 = 0. That distributivity fails follows from the failure of orthomodularity. Recall this is the condition that *a* ≤ *b* implies *a* ∨ (¬*a* ∧ *b*) = *b*. Yet in Figure [Fig1] we have *p* ≤ ◇*p* and *p* ∨ (¬*p* ∧ ◇*p*) = *p* ∨ 0 = *p* ≠ ◇*p*. Next, observe that $$p\wedge\Diamond \neg p = 0\mbox{ and yet }\Diamond \neg p\not\leq \neg p.$$ This shows that the orthocomplementation ¬ is not *pseudocomplementation* (recall Lemma [PseudoLem]). Also observe that $$(p\vee\Box\neg p) \wedge \lozenge p = \top\wedge\lozenge p=\lozenge p\not\leq p.$$ This shows that disjunctive syllogism fails. Finally, note that the *subortholattice* of *L* generated by *p* is the four-element *Boolean* algebra: [->,>=stealth’,shorten >=1pt,shorten <=1pt, auto,node distance=2.5cm,semithick] =[fill=gray!20,draw=none,text=black] (botL) at (0,0) ; (AL) at (-1,1) ; (CL) at (1,1) ; (B’L) at (0,2) ; (CL) edge[-] node (botL); (AL) edge[-] node (botL); (B’L) edge[-] node (AL); (B’L) edge[-] node (CL); Thus, if we think of *p* as a non-modal proposition such as ‘it is raining’ and generate further propositions using disjunction, conjunction, and negation, the result is Boolean. This corresponds to the fact that the failures of distributivity, orthomodularity, pseudocomplementation, and disjunctive syllogism discussed above essentially involve epistemic modals. In § [DistinguishSec], we return to this observation and show how to recover full classical reasoning for a Boolean fragment of our language interpreted in Boolean subalgebras of our lattices. ### Language and consequence We can use epistemic ortholattices to interpret the following propositional modal language. [ELDef] Let EL be the language generated by the grammar *φ* :  :  = ⊤ ∣ *p* ∣ ¬*φ* ∣ (*φ* ∧ *φ*) ∣ □*φ* where *p* belongs to a countably infinite set Prop of propositional variables. We define ⊥ :  = ¬⊤, *φ* ∨ *ψ* :  = ¬(¬*φ* ∧ ¬*ψ*), and ◇*φ* :  = ¬□¬*φ*. Again we use the same symbols for the connectives of EL and the operations in epistemic ortholattices, trusting that no confusion will arise. The algebraic semantics follows the same approach as in § [LangCons1]. A *valuation* on a modal ortholattice ⟨*A*,  ∨ , 0,  ∧ , 1, ¬, □⟩ is a map *θ* : Prop → *A*. Such a *θ* extends to *θ̃* : EL → *A* by: *θ̃*(⊤) = 1, *θ̃*(¬*φ*) = ¬*θ̃*(*φ*), *θ̃*(*φ* ∧ *ψ*) = *θ̃*(*φ*) ∧ *θ̃*(*ψ*), and *θ̃*(□*φ*) = □*θ̃*(*φ*). [AlgCon] Given a class **C** of modal ortholattices, define the semantic consequence relation $\vDash\_\mathbf{C}$, a binary relation on EL, as follows: $\varphi\vDash\_\mathbf{C}\psi$ if for every *L* ∈ **C** and valuation *θ* on *L*, we have *θ̃*(*φ*) ≤ *θ̃*(*ψ*), where  ≤  is the lattice order of *L*. ### Logic, soundness and completeness Building on § [Log1], axiomatizing the semantic consequence relation of Definition [AlgCon] is straightforward. [EODef] An *epistemic orthologic* is a binary relation  ⊢  on the set EL of formulas satisfying for all *φ*, *ψ* ∈ EL conditions 1-10 of Definition [OrthoLogicDef] plus: ll 11. if *φ* ⊢ *ψ*, then □*φ* ⊢ □*ψ*;& 14. □*φ* ⊢ *φ*; 12. □*φ* ∧ □*ψ* ⊢ □(*φ* ∧ *ψ*); & 15. ¬*φ* ∧ ◇*φ* ⊢ ⊥ (Wittgenstein’s Law). 13. *φ* ⊢ □⊤; As the intersection of epistemic orthologics is clearly an epistemic orthologic, there is a smallest epistemic orthologic, denoted or  ⊢ EO.[12](#fn12) Wittgenstein’s Law yields the following noteworthy property of epistemic orthologics. [BoxBotLem] For any epistemic orthologic  ⊢  and *φ* ∈ EL, if □*φ* ⊢ ⊥, then *φ* ⊢ ⊥. If □*φ* ⊢ ⊥, then ⊤ ⊢ ¬□*φ*, in which case *φ* ⊢ *φ* ∧ ¬□*φ* ⊢ ¬¬*φ* ∧ ◇¬*φ* ⊢ ⊥, so *φ* ⊢ ⊥. Conversely, if one assumes the principle in Lemma [BoxBotLem] together with principles 11 and 14 in Definition [EODef] on top of orthologic, then Wittgenstein’s Law is derivable (see the proof of Fact [GeneralizedWittLaw]). As in § [Log1], a completeness theorem can be proved using standard techniques of algebraic logic. [AlgComp2] The logic $\textsf{EO}$ is sound and complete with respect to the class **EO** of epistemic ortholattices according to the consequence relation of Definition [AlgCon]: for all *φ*, *ψ* ∈ EL, we have *φ* ⊢ EO*ψ* if and only if $\varphi\vDash\_\mathbf{EO}\psi$. By the same strategy as in the proof of Theorem [AlgComp1], checking that the Lindenbaum-Tarski algebra of $\textsf{EO}$ is an epistemic ortholattice. Soundness and completeness with respect to epistemic ortholattices is also straightforward by adding the rules □*φ* ⊢ □□*φ* (known as 4) and ◇*φ* ⊢ □◇*φ* (known as 5) to Definition [EODef]. However, neither of these rules, nor the rule *φ* ⊢ □◇*φ* (known as B), is valid with respect to epistemic ortholattices in general. Moreover, the evidence about the status of these inferences for epistemic modality is mixed. On the one hand, argues that patterns from nested epistemic modals tell against the collapse principles that these axioms together entail. For instance, the sentences in [stackedmight] do not sound obviously equivalent but instead are intuitively increasingly strong:[13](#fn13) .[stackedmight]. John might possibly win.. John might win.. John certainly might win. Judgments here are somewhat difficult to ascertain, however, given that it is very difficult to directly stack epistemic modals in English. Pulling in the other direction, conjunctions that instantiate violations of,, and —that is, sentences with the form □*p* ∧ ¬□□*p*, ◊*p* ∧ ¬□◊*p*, and *p* ∧ ¬□◊*p*, respectively—do sound inconsistent, as in [antis5] (using duality to improve readability): .[antis5]. # John must be the winner, but maybe he might not be the winner..# John might be the winner, but it might be that he must not be the winner.. # John is the winner, but it might be that he must not be the winner. This extends to embedded environments, suggesting this is not simply a Moorean phenomenon but should be accounted for logically. In fact, the account so far has a nice way to make sense of both of these kinds of evidence. On the one hand,,, and are all invalid according to our consequence relation for **EO**. On the other hand, the conjunctions above that instantiate their violations are all inconsistent. Reasoning from that inconsistency to the validity of,, and, although classically valid, is blocked in our system since negation is not pseudocomplementation. This lets us account both for the fact that nested modals appear not to collapse and for the fact that conjunctions that witness the failures of,, and appear inconsistent. One might think that without $\textsf{4}$, which is equivalent to ◇◇*φ* ⊢ ◇*φ*, the other principles of our logic could not account for the fact that *p* ∧ ◇◇¬*p* should be inconsistent. In fact, we do not need $\textsf{4}$ to account for this. For the following, let ◇0*φ* = *φ* and ◇*n* + 1*φ* = ◇◇*n**φ*. [GeneralizedWittLaw] For any epistemic orthologic  ⊢ , *n* ∈ N, and *φ* ∈ EL, we have *φ* ∧ ◇*n*¬*φ* ⊢ ⊥. By induction on *n*. Assume *ψ* ∧ ◇*n*¬*ψ* ⊢ ⊥ for all *ψ* ∈ EL. Then for any *φ* ∈ EL, □(*φ* ∧ ◇*n* + 1¬*φ*) ⊢ □*φ* ∧ □◇*n* + 1¬*φ* ⊢ □*φ* ∧ ◇*n* + 1¬*φ* ⊢ □*φ* ∧ ◇*n*¬□*φ* ⊢ ⊥,  using the inductive hypothesis for the last step. Thus, *φ* ∧ ◇*n* + 1¬*φ* ⊢ ⊥ by Lemma [BoxBotLem]. ### Distinguishing Boolean propositions We now make precise the idea from § [ConsSec] that classical principles should hold for the “non-modal fragment.” If we understand the basic elements *p*, *q*, *r* ∈ Prop of our inductively defined formal language as *propositional variables*, standing in for arbitrary propositions including epistemic modal propositions, then we do not want classical principles like *p* ∧ (*q* ∨ *r*) ⊢ (*p* ∧ *q*) ∨ (*p* ∧ *r*), since accepting such a principle for *p*, *q*, *r* means accepting it for all propositions (cf. ). However, we can add to the basic elements of our formal language a set Bool whose elements we think of as variables only for *non-epistemic*, or *Boolean*, propositions.[14](#fn14) Typographically, *p*, *q*, *r*, … are elements of Prop, whereas `p`, `q`, `r`, … are elements of Bool. [ELPlus] Let EL+ be the language generated by the grammar *φ* :  :  = ⊤ ∣ *p* ∣ `p` ∣ ¬*φ* ∣ (*φ* ∧ *φ*) ∣ □*φ* where *p* belongs to a countably infinite set Prop and `p` belongs to a countably infinite set Bool. A formula of EL+ is said to be *Boolean* if all its propositional variables are from Bool and it does not contain □. In line with our idea that the only failures of classicality come from epistemic modals, we interpret the variables of Bool in a Boolean subalgebra of our ambient ortholattice of propositions. A *modal ortho-Boolean lattice* is a tuple ⟨*A*, *B*,  ∨ , 0,  ∧ , 1, ¬, □⟩ where ⟨*A*,  ∨ , 0,  ∧ , 1, ¬, □⟩ is a modal ortholattice and ⟨*B*,  ∨ ∣ *B*, 0,  ∧ ∣ *B*, 1, ¬∣ *B*⟩ is a Boolean algebra where *B* ⊆ *A* and  ∨ ∣ *B*,  ∧ ∣ *B*, and ¬∣ *B* are the restrictions of  ∨ ,  ∧ , and ¬, respectively, to *B*. Note that every modal ortholattice can be expanded to a modal ortho-Boolean lattice by taking *B* = {0, 1}, so modal ortho-Boolean lattices may be viewed as a generalization of modal ortholattices. So far there is no required connection between the distinguished Boolean subalgebra of a modal ortho-Boolean lattice and the ambient modal ortholattice: any Boolean subalgebra can be distinguished. However, there appear to be intuitively valid principles relating Boolean—but not arbitrary—propositions and epistemic modals, and capturing such principles requires a certain coherence between the Boolean subalgebra *B* and the ambient epistemic ortholattice. Consider, for example, the following. [LevelwiseBooldeanDef] Given a modal ortho-Boolean lattice *L* = ⟨*A*, *B*,  ∨ , 0,  ∧ , 1, ¬, □⟩, define: * *B*0 = *B*; * *B**n* + 1 is the subortholattice of ⟨*A*,  ∨ , 0,  ∧ , 1, ¬⟩ generated by {□*b* ∣ *b* ∈ *B**n*}. Then *L* is *level-wise Boolean* if *B**n* is Boolean for each *n* ∈ N. The motivation for this condition is straightforward: no natural language counterexample to a classical inference that we have found is such that all propositions come from the same level *B**n*. For example, the counterexample to pseudocomplementation, going from *p* ∧ ◇¬*p* = 0 to ◇¬*p* ≤ ¬*p*, involves *p*, ¬*p* ∈ *B**n* and ◇¬*p* ∈ *B**n* + 1. Similar points apply to our counterexamples involving distributivity, disjunctive syllogism, and orthomodularity. This observation is, to our knowledge, novel, and it suggests a picture on which classical reasoning *across different epistemic levels* can be dangerous, but classical reasoning within a given epistemic level is safe. We can model that picture with level-wise Boolean epistemic ortholattices: [OrthoBoole]An *epistemic ortho-Boolean lattice* is a modal ortho-Boolean lattice ⟨*A*, *B*,  ∨ , 0,  ∧ , 1, ¬, □⟩ that is level-wise Boolean and such that ⟨*A*,  ∨ , 0,  ∧ , 1, ¬, □⟩ is an epistemic ortholattice. For checking the level-wise Boolean condition, note that if *B**n* + 1 = *B**n*, then *B**k* = *B**n* for all *k* ≥ *n*; and if *A* is finite, then we are bound to reach such a fixed point *B**n*. [LevelwiseBooleEx] Consider the epistemic ortholattice in Figure [Fig1] with *B* = {⊥, *p*, ¬*p*, ⊤}, highlighted on the left of Figure [StratFig]. Then *B* forms a subortholattice and a four-element Boolean algebra. Moreover, *B*1 = {⊥, □*p*, ◇¬*p*, ◇*p*, □¬*p*, □*p* ∨ □¬*p*, ◇*p* ∧ ◇¬*p*, ⊤} forms an eight-element Boolean algebra, highlighted on the right of Figure [StratFig]; and *B*2 = *B*1, so *B**n* = *B*1 for all *n* ≥ 1. Thus, by equipping the epistemic ortholattice in Figure [Fig1] with *B*, we obtain an epistemic ortho-Boolean lattice. [scale=.9,->,>=stealth’,shorten >=1pt,shorten <=1pt, auto,node distance=2.5cm,semithick] =[fill=gray!20,draw=none,text=black] (0) at (0,0) ; (d) at (0,2) ; (a) at (-2.25,2) ; (Nc) at (2.25,2) ; (b) at (-2.25,4) ; (Nb) at (2.25,4) ; (c) at (-2.25,6) ; (Na) at (2.25,6) ; (Nd) at (0,6) ; (1) at (0,8) ; (d) edge[-] node (c); (d) edge[-] node (Na); (d) edge[-] node (0); (1) edge[-] node (Nd); (Nd) edge[-] node (a); (Nd) edge[-] node (Nc); (1) edge[-] node (c); (1) edge[-] node (Na); (a) edge[-] node (b); (b) edge[-] node (c); (Nc) edge[-] node (Nb); (Nb) edge[-] node (Na); (a) edge[-] node (0); (Nc) edge[-] node (0); (Na) edge[loop right,blue] node (Na); (d) edge[loop left,blue] node (d); (Nd) edge[loop left,blue] node (Nd); (1) edge[loop left,blue] node (1); (c) edge[loop left,blue] node (c); (a) edge[loop left,blue] node (a); (Nc) edge[loop right,blue] node (Nc); (0) edge[loop left,blue] node (0); (b) edge[->,blue,bend right] node (a); (Nb) edge[->,blue,bend left] node (Nc); [scale=.9,->,>=stealth’,shorten >=1pt,shorten <=1pt, auto,node distance=2.5cm,semithick] =[fill=gray!20,draw=none,text=black] (0) at (0,0) ; (d) at (0,2) ; (a) at (-2.25,2) ; (Nc) at (2.25,2) ; (b) at (-2.25,4) ; (Nb) at (2.25,4) ; (c) at (-2.25,6) ; (Na) at (2.25,6) ; (Nd) at (0,6) ; (1) at (0,8) ; (d) edge[-] node (c); (d) edge[-] node (Na); (d) edge[-] node (0); (1) edge[-] node (Nd); (Nd) edge[-] node (a); (Nd) edge[-] node (Nc); (1) edge[-] node (c); (1) edge[-] node (Na); (a) edge[-] node (b); (b) edge[-] node (c); (Nc) edge[-] node (Nb); (Nb) edge[-] node (Na); (a) edge[-] node (0); (Nc) edge[-] node (0); (Na) edge[loop right,blue] node (Na); (d) edge[loop left,blue] node (d); (Nd) edge[loop left,blue] node (Nd); (1) edge[loop left,blue] node (1); (c) edge[loop left,blue] node (c); (a) edge[loop left,blue] node (a); (Nc) edge[loop right,blue] node (Nc); (0) edge[loop left,blue] node (0); (b) edge[->,blue,bend right] node (a); (Nb) edge[->,blue,bend left] node (Nc); [StratFig] Let us now turn to the semantics of EL+. A valuation *θ* on a modal ortho-Boolean lattice ⟨*A*, *B*,  ∨ , 0,  ∧ , 1, ¬, □⟩ is a map *θ* : Prop ∪ Bool → *A* such that for all `p` ∈ Bool, *θ*(`p`) ∈ *B*. Such a valuation extends to *θ̃* : EL+ → *A* by: *θ̃*(⊤) = 1, *θ̃*(¬*φ*) = ¬*θ̃*(*φ*), *θ̃*(*φ* ∧ *ψ*) = *θ̃*(*φ*) ∧ *θ̃*(*ψ*), and *θ̃*(□*φ*) = □*θ̃*(*φ*). Corresponding to the different subortholattices *B**n*, we have a hierarchy of sublanguages B*n*.   * Let B0 be the set of Boolean formulas as in Definition [ELPlus]. * Let B*n* + 1 be the smallest set of formulas that includes {□*φ* ∣ *φ* ∈ B*n*} and is closed under ¬ and  ∧ . An obvious induction on the structure of formulas shows that formulas at each level are interpreted in the corresponding *B**n*. [InterpretInBn] Let *θ* be a valuation on a modal ortho-Boolean lattice ⟨*A*, *B*,  ∨ , 0,  ∧ , 1, ¬, □⟩. Then for any *n* ∈ N and *β* ∈ B*n*, we have *θ̃*(*β*) ∈ *B**n*. Our consequence relation for EL+ is defined just like our consequence relation for EL but quantifying over modal ortho-Boolean lattices instead of modal ortholattices. [AlgCon+] Given a class **C** of modal ortho-Boolean lattices, define the semantic consequence relation $\vDash\_\mathbf{C}^+$, a binary relation on EL+, as follows: $\varphi\vDash\_\mathbf{C}^+\psi$ if for every *L* ∈ **C** and valuation *θ* on *L*, we have *θ̃*(*φ*) ≤ *θ̃*(*ψ*), where  ≤  is the lattice order of *L*. Turning to the logic, we now explicitly include the principle of distributivity for formulas at a given epistemic level.[15](#fn15) [EOplus] Let EO+ be defined just like EO in Definition [EODef] for arbitrary formulas *φ*, *ψ*, *χ* ∈ EL+ but in addition we have for all *n* ∈ N and *α*, *β*, *γ* ∈ B*n*: * *α* ∧ (*β* ∨ *γ*) ⊢ (*α* ∧ *β*) ∨ (*α* ∧ *γ*). Adding distributivity for formulas at a given epistemic level allows us to derive all principles of classical logic for formulas at that level. For example, we can derive the psueodcomplementation principle that *α* ∧ *β* ⊢ ⊥ implies *α* ⊢ ¬*β* for *α*, *β* ∈ B*n* as in the proof of Proposition [PseudoToBoole]. We can also derive principles of classical normal modal logic for formulas at a given epistemic level. [DiamondBoxDiamond] We can derive that ◇*δ* ∧ □*γ* ⊢ ◇(*δ* ∧ *γ*) for *δ*, *γ* ∈ B*n* as follows: ◇*δ* ∧ □*γ* ∧ □¬(*δ* ∧ *γ*) ⊢ ◇*δ* ∧ □(*γ* ∧ ¬(*δ* ∧ *γ*)) ⊢ ◇*δ* ∧ □¬*δ* ⊢ ⊥,  so by pseudocomplementation for formulas in B*n* + 1, we have ◇*δ* ∧ □*γ* ⊢ ◇(*δ* ∧ *γ*). Note that we would not want this principle for formulas of different epistemic levels, since ◇*p* ∧ □◇¬*p* should not entail ◇(*p* ∧ ◇¬*p*). More generally, we have the following completeness theorem. [EO+comp1] The logic $\textsf{EO}^+$ (summarized in Figure [EO+Fig]) is sound and complete with respect to the class **EO**+ of all epistemic ortho-Boolean lattices according to the consequence relation of Definition [AlgCon+]: for all *φ*, *ψ* ∈ EL+, we have *φ* ⊢ EO+*ψ* if and only if $\varphi\vDash\_\mathbf{EO}^+\psi$. Soundness is again straightforward. For completeness, as in the proofs of Theorems [AlgComp1] and [AlgComp2], we consider the Lindenbaum-Tarski algebra *L* of $\textsf{EO}^+$. Let *B* = {[*β*] ∣ *β*Boolean}, and note that *B* forms a subortholattice of *L*. As a consequence of the distributivity rule of EO+, each *B**n* generated from *B* is a Boolean algebra under the restricted operations of *L*. Hence we have an epistemic ortho-Boolean lattice. Finally, let *θ* be the valuation on *L* with *θ*(*p*) = [*p*] for all *p* ∈ Prop and *θ*(`p`) = [`p`] for all `p` ∈ Bool. The rest of the proof is the same as for Theorems [AlgComp1] and [AlgComp2]. As before, soundness and completeness with respect to epistemic ortho-Boolean lattices is also straightforward by adding the rules □*φ* ⊢ □□*φ* and ◇*φ* ⊢ □◇*φ* to Definition [EOplus]. EO+ is thus our proposed base logic for epistemic modals. EO+ clearly satisfies all the desiderata of § [des] except perhaps for conservativity. When it comes to conservativity, EO+ preserves much of classical modal logic, but we will see in § [EpExtSection] that there are plausible -logics that preserve even more of classical modal logic. We will raise the question whether EO+ is “conservative enough” or whether we want to also commit to those further principles or to other principles of classical modal logic. |ll| & 1. *φ* ⊢ ⊤; & 6. ¬¬*φ* ⊢ *φ*; 2. *φ* ⊢ *φ*; & 7. *φ* ∧ ¬*φ* ⊢ *ψ*; 3. *φ* ∧ *ψ* ⊢ *φ*; &8. if *φ* ⊢ *ψ* and *ψ* ⊢ *χ*, then *φ* ⊢ *χ*; 4. *φ* ∧ *ψ* ⊢ *ψ*; &9. if *φ* ⊢ *ψ* and *φ* ⊢ *χ*, then *φ* ⊢ *ψ* ∧ *χ*; 5. *φ* ⊢ ¬¬*φ*; & 10. if *φ* ⊢ *ψ*, then ¬*ψ* ⊢ ¬*φ*. & 11. if *φ* ⊢ *ψ*, then □*φ* ⊢ □*ψ*; & 14. □*φ* ⊢ *φ*; 12. □*φ* ∧ □*ψ* ⊢ □(*φ* ∧ *ψ*); & 15. ¬*φ* ∧ ◇*φ* ⊢ ⊥; 13. *φ* ⊢ □⊤; & & 16. *α* ∧ (*β* ∨ *γ*) ⊢ (*α* ∧ *β*) ∨ (*α* ∧ *γ*) & for *α*, *β*, *γ* ∈ B*n*. & & [EO+Fig] Possibility semantics ===================== While the algebraic semantics of Section [algebraic] shows perspicuously exactly how we are departing from the Boolean algebraic semantics underlying classical logic, some may be unsatisfied with the way in which the algebraic approach simply builds into our semantics—in the form of equations or inequalities on lattices—precisely the logical principles we want to get out. Thus, in this section, we turn to a more concrete and perhaps more intuitive and illuminating *possibility semantics* for epistemic orthologic. We begin with possibility semantics for non-modal orthologic, reviewing the approach of.[16](#fn16) Although Goldblatt does not classify his semantics under the banner of “possibility semantics,” this classification is justified by the close connection, explained in, between his semantics for orthologic using a relation of *compatibility* (or what he calls *proximity*) and possibility semantics for classical logic using a relation of *refinement*.[17](#fn17) Indeed, classical possibility semantics as developed in and can be recast in terms of compatibility instead of refinement (see Remark [BooleanCase] and ). Here we assume no previous exposure to possibility semantics of any kind. All we assume is familiarity with possible world semantics, of which possibility semantics is a generalization. Review of possibility semantics for orthologic ---------------------------------------------- Possibility semantics for orthologic replaces the set *W* of worlds from classical possible world semantics with a set *S* of possibilities, endowed with a *compatibility relation* $\between$ between possibilities. [CompFrame] A (*symmetric*) *compatibility frame* is a pair $\mathcal{F}=\langle S,\between\rangle$ where *S* is a nonempty set and $\between$ is a reflexive and symmetric binary relation on *S*. A set *W* of possible worlds may be regarded as a compatibility frame in which each *w* ∈ *W* is compatible only with itself: $v\between w$ implies *v* = *w*. Think of $x\between y$ as meaning that **x* does not settle as true anything that *y* settles as false*. If *w* and *v* are complete possible worlds, then—assuming distinct worlds must differ in some respect—*w* must settle something as true that *y* settles as false. However, unlike possible worlds, distinct *possibilities* may be compatible with each other—a sign of their *partiality*. For example, the possibility that it is raining in Beijing is compatible with the possibility that it is sunny in Malibu. The above notion of compatibility is weaker than what might be called *compossibility*: *x* and *y* are compossible if there is some possibility *z* that settles as true everything that *x* settles as true and everything that *y* settles as true. As we shall see below, in the above sense of compatibility, a possibility *x* settling as true that *it might be raining in Beijing* can be *compatible* with a possibility *y* settling as true that *it is not raining in Beijing*, since settling as true *that it might be raining in Beijing* does not entail settling as false *that it is not raining in Beijing*; however, such an *x* and *y* cannot be *compossible*, since there can be no possibility *z* that settles as true that *it might be raining in Beijing but it isn’t*. considers more general compatibility frames in which $\between$ is only assumed to be reflexive,[18](#fn18) as such frames can be used to represent arbitrary complete lattices; but here we use the term ‘compatibility frame’ for only the symmetric frames, which will give rise to complete ortholattices. We now turn to the question of what counts as a *proposition*. In basic possible world semantics for classical logic, every set of worlds is (or corresponds to) a proposition; as a result, the collection of propositions ordered by inclusion forms a Boolean algebra. In semantics for *non-classical* logics, not every set of worlds or possibilities can count as a proposition—for then we would still be stuck with a Boolean algebra of propositions. Thus, for example, in possible world semantics for intuitionistic logic (,, ), only sets of worlds that are upward closed with respect to an information order count as legitimate propositions. In possibility semantics for orthologic, only sets of possibilities that behave sensibly with respect to the compatibility relation count as legitimate propositions. [RegProp] Given a compatibility frame $\langle S,\between\rangle$, a set *A* ⊆ *S* is said to be *$\between$-regular* if for all *x* ∈ *S*, $$x\not\in A \Rightarrow \exists y\between x\;\forall z\between y\;\, z\not\in A.$$ The idea is that if *x* does *not* make a proposition *A* true, then there should be a possibility *y* compatible with *x* that makes *A* *false*, so that all possibilities *z* compatible with *y* do not make *A* true.[19](#fn19) See Figure [RegDefFig]. If *x* already makes *A* false, so no possibility compatible with *x* makes *A* true, then we can simply take *y* = *x*. The interesting case is where *x* neither makes *A* true nor makes *A* false. Thus, regularity can be expressed in the slogan: Indeterminacy Implies Compatibility with Falsity. Thus, if *A* is indeterminate at *x*, then *x* is compatible with a *y* that makes *A* false. Let us emphasize that regularity is a constraint on what counts as a *proposition* in possibility semantics, not a constraint on what counts as a *possibility*. Any pair of a nonempty set *S* and compatibility relation on that set is a frame for possibility semantics, just as any pair of a set and accessibility relation is a frame for classical possible world semantics. But unlike in classical possible world semantics, not every subset of *S* counts as a proposition. Restricting what subsets of a state space count as propositions has several precedents: again, from possible world semantics for intuitionistic logic; from standard approaches to probability, where a *σ*-algebra over the state space—which need not be the full powerset algebra—is specified; as well as possible-world approaches to conditionals, where a set of admissible conditional antecedents is specified (as in, where the conditional operator is defined only on sentences in the language, not on arbitrary propositions). Even in classical possible world semantics, what counts as a propositions is implicitly limited to those classes of possible worlds that are also sets (). [->,>=stealth,shorten >=1pt,shorten <=1pt, auto,node distance=2.5cm,semithick] =[fill=gray!20,draw=none,text=black] (-1.25,-.6) circle (1); at (-1.25,-.6) ; (2,.5) – (2, -2.2) – (.7, -2.2) – (-.7, -2.2) – (-.7,.6) – (2,.6) – (2,.5) ; at (.65,-.8) *A*; at (3.2,-.8)  ⇒  ∃*y*:; (6,-.6) circle (1); (9.25,.5) – (9.25, -2.2) – (7.95, -2.2) – (6.55, -2.2) – (6.55,.6) – (9.25,.6) – (9.25,.5) ; at (7.9,-.8) *A*; (5.4,-1) circle (1); (a) at (6,-.6) ; (b) at (5.4,-1) ; (a) edge[-] node (b); [RegDefFig] From the compatibility relation, it is useful to define a relation of *refinement*. This can be done in two equivalent ways: *y* refines *x* if every proposition settled true by *x* is also settled true by *y*, or equivalently, if every possibility compatible with *y* is compatible with *x*. [RefinementDef] For any compatibility frame $\langle S,\between\rangle$, the following are equivalent for any *x*, *y* ∈ *S*: 1. [RefinementDef1] for all $\between$-regular sets *A* ⊆ *S*, if *x* ∈ *A*, then *y* ∈ *A*; 2. [RefinementDef2] for all *z* ∈ *S*, if $z\between y$ then $z\between x$. When these conditions hold, we write *y* ⊑ *x*. From [RefinementDef1] to [RefinementDef2], suppose $z\between y$ but not $z\between x$. Let $A=\{v\in S\mid \mbox{not }z\between v\}$. We claim that *A* is $\between$-regular. We must show that if *u* ∉ *A*, then $\exists u'\between u$ $\forall u''\between u'$ *u*ʺ ∉ *A*, i.e., that if $z\between u$, then $\exists u'\between u$ $\forall u''\between u'$ $z\between u''$. Indeed, let *u*ʹ = *z* and use the symmetry of $\between$. Thus, *A* is $\between$-regular, and *x* ∈ *A* but *y* ∉ *A*. From [RefinementDef2] to [RefinementDef1], let *A* be a $\between$-regular set. Suppose *x* ∈ *A* and condition [RefinementDef2] holds. We claim that for all $y'\between y$ there is a $y''\between y'$ with *y*ʺ ∈ *A*, so *y* ∈ *A* by the $\between$-regularity of *A*. Suppose $y'\between y$. Then since condition [RefinementDef2] holds, we have $y'\between x$ and hence $x\between y'$ by symmetry of $\between$. Thus, we may take *y*ʺ = *x*. For any possibility *x*, the set of refinements of *x* is a $\between$-regular set and hence may be regarded as a proposition—the proposition *that possibility *x* obtains*. [Downx] Given a compatibility frame $\mathcal{F}=\langle S,\between\rangle$ and *x* ∈ *S*, the set $\mathord{\downarrow}x=\{y\in S\mid y\sqsubseteq x\}$ is $\between$-regular. Suppose $y\not\in \mathord{\downarrow}x$. Hence there is a $y'\between y$ such that *not* $y'\between x$. Now consider any $y''\between y'$. Since $y'\between y''$ but *not* $y'\between x$, we have that $y''\not\sqsubseteq x$. Thus, we have shown that if $y\not\in \mathord{\downarrow}x$, then $\exists y'\between y$ $\forall y''\between y'$ $y''\not\in \mathord{\downarrow}x$. Hence $\mathord{\downarrow}x$ is $\between$-regular. We can also use the notion of refinement to define *possible worlds*: a world is a possibility that refines every possibility with which it is compatible. [WorldDef] Given a compatibility frame $\mathcal{F}=\langle S,\between\rangle$, a *world in F* is a *w* ∈ *S* such that for all *x* ∈ *S*, $w\between x$ implies *w* ⊑ *x*. Now in line with the idea that only $\between$-regular sets of possibilities are propositions, a *model* should interpret the propositional variables of our formal language as $\between$-regular sets. [CompModDef] A *compatibility model* is a pair M = ⟨F, *V*⟩ where $\mathcal{F}=\langle S,\between\rangle$ is a compatibility frame and *V* is a function assigning to each *p* ∈ Prop a $\between$-regular set *V*(*p*) ⊆ *S*. We say that M is *based on* F. We now observe how any compatibility frame gives rise to an ortholattice. Compare the following result, due to Birkhoff, to the way in which for any set *W*, the family of all subsets ordered by the subset relation  ⊆  forms a complete Boolean algebra. [FrameToOrtho] Given any compatibility frame $\mathcal{F}=\langle S,\between\rangle$, the $\between$-regular sets ordered by the subset relation  ⊆  form a complete lattice, which becomes an ortholattice with the operation ¬ defined by $$\neg A=\{x\in S\mid \forall y\between x\;\, y\not\in A\}.\label{NegEq}$$ For the lattice operations, we have *A* ∧ *B* = *A* ∩ *B*, *A* ∨ *B* = ¬(¬*A* ∩ ¬*B*), 1 = *S*, and $0=\varnothing$. We denote the ortholattice of $\between$-regular subsets of F by *O*(F). Thus,  ∧  is intersection as in possible world semantics, but note how ¬ and  ∨  are no longer interpreted as in possible world semantics. In possible world semantics, ¬*A* = {*x* ∈ *W* ∣ *x* ∉ *A*}, in contrast to ([NegEq]), and  ∨  is union, in contrast to what we get when we unpack the definition  ∨  as *A* ∨ *B* = ¬(¬*A* ∩ ¬*B*): $$A\vee B=\{x\in S\mid \forall y\between x\,\exists z\between y: z\in A\cup B\},\label{OrEq}$$ i.e., *x* makes *A* ∨ *B* true just in case every possibility compatible with *x* is in turn compatible with a possibility that makes one of *A* or *B* true. [BooleanCase] As observed in (Example 3.16), the ortholattice arising from a compatibility frame as in Proposition [FrameToOrtho] is a Boolean algebra if for all *x*, *y* ∈ *S*, if $x\between y$, then *x* and *y* have a common refinement, i.e., there is a *z* such that *z* ⊑ *x* and *z* ⊑ *y*. Indeed, this condition is not only sufficient but also necessary for the ortholattice to be Boolean: for if *x* and *y* have no common refinement, so $\mathord{\downarrow}x\cap\mathord{\downarrow}y=\varnothing$, then in a Boolean algebra we must have $\mathord{\downarrow}x \subseteq\neg \mathord{\downarrow}y$, contradicting $x\between y$. Thus, classicality corresponds to the condition (to use terminology introduced after Definition [CompFrame]) that *compatibility* implies *compossibility*. Our departure from classicality can now be seen as follows: we want there to be distinct but *compatible* possibilities that settle *A* and ◇¬*A* as true, respectively, as neither *A* nor ◇¬*A* should entail the negation of the other; yet such possibilities should not be *compossible*—they should have no common refinement, since no single possibility should settle both *A* and ◇¬*A* as true. It is noteworthy that two compatibility frames $(S,\between)$ and $(S,\between')$ can have the same derived refinement relation but give rise to non-isomorphic ortholattices; an example is provided in the second of the notebooks cited in § [Intro]. Thus, although the refinement relation has all the information one needs in a classical setting (wherein compatibility can be defined from a primitive partial order of refinement by: $x\between y$ if *x* and *y* have a common refinement), it does not in our non-classical setting here. [CompEx] Figure [Fig2] shows a simple compatibility frame with five possibilities (above) and its refinement relation (below) defined from compatibility as in Lemma [RefinementDef]. [->,>=stealth’,shorten >=1pt,shorten <=1pt, auto,node distance=2.5cm,semithick] =[fill=gray!20,draw=none,text=black] (1) at (0,0) ; (2) at (1.5,0) ; (3) at (3,0) ; (4) at (4.5,0) ; (5) at (6,0) ; (1) edge[-] node (2); (2) edge[-] node (3); (3) edge[-] node (4); (4) edge[-] node (5); [->,>=stealth’,shorten >=1pt,shorten <=1pt, auto,node distance=2.5cm,semithick] =[fill=gray!20,draw=none,text=black] (1) at (0,0) ; (2) at (1.5,0) ; (3) at (3,0) ; (4) at (4.5,0) ; (5) at (6,0) ; (1) edge[<-,dashed] node (2); (4) edge[->,dashed] node (5); [Fig2] Figure [RegFig] shows the ten $\between$-regular subsets of the compatibility frame from Figure [Fig2], each highlighted in green. For example, {*x*1, *x*2, *x*3} is $\between$-regular according to Definition [RegProp] because each possibility outside the set is compatible with *x*5, which is not compatible with anything in the set; but {*x*2, *x*3, *x*4} is not $\between$-regular, because *x*1 is outside the set and yet everything compatible with *x*1 is compatible with something in the set. Note that ordering the $\between$-regular subsets in Figure [RegFig] by  ⊆  and defining ¬ as in Proposition [FrameToOrtho] yields an ortholattice *O*(F) isomorphic to the ortholattice in Figure [Fig1] (reproduced on the right of Figure [JIfig] below)! [->,>=stealth’,shorten >=1pt,shorten <=1pt, auto,node distance=2.5cm,semithick,scale=.5] =[fill=gray!20,draw=none,text=black] (1) at (0,0) ; (2) at (1.5,0) ; (3) at (3,0) ; (4) at (4.5,0) ; (5) at (6,0) ; (1) edge[-] node (2); (2) edge[-] node (3); (3) edge[-] node (4); (4) edge[-] node (5); [->,>=stealth’,shorten >=1pt,shorten <=1pt, auto,node distance=2.5cm,semithick,scale=.5] =[fill=gray!20,draw=none,text=black] (1) at (0,0) ; (2) at (1.5,0) ; (3) at (3,0) ; (4) at (4.5,0) ; (5) at (6,0) ; (1) edge[-] node (2); (2) edge[-] node (3); (3) edge[-] node (4); (4) edge[-] node (5); [->,>=stealth’,shorten >=1pt,shorten <=1pt, auto,node distance=2.5cm,semithick,scale=.5] =[fill=gray!20,draw=none,text=black] (1) at (0,0) ; (2) at (1.5,0) ; (3) at (3,0) ; (4) at (4.5,0) ; (5) at (6,0) ; (1) edge[-] node (2); (2) edge[-] node (3); (3) edge[-] node (4); (4) edge[-] node (5); [->,>=stealth’,shorten >=1pt,shorten <=1pt, auto,node distance=2.5cm,semithick,scale=.5] =[fill=gray!20,draw=none,text=black] (1) at (0,0) ; (2) at (1.5,0) ; (3) at (3,0) ; (4) at (4.5,0) ; (5) at (6,0) ; (1) edge[-] node (2); (2) edge[-] node (3); (3) edge[-] node (4); (4) edge[-] node (5); [->,>=stealth’,shorten >=1pt,shorten <=1pt, auto,node distance=2.5cm,semithick,scale=.5] =[fill=gray!20,draw=none,text=black] (1) at (0,0) ; (2) at (1.5,0) ; (3) at (3,0) ; (4) at (4.5,0) ; (5) at (6,0) ; (1) edge[-] node (2); (2) edge[-] node (3); (3) edge[-] node (4); (4) edge[-] node (5); [->,>=stealth’,shorten >=1pt,shorten <=1pt, auto,node distance=2.5cm,semithick,scale=.5] =[fill=gray!20,draw=none,text=black] (1) at (0,0) ; (2) at (1.5,0) ; (3) at (3,0) ; (4) at (4.5,0) ; (5) at (6,0) ; (1) edge[-] node (2); (2) edge[-] node (3); (3) edge[-] node (4); (4) edge[-] node (5); [->,>=stealth’,shorten >=1pt,shorten <=1pt, auto,node distance=2.5cm,semithick,scale=.5] =[fill=gray!20,draw=none,text=black] (1) at (0,0) ; (2) at (1.5,0) ; (3) at (3,0) ; (4) at (4.5,0) ; (5) at (6,0) ; (1) edge[-] node (2); (2) edge[-] node (3); (3) edge[-] node (4); (4) edge[-] node (5); [->,>=stealth’,shorten >=1pt,shorten <=1pt, auto,node distance=2.5cm,semithick,scale=.5] =[fill=gray!20,draw=none,text=black] (1) at (0,0) ; (2) at (1.5,0) ; (3) at (3,0) ; (4) at (4.5,0) ; (5) at (6,0) ; (1) edge[-] node (2); (2) edge[-] node (3); (3) edge[-] node (4); (4) edge[-] node (5); [->,>=stealth’,shorten >=1pt,shorten <=1pt, auto,node distance=2.5cm,semithick,scale=.5] =[fill=gray!20,draw=none,text=black] (1) at (0,0) ; (2) at (1.5,0) ; (3) at (3,0) ; (4) at (4.5,0) ; (5) at (6,0) ; (1) edge[-] node (2); (2) edge[-] node (3); (3) edge[-] node (4); (4) edge[-] node (5); [->,>=stealth’,shorten >=1pt,shorten <=1pt, auto,node distance=2.5cm,semithick,scale=.5] =[fill=gray!20,draw=none,text=black] (1) at (0,0) ; (2) at (1.5,0) ; (3) at (3,0) ; (4) at (4.5,0) ; (5) at (6,0) ; (1) edge[-] node (2); (2) edge[-] node (3); (3) edge[-] node (4); (4) edge[-] node (5); [RegFig] [TwoOrthoRep] The two ortholattices in Figure [OrthoEx] can be realized as the ortholattices of $\between$-regular subsets of the two compatibility frames in Figure [CompFig]. Verifying this claim is a good check for understanding. [->,>=stealth’,shorten >=1pt,shorten <=1pt, auto,node distance=2.5cm,semithick] =[fill=gray!20,draw=none,text=black] (1) at (0,0) ; (2) at (1.5,0) ; (3) at (3,0) ; (4) at (4.5,0) ; (1) edge[-] node (2); (2) edge[-] node (3); (3) edge[-] node (4); [->,>=stealth’,shorten >=1pt,shorten <=1pt, auto,node distance=2.5cm,semithick] =[fill=gray!20,draw=none,text=black] (1) at (0,0) ; (2) at (1.5,0) ; (3) at (1.5,-1.5) ; (4) at (0,-1.5) ; (1) edge[-] node (2); (2) edge[-] node (3); (3) edge[-] node (4); (1) edge[-] node (4); [CompFig] From Examples [CompEx] and [TwoOrthoRep], one might notice the key to representing any finite ortholattice using a compatibility frame: the possibilities in the frame correspond to the *join-irreducible* elements of the ortholattices, i.e., those nonzero elements *a* of the ortholattice that cannot be obtained as a join of elements distinct from *a*—intuitively, those noncontradictory propositions that cannot be expressed as a disjunction of distinct propositions; then two possibilities *a* and *b* are compatible if $a\not\leq \neg b$ in the ortholattice. Figure [JIfig] highlights in orange the join-irreducible elements of the three ortholattices considered so far, elements which one can match up with the possibilities in the corresponding compatibility frames in Figures [CompFig] and [Fig2]. [->,>=stealth’,shorten >=1pt,shorten <=1pt, auto,node distance=2cm,semithick,every loop/.style=<-,shorten <=1pt] =[fill=gray!20,draw=none,text=black] (0) at (0,0) ; (a) at (-.75,.75) ; (b) at (.75,.75) ; (1l) at (-.75,1.5) ; (1r) at (.75,1.5) ; (new1) at (0,2.25) ; (new1) edge[-] node (1l); (new1) edge[-] node (1r); (1l) edge[-] node (a); (1r) edge[-] node (b); (a) edge[-] node (0); (b) edge[-] node (0); (1) at (5,2) ; (x) at (3,1) ; (y) at (4,1) ; (y’) at (6,1) ; (z) at (7,1) ; (0) at (5,0) ; (1) edge[-] node (y); (1) edge[-] node (y’); (1) edge[-] node (x); (1) edge[-] node (z); (x) edge[-] node (0); (y) edge[-] node (0); (y’) edge[-] node (0); (z) edge[-] node (0); [->,>=stealth’,shorten >=1pt,shorten <=1pt, auto,node distance=2.5cm,semithick] =[fill=gray!20,draw=none,text=black] (0) at (0,0) ; (d) at (0,.75) ; (a) at (-1,.75) ; (Nc) at (1,.75) ; (b) at (-1,1.5) ; (Nb) at (1,1.5) ; (c) at (-1,2.25) ; (Na) at (1,2.25) ; (Nd) at (0,2.25) ; (1) at (0,3) ; (d) edge[-] node (c); (d) edge[-] node (Na); (d) edge[-] node (0); (1) edge[-] node (Nd); (Nd) edge[-] node (a); (Nd) edge[-] node (Nc); (1) edge[-] node (c); (1) edge[-] node (Na); (a) edge[-] node (b); (b) edge[-] node (c); (Nc) edge[-] node (Nb); (Nb) edge[-] node (Na); (a) edge[-] node (0); (Nc) edge[-] node (0); [JIfig] The fact that finite ortholattices can be represented using compatibility frames as described above is a consequence of the following more general representation theorem for all complete ortholattices.[20](#fn20) A set *V* of elements of a lattice *L* is said to be *join-dense in *L** if every element of *L* can be obtained as the (possibly infinite) join of some elements of *V* (e.g., the set of all elements of *L* is trivially join-dense in *L*). [MacLaren] Let *L* = ⟨*A*,  ∨ , 0,  ∧ , 1, ¬⟩ be an ortholattice and *V* any join-dense subset of *L*. Then where $\mathcal{F}=\langle V\setminus\{0\},\between\rangle$ is the compatibility frame based on *V* \ {0} with $\between$ defined by $a\between b$ if $a\not\leq \neg b$, we have that *L* embeds into *O*(F) via the map *a* ↦ {*b* ∈ *V* \ {0} ∣ *b* ≤ *a*}, which is an isomorphism if *L* is complete. For a proof, see, e.g.,. Compare this representation of complete ortholattices to Tarski’s representation of *complete and atomic Boolean algebras*, which lies at the foundation of possible world semantics: any such Boolean algebra is isomorphic to the powerset of its set of atoms (think possible worlds).[21](#fn21) Now in a *finite* ortholattice, the set of join-irreducible elements is join-dense in the lattice, so we obtain the following corollary of Theorem [MacLaren], which explains Examples [CompEx] and [TwoOrthoRep]. [FiniteRep] Let *L* = ⟨*A*,  ∨ , 0,  ∧ , 1, ¬⟩ be a finite ortholattice. Then where $\mathcal{F}=\langle J,\between\rangle$ is the compatibility frame based on the set *J* of join-irreducible elements of *L* with $\between$ defined by $a\between b$ if $a\not\leq \neg b$, we have that *O*(F) is isomorphic to *L*. Let us now turn to formal semantics for the language L (recall Definition [LDef]). Proposition [FrameToOrtho] leads us to the following. [GoldblattSemantics] Given a compatibility model $\mathcal{M}= \langle S,\between,V\rangle$, *x* ∈ *S*, and *φ* ∈ L, we define $\mathcal{M},x\Vdash \varphi$ as follows: 1. $\mathcal{M},x\Vdash\top$; 2. $\mathcal{M},x\Vdash p$ iff *x* ∈ *V*(*p*); 3. $\mathcal{M},x\Vdash \varphi\wedge\psi$ iff $\mathcal{M},x\Vdash \varphi$ and $\mathcal{M},x\Vdash \psi$; 4. $\mathcal{M},x\Vdash \neg\varphi$ iff for all $y\between x$, $\mathcal{M},y\nVdash \varphi$. We define $\llbracket \varphi\rrbracket^\mathcal{M}=\{x\in S\mid \mathcal{M},x\Vdash\varphi\}$. Then given our definition of *φ* ∨ *ψ* as *φ* ∨ *ψ* :  = ¬(¬*φ* ∧ ¬*ψ*), as in ([OrEq]) we have: * $\mathcal{M},x\Vdash \varphi\vee\psi$ iff for all $y\between x$ there is a $z\between y$ such that $\mathcal{M},z\Vdash \varphi$ or $\mathcal{M},z\Vdash\psi$. An easy induction shows that the set of possibilities that make a formula true is indeed a proposition. For any compatibility model M and
arxiv_0000150
Heat fluctuations in a harmonic chain of active particles ========================================================= One of the major challenges in stochastic thermodynamics is to compute the distributions of stochastic observables for small-scale systems for which fluctuations play a significant role. Hitherto much theoretical and experimental research has focused on systems composed of passive Brownian particles. In this paper, we study the heat fluctuations in a system of interacting active particles. Specifically we consider a one-dimensional harmonic chain of *N* active Ornstein-Uhlenbeck particles, with the chain ends connected to heat baths of different temperatures. We compute the moment-generating function for the heat flow in the steady state. We employ our general framework to explicitly compute the moment-generating function for two example single-particle systems. Further, we analytically obtain the scaled cumulants for the heat flow for the chain. Numerical Langevin simulations confirm the long-time analytical expressions for first and second cumulants for the heat flow for a two-particle chain. Introduction ============ Nonequilibrium systems are ubiquitous . Examples include molecular motors, engines, bio-molecules, colloidal particles, and chemical reactions. In stark contrast to equilibrium counterparts , a general framework to understand nonequilibrium systems is still missing. In the last couple of decades, researchers have found general relations governing systems arbitrarily far from equilibrium, such as the *fluctuation relations* , notably including transient and steady-state fluctuation theorems , the Jarzynski work-free energy relation , the Crooks work-fluctuation theorem , and the Hatano-Sasa relation . Recently, the *thermodynamic uncertainty relation* was established , essentially bounding the precision of arbitrary currents by the average entropy production. This points to promising applications to infer dissipation by measuring arbitrary currents . Although these relations are independent of specific system details, fluctuations of observables (heat, work, entropy production, particle current, efficiency, etc.) remain dependent on the choice of the system. The probability distribution *P*(A, *τ*) of an observable A at time *τ* in a system of interest carries full information about the fluctuations of A. In the long-time limit, *P*(A, *τ*) is expected to have a large-deviation form , *P*(A, *τ*) ≍ *e**τ*I(A/*τ*), where the symbol  ≍  implies logarithmic equality and the large-deviation function is $$\begin{aligned} \mathcal{I}(a)\equiv \lim\_{\tau\to\infty} \dfrac{1}{\tau}\ln P(\mathcal{A}=a\tau,\tau) \, \label{ldf-intro}\end{aligned}$$ for A scaling linearly with the observation time *τ* . Unfortunately, exact calculation of the large-deviation function is only known for a few systems (see some examples in Refs. ). Another class of nonequilibrium systems, *active matter* , has attracted significant attention in recent years. The individual components of active matter independently consume energy from an internal source (in addition to the surrounding environment) and perform directed motion , thereby breaking time-reversal symmetry. Systems exhibiting active behavior include fish schools , flocking birds  and rods , light-activated colloids , bacteria , synthetic micro-swimmers , and motile cells . Several interesting observations have been made in different settings, for example clustering , absence of a well-defined mechanical pressure , motility-induced phase separation , and jamming . Research has focused on numerous quantitative features of active systems, e.g., transport properties in exclusion processes , position distributions with  and without resetting , survival probability , mean squared displacement and position correlation functions , spatio-temporal velocity correlation functions , arcsine laws , and the perimeter of the convex hull . Three predominant types of modeling are used to describe the motion of an individual active particle: (1) an active Brownian particle (ABP) , (2) a run-and-tumble particle (RTP) , and (3) an active Ornstein-Uhlenbeck particle (AOUP) . Recently, inspired by two bacterial species (*Myxococcus xanthus* and *Pseudomonas putida*), Santra et al. introduced a new scheme, a *direction-reversing active Brownian particle* (DRABP), to model bacterial motion . These different models differ in how they model self-propulsion. In this paper we study the simplest model, an AOUP, which nevertheless introduces several rich behaviors such as motility-induced phase separation , glassy dynamics , accumulation at walls , and has recently been used to understand distance from equilibrium  and time-reversal symmetry breaking  of active-matter systems. A central concern in nonequilibrium physics is heat conduction through a system of interest, connected to two heat baths at different temperatures. According to Fourier’s law, the local current is proportional to the local temperature gradient. Much research has studied the microscopic details of this picture in, for example, harmonic chains  and lattices , anharmonic chains  and lattices , disordered harmonic chains , a harmonic chain with alternating masses, elastically colliding unequal-mass particles , a free Brownian particle, and Brownian oscillators . We are unaware of any study of heat conduction in a system of interacting active particles. In this paper, we quantify the effect of activity on heat-transport properties (both average and fluctuations of heat flow) in a one-dimensional chain of *N* AOUPs connected by harmonic springs. In the steady state, we compute the long-time limit of the moment-generating function for heat flow using the formalism developed in Ref.. We use our framework to show explicit derivations for the moment-generating functions in the long-time limit for two different one-particle systems . We write analytical expressions for the first two cumulants of the heat flow (higher cumulants can be computed similarly). For a two-AOUP chain, we also compare the long-time analytical results with numerical simulations performed using Langevin dynamics. The paper is organized as follows. In Sec. [model], we present the model and discuss the steady-state joint distribution. In Sec. [FP-sec], we formally derive the distribution of heat flow in the long-time limit. In Sec. [sec:z-lm], we compute the characteristic function for heat flow in the long-time limit in the steady state. We apply our formalism to two different examples in Sec. [examp]. Using the characteristic function, we analytically compute cumulants for heat flow in Sec. [cum] and compare the analytical results for a chain of two particles with numerical simulations. Finally, we conclude in Sec. [summ]. Setup ===== Consider a harmonic chain composed of *N* active Ornstein-Uhlenbeck particles (AOUPs) in one dimension. Each particle is connected to its nearest neighbors with harmonic springs. Let *k**i* be the stiffness constant of a spring connecting particles *i* and *i* + 1. The left- and right-end particles, particles 1 and *N*, are connected to fixed locations with harmonic springs of stiffness $k\_{\rm L}=k\_0$ and $k\_{\rm R}=k\_N$, respectively. Particles 1 and *N* are, respectively, coupled with friction coefficients $\gamma\_{\rm L}$ and $\gamma\_{\rm R}$ to baths of temperature $T\_{\rm L}$ and $T\_{\rm R}$. Fig. [fig:scheme] shows a schematic of the system. [fig:scheme] The underdamped dynamics of this coupled system obey, in matrix form, $$\begin{aligned} \dot X(t)&=V(t)\label{eq1}\\ M\dot V(t)&=-\Phi X-\Gamma V(t)+F(t)+B(t)\label{eq2}\\ \dot F(t)&=-R^{-1}F(t)+\mathcal{Z}(t)\label{eq3} \,\end{aligned}$$ where the dot indicates a time derivative. In Eqs.  and, *X* ≡ (*x*1, *x*2, …, *x**N*)⊤, *V* ≡ (*v*1, *v*2, …, *v**N*)⊤, and *M* ≡ diag(*m*1, *m*2, …, *m**N*), where *x**i*, *v**i*, and *m**i*, respectively, are the position, velocity, and mass of the *i*th particle. The left and right ends of the chain are connected to heat baths (see Fig. [fig:scheme]), so the noise vector is $B\equiv\delta\_{i,1}\eta\_{\rm L}+\delta\_{i,N}\eta\_{\rm R}=(\eta\_{\rm L},0,\dots,0,\eta\_{\rm R})^\top$, and the friction matrix is $\Gamma\equiv\delta\_{i,j}(\delta\_{i,1}\gamma\_{\rm L}+\delta\_{i,N}\gamma\_{\rm R})$, where $\eta\_{\rm L,{\rm R}}(t)$ are Gaussian thermal white noises with mean zero and correlations $\langle\eta\_{\rm L}(t)\eta\_{\rm L}(t')\rangle=2\gamma\_{\rm L}T\_{\rm L}\delta(t-t')$, $\langle\eta\_{\rm R}(t)\eta\_{\rm R}(t')\rangle=2\gamma\_{\rm R}T\_{\rm R}\delta(t-t')$, and $\langle\eta\_{\rm L}(t)\eta\_{\rm R}(t')\rangle=0$. For convenience, throughout the paper we set Boltzmann’s constant to one. The nearest-neighbor coupling is reflected in the tridiagonal symmetric force matrix Φ with elements Φ*i*, *j* ≡ (*k**i* − 1 + *k**i*)*δ**i*, *j* − *k**i* − 1*δ**i*, *j* + 1 − *k**i**δ**i*, *j* − 1. The chain particles are driven by force vector $F{\color{black}\equiv}(f\_1,f\_2,\dots, f\_N)^\top$, with each *active force* *f**i* dynamically evolving according to the Ornstein-Uhlenbeck (OU) equation in Eq. , with *active-noise* vector $\mathcal{Z}(t){\color{black}\equiv}(\zeta\_1,\zeta\_2,\dots,\zeta\_N)^\top$, where each component *ζ**i*(*t*) is again a Gaussian white noise with mean zero and correlations $\langle\zeta\_i(t)\zeta\_j(t')\rangle=2D^{\rm a}\_{i}\delta\_{i,j}\delta(t-t')$. The active and thermal noises are uncorrelated to each other, i.e., ⟨*η**i*(*t*)*ζ**j*(*t*ʹ)⟩ = 0 for all *t*, *t*ʹ. In Eq. , $R {\color{black}\equiv} {\rm diag}(t^{\rm a}\_{1},t^{\rm a}\_{2},\dots,t^{\rm a}\_{N})$ is a diagonal matrix whose (*i*, *i*)-th element corresponds to the active relaxation time for the *i*th active force. The superscript ‘a’ indicates *active*. In the long-time stationary state, the mean of each active force *f**i* is zero, with correlation $$\begin{aligned} \langle f\_i(t)f\_i(t')\rangle=D^{\rm a}\_{i}t^{\rm a}\_{i}\exp\big\{-|t-t'|/t\_{i}^{\rm a}\big\} \delta\_{i,j}.\end{aligned}$$ Notice that in the limit $t^{\rm a}\_{i}\to 0$ and $D^{\rm a}\_{i}\to\infty$ such that $(t^{\rm a}\_{i})^2D^{\rm a}\_{i}$ approaches a finite constant 2A*i*, this active force is delta-correlated in time: ⟨*f**i*(*t*)*f**j*(*t*ʹ)⟩ = 2A*i* *δ**i*, *j**δ*(*t* − *t*ʹ). From Eqs. –, the dynamical state vector $U{\color{black}\equiv}(X,V,F)^\top$ of the full system is linear with Gaussian white noises. Therefore, at a long time, the distribution of *U* reaches a stationary state (SS) Gaussian distribution (see Appendix [ss-app]): $$\begin{aligned} P\_{\rm SS}(U)\equiv \dfrac{1}{\sqrt{(2\pi)^{3N}\det[\Sigma]}}\exp\bigg[-\dfrac{1}{2}U^\top \Sigma^{-1}U\bigg],\label{ss-eqn}\end{aligned}$$ for correlation matrix $$\begin{aligned} \Sigma\equiv \langle U U^\top\rangle=\dfrac{1}{\pi}\int\_{-\infty}^{+\infty}~d\omega~\bigg[\sum\_{j=1}^{N}\dfrac{D^{\rm a}\_{j} q\_jq\_j^\dagger}{\omega^2+(t^{\rm a}\_{j})^{-2}}\\+\gamma\_{\rm L}T\_{\rm L}\ell\_1\ell\_1^\dagger+\gamma\_{\rm R}T\_{\rm R}\ell\_N\ell\_N^\dagger\bigg]\nonumber,\end{aligned}$$ in which vectors *q**j* and ℓ*j*, respectively, are $$\begin{aligned} q\_j^\top&\equiv (G\_{1,j},G\_{2,j},\dots,G\_{N,j},i\omega\_nG\_{1,j},\label{eqs-7}\\&i\omega\_nG\_{2,j},\dots,i\omega\_nG\_{N,j},\delta\_{1,j},\dots,\delta\_{N,j})~~\text{for}~j=1,\dots, N,\notag\\ \ell\_j^\top&\equiv(G\_{1,j},G\_{2,j},\dots,G\_{N,j},i\omega\_nG\_{1,j},\label{eqs-8}\\&i\omega\_nG\_{2,j},\dots,i\omega\_nG\_{N,j}, \underbrace{0,0,\dots,0}\_{N})~~\text{for}~ j=1,N.\notag\end{aligned}$$ Notice that the symbol  †  refers to the combination of transpose and *ω* →  − *ω* operations on a matrix. In both vectors  and, the first *N* components, middle *N* components, and final *N* components, respectively, correspond to positions *x*, velocities *v*, and active forces *f*. *G**i*, *j*(*ω*) is the (*i*, *j*)-th matrix element of the symmetric Green’s function matrix $$\begin{aligned} G(\omega)\equiv[\Phi-\omega^2 M+i\omega \Gamma]^{-1}.\label{gr-fn-def}\end{aligned}$$ In this paper, we are interested in the fluctuations of heat flow from the left heat bath to the system  in a given time *τ* in the steady state $P\_{\rm SS}(U\_0)$ [see Eq. ]: $$\begin{aligned} Q\_{\rm L}\equiv \int\_0^\tau~dt~[\eta\_{\rm L}(t)-\gamma\_{\rm L} v\_1(t)]v\_1(t).\label{heat-eqn}\end{aligned}$$ In Sec. [sec:z-lm] we will show that the fluctuations of heat flow from the right heat bath can be computed using that of the left heat bath by applying suitable transformations. Note that the above integral has to be interpreted with the Stratonovich rule . $Q\_{\rm L}$ is not linear in the Gaussian state vector *U*, so we expect that its probability distribution $P(Q\_{\rm L},\tau)$ is not generally Gaussian. In the following, we give a formal derivation of $P(Q\_{\rm L},\tau)$ using the Fokker-Planck equation. Formal solution of the Fokker-Planck equation to derive $P(Q\_{\rm L},\tau)$ ============================================================================ To obtain the distribution of $Q\_{\rm L}$, it is convenient to first compute the conditional characteristic function (also known as the conditional moment-generating function (CMGF)): $$\begin{aligned} Z(\lambda,U,\tau|U\_0)\equiv \int\_{-\infty}^{+\infty}~dQ\_{\rm L}~ e^{-\lambda Q\_{\rm L}}~\rho(Q\_{\rm L},U,\tau|U\_0), \label{rhs-2}\end{aligned}$$ where $\rho(Q\_{\rm L},U,\tau|U\_0)$ is the conditional joint distribution. We write the right-hand side as $$\begin{aligned} Z(\lambda,U,\tau|U\_0)= \bigg\langle e^{-\lambda Q\_{\rm L}} \delta[U-U(\tau)]\bigg\rangle\_{U\_0},\label{eq-z}\end{aligned}$$ where the angular brackets indicate averaging over all trajectories emanating from a fixed initial state vector *U*0. Note that setting the conjugate variable *λ* to zero in either Eq.  or gives the distribution *P*(*U*, *τ*∣*U*0) of state vector *U* at time *τ* starting from a fixed initial vector *U*0. *Z*(*λ*, *U*, *τ*∣*U*0) obeys the Fokker-Planck equation : $$\begin{aligned} \dfrac{\partial Z(\lambda,U,\tau|U\_0)}{\partial \tau}=\mathcal{L}\_\lambda Z(\lambda,U,\tau|U\_0),\label{fp-eqn-1}\end{aligned}$$ where L*λ* is the Fokker-Planck operator [see Eq. ] . Since this differential equation is linear, the formal solution can be written as a linear combination of left- and right-eigenfunctions. In the long-time limit, the solution is dominated by the term corresponding to the largest eigenvalue *μ*(*λ*) of L*λ*, giving $$\begin{aligned} Z(\lambda,U,\tau|U\_0)\approx e^{\tau \mu(\lambda)} \chi(U\_0,\lambda)\Psi(U,\lambda),\label{evec-form} \end{aligned}$$ where Ψ(*U*, *λ*) is the corresponding right eigenfunction such that L*λ*Ψ(*U*, *λ*) = *μ*(*λ*)Ψ(*U*, *λ*), and *χ*(*U*0, *λ*) is the projection of the initial state *U*0 onto the left eigenvector corresponding to *μ*(*λ*). Further note that the left- and right-eigenfunctions satisfy the normalization condition ∫*d**U* *χ*(*U*, *λ*)Ψ(*U*, *λ*) = 1. Integrating the CMGF over both the steady-state distribution $P\_{\rm SS}(U\_0)$ of the initial state vector *U*0  and the final state vector *U* gives the characteristic function (moment-generating function): $$\begin{aligned} Z(\lambda,\tau)\approx g(\lambda)e^{\tau \mu(\lambda)}, \label{f-z-l}\end{aligned}$$ for prefactor $$\begin{aligned} g(\lambda)\equiv \int dU\_0~\int~dU~P\_{\rm SS}(U\_0) \chi(U\_0,\lambda) \Psi(U,\lambda).\label{int-g-l}\end{aligned}$$ Inverting *Z*(*λ*, *τ*) using the inverse Fourier transform gives the distribution function: $$\begin{aligned} P(Q\_{\rm L},\tau)&{\color{black}=} \dfrac{1}{2\pi i}\int\_{-i\infty}^{+i\infty}~d\lambda~e^{\lambda Q\_{\rm L}} Z(\lambda,\tau)\label{pdf-pq}\\&\approx\dfrac{1}{2\pi i}\int\_{-i\infty}^{+i\infty}~d\lambda~g(\lambda)e^{\tau[ \mu(\lambda)+\lambda \mathcal{Q}]},\notag\end{aligned}$$ where $\mathcal{Q} \equiv Q\_{\rm L}/\tau$ is the time-averaged heat rate entering the system from the chain’s left end. The integral is performed along the vertical contour passing through the origin of the complex-*λ* plane. When both *g*(*λ*) and *μ*(*λ*) are analytic functions of *λ*, the integral can be approximated (in the large-*τ* limit) using the saddle-point method , giving the large-deviation form of the distribution $$\begin{aligned} P(Q\_{\rm L}=\mathcal{Q} \tau,\tau)\asymp e^{\tau \mathcal{I}(\mathcal{Q})}, \end{aligned}$$ where I(Q) ≡ *μ*(*λ*\*) + Q*λ*\* is the large-deviation function , and *λ*\* is the saddle point, a solution of $$\begin{aligned} \dfrac{\partial \mu(\lambda)}{\partial \lambda}\bigg|\_{\lambda=\lambda^\*(\mathcal{Q})} = -\mathcal{Q}. \end{aligned}$$ However, when *g*(*λ*) has singularities in the region *λ* ∈ [0, *λ*\*], special care is needed . Computation of *μ*(*λ*) and *g*(*λ*) using the Fokker-Planck equation is rather difficult. In Sec. [sec:z-lm], we compute the characteristic function *Z*(*λ*, *τ*) using a method developed in and previously used to compute distributions of quantities such as partial and apparent entropy productions , work fluctuations , heat transport in lattices , and heat and work fluctuations for a Brownian oscillator .        [fig:mu-lam] Computing the characteristic function *Z*(*λ*, *τ*) =================================================== In this section, we derive the largest eigenvalue *μ*(*λ*) and the prefactor *g*(*λ*) appearing in the characteristic function *Z*(*λ*, *τ*) [see Eq. ] for heat $Q\_{\rm L}$ flowing through the left end of the *N*-particle system in the steady state. We first introduce the finite-time Fourier transform and its inverse , $$\begin{aligned} \tilde A(\omega\_n)&\equiv \dfrac{1}{\tau}\int\_0^\tau~dt~A(t)~e^{-i\omega\_n t}\label{FT-1}\\ A(t)&\equiv \sum\_{n=-\infty}^{+\infty}\tilde A(\omega\_n)~e^{i\omega\_n t}\label{FT-2},\end{aligned}$$ where $\omega\_n{\color{black}=} 2\pi n/\tau$ for integer *n*. We replace $\eta\_{\rm L}(t)$ and *v*1(*t*) in the right-hand side (RHS) of Eq.  with their finite-time inverse Fourier transform representations , and then integrate over time, obtaining the Fourier decomposition for the left heat flow: $$\begin{aligned} Q\_{\rm L}= \dfrac{\tau}{2}\sum\_{n=-\infty}^{+\infty} \big[\tilde \eta\_{\rm L}(\omega\_n)\tilde v\_1(-\omega\_n)+\tilde \eta\_{\rm L}(-\omega\_n)\tilde v\_1(\omega\_n)\label{FT-Q}\\-2\gamma\_{\rm L} \tilde v\_1(\omega\_n)\tilde v\_1(-\omega\_n)\big]\notag. \end{aligned}$$ We substitute the above expression of $Q\_{\rm L}$ in the conditional characteristic function, *Z*(*λ*, *U*, *τ*∣*U*0), given in Eq. , and compute the average (see Appendix [dd-eq] for detailed calculations), which eventually leads to (in the long-time limit) $$\begin{aligned} Z(\lambda,U,\tau|U\_0) &\approx \dfrac{e^{\tau \mu(\lambda)}e^{-\frac{1}{2} U^\top L\_1 U}e^{-\frac{1}{2} U\_0^\top L\_2 U\_0}}{\sqrt{(2\pi)^{3N}\det H\_1(\lambda)}},\label{z-u-u0}\\ \mu(\lambda)&\equiv-\dfrac{1}{4\pi} \int\_{-\infty}^{+\infty}~d\omega~\ln [\det(\Lambda \Omega)],\label{mu-lam}\\ L\_1(\lambda) &\equiv H\_1^{-1}+H\_1^{-1} H\_2^\top, \\ L\_2(\lambda) &\equiv -H\_1^{-1}H\_2^\top. \label{l1-l2}\end{aligned}$$ Here *μ*(*λ*) is the largest eigenvalue of the Fokker-Planck operator L*λ*, where in the integrand $\Lambda\equiv \frac{2}{\tau}~\text{diag}(D^{\rm a}\_{1},D^{\rm a}\_{2},\dots,D^{\rm a}\_{N},\gamma\_{\rm L} T\_{\rm L},\gamma\_{\rm R}T\_{\rm R})$ is the noise correlation matrix appearing in the noise distributions in Eqs.  and, and Ω ≡ Λ− 1 + *λ**τ**C* (see Eq.  for *C*) [1](#fn1). The matrices *H*1(*λ*), *H*2(*λ*), and *H*3(*λ*), respectively, are defined in Eqs. ,, and. Computation of the determinant in the integrand on the RHS of Eq.  for arbitrary *N* appears to us a difficult task. Nonetheless, for *N* = 1 and 2 (for $k\equiv k\_1=k\_{\rm L}=k\_{\rm R}$) one can show that $$\begin{aligned} \mu(\lambda)=-\dfrac{1}{4\pi}\int\_{-\infty}^{+\infty}~d\omega~\ln\bigg[1+4\lambda(\Delta \beta-\lambda)\omega^2 \gamma\_{\rm L} \gamma\_{\rm R} T\_{\rm L}T\_{\rm R} |G\_{1,N}|^2 -4\lambda(1+T\_{\rm L}\lambda)\gamma\_{\rm L}\sum\_{\ell=1}^{N}\dfrac{D^{\rm a}\_{\ell}|G\_{1,\ell}|^2}{1+(\omega t^{\rm a}\_{\ell})^{-2}}\bigg],\label{full-mu}\end{aligned}$$ for $\Delta\beta \equiv T\_{\rm R}^{-1}-T\_{\rm L}^{-1}$. Further, Fig. [fig:mu-lam] shows the indistinguishability of and for *N* = 5 and 10. Thus, we hypothesize that is valid for any *N*. When the particles composing the chain have no activity ($D^{\rm a}\_{\ell}\to 0$), we recover the same *μ*(*λ*) shown in Ref.  for a harmonic chain of passive particles. Further, in the limit $t^{\rm a}\_{\ell}\to 0$ and $D^{\rm a}\_{\ell} \to \infty$ such that $(t^{\rm a}\_{\ell})^2 D^{\rm a}\_{\ell} \to 2\mathcal{A}\_{\ell}$, the OU force *f**i* is delta-correlated in time, and Eq.  becomes $$\begin{aligned} \mu(\lambda)&=-\dfrac{1}{4\pi}\int\_{-\infty}^{+\infty}~d\omega~\ln\bigg[1+4\lambda(\Delta \beta-\lambda)\omega^2 \gamma\_{\rm L} \gamma\_{\rm R} T\_{\rm L}T\_{\rm R} |G\_{1,N}|^2 -8\lambda(1+T\_{\rm L}\lambda)\gamma\_{\rm L} \omega^2\sum\_{\ell=1}^{N} \mathcal{A}\_{\ell}|G\_{1,\ell}|^2\bigg],\label{full-mu-2}\end{aligned}$$ where the third term can be understood as the contribution coming from the Gaussian white noise with variance 2Aℓ acting on the ℓth particle in the chain. In what follows, unless specified, we maintain the general case where $t^{\rm a}\_{\ell}$ is positive and $D^{\rm a}\_{\ell}$ is finite. When the variable *λ* conjugate to the heat $Q\_{\rm L}$ is set to zero in *Z*(*λ*, *U*, *τ*∣*U*0) in Eq. , both *μ*(*λ*) and *H*2(*λ*) vanish (see Eqs.  and ). This gives the distribution for *U* at time *τ* starting from *U*0, which in the large-*τ* limit approaches the (unique) steady state, $$\begin{aligned} P\_{\rm SS}(U)\equiv Z(0,U,\tau{\color{black}\to \infty}|U\_0)=\dfrac{ e^{-\frac{1}{2} U^\top H^{-1}\_1(0) U}}{\sqrt{(2\pi)^{3N}\det H\_1(0)}}.\label{ss-eqn-2}\end{aligned}$$ *H*1(0) can be obtained from Eq.  and shown equal to *M*, thereby recovering Eq. . Finally, we obtain the characteristic function $Z(\lambda,\tau)\equiv \langle e^{-\lambda Q\_{\rm L}} \rangle$ by integrating *Z*(*λ*, *U*, *τ*∣*U*0) [see Eq. ] over the steady-state distribution $P\_{\rm SS}(U\_0)$ [see Eq. ] of the initial state vector *U*0 and the final state vector *U*, thereby identifying the prefactor $$\begin{aligned} g(\lambda)= \big(\det[H\_1(\lambda)L\_1(\lambda)]\det[I+H\_1(0)L\_2(\lambda)]\big)^{-1/2},\label{g-lamb}\end{aligned}$$ where *I* is the identity matrix. In this section, we calculated the characteristic function for the left heat flow $Q\_{\rm L}(\tau)$ [Eq. ]. The characteristic function for the heat flow $$\begin{aligned} Q\_{\rm R}(\tau)\equiv \int\_0^\tau~dt~[\eta\_{\rm R}(t)-\gamma\_{\rm R} v\_N(t)]v\_N(t),\label{heat-eqn-R}\end{aligned}$$ from the right heat bath can be simply obtained from the characteristic function *Z*(*λ*, *τ*) for the left heat flow. This can be done by making the transformations: $\gamma\_{\rm R}\longleftrightarrow \gamma\_{\rm L},~T\_{\rm R}\longleftrightarrow T\_{\rm L}$ and relabelling the mass of particles, spring constants, strength of the active forces, and active-forces relaxation time, respectively, as $$\begin{aligned} &{(m\_1,m\_2,\dots,m\_N)}\longrightarrow{(m\_N,m\_{N-1},\dots,m\_1)},\\ &{(k\_{\rm L},k\_1,\dots,k\_{N-1},k\_{\rm R})}\longrightarrow{(k\_{\rm R},k\_{N-1},\dots,k\_{1},k\_{\rm L})},\\ &{(D\_1^{\rm a},D\_2^{\rm a},\dots,D\_N^{\rm a})}\longrightarrow{(D\_N^{\rm a},D\_{N-1}^{\rm a},\dots,D\_1^{\rm a})},\\ &{(t\_{1}^{\rm a},t\_{2}^{\rm a},\dots,t\_{N}^{\rm a})}\longrightarrow{(t\_{N}^{\rm a},t\_{N-1}^{\rm a},\dots,t\_{1}^{\rm a})}.\end{aligned}$$ In this way, $Q\_{\rm L}$ exactly maps onto $Q\_{\rm R}$. Applying these transformations to *Z*(*λ*, *τ*) gives the characteristic function $Z\_{\rm R}(\lambda,\tau)\approx g\_{\rm R}(\lambda)e^{\tau \mu\_{\rm R}(\lambda)}$ for the heat flow from the right heat bath[2](#fn2), ultimately giving $$\begin{aligned} \mu\_{\rm R}(\lambda)=-\dfrac{1}{4\pi}\int\_{-\infty}^{+\infty}~d\omega~\ln\bigg[1-4\lambda(\Delta \beta+\lambda)\omega^2 \gamma\_{\rm L} \gamma\_{\rm R} T\_{\rm L}T\_{\rm R} |G\_{1,N}|^2 -4\lambda(1+T\_{\rm R}\lambda)\gamma\_{\rm R} \sum\_{\ell=1}^{N}\dfrac{D^{\rm a}\_{\ell}|G\_{N,\ell}|^2}{1+(\omega t^{\rm a}\_{\ell})^{-2}}\bigg].\label{full-mu-R}\end{aligned}$$ Examples ======== So far, we have described how to compute the characteristic function *Z*(*λ*, *τ*) ≈ *g*(*λ*)*e**τ**μ*(*λ*) for our *N*-particle system. As discussed above, its inversion gives the full probability distribution $P(Q\_{\rm L},\tau)$ of heat fluctuations. In this section, we consider two simple illustrative examples demonstrating our method to exactly compute both *μ*(*λ*) and *g*(*λ*) at steady state. Heat fluctuation for a harmonic oscillator ------------------------------------------ First, we specialize our general model to a single particle without OU driving force (i.e., a passive particle). This recovers the model previously used in Ref.  to study the heat and work fluctuations for a Brownian oscillator. This particle evolves according to  $$\begin{aligned} \dot x(t) &= v(t),\\ m\dot v(t) &= kx(t)-(\gamma\_{\rm L}+\gamma\_{\rm R})v(t)+\eta\_{\rm L}(t)+\eta\_{\rm R}(t),\end{aligned}$$ for particle position *x*, velocity *v*, mass *m*, and trap stiffness *k*. In this case, we aim to compute the characteristic function *Z*(*λ*, *τ*) ∼ *g*(*λ*)*e**τ**μ*(*λ*) for heat flow $Q\_{\rm L}$ into the system from the left heat bath . Let us first compute *μ*(*λ*) (see the general expression  for *N* particles) in which Ω = Λ− 1 + *λ**τ**C*. Here the noise correlation matrix governing the noise distributions ([n-d-1], [n-d-0]) is $\Lambda=\frac{2}{\tau}\,\text{diag}(\gamma\_{\rm L} T\_{\rm L},\gamma\_{\rm R} T\_{\rm R})$. Upon identifying Φ = *k*, *M* = *m*, and $\Gamma=\gamma\_{\rm L}+\gamma\_{\rm R}$ (see Sec. [model]), the Green’s function  becomes $$\begin{aligned} G=[k-m\omega^2+i\omega(\gamma\_{\rm L}+\gamma\_{\rm R})]^{-1}.\label{g-fun}\end{aligned}$$ In this case, the matrix *C* appearing in the Fourier decomposition of the heat flow $Q\_{\rm L}$ (see Eq. ) can be deduced from for the special case of *N* = 1: $$\begin{aligned} C= \begin{pmatrix} \overbrace{i\omega(G-G^\*)-2\gamma\_{\rm L} \omega^2|G|^2}^{2\gamma\_{\rm R} \omega^2|G|^2}&&-i\omega G^\*-2\gamma\_{\rm L} \omega^2|G|^2\\ i\omega G-2\gamma\_{\rm L} \omega^2|G|^2&&-2\gamma\_{\rm L} \omega^2|G|^2 \end{pmatrix},\label{cs}\end{aligned}$$ where ∣*G*∣2 ≡ *G**G*\*, and the first diagonal element of the matrix is re-written using a relation derived from Eq. , $$\begin{aligned} G-G^\*=-2i\omega (\gamma\_{\rm L}+\gamma\_{\rm R}) |G|^2 \.\end{aligned}$$ Thus, *μ*(*λ*) given in Eq.  becomes $$\begin{aligned} \mu(\lambda)&=-\dfrac{1}{4\pi}\int\_{-\infty}^{+\infty}~d\omega~\ln [1+4\omega^2 |G|^2\gamma\_{\rm L}\gamma\_{\rm R}T\_{\rm L}T\_{\rm R}\lambda(\Delta\beta-\lambda)],\\ &=\dfrac{\gamma\_{\rm L}+\gamma\_{\rm R}}{2m}[1-\nu(\lambda)],\label{mu-sanjib} \end{aligned}$$ where again $\Delta\beta\equiv T\_{\rm R}^{-1}-T\_{\rm L}^{-1}$, and $$\begin{aligned} \nu(\lambda)&\equiv\sqrt{1+4\frac{\gamma\_{\rm L}\gamma\_{\rm R}}{(\gamma\_{\rm L}+\gamma\_{\rm R})^2}T\_{\rm L}T\_{\rm R}\lambda(\Delta \beta-\lambda)}.\label{nu-lam}\end{aligned}$$ To compute the prefactor *g*(*λ*), we compute the three matrices *H*1(*λ*), *H*2(*λ*), and *H*3(*λ*) appearing in Eq.  in exponents of the CMGF, *Z*(*λ*, *τ*, *U*∣*U*0). With the identification of $$\begin{aligned} K\_1&=(\ell\_1^\top,\ell\_2^\top)^\top,\\ K\_2^\dagger&=(\ell\_1^\*,\ell\_2^\*),\\ \ell\_1&=\ell\_2, \\ a\_1^\top&=[(1+2i\gamma\_{\rm L} \omega G^\*)\mathcal{R},2i\gamma\_{\rm L}\omega G^\*\mathcal{R}], \\ a\_2&=[(1-2i\gamma\_{\rm L}\omega G)\mathcal{R}^\dagger,-2i\gamma\_{\rm L}\omega G\mathcal{R}^\dagger]^\top,\\ \mathcal{R}&= G(k,-im\omega)^\top,\end{aligned}$$ *H*1(*λ*) given in Eq.  becomes $$\begin{aligned} H\_1(\lambda)&=\dfrac{\tau}{2\pi}\int\_{-\infty}^{+\infty}~d\omega~\dfrac{\Omega\_{22}-\Omega\_{21}-\Omega\_{12}+\Omega\_{11}}{\det[\Omega]}\ell\_1^\*\ell\_1^\top\label{h1-1-eqn}\\ &=\dfrac{\gamma\_{\rm L}T\_{\rm L}+\gamma\_{\rm R}T\_{\rm R}}{\pi}\int\_{-\infty}^{+\infty}~d\omega~\times\label{h1-2-eqn}\\&\dfrac{|G|^2}{1+4\omega^2 |G|^2\gamma\_{\rm L}\gamma\_{\rm R}T\_{\rm L}T\_{\rm R}\lambda(\Delta\beta-\lambda)}\begin{pmatrix} 1&&i\omega\\ -i\omega&&\omega^2 \end{pmatrix}.\notag\end{aligned}$$ In Eq. , Ω*i**j* is the (*i*, *j*)-th matrix element of Ω. The second line is obtained using the relations $\Omega\_{22}-\Omega\_{21}-\Omega\_{12}+\Omega\_{11}=\frac{\tau}{2}[(\gamma\_{\rm L}T\_{\rm L})^{-1}+(\gamma\_{\rm R}T\_{\rm R})^{-1}]$ and $\det \Omega=\frac{\tau^2}{4 \gamma\_{\rm L}\gamma\_{\rm R}T\_{\rm L}T\_{\rm R}}[1+4 \gamma\_{\rm L}\gamma\_{\rm R}T\_{\rm L}T\_{\rm R} \omega^2 |G|^2\lambda(\Delta \beta-\lambda)]$. The integrals of the off-diagonal elements vanish because the integrands are odd. Integrating the diagonal elements gives $$\begin{aligned} H\_1(\lambda)=\dfrac{\gamma\_{\rm L}T\_{\rm L}+\gamma\_{\rm R}T\_{\rm R}}{(\gamma\_{\rm L}+\gamma\_{\rm R})\nu(\lambda)} \begin{pmatrix} k^{-1}&&0\\ 0&&m^{-1} \end{pmatrix}.\end{aligned}$$ Similarly, *H*2(*λ*) (see Eq. ) becomes $$\begin{aligned} H\_2(\lambda) &=\dfrac{\lambda}{\pi}\int\_{-\infty}^{+\infty}d\omega~e^{-i\omega \epsilon}~\times\\&\dfrac{\gamma\_{\rm L}T\_{\rm L}(1+2i\gamma\_{\rm L}\omega G^\*)+2i\omega G^\*\gamma\_{\rm R}T\_{\rm R}(\gamma\_{\rm L}+\lambda \gamma\_{\rm L}T\_{\rm L})}{1+4\omega^2 |G|^2\gamma\_{\rm L}\gamma\_{\rm R}T\_{\rm L}T\_{\rm R}\lambda(\Delta\beta-\lambda)}\times\nonumber\\&\begin{pmatrix}k&&ik\omega\\-i\omega m&&m\omega^2\end{pmatrix}\nonumber\\ &=\frac{\lambda \gamma\_{\rm L}T\_{\rm L}-\frac{1}{2} (\gamma\_{\rm L}+\gamma\_{\rm R}) [\nu(\lambda)-1]}{(\gamma\_{\rm L}+\gamma\_{\rm R})\nu(\lambda )}\begin{pmatrix}1&&0\\0&&1\end{pmatrix},\end{aligned}$$ and *H*3(*λ*) (see Eq. ) becomes $$\begin{aligned} H\_3(\lambda) &=\dfrac{\lambda}{2\pi}\int\_{-\infty}^{+\infty}d\omega~\dfrac{\mathcal{R}\mathcal{R}^\dagger}{\det(\Omega)}\times\\&\bigg[\lambda\tau^2\bigg(\dfrac{1}{2\gamma\_{\rm R}T\_{\rm R}}-2\gamma\_{\rm L} \omega^2 |G|^2(\Delta \beta-\lambda)\bigg)+2\gamma\_{\rm L} \det[\Omega]\bigg]\nonumber\\ &=\dfrac{\lambda(\gamma\_{\rm L}+\lambda \gamma\_{\rm L}T\_{\rm L})}{\pi}\int\_{-\infty}^{+\infty}d\omega~\times\\&\dfrac{|G|^2}{1+4\omega^2 |G|^2\gamma\_{\rm L}\gamma\_{\rm R}T\_{\rm L}T\_{\rm R}\lambda(\Delta\beta-\lambda)}\begin{pmatrix}k^2&&i\omega mk\\-i\omega mk&&m^2\omega^2\end{pmatrix}\nonumber\\ &=\dfrac{\lambda(\gamma\_{\rm L}+\lambda \gamma\_{\rm L}T\_{\rm L})}{(\gamma\_{\rm L}+\gamma\_{\rm R}) \nu(\lambda)}\begin{pmatrix}k&&0\\0&&m\end{pmatrix}.\end{aligned}$$ One can check that these matrices satisfy the condition *H*3(*λ*) = [*I* + *H*2(*λ*)]*H*1− 1(*λ*)*H*2⊤(*λ*), ensuring the factorization of the CMGF into the product of factors that respectively capture the entire dependence on *U*0 and *U* (see Eqs.  and ). Substituting *H*1(*λ*) and *H*2(*λ*) in *L*1(*λ*) and *L*2(*λ*) given in Eq.  and Eq.  gives $$\begin{aligned} g(\lambda)=\dfrac{4\nu(\lambda)}{[1+\nu(\lambda)]^2-[2\lambda \gamma\_{\rm L}T\_{\rm L}(\gamma\_{\rm L}+\gamma\_{\rm R})^{-1}]^2}.\label{g-lam-sanjib}\end{aligned}$$ Finally, we write the characteristic function *Z*(*λ*, *τ*) ≈ *g*(*λ*)*e**τ**μ*(*λ*) (see Eq. ) using Eqs.  and. Using the inverse transform, one can find the distribution of $P(Q\_{\rm L},\tau)$ as discussed in Ref. . Work fluctuations for a Brownian particle driven by a correlated external random force -------------------------------------------------------------------------------------- Here we specialize our general model to *N* = 1, $T\_{\rm L} =T\_{\rm R} =T$, and *k**i* = 0. The equations of motion for the particle read  $$\begin{aligned} \dot v(t)&=-\dfrac{1}{t\_\gamma} v(t)+\dfrac{1}{m}f(t)+\dfrac{1}{m}\eta(t), \label{underdamped}\\ \dot f(t)&=-\dfrac{1}{t^{\rm a}}f(t)+\zeta(t),\end{aligned}$$ where *t**γ* ≡ *m*/*γ* is the characteristic relaxation timescale of a particle’s velocity, *η*(*t*) is Gaussian thermal white noise of mean zero and correlation ⟨*η*(*t*)*η*(*t*ʹ)⟩ = 2*γ**T**δ*(*t* − *t*ʹ), and *f*(*t*) is again an active OU force with mean zero and correlation $\langle f(t)f(t')\rangle=D^{\rm a}t^{\rm a}e^{-|t-t'|/t^{\rm a}}$ (see Sec. [model]). We use our extended framework to obtain the characteristic function for work done on the Brownian particle and show its consistency with previous calculations on this model by Pal and Sabhapandit . The work due to external forcing is  (using the Stratonovich rule ) $$\begin{aligned} W\equiv \int\_0^\tau~dt~f(t)v(t).\end{aligned}$$ Multiplying Eq.  by *v* and integrating from 0 to *τ*, the first law of thermodynamics reads $$\begin{aligned} \dfrac{m}{2}(v\_\tau^2-v\_0^2)&=\int\_0^\tau~dt~f(t) v(t)+\int\_0^\tau~dt~[\eta(t)-\gamma v(t)] v(t)\\ \Delta \mathcal{U}&=W+Q,\label{f-law-2}\end{aligned}$$ where the LHS, and the second term on the RHS, respectively, are the change in the internal energy and the heat flow from the bath to the system. Following Ref. , we define dimensionless work W ≡ *β**W*, where *β* ≡ *T*− 1 is the inverse temperature: the work is measured in units of thermal energy $k\_{\rm B}T$ (Boltzmann’s constant $k\_{\rm B}$ is one). With a suitable mapping, we compute the distribution of the dimensionless work W. The CMGF (see Eq. ) for W can be written as $$\begin{aligned} Z\_{\mathcal{W}}(\lambda,U,\tau|U\_0)&\equiv\bigg\langle e^{-\lambda \mathcal{W}} \delta[U-U(\tau)]\bigg\rangle\_{U\_0}\label{f-line}\\ &=e^{-\beta m\lambda (v\_\tau^2-v\_0^2)/2}\bigg\langle e^{\lambda \beta Q} \delta[U-U(\tau)]\bigg\rangle\_{U\_0}\label{s-line}\\ &=e^{-\beta m\lambda (v\_\tau^2-v\_0^2)/2}~Z(-\beta \lambda,U,\tau|U\_0)\\ &\approx \dfrac{e^{\tau \mu(-\beta\lambda)}e^{-\frac{\beta m\lambda}{2} (v\_\tau^2-v\_0^2)}}{\sqrt{(2\pi)^{2}\det H\_1(-\beta\lambda)}}\label{zw}\times\\& ~~~~~~~~e^{-\frac{1}{2} U^\top L\_1(-\beta\lambda) U}e^{-\frac{1}{2} U\_0^\top L\_2(-\beta\lambda) U\_0}.\nonumber\end{aligned}$$ The second line follows from the first law of thermodynamics . Further, we write the boundary contributions in the exponent in Eq.  in the matrix form: $$\begin{aligned} -\dfrac{\beta m\lambda}{2}(v\_\tau^2-v\_0^2){\color{black}=} \dfrac{1}{2}U^\top L\_0(-\beta\lambda)U-\dfrac{1}{2}U\_0^\top L\_0(-\beta\lambda)U\_0,\label{eq-zw-2} \end{aligned}$$ where [*L*0( − *β**λ*)]*i*, *j* ≡  − *β**m**λ**δ**i*, 1*δ**j*, 1, for 1 ≤ *i*, *j* ≤ 2. Substituting this in Eq.  yields $$\begin{aligned} Z\_\mathcal{W}(\lambda,U,\tau|U\_0)&\approx \dfrac{e^{\tau \mu(-\beta\lambda)}e^{-\frac{1}{2} U^\top L\_3(-\beta\lambda) U}e^{-\frac{1}{2} U\_0^\top L\_4(-\beta\lambda) U\_0}}{\sqrt{(2\pi)^{2}\det H\_1\big(-\beta\lambda\big)}},\label{zw-2}\end{aligned}$$ where we have identified the modified exponents relating the exponents obtained from the CMGF of the (dimensionless) heat dissipated ( − *β**Q*) to the bath from the system, to that of the work on the particle by the external force: *L*3( − *β**λ*) ≡ *L*1( − *β**λ*) − *L*0( − *β**λ*) and *L*4( − *β**λ*) ≡ *L*2( − *β**λ*) + *L*0( − *β**λ*). We emphasize that the matrix *H*1( − *β**λ*) corresponding to the work W remains the same as that of the heat dissipated to the bath  − *β**Q*. Integrating over the final state vector *U* and the initial state vector *U*0 with respect to the initial steady state distribution $P\_{\rm SS}(U\_0)$, gives the characteristic function (see Sec. [FP-sec]), $$\begin{aligned} &Z\_\mathcal{W}(\lambda,\tau)\approx g\_\mathcal{W}(\lambda)e^{\tau\mu\_\mathcal{W}(\lambda)},\label{zw-eqn-5} \\ &\mu\_\mathcal{W}(\lambda)\equiv \mu(-\beta\lambda),\label{mu-w-eqn}\\ &g\_\mathcal{W}(\lambda)\equiv \big(\det[H\_1(-\beta\lambda)L\_3(-\beta\lambda)]\big)^{-1/2}\times\label{gw-lam}\\&~~~~~~~~~~~~\big(\det[I+H\_1(0)L\_4(-\beta\lambda)]\big)^{-1/2}.\notag\end{aligned}$$ Let us now compute *μ*W(*λ*) and *g*W(*λ*). In this example, there is no harmonic confinement (*k* = 0), so Φ = 0, Γ = *γ*, and *M* = *m* (see Sec. [model]). Thus, the Green’s function  becomes *G* = [*i**ω**γ* − *m**ω*2]− 1. The diagonal matrix Λ in the noise distributions in Eqs.  and is $\Lambda=\tfrac{2}{\tau}~\text{diag}(D^{\rm a},D)$. In the integrand of *μ*(*λ*) defined in Eq. , Ω = Λ− 1 + *λ**τ**C*, where the Hermitian matrix *C* can be obtained from Eq.  for one particle: $$\begin{aligned} C=\begin{pmatrix} \dfrac{-2\gamma|G|^2}{1+(\omega^2t^{\rm a})^{-2}}&&\dfrac{G^\*}{1+(i\omega t^{\rm a})^{-1}}\\ &&\\ \dfrac{-G}{-1+(i\omega t^{\rm a})^{-1}}&&0 \end{pmatrix}.\end{aligned}$$ Substituting Λ and Ω in *μ*(*λ*) in Eq.  and making the transformation *λ* →  − *β**λ* gives $$\begin{aligned} \mu\_\mathcal{W}(\lambda) &=-\dfrac{1}{4\pi}\int\_{-\infty}^{+\infty}~d\omega~\ln\bigg[1+\dfrac{4\lambda(1-\lambda)\theta}{(\delta^2t\_\gamma^2\omega^2+1)(t\_\gamma^2\omega^2+1)}\bigg]\\ &=\dfrac{1}{2t\_\gamma}[1-\bar\nu(\lambda)],\label{mu-w-fin}\end{aligned}$$ for $$\begin{aligned} \bar\nu(\lambda)&{\color{black}\equiv}\dfrac{1}{\delta}\left[\sqrt{1+\delta^2+2\delta\nu(\lambda)}-1\right]\\ \nu(\lambda)&{\color{black}\equiv}\sqrt{1+4\theta\lambda(1-\lambda)}.\end{aligned}$$ Following , we introduced two dimensionless parameters, the relative strength $\theta\equiv (t^{\rm a})^2 D^{\rm a}/(\gamma T)$ of the external force with respect to thermal fluctuations, and the ratio $\delta\equiv t^{\rm a}/t\_\gamma$ of the relaxation time of the external forcing and the relaxation time *t**γ* of the particle’s velocity. To compute *g*W(*λ*), we first compute matrices *H*1(*λ*), *H*2(*λ*), and *H*3(*λ*) appearing in exponents of the CMGF *Z*(*λ*, *τ*, *U*∣*U*0) in Eq. , and then make the transformation *λ* →  − *β**λ* (see Eq. ). In order to proceed further, we identify the following vectors which are helpful in computation of these matrices: $$\begin{aligned} K\_1 &= [(i\omega+1/t^{\rm a})^{-1}q\_1^\top,\ell\_1^\top]^\top{\color{black}\,}\\ K\_2^\dagger &= [(-i\omega+1/t^{\rm a})^{-1}q\_1^\*,\ell\_1^\*]{\color{black}\,}\\ \ell\_1^\top &= (i\omega G,0),\\ a\_1^\top &= \left[\frac{2\gamma G^\*{\mathcal{R}}}{-1+(i\omega t^{\rm a})^{-1}},(1+2i\gamma\omega G^\*){\mathcal{R}}\right]{\color{black}\,}\\ \mathcal{R} &= [-i\omega mG,\dfrac{- G}{1+(i\omega t^{\rm a})^{-1}}]^\top,\\ a\_2 &= \left[\frac{-2\gamma G\mathcal{R}^\dagger}{1+(i\omega t^{\rm a})^{-1}},(1-2i\gamma\omega G)\mathcal{R}^\dagger\right]{\color{black}\,}\\ q\_1^\top &= (i\omega G,1)\.\end{aligned}$$ Therefore, the matrices *H*1(*λ*), *H*2(*λ*), *H*3(*λ*), respectively given in Eqs. ,, and, can be simplified as $$\begin{aligned} H\_1(\lambda)&=\dfrac{\tau}{2\pi}\int\_{-\infty}^{+\infty}~\dfrac{d\omega}{\det[\Omega]}~\bigg[ \dfrac{\Omega\_{22}q\_1^\*q\_1^\top}{\omega^2+1/(t^{\rm a})^{2}}-\dfrac{\Omega\_{21}\ell\_1^\*q\_1^\top}{i\omega+1/t^{\rm a}}+\Omega\_{11}\ell\_1^\*\ell\_1^\top-\dfrac{\Omega\_{12}q\_1^\*\ell\_1^\top}{-i\omega+1/t^{\rm a}}\bigg],\label{uh-1}\\ H\_2(\lambda) &=\dfrac{\lambda\tau}{2\pi}\int\_{-\infty}^{+\infty}\dfrac{d\omega~e^{-i\omega \epsilon}}{\det[\Omega]}~\bigg[\dfrac{2i\gamma\omega G^\*\Omega\_{22}\mathcal{R}q\_1^\top}{\omega^2+1/(t^{\rm a})^{2}}-\dfrac{2\gamma G^\*\Omega\_{12}\mathcal{R}\ell\_1^\top} {-1+(i\omega t^{\rm a})^{-1}}-\dfrac{(1+2i\omega\gamma G^\*)\Omega\_{21}\mathcal{R}q\_1^\top}{i\omega+1/t^{\rm a}} +(1+2i\gamma\omega G^\*)\Omega\_{11}\mathcal{R}\ell\_1^\top\bigg],\label{uh-2}\\ H\_3(\lambda) &=\dfrac{\lambda}{2\pi}\int\_{-\infty}^{+\infty}\dfrac{d\omega}{\det[\Omega]}\bigg[\dfrac{4\gamma^2|G|^2}{1+(\omega^2t^{\rm a})^{-2}}\Omega\_{22}-\dfrac{2\gamma G^\*(1-2i\omega \gamma G)}{-1+(i\omega t^{\rm a})^{-1}}\Omega\_{12} +\dfrac{2\gamma G(1+2i\omega \gamma G^\*)}{1+(i\omega t^{\rm a})^{-1}}\Omega\_{21}+\Omega\_{11}+2\gamma\det[\Omega]\bigg]\mathcal{R}\mathcal{R}^\dagger.\label{uh-3}\end{aligned}$$ Using the above integrals –, we verify the condition *H*3(*λ*) = [*I* + *H*2(*λ*)]*H*1− 1(*λ*)*H*2⊤(*λ*) which ensures the factorization of the CMGF in terms of left and right eigenfunctions (see Appendix. [dd-eq]). We substitute *H*1(*λ*) and *H*2(*λ*), respectively given in Eqs.  and, in *g*W(*λ*) shown in, and numerically compute the latter for a given parameters for comparison with the prefactor shown in Eq. (31) of Ref. . Figure [fig:g-comp] shows that there is excellent agreement. [fig:g-comp]                       [cgf-fig]                       [cgf-fig-R]                                           [fig:ratio] [fig:ratio-same-d] Thus, we write the characteristic function *Z*W(*λ*, *τ*) (see Eq. ) using *μ*W(*λ*) and *g*W(*λ*) given in Eqs.  and, respectively. One can invert the former using the inverse Fourier transform defined in Eq.  and obtain *P*(W, *τ*) as discussed in Ref. . Therefore, in this section, using two different examples, we have shown how our general framework can be employed to exactly calculate CMGFs for non-Gaussian observables in the long-time limit. Cumulants of heat flow ====================== In Sec. [sec:z-lm], we computed the characteristic function *Z*(*λ*, *τ*) for the heat entering the left end of the harmonic chain of *N* AOUPs in the steady state. For a given number *N* of particles, one can, in principle, compute both *μ*(*λ*) and *g*(*λ*) as discussed in Sec. [examp], and invert *Z*(*λ*, *τ*) ∼ *g*(*λ*)*e**τ**μ*(*λ*) using the inverse Fourier transform  to give the full distribution for $Q\_{\rm L}$. Since *Z*(*λ*, *τ*) is the moment-generating function, its logarithm gives the cumulant-generating function. In the long-time limit, $$\begin{aligned} \dfrac{1}{\tau}\ln Z(\lambda,\tau) =\dfrac{1}{\tau}\ln \langle e^{-\lambda Q\_{\rm L}}\rangle=\mu(\lambda)+\dfrac{1}{\tau}\ln g(\lambda).\label{scgf}\end{aligned}$$ If *g*(*λ*) is an analytic function of *λ*, differentiating on both sides and setting *λ* to zero gives the first scaled cumulant (mean) of the heat flow (i.e., the left heat current) $$\begin{aligned} -\dfrac{\partial \mu(\lambda)}{\partial \lambda}\bigg|\_{\lambda=0} = \dfrac{\langle Q\_{\rm L} \rangle}{\tau} \equiv J\_{\rm L}.\label{cur-1}\end{aligned}$$ Similarly, differentiating twice and setting *λ* = 0 gives $$\begin{aligned} \dfrac{\partial^2 \mu(\lambda)}{\partial\lambda^2}\bigg|\_{\lambda=0}=\dfrac{1}{\tau}(\langle Q\_{\rm L} \rangle^2-\langle Q\_{\rm L}^2 \rangle).\label{cur-2}\end{aligned}$$ Higher-order cumulants can be obtained similarly. Notice that in the long-time limit, the contributions from *g*(*λ*) are lower order in *τ* and so vanish in both Eqs.  and. This is even true if *g*(*λ*) has singularities. For example, consider a case in which $g(\lambda)=\dfrac{g\_0(\lambda)}{\prod\_{i}(\lambda\_i-\lambda)^{\alpha\_i}}$, where *g*0(*λ*) is an analytic function of *λ* and the singularities are on the right-side of the origin: *λ**i* > 0. (Here *α**i* need not be integers.) Substituting in Eq. , in the long-time limit there is no contribution from *g*(*λ*) in Eqs.  and (and similarly for higher cumulants). Therefore, substituting *μ*(*λ*) (given in Eq. ) in Eqs.  and, the first two cumulants can be obtained in the integral form: $$\begin{aligned} J\_{\rm L}&=\dfrac{T\_{\rm L}-T\_{\rm R}}{4\pi}\int\_{-\infty}^{+\infty}~d\omega~4\gamma\_{\rm L}\gamma\_{\rm R} \omega^2 |G\_{1,N}|^2-\dfrac{1}{4\pi}\int\_{-\infty}^{+\infty}~d\omega~ 4\gamma\_{\rm L}\sum\_{\ell=1}^{N}\dfrac{D^{\rm a}\_{\ell}|G\_{1,\ell}|^2}{1+(\omega t^{\rm a}\_{\ell})^{-2}},\label{eq-cur}\\ \dfrac{1}{\tau}[\langle Q\_{\rm L}^2 \rangle-\langle Q\_{\rm L} \rangle^2]&=\dfrac{1}{4\pi} \int\_{-\infty}^{+\infty}~d\omega~\bigg[\bigg\{4\gamma\_{\rm L}\gamma\_{\rm R}(T\_{\rm L}-T\_{\rm R})\omega^2|G\_{1,N}|^2-4\gamma\_{\rm L}\sum\_{\ell=1}^{N}\dfrac{D^{\rm a}\_{\ell}|G\_{1,\ell}|^2}{1+( \omega t^{\rm a}\_{\ell})^{-2}}\bigg\}^2\label{eq-var}\\&~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~+8\gamma\_{\rm L}\gamma\_{\rm R}T\_{\rm L}T\_{\rm R}\omega^2|G\_{1,N}|^2+8\gamma\_{\rm L}T\_{\rm L}\sum\_{\ell=1}^{N}\dfrac{ D^{\rm a}\_{\ell}|G\_{1,\ell}|^2}{1+(\omega t^{\rm a}\_{\ell})^{-2}}\bigg],\notag\end{aligned}$$ where the first integral in Eq.  corresponds to the heat current observed in Ref.  for a harmonic chain when the particles have no activity. Similarly, the limit *D*ℓ*a* → 0 yields the second cumulant as shown in Ref. . An alternative derivation for first and second cumulants shown in Appendix [JL-diff] gives the same cumulants as in Eqs.  and. Similarly we use the scaled cumulant-generating function $\mu\_{\rm R}(\lambda)$  to obtain the first and second scaled cumulants of the right heat flow $Q\_{\rm R}$: $$\begin{aligned} J\_{\rm R}&=-\dfrac{T\_{\rm L}-T\_{\rm R}}{4\pi}\int\_{-\infty}^{+\infty}~d\omega~4\gamma\_{\rm L}\gamma\_{\rm R} \omega^2 |G\_{1,N}|^2-\dfrac{1}{4\pi}\int\_{-\infty}^{+\infty}~d\omega~ 4\gamma\_{\rm R}\sum\_{\ell=1}^{N}\dfrac{D^{\rm a}\_{\ell}|G\_{N,\ell}|^2}{1+(\omega t^{\rm a}\_{\ell})^{-2}},\label{eq-cur-R}\\ \dfrac{1}{\tau}[\langle Q\_{\rm R}^2 \rangle-\langle Q\_{\rm R} \rangle^2]&=\dfrac{1}{4\pi} \int\_{-\infty}^{+\infty}~d\omega~\bigg[\bigg\{4\gamma\_{\rm L}\gamma\_{\rm R}(T\_{\rm L}-T\_{\rm R})\omega^2|G\_{1,N}|^2+4\gamma\_{\rm R}\sum\_{\ell=1}^{N}\dfrac{D^{\rm a}\_{\ell}|G\_{N,\ell}|^2}{1+( \omega t^{\rm a}\_{\ell})^{-2}}\bigg\}^2\label{eq-var-R}\\&~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~+8\gamma\_{\rm L}\gamma\_{\rm R}T\_{\rm L}T\_{\rm R}\omega^2|G\_{1,N}|^2+8\gamma\_{\rm R}T\_{\rm R}\sum\_{\ell=1}^{N}\dfrac{ D^{\rm a}\_{\ell}|G\_{N,\ell}|^2}{1+(\omega t^{\rm a}\_{\ell})^{-2}}\bigg].\notag\end{aligned}$$ Figures [cgf-fig](a) and (b) show increasing observation time increases the agreement of analytical expressions for the first two scaled cumulants of left heat flow for a two-AOUP chain with numerical simulations performed using Langevin dynamics. Figure [cgf-fig](c) shows that as the particle activities $D\_{i}^{\rm a}$ increase, the current $J\_{\rm L}$ decreases and eventually changes sign. This is because the active forces perform work on the particles, so these particles dissipate heat to the reservoir to maintain steady state. Therefore, there is a competition between two currents: the current due to thermal forces (first term of Eq. ) and that due to active forces (second term of Eq. ). Figure [cgf-fig](d) shows that these active forces enhance heat fluctuations. In summary, with increasing AOUP activity the distribution of the left heat flow shifts to lower mean and its width increases. Similarly, Figs. [cgf-fig-R](a) and (b), respectively, compare the analytical expressions for the first and second scaled cumulants of right heat flow given in Eqs.  and with numerical simulations performed using Langevin dynamics. In this case (for $T\_{\rm L}>T\_{\rm R}$), the right current remains negative (leaving the chain and entering the bath) and decreases with the particle activity [see Fig. [cgf-fig-R](c)]. This current’s sign can be physically understood as follows. There are two currents that enter into the right heat bath: the thermal current due to the temperature gradient (first term of Eq. ) and the current due to the active forces (second term of Eq. ). Each make a negative contribution to $J\_{\rm R}$. (Notice that for $J\_{\rm L}$, the thermal current has opposite sign and competes with the active-force current, so the left current’s sign depends on the dominant contribution.) Similar to the left heat flow, here also the active forces enhance the fluctuations of the right heat flow; therefore, as the particle activity increases, the distribution shifts toward lower mean and broadens [see Fig. [cgf-fig-R](d)]. Figure [fig:ratio] shows for the three-AOUP chain the analytical ratios of left and right heat currents [Eqs.  and ] and of the heat-flow variances [Eqs.  and ], as functions of the leftmost particle’s activity $D\_1^{\rm a}$ at fixed $D\_2^{\rm a}$ for three different values of $D\_3^{\rm a}$. These cumulants are clearly neither anti-symmetric nor symmetric. To gain more insight, Fig. [fig:ratio-same-d] shows these ratios when all particles have identical activity ($D\_i^{\rm a} = D^{\rm a}~\forall~i$), as a function of the activity strength $D^{\rm a}$. As expected, in the limit $D^{\rm a}\to0$, $J\_{\rm L}\to -J\_{\rm R}$ and $[\langle Q\_{\rm L}^2 \rangle -\langle Q\_{\rm L} \rangle^2] \to [\langle Q\_{\rm R}^2 \rangle -\langle Q\_{\rm R} \rangle^2]$. In the opposite limit ($D^{\rm a}\to\infty$), these ratios saturate to particular values (see dashed lines) obtained from the dominating contributions in the limit $D^{\rm a}~\to~\infty$ of analytical expressions ,,, and. This can be physically understood as follows. When particle activity is sufficiently high that in Eqs.  and the active-force current (second integral) dominates the thermal current (first integral), a majority of the heat current is due to the active forces and flows toward both heat baths, giving a positive ratio of currents. Similarly, in this limit the heat-flow fluctuations are mostly due to the active forces and their ratio saturates to its limiting behavior (dashed horizontal line in Fig. [fig:ratio-same-d](b)) obtained from the dominating contribution. Even when each particle has distinct activity, we expect the ratio of cumulants of left and right heat flow in the limit of large activity strength of the particles (i.e., $D^{\rm a}\_i\to \infty~\forall~i$) to saturate (similar to Fig. [fig:ratio-same-d]) as long as the integrals in Eqs. ,,, and containing *D**i**a* are dominating. Summary ======= In this paper, we considered a harmonic chain of *N* active Ornstein-Uhlenbeck particles. Each particle is influenced by a persistent stationary-state Ornstein-Uhlenbeck active force which has an exponential correlation in time. The chain ends are connected via different friction constants to two heat reservoirs of different temperatures. Due to the temperature difference, heat generally flows through the system. We computed the steady-state heat flow entering each end of the chain, in the long-time limit analytically obtaining the characteristic function for this heat flow. We demonstrated two examples where one can compute the characteristic function for non-Gaussian observables. Finally, we used the characteristic function to compute the scaled cumulants for the heat flow and observed the effect of the activity on the heat current and its fluctuations. In particular, we found the activity of particles produces heat flow out the left end, thereby counteracting the rightward heat flow at the leftmost particle in the absence of activity. At the same time, it also enhances the fluctuations of the heat flow. In brief, activity of the particles reduces the mean and broadens the distribution of the left heat flow. The results presented in this paper are based on the framework of stochastic thermodynamics  and give us an understanding of steady-state thermal conduction in an active-matter harmonic chain. Recent research has shown that the first two cumulants for an arbitrary current are constrained by the thermodynamic uncertainty relation : fluctuations of the current are bounded by entropy production. Therefore, these two cumulants will be useful in the thermodynamic uncertainty relation  to infer the dissipation of this active-matter system. It would also be interesting to see the effect of active run-and-tumble particles and active Brownian particles on these first two cumulants and the related thermodynamic uncertainty relation. We emphasize that the large-deviation function for a stochastic observable (such as the heat flow) is related to the cumulant-generating function through the Legendre transform  (see Sec. [FP-sec]), where tails of the distribution are identified by the cutoffs within which *μ*(*λ*) is a real function. (See Refs.  for the computation of these cutoffs.) The prefactor *g*(*λ*) also importantly affects the tails of the distributions when *g*(*λ*) has singularities . Given our framework, one can consider simple examples permitting analytical computation of *μ*(*λ*) and *g*(*λ*), and carefully invert the characteristic function to obtain the probability density function using methods discussed in Refs. . In this paper, we have considered a harmonic chain where only the ends are connected to thermal reservoirs of different temperatures. Extending our system of active particles to connect each particle to a different temperature  would be interesting. Additionally, departures from Fourier’s law for a harmonic chain composed of pinned active particles (where each particle is additionally confined in its own distinct potential) and the role of boundary conditions and system size on the heat conduction are interesting topics for future investigation . This research was supported by “Excellence Project 2018” of the Cariparo foundation from the University of Padova (D.G.), a Natural Sciences and Engineering Research Council of Canada (NSERC) Discovery Grant (D.A.S.), and a Tier-II Canada Research Chair (D.A.S.). The authors thank Lorenzo Caprini for suggesting useful references, Rituparno Mandal for useful discussions, and the anonymous reviewers for their valuable suggestions that have improved the manuscript’s content and presentation. The Fokker-Planck equation ========================== Here we derive the Fokker-Planck equation for the conditional moment-generating function *Z*(*λ*, *U*, *τ*∣*U*0), where *U* ≡ [*x*1, *x*2, …, *x**N*, *v*1, *v*2, …, *v**N*, *f*1, *f*2, …, *f**N*]⊤. We first write the evolution equation for the conditional joint density function $\rho(Q\_{\rm L},U,\tau|U\_0)$ : $$\begin{aligned} \dfrac{\partial \rho}{\partial t} &= -\sum\_{i=1}^N\bigg[\dfrac{\partial}{\partial x\_i} \bigg(\dfrac{\langle \Delta x\_i\rangle}{\Delta t} \rho\bigg)+\dfrac{\partial}{\partial v\_i} \bigg(\dfrac{\langle \Delta v\_i\rangle}{\Delta t} \rho\bigg)+\dfrac{\partial}{\partial f\_i} \bigg(\dfrac{\langle \Delta f\_i\rangle}{\Delta t} \rho\bigg)\bigg] \nonumber\\ &\quad +\dfrac{1}{2}\sum\_{i,j}\bigg[\dfrac{\partial^2}{\partial x\_i\partial x\_j} \bigg(\dfrac{\langle \Delta x\_i \Delta x\_j\rangle}{\Delta t} \rho\bigg) +\dfrac{\partial^2}{\partial v\_i\partial v\_j} \bigg(\dfrac{\langle \Delta v\_i \Delta v\_j\rangle}{\Delta t} \rho\bigg)+\dfrac{\partial^2}{\partial f\_i\partial f\_j} \bigg(\dfrac{\langle \Delta f\_i \Delta f\_j\rangle}{\Delta t} \rho\bigg)\bigg] \nonumber\\ &\quad +\dfrac{1}{2}\sum\_{i,j}\bigg[\dfrac{\partial^2}{\partial x\_i\partial v\_j} \bigg(\dfrac{\langle \Delta x\_i \Delta v\_j\rangle}{\Delta t} \rho\bigg) +\dfrac{\partial^2}{\partial x\_i\partial f\_j} \bigg(\dfrac{\langle \Delta x\_i \Delta f\_j\rangle}{\Delta t} \rho\bigg)+\dfrac{\partial^2}{\partial v\_i\partial f\_j} \bigg(\dfrac{\langle \Delta v\_i \Delta f\_j\rangle}{\Delta t} \rho\bigg)\bigg] \\ &\quad - \dfrac{\partial}{\partial Q\_{\rm L}} \bigg(\dfrac{\langle \Delta Q\_{\rm L}\rangle}{\Delta t}\rho\bigg) + \dfrac{1}{2}\dfrac{\partial^2}{\partial Q\_{\rm L}^2}\bigg(\dfrac{\langle \Delta Q^2\_{\rm L} \rangle}{\Delta t} \rho\bigg)\nonumber\\ &\quad +\sum\_{i=1}^{N}\bigg[\dfrac{\partial^2}{\partial Q\_{\rm L}\partial x\_{i}}\bigg(\dfrac{\langle \Delta x\_i \Delta Q\_{\rm L}\rangle}{\Delta t} \rho\bigg)+\dfrac{\partial^2}{\partial Q\_{\rm L}\partial v\_{i}}\bigg(\dfrac{\langle \Delta v\_i \Delta Q\_{\rm L}\rangle}{\Delta t} \rho\bigg)+\dfrac{\partial^2}{\partial Q\_{\rm L}\partial f\_{i}}\bigg(\dfrac{\langle \Delta f\_i \Delta Q\_{\rm L}\rangle}{\Delta t} \rho\bigg)\bigg].\notag\end{aligned}$$ To evaluate the right-hand side, we discretize the dynamical equations – and (following the Stratonovich rule), and compute the moments in the limit of vanishing time-increment (Δ*t* → 0). Substituting these moments, we find $$\begin{aligned} \dfrac{\partial \rho}{\partial t} &= -\sum\_{i=1}^N\bigg[v\_i\dfrac{\partial \rho}{\partial x\_i}+\dfrac{1}{m\_i}\dfrac{\partial}{\partial v\_i} \bigg(\bigg\{-\sum\_{j=1}^N \Phi\_{i,j}x\_j+f\_i-\gamma\_{\rm L}v\_{1}\delta\_{i,1}-\gamma\_{\rm R}v\_{N}\delta\_{i,N}\bigg\}\rho\bigg)-\dfrac{1}{t\_i^{\rm a}}\dfrac{\partial}{\partial f\_i}(f\_i \rho)\bigg]\\ &\quad +\dfrac{\gamma\_{\rm L}T\_{\rm L}}{m\_1^2}\dfrac{\partial^2 \rho}{\partial v\_1^2}+\dfrac{\gamma\_{\rm R}T\_{\rm R}}{m\_N^2}\dfrac{\partial^2 \rho}{\partial v\_N^2}+\sum\_{i=1}^{N} D\_{i}^{\rm a} \dfrac{\partial^2 \rho}{\partial f\_i^2}- \bigg(\dfrac{\gamma\_{\rm L}T\_{\rm L}}{m\_1}-\gamma\_{\rm L}v\_1^2\bigg)\dfrac{\partial \rho}{\partial Q\_{\rm L}} + \gamma\_{\rm L}T\_{\rm L}v\_1^2\dfrac{\partial^2 \rho}{\partial Q\_{\rm L}^2}+\dfrac{2\gamma\_{\rm L}T\_{\rm L}}{m\_1} \dfrac{\partial^2}{\partial Q\_{\rm L} v\_1} (v\_1 \rho).\notag\end{aligned}$$ Fourier transforming *ρ* to $Z(\lambda, U,\tau|U\_0) \equiv \int\_{-\infty}^{+\infty}~dQ\_{\rm L}~\rho(Q\_{\rm L},U,\tau|U\_0)~e^{-\lambda Q\_{\rm L}}$ gives the evolution equation  for the conditional moment-generating function *Z*(*λ*, *U*, *τ*∣*U*0), in which the Fokker-Planck operator is $$\begin{aligned} \mathcal{L}\_\lambda &\equiv -\sum\_{i=1}^N\bigg[v\_i\dfrac{\partial}{\partial x\_i}+\dfrac{1}{m\_i}\dfrac{\partial}{\partial v\_i} \bigg(-\sum\_{j=1}^N \Phi\_{i,j}x\_j+f\_i-\gamma\_{\rm L}v\_{1}\delta\_{i,1}-\gamma\_{\rm R}v\_{N}\delta\_{i,N}\bigg)-\dfrac{1}{t\_i^{\rm a}}\dfrac{\partial}{\partial f\_i}f\_i\bigg]\label{FPO}\\&+\dfrac{\gamma\_{\rm L}T\_{\rm L}}{m\_1^2}\dfrac{\partial^2}{\partial v\_1^2}+\dfrac{\gamma\_{\rm R}T\_{\rm R}}{m\_N^2}\dfrac{\partial^2}{\partial v\_N^2}+\sum\_{i=1}^{N} D\_{i}^{\rm a} \dfrac{\partial^2 }{\partial f\_i^2}-\lambda \bigg(\dfrac{\gamma\_{\rm L}T\_{\rm L}}{m\_1}-\gamma\_{\rm L}v\_1^2\bigg) + \lambda^2\gamma\_{\rm L}T\_{\rm L}v\_1^2+\dfrac{2\lambda\gamma\_{\rm L}T\_{\rm L}}{m\_1} \dfrac{\partial}{\partial v\_1}v\_1. \notag\end{aligned}$$ Detailed derivation of steady-state distribution $P\_{\rm SS}(U)$ ================================================================= Here we compute the steady-state distribution (see Eq. ) for the full dynamics given in Eqs. –. Fourier transforming Eqs. – using Eq.  gives $$\begin{aligned} i\omega\_n \tilde X(\omega\_n)&=\tilde{V}(\omega\_n)-\dfrac{\Delta X}{\tau},\label{X-eqn}\\ (i\omega\_n M+\Gamma)\tilde V(\omega\_n)&=-\Phi \tilde{X}(\omega\_n)+\tilde F(\omega\_n)+\tilde B(\omega\_n)-\dfrac{M\Delta V}{\tau},\label{V-eqn}\\ \big(i\omega\_n I+R^{-1}\big)\tilde F(\omega\_n)&=\tilde{\mathcal{Z}} (\omega\_n)-\dfrac{\Delta F}{\tau},\label{F-eqn}\end{aligned}$$ where [Δ*X*, Δ*V*, Δ*F*] ≡ [*X*(*τ*) − *X*(0), *V*(*τ*) − *V*(0), *F*(*τ*) − *F*(0)]. Notice that the phase space of our system is unbounded; therefore, these boundary terms cannot be neglected even in the long-time limit. In Eq. , *I* is the identity matrix. Substituting Eqs.  and in Eq.  gives $$\begin{aligned} (i\omega\_n M+\Gamma)\tilde V(\omega\_n)&=-\dfrac{\Phi}{i\omega\_n} \bigg[\tilde{V}(\omega\_n)-\dfrac{\Delta X}{\tau}\bigg]+\big[i\omega\_n I+R^{-1}\big]^{-1}\bigg[\tilde {\mathcal{Z}}(\omega\_n)-\dfrac{\Delta F}{\tau}\bigg] +\tilde B(\omega\_n)-\dfrac{M\Delta V}{\tau},\end{aligned}$$ or, solving for *Ṽ*: $$\begin{aligned} \tilde V(\omega\_n) &= i\omega\_n G(\omega\_n)\bigg[\big[i\omega\_n I+R^{-1}\big]^{-1}\tilde{\mathcal{Z}}(\omega\_n)+\tilde B(\omega\_n)\bigg]+\dfrac{G(\omega\_n)}{\tau}\bigg[\Phi\Delta X-i\omega\_n M\Delta V -i\omega\_n\big[i\omega\_n I+R^{-1}\big]^{-1}\Delta F\bigg], \label{Eq-FV}\end{aligned}$$ where *G*(*ω**n*) ≡ [Φ − *ω**n*2*M* + *i**ω**n*Γ]− 1 is the Green’s function matrix. Using Eq. , the force vector at time *τ* can be computed using the inverse Fourier transform  and substituting *t* = *τ* − *ε* (for *ε* > 0), giving  $$\begin{aligned} F(\tau)&={\color{black}\lim\_{\epsilon\to 0}\sum\_{n=-\infty}^{+\infty} \tilde{F}(\omega\_n)~e^{i\omega\_n (\tau-\epsilon)}} \\ &=\lim\_{\epsilon\to 0}\sum\_{n=-\infty}^{+\infty} \tilde{F}(\omega\_n)~e^{-i\omega\_n \epsilon}\label{ftau}\end{aligned}$$ for *τ* = 2*π**n*/*ω**n*. In the limit of large *τ*, we can convert the second summation into an integral over *ω*. The matrix $\big[i\omega\_n I+R^{-1}\big]^{-1}$ has only diagonal entries, and each entry gives a pole $\omega=i/t^{\rm a}\_{\ell}$ which lies in the upper half of the complex *ω*-plane. Therefore, using the Cauchy residue theorem , the second term (containing boundary terms) vanishes in that limit, giving $$\begin{aligned} F(\tau)&=\lim\_{\epsilon\to 0}\sum\_{n=-\infty}^{+\infty}~e^{-i\omega\_n \epsilon}\big[i\omega\_n I+R^{-1}\big]^{-1}\tilde{\mathcal{Z}}(\omega\_n).\end{aligned}$$ Similar to Eq. , inverse Fourier transforming Eq.  gives *V*(*τ*), and we find that the term $$\begin{aligned} \sum\_{n=-\infty}^{+\infty}e^{-i\omega\_n \epsilon} \dfrac{G(\omega\_n)}{\tau}\bigg[\Phi\Delta X-i\omega\_nM\Delta V-i\omega\_n\big[i\omega\_n I+R^{-1}\big]^{-1}\Delta F\bigg],\end{aligned}$$ vanishes in the *τ* → ∞ limit since all the poles lie in the upper half of the complex *ω*-plane. Thus, the velocity vector at time *τ* is $$\begin{aligned} V(\tau)=\lim\_{\epsilon\to 0}\sum\_{n=-\infty}^{+\infty}e^{-i\omega\_n \epsilon}~i\omega\_n G(\omega\_n)\bigg[\big[i\omega\_n I+R^{-1}\big]^{-1}\tilde{\mathcal{Z}}(\omega\_n)+\tilde B(\omega\_n)\bigg].\end{aligned}$$ Substituting Eq.  in Eq.  gives the Fourier-space position vector $$\begin{aligned} \tilde X(\omega\_n)&=G(\omega\_n)\bigg[\big[i\omega\_n I+R^{-1}\big]^{-1}\tilde{\mathcal{Z}}(\omega\_n)+\tilde B(\omega\_n)\bigg]\\ &\quad +\dfrac{G(\omega\_n)}{i\omega\_n\tau}\bigg[\Phi\Delta X-i\omega\_nM\Delta V-i\omega\_n\big[i\omega\_n I+R^{-1}\big]^{-1}\Delta F\bigg]-\dfrac{\Delta X}{i\omega\_n\tau}. \nonumber\end{aligned}$$ Following the same argument, in the limit of large *τ* the position vector at time *τ* is $$\begin{aligned} X(\tau)=\lim\_{\epsilon\to 0}\sum\_{n=-\infty}^{+\infty}e^{-i\omega\_n \epsilon}~G(\omega\_n)\bigg[\big[i\omega\_n I+R^{-1}\big]^{-1}\tilde{\mathcal{Z}}(\omega\_n)+\tilde B(\omega\_n)\bigg].\end{aligned}$$ Therefore, considering the contribution from *X*(*τ*), *V*(*τ*), and *F*(*τ*), the full state vector is $$\begin{aligned} U^\top(\tau)&=\lim\_{\epsilon\to 0}\sum\_{n=-\infty}^{+\infty}e^{-i\omega\_n \epsilon} \bigg(\bigg[\tilde {\mathcal{Z}}^\top(\omega\_n)\big[i\omega\_n I+R^{-1}\big]^{-1}+\tilde B^\top(\omega\_n)\bigg]G^\top(\omega\_n)\, \label{Utop-tau} \\ &\quad\quad i\omega\_n\bigg[\tilde {\mathcal{Z}}^\top(\omega\_n)\big[i\omega\_n I+R^{-1}\big]^{-1} +\tilde B^\top(\omega\_n)\bigg]G^\top(\omega\_n)\,\ \tilde {\mathcal{Z}}^\top(\omega\_n)\big[i\omega\_n I+R^{-1}\big]^{-1} \bigg),\nonumber\\ &=\lim\_{\epsilon\to 0}\sum\_{n=-\infty}^{+\infty}e^{-i\omega\_n \epsilon}~\bigg[\sum\_{j=1}^{N}\dfrac{\tilde \zeta\_j(\omega\_n)q\_j^\top}{i\omega\_n+1/t\_{j}^{\rm a}}+\tilde{\eta}\_{\rm L}(\omega\_n)\ell\_1^\top+\tilde{\eta}\_{\rm R}(\omega\_n)\ell\_N^\top\bigg],\label{utop-vec}\end{aligned}$$ where (for *j* = 1, …, *N*) $$\begin{aligned} q\_j^\top& \equiv (G\_{1,j},G\_{2,j},\dots,G\_{N,j},i\omega\_nG\_{1,j},i\omega\_nG\_{2,j},\dots,i\omega\_nG\_{N,j},\delta\_{1,j}\delta\_{2,j},\dots,\delta\_{N,j}),\\ \ell\_j^\top &\equiv (G\_{1,j},G\_{2,j},\dots,G\_{N,j},i\omega\_nG\_{1,j},i\omega\_nG\_{2,j},\dots,i\omega\_nG\_{N,j}, \overbrace{0,0,\dots,0}^{N}).\end{aligned}$$ The first *N* components correspond to positions, the next *N* to velocities, and the last *N* to OU forces. *U*(*τ*) has average zero and correlation $$\begin{aligned} \langle U(\tau)U^\top(\tau) \rangle&=\lim\_{\epsilon\to 0}\sum\_{n=-\infty}^{+\infty}\sum\_{m=-\infty}^{+\infty}e^{-i\omega\_n \epsilon}~e^{-i\omega\_m \epsilon}\nonumber\\ &\quad \times \bigg[\sum\_{j=1}^{N}\sum\_{p=1}^{N}\dfrac{q\_j(\omega\_n)\langle \tilde \zeta\_j(\omega\_n)\tilde \zeta\_p(\omega\_m)\rangle q\_p^\top(\omega\_m)}{(i\omega\_n+1/t\_{j}^{\rm a})(i\omega\_m+1/t\_{p}^{\rm a})}\\ &\quad\quad +\ell\_1(\omega\_n)\langle \tilde{\eta}\_{\rm L}(\omega\_n)\tilde{\eta}\_{\rm L}(\omega\_m)\rangle \ell\_1^\top(\omega\_m)+\ell\_N(\omega\_n) \langle \tilde{\eta}\_{\rm R}(\omega\_n)\tilde{\eta}\_{\rm R}(\omega\_m)\rangle \ell\_N^\top(\omega\_m)\bigg]\nonumber\\ &=\dfrac{2}{\tau}\sum\_{n=-\infty}^{+\infty}\bigg[\sum\_{j=1}^{N}\dfrac{D^{\rm a}\_{j}q\_jq\_j^\dagger}{\omega^2\_n+1/(t^{\rm a}\_{j})^{2}} +\gamma\_{\rm L}T\_{\rm L}\ell\_1\ell\_1^\dagger+\gamma\_{\rm R}T\_{\rm R}\ell\_N\ell\_N^\dagger\bigg]\\ &=\dfrac{1}{\pi}\int\_{-\infty}^{+\infty}~d\omega~\bigg[\sum\_{j=1}^{N}\dfrac{D^{\rm a}\_{j}q\_jq\_j^\dagger}{\omega^2+1/(t^{\rm a}\_{j})^{2}}+\gamma\_{\rm L}T\_{\rm L}\ell\_1\ell\_1^\dagger+\gamma\_{\rm R}T\_{\rm R}\ell\_N\ell\_N^\dagger\bigg],\label{corr-eqn}\end{aligned}$$ for Fourier-space noise correlations $$\begin{aligned} [\langle \tilde\eta\_i(\omega)\tilde\eta\_j(\omega') \rangle,\langle \tilde \zeta\_i(\omega)\tilde \zeta\_j(\omega') \rangle]^\top&=\dfrac{1}{\tau^2}\int\_0^\tau~dt\_1~\int\_0^\tau~dt\_2~e^{-i\omega t\_1}~e^{-i\omega' t\_2} [\langle \eta\_i(t\_1)\eta\_j(t\_2) \rangle,\langle \zeta\_i(t\_1)\zeta\_j(t\_2) \rangle]^\top\\ &=\frac{2}{\tau}\delta\_{i,j}\delta\_{\omega,-\omega'}[\gamma\_iT\_i,D\_{i}^{\rm a}]^\top. \label{noise-correlations}\end{aligned}$$ A Gaussian with zero mean and correlation  gives the reported steady-state distribution . Detailed derivation of Eq. ========================== Here we present the detailed derivation of Eq. , helpful for computing the characteristic function for the heat $Q\_{\rm L}$ entering the leftmost particle from the left bath. Fourier transforming  the RHS of $Q\_{\rm L}$ in Eq.  gives $$\begin{aligned} Q\_{\rm L}=\dfrac{\tau}{2}\sum\_{n=-\infty}^{+\infty}~[\tilde{\eta}\_{\rm L}(\omega\_n) \tilde v\_1(-\omega\_n)+\tilde{\eta}\_{\rm L}(-\omega\_n) \tilde v\_1(\omega\_n)-2\gamma\_{\rm L} \tilde v\_1 (\omega\_n) \tilde v\_1(-\omega\_n)].\label{Q-L-omega}\end{aligned}$$ Using Eq. , we can write the Fourier-space velocity of the leftmost particle as $$\begin{aligned} \tilde v\_1(\omega\_n)&=i\omega\_n\bigg[\sum\_{\ell=1}^{N} \dfrac{G\_{1,\ell}(\omega\_n)\tilde{\zeta}\_\ell(\omega\_n)}{i\omega\_n+1/t^{\rm a}\_{\ell}}+G\_{1,1}(\omega\_n)\tilde{\eta}\_{\rm L}(\omega\_n)+G\_{1,N}(\omega\_n)\tilde{\eta}\_{\rm R}(\omega\_n)\bigg]+\dfrac{\mathcal{R}^\top \Delta U}{\tau},\label{v1-eqn}\end{aligned}$$ for $$\begin{aligned} \mathcal{R}^\top&\equiv \bigg[[G(\omega\_n)\Phi]\_{1,1}\,\ [G(\omega\_n)\Phi]\_{1,2}\,\dots,\ [G(\omega\_n)\Phi]\_{1,N}\,\ -i\omega\_n [G(\omega\_n)M]\_{1,1},\label{q-eqn-1}\\ &\quad -i\omega\_n [G(\omega\_n)M]\_{1,2},\dots,-i\omega\_n [G(\omega\_n)M]\_{1,N}, -i\omega\_n\bigg(\frac{G\_{1,1}(\omega\_n)}{i\omega\_n+1/t\_{1}^{\rm a}},\frac{G\_{1,2}(\omega\_n)}{i\omega\_n+1/t\_{2}^{\rm a}},\dots,\frac{G\_{1,N}(\omega\_n)}{i\omega\_n+1/t^{\rm a}\_{N}}\bigg)\bigg],\nonumber\\ \Delta U&{\color{black}\equiv}[\Delta x\_1,\Delta x\_2,\dots,\Delta x\_N,\Delta v\_1,\Delta v\_2,\dots,\Delta v\_N,\Delta f\_1,\Delta f\_2,\dots,\Delta f\_N]^\top.\end{aligned}$$ Similarly, replacing *ω**n* in Eq.  by  − *ω**n*, $$\begin{aligned} \tilde v\_1(-\omega\_n)&=-i\omega\_n\bigg[ \sum\_{\ell=1}^{N} \dfrac{G\_{1,\ell}(-\omega\_n)\tilde{\zeta}\_\ell(-\omega\_n)}{-i\omega\_n+1/t^{\rm a}\_{\ell}}+G\_{1,1}(-\omega\_n)\tilde{\eta}\_{\rm L}(-\omega\_n)+G\_{1,N}(-\omega\_n)\tilde{\eta}\_{\rm R}(-\omega\_n)\bigg]+\dfrac{\mathcal{R}^\dagger \Delta U}{\tau}.\label{v1m-eqn}\end{aligned}$$ Substituting Eq.  and in Eq.  gives $$\begin{aligned} Q\_{\rm L}=\dfrac{1}{2}\sum\_{n=-\infty}^{+\infty}\bigg[\tau\tilde \xi\_n^\top C\_n \xi\_n^\*+ \tilde\xi\_n^\top \alpha\_n+ \alpha\_{-n}^\top \tilde \xi\_n^\*-2\gamma\_{\rm L}\dfrac{\Delta U^\top \mathcal{R}\mathcal{R}^\dagger \Delta U}{\tau}\bigg],\label{Q-eqn-FT}\end{aligned}$$ where $\tilde \xi\_{n}^\top \equiv[\tilde \zeta\_1(\omega\_n), \dots, \tilde \zeta\_N(\omega\_n), \tilde \eta\_{\rm L}(\omega\_n),\tilde \eta\_{\rm R}(\omega\_n)]$ is the Fourier-space noise vector, and the Hermitian matrix *C**n* and vectors *α*− *n*⊤ are, respectively, $$\begin{aligned} C\_n&\equiv C\_n^{(1)}-2\gamma\_{\rm L} C\_{n}^{(2)},\label{cn-eqn}\\ \alpha\_{-n}^\top&\equiv A\_{-n}^\top-2\gamma\_{\rm L} B\_{-n}^\top.\label{alpha-eqn}\end{aligned}$$ In Eq. , the upper-triangle matrix elements (for 1 ≤ *i* ≤ *j* ≤ *N*) of Hermitian matrices *C**n*(1) and *C**n*(2), respectively, are $$\begin{aligned} [C^{(1)}\_{n}]\_{i,j}&=0{\color{black}\,}\\ [C^{(1)}\_{n}]\_{i,N+1}&=\dfrac{G\_{1,i}}{1+(i\omega\_n t\_{i}^{\rm a})^{-1}}{\color{black}\,}\\ [C^{(1)}\_{n}]\_{i,N+2}&=0{\color{black}\,}\\ [C^{(1)}\_{n}]\_{N+1,N+1}&=i\omega\_n[G\_{1,1}-G\_{1,1}^\*]{\color{black}\,}\\ [C^{(1)}\_{n}]\_{N+1,N+2}&=-i\omega\_n G\_{1,N}^\*{\color{black}\,}\\ [C^{(1)}\_{n}]\_{N+2,N+2}&=0{\color{black}\,}\end{aligned}$$ and $$\begin{aligned} [C\_n^{(2)}]\_{i,j}&=\dfrac{\omega\_n^2 G\_{1,i}G\_{1,j}^\*}{(i\omega\_n+1/t^{\rm a}\_{i})(-i\omega\_n+1/t^{\rm a}\_{j})}{\color{black}\,}\\ [C\_n^{(2)}]\_{i,N+1}&=\dfrac{\omega\_n^2G\_{1,i}G\_{1,1}^\*}{i\omega\_n+1/t\_{i}^{\rm a}}{\color{black}\,} \\ [C\_n^{(2)}]\_{i,N+2}&=\dfrac{\omega\_n^2 G\_{1,i}G\_{1,N}^\*}{i\omega\_n+1/t\_{i}^{\rm a}}{\color{black}\,}\\ [C^{(2)}\_{n}]\_{N+1,N+1}&=\omega\_n^2G\_{1,1}G\_{1,1}^\*{\color{black}\,}\\ [C^{(2)}\_{n}]\_{N+1,N+2}&=\omega\_n^2G\_{1,1} G\_{1,N}^\*{\color{black}\,}\\ [C^{(2)}\_{n}]\_{N+2,N+2}&=\omega\_n^2G\_{1,N} G\_{1,N}^\*\.\end{aligned}$$ Further, the two vectors in *α*− *n*⊤ in Eq.  are $$\begin{aligned} A\_{-n}^\top&\equiv (\underbrace{0,0,\dots,0}\_{N},1,0)\Delta U^\top \mathcal{R},\\ B\_{-n}^\top&\equiv \bigg(\frac{-G\_{1,1}^\*}{-1+(i\omega\_n t\_{1}^{\rm a})^{-1}},\frac{-G\_{1,2}^\*}{-1+(i\omega\_n t\_{2}^{\rm a})^{-1}},\dots,\frac{-G\_{1,N}^\*}{-1+(i\omega\_n t\_{N}^{\rm a})^{-1}},-i\omega\_n G\_{1,1}^\*,-i\omega\_n G\_{1,N}^\*\bigg) \Delta U^\top \mathcal{R}.\end{aligned}$$ We now compute the conditional characteristic function for $Q\_{\rm L}$ defined as (see Eq. ) $$\begin{aligned} Z(\lambda,U,\tau|U\_0)&\equiv \bigg\langle e^{-\lambda Q\_{\rm L}} \delta[U-U(\tau)]\bigg\rangle\_{U\_0}\\ &=\int \dfrac{d^{3N}\sigma}{(2\pi)^{3N}}e^{i\sigma^\top U} \big\langle e^{E(\tau)}\big\rangle\_{U\_0},\label{RCF}\end{aligned}$$ where $E(\tau)\equiv -\lambda Q\_{\rm L}-i U^\top(\tau) \sigma$. The second line results from substituting the integral representation of the Dirac delta function. The state vector *U*(*τ*) can be rewritten using Eq.  as $$\begin{aligned} U^\top(\tau)&=\lim\_{\epsilon\to 0}\sum\_{n=-\infty}^{+\infty}e^{-i\omega\_n \epsilon}\left[\sum\_{j=1}^N\dfrac{\tilde{\zeta}\_j(\omega\_n)q\_j^\top}{i\omega\_n+1/t\_{j}^{\rm a}}+\tilde{\eta}\_{\rm L}(\omega\_n)\ell\_1^\top+\tilde{\eta}\_{\rm R}(\omega\_n)\ell\_N^\top \right]\\ &\equiv\lim\_{\epsilon\to 0}\sum\_{n=-\infty}^{+\infty}e^{-i\omega\_n \epsilon}~\tilde \xi\_n^\top K\_1,\label{k1-eqn}\\ U(\tau)&= \lim\_{\epsilon\to 0}\sum\_{n=-\infty}^{+\infty}e^{-i\omega\_n \epsilon}\left[\sum\_{j=1}^N~\dfrac{q\_j\tilde{\zeta}\_j(\omega\_n)}{i\omega\_n+1/t\_{j}^{\rm a}}+\ell\_1\tilde{\eta}\_{\rm L}(\omega\_n)+\ell\_N\tilde{\eta}\_{\rm R}(\omega\_n) \right]\\ &\equiv\lim\_{\epsilon\to 0}\sum\_{n=-\infty}^{+\infty}e^{-i\omega\_n \epsilon}~K\_2^\top~\tilde \xi\_n\label{k2-eqn},\end{aligned}$$ where the inner products in and are defined using the respective column and row vectors, $$\begin{aligned} & ~~~~~~~~~~~~~~~~~~~~~~~~~~~K\_1\equiv \begin{bmatrix} (i\omega\_n+1/t\_{1}^{\rm a})^{-1}q\_1^\top\\ (i\omega\_n+1/t\_{2}^{\rm a})^{-1}q\_2^\top\\ \vdots\\ (i\omega\_n+1/t^{\rm a}\_{N})^{-1}q\_N^\top\\ \ell\_1^\top\\ \ell\_N^\top \end{bmatrix},\\ K\_2^\top&\equiv [(i\omega\_n+1/t\_{1}^{\rm a})^{-1}q\_1,(i\omega\_n+1/t\_{2}^{\rm a})^{-1}q\_2,\dots,(i\omega\_n+1/t^{\rm a}\_{N})^{-1}q\_N,\ell\_1,\ell\_N],\end{aligned}$$ in which the first *N* components correspond to active noise and the last two to thermal noise. Substituting *U*⊤(*τ*) in *E*(*τ*) leads to $$\begin{aligned} E(\tau)&=\sum\_{n=1}^{+\infty}\bigg[-\lambda \tau\tilde \xi\_n^\top C\_n \tilde \xi\_n^\*+\tilde\xi\_n^\top \beta\_n+ \beta\_{-n}^\top \tilde \xi\_n^\*+\dfrac{2\gamma\_{\rm L}\lambda}{\tau}|\Delta\_n|^2\bigg]-\dfrac{1}{2}\lambda \tau\tilde \xi\_0^\top C\_0 \tilde\xi\_0+\tilde\xi\_0^\top \beta\_0+\dfrac{\lambda \gamma\_{\rm L}}{\tau} |\Delta\_0|^2,\label{e-tau}\end{aligned}$$ for $$\begin{aligned} |\Delta\_n|^2&\equiv \Delta U^\top \mathcal{R}\mathcal{R}^\dagger \Delta U,\\ \beta\_n&\equiv -\lambda \alpha\_n-i e^{-i\omega\_n \epsilon} \big(K\_1\sigma\big).\end{aligned}$$ The average appearing in Eq. , ⟨*e**E*(*τ*)⟩*U*0, can be computed as follows. We first note that *E*(*τ*) (given in Eq. ) is quadratic in $\tilde \xi\_n$ and in $\tilde \xi\_0$. Since $\tilde \xi\_0$ and each $\tilde \xi\_n$ (*n* = 1, 2, …, ∞) are independent and identically distributed noise vectors, $$\begin{aligned} P(\tilde \xi\_n)&=\dfrac{e^{-\tilde \xi\_n^\top \Lambda^{-1}\tilde \xi\_n^\*}}{\pi^{N+2}\det[\Lambda]}\, \quad n\geq 1 \label{n-d-1},\\ P(\tilde \xi\_0)&=\dfrac{e^{-\frac{1}{2}\tilde \xi\_0^\top \Lambda^{-1}\tilde \xi\_0}}{\sqrt{(2\pi)^{N+2}\det[\Lambda]}}.\label{n-d-0} \,\end{aligned}$$ we write $\big\langle e^{E(\tau)}\big\rangle\_{U\_0}$ in product form: $$\begin{aligned} \langle e^{E(\tau)}\rangle\_{U\_0}&=\prod\_{n=1}^{\infty}\bigg\langle e^{-\lambda \tau\tilde \xi\_n^\top C\_n \tilde \xi\_n^\*+\tilde\xi\_n^\top \beta\_n+ \beta\_{-n}^\top \tilde \xi\_n^\*+\frac{2\gamma\_{\rm L}\lambda}{\tau}|\Delta\_n|^2}\bigg\rangle\_{U\_0} \bigg\langle e^{-\frac{1}{2}\lambda \tau\tilde \xi\_0^\top C\_0 \tilde\xi\_0+\tilde\xi\_0^\top \beta\_0+\frac{\lambda \gamma\_{\rm L}}{\tau} |\Delta\_0|^2} \bigg\rangle\_{U\_0}. \label{this-eqn}\end{aligned}$$ In and, the diagonal matrix $\Lambda\equiv 2/\tau~\text{diag}(D^{\rm a}\_{1},D^{\rm a}\_{2},\dots,D^{\rm a}\_{N},\gamma\_{\rm L} T\_{\rm L},\gamma\_{\rm R} T\_{\rm R})$ carries information about the strength of thermal and active noises. We compute each average by Gaussian integration, simplifying Eq.  to $$\begin{aligned} \langle e^{E(\tau)}\rangle\_{U\_0}&=\exp\bigg(-\dfrac{1}{2}\sum\_{n=-\infty}^{+\infty} \ln\big[\det(\Lambda \Omega\_n)\big]\bigg)\exp\bigg(\sum\_{n=\infty}^{+\infty}\bigg[\dfrac{1}{2}\beta\_{-n}^T\Omega\_n^{-1}\beta\_n+\frac{\lambda\gamma\_{\rm L}}{\tau}|\Delta\_n|^2\bigg]\bigg),\label{gf}\end{aligned}$$ for Ω*n* ≡ Λ− 1 + *λ**τ**C**n*. In the limit of large *τ*, these summations become integrals, converting to $$\begin{aligned} \langle e^{E(\tau)}\rangle\_{U\_0}&\approx e^{\tau \mu(\lambda)}e^{-\frac{1}{2}\sigma^\top H\_1(\lambda)\sigma+i\Delta U^\top H\_2(\lambda)\sigma+\frac{1}{2}\Delta U^\top H\_3(\lambda)\Delta U}.\label{av-E}\end{aligned}$$ The exponent *μ*(*λ*) in the integral form is $$\begin{aligned} \mu(\lambda)\equiv-\dfrac{1}{4\pi} \int\_{-\infty}^{+\infty}~d\omega~\ln [\det(\Lambda \Omega)],\label{mu-lam-app}\end{aligned}$$ and the matrices are $$\begin{aligned} H\_1(\lambda)&\equiv\dfrac{\tau}{2\pi}\int\_{-\infty}^{+\infty}d\omega~ K\_2^\dagger \Omega^{-1}K\_1,\label{h1-eqn}\\ H\_2(\lambda)&\equiv\dfrac{\lambda\tau}{2\pi}\int\_{-\infty}^{+\infty}d\omega~e^{-i\omega \epsilon}~a\_1^\top \Omega^{-1} K\_1,\label{h2-eqn}\\ H\_3(\lambda)&\equiv\dfrac{\lambda\tau}{2\pi}\int\_{-\infty}^{+\infty}d\omega~\bigg[\lambda a\_1^\top \Omega^{-1}a\_2+\frac{2\gamma\_{\rm L}}{\tau} \mathcal{R}\mathcal{R}^\dagger\bigg],\label{h3-eqn}\end{aligned}$$ for vectors $$\begin{aligned} a\_1^\top&\equiv\bigg(\frac{2\gamma\_{\rm L}G\_{1,1}^\*\mathcal{R}}{-1+(i\omega\_n t\_{1}^{\rm a})^{-1}}\,\ \frac{2\gamma\_{\rm L}G\_{1,2}^\* \mathcal{R}}{-1+(i\omega\_n t\_{2}^{\rm a})^{-1}}\,\dots,\ \frac{2\gamma\_{\rm L}G\_{1,N}^\*\mathcal{R}}{-1+(i\omega\_n t^{\rm a}\_{N})^{-1}}\, [1+2\gamma\_{\rm L}i\omega\_n G\_{1,1}^\*]\mathcal{R}\,\ 2\gamma\_{\rm L}i\omega\_n G\_{1,N}^\* \mathcal{R}\bigg),\\ a\_2&\equiv\bigg(\frac{-2\gamma\_{\rm L}G\_{1,1}\mathcal{R}^\dagger}{1+(i\omega\_n t\_{1}^{\rm a})^{-1}}\,\ \frac{-2\gamma\_{\rm L}G\_{1,2} \mathcal{R}^\dagger}{1+(i\omega\_n t\_{2}^{\rm a})^{-1}}\,\dots,\ \frac{-2\gamma\_{\rm L}G\_{1,N}\mathcal{R}^\dagger}{1+(i\omega\_n t^{\rm a}\_{N})^{-1}}\, [1-2\gamma\_{\rm L}i\omega\_n G\_{1,1}]\mathcal{R}^\dagger\,\ -2\gamma\_{\rm L}i\omega\_n G\_{1,N} \mathcal{R}^\dagger\bigg)^\top.\end{aligned}$$ Note that in *a*1⊤ and *a*2, the first *N* elements correspond to active noises and the last two to thermal noises. Substituting Eq.  in Eq.  and integrating over *σ* yields $$\begin{aligned} &Z(\lambda,U,\tau|U\_0)\approx e^{\tau \mu(\lambda)}\dfrac{e^{\frac{1}{2}\Delta U^\top H\_3(\lambda)\Delta U}}{\sqrt{(2\pi)^{3N}\det H\_1(\lambda)}} e^{-\frac{1}{2} [U^\top+\Delta U^\top H\_2(\lambda)]~H\_1^{-1}(\lambda)~[U+H\_2^\top(\lambda)\Delta U]}.\label{this-eqn-1} \end{aligned}$$ The formal long-time solution of the Fokker-Planck equation (see Sec. [FP-sec]) is *Z*(*λ*, *U*, *τ*∣*U*0) ≈ *e**τ**μ*(*λ*)*χ*(*U*0, *λ*)Ψ(*U*, *λ*). Therefore, to identify the left- and right-eigenfunctions, we factorize the RHS of Eq.  into separate factors that capture the respective dependence on *U* and *U*0. This identification can be achieved by setting (*H*1− 1*H*2⊤ − *H*3 + *H*2*H*1− 1*H*2⊤) + (*H*2*H*1− 1 − *H*3 + *H*2*H*1− 1*H*2⊤)⊤ = 0, giving. Alternative derivation of first and second scaled cumulant for left heat flow ============================================================================= Here we derive the first two scaled cumulants for the left heat flow, starting from its Fourier representation . This calculation verifies the cumulants obtained from the cumulant-generating function *μ*(*λ*) (see Eq. ). The computation of higher cumulants (above second) using the following method becomes complicated, and therefore it is convenient to compute the cumulants from *μ*(*λ*). We first obtain the first scaled cumulant. In, we will substitute the Fourier-transformed velocity $\tilde v\_1(\omega\_n)$ of the first particle . We first recall from Sec. [cum] that in the long-time limit the cumulant-generating function is independent of *g*(*λ*), which generally captures the boundary contributions. Therefore, dropping the boundary contributions in this limit simplifies $\tilde v\_{1}(\omega\_n)$ to $$\begin{aligned} \tilde v\_1(\omega\_n)\approx i\omega\_n\bigg[\sum\_{\ell=1}^{N} \dfrac{G\_{1,\ell}(\omega\_n)}{i\omega\_n+1/t^{\rm a}\_{\ell}}\tilde{\zeta}\_\ell(\omega\_n)+G\_{1,1}(\omega\_n)\tilde{\eta}\_{\rm L}(\omega\_n)+G\_{1,N}(\omega\_n)\tilde{\eta}\_{\rm R}(\omega\_n)\bigg] \.\label{v-eqn-sm}\end{aligned}$$ Substituting this in Eq.  and averaging over both thermal and active noise gives $$\begin{aligned} \langle Q\_{\rm L} \rangle &\approx\sum\_{n=-\infty}^{+\infty} \bigg\{-i\omega\_n [G\_{1,1}(-\omega\_n)-G\_{1,1}(\omega\_n)]\gamma\_{\rm L} T\_{\rm L} -2\gamma\_{\rm L}\omega\_n^2\bigg(\sum\_{\ell=1}^{N}\dfrac{D^{\rm a}\_{\ell} |G\_{1,\ell}|^2}{\omega\_n^2+1/(t^{\rm a}\_{\ell})^{2}} +|G\_{1,1}|^2 \gamma\_{\rm L}T\_{\rm L}+|G\_{1,N}|^2 \gamma\_{\rm R}T\_{\rm R}\bigg)\bigg\} \. \label{this-new-eq-1}\end{aligned}$$ Using the definition of the Green’s function matrix , $$\begin{aligned} G(-\omega\_n)-G(\omega\_n)=2i\omega\_n G(-\omega\_n)\Gamma G(\omega\_n) \,\end{aligned}$$ thus $$\begin{aligned} G\_{1,1}(-\omega\_n)-G\_{1,1}(\omega\_n)&=2i\omega\_n \sum\_{\ell,m} G\_{1,\ell}(-\omega\_n)\Gamma\_{\ell,m} G\_{m,1}(\omega\_n)\\ &=2i\omega\_n \sum\_{\ell,m} G\_{1,\ell}(-\omega\_n)\delta\_{\ell,m}[\delta\_{\ell,1}\gamma\_{\rm L}+\delta\_{m,N}\gamma\_{\rm R}] G\_{m,1}(\omega\_n)\\ &=2i\omega\_n \sum\_{\ell} G\_{1,\ell}(-\omega\_n)[\delta\_{\ell,1}\gamma\_{\rm L}+\delta\_{\ell,N}\gamma\_{\rm R}] G\_{\ell,1}(\omega\_n)\\ &=2i\omega\_n (|G\_{1,1}|^2\gamma\_{\rm L}+|G\_{1,N}|^2\gamma\_{\rm R}) \, \label{c-8}\end{aligned}$$ where the last line follows from the symmetry of the Green’s function matrix. Substituting in the first term inside curly brackets in, and converting the summation into a time integral in the long-time limit, gives $J\_{\rm L}$ as in. Next, we compute the second scaled cumulant for the left heat flow. We square both sides of Eq.  to write $$\begin{aligned} Q\_{\rm L}^2&= \dfrac{\tau^2}{4}\sum\_{n,m=-\infty}^{+\infty}[\tilde{\eta}\_{\rm L}(\omega\_n) \tilde v\_1(-\omega\_n)+\tilde{\eta}\_{\rm L}(-\omega\_n) \tilde v\_1(\omega\_n)-2\gamma\_{\rm L} \tilde v\_1 (\omega\_n) \tilde v\_1(-\omega\_n)]~~\times\\ &~~~~~~~~~~~~~~~~~~~~[\tilde{\eta}\_{\rm L}(\omega\_m) \tilde v\_1(-\omega\_m)+\tilde{\eta}\_{\rm L}(-\omega\_m) \tilde v\_1(\omega\_m)-2\gamma\_{\rm L} \tilde v\_1 (\omega\_m) \tilde v\_1(-\omega\_m)]\nonumber\\ &=\dfrac{\tau^2}{4}\sum\_{n,m=-\infty}^{+\infty}[\tilde{\eta}\_{\rm L}(\omega\_n) \tilde v\_1(-\omega\_n)\tilde{\eta}\_{\rm L}(\omega\_m) \tilde v\_1(-\omega\_m)+\tilde{\eta}\_{\rm L}(-\omega\_n) \tilde v\_1(\omega\_n)\tilde{\eta}\_{\rm L}(-\omega\_m) \tilde v\_1(\omega\_m)\\&~~~~~~~~~~~~~~~~~~-4 \gamma\_{\rm L} \tilde{\eta}\_{\rm L}(\omega\_n) \tilde v\_1(-\omega\_n)\tilde v\_1 (\omega\_m) \tilde v\_1(-\omega\_m)-4 \gamma\_{\rm L} \tilde{\eta}\_{\rm L}(-\omega\_n) \tilde v\_1(\omega\_n)\tilde v\_1 (\omega\_m) \tilde v\_1(-\omega\_m)\nonumber\\&~~~~~~~~~~~~~~~~~~+2\tilde{\eta}\_{\rm L}(\omega\_n) \tilde v\_1(-\omega\_n)\tilde{\eta}\_{\rm L}(-\omega\_m) \tilde v\_1(\omega\_m)+4 \gamma\_{\rm L}^2\tilde v\_1 (\omega\_n) \tilde v\_1(-\omega\_n)\tilde v\_1 (\omega\_m) \tilde v\_1(-\omega\_m)] \. \nonumber\end{aligned}$$ Averaging over the noise distributions gives $$\begin{aligned} \langle Q\_{\rm L}^2\rangle-\langle Q\_{\rm L} \rangle^2&= \dfrac{\tau^2}{4}\sum\_{n,m=-\infty}^{+\infty}[\langle \tilde{\eta}\_{\rm L}(\omega\_n) \tilde{\eta}\_{\rm L}(\omega\_m) \rangle \langle \tilde v\_1(-\omega\_n)\tilde v\_1(-\omega\_m)\rangle+\langle\tilde{\eta}\_{\rm L}(-\omega\_n) \tilde{\eta}\_{\rm L}(-\omega\_m) \rangle\langle \tilde v\_1(\omega\_n) \tilde v\_1(\omega\_m)\rangle\label{big-eqn-1}\\& -4 \gamma\_{\rm L} \langle \tilde{\eta}\_{\rm L}(\omega\_n) \tilde v\_1 (\omega\_m) \rangle \langle \tilde v\_1(-\omega\_n)\tilde v\_1(-\omega\_m)\rangle-4 \gamma\_{\rm L} \langle \tilde{\eta}\_{\rm L}(-\omega\_n) \tilde v\_1 (\omega\_m) \rangle\langle \tilde v\_1(\omega\_n)\tilde v\_1(-\omega\_m)\rangle\nonumber\\& +2\langle\tilde{\eta}\_{\rm L}(\omega\_n) \tilde{\eta}\_{\rm L}(-\omega\_m)\rangle\langle \tilde v\_1(-\omega\_n) \tilde v\_1(\omega\_m)\rangle+4 \gamma\_{\rm L}^2\langle\tilde v\_1 (\omega\_n) \tilde v\_1 (\omega\_m) \rangle\langle \tilde v\_1(-\omega\_n)\tilde v\_1(-\omega\_m)\rangle +\nonumber\\& + \langle \tilde{\eta}\_{\rm L}(\omega\_n) \tilde v\_1(-\omega\_m)\rangle \langle \tilde{\eta}\_{\rm L}(\omega\_m) \tilde v\_1(-\omega\_n)\rangle+\langle\tilde{\eta}\_{\rm L}(-\omega\_n) \tilde v\_1(\omega\_m) \rangle\langle \tilde{\eta}\_{\rm L}(-\omega\_m)\tilde v\_1(\omega\_n)\rangle\nonumber\\& -4 \gamma\_{\rm L} \langle \tilde{\eta}\_{\rm L}(\omega\_n) \tilde v\_1(-\omega\_m) \rangle \langle \tilde v\_1 (\omega\_m) \tilde v\_1(-\omega\_n)\rangle-4 \gamma\_{\rm L} \langle \tilde{\eta}\_{\rm L}(-\omega\_n) \tilde v\_1(-\omega\_m) \rangle\langle\tilde v\_1 (\omega\_m) \tilde v\_1(\omega\_n)\rangle\nonumber\\&+2\langle\tilde{\eta}\_{\rm L}(\omega\_n) \tilde v\_1(\omega\_m)\rangle\langle\tilde{\eta}\_{\rm L}(-\omega\_m) \tilde v\_1(-\omega\_n)\rangle+4 \gamma\_{\rm L}^2\langle\tilde v\_1 (\omega\_n) \tilde v\_1(-\omega\_m) \rangle\langle\tilde v\_1 (\omega\_m) \tilde v\_1(-\omega\_n) \rangle ], \nonumber\end{aligned}$$ where we have used Wick’s theorem  for multivariate Gaussian distributions. We substitute $\tilde v\_1(\omega\_n)$ from Eq.  on the right-hand side of, then write the average over thermal and active noise in each term utilizing Eq. . This eventually leads to $$\begin{aligned} \dfrac{\langle Q\_{\rm L}^2\rangle-\langle Q\_{\rm L} \rangle^2}{\tau}&\approx \dfrac{1}{\tau}\sum\_{n=-\infty}^{+\infty}\bigg[(1-4\omega\_n^2\gamma\_{\rm L}^2 |G\_{1,1}|^2-4\omega\_n^2\gamma\_{\rm L}\gamma\_{\rm R} |G\_{1,N}|^2)~\times\\ &\quad \quad \bigg(\sum\_{\ell=1}^{N} \dfrac{4\gamma\_{\rm L}T\_{\rm L} \omega\_n^2D\_{\ell}^{\rm a}|G\_{1,\ell}(\omega\_n)|^2}{\omega\_n^2+(t\_\ell^{\rm a})^{-2}}+4\gamma\_{\rm L}^2T\_{\rm L}^2 \omega\_n^2|G\_{1,1}(\omega\_n)|^2+4\gamma\_{\rm L}T\_{\rm L}\gamma\_{\rm R}T\_{\rm R} \omega\_n^2|G\_{1,N}(\omega\_n)|^2\bigg)\nonumber\\ &\quad +2\bigg(\sum\_{\ell=1}^{N} \dfrac{2\gamma\_{\rm L} \omega\_n^2D\_{\ell}^{\rm a} |G\_{1,\ell}(\omega\_n)|^2}{\omega\_n^2+(t\_\ell^{\rm a})^{-2}}+2\gamma\_{\rm L}^2T\_{\rm L}\omega\_n^2|G\_{1,1}(\omega\_n)|^2+2\gamma\_{\rm L}\gamma\_{\rm R}T\_{\rm R}\omega\_n^2|G\_{1,N}(\omega\_n)|^2\bigg)^2 \nonumber\\ &\quad -2\omega\_n^2\gamma\_{\rm L}^2T\_{\rm L}^2 \big\{[G\_{1,1} (-\omega\_n)]^2 + [G\_{1,1} (\omega\_n)]^2\big\}\bigg].\nonumber\end{aligned}$$ In the long-time limit, the summation becomes a time integral, giving Eq. . Seifert, U., 2012. Stochastic thermodynamics, fluctuation theorems and molecular machines. Reports on progress in physics, 75(12), p.126001. Van Kampen, N.G., 1992. Stochastic processes in physics and chemistry (Vol. 1). Elsevier. Klages, R., Just, W. and Jarzynski, C. eds., 2013. Nonequilibrium statistical physics of small systems. Wiley-VCH Verlag GmbH & Company KGaA. Ritort, F., 2008. Nonequilibrium fluctuations in small systems: From physics to biology. Advances in chemical physics, 137, p.31. Plischke, M. and Bergersen, B., 2006. Equilibrium statistical physics. World Scientific Publishing Company. Evans, D.J., Cohen, E.G.D. and Morriss, G.P., 1993. Probability of second law violations in shearing steady states. Physical review letters, 71(15), p.2401. Searles, D.J. and Evans, D.J., 2000. Ensemble dependence of the transient fluctuation theorem. The Journal of Chemical Physics, 113(9), pp.3503-3509. Searles, D.J. and Evans, D.J., 2001. Fluctuation theorem for heat flow. International journal of thermophysics, 22(1), pp.123-134. Kurchan, J., 1998. Fluctuation theorem for stochastic dynamics. Journal of Physics A: Mathematical and General, 31(16), p.3719. Jarzynski, C., 1997. Nonequilibrium equality for free energy differences. Physical Review Letters, 78(14), p.2690. Crooks, G.E., 1999. Entropy production fluctuation theorem and the nonequilibrium work relation for free energy differences. Physical Review E, 60(3),
arxiv_0000154
the Higgs mechanism). In a lattice model, short-range couplings are of the form *e**α*(*x*)*e*− *α*(*x* + *δ*) where *δ* is of the order of the lattice constant. In a continuum field theory, these terms are represented by polynomials of spatial gradients  ∼ ∇*e**α*(*x*). In particular, the NG modes arise from these terms, and since the Goldstone theorem guarantees that NG modes exist whenever the groundstate spontaneously breaks symmetry, terms of the form *e**α*(*x*)*e*− *α*(*y*) or compositions thereof are always present. Summarizing, the allowed terms in the Hamiltonian have the groundstate as an eigenstate, or are of the form *e**α*(*x*)*e*− *α*(*y*) with *y* ≠ *x*. It is only the latter type of terms that can bring the classical groundstate to a superposition of states (a mixture of states with the same total eigenvalue of the order parameter). They are always present due to the Goldstone theorem. We see that these terms necessarily involve a factor of the positive root *e**α*(*x*). Such terms modify the classical groundstate, *unless* they annihilate the groundstate since they would take one outside of the Hilbert space. This is only the case if the groundstate is the highest-weight state. Also in this case, the classical groundstate is an eigenstate of the Hamiltonian, since all terms have either finite eigenvalue or annihilate the state, in other words: have eigenvalue 0. Thus properties (iii) and (iv) are equivalent. One may now ask: if the dynamic terms annihilate the highest-weight state, how can NG modes exist? The important point is that the NG modes are propagating modes with a finite momentum (even if very small). Furthermore, they are excitations that are caused by perturbations, for instance thermal fluctuations or an external force (linear response). The *k* = 0 fluctuations, which are in fact the different classical states in *G*/*H*, are however always forbidden by the reasoning in this section. Concluding, we have shown that quantum fluctuations are only absent in groundstates that are highest-weight states, and that they are always present in other cases, even if the order parameter operator commutes with the Hamiltonian. Then total order parameter is conserved, but it may fluctuate locally. Thus (iii) and (iv) always imply (i) but not vice versa. The reverse direction is the topic of the next section. Multitude of order parameters ----------------------------- We now elucidate the structure of order parameter operators necessary to completely specify the SSB consequences. As we have seen above, there exist states with order parameter operators commuting with the Hamiltonian that nevertheless allow for quantum fluctuations. One would like to have a criterion for when and when not such a phenomenon occurs. This is nicely provided by a recent development concerning the full spectrum of excitations related to broken symmetry generators by Hayata and Hidaka . These authors consider not just one but all possible order parameter operators. More specifically, they classify all possible interpolating fields collected as $$\label{eq:total interpolating field} \mathbf{\Phi}(x) = \begin{pmatrix} \phi^i(x) \\ j^b(x) \end{pmatrix},$$ such that for each component of **\Phi**(*x*) there is at least one symmetry generator *Q**a* for which ⟨[*Q**a*, *ϕ**i*(*x*)]⟩ ≠ 0, or ⟨[*Q**a*, *j**b*(*x*)]⟩ ≠ 0. Here the *j**b*(*x*) are Noether charge densities as before, but the operators *ϕ**i* are different local operators acting on order parameter space *G*/*H*. The set {*ϕ**i*(*x*), *j**b*(*x*)} should consist of linearly independent operators, and they should excite linearly independent modes from the symmetry-broken groundstate, that is, the determinant of the matrix ⟨[*Q**a*, Φ*b*(*x*)]⟩ should be non-zero. The Noether charge densities *j**b* follow the story outlined above, but we also need to consider the other interpolating fields, which will lead to order parameter operators [*Q**a*, *ϕ**i*(*x*)] that are not symmetry generators themselves. Because of the transformation properties under the symmetry group, the expectation value ⟨[*Q**a*, *ϕ**i*]⟩ = ∑*j**c**a**i**j*⟨*ϕ**j*⟩ is always some linear combination of the expectation values of the operators *ϕ**i* itself, i.e. not of Noether charge densities *j**b*. An example of the latter is the staggered magnetization of the antiferromagnet (Sec. [subsec:example Antiferromagnet]) and the ferrimagnet (Sec. [subsec:example Ferrimagnet]). Hayata and Hidaka classify the types of NG modes using Eq.. If a symmetry generator *Q**a* is only broken by interpolating fields *ϕ**i* but not by any Noether charge density *j**b*, it will excite a type-A NG mode. Conversely, if *Q**a* is broken by a Noether charge density *j**b*, then it will excite a type-B NG mode. So far there is nothing new. However, it may happen that a generator *Q**a* is broken by both a Noether charge density *j**b* *and* another operator *ϕ**i*. In practice, this always involves a pair of symmetry generators *E**α*+, *E**α*− and a pair of interpolating fields *ϕ**i*. The authors have shown that in that case, there is a gapped “partner” mode in the spectrum, and this mode is excited by the broken generator *Q**a*, provided the determinant of ⟨[*Q**a*, Φ*K*(*x*)]⟩ where *K* = {*i*, *b*} is non-zero, see [appendix:linear independence of excitations] for a working example. This corresponds precisely to the structure we have found in Sec. [subsec:Highest-weight states]. We know that one particular combination of the pair of broken Noether charge densities *j**a*, *j**b*, typically the lowering operator *e*− *α*, excites the massless NG mode. Then the Hermitian conjugate *e**α* excites the gapped partner mode. The reason for the presence of quantum fluctuations is now clear: even if the order parameter operator [*Q**a*, *j**b*] is conserved in time, the order parameter operator [*Q**a*, *ϕ**i*] is not. The choice of order parameter always possesses some arbitrariness, since the only requirement is that it be finite in the ordered state while it vanish in the disordered state. We cannot *a priori* decide which order parameter is ‘best’, and it befits us to consider all possible operators. Note that a similar course of action is necessary to determine whether a broken symmetry generator *Q**a* excites a type-A NG mode: one needs to make sure that all the combinations ⟨[*Q**a*, *j**b*]⟩ in Eq.  vanish. The quantum fluctuations could then be said to modify the classical state |Ψcl⟩ as an eigenstate of the order parameter operator [*Q**a*, *ϕ**i*], while leaving it in an eigenstate of the order parameter operator [*Q**a*, *j**b*]. Again, it is clear that the presence of type-A NG modes is associated with order parameters operators *ϕ**i* which do not commute with the Hamiltonian and hence always lead to quantum fluctuations. We can identify the following two cases in the absence of any type-A NG modes: 1. *We are in a highest-weight state for all root generators; any fluctuations would take one out of the Hilbert space and are forbidden; there is no additional order parameter operator that does not commute with the Hamiltonian.* 2. *There is a finite Noether charge density, but the state is not maximally polarized; the root generators will excite a NG mode and a gapped partner mode; there is an additional order parameter operator not commuting with the Hamiltonian.* This settles the issue mentioned above: even if the naive order parameter operator—that nevertheless determines the massless NG modes—commutes with the Hamiltonian, there may be other order parameter operators that do not. In that case, quantum fluctuations are present. Note that if one were to look only at the order parameter operators *ϕ**i*, one would correctly conclude the spontaneous breaking of symmetry generators *Q**a*, yet one would not recognize that they actually excite type-B and not type-A NG modes. Thus one needs the complete information contained in **\Phi**(*x*) of Eq. to characterize the low-energy spectrum and the presence of quantum fluctuations. As an example, consider the ferrimagnet, which is an antiferromagnet of spins of unequal magnitude on each sublattice (see Sec. [subsec:example Ferrimagnet]). This state has staggered magnetization, but the total magnetization does not cancel out and is finite. In this case, both the magnetization and the staggered magnetization are order parameters and both signal the onset of magnetic ordering in some preferred direction. Since the commutator expectation value of the broken spin rotation generators does not vanish, there is a type-B NG mode which is a spin wave (magnon) of spin precession consistent with the magnetization direction. But since the staggered magnetization is an order parameter as well, there must additionally be a gapped partner mode, the spin wave precession opposite to the magnet, consistent with for instance Refs. . Another example is the non-maximally polarized ferromagnet, which is explained in more detail in Sec. [subsec:example Intermediately polarized magnet]. Take a high-spin system *s* > 1 with biquadratic Heisenberg-type Hamiltonian. It could be imagined that for a certain choice of parameters, the groundstate is not a canted state, but a uniform state with lower but non-zero magnetic quantum number *m* < *s*. This state has a non-zero magnetization like the ordinary, maximally polarized ferromagnet, but quantum fluctuations are possible since the raising operators do not annihilate this state. From a symmetry point of view, the magnetization *M* = *S**z* is a good order parameter operator, but the nematic tensor *N**z**z* is as well, where $N^{ab} = \frac{1}{2}(S^a S^b + S^b S^a) - c \delta\_{ab}$ and *c* is some number to make the combination traceless. The nematic tensor does not commute with the Hamiltonian. It can be shown that the excitations generated by *N**x**z* and *N**y**z* are linearly independent from those by *S**x*, *S**y*, and therefore can lead to gapped modes. In the maximally polarized ferromagnet, *N**z**z* is also a good order parameter operator, but the excitations are not linearly independent, precluding the existence of a gapped partner mode . See [appendix:linear independence of excitations] for an explicit derivation. Time-reversal symmetry ---------------------- It is claimed by several authors  that the spontaneous breaking of time-reversal symmetry is responsible for the peculiarities of the ferromagnet, such as the quadratic NG dispersion relation. It was conceived that the asymmetry between temporal and spatial derivatives in *ω* ∝ *k*2 could only happen when time-reversal symmetry is broken; then a time-reversal odd term in the effective Lagrangian is allowed. This is not the case. Obviously Lorentz invariance is broken in such systems, but there is no fundamental reason time-reversal should be as well. The term with a single time derivative  ∼ *π**a*∂*t**π**b* in Eq.  can be time-reversal even or odd, depending on the transformation properties of *π**a* and *π**b*, which follow the properties of the symmetry generators that excite these modes. Let us take a look at the Lie algebra of symmetry generators and their behavior under time reversal. Consider the Lie algebra relations [*j**a*, *j**b*] = i*f**a**b**c**j**c*, and for simplicity, take the Lie group *S**U*(2) with *f**a**b**c* = *ε**a**b**c*,  *a*, *b*, *c* ∈ *x*, *y*, *z*. Under time reversal, i →  − i. There are now two possibilities: all three of *j**x*, *j**y*, *j**z* are odd under time-reversal; or two are even and one is odd. The first case is realized in Heisenberg magnets, where the Noether charge densities are spin rotations, which are all odd under time-reversal. In the latter case, a Noether charge density that is even under time-reversal may obtain an expectation value, leading to a type-B NG mode. The groundstate, an eigenstate of the Noether charge density, is then time-reversal invariant as well. Examples of the second case  include the isospin group, which has the same *S**U*(2) structure, but *τ**x* and *τ**z* are even under time reversal, while *τ**y* is odd. If *τ**z* were to pick up an expectation value, it would lead to a time-reversal invariant state with a type-B NG mode. The term Eq.  contains one even field, one odd field and one time derivative, the combination of which is even under time reversal. This was also noticed in Ref. . Surely, if a Noether charge density obtains an expectation value ⟨*Q**c*⟩ = *M*, then the Z2 symmetry *Q**c* →  − *Q**c* is spontaneously broken, but this may or may not be the transformation under time-reversal symmetry. It must be concluded that systems with finite Noether charge densities and type-B NG modes can but not must have spontaneously broken time-reversal invariance, property (vii) in Fig. [fig:FM equivalencies]. Thin spectrum and decoherence ============================= A very important chapter in the book of spontaneous symmetry breaking concerns the so-called thin spectrum, or (Anderson) tower of states, consisting of a sparse set of states of extremely low energy above the exact groundstate |Ψsymm⟩. Its existence was recognized by Anderson  in 1952 for the Heisenberg antiferromagnet on a square lattice, but its mathematical structure was not realized until later . These states are so close to the groundstate that they are realistically accessible at any finite temperature; the symmetry-broken state realized in nature ∣Ψ0⟩ of any finite-size system is then a superposition of the absolute groundstate and states in the thin spectrum. This wavepacket is very stable, and in almost all cases very close to the classical state ∣Ψcl⟩. It should be regarded at as the groundstate for all practical purposes, for instance in the derivation of the Goldstone theorem (or at least to establish the spectrum of NG modes). Structure of the thin spectrum ------------------------------ Let us make these statements more precise. True spontaneous symmetry breaking is mathematically rigorous only for infinitely large systems. Then all the states in configuration space *G*/*H* are exactly degenerate. After one of these has been spontaneously chosen (perhaps by an external perturbation), it will remain in that state forever, since the tunneling matrix element between the different degenerate states vanishes in the infinite limit . For finite systems, a superposition ∣Ψsymm⟩ of all the states in *G*/*H* has lower energy than the classical states ∣Ψcl⟩. This is easy to see starting from a spin-$\tfrac{1}{2}$ antiferromagnet on two sites only: the singlet configuration | ↑  ↓ ⟩ − | ↓  ↑ ⟩ has a lower energy than any of the pure tensor product (Néel) states | ↑  ↓ ⟩ or | ↓  ↑ ⟩. This extreme case persists for larger systems, and |Ψsymm⟩ is the absolute lowest energy state, although the gap to the lowest excitation becomes very small quickly. Koma and Tasaki have shown that the state ∣Ψsymm⟩, which extrapolates to what they call “naive infinite volume limit” of the finite-volume groundstate like the spin singlet, is inherently unstable against perturbations . What happens instead is that the would-be degenerate states differ from the groundstate by a tiny energy gap. The spacing of these states goes as 1/*N* where *N* is the number of degrees of freedom, typically the number of particles. For any macroscopic system the energy of these states is clearly very low, and they become degenerate with the groundstate for *N* → ∞. If one regards the macroscopic system as a very heavy rotor in configuration space *G*/*H* with moment of inertia *I* ∝ *N*, then these states are the excitations of this rotor with energy levels like ℏ2/*I* . This is the Anderson tower of states. Another name, thin spectrum, was coined to indicate that the density of these states is small, and they do not contribute to the thermodynamic properties of the macroscopic system like specific heat . It can for instance be shown that the contribution to the free energy of a spin system typically scales as  ∼ ln*N*/*N*, which vanishes for large *N* . The notion of the thin spectrum is so profound, that its presence is even used as evidence of long-range order . Thin spectrum as quantum fluctuations ------------------------------------- An alternative view of the thin spectrum was put forward by Van Wezel and coworkers . Since the classical groundstate ∣Ψcl⟩ is generally not the exact lowest energy state, there are quantum fluctuations tending towards lower energy. These fluctuations are virtual excitations, which are the deviations of the order parameter from its classically preferred value. As an example, take the spin waves in an antiferromagnet as mentioned in Sec. [subsec:first example antiferromagnet]. The finite-momentum *k* > 0 fluctuations can be readily taken into account, but the ones at exactly zero momentum *k* = 0 should be treated separately. In a standard derivation of fluctuations involving Bogoliubov transformations, the *k* = 0 components would lead to a singularity . These *k* = 0 excitations correspond to the fluctuations of the macroscopic, spontaneously broken state *as a whole*. They are fluctuations of the center of mass, of the heavy rotor mentioned above. In very small systems, these *k* = 0 fluctuations actually occur on a noticeable timescale, and must be properly accounted for . Summarizing, the states in the thin spectrum are the *k* = 0 quantum fluctuations of the system as a whole. Then we must immediately conclude that, if a system has no quantum fluctuations as detailed in the previous section, there are also no *k* = 0 quantum fluctuations and hence no thin spectrum. This is easy to see in the Heisenberg ferromagnet. The classical, polarized groundstate is an exact eigenstate of the Hamiltonian, and the system as a whole will remain in this state perpetually, even for finite-size systems. There is no lower-energy state to strive towards, and all the classical states are perfectly degenerate. There is no thin spectrum. Therefore property (iii) or equivalently (iv) implies (v) (see Sec. [sec:Introduction] and Fig.  [fig:FM equivalencies]). Decoherence time ---------------- The energy of the states in the thin spectrum is lower than that of the lowest NG mode, which has a minimum of momentum inversely proportional to the system size . This causes a fundamental limit on the coherence time of quantum superpositions of small but macroscopic systems. Examples include superconducting flux qubits, Cooper pair boxes, spins in solids, atomic condensates in optical lattices etc. . The argument is as follows . We are going to consider a superposition of two slightly different spontaneously broken states, typically one groundstate and one excited state. A good example is two states with *N* and *N* + 1 particles respectively, which is quite literally realized in a Cooper pair box (charge qubit). Call these two states |*m*⟩ where *m* = 0, 1, with energies *E**m*. But in reality, each of them has a thin spectrum of states, labeled by *n*. Therefore we are talking about states |*m*, *n*⟩ with energies *E**m**n*. The thin spectrum states have such low energy that they are always thermally populated. Then, we need to consider these thermal mixtures denoted by the density matrix $\rho\_m = \frac{1}{Z\_m} \sum\_n {\mathrm{e}}^{-\beta E^n\_m} \lvert m,n\rangle \langle m,n \rvert$. Here *Z**m* = ∑*n*e− *β**E**m**n* is the partition function and *β* = 1/*k*B*T*. The thermal noise in the states of the thin spectrum is a cause of decoherence which is unavoidable, since these states are so close to the groundstate. It is possible to create an initial superposition of the two states *m* = 0, 1 given by the density matrix $$\begin{aligned} \rho\_\mathrm{sup} &= \frac{1}{2Z} \sum\_n {\mathrm{e}}^{-\beta E^n\_0} \big( \lvert 0,n \rangle \langle 0,n \rvert + \lvert 1,n \rangle \langle 1,n \rvert \nonumber\\ &\phantom{mmmmmmmmm} +\lvert 0,n \rangle \langle 1,n \rvert +\lvert 1,n \rangle \langle 0,n \rvert \big).\end{aligned}$$ The unitary time evolution via operator *U* = e− i/ℏ H*t* of energy eigenstates |*m*, *n*⟩ is obviously *U*|*m*, *n*⟩ = e− i/ℏ *E**m**n**t*|*m*, *n*⟩, and the density matrix evolves as *ρ* → *U**ρ**U*†. The last two terms in the density matrix pick up phase factors e± i/ℏ (*E*0*n* − *E*1*n*)*t*. Since the thin spectrum states are in practice unobservable, we should trace the density matrix over these states. The off-diagonal terms in the density matrix thus obtained are $$\rho\_\textrm{off-diag} = \frac{1}{2Z} \Big( \sum\_n {\mathrm{e}}^{-\beta E^n\_0} {\mathrm{e}}^{-{\mathrm{i}}/\hbar\; ( E^n\_0 - E^n\_1) t} \Big) \lvert 0 \rangle \langle 1 \rvert$$ and its Hermitian conjugate. If these off-diagonal terms vanish, the superposition has been lost: decoherence. Now the phase factors are proportional to *E*0*n* − *E*1*n* which will in general be finite. These phase factors lead to destructive interference, and thereby cause decoherence. The typical timescale for this decoherence is set by the inverse of the energy difference Δ*E*thin = *E*0*n* − *E*1*n*, namely *t*coh ∝ ℏ/Δ*E*thin. Note that the energy difference *E*00 − *E*10 is constant in time and does not contribute to dephasing (this is the standard constant phase factor for superpositions of states of different energy). The number of states participating in dephasing depends on the temperature, and is of the order of *E*thin/*k*B*T*, where *E*thin is the typical spacing of energy levels in the thin spectrum. Van Wezel and coworkers have shown that in general Δ*E*thin ∝ *E*thin/*N* . Putting this together, we find a maximum coherence time limited by the dephasing due to thermal population of thin spectrum states of the order of $$\label{eq:thin spectrum coherence time} t\_\mathrm{coh} \propto \frac{\hbar}{\Delta E\_\mathrm{thin} } \frac{E\_\mathrm{thin} }{k\_\mathrm{B} T} = \frac{\hbar}{k\_\mathrm{B} T} \frac{E\_\mathrm{thin} }{\Delta E\_\mathrm{thin} } = \frac{\hbar}{k\_\mathrm{B} T} N.$$ Interestingly, the actual details of the thin spectrum do not matter; because Δ*E*thin is proportional to *E*thin, these factors cancel, and only the size of the system expressed in the number of particles *N* is of importance. The maximum coherence time is longer for larger systems, according to Eq. . The reason is that the thin spectrum states become more and more degenerate with the groundstate and do not cause dephasing as much. For a life-size system *N* ≈ 1024, and at room temperature ℏ/*k*B*T* ≈ 10− 14s such that *t*coh ≈ 1010s. But for mesoscopic systems *N* ≈ 105–108 at laboratory temperature *T* ∼ 1*K*, this limit *t*coh ≈ 10− 5s starts competing with ordinary, environmental sources of decoherence. Experiments ----------- This fundamental limit to coherence time of superpositions of macroscopic systems, which is unavoidable because the thin spectrum is always present and very close to the groundstate (much closer than the lowest ‘gapless’ excitation which has a wavenumber inversely proportional to the system size), is not only of practical importance when creating devices for quantum information purposes, but also provides an experimental test of the claims made in this paper. Namely, in ordered systems where the groundstate is a highest-weight state, like the Heisenberg ferromagnet, there is no thin spectrum and hence this fundamental limit does not apply. An experimentalist could craft two very similar systems with a small but macroscopic number of particles, say *N* ∼ 10000. For instance the Hamiltonian of one could have as the groundstate |Ψ0⟩ a maximally polarized state and the other an intermediately polarized state (see Sec. [subsec:example Intermediately polarized magnet]). Create macroscopic superpositions, for instance of the groundstate and a magnon state (cf. Ref. ). If these systems could be isolated and protected from other sources of decoherence well enough, the system with non-maximal polarization should show decoherence where the maximally polarized one should not. In this way the predictions made in this work could be tested quite directly. Note that a real ferromagnet is probably not suitable for such an experiment, since a magnetic material is also a crystal that breaks spacetime symmetry and has phonon NG modes and an associated thin spectrum, next to other sources of decoherence. Examples ======== We shall now look at several models that illustrate the SSB, NG mode spectrum and presence of quantum fluctuations in distinct ways. The first few examples are spin-*s* Heisenberg magnets on bipartite lattices with Hamiltonian H = *J*∑⟨*j**l*⟩**S***j* ⋅ **S***l* + *K*∑⟨*j**l*⟩(**S***j* ⋅ **S***l*)2 The operators *S**j**a*, *a* = *x*, *y*, *z* are generators of *S**U*(2) in the spin-*s* representation at lattice site *j*, *J* is the exchange parameter, *K* the biquadratic exchange parameter, and the sums are over nearest-neighbor lattice sites. The Hamiltonian is invariant under *S**U*(2)-rotations of all spins simultaneously, generated by *S**a* = ∑*j**S**j**a*. The first term prefers aligned (*J* < 0) or anti-aligned (*J* > 0) spin on neighboring sites. The biquadratic term, for which we always take *K* ≥ 0, prefers to have neighboring spin orthogonally oriented, in the classical case. Thus for small values of *K* it induces a canting from the aligned or anti-aligned state. In quantum models, this term can also be minimized by choosing a lower magnetic quantum number *m* < *s*, whereas the ferromagnetic term *J* < 0 is minimal for large, aligned values of *m*. See Fig. [fig:magnet cartoons] for a visual image of the types of magnets considered. [fig:magnet cartoons] Antiferromagnet --------------- Let the exchange parameters be *J* > 0, *K* = 0. The spins prefer to anti-align, in a spontaneously chosen direction which we can take to be the *z*-axis. The classical groundstate is the Néel state, with order parameter operator the staggered magnetization S*z* = ∑*j*S*j**z* = ∑*j*( − 1)*j**S**j**z*. Rotations around the *z*-axis are unbroken, while the other two generators are spontaneously broken: ⟨[*S**a*, S*j**b*]⟩ = i*ε**a**b**z*⟨S*j**z*⟩ ≠ 0. The symmetry breaking pattern is *S**U*(2) → *U*(1), and the order parameter space is *S**U*(2)/*U*(1) ≃ *S*2, the two-sphere. The expectation value of the Noether charge densities ⟨*S**j**z*⟩ is zero. There are two type-A NG modes, magnons with linear dispersion. For low dimension *D* and low *s*, quantum fluctuations are pronounced, most extreme for $s = \tfrac{1}{2}, D=2$, where the staggered magnetization is reduced to about 60% . Ferromagnet ----------- Take *J* < 0, *K* = 0. The groundstate has all spins aligned in a spontaneously chosen direction *z*. The order parameter operator is the generator *S**z* itself. The symmetry breaking pattern is the same as for the antiferromagnet, *S**U*(2) → *U*(1), but now a finite Noether charge density ⟨*S**z*⟩ = *M* ≠ 0 is present, such that the two broken densities *S**k**x* and *S**k**y* excite the same type-B NG mode with quadratic dispersion. If *M* > 0, *S**k*− = *S**k**x* − i*S**k**y* excites the NG mode, while *S**k*+ = *S**k**x* + i*S**k**y* takes one out of the Hilbert space. This is case 1) of Sec. [subsec:Multitude of order parameters] and there is no gapped partner mode. The state is maximally polarized and there are no quantum fluctuations. The classical groundstate is an eigenstate of the Hamiltonian, and there is no thin spectrum, even for small finite-size systems. Ferrimagnet ----------- Ferrimagnets are antiferromagnets that have spins of unequal magnitude on each of the sublattices. We model this with the same Hamiltonian as the antiferromagnet, *J* > 0, *K* = 0, but the spin *s**A* on the *A*-sublattice is different from *s**B* on the *B*-sublattice. For instance, let *A* have spin-1 and *B* spin-$\tfrac{1}{2}$. One must be careful about how to formulate the symmetry transformations, but the Hamiltonian has an *S**U*(2)-symmetry as before. For *K* = 0 we again have the symmetry breaking pattern *S**U*(2) → *U*(1) and rotations about the *z*-axis are unbroken. However, because of the imbalance in spin, the expectation value of the generator *S**z* is non-zero. Thus we have a single type-B NG mode for the broken generators *S**x*, *S**y*. The staggered magnetization, not commuting with the Hamiltonian, is also an order parameter for this state. We find ourselves in case 2) of Sec. [subsec:Multitude of order parameters] and there is a gapped partner mode. This is consistent with spin wave theory and numerical calculations . Canted magnet ------------- Now take a finite biquadratic exchange parameter, *J* < 0, *K* > ∣*J*∣/2. The biquadratic term penalizes a too-high degree of magnetization. This will induce canting of the spins, i.e. classically they will make an angle with the magnetization axis *z*. The canting can be coplanar or not, to be determined by other terms in the Hamiltonian. The classical canting angle *θ* is given by cos2*θ* = ∣*J*∣/2*s*2*K* for spin-*s*. The rotation symmetry about the *z*-axis is now spontaneously broken as well, and the symmetry breaking pattern is *S**U*(2) → 1, leading to a type-A NG mode in addition to the type-B NG mode of the ferromagnetic kind. The state is not maximally polarized and there is a gapped partner mode to the type-B mode, as well as the type-A mode. There are quantum fluctuations, for which it is already sufficient to recognize the presence of the type-A NG mode. It is very instructive to see how the canted magnet emerges from its two limiting cases, the ferromagnet with polarization along the *z*-axis and the antiferromagnet with polarization along the *x*-axis. In the first case, we start out with a ferromagnet with its type-B NG mode excited by *S**x* and *S**y* and no gapped partner mode. The expectation value for *S**z* is maximal, namely ⟨*S**z*⟩ = *M* = *N**s*. Introducing a canting angle *θ* will lower this expectation value to *N*(*s*cos*θ*). First of all, excitations that increase the magnetic quantum number, generated by *S*+, are now possible, and the gapped partner mode emerges. Next, rotations around the *z*-axis are now broken as well. In a coplanar canted magnet, there is now staggered magnetization in for instance the *x**z* plane. This additional breaking of the *U*(1)-symmetry leads to an additional type-A NG mode. Consequently we find one type-A NG mode associated with broken rotations around the magnetization axis, and one type-B NG mode with its gapped partner mode, associated with the other two broken rotations. Starting out from the antiferromagnet is a bit more intricate. Staggered magnetization along the *x*-axis breaks *S**y* and *S**z*, and there are two type-A NG modes. Inducing a canting angle leads to a non-zero magnetization in a perpendicular direction, let us say along the *z*-axis. This causes a ferromagnetic-type symmetry breaking of *S**x* and *S**y*; these two generators now conspire to excite one type-B NG mode and one gapped partner mode. Meanwhile the type-A NG mode excited by *S**z* persists. We find the same spectrum of one type-A, one type-B and one gapped partner mode. From this point of view there is no difference between a canted ferromagnet and a canted antiferromagnet. These results are consistent with the spectrum of canted magnets derived via effective Lagrangians . Note that many canted magnets in nature arise by acting on an antiferromagnet with a magnetic field in another direction than the staggered magnetization axis. This field could be intrinsic or external. In these cases, the additional symmetry breaking is not spontaneous but explicit, i.e. due to a symmetry-breaking term in the Hamiltonian. The new NG mode obtains a gap as a result of this, but the rest of the argument follows similar lines. Intermediately polarized magnet ------------------------------- Again, consider parameters *J* < 0, *K* > ∣*J*∣/2. For *s* > 1, it is possible that the parameters *J* and *K* are fine-tuned such that the average magnetization density *s*cos*θ* is precisely equal to an allowed magnetic quantum number *s* > *m* > 0. It could be surmised that the groundstate will have all spins in the state ∣*s*, *m*⟩ (classical spins always have length *s* and the only thing they can do is canting). Such intermediately-polarized states are identified in spin-2 and spin-3 spinor BECs, and can be stable in the latter case . However, spinor BECs are more complicated since they also carry superfluid sound, see below. They follow again the *S**U*(2) → *U*(1) pattern, and have finite magnetization *M* = *N**m*, with *N* the number of sites. However, the state is clearly not maximally polarized, and *S**k*− excites a type-B NG mode and *S**k*+ a gapped partner mode. See [appendix:Holstein–Primakoff transformation for finite magnetization] for a Holstein–Primakoff derivation of these modes. The staggered magnetization vanishes, but by the reasoning of Sec. [subsec:Multitude of order parameters] we know there must be another order parameter operator. In this case the nematic tensor $N^{ab} = \sum\_j \tfrac{1}{2} ( S^a\_j S^b\_j + S^b\_j S^a\_j) - c \delta\_{ab}$ can be seen to obtain an expectation value in the *N**z**z*-component. The expectation values ⟨[*S**x*, *N**j**y**z*]⟩ and ⟨[*S**y*, *N**j**x**z*]⟩ do not vanish and indicate the existence of the gapped partner mode. Interestingly, the nematic tensor is also an order parameter operator for the ferromagnet, but then it can be shown that the mode associated with these expectation values is linearly dependent on the NG mode, and cannot be taken as an independent, gapped, mode  (cf. ). See [appendix:linear independence of excitations] for more details. Spinor Bose-Einstein condensates -------------------------------- Due to the advances in experimental techniques that can cool and trap atoms very efficiently, there has been abundant recent research into Bose–Einstein condensates of atoms with higher spin *s* = 1, 2, 3. These atoms do not only form a bosonic superfluid, but their spin degrees of freedom lead to very interesting physics as well. The symmetry group of the disordered states is *U*(1) × *S**O*(3), where the *U*(1) refers to the superfluid phase variable, and the *S**O*(3) are rotations of the spin degree of freedom. There is a plethora of ordered states which breaks this symmetry to various continuous or discrete subgroups. In particular, a phase with intermediate polarization as described in the previous section is realized in spin-2 under magnetic field (F1 phases) and spin-3 as a groundstate (F phase) spinor-BECs, where we follow the nomenclature of Ref. . These systems, also because of their small sizes (*N* ∼ 10000), would seem to be ideal testing grounds for the decoherence limit due to the thin spectrum (see Sec. [sec:Thin spectrum and decoherence]). One complication is that, for certain choices of parameters in the Hamiltonian, the symmetry group may actually be larger than *U*(1) × *S**O*(3), leading to additional NG modes. More importantly, one must be very careful to account for the fact that the U(1) superfluid phase is always spontaneously broken, and there is always a type-A NG mode (phonon/zero sound) in the spectrum. This is already sufficient to guarantee the existence of a thin spectrum, which could instigate the same limit even in the spinor-ferromagnetic phase. Whether there is a noticeable difference in decoherence time when there are additional thin spectrum states due to the spin degrees of freedom, is left to further investigation. Linear sigma model ------------------ As an example from field theory (as opposed to condensed matter physics), consider the linear sigma model for a complex scalar doublet field *ϕ* = (*ϕ*1, *ϕ*2), with Lagrangian L = |(∂*t* − i*μ*)*ϕ*|2 + |∂*m**ϕ*|2 − *m*2∣*ϕ*∣2 − *λ*∣*ϕ*∣4. Here *μ* is a chemical potential, not to be confused with the weight vector of Sec. [subsec:Highest-weight states]. This model captures the physics of the color–flavor-locked phase in kaon condensates in quantum chromodynamics at high density (large *μ*), and is worked out in detail in Refs. . The Lagrangian without chemical potential is invariant under *S**U*(2) × *S**U*(2)-transformations of the doublet field: $$\begin{aligned} \begin{pmatrix} \phi\_1 \\ \phi\_2 \end{pmatrix} &\to {\mathrm{e}}^{{\mathrm{i}}\vec{\theta}\_\mathrm{L} \cdot \vec{\sigma}\_\mathrm{L}} \begin{pmatrix} \phi\_1 \\ \phi\_2 \end{pmatrix},& \begin{pmatrix} \phi\_2^\* \\ \phi\_1 \end{pmatrix} &\to {\mathrm{e}}^{-{\mathrm{i}}\vec{\theta}\_\mathrm{R} \cdot \vec{\sigma}\_\mathrm{R}} \begin{pmatrix} \phi\_2^\* \\ \phi\_1 \end{pmatrix}.\end{aligned}$$ Here *σ⃗*L, R are vectors containing Pauli matrices. A finite chemical potential breaks Lorentz invariance, and also breaks the internal symmetry explicitly down to *S**U*(2) × *U*(1). The symmetry generators *σ*R*x* and *σ*R*y* are broken explicitly and they excite gapped modes. If *m*2 − *μ*2 turns negative, the internal symmetry is further broken spontaneously down to the *U*(1) group generated by I + *σ*L*z*. The groundstate expectation value of the field can be chosen to be ⟨*ϕ*⟩ ≡ ⟨Ψ0|*ϕ*|Ψ0⟩ = (0, *v*), *v* ∈ R; the order parameter operator can be chosen as *σ*L*z* itself. Therefore a type-B NG boson arises, which is excited by *σ*L*x* and *σ*L*y*. The symmetry generated by I − *σ*L*z* is also spontaneously broken and excites a type-A NG boson. The spectrum of the lowest excitations has been worked out in Refs. . The modes excited by *σ*L*x* and *σ*L*y* have dispersions $ \omega\_\pm = \sqrt{ k^2 + \mu^2} \pm \mu$. The negative sign corresponds to the NG boson with quadratic dispersion, while the positive sign corresponds to the gapped partner mode with gap 2*μ*. We can connect this spectrum to the discussion in Sec. [subsec:Multitude of order parameters]. From the Lagrangian we can find the Noether charge densities explicitly: *j**t**A* =  − i*π**a**T**a**b**A**ϕ**b* + i*ϕ**a*†*T**a**b**A**π**b*†. Here we have grouped the symmetry generator matrices as *T**A* = (I, *σ**A*), and the canonical momenta are $$\begin{aligned} \label{eq:linear sigma canonical momenta} \pi\_a &= \frac{\partial \mathcal{L} }{\partial (\partial\_t \phi\_a)} = (\partial\_t + {\mathrm{i}}\mu )\phi^\dagger\_a,& \pi^\dagger\_a &= \frac{\partial \mathcal{L} }{\partial (\partial\_t \phi^\dagger\_a)} = (\partial\_t - {\mathrm{i}}\mu )\phi\_a.\end{aligned}$$ The canonical commutation relations are [*ϕ**a*(*x*), *π**b*(*y*)] = [*ϕ**a*†(*x*), *π**b*†(*y*)] = i*δ**a**b**δ*(*x* − *y*),  and all other combinations have vanishing commutation relations. The Noether charges are *Q**A* = ∫d*D**x* *j**t**A*(*x*). For the commutation relation between the Noether charges amongst each other one can calculate, [*Q**A*, *j**t**B*(*x*)] =  − i*π*(*x*)[*T**A*, *T**B*]*ϕ*(*x*) + i*ϕ*†(*x*)[*T**A*, *T**B*]*π*†(*x*). For our case ⟨*ϕ*⟩ = (0, *v*), we look at *A*, *B* = *x*, *y* and [*T**A*, *T**B*] = i*σ**z*. Substituting the canonical momenta Eq. , we find $$\begin{aligned} \langle [Q^x,j^y\_t(x)] \rangle &= {\mathrm{i}}\langle \big[ - {\mathrm{i}}(\partial\_t \phi^\dagger) \sigma^z \phi + {\mathrm{i}}\phi^\dagger \sigma^z \partial\_t \phi + 2\mu \phi^\dagger \sigma^z \phi \big] \rangle \nonumber\\ &={\mathrm{i}}2\mu \big[ \langle \phi^\*\_1 \phi\_1 \rangle - \langle \phi^\*\_2 \phi\_2 \rangle \big] = {\mathrm{i}}2 \mu v^2.\end{aligned}$$ We used the translational invariance of the groundstate to drop the spatial dependence of the left-hand side. Here the groundstate expectation value of the first two terms in the first line vanishes as usual (cf. *U*(1)-symmetry breaking where the Noether charge densities never obtain an expectation value). In this case, the fields *ϕ*1 and *ϕ*1†, which do not generate symmetries and are therefore of the form *ϕ**i* in Eq. , are also interpolating fields for the breaking of *Q**x* and *Q**y*. We calculate, using Eq. , $$\begin{aligned} \langle [Q^x, \phi\_1(x)] \rangle &= - {\mathrm{i}}[\pi\_a,\phi\_1] T^x\_{ab} \langle \phi\_b \rangle = - \langle \phi\_2 \rangle = - v,\nonumber\\ \langle [Q^x, \phi^\dagger\_1(x)] \rangle &= {\mathrm{i}}\langle \phi^\dagger\_a \rangle T^x\_{ab} [\pi^\dagger\_b,\phi^\dagger\_1] = \langle \phi^\dagger\_2 \rangle = v,\nonumber\\ \langle [Q^y, \phi\_1(x)] \rangle &= - {\mathrm{i}}[\pi\_a,\phi\_1] T^y\_{ab} \langle \phi\_b \rangle = {\mathrm{i}}\langle \phi\_2 \rangle = {\mathrm{i}}v,\nonumber\\ \langle [Q^y, \phi^\dagger\_1(x)] \rangle &= {\mathrm{i}}\langle \phi^\dagger\_a \rangle T^y\_{ab} [\pi^\dagger\_b,\phi^\dagger\_1] = {\mathrm{i}}\langle \phi^\dagger\_2 \rangle = {\mathrm{i}}v.\end{aligned}$$ For the set Φ*A* = (*j**t**x*, *j**t**y*, *ϕ*1, *ϕ*1†) we can check that the excitations are linearly independent by showing that the determinant of the matrix ⟨[*Q**A*, Φ*B*]⟩ is non-zero, in the same way as in [appendix:linear independence of excitations]. Therefore, the type-B NG mode is accompanied by a gapped partner mode, confirming our knowledge of the exact dispersion relations above. Conclusions =========== I have endeavored to provide a satisfactory explanation of why the Heisenberg ferromagnet is such a remarkable state of spontaneously broken matter, even though it is not unique: in the least there are maximally polarized states for any *S**U*(*N*)-system with Heisenberg-type Hamiltonian. The sufficient and necessary condition is that all possible order parameters operators should commute with the Hamiltonian. In the case that there is at least one linearly independent operator, not a symmetry of the Hamiltonian yet obtaining a groundstate expectation value, we find ourselves in an intermediate case. Here the broken generators excite a type-B NG mode *and* a gapped partner mode. Furthermore the classical groundstate is not an eigenstate of the Hamiltonian, quantum fluctuations modifying the classical groundstate are present, and a thin spectrum exists. These statements can be verified by probing the thin spectrum, which is possible directly in numerical calculations, and indirectly experimentally by investigating the fundamental limit to coherence time of macroscopic superpositions $t\_\mathrm{coh} \propto \frac{\hbar}{k\_\mathrm{B} T} N$. Let us conclude with some open questions. First, as mentioned above, spinor BECs appear to be an appealing playground to find type-B NG modes and maximally or intermediately polarized states. The gapped partner mode in the latter case should be identifiable. However, spinor BECs are a superfluid as well with spontaneously broken *U*(1)-symmetry leading to a type-A NG mode in the form of superfluid zero sound. The order parameter associated with this symmetry breaking is of the ordinary kind, and leads to quantum fluctuations and a thin spectrum. As such, a clear-cut experiment that compares two situations with and without a thin spectrum is impossible here. However, the introduction of additional thin spectrum states by going from a maximally to non-maximally polarized state could lead to a significant *difference* in coherence time. A careful consideration of the arguments laid out in Refs.  supported by numerical calculations or simulations could provide an answer. Another question is the stability of the groundstate due to quantum fluctuations. In Ref.  it is argued that systems which only have type-B and no type-A NG modes, it is possible to have long-range order at zero temperature even in 1+1 dimensions. They provide two derivations based on effective Lagrangians, but the heart of the argument is that there are no quantum fluctuations that can destroy the ordered state: a Heisenberg ferromagnet is stable even at small sizes. However, here we have seen that if the groundstate is not a highest-weight state, quantum fluctuations are in fact present, even if there are only type-B NG modes. It would be interesting to investigate whether long-range order can in fact persist in such 1+1-dimensional systems, and if not, where the argument using effective Lagrangians fails. Since we have seen in Sec. [subsec:Time-reversal symmetry] that time-reversal is actually a separate issue, the opposite question becomes relevant: are there examples of ‘ordinary’ symmetry breaking with only type-A NG modes which have nevertheless broken time-reversal symmetry? As a matter of fact, the Néel state in an antiferromagnet breaks this symmetry, so the question becomes really: to what extent is broken time-reversal symmetry a relevant feature at all? The standard argument, put forward for instance in Ref.  is that, in the Néel state, the combined transformation of time reversal and translation by one lattice spacing does leave this state invariant. But a combined transformation of time reversal and 180∘ rotation of all spins also leaves the groundstate invariant, and this symmetry operation is identical for ferromagnets and antiferromagnets alike. It seems there is demand for a more rigorous definition of “macroscopic order parameter” that somehow averages over small length scales. Using the improved definition, perhaps it can then be shown that spontaneously broken time-reversal symmetry can only emerge for systems containing type-B NG modes. Acknowledgements ================ I thank Haruki Watanabe, Yoshimasa Hidaka, Jasper van Wezel, Louk Rademaker, Tomas Brauner and Naoto Nagaosa for useful discussions. This work was supported by the Foreign Postdoctoral Researcher program at RIKEN. Linear independence of excitations ================================== Here we state and illustrate when Noether charge densities and other operators as interpolating fields excite linearly independent modes. We should consider the set of all possible interpolating fields in Eq., namely both the Noether charge densities *j**b* and other fields *ϕ**i*. Then the set of broken Noether charge densities {*Q**a*} has a non-vanishing commutator expectation value for at least one component of ⟨[*Q**a*, *j**b*]⟩ or ⟨[*Q**a*, *ϕ**i*]⟩. For each of these non-vanishing combinations one could apply the Goldstone theorem, leading to massless modes. However some of these modes may in fact be linearly dependent. The criterion is the following . Take the set of interpolating fields Φ*K* = {*ϕ**i*, *j**b*} where *K* runs over all *ϕ**i* and *j**b* that causes a symmetry generator to be spontaneously broken. Compute all the commutator expectation values M*K**L* = ∫d*D**x*ʹ⟨Ψ0∣[Φ*K*(*x*ʹ), Φ*L*(*x*)]∣Ψ0⟩, where as always ∣Ψ0⟩ is the symmetry-breaking groundstate. If the determinant of the matrix M*K**L* is zero, then some of the modes are linearly dependent. As an example, take a $s=\tfrac{3}{2}$ Heisenberg magnet. The spin matrices in this representation are $$\begin{aligned} S^x &= \frac{1}{2} \begin{pmatrix} & \sqrt{3} & & \\ \sqrt{3} & & 2 & \\ & 2 & & \sqrt{3} \\ & & \sqrt{3} & \end{pmatrix}, \end{aligned}$$ $$\begin{aligned} S^y &= \frac{{\mathrm{i}}}{2} \begin{pmatrix} & -\sqrt{3} & & \\ \sqrt{3} & & -2 & \\ & 2 & & -\sqrt{3} \\ & & \sqrt{3} & \end{pmatrix}, \end{aligned}$$ $$\begin{aligned} S^z &= \begin{pmatrix} \tfrac{3}{2}& & & \\ & \tfrac{1}{2} & & \\ & & -\tfrac{1}{2} & \\ & & & -\tfrac{3}{2} \end{pmatrix}.\end{aligned}$$ The nematic tensor is $N^{ab} = \frac{1}{2}(S^aS^b + S^bS^a) - \frac{5}{4} \delta\_{ab}$. It is symmetric in (*a**b*) and specified by six components, $$\begin{aligned} N^{xx} &= \frac{1}{2} \begin{pmatrix} -1 & & \sqrt{3} & \\ & 1 & & \sqrt{3} \\ \sqrt{3} & & 1 & \\ & \sqrt{3} & & -1 \end{pmatrix}, & N^{xy} &= \frac{{\mathrm{i}}}{2} \begin{pmatrix} & & -\sqrt{3} & \\ & & & -\sqrt{3} \\ \sqrt{3} & & & \\ & \sqrt{3} & & \end{pmatrix}, \nonumber\\ N^{yy} &= \frac{1}{2} \begin{pmatrix} -1 & & -\sqrt{3} & \\ & 1 & & -\sqrt{3} \\ -\sqrt{3} & & 1 & \\ & -\sqrt{3} & & -1 \end{pmatrix}, & N^{xz} &= \frac{1}{2} \begin{pmatrix} & \sqrt{3} & & \\ \sqrt{3} & & & \\ & & &-\sqrt{3} \\ & &-\sqrt{3} & \end{pmatrix}, \nonumber\\ N^{zz} &= \frac{1}{2} \begin{pmatrix} 1 & & & \\ & -1 & & \\ & & -1 & \\ & & & 1 \end{pmatrix}, & N^{yz} &= \frac{{\mathrm{i}}}{2} \begin{pmatrix} & - \sqrt{3} & & \\ \sqrt{3} & & & \\ & & &\sqrt{3} \\ & &-\sqrt{3} & \end{pmatrix}.\end{aligned}$$ Suppose that the groundstate is the intermediately polarized state $| \Psi\_0 \rangle = \begin{pmatrix} 0 & 1 & 0 & 0 \end{pmatrix}^\mathrm{T}$. Both *S**z* and *N**z**z* are order parameters for this state (*N**x**x* and *N**y**y* are as well, but they do not lead to additional breaking of symmetry generators, so that the former two suffice). The interpolating fields for the spontaneous breaking of *S**x* are *S**y* and *N**y**z* while those for the spontaneous breaking of *S**y* are *S**x* and *N**x**z*. To determine whether the interpolating fields excite modes linearly independent from the NG mode excited by the pair *S**x*, *S**y*, we should evaluate the matrix $$\mathcal{M} = \begin{pmatrix} 0 & \langle [S^x,S^y] \rangle & \langle [S^x,N^{xz}] \rangle & \langle [S^x,N^{yz}] \rangle \\ \langle [S^y,S^x] \rangle & 0 & \langle [S^y,N^{xz}] \rangle & \langle [S^y,N^{yz}] \rangle \\ \langle [N^{xz},S^x] \rangle & \langle [N^{xz},S^y] \rangle & 0 & \langle [N^{xz},N^{yz}] \rangle \\ \langle [N^{yz},S^x] \rangle & \langle [N^{yz},S^y] \rangle & \langle [N^{yz},N^{xz}] \rangle & 0 \end{pmatrix}$$ For our groundstate $| \Psi\_0 \rangle = \begin{pmatrix} 0 & 1 & 0 & 0 \end{pmatrix}^\mathrm{T}$ this matrix is $$\frac{{\mathrm{i}}}{2} \begin{pmatrix} 0 & 1 & 0 & -3 \\ -1 & 0 & 3 & 0 \\ 0 & -3 & 0 & -3 \\ -3 & 0 & 3 & 0 \end{pmatrix},$$ and its determinant is 9 ≠ 0. So here we have linearly independent modes and this indicates the existence of a gapped partner mode, which is excited by the spin raising operator *S*+. Conversely, suppose that the groundstate is the maximally polarized state $| \Psi\_0 \rangle = \begin{pmatrix} 1 & 0 & 0 & 0 \end{pmatrix}^\mathrm{T}$. In this case, we have the same structure for the broken symmetry generators, interpolating fields and order parameters. But the above matrix evaluates to $$\frac{{\mathrm{i}}}{2} \begin{pmatrix} 0 & 3 & 0 & 3 \\ -3 & 0 & -3 & 0 \\ 0 & 3 & 0 & 3 \\ -3 & 0 & -3 & 0 \end{pmatrix},$$ and its determinant is zero. In this case, the modes excited by the interpolating fields *N**x**z*, *N**y**z* are actually linearly dependent on the NG mode, and there is no separate gapped partner mode. We know that the reason is that the raising operator would take one out of the Hilbert space. Holstein–Primakoff transformation for finite magnetization ========================================================== We want to examine excitations around the state of intermediate magnetization *s* > *m* > 0. To this end we perform a very simplistic Holstein–Primakoff-type calculation to get a sense of the important issues. We assume that the biquadratic term  ∼ *K* serves to stabilize this groundstate, but that the spin waves are primarily due to the ordinary exchange term  ∼ *J*. In other words, we neglect the biquadratic term here. There are excitations that lower the spin and those that raise the spin. Therefore we introduce quanta *c*† that raise the expectation value of *S**z* and quanta *d*† that lower this expectation value. Both excitations have the state |*m*⟩ ≡ |*s*, *m*⟩ as the groundstate: *c*|*m*⟩ = *d*|*m*⟩ = 0. Since the magnetization *m* is finite, the species do not have the same weight within the decomposition. Introduce boson operators *c**i*, *c**i*† and *d**i*, *d**i*† such that *c**i* and *d**i* annihilate the state ∣*m*⟩ on each site *i*. The commutation relations are [*c**i*, *c**j*†] = [*d**i*, *d**j*†] = *δ**i**j* and the other ones vanish. Define: $$\begin{aligned} S^z &= m + c^\dagger c - d^\dagger d, \\ S^+ &= \tfrac{1}{\sqrt{2}} ( \alpha c^\dagger U + \sqrt{\alpha^2 + 1}\; U d),\\ S^- &= \tfrac{1}{\sqrt{2}} ( \sqrt{\alpha^2 + 1}\; d^\dagger U + \alpha U c),\\ U &= \sqrt{2m - \frac{2}{\alpha^2 - 1} c^\dagger c -\frac{2}{\alpha^2 + 2} d^\dagger d}.\end{aligned}$$ Here *α* should be a positive real number unequal to 1. The commutation relations [*S**z*, *S*±] =  ± *S*± are easily checked. The remaining one is $$\begin{aligned} [S^+, S^-] &= \tfrac{1}{2} [\alpha c^\dagger U + \sqrt{\alpha^2 + 1}\; U d,\sqrt{\alpha^2 + 1}\; d^\dagger U + \alpha U ] \nonumber\\ &= \tfrac{1}{2} \alpha^2 [c^\dagger U,Uc] + \tfrac{1}{2} (\alpha^2+1) [Ud,d^\dagger U] \nonumber\\ & \phantom{m} + \tfrac{1}{2} \alpha\sqrt{\alpha^2+1} ([ c^\dagger U,Ud] + [ d^\dagger U,Uc] ).\end{aligned}$$ The first two terms do reduce to 2*S**z*. The last two terms are hard to calculate because of the square root in *U*. For now we assume they do vanish, they should not matter to lowest order anyway. For the lowest excitations, the radical *U* is unimportant, and we approximate $U \approx \sqrt{2m}$. The Heisenberg Hamiltonian is $$\mathcal{H} = J \sum\_{j \delta} \tfrac{1}{2} S^+\_j S^-\_{j+\delta} + \tfrac{1}{2} S^-\_j S^+\_{j+\delta} + S^z\_j S^z\_{j+\delta}.$$ Here *j* runs over all sites and *δ* over nearest-neighbors of *j*. We substitute the expressions above and keep only terms up to quadratic order in *c*, *d*. $$\begin{aligned} \mathcal{H}&= J \sum\_{j \delta}\tfrac{m}{2} ( \alpha c^\dagger\_j +\sqrt{\alpha^2 + 1}\; d\_j)(\sqrt{\alpha^2 + 1}\; d^\dagger\_{j +\delta} + \alpha c\_{j+\delta}) \nonumber\\ &\phantom{mmm}+\tfrac{m}{2} (\sqrt{\alpha^2 + 1}\; d^\dagger\_{j} + \alpha c\_{j})( \alpha c^\dagger\_{j+\delta} +\sqrt{\alpha^2 + 1}\; d\_{j + \delta} ) \nonumber\\ &\phantom{mmm} + m^2 + m c^\dagger\_j c\_{j} + m c^\dagger\_{j+\delta}c\_{j+\delta} - m d^\dagger\_j d\_{j} - m d^\dagger\_{j+\delta}d\_{j+\delta} \nonumber\\ &= E\_0 + J m\sum\_{j \delta} \tfrac{1}{2} \alpha^2 (c^\dagger\_j c\_{j+\delta} + c^\dagger\_{j+\delta}c\_{j}) + \tfrac{1}{2} (\alpha^2+1) (d^\dagger\_j d\_{j+\delta} + d^\dagger\_{j+\delta}d\_{j}) \nonumber\\ &\phantom{mmm} + \tfrac{1}{2} \alpha \sqrt{\alpha^2 +1} (c^\dagger\_j d^\dagger\_{j +\delta} + d^\dagger\_{j}c^\dagger\_{j+\delta} + c\_{j}d\_{j + \delta} + d\_j c\_{j+\delta}) \nonumber\\ &\phantom{mmm} + c^\dagger\_j c\_{j} + c^\dagger\_{j+\delta}c\_{j+\delta} - d^\dagger\_j d\_{j} - d^\dagger\_{j+\delta}d\_{j+\delta} \end{aligned}$$ Here *E*0 = *J**m*2*N**z*. We Fourier transform in the usual way to find the factors *ν**k* = ∑*δ*ei*k* ⋅ *δ*, which leads to $$\begin{aligned} \mathcal{H}&= E\_0 + J m\sum\_k (\alpha^2 \nu\_k +2) c^\dagger\_k c\_k + ((\alpha^2+1) \nu\_k -2) d^\dagger\_k d\_k \nonumber\\ &\phantom{mmmmmmmm} + \alpha \sqrt{\alpha^2 +1} \nu\_k (c^\dagger\_k d^\dagger\_k + c\_k d\_k).\end{aligned}$$ Now we perform a Bogoliubov transformation $$\begin{aligned} c\_k &= \sqrt{\alpha^2 +1}\; a\_k - \alpha b^\dagger\_k,& c^\dagger\_k &= \sqrt{\alpha^2 +1}\; a^\dagger\_k - \alpha b\_k, \nonumber\\ d\_k &= -\alpha a^\dagger\_k + \sqrt{\alpha^2 +1}\; b\_k, & d^\dagger\_k &= -\alpha a\_k + \sqrt{\alpha^2 +1}\; b^\dagger\_k\end{aligned}$$ This transformation makes the cross terms *a*†*b*† and *a**b* vanish, and the Hamiltonian is diagonalized. The Hamiltonian reduces to H = *E*0 + *J**m*∑*k*2*a**k*†*a**k* + ( − 1 + *ν**k*)*b**k*†*b**k*. Since *ν**k* = cos(*k**δ*) ≈ (1 − *δ*2*k*2), where *δ* is the lattice constant, we see that the *b**k*† excite a gapless mode with quadratic dispersion. The *a**k*† excitations are gapped, with the gap size set by the exchange parameter *J* and the magnetization *m*. This mode is actually dispersive if higher order terms are taken into account, but the gap at zero momentum is *J**m*. Now we can identify the modes *b**k*† and *a**k*†. Looking at the Bogoliubov transformations, *b**k*† consists of *d**k*† and *c**k*, which make up *S*−, while *a**k*† consists of *d**k* and *c**k*† which make up *S*+. Therefore, for positive magnetization *m*, the lowering operator *S**k*− excites the gapless Goldstone mode, while the raising operator *S**k*+ excited the gapped partner mode. Criteria for the absence of quantum fluctuations after spontaneous symmetry breaking ==================================================================================== The lowest-energy state of a macroscopic system in which symmetry is spontaneously broken, is a very stable wavepacket centered around a spontaneously chosen, classical direction in symmetry space. However, for a Heisenberg ferromagnet the quantum groundstate is exactly the classical groundstate, there are no quantum fluctuations. This coincides with seven exceptional properties of the ferromagnet, including spontaneous time-reversal symmetry breaking, a reduced number of Nambu–Goldstone modes and the absence of a thin spectrum (Anderson tower of states). Recent discoveries of other non-relativistic systems with fewer Nambu–Goldstone modes suggest these specialties apply there as well. I establish precise criteria for the absence of quantum fluctuations and all the other features. In particular, it is not sufficient that the order parameter operator commute with the Hamiltonian. It leads to a measurably larger coherence time of superpositions in small but macroscopic systems. spontaneous symmetry breaking; quantum fluctuations; Nambu–Goldstone modes; thin spectrum Introduction ============ The Heisenberg ferromagnet has always been an eccentric duckling in the flock of spontaneous symmetry breaking (SSB) states consisting of antiferromagnets, crystals, superfluids, chiral SSB, the Standard Model and many others. This is only exacerbated by being one of the earliest and simplest models demonstrating SSB, used as the archetype in a large portion of the literature. Perhaps because much of its physics can be understood by undergraduate level calculations, have its peculiarities never been put in a larger perspective. Still the subtleties are intricate enough to have sparked debates between the greatest of minds in the past century . Why is this state different from all other states? We talk about the following observations, clarified below: 1. the order parameter operator commutes with the Hamiltonian, is therefore a symmetry generator and is conserved in time; 2. two broken symmetry generators correspond to a single, quadratically dispersing Nambu–Goldstone (NG) mode; 3. the classical groundstate is an exact eigenstate of the Hamiltonian, there are no quantum fluctuations; 4. the raising operator, a root generator of the symmetry algebra, annihilates the groundstate, even locally (the spin of the maximally polarized state cannot be increased); 5. there is no *thin spectrum* or *tower of states* of nearly vanishing energy just above the groundstate; 6. the groundstate is an eigenstate of the unbroken symmetry generator with non-zero eigenvalue; 7. time-reversal symmetry is spontaneously broken. Arguably the most important of these features are (i) and (ii): the low-energy spectrum of NG modes is different from what one would expect based on the relativistic Goldstone theorem. This issue had been recognized early on,, and later generalized to systems other than the ferromagnet, but has basically been solved only in the last ten years or so : whenever the order parameter operator *Q**k*, that obtains a non-zero expectation value ⟨*Q**k*⟩ in the symmetry-broken state, is one of the symmetry generators itself—called a *finite Noether charge density*—then any two spontaneously broken generators *Q**i*, *Q**j* that contain this operator in their commutation relation [*Q**i*, *Q**j*] = ∑*k**f**i**j**k**Q**k* will in fact excite *the same* NG mode. That mode will have a different, in general quadratic, dispersion relation (a more precise statement will be given below). Therefore (i) implies (ii). For the ferromagnet with magnetization along the *z*-axis, the spin rotation operator *S**z* obtains a finite Noether charge density, while *S**x* and *S**y* are spontaneously broken and excite the same single spin wave (magnon) with quadratic dispersion. Such modes have been called type-B NG modes, as opposed to the ‘regular’, linearly dispersing, type-A NG modes. In a parallel development, several states of matter with broken charge densities and/or quadratically dispersing NG modes other than the ferromagnet have been identified. For instance in spinor Bose–Einstein condensates (BEC) , kaon condensates in quantum chromodynamics  and Tkachenko modes in superfluid vortices . The question of whether the other special properties of the ferromagnet (iii)–(vii) generalize to such systems arises naturally. Here I will establish precise criteria for the relations between each of the properties (i)–(vii). I will focus in particular on quantum fluctuations, properties (iii)–(v). When a continuous symmetry is spontaneously broken, there is a continuous manifold of degenerate classical groundstates. In the quantum case, any superposition of these states will be a valid groundstate as well, but tiny external perturbations will favor one particular classical state over all the others. At this point that may seem obvious, but these classical groundstates are almost never eigenstates of the quantum Hamiltonian, which implies unitary time evolution would bring one to a state different from the classical state. This deviation from the classical groundstate is known as *quantum fluctuations* although there is actually no time-dependent behavior, in the same sense as there are no particles and anti-particles “popping into and out of existence” in the QED vacuum. It is perhaps the most striking feature of spontaneous symmetry breaking that in almost all cases the actual quantum groundstate is very close to a classical groundstate ; for instance the reader’s chair is (for all practical purposes) in a position eigenstate even though its Hamiltonian has translation invariance and therefore its spectrum consists of momentum eigenstates. Assuredly the chair’s position does not fluctuate in time. In many texts the fact that classical states are dressed with quantum fluctuations is glossed over or ignored. Several others, most explicitly by Anderson , claim that all of the peculiar properties and in particular the absence of quantum fluctuations of the ferromagnet follow from the fact that its order parameter operator, *S**z*, commutes with the Hamiltonian *H*. The reason cited is that in that case, *S**z* and *H* can be simultaneously diagonalized and they share a basis of eigenstates. However, this is not sufficient. Namely, it is only the *total* magnetization *S**z* = ∑*j**S**j**z* where *j* runs over lattice sites, that commutes with *H*, while the local magnetization density *S**j**z* does not. Therefore, even if the total magnetization is conserved, local fluctuations of magnetization that leave the total magnetization constant, could in principle be allowed. I will present below some counterexamples of states with conserved order parameters but nevertheless showing quantum fluctuations. As a mnemonic, the reader can picture a ferrimagnet (see Sec. [subsec:example Ferrimagnet]) with a tiny imbalance between the magnitude of the spins on the A- and B-sublattice, that thus develops a small magnetization and a ferromagnet-like NG mode spectrum, but appears nevertheless to resemble more closely an antiferromagnet. In this light, the absence of quantum fluctuations in the Heisenberg ferromagnet (iii) is not due simply to a conserved order parameter, but rather is a consequence of the maximum polarization of the groundstate (iv). In words, fluctuations that leave the total magnetization invariant necessarily involve raising a spin at some site and lowering a spin at some other site (or rather a wavelike superposition of such processes). But the ferromagnet has maximum polarization at each site: the magnetic quantum number *m* is equal to the spin *s* at all sites. Therefore such processes would take one outside of the Hilbert space and are forbidden. From a symmetry-group-theoretic point of view, it turns out that identifying a single order parameter operator is not enough to determine whether quantum fluctuations are present or absent. Instead, one should consider all linearly independent operators that could function as order parameter, i.e. that obtain an expectation value in the ordered (symmetry-broken) state. Only if all of them commute with the Hamiltonian, will we be in a maximally polarized state and will it feature an absence of quantum fluctuations. If there is at least one linearly independent, non-commuting order parameter operator, then the state will not be maximally polarized, the massless NG mode will appear together with a massive ‘partner’ mode, and quantum fluctuations away from the classical groundstate are present. This is consistent with the recent classification scheme by Hayata and Hidaka  (see also ), which categorizes type-B massless NG modes with or without massive (gapped) partner modes, next to type-A massless NG modes. Property (vi), the fact that an unbroken symmetry generator does not annihilate the broken groundstate, as for instance spin-rotations around the *z*-axis in a ferromagnet with magnetization along the *z*-axis, is automatically satisfied by any finite Noether charge density (any symmetry generator that is also an order parameter operator). In fact, this can be ‘mended’ through redefining the symmetry operator by subtracting the order parameter expectation value. However, this will modify the symmetry Lie algebra relations, and later we will find it useful to retain this expectation value in relation to the highest-weight states (maximally polarized states). Property (vii), the spontaneous breaking of time-reversal invariance, will be shown to be a completely separate issue: some finite Noether charge densities correspond to time-reversal invariant states, while others will break that symmetry. Summarizing, the classical groundstate being an exact eigenstate of the Hamiltonian (iii), or equivalently the groundstate being the maximally polarized state (iv), is the strongest condition, and it implies the other properties in (i)–(vi), see Fig. [fig:FM equivalencies]. Quantum fluctuations are only absent for states which are completely polarized with respect to all broken symmetry generators (the highest-weight state for all broken root generators of the Lie algebra). Otherwise, for systems that have finite Noether charge densities, there will generally be additional order parameter operators that characterize the symmetry breaking. In that case the type-B NG mode is accompanied by a gapped mode, together excited by the pair of Lie algebra roots. (1,0.36869722) (0,0) (0.21,0.18)(iii) (0.44,0.18)(iv) (-0.00296365,0.18)(v) (0.57,0.0901246) (0.23,0.015)(i) (0.44,0.015)(ii) (-.02,0.015)(vi) (0.94,0.18)implies (0.94,0.10)equivalent (0.94,0.015)see text [fig:FM equivalencies] In this work I will present a reasonably self-contained exposition of known results and a derivation of the claims made in this introduction. To illustrate the matters at hand, in Section [sec:Heisenberg magnets: a first example] we will go through the elementary examples of the Heisenberg ferro- and antiferromagnet, which have identical symmetry breaking pattern *S**U*(2) → *U*(1) but show completely opposite behavior in their NG mode spectrum and quantum fluctuations. In Section [sec:Quantum fluctuations] I will clarify what I mean by *quantum fluctuations*. The next Section [sec:Order parameters and Nambu–Goldstone modes] summarizes the old and recent results on SSB and NG modes. Sections [sec:Order parameters and quantum fluctuations] and [sec:Thin spectrum and decoherence] comprise the main part of this work. In the former I show how one should examine all possible order parameter operators in order to make claims about the presence or absence of quantum fluctuations in an SSB groundstate. The latter explains the intricacies of the thin spectrum of low-lying states and how they influence coherence times of quantum superpositions. It also gives hints as to how the ramifications of the claims made here could be experimentally verified. In Section [sec:Examples] provides a slew of examples of model systems that satisfy properties (i)–(vii) in degrees varying from none to all. A summary and open questions are collected in Section [sec:Conclusions]. Heisenberg magnets: a first example =================================== We will illustrate these considerations by the Heisenberg ferromagnet and antiferromagnet, to introduce the general concepts in a concrete system. The 2D square lattice spin-$\tfrac{1}{2}$ antiferromagnet is an exquisite example not only because it has been studied intensively, but mostly because quantum fluctuations are in this case very pronounced: the order parameter, staggered magnetization, in the groundstate with quantum fluctuations being taken into account, is only about 60% of that of the groundstate of the classical antiferromagnet. In principle a solid like the chair the reader is sitting on is an equally valid example, but there the quantum fluctuations are negligible and it does not resonate with the intuition as much as the antiferromagnet does. Consider spin-*s* Heisenberg magnets on bipartite lattices with Hamiltonian H = H*J* + Hext = *J*∑⟨*j**l*⟩**S***j* ⋅ **S***l* − **h** ⋅ ∑*j***S***j*. The operators *S**j**a*, *a* = *x*, *y*, *z* are generators of *S**U*(2) in the spin-*s* representation at lattice site *j*, *J* is the exchange parameter, the sum ⟨*j**l*⟩ is over nearest-neighbor lattice sites, and **h** is an external magnetic field. The commutation relations are [*S**j**a*, *S**l**b*] = i*ε**a**b**c**S**j**c**δ**j**l*,   *a*, *b*, *c* = *x*, *y*, *z*. The first term in Eq.  is invariant under global *S**U*(2)-rotations ei*θ**a**S**a* over angles *θ**a* of all spins simultaneously, generated by *S**a* = ∑*j**S**j**a*. If *J* < 0 this term favors alignment of the spins and the groundstate is a ferromagnet. If *J* > 0, the antiferromagnetic case, it favors anti-alignment of the spins and the groundstate of the corresponding classical Hamiltonian is a Néel state with alternating spins. Ferromagnet ----------- For the ferromagnetic case *J* < 0, if there is no external field **h** = 0, then alignment in each direction has the same energy. In practice, one of the directions will be chosen *spontaneously*. Formally, this can be achieved by imagining a tiny external perturbation **h** > 0 that favors a certain direction, and once the system has reached the ferromagnetic state, let the perturbation vanish **h** → 0. Choose the alignment to lie along the *z*-axis, then the order parameter operator is *S**z* = ∑*j**S**j**z* itself, which commutes with the Hamiltonian since it is a symmetry generator. Rotations around the *z*-axis, the *U*(1)-group generated by *S**z*, are unbroken, while *S**x* and *S**y* are spontaneously broken. According to the adapted Goldstone theorem (see Sec. [subsec:Nambu–Goldstone modes]), these operators excite the same single NG mode, the magnon, with quadratic dispersion. Concurrently, the magnetization *M* = ⟨*S**z*⟩ is maximal, such that the raising operator *S**j*+ = *S**j**x* + i*S**j**y* would take one out of the Hilbert space, while the lowering operator *S**j*− = *S**j**x* − i*S**j**y* excites the NG mode. In this very special case, the classical groundstate is an eigenstate of the unperturbed Hamiltonian, so this is the whole story. There are no quantum fluctuations and at zero temperature the state will perpetually keep its alignment direction, even for small-size systems. In this sense, the Heisenberg ferromagnet can be called the most classical instance of spontaneous symmetry breaking at zero temperature. Antiferromagnet --------------- In contrast, the antiferromagnet *J* > 0 differs strongly from its classical counterpart. In the groundstate of the classical Hamiltonian, the Néel state, each site has its spin aligned antiparallel to its nearest neighbors, which is possible without any frustration on bipartite lattices (lattices that can be unambiguously split into two sublattices *A* and *B* such that a site at sublattice *A* only has nearest neighbors on sublattice *B* and vice versa). In terms of the eigenvalues of *S**j**z*, which can take values in *m**j* =  − *s*,  − *s* + 1, …*s* − 1, *s*, this state is |N\’eel⟩ =  ⊗ *j*|*m**j* = ( − 1)*j**s*⟩. We see that the total state is a (tensor) product of one-particle states. Choose the spin-coordinate system such that the axis of the antialignment is the *z*-axis. Then the order parameter operator quantifying this arrangement is S*z* = ∑*j*( − 1)*j**S**j**z*, called the *staggered magnetization*, where ( − 1)*j* alternates sign depending on whether *j* is on the A- or the B-sublattice. Again, *U*(1)-spin rotations around the *z*-axis are unbroken, while *S**x* and *S**y* are spontaneously broken. Since S*z* does not commute with the Hamiltonian, *S*+ and *S*− each excite distinct NG modes, also called magnons, each with linear dispersion relation. The Néel state is obviously not an eigenstate of the Hamiltonian Eq. . The easiest way to see this is to take the raising and lowering operators *S**j*± = *S**j**x* ± i*S**j**y*, and rewrite the exchange term of the Hamiltonian as, $$\label{eq:single exchange S+S- Heisenberg Hamiltonian} \mathcal{H}\_J = J \sum\_{\langle jl \rangle} \tfrac{1}{2}( S^+\_j S^-\_l + S^-\_j S^+\_l) + S^z\_j S^z\_l.$$ From the first two terms here we see that the Hamiltonian includes processes where the spin on one site is raised while simultaneously that on a neighboring site is lowered. As we argued such processes are forbidden in the ferromagnet, but in the antiferromagnet they occur prominently. Then unitary time evolution eiH*J**t* will bring the Néel state to a superposition of many other states with a few spins flipped. This can no longer be written as a product of one-particle states. In fact, it has been proven that the groundstate of the antiferromagnetic Hamiltonian Eq.  on any finite lattice must be a total spin singlet, i.e. a totally antisymmetrized superposition of spin states . But such a state does not break any symmetry (as **S**|singlet⟩ = 0, symmetries generated by **S** are unbroken). The staggered magnetization of this state vanishes as well. Formally, this singlet is the groundstate for any finite-size system, but in practice many-body systems will be found in a state with broken symmetry and long-range order characterized by staggered magnetization. It was recognized early on by Anderson  that there is a *thin spectrum* or *tower of states* of energy vanishing as 1/*N*, where *N* is the number of degrees of freedom, c.q. the number of particles. The actual, symmetry-breaking state is a superposition of the symmetric (singlet) groundstate and states in the the thin spectrum, a wavepacket boasting long-range order, with a very long lifetime. This state corresponds to the spontaneously broken state in the thermodynamic limit. The quantum fluctuations severely modify the classical Néel state, tending towards restoration of symmetry. Indeed, already a simple spin-wave theory estimate gives a significant reduction of the staggered magnetization , which has been confirmed extensively in numerical simulations . For $s=\tfrac{1}{2}$ and a *D* = 2 finite-size square lattice, the staggered magnetization is about 60% of that of the Néel state. The antiferromagnet is a special case in the sense that this strong reduction of symmetry breaking persists even for very large systems. Quantum fluctuations ==================== The main topic of this work is how quantum fluctuations influence the spontaneously broken groundstate. First we need to define what we mean by quantum fluctuations in this context. For this purpose, we define a ‘classical state’ in a quantum system with SSB. The classical state is then dressed with quantum fluctuations which tend to reduce the amount of symmetry breaking, i.e. reduce the magnitude of the order parameter. Nevertheless in most cases, SSB in the thermodynamic limit will lead to almost precisely this classical state, for instance in superconductors, crystals and most relativistic field theories. However, in some cases the quantum fluctuations cause the groundstate to severely deviate from the classical state, as we have seen in the example of the antiferromagnet in Sec. [subsec:first example antiferromagnet]. In finite-size systems, due to tunneling between the different degenerate groundstates, there is a unique lowest-energy state which does not break any symmetry at all. This state is however very unstable against perturbations, and there are other states, very close in energy to the absolute groundstate, which do feature broken symmetry. In these cases, when SSB does take place in practice, the ‘groundstate’ realized in nature is a compromise between the classical state and the symmetric groundstate. For these reasons we identify three states of interest below: the ‘classical state’ |Ψcl⟩ in which symmetry is broken and corresponds to our intuition about SSB; the ‘quantum groundstate’ |Ψsymm⟩ which is the unique groundstate in finite-size systems that breaks no symmetry at all; and the ‘actual groundstate’ |Ψ0⟩, which is the classical state dressed with quantum fluctuations. It is a very robust state, with broken symmetry. In finite-size systems it is close in energy to |Ψsymm⟩ but breaking symmetry related to |Ψcl⟩. Spontaneous symmetry breaking in short -------------------------------------- Let us quickly review the essentials of SSB. Let the Hamiltonian H be invariant under a group of symmetry transformations *G*. We are only interested in continuous groups *G* that give rise to NG modes. When the potential is of such a form that the groundstates of this Hamiltonian are only invariant under a subgroup *H* of *G*, then SSB will take place. There exists a degenerate set of groundstates; given any one of them |*ψ*⟩, the action of elements *g* of the coset *G*/*H* will lead to a different state $g \rightharpoonup \lvert \psi \rangle \equiv \lvert g \psi \rangle$ which is degenerate in energy since H commutes with the action of *g* by definition. Therefore the degenerate groundstates can be labeled by elements of *G*/*H*. The important point is that two inequivalent groundstates |*ψ*⟩, |*g**ψ*⟩ for *g* ≠ 1 are *orthogonal* in the thermodynamic limit (see for instance a proof that the matrix elements ⟨*ψ*|H|*g**ψ*⟩ vanish in that limit in Ref. ). This implies that whenever the system is found in one particular state |*ψ*⟩, it will stay in that state forever. The Hilbert spaces corresponding to each of the states |*g**ψ*⟩ are disconnected. One can define an order parameter operator O that distinguishes the degenerate groundstates, see Eq.  below. In all the cases in which we are interested, it is the space integral of a local operator O = ∫d*D**x* O(*x*). Its expectation value takes values in *G*/*H*, that is, ⟨*g**ψ*|O(*x*)|*g**ψ*⟩ = *g*. Here the groundstates are translationally invariant, because the eigenstates of the Hamiltonian are momentum eigenstates, so the right-hand side is independent of position. This definition holds up to a multiplicative factor denoting the magnitude of the order parameter. In general states with different magnitude of the order parameter differ in energy while states connected by a transformation in *G*/*H* are degenerate. Using these notions, we can give the Bogoliubov definition of SSB. Consider the symmetric Hamiltonian H supplemented with a symmetry breaking field, as parametrized by a chemical potential *μ*: H*μ* = H − *μ*O. Any *μ* > 0 will explicitly break the symmetry by favoring one of the states in *G*/*H* over the others, let us call this groundstate |*ψ*(*N*, *μ*)⟩, where *N* is the number of particles in a many-body system or the volume in a field theory. The essence of SSB is that taking the thermodynamic limit and the limit of vanishing chemical potential do not commute. $$\begin{aligned} \lim\_{N \to \infty} \lim\_{\mu \to 0} \langle\psi (N,\mu) \rvert\mathcal{O} \lvert\psi (N,\mu) \rangle &= 0, \label{eq:symmetric limits}\\ \lim\_{\mu \to 0} \lim\_{N \to \infty} \langle\psi (N,\mu) \rvert \mathcal{O} \lvert\psi (N,\mu) \rangle &\neq 0\label{eq:SSB limits}.\end{aligned}$$ Eq.  is the claim that in the absence of a symmetry-breaking external field there is always a symmetric groundstate. Eq.  is called a *quasiaverage*. This implies that in the presence of even the slightest perturbation, the broken state is favored over the symmetric state |Ψsymm⟩. For finite-size systems, this argument requires only little adjustment. In the presence of an external field, one state is favored. After the field is set to zero, this state persists even though it is not the absolute lowest energy state, because the time to tunnel to that state is extremely long, increasing as *N* grows. Three types of groundstates --------------------------- Even though one would think that the groundstate is a well-defined physical object, it is precisely the peculiarity of SSB that this notion becomes more complicated. We have already alluded to this in the introduction: the state realized in nature, for instance a periodic crystal, is almost always not an eigenstate of the Hamiltonian. Instead, the system will find itself near its ‘classical configuration’, for instance in a position and not a momentum eigenstate. There are several ways to look at the classical state—which is a perfectly quantum state as a vector in Hilbert space—but there is a very simple and intuitive definition: The classical state is an eigenstate of the order parameter operator, which extrapolates to the SSB groundstate in the thermodynamic limit. In a many-body system, the classical state is the many-body state that can be decomposed a product of single-particle states. These single-particle states are typically generalized coherent states: minimum-uncertainty wavepackets centered around a certain (classical) value, in our case, centered around the order parameter expectation value ⟨O(*x*)⟩. A good example is the BCS groundstate of a superconductor. For a field theory, the classical state leads to a field configuration which solves the Euler–Lagrange equations of motion. There are two reasons for this naming: in many cases we can directly compare classical and quantum field configurations, governed by classical resp. quantum Hamiltonians, and in those cases the groundstates of the classical system map to these states |Ψcl⟩ in the quantum system. All the lattice-spin models considered in this work are examples of that. The second reason is more profound: SSB is a way in which classical behavior emerges from quantum constituents. Everybody would regard a chair or a table as a classical object, while at the heart it is governed by a quantum Hamiltonian. In this regard a superconductor is as classical as a chair, this is precisely the concept of generalized rigidity . In the limit of vanishing chemical potential, fluctuations come into play. They are the corrections to the classical state due to the dynamical terms in the Hamiltonian or Lagrangian. In the path integral, they are the fluctuations around the classical (stationary) path. The state |Ψ0⟩ is the state which is realized in nature, the classical state dressed with/modified by quantum fluctuations. As we have mentioned, in almost all cases, the order parameter operator O does not commute with the Hamiltonian. The classical state |Ψcl⟩ which is an eigenstate of the order parameter operator, is therefore not an eigenstate of the Hamiltonian. Notwithstanding the different classical states becoming formally orthogonal in the thermodynamic limit, these fluctuations are in principle always present. However, the influence of quantum fluctuations is “usually utterly negligible” in the words of Anderson . In general, quantum fluctuations are stronger when the number of degrees of freedom is low. Here we are interested not in whether quantum fluctuations have a large or small effect, but in symmetry-broken states that fundamentally have no quantum fluctuations at all. We know this to be the case for the Heisenberg ferromagnet (Sec. [subsec:first example ferromagnet]), and we want to investigate whether other systems which have an order parameter operator that commutes with the Hamiltonian behave the same way. In finite-size systems, the different classical states are no longer orthogonal but overlap. Tunneling between the different states |*g**ψ*⟩ is now allowed and will lower the energy. There is a unique groundstate, a certain superposition of all the states |*g**ψ*⟩, which does not break any symmetry at all. This state we call |Ψsymm⟩. For instance, in the antiferromagnet, this is the total spin singlet. In finite-size but large systems, SSB can still take place as we know from everyday experience. The reason is that |Ψsymm⟩ is inherently unstable against perturbations . On the other hand there are states |Ψ0⟩ very close in energy to |Ψsymm⟩ but with broken symmetry, which extrapolate to the symmetry-broken states in the thermodynamic limit, as has been shown explicitly for *U*(1)-SSB in Ref. . Summarizing, SSB as is taught in most courses identifies the classical states |Ψcl⟩ as the degenerate, orthogonal groundstates. In principle, there are always quantum fluctuations that lead to a modified state |Ψ0⟩. Starting out by assuming there is a long-range ordered state |Ψcl⟩ and consecutively examining the symmetry-restoring quantum fluctuations that lead to |Ψ0⟩ is sometimes called the semiclassical method. The effect of quantum fluctuations is habitually rather minimal. The cases where quantum fluctuations are prominent are small systems, but also some large quantum systems like the antiferromagnet of Sec. [subsec:first example antiferromagnet]. Strictly speaking, the symmetry breaking becomes exact for infinite volume, but for any large, finite size, the antiferromagnet is subject to severe quantum fluctuations. Symmetry breaking in finite systems ----------------------------------- We close this section with some remarks on SSB in finite-size systems. Again, strictly speaking spontaneous symmetry breaking only occurs for infinite systems, or equivalently in the thermodynamic limit. In finite-size systems there exists a unique, symmetric groundstate |Ψsymm⟩. It seems that, at least formally, one cannot speak of symmetry breaking of finite systems in the absence of external perturbations. We have the following remarks about this apparent conundrum: * Any system in the universe and especially any system in the laboratory is obviously of finite size. Yet we witness broken symmetries on a daily basis. It would be foolish not to make use of the powerful machinery of order parameters, NG modes, phase transitions etc. while the finite system is really well approximated by the infinite system . The broken state in finite systems is directly related to a formally spontaneously-broken state in the thermodynamic limit . The validity of this approximation can actually be turned into a quantitative statement, by comparing the time it takes to tunnel from the symmetry-broken state to the groundstate, with the time scale that the observer is interested in. In fact, the thermodynamic limit is for all practical purposes reached very quickly, *N* ∼ 100 − 10000. * The non-commuting limits of Eq.  show that any tiny perturbation is sufficient to break the symmetry. No real-world system is completely free of external perturbations. Thus the limiting procedure of maintaining a small external field while taking the thermodynamic limit could be viewed not only as a mathematical trick but also to good approximation as the actual situation in nature as well. * As I will argue later, macroscopic but small systems might well be the most interesting regime for spontaneous symmetry breaking due to the measurable effects of the thin spectrum on the coherence time of quantum superpositions. For these reasons I will not shy away from using the term “spontaneous symmetry breaking” pertaining to finite-size systems. Readers taking a more formal point of view are welcome to replace this with “explicit symmetry breaking” by an infinitesimal field and to regard the NG modes having an infinitesimal mass, or take the limiting procedure using external fields literal. Order parameters and Nambu–Goldstone modes ========================================== In this section we collect known, formal results about spontaneous symmetry breaking and NG modes. In particular we summarize the recent classification of type-A and type-B NG modes. Soon after the establishment of the Goldstone theorem that connects spontaneously broken symmetries to massless bosonic excitations, it was recognized that it should hold in non-relativistic systems as well, although details need to be modified. In particular, in Lorentz-invariant systems, the dispersion relation of a scalar bosonic mode can only be linear *ω* ∝ *k*, since time and space exist on equal footing. In solid state physics it was of course known that the Heisenberg ferromagnet spin wave (magnon) has a quadratic dispersion, and also that there is only one independent spin wave even though there are two broken spin-rotation generators. Lange proved a non-relativistic version of the Goldstone theorem  (see also Ref. ). The precise statement is that there must be an excitation of vanishing energy as momentum goes to zero *k* → 0 whenever there is a spontaneously broken symmetry. Notably it does not state that each broken generator corresponds to an independent excitation, and it does not specify the dispersion relation. Lange also explicitly distinguishes the NG excitations that appear as *k* → 0 and are propagating, from those at exactly *k* = 0, which “are just other ground states” in the degenerate coset space *G*/*H*. A good early review including remarks on non-relativistic systems can be found in Ref. . About a decade later, Nielsen and Chadha proved a counting rule, where the number of broken symmetries is equal to or less than the number of NG modes provided that NG modes which have an even-exponent dispersion *ω* ∝ *k*2*n* are counted twice . In the 1990s, Leutwyler established a method for obtaining effective Lagrangians for the low-energy spectrum of non-relativistic systems with broken symmetries . Here the main ingredient for the distinction between type-A and type-B NG modes is already present, namely the possibility of a term in the effective Lagrangian which is linear in time derivatives, while spatial derivatives must be of at least quadratic order in isotropic systems, such that the dispersion relation *ω* ∝ *k*2 becomes feasible. In the context of kaon condensates in quantum chromodynamics (QCD), it was noted that a NG mode with quadratic dispersion arises and that it is related to the fact that the commutator of a pair of broken generators has a non-vanishing expectation value in the broken groundstate ⟨Ψ0∣[*Q*1, *Q*2]∣Ψ0⟩ ≠ 0  (this is formally not correct, see below). Nambu recognized that in this case, two broken symmetry generators in fact excite the same NG mode . In various configurations, Brauner, Watanabe and Murayama expanded upon these results and finally proved this statement using effective Lagrangians . The same counting rule was found independently using Mori projector operators by Hidaka . Spontaneous symmetry breaking ----------------------------- Take a physical model described by a Hamiltonian H (similar statements can be made for a Lagrangian formalism). Any unitary or antiunitary operator *g* that commutes with the Hamiltonian [*g*, H] = 0 is said to be a *symmetry* of the system. Antiunitary operators can be decomposed into a unitary operator and the time-reversal operator. Since we are interested in continuous symmetries, we shall not discuss the discrete time-reversal operator any further, and consider only unitary transformations. The composition *g*1*g*2 of two symmetry operators is again a symmetry, all symmetries form a group *G* = {*g*}. If the symmetry is continuous, *G* is a Lie group. The corresponding Lie algebra g is said to be generated by a set {*Q**a*} of *symmetry generators*. They are also *Noether charges* since they correspond to the space integral of the temporal component of Noether currents *j**μ**a*(*x*), *Q**a* = ∫d*D**x* *j**t**a*(*x*), which are conserved in time by Noether’s theorem. The *j**a*(*x*) ≡ *j**t**a*(*x*) are called *Noether charge densities*, and satisfy Lie algebra equal-time commutation relations, [*j**a*(*x*), *j**b*(*y*)] = i∑*c**f**a**b**c**j**c*(*x*)*δ*(*x* − *y*),  Here *f**a**b**c* are called *the structure constants*, which completely specify the Lie algebra. The charges themselves satisfy [*Q**a*, *Q**b*] = i∑*c**f**a**b**c**Q**c*. The group *G* now consists of elements *g* = expi*θ**a**Q**a*, where *θ**a* are real parameters. The Noether charges are observables and therefore Hermitian, so *g* is unitary. A state ∣*ψ*⟩ is said to be invariant under the operation *g* if $g \rightharpoonup | \psi \rangle = | \psi \rangle$ (since quantum states are defined up to a phase factor, the right-hand side could in principle obtain a phase factor and still be called invariant). Since *g* = expi*θ**a**Q**a*, this requirement is satisfied if $Q^a \rightharpoonup |\psi\rangle =0$, that is, if the Noether charge annihilates the state. The colloquial way of describing spontaneous symmetry breaking is stating that the groundstate ∣Ψ0⟩ is not invariant under all or some symmetries of the Hamiltonian. However, if *Q*∣Ψ0⟩ ≠ 0, then this state is actually not well-defined . Namely, the groundstate is assumed to be translationally invariant. One of the reasons for this is that the NG modes are momentum eigenstates, and since *P* is the generator of translations, good momentum quantum numbers can only be present in states with translational invariance. For large systems, it is sufficient that the translation symmetry be discrete, such that a crystal lattice, which breaks continuous to discrete translations, possesses enough symmetry for NG modes to be well-defined. Because of translational invariance, the norm of the state *Q*∣Ψ0⟩ would be: $$\begin{aligned} \label{eq:rotated SSB state norm} \langle \Psi\_0 | Q Q | \Psi\_0 \rangle &= \int {\mathrm{d}}^D x\; \langle \Psi\_0 | Q j(x) | \Psi\_0 \rangle \nonumber\\ &=\langle \Psi\_0 | Q j(0) | \Psi\_0\rangle \Big( \int {\mathrm{d}}^D x\; 1 \Big) \to \infty.\end{aligned}$$ Therefore, the formal definition for a symmetry generator *Q**a* to be spontaneously broken in the groundstate ∣Ψ0⟩ is that there exist an operator Φ such that, ⟨Ψ0∣[*Q**a*, Φ]∣Ψ0⟩ ≠ 0. Using the Baker–Campbell–Hausdorff formula, $$\label{eq:Baker--Campbell--Hausdorff formula} {\mathrm{e}}^{X} Y {\mathrm{e}}^{-X} = Y + [X,Y] + \tfrac{1}{2!} [X,[X,Y]] + \ldots$$ it is easy to see that the left-hand side of Eq. must vanish if ei*θ**a**Q**a* leaves the groundstate invariant, so this agrees with our intuition of a broken symmetry. Expectation values of commutators never suffer from the infinities of Eq.. The operator Φ is called the *interpolating operator* (or *interpolating field*). The operator O ≡ [*Q**a*, Φ] is called the *order parameter operator* and its groundstate expectation value ⟨O⟩ = ⟨Ψ0∣O∣Ψ0⟩ is called the *order parameter*. Some authors call O itself the order parameter, but we will always make the distinction. It is a good name, since the order parameter is zero in the disordered, symmetric state while it becomes non-zero in the ordered, symmetry-broken state. The symmetry generators that are not spontaneously broken form a subgroup *H* ⊂ *G*. The broken generators in general do not form a group; instead they form a coset *G*/*H*. As the broken generators transform the order parameter to a different, degenerate value, the coset space is sometimes called *order parameter space*: it enumerates all inequivalent groundstates given an arbitrary reference groundstate. Relative to this reference state, the value of the order parameter acting on the degenerate broken-symmetry states can be associated uniquely with an element of *G*/*H*, and can therefore be said to take values in order parameter space. For instance, in Heisenberg magnets, the full symmetry group of spin rotations is *S**U*(2), the group of unbroken symmetries of rotations around the (staggered) magnetization axis is *U*(1), and the order parameter takes values in *S**U*(2)/*U*(1) ≃ *S*2, the two-sphere. Note that our intuitive picture of unit arrows pointing in some direction in three dimensions is precisely this two-sphere. Nambu–Goldstone modes --------------------- From Eq. one can derive the Goldstone theorem by inserting a complete set of momentum and energy eigenstates, and taking appropriate limits. I will sketch the proof, details can be found for instance in Refs.. We start from the definition of a spontaneously broken generator *Q**a*, Eq. . The expectation value ⟨[*Q**a*, Φ]⟩ ≡ ⟨Ψ0∣[*Q**a*, Φ]∣Ψ0⟩ is time-independent, because $$\begin{aligned} \partial\_t \langle [ Q^a (t), \Phi ] \rangle &= \partial\_t \int {\mathrm{d}}^D x\; \langle [ j^a\_0 (t,x), \Phi ] \rangle \nonumber\\ &= -\int {\mathrm{d}}^D x\; \langle [ \nabla \cdot \mathbf{j}^a (t,x), \Phi ]\rangle \nonumber\\ &= - \oint {\mathrm{d}}\mathbf{S} \cdot \langle [ \mathbf{j}^a (t,x), \Phi ]\rangle =0\end{aligned}$$ Here we use that the Noether current is conserved ∂*t**j*0*a* + ∇ ⋅ **j***a* = 0 and that the surface term must vanish at infinity. In some cases, the surface term does not vanish and this can lead to interesting physics , but we leave this aside here. For the current operator we have *j**a*(*x*, *t*) = eiH*t* − i*P* ⋅ *x**j**a*(0)e− iH*t* + i*P* ⋅ *x* with Hamiltonian H and momentum operator *P*. We assume that the groundstate is translation invariant and has energy *E*0 = 0, such that e− iH*t* + i*P* ⋅ *x*|Ψ0⟩ = |Ψ0⟩. Now we insert a complete set of states ∣*n**k*⟩ with momentum *k* and energy *E**n*(*k*). Then $$\begin{aligned} \langle [ Q^a (t), \Phi ] \rangle &= \sum\_n \int\_k \int\_x \langle \Psi\_0 \rvert j^a\_0 (t,x) \lvert n\_k \rangle \langle n\_k \rvert \Phi \lvert \Psi\_0 \rangle \nonumber \\ &= \sum\_n \int\_k \int\_x \langle \Psi\_0 \rvert j^a\_0 (0) {\mathrm{e}}^{{\mathrm{i}}E\_n t-{\mathrm{i}}kx} \lvert n\_k \rangle \langle n\_k \rvert \Phi \lvert \Psi\_0 \rangle \nonumber\\ &\phantom{mmmmmmm} -\langle \Psi\_0 \rvert \Phi \lvert n\_k \rangle \langle n\_k \rvert {\mathrm{e}}^{-{\mathrm{i}}E\_n t+{\mathrm{i}}kx} j^a\_0 (0) \lvert \Psi\_0 \rangle \nonumber\\ &= \sum\_n \int\_k \delta(k) \Big[{\mathrm{e}}^{{\mathrm{i}}E\_n t} \langle \Psi\_0 \rvert j^a\_0 (0) \lvert n\_k \rangle \langle n\_k \rvert \Phi \lvert \Psi\_0 \rangle \nonumber\\ &\phantom{mmmmmmm} -{\mathrm{e}}^{-{\mathrm{i}}E\_n t} \langle \Psi\_0 \rvert \Phi \lvert n\_k \rangle \langle n\_k \rvert j^a\_0 (0) \lvert \Psi\_0 \rangle \Big].\end{aligned}$$ Now by Eq.  the left-hand side is non-zero. That implies at least one of the terms of the right-hand side must be non-zero. We have also seen that the left-hand side is time-independent, which means that the non-zero terms must have *E**n*(*k*) → 0. Since ∫d*D**x* e± *i**k**x* = *δ*(*k*), which is peaked around *k* → 0, we now conclude that there is at least one state |*π*⟩ ≡ |*n**k*⟩ which is excited by both the Noether charge density *j**a* and the interpolating field Φ, which has zero energy towards zero momentum, i.e. it is a gapless mode. This is the NG mode. If the theory is Lorentz invariant this mode must have linear dispersion *E**n*(*k*) ∝ *k*, but this is clearly not a requirement in the proof of the Goldstone theorem itself. From this we can already infer that if the order parameter operator is a symmetry generator itself O = *Q**c*, then via the Lie algebra relations Eq. the broken symmetry generators will always come in pairs (they are each other’s interpolating field Φ), and via the Goldstone theorem these pairs will excite the same NG mode. It is interesting to note that a relativistic theory can never have an order parameter operator that is a Noether charge density, because a Noether charge density is the temporal component of a four-current, and as such would change value under Lorentz transformations. There is an exception: finite Noether charge densities can arise for a Lorentz-invariant Lagrangian when the Lorentz symmetry is broken spontaneously. Now we state the counting rule as presented by Watanabe and Murayama . It rests on two observations: the non-vanishing commutator expectation value of broken symmetry generators (the order parameter), and the fact that the term in the effective Lagrangian of NG modes linear in time derivatives is proportional to this order parameter. First define the matrix *ρ**a**b* =  − i⟨Ψ0∣[*Q**a*, *j**b*(*x*)]∣Ψ0⟩. Because the groundstate is translationally invariant, the matrix is independent of *x*. As a result of the Lie algebra relations Eq., it is a real matrix, and because of the commutator it is antisymmetric. The authors realized that a real, antisymmetric matrix can always be cast in a block diagonal form by a suitable orthogonal transformation, where the blocks are either antisymmetric 2 × 2-matrices or zero-matrices: $$\begin{aligned} \rho\_{ab} &\to \begin{pmatrix} M\_1 & & & \\ & \ddots & & \\ & & M\_K & \\ & & & 0 \end{pmatrix}, \qquad M\_i = \begin{pmatrix} 0 & \lambda\_i \\ -\lambda\_i & 0 \end{pmatrix}.\end{aligned}$$ Here the *λ**i* are real and non-zero. The rank of the matrix is: rank *ρ**a**b* = 2*K*. The *N* broken symmetry generators split up into *N* − 2*K* generators with a separate interpolating field and vanishing commutator expectation value with all other generators, and *K* pairs with mutual non-vanishing commutator expectation value. Let *π**a*(*x*) represent the NG field corresponding to ⟨*π*∣*j**a*(*x*)∣0⟩ in the derivation of the Goldstone theorem. The Fourier transform is *π**a*(*k*) = ⟨*π*∣*j**a*(*k*)∣0⟩ = ⟨*π*∣∫d*D**x* ei*x* ⋅ *k**j**a*(*x*)∣0⟩. Watanabe and Murayama showed that the term in the effective Lagrangian with a single time derivative takes the form (after an orthogonal transformation). $$\label{eq:single time derivative effective Lagrangian} \mathcal{L}^{(1)}\_\mathrm{eff} = \frac{1}{4} \rho\_{ab} \big( (\partial\_t \pi^a) \pi^b - \pi^a (\partial\_t \pi^b) \big).$$ Thus precisely those generators *Q**a*, *Q**b* which have a finite commutator expectation value *ρ**a**b* ≠ 0 will lead to this term, showing that their dynamics are coupled, and the two NG fields correspond to one NG mode. As noted, in general it is an expectation value of a symmetry generator (Noether charge density) *λ**i* = ⟨*j**i*⟩. When this expectation value would vanish, so would this term in the effective Lagrangian. Next, because they feature a single time derivative (besides possible quadratic time derivatives), their dispersion relation is altered. In almost all cases, including all cases of internal symmetry, the lowest order in spatial derivatives is quadratic, leading to a *ω* ∝ *k*2 dispersion relation; some more exotic dispersions have been found when external (spacetime) symmetry is broken . For isotropic systems, which are assumed for the spontaneously broken groundstate as is translational symmetry, a term linear in spatial derivatives must vanish. These *K* modes are called type-B NG modes, while the *N* − 2*K* modes without coupled dynamics are called type-A NG modes. In almost all cases, type-A modes have linear, and type-B modes quadratic dispersion. For our purposes here, this shows that an order parameter operator that is a symmetry generator causes a reduced number of quadratically dispersing NG modes. Therefore property (i) implies (ii) (See Sec. [sec:Introduction]). A very interesting consequence of this distinction is the effect of fluctuations on the stability of the ordered state in low dimensions. The Mermin–Wagner–Hohenberg–Coleman theorem states that spontaneous fluctuations prevent the formation of an ordered state of infinite size (thermodynamic limit) at dimensions *D* ≤ 2 for any finite temperature and at dimension *D* ≤ 1 for zero temperature. The heuristic argument is as follows: the low-energy perturbations of the order parameter are the NG modes. For a stable groundstate, the fluctuations for small distances *x* → 0 should be small. The correlation function in *D* spatial dimensions for the order parameter over a distance *x* involves the propagator of scalar NG modes via $$\lim\_{x\to 0} \int {\mathrm{d}}^D k\; {\mathrm{d}}\omega\; {\mathrm{e}}^{ikx} \frac{1}{\omega^2 - k^2} \sim \int {\mathrm{d}}k\; k^{D-1} \frac{1}{k} \sim \int {\mathrm{d}}k\; k^{D-2}.$$ This correlation function has an infrared (*k* → 0) divergence if *D* ≤ 1. This is the result for zero temperature; at finite temperature there is an additional factor of 1/*k* from the boson distribution function of the NG modes, which effectively causes the divergence to occur already in one spatial dimension higher. This is in congruence with the notion that a quantum field theory can generally be mapped onto a thermal classical field theory in one dimension higher. However, all these statements assume that the propagator of the NG mode is like $\frac{1}{\omega^2 - k^2}$. If there are only type-B NG modes, we have in fact a different propagator and we need to reevaluate the derivation. Watanabe and Murayama have shown that indeed, if there are only type-B NG modes, there is the possibility of long-range order at zero temperature even in 1+1 dimensions . This is corroborated by the established knowledge that long-range order is indeed possible in the ferromagnetic (one-dimensional) spin chain. See Sec. [sec:Conclusions] for a remark about this statement. Order parameters and quantum fluctuations ========================================= Now we come to the main part of this work. We investigate how the facts that the order parameter operator is a symmetry generator and the absence of quantum fluctuations, as witnessed in the Heisenberg ferromagnet, are related. In the previous section we have seen that when two spontaneously broken symmetry generators have a non-vanishing groundstate expectation value of their commutator, we get a type-B NG mode. In almost all cases this implies that the order parameter operator is a linear combination of symmetry generators via the Lie algebra relations Eq.. The only other option is a so-called central extension of the Lie algebra , but here we are not interested in that possibility. Therefore we assume that the order parameter operator is a superposition of symmetry generators, and therefore it commutes with the Hamiltonian. Many authors, most notably Anderson , divide the whole universe of ordered states into those which have an order parameter operator that commutes with the Hamiltonian and those that do not, to immediately afterwards dryly state that the former collection contains only one known example: the Heisenberg ferromagnet. Consequently they tend to overgeneralize the significance of the commuting order parameter operator, and claim that it is a sufficient condition for the absence of quantum fluctuations. The reasoning is as follows: quantum fluctuations are deviations from the classically preferred direction in configuration space of the order parameter effected by unitary time evolution from the classical groundstate ∣Ψcl⟩. But if the order parameter operator commutes with the Hamiltonian, it is conserved in time, and hence it will not deviate at all. The flaw in this argument is that only the *total* order parameter operator *Q* = ∫d*D**x* *j*(*x*) is conserved in time, while the density *j*(*x*) is generally not (this is also true for the ferromagnet). Thus in principle, local fluctuations that leave the total order parameter invariant are allowed. Another way of saying this is that there are many eigenstates of the total order parameter operator that have the same eigenvalue, and the Hamiltonian will in general bring one to a mixture of these eigenstates. Finite Noether charge densities ------------------------------- First we make the following observation concerning the expectation value of symmetry generators. Initially we are concerned about cases where two broken symmetry generators *Q**a*, *Q**b* have a non-vanishing expectation value of their commutator, such that the order parameter operator is some combination of symmetry generators *Q**c*, but the symmetry transformations generated by that order parameter operator are themselves unbroken. For simplicity, consider *S**U*(2) where *S**x*, *S**y* are broken, and *S**z* is the order parameter operator, but spin-rotations around the *z*-axis, generated by *S**z*, are themselves unbroken. The other case, where *S**z* is also broken, as in a canted magnet, could be pictured by starting from the unbroken case but with an additional small breaking of *S**z* as well. By small we mean the expectation value of the order parameter. This additional broken symmetry will just result in an additional type-A NG mode on top of the type-B mode excited by *S**x* and *S**y*, see Sec. [subsec:example Canted magnet]. Larger values of the order parameter can be obtained by adiabatic continuation. The next two sections [subsec:Highest-weight states]–[subsec:Quantum fluctuations] concern the case of finite Noether charge densities that are themselves unbroken. In this case the symmetry generator *S**z* does not annihilate the groundstate, as would be usually the case for symmetry generators. But the groundstate must still be an eigenstate of *S**z* with finite eigenvalue *M* since it is an unbroken symmetry generator. Then ei*θ**S**z*∣Ψ0⟩ = ei*θ**M*∣Ψ0⟩, and this is equivalent to |Ψ0⟩ up to a phase factor, which is indistinguishable in quantum mechanics. This expression suffers from the same infinity as in Eq., but the statement for the Noether charge density ei*θ**j**z*(*x*)∣Ψ0⟩ = ei*θ**m*∣Ψ0⟩ is consistent, where *m* = *M*/*V* and *V* the volume of the system. Note that in a ferromagnet *M* corresponds to the magnetization which is a perfectly good physical quantity, and which obviously diverges in the thermodynamic limit since it is extensive. This special case is property (vi) of the enumeration in Sec. [sec:Introduction]. Since *j**z*(*x*) is a Noether charge density which has a non-vanishing groundstate expectation value, it is referred to as a *finite Noether charge density*. Here we see that finite Noether charge densities that are unbroken symmetry generators do not annihilate the groundstate but have instead a finite eigenvalue. It is possible to ‘remove’ this finite expectation value by redefining *S*new*z* = *S*old*z* − ⟨*S*old*z*⟩ but this modifies the Lie algebra relations Eq.  which we do not desire. And since the magnetization is a physically significant quantity, I see no benefit in doing so. Highest-weight states --------------------- The *Cartan subalgebra* of a Lie algebra is the subalgebra generated by a maximal set of mutually commuting Lie algebra generators. It has been shown that it is possible to choose the basis of the Cartan subalgebra in such a way that only Noether charge densities that lie in this Cartan subalgebra can obtain a finite groundstate expectation value . The eigenstates of generators *Q**c* in the Cartan subalgebra are called *weight states* ∣*μ*⟩ with *weight* *μ**c*, collected in a *weight vector* *μ* = {*μ**c*} (see any textbook, e.g. Ref. ). Thus by definition, *Q**c*∣*μ*⟩ = *μ**c*∣*μ*⟩. Again, one can revert to densities *j**c*(*x*) or consider a finite volume here if one is worried about infinities. Then the eigenvalues *μ**c* are always finite. It is possible to establish an ordering in the weight vectors *μ* , and the highest one is called the *highest-weight vector*, and the state ∣*μ*⟩ is then the *highest-weight state*. For finite-dimensional irreducible representations of the Lie algebra, in which we are exclusively interested here, the highest-weight state is unique. This does assume that we have chosen a certain basis for the Lie algebra, since the Cartan subalgebra is unique only up to isomorphism. For instance in a Heisenberg magnet, choosing the Cartan subalgebra to consist of *S**z*, fixes the highest-weight state to be the maximal magnetization in the *z*-direction. The space of operators not in the Cartan subalgebra is spanned by the *root generators* *E**α* = ∫d*D**x* *e**α*(*x*), where *e**α*(*x*) is the root generator density, and *α* is called the *root vector*. The root generators change the weight of weight states as *E**α*∣*μ*⟩ ∝ ∣*μ* + *α*⟩. An alternative definition of the highest-weight state is that it is annihilated by all positive root generators (the notion of positivity here follows the established ordering, of course). Furthermore *E**α*† = *E*− *α*. Thus the root generators are not Hermitian but the linear combinations $E^+\_{\alpha} = \tfrac{1}{\sqrt{2}}(E\_\alpha + E\_{-\alpha})$ and $E^-\_{\alpha} = -{\mathrm{i}}\tfrac{1}{\sqrt{2}}(E\_\alpha - E\_{-\alpha})$ are. These combinations are elements of the Lie algebra of symmetry transformations, are Hermitian and hence observables, and are therefore symmetry generators *Q**a*. The commutation relation is [*E**α*, *E*− *α*] = ∑*c**α**c**Q**c* and for their densities [*e**α*(*x*), *e*− *α*(*y*)] = ∑*c**α**c**j**c*(*x*)*δ*(*x* − *y*) at equal times. For the Hermitian combinations this leads to [*e**α*+(*x*), *e**α*−(*y*)] = i∑*c**α**c**j**c*(*x*)*δ*(*x* − *y*). For instance in the ferromagnet, *S**z* spans the Cartan subalgebra, the raising and lowering operators *S*± = *S**x* ± i*S**y* are root generators, *S*+ = (*S*−)†, and *S**x*, *S**y* are symmetry generators. The weight states for spin-*s* are |*s*, *m*⟩, and the root generators act as $S^\pm \lvert s,m \rangle = \sqrt{ s(s+1) - m (m\pm1)} \lvert s, m \pm 1\rangle$. So far this is textbook Lie algebras. Let us now connect it to finite Noether charge densities. When some *Q**c* obtains a finite Noether charge density but is itself unbroken, the groundstate is a finite-weight state with *μ**c* = *M* as mentioned above. Each pair of root generators *E**α*, *E*− *α* for which ∑*c**α**c*⟨*Q**c*⟩ ≠ 0 corresponds to a pair of broken symmetry generators *E**α*+, *E**α*−. They are broken because they do not leave the groundstate invariant, i.e. they change the weight from *μ* to *μ* + *α*, with at least some *α**c* non-zero since ∑*c**α**c*⟨*Q**c*⟩ ≠ 0. The root generators are precisely the combinations that are relevant to NG modes, since they connect to states with a good quantum number for the order parameter *Q**c*. That is, we can derive the Goldstone theorem from Eq.. We know from Sec. [subsec:Nambu–Goldstone modes] that there is only one massless type-B NG mode. I expect but cannot prove in full generality that the root generator that brings the weight closer to zero excites the massless NG mode. For instance, if *μ**c* > 0 and *α**c* > 0, then *E*− *α* should excite the NG mode. This holds true for all known examples including the ferromagnet and the linear sigma model with chemical potential, see Sec. [subsec:example Linear sigma model]. We are particularly interested in what happens to the other root generator *E**α*. First let us assume that we are not in the highest-weight state. Then *E**α*∣*μ*⟩ = ∣*μ* + *α*⟩ is accessible, and there is a mode associated with ∣*μ* + *α*, *k*⟩ = *e**α*(*k*)∣*μ*⟩, just as the massless NG mode is associated with ∣*μ* − *α*, *k*⟩ = *e*− *α*(*k*)∣*μ*⟩. Here *e**α*(*k*) = ∫d*D**x* ei*x* ⋅ *k**e**α*(*x*). Since we know that there is only one massless NG mode for the pair *E**α*, *E*− *α*, we expect that ∣*μ* + *α*, *k*⟩ corresponds to a massive mode, which is a ‘partner’ to the massless NG mode associated with ∣*μ* − *α*, *k*⟩. This is completely consistent with the classification scheme of Hayata and Hidaka , which identifies “gapped partner modes” to a large class of type-B NG modes. We come back to this point in Sec. [subsec:Multitude of order parameters]. In the explicit example of the linear sigma model with finite chemical potential, see Sec. [subsec:example Linear sigma model], these two modes have dispersion relations of the form , $$\label{eq:typical type-B dispersion} \omega\_{\pm} = \sqrt{k^2 + \Delta^2} \pm \Delta$$ where 2Δ is the energy gap. The mode *ω*− has indeed a vanishing energy and dispersion *ω* ∝ *k*2 for *k* → 0 while *ω*+ has an energy gap of 2Δ. I conjecture that the two modes excited by *E**α*, *E*− *α* will always have such a dispersion relation to quadratic order in *k*. The NG mode excited from a state with zero weight has Δ = 0, and states approaching this state, namely states with weight *μ**c* → 0, should have similar physical properties. It is then natural to expect that the gap Δ grows with higher weight, i.e. for a larger order parameter density *m*. [fig:levels] The other possibility is that we are in the highest-weight state ∣*μ*⟩ = ∣*μ*highest⟩. Because of the translational invariance of the groundstate it is the highest-weight state at every point in space. In this case, acting with the positive root *e**α*(*x*) at any point *x* will take one out of the Hilbert space and such operations are therefore forbidden. In this case the second, gapped mode is absent, like in the Heisenberg ferromagnet. The distinction between these two cases is depicted in Fig. [fig:levels] Quantum fluctuations -------------------- We will now show that only groundstates which are highest-weight states cannot have any quantum fluctuations. Recall that here, quantum fluctuations are understood to be the deviations from the classical groundstate ∣Ψcl⟩ as effected by the quantum Hamiltonian. The classical groundstate ∣Ψcl⟩ is an eigenstate of the order parameter operator, with the same eigenvalue at each point in space due to translational invariance. The order parameter takes values in the configuration space which is the coset space *G*/*H*. The presence of quantum fluctuations means that the groundstate |Ψ0⟩ is instead a mixture of states in *G*/*H*. The only operators that can act on this space are the original symmetry transformations in *G*, generated by the Lie algebra generators *Q**a*. Therefore, the Hamiltonian governing the low-energy excitations consists of compositions of operators *j**a*(*x*) corresponding to the densities of the generators *Q**a*. In the previous subsection we have shown how each generator corresponds either to unbroken symmetries, or to broken symmetries which must be Hermitian combinations of root generators. We again assume that the order parameter operator itself is an unbroken symmetry generator, and that there are no other type-A NG modes (conversely, if the symmetry generator is broken it excites type-A NG modes and those always lead to quantum fluctuations). In that case, since it commutes with the Hamiltonian, its expectation value is an eigenvalue that is conserved in time. Then the Hamiltonian consists of compositions of generator densities that leave this eigenvalue unchanged. The unbroken symmetries themselves commute with the Hamiltonian and their action cannot change conserved quantities. On the other hand the broken symmetry generator densities are split up into root generators *e**α*(*x*), *e*− *α*(*x*). These root generators alter the expectation value of the order parameter as *e**α*(*x*)∣*μ*, *y*⟩ ∝ *δ*(*x* − *y*)∣*μ* + *α*, *y*⟩, where ∣*μ*, *y*⟩ is the symmetry-broken groundstate at position *y*. Only compositions of root generators that leave the total order parameter eigenvalue invariant can be allowed. Such terms are for instance of the form *e**α*(*x*)*e*− *α*(*y*). They lower the eigenvalue at some point
arxiv_0000157
Combinatorics for certain Skew Tableaux, Dyck Paths, Triangulations, and Dissections ==================================================================================== We present combinatorial bijections and identities between certain skew Young tableaux, Dyck paths, triangulations, and dissections. Introduction ============ Bijections between Catalan objects are well understood. For instance, see. There are many generalizations of these objects and the bijections between them. In, Stanley provides a bijection between certain standard Young tableaux and dissections of a polygon. In, the authors provide a bijection between the same tableaux and certain Dyck paths. Meanwhile, various papers consider a certain collection of skew Young tableaux—which may be seen as a generalization of the aforementioned tableaux—which are used to compute formulas for the ordinary and equivariant Kazhdan–Lusztig polynomial for uniform, sparse paving, and paving matroids. [1](#fn1) , properties about the skew tableaux will have implications for the Dyck paths and triangulation objects of interest. Motivated by our findings, we then find a combinatorial bijection between the dissections in and our triangulations. In the next section, we will define relevant terminology for skew Young tableaux in subsection [subsec:sytback], Dyck paths in subsection [subsec:dyckback], and then both dissections and triangulations in subsection [subsec:distriback]. Then in subsection [subsec:mainback], we discuss the main results and findings of this paper in detail. In sections [sec:combinatorialbijections] and [sec:enumeration], we provide the definitions for the maps involved in the main results. ### Acknowledgements: The authors would like to thank Kyungyong Lee for his helpful input on this paper. Background and Main Results =========================== Skew Young Tableaux and Nomincreasing Partitions ------------------------------------------------ Let *λ*1 ≥ *λ*2 ≥ ⋯ ≥ *λ**k* be positive integers. We say that *λ* = [*λ*1, *λ*2, …, *λ**k*] is a *partition* of *n* if *λ*1 + ⋯ + *λ**k* = *n*. The *Young diagram of shape* *λ* is represented by boxes that are left justified so that the *i*th row has *λ**i* boxes. A *standard Young tableau* is achieved by filling the boxes with the numbers so that * each row strictly increases from left to right; * each column increases from top to bottom; and * if there are *n* boxes, only the numbers 1 through *n* are used. See Figure [fig:tableaux] below for an example of a Young diagram and standard Young tableaux. .45 .45 1 & 3 & 7 & 11 & 12 & 14 & 15 2 & 5 & 13 & 16 4 & 8 6 & 10 9 [fig:tableaux] Given partitions *μ* = [*μ*1, …, *μ*ℓ] and *λ* = [*λ*1, …, *λ**k*] so that *μ**i* ≤ *λ**i* for all *i*, the *skew Young diagram* *λ* \ *μ* is the set of squares from the diagram for *λ* that are not in the diagram for *μ*. As before, we define a *skew Young tableau* to be a skew Young diagram filled with numbers following the same rules described for standard Young tableau. See Figure [fig:skewtableaux] for an example of a skew Young tableaux. & & 1 & 2& 3 & 9 & 11 & 4 & 6 & 7 & 5 8 & 10 12 [fig:skewtableaux] The authors in introduce the notation $\Skyt(a,i,b)$ to denote the skew Young tableaux of shape [(*i* + 1)*b*, 1*a* − 2]/[(*i* − 1)*b* − 2], where we write *x**t* to denote *x*, *x*, …, *x*, where *x* is written *t* times. These are precisely the skew tableaux we discussed in the introduction. The diagram for the tableaux in $\Skyt(a,i,b)$ is shown in Figure [fig:skytsytdiagrams]. [scale=0.5, line width=0.9pt] (-1,0) grid (0,6); (-1,0) – node[left=7pt] *a* (-1,6); (0,4) grid (4,6); (0.1,4) – node[below=7pt] *i* (5,4); (4,4) grid (5,9); (5,4) – node[right=7pt] *b* (5,9); [fig:skytsytdiagrams] Dyck Paths ---------- A *Dyck path of semi-length *n** is a string in $\{{\textsf{U}},{\textsf{D}}\}^{2n}$ so that 1. the string has the same number of ${\textsf{U}}$’s and ${\textsf{D}}$’s (that is, *n* of each); and 2. the number of ${\textsf{U}}$’s is at least the number of ${\textsf{D}}$’s in any initial segment of the word. We will also often represent such a path visually using (1, 1) segments for ${\textsf{U}}$ and ( − 1, 1) segments for ${\textsf{D}}$, as in Figure [fig:dyckexample]. [scale=0.7, line width=1pt] (-3,0)–(15,0); (-2,0) circle[radius=5pt]; (-2,0)–(-1,1); (-1,1) circle[radius=5pt]; (-1,1)–(0,0); (0,0) circle[radius=5pt]; in 0,1,...,2 (,)–(+1,+1); (+1,+1) circle[radius=5pt]; (3,3)–(4,2); (4,2) circle[radius=5pt]; (4,2)–(5,3); (5,3) circle[radius=5pt]; (5,3)–(6,2); (6,2) circle[radius=5pt]; (6,2)–(7,1); (7,1) circle[radius=5pt]; (7,1)–(8,2); (8,2) circle[radius=5pt]; (8,2)–(9,1); (9,1) circle[radius=5pt]; (9,1)–(11,3); (10,2) circle[radius=5pt]; (11,3) circle[radius=5pt]; (11,3)–(14,0); (12,2) circle[radius=5pt]; (13,1) circle[radius=5pt]; (14,0) circle[radius=5pt]; [fig:dyckexample] A *long ascent* is a maximal ascent of length at least 2. A *singleton* is a maximal ascent of length 1. Let $\Dyck(n,\ell,s)$ be the Dyck paths of semi-length *n* with ℓ long ascents and *s* singletons so that no singleton appears *after* the last long ascent. Thus, the Dyck path in Figure [fig:dyckexample] is an element of $\Dyck(8,2,3)$. Dissections and Triangulations ------------------------------ Throughout this section, we assume polygons with *n* vertices have their vertices labeled 1 through *n* in counter-clockwise order. A *dissection* of a polygon *P* is a way of adding chords between non-adjacent vertices so that no two chords intersect in the interior of the polygon. Throughout, we let $\Dis(n,i)$ be the set of all dissections of an *n*-gon with *i* chords. Note that *i* in $\Dis(n,i)$ is at most *n* − 2. The elements of $\Dis(n,n-2)$ are the *triangulations* of an *n*-gon. Given a vertex *x* in a triangulated polygon, a *fan at *x** is a maximal collection triangles all containing *x*. In this case, we call *x* the *origin* of the fan. A *singular* fan is a fan with only one triangle. Let *e* be a boundary edge of a fan *F* at *x*. We are interested in being able to uniquely partition a triangulation into a collection of fans. This leads to the following definition. Let *T* be a triangulation. A *fan decomposition* is the the pair of sequences (F(*T*), *δ*(*T*)), where F(*T*) and *δ*(*T*) are defined as follows: * We let F(*T*) be a sequence of fans defined recursively as follows. Let *F* be the fan at the vertex with the smallest label. Delete this vertex and all edges incident with it in *T* to obtain a sequence of triangulations *T*1, …, *T**k*, arranged in counter clock-wise order so that *T**i* ∩ *T**i* + 1 is just vertex. If *T* is just an edge, then F(*T*) is the empty sequence, and otherwise $\mathcal{F}(T)\coloneqq(F,\mathcal{F}\_1,\dots, \mathcal{F}\_k)$ where F*i* = F(*T**i*). * Let *x**j* be the label of the origin of *F**j*. We let $\delta(T)\coloneqq (d\_1,\dots, d\_{k-1})$, where $d\_i\coloneqq x\_{i+1}-x\_i$ and *k* is the number of fans in F(*T*). One can think of *d**i* as the number of edges between the origins of *F**i* and *F**i* + 1 when traveling along the boundary of *T* counter-clockwise. Consider the triangulation *T* in Figure [fig:triangledecomp]. Observe that F(*T*) = (*F*1, *F*2, *F*3, *F*4, *F*5) where *F*1 is the size 1 fan at vertex 1, *F*2 is the size 3 fan at vertex 2, *F*3 is the size 1 fan at vertex 4, *F*4 is the size 1 fan at vertex 5, and *F*5 is the size 4 fan at vertex 7. Thus, *δ*(*T*) = (1, 2, 1, 2). Figure [fig:triangledecomp] shows the five fans, distinguishing them by thick boundary edges and different shades of orange in their interior. The white vertices correspond to the origins of the fans. [scale=1] at (.74,.76) (e); (e.corner 1)–(e.corner 2)–(e.corner 12)–(e.corner 1); (e.corner 1)–(e.corner 2)–(e.corner 12)–(e.corner 1); (e.corner 2)–(e.corner 12)–(e.corner 7)–(e.corner 4)–(e.corner 3)–(e.corner 2); (e.corner 2)–(e.corner 12)–(e.corner 7)–(e.corner 4)–(e.corner 3)–(e.corner 2); (e.corner 4)–(e.corner 5)–(e.corner 7)–(e.corner 4); (e.corner 4)–(e.corner 5)–(e.corner 7)–(e.corner 4); (e.corner 6)–(e.corner 5)–(e.corner 7)–(e.corner 6); (e.corner 6)–(e.corner 5)–(e.corner 7)–(e.corner 6); (e.corner 7)–(e.corner 8)–(e.corner 9)–(e.corner 10)–(e.corner 11)–(e.corner 12)–(e.corner 7); (e.corner 7)–(e.corner 8)–(e.corner 9)–(e.corner 10)–(e.corner 11)–(e.corner 12)–(e.corner 7); at (.74,.76) (e); in 1,...,12at (e.corner ) ; at (e.corner 1) 1; at (e.corner 2) 2; at (e.corner 3) 3; at (e.corner 4) 4; at (e.corner 5) 5; at (e.corner 6) 6; at (e.corner 7) 7; at (e.corner 8) 8; at (e.corner 9) 9; at (e.corner 10) 10; at (e.corner 11) 11; at (e.corner 12) 12; (e.corner 2) – (e.corner 12) ; (e.corner 2) – (e.corner 7) ; (e.corner 2) – (e.corner 4) ; (e.corner 4) – (e.corner 7) ; (e.corner 5) – (e.corner 7) ; (e.corner 6) – (e.corner 7) ; (e.corner 7) – (e.corner 10) ; (e.corner 7) – (e.corner 9) ; (e.corner 7) – (e.corner 11) ; (e.corner 7) – (e.corner 12) ; (e.corner 1) circle (1.25pt); (e.corner 2) circle (1.25pt); (e.corner 4) circle (1.25pt); (e.corner 5) circle (1.25pt); (e.corner 7) circle (1.25pt); [fig:triangledecomp] Observe that a fan decomposition uniquely determines *T*. That is, knowing the order and size of each fan along with the distance between origins of consecutive fans uniquely determine a triangulation. Let $\Tri(n,t,s)$ be the triangulations *T* of an *n*-gon so that F(*T*) has *s* + *t* fans so that precisely *s* are singular and so that the last fan is not singular. Thus, the triangulation in Figure [fig:triangledecomp] is an element of $\Tri(12,2,3)$. Main Results ------------ We now may state the main results of this paper. First, let us state, the result which we plan to generalize. We state the result by referencing the object $\Skyt(a,i,b)$ we defined above. [] [prop:originaldyckbijection] The tableaux in $\Skyt(a,i,2)$ are in bijection with Dyck paths of length 2(*a* + 2*i*) with *i* + 1 peaks and no singletons. In Section [sec:combinatorialbijections], we will provide explicit combinatorial maps which give us the following Theorem. [thm:skytdycktri] The following objects are in bijection: 1. $\Skyt(a,i,b)$; 2. $\Dyck(a+b+2i-2,i+1,b-2)$; and 3. $\Tri(a+b+2i,i+1,b-2)$. The maps between the three objects are defined in section [sec:combinatorialbijections]. The following pairs of maps are mutual inverses: * maps $\operatorname{\operatorname{\mathtt{SD}}\xspace}$ and $\operatorname{\operatorname{\mathtt{DS}}\xspace}$; * maps $\operatorname{\operatorname{\mathtt{ST}}\xspace}$ and $\operatorname{\operatorname{\mathtt{TS}}\xspace}$; and * maps $\operatorname{\operatorname{\mathtt{TD}}\xspace}$ and $\operatorname{\operatorname{\mathtt{DT}}\xspace}$. Note that this generalizes the result stated in Proposition [prop:originaldyckbijection], in addition to adding a triangulation interpretation. With the original motivation for this paper in mind, we specialize Theorem [thm:skytdycktri] to *b* = 2. After incorporating the work of of which provides a combinatorial bijection between $\Dis(n+2,i)$ and $\Skyt(n-i+1,i,2)$, we have the following. [2](#fn2) The following objects are in bijection. 1. $\Dis(n+2,i)$. 2. $\Skyt(n-i+1,i,2)$. 3. $\Dyck(n+i+1,i+1,0)$. 4. $\Tri(n+i+3,i+1,0)$. By specializing the maps involved in Theorem [thm:skytdycktri], we already have combinatorial bijections between the standard Young tableaux, Dyck paths, and triangulations in this theorem, though its important to note that are bijection between the standard Young tableaux and Dyck paths is precisely the proof of Proposition [prop:originaldyckbijection]. This leaves two pairs of objects with missing combinatorial bijections. In section [sec:enumeration] we demonstrate a bijection between the dissections and triangulations given in this corollary. Using our bijection between Dyck paths and Triangulations, one can extend our work in section [sec:enumeration] to give a bijection between the dissections and Dyck paths in Corollary [cor:disecbij], but we omit this interpretation from this paper. For our final result, we recall the following Lemma in terms of Dyck paths and triangulations. [lem:skewsym] Let *a*, *i*, and *b* be nonnegative integers. Then $$\# \Skyt(a,i,b)=\#\Skyt(b,i,a).$$ One may apply Theorem [thm:skytdycktri] to this Lemma in order to get the following. [cor:dycksym] 1. Let *n*, ℓ, *s* be nonnegative integers. Then $$\#\Dyck(n,\ell,s)=\#\Dyck(n,\ell,n-s-2\ell).$$ 2. Let *n*, *t*, *s* be nonnegative integers. Then $$\#\Tri(n,t,s)=\#\Tri(n,t,n-t-2\ell-2).$$ Although these equalities are naturally obtained with Theorem [thm:skytdycktri] and Lemma [lem:skewsym], there is no known direct combinatorial bijection describing these equalities. Hence we pose the following. Find a direct combinatorial proof of Corollary [cor:dycksym] which does not rely on using the skew tableaux or bijections given in this paper. Combinatorial Bijections ======================== The following subsections describe maps going between any two of the objects given in Theorem [thm:skytdycktri]. For convenience, we identify maps according to where the map from and to by using `S` for skew Young tableaux, `T` for Triangulations, and `D` for Dyck paths. For instance, map $\operatorname{\operatorname{\mathtt{ST}}\xspace}$ represents the map from skew Young tableaux to traingulations, and $\operatorname{\operatorname{\mathtt{TD}}\xspace}$ represents a map from triangulations to Dyck paths. Examples are used to alleviate any ambiguity with our maps. Before proceeding, however, we will point out a handy reinterpretation of the tableaux in $\Skyt(a,i,b)$. Let $\lambda\in \Skyt(a,i,b)$. Let *X* = {*x*1, *x*2, …, *x**i* + *b* − 1} be the set of values in the top *b* − 1 rows so that *x*1 < *x*2 < ⋯ < *x**i* + *b* − 1. If *x**j* is in row *b* − 1, define *y**j* to be the entry in the tableau directly below *x**j*. Then for 1 ≤ *j* < *i* + *b* − 1 let $$A\_{j}\coloneqq\begin{cases} \{x\_j\} & \text{if $x\_j$ is in the first $b-2$ rows;}\\ \{x\_j\}\cup\left([y\_j,y\_{k}-1]\setminus X\right)& \text{if $x\_j$ is in row $b-1$ and $y\_k$ is to the right of $y\_j$,}\end{cases}$$ where [*y**j*, *y**k* − 1] = {*y**j*, *y**j* + 1, *y**j* + 2, …, *y**k* − 1}. Let $A\_{i+b-1}\coloneqq\{x\_j\}\cup \big([x\_j+1,a+b+2i-2]\setminus X\big)$. Note that *x**j* is always the minimum of *A**j*. When ∣*A**j*∣ > 1, note the elements of *A**j* are precisely the entries in row *b* − 1 and *b* in column *j* along with all entries of column 1 which are between *y**j* and *y**k*. See Figure [fig:pushidea]. & & 1 & & 4 & & 5 & & 10 2 & 7 & 13 3 & 9 & 15 8 11 12 14 & & 1 & & 4 & & 5 & & 10 2 & 7 & 13 3 & 9 & 15 8 & 11 & 12 & 14 [fig:pushidea] The sequence (*A*1, …, *A**i* + *b* − 1) has enough information to reconstruct *λ*. Starting with *j* = 1, do the following. 1. If ∣*A**j*∣ = 1, let *x* ∈ *A**j*. Then place *x* in the highest possible position in the last column. 2. If ∣*A**j*∣ > 1, then let *x**j* = min*A**j* and *y**j* = min(*A**j* \ {*x**j*}). Place *x**j* in row *b* − 1 column *j* and place *y**j* in row *b* column *j*. Place all remaining entries from *A**j*—in increasing order—at the top most available position(s) in the first column. 3. Increase the value of *j* by 1. If *j* < *i* + *b* − 1, repeat these steps. Otherwise, *λ* is filled and we are done. Let *x**j* and *y**j* are defined in step (2) of the preceding procedure. Pick an integer *j*ʹ minimally so that *j* < *j*ʹ and ∣*A**j*ʹ∣ > 1. Note that (*A*1, …, *A**i* + *b* − 1) is an ordered partition of [*a* + *b* + 2*i* − 2] so that *x**j* < *x**j* + 1 and *y**j* < *y**j*ʹ, whenever *y**j* and *y**j*ʹ exist. These conditions guarantee that the rows of *λ* increase left-to-right. Such a sequence is called a *nomincreasing partition*, as defined in. To this end, we define the following. Let $\lambda\in \Skyt(a,i,b)$ and define *A*1, …, *A**i* + *b* − 1 for *λ* as above. We define $\Nom(\lambda)$ to be the nomincreasing partition $$\Nom(\lambda)\coloneqq (A\_1,A\_2,\dots, A\_{i+b-1}).$$ Map $\operatorname{\operatorname{\mathtt{SD}}\xspace}$ ------------------------------------------------------ In this section, we define a map $\operatorname{\operatorname{\mathtt{SD}}\xspace}$ from $\Skyt(a,i,b)$ to $\Dyck(a+b+2i-2, i+1, b-2)$. For simplicity, let $n\coloneqq a+b+2i-2$. Note that given $\lambda \in \Skyt(a,i,b)$, *n* is the number of entries in *λ* and *t* is the number of entries in the first *b* − 1 rows of *λ*. Let $\lambda\in \Skyt(a,i,b)$. We define $\operatorname{\operatorname{\mathtt{SD}}\xspace}(\lambda)$, a certain lattice path, as follows. Let $\Nom(\lambda)=(A\_1,\dots, A\_{i+b-1})$. Let *x**j* denote the minimum of *A**j*. Let $a\_j \coloneqq \#A\_j$. Then let $\operatorname{\operatorname{\mathtt{SD}}\xspace}(\lambda)$ be the lattice path given by following string in $\{{\textsf{U}},{\textsf{D}}\}^{2n}$: $$\begin{aligned} {\textsf{U}}^{a\_1}{\textsf{D}}^{x\_2-x\_1}{\textsf{U}}^{a\_2}{\textsf{D}}^{x\_3-x\_2}\cdots {\textsf{U}}^{a\_{t-1}}{\textsf{D}}^{x\_{t}-x\_{t-1}}{\textsf{U}}^{a\_{t}}{\textsf{D}}^{n-x\_{t}+x\_1}.\label{eq:dyckstring}\end{aligned}$$ Given *λ* be a tableau in $\Skyt(a,i,b)$, $\operatorname{\operatorname{\mathtt{SD}}\xspace}(\lambda)$ is a Dyck path. In particular, the lattice path $\operatorname{\operatorname{\mathtt{SD}}\xspace}(\lambda)$ is an element of $\Dyck(n, i+1, b-2)$. Recall that given $\lambda \in \Skyt(a,i,b)$, we can define have $\Nom(\lambda)=(A\_1,\dots, A\_{i+b-1})$ where ∣*A**i* + *b* − 1∣ > 1. Recall that *x**j* = min*A**j* is an entry in the top *b* − 1 rows of *λ*. In the string given in, ${\textsf{U}}$ corresponds to an up step and ${\textsf{D}}$ correspond to a down step. Note that for any *k*, we have [*x*1, *x**k*] ⊆ *A*1 ∪ ⋯ ∪ *A**k*. Otherwise, there exists a *w* ∈ [*x*1, *x**k*] so that *w* ∈ *A**j* for some *j* > *k*. Then we have *w* ≥ *x**j* > *x**k* ≥ *w*, a contradiction. Thus, for *k* < *t* ∑*j* = 1*k*(*x**j* + 1 − *x**j*) = *x**k* + 1 − *x*1 ≤ ∑*j* = 1*k**a**j*. Also, ∑*j* = 1*t*(*x**j* + 1 − *x**j*) = *x**t* − *x*1 + *n* − *x**t* + *x*1 = *n*. Moreover, there are precisely *b* − 2 of the *a**j* so that *a**j* = 1, and there are *i* of the *a**j* so that *a**j* > 1. Consequently, the constructed Dyck path is an element of $\Dyck(n,i,b-2)$. For the skew Young tableau *λ* in $\Skyt(7,2,6)$ below, *x*1 = 1, *x*2 = 2, *x*3 = 4, *x*4 = 5, *x*5 = 7, *x*6 = 10,  and *x*7 = 12. As *x*2,  *x*5 and *x*7 are the entries in row 5, *y**i* is defined for *i* = 2, 5, 7. We have *y*2 = 3, *y*5 = 11, and *y*7 = 13. Thus, *A*1 = {1}, *A*2 = {2, 3, 6, 8, 9}, *A*3 = {4}, *A*4 = {5}, *A*5 = {7, 11}, *A*6 = {10}, and *A*7 = {12, 13, 14, 15}. .49 [scale=0.5, line width=1pt] (0,0) grid (1,-7); (1,0) grid (2,-2); (2,0) grid (3,-2); (2,4) grid (3,-2); (.5,-.5) node 2; (.5,-1.5) node 3; (.5,-2.5) node 6; (.5,-4.5) node 9; (.5,-3.5) node 8; (1.5,-.5) node 7; (1.5,-1.5) node 11; (.5,-5.5) node 14; (.5,-6.5) node 15; (2.5,-.5) node 12; (2.5,-1.5) node 13; (2.5,3.5) node 1; (2.5,2.5) node 4; (2.5,1.5) node 5; (2.5,.5) node 10; Thus $\operatorname{\operatorname{\mathtt{SD}}\xspace}(\lambda)$ is the Dyck path given by $${\textsf{U}}{\textsf{D}}{\textsf{U}}^3{\textsf{D}}^2{{}{\textsf{U}}{\textsf{D}}{\textsf{U}}{\textsf{D}}}{\textsf{D}}{\textsf{U}}^4{\textsf{D}}^3{{}{\textsf{U}}{\textsf{D}}}{\textsf{D}}{\textsf{U}}^4{\textsf{D}}^4.$$ Map $\operatorname{\operatorname{\mathtt{DS}}\xspace}$ ------------------------------------------------------ The reader can verify that they are indeed inverses. Given *P*, label the down-steps, left to right, in increasing order, from 1 to *n*. Next, use the label on the down-step at each peak as the label for the up-step at the same peak. Given a Dyck path *P* in $\Dyck(n,\ell,s)$, the tableau $\operatorname{\operatorname{\mathtt{DS}}\xspace}(P)$ is a skew Young tableau in $\Skyt(n-s-2\ell-2,\ell-1,{\textcolor{black}{s+2}})$. Given a Dyck path in $\Dyck(15,3,4)$, we first label the down-steps, left to right, in increasing order. [scale=0.5, line width=1pt] (-3,0)–(29,0); (-2,0) circle[radius=5pt]; (-2,0)–(-1,1); (-1,1) circle[radius=5pt]; (-1,1)–(0,0); (0,0) circle[radius=5pt]; in 0,1,...,2 (,)–(+1,+1); (+1,+1) circle[radius=5pt]; (3,3)–(4,2); (4,2) circle[radius=5pt]; (4,2)–(5,1); (5,1) circle[radius=5pt]; (5,1)–(6,2); (6,2) circle[radius=5pt]; (6,2)–(7,1); (7,1) circle[radius=5pt]; (7,1)–(8,2); (8,2) circle[radius=5pt]; (8,2)–(9,1); (9,1) circle[radius=5pt]; (9,1)–(10,0); (10,0) circle[radius=5pt]; (10,0)–(14,4); (11,1) circle[radius=5pt]; (12,2) circle[radius=5pt]; (13,3) circle[radius=5pt]; (14,4) circle[radius=5pt]; (14,4)–(17,1); (15,3) circle[radius=5pt]; (16,2) circle[radius=5pt]; (17,1) circle[radius=5pt]; (17,1)–(18,2); (18,2) circle[radius=5pt]; (18,2)–(20,0); (19,1) circle[radius=5pt]; (20,0) circle[radius=5pt]; (20,0)–(24,4); (21,1) circle[radius=5pt]; (22,2) circle[radius=5pt]; (23,3) circle[radius=5pt]; (24,4) circle[radius=5pt]; (24,4)–(28,0); (25,3) circle[radius=5pt]; (26,2) circle[radius=5pt]; (27,1) circle[radius=5pt]; (28,0) circle[radius=5pt]; (-.2,.8) node[color=red] 1; (3.8,2.8) node[color=red] 2; (4.8,1.8) node[color=red] 3; (6.8,1.8) node[color=red] 4; (8.8,1.8) node[color=red] 5; (9.8,.8) node[color=red] 6; (14.8,3.8) node[color=red] 7; (15.8,2.8) node[color=red] 8; (16.8,1.8) node[color=red] 9; (19,1.8) node[color=red] 10; (20,0.8) node[color=red] 11; (25.,3.8) node[color=red] 12; (26.,2.8) node[color=red] 13; (27.,1.8) node[color=red] 14; (28.,.8) node[color=red] 15; Then label the upstep of each peak: [scale=0.5, line width=1pt] (-3,0)–(29,0); (-2,0) circle[radius=5pt]; (-2,0)–(-1,1); (-1,1) circle[radius=5pt]; (-1,1)–(0,0); (0,0) circle[radius=5pt]; in 0,1,...,2 (,)–(+1,+1); (+1,+1) circle[radius=5pt]; (3,3)–(4,2); (4,2) circle[radius=5pt]; (4,2)–(5,1); (5,1) circle[radius=5pt]; (5,1)–(6,2); (6,2) circle[radius=5pt]; (6,2)–(7,1); (7,1) circle[radius=5pt]; (7,1)–(8,2); (8,2) circle[radius=5pt]; (8,2)–(9,1); (9,1) circle[radius=5pt]; (9,1)–(10,0); (10,0) circle[radius=5pt]; (10,0)–(14,4); (11,1) circle[radius=5pt]; (12,2) circle[radius=5pt]; (13,3) circle[radius=5pt]; (14,4) circle[radius=5pt]; (14,4)–(17,1); (15,3) circle[radius=5pt]; (16,2) circle[radius=5pt]; (17,1) circle[radius=5pt]; (17,1)–(18,2); (18,2) circle[radius=5pt]; (18,2)–(20,0); (19,1) circle[radius=5pt]; (20,0) circle[radius=5pt]; (20,0)–(24,4); (21,1) circle[radius=5pt]; (22,2) circle[radius=5pt]; (23,3) circle[radius=5pt]; (24,4) circle[radius=5pt]; (24,4)–(28,0); (25,3) circle[radius=5pt]; (26,2) circle[radius=5pt]; (27,1) circle[radius=5pt]; (28,0) circle[radius=5pt]; (-1.8,.8) node[color=blue] 1; (2.2,2.8) node[color=red] 2; (5.2,1.8) node[color=blue] 4; (7.2,1.8) node[color=blue] 5; (13.2,3.8) node[color=red] 7; (17.,1.8) node[color=blue] 10; (23.,3.8) node[color=red] 12; [scale=0.5, line width=1pt] (-3,0)–(29,0); (-2,0) circle[radius=5pt]; (-2,0)–(-1,1); (-1,1) circle[radius=5pt]; (-1,1)–(0,0); (0,0) circle[radius=5pt]; in 0,1,...,2 (,)–(+1,+1); (+1,+1) circle[radius=5pt]; (3,3)–(4,2); (4,2) circle[radius=5pt]; (4,2)–(5,1); (5,1) circle[radius=5pt]; (5,1)–(6,2); (6,2) circle[radius=5pt]; (6,2)–(7,1); (7,1) circle[radius=5pt]; (7,1)–(8,2); (8,2) circle[radius=5pt]; (8,2)–(9,1); (9,1) circle[radius=5pt]; (9,1)–(10,0); (10,0) circle[radius=5pt]; (10,0)–(14,4); (11,1) circle[radius=5pt]; (12,2) circle[radius=5pt]; (13,3) circle[radius=5pt]; (14,4) circle[radius=5pt]; (14,4)–(17,1); (15,3) circle[radius=5pt]; (16,2) circle[radius=5pt]; (17,1) circle[radius=5pt]; (17,1)–(18,2); (18,2) circle[radius=5pt]; (18,2)–(20,0); (19,1) circle[radius=5pt]; (20,0) circle[radius=5pt]; (20,0)–(24,4); (21,1) circle[radius=5pt]; (22,2) circle[radius=5pt]; (23,3) circle[radius=5pt]; (24,4) circle[radius=5pt]; (24,4)–(28,0); (25,3) circle[radius=5pt]; (26,2) circle[radius=5pt]; (27,1) circle[radius=5pt]; (28,0) circle[radius=5pt]; (-1.8,.8) node[color=blue] 1; (2.2,2.8) node[color=red] 2; (1.2,1.8) node 3; (0.2,0.8) node 6; (5.2,1.8) node[color=blue] 4; (7.2,1.8) node[color=blue] 5; (13.2,3.8) node[color=red] 7; (12.,2.8) node 8; (11.,1.8) node 9; (10.,.8) node 11; (17.,1.8) node[color=blue] 10; (23.,3.8) node[color=red] 12; (22.,2.8) node 13; (21.,1.8) node 14; (20.,.8) node 15; Thus, we have *A*1 = {1}, *A*2 = {2, 3, 6}, *A*3 = {4}, *A*4 = {5}, *A*5 = {7, 8, 9, 11}, *A*6 = {10}, and *A*7 = {12, 13, 14, 15}. The map $\operatorname{\operatorname{\mathtt{DT}}\xspace}$ gives the tableau in Figure [fig:dycktoskew]. [scale=0.45, line width=1pt] (0,0) grid (1,-7); (1,0) grid (2,-2); (2,0) grid (3,-2); (2,4) grid (3,-2); (.5,-.5) node[color=red] 2; (.5,-1.5) node 3; (.5,-2.5) node 6; (.5,-4.5) node 11; (.5,-3.5) node 9; (1.5,-.5) node[color=red] 7; (1.5,-1.5) node 8; (.5,-5.5) node 14; (.5,-6.5) node 15; (2.5,-.5) node[color=red] 12; (2.5,-1.5) node 13; (2.5,3.5) node[color=blue] 1; (2.5,2.5) node[color=blue] 4; (2.5,1.5) node[color=blue] 5; (2.5,.5) node[color=blue] 10; [fig:dycktoskew] Map $\operatorname{\operatorname{\mathtt{DT}}\xspace}$ ------------------------------------------------------ A Dyck path in $\Dyck(n,\ell,s)$ has the form $${\textsf{U}}^{u\_1}{\textsf{D}}^{d\_1}{\textsf{U}}^{u\_2}{\textsf{D}}^{d\_2}\cdots {\textsf{U}}^{u\_{s+\ell}}{\textsf{D}}^{d\_{s+\ell}},$$ where *s* is the number of singletons, ℓ is the number of long ascents, and the *u**i* and *d**i* are positive integers. Recall that the triangulation is determined by its fan decomposition. Let *F**j* be a fan with *u**j* triangles. Then $\operatorname{\operatorname{\mathtt{DT}}\xspace}(P)$ is given by the fan decomposition If *P* is a Dyck path in $\Dyck(n,\ell,s)$, then $\operatorname{\operatorname{\mathtt{DT}}\xspace}(P)$ is a triangulation in $\Tri(n+2,\ell,s)$. Given $P\in \Dyck(n,\ell,s)$, note that that the number of triangles in $\operatorname{\operatorname{\mathtt{DT}}\xspace}(P)$ is given by sum of sizes of *F**j*: ∑*j* = 1*s* + ℓ*u**j* = *n* As $\operatorname{\operatorname{\mathtt{DT}}\xspace}(P)$ has *n* triangles, the boundary must have *n* + 2 edges. Also note that since there is no singleton after the last long ascent in our Dyck path, the last fan $\cF(T)$ will not be a singleton fan. Thus, we have constructed a triangulation in $\Tri(n+2,\ell,s)$, where the vertices are labeled as follows: label the origin of *F*1 as 1, and then label the remaining vertices from 2 to *n* in clock-wise order by starting at 1 and traveling along the boundary of the triangulation. Consider the path below. [scale=0.7, line width=1pt] (-3,0)–(19,0); (-2,0) circle[radius=5pt]; (-2,0)–(-1,1); (-1,1) circle[radius=5pt]; (-1,1)–(0,0); (0,0) circle[radius=5pt]; in 0,1,...,2 (,)–(+1,+1); (+1,+1) circle[radius=5pt]; (3,3)–(4,2); (4,2) circle[radius=5pt]; (4,2)–(5,1); (5,1) circle[radius=5pt]; (5,1)–(6,2); (6,2) circle[radius=5pt]; (6,2)–(7,1); (7,1) circle[radius=5pt]; (7,1)–(8,2); (8,2) circle[radius=5pt]; (8,2)–(9,1); (9,1) circle[radius=5pt]; (9,1)–(10,0); (10,0) circle[radius=5pt]; (10,0)–(14,4); (11,1) circle[radius=5pt]; (12,2) circle[radius=5pt]; (13,3) circle[radius=5pt]; (14,4) circle[radius=5pt]; (14,4)–(18,0); (15,3) circle[radius=5pt]; (16,2) circle[radius=5pt]; (17,1) circle[radius=5pt]; (18,0) circle[radius=5pt]; Associated to this are five fans, given below. .15 (a); in 1,...,3at (a.corner ) ; in 1,...,3 (a.corner 2) – (a.corner ) ; (a.corner 3) circle (1.pt); *F*1 .15 (a); (a.corner 1)–(a.corner 2)–(a.corner 3)–(a.corner 4)–(a.corner 5)–(a.corner 1); (a); in 1,...,5at (a.corner ) ; in 1,...,5 (a.corner 2) – (a.corner ) ; (a.corner 2) circle (1.pt); *F*2 .15 (a); (a.corner 1)–(a.corner 2)–(a.corner 3)–(a.corner 1); (a); in 1,...,3at (a.corner ) ; in 1,...,3 (a.corner 2) – (a.corner ) ; (a.corner 3) circle (1.pt); *F*3 .15 (a); (a.corner 1)–(a.corner 2)–(a.corner 3)–(a.corner 1); (a); in 1,...,3at (a.corner ) ; in 1,...,3 (a.corner 2) – (a.corner ) ; (a.corner 3) circle (1.pt); *F*4 .15 (a); (a.corner 1)–(a.corner 2)–(a.corner 3)–(a.corner 4)–(a.corner 5)–(a.corner 6)–(a.corner 1); (a); in 1,...,6at (a.corner ) ; in 1,...,6 (a.corner 2) – (a.corner ) ; (a.corner 2) circle (1.pt); *F*5 Below are the subsequent steps of attaching *F**j*. .2 (a); (a.corner 1)–(a.corner 2)–(a.corner 3)–(a.corner 4)–(a.corner 5)–(a.corner 1); (a); in 1,...,5at (a.corner ) ; in 1,...,5 (a.corner 2) – (a.corner ) ; at (-.38,.51) (b); in 1,...,3at (b.corner ) ; in 1,...,3 (b.corner 2) – (b.corner ) ; (a.corner 2) circle (1.pt); (b.corner 1) circle (1.pt); .2 (b); (b.corner 1)–(b.corner 2)–(b.corner 3)–(b.corner 4)–(b.corner 5)–(b.corner 1); (b); in 1,...,5at (a.corner ) ; in 1,...,5 (b.corner 2) – (b.corner ) ; at (-.38,.51) (a); in 1,...,3at (a.corner ) ; in 1,...,3 (a.corner 2) – (a.corner ) ; at (.61,-.2) (c); (c.corner 1)–(c.corner 2)–(c.corner 3)–(c.corner 1); at (.61,-.2) (c); in 1,...,3at (c.corner ) ; in 1,...,3 (c.corner 2) – (c.corner ) ; (a.corner 1) circle (1.pt); (b.corner 2) circle (1.pt); (c.corner 2) circle (1.pt); .2 (b); (b.corner 1)–(b.corner 2)–(b.corner 3)–(b.corner 4)–(b.corner 5)–(b.corner 1); (b); in 1,...,5at (b.corner ) ; in 1,...,5 (b.corner 2) – (b.corner ) ; at (-.38,.51) (a); in 1,...,3at (a.corner ) ; in 1,...,3 (a.corner 2) – (a.corner ) ; at (.61,-.2) (c); at (.61,-.2) (c); (c.corner 1)–(c.corner 2)–(c.corner 3)–(c.corner 1); at (.61,-.2) (c); in 1,...,3at (c.corner ) ; in 1,...,3 (c.corner 2) – (c.corner ) ; at (.89,.05) (d); (d.corner 1)–(d.corner 2)–(d.corner 3)–(d.corner 1); at (.89,.05) (d); in 1,...,3at (d.corner ) ; in 1,...,3 (d.corner 2) – (d.corner ) ; (a.corner 1) circle (.9pt); (b.corner 2) circle (1.pt); (c.corner 2) circle (1.pt); (d.corner 1) circle (1.pt); .2 (b); (b.corner 1)–(b.corner 2)–(b.corner 3)–(b.corner 4)–(b.corner 5)–(b.corner 1); (b); in 1,...,5at (b.corner ) ; in 1,...,5 (b.corner 2) – (b.corner ) ; at (-.38,.51) (a); in 1,...,3at (a.corner ) ; in 1,...,3 (a.corner 2) – (a.corner ) ; at (.61,-.2) (c); in 1,...,3at (c.corner ) ; in 1,...,3 (c.corner 2) – (c.corner ) ; at (.61,-.2) (c); (c.corner 1)–(c.corner 2)–(c.corner 3)–(c.corner 1); at (.61,-.2) (c); at (.89,.05) (d); in 1,...,3at (d.corner ) ; in 1,...,3 (d.corner 2) – (d.corner ) ; (d.corner 1)–(d.corner 2)–(d.corner 3)–(d.corner 1); at (.89,.05) (d); at (.6,.8) (e); (e.corner 1)–(e.corner 2)–(e.corner 3)–(e.corner 4)–(e.corner 5)–(e.corner 6)–(e.corner 1); at (.6,.8) (e); in 1,...,6at (e.corner ) ; in 1,...,6 (e.corner 2) – (e.corner ) ; (a.corner 1) circle (1.pt); (b.corner 2) circle (1.pt); (c.corner 2) circle (1.pt); (d.corner 1) circle (1.pt); (e.corner 2) circle (1.pt); Redrawn with vertex labels, we have the following, where the vertex labeled 1 is the origin of *F*1, the vertex labeled 2 is the origin of *F*2, and so on. [scale=1] at (.74,.76) (e); (e.corner 1)–(e.corner 2)–(e.corner 12)–(e.corner 1); (e.corner 1)–(e.corner 2)–(e.corner 12)–(e.corner 1); (e.corner 2)–(e.corner 12)–(e.corner 7)–(e.corner 4)–(e.corner 3)–(e.corner 2); (e.corner 2)–(e.corner 12)–(e.corner 7)–(e.corner 4)–(e.corner 3)–(e.corner 2); (e.corner 4)–(e.corner 5)–(e.corner 7)–(e.corner 4); (e.corner 4)–(e.corner 5)–(e.corner 7)–(e.corner 4); (e.corner 6)–(e.corner 5)–(e.corner 7)–(e.corner 6); (e.corner 6)–(e.corner 5)–(e.corner 7)–(e.corner 6); (e.corner 7)–(e.corner 8)–(e.corner 9)–(e.corner 10)–(e.corner 11)–(e.corner 12)–(e.corner 7); (e.corner 7)–(e.corner 8)–(e.corner 9)–(e.corner 10)–(e.corner 11)–(e.corner 12)–(e.corner 7); at (.74,.76) (e); in 1,...,12at (e.corner ) ; at (e.corner 1) 1; at (e.corner 2) 2; at (e.corner 3) 3; at (e.corner 4) 4; at (e.corner 5) 5; at (e.corner 6) 6; at (e.corner 7) 7; at (e.corner 8) 8; at (e.corner 9) 9; at (e.corner 10) 10; at (e.corner 11) 11; at (e.corner 12) 12; (e.corner 2) – (e.corner 12) ; (e.corner 2) – (e.corner 7) ; (e.corner 2) – (e.corner 4) ; (e.corner 4) – (e.corner 7) ; (e.corner 5) – (e.corner 7) ; (e.corner 6) – (e.corner 7) ; (e.corner 7) – (e.corner 10) ; (e.corner 7) – (e.corner 9) ; (e.corner 7) – (e.corner 11) ; (e.corner 7) – (e.corner 12) ; (e.corner 1) circle (1.25pt); (e.corner 2) circle (1.25pt); (e.corner 4) circle (1.25pt); (e.corner 5) circle (1.25pt); (e.corner 7) circle (1.25pt); Map $\operatorname{\operatorname{\mathtt{TD}}\xspace}$ ------------------------------------------------------ In this section, we construct a map $\operatorname{\operatorname{\mathtt{TD}}\xspace}$ from $\Tri(n,t,s)$ to $\Dyck(n-2,t,s)$ Given $T\in \Tri(n,t,s)$, we define $\operatorname{\operatorname{\mathtt{TD}}\xspace}(T)$, a certain lattice path, as follows. Consider the fan decomposition $(\cF(T), \d(T)) = ((F\_1,F\_2,\ldots, F\_{t+s}), (d\_1,\ldots,d\_{t+s-1})$. Let *x**j* denote the origin of *F**j* and let *d**t* + *s* be the number of boundary edges from *x**t* + *s* to *x*1 minus 2. Now we identify the *x**j* with its corresponding vertex in *T*. Letting *u**j* be the number of triangles in *F**j*, define $\operatorname{\operatorname{\mathtt{TD}}\xspace}(T)$ to be $${\textsf{U}}^{u\_1}{\textsf{D}}^{d\_1}{\textsf{U}}^{u\_2}{\textsf{D}}^{d\_2}\dots {\textsf{U}}^{u\_{t+s}}{\textsf{D}}^{d\_{t+s}}.$$ If $T\in \Tri(n,t,s)$, then the string $\operatorname{\operatorname{\mathtt{TD}}\xspace}(T)$ is a dyck path in $\Dyck(n-2,t,s)$. We claim $\operatorname{\operatorname{\mathtt{TD}}\xspace}(T)$ is a valid Dyck path. First, note that given any fan decomposition of a triangulation on *n* vertices, the largest label for an origin is *n* − 1. Thus, the largest number of boundary edges between the first and last origin (travelling counter-clockwise) is *n* − 2, which is precisely the number of triangles in the triangulation. Thus, by studying the triangulation *T**k* given by F(*T**k*) = (*F*1, *F*2, …, *F**k*) and the origins *x*1, …, *x**k*, we see that ∑*j* = 1*k**d**j* ≤ the number of triangles in $T\_k$ = ∑*j* = 1*k**u**j*. Also, note that $$\begin{aligned} \displaystyle \sum\_{j=1}^{t+s}u\_j&=\text{number of triangles in $T$}\\ &=n-2\\ &=\text{number of boundary edges of $T$ minus 2}\\ &=\sum\_{j=1}^{t+s}d\_j.\end{aligned}$$ Hence this path is a Dyck path with semi length *n* − 2. Finally, note that *u**t* + *s* = #*F**t* + *s* ≥ 2 since *F**t* + *s* is the last fan in F(*T*). Hence, we have constructed a path in $\Dyck(n-2,t,s)$. Suppose we start with the following triangulation. [scale=1] at (.74,.76) (e); in 1,...,12at (e.corner ) ; at (e.corner 1) 1; at (e.corner 2) 2; at (e.corner 3) 3; at (e.corner 4) 4; at (e.corner 5) 5; at (e.corner 6) 6; at (e.corner 7) 7; at (e.corner 8) 8; at (e.corner 9) 9; at (e.corner 10) 10; at (e.corner 11) 11; at (e.corner 12) 12; (e.corner 2) – (e.corner 12) ; (e.corner 2) – (e.corner 7) ; (e.corner 2) – (e.corner 4) ; (e.corner 4) – (e.corner 7) ; (e.corner 5) – (e.corner 7) ; (e.corner 6) – (e.corner 7) ; (e.corner 7) – (e.corner 10) ; (e.corner 7) – (e.corner 9) ; (e.corner 7) – (e.corner 11) ; (e.corner 7) – (e.corner 12) ; Below, we identify the origins of the fans in F = (*F*1, *F*2, *F*3, *F*4, *F*5) by using larger circles for such vertices. [scale=.9] at (.74,.76) (e); (e.corner 1)–(e.corner 2)–(e.corner 12)–(e.corner 1); (e.corner 1)–(e.corner 2)–(e.corner 12)–(e.corner 1); (e.corner 2)–(e.corner 12)–(e.corner 7)–(e.corner 4)–(e.corner 3)–(e.corner 2); (e.corner 2)–(e.corner 12)–(e.corner 7)–(e.corner 4)–(e.corner 3)–(e.corner 2); (e.corner 4)–(e.corner 5)–(e.corner 7)–(e.corner 4); (e.corner 4)–(e.corner 5)–(e.corner 7)–(e.corner 4); (e.corner 6)–(e.corner 5)–(e.corner 7)–(e.corner 6); (e.corner 6)–(e.corner 5)–(e.corner 7)–(e.corner 6); (e.corner 7)–(e.corner 8)–(e.corner 9)–(e.corner 10)–(e.corner 11)–(e.corner 12)–(e.corner 7); (e.corner 7)–(e.corner 8)–(e.corner 9)–(e.corner 10)–(e.corner 11)–(e.corner 12)–(e.corner 7); at (.74,.76) (e); at (e.corner 1) 1; at (e.corner 2) 2; at (e.corner 3) 3; at (e.corner 4) 4; at (e.corner 5) 5; at (e.corner 6) 6; at (e.corner 7) 7; at (e.corner 8) 8; at (e.corner 9) 9; at (e.corner 10) 10; at (e.corner 11) 11; at (e.corner 12) 12; in 1,...,12at (e.corner ) ; (e.corner 2) – (e.corner 12) ; (e.corner 2) – (e.corner 7) ; (e.corner 2) – (e.corner 4) ; (e.corner 4) – (e.corner 7) ; (e.corner 5) – (e.corner 7) ; (e.corner 6) – (e.corner 7) ; (e.corner 7) – (e.corner 10) ; (e.corner 7) – (e.corner 9) ; (e.corner 7) – (e.corner 11) ; (e.corner 7) – (e.corner 12) ; (e.corner 1) circle (1.25pt); (e.corner 2) circle (1.25pt); (e.corner 4) circle (1.25pt); (e.corner 5) circle (1.25pt); (e.corner 7) circle (1.25pt); Hence, *F*1, *F*3, and *F*4 are all fans of size 1, *F*2 is a fan of size 3, and *F*5 is a fan of size 4. The origins of these fans are identified in the image below. Consequently, *u*1 = *u*3 = *u*4 = 1, *u*2 = 3, and *u*5 = 4. Also note that *d*1 = *d*3 and *d*2 = *d*4 = 2. Hence we have the Dyck path $${\textsf{U}}{\textsf{D}}{\textsf{U}}^3{\textsf{D}}^2{\textsf{U}}{\textsf{D}}{\textsf{U}}{\textsf{D}}^2{\textsf{U}}^4{\textsf{D}}^4,$$ which visualized gives the following path. [scale=0.6, line width=1pt] (-3,0)–(19,0); (-2,0) circle[radius=5pt]; (-2,0)–(-1,1); (-1,1) circle[radius=5pt]; (-1,1)–(0,0); (0,0) circle[radius=5pt]; in 0,1,...,2 (,)–(+1,+1); (+1,+1) circle[radius=5pt]; (3,3)–(4,2); (4,2) circle[radius=5pt]; (4,2)–(5,1); (5,1) circle[radius=5pt]; (5,1)–(6,2); (6,2) circle[radius=5pt]; (6,2)–(7,1); (7,1) circle[radius=5pt]; (7,1)–(8,2); (8,2) circle[radius=5pt]; (8,2)–(9,1); (9,1) circle[radius=5pt]; (9,1)–(10,0); (10,0) circle[radius=5pt]; (10,0)–(14,4); (11,1) circle[radius=5pt]; (12,2) circle[radius=5pt]; (13,3) circle[radius=5pt]; (14,4) circle[radius=5pt]; (14,4)–(18,0); (15,3) circle[radius=5pt]; (16,2) circle[radius=5pt]; (17,1) circle[radius=5pt]; (18,0) circle[radius=5pt]; The following map is an explicit interpretation of the map $\operatorname{\operatorname{\mathtt{SD}}\xspace}$ composed with $\operatorname{\operatorname{\mathtt{DT}}\xspace}$ without needing to bring up Dyck paths. Map $\operatorname{\operatorname{\mathtt{ST}}\xspace}$ ------------------------------------------------------ In this section, we construct a map $\operatorname{\operatorname{\mathtt{ST}}\xspace}$ from $\Skyt(a,i,b)$ to $\Tri(a+b+2i,i+1,b-2)$. Given a tableau $\lambda\in \Skyt(a,i,b)$ Let $\lambda\in \Skyt(a,i,b)$. Then $\operatorname{\operatorname{\mathtt{ST}}\xspace}(\lambda)\in \Tri(a+b+2i,i+1,b-2)$. Recall triangulations are uniquely determined by their fan decomposition, and thus $\operatorname{\operatorname{\mathtt{ST}}\xspace}(\lambda)$ is guaranteed to be a triangulation. Note that the number of triangles in this triangulation is precisely the number of entries of *λ*, which is *a* + *b* + 2*i* − 2. Hence, the boundary of our constructed triangulation has *a* + *b* + 2*i* edges. Recall that among *A*1, …, *A**i* + *b* − 1, precisely *i* + 1 have cardinality larger than 1, and precisely *b* − 2 have cardinality exactly 1. Thus, our proposed fan decomposition for $\operatorname{\operatorname{\mathtt{ST}}\xspace}(\lambda)$ has precisely *i* + 1 non singular fans and *b* − 2 singular fans. Finally, note that by construction, *f**i* + *b* − 1 > 1 since ∣*A**i* + *b* − 1∣ > 1 hat is, the last fan appearing in F is not singular. All together, this verifies that we have constructed a triangulation in $\Tri(a+b+2i, i+1, b-2)$. Consider the following choice for *λ*. & & 1 & & 4 2 &5 & 7 3&6&9 8&& 10&& [scale=1] at (.74,.76) (e); (e.corner 1)–(e.corner 2)–(e.corner 12)–(e.corner 1); (e.corner 1)–(e.corner 2)–(e.corner 12)–(e.corner 1); (e.corner 2)–(e.corner 12)–(e.corner 4)–(e.corner 3)–(e.corner 2); (e.corner 2)–(e.corner 12)–(e.corner 4)–(e.corner 3)–(e.corner 2); (e.corner 4)–(e.corner 5)–(e.corner 12)–(e.corner 4); (e.corner 4)–(e.corner 5)–(e.corner 12)–(e.corner 4); (e.corner 6)–(e.corner 5)–(e.corner 12)–(e.corner 11)–(e.corner 7)–(e.corner 6); (e.corner 6)–(e.corner 5)–(e.corner 12)–(e.corner 11)–(e.corner 7)–(e.corner 6); (e.corner 7)–(e.corner 8)–(e.corner 9)–(e.corner 10)–(e.corner 11)–(e.corner 7); (e.corner 7)–(e.corner 8)–(e.corner 9)–(e.corner 10)–(e.corner 11)–(e.corner 7); at (.74,.76) (e); in 1,...,12at (e.corner ) ; in 1,...,3at (e.corner ) ; in 4,...,7at (e.corner ) ; in 8,...,12at (e.corner ) ; (e.corner 2) – (e.corner 12) ; (e.corner 2) – (e.corner 4) ; (e.corner 4) – (e.corner 12) ; (e.corner 5) – (e.corner 12) ; (e.corner 5) – (e.corner 7) ; (e.corner 6) – (e.corner 7) ; (e.corner 7) – (e.corner 10) ; (e.corner 7) – (e.corner 9) ; (e.corner 5) – (e.corner 11) ; (e.corner 7) – (e.corner 11) ; (e.corner 1) circle (1.25pt); (e.corner 2) circle (1.25pt); (e.corner 4) circle (1.25pt); (e.corner 5) circle (1.25pt); (e.corner 7) circle (1.25pt); The following map is an explicit interpretation of the map $\operatorname{\operatorname{\mathtt{TD}}\xspace}$ composed with $\operatorname{\operatorname{\mathtt{DS}}\xspace}$ without needing to bring up Dyck paths. Map $\operatorname{\operatorname{\mathtt{TS}}\xspace}$ ------------------------------------------------------ In this section, we construct a map $\operatorname{\operatorname{\mathtt{TD}}\xspace}$ from $\Tri(n,t,s)$ to $\Skyt(n-2-s-2t,t-1,s+2)$ Let $T\in \Tri(n,t,s)$. We define $\operatorname{\operatorname{\mathtt{TS}}\xspace}(T)$, a skew tableau, as follows. We label the triangles in the triangulation in the following way: For each fan, label a triangle with the label of the fan’s origin. Then, greedily label other triangles with the unused vertices in the order that fans appear in F(*T*). (Triangles within a single fan need not be labeled in any particular order.) Let *A**j* be the labels appearing in the fan *F**j*. Let $\operatorname{\operatorname{\mathtt{TS}}\xspace}(T)$ be the skew diagram Given $T\in \Tri(n,t,s)$, we have $\operatorname{\operatorname{\mathtt{TS}}\xspace}(T) \in \Skyt(n-2-s-2t,t-1,s+2)$. Note that the number of *A**j* so that ∣*A**j*∣ = 1 is precisely the number of singleton fans in *T*, which is *s*. Also, the number of *A**j* so that ∣*A**j*∣ > 1 is *t*. Thus, $\operatorname{\operatorname{\mathtt{TS}}\xspace}(T)\in \Skyt(n-2-s-2t,t-1,s+2)$. For example, given the triangulation in $\Tri(12,3,2)$ below, label the vertices in counter-clockwise order. [scale=1] at (.74,.76) (e); (e.corner 1)–(e.corner 2)–(e.corner 12)–(e.corner 1); (e.corner 1)–(e.corner 2)–(e.corner 12)–(e.corner 1); (e.corner 2)–(e.corner 12)–(e.corner 4)–(e.corner 3)–(e.corner 2); (e.corner 2)–(e.corner 12)–(e.corner 4)–(e.corner 3)–(e.corner 2); (e.corner 4)–(e.corner 5)–(e.corner 12)–(e.corner 4); (e.corner 4)–(e.corner 5)–(e.corner 12)–(e.corner 4); (e.corner 6)–(e.corner 5)–(e.corner 12)–(e.corner 11)–(e.corner 7)–(e.corner 6); (e.corner 6)–(e.corner 5)–(e.corner 12)–(e.corner 11)–(e.corner 7)–(e.corner 6); (e.corner 7)–(e.corner 8)–(e.corner 9)–(e.corner 10)–(e.corner 11)–(e.corner 7); (e.corner 7)–(e.corner 8)–(e.corner 9)–(e.corner 10)–(e.corner 11)–(e.corner 7); at (.74,.76) (e); in 1,...,12at (e.corner ) ; in 1,...,3at (e.corner ) ; in 4,...,7at (e.corner ) ; in 8,...,12at (e.corner ) ; (e.corner 2) – (e.corner 12) ; (e.corner 2) – (e.corner 4) ; (e.corner 4) – (e.corner 12) ; (e.corner 5) – (e.corner 12) ; (e.corner 5) – (e.corner 7) ; (e.corner 6) – (e.corner 7) ; (e.corner 7) – (e.corner 10) ; (e.corner 7) – (e.corner 9) ; (e.corner 5) – (e.corner 11) ; (e.corner 7) – (e.corner 11) ; (e.corner 1) circle (1.25pt); (e.corner 2) circle (1.25pt); (e.corner 4) circle (1.25pt); (e.corner 5) circle (1.25pt); (e.corner 7) circle (1.25pt); We now label a single triangle in each fan with the label of the corresponding origin. [scale=.9] at (.74,.76) (e); (e.corner 1)–(e.corner 2)–(e.corner 12)–(e.corner 1); (e.corner 1)–(e.corner 2)–(e.corner 12)–(e.corner 1); (e.corner 2)–(e.corner 12)–(e.corner 4)–(e.corner 3)–(e.corner 2); (e.corner 2)–(e.corner 12)–(e.corner 4)–(e.corner 3)–(e.corner 2); (e.corner 4)–(e.corner 5)–(e.corner 12)–(e.corner 4); (e.corner 4)–(e.corner 5)–(e.corner 12)–(e.corner 4); (e.corner 6)–(e.corner 5)–(e.corner 12)–(e.corner 11)–(e.corner 7)–(e.corner 6); (e.corner 6)–(e.corner 5)–(e.corner 12)–(e.corner 11)–(e.corner 7)–(e.corner 6); (e.corner 7)–(e.corner 8)–(e.corner 9)–(e.corner 10)–(e.corner 11)–(e.corner 7); (e.corner 7)–(e.corner 8)–(e.corner 9)–(e.corner 10)–(e.corner 11)–(e.corner 7); at (.74,.76) (e); in 1,...,12at (e.corner ) ; at (e.corner 1) 1; at (e.corner 12) 12; in 2,...,3at (e.corner ) ; in 4,...,7at (e.corner ) ; in 8,...,11at (e.corner ) ; (e.corner 2) – (e.corner 12) ; (e.corner 2) – (e.corner 4) ; (e.corner 4) – (e.corner 12) ; (e.corner 5) – (e.corner 12) ; (e.corner 5) – (e.corner 7) ; (e.corner 6) – (e.corner 7) ; (e.corner 7) – (e.corner 10) ; (e.corner 7) – (e.corner 9) ; (e.corner 5) – (e.corner 11) ; (e.corner 7) – (e.corner 11) ; at (0.1,4.3) **1**; at (-1,3) **2**; at (-1.3,.5) **4**; at (0.8,1.5) **5**; at (3.5,1.2) **7**; (e.corner 1) circle (1.25pt); (e.corner 2) circle (1.25pt); (e.corner 4) circle (1.25pt); (e.corner 5) circle (1.25pt); (e.corner 7) circle (1.25pt); We now label the remaining triangles greedily in the order that fans appear in F(*T*). [scale=.9] at (.74,.76) (e); (e.corner 1)–(e.corner 2)–(e.corner 12)–(e.corner 1); (e.corner 1)–(e.corner 2)–(e.corner 12)–(e.corner 1); (e.corner 2)–(e.corner 12)–(e.corner 4)–(e.corner 3)–(e.corner 2); (e.corner 2)–(e.corner 12)–(e.corner 4)–(e.corner 3)–(e.corner 2); (e.corner 4)–(e.corner 5)–(e.corner 12)–(e.corner 4); (e.corner 4)–(e.corner 5)–(e.corner 12)–(e.corner 4); (e.corner 6)–(e.corner 5)–(e.corner 12)–(e.corner 11)–(e.corner 7)–(e.corner 6); (e.corner 6)–(e.corner 5)–(e.corner 12)–(e.corner 11)–(e.corner 7)–(e.corner 6); (e.corner 7)–(e.corner 8)–(e.corner 9)–(e.corner 10)–(e.corner 11)–(e.corner 7); (e.corner 7)–(e.corner 8)–(e.corner 9)–(e.corner 10)–(e.corner 11)–(e.corner 7); at (.74,.76) (e); in 1,...,12at (e.corner ) ; at (e.corner 1) 1; at (e.corner 12) 12; in 2,...,3at (e.corner ) ; in 4,...,7at (e.corner ) ; in 8,...,11at (e.corner ) ; (e.corner 2) – (e.corner 12) ; (e.corner 2) – (e.corner 4) ; (e.corner 4) – (e.corner 12) ; (e.corner 5) – (e.corner 12) ; (e.corner 5) – (e.corner 7) ; (e.corner 6) – (e.corner 7) ; (e.corner 7) – (e.corner 10) ; (e.corner 7) – (e.corner 9) ; (e.corner 5) – (e.corner 11) ; (e.corner 7) – (e.corner 11) ; at (0.1,4.3) **1**; at (-1,3) **2**; at (-2.7,1.9) **3**; at (-1.3,.5) **4**; at (0.8,1.5) **5**; at (0.8,-.9) **6**; at (-0.2,-2.7) **8**; at (3.5,1.2) **7**; at (3.5,-.7) **9**; at (3.2,-1.9) **10**; (e.corner 1) circle (1.25pt); (e.corner 2) circle (1.25pt); (e.corner 4) circle (1.25pt); (e.corner 5) circle (1.25pt); (e.corner 7) circle (1.25pt); & & 1 & & 4 2 &5 & 7 3&6&9 8&& 10&& Dissections and Triangulations ============================== In this section, we construct a combinatorial bijection between $\Dis(n+2,i)$ and $\Tri(n+i+1,i+1,0)$ as mentioned in the discussion following Corollary [cor:disecbij]. [def:tritodis] Let *T* be a triangulation of an (*n* + *i* + 3)-gon with *i* + 1 non-singular fans and no singular fans.. Remove the internal diagonals of *F**j* in *T* for each *j*, leaving us with exactly *i* diagonals in *T*. Let *x**j* be the origin of *F**j*. For each vertex *x**j*, let *y**j* be the immediate vertex that follows *x**j* counterclockwise. Note that it is possible to have *y**j* = *x**j* + 1. Also, we always have that the *y**j* is a vertex in *F**j*, since (*x**j*, *y**j*) must bound a triangle, and by definition this triangle is a part of *F**j*. Contract each edge (*x**j*, *y**j*), creating an (*n* + 2)-gon. Note that the vertex labeled 1 will always be the origin of *F*1, so consequently we always contract (1, 2). Let 1 be the label of the new vertex after contracting this edge. Relabel the vertices in increasing counterclockwise order, starting at the original vertex 1. Since no fan of *T* was singular, it must be that the contractions preserved all *i* diagonals, giving us a dissection in $\Dis(n+2,i)$. [def:distotri] For the reverse map, let *D* be a dissection of an (*n* + 2)-gon with *i* chords, say *c*1, *c*2, …, *c**i*. We assume, as with triangulations, that the vertices of *D* are already labeled with the numbers 1 through *n* + 2. Let 1ʹ be a new vertex so that (1ʹ, 1) is an edge and (1ʹ, 2) is an edge. Delete the edge (1, 2). If 1 was incident to more than one chords, shift all chords that do not form a triangle with the edge (1, *n* + 2) so that they are incident with 1ʹ instead of 2. Now, let *x* be the next vertex counterclockwise to 1 incident to a chord. (Note it may be that *x* = 1ʹ.) Proceed with the following procedure. 1. Let *c**j*1, *c**j*2, …, *c**j**k* be the list of chords incident with *x*. 2. Let *z* be the vertex immediately counterclockwise of *x*. Remove the edge (*x*, *z*) and add a new vertex *x*ʹ along with edges (*x*, *x*ʹ) and (*x*ʹ, *z*). 3. If *x* is adjacent to exactly one chord, continue to step (5). Otherwise, let *y* be the vertex immediately clockwise to *x*. The edge (*y*, *x*) bounds a closed region which contains exactly one chord *c**j*ℓ. For each *c**j**m* with *m* ≠ ℓ, change its incidence with *x* to an incidence with *x*ʹ. 4. If, after doing the prior step, we add a boundary edge to a region that we have already added a boundary edge to, undo the prior step and continue to the next step. 5. Move to the next vertex counterclockwise to *x* incident to some chord, calling this new vertex *x*. (Note this new vertex may be the vertex *x*ʹ constructed in step (2).) If *x* is a vertex we have already visited before, terminate the procedure. Otherwise, restart at step (1). After doing this, observe that no region is a triangle. Also observe that we added a single edge to the boundary for each region (hence the importance of step (4)), and so we now have an (*n* + *i* + 3)-gon. Relabel the vertices, starting at the vertex labeled 1 and continuing counterclockwise. We can decompose our new polygon into *i* + 1 regions, labeled *P*1, *P*2, …, *P**i* + 1. Make each of these fans so that the origin of *P**j* is the vertex with the minimum label of *P**j*. This gives us a triangulation of an (*n* + *i* + 3)-gon with *i* + 1 non-singular fans and no singular fans. There are a couple of things to keep in mind that may help justify why the maps given in Definitions [def:tritodis] to [def:distotri] are mutual inverses. 1. The edges we contract going from a triangulation to a dissection are exactly the edges we add back going from a dissection to a triangulation. This is because the origins of fans in triangulations are always chosen by the smallest vertex appearing in a fan, which appear sooner traveling counterclockwise around the polygons than vertices with larger labels. The regions in a dissection are ultimately what become our fans for a triangulation, so we consequently always add an edge on the boundary of a dissection right after the vertex that would end up being the origin for a fan. 2. The chords of a dissection should be viewed as the parts of the triangulation that ultimately form the boundaries of the fans (along with the actual boundary of the polygon). Hence, we can not expect two such chords to remain incident in the triangulation, as this would alter the number of fans. See Figure [fig:tritodisec] below to see an illustration of the map from a triangulation to a dissection and Figure [fig:dissectotri] to see an illustration of the map of the other direction. .45 (a) [scale=1] at (.74,.76) (e); (e.corner 1)–(e.corner 16)–(e.corner 5)–(e.corner 4)–(e.corner 3)–(e.corner 2)–(e.corner 1); (e.corner 1)–(e.corner 16)–(e.corner 5)–(e.corner 4)–(e.corner 3)–(e.corner 2)–(e.corner 1); (e.corner 5)–(e.corner 16)–(e.corner 12)–(e.corner 6)–(e.corner 5); (e.corner 5)–(e.corner 16)–(e.corner 12)–(e.corner 6)–(e.corner 5); (e.corner 6)–(e.corner 7)–(e.corner 10)–(e.corner 11)–(e.corner 12)–(e.corner 6); (e.corner 6)–(e.corner 7)–(e.corner 10)–(e.corner 11)–(e.corner 12)–(e.corner 6); (e.corner 7)–(e.corner 8)–(e.corner 9)–(e.corner 10)–(e.corner 7); (e.corner 7)–(e.corner 8)–(e.corner 9)–(e.corner 10)–(e.corner 7); (e.corner 12)–(e.corner 13)–(e.corner 14)–(e.corner 15)–(e.corner 16)–(e.corner 12); (e.corner 12)–(e.corner 13)–(e.corner 14)–(e.corner 15)–(e.corner 16)–(e.corner 12); at (.74,.76) (e); in 1,...,16at (e.corner ) ; at (e.corner 1) 1; at (e.corner 2) 2; at (e.corner 3) 3; at (e.corner 4) 4; at (e.corner 5) 5; at (e.corner 6) 6; at (e.corner 7) 7; at (e.corner 8) 8; at (e.corner 9) 9; at (e.corner 10) 10; at (e.corner 11) 11; at (e.corner 12) 12; at (e.corner 13) 13; at (e.corner 14) 14; at (e.corner 15) 15; at (e.corner 16) 16; (e.corner 1) – (e.corner 3) ; (e.corner 1) – (e.corner 5) ; (e.corner 1) – (e.corner 4) ; (e.corner 1) – (e.corner 16) ; (e.corner 5) – (e.corner 16) ; (e.corner 5) – (e.corner 12) ; (e.corner 7) – (e.corner 10) ; (e.corner 9) – (e.corner 7) ; (e.corner 6) – (e.corner 12) ; (e.corner 6) – (e.corner 11) ; (e.corner 6) – (e.corner 10) ; (e.corner 13) – (e.corner 12) ; (e.corner 14) – (e.corner 12) ; (e.corner 16) – (e.corner 12) ; (e.corner 15) – (e.corner 12) ; (e.corner 1) circle (1.25pt); (e.corner 5) circle (1.25pt); (e.corner 6) circle (1.25pt); (e.corner 7) circle (1.25pt); (e.corner 12) circle (1.25pt); .45 (b) [scale=1] at (.74,.76) (e); at (.74,.76) (e); (e.corner 1)–(e.corner 16)–(e.corner 5)–(e.corner 4)–(e.corner 3)–(e.corner 2)–(e.corner 1); (e.corner 1)–(e.corner 16)–(e.corner 5)–(e.corner 4)–(e.corner 3)–(e.corner 2)–(e.corner 1); (e.corner 5)–(e.corner 16)–(e.corner 12)–(e.corner 6)–(e.corner 5); (e.corner 5)–(e.corner 16)–(e.corner 12)–(e.corner 6)–(e.corner 5); (e.corner 6)–(e.corner 7)–(e.corner 10)–(e.corner 11)–(e.corner 12)–(e.corner 6); (e.corner 6)–(e.corner 7)–(e.corner 10)–(e.corner 11)–(e.corner 12)–(e.corner 6); (e.corner 7)–(e.corner 8)–(e.corner 9)–(e.corner 10)–(e.corner 7); (e.corner 7)–(e.corner 8)–(e.corner 9)–(e.corner 10)–(e.corner 7); (e.corner 12)–(e.corner 13)–(e.corner 14)–(e.corner 15)–(e.corner 16)–(e.corner 12); (e.corner 12)–(e.corner 13)–(e.corner 14)–(e.corner 15)–(e.corner 16)–(e.corner 12); at (.74,.76) (e); in 1,...,16at (e.corner ) ; at (e.corner 1) 1; at (e.corner 2) 2; at (e.corner 3) 3; at (e.corner 4) 4; at (e.corner 5) 5; at (e.corner 6) 6; at (e.corner 7) 7; at (e.corner 8) 8; at (e.corner 9) 9; at (e.corner 10) 10; at (e.corner 11) 11; at (e.corner 12) 12; at (e.corner 13) 13; at (e.corner 14) 14; at (e.corner 15) 15; at (e.corner 16) 16; (e.corner 5) – (e.corner 16) ; (e.corner 6) – (e.corner 12) ; (e.corner 7) – (e.corner 10) ; (e.corner 16) – (e.corner 12) ; (e.corner 1) circle (1.25pt); (e.corner 5) circle (1.25pt); (e.corner 6) circle (1.25pt); (e.corner 7) circle (1.25pt); (e.corner 12) circle (1.25pt); .45 (c) [scale=.3] at (.74,.76) (e); at (.74,.76) (e); (e.corner 2) – (e.corner 1) ; (e.corner 6) – (e.corner 5) ; (e.corner 6) – (e.corner 7) ; (e.corner 8) – (e.corner 7) ; (e.corner 13) – (e.corner 12) ; in 1,...,16at (e.corner ) ; at (e.corner 1) 1; at (e.corner 2) 2; at (e.corner 3) 3; at (e.corner 4) 4; at (e.corner 5) 5; at (e.corner 6) 6; at (e.corner 7) 7; at (e.corner 8) 8; at (e.corner 9) 9; at (e.corner 10) 10; at (e.corner 11) 11; at (e.corner 12) 12; at (e.corner 13) 13; at (e.corner 14) 14; at (e.corner 15) 15; at (e.corner 16) 16; (e.corner 5) – (e.corner 16) ; (e.corner 6) – (e.corner 12) ; (e.corner 7) – (e.corner 10) ; (e.corner 16) – (e.corner 12) ; .45 (d) [scale=.3] at (.74,.76) (e); at (.74,.76) (e); in 1,...,11at (e.corner ) ; at (e.corner 1) 1; at (e.corner 2) 2; at (e.corner 3) 3; at (e.corner 4) 4; at (e.corner 5) 5; at (e.corner 6) 6; at (e.corner 7) 7; at (e.corner 8) 8; at (e.corner 9) 9; at (e.corner 10) 10; at (e.corner 11) 11; (e.corner 4) – (e.corner 11) ; (e.corner 4) – (e.corner 8) ; (e.corner 4) – (e.corner 6) ; (e.corner 11) – (e.corner 8) ; [fig:tritodisec] .45 (a) [scale=.3] at (.74,.76) (e); at (.74,.76) (e); in 1,...,11at (e.corner ) ; at (e.corner 1) 1; at (e.corner 2) 2; at (e.corner 3) 3; at (e.corner 4) 4; at (e.corner 5) 5; at (e.corner 6) 6; at (e.corner 7) 7; at (e.corner 8) 8; at (e.corner 9) 9; at (e.corner 10) 10; at (e.corner 11) 11; (e.corner 4) – (e.corner 11) ; (e.corner 4) – (e.corner 8) ; (e.corner 4) – (e.corner 6) ; (e.corner 11) – (e.corner 8) ; .45 (b) [scale=.3] at (.74,.76) (e); at (.74,.76) (e); in 1,...,13at (e.corner ) ; at (e.corner 1) 1; at (e.corner 2) 1ʹ; at (e.corner 3) 2; at (e.corner 4) 3; at (e.corner 5) 4; at (e.corner 6) 4ʹ; at (e.corner 7) 5; at (e.corner 8) 6; at (e.corner 9) 7; at (e.corner 10) 8; at (e.corner 11) 9; at (e.corner 12) 10; at (e.corner 13) 11; (e.corner 5) – (e.corner 13) ; (e.corner 6) – (e.corner 10) ; (e.corner 6) – (e.corner 8) ; (e.corner 13) – (e.corner 10) ; .45 (c) [scale=.3] at (.74,.76) (e); at (.74,.76) (e); in 1,...,16at (e.corner ) ; at (e.corner 1) 1; at (e.corner 2) 1ʹ; at (e.corner 3) 2; at (e.corner 4) 3; at (e.corner 5) 4; at (e.corner 6) 4ʹ; at (e.corner 7) 4ʺ; at (e.corner 8) 4‴; at (e.corner 9) 5; at (e.corner 10) 6; at (e.corner 11) 7; at (e.corner 12) 8; at (e.corner 13) 8ʹ; at (e.corner 14) 9; at (e.corner 15) 10; at (e.corner 16) 11; (e.corner 5) – (e.corner 16) ; (e.corner 6) – (e.corner 12) ; (e.corner 7) – (e.corner 10) ; (e.corner 16) – (e.corner 12) ; .45 (d) [scale=1 ] at (.74,.76) (e); (e.corner 1)–(e.corner 16)–(e.corner 5)–(e.corner 4)–(e.corner 3)–(e.corner 2)–(e.corner 1); (e.corner 1)–(e.corner 16)–(e.corner 5)–(e.corner 4)–(e.corner 3)–(e.corner 2)–(e.corner 1); (e.corner 5)–(e.corner 16)–(e.corner 12)–(e.corner 6)–(e.corner 5); (e.corner 5)–(e.corner 16)–(e.corner 12)–(e.corner 6)–(e.corner 5); (e.corner 6)–(e.corner 7)–(e.corner 10)–(e.corner 11)–(e.corner 12)–(e.corner 6); (e.corner 6)–(e.corner 7)–(e.corner 10)–(e.corner 11)–(e.corner 12)–(e.corner 6); (e.corner 7)–(e.corner 8)–(e.corner 9)–(e.corner 10)–(e.corner 7); (e.corner 7)–(e.corner 8)–(e.corner 9)–(e.corner 10)–(e.corner 7); (e.corner 12)–(e.corner 13)–(e.corner 14)–(e.corner 15)–(e.corner 16)–(e.corner 12); (e.corner 12)–(e.corner 13)–(e.corner 14)–(e.corner 15)–(e.corner 16)–(e.corner 12); at (.74,.76) (e); in 1,...,16at (e.corner ) ; at (e.corner 1) 1; at (e.corner 2) 2; at (e.corner 3) 3; at (e.corner 4) 4; at (e.corner 5) 5; at (e.corner 6) 6; at (e.corner 7) 7; at (e.corner 8) 8; at (e.corner 9) 9; at (e.corner 10) 10; at (e.corner 11) 11; at (e.corner 12) 12; at (e.corner 13) 13; at (e.corner 14) 14; at (e.corner 15) 15; at (e.corner 16) 16; (e.corner 1) – (e.corner 3) ; (e.corner 1) – (e.corner 5) ; (e.corner 1) – (e.corner 4) ; (e.corner 1) – (e.corner 16) ; (e.corner 5) – (e.corner 16) ; (e.corner 5) – (e.corner 12) ; (e.corner 7) – (e.corner 10) ; (e.corner 9) – (e.corner 7) ; (e.corner 6) – (e.corner 12) ; (e.corner 6) – (e.corner 11) ; (e.corner 6) – (e.corner 10) ; (e.corner 13) – (e.corner 12) ; (e.corner 14) – (e.corner 12) ; (e.corner 16) – (e.corner 12) ; (e.corner 15) – (e.corner 12) ; (e.corner 1) circle (1.25pt); (e.corner 5) circle (1.25pt); (e.corner 6) circle (1.25pt); (e.corner 7) circle (1.25pt); (e.corner 12) circle (1.25pt); [fig:dissectotri] --- 1. Kazhdan-Lusztig polynomials for matroids were first defined in.[↩](#fnref1) 2. It is worth noting that this connection between $\Skyt(a,i,b)$ and dissections of polygons has resurfaced recently in the work of Kazhdan-Lusztig polynomials for Matroids. Compare the comments in with the representation theoretic result after setting *m* = 1 and considering dimensions.[↩](#fnref2) Combinatorics for certain Skew Tableaux, Dyck Paths, Triangulations, and Dissections ==================================================================================== We present combinatorial bijections and identities between certain skew Young tableaux, Dyck paths, triangulations, and dissections. Introduction ============ Bijections between Catalan objects are well understood. For instance, see. There are many generalizations of these objects and the bijections between them. In, Stanley provides a bijection between certain standard Young tableaux and dissections of a polygon. In, the authors provide a bijection between the same tableaux and certain Dyck paths. Meanwhile, various papers consider a certain collection of skew Young tableaux—which may be seen as a generalization of the aforementioned tableaux—which are used to compute formulas for the ordinary and equivariant Kazhdan–Lusztig polynomial for uniform, sparse paving, and paving matroids. [1](#fn1) , properties about the skew tableaux will have implications for the Dyck paths and triangulation objects of interest. Motivated by our findings, we then find a combinatorial bijection between the dissections in and our triangulations. In the next section, we will define relevant terminology for skew Young tableaux in subsection [subsec:sytback], Dyck paths in subsection [subsec:dyckback], and then both dissections and triangulations in subsection [subsec:distriback]. Then in subsection [subsec:mainback], we discuss the main results and findings of this paper in detail. In sections [sec:combinatorialbijections] and [sec:enumeration], we provide the definitions for the maps involved in the main results. ### Acknowledgements: The authors would like to thank Kyungyong Lee for his helpful input on this paper. Background and Main Results =========================== Skew Young Tableaux and Nomincreasing Partitions ------------------------------------------------ Let *λ*1 ≥ *λ*2 ≥ ⋯ ≥ *λ**k* be positive integers. We say that *λ* = [*λ*1, *λ*2, …, *λ**k*] is a *partition* of *n* if *λ*1 + ⋯ + *λ**k* = *n*. The *Young diagram of shape* *λ* is represented by boxes that are left justified so that the *i*th row has *λ**i* boxes. A *standard Young tableau* is achieved by filling the boxes with the numbers so that * each row strictly increases from left to right; * each column increases from top to bottom; and * if there are *n* boxes, only the numbers 1 through *n* are used. See Figure [fig:tableaux] below for an example of a Young diagram and standard Young tableaux. .45 .45 1 & 3 & 7 & 11 & 12 & 14 & 15 2 & 5 & 13 & 16 4 & 8 6 & 10 9 [fig:tableaux] Given partitions *μ* = [*μ*1, …, *μ*ℓ] and *λ* = [*λ*1, …, *λ**k*] so that *μ**i* ≤ *λ**i* for all *i*, the *skew Young diagram* *λ* \ *μ* is the set of squares from the diagram for *λ* that are not in the diagram for *μ*. As before, we define a *skew Young tableau* to be a skew Young diagram filled with numbers following the same rules described for standard Young tableau. See Figure [fig:skewtableaux] for an example of a skew Young tableaux. & & 1 & 2& 3 & 9 & 11 & 4 & 6 & 7 & 5 8 & 10 12 [fig:skewtableaux] The authors in introduce the notation $\Skyt(a,i,b)$ to denote the skew Young tableaux of shape [(*i* + 1)*b*, 1*a* − 2]/[(*i* − 1)*b* − 2], where we write *x**t* to denote *x*, *x*, …, *x*, where *x* is written *t* times. These are precisely the skew tableaux we discussed in the introduction. The diagram for the tableaux in $\Skyt(a,i,b)$ is shown in Figure [fig:skytsytdiagrams]. [scale=0.5, line width=0.9pt] (-1,0) grid (0,6); (-1,0) – node[left=7pt] *a* (-1,6); (0,4) grid (4,6); (0.1,4) – node[below=7pt] *i* (5,4); (4,4) grid (5,9); (5,4) – node[right=7pt] *b* (5,9); [fig:skytsytdiagrams] Dyck Paths ---------- A *Dyck path of semi-length *n** is a string in $\{{\textsf{U}},{\textsf{D}}\}^{2n}$ so that 1. the string has the same number of ${\textsf{U}}$’s and ${\textsf{D}}$’s (that is, *n* of each); and 2. the number of ${\textsf{U}}$’s is at least the number of ${\textsf{D}}$’s in any initial segment of the word. We will also often represent such a path visually using (1, 1) segments for ${\textsf{U}}$ and ( − 1, 1) segments for ${\textsf{D}}$, as in Figure [fig:dyckexample]. [scale=0.7, line width=1pt] (-3,0)–(15,0); (-2,0) circle[radius=5pt]; (-2,0)–(-1,1); (-1,1) circle[radius=5pt]; (-1,1)–(0,0); (0,0) circle[radius=5pt]; in 0,1,...,2 (,)–(+1,+1); (+1,+1) circle[radius=5pt]; (3,3)–(4,2); (4,2) circle[radius=5pt]; (4,2)–(5,3); (5,3) circle[radius=5pt]; (5,3)–(6,2); (6,2) circle[radius=5pt]; (6,2)–(7,1); (7,1) circle[radius=5pt]; (7,1)–(8,2); (8,2) circle[radius=5pt]; (8,2)–(9,1); (9,1) circle[radius=5pt]; (9,1)–(11,3); (10,2) circle[radius=5pt]; (11,3) circle[radius=5pt]; (11,3)–(14,0); (12,2) circle[radius=5pt]; (13,1) circle[radius=5pt]; (14,0) circle[radius=5pt]; [fig:dyckexample] A *long ascent* is a maximal ascent of length at least 2. A *singleton* is a maximal ascent of length 1. Let $\Dyck(n,\ell,s)$ be the Dyck paths of semi-length *n* with ℓ long ascents and *s* singletons so that no singleton appears *after* the last long ascent. Thus, the Dyck path in Figure [fig:dyckexample] is an element of $\Dyck(8,2,3)$. Dissections and Triangulations ------------------------------ Throughout this section, we assume polygons with *n* vertices have their vertices labeled 1 through *n* in counter-clockwise order. A *dissection* of a polygon *P* is a way of adding chords between non-adjacent vertices so that no two chords intersect in the interior of the polygon. Throughout, we let $\Dis(n,i)$ be the set of all dissections of an *n*-gon with *i* chords. Note that *i* in $\Dis(n,i)$ is at most *n* − 2. The elements of $\Dis(n,n-2)$ are the *triangulations* of an *n*-gon. Given a vertex *x* in a triangulated polygon, a *fan at *x** is a maximal collection triangles all containing *x*. In this case, we call *x* the *origin* of the fan. A *singular* fan is a fan with only one triangle. Let *e* be a boundary edge of a fan *F* at *x*. We are interested in being able to uniquely partition a triangulation into a collection of fans. This leads to the following definition. Let *T* be a triangulation. A *fan decomposition* is the the pair of sequences (F(*T*), *δ*(*T*)), where F(*T*) and *δ*(*T*) are defined as follows: * We let F(*T*) be a sequence of fans defined recursively as follows. Let *F* be the fan at the vertex with the smallest label. Delete this vertex and all edges incident with it in *T* to obtain a sequence of triangulations *T*1, …, *T**k*, arranged in counter clock-wise order so that *T**i* ∩ *T**i* + 1 is just vertex. If *T* is just an edge, then F(*T*) is the empty sequence, and otherwise $\mathcal{F}(T)\coloneqq(F,\mathcal{F}\_1,\dots, \mathcal{F}\_k)$ where F*i* = F(*T**i*). * Let *x**j* be the label of the origin of *F**j*. We let $\delta(T)\coloneqq (d\_1,\dots, d\_{k-1})$, where $d\_i\coloneqq x\_{i+1}-x\_i$ and *k* is the number of fans in F(*T*). One can think of *d**i* as the number of edges between the origins of *F**i* and *F**i* + 1 when traveling along the boundary of *T* counter-clockwise. Consider the triangulation *T* in Figure [fig:triangledecomp]. Observe that F(*T*) = (*F*1, *F*2, *F*3, *F*4, *F*5) where *F*1 is the size 1 fan at vertex 1, *F*2 is the size 3 fan at vertex 2, *F*3 is the size 1 fan at vertex 4, *F*4 is the size 1 fan at vertex 5, and *F*5 is the size 4 fan at vertex 7. Thus, *δ*(*T*) = (1, 2, 1, 2). Figure [fig:triangledecomp] shows the five fans, distinguishing them by thick boundary edges and different shades of orange in their interior. The white vertices correspond to the origins of the fans. [scale=1] at (.74,.76) (e); (e.corner 1)–(e.corner 2)–(e.corner 12)–(e.corner 1); (e.corner 1)–(e.corner 2)–(e.corner 12)–(e.corner 1); (e.corner 2)–(e.corner 12)–(e.corner 7)–(e.corner 4)–(e.corner 3)–(e.corner 2); (e.corner 2)–(e.corner 12)–(e.corner 7)–(e.corner 4)–(e.corner 3)–(e.corner 2); (e.corner 4)–(e.corner 5)–(e.corner 7)–(e.corner 4); (e.corner 4)–(e.corner 5)–(e.corner 7)–(e.corner 4); (e.corner 6)–(e.corner 5)–(e.corner 7)–(e.corner 6); (e.corner 6)–(e.corner 5)–(e.corner 7)–(e.corner 6); (e.corner 7)–(e.corner 8)–(e.corner 9)–(e.corner 10)–(e.corner 11)–(e.corner 12)–(e.corner 7); (e.corner 7)–(e.corner 8)–(e.corner 9)–(e.corner 10)–(e.corner 11)–(e.corner 12)–(e.corner 7); at (.74,.76) (e); in 1,...,12at (e.corner ) ; at (e.corner 1) 1; at (e.corner 2) 2; at (e.corner 3) 3; at (e.corner 4) 4; at (e.corner 5) 5; at (e.corner 6) 6; at (e.corner 7) 7; at (e.corner 8) 8; at (e.corner 9) 9; at (e.corner 10) 10; at (e.corner 11) 11; at (e.corner 12) 12; (e.corner 2) – (e.corner 12) ; (e.corner 2) – (e.corner 7) ; (e.corner 2) – (e.corner 4) ; (e.corner 4) – (e.corner 7) ; (e.corner 5) – (e.corner 7) ; (e.corner 6) – (e.corner 7) ; (e.corner 7) – (e.corner 10) ; (e.corner 7) – (e.corner 9) ; (e.corner 7) – (e.corner 11) ; (e.corner 7) – (e.corner 12) ; (e.corner 1) circle (1.25pt); (e.corner 2) circle (1.25pt); (e.corner 4) circle (1.25pt); (e.corner 5) circle (1.25pt); (e.corner 7) circle (1.25pt); [fig:triangledecomp] Observe that a fan decomposition uniquely determines *T*. That is, knowing the order and size of each fan along with the distance between origins of consecutive fans uniquely determine a triangulation. Let $\Tri(n,t,s)$ be the triangulations *T* of an *n*-gon so that F(*T*) has *s* + *t* fans so that precisely *s* are singular and so that the last fan is not singular. Thus, the triangulation in Figure [fig:triangledecomp] is an element of $\Tri(12,2,3)$. Main Results ------------ We now may state the main results of this paper. First, let us state, the result which we plan to generalize. We state the result by referencing the object $\Skyt(a,i,b)$ we defined above. [] [prop:originaldyckbijection] The tableaux in $\Skyt(a,i,2)$ are in bijection with Dyck paths of length 2(*a* + 2*i*) with *i* + 1 peaks and no singletons. In Section [sec:combinatorialbijections], we will provide explicit combinatorial maps which give us the following Theorem. [thm:skytdycktri] The following objects are in bijection: 1. $\Skyt(a,i,b)$; 2. $\Dyck(a+b+2i-2,i+1,b-2)$; and 3. $\Tri(a+b+2i,i+1,b-2)$. The maps between the three objects are defined in section [sec:combinatorialbijections]. The following pairs of maps are mutual inverses: * maps $\operatorname{\operatorname{\mathtt{SD}}\xspace}$ and $\operatorname{\operatorname{\mathtt{DS}}\xspace}$; * maps $\operatorname{\operatorname{\mathtt{ST}}\xspace}$ and $\operatorname{\operatorname{\mathtt{TS}}\xspace}$; and * maps $\operatorname{\operatorname{\mathtt{TD}}\xspace}$ and $\operatorname{\operatorname{\mathtt{DT}}\xspace}$. Note that this generalizes the result stated in Proposition [prop:originaldyckbijection], in addition to adding a triangulation interpretation. With the original motivation for this paper in mind, we specialize Theorem [thm:skytdycktri] to *b* = 2. After incorporating the work of of which provides a combinatorial bijection between $\Dis(n+2,i)$ and $\Skyt(n-i+1,i,2)$, we have the following. [2](#fn2) The following objects are in bijection. 1. $\Dis(n+2,i)$. 2. $\Skyt(n-i+1,i,2)$. 3. $\Dyck(n+i+1,i+1,0)$. 4. $\Tri(n+i+3,i+1,0)$. By specializing the maps involved in Theorem [thm:skytdycktri], we already have combinatorial bijections between the standard Young tableaux, Dyck paths, and triangulations in this theorem, though its important to note that are bijection between the standard Young tableaux and Dyck paths is precisely the proof of Proposition [prop:originaldyckbijection]. This leaves two pairs of objects with missing combinatorial bijections. In section [sec:enumeration] we demonstrate a bijection between the dissections and triangulations given in this corollary. Using our bijection between Dyck paths and Triangulations, one can extend our work in section [sec:enumeration] to give a bijection between the dissections and Dyck paths in Corollary [cor:disecbij], but we omit this interpretation from this paper. For our final result, we recall the following Lemma in terms of Dyck paths and triangulations. [lem:skewsym] Let *a*, *i*, and *b* be nonnegative integers. Then $$\# \Skyt(a,i,b)=\#\Skyt(b,i,a).$$ One may apply Theorem [thm:skytdycktri] to this Lemma in order to get the following. [cor:dycksym] 1. Let *n*, ℓ, *s* be nonnegative integers. Then $$\#\Dyck(n,\ell,s)=\#\Dyck(n,\ell,n-s-2\ell).$$ 2. Let *n*, *t*, *s* be nonnegative integers. Then $$\#\Tri(n,t,s)=\#\Tri(n,t,n-t-2\ell-2).$$ Although these equalities are naturally obtained with Theorem [thm:skytdycktri] and Lemma [lem:skewsym], there is no known direct combinatorial bijection describing these equalities. Hence we pose the following. Find a direct combinatorial proof of Corollary [cor:dycksym] which does not rely on using the skew tableaux or bijections given in this paper. Combinatorial Bijections ======================== The following subsections describe maps going between any two of the objects given in Theorem [thm:skytdycktri]. For convenience, we identify maps according to where the map from and to by using `S` for skew Young tableaux, `T` for Triangulations, and `D` for Dyck paths. For instance, map $\operatorname{\operatorname{\mathtt{ST}}\xspace}$ represents the map from skew Young tableaux to traingulations, and $\operatorname{\operatorname{\mathtt{TD}}\xspace}$ represents a map from triangulations to Dyck paths. Examples are used to alleviate any ambiguity with our maps. Before proceeding, however, we will point out a handy reinterpretation of the tableaux in $\Skyt(a,i,b)$. Let $\lambda\in \Skyt(a,i,b)$. Let *X* = {*x*1, *x*2, …, *x**i* + *b* − 1} be the set of values in the top *b* − 1 rows so that *x*1 < *x*2 < ⋯ < *x**i* + *b* − 1. If *x**j* is in row *b* − 1, define *y**j* to be the entry in the tableau directly below *x**j*. Then for 1 ≤ *j* < *i* + *b* − 1 let $$A\_{j}\coloneqq\begin{cases} \{x\_j\} & \text{if $x\_j$ is in the first $b-2$ rows;}\\ \{x\_j\}\cup\left([y\_j,y\_{k}-1]\setminus X\right)& \text{if $x\_j$ is in row $b-1$ and $y\_k$ is to the right of $y\_j$,}\end{cases}$$ where [*y**j*, *y**k* − 1] = {*y**j*, *y**j* + 1, *y**j* + 2, …, *y**k* − 1}. Let $A\_{i+b-1}\coloneqq\{x\_j\}\cup \big([x\_j+1,a+b+2i-2]\setminus X\big)$. Note that *x**j* is always the minimum of *A**j*. When ∣*A**j*∣ > 1, note the elements of *A**j* are precisely the entries in row *b* − 1 and *b* in column *j* along with all entries of column 1 which are between *y**j* and *y**k*. See Figure [fig:pushidea]. & & 1 & & 4 & & 5 & & 10 2 & 7 & 13 3 & 9 & 15 8 11 12 14 & & 1 & & 4 & & 5 & & 10 2 & 7 & 13 3 & 9 & 15 8 & 11 & 12 & 14 [fig:pushidea] The sequence (*A*1, …, *A**i* + *b* − 1) has enough information to reconstruct *λ*. Starting with *j* = 1, do the following. 1. If ∣*A**j*∣ = 1, let *x* ∈ *A**j*. Then place *x* in the highest possible position in the last column. 2. If ∣*A**j*∣ > 1, then let *x**j* = min*A**j* and *y**j* = min(*A**j* \ {*x**j*}). Place *x**j* in row *b* − 1 column *j* and place *y**j* in row *b* column *j*. Place all remaining entries from *A**j*—in increasing order—at the top most available position(s) in the first column. 3. Increase the value of *j* by 1. If *j* < *i* + *b* − 1, repeat these steps. Otherwise, *λ* is filled and we are done. Let *x**j* and *y**j* are defined in step (2) of the preceding procedure. Pick an integer *j*ʹ minimally so that *j* < *j*ʹ and ∣*A**j*ʹ∣ > 1. Note that (*A*1, …, *A**i* + *b* − 1) is an ordered partition of [*a* + *b* + 2*i* − 2] so that *x**j* < *x**j* + 1 and *y**j* < *y**j*ʹ, whenever *y**j* and *y**j*ʹ exist. These conditions guarantee that the rows of *λ* increase left-to-right. Such a sequence is called a *nomincreasing partition*, as defined in. To this end, we define the following. Let $\lambda\in \Skyt(a,i,b)$ and define *A*1, …, *A**i* + *b* − 1 for *λ* as above. We define $\Nom(\lambda)$ to be the nomincreasing partition $$\Nom(\lambda)\coloneqq (A\_1,A\_2,\dots, A\_{i+b-1}).$$ Map $\operatorname{\operatorname{\mathtt{SD}}\xspace}$ ------------------------------------------------------ In this section, we define a map $\operatorname{\operatorname{\mathtt{SD}}\xspace}$ from $\Skyt(a,i,b)$ to $\Dyck(a+b+2i-2, i+1, b-2)$. For simplicity, let $n\coloneqq a+b+2i-2$. Note that given $\lambda \in \Skyt(a,i,b)$, *n* is the number of entries in *λ* and *t* is the number of entries in the first *b* − 1 rows of *λ*. Let $\lambda\in \Skyt(a,i,b)$. We define $\operatorname{\operatorname{\mathtt{SD}}\xspace}(\lambda)$, a certain lattice path, as follows. Let $\Nom(\lambda)=(A\_1,\dots, A\_{i+b-1})$. Let *x**j* denote the minimum of *A**j*. Let $a\_j \coloneqq \#A\_j$. Then let $\operatorname{\operatorname{\mathtt{SD}}\xspace}(\lambda)$ be the lattice path given by following string in $\{{\textsf{U}},{\textsf{D}}\}^{2n}$: $$\begin{aligned} {\textsf{U}}^{a\_1}{\textsf{D}}^{x\_2-x\_1}{\textsf{U}}^{a\_2}{\textsf{D}}^{x\_3-x\_2}\cdots {\textsf{U}}^{a\_{t-1}}{\textsf{D}}^{x\_{t}-x\_{t-1}}{\textsf{U}}^{a\_{t}}{\textsf{D}}^{n-x\_{t}+x\_1}.\label{eq:dyckstring}\end{aligned}$$ Given *λ* be a tableau in $\Skyt(a,i,b)$, $\operatorname{\operatorname{\mathtt{SD}}\xspace}(\lambda)$ is a Dyck path. In particular, the lattice path $\operatorname{\operatorname{\mathtt{SD}}\xspace}(\lambda)$ is an element of $\Dyck(n, i+1, b-2)$. Recall that given $\lambda \in \Skyt(a,i,b)$, we can define have $\Nom(\lambda)=(A\_1,\dots, A\_{i+b-1})$ where ∣*A**i* + *b* − 1∣ > 1. Recall that *x**j* = min*A**j* is an entry in the top *b* − 1 rows of *λ*. In the string given in, ${\textsf{U}}$ corresponds to an up step and ${\textsf{D}}$ correspond to a down step. Note that for any *k*, we have [*x*1, *x**k*] ⊆ *A*1 ∪ ⋯ ∪ *A**k*. Otherwise, there exists a *w* ∈ [*x*1, *x**k*] so that *w* ∈ *A**j* for some *j* > *k*. Then we have *w* ≥ *x**j* > *x**k* ≥ *w*, a contradiction. Thus, for *k* < *t* ∑*j* = 1*k*(*x**j* + 1 − *x**j*) = *x**k* + 1 − *x*1 ≤ ∑*j* = 1*k**a**j*. Also, ∑*j* = 1*t*(*x**j* + 1 − *x**j*) = *x**t* − *x*1 + *n* − *x**t* + *x*1 = *n*. Moreover, there are precisely *b* − 2 of the *a**j* so that *a**j* = 1, and there are *i* of the *a
arxiv_0000163
Scheduling and dimensioning of heterogeneous energy stores, with applications to future GB storage needs ======================================================================================================== Future “net-zero” electricity systems in which all or most generation is renewable may require very high volumes of storage, provided jointly by a number of heterogeneous technologies, in order to manage the associated variability in the generation-demand balance. We consider the problems of scheduling and dimensioning such storage. We develop a value-function based approach to optimal scheduling, and show that, to a good approximation, the problem to be solved at each successive point in time reduces to a linear programme with a particularly simple solution. We show that approximately optimal scheduling may be achieved without the need for a running forecast of the future generation-demand balance. We examine the applicability of the above theory to future GB storage needs, and discuss how it may be used to enable the most economic dimensioning of such storage, with possible savings of tens of billions of pounds, relative to the use of a single technology. **Keywords:** energy storage; economics; optimal scheduling. Introduction ============ Future electricity systems in which all or most generation is renewable, and hence highly variable, may require extremely high volumes of storage in order to manage this variability and to ensure that demand may always be met—see , and especially , for an assessment of likely future needs and costs in Great Britain, where, under a future “net-zero” carbon emissions strategy, capital costs of storage may run into many tens, or even hundreds, of billions of pounds. For the reasons we explain below, a mixture of storage technologies is likely to be required. The problems considered in the present paper are those of the scheduling and dimensioning of such storage, with the objective of meeting energy demand to a required reliability standard as economically as possible. These problems are considered from a societal viewpoint, which generally coincides with that of the electricity system operator. This is in contrast the viewpoint of a storage provider seeking to maximise profits (see  and the references therein). Insofar as the societal problem of managing energy storage is considered in the existing literature, this is mostly in the context of short-term storage used to cover occasional periods of generation shortfall, with sufficient time for recharging between such periods (see, e.g. ). The scheduling problems we consider in this paper are nontrivial in that, if storage is not managed properly, then it is likely that there will frequently arise the situation in which there is sufficient energy in storage to meet demand but it is located in too few stores for it to be possible to serve it at the required rate. The need for a solution of these scheduling problems is highlighted by the forthcoming Royal Society report  on long-term energy storage in GB. The present paper aims to supply the necessary mathematics for that, and to illustrate its application in the context of future GB storage needs. In a large system, such as that of Great Britain, which we consider further in Section [sec:appl-gb-energy], the processes of demand and of renewable generation vary on multiple timescales: on a timescale measured in hours there is diurnal variation in demand and in solar generation; on a timescale of days and weeks there is weather-related variation in demand and in most forms of renewable generation, and there is further demand variation due to weekends and holiday periods; on longer timescales there is seasonal variation in both demand and renewable generation which may extend to major differences between successive years (see  for more details). In GB, for example, it is particularly the case that some years are significantly windier than others (see ), necessitating volumes of storage of the order of tens of terawatt-hours within a future net-zero scenario—see Section [sec:appl-gb-energy] and  for similar results in the US context. Variation in the generation-demand balance may be managed by a number of different storage technologies. These vary greatly in their costs per unit of capacity and per unit of power (the maximum rate at which they may be charged or discharged), and further in their *(round-trip) efficiencies* (energy output as a fraction of energy input). In consequence, different storage technologies are typically appropriate to managing variation on these different timescales—see  for detailed comparisons and analysis in the GB context, and also the recent report  (especially Figure 1.6) for a discussion of differing technology costs and their implications. We further explore these issues in Section [sec:appl-gb-energy]. It thus seems likely that, in the management of such future electricity systems, there will be a need for a mix of storage technologies. This will be such that most of the required storage *capacity* will be provided by those technologies such as chemical storage, which are able to provide this capacity economically, while a high proportion of the *power* requirements will be met by technologies such as batteries or advanced (adiabatic) compressed air energy storage (ACAES) with much lower input-output costs and higher efficiencies. (Although there is considerable uncertainty in future costs, we show that, for the GB examples of Section [sec:appl-gb-energy] and on the basis of those costs given by , Figure 1.6, or by , it is likely that the use of an appropriate mixture of storage technologies would result in cost savings of the order of many billions of pounds, compared with the use of the most economical single technology. In particular, focusing on those technologies which minimise capacity costs alone leads to technology mixes which, when power costs are added, are extremely expensive—again see .) There now arise the questions of how such storage may be economically dimensioned, and of how it may be managed—the ability to answer the former question depending on having a sufficiently good understanding of the answer to the latter. The problem of managing, or scheduling, any given set of stores is that of deciding by how much each individual store should be charged or discharged at each successive point in time in order to best manage the generation-demand (im)balance—usually with the objective of minimising total unmet demand, or *unserved energy*, over some given period of time. In this context, usually those stores corresponding to any given technology may be treated as a single store provided their capacity-to-power ratios are approximately equal (see Section [sec:model]). However, the scheduling problem is a real-time problem, and in deciding which storage technology to prioritise at any given point in time, it is difficult to attempt to classify the current state of the generation-demand balance as representing short, medium, or long-term variation. Within the existing literature and in assessing likely GB storage needs, use a heuristic algorithm to attempt such a decomposition, while  use a filtering approach to choose between medium- and long-term storage. (Neither of these approaches allows for cross-charging—see Section [sec:model].) The present paper formulates the scheduling problem as one in mathematical optimisation theory in order to derive policies in which cooperation between stores happens automatically when this is beneficial, thereby enabling given generation-demand balance processes to be managed by storage systems which are considerably more compactly dimensioned in their power requirements in particular (see also ). Section [sec:model] of the paper defines the relevant mathematical model for the management of multiple stores. This incorporates capacity and rate (power) constraints, together with round-trip efficiencies, and allows for entirely general scheduling policies. Section [sec:nature-optim-solut] develops the relevant mathematics for the identification of optimal policies, when the objective is the minimisation of cumulative unserved energy to each successive point in time. We show that it is sufficient to consider policies that are *greedy* in an extended sense defined there. We further show that, at each successive point in time, the scheduling problem may be characterised as that of maximising a *value function* defined on the space of possible energy levels of the stores, and that, under appropriate conditions, the optimisation problem to be solved at that time is approximately a linear programme which has a simple, non-iterative, solution. We give conditions under which it is possible to find optimal policies, exact or approximate, from within the class of *non-anticipatory* policies, i.e. those which do not require real-time forecasting of the generation-demand balance. Section [sec:appl-gb-energy] considers an extended application to future GB energy storage needs, which, under the conditions considered, aims to be as realistic as possible. The aims here are both to demonstrate the applicability of the present theory, and further to show how one might reasonably go about solving the practical problems of identifying, dimensioning and managing future storage needs. We demonstrate the general success, and occasional limitations, of *non-anticipatory* policies as defined above. We indicate briefly how the analysis might be extended to include network constraints, if desired, although we also indicate why these are less significant in the context of dimensioning long-term storage. The concluding Section [sec:conclusions] considers some practical implications of the preceding results. Model ===== We study the management over (discrete) time of a set *S* of stores, where each store *i* ∈ *S* is characterised by four parameters (*E**i*, *Q**i*, *P**i*, *η**i*) as described below. For each store *i* ∈ *S*, we let *s**i*(0) be the initial level of energy in store *i* and *s**i*(*t*) be the level of energy in store *i* at (the end of) each subsequent time *t* ≥ 1. Without loss of generality and for simplicity of presentation of the necessary theory, we make the convention that the level of energy in each store at any time is measured by that volume of energy that it may ultimately supply, so that, within the model, any *(round-trip) inefficiency* of the store is accounted for at the input stage. While accounting for such inefficiency is essential to our modelling and results, we assume that energy, once stored, is not further subject to significant time-dependent *leakage*. However, the theory of the present paper would require only minor adjustments to incorporate such time-dependent leakage. The successive levels of energy in each store *i* satisfy the recursion *s**i*(*t*) = *s**i*(*t* − 1) + *r**i*(*t*),   *t* ≥ 1,  where *r**i*(*t*) is the rate (positive or negative) at which energy is added to the store *i* at the time *t*. Each store *i* ∈ *S* is subject to *capacity constraints* 0 ≤ *s**i*(*t*) ≤ *E**i*,   *t* ≥ 0,  so that *E**i* > 0 is the *capacity* of store *i* (again measured by the volume of energy it is capable of serving) and *rate constraints*  − *P**i* ≤ *r**i*(*t*) ≤ *η**i**Q**i*,   *t* ≥ 1. Here *P**i* > 0 is the (maximum) *output rate* of the store *i*, while *Q**i* > 0 is the (maximum) rate at which externally available energy may be used for *input* to the store, with the resulting rate at which the store fills being reduced by the *round-trip efficiency* *η**i* of the store, where 0 < *η**i* ≤ 1. Given the vector *s*(0) = (*s**i*(0),  *i* ∈ *S*) of the initial levels of energy in the stores, a *policy* for the subsequent management of the stores is a specification of the vector of rates *r*(*t*) = (*r**i*(*t*),  *i* ∈ *S*), for all times *t* ≥ 1; equivalently, from , it is a specification of the vector of store levels *s*(*t*) = (*s**i*(*t*),  *i* ∈ *S*), for all times *t* ≥ 1. (The question of how a policy may be *chosen* in real-time applications is considered in Section [sec:nature-optim-solut].) The stores are used to manage as far as possible a *residual energy process* (*re*(*t*),  *t* ≥ 1), where, for each time *t*, a *positive* value of *re*(*t*) corresponds to surplus energy available for *charging* the stores, subject to losses due to inefficiency, and a *negative* value of *re*(*t*) corresponds to energy *demand* to be met as far as possible from the stores. In applications, the residual energy *re*(*t*) is usually the surplus of generation over demand at each successive time *t*. For any time *t*, given the vector of rates *r*(*t*) = (*r**i*(*t*),  *i* ∈ *S*), define the *imbalance* *u*(*t*) by *u*(*t*) = *re*(*t*) − ∑*i*: *r**i*(*t*) < 0*r**i*(*t*) − ∑*i*: *r**i*(*t*) ≥ 0*r**i*(*t*)/*η**i*. We shall require also that that the policy defined by the rate vectors *r*(*t*), *t* ≥ 1, is such that, at each successive time *t*, *re*(*t*) ≥ 0  ⇒  *u*(*t*) ≥ 0,  so that, at any time *t* when there is an energy surplus, the net energy input into the stores, before losses due to round-trip inefficiency, cannot exceed that surplus; the quantity *u*(*t*) is then the *spilled energy* at time *t*. Similarly, we shall require that *re*(*t*) ≤ 0  ⇒  *u*(*t*) ≤ 0,  so that, at any time *t* when there is an energy shortfall, i.e. a positive net energy demand to be met from stores, the net energy output of the the stores does not exceed that demand; under the constraint , the quantity  − *u*(*t*) is then the *unserved energy* at time *t*. (It is not difficult to see that, under any reasonable objective for the use of the stores to manage the residual energy process—including the minimisation of total unserved energy as discussed below—there is nothing to be gained by overserving energy at times *t* such that *re*(*t*) ≤ 0.) We shall say that a policy is *feasible* for the management of the stores if, for each *t* ≥ 1, that policy satisfies the above relations –. For any feasible policy, define the total unserved energy *ue*(*t*) to any time *t* to be the sum of the unserved energies at those times *t*ʹ ≤ *t* such that *re*(*t*ʹ) ≤ 0, i.e. *ue*(*t*) =  − ∑*t*ʹ ≤ *t* :  *re*(*t*ʹ) ≤ 0*u*(*t*ʹ) = ∑*t*ʹ ≤ *t*max(0,  − *u*(*t*ʹ)),  where the second equality above follows from  and . Our objective is to determine a feasible policy for the management of the stores so as to minimise the total unserved energy over (some specified period of) time. It is possible that, at any time *t*, some store *i* may be *charging* (*r**i*(*t*) > 0) while some other store *j* is simultaneously *discharging* (*r**j*(*t*) < 0). We refer to this as *cross-charging*—although the model does not of course identify the routes taken by individual electrons. Although, in the presence of storage inefficiencies, cross-charging is wasteful of energy, it is nevertheless occasionally effective in enabling a better distribution of energy among stores and avoiding the situation in which energy may not be served at a sufficient rate because one or more stores are empty. We make also the following observation. Suppose that some subset *S*ʹ of the set of stores *S* is such that the stores *i* ∈ *S*ʹ have common efficiencies *η**i* and common capacity-to-power ratios *E**i*/*P**i* and *E**i*/*Q**i*. Then it is clear that these stores may be optimally managed by keeping the fractional storage levels *s**i*(*t*)/*E**i* equal across *i* ∈ *S*ʹ and over all times *t*, so that the stores in *S*ʹ effectively behave as a single large store with total capacity ∑*i* ∈ *S*ʹ*E**i* and total input and output rates ∑*i* ∈ *S*ʹ*Q**i* and ∑*i* ∈ *S*ʹ*P**i* respectively. This is relevant when, as in the application of Section [sec:appl-gb-energy], we wish to consider the scheduling and dimensioning of different storage technologies in order to obtain an optimal mix of the latter. Then, for this purpose, it is reasonable to treat—to a good approximation—the storage to be provided by any one technology as constituting a single large store. Nature of optimal policies ========================== We continue to take as our objective the minimisation of total unserved energy over some given period of time. We characterise desirable properties of policies for the management storage, and show how at least approximately optimal policies may be determined. In applications, the residual energy process to be managed is not generally known in advance (so ruling out, e.g., the use of straightforward linear programming approaches) and policies must be chosen dynamically in response to evolving information about that process. Within our discrete-time setting, the information available for decision-making at any time *t* will generally consist of the vector of store levels *s*(*t* − 1) = (*s**i*(*t* − 1),  *i* ∈ *S*) at the end of the preceding time period (equivalently the start of the time period *t*) together with the current value *re*(*t*) of the residual energy process at the time *t*. However, this information may be supplemented by some, necessarily probabilistic, prediction (however obtained) of the evolution of the residual energy process subsequent to time *t*. We shall be particularly interested in identifying conditions under which it is *sufficient* to consider (feasible) policies in which the decision to be made at any time *t*, i.e. the choice of rates vector *r*(*t*), depends *only* on *s*(*t* − 1) and *re*(*t*), thereby avoiding the need for real-time prediction of the future residual energy process. Such policies are usually referred to as *non-anticipatory*, or without *foresight*. Section [sec:greedy-policies] below defines *greedy* policies and shows that it is sufficient to consider such policies. Section [sec:non-pred-polic] discusses conditions under which a (greedy) optimal, or approximately optimal, policy may be found from within the class of non-anticipatory policies. Section [sec:value-functions] shows that the immediate optimisation problem to be solved at each successive time *t* may be characterised as that of maximising a *value function* defined on the space of possible store (energy) levels, and identifies conditions under which this latter problem is approximately a linear programme—with a particularly simple, non-iterative, solution. Greedy policies --------------- We define a *greedy* policy to be a feasible policy in which, at each successive time *t* ≥ 1, and given the levels *s*(*t* − 1) of the stores at the end of the preceding time period, [-] if the residual energy *re*(*t*) ≥ 0, i.e. there is energy available for charging the stores at time *t*, then there is no possibility to increase any of the rates *r**i*(*t*), *i* ∈ *S* (without decreasing any of the others), and so further charge the stores, while keeping the policy feasible; equivalently, we require that if the *spilled energy* *u*(*t*) > 0 (where *u*(*t*) is defined by ), then *r**i*(*t*) = min(*E**i* − *s**i*(*t* − 1),  *η**i**Q**i*) for all *i* ∈ *S*; thus the spilled energy *u*(*t*) at time *t* is minimised. if the residual energy *re*(*t*) < 0, i.e. there is net energy demand at time *t*, then there is no possibility to decrease any of the rates *r**i*(*t*), *i* ∈ *S* (without increasing any of the others), and so further serve demand, while keeping the policy feasible; equivalently, we require that if the *unserved energy*  − *u*(*t*) > 0, then *r**i*(*t*) =  − min(*s**i*(*t* − 1),  *P**i*) for all *i* ∈ *S*; thus the unserved energy  − *u*(*t*) at time *t* is minimised. Note that if *re*(*t*) = 0 at time *t*, then, for a feasible policy, it is necessarily the case, from  and , that the imbalance *u*(*t*) = 0. Proposition [proposition:greedy] and its corollary below generalise a result of  (in that case for a single store which can only discharge) to show that, under the objective of minimising the total unserved energy, it is sufficient to consider greedy policies. [proposition:greedy] Any feasible policy may be modified to be greedy while remaining feasible and while continuing to serve as least as much energy to each successive time *t*. Further, if the original policy is non-anticipatory, the modified policy may be taken to be non-anticipatory. Proposition [proposition:greedy] is intuitively appealing: at those times *t* such that *re*(*t*) ≥ 0, there is no point in withholding energy which might be used for charging any store, since the only possible “benefit” of doing so would be to allow further energy—not exceeding the amount originally withheld—to be placed in that store at a later time. Similarly, at those times *t* such that *re*(*t*) < 0, there is no point in withholding energy in any store which might be used to reduce unserved energy, since the only possible “benefit” of doing so would be to allow additional demand—not exceeding that originally withheld by that store—to be met by that store at a later time. A formal proof of Proposition [proposition:greedy] is given in the Appendix. Note that greedy policies may involve cross-charging (see Section [sec:model]). Proposition [proposition:greedy] has the following corollary. [corollary:1] Suppose that the objective is the minimisation of unserved energy over some given period of time. Then there is an optimal policy which is greedy. Further, within the class of non-anticipatory policies there is a greedy policy which is optimal within this class. We remark that under objectives other than the minimisation of total unserved energy, optimal policies may fail to be greedy. For example, if unserved energy were costed nonlinearly, or differently at different times, then at certain times it might be better to retain stored for more profitable use at later times—see, for example, . Non-anticipatory policies ------------------------- There are various conditions (see below) under which the optimal policy may be taken to be not only greedy (see Proposition [proposition:greedy]) but also *non-anticipatory* as defined above. We are therefore led to consider whether it is sufficient in applications to consider non-anticipatory policies—at least to obtain results which are at least approximately optimal, and to design and dimension storage configurations. Two such *non-anticipatory* policies which work well under different circumstances are: [-] The *greedy greatest-discharge-duration-first* (GGDDF) policy (see ) is a storage discharge policy for managing a given residual energy process which is negative (i.e. there is positive energy demand) over a given period of time, with the aim of minimising total unserved energy. It is defined by the requirement that, at each time *t*, stores are prioritised for discharging in order of their *residual discharge durations*, where the residual discharge duration of a store *i* at any time is defined as the energy in that store at the start of time divided by its maximum discharge rate *P**i*. This non-anticipatory policy is designed to cope with rate constraints and to avoid as far as possible the situation in which there are times at which there is sufficient total stored energy, but this is located in too few stores. It is optimal among policies which do not involve cross-charging, and more generally under the conditions discussed in . As also discussed there, it may be extended to situations where the residual energy process is both positive and negative. The *greatest-round-trip-efficiency first* (GRTEF) policy is a greedy policy which is designed to cope with round-trip inefficiency: stores are both charged and discharged—in each case to the maximum feasible extent—in decreasing order of their efficiencies and no cross-charging takes place. *In the absence of output rate constraints*, the GRTEF policy may be shown to be optimal: straightforward coupling arguments, similar to those used to prove Proposition [proposition:greedy], show that, amongst greedy policies, the GRTEF policy maximises the total stored energy ∑*i* ∈ *S**s**i*(*t*) at any time, so that energy which may be served under any other policy may be served under this policy. In practice, a reasonable and robust policy might be to use the GRTEF policy whenever no store is close to empty, and otherwise to switch to the GGDDF policy. However, there is a need to find the right balance between these two policies, and also to allow for the possibility of cross-charging where this might be beneficial. We therefore look more generally at non-anticipatory policies. The choice of such policies is informed both by some considerations from dynamic programming theory and by the observations above. Value functions --------------- Standard dynamic programming theory (see, e.g. ) shows that, at any time *t*, given a stochastic description of the future evolution of the residual energy process, an optimal decision at that time may be obtained through the computation of a *value function* *V**t*(*s*) defined on the set of possible states *s* = (*s**i*,  *i* ∈ *S*) of the stores, where each *s**i* = *s**i*(*t* − 1) is the level of energy in store *i* ∈ *S* at the start of time *t*. The quantity *V**t*(*s*) may be interpreted as the future *value* of having the energy levels of the stores in state *s* at time *t*, relative to their being instead in some other reference state, e.g. state 0, where *value* is the negative of *cost* as measured by expected future unserved energy. Then the optimal decision at any time *t* is that which maximises the value of the resulting state, less the cost of any energy unserved at time *t*. In the present problem, such a stochastic description is generally unavailable. However, the value function might reasonably be estimated from a sufficiently long residual energy data series—typically of at least several years duration—especially if one is able to assume (approximate) time-homogeneity of the above stochastic description. The latter assumption essentially corresponds, over sufficiently long time periods, to the use of a value function *V**t*(*s*) = *V*(*s*) which is independent of time *t* and to the use of a non-anticipatory scheduling policy (however, see below). As previously indicated, we make the convention that the state *s**i* of each store *i* denotes the amount of energy which it is able to serve—so that (in)efficiency losses are accounted for at the input stage. At any time *t*, and given a stochastic description as above, the value function *V**t*(*s*) may be computed in terms of absorption probabilities (see, e.g. ). For each *t*, let *v**i**t*(*s*) be the partial derivative of the value function *V**t*(*s*) with respect to variation of the level *s**i* of each store *i* ∈ *S*. Standard probabilistic coupling arguments (which we do not give in detail since they are not formally required here) show that, for each *i* ∈ *S*, *v**i**t*(*s*) lies between 0 and 1 and is decreasing is *s**i*. (For example, the positivity of *v**i**t*(*s*) is simply the *monotonicity* property that one can never be worse off by having more stored energy—see Section [sec:model]). We assume that changes in store energy levels are sufficiently small over each single time step *t* that changes to the value function may be measured using the above partial derivatives. Then the above problem of scheduling the charging or discharging of the stores over the time step *t* becomes that of choosing *feasible* rates *r*(*t*) = (*r**i*(*t*),  *i* ∈ *S*) so as to maximise ∑*i* ∈ *S**v**i**t*(*s*)*r**i*(*t*) − max(0,  − *u*(*t*)),  where *s**i* = *s**i*(*t* − 1) is again the level of each store *i* ∈ *S* at the end of the preceding time step *t* − 1, and where the second term in ) is the unserved energy at time *t* (see Section [sec:model]). Thus, under the above linearisation, the scheduling problem at each time *t* becomes a linear programme. When, at (the start of) any time *t*, the state of the stores is given by *s* = *s*(*t* − 1), we shall say that any store *i* ∈ *S* has *charging priority* over any store *j* ∈ *S* if *η**i**v**i**t*(*s*) > *η**j**v**j**t*(*s*), and that any store *i* ∈ *S* has *discharging priority* over any store *j* ∈ *S* if *v**i**t*(*s*) < *v**j**t*(*s*). Proposition [proposition:lp] below is again intuitively appealing; we give a formal proof in the Appendix. [proposition:lp] When the objective is the minimisation of total unserved energy over some given period of time, then, under the above linearisation, at each time *t* and with *s* = *s*(*t* − 1), the optimal charging, discharging and cross-charging decisions are given by the following procedure: [-] *when charging, i.e. if *re*(*t*) ≥ 0*, charge the stores in order of their charging priority, charging each successive store as far as permitted (the minimum of its input rate and its residual capacity) until the energy available for charging at time *t* is used as far as possible—any remaining energy being *spilled*; *when discharging, i.e. if *re*(*t*) < 0*, discharge the stores in order of their discharging priority, discharging each successive store as far as permitted (the minimum of its output rate and its available stored energy) until the demand at time *t* is met as fully as possible—any remaining demand being *unserved energy*; *subsequent to either of the above*, choose pairs of stores (*i*, *j*) in succession by, at each successive stage, selecting store *i* to be the store with the highest discharging priority which is still able to supply energy at the time *t* and selecting store *j* to be the store with the highest charging priority which is still able to accept energy at the time *t*; for each such successive pair (*i*, *j*), provided that *v**i**t*(*s*) < *η**j**v**j**t*(*s*),  *cross-charge* as much energy as possible from store *i* to store *j*. Note that the above priorities are such that this process necessarily terminates on the first occasion such that the condition  fails to be satisfied, and further that no cross-charging can occur when *re*(*t*) ≥ 0 and there is spilled energy, or when *re*(*t*) < 0 and there is unserved energy. The pairing of stores for cross-charging in the above procedure is entirely notional, and what is important is the policy thus defined. However, when efficiencies are low, cross-charging occurs infrequently. In the examples of Section [sec:appl-gb-energy], we consider *time-homogeneous* value function derivatives of the form *v**i**t*(*s*) = exp( − *λ**i**s**i*/*P**i*),  *i* ∈ *S*,  essentially corresponding, as above, to the use of non-anticipatory scheduling algorithms. (However, data limitations—see the analysis of Section [sec:appl-gb-energy]—mean that we use a single, extremely long, residual energy dataset of 324,360 hourly observations both to estimate the parameters *λ**i* and to examine the effectiveness of the resulting policies. Hence, the resulting scheduling algorithms might be regarded as having, at each successive point in time, some very mild anticipation of the future evolution of the residual energy process. Within the present exploratory analysis this approach seems reasonable.) The expression  is an approximation, both in its assumption that, for each *i* ∈ *S*, the partial derivative *v**i**t*(*s*) depends only on the state *s**i* of store *i*, and in the assumed functional form of the dependence of *v**i**t*(*s*) on *s**i*. The former assumption is equivalent to taking the value function as a sum of separate contributions from each store (a reasonable first approximation), while probabilistic large deviations theory  suggests that, under somewhat idealised conditions, when the mean residual energy is positive, the functions *v**i**t*(*s*) do decay exponentially. However, we primarily justify the use of the relation  in part by the arguments below, and in part by its practical effectiveness—see the examples of Section [sec:appl-gb-energy]. Recall that what are important are the induced decisions, as described above, on the storage configuration space. In particular, when the stores are under pressure and hence discharging, it follows from the definition of discharging priority above that it is only the ratios of the parameters *λ**i* which matter, except only for determining the extent to which cross-charging should take place. Taking *λ**i* = *λ* for all *i* and for some *λ* defines a policy which, when discharging, corresponds to the use of the GGDDF policy, supplemented by a degree of cross-charging which depends on the absolute value of the parameter *λ*. The ability to further adjust the relative values of the parameters *λ**i* between stores allows further tuning to reflect their relative efficiencies; in particular, for a given volume of stored energy, increasing the efficiency of a given store *i* ∈ *S* increases the desirability of having that energy stored in other stores and reserving more of the capacity of store *i* for future use—something which can be effected by increasing the parameter *λ**i*. Application to GB energy storage needs ====================================== In this section we give an extended example of the application of the preceding theory to the problem of dimensioning and scheduling future GB energy storage needs within a net-zero environment. Our primary aim is to illustrate the practical applicability of the theory. We also aim to show how, given also cost data, it might be used to assist in storage dimensioning. We are further concerned with the extent to which it is sufficient to consider non-anticipatory scheduling policies. A detailed description of the dimensioning problem in the GB context, together with details of all our storage, demand and renewable generation data, including storage costs, is given by —work prepared in support of the Royal Society working party on future GB energy storage. (That paper uses a rather heuristic scheduling algorithm, which occasionally leads to very high total power requirements.) Additional discussion is given in the Royal Society report itself . We consider here a GB 2050 net-zero scenario, also considered in . In this scenario heating and transport are decarbonised, in line with the UK’s 2050 net-zero commitment, thereby approximately doubling current electricity demand to 600 TWh per year (see ), and all electricity generation is renewable and provided by a mixture of 80% wind and 20% solar generation. (In particular, it is assumed that there is no fossil-fuel based generation, even with carbon-capture-and-storage.) It is further assumed that there is a 30% level of generation overcapacity—corresponding to total renewable generation of 780 TWh per year on average. The above wind-solar mix and level of generation overcapacity are those used in , and are approximately optimal on the basis of the generation and cost data considered there. We also consider, more briefly, the effect on storage dimensioning of a reduced level of overcapacity of 25%. However, on the basis of the costs given by , the 5% reduction in generation overcapacity (30 TWh) saves $15 bn in generation costs, with the consequence that, in all the examples below, the 30% level of overcapacity is here more economic. The lower level of overcapacity is included for comparison purposes. In the application of this section, we depart from our earlier convention (made for mathematical simplicity) of notionally accounting for all round-trip inefficiency at the input stage. We instead split the efficiency *η**i* of any store *i* by taking both the input and output efficiencies to be given by *η**i*0.5. This revised convention increases both the notional volumes of energy within any store *i* and the notional capacity of the store *i* by a factor *η**i*− 0.5. This is in line with most of the applied literature on energy storage needs and makes our storage capacities below directly comparable with those given elsewhere. #### Generation and demand data. We use a dataset consisting of 37 years of hourly “observations” of wind generation, solar generation and demand. The wind and solar generation data are both based on the 37-year (1980–2016) reanalysis weather data of  together with assumed installations of wind and solar farms distributed across GB and appropriate to the above scenario, and with 80% wind and 20% solar generation as above. The derived generation data are scaled so as to provide on average the required level of generation overcapacity relative to the modelled demand. The demand data are taken from a year-long hourly demand profile again corresponding to the above 2050 scenario and in which there is 600 TWh of total demand; this profile was prepared by Imperial College for the UK Committee on Climate Change . As in this year-long set of hourly demand data has been recycled to provide a 37-year trace to match the generation data. (This is reasonable here as the between-years variability which may present challenges to storage dimensioning and scheduling is likely to arise primarily from the between-years variability in renewable generation. However, see also ). More details and some analysis of the above data are given in . From these data we thus obtain a 37-year hourly *residual energy* (generation less demand) process to be managed by storage. For the chosen base level of 30% generation overcapacity, Figure [fig:residualenergy30] shows a histogram and autocorrelation function of the hourly residual energy process. The large variation in the residual energy process is to be compared with the mean demand of 68.6 GW. ![image](re_30_hist) ![Histogram and autocorrelation function of hourly residual energy (30% overcapacity).](re_30_acf "fig:") [fig:residualenergy30] Some level of generation overcapacity is required, both to account for losses due to inefficiencies in storage, and to keep the required volume of storage within reasonable bounds. As the level of generation overcapacity increases, storage requirements (specifically, capacities and input rates) decrease. The consequent decrease in storage costs must be balanced against the increase in the cost of generation. In our examples below, the considered levels of generation overcapacity are 30% and 25%. However, it is useful to consider briefly the volume of storage required to manage more general levels of overcapacity. In particular, for a *single* store with given efficiency and without input or output power constraints, there is a minimum store size and a minimum initial store energy level such that the store can completely manage the above residual energy process (i.e. with no unmet demand). Figure [fig:st.size] plots, for various levels of store efficiency and on the basis of our assumed 80%-20% wind-solar mix, this minimum store size against the assumed level of overcapacity in the above residual energy process. [fig:st.size] #### Storage data and costs. As discussed in Section [sec:introduction] and in , we consider three types of storage with associated efficiencies: [-] the *short* store is intended primarily for the management of diurnal and other short-term variation, and has a low capacity requirement (see below); it is assumed that it can therefore use a technology such as Li-ion battery storage with a high efficiency, which we here take to be 0.9; the *medium* store is intended primarily for the management of variation on a timescale of days and weeks, this variation resulting from weather-related variations in generation and demand; it has very substantial capacity requirements and may require a technology such as ACAES which has a lower efficiency, which we here take to be 0.7; the *long* store is intended for the management of seasonal and between-years variation (see Section [sec:introduction]); it has an very high capacity requirement, and a power requirement which—on account of potentially high input/output costs—it is desirable to keep relatively modest; it requires a technology, such as hydrogen or similar chemical storage, which currently has a low efficiency, which we here take to be 0.4. We use storage costs from, Table 3, and given in Table [tab:storagecosts] below (with storage capacity measured according to to the convention of this section with regard to accounting for inefficiency). These costs are based on various recent studies as reported in the above paper and are estimates of likely future storage costs in 2040 if the storage technologies are applied on a large scale—current costs are considerably higher. For Li-ion batteries, the maximum input and output rates are constrained to be the same, so that power costs may be associated with input power. However, there is huge uncertainty as to future storage costs and so the examples below are not to be read as providing recommendations about competing storage technologies. [ht] | | | | | | --- | --- | --- | --- | | | capacity | output power | input power | | | ($ per KWh) | ($ per KW) | ($ per KW) | | *long* (hydrogen) | 0.8 | 429 | 858 | | *medium* (ACAES) | 9.0 | 200 | 200 | | *short* (Li-ion) | 100.0 | 0 | 180 | [tab:storagecosts] Unit capacity costs decrease dramatically (by some orders of magnitude) as we move from the *short*, to the *medium*, to the *long* store, while unit power (rate) costs vary, again considerably, in the opposite direction. The aim in dimensioning and scheduling storage must therefore be to arrive at a position in which the *long* store is meeting most of the total capacity requirement, while as much as is reasonably possible of the total power requirement is being met by the *medium* and *short* stores. We treat GB as a single geographical node, ignoring network constraints. This is in line with most current studies of GB *long-term* storage needs, see, e.g. , and in line with the annual Electricity Capacity Reports produced by the GB system operator . As at present, future network constraints are unlikely to be continuously binding over periods of time in excess of a few hours, or a day or two at most, and are primarily likely to affect and to increase short-term storage requirements. (The effect of such constraints on the model of the present paper would be to add further linear constraints on the input and output rates of the stores. The general theory given in the preceding sections would continue to be applicable.) We take the reliability standard to be given by 24 GWh per year unserved energy and optimise scheduling and dimensioning subject to to constraint that this standard is met. This results in an average *number of hours* per year in which there is unserved energy which is roughly in line with the current GB standard of a maximum of 3 such hours per year. However, modest variation of the chosen reliability standard makes very little difference to our conclusions. Example [ex:1] below considers a single store. In the remaining examples we *schedule* storage using time-homogeneous value function derivatives *v**i*(*s*) given by  (with *s* defined as there to be the volume of stored energy which may be output). Thus, as previously discussed, the scheduling is almost completely non-anticipatory. We consider also the optimality of the scheduling algorithms used. We take the stores to be initially full. However, in all our examples, stores fill rapidly regardless of their initial energy levels, and these levels are in general independent of their initial values by the end of the first year of the 37-year period considered. [ex:1] *Single long (hydrogen) store with efficiency 0.4.* We first consider the management of the residual energy process by a single store, optimally dimensioned with respect to cost. If a single store is to be used, then, of the technologies considered here and on the basis of the present, *as yet very uncertain*, costs, a hydrogen store is the only economic possibility—see also  The unserved energy is clearly a decreasing function of each of the store capacity *E*, the maximum input power *Q* and the maximum output power *P*. For any given value of *P*, we may thus easily minimise the overall cost over (*E*, *Q*). It then turns out—unsurprisingly given the stringent reliability standard—that the overall cost is here minimised by taking *P* to be the minimum possible value (115.9 GW or 116.6 GW at 30% or 25% generation overcapacity) such that the given reliability standard of 24 GWh per year is satisfied. Table [tab:sl] shows the optimal storage dimensions and associated costs for both the levels of generation overcapacity considered in these examples. [ht] | | capacity | output power | input power | total | | --- | --- | --- | --- | --- | | size | 120.4 TWh | 115.9 GW | 80.0 GW | | | cost ($ bn) | 96.3         | 49.7       | 68.6       | 214.7 | | | capacity | output power | input power | total | | --- | --- | --- | --- | --- | | size | 167.9 TWh | 116.6 GW | 85.3 GW | | | cost ($ bn) | 134.3         | 50.0       | 73.2       | 257.5 | [tab:sl] These store capacities are larger than those suggested by Figure [fig:st.size], where the maximum store input powers *Q* were unconstrained: on the basis of the present costs, it is in each case more economic to reduce *Q* at the expense of allowing the store capacity *E* to increase. The total cost of storage required to manage 25% generation overcapacity is $42.8 bn greater than the corresponding cost at 30% overcapacity—making the 30% level of overcapacity more economic on the basis of the storage and generation costs discussed above. Figure [fig:sl.eu] plots cumulative unserved energy against time for each of the two levels of generation overcapacity. These two processes are here essentially identical. In each case, the store never completely empties and so unserved energy occurs only at those times at which the output power *P* of the store is insufficient to serve demand. [fig:sl.eu] Figure [fig:sl.sl] shows the corresponding processes formed by the successive energy levels within the store. A substantial fraction of the store capacity is needed solely to manage the single period of large shortfall in the residual energy process occurring at around 275,000 hours into the 37-yr (324,360 hour) period studied. This underlines the importance of using a residual energy time-series which is sufficiently long to capture those events such as sustained wind droughts which only occur perhaps once every few decades—see also . [fig:sl.sl] [ex:2] *Long (hydrogen) store with efficiency 0.4 plus medium (ACAES) store with efficiency 0.7.* In this example we show that, again on the basis of the cost data used here and the considered levels of generation overcapacity, extremely large savings (of the order of tens of billions of dollars) are to be made by the use of a suitable mixture of storage technologies. We choose *medium* (ACAES) store dimensions as below: for both levels of generation overcapacity, some numerical experimentation shows these to be at least close to optimal with respect to overall cost minimisation. Then, given these *medium* store dimensions, and subject to the given reliability standard, the *long* (hydrogen) store may be optimally dimensioned—given the use of the value-function based scheduling algorithm, and again to a very good approximation—as previously. Table [tab:slm] shows the optimal storage dimensions and associated costs, again for both 30% and 25% generation overcapacity. Observe that, in each case, the combined output power of the two stores is only slightly greater than that of the single store of Example [ex:1]. [ht] l|r|rrrr & capacity & output power & input power & total cost *long* & size & 72.8 TWh & 96.2 GW & 53.3 GW & store & cost ($ bn) & 58.2         & 41.3       & 45.7       & 145.2 *medium* & size & 2.5 TWh & 21.0 GW & 21.1 GW & store & cost ($ bn) & 22.5         & 4.2       & 4.2       & 30.9 &&&& --- 176.2 l|r|rrrr & capacity & output power & input power & total cost *long* & size & 79.3 TWh & 81.0 GW & 57.5 GW & store & cost ($ bn) & 63.4         & 34.7       & 49.3       & 147.5 *medium* & size & 4.5 TWh & 40.0 GW & 40.0 GW & store & cost ($ bn) & 40.5         & 8.0       & 8.0       & 56.5 &&&& --- 204.0 [tab:slm] The reason for the very large costs savings, relative to the use of a single storage technology and at both levels of generation overcapacity, is as follows. The low efficiency (0.4) of the *long* (hydrogen) store means that, when used on its own, its capacity is necessarily much greater than would have been the case had its efficiency been higher (see also Figure [fig:st.size]). The greater efficiency (0.7) of the very much smaller *medium* (ACAES) store introduced in this example allows it to be used to cycle rapidly (see Figure [fig:slm.sl]), serving a disproportionate share of the demand in relation to its capacity (see below), and thereby *greatly* reducing the capacity requirement for the *long* store. Note also the appreciably greater cost saving of $53.3 bn achieved by the introduction of the *medium* (ACAES) store at 25% generation overcapacity than the corresponding saving of $38.5 bn at 30% generation overcapacity. The parameters *λ**i* of the scheduling algorithm (equation ) are given by (*λ**l*, *λ**m*) = (0.0011, 0.01) per hr and by (*λ**l*, *λ**m*) = (0.001, 0.01) per hr respectively for 30% and 25% generation overcapacity. In each case the annual unserved energy just meets the required reliability standard of 0.024 TWh per year. The average annual volumes of energy served externally, i.e. to meet demand, by the *long* and *medium* stores are 47.6 TWh and 35.9 TWh respectively in the case of 30% overcapacity, and 38.7 TWh and 50.6 TWh respectively in the case of 25% overcapacity—with, in this example, negligible extra energy being used for cross-charging. Thus, in both cases, the much smaller *medium* store serves a comparable volume of energy to the *long* store. For 30% generation overcapacity, Figure [fig:slm.eu] plots cumulative unserved energy against time, together with the corresponding process in which there is only unserved energy to the extent that demand exceeds the combined output power (117.2 GW) of the two stores; this latter process providing a lower bound on the unserved energy achievable. The corresponding plot for 25% overcapacity is very similar. There is thus only one significant occasion (at around 150,000 hours) on which, for the original fully constrained storage system, there is unserved energy over and above that determined by the power constraint; this is the result of the *medium* store emptying and the *long* store then being unable on its own to serve energy at the required rate. Whether different (anticipatory) management of the stores, in the period immediately preceding this occasion, could have avoided this remains an open question, but it is clear that, in any case and in both examples, the stores are very close to being optimally controlled. [fig:slm.eu] Again, for the case of 30% generation overcapacity, Figure [fig:slm.sl] plots the percentage levels of energy in store during a two-year period, starting at time 265,000 hours and surrounding the one point in time at which the *long* store comes very close to emptying. The corresponding plot for 25% generation overcapacity is again very similar. In each case, the *medium* store cycles rapidly, thereby using its higher efficiency to greatly reduce the capacity and input rate requirements on the *long* store. It nevertheless generally reserves about half its capacity so that it is available to assist in any “emergency” in which the demand exceeds the output power of the *long* store alone. The exception to this occurs at those times when the *long* store is itself close to emptying, and when the *medium* store must therefore work harder to further relieve the pressure on the *long* store. [fig:slm.sl] [ex:3] *Long (hydrogen) store with efficiency 0.4 plus short (Li-ion) store with efficiency 0.9.* In the context of long-term GB storage needs, a necessarily relatively small *short* store (Li-ion battery) can probably only make a relatively modest contribution. Analogously to Example [ex:2], we here explore the extent to which it is possible for it to assist in the provision of storage primarily provided by a *long* (hydrogen) store. As in Example [ex:2], we choose *short* (Li-ion) store dimensions which appear to work well with respect to overall cost minimisation, subject here to equal input and output power ratings—see the discussion above. (Again, we do not systematically optimise the *short* store dimensions, but considerable experimentation fails to produce any further decrease in the overall costs below). Given the *short* store dimensions, and subject to the given reliability standard, the *long* (hydrogen) store may again be optimally dimensioned—given the use of the value-function based scheduling algorithm, and to a very good approximation—as previously. Table [tab:sls] shows storage dimensions and associated costs, again for both 30% and 25% generation overcapacity. These results are to be compared with those of Table [tab:sl]. What is remarkable is that, for both levels of generation overcapacity, very large reductions in the capacity of the *long* (hydrogen) store are achieved through the introduction of a *short* (Li-ion) store of *very* small capacity. This is again primarily achieved through constant rapid cycling by the *short* store so as to exploit its much greater efficiency—see Figure [fig:sls.sl] below. The total cost savings of $6.3 bn and $9.9 bn respectively, relative to the use of a single *long* store, are similarly noteworthy. [ht] l|r|rrrr & capacity & output power & input power & total cost *long* & size & 101.2    TWh & 115.9 GW & 77.5 GW & store & cost ($ bn) & 81.0            & 49.7       & 66.5       & 197.2 *short* & size & 0.085 TWh & 15.0 GW & 15.0 GW & store & cost ($ bn) & 8.5            & 0.0       & 2.7       & 11.2 &&&& --- 208.4 l|r|rrrr & capacity & output power & input power & total cost *long* & size & 136.8 TWh & 112.0 GW & 77.5 GW & store & cost ($ bn) & 109.4         & 48.0       & 66.5       & 224.0 *short* & size & 0.2 TWh & 20.0 GW & 20.0 GW & store & cost ($ bn) & 20.0         & 0.0       & 3.6       & 23.6 &&&& --- 247.6 [tab:sls] The parameters *λ**i* of the scheduling algorithm (equation ) are given by (*λ**l*, *λ**s*) = (0.000001, 0.1) per hr and by (*λ**l*, *λ**s*) = (0.0003, 0.1) per hr respectively for 30% and 25% generation overcapacity. The average annual volumes of energy served externally by the *long* and *short* stores are 73.8 TWh and 9.8 TWh respectively in the case of 30% overcapacity, and 73.4 TWh and 15.8 TWh respectively in the case of 25% overcapacity—again with negligible extra energy being used for cross-charging. For 30% generation overcapacity, Figure [fig:sls.sl] plots the percentage levels of energy in store during the same two-year period considered in Example [ex:2]. It is seen that the *short* (Li-ion) store here devotes *all* its capacity to cycling rapidly, using its higher efficiency to *greatly* reduce the capacity and, to a lesser extent, the input power requirements for the *long* store. The capacity costs of the *short* store are such that it is not worth further increasing its capacity so as to reserve energy to enable the reduction of the *output power* requirement of the *long* store. Hence the pattern of usage of the *short* store is here different from that of the *medium* store in the previous example. The corresponding plot for 25% generation overcapacity is again very similar. [fig:sls.sl] [ex:4] *Long (hydrogen) store with efficiency 0.4 plus medium (ACAES) store with efficiency 0.7 plus short (Li-ion) store with efficiency 0.9.* In this final example we take the set-up of Example [ex:2], i.e. *long* (hydrogen) store plus *medium* (ACAES) store, and consider whether any further overall cost reduction can be obtained by the addition of a *short* (Li-ion) store. For both 30% and 25% generation overcapacity, some experimentation shows that the storage dimensions and associated costs given in Table [tab:slms] are at least approximately optimal and lead to modest cost reductions—relative to Example [ex:2]—of $0.34 bn and $1.12 bn respectively. In each case the *short* store is relatively very small indeed; however, variation of its dimensions does not seem to assist in further reducing overall costs. Thus, of our four examples, the present three-store mix appears to be the most economical; further, on the basis of the generation costs discussed above, the 30% level of generation overcapacity yields a net saving of $12 bn in comparison with the 25% level of overcapacity. [ht] l|r|rrrr & capacity & output power & input power & total cost *long* & size & 72.2   TWh & 96.2 GW & 53.3 GW & store & cost ($ bn) & 57.8           & 41.3       & 45.7       & 144.8 *medium* & size & 2.44  TWh & 21.0 GW & 21.1 GW & store & cost ($ bn) & 22.0           & 4.2       & 4.2       & 30.4 *short* & size & 0.005 TWh & 2.0 GW & 2.0 GW & store & cost ($ bn) & 0.5            & 0.0       & 0.2       & 0.7 &&&& --- 175.8 l|r|rrrr & capacity & output power & input power & total cost *long* & size & 78.9   TWh & 81.0  GW & 57.5  GW & store & cost ($ bn) & 63.1           & 34.7        & 49.3        & 147.2 *medium* & size & 4.25  TWh & 40.0  GW & 40.4  GW & store & cost ($ bn) & 38.2           & 8.0         & 8.1        & 54.3 *short* & size & 0.010 TWh & 2.05 GW & 2.05 GW & store & cost ($ bn) & 1.0            & 0.0         & 0.4         & 1.4 &&&& --- 202.9 [tab:slms] The parameters *λ**i* of the scheduling algorithm (equation ) are given by (*λ**l*, *λ**m*, *λ**s*) = (0.001, 0.011, 0.035) per hr and by (*λ**l*, *λ**m*, *λ**s*) = (0.0009, 0.01, 0.05) per hr respectively for 30% and 25% generation overcapacity. The annual volumes of energy served externally by the *long*, *medium* and *short* stores are 47.2 TWh, 36.4 TWh and 0.012 TWh respectively in the case of 30% generation overcapacity, and 38.5 TWh, 50.7 TWh and 0.027 TWh respectively in the case of 25% generation overcapacity, again with negligible extra energy being used for cross-charging. For 30% generation overcapacity, Figure [fig:slms.sl] plots the percentage levels of energy in store during the same two-year period considered in Examples [ex:2] and [ex:3]. The behaviour of the *long* and *medium* store processes is, unsurprisingly, essentially as in Example [ex:2]. The behaviour of the *short* store is interesting. For most of the time it remains full, reserving its energy for those occasions on which it may be called on to act in an emergency. However, as the *long* and *medium* stores come close to being empty, the *short* store cycles as rapidly as possible—essentially in an attempt to prevent the former two stores actually emptying. [fig:slms.sl] Conclusions =========== Future electricity systems may well require extremely high volumes of energy storage with a mixture of storage technologies. This paper has studied the societal problems of scheduling and dimensioning such storage, with the scheduling objective of minimising total unserved energy over time, and the dimensioning objective of doing as economically as possible. We have identified properties of optimal scheduling policies and have argued that a value-function based approach is theoretically optimal. We have further shown that the optimal scheduling problem to be solved at each successive point in time reduces, to a good approximation, to a linear programme with a particularly simple solution. We have been particularly concerned to develop non-anticipatory (without foresight) scheduling policies which are robust and suitable for real-time implementation, and have demonstrated their success in practical application. However, there are very occasional situations in which a forecast of, for example, a prolonged energy drought would make it sensible to modify scheduling policies so as to maximally conserve energy. We have considered the practical application of the above theory to future GB energy storage needs, and shown, informally, how it may be used for the dimensioning of heterogeneous storage technologies. Notably, we have shown that the joint management of such technologies may greatly reduce overall costs (though the latter are as yet very uncertain). We have not considered how to effect such storage dimensioning and management within a *market* environment in which storage is privately owned and operated by players each seeking to optimise their own returns. It seems likely that, under such circumstances, the effective use of storage would require management over extended periods of time by the electricity system operator and that contractual arrangements, including the possible introduction of storage capacity markets, would have to be such as to make this possible (see  for how this might be done). Acknowledgements ================ This work was supported by Towards Turing 2.0 under the EPSRC Grant EP/W037211/1 and by the Alan Turing Institute. The author would also like to thank the Isaac Newton Institute for Mathematical Sciences for support during the programme Mathematics of Energy Systems (<https://www.newton.ac.uk/event/mes/>), when early work on this paper was undertaken. The author is grateful to Tony Roulstone and Paul Cosgrove of the University of Cambridge for many helpful discussions on GB storage needs and for making available all the data used in the analysis of this paper. He is also grateful to many other colleagues, notably Simon Tindemans of Delft University of Technology, Frank Kelly of the University of Cambridge and James Cruise, for wider discussions on the management of energy storage. Appendix: proofs ================ [Proof of Proposition [proposition:greedy]] Given any feasible policy, we show how, for each successive time *t*, the policy may be modified at each time *t*ʹ ≥ *t* in such a way that the policy becomes greedy at the time *t* and remains feasible at at each time *t*ʹ ≥ *t* (as well as at times prior to *t*), and further continues to serve at least as much energy in total to each successive time. Iterative application of this procedure over successive times *t* then finally yields a policy which is feasible and greedy at all times and which continues to serve at least as much energy in total to each successive time. (Thus, at any time *t*ʹ, the *final* modification to the original policy is obtained by a succession of the above modifications associated with the successive times *t* ≤ *t*ʹ.) Suppose that, immediately prior and immediately subsequent to the modification associated with the time *t* (which affects the storage rates and levels for those times *t*ʹ ≥ *t*), the storage rates are defined, for each time *t*ʹ, respectively by *r*(*t*ʹ) = (*r**i*(*t*ʹ),  *i* ∈ *S*) and $\hat r(t') = (\hat r\_i(t'),\,i\in S)$, with the corresponding store levels being given respectively by *s*(*t*ʹ) = (*s**i*(*t*ʹ),  *i* ∈ *S*) and $\hat s(t') = (\hat s\_i(t'),\,i\in S)$, and with the *total* unserved energy to each successive time *t*ʹ ≥ *t* being given respectively by *ue*(*t*) and $\hat{{\mathit{ue}}}(t')$ as defined by . Then the modification associated with the time *t* is defined as follows. [1.] If *re*(*t*) ≥ 0, increase (if necessary) the rates (*r**i*(*t*),  *i* ∈ *S*), at which energy is supplied to the stores at time *t* to $(\hat r\_i(t),\,i\in S)$, so that the policy becomes greedy at time *t* while remaining feasible at that time. Note that the effect of this is to increase (weakly) the store levels at time *t* so that $\hat s\_i(t) \ge s\_i(t)$, *i* ∈ *S*. For times *t*ʹ > *t* and for each *i* ∈ *S*, set $\hat r\_i(t') = \min(r\_i(t'), E\_i - s\_i(t'-1))$. Then the modified policy remains feasible and it is clear, by induction, that $\hat s\_i(t') \ge s\_i(t')$ for all *i* ∈ *S* and for all *t*ʹ ≥ *t*. Further, since *re*(*t*) ≥ 0 there is no unserved energy at time *t* and since, for *t*ʹ > *t* such that *re*(*t*ʹ) < 0, we have $\hat r\_i(t') \le r\_i(t)$, *i* ∈ *S* (implying, from , that the unserved energy  − *u*(*t*ʹ) does not increase) it follows that the total unserved energy to each successive time *t*ʹ ≥ *t* does not increase. If *re*(*t*) < 0, reduce (if necessary) the rates (*r**i*(*t*),  *i* ∈ *S*), to $(\hat r\_i(t),\,i\in S)$, so that the policy becomes greedy at time *t* while remaining feasible at that time. For times *t*ʹ > *t* and for each *i* ∈ *S*, set $$\label{eq:11} \hat r\_i(t') = \max(r\_i(t'), - s\_i(t'-1)).$$ Then the modified policy remains feasible at each time *t*ʹ > *t*. We show by induction that, for all *t*ʹ ≥ *t*, $$\label{eq:12} {\mathit{ue}}(t') - \hat{{\mathit{ue}}}(t') \ge \sum\_{i\in S} (s\_i(t') - \hat s\_i(t')).$$ For *t*ʹ = *t*, it is immediate from the definitions ,  and  (and since *re*(*t*) < 0), that  holds with equality. For *t*ʹ > *t*, assume the result  is true with *t*ʹ replaced by *t*ʹ − 1; we consider two cases: [-] if *re*(*t*ʹ) ≥ 0, then there is no unserved energy at time *t*ʹ under any feasible policy, so that the left side of  remains unchanged between times *t*ʹ − 1 and *t*ʹ, while, from , the right side of  decreases (weakly) between times *t*ʹ − 1 and *t*ʹ; thus the inequality  continues to hold at time *t*ʹ; if *re*(*t*ʹ) < 0, then, between times *t*ʹ − 1 and *t*ʹ, both the right and left sides of  increase by $r\_i(t') - \hat r\_i(t')$, so that  again continues to hold at time *t*ʹ. It also follows by induction, using , that $\hat s\_i(t') \le s\_i(t')$ for all *i* ∈ *S* and *t*ʹ ≥ *t*. Hence, from , it again follows that, under the modification associated with the time *t*, the total unserved energy to each successive time *t*ʹ ≥ *t* does not increase. To show the second assertion of the proposition, observe that, under the above construction, the greedy policy finally associated with each time *t*ʹ is defined entirely by the residual energy process (*re*(*t*),  *t* ≤ *t*ʹ) up to and including that time. [Proof of Proposition [proposition:lp]] For each time *t*, let $\hat r(t) = (\hat r\_i(t), \, i\in S)$ be the vector of rates determined by the algorithm of the proposition, and let $\hat u(t)$ be the corresponding imbalance given by . It follows from Proposition [proposition:greedy] that, when the objective is the minimisation of total unserved energy over time, it is sufficient to consider greedy policies in which, at those times *t* such that the residual energy *re*(*t*) ≥ 0 the spilled energy *u*(*t*) is minimised, and at those times *t* such that the residual energy *re*(*t*) < 0 the unserved energy  − *u*(*t*) is minimised. It is clear that, at each time *t*, the imbalance $\hat u(t)$ defined by the above algorithm achieves this minimisation in either case. Thus the problem of choosing, at each successive time *t*, a vector *r*(*t*) of feasible rates so as to maximise the expression given by  reduces to that of choosing such a vector *r*(*t*) so as to maximise ∑*i* ∈ *S**v**i**t*(*s*)*r**i*(*t*) (where, again the state vector *s* = *s*(*t* − 1)) subject to the additional constraint that the corresponding imbalance *u*(*t*) defined by  is equal to $\hat u(t)$. Assume, for the moment, that, at the given time *t*, the ordering of states by their charging or discharging priorities is in each case unique, i.e. that we do *not* have *η**i**v**i**t*(*s*) = *η**j**v**j**t*(*s*) for any *i*, *j* ∈ *S* or *v**i**t*(*s*) = *v**j**t*(*s*) for any *i*, *j* ∈ *S*. Then the above vector of rates $\hat r(t) = (\hat r\_i(t), \, i\in S)$ determined by the given algorithm is unique. Let $r(t) = \bar r(t)$ be the (or any) vector of rates which maximises ∑*i* ∈ *S**v**i**t*(*s*)*r**i*(*t*) subject to the corresponding imbalance $\bar u(t)$ being equal to $\hat u(t)$ as required above. Let $S\_+ = \{i\in S\colon \bar r\_i(t) > 0\}$ and let $S\_- = \{i\in S\colon \bar r\_i(t) < 0\}$. Then the rate vector $\bar r(t)$ satisfies the following four conditions, in each case since otherwise the above objective function ∑*i* ∈ *S**v**i**t*(*s*)*r**i*(*t*) could clearly be increased, while maintaining the given imbalance constraint $\bar u(t) = \hat u(t)$: [1.] [c:1] subject to the constraint that the total amount charged to the stores is as given by $\sum\_{i\in S\_+}\bar r\_i(t)$, both *S*+ and the individual rates $\bar r\_i(t)$, *i* ∈ *S*+, are as determined by the store charging priorities defined by the proposition; [c:2] similarly, subject to the constraint that the total amount discharged by the stores is as given by $-\sum\_{i\in S\_-}\bar r\_i(t)$, both *S*− and the individual rates $\bar r\_i(t)$, *i* ∈ *S*−, are as determined by the store discharging priorities defined by the proposition; [c:3] the condition  is satisfied for all *i* ∈ *S*−, *j* ∈ *S*+: [c:4] there are no pairs of stores *i*, *j* ∈ *S* satisfying  such that it is possible to improve the solution $\bar r(t)$ by (further) cross-charging from *i* to *j*. It is now easy to see that the above conditions [c:1]–[c:4] are sufficient to ensure that $\bar r(t)$ is precisely as determined by algorithm, i.e. that $\bar r(t) = \hat r(t)$. In the event that, at the given time *t*, the ordering of states by either their charging or discharging priorities is not unique (and so $\hat r(t)$ is not unique), it is easy to see that $\bar r(t)$, defined as above, may be adjusted if necessary—while continuing to maximise ∑*i* ∈ *S**v**i**t*(*s*)*r**i*(*t*) subject to the given imbalance constraint—so that we again have $\bar r(t) = \hat r(t)$: one standard way to do this is to perturb *v**i**t*(*s*), *i* ∈ *S*, infinitesimally so that the given $\hat r(t)$ becomes the unique solution of the scheduling algorithm, then let $\bar r(t)$ solve the given constrained maximisation problem as above, and then finally allow the perturbation to tend to zero to obtain the required result. --- 1. Heriot-Watt University and University of Edinburgh, UK.[↩](#fnref1) Scheduling and dimensioning of heterogeneous energy stores, with applications to future GB storage needs ======================================================================================================== Future “net-zero” electricity systems in which all or most generation is renewable may require very high volumes of storage, provided jointly by a number of heterogeneous technologies, in order to manage the associated variability in the generation-demand balance. We consider the problems of scheduling and dimensioning such storage. We develop a value-function based approach to optimal scheduling, and show that, to a good approximation, the problem to be solved at each successive point in time reduces to a linear programme with a particularly simple solution. We show that approximately optimal scheduling may be achieved without the need for a running forecast of the future generation-demand balance. We examine the applicability of the above theory to future GB storage needs, and discuss how it may be used to enable the most economic dimensioning of such storage, with possible savings of tens of billions of pounds, relative to the use of a single technology. **Keywords:** energy storage; economics; optimal scheduling. Introduction ============ Future electricity systems in which all or most generation is renewable, and hence highly variable, may require extremely high volumes of storage in order to manage this variability and to ensure that demand may always be met—see , and especially , for an assessment of likely future needs and costs in Great Britain, where, under a future “net-zero” carbon emissions strategy, capital costs of storage may run into many tens, or even hundreds, of billions of pounds. For the reasons we explain below, a mixture of storage technologies is likely to be required. The problems considered in the present paper are those of the scheduling and dimensioning of such storage, with the objective of meeting energy demand to a required reliability standard as economically as possible. These problems are considered from a societal viewpoint, which generally coincides with that of the electricity system operator. This is in contrast the viewpoint of a storage provider seeking to maximise profits (see  and the references therein). Insofar as the societal problem of managing energy storage is considered in the existing literature, this is mostly in the context of short-term storage used to cover occasional periods of generation shortfall, with sufficient time for recharging between such periods (see, e.g. ). The scheduling problems we consider in this paper are nontrivial in that, if storage is not managed properly, then it is likely that there will frequently arise the situation in which there is sufficient energy in storage to meet demand but it is located in too few stores for it to be possible to serve it at the required rate. The need for a solution of these scheduling problems is highlighted by the forthcoming Royal Society report  on long-term energy storage in GB. The present paper aims to supply the necessary mathematics for that, and to illustrate its application in the context of future GB storage needs. In a large system, such as that of Great Britain, which we consider further in Section [sec:appl-gb-energy], the processes of demand and of renewable generation vary on multiple timescales: on a timescale measured in hours there is diurnal variation in demand and in solar generation; on a timescale of days and weeks there is weather-related variation in demand and in most forms of renewable generation, and there is further demand variation due to weekends and holiday periods; on longer timescales there is seasonal variation in both demand and renewable generation which may extend to major differences between successive years (see  for more details). In GB, for example, it is particularly the case that some years are significantly windier than others (see ), necessitating volumes of storage of the order of tens of terawatt-hours within a future net-zero scenario—see Section [sec:appl-gb-energy] and  for similar results in the US context. Variation in the generation-demand balance may be managed by a number of different storage technologies. These vary greatly in their costs per unit of capacity and per unit of power (the maximum rate at which they may be charged or discharged), and further in their *(round-trip) efficiencies* (energy output as a fraction of energy input). In consequence, different storage technologies are typically appropriate to managing variation on these different timescales—see  for detailed comparisons and analysis in the GB context, and also the recent report  (especially Figure 1.6) for a discussion of differing technology costs and their implications. We further explore these issues in Section [sec:appl-gb-energy]. It thus seems likely that, in the management of such future electricity systems, there will be a need for a mix of storage technologies. This will be such that most of the required storage *capacity* will be provided by those technologies such as chemical storage, which are able to provide this capacity economically, while a high proportion of the *power* requirements will be met by technologies such as batteries or advanced (adiabatic) compressed air energy storage (ACAES) with much lower input-output costs and higher efficiencies. (Although there is considerable uncertainty in future costs, we show that, for the GB examples of Section [sec:appl-gb-energy] and on the basis of those costs given by , Figure 1.6, or by , it is likely that the use of an appropriate mixture of storage technologies would result in cost savings of the order of many billions of pounds, compared with the use of the most economical single technology. In particular, focusing on those technologies which minimise capacity costs alone leads to technology mixes which, when power costs are added, are extremely expensive—again see .) There now arise the questions of how such storage may be economically dimensioned, and of how it may be managed—the ability to answer the former question depending on having a sufficiently good understanding of the answer to the latter. The problem of managing, or scheduling, any given set of stores is that of deciding by how much each individual store should be charged or discharged at each successive point in time in order to best manage the generation-demand (im)balance—usually with the objective of minimising total unmet demand, or *unserved energy*, over some given period of time. In this context, usually those stores corresponding to any given technology may be treated as a single store provided their capacity-to-power ratios are approximately equal (see Section [sec:model]). However, the scheduling problem is a real-time problem, and in deciding which storage technology to prioritise at any given point in time, it is difficult to attempt to classify the current state of the generation-demand balance as representing short, medium, or long-term variation. Within the existing literature and in assessing likely GB storage needs, use a heuristic algorithm to attempt such a decomposition, while  use a filtering approach to choose between medium- and long-term storage. (Neither of these approaches allows for cross-charging—see Section [sec:model].) The present paper formulates the scheduling problem as one in mathematical optimisation theory in order to derive policies in which cooperation between stores happens automatically when this is beneficial, thereby enabling given generation-demand balance processes to be managed by storage systems which are considerably more compactly dimensioned in their power requirements in particular (see also ). Section [sec:model] of the paper defines the relevant mathematical model for the management of multiple stores. This incorporates capacity and rate (power) constraints, together with round-trip efficiencies, and allows for entirely general scheduling policies. Section [sec:nature-optim-solut] develops the relevant mathematics for the identification of optimal policies, when the objective is the minimisation of cumulative unserved energy to each successive point in time. We show that it is sufficient to consider policies that are *greedy* in an extended sense defined there. We further show that, at each successive point in time, the scheduling problem may be characterised as that of maximising a *value function* defined on the space of possible energy levels of the stores, and that, under appropriate conditions, the optimisation problem to be solved at that time is approximately a linear programme which has a simple, non-iterative, solution. We give conditions under which it is possible to find optimal policies, exact or approximate, from within the class of *non-anticipatory* policies, i.e. those which do not require real-time forecasting of the generation-demand balance. Section [sec:appl-gb-energy] considers an extended application to future GB energy storage needs, which, under the conditions considered, aims to be as realistic as possible. The aims here are both to demonstrate the applicability of the present theory, and further to show how one might reasonably go about solving the practical problems of identifying, dimensioning and managing future storage needs. We demonstrate the general success, and occasional limitations, of *non-anticipatory* policies as defined above. We indicate briefly how the analysis might be extended to include network constraints, if desired, although we also indicate why these are less significant in the context of dimensioning long-term storage. The concluding Section [sec:conclusions] considers some practical implications of the preceding results. Model ===== We study the management over (discrete) time of a set *S* of stores, where each store *i* ∈ *S* is characterised by four parameters (*E**i*, *Q**i*, *P**i*, *η**i*) as described below. For each store *i* ∈ *S*, we let *s**i*(0) be the initial level of energy in store *i* and *s**i*(*t*) be the level of energy in store *i* at (the end of) each subsequent time *t* ≥ 1. Without loss of generality and for simplicity of presentation of the necessary theory, we make the convention that the level of energy in each store at any time is measured by that volume of energy that it may ultimately supply, so that, within the model, any *(round-trip) inefficiency* of the store is accounted for at the input stage. While accounting for such inefficiency is essential to our modelling and results, we assume that energy, once stored, is not further subject to significant time-dependent *leakage*. However, the theory of the present paper would require only minor adjustments to incorporate such time-dependent leakage. The successive levels of energy in each store *i* satisfy the recursion *s**i*(*t*) = *s**i*(*t* − 1) + *r**i*(*t*),   *t* ≥ 1,  where *r**i*(*t*) is the rate (positive or negative) at which energy is added to the store *i* at the time *t*. Each store *i* ∈ *S* is subject to *capacity constraints* 0 ≤ *s**i*(*t*) ≤ *E**i*,   *t* ≥ 0,  so that *E**i* > 0 is the *capacity* of store *i* (again measured by the volume of energy it is capable of serving) and *rate constraints*  − *P**i* ≤ *r**i*(*t*) ≤ *η**i**Q**i*,   *t* ≥ 1. Here *P**i* > 0 is the (maximum) *output rate* of the store *i*, while *Q**i* > 0 is the (maximum) rate at which externally available energy may be used for *input* to the store, with the resulting rate at which the store fills being reduced by the *round-trip efficiency* *η**i* of the store, where 0 < *η**i* ≤ 1. Given the vector *s*(0) = (*s**i*(0),  *i* ∈ *S*) of the initial levels of energy in the stores, a *policy* for the subsequent management of the stores is a specification of the vector of rates *r*(*t*) = (*r**i*(*t*),  *i* ∈ *S*), for all times *t* ≥ 1; equivalently, from , it is a specification of the vector of store levels *s*(*t*) = (*s**i*(*t*),  *i* ∈ *S*), for all times *t* ≥ 1. (The question of how a policy may be *chosen* in real-time applications is considered in Section [sec:nature-optim-solut].) The stores are used to manage as far as possible a *residual energy process* (*re*(*t*),  *t* ≥ 1), where, for each time *t*, a *positive* value of *re*(*t*) corresponds to surplus energy available for *charging* the stores, subject to losses due to inefficiency, and a *negative* value of *re*(*t*) corresponds to energy *demand* to be met as far as possible from the stores. In applications, the residual energy *re*(*t*) is usually the surplus of generation over demand at each successive time *t*. For any time *t*, given the vector of rates *r*(*t*) = (*r**i*(*t*),  *i* ∈ *S*), define the *imbalance* *u*(*t*) by *u*(*t*) = *re*(*t*) − ∑*i*: *r**i*(*t*) < 0*r**i*(*t*) − ∑*i*: *r**i*(*t*) ≥ 0*r**i*(*t*)/*η**i*. We shall require also that that the policy defined by the rate vectors *r*(*t*), *t* ≥ 1, is such that, at each successive time *t*, *re*(*t*) ≥ 0  ⇒  *u*(*t*) ≥ 0,  so that, at any time *t* when there is an energy surplus, the net energy input into the stores, before losses due to round-trip inefficiency, cannot exceed that surplus; the quantity *u*(*t*) is then the *spilled energy* at time *t*. Similarly, we shall require that *re*(*t*) ≤ 0  ⇒  *u*(*t*) ≤ 0,  so that, at any time *t* when there is an energy shortfall, i.e. a positive net energy demand to be met from stores, the net energy output of the the stores does not exceed that demand; under the constraint , the quantity  − *u*(*t*) is then the *unserved energy* at time *t*. (It is not difficult to see that, under any reasonable objective for the use of the stores to manage the residual energy process—including the minimisation of total unserved energy as discussed below—there is nothing to be gained by overserving energy at times *t* such that *re*(*t*) ≤ 0.) We shall say that a policy is *feasible* for the management of the stores if, for each *t* ≥ 1, that policy satisfies the above relations –. For any feasible policy, define the total unserved energy *ue*(*t*) to any time *t* to be the sum of the unserved energies at those times *t*ʹ ≤ *t* such that *re*(*t*ʹ) ≤ 0, i.e. *ue*(*t*) =  − ∑*t*ʹ ≤ *t* :  *re*(*t*ʹ) ≤ 0*u*(*t*ʹ) = ∑*t*ʹ ≤ *t*max(0,  − *u*(*t*ʹ)),  where the second equality above follows from  and . Our objective is to determine a feasible policy for the management of the stores so as to minimise the total unserved energy over (some specified period of) time. It is possible that, at any time *t*, some store *i* may be *charging* (*r**i*(*t*) > 0) while some other store *j* is simultaneously *discharging* (*r**j*(*t*) < 0). We refer to this as *cross-charging*—although the model does not of course identify the routes taken by individual electrons. Although, in the presence of storage inefficiencies, cross-charging is wasteful of energy, it is nevertheless occasionally effective in enabling a better distribution of energy among stores and avoiding the situation in which energy may not be served at a sufficient rate because one or more stores are empty. We make also the following observation. Suppose that some subset *S*ʹ of the set of stores *S* is such that the stores *i* ∈ *S*ʹ have common efficiencies *η**i* and common capacity-to-power ratios *E**i*/*P**i* and *E**i*/*Q**i*. Then it is clear that these stores may be optimally managed by keeping the fractional storage levels *s**i*(*t*)/*E**i* equal across *i* ∈ *S*ʹ and over all times *t*, so that the stores in *S*ʹ effectively behave as a single large store with total capacity ∑*i* ∈ *S*ʹ*E**i* and total input and output rates ∑*i* ∈ *S*ʹ*Q**i* and ∑*i* ∈ *S*ʹ*P**i* respectively. This is relevant when, as in the application of Section [sec:appl-gb-energy], we wish to consider the scheduling and dimensioning of different storage technologies in order to obtain an optimal mix of the latter. Then, for this purpose, it is reasonable to treat—to a good approximation—the storage to be provided by any one technology as constituting a single large store. Nature of optimal policies ========================== We continue to take as our objective the minimisation of total unserved energy over some given period of time. We characterise desirable properties of policies for the management storage, and show how at least approximately optimal policies may be determined. In applications, the residual energy process to be managed is not generally known in advance (so ruling out, e.g., the use of straightforward linear programming approaches) and policies must be chosen dynamically in response to evolving information about that process. Within our discrete-time setting, the information available for decision-making at any time *t* will generally consist of the vector of store levels *s*(*t* − 1) = (*s**i*(*t* − 1),  *i* ∈ *S*) at the end of the preceding time period (equivalently the start of the time period *t*) together with the current value *re*(*t*) of the residual energy process at the time *t*. However, this information may be supplemented by some, necessarily probabilistic, prediction (however obtained) of the evolution of the residual energy process subsequent to time *t*. We shall be particularly interested in identifying conditions under which it is *sufficient* to consider (feasible) policies in which the decision to be made at any time *t*, i.e. the choice of rates vector *r*(*t*), depends *only* on *s*(*t* − 1) and *re*(*t*), thereby avoiding the need for real-time prediction of the future residual energy process. Such policies are usually referred to as *non-anticipatory*, or without *foresight*. Section [sec:greedy-policies] below defines *greedy* policies and shows that it is sufficient to consider such policies. Section [sec:non-pred-polic] discusses conditions under which a (greedy) optimal, or approximately optimal, policy may be found from within the class of non-anticipatory policies. Section [sec:value-functions] shows that the immediate optimisation problem to be solved at each successive time *t* may be characterised as that of maximising a *value function* defined on the space of possible store (energy) levels, and identifies conditions under which this latter problem is approximately a linear programme—with a particularly simple, non-iterative, solution. Greedy policies --------------- We define a *greedy* policy to be a feasible policy in which, at each successive time *t* ≥ 1, and given the levels *s*(*t* − 1) of the stores at the end of the preceding time period, [-] if the residual energy *re*(*t*) ≥ 0, i.e. there is energy available for charging the stores at time *t*, then there is no possibility to increase any of the rates *r**i*(*t*), *i* ∈ *S* (without decreasing any of the others), and so further charge the stores, while keeping the policy feasible; equivalently, we require that if the *spilled energy* *u*(*t*) > 0 (where *u*(*t*) is defined by ), then *r**i*(*t*) = min(*E**i* − *s**i*(*t* − 1),  *η**i**Q**i*) for all *i* ∈ *S*; thus the spilled energy *u*(*t*) at time *t* is minimised. if the residual energy *re*(*t*) < 0, i.e. there is net energy demand at time *t*, then there is no possibility to decrease any of the rates *r**i*(*t*), *i* ∈ *S* (without increasing any of the others), and so further serve demand, while keeping the policy feasible; equivalently, we require that if the *unserved energy*  − *u*(*t*) > 0, then *r**i*(*t*) =  − min(*s**i*(*t* − 1),  *P**i*) for all *i* ∈ *S*; thus the unserved energy  − *u*(*t*) at time *t* is minimised. Note that if *re*(*t*) = 0 at time *t*, then, for a feasible policy, it is necessarily the case, from  and , that the imbalance *u*(*t*) = 0. Proposition [proposition:greedy] and its corollary below generalise a result of  (in that case for a single store which can only discharge) to show that, under the objective of minimising the total unserved energy, it is sufficient to consider greedy policies. [proposition:greedy] Any feasible policy may be modified to be greedy while remaining feasible and while continuing to serve as least as much energy to each successive time *t*. Further, if the original policy is non-anticipatory, the modified policy may be taken to be non-anticipatory. Proposition [proposition:greedy] is intuitively appealing: at those times *t* such that *re*(*t*) ≥ 0, there is no point in withholding energy which might be used for charging any store, since the only possible “benefit” of doing so would be to allow further energy—not exceeding the amount originally withheld—to be placed in that store at a later time. Similarly, at those times *t* such that *re*(*t*) < 0, there is no point in withholding energy in any store which might be used to reduce unserved energy, since the only possible “benefit” of doing so would be to allow additional demand—not exceeding that originally withheld by that store—to be met by that store at a later time. A formal proof of Proposition [proposition:greedy] is given in the Appendix. Note that greedy policies may involve cross-charging (see Section [sec:model]). Proposition [proposition:greedy] has the following corollary. [corollary:1] Suppose that the objective is the minimisation of unserved energy over some given period of time. Then there is an optimal policy which is greedy. Further, within the class of non-anticipatory policies there is a greedy policy which is optimal within this class. We remark that under objectives other than the minimisation of total unserved energy, optimal policies may fail to be greedy. For example, if unserved energy were costed nonlinearly, or differently at different times, then at certain times it might be better to retain stored for more profitable use at later times—see, for example, . Non-anticipatory policies ------------------------- There are various conditions (see below) under which the optimal policy may be taken to be not only greedy (see Proposition [proposition:greedy]) but also *non-anticipatory* as defined above. We are therefore led to consider whether it is sufficient in applications to consider non-anticipatory policies—at least to obtain results which are at least approximately optimal, and to design and dimension storage configurations. Two such *non-anticipatory* policies which work well under different circumstances are: [-] The *greedy greatest-discharge-duration-first* (GGDDF) policy (see ) is a storage discharge policy for managing a given residual energy process which is negative (i.e. there is positive energy demand) over a given period of time, with the aim of minimising total unserved energy. It is defined by the requirement that, at each time *t*, stores are prioritised for discharging in order of their *residual discharge durations*, where the residual discharge duration of a store *i* at any time is defined as the energy in that store at the start of time divided by its maximum discharge rate *P**i*. This non-anticipatory policy is designed to cope with rate constraints and to avoid as far as possible the situation in which there are times at which there is sufficient total stored energy, but this is located in too few stores. It is optimal among policies which do not involve cross-charging, and more generally under the conditions discussed in . As also discussed there, it may be extended to situations where the residual energy process is both positive and negative. The *greatest-round-trip-efficiency first* (GRTEF) policy is a greedy policy which is designed to cope with round-trip inefficiency: stores are both charged and discharged—in each case to the maximum feasible extent—in decreasing order of their efficiencies and no cross-charging takes place. *In the absence of output rate constraints*, the GRTEF policy may be shown to be optimal: straightforward coupling arguments, similar to those used to prove Proposition [proposition:greedy], show that, amongst greedy policies, the GRTEF policy maximises the total stored energy ∑*i* ∈ *S**s**i*(*t*) at any time, so that energy which may be served under any other policy may be served under this policy. In practice, a reasonable and robust policy might be to use the GRTEF policy whenever no store is close to empty, and otherwise to switch to the GGDDF policy. However, there is a need to find the right balance between these two policies, and also to allow for the possibility of cross-charging where this might be beneficial. We therefore look more generally at non-anticipatory policies. The choice of such policies is informed both by some considerations from dynamic programming theory and by the observations above. Value functions --------------- Standard dynamic programming theory (see, e.g. ) shows that, at any time *t*, given a stochastic description of the future evolution of the residual energy process, an optimal decision at that time may be obtained through the computation of a *value function* *V**t*(*s*) defined on the set of possible states *s* = (*s**i*,  *i* ∈ *S*) of the stores, where each *s**i* = *s**i*(*t* − 1) is the level of energy in store *i* ∈ *S* at the start of time *t*. The quantity *V**t*(*s*) may be interpreted as the future *value* of having the energy levels of the stores in state *s* at time *t*, relative to their being instead in some other reference state, e.g. state 0, where *value* is the negative of *cost* as measured by expected future unserved energy. Then the optimal decision at any time *t* is that which maximises the value of the resulting state, less the cost of any energy unserved at time *t*. In the present problem, such a stochastic description is generally unavailable. However, the value function might reasonably be estimated from a sufficiently long residual energy data series—typically of at least several years duration—especially if one is able to assume (approximate) time-homogeneity of the above stochastic description. The latter assumption essentially corresponds, over sufficiently long time periods, to the use of a value function *V**t*(*s*) = *V*(*s*) which is independent of time *t* and to the use of a non-anticipatory scheduling policy (however, see below). As previously indicated, we make the convention that the state *s**i* of each store *i* denotes the amount of energy which it is able to serve—so that (in)efficiency losses are accounted for at the input stage. At any time *t*, and given a stochastic description as above, the value function *V**t*(*s*) may be computed in terms of absorption probabilities (see, e.g. ). For each *t*, let *v**i**t*(*s*) be the partial derivative of the value function *V**t*(*s*) with respect to variation of the level *s**i* of each store *i* ∈ *S*. Standard probabilistic coupling arguments (which we do not give in detail since they are not formally required here) show that, for each *i* ∈ *S*, *v**i**t*(*s*) lies between 0 and 1 and is decreasing is *s**i*. (For example, the positivity of *v**i**t*(*s*) is simply the *monotonicity* property that one can never be worse off by having more stored energy—see Section [sec:model]). We assume that changes in store energy levels are sufficiently small over each single time step *t* that changes to the value function may be measured using the above partial derivatives. Then the above problem of scheduling the charging or discharging of the stores over the time step *t* becomes that of choosing *feasible* rates *r*(*t*) = (*r**i*(*t*),  *i* ∈ *S*) so as to maximise ∑*i* ∈ *S**v**i**t*(*s*)*r**i*(*t*) − max(0,  − *u*(*t*)),  where *s**i* = *s**i*(*t* − 1) is again the level of each store *i* ∈ *S* at the end of the preceding time step *t* − 1, and where the second term in ) is the unserved energy at time *t* (see Section [sec:model]). Thus, under the above linearisation, the scheduling problem at each time *t* becomes a linear programme. When, at (the start of) any time *t*, the state of the stores is given by *s* = *s*(*t* − 1), we shall say that any store *i* ∈ *S* has *charging priority* over any store *j* ∈ *S* if *η**i**v**i**t*(*s*) > *η**j**v**j**t*(*s*), and that any store *i* ∈ *S* has *discharging priority* over any store *j* ∈ *S* if *v**i**t*(*s*) < *v**j**t*(*s*). Proposition [proposition:lp] below is again intuitively appealing; we give a formal proof in the Appendix. [proposition:lp] When the objective is the minimisation of total unserved energy over some given period of time, then, under the above linearisation, at each time *t* and with *s* = *s*(*t* − 1), the optimal charging, discharging and cross-charging decisions are given by the following procedure: [-] *when charging, i.e. if *re*(*t*) ≥ 0*, charge the stores in order of their charging priority, charging each successive store as far as permitted (the minimum of its input rate and its residual capacity) until the energy available for charging at time *t* is used as far as possible—any remaining energy being *spilled*; *when discharging, i.e. if *re*(*t*) < 0*, discharge the stores in order of their discharging priority, discharging each successive store as far as permitted (the minimum of its output rate and its available stored energy) until the demand at time *t* is met as fully as possible—any remaining demand being *unserved energy*; *subsequent to either of the above*, choose pairs of stores (*i*, *j*) in succession by, at each successive stage, selecting store *i* to be the store with the highest discharging priority which is still able to supply energy at the time *t* and selecting store *j* to be the store with the highest charging priority which is still able to accept energy at the time *t*; for each such successive pair (*i*, *j*), provided that *v**i**t*(*s*) < *η**j**v**j**t*(*s*),  *cross-charge* as much energy as possible from store *i* to store *j*. Note that the above priorities are such that this process necessarily terminates on the first occasion such that the condition  fails to be satisfied, and further that no cross-charging can occur when *re*(*t*) ≥ 0 and there is spilled energy, or when *re*(*t*) < 0 and there is unserved energy. The pairing of stores for cross-charging in the above procedure is entirely notional, and what is important is the policy thus defined. However, when efficiencies are low, cross-charging occurs infrequently. In the examples of Section [sec:appl-gb-energy], we consider *time-homogeneous* value function derivatives of the form *v**i**t*(*s*) = exp( − *λ**i**s**i*/*P**i*),  *i* ∈ *S*,  essentially corresponding, as above, to the use of non-anticipatory scheduling algorithms. (However, data limitations—see the analysis of Section [sec:appl-gb-energy]—mean that we use a single, extremely long, residual energy dataset of 324,360 hourly observations both to estimate the parameters *λ**i* and to examine the effectiveness of the resulting policies. Hence, the resulting scheduling algorithms might be regarded as having, at each successive point in time, some very mild anticipation of the future evolution of the residual energy process. Within the present exploratory analysis this approach seems reasonable.) The expression  is an approximation, both in its assumption that, for each *i* ∈ *S*, the partial derivative *v**i**t*(*s*) depends only on the state *s**i* of store *i*, and in the assumed functional form of the dependence of *v**i**t*(*s*) on *s**i*. The former assumption is equivalent to taking the value function as a sum of separate contributions from each store (a reasonable first approximation), while probabilistic large deviations theory  suggests that, under somewhat idealised conditions, when the mean residual energy is positive, the functions *v**i**t*(*s*) do decay exponentially. However, we primarily justify the use of the relation  in part by the arguments below, and in part by its practical effectiveness—see the examples of Section [sec:appl-gb-energy]. Recall that what are important are the induced decisions, as described above, on the storage configuration space. In particular, when the stores are under pressure and hence discharging, it follows from the definition of discharging priority above that it is only the ratios of the parameters *λ**i* which matter, except only for determining the extent to which cross-charging should take place. Taking *λ**i* = *λ* for all *i* and for some *λ* defines a policy which, when discharging, corresponds to the use of the GGDDF policy, supplemented by a degree of cross-charging which depends on the absolute value of the parameter *λ*. The ability to further adjust the relative values of the parameters *λ**i* between stores allows further tuning to reflect their relative efficiencies; in particular, for a given volume of stored energy, increasing the efficiency of a given store *i* ∈ *S* increases the desirability of having that energy stored in other stores and reserving more of the capacity of store *i* for future use—something which can be effected by increasing the parameter *λ**i*. Application to GB energy storage needs ====================================== In this section we give an extended example of the application of the preceding theory to the problem of dimensioning and scheduling future GB energy storage needs within a net-zero environment. Our primary aim is to illustrate the practical applicability of the theory. We also aim to show how, given also cost data, it might be used to assist in storage dimensioning. We are further concerned with the extent to which it is sufficient to consider non-anticipatory scheduling policies. A detailed description of the dimensioning problem in the GB context, together with details of all our storage, demand and renewable generation data, including storage costs, is given by —work prepared in support of the Royal Society working party on future GB energy storage. (That paper uses a rather heuristic scheduling algorithm, which occasionally leads to very high total power requirements.) Additional discussion is given in the Royal Society report itself . We consider here a GB 2050 net-zero scenario, also considered in . In this scenario heating and transport are decarbonised, in line with the UK’s 2050 net-zero commitment, thereby approximately doubling current electricity demand to 600 TWh per year (see ), and all electricity generation is renewable and provided by a mixture of 80% wind and 20% solar generation. (In particular, it is assumed that there is no fossil-fuel based generation, even with carbon-capture-and-storage.) It is further assumed that there is a 30% level of generation overcapacity—corresponding to total renewable generation of 780 TWh per year on average. The above wind-solar mix and level of generation overcapacity are those used in , and are approximately optimal on the basis of the generation and cost data considered there. We also consider, more briefly, the effect on storage dimensioning of a reduced level of overcapacity of 25%. However, on the basis of the costs given by , the 5% reduction in generation overcapacity (30 TWh) saves $15 bn in generation costs, with the consequence that, in all the examples below, the 30% level of overcapacity is here more economic. The lower level of overcapacity is included for comparison purposes. In the application of this section, we depart from our earlier convention (made for mathematical simplicity) of notionally accounting for all round-trip inefficiency at the input stage. We instead split the efficiency *η**i* of any store *i* by taking both the input and output efficiencies to be given by *η**i*0.5. This revised convention increases both the notional volumes of energy within any store *i* and the notional capacity of the store *i* by a factor *η**i*− 0.5. This is in line with most of the applied literature on energy storage needs and makes our storage capacities below directly comparable with those given elsewhere. #### Generation and demand data. We use a dataset consisting of 37 years of hourly “observations” of wind generation, solar generation and demand. The wind and solar generation data are both based on the 37-year (1980–2016) reanalysis weather data of  together with assumed installations of wind and solar farms distributed across GB and appropriate to the above scenario, and with 80% wind and 20% solar generation as above. The derived generation data are scaled so as to provide on average the required level of generation overcapacity relative to the modelled demand. The demand data are taken from a year-long hourly demand profile again corresponding to the above 2050 scenario and in which there is 600 TWh of total demand; this profile was prepared by Imperial College for the UK Committee on Climate Change . As in this year-long set of hourly demand data has been recycled to provide a 37-year trace to match the generation data. (This is reasonable here as the between-years variability which may present challenges to storage dimensioning and scheduling is likely to arise primarily from the between-years variability in renewable generation. However, see also ). More details and some analysis of the above data are given in . From these data we thus obtain a 37-year hourly *residual energy* (generation less demand) process to be managed by storage. For the chosen base level of 30% generation overcapacity, Figure [fig:residualenergy30] shows a histogram and autocorrelation function of the hourly residual energy process. The large variation in the residual energy process is to be compared with the mean demand of 68.6 GW. ![image](re_30_hist) ![Histogram and autocorrelation function of hourly residual energy (30% overcapacity).](re_30_acf "fig:") [fig:residualenergy30] Some level of generation overcapacity is required, both to account for losses due to inefficiencies in storage, and to keep the required volume of storage within reasonable bounds. As the level of generation overcapacity increases, storage requirements (specifically, capacities and input rates) decrease. The consequent decrease in storage costs must be balanced against the increase in the cost of generation. In our examples below, the considered levels of generation overcapacity are 30% and 25%. However, it is useful to consider briefly the volume of storage required to manage more general levels of overcapacity. In particular, for a *single* store with given efficiency and without input or output power constraints, there is a minimum store size and a minimum initial store energy level such that the store can completely manage the above residual energy process (i.e. with no unmet demand). Figure [fig:st.size] plots, for various levels of store efficiency and on the basis of our assumed 80%-20% wind-solar mix, this minimum store size against the assumed level of overcapacity in the above residual energy process. [fig:st.size] #### Storage data and costs. As discussed in Section [sec:introduction] and in , we consider three types of storage with associated efficiencies: [-] the *short* store is intended primarily for the management of diurnal and other short-term variation, and has a low capacity requirement (see below); it is assumed that it can therefore use a technology such as Li-ion battery storage with a high efficiency, which we here take to be 0.9; the *medium* store is intended primarily for the management of variation on a timescale of days and weeks, this variation resulting from weather-related variations in generation and demand; it has very substantial capacity requirements and may require a technology such as ACAES which has a lower efficiency, which we here take to be 0.7; the *long* store is intended for the management of seasonal and between-years variation (see Section [sec:introduction]); it has an very high capacity requirement, and a power requirement which—on account of potentially high input/output costs—it is desirable to keep relatively modest; it requires a technology, such as hydrogen or similar chemical storage, which currently has a low efficiency, which we here take to be 0.4. We use storage costs from, Table 3, and given in Table [tab:storagecosts] below (with storage capacity measured according to to the convention of this section with regard to accounting for inefficiency). These costs are based on various recent studies as reported in the above paper and are estimates of likely future storage costs in 2040 if the storage technologies are applied on a large scale—current costs are considerably higher. For Li-ion batteries, the maximum input and output rates are constrained to be the same, so that power costs may be associated with input power. However, there is huge uncertainty as to future storage costs and so the examples below are not to be read as providing recommendations about competing storage technologies. [ht] | | | | | | --- | --- | --- | --- | | | capacity | output power | input power | | | ($ per KWh) | ($ per KW) | ($ per KW) | | *long* (hydrogen) | 0.8 | 429 | 858 | | *medium* (ACAES) | 9.0 | 200 | 200 | | *short* (Li-ion) | 100.0 | 0 | 180 | [tab:storagecosts] Unit capacity costs decrease dramatically (by some orders of magnitude) as we move from the *short*, to the *medium*, to the *long* store, while unit power (rate) costs vary, again considerably, in the opposite direction. The aim in dimensioning and scheduling storage must therefore be to arrive at a position in which the *long* store is meeting most of the total capacity requirement, while as much as is reasonably possible of the total power requirement is being met by the *medium* and *short* stores. We treat GB as a single geographical node, ignoring network constraints. This is in line with most current studies of GB *long-term* storage needs, see, e.g. , and in line with the annual Electricity Capacity Reports produced by the GB system operator . As at present, future network constraints are unlikely to be continuously binding over periods of time in excess of a few hours, or a day or two at most, and are primarily likely to affect and to increase short-term storage requirements. (The effect of such constraints on the model of the present paper would be to add further linear constraints on the input and output rates of the stores. The general theory given in the preceding sections would continue to be applicable.) We take the reliability standard to be given by 24 GWh per year unserved energy and optimise scheduling and dimensioning subject to to constraint that this standard is met. This results in an average *number of hours* per year in which there is unserved energy which is roughly in line with the current GB standard of a maximum of 3 such hours per year. However, modest variation of the chosen reliability standard makes very little difference to our conclusions. Example [ex:1] below considers a single store. In the remaining examples we *schedule* storage using time-homogeneous value function derivatives *v**i*(*s*) given by  (with *s* defined as there to be the volume of stored energy which may be output). Thus, as previously discussed, the scheduling is almost completely non-anticipatory. We consider also the optimality of the scheduling algorithms used. We take the stores to be initially full. However, in all our examples, stores fill rapidly regardless of their initial energy levels, and these levels are in general independent of their initial values by the end of the first year of the 37-year period considered. [ex:1] *Single long (hydrogen) store with efficiency 0.4.* We first consider the management of the residual energy process by a single store, optimally dimensioned with respect to cost. If a single store is to be used, then, of the technologies considered here and on the basis of the present, *as yet very uncertain*, costs, a hydrogen store is the only economic possibility—see also  The unserved energy is clearly a decreasing function of each of the store capacity *E*, the maximum input power *Q* and the maximum output power *P*. For any given value of *P*, we may thus easily minimise the overall cost over (*E*, *Q*). It then turns out—unsurprisingly given the stringent reliability standard—that the overall cost is here minimised by taking *P* to be the minimum possible value (115.9 GW or 116.6 GW at 30% or 25% generation overcapacity) such that the given reliability standard of 24 GWh per year is satisfied. Table [tab:sl] shows the optimal storage dimensions and associated costs for both the levels of generation overcapacity considered in these examples. [ht] | | capacity | output power | input power | total | | --- | --- | --- | --- | --- | | size | 120.4 TWh | 115.9 GW | 80.0 GW | | | cost ($ bn) | 96.3         | 49.7       | 68.6       | 214.7 | | | capacity | output power | input power | total | | --- | --- | --- | --- | --- | | size | 167.9 TWh | 116.6 GW | 85.3 GW | | | cost ($ bn) | 134.3         | 50.0       | 73.2       | 257.5 | [tab:sl] These store capacities are larger than those suggested by Figure [fig:st.size], where the maximum store input powers *Q* were unconstrained: on the basis of the present costs, it is in each case more economic to reduce *Q* at the expense of allowing the store capacity *E* to increase. The total cost of storage required to manage 25% generation overcapacity is $42.8 bn greater than the corresponding cost at 30% overcapacity—making the 30% level of overcapacity more economic on the basis of the storage and generation costs discussed above. Figure [fig:sl.eu] plots cumulative unserved energy against time for each of the two levels of generation overcapacity. These two processes are here essentially identical. In each case, the store never completely empties and so unserved energy occurs only at those times at which the output power *P* of the store is insufficient to serve demand. [fig:sl.eu] Figure [fig:sl.sl] shows the corresponding processes formed by the successive energy levels within the store. A substantial fraction of the store capacity is needed solely to manage the single period of large shortfall in the residual energy process occurring at around 275,000 hours into the 37-yr (324,360 hour) period studied. This underlines the importance of using a residual energy time-series which is sufficiently long to capture those events such as sustained wind droughts which only occur perhaps once every few decades—see also . [fig:sl.sl] [ex:2] *Long (hydrogen) store with efficiency 0.4 plus medium (ACAES) store with efficiency 0.7.* In this example we show that, again on the basis of the cost data used here and the considered levels of generation overcapacity, extremely large savings (of the order of tens of billions of dollars) are to be made by the use of a suitable mixture of storage technologies. We choose *medium* (ACAES) store dimensions as below: for both levels of generation overcapacity, some numerical experimentation shows these to be at least close to optimal with respect to overall cost minimisation. Then, given these *medium* store dimensions, and subject to the given reliability standard, the *long* (hydrogen) store may be optimally dimensioned—given the use of the value-function based scheduling algorithm, and again to a very good approximation—as previously. Table [tab:slm] shows the optimal storage dimensions and associated costs, again for both 30% and 25% generation overcapacity. Observe that, in each case, the combined output power of the two stores is only slightly greater than that of the single store of Example [ex:1]. [ht] l|r|rrrr & capacity & output power & input power & total cost *long* & size & 72.8 TWh & 96.2 GW & 53.3 GW & store & cost ($ bn) & 58.2         & 41.3       & 45.7       & 145.2 *medium* & size & 2.5 TWh & 21.0 GW & 21.1 GW & store & cost ($ bn) & 22.5         & 4.2       & 4.2       & 30.9 &&&& --- 176.2 l|r|rrrr & capacity & output power & input power & total cost *long* & size & 79.3 TWh & 81.0 GW & 57.5 GW & store & cost ($ bn) & 63.4         & 34.7       & 49.3       & 147.5 *medium* & size & 4.5 TWh & 40.0 GW & 40.0 GW & store & cost ($ bn) & 40.5         & 8.0       & 8.0       & 56.5 &&&& --- 204.0 [tab:slm] The reason for the very large costs savings, relative to the use of a single storage technology and at both levels of generation overcapacity, is as follows. The low efficiency (0.4) of the *long* (hydrogen) store means that, when used on its own, its capacity is necessarily much greater than would have been the case had its efficiency been higher (see also Figure [fig:st.size]). The greater efficiency (0.7) of the very much smaller *medium* (ACAES) store introduced in this example allows it to be used to cycle rapidly (see Figure [fig:slm.sl]), serving a disproportionate share of the demand in relation to its capacity (see below), and thereby *greatly* reducing the capacity requirement for the *long* store. Note also the appreciably greater cost saving of $53.3 bn achieved by the introduction of the *medium* (ACAES) store at 25% generation overcapacity than the corresponding saving of $38.5 bn at 30% generation overcapacity. The parameters *λ**i* of the scheduling algorithm (equation ) are given by (*λ**l*, *λ**m*) = (0.0011, 0.01) per hr and by (*λ**l*, *λ**m*) = (0.001, 0.01) per hr respectively for 30% and 25% generation overcapacity. In each case the annual unserved energy just meets the required reliability standard of 0.024 TWh per year. The average annual volumes of energy served externally, i.e. to meet demand, by the *long* and *medium* stores are 47.6 TWh and 35.9 TWh respectively in the case of 30% overcapacity, and 38.7 TWh and 50.6 TWh respectively in the case of 25% overcapacity—with, in this example, negligible extra energy being used for cross-charging. Thus, in both cases, the much smaller *medium* store serves a comparable volume of energy to the *long* store. For 30% generation overcapacity, Figure [fig:slm.eu] plots cumulative unserved energy against time, together with the corresponding process in which there is only unserved energy to the extent that demand exceeds the combined output power (117.2 GW) of the two stores; this latter process providing a lower bound on the unserved energy achievable. The corresponding plot for 25% overcapacity is very similar. There is thus only one significant occasion (at around 150,000 hours) on which, for the original fully constrained storage system, there is unserved energy over and above that determined by the power constraint; this is the result of the *medium* store emptying and the *long* store then being unable on its own to serve energy at the required rate. Whether different (anticipatory) management of the stores, in the period immediately preceding this occasion, could have avoided this remains an open question, but it is clear that, in any case and in both examples, the stores are very close to being optimally controlled. [fig:slm.eu] Again, for the case of 30% generation overcapacity, Figure [fig:slm.sl] plots the percentage levels of energy in store during a two-year period, starting at time 265,000 hours and surrounding the one point in time at which the *long* store comes very close to emptying. The corresponding plot for 25% generation overcapacity is again very similar. In each case, the *medium* store cycles rapidly, thereby using its higher efficiency to greatly reduce the capacity and input rate requirements on the *long* store. It nevertheless generally reserves about half its capacity so that it is available to assist in any “emergency” in which the demand exceeds the output power of the *long* store alone. The exception to this occurs at those times when the *long* store is itself close to emptying, and when the *medium* store must therefore work harder to further relieve the pressure on the *long* store. [fig:slm.sl] [ex:3] *Long (hydrogen) store with efficiency 0.4 plus short (Li-ion) store with efficiency 0.9.* In the context of long-term GB storage needs, a necessarily relatively small *short* store (Li-ion battery) can probably only make a relatively modest contribution. Analogously to Example [ex:2], we here explore the extent to which it is possible for it to assist in the provision of storage primarily provided by a *long* (hydrogen) store. As in Example [ex:2], we choose *short* (Li-ion) store dimensions which appear to work well with respect to overall cost minimisation, subject here to equal input and output power ratings—see the discussion above. (Again, we do not systematically optimise the *short* store dimensions, but considerable experimentation fails to produce any further decrease in the overall costs below). Given the *short* store dimensions, and subject to the given reliability standard, the *long* (hydrogen) store may again be optimally dimensioned—given the use of the value-function based scheduling algorithm, and to a very good approximation—as previously. Table [tab:sls] shows storage dimensions and associated costs, again for both 30% and 25% generation overcapacity. These results are to be compared with those of Table [tab:sl]. What is remarkable is that, for both levels of generation overcapacity, very large reductions in the capacity of the *long* (hydrogen) store are achieved through the introduction of a *short* (Li-ion) store of *very* small capacity. This is again primarily achieved through constant rapid cycling by the *short* store so as to exploit its much greater efficiency—see Figure [fig:sls.sl] below. The total cost savings of $6.3 bn and $9.9 bn respectively, relative to the use of a single *long* store, are similarly noteworthy. [ht] l|r|rrrr & capacity & output power & input power & total cost *long* & size & 101.2    TWh & 115.9 GW & 77.5 GW & store & cost ($ bn) & 81.0            & 49.7       & 66.5       & 197.2 *short* & size & 0.085 TWh & 15.0 GW & 15.0 GW & store & cost ($ bn) & 8.5            & 0.0       & 2.7       & 11.2 &&&& --- 208.4 l|r|rrrr & capacity & output power & input power & total cost *long* & size & 136.8 TWh & 112.0 GW & 77.5 GW & store & cost ($ bn) & 109.4         & 48.0       & 66.5       & 224.0 *short* & size & 0.2 TWh & 20.0 GW & 20.0 GW & store & cost ($ bn) & 20.0         & 0.0       & 3.6       & 23.6 &&&& --- 247.6 [tab:sls] The parameters *λ**i* of the scheduling algorithm (equation ) are given by (*λ**l*, *λ**s*) = (0.000001, 0.1) per hr and by (*λ**l*, *λ**s*) = (0.0003, 0.1) per hr respectively for 30% and 25% generation overcapacity. The average annual volumes of energy served externally by the *long* and *short* stores are 73.8 TWh and 9.8 TWh respectively in the case of 30% overcapacity, and 73.4 TWh and 15.8 TWh respectively in the case of 25% overcapacity—again with negligible extra energy being used for cross-charging. For 30% generation overcapacity, Figure [fig:sls.sl] plots the percentage levels of energy in store during the same two-year period considered in Example [ex:2]. It is seen that the *short* (Li-ion) store here devotes *all* its capacity to cycling rapidly, using its higher efficiency to *greatly* reduce the capacity and, to a lesser extent, the input power requirements for the *long* store. The capacity costs of the *short* store are such that it is not worth further increasing its capacity so as to reserve energy to enable the reduction of the *output power* requirement of the *long* store. Hence the pattern of usage of the *short* store is here different from that of the *medium* store in the previous example. The corresponding plot for 25% generation overcapacity is again very similar. [fig:sls.sl] [ex:4] *Long (hydrogen) store with efficiency 0.4 plus medium (ACAES) store with efficiency 0.7 plus short (Li-ion) store with efficiency 0.9.* In this final example we take the set-up of Example [ex:2], i.e. *long* (hydrogen) store plus *medium* (ACAES) store, and consider whether any further overall cost reduction can be obtained by the addition of a *short* (Li-ion) store. For both 30% and 25% generation overcapacity, some experimentation shows that the storage dimensions and associated costs given in Table [tab:slms] are at least approximately optimal and lead to modest cost reductions—relative to Example [ex:2]—of $0.34 bn and $1.12 bn respectively. In each case the *short* store is relatively very small indeed; however, variation of its dimensions does not seem to assist in further reducing overall costs. Thus, of our four examples, the present three-store mix appears to be the most economical; further, on the basis of the generation costs discussed above, the 30% level of generation overcapacity yields a net saving of $12 bn in comparison with the 25% level of overcapacity. [ht] l|r|rrrr & capacity & output power & input power & total cost *long* & size & 72.2   TWh & 96.2 GW & 53.3 GW & store & cost ($ bn) & 57.8           & 41.3       & 45.7       & 144.8 *medium* & size & 2.44  TWh & 21.0 GW & 21.1 GW & store & cost ($ bn) & 22.0           & 4.2       & 4.2       & 30.4 *short* & size & 0.005 TWh & 2.0 GW & 2.0 GW & store & cost ($ bn) & 0.5            & 0.0       & 0.2       & 0.7 &&&& --- 175.8 l|r|rrrr & capacity & output power & input power & total cost *long* & size & 78.9   TWh & 81.0  GW & 57.5  GW & store & cost ($ bn) & 63.1           & 34.7        & 49.3        & 147.2 *medium* & size & 4.25  TWh & 40.0  GW & 40.4  GW & store & cost ($ bn) & 38.2           & 8.0         & 8.1        & 54.3 *short* & size & 0.010 TWh & 2.05 GW & 2.05 GW & store & cost ($ bn) & 1.0            & 0.0         & 0.4         & 1.4 &&&& --- 202.9 [tab:slms] The parameters *λ**i* of the scheduling algorithm (equation ) are given by (*λ**l*, *λ**m*, *λ**s*) = (0.001, 0.011, 0.035) per hr and by (*λ**l*, *λ**m*, *λ**s*) = (0.0009, 0.01, 0.05) per hr respectively for 30% and 25% generation overcapacity. The annual volumes of energy served externally by the *long*, *medium* and *short* stores are 47.2 TWh, 36.4 TWh and 0.012 TWh respectively in the case of 30% generation overcapacity, and 38.5 TWh, 50.7 TWh and 0.027 TWh respectively in the case of 25% generation overcapacity, again with negligible extra energy being used for cross-charging.
arxiv_0000168
Onsager’s “Ideal Turbulence” Theory =================================== Lars Onsager in 1945-1949 made an exact analysis of the high Reynolds-number limit for individual turbulent flow realizations modeled by incompressible Navier-Stokes equations, motivated by experimental observations that dissipation of kinetic energy does not vanish. I review here developments spurred by his key idea, that such flows are well-described by distributional or “weak” solutions of ideal Euler equations. 1/3 Hölder singularities of the velocity field were predicted by Onsager and since observed. His theory describes turbulent energy cascade without probabilistic assumptions and yields a local, deterministic version of the Kolmogorov 4/5th law. The approach is closely related to renormalization group methods in physics and envisages “conservation-law anomalies”, as discovered later in quantum field theory. There are also deep connections with Large-Eddy Simulation modeling. More recently, dissipative Euler solutions of the type conjectured by Onsager have been constructed and his 1/3 Hölder singularity proved to be the sharp threshold for anomalous dissipation. This progress has been achieved by an unexpected connection with work of John Nash on isometric embeddings of low regularity or “convex integration” techniques. The dissipative Euler solutions yielded by this method are wildly non-unique for fixed initial data, suggesting “spontaneously stochastic” behavior of high-Reynolds number solutions. I focus in particular on applications to wall-bounded turbulence, leading to novel concepts of spatial cascades of momentum, energy and vorticity to or from the wall as deterministic, space-time local phenomena. This theory thus makes testable predictions and offers new perspectives on Large-Eddy Simulation in presence of solid walls. Introduction ============ This entire essay shall be concerned with a theory of “ideal turbulence” which was proposed by Lars Onsager in a 1949 paper entitled “Statistical Hydrodynamics”. The proposal was set forth by Onsager in his characteristic laconic style in the final paragraph of that paper, which I quote here in full: > > ``It is of some interest to note that in principle, turbulent dissipation as described could take place just as readily without the final assistance by viscosity. In the absence of viscosity, the standard proof of the conservation of energy does not apply, because the velocity field does not remain differentiable! In fact it is possible to show that the velocity field in such ‘ideal’ turbulence cannot obey any LIPSCHITZ condition of the form > > > > $$(26)\hspace{100pt} > |\overrightarrow{v}( \overrightarrow{r^{{\!\,}\_\prime}}+\overrightarrow{r})- > \overrightarrow{v}( \overrightarrow{r^{{\!\,}\_\prime}})| > < ({\rm const.}) r^n, > \hspace{100pt}$$ > > > > for any order *n* greater than 1/3; otherwise the energy is conserved. Of course, under the circumstances, the ordinary formulation of the laws of motion in terms of differential equations becomes inadequate and must be replaced by a more general description; for example, the formulation (15) in terms of FOURIER series will do. The detailed conservation of energy (17) does not imply conservation of the total energy if the number of steps in the cascade is infinite, as expected, and the double sum of $Q(\overrightarrow{k}, \overrightarrow{k'})$ converges only conditionally.” — L. > > > In fact, the germ of these remarks were contained in a short abstract on fluid turbulence that Onsager published four years earlier, where he noted that “...various experiments indicate that the viscosity has a negligible effect on the primary process; hence one may inquire about the laws of turbulent dissipation in an ideal fluid.”. Although he never published an argument justifying his “it is possible to show” assertion, it is now known that Onsager had indeed derived a mathematical identity which implies his conclusion and which he communicated by letter to Theodore von Kármán and Chia-Chiao Lin in 1945. Although Onsager’s innovative ideas on this subject were long overlooked and conflated with the related theory of, it is now understood that Onsager’s work essentially refined and extended the concepts of Kolmogorov, anticipating ideas in Large-Eddy Simulation (LES) modelling, modern field-theoretic notions of conservation-law anomalies and renormalization-group invariance, and the concept of weak Euler solutions in the mathematical theory of partial differential equations. ![Lars Onsager (1903-1976). Photograph published originally in Svenska Dagbladet on December 6, 1968 shortly after the announcement of Onsager’s award of the Nobel Prize in Chemistry, reproduced on license from ZUMA Press.](figures/Fig1.jpg "fig:") [correlate] I first became aware of Onsager’s proposal in 1990 when I was a postdoc at Rutgers University and Uriel Frisch, visiting for the spring semester, delivered a series of seminars on hydrodynamic turbulence. Frisch had rediscovered some of Onsager’s key ideas on his own and he only learned of Onsager’s prior work himself from Robert Kraichnan in 1972 (Frisch, private communication). Onsager’s 1/3 Hölder claim was discussed briefly at the end of Frisch’s Rutgers lectures, which followed closely an expository article he published that same year. (Interestingly, the reference to Onsager’s 1949 paper and the 1/3 Hölder claim were cut from the published article, presumably because of length restrictions.) I looked up Onsager’s paper and was immediately impressed by his remarks because, unlike Kolmogorov’s arguments for 1/3 power-law scaling based upon statistical assumptions and dimensional analysis, Onsager claimed that the 1/3 exponent could be derived on a purely dynamical basis for individual solutions of the fluid equations. In fact, we now know that Onsager’s 1/3 result is not equivalent to that of, e.g. being completely compatible with inertial-range intermittency, which Onsager had already anticipated in 1945. After consulting with two Rutgers experts on mathematics of Navier-Stokes equations, Giovanni Gallavotti and Vladimir Scheffer, I was surprised to learn that Onsager’s claims were unknown to the PDE community in 1990 and that nothing was established about their validity. The situation is now very different and Onsager’s ideas have become the focus of major developments in the mathematical theory of PDE’s, connecting surprisingly with ideas of John Nash on a completely different problem of isometric embeddings of Riemannian manifolds. These developments by many people have in turn attracted renewed attention to Onsager’s ideas in the fluid mechanics community, being indeed the subject of an earlier Perspectives essay by. The purpose of the present work is to explain Onsager’s ideas in a pedagogical and straightforward manner. Although I shall follow somewhat the chronological development of the theory, the emphasis in this work is on science and not history. Most of the research on Onsager’s theory so far has been by mathematicians and this may have led to the impression among many that the subject is an esoteric branch of pure mathematics. Although the rigorous development leads indeed to some non-trivial mathematical issues, the theory is directly motivated by experimental observations and by intuitive physical ideas. Some of these points have been explained in my earlier unpublished short note and in my online turbulence course notes. In fact, I believe that Onsager’s point of view is one of the easiest to teach to beginning students of turbulence. The theory does not cover all turbulent phenomena, as it concerns the regime of very high Reynolds numbers, but it is not restricted to the idealized limit of infinite Reynolds number only (as has been sometimes misunderstood). It sheds little light, therefore, on the important problem of transition to turbulence. However, one of the great virtues of Onsager’s approach is the naturalness with which it extends concepts of high Reynolds-number turbulence from the traditional problem of incompressible fluid turbulence to more general cases of compressible fluids, relativistic fluids, plasma kinetics, and quantum superfluids. See section [sec:open]*(d)*. There are, by now, a very large number of researchers who have contributed to this subject, many at a deeper mathematical level than I have. There are already several illuminating and insightful reviews written by mathematicians, such as, but the present essay is not designed as such a comprehensive review. Instead, my purpose is to explain the subject from the personal perspective of a mathematical physicist working for some years in this area. As with any currently active field, all researchers may not agree with my proffered interpretations and points of view. I endeavor, however, to give technically correct explanations of results, even those to which I have not contributed myself. I do not want to present the subject as a *fait accompli*, which it is not, but instead as a living scientific theory with many critical questions still open. The main reason that I was excited to write this essay is that I believe that some of the most fundamental problems in this area remain unsolved and call for the combined efforts, not only of mathematicians, but also of fluid mechanicians, computational scientists, turbulence modelers and physicists, both theorists and experimentalists. The earlier essay of lucidly explained how analytical tools developed in the Onsager theory could be applied to direct analysis of empirical data. I will be concerned here instead with the complementary issue of how the “ideal turbulence” theory connects with the numerical modeling method of Large-Eddy Simulation (LES), a topic only briefly treated in that earlier work (, section 8.6). Although the two subjects have developed historically with little interaction, they are in fact quite intimately connected. For example, both LES and “ideal turbulence” aspire to describe individual realizations of a turbulent flow. I shall focus in this essay in particular on the issue of turbulence-wall interactions, which currently concerns deeply both the LES and the mathematical PDE communities and which poses some of the greatest current challenges in turbulence research. As we shall see, Onsager explored this problem himself and his attempts anticipate some recent developments. Since this essay is rather long, it may be useful to summarize briefly its contents and to explain the organization, so as to provide the reader some guide for the journey. The bare synopsis of the contents is as follows: l **[empiric]. Background** **[away]. Turbulence Away From Walls** l   *[result]. Onsager’s Result*    | | | --- | | [sec:RG-LES]. Ideal Turbulence, Renormalization Group and Large-Eddy Simulation | | [sec:DissAnom]. Local Deterministic 4/5th-Law and Dissipative Anomaly | | [sec:open]. Implications and Open Questions | | |   *[sec:ConjIntro]. Onsager’s Conjecture*    | | | --- | | [sec:exist]. Existence of Dissipative Euler Solutions | | [sec:nonuniq]. Non-Uniqueness for the Initial-Value Problem | | [sec:InfRe]. The Infinite Reynolds Number Limit | | | **[walls]. Turbulence Interactions With Solid Walls**   l *[sec:overture]. Overture on Turbulence and Solid Surfaces* *[wall:RG]. Onsager Renormalization Group Analysis*   | | | --- | | [wall:UV]. Regularization of Ultraviolet Divergences | | [wall:coarse]. Coarse-Grained Equations | | [sec:moment]. Momentum Cascade in Space | | [sec:energy]. Energy Cascade in Space and in Scale | | [sec:vorticity]. Vorticity Cascade in Space | | [sec:wkstrng]. Weak-Strong Uniqueness and Extreme Near-Wall Events | | | *[wall:exist]. Dissipative Euler Solutions and Zero-Viscosity Limit* **[sec:conclude]. Prospects**   | | | --- | | *[sec:true]. How Do We Check If It’s True?* | | *[sec:matter]. Why Does It Matter?* | | *[sec:last]. Last Words* | | | Our tour through this subject begins in Section [empiric] with a summary of some of the experimental phenomena and the basic theoretical assumptions generally made to explain them. The first half of the essay, Section [away], then treats turbulence away from solid walls, such as wakes and jets, which was the main subject of Onsager’s original work. As I discuss, Onsager himself made an exact and essentially rigorous analysis of this problem, deriving various results on energy cascade, singularities, scale-locality, etc. The section [result] on this subject is titled “Onsager’s Result”, although he never published his own derivations and his results had to be recovered and extended by work of others. We shall examine Onsager’s unpublished material on turbulence, which are now available, along with his unpublished results on many other subjects, in the online Onsager Archive hosted by the Norwegian University of Science and Technology: <https://www.ntnu.edu/onsager/lars-onsager-archive>. In addition, however, Onsager made more ambitious proposals on existence of dissipative Euler solutions and their emergence in the infinite-Reynolds number limit, which were motivated by observations but for which he almost certainly had no analytical arguments. Only much later was dramatic progress made on these questions and, as I review in section [sec:ConjIntro] entitled “Onsager’s Conjecture”, this has involved rather sophisticated mathematical tools. The second half of our essay, section [walls], deals with the subject of wall-bounded turbulence, which is of crucial importance for most terrestrial turbulent flows and a keen interest of Onsager’s, but which has only recently been seriously tackled by his methods. This section closely parallels the previous ones, with section [sec:overture] reviewing particular features of the high-Reynolds limit for turbulent-wall interactions and early ideas of Taylor and Onsager on the subject. Then section [wall:RG] describes in detail how the analysis pioneered by Onsager applies to wall-bounded turbulence and, in recent work of many researchers, leads to a picture of both spatial and scale cascades of momentum, energy and vorticity, but understood as deterministic and space-time local processes. The concluding section [sec:conclude] of this essay offers some final remarks about the empirical status and future importance of Onsager’s theory. Background ========== Before I can discuss any theory, I must briefly review the “various experiments” on turbulent energy dissipation mentioned by that motivated his analysis. The specific work cited by was the experimental study of Hugh L. on wake turbulence behind a grid using hotwire anemometry, which reported that the decay rate of kinetic energy $Q=-\frac{d}{dt}\frac{3}{2}u^{\prime 2}$ satisfies the scaling law Q~A eq:Taylor with *A* ≐ 0.2056 at sufficiently high Reynolds number *R**e* = *u*ʹ*L*/*ν*,  where *u*ʹ is the r.m.s. streamwise velocity fluctuation, *L* is the velocity integral length, and *ν* is the kinematic viscosity of the fluid. The scaling law seems to have been first hypothesized by G. I., who already argued in that kinetic energy can be “dissipated in fluid of infinitesimal viscosity, when the turbulent motion takes place in three dimensions.” Onsager inferred from the results of that the coefficient *A*(*R**e*) becomes constant for *R**e* ≫ 1,  but the first systematic evidence was obtained by based on a compilation of data from several experiments with different types of grids. See Fig. [figempiricala]. Note that the same hypothesis of the asymptotic *R**e*-independence of non-dimensionalized energy dissipation rate was made also by. [b]0.46![Reproduced from K. R. Sreenivasan, On the scaling of the turbulence energy dissipation rate, Phys. Fluids 27 1048–1051 (1984), with permission of AIP Publishing.](figures/Fig2a.png "fig:") [figempiricala] [b]0.52![image](figures/Fig2b.png) [figempiricalb] [figempirical] Although this observation is basic to several theories of high-Reynolds turbulence and is thus sometimes referred to as the “zeroth law of turbulence”, the experimental situation is in fact more complex and interesting. It is indeed true that the result is observed to hold well in wake flows, such as those past bluff or streamlined bodies. One evidence for this is the common observation that the drag coefficient of the body CD= CD tends to a constant value for *R**e* ≫ 1 where *F**d* is the drag force, *ρ* is fluid mass density, *U*∞ is external flow velocity and *A* is the frontal area of the body. See Fig. [figempiricalb] for the example of a circular disk and, section 5.2, for the connection with dissipation in the wake. On the other hand, there is a striking dichotomy for internal flows such as flows through pipes or channels, likewise Taylor-Couette, Rayleigh-Bénard, and von Kármán flows, and as well for flat-plate boundary layers, in all of which the strict independence from Reynolds number depends upon whether the solid boundary is hydraulically smooth or hydraulically rough. For mathematical readers I must clarify that this distinction has nothing to do with the mathematical smoothness of the boundary and “hydraulic roughness” means simply that the solid boundary has small ripples, ridges, etc. with some characteristic roughness height *k* that impede the flow. Thus, a hydraulically rough surface may in some cases be modeled by a *C*∞ manifold. The general observation in these flows is that the dimensionless dissipation rate becomes asymptotically independent of Reynolds number only when the solid boundary is hydraulically rough. This fact was noted as early as the 18th century by the French engineer Antoine de Chézy who attributed the seasonal variation of drag in the Parisian water canals to the growth and decline of algae and moss on the side walls. A more quantitative observation was made in straight circular pipe flows by who studied the friction factor =, lambda where $-\frac{\partial P}{\partial x}$ is the applied pressure gradient, *D* is the pipe diameter and *Ū* is the mean flow velocity. He found that *λ*(*R**e*) slowly decayed to zero as *R**e* → ∞ for a smooth-wall pipe but, with sand-grains glued to the wall, instead tended to a positive constant that depended on the grain height *k*. These same qualitative observations have been confirmed in a great variety of internal flows, e.g. see. Onsager, by the way, was certainly aware of such observations as indeed he cited in his 1949 paper the work of, who considered drag laws in pipe flows with both smooth and rough walls and who discussed the classic works of and others. The starting point of the theoretical analysis of was the incompressible Navier-Stokes equation t**u**+(**u**)**u**=-p +**u**, **u**=0, NS where *p* = *P*/*ρ* is kinematic pressure, in line with the common assumption that all turbulence phenomena at low Mach numbers can be described within this continuum approximation. Most of our essay will present results that have been obtained from this standard point of view. It is not, however, entirely obvious that this equation should be adequate to describe turbulent energy dissipation fully, since the fundamental *fluctuation-dissipation relation* of statistical physics implies that molecular dissipation phenomena and thermal fluctuations are intrinsically intertwined and must always occur together. Onsager was, of course, deeply familiar with the statistical theory of thermal fluctuations. The so-called “Onsager principle”, which he proposed in his 1931 work on reciprocal relations and which was worked out by for the linear regime, is probably the most elegant form of the fluctuation-dissipation relation, expressing the probability of a time-history to arise by thermal fluctuations directly in terms of the time-integrated dissipation. The prediction that thermal fluctuations should fundamentally modify the turbulent dissipation range starting at the Kolmogorov length-scale *η* = *ν*3/4/*ɛ*1/4 (with *ɛ* the mean dissipation rate of kinetic energy per unit mass) was apparently published first in this journal by. However, it was not until the work of, and independently, that a stochastic version of the Navier-Stokes equation was formulated that incorporates the fluctuation-dissipation relation. For a low-Mach incompressible fluid this equation for fluctuating velocity field $\tilde{{{\bf u}}}$ takes the form t +()=- ++, =0, LLNS where $\tilde{{{\mbox{\boldmath $\tau$}}}}$ is a Gaussian random stress field with mean zero and covariance ij(**x**,t)kl(**x**’,t’)=(ikjl+ iljk-ijkl)3(**x**-**x**’)(t-t’) FDR which represents the thermal momentum transport. Recently, evidence has emerged from numerical simulation of the stochastic equation and related models which confirm Betchov’s prediction. I shall therefore critically examine also in this essay the question whether the deterministic equations are sufficient to explain all of the experimental observations on turbulent energy dissipation. Since it has been traditionally assumed that the Navier-Stokes equations are an adequate model of incompressible fluid turbulence, direct numerical simulation (DNS) of those equations has also been used as a tool to study turbulent energy dissipation. I am not aware of any systematic DNS study of the Reynolds-dependence of dissipation in 3D wall-bounded flows (but see for a 2D flow). Instead most of the attention has been focused on turbulence in a periodic box stirred by a large-scale body force, which has been considered a close analogue of the decaying turbulence behind a grid. A great advantage of numerical simulations is that the entire flow field is accessible and thus local viscous energy dissipation per mass (**x**,t)=2Sij(**x**,t)Sij(**x**,t), epsilon with strain rate tensor $S\_{ij}=\frac{1}{2}\left(\frac{\partial u\_i}{\partial x\_j}+ \frac{\partial u\_j}{\partial x\_{i}}\right),$ can be calculated exactly for Navier-Stokes solutions, limited only by numerical resolution and machine precision. Corresponding to the prefactor *A* studied in grid turbulence is the dimensionless dissipation rate D= = ijij eq:D where the hat denotes the strain tensor calculated from the dimensionless variables $\hat{{{\bf u}}}={{\bf u}}/u',$ $\hat{{{\bf x}}}={{\bf x}}/L,$ *t̂* = *t*/(*L*/*u*ʹ). Numerical results in and are consistent with the hypothesis that the volume- and time-averaged dimensionless dissipation rate *D̄*(*R**e*) are asymptotically independent of *R**e*. Of course, no empirical data could ever strictly verify this hypothesis, as it would be impossible to rule out an extremely slow decay. Turbulence Away From Walls ========================== Except for brief mention of pipe flow, restricted his attention to the “simplest type of turbulence” which is the “nearly homogeneous and isotropic turbulence [produced] by means of a grid in a streaming gas”. I shall therefore consider this topic first in this essay, since most subsequent work has been done on this problem. I emphasize at the outset, however, that no statistical assumptions of homogeneity or isotropy are required for the results mentioned in the final paragraph of Onsager’s paper. The conclusions thus apply more generally to turbulence away from walls, such as in ubiquitous wake flows that are neither homogeneous nor isotropic. It is a bit ironic that Onsager suggested a completely deterministic approach to the analysis of turbulent flow in a paper entitled ``Statistical Hydrodynamics"! Onsager’s Result ---------------- result In the quote that began this essay, wrote that “it is possible to show that”, which was his characteristic catch-phrase to assert that HE had shown something and to invite others to prove it as well. He never published his own calculations (which was also customary), but, as I shall discuss below, he had worked out the essential steps of a proof that energy will be conserved when the velocity field satisfies a Hölder-Lipschitz condition “for any order *n* greater than 1/3.” The first published result was given by, who attempted to transform the brief argument using Fourier series sketched by into a rigorous proof. The idea was to show that the triply infinite series of Fourier coefficients which arises from the time-derivative of kinetic energy is absolutely summable on the assumption of Hölder regularity with exponent  > 1/3. This ultimately requires a bound on the spectral energy flux Π(*k*),  similarly as in the work of. It was quickly obvious that the 1/3 claim would hold if the spectral flux were dominated by “local” wavevector triads with all three wavectors of magnitudes roughly *k*. This approach is plagued by difficulties, however, not least because there is no necessary and sufficient condition for Hölder continuity in terms of absolute Fourier coefficients. I finally managed in 1992 to work out a proof of Onsager’s 1/3 claim, but invoking a condition on absolute Fourier coefficients stronger than Hölder continuity. At the same time, it appeared from the analysis that most contributions to the energy flux could be bounded even with a weaker assumption on the velocity of “Besov regularity”, which corresponds to Hölder regularity in space-mean sense, closely related to standard structure functions. After developing my first proof, I discussed the problem with Weinan E at the Institute for Advanced Study in 1992 and, the following year E and his collaborators, Peter Constantin and Edriss Titi, found an extremely simple proof not only of Onsager’s original claim but also of the natural Besov result. After being informed privately of this development by E, I realized that their method of proof was closely related to LES modelling and renormalization group, and that it could be simply explained in those terms. It is this argument that I present first in this section, before considering the alternative technical derivation given by Onsager himself. Note that I refer to the energy conservation statement, however, as “Onsager’s result” rather than by the term “Onsager’s conjecture” used by, which was at a time when Onsager’s own mathematical argument was unknown. I prefer to use the term “Onsager’s conjecture” for the deeper statement that inviscid energy dissipation is possible when the Euler velocity field has Hölder regularity  ≤ 1/3,  which was only proved much later. ### Ideal Turbulence, Renormalization Group and Large-Eddy Simulation sec:RG-LES The most basic conclusion that can be inferred from the empirical observations of *R**e*-independence in or is that the non-dimensionalized velocity gradients, if interpreted as ordinary classical derivatives, must diverge as $|\hat{{{\mbox{\boldmath $\nabla$}}}}\hat{{{\bf u}}}|\to\infty$ when *R**e* → ∞. This was the starting point of Onsager’s own line of argument, who began the short abstract in with the observation: ``The dissipation of energy by turbulence is regarded as primarily a ‘violet catastrophe’." In the language of modern field theory, turbulence exhibits ultraviolet (UV) divergences of velocity gradients in the limit *R**e* → ∞. In consequence, the equations of motion (or alternatively ) cannot remain valid in a naive sense in the infinite-*R**e* limit. Just as in field theory, to obtain a dynamical description of turbulence as *R**e* → ∞, one must somehow regularize these UV divergences. The simple but effective approach of was to perform what is called “low-pass filtering” in engineering, “spatial coarse-graining” in physics, and “mollifying” in mathematics. This method involves the use of a smooth kernel *G* for space dimension *d* that satisfies $$\begin{aligned} (i) \quad G({{\bf r}}) \geq 0, \qquad (ii) \quad G({{\bf r}}) \rightarrow 0 \,\,\mbox{rapidly for} \,\,\,\,\,|{{\bf r}}| \rightarrow \infty, \qquad (iii) \quad \int d^d r \,\,G({{\bf r}}) =1\end{aligned}$$ It is understood that *G* is centered at ${{\bf r}}= {\bf 0}$, $\int d^d r \,\,{{\bf r}}\,\,G({{\bf r}}) = {\bf 0},$ and that $\int d^d r |{{\bf r}}|^2 G({{\bf r}}) \approx 1$. We can then set G(**r**) -d G(**r**/) so that all of the above properties hold, except that now $\int d^d r |{{\bf r}}|^2 G\_\ell({{\bf r}}) \approx \ell^2$, with ℓ > 0 the regularization length scale. Finally, one defines a coarse-grained velocity at length-scale ℓ by the formula: (**x**,t)=ddr  G(**r**) **u**(**x**+**r**,t). II5 spatially averaging over the eddies of size  < ℓ. This coarse-grained field is roughly analogous to a “block spin” in critical phenomena and field theory. An identical operation is also employed in the “filtering approach” to turbulence advocated by and, but with a different motivation than regularization of divergences. For Onsager’s theory, the important point is that the coarse-graining operation regularizes gradients, so that ${{\mbox{\boldmath $\nabla$}}}\overline{{{\bf u}}}\_\ell$ remains finite as *ν* → 0 for any fixed length ℓ > 0. This may be shown using the simple integration-by-parts identity (**x**,t)=-ddr  (G)(**r**) **u**(**x**+**r**,t), II7 which by Cauchy-Schwartz inequality yields the bound $|{{\mbox{\boldmath $\nabla$}}}\overline{{{\bf u}}}\_\ell({{\bf x}},t)|\leq (1/\ell)\sqrt{C\_\ell\int d^dr \, |{{\bf u}}({{\bf r}},t)|^2}$ with constant $C\_\ell=\int d^dr\ |({{\mbox{\boldmath $\nabla$}}}G)\_\ell({{\bf r}})|^2.$ Thus, the coarse-grained gradient is bounded as long as the total kinetic energy remains finite as *ν* → 0 (which is necessarily true for freely-decaying turbulence with no stirring). The price of this regularization, as for quantum field-theory divergences, is that a new, arbitrary regularization scale ℓ has been introduced. Because divergences have been eliminated, one may now seek a dynamical description in terms of the coarse-grained field defined in. The equation which is satisfied is easily found to be t+= -+, =0, II9 because the coarse-graining operation commutes with all space- and time-derivatives, and one may inquire about its limit as *R**e* → ∞. For this purpose, one should non-dimensionalize the above equation and introduce hats everywhere, but doing so simply replaces the physical viscosity with the dimensionless viscosity *ν̂* = 1/*R**e*. Thus, as customary in the mathematical literature, one may omit the hats on non-dimensionalized variables and consider the limit *R**e* → ∞ as a “zero-viscosity limit” limit *ν* → 0. I shall do so hereafter, when this causes no confusion. Because the quantity $\Delta\overline{{{\bf u}}}\_\ell$ in remains bounded by a similar estimate as, one can easily show rigorously that $\nu\Delta\overline{{{\bf u}}}\_\ell\to 0$ as *ν* → 0 for fixed ℓ. If one indexes the solutions ${{\bf u}}^\nu$ of the Navier-Stokes equation by viscosity *ν*,  then the above results suggest that a limiting field ${{\bf u}}=\lim\_{\nu\to 0}{{\bf u}}^\nu$ will satisfy the equation t+= -, =0, wEuler Since one is dealing with velocity fields in function spaces, a careful statement of this hypothesis requires a suitable notion of convergence ${{\bf u}}^\nu\to {{\bf u}}.$ It is not hard to show that strong *L*2 convergence suffices, which is the condition that $\lim\_{\nu\to 0}\|{{\bf u}}^\nu-{{\bf u}}\|\_2=0$ where **u**p:=1/p,p1 As I shall discuss later (see section [sec:InfRe]), such strong *L*2-convergence is not guaranteed *a priori*, although it is in fact implied by some relatively mild assumptions on the energy spectrum which can be tested empirically. For the moment, I shall simply assume such convergence so that directly follows, but I return to this issue later. An important observation is that the validity of the coarse-grained equations for all lengths ℓ > 0 is equivalent to the condition in mathematical PDE theory that the velocity ${{\bf u}}$ is a *weak solution* of the incompressible Euler equations. Readers may be more familiar with the standard formulation of weak solutions by smearing the dynamical equations with smooth spacetime test functions, so that the equations hold in the sense of distributions or generalized functions. A simple proof that “coarse-grained solutions”, satisfying for all ℓ > 0,  are equivalent to standard weak solutions is given in section 2 of. Furthermore, these notions of weak solution are equivalent to what meant when he wrote that “the ordinary formulation of the laws of motion in terms of differential equations becomes inadequate and must be replaced by a more general description; for example, the formulation (15) in terms of FOURIER series will do.” Indeed, the incompressible Euler equations written as an infinite-dimensional set of ODE’s for the Fourier coefficients of the velocity field yield also standard weak solutions, as discussed by and. All of these equivalent formulations of weak solutions make sense even when spatial derivatives of velocity no longer exist in the classical sense, but only in the sense of distributions. It may appear odd to some readers that can be interpreted as “Euler equations”, since the filtered equations are generally regarded as unclosed whereas the Euler equations are closed PDE’s. It is indeed true that the equations are not closed equations for the coarse-grained velocity $\overline{{{\bf u}}}\_\ell$ itself. This fact is usually underlined by introducing the *turbulent/subscale stress* (**u**,**u**) := - tau so that the equations can be rewritten as t+= -, =0 FEuler with the non-closed term \boldmath $\tau$ℓ distinguished. However, the coarse-grained velocity $\overline{{{\bf u}}}\_\ell =G\_\ell\*{{\bf u}}$ and the subscale stress ${{\mbox{\boldmath $\tau$}}}\_\ell({{\bf u}},{{\bf u}})$ are both explicit functions of the fine-grained velocity field ${{\bf u}},$ as emphasized by my notations, and thus the equations are explicit conditions on that field. I may note that it is quite standard to consider that the large-scales  > ℓ in a turbulent flow must satisfy the Euler fluid equations. For example,, §31, wrote that: “We therefore conclude that, for the large eddies which are the basis of any turbulent flow, the viscosity is unimportant and may be equated to zero, so that the motion of these eddies obeys Euler’s equation.” What needs to be stressed is that the proper interpretation of such commonplace remarks is *not* that the coarse-grained velocity $\overline{{{\bf u}}}\_\ell$ satisfies the Euler equations in the usual naïve sense, but instead that ${{\bf u}}$ is described by a weak Euler solution at those scales, in the sense that holds. Although this point is elementary, it is frequently misunderstood and the source of common errors. There is another very important conceptual point which must be emphasized about the coarse-grained equations. Some critics have argued that these equations are unphysical and not appropriate as the basis for a fundamental theory of turbulence because both the length-scale ℓ and the filter kernel *G* are arbitrary. For example, wrote “The filter decomposition is formally more general than the Reynolds decomposition. However, the former is one among many decompositions, so to say, of a technical nature” (p.379) and also “After all Nature may and likely does not know about *our* decompositions.” (p.114). These are very shrewd remarks. In fact, I agree completely with these concerns of that Nature should not care about such arbitrary choices. Interestingly, this same problem has arisen in another area of physics, relativistic quantum field-theory, which is also plagued with similar UV divergences. In that case also arbitrary regularizations are required to eliminate the divergences and these introduce a new arbitrary length scale, equivalent to ℓ or, more commonly, a related energy scale *μ* = *c*ℏ/ℓ,  with *c* the speed of light and ℏ Planck’s constant. In the renormalized field theory, fundamental parameters of the theory such as coupling constants *λ*(*μ*) become dependent on this arbitrary scale, in the same manner that \boldmath $\tau$ℓ becomes dependent upon ℓ. Elementary particle physicists in the 1950’s were also worried that predictions of the theory should not depend upon such arbitrary choices and the concept of *renormalization group (RG) invariance* arose as the commonsense demand that any observable consequence of the theory should be independent of *μ*. For a very clear discussion, see, section 4.1. What makes RG invariance interesting and important is that non-trivial consequences can be deduced precisely by varying *μ* and demanding that physical results be *μ*-independent. What we shall see is that Onsager anticipated such arguments in the 1940’s and that his 1/3 Hölder claim for turbulent velocities is an exact non-perturbative consequence of such RG invariance. [b]0.32![Fine-grained](figures/Fig3a "fig:") [figfine] [b]0.32![Coarse-grained](figures/Fig3b "fig:") [figcoarse] [b]0.32![Further coarse-grained](figures/Fig3c "fig:") [figcoarser] [figcoarsening] To underscore the nature of the argument, I emphasize that coarse-graining is a purely passive operation —“removing one’s spectacles”—which changes no physical process. The effect of the coarse-graining operation in is illustrated by Fig. 3, which shows the velocity field observed at successively coarser spatial resolutions ℓ. Although the dynamics of the velocity field resolved at the Kolmogorov scale ℓ = *η* are described rather well by the Navier-Stokes equation, the coarse-grained velocity field $\overline{{{\bf u}}}\_\ell$ at scales ℓ ≫ *η* is described by the highly non-Newtonian equation. While the description of the dynamics changes with resolution ℓ,  nevertheless objective facts cannot depend upon the “eyesight” of the observer. Thus, an energy dissipation rate which is non-vanishing in the limit as *ν* → 0 implies that kinetic energy will decrease over a fixed interval of time [0, *t*],  as observed by experiment, and this fact cannot depend upon the arbitrary scale ℓ. An observation at resolution ℓ can only miss some kinetic energy of smaller eddies, since by convexity |(**x**,t)|2, II8 and it then follows that $E\_\ell(t):=(1/2)\int d^dx\, |\overline{{{\bf u}}}\_\ell({{\bf x}},t)|^2\leq (1/2)\int d^dx\, |{{\bf u}}({{\bf x}},t)|^2=E(t).$ If kinetic energy continues to decay even in the limit as *ν* → 0,  then such persistent energy decay must be seen also by the “myopic” observer who observes fluid features only at space-resolution ℓ. As I now show, however, the persistent energy decay observed at the fixed length-scale ℓ with *ν* → 0 is not due to molecular viscosity acting directly at those scales. The local kinetic energy balance at length-scales ℓ in the inertial-range obtained for the limit *ν* → 0 is calculated straightforwardly from to be && t(||2) +=-&& II15 where the quantity on the right side of the equation is given (with ${\bf A}{{\mbox{\boldmath $:$}}}{\bf B}=\sum\_{ij} A\_{ij}B\_{ij}$) by (**x**,t) = -(**x**,t)(**x**,t), II16 and represents the “deformation work” of the large-scale strain acting against small-scale stress, or the “energy flux” from resolved scales  > ℓ to unresolved scales  < ℓ. The mechanism of loss of energy by the inertial-range eddies is thus “energy cascade”, a term first used in this connection by. A key observation is that the stress-tensor ${{\mbox{\boldmath $\tau$}}}\_\ell({{\bf u}},{{\bf u}})$ may be rewritten in terms of *velocity-increments* $\delta {{\bf u}}({{\bf r}};{{\bf x}},t)={{\bf u}}({{\bf x}}+{{\bf r}},t)-{{\bf u}}({{\bf x}},t),$ as (**u**,**u**)=**u****u**-**u****u**, II19 where $\langle f\rangle\_\ell({{\bf x}},t):=\int d^dr\, G\_\ell(r) f({{\bf r}};{{\bf x}},t).$ This formula was originally obtained in in a slightly different form, and as above in as a physical re-interpretation of their result. Equation is easy to verify by direct calculation, but it can be simply understood as due to the invariance of the 2nd-order cumulant ${{\mbox{\boldmath $\tau$}}}\_\ell({{\bf u}},{{\bf u}})$ to shifts of ${{\bf u}}$ by vectors that are “non-random” with respect to the average ⟨ ⋅ ⟩ℓ over displacements ${{\bf r}},$ i.e. that are independent of ${{\bf r}}.$ This allows ${{\bf u}}({{\bf x}}+{{\bf r}},t)$ in the definition of ${{\mbox{\boldmath $\tau$}}}\_\ell({{\bf u}},{{\bf u}})$ to be replaced with $\delta{{\bf u}}({{\bf r}};{{\bf x}},t),$ yielding the formula. Similarly, one may rewrite eq. for coarse-grained velocity-gradients in terms of increments as (**x**,t)=-ddr  (G)(**r**) **u**(**r**;**x**,t), II20 using the fact that $\int d^dr\, ({{\mbox{\boldmath $\nabla$}}}G)\_\ell({{\bf r}})={{\mbox{\boldmath $0$}}}.$ As an immediate application of these formulas, one can rederive the prediction of that Hölder singularities *h* ≤ 1/3 are required in the velocity field in order for energy dissipation to persist in the limit *ν* → 0. Indeed, assuming for some constant *C* > 0 that |**u**(**r**;**x**,t)|C|**r**|h, II21 then it is straightforward to show using, and that (**x**,t)=O(3h-1). II22 As is clear from, persistent energy decay at resolution length ℓ can only occur if $\int d^dx\, \Pi\_\ell({{\bf x}},t)>0$ as *ν* → 0. On the other hand, the resolution scale ℓ is completely arbitrary. For any fixed ℓ one can take *ν* sufficiently small so that the “ideal equations” or hold to any desired accuracy at that scale, and then sequentially further decreasing ℓ one can correspondingly decrease *ν*. If the Hölder regularity held for all $({{\bf x}},t)$ with *h* > 1/3,  then clearly by it would follow that $\int d^dx\, \Pi\_\ell({{\bf x}},t)\to 0$ as ℓ → 0. This is a contradiction, since the rate of decay of energy must be independent of the arbitrary length-scale of resolution ℓ as ℓ → 0. Just as in, we thus infer that there must appear Hölder singularities *h* ≤ 1/3 in the limit as *ν* → 0,  or *R**e* → ∞. As I shall discuss in the next section [sec:DissAnom], Onsager had given a very similar argument to that above in his unpublished work. As an aside, I remark that the RG character of this argument can be made even more explicit by a somewhat different technical approach of, where it is observed for a Gaussian kernel $G({{\bf r}})=\exp(-r^2/2)/(2\pi)^{3/2}$ that $\frac{\partial}{\partial \ell^2} \overline{{{\bf u}}}\_\ell=\frac{1}{2}\Delta\overline{{{\bf u}}}\_\ell$ and thus =+(). RGflow In this approach, the subscale stress that “renormalizes” the bare Navier-Stokes dynamics can in fact be obtained by solving an equation that evolves \boldmath $\tau$ℓ in the scale parameter ℓ, analogous to the RG flow equations in high-energy physics and critical phenomena. Solving with the initial data \boldmath $\tau$ℓ∣ℓ = 0 = \boldmath $0$,  obtained = 02 d2. johnson Assuming the Hölder condition then gives ${{\mbox{\boldmath $\nabla$}}}\overline{{{\bf u}}}\_\lambda=O(\lambda^{h-1})$ as before and, from the identity, \boldmath $\tau$ℓ = *O*(ℓ2*h*). Thus, the same bound is deduced once again and this implies the 1/3 Hölder claim of Onsager. See also for a similar approach. The paper of derived in fact a stronger result, by replacing the Hölder regularity originally assumed by with a weaker assumption of Besov regularity. As discussed by, Besov regularity can be understood in terms of deterministic *p*th-order “velocity-structure functions” defined for absolute velocity increments and for space-averages over the flow domain Ω :  Sp(**r**)= ddx |**u**(**r**;**x**)|p. II23 Although power-laws in the separation distance $|{{\bf r}}|$ are generally observed empirically for turbulent fluid flows, Besov regularity requires only an upper bound Sp(**r**)Cp|**r**|p, |**r**|1. Besov If this inequality holds along with the modest assumption of finite *p*th-order moments ddx |**u**(**x**)|p <, Lp then the velocity field ${{\bf u}}$ is said to belong to the Besov space *B**p**σ*, ∞(Ω) with *σ* = *σ**p* :  = *ζ**p*/*p*. Note that the optimal constant $C\_p = \sup\_{|{{\bf r}}|\leq 1} \frac{S\_p({{\bf r}})}{|{{\bf r}}|^{\zeta\_p}} $ in the inequality is related to the so-called “Besov semi-norm” in the mathematical literature by the formula ${{\left\vert\kern-0.25ex\left\vert\kern-0.25ex\left\vert {{\bf u}}\right\vert\kern-0.25ex\right\vert\kern-0.25ex\right\vert}}\_{B^{\sigma,q}\_p}=C\_p^{1/p},$ so that the abstract semi-norm is given by the prefactor in expected power scaling laws. Note further that the Hölder condition is equivalent to the Besov condition for *p* = ∞,  with *σ*∞ = *h*. The result of was that energy dissipation non-vanishing in the limit as *ν* → 0 requires *ζ**p* ≤ *p*/3 for all *p* ≥ 3 or, equivalently, *σ**p* ≤ 1/3 for *p* ≥ 3. The argument generalizes that given previously for *p* = ∞ and uses the simple bound from the Hölder inequality valid for any *p* ≥ 3 | ddx (**x**,t)| p/3 p p/2=O(3p-1)since $\|{{\mbox{\boldmath $\nabla$}}}\overline{{{\bf u}}}\_\ell\|\_{p}=O\left(\ell^{\sigma\_p-1}\right)$ and ∥\boldmath $\tau$ℓ∥*p*/2 = *O*(ℓ2*σ**p*). For mathematical details, see, or. Just as before, if the Besov regularity held with *σ**p* > 1/3,  then it would follow that $\int d^dx\, \Pi\_\ell({{\bf x}},t)\to 0$ as ℓ → 0. This would yield the same contradiction as previously, since the rate of decay of energy must be independent of the arbitrary length-scale of resolution ℓ as ℓ → 0. Generalizing the statement of, one can infer that in fact *σ**p* ≤ 1/3 for all *p* ≥ 3 in the limit as *ν* → 0. Onsager’s ideas anticipate the “multifractal model” proposed by for the turbulent velocity field; see also for a comprehensive exposition. In fact, it is worth recalling the first sentence of : “A simple way of explaining power law structure function is to invoke singularities of the Euler equations considered as limit of the Navier-Stokes equations as the viscosity tends to zero.” proposed that the turbulent velocity field possesses a spectrum of Hölder exponents *h* ∈ [*h*min, *h*max] with each exponent *h* occurring on a set *S*(*h*) ⊂ Ω with Hausdorff dimension *D*(*h*). The velocity scaling exponents are then obtained by the formula p=h { hp+ (3-D(h))}, MF thus accounting for the observed deviations from the prediction *ζ**p* = *p*/3 of. The velocity scaling exponent relevant in this context can be understood as the “maximal Besov exponent” *σ**p* = *ζ**p*/*p* for any *p* ≥ 0 with p:=|**r**|0, zetap a concept which is meaningful even without any power-law scaling. As we shall see later, Onsager had arrived at similar conclusions already by 1945 (but without the modern concept of fractals). Note that *h**p* = *d**ζ**p*/*d**p* is the Hölder exponent *h* that yields the infimum in for each *p* ≥ 0. It then follows from or even directly from the concavity of *ζ**p* in *p* that *h**p* ≤ *σ**p* for all *p* ≥ 0. Thus, the theorem of implies also that *h**p* ≤ 1/3 for *p* ≥ 3 and the original result of is equivalent to the statement that *h*∞ = *h*min ≤ 1/3. It is important to emphasize that these are predictive statements which have received subsequent support from empirical determination of the multifractal dimension spectrum *D*(*h*) both from numerical simulations and from hot-wire experiments. As I shall discuss later, the “ideal turbulence” theory makes connection also with the alternative multifractal theory for the energy dissipation rate. Another successful prediction made by concerned the *locality* of the energy cascade, which is the statement that the nonlinear flux Πℓ defined in as a cubic function of the velocity field ${{\bf u}}$ is determined predominantly by velocity modes or “eddies” of scale near ℓ. In the words of : “The modulation of a given Fourier component of the motion is mostly due to those others which belong to wavenumbers of comparable magnitude.” In his 1945 letter to von Kármán and Lin (reproduced in, Appendix B), Onsager stated more precisely that “With a hypothesis slightly stronger than (14) the motion which belongs to wave-numbers of the same order of magnitude as $\underline{k}$ itself will furnish the greater part of the effective rate of shear,” where the mentioned condition (14) asserts that ${{\bf u}}$ is square-integrable but that ${{\mbox{\boldmath $\nabla$}}}{{\bf u}}$ is not. A condition of exactly this type is the Hölder condition, which may be shown for any 0 < *h* < 1 to imply locality. In fact, scale locality of energy cascade holds assuming only the *p*th-order Besov condition for any 0 < *σ**p* < 1. These predictions have been confirmed in a number of numerical studies, for example,. It is crucial that these locality properties hold instantaneously and deterministically, for individual flow realizations. The cited mathematical analyses and numerical simulations verify that averaging over velocity realizations improves the degree of locality but also that averaging is not necessary for local-in-scale interactions to dominate in energy transfer. Locality for individual realizations is necessary for another important feature of turbulence, the *universality* of small-scale statistics. The usual argument for such universal statistics has to do with the scale invariance of the Euler equations and the scale-locality of the energy transfer by which small scales are excited. It is by the latter property that, in the words of, “we are led to expect a cascade such that the wave-numbers increase typically in a geometric series, by a factor of the order 2 per step.” If each step in the cascade is chaotic and also of comparable nature to the previous steps, because of scale-invariance, then it is reasonable to expect that the details of large-scale flow features will be lost and superseded by motions intrinsic to the local Euler dynamics. It is further quite obvious that scale-locality for energy transfer only in the mean sense would be inadequate for such universality, because then long-range communication between large and small scales could exist instantaneously which is canceled only by averaging over time or initial data. This universality is one of the many features that connects Onsager’s “ideal turbulence” with the Large-Eddy Simulation/LES method of modelling turbulence. In LES the large scale motions (large eddies) of turbulent flow are computed directly and only small scale (sub-grid scale/SGS) motions are modelled, resulting in a significant reduction in computational cost compared to direct numerical simulation/DNS. The justification for the LES scheme is that the turbulent small-scales are expected to be statistically similar for every flow and thus may be universally modelled, while the non-universal large scales are explicitly computed at far lower cost. LES originated in the pioneering work of and, and it has developed historically with little connection to Onsager’s theory of “ideal turbulence”, although the two are quite intimately related. At the most superficial level, the analysis used by to prove Onsager’s 1/3 result is the same as the filtering approach advocated by and widely used in LES. Thus, the filtered equations used to derive Onsager’s theorem are the same as those employed in LES. However, filtering/coarse-graining is just one specific form of regularization and other schemes are possible (e.g. Onsager used also Fourier series as a regularizer). The specific regularization by spatial filtering is not intrinsic to either method. More important is that both approaches aim to describe individual flow realizations, unlike Reynolds-Averaged Navier-Stokes/RANS modelling which resigns itself to calculation of time- or ensemble-averaged velocities only. Furthermore, Onsager’s theory justifies the truism that an LES model should run even with molecular viscosity set to zero and thus should describe turbulent dissipation at infinite Reynolds number. ![Mixing of cream in a mug of coffee, moderately turbulent from mild stirring. Reproduced on license from Shutterstock.](figures/Fig4.jpg "fig:") [figcoffee] In concluding this section I want to emphasize the conceptual and practical importance of a deterministic description of turbulent flow for individual flow realizations. In much of the scientific literature on fluid turbulence, it is often reflexively assumed that any mathematical description must be statistical in nature. However, this instinctive reaction does not permit one to discuss even such a commonplace flow as your morning cup of coffee. See Fig. [figcoffee]. This flow has a decent Reynolds number that may be estimated of order *R**e* = *U**D*/*ν* = 3 × 104 (assuming stirring velocity *U* = 20 cm/s, cup diameter *D* = 6 cm, and kinematic viscosity $\nu=0.4\ {\rm mm}^2/{\rm s}$) and is thus modestly turbulent. However, there is no ensemble in sight! The coffee is stirred only a few times to mix the cream and thus the flow is decaying, so that one cannot average over time. Also, the size of the largest eddies is of the order of the cup diameter and thus one cannot average over many integral lengths of the flow. Nevertheless the fluid flow in the cup is turbulent each time you stir it and one must be able to describe the specific, individual flow. This need is common in many areas of science, such as geophysics and astrophysics, where some specific hurricane or some specific supernova must be understood. These remarks are not intended to deny the intrinsic stochasticity of turbulent flow. It is also true that, when you stir your morning cup of coffee, each time you will see a different pattern of the cream, no matter how carefully you try to repeat your actions each day. However, in many discussions of turbulence the applications of probabilistic methods are entirely gratuitous. It is only by eliminating the superficial and unnecessary resorts to a statistical description that one can uncover where probability theory is truly essential. ### Local Deterministic $\frac{4}{5}$th-Law and Dissipative Anomaly sec:DissAnom One of the landmark results of the statistical approach to turbulence is the celebrated *4/5th-law* derived by, who invoked statistical hypotheses of local homogeneity and isotropy of the flow. In a remarkable work developing Onsager’s ideas, have shown that the 4/5th law holds not only deterministically for individual flow realizations, not requiring any statistical hypotheses, but also that it holds in a much stronger spacetime local form. As we shall see, these developments were also foreshadowed by Onsager’s own unpublished work. Here I shall sketch the essential ideas in the work of and discuss their various ramifications. For a much more detailed explanation, see the online course notes of, Section III.C. The analysis of again attempts to understand the zeroth-law of turbulence or the “inertial-dissipation” of kinetic energy. The basic problem remains that UV divergences must appear in the inviscid limit, so that the fluid equations can no longer be understood as PDE’s in the naive sense. The starting point of is a balance equation for a point-split kinetic energy density $\frac{1}{2}{{\bf u}}({{\bf x}},t){{\mbox{\boldmath $\cdot$}}}{{\bf u}}({{\bf x}}+ {{\bf r}},t)$: $$\begin{aligned} \partial\_t (\frac{1}{2} {{\bf u}}\cdot {{\bf u}}') &+& {{\mbox{\boldmath $\nabla$}}}\cdot \left[(\frac{1}{2}{{\bf u}}\cdot {{\bf u}}'){{\bf u}}+ \frac{1}{2}(p {{\bf u}}' + p' {{\bf u}}) + \frac{1}{4}|{{\bf u}}'|^2 \delta {{\bf u}}- \nu {{\mbox{\boldmath $\nabla$}}}(\frac{1}{2}{{\bf u}}\cdot {{\bf u}}')\right] \cr &=& \frac{1}{4} {{\mbox{\boldmath $\nabla$}}}\_{{\bf r}}\cdot [\delta {{\bf u}}\,|\delta {{\bf u}}|^2] - \nu {{\mbox{\boldmath $\nabla$}}}{{\bf u}}: {{\mbox{\boldmath $\nabla$}}}{{\bf u}}' + \frac{1}{2}({\bf f} \cdot {{\bf u}}' + {\bf f'} \cdot {{\bf u}}), {\label}{split} \end{aligned}$$ where I have introduced the abbreviated notations $$\begin{aligned} {{\bf u}}&=& {{\bf u}}({{\bf x}},t),\,\,\, p = p({{\bf x}},t),\,\,\, {\bf f} = {\bf f}({{\bf x}},t) \cr {{\bf u}}' &=& {{\bf u}}({{\bf x}}+ {{\bf r}},t), \,\,\,p' = p({{\bf x}}+ {{\bf r}},t),\,\,\, {\bf f}' = {\bf f}({{\bf x}}+ {{\bf r}},t) \cr \delta {{\bf u}}&=& {{\bf u}}' - {{\bf u}}= {{\bf u}}({{\bf x}}+ {{\bf r}},t) - {{\bf u}}({{\bf x}},t) \end{aligned}$$ and I have permitted a body force ${\bf f}$ in the Navier-Stokes equation, for greater generality. Similar exact equations were introduced somewhat later by as deterministic versions of the Kármán-Howarth-Monin relations. However, the balance equation does not yet eliminate UV divergences by point-splitting alone, which requires further multiplying through by a filter kernel $G\_\ell({{\bf r}})$ and integrating over ${{\bf r}}.$ This yields a corresponding balance equation for the regularized energy density $\frac{1}{2} {{\bf u}}\cdot \bar{{{\bf u}}}\_\ell$: $$\begin{aligned} && \partial\_t (\frac{1}{2} {{\bf u}}\cdot \bar{{{\bf u}}}\_\ell) + {{\mbox{\boldmath $\nabla$}}}\cdot \left[(\frac{1}{2}{{\bf u}}\cdot \bar{{{\bf u}}}\_\ell){{\bf u}}+ \frac{1}{2}(p \bar{{{\bf u}}}\_\ell + \bar{p}\_\ell {{\bf u}}) + \frac{1}{4} \overline{(|{{\bf u}}|^2 {{\bf u}})\_\ell} - \frac{1}{4} \overline{(|{{\bf u}}|^2)\_\ell} {{\bf u}}- \nu {{\mbox{\boldmath $\nabla$}}}(\frac{1}{2}{{\bf u}}\cdot \bar{{{\bf u}}}\_\ell)\right] \cr && \hspace{30pt} =\ -\frac{1}{4 \ell} \int d^d {{\bf r}}\,({{\mbox{\boldmath $\nabla$}}}G)\_\ell({{\bf r}}) \cdot \delta {{\bf u}}({{\bf r}}) \,|\delta {{\bf u}}({{\bf r}})|^2 - \nu {{\mbox{\boldmath $\nabla$}}}{{\bf u}}: {{\mbox{\boldmath $\nabla$}}}\bar{{{\bf u}}}\_\ell + \frac{1}{2}({\bf f} \cdot \bar{{{\bf u}}}\_\ell + \bar{{\bf f}}\_\ell \cdot {{\bf u}}). {\label}{Fsplit} \end{aligned}$$ It is now easy to check that all terms remain bounded in the limit as *ν* → 0. The next step of was to study the convergence as *ν* → 0 of the various terms in the regularized energy balance, under the assumption of strong *L*3-convergence of Navier-Stokes solutions ${{\bf u}}^\nu$ to some ${{\bf u}},$ i.e. $\lim\_{\nu\to 0}\|{{\bf u}}^\nu-{{\bf u}}\|\_3=0.$ Note that ${{\bf u}}$ is then necessarily a weak Euler solution. As with the prior assumption of strong *L*2-convergence, this even stronger convergence assumption is not guaranteed *a priori*, but we shall see that it is implied by other empirically observed properties. Taking strong *L*3-convergence for granted, it is easy to check that all of the terms proportional to *ν* in in fact disappear in the limit. Furthermore, all of the remaining terms are found to converge in the sense of distributions to the same expression but with Navier-Stokes velocity ${{\bf u}}^\nu$ replaced with the limiting field ${{\bf u}}$, yielding a modified energy balance for the inviscid Euler solution: $$\begin{aligned} && \partial\_t (\frac{1}{2} {{\bf u}}\cdot \bar{{{\bf u}}}\_\ell) + {{\mbox{\boldmath $\nabla$}}}\cdot \left[(\frac{1}{2}{{\bf u}}\cdot \bar{{{\bf u}}}\_\ell){{\bf u}}+ \frac{1}{2}(p \bar{{{\bf u}}}\_\ell + \bar{p}\_\ell {{\bf u}}) + \frac{1}{4} \overline{(|{{\bf u}}|^2 {{\bf u}})\_\ell} - \frac{1}{4} \overline{(|{{\bf u}}|^2)\_\ell} {{\bf u}}\right] \cr && \hspace{30pt} =\ -\frac{1}{4 \ell} \int d^d {{\bf r}}\,({{\mbox{\boldmath $\nabla$}}}G)\_\ell({{\bf r}}) \cdot \delta {{\bf u}}({{\bf r}}) \,|\delta {{\bf u}}({{\bf r}})|^2 + \frac{1}{2}({\bf f} \cdot \bar{{{\bf u}}}\_\ell + \bar{{\bf f}}\_\ell \cdot {{\bf u}}). {\label}{FsplitE} \end{aligned}$$ Note, in particular, that validity in the sense of distributions means that the above balance equation is implicitly smeared with a smooth test function $\varphi({{\bf x}},t)$ in spacetime and that all derivatives acting on ${{\bf u}}$ or related solution fields can be moved over to the test function. From these remarks it is then easy to check further that the limit as ℓ → 0 of all terms exist directly, except for the term containing $({{\mbox{\boldmath $\nabla$}}}G)\_\ell({{\bf r}})$. However, because all other terms in the equation converge, this term must also converge. thus concluded that a local kinetic energy balance holds in the sense of distributions for the limiting Euler solution t (|**u**|2) + - D(**u**) +**f****u**Ebal where D(**u**) = 0 dd **r**(G)(**r**) **u**(**r**) |**u**(**r**)|2 := 0 D(**u**). DR The naïve kinetic energy balance of the Euler equations is broken by the term $D({{\bf u}}),$ which can be easily shown to vanish unless the velocity has Besov regularity exponent *σ**p* ≤ 1/3 for *p* ≥ 3. Note in fact that the same final equation can be obtained also by considering the coarse-grained energy balance with both factors of ${{\bf u}}$ filtered and then taking the limit ℓ → 0 in the same manner yields D(**u**) = 0 DPi so that the “inertial dissipation” in fact represents nonlinear energy cascade to infinitesimally small length scales. Physically, this loss of energy must arise from viscous dissipation. Notice, however, the equations - can be derived for *any* weak Euler solution, under the sole requirement that ${{\bf u}}\in L^3.$ Because the notion of weak Euler solution is time-reversible but $D({{\bf u}})$ is cubic in the velocity field, it follows that any Euler solution with $D({{\bf u}})>0$ yields by time-reversal another Euler solution with $D({{\bf u}})<0.$ It is only Euler solutions obtained in the inviscid limit, or so-called “viscosity solutions” of Euler, which can be expected to satisfy this physical expectation. Because coarse-graining is purely optional, one should be able to obtain the limiting energy balance directly in the limit *ν* → 0. This is what did, starting with the familiar kinetic energy balance for Navier-Stokes solutions ${{\bf u}}^\nu$: t( |**u**|2) + = - |**u**|2 +**f****u**. NSbal (Here I note in passing that considered a more general situation where the Navier-Stokes solutions are themselves singular and must be interpreted distributionally à la, in which case the “=” in is replaced by “ ≤ ”. For reasons discussed later, I am not interested physically in this mathematical generality.) Now assuming once more the strong *L*3 convergence ${{\bf u}}^\nu\to{{\bf u}},$ it is easy to see that all of the terms on the lefthand side converge distributionally as *ν* → 0 to the expected limit on the lefthand side of and likewise ${\bf f}{{\mbox{\boldmath $\cdot$}}}{{\bf u}}^\nu\to{\bf f}{{\mbox{\boldmath $\cdot$}}}{{\bf u}}.$ The only term which cannot be shown directly to converge is the viscous dissipation $\nu |{{\mbox{\boldmath $\nabla$}}}{{\bf u}}^\nu|^2,$ but, since all other terms in the Navier-Stokes balance converge, so also does this term. One thus derives the same local energy balance as before for the limiting Euler solution but one obtains furthermore the new expression D(**u**) = 0 |**u**|2 0. match This result can be understood as a matching relation for the “viscosity solutions” of Euler equations, which equates the energy carried by nonlinear cascade to infinitesimally small scales and the usual energy dissipation by molecular viscosity. A final important result obtained by exploited additional freedom in the regularization of UV divergences. In fact, not only is the length scale ℓ arbitrary, but so also is the precise choice of the kernel *G*. This freedom can be exploited by choosing a rotationally symmetric kernel for which $${{\mbox{\boldmath $\nabla$}}}G({{\bf r}}) = \hat{{{\bf r}}} G'(r).$$ In that case, the *d*-dimensional integral over ${{\bf r}}$ in the formula for $D({{\bf u}})$ can be transformed into hyperspherical coordinates so that $$D\_\ell({{\bf u}}) = \frac{1}{4 \ell} \int^\infty\_0 r^{d-1} dr \, \int\_{S^{d-1}} d \omega\_d (\hat{{{\bf r}}}) \hat{{{\bf r}}} \cdot \delta {{\bf u}}({{\bf r}}) |\delta {{\bf u}}({{\bf r}})|^2 (G')\_\ell (r)$$ where *S**d* − 1 is the unit hypersphere and *d**ω**d* is the measure on in *d*-dimensional solid angles. Using the integration by parts identity ∫0∞*r**d**G*ʹ(*r*) *d**r* = *d*,  one can infer a new identity r 0 = - D(**u**), 43rd where ⟨ ⋅ ⟩*a**n**g* denotes for any function of ${{\bf r}}$ the average with respect to *ω**d* over the direction $\hat{{{\bf r}}}$ and $\delta u\_L({{\bf r}})=\hat{{{\bf r}}}{{\mbox{\boldmath $\cdot$}}}\delta{{\bf u}}({{\bf r}})$ is the longitudinal velocity increment. Combined with the previous expression for $D({{\bf u}})$ in terms of the viscous dissipation, one can see that for *d* = 3 has the form of the Kolmogorov-Yaglom 4/3rd-law. It is well-known in the standard derivation by ensemble-averages that the Kolmogorov 4/5th- and 4/15th-laws are equivalent to the 4/3rd-law under the assumption of statistical isotropy. Since the angle average in the present derivations provides isotropy without any statistical hypotheses, some further manipulation yields $$\begin{aligned} \lim\_{r \rightarrow 0} \frac{\langle\delta u^3\_L({{\bf r}})\rangle\_{ang}}{r} &=& -\frac{12}{d(d+2)} D({{\bf u}}) {\label}{45th} \end{aligned}$$ $$\begin{aligned} \lim\_{r \rightarrow 0} \frac{\langle\delta u\_L({{\bf r}}) \delta u^2\_T({{\bf r}})\rangle\_{ang}}{r} &=& -\frac{4}{d(d+2)} D({{\bf u}}) {\label}{415th} \end{aligned}$$ where $\delta {{\bf u}}\_T({{\bf r}})=\delta {{\bf u}}({{\bf r}})-\hat{{{\bf r}}}\delta u\_L({{\bf r}})$ is the full transverse velocity increment and $\delta u\_T({{\bf r}})$ is the magnitude of any particular (fixed) component. One can see for *d* = 3 that has the form of the usual 4/5th-law and has the form of the 4/15th-law. Note, however, that the relations - are deterministic, holding for individual flow realizations, and are furthermore spacetime local in the sense of distributions. The latter statement simply means that they hold with both sides smeared by an arbitrary space-time test function $\varphi({{\bf x}},t),$ taking first *ν* → 0 and then *r* → 0. Thus, these results represent a consider strengthening of the original ensemble-average relations of. An effort was made by to test these deterministic local relations in pseudospectral numerical simulations, but the 5123 resolution available at the time was insufficient. It was possible to show that angle-averaging greatly improved the validity of the observed 4/5th-law, attaining results comparable to those at 10243 resolution even without time-averaging. Thus, the local 4/5th law appeared in fact to hold instantaneously, without smearing in time, beyond what could be proved mathematically. Note that it is only with simulations of 163843 resolution that the 4/5th-law has recently been demonstrated convincingly, exploiting both angle-averaging and time-averaging. This brings us close to being able to check the local 4/5th law in simulations of 327683 resolution by dividing the computational cube into eight 163843 subcubes and verifying that the relation holds with the structure functions and mean dissipation calculated by space-averages individually over each subcube. Of course, the result should hold with any spacetime test function *φ* held fixed in the limit first *ν* → 0 and then *r* → 0,  but current computational limitations seem to make characteristic functions of octants the only choice consistent with the stringent condition that the local Reynolds number must be high. It is worth remarking that the analogue of the deterministic, local 4/5th-law can be rigorously derived for the “viscosity solutions” of the inviscid Burgers equation, $\partial\_t u +\partial\_x (\frac{1}{2}u^2)=0.$ In that case the local energy balance holds $$\partial\_t (\frac{1}{2}u^2) + \partial\_x (\frac{1}{3} u^3) = - D(u)$$ with the “inertial dissipation” given by $$\lim\_{r \rightarrow 0} \frac{\langle\delta u^3\_L(r)\rangle\_{ang}}{|r|} = -12 D(u)$$ where $\delta u\_L(r):={\rm sign}(r) \delta u(r)$ and $\langle\delta u^3\_L(r)\rangle\_{ang} = \frac{1}{2}[\delta u^3(+ |r|) - \delta u^3(-|r|)]$. See, Section III, and. These relations correspond to the ensemble-averaged “12th-law” for Burgulence, but valid in spacetime local form with “inertial dissipation” arising entirely from shock singularities. It should be emphasized again that the deterministic versions of the 4/5th-law and the 4/15th-law, and all of the earlier results in this section, require no statistical assumptions whatsoever. Neither local homogeneity, local stationarity, isotropy nor any other hypothesis about the flow statistics was invoked in their derivation. In particular, these relations will apply in the “nonequilibrium decay regime” highlighted in the review of on turbulent energy dissipation. reported observations in a near wake region of various grids and bluff bodies that the time-averaged quantity *C**ε* :  = ⟨*D*⟩ = ⟨*ɛ*⟩/(*u*′3/*L*) scales as  *R**e**I**m*/*R**e**L**n* where in his notations *R**e**I* = *U**L**b*/*ν* is the “global” or “inlet Reynolds number” based on the inlet flow speed *U* and a length *L**b* giving the overall size of the grid or body, whereas *R**e**L* = *u*ʹ*L*/*ν* is the “local Reynolds number”. In the wake flows considered by both the rms velocity fluctuation *u*ʹ and integral length *L* are statistical quantities that depend upon the longitudinal distance *x* of the observation point from the grid/body. Since wake flows involve interactions with solid walls, I shall treat the energy cascade in such flows in section [sec:energy], but here I note that all of our previous results carry over, with possible modifications only directly at the solid surface. As I shall discuss in section [sec:overture], for flows with walls or solid bodies, I always consider the global Reynolds number *R**e**I*,  which appears in the non-dimensionalization of the Navier-Stokes equation. As noted already by (p.108), the quantity *C**ε* at each *x*-location is “more or less independent of the Reynolds number” *R**e**I*, even in the nonequilibrium decay region because “*m* ≈ 1 ≈ *n*, and *R**e**L* increases linearly with *R**e**I*”. In fact, *m* = *n* corresponds exactly to the “dissipative anomaly” conjectured by and all of our previous conclusions follow in the limit *R**e**I* → ∞. In particular, the local 4/5th-law and the 4/15th-law will hold in the nonequilibrium decay region, not only for time/ensemble-averages but even for individual flow realizations. has observed that “the Richardson-Kolmogorov cascade does not seem to be the... interscale energy transfer mechanism at work”, but the dissipative anomaly for *R**e**I* ≫ 1 must occur by Onsager’s local cascade mechanism, which generalizes Kolmogorov’s equilibrium picture. I agree with the observation of that the 4/5th-law of need not be valid in general for *r* ≃ *L*. In fact, our local, deterministic version will generally only hold for *r* ≪ *L**φ*,  where *L**φ* measures the spatial diameter of the support of the test function *φ* required to make meaningful. Amazingly enough, most of the previous developments were foreseen by Onsager in his unpublished work in the 1940’s. When I was on sabbatical at Yale in 2000, I was able to examine Onsager’s private research notes available there on microfiche and I was astonished to discover that Onsager’s own proof of his 1/3 Hölder claim was essentially identical to that given by. These notes are now all available online through the Onsager Archive hosted by the NTNU in Trondheim and the reader can peruse them at <https://ntnu.tind.io/record/121183>. The essential calculations are on pp.14-19 of Onsager’s notes in this folder, where he derives the identity but in a space integrated form and with the filtering function $G\_\ell({{\bf r}})$ denoted $F({{\bf r}}).$ It is interesting that on p.15, Onsager begins the calculation by writing an overbar, his notation for ensemble-average, but then scratches out the overbar in the second line, apparently realizing that volume-averaging was sufficient to derive the result. This seems to have been the moment that the theme of my essay was born. Onsager communicated his identity to von Kármán and Lin in his letter of June 1945 (reproduced as Appendix B in ) where he gave an argument for 1/3-scaling by taking $F({{\bf r}})$ to be a spherical tophat filter of radius *a* and then letting *a* → 0. It is further interesting that Onsager in this same letter also foresaw that inertial-range intermittency could lead to an energy spectrum having a steeper slope than  − 5/3,  writing that “As far as I can make out, a more rapid decrease of $\overline{a\_k^2}$ with increasing $\underline{k}$ would require a ‘spotty’ distribution of the regions in which the velocity varies rapidly between neighboring points.” Onsager then notes that all velocity increments supported in discontinuities at vortex sheets would produce *S*2(*r*) ∝ *r*,  corresponding to a *k*− 2 energy spectrum as for Burgers equation. Such sheets would also be consistent with *S*3(*r*) ∝ *r* but Onsager remarks that they would not agree well with experimental observations on velocity traces. It is worth remarking that vortex sheets cannot, in fact, be the origin of anomalous energy dissipation. For more discussion of Onsager’s insights on inertial-range intermittency, see, Section IV.C. The local kinetic energy balance derived by for weak Euler solutions is the most succinct formulation of Onsager’s proposal for turbulent dissipation ``without the final assistance by viscosity." This result has a very familiar appearance to a modern field-theorist, because it is reminiscent of anomalies in classical conservation laws due to UV divergences in the quantum field theory. This analogy seems to have been first pointed out by, who did not know about Onsager’s work, of course, but who based his discussion on the statistical theory of Kolmogorov. Polyakov pointed out the essential similarity between turbulent cascades and chiral anomalies in quantum Yang-Mills theory, as both involve a constant flux through wavenumbers which vitiates a naïve conservation law. He pointed out also that the first derivation of the chiral anomaly in quantum electrodynamics by using a point-splitting regularization was very similar to the derivation of the 4/5th-law by. It is interesting that Schwinger seemingly did not fully appreciate the significance of his own calculation and did not point out its implication that conservation of axial charge is violated. On the other hand, we now see that Onsager had used a similar point-splitting regularization already in the 1940’s and had fully appreciated its consequence that naïve conservation of kinetic energy in the inviscid limit is broken by the “violet catastrophes” in turbulent flows. This phenomenon is thus now commonly called a turbulent *dissipative anomaly*, because of the close connection with quantum-field theory anomalies. For more detailed discussion of this analogy, see, Section IV.B. I would like to make just a brief remark why the term “anomaly” is apt for the turbulent phenomenon, independent of the connection with quantum field-theory. It should be emphasized that no fundamental microscopic conservations laws for total energy, total linear momentum, total angular momentum, etc. are ever violated by such anomalies. However, there is obviously no microscopic law of “conservation of kinetic energy”! This conservation law is an *emergent property* of inviscid hydrodynamics due to its Hamiltonian character, in which the kinetic energy of the fluid is itself the conserved Hamiltonian. It is because of this Hamiltonian structure of the Euler equations that kinetic energy conservation is formally expected and thus the breakdown of this conservation is “anomalous”. The same is true more generally for turbulent dissipative anomalies. For example, conservation of helicity and conservation of circulation on arbitrary loops are not sacrosanct microscopic laws but are instead emergent laws connected with “relabelling symmetry”, an infinite-dimensional symmetry group of the action for three-dimensional ideal Euler equations. It is such emergent conservation laws of the ideal fluid which may be afflicted with dissipative anomalies in turbulent flows. It is interesting that some of these symmetries are preserved in a formulation of the viscous Navier-Stokes via a stochastic least-action principle, suggesting that not all of these emergent conservation laws will be explicitly broken by dissipative anomalies but may instead be realized in an unconventional stochastic form. See further discussion in the following section [sec:open] *(b)*. Finally, I point out that the work of makes a connection between the “ideal turbulence” theory and the multifractal model of energy dissipation. After had proposed his lognormal model of turbulent energy dissipation to account for intermittency corrections, introduced a more general description of the energy dissipation as a *multifractal measure* with a spectrum of singularities. Subsequent experimental studies of supported these predicted scaling properties of turbulent energy dissipation. The result of shows that the dissipative anomaly term $D({{\bf u}})$ in the inviscid energy balance coincides exactly with this multifractal measure. In fact, the analysis of implies that the inviscid limit of the viscous energy dissipation $\varepsilon^\nu=2\nu |{{\bf S}}^\nu|^2$ exists and coincides with $D({{\bf u}}),$ which is a non-negative distribution and, thus, a measure. Recent rigorous results relate the fractal dimension of the support of this measure to the inertial-range intermittency of the velocity increments. ### Implications and Open Questions sec:open There are many questions raised by the original analysis of and also many subsequent developments extending and elaborating his ideas. Here I shall briefly review and discuss such further implications and open issues. . Onsager’s result and its extensions by and others show that singularities of the velocity field are required to explain anomalous dissipation of kinetic energy in the inviscid limit of incompressible hydrodynamic turbulence. However, this theory does not explain the origin of such singularities. The traditional view has been that these singularities arise from finite-time blow-up of smooth Euler solutions, e.g. see, section 9.3. The most significant argument in favor of this view is based on a set of results in mathematical theory of PDE’s which goes by the name *weak-strong uniqueness*. Such results state that a weak Euler solution (or an Euler solution in even a more general sense: see section [sec:ConjIntro]) which has total kinetic energy non-increasing in time must coincide with a strong (i.e. smooth) solution with the same initial data, over the entire time-interval for which the latter exists. E.g. see. These results can be used to infer that a weak Euler solution obtained by a zero-viscosity limit of a Navier-Stokes solution, even in a weaker sense of limit than discussed in section [sec:DissAnom], must coincide with any such smooth Euler solution. In particular, these weak-strong uniqueness results rule out the appearance of anomalous energy dissipation over any finite time-interval, if the Navier-Stokes equation is solved with initial data that is a smooth velocity field or that even converges (say in *L*2) to a smooth field, and if the Euler solution with that initial data does not blow up. These arguments seem to suggest that anomalous dissipation requires finite-time Euler singularities. There has also been recent progress in showing blow-up of strong Euler solutions, both by numerical simulations and by rigorous mathematical analysis. On the other hand, there are significant reasons to doubt that finite-time blow-up of smooth Euler solutions has anything at all to do with empirical observations on fluid turbulence. First, weak-strong uniqueness is known to break down in the presence of solid boundaries, requiring in that case additional assumptions that may well be violated. This point will be discussed at length in section [sec:wkstrng]. Furthermore, the hypothesis that dissipative singularities in incompressible turbulence form from finite-time Euler singularities does not correlate well with observations. For example, it does not account for the dichotomy demonstrated by between flows with hydraulically smooth and rough walls, with vanishing dissipation for hydraulically smooth walls and a dissipative anomaly for hydraulically rough walls. In fact, the best numerical and mathematical evidence for a finite-time blow-up is in a cylindrical domain with smooth boundaries, where empirical evidence suggests anomalous dissipation does not exist! I personally am of the opinion that solid walls must play a crucial role in the appearance of anomalous energy dissipation in incompressible fluid turbulence, as I shall discuss in more detail in section [walls]. I just note here that *all* of the laboratory experiments and natural observations that support a turbulent anomaly involve fluid-solid interactions, e.g. the interactions with the solid grid for decaying turbulence in a wind-tunnel. The obvious objection to these remarks is the evidence for a dissipative anomaly arising from numerical simulations in a periodic box, with no walls. However, the evidence for anomaly via blow-up from simulations is weak. For example, the simulation of does not start with smooth initial data. As already discussed by, Remark #4, the simulation reported in uses an iterative initialization procedure which builds in an initial quasi-singularity, corresponding to an increasing range of Kolmogorov-type spectrum. This common device is widely regarded as a numerical short-cut to accelerate the convergence to a steady state, but it renders the simulations irrelevant to the issue of finite-time singularity. There are simulations that do employ smooth initial data, e.g. the Taylor-Green vortex as discussed recently by. However, here the evidence for anomalous dissipation in a finite time is weaker, with no completely compelling convergence to a non-zero limit within computational limitations. . One of the most remarkable consequences of Onsager’s Hölder singularity result was first discovered in the work of, who studied Lagrangian fluid particle trajectories that solve the initial-value problem: =**u**(**x**,t), **x**(0)=**x**0. part-ivp When ${{\bf u}}$ is the limiting Euler solution at infinite-*R**e* then, as recalled by, the maximal Hölder regularity derived by is insufficient to guarantee a unique solution of and there is generally a continuous infinity of solutions with exactly the same initial data. It was realized by that this non-uniqueness permits stochasticity to persist in the high-Reynolds limit. For example, the position of a Brownian particle such as a small colloid or a dye molecule advected by a turbulent flow will satisfy instead a stochastic ODE: d =**u**(,t)dt+ d(t), **x**(0)=**x**0, spart-ivp where *D* is the molecular diffusivity of the particle and $\tilde{{\bf W}}(t)$ is a Wiener process modeling the Brownian motion. To study the high-*R**e* limit, one may again non-dimensionalize by defining $\hat{{{\bf u}}}={{\bf u}}/u',$ $\hat{{{\bf x}}}={{\bf x}}/L,$ *t̂* = *t*/(*L*/*u*ʹ) and then *D* is replaced by 1/*P**e*,  where *P**e* = *u*ʹ*L*/*D* is the Péclet number. In the joint limit *R**e* ≫ 1,  *P**e* ≫ 1,  the Lagrangian particle trajectories $\tilde{{{\bf x}}}(t)$ solve the limiting deterministic ODE, but showed that they may remain random in the limit with a non-trivial transition probability density $p\_{{\bf u}}({{\bf x}},t|{{\bf x}}\_0,0)$ to arrive at position ${{\bf x}}$ at time *t*. See Fig. [figSS]. The essential physics was implicit already in the work of on 2-particle turbulent dispersion, according to which initial particle separations are “forgotten” at sufficiently long times. It is important to note however that there is no averaging over velocities in the prediction of and that the transition probability represents stochastic particle evolution in a fixed realization ${{\bf u}}$ of the turbulent velocity field. furthermore showed that this “spontaneous stochasticity” of Lagrangian particles provides a mechanism for anomalous dissipation of a concentration field *c* of such tracers, which satisfies the advection-diffusion equation t c(**x**,t)+**u**(**x**,t)c(**x**,t)= Dc(**x**,t), c(**x**,0)=c0(**x**). adv-diff Since the solution of this equation (in non-dimensionalized form) has the exact Feynman-Kac representation c(**x**,t) = p**u**Re,Pe(**x**,t|**x**0,0) c0(**x**0) ddx0, cFKrep the existence of a nontrivial limit $\lim\_{Re,Pe\to\infty} p\_{{\bf u}}^{Re,Pe}({{\bf x}},t|{{\bf x}}\_0,0) =p\_{{\bf u}}({{\bf x}},t|{{\bf x}}\_0,0)$ implies by convexity that $\int \frac{1}{2} c^2({{\bf x}},t)\, d^dx < \int \frac{1}{2} c^2\_0({{\bf x}}\_0)\, d^dx\_0 $ in the limit *R**e*, *P**e* → ∞. ![Sketch of the solutions of a deterministic ODE d{{\bf x}}/dt={{\bf u}}({{\bf x}},t) for deterministic initial data {{\bf x}}_0 but with singular velocity {{\bf u}}. Unlike traditional unique solutions, the trajectories spread randomly, like a plume of smoke. Reproduced with permission from G. Falkovich, K. Gawȩdzki, & M. Vergassola, Particles and fields in fluid turbulence, Rev. Mod. Phys. 73, 913–975 (2001). Copyright (2001) by the American Physical Society.](figures/Fig5.png "fig:") [figSS] The original work of was carried out for the synthetic turbulence model of, who considered advection by Gaussian random velocity fields that are delta-correlated in time. Note that for suitable power-law spatial correlations to mimic inertial-range turbulent velocity fields, the realizations of the Kraichnan velocity ensemble are Hölder continuous with probability one. A very important feature of this model is that the limiting probability distributions p**u**(**x**,t|**x**0,0)=Re,Pe p**u**Re,Pe(**x**,t|**x**0,0) are *independent* of the particular subsequences *R**e**k*,  *P**e**k* → ∞ and, in fact, are independent of the particular form of regularization and noise, e.g. the Brownian motion $\tilde{{{\bf W}}}(t)$ in might be replaced with a fractional Brownian motion and the limit would be the same. This strong result for the Kraichnan model follows from a rigorous study by who gave a direct construction of a Markov transition probability $p\_{{\bf u}}({{\bf x}},t|{{\bf x}}\_0,0)$ for the zero-regularization, zero-noise problem by a Weiner chaos expansion in the white-noise advecting velocity field ${{\bf u}}({{\bf x}},t).$ It is a consequence of this construction that any reasonable spatial regularization and noisy perturbation will converge to this same transition probability in the zero-regularization, zero-noise limit. It was also shown by that the realizations $\tilde{{{\bf x}}}(t)$ of the limiting Markov process with transition densities $p\_{{\bf u}}({{\bf x}},t|{{\bf x}}\_0,0)$ are solutions (in a generalized sense) of the deterministic initial-value problem for each fixed realization ${{\bf u}}$ of the Kraichnan velocity ensemble. Thus, in this case, the Markov process $\tilde{{{\bf x}}}(t)$ should be regarded as the proper solution of the above initial-value problem, that is then well-posed in the sense of Hadamard. Spontaneous stochasticity in the strong sense should be understood to include such universality of the limiting random process, that is, the robustness of the stochastic solution to different regularizations and noisy perturbations. Extensions of the work of and further developments on the Kraichnan model are described by. This comprehensive review makes clear that many of the results have validity extending beyond the Kraichnan model and, in fact, showed that spontaneous stochasticity is the only possible mechanism of scalar anomalous dissipation away from solid walls, for both passive and active scalars advected by a general incompressible velocity field. Note that universality of the limiting spontaneous statistics is not required in this case, so that different limiting random processes yielding anomalous dissipation may result from different subsequences and/or different regularizations.There are, however, only a handful of model velocity fields for which Lagrangian spontaneous stochasticity can be demonstrated from first principles. Renormalization group methods from statistical physics apply to velocity fields with suitable scaling properties and yield universality of the limit, but the underlying ergodic properties of the dynamical flows that are required for this method are difficult to establish in general. One interesting example of a velocity field solving a PDE is the solution of inviscid Burgers equation, where Lagrangian spontaneous stochasticity holds backward in time at shock locations and explains anomalous energy dissipation. There is no such spontaneous stochasticity for particles advected forward in time by the Burgers velocity field and it has been speculated that similar time-asymmetry may be a more general feature of Lagrangian spontaneous stochasticity in dissipative weak solutions of PDE’s obtained as inviscid limits. As far as Navier-Stokes turbulence is concerned, there is so far no compelling evidence of Lagrangian spontaneous stochasticity from laboratory experiments. On the other hand, numerical simulations of homogeneous, isotropic turbulence in a periodic domain provide reasonable evidence for the “forgetting” of initial separations of deterministic Lagrangian particles and “forgetting” of both initial separations and molecular diffusivity of stochastic Lagrangian trajectories. Setting aside the important issue of empirical evidence, another crucial question is what implications Lagrangian spontaneous stochasticity might have for incompressible Navier-Stokes turbulence. A Feynman-Kac representation of Navier-Stokes solutions was derived by which is similar to for the advection-diffusion equation but more non-trivial, because of the nonlinearity of the equations. It was pointed out by that the Constantin-Iyer representation corresponds to a stochastic principle of least-action, thus making connection with Hamiltonian fluid mechanics. In particular, the stochastic action functional for incompressible Navier-Stokes is invariant under the infinite-dimensional symmetry group of particle-relabelling just as is the action for deterministic Euler. The remarkable properties of vorticity under Euler dynamics that follow from that symmetry therefore carry over to Navier-Stokes in a stochastic fashion. For example, the Kelvin Theorem of conservation of circulation on an arbitrary Lagrangian loop for Euler holds also for Navier-Stokes in the sense that the circulation on stochastically advected loops is a backward-in-time martingale. This property prescribes the “arrow-of-time” of the irreversible Navier-Stokes dynamics and it is plausible that a similar property carries over to dissipative weak Euler solutions obtained in the inviscid limit. The dissipative anomaly for Navier-Stokes turbulence may be characterized also in terms of the time-asymmetry of separation of stochastic Lagrangian particle, which in 3D separate faster backward in time than they do forward in time. . A frequently-made criticism of Onsager’s “ideal turbulence” theory is that it applies only to the unrealistic limit *R**e* = ∞ and is thus inapplicable to real-world turbulence which is, of course, always at a finite Reynolds number. As an example, I may quote from the monograph of, §10.3.2: “... it is not clear why results for finite *R**e* (i.e., for NSE having no singularities or extremely ‘intermittent’ ones) are relevant for the limit (if such exists) *R**e* → ∞ (e.g., for Euler equation with space-filling singularities)”. In some respects, such criticisms are a simple misunderstanding of the elementary concept of limit. Indeed, the predictions of a theory for *R**e* → ∞ are experimentally falsifiable, as they must be valid to arbitrary accuracy by taking Reynolds numbers larger (but finite). There is, however, a legitimate question regarding any such predictions of an “ultimate regime” concerning how large *R**e* must be chosen to observe the predictions of the theory. This is especially important for Onsager’s theory since, as has often been observed, the strict requirement for its validity is that the number of cascade steps must be large, that is, log2*R**e* ≫ 1 and this condition is hard to satisfy in even the highest Reynolds-number terrestrial turbulent flows. There are two responses to this very important question. First, there are explicit error estimates in the mathematics of the Onsager theory which provide bounds on the correction terms at large but finite *R**e*. For example, in the coarse-grained energy balance equation there is at finite Reynolds number a viscous dissipation term $\nu |{{\mbox{\boldmath $\nabla$}}}\overline{{{\bf u}}}\_\ell|^2$ in addition to the inertial flux term Πℓ. It is not difficult to derive bounds of the form ||2 = O(u2()/2) locally in space-time with $\delta u(\ell)=\sup\_{|{{\bf r}}|<\ell} |\delta{{\bf u}}({{\bf r}})|,$ or similar bounds in terms of Besov norms for space-integrated dissipation. These estimates provide concrete upper estimates on the finite-*R**e* corrections, which can explain departures from the *R**e* = ∞ theory. Second, Onsager’s RG-type arguments can also be applied directly to the Navier-Stokes equations at large but finite-*R**e* and do not require any assumptions about existence of weak Euler solutions in the limit *R**e* → ∞. For example, study the balance of the subscale kinetic energy $k\_\ell:=(1/2){\rm Tr}\,{{\mbox{\boldmath $\tau$}}}\_\ell$ for the forced Navier-Stokes equation, which takes the form t k+ **J**= -+||2 ++ (**f**;**u**) where ${\bf J}\_\ell$ is a suitable spatial flux of the subscale kinetic energy and $\tau\_\ell({\bf f};{{\bf u}}):=\overline{({{\bf u}}{{\mbox{\boldmath $\cdot$}}}{\bf f})}\_\ell -\overline{{{\bf u}}}\_\ell{{\mbox{\boldmath $\cdot$}}}\overline{\bf f}\_\ell$ is the direct power input by the force into unresolved scales. If one assumes suitable Besov regularity of the Navier-Stokes solutions, ${{\bf u}}^\nu\in B^{\sigma,\infty}\_p,$ *p* ≥ 3,  uniform in the Reynold number, then one may derive bounds of the form D=(1/Re)||2=O(Re). gen-On49 These bounds follow by the same basic principle of independence of the physics on the arbitrary coarse-graining scale ℓ,  which allows one to choose ℓ/*L* ∝ *R**e*− 1/(1 + *σ*) to optimize the estimates. Following the same ideas of Onsager’s argument, one can then deduce the existence of “quasi-singularities” in the Navier-Stokes solutions from the observation of energy dissipation vanishing slowly with *R**e* :  D=(1/Re)||2Re- weak-anom which will require $\sigma\leq \sigma\_{\alpha}:=\frac{1+\alpha}{3-\alpha}.$ Note that when *α* = 0,  one recovers Onsager’s critical value $\sigma=\frac{1}{3},$ but when *α* > 0 instead $\sigma\_\alpha>\frac{1}{3}.$ This is a significant strengthening of the Onsager’s original result, because empirical observations alone could never allow one to distinguish *α* = 0 from a very tiny value of *α*. Nevertheless, Onsager’s conclusions about “quasi-singularities” are robust, remaining valid even under the weaker condition. The condition on the dimensionless dissipation rate for *α* > 0 has been evocatively termed a *weak dissipative anomaly* by. Note that whenever *α* < 1 in, then $|\widehat{{{\mbox{\boldmath $\nabla$}}}}\widehat{{{\bf u}}}|^2\propto Re^{1-\alpha}\to\infty$ as *R**e* → ∞ and thus classical solutions of the PDE’s can no longer exist in the limit of infinite-*R**e*. Although no assumption about existence of weak solutions is necessary to derive the bound, such weak Euler solutions nevertheless emerge under the natural assumption of some Besov regularity with *σ* < *σ**α* that is uniform in *R**e*. Note furthermore that have proved that the Kolmorogov 4/5th-law remains valid under the assumption of a weak dissipative anomaly only, unlike the original derivation of which assumed a *strong dissipative anomaly* (*α* = 0). It is easy to check that the Taylor microscale defined by *λ*2 = 15*ν**u*′2/*ɛ* satisfies lim*R**e* → ∞*λ*/*L* = 0 precisely when *α* < 1 in and then it is proved in that the maximum difference between $\frac{\langle \delta u\_L^3(r)\rangle}{r}$ and $-\frac{4}{5}\varepsilon^\nu$ becomes vanishingly small over an increasing range of length-scales *λ* ≪ *r* ≪ *L*. This important result shows that validity of the 4/5th law cannot be taken as evidence for a strong anomaly. The proof in considers the statistical steady-state of turbulence in a periodic domain driven by a body force which is a Gaussian random field, delta-correlated in time and takes advantage of simplifications of that stochastic forcing. More recently, analogous results have been derived by for incompressible Navier-Stokes equations with a deterministic forcing and valid for individual flow realizations. These results have gained some added significance by a recent surprising observation that the scaling exponent *ζ*3 of absolute 3rd-order structure functions defined by or for *p* = 3 in fact satisfies *ζ*3 > 1 in some high-*R**e* numerical simulations of forced turbulence in a periodic domain. This observation is, of course, inconsistent with the bound *ζ*3 ≤ 1 derived by under the assumption of a strong dissipative anomaly. appear to have found instead that the normalized dissipation rate *D* very slowly decays at high *R**e* according to the bound derived by. Therefore, only a weak dissipative anomaly appears to occur in this particular simulation of forced homogeneous and isotropic turbulence in a periodic box. Consistent with the results of and, however, the Kolmogorov 4/5th law is still observed to hold in this simulation, in agreement with earlier observations of. These observations need to be confirmed with other forcing schemes and extended to even higher Reynolds numbers, but they raise additional doubts about the simplification of forced turbulence in a periodic box as a valid paradigm for incompressible fluid turbulence more generally. One of the great virtues of Onsager’s “ideal turbulence” theory is that it extends readily to turbulent flows in other physical systems than incompressible fluids. For example, consequences have been worked out for turbulence in compressible fluids, both barotropic and with a general thermodynamic equation of state, in quantum superfluids, in relativistic fluids, in magnetohydrodynamics, both incompressible and compressible (, §IV.B), and in kinetic plasmas at low collisionality. In many of these examples, the dimensional and statistical reasoning of does not obviously apply and the Onsager theory provides the only first-principles formulation. Almost all of the just-listed examples have applications in astrophysics and space science and, in particular, most astrophysical fluids and plasmas are compressible. Astrophysics provides, in many ways, the optimal arena for Onsager’s theory. For one thing, the Reynolds numbers in astrophysics are often much larger than the highest Reynolds numbers attainable in terrestrial turbulence. In addition, problems such as the physical origin of fluid singularities are much easier to address in compressible fluids. A well-advanced mathematical theory already exists for generic development of shocks in smooth solutions of compressible Euler equations and the consequent production of vorticity. Shocks and baroclinic generation of vorticity are expected to be a common source of astrophysical turbulence, e.g. supernovae-driven shocks in the interstellar medium. Other recent results on the *R**e* → ∞ limit involve incompressible fluids, but fully incorporating the usually neglected effect of thermal fluctuations. Starting with the fluctuating hydrodynamic equations of for a low-Mach incompressible fluid (see also ), have studied the limiting dynamics in the inertial range. The earlier work of, which shall be discussed more below, already considered the standard large-scale non-dimensionalization of the equations via $\hat{{{\bf u}}}={{\bf u}}/u',$ $\hat{{{\bf x}}}={{\bf x}}/L,$ *t̂* = *u*ʹ*t*/*L*,  which takes the form $$\partial\_{\hat{t}}\hat{{{\bf u}}} + (\hat{{{\bf u}}}\cdot\hat{{{\mbox{\boldmath $\nabla$}}}})\hat{{{\bf u}}} = -\hat{{{\mbox{\boldmath $\nabla$}}}}\hat{p} + \frac{1}{Re}\hat{\Delta}\hat{{{\bf u}}} + \sqrt{\frac{2\theta\_\eta}{Re^{15/4}}} \hat{{{\mbox{\boldmath $\nabla$}}}}\cdot \hat{{{\mbox{\boldmath $\xi$}}}} + \digamma \hat{{\mathbf f}}. {\label}{FNS3}$$ where *θ**η* = *k**B**T*/*ρ**u**η*2*η*3 is the thermal energy relative to the energy of an eddy of Kolmogorov scale *η* and with Kolmogorov velocity *u**η* = (*ν**ɛ*)1/4,  a dimensionless quantity generally of order 10− 6 or even smaller. In deriving, assumed the Taylor relation *ɛ* ∼ *u*′3/*L* and an external body force with dimensionless magnitude $\digamma=f\_{rms}L/u^{\prime 2}$ has also been included to generate turbulence. It is naïvely clear that, on inertial-range scales, the direct effect of the thermal noise term must be negligible, even smaller than the viscous diffusion. In fact, the noise term leads to a mathematically ill-defined dynamics, unless the stochastic noise field $\tilde{{{\mbox{\boldmath $\xi$}}}}$ and velocity field $\tilde{{{\bf u}}}$ are both cut off at some high wave-number Λ,  or $\hat{\Lambda}=\Lambda L$ after non-dimensionalization. In the statistical physics literature, such a cut-off is standard and represents physically a coarse-graining length ℓ = Λ− 1 of the measured velocity field, which is chosen somewhere between *η* and the molecular mean-free path length ℓ*m**f**p*. In the limit *R**e* → ∞ and $\hat{\Lambda}\to \infty,$ prove under natural conditions that are observed in experiment (see section [sec:InfRe] below) that the limiting velocity fields are weak solutions of incompressible Euler, as assumed in the theory. It is important to stress that Onsager’s “ideal turbulence” description does not apply in the dissipation range and, indeed, Onsager never discussed dissipation-range turbulence in any of his published or unpublished writings, to my knowledge. This is perhaps one reason that he never discussed the possible interactions of thermal fluctuations and turbulence. This issue was raised about a decade later, however, by who pointed out that the kinetic energy spectrum of thermal fluid fluctuations E(k) ~ thermal must surpass the spectrum of turbulent fluctuations at a wavenumber of order *k**η* ∼ 1/*η*. This same idea was rediscovered much later by who provided numerical evidence for the conjecture in a shell model, before full verification by in a numerical simulation of the Landau-Lischitz fluctuating hydrodynamic equations. Confirmation by laboratory experiment is crucial but appears much more difficult at this time. In principle, all turbulent processes at sub-Kolmogorov scales should be strongly affected by thermal noise, including droplet formation, combustion, biolocomotion, etc. but it is unclear whether such probes of the small-scale physics can show the clear signature of thermal noise at large scales. A prototypical example is high Schmidt-number turbulent advection, where the classical theories of and that ignore thermal fluctuations miss power-law scaling of the concentration spectrum below the Batchelor dissipation scale. On the other hand, the classical prediction of a *k*− 1 spectrum in the viscous-convective range remains intact, because the strong effect of thermal hydrodynamic fluctuations in renormalizing the molecular diffusivity in that range is exactly the same as in laminar flows and thus hidden phenomenologically. There are effects of thermal noise even in the turbulent inertial range, but these are much more subtle and will be discussed further below. A frequent objection made by physicists and fluid mechanicians to the ideal turbulence theory is that the mathematical concept of “weak solution” is unphysical. In fact, such criticisms are directed not only at Onsager’s proposed Euler solutions, but also against the weak solutions of incompressible Navier-Stokes equation constructed by. To quote again from, §10.3.1, regarding Leray’s work: “An important point is that if one looks at real turbulence at finite Reynolds numbers (however large) there seems to be no need for weak solutions at all.” Here the presumption is that smooth, strong solutions of Navier-Stokes must exist, as there is no obvious empirical evidence for the severe singularities required to overcome the viscous regularization. This issue of the physical meaning of weak solutions is quite important, since it is at the center of the empirical testability of Onsager’s theory. Thus, I must discuss the matter briefly here. Furthermore, my own views may not coincide with those of most mathematicians who work on Onsager’s theory, and these differences must be carefully explained. The issues are subtle and intimately related with the famous Sixth Problem of, which was “the problem of developing mathematically the limiting processes... which lead from the atomistic view to the laws of motion of continua” and which remains still largely unresolved. As I have already emphasized, “weak solutions” are mathematically equivalent to “coarse-grained solutions” such as for Navier-Stokes and for Euler, when those equations are imposed for all ℓ > 0. In reality, the “coarse-grained solution” of Euler equations is an accurate physical description of a turbulent flow only in the inertial range of scales for ℓ ≫ *η*. The most important fact about the formulation in of “weak Euler solutions”, or its equivalent, is that it involves only the coarse-grained fields $\overline{{{\bf u}}}\_\ell$ and $\overline{p}\_\ell$ for ℓ ≫ *η*,  since the subscale stress ${{\mbox{\boldmath $\tau$}}}\_\ell({{\bf u}},{{\bf u}})$ depends as well only upon $\overline{{{\bf u}}}\_{\ell'}$ with $\ell'\lesssim\ell$ by the fundamental property of scale-locality. To state my view succinctly, it is the coarse-grained fields such as $\overline{{{\bf u}}}\_\ell$ and $\overline{p}\_\ell$ which are physical, because they correspond to what is experimentally measurable. In fact, every experiment has some spatial resolution ℓ,  such that only averaged properties for length-scales  > ℓ are obtained. It is instead the fine-grained/bare fields such as ${{\bf u}}({{\bf x}})$ which are unphysical, because they are unobservable objects corresponding to a mathematical idealization $\overline{{{\bf u}}}\_\ell\to{{\bf u}}$ as ℓ → 0,  which goes beyond the validity of a hydrodynamic description and is physically unachievable. The above views are probably not the same as those held currently by the majority of mathematicians and fluid mechanicians who work on turbulence, who mostly have followed in believing that > > “ ‘Turbulent’ and ‘molecular’ disorder are in general distinct. A macroscopic and yet turbulent ${{\bf u}}$ (and, more generally, a complete system of system of fluid dynamics with such characteristics) can be defined.” —, §3.2, p.446. > > > According to this common view, the deterministic Navier-Stokes equations are assumed valid for scales $\ell\gtrsim \lambda\_{mfp},$ the molecular mean free path length, with $\overline{{{\bf u}}}\_\ell\approx {{\bf u}},$ the Navier-Stokes solution, for all ℓ in the range $\eta\gtrsim \ell\gtrsim\lambda\_{mfp}.$ There is, however, no rigorous mathematical theory that justifies this assumption and there are no experimental measurements available at scales ℓ < *η* to confirm it. A major problem with this view is that it omits the physical effects of thermal fluctuations, which leads to deviations from the predictions of deterministic Navier-Stokes at scales ℓ ≫ *λ**m**f**p* that are experimentally well-observed in laminar flows. In fact, deterministic Navier-Stokes is a physically inconsistent set of equations, because it incorporates molecular dissipation without including the corresponding molecular fluctuations. It thus violates the fundamental fluctuation-dissipation theorem of statistical physics. I therefore do not regard solutions of deterministic Navier-Stokes, either weak or strong, as having fundamental physical significance. Deterministic Navier-Stokes is instead just a reasonably accurate “large-eddy simulation” model of turbulence for resolved scales down to ℓ ∼ *η*. A more fundamental model of molecular fluids are the fluctuating hydrodynamic equations of which incorporate the fluctuation-dissipation relation and which explain the experiments on thermal fluctuations in laminar flows. It was first pointed out by that fluctuating hydrodynamics is an appropriate model of the turbulent dissipation range and, in fact, the incompressible, low-Mach equations were proposed in his work independently of. However, as observed in the previous subsection, fluctuating hydrodynamics is not a continuum theory but instead involves an explicit cutoff Λ = ℓ− 1 and it is believed to describe the evolution of the coarse-grained fields $\overline{{{\bf u}}}\_\ell$ for all scales $\ell\gtrsim \lambda\_{mfp}$ (or, in fact, for the incompressible version, only at scales where $|\overline{{{\bf u}}}\_\ell|\lesssim c\_s,$ the sound speed ). Furthermore, the viscosity *ν*ℓ in the model depends upon the scale ℓ,  as shown, for example, by the renormalization group analysis of. Thus, even in laminar flows, the observed viscosity *ν*ℓ at length scale ℓ is an “eddy-viscosity”, although the fluctuating eddies arise there from thermal fluctuations rather than turbulent fluctuations. In the language of modern physics, the Landau-Lifschitz fluctuating hydrodynamics equations are low-wavenumber ``effective field theories". An important consequence of these considerations is that velocity-gradients are ℓ-dependent at *all* length-scales ℓ in a turbulent flow and not only for ℓ > *η*. However, it should be true to a good approximation that, for typical thermal fluctuations, ~**u**+O(()1/2),, where ${{\bf u}}$ is again the solution of deterministic Navier-Stokes. It then easily follows that ||2|**u**|2, =where ⟨ ⋅ ⟩ denotes a space-time average and ℓ/*η* spans about 2-3 decades of scales. For more details, see, section II.E. In this sense, typical thermal fluctuations should matter little for mean energy dissipation. On the other hand, this statement is presumably not true for large, rare thermal fluctuations. Indeed, in a simple particle model of hydrodynamics, the “totally asymmetric exclusion process”, dissipative weak solutions of inviscid Burgers equation arise with overwhelming probability (law of large numbers) but rare fluctuations (large deviations) can lead to weak solutions with local energy production in some space-time regions rather than dissipation everywhere. Furthermore, higher-order gradients ${{\mbox{\boldmath $\nabla$}}}^k\overline{{{\bf u}}}\_\ell$ with *k* > 1 will be much more strongly affected by thermal noise and, indeed, the numerical simulation of found that such higher-order gradients are completely dominated by thermal noise already for $\ell\lesssim \eta.$ I therefore fundamentally disagree with a common view, clearly expressed by, that hypothetical fine-grained fields ${{\bf u}}$ and their gradients are physically more “objective” than coarse-grained fields $\overline{{{\bf u}}}\_\ell$: > > “There is a generic ambiguity in defining the meaning of the term *small scales* (or more generally scales) and consequently the meaning of the term *cascade* in turbulence research. As mentioned in chapter 5, the specific meaning of this term and associated inter-scale energy exchange/‘cascade’ (e.g., spectral energy transfer) is essentially decomposition/representation dependent. Perhaps, the only common element in all decompositions/ representations (D/R) is that the small scales are associated with the field of velocity derivatives. Therefore, it is natural to look at this field as the one *objectively* (i.e., D/R independent) representing the small scales. Indeed, the dissipation is associated precisely with the strain field, *s**i**j*, both in Newtonian and non-Newtonian fluids.” —, §6.2, p.127. > > > This point of view is based on the belief, without evidence, that there is some hypothetical fine-grained field ${{\bf u}}=\lim\_{\ell\to 0} \overline{{{\bf u}}}\_\ell$ and that objectivity is achieved by “convergence” as ℓ → 0. The existence of such a fine-grained velocity in turbulent flow was probably first questioned by the visionary scientist, Lewis Fry Richardson, whose famous paper on turbulent pair-dispersion contained a section entitled “*Does the Wind possess a Velocity?*” He wrote there > > ``This question, at first sight foolish, improves on acquaintance. A velocity is defined, for example, in Lamb’s ‘Dynamics’ to this effect : Let Δ*x* be the distance in the *x* direction passed over in a time Δ*t*,  then the *x*-component of velocity is the limit of Δ*x*/Δ*t* as Δ*t* → 0. But for an air particle it is not obvious that Δ*x*/Δ*t* attains a limit as Δ*t* → 0. —, §1.2, p.709. > > > Indeed, each individual air molecule is moving at about 346 m/sec, the speed of sound in that fluid, with the mean wind velocity a small correction, and the local measured velocity will depend entirely upon the resolution scale ℓ which is adopted. I certainly agree that science should be concerned with objective facts. However, in contrast to the 19th-century continuum mechanics perspective that objectivity is achieved by convergence, the modern tool to achieve objectivity is the renormalization group. I may cite here the physics Nobel laureate Kenneth G. Wilson: > > “A procedure is now being developed to understand the statistical continuum limit. The procedure is called the renormalization group. It is the tool that one uses to study the statistical continuum limit in the same way that the derivative is the basic procedure for studying the ordinary continuum limit.” —, p.774. > > > The fundamental understanding that arose in physics was that for systems with strong fluctuations at all length scales — a class which includes both quantum field theories and turbulent flows — the effective description varies with the length scale ℓ of resolution. In that case, the proper goal is not to seek for some idealized and unobservable “truth” at ℓ → 0 but instead to attain objectivity by understanding how the description changes as ℓ is varied. This application of this approach to hydrodynamics has been mostly been restricted to flows much simpler than turbulence, c.f., where weak-coupling perturbation expansions are applicable, and only recently has some progress been made on non-perturbative application to turbulence by functional renormalization group methods. It is conceptually important, however, to understand that LES models resolved at length scale ℓ are not really continuum models. Indeed, it is a truism in the LES community that the true model is not the “continuum” LES model but instead the discretization of the model used to solve it numerically. It therefore makes sense to combine the steps of coarse-graining and numerical discretization, as in some approaches to modeling thermal fluctuations. I shall return to this theme later in discussing the relation of Onsager’s “ideal turbulence” theory with LES modeling. Onsager’s Conjecture -------------------- sec:ConjIntro The quote from on which I base this essay began with a provocative suggestion that “turbulent dissipation as described could take place just as readily without the final assistance by viscosity.” It is important to emphasize that none of the mathematical results that I have reviewed so far, including those of Onsager himself, establish this hypothesis. Such arguments give only necessary conditions for anomalous dissipation, but not sufficient conditions. An example of an instantaneous velocity field in *C**h* was found by showing that these proofs cannot be extended to *h* ≤ 1/3,  but such examples also fall far short of proving Onsager’s conjecture, as already stated there: > > “It must not, of course, be concluded that, simply because our argument fails when *h* ≤ 1/3, that non-conservation is actually possible for *h* ≤ 1/3. We emphasize that to demonstrate this it is necessary to construct an appropriate solution ${\bf v}(.,.)$ with ${\bf v}(.,t)\in C^h$, 0 < *h* < 1/3 for *t* ∈ [0, *T*], for which the energy indeed decreases or increases in the interval.” —, p.235. > > > There are several elements involved in Onsager’s conjecture which are highly non-trivial, e.g. that weak Euler solutions exist in the limit *R**e* → ∞ and that dissipation remains positive in that limit. As to the first matter, I note that inviscid limits of Navier-Stokes solutions can be proved to exist without any further assumptions, at least for flows in infinite Euclidean space or in a periodic domain. However, the limiting Euler solutions so obtained are much weaker even than those postulated by Onsager, consisting of “measure-valued solutions” with a distribution $p\_{{{\bf x}},t}(d{{\bf u}})$ of possible velocity values at each space-time point $({{\bf x}},t)$ or satisfying cumbersome energy inequalities. In addition, such generalized weak Euler solutions are proved to exist only for subsequences of viscosity *ν**k* → 0 and may differ from subsequence to subsequence. Recently, however, considerable progress has been made on these problems, in particular a rigorous proof has been found that dissipative Euler solutions as conjectured by do exist with ${{\bf u}}\in C^{1/3-\epsilon}$ for any *ε* > 0. Furthermore, advances have been made on deriving such results from the incompressible Navier-Stokes equations in the limit *R**e* → ∞ and even more progress has been made on related problems in passive scalar turbulence. In this section I shall attempt to give a broad overview of these developments as appropriate to this essay, but certainly no comprehensive review, which would be difficult in any case for such a rapidly moving and growing literature. See for some recent reviews by lead researchers in this area. It must be emphasized at the outset that, as spectacular as these recent mathematical developments might be, they cannot in principle justify the correctness of Onsager’s “ideal turbulence” as a physical theory. In fact, the final judge of the truth of any physical theory is experiment, not mathematics. I whole-heartedly agree with the famous dictum of Einstein: > > “Insofern sich die Sätze der Mathematik auf die Wirklichkeit beziehen, sind sie nicht sicher, und insofern sie sicher sind, beziehen sie sich nicht auf die Wirklichkeit. [Insofar as the propositions of mathematics refer to reality, they are not certain, and insofar as they are certain, they do not refer to reality.]” –, p.3. > > > Even if Onsager’s conjectures could be rigorously derived from a fundamental model such as the Landau-Lifschitz fluctuating hydrodynamic equations, serious questions arise whether those equations themselves might break down locally in spacetime in the vicinity of extreme intermittent events. Instead, the novel predictions of Onsager’s theory must be carefully compared with experiments to test their validity. In fact, we know already from the brief review of experimental observations in section [empiric] that Onsager’s 1/3 Hölder result cannot be the whole story, because it makes no reference to solid walls and, in particular, it does not elucidate the difference in the wall geometries where anomalous dissipation occurs and where it does not. I discuss this crucial issue more in section [walls]. ### Existence of Dissipative Euler Solutions sec:exist The program which has successfully led to a mathematical construction of the weak Euler solutions conjectured by in breakthrough work by was initiated and, which both appeared as preprints already in 2007. A very good review of this early work is given by and an updated review is to be found in. These authors pointed out a remarkable connection of Onsager’s conjecture with famous work of the mathematician John on *C*1 isometric embeddings. I give here a very succinct review, following closely the discussions in the previous references. Nash (1954) addressed a classical problem of differential geometry, whether a smooth manifold *M* of dimension *n* ≥ 2 with Riemannian metric *g* may be isometrically imbedded in *m*-dimensional Euclidean space ${\mathbb R}^m$, i.e. whether a *C*1 embedding map ${\bf u}: M \to {\mathbb R}^m$ exists so that the Riemannian metric induced by the embedding agrees with *g*,  or i**u**j**u**= gij. isometry To answer this question, Nash considered a more general problem of *short embeddings* which do not preserve lengths of curves on *M* but can only *decrease* lengths, so that i jgij short in the matrix sense. The startling result obtained by Nash, with some improvement due to, is the following: > > : *Let (*M*, *g*) be a smooth closed *n*-dimensional Riemannian manifold, and let $\overline{{\bf u}}:M\to {\mathbb R}^m$ be a *C*∞ strictly short embedding with *m* ≥ *n* + 1. For any *ε* > 0 there exists a *C*1 isometric embedding ${\bf u}: M\to {\mathbb R}^m$ with $|{\bf u}-\overline{{\bf u}}|\_{C^0}<\epsilon.$* > > > This result is surprising for two reasons. First, the condition is a set of *n*(*n* + 1)/2 equations in *m* unknowns. A reasonable guess would be that the system is solvable, at least locally, when *m* ≥ *n*(*n* + 1)/2 and this indeed was a classical conjecture of Schläfli. However, for *n* ≥ 3 and *m* = *n* + 1, the system is hugely overdetermined! It is not obvious that there should be any solutions at all, but the Nash-Kuiper Theorem shows that there exists an enormous set of *C*1-solutions that are *C*0-dense in the set of short embeddings. I shall not discuss here the details of Nash’s proof of his astonishing result, but just remark that his construction of the isometry ${\bf u}$ was in a series of stages, by adding at each stage a new small, high-frequency perturbation. The construction is not an “abstract nonsense” argument that requires the Axiom of Choice or some other non-constructive element, and, in fact, the entire procedure can be implemented to any finite stage of iteration, in principle, by a computer algorithm. For example, see the work of which calculates numerically a *C*1-isometric embedding of the flat 2-torus ${\mathbb T}^2$ into ${\mathbb R}^3$ and Fig. [fignash] for the first four stages of iteration. It interesting to note that Nash himself regarded his 1954 paper as a “sidetrack” on the problem of isometric embeddings and attached greater importance to his later work that constructed *C*∞ embeddings in higher dimensions. The latter paper is notable for its introduction of the Nash-Moser implicit function theorem and also for its exploitation of a smooth mollification scheme with a continuous parameter ℓ,  involving a careful study of variations with ℓ similar to an RG approach (L. Székelyhidi, Jr., private communication). A very intriguing recent stochastic formulation of makes this analogy more clear. ![The first four stages in the iterative construction of a C^1-isometric embedding of the flat 2-torus {\mathbb T}^2 into {\mathbb R}^3. The initial map is corrugated along the meridians to increase their length. Corrugations are then applied repeatedly in various directions to produce a sequence of maps. Each successive map is strictly short, with reduced isometric default. Reproduced from V. Borrelli et al., Flat tori in three-dimensional space and convex integration, Proc. Natl. Acad. Sci. 109 7218–7223 (2012), with permission of PNAS.](figures/Fig6 "fig:") [fignash] The fundamental contribution of De Lellis and Székelyhidi Jr was to realize that there is a very close mathematical analogy between the problem of isometrically embedding a smooth manifold by a map of low regularity and the problem of solving the Cauchy initial-value problem for incompressible Euler equations by a velocity field of low regularity and that Nash’s method of construction can be carried over to the latter. The analog of a “short mapping” for the Euler system is what De Lellis and Székelyhidi Jr call a smooth *subsolution*, i.e. a smooth triple $(\overline{{{\bf u}}}, \overline{p},\overline{{{\mbox{\boldmath $\tau$}}}})$ with $\overline{{{\mbox{\boldmath $\tau$}}}}$ a symmetric, positive-definite tensor such that $$\partial\_t\overline{{{\bf u}}}+{{\mbox{\boldmath $\nabla$}}}\cdot(\overline{{{\bf u}}}\ \overline{{{\bf u}}}+\overline{{{\mbox{\boldmath $\tau$}}}})=-{{\mbox{\boldmath $\nabla$}}}\overline{p}, \quad {{\mbox{\boldmath $\nabla$}}}\cdot\overline{{{\bf u}}}=0.$$ Everyone from the turbulence modelling community will recognize at once that this has the form of an LES model equation incorporating a positive-definite “turbulent stress” tensor $\overline{{{\mbox{\boldmath $\tau$}}}}.$ Approximation results can be obtained for subsolutions entirely analogous to the Nash-Kuiper theorem on approximation of short embeddings. As an example, a theorem of states the following: > > : *Let $(\overline{{{{\bf u}}}},\overline{p},\overline{{{\mbox{\boldmath $\tau$}}}})$ be any smooth, strict subsolution of the Euler equations on ${\mathbb T}^3 \times [0,T]$ and let *h* < 1/3. Then there exists a sequence $({{\bf u}}\_k,p\_k)$ of weak Euler solutions such that ${{\bf u}}\_k\in C^h({\mathbb T}^3 \times [0,T])$ satisfy, as *k* → ∞,  > $$\int\_{{\mathbb T}^3} d^3x\ f \ {{\bf u}}\_k\rightarrow \int\_{{\mathbb T}^3} d^3x\ f \overline{{{\bf u}}}, > \quad \int\_{{\mathbb T}^3} d^3x\ f \ {{\bf u}}\_k{{\bf u}}\_k\rightarrow \int\_{{\mathbb T}^3} d^3x\ f (\overline{{{\bf u}}}\ \overline{{{\bf u}}}+\overline{{{\mbox{\boldmath $\tau$}}}})$$ > for all $f\in L^1({\mathbb T}^3)$ uniformly in time, and furthermore for all *t* ∈ [0, *T*] and all *k* > $$\int\_{{\mathbb T}^3} d^3x\ \frac{1}{2}|{{\bf u}}\_k|^2 =\int\_{{\mathbb T}^3} d^3x\ \frac{1}{2}(|\overline{{{\bf u}}}|^2+{\rm Tr}\ \overline{{{\mbox{\boldmath $\tau$}}}}).$$* > > > The notion of convergence in this theorem can be made more physically transparent by taking $f({{\bf r}})=\widetilde{G}\_\delta({{\bf x}}+{{\bf r}})$ for *δ* > 0,  in which case the approximation property implies that pointwise k,, (**u**k,**u**k)(,) +Thus, if the subsolution $(\overline{{{{\bf u}}}},\overline{p},\overline{{{\mbox{\boldmath $\tau$}}}})$ arose from a coarse-grained Euler solution in the sense of, so that $\overline{{{{\bf u}}}}=\overline{{{{\bf u}}}}\_\ell$, $\overline{p}=\overline{p}\_\ell$, $\overline{{{\mbox{\boldmath $\tau$}}}}=\overline{{{\mbox{\boldmath $\tau$}}}}\_\ell({{\bf u}},{{\bf u}}),$ then the simple identity of for the convolution filter $\widetilde{\overline{G}}\_{\ell,\delta}=\overline{G}\_\ell\*\widetilde{G}\_\delta$,(**u**,**u**)= (,) + implies that for any *δ* > 0 k,,, (**u**k,**u**k),(**u**,**u**) pointwise in space. When *δ* ≪ ℓ,  this yields the remarkable statement that a coarse-grained Euler solution $(\widetilde{\overline{{{\bf u}}}}\_{\ell,\delta},\widetilde{\overline{p}}\_{\ell,\delta}) \simeq (\overline{{{\bf u}}}\_{\ell},\overline{p}\_{\ell})$ at scale ℓ can be well approximated pointwise in space by a sequence $(\widetilde{{{\bf u}}}\_{k,\delta},\widetilde{p}\_{k,\delta})$ of coarse-grained Euler solutions at the much smaller scale *δ*. The constructions of such weak Euler solutions follow a strategy similar to that of Nash, by a sequence of stages ${{\bf u}}\_0,$ ${{\bf u}}\_1,$ ${{\bf u}}\_2,$.... At stage *n* one has, after coarse-graining, a subsolution $$\partial\_t {{\bf u}}\_n +{{\mbox{\boldmath $\nabla$}}}\cdot({{\bf u}}\_n{{\bf u}}\_n+{{\mbox{\boldmath $\tau$}}}\_n)=-{{\mbox{\boldmath $\nabla$}}}p\_n$$ which is supported on wavenumbers  < Λ*n*. By adding a small-scale carefully chosen perturbation one can succeed to cancel a large part of the stress \boldmath $\tau$*n* so that, in the limit, ${{\mbox{\boldmath $\tau$}}}\_n\to {\bf 0}$ weakly and one obtains a weak limit ${{\bf u}}$ which is a distributional Euler solution. A number of different models of the small scales have been employed, such as Beltrami flows and Mikado flows, together with other operations, such as evolving under smooth Euler dynamics locally in time and “gluing” the different time-segments. In the language of LES modelling, the constructions can be regarded as a sort of iterative “defiltering”, in which unresolved scales are successively restored. A physicist might prefer to call this an “inverse renormalization group”, the reverse of the successive coarse-graining employed by. Clearly, there is a huge number of subsolutions, since one may adopt any positive definite tensor $\overline{{{\mbox{\boldmath $\tau$}}}}$ whatsoever. A consequence is therefore results such as the following: > > : *Let $e:[0,T]\to {\mathbb R}^+$ be any strictly positive, smooth function. Then for any 0 < *h* < 1/3 there exists a weak Euler solution ${{\bf u}}\in C^h({\mathbb T}^3 \times [0,T])$ such that > $$\int\_{{\mathbb T}^3} d^3x\
arxiv_0000169
the table. In the case of velocity isotropy (*β* = 0), it requires the excision of both And XII and And XIV from the datasets for the mass estimate of M31 to become comparable to or smaller than the Milky Way. For example, the mass of M31 with And XII and And XIV both removed is $0.85 \pm 0.24 \times 10^{12} \Msun$, as compared to the mass of the Milky Way with Leo I retained of $0.92 \pm 0.25 \times 10^{12} \Msun$. However, we have argued that And XIV is most likely bound, whilst And XII is a more ambiguous case. In other words, the problem pointed to by – namely that the mass of M31 inferred from the kinematics of the satellites is less than the mass of the Milky Way – has indeed been ameliorated by the discovery of more fast-moving M31 satellites. It seems particularly intriguing that such satellites exist for both the Milky Way and M31. used virialized models to estimate that the probability that, in a sample of 30 satellites, there is an object like Leo I, which changes the mass estimate by a factor of a few. They found that the probability is minute, only  ∼ 0.5%. Prior expectation does not favour the existence of objects like Leo I or And XII, yet in fact, both big galaxies in the Local Group possess such satellites. The clear conclusion is that the satellites in the outer parts of these galaxies cannot all be virialized. This is a point in favour of processes such as those advocated by to populate such orbits. [table:pms] | | | | | | --- | --- | --- | --- | | Name | | | | | | | | | | Carina | 22  ±  9 | 15  ±  9 | 1 | | Draco | 60  ±  40 | 110  ±  30 | 2 | | Fornax | 48  ±  5 | -36  ±  4 | 3 | | LMC/SMC | 198  ±  5 | 25  ±  5 | 4 | | Sculptor | 9  ±  13 | 2  ±  13 | 5 | | Sextans | -26  ±  41 | 10  ±  44 | 6 | | Ursa Minor | -50  ±  17 | 22  ±  16 | 7 | | M33 | 2.1  ±  0.7 | 2.5  ±  1.2 | 8 | | IC10 | -0.2  ±  0.8 | 2.0  ±  0.8 | 9 | | M31 | 2.1  ±  1.1 | -1.0  ±  0.9 | 10 | Sources: 1 -, 2 -, 3 -, 4 -, 5 -, 6 -, 7 -, 8 -, 9 -, 10 -, though unlike the other proper motions, this not a measurement but inferred from indirect evidence. ![Distribution of mass estimates as a fraction of true mass for Monte Carlo simulations using (top) 23 satellites with radial velocities, (middle) 7 satellites with proper motions and (bottom) 23 satellites, 7 of which have proper motions. The standard deviation of the best fitting Gaussian is shown in the top-right hand corner of each panel. [These plots assume \beta = -4.51, as estimated from the data].](mass_hist_mw "fig:") [fig:milkywaycase] ![The fractional contribution each satellite with proper motions makes to the mean mass estimate for the Milky Way Galaxy. Notice the extreme effect of Draco’s proper motion.](mest_mwpm_bias "fig:") [fig:pmconts] Simultaneous Solution for Mass and Anisotropy --------------------------------------------- There is one further way in which the estimators can be set to work with the line-of-sight velocities. When three dimensional positions and projected positions are simultaneously available – as for example in the case of M31’s satellites – it is possible to use the estimators based on both the $\langle \vlos^2 r^\alpha \rangle$ and the $\langle \vlos^2 R^\alpha \rangle$ moments to solve simultaneously for both the total mass and the anisotropy parameter. There is however no guarantee that the solution for *β* is in the physical range  − ∞ ≤ *β* ≤ 1. The success of this procedure of course rests on the accuracy of the data. The distances of the M31 satellites are determined by the tip of the red giant branch method and have errors of  ± 30 kpc. If we use eqns ([eq:Andcase]) and ([eq:lastcase]), and simultaneously solve for the unknowns, we obtain $$M\_{300} = 1.5 \pm 0.4 \times 10^{12} \Msun, \qquad\qquad \beta = -0.55^{+1.1}\_{-3.2}$$ which corresponds to mild tangential anisotropy. These are surprisingly sensible answers given the distance errors. Fig. [fig:simulterrors] is inferred from Monte Carlo simulations and shows the distributions of anisotropy parameters derived from simultaneous mass and anisotropy fitting for mock datasets. Also given in the panels are the median and 68 per cent confidence limits for the anisotropy parameter, in the case of 21 satellite galaxies (comparable to the present dataset for M31) and the case of 500 satellites. Although with 21 tracers, the errors on the anisotropy parameter are substantial, matters improve significantly with larger numbers of tracers. A dataset of 500 halo satellites (dwarf galaxies, globular clusters and planetary nebulae) is not unreasonable for a galaxy like M31 in the near future. This raises the possibility that the method of simultaneous fitting may prove more compelling in the future. In fact, given 500 tracers, it is reasonable to use the estimators based on both the $\langle \vlos^2 r^\alpha \rangle$ and the $\langle \vlos^2 R^\alpha \rangle$ moments to fit simultaneously at each distance, thus giving the run of anisotropy parameter and mass with radius. Radial and Proper Motion Datasets --------------------------------- Thus far, we have used only the line of sight velocities to make mass estimates. In this section, we add in the proper motions of satellites, where available. Thus, for the Milky Way galaxy, we combine results from eqn ([eq:firstcase]) for satellites without proper motions and from eqn ([eq:PMcase]) for those with proper motions, weighting each estimate by the reciprocal of the standard deviation to give the final answer. Proper motions, albeit with large error bars, have been measured for a total of 9 of the Milky Way satellite galaxies. It seems prudent to exclude Sagittarius, as it is in the process of merging with the Milky Way. Additionally, the interacting Magellanic Clouds are treated as a single system by computing the proper motion of their mass centroid, taking the masses of the LMC and SMC as $\sim 2 \times 10^{10} \Msun$ and $2 \times 10^9 \Msun$ respectively . This leaves us with a set of 7 satellites with proper motion data, summarized in Table [table:pms]. In most cases, errors on proper motions are large and, where multiple studies exist, the measurements are sometimes in disagreement. The proper motions inferred by ground based methods are in reasonable agreement with those derived from the *Hubble Space Telescope* (*HST*) in the cases of Fornax  , Carina  and the Magellanic Clouds . But, for Ursa Minor  and for Sculptor, agreement between different investigators is not good, and we have preferred to use the estimates derived from *HST* data. Nonetheless, it is important to include the proper motion data, especially for mass estimates of the Milky Way Galaxy. We use these proper motions along with distance and line-of-sight velocity data to calculate full space velocities for these satellites, as described in. In addition, there are two satellites of M31 with measured proper motions, namely M33 and IC 10. This astonishing feat has exploited the *Very Long Baseline Array* to measure the positions of water masers relative to background quasars at multiple epochs. Unfortunately, the technique cannot be extended to M31 itself, as it does not contain any water masers, and so its proper motion is much less securely known. However, reviewed the evidence from a number of sources – including kinematics of the M31 satellites, the motions of the satellites at the edge of the Local Group, and the constraints imposed by the tidal distortion of M33’s disk – to provide a best estimate. These data are also listed in Table [table:pms]. The Milky Way satellites are so remote that their line-of-sight velocities in the Galactic rest frame are almost identical to their radial velocities, as judged form the Galactic Centre. The proper motion data provide genuinely new information on the tangential motions and this is the only way to break the mass-anisotropy degeneracy. The same argument does not hold with equal force for M31, as the line of sight velocities incorporate contributions from both the radial and tangential components as reckoned from the centre of M31. Nonetheless, it is good practice to use all the data available, even though the proper motions of M33 and IC 10 with respect to the M31 reference frame must be inferred using an estimate of M31’s proper motion (rather than a measurement). For the satellites without proper motions, we use the form of the estimator given in eqns ([eq:firstcase]) or ([eq:Andcase]) for the Milky Way and M31 respectively; for those with proper motions, we use eqn ([eq:PMcase]). We combine results from the two estimators, weighting each estimate by the reciprocal of the standard deviation to give the final answer. To infer the standard deviation, we perform Monte Carlo simulations. So, for the case of the Milky Way, we generate mock datasets of 25 satellites, for which only 7 have proper motions. The errors on radial velocities are dwarfed by the uncertainty caused by the small number statistics and so are neglected. But, the errors on the proper motions are not negligible and they are incorporated into the simulations by adding a value selected at random from the range [-0.5 *μ*, 0.5 *μ*], where *μ* is the proper motion. The flat distribution has been chosen as systematic errors are as important as random Gaussian error in the determination of proper motions. However, we have tested alternatives in which we use the relative observational errors, or the relative observational errors multiplied by 2.5, and find that our results are robust against changes to the error law. The standard deviations of the fractional mass distribution the satellites with and without proper motions are separately computed, as illustrated in the panels of Figure [fig:milkywaycase]. We linearly combine the mass estimates, weighting with the reciprocal of the standard deviation, to give the final values reported in Table [table:massests]. Given that the Milky Way satellites with measured proper motions are moving on polar orbits, it is no surprise that the mass estimate of the Milky Way has now increased. Adopting the value of *β* we estimate from the data, we find $M\_{300} = 3.9 \pm 0.7 \times 10^{12} \Msun$ for the Milky Way Galaxy and $M\_{300} = 1.5 \pm 0.4 \times 10^{12} \Msun$ for M31. Assuming isotropy, we find $M\_{300} = 2.5 \pm 0.5 \times 10^{12} \Msun$ for the Milky Way Galaxy and $M\_{300} = 1.4 \pm 0.4 \times 10^{12} \Msun$ for M31. Notice however, the mass estimate for M31 has barely changed from the value inferred from the full radial velocity dataset. Again, we calculate the contribution that each satellite makes to the mass estimate to investigate whether any are dominating the final answer. First, this procedure guards against the possibility of a completely rogue proper motion measurement. Second, there are some suggestions that the Magellanic Clouds may not be bound, or even if bound may only be on its second passage and so may not be part of the relaxed distribution of satellite galaxies  . So, it is helpful to check that our results are not unduly sensitive to its inclusion. As Figure [fig:pmconts] shows, we find that Draco is a clear outlier and nearly doubles the Milky Way mass estimate. If we remove the Draco proper motion from the sample, we instead recover a mass *M*300 = 2.7  ±  0.5 $\times 10^{12} \Msun$ (assuming $\beta\_{\rm data}$) or *M*300 = 1.4  ±  0.3 $\times 10^{12} \Msun$ (assuming isotropy). It is particularly concerning that the proper motion of Draco has such a substantial effect, because – as judged from the size of the error bars in Table [table:pms] – it is one of the noisier measurements. By contrast, the exclusion of the Magellanic Clouds has only a minor effect, as is evident from the results listed in Table [table:massests]. We have covered a number of possibilities, so it is probably useful for us to give our best estimates. On balance, we think the case for including at least And XIV among the satellite sample for Andromeda is strong. Whilst And XII is a more ambiguous case, the lack of any HI gas suggests to us that it should also be included. Among the satellites of the Milky Way, we favour including Leo I based on the work of , whilst we are inclined to discard the proper motion of Draco reported in  until corroborated. Until the discrepancy between the velocity anisotropies reported in simulations and in data is explained, we prefer to use the data as our guide *So, our best estimate for the mass of the Milky Way within 300 kpc is* $$M\_{300} \sim 2.7 \pm 0.5 \times 10^{12} \Msun \label{eq:MWMASS}$$ whilst for M31, it is $$M\_{300}\sim 1.5 \pm 0.4 \times 10^{12} \Msun. \label{eq:ANDMASS}$$ These estimates are obtained using the combined radial velocity and proper motion datasets. The error bars only incorporate the statistical uncertainty. As we have emphasised, there are much greater uncertainties induced by selection of satellite members and velocity anisotropy. In particular, when these uncertainties are considered, it is not possible to decide which of the Milky Way or M31 is more massive based on satellite kinematic data alone. Discussion ========== It is instructive to compare our results with a number of recent estimates of the masses of the Local Group and its component galaxies. extracted a sample of  ∼ 2400 blue horizontal branch stars from the SDSS. These are all resident in the inner halo within 60 kpc of the Galactic centre. This has the advantage that the BHBs are surely virialized, but the disadvantage that no inference can be made about the mass exterior to 60 kpc. Hence, any estimate as to the total mass is driven wholly by prior assumptions rather than the data. In fact, assumed an NFW halo with a canonical concentration holds, and then estimated the virial mass of the Milky Way’s dark matter halo as $M = 1.0^{+0.3}\_{-0.2} \times 10^{12} \Msun$, using Jeans modelling with an anisotropy parameter inferred from numerical simulations. This is lower than our preferred value, but in good agreement with our comparable calculations using line of sight velocity datasets alone. A somewhat similar calculation for M31 has been reported by. The mass of the baryonic material is estimated using a Spitzer 3.6 *μ*m image of the galaxy, together with a mass-to-light ratio gradient based on the galaxy’s *B* − *R* colour. This is combined with an adiabatically-contracted NFW halo profile to reproduce the observed HI rotation curve data. They find a total virial mass of M31’s dark halo as $8.2 \pm 0.2 \times 10^{11} \Msun$. This is lower than most of our estimates, with the exception of those based on samples excluding both And XII and And XIV. Although these calculations are interesting, it is worth remarking that the final masses are not wholly controlled by the data. We know that, from Newton’s theorem, any mass distribution outside the limiting radius of our data has no observational effect in a spherical or elliptical system. To estimate the virial mass from data confined to the inner parts (such as BHBs or the optical disk) requires an understanding of the structure of the pristine dark halo initially, as well as how it responds to the formation of the luminous baryonic components. It is this that controls the final answer. used the Millennium Simulation to extract mock analogues of the Local Group and calibrate the bias and error distribution of the Timing Argument estimators . From this, they obtain a total mass of the two large galaxies in the Local Group of $5.3 \times 10^{12} \Msun$ with an inter-quartile range of [3.8 × 1012, 6.8 × 1012] $\Msun$ and a 95 % confidence lower limit of $1.8 \times 10^{12} \Msun$. Importantly, showed that the mass estimate from the timing argument is both unbiased and reasonably robust. This is a considerable advance, as there have long been worries that the gross simplification of two-body dynamics implicit in the original formulation of the Timing Argument may undermine its conclusions. It therefore seems reasonable to assume that the combined mass of the Milky Way Galaxy and M31 is at least $3.8 \times 10^{12} \Msun$, and perhaps more like $5.3 \times 10^{12} \Msun$. The low estimates of the Milky Way and M31 masses of and are not compatible with this, and barely compatible with Li & White’s 95 % lower limit. Using our preferred values in eqns ([eq:MWMASS]) and ([eq:ANDMASS]), the combined mass in the Milky Way and M31 galaxies is $4.2 \pm 0.6 \times 10^{12} \Msun$. This is comparable to the $3.8 \times 10^{12} \Msun$ of Li & White. also estimated a virial mass for the Milky Way of $2.4 \times 10^{12} \Msun$ with a range of [1.1 × 1012, 3.1 × 1012] $\Msun$, based on timing arguments for Leo I. Given all the uncertainties, this is in remarkable accord with our best estimate. Conclusions =========== We have derived a set of robust tracer mass estimators, and discussed the conditions under which they converge. Given the positions and velocities of a set of tracers – such as globular clusters, dwarf galaxies or stars – the estimators compute the enclosed mass within the outermost datapoints. The accuracy of the estimator has been quantified with Monte Carlo simulations. The estimators are applicable to a wide range of problems in contemporary astrophysics, including measuring the masses of elliptical galaxies, the haloes of spiral galaxies and galaxy clusters from tracer populations. They are considerably simpler to use than distribution function based methods , and involve no more calculation than taking weighted averages of combinations of the positional and kinematical data. They should find widespread applications. The mass estimators are applied to the satellite populations of the Milky Way and M31 to find the masses of both galaxies within 300 kpc. These estimates are the first to make use of the recent burst of satellite discoveries around both galaxies. Both satellite populations have nearly doubled in size since previous estimates were made. We summarise our results by answering the questions; What are (1) the minimum, (2) the maximum and (3) the most likely masses of the Milky Way and M31 galaxies? (1) The mass of the Milky Way Galaxy within 300 kpc could be as low as $0.4\pm 0.1 \times 10^{12} \Msun$. This would imply that Leo I is gravitationally unbound, contrary to the recent evidence provided by by . Leo I would then be either an interloper or an object being ejected from the Milky Way by an encounter. It would also require that the proper motion of Draco is incorrect, which is not inconceivable given the difficulty of the measurements. It implies that the satellite galaxies are moving on radial orbits and so the velocity anisotropy is radial. The mass of M31 within 300 kpc could plausibly be as low as $0.8 \pm 0.2 \times 10^{12} \Msun$. This would be the case if both And XII and And XIV are not gravitationally bound, which is possible if mechanisms such as those proposed by are ubiquitous. It would also require that the proper motion data on M33 and IC10 or – perhaps more likely – the indirectly inferred proper motion of M31 is in error. Again, such a low estimate for the mass occurs only if the satellites are moving predominantly radially. Although it is interesting to ask how low the masses of the Milky Way and M31 could be, it does produce a mystery in the context of the Timing Argument, which typically yields larger combined masses. It is possible that some of the mass of the Local Group is unassociated with the large galaxies. Although not the conventional picture, this is probably not ruled out and there have been suggestions that $\sim 10^{12} \Msun$ may be present in the Local Group in the form of baryons in the warm-hot intergalactic medium . There are few constraints on the possible existence of dark matter smeared out through the Local Group, and unassociated with the large galaxies. However, the clustering of the dwarf galaxies around the Milky Way and M31 does suggest that the gravity of the dark matter is centered on the prominent galaxies. (2) The largest mass estimate we obtained for the Milky Way Galaxy is $3.9 \pm 0.7 \times 10^{12} \Msun$. This extreme values is driven by the assumption of tangential anisotropy for the satellites, so that the measured line of sight velocities also imply substantial tangential motions as well. The estimate assumes all the satellites including Leo I to be bound, and the anomalously high proper motion measurement of Draco to be valid. Note that the present data allow considerably more scope to increase the mass of the Milky Way Galaxy than M31. Our largest mass estimate for M31 is a much more modest $1.6 \pm 0.4 \times 10^{12} \Msun$, which occurs when we analyse the whole sample incorporating And XII and And XIV and assume tangentially anisotropic velocity distributions. The current concensus is that the two galaxies are of a roughly similar mass, with M31 probably the slightly more massive of the two. This though is inferred from indirect properties, such as the numbers of globular clusters, which correlates with total mass albeit with scatter, or the amplitude of the inner gas rotation curve. The stellar halo of M31 is certainly more massive than that of the Milky Way, although this may not be a good guide to the dark halo. Of course, it could be that the current concensus is wrong, and that the Milky Way halo is more massive than that of Andromeda. There is also some indirect evidence in favour of this – for example, the typical sizes of the M31 dwarf spheroidals are large than those of the Milky Way, which is explicable if the Milky Way halo is denser. However, it does not seem reasonable to postulate that the mass of the Milky Way is substantially larger than that of M31. Hence, the very large estimate of $3.9 \pm 0.7 \times 10^{12} \Msun$ is best understood as a manifestation of the degeneracy in the problem of mass estimation with only primarily radial velocity data. (3) Our preferred estimates come from accepting Leo I, And XII and And XIV as bound satellites, whilst discarding the Draco proper motion as inaccurate. This gives an estimate for the mass of the Milky Way within 300 kpc as $2.7 \pm 0.5 \times 10^{12} \Msun$ and for M31 as $1.5 \pm 0.4 \times 10^{12} \Msun$, assuming the anisotropy implied by the data (*β* ≈  − 4.5). The error bars are just the statistical uncertainty and do not incorporate the uncertainty in anisotropy or sample membership. In view of this, it is not possible to decide which of the Milky Way galaxy or M31 is the more massive based on the kinematic data alone. These values for the masses are attractive for a number of reasons. First, the mass ratio between the Milky Way and M31 is of roughly of order unity, which accords with a number of other lines of evidence. Second, the values allow most of the dark matter in the Local Group implied by the Timing Argument to be clustered around the two most luminous galaxies. Third, they are within the range found for cosmologically motivated models of the Milky Way and M31. We prefer to assume the anisotropy implied by the admittedly scanty data on the proper motions of the satellites. However, for completeness, we quickly sketch the effects of dropping this assumption. If the velocity distribution is isotropic, or even radially anisotropic as suggested by the simulations, then the mass of the Milky Way becomes $1.4 \pm 0.3 \times 10^{12} \Msun$ or $1.2 \pm 0.3 \times 10^{12} \Msun$ respectively. Similarly for M31, the values are $1.4 \pm 0.4 \times 10^{12} \Msun$ (isotropy) or $ 1.3 \pm 0.4 \times 10^{12} \Msun$ (radially anisotropic). The greatest sources of uncertainty on the masses remain the role of possibly anomalous satellites like Leo I and the velocity anisotropy of the satellite populations. There is reason to be optimistic, as the *Gaia* satellite will provide proper motion data on all the dwarf galaxies that surround the Milky Way and M31, as well as many hundreds of thousands of halo stars. The analysis that we have carried out here indicates that proper motions are important if we wish to increase the accuracy of our estimates, as well as understand the dynamical nature of objects like Leo I. While we are not yet able to exploit the proper motions, *Gaia* will allow us to do so. Acknowledgments =============== NWE thanks Simon White for a number of insightful discussions on the matter of scale-free estimators. LLW thanks the Science and Technology Facilities Council of the United Kingdom for a studentship. Work by JHA is in part supported by the Chinese Academy of Sciences Fellowships for Young International Scientists (Grant No.:2009Y2AJ7). JHA also acknowledges support from the Dark Cosmology Centre funded by the Danish National Research Foundation (Danmarks Grundforskningsfond). The paper was considerably improved following the comments of an anonymous referee. -0.48cm T. E., Olszewski E. W., Pryor C., 1995,, 110, 2131 J. N., Tremaine S., 1981,, 244, 805 M., Ferraro F. R., Origlia L., Pancino E., Monaco L., Oliva E., 2002,, 124, 3222 M., Gennari N., Ferraro F. R., 2005,, 360, 185 M., Gennari N., Ferraro F. R., Sollima A., 2004,, 354, 708 V. et al., 2008,, 686, L83 V. et al., 2009,, p. 903 V. et al., 2007,, 654, 897 V. et al., 2006,, 647, L111 G., Kallivayalil N., Hernquist L., Robertson B., Cox T. J., van der Marel R. P., Alcock C., 2007,, 668, 949 J., Tremaine S., 1987, Galactic dynamics.Princeton, NJ, Princeton University Press, 1987 R., 1991,, 372, 54 A., Reid M. J., Falcke H., Greenhill L. J., Henkel C., 2005, Science, 307, 1440 A., Reid M. J., Falcke H., Henkel C., Menten K. M., 2007a, ArXiv:0708.1704 A., Reid M. J., Falcke H., Henkel C., Menten K. M., 2007b,, 462, 101 S. C. et al., 2007,, 662, L79 M.-R. L., van der Marel R. P., Loup C., Habing H. J., 2000,, 359, 601 A. A. et al., 1999,, 118, 1657 M. G. et al., 2007,, 668, L43 E., Méndez R. A., Pedreros M. H., Moyano M., Gallart C., Noël N., Baume G., Carraro G., 2009,, 137, 4339 R. A. et al., 2009,, 399, 1773 W., Binney J. J., 1998,, 298, 387 J., Kuhlen M., Madau P., 2007,, 667, 859 N. W., Hafner R. M., de Zeeuw P. T., 1997,, 286, 315 N. W., Wilkinson M. I., 2000,, 316, 929 N. W., Wilkinson M. I., Guhathakurta P., Grebel E. K., Vogt S. S., 2000,, 540, L9 N. W., Wilkinson M. I., Perrett K. M., Bridges T. J., 2003,, 583, 752 M. et al., 2006,, 651, 167 W. L. et al., 2001,, 553, 47 M., Willman B., Simon J. D., Strigari L. E., Kirby E. N., Law D. R., Strader J., 2009,, 692, 1464 S. T., Hunter J. H., Boonyasait V., 2002,, 337, 34 J., Zaritsky D., 2006,, 131, 2514 J., Tremaine S., Bahcall J. N., 1985,, 298, 8 A., 2004,, 610, L97 R., Martin N. F., Irwin M., Chapman S., Ferguson A. M. N., Lewis G. F., McConnachie A. W., 2007,, 671, 1591 R. A., Wyse R. F. G., Gilmore G., Irwin M. J., Suntzeff N. B., 1997,, 113, 634 M. J. et al., 2007,, 656, L13 M. J., Ferguson A. M. N., Huxor A. P., Tanvir N. R., Ibata R. A., Lewis G. F., 2008,, 676, L17 K. V., Law D. R., Majewski S. R., 2005,, 619, 800 F. D., Woltjer L., 1959,, 130, 705 J., Kubiak M., Szymanski M., Udalski A., Krzeminski W., Mateo M., 1995,, 112, 407 I. D., Karachentseva V. E., Huchtmeier W. K., Makarov D. I., 2004,, 127, 2031 I. D., Kashibadze O. G., Makarov D. I., Tully R. B., 2009,, 393, 1265 A., Zhao H., Somerville R. S., 2002,, 573, 597 A., Kleyna J. T., Wilkinson M. I., Grebel E. K., Gilmore G. F., Evans N. W., Wyse R. F. G., Harbeck D. R., 2007,, 134, 566 A., Wilkinson M. I., Kleyna J. T., Gilmore G. F., Grebel E. K., Mackey A. D., Evans N. W., Wyse R. F. G., 2007,, 657, 241 A. et al., 2009,, 690, 453 C. S., 1996,, 457, 228 P., Bastian U., 1997, New Astronomy, 2, 77 A. S., Lynden-Bell D., 1992,, 255, 105 D. R., Majewski S. R., Johnston K. V., 2009,, 703, L67 B. et al., 2009, ArXiv:0901.0820 Y., White S. D. M., 2008,, 384, 1459 B., Tremaine S., 1987,, 320, 493 D., Frenk C. S., 1981, The Observatory, 101, 200 S. R. et al., 2007,, 670, L9 N. F., de Jong J. T. A., Rix H.-W., 2008,, 684, 1075 N. F., Ibata R. A., Chapman S. C., Irwin M., Lewis G. F., 2007,, 380, 281 N. F., Ibata R. A., Irwin M. J., Chapman S., Lewis G. F., Ferguson A. M. N., Tanvir N., McConnachie A. W., 2006,, 371, 1983 N. F. et al., 2009, ArXiv:0909.0399 M. L., 1998,, 36, 435 A. W. et al., 2008,, 688, 1009 A. W., Irwin M. J., 2006,, 365, 902 A. W., Irwin M. J., Ferguson A. M. N., Ibata R. A., Lewis G. F., Tanvir N., 2005,, 356, 979 J. F., Frenk C. S., White S. D. M., 1996,, 462, 563 F. et al., 2003,, 421, 719 S., Pryor C., Bristow P., Olszewski E. W., Harris H. C., Mateo M., Minniti D., Tinney C. G., 2005,, 130, 95 S., Pryor C., Bristow P., Olszewski E. W., Harris H. C., Mateo M., Minniti D., Tinney C. G., 2006,, 131, 1445 S., Pryor C., Bristow P., Olszewski E. W., Harris H. C., Mateo M., Minniti D., Tinney C. G., 2007,, 133, 818 S., Pryor C., Olszewski E. W., 2008,, 135, 1024 S. et al., 2002,, 124, 3198 S., Pryor C., Olszewski E. W., Harris H. C., Mateo M., Minniti D., Tinney C. G., 2003,, 126, 2346 D., Dubath P., Pasquini L., 1995,, 300, 31 S., Lynden-Bell D., 1989,, 240, 195 K., Kreitschmann J., 1988,, 201, 51 S., Madore B. F., Freedman W. L., 1999,, 511, 671 L. V., Navarro J. F., Abadi M. G., Steinmetz M., 2007,, 379, 1475 I., Held E. V., Bertelli G., 2000,, 355, 56 R.-D., Irwin M. J., 1994, in MacGillivray H. T., ed., IAU Symposium Vol. 161, Astronomy from Wide-Field Imaging. p. 535 M. S., Barth A. J., Bullock J. S., 2008,, 389, 1911 J. D., Geha M., 2007,, 670, 313 S. T. et al., 2007,, 663, 960 R. P., Alves D. R., Hardy E., Suntzeff N. B., 2002,, 124, 2639 R. P., Guhathakurta P., 2008,, 678, 187 M. G., Mateo M., Olszewski E. W., 2008,, 688, L75 M. G., Mateo M., Olszewski E. W., Bernstein R., Wang X., Woodroofe M., 2006,, 131, 2114 M. G., Mateo M., Olszewski E. W., Pal J. K., Sen B., Woodroofe M., 2006,, 642, L41 S. D. M., 1981,, 195, 1037 M. I., Evans N. W., 1999,, 310, 645 X. X. et al., 2008,, 684, 1143 D. G. et al., 2000,, 120, 1579 D. B. et al., 2006,, 650, L41 D. B. et al., 2006,, 643, L103 D. B. et al., 2004,, 612, L121 D. B. et al., 2007,, 659, L21 --- 1. *α* =  − 1 corresponds to the gravitational field that pulls with an equal magnitude force regardless of radius, which is formally generated by a halo density falling off as *r*− 1. Provided we regard the scale-free potential as an approximation valid over a limited range and not extending to spatial infinity, we can permit *α* ≥  − 2, since *α* =  − 2 corresponds to the harmonic potential generated by a homogeneous sphere.[↩](#fnref1) 2. On the other hand, for the satellites of the Milky Way, it is often assumed that $d\ll\rin$, which leads to sin*φ* ≈ 0 and consequently $\langle\vlos^2r^\alpha\rangle \approx \langle v\_r^2r^\alpha\rangle$.[↩](#fnref2) 3. The result is valid provided that the integral is limited to spherical shells. However, given the lack of depth information, it might seem more logical to perform the integration over cylindrical shells. Unfortunately, the result is more complicated, as it involves the integrals of incomplete beta functions.[↩](#fnref3) The Masses of the Milky Way and Andromeda Galaxies ================================================== -.6in [firstpage] We present a family of robust tracer mass estimators to compute the enclosed mass of galaxy haloes from samples of discrete positional and kinematical data of tracers, such as halo stars, globular clusters and dwarf satellites. The data may be projected positions, distances, line of sight velocities or proper motions. The estimators all assume that the tracer population has a scale-free density and moves in a scale-free potential in the region of interest. The circumstances under which the boundary terms can be discarded and the estimator converges are derived. Forms of the estimator tailored for the Milky Way galaxy and for M31 are given. Monte Carlo simulations are used to quantify the uncertainty as a function of sample size. For the Milky Way galaxy, the satellite sample consists of 26 galaxies with line-of-sight velocities. We find that the mass of the Milky Way within 300 kpc is $M\_{300} = 0.9 \pm 0.3 \times 10^{12} \Msun$ assuming velocity isotropy. However, the mass estimate is sensitive to the assumed anisotropy and could plausibly lie between 0.7 - 3.4 $\times 10^{12} \Msun$, if anisotropies implied by simulations or by the observations are used. Incorporating the proper motions of 6 Milky Way satellites into the dataset, we find $M\_{300} = 1.4 \pm 0.3 \times 10^{12} \Msun$. The range here if plausible anisotropies are used is still broader, from 1.2 - $2.7 \times 10^{12} \Msun$. Note that our error bars only incorporate the statistical uncertainty. There are much greater uncertainties induced by velocity anisotropy and by selection of satellite members. For M31, there are 23 satellite galaxies with measured line-of-sight velocities, but only M33 and IC 10 have proper motions. We use the line of sight velocities and distances of the satellite galaxies to estimate the mass of M31 within 300 kpc as $M\_{300} = 1.4 \pm 0.4 \times 10^{12} \Msun$ assuming isotropy. There is only a modest dependence on anisotropy, with the mass varying between 1.3 - $1.6 \times 10^{12} \Msun$. Incorporating the proper motion dataset does not change the results significantly. Given the uncertainties, we conclude that the satellite data by themselves yield no reliable insights into which of the two galaxies is actually the more massive. Leo I has long been known to dominate mass estimates for the Milky Way due to its substantial distance and line-of-sight velocity. We find that And XII and And XIV similarly dominate the estimated mass of M31. As such, we repeat the calculations without these galaxies, in case they are not bound – although on the balance of the evidence, we favour their inclusion in mass calculations. galaxies: general – galaxies: haloes – galaxies: kinematics and dynamics – galaxies: individual: M31 – dark matter Introduction ============ The structure and extent of dark matter haloes have important implications for modern astrophysics, yet the determination of such properties is a difficult task and the results are often conflicting. A neat illustration is provided by the usage of Sagittarius Stream data to constrain the shape of the Milky Way dark halo. This has told us that the halo is nearly spherical, prolate, oblate or triaxial in nature! The Milky Way is the closest halo available for our study, the availability of data has improved substantially in recent years, and yet we are not able to determine its shape reliably. Similarly, we are unable to measure the masses of the Milky Way, or its neighbour, the Andromeda Galaxy (M31) with any precision. Despite their proximity to us, their masses remain sketchily determined and there is some controversy as to which halo is more massive. Judged by criteria such as the surface brightness of the stellar halo or the numbers of globular clusters or the amplitude of the gas rotation curve, M31 is seemingly the more massive. Judged by criteria such as the velocities of the satellite galaxies and distant globulars or tidal radii of the dwarf spheroidals, then the Milky Way is seemingly the more massive. For example, argued that the M31 halo is roughly as massive as that of the Milky Way, with the Milky Way marginally being the more massive of the two, while recent studies have found evidence favouring both the Milky Way and M31 as the more massive galaxy. The masses of both haloes within a few tens of kiloparsecs are reasonably well constrained by gas rotation curve data. However, these data only sample the inner parts of the haloes. In order to probe further out, we must turn to the kinematics of the satellite populations. Such tracers are a valuable tool for studying the dark matter haloes as their orbits contain important information about their host potential. Distance, radial velocity and proper motion data can be used to constrain halo extent, mass and velocity anisotropy. The uncertainties in the mass estimates for the Milky Way and M31 are largely due to the fact that there is seldom proper motion data available to complement distance and radial velocity information. With only one velocity component to work with, the eccentricities of the orbits are poorly constrained. Statistical methods must be applied to determine masses and these methods suffer greatly from the small sample sizes available, even with the recent burst of satellite discoveries associated with both galaxies. The projected mass estimator was introduced by. They assumed that only projected distance and line-of-sight velocity information was available. The estimator is also contained in the study of on scale-free ensembles of binary galaxies. The analysis was extended by and further modified by to consider the case of tracer populations. These previous studies successfully used the mass estimator to weigh M31. However, in its present form, the mass estimator is ill-suited for application to the Milky Way and such a study has not yet been attempted. Here, we develop alternative forms of the estimator, and analyse the conditions under which they are valid. In addition, the census of satellites around M31 has increased significantly since the last studies of this type were attempted and so we have more data at our disposal. Hence, we apply our estimator to M31 with these new data. Mass estimators =============== The projected mass estimator takes the form $$\label{eq:BTest} M = \frac{C}{G} \left< \vlos^2 R \right> = \frac{C}{G}\frac{1}{N}\sum\_{i=1}^N v\_{{\rm los},i}^2 R\_i$$ for a set of *N* tracers objects (e.g planetary nebulae, stars, globular clusters, dwarf spheroidal galaxy satellites) with line-of-sight velocities $\vlos$ and projected distances *R*. Here, *G* is the gravitational constant and *C* is a constant determined by the host potential and the eccentricity of the orbits. They found that $C = 16/ {\piup}$ for test particles with an isotropic velocity distribution orbiting a point mass and $C = 32/ {\piup}$ for test particles moving on radial orbits. This analysis was extended by to consider the case in which tracers may track the total mass (e.g. in galaxy groups). They found that $C = 32/{\piup}$ for particles with an isotropic velocity distribution and $C = 64/ {\piup}$ for particles on radial orbits. A key assumption in this work is that the members/tracers track the mass of the group/host. This is not true for all tracer populations, particularly for those tracers which are commonly used to estimate the masses of ellipticals or the haloes of spiral galaxies. Tracer Mass Estimator --------------------- Here, we give a formal derivation of our tracer estimators, so as to clarify the conditions under which they converge to the enclosed mass. Readers primarily interested in applications, and willing to take convergence on trust, should skip straight to the estimators themselves, namely eqns ([eq:firstcase]), ([eq:Andcase]), ([eq:PMcase]) and ([eq:lastcase]). We give formulae for the various cases in which true distances or projected distances, and line-of-sight velocities, or radial velocities or proper motions, are known for the tracers. The estimators are both simple and flexible. Let us begin by supposing that the observations are discrete positions *r* and radial velocities *v**r* of *N* members of a tracer population. Here, *r* is measured from the centre of the host galaxy, whilst $v\_r = {\dot r}$ is the radial velocity. We propose to combine the positional and kinematic data to give the enclosed mass *M* in the form $$M = \frac{C}{G} \left< v\_r^2 r^{\lambda} \right> = \frac{C}{G} \frac{1}{N} \sum\_{i=1}^N v\_{r,i}^2 r\_i^\lambda.$$ Here, unlike equation ([eq:BTest]), the constant *C* is not necessarily dimensionless. Notice that *a priori* we do not know the best choice for *λ*. This will emerge from our analysis. If *f* is the phase space distribution function of the tracers and $\sigmar$ the radial velocity dispersion, we see that under the assumption of spherical symmetry: $$\langle v\_r^2 r^\lambda\rangle =\frac1\Mt \int\!\rd^3\!\bmath r\,\rd^3\!\bmath v\, f v\_r^2 r^\lambda =\frac{4{\piup}}{\Mt} \int\rho\sigmar^2r^{\lambda+2}\,\rd r$$ where $\Mt$ is the mass in the tracers $$\Mt= 4{\piup}\int r^2\rho\,\rd r.$$ Now, let us assume that the tracer population is spherically symmetric and has a number density which falls off like a power-law $$\rho(r)\propto r^{-\gamma}\,;\qquad \frac{\rd\log\rho}{\rd\log r}=-\gamma \label{eq:cuspdens}$$ at least within the radius interval [$\rin,\rout$] where the data lie. Then, the estimator reduces to $$\label{eq:tra} \langle v\_r^2 r^\lambda\rangle =\frac1{\mathcal M} \int\_{\rin}^{\rout}r^{\lambda-\gamma+2}\sigmar^2\,\rd r \,;\qquad \mathcal M=\begin{cases}\ \displaystyle{\frac{\rout^{3-\gamma}-\rin^{3-\gamma}}{3-\gamma}} &(\gamma\ne3)\medskip\\\ \log\,\biggl(\dfrac{\rout}{\rin}\biggr)&(\gamma=3) \end{cases},$$ where log*x* is the natural logarithm. Once the behaviour of $\sigmar^2$ is found, we may relate this estimator to the dynamical halo mass *M*(*r*). This can be achieved through solving the Jeans equation, which reads: $${1\over \rho} {\mathrm{d} (\rho \sigmar^2) \over \mathrm{d}r} + {2\beta \sigmar^2 \over r} = -{GM(r)\over r^2}. \label{eq:jeans}$$ Here, we have introduced $\beta = 1- {\sigmat^2}/{\sigmar^2}$, the Binney anisotropy parameter, in which $\sigmat$ is the tangential velocity dispersion. Now, *β* → ∞ corresponds to a circular orbit model, *β* = 1 corresponds to purely radial orbits and *β* = 0 is the isotropic case. We note that the Jeans equation ([eq:jeans]) in a spherical system can be put into the form $$Q\rho\sigmar^2 =-\int\!Q\,\rho\,\frac{GM(r)}{r^2}\,\rd r\,;\qquad \log Q=\int\frac{2\beta}r\,\rd r.$$ If *β* is independent of *r*, this simplifies to be *Q* = *r*2*β*. To proceed further, the underlying gravity field is assumed to be scale-free at least in the interval [$\rin,\rout$], that is, the relative potential up to a constant is given by $$\psi(r)=\begin{cases}\ \displaystyle{ \frac{v\_0^2}{\alpha} \left( \frac{a}{r} \right)^{\alpha} } &(\alpha \ne 0) \medskip\\\ v\_0^2\, \log\left(\dfrac{a}{r}\right) & (\alpha = 0) \end{cases} \label{eq:gravfield}$$ with  − 1 ≤ *α* ≤ 1.[1](#fn1) Here, *a* is a fiducial radius, which should lie in the region for which the power-law approximation for the relative potential is valid (i.e., $\rin\le a\le\rout$) and *v*0 is the circular speed at that radius *a*. When *α* = 1, this corresponds to the case in which the test particles are orbiting a point-mass; when *α* = 0, the satellites are moving in a large-scale mass distribution with a flat rotation curve; when *α* = *γ* − 2, the satellites track the total gravitating mass. We remark that our model of a scale-free tracer population of satellites in a scale-free potential has previously been used to study the mass of the Milky Way by  , although using the standard technique of maximum likelihood for parameter estimation. The scale-free assumption is also equivalent to proposing the halo mass profile to be $$\frac{M(r)}{M(a)}=\left(\frac{r}{a}\right)^{1-\alpha},$$ and the local mass density  ∝ *r*− (*α* + 2). Consequently, if the power-law behaviour were allowed to be extended to infinity, the total mass of the dark halo would necessarily be infinite unless *α* = 1. (However, if the halo density were to fall off faster than *r*− 3 and so the total gravitating mass is finite, the leading term for the potential would be Keplerian. That is to say, for the case of a finite total mass halo, the gravity field experienced by the tracers may be approximated to be that of a point mass, given that $\rin$ is chosen to be sufficiently large so that the gravitating mass inside the sphere of $\rin$ dominates the mass within the shell region populated by the tracers.) Combining this with the constant-anisotropy assumption, the Jeans equation integrated between *r* and $\rout$ then reduces to $$\label{eq:jeansint} r^{2\beta-\gamma}\sigmar^2(r) -\rout^{2\beta-\gamma}\sigmar^2(\rout) =\frac{GM(a)}{a^{1-\alpha}} \int\_r^{\rout}\tilde r^{2\beta-\gamma-\alpha-1}\rd\tilde r.$$ provided that all our assumptions remain valid in the radius interval $[\rin,\rout]$ and $r,a\in[\rin,\rout]$. Now, our goal is to find the total halo mass. In reality, the observed tracers are only populated up to a finite outer radius, and so, any mass distribution outside of that radius does not affect our observations in a strictly spherical system (Newton’s theorem). We therefore extend the power-law potential assumption only up to the finite outer radius (here $\rout$), and set $a=\rout$. In other words, the halo mass that we are interested in is that contained within the outer radius, $M=M(\rout)$. With $a=\rout$, solving equation ([eq:jeansint]) for $\sigmar^2(r)$ results in (here $s\equiv r/\rout$) $$\label{eq:sol} \sigmar^2=\begin{cases}\ \dfrac{\sigmar^2(\rout)-\hat v\_0^2}{s^{2\beta-\gamma}} +\dfrac{\hat v\_0^2}{s^\alpha} &(\alpha+\gamma-2\beta\ne0)\medskip\\\ \displaystyle{\frac{\sigmar^2(\rout)-v\_0^2\,\log s}{s^\alpha}} &(\alpha=2\beta-\gamma) \end{cases}$$ where $v\_0^2=GM/\rout$ is the circular speed at $\rout$ whilst $\hat v\_0^2\equiv v\_0^2/(\alpha+\gamma-2\beta)$. Then, substituting the result of equation ([eq:sol]) into equation ([eq:tra]) and explicitly performing the integration yields (ignoring particular parameter combinations that involve the logarithm) $$\begin{gathered} \label{eq:vr} \frac{\langle v\_r^2 r^\lambda\rangle}{(3-\gamma)\rout^\lambda} =\frac{v\_0^2}{(\lambda-\alpha+3-\gamma)(\alpha+\gamma-2\beta)} \frac{1-u^{\lambda-\alpha+3-\gamma}}{1-u^{3-\gamma}} \\+\frac1{\lambda-2\beta+3} \left[\sigmar^2(\rout)- \frac{v\_0^2}{\alpha+\gamma-2\beta}\right] \frac{1-u^{\lambda-2\beta+3}}{1-u^{3-\gamma}}\end{gathered}$$ where $u\equiv\rin/\rout$. Notice now that the choice of *λ* = *α* makes the *u*-dependence of the first term in the right-hand side drop out. In fact, this could also have been deduced on dimensional grounds by requiring that our estimator is not dominated by datapoints at small radii or large radii. The last terms in equation ([eq:vr]) basically constitute the surface ‘pressure’ support terms in the Jeans equation, which we wish to minimize as *u* → 0. Here, we limit ourselves to the case that *λ* = *α*, when the corresponding leading term is $$\frac{1-u^{\alpha-2\beta+3}}{1-u^{3-\gamma}} \sim\begin{cases} 1&2\beta-\alpha,\gamma<3 \\-u^{-(2\beta-\alpha-3)} &\gamma<3<2\beta-\alpha \\-u^{\gamma-3} &2\beta-\alpha<3<\gamma \\u^{\alpha+\gamma-2\beta} &3<2\beta-\alpha,\gamma \end{cases}.$$ In other words, provided that *γ* > 3 and *γ* > 2*β* − *α*, the pressure term vanishes as *u* → 0, and we obtain the scale-free Jeans solutions of. In fact, since *β* ≤ 1 and  − 1 ≤ *α* ≤ 1, we find that 2*β* − *α* ≤ 3 and thus the second condition here is essentially redundant. Consequently, provided that *γ* > 3, that is the tracer density falls off more quickly than *r*− 3, we find the estimator to be $$\label{eq:msest} \langle v\_r^2 r^\alpha\rangle\simeq \frac{\rout^\alpha}{\alpha+\gamma-2\beta} \frac{GM}{\rout}+\mathcal R$$ where the remainder $\mathcal R\rightarrow0$ vanishes as $\rin/\rout\rightarrow0$ (here, $\rin$ and $\rout$ are the inner and outer radius of the tracer population). Alternatively, if *γ* < 3 and 2*β* − *α* < 3, the remainder term tends to a constant as *u* → 0. In a perfectly scale-free halo traced by again strictly scale-free populations, this constant must be zero. This is because, for such a system, $\sigmar^2$ should also be scale-free. Yet equation ([eq:sol]) implies that this is possible only if $\sigmar^2(\rout)=\hat v\_0^2$. Subsequently this also indicates that the coefficient for the remainder in equation ([eq:vr]) vanishes too. Even after relaxing the everywhere strict power-law behaviour, we would expect that $\sigmar^2\sim\hat v\_0^2$ and consequently that $|\sigmar^2-\hat v\_0^2|\ll\hat v\_0^2$, provided that 2*β* − *α* < *γ*, which is required to ensure $\hat v\_0^2>0$. That is to say, we expect that $\hat v\_0^2\rout^\alpha\gg\mathcal R$ as *u* → 0 in equation ([eq:msest]) for 2*β* − *α* < *γ* < 3, which is sufficient for justifying the applicability of our mass estimator. In other words, we have obtained a very simple result $$M = \frac{C}{G} \left< v\_r^2 r^{\alpha} \right>, \quad\quad C = \left( \alpha + \gamma -2 \beta \right) \rout^{1- \alpha}, \label{eq:firstcase}$$ provided that *C* > 0 (the simple interpolative argument indicates that this is still valid for *γ* = 3). This corresponds to the case in which the tracers have known radial velocity components *v**r* resolved with respect to the centre of the galaxy, as well as actual distances *r*. For satellites of the Milky Way, the line of sight velocity $\vlos$ is measured, and corrected to the Galactic rest frame. Now, *v**r* may be calculated from $\vlos$ only if proper motion data exists. Alternatively, a statistical correction can be applied to estimate *v**r* from $\vlos$ $$\label{eq:v2sc} \langle v\_r^2\rangle=\frac{\langle\vlos^2\rangle}{1-\beta\sin^2\varphi}$$ where *φ* is the angle between the unit vector from the Galactic Centre to the satellite and the unit vector from the Sun to the satellite. Note too that in the important isothermal case (*α* = 0), the galaxy rotation curve is flat with amplitude *v*0. Then, for members of a population with density falling like *ρ* ∼ *r*− 3, such as the Galactic globular clusters, eqn ([eq:firstcase]) reduces to *v*02 = (3 − 2*β*)⟨*v**r*2⟩. This is a generalization of the estimator of to the case of anisotropy. When the population is isotropic (*β* = 0), it reduces to the appealing simple statement that the circular speed is the rms velocity of the tracers multiplied by $\sqrt3\approx1.732$. Even if three dimensional distance *r* is replaced by projected distance *R* or *v**r* by some other projections of the velocity, the basic scaling result of eqn ([eq:firstcase]) remains valid. Different projections simply result in distinct constants *C*, as we now show. ![Distribution of mass estimate as a fraction of the true mass for 1000 Monte Carlo realisations, assuming that parameters \alpha, \beta and \gamma are known exactly. Left: N=10,000. Middle: N=100. Right: N=30. The number of satellites in the simulation and the form of the estimator used to recover the mass is shown in the top left corner of each panel. A best-fit Gaussian is plotted for each distribution and the standard deviation of the distribution is shown in the top right corner of each panel. On average, the tracer mass estimator recovers the true mass of the host. [The cases shown correspond to \alpha = 0.55, \beta = 0.0 and \gamma = 2.7].](mass_hist "fig:") [fig:masshists] A Family of Estimators ---------------------- Now, suppose that we have actual distances *r* from the centre of the host galaxy, but only projected or line of sight velocities $\vlos$. This is the case for many of M31’s satellite galaxies, for which distances have been measured by using the tip of the red giant branch method and for which projected velocities are known from spectroscopy. The calculation proceeds by considering $$\langle\vlos^2r^\alpha\rangle =\frac1\Mt \int\!\rd^3\!\bmath r\,\rd^3\!\bmath v\,f\vlos^2r^\alpha =\frac{2{\piup}}\Mt \int\_{\rin}^{\rout}\!\rd r\!\int\_0^{\piup}\!\rd\theta\, \rho\sigmalos^2r^{\alpha+2}\sin\theta$$ We now need the relationship between line-of-sight velocity dispersion $\sigmalos$ and the radial velocity dispersion $\sigmar$, namely $$\sigmalos^2 = \sigmar^2 \left( 1- \beta \sin^2 \varphi \right),$$ which is similar to equation ([eq:v2sc]) but here the angle *φ* is the angle between the line of sight and the position vector of the satellite with respect to the centre of the *host galaxy*. If the polar *z*-axis of the coordinate system is chosen such that the sun (that is, the observer) lies on the negative *z*-axis (i.e., $\theta={\piup}$) at a distance *d* from the centre of the host galaxy, we find that $$\sin^2\!\varphi=\frac{\sin^2\!\theta} {1+2\tfrac rd\cos\theta+\bigl(\tfrac rd\bigr)^2}.$$ However, for most external galaxies, it is reasonable to assume $d\gg\rout$, and therefore, we can safely approximate[2](#fn2) that sin2*φ* ≈ sin2*θ*. Then, $$\langle\vlos^2r^\alpha\rangle= \langle v\_r^2r^\alpha\rangle \int\_0^{{\piup}/2}\rd\theta\,\sin\theta\left(1-\beta\sin^2\theta\right),$$ and thus we find that $$M = \frac{C}{G} \left<\vlos^2 r^{\alpha} \right>, \quad\quad C = \frac{3\left( \alpha + \gamma -2 \beta \right)} {3-2 \beta} \rout^{1- \alpha}. \label{eq:Andcase}$$ Next, we consider the case in which we have full velocity information for the satellites, i.e., both radial velocities and proper motions. For example, this is the case for a subset of the satellites of the Milky Way . In this case, we can utilize $\sigma^2=\sigmar^2+\sigmat^2=(3-2\beta)\sigmar^2$, and therefore the estimator becomes $$M = \frac{C}{G} \left< v^2 r^{\alpha} \right>,\quad\quad C = \frac{\alpha + \gamma -2 \beta}{3 - 2 \beta} \rout^{1- \alpha}. \label{eq:PMcase}$$ Finally, we can assume a worst-case scenario in which the only data available are projected distances *R* and line-of-sight velocities $\vlos$ for the tracers. Outside of the galaxies of the Local Group, this is the usual state of affairs. So, this would be the form of the estimator to find the dark matter mass of nearby giant ellipticals like M87 from positions and velocities of the globular clusters. The estimator is derived following the same procedure with *R* = *r*sin*θ*, which results in the relation $$\langle\vlos^2R^\alpha\rangle= \langle v\_r^2r^\alpha\rangle \int\_0^{{\piup}/2}\rd\theta\,\sin^{\alpha+1}\!\theta \left(1-\beta\sin^2\theta\right).$$ Consequently, the corresponding estimator is found to be [3](#fn3) $$M = \frac{C}{G} \left<\vlos^2 R^{\alpha} \right>, \quad\quad C = \frac{\left( \alpha + \gamma -2 \beta \right)} {I\_{\alpha, \beta}} \rout^{1- \alpha} \label{eq:lastcase}$$ where $$I\_{\alpha,\beta} =\frac{{\piup}^{1/2}\Gamma(\tfrac\alpha2+1)}{4\Gamma(\tfrac\alpha2+\tfrac52)} \left[\alpha+3-\beta(\alpha+2)\right]$$ and Γ(*x*) is the gamma function. This case is related to work by Bahcall & Tremaine (1981). So, for example, in the Keplerian case (*α* = 1), a distribution of test particles with *γ* = 3 gives $$C = {32 \over {\piup}}{ 2- \beta \over 4 -3\beta}$$ When *β* = 0, this implies that $C = 16/{\piup}$; whilst when *β* = 1, $C = 32/ {\piup}$. Some of these estimators are implicit in other work. In particular, some are equivalent to those introduced by, who had a different focus on the dynamics of binary galaxies but who made the same scale-free assumptions to obtain robust mass estimators. Very recently, An & Evans (2010, in preparation) found a related family of estimators that are independent of parameters derived from the tracer density (like *γ*). Checks with Monte Carlo Simulations =================================== In order to verify the correctness of our mass estimators, we generate synthetic data-sets of anisotropic spherical tracer populations. Distances *r* are selected in $\left[ \rin, \rout \right]$ assuming the power-law density profile in eqn ([eq:cuspdens]). Projection directions are determined by the position angles: cos*θ* is generated uniformly in [ − 1, 1] and *ϕ* is generated uniformly in $\left[ 0,2 {\piup}\right]$. If *R* lies outside of the allowed range, the projection direction is regenerated until *R* is within $\left[ \Rin, \Rout \right]$. The phase-space distribution functions that give rise to such density profiles are given in. Tracer velocities are picked from the distributions $$f(v) \propto \begin{cases}\ v^{2-2\beta} \left| \psi(r) - \tfrac12 v^2 \right|^{ \left[ 2 \gamma -3 \alpha -2 \beta \left( 2 - \alpha \right) \right] / \left( 2 \alpha \right) } & (\alpha\ne0) \medskip\\\ v^{2-2\beta} \exp \biggl( -\dfrac{v^2}{2\sigma^2} \biggr) & (\alpha=0) \end{cases}.$$ For *α* > 0, the maximum velocity at any position is $\sqrt{2\psi(r)}$; for *α* ≤ 0, the velocities can become arbitrarily large. Following, we introduce spherical polar coordinates in velocity space (*v*, *ξ*, *η*) so that the velocities resolved in spherical polar coordinates with respect to the centre are then $$\begin{array}{lll} v\_r = v \cos \eta & v\_{\theta} = v \sin \eta \cos \xi & v\_{\phi} = v \sin \eta \sin \xi \end{array}$$ To generate velocities with the correct anisotropy, *ξ* is generated uniformly in $\left[ 0,2 {\piup}\right]$ and *η* is picked in $\left[ 0, {\piup}\right]$ from the distribution *F*(*η*) ∝ ∣sin*η*∣1 − 2*β* where *β* is the Binney anisotropy parameter. Finally, the line-of-sight velocities are calculated and used in the tracer mass estimator. Figure [fig:masshists] shows the distribution of mass estimates as fractions of the true mass for 1000 realisations, assuming that parameters *α*, *β* and *γ* are known exactly; the left panels show simulations with 10,000 tracers, the middle panels for 100 tracers and the right panels for 30 tracers. The panels use the different forms of the estimator given in eqns ([eq:firstcase]), ([eq:Andcase]), ([eq:PMcase]) and ([eq:lastcase]) respectively. A Gaussian with the same standard deviation as each distribution is also plotted for each panel. The standard deviation is included in the top-right corner of each plot and gives an estimate of the error in each case. We see that our mass estimators are unbiased – that is, on average, the true mass is recovered in all cases. The benefit of using three dimensional distances *r* instead of projected distances *R* is modest, as is the improvement gained by using *v**r* in place of $\vlos$. However, if proper motion data are available, then using *v* instead of *v**r* gives a more accurate mass estimate. So far, we have assumed that we know *α*, *β* and *γ* exactly, which is, of course, not the case. Our estimates for *α*, *β* and *γ* have errors associated with them, not least because the notion of a scale-free density profile in a scale-free potential is an idealization. As these parameters enter the estimator through the prefactor *C*, it is straightforward to obtain the additional uncertainty in the final answer using propagation of errors. As we will show in the next section *α* and *γ* are constrained either by cosmological arguments or by the data. The right-most column in Figure [fig:masshists] (a host with 30 satellites) is the most applicable to our data-sets at present as the Milky Way has 26 satellites and M31 23 satellites with a recorded line-of-sight velocity. The error on the mass estimate obtained in this case is  ∼ 25%. This is much larger than that the effects of errors on *α* and *γ* and so the latter will be ignored for the rest of the discussion. However, the case of the velocity anisotropy *β* is different as it is poorly constrained, with theory and data pointing in rather different directions. Changes in *β* can therefore make a substantial difference to the mass estimate. Note that these simulations yield no insight into systematic errors, because the mock data are drawn from the same distribution functions used to derive the form of the mass estimators. This is a concern as there are a number of causes of systematic error – for example, dark halos may not be spherical, or infall may continue to the present day so that the observed satellites may not necessarily be virialized. Deason et al. (2010, in preparation) have tested the estimators derived in this paper, as well as a number of other commonly used mass estimators, against simulations. Specifically, they extracted samples of Milky Way-like galaxies and their satellites from the *Galaxies Intergalactic Medium Interaction Calculation* , a recent high resolution hydrodynamical simulation of a large volume of the Universe. They find that the estimators in this paper significantly out-perform the projected mass estimator of and the tracer mass estimator of. [table:mwsats] | | | | | | | | --- | --- | --- | --- | --- | --- | | Name | | | | | | | | | | | | | | Bootes I | 358.1 | 69.6 | 57 | 106.6 | 1,2 | | Bootes II | 353.8 | 68.8 | 43 | -115.6 | 3,4 | | Canes Venatici I | 74.3 | 79.8 | 219 | 76.8 | 5,6 | | Canes Venatici II | 113.6 | 82.7 | 150 | -96.1 | 6,7 | | Carina | 260.1 | -22.2 | 102 | 14.3 | 8,9 | | Coma Bernices | 241.9 | 83.6 | 45 | 82.6 | 6,7 | | Draco | 86.4 | 34.7 | 92 | -104.0 | 8,10,11 | | Fornax | 237.3 | -65.6 | 140 | -33.6 | 8,12,13 | | Hercules | 28.7 | 36.9 | 141 | 142.9 | 6,7 | | LMC | 280.5 | -32.9 | 49 | 73.8 | 8,14,15 | | Leo I | 226.0 | 49.1 | 257 | 179.0 | 8,16,17 | | Leo II | 220.2 | 67.2 | 235 | 26.5 | 8,18,19 | | Leo IV | 265.4 | 56.5 | 154 | 13.9 | 6,7 | | Leo T | 214.9 | 43.7 | 422 | -56.0 | 6,20 | | Leo V | 261.9 | 58.5 | 175 | 62.3 | 21 | | SMC | 302.8 | -44.3 | 60 | 9.0 | 8,22,23 | | Sagittarius | 5.6 | -14.1 | 16 | 166.3 | 8,24 | | Sculptor | 287.5 | -83.2 | 87 | 77.6 | 8,25,26 | | Segue 1 | 220.5 | 50.4 | 28 | 113.5 | 3,27 | | Segue 2 | 149.4 | -38.1 | 41 | 39.7 | 28 | | Sextans | 243.5 | 42.3 | 89 | 78.2 | 8,9,29 | | Ursa Major I | 159.4 | 54.4 | 101 | -8.8 | 3,6 | | Ursa Major II | 152.5 | 37.4 | 36 | -36.5 | 6,30 | | Ursa Minor | 104.9 | 44.8 | 77 | -89.8 | 8,10,11 | | Willman 1 | 158.6 | 56.8 | 42 | 33.7 | 2,3 | Sources: 1 -, 2 -, 3 -, 4 -, 5 -, 6 -, 7 -, 8 -, 9 -, 10 -, 11 -, 12 -, 13 -, 14 -, 15 -, 16 -, 17 -, 18 -, 19 -, 20 -, 21 -, 22 -, 23 -, 24 -, 25 -, 26 -, 27 -, 28 -, 29 -, 30 - [table:andsats] | | | | | | | | | --- | --- | --- | --- | --- | --- | --- | | Name | | | | | | | | | | | | | | | | M33 | 133.6 | -31.3 | 809 | 206 | 74 | 1,2 | | M32 | 121.1 | -22.0 | 785 | 5 | 95 | 2,3 | | IC 10 | 119.0 | -3.3 | 660 | 261 | -29 | 2,3,4 | | NGC 205 | 120.7 | -21.1 | 824 | 39 | 58 | 1,2 | | NGC 185 | 120.8 | -14.5 | 616 | 189 | 106 | 1,2 | | IC 1613 | 129.8 | -60.6 | 715 | 510 | -56 | 2,3,5 | | NGC 147 | 119.8 | -14.2 | 675 | 144 | 117 | 1,2 | | Pegasus | 94.8 | -43.6 | 919 | 473 | 85 | 1,2 | | Pisces | 126.7 | -40.9 | 769 | 268 | -37 | 1,2 | | And I | 121.7 | -24.8 | 745 | 59 | -84 | 1,2 | | And II | 128.9 | -29.2 | 652 | 185 | 83 | 1,2 | | And III | 119.4 | -26.3 | 749 | 75 | -57 | 1,2 | | And V | 126.2 | -15.1 | 774 | 109 | -107 | 1,2 | | And VI | 106.0 | -36.3 | 775 | 267 | -64 | 1,2 | | And VII | 109.5 | -9.9 | 763 | 218 | 21 | 1,2 | | And IX | 123.2 | -19.7 | 765 | 41 | 94 | 1,6,7 | | And X | 125.8 | -18.0 | 702 | 110 | 130 | 8,9 | | And XI | 121.7 | -29.1 | 785 | 102 | -140 | 7,10 | | And XII | 122.0 | -28.5 | 830 | 107 | -268 | 7,10,11 | | And XIII | 123.0 | -29.9 | 785 | 115 | 64 | 7,10 | | And XIV | 123.0 | -33.2 | 740 | 161 | -204 | 12 | | And XV | 127.9 | -24.5 | 770 | 94 | -57 | 13,14 | | And XVI | 124.9 | -30.5 | 525 | 280 | -106 | 13,14 | | And XVII | 120.2 | -18.5 | 794 | 45 | | 15 | | And XVIII | 113.9 | -16.9 | 1355 | 589 | | 16 | | And XIX | 115.6 | -27.4 | 933 | 187 | | 16 | | And XX | 112.9 | -26.9 | 802 | 128 | | 16 | | And XXI | 111.9 | -19.2 | 859 | 148 | | 17 | | And XXII | 132.6 | -34.1 | 794 | 220 | | 17 | Sources: 1 -, 2 -, 3 -, 4 -, 5 -, 6 -, 7 - Collins et al. (2009, in prep), 8 -, 9 - Kalirai et al. (2009, in prep), 10 -, 11 -, 12 -, 13 -, 14 -, 15 -, 16 -, 17 - Mass Estimates for Andromeda and the Milky Way ============================================== Choice of Power-Law Index Parameters ------------------------------------ We now apply the mass estimators to the Milky Way and M31, the two largest galaxies in the Local Group. In converting heliocentric quantities to Galactocentric ones, we assume a circular speed of 220 at the Galactocentric radius of the sun (*R*⊙ = 8.0 kpc) and a solar peculiar velocity of (*U*, *V*, *W*) = (10.00,5.25,7.17) , where *U* is directed inward to the Galactic Centre, *V* is positive in the direction of Galactic rotation at the position of the sun, and *W* is positive towards the North Galactic Pole . For the M31 satellites, positional and velocity data must be computed relative to M31 itself. We take the position of M31 to be (ℓ, b) = (121.2∘, -21.6∘) at a distance of 785 kpc and its line-of-sight velocity to be -123 in the Galactic rest frame (see e.g., McConnachie et al. 2005; McConnachie & Irwin 2006). [fig:alphanfw] In order to apply our estimators to these systems, we need to compute the power-law index of the host potential *α*, the velocity anisotropy *β* and the power-law index of the satellite density distribution *γ*. There are cosmological arguments suggesting that the potentials of dark haloes are well-approximated by Navarro-Frenk-White (NFW) profiles. Figure  [fig:alphanfw] shows the best-fit power-law to the NFW potential for a wide range of concentrations and virial radii. The fitting is performed in the region 10 < *r*/kpc < 300, which is where the majority of the satellites lie. Now, argued that the concentrations of the Milky Way and M31 are *c* ≈ 12, whilst the virial radii $\rvir$ are in the range 250-300 kpc. In other words, for the range of concentrations and virial radii appropriate to galaxies like the Milky Way and M31, we see – fortunately – that the surface in Figure [fig:alphanfw] is slowly-changing and flattish with *α* ≈ 0.55. ![image](gamma_nsatsr_mw)![Cumulative numbers of satellite N(<r) for the Milky Way (upper) and M31 (lower). The best-fit power laws in the range r \le 300\ \mbox{kpc} are also plotted. The index of these power-law fits may be used to estimate the power-law index of the satellite density distribution n(r).](gamma_nsatsr_m31 "fig:") [fig:nsatsr] [fig:betauncertainty] If the satellite number density distribution *n*(*r*) follows a power law with index *γ*, then the number of satellites within any radius, *N*( < *r*), also follows a power-law with index 3 − *γ*. We fit power-laws to the Milky Way and M31 satellite cumulative distributions in order to estimate *γ*. We restrict ourselves to the inner regions of the satellite distributions, $r \le 300\ \mbox{kpc}$; beyond this range, the satellite population is likely to be seriously incomplete. The distributions and the best-fitting power laws are shown in Figure [fig:nsatsr]; the Milky Way data is shown in the upper panel and M31 data is shown in the lower panel. We find *γ* = 2.6 for the Milky Way and *γ* = 2.1 for M31. Note that data from the Sloan Digital Sky Survey has been instrumental in the identification of many of the recently-discovered Milky Way dwarfs. The SDSS coverage includes only the region around the North Galactic Cap, and, as such, the distribution of known Milky Way satellites is concentrated in that area of the sky. However, given our underlying assumption that the distribution of satellites is spherically symmetric, this directional bias does not impair our mass estimators. A bigger worry may be the incompleteness in the satellite distribution, which could affect the power index for the tracer number density if the directional incompleteness varies in different distances. Finally, there are a number of possibilities for the velocity anisotropy for the satellite galaxies. Previous studies often assumed isotropy, arguing that there is no compelling evidence to the contrary. However, found that the velocity anisotropy of satellites in simulations behaves like $\beta(r) \simeq 0.55 \left( r /\rvir \right) ^{1/3}$ for $0.2 \rvir \le r \le \rvir$. To estimate *β* for the Milky Way and M31 satellites, we calculate the weighted mean of this distribution $$\bar{\beta} = \frac{\int\_{0.2\rvir}^{\rvir} \beta(r) n(r) r^2 dr}{\int\_{0.2\rvir}^{\rvir} n(r) r^2 dr}$$ where the weighting function *n*(*r*) is the satellite number density distribution. This gives *β̄* = 0.44 for the Milky Way and *β̄* = 0.45 for M31. This is similar to the anisotropy of halo stars (*β* = 0.37) in simulations reported by  . Even though these numbers have the backing of simulations, they are somewhat surprising. Most of the Milky Way satellites with measured proper motions are moving on polar or tangential orbits. Using the sample of the 7 Milky Way satellites with proper motions, we can compute the radial and tangential components of the Galactocentric velocity. From these, the observed anisotropy *β* ∼  -4.5, which favours tangential orbits. This is consistent with the earlier, though indirect, estimate of, who found *β* ∼  -1, again favouring tangential orbits. The origin of this discrepancy between simulations and data is not well understood. Perhaps there is considerable cosmic scatter in the anisotropy of the satellites, as it may depend on the details of the accretion history of the host galaxy. Figure [fig:betauncertainty] plots brings both good news and bad. The upper panel shows that the mass estimates for external galaxies using the line of sight estimator of eqn ([eq:Andcase]) are reasonably insensitive to the precise value of *β*. This make sense, as for a galaxy like M31, the line of sight velocity encodes information on both the radial and tangential velocity components referred to the M31’s centre. However, in the case of the Milky Way, the situation is very different. The measured velocities provide information almost wholly on the radial component referred to the Galactic Center. In the absences of proper motions, the velocity anisotropy is largely unconstrained by the data. This is the classical mass-anisotropy degeneracy, and so – as the lower panel shows – there is considerable uncertainty in the mass estimates inferred using eqn ([eq:firstcase]). In what follows, we typically quote mass estimates for the anisotropies derived both from observations $\beta\_{\rm data}$ and from simulations $\beta\_{\rm sim}$, as well as for the case of isotropy (*β* = 0). In the absence of consistent indications to the contrary, our preference is to assume isotropy and to give greatest credence to the mass estimates obtained with this assumption. [table:massests] lccccccccc Galaxy & & & & $\beta\_{\rm data}$ & isotropic & $\beta\_{\rm sim}$ & $\beta\_{\rm data}$ & isotropic & $\beta\_{\rm sim}$ & $\beta\_{\rm data}$ & isotropic & $\beta\_{\rm sim}$ Milky Way & 34.2  ±  9.3 & 9.2  ±  2.5 & 6.6  ±  1.8 & 21. 1  ±  5.7 & 5.5  ±  1.6 & 3.8  ±  1.0 & 13.9  ±  4.9 & 3.3  ±  1.1 & 2.1  ±  0.7 ... excl Leo I & 25.2  ±  7.5 & 6.9  ±  1.8 & 5.0  ±  1.2 &... &... &... &... &... &... ... excl Leo I, Her & 21.1  ±  6.3 & 5.8  ±  1.5 & 4.2  ±  1.1 & 17. 5  ±  5.3 & 4.6  ±  1.4 & 3.2  ±  0.8 &... &... &... MW with PMs & 38.6  ±  7.0 & 24.1  ±  5.3 & 22.1  ±  5.3 & 28. 3  ±  5.5 & 18.5  ±  4.2 & 17.0  ±  4.2 & 22.2  ±  5.0 & 13.8  ±  3.9 & 11.4  ±  3.1 ... excl Draco & 27.1  ±  4.9 & 14.1  ±  3.1 & 12.2  ±  2.7 & 18. 1  ±  3.4 & 10.0  ±  2.3 & 8.7  ±  2.2 & 12.9  ±  3.1 & 6.9  ±  1.9 & 5.4  ±  1.5 ... excl LMC/SMC & 38.8  ±  6.8 & 24.6  ±  5.8 & 21.7  ±  4.9 & 28. 3  ±  5.7 & 18.4  ±  4.3 & 16.3  ±  4.1 & 22.7  ±  5.6 & 13.9  ±  4.2 & 11.2  ±  3.4 ... excl Draco, LMC/SMC & 25.9  ±  5.1 & 12.4  ±  2.9 & 10.6  ±  2.5 & 17. 0  ±  3.5 & 8.5  ±  2.2 & 7.2  ±  1.8 & 11.3  ±  2.9 & 5.4  ±  1.7 & 4.2  ±  1.3 M31 & 15.8  ±  3.3 & 14.1  ±  4.1 & 13.1  ±  3.8 & 15. 4  ±  4.1 & 12.4  ±  3.8 & 10.6  ±  3.5 & 2.6  ±  1.0 & 2.1  ±  1.0 & 1.8  ±  1.0 ... excl AndXII & 12.2  ±  2.7 & 10.9  ±  3.1 & 10.1  ±  3.2 & 11. 4  ±  3.2 & 9.2  ±  3.1 & 7.9  ±  2.8 &... &... &... ... excl AndXII, AndXIV & 9.6  ±  2.1 & 8.5  ±  2.4 & 8.0  ±  2.4 & 8. 6  ±  2.6 & 6.9  ±  2.4 & 5.9  ±  2.2 &... &... &... M31 with PMs & 15.1  ±  3.8 & 13.9  ±  3.5 & 13.1  ±  3.5 &... &... &... &... &... &... ![The fractional contribution each satellite makes to the mean mass estimator for the Milky Way (top) and M31 (bottom). For both galaxies, the mass budget is dominated by two satellites. For the Milky Way these are Leo I (red, dotted) and Hercules (blue, dashed). For M31, these are And XII (red, dotted) and And XIV (blue, dashed).](mest_bias "fig:") [fig:massconts] Radial Velocity Datasets ------------------------ Armed with values for *α*, *β* and *γ*, we now set the mass estimators to work. Data for the satellites of the Milky Way and M31 are given in Tables [table:mwsats] and  [table:andsats] respectively. Objects for which no line-of-sight velocity has been measured (And XVII, And XVIII, And XIX, And XX, And XXI and And XXII) are included in the tables, but excluded from the analysis. Using eqn ([eq:firstcase]) and recalling that the Monte Carlo simulations gave errors of  ∼ 25%, we give estimates of the mass with 100, 200 and 300 kpc for the Milky Way Galaxy in Table [table:massests]. Assuming velocity isotropy, we obtain for the mass of the Milky Way $M\_{300} = 0.9 \pm 0.3 \times 10^{12} \Msun$. The cussedness of the mass-anisotropy degeneracy is well illustrated by the fact that using the observationally derived $\beta\_{\rm data}$ gives $M\_{300} = 3.4 \pm 0.9 \times 10^{12} \Msun$, whilst using that from simulations gives $M\_{300} = 0.6 \pm 0.2 \times 10^{12} \Msun$. The huge spread in mass estimates is due to the fact that the line-of-sight velocities for the satellites are almost entirely providing information on the radial velocities as judged from the Galactic Centre. There is almost no information on the tangential motions in our dataset. However, there are other astrophysical reasons why masses higher than $\sim 2 \times 10^{12} \Msun$ are disfavoured. Using eqn ([eq:Andcase]), we obtain the mass of M31 within 300 kpc as $M\_{300} = 1.4 \pm 0.4 \times 10^{12} \Msun$. Here, though, in sharp distinction to the case of the Milky Way, plausible changes in the velocity anisotropy generate modest changes of the order of 10 per cent in the mass estimate, as shown in Table [table:massests]. Of course, this is understandable, as the line-of-sight velocity now has information on both the radial and tangential components, albeit tangled up in the projection. Taking the masses derived using velocity isotropy (*β* = 0), we note that this work hints at the removal of a long-standing puzzle, namely that the kinematic data on the satellite galaxies suggested that M31 was less massive than the Milky Way, whereas other indicators (such as the total numbers of globular clusters or the amplitude of the gas rotation curve) suggested the reverse. In fact, with the new datasets, the ratio of the masses of M31 to the Milky Way ( ∼ 1.5) is close to that which would be inferred using the Tully-Fisher relationship and the assumption that the luminosity is proportional to the total mass (2504/2204 ≈ 1.67). If instead the radial anisotropies derived from simulations are preferred, then the ratio is  ∼ 1.98. However, it may be imprudent to include all the satellites. For example, Leo I has long been known to dominate mass estimates of the Milky Way, on account of its large distance ( ∼ 260 kpc) and high line-of-sight velocity . It is unclear that Leo I is actually on a bound orbit, as opposed to a hyperbolic one. Hence, many attempts at determining the mass of the Milky Way quote estimates both including and excluding Leo I. In fact, recent photometric and spectroscopic evidence presented by  favours the picture in which Leo I is bound on an orbit with high eccentricity ( ∼ 0.95) and small perigalacticon (10-15 kpc). In particular, such models give good matches to the surface density and radial velocity dispersion profiles of Leo I, and imply high mass estimates for the Milky Way. However, using simulations found a population of satellite galaxies on extreme orbits ejected from haloes as a result of three-body slingshot effects, and suggested that Leo I might be an example of such an object. So, although the present evidence favours a bound orbit, a definitive verdict must await the measurement of Leo I’s proper motion by the *Gaia* satellite, which should resolve the issue. ![Distribution of \beta obtained from simultaneous mass and velocity anisotropy fitting. The median of each distribution is shown as a blue dashed line and the 68 \% confidence limits as cyan dotted lines. These values are also given in blue and cyan on the plot. The left panel shows the idealised case for 500 tracers, the right panel the case tailored to our M31 data where only 21 tracers are used. The value of \beta used to generate the tracer population is also shown.](hist_simul "fig:") [fig:simulterrors] Given that there is one satellite that is known to inflate the Milky Way’s mass, it is interesting to investigate whether any of the other satellites, particularly the recent discoveries, play similar rôles. The upper panel of Figure [fig:massconts] shows the fractional contributions each satellite makes to the Milky Way’s mass ($C \vlos^2r^{\alpha} / (G N) $) – it is the total of these values that we take to be the mass estimate. There are two clear outliers; the outermost satellite in this distribution is Leo I, the less extreme satellite is Hercules. Like Leo I, Hercules has a substantial radial velocity and a relatively large Galactocentric distance ( ∼ 130 kpc). Hercules has a highly elongated, irregular and flattened structure . This is consistent with tidal disruption during pericentric passages on a highly eccentric orbit (*e* > 0.9). This seems good evidence that Hercules is truly bound to the Milky Way. We repeat the same analysis for M31 and the results are shown in the bottom panel of Figure [fig:massconts]. Interestingly, we see that there are two outliers in the distribution, namely two of the recent discoveries, And XII and And XIV. Notice that though both objects have a substantial effect on M31’s mass estimate, neither are as extreme as Leo I. It is the inclusion of these two new objects in the satellite dataset that has augmented the mass of M31, so that it is now somewhat greater than that of the Milky Way. But, this begs the question: should these satellites be included? And XIV was discovered by in a survey of the outer M31 stellar halo. They recognized its extreme dynamical properties and suggested that it may either be falling into M31 for the first time or that M31’s mass must be larger than hitherto estimated by virial arguments. In fact, And XIV’s lack of gas and its elongated structure suggest that ram pressure stripping and tidal effects may have been important in its evolution. This is consistent with And XIV being a true satellite of M31 that has already suffered a pericentric passage, a conclusion that could be strengthened with deeper imaging, which might reveal the presence of tidal tails around And XIV. And XII is a still more ambiguous object – it was discovered as a stellar overdensity by. Spectroscopic observations were subsequently taken by, who conjectured that the satellite might be falling into the Local Group for the first time. The evidence for this is its large velocity and its likely location behind M31. However, it remains unclear whether this evolutionary track is consistent with the absence of detection of HI gas in the object. Pristine, infalling dwarfs, which have not yet experienced a pericentric passage of 50 kpc or less, should retain sizeable amounts of neutral HI gas, whereas constrain the mass in HI to be less than $3 \times 10^3 \Msun$. In light of this, we provide more mass estimates, after removing possible ambiguous objects (and re-computing the parameter *γ* where necessary). For the Milky Way, we exclude Leo I only and then Leo I and Hercules. For M31, we exclude And XII only and then And XII and And XIV. These mass estimates are also shown in Table [table:massests]. Note that, for example, the exclusion of Leo I does not change the mass estimate within 100 or 200 kpc, as Leo I is outside of this range. Similarly, And XII and And XIV lie outside of 100 kpc from the center of M31, so the mass estimates without them do not change the final column of the table. In the case of velocity isotropy (*β* = 0), it requires the excision of both And XII and And XIV from the datasets for the mass estimate of M31 to become comparable to or smaller than the Milky Way. For example, the mass of M31 with And XII and And XIV both removed is $0.85 \pm 0.24 \times 10^{12} \Msun$, as compared to the mass of the Milky Way with Leo I retained of $0.92 \pm 0.25 \times 10^{12} \Msun$. However, we have argued that And XIV is most likely bound, whilst And XII is a more ambiguous case. In other words, the problem pointed to by – namely that the mass of M31 inferred from the kinematics of the satellites is less than the mass of the Milky Way – has indeed been ameliorated by the discovery of more fast-moving M31 satellites. It seems particularly intriguing that such satellites exist for both the Milky Way and M31. used virialized models to estimate that the probability that, in a sample of 30 satellites, there is an object like Leo I, which changes the mass estimate by a factor of a few. They found that the probability is minute, only  ∼ 0.5%. Prior expectation does not favour the existence of objects like Leo I or And XII, yet in fact, both big galaxies in the Local Group possess such satellites. The clear conclusion is that the satellites in the outer parts of these galaxies cannot all be virialized. This is a point in favour of processes such as those advocated by to populate such orbits. [table:pms] | | | | | | --- | --- | --- | --- | | Name | | | | | | | | | | Carina | 22  ±  9 | 15  ±  9 | 1 | | Draco | 60  ±  40 | 110  ±  30 | 2 | | Fornax | 48  ±  5 | -36  ±  4 | 3 | | LMC/SMC | 198  ±  5 | 25  ±  5 | 4 | | Sculptor | 9  ±  13 | 2  ±  13 | 5 | | Sextans | -26  ±  41 | 10  ±  44 | 6 | | Ursa Minor | -50  ±  17 | 22  ±  16 | 7 | | M33 | 2.1  ±  0.7 | 2.5  ±  1.2 | 8 | | IC10 | -0.2  ±  0.8 | 2.0  ±  0.8 | 9 | | M31 | 2.1  ±  1.1 | -1.0  ±  0.9 | 10 | Sources: 1 -, 2 -, 3 -, 4 -, 5 -, 6 -, 7 -, 8 -, 9 -, 10 -, though unlike the other proper motions, this not a measurement but inferred from indirect evidence. ![Distribution of mass estimates as a fraction of true mass for Monte Carlo simulations using (top) 23 satellites with radial velocities, (middle) 7 satellites with proper motions and (bottom) 23 satellites, 7 of which have proper motions. The standard deviation of the best fitting Gaussian is shown in the top-right hand corner of each panel. [These plots assume \beta = -4.51, as estimated from the data].](mass_hist_mw "fig:") [fig:milkywaycase] ![The fractional contribution each satellite with proper motions makes to the mean mass estimate for the Milky Way Galaxy. Notice the extreme effect of Draco’s proper motion.](mest_mwpm_bias "fig:") [fig:pmconts] Simultaneous Solution for Mass and Anisotropy --------------------------------------------- There is one further way in which the estimators can be set to work with the line-of-sight velocities. When three dimensional positions and projected positions are simultaneously available – as for example in the case of M31’s satellites – it is possible to use the estimators based on both the $\langle \vlos^2 r^\alpha \rangle$ and the $\langle \vlos^2 R^\alpha \rangle$ moments to solve simultaneously for both the total mass and the anisotropy parameter. There is however no guarantee that the solution for *β* is in the physical range  − ∞ ≤ *β* ≤ 1. The success of this procedure of course rests on the accuracy of the data. The distances of the M31 satellites are determined by the tip of the red giant branch method and have errors of  ± 30 kpc. If we use eqns ([eq:Andcase]) and ([eq:lastcase]), and simultaneously solve for the unknowns, we obtain $$M\_{300} = 1.5 \pm 0.4 \times 10^{12} \Msun, \qquad\qquad \beta = -0.55^{+1.1}\_{-3.2}$$ which corresponds to mild tangential anisotropy. These are surprisingly sensible answers given the distance errors. Fig. [fig:simulterrors] is inferred from Monte Carlo simulations and shows the distributions of anisotropy parameters derived from simultaneous mass and anisotropy fitting for mock datasets. Also given in the panels are the median and 68 per cent confidence limits for the anisotropy parameter, in the case of 21 satellite galaxies (comparable to the present dataset for M31) and the case of 500 satellites. Although with 21 tracers, the errors on the anisotropy parameter are substantial, matters improve significantly with larger numbers of tracers. A dataset of 500 halo satellites (dwarf galaxies, globular clusters and planetary nebulae) is not unreasonable for a galaxy like M31 in the near future. This raises the possibility that the method of simultaneous fitting may prove more compelling in the future. In fact, given 500 tracers, it is reasonable to use the estimators based on both the $\langle \vlos^2 r^\alpha \rangle$ and the $\langle \vlos^2 R^\alpha \rangle$ moments to fit simultaneously at each distance, thus giving the run of anisotropy parameter and mass with radius. Radial and Proper Motion Datasets --------------------------------- Thus far, we have used only the line of sight velocities to make mass estimates. In this section, we add in the proper motions of satellites, where available. Thus, for the Milky Way galaxy, we combine results from eqn ([eq:firstcase]) for satellites without proper motions and from eqn ([eq:PMcase]) for those with proper motions, weighting each estimate by the reciprocal of the standard deviation to give the final answer. Proper motions, albeit with large error bars, have been measured for a total of 9 of the Milky Way satellite galaxies. It seems prudent to exclude Sagittarius, as it is in the process of merging with the Milky Way. Additionally, the interacting Magellanic Clouds are treated as a single system by computing the proper motion of their mass centroid, taking the masses of the LMC and SMC as $\sim 2 \times 10^{10} \Msun$ and $2 \times 10^9 \Msun$ respectively . This leaves us with a set of 7 satellites with proper motion data, summarized in Table [table:pms]. In most cases, errors on proper motions are large and, where multiple studies exist, the measurements are sometimes in disagreement. The proper motions inferred by ground based methods are in reasonable agreement with those derived from the *Hubble Space Telescope* (*HST*) in the cases of Fornax  , Carina  and the Magellanic Clouds . But, for Ursa Minor  and for Sculptor, agreement between different investigators is not good, and we have preferred to use the estimates derived from *HST* data. Nonetheless, it is important to include the proper motion data, especially for mass estimates of the Milky Way Galaxy. We use these proper motions along with distance and line-of-sight velocity data to calculate full space velocities for these satellites, as described in. In addition, there are two satellites of M31 with measured proper motions, namely M33 and IC 10. This astonishing feat has exploited the *Very Long Baseline Array* to measure the positions of water masers relative to background quasars at multiple epochs. Unfortunately, the technique cannot be extended to M31 itself, as it does not contain any water masers, and so its proper motion is much less securely known. However, reviewed the evidence from a number of sources – including kinematics of the M31 satellites, the motions of the satellites at the edge of the Local Group, and the constraints imposed by the tidal distortion of M33’s disk – to provide a best estimate. These data are also listed in Table [table:pms]. The Milky Way satellites are so remote that their line-of-sight velocities in the Galactic rest frame are almost identical to their radial velocities, as judged form the Galactic Centre. The proper motion data provide genuinely new information on the tangential motions and this is the only way to break the mass-anisotropy degeneracy. The same argument does not hold with equal force for M31, as the line of sight velocities incorporate contributions from both the radial and tangential components as reckoned from the centre of M31. Nonetheless, it is good practice to use all the data available, even though the proper motions of M33 and IC 10 with respect to the M31 reference frame must be inferred using an estimate of M31’s proper motion (rather than a measurement). For the satellites without proper motions, we use the form of the estimator given in eqns ([eq:firstcase]) or ([eq:Andcase]) for the Milky Way and M31 respectively; for those with proper motions, we use eqn ([eq:PMcase]). We combine results from the two estimators, weighting each estimate by the reciprocal of the standard deviation to give the final answer. To infer the standard deviation, we perform Monte Carlo simulations. So, for the case of the Milky Way, we generate mock datasets of 25 satellites, for which only 7 have proper motions. The errors on radial velocities are dwarfed by the uncertainty caused by the small number statistics and so are neglected. But, the errors on the proper motions are not negligible and they are incorporated into the simulations by adding a value selected at random from the range [-0.5 *μ*, 0.5 *μ*], where *μ* is the proper motion. The flat distribution has been chosen as systematic errors are as important as random Gaussian error in the determination of proper motions. However, we have tested alternatives in which we use the relative observational errors, or the relative observational errors multiplied by 2.5, and find that our results are robust against changes to the error law. The standard deviations of the fractional mass distribution the satellites with and without proper motions are separately computed, as illustrated in the panels of Figure [fig:milkywaycase]. We linearly combine the mass estimates, weighting with the reciprocal of the standard deviation, to give the final values reported in Table [table:massests]. Given that the Milky Way satellites with measured proper motions are moving on polar orbits, it is no surprise that the mass estimate of the Milky Way has now increased. Adopting the value of *β* we estimate from the data, we find $M\_{300} = 3.9 \pm 0.7 \times 10^{12} \Msun$ for the Milky Way Galaxy and $M\_{300} = 1.5 \pm 0.4 \times 10^{12} \Msun$ for M31. Assuming isotropy, we find $M\_{300} = 2.5 \pm 0.5 \times 10^{12} \Msun$ for the Milky Way Galaxy and $M\_{300} = 1.4 \pm 0.4 \times 10^{12} \Msun$ for M31. Notice however, the mass estimate for M31 has barely changed from the value inferred from the full radial velocity dataset. Again, we calculate the contribution that each satellite makes to the mass estimate to investigate whether any are dominating the final answer. First, this procedure guards against the possibility of a completely rogue proper motion measurement. Second, there are some suggestions that the Magellanic Clouds may not be bound, or even if bound may only be on its second passage and so may not be part of the relaxed distribution of satellite galaxies  . So, it is helpful to check that our results are not unduly sensitive to its inclusion. As Figure [fig:pmconts] shows, we find that Draco is a clear outlier and nearly doubles the Milky Way mass estimate. If we remove the Draco proper motion from the sample, we instead recover a mass *M*300 = 2.7  ±  0.5 $\times 10^{12} \Msun$ (assuming $\beta\_{\rm data}$) or *M*300 = 1.4  ±  0.3 $\times 10^{12} \Msun$ (assuming isotropy). It is particularly concerning that the proper motion of Draco has such a substantial effect, because – as judged from the size of the error bars in Table [table:pms] – it is one of the noisier measurements. By contrast, the exclusion of the Magellanic Clouds has only a minor effect, as is evident from the results listed in Table [table:massests]. We have covered a number of possibilities, so it is probably useful for us to give our best estimates. On balance, we think the case for including at least And XIV among the satellite sample for Andromeda is strong. Whilst And XII is a more ambiguous case, the lack of any HI gas suggests to us that it should also be included. Among the satellites of the Milky Way, we favour including Leo I based on the work of , whilst we are inclined to discard the proper motion of Draco reported in  until corroborated. Until the discrepancy between the velocity anisotropies reported in simulations and in data is explained, we prefer to use the data as our guide *So, our best estimate for the mass of the Milky Way within 300 kpc is* $$M\_{300} \sim 2.7 \pm 0.5 \times 10^{12} \Msun \label{eq:MWMASS}$$ whilst for M31, it is $$M\_{300}\sim 1.5 \pm 0.4 \times 10^{12} \Msun. \label{eq:ANDMASS}$$ These estimates are obtained using the combined radial velocity and proper motion datasets. The error bars only incorporate the statistical uncertainty. As we have emphasised, there are much greater uncertainties induced by selection of satellite members and velocity anisotropy. In particular, when these uncertainties are considered, it is not possible to decide which of the Milky Way or M31 is more massive based on satellite kinematic data alone. Discussion ========== It is instructive to compare our results with a number of recent estimates of the masses of the Local Group and its component galaxies. extracted a sample of  ∼ 2400 blue horizontal branch stars from the SDSS. These are all resident in the inner halo within 60 kpc of the Galactic centre. This has the advantage that the BHBs are surely virialized, but the disadvantage that no inference can be made about the mass exterior to 60 kpc. Hence, any estimate as to the total mass is driven wholly by prior assumptions rather than the data. In fact, assumed an NFW halo with a canonical concentration holds, and then estimated the virial mass of the Milky Way’s dark matter halo as $M = 1.0^{+0.3}\_{-0.2} \times 10^{12} \Msun$, using Jeans modelling with an anisotropy parameter inferred from numerical simulations. This is lower than our preferred value, but in good agreement with our comparable calculations using line of sight velocity datasets alone. A somewhat similar calculation for M31 has been reported by. The mass of the baryonic material is estimated using a Spitzer 3.6 *μ*m image of the galaxy, together with a mass-to-light ratio gradient based on the galaxy’s *B* − *R* colour. This is combined with an adiabatically-contracted NFW halo profile to reproduce the observed HI rotation curve data. They find a total virial mass of M31’s dark halo as $8.2 \pm 0.2 \times 10^{11} \Msun$. This is lower than most of our estimates, with the exception of those based on samples excluding both And XII and And XIV. Although these calculations are interesting, it is worth remarking that the final masses are not wholly controlled by the data. We know that, from Newton’s theorem, any mass distribution outside the limiting radius of our data has no observational effect in a spherical or elliptical system. To estimate the virial mass from data confined to the inner parts (such as BHBs or the optical disk) requires an understanding of the structure of the pristine dark halo initially, as well as how it responds to the formation of the luminous baryonic components. It is this that controls the final answer. used the Millennium Simulation to extract mock analogues of the Local Group and calibrate the bias and error distribution of the Timing Argument estimators . From this, they obtain a total mass of the two large galaxies in the Local Group of $5.3 \times 10^{12} \Msun$ with an inter-quartile range of [3.8 × 1012, 6.8 × 1012] $\Msun$ and a 95 % confidence lower limit of $1.8 \times 10^{12} \Msun$. Importantly, showed that the mass estimate from the timing argument is both unbiased and reasonably robust. This is a considerable advance, as there have long been worries that the gross simplification of two-body dynamics implicit in the original formulation of the Timing Argument may undermine its conclusions. It therefore seems reasonable to assume that the combined mass of the Milky Way Galaxy and M31 is at least $3.8 \times 10^{12} \Msun$, and perhaps more like $5.3 \times 10^{12} \Msun$. The low estimates of the Milky Way and M31 masses of and are not compatible with this, and barely compatible with Li & White’s 95 % lower limit. Using our preferred values in eqns ([eq:MWMASS]) and ([eq:ANDMASS]), the combined mass in the Milky Way and M31 galaxies is $4.2 \pm 0.6 \times 10^{12} \Msun$. This is comparable to the $3.8 \times 10^{12} \Msun$ of Li & White. also estimated a virial mass for the Milky Way of $2.4 \times 10^{12} \Msun$ with a range of [1.1 × 1012, 3.1 × 1012] $\Msun$, based on timing arguments for Leo I. Given all the uncertainties, this is in remarkable accord with our best estimate. Conclusions =========== We have derived a set of robust tracer mass estimators, and discussed the conditions under which they converge. Given the positions and velocities of a set of tracers – such as globular clusters, dwarf galaxies or stars – the estimators compute the enclosed mass within the outermost datapoints. The accuracy of the estimator has been quantified with Monte Carlo simulations. The estimators are applicable to a wide range of problems in contemporary astrophysics, including measuring the masses of elliptical galaxies, the haloes of spiral galaxies and galaxy clusters from tracer populations. They are considerably simpler to use than distribution function based methods , and involve no more calculation than taking weighted averages of combinations of the positional and kinematical data. They should find widespread applications. The mass estimators are applied to the satellite populations of the Milky Way and M31 to find the masses of both galaxies within 300 kpc. These estimates are the first to make use of the recent burst of satellite discoveries around both galaxies. Both satellite populations have nearly doubled in size since previous estimates were made. We summarise our results by answering the questions; What are (1) the minimum, (2) the maximum and (3) the most likely masses of the Milky Way and M31 galaxies? (1) The mass of the Milky Way Galaxy within 300 kpc could be as low as $0.4\pm 0.1 \times 10^{12} \Msun$. This would imply that Leo I is gravitationally unbound, contrary to the recent evidence provided by by . Leo I would then be either an interloper or an object being ejected from the Milky Way by an encounter. It would also require that the proper motion of Draco is incorrect, which is not inconceivable given the difficulty of the measurements. It implies that the satellite galaxies are moving on radial orbits and so the velocity anisotropy is radial. The mass of M31 within 300 kpc could plausibly be as low as $0.8 \pm 0.2 \times 10^{12} \Msun$. This would be the case if both And XII and And XIV are not gravitationally bound, which is possible if mechanisms such as those proposed by are ubiquitous. It would also require that the proper motion data on M33 and IC10 or – perhaps more likely – the indirectly inferred proper motion of M31 is in error. Again, such a low estimate for the mass occurs only if the satellites are moving predominantly radially. Although it is interesting to ask how low the masses of the Milky Way and M31 could be, it does produce a mystery in the context of the Timing Argument, which typically yields larger combined masses. It is possible that some of the mass of the Local Group is unassociated with the large galaxies. Although not the conventional picture, this is probably not ruled out and there have been suggestions that $\sim 10^{12} \Msun$ may be present in the Local Group in the form of baryons in the warm-hot intergalactic medium . There are few constraints on the possible existence of dark matter smeared out through the Local Group, and unassociated with the large galaxies. However, the clustering of the dwarf galaxies around the Milky Way and M31 does suggest that the gravity of the dark matter is centered on the prominent galaxies. (2) The largest mass estimate we obtained for the Milky Way Galaxy is $3.9 \pm 0.7 \times 10^{12} \Msun$. This extreme values is driven by the assumption of tangential anisotropy for the satellites, so that the measured line of sight velocities also imply substantial tangential motions as well. The estimate assumes all the satellites including Leo I to be bound, and the anomalously high proper motion measurement of Draco to be valid. Note that the present data allow considerably more scope to increase the mass of the Milky Way Galaxy than M31. Our largest mass estimate for M31 is a much more modest $1.6 \pm 0.4 \times 10^{12} \Msun$, which occurs when we analyse the whole sample incorporating And XII and And XIV and assume tangentially anisotropic velocity distributions. The current concensus is that the two galaxies are of a roughly similar mass, with M31 probably the slightly more massive of the two. This though is inferred from indirect properties, such as the numbers of globular clusters, which correlates with total mass albeit with scatter, or the amplitude of the inner gas rotation curve. The stellar halo of M31 is certainly more massive than that of the Milky Way, although this may not be a good guide to the dark halo. Of course, it could be that the current concensus is wrong, and that the Milky Way halo is more massive than that of Andromeda. There is also some indirect evidence in favour of this – for example, the typical sizes of the M31 dwarf spheroidals are large than those of the Milky Way, which is explicable if the Milky Way halo is denser. However, it does not seem reasonable to postulate that the mass of the Milky Way is substantially larger than that of M31. Hence, the very large estimate of $3.9 \pm 0.7 \times 10^{12} \Msun$ is best understood as a manifestation of the degeneracy in the problem of mass estimation with only primarily radial velocity data. (3) Our preferred estimates come from accepting Leo I, And XII and And XIV as bound satellites, whilst discarding the Draco proper motion as inaccurate. This gives an estimate for the mass of the Milky Way within 300 kpc as $2.7 \pm 0.5 \times 10^{12} \Msun$ and for M31 as $1.5 \pm 0.4 \times 10^{12} \Msun$, assuming the anisotropy implied by the data (*β* ≈  − 4.5). The error bars are just the statistical uncertainty and do not incorporate the uncertainty in anisotropy or sample membership. In view of this, it is not possible to decide which of the Milky Way galaxy or M31 is the more massive based on the kinematic data alone. These values for the masses are attractive for a number of reasons. First, the mass ratio between the Milky Way and M31 is of roughly of order unity, which accords with a number of other lines of evidence. Second, the values allow most of the dark matter in the Local Group implied by the Timing Argument to be clustered around the two most luminous galaxies. Third, they are within the range found for cosmologically motivated models of the Milky Way and M31. We prefer to assume the anisotropy implied by the admittedly scanty data on the proper motions of the satellites. However, for completeness, we quickly sketch the effects of dropping this assumption. If the velocity distribution is isotropic, or even radially anisotropic as suggested by the simulations, then the mass of the Milky Way becomes $1.4 \pm 0.3 \times 10^{12} \Msun$ or $1.2 \pm 0.3 \times 10^{12} \Msun$ respectively. Similarly for M31, the values are $1.4 \pm 0.4 \times 10^{12} \Msun$ (isotropy) or $ 1.3 \pm 0.4 \times 10^{12} \Msun$ (radially anisotropic). The greatest sources of uncertainty on the masses remain the role of possibly anomalous satellites like Leo I and the velocity anisotropy of the satellite populations. There is reason to be optimistic, as the *Gaia* satellite will provide proper motion data on all the dwarf galaxies that surround the Milky Way and M31, as well as many hundreds of thousands of halo stars. The analysis that we have carried out here indicates that proper motions are important if we wish to increase the accuracy of our estimates, as well as understand the dynamical nature of objects like Leo I. While we are not yet able to exploit the proper motions, *Gaia* will allow us to do so. Acknowledgments =============== NWE thanks Simon White for a number of insightful discussions on the matter of scale-free estimators. LLW thanks the Science and Technology Facilities Council of the United Kingdom for a studentship. Work by JHA is in part supported by the Chinese Academy of Sciences Fellowships for Young International Scientists (Grant No.:2009Y2AJ7). JHA also acknowledges support from the Dark Cosmology Centre funded by the Danish National Research Foundation (Danmarks Grundforskningsfond). The paper was considerably improved following the comments of an anonymous referee. -0.48cm T. E., Olszewski E. W., Pryor C., 1995,, 110, 2131 J. N., Tremaine S., 1981,, 244, 805 M., Ferraro F. R., Origlia L., Pancino E., Monaco L., Oliva E., 2002,, 124, 3222 M., Gennari N., Ferraro F. R., 2005,, 360, 185 M., Gennari N., Ferraro F. R., Sollima A., 2004,, 354, 708 V. et al., 2008,, 686, L83 V. et al., 2009,, p. 903 V. et al., 2007,, 654, 897 V. et al., 2006,, 647, L111 G., Kallivayalil N., Hernquist L., Robertson B., Cox T. J., van der Marel R. P., Alcock C., 2007,, 668, 949 J., Tremaine S., 1987, Galactic dynamics.Princeton, NJ, Princeton University Press, 1987 R., 1991,, 372, 54 A., Reid M. J., Falcke H., Greenhill L. J., Henkel C., 2005, Science, 307, 1440 A., Reid M. J., Falcke H., Henkel C., Menten K. M., 2007a, ArXiv:0708.1704 A., Reid M. J., Falcke H., Henkel C., Menten K. M., 2007b,, 462, 101 S. C. et al., 2007,, 662, L79 M.-R. L., van der Marel R. P., Loup C., Habing H. J., 2000,, 359, 601 A. A. et al., 1999,, 118, 1657 M. G. et al., 2007,, 668, L43 E., Méndez R. A., Pedreros M. H., Moyano M., Gallart C., Noël N., Baume G., Carraro G., 2009,, 137, 4339 R. A. et al., 2009,, 399, 1773 W., Binney J. J., 1998,, 298, 387 J., Kuhlen M., Madau P., 2007,, 667, 859 N. W., Hafner R. M., de Zeeuw P. T., 1997,, 286, 315 N. W., Wilkinson M. I., 2000,, 316, 929 N. W., Wilkinson M. I., Guhathakurta P., Grebel E. K., Vogt S. S., 2000,, 540, L9 N. W., Wilkinson M. I., Perrett K. M., Bridges T. J., 2003,, 583, 752 M. et al., 2006,, 651, 167 W. L. et al., 2001,, 553, 47 M., Willman B., Simon J. D., Strigari L. E., Kirby E. N., Law D. R., Strader J., 2009,, 692, 1464 S. T., Hunter J. H., Boonyasait V., 2002,, 337, 34 J., Zaritsky D., 2006,, 131, 2514 J., Tremaine S., Bahcall J. N., 1985,, 298, 8 A., 2004,, 610, L97 R., Martin N. F., Irwin M., Chapman S., Ferguson A. M. N., Lewis G. F., McConnachie A. W., 2007,, 671, 1591 R. A., Wyse R. F. G., Gilmore G., Irwin M. J., Suntzeff N. B., 1997,, 113, 634 M. J. et al., 2007,, 656, L13 M. J., Ferguson A. M. N., Huxor A. P., Tanvir N. R., Ibata R. A., Lewis G. F., 2008,, 676, L17 K. V., Law D. R., Majewski S. R., 2005,, 619, 800 F. D., Woltjer L., 1959,, 130, 705 J., Kubiak M., Szymanski M., Udalski A., Krzeminski W., Mateo M., 1995,, 112, 407 I. D., Karachentseva V. E., Huchtmeier W. K., Makarov D. I., 2004,, 127, 2031 I. D., Kashibadze O. G., Makarov D. I., Tully R. B., 2009,, 393, 1265 A., Zhao H., Somerville R. S., 2002,, 573, 597 A., Kleyna J. T., Wilkinson M. I., Grebel E. K., Gilmore G. F., Evans N. W., Wyse R. F. G., Harbeck D. R., 2007,, 134, 566 A., Wilkinson M. I., Kleyna J. T., Gilmore G. F., Grebel E. K., Mackey A. D., Evans N. W., Wyse R. F. G., 2007,, 657, 241 A. et al., 2009,, 690, 453 C. S., 1996,, 457, 228 P., Bastian U., 1997, New Astronomy, 2, 77 A. S., Lynden-Bell D., 1992,, 255, 105 D. R., Majewski S. R., Johnston K. V., 2009,, 703, L67 B. et al., 2009, ArXiv:0901.0820 Y., White S. D. M., 2008,, 384, 1459 B., Tremaine S., 1987,, 320, 493 D., Frenk C. S., 1981, The Observatory, 101, 200 S. R. et al., 2007,, 670, L9 N. F., de Jong J. T. A., Rix H.-W., 2008,, 684, 1075 N. F., Ibata R. A., Chapman S. C., Irwin M., Lewis G. F., 2007,, 380, 281 N. F., Ibata R. A., Irwin M. J., Chapman S., Lewis G. F., Ferguson A. M. N., Tanvir N., McConnachie A. W., 2006,, 371, 1983 N. F. et al., 2009, ArXiv:0909.0399 M. L., 1998,, 36, 435 A. W. et al., 2008,, 688, 1009 A. W., Irwin M. J., 2006,, 365, 902 A. W., Irwin M. J., Ferguson A. M. N., Ibata R. A., Lewis G. F., Tanvir N., 2005,, 356, 979 J. F., Frenk C. S., White S. D. M., 1996,, 462, 563 F. et al., 2003,, 421, 719 S., Pryor C., Bristow P., Olszewski E. W., Harris H. C., Mateo M., Minniti D., Tinney C. G., 2005,, 130, 95 S., Pryor C., Bristow P., Olszewski E. W., Harris H. C., Mateo M., Minniti D., Tinney C. G., 2006,, 131, 1445 S., Pryor C., Bristow P., Olszewski E. W., Harris H. C., Mateo M., Minniti D., Tinney C. G., 2007,, 133, 818 S., Pryor C., Olszewski E. W., 2008,, 135, 1024 S. et al., 2002,, 124, 3198 S., Pryor C., Olszewski E. W., Harris H. C., Mateo M., Minniti D., Tinney C. G., 2003,, 126, 2346 D., Dubath P., Pasquini L., 1995,, 300, 31 S., Lynden-Bell D., 1989,, 240, 195 K., Kreitschmann J., 1988,, 201, 51 S., Madore B. F., Freedman W. L., 1999,, 511, 671 L. V., Navarro J. F., Abadi M. G., Steinmetz M., 2007,, 379, 1475 I., Held E. V., Bertelli G., 2000,, 355, 56 R.-D., Irwin M. J., 1994, in MacGillivray H. T., ed., IAU Symposium Vol. 161, Astronomy from Wide-Field Imaging. p. 535 M. S., Barth A. J., Bullock J. S., 2008,, 389, 1911 J. D., Geha M., 2007,, 670, 313 S. T. et al., 2007,, 663, 960 R. P., Alves D. R., Hardy E., Suntzeff N. B., 2002,, 124, 2639 R. P., Guhathakurta P., 2008,, 678, 187 M. G., Mateo M., Olszewski E. W., 2008,, 688, L75 M. G., Mateo M., Olszewski E. W., Bernstein R., Wang X., Woodroofe M., 2006,, 131, 2114 M. G., Mateo M., Olszewski E. W., Pal J. K., Sen B., Woodroofe M., 2006,, 642, L41 S. D. M., 1981,, 195, 1037 M. I., Evans N. W., 1999,, 310, 645 X. X. et al., 2008,, 684, 1143 D. G. et al., 2000,, 120, 1579 D. B. et al., 2006,, 650, L41 D. B. et al., 2006,, 643, L103 D. B. et al., 2004,, 612, L121 D. B. et al., 2007,, 659, L21 --- 1. *α* =  − 1 corresponds to the gravitational field that pulls with an equal magnitude force regardless of radius, which is formally generated by a halo density falling off as *r*− 1. Provided we regard the scale-free potential as an approximation valid over a limited range and not extending to spatial infinity, we can permit *α* ≥  − 2, since *α* =  − 2 corresponds to the harmonic potential generated by a homogeneous sphere.[↩](#fnref1) 2. On the other hand, for the satellites of the Milky Way, it is often assumed that $d\ll\rin$, which leads to sin*φ* ≈ 0 and consequently $\langle\vlos^2r^\alpha\rangle \approx \langle v\_r^2r^\alpha\rangle$.[↩](#fnref2) 3. The result is valid provided that the integral is limited to spherical shells. However, given the lack of depth information, it might seem more logical to perform the integration over cylindrical shells. Unfortunately, the result is more complicated, as it involves the integrals of incomplete beta functions.[↩](#fnref3)
arxiv_0000178
Unified *k*-space theory of optical coherence tomography ======================================================== We present a general theory of optical coherence tomography (OCT), which synthesizes the fundamental concepts and implementations of OCT under a common 3D *k*-space framework. At the heart of this analysis is the Fourier diffraction theorem, which relates the coherent interaction between a sample and plane wave to the Ewald sphere in the 3D *k*-space representation of the sample. While only the axial dimension of OCT is typically analyzed in *k*-space, we show that embracing a fully 3D *k*-space formalism allows explanation of nearly every fundamental physical phenomenon or property of OCT, including contrast mechanism, resolution, dispersion, aberration, limited depth of focus, and speckle. The theory also unifies diffraction tomography, confocal microscopy, point-scanning OCT, line-field OCT, full-field OCT, Bessel-beam OCT, transillumination OCT, interferometric synthetic aperture microscopy (ISAM), and optical coherence refraction tomography (OCRT), among others. Our unified theory not only enables clear understanding of existing techniques, but also suggests new research directions to continue advancing the field of OCT. Introduction ============ Since its invention nearly three decades ago, optical coherence tomography (OCT) has proliferated into a broad class of techniques with a variety of biomedical and clinical applications, such as in ophthalmology, cardiology, dermatology, oncology, and gastroenterology. Even beyond the well known categories of time-domain OCT (TD-OCT) and Fourier-domain OCT (FD-OCT), the field of OCT has evolved to encompass implementations that use a variety of illumination and detection strategies, unified by interferometry with a broadband or low-coherence source. The earliest implementations of OCT were point-scanning OCT systems, which involved scanning a focused spot across the sample, probing one lateral spatial position at a time. Even today, point-scanning OCT remains the most popular form of OCT and is very successful as a clinical standard for ophthalmic imaging and an emerging standard for intravascular and gastroenterological imaging. Shortly thereafter, full-field OCT (FF-OCT) and line-field OCT (LF-OCT) emerged as alternate strategies, which use unfocused or cylindrically (1D) focused light and parallel spatial detection (i.e., a 2D camera or a 1D line camera). Furthermore, apart from Gaussian beams with common mode or confocal detection, other illumination patterns and detection strategies have also been used in OCT, such as Bessel beams with double-pass illumination and detection or with decoupled Gaussian mode detection, annular pupils for illumination and detection, and many other strategies. All of these alternative illumination/detection strategies have been used to maintain a high lateral resolution over an extended depth of focus in OCT, compared to the standard Gaussian beam. Other methods have also been proposed to address this issue, notably interferometric synthetic aperture microscopy (ISAM), which computationally corrects the defocus by solving the coherent inverse scattering problem, with different solutions depending on the illumination/detection strategy (e.g., ISAM for Bessel-beam illumination/Gaussian mode detection and FF-OCT ). Even more recently, we developed an incoherent angular compounding technique called optical coherence refraction tomography (OCRT) to address this trade-off between depth of focus and lateral resolution by reconstructing an image with isotropic resolution. These various implementations and extensions of OCT would greatly benefit from a unified theoretical treatment that concisely identifies their differences and similarities, their relative advantages and disadvantages, and their relationship to the broader category of coherent imaging. Such a unified theory would also suggest new ways to continue the technological advancement of the field of OCT. To this end, here, extending our earlier preliminary work, we present a comprehensive, fully 3D *k*-space analysis of OCT that provides a unified theoretical framework. This theory not only encompasses all of the implementations of OCT mentioned in the previous paragraph, but also explains many fundamental concepts of OCT, including the contrast mechanism, origin of speckle, dispersion, and trade-off between the lateral resolution and depth of focus. Here, *k*-space refers to 3D Fourier space, where *k* is the customary symbol for representing spatial wavevectors, or *k*-vectors, that compactly denote both the propagation direction of plane waves and the wavelength or wavenumber via the *k*-vector’s length. Using principles from the field of Fourier optics, these *k*-vectors serve as a basis for decomposing more complicated waveforms, such as the different types of illumination strategies often employed in OCT (including focused Gaussian beams). As plane waves are the fundamental building blocks for analyzing more complicated systems, we utilize a principle that predicts how a plane wave interacts with a 3D object: the Fourier diffraction theorem, which was developed in the field of diffraction tomography to reconstruct a sample’s 3D refractive index (RI) distribution from a set of diffraction patterns resulting from plane wave illumination from multiple angles. Our *k*-space theoretical treatment builds upon prior excellent reviews and theoretical treatments of OCT, including works that analyze OCT in 3D *k*-space. However, we advance a unified and comprehensive *k*-space theory of OCT, encompassing a broader range of implementations of OCT and other coherent imaging techniques, as well as being the first one to incorporate speckle as a direct consequence of the band-pass nature of its transfer function. We note that this paper does not compare TD-OCT and FD-OCT or spectrometer-based FD-OCT and swept-source FD-OCT, as these topics have been treated extensively. Rather, from the point of view of *k*-space theory, the differences between these approaches are practical implementation details, insofar as they are different methods of measuring the same optical fields and therefore the same information about the sample. Note that the same cannot be said about point-scanning OCT, FF-OCT, and LF-OCT, which measure slightly different information about the sample, as will become clear. Towards a 3D *k*-space framework as a general theory of OCT =========================================================== Previous theoretical treatments and reviews of OCT (e.g., ) are typically centered around low-coherence interferometry, putting forth a 1D *k*-space model (in particular, a distorted version of *k**z* of our 3D *k*-space model; see Sec. [refocus]) that explains OCT’s axial resolving capabilities very accurately. The lateral resolution is then explained using beam focusing, separately from interferometry. While the separability of these explanations is a very good assumption for weakly-focused beams, here, we extend this picture with a fully wave-based, 3D *k*-space model that unites low coherence interferometry and beam focusing under one framework. This is achieved by examining OCT from a more fundamental perspective of the coherent interaction between a plane wave (or superpositions thereof) and a weakly scattering sample. This coherent interaction is straightforwardly visualized in 3D *k*-space via the Ewald sphere, which describes the information obtainable about a sample for a given wavelength and illumination direction, according to the Fourier diffraction theorem (see Sec. [FDT] below). In this work, we show that this *k*-space framework explains and highlights the interdependence among nearly all properties of OCT in a unified manner, particularly the source of contrast, the full 3D point-spread function (PSF) and transfer function (TF), the trade-off between the lateral resolution and depth of focus, and the origin of speckle. We also apply this common *k*-space framework to analyze and compare the TFs of the major implementations of OCT, specifically point-scanning OCT with a Gaussian beam and Bessel beam, LF-OCT, and FF-OCT, as well as coherent confocal microscopy and conventional holography as degenerate cases of OCT. All of these implementations can be thought of as special cases of diffraction tomography in reflection. Finally, we discuss the implications of this theoretical treatment on the limits of speckle reduction and resolution enhancement in OCT. [fig:born] First Born approximation ------------------------ In OCT, one typically makes a weakly or singly scattering assumption about the sample. A common interpretation is in terms of discrete photons and a sample composed of a discrete set of reflectors: an incident photon will interact with exactly one of the reflectors and ignore all the others. Since we are advancing a *k*-space framework, we need to understand this assumption in terms of waves. Thus, we turn to the inhomogeneous, time-independent wave equation (also known as the Helmholtz equation), (∇2 + *k*02*n**m*2)*u*(**r**) =  − *V*(**r**)*u*(**r**),  where *V*(**r**) = *k*02(*n*(**r**)2 − *n**m*2) is the sample’s scattering potential, which is directly related to its refractive index (RI) distribution *n*(**r**), **r** = (*x*, *y*, *z*) is the 3D spatial coordinate, *k*0 = 2*π*/*λ*0 is the vacuum wavenumber, and *n**m* is the background medium RI. Eq. [helmholtz] thus describes the propagation of a wave *u*(**r**) through a sample with a spatially varying scattering potential. As the wave equation only admits closed form solutions to all but the simplest RI distributions (e.g., uniform spheres), the solution in the general case is often obtained through iterative methods or through discrete approximations. One such iterative solution is the Born series, based on a recursive expansion of the Lippmann-Schwinger equation, the integral form of the Helmholtz differential equation (Eq. [helmholtz]). A more detailed explanation of the Born series and the wave equation is beyond the scope of this paper, but has been extensively treated in the literature. While the Born series in principle models the general case of multiple scattering, in practice it is unstable and has convergence issues. However, truncation of the Born series to only its first term yields a linear equation that permits an interpretable closed form solution given a plane wave illumination (Sec. [FDT]). This is known as the first Born approximation, which states that the emerging field is the superposition of the incident field, *u**i**n**c*(**r**), and the scattered field *u**s**c*(**r**): *u*(**r**) ≈ *u**i**n**c*(**r**) + *u**s**c*(**r**). We can now interpret the meaning of a “weakly scattering” or “singly scattering” sample in the context of OCT as the condition by which the first Born approximation is valid ; that is, *u**s**c* is much smaller than *u**i**n**c*. Note that even though Eq. [firstborn] is expressed in terms of the fields, the validity of the first Born approximation is a property of the sample, not the illumination. The first Born model is a reasonable assumption in OCT, as the backscattered signals are typically several orders of magnitude smaller than incident beam for most biological samples. Note that this assumption does not necessarily place a limit on the sample’s thickness, but rather the cumulative RI variation across the sample depth. For example, while a sample with very small RI variation can be thicker, a sample with high RI variation only satisfies the first Born approximation if it is thin. With enough cumulative RI variation, the sample becomes multiply scattering. Fig. [fig:born] illustrates this point intuitively with a multi-layer Born model, a multiple scattering model developed for diffraction tomography that divides the thick sample into layers within each of which the first Born approximation applies (with the caveat that the multi-layer Born model is not equivalent to the Born series, as the former does not consider bidirectional interaction among the layers). From this interpretation, in the multiply scattering case, the incident field on a deep layer within the sample has been aberrated through cumulative interactions with shallower RI variations. In fact, OCT is routinely operated outside of the first Born approximation, as evidenced by shadowing or attenuation at greater depths, which is not predicted by the first Born model. This suggests that OCT does not necessarily fail when the first Born assumption is broken, but rather as long as the field incident at a deep structure is not completely random (i.e., it is primarily forward scattering and therefore contains some memory of the incident field), one can still obtain depth-resolved measurements, at the cost of signal-to-noise ratio (SNR) due to inefficient back-coupling into the fiber, which acts as a spatial mode filter in the case of point-scanning OCT, or due to cross-talk in the case of FF-OCT. However, for the ensuing *k*-space theoretical treatment, we will assume validity of the first Born approximation. [fig:FDT] Contrast mechanism of OCT ------------------------- As light propagation throughout a sample is dictated by Eq. [helmholtz], the source of contrast in OCT is the spatially varying scattering potential, which directly relates to the sample’s RI distribution. As such, properties such as scattering coefficients, scattering phase functions, and anisotropy factors commonly used to characterize bulk scattering properties of tissue are simply higher-level descriptors that are based on the more fundamental scattering potential or RI variation. Note that scattering potential and RI can be complex-valued, meaning in theory the spatial variation in absorption can also influence how light propagates through the sample. However, as we will show, not all types of RI variation will produce a detectable signal in OCT, as only those which produce backscattering can contribute to OCT contrast. To appreciate what properties of the sample’s RI distribution that OCT is sensitive to, and conversely how the RI distribution affects an input illumination field, we turn to the FDT. Fourier diffraction theorem --------------------------- The FDT is a fundamental theorem for diffraction tomography that relates a sample’s 3D scattering potential to the complex 2D diffraction pattern of a plane wave, viewed from an arbitrary direction. Its relevance to OCT has also been pointed out. The FDT can be thought of as the wave analog of the projection-slice theorem (also known as the Fourier slice theorem), which assumes a ray model and is commonly used for X-ray computed tomography (CT). In the geometric optics limit (*λ* → 0), the two theorems converge. Specifically, consider a sample with scattering potential *V*(*x*, *y*, *z*) and a monochromatic plane wave governed by the wavevector $\mathbf{k\_{illum}}=(k\_{illum, x},k\_{illum, y},k\_{illum, z})$, which describes the direction of the field as well as its wavelength or wavenumber, i.e., $|\mathbf{k\_{illum}}|=k\_0$. Without loss of generality, assume that the sample is at the origin in a 3D Cartesian space and that we are interested in the 2D diffraction pattern, *U*(*x*, *y*), at *z* = 0 in the *x**y* plane (or a conjugate plane thereof). The FDT states $$\label{FDT\_eq} \widetilde{U}(k\_x,k\_y)\propto\widetilde{V}\left(\left(k\_x,k\_y,\sqrt{k\_0^2-k\_x^2-k\_y^2}\right)-\mathbf{k\_{illum}}\right),$$ where the tildes denote the Fourier transforms, $\widetilde{V}\left(k\_x,k\_y,k\_z\right)=\mathcal{F}\_{3D}\{V(x,y,z)\}$ and $\widetilde{U}\left(k\_x,k\_y\right)=\mathcal{F}\_{2D}\{U(x,y)\}$. The argument of $\widetilde{V}$ describes a displaced Ewald spherical shell in *k*-space with radius *k*0 and center $\mathbf{k\_{illum}}$ (Fig. [fig:FDT]). In practice, only a small solid angle surrounding the *k**z*-axis is accessible due to the limited numerical aperture (NA) of the objective lens. This partial spherical shell in *k*-space can be thought of as the 3D transfer function (TF) of the sample’s scattering potential for monochromatic plane wave illumination and full-field collection. While this TF is relatively modest in its *k*-space coverage, it forms the basic building block of general coherent imaging modalities that may use angular diversity (the angular spectrum) and wavelength diversity (the source spectrum) to construct larger TFs. Just as the FDT is a fundamental theorem for diffraction tomography, it is also a fundamental theorem for OCT, as a reflection-mode coherent imaging modality. In the following sections, we apply the FDT to a variety of coherent imaging modalities and derive their respective transfer functions of the information in the sample’s scattering potential. [fig:TFPSFoverview] Transfer functions of various coherent imaging modalities ========================================================= Holographic microscopy ---------------------- Perhaps the simplest case is holographic microscopy (or simply, holography ), which is a useful starting point to understand the FDT and is the building block for our extension of the 3D *k*-space formalism to OCT implementations. Holography uses monochromatic plane wave illumination, potentially from any angle, to interrogate the sample and images the emerging diffracted field onto a 2D film or camera with phase-sensitive detection (e.g., by use of an off-axis reference or an on-axis reference with multiple phase shifts). Depending on the direction of illumination relative to collection direction (assumed to be the *z*-axis), the partial Ewald sphere will be shifted to a different position in *k*-space. For reflective geometries, holography probes high spatial frequencies, while for transmissive geometries, holography probes low frequencies. Holography always has poor axial sectioning capabilities in 3D samples, as the TF is infinitely thin in the axial direction. For thin, 2D samples to which holography is often applied, the lack of axial width in the TF is not a concern, as a thin sample’s scattering potential spectrum in *k*-space is invariant in the *k**z* direction (i.e., the Fourier transform of a delta function is a constant). This 2D limiting case is known variously as quantitative phase imaging (QPI ), in which the 3D structure of the Ewald sphere can be ignored and regarded as circles in the *k**x**k**y* plane. Although holography has poor axial sectioning (Fig. [fig:TFPSFoverview]), because the full 2D field of the scattered wave is measured, we can use the Fresnel diffraction kernel to digitally propagate the field to, in principle, any axial position within the 3D sample. While out-of-plane features still appear, this property of holography has an interesting implication – in theory, it doesn’t matter where the camera is placed after the sample, whether at an image plane, Fourier plane, or directly next to the sample (in practice, putting the camera at an image plane is better to improve SNR). As we will see, this property implies that, at first glance, OCT should not have a limited depth of focus, which we discuss in Sec. [refocus] below. Diffraction tomography (multi-angle holography) ----------------------------------------------- Diffraction tomography takes holography a step further and uses angular diversity to synthesize a wider TF. One strategy is to use a fixed sample and detector, but to vary the illumination angle. Another is to have a fixed detector and illumination geometry, but to rotate the sample. These strategies can be implemented in transmission or reflection mode, with different synthesized TFs, which are depicted in Fig. [fig:TFPSFoverview]. Clearly, with angular diversity, the achievable TFs are far more substantial than with holography. We also note that reflective geometries tend to generate band-pass TFs, while transmissive geometries tend to generate low-pass TFs. This means that transmissive geometries are better suited for measuring quantitative RI values, while reflective geometries are generally only sensitive to changes in RI. Finally, just as with holography, this *k*-space analysis makes no reference to the depth of focus of the imaging lens, and thus the reconstruction volume is not theoretically limited when using high-NA objectives, but rather limited by SNR and the validity of the first Born approximation. [fig:FFOCTTFPSF] Full-field OCT -------------- FF-OCT uses a fixed, reflective imaging geometry from the same aperture, but uses a broadband source instead of monochromatic illumination. As with the previous examples, the interference pattern is recorded with a 2D camera. Here, we will assume a wavelength-swept source system to simplify the explanation, but the same analysis holds for a time-domain FF-OCT system, which can be conceptually decomposed to monochromatic plane waves (cf., Fourier optics). Thus, FF-OCT can be thought of as performing reflective holography at multiple wavelengths. The Ewald sphere has a different radius for different wavelengths (i.e., $|\mathbf{k\_{illum}}|=2\pi/\lambda\_{illum}$), so a continuous sweep synthesizes a continuous 3D band-pass (Fig. [fig:TFPSFoverview]). More precisely, given a Gaussian spectrum centered at *k*0 with a standard deviation width parameter of *σ**k*, and an imaging objective with $\mathrm{\mathit{NA}}=\sin(\sqrt{2\ln{2}}\sigma\_{\theta})$, defined such that the half width at half maximum (HWHM) collection angle is $\sqrt{2\ln{2}}\sigma\_{\theta}$, the TF of OCT is $$H\_{\mathrm{\mathit{FFOCT}}}(k\_x, k\_y, k\_z)\propto\exp\left(-\frac{(k\_r-2k\_0\cos(k\_\theta))^2}{8\sigma\_k^2\cos^2(k\_\theta)}\right)\exp\left(-\frac{2k\_\theta^2}{\sigma\_\theta^2}\right), \label{OCT\_TF}$$ where $k\_r=\sqrt{k\_x^2+k\_y^2+k\_z^2}$ and *k**θ* = cos− 1(*k**z*/*k**r*) are the 3D *k*-space coordinates in spherical coordinates (the azimuthal angle is not needed because the TF is symmetric about the *k**z*-axis). Although the two exponential factors in Eq. [OCTTF] are coupled, the first factor is more directly related to the axial resolution, while the second factor is more related to the lateral resolution. Thus, the FF-OCT TF is a solid angle with a half angle of *σ**θ*/2 (i.e., half that of the objective) and centered at *k**z* = 2*k*0, where *k*-space theory recovers the factor of 2 attributed to the round trip of the input beam using conventional OCT theory. One caveat with Eq. [OCTTF] is that in reflection, only the upper hemisphere of the Ewald sphere is detected, as the lower hemisphere corresponds to transmission in the opposite direction (i.e., the  − *k**z*-axis). Example TFs are plotted in Fig. [fig:FFOCTTFPSF]. In the low-NA limit (*σ**θ* → 0), Eq. [OCTTF] can be approximated by $$H\_{\mathrm{\mathit{FFOCT}}}(k\_x, k\_y, k\_z)\approx \exp\left(-\frac{(k\_z-2k\_0)^2}{8\sigma\_k^2}\right)\exp\left(-\frac{2k\_{xy}^2}{\sigma\_{k\_{xy}}^2}\right), \label{OCT\_TF\_approx}$$ where $k\_{xy}=\sqrt{k\_x^2+k\_y^2}\approx2k\_0k\_\theta$ and *σ**k**x**y* = 2*k*0*σ**θ* (Fig. [fig:FFOCTTFPSF], first row). In this limit, the TF is elliptical and therefore separable into its axial and lateral components, both of which are Gaussian. Thus, we can compute the axial and lateral PSFs analytically, given the Fourier transform pair, $\exp(-x^2/(2\sigma^2)) \overset{\mathcal{F}}{\leftrightarrow}\exp(-\sigma^2k^2/2)$. The axial PSF is thus $$\label{axial\_PSF} \mathrm{\mathit{psf}}\_z(z)\propto \exp\left(-\frac{z^2}{2\sigma\_z^2}\right)\exp(j2k\_0z),$$ where the axial Gaussian width parameter and the axial resolution are given by $$\label{axial\_resolution} \sigma\_z=\frac{1}{2\sigma\_k} \implies \delta z\_{\mathrm{\mathit{FWHM}}}=2\sqrt{2\ln(2)}\sigma\_z=\frac{\sqrt{2\ln(2)}}{\sigma\_k}=\frac{2\ln(2)}{\pi}\frac{\lambda\_0^2}{\Delta \lambda}\approx 0.44\frac{\lambda\_0^2}{\Delta \lambda},$$ where Δ*λ* is the FWHM bandwidth of the source in wavelength and *λ*0 is the center wavelength. This *k*-space derivation of the axial resolution is consistent with conventional OCT theory. Note that Eq. [axialPSF] is identical to the complex coherence function, except for a factor of 2 in the argument of the complex exponential. Similarly, if we analyze the lateral component of Eq. [OCTTFapprox], we obtain the lateral PSF, $$\label{lateral\_psf} \mathrm{\mathit{psf}}\_{xy}(x,y)\propto \exp\left(-\frac{x^2+y^2}{2\sigma\_{xy}^2}\right),$$ with lateral Gaussian width parameter and lateral resolution $$\label{lateral\_resolution} \sigma\_{xy}=\frac{2}{\sigma\_{k\_{xy}}}=\frac{1}{k\_0\sigma\_\theta}\implies \delta xy\_{\mathrm{\mathit{FWHM}}}=2\sqrt{2\ln(2)}\sigma\_{xy}= \frac{2\ln(2)}{\pi} \frac{\lambda\_0}{\mathrm{\mathit{NA}}} \approx 0.44\frac{\lambda\_0}{\mathrm{\mathit{NA}}},$$ also consistent with conventional OCT theory. We emphasize that this separability between and the availability of analytical expressions for the axial and lateral components are only possible when the approximation in Eq. [OCTTFapprox] is valid (i.e., low NAs). In general, however, the FF-OCT TF is governed by Eq. [OCTTF], in which there is a coupling between the axial and lateral dimensions. In other words, the mechanisms for axial and lateral resolution are *not* independent. Interestingly, note the parallels in the prefactors in the Eqs. [axialresolution] and [lateralresolution] when expressed in terms of wavelength. Thus, in addition to the usual interpretation of the axial resolution being governed by the source bandwidth, we can interpret the the lateral resolution as governed by an angular bandwidth. This further suggests that the general 3D *k*-space framework treats axial and lateral resolution on equal footing. The consequence of the band-pass nature of this TF is that OCT is only sensitive to rapid changes in the scattering potential or RI primarily in the axial direction. Quantitative RI values are close to the *k*-space origin and thus under normal circumstances, OCT cannot measure absolute RI. OCT also cannot detect gradual RI gradients (e.g., gradient index lenses) as these manifest as low-frequency components outside of the band-pass. Likewise, while horizontal edges (i.e., parallel to the *x**y* plane) are visible in OCT, vertical edges are not (see Sec. [coherencegating] for a more detailed explanation). More generally, the sensitivity of OCT to tilted edges depends on the magnitude Fourier components within the OCT band-pass. [fig:tiltedillum] Full-field OCT with off-axis illumination ----------------------------------------- As a stepping stone to understanding how confocal microsocpy and point-scanning OCT fit in this *k*-space framework, we first analyze FF-OCT using different illumination angles relative to the objective position. In other words, we generalize Eq. [OCTTF], which assumes $\mathbf{k\_{illum}}=(0,0,-k\_0)$, to allow $\mathbf{k\_{illum}}=(k\_{illum, x},k\_{illum, y},k\_{illum, z})$ to specify an arbitrary plane wave. For convenience, we define the following quantities: * $\mathbf{k\_{illum}}=(k\_{illum,r}=k\_0,k\_{illum,\theta},k\_{illum,\phi})$, the spherical coordinate representation (where *θ* and *ϕ* correspond to inclination angle and azimuthal angles, respectively), * $\mathbf{k\_{illum,\theta/2}}=(k\_0,k\_{illum,\theta/2},k\_{illum,\phi})$, * $k'\_{\theta}=\cos^{-1}\left(\frac{\mathbf{k}\cdot\mathbf{k\_{illum}}}{k\_rk\_0}\right)$, the angle between **k** (the 3D *k*-space coordinates) and $\mathbf{k\_{illum}}$, and * $k'\_{\theta,1/2}=\cos^{-1}\left(\frac{\mathbf{k}\cdot\mathbf{k\_{illum,1/2}}}{k\_rk\_0}\right)$, the angle between **k** and $\mathbf{k\_{illum,1/2}}$. Then, the FF-OCT TF for an arbitrary illumination wavevector $\mathbf{k\_{illum}}$ is given by $$H\_{\mathrm{\mathit{FFOCT}}}\left(k\_x, k\_y, k\_z;\mathbf{k\_{illum}}\right)\propto\exp\left(-\frac{(k\_r-2k\_0\cos(k'\_\theta))^2}{8\sigma\_k^2\cos^2(k'\_\theta)}\right)\exp\left(-\frac{2k'^2\_{\theta,1/2}}{\sigma\_\theta^2}\right), \label{OCT\_TF\_general}$$ noting that this equation reduces to Eq. [OCTTF] for $\mathbf{k\_{illum}}=(0,0,-k\_0)$. Fig. [fig:tiltedillum] shows example TFs for several illumination angles and different bandwidth/NA combinations. In particular, rotating the illumination causes the TF to shift laterally in *k*-space, similarly to the case monochromatic diffraction tomography Fig. [fig:TFPSFoverview], except with a change in the shape of the FF-OCT TF due to the difference in curvature of the Ewald spheres for different illumination colors. Another reason why this imaging geometry might be useful to consider is that it is another approach to coherently enhance the lateral resolution of FF-OCT, which to our knowledge has not yet been demonstrated experimentally. We compare this approach to ISAM in Sec. [coherentapproaches]. [fig:fourTFshighNA] [fig:fourTFslowNA] Point-scanning OCT ------------------ For the coherent imaging methods considered up until now, we have assumed an array detector (e.g., a 2D camera) with plane wave illumination. However, the most common implementation of OCT is with point-scanning, that is, raster-scanning a focused point and using a point detector, thereby encoding space in time. While the FDT (Eq. [FDTeq]) deals with plane wave illumination, we can synthesize any input illumination wavefront given its angular spectrum (the complex field amplitude as a function of the illumination *k*-vector). For this analysis, we assume a focused Gaussian beam with the same illumination NA as imaging NA, as typically enforced by an input fiber acting as the illumination source and collection aperture. The angular spectrum at the focus is given by $$E(k\_x, k\_y)=E\_0\exp\left(-\frac{k\_x^2+k\_y^2}{k\_0^2\mathrm{\mathit{NA}}^2}\right),$$ with the caveat that this equation includes evanescent fields, which, however, are negligible given that the NA is typically small in OCT. Thus, to calculate the TF of Gaussian-beam-illuminated, point-scanning OCT, we perform a superposition of different tilted Ewald spheres (Eq. [OCTTFgeneral]), weighted by the complex amplitude of the plane wave given by the illumination angular spectrum: $$\label{TF\_OCT\_point\_scan} \begin{split} &H\_{OCT}^{scan}(k\_x,k\_y,k\_z)=\\ &\iint\displaylimits\_{k\_{i,x}^2+k\_{i,y}^2<k\_0^2} E(k\_{i,x},k\_{i,y})H\_{\mathrm{\mathit{FFOCT}}}\left(k\_x, k\_y, k\_z;k\_{i,x},k\_{i,y},\sqrt{k\_0^2-k\_{i,x}^2-k\_{i,y}^2}\right)dk\_{i,x}dk\_{i,y}, \end{split}$$ where integration is over the domain of non-evanescent, propagating waves. Thus, point-scanning OCT (and coherent confocal microscopy) are similar to performing diffraction tomography over the equivalent angular range. The integral in Eq. [TFOCTpointscan] is akin to a convolution integral, except that the TF changes shape as the illumination angle is swept. Thus, it does not in general yield an analytical solution, so it needs to be evaluated numerically (Fig. [fig:fourTFshighNA]). We can see that point-scanning OCT has ~$\sqrt{2}$ improvement in lateral resolution over FF-OCT with the same NA due to confocal gating (note, however, that the frequency cutoff doubles, which is clearer when considering a non-Gaussian TF with a hard cutoff), as well as an improvement in axial resolution for larger NAs, as we are entering the optical coherence microscopy (OCM) regime in which the axial confocal gate and coherence gate become comparable to each other. Note also in these higher-NA cases that the radius of curvature of the point-scanning OCT TF appears to have increased compared to the FF-OCT TF, whose implications we discuss in Sec. [refocus]. Fig. [fig:fourTFslowNA] shows the same comparison but at a lower NA, at which the OCT TFs are more separable into their axial and lateral components. Here, we again see the improvement in lateral resolution of point-scanning OCT over FF-OCT. However, the axial resolution is similar, due to the weaker confocal gate than in the higher-NA case. We also note that Eq. [TFOCTpointscan] is general and can be used for any input illumination profile, as long as the angular spectrum is known. In particular, we can use Eq.[TFOCTpointscan] in later sections to derive the TFs for other spatial scanning techniques, such as confocal microscopy (Sec. [confocal]) and LF-OCT (Sec. [linefield]). We can also simulate the effects of aberration on the TFs by imparting a 2D phase profile in the angular spectrum. Confocal microscopy ------------------- We can also use Eq. [TFOCTpointscan] to compute the TF of coherent confocal microscopy by setting *σ**k* to a very small value (i.e., the monochromatic limit). The results are shown in Figs. [fig:fourTFshighNA] and [fig:fourTFslowNA], which are consistent with previous derivations of TFs for confocal microscopy assuming circular apertures, which describe the TF as the convolution of two Ewald spheres. Note that although Eq. [TFOCTpointscan] is not in general a convolution integral, it is for each individual wavenumber (*σ**k* → 0). In both transmission and reflection, we observe the $\sqrt{2}$ lateral resolution enhancement, but we can also see the optical sectioning effects at higher NAs. Thus, we can conclude that the origin of axial confocal gating is the curvature of the Ewald sphere, which is only appreciable at high NAs, as the axial extent of the TF would not change if there were no curvature. This is why the depth of focus is not significantly reduced in point-scanning OCT, which typically uses low NA beams that do not have significant curvature in *k*-space. Finally, we can see the change in radius of curvature of the TF more clearly in confocal microscopy than in point-scanning OCT, which we discuss in Sec. [refocus]. [fig:LFOCT] Line-field OCT -------------- In LF-OCT, the illumination beam is focused only in one dimension. Thus, LF-OCT behaves like a point-scanning OCT system along the focused dimension, while behaving like a FF-OCT system along the unfocused dimension. In other words, we can write the angular spectrum of the LF-OCT illumination at the focus as $$\label{LFOCT\_angular\_spectrum} E(k\_x, k\_y)=E\_0\exp\left(-\frac{k\_x^2}{k\_0^2\mathrm{\mathit{NA}}^2}\right)\delta(0, k\_y),$$ where *δ*(*x*, *y*) is the 2D Dirac delta function, and the illumination beam is focused in the *x* dimension, but not *y*. It is tempting to regard the *x* and *y* dimensions of the TF of LF-OCT as separable into the TFs for FF-OCT and point-scanning OCT, respectively; however, upon inspection of the Eq. [TFOCTpointscan], we note that the integral is not separable because while Eq.[LFOCTangularspectrum] is separable, Eq. [OCTTFgeneral] is not separable, except at low NAs. In the general case, each wavenumber component of the TF of LF-OCT has the shape of a horn torus (a torus without a hole in the center) (Fig. [fig:LFOCT]); that is, the surface described by revolving about the *k**y*-axis a circle with radius *k**i**l**l**u**m* and centered at *k**i**l**l**u**m* in the *k**x**k**z*-plane (i.e., the cross-section of the Ewald sphere). [fig:resolutiondoubling] Full-field OCT with partially spatially coherent illumination ------------------------------------------------------------- Up until now, we have considered fully spatially coherent plane-wave illumination when deriving the OCT TFs. However, in FF-OCT it is common to use spatially incoherent illumination to mitigate cross-talk issues due to full-field detection, which arise from multiply scattered photons that travel to neighboring region when imaged onto an array detector while remaining coherent with the reference beam. As coherence and incoherence are the extremes of a continuum, we examine the effects of partial coherence as the general case. To do so, we model a partially coherent source as a 2D source with a non-zero lateral extent, consisting of a continuous distribution of point sources that are mutually incoherent, with an intensity distribution, *I**s*(*x*, *y*). If this extended source is placed at the focal plane of collimating lens, which does not affect the coherence properties of the source, all the points will be collimated into non-interfering plane waves, propagating to the sample at angles given by the positions of the corresponding point sources. In the coherent limit of this partially spatially coherent source where the extended source is a single point source (*I**s*(*x*, *y*) ∝ *δ*(*x*, *y*)), after the collimating lens, we obtain a single perfectly coherent plane wave, and we recover FF-OCT with coherent illumination. In the incoherent limit of an infinitely wide source (*I**s*(*x*, *y*) ∝ 1), we obtain a superposition of many mutually incoherent plane waves with continuous angular coverage allowed by the collimating lens. Essentially, each mutually incoherent point source can be thought of as an independent channel or mode across which FF-OCT at a particular illumination angle is performed, where the larger the source, the more channels and therefore the wider angular range. Thus, FF-OCT with partially spatially coherent illumination has a TF similar to that of point-scanning OCT (lower right panel of Fig. [fig:fourTFshighNA]), which obtains illumination angular diversity through a focused beam (Fig. [fig:resolutiondoubling] compares three ways of attaining this TF discussed in this paper). Thus, Eq. [TFOCTpointscan] can be used to compute the TF of FF-OCT with partially coherent light, adjusting the angular spectrum *E*(*k**x*, *k**y*) according to the incoherent source extent. Transillumination OCT --------------------- Instead of reusing the illumination path as the detection path, transillumination OCT features a separate detection path on the other side of the sample. The detection channel is typically 180-opposite so that all of the light is collected in the absence of a sample (i.e., a bright-field configuration). Transillumination OCT is almost always implemented with focused illumination with point scanning rather than with unfocused illumination with full-field detection, the reason for which becomes clear when we analyze their TFs using the FDT. In fact, for each wavenumber within the broadband OCT source, the TF is the same as that of transmission DT with equal illumination and detection NAs, which is a low-pass, lateral-resolution-enhanced, doughnut/toroid-shaped filter (Figs. [fig:TFPSFoverview] and [fig:transillumination]a). This is because both have transmissive imaging geometries with angular illumination diversity, achieved either sequentially via plane wave angle sweeping or simultaneously via focused illumination (i.e., Fig. [fig:resolutiondoubling], except in transmission). As such, transillumination OCT with focused illumination is sensitive not only to the average RI of the sample (i.e., the DC component of the RI or scattering potential), but also to low-frequency RI variation and therefore, like transmission DT, has some axial resolvability, albeit limited. However, without focused illumination, the axial resolution is worse (cf. Fig. [fig:TFPSFoverview], first vs. second columns). Transillumination OCT is thus perhaps an exception among techniques with “OCT” in its name, as the only one with a TF centered at the **k** = (0, 0, 0) rather than **k** = (0, 0, 2*k*0). As such, elsewhere in this document, we will assume, when referring to an OCT TF, that it is a band-pass without explicitly clarifying that it is in reflection-mode. For transillumination OCT, each wavenumber accesses almost the same information about the sample, with the equivalent DT TF isotropically scaled in *k*-space in proportion to the wavenumber, *k*0. At first glance, it would appear that the wavenumber diversity does not add much benefit in terms of TF volume in 3D *k*-space, especially for FF-OCT with coherent illumination, as all the Ewald spheres are tangent to each other at the *k*-space origin. While this may be true in the first Born approximation, in the presence of multiple scattering, wavenumber diversity has proven to be useful to discriminate the weakly or singly scattered fields (i.e., those that do obey the first Born approximation, corresponding to ballistic photons) from the multiply scattered fields, the latter of which will have propagated over a longer distance than the former. Furthermore, focused illumination with a confocal pinhole offers not only lateral resolution enhancement, but also an additional mechanism for spatially filtering multiply scattered light. This confocal gate is absent in the hypothetical full-field transillumination OCT, which would thus suffer from the same cross-talk issue as in reflective FF-OCT. However, full-field transillumination OCT can theoretically achieve the same confocality with a spatially incoherent source or with sequential multiangle illumination, just as in reflective FF-OCT (Fig. [fig:resolutiondoubling]); however, these strategies, to our knowledge, have not been reported and would be interesting areas of future investigation. Multi-angle transillumination OCT --------------------------------- Since transillumination OCT has anisotropic resolution, with better lateral than axial resolution, recently researchers have incorporated angular diversity to obtain isotropic resolution, limited by the original lateral resolution. Over a decade ago, proofs of concept were demonstrated on phantom samples, and a similar reflection-mode concept was demonstrated even earlier. However, it wasn’t until recently that it was demonstrated on optically thick biological samples as a technique referred to as optical coherence projection tomography (OCPT). In analogy with X-ray CT or projection tomography, multi-angle transillumination OCT employs relative rotation between the illumination/detection path pairs and the sample (though all existing approaches have employed pure sample rotation). The resulting sinograms are then used in the usual backprojection algorithm to generate the isotropic-resolution reconstruction. The key benefit of using transillumination OCT (with confocal detection) over monochromatic transmission confocal microscopy or DT, all three of which theoretically have similar TFs, is the ability to reject mulitply scattered fields, which are much more difficult to model. In particular, for each angle, as mentioned in the earlier section, the transillumination OCT allows isolation of the earliest-arriving photons, corresponding to singly scattered fields (assuming there is enough SNR to detect them, which depends on the source brightness and optical thickness of the sample), which obey the first Born approximation and whose TF can thus be modeled in our *k*-space theory as the doughnut/toroid-shaped low-pass structure. Thus, with data acquisition over 180or 360, the synthesized TF is a spherical ball with a cutoff radius of 2*k**m**a**x**N**A* (Fig. [fig:transillumination]b). Note, however, the multiple scattering information is not completely discarded in multi-angle transillumination OCT. Specifically, in addition to scattering potential or RI information about the sample, transillumination OCT also allows measurement of integrated attenuation – while the former is encoded in the time-of-flight of the ballistic peak, the latter is encoded in the amplitude of the peak. Thus, with multi-angle information, one can reconstruct not only an RI map, but also a map of the total attenuation coefficient, as was demonstrated in OCPT. The attenuation, however, can be due to both scattering and absorption. [fig:transillumination] Effects of multiple scattering on transfer functions ---------------------------------------------------- Clearly, OCT does not cease to be useful when the first Born approximation is no longer satisfied. As Fig. [fig:FDT] suggests, the effects of multiple scattering become significant when the field incident at a structure deeper within the sample is no longer accurately approximated by the incident illumination. Effectively, this means that multiple scattering simply aberrates or otherwise distorts the incident field at greater sample depths, such that the first Born approximation may be considered slab-wise valid (although not jointly valid). Theoretically, we could do a plane-wave decomposition of this aberrated field, as we did for point-scanning OCT (Sec. [pointscanningOCT]), confocal microscopy (Sec. [confocal]) and LF-OCT (Sec. [linefield]), and thus as long as the beam maintains a forward-scattering bias, such that the angular spectrum is concentrated around DC, the TFs previously derived still approximately hold. As the multiple scattering becomes more severe to the point that the beam becomes completely random, with no memory of the original illumination, the angular spectrum is spread out with random phase, such that the multi-angle TFs in Eq. [TFOCTpointscan] would combine in a phase-unstable manner (much like how phase instabilities across lateral scan positions for ISAM prevent a faithful depth-invariant resolution reconstruction). *k*-space interpretation of coherence gating -------------------------------------------- Coherence gating is the ability to discriminate different scattering trajectories based on their optical path lengths using a broadband or low-temporal-coherence source. It is perhaps most intuitive to think about coherence gating in TD-OCT, in terms of photons propagating through and reflecting off of layers of a discrete, multi-layer structure. Under this model, interference between a reflection from a particular layer in the sample and reference beam only occurs if their path lengths are matched, with all other reflections “gated” out due to lack of temporal coherence with the reference beam. While this simplified picture is relatively straightforward to understand, we now generalize and interpret coherence gating in *k*-space in arbitrarily spatially inhomogeneous media. OCT is only sensitive to certain distributions of scattering potential or RI – those with spatial frequency content lying in the OCT TF. In particular, discrete boundaries in the sample are discontinuities in the RI distribution of the sample that have spatial frequency content everywhere in *k*-space. Consider the following examples of types of discontinuities and their Fourier transforms (in 2D and neglecting some scale factors for simplicity): 1. Point reflector: $$\label{point\_reflector} \delta(x-x\_0,z-z\_0)\overset{\mathcal{F}}{\leftrightarrow}\exp(-j(k\_xx\_0+k\_zz\_0)).$$ 2. Horizontal RI boundary: $$\label{horizontal\_boundary} H(z-z\_0)\overset{\mathcal{F}}{\leftrightarrow}\left(\frac{1}{\pi k\_z}+\delta(k\_z)\right)\exp(-jk\_zz\_0),$$ where *H* is the Heaviside step function. 3. Small object: 1. 1D rectangular object: $$\label{rect} \text{rect}((z-z\_0)/w )\overset{\mathcal{F}}{\leftrightarrow}\text{sinc}(wk\_z)\exp(-jk\_zz\_0).$$ 2. Circular (or spherical) object: $$\label{circ} \text{circ}((x-x\_0)/w,(z-z\_0)/w)\overset{\mathcal{F}}{\leftrightarrow}\text{jinc}\left(w\sqrt{k\_x^2+k\_z^2} \right)\exp(-j(k\_xx\_0+k\_zz\_0)),$$ where jinc(*x*) = *J*1(*x*)/*x* and *J**α* is the Bessel function of the first kind. An actual biological sample may be a superposition of these examples, modeling RI discontinuities at, for example, cell or organelle boundaries. Note that within an OCT TF, a small band-pass centered at **k** = (0, 0, 2*k*0), all of these examples appear as sinusoidal fringes in *k*-space. This is obvious for a point reflector (Eq. [pointreflector]). For a horizontal RI boundary (Eq. [horizontalboundary]), while the *k*-space response is a sinusoid with a decaying amplitude, within a small neighborhood surrounding *k**z* = 2*k*0, we have that $$\frac{\exp(-jk\_zz\_0)}{k\_z}\approx \frac{\exp(-jk\_zz\_0)}{2k\_0}.$$ For a rect object (Eq. [rect]), we can make the same type of approximation, so that $$\begin{split} \text{sinc}(wk\_z)\exp(-jk\_zz\_0) \approx \frac{\sin(wk\_z)}{2wk\_0}\exp(-jk\_zz\_0)\\ =\frac{j}{4wk\_0}\left(\exp(jk\_z(-z\_0-w)-\exp(jk\_z(-z\_0+w)\right). \end{split}$$ In other words, we have two fringes, one with frequency *z*0 − *w*, the other *z*0 + *w*, corresponding to the reflections of the front and back boundaries of the rect function. The same argument can be made for a circular object (Eq. [circ]), if we approximate the jinc function with a decaying sine. On the other hand, constant or slow-varying components of the RI distribution are not detected by reflection-mode OCT as they don’t produce *k*-space sinusoids that intersect with its TF. These sinusoids are analogous to those we see in conventional 1D FD-OCT processing; however, in our *k*-space analysis, the sinusoids permeate throughout 3D *k*-space. We can thus interpret coherence gating in *k*-space as the orthogonality of 3D fringes of different frequencies. That is, if one takes the inner product of the Fourier transform of the scattering potential with a desired fringe, exp( − *j*(*k**x**x*0 + *k**y**y*0 + *k**z**z*0)), across 3D *k*-space, only the fringe matching its frequency (i.e., 3D position), (*x*0, *y*0, *z*0), will produce a non-zero value, thus “gating” out the other frequencies/3D positions. This inner product is precisely the Fourier transform. Dispersion and aberrations ========================== Dispersion refers to the wavenumber-dependent changes in the refractive index of materials, such as glass, water, and biological tissue. If the amount of dispersion in the OCT sample and reference arms differs, the axial resolution of OCT images degrades. This axial degradation directly relates to the theory of ultra-short pulse broadening upon propagation through dispersive media (for a given source bandwidth, the temporal pulse width relates to the OCT axial resolution by *δ**z*/*c*). To compensate for dispersion mismatch, researchers either attempt to physically balance the amount of dispersion in both arms in hardware, or digitally compensate by multiplying the interferogram by a phase factor, exp(*j**ϕ*(*k*)), where *ϕ*(*k*) corresponds to the dispersion curve and is often expanded as a low-order polynomial. While dispersion compensation typically corrects dispersion due to the imaging system (e.g., the lenses and optical fibers), in principle the sample being probed may also be a source of dispersion, which we discuss next. More generally, since we are presenting a full 3D *k*-space theory, we will also discuss the generalized 3D pupil, a generalization of dispersion, which corresponds primarily to the axial dimension, to include lateral “dispersion,” more commonly referred to as aberrations. In other words, just as the distinction between source spectrum vs. angular spectrum has been blurred in *k*-space theory, we can also put dispersion and aberrations on equal footing. Note that just as a distinction between imaging system-induced vs. sample-induced dispersion can be made, a similar distinction can be made between system-induced aberrations vs. sample-induced aberrations, where the former can analogously be accounted for through the angular spectrum of the input illumination. Effects of dispersion on OCT transfer functions ----------------------------------------------- While the scattering potential (Eq. [scatteringpotential]), the quantity of interest in the *k*-space framework, is defined for a single wavenumber, *k*0, OCT uses multiple wavenumbers that could exhibit different RI and therefore scattering potentials in the sample of interest. Previous *k*-space-based analyses of OCT have also largely ignored this scattering potential dispersion, in part because sample-induced dispersion in most biological tissue samples is negligible, except for the largest bandwidths used in submicrometer-axial-resolution OCT. To examine the effects of scattering potential dispersion, analogous to how dispersion is conventionally handled to account for imaging system-induced dispersion (i.e., not sample-induced), we can perform a series expansion of the scattering potential. To make this analysis more tractable, we assume separability between the spatial and frequency dependence of the scattering potential, *V*(**r**, *k*) = *V*(**r**)exp(*j**ϕ*(*k*)),  where *ϕ*(*k*) = 0 for a dispersion-less medium and *V*(**r**) corresponds to Eq. [scatteringpotential]. The assumption behind the separability means that the dispersion curve does not depend on the spatial location (axially and laterally), which is a reasonable approximation when imaging biological tissue samples, which are largely composed of water. In fact, this assumption is often made in OCT, as, typically, global dispersion compensation values are used to correct for system dispersion, and sample dependent dispersion is ignored. There are a few works, however, that correct for axially or laterally dependent dispersion. We can thus expand *ϕ*(*k*), $$\label{V\_dispersion} \phi(k)= \phi(k\_0)+ \frac{d\phi}{dk}\Bigr\rvert\_{k\_0}(k-k\_0)+ \frac{1}{2}\frac{d^2\phi}{dk^2}\Bigr\rvert\_{k\_0}(k-k\_0)^2+ \frac{1}{6}\frac{d^3\phi}{dk^3}\Bigr\rvert\_{k\_0}(k-k\_0)^3+ ...\.$$ For a Fourier-domain OCT system, sampling *V*(**r**, *k*) at various *k*, in principle we would use Eq. [Vdispersion] as a normalization factor to extrapolate the value of *V*(**r**, *k* = *k*0). This procedure is analogous to digital dispersion compensation conventionally done in OCT to correct for system-induced dispersion ; thus, Eq. [Vdispersion] compensates for both system-induced and sample-induced dispersion. A corollary of this observation is that as long as the first Born approximation is valid, we do not need to account for depth-dependent dispersion. Nevertheless, there are a few works that demonstrate techniques for depth-dependent dispersion compensation, implying that they were considering samples thicker or with more RI variation than supported by the first Born approximation. First-order dispersion ---------------------- While the second-order and higher-order terms are typically associated with axial resolution degradation, the zeroth and first-order terms are associated with axial resolution enhancement. To see this, assume a linear dispersion curve of the phase RI, *n*(*k*) = *n*0 + *C**k*,  where *C* is constant with respect to *k*, with the corresponding group refractive index, $$n\_g(k)=n(k)+k\frac{dn(k)}{dk}=n\_0+2Ck.$$ The group index is also frequently specified as a function of the vacuum wavelength (i.e., *n**g*(*λ*) = *n*(*λ*) − *λ**d**n*(*λ*)/*d**λ*) or the frequency (i.e., *n**g*(*ω*) = *n*(*ω*) + *ω**d**n*(*ω*)/*d**ω*). Here, *k* = *ω*/*c* refers to the *vacuum* wavenumber, as opposed to the medium wavenumber *k**m* = *n**k*, so that the *n* factor is observed to have a lengthening effect on the *k*-vectors, therefore increasing the curvature of the Ewald spheres and having a resolution-enhancing effect. In particular, consider the medium *k*-bandwidth, Δ*k**m* = *n*(*k*2)*k*2 − *n*(*k*1)*k*1,  where *k*1 and *k*2 define the extent of the source spectrum (e.g., FWHM). Substituting in Eq. [lineardispersion] and noting that *k*1 + *k*2 = 2*k*0, we obtain Δ*k**m* = Δ*k*(*n*0 + 2*C**k*0) = Δ*k**n**g*(*k*0),  which states that the vacuum bandwidth, Δ*k*, is scaled by the medium’s group index at the center wavenumber or wavelength. This is consistent with the well-known result that the OCT axial resolution inside the medium, as compared to that in air or a vacuum, is improved by a factor of the group index at the center wavelength. The general effect on the 3D OCT TF can be appreciated by considering the red and blue Ewald spheres in Fig. [fig:FFOCTTFPSF], whose radii are scaled by their respective phase RIs. Generalized 3D pupil and numerical dispersion compensation ---------------------------------------------------------- Given that the *k*-space framework is agnostic to the distinction between the axial and lateral dimensions of the TF, similarly dispersion compensation in general should be considered jointly with 2D lateral spatial aberrations, a consideration which has been referred to as a generalization of numerical dispersion compensation. Indeed, 2D pupil aberrations are commonly corrected numerically in Fourier ptychographic microscopy, a coherent imaging technique that uses intensity-only images from multi-angle illumination to computationally reconstruct thin samples. In such applications, applying a 2D filter (i.e., multiplying by exp(*j**ϕ*(*k**x*, *k**y*))) is sufficient, due to the thin-sample approximation. In OCT, however, instead of operating on the interferogram, which is a 1D function of wavenumber (i.e., multiplying by exp(*j**ϕ*(*k*))), as is done in conventional numerical dispersion compensation, we can operate on the 3D *k*-space representation of the sample by multiplying by a combined phase filter that is a function of all three *k*-space coordinates (i.e., multiplying by exp(*j**ϕ*(*k**x*, *k**y*, *k**z*))). Such a generalized phase filter may be regarded as a generalized 3D pupil (where the “pupil” terminology originates from lateral aberration considerations), and is employed in computational adaptive optics (CAO) to correct for both monochromatic and chromatic aberrations. Although the weak scattering first Born approximation may limit the amount of such aberrations, these methods are still applicable in OCT, which in practice is often operated outside of the first Born approximation, or for correcting system-induced aberrations (e.g., astigmatism ). In weakly scattering or aberrating samples, the separability assumption in Eq. [separable] may still be met, such that a 1D dispersion compensation phase factor and a 2D pupil function for aberrations may be employed independently, but in the general case the generalized 3D pupil may not be separable. [generalizeddispersion] Limited depth of focus of OCT and computational refocusing with ISAM ==================================================================== One of the challenges of OCT is the trade-off between the lateral resolution and depth of focus. As a result, many practical OCT systems accept lateral resolutions on the order of 10 m or greater in order to obtain the hundreds-of-micron- to millimeter-scale depths of focus necessary for imaging practical samples. However, this seems to be at odds with the *k*-space framework, as the validity of TFs and PSFs requires shift invariance according to linear systems theory. Furthermore, we also established in Sec. [holography] that since we are making complex field measurements, we should theoretically be able to digitally propagate the field to any plane, irrespective of the position of the focus. [fig:resampling] Importance of accounting for Ewald sphere curvature --------------------------------------------------- The reason why OCT has limited depths of focus is that the typical OCT reconstruction algorithm in FD-OCT is a 1D inverse Fourier transform across the wavenumber sweep dimension, as the lateral dimensions are sampled in real-space (Fig. [fig:resampling]). This is problematic because we have derived 3D TFs, which would require 3D inverse Fourier transforms for proper reconstructions. However, standard implementations of OCT acquire the lateral dimensions in the real space domain, and thus according to the FDT (Eq. [FDTeq]), we have the information corresponding to the 2D Fourier transform *along the Ewald sphere* and not across the *k**x**k**y* plane, the ideal situation. Thus, the wavenumber sweep dimension does not correspond to *k**z*, but rather a distorted version thereof. Essentially, the culprit for the limited depth of focus of standard OCT is neglecting to account for the curvature of the Ewald sphere. In other words, standard OCT processing makes the erroneous assumption of separability of the lateral and axial TFs – the higher the NA, the less valid this assumption, and thus the shorter the depth of focus. One procedure for correcting this distortion is to first take a 2D inverse Fourier transform across the two lateral dimensions, thus lifting the data back into 3D *k*-space (note that this step requires phase stability across lateral positions ). Next, for each wavenumber, we assign the information to the correct location in *k*-space along the Ewald sphere, in the case of FF-OCT (as we will see, the *k*-space surfaces for point-scanning and line-field OCT differ). In practice, resampling and interpolation are required. Finally, with all the information in the correct place in *k*-space, we take a 3D inverse Fourier transform to recover the depth-of-focus-independent resolution. In other words, the regions outside of the nominal depth of focus are refocused, at least to the extent allowed by the SNR. In the following sub-sections, we will show that this correction procedure based on the FDT is an alternate derivation of inverse scattering theory for OCT, first derived almost 15 years ago and named by its inventors as interferometric synthetic aperture microscopy (ISAM). As will be seen, the resampling equation for FF-OCT differs from that of point-scanning OCT due to the difference in effective curvature of their respective TFs, as described above. [fig:ISAM] Resampling for full-field OCT ----------------------------- Although historically inverse scattering for FF-OCT (and a similar principle, holoscopy ) postdates that for point-scanning OCT, we start with FF-OCT, whose resampling procedure is conceptually simpler as it operates directly with the Ewald sphere. We start with Eq. [OCTTF], the correct form of the TF for FF-OCT, which we wish to distort into a form that allows separability between the axial and lateral components assumed in standard OCT processing. Once we find this distorting operation, we can invert it to obtain the correction procedure. We only need to focus on the argument of the first exponential factor in Eq. [OCTTF], which contains a coupling between the axial and lateral components. Rewriting as $$\exp\left(\frac{-(k\_r-2k\_0\cos(k\_\theta))^2}{8\sigma\_k^2\cos^2(k\_\theta)}\right)=\exp\left(\frac{-\left(\frac{k\_r^2}{2k\_z}-k\_0\right)^2}{2\sigma\_k^2}\right),$$ we can see that a substitution $k\_{\mathrm{\mathit{FFOCT}},z}=k\_r^2/(2k\_z)$ gives the separable form assumed in OCT, and that the lateral coordinates do not need to be modified, $k\_{\mathrm{\mathit{FFOCT}},x}=k\_x$ and $k\_{\mathrm{\mathit{FFOCT}},y}=k\_y$. Note that $k\_{\mathrm{\mathit{FFOCT}},z}$ can be directly interpreted as the wavenumber sweep in swept-source OCT. Thus, the coordinate transformation is $$\label{FFOCT\_resample} \begin{split} &k\_x=k\_{\mathrm{\mathit{FFOCT}},x},\\ &k\_y=k\_{\mathrm{\mathit{FFOCT}},y},\\ &k\_z=k\_{\mathrm{\mathit{FFOCT}},z} \pm \sqrt{k\_{\mathrm{\mathit{FFOCT}},z}^2-k\_x^2-k\_y^2}. \end{split}$$ Note that the two coordinate transformations for *k**z* correspond to the top and bottom halves of the Ewald sphere (corresponding to reflective and transmissive geometries, respectively ), centered at $(0, 0, k\_{\mathrm{\mathit{FFOCT}},z})$ with radius $k\_{\mathrm{\mathit{FFOCT}},z}$, and is consistent with a previous derivation by Marks et al.. Thus, every wavenumber in the sweep is corrected according to its respective Ewald sphere. This coordinate change becomes insignificant for small NAs in reflection ($k\_z\rightarrow 2k\_{\mathrm{\mathit{FFOCT}},z}$, where again the factor of 2 accounts for the round trip trajectory). Fig. [fig:ISAM] compares reconstructions using standard OCT processing and using the coordinate resampling. After the 3D *k*-space coordinate transform of Eq. [FFOCTresample] and interpolation of the scattering potential spectrum onto a regular grid for ease of digital processing (i.e., fast Fourier transform), we can take a 3D inverse Fourier transform to obtain a reconstruction with depth-independent resolution (Fig. [fig:resampling]). For the sake of completeness, we should also multiply the volume element of the 3D inverse Fourier transform integral by the determinant of the Jacobian of coordinate change in Eq. [FFOCTresample]. Resampling for full-field OCT with off-axis illumination -------------------------------------------------------- We can generalize the resampling equations in Eq. [FFOCTresample] by making the same argument for Eq. [OCTTFgeneral] as we did for Eq. [OCTTF]. We obtain a forward mapping of $k\_{\mathrm{\mathit{FFOCT}},z}=k\_r^2k\_0/(2\mathbf{k}\cdot\mathbf{k\_{illum}})$, from which after inverting, we obtain the following coordinate transformation: $$\label{FFOCT\_resample\_general} \begin{split} &k\_x=\frac{k\_{\mathrm{\mathit{FFOCT}},z}k\_{illum,x}}{k\_0},\\ &k\_y=\frac{k\_{\mathrm{\mathit{FFOCT}},z}k\_{illum,y}}{k\_0},\\ &k\_z=\frac{k\_{\mathrm{\mathit{FFOCT}},z}k\_{illum,z}}{k\_0} \pm \sqrt{k\_{\mathrm{\mathit{FFOCT}},z}^2-k\_x^2-k\_y^2}, \end{split}$$ where we have taken advantage of the fact that the TF can be shifted to any position in 3D *k*-space while maintaining the real-space reconstruction up to a constant phase ramp (i.e., the Fourier shift theorem). This coordinate resampling corresponds to an Ewald sphere centered at $\mathbf{k\_{illum}}$ with radius *k*0, and thus reduces to Eq. [FFOCTresample] for an on-axis illumination. Resampling for point-scanning OCT --------------------------------- While the coordinate transformations for FF-OCT depend on the Ewald sphere, centered at (0, 0, *k**i**l**l**u**m*) with radius *k**i**l**l**u**m*, for point-scanning OCT, they depend on a sphere centered at the origin with radius 2*k**i**l**l**u**m*. This is a consequence of Eq. [TFOCTpointscan], whose integral with respect to the input illumination direction is effectively revolving the Ewald sphere about the origin, thus creating a new spherical surface with twice the radius. Thus we can deduce the resampling equations to be $$\label{OCT\_resample} \begin{split} &k\_x=k\_{\mathrm{\mathit{OCT}},x},\\ &k\_y=k\_{\mathrm{\mathit{OCT}},y},\\ &k\_z=\sqrt{4k\_{\mathrm{\mathit{OCT}},z}^2-k\_x^2-k\_y^2}, \end{split}$$ which are consistent with the derivation for ISAM for high NAs. However, we note that the information for a given wavenumber does not exclusively come from the spherical surface with radius 2*k**i**l**l**u**m*, as can be easily visualized in the case of monochromatic confocal microscopy with a high NA (Fig. [fig:fourTFshighNA]). Rather, the TF has high density along this sphere. Thus, alternative resampling curves may perform better, which can further be adjusted according to illuminations and pupils besides Gaussian (Sec. [resamplinggeneral]). Resampling for line-field OCT ----------------------------- As mentioned in Sec. [linefield], the TF of LF-OCT has a horn torus shape for each wavenumber component. Thus, the coordinate transformation equations describe the partial surface of a horn torus facing away from the center: $$\label{LFOCT\_resample} \begin{split} &k\_x=k\_{\mathrm{\mathit{LFOCT}},x},\\ &k\_y=k\_{\mathrm{\mathit{LFOCT}},y},\\ &k\_z=\sqrt{\left(k\_{\mathrm{\mathit{LFOCT}},z}+\sqrt{k\_{\mathrm{\mathit{LFOCT}},z}^2-k\_y^2}\right)^2-k\_x^2}. \end{split}$$ Note that along the *k**x**k**z*-plane (*k**y* = 0) and the *k**y**k**z*-plane (*k**x* = 0), this resampling scheme recapitulates the resampling scheme for point-scanning OCT (Eq. [OCTresample]) and FF-OCT (Eq. [FFOCTresample]), respectively (Fig. [fig:LFOCT]). One interesting property of LF-OCT is that for a given wavenumber, there is more density along the toric surface than for point-scanning OCT along the spherical surface. This can be intuitively appreciated by the fact that the Ewald sphere shares its curvature with the torus along one lateral dimension. Similarly, in the case of FF-OCT, the Ewald sphere directly corresponds to the TF curvature in both lateral dimensions and thus all the density is concentrated along the Ewald sphere. This is, to our knowledge, the first derivation of ISAM resampling for LF-OCT. Resampling for other illumination patterns and pupils ----------------------------------------------------- Aside from plane waves or Gaussian beams focused in one or two dimensions, we may employ other types of illumination patterns, such as Bessel beams and annular pupils, as well as other detection strategies that differ from the illumination. Such experimental setups would result in TFs with different energy distributions and therefore different resampling relations. As noted in the preceding section, the resampling-based corrections are approximate because not all the *k*-space information for a given wavenumber resides along a 2D manifold, except for the case of FF-OCT for which that 2D manifold is the Ewald sphere. Thus, the resampling equations may have to be derived numerically by taking the center of mass through the TF for a given wavenumber. This was done using an OCT setup involving Bessel beam illumination and Gaussian mode detection. The same reasoning would be applied to FF-OCT with partially coherent illumination, which has a similar TF to that of point-scanning OCT. A natural question that arises is whether there is an illumination or detection strategy such that no resampling is required – that is, a TF whose energy distribution is maximized along a plane parallel to the *k**x**k**y*-plane. This would be advantageous as it avoids the need for phase stability. One such strategy, as noted by Sheppard et al., is to use annular pupils, which has been previously employed to extend the depth of focus of OCT. Speckle in OCT ============== Typically speckle in OCT is treated separately from its TF and modeled statistically. However, here, we show that speckle is a natural consequence of the band-pass nature of OCT TFs, as previously argued, and can be appreciated without invoking randomness and multiple scattering, which inevitably depend heavily on the sample structure. Our analysis will primarily focus on the OCT TF and make as few assumptions about the sample as possible. We start our analysis assuming separability of the axial and lateral TFs, which permits the use of Eqs. [OCTTFapprox], [axialPSF], and [lateralpsf], thereby simplifying the analysis. Real-space interpretation of speckle ------------------------------------ One common misconception about OCT is that its image formation is governed by convolution with an incoherent PSF. That is, $$\mathrm{\mathit{OCT}}(x,y,z)=|\mathrm{\mathit{psf}}(x,y,z)\otimes V(x,y,z)|^2\neq |\mathrm{\mathit{psf}}(x,y,z)|^2\otimes |V(x,y,z)|^2.$$ Although OCT is based on low-coherence interferometry, OCT is very much a coherent imaging modality – in our preceding *k*-space analyses, nowhere have we assumed incoherence or even partial coherence (except for FF-OCT in Sec. [partialcoherence]). However, the coherent nature of the OCT TF is the very characteristic that confers speckle to OCT images. In other words, when the incoherent model agrees with the coherent model, that is tantamount to lack of speckle, as we will see. As speckle is caused by interference of sub-resolution scatterers, we start our analysis with the coherent PSF (Eqs. [axialPSF] and [lateralpsf]) and analyze its effect on the simplest case of two scatterers, spaced by a potentially sub-resolution axial distance *d**z* and lateral distance *d**x* (omitting the *y* dimension as it behaves in the same way as the *x* dimension): $$\label{speckle\_with\_2\_scatterers} \begin{split} I(x,z)=&\left|\mathrm{\mathit{psf}}\left(x-\frac{d\_x}{2},z-\frac{d\_z}{2}\right)+\mathrm{\mathit{psf}}\left(x+\frac{d\_x}{2},z+\frac{d\_z}{2}\right)\right|^2 \\ =&\left|\mathrm{\mathit{psf}}\left(x-\frac{d\_x}{2},z-\frac{d\_z}{2}\right)\right|^2+\left|\mathrm{\mathit{psf}}\left(x+\frac{d\_x}{2},z+\frac{d\_z}{2}\right)\right|^2 \\ +&2\left|\mathrm{\mathit{psf}}\left(x-\frac{d\_x}{2},z-\frac{d\_z}{2}\right)\right|\left|\mathrm{\mathit{psf}}\left(x+\frac{d\_x}{2},z+\frac{d\_z}{2}\right)\right|\cos(2k\_0d\_z), \end{split}$$ where the first two terms in the expanded form correspond to those from a incoherent image formation model and the third term is the coherent interferometric term, which is the term of interest. Note that the oscillatory factor of the coherent term only depends on the axial separation, and not the lateral. When either the axial or lateral separation becomes large (*d**x*, *d**z* → ∞), the coherent term goes to 0 and thus we obtain the incoherent image formation model. In other words, when there are no sub-resolution scatterers, the coherent and incoherent models agree with each other and there is no speckle. However, when the separation is comparable to the axial resolution, the coherent term induces high-contrast modulation, which gives rise to speckle. Fig. [fig:speckleanglecompounded] compares coherent to incoherent image formation models. Note that for certain sub-resolution separations, the two reflectors become nearly invisible due to destructive interference. [fig:speckleanglecompounded] While two scatterers may not give rise to speckle in the conventional sense, as the interference pattern is predictable and somewhat recognizable, when there are *N* randomly-distributed scatterers within the system axial resolution we get a superposition of many pairwise coherent interferometric terms: $$\label{speckle} \begin{split} I(x,z)=&\left|\sum\_{n=1}^N r\_n\mathrm{\mathit{psf}}(x-d\_x^n,z-d\_z^n) \right|^2\\ =& \sum\_{n=1}^N \left|r\_n\mathrm{\mathit{psf}}(x-d\_x^n,z-d\_z^n)\right|^2\\+& \mathop{\sum^{N}\sum^{N}}\_{n\neq m} 2r\_nr\_m |\mathrm{\mathit{psf}}(x-d\_x^n,z-d\_z^n)| |\mathrm{\mathit{psf}}(x-d\_x^m,z-d\_z^m)| \cos(2k\_0(d\_z^n-d\_z^m)), \end{split}$$ where {*r**n*}*n* = 1*N* are arbitrary reflectivities and {*d**x**n*, *d**z**n*}*n* = 1*N* are the reflector positions. In this expression, as before, the first summation in the expanded form is the incoherent image formation model. The second double summation is the interferometric terms corresponding to each pair of reflectors in the resolution volume – this is the speckle observed in OCT. In particular, when {*d**x**n*, *d**z**n*}*n* = 1*N* are randomly distributed, the double summation in Eq. [speckle] is a superposition of randomly distributed fringes with random amplitudes. This treatment is consistent with the complex random walk interpretation of speckle, where here each $r\_n\mathrm{\mathit{psf}}(x-d\_x^n,z-d\_z^n)$ is a complex phasor with a random phase and amplitude. The speckle is thus fully developed in the limit of a sum of a large number of phasors, which by the central limit theorem converges to a complex Gaussian distribution, thus giving rise to a Rayleigh distribution on the amplitudes, consistent with prior analysis of OCT speckle under the assumption of large collection of randomly distributed scatterers. Eq. [specklewith2scatterers] may thus be regarded as a special case of underdeveloped speckle (perhaps the least developed speckle). In sum, analyzing the TF derived from *k*-space theory can explain speckle in OCT (at least that which occurs within the first Born approximation). *k*-space interpretation of speckle ----------------------------------- We can reach similar conclusions about speckle in *k*-space. Once again, we consider the case of two closely spaced reflectors (separation, *d**z*, ignoring *d**x* as it only affects the amplitude of the fringes), which manifest as beating of two fringes with very similar frequencies: $$\begin{gathered} \label{speckle\_k\_space} \widetilde{A}(k)= H\_z(k)\left(\cos\left(2k\left(z-\frac{d\_z}{2}\right)\right)+ \cos\left(2k\left(z-\frac{d\_z}{2}\right)\right)\right)=2H\_z(k)\cos(kd\_z)\cos(2kz),\\ H\_z(k)=\exp\left(\frac{-(k\_z-2k\_0)^2}{8\sigma\_k^2}\right), \end{gathered}$$ where *H**z*(*k*) is the axial component of the separable TF (Eq. [OCTTFapprox]), with DC terms not relevant for this analysis omitted. Eq. [specklekspace] contains a high-frequency carrier corresponding to the average position of the two reflectors, modulated by a low-frequency envelope corresponding to their small separation, all of which is windowed by the TF. We argue that speckle arises when the period of the low-frequency beat envelope is comparable to or larger than the width of the TF; that is, when the separation *d**z* is very small. Essentially, we can consider two extreme situations: 1) when the narrow TF is centered at the crest or trough of the low-frequency beat, or 2) when it is centered at a node of the beat. In the former case, we see only a single frequency at twice the amplitude, while in the latter case we see almost nothing. These correspond to, respectively, constructive and destructive interference. In other words, in both cases, the narrowbandedness of the OCT TF prevents us from seeing the bigger picture, that there are in fact two frequencies, not one. The larger the OCT TF (i.e., the larger the spectral and angular bandwidths), the better capable it is of detecting a low-frequency beat. Speckle reduction using angular compounding synthesizes incoherence ------------------------------------------------------------------- Since we have interpreted speckle as a consequence of the coherent nature of the TF and PSF, the motivation of speckle reduction is thus to make the image appear as if it was captured with an incoherent imaging system. While there are several methods for speckle reduction, here we focus on angular compounding, which involves incoherently averaging intensity OCT images acquired from multiple angles. Intuitively, angle-compounding speckle reduction works by changing the effective axial separation between the scatterers (i.e., modulating *d**z*), or equivalently changing the effective axial component of the illumination *k*-vector (i.e., modulating *k*0), with the hope that the oscillating coherent terms in Eq. [specklewith2scatterers] and [speckle] become averaged away to 0 or otherwise minimized. For this analysis, we focus on the case of two closely-spaced scatterers (axial separation=*d**z*), whose conclusions can be extended to the case of multiple scatterers, as we did in Eq. [speckle]. In particular, consider the OCT PSF (Eqs. [axialPSF] and [lateralpsf]) rotated by angle, *θ*, in the *x**z*-plane, $$\label{rotated\_psf} \begin{split} \mathrm{\mathit{psf}}\_\theta(x,z)=& \exp\left(-\frac{(x\cos(\theta) + z\sin(\theta))^2}{2\sigma\_x^2}\right) \exp\left(-\frac{(x\sin(\theta) - z\cos(\theta))^2}{2\sigma\_z^2}\right)\\ &\times\exp\left(-j2k\_0(x\sin(\theta)-z\cos(\theta)\right), \end{split}$$ where we have again omitted the *y* dimension for simplicity. Then, the OCT response to the two scatterers separated by *d**z* is given by $$\label{rotated\_response\_to\_2\_scatterers} \begin{split} I\_\theta(x=0,z=0)=& \left|\mathrm{\mathit{psf\_\theta}}\left(0,\frac{d\_z}{2}\right) + \mathrm{\mathit{psf\_\theta}}\left(0,-\frac{d\_z}{2}\right) \right|^2\\ =& 2\exp\left(-\frac{d\_z^2}{4} \left(\frac{\cos^2(\theta)}{\sigma\_z^2}+ \frac{\sin^2(\theta)}{\sigma\_x^2} \right)\right) \left(1+\cos(2k\_0d\_z\cos(\theta))\right). \end{split}$$ While it is possible to analyze this equation at any arbitrary *x**z* position, the general expression is exceedingly complicated and distracts from the motivation to understand angle-compounded speckle reduction. Thus, here, we have set *x* = 0 and *z* = 0, which is the halfway point between the scatterers and is the position most affected by speckle, as the magnitude of the coherent term is maximized. From the coherent term in Eq. [rotatedresponseto2scatterers] (the cosine term), we can see that the effective axial separation for view angle *θ* is *d**z*cos(*θ*) (an alternative interpretation is that the effective axial component of the illumination *k*-vector is *k*0cos(*θ*)). Thus, angular compounding via incoherent superposition of the coherent term of Eq. [rotatedresponseto2scatterers] is given by $$\label{speckle\_reduction} \begin{split} S(d\_z)=&\frac{1}{\pi}\int\_{-\pi/2}^{\pi/2} \exp\left(-\frac{d\_z^2}{4} \left(\frac{\cos^2(\theta)}{\sigma\_z^2}+ \frac{\sin^2(\theta)}{\sigma\_x^2} \right)\right) \cos(2k\_0d\_z\cos(\theta))d\theta \\ =& \exp\left(-\frac{d\_z^2}{4\sigma\_z^2}\right) \frac{1}{\pi} \int\_{-\pi/2}^{\pi/2} \exp\left(-\frac{d\_z^2}{4} \left(\frac{1}{\sigma\_x^2}- \frac{1}{\sigma\_z^2} \right)\sin^2(\theta) \right) \cos(2k\_0d\_z\cos(\theta))d\theta \\ \approx& \exp\left(-\frac{d\_z^2}{4\sigma\_z^2}\right) \frac{1}{\pi}\int\_{-\pi/2}^{\pi/2} \cos(2k\_0d\_z\cos(\theta))d\theta \\ =&\exp\left(-\frac{d\_z^2}{4\sigma\_z^2}\right)J\_0(2k\_0d\_z), \end{split}$$ where *J**α* is the Bessel function of the first kind. We found that the approximation in Eq. [specklereduction] is exact for isotropic resolution and holds well as long as $\sigma\_z\not\ll\lambda\_0$ or $\sigma\_x\not\ll\lambda\_0$, even for anisotropic PSFs. Intuitively, this can be understood by considering the fact that the Gaussian factor inside the integral of the second row of Eq. [specklereduction] is broad and does not affect the rapidly oscillating cosine away from *θ* = 0, unless *σ**x* or *σ**z* is very small. In sum, Eq. [specklereduction] shows that angular compounding over a full 180-range replaces the cos(2*k*0*d**z*) in the original coherent term by *J*0(2*k*0*d**z*), which decays like $1/\sqrt{2k\_0d\_z}$. Thus, with full angular compounding, scatterers can only contribute significantly to speckle over a length scale on the order of 1/*k*0 or a wavelength, compared to the typically much larger OCT axial resolution, *σ**z* ∝ 1/*σ**k*, given by the left factor in the fourth row of Eq. [specklereduction]. While the coherent term, and therefore speckle, cannot be completely eliminated, even with full angular compounding, the term does become more delta-like. Therefore, the angularly compounded imaging system behaves like an incoherent imaging system over a larger domain (Fig. [fig:speckleanglecompounded]), while conventional OCT only does so when the scatterers are sparse, which is rarely the case in biological tissue. In this sense, angular compounding synthesizes incoherence. Note, however, that if the axial resolution of OCT is on the order of the wavelength, angular compounding is not as effective a strategy for speckle reduction, as the two factors in the fourth row of Eq. [specklereduction] become comparable in width. Finally, we consider angular compounding over a limited angular range, evaluating the first integral in Eq. [specklereduction] (the unapproximated integral) numerically over an angular range of  ± *θ**m**a**x*. These results are summarized in Fig. [fig:specklelimitedangle], which show that limited angular compounding results in speckle reduction on length scales in between *σ**z* and *λ*0. [fig:specklelimitedangle] Increasing angular diversity is also a method of improving the resolution by expanding the TF of coherent imaging modalities. Later, we will discuss resolution enhancement techniques, including how they relate to speckle reduction (particularly the incoherent resolution enhancement techniques in Sec. [incoherentresolutionenhancement]). Equivalence of dynamic wavefront modulation and angular compounding for speckle reduction ----------------------------------------------------------------------------------------- Another strategy for speckle reduction is to modulate the illumination wavefront with a dynamic, random phase mask (e.g., a translating diffuser). The idea is to present multiple uncorrelated phase masks during one integration period so as to average multiple speckle patterns. Such a strategy has been employed in point-scanning OCT and in FF-OCT to reduce speckle due to lateral cross-talk (albeit, from multiple scattering). Fundamentally, this approach has the same idea as angle-compounding-based speckle reduction, in that they both perform incoherent averaging over multiple independent coherent patterns. While angle-compounding-based speckle reduction typically involves incoherent digital averaging, wavefront-modulation-based speckle reduction achieves incoherent averaging by presenting multiple phase patterns at different times, thus preventing interference among these otherwise mutually coherent patterns. Thus, one could also collect multiple images with random illuminations and average them incoherently digitally; likewise, one could also sweep the illumination angle within one integration period. They differ in that they use different coherent bases, where angle-compounding-based approaches uses a multi-angle plane wave basis (or weakly focused waves in the case of point-scanning OCT), while wavefront-modulation-based approaches uses a random basis (each member of which is itself a coherent superposition of the former multi-angle basis). However, the end result is the same. To appreciate the equivalence between angle compounding and wavefront modulation mathematically, we first model one particular modulated PSF as a coherent superposition of complex-valued PSFs from multiple angles, $$\mathrm{\mathit{psf^{mod}\_\phi}}(x,z)=\int\_{\Theta} \mathrm{\mathit{psf\_\theta}}(x,z)\exp(j\phi(\theta))d\theta,$$ where integration is performed over some domain, Θ ∈ [*θ**m**i**n*, *θ**m**a**x*], restricted by the system NA, and $\mathrm{\mathit{psf\_\theta}}$ is given by Eq. [rotatedpsf], *ϕ*(*θ*) is a random phase modulation introduced by the diffusing element at a particular time. Note that in general, the modulation can also include an amplitude component (i.e., *ϕ*(*θ*) can be complex-valued). Let’s once again analyze the case of two scatterers separated axially by *d**z* (Sec. [specklereductionsection]), with the same idea that our conclusions can be straightforwardly generalized to multiple scatterers. The OCT response for one modulation pattern, *ϕ*(*θ*), is given by $$\label{OCT\_modulated} \begin{split} I\_\phi(x=0,&z=0)\\ &\begin{split} =\left| \mathrm{\mathit{psf^{mod}\_\phi}}\left(0,\frac{d\_z}{2}\right) + \mathrm{\mathit{psf^{mod}\_\phi}}\left(0,-\frac{d\_z}{2}\right) \right|^2 \end{split}\\ &\begin{split} =\left| \int\_\Theta \left( \mathrm{\mathit{psf\_\theta}}\left(0,\frac{d\_z}{2}\right)+ \mathrm{\mathit{psf\_\theta}}\left(0,-\frac{d\_z}{2}\right)\right) \exp(j\phi(\theta)) d\theta \right|^2 \end{split}\\ &\begin{split} =\int\_\Theta \int\_\Theta \left( \mathrm{\mathit{psf\_\alpha}}\left(0,\frac{d\_z}{2}\right)+ \mathrm{\mathit{psf\_\alpha}}\left(0,-\frac{d\_z}{2}\right)\right) & \left( \mathrm{\mathit{psf^{\*}\_\beta}}\left(0,\frac{d\_z}{2}\right)+ \mathrm{\mathit{psf^{\*}\_\beta}}\left(0,-\frac{d\_z}{2}\right)\right)\\ &\times\exp(j(\phi(\alpha)-\phi(\beta))) d\alpha d\beta \end{split}\\ & \begin{split} \approx 2\exp\left(-\frac{d\_z^2}{4\sigma\_z^2}\right) \int\_\Theta \int\_\Theta \big[& \cos(k\_0d\_z(\cos(\alpha)-\cos(\beta)))\\ &+\cos(k\_0d\_z(\cos(\alpha)+\cos(\beta))) \big] \exp(j(\phi(\alpha)-\phi(\beta))) d\alpha d\beta, \end{split} \end{split}$$ where we have evaluated at *x* = 0 and *z* = 0 for the same reasons as in Eq. [rotatedresponseto2scatterers], and expanded the square magnitude of the integral and substituted Eq. [rotatedpsf]. The approximation made here is similar to the one made in Eq. [specklereduction]. The two cosine terms in the square brackets in Eq. [OCTmodulated] are the incoherent, non-interfering terms ($\mathrm{\mathit{psf\_\alpha}}\left(0,\frac{d\_z}{2}\right) \mathrm{\mathit{psf^{\*}\_\beta}}\left(0,\frac{d\_z}{2}\right) + \mathrm{\mathit{psf\_\alpha}}\left(0,-\frac{d\_z}{2}\right) \mathrm{\mathit{psf^{\*}\_\beta}}\left(0,-\frac{d\_z}{2}\right)$) and the coherent, interferometric terms ($\mathrm{\mathit{psf\_\alpha}}\left(0,\frac{d\_z}{2}\right) \mathrm{\mathit{psf^{\*}\_\beta}}\left(0,-\frac{d\_z}{2}\right) + \mathrm{\mathit{psf\_\alpha}}\left(0,\frac{d\_z}{2}\right) \mathrm{\mathit{psf^{\*}\_\beta}}\left(0,-\frac{d\_z}{2}\right)$), respectively (the same distinction as we made in Eq. [rotatedresponseto2scatterers]). In the ensuing analysis, we will ignore the non-interferometric terms, only analyzing the interferometric terms, which are responsible for speckle (as we did in Eq. [specklereduction]). The OCT response upon *incoherent* integration of the interferometric terms over multiple diffuser patterns is given by $$\label{wavefront\_mod} \begin{split} &\begin{split} S\_{mod}(d\_z)=2\exp\left(-\frac{d\_z^2}{4\sigma\_z^2}\right) \int\_\Phi \int\_\Theta \int\_\Theta &\cos(k\_0d\_z(\cos(\alpha)+\cos(\beta)))\\ &\times\exp(j(\phi(\alpha)-\phi(\beta))) P(\phi) d\alpha d\beta d\phi \end{split}\\ &\begin{split} \phantom{S\_{mod}(d\_z)}=2\exp\left(-\frac{d\_z^2}{4\sigma\_z^2}\right) \int\_\Phi \int\_\Theta \int\_\Theta \big[ &\cos(k\_0d\_z\cos(\alpha))\cos(k\_0d\_z\cos(\beta))-\\ &\sin(k\_0d\_z\cos(\alpha))\sin(k\_0d\_z\cos(\beta)) \big]\\ &\times\exp(j(\phi(\alpha)-\phi(\beta))) P(\phi) d\alpha d\beta d\phi, \end{split} \end{split}$$ where Φ is the domain of random phase modulation patterns accessible by the dynamic diffusing element, and *P*(*ϕ*) is the probability of a particular pattern (as a side note, the incoherent integration of the non-interferometric terms is the same as Eq. [wavefrontmod], except with the sign of the sine term in the square brackets reversed). Assuming that every modulation pattern is equally likely, so that *P*(*ϕ*) is constant and can be dropped from Eq. [wavefrontmod], and changing the order of integration, we have $$\label{S\_mod} \begin{split} \begin{split} S\_{mod}(d\_z)=2\exp\left(-\frac{d\_z^2}{4\sigma\_z^2}\right) \int\_\Theta \int\_\Theta \big[ &\cos(k\_0d\_z\cos(\alpha))\cos(k\_0d\_z\cos(\beta))-\\ &\sin(k\_0d\_z\cos(\alpha))\sin(k\_0d\_z\cos(\beta)) \big]\\ &\times\left[\int\_\Phi \exp(j(\phi(\alpha)-\phi(\beta))) d\phi\right] d\alpha d\beta. \end{split} \end{split}$$ Here, the factors contributing to wavefront modulation have been isolated in the square brackets, which is the mean outer product of the angularly-dependent modulation factors. We now consider two limiting cases that that result in different simplifications of this outer product: 1) *ϕ* is a constant, *θ*-independent, deterministic phase, and 2) *ϕ* is random and follows an independent multivariate uniform distribution over 2*π* radians (i.e., $\phi(\theta)\sim \mathrm{\mathit{Unif}}(0, 2\pi)$). It’s also possible for *ϕ* to have a stochastic and deterministic component (e.g., $\phi(\theta)\sim \mathrm{\mathit{Unif}}(0, \pi/4)$, which has a preferred phase), in which case the result would be a superposition of these two cases. For case 1, the mean outer product in Eq. [Smod] is also constant and can be dropped. Thus, assuming that Θ ∈ [ − *π*/2, *π*/2] to match the situation in Eq. [specklereduction], we have $$\label{deterministic} \begin{split} &\begin{split} S\_{mod}^{deterministic}(d\_z)\propto 2\exp\left(-\frac{d\_z^2}{4\sigma\_z^2}\right) \int\_\Theta \int\_\Theta \big[ &\cos(k\_0d\_z\cos(\alpha))\cos(k\_0d\_z\cos(\beta))-\\ &\sin(k\_0d\_z\cos(\alpha))\sin(k\_0d\_z\cos(\beta)) \big] d\alpha d\beta \end{split}\\ &\phantom{ S\_{mod}^{deterministic}(d\_z)}= 2\exp\left(-\frac{d\_z^2}{4\sigma\_z^2}\right) \Bigg[ \left( \int\_\Theta \cos(k\_0d\_z\cos(\theta)) d\theta \right)^2 - \left( \int\_\Theta \sin(k\_0d\_z\cos(\theta)) d\theta \right)^2 \Bigg] \\ &\phantom{ S\_{mod}^{deterministic}(d\_z)}\propto 2\exp\left(-\frac{d\_z^2}{4\sigma\_z^2}\right) \Big[ J\_0(k\_0d\_z)^2 - H\_0(k\_0d\_z)^2 \Big], \end{split}$$ where *H**α* is the Struve function. This case is simply just beam focusing, since all the multi-angle fields are mutually in phase and thus constructively interfere to form a focus. That is why Eq. [deterministic] approaches 0 as *d**z* approaches infinity, even in the absence of the Gaussian prefactor, as the focusing over a wide angular range improves the axial resolution. For the more interesting case 2, the mean outer product in Eq. [Smod] converges to an integral over a delta function, because the integrand is 1 when *α* = *β* and otherwise a random value on the complex unit circle. Thus, integration over many random patterns will average away the off-diagonal components of the outer product to 0, leaving behind the identity matrix. Thus, assuming that Θ ∈ [ − *π*/2, *π*/2] and Φ ∈ [ − *π*/2, *π*/2] to match the situation in Eqs. [specklereduction] and [deterministic], we have $$\label{speckle\_reduction2} \begin{split} &\begin{split} S\_{mod}^{stochastic}(d\_z){\mathrel{\vcenter{ \offinterlineskip\halign{\hfil$##$\cr \propto\cr\noalign{\kern2pt}\sim\cr\noalign{\kern-2pt}}}}}2\exp\left(-\frac{d\_z^2}{4\sigma\_z^2}\right) \int\_\Theta \int\_\Theta \big[ &\cos(k\_0d\_z\cos(\alpha))\cos(k\_0d\_z\cos(\beta))-\\ &\sin(k\_0d\_z\cos(\alpha))\sin(k\_0d\_z\cos(\beta)) \big]\\ &\times\left[\int\_\Phi \delta(\alpha-\phi)\delta(\beta-\phi) d\phi\right] d\alpha d\beta \end{split}\\ & \begin{split} \phantom{ S\_{mod}^{stochastic}(d\_z)}= 2\exp\left(-\frac{d\_z^2}{4\sigma\_z^2}\right) \int\_\Phi \Bigg[ &\left(\int\_\Theta \cos(k\_0d\_z\cos(\theta)) \delta(\theta-\phi) d\theta \right)^2\\ - &\left(\int\_\Theta \sin(k\_0d\_z\cos(\theta)) \delta(\theta-\phi) d\theta \right)^2 \Bigg] d\phi \end{split}\\ &\phantom{ S\_{mod}^{stochastic}(d\_z)}= 2\exp\left(-\frac{d\_z^2}{4\sigma\_z^2}\right) \int\_\Phi \big[ \cos^2(k\_0d\_z\cos(\phi))- \sin^2(k\_0d\_z\cos(\phi)) \big] d\phi\\ &\phantom{ S\_{mod}^{stochastic}(d\_z)}= 2\exp\left(-\frac{d\_z^2}{4\sigma\_z^2}\right) \int\_\Phi \cos(2k\_0d\_z\cos(\phi)) d\phi\\ &\phantom{ S\_{mod}^{stochastic}(d\_z)} \propto \exp\left(-\frac{d\_z^2}{4\sigma\_z^2}\right)J\_0(2k\_0d\_z), \end{split}$$ where we used the sifting property of the delta function to evaluate the inner integrals. This result is identical to angle-compounding based speckle reduction result (Eq. [specklereduction]). In fact, the first line in Eq. [specklereduction2] can be interpreted as using a delta amplitude modulation function with a fixed phase that is swept across the full angular range, thereby modeling sweeping of the illumination angle during one incoherent integration period – this is precisely angle-compounding-based speckle reduction. As a side note, the result for the non-interferometric term is the same, except with the sign of the sine term in Eq. [specklereduction2] flipped, resulting only the Gaussian prefactor – this is the same as for angle compounding, where the non-interferometric term is simply the Gaussian prefactor for all angles prior to compounding (see Eqs. [rotatedresponseto2scatterers] and [specklereduction]). In sum, both angle compounding and wavefront modulation obtain the same degree of speckle reduction. Of course, these are theoretical results that rely on the ideal conditions of full angular coverage ( ± *π*/2) and incoherent integration over an infinite number of independent speckle patterns and angles, which we have chosen to facilitate analytical evaluation of integrals. In practice, the angular range will be limited by the NAs of practical objectives, and the number of angles or independent modulation patterns will be finite. We can, however, conclude that angular compounding and wavefront modulation over a limited angular range will asymptotically have the same speckle reduction performance, because Eqs. [specklereduction] and [specklereduction2] reduce to the same integral over angular range. Fig. [fig:wavefrontmod] shows simulations of Eq. [specklereduction2] using a finite number of modulation patterns over multiple angular ranges, without the Gaussian prefactor, which would otherwise suppress speckle at larger inter-scatterer separation (c.f., Fig. [fig:specklelimitedangle]). Furthermore, the modulation patterns may follow other distributions other than the uniform distribution, in which case *P*(*ϕ*) cannot be dropped from Eq. [wavefrontmod]. In these cases, Eqs. [OCTmodulated] and [wavefrontmod] would likely have to be simulated numerically with discrete sums. Finally, we also reiterate that the conclusions drawn here for angle-compounding- and wavefront-modulation-based speckle reduction refer to speckle in the first Born or single scattering regime. Modeling reduction of speckle due to multiple scattering is more involved, as it precludes simplification to a broadly generalizable non-stochastic two-scatterer sample model and it relies more on the sample properties. [fig:wavefrontmod] Coherent resolution enhancement in OCT ====================================== Coherent resolution enhancement involves expanding the area of the TF in *k*-space, which can be achieved using a few strategies generally centered around the idea of increasing the angular or spectral bandwidths. Strategies referring to the expansion of the angular bandwidth are typically referred to as synthetic aperture techniques. These strategies require measurement diversity, such as through lateral scanning of a focused point (ISAM), angular scanning of a plane wave, or somewhere in between (lateral and angular scanning of a focused beam). Because synthetic aperture techniques must coherently combine information from multiple measurements, there needs to be a fixed or otherwise predictable phase coherence among the measurements (i.e., phase stability). Synthetic aperture techniques ----------------------------- Perhaps the most conceptually straightforward approach to achieve high resolution would be to use larger-NA objectives and larger spectral bandwidths combined with ISAM or any of the resampling techniques described in Sec. [refocus], with which in theory one would be able to achieve the TFs derived in Sec. [transferfunctions]. ISAM achieves angular diversity through the high-NA illumination, but must use lateral scanning so that every 3D spatial position within the sample observes the angular diversity. In particular, only the position corresponding to the nominal focus of the beam observes all of the plane wave angles in phase. Without lateral scanning, there are two effects: 1) neighboring positions in the lateral plane receive no light and therefore have zero angular diversity because of destructive interference, and 2) the further away axially from the focus, the more divergent the beam and the more locally plane-wave-like the beam, and therefore the less angular diversity (i.e., defocus) – angular diversity is synthesized by translating the beam laterally, so that a given defocused position observes the diverging wavefront everywhere. This is the reason why ISAM is termed so – angular coverage is synthesized away from the nominal focus, which without correction is the only position that observes angular diversity. If we regard the maximum NA as 1.0 and assume that the refractive index part, *n*, of the NA expression serves to shorten the wavelength (or lengthen the wavenumber), the maximum *k**x* or *k**y* range for FF-OCT is  ± *n**k**m**a**x*, where *k**m**a**x* is the largest vacuum wavenumber used to illuminate the sample. Using a focused illumination with the same NA of 1.0, the maximum *k**x* or *k**y* range doubles, becoming  ± 2*n**k**m**a**x*. As a result, the TFs are always constrained to reside inside a sphere of radius 2*n**k**m**a**x*. This 2*n**k**m**a**x*-radius limit is the same as for diffraction tomography, which can achieve the same TF as the point-scanning analog through angular diversity of illumination through sequential plane wave illumination with wide-field detection rather than through focused illumination. In analogy to diffraction tomography and other synthetic aperture techniques such as synthetic aperture radar, thus another strategy for synthetic aperture in OCT is to acquire OCT images over a potentially smaller aperture, but alter the illumination angles sequentially to synthesize a larger TF for FF-OCT (e.g., Eq. [OCTTFgeneral]). One could either rotate the sample so that illumination and collection are along the same axis, or keep the aperture fixed and vary the illumination angle (Eq. [OCTTFgeneral]), producing distinct TFs. Yet a third strategy would be a combination of these two strategies, whereby a focused beam could be scanned laterally and rotated. However, in all cases the synthesized TF would still be constrained by the 2*n**k**m**a**x*-radius sphere. Lateral point-scanning vs. angular plane wave rotation: a practical distinction in SNR distribution --------------------------------------------------------------------------------------------------- Fundamentally, all of these strategies are the same if we consider the effective plane wave angular coverage, differing only by the order in which the angularly-varying plane waves are delivered to the sample. However, there are at least two important practical differences among these strategies: SNR and phase stability. SNR considerations stem from the fact that even though our *k*-space analyses uses optical fields (i.e., in the FDT, Eq. [FDTeq]), in practice we can only detect intensity, which is proportional to the magnitude of the field squared. This is important because measurement SNR is related to the number of photons detected, which is proportional to the intensity rather than the field amplitude. One way to think of this is that there is a fixed energy (photon) budget, with which we are free to distribute spatially across the sample via constructive and destructive interference of our multi-angle plane waves by adjusting their phase and amplitudes (i.e., the angular spectrum). For example, the simplest case might be a single plane wave, which allocates our energy equally across the sample (in the absence of multiple scattering, as afforded by the first Born approximation), thereby conferring more uniform SNR across the sample. We can also choose the phase and amplitudes of our multi-angle plane waves such that they form a tightly-focused Gaussian beam – in this case, most of our energy budget is allocated to a region surrounding the focus, meaning there is high SNR at and near the focus, but low SNR everywhere else. Because SNR is low elsewhere, scanning is required. This is a major limitation of ISAM with high-NA illumination – although the lateral resolution theoretically becomes depth-independent, the SNR is only appreciable within the depth of focus. Contrast this with sweeping the plane wave angle sequentially as is done in diffraction tomography (and potentially in FF-OCT via Eq. [OCTTFgeneral]), which achieves the same depth-independent resolution as ISAM, but with a more spatially uniform SNR (also meaning lower SNR at the focus than in ISAM). Other strategies are possible, such as intentionally introducing astigmatism into the beam or using a Bessel beam, which achieve moderate SNR over a longer depth range. It remains to be seen whether there is an illumination strategy that confers wide angular diversity at every spatial position of the sample, thus reducing or altogether obviating the need for scanning of any kind. The other practical consideration is phase stability among the sequential measurements, which in practice often means nanometer-scale motion stability. For ISAM, there needs to be phase stability as the beam is translated laterally, which may be compromised due to sample motion or jitter in the scanning mechanism (e.g., galvanometers). Translational phase stability allows us to take 2D Fourier transforms across the lateral dimensions in order to operate in 3D *k*-space (Sec. [refocus1]). For FF-OCT, the lateral components of the 2D backscattered field are detected simultaneously, thus conferring lateral phase stability. However, sample motion is still a concern due to the slower source sweep rate or mechanical axial translation, which can affect the axial phase stability. Furthermore, phase stability must be maintained across different illumination angles, so that the Ewald spheres can be constructively superimposed in 3D *k*-space. Incoherent resolution enhancement in OCT ======================================== Finally, we discuss incoherent resolution enhancement techniques, a recent development that does not attempt to reconcile the phase relationships among the sequential measurements. To clarify, we are not referring to super-resolution in incoherent imaging techniques like fluorescence microscopy, but rather we are focusing on enhancing resolution in coherent imaging techniques such as OCT using techniques that do not rely on phase information. Optical coherence refraction tomography (OCRT) ---------------------------------------------- One technique that achieves incoherent resolution enhancement in OCT is one we recently introduced and named optical coherence refraction tomography (OCRT). One motivation of OCRT is that OCT typically has anisotropic resolution, with the axial better than the lateral resolution, which is due to the desire to have long depths of focus on the order of hundreds of microns to millimeters for bulk tissue imaging. To combat this anisotropy, OCRT uses magnitude OCT images (i.e., with phase information discarded) from multiple angles to create a reconstruction with more isotropic resolution, limited by the axial resolution (or lateral, whichever is better ). The theory of OCRT was previously explained in analogy to X-ray computed tomography (CT) using an anisotropic TF centered at the origin of *k*-space. With sample rotation or angular steering of the illumination, the TF rotates and the superposition approaches an isotropic TF (Fig. [fig:OCRT]). The requirement that the TFs be centered at the *k*-space origin appears to be at odds with the band-pass structure we derived for OCT and other reflective coherent imaging modalities (Sec. [transferfunctions]). However, if we make certain assumptions about the sample, we will see that the DC-centered TFs are, in fact, consistent with our *k*-space framework, which we will now demonstrate. For the ensuing analysis of OCRT, we are making the separability assumption of the OCT TF (Eq. [OCTTFapprox]), which is reasonable because without phase information and therefore ISAM resampling to rely on, in practice OCRT (and OCT in general) uses low NAs to obtain relatively uniform SNR and lateral resolution over long depths of focus. As a result of this separability assumption, we can drop the *y* dimension as its analysis is identical to that of *x*. Continuing, the key enabling assumption that justifies a DC-centered TF in OCT is that the sample is a finite set of randomly distributed discrete reflectors (i.e., a superposition of delta functions). This assumption is identical to the one made in incoherent speckle reduction discussed previously (Sec. [specklereductionsection]), and often in OCT in general. Thus, *V*(*x*, *z*) = ∑*n* = 1*N**r**n**δ*(*x* − *x**n*, *z* − *z**n*),  with random amplitudes {*r**n*}*n* = 1*N* and positions {*x**n*, *z**n*}*n* = 1*N*. Note that this is a reasonable assumption, even in biological tissue with slow RI variation in addition to scatterers, because the band-pass nature of the OCT TF cannot detect the low-frequency RI information anyway. Under this assumption, in *k*-space, the sample scattering potential is a superposition of complex exponentials with frequencies given by the spatial positions: $$\label{discrete\_sum\_k\_space} \widetilde{V}(k\_x,k\_z)=\sum\_{n=1}^N r\_n\exp(-j(k\_xx\_n+k\_zz\_n)).$$ Note that this scattering potential spectrum has infinite bandwidth, and thus in theory it does not matter which pass-band we choose, as each term in the sum exists everywhere, as ensured by the randomness of the {*x**n*, *z**n*}*n* = 1*N*. In practice, this means we can use any source bandwidth and image the sample from any angle, and we would still be able to detect the sample scatterers. An example of a sample that violates our discrete-sum assumption is a pristine glass cover slide, a sample whose scattering potential is concentrated along a straight line in *k*-space. Thus, there are view angles at which OCT observes nothing (e.g., when the incident beam is not nearly orthogonal to the surface). However, for biological samples, the discrete-sum assumption is reasonable. Proceeding with this assumption, we compute the *k*-space coverage of an intensity OCT image, with the phase discarded. First, in preparation for the next step, we note that $$\mathrm{\mathit{psf}}\_\mathrm{\mathit{lpf}}(x,z) = \left|\mathrm{\mathit{psf}}(x,z) \right|^2\propto\exp\left(-\frac{x^2}{\sigma\_{xy}^2}-\frac{z^2}{\sigma\_z^2}\right) \overset{\mathcal{F}}{\leftrightarrow} \exp\left(-\frac{1}{4}\left(\sigma\_{xy}^2k\_x^2+\sigma\_z^2k\_z^2\right)\right) \propto H\_\mathrm{\mathit{lpf}}(k\_x,k\_z),$$ which is the incoherent PSF and DC-centered TF that we seek ($\mathrm{\mathit{lpf}}$ = low-pass filter). We then compute the Fourier transform of Eq. [speckle], which is the square magnitude OCT response to a discrete sum of reflectors, and obtain $$\label{OCT\_intensity} \begin{split} \widetilde{I}(k\_x,k\_z)\propto H\_\mathrm{\mathit{lpf}}(k\_x,k\_z)\times \left[\begin{split} &\phantom{+}\sum\_{n=1}^N r\_n^2 \exp(-j(k\_x d\_x^n + k\_z d\_z^n)) \\ &+ \mathop{\sum^{N}\sum^{N}}\_{n\neq m} \begin{aligned}[t] &\Biggl[2r\_nr\_m\cos(2k\_0(d\_z^n-d\_z^m)) \\ &\times\exp\left(-j\frac{1}{2}\left(k\_x(d\_x^n+d\_x^m)+k\_z(d\_z^n+d\_z^m)\right)\right)\\ &\times\exp\left(-\frac{(d\_x^n-d\_x^m)^2}{4\sigma\_{xy}^2}-\frac{(d\_z^n-d\_z^m)^2}{4\sigma\_z^2}\right) \Biggr] \end{aligned} \end{split}\right]. \end{split}$$ Essentially, this is the autocorrelation of the OCT band-pass TF. This equation is the Fourier spectrum of the discrete-sum sample, filtered by the apparently low-pass, DC-centered TF, $H\_\mathrm{\mathit{lpf}}(k\_x,k\_z)$. That is, the terms in the multi-line brackets can be interpreted as the sample, which we now analyze line by line. The first line (the single summation) is functionally identical to the assumed scattering potential spectrum of our discrete-sum sample (Eq. [discretesumkspace]), while the remaining terms (the double summation) can be attributed to speckle. The factors in the second and third row of the multi-line brackets contain the beat and carrier frequencies, respectively, due to two closely axially spaced reflectors (completely analogous to our *k*-space interpretation of speckle in Eq. [specklekspace]). The factor in the fourth row of the multi-line brackets is a real-valued scaling factor that goes to 0 when the separation between the *n**t**h* and *m**t**h* reflectors becomes large compared to the resolution. This is also consistent with our previous analysis of speckle, as two reflectors only contribute significantly to speckle when the separations are sub-resolution. Now that we have demonstrated that the TF of the intensity OCT image is apparently centered at the *k*-space origin (i.e., $H\_\mathrm{|\mathit{OCT}|^2}(k\_x,k\_z)=H\_\mathrm{\mathit{lpf}}(k\_x,k\_z)$ and $\mathit{psf}\_\mathrm{|\mathit{OCT}|^2}(x,z)=\mathit{psf}\_\mathrm{\mathit{lpf}}(x,z)$), we can see that combining images with anisotropic resolution from multiple angles and superimposing them will create a reconstruction with isotropic spatial resolution. Since the center of the TF is over-emphasized, in practice we can apply a CT-like filtered backprojection algorithm to correct this bias. The theoretical PSF and TF of OCRT with full angular coverage are thus $$\begin{split} \mathrm{\mathit{psf}}\_\mathrm{\mathit{OCRT}}(x,z) &\propto \exp\left(-\frac{x^2+z^2}{\sigma\_{z}^2}\right), \\ H\_\mathrm{\mathit{OCRT}}(k\_x,k\_z) &\propto \exp\left(-\frac{\sigma\_z^2}{4}\left(k\_x^2+k\_z^2\right)\right), \end{split}$$ assuming that the original OCT axial resolution was superior to the lateral. We can also better appreciate more quantitatively why OCRT also obtains speckle reduction, as the speckle terms in the double summation in Eq. [OCTintensity] attenuate with angular diversity, as demonstrated in Eq. [specklereduction] and Figs. [fig:speckleanglecompounded]. and [fig:specklelimitedangle]. [fig:OCRT] Comparison with coherent resolution enhancement in OCT ------------------------------------------------------ While there are a few avenues for resolution enhancement in OCT, each has its own advantages and disadvantages, from practical and theoretical aspects. Coherent resolution enhancement techniques like ISAM are perhaps the most straightforward to implement in hardware, as they use the same setup and data acquisition pipeline as conventional OCT (unless higher NAs are desired). A major challenge, however, as discussed previously (Sec. [coherentapproaches]), is maintaining phase stability among the lateral scans, which is not required in conventional OCT. OCRT, however, does not require phase stability among its sequential measurements because OCRT uses intensity OCT images, in which the phase discarded. Another difference is that coherent methods, as synthetic aperture techniques, still maintain the band-pass nature of OCT and thus do not obtain speckle reduction (although the size of the speckle grain decreases in accordance to the expanded TF). OCRT, on the other hand, incoherently compounds the multi-angle images and takes advantage of the apparent low-pass nature of intensity OCT images to obtain speckle reduction. Thus, another perspective on this comparison between coherent and incoherent techniques is that both classes of techniques require additional information about the sample in 3D *k*-space, but utilize and synthesize this information differently. Synthetic aperture techniques maintain this information at their correct locations in *k*-space, while incoherent techniques like OCRT and speckle reduction rely on the discrete-sum assumption (Eqs. [sumassumption] and [discretesumkspace]), indicating that all pass-bands measure the same information about the sample with different speckle realizations (i.e., different observations of low-frequency beats caused by closely-spaced reflectors, as discussed Sec. [kspacespeckle]). Therefore, these pass-bands can be demodulated to DC (i.e., by taking the amplitude squared) and superimposed to obtain speckle reduction and, if the pass-bands are anisotropic, resolution enhancement. A simple example of this distinction is that between using the full OCT bandwidth or multiple OCT bands to create a high-resolution image in the coherent case, and averaging multiple sub-bands to trade off axial resolution for speckle reduction in the incoherent case. Finally, while synthetic aperture techniques place more emphasis on the angular spectral width, OCRT places more emphasis on the source spectral width and thus have different challenges. In particular, large source bandwidths are more susceptible to axial PSF broadening (equivalent to ultrafast pulse broadening) and thus require careful control of dispersion not only from the imaging system but also the sample (Sec. [sampleinduceddispersion]). Similarly, large angular bandwidths are more susceptible to spatial aberrations and thus require carefully designed high-NA objectives and sometimes sample-induced aberrations. Finally, if both large source spectral bandwidths and angular bandwidths are desired, a situation more applicable to coherent resolution enhancement techniques, both aberrations and dispersion and their couplings would have to be addressed (Sec. [generalizeddispersion]). Conclusion and future directions ================================ In summary, we have advanced a full 3D *k*-space model of OCT, placing it in the context of general coherent imaging modalities. Using the Fourier diffraction theorem as the fundamental axiom on which the whole theory rests, we have derived the TFs of various implementations of OCT, including FF-OCT, LF-OCT, and point-scanning OCT, which are all band-pass TFs centered at **k** = (0, 0, 2*k*0), assuming illumination in the  − *k**z* direction and collection in the *k**z* direction. Conventional OCT processing ignores the curvature of the TFs, which originates from the Ewald sphere, that effectively couples the axial and lateral dimensions, thereby resulting in limited depths of focus. Using ISAM to resample the *k*-space coordinates in theory recovers the depth-invariant resolution promised by a 3D TF. Furthermore, as this *k*-space framework blurs the distinction between the axial and lateral dimensions, axial dispersion compensation and lateral aberration corrections may be unified as a generalized 3D pupil function, as is done in CAO. We also explained OCT speckle from the band-pass nature of the TF, and showed how angular compounding synthesizes incoherence. In doing so, we have shown that the intensity OCT image can be considered to be governed by a low-pass transfer function under the assumption that the sample is a discrete collection of scatterers. Based on this observation, we explained how OCRT simultaneously obtains speckle reduction and resolution enhancement. This unifying theoretical treatment of existing OCT techniques also highlights future research directions for the field. As discussed above, another relatively unexplored method of enhancing the lateral resolution of OCT is to coherently combine FF-OCT from multiple angles. Such an approach could have advantages over ISAM with high-NA objectives in terms of phase stability requirements and higher SNR away from the nominal focus. Such a strategy, along with FF-OCT with spatially incoherent illumination, can also be applied to transillumination OCT, for which all existing approaches have used point scanning. Another direction is alternative illumination and detection geometries that reduce the curvature of the OCT TF, thereby expanding the depth of focus without requiring resampling in *k*-space. Finally, while most of our discussions assumed single scattering in the first Born approximation, OCT would also benefit from deterministic modeling of multiple scattering in addition to statistical treatments, as scattering is a deterministic phenomenon for static samples. Doing so may extend the imaging range of OCT that is otherwise restricted by the first Born approximation, in the same way that using sophisticated scattering models recently advanced in the field of diffraction tomography has enabled transmission imaging of thicker, multiply scattering samples than previously possible. While there have been efforts to create accurate wave-based forward models for OCT that model multiple scattering, they have yet to be used in inverse problem formulations to reconstruct the sample scattering potential or RI distribution. Accurate modeling of multiple scattering could not only extend the imaging depth of OCT, but also potentially reduce the cross-talk in FF-OCT with coherent illumination, which is caused by multiple scattering. Embracing optimization-based approaches to augment OCT is especially appropriate in this age where computational techniques are becoming much more feasible and commonplace. In conclusion, we have presented a unified theoretical treatment of OCT that not only explains the fundamental concepts and properties of OCT, but also renders more transparent the connections among existing implementations of OCT as well as with other coherent imaging techniques. We hope that this treatment will lead to new insights that encourage research in developing new OCT imaging techniques and extensions that yield ever more information about the sample under interrogation. Acknowledgments =============== National Institutes of Health (P30EY005722), National Science Foundation (CBET-1902904, DGF-1106401), Unrestricted Grant from Research to Prevent Blindness Disclosures =========== The authors have multiple patents related to various techniques discussed in this paper. Unified *k*-space theory of optical coherence tomography ======================================================== We present a general theory of optical coherence tomography (OCT), which synthesizes the fundamental concepts and implementations of OCT under a common 3D *k*-space framework. At the heart of this analysis is the Fourier diffraction theorem, which relates the coherent interaction between a sample and plane wave to the Ewald sphere in the 3D *k*-space representation of the sample. While only the axial dimension of OCT is typically analyzed in *k*-space, we show that embracing a fully 3D *k*-space formalism allows explanation of nearly every fundamental physical phenomenon or property of OCT, including contrast mechanism, resolution, dispersion, aberration, limited depth of focus, and speckle. The theory also unifies diffraction tomography, confocal microscopy, point-scanning OCT, line-field OCT, full-field OCT, Bessel-beam OCT, transillumination OCT, interferometric synthetic aperture microscopy (ISAM), and optical coherence refraction tomography (OCRT), among others. Our unified theory not only enables clear understanding of existing techniques, but also suggests new research directions to continue advancing the field of OCT. Introduction ============ Since its invention nearly three decades ago, optical coherence tomography (OCT) has proliferated into a broad class of techniques with a variety of biomedical and clinical applications, such as in ophthalmology, cardiology, dermatology, oncology, and gastroenterology. Even beyond the well known categories of time-domain OCT (TD-OCT) and Fourier-domain OCT (FD-OCT), the field of OCT has evolved to encompass implementations that use a variety of illumination and detection strategies, unified by interferometry with a broadband or low-coherence source. The earliest implementations of OCT were point-scanning OCT systems, which involved scanning a focused spot across the sample, probing one lateral spatial position at a time. Even today, point-scanning OCT remains the most popular form of OCT and is very successful as a clinical standard for ophthalmic imaging and an emerging standard for intravascular and gastroenterological imaging. Shortly thereafter, full-field OCT (FF-OCT) and line-field OCT (LF-OCT) emerged as alternate strategies, which use unfocused or cylindrically (1D) focused light and parallel spatial detection (i.e., a 2D camera or a 1D line camera). Furthermore, apart from Gaussian beams with common mode or confocal detection, other illumination patterns and detection strategies have also been used in OCT, such as Bessel beams with double-pass illumination and detection or with decoupled Gaussian mode detection, annular pupils for illumination and detection, and many other strategies. All of these alternative illumination/detection strategies have been used to maintain a high lateral resolution over an extended depth of focus in OCT, compared to the standard Gaussian beam. Other methods have also been proposed to address this issue, notably interferometric synthetic aperture microscopy (ISAM), which computationally corrects the defocus by solving the coherent inverse scattering problem, with different solutions depending on the illumination/detection strategy (e.g., ISAM for Bessel-beam illumination/Gaussian mode detection and FF-OCT ). Even more recently, we developed an incoherent angular compounding technique called optical coherence refraction tomography (OCRT) to address this trade-off between depth of focus and lateral resolution by reconstructing an image with isotropic resolution. These various implementations and extensions of OCT would greatly benefit from a unified theoretical treatment that concisely identifies their differences and similarities, their relative advantages and disadvantages, and their relationship to the broader category of coherent imaging. Such a unified theory would also suggest new ways to continue the technological advancement of the field of OCT. To this end, here, extending our earlier preliminary work, we present a comprehensive, fully 3D *k*-space analysis of OCT that provides a unified theoretical framework. This theory not only encompasses all of the implementations of OCT mentioned in the previous paragraph, but also explains many fundamental concepts of OCT, including the contrast mechanism, origin of speckle, dispersion, and trade-off between the lateral resolution and depth of focus. Here, *k*-space refers to 3D Fourier space, where *k* is the customary symbol for representing spatial wavevectors, or *k*-vectors, that compactly denote both the propagation direction of plane waves and the wavelength or wavenumber via the *k*-vector’s length. Using principles from the field of Fourier optics, these *k*-vectors serve as a basis for decomposing more complicated waveforms, such as the different types of illumination strategies often employed in OCT (including focused Gaussian beams). As plane waves are the fundamental building blocks for analyzing more complicated systems, we utilize a principle that predicts how a plane wave interacts with a 3D object: the Fourier diffraction theorem, which was developed in the field of diffraction tomography to reconstruct a sample’s 3D refractive index (RI) distribution from a set of diffraction patterns resulting from plane wave illumination from multiple angles. Our *k*-space theoretical treatment builds upon prior excellent reviews and theoretical treatments of OCT, including works that analyze OCT in 3D *k*-space. However, we advance a unified and comprehensive *k*-space theory of OCT, encompassing a broader range of implementations of OCT and other coherent imaging techniques, as well as being the first one to incorporate speckle as a direct consequence of the band-pass nature of its transfer function. We note that this paper does not compare TD-OCT and FD-OCT or spectrometer-based FD-OCT and swept-source FD-OCT, as these topics have been treated extensively. Rather, from the point of view of *k*-space theory, the differences between these approaches are practical implementation details, insofar as they are different methods of measuring the same optical fields and therefore the same information about the sample. Note that the same cannot be said about point-scanning OCT, FF-OCT, and LF-OCT, which measure slightly different information about the sample, as will become clear. Towards a 3D *k*-space framework as a general theory of OCT =========================================================== Previous theoretical treatments and reviews of OCT (e.g., ) are typically centered around low-coherence interferometry, putting forth a 1D *k*-space model (in particular, a distorted version of *k**z* of our 3D *k*-space model; see Sec. [refocus]) that explains OCT’s axial resolving capabilities very accurately. The lateral resolution is then explained using beam focusing, separately from interferometry. While the separability of these explanations is a very good assumption for weakly-focused beams, here, we extend this picture with a fully wave-based, 3D *k*-space model that unites low coherence interferometry and beam focusing under one framework. This is achieved by examining OCT from a more fundamental perspective of the coherent interaction between a plane wave (or superpositions thereof) and a weakly scattering sample. This coherent interaction is straightforwardly visualized in 3D *k*-space via the Ewald sphere, which describes the information obtainable about a sample for a given wavelength and illumination direction, according to the Fourier diffraction theorem (see Sec. [FDT] below). In this work, we show that this *k*-space framework explains and highlights the interdependence among nearly all properties of OCT in a unified manner, particularly the source of contrast, the full 3D point-spread function (PSF) and transfer function (TF), the trade-off between the lateral resolution and depth of focus, and the origin of speckle. We also apply this common *k*-space framework to analyze and compare the TFs of the major implementations of OCT, specifically point-scanning OCT with a Gaussian beam and Bessel beam, LF-OCT, and FF-OCT, as well as coherent confocal microscopy and conventional holography as degenerate cases of OCT. All of these implementations can be thought of as special cases of diffraction tomography in reflection. Finally, we discuss the implications of this theoretical treatment on the limits of speckle reduction and resolution enhancement in OCT. [fig:born] First Born approximation ------------------------ In OCT, one typically makes a weakly or singly scattering assumption about the sample. A common interpretation is in terms of discrete photons and a sample composed of a discrete set of reflectors: an incident photon will interact with exactly one of the reflectors and ignore all the others. Since we are advancing a *k*-space framework, we need to understand this assumption in terms of waves. Thus, we turn to the inhomogeneous, time-independent wave equation (also known as the Helmholtz equation), (∇2 + *k*02*n**m*2)*u*(**r**) =  − *V*(**r**)*u*(**r**),  where *V*(**r**) = *k*02(*n*(**r**)2 − *n**m*2) is the sample’s scattering potential, which is directly related to its refractive index (RI) distribution *n*(**r**), **r** = (*x*, *y*, *z*) is the 3D spatial coordinate, *k*0 = 2*π*/*λ*0 is the vacuum wavenumber, and *n**m* is the background medium RI. Eq. [helmholtz] thus describes the propagation of a wave *u*(**r**) through a sample with a spatially varying scattering potential. As the wave equation only admits closed form solutions to all but the simplest RI distributions (e.g., uniform spheres), the solution in the general case is often obtained through iterative methods or through discrete approximations. One such iterative solution is the Born series, based on a recursive expansion of the Lippmann-Schwinger equation, the integral form of the Helmholtz differential equation (Eq. [helmholtz]). A more detailed explanation of the Born series and the wave equation is beyond the scope of this paper, but has been extensively treated in the literature. While the Born series in principle models the general case of multiple scattering, in practice it is unstable and has convergence issues. However, truncation of the Born series to only its first term yields a linear equation that permits an interpretable closed form solution given a plane wave illumination (Sec. [FDT]). This is known as the first Born approximation, which states that the emerging field is the superposition of the incident field, *u**i**n**c*(**r**), and the scattered field *u**s**c*(**r**): *u*(**r**) ≈ *u**i**n**c*(**r**) + *u**s**c*(**r**). We can now interpret the meaning of a “weakly scattering” or “singly scattering” sample in the context of OCT as the condition by which the first Born approximation is valid ; that is, *u**s**c* is much smaller than *u**i**n**c*. Note that even though Eq. [firstborn] is expressed in terms of the fields, the validity of the first Born approximation is a property of the sample, not the illumination. The first Born model is a reasonable assumption in OCT, as the backscattered signals are typically several orders of magnitude smaller than incident beam for most biological samples. Note that this assumption does not necessarily place a limit on the sample’s thickness, but rather the cumulative RI variation across the sample depth. For example, while a sample with very small RI variation can be thicker, a sample with high RI variation only satisfies the first Born approximation if it is thin. With enough cumulative RI variation, the sample becomes multiply scattering. Fig. [fig:born] illustrates this point intuitively with a multi-layer Born model, a multiple scattering model developed for diffraction tomography that divides the thick sample into layers within each of which the first Born approximation applies (with the caveat that the multi-layer Born model is not equivalent to the Born series, as the former does not consider bidirectional interaction among the layers). From this interpretation, in the multiply scattering case, the incident field on a deep layer within the sample has been aberrated through cumulative interactions with shallower RI variations. In fact, OCT is routinely operated outside of the first Born approximation, as evidenced by shadowing or attenuation at greater depths, which is not predicted by the first Born model. This suggests that OCT does not necessarily fail when the first Born assumption is broken, but rather as long as the field incident at a deep structure is not completely random (i.e., it is primarily forward scattering and therefore contains some memory of the incident field), one can still obtain depth-resolved measurements, at the cost of signal-to-noise ratio (SNR) due to inefficient back-coupling into the fiber, which acts as a spatial mode filter in the case of point-scanning OCT, or due to cross-talk in the case of FF-OCT. However, for the ensuing *k*-space theoretical treatment, we will assume validity of the first Born approximation. [fig:FDT] Contrast mechanism of OCT ------------------------- As light propagation throughout a sample is dictated by Eq. [helmholtz], the source of contrast in OCT is the spatially varying scattering
arxiv_0000181
A product formula for certain Littlewood-Richardson coefficients for Jack and Macdonald polynomials =================================================================================================== Jack polynomials generalize several classical families of symmetric polynomials, including Schur polynomials, and are further generalized by Macdonald polynomials. In 1989, Richard Stanley conjectured that if the Littlewood-Richardson coefficient for a triple of Schur polynomials is 1, then the corresponding coefficient for Jack polynomials can be expressed as a product of weighted hooks of the Young diagrams associated to the partitions indexing the coefficient. We prove a special case of this conjecture in which the partitions indexing the Littlewood-Richardson coefficient have at most 3 parts. We also show that this result extends to Macdonald polynomials. Introduction ============ *Jack polynomials* *J**λ*(*α*; *x*) are a one parameter family of symmetric functions indexed by an integer partition *λ*. They were first introduced by Henry Jack in 1969 as generalizations of spherical functions over GL(*n*, F)/U(*n*, F), where *α* = 1/2, 1, 2 correspond to the cases of F = H, C, R. Jack polynomials can be characterized in several ways. They appear as simultaneous eigenfunctions of certain Laplace-Beltrami type differential operators. In addition, they form an orthogonal basis for the ring of symmetric functions over the field of rational functions in *α*. Jack polynomials were further generalized in 1988 by Macdonald polynomials *J**λ*(*q*, *t*; *x*), which are a two parameter family of polynomials that reduce to Jack polynomials under a special limit. The *α* = 1 specialization gives us scalar multiples of the well-known Schur polynomials, which play a central role in the representation theory of the symmetric group *S**n* as well as that of GL(*n*, C). These polynomials are also indexed by partitions, and can be described combinatorially in terms of Young tableaux. Moreover, the coefficients that arise when a product of two Schur functions is decomposed into a sum of Schur functions have a combinatorial description known as the Littlewood-Richardson Rule (see ), given by counting the number of skew tableaux of a certain type. These Littlewood-Richardson coefficients also appear in various other fields outside of representation theory, such as in the study of Grassmanians and sums of Hermitian matrices (see ). It is a continuing area of interest to find appropriate generalizations of these results for Schur polynomials in the context of Jack and Macdonald polynomials. Various works establish several combinatorial properties of these polynomials and conjecture others. It is also possible to compute Littlewood-Richardson coefficients for such polynomials (see ), but currently there are no formulas for these coefficients in the style of the Littlewood-Richardson rule. In this work, we prove a special case of one of Richard Stanley’s conjectures which proposes a combinatorial description for certain Littlewood-Richardson coefficients for Jack polynomials in terms of a choice of upper and lower hooks (see Section [ssec:stan]). In particular, this conjecture directly generalizes the Littlewood-Richardson rule for triples of partitions (*λ*, *μ*, *ν*) such that the corresponding coefficient for Schur polynomials indexed by this triple is 1. Moreover, this conjecture implies that the coefficient (under a certain explicit normalization) is a polynomial in *α* that can be written as a product of linear factors with positive integer coefficients. However, one of the main difficulties in proving this conjecture is that although it asserts that it is possible to write the coefficients as a product of some upper and lower hooks, it is not known how to make an appropriate choice of hooks. Also, while previous results, such as those in, already present useful combinatorial descriptions for these coefficients, there are currently no formulas that prove Stanley’s conjecture or even show the positivity of these coefficients. Here we prove that this conjecture is true when the partitions in the triple (*λ*, *μ*, *ν*) are restricted to having at most 3 parts (Theorem [thm:main]), and we extend this result to coefficients for Macdonald polynomials as well (Theorem [thm:macmain]). We also show that the hooks can be chosen such that they preserve a convenient additional constraint which allows us to encode the coefficients much more simply in terms of a system of numbers we call *division numbers* (defined in Section [ssec:div]). In order to prove these assertions, we first divide the problem into several cases, and then present experimentally obtained formulas in terms of division numbers for the coefficients in each case of our classification. It turns out that in each of these cases, the verification of the formula uses one of two main lemmas (Lemmas [lem:col] and [lem:row]), giving us a unifying underlying structure. In Section [sec:prelim], we provide some background about the combinatorics of partitions and symmetric functions. In Section [sec:stan], we give a precise statement of Stanley’s conjecture for Littlewood-Richardson coefficients of Jack polynomials and state our main theorem. We classify all the partitions that satisfy the hypotheses of Stanley’s conjecture in Section [sec:class]. Then, in Section [sec:proof], we present the division number formulas, main lemmas, and a proof of the main theorem. In Section [sec:macd], we extend our result from coefficients for Jack polynomials to coefficients for Macdonald polynomials. Finally, we describe some ongoing work and further directions relating to our results in Section [sec:future]. Preliminaries ============= In this section, we present some basic definitions and background information pertaining to the theory of partitions and symmetric functions. We refer the reader to for a more detailed treatment of this material. Partitions ---------- A *partition* *λ* is a sequence (*λ*1, *λ*2, …, *λ**n*) of non-negative integers listed in weakly decreasing order: *λ*1 ≥ *λ*2 ≥ ⋯ ≥ *λ**n* ≥ 0. Each nonzero *λ**i* is called a *part* of *λ*. We will sometimes write a partition *λ* in the form (*i*1*m*1, *i*2*m*2, …, *i**k**m**k*), where *i**j**m**j* denotes *m**j* parts equal to *i**j*. We call *m**j* the *multiplicity* of *i**j* in *λ*. The *length* ℓ(*λ*) of a partition *λ* is the number of parts of *λ*. Let P*n* denote the set of partitions of length at most *n*. We think of *λ* ∈ P*n* as an *n*-tuple, with *λ**i* = 0 for *i* > ℓ(*λ*). The *weight* ∣*λ*∣ of *λ* is the sum of its parts: ∣*λ*∣ = *λ*1 + *λ*2 + ⋯ + *λ**n*. If ∣*λ*∣ = *n*, then we say *λ* is a *partition of *n**. Given any two partitions *λ* and *μ*, we can define *λ* + *μ* as the partition obtained by taking the sum of *λ* and *μ* as sequences: (*λ* + *μ*)*i* = *λ**i* + *μ**i*. Given two partitions *λ*, *μ* of *n*, we say *μ* ≤ *λ* if for all *i* ∈ {1, …, *n*}, *μ*1 + … + *μ**i* ≤ *λ*1 + … + *λ**i*. The relation  ≤  defines a partial order, known as the *dominance order*, on the set of all partitions of *n*. Partitions are commonly represented diagramatically. The *Young diagram* of a partion *λ* is a left justified array of boxes such that there are *λ**i* boxes in row *i*. (We will use the same symbol *λ* to denote both the partition and its Young diagram.) Let *λ* = (5, 2, 2, 1). Then the corresponding Young diagram is: $$\yng(5,2,2,1)$$ [ex:young] The *conjugate* *λ*ʹ of a partition *λ* is the partition whose diagram is the transpose of the diagram of *λ*, where the transpose is obtained by reflecting across the main diagonal and thus interchanging rows and columns. If *λ* = (5, 2, 2, 1) (as in Example [ex:young]), then the transpose of its Young diagram is: $$\yng(4,3,1,1,1)$$ and so *λ*ʹ = (4, 3, 1, 1, 1). We say *λ* ⊃ *μ* if the diagram of *λ* contains the diagram of *μ*. Let *λ* − *μ* be the set theoretic difference between the two diagrams, which we call a *skew diagram*. If *λ* = (5, 2, 2, 1) and *μ* = (3, 2, 1), then the skew diagram *λ*/*μ* is denoted by the marked boxes in the diagram below: $$\young(~~~\bullet\bullet,~~,~\bullet,\bullet).$$ [e:skewd] If the skew diagram consists of *r* = ∣*λ*∣ − ∣*μ*∣ boxes and has at most one box in each column (respectively, row), we refer to it as a *horizontal *r*-strip* (respectively *vertical *r*-strip*). In Example [e:skewd], *λ*/*μ* is a horizontal 4-strip. However, it is not a vertical strip since the first row of the skew diagram contains two boxes. A *skew tableau* *T* is obtained by filling each box of a skew diagram *λ*/*μ* with a positive number, where *λ* − *μ* is called the *shape* of *T*. If *m**i* denotes the number of times *i* appears in the skew tableau, we say (*m*1, …, *m**r*) is the *weight* of the *T*, and the *word* *w*(*T*) of *T* is the sequence obtained by reading the entries of *T* from right to left in each row. Let *T* be the skew tableau given by $$\young(~~~12,~1133,12,3)$$ Then: * the shape of *T* is (5, 5, 2, 1) − (3, 1). * the weight of *T* is (4, 2, 3). * the word of *T* is *w*(*T*) = (2, 1, 3, 3, 1, 1, 2, 1, 3). A skew tableau *T* is said to be *semistandard* if the entries of *T* weakly increase across rows (from left to right) and strongly increase down columns. We say that *T* satisfies the *Yamanouchi word condition* if the number of occurrences of an integer *i* never exceeds the number of occurrences of *i* − 1 for any initial segment of *w*(*T*). A *Littlewood-Richardson* tableau is a semistandard skew tableau *T* that satisfies the Yamanouchi word condition. The skew tableau $$\young(~~~11,~112,23)$$ is a Littlewood-Richardson tableau. We will call any filling of a skew diagram that gives a Littlewood-Richardson tableau an *LR filling*. Symmetric Functions ------------------- Let Z[*x*1, …, *x**n*] denote the ring of polynomials in *n* independent variables *x*1, …, *x**n* with integer coefficients. Let *S**n* be the symmetric group on *n* letters. Then *S**n* acts on Z[*x*1, …, *x**n*] by permuting the variables, and a polynomial is called *symmetric* if it is unchanged under this action. The symmetric polynomials form a subring: Λ*n* = Z[*x*1, …, *x**n*]*S**n*. For each *α* = (*α*1, …, *α**n*) ∈ N*n* we can define the monomial *x**α* = *x*1*α*1⋯*x**n**α**n*. Then we can define the *monomial symmetric function* *m**λ*, where *λ* is a partition of length at most *n*, by *m**λ*(*x*1, …, *x**n*) = ∑*α* ∈ *S**n* ⋅ *λ**x**α*,  where *S**n* ⋅ *λ* is the orbit of *λ* under the action of *S**n*. The monomial symmetric functions form a Z-basis for Λ*n*. For a partition *λ*, we can also define the skew-symmetric polynomial *a**λ* by *a**λ*(*x*1, …, *x**n*) = ∑*w* ∈ *S**n**ε*(*w*)*x**w*(*λ*),  where *ε*(*w*) is the sign of the permutation *w* ∈ *S**n*. Let *δ* be the partition (*n* − 1, *n* − 2, …, 1, 0). Then *a**λ* + *δ* is divisible by *a**δ*, and the quotient $$s\_{\lambda}(x\_1,\ldots,x\_n) = \frac{a\_{{\lambda}+\delta}}{a\_{\delta}},$$ called the *Schur polynomial*, is a symmetric function. The *s**λ*,  ℓ(*λ*) ≤ *n* also form a basis for Λ*n*. Schur polynomials appear as spherical functions over GL(*n*, C)/U(*n*, C). Spherical functions over GL(*n*, F)/U(*n*, F) are further generalized by Jack polynomials *J**λ*(*α*; *x*1, …, *x**n*), where *α* = 1/2, 1, 2 correspond to the case of F = H, C, R, respectively. To define Jack polynomials, we must first define the operator *D*(*α*) on Λ ⊗ Q(*α*) by $$D({\alpha}) = \frac{{\alpha}}{2} \sum\_i x\_i^2 \frac{\partial^2}{\partial x\_i^2} + \sum\_{i \neq j} \frac{x\_i^2}{x\_i-x\_j} \frac{\partial}{\partial x\_i}.$$ Then *D*(*α*) is upper triangular on the basis of monomial symmetric functions *m**λ*, ie *D*(*α*)*m**λ* = ∑*μ* ≤ *λ**b**λ*, *μ**m**μ*. The *monic* Jack polynomials *P**λ* = *P**λ*(*α*; *x*1, …, *x**n*) = ∑*μ* ≤ *λ**v**λ*, *μ**m**μ* are the eigenfunctions of *D*(*α*) such that *v**λ*, *λ* = 1. Note that *P**λ*(1) = *s**λ*. We will also find it convenient to consider the following scalar multiples of *J**λ*: The *integral* Jack polynomials *J**λ* = *J**λ*(*α*; *x*1, …, *x**n*) = ∑*μ* ≤ *λ**v**λ*, *μ**m**μ* are the eigenfunctions of *D*(*α*) such that if ∣*λ*∣ = *m*, then *v**λ*, (1*m*) = *m*!. Jack polynomials are further generalized by Macdonald polynomials which are eigenfunctions of the operator *D*(*q*, *t*) on Λ ⊗ Q(*q*, *t*) defined by: $$D(q,t) = \sum\_i \left(\prod\_{i \neq j} \frac{tx\_i - x\_j}{x\_i - x\_j} T\_{q,i}\right),$$ where *T**q*, *i**f*(*x*1, …, *x**n*) = *f*(*x*1, …, *q**x**i*, …, *x**n*). Then, once again, *D*(*q*, *t*)*m**λ* = ∑*μ* ≤ *λ**b**λ*, *μ**m**μ*. The Macdonald polynomials *P**λ* = *P**λ*(*q*, *t*; *x*1, …, *x**n*) = ∑*μ* ≤ *λ**v**λ*, *μ**m**μ* are the eigenfunctions of *D*(*q*, *t*) such that *v**λ*, *λ* = 1. We can recover the Jack polynomials from the Macdonald polynomials by taking the limit as *q*, *t* go to 1, where the parameter *α* signifies the direction along which this limit is taken. Thus, lim*t* → 1*P**λ*(*t**α*, *t*) = *P**λ*(*α*). The Littlewood-Richardson Rule ------------------------------ Schur functions can be interpreted combinatorially, by the following theorem. *s**λ* = ∑*T**x**θ*(*T*),  where *T* is a tableau of shape *λ*, and *θ*(*T*) is the weight of *T*. *s*(2, 1) ∈ Λ3 :  $$\young(11,2)\quad\young(11,3)\quad\young(12,2)\quad\young(12,3)\quad\young(13,2)\quad\young(13,3) \quad\young(22,3)\quad\young(23,3)$$ *s*(2, 1) = *x*12*x*2 + *x*12*x*3 + *x*1*x*22 + 2*x*1*x*2*x*3 + *x*1*x*32 + *x*22*x*3 + *x*2*x*32 This also leads to a way of combinatorially interpreting the coefficients that appear when a product of Schur polynomials is expanded as a sum of Schur polynomials. This was developed using two major results. We start with a theorem that tells us how to expand such a product when one of the polynomials in the product is indexed by a partition of length 1. [Pieri Rule] *s**μ**s*(*r*) = ∑*λ**s**λ*,  where *λ*/*μ* is a horizontal *r*-strip. [thm:spieri] *μ* = (3, 1), *r* = 2 $$\young(~~~11,~)\quad\young(~~~1,~1)\quad\young(~~~1,~,1)\quad\young(~~~,~11)\quad\young(~~~,~1,1)$$ *s*(3, 1)*s*(2) = *s*(5, 1) + *s*(4, 2) + *s*(4, 1, 1) + *s*(3, 3) + *s*(3, 2, 1) Note that we can also consider the transpose of each of the indexing partitions, to get a way of multiplying two Schur polynomials when one of them is indexed by a partition consisting of a single column. Finally, we can extend this result to products of two Schur polynomials indexed by general partitions. This is done using the Littlewood-Richardson rule. [Littlewood-Richardson Rule] *s**μ**s**ν* = ∑*λ**c**μ*, *ν**λ**s**λ*,  where *c**μ*, *ν**λ* is the number of Littlewood-Richardson tableaux *T* of shape *λ*/*μ* and weight *ν*. *μ* = (2, 1), *ν* = (2, 1), *λ* = (3, 2, 1) $$\young(~~1,~1,2)~\young(~~1,~2,1)~\young(~~2,~1,1)$$ (0,0) –(1,2); *c*(2, 1), (2, 1)(3, 2, 1) = 2 Stanley’s Conjecture ==================== Statement of Conjecture ----------------------- We wish to generalize the Littlewood-Richardson rule to obtain a description of the coefficients that appear when a product of Jack or Macdonald polynomials is expanded as a sum of the respective polynomials. While it is possible to compute these coefficients recursively (see ), there is currently no combinatorial result that clearly reduces to the Littlewood-Richardson rule as we take the appropriate limit of the Jack or Macdonald polynomials to recover the corresponding Schur polynomials. However, in, Stanley made some observations and conjectures that give us some steps towards this goal. While Stanley discusses only the case of Jack polynomials in his paper, all results can be generalized to Macdonald polynomials as well. In order to state Stanley’s Conjecture, we must first define the hook length for a box in a Young diagram and some of its analogues. The *hook-length* *h**λ*(*b*) of a box *b* in the partition *λ* is obtained by counting all the boxes to the right of *b* (called the *arm*, denoted *a**λ*(*b*)) and all the boxes below *b* (called the *leg*, denoted ℓ(*b*)) along with *b* itself. $$\begin{aligned} a\_{\lambda}(i,j) &= {\lambda}\_i - j \\ \ell\_{\lambda}(i,j) &= {\lambda}\_j' - i \\ h\_{\lambda}(i,j) &= a\_{\lambda}(i,j) + \ell\_{\lambda}(i,j) + 1\end{aligned}$$ *λ* = (5, 2, 2, 1), *b* = (1, 2) $$\young(~\times---,~|,~|,~)$$ *h**λ*(*b*) = 3 + 2 + 1 = 6 We can define 2 *α*-generalizations of *h**λ*(*b*): * *upper hook-length*: *h**λ*\*(*b*) = *α*(*a*(*b*) + 1) + ℓ(*b*) * *lower hook-length*: *h*\**λ*(*b*) = *α*(*a*(*b*)) + ℓ(*b*) + 1 In effect, the upper hook treats the corner box as part of the arm, whereas the lower treats it as part of the leg. We also define the following products of hook lengths: $$\begin{aligned} H^{\lambda}\_\* &= \prod\_{b \in {\lambda}} h^{\lambda}\_\*(b) \\ H\_{\lambda}^\* &= \prod\_{b \in {\lambda}} h\_{\lambda}^\*(b) \\ j\_{\lambda}&= H^{\lambda}\_\* \cdot H\_{\lambda}^\* \end{aligned}$$ Then we can relate the integral and the monic Jack polynomials as follows: *J**λ*(*α*) = *H*\**λ**P**λ*(*α*). We can also define the dual *J**λ*\*(*α*) of *J**λ*(*α*) under the canonical inner product by: *J**λ*\*(*α*) = *j**λ*− 1*J**λ*(*α*). Finally, we consider the following expansions: *P**μ**P**ν* = ∑*λ**c**μ**ν**λ*(*α*)*P**λ*. $$\begin{aligned} J\_\mu J\_\nu &= \sum\_{\lambda}g^{{\lambda}}\_{\mu\nu}({\alpha}) J^\*\_{\lambda}, \\ P\_\mu P\_\nu &= \sum\_{\lambda}c^{{\lambda}}\_{\mu\nu}({\alpha}) P\_{\lambda}.\end{aligned}$$ Then *g**μ**ν**λ*(*α*) = *H**λ*\**H*\**μ**H*\**ν**c**μ*, *ν**λ*(*α*). We are now ready to state Stanley’s conjecture. [Stanley, 1989] Given partitions *λ*, *μ*, *ν* such that *c**μ*, *ν**λ*(1) = 1, then for all *α*, *g**μ*, *ν**λ*(*α*) = (∏*b* ∈ *λ**h̃**λ*(*b*))(∏*b* ∈ *μ**h̃**μ*(*b*))(∏*b* ∈ *ν**h̃**ν*(*b*)),  where each *h̃**ξ*(*b*) is either *h**ξ*\*(*b*) or *h*\**ξ*(*b*). Moreover, we can choose these hooks such that there is an equal number of upper and lower hooks. [conj:stan] Unfortunately, while this conjecture states that such a choice is always possible, there is no canonical way to make such a choice, and no conjecture for an assignment that might work in general. In fact, as Stanley himself notes in, there is often more than one assignment of upper and lower hooks that would satisfy this conjecture. In particular, he presents the following example, computed by Philip Hanlon. *λ* = (2, 2, 2, 1, 1), *μ* = (2, 1, 1), *ν* = (2, 1, 1) $$\young(ll,ll,u?,l,?) \qquad \young(u?,l,?) \qquad \young(u?,l,?)$$ Of the 6 boxes marked ``?", 5 must be taken to be upper hooks and 1 to be a lower hook, so there are 6 possible ways to obtain the correct coefficient. Since we can get *c**μ*, *ν**λ* by dividing *g**μ*, *ν**λ* by all the upper hooks in *λ* and all the lower hooks in *μ* and *ν*, we will call such hooks *standard hooks* and boxes assigned to have standard hooks in Equation [eq:stan] to be *standard boxes*. On the other hand, we will call lower hooks in *λ* and upper hooks in *μ* and *ν* *flipped hooks* and boxes with such an assignment in Equation [eq:stan] *flipped boxes*. If *g**μ*, *ν**λ*(*α*) is given by a product of only standard hooks, then *c**μ*, *ν**λ*(*α*) = 1 for all *α*. In general, *c**μ*, *ν**λ*(*α*) can be regarded as a product over flipped boxes of the ratio of the flipped hook to the standard hook. When *α* = 1, the upper and lower hooks have the same value, and so any such product reduces to 1, in agreement with the hypothesis *c**μ*, *ν**λ*(1) = 1. We will call any triple (*λ*, *μ*, *ν*) of partitions that satisfy the hypothesis *c**μ*, *ν**λ*(1) = 1 a *minimal triple*. Such triples correspond to the case of a unique Littlewood-Richardson tableau of shape *λ* − *μ* with weight *ν*, but it remains difficult to generate all such triples in general. Minimal triples lie on the boundary of Horn cones, which are given by the eigenvalues of Hermitian matrices *A*, *B*, *C* such that *A* + *B* + *C* = 0. (However, note that not all boundary triples are minimal.) Minimal triples also play a prominent role in Fulton’s conjecture, which states that a minimal triple remains minimal under a scaling of all three partitions by the same factor. (A proof of Fulton’s conjecture is given by Knutson, Tao and Woodward in.) Main Theorem ------------ In this work, we prove the following special case of Stanley’s conjecture. Stanley’s conjecture is true for *λ*, *μ*, *ν* ∈ P3. [thm:main] We will show this by first classifying all minimal triples of partitions in P3, which we do in Section [sec:class]. We thus divide the problem into several cases and develop an experimental formula in the form of Equation [eq:stan] for *c**μ*, *ν**λ* in each case. A complete list of these is given in Section [sec:div]. In Section [sec:ver], we verify that our experimental formulas indeed give the correct coefficient, thus completing the proof of Theorem [thm:main]. In Section [sec:macd], we extend this theorem to get Theorem [thm:macmain], which shows that the coefficient for the corresponding Macdonald polynomials can also be obtained for minimal triples of partitions in P3 using the same system of upper and lower hook assignments using a suitable generalization of hook-lengths. The Pieri Rule for Jack Polynomials ----------------------------------- By Theorem [thm:spieri], we see that if *ν* consists of a single row (or column), (*λ*, *μ*, *ν*) must be a minimal triple. In fact, we have an analogue of this theorem that gives a proof of Stanley’s conjecture when *ν* falls into this special case. [Pieri Rule for columns ] If *λ*/*μ* is a vertical *r*-strip and *ν* = (1*r*), then $${c^{{\lambda}}\_{\mu,\nu}}({\alpha}) = \prod\_{s \in X({\lambda}/\mu)} \frac{h^{{\lambda}}\_\*(s)}{h\_{{\lambda}}^\*(s)} \frac{h^{\*}\_\mu(s)}{h\_{\*}^\mu(s)},$$ where *X*(*λ*/*μ*) denotes all the boxes (*i*, *j*) ∈ *μ* such that *μ**i* = *λ**i* and *μ**j*ʹ < *λ**j*ʹ. [thm:jpieri] *λ* = (4, 2, 2), *μ* = (3, 2, 1), *ν* = (1, 1) $$\young(uuuu,ul,uu) \qquad \young(lll,lu,l) \qquad \young(l,l)$$ $${c^{{\lambda}}\_{\mu,\nu}}= \frac{2{\alpha}}{1+{\alpha}}$$ *g**μ*, *ν**λ* = 32*α*5(3 + 2*α*)(1 + 2*α*)2(2 + *α*)2(2 + 3*α*) [e:jpieri] We define $$b\_{\lambda}({\alpha})=\frac{H^{{\lambda}}\_\*({\alpha})}{H^\*\_{\lambda}({\alpha})}.$$ Thus, we can think of *b**λ*(*α*) as an operator that switches upper and lower hooks. This gives us the following equation: $$\displaystyle c^{{\lambda}'}\_{\mu',\nu'}\left(\frac{1}{{\alpha}}\right) = \frac{{c^{{\lambda}}\_{\mu,\nu}}({\alpha}) b\_\mu({\alpha}) b\_\nu({\alpha})}{b\_{\lambda}({\alpha})}. \label{eq:transpose}$$ Therefore, if *λ*, *μ*, *ν* is a minimal triple and we transpose all 3 partitions, the resulting Littlewood-Richardson coefficient corresponds to swapping all the upper and lower hooks. This allows us to use the Pieri rule for columns as a rule for rows as well. We consider the triple obtained by transposing the partitions in Example [e:jpieri]: *λ* = (3, 3, 1, 1), *μ* = (3, 2, 1), *ν* = (2) $$\young(lll,lul,l,l) \qquad \young(uuu,ul,u) \qquad \young(uu)$$ $${c^{{\lambda}}\_{\mu,\nu}}= \frac{16{\alpha}^2(1+2{\alpha})}{3(1+{\alpha})^4}$$ *g**μ*, *ν**λ* = 32*α*5(2 + 3*α*)(1 + 2*α*)2(2 + *α*)2(3 + 2*α*) Classification ============== We present a classification of all minimal triples (*λ*, *μ*, *ν*) consisting of partitions in P3. In particular, we show that such triples correspond to each face of co-dimension one of the *n* = 3 Horn cone (see ). It turns out that this correspondence is no longer true if we allow partitions of greater length, in which case minimal triples form a proper subset of the triples that lie on boundary faces of the associated Horn cone. Horn’s Inequalities ------------------- Horn cones were defined by to answer the following problem: given two *n* × *n* Hermitian matrices *A* and *B* with eigenvalues *μ* = (*μ*1, …, *μ**n*) and *ν* = (*ν*1, …, *ν**n*) (arranged in weakly decreasing order), we wish to determine the possible eigenvalues *λ* = (*λ*1, …, *λ**n*) of the sum *C* = *A* + *B*. Horn conjectured a list of inequalities involving *λ*, *μ*, *ν* that, together with the condition ∣*λ*∣ = ∣*μ*∣ + ∣*ν*∣, determine all possible combinations. These inequalities were verified by the works of Klyachko and of Knutson and Tao, which also show that the Littlewood-Richardson coefficient *c**μ*, *ν**λ* is nonzero if and only if (*λ*, *μ*, *ν*) lie in the Horn cone H*n*. Later, Knutson, Tao and Woodward determined the minimal necessary list of such inequalities that determines this cone. Using this list of inequalities for H3, we have that the Littlewood-Richardson coefficient *c**μ*, *ν**λ* is nonzero if the partitions *λ*, *μ*, *ν* ∈ P3 are such that ∣*λ*∣ = ∣*μ*∣ + ∣*ν*∣, and they satisfy all of the inequalities in Table [tab:horn] below. [h!] 3 1. *μ*3 ≤ *μ*2 2. *μ*2 ≤ *μ*1 3. *ν*3 ≤ *ν*2 4. *ν*2 ≤ *ν*1 5. *λ*3 ≤ *λ*2 6. *λ*2 ≤ *λ*1 7. *λ*1 ≤ *μ*1 + *ν*1 8. *λ*2 ≤ *μ*1 + *ν*2 9. *λ*2 ≤ *μ*2 + *ν*1 10. *λ*3 ≤ *μ*1 + *ν*3 11. *λ*3 ≤ *μ*2 + *ν*2 12. *λ*3 ≤ *μ*3 + *ν*1 13. *λ*3 ≥ *μ*3 + *ν*3 14. *λ*2 ≥ *μ*3 + *ν*2 15. *λ*2 ≥ *μ*2 + *ν*3 16. *λ*1 ≥ *μ*3 + *ν*1 17. *λ*1 ≥ *μ*2 + *ν*2 18. *λ*1 ≥ *μ*1 + *ν*3 [tab:horn] It is known that minimal triples (*λ*, *μ*, *ν*) all lie on a union of some faces of the Horn cone (see ). We will refer to a face of codimension one as a *facet*. Since each facet is obtained by changing one of the defining inequalities to an equality, for H3, we will refer to each facet by the same number as the corresponding inequality as in Table [tab:horn] above. In general, not every facet of H*n* contains minimal triples. However, this does hold for H3, and so one can check triples (*λ*, *μ*, *ν*) on the interior of each face, and determine that every single facet does indeed give a minimal triple. In the next section, we present a direct combinatorial proof of this fact. Littlewood-Richardson Tableaux ------------------------------ We will show that each facet of H3 contains minimal triples by classifying the possible Littlewood-Richardson tableaux of shape *λ*/*μ* of weight *ν* in the case that *λ*, *μ*, *ν* ∈ P3. The cases presented in this proof were also used to determine the experimentally obtained formulas for *c**μ*, *ν**λ*(*α*) presented in Section [sec:div]. For partions *λ*, *μ*, *ν* ∈ P3, we have *c**μ*, *ν**λ*(1) = 1 if and only if *λ*, *μ*, *ν* lie on a facet of the Horn cone H3. [thm:horn] A skew diagram of shape *λ*/*μ* consists of at most three rows. Therefore, if *ν* has length 3, then any LR filling of *λ*/*μ* of weight *ν* must consist of at least *ν*3 occurrences of *i* in row *i*. We therefore only need to consider the remaining boxes, and we can thus assume, without loss of generality, that *ν* has length at most 2. By symmetry, we can also assume the same for *μ*. Now let *T* be a Littlewood-Richardson tableau of shape *λ*/*μ* with weight *ν*. Then *w*(*T*) must be a sequence of 1’s and 2’s of the form (1*a*1, 2*b*2, 1*b*1, 2*c*2, 1*c*1),  where *i**m* denotes *m* consecutive occurrences of *i*. In order to satisfy the Yamanouchi word condition, we must require that *a*1 ≥ *b*2 and *a*1 + *b*1 ≥ *b*2 + *c*2. For instance, if *T*1 is the following diagram: $$\young(~~~~~~111,~~~1122,1222),$$ then *w*(*T*1) = (13, 22, 12, 23, 11). Note, however, that in this case, a filling of this skew diagram of weight (6, 5) is not unique. We must therefore determine which restrictions on the set (*a*1, *b*2, *b*1, *c*2, *c*1) of multiplicities in *w*(*T*) lead to a minimal triple (*λ*, *μ*, *ν*). First, suppose every column in *λ*/*μ* consists of a single box, so that *λ*/*μ* is a horizontal ∣*ν*∣-strip: $$\young(~~~~1,~~12,12).$$ In order to have a unique LR filling, either *b*2 = 0 (type B) or *c*1 = 0 (type C). To see this, consider the case of an LR filling in which both *b*2 and *c*1 are nonzero, as in the diagram above. Then the last 1 in the third row can be swapped with the first 2 in the second row to get another LR filling, $$\young(~~~~1,~~11,22)$$ so *c**μ*, *ν**λ*(1) must be greater than 1 in this case. However, as this second diagram illustrates, requiring that the filling be of type B or C is not sufficient to give a minimal triple, even though it is a necessary condition. Specifically, in the absence of any additional restrictions, it may be possible to swap a 2 in the third row with a 1 in the second row. Therefore, for each type, B or C, we require one of the following restrictions: 1. *c*2 = 0 2. *b*1 = 0 3. *a*1 = *b*2 4. *a*1 + *b*1 = *b*2 + *c*2. Conditions I and II remove one of the quantities that would have been involved in such a swap to get a new LR filling with the same weight. Conditions III and IV imply that any such swap would violate the Yamanouchi word condition, since the swap would have the effect of increasing *b*2 while leaving *a*1, *b*1 and *c*2 unchanged. Finally, we consider the case in which *λ*/*μ* is no longer necessarily a horizontal strip. Then every column in the skew diagram could have up to two boxes, and whenever it does contain two boxes, the filling must be a 1 in the upper box and a 2 in the lower box. We could have an overlap between the first and second rows (denoted by type *o*1) or an overlap between the second and third rows (denoted type *o*2). In the case that *o**i* does not occur, we denote the number of columns in the gap between the rows of the skew diagram by *g**i*. Thus, we have 32 cases in all (type B or C, type I-IV, type *o*1 or *g*1, and type *o*2 or *g*2). [h!] |l||c|c|c|c| Type & *g*1*g*2 & *g*1*o*2 & *o*1*g*2 & *o*1*o*2 B.I & (3) & (11) & (8) & (16) B.II & (15) & (5) & (2) & (10) B.III & (18) & (18) & (6) & (6) B.IV & (12) & (12) & (17) & (17) C.I & (13) & (1) & (13) & (1) C.II & (7) & (14) & (7) & (14) C.III & (9) & (9) & (9) & (9) C.IV & (4) & (4) & (4) & (4) [tab:class] We will use *o**i* and *g**i* not only as a label for each type, but also a count (analogous to *a*1, *b**i*, *c**i*) of the number of overlapping columns in the skew diagram, or the number of columns in the gap between rows of the skew diagram. Therefore, in general, the parts of *λ*, *μ*, *ν* are given by: $$\begin{aligned} \nu\_1 &= a\_1+b\_1+c\_1+o\_1+o\_2+\nu\_3 \\ \nu\_2 &= b\_2+c\_2+o\_1+o\_2+\nu\_3 \\ \mu\_1 &= b\_1+b\_2+g\_1+o\_2+c\_1+c\_2+g\_2+\mu\_3 \\ \mu\_2 &= c\_1+c\_2+g\_2+\mu\_3 \\ {\lambda}\_1 &= b\_1+b\_2+g\_1+o\_2+c\_1+c\_2+g\_2+a\_1+o\_1+\mu\_3+\nu\_3 \\ {\lambda}\_2 &= b\_1+b\_2+o\_2+c\_1+c\_2+g\_2+o\_1+\mu\_3+\nu\_3 \\ {\lambda}\_3 &= o\_2+c\_1+c\_2+\mu\_3+\nu\_3 \end{aligned}$$ Therefore, each of the 32 cases corresponds to a restriction on the partitions *λ*, *μ*, *ν*. For instance, B.I.*g*1*g*2 means that *b*2 = *c*2 = *o*1 = *o*2 = 0, and therefore *ν*2 = *ν*3. Similarly, B.II.*g*1*o*2 means that *b*2 = *c*2 = *o*1 = *g*2 = 0, and so we get that *μ*2 + *ν*2 = *λ*3. We give a complete list of restrictions in Table [tab:class], where each number refers to the facet of H3 determined by the correspondingly numbered Horn inequality above. We thus verify that each facet of H3 appears in this table, and therefore each must contain only minimal triples. Proof of Main Theorem ===================== Division Numbers ---------------- Every partition *λ* can be divided into rectangular *blocks* consisting of all columns of the same height. We will use *ω**i**λ* to denote the block ((*λ**i* − *λ**i* + 1)*i*). Then if ℓ(*λ*) = *n*, we can decompose *λ* as the sum *ω*1*λ* + *ω*2*λ* + ⋯ + *ω**n**λ* of all its blocks. Let *λ* = (6, 4, 2). The block *ω*3*λ* is highlighted in the Young diagram below. $$\young(\bullet\bullet~~~~,\bullet\bullet~~,\bullet\bullet)$$ We refer to each part of a block *ω**i**λ* as a *strip*. Thus, each strip consists of a row within a block. Let *λ* = (6, 4, 2). The strips (*ω*3*λ*)2 and (*ω*2*λ*)1 are highlighted in the Young diagram below. $$\young(~~\bullet\bullet~~,\bullet\bullet~~,~~)$$ It turns out that for a minimal triple (*λ*, *μ*, *ν*) of partitions in P3, it is possible to obtain *c**μ*, *ν**λ* by an assignment of upper and lower hooks in the corresponding diagrams such that within each strip, all the upper hooks that occur appear to the left of all the lower hooks that occur. (Note that a strip may contain only only upper hooks or only lower hooks.) We can thus encode the coefficient *c**μ*, *ν**λ* by a system of *division numbers*, which are numbers for each strip in *λ*, *μ*, *ν* indicating the transition point between upper and lower hooks. By convention, we use the division numbers to count the flipped hooks in each strip, ie the lower hooks in each strip of *λ* and the upper hooks in each strip of *μ* and *ν*. For each partition, we write the division numbers in a matrix style array, arranged in the same order (left to right, top to bottom) as the strip to which they correspond. Note that the division number symbols differ from matrices in that they contain no entries below the off-diagonal. Moreover, as we prove in Lemma [lem:3to2] below, all hooks in the blocks *ω*3*μ* and *ω*3*ν* can be taken to be lower hooks, corresponding to division numbers of 0 for all the strips in those blocks. Therefore, we will write the division numbers for *λ* within a 3 × 3 array and those for *μ* and *ν* within a 2 × 2 array. *λ* = (8, 7, 4), *μ* = (6, 3), *ν* = (5, 5) $$\young(ulllullu,uullull,ulll) \qquad \young(uuluuu,uul) \qquad \young(ullll,uuuul)$$ *c**μ*, *ν**λ* is encoded by the division numbers given as follows. $${\lambda}: \begin{bmatrix}3 & 2 & 0 \\ 2 & 2 \\ 3\end{bmatrix} \quad \mu: \begin{bmatrix}2 & 3\\2 \end{bmatrix} \quad \nu: \begin{bmatrix}1 & 0\\4 \end{bmatrix}.$$ Algebraic Structures -------------------- To compute *c**μ*, *ν**λ* from the division numbers, we require the following notation. Let *β* be a multiset. We regard *β* as the set of vanishing points (counted with multiplicity) of a polynomial. Therefore, let *ϕ*(*x*; *β*) be the smallest degree polynomial in *x* such that *ϕ*(*b*; *β*) = 0 for all nonzero *b* ∈ *β* and *ϕ*(0; *β*) = 1. In particular, we have: $$\phi(x;{\beta}) := \prod\_{b \in {\beta}, b \neq 0}\left(\frac{b-x}{b}\right).$$ Such polynomials give us a natural way to write the coefficients *c**μ*, *ν**λ*(*α*) for minimal triples. Given an upper hook *h**λ*\*, we can write the ratio of the corresponding lower hook to the upper hook as $$\frac{h^\*\_{\lambda}-({\alpha}-1)}{h^\*\_{\lambda}}.$$ Also, given a lower hook *h*\**μ*, we can write the ratio of the corresponding upper hook to the lower hook as $$\frac{-h^\mu\_\*-({\alpha}-1)}{-h^\mu\_\*}.$$ Thus, each *c**μ*, *ν**λ*(*α*) can we written as *ϕ*(*α* − 1; F(*λ*, *μ*, *ν*)), where F(*λ*, *μ*, *ν*) is the set of standards hooks of flipped boxes in *λ* and negatives of standard hooks of flipped boxes in *μ* and *ν*. We will also find it convenient to write our hooks in terms of *r* = 1/*α*. In this case, we can regard our hook-lengths as $$\begin{aligned} h^\*\_{\lambda}(b) &= a(b)+1 +\ell(b)r \\ h\_\*^{\lambda}(b) &= a(b) + (\ell(b)+1)r \end{aligned}$$ and we have that $$\phi({\alpha}-1;m{\alpha}+n) = \frac{(m-1){\alpha}+n+1}{m{\alpha}+n} = \frac{(m-1)+(n+1)r}{m+nr} = \phi(1-r;m+nr).$$ Succesive flipped *r*-hooks within a single strip differ by 1, and so we require an effective way to describe such products. To do this, we will first define the following notation: ⟨*x*; *a*⟩*j* = *ϕ*(*x*; {*a*, …, *a* + *j* − 1}). When *x* is fixed and clear from context, we will suppress it and simply write ⟨*a*⟩*j*. We will make use of two main identities involving such terms. For the first identity, observe that if *j* = *j*1 + *j*2, then $$\begin{aligned} {\left\langlea\right\rangle}\_{j\_1} {\left\langlea+j\_1\right\rangle}\_{j\_2} = {\left\langlea\right\rangle}\_j = {\left\langlea\right\rangle}\_{j\_2} {\left\langlea+j\_2\right\rangle}\_{j\_1}\end{aligned}$$ and so $$\frac{{\left\langlea\right\rangle}\_{j\_1}}{{\left\langlea+j\_2\right\rangle}\_{j\_1}} = \frac{{\left\langlea\right\rangle}\_{j\_2}}{{\left\langlea+j\_1\right\rangle}\_{j\_2}}. \label{eq:phid1}$$ For the second identity, note that if *a* + *b* = *x* then $$\left(\frac{a-x}{a}\right)\left(\frac{b-x}{b}\right) = \left(\frac{a-x}{a}\right)\left(\frac{-a}{x-a}\right) =1$$ and more generally that ⟨*a*⟩*j*⟨*b* − *j* + 1⟩*j* = 1,  where the *i*th term in the first product cancels with the (*j* − *i* + 1)th term in the second product, since *a* + (*i* − 1) + (*b* − *j* + 1) + (*j* − *i*) = *a* + *b* = *x*. Note that this is equivalent to saying that ⟨*a*⟩*j*⟨*b*⟩*j* = 1 whenever *a* + *b* = *x* − *j* + 1. Now note that such terms can be used to describe the product of flipped hooks within a single strip. We will use the notation: [*b*; *n*] = ⟨1 − *r*; *b* + 1⟩*n* = *ϕ*(1 − *r*; {*b* + 1, *b* + 2, …, *b* + *n*}). Let *h**λ**i**j* denote *h**λ*\*(*i*, 1) − *h**λ*\*(*j*, 1). Then given partitions *λ*, *μ*, *ν* and a set of division numbers n for each strip in these partitions, we define **d***μ*, *ν**λ*(n) to be the product: **d***μ*, *ν**λ*(n) = ∏*i* ≤ *j*[*h**λ**i**j*; *n**i**j**λ*] ⋅ [ − *h**μ**i**j*; *n**i**j**μ*] ⋅ [ − *h**ν**i**j*; *n**i**j**ν*],  where *n**i**j**λ* is the division number corresponding to the *i*\tiny th strip in *ω**j**λ*, and *n**i**j**ξ* is the division number corresponding to the *i*\tiny th strip in *ω**j* − 1*ξ* for *ξ* ∈ {*μ*, *ν*}. We will refer to the starting point *b* =  ± *h**ξ**i**j* in each term of the form [*b*; *n*] as the *anchor* for the corresponding strip. Using equation [eq:phid1], we can determine how changes to the anchors or division numbers affect **d***μ*, *ν**λ*(n). In particular, we have that $$\begin{aligned} \frac{[h^{ij}\_\xi;n^\xi\_{ij} - t]}{[h^{ij}\_\xi;n^\xi\_{ij}]} &= \frac{1}{[h^{ij}\_\xi+n^\xi\_{ij} - t;t]}, \label{eq:mod1} \\ \frac{[h^{ij}\_\xi+t;n^\xi\_{ij}]}{[h^{ij}\_\xi;n^\xi\_{ij}]} &= \frac{[h^{ij}\_\xi+n^\xi\_{ij};t]}{[h^{ij}\_\xi;t]}. \label{eq:mod2}\end{aligned}$$ Division Numbers for Minimal Triples in P3 ------------------------------------------ Let d*i**j**k* encode the quantity ∣*λ**i* + *μ**j* − *ν**k*∣, and let *ξ**i**j* denote *ξ**i* − *ξ**j* for any partition *ξ*. Let p be the positive part of *λ*3 − *μ*2 − *ν*3, so that p = *o*2 = max(*λ*3 − *μ*2 − *ν*3, 0). Finally, let *x*± = *x* ± p,  where *x* is either some d*i**j**k* or some *ξ**i**j*. We present a complete list of division number formulas below for minimal triples in P3. These formulas are grouped according to facets of the Horn cone defined by the inequalities in Table [tab:horn]. For each case, we present the division numbers for *λ*, *μ*, *ν*, and list the proposition in which this formula is verified. These propositions all appear in Section [sec:ver]. [h!]|l|l|ccc|l| &Case: & *λ*: & *μ*: & *ν*: & Prop. 1.&*μ*3 = *μ*2 & $\begin{bmatrix} {\mathfrak{d}}\_{333} & {\mathfrak{d}}\_{222} & 0 \\ {\mathfrak{d}}\_{333} & {\mathfrak{d}}\_{222} & \\ {\mathfrak{d}}\_{333} && \end{bmatrix}$ & $\begin{bmatrix} 0 & \mathfrak{d}\_{111}\\ 0 & \end{bmatrix}$ & $\begin{bmatrix} {\mathfrak{d}}\_{333} & {\mathfrak{d}}\_{222} \\ {\mathfrak{d}}\_{333} & \end{bmatrix}$ & [prop:horn1] &&&&& 2.& *μ*2 = *μ*1 & $\begin{bmatrix} {\mathfrak{d}}\_{111} & {\mathfrak{d}}\_{213} & 0 \\ {\mathfrak{d}}\_{222} & {\mathfrak{d}}\_{213} & \\ {\mathfrak{d}}\_{111} && \end{bmatrix}$ & $\begin{bmatrix} 0 & 0\\ {\mathfrak{d}}\_{333} & \end{bmatrix}$ & $\begin{bmatrix} {\mathfrak{d}}\_{213} & {\mathfrak{d}}\_{111} \\ {\mathfrak{d}}\_{213} & \end{bmatrix}$ & [prop:horn2] &&&&& 3.& *ν*3 = *ν*2 & $\begin{bmatrix} {\mathfrak{d}}\_{333} & {\mathfrak{d}}\_{222} & 0 \\ {\mathfrak{d}}\_{333} & {\mathfrak{d}}\_{222} & \\ {\mathfrak{d}}\_{333} && \end{bmatrix}$ & $\begin{bmatrix} {\mathfrak{d}}\_{333} & {\mathfrak{d}}\_{222} \\ {\mathfrak{d}}\_{333} & \end{bmatrix}$ & $\begin{bmatrix} 0 & \mathfrak{d}\_{111}\\ 0 & \end{bmatrix}$ & [prop:horn1] &&&&& 4.& *ν*2 = *ν*1 & $\begin{bmatrix} {\mathfrak{d}}\_{111} & {\mathfrak{d}}\_{231} & 0 \\ {\mathfrak{d}}\_{222} & {\mathfrak{d}}\_{231} & \\ {\mathfrak{d}}\_{111} && \end{bmatrix}$ & $\begin{bmatrix} {\mathfrak{d}}\_{231} & {\mathfrak{d}}\_{111} \\ {\mathfrak{d}}\_{231} & \end{bmatrix}$ & $\begin{bmatrix} 0 & 0\\ {\mathfrak{d}}\_{333} & \end{bmatrix}$ & [prop:horn2] &&&&& 5.& *λ*3 = *λ*2 & $\begin{bmatrix} {\mathfrak{d}}\_{111} & 0 & 0 \\ {\mathfrak{d}}\_{333} & 0 & \\ {\mathfrak{d}}\_{223} && \end{bmatrix} $ & $\begin{bmatrix} {\mathfrak{d}}\_{232} & {\mathfrak{d}}\_{223}\\ {\mathfrak{d}}\_{222} & \end{bmatrix}$ & $\begin{bmatrix} {\mathfrak{d}}\_{223} & {\mathfrak{d}}\_{232} \\ {\mathfrak{d}}\_{223} & \end{bmatrix}$ & [prop:la23] &&&&& 6.& *λ*2 = *λ*1 & $\begin{bmatrix} {\mathfrak{d}}\_{333} & {\mathfrak{d}}^-\_{322} & 0 \\ \mathfrak{p} & {\mathfrak{d}}\_{231} & \\ {\mathfrak{d}}\_{221} && \end{bmatrix}$ & $\begin{bmatrix} {\mathfrak{d}}^-\_{212} & \mu\_{12}\\ {\mathfrak{d}}\_{221} & \end{bmatrix}$ & $\begin{bmatrix} {\mathfrak{d}}^+\_{231} & {\mathfrak{d}}\_{221} \\ {\mathfrak{d}}\_{231} & \end{bmatrix}$ & [prop:la21] &&&&& 7.& *λ*1 = *μ*1 + *ν*1 & $\begin{bmatrix} 0 & 0 & 0 \\ {\mathfrak{d}}\_{333} & 0 & \\ {\mathfrak{d}}\_{333} && \end{bmatrix}$ & $\begin{bmatrix} 0 & 0\\ {\mathfrak{d}}\_{333} & \end{bmatrix}$ & $\begin{bmatrix} 0 & 0 \\ {\mathfrak{d}}\_{333} & \end{bmatrix}$ & [prop:nu1] &&&&& 8.& *λ*2 = *μ*1 + *ν*2 & $\begin{bmatrix} {\mathfrak{d}}\_{333} & 0 & 0 \\ 0 & 0 & \\ {\mathfrak{d}}\_{333} && \end{bmatrix}$ & $\begin{bmatrix} 0 & 0 \\ {\mathfrak{d}}\_{333} & \end{bmatrix}$ & $\begin{bmatrix} {\mathfrak{d}}\_{333} & {\mathfrak{d}}\_{111}\\ 0 & \end{bmatrix}$ & [prop:nu1] &&&&& 9.& *λ*2 = *μ*2 + *ν*1 & $\begin{bmatrix} {\mathfrak{d}}\_{333} & 0 & 0 \\ 0 & 0 & \\ {\mathfrak{d}}\_{333} && \end{bmatrix}$ & $\begin{bmatrix} {\mathfrak{d}}\_{333} & {\mathfrak{d}}\_{111}\\ 0 & \end{bmatrix}$ & $\begin{bmatrix} 0 & 0 \\ {\mathfrak{d}}\_{333} & \end{bmatrix}$ & [prop:nu1] &&&&& 10.& *λ*3 = *μ*1 + *ν*3 & $\begin{bmatrix} {\mathfrak{d}}^+\_{111} & {\mathfrak{d}}\_{223} & 0 \\ {\mathfrak{d}}^+\_{221} & {\mathfrak{d}}\_{223} & \\ \mathfrak{p} && \end{bmatrix}$ & $\begin{bmatrix} \mathfrak{p} & 0 \\ {\mathfrak{d}}^+\_{223} & \end{bmatrix}$ & $\begin{bmatrix} {\mathfrak{d}}\_{111} & {\mathfrak{d}}^+\_{223}\\ {\mathfrak{d}}\_{221} & \end{bmatrix}$ & [prop:nu1] &&&&& 11.& *λ*3 = *μ*2 + *ν*2 & $\begin{bmatrix} {\mathfrak{d}}\_{333} & {\mathfrak{d}}\_{222} & 0 \\ {\mathfrak{d}}\_{333} & {\mathfrak{d}}\_{222} & \\ 0 && \end{bmatrix}$ & $\begin{bmatrix} {\mathfrak{d}}\_{332} & {\mathfrak{d}}\_{223}\\ 0 & \end{bmatrix}$ & $\begin{bmatrix} {\mathfrak{d}}\_{323} & {\mathfrak{d}}\_{232} \\ 0 & \end{bmatrix}$ & [prop:la3] &&&&& 12.& *λ*3 = *μ*3 + *ν*1 & $\begin{bmatrix} {\mathfrak{d}}^+\_{111} & {\mathfrak{d}}\_{223} & 0 \\ {\mathfrak{d}}^+\_{221} & {\mathfrak{d}}\_{223} & \\ \mathfrak{p} && \end{bmatrix}$ & $\begin{bmatrix} {\mathfrak{d}}\_{111} & {\mathfrak{d}}^+\_{223}\\ {\mathfrak{d}}\_{221} & \end{bmatrix}$ & $\begin{bmatrix} \mathfrak{p} & 0 \\ {\mathfrak{d}}^+\_{223} & \end{bmatrix}$ & [prop:nu1] &&&&& 13.& *λ*3 = *μ*3 + *ν*3 & $\begin{bmatrix} 0 & {\mathfrak{d}}\_{111} & 0 \\ 0 & {\mathfrak{d}}\_{111} & \\ 0 && \end{bmatrix}$ & $\begin{bmatrix} 0 & {\mathfrak{d}}\_{111}\\ 0 & \end{bmatrix}$ & $\begin{bmatrix} 0 & {\mathfrak{d}}\_{111} \\ 0 & \end{bmatrix}$ & [prop:nu3] &&&&& 14.& *λ*2 = *μ*3 + *ν*2 & $\begin{bmatrix} {\mathfrak{d}}\_{323} & 0 & 0 \\ {\mathfrak{d}}\_{333} & 0 & \\ {\mathfrak{d}}\_{323} && \end{bmatrix}$ & $\begin{bmatrix} {\mathfrak{d}}\_{323} & 0\\ {\mathfrak{d}}\_{333} & \end{bmatrix}$ & $\begin{bmatrix} 0 & {\mathfrak{d}}\_{111} \\ 0 & \end{bmatrix}$ & [prop:nu3] &&&&& 15.& *λ*2 = *μ*2 + *ν*3 & $\begin{bmatrix} {\mathfrak{d}}\_{332} & 0 & 0 \\ {\mathfrak{d}}\_{333} & 0 & \\ {\mathfrak{d}}\_{332} && \end{bmatrix}$ & $\begin{bmatrix} 0 & {\mathfrak{d}}\_{111} \\ 0 & \end{bmatrix}$ & $\begin{bmatrix} {\mathfrak{d}}\_{332} & 0\\ {\mathfrak{d}}\_{333} & \end{bmatrix}$ & [prop:nu3] &&&&& 16.& *λ*1 = *μ*3 + *ν*1 & $\begin{bmatrix} {\mathfrak{d}}\_{333} & {\mathfrak{d}}\_{232} & 0 \\ {\mathfrak{d}}\_{323} & {\mathfrak{d}}\_{222} & \\ {\mathfrak{d}}\_{323} && \end{bmatrix}$ & $\begin{bmatrix} 0 & \mu\_{12}\\ 0 & \end{bmatrix}$ & $\begin{bmatrix} {\mathfrak{d}}\_{333} & {\mathfrak{d}}\_{232} \\ {\mathfrak{d}}\_{323} & \end{bmatrix}$ & [prop:nu3] &&&&& 17.& *λ*1 = *μ*2 + *ν*2 & $\begin{bmatrix} {\mathfrak{d}}\_{111} & {\mathfrak{d}}^-\_{223} & 0 \\ {\mathfrak{d}}^+\_{222} & {\mathfrak{d}}\_{113} & \\ {\mathfrak{d}}^+\_{121} && \end{bmatrix}$ & $\begin{bmatrix} {\mathfrak{d}}^-\_{112} & 0\\ {\mathfrak{d}}\_{221} & \end{bmatrix}$ & $\begin{bmatrix} {\mathfrak{d}}^+\_{213} & {\mathfrak{d}}\_{121} \\ {\lambda}^{+}\_{23} & \end{bmatrix}$ & [prop:la1] &&&&& 18.& *λ*1 = *μ*1 + *ν*3 & $\begin{bmatrix} {\mathfrak{d}}\_{333} & {\mathfrak{d}}\_{223} & 0 \\ {\mathfrak{d}}\_{332} & {\mathfrak{d}}\_{222} & \\ {\mathfrak{d}}\_{332} && \end{bmatrix}$ & $\begin{bmatrix} {\mathfrak{d}}\_{333} & {\mathfrak{d}}\_{223} \\ {\mathfrak{d}}\_{332} & \end{bmatrix}$ & $\begin{bmatrix} 0 & \nu\_{12}\\ 0 & \end{bmatrix}$ & [prop:nu3] &&&&& Note that all division numbers that appear in Table [tab:div] are positive and do not exceed the size of the strip in which they appear. Minimal Paths ------------- In order to verify the proposed formulas for the coefficient *c**μ*, *ν**λ*, we will typically use induction on ∣*λ*∣ − ∣*μ*∣. In order to do this, we decompose *ν* into two pieces *ν*ʹ and *ν*ʺ, and compute the coefficients obtained when we expand the product *P**μ**P**ν*ʹ*P**ν*ʺ as a sum. Using associativity, we can expand this product in 2 different ways. For fixed *λ*, *μ*, *ζ*, *ε*, ∑*κ* ⊂ *λ**c**μ*, *ζ**κ* ⋅ *c**κ*, *ε**λ* = ∑*η* ⊂ *λ**c**ζ*, *ε**η* ⋅ *c**μ*, *η**λ*. [lem:path] We use the associativity of product *P**μ**P**ζ**P**ε* to expand the coefficient of *P**λ* in this product as a sum in 2 different ways: $$\begin{aligned} (P\_\mu P\_{{\zeta}})P\_{\epsilon}&= \left(\sum\_{\kappa}c^{{\kappa}}\_{\mu,{\zeta}} P\_{\kappa}\right) P\_{{\epsilon}} \\ &= \sum\_\xi \sum\_{\kappa}c^{{\kappa}}\_{\mu,{\zeta}} \cdot c^{\xi}\_{{\kappa},{\epsilon}} \; P\_\xi \end{aligned}$$ $$\begin{aligned} P\_\mu (P\_{{\zeta}}P\_{\epsilon}) &= P\_\mu \left(\sum\_\eta c^{\eta}\_{{\zeta},{\epsilon}} P\_{\eta} \right)\\ &= \sum\_\xi \sum\_\eta c^{\eta}\_{{\zeta},{\epsilon}} \cdot c^{\xi}\_{\mu,\eta} \; P\_\xi.\end{aligned}$$ Picking out the coefficient of *P**λ* in this expression tells us: ∑*κ* ⊂ *λ**c**μ*, *ζ**κ* ⋅ *c**κ*, *ε**λ* = ∑*η* ⊂ *λ**c**ζ*, *ε**η* ⋅ *c**μ*, *η**λ*. It turns out that for minimal triples (*λ*, *μ*, *ν*) of partitions in P3, we can always decompose *ν* (or, equivalently, *μ*) into subpartitions *ν*ʹ and *ν*ʺ such that all the coefficients that appear in the expression ∑*κ* ⊂ *λ**c**μ*, *ν*ʹ*κ* ⋅ *c**κ*, *ν*ʺ*λ* = ∑*η* ⊂ *λ**c**μ*, *η**λ* ⋅ *c**ν*ʹ, *ν*ʺ*η* are indexed by minimal triples. We can solve this equation for *c**μ*, *ν**λ*, and we call the resulting expression a *minimal path*. In particular, we can pick *ν*ʺ to consist of a single row or column. In this case, coefficients involving *ν*ʺ can be obtained using the Pieri rule. Since *ν*ʹ is strictly smaller than *ν*, we can apply our inductive hypothesis or results of a previous case to compute *c**μ*, *ν*ʹ*κ* for *κ* ⊂ *λ*. On the other hand, since ∣*η*∣ = ∣*ν*∣, we use the following lemma to simplify coefficients of the form *c**μ*, *η**λ* for *η* ⊂ *λ*, *η* ≠ *ν*. *c**μ*, *ν**λ* = *c**μ* − *ω*3*μ*, *ν* − *ω*3*ν**λ* − *ω*3*μ* − *ω*3*ν*. [lem:3to2] We first use Lemma [lem:path] with *ε* = (13) and *ζ* = *ν* − *ε*, to get *c**μ*, *ν**λ* ⋅ *c**ζ*, *ε**ν* = *c**μ*, *ζ**λ* − *ε* ⋅ *c**λ* − *ε*, *ε**λ*. By the Pieri rule, *c**ζ*, *ε**ν* = *c**λ* − *ε*, *ε**λ* = 1, and so we get *c**μ*, *ν**λ* = *c**μ*, *ζ**λ* − *ε*. We can then iterate this to get *c**μ*, *ν**λ* = *c**μ*, *ν* − *ω*3*ν**λ* − *ω*3*ν*. Finally, we use the symmetry between *μ* and *ν* to obtain our identity. Since, in general, *η*3 > *ν*3 and thus ∣*ω*3*η*∣ > ∣*ω*3*ν*∣, this lemma allows us to reduce *c**μ*, *η**λ* such that it can also be computed by our inductive hypothesis or results of a previous case. Main Lemmas ----------- We will reduce our minimal path expressions to one of the following 2 identities, depending on whether *ν*ʺ is taken to be a row or a column in our decomposition of *ν*. We will use the notation [*n*] to denote {1, …, *n*}. Let *n* be fixed, and let *I* = [*n*]. Given sets *σ* = {*σ**i*}, *τ* = {*τ**i*} indexed by *i* ∈ *I*, we can define $$\begin{aligned} {\beta}\_j({\sigma},\tau) &={\left\{{\sigma}\_i-{\sigma}\_j\right\}} \cup {\left\{\tau\_i+{\sigma}\_j\right\}}, \\ \phi\_j(x;{\sigma},\tau) &= \phi\left(x;{\beta}\_j({\sigma},\tau)\right), \\ \Phi(x;{\sigma},\tau) &= \sum\_{j \in I} \phi\_j(x;{\sigma},\tau)\end{aligned}$$ Then for all *x*, *σ*, *τ*, Φ(*x*; *σ*, *τ*) ≡ Φ(*x*; *τ*, *σ*). [lem:col] We use induction on *n*. If *n* = 1, then *β*1(*σ*, *τ*) = *β*1(*τ*, *σ*) = {0, *τ*1 + *σ*1},  and so $$\Phi(x;{\sigma},\tau) = \Phi(x;\tau,{\sigma}) = \frac{\tau\_1+{\sigma}\_1-x}{\tau\_1+{\sigma}\_1}.$$ For greater *n*, we note that each *ϕ**j*(*x*; *σ*, *τ*) and *ϕ**j*(*x*; *τ*, *σ*) is a polynomial of degree 2*n* − 1 in *x*. We will show that the expression Φ(*x*; *σ*, *τ*) − Φ(*x*; *τ*, *σ*) vanishes at all points of the form *x**k**l* = (*σ**k* + *τ**l*), *k*, *l* ∈ *I*, and therefore must be identically 0. If we fix some *k* and *l* in *I*, we see that *ϕ**k*(*x**k**l*; *σ*, *τ*) = *ϕ**l*(*x**k**l*; *τ*, *σ*) = 0. If *j* ≠ *k*, then $$\phi\_j(x\_{kl};{\sigma},\tau) =\prod\_{i \neq j} \frac{{\sigma}\_i-{\sigma}\_j-{\sigma}\_k-\tau\_l}{{\sigma}\_i-{\sigma}\_j}\prod\_{i} \frac{\tau\_i+{\sigma}\_j-{\sigma}\_k-\tau\_l}{\tau\_i+{\sigma}\_j}.$$ We factor out the *i* = *k* term from the first product and the *i* = *l* term from the second product to get $$\phi\_j(x\_{kl};{\sigma},\tau) = \frac{(-{\sigma}\_j-\tau\_l)}{({\sigma}\_k-{\sigma}\_j)}\frac{({\sigma}\_j-{\sigma}\_k)}{(\tau\_l+{\sigma}\_j)}\left(\prod\_{i \neq j,k} \frac{{\sigma}\_i-{\sigma}\_j-{\sigma}\_k-\tau\_l}{{\sigma}\_i-{\sigma}\_j}\prod\_{i \neq l} \frac{\tau\_i+{\sigma}\_j-{\sigma}\_k-\tau\_l}{\tau\_i+{\sigma}\_j} \right).$$ Since $$\frac{(-{\sigma}\_j-\tau\_l)}{({\sigma}\_k-{\sigma}\_j)}\frac{({\sigma}\_j-{\sigma}\_k)}{(\tau\_l+{\sigma}\_j)} = 1,$$ we get that $$\begin{aligned} \phi\_j(x\_{kl};{\sigma},\tau) &=\prod\_{i \neq l} \frac{\tau\_i+{\sigma}\_j+{\sigma}\_k-\tau\_l}{\tau\_i+{\sigma}\_j} \prod\_{i \neq j,k} \frac{{\sigma}\_i-{\sigma}\_j-{\sigma}\_k-\tau\_l}{{\sigma}\_i-{\sigma}\_j} \\ &=\phi\_j(x\_{kl};{\sigma}\_{i\neq k},\tau\_{i\neq l}).\end{aligned}$$ By a similar calculation, we have *ϕ**j*(*x**k**l*; *τ*, *σ*) = *ϕ**j*(*x**k**l*; *τ**i* ≠ *l*, *σ**i* ≠ *k*). Therefore Φ(*x**k**l*; *σ*, *τ*) − Φ(*x**k**l*; *τ*, *σ*) = Φ(*x**k**l*; *σ**i* ≠ *k*, *τ**i* ≠ *l*) − Φ(*x**k**l*; *τ**i* ≠ *l*, *σ**i* ≠ *k*),  which is identically 0, by the inductive hypothesis. Fix *n* and let *σ* = (*σ*1, *σ*2), *τ* = (*τ*1, *τ*2). Let *σ**i**k* denote *σ**i* + *k*. Let $$\begin{aligned} {\beta}^n\_t(j;{\sigma},\tau) &=\bigcup\_{\substack{i \in [2]\\ k \in [n-t]}} {\left\{-k\right\}}\cup{\left\{{\sigma}^{k-1}\_i-{\sigma}^{t}\_j: i \neq j\right\}} \cup{\left\{\tau^{k-1}\_i+{\sigma}^{t}\_j\right\}}, \\ \phi^n\_t(x;{\sigma},\tau) &= \phi\left(x;{\beta}^n\_t(1;{\sigma},\tau)\right) \cdot \phi\left(x;{\beta}^n\_{n-t}(2;{\sigma},\tau)\right), \\ \Phi\_n(x;{\sigma},\tau) &=\sum^n\_{t=0} \phi^n\_t(x;{\sigma},\tau)\end{aligned}$$ Then for all *x*, *σ*, *τ*, Φ*n*(*x*; *σ*, *τ*) ≡ Φ*n*(*x*; *τ*, *σ*). [lem:row] We prove this identity by induction on *n*. When *n* = 1, the result follows from Lemma [lem:col]. For general *n*, we note that each *ϕ**t**n*(*x*; *σ*, *τ*) and *ϕ**t**n*(*x*; *τ*, *σ*) is a polynomial of degree 4*n* in *x*, so we must show that Φ*n*(*x*; *σ*, *τ*) − Φ*n*(*x*; *τ*, *σ*) vanishes at 4*n* + 1 points. Note that all terms vanish at *x* = 1 since 1 is contained in at least one of the sets [*n* − *t*] or [*t*]. We will show that Φ*n*(*x*; *σ*, *τ*) − Φ*n*(*x*; *τ*, *σ*) also vanishes at the 4*n* points given by *x* = (*σ**l* + *τ**m* + *k* − 1), *l*, *m* ∈ [2], *k* ∈ [*n*]. Since a transposition of *σ*1 and *σ*2 takes *ϕ**t**n*(*x*; *σ*, *τ*) to *ϕ**n* − *t**n*(*x*; *σ*, *τ*) and keeps *ϕ**t**n*(*x*; *τ*, *σ*) fixed, we can assume without loss of generality that *l* = *m* = 1, and so we let *x**k* = (*σ*1 + *τ*1 + *k* − 1). We claim that $$\phi^n\_t(x;{\sigma},\tau)=\left\{\begin{matrix} 0 &\mbox{ if } t<k \\ \phi^{n-k}\_{t-k}(x\_k;{\sigma}+k\mathbf{e}\_1,\tau+k\mathbf{e}\_1) \cdot c^n\_k &\mbox{ if } t\geq k \end{matrix} \right. \label{eq:lem2}$$ where **e**1 = (1, 0) and *c**k**n* is a term that does not depend on *t*, and is symmetric in *σ* and *τ*. We note that if *t* < *k*, then for *j* = *k* − *t*, *τ*1*j* − 1 + *σ*1*t* − *x**k* = *t* + *j* − *k* = 0. Since *k* ∈ [*n*], *j* must be in [*n* − *t*], and so this implies that for *t* < *k* *ϕ**t**n*(*x*; *σ*, *τ*) = 0. Now assume that *t* ≥ *k*. By definition, we have that *ϕ**t**n*(*x*; *σ*, *τ*) = *ϕ*(*x*; *β**t**n*(1; *σ*, *τ*)) ⋅ *ϕ*(*x*; *β**n* − *t**n*(2; *σ*, *τ*)),  and so we work with each of these two factors separately. Let ⟨*a*⟩*j* = ⟨*x**k*; *a*⟩*j*. We have that $$\begin{aligned} \phi\left(x\_k;{\beta}^n\_t(1;{\sigma},\tau)\right) &= {\left\langle-n+t\right\rangle}\_{n-t} {\left\langle{\sigma}\_2-{\sigma}\_1-t\right\rangle}\_{n-t} {\left\langle\tau\_2+{\sigma}\_1+t\right\rangle}\_{n-t} {\left\langle\tau\_1+{\sigma}\_1+t\right\rangle}\_{n-t}, \\ \phi\left(x\_k;{\beta}^{n-k}\_{t-k}(1;{\sigma}+k\mathbf{e}\_1,\tau+k\mathbf{e}\_1)\right) &= {\left\langle-n+t\right\rangle}\_{n-t} {\left\langle{\sigma}\_2-{\sigma}\_1-t\right\rangle}\_{n-t} {\left\langle\tau\_2+{\sigma}\_1+t\right\rangle}\_{n-t} {\left\langle\tau\_1+{\sigma}\_1+k+t\right\rangle}\_{n-t}. \end{aligned}$$ Note that the first three factors on the right hand side are the same in both lines. Therefore, if we divide the first expression by the second, we can simplify the ratio using [eq:phid1] to obtain: $$\begin{aligned} \frac{\phi\left(x\_k;{\beta}^n\_t(1;{\sigma},\tau)\right)}{\phi\left(x\_k;{\beta}^{n-k}\_{t-k}(1;{\sigma}+k\mathbf{e}\_1,\tau+k\mathbf{e}\_1)\right)} &= \frac{{\left\langle\tau\_1+{\sigma}\_1+t\right\rangle}\_{n-t}}{{\left\langle\tau\_1+{\sigma}\_1+k+t\right\rangle}\_{n-t}} \nonumber \\ &= \frac{{\left\langle\tau\_1+{\sigma}\_1+t\right\rangle}\_{k}}{{\left\langle\tau\_1+{\sigma}\_1+n\right\rangle}\_{k}}. \label{eq:bet1}\end{aligned}$$ On the other hand, we have that: $$\begin{aligned} \phi\left(x\_k;{\beta}^n\_t(2;{\sigma},\tau)\right) =& {\left\langle-t\right\rangle}\_{t}\cdot {\left\langle{\sigma}\_1-{\sigma}\_2-n+t\right\rangle}\_{t}\cdot{\left\langle\tau\_1+{\sigma}\_2+n-t\right\rangle}\_{t} \\ &\qquad \cdot{\left\langle\tau\_2+{\sigma}\_2+n-t\right\rangle}\_{t}, \\ \phi\left(x\_k;{\beta}^{n-k}\_{t-k}(2;{\sigma}+k\mathbf{e}\_1,\tau+k\mathbf{e}\_1)\right) =& {\left\langle-t+k\right\rangle}\_{t-k}\cdot {\left\langle{\sigma}\_1-{\sigma}\_2+k-n+t\right\rangle}\_{t-k}\cdot{\left\langle\tau\_1+{\sigma}\_2+k+n-t\right\rangle}\_{t-k} \\ &\qquad \cdot {\left\langle\tau\_2+{\sigma}\_2+n-t\right\rangle}\_{t-k}. \end{aligned}$$ Therefore if we divide the first expression by the second, and once again use [eq:phid1] to simplify the ratio, we get $$\frac{\phi\left(x\_k;{\beta}^n\_t(2;{\sigma},\tau)\right)}{\phi\left(x\_k;{\beta}^{n-k}\_{t-k}(2;{\sigma}+k\mathbf{e}\_1,\tau+k\mathbf{e}\_1)\right)} = {\left\langle-t\right\rangle}\_{k} {\left\langle{\sigma}\_1-{\sigma}\_2-n+t\right\rangle}\_{k} {\left\langle\tau\_1+{\sigma}\_2+n-t\right\rangle}\_{k}{\left\langle\tau\_2+{\sigma}\_2+n-k\right\rangle}\_{k}. \label{eq:bet2a}$$ By [eq:phid2], we can rewrite the first term on the right hand side of this expression as $${\left\langle-t\right\rangle}\_{k} = \frac{1}{{\left\langle\tau\_1+{\sigma}\_1+t\right\rangle}\_{k}}$$ since *τ*1 + *σ*1 + *t* − *t* = *x**k* + *k* − 1. We can also simplify the the middle two terms using the same identity. Since (*σ*1 − *σ*2 − *n* + *t*) + (*τ*1 + *σ*2 + *n* − *t*) = *x**k* − *k* + 1,  by [eq:phid2], we get that ⟨*σ*1 − *σ*2 − *n* + *t*⟩*k*⟨*τ*1 + *σ*2 + *n* − *t*⟩*k* = 1. Therefore, we can reduce [eq:bet2a] to $$\frac{\phi\left(x\_k;{\beta}^n\_t(2;{\sigma},\tau)\right)}{\phi\left(x\_k;{\beta}^{n-k}\_{t-k}(2;{\sigma}+k\mathbf{e}\_1,\tau+k\mathbf{e}\_1)\right)} = \frac{{\left\langle\tau\_2+{\sigma}\_2+n-k\right\rangle}\_{k}}{{\left\langle\tau\_1+{\sigma}\_1+t\right\rangle}\_{k}}. \label{eq:bet2}$$ Finally, we multiply the expressions in [eq:bet1] and [eq:bet2] to get that $$\begin{aligned} \frac{\phi^n\_{t} (x\_k;{\sigma},\tau)}{\phi^{n-k}\_{t-k} (x\_k;{\sigma}+k\mathbf{e}\_1,\tau+k\mathbf{e}\_1)} &= \frac{{\left\langle\tau\_1+{\sigma}\_1+t\right\rangle}\_{k}}{{\left\langle\tau\_1+{\sigma}\_1+n\right\rangle}\_{k}} \cdot \frac{{\left\langle\tau\_2+{\sigma}\_2+n-k\right\rangle}\_{k}}{{\left\langle\tau\_1+{\sigma}\_1+t\right\rangle}\_{k}} \\ &=\frac{{\left\langle\tau\_2+{\sigma}\_2+n-k\right\rangle}\_{k}}{{\left\langle\tau\_1+{\sigma}\_1+n\right\rangle}\_{k}},\end{aligned}$$ which completes our proof of equation [eq:lem2], with $$c\_k = \frac{{\left\langle\tau\_2+{\sigma}\_2+n-k\right\rangle}\_{k}}{{\left\langle\tau\_1+{\sigma}\_1+n\right\rangle}\_{k}}.$$ It is easy to see that this *c**k* does not depend on *t* and is symmetric in *σ* and *τ*. Since each *ϕ**t**n* contains this factor of *c**k* whenever *t* ≥ *k*, it follows that Φ*n*(*x**k*; *σ*, *τ*) = Φ*n* − *k*(*x**k*; *σ* + *k***e**1, *τ* + *k***e**1)*c**k*. Similarly, by transposing *σ* and *τ*, we get Φ*n*(*x**k*; *τ*, *σ*) = Φ*n* − *k*(*x**k*; *τ* + *k***e**1, *σ* + *k***e**1)*c**k*. Thus, by the inductive hypothesis: Φ*n*(*x**k*; *σ*, *τ*) − Φ*n*(*x**k*; *τ*, *σ*) = 0. Verification of Division Number Formulas ---------------------------------------- As in Section [sec:div], we will use the following notation: $$\begin{aligned} {\mathfrak{d}}\_{ijk} &= |{\lambda}\_i + \mu\_j - \nu\_k|, \\ \xi\_{ij} &= \xi\_i - \xi\_j, \\ \mathfrak{p} &= \max({\lambda}\_3-\mu\_2-\nu\_3,0), \\ x^{\pm} &= x\pm \mathfrak{p}.\end{aligned}$$ Given a set of division numbers n, recall that **d***μ*, *ν**λ*(n) = ∏*i* ≤ *j*[*h**λ**i**j*; *n**i**j**λ*] ⋅ [ − *h**μ**i**j*; *n**i**j**μ*] ⋅ [ − *h**ν**i**j*; *n**i**j**ν*],  where *n**i**j**λ* is the division number corresponding to the *i*\tiny th strip in *ω**j**λ*, and *n**i**j**ξ* is the division number corresponding to the *i*\tiny th strip in *ω**j* − 1*ξ* for *ξ* ∈ {*μ*, *ν*}. For each minimal triple (*λ*, *μ*, *ν*) we have a set of division numbers n(*λ*, *μ*, *ν*), as listed in Table [tab:div]. Let *d**μ*, *ν**λ* = **d***μ*, *ν**λ*(n(*λ*, *μ*, *ν*)). We verify that for each case in Table [tab:div], *c**μ*, *ν**λ* = *d**μ*, *ν**λ*. In order to do this, for each fixed triple (*λ*, *μ*, *ν*), we first decompose *ν* into two subpartitions *ζ* and *ε*, such that *ε* consists of a single row or column. By Lemma [lem:path], we then get a minimal path of the form: ∑*i**c**μ*, *ζ**κ*(*i*) ⋅ *c**κ*(*i*), *ε**λ* = ∑*i**c**ζ*, *ε**η*(*i*) ⋅ *c**μ*, *η*(*i*)*λ*,  where for some *i*, *η*(*i*) = *ν*. For such a path, we can use either a previously established result or induction to get that each *c**ξ*2, *ξ*3*ξ*1 equals *d**ξ*2, *ξ*3*ξ*1 for all the triples (*ξ*1, *ξ*2, *ξ*3) ≠ (*λ*, *μ*, *ν*). Therefore, to show that *c**μ*, *ν**λ* = *d**μ*, *ν**λ*, it suffices to instead verify the analogous identity for *d**μ*, *ν**λ*: ∑*i**d**μ*, *ζ**κ*(*i*) ⋅ *d**κ*(*i*), *ε**λ* = ∑*i**d**ζ*, *ε**η*(*i*) ⋅ *d**μ*, *η*(*i*)*λ*. When the sums on either side of equation [eq:did] consist of more than one term, we can prove this identity by writing each *d**μ*, *ζ**κ*(*i*) ⋅ *d**κ*(*i*), *ε**λ* and *d**ζ*, *ε**η*(*i*) ⋅ *d**μ*, *η*(*i*)*λ* as the product of *d**μ*, *ν**λ* ⋅ *d**ζ*, *ε**ν* with additional terms produced by changes to the anchors and division numbers, as given by equation [eq:phid1]. We will then show that the additional factors produced satisfy equation [eq:did] by showing that they fall into the form of Lemma [lem:col] (if *ε* is a single column) or [lem:row] (if *ε* is a single row). In the proofs below, we will also use the fact that since *c**μ*, *ν**λ* = *c**ν*, *μ**λ*, any division number formula that we prove for a coefficient based on a condition involving *μ* and *ν* can be subsequently be used for the condition obtained by interchanging *μ* and *ν*, as long as one also interchanges the role of *μ* and *ν* in the formula. If *μ*3 = *μ*2, then *c**μ*, *ν**λ* is given by $${\lambda}: \begin{bmatrix} {\mathfrak{d}}\_{333} & {\mathfrak{d}}\_{222} & 0 \\ {\mathfrak{d}}\_{333} & {\mathfrak{d}}\_{222} & \\ {\mathfrak{d}}\_{333} && \end{bmatrix} \qquad \mu: \begin{bmatrix} 0 & \mathfrak{d}\_{111}\\ 0 & \end{bmatrix} \qquad \nu: \begin{bmatrix} {\mathfrak{d}}\_{333} & {\mathfrak{d}}\_{222} \\ {\mathfrak{d}}\_{333} & \end{bmatrix}$$ [prop:horn1] By Lemma [lem:3to2], *c**μ*, *ν**λ* = *c**μ* − *ω*3*μ*, *ν**λ* − *ω*3*μ*. Since, *μ* − *ω*3*μ* consists of a single part, we can use the Pieri rule to compute *c**μ* − *ω*3*μ*, *ν**λ* − *ω*3*μ*. The Pieri rule for rows can be obtained from Theorem [thm:jpieri] and equation [eq:transpose]. In particular, we get that *c**μ* − *ω*3*μ*, *ν**λ* − *ω*3*μ* is obtained by treating all the hooks in *λ*, *μ*, *ν* as flipped hooks, except those corresponding to boxes (*i*, *j*) ∈ *ν*, *λ* such that *ν**i*ʹ = *λ**i*ʹ and *ν**j* < *λ**j*. Finally, we note that the d131 flipped hooks in *ω*1*λ* can be exchanged with the last d131 flipped hooks in *ω*1*μ*, leaving no flipped hooks in *ω*1*λ* and only *μ*13 − d131 = d111 flipped hooks in *ω*1*μ*. If *μ*2 = *μ*1, then *c**μ*, *ν**λ* is given by: $${\lambda}: \begin{bmatrix} {\mathfrak{d}}\_{111} & {\mathfrak{d}}\_{213} & 0 \\ {\mathfrak{d}}\_{222} & {\mathfrak{d}}\_{213} & \\ {\mathfrak{d}}\_{111} && \end{bmatrix} \qquad \mu:\begin{bmatrix} 0 & 0\\ {\mathfrak{d}}\_{333} & \end{bmatrix} \qquad \nu: \begin{bmatrix} {\mathfrak{d}}\_{213} & {\mathfrak{d}}\_{111} \\ {\mathfrak{d}}\_{213} & \end{bmatrix}$$ [prop:horn2] Let *d**μ*, *ν**λ* by the hypothesized formula. We use induction on (*ν*2 − *ν*3) to show that *c**μ*, *ν**λ* = *d**μ*, *ν**λ*. If *ν*2 = *ν*3, we can determine *c**μ*, *ν**λ* using Prop. [prop:horn1] with the roles of *μ* and *ν* reversed in the following way. First, note that by Horn inequality (8), *λ*2 ≤ *μ*1 + *ν*2 = *μ*2 + *ν*2, and by inequality (15), *λ*2 ≥ *μ*2 + *ν*3 = *μ*2 + *ν*2, so *λ*2 = *μ*2 + *ν*2. Therefore, d222 = 0, and so d213 = d222 = 0. Since ∣*λ*∣ = ∣*μ*∣ + ∣*ν*∣, it also follows that d111 = d333. Finally, note that since *λ*2 − *μ*3 − *ν*3 = *μ*1 − *μ*3 in this case, we can exchange the lower hooks in the second strip of *ω*3*λ* with the upper hooks in the first strip of *ω*2*μ*, so that Prop. [prop:horn1] gives us the following division numbers: $${\lambda}: \begin{bmatrix} {\mathfrak{d}}\_{111} & 0 & 0 \\ 0 & 0 & \\ {\mathfrak{d}}\_{111} && \end{bmatrix} \qquad \mu:\begin{bmatrix} 0 & 0\\ {\mathfrak{d}}\_{333} & \end{bmatrix} \qquad \nu: \begin{bmatrix} 0 & {\mathfrak{d}}\_{111} \\ 0 & \end{bmatrix}$$ We can verify that this is the same as *d**μ*, *ν**λ* in this case. If *ν*2 > *ν*3 then we can decompose *ν* into *ε* = (1, 1) and *ζ* = *ν* − *ε*. Then by Lemma [lem:path], we have ∑1 ≤ *i* ≤ 3*c**μ*, *η*(*i*)*λ**c**ζ*, *ε**η*(*i*) = ∑1 ≤ *i* ≤ 3*c**μ*, *ζ**κ*(*i*)*c**κ*(*i*), *ε**λ*,  where *η*(*i*) = *ζ* + (1, 1, 1) − **e***i* and *κ*(*i*) = *λ* − (1, 1, 1) + **e***i*, where **e***i* is a triple consisting of a 1 in the *i*\tiny th position and 0’s elsewhere. Note that *η*(3) = *ν*. We will show that our hypothesized coefficients satisfy equation [eq:mu12path]. We can determine *d**κ*(*i*), *ε**λ* and *d**ζ*, *ε**η*(*i*) by the Pieri rule, since *ε* consists of a single column. In particular, we have that $$\begin{aligned} d^{{\lambda}}\_{{\kappa}(i),{\epsilon}} &=\phi\left(1-r;{\left\{h^{ij}\_{{\lambda}}+1,-h^{ij}\_{{\kappa}(i)}+1:j>i\right\}}\right),\\ d^{\eta(i)}\_{{\zeta},{\epsilon}} &=\phi\left(1-r;{\left\{h^{ij}\_{\eta(i)}+1,-h^{ij}\_{{\zeta}}+1:j>i\right\}}\right).\end{aligned}$$ Note that by this definition, *d**ζ*, *ε**ν* = *d**ζ*, *ε**η*(3) = 1. We can also write out *d**μ*, *ζ**κ*(*i*) and *d**μ*, *η*(*i*)*λ*, by comparing them to *d**μ*, *ν**λ*, since *κ*(*i*) is obtained by modifying the parts of *λ* and *η*(*i*) and *ζ* are obtained by modifying the parts of *ν*. Therefore, *d**μ*, *ζ**κ*(*i*) and *d**μ*, *η*(*i*)*λ* can be determined by examining how these changes to *λ* and *ν* change the anchors and division numbers for each strip. In the symbols below, the entries denote how the corresponding anchor for each strip must be changed for that coefficient compared to the anchor of *d**μ*, *ν**λ*, and a \* indicates a change of  − 1 to the corresponding division number. $$\begin{aligned} {2} {\kappa}(1): \begin{pmatrix} +1^\* & +1^\* & 0 \\ 0 & 0^\* & \\ 0^\* & \end{pmatrix} \qquad &\mu:\begin{pmatrix} 0 & 0\\ 0^\* & \end{pmatrix} \qquad &{\zeta}: \begin{pmatrix} +1^\* & 0^\* \\ +1^\* & \end{pmatrix}, \\ {\kappa}(2): \begin{pmatrix} 0 & -1 & 0 \\ +1^\* & 0 & \\ 0 & \end{pmatrix} \qquad &\mu:\begin{pmatrix} 0 & 0\\ 0^\* & \end{pmatrix} \qquad &{\zeta}: \begin{pmatrix} +1 & 0 \\ +1 & \end{pmatrix}, \\ {\kappa}(3): \begin{pmatrix} -1 & 0^\* & 0 \\ -1 & 0^\* & \\ 0 & \end{pmatrix} \qquad &\mu:\begin{pmatrix} 0 & 0\\ 0 & \end{pmatrix} \qquad &{\zeta}: \begin{pmatrix} +1^\* & 0 \\ +1^\* & \end{pmatrix}, \\ {\lambda}: \begin{pmatrix} 0^\* & 0^\* & 0 \\ 0 & 0^\* & \\ 0^\* && \end{pmatrix} \qquad &\mu:\begin{pmatrix} 0 & 0\\ 0^\* & \end{pmatrix} \qquad &\eta(1): \begin{pmatrix} +2^\* & +1^\* \\ +1^\* & \end{pmatrix}, \\ {\lambda}: \begin{pmatrix} 0 & 0^\* & 0 \\ 0^\* & 0^\* & \\ 0 && \end{pmatrix} \qquad &\mu:\begin{pmatrix} 0 & 0\\ 0^\* & \end{pmatrix} \qquad &\eta(2): \begin{pmatrix} +1^\* & -1 \\ +2^\* & \end{pmatrix}.\end{aligned}$$ This allows us to determine each summand *d**μ*, *ζ**κ*(*i*)*d**κ*(*i*), *ε**λ* and *d**μ*, *η*(*i*)*λ**d**ζ*, *ε**η*(*i*) of equation [eq:mu12path] compared to *d**μ*, *ν**λ*, since we can use equations [eq:mod1] and [eq:mod2] to write terms of the form [*b* + *r*; *n* + *s*] as a product of [*b*; *n*] and some additional factors. In particular, we factor out *d**μ*, *ν**λ* from each of these terms. In addition, we factor out terms that appear in a majority of the six summands. Note that these terms come from the blocks *ω*2*ξ* for each partition *ξ*. Thus, we factor out $\frac{1}{\mathcal{X}}$ from each expression, where X = *ϕ*(1 − *r*; *λ*12 + *r* + d213, d213,  − *μ*23 − *r* + d333, 1 − *ν*13 − 2*r*, 1 − *ν*23 − *r*). Using the notation of Lemma [lem:col], we can rewrite X as *ϕ*3(1 − *r*; *σ*, *τ*), where $$\begin{aligned} {3} \tau\_1 &= {\lambda}\_{12}+{\mathfrak{d}}\_{213}+r &&= {\lambda}\_1 - \mu\_1 - \nu\_3 + r \\ \tau\_2 &= {\mathfrak{d}}\_{213} &&= {\lambda}\_2 - \mu\_1 - \nu\_3 \\ \tau\_3 &= {\mathfrak{d}}\_{333}-\mu\_{23}-r &&= {\lambda}\_3 - \mu\_1 - \nu\_3 - r \\ {\sigma}\_1 &= 1-\nu\_{13}-2r \\ {\sigma}\_2 &= 1-\nu\_{23}-r \\ {\sigma}\_3 &= 0.\end{aligned}$$ This allows us to write each term *d**μ*, *ζ**κ*(*i*)*d**κ*(*i*), *ε**λ* and *d**μ*, *η*(*i*)*λ**d**ζ*, *ε**η*(*i*) as a product of the form $$\frac{{d^{{\lambda}}\_{\mu,\nu}}}{\mathcal{X}} \phi(1-r;{\left\{a\_1,\ldots,a\_n\right\}}).$$ In the table below, we present the elements of the set A corresponding to each term. We will use *κ*(*i*) to indicate terms corresponding to *d**μ*, *ζ**κ*(*i*)*d**κ*(*i*), *ε**λ* and *η*(*i*) to indicate terms corresponding *d**μ*, *η*(*i*)*λ**d**ζ*, *ε**η*(*i*). | | *a*1 | *a*2 | *a*3 | *a*4 | *a*5 | | --- | --- | --- | --- | --- | --- | | *κ*(1) | *λ*12 + d213 + *r* | 1 − d111 − *r* |  − *λ*13 − 2*r* |  − *λ*12 − *r* | 1 + *ν*12 − d111 | | *κ*(2) | d213 |  − *λ*23 − *r* | *λ*12 + *r* | d213 + 1 − *ν*13 − 2*r* | 1 − *ν*23 − *r* | | *κ*(3) | *λ*13 + 2*r* | *λ*23 + *r* | d333 − *μ*23 − *r* | 1 − *λ*13 − d111 − 3*r* | 1 − *λ*23 − d222 − 2*r* | | *η*(1) | 1 − *λ*13 − d111 − 3*r* | 1 − d111 − *r* | *ν*13 + 2*r* − 1 | *ν*12 + *r* | d213 + 1 − *ν*13 − 2*r* | | *η*(2) | 1 − *λ*23 − d222 − 2*r* | 1 − d111 + *ν*12 | *ν*23 + *r* − 1 |  − *ν*12 − *r* | d213 + 1 − *ν*23 − *r* | Using the *σ**i* and *τ**i* defined above, and once again using the notation of Lemma [lem:col], one can check that $$\begin{aligned} d^{{\kappa}(i)}\_{\mu,{\zeta}} d^{{\lambda}}\_{{\kappa}(i),{\epsilon}} &= \frac{{d^{{\lambda}}\_{\mu,\nu}}}{\mathcal{X}} {\phi\_i(1-r;\tau,{\sigma})}, \\ d^{{\lambda}}\_{\mu,\eta(i)} d^{\eta(i)}\_{{\zeta},{\epsilon}} &= \frac{{d^{{\lambda}}\_{\mu,\nu}}}{\mathcal{X}} {\phi\_i(1-r;{\sigma},\tau)}.\end{aligned}$$ Therefore, we have that $$\begin{aligned} \sum^{3}\_{i=1} d^{{\kappa}(i)}\_{\mu,{\zeta}} c^{{\lambda}}\_{{\kappa}(i),{\epsilon}} &= \frac{{d^{{\lambda}}\_{\mu,\nu}}}{\mathcal{X}} {\Phi(1-r;\tau,{\sigma})}, \\ \sum^{3}\_{i=1} d^{{\lambda}}\_{\mu,\eta(i)} c^{\eta(i)}\_{{\zeta},{\epsilon}} &= \frac{{d^{{\lambda}}\_{\mu,\nu}}}{\mathcal{X}} {\Phi(1-r;{\sigma},\tau)}.\end{aligned}$$ and so by Lemma [lem:col], we have that both sums are equal, showing that *d**μ*, *ν**λ* satisfies equation [eq:mu12path]. By the inductive hypothesis, each *c**μ*, *ζ**κ*(*i*) = *d**μ*, *ζ**κ*(*i*), since *μ* is unchanged and *ζ*2 − *ζ*3 = *ν*2 − *ν*3 − 1. Similarly, *c**μ*, *η*(*i*)*λ* = *d**μ*, *η*(*i*)*λ* for *i* = 1, 2, since *η*(1)2 − *η*(1)3 = *ν*2 − *ν*3 − 1 and *η*(2)2 − *η*(2)3 = *ν*2 − *ν*3 − 2. Therefore, we get that the remaining term *c**μ*, *η*(*i*)*λ* = *c**μ*, *ν**λ* must equal *d**μ*, *ν**λ*. These last two propositions are instrumental in proving all of the remaining cases, since they allow us to form minimal paths by decomposing *ν* into two pieces *ζ* and *ε* such that *ε* consists of a single part, and *ζ* is such that either *ζ*2 = *ζ*3 or *ζ*1 = *ζ*2. Then we can determine coefficients involving *ε* using the Pieri rule for rows, and coefficients involving *ζ* using either Proposition [prop:horn1] or [prop:horn2]. In particular, Prop [prop:horn1] allows us to form a minimal path by decomposing *ν* into *ε* = (*ν*2 − *ν*3) and *ζ* = (*ν*1, *ν*3, *ν*3). $$\young(~~~~~~~,~~\bullet\bullet,~~) = \yng(7,2,2) + \young(\bullet\bullet)$$ Prop [prop:horn2] allows us to form a minimal path by decomposing *ν* into *ε* = *ω*1*ν* and *ζ* = *ν* − *ε* = *ω*3*ν* + *ω*2*ν*. $$\young(~~~~\bullet\bullet\bullet,~~~~,~~) = \yng(4,4,2) + \young(\bullet\bullet\bullet)$$ If *λ*3 = *μ*2 + *ν*2,  then *c**μ*, *ν**λ* is given by the following division numbers: $${\lambda}:\begin{bmatrix} {\mathfrak{d}}\_{333} & {\mathfrak{d}}\_{222} & 0 \\ {\mathfrak{d}}\_{333} & {\mathfrak{d}}\_{222} & \\ 0 && \end{bmatrix} \qquad \mu:\begin{bmatrix} {\mathfrak{d}}\_{332} & {\mathfrak{d}}\_{223}\\ 0 & \end{bmatrix} \qquad \nu:\begin{bmatrix} {\mathfrak{d}}\_{323} & {\mathfrak{d}}\_{232} \\ 0 & \end{bmatrix}.$$ [prop:la3] We will first use Lemma [lem:path] to show that *c**μ*, *ν**λ**c**ζ*, *ε**ν* = *c**μ*, *ζ**κ**c**κ*, *ε**λ*,  where *ε* = (*ν*2 − *ν*3), *ζ* = (*ν*1, *ν*3, *ν*3) and *κ* = *λ* − (0, 0, *ν*2 − *ν*3). For any *η* such that *c**ε*, *ζ**η* ≠ 0, we have that *η*2 ≤ *ν*2, with equality holding only if *η* = *ν*. By Horn inequality (11), we know that if *η*2 + *μ*2 < *ν*2 + *μ*2 = *λ*3, then *c**μ*, *η**λ* = 0. On the other hand, if *κ*3 > *μ*2 + *ν*3 then since *ζ*2 = *ζ*3 = *ν*3, we have that *κ*3 > *μ*2 + *ζ*2, which would imply that *c**μ*, *ζ**κ* = 0. Using Prop [prop:horn1] and the fact that *κ* = (*λ*1, *λ*2, *λ*3 − *ν*23) and *ζ* = (*ν*1, *ν*3, *ν*3), we have that *c**μ*, *ζ**κ* is given by the division numbers: $${\kappa}: \begin{bmatrix} {\mathfrak{d}}\_{333}-\nu\_{23} & {\mathfrak{d}}\_{222}+\nu\_{23} & 0 \\ {\mathfrak{d}}\_{333}-\nu\_{23} & {\mathfrak{d}}\_{222}+\nu\_{23} & \\ {\mathfrak{d}}\_{333}-\nu\_{23} && \end{bmatrix} \qquad \mu: \begin{bmatrix} {\mathfrak{d}}\_{333}-\nu\_{23} & {\mathfrak{d}}\_{222}+\nu\_{23} \\ {\mathfrak{d}}\_{333}-\nu\_{23} & \end{bmatrix} \qquad {\zeta}: \begin{bmatrix} 0 & \mathfrak{d}\_{111}\\ 0 & \end{bmatrix}$$ where d*i**j**k* still refers to ∣*λ**i* − *μ**j* − *ν**k*∣. Therefore, $$\begin{aligned} c^{{\kappa}}\_{\mu,{\zeta}} = [{\lambda}\_{13}+&\nu\_{23}+2r;{\mathfrak{d}}\_{333}-\nu\_{23}][{\lambda}\_{23}+\nu\_{23}+r;{\mathfrak{d}}\_{333}-\nu\_{23}][0;{\mathfrak{d}}\_{333}-\nu\_{23}] \\ &\cdot [{\lambda}\_{12}+r;{\mathfrak{d}}\_{222}-\nu\_{23}][0;{\mathfrak{d}}\_{222}-\nu\_{23}][-\mu\_{13}-2r;{\mathfrak{d}}\_{333}-\nu\_{23}] \\ &\cdot [-\mu\_{23}-r;{\mathfrak{d}}\_{333}-\nu\_{23}][-\mu\_{12}-r;{\mathfrak{d}}\_{222}-\nu\_{23}][-\nu\_{13}-r;{\mathfrak{d}}\_{111}].\end{aligned}$$ Note that since *λ*3 = *μ*2 + *ν*2 in this case, we also have that d333 − *ν*23 = *μ*23. This implies that [ − *μ*23 − *r*; d333 − *ν*23][0; d333 − *ν*23] = 1,  and so these terms can be removed from the product above. Next, we compute *c**κ*, *ε**λ* and *c**ζ*, *ε**ν* by the Pieri rule to get that $$\begin{aligned} c^{{\lambda}}\_{{\kappa},{\epsilon}} &= [{\lambda}\_{13}+2r;\nu\_{23}][{\lambda}\_{23}+r;\nu\_{23}][-{\lambda}\_{13}+\nu\_{23}+r;\nu\_{23}][-{\lambda}\_{23}+\nu\_{23};\nu\_{23}], \\ c^{\nu}\_{{\zeta},{\epsilon}} &= [\nu\_{12}+r;\nu\_{23}][-\nu\_{13}-r;\nu\_{23}].\end{aligned}$$ Therefore, we get that $$\begin{aligned} c^{{\kappa}}\_{\mu,{\zeta}}c^{{\lambda}}\_{{\kappa},{\epsilon}} &= [{\lambda}\_{13}+\nu\_{23}+2r;{\mathfrak{d}}\_{333}-\nu\_{23}][{\lambda}\_{23}+\nu\_{23}+r;{\mathfrak{d}}\_{333}-\nu\_{23}][{\lambda}\_{12}+r;{\mathfrak{d}}\_{222}-\nu\_{23}] \\ &\qquad\qquad \cdot [0;{\mathfrak{d}}\_{222}-\nu\_{23}][-\mu\_{13}-2r;{\mathfrak{d}}\_{333}-\nu\_{23}][-\mu\_{12}-r;{\mathfrak{d}}\_{222}-\nu\_{23}] \\ &\qquad\qquad \cdot [-\nu\_{13}-r;{\mathfrak{d}}\_{111}][{\lambda}\_{13}+2r;\nu\_{23}][{\lambda}\_{23}+r;\nu\_{23}] \\ &\qquad\qquad \cdot [-{\lambda}\_{13}+\nu\_{23}+r;\nu\_{23}][-{\lambda}\_{23}+\nu\_{23};\nu\_{23}] \\ &= [{\lambda}\_{13}+2r;{\mathfrak{d}}\_{333}][{\lambda}\_{23}+r;{\mathfrak{d}}\_{333}][{\lambda}\_{12}+r;{\mathfrak{d}}\_{222}][0;{\mathfrak{d}}\_{222}][-\mu\_{13}-2r;{\mathfrak{d}}\_{332}] \\ &\qquad\qquad \cdot[-\mu\_{12}-r;{\mathfrak{d}}\_{223}][-\nu\_{13}-r;{\mathfrak{d}}\_{111}],\end{aligned}$$ and so dividing by *c**ζ*, *ε**ν* gives us that $$\begin{aligned} {c^{{\lambda}}\_{\mu,\nu}}&= [{\lambda}\_{13}+2r;{\mathfrak{d}}\_{333}][{\lambda}\_{23}+r;{\mathfrak{d}}\_{333}][{\lambda}\_{12}+r;{\mathfrak{d}}\_{222}][0;{\mathfrak{d}}\_{222}][-\mu\_{13}-2r;{\mathfrak{d}}\_{332}] \\ & \qquad \quad \cdot [-\mu\_{12}-r;{\mathfrak{d}}\_{223}][-\nu\_{13}-r;{\mathfrak{d}}\_{232}][-\nu\_{12};\nu\_{23}].\end{aligned}$$ This expression corresponds to the desired division numbers. If *λ**i* = *μ**i* + *ν*1,  then *c**μ*, *ν**λ* = *c**μ*, *ζ**κ**c**κ*, *ε**λ*, where *ε* = *ω*1*ν*, *ζ* = *ν* − *ε* and *κ* = *λ* − ∣*ε*∣**e***i*. [prop:nu1] We apply Lemma [lem:path]. For any *η* such that *c**ε*, *ζ**η* ≠ 0, we have that *η*1 ≤ *ν*1, with equality holding only if *η* = *ν*. By Horn inequalities (7),(9) and (12), we know that if *η*1 + *μ**i* < *ν*1 + *μ**i* = *λ**i*, then *c**μ*, *η**λ* = 0. On the other hand, if *κ**i* > *λ**i* − ∣*ε*∣, then *ζ*1 + *μ**i* = *ν*1 − ∣*ε*∣ + *μ**i* = *λ**i* − ∣*ε*∣ < *κ**i*, which would imply that *c**μ*, *ζ**κ* = 0. If *λ**i* = *μ**i* + *ν*3,  then *c**μ*, *ν**λ* = *c**μ*, *ζ**κ**c**κ*, *ε**λ*, where *ε* = *ω*1*ν*, *ζ* = *ν* − *ε* and *κ* = *μ* + (*ν*33) + (*ν*23) − *ν*2**e***i*. [prop:nu3] We apply Lemma [lem:path]. For any *η* such that *c**ε*, *ζ**η* ≠ 0, we have that *η*3 ≥ *ν*3, with equality holding only if *η* = *ν*. By Horn inequalities (
arxiv_0000182
$0.7043 & +*118%* & *σ* = 20% & $0.6362 & $1.4534 & +*128%* & *σ* = 25% & $1.0060 & $2.4051 & +*139%* The above table illustrates the impact of investment (RPI) volatility (*σ*) on both the RCLA and S-RCLA value, assuming the same Gompertz mortality with parameters *λ* = 0, *m* = 86.3 and *b* = 9.5. Note that both the RCLA and S-RCLA are represented per guaranteed dollar of lifetime income (i.e. scaled) and that the valuation rate (and hence *μ*) is equal to 5%. We assume no deferral (*τ* = 0) and hence no bonus (*β* = 0). The table also displays the percent by which the Super RCLA exceeds the RCLA value, under various volatility assumptions and ages. Table #4illustrates the impact of investment (RPI) volatility (*σ*) on both the RCLA and S-RCLA value, assuming the same Gompertz mortality with parameters *m* = 86.3 and *b* = 9.5. Note that both the RCLA and S-RCLA are represented per guaranteed dollar of lifetime income and the valuation rate (and hence *μ*) is equal to 5%. We assume no deferral (*τ* = 0) and hence no bonus (*β* = 0). The table also displays the percent by which the Super RCLA exceeds the RCLA value, under various volatility assumptions and ages. Thus, for example, at the age of 67, under both a valuation rate *ρ* = 5% and a spending percentage *γ* = 5%, the value of an S-RCLA is between 100% and 140% greater than the value of a basic RCLA, depending on the level of volatility assumed in the RPI. It seems that under greater volatility *σ*, not only are the values of RCLA and S-RCLA higher, but the ratio between S-RCLA and RCLA is greater as well. Connection to Guaranteed Living Withdrawal Benefit (GLWB) --------------------------------------------------------- As we alluded to in the introduction, variants of RCLA derivatives are embedded within variable annuity (VA) contracts with guaranteed living income benefits (GLiBs) sold in the U.S., with variants sold in the UK, Japan, and now in Canada. This is now a market with close to $1 trillion in assets, and with annual sales of over $100 billion, in 2008. Hence the motivation for studying these products. A GLiB is a broad term that captures a wide variety of annuity riders, including the Guaranteed Minimum Withdrawal Benefit (GMWB), the Guaranteed Lifetime Withdrawal Benefit (GLWB) and the Guaranteed Minimum Income Benefit (GMIB). Thus, for example, a typical GLWB assures the policyholder that if they withdraw no more than $5 per $100 of initial investment deposit, they will be entitled to receive these $5 payments for the rest of their life regardless of the performance of the investments. They can withdraw or surrender the policy and receive the entire account value – net of withdrawals to date – at any time. On the other hand, if the account value ever hits zero the guarantee begins and the annuitant receives lifetime payments. Although in general the valuation of exotic options within retirement benefits has been analyzed by Sherris (1995) for example, these more specialized GLWB products have been studied by Dai, Kwok and Zong (2008) as well as Chen, Vetzal and Forsyth (2008) and Milevsky and Salisbury (2006). Our paper provides yet another perspective on these types of embedded options and Table #5 can now be interpreted as more than just model values for a theoretical product, but an actual estimate of the discounted value of the embedded insurance offered by a variable annuity with a guaranteed lifetime withdrawal benefit. **Table #5a** [c]||c||c||c||c||c|| Initial Purchase & Bonus, Deferral, Spending & *ρ* = 3.0% & *ρ* = 5.0% & *ρ* = 7.0% Age = 50 & *β* = 5%, *τ* = 1, *γ* = 5% & $39.5199 & $19.0804 & $8.2223 & *β* = 5%, *τ* = 7, *γ* = 5% & $40.3168 & $18.4768 & $7.5435 & *β* = 5%, *τ* = 10, *γ* = 5% & $38.8829 & $17.0176 & $6.6352 & *β* = 5%, *τ* = 15, *γ* = 5% & $34.6250 & $13.7056 & $4.8320 & *β* = 5%, *τ* = 20, *γ* = 5% & $28.5642 & $9.9539 & $3.0714 Age = 65 & *β* = 5%, *τ* = 1, *γ* = 5% & $13.0509 & $6.1738 & $2.5882 & *β* = 5%, *τ* = 7, *γ* = 5% & $10.8703 & $4.7075 & $1.8006 & *β* = 5%, *τ* = 10, *γ* = 5% & $9.0896 & $3.6539 & $1.2920 & *β* = 5%, *τ* = 15, *γ* = 5% & $5.9436 & $2.0457 & $0.6090 & *β* = 5%, *τ* = 20, *γ* = 5% & $3.1648 & $0.9087 & $0.2177 Age = 70 & *β* = 5%, *τ* = 1, *γ* = 5% & $7.1354 & $3.3118 & $1.3634 & *β* = 5%, *τ* = 7, *γ* = 5% & $5.2707 & $2.1989 & $0.8097 & *β* = 5%, *τ* = 10, *γ* = 5% & $4.0458 & $1.5467 & $0.5182 & *β* = 5%, *τ* = 15, *γ* = 5% & $2.1754 & $0.6977 & $0.1911 & *β* = 5%, *τ* = 20, *γ* = 5% & $0.8564 & $0.2258 & $0.0487 Age = 75 & *β* = 5%, *τ* = 1, *γ* = 5% & $3.2413 & $1.4654 & $0.5891 & *β* = 5%, *τ* = 7, *γ* = 5% & $2.0320 & $0.8077 & $0.2834 & *β* = 5%, *τ* = 10, *γ* = 5% & $1.3834 & $0.4970 & $0.1559 & *β* = 5%, *τ* = 15, *γ* = 5% & $0.5568 & $0.1647 & $0.0411 & *β* = 5%, *τ* = 20, *γ* = 5% & $0.1368 & $0.0329 & $0.0064 Notes: The table displays the value of a CONTINUOUS step-up Guaranteed Lifetime Withdrawal Benefit (GLWB) under a variety of bonus, deferral and withdrawal assumptions. It is the value of the Super-RCLA multiplied by the number of lifetime dollars guaranteed. The mortality is assumed Gompertz with parameters *λ* = 0, *m* = 86.3 and *b* = 9.5. Prices are risk neutral (ie. *μ* = *ρ* = *r* =  risk-free rate). **Table #5b** [c]||c||c||c||c||c|| Initial Purchase & Bonus, Deferral, Spending & *ρ* = 3.0% & *ρ* = 5.0% & *ρ* = 7.0% Age = 50 & *β* = 5%, *τ* = 1, *γ* = 5% & $22.8628 & $6.9956 & $1.3465 & *β* = 5%, *τ* = 7, *γ* = 5% & $22.6176 & $6.1226 & $1.0295 & *β* = 5%, *τ* = 10, *γ* = 5% & $21.8057 & $5.3677 & $0.8103 & *β* = 5%, *τ* = 15, *γ* = 5% & $19.5396 & $3.9733 & $0.4759 & *β* = 5%, *τ* = 20, *γ* = 5% & $16.1957 & $2.6430 & $0.2351 Age = 65 & *β* = 5%, *τ* = 1, *γ* = 5% & $5.4466 & $1.4094 & $0.2317 & *β* = 5%, *τ* = 7, *γ* = 5% & $4.2610 & $0.9040 & $0.1189 & *β* = 5%, *τ* = 10, *γ* = 5% & $3.5343 & $0.6506 & $0.0719 & *β* = 5%, *τ* = 15, *γ* = 5% & $2.2689 & $0.3214 & $0.0249 & *β* = 5%, *τ* = 20, *γ* = 5% & $1.1458 & $0.1229 & $0.0064 Age = 70 & *β* = 5%, *τ* = 1, *γ* = 5% & $2.4404 & $0.5805 & $0.0887 & *β* = 5%, *τ* = 7, *γ* = 5% & $1.6694 & $0.3148 & $0.0369 & *β* = 5%, *τ* = 10, *γ* = 5% & $1.2655 & $0.2038 & $0.0196 & *β* = 5%, *τ* = 15, *γ* = 5% & $0.6525 & $0.0793 & $0.0052 & *β* = 5%, *τ* = 20, *γ* = 5% & $0.2322 & $0.0209 & $0.0009 Age = 75 & *β* = 5%, *τ* = 1, *γ* = 5% & $0.8428 & $0.1813 & $0.0254 & *β* = 5%, *τ* = 7, *γ* = 5% & $0.4822 & $0.0793 & $0.0082 & *β* = 5%, *τ* = 10, *γ* = 5% & $0.3215 & $0.0446 & $0.0037 & *β* = 5%, *τ* = 15, *γ* = 5% & $0.1190 & $0.0122 & $0.0007 & *β* = 5%, *τ* = 20, *γ* = 5% & $0.0247 & $0.0018 & $0.0000 Notes: The table displays the value of a CONTINUOUS step-up Guaranteed Lifetime Withdrawal Benefit (GLWB) under a variety of bonus, deferral and withdrawal assumptions. It is the value of the Super-RCLA multiplied by the number of lifetime dollars guaranteed. The mortality is assumed Gompertz with parameters *λ* = 0, *m* = 86.3 and *b* = 9.5. Prices are risk neutral (ie. *μ* = *ρ* = *r* =  risk-free rate). This Table #5b is based on the RPI allocated to low volatility investments. Table #5adisplays the value of a continuous step-up (a.k.a. super) Guaranteed Lifetime Withdrawal Benefit (GLWB) under a variety of bonus, deferral and withdrawal assumptions. We assume precisely the maximum permitted withdrawals after the specified deferral, and no lapsation. Thus it is the value of the S-RCLA multiplied by the number of lifetime dollars guaranteed based on an initial deposit of $100 The mortality is assumed Gompertz with parameters *m* = 86.3 and *b* = 9.5. In contrast, Table #5bdisplays the same super GLWB, but under a low volatility of *σ* = 10%. As in Table #4b, the GLWB value is obtained by multiplying the value of a S-RCLA by the initial number of dollars guaranteed. So, for example, assume that a 65 year old deposits $100 into a VA+GLWB that offers a 5% bonus for each year that withdrawals are not made, and it offers a 5% of base” payment for life once the income begins.  The underlying base – on which the lifetime income guarantee is based – steps up in continuous time. So, if the individual intends on holding the VA+GLWB for 7 years, and then begins withdrawals, the value of this guaranteed income stream (in addition to the market value of the account itself) is $10.8703 per $100 initial deposit, under a 3% valuation rate and $4.7075 under a 5% valuation rate. This assumes the underlying VA assets are invested in a portfolio of stocks and bonds with expected volatility of *σ* = 17%. Again, note the contrast in GLWB values under a lower investment volatility of *σ* = 10% in Table #5b. The same two benefits at age *x* = 65 are valued substantially lower at $4.2610 under *ρ* = 3% and $0.9040 under *ρ* = 5%. This number comes from multiplying the S-RCLA value times five, since the initial guaranteed amount is $5. Of course, for there to be no arbitrage, the ongoing management fees charged on the initial deposit of $100 would have to cover the discounted (time zero) value of the GLWB option. Once again, the continuously stepped-up GLWB guarantee on a variable annuity policy is just a bundle of S-RCLA units plus a portfolio of managed money in a systematic withdrawal plan. As one would expect, the greater the volatility, the lower the valuation rate and the younger the individual, the higher is the value of the embedded option, at time zero. Conclusion and Discussion ========================= This paper values a type of exotic option that we christened a ruin-contingent life annuity (RCLA). The generic RCLA pays $1 per year for life, like a classical deferred annuity, but it begins making these payment only once a reference portfolio index is ruined. If this underlying reference index never hits zero, the income never starts. The rationale for buying an RCLA, and especially for a retiree without a Defined Benefit (DB) pension plan, is that it jointly hedges against financial market risk and personal longevity risk, which is cheaper than buying security against both individually. The motivation for studying the RCLA is that this exotic option is now embedded in approximately $800 billion worth of U.S. variable annuity policies. The impetus for creating stand-alone RCLA products is that they might appeal to the many soon-to-be-retired baby boomers who (i.) are not interested in paying for the entire variable annuity package, and (ii.) would be willing to consider annuitization, but only as a worst case Plan B scenario for financing retirement. Indeed, there is a substantial amount of economic and behavioral evidence – see for example the introduction to the book by Brown, Mitchell, Poterba and Warshawsky (2001) – that voluntary annuitization is unpopular as a Plan A for retirees. Thus, perhaps a cheaper annuity, and one that has a built-in deferral period might appeal to the growing masses of retirees without Defined Benefit (DB) pension plans. This was suggested recently by Webb, Gong and Sun (2007) as well, and has received attention from both practitioners and regulators – see for example Festa (2012). Our analysis is done in the classical Black-Scholes-Merton framework of complete markets and fully diversifiable market (via hedging) and longevity (via the law of large numbers) risk. We derived the PDE and relevant boundary conditions satisfied by the RCLA and some variants of the basic RCLA. We then described and used efficient numerical techniques to provide extensive estimates and display sensitivities to parameter values. Our simple valuation framework only provides a very rough intuitive sense of what these ruin-contingent life annuities might cost in real life. Of course, until a liquid and two-way market develops for these products, it is hard to gauge precisely what they will cost in a competitive market. We are currently working on extending the PDE formulation approach – by increasing the number of state variables in the problem – to deal with stochastic mortality, which might also be dependent on market returns, as well as the implications of time varying volatility, non-trivial mortality risk, and mean reverting interest rates. Likewise, we are investigating the game-theoretic implications of paying RCLA premiums continuously, as opposed to up-front. In other words, what happens when the RCLA option is purchased via installments, which then endows the option holder (annuitant) to lapse and cease payment? What is the ongoing No Arbitrage premium in this case? The option to lapse leads to a variety of interesting financial economic questions regarding the existence of equilibrium, all of which we leave for future research. As the U.S. Treasury and Department of Labor continues to encourage Defined Contribution (401k) plans to offer stand-alone longevity insurance to participants – see for example the article by Lieber (2010) in the *New York Times* – we believe that research into the optimal design and pricing of these market contingent annuities will, in itself, experience much longevity. 99 Albrecht, P. and R. Maurer (2002), Self-annuitization, consumption shortfall in retirement and asset allocation: The annuity benchmark, *Journal of Pension Economics and Finance*, Vol. 1(3), pg. 269-288 Ballotta, L. and S. Haberman (2003), Valuation of guaranteed annuity options, *Insurance: Mathematics and Economics*, Vol. 33(1), pg. 87-108 Boyle, P.P. and M. Hardy (2003), Guaranteed annuity options, *ASTIN Bulletin,* Vol. 33(2), pg. 125-152 Brown, J. R., O.S. Mitchell, J.M. Poterba and M.J. Warshawsky (2001), *The Role of Annuity Markets in Financing Retirement*, The MIT Press, Cambridge, Massachusetts. Carrière, J.F. (1994), An investigation of the Gompertz law of mortality, *Actuarial Research Clearing House*, Vol. 2, pg. 1-34 Chen, Z., K. Vetzal, and P.A. Forsyth (2008), The effect of modeling parameters on the value of GMWB guarantees, *Insurance: Mathematics and Economics*, Vol. 43(1), pg. 163-175 Dai, M., Y.K. Kwok and J. Zong (2008), Guaranteed minimum withdrawal benefit in variable annuities, *Mathematical Finance*, Vol 18(4), pg. 595-611 Davidoff, T. (2009), Housing, Health and Annuities, *Journal of Risk and Insurance*, Vol. 76(1), pg. 31-52. Dawson, P., K. Dowd, A.J.G. Cairns and D. Blake (2010), Survivor Derivatives: A Consistent Pricing Framework, *Journal of Risk and Insurance*, Vol. 77(3), pg. 579-596. Deelstra, G., M. Vanmaele, D. Vyncke (2010), Minimizing the Rik of a Financial Product Using a Put Option, *Journal of Risk and Insurance*, Vol. 77(4), pg. 767-800 Dhaene, J, M. Denuit, M.J. Goovaertz, R. Kaas, and D. Vyncke (2002), The concept of comonotonicity in actuarial science and finance: theory, *Insurance: Mathematics and Economics*, Vol 31(1), pg 3-33. Festa, E. (2012), NAIC subgroup suggests all ‘hybrid’ annuities may need reserve scrutiny, *LifeHealthPRO*, Feb. 16, 2012 – `www.lifehealthpro.com` Fong, J.H.Y., O.S. Mitchell and B.S.K. Koh (2011), Longevity risk management in Singapore’s National Pension System, *Journal of Risk and Insurance*, Vol. 78(4), pg. 961-981. Frees. E.W., J. Carrière and E. Valdez (1996), Annuity Valuation with Dependent Mortality, *The Journal of Risk and Insurance*, Vol. 63(2), pg. 229-261 Hardy, M. (2003), *Investment Guarantees*, Wiley. Huang, H., M.A. Milevsky and J. Wang (2004), Ruined moments in your life: how good are the approximations?, *Insurance: Mathematics and Economics*, Vol. 34(3), pg. 421-447 Huang, H., M.A. Milevsky and T.S. Salisbury (2009), A Different perspective on retirement income sustainability: Introducing the ruin-contingent life annuity, *Journal of Wealth Management*, Spring, pg. 1-9 Kingston, G. and S. Thorp (2005), Annuitization and asset allocation with HARA utility, *Journal of Pension Economics and Finance*, Vol. 4(3), pg. 225-248. Lieber, R. (2010), The unloved annuity gets a hug from Obama, *New York Times*, Jan. 30 2010, p. B1 Mereu, J. (1962), Annuity values directly from the Makeham constants, *Transactions of the Society of Actuaries*, Vol. 14, pg. 269-308 Milevsky, M.A. (2006), *The calculus of retirement income*, Cambridge Univ. Press Milevsky, M.A. and T.S. Salisbury (2006), Financial valuation of guaranteed minimum withdrawal benefits, *Insurance: Mathematics and Economics*, Vol. 38, pg. 21-38.. Norberg, R. (1999), Ruin problems with assets and liabilities of diffusion type, *Stochastic Processes and their Applications*, Vol. 81, pg. 255-269. Promislow, S.D. (2006), *Fundamentals of Actuarial Mathematics*, John, Wiley and Sons, West Sussex, England. Schulze, R.N. and T. Post (2010), Inividual Annuity Demand Under Aggregate Mortality Risk, *Journal of Risk and Insurance*, Vol. 77(2), pg. 423-449 Scott, J.S., J.G. Watson and W. Hu (2011), What makes a better annuity?, *Journal of Risk and Insurance*, Vol. 78(1), pg. 213-244. Sherris, M. (1995), The Valuation of Option Features in Retirement Benefits, *The Journal of Risk and Insurance*, Vol. 62 (3), pg. 509-534 Stone, C.A. and A. Zissu (2006), Securitization of Senior Life Settlements: Managing Extension Risk, Journal of Derivatives, Spring 2006. Sun, W. and A. Webb (2011), Valuing the Longevity Insurance Acquired by Delaying Claiming of Social Security, *Journal of Risk and Insurance*, Vol. 78(4), pg. 907-929. Webb, A., G. Gong and W. Sun (2007), An annuity people might actually buy, *Centre for Retirement Research at Boston College*, Working Paper, July 2007, No. 7-10 Young, V.R. (2004), Optimal investment strategy to minimize the probability of lifetime ruin, *North American Actuarial Journal*, Vol. 8(4), 106-126. --- 1. Huang is Professor of Mathematics and Statistics at York University. Milevsky is Associate Professor of Finance, York University, and Executive Director of the IFID Centre. Salisbury is Professor of Mathematics and Statistics at York University, all in Toronto, Canada. The contact author (Milevsky) can be reached via email at: [email protected]. The authors acknowledge the helpful comments of seminar participants at the Department of Risk Management and Insurance at The Wharton School, as well as seminar participants at Monash University, Melbourne, The University of New South Wales and the University of Technology, Sydney. In particular the authors would like to acknowledge helpful comments from Carl Chiarella, Neil Doherty, Olivia Mitchell, and Eckhard Platen. Huang’s and Salisbury’s research is supported in part by NSERC and MITACS.[↩](#fnref1) 2. There are some exceptions, for example the 2006 article in the Journal of Derivatives by Stone and Zissu on the topic of securitizing life insurance settlements.[↩](#fnref2) 3. The evolution of retirement wealth implied by equation ([RPI.eq1]) is often studied as an alternative to annuitization in the pension and retirement planning literature. See, for example, the paper by Albrecht and Maurer (2002) or Kingston and Thorp (2005), in which *γ**I*0 is set equal to the relevant SPIA factor times the initial wealth at retirement.[↩](#fnref3) Valuation and Hedging of theRuin-Contingent Life Annuity (RCLA) =============================================================== ### Version: 16 May 2012 *VALUATION AND HEDGING OF THE RUIN-CONTINGENT LIFE ANNUITY* This paper analyzes a novel type of mortality contingent-claim called a ruin-contingent life annuity (RCLA). This product fuses together a path-dependent equity put option with a “personal longevity” call option. The annuitant’s (i.e. long position) payoff from a generic RCLA is $1 of income per year for life, akin to a defined benefit pension, but deferred until a pre-specified financial diffusion process hits zero. We derive the PDE and relevant boundary conditions satisfied by the RCLA value (i.e. the hedging cost) assuming a complete market where No Arbitrage is possible. We then describe some efficient numerical techniques and provide estimates of a typical RCLA under a variety of realistic parameters. The motivation for studying the RCLA on a stand-alone basis is two-fold. First, it is implicitly embedded in approximately $1 trillion worth of U.S. variable annuity (VA) policies; which have recently attracted scrutiny from financial analysts and regulators. Second, the U.S. administration – both Treasury and Department of Labor – have been encouraging Defined Contribution (401k) plans to offer stand-alone longevity insurance to participants, and we believe the RCLA would be an ideal and cost effective candidate for that job. Introduction ============ Among the expanding universe of derivative securities priced off non-financial state variables, a recent innovation has been the mortality-contingent claim. As its name suggests, a mortality-contingent claim is a derivative product whose payoff is dependent or linked to the mortality status of an underlying reference life or pool of lives. The simplest and perhaps the most trivial mortality-contingent claim is a personal life insurance policy with a face value of one million dollars for example. In this case, the underlying state variable is the (binary) life status of the insured. If and when it jumps from the value of one (alive) to the value of zero (dead) the beneficiary of the life insurance policy receives a payout of one million dollars. Another equally trivial example is a life or pension annuity policy which provides monthly income until the annuitant dies. Payment for these options can be made up-front, as in the case of a pension income annuity, or by installments as in the case of a life insurance policy. Indeed, the analogy to credit default swaps is obvious and it is said that much of the technology – such as Gaussian copulas and reduced form hazard rate models – which are (rightfully or wrongfully) used for pricing credit derivatives can be traced to the actuarial science behind the pricing of insurance claims. Yet, in the past these pure mortality-contingent claims have been (perhaps rightfully) ignored[2](#fn2) by the mainstream quant community primarily because of the law of large numbers. It dictates that a large-enough portfolio of policies held by a large insurance company should diversify away all risk. Under this theory pricing collapsed to rather trivial time-value-of-money calculations based on cash-flows that are highly predictable in aggregate. However this conventional viewpoint came into question when, in the early part of this decade, a number of large insurance companies began offering equity put options with rather complex optionality that was directly tied to the mortality status of the insured. These variable annuity (VA) policies, as they are commonly known, have been the source of much public and regulatory consternation in late 2008 and early 2009, as the required insurance reserves mushroomed. An additional source of interest, not directly addressed in this paper, is the emergence of actuarial evidence that mortality itself contains a stochastic component. See, for example, Dawson, Dowd, Cairns and Blake (2010), or Schulze, Post (2010). Motivated by all of this, in this paper we value and provide hedging guidance on a type of product called a ruin-contingent life annuity (RCLA). The RCLA provides the buyer with a type of insurance against the joint occurrence of two separate (and likely independent) events; the two events are *under average* investment returns and *above average* longevity. The RCLA behaves like a pension annuity that provides lifetime income, but only in *bad* *economic* scenarios. In the good scenarios, properly defined, it pays nothing. The RCLA is obviously (much) cheaper than a generic life annuity which provides income under all economic scenarios. We will argue that the RCLA is a fundamental mortality-contingent building block of all VA income guarantees in the sense that it is not muddled by tax-frictions and other institutional issues. At the same time it retains many of the real-world features embedded within these policies. At the very least this article should provide an introduction to what we label *finsurance* – products that combine financial and insurance options in one package. Research into longevity insurance and life annuities in general, has increased in prominence and intensity – especially within the scholarly literature – during the last decade or so. Indeed, there is a growing awareness that most individuals are endowed with some form of longevity insurance – in the form of government social security – and must figure out how to optimize its usage. See, for example, Sun and Webb (2011) for a recent discussion of this within the content of delaying Social Security. Researchers are trying to develop a better understand how other assets might reduce the demand for longevity insurance, see for example Davidoff (2009). Many countries are struggling with the question of how to properly design a life annuity market. See for example Fong, Mitchell and Koh (2011). In this paper we take a slightly different approach and discuss product innovation. In a recent article, Scott, Watson and Hu (2011) discussed the characteristics that make-up an ideal (or better) annuity. Using microeconomic welfare analysis, they concluded that innovation in the field should focus on developing products that add survival contingencies to assets commonly held by individuals in retirement. Our current paper is along the same lines in that we actually construct and actually price such a product. Huang, Milevsky and Salisbury (2009) motivated the need for a stand-alone ruin-contingent life annuity (RCLA), albeit without deriving any valuation relationships. Practitioners and regulators have gone on to discuss the framework for offering such products (motivated in part by the above article), under such names as *contingent deferred annuities* or *hybrid annuities* – see for example Festa (2012). In this article we provide the valuation and hedging machinery for the RCLA, in a complete market setting (i.e. assuming no arbitrage). In terms of its position within the actuarial and finance literature, the RCLA is effectively a type of annuity option, and so this work is related to Ballotta and Haberman (2003), Deelstra, Vanmaele and Vyncke (2010), as well as Hardy (2003) or Boyle and Hardy (2003) in which similar complete market techniques are relied upon. In a subsequent paper we plan to describe the impact of incomplete markets and other frictions. How Does the RCLA Work? ----------------------- The RCLA is based on a reference portfolio index (RPI), a.k.a. the state variable, upon which the income/pension annuity start-date is based. The RPI is initiated at an artificial level of $100, for example, and consists of a broad portfolio of stocks (for example the SP500 or Russell 3000 Index). However at the end of each day, week or month the RPI is adjusted for total returns (plus or minus) and by a fixed cash outflow (minus) that reduces the RPI. The cash outflow can be constant in nominal terms or constant in real terms or something in between. The income annuity embedded within the RCLA begins payments if-and-when the RPI hits zero. Figure #1 provides an example of a possible sample-path for the RPI in discrete time. [figure1] Here is a detailed example that should help explain the mechanics of the RPI and the stochastic annuity start date. Assume that the Russell 3000 index is at a level of $100 on January 1st, 2009. If under a pre-specified withdrawal rate of $7 we assume that during January 2009 the Russell 3000 total return was a nominal 2%, then the level of a vintage 2009 RPI on the first day of February 2009 would be $\$100(1.02)-(7/12)=\$101.\,\allowbreak42$. The annual withdrawal rate of $7 is divided by 12 to create the monthly withdrawal, which can also be adjusted for inflation. The same calculation algorithm continues each month. Think of the RPI as mimicking the behavior of a retirement drawdown portfolio. Now, if and when this (vintage) 2009 RPI ever hits zero, the insurance company would then commence making $1 for life payments (either nominal or inflation-adjusted) to the annuitant who bought the product in January 2009, as long as they were still alive. Figure #2 graphically illustrates how the performance of the RPI would trigger the lifetime income payment. Under path #1 in which the RPI hits zero twenty years after purchase, the income would start at the age of 80. Under path #2 where the RPI never hits zero, the annuitant would receive nothing from the insurance company. [figure2] A generic RCLA is defined in units of $1, so if the annuitant purchased 7 units, they would continue to receive the same $7 of income without any disruptions to their standard of living. At inception the retiree buying the RCLA could select from a range of withdrawal rates, for example 5%, 6% or 7%, assuming the insurance company was willing to offer a menu of spending rates (at different prices, of course.) Likewise, the annuitant could specify nominal payments of $1 for life or real payments of $1 for life, which would obviously impact pricing as well. To be precise, and when necessary, we will use the notation *W**t* = *W**t*(*z*, *γ*) to denote the level/value of the reference portfolio index in year (*z* + *t*), where the initial withdrawal rate in year *z*, was set at *γ* percent of the initial value *I*0. In other words, *W**t* is the state variable underlying the derivative’s payout function. It is worth pointing out that, from the point of view of the insurance company offering an RCLA, this is a complete-markets product, that can be perfectly hedged (at least under our assumptions). Thus the price or value we will compute below is really measuring the company’s hedging cost. This may differ from the economic value an individual client places on the product, since from the client’s point of view, the market is incomplete and mortality risk is unhedgeable. What makes a hedge possible for the company is the law of large numbers – after selling many individual contracts, the total cash flows due to mortality become essentially deterministic, leaving only cash flows due to market fluctuations to be hedged. We will comment further on this issue below. Agenda for the Paper -------------------- In section #2 we briefly review the pricing of generic life annuities, which also helps introduce notation and provides some basic intuition for the RCLA. Section #3 formally introduces the concept of ruin under the relevant diffusion process, which becomes the trigger for the RCLA. Section #4, which is the core of the paper, introduces, values, and then describes the hedge for a basic RCLA. Section #5 describes some advanced products in which the payoff and ruin-trigger are non-constant. It also discusses the connection between RCLA values and the popular *Guaranteed Living Withdrawal Benefits* (GLWB) that are sold with variable annuity (VA) products in the U.S. We provide numerical examples in all sections and then conclude the paper in Section #6 with some direction for future research. Valuation of the Income Annuity =============================== In this section we very briefly review the valuation of single premium immediate (income) annuities, mainly in order to introduce notation and terminology for the remainder of the paper and provide background for those unfamiliar with mortality-contingent claims. We refer the interested reader to any basic actuarial textbook, such as Promislow (2006) or Milevsky (2006), for the assumptions we gloss over. The value of a life annuity that pays $1 per annum in continuous-time, is denoted by ALDA(*τ*; *ρ*, *x*) , where *x* denotes the purchase age, *ρ* denotes the (insurance company) valuation discount rate and *τ* is the start date. The ALDA is an acronym for *Advanced Life Deferred Annuity.* When the ALDA start date is immediate (*τ* = 0) we have the more familiar concept of a *Single Premium Immediate Annuity*, whose value is SPIA(*ρ*, *x*) :  = ALDA(0; *ρ*, *x*). Either way, the annuity valuation factor is equal to: ALDA(*τ*; *ρ*, *x*) :  = *E*[∫*τ**T**x**e*− *ρ**t**d**t*] = *E*[∫*τ*∞1{*T**x* > *t*}*e*− *ρ**t**d**t*] = ∫*τ*∞*t**p**x**e*− *ρ**t**d**t*,  where *T**x* denotes the future lifetime random variable conditional on the current (purchase) age *x* of the annuitant and (*t**p**x*) denotes the conditional probability of survival to age (*x* + *t*). In the above expression *τ* is deterministic and denotes the deferral period before the insurance company begins making lifetime payments to the annuitant. It is an actuarial identity that: ALDA(*τ*; *ρ*, *x*) :  = SPIA(*ρ*, *x* + *τ*) × *τ**p**x* × *e*− *ρ**τ*,  which is the product of the age–(*x* + *τ*) SPIA factor multiplied by the conditional probability of surviving to age (*x* + *τ*) multiplied by the relevant discount factor *e*− *ρ**τ*. In other words, the cost of a deferred annuity can be written in terms of an (older) immediate annuity, the survival probability and the discount rate. This actuarial identity will be used later when *τ* itself is randomized. Note that the expectation embedded within equation ([ALDA.eq1]) is taken with respect to the physical (real world) measure underlying the distribution of *T**x*, which, due to the law of large numbers and the ability to eliminate all idiosyncratic mortality risk is also equal to the risk-neutral measure. While outside the scope of this paper which deals exclusively with complete markets, in the event the realized force of mortality itself is stochastic, it may in fact generate a mortality risk premium in which case the physical (real world) and risk-neutral measure might not be the same. We leave this for other research. Under any continuous law of mortality specified by a deterministic function *λ**t* > 0, the expectation in equation ([ALDA.eq1]), the annuity factor, can be re-written as: ALDA(*τ*; *ρ*, *x*) = ∫*τ*∞ *e*− ∫0*t**λ**q* *d**q**e*− *ρ**t* *d**t* = ∫*τ*∞*e*− ∫0*t*(*λ**q* + *ρ*)*d**q**d**t*. For most of the numerical examples within the paper we will assume that *λ**t* obeys the Gompertz-Makeham (GM) law of mortality. The canonical GM force of mortality (see the paper by Carrière (1994) or Frees, Carrière and Valdez (1996) for example), can be represented by: $$\lambda\_{t}=\lambda+\frac{1}{b}e^{\left( \frac{x+t-m}{b}\right) }, \label{GM.eq1}$$ where *λ* ≥ 0 is a constant non-age dependent hazard rate, *b* > 0 denotes a dispersion coefficient and *m* > 0 denotes a modal value. Our notation for *λ**t* assumes four embedded parameters: the current age *x*, *λ*, *m* and *b*. Note that when *m* → ∞, and *b* > 0, the GM collapses to a constant force of mortality *λ*, and the future lifetime random variable is exponentially distributed. We will obtain some limiting expressions in this case. For the more general and practical GM law, our RCLA valuation expressions will be stated as solutions to a PDE. As far as the basic ALDA factor is concerned, in the case of GM mortality, one can actually obtain a closed-form expression for equation ([ALDA.eq2]), which – to our knowledge – was first suggested by Mereu (1962). See Milevsky (2006) for a derivation that: $$\text{ALDA}(\tau;\rho,x,\lambda,m,b)=\frac{b\Gamma(-(\lambda+\rho)b,\exp \{\frac{x-m+\tau}{b}\})}{\exp\left\{ (m-x)(\lambda+\rho)-\exp\left\{ \frac{x-m}{b}\right\} \right\} }, \label{GOMA.factor}$$ where all the input variables are now explictely listed in the arguments of the ALDA function, and Γ(*x*, *y*) denotes the incomplete Gamma function. The annuity factor itself is a decreasing function of age *x*, deferral period *τ*, and the valuation rate *ρ*. To see this, Figure #3 plots the annuity factor in equation ([GOMA.factor]), for a continuum of ages from *x* = 40 to *x* = 80 assuming the valuation rates, *ρ* = 3%, 5% and 7% and *τ* = 0 deferral period. [figure3] Table #1 displays some numerical values for a basic SPIA (immediate) and ALDA (deferred) annuity factor, under the Gompertz Makeham (*m* = 86.3, *b* = 9.5) continuous law of mortality. For example under an insurance valuation rate of *ρ* = 5%, at the age of *x* = 40, a buyer pays $16.9287 for an income stream of $1 per year for life, starting immediately. If the annuity is purchased at the same age but the start of income is delayed for *τ* = 10 years, the buyer pays $9.1010 for $1 per year for life, starting at age 50. In contrast, under the same *r* = 5% rate, at age 65 the annuity value is $11.3828 per dollar of lifetime income, starting immediately and only $4.0636 if the start of the income is deferred for *τ* = 10 years. In general, for higher valuation rates, advanced ages and longer deferral periods, the annuity factor is lower. Note that the above Gompertz-Makeham assumptions imply the conditional expectation of life at age 65 is 18.714 years, which can be easily obtained by substituting an insurance valuation rate of *ρ* = 0% into the annuity factor. Note that no death benefits or guarantee periods are assumed in these valuation expressions. Thus, the occurrence of death prior to the end of the deferral period will result in a complete loss of premium. **Table #1** [c]||c||c||c||c||c|| Purchase Age & Deferral & *ρ* = 3% & *ρ* = 5% & *ρ* = 7% Age = 40 & *τ* = 0 yrs. & $23.0144 & $16.9287 & $13.1126 & *τ* = 5 yrs. & $18.3822 & $12.5148 & $8.9034 & *τ* = 10 yrs. & $14.4228 & $9.1010 & $5.9575 & *τ* = 20 yrs. & $8.2124 & $4.4665 & $2.4877 Age = 50 & *τ* = 0 yrs. & $19.7483 & $15.2205 & $12.1693 & *τ* = 5 yrs. & $15.1364 & $10.8256 & $7.9778 & *τ* = 10 yrs. & $11.2448 & $7.4697 & $5.0815 & *τ* = 20 yrs. & $5.3714 & $3.0815 & $1.7921 Age = 65 & *τ* = 0 yrs & $13.6601 & $11.3828 & $9.6609 & *τ* = 5 yrs. & $9.1653 & $7.0974 & $5.5719 & *τ* = 10 yrs. & $5.6499 & $4.0636 & $2.9515 & *τ* = 20 yrs. & $1.3886 & $0.8577 & $0.5320 Age = 75 & *τ* = 0 yrs. & $9.2979 & $8.1680 & $7.2460 & *τ* = 5 yrs. & $5.0620 & $4.1250 & $3.3839 & *τ* = 10 yrs. & $2.2852 & $1.7240 & $1.3062 & *τ* = 20 yrs. & $0.1645 & $0.1055 & $0.0677 Note: Table displays the basic annuity factor – with no market contingencies – which is the actuarial present value per $1 of annual income (in continuous time) for life. The mortality is assumed Gompertz with parameters *λ* = 0, *m* = 86.3 and *b* = 9.5. Prices are risk neutral (ie. *μ* = *ρ* = *r* =  risk-free rate). No death benefits or guarantee periods are assumed. Thus, a death prior to the end of the deferral period will result in a complete loss of premium. The ruin-contingent life annuity and its variants, which we will formally define in the next section, can be viewed as generalizations of the ALDA factor, but where the deferral period *τ* is *stochastic* and tied to the performance of a reference portfolio index. Retirement Spending and Lifetime Ruin ===================================== The RCLA is an income annuity that begins payment when a reference portfolio index (RPI) hits zero, or is ruined. In this section we describe the mechanics of the state variable which triggers the payment. To begin with we assume investment returns are generated by a lognormal distribution so that the RPI obeys the classic “workhorse” of financial economics: *d**W**t* = (*μ**W**t* − *γ**I*0)*d**t* + *σ**W**t**d**B**t*,   *W*0 = *I*0. The parameter *μ* denotes the drift rate and *σ* denotes the diffusion coefficient. The constant *γ**I*0 denotes the annual spending rate underlying the RPI. Note that when *γ* = 0 the process *W**t* collapses to a geometric Brownian motion (GBM) which can never access zero in finite time. The presence of *γ* reduces the drift and makes zero accessible in finite time. The greater the value of *γ* the greater is the probability, all else being equal, that *W**t* hits zero[3](#fn3). We define the ruin time *R* of the diffusion process as a hitting-time or level-crossing time, which should be familiar from the classical insurance or queueing theory literature. Formally it is defined as: *R* :  = inf{*t*; *W**t* ≤ 0 ∣ *W*0 = *I*0}. There is obviously the possibility that *R* = ∞ and the RPI never hits zero. See the paper by Huang, Milevsky and Wang (2004) or the paper by Dhaene, Denuit, Goovaerts, Kaas and Vyncke (2002), as well as Norberg (1999), for a detailed and extensive description of the various analytic and moment-matching techniques that can be used to compute the probability distribution of *R*. Likewise, see the paper by Young (2004) for a derivation of asset allocation control policies on (*μ*, *σ*) that can be used to minimize ruin probabilities within the context of retirement spending. Our focus is not on controlling *R* or explicitly estimating Pr[*T**x* ≥ *R*] which is the lifetime ruin probability. We are simply interested in using *R* as a deferral time for an income annuity. The Ruin-Contingent Life Annuity (RCLA) ======================================= Like the generic annuity, the ruin-contingent life annuity (RCLA) is acquired with a lump-sum premium now, and eventually pays $1 of income per year for life. However, the income payments do not commence until time *τ* = *R*, when the reference portfolio index (RPI) hits zero. And, if the RPI never hits zero – or the annuitant dies prior to the RPI hitting zero – the RCLA expires worthless. Thus, the defining structure of the RCLA is similar to the annuity factor in equation ([ALDA.eq1]), albeit with a stochastic upper *and* lower bound: RCLA(*I*0; *ρ*, *x*, *λ*, *m*, *b*, *γ*, *μ*, *σ*, *τ*) = *E*[∫*R**R* ∨ *T**x**e*− *ρ**t**d**t*] The $1 of annual lifetime income starts at time *R* and continues until time max{*R*, *T**x*}. Thus, if the state-of-nature is such that *T**x* < *R*, and the annuitant is dead prior to the ruin time, the integral from *R* to *R* is zero and the payout is zero. Each RCLA unit entitles the annuitant to $1 of income. Thus, if one thinks of an RCLA as insuring a *γ**I*0 drawdown plan, then buying *γ**I*0 RCLA units, would continue to pay *γ**I*0 dollars upon ruin. Now, in order to derive a valuation relationship for the RCLA defined by equation ([RCLA.eq1]) we do the following. First, we simplify notation by writing the annuity factor ALDA(*ξ*; *ρ*, *x*) as *F*(*ξ*). In other words, *F*(*ξ*) = ∫*ξ*∞*t**p**x**e*− *ρ**t* *d**t* = *E*[∫*ξ**ξ* ∨ *T**x**e*− *ρ**t**d**t* ∣ *ξ*] Our problem then becomes to calculate: $$E[F(R)]=E\left[ E\Big[{\displaystyle\int\_{R}^{R\vee T\_{x}}} e^{-\rho t}dt\mid R\Big]\right] =E\left[ {\displaystyle\int\_{R}^{R\vee T\_{x}}}e^{-\rho t}dt\right] =\text{RCLA}(I\_0). \label{RCLA.eq3}$$ Note that once again we rely on the law of large numbers – from the perspective of the insurance company – to diversify away any idiosyncratic longevity risk and value the RCLA based on (subjective, physical) mortality expectations. Now, if F*t* is the filtration generated by *W**t*, the reference portfolio index, then *E*[*F*(*R*) ∣ F*t*] is a martingale in *t*. By the Markov property, it can be represented in the form *f*(*t* ∧ *R*, *W**t*), so applying Ito’s lemma leads to the familiar (Kolmogorov backward) PDE: $$f\_{t}+(\mu w-\gamma I\_{0})f\_{w}+\frac{1}{2}\sigma^{2}w^{2}f\_{ww}=0 \label{RCLA.eq4}$$ for *w* > 0 and *t* > 0. We now have an expression for ([RCLA.eq1]) as RCLA(*I*0) = *f*(0, *I*0). Equation ([RCLA.eq4]) differs from the famous Black-Scholes-Merton PDE by the presence of the *γ**I*0 constant multiplying the space derivative *f**w*. Also, our boundary conditions are different from the linear ones for call and put options. Two of our boundary conditions are that *f*(*t*, *w*) → 0 as either *t* → ∞ or *w* → ∞. Intuitively, the RCLA is worthless in states of nature where the underlying RPI never gets ruined, and/or only gets ruined after the annuitants have all died. The boundary condition we require is that *f*(*t*, 0) = *F*(*t*), defined by equation ([RCLA.eq2]). The intuition here is that if-and-when the RPI hits zero at some future time *ξ*, a live annuitant will be entitled to lifetime income whose actuarially discounted value is the annuity factor *F*(*ξ*). Moreover, when *λ**t* = *λ* is constant we recover the simple expression *F*(*ξ*) = *e*− (*λ* + *ρ*)*ξ*/(*λ* + *ρ*) and one can simplify the entire problem to obtain a solution of the form *f*(*t*, *w*) = *e*− (*λ* + *ρ*)*t**h*(*w*), where the new one-dimensional function *h*(*w*) satisfies the ODE: $$(\mu w-\gamma I\_{0})h\_{w}(w)+\frac{1}{2}\sigma^{2}w^{2}h\_{ww}(w)-(\lambda +\rho)h(w)=0, \label{RCLA.ode}$$ where *h**w* and *h**w**w* denote the first and second derivatives respectively. The two boundary conditions are *h*(∞) = 0 and *h*(0) = 1/(*λ* + *ρ*). But, when *λ**t* is non-constant and obeys the full GM law, we must use the full expression $F(\xi)=\left. \_{\xi}\overline{a}\_{x}(\rho)\right. $ for the boundary condition, which was displayed in equation ([GOMA.factor]). Note that we then have a parabolic PDE, which can be solved numerically. Note that in both equations ([RCLA.eq4]) and ([RCLA.ode]) we maintain a distinction between the drift rate *μ* and the insurance valuation rate *ρ*. One reason for doing so is to leave open the possibility of using our valuation equation to calculate the expected RCLA returns under the physical measure, in which *μ* could be the growth rate under the physical measure even if *ρ* = *r* is the risk-free interest rate. Another reason is that even if we are interested in calculating prices (or the costs of manufacturing or hedging the products), and so take *μ* = *r* to be the risk-free interest rate, an RCLA contract could still in principal specify a different value for the insurance valuation rate *ρ*. We will discuss this further in Section [hedgingsection]. However, in our numerical examples below we will take *μ* = *ρ* = *r* (the risk-free rate) as in the Black-Scholes-Merton economy, etc. There are also extensions of this analysis that should be possible. It would be natural, given this product’s role in retirement savings, to incorporate real inflation adjustment factors into the RCLA payouts. Since the product is envisioned as having a long horizon, it would also be worthwhile to incorporate stochastic volatility into the model for the underlying asset price, as well as variable interest rates. Finally, we have assumed complete diversification of mortality risk, due to the law of large numbers and the sale of a very large number of contracts. This is only a first approximation to actuarial practice, in which adjustments are made to account for the non-zero mortality risk still present when only a finite number of contracts are sold. We hope to treat several of these effects in subsequent work, but note that in some cases this means moving to techniques suitable for incomplete markets. Solution Technique ------------------ To solve the PDE for *f*(*t*, *w*) which is displayed in equation ([RCLA.eq4]), we first use the following transformation: *f*(*t*, *w*) = *F*′(*t*)*u*(*t*, *w*),  where without any loss of generality *u*(*t*, *w*) is defined as a new (possibly) two-dimensional function. By taking partial derivatives and the chain rule, it is easy to verify that: *f**t* = *F*′′*u* + *F*′*u**t*,  *f**w* = *F*′*u**w*,  *f**w**w* = *F*′*u**w**w*,  where once again we use shorthand notation *f**t*, *f**w* and *f**w**w* for the three derivatives of interest. By substituting equation ([solution.eq2]) into equation ([RCLA.eq4]), the valuation PDE for *f*(*t*, *w*) can be written in terms of the known function *F*(*t*) and the yet-to-be-determined function *u*(*t*, *w*) as: $$\frac{F^{\prime\prime}}{F^{\prime}}u+(\mu w-\gamma I\_{0})u\_{w}+\frac{1}{2}\sigma^{2}w^{2}u\_{ww}+u\_{t}=0. \label{solution.eq3}$$ Now, since by construction, *F*(*ξ*) = ∫*ξ*∞*e*− ∫0*s*(*λ**q* + *ρ*)*d**q**d**s*,  we have that *F*′(*ξ*) =  − *e*− ∫0*ξ*(*λ**q* + *ρ*)*d**q**d**s*,  *F*′′(*ξ*) =  − (*λ**ξ* + *r*)*F*′(*ξ*). Thus, expressed in units of time *t*, the PDE for *u*(*t*, *w*) becomes $$-(\lambda\_{t}+\rho)u+(\mu w-\gamma I\_{0})u\_{w}+\frac{1}{2}\sigma^{2}w^{2}u\_{ww}+u\_{t}=0, \label{solution.eq6}$$ where *u* is shorthand for *u*(*t*, *w*), and *u**t*, *u**w*, *u**w**w* are shorthand notations for the time, space and second space derivatives, respectively. Now, going back to the decomposition of *f*(*t*, *w*) in equation ([solution.eq1]), and using the boundary condition for *f*(*t*, *w*) at *w* = 0, we have *F*(*t*) = *f*(*t*, 0) = *F*′(*t*)*u*(*t*, 0),  and *F*′(*t*) = *F*′(*t*)*u**t*(*t*, 0) + *F*′′(*t*)*u*(*t*, 0),  from which we obtain *u**t*(*t*, 0) = (*λ**t* + *ρ*)*u*(*t*, 0) + 1. For the numerical procedure, we first generate values of *u*(*t*, *w*) by solving equation ([solution.eq6]) with boundary conditions from equation ([solution.eq9]) and condition *u*(*w*, *t*) → 0 as *w* → ∞ and *t* → ∞. Then we multiply *u*(*t*, *w*) by *F*′(*t*) to generate the RCLA values of *f*(*t*, *w*). [figure4] If necessary, values can also be calculated simultaneously for multiple values of *γ* by rescaling. This is the case, for example, in the numerical examples and tables found below. We let *w̃* = *w*/*γ**I*0 and define *ũ*(*t*, *w̃*) = *u*(*t*, *w*). Then the PDE for *ũ* is seen to be $$-(\lambda\_{t}+\rho)\tilde{u}+(\mu\tilde{w}-1)\tilde{u}\_{\tilde{w}}+\frac{1}{2}\sigma^{2}\tilde{w}^{2}\tilde{u}\_{\tilde{w}\tilde{w}}+\tilde{u}\_{t}=0, \label{solution.eq10}$$ with the same boundary conditions as before. The parameter *γ* no longer appears, so only one PDE needs to be solved, after which we can calculate $$f(t,w)=F^{\prime}(t)u(t,w)=F^{\prime}(t)\tilde{u}\Big(t,\frac{w}{\gamma I\_{0}}\Big)$$ for any desired value of *γ*. In fact, we will drop the tilde notation, since *ũ* is just *u* in the special case *γ**I*0 = 1. Thus, if we have computed that particular function *u* we then get RCLA values for other *γ*’s as RCLA(*I*0) = *F*′(0)*u*(0, *I*0/*γ**I*0) = *F*′(0)*u*(0, 1/*γ*). In Figure #4 we plot *f*(*t*, *w*), which is the RCLA value, assuming *μ* = *ρ* = *r* = 0.06 (i.e. for risk neutral pricing) *m* = 86.3, *b* = 9.5 and *x* = 40 (all three embedded mortality parameters) and *λ* = 0.003, which is the age-independent component of the Gompetz-Makeham law, and finally *σ* = 0.1. The computation is done by solving the equation for *u*(*t*, *w*) for 0 < *t* < 80 (corresponding to a maximum age of death of 120) and 0 < *w* < 50, and using a normalized value of *γ**I*0 = 1. As mentioned above, the function *f*(*t*, *w*) is recovered by multiplying *u*(*t*, *w*) by *F*′(*t*), evaluated by numerical quadrature based on Simpson’s rule. We can then use *f* to value RCLA’s with different withdrawal rates. Thus, for example, the point *f*(0, 10) corresponds to the price of a $1 per year for life RCLA, purchased at the age of 40, assuming a spending rate of *γ* = 1/10 = 10% of the RPI level *I*0 = 100. Note that we experimented with different domain sizes up to *w* = 100 and no visible differences in results were observed, relative to the case when *w* = 50. (A single-run took a few seconds for a grid resolution of *δ**w* = 0.1 on a MacBook Pro.) Note that when *λ**t* = *λ* is a constant, *u**t* = 0 and we recover the above-referenced special case mentioned prior to equation ([RCLA.ode]). Numerical Examples ------------------ Table #2adisplays the (risk neutral) value of the RCLA – which pays $1 per year of lifetime income – assuming the Reference Portfolio Index (RPI) is allocated to LOW volatility investments with *σ* = 10%. The spending *γ* denotes the fixed percentage of the initial RPI level *I*0 that is withdrawn annually (and in continuous time) until ruin. When *γ* = ∞ the RPI hits zero immediately and the RCLA collapses to a basic annuity priced in Table #1. The mortality is assumed Gompertz with parameters *m* = 86.3 and *b* = 9.5. Thus, for example, at the age of 65 the value of a 5% withdrawal RCLA on a low volatility index is $0.6872 under a valuation rate of *ρ* = 3% and a mere $0.1384 under a valuation rate of *ρ* = 5%. In fact, even at the young age of *x* = 50 and under a relatively high spending percentage of *γ* = 7%, the value of the RCLA is only $2.4921 per dollar of lifetime income upon ruin, under the 5% valuation rate. Predictably, at advanced ages the same 7% withdrawal RCLA is valued at only a fraction of this cost. For example, at age *x* = 75,  and under a valuation rate *ρ* = 5%,  the value of the RCLA is only $0.1965. This is the impact of low (*σ* = 10%) investment volatility; naturally when *σ* and *γ* are low, the probability of lifetime ruin is very small. In contrast, Table #2bwhich is identical in structure to #2a displays the (risk neutral) value of the RCLA assuming the Reference Portfolio Index (RPI) is allocated to high volatility investments with *σ* = 25%. Once again the RPI spending rate *γ* denotes the fixed percentage withdrawn. **Table #2a** [c]||c||c||c||c||c|| Initial Purchase & RPI Spending & *ρ* = 3.0% & *ρ* = 5.0% & *ρ* = 7.0% Age = 50 & *γ* = ∞ & $19.7483 & $15.2205 & $12.1693 & *γ* = 10% & $10.0297 & $5.5770 & $2.7307 & *γ* = 7% & $6.3444 & $2.4921 & $0.6928 & *γ* = 6% & $4.6797 & $1.4549 & $0.2887 & *γ* = 5% & $2.9226 & $0.6470 & $0.0820 & *γ* = 4% & $1.3642 & $0.1853 & $0.0129 & *γ* = 3% & $0.3716 & $0.0249 & $0.0008 Age = 65 & *γ* = ∞ & $13.6601 & $11.3828 & $9.6609 & *γ* = 10% & $4.7321 & $2.6623 & $1.2869 & *γ* = 7% & $2.2498 & $0.8381 & $0.2217 & *γ* = 6% & $1.3972 & $0.4024 & $0.0758 & *γ* = 5% & $0.6872 & $0.1384 & $0.0168 & *γ* = 4% & $0.2294 & $0.0282 & $0.0019 & *γ* = 3% & $0.0385 & $0.0024 & $0.0001 Age = 75 & *γ* = ∞ & $9.2979 & $8.1680 & $7.2460 & *γ* = 10% & $1.7928 & $0.9691 & $0.4433 & *γ* = 7% & $0.5818 & $0.1965 & $0.0476 & *γ* = 6% & $0.2930 & $0.0752 & $0.0130 & *γ* = 5% & $0.1094 & $0.0194 & $0.0022 & *γ* = 4% & $0.0253 & $0.0027 & $0.0002 & *γ* = 3% & $0.0026 & $0.0001 & $0.0000 Notes: Table displays the value of the RCLA – which pays $1 per year of lifetime income – assuming the Reference Portfolio Index (RPI) is allocated to LOW volatility investments with *σ* = 10%. The spending *γ* denotes the fixed percentage of the initial RPI level *I*0 that is withdrawn annually (and in continuous time) until ruin. When *γ* = ∞ the RPI hits zero immediately and the RCLA collapses to a basic annuity displayed in Table #1. The mortality is assumed Gompertz with parameters *λ* = 0, *m* = 86.3 and *b* = 9.5. Prices are risk neutral (ie. *μ* = *ρ* = *r* =  risk-free rate). **Table #2b** [c]||c||c||c||c||c|| Initial Purchase & RPI Spending & *ρ* = 3.0% & *ρ* = 5.0% & *ρ* = 7.0% Age = 50 & *γ* = ∞ & $19.7483 & $15.2205 & $12.1693 & *γ* = 10% & $10.6454 & $6.4788 & $3.8827 & *γ* = 7% & $8.0694 & $4.4234 & $2.3422 & *γ* = 6% & $6.9858 & $3.6383 & $1.8159 & *γ* = 5% & $5.7793 & $2.8227 & $1.3093 & *γ* = 4% & $4.4570 & $2.0038 & $0.8466 & *γ* = 3% & $3.0457 & $1.2249 & $0.4571 Age = 65 & *γ* = ∞ & $13.6601 & $11.3828 & $9.6609 & *γ* = 10% & $5.4652 & $3.5491 & $2.2451 & *γ* = 7% & $3.6732 & $2.1443 & $1.2009 & *γ* = 6% & $2.9976 & $1.6622 & $0.8790 & *γ* = 5% & $2.3015 & $1.1972 & $0.5899 & *γ* = 4% & $1.6103 & $0.7719 & $0.3477 & *γ* = 3% & $0.9645 & $0.4144 & $0.1657 Age = 75 & *γ* = ∞ & $9.2979 & $8.1680 & $7.2460 & *γ* = 10% & $2.4354 & $1.6324 & $1.0625 & *γ* = 7% & $1.4095 & $0.8470 & $0.4882 & *γ* = 6% & $1.0713 & $0.6113 & $0.3330 & *γ* = 5% & $0.7531 & $0.4031 & $0.2049 & *γ* = 4% & $0.4705 & $0.2322 & $0.1082 & *γ* = 3% & $0.2420 & $0.1072 & $0.0445 Notes: Table – which is identical in structure to #2a – displays the (risk neutral) value of the RCLA assuming the Reference Portfolio Index (RPI) is allocated to high volatility investments with *σ* = 25%. The RPI spending rate *γ* denotes the fixed percentage withdrawn. Note the impact of the higher volatility rate on the RCLA value. The 5% withdrawal RCLA that cost $0.6872 at the age of 65, under a valuation rate of *ρ* = 3% and low investment volatility in Table #2a is now valued at $2.3015 in Table #2b under an investment volatility of *σ* = 25%. Similarly, the value for a 7% withdrawal RCLA at age *x* = 7 and under *ρ* = 5% quadruples to $0.8470. As one might expect intuitively, the value of an RCLA is also extremely sensitive to the withdrawal percentage *γ* underlying the RPI. For example, at the age of 65 and under a valuation rate of *ρ* = 3%, a withdrawal percentage of *γ* = 7% on a high volatility RPI leads to an RCLA value of $3.6732, but is worth less than half at $1.6103 under a *γ* = 4% withdrawal percentage. One can interpret these results as indicating that insuring lifetime income against ruin at a 7% withdrawal rate is roughly 125% more expensive than insuring against ruin at a 4% withdrawal rate. This provides an economic benchmark by which different spending strategies can be compared. Hedging [hedgingsection] ------------------------ Our price, determined by risk-neutral valuation in previous sections, represents a hedging cost. It is worth making the hedging argument explicit (and evaluating Delta), even though this has certainly been implicit in what we described above. The partial differential equations given in the preceding sections evaluate expectations. In the complete markets setting, the expectations are risk-neutral, and represent hedging costs. In that setting, we normally choose the equity growth rate *μ* and the insurance valuation rate *ρ* to both coincide with the risk free interest rate *r*: *μ* = *ρ* = *r*. This is the setting used for the numerical examples given above. But we could also use the PDE’s to work out discounted expected cash flows under the real-world or physical measure, a problem that can arise in aspects of risk management other than pricing. In that case we would apply the above formulas with *μ* equalling the real-world equity growth rate, and *ρ* = *r* to be the risk-free rate. By generalizing the RCLA slightly, we can also imagine using the PDE when *μ* = *r* (so our measure is risk-neutral and we’re looking at pricing and hedging), but *ρ* < *r*. As we shall see below, this would be the case if payments from the RCLA were not fixed at $1 per year for life, but rather at *e**δ**t*, where *δ* = *μ* − *ρ*. This would correspond to an inflation-enhanced RCLA in which a fixed inflation rate *δ* is incorporated into the contract, so payments increase over time at rate *δ*. The standard RCLA described earlier is just the case *δ* = 0. In this subsection (and this subsection only) we will work out the hedging portfolio assuming a complete market, with risk-free rate *μ* = *r* and a valuation rate *ρ* = *r* − *δ*. We do not change the definition of the reference portfolio. Note that we do not hedge the RCLA “derivative” using the reference portfolio index (RPI) *W**t*, satisfying *d**W**t* = (*r**W**t* − *γ**I*0) *d**t* + *σ**W**t* *d**B**t* and *W*0 = *I*0, since that quantity incorporates withdrawals and is not readily tradeable. Instead we use a stock index *S**t* without withdrawals (which is assumed tradeable), on which the RPI is based. In other words, *d**S**t* = *r**S**t* *d**t* + *σ**S**t* *d**B**t*. We assume that a large number *N* of RCLA’s is sold at time 0, to age-*x* individuals. The company hedges these with a portfolio worth *V**t* at time *t*. Then *V**t* = Δ*t**S**t* + Ψ*t* where Δ*t* is the number of stock index units held, and Ψ*t* is a position in a money market account with interest rate *r*. Since the number of contracts is large, a predictable fraction *t**p**x* of contract holders are still alive at time *t*, leading to outflows from the hedging portfolio of *e**δ**t**t**p**x**N*, if ruin has occurred by time *t*. Thus $$\begin{aligned} dV\_{t} & =\Delta\_{t}\,dS\_{t}+r\Psi\_{t}\,dt-e^{\delta t}{}\_{t}p\_{x}N1\_{\{R<t\}}\,dt\\ & =rV\_{t}\,dt+\Delta\_{t}\sigma S\_{t}\,dB\_{t}-e^{\delta t}{}\_{t}p\_{x}N1\_{\{R<t\}}\,dt.\nonumber\end{aligned}$$ We obtain a positive solution by taking *V**t* = *N**e**r**t**f*(*t*, *W**t*) and $$\Delta\_{t}=\begin{cases} Ne^{rt}W\_{t}f\_{w}(t,W\_{t})/S\_{t}, & W\_{t}>0\\ 0, & W\_{t}=0. \end{cases}$$ The verification is a simple consequence of Ito’s lemma, the fact that *f*(*t*, *w*) solves ([RCLA.eq4]) when *w* > 0, and the observation that *N**e**r**t**f**t*(*t*, 0) = *N**e**r**t**F*′(*t*) =  − *N**e**r**t**e*− *ρ**t**t**p**x* =  − *e**δ**t**t**p**x**N*. Put another way, the value of the stock position in the hedge, per initial contract sold, is just Δ*t**S**t*/*N* = *e**r**t**W**t**f**w*(*t*, *W**t*). This expression reflects the fact that our solution is written using *W**t* rather than *S**t*, and the observation that *f* is already a discounted quantity (being a martingale). Note that the relation between *W**t* and *S**t* could be made explicit, but is path dependent. Finally, the initial hedging cost, per contract, is just *V*0/*N* = *f*(0, *I*0) as in ([initialcost]). Of course, in reality a company would simultaneously hedge a book of RCLA’s with different purchase dates, and sold to clients with a range of ages. But the above analysis serves to illustrate the connection between hedging and pricing. More Exotic Time-Dependent Payouts ================================== We now describe two additional types of RCLA, both of which are motivated by real-world products. In the first modification the spending rate *γ**I*0 increases to *γ*max0 ≤ *s* ≤ *t*{*W**s*}, which accounts for good performance, each time the underling RPI reaches a new maximum. In other words, this product could be used to insure a drawdown plan, in which withdrawals ratchet or step up. At ruin time *R*, this product pays $1 per year for life akin to the generic RCLA. In the second modification the spending rate increases in a similar manner, but the lifetime income – which starts upon the RPI’s ruin time – will be increased as well. Both of these RCLA variants are embedded within the latest generation of variable annuity (VA) policies sold around the world with guaranteed lifetime withdrawal benefits (GLWB). We now proceed to describe and value them in detail. The Fast-RCLA ------------- Once again, we let *T**x* denote the remaining lifetime random variable under a deterministic hazard rate *λ**t*, and we assume the RPI process *W**t* is independent of *T**x* and satisfies the following diffusion equation: *d**W**t* = (*μ**W**t* − *g*(*t*, *M**t*))*d**t* + *σ**W**t**d**B**t*,   *W*0 = *I*0 where the new function *M**t* is defined as: *M**t* = max0 ≤ *s* ≤ *t**W**s*. Both *W**t* and *M**t* are now defined up until the time *R* that *W**t* hits zero. Note that the drift term in equation ([General.RPI]) now includes a more general specification and is not necessarily a constant deterministic term *γ**I*0,  as in the basic RCLA case. The modified product that we call a *Fast* RCLA differs from the basic RCLA in that the spending function is defined in the following manner. $$g(t,m)=\left\{ \begin{array} [c]{cc}0 & t\leq\tau\\ \gamma\max\{m,W\_{0}e^{\beta\tau}\} & t>\tau \end{array} \right., \label{FRCLA.eq2}$$ where the new constant *β* denotes a “bonus rate” for delaying *τ* years prior to spending/withdrawals. Note that *τ* is now a deferral period before the RPI begins withdrawals. The constant *γ* multiplying the max function in equation ([FRCLA.eq2]) serves the same role as *γ* in the basic RCLA. It is a pre-specified percentage rate of some initial RPI value. Thus, for example, assume that *W*0 = *I*0 = 100 and that during the first ten years (*t* ≤ *τ* = 10) the reference portfolio index *W**t* grows at some (lognormally distributed) rate and without any withdrawals. Then, after ten years (*t* > *τ* = 10) the RPI starts to pay-out the greater of (i) *γ* = 5% of the the maximum RPI value *M*10 observed to date, and (ii) *γ* = 5% of $100e^{(0.05)10}=\allowbreak164.\,\allowbreak87$, which is $8.2 per year. Then, each time the process *W**t* reaches a new high, so that *M**t* = *W**t*, the spending rate *g*(*t*, *M**t*) is reset to (0.05)*W**t* = (0.5)*M**t*. Then, if-and-when the RPI hits zero the insurance company makes payments of $1 per year for life, to the annuitant. The value of the Fast RCLA is (still) defined as: F-RCLA(*I*0; *ρ*, *x*, *λ*, *m*, *b*, *I*0, *γ*, *μ*, *σ*, *τ*, *β*) :  = *f*(0, *I*0, *I*0) where for 0 < *w* ≤ *m*, *f*(*t*, *w*, *m*) = *E*[∫*R**R* ∨ *T**x**e*− *ρ**s**d**s* ∣ *W**t* = *w*, *M**t* = *m*]. The only difference between the F-RCLA and the RCLA is in the structure of the ruin time *R*. When *τ* = 0  and the RPI begins immediate withdrawals, the (generic) F-RCLA is more expensive compared to a basic RCLA because the ruin-time *R* under the diffusion specified by equation ([General.RPI]) will occur prior to (or at the same time) as the ruin-time generated by the constant withdrawal implicit within equation ([RPI.eq1]). To solve this valuation equation we go back to the PDE for the basic RCLA which we derived in the previous section. Note that the original PDE, displayed in equation ([RCLA.eq4]), did not involve the hazard rate function *λ**t*. Rather, the mortality was embedded into the boundary conditions. We take advantage of the same idea for the Fast-RCLA. First, we tinker with the definition of the *g*(*t*, *m*) spending function. We re-scale by starting *M**t* at *W*0*e**β**τ* rather than at *W*0. So let $\overline{M}\_{t}=W\_{0}e^{\beta\tau}\vee\max\_{0\leq s\leq t}W\_{s}$. We then define a moneyness variable $Y\_{t}=W\_{t}/\overline{M}\_{t}$, satisfying 0 ≤ *Y**t* ≤ 1. Let $\bar {g}(t,\overline{m})$ be *g*(*t*, *m*) in terms of the new variables, so that: $$\overline{g}(t,\overline{m})=\begin{cases} 0, & t\leq\tau\\ \gamma\overline{m}, & t>\tau. \end{cases} \label{FRCLA.eq4}$$ Our problem is now to calculate the value of a new function defined as $$h(t,y,\overline{m})=E[F(R)\mid Y\_{t}=y,\overline{M}\_{t}=m] \label{FRCLA.eq5}$$ where *F*(*ξ*) is defined as above, and *R* is the ruin time of *W**t*. Then the F-RCLA value $f(t,w,m)=h(t,y,\overline{m})$ where $\overline{m}=m\vee W\_{0}e^{\beta\tau}$ and $w=y\overline{m}$. The next step is to calculate *h* using that $$E[F(R)\mid\mathcal{F}\_{t}]=h(t\wedge R,Y\_{t},\overline{M}\_{t}) \label{FRCLA.eq6}$$ is a martingale. To apply Ito’s lemma, we need to write down the stochastic equations for *Y**t* (the new moneyness variable) and $\overline{M}\_{t}$ (the new maximum diffusion value). Note that $\overline{M}\_{t}$ is increasing, and defining $dL\_{t}=d\overline{M}\_{t}/\overline{M}\_{t}$, we have that *L**t* is a process that increases only when *Y**t* = 1, and $$d\overline{M}\_{t}=\overline{M}\_{t}\,dL\_{t}. \label{FRCLA.eq7}$$ Likewise $$\begin{aligned} dY\_{t} & =\frac{1}{\overline{M}\_{t}}dW\_{t}-\frac{W\_{t}}{\overline{M}\_{t}^{2}}d\bar{M}\_{t}\label{FRCLA.eq8}\\ & =\frac{\mu W\_{t}-\overline{g}}{\overline{M}\_{t}}\,dt+\frac{\sigma W\_{t}}{\overline{M}\_{t}}\,dB\_{t}-\frac{W\_{t}}{\overline{M}\_{t}}\,dL\_{t}\nonumber\\ & =(\mu Y\_{t}-\widehat{g}(t))\,dt+\sigma Y\_{t}\,dB\_{t}-Y\_{t}\,dL\_{t}\nonumber\end{aligned}$$ where we use yet another function, $$\widehat{g}(t)=\frac{\overline{g}(t,\overline{m})}{\overline{m}}=\begin{cases} 0, & t\leq\tau\\ \gamma, & t>\tau. \end{cases} \label{FRCLA.eq9}$$ We interpret ([FRCLA.eq8]) as a Skorokhod equation and *L**t* as a local time of *Y**t* at 1, the effect of which is to pull *Y**t* down when it reaches 1, to ensure that it does not ever exceed 1. In particular, *L**t* is determined by the process *Y**t*. Note that $\overline{M}\_{t}$ has now entirely disappeared from the stochastic equation for *Y**t*, so in fact *Y**t* is a one-dimensional Markov process all by itself. Because *R* is determined by *Y*, in fact $$h(t,y,\overline{m})=h(t,y) \label{FRCLA.eq10}$$ does not depend on $\overline{m}$ at all. We are able to make all of these simplifications because of the simple structure of the original spending rate *g*(*t*, *m*) in equation ([FRCLA.eq1]). If we had a more general withdrawal rate, say of the form *g*(*t*, *w*, *m*) where *g* is a more complicated function than the one used above, then we would have to keep track of the maximum state variable $\overline{m}$ in addition to the moneyness state variable *y*. Now, applying Ito’s lemma, we get that for *t* < *R*, $$dh(t,Y\_{t})=[h\_{t}+(\mu Y\_{t}-\widehat{g}(t))h\_{y}+\frac{1}{2}\sigma^{2}Y\_{t}^{2}h\_{yy}]\,dt+\sigma Y\_{t}h\_{y}\,dB\_{t}-Y\_{t}h\_{y}\,dL\_{t}. \label{FRCLA.eq11}$$ For this to be a martingale, both the *d**t* and *d**L**t* terms must vanish. So in particular, $$h\_{t}+(\mu y-\widehat{g}(t))h\_{y}+\frac{1}{2}\sigma^{2}y^{2}h\_{yy}=0 \label{FRCLA.eq12}$$ and *h**y* = 0 when *y* = 1 (recall that *d**L**t* = 0 unless *Y**t* = 1). The latter is one boundary condition, and *h*(*t*, 0) = *F*(*t*), *h*(*t*, *y*) → 0 as *t* → ∞ are the others. Note the similarity between the PDE we must solve for the F-RCLA in equation ([FRCLA.eq12]) and the original valuation PDE for the RCLA displayed in equation ([RCLA.eq4]). Besides the boundary conditions, the only difference is that *γ**I*0 is replaced by *ĝ*(*t*). So, in the Gompertz case there is one time variable and one spatial variable. The Super-RCLA -------------- In the previously analyzed F-RCLA, the spending/withdrawal stepped-up over time, but when ruin occurs the F-RCLA payout is the same as for the RCLA, namely $1 per year for life. This type of product is relevant in some contexts but not in others. Sometimes the lifetime income that is promised upon ruin can be greater than the originally guaranteed rate, and is linked to the function *g*(*t*, *m*) itself. Therefore, in this sub-section we examine the case in which the lifetime income paid by the annuity is linked to the increasing level of spending/withdrawals. As before, the RPI value satisfies the process: *d**W**t* = (*μ**W**t* − *g*(*t*, *M**t*)) *d**t* + *σ**W**t* *d**B**t*,  under the same (*μ*, *σ*) parameters and where the withdrawal function *g*(*t*, *m*) satisfies: $$g(t,w,m)=\begin{cases} 0, & t<\tau\\ \gamma m, & t\geq\tau \end{cases} \label{SRCLA.eq2}$$ and *M**t* = *W*0*e**β**τ* ∨ max0 ≤ *s* ≤ *t**W**s*. Recall that *β* is a bonus rate (during the deferral period) and *τ* denotes the length of deferral period, measured in years. In this sense, the underlying diffusion and ruin-time dynamics are identical to the previously discussed F-RCLA case. However, in contrast to the $1 of lifetime income payoff from the F-RCLA, we define the *Super* RCLA value as: $$\begin{aligned} \text{S-RCLA}(I\_0;\rho,x,\lambda,m,b,I\_{0},\gamma,\mu,\sigma,\tau,\beta) & \text{:}=\frac{f(0,I\_{0},I\_{0})}{g(0,I\_{0})}\label{SRCLA.eq3}\\ f(t,w,m) & :=E\left[ g(R,m)\int\_{R}^{R\wedge T\_{x}}e^{-\rho s}\,ds\right] .\nonumber\end{aligned}$$ The S-RCLA starts paying income for life when the process in equation ([SRCLA.eq1]) is ruined, but the income will not be $1. Instead, it will be equal to the withdrawal amount itself, *g*(*R*, *m*), just prior to the time of ruin *R*, divided by the initial withdrawal rate *g*(0, *I*0). If there was no step-up in the withdrawal spending prior to ruin, then the payout will simply be $1 for life, just like the F-RCLA and the original RCLA. We have decided to define the function *f*(*t*, *w*, *m*) so that we do not have to carry around the denominator *g*(0, *I*0) of equation ([SRCLA.eq3]) during the entire derivation. Either way, our boundary condition must change even though large parts of the solution are similar to the F-RCLA and RCLA. We define the moneyness variable *Y**t* = *W**t*/*M**t* so that 0 ≤ *Y**t* ≤ 1. Also, let *L**t* be the local time of *Y* at 1, so *d**Y**t* = (*μ**Y**t* − *ĝ*(*t*)) *d**t* + *σ**Y**t* *d**B**t* − *d**L**t* where the (new) scaled variable *ĝ*(*t*) is now defined as: $$\widehat{g}\_{t}=\begin{cases} 0, & t<\tau\\ \gamma I\_{0}, & t\geq\tau. \end{cases} \label{SRCLA.eq5}$$ By construction, we also have that *d**M**t* = *M**t* *d**L**t*. Moreover, the S-RCLA value defined by: *E*[*g*(*R*, *m*)∫*R**R* ∧ *T**x**e*− *ρ**s* *d**s* ∣ F*t*] will be a martingale. By the Markov property the S-RCLA value will be of the form *f*(*t* ∧ *R*, *W**t* ∧ *R*, *M**t* ∧ *R*) for some function *f*. There is a scaling relationship *f*(*t*, *c**w*, *c**m*) = *c**f*(*t*, *w*, *m*), from which we conclude that *f*(*t*, *w*, *m*) = *m**h*(*t*, *y*) for some function *h* (where *y* = *w*/*m*). Applying Ito’s lemma, $$d\left( M\_{t}h(t,Y\_{t})\right) =(h\_{t}+h\_{y}(\mu Y\_{t}-\widehat{g}(t))+\frac{1}{2}\sigma^{2}Y\_{t}^{2}h\_{yy})\,dt+\sigma Y\_{t}h\_{y}\,dB\_{t}+M\_{t}(h-h\_{y})\,dL\_{t}. \label{SRCLA.eq7}$$ We conclude that $$h\_{t}+h\_{y}(\mu y-\widehat{g}(t))+\frac{1}{2}\sigma^{2}y^{2}h\_{yy}=0 \label{SRCLA.eq8}$$ for 0 < *y* < 1, with boundary condition *h*(*t*, 1) = *h**y*(*t*, 1) at *y* = 1. There will again be a boundary condition *h*(*t*, *w*) → 0 as *t* → ∞. At *y* = 0 the boundary condition is that: $$h(t,0)=\begin{cases} 0, & t<\tau\\ \gamma I\_{0}F(t), & t\geq\tau \end{cases} \label{SRCLA.eq9}$$ where *F*(*t*) is defined as before. Note that we are multiplying *γ**I*0 by the annuity factor *F*(*t*) since the payoff is now specified in terms of the spending rate and not single dollars. Also, since the equation is parabolic we only need a boundary condition in time at *t* = ∞. After solving this PDE for *h*, we recover the S-RCLA value as: *f*(0, *w*0, *m*0) = *f*(0, *w*0, *w*0*e**β**τ*) = *w*0*e**β**τ**h*(0, *e*− *β**τ*). It is worth commenting on the boundary condition *h* = 0 when *w* = 0 and *t* < *τ*. This is because the formulation of the S-RCLA implies that the payout rate *g*(*t*, *w*, *m*) = 0 ∀*t*, if it happens that *R* < *τ*. However, the RPI cannot get ruined (in a GBM world) before time *τ*: *P*(*R* < *τ*) = 0. So it is presumably irrelevant what boundary condition we use when *w* = 0 and *t* < *τ*. **Table #3** [c]||c||c||c||c||c|| Initial Purchase & Initial Spending Rate & *ρ* = 3.0% & *ρ* = 5.0% & *ρ* = 7.0% Age = 50 & *γ* = 10.0% & $13.1593 & $8.4032 & $5.2951 & *γ* = 7.0% & $10.6177 & $6.0704 & $3.2801 & *γ* = 5.5% & $8.5497 & $4.3654 & $2.0237 & *γ* = 4.0% & $5.6736 & $2.3663 & $0.8479 Age = 57 & *γ* = 10.0% & $10.2025 & $6.6799 & $4.2748 & *γ* = 7.0% & $7.7181 & $4.4651 & $2.4178 & *γ* = 5.5% & $5.8394 & $2.9846 & $1.3756 & *γ* = 4.0% & $3.4939 & $1.4433 & $0.5122 Age = 65 & *γ* = 10.0% & $6.6981 & $4.4761 & $2.8938 & *γ* = 7.0% & $4.5249 & $2.6205 & $1.4077 & *γ* = 5.5% & $3.0899 & $1.5607 & $0.7076 & *γ* = 4.0% & $1.5749 & $0.6362 & $0.2216 Age = 67 & *γ* = 10.0% & $5.8505 & $3.9217 & $2.5370 & *γ* = 7.0% & $3.8074 & $2.1997 & $1.1767 & *γ* = 5.5% & $2.5171 & $1.2643 & $0.5698 & *γ* = 4.0% & $1.2218 & $0.4897 & $0.1695 Age = 75 & *γ* = 10.0% & $2.8580 & $1.9170 & $1.2303 & *γ* = 7.0% & $1.5232 & $0.8606 & $0.4481 & *γ* = 5.5% & $0.8542 & $0.4148 & $0.1809 & *γ* = 4.0% & $0.3261 & $0.1254 & $0.0420 Notes: Table displays the value of the Super RCLA assuming the Reference Portfolio Index (RPI) is allocated to medium volatility investments with *σ* = 17% volatility. The initial RPI spending *γ* denotes the percent of the initial index value that is withdrawn annually (in continuous time). The factors in Table #3 are not directly comparable to the factors in Table #2 since the lifetime income upon ruin could exceed $1, if the RPI “does well” prior to ruin. The mortality is assumed Gompertz with parameters *λ* = 0, *m* = 86.3 and *b* = 9.5. Prices are risk neutral (ie. *μ* = *ρ* = *r* =  risk-free rate). Table #3displays the (risk neutral) value of the Super RCLA assuming the Reference Portfolio Index (RPI) is allocated to medium volatility (*σ* = 17%) investments. The initial RPI spending *γ* denotes the percent of the initial index value that is withdrawn annually (in continuous time.) The factors in Table #3 are not directly comparable to the factors in Table #2 since the lifetime income upon ruin could exceed $1, if the RPI does well prior to ruin. As an example, consider a 67 year old with an initial spending rate of *γ* = 5.5%. Under a valuation rate of *ρ* = 5% and investment volatility of *σ* = 17%, an S-RCLA guaranteeing a lifetime payout of at least $1 upon ruin is valued at $1.2643. Again, the actual guaranteed payout will be determined by the extent of withdrawal step-ups during the spending period. In these examples, Gompertz mortality is assumed with parameters *m* = 86.3 and *b* = 9.5. **Table #4** [c]||c||c||c||c||c|| Purchase & Volatility (*σ*) & RCLA (*γ* = 5%) & SRCLA (*γ* = 5%) & “Super Premium” Age = 57 & *σ* = 8% & $0.2102 & $0.4341 & +*106%* & *σ* = 15% & $0.8590 & $1.9123 & +*123%* & *σ* = 20% & $1.4378 & $3.3569 & +*133%* & *σ* = 25% & $2.0521 & $5.0267 & +*145%* Age = 62 & *σ* = 8% & $0.1096 & $0.2233 & +*104%* & *σ* = 15% & $0.5617 & $1.2390 & +*121%* & *σ* = 20% & $1.0088 & $2.3325 & +*131%* & *σ* = 25% & $1.5052 & $3.6467 & +*142%* Age = 67 & *σ* = 8% & $0.0470 & $0.0939 & +*100%* & *σ* = 15% & $0.3230 & $0.7043 & +*118%* & *σ* = 20% & $0.6362 & $1.4534 & +*128%* & *σ* = 25% & $1.0060 & $2.4051 & +*139%* The above table illustrates the impact of investment (RPI) volatility (*σ*) on both the RCLA and S-RCLA value, assuming the same Gompertz mortality with parameters *λ* = 0, *m* = 86.3 and *b* = 9.5. Note that both the RCLA and S-RCLA are represented per guaranteed dollar of lifetime income (i.e. scaled) and that the valuation rate (and hence *μ*) is equal to 5%. We assume no deferral (*τ* = 0) and hence no bonus (*β* = 0). The table also displays the percent by which the Super RCLA exceeds the RCLA value, under various volatility assumptions and ages. Table #4illustrates the impact of investment (RPI) volatility (*σ*) on both the RCLA and S-RCLA value, assuming the same Gompertz mortality with parameters *m* = 86.3 and *b* = 9.5. Note that both the RCLA and S-RCLA are represented per guaranteed dollar of lifetime income and the valuation rate (and hence *μ*) is equal to 5%. We assume no deferral (*τ* = 0) and hence no bonus (*β* = 0). The table also displays the percent by which the Super RCLA exceeds the RCLA value, under various volatility assumptions and ages. Thus, for example, at the age of 67, under both a valuation rate *ρ* = 5% and a spending percentage *γ* = 5%, the value of an S-RCLA is between 100% and 140% greater than the value of a basic RCLA, depending on the level of volatility assumed in the RPI. It seems that under greater volatility *σ*, not only are the values of RCLA and S-RCLA higher, but the ratio between S-RCLA and RCLA is greater as well. Connection to Guaranteed Living Withdrawal Benefit (GLWB) --------------------------------------------------------- As we alluded to in the introduction, variants of RCLA derivatives are embedded within variable annuity (VA) contracts with guaranteed living income benefits (GLiBs) sold in the U.S., with variants sold in the UK, Japan, and now in Canada. This is now a market with close to $1 trillion in assets, and with annual sales of over $100 billion, in 2008. Hence the motivation for studying these products. A GLiB is a broad term that captures a wide variety of annuity riders, including the Guaranteed Minimum Withdrawal Benefit (GMWB), the Guaranteed Lifetime Withdrawal Benefit (GLWB) and the Guaranteed Minimum Income Benefit (GMIB). Thus, for example, a typical GLWB assures the policyholder that if they withdraw no more than $5 per $100 of initial investment deposit, they will be entitled to receive these $5 payments for the rest of their life regardless of the performance of the investments. They can withdraw or surrender the policy and receive the entire account value – net of withdrawals to date – at any time. On the other hand, if the account value ever hits zero the guarantee begins and the annuitant receives lifetime payments. Although in general the valuation of exotic options within retirement benefits has been analyzed by Sherris (1995) for example, these more specialized GLWB products have been studied by Dai, Kwok and Zong (2008) as well as Chen, Vetzal and Forsyth (2008) and Milevsky and Salisbury (2006). Our paper provides yet another perspective on these types of embedded options and Table #5 can now be interpreted as more than just model values for a theoretical product, but an actual estimate of the discounted value of the embedded insurance offered by a variable annuity with a guaranteed lifetime withdrawal benefit. **Table #5a** [c]||c||c||c||c||c|| Initial Purchase & Bonus, Deferral, Spending & *ρ* = 3.0% & *ρ* = 5.0% & *ρ* = 7.0% Age = 50 & *β* = 5%, *τ* = 1, *γ* = 5% & $39.5199 & $19.0804 & $8.2223 & *β* = 5%, *τ* = 7, *γ* = 5% & $40.3168 & $18.4768 & $7.5435 & *β* = 5%, *τ* = 10, *γ* = 5% & $38.8829 & $17.0176 & $6.6352 & *β* = 5%, *τ* = 15, *γ* = 5% & $34.6250 & $13.7056 & $4.8320 & *β* = 5%, *τ* = 20, *γ* = 5% & $28.5642 & $9.9539 & $3.0714 Age = 65 & *β* = 5%, *τ* = 1, *γ* = 5% & $13.0509 & $6.1738 & $2.5882 & *β* = 5%, *τ* = 7, *γ* = 5% & $10.8703 & $4.7075 & $1.8006 & *β* = 5%, *τ* = 10, *γ* = 5% & $9.0896 & $3.6539 & $1.2920 & *β* = 5%, *τ* = 15, *γ* = 5% & $5.9436 & $2.0457 & $0.6090 & *β* = 5%, *τ* = 20, *γ* = 5% & $3.1648 & $0.9087 & $0.2177 Age = 70 & *β* = 5%, *τ* = 1, *γ* = 5% & $7.1354 & $3.3118 & $1.3634 & *β* = 5%, *τ* = 7, *γ* = 5% & $5.2707 & $2.1989 & $0.8097 & *β* = 5%, *τ* = 10, *γ* = 5% & $4.0458 & $1.5467 & $0.5182 & *β* = 5%, *τ* = 15, *γ* = 5% & $2.1754 & $0.6977 & $0.1911 & *β* = 5%, *τ* = 20, *γ* = 5% & $0.8564 & $0.2258 & $0.0487 Age = 75 & *β* = 5%, *τ* = 1, *γ* = 5% & $3.2413 & $1.4654 & $0.5891 & *β* = 5%, *τ* = 7, *γ* = 5% & $2.0320 & $0.8077 & $0.2834 & *β* = 5%, *τ* = 10, *γ* = 5% & $1.3834 & $0.4970 & $0.1559 & *β* = 5%, *τ* = 15, *γ* = 5% & $0.5568 & $0.1647 & $0.0411 & *β* = 5%, *τ* = 20, *γ* = 5% & $0.1368 & $0.0329 & $0.0064 Notes: The table displays the value of a CONTINUOUS step-up Guaranteed Lifetime Withdrawal Benefit (GLWB) under a variety of bonus, deferral and withdrawal assumptions. It is the value of the Super-RCLA multiplied by the number of lifetime dollars guaranteed. The mortality is assumed Gompertz with parameters *λ* = 0, *m* = 86.3 and *b* = 9.5. Prices are risk neutral (ie. *μ* = *ρ* = *r* =  risk-free rate). **Table #5b** [c]||c||c||c||c||c|| Initial Purchase & Bonus, Deferral, Spending & *ρ* = 3.0% & *ρ* = 5.0% & *ρ* = 7.0% Age = 50 & *β* = 5%, *τ* = 1, *γ* = 5% & $22.8628 & $6.9956 & $1.3465 & *β* = 5%, *τ* = 7, *γ* = 5% & $22.6176 & $6.1226 & $1.0295 & *β* = 5%, *τ* = 10, *γ* = 5% & $21.8057 & $5.3677 & $0.8103 & *β* = 5%, *τ* = 15, *γ* = 5% & $19.5396 & $3.9733 & $0.4759 & *β* = 5%, *τ* = 20, *γ* = 5% & $16.1957 & $2.6430 & $0.2351 Age = 65 & *β* = 5%, *τ* = 1, *γ* = 5% & $5.4466 & $1.4094 & $0.2317 & *β* = 5%, *τ* = 7, *γ* = 5% & $4.2610 & $0.9040 & $0.1189 & *β* = 5%, *τ* = 10, *γ* = 5% & $3.5343 & $0.6506 & $0.0719 & *β* = 5%, *τ* = 15, *γ* = 5% & $2.2689 & $0.3214 & $0.0249 & *β* = 5%, *τ* = 20, *γ* = 5% & $1.1458 & $0.1229 & $0.0064 Age = 70 & *β* = 5%, *τ* = 1, *γ* = 5% & $2.4404 & $0.5805 & $0.0887 & *β* = 5%, *τ* = 7, *γ* = 5% & $1.6694 & $0.3148 & $0.0369 & *β* = 5%, *τ* = 10, *γ* = 5% & $1.2655 & $0.2038 & $0.0196 & *β* = 5%, *τ* = 15, *γ* = 5% & $0.6525 & $0.0793 & $0.0052 & *β* = 5%, *τ* = 20, *γ* = 5% & $0.2322 & $0.0209 & $0.0009 Age = 75 & *β* = 5%, *τ* = 1, *γ* = 5% & $0.8428 & $0.1813 & $0.0254 & *β* = 5%, *τ* = 7, *γ* = 5% & $0.4822 & $0.0793 & $0.0082 & *β* = 5%, *τ* = 10, *γ* = 5% & $0.3215 & $0.0446 & $0.0037 & *β* = 5%, *τ* = 15, *γ* = 5% & $0.1190 & $0.0122 & $0.0007 & *β* = 5%, *τ* = 20, *γ* = 5% & $0.0247 & $0.0018 & $0.0000 Notes: The table displays the value of a CONTINUOUS step-up Guaranteed Lifetime Withdrawal Benefit (GLWB) under a variety of bonus, deferral and withdrawal assumptions. It is the value of the Super-RCLA multiplied by the number of lifetime dollars guaranteed. The mortality is assumed Gompertz with parameters *λ* = 0, *m* = 86.3 and *b* = 9.5. Prices are risk neutral (ie. *μ* = *ρ* = *r* =  risk-free rate). This Table #5b is based on the RPI allocated to low volatility investments. Table #5adisplays the value of a continuous step-up (a.k.a. super) Guaranteed Lifetime Withdrawal Benefit (GLWB) under a variety of bonus, deferral and withdrawal assumptions. We assume precisely the maximum permitted withdrawals after the specified deferral, and no lapsation. Thus it is the value of the S-RCLA multiplied by the number of lifetime dollars guaranteed based on an initial deposit of $100 The mortality is assumed Gompertz with parameters *m* = 86.3 and *b* = 9.5. In contrast, Table #5bdisplays the same super GLWB, but under a low volatility of *σ* = 10%. As in Table #4b, the GLWB value is obtained by multiplying the value of a S-RCLA by the initial number of dollars guaranteed. So, for example, assume that a 65 year old deposits $100 into a VA+GLWB that offers a 5% bonus for each year that withdrawals are not made, and it offers a 5% of base” payment for life once the income begins.  The underlying base – on which the lifetime income guarantee is based – steps up in continuous time. So, if the individual intends on holding the VA+GLWB for 7 years, and then begins withdrawals, the value of this guaranteed income stream (in addition to the market value of the account itself) is $10.8703 per $100 initial deposit, under a 3% valuation rate and $4.7075 under a 5% valuation rate. This assumes the underlying VA assets are invested in a portfolio of stocks and bonds with expected volatility of *σ* = 17%. Again, note the contrast in GLWB values under a lower investment volatility of *σ* = 10% in Table #5b. The same two benefits at age *x* = 65 are valued substantially lower at $4.2610 under *ρ* = 3% and $0.9040 under *ρ* = 5%. This number comes from multiplying the S-RCLA value times five, since the initial guaranteed amount is $5. Of course, for there to be no arbitrage, the ongoing management fees charged on the initial deposit of $100 would have to cover the discounted (time zero) value of the GLWB option. Once again, the continuously stepped-up GLWB guarantee on a variable annuity policy is just a bundle of S-RCLA units plus a portfolio of managed money in a systematic withdrawal plan. As one would expect, the greater the volatility, the lower the valuation rate and the younger the individual, the higher is the value of the embedded option, at time zero. Conclusion and Discussion ========================= This paper values a type of exotic option that we christened a ruin-contingent life annuity (RCLA). The generic RCLA pays $1 per year for life, like a classical deferred annuity, but it begins making these payment only once a reference portfolio index is ruined. If this underlying reference index never hits zero, the income never starts. The rationale for buying an RCLA, and especially for a retiree without a Defined Benefit (DB) pension plan, is that it jointly hedges against financial market risk and personal longevity risk, which is cheaper than buying security against both individually. The motivation for studying the RCLA is that this exotic option is now embedded in approximately $800 billion worth of U.S. variable annuity policies. The impetus for creating stand-alone RCLA products is that they might appeal to the many soon-to-be-retired baby boomers who (i.) are not interested in paying for the entire variable annuity package, and (ii.) would be willing to consider annuitization, but only as a worst case Plan B scenario for financing retirement. Indeed, there is a substantial amount of economic and behavioral evidence – see for example the introduction to the book by Brown, Mitchell, Poterba and Warshawsky (2001) – that voluntary annuitization is unpopular as a Plan A for retirees. Thus, perhaps a cheaper annuity, and one that has a built-in deferral period might appeal to the growing masses of retirees without Defined Benefit (DB) pension plans. This was suggested recently by Webb, Gong and Sun (2007) as well, and has received attention from both practitioners and regulators – see for example Festa (2012). Our analysis is done in the classical Black-Scholes-Merton framework of complete markets and fully diversifiable market (via hedging) and longevity (via the law of large numbers) risk. We derived the PDE and relevant boundary conditions satisfied by the RCLA and some variants of the basic RCLA. We then described and used efficient numerical techniques to provide extensive estimates and display sensitivities to parameter values. Our simple valuation framework only provides a very rough intuitive sense of what these ruin-contingent life annuities might cost in real life. Of course, until a liquid and two-way market develops for these products, it is hard to gauge precisely what they will cost in a competitive market. We are currently working on extending the PDE formulation approach – by increasing the number of state variables in the problem – to deal with stochastic mortality, which might also be dependent on market returns, as well as the implications of time varying volatility, non-trivial mortality risk, and mean reverting interest rates. Likewise, we are investigating the game-theoretic implications of paying RCLA premiums continuously, as opposed to up-front. In other words, what happens when the RCLA option is purchased via installments, which then endows the option holder (annuitant) to lapse and cease payment? What is the ongoing No Arbitrage premium in this case? The option to lapse leads to a variety of interesting financial economic questions regarding the existence of equilibrium, all of which we leave for future research. As the U.S. Treasury and Department of Labor continues to encourage Defined Contribution (401k) plans to offer stand-alone longevity insurance to participants – see for example the article by Lieber (2010) in the *New York Times* – we believe that research into the optimal design and pricing of these market contingent annuities will, in itself, experience much longevity. 99 Albrecht, P. and R. Maurer (2002), Self-annuitization, consumption shortfall in retirement and asset allocation: The annuity benchmark, *Journal of Pension Economics and Finance*, Vol. 1(3), pg. 269-288 Ballotta, L. and S. Haberman (2003), Valuation of guaranteed annuity options, *Insurance: Mathematics and Economics*, Vol. 33(1), pg. 87-108 Boyle, P.P. and M. Hardy (2003), Guaranteed annuity options, *ASTIN Bulletin,* Vol. 33(2), pg. 125-152 Brown, J. R., O.S. Mitchell, J.M. Poterba and M.J. Warshawsky (2001), *The Role of Annuity Markets in Financing Retirement*, The MIT Press, Cambridge, Massachusetts. Carrière, J.F. (1994), An investigation of the Gompertz law of mortality, *Actuarial Research Clearing House*, Vol. 2, pg. 1-34 Chen, Z., K. Vetzal, and P.A. Forsyth (2008), The effect of modeling parameters on the value of GMWB guarantees, *Insurance: Mathematics and Economics*, Vol. 43(1), pg. 163-175 Dai, M., Y.K. Kwok and J. Zong (2008), Guaranteed minimum withdrawal benefit in variable annuities, *Mathematical Finance*, Vol 18(4), pg. 595-611 Davidoff, T. (2009), Housing, Health and Annuities, *Journal of Risk and Insurance*, Vol. 76(1), pg. 31-52. Dawson, P., K. Dowd, A.J.G. Cairns and D. Blake (2010), Survivor Derivatives: A Consistent Pricing Framework, *Journal of Risk and Insurance*, Vol. 77(3), pg. 579-596. Deelstra, G., M. Vanmaele, D. Vyncke (2010), Minimizing the Rik of a Financial Product Using a Put Option, *Journal of Risk and Insurance*, Vol. 77(4), pg. 767-800 Dhaene, J, M. Denuit, M.J. Goovaertz, R. Kaas, and D. Vyncke (2002), The concept of comonotonicity in actuarial science and finance: theory, *Insurance: Mathematics and Economics*, Vol 31(1), pg 3-33. Festa, E. (2012), NAIC subgroup suggests all ‘hybrid’ annuities may need reserve scrutiny, *LifeHealthPRO*, Feb. 16, 2012 – `www.lifehealthpro.com` Fong, J.H.Y., O.S. Mitchell and B.S.K. Koh (2011), Longevity risk management in Singapore’s National Pension System, *Journal of Risk and Insurance*, Vol. 78(4), pg. 961-981. Frees. E.W., J. Carrière and E. Valdez (1996), Annuity Valuation with Dependent Mortality, *The Journal of Risk and Insurance*, Vol. 63(2), pg. 229-261 Hardy, M. (2003), *Investment Guarantees*, Wiley. Huang, H., M.A. Milevsky and J. Wang (2004), Ruined moments in your life: how good are the approximations?, *Insurance: Mathematics and Economics*, Vol. 34(3), pg. 421-447 Huang, H., M.A. Milevsky and T.S. Salisbury (2009), A Different perspective on retirement income sustainability: Introducing the ruin-contingent life annuity, *Journal of Wealth Management*, Spring, pg. 1-9 Kingston, G. and S. Thorp (2005), Annuitization and asset allocation with HARA utility, *Journal of Pension Economics and Finance*, Vol. 4(3), pg. 225-248. Lieber, R. (2010), The unloved annuity gets a hug from Obama, *New York Times*, Jan. 30 2010, p. B1 Mereu, J. (1962), Annuity values directly from the Makeham constants, *Transactions of the Society of Actuaries*, Vol. 14, pg. 269-308 Milevsky, M.A. (2006), *The calculus of retirement income*, Cambridge Univ. Press Milevsky, M.A. and T.S. Salisbury (2006), Financial valuation of guaranteed minimum withdrawal benefits, *Insurance: Mathematics and Economics*, Vol. 38, pg. 21-38.. Norberg, R. (1999), Ruin problems with assets and liabilities of diffusion type, *Stochastic Processes and their Applications*, Vol. 81, pg. 255-269. Promislow, S.D. (2006), *Fundamentals of Actuarial Mathematics*, John, Wiley and Sons, West Sussex, England. Schulze, R.N. and T. Post (2010), Inividual Annuity Demand Under Aggregate Mortality Risk, *Journal of Risk and Insurance*, Vol. 77(2), pg. 423-449 Scott, J.S., J.G. Watson and W. Hu (2011), What makes a better annuity?, *Journal of Risk and Insurance*, Vol. 78(1), pg. 213-244. Sherris, M. (1995), The Valuation of Option Features in Retirement Benefits, *The Journal of Risk and Insurance*, Vol. 62 (3), pg. 509-534 Stone, C.A. and A. Zissu (2006), Securitization of Senior Life Settlements: Managing Extension Risk, Journal of Derivatives, Spring 2006. Sun, W. and A. Webb (2011), Valuing the Longevity Insurance Acquired by Delaying Claiming of Social Security, *Journal of Risk and Insurance*, Vol. 78(4), pg. 907-929. Webb, A., G. Gong and W. Sun (2007), An annuity people might actually buy, *Centre for Retirement Research at Boston College*, Working Paper, July 2007, No. 7-10 Young, V.R. (2004), Optimal investment strategy to minimize the probability of lifetime ruin, *North American Actuarial Journal*, Vol. 8(4), 106-126. --- 1. Huang is Professor of Mathematics and Statistics at York University. Milevsky is Associate Professor of Finance, York University, and Executive Director of the IFID Centre. Salisbury is Professor of Mathematics and Statistics at York University, all in Toronto, Canada. The contact author (Milevsky) can be reached via email at: [email protected]. The authors acknowledge the helpful comments of seminar participants at the Department of Risk Management and Insurance at The Wharton School, as well as seminar participants at Monash University, Melbourne, The University of New South Wales and the University of Technology, Sydney. In particular the authors would like to acknowledge helpful comments from Carl Chiarella, Neil Doherty, Olivia Mitchell, and Eckhard Platen. Huang’s and Salisbury’s research is supported in part by NSERC and MITACS.[↩](#fnref1) 2. There are some exceptions, for example the 2006 article in the Journal of Derivatives by Stone and Zissu on the topic of securitizing life insurance settlements.[↩](#fnref2) 3. The evolution of retirement wealth implied by equation ([RPI.eq1]) is often studied as an alternative to annuitization in the pension and retirement planning literature. See, for example, the paper by Albrecht and Maurer (2002) or Kingston and Thorp (2005), in which *γ**I*0 is set equal to the relevant SPIA factor times the initial wealth at retirement.[↩](#fnref3)
arxiv_0000183
experiments have been conducted 100 times for each type of generation of *A* and *B* and the reported results (both in terms of execution time and number of enabling decompositions) correspond to their average. The experiments have been executed on a desktop with medium capabilities (Intel Core i7-4790 CPU at 3.60 GHz, processor based on x64 bit, 8 GB RAM). Experiments with the $\textsc{SumComp}$ Algorithm ------------------------------------------------- In the evaluation of the performances of the $\textsc{SumComp}$ Algorithm we have considered different combinations of the parameters for the generation of *A*[*m**i**n*, *m**a**x*]*n* and the rule *R*1 for the generation of *B*. Indeed, the worst case corresponds on the existence of the sum composition and the need of generating all possible *A**B*-decompositions. The first experiments have been devoted to the identification of the execution time and number of enabling decompositions by varying the length of *A* and *B*. Due to the explosion of different combinations, and the limitation of the machine used for the experiments, we were able to consider partitions of *A* with ℓ(*A*) ≤ 23. Indeed, with ℓ(*A*) = 24, memory allocation issues were raised ( > 4.8 GB memory used). Moreover, we have considered three ranges of values ([1, 100], [1, 150], [1, 200]) in which the values of *A* have been randomly chosen. Figure [fig:SumCompTime1-200] shows a 3-dimensional comparison of the execution times varying the length of *A*[1, 200]*n* (with 3 ≤ *n* ≤ 23) and the length of *B* (with 2 ≤ ℓ(*B*) ≤ 22). The range [1, 200] has been chosen because it is the most representative among the considered ones, in fact the results in this range dominate the results in the other ones. The graphic shows that the execution times tend to increase when ℓ(*B*) is much smaller than ℓ(*A*). This makes sense because a higher number of possible combinations of values in *A* can have as sum the values in *B*. The execution times with these examples is affordable, but by increasing of a single unit the length of *A*, the memory is not sufficient for verifying all the different combinations and the results cannot be reported.     [fig:SumCompTot] A slice of Figure [fig:SumCompTime1-200], by fixing ℓ(*B*) = 4, is reported in Figure [fig:SumCompNumberTimeB41-200] with also the number of identified enabling decompositions. The graphic shows that the number of solutions (as well as the execution times) increase exponentially. Specifically, around 100.000 enabling decompositions in average can be determined for ℓ(*A*) = 23 in an elapsed time of 900 ms in average. Moreover, we can identify a correlation between the number of solutions and the execution times (correlation index  ≈ 91% on ℓ(*A*)). Another slice of Figure [fig:SumCompTime1-200], by fixing ℓ(*A*) = 23, is reported in Figure [fig:SumCompNumberTimeA231-200]. In this case we can note that the maximum execution time (and also the number of enabling decompositions) is obtained for low values of ℓ(*B*). Indeed, the few values contained in *B* can be obtained by summing different combinations of values in *A*. When ℓ(*B*) increases the possible combinations deeply decrease. If we now compare all the slices that can be obtained from Figure [fig:SumCompTime1-200] we can make the following observations. The execution time for the slice corresponding to ℓ(*A*) = 20 is around 30 ms while the one for the slice ℓ(*A*) = 23 moves to 900 ms with a significant increase of time. Moreover, the highest execution time for the slice corresponding to ℓ(*A*) = 20 is reached with ℓ(*B*) = 3, whereas for the case ℓ(*A*) = 23 the maximum execution time is reached with ℓ(*B*) = 4. Moreover, in the considered range of values we can state that the maximum number of enabling decompositions follows the law $5.5 \leq \frac{{\ell(A)}}{{\ell(B)}} \leq 7$. However, this observation holds only for the considered partitions *A* and ranges. A deeper analysis is required for proving the general validity of this claim. Table [tbl:enabling] reports the number of solutions (in average) by changing the range of values from which the integers in *A* are chosen. We can observe that the number of enabling decompositions is higher for values of *A* chosen in the range [1, 100]. This result is not intuitive and needs further investigations. [b] | # A | Range 1-100 | Range 1-150 | Range 1-200 | | --- | --- | --- | --- | | 18 | 1,168 | 447 | 251 | | 19 | 4,086 | 1,520 | 667 | | 20 | 13,901 | 4,261 | 2,097 | | 21 | 38,352 | 18,815 | 7,810 | | 22 | 159,581 | 54,395 | 26,285 | | 23 | 477,449 | 198,690 | 99,552 | [tbl:enabling] Figure [fig:SumCompNumberA23DistinctRange] shows the impact on the execution time of the presence of repeated elements in *A*. By increasing the number of repetitions of values in *A* (0 means no duplication, 1 one value duplicated and so on) we observe that the execution time decreases for each range of values. This means that our way to represent partitions has positive impact on the performances.     [fig:SumCompExist] Experiments with the $\textsc{SumCompExist}$ Algorithm ------------------------------------------------------ In the experiments with the existential algorithm we have considered the rule *R*2 for the generation of the partition *B*. Indeed, in this case we are interested in evaluating the behavior of the algorithm also when no solutions exist. Actually, the lack of an enabling decomposition corresponds to the worst case because the entire space of solutions should be explored before reaching to the conclusion. With this algorithm we have identified an issue of lack of memory too (when ℓ(*A*) = 33). However, since this algorithm does not require to carry all the enabling decompositions, we were able to conduct experiments with a partition *A* with maximal length of 32. [b]0.45![Execution times of \textsc{sumCompExist} when enabling decompositions do not exist – varying the length of A^{n}_{1,200} and the length of B (with 3 \leq {\ell(A)} \leq 32, 2 \leq {\ell(B)} \leq 31)](images/SumCompExistNotFound "fig:") [fig:SumCompExistNotFound]   [b]0.45![Frequency of existence of sum composition w.r.t. the generated datasets by varying the length of A^{n} for all ranges and the length of B (with 3 \leq {\ell(A)} \leq 32, 2 \leq {\ell(B)} \leq 31)](images/SumCompExistBasePerFound "fig:") [fig:SumCompExistBasePerFound] Figure [fig:SumCompExistGlobal] shows a 3-dimensional comparison of the execution times varying the length of *A*[1, 200]*n* (3 ≤ *n* ≤ 32) and the length of *B* (2 ≤ ℓ(*B*) ≤ 31). The skyline of the time execution of the existential algorithm is similar to the exhaustive version. However, the absolute times are considerably smaller. Indeed, the worse execution time for the exhaustive algorithm is around 900 ms for ℓ(*A*) = 23, whereas for the existential algorithm for a partition *A* with the same length is around 17 ms. In case of existence, moreover, we are able to handle a partition *A* with 32 elements. Thus, bigger than those considered in the exaustive case (the average execution time for ℓ(*A*) = 32 and ℓ(*B*) = 2 is around 4500 ms). Analogously to the exhaustive algorithm we have reported some slices of Figure [fig:SumCompExistGlobal] in Figure [fig:SumCompExistGlobalSliceB2] (by setting ℓ(*B*) = 2) and in Figure [fig:SumCompExistGlobalSliceA32] (by setting ℓ(*A*) = 32) by considering different intervals where the values of *A* are chosen. The results are analogous with those obtained in the exhaustive algorithm, but we have to observe that the increase in time execution is obtained for higher values of *A* (ℓ(*A*) > 30) and that we obtain higher execution times for the range [1 − 200] and for the case with ℓ(*B*) = 2. Figure [fig:SumCompExistA32DistinctRange] shows the execution times at increasing number of duplicated values and by considering different ranges of values for *A*. Also in this case we can remark that our representation of partitions has positive effects in the performances of the algorithm. This is more evident for the ranges [1, 100] and [1, 200], whereas in the range [1, 150] we have an outlier (average execution time 4, 219 ms in the case of number of repetitions of elements in *A* equal to 8) that has been confirmed by other experiments that we have conducted on the same range of values. Figure [fig:SumCompExistNotFound] reports the average execution time of the algorithm applied to cases where an enabling decomposition does not exists. These cases have been generated by rule *R*2 discussed at the beginning of the section and considering different lengths of *A* and *B* (ℓ(*A*) from 2 to 32 elements). It is easy to see that on average the execution time is lesser than 1 ms. By comparing these results with those reported in Figure [fig:SumCompExistGlobal] (where the enabling decompositions always exist) we can observe that identifying the lack of sum composition is much faster than identifying its presence of several order of magnitude. This result is the opposite of what we were expecting. We performed a deeper analysis on these results and we noticed that, for the cases of non existence of sum composition, the *simplification* mechanisms were very effective by significantly reducing the length of the partitions *A* and *B* and, consequently, reducing the execution time of the algorithm [b]0.45![Frequency of existence of sum composition w.r.t. the generated datasets by fixing {\ell(B)}=2 and by varying the length of A for all ranges (with 3 \leq {\ell(A)} \leq 32)](images/SumCompExistBasePerFoundB2 "fig:") [fig:SumCompExistBasePerFoundB2]   [b]0.45![Frequency of lacking of sum composition w.r.t. the generated datasets by fixing {\ell(A)}=32 for all ranges and by varying the length of B (with 2 \leq {\ell(B)} \leq 31)](images/SumCompExistBasePerFoundA32 "fig:") [fig:SumCompExistBasePerFoundA32] Figure [fig:SumCompExistBasePerFound] reports the frequency of the occurrence of enabling decompositions w.r.t. the number of cases that have been randomly generated in our datasets. The frequency is much higher when the length of *B* is low (around 2) and the length of *A* is high. The trend is better represented in Figure [fig:SumCompExistBasePerFoundB2] where a slice of the previous figure is shown by fixing ℓ(*B*) = 2 and in Figure [fig:SumCompExistBasePerFoundA32] where a slice of previous figure is shown by fixing ℓ(*A*) = 32. From these graphics we can note that the frequency with which enabling decompositions are identified is 16, 56%. Moreover, when the ratio ℓ(*A*)/ℓ(*B*) is low, the frequency of cases in which enabling decompositions are not identified rises to 100%. The highest numbers of these cases is verified when ℓ(*A*)/ℓ(*B*) = 2 and this result can be justified by Theorem [theo:3]. For higher value of this ratio, it could be worth investigating if the property holds also for higher numbers of *A*. The last experiments were devoted to evaluate the impact of the *simplification* properties introduced in Section [subsection:existence] on the performances of the existential algorithm. For this purpose, two kinds of *simplifications* are considered: * A *full simplification*: when the $\textsc{sumCompExist}$ algorithm can decide if the sum composition holds or not without the need to call the SumCompExistAux function. This happens when one of the conditions of the statements 1, 2, 3, 5, 6 and 8 are met in Algorithm [alg2]. * A *partial simplification*: when $\textsc{sumCompExist}$ can reduce the execution time of the algorithm by decreasing the numbers of elements of *A* and *B* before calling the SumCompExistAux function. This happens when the statements at line 4 or at line 7 is applied.     [fig:XXX] For the *full simplification*, the following situations of exit from Algorithm [alg2] are analyzed: * *variant B*. Exit at line 2. * *variant C*. Exit at line 3. * *variant E*. Exit at line 5. * *variant F*. Exit at line 6. * *variant G*. Exit at line 8. These variants are compared with the entire execution of the Algorithm [alg2] which is denoted *variant FULL*. Note that the exit at line 1 from Algorithm [alg2] is not considered because this situation cannot ever be verified because of the use of rule *R*2 in the generation of the datasets. Figure [fig:SumCompExistSimplTotal] reports the percentage of cases for the different variants to Algorithm [alg2]. We can see that, for the ranges of values observed, we have that the *full simplification* can provide the existence or not existence of sum composition for roughly 70% of the cases. This result is quite promising, because despite the high complexity of sum composition, it seems that for a significant percentage of cases, the algorithm can provide an answer quite quickly. The figure shows that the maximum average time is 1.23 ms (ℓ(*B*) = 2 and ℓ(*A*) = 29) that is 3 order of magnitude smaller than the case of not *full simplification* (4,469 ms when ℓ(*B*) = 2 and ℓ(*A*) = 32 in Figure [fig:SumCompExistGlobal]). Another interesting behavior can be observed in Figure [fig:SumCompExistSimplTotal]: when ℓ(*A*) ≥ 16 the percentage of *full simplification* becomes stable for all the cases. This fact deserves further study in the future. Figure [fig:SumCompExistSimplTime] shows the execution times of the different variants w.r.t. the ratio of ℓ(*A*) and ℓ(*B*). We can observe that till the ratio is 3, this *simplification* is quite effective in at least 50% of the cases. When this ratio increases, the efficacy of the simplification decreases rapidly. It would be interesting to verify with higher length of *A* if these results are confirmed. Another investigation direction is to identify other kinds of *simplification* that can be applied. Figure [fig:SumCompExistSimplA] shows the effects of *partial simplification* (only for the call of $\textsc{SumCompExistAux}$) of the partition *A* by applying the statements at line 4 and 7 in the $\textsc{sumCompExist}$ algorithm. In particular the line ``% CASE A REDUCTION“ is the percentage of the cases where we can reduce the number of elements of *A*, line ``% A REDUCTION” is the ratio (number of *A* eliminated)/ℓ(*A*) (two different scales used). The x-axis is the ratio ℓ(*A*)/ℓ(*B*) because it is the most reasonable due the variation of *A* and *B* on our tests. We can observe that this kind of simplification is applied in all the cases when ℓ(*A*)/ℓ(*B*) ≤ 2 then it rapidly decreases arriving at an application of this *simplification* at roughly 10% of the cases when ℓ(*A*)/ℓ(*B*) ≥ 10. The percentage of number of *A* eliminated is even less with a decreasing curve until ℓ(*A*)/ℓ(*B*) = 10 and then a strange small increase until ℓ(*A*)/ℓ(*B*) = 16. This kind of *simplification* can be considered interesting but marginal w.r.t. the most complex cases. Related Work ============ The sum composition problem, as defined in this article, is hard to find in the literature. A similar class of problems, named *Optimal Multi-Way Number Partitioning*, can be found in. In their case, the authors provide different optimal algorithms for separating a partition *A* of *n* positive integers into *k* subsets such that the largest sum of the integers assigned to any subset is minimized. The paper provides an overview of different algorithms that fall into three categories: sequential number partitioning (SNP); binary-search improved bin completion (BSIBC); and, cached iterative weakening (CIW). They show experimentally that for large random numbers, SNP and CIW outperform state of the art by up of seven orders of magnitude in terms of runtime. The problem is slightly different from the one that we face in this paper in which we wish to partition the set *A* according to the values contained in the set *B*. However, we also exploit a SNP approach for the generation of the possible partitions of the partition *A* according to the integer numbers contained in *B*. Peculiarity of our approach is the representation of the partition *A* and *B* that allows one to reduce the cases to be explored by eliminating useless permutations. A much more similar formulation of the problem faced in this paper is the “k-partition problem” proposed in where the authors wish to minimize a objective function *z* with the partitions *A* and *B* (with the sum of *A* equals the sum of *B*) where *A* is partitioned in *m* subpartitions (*A*1, …, *A**m*). For each partition the function *z* is calculated as *z* = *m**a**x**i* = 1*m*(∑*a* ∈ *A**i**a*)/*b**i*. It is easy to see that if the minimum of objective function is *z* = 1 then *B* is sum composition of *A* and, vice versa, if *B* is sum composition of *A*, the objective function *z* = 1. In, special restrictions are introduced concerning the length and distinct values of *A*, the maximal integer in *A* and the minimal integer in *B* in order to provide more efficient algorithms. For instance, if ℓ(*B*) = 2 and *m**a**x*(*A*) = 100, then ℓʹ(*A*) > 94 (i.e. almost all the value from 1 to 100) and *m**i**n*(*B*) > 2366. Even if there are many similarities with the approach proposed in this paper, in no real implementation of the algorithms are proposed and the paper lacks of experimental results. Moreover, the paper provides a response to the existence of a solution for the *k*-partition problem but does not provide algorithms for the identification of all possible solutions as we propose in this paper. Finally, our paper provides a characterization of the properties of sum composition that are exploited for reducing the number of cases to be tested. Even if our approach is still NP-hard, its runtime is significantly reduced especially when checking the existence of a solution, and we do not provide any restrictions on the partitions *A* and *B*. In our algorithms we need to go through the identification of all the solutions of the well-known “Subset Sum” Problem that is one of Karp’s original NP-complete problems. This problem consists in determining the existence of a subset *A*ʹ of *A* whose sum is *s* and is a well-known example of a problem that can be solved in weakly polynomial time. As a weakly NP-complete problem, there is a standard pseudopolynomial time algorithm using a dynamic programming, due to Bellman, that solves it in ${\mathcal O}({\ell(A)} \sigma(A'))$ time. Recently some better solutions have need proposed by Koiliaris and Xuy in with an algorithms that runs in $\tilde{{\mathcal O}}(min\{\sqrt{{\ell'(A)}} \sigma(A'), \sigma(A')^{\frac{4}{3}}, \sigma(A)\})$ time, where $\tilde{{\mathcal O}}$ hides polylogarithmic factors. We remark that in our case, it is not enough to determine if a subset exists but we need to identify all the possible subsets in *A* whose sum is *s*. Therefore, the standard approaches proposed in the literature cannnot be directly applied. In our case, we keep the values of *A* ordered and we apply a SNP approach for enumerating all the possible subpartition of *A* (starting from the lower values of *A*) that can lead to the sum *s*. A subpartition is skipped when including a new value of *A*, this leads to a value greater that *s*. Since our partitions are ordered, we can guarantee to identify a possible solution, whereas the approach proposed in randomly generates possible configurations to be proved. Finally, relying on our representation of partitions, permutations are avoided and the configurations to test are reduced. Conclusions =========== In this paper we introduced the sum composition problem between two partitions *A* and *B* of positive integers. Starting from a formal presentation of the problem by exploiting the poset theory, we have proposed several properties that can be exploited for improving the execution time of the developed algorithms. Then, we have developed an exhaustive algorithm for the generation of all *A**B*-decompositions for which *A* ≤ *B*, and an algorithm for checking the existence of the relation *A* ≤ *B*. The correctness of these two algorithms is proved. An experimental analysis is provided for assessing the quality of the proposed solutions. As expected, the algorithms have an exponential growth with the length of *A* and *B*. We also show a correlation between the execution time and the number of enabling decompositions in the execution of the exhaustive algorithm. Moreover, we show that the number of repetitions of the elements in the partition *A* impacts the execution time for both algorithms and how the adopted data structures reduce the number of configurations to be checked. For the existential algorithm, we had some surprising good impact on “full simplification” actions on roughly 70% of the cases and also “partial simplification” can have good impact in the range of ℓ(*A*)/ℓ(*B*) ≤ 3. Other interesting experimental evidences were identified as the ratio of ℓ(*A*)/ℓ(*B*) for the enabling decompositions (exhaustive algorithm). Anyway, these algorithms present a limitation of “lack of memory” (ℓ(*A*) = 24 for the exhaustive algorithm and ℓ(*A*) = 33 for the existence algorithm). New area of investigations can be identified in finding: *i*) new *simplification* properties that can reduce the execution time of the algorithms; *i**i*) new algorithms that have less limitation of memory usage; *i**i**i*) a wider range of test cases where the validity of certain properties can be checked. 10 url#1`#1`urlprefixhref#1#2#2 #1#1 R. Bellman, Notes on the theory of dynamic programming—transportation models, Manage. Sci. 4 (2) (1958) 191–195.. G. Birkhoff, Lattice theory, Vol. 25, American Mathematical Soc., 1940. L. Comtet, Advanced Combinatorics, Reidel, Boston, 1974. M. R. Garey, D. S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness, W. H. Freeman and Co, 1979. R. M. Karp, Reducibility among Combinatorial Problems, Springer US, 1972, Ch. 9, pp. 85–103. K. Koiliaris, C. Xu, A faster pseudopolynomial time algorithm for subset sum, in: Proc. of the 28th Annual ACM-SIAM SODA, 2017, pp. 1062–1072.. S. A. Joni, G.-C. Rota, Coalgebras and bialgebras in combinatorics, Studies in Applied Mathematics 61 (2) (1979) 93–139.. C. Mark, Fast exact and approximate algorithms for k-partition and scheduling independent tasks, Discrete Mathematics 114 (1-3) (1993) 87–103. E. L. Schreiber, R. E. Korf, M. D. Moffitt, Optimal multi-way number partitioning, J. ACM 65 (4) (2018) 24:1–24:61.. G. M. Ziegler, On the poset of partitions of an integer, Journal of Combinatorial Theory, Series A 42 (2) (1986) 215 – 222.. The Sum Composition Problem =========================== In this paper, we study the “*sum composition problem*” between two lists *A* and *B* of positive integers. We start by saying that *B* is *sum composition* of *A* when there exists an ordered *m*-partition [*A*1, …, *A**m*] of *A* where *m* is the length of *B* and the sum of each part *A**k* is equal to the corresponding part of *B*. Then, we consider the following two problems: *i*) the *exhaustive problem*, consisting in the generation of all partitions of *A* for which *B* is sum composition of *A*, and *i**i*) the *existential problem*, consisting in the verification of the existence of a partition of *A* for which *B* is sum composition of *A*. Starting from some general properties of the sum compositions, we present a first algorithm solving the exhaustive problem and then a second algorithm solving the existential problem. We also provide proofs of correctness and experimental analysis for assessing the quality of the proposed solutions along with a comparison with related works. number partition problem integer partitions algorithms Introduction ============ The “*sum composition problem*” between two lists *A* = [*a*1, …, *a**n*] and *B* = [*b*1, …, *b**m*] of positive integers consists in the identification of a decomposition of the list *A* into *m* sub-lists *A*1, …, *A**m* (where *m* is the length of *B*) such that the sum of *A**k* is equal to *b**k* ∈ *B* (1 ≤ *k* ≤ *m*). When this decomposition exists we say that *B* is sum composition of *A*. The sum composition problem has some analogies and shares the same complexity with the NP-hard *number partition problem* where a list of integer *A* is partitioned in *k* partitions *P*1…*P**k* such that the sum of each part *P**j* (1 ≤ *j* ≤ *k*) are as nearly equal as possible. However, there are several practical applications that can be modeled as “*sum composition problem*”. In the case of weighted graphs, the values in *A* can correspond to the weights of the edges, and the values in *B* are the aggregated values according to which the graph should be partitioned. In, the sum composition problem is shown as the problem of scheduling independent tasks on uniform machines with different capabilities. In the case of databases, this problem has been encountered by one of the authors when studying functional dependencies in a relational table. Specifically, when a functional dependency between an attribute *x* and an attribute *y* of a table *R* exists, we can consider the list of the occurrences of values assumed by *x* (denoted *A*) and the corresponding one assumed by *y* (denoted by *B*). When *B* is sum composition of *A* it means that a functional dependency *x* → *y* can exist (this is a necessary condition but not sufficient). The properties of sum composition can be exploited for simplifying the checking of the existence of a functional dependency by using the statistics associated with the table with no access to the single tuples. In this paper, we start by interpreting this problem in terms of a partial order relation between two integer partitions and, then, we provide some general properties of such a relation. There are two kinds of properties. Those that guarantee the existence of a sum composition between two lists and those that allow a simplification of the two lists without affecting the property of being one sum composition of the other. These properties are exploited in the realization of two algorithms. The first one, named $\textsc{SumComp}$, has the purpose to generate all possible decompositions of *A* for which *B* is sum composition of *A*. The second one, named $\textsc{SumCompExist}$, has the purpose to check the existence of at least one decomposition of *A* for which the relation holds with *B*. In this last case, we are not interested in determining a decomposition but only to verify its existence. The complexity of these algorithms is the same of the algorithms developed for the number partition problem because of the combinatorial explosion of the cases that need to be checked. However, the identified properties and the particular representation of the lists (in which distinct integers are reported along with their multiplicity) allows us to mitigate in some cases the growth of the execution time curve. As discussed in the related work section, the sum composition problem has received little attention from the research community. However, its formulation can be of interest in many practical applications. Similar problems, presented in the literature for identifying optimal multi-way number partition, the k-partitioning problem, and the sub-set sum problem, usually are devoted to identify a single correspondence between the two lists and sometimes they introduce restrictions on the values occurring in *A* and *B*. Key characteristics of the proposed approach is the use of different properties of sum composition for reducing the controls in the algorithms and the adoption of a data structure for representing the partitions that reduce the number of combination of values to be considered. As shown in the experimental analysis section, these design strategies have positive effects on the running times of the algorithms (especially for the existential one) even if increasing the size of the partitions the execution times move quickly to an exponential rate. The paper is structured as follows. Section [sec:pb] introduces the problem and some notations. Then, Section [sec:properties] deals with the properties of sum composition. Section [sec:SCA] introduces the $\textsc{SumComp}$ algorithm and proves its correctness. Section [sec:existence] revises the previous algorithm for checking the sum composition existence. Section [sec:exp] presents the experimental results and Section [sec:rw] compares our results with related works. Finally, Section [sec:conclusion] draws the conclusions and future research directions. Problem Definition ================== An *integer partition* (or, simply, a *partition*) of a positive integer *N* is a list *A* = [*a*1, *a*2, …, *a**n*] where each *part* *a**i* is a positive integer, *a*1 ≤ *a*2 ≤ ⋯ ≤ *a**n* and *σ*(*A*) = *a*1 + *a*2 + ⋯ + *a**n* = *N*. A partition of *N* can also be represented as *A* = [(*a*ʹ1, *m*1), (*a*ʹ2, *m*2), …, (*a*ʹ*h*, *m**h*)], where the numbers *a*ʹ1 < ⋯ < *a*ʹ*h* are the distinct parts of *A*, and *m*1, *m*2, …, *m**h* are the respective multiplicities. We write ℓ(*A*) for the length of the list *A*, i.e. for the number of parts of the partition *A*, and ℓʹ(*A*) for the number of distinct parts in *A*. Usually, in the literature, an integer partition is written in decreasing order. In this paper, however, we write an integer partition in increasing order because, as we will see later, this notation is exploited in the formulation of the algorithms. The list *A*  =  [1, 2, 2, 4, 4, 6] is an integer partition of *N*  =  *σ*(*A*)  =  19 with ℓ(*A*)  =  6 parts and ℓʹ(*A*)  =  4 distinct parts. This partition can also be represented as [(1, 1), (2, 2), (4, 2), (6, 1)]. A *decomposition* of an integer partition *A* = [*a*1, *a*2, …, *a**n*] is a list of integer partitions [*A*1, *A*2, …, *A**m*] obtained as follows: given a set partition {*I*1, *I*2, …, *I**m*} of the index set {1, 2, …, *n*} of *A*, *A**i* is the integer partition whose parts are the *a**j* with *j* ∈ *I**i*, for *i* = 1, 2, …, *m*. The partition *A* = [1, 2, 2, 4, 4, 6] admits, for instance, the decomposition [*A*1, *A*2, *A*3], where *A*1 = [1, 6], *A*2 = [4] and *A*3 = [2, 2, 4]. The set *P**n* of all integer partitions of *n* can be ordered by refinement as follows: given two integer partitions *A* = [*a*1, *a*2, …, *a**n*] and *B* = [*b*1, *b*2, …, *b**m*], we have *A* ≤ *B* whenever there exists a decomposition [*A*1, *A*2, …, *A**m*] of *A* such that *σ*(*A**i*) = *b**i*, for every *i* = 1, 2, …, *m*. The set *P**n* is a very well known poset (partially ordered set) whose topological properties have been studied in several papers (e.g. see and the bibliography therein). We will say that *B* is *sum composition* of *A* whenever *A* ≤ *B*. Moreover, for simplicity, we will call **A**B*-decomposition* or *enabling decomposition* any decomposition of *A* for which *A* ≤ *B*. [ex:es2] For the two integer partitions *A* = [1, 2, 2, 3, 4, 5] and *B* = [5, 5, 7], we have the following *A**B*-decompositions: *D*1 = [[2, 3], [1, 4], [2, 5]], *D*2 = [[1, 4], [2, 3], [2, 5]], *D*3 = [[1, 2, 2],  [5], [3, 4]], *D*4 = [[2, 3], [5], [1, 2, 4]], *D*5 = [[1, 4], [5], [2, 2, 3]], *D*6 = [[5], [1, 2, 2], [3, 4]], *D*7 = [[5], [2, 3], [1, 2, 4]] and *D*8 = [[5], [1, 4], [2, 2, 3]]. In this way, the *sum composition problem* can be formulated as follows: * generate all the *A**B*-decompositions (exhaustive algorithm) between the partitions *A* and *B*; * determine if *A* ≤ *B* (existential algorithm). The sum composition problem has also a simple puzzle interpretation. Indeed, we have *A* ≤ *B* whenever there is a tiling of the Ferrers diagram Φ*B* of *B* using the rectangles corresponding to the parts of *A* (as in Figure [FigTiling]). (16,5)(0,-1) (0,0) (7,4) (0,0)(1,0)2 (0,1)(1,0)2 (0,2)(1,0)4 (0,3)(1,0)7 (0,4)(1,0)7 (0,0)(0,1)4 (1,0)(0,1)4 (2,0)(0,1)4 (3,2)(0,1)2 (4,2)(0,1)2 (5,3)(0,1)1 (6,3)(0,1)1 (7,3)(0,1)1 (3.5,-1)(*a*) (10,0) (7,4) (0.8,0.8)[bl] (0.1,0.1)(1,0)0.8 (0.1,0.9)(1,0)0.8 (0.1,0.1)(0,1)0.8 (0.9,0.1)(0,1)0.8 (0.8,1.8)[bl] (0.1,0.1)(1,0)1.8 (0.1,0.9)(1,0)1.8 (0.1,0.1)(0,1)0.8 (1.9,0.1)(0,1)0.8 (0.8,2.8)[bl] (0.1,0.1)(1,0)2.8 (0.1,0.9)(1,0)2.8 (0.1,0.1)(0,1)0.8 (2.9,0.1)(0,1)0.8 (0,0)(1,0)2 (0,1)(1,0)2 (0,2)(1,0)4 (0,3)(1,0)7 (0,4)(1,0)7 (0,0)(0,1)4 (1,0)(0,1)4 (2,0)(0,1)4 (3,2)(0,1)2 (4,2)(0,1)2 (5,3)(0,1)1 (6,3)(0,1)1 (7,3)(0,1)1 (3.5,-1)(*b*) (0,0) (1,0) (0,1) (0,2) (2,2) (0,3) (3,3) (6,3) [FigTiling] Given an integer partition *B* = [*b*1, *b*2, …, *b**m*], the number of all *A**B*-decompositions with *A* ≤ *B* is *p*(*b*1)*p*(*b*2)⋯*p*(*b**m*), where *p*(*k*) is the number of the integer partitions of *k*. Similarly, given an integer partition *A* = [*a*1, …, *a**n*], the number of all *A**B*-decompositions with *A* ≤ *B* is at most *b**n*, where *b**n* is the Bell number of order *n*, i.e. of the number of set partitions of an *n*-set. Finally, given two integer partitions *A* = [*a*1, …, *a**n*] and *B* = [*b*1, *b*2, …, *b**m* − 1, *b**m*] with *A* ≤ *B*, the number *d*(*A*, *B*) of *A**B*-decompositions satisfy the bounds 1 ≤ *d*(*A*, *B*) ≤ *p*(*b*1)*p*(*b*2)⋯*p*(*b**m* − 1). Clearly, if *A* = [1, …, 1] has length *m* and *B* = [*m*], we have *A* ≤ *B* and *d*(*A*, *B*) = 1. Moreover, for the partitions *A* = [1, 1, 1, 1, 1, 2, 2, 3] and *B* = [2, 3, 7], we have *A* ≤ *B* and *d*(*A*, *B*) = 6 = *p*(2)*p*(3). In the rest of the paper, we will use the following operations on integer partitions. The union of two partitions *A* and *B* is the partition *A* ∪ *B* whose parts are those of *A* and *B*, arranged in increasing order. Similarly, the intersection of *A* and *B* is the partition *A* ∩ *B* whose parts are those common to *A* and *B*, arranged in increasing order. *A* is a subpartition of *B*, i.e. *A* ⊆ *B*, when every part of *A* is also a part of *B*. If *A* is a subpartition of *B*, then we write *B* \ *A* for the partition whose parts are those of *B* taken off those of *A*. Finally, we write *a* ∈ *A* when *a* is a part of *A*. In this way, we have *σ*(*A* ∪ *B*) = *σ*(*A*) + *σ*(*B*) and *σ*(*B* \ *A*) = *σ*(*B*) − *σ*(*A*). Given *A* = [1, 1, 3, 4] and *B* = [1, 2, 4, 4, 5], we have *A* ∪ *B* = [1, 1, 1, 2, 3, 4, 4, 4, 5] and *A*  ∩  *B*  =  [1, 4]. Given *A*  =  [1, 1, 3, 4] and *B*  =  [1, 1, 2, 3, 3, 4, 5], we have *A*  ⊆  *B* and *B*  \  *A*  =  [2, 3, 5]. Properties of Sum Composition ============================= In this section, we present some properties of the sum composition relation that will be exploited in the exhaustive and existential algorithms presented in the next sections. Basic Properties ---------------- We start by listing some basic necessary conditions for the existence of a sum composition. [lemma:1] Let *A* = [*a*1, …, *a**n*] and *B* = [*b*1, …, *b**m*] be two integer partitions such that *A* ≤ *B*, and let [*A*1, …, *A**m*] be an *A**B*-decomposition. Then, we have 1. *n* ≥ *m* 2. *a*1 ≤ *b*1 (and, more generally, *a*1 ≤ *b**i* for every *i*) 3. *a**n* ≤ *b**m* (and, more generally, *a**i* ≤ *b**m* for every *i*) 4. if *a**i* ∈ *A**j*, then *a**i* ≤ *b**j* 5. *σ*(*A*) = *σ*(*B*). These properties derive directly from the definition of sum composition. 1. Immediate from the definition of the order relation  ≤ . 2. The element *a*1 is the minimum value appearing in *A*. Then *A*1 contains at least one part greater or equal to *a*1 and consequently *b*1 = *σ*(*A*1) ≥ *a*1. 3. The element *a**n* is the maximum value appearing in *A* and appears in some *A**i*. Similarly, the element *b**m* is the maximum value appearing in *B*. So, we have *b**m* ≥ *b**i* = *σ*(*A**i*) ≥ *a**n*. 4. If *a**i* ∈ *A**j*, then *b**j* = *σ*(*A**j*) = ⋯ + *a**i* + ⋯ ≥ *a**i*. 5. We have *σ*(*A*) = *σ*(*A*1) + ⋯ + *σ*(*A**m*) = *b*1 + ⋯ + *b**m* = *σ*(*B*). We have also the following useful condition. [theo:3] Let *A* = [*a*1, …, *a**n*] and *B* = [*b*1, …, *b**m*] be two integer partitions s.t. *A* ≤ *B*. If the intersection *A* ∩ *B* has *k* parts, then *n* ≥ 2*m* − *k*. Since *A* ≤ *B*, an *A**B*-decomposition [*A*1, …, *A**m*] exists. Since *A* and *B* have *k* common parts, there are at most *k* partitions *A**i* with only one part and all the remaining partitions contains at least two parts. Consequently *n* ≥ *k* + 2(*m* − *k*) = 2*m* − *k*. By Theorem [theo:3], if *A* = [*a*1, …, *a**n*] and *B* = [*b*1, …, *b**m*] are two integer partitions with *A* ∩ *B* with *k* parts and *n* ≤ 2*m* − *k* − 1, then *B* can not be a sum composition of *A*. The following lemma will be used in the subsequent sections to write the algorithms. It allows you to proceed iteratively on the elements of B, by considering them one by one. The validity of the hypothesis of this lemma is guaranteed at each step. [lemma:sumnm1] Let *A* = [*a*1, …, *a**n*] and *B* = [*b*1, …, *b**m*] be two integer partitions s.t. *σ*(*A*) = *σ*(*B*). If *B*ʹ  =  *B*  \  [*b**m*] is sum composition of some subpartition *A*ʹ of *A*, then *B* is sum composition of *A*. Since *B*ʹ is sum composition of *A*ʹ, there exists an *A*ʹ*B*ʹ-decomposition [*A*1, …, *A**m* − 1]. Let *A**m* = *A* \ *A*ʹ. Then *σ*(*A**m*) = *σ*(*A*) − *σ*(*A*ʹ) = *σ*(*A*) − (*σ*(*A*1) + ⋯ + *σ*(*A**m* − 1)) = *σ*(*A*) − (*b*1 + ⋯ + *b**m* − 1) = *σ*(*A*) − (*σ*(*B*) − *b**m*) = *b**m*. Hence, [*A*1, …, *A**m* − 1, *A**m*] is an *A**B*-decomposition. Let *A*(*k*) be the partition whose parts are the parts of a partition *A* = [*a*1, …, *a**n*] less or equal to *k* and let *A*[*k*] be the partition whose parts are the parts of *A* grater or equal to *k*. [theo:aGb] Let *A* = [*a*1, …, *a**n*] and *B* = [*b*1, …, *b**m*] be two integer partitions s.t. *B* is sum composition of *A*. For every *k* = 1, …, *b**m*, we have *σ*(*A*(*k*)) ≥ *σ*(*B*(*k*))  and  *σ*(*A*[*k*]) ≤ *σ*(*B*[*k*]) . Since *B* is sum composition of *A*, there exists an *A**B*-decomposition [*A*1, …, *A**m*]. If *B*(*k*) = [*b*1, …, *b**h*], then *σ*(*A*1) = *b*1 ≤ *k*, …, *σ*(*A**h*) = *b**h* ≤ *k*. Hence, all elements of *A*1, …, *A**h* are less or equal to *k*, and consequently *A*1 ∪ ⋯ ∪ *A**h* ⊆ *A*(*k*). Then *σ*(*A*(*k*)) ≥ *σ*(*A*1) + ⋯ + *σ*(*A**h*) = *b*1 + ⋯ + *b**h* = *σ*(*B*(*k*)). If *k* = 1, we have *A*[1] = *A* and *B*[1] = *B*. So *σ*(*A*[1]) = *σ*(*B*[1]), being *σ*(*A*) = *σ*(*B*). If *k* > 1, then *A*[*k*] = *A* \ *A*(*k* − 1) and *B*[*k*] = *B* \ *B*(*k* − 1). So *σ*(*A*[*k*]) = *σ*(*A*) − *σ*(*A*(*k* − 1)) ≤ *σ*(*B*) − *σ*(*B*(*k* − 1)) = *σ*(*B*[*k*]). We have also the following divisibility property (saying, on the other hand, that the sum composition property is preserved by multiplication). [theo:d1] Let *A* and *B* be two integer partitions s.t. *B* is sum composition of *A*. If there exists an integer *d* > 1 dividing all parts of *A*, then *d* divides all parts of *B*. Since *A* ≤ *B*, an *A**B*-decomposition [*A*1, …, *A**m*] exists. Moreover, by hypothesis, every *a**i* is divisible by *d*, we have at once that each *b**i* = *σ*(*A**i*) is divisible by *d*. Consider the integer partitions *A* = [50, 100, 100, 200, 250, 300] and *B* = [300, 300, 400]. Every element of *A* and *B* is divisible by 50. So, by dividing by 50, we have the partitions *A*ʹ = [1, 2, 2, 4, 5, 6] and *B*ʹ = [6, 6, 8]. Since [[6], [1, 5], [2, 2, 4]] is an *A*ʹ*B*ʹ-decomposition, then *A*ʹ ≤ *B*ʹ, and consequently *A* ≤ *B*. Notice that to obtain an *A**B*-decomposition it is sufficient to multiply all parts of an *A*ʹ*B*ʹ-decomposition by 50. Finally, we have that the sum composition property is preserved by the union of partitions. More precisely, we have the following result. [theo:Union] Let *A*ʹ = [*a*ʹ1, …, *a*ʹ*n*ʹ], *B*ʹ = [*b*ʹ1, …, *b*ʹ*m*ʹ], *A*ʺ = [*a*ʺ1, …, *a*ʺ*n*ʺ] and *B*ʺ = [*b*ʺ1, …, *b*ʺ*m*ʺ] be four integer partitions s.t. *B*ʹ is sum composition of *A*ʹ and *B*ʺ is sum composition of *A*ʺ. Then *B* = *B*ʹ ∪ *B*ʺ is sum composition of *A* = *A*ʹ ∪ *A*ʺ. Since *A*ʹ ≤ *B*ʹ, an *A*ʹ*B*ʹ-decomposition [*A*1ʹ, …, *A**m*ʹʹ] exists. Similarly, since *A*ʺ ≤ *B*ʺ, an *A*ʺ*B*ʺ-decomposition [*A*1ʺ, …,  *A**m*ʺʺ] exists. Hence, by properly merging [*A*1ʹ, …, *A**m*ʹʹ] and [*A*1ʺ, …,  *A**m*ʺʺ], we obtain a decomposition of *A*ʹ ∪ *A*ʺ for which *A*ʹ ∪ *A*ʺ ≤ *B*ʹ ∪ *B*ʺ. Consider the partitions *A*ʹ = [1, 1, 2, 3, 4] and *B*ʹ = [3, 3, 5] with the *A*ʹ*B*ʹ-decomposition [[1, 2], [3], [1, 4]], and the partitions *A*ʺ = [1, 1, 2, 3, 5] and *B*ʺ = [1, 5, 6] with the *A*ʺ*B*ʺ-decomposition [[1], [1, 3], [1, 6]]. Then *A*ʹ ∪ *A*ʺ = [1, 1, 1, 1, 2, 2, 3, 3, 4, 5] and *B*ʹ ∪ *B*ʺ = [1, 3, 3, 5, 5, 6], and [[1], [1, 2], [3], [1, 4], [2, 3],  [1, 6]] is a decomposition of *A*ʹ ∪ *A*ʺ for which *A*ʹ ∪ *A*ʺ ≤ *B*ʹ ∪ *B*ʺ. Given two partitions *A* and *B* with *A* ≤ *B*, and given two subpartions *A*ʹ ⊆ *A* and *B*ʹ ⊆ *B* with *A*ʹ ≤ *B*ʹ, for the complementary subpartitions *A*ʺ = *A* \ *A*ʹ and *B*ʺ = *B* \ *B*ʹ it is not said that *A*ʺ ≤ *B*ʺ. Consider, for instance, the partitions *A* = [1, 1, 1, 2, 2, 2, 3] and *B* = [2, 2, 3, 5] and the partitions *A*ʹ = [1, 1, 2, 2, 2] ⊆ *A* and *B*ʹ = [3, 5] ⊆ *B*. Then *A* ≤ *B* and *A*ʹ ≤ *B*ʹ. However, we have *A*ʺ = [1, 3] and *B*ʺ = [2, 2], and $ A'' \not\leq B'' $. Reduction Properties -------------------- In this section, we present some properties that can be used for eliminating values from the partitions involved in a sum composition, without altering the property of being a sum composition. These properties can positively effecting the execution time of the existence algorithm. [theo:theo1] Let *A* = [*a*1, …, *a**n*] and *B* = [*b*1, …, *b**m*] be two integer partitions s.t. *A* ≤ *B*. If two indices *i* and *j* exist s.t. *a**i* = *b**j*, then *B*ʹ = [*b*1, …, *b**j* − 1,  *b**j* + 1, …, *b**m*] is sum composition of *A*ʹ = [*a*1, …, *a**i* − 1, *a**i* + 1, …, *a**n*]. Since *A* ≤ *B*, an *A**B*-decomposition [*A*1, …, *A**m*] exists, where, by hypothesis, *a**i* = *b**j* = *σ*(*A**j*). We have the following two cases. 1. If *a**i* ∈ *A**j*, then *A**j* = [*a**i*]. Hence [*A*1, …, *A**j* − 1, *A**j* + 1, …, *A**m*] is an *A*ʹ*B*ʹ-decomposition. 2. If *a**i* ∉ *A**j*, then there exists an index *k* ≠ *j* such that *a**i* ∈ *A**k*. Let *A*ʹ*k* = (*A**j* ∪ *A**k*) \ [*a**i*]. Then, replacing *A**k* by *A*ʹ*k* in [*A*1, …, *A**j* − 1, *A**j* + 1, …, *A**m*], we obtain a decomposition of *A*ʹ such that *A*ʹ ≤ *B*ʹ. Indeed, we have *σ*(*A*ʹ*k*) = *σ*(*A**j*) + *σ*(*A**k*) − *a**i* = *b**j* + *b**k* − *a**i* = *b**k*. Theorem [theo:theo1] can be easily generalized as follows. [cor1] Let *A* and *B* be two integer partitions s.t. *B* is sum composition of *A*. If *C* is a subpartion of *A* and *B*, then *B* \ *C* is sum composition of *A* \ *C*. Suppose that *C* = [*c*1, …, *c**k*]. Since *A* and *B* have a common part *c*1, then *B*ʹ = *B* \ [*c*1] is a sum composition of *A*ʹ = *A* \ [*c*1], by Theorem [theo:theo1]. Since *A*ʹ and *B*ʹ have a common part *c*2, then *B*ʺ = *B*ʹ \ [*c*2] is a sum composition of *A*ʺ = *A*ʹ \ [*c*2], always by Theorem [theo:theo1]. Continuing in this way, at the end we have that *B* \ *C* is sum composition of *A* \ *C*. We also have the following result. [cor11] If *A* and *B* are two partitions s.t. *A* ≤ *B*, then, for every partition *C*, *A* ∪ *C* ≤ *B* ∪ *C*. By hypothesis, we have *A* ≤ *B*, and, clearly, *C* ≤ *C*. Hence, by Theorem [theo:Union], we have *A* ∪ *C* ≤ *B* ∪ *C*. Theorems [cor1] and [cor11] immediately implies the following property. If *A*, *B* and *C* are three integer partitions, then *A* ≤ *B* if and only if *A* ∪ *C* ≤ *B* ∪ *C*. Equivalently, if *A* and *B* are two integer partitions and *C* is a subpartion of *A* and *B*, then *A* ≤ *B* if and only if *A* \ *C* ≤ *B* \ *C*. Finally, we have the following further simplification property which will be used later to reduce the running time of the existence algorithm. [theo:simpl2] Let *A* = [*a*1, …, *a**n*] and *B* = [*b*1, …, *b**m* − 1, *b**m*] be two integer partitions s.t. *b**m* − 1 < *a**n* < *b**m*. Then *B* is sum composition of *A* if and only if *B*ʹ = [*b*1, …, *b**m* − 1] ∪ [*b**m* − *a**n*] is sum composition of *A*ʹ = [*a*1, …, *a**n* − 1]. If *A* ≤ *B*, then an *A**B*-decomposition [*A*1, …, *A**m*] exists. Suppose that *a**n* ∈ *A**k*. Then, by item 4 of Lemma [lemma:1], we have *a**n* ≤ *b**k*. Hence, being *b**m* − 1 < *a**n* < *b**m*, we have *k* = *m*, that is *a**n* ∈ *A**m*. Let *A*ʹ*m* = *A**m* \ [*a**n*]. Then [*A*1, …, *A**m* − 1, *A**m*ʹ] is a decomposition of *A*ʹ for which *A*ʹ ≤ *B*ʹ (indeed, *σ*(*A*ʹ*m*) = *σ*(*A**m*) − *a**n* = *b**m* − *a**n*). If *A*ʹ ≤ *B*ʹ, then an *A*ʹ*B*ʹ-decomposition [*A*ʹ1, …, *A*ʹ*m*] exists. If *A*ʹ*k* corresponds to *b**m* − *a**n* ∈ *B*ʹ, then [*A*ʹ1, …, *A*ʹ*k* − 1, *A*ʹ*k* ∪ [*a**n*], *A*ʹ*k* + 1, …, *A*ʹ*m*] is an *A**B*-decomposition. In fact, for all *A*ʹ*i* with *i* ≠ *k* the values are the same, and for *A**k* = *A*ʹ*k* ∪ [*a**n*] we have *σ*(*A**k*)  =  *σ*(*A*ʹ*k*)  +  *a**n*  =  *b**m*  −  *a**n*  +  *a**n*  =  *b**m*. Consider *A* = [1, 1, 2, 2, 4] and *B* = [1, 3, 6] with *A* ≤ *B*. Since 3 < 4 < 6, for the partitions *A*ʹ = [1, 1, 2, 2] and *B*ʹ = [1, 2, 3] also holds *A*ʹ ≤ *B*ʹ. The viceversa is true as well. The Exhaustive Sum Composition Algorithm ======================================== In this section, we present an algorithm for generating all *A**B*-decompositions of two integer partitions *A* = [*a*1, …, *a**n*] and *B* = [*b*1, …, *b**m*]. To develop such an algorithm, we represent an *A**B*-decomposition [*A*1, …, *A**m*] for *B* as a list [(*b*1, *A*1), …, (*b**m*, *A**m*)], where *σ*(*A**j*) = *b**j* for every *j* = 1, …, *m* to better clarify the association between the parts in *B* and the decomposition of *A*. [t] [1] **return** ∅ $quotient \leftarrow min(b~{\tt div}~a\_p, m\_p)$ *R**e**s**u**l**t* ← ∅ *R**e**s**u**l**t* = *R**e**s**u**l**t* ∪ {{(*a**p*, *q**u**o**t**i**e**n**t*)}} $\{A\_1,\ldots,A\_h\} \leftarrow \textsc{SumCompAux}(b - i a\_p, A, p+1)$ *R**e**s**u**l**t* = *R**e**s**u**l**t* ∪ {*A*1, …, *A**h*} *R**e**s**u**l**t* The $\textsc{SumCompAux}$ function ---------------------------------- Before presenting the general algorithm, we introduce the recursive function $\textsc{SumCompAux}$ having three parameters: an integer partition *A*, an integer *b* and a position *p*. The purpose of this function is to identify all partitions of the elements of *A* whose positions are greater (or equal to) *p* such that their sum is equal to *b*. Of course, when *p* = 1 the recursive function $\textsc{SumCompAux}$ identifies all the subpartitions of *A* such that their sum is *b*. [ex:esSumAux] Consider *A* = {(50, 1), (100, 2), (200, 1), (250, 1), (300, 1)} and *B* = [300, 300, 400] of Example [ex:es2]. When $\textsc{SumCompAux}$ is called on *A*, a part *b* ∈ *B*, and position 1, we expect that it provides all the subpartitions *A*1, …, *A**h* of *A* s.t. *σ*(*A*1) = … = *σ*(*A**h*) = *b*. Specifically: * $\textsc{SumCompAux}(300,A,1)\!=\!\{\!\{(50,1),(250,1)\},\!\{(100,1),(200,1)\},\!\{(300,1)\}\!\}$, * $\textsc{SumCompAux}(400,A,1)=\{\{(50,1),(100,1),(250,1)\},\{(100,2),(200,1)\},\{(100,1),(300,1)\}\}$. Note that $\textsc{SumCompAux}(300,A,4)=\{\{(300,1)\}\}$. That is, {(250, 1), (300, 1)} are the only parts of *A* that are considered for determining the value 300. At each step of the recursion, the function evaluates all the subpartitions of *A* that can be generated by taking into account the element at position *p* (denoted *a**p*). If *a**p* is greater than *b* (since *A* is ordered), no further subpartitions can be determined and the empty set is returned (see Item 3 of Lemma [lemma:1]). The same conclusion is obtained when the position *p* is greater than ℓ(*A*). When these cases are not met, it means that *b* ≥ *a**p*. Therefore, we can consider the possibility to use *a**p* (or not) in the identification of the subpartitions whose sum is equal to *b*. For this purpose we determine *q**u**o**t**i**e**n**t* as the minimum between the possible multiplicity of *a**p* (denoted with *m**p*) and the integer division of *b* by *a**p*. It represents the maximum number of times that *a**p* can be summed to obtain the value *b*. When the current element is not considered for determining the subpartitions of *A* whose sum is *b*, the function will return as output the subpartitions that can be determining starting from the element of *A* of position *p* + 1 for the same element *b*. When the current element is taken *i* > 0 times, the recursive call is invoked on the value (*b* − *i**a**p*) starting from the next position. Therefore, it determines the subpartitions *A*1, …, *A**h* of *A* whose sum is (*b* − *i**a**p*) and will return as output these sets in which element *a**p* is taken *i* times. When *q**u**o**t**i**e**n**t* ⋅ *a**p* = *b* there is no needs to proceeds with further recursive calls, and the pair (*a**p*, *q**u**o**t**i**e**n**t*) can be directly returned. Note that, when a call to the recursive function returns the empty set, it means that along this path no results can be obtained. ![Trace of the execution of function \textsc{sumCompAux}](images/trace "fig:")[fig:sumCompAux] Figure [fig:sumCompAux] reports an excerpt of the trace of the calls to function $\textsc{sumCompAux}$. In the top of the figure the parts of *A* with their repetitions are shown. The string ``return empty" with the black background represents the situation in which we have reached the end of the partition without finding sub-lists whose sum is *b*, whereas the gray background means that it is useless to proceed because *a**p* > *b*. Each column reports all the calls for the same element of *A*. The first line reports the calls in which the current element of *A* is not selected. This leads to reach the end of partition *A* without finding any sub-list of *A* whose sum is 400. In returning from the recursive call, the function tries to create a single instance of 300, and to identify instances of *A* of position 6 whose sum is 100. However, since we have reached the end of *A*, the empty set is returned. The same empty sets are returned in the call backs of the function until the second column (call(300,A,2)) is reached. For this case, indeed, we are able to identify a partition *R**e**s**u**l**t* = {{(300, 1), (100, 1)}}. *R**e**s**u**l**t* = {{(200, 1), (100, 2)}} is another subset that is obtained for call(200,A,2). Since we have reached to maximum number of repetitions of 100 in *A*, the last two results are collected in a partition and returned. The algorithm proceeds in identifying new results when the part 50 in *A* is selected once. We do not further present the other calls, that are analogous to those we have now discussed. The following theorem shows that $\textsc{SumCompAux}$ invoked on *A*, *b* and position *p* provides all the partitions *A**j* s.t. *σ*(*A**j*) = *b*. This theorem proves that this is true for all *a**i* ∈ *A* where *i* ≥ *p*. [teoSumAux] Let *A* = [(*a*1, 1), …, (*a**k*, *m**k*)] be a partition, and *b*, *p* (*p* ≤ *k*) two positive integers. The application of the function $\!\!\textsc{ SumCompAux}$ to *A*, *b* and *p* always terminates and returns all the partitions *A*1 ⊆ *A*, …, *A**s* ⊆ *A*, s.t.: * the position of the elements in *A*1, …, *A**s* is greater than (or equal to) *p*; * the sum of the elements in each subpartition is *b*; * no other subpartitions of *A* with elements of position greater than (or equal to) *p* can be identified whose sum is *b*. Let *k* = ℓʹ(*A*). We have the following two cases. Case 1. If *p* > *k*, the condition at line 1 guarantees that no solution is provided: an empty set is returned and the function terminates. Case 2. If *p* ≤ *k*, we proceed by induction on *q* = *k* − *p*. Case 2.1. If *q* = 0, then *p* = *k*. In this case, the only element of *A* to be evaluated is (*a**k*, *m**k*). The only possible solution is *v* = *b*/*a**k* with *v* ≤ *m**k*. We have the following subcases. Case 2.1.1. If *b* < *a**k*, then Equation ([eq:IntDiv]) has no solution and the function returns an empty set and terminates (line 2). Case 2.1.2. If *b* ≥ *a**k* and Equation ([eq:IntDiv]) has no integer solution or the integer solution is greater than *m**p*, then the function returns the empty set. In this case, in fact, no element (*a**p*, *t*) with *t* ≤ *m**p* exists s.t. *b* = *a**p**t*. The first statement of the *else* branch (line 9) is executed and the function $\textsc{SumCompAux}$ is called passing a position *p* + 1 > *k*. For the part proved in Case 1, the empty set is returned back; therefore the condition at line 10 is false, the empty set is returned and the function terminates. Case 2.1.3. If *b* ≥ *a**k* and Equation ([eq:IntDiv]) has an integer solution and the integer solution is lesser or equal than *m**p*, then the algorithm provides the set {(*a**k*, *t*)} as result which solves exactly Equation [eq:IntDiv] with *t* ≤ *m**k* (*k* = *p*). In fact, in this case, the calculation of *q**u**o**t**i**e**n**t* gives exactly *t*. In the *for* loop (line 5) there is only one case that satisfies the condition *i* = *q**u**o**t**i**e**n**t*  and  *q**u**o**t**i**e**n**t* ⋅ *a**p* = *b* In this case, when the condition at line 6 is satisfied, the set {(*a**k*, *q**u**o**t**i**e**n**t*)} is returned. Otherwise, the *else* branch (line 8) is executed and, using the same considerations of Case 2.1.2, *R**e**s**u**l**t* is left unaltered. At the end, the function correctly returns back the set {(*a**k*, *q**u**o**t**i**e**n**t*)} and terminates. This closes the proof of Case 2.1 Case 2.2. If the thesis is true for *q* − 1, then we have to prove that it holds also for *q*. Case 2.2.1. If *b* < *a**p*, then Equation ([eq:IntDiv]) has no solution neither for position *p* nor for all positions greater than *p*: the empty set is returned and the function terminates. Case 2.2.2. If *b* ≥ *a**p*, then we have the solution *A*ʹ*j* = [(*a**p*, *m*ʹ*p*, *j*), …, (*a**k*, *m*ʹ*k*, *j*)], where the elements with multiplicity *m*ʹ\_, *j* = 0 have been discarded and *m*ʹ*p*, *j* ≤ *m**p*  and  *b* = ∑*i* = *p**k**m*ʹ*i*, *j**a**i* However, the last equation can be rewritten by extracting the part of position *p* from the sum and moving it to the left member of the equation as follows: *b* − *m*ʹ*p*, *j**a**p* = ∑*i* = *p* + 1*k**m*ʹ*i*, *j**a**i* . The second member of Equation ([eq:EqSum1]) is the sum of the components of the set *A*ʺ*j* = {(*a**p* + 1, *m*ʹ*p* + 1, *j*), …, (*a**k*, *m*ʹ*k*, *j*)}. The first member of ([eq:EqSum1]) is formed by varying the *m*ʹ*p*, *j* from 0 (when (*a**p*, *m*ʹ*p*, *j*) is not present) to a value *v*, where *v* ≤ *b*/*a**p*  and  *v* ≤ *m**p* . The algorithm, after checking the condition at line 2, which is not satisfied, continues with the calculation of *q**u**o**t**i**e**n**t*. *q**u**o**t**i**e**n**t* is exactly the maximum value as described in ([eqConstr1]). The *for* loop (from line 5 to line 18) spans the *i* values from 0 to *q**u**o**t**i**e**n**t*. When *i* < *q**u**o**t**i**e**n**t* or *i* = *q**u**o**t**i**e**n**t*, but *q**u**o**t**i**e**n**t* is not an exact divisor of *b*, the function enters in the *else* branch at line 8. In this branch, the set {*A*1, …, *A**h*} is calculated by function $\textsc{SumCompAux}$ with parameters *b* − *i**a**p*, *A* and *p* + 1 as showed in Equation ([eq:EqSum1]). Due to the induction hypothesis, with this call, we have returned the set of partitions for position *p* + 1 and *b*ʹ = *b* − *i**a**p*, as stated in ([eq:EqSum1]) (the *m*ʹ*p*, *j* = *i* and the sum of the elements of the returned set is *b*ʹ = *b* − *i**a**p*). If this set is empty for the position *p* + 1, it means that no solution exists also for position *p*. If the returned set is not empty, then it is added to *R**e**s**u**l**t* based on the value of *i*: * if *i*  =  0 by adding the returned set to *R**e**s**u**l**t* as it is because the left member of ([eq:EqSum1]) is *b*, * if *i* > 0 we add to the returned set of position *p* + 1, also the pair (*a**p*, *i*) because of ([eq:EqSum1]); Then, we can guarantee that *R**e**s**u**l**t* contains only and all the correct subpartitions for the case (*a**p*, *i*) with *i* < *q**u**o**t**i**e**n**t* or *i* = *q**u**o**t**i**e**n**t* but *q**u**o**t**i**e**n**t* is not an exact divisor of *b*. When *i* = *q**u**o**t**i**e**n**t* and *q**u**o**t**i**e**n**t* is an exact divisor of *b*, then the only possible solution for this case is the pair (*a**p*, *q**u**o**t**i**e**n**t*) (*i* = *q**u**o**t**i**e**n**t*) because *b* = *q**u**o**t**i**e**n**t* ⋅ *a**p*; then we add to *R**e**s**u**l**t* the pair (*a**p*, *q**u**o**t**i**e**n**t*). When the for loop ends, the function returns the *R**e**s**u**l**t* so calculated and terminates the execution. Thus, Case 2.2 is proved. [t] [alg1] [1] **Input:** Two partitions: *A* = {(*a*1, *m*1), …, (*a**k*, *m**k*)}, *B* = [*b*1, …, *b**m*] **Output:** The set of all *A**B*-decompositions enabling the sum composition from *A* to *B* (when the sum composition exists, the empty set otherwise) **return** ∅ **return** ∅ **return** ∅ **return** ∅ **return** ∅ *R**e**s**u**l**t* ← ∅ ${\mathcal A} \leftarrow \textsc{SumCompAux}(b\_{1}, A, 1)$ **return** ∅ *B*ʹ ← *B* \ [*b*1] ${\mathcal F} \leftarrow \textsc{SumComp}(A \setminus A\_h, B')$ *R**e**s**u**l**t* ← *R**e**s**u**l**t* ∪ {*f* ∪ {(*b*1, *A**h*)}} *R**e**s**u**l**t* ← {{(*b*1, *A*)}} *R**e**s**u**l**t* The $\textsc{SumComp}$ Algorithm -------------------------------- The function $\textsc{SumCompAux}$ presented in the previous section is the core function of our approach because it generates all possible subpartition *A*1, …, *A**h* of *A* whose sum is equal to a *b*. If *b* is the first element of the partition *B*, then each *A**i* (1 ≤ *i* ≤ *h*) is the first element of an *A**B*-decomposition. Moreover, the problem reduces to find all *A*ʹ*B*ʹ-decompositions, where *A*ʹ = *A* \ *A**i* (1 ≤ *i* ≤ *h*) and *B*ʹ = *B* \ [*b*1]. Starting from this remark, we have developed Algorithm [alg1] ($\textsc{SumComp}$) for the generation of all *A**B*-decompositions, that works taking into account, in succession, the single elements of *B*. In particular, this algorithm takes advantage of the theoretical results obtained in Section [sec:properties]. Algorithm [alg1] works as follows. First, until line 6, it applies Items 5, 1 and 3 of Lemma [lemma:1], and Theorems [theo:3] and [theo:aGb] for checking necessary conditions of existence of sum composition. When these conditions are met, the algorithm proceeds in the examination of all the elements in *B*, otherwise, we can guarantee that no enabling decompositions can be determined. Then, recursively, the algorithm analyzes the length of *B*. When ℓ(*B*) = *m* = 1, all elements of the original *B* have been analyzed and the pair (*b*1, *A*) is included in *R**e**s**u**l**t* (see line 18 of the branch *else*). Indeed, since the condition at line 2 is false, it means that *b*1 is the sum of the elements of *A*. When ℓ(*B*) = *m* > 1, the function $\textsc{sumCompAux}$ is invoked (line 8) on the first element *b*1 of *B*, the current partition *A* and the position 1. In this way, we determine the subpartitions of *A* whose sum is *b*1. If the result of this invocation is the empty set, it means that no sub-partition *A* exist whose sum is *b*1 and then the empty set is returned. Otherwise, we have identified all possible subpartition A for element *b*1, and $\textsc{sumComp}$ is recursively invoked for each of the partition *A**h* ∈ A on *A* \ *A**h* and *B* \ [*b*1]. The recursive call will return the decompositions F for the elements of *B* \ [*b*1] starting from a partition *A* from which *A**h* have been removed. At this point, all the elements in A should be combined with all the elements in F in order to determine all the *A**B*-decompositions between *A* and *B* (line 14 of the algorithm). ![Recursive calls of the \textsc{sumComp} Algorithm](images/CatenaChiamateSumComp1 "fig:")[fig:sumComp] As shown by the following example, the application of $\textsc{sumComp}$ returns the set *R**e**s**u**l**t* that contains all the *A**B*-decompositions that can be determined from the initial *A* and *B*. [ex:sumcomp] Let *A* = [(50, 1), (100, 2), (200, 1), (250, 1), (300, 1)] and *B* = [300, 300, 400] the partition of Example [ex:es2]. The first line of Figure [fig:sumComp] reports the initial partitions *A* and *B*. The first vertical arrow on the left leads to the three possible *A*1, *A*2, *A*3 subpartitions whose sum is 300 (first element of *B*) that can be obtained from *A* by calling $\textsc{SumCompAux}$ at line 8 of the algorithm. Starting from them, the input of recursive calls to $\textsc{SumComp}$ (line 12) are determined as follows (*A* \ *A*1, *B* \ [300]), (*A* \ *A*2, *B* \ [300]), and (*A* \ *A*3, *B* \ [300]). Then, the second vertical line represents the subpartitions whose sum is 300 (second element of the initial *B*) that can be obtained from the current *A*. At this point, the backward arrow leads to the partial and final results (in total six) obtained by identifying the subpartitions of *A* whose sum is 400. These partial results are accumulated in the *R**e**s**u**l**t* variable from the recursive calls. The results in green are the new one that are added to the existing ones (in white) in the current step. [theo:correct1] Let *A* = [(*a*1, *m*1), …, (*a**k*, *m**k*)] and *B* = [*b*1, …, *b**m*] be two partitions. The application of Algorithm [alg1] to *A* and *B* always terminates and returns all possible *A**B*-decompositions (the empty set when $ A \not\leq B $). Each result $\mathcal S\_i$ of this algorithm has the the form $\mathcal S\_i=\{(b\_1,A\_{i,1}),\ldots,$ (*b**m*, *A**i*, *m*)} and is an *A**B*-decomposition. To prove this claim we have to show that: 1. *S**i* is an *A**B*-decomposition when *A* ≤ *B*, the empty set otherwise. 2. *R**e**s**u**l**t* contains only *A**B*-decompositions. 3. *R**e**s**u**l**t* contains all *A**B*-decompositions. The algorithm termination is guaranteed by the verification of these three items. Item 1. The algorithm first checks if the sufficient conditions of non existence of sum composition (lines 1 to 5) are satisfied. When one of these conditions is true, the empty set is returned and the algorithm terminates. Otherwise, the proof of item 1 is conducted by induction on *m* = ℓ(*B*) * If *m* = 1, then the solution is exactly the partition *A*. In fact, since *m* = 1 and the condition of line 1 is false, we have that ∑*i* = 1*k**m**i**a**i* = ∑*j* = 1*m**b**j* = *b*1. *R**e**s**u**l**t* is initialized with the empty set (line 6) and, since the condition of the *if* at line 8 is not satisfied, then the instruction at line 18 is executed and assigns to *R**e**s**u**l**t* the single solution {(*b*1, *A*)}. * We assume the property true for ℓ(*B*) = *m* − 1 and we prove it for ℓ(*B*) = *m*. *R**e**s**u**l**t* is initialized with empty set. The *if* at line 7 is satisfied, then the invocation of the $\textsc{SumCompAux}$ function returns the set ${\mathcal A}$ of all the subpartitions of *A* whose sum is *b*1, if they exist (see Theorem [teoSumAux]). If ${\mathcal A}$ is the empty set (condition at line 9), it means that no combination of elements of *A* can give as sum *b*1 and then *B* is not sum composition of *A* and the empty set is returned. Instead, if ${\mathcal A \neq \emptyset}$, then the function spans on all the *A**h* that are the outputs of $\textsc{SumCompAux}$ (from line 11 to line 16). This is to guarantee that each combination of possible solutions is taken into consideration. For each *A**h*, the algorithm recursively calculates all the enabling decompositions for *A* \ *A**h* and *B* \ [*b*1]. The two inputs are not empty. In fact, *B* \ [*b*1] has at least the element *b*2 (being *m* > 1). Moreover, *A* \ *A**h* ≠ ∅, being *σ*(*A**h*) = *b*1 < *σ*(*B*) = *σ*(*A*), i.e. *σ*(*A**h*) < *σ*(*A*). So, by the inductive hypothesis, the set ${\mathcal F}$ contains all *A* \ *A**h*, *B*ʹ-decompositions. The *R**e**s**u**l**t* is obtained (line 14) by adding, for each solution $f \in {\mathcal F}$, the pair (*b*1, *A**h*). The obtained result is thus an *A**B*-decomposition. Item 2. Consider a generic solution in *R**e**s**u**l**t* and we show that it is valid. The generic solution *s* ∈ *R**e**s**u**l**t*, by the induction hypothesis, has the form *s* = *f* ∪ {(*b*1, *A*1)}. However, since *f* is one of the output of Algorithm [alg1] (*A* \ *A*1, *B* \ [*b*1]), it can be written as *f* = {(*b*2, *A*2), …, (*b**m*, *A**m*)}. Then *S* = {(*b*1, *A*1)} ∪ {(*b*2, *A*2), …, (*b**m*, *A**m*)} with the property *σ*(*A*1) = *b*1 and *σ*(*A**i*) = *b**i* for every *i* = 2, …, *m*. So *A* ≤ *B*. Item 3. By absurd, suppose that a solution *S* = {(*b*1, *A*1), …, (*b**m*, *A**m*)} is not in *R**e**s**u**l**t*. Because at line 8 we have in ${\mathcal A}$ all the pairs (*b*1, *A*ʹ*h*) where *σ*(*A*ʹ*h*) = *b*1 (see Theorem [teoSumAux]), then the pair (*b*1, *A*1) was provided by the function $\textsc{SumCompAux}$. Then, the recursive call of Algorithm [alg1] with the parameters *A* \ *A*ʹ1 and *B* \ [*b*1], by inductive hypothesis, returns all (*A* \ *A*ʹ1, *B* \ [*b*1])-decompositions. So, it should also contain {(*b*2, *A*2), …, (*b**m*, *A**m*)}. The union of these two sets is thus an *A**B*-decomposition, in contradiction with the initial hypothesis. This concludes the proof of the theorem. The Existential Sum Composition Algorithm ========================================= In the previous section, Algorithm [alg1] was developed for generating all *A**B*-decompositions of two given integer partitions *A* and *B*. However, for several problems (as those mentioned in the introduction) we are not interested on all of them, but only on the fact that *B* is sum composition of *A*, that is, if there exists at least one *A**B*-decomposition. In this section, we develop the Algorithm $\textsc{SumCompExist}$ to provide an answer to such a problem. This algorithm relies on the function $\textsc{SumCompExistAux}$, which checks the sufficient conditions for having *A* ≤ *B*. This function, however, is invoked only after applying some *simplifications*, according to the theorems of Section [subsection:existence] for removing elements from *A* and *B* while preserving the relation of sum composition. Algorithm $\textsc{SumCompExist}$ checks in cascade several properties by applying items 1, 3, and 5 of Lemma [lemma:1]. Then, it applies the first *simplification* by eliminating the equal values from the partition *A* and *B* (Theorem [cor1]). Then, on the returned sets (if not empty) it applies Theorem [theo:3] and then Theorem [theo:simpl2]. Finally, it performs the check related to Theorem [theo:aGb]. After these checks and *simplifications*, the function $\textsc{SumCompExistAux}$ is invoked. The function $\textsc{SumCompExistAux}$ starts by checking the condition of Item 5 of Lemma [lemma:1] (line 12). Whenever the condition is verified, the *f**a**l**s**e* value is returned because no *A**B*-decompositions can be obtained. Otherwise, the length of *B* is checked. If ℓ(*B*) = 1 (line 14), then we are sure that an *A**B*-decomposition has been identified. Otherwise, when ℓ(*B*) > 1, the function (line 14) determines the partitions [*A*1, …, *A**h*] for *b*1 by invoking the function $\textsc{SumCompAux}$ (described in previous section). Whenever $h \not = 0$, we have *σ*(*A**i*) = *b*1 for every *i* = 1, …, *h*, and, for each *A**i*, $\textsc{SumCompExistAux}$ is recursively invoked on *A* \ *A**i* and *B* \ [*b*1] for checking the relation of sum composition between these two partitions (line 17). When one of these recursive function returns *t**r**u**e*, it means that *A* ≤ *B* and then the *t**r**u**e* value is returned (line 18). Whenever none of them returns *t**r**u**e*, it means that no *A**B*-decompositions exist and the *f**a**l**s**e* value is returned (line 20). [t] [alg2] [1] **Input:** Two partitions: *A* = {(*a*1, *m*1), …, (*a**k*, *m**k*)}, *B* = [*b*1, …, *b**m*] **Output:** *P* = *t**r**u**e* (when the sum composition exists) *P* = *f**a**l**s**e* otherwise (*B* is not sum composition of *A*) **return** *f**a**l**s**e* **return** *f**a**l**s**e* **return** *f**a**l**s**e* (*A*, *B*) ← (*A* \ *C*, *B* \ *C*) **return** *t**r**u**e* **return** *f**a**l**s**e* (*A*, *B*)  ←  (*A* \ [*a**k*], *B* \ [*b**m*] ∪ [*b**m* − *a**k*]) **return** *f**a**l**s**e* $Result \leftarrow \textsc{SumCompExistAux}(A, B)$ *R**e**s**u**l**t* **return** *f**a**l**s**e* **return** *t**r**u**e* $\{A\_1,\ldots,A\_h\}\leftarrow \textsc{SumCompAux}(b\_1, A, 1)$ *B*ʹ = *B* \ [*b*1] $Result \leftarrow \textsc{SumCompExistAux}(A \setminus A\_i, B')$ **return** *t**r**u**e* *f**a**l**s**e* Let *A* = {(50, 1), (100, 2), (200, 1), (250, 1), (300, 1)} and *B* = [300, 300, 400] be the partitions of Example [ex:es2]. Since *A* ∩ *B* = [300], by applying the code at line 4 of the algorithm, we obtain *A* = {(50, 1), (100, 2), (200, 1), (250, 1)} and *B* = [300, 400]. By considering the first 300 in *B*, the subpartitions of *A* whose sum is 300 are only {(50, 1), (250, 1)} and {(100, 1), (200, 1)}. Since the invocation of the recursive function on *A* = {(100, 2), (200, 1)} and *B* = [400] returns true, the algorithm returns the existence of a decomposition between the two subpartitions. Therefore, the process in this case is faster than the one described in Example [ex:sumcomp]. The following theorem provides the correctness of the algorithm. [theo:Existence] The application of Algorithm $\textsc{SumCompExist}$ to the partitions *A* and *B* always terminates and it returns *t**r**u**e* whenever *B* is sum composition of *A*. The function $\textsc{SumCompExist}$ can be divided in two parts: the first one (from line 1 to line 9) verifies the sufficient conditions for which *B* is not sum composition of *A*. The only exceptions are lines 4 and 7 in which the two partitions are simplified. Whenever, after removing from *A* and *B* their intersection, the obtained partition *A* is empty, it means that *A* = *B* and trivially *B* is sum composition of *A* (line 5). The second part of $\textsc{SumCompExist}$ is simply returning the result from $\textsc{SumCompExistAux}$ that can be proved by induction on the length of *B*. Case 1. If ℓ(*B*) = 1, then we have two possibilities. If the condition at line 12 is verified, then the function returns *f**a**l**s**e* as it has to be by Item 5 of Lemma [lemma:1]. On the contrary, the condition on line 13 is verified (because *b*1 = *σ*(*A*)) and *t**r**u**e* is returned. In both cases the algorithm terminates the execution. Case 2. If ℓ(*B*) > 1, we assume true for a *m* = ℓ(*B*) − 1, and we prove for *m* + 1. Also in this case, if the condition at line 12 is verified, then the function returns *f**a**l**s**e* and terminates (according to Item 5 of Lemma [lemma:1]). The condition at line 13 is false and thus function $\textsc{SumCompAux}$ is invoked on *b*1, *A*, and 1. It returns the subpartitions {*A*1, …, *A**h*} of *A* s.t. *b*1 = *σ*(*A**j*) for every *j* = 1, …, *h* (Theorem [teoSumAux]). If this set is empty, it means that *b*1 cannot be associated with any element of *A* and thus *B* is not sum composition of *A*. Therefore, the function jumps the *for* loop (lines 16–19), returns *f**a**l**s**e* and terminates. On the contrary, if this set is not empty, then the *for* loop (lines 16–19) is entered and each *A**i* ∈ {*A*1, …, *A**h*} is processed by recursively invoking $\textsc{SumCompExistAux}$ on *A* \ *A**i* and *B* \ [*b*1]. By inductive hypothesis, this function returns the existence of a sum composition between its parameters. When one of the returned value is *t**r**u**e*, the function also returns *t**r**u**e*. Indeed, *σ*(*A**i*) = *b*1 and *B* \ [*b*1] is sum composition of *A* \ *A**i* then, by Lemma [lemma:sumnm1], *B* is sum composition of *A*. Whenever none of the recursive invocations of $\textsc{SumCompExistAux}$ returns *t**r**u**e*, it means that *B* cannot be sum composition of *A*, and the *f**a**l**s**e* value is returned and the algorithm terminates. Experimental Results ==================== The algorithms described in the previous sections have been experimentally evaluated by considering synthetic datasets with specific characteristics for comparing their behaviors in the cases in which we know the existence of sum composition and the cases in which no sum composition can be determined. In the generation of the partition *A*, we have considered the possibility to randomly generate *n* samples in specific ranges of values with or without repetitions. In the following we denote with *A*[*m**i**n*, *m**a**x*]*n* the partition *A* whose length is *n* and ∀*a* ∈ *A*, *m**i**n* ≤ *a* ≤ *m**a**x*. For the generation of a partition *B* with *m* elements ( 2  ≤  *m*  <  *n*) we have considered *A*[*m**i**n*, *m**a**x*]*n* and applied one of the following rules. 1. Randomly split the *A*[*m**i**n*, *m**a**x*]*n* in *m* non-empty partitions and then determine the values of *B* by summing up the integers in each partition. 2. Generate the sum of the values of *A*, randomly generate *m* distinct ordered values as follows: *v*1, …, *v**m* − 1 in the range [1, *σ*(*A*)], and *v**m* = *σ*(*A*). Assuming *v*0 = 0, each *b**i* is obtained as *b**i* = *v**i* − *v**i* − 1. Using rule R1, we are guaranteed of the existence of sum composition, whereas by rule R2, we have that *σ*(*A*) = *σ*(*B*), but the condition of sum composition is not always guaranteed. Notice that in case of non-existence of a sum composition between the partitions *A* and *B*, the execution cost of the exhaustive and existential algorithms should be comparable because the entire space of possible solutions has to be explored. In this evaluation, we also wish to study the impact of the properties presented in Section [sec:properties] for reducing the number of recursive calls introduced in the algorithms. All experiments have been conducted 100 times for each type of generation of *A* and *B* and the reported results (both in terms of execution time and number of enabling decompositions) correspond to their average. The experiments have been executed on a desktop with medium capabilities (Intel Core i7-4790 CPU at 3.60 GHz, processor based on x64 bit, 8 GB RAM). Experiments with the $\textsc{SumComp}$ Algorithm ------------------------------------------------- In the evaluation of the performances of the $\textsc{SumComp}$ Algorithm we have considered different combinations of the parameters for the generation of *A*[*m**i**n*, *m**a**x*]*n* and the rule *R*1 for the generation of *B*. Indeed, the worst case corresponds on the existence of the sum composition and the need of generating all possible *A**B*-decompositions. The first experiments have been devoted to the identification of the execution time and number of enabling decompositions by varying the length of *A* and *B*. Due to the explosion of different combinations, and the limitation of the machine used for the experiments, we were able to consider partitions of *A* with ℓ(*A*) ≤ 23. Indeed, with ℓ(*A*) = 24, memory allocation issues were raised ( > 4.8 GB memory used). Moreover, we have considered three ranges of values ([1, 100], [1, 150], [1, 200]) in which the values of *A* have been randomly chosen. Figure [fig:SumCompTime1-200] shows a 3-dimensional comparison of the execution times varying the length of *A*[1, 200]*n* (with 3 ≤ *n* ≤ 23) and the length of *B* (with 2 ≤ ℓ(*B*) ≤ 22). The range [1, 200] has been chosen because it is the most representative among the considered ones, in fact the results in this range dominate the results in the other ones. The graphic shows that the execution times tend to increase when ℓ(*B*) is much smaller than ℓ(*A*). This makes sense because a higher number of possible combinations of values in *A* can have as sum the values in *B*. The execution times with these examples is affordable, but by increasing of a single unit the length of *A*, the memory is not sufficient for verifying all the different combinations and the results cannot be reported.     [fig:SumCompTot] A slice of Figure [fig:SumCompTime1-200], by fixing ℓ(*B*) = 4, is reported in Figure [fig:SumCompNumberTimeB41-200] with also the number of identified enabling decompositions. The graphic shows that the number of solutions (as well as the execution times) increase exponentially. Specifically, around 100.000 enabling decompositions in average can be determined for ℓ(*A*) = 23 in an elapsed time of 900 ms in average. Moreover, we can identify a correlation between the number of solutions and the execution times (correlation index  ≈ 91% on ℓ(*A*)). Another slice of Figure [fig:SumCompTime1-200], by fixing ℓ(*A*) = 23, is reported in Figure [fig:SumCompNumberTimeA231-200]. In this case we can note that the maximum execution time (and also the number of enabling decompositions) is obtained for low values of ℓ(*B*). Indeed, the few values contained in *B* can be obtained by summing different combinations of values in *A*. When ℓ(*B*) increases the possible combinations deeply decrease. If we now compare all the slices that can be obtained from Figure [fig:SumCompTime1-200] we can make the following observations. The execution time for the slice corresponding to ℓ(*A*) = 20 is around 30 ms while the one for the slice ℓ(*A*) = 23 moves to 900 ms with a significant increase of time. Moreover, the highest execution time for the slice corresponding to ℓ(*A*) = 20 is reached with ℓ(*B*) = 3, whereas for the case ℓ(*A*) = 23 the maximum execution time is reached with ℓ(*B*) = 4. Moreover, in the considered range of values we can state that the maximum number of enabling decompositions follows the law $5.5 \leq \frac{{\ell(A)}}{{\ell(B)}} \leq 7$. However, this observation holds only for the considered partitions *A* and ranges. A deeper analysis is required for proving the general validity of this claim. Table [tbl:enabling] reports the number of solutions (in average) by changing the range of values from which the integers in *A* are chosen. We can observe that the number of enabling decompositions is higher for values of *A* chosen in the range [1, 100]. This result is not intuitive and needs further investigations. [b] | # A | Range 1-100 | Range 1-150 | Range 1-200 | | --- | --- | --- | --- | | 18 | 1,168 | 447 | 251 | | 19 | 4,086 | 1,520 | 667 | | 20 | 13,901 | 4,261 | 2,097 | | 21 | 38,352 | 18,815 | 7,810 | | 22 | 159,581 | 54,395 | 26,285 | | 23 | 477,449 | 198,690 | 99,552 | [tbl:enabling] Figure [fig:SumCompNumberA23DistinctRange] shows the impact on the execution time of the presence of repeated elements in *A*. By increasing the number of repetitions of values in *A* (0 means no duplication, 1 one value duplicated and so on) we observe that the execution time decreases for each range of values. This means that our way to represent partitions has positive impact on the performances.     [fig:SumCompExist] Experiments with the $\textsc{SumCompExist}$ Algorithm ------------------------------------------------------ In the experiments with the existential algorithm we have considered the rule *R*2 for the generation of the partition *B*. Indeed, in this case we are interested in evaluating the behavior of the algorithm also when no solutions exist. Actually, the lack of an enabling decomposition corresponds to the worst case because the entire space of solutions should be explored before reaching to the conclusion. With this algorithm we have identified an issue of lack of memory too (when ℓ(*A*) = 33). However, since this algorithm does not require to carry all the enabling decompositions, we were able to conduct experiments with a partition *A* with maximal length of 32. [b]0.45![Execution times of \textsc{sumCompExist} when enabling decompositions do not exist – varying the length of A^{n}_{1,200} and the length of B (with 3 \leq {\ell(A)} \leq 32, 2 \leq {\ell(B)} \leq 31)](images/SumCompExistNotFound "fig:") [fig:SumCompExistNotFound]   [b]0.45![Frequency of existence of sum composition w.r.t. the generated datasets by varying the length of A^{n} for all ranges and the length of B (with 3 \leq {\ell(A)} \leq 32, 2 \leq {\ell(B)} \leq 31)](images/SumCompExistBasePerFound "fig:") [fig:SumCompExistBasePerFound] Figure [fig:SumCompExistGlobal] shows a 3-dimensional comparison of the execution times varying the length of *A*[1, 200]*n* (3 ≤ *n* ≤ 32) and the length of *B* (2 ≤ ℓ(*B*) ≤ 31). The skyline of the time execution of the existential algorithm is similar to the exhaustive version. However, the absolute times are considerably smaller. Indeed, the worse execution time for the exhaustive algorithm is around 900 ms for ℓ(*A*) = 23, whereas for the existential algorithm for a partition *A* with the same length is around 17 ms. In case of existence, moreover, we are able to handle a partition *A* with 32 elements. Thus, bigger than those considered in the exaustive case (the average execution time for ℓ(*A*) = 32 and ℓ(*B*) = 2 is around 4500 ms). Analogously to the exhaustive algorithm we have reported some slices of Figure [fig:SumCompExistGlobal] in Figure [fig:SumCompExistGlobalSliceB2] (by setting ℓ(*B*) = 2) and in Figure [fig:SumCompExistGlobalSliceA32] (by setting ℓ(*A*) = 32) by considering different intervals where the values of *A* are chosen. The results are analogous with those obtained in the exhaustive algorithm, but we have to observe that the increase in time execution is obtained for higher values of *A* (ℓ(*A*) > 30) and that we obtain higher execution times for the range [1 − 200] and for the case with ℓ(*B*) = 2. Figure [fig:SumCompExistA32DistinctRange] shows the execution times at increasing number of duplicated values and by considering different ranges of values for *A*. Also in this case we can remark that our representation of partitions has positive effects in the performances of the algorithm. This is more evident for the ranges [1, 100] and [1, 200], whereas in the range [1, 150] we have an outlier (average execution time 4, 219 ms in the case of number of repetitions of elements in *A* equal to 8) that has been confirmed by other experiments that we have conducted on the same range of values. Figure [fig:SumCompExistNotFound] reports the average execution time of the algorithm applied to cases where an enabling decomposition does not exists. These cases have been generated by rule *R*2 discussed at the beginning of the section and considering different lengths of *A* and *B* (ℓ(*A*) from 2 to 32 elements). It is easy to see that on average the execution time is lesser than 1 ms. By comparing these results with those reported in Figure [fig:SumCompExistGlobal] (where the enabling decompositions always exist) we can observe that identifying the lack of sum composition is much faster than identifying its presence of several order of magnitude. This result is the opposite of what we were expecting. We performed a deeper analysis on these results and we noticed that, for the cases of non existence of sum composition, the *simplification* mechanisms were very effective by significantly reducing the length of the partitions *A* and *B* and, consequently, reducing the execution time of the algorithm [b]0.45![Frequency of existence of sum composition w.r.t. the generated datasets by fixing {\ell(B)}=2 and by varying the length of A for all ranges (with 3 \leq {\ell(A)} \leq 32)](images/SumCompExistBasePerFoundB2 "fig:") [fig:SumCompExistBasePerFoundB2]   [b]0.45![Frequency of lacking of sum composition w.r.t. the generated datasets by fixing {\ell(A)}=32 for all ranges and by varying the length of B (with 2 \leq {\ell(B)} \leq 31)](images/SumCompExistBasePerFoundA32 "fig:") [fig:SumCompExistBasePerFoundA32] Figure [fig:SumCompExistBasePerFound] reports the frequency of the occurrence of enabling decompositions w.r.t. the number of cases that have been randomly generated in our datasets. The frequency is much higher when the length of *B* is low (around 2) and the length of *A* is high. The trend is better represented in Figure [fig:SumCompExistBasePerFoundB2] where a slice of the previous figure is shown by fixing ℓ(*B*) = 2 and in Figure [fig:SumCompExistBasePerFoundA32] where a slice of previous figure is shown by fixing ℓ(*A*) = 32. From these graphics we can note that the frequency with which enabling decompositions are identified is 16, 56%. Moreover, when the ratio ℓ(*A*)/ℓ(*B*) is low, the frequency of cases in which enabling decompositions are not identified rises to 100%. The highest numbers of these cases is verified when ℓ(*A*)/ℓ(*B*) = 2 and this result can be justified by Theorem [theo:3]. For higher value of this ratio, it could be worth investigating if the property holds also for higher numbers of *A*. The last experiments were devoted to evaluate the impact of the *simplification* properties introduced in Section [subsection:existence] on the performances of the existential algorithm. For this purpose, two kinds of *simplifications* are considered: * A *full simplification*: when the $\textsc{sumCompExist}$ algorithm can decide if the sum composition holds or not without the need to call the SumCompExistAux function. This happens when one of the conditions of the statements 1, 2, 3, 5, 6 and 8 are met in Algorithm [alg2]. * A *partial simplification*: when $\textsc{sumCompExist}$ can reduce the execution time of the algorithm by decreasing the numbers of elements of *A* and *B* before calling the SumCompExistAux function. This happens when the statements at line 4 or at line 7 is applied.     [fig:XXX] For the *full simplification*, the following situations of exit from Algorithm [alg2] are analyzed: * *variant B*. Exit at line 2. * *variant C*. Exit at line 3. * *variant E*. Exit at line 5. * *variant F*. Exit at line 6. * *variant G*. Exit at line 8. These variants are compared with the entire execution of the Algorithm [alg2] which is denoted *variant FULL*. Note that the exit at line 1 from Algorithm [alg2] is not considered because this situation cannot ever be verified because of the use of rule *R*2 in the generation of the datasets. Figure [fig:SumCompExistSimplTotal] reports the percentage of cases for the different variants to Algorithm [alg2]. We can see that, for the ranges of values observed, we have that the *full simplification* can provide the existence or not existence of sum composition for roughly 70% of the cases. This result is quite promising, because despite the high complexity of sum composition, it seems that for a significant percentage of cases, the algorithm can provide an answer quite quickly. The figure shows that the maximum average time is 1.23 ms (ℓ(*B*) = 2 and ℓ(*A*) = 29) that is 3 order of magnitude smaller than the case of not *full simplification* (4,469 ms when ℓ(*B*) = 2 and ℓ(*A*) = 32 in Figure [fig:SumCompExistGlobal]). Another interesting behavior can be observed in Figure [fig:SumCompExistSimplTotal]: when ℓ(*A*) ≥ 16 the percentage of *full simplification* becomes stable for all the cases. This fact deserves further study in the future. Figure [fig:SumCompExistSimplTime] shows the execution times of the different variants w.r.t. the ratio of ℓ(*A*) and ℓ(*B*). We can observe that till the ratio is 3, this *simplification* is quite effective in at least 50% of the cases. When this ratio increases, the efficacy of the simplification decreases rapidly. It would be interesting to verify with higher length of *A* if these results are confirmed. Another investigation direction is to identify other kinds of *simplification* that can be applied. Figure [fig:SumCompExistSimplA] shows the effects of *partial simplification* (only for the call of $\textsc{SumCompExistAux}$) of the partition *A* by applying the statements at line 4 and 7 in the $\textsc{sumCompExist}$ algorithm. In particular the line ``% CASE A REDUCTION“ is the percentage of the cases where we can reduce the number of elements of *A*, line ``% A REDUCTION” is the ratio (number of *A* eliminated)/ℓ(*A*) (two different scales used). The x-axis is the ratio ℓ(*A*)/ℓ(*B*) because it is the most reasonable due the variation of *A* and *B* on our tests. We can observe that this kind of simplification is applied in all the cases when ℓ(*A*)/ℓ(*B*) ≤ 2 then it rapidly decreases arriving at an application of this *simplification* at roughly 10% of the cases when ℓ(*A*)/ℓ(*B*) ≥ 10. The percentage of number of *A* eliminated is even less with a decreasing curve until ℓ(*A*)/ℓ(*B*) = 10 and then a strange small increase until ℓ(*A*)/ℓ(*B*) = 16. This kind of *simplification* can be considered interesting but marginal w.r.t. the most complex cases. Related Work ============ The sum composition problem, as defined in this article, is hard to find in the literature. A similar class of problems, named *Optimal Multi-Way Number Partitioning*, can be found in. In their case, the authors provide different optimal algorithms for separating a partition *A* of *n* positive integers into *k* subsets such that the largest sum of the integers assigned to any subset is minimized. The paper provides an overview of different algorithms that fall into three categories: sequential number partitioning (SNP); binary-search improved bin completion (BSIBC); and, cached iterative weakening (CIW). They show experimentally that for large random numbers, SNP and CIW outperform state of the art by up of seven orders of magnitude in terms of runtime. The problem is slightly different from the one that we face in this paper in which we wish to partition the set *A* according to the values contained in the set *B*. However, we also exploit a SNP approach for the generation of the possible partitions of the partition *A* according to the integer numbers contained in *B*. Peculiarity of our approach is the representation of the partition *A* and *B* that allows one to reduce the cases to be explored by eliminating useless permutations. A much more similar formulation of the problem faced in this paper is the “k-partition problem” proposed in where the authors wish to minimize a objective function *z* with the partitions *A* and *B* (with the sum of *A* equals the sum of *B*) where *A* is partitioned in *m* subpartitions (*A*1, …, *A**m*). For each partition the function *z* is calculated as *z* = *m**a**x**i* = 1*m*(∑*a* ∈ *A**i**a*)/*b**i*. It is easy to see that if the minimum of objective function is *z* = 1 then *B* is sum composition of *A* and, vice versa, if *B* is sum composition of *A*, the objective function *z* = 1. In, special restrictions are introduced concerning the length and distinct values of *A*, the maximal integer in *A* and the minimal integer in *B* in order to provide more efficient algorithms. For instance, if ℓ(*B*) = 2 and *m**a**x*(*A*) = 100, then ℓʹ(*A*) > 94 (i.e. almost all the value from 1 to 100) and *m**i**n*(*B*) > 2366. Even if there are many similarities with the approach proposed in this paper, in no real implementation of the algorithms are proposed and the paper lacks of experimental results. Moreover, the paper provides a response to the existence of a solution for the *k*-partition problem but does not provide algorithms for the identification of all possible solutions as we propose in this paper. Finally, our paper provides a characterization of the properties of sum composition that are exploited for reducing the number of cases to be tested. Even if our approach is still NP-hard, its runtime is significantly reduced especially when checking the existence of a solution, and we do not provide any restrictions on the partitions *A* and *B*. In our algorithms we need to go through the identification of all the solutions of the well-known “Subset Sum” Problem that is one of Karp’s original NP-complete problems. This problem consists in determining the existence of a subset *A*ʹ of *A* whose sum is *s* and is a well-known example of a problem that can be solved in weakly polynomial time. As a weakly NP-complete problem, there is a standard pseudopolynomial time algorithm using a dynamic programming, due to Bellman, that solves it in ${\mathcal O}({\ell(A)} \sigma(A'))$ time. Recently some better solutions have need proposed by Koiliaris and Xuy in with an algorithms that runs in $\tilde{{\mathcal O}}(min\{\sqrt{{\ell'(A)}} \sigma(A'), \sigma(A')^{\frac{4}{3}}, \sigma(A)\})$ time, where $\tilde{{\mathcal O}}$ hides polylogarithmic factors. We remark that in our case, it is not enough to determine if a subset exists but we need to identify all the possible subsets in *A* whose sum is *s*. Therefore, the standard approaches proposed in the literature cannnot be directly applied. In our case, we keep the values of *A* ordered and we apply a SNP approach for enumerating all the possible subpartition of *A* (starting from the lower values of *A*) that can lead to the sum *s*. A subpartition is skipped when including a new value of *A*, this leads to a value greater that *s*. Since our partitions are ordered, we can guarantee to identify a possible solution, whereas the approach proposed in randomly generates possible configurations to be proved. Finally, relying on our representation of partitions, permutations are avoided and the configurations to test are reduced. Conclusions =========== In this paper we introduced the sum composition problem between two partitions *A* and *B* of positive integers. Starting from a formal presentation of the problem by exploiting the poset theory, we have proposed several properties that can be exploited for improving the execution time of the developed algorithms. Then, we have developed an exhaustive algorithm for the generation of all *A**B*-decompositions for which *A* ≤ *B*, and an algorithm for checking the existence of the relation *A* ≤ *B*. The correctness of these two algorithms is proved. An experimental analysis is provided for assessing the quality of the proposed solutions. As expected, the algorithms have an exponential growth with the length of *A* and *B*. We also show a correlation between the execution time and the number of enabling decompositions in the execution of the exhaustive algorithm. Moreover, we show that the number of repetitions of the elements in the partition *A* impacts the execution time for both algorithms and how the adopted data structures reduce the number of configurations to be checked. For the existential algorithm, we had some surprising good impact on “full simplification” actions on roughly 70% of the cases and also “partial simplification” can have good impact in the range of ℓ(*A*)/ℓ(*B*) ≤ 3. Other interesting experimental evidences were identified as the ratio of ℓ(*A*)/ℓ(*B*) for the enabling decompositions (exhaustive algorithm). Anyway, these algorithms present a limitation of “lack of memory” (ℓ(*A*) = 24 for the exhaustive algorithm and ℓ(*A*) = 33 for the existence algorithm). New area of investigations can be identified in finding: *i*) new *simplification* properties that can reduce the execution time of the algorithms; *i**i*) new algorithms that have less limitation of memory usage; *i**i**i*) a wider range of test cases where the validity of certain properties can be checked. 10 url#1`#1`urlprefixhref#1#2#2 #1#1 R. Bellman, Notes on the theory of dynamic programming—transportation models, Manage. Sci. 4 (2) (1958) 191–195.. G. Birkhoff, Lattice theory, Vol. 25, American Mathematical Soc., 1940. L. Comtet, Advanced Combinatorics, Reidel, Boston, 1974. M. R. Garey, D. S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness, W. H. Freeman and Co, 1979. R. M. Karp, Reducibility among Combinatorial Problems, Springer US, 1972, Ch. 9, pp. 85–103. K. Koiliaris, C. Xu, A faster pseudopolynomial time algorithm for subset sum, in: Proc. of the 28th Annual ACM-SIAM SODA, 2017, pp. 1062–1072.. S. A. Joni, G.-C. Rota, Coalgebras and bialgebras in combinatorics, Studies in Applied Mathematics 61 (2) (1979) 93–139.. C. Mark, Fast exact and approximate algorithms for k-partition and scheduling independent tasks, Discrete Mathematics 114 (1-3) (1993) 87–103. E. L. Schreiber, R. E. Korf, M. D. Moffitt, Optimal multi-way number partitioning, J. ACM 65 (4) (2018) 24:1–24:61.. G. M. Ziegler, On the poset of partitions of an integer, Journal of Combinatorial Theory, Series A 42 (2) (1986) 215 – 222..
arxiv_0000184
loss cone predicts a flatter distribution than the tide-only simulation does, the stellar encounters cannot entirely smooth out the peaks in the latitude distribution. This is consistent with the results in. Thus the observed non-uniform latitude distribution does not indicate that the Galactic tide dominates at the present epoch, as was claimed by. We can attempt to make a more quantitative assessment of how well our models predict the observed distribution. Using model comparison techniques we can ask whether our dynamical models (the combined tide plus encounters model) explain the data better than a uniform distribution. We can do this crudely on the binned data/simulations shown in the figure via a likelihood test. The act of binning means that the model-predicted number of events per bin is determined by the Poisson distribution, thus defining our likelihood. However, such a test is dependent on the choice of binning, and we have tried out a range of bin widths and centres. While we find that the combined model for the DQT Oort cloud model is always more favoured than a uniform distribution, the significance is marginal. An alternative approach is to use the unbinned data and unbinned model predictions, and to apply a kernel density estimate (KDE) to each. This produces a non-parametric density function for the data and for the model, the difference between which we quantify using the (symmetrized) Kullback-Leibler divergence (KLD). A value of zero divergence means that the two distributions are identical; larger (positive/negative) values indicate larger differences. We find that our dynamical models give smaller KLD values than do the uniform model (i.e. the former predict the data better), for both the DLDW and DQT. Although the distributions formed by the KDE are sensitive to size of the kernel adopted,[7](#fn7) we find that the KLD values are quite insensitive to this, and consistently favour the dynamical models. This suggests that the dynamical models explain the data better than a flat distribution in latitude (although because calibrating KLD ratios into formal significances is not easy, we leave this as a qualitative statement). Longitude distribution ---------------------- The perihelia of LPCs are not distributed uniformly on the celestial sphere. It has been suggested that they lie preferentially on a great circle, as evidenced by two peaks at *l**c* ≃ 135and *l**c* ≃ 315seen in Figure [fig:LPC1Abl]. The comets on this great circle could be induced by stellar encounters with preferred directions, thereby producing the apparent anisotropy. In the lower two panels in Figure [fig:dynbl], we see that the model predictions do not produce any very large peaks, although one around *l**c* ≃ 135 is discernable. We also observe a peak around *l**c* = 0–60 which is proposed as a signal of the “Biermann comet shower”. In our model, this peak is probably the result of accumulated perturbations from several stellar encounters with preferred directions. The peak around *l**c* = 135 is more prominent in the model prediction for the comets injected into the observable zone (red points/line in the figure). This peak is generated primarily by one or more massive stellar encounters. Hence, stellar encounters play a more significant role in injecting comets into the observable zone than just into the loss cone. This is consistent with the “synergy effect” investigated by. As with the latitude distribution, we also measured the KLD for the model predictions (for the loss cone) and for a uniform distribution. The dynamical models predict the data little better than a uniform distribution. (The likelihood test gives a similar result.) One reason for this lack of support for our dynamical (combined) model could be the fact that we are averaging the predicted distribution from the encounters over ten different realizations of the stellar encounters. This will tend to smooth out individual peaks, which are probably produced by just a few encounters with massive stars.[8](#fn8) If we instead only used a single random realization of encounters, we are unlikely to reproduce exactly the showers which occurred. This is an inherent problem of modelling stellar encounters in a stochastic way. This does not affect our model prediction of the latitude distribution nearly as much, however, because its shape is dominated by the non-stochastic tide. In order to investigate this we again use our encounter model via the proxy *γ* (a proxy of comet flux) defined in equation [eqn:gamma], but now as a function of $b\_{\rm p}$ and $l\_{\rm p}$, the direction toward the perihelion of the stellar encounter. Moreover, we now impose a minimum threshold, $\gamma\_{\rm lim}$, on the proxy: The larger the value of $\gamma\_{\rm lim}$, the larger the encounter perturbation must be for it to be included in the model. Using the encounter model described in section [sec:encmod], we simulate 10 million encounters and calculate *γ*, $b\_{\rm p}$, and $l\_{\rm p}$ for each. The predicted direction of an LPC’s perihelion is opposite on the sky to the direction of the encounter perihelion. Thus we can calculate *b**c* and *l**c* accordingly and use *γ*(*b**c*,  *l**c*) to predict the PDF of *b**c* and *l**c*. Then we divide the range of the Galactic longitude into 12 bins and sum *γ* in each bin including only those encounters with $\gamma>\gamma\_{\rm lim}$. Normalizing this gives the angular PDF of the encounter-induced flux, as shown in Figure [fig:lgammalim]. For larger values of $\gamma\_{\rm lim}$ we observe a larger variation in the flux with longitude, as expected, because then fewer encounters contribute to the distribution. As we can see from equation [eqn:gamma], these are the more massive and/or slower stars. These encounters may induce a series of weak comet showers rather than a single strong comet shower. Because strong encounters are rare and extremely weak encounters cannot induce enough anisotropic LPCs, the spikes in the longitude distribution can be caused by at least two weak encounters rather than one strong or many extremely weak encounters. From Figure [fig:dynbl], we see that the tide cannot completely wash out the anisotropy in the longitude distribution induced by these encounters. [fig:lgammalim] Consistent with our results, found that the two spikes in the longitude distribution result from weak impulsive perturbations by analyzing the energy and angular momentum of dynamically new LPCs. Similar to the definition of weak comet showers in and, we define encounters with *γ* in the interval $\lbrack 1\times 10^{-7}, 5\times 10^{-6}\rbrack M\_{\sun} ~km~s^{-1}~AU^{-1}$ as weak encounters. We do not find strong peaks in the longitude distribution of *γ* for these encounters in Figure [fig:lgammalim], because we know that *γ* can underestimate the intensity of the shower (see Figure [fig:gammafcpdf]). Thus a small enhancement of the two peaks in Figure [fig:lgammalim] may correspond to a large enhancement of the peaks in the longitude distribution as predicted by our dynamical model in Figure [fig:dynbl]. Inspecting the catalogue of the frequencies of different types of stellar encounters in table 8 of, we see that there were at least eight encounters with masses equal to or larger than one solar mass encountering the solar system in the past 10Myr with perihelia less than 1pc. These encounters can move to a heliocentric distance much larger than 50pc over that time, which is the upper limit for their unbiased sample of stellar encounters with *M**V* < 5 – see Figure 13 of. We also point out that GL 710 will have a close approach with the solar system in about 1.4Myr at a perihelion longitude of around 135∘. According to studies, it will induce a weak comet shower which is expected to increase the cometary flux by 40%-50%. This supports the suggestion that the solar apex motion induces the non-uniform longitude distribution of the LPCs’ perihelia (see Figure [fig:vencdencbl] and [fig:dynbl]). In addition, Algol, a triple-star system with a total mass of 5.8$M\_{\sun}$, encountered the solar system with a closest distance of 2.5pc 6.9Myr ago. The Galactic longitude of Algol was also close to 135∘. Based on the above plausible scenario, we conclude that the peaks in the longitude distribution of LPC perihelia could arise from the perturbations of a few strong stellar encounters, the encounter directions of which depend on the solar apex motion. Considering the important role of the Galactic tide in generating a non-uniform latitude distribution, and the role of stellar encounters in generating a non-uniform longitude distribution, the synergy effect plays a role in maintaining – rather than smoothing out – the anisotropy in the observed LPCs. In other words, we can explain the anisotropy of the LPC perihelia based only on the solar apex motion and the Galactic tide, without needing to invoke the Jupiter-mass solar companion as proposed by. To date there is no observational evidence for such a companion. We note that a recent analysis of data from the WISE satellite has excluded the existence of a Jupiter-mass solar companion with a heliocentric distance less than 1pc. Sensitivity test ================ Spiral arms and Galactic bar ---------------------------- The spiral arms and Galactic bar are non-axisymmetric, time-varying components of the Galactic potential. These make only a small contribution to the tidal force acting on the Sun and Oort cloud (). However, if their contribution is always in the same direction, the effect of their perturbation could accumulate. This can occur when the Sun is near to the co-rotation resonance, when the rotation velocities of the disk and of the spiral pattern coincide. To test this hypothesis, we simulate the solar and cometary motion adopting various constant pattern speeds of the spiral arms and the bar with fixed Galactic density distributions (specified in Section [sec:potential]). We integrate the solar orbit in the Galactic potential both including and excluding the non-axisymmetric components. The initial conditions of the Sun and potential parameters are given in Table [tab:modelpar]. We find that the gravitational force from the bar is always much larger than that from the spiral arms. However, the difference between the pattern speed of the Galactic bar Ω*b* and solar angular velocity is much larger than the difference between the pattern speed of the spiral arms Ω*s* and solar angular velocity, which results in a much lower accumulated perturbation due to the bar. To see this effect, we integrate the solar orbit back to 5 Gyr in the past. The variations of galactocentric radius and vertical displacement of the Sun are shown in Figure [fig:solarorbit]. The arms have a stronger effect on the solar orbit than does the bar. The spiral arms tend to increase the galactocentric radius of the Sun as the integration proceeds (back in time), while the bar modulates the galactocentric radius by a comparatively small amount. Neither the bar nor the arms significantly affect the vertical displacement amplitude of the Sun. Here the combined perturbation from the potential including both the Galactic bar and spiral arms changes the solar motion the same way as the perturbation from the bar alone. [fig:solarorbit] We now simulate the tide-induced flux corresponding to these different potential models. The lower panel in Figure [fig:fluxnonsym] shows that the non-axisymmetric components do not alter the flux very much. Although the perturbation from the arms can change the solar orbit slightly, the resulting change in the perturbation of the Oort cloud is minimal. The changed tidal force may change some individual cometary orbits, but has little effect on the overall injected comet flux, because the effect of the tide depends also on the distribution of the comets, which is nearly isotropic. We also see that the arms modify the cometary flux more than the bar, consistent with its larger impact on the stellar density. (The limited number of injected comets contributes to the sharp peaks in the relative flux difference, Δ*f**c*/*f**c*, after 3Gyr.) [fig:fluxnonsym] We also investigated the sensitivity of the solar motion and comet flux to the pattern speed of the asymmetric components. We find that the closer the pattern speed of the arms is to the angular velocity of the Sun, the larger the perturbation from the arms is. (We can understand this in terms of a resonance.) Meanwhile, the perturbation from the bar is not sensitive to the bar’s pattern speed. Finally, we also find that the distribution of *b**c* and *l**c* of the comet flux does not change very much for different non-axisymmetric components of the Galactic potential. In summary, we find that the model predictions of the tide-induced cometary flux are generally insensitive to changes in the non-axisymmetric components of the Galactic potential, except when a resonance between the arms and the solar orbit occurs, which increases the variation in the cometary flux. Variations of the prior ----------------------- As discussed earlier, the evidence depends on the prior distribution adopted for the model parameters. As this prior frequently cannot be determined with any certainty, it is important to investigate the sensitivity of the evidence to changes in the prior.[9](#fn9) To complete the calculation of evidences for dynamical models, we also vary the other three initial conditions, $V\_R(t=0~{\rm Myr})$, $z(t=0~{\rm Myr})$, and $V\_z(t=0~{\rm Myr})$, in the EncTideSigProb models, which we previously kept constant. Together with SigProb, EncSigProb and TideSigProb, this was previously the best favoured model (Table [tab:craterBF]). We made numerous changes in the priors by altering their parameter ranges, and re-did all necessary Monte Carlo samplings, numerical simulations, and likelihood calculations and recomputed the Bayes factors. Some of our results are shown in Table [tab:priorchange]. | models | varied prior | Bayes factor for basic150 | Bayes factor for basic250 | | --- | --- | --- | --- | | | none | 4.4 | 3.0 | | | $\sigma=2\bar{\sigma\_i}$ | 2.0 | 4.8 | | | $\sigma=1/2\bar{\sigma\_i}$ | 2.2 | 4.7 | | | $N=2N\_{\rm ts}$ | 1.9 | 1.8 | | | $N=1/2N\_{\rm ts}$ | 2.4 | 7.6 | | | none | 1.8 | 2.2 | | | $\sigma=2\bar{\sigma\_i}$ | 1.6 | 3.7 | | | $\sigma=1/2\bar{\sigma\_i}$ | 1.8 | 2.6 | | | $N=2N\_{\rm ts}$ | 1.5 | 1.5 | | | $N=1/2N\_{\rm ts}$ | 2.4 | 2.9 | | | none | 0.34 | 0.43 | | | 10 < *T* < 100 | 0.12 | 0.14 | | | 2*π*/300 < *ω* < 2*π*/10 | 0.34 | 0.39 | | | 10 < *T* < 300 | 0.88 | 5.4 × 10− 2 | | | none | 1.0 | 1.0 | | | 10 < *T* < 100 | 0.90 | 0.88 | | | 2*π*/300 < *ω* < 2*π*/10 | 1.0 | 1.0 | | | 10 < *T* < 300 | 1.8 | 1.4 | | | none | 15 | 2.0 × 102 | | | $0<t\_0<1.2\tau\_{\rm max}$ | 13 | 1.4 × 102 | | |  − 100 < *λ* < 100 | 7.7 | 1.0 × 102 | | | 0 < *λ* < 100 | 1.3 × 10− 2 | 1.8 × 10− 3 | | | none | 6.4 | 80 | | | $0<t\_0<1.2\tau\_{\rm max}$ | 8.3 | 71 | | | 2*π*/300 < *ω* < 2*π*/10 | 9.9 | 97 | | TideSigProb3 | none | 9.0 | 1.7 × 102 | | TideSigProb4 | none | 9.1 | 1.7 × 102 | | TideSigProb5 | none | 9.0 | 1.7 × 102 | | TideSigProb6 | none | 11 | 1.6 × 102 | [tab:priorchange] The difference in Bayes factors for random models (RandProb, RandBkgProb) and periodic models (SinProb, SinBkgProb) with different prior distributions is less than five. The Bayes factors also remain less than ten so they remain no better explanations of the cratering data than the Uniform model. Thus our former conclusions about these models are not very sensitive to plausible changes in the priors. The TideSigProb models in which other parameters are varied have nearly the same evidences as the TideSigProb models listed in Table [tab:craterBF], so these too are insensitive to these changes in the priors. We also see that the SigProb model with positive *λ* has Bayes factors much lower than SigProb with negative *λ* for both the basic150 and basic250 data sets. The dynamical models have parameters of the Galaxy potential, Sun’s initial conditions and combination ratio parameters (*η* and *ξ*) which are listed in Table [tab:prior]). To keep things simple, we change the fixed parameters and the ranges of the varying parameters individually, and then calculate the evidence by sampling the prior defined by the changed parameter and other parameters shown in Table [tab:prior]. We calculate evidences for dynamical models with double or half the disk mass (*M**d*), halo mass (*M**h*), standard deviation of the initial value *R* (*σ**R*), and the range of the varying ratio between the EncTideProb (or TideProb) and SigProb models (*η*). In addition, previous studies suggest that the number of tide-induced LPCs is not identical to the encounter-induced LPCs, i.e. *ξ* ≠ 1. Thus we multiply the ratio between the tide-induced flux and the encounter-induced flux (*ξ*) by a factor of 4 or 1/4 for the sensitivity test. The resulting Bayes factors calculated for the basic150 data set are shown in Table [tab:dynpriorchange]. In each row we see little variation: the Bayes factors are relatively insensitive to these parameters. This means that either the parameter space of the EncTideSigProb1 model is evenly favoured by the basic150 data set, or the data are unable to discriminate between the compound dynamical models. l|\*11c models &none&2*M**d*&1/2*M**d*&2*M**h*&1/2*M**h*&2*σ**R*&1/2*σ**R*&*ξ* = 4&*ξ* = 1/4&0 < *η* < 8&0 < *η* < 2 &1.5&2.5&3.4&2.5&4.1&2.3&2.6&—&—&—&— &1.0&2.1&2.3&2.6&3.5&1.8&1.0&1.5&0.73&—&— &11&15&11&13&12&12&11&12&10&13&8.8 [tab:dynpriorchange] The model prediction of the anisotropic LPCs (see Figure [fig:dynbl]) depends to a greater or lesser extent on the Galactic potential, the Sun’s initial condition, the Oort Cloud model, and the model of encounters. We vary the model parameters in the same way as we did in Table [tab:dynpriorchange] and simulate ten million orbits of DLDW comets perturbed by the tide and ten samples of stellar encounters backwards to 10Myr ago. We find that the latitude distribution of the LPC perihelia is not sensitive to the change of the Galactic halo mass, the initial conditions of the Sun, or the direction of the solar apex. The amplitudes of the peaks in the latitude distribution are reduced if we decrease the mass of the Galactic disk or increase the stellar masses, which make the stellar encounters play a more important role in injecting comets into the loss cone. However, the overall profile of the peaks is not changed in the latitude distribution. The peaks in the longitude distribution shift slightly if we change the solar apex direction, the masses of the encounters, or the mass of the Galactic disk. The longitude distribution is not sensitive to changes in the other model parameters. Finally, we also tested the effect of changing the time step in the (combined) simulations. We simulated four million comets generated from the DLDW model perturbed by the tide and ten samples of stellar encounters backwards to 10Myr ago using a time step of 0.001Myr (as opposed to 0.01Myr). We find little change in either the latitude or longitude distributions. In addition, we see only 4% more comets injected when using this smaller time step. In summary, we find that the overall shape of the angular distribution of LPC perihelia in both longitude and latitude is not very sensitive to changes in the model parameters, in particular not to the initial distribution of Oort Cloud comets, not to the masses of Galactic halo and disk, and not to the initial conditions of the Sun. Discussion and Conclusion ========================= We have built dynamical models for the impact rate and angular distribution of comets induced by the Galactic tide and stellar encounters, as modulated by the solar motion around the Galaxy. Without using the approximate methods (the averaged Hamiltonian or impulse approximation), we numerically simulate the tide-induced flux and encounter-induced flux separately. We use these to validate the use of proxies for tide-induced flux, *G*3, and for the encounter-induced flux, $\gamma\_{\rm bin}$, in our models. Using the Bayesian evidence framework, we find that the pure trend model (SigProb) together with the dynamical models including a trend component (EncSigProb, TideSigProb and EncTideSigProb) for the cratering record are better favoured than other models we have tested. The trend component indicates a decreasing cratering rate (*λ* < 0) towards the past over the past 100 Myr. This suggests that either the asteroid impact rate or the preservation bias or both dominates the cratering record. Because the craters in our data sets are larger than 5km, the preservation bias may not be very significant over this time scale. The disruption of a single large asteroid could explain the trend in the data, as suggested by. In addition, our models, which include the solar apex motion, can properly predict the anisotropic perihelia of LPCs without assuming a massive body in the outer Oort Cloud or an anisotropic Oort Cloud. The EncTideSigProb, EncSigProb and TideSigProb models have Bayes factors of the same magnitude as the SigProb model, which indicates that either the tide and encounter components are unnecessary in modelling the temporal distribution of craters, or the data cannot effectively discriminate between the models. The stochastic component in the comet flux arising from encounters – as represented by the term *γ* – in the EncProb and EncTideProb models can slightly increase their evidence relative to the TideProb model. We have performed a sensitivity test by changing the prior PDF over the parameters in the dynamical models and other time series models, and find only small changes of the Bayes factors. The asymmetrical components in the Galactic potential could, in principle, increase the time-variation of the comet flux and hence impact rate predicted by the dynamical models, by inducing larger deviations of the Sun’s motion from a circular orbit and thus larger changes in the local stellar density. It turns out that the non-axisymmetric component has relatively little impact on the predicted cometary flux, with the exception of when the Sun is in co-rotation with the spiral arms. In that case the transient resonance can produce large variations in the flux. By including the solar apex motion, our dynamical models for anisotropic LPCs can predict reasonably well the distribution of Galactic latitude and longitude in a set of 102 dynamically new comets. In this model, the asymmetry in the distribution of Galactic latitudes caused by the Sun’s current location and its motion over the past 10Myr (comparable with the time scale of a comet shower). The two narrow peaks in the cometary perihelia at *l**c* = 135∘ and *l**c* = 315∘ could be caused by a handful of strong stellar encounters encountering the Sun with their encountering velocities in the direction of antapex in the HRF. On the other hand, we might also see something similar due to the periodic orbital motion about the Sun of a massive body (such as a brown dwarf) residing within the Oort cloud. However, our dynamical model, which takes into account the solar apex motion, can predict the longitudinal asymmetry without assuming the existence of such a body. In addition, the latitude distribution of LPC perihelia predicted by our simulations is consistent with the theoretical prediction, although one peak in the observed distribution is not properly predicted by our simulations. The synergy effect between the encounters and the tide cannot entirely eliminate the anisotropy induced by either the tide or the encounters. A non-uniform distribution in the perihelion direction of encounters was found by, although the signal is of questionable significance due to the incompleteness, i.e. faint stars which high velocities being too faint after 10Myr for Hipparcos to have observed. An anisotropy in the longitude of LPCs will not correspond to an anisotropy in longitudes of impacts on the Earth’s surface due to the rotation of the Earth and its orbit about the Sun. Some latitude variation may be expected, despite the long-term variation in inclination and obliquity of the Earth’s orbit. Disrupted comets generally retain their original orbital plane, so the resulting asteroids would tend to impact in the plane perpendicular to solar apex. Yet these are all higher order effects which would be difficult to convincingly detect and relate to the solar orbit in the analysis of terrestrial impact craters. Our modelling approach has, like any other, introduced various assumptions and approximations. We have ignored the synergy effect between the Galactic tide and stellar encounters highlighted by. We instead simply sum the tide-induced flux and the encounter-induced flux in the ratio *ξ* to 1. Because the cometary impact rate modulated by the solar motion around the Galactic center seems to be unnecessary in order to explain the data, the synergy effect, which is also influenced by the solar motion, may not change the result significantly. In addition, we use a decreasing impact rate towards the past (negative trend component) to model the combined effect of preservation bias and asteroid impact rate. In modelling the angular distribution of the LPC perihelia, the sample noise in the comets injected into the observable zone prevent us from building a more robust model, especially for the longitude distribution. This problem could be resolved by calculating perturbations based on a more accurately measured Galactic tide and using an actual catalogue of encountering stars in the solar neighborhood as opposed to our stochastic model of plausible encounters. In common with some other studies (e.g. ), we have ignored the perturbing effect on comets from the giant planets, although we acknowledge that the giants planets could influence the predicted LPC flux in particular. The planetary perturbations can also change the fraction of the inner Oort cloud comets among the injected LPCs, which in turn could change the angular distribution of the LPC perihelia. However, these perturbations should not have a significant effect over the relatively short time scale of 10Myr which we use in the simulations to generate the LPC distribution. As the main goal of our work is to study the variable effect of the solar orbit on the LPC flux and angular distribution, rather than to predict the absolute LPC flux precisely, our conclusions should not be overly affected by neglecting the giant planets in this way. In the future, the Gaia survey allow us to detect many more recent stellar encounters down to fainter magnitude limits and larger distances than Hipparcos, thereby allowing us to extend the time scale over which we can get a complete sample of recent stellar encounters. The Gaia magnitude limit of G=20 which is low enough to cover the high velocity stars in a time scale of 10 Myr. For example, a star with absolute magnitude of 10 and a velocity of 80km/s in the HRF would move 800 pc in 10 Myr and so have an apparent magnitude of 19.5. Thus Gaia will be able to observe all stars more massive than early M dwarfs (and thus essentially all relevant stars) encountering the solar system over the past 10 Myr. For more recent timescales Gaia can observe even less massive objects. Moreover, the Gaia catalogue of more massive stellar encounters (stars with absolute magnitudes larger than that of the Sun) may shed light on the study of terrestrial craters over since the beginning of the Phanerozoic era, some 550Myr ago. Gaia can further improve the measurement of Sun’s initial conditions and the potential of the Galaxy. After including planetary perturbations, this would make the simulation of cometary orbits accurate enough to trace the stellar encounter back to the time when it generated comet showers and corresponding terrestrial craters. Acknowledgements ================ We thank Carmen Martinez, Inti Pelupessy, and Arjen van Elteren for explaining, installing, and testing the AMUSE framework. We are indebted to Anthony Brown and Piotr A. Dybczyński for their useful suggestions. We are grateful to the referee, Nathan Kaib, for constructive comments which helped to improve the manuscript. This work has been carried out as part of the Gaia Research for European Astronomy Training (GREAT-ITN) network. The research leading to these results has received funding from the European Union Seventh Framework Programme ([FP7/2007-2013] under grant agreement No. 264895. [lastpage] --- 1. E-mail:[email protected][↩](#fnref1) 2. Note that our angular distribution is different from the one given in because the direction of perihelion is opposite to that of aphelion.[↩](#fnref2) 3. We assume that the mass densities of different stellar categories have the same spatial distribution.[↩](#fnref3) 4. We define a symbol without using the subscript *i* when the symbol is derived from a combination of symbols belonging and not belonging to certain stellar category.[↩](#fnref4) 5. http://www.amusecode.org[↩](#fnref5) 6. We don’t use this equation in simulating cometary orbits in the AMUSE framework.[↩](#fnref6) 7. This is analogous to the size of the histogram bins. A histogram is just a particular type of kernel.[↩](#fnref7) 8. Such massive stars (or stars with relatively high *γ*) move slowly relative to the Sun, and so would generate a relatively narrow peak in comet flux with *l**c*.[↩](#fnref8) 9. A more robust – but also more time-consuming – way of calculating the evidence is presented in.[↩](#fnref9) Exploring the role of the Sun’s motion in terrestrial comet impacts =================================================================== [firstpage] The cratering record on the Earth and Moon shows that our planet has been exposed to high velocity impacts for much or all of its existence. Some of these craters were produced by the impact of long period comets (LPCs). These probably originated in the Oort cloud, and were put into their present orbits through gravitational perturbations arising from the Galactic tide and stellar encounters, both of which are modulated by the solar motion about the Galaxy. Here we construct dynamical models of these mechanisms in order to predict the time-varying impact rate of LPCs and the angular distribution of their perihelia (which is observed to be non-uniform). Comparing the predictions of these dynamical models with other models, we conclude that cometary impacts induced by the solar motion contribute only a small fraction of terrestrial impact craters over the past 250Myr. Over this time scale the apparent cratering rate is dominated by a secular increase towards the present, which might be the result of the disruption of a large asteroid. Our dynamical models, together with the solar apex motion, predict a non-uniform angular distribution of the perihelia, without needing to invoke the existence of a massive body in the outer Oort cloud. Our results are reasonably robust to changes in the parameters of the Galaxy model, Oort cloud, and stellar encounters. Earth — Galaxy: kinematics and dynamics — methods: statistical — solar-terrestrial relations — comets: general — Oort Cloud Introduction ============ Background ---------- Comet or asteroid impacts on the Earth are potentially catastrophic events which could have a fundamental effect on life on Earth. While at least one extinction event and associated crater is well documented – the K-T impact from 65Myr ago and the Chicxulub crater – a clear connection between other craters and extinction events is less well established. Nonetheless, we know of around 200 large impact craters on the Earth, and doubtless the craters of many other impacts have either since eroded or are yet to be discovered. Many studies in the past have attempted to identify patterns in the temporal distribution of craters and/or mass extinction events. Some claim there to be a periodic component in the data (e.g.), although the reliability of these analyses is debated, and other studies have come to other conclusions (e.g.  ). Of particular interest is whether these impacts are entirely random, or whether there are one or two dominant mechanisms which account for much of their temporal distribution. Such mechanisms need not be deterministic: stochastic models show characteristic distributions in their time series or frequency spectra (e.g.). We are therefore interested in accounting not for the times of individual impacts, but for the impact rate as a function of time. In doing this we should distinguish between asteroid and comet impacts. Having smaller relative velocities, asteroid impacts are generally less energetic. Asteroids originate from within a few AU of the Sun, so their impact rate is probably not affected much by events external to the solar system. Comets, on the other hand, originate from the Oort cloud, and so can be affected by the Galactic environment around the Sun. As the solar system orbits the Galaxy, it experiences gravitational perturbations from the Galactic tide and from encountering with individual passing stars. These perturbations are strong enough to modify the orbits of Oort cloud comets to inject them into the inner solar system. The strength of these perturbations is dependent upon the local stellar density, so the orbital motion of the Sun will modulate these influences and thus the rate of comet injection and impact to some degree (e.g. ). As the Sun shows a (quasi)-periodic motion perpendicular to the Galactic plane, and assuming that the local stellar density varies in the same way, it has been argued that this could explain a (supposed) periodic signal in the cratering record. Here we will investigate the connection between the solar motion and the large impact craters (i.e. those generated by high energy impacts) more explicitly. We do this by constructing a dynamical model of the Sun’s orbit, the gravitational potential, and the resulting perturbation of comet orbits, from which we will make probabilistic predictions of the time variability of the comet impact rate. The dates of impact craters are not the only relevant observational evidence available. We also know the orbits of numerous long-period comets (LPCs). The orbits of dynamically new LPCs – those which enter into the inner solar system for the first time – record the angular distribution of the cometary flux. This distribution of their perihelia is found to be anisotropic. Some studies interpret this as an imprint of the origination of comets, while others believe it results from a perturbation of the Oort Cloud. Under this perturbation scenario, it has been shown that the Galactic tide can (only) deplete the pole and equatorial region of the Oort Cloud in the Galactic frame, and so cannot account for all the observed anisotropy in the LPC perihelia. It has been suggested that the remainder is generated from the perturbation of either a massive body in the Oort Cloud or stellar encounters. Overview -------- Assuming a common origin of both the large terrestrial impact craters and the LPCs, we will construct dynamical models of the flux and orbits of injected comets as a function of time based on the solar motion around the Galaxy. Our approach differs from previous work in that we (1) simulate the comet flux injected by the Galactic tide and stellar encounters as they are modulated by the solar motion; (2) use an accurate numerical method rather than averaged Hamiltonian or Impulse Approximation in the simulation of cometary orbits; (3) take into account the influence from the Galactic bar and spiral arms; (4) test the sensitivity of the resulting cometary flux to varying both the initial conditions of the Sun and the parameters of the Galaxy potential, Oort Cloud, and stellar encounters. We build the dynamical models as follows. Adopting models of the Galactic potential, Oort Cloud and stellar encounters, we integrate the cometary orbits in the framework of the AMUSE software environment, developed for performing various kinds of astrophysical simulations. The cometary orbits can be integrated with the perturbation from either the Galactic tide, or stellar encounters, or both. All three are investigated. In principle, we can build a three-parameter dynamical model for the variation of the impacting comet flux as a function of time, Galactic latitude, and Galactic longitude. In practice we reduce this three-parameter model to a 1-parameter model of the variation of the comet impact rate over time, and a 2-parameter model of the angular distribution of the perihelia of LPCs. A further simplification is achieved by replacing the full numerical computations of the perturbations by separating proxies for the tide-induced comet flux and for the encounter-induced comet flux. These are shown to be good approximations which accelerate considerably the computations. We combine the predictions of the comet impact history with a (parameterized) component which accounts for the crater preservation bias (i.e. older craters are less likely to be discovered) and the asteroid impact rate. We then use Bayesian model comparison to compare the predictions of this model over different ranges of the model parameters to the observed cratering data, using the crater data and statistical method presented in. We obtain the 2-parameter model for the angular distribution of the perihelia of LPCs by integrating the full 3-parameter model over time. Because we no longer need the time resolution, we actually perform a separate set of numerical simulations to build this model. We then compare our results with data on 102 new comets with accurately determined semi-major axes (the “class 1A” comets of ). This paper is organized as follows. We introduce, in section [sec:data], the data on the craters and LPCs. In section [sec:simulation] we define our models for the Galactic potential, the Oort cloud, and for stellar encounters, and describe the method for the dynamical simulation of the comet orbits. In section [sec:bayesian] we summarize the Bayesian method of model comparison. In section [sec:impact] we use the dynamical model to construct the 1-parameter model of the cometary impact history. In Section [sec:comparison], we compare our dynamical time series models of the impact history with other models, to assess how well the data support each. In section [sec:ADP] we use the dynamical model again, but this time to predict the distribution of the perihelia of LPCs (the 2-parameter model), which we compare with the data. A test of the sensitivity of these model comparison results to the model parameters is made in section [sec:sensitivity]. We discuss our results and conclude in section [sec:conclusion]. The main symbols and acronyms used in this article are summarized in Table [tab:symbol]. [tab:symbol] | Symbol | Definition | | --- | --- | | PDF | probability density function | | LSR | local standard of rest | | HRF | heliocentric rest frame | | BP | before present | | LPC | long-period comet | | *s**j* | crater age | | *σ**t* | age uncertainty of crater | | *s**u**p* | upper limit of the age of crater | | ${\vec r}\_{\rm enc}$ | impact parameter or perihelion of encounter | | *v⃗*⋆ | velocity of a star in the LSR | | $ \vec{v}\_{\rm enc}$ | velocity of the stellar encounter relative to the Sun | | *b*⋆ | Galactic latitude of *v⃗*⋆ | | *l*⋆ | Galactic longitude of *v⃗*⋆ | | $ b\_{\rm enc} $ | Galactic latitude of $\vec{v}\_{\rm enc}$ | | $ l\_{\rm enc} $ | Galactic longitude of $\vec{v}\_{\rm enc}$ | | *b**p* | Galactic latitude of the perihelion of a stellar encounter | | *l**p* | Galactic longitude of the perihelion of a stellar encounter | | *b**c* | Galactic latitude of cometary perihelion | | *l**c* | Galactic longitude of cometary perihelion | | *q* | perihelion distance | | *a* | semi-major axis | | *e* | eccentricity | | $M\_{\rm enc}$ | mass of a stellar encounter | | $v\_{\rm enc}$ | speed of a star at encounter | | $r\_{\rm enc}$ | distance of a star at encounter | | *f**c* | injected comet flux relative to the total number of comets | | *f̄**c* | averaged *f**c* over a time scale | | *γ* | parameter of impact intensity$\frac{M\_{\rm enc}}{v\_{\rm enc}r\_{\rm enc}}$ | | $\gamma\_{\rm bin}$ | normalized maximum *γ* in a time bin | | *G*1, *G*2 | coefficients of radial tidal force | | *G*3 | coefficient of vertical tidal force | | *ρ* | stellar density | | *η* | ratio between the trend component and *f**c* | | *ξ* | ratio between the tide-induced flux and encounter-induced flux | | *κ* | angle between ${\vec r}\_{\rm enc}$ and the solar apex | | *M**s* | mass of the Sun | Data ==== Terrestrial craters ------------------- The data of craters we use in this work is from the *Earth Impact Database* (EID) maintained by the Planetary and Space Science Center at the University of New Brunswick. We restrict our analysis to craters with diameter  > 5 km and age  < 250Myr in order to reduce the influence of crater erosion (although an erosion effect is included in our time series models). We select the following data sets defined by * **basic150** (32 craters) age  ≤  150Myr, *σ**t* original * **ext150** (36 craters) age  ≤  150Myr, original or assigned * **full150** (48 craters) ext150 plus craters with *s**u**p* ≤  150Myr * **basic250** (42 craters) age  ≤  250Myr, *σ**t* original * **ext250** (46 craters) age  ≤  250Myr, original or assigned * **full250** (59 craters) ext250 plus craters with *s**u**p* ≤  250Myr The terms “basic”, “ext”, and “full” refer to the inclusion of craters with different kinds of age uncertainties. “original *σ**t*” means that just craters with measured age uncertainties are included. “original or assigned” adds to this craters for which uncertainties have been estimated. The “full” data sets further include craters with just upper age limits ( explains how these can be used effectively). As the size of the existing craters is determined by many factors, e.g. the inclination, velocity and size of the impactor, the impact surface, and erosion, we only use the time of occurrence (*s**j*) of each impact crater and its uncertainty (*σ**j*). Figure [fig:set3lim] plots the size and age of the 59 craters we use in the model comparison in Section [sec:comparison]. [fig:set3lim] Long-period comets ------------------ The LPCs we use are the 102 dynamically new comets (i.e. class 1A) identified by and discussed by. Figure [fig:LPC1Abl] shows the distribution over the Galactic latitude (*b**c*) and longitude (*l**c*) of the cometary perihelia. [2](#fn2) The two peaks in the longitude distribution suggest a great circle on the sky passing through *l* = 135∘ and *l* = 315∘. We explain this anisotropy in Section [sec:ADP]. [fig:LPC1Abl] Simulation of cometary orbits ============================= We now build dynamical models of the Oort cloud comets and their perturbation via the Galactic tide and stellar encounters by simulating the passage of the solar system through the Galaxy. We first introduce the Galactic potential, which yields a tidal gravitational force on the Sun and Oort Cloud comets. Then we give the initial conditions of the Oort cloud and the distribution of stellar encounters. Then we outline the numerical methods used to calculate the solar motion and the comet orbits. Galactic potential ------------------ We adopt a Galactic potential with three components, namely an axisymmetric disk and a spherically symmetric halo and bulge $$\Phi\_{\rm sym}=\Phi\_b+\Phi\_h+\Phi\_d \label{eqn:Phi\_sym}$$ (this is same model as in ). The components are defined (in cylindrical coordinates) as $$\begin{aligned} \Phi\_{b,h}&=&-\frac{GM\_{b,h}}{\sqrt{R^2+z^2+b\_{b,h}^2}},\\ \Phi\_{d}&=&-\frac{GM\_{d}}{\sqrt{R^2+(a\_d+\sqrt{(z^2+b\_d^2)})^2}}, \label{eqn:Phi\_component}\end{aligned}$$ where *R* is the disk-projected galactocentric radius of the Sun and *z* is its vertical displacement above the midplane of the disk. *M* is the mass of the component, *b* and *a* are scale lengths, and *G* is the gravitational constant. We adopt the values of these parameters from, which are listed in Table [tab:modelpar]. [tab:modelpar] @ll@ component& parameter value Bulge & *M**b* = 1.3955 × 1010 *M*⊙ & *b**b* = 0.35kpc Halo & *M**h* = 6.9766 × 1011 *M*⊙ & *b**h* = 24.0kpc Disk & *M**d* = 7.9080 × 1010 *M*⊙ & *a**d* = 3.55kpc & *b**d* = 0.25kpc Arm & *ζ* = 15∘ & $R\_{\rm min}=3.48$kpc & $\phi\_{\rm min}=-20^\circ$ & $\rho\_0=2.5 \times 10^7 M\_\odot {\rm kpc}^{-3}$ & *r*0 = 8kpc & *R**s* = 7kpc & *H* = 0.18kpc & Ω*s* = 20km*s*− 1/kpc bar & $R\_b/R\_{\rm CR}=0.8$ & *α* = 0.01 & $R\_{\rm CR}=R\_\odot(t=0~{\rm Myr})/2$ & *α* = 0.01 & Ω*b* = 60km*s*− 1/kpc In Section [sec:sensitivity] we will add to this non-axisymmetric and time-varying components due to spiral arms and the Galactic bar, to give the new potential $$\Phi\_{\rm asym}=\Phi\_{\rm sym}+\Phi\_{\rm arm}+\Phi\_{\rm bar} ~, \label{eqn:Phi\_asym}$$ where $\Phi\_{\rm arm}$ is a potential of two logarithmic arms from with parameters given in, and $\Phi\_{\rm bar}$ is a quadrupole potential of rigid rotating bar from. These components are used in the potential for the calculation of the solar orbit, but not the stellar encounter rate discussed in section [sec:encmod]. The geometry of the arm is $$\phi\_s(R) = \log(R/R\_{\rm min})/\tan(\zeta)+\phi\_{\rm min}, \label{eqn:phi\_spiral}$$ where *ζ* is the pitch angle, $R\_{\rm min}$ is the inner radius, and $\phi\_{\rm min}$ is the azimuth at that inner radius. A default pattern speed of Ω*p* = 20 km s− 1 kpc− 1 is adopted. The corresponding potential of this arm model is $$\begin{aligned} \Phi\_{arm} &=& -\frac{4 \pi G H}{K\_1 D\_1}\rho\_0 e^{-\frac{R-r\_0}{R\_s}}\nonumber\\ &&\times \cos(N[\phi-\phi\_s(R,t)])\left[\rm{sech}\left(\frac{K\_1z}{\beta\_1}\right)\right]^{\beta\_1}~, \label{eqn:Phi\_arm}\end{aligned}$$ where $$\begin{aligned} K\_1&=&\frac{N}{R \sin \zeta},\\ \beta\_1 &=& K\_1 H (1+0.4 K\_1 H),\\ D\_1 &=&\frac{1+K\_1 H +0.3 (K\_1 H)^2}{1+0.3 K\_1 H},\end{aligned}$$ and *N* is the number of spiral arms. The parameters in equation [eqn:Phiarm] are given in Table [tab:modelpar]. The bar potential is a 2D quadrupole. Because the Sun always lies outside of the bar, we adopt the potential $$\Phi\_{bar}=-A\_b \cos[2(\phi-\Omega\_b t-\phi\_{\rm min})] \left[\left(\frac{R}{R\_b}\right)^3 - 2\right] \ \ \ R \geq R\_b \label{eqn:Phib\_geqRb}$$ where *R**b* and Ω*b* are the size and pattern speed of the bar respectively and $\phi\_{\rm min}$ is the bar angle. We assume that the spiral arms start from the ends of the major axis of the bar. We only consider the barred state and ignore the evolution of the bar, so we adopt a constant amplitude for the quadrupole potential, i.e. *A**b* = *A**f*, in equation (3) of. *A**f* is determined by the definition of the bar strength $$\alpha\equiv 3~\frac{A\_f}{v^2}\left(\frac{R\_b}{R}\right)^3, \label{eqn:bar\_strength}$$ where *R* and *v* are the current galactocentric distance of the Sun and the corresponding local circular velocity. The fixed bar strength is given in Table [tab:modelpar], from which we calculate *A**f* and hence *A**b*. [sec:oort] Oort Cloud --------------------- We generate Oort cloud comets using two different models, one from (hereafter DQT) with the parameters defined in, and another which we have reconstructed from the work of (hereafter DLDW). In the DQT model, initial semi-major axes (*a*0) for comets are selected randomly from the interval [3000,  105]AU with a probability density proportional to *a*0− 1.5. The initial eccentricities (*e*0) are selected with a probability density proportional to *e*0, in such a way that the perihelia (*q*0) are guaranteed to be larger than 32 AU. We generate the other orbital elements — cos*i*0, *ω*0, Ω0 and *M*0 — from uniform distributions. Because the density profile of comets is proportional to *r*− 3.5, where *r* is the sun-comet distance, about 20% of the comets lie in the classical Oort Cloud (*a* > 20 000AU). In the DLDW model, the initial semi-major axes, eccentricities, and inclination angles are generated by Monte Carlo sampling from the relevant distributions shown in. This produces semi-major axes in the range 3000 to 100000AU and ensures that the perihelia are larger than 32AU. Unlike the DQT model, there is a dependency of the cometary eccentricity and inclination on the semi-major axis, as can be see in Figures 1 and 2 of. We generate comet positions and velocities relative to the invariant plane and then transform these into vectors relative to the Galactic plane. In doing so we adopted values for the Galactic longitude and latitude of the north pole of the invariant plane of 98∘ and 29∘ respectively. The distributions of the cometary heliocentric distances for the DQT and DLDW models are given in Figure [fig:DQTDLDW]. We see that the DQT model produces more comets in the inner Oort cloud ( < 20000AU) and the DLDW model more in the outer Oort Cloud ( > 20000AU). Our distributions differ slightly from those in Figure 3 of because our initial semi-major axes have different boundaries, and because our reconstruction of initial eccentricities and inclination angles is slightly different from the approach used in. Many other Oort cloud initial conditions have been constructed numerically. Given the inherent uncertainty of the Oort cloud’s true initial conditions, we carry out our work using two different Oort cloud models and investigate the sensitivity of our results to this (e.g. in section [sec:ADP]). [fig:DQTDLDW] Stellar encounters ------------------ The geometry of encounters is complicated by the Sun’s motion relative to the local standard of rest (LSR). This solar apex motion could, by itself, produce an anisotropic distribution in the directions of stellar encounters in the heliocentric rest frame (HRF). Any anisotropy must be taken into account when trying to explain the observed anisotropic perihelia of the LPCs. Nonetheless, simulated cometary orbits with an isotropic distribution of stellar encounters which is inconsistent with their method for initializing encounters. Here we use their method to generate encounters, but now initialize stellar encounters self-consistently to have a non-uniform angular distribution. ### Encounter scenario The parameters of stellar encounters are generated using a Monte Carlo sampling method, as follows. We distribute the encounters into different stellar categories (corresponding to different types of stars) according to their frequency, *F**i*, as listed in Table 8 of. In each stellar category, the stellar mass *M**i*, Maxwellian velocity dispersion *σ*⋆ *i*, and solar peculiar velocity *v*⊙ *i*, are given. The encounter scenario in the HRF is illustrated in Figure [fig:impactframe]. The encounter perihelion $\vec{r}\_{\rm enc}$ direction (which has Galactic coordinates *b**p* and *l**p*) is by definition perpendicular to the encounter velocity $\vec{v}\_{\rm enc}$. The angle *β* is uniformly distributed in the interval of [0, 2*π*]. [fig:impactframe] In this encounter scenario in the HRF, the trajectory of a stellar encounter is determined by the encounter velocity $\vec{v}\_{\rm enc}$, the encounter perihelion $\vec{r}\_{\rm enc}$, and the encounter time $t\_{\rm enc}$. In the following paragraphs, we will first find the probability density function (PDF) of encounters for each stellar category as a function of $t\_{\rm enc}$, $r\_{\rm enc}$, and $v\_{\rm enc}$, and then sample these parameters from this using the Monte Carlo method introduced by (hereafter R08). Then we will sample $b\_{\rm enc}$ and $l\_{\rm enc}$ using a revised version of R08’s method. Finally, *b**p* and *l**p* can be easily sampled because $\vec{r}\_{\rm enc}$ is perpendicular to $\vec{v}\_{\rm enc}$. ### Encounter probability The probability for each category of stars is proportional to the number of stars passing through a ring with a width of $dr\_{\rm enc}$ and centered on the Sun. The non-normalized PDF is therefore just $$P\_u(t\_{\rm enc}, r\_{\rm enc}, v\_{\rm enc}) \,=\, 4 \pi n\_i v\_{\rm enc} r\_{\rm enc} \,\propto\, \rho(t\_{\rm enc})v\_{\rm enc}r\_{\rm enc}, \label{eqn:PDF\_enc}$$ where *n**i* is the local stellar number density of the *i**t**h* category of stellar encounters, and $\rho(t\_{\rm enc})$ is the local stellar mass density, which will change as the Sun orbits the Galaxy.[3](#fn3) Thus the encounter probability is proportional to the local mass density, the encounter velocity and the encounter perihelion. We use a Monte Carlo method to sample $t\_{\rm enc}$, $v\_{\rm enc}$, and $r\_{\rm enc}$ from this. In different application cases, we sample the encounter time $t\_{\rm enc}$ over different time spans according to equation [eqn:PDFenc], where the local mass density is calculated using Poisson’s equation with the potentials introduced in section [sec:potential]. Although we may simulate stellar encounters over a long time scale, we ignore the change of the solar apex velocity and direction when simulating the time-varying comet flux (in section [sec:impact]) and the angular distribution of current LPCs (in section [sec:ADP]). We select $r\_{\rm enc}$ with a PDF proportional to $r\_{\rm enc}$ with an upper limit of 4 × 105 AU. However, the sampling process of $v\_{\rm enc}$ is complicated by the solar apex motion and the stellar velocity in LSR, which we accommodate in the following way. The encounter velocity in the HRF, $\vec{v}\_{\rm enc}$, is the difference between the velocity of the stellar encounter in the LSR, *v⃗*⋆, and the solar apex velocity relative to that type of star (category *i*) in the LSR, *v⃗*⊙ *i*, i.e.[4](#fn4) $$\vec{v}\_{\rm enc}=\vec{v}\_\star-\vec{v}\_{\odot i}~. \label{eqn:vector\_venc}$$ We can consider the above formulae as a transformation of a stellar velocity from the LSR to the HRF. The magnitude of this velocity in the HRF is $$v\_{\rm enc}=[v^2\_\star+v^2\_{\odot i}-2v\_{\odot i}v\_\star \cos \delta]^{1/2}~, \label{eqn:venc}$$ where *δ* is the angle between *v⃗*⋆ and *v⃗*⊙ *i* in the LSR. To sample $v\_{\rm enc}$, it is necessary to take into account both the encounter probability given in equation [eqn:PDFenc] and the distribution of *v*⋆. We generate *v*⋆ using $$v\_\star = \sigma\_{\star i} \left[\frac{1}{3}(\eta\_u^2+\eta\_v^2+\eta\_w^2)\right]^{1/2}, \label{eqn:vstar}$$ where *σ*⋆ *i* is the stellar velocity dispersion in the $i^{\rm th}$ category, and *η**u*, *η**v*, *η**w* are random variables, each following a Gaussian distribution with zero mean and unit variance. We then realize the PDF of encounters over $v\_{\rm enc}$ (i.e. $P\_u \propto v\_{\rm enc}$) using R08’s method as follows: (i) we randomly generate *δ* to be uniform in the interval [0, 2*π*]; (ii) adopting $v\_{\sun i}$ from table 1 in R08 and generating *v*⋆ from equation [eqn:vstar], we calculate $v\_{\rm enc}$ using equation [eqn:venc]; (iii) we define a large velocity $V\_{\rm enc}=v\_{\odot i}+3\sigma\_{\star i}$ for the relevant star category and randomly draw a velocity $v\_{\rm rand}$ from a uniform distribution over $[0, V\_{\rm enc}]$. If $v\_{\rm rand} < v\_{\rm enc}$, we accept $v\_{\rm enc}$ and the values of the generated variables *δ*, *v*⋆. Otherwise, we reject it and repeat the process until $v\_{\rm rand} < v\_{\rm enc}$. We generate 105 encounters in this way. Figure [fig:venc] shows the resulting distribution of $v\_{\rm enc}$. It follows a positively-constrained Gaussian-like distribution with mean velocity of 53km/s and a dispersion of 21km/s, which is consistent with the result in R08. In their modelling, R08 adopt a uniform distribution for $\sin b\_{\rm enc}$, and $l\_{\rm enc}$. This is not correct, however, because encounters are more common in the direction of the solar antapex where the encounter velocities are larger than those in other directions (equation [eqn:PDFenc]). We will show how to find the true distribution of $\sin b\_{\rm enc}$, $l\_{\rm enc}$, sin*b**p* and *l**p* as follows. [fig:venc] ### Anisotropic perihelia of encounters To complete the sampling process of encounters, we need to find a 5-variable PDF, i.e. $P\_u(t\_{\rm enc}, r\_{\rm enc}, v\_{\rm enc}, b\_{\rm enc}, l\_{\rm enc})$. We have used R08’s original Monte Carlo method to generate $t\_{\rm enc}$, $r\_{\rm enc}$ and $v\_{\rm enc}$ according to equation [eqn:PDFenc]. However, $b\_{\rm enc}$ and $l\_{\rm enc}$ are not generated because R08 only use equation [eqn:venc] to generate the magnitude of $\vec{v}\_{\rm enc}$ rather than the direction of $\vec{v}\_{\rm enc}$. To sample the directions of $\vec{v}\_{\rm enc}$, we change the first and second steps in R08’s method introduced in section [sec:pdfenc] as follows: (i) we randomly generate {*b*⋆, *l*⋆} such that sin*b*⋆ and *l*⋆ are uniform in the interval of [ − 1, 1] and [0, 2*π*], respectively; (ii) adopting $b\_{\rm apex}=58.87^\circ$ and $l\_{\rm apex}=17.72^\circ$ for the solar apex direction and generating *v*⋆ according to equation [eqn:vstar], we calculate $\vec{v}\_{\rm enc}$ according to equation [eqn:vectorvenc]. Selected in this way, sin*b*⋆, *l*⋆, $\sin b\_{\rm enc}$, and $l\_{\rm enc}$ all have non-uniform distributions. The Galactic latitude *b**p* and longitude *l**p* of the encounter perihelia are also not uniform. Like R08, we draw 197906 encounters over the past 5 Gyr from our distribution of encounters. The resulting histograms of $\sin b\_{\rm enc}$, $l\_{\rm enc}$, sin*b**p*, and *l**p* are shown in Figure [fig:vencdencbl]. [fig:vencdencbl] We see that the encounter velocity, $\vec{v}\_{\rm enc}$, concentrates in the antapex direction, while the encounter perihelion, $\vec{r}\_{\rm enc}$, concentrates in the plane perpendicular to apex-antapex direction. In addition, the distribution of *l**p* is flatter than that of $l\_{\rm enc}$ because $\vec{r}\_{\rm enc}$ concentrates on a plane rather than along a direction. In order to clarify the effect of the solar apex motion, we define *κ* as the angle between the encounter perihelion $\vec{r}\_{\rm enc}$ and the solar apex. If there were no solar apex motion, cos*κ* would be uniform. The effect of solar apex motion is shown in Figure [fig:apexkappa]. The solar apex motion would result in the concentration of encounter perihelia on the plane perpendicular to the apex direction. This phenomenon is detected by using Hipparcos data, although the observational incompleteness biases the data. The non-uniform distribution over cos*κ* results in an anisotropy in the perihelia of LPCs, as we will demonstrate and explain in Section [sec:ADP]. [fig:apexkappa] Methods of numerically simulating the comet orbits -------------------------------------------------- ### AMUSE Taking the above models and initial conditions, we construct an integrator for the orbits of Oort cloud comets via a procedure similar to that in, using the Bridge method in the AMUSE framework[5](#fn5) (a platform for coupling existing codes from different domains; ). A direct integration of the cometary orbits is computationally expensive due to the high eccentricity orbits and the wide range of timescales involved. We therefore split the dynamics of the comets into Keplerian and interaction terms (following ). The Keplerian part has an analytic solution for arbitrary time steps, while the interaction terms of the Hamiltonian consist only of impulsive force kicks. To achieve this we split the Hamiltonian for the system in the following way $$H = H\_{\rm Kepler} + H\_{\rm encounter} + H\_{\rm tide} \label{eqn:H\_split}$$ where $H\_{\rm Kepler}$, $H\_{\rm encounter}$, and $H\_{\rm tide}$ describe the interaction of the comet with the dominant central object (the Sun), a passing star, and the Galactic tide, respectively. Specifically, the Keplerian cometary orbits can be integrated analytically according to $H\_{\rm Kepler}$ while the interactions with the Galactic tide and stellar encounters are taken into account in terms of force kicks. For the time integration a second order leapfrog scheme is used, where the Keplerian evolution is interleaved with the evolution under the interaction terms. The forces for the latter are calculated using direct summation, in which the comet masses are neglected. Meanwhile, the Sun moves around the Galactic center under the forces from the Galactic tide and stellar encounters calculated from $H\_{\rm encounter}$ and $H\_{\rm tide}$ in the leapfrog scheme. We first initialize the orbital elements of the Sun and encountering stars about the Galaxy, and the Oort cloud comets about the Sun. We treat the stellar encounters as a N-body system with a varying number of particles, simulated using the Huayno code. The interaction between comets and the Sun is simulated with a Keplerian code based on. At each time step in the orbital integration we calculate the gravitational force from the Galaxy and stellar encounters. The velocities of the comets are changed according to the Hamiltonian in equation [eqn:Hsplit] at every half time step. Meanwhile, each comet moves in its Keplerian orbit at each time step. All variables are transformed into the HRF in order to take into account the influence of the solar motion and stellar encounters on the cometary orbits. We use constant time steps in order to preserve the symplectic properties of the integration scheme in AMUSE (although we note that a symplectically corrected adaptive time step is used in some codes, such as SCATR ). We use a time step of 0.1Myr for tide-only simulations because we find no difference in the injected flux when simulated using a smaller time step. The choice of time step size is a trade-off between computational speed and sample noise in the injected comet sample. We use a time step of 0.01Myr in the encounter-only and in the combined (tide plus encounter) simulations when modelling the angular distribution of the LPCs’ perihelia (section [sec:ADP]). (In section [sec:sensitivity] we repeat some of these simulations with a shorter time step – 0.001Myr – to confirm that this time step is small enough.) We use a time step of 0.001Myr in all other simulations. In the following simulations we adopt the initial velocity of the Sun from and the initial galactocentric radius from. Other initial conditions and their uncertainties are the same as in. The circular velocity of the Sun (at *R* = 8.27kpc), *v* = 225.06km/s, is calculated based on the axisymmetric Galactic model in Section [sec:potential]. These values are listed in Table [tab:initialcondition]. l\*5cr &*R*/kpc&*V**R*/kpc Myr− 1&*ϕ*/rad&$\dot\phi$/rad Myr− 1&z/kpc&*V**z* /kpc Myr− 1 mean &8.27 &-0.01135 & 0 & 0.029&0.026&-0.0074 standard deviation &0.5& 0.00036&0 &0.003 &0.003&0.00038 [tab:initialcondition] ### Numerical accuracy of the AMUSE-based method To test the numerical accuracy of the AMUSE-based method, we generated 1000 comets from the DLDW model and monitored the conservation of orbital energy and angular momentum. As the perturbation from the Galactic potential and stellar encounters used in our work would violate conservation of the third component of angular momentum (*L**z*), we use a simplified Galactic potential for this test, namely a massive and infinite sheet with $$\Phi\_{\rm sheet}=2\pi G\sigma |z|, \label{eqn:slab\_disk}$$ where *G* is the gravitational constant, $\sigma = 5.0\times 10^6~M\_{\sun}/$kpc2 is the surface density of the massive sheet and *z* is the vertical displacement from the sheet. Because this potential imposes no tidal force on comets if the Sun does not cross the disk, it enables us to test the accuracy of the bridge method in AMUSE by using the conservation of cometary orbital energy and the angular momentum perpendicular to the sheet. To guarantee that the Sun does not cross the plane during the 1Gyr orbital integration (i.e. the oscillation period is more than 2Gyr), we adopt the following initial conditions of the Sun: *R* = 0kpc, *ϕ* = 0, *z* = 0.001kpc, *V**R* = 0kpc/Myr, $\dot\phi=0$rad/Myr, *V**z* = 0.0715kpc/Myr. Integrating the cometary orbits over 1Gyr with a constant time step of 0.1Myr, we calculate the fractional change of the comets’ orbital energies *E* and the vertical component of their angular momenta *L**z* during the motion (Figure [fig:ELztest]). Both quantities are conserved to a high tolerance, with fractional changes of less than 10− 6 for *L**z* and less than 10− 12 for *E*. The numerical errors are independent of the comet’s energy (which is inversely proportional to the semi-major axis). Compared to the magnitude of the perturbations which inject comets from the Oort cloud into the observable zone, these numerical errors can be ignored during a 1Gyr and even a 5Gyr integration. [fig:ELztest] ### Comparison of the AMUSE-based method with other methods Our numerical method calculates perturbations from stellar encounters and the Galactic tide using dynamical equations directly, instead of employing an impulse approximation (e.g. CIA, DIA, or SIA ) or the Averaged Hamiltonian Method (AHM). In the latter the Hamiltonian of the cometary motion is averaged over one orbital period. This can significantly reduce the calculation time, but is potentially less accurate. A more explicit method is to integrate the Newtonian equations of motion directly, e.g. via the Cartesian Method (CM) of, but this is more time consuming. To illustrate the accuracy of the AHM, CM, and AMUSE-based methods in simulating high eccentricity orbits, we integrate the orbit of one comet using all methods. The test comet has a semi-major axis of *a* = 25 000AU and an eccentricity of *e* = 0.996 (as used in ). Adopting the following initial conditions of the Sun – *R* = 8.0kpc, *ϕ* = 0, *z* = 0.026kpc, *V**R* =  − 0.01kpc/Myr, $\dot\phi=0.0275$rad/Myr, *V**z* = 0.00717kpc/Myr – and using the same tide model as described above, the solar orbit under the perturbation from the Galactic tide is integrated over the past 5Gyr. Figure [fig:amuseAHM] shows that the evolutions of the cometary perihelia calculated using the CM and AMUSE-based methods are very similar, whereas AHM shows an evolution which diverges from these. As CM is the most accurate method, this shows that the AHM cannot be used to accurately calculate the time-varying, because it holds the perturbing forces constant during each orbit. Because the AMUSE-based method computes a large sample of comets more efficiently than CM does, we have adopted the AMUSE-based method in our work. [fig:amuseAHM] ### Calculation of the injected comet flux A comet which comes too close to the perturbing effects of the giant planets in the solar system will generally have its orbit altered such that it is injected into a much shorter periodic orbit or is ejected from the solar system on an unbound orbit. We regard a comet as having been injected into the inner solar system in this way when it enters into the “loss cone”, i.e. that region with a heliocentric radius of 15AU or less (the same definition as in and R08). These are the comets which can then, following further perturbations from the planets, hit the Earth. If injected comets enter an observable zone within  < 5AU then they may be observed as a LPC. Comets which are injected into the loss cone or which are ejected from the solar system (i.e. achieve heliocentric distances larger than 4 × 105AU) are removed from the simulation. The observable comets are only a subset of the injected comets because some injected comets can be ejected again by Saturn and Jupiter. But assuming that this is independent of the orbital elements over long time scales, we assume that the flux of injected comets is proportional to the flux of LPCs. Inner Oort cloud comets, in particular comets with *a* < 3000AU, may be injected into the loss cone (*q* < 15AU) but not enter the observable zone (*q* < 5AU). In our simulations we will examine the properties of comets injected into both types of target zone, and we will refer to such injected comets as LPCs. Once we have identified the injected comets, we calculate the Galactic latitudes *b**c* and longitudes *l**c* of their perihelia. Because the orbital elements of the class 1A LPCs are recorded during their first passage into inner solar system, we can reasonably assume that the direction of the LPC perihelion is unchanged after entering the “loss cone”. In Section [sec:impact] and [sec:ADP], we will model the terrestrial cratering time series and the anisotropic perihelion of LPCs based on the injected comet flux. Specifically, in Section [sec:impact], we will show how we convert the simulations of the perturbations of the cometary orbits into a model for the time variation of the cometary flux entering the inner solar system. Bayesian inference method ========================= We summarize here our Bayesian method for quantifying how well a time series model can describe a set of cratering data (or indeed any other series of discrete time measurements with uncertainties). A full description of the method and its application to the cratering data for various non-dynamical models can be found in. Evidence -------- If we define *D* as the time series of craters and *M* as some model for these data, then the evidence of the model is defined as *P*(*D*∣*M*) = ∫*θ**P*(*D*∣*θ*, *M*)*P*(*θ*∣*M*)*d**θ*,  where *θ* is the parameters of the model, and *P*(*D*∣*θ*, *M*) and *P*(*θ*∣*M*) are the likelihood of the data and the prior distribution over the parameters, respectively. The evidence is therefore the prior-weighted average of the likelihood over the parameters. It gives the overall ability of the model to fit the data, rather than the power of any individual set of parameters. As is well known in statistics, and further described in, this is the appropriate metric to use in order to compare models of different flexibility or complexity. If *t**j* is the *true* (unknown) time of the impact of crater *j**t**h*, and *τ**j* is the *measured* time with corresponding uncertainty *σ**j*, then an appropriate error model for this measurement is $$P(\tau\_j|\sigma\_j,t\_j)=\frac{1}{\sqrt{2\pi}\sigma\_j}\exp[-(\tau\_j-t\_j)^2/{2\sigma\_j^2}] \. \label{eqn:measurement}$$ The likelihood for one crater measurement can then be calculated by integrating over the unknown time $$\begin{aligned} P(\tau\_j|\sigma\_j,\theta,M) &=& \int\_{t\_j}P(\tau\_j|\sigma\_j,t\_j,\theta,M)P(t\_j|\sigma\_j,\theta,M)dt\_j\nonumber\\ &=&\int\_{t\_j} P(\tau\_j|\sigma\_j,t\_j)P(t\_j|\theta,M)dt\_j \. \label{eqn:event\_like}\end{aligned}$$ The second term in the second equation describes the time series model: it predicts the probability that an event will occur at time *t**j* given the parameters for that model. The likelihood for the whole time series, *D* = {*τ**j*}, is the product of the individual likelihoods (assuming they are measured independently), in which case $$P(D|\theta,M)=\prod\limits\_j P(\tau\_j|\sigma\_j,\theta,M) \. \label{eqn:likelihood}$$ We use this in equation [eqn:evidence] to calculate the evidence for model *M* give the set of cratering dates. The absolute scale of the evidence is unimportant: we are only interested in ratios of the evidence for any pair of models, known as the *Bayes factor*. As a rule of thumb, if the Bayes factor is larger than 10, then the model represented in the numerator of the ratio is significantly favoured by the data over the other model (see for further discussion of the interpretation). Time series models ------------------ The time series model, *M*, is a model which predicts the variation of the impact probability with time (the normalized cratering rate), i.e. the term *P*(*τ**j*∣*σ**j*, *θ*, *M*) in equation [eqn:likelihood]. The models we use in this work, along with their parameters, *θ*, are defined in Table [tab:tsmodels], and described below *Uniform.* Constant impact probability over the range of the data. As any probability distribution must be normalized over this range, this model has no parameters. *RandProb, RandBkgProb.* Both models comprise *N* impact events at random times, with each event modelled as a Gaussian. *N* times are drawn at random from a uniform time distribution extending over the range of the data. A Gaussian is placed at each of these with a common standard deviation (equal to the average of the real crater age uncertainties). We then sum the Gaussians, add a constant background, *B*, and normalize. This is the RandBkgProb (“random with background”) model. RandProb is the special case for *B* = 0. We calculate the evidence by averaging over a large number of realizations of the model (i.e. times of the events), and, for RandBkgProb, over *B*. For example, when we later model the basic150 time series, we fix *N* = 32 and range *B* from 0 to ∞ (see Table [tab:prior]). *SinProb, SinBkgProb.* Periodic model of angular frequency *ω* and phase *ϕ*0 (model SinProb). There is no amplitude parameter because the model is normalized over the time span of the data. Adding a background *B* to this simulates a periodic variation on top of a constant impact rate (model SinBkgProb). *SigProb.* A monotonically increasing or decreasing nonlinear trend in the impact PDF using a sigmoidal function, characterized by the steepness of the slope, *λ*, and the center of the slope, *t*0. In the limit that *λ* becomes zero, the model becomes a step function at *t*0, and in the limit of very large *λ* it becomes the Uniform model. We restrict *λ* < 0 in our model comparison because the decreasing trend in cratering rate towards the past seems obvious in the time series (see Figure [fig:set3lim]; see also ). However, we do include the increasing trend in our sensitivity test in Section [sec:sensitivity]. *SinSigProb.* Combination of SinProb and SigProb. *TideProb, EncProb, EncTideProb.* Models arising from the dynamical simulation of cometary orbits perturbed by either stellar encounters (EncProb) or the Galactic tide (TideProb) or both (EncTideProb). We describe the modelling approach which produces these distributions in detail in Section [sec:impact]. *EncSigProb, TideSigProb, EncTideSigProb.* Combination of EncProb, TideProb, EncTideProb (respectively) with SigProb. Some of these models – those in the first five lines in Table [tab:tsmodels] – are simple analytic models. The others are models based on dynamical simulations of cometary orbits, which we therefore call dynamical models. In the next section we will explain how we get from a simulation of the perturbation of the cometary orbits to a prediction of the cratering rate. Table [tab:tsmodels] also lists the parameters of the models, i.e. those parameters which we average over in order to calculate the evidence. The prior distributions for these parameters are listed in Table [tab:prior]. | model name | *P**u*(*t*∣*θ*, *M*) | parameters, *θ* | | --- | --- | --- | | Uniform | 1 | none | | RandProb/RandBkgProb | ∑*n* = 1*N*N(*t*; *μ**n*, *σ*)+*B* | *σ*, *B*,*N* | | SinProb/SinBkgProb | 1/2{cos[*ω**t* + *ϕ*0] + 1}+*B* | *ω*, *β*, *B* | | SigProb | [1 + *e*(*t* − *t*0)/*λ*]− 1 | *λ*, *t*0 | | SinSigProb | SinProb+SigProb | *T*, *β*, *B*,*λ*, *t*0 | | EncProb | $\gamma\_{\rm bin}(t)$ | *r⃗*⊙(*t* = 0), *v⃗*⊙(*t* = 0) | | TideProb | *G*3(*t*) | *r⃗*⊙(*t* = 0), *v⃗*⊙(*t* = 0) | | EncTideProb | $[\gamma\_{\rm bin}(t)+\xi G\_3(t)]/(1+\xi)$ | *ξ*, *r⃗*⊙(*t* = 0), *v⃗*⊙(*t* = 0) | | EncSigProb | EncProb + *η* SigProb | *η*, *λ*, *t*0, *r⃗*⊙(*t* = 0), *v⃗*⊙(*t* = 0) | | TideSigProb | TideProb + *η* SigProb | *η*, *λ*, *t*0, *r⃗*⊙(*t* = 0), *v⃗*⊙(*t* = 0) | | EncTideSigProb | EncTideProb + *η* SigProb | *ξ*, *η*, *λ*, *t*0, *r⃗*⊙(*t* = 0), *v⃗*⊙(*t* = 0) | [tab:tsmodels] | model name | details of the prior over the parameters | | --- | --- | | Uniform | no parameters | | RandProb | $\sigma=\bar{\sigma\_i}$, $N=N\_{\rm ts}$, *B* = 0 | | RandBkgProb | $\sigma=\bar{\sigma\_i}$, $N=N\_{\rm ts}$, $B=\frac{1}{\sqrt{2\pi}\sigma}\frac{b}{(1-b)}$ with *b* ∈ [0, 1] | | SinProb | 2*π*/100 < *ω* < 2*π*/10, 0 < *ϕ*0 < 2*π*,*B* = 0 | | SinBkgProb | 2*π*/100 < *ω* < 2*π*/10, 0 < *ϕ*0 < 2*π*, $B=\frac{b}{(1-b)}$ with *b* ∈ [0, 1] | | SigProb |  − 100 < *λ* < 0, $0<t\_0<0.8\tau\_{\rm max}$ | | SinSigProb | Priors from both SinProb and SigProb | | EncProb | Initial conditions listed in Table [tab:initialcondition] | | TideProb | Initial conditions listed in Table [tab:initialcondition] | | EncTideProb | *ξ* = 1, Initial conditions listed in Table [tab:initialcondition] | | EncSigProb | 0 < *η* < 4,  − 100 < *λ* < 0, $0<t\_0<0.8\tau\_{\rm max}$, initial conditions listed in Table [tab:initialcondition] | | TideSigProb | 0 < *η* < 4,  − 100 < *λ* < 0, $0<t\_0<0.8\tau\_{\rm max}$, initial conditions listed in Table [tab:initialcondition] | | EncTideSigProb | *ξ* = 1, 0 < *η* < 4,  − 100 < *λ* < 0, $0<t\_0<0.8\tau\_{\rm max}$, initial conditions listed in Table [tab:initialcondition] | [tab:prior] Modelling the history of the cometary impact rate ================================================= The terrestrial impact rate consists of two parts: the asteroid impact rate and the comet impact rate. We are specifically interested in only the latter in the present work. The background asteroid impact rate is proportional to the number of asteroids in the asteroid belt, which is depleted by the impact of asteroids on planets and their satellites. Over a long time scale (longer than 100 Myr), the background impact rate of asteroids would therefore decrease towards the present. But we could also see variations in this due to the disruption of large asteroids into an asteroid family, which would produce phases of enhanced impacting. In addition to the actual impact rate, the geological record of all impact craters (comet or asteroid) is contaminated by a selection bias: The older a crater is, the more likely it is to have been eroded and so the less likely it is to be discovered. This preservation bias would lead to an apparent increase in the impact rate towards to the present. We model the combined contribution of these two components (variable asteroid impact rate and the preservation bias) to the measured impact rate using a sigmoidal function, which produces a smoothly varying trend with time (model SigProb in Table [tab:tsmodels]). As with the other models, this model has parameters which we average over when computing the model evidence. The cometary impact rate is determined by the gravitational perturbations of the Oort cloud due to the Galactic tide and stellar encounters. Both are modulated by the solar motion around the Galactic center. Some studies suggest that their combined effect injects more comets into the inner solar system than does each acting alone. This so-called synergy effect is difficult to model, however, and will be ignored in our statistical approach. We simulate the effects of the tide and encounters separately (section [sec:simulation]). The resulting cometary flux from these is described by the models TideProb and EncProb respectively. The cometary flux when both processes operate, the model EncTideProb, is the sum of the fluxes from each (each being normalized prior to combination). To include the contributions from the asteroid impacts and the crater preservation bias we can add to this the SigProb model mentioned above. This gives the model EncTideSigProb. The parameters of all these models and their prior ranges are defined in Tables [tab:tsmodels] and [tab:prior]. Tide-induced cometary flux -------------------------- The time variation as the Sun orbits the Galaxy of the tide-induced cometary flux entering the loss cone is calculated using AMUSE-based method (section [sec:method]). We define *f**c* as the relative injected comet flux in a time bin with width Δ*t* $$f\_c=\frac{N\_{\rm inj}}{N\_{\rm tot}\Delta t}, \label{eqn:f\_tide}$$ where $N\_{\rm inj}$ is the number of injected comets in this bin and $N\_{\rm tot}$ is the total number of the comets. We could use *f**c* directly as the model prediction of the comet impact cratering rate, *P**u*(*t*∣*θ*, *M*), for the model TideProb (section [sec:tsmodel]) for that particular set of model parameters. However, as the calculation of the cometary orbits is rather time-consuming, we instead use a proxy for *f**c*, i.e. the vertical tidal force. The tidal force per unit mass experienced by a comet in the Oort Cloud is $$\mathbf{F}=-\frac{G M\_\odot \,\mathbf{\hat r}}{r^2} - G\_1 x \,\mathbf{\hat x} -G\_2 y \, \mathbf{\hat y} - G\_3 z \mathbf{\hat z} \label{eqn:tidal\_force}$$ where **r** is the Sun-comet vector of length *r*, *M*⊙ is the solar mass, and *G* is the gravitational constant.[6](#fn6) The three tidal coefficients, *G*1, *G*2, and *G*3 are defined as $$\begin{array}{l} \displaystyle G\_1=-(A-B)(3A+B)\\ \displaystyle G\_2=(A-B)^2\\ \displaystyle G\_3=4\pi G \rho(R,z)-2(B^2 -A^2) \end{array} \label{eqn:G123}$$ where *A* and *B* are the two Oort constants, and *ρ*(*R*, *z*) is the local mass density which can also be denoted as *ρ*(*t*) in the case of using *G*3(*t*) to build models. Because the two components *G*1 and *G*2 in the Galactic (*x*, *y*) plane are about ten times smaller than the vertical component (*G*3), it is the vertical tidal force that dominates the perturbation of the Oort Cloud. To find a relationship between *f**c* and *G*3, we simulate the orbits of one million comets generated from the DQT model back to 1Gyr in the past under the perturbation of the Galactic tide (stellar encounters are excluded). We use here the loss cone as the target zone when identifying the injected comets (LPCs). The two quantities are compared in Figure [fig:FcG3]. We see that the detrended comet flux (red line) agrees rather well with *G*3 (blue line) over the past 1Gyr, albeit with an imperfect detrending over the first 100Myr. We made a similar comparison for the DLDW model and also find a very close linear relation. Comparing *G*3 with the flux of the comets injected into the observable zone (i.e. *q* < 5AU) for both the DLDW and DQT models, we find that the result is consistent with what we have found for the loss cone. This confirms the relationship between the tide-induced comet flux and the vertical tidal force, which was also demonstrated by (their Figure 9) with a different approach. We are therefore justified in using *G*3 as a proxy for the tide-induced comet flux when we build models of cometary impact rate to compare to the crater time series. [fig:FcG3] Encounter-induced cometary flux ------------------------------- We define the encounter-induced flux entering the loss cone in the same way as *f**c* in equation [eqn:ftide]. We now investigate whether we can introduce a proxy for this too. We postulate the use of the quantity $$\gamma=\frac{M\_{\rm enc}}{v\_{\rm enc}r\_{\rm enc}} \label{eqn:gamma}$$ which is proportional to the change in velocity of the Sun (or equivalently to the mean change in velocity of the comets) as induced by an encounter according to the classical impulse approximation. This proxy has also been used in previous studies to approximate the LPC flux injected by stellar encounters (e.g. ). [fig:gammafcpdf] The injected flux is dominated by those encounters which can signifcantly change the velocity and thus the perihelion of the comets. Considering the important role of these encounters and the long time scale between them (about 100 Myr according to ), we divide the whole time span of simulated stellar encounters into several time bins and use the (normalized) maximum value of *γ* in each bin to approximate such comet showers. We define this binned proxy as $\gamma\_{\rm bin}$, and normalize it over the whole time scale. In Figure [fig:gammafcpdf], we compare this proxy to the normalized encounter-induced flux which is simulated with a time step of 0.001Myr using a sample of 105 comets generated from the DLDW model over 100Myr. We find that the main comet showers can be properly predicted by $\gamma\_{\rm bin}$, although it may miss small comet showers and predict some non-existent small showers. To assess the reliability of the shower prediction of the proxy, we evaluate the fraction of peaks in *f**c* which are correctly identified by $\gamma\_{\rm bin}$, and the fraction of peaks in $\gamma\_{\rm bin}$ which have a corresponding true peak in *f**c*. For the former case, a peak in *f**c* is counted as correctly predicted by the proxy when it occurs in the same time bin as a peak in $\gamma\_{\rm bin}$, or when the *f**c* peak is one bin earlier (because the shower can occur up to 1Myr after the closest approach of the encounter). We find that 23 out of 27 (0.85) flux peaks are correctly predicted by the proxy, while 23 out of 33 (0.70) peaks in $\gamma\_{\rm bin}$ have corresponding peaks in *f**c* (Figure [fig:gammafcpeaks]). This simple counting ignores the intensity of the comet showers. To remedy this use the amplitude of each $\gamma\_{\rm bin}$ peak as a weight, and count the weighted fractions. We find these to be 0.92 and 0.84 respectively. These results suggests that $\gamma\_{\rm bin}$ is a reasonably good proxy for statistical purposes. Hence we use $\gamma\_{\rm bin}$ as the measure of *P**u*(*t*∣*θ*, *M*) for the model EncProb. The linear relationship between *ρ*(*t*) and *G*3(*t*) (equations [eqn:PDFenc] and [eqn:G123]) indicates that the averaged EncProb model over sequences of $\gamma\_{\rm bin}$ is equivalent to the corresponding TideProb model for one solar orbit. We will see in section [sec:comparison] whether there is any significant difference between the evidences for these two models. [fig:gammafcpeaks] Combined tide–encounter cometary flux ------------------------------------- Having defined TideProb and EncProb, we can combine them to make EncTideProb. We can further combine this sum with SigProb (scaled by the parameter *η*) in order to include a smoothly varying component (see Table [tab:tsmodels]). Figure [fig:dynmodels] shows examples of the TideProb, EncTideProb and EncTideSigProb model predictions of the cometary flux for specific values of their parameters. In the upper panel, we see the TideProb model predicts an oscillating variation on at least two time scales. In the middle panel, we add EncProb to TideProb. The amplitude of the background is reduced due to the normalization effect – the encounters dominate – and the high peaks characterize encounter-induced comet showers. In the bottom panel, the SigProb model is added onto the EncTideProb model with *η* = 3. A large value of *λ* has been used in SigProb here, such that the additional trend is almost linear. Meanwhile, we also combine TideProb and SigProb to make TideSigProb. This of course does not show the randomly occurring peaks which are characteristic of the encounters model. [fig:dynmodels] In Section [sec:comparison], we will compare these models with other time series models defined in Section [sec:tsmodel] using Bayesian method. Model comparison ================ Now that we have a way to generate predictions of the comet flux from our dynamical time series models, we use the Bayesian method described in section [sec:bayesian] to calculate the evidences for the various time series models defined in section [sec:tsmodel] for different cratering data sets. Because the solar orbit is more sensitive to the Sun’s initial galactocentric distance (*R*) and angular velocity ($\dot{\phi}$) than to the other four initial conditions, we sample over only those two parameters when calculating the evidences and Bayes factors (ratio of two evidences) for the dynamical models. In order to make our model comparison complete, we will vary all initial conditions individually and simultaneously in section [sec:sensitivity]. To calculate the evidences we sample the parameter space of the dynamical models and other time series models with 104 and 105 points respectively. For the models of EncProb, EncTideProb, EncSigProb and EncTideSigProb, each point represents an entire simulation of the orbit of the Sun about the Galaxy and the corresponding simulation of the comet flux as a function of time. For the latter we use the proxies of *G*3(*t*) and *γ*(*t*) (i.e. the time-varying $\gamma\_{\rm bin}$) described in section [sec:tideflux] and section [sec:encounterflux] respectively. For each orbit of the Sun we just generate a single sequence *γ*(*t*) for the comet flux at random. (Because *γ*(*t*) is modulated by the vertical tide coefficient *G*3(*t*), an average over many sequences of *γ*(*t*) would be smooth and lack the spikes corresponding to comet showers which we see in the individual sequences.) The Bayes factors of various models relative to the uniform model are listed in Table [tab:craterBF]. We see that the SigProb, EncSigProb, TideSigProb and EncTideSigProb models are favoured by all the data sets, sometimes marginally, sometimes by a significant amount relative to certain models. In these favoured models, the negative trend (a decreasing cratering rate towards the past) is favoured much more than the positive trend. Such a negative trend can be picked out in Figure [fig:set3lim]. As the positive values are so clearly ruled out, we only use negative values of *λ* in all the trend models. This would be consistent with the crater preservation bias or the disruption of a large asteroid dominating over any recent increase in the asteroid impact rate (see section [sec:impact]). @ l|l l l l l l Model &basic150&ext150 &full150&basic250&ext250 &full250 RandProb &4.4 &9.3 &72 &3.0 &9.4 &4.7 × 102 RandBkgProb&1.8 &3.8 &31 &2.2 &5.2&1.8 × 102 SinProb &0.34 &0.62 &1.2 &0.43 &0.76 &1.5 SinBkgProb &1.0 &1.2 &1.6 &1.0 &1.2 &1.5 SigProb &15 &63 &9.1 × 103&2.0 × 102 &1.8 × 103&5.8 × 106 SinSigProb &10 &36 &1.6 × 102&1.0 × 102&6.0 × 102&2.6 × 105 EncProb1 &1.5 &3.9 &26 &1.7 &5.2&1.1 × 102 EncProb2 &1.7 &3.3 &77 &1.6 &8.5&2.7 × 102 TideProb1 &0.73 &0.87 &6.7 &0.81 &0.91&1.1 TideProb2 &0.79 &0.86 &10 &0.69 &0.76&0.94 EncTideProb1&1.0 &1.6 &18 &1.3 &2.1&10 EncTideProb2&1.2 &1.8 &25 &1.2 &2.1&24 EncSigProb1 &11 &41 &4.6 × 103&1.5 × 102&1.5 × 103&5.9 × 106 EncSigProb2 &12 &52 &8.7 × 103&1.7 × 102&1.5 × 103&6.6 × 106 TideSigProb1&11 & 38 &4.6 × 103&1.6 × 102&1.4 × 103&6.2 × 106 TideSigProb2&10 & 37 &4.5 × 103&1.6 × 102&1.4 × 103&6.1 × 106 EncTideSigProb1&11&40 &5.0 × 103&1.6 × 102&1.4 × 103&6.0 × 106 EncTideSigProb2&11&40 &4.7 × 103&1.6 × 102&1.5 × 103&6.1 × 106 [tab:craterBF] The SinSigProb model is not favoured more than SigProb, which means the periodic component is not necessary in explaining cratering time series. This is consistent with the conclusion in. Moreover, the pure periodic model is actually slightly less favoured than the uniform model for the “basic” and “ext” data sets. The pure random model (RandProb) is slightly more favoured than the random model with background (RandBkgProb). Both are more favoured than the uniform model, but with relatively low Bayes factors compared to the models with trend components. EncProb is slightly more favoured than the TideProb model. This suggests that the stochastic component of EncProb is slightly preferable to the smooth tidal component of TideProb in predicting the cratering data, although the difference is small. Combining them to make the EncTideProb models does not increase the evidence. The best overall model for explaining the data is SigProb, the pure trend model. Adding the tide or encounters or both does not increase the evidence by a significant amount for any of the data sets. This suggests that the solar motion has little influence on the total observed impact rate (i.e. comets plus asteroids and the preservation bias) either through the Galactic tide or through stellar encounters, at least not in the way in which we have modelled them here. This minor role of the solar motion in generating terrestrial craters weakens the hypothesis that the (semi-)periodic solar motion triggers mass extinctions on the Earth through modulating the impact rate, as some have suggested. We note that a low cometary impact rate relative to the asteroid impact rate has been found by other studies. The evidence is the prior-weighted average of the likelihood over the parameter space. It is therefore possible that some parts of the parameter space are much more favoured than others (i.e. there is a large variation of the likelihood), and that this is not seen due to the averaging. In that case changing the prior, e.g. the range of the parameter space, could change the evidence. (We investigate this systematically in section [sec:sensitivity]). In other words, the tide or encounter models may play a more (or less) significant role if we had good reason to narrow the parameter space. This would be appropriate if we had more accurate determinations of some of the model parameters, for example. We now investigate this by examining how the likelihood varies as a function of individual model parameters (but still be averaged over the other model parameters). Figure [fig:like1D] shows how the resulting likelihood varies as a function of the four parameters in the TideSigProb1 model. The most favoured parameters of the trend component are *λ* ≈  − 60Myr and *t*0 ≈ 100Myr. This trend component represents an increasing cratering rate towards the present over the past 100 Myr, either real or a result of preservation bias. In the upper left graph, the likelihood varies with *R* slightly and varies a lot in the region where *R* < 8kpc and *R* > 9kpc. In the lower right panel, the likelihood increase with *η*, which means that the trend component is important in increasing the likelihood for the TideSigProb model. [fig:like1D] To find the relationship between the likelihood for TideSigProb and the Sun’s initial galactocentric distance *R* and the scale parameter *η*, we fix the parameters of the trend component to *λ* =  − 60Myr and *t*0 = 100Myr. [fig:like2D] In Figure [fig:like2D] we see that the likelihood for TideSigProb increases monotonically with *η* over this range, but has a more complex dependence on *R*. The likelihood is highest at around *R* = 7.0 and *R* = 9.5kpc. In Figure [fig:tideprobR70] we compare the dates of the craters in the basic250 data set with the prediction of the cratering rate from TideProb with *R* = 7.0kpc. There are 7 craters within the first 30Myr compared to 16 and 13 craters in the intervals [30,60]Myr and [60,90]Myr respectively. This lack of craters in the first 30Myr can be better predicted by TideSigProb than by the SigProb model with a negative *λ*. While this is small number statistics, it may suggest that even though we have little evidence for the effect of the tide on cometary impacts in the overall cratering data, it may have had more of an effect in selected time periods. Other explanations are also possible, of course: we cannot say anything about models we have not actually tested, such as a more complex model for the asteroid impact rate variation. [fig:tideprobR70] Modelling the angular distribution of cometary perihelia ======================================================== In this section we predict the 2D angular distribution (latitude, longitude) of the perihelia of LPCs, the observed data for which are shown in Figure [fig:LPC1Abl]. To do this we need to identify from the simulations comets injected over an appropriate time scale. Figure [fig:gammafcpdf] shows that a comet shower usually has a duration of less than 10Myr, something which was also demonstrated by in detailed simulations of individual encounters. The Galactic tide varies little over such a time scale, because the vertical component of the tide, which dominates the total Galactic tide, varies over the period of the orbit of the Sun about the Galaxy, which is of order 200Myr. We may therefore assume that the solar apex is also more or less fixed during the past 10Myr, which is then an appropriate time scale for constructing our sample. We simulate cometary orbits over the past 10Myr as follows: (1) generate one million comets from the Oort cloud model (DLDW or DQT), as well as a set of stellar encounters (about 400 over 10Myr); (2) integrate the cometary orbits under the perturbations of only the Galactic tide (tide-only simulations with a time step of 0.1Myr), only stellar encounters (encounter-only simulations with a time step of 0.01Myr), and both of them (combined simulations with a time step of 0.01Myr) back to 10Myr ago; (3) identify the injected comets and their longitudes and latitudes. We then repeat steps (1)–(3) ten times (i.e. resample the Oort cloud and the set of stellar encounters) and combine the results in order to increase the number statistics. Latitude distribution --------------------- The upper panels of Figure [fig:dynbl] compare the Galactic latitudes of the LPC perihelia with our model predictions. In addition to showing the model predictions for the comets injected into the loss cone, we also show the predicted distributions for comets injected into the observable zone (*q* < 5AU). The former contains more comets, but the latter is of course closer to the observed sample. The small sample of comets within the observable zone have significant sample noise in their angular distributions, so we will only compare model predictions of the angular distribution of comets in the (larger) loss cone. ![image](zone5AU_dt001FALSE_varyencTRUE_sinb_compare_model_data_bl_backwardonly_1e+07_noerr) [fig:dynbl] The upper panels show that the injected LPCs in the pole and equatorial regions are depleted for both DLDW and DQT models, as also found by. According to theoretical prediction, the tide-induced flux should be proportional to ∣sin*b*cos*b*∣, in very good agreement with our tide-only simulations. The observed data broadly agree with this, the main difference being that for negative latitudes the peak is at around -0.4 rather than the model-predicted value of -0.7. This discrepancy was also noticed by, for example, and could be a consequence of the small size of the data set (note the errors bars in the figure). We see in the figure that the PDF of the latitude distribution predicted by the combined simulation always lies between those predicted by the single perturbation simulations. Although the combined simulation of comets injected into the loss cone predicts a flatter distribution than the tide-only simulation does, the stellar encounters cannot entirely smooth out the peaks in the latitude distribution. This is consistent with the results in. Thus the observed non-uniform latitude distribution does not indicate that the Galactic tide dominates at the present epoch, as was claimed by. We can attempt to make a more quantitative assessment of how well our models predict the observed distribution. Using model comparison techniques we can ask whether our dynamical models (the combined tide plus encounters model) explain the data better than a uniform distribution. We can do this crudely on the binned data/simulations shown in the figure via a likelihood test. The act of binning means that the model-predicted number of events per bin is determined by the Poisson distribution, thus defining our likelihood. However, such a test is dependent on the choice of binning, and we have tried out a range of bin widths and centres. While we find that the combined model for the DQT Oort cloud model is always more favoured than a uniform distribution, the significance is marginal. An alternative approach is to use the unbinned data and unbinned model predictions, and to apply a kernel density estimate (KDE) to each. This produces a non-parametric density function for the data and for the model, the difference between which we quantify using the (symmetrized) Kullback-Leibler divergence (KLD). A value of zero divergence means that the two distributions are identical; larger (positive/negative) values indicate larger differences. We find that our dynamical models give smaller KLD values than do the uniform model (i.e. the former predict the data better), for both the DLDW and DQT. Although the distributions formed by the KDE are sensitive to size of the kernel adopted,[7](#fn7) we find that the KLD values are quite insensitive to this, and consistently favour the dynamical models. This suggests that the dynamical models explain the data better than a flat distribution in latitude (although because calibrating KLD ratios into formal significances is not easy, we leave this as a qualitative statement). Longitude distribution ---------------------- The perihelia of LPCs are not distributed uniformly on the celestial sphere. It has been suggested that they lie preferentially on a great circle, as evidenced by two peaks at *l**c* ≃ 135and *l**c* ≃ 315seen in Figure [fig:LPC1Abl]. The comets on this great circle could be induced by stellar encounters with preferred directions, thereby producing the apparent anisotropy. In the lower two panels in Figure [fig:dynbl], we see that the model predictions do not produce any very large peaks, although one around *l**c* ≃ 135 is discernable. We also observe a peak around *l**c* = 0–60 which is proposed as a signal of the “Biermann comet shower”. In our model, this peak is probably the result of accumulated perturbations from several stellar encounters with preferred directions. The peak around *l**c* = 135 is more prominent in the model prediction for the comets injected into the observable zone (red points/line in the figure). This peak is generated primarily by one or more massive stellar encounters. Hence, stellar encounters play a more significant role in injecting comets into the observable zone than just into the loss cone. This is consistent with the “synergy effect” investigated by. As with the latitude distribution, we also measured the KLD for the model predictions (for the loss cone) and for a uniform distribution. The dynamical models predict the data little better than a uniform distribution. (The likelihood test gives a similar result.) One reason for this lack of support for our dynamical (combined) model could be the fact that we are averaging the predicted distribution from the encounters over ten different realizations of the stellar encounters. This will tend to smooth out individual peaks, which are probably produced by just a few encounters with massive stars.[8](#fn8) If we instead only used a single random realization of encounters, we are unlikely to reproduce exactly the showers which occurred. This is an inherent problem of modelling stellar encounters in a stochastic way. This does not affect our model prediction of the latitude distribution nearly as much, however, because its shape is dominated by the non-stochastic tide. In order to investigate this we again use our encounter model via the proxy *γ* (a proxy of comet flux) defined in equation [eqn:gamma], but now as a function of $b\_{\rm p}$ and $l\_{\rm p}$, the direction toward the perihelion of the stellar encounter. Moreover, we now impose a minimum threshold, $\gamma\_{\rm lim}$, on the proxy: The larger the value of $\gamma\_{\rm lim}$, the larger the encounter perturbation must be for it to be included in the model. Using the encounter model described in section [sec:encmod], we simulate 10 million encounters and calculate *γ*, $b\_{\rm p}$, and $l\_{\rm p}$ for each. The predicted direction of an LPC’s perihelion is opposite on the sky to the direction of the encounter perihelion. Thus we can calculate *b**c* and *l**c* accordingly and use *γ*(*b**c*,  *l**c*) to predict the PDF of *b**c* and *l**c*. Then we divide the range of the Galactic longitude into 12 bins and sum *γ* in each bin including only those encounters with $\gamma>\gamma\_{\rm lim}$. Normalizing this gives the angular PDF of the encounter-induced flux, as shown in Figure [fig:lgammalim]. For larger values of $\gamma\_{\rm lim}$ we observe a larger variation in the flux with longitude, as expected, because then fewer encounters contribute to the distribution. As we can see from equation [eqn:gamma], these are the more massive and/or slower stars. These encounters may induce a series of weak comet showers rather than a single strong comet shower. Because strong encounters are rare and extremely weak encounters cannot induce enough anisotropic LPCs, the spikes in the longitude distribution can be caused by at least two weak encounters rather than one strong or many extremely weak encounters. From Figure [fig:dynbl], we see that the tide cannot completely wash out the anisotropy in the longitude distribution induced by these encounters. [fig:lgammalim] Consistent with our results, found that the two spikes in the longitude distribution result from weak impulsive perturbations by analyzing the energy and angular momentum of dynamically new LPCs. Similar to the definition of weak comet showers in and, we define encounters with *γ* in the interval $\lbrack 1\times 10^{-7}, 5\times 10^{-6}\rbrack M\_{\sun} ~km~s^{-1}~AU^{-1}$ as weak encounters. We do not find strong peaks in the longitude distribution of *γ* for these encounters in Figure [fig:lgammalim], because we know that *γ* can underestimate the intensity of the shower (see Figure [fig:gammafcpdf]). Thus a small enhancement of the two peaks in Figure [fig:lgammalim] may correspond to a large enhancement of the peaks in the longitude distribution as predicted by our dynamical model in Figure [fig:dynbl]. Inspecting the catalogue of the frequencies of different types of stellar encounters in table 8 of, we see that there were at least eight encounters with masses equal to or larger than one solar mass encountering the solar system in the past 10Myr with perihelia less than 1pc. These encounters can move to a heliocentric distance much larger than 50pc over that time, which is the upper limit for their unbiased sample of stellar encounters with *M**V* < 5 – see Figure 13 of. We also point out that GL 710 will have a close approach with the solar system in about 1.4Myr at a perihelion longitude of around 135∘. According to studies, it will induce a weak comet shower which is expected to increase the cometary flux by 40%-50%. This supports the suggestion that the solar apex motion induces the non-uniform longitude distribution of the LPCs’ perihelia (see Figure [fig:vencdencbl] and [fig:dynbl]). In addition, Algol, a triple-star system with a total mass of 5.8$M\_{\sun}$, encountered the solar system with a closest distance of 2.5pc 6.9Myr ago. The Galactic longitude of Algol was also close to 135∘. Based on the above plausible scenario, we conclude that the peaks in the longitude distribution of LPC perihelia could arise from the perturbations of a few strong stellar encounters, the encounter directions of which depend on the solar apex motion. Considering the important role of the Galactic tide in generating a non-uniform latitude distribution, and the role of stellar encounters in generating a non-uniform longitude distribution, the synergy effect plays a role in maintaining – rather than smoothing out – the anisotropy in the observed LPCs. In other words, we can explain the anisotropy of the LPC perihelia based only on the solar apex motion and the Galactic tide, without needing to invoke the Jupiter-mass solar companion as proposed by. To date there is no observational evidence for such a companion. We note that a recent analysis of data from the WISE satellite has excluded the existence of a Jupiter-mass solar companion with a heliocentric distance less than 1pc. Sensitivity test ================ Spiral arms and Galactic bar ---------------------------- The spiral arms and Galactic bar are non-axisymmetric, time-varying components of the Galactic potential. These make only a small contribution to the tidal force acting on the Sun and Oort cloud (). However, if their contribution is always in the same direction, the effect of their perturbation could accumulate. This can occur when the Sun is near to the co-rotation resonance, when the rotation velocities of the disk and of the spiral pattern coincide. To test this hypothesis, we simulate the solar and cometary motion adopting various constant pattern speeds of the spiral arms and the bar with fixed Galactic density distributions (specified in Section [sec:potential]). We integrate the solar orbit in the Galactic potential both including and excluding the non-axisymmetric components. The initial conditions of the Sun and potential parameters are given in Table [tab:modelpar]. We find that the gravitational force from the bar is always much larger than that from the spiral arms. However, the difference between the pattern speed of the Galactic bar Ω*b* and solar angular velocity is much larger than the difference between the pattern speed of the spiral arms Ω*s* and solar angular velocity, which results in a much lower accumulated perturbation due to the bar. To see this effect, we integrate the solar orbit back to 5 Gyr in the past. The variations of galactocentric radius and vertical displacement of the Sun are shown in Figure [fig:solarorbit]. The arms have a stronger effect on the solar orbit than does the bar. The spiral arms tend to increase the galactocentric radius of the Sun as the integration proceeds (back in time), while the bar modulates the galactocentric radius by a comparatively small amount. Neither the bar nor the arms significantly affect the vertical displacement amplitude of the Sun. Here the combined perturbation from the potential including both the Galactic bar and spiral arms changes the solar motion the same way as the perturbation from the bar alone. [fig:solarorbit] We now simulate the tide-induced flux corresponding to these different potential models. The lower panel in Figure [fig:fluxnonsym] shows that the non-axisymmetric components do not alter the flux very much. Although the perturbation from the arms can change the solar orbit slightly, the resulting change in the perturbation of the Oort cloud is minimal. The changed tidal force may change some individual cometary orbits, but has little effect on the overall injected comet flux, because the effect of the tide depends also on the distribution of the comets, which is nearly isotropic. We also see that the arms modify the cometary flux more than the bar, consistent with its larger impact on the stellar density. (The limited number of injected comets contributes to the sharp peaks in the relative flux difference, Δ*f**c*/*f**c*, after 3Gyr.) [fig:fluxnonsym] We also investigated the sensitivity of the solar motion and comet flux to the pattern speed of the asymmetric components. We find that the closer the pattern speed of the arms is to the angular velocity of the Sun, the larger the perturbation from the arms is. (We can understand this in terms of a resonance.) Meanwhile, the perturbation from the bar is not sensitive to the bar’s pattern speed. Finally, we also find that the distribution of *b**c* and *l**c* of the comet flux does not change very much for different non-axisymmetric components of the Galactic potential. In summary, we find that the model predictions of the tide-induced cometary flux are generally insensitive to changes in the non-axisymmetric components of the Galactic potential, except when a resonance between the arms and the solar orbit occurs, which increases the variation in the cometary flux. Variations of the prior ----------------------- As discussed earlier, the evidence depends on the prior distribution adopted for the model parameters. As this prior frequently cannot be determined with any certainty, it is important to investigate the sensitivity of the evidence to changes in the prior.[9](#fn9) To complete the calculation of evidences for dynamical models, we also vary the other three initial conditions, $V\_R(t=0~{\rm Myr})$, $z(t=0~{\rm Myr})$, and $V\_z(t=0~{\rm Myr})$, in the EncTideSigProb models, which we previously kept constant. Together with SigProb, EncSigProb and TideSigProb, this was previously the best favoured model (Table [tab:craterBF]). We made numerous changes in the priors by altering their parameter ranges, and re-did all necessary Monte Carlo samplings, numerical simulations, and likelihood calculations and recomputed the Bayes factors. Some of our results are shown in Table [tab:priorchange]. | models | varied prior | Bayes factor for basic150 | Bayes factor for basic250 | | --- | --- | --- | --- | | | none | 4.4 | 3.0 | | | $\sigma=2\bar{\sigma\_i}$ | 2.0 | 4.8 | | | $\sigma=1/2\bar{\sigma\_i}$ | 2.2 | 4.7 | | | $N=2N\_{\rm ts}$ | 1.9 | 1.8 | | | $N=1/2N\_{\rm ts}$ | 2.4 | 7.6 | | | none | 1.8 | 2.2 | | | $\sigma=2\bar{\sigma\_i}$ | 1.6 | 3.7 | | | $\sigma=1/2\bar{\sigma\_i}$ | 1.8 | 2.6 | | | $N=2N\_{\rm ts}$ | 1.5 | 1.5 | | | $N=1/2N\_{\rm ts}$ | 2.4 | 2.9 | | | none | 0.34 | 0.43 | | | 10 < *T* < 100 | 0.12 | 0.14 | | | 2*π*/300 < *ω* < 2*π*/10 | 0.34 | 0.39 | | | 10 < *T* < 300 | 0.88 | 5.4 × 10− 2 | | | none | 1.0 | 1.0 | | | 10 < *T* < 100 | 0.90 | 0.88 | | | 2*π*/300 < *ω* < 2*π*/10 | 1.0 | 1.0 | | | 10 < *T* < 300 | 1.8 | 1.4 | | | none | 15 | 2.0 × 102 | | | $0<t\_0<1.2\tau\_{\rm max}$ | 13 | 1.4 × 102 | | |  − 100 < *λ* < 100 | 7.7 | 1.0 × 102 | | | 0 < *λ* < 100 | 1.3 × 10− 2 | 1.8 × 10− 3 | | | none | 6.4 | 80 | | | $0<t\_0<1.2\tau\_{\rm max}$ | 8.3 | 71 | | | 2*π*/300 < *ω* < 2*π*/10 | 9.9 | 97 | | TideSigProb3 | none | 9.0 | 1.7 × 102 | | TideSigProb4 | none | 9.1 | 1.7 × 102 | | TideSigProb5 | none | 9.0 | 1.7 × 102 | | TideSigProb6 | none | 11 | 1.6 × 102 | [tab:priorchange] The difference in Bayes factors for random models (RandProb, RandBkgProb) and periodic models (SinProb, SinBkgProb) with different prior distributions is less than five. The Bayes factors also remain less than ten so they remain no better explanations of the cratering data than the Uniform model. Thus our former conclusions about these models are not very sensitive to plausible changes in the priors. The TideSigProb models in which other parameters are varied have nearly the same evidences as the TideSigProb models listed in Table [tab:craterBF], so these too are insensitive to these changes in the priors. We also see that the SigProb model with positive *λ* has Bayes factors much lower than SigProb with negative *λ* for both the basic150 and basic250 data sets. The dynamical models have parameters of the Galaxy potential, Sun’s initial conditions and combination ratio parameters (*η* and *ξ*) which are listed in Table [tab:prior]). To keep things simple, we change the fixed parameters and the ranges of the varying parameters individually, and then calculate the evidence by sampling the prior defined by the changed parameter and other parameters shown in Table [tab:prior]. We calculate evidences for dynamical models with double or half the disk mass (*M**d*), halo mass (*M**h*), standard deviation of the initial value *R* (*σ**R*), and the range of the varying ratio between the EncTideProb (or TideProb) and SigProb models (*η*). In addition, previous studies suggest that the number of tide-induced LPCs is not identical to the encounter-induced LPCs, i.e. *ξ* ≠ 1. Thus we multiply the ratio between the tide-induced flux and the encounter-induced flux (*ξ*) by a factor of 4 or 1/4 for the sensitivity test. The resulting Bayes factors calculated for the basic150 data set are shown in Table [tab:dynpriorchange]. In each row we see little variation: the Bayes factors are relatively insensitive to these parameters. This means that either the parameter space of the EncTideSigProb1 model is evenly favoured by the basic150 data set, or the data are unable to discriminate between the compound dynamical models. l|\*11c models &none&2*M**d*&1/2*M**d*&2*M**h*&1/2*M**h*&2*σ**R*&1/2*σ**R*&*ξ* = 4&*ξ* = 1/4&0 < *η* < 8&0 < *η* < 2 &1.5&2.5&3.4&2.5&4.1&2.3&2.6&—&—&—&— &1.0&2.1&2.3&2.6&3.5&1.8&1.0&1.5&0.73&—&— &11&15&11&13&12&12&11&12&10&13&8.8 [tab:dynpriorchange] The model prediction of the anisotropic LPCs (see Figure [fig:dynbl]) depends to a greater or lesser extent on the Galactic potential, the Sun’s initial condition, the Oort Cloud model, and the model of encounters. We vary the model parameters in the same way as we did in Table [tab:dynpriorchange] and simulate ten million orbits of DLDW comets perturbed by the tide and ten samples of stellar encounters backwards to 10Myr ago. We find that the latitude distribution of the LPC perihelia is not sensitive to the change of the Galactic halo mass, the initial conditions of the Sun, or the direction of the solar apex. The amplitudes of the peaks in the latitude distribution are reduced if we decrease the mass of the Galactic disk or increase the stellar masses, which make the stellar encounters play a more important role in injecting comets into the loss cone. However, the overall profile of the peaks is not changed in the latitude distribution. The peaks in the longitude distribution shift slightly if we change the solar apex direction, the masses of the encounters, or the mass of the Galactic disk. The longitude distribution is not sensitive to changes in the other model parameters. Finally, we also tested the effect of changing the time step in the (combined) simulations. We simulated four million comets generated from the DLDW model perturbed by the tide and ten samples of stellar encounters backwards to 10Myr ago using a time step of 0.001Myr (as opposed to 0.01Myr). We find little change in either the latitude or longitude distributions. In addition, we see only 4% more comets injected when using this smaller time step. In summary, we find that the overall shape of the angular distribution of LPC perihelia in both longitude and latitude is not very sensitive to changes in the model parameters, in particular not to the initial distribution of Oort Cloud comets, not to the masses of Galactic halo and disk, and not to the initial conditions of the Sun. Discussion and Conclusion ========================= We have built dynamical models for the impact rate and angular distribution of comets induced by the Galactic tide and stellar encounters, as modulated by the solar motion around the Galaxy. Without using the approximate methods (the averaged Hamiltonian or impulse approximation), we numerically simulate the tide-induced flux and encounter-induced flux separately. We use these to validate the use of proxies for tide-induced flux, *G*3, and for the encounter-induced flux, $\gamma\_{\rm bin}$, in our models. Using the Bayesian evidence framework, we find that the pure trend model (SigProb) together with the dynamical models including a trend component (EncSigProb, TideSigProb and EncTideSigProb) for the cratering record are better favoured than other models we have tested. The trend component indicates a decreasing cratering rate (*λ* < 0) towards the past over the past 100 Myr. This suggests that either the asteroid impact rate or the preservation bias or both dominates the cratering record. Because the craters in our data sets are larger than 5km, the preservation bias may not be very significant over this time scale. The disruption of a single large asteroid could explain the trend in the data, as suggested by. In addition, our models, which include the solar apex motion, can properly predict the anisotropic perihelia of LPCs without assuming a massive body in the outer Oort Cloud or an anisotropic Oort Cloud. The EncTideSigProb, EncSigProb and TideSigProb models have Bayes factors of the same magnitude as the SigProb model, which indicates that either the tide and encounter components are unnecessary in modelling the temporal distribution of craters, or the data cannot effectively discriminate between the models. The stochastic component in the comet flux arising from encounters – as represented by the term *γ* – in the EncProb and EncTideProb models can slightly increase their evidence relative to the TideProb model. We have performed a sensitivity test by changing the prior PDF over the parameters in the dynamical models and other time series models, and find only small changes of the Bayes factors. The asymmetrical components in the Galactic potential could, in principle, increase the time-variation of the comet flux and hence impact rate predicted by the dynamical models, by inducing larger deviations of the Sun’s motion from a circular orbit and thus larger changes in the local stellar density. It turns out that the non-axisymmetric component has relatively little impact on the predicted cometary flux, with the exception of when the Sun is in co-rotation with the spiral arms. In that case the transient resonance can produce large variations in the flux. By including the solar apex motion, our dynamical models for anisotropic LPCs can predict reasonably well the distribution of Galactic latitude and longitude in a set of 102 dynamically new comets. In this model, the asymmetry in the distribution of Galactic latitudes caused by the Sun’s current location and its motion over the past 10Myr (comparable with the time scale of a comet shower). The two narrow peaks in the cometary perihelia at *l**c* = 135∘ and *l**c* = 315∘ could be caused by a handful of strong stellar encounters encountering the Sun with their encountering velocities in the direction of antapex in the HRF. On the other hand, we might also see something similar due to the periodic orbital motion about the Sun of a massive body (such as a brown dwarf) residing within the Oort cloud. However, our dynamical model, which takes into account the solar apex motion, can predict the longitudinal asymmetry without assuming the existence of such a body. In addition, the latitude distribution of LPC perihelia predicted by our simulations is consistent with the theoretical prediction, although one peak in the observed distribution is not properly predicted by our simulations. The synergy effect between the encounters and the tide cannot entirely eliminate the anisotropy induced by either the tide or the encounters. A non-uniform distribution in the perihelion direction of encounters was found by, although the signal is of questionable significance due to the incompleteness, i.e. faint stars which high velocities being too faint after 10Myr for Hipparcos to have observed. An anisotropy in the longitude of LPCs will not correspond to an anisotropy in longitudes of impacts on the Earth’s surface due to the rotation of the Earth and its orbit about the Sun. Some latitude variation may be expected, despite the long-term variation in inclination and obliquity of the Earth’s orbit. Disrupted comets generally retain their original orbital plane, so the resulting asteroids would tend to impact in the plane perpendicular to solar apex. Yet these are all higher order effects which would be difficult to convincingly detect and relate to the solar orbit in the analysis of terrestrial impact craters. Our modelling approach has, like any other, introduced various assumptions and approximations. We have ignored the synergy effect between the Galactic tide and stellar encounters highlighted by. We instead simply sum the tide-induced flux and the encounter-induced flux in the ratio *ξ* to 1. Because the cometary impact rate modulated by the solar motion around the Galactic center seems to be unnecessary in order to explain the data, the synergy effect, which is also influenced by the solar motion, may not change the result significantly. In addition, we use a decreasing impact rate towards the past (negative trend component) to model the combined effect of preservation bias and asteroid impact rate. In modelling the angular distribution of the LPC perihelia, the sample noise in the comets injected into the observable zone prevent us from building a more robust model, especially for the longitude distribution. This problem could be resolved by calculating perturbations based on a more accurately measured Galactic tide and using an actual catalogue of encountering stars in the solar neighborhood as opposed to our stochastic model of plausible encounters. In common with some other studies (e.g. ), we have ignored the perturbing effect on comets from the giant planets, although we acknowledge that the giants planets could influence the predicted LPC flux in particular. The planetary perturbations can also change the fraction of the inner Oort cloud comets among the injected LPCs, which in turn could change the angular distribution of the LPC perihelia. However, these perturbations should not have a significant effect over the relatively short time scale of 10Myr which we use in the simulations to generate the LPC distribution. As the main goal of our work is to study the variable effect of the solar orbit on the LPC flux and angular distribution, rather than to predict the absolute LPC flux precisely, our conclusions should not be overly affected by neglecting the giant planets in this way. In the future, the Gaia survey allow us to detect many more recent stellar encounters down to fainter magnitude limits and larger distances than Hipparcos, thereby allowing us to extend the time scale over which we can get a complete sample of recent stellar encounters. The Gaia magnitude limit of G=20 which is low enough to cover the high velocity stars in a time scale of 10 Myr. For example, a star with absolute magnitude of 10 and a velocity of 80km/s in the HRF would move 800 pc in 10 Myr and so have an apparent magnitude of 19.5. Thus Gaia will be able to observe all stars more massive than early M dwarfs (and thus essentially all relevant stars) encountering the solar system over the past 10 Myr. For more recent timescales Gaia can observe even less massive objects. Moreover, the Gaia catalogue of more massive stellar encounters (stars with absolute magnitudes larger than that of the Sun) may shed light on the study of terrestrial craters over since the beginning of the Phanerozoic era, some 550Myr ago. Gaia can further improve the measurement of Sun’s initial conditions and the potential of the Galaxy. After including planetary perturbations, this would make the simulation of cometary orbits accurate enough to trace the stellar encounter back to the time when it generated comet showers and corresponding terrestrial craters. Acknowledgements ================ We thank Carmen Martinez, Inti Pelupessy, and Arjen van Elteren for explaining, installing, and testing the AMUSE framework. We are indebted to Anthony Brown and Piotr A. Dybczyński for their useful suggestions. We are grateful to the referee, Nathan Kaib, for constructive comments which helped to improve the manuscript. This work has been carried out as part of the Gaia Research for European Astronomy Training (GREAT-ITN) network. The research leading to these results has received funding from the European Union Seventh Framework Programme ([FP7/2007-2013] under grant agreement No. 264895. [lastpage] --- 1. E-mail:[email protected][↩](#fnref1) 2. Note that our angular distribution is different from the one given in because the direction of perihelion is opposite to that of aphelion.[↩](#fnref2) 3. We assume that the mass densities of different stellar categories have the same spatial distribution.[↩](#fnref3) 4. We define a symbol without using the subscript *i* when the symbol is derived from a combination of symbols belonging and not belonging to certain stellar category.[↩](#fnref4) 5. http://www.amusecode.org[↩](#fnref5) 6. We don’t use this equation in simulating cometary orbits in the AMUSE framework.[↩](#fnref6) 7. This is analogous to the size of the histogram bins. A histogram is just a particular type of kernel.[↩](#fnref7) 8. Such massive stars (or stars with relatively high *γ*) move slowly relative to the Sun, and so would generate a relatively narrow peak in comet flux with *l**c*.[↩](#fnref8) 9. A more robust – but also more time-consuming – way of calculating the evidence is presented in.[↩](#fnref9)
arxiv_0000187
Lyapunov characterization of uniform exponential stability for nonlinear infinite-dimensional systems ===================================================================================================== In this paper we deal with infinite-dimensional nonlinear forward complete dynamical systems which are subject to external disturbances. We first extend the well-known Datko lemma to the framework of the considered class of systems. Thanks to this generalization, we provide characterizations of the uniform (with respect to disturbances) local, semi-global, and global exponential stability, through the existence of coercive and non-coercive Lyapunov functionals. The importance of the obtained results is underlined through some applications concerning 1) exponential stability of nonlinear retarded systems with piecewise constant delays, 2) exponential stability preservation under sampling for semilinear control switching systems, and 3) the link between input-to-state stability and exponential stability of semilinear switching systems. *Keywords:* Infinite-dimensional systems, Nonlinear systems, Switching systems, Converse Lyapunov theorems, Exponential stability. Introduction ============ Various works have been recently devoted to the characterization of the stability of infinite-dimensional systems in Banach spaces through *non-coercive* and *coercive* Lyapunov functionals (see, e.g., ). By non-coercive Lyapunov functional, we mean a positive definite functional decaying along the trajectories of the system which satisfies 0 < *V*(*x*) ≤ *α*(∥*x*∥),  ∀ *x* ∈ *X* ∖ {0},  where *X* is the ambient Banach space and *α* belongs to the class ${\cal{K}\_{\infty}}$ of continuous increasing bijections from R+ to R+. Such a function *V* would be coercive if there existed $\alpha\_0\in{\cal{K}\_{\infty}}$ such that *V*(*x*) ≥ *α*0(∥*x*∥) for every *x* ∈ *X*. In  it has been proved that the existence of a coercive Lyapunov functional *V* represents a necessary and sufficient condition for the global asymptotic stability for a general class of infinite-dimensional forward complete dynamical systems. On the other hand, the existence of a non-coercive Lyapunov functional does not guarantee global asymptotic stability and some additional regularity assumption on the dynamics is needed (see, e.g., ). Converse Lyapunov theorems can be helpful for many applications, such as stability analysis of interconnected systems  and for the characterization of input-to-state stability (see, e.g., ). Stability results based on non-coercive Lyapunov functionals may be more easily applied in practice, while the existence of a coercive Lyapunov functional may be exploited to infer additional information on a stable nonlinear system. Here, we consider the same class of abstract forward complete dynamical systems, subject to a shift-invariant set of disturbances, as in . The novelty of our approach is that we focus on exponential (instead of asymptotic) stability. For the rest of the paper, the word *uniform* will refer to uniformity with respect to disturbances. We provide theorems characterizing different types of uniform local, semi-global, and global exponential stability, through the existence of non-coercive and coercive Lyapunov functionals. Using a standard converse Lyapunov approach, we prove that uniform semi-global exponential stability is characterized by the existence of a 1-parameter family of Lyapunov functionals, each of them decaying uniformly on a bounded set, while the union of all such bounded sets is equal to the entire Banach space *X*. Concerning the non-coercive case, we first give a generalization of the Datko lemma . Recall that the latter characterizes the exponential behavior of a linear *C*0-semigroup in a Banach space in terms of a uniform estimate of the *L**p*-norm of the solutions. This result has been extended in  to the framework of nonlinear semigroups. Here, we generalize the Datko lemma to the considered class of infinite-dimensional forward complete dynamical systems. Thanks to such a generalization, we prove that the existence of a non-coercive Lyapunov functional is sufficient, under a uniform growth estimate on the solutions of the system, for the uniform exponential stability. The importance of the obtained results is underlined through some applications as described in the sequel. Retarded functional differential equations form an interesting class of infinite-dimensional systems that we cover by our approach. Converse Lyapunov theorems have been developed for systems described by retarded and neutral functional differential equations (see, e.g., ). Such results have been recently extended in  to switching linear retarded systems through coercive and non-coercive Lyapunov characterizations. After representing a nonlinear retarded functional differential equation as an abstract forward complete dynamical system, all the characterizations of uniform exponential stability provided in the first part of the paper can be applied to this particular class of infinite-dimensional systems. In particular, we characterize the uniform global exponential stability of a retarded functional differential equation in terms of the existence of a non-coercive Lyapunov functional. Another interesting problem when dealing with a continuous-time model is the practical implementation of a designed feedback control. Indeed, in practice, due to numerical and technological limitations (sensors, actuators, and digital interfaces), a continuous measurement of the output and a continuous implementation of a feedback control are impossible. This means that the implemented input is, for almost every time, different from the designed controller. Several methods have been developed in the literature of ordinary differential equations for sampled-data observer design under discrete-time measurements (see, e.g., ), and for sampled-data control design guaranteeing a globally stable closed-loop system (see, e.g., ). Apart from time-delays systems (see, e.g.,  for sampled-data control and for sampled-data observer design), few results exist for infinite-dimensional systems. The difficulties come from the fact that the developed methods do not directly apply to the infinite-dimensional case, for which even the well-posedness of sampled-data control dynamics is not obvious (see, e.g., for more details). Some interesting results have been obtained for infinite-dimensional linear systems . In the nonlinear case no standard methods have been developed and the problem is treated case by case . Here, we focus on the particular problem of feedback stabilization under sampled output measurements of an abstract semilinear infinite-dimensional system. In particular, we consider the dynamics $$\label{semilinear intro} \dot x(t)=Ax(t)+f\_{\s(t)}(x(t),u(t)), \quad t\geq 0,$$ where *x*(*t*) ∈ *X*, *U* is a Banach space, *u* ∈ *U* is the input, *A* is the infinitesimal generator of a *C*0-semigroup of bounded linear operators (*T**t*)*t* ≥ 0 on *X*, $\s:\R\_+\to \mathcal{Q}$ is a piecewise constant switching function, and *f**q* : *X* × *U* → *X* is a Lipschitz continuous nonlinear operator, uniformly with respect to *q* ∈ Q. Assume that only discrete output measurements are available *y*(*t*) = *x*(*t**k*),  ∀ *t* ∈ [*t**k*, *t**k* + 1),  ∀ *k* ≥ 0,  where (*t**k*)*k* ≥ 0 denotes the increasing sequence of sampling times. It is well known that, in general, no feedback of the type *u*(*t*) = *K*(*y*(*t*)) stabilizes system . Moreover, suppose that system  in closed-loop with *u*(*t*) = *K*(*x*(*t*)),  ∀ *t* ≥ 0,  where *K* : *X* → *U* is a globally Lipschitz function satisfying *K*(0) = 0, is uniformly semi-globally exponentially stable. Using our converse Lyapunov theorem, we show that if the maximal sampling period is small enough then, under some additional conditions, system  in closed-loop with the predictor-based sampled-data control *u*(*t*) = *T**t* − *t**k**y*(*t**k*),  ∀ *t* ∈ [*t**k*, *t**k* + 1),  ∀ *k* ≥ 0,  is uniformly locally exponentially stable in each ball around the origin. Furthermore, if the closed loop system - is uniformly globally exponentially stable, then the same property holds for the closed loop system -, under sufficiently small sampling period. We give an example of a wave equation (see, e.g., ) showing the applicability of our result. In recent years, the problem of characterizing input-to-state stability (ISS) for infinite-dimensional systems has attracted a particular attention. Roughly speaking, the ISS property, introduced in  for ordinary differential equations, means that the trajectories of a perturbed system eventually approach a neighborhood of the origin whose size is proportional to the magnitude of the perturbation. This concept has been widely studied in the framework of complex systems such as switching systems (see, e.g., and references therein), time-delay systems (see, e.g., and references therein), and abstract infinite-dimensional systems (see, e.g., ). For example, in  a converse Lyapunov theorem characterizing the input-to-state stability of a locally Lipschitz dynamics through the existence of a locally Lipschitz continuous coercive ISS Lyapunov functional is given. Recently in  it has been shown that, under regularity assumptions on the dynamics, the existence of non-coercive Lyapunov functionals implies input-to-state stability. Here, we provide a result of ISS type, proving that the input-to-state map has finite gain, under the assumption that the unforced system corresponding to  (i.e., with *u* ≡ 0) is uniformly globally exponentially stable. The paper is organized as follows. Section [sec:problem state] presents the problem statement with useful notations and definitions. In Section [main results sec] we state our main results, namely three Datko-type theorems for uniform local, semi-global, and global exponential stability, together with direct and converse Lyapunov theorems. In Section [sec: discussion] we compare the proposed Lyapunov theorems with the current state of art. The applications are given in Section [sec: applications]. In Section [sec-example] we consider an example of a damped wave equation. The proofs are postponed to Section [s:proofs]. Notations --------- By (*X*, ∥ ⋅ ∥) we denote a Banach space with norm ∥ ⋅ ∥ and by *B**X*(*x*, *r*) the closed ball in *X* of center *x* ∈ *X* and radius *r*. By $\R$ we denote the set of real numbers and by ∣ ⋅ ∣ the Euclidean norm of a real vector. We use $\R\_+$ and $\R\_+^{\star}$ to denote the sets of non-negative and positive real numbers respectively. A function *α* : R+ → R+ is said to be of class K if it is continuous, increasing, and satisfies *α*(0) = 0; it is said to be of class K∞ if it is of class K and unbounded. A continuous function *κ* : R+ × R+ → R+ is said to be of class KL if *κ*( ⋅ , *t*) is of class K for each *t* ≥ 0 and, for each *s* ≥ 0, *κ*(*s*,  ⋅ ) is nonincreasing and converges to zero as *t* tends to  + ∞. Problem statement ================= In this paper we consider a forward complete dynamical system evolving in a Banach space *X*. Let us recall the following definition, proposed in . [FC] Let $\Q$ be a nonempty set. Denote by $\S$ a set of functions $\s:\R\_+\to \Q$ satisfying the following conditions: * $\S$ is closed by time-shift, i.e., for all $\s\in \S$ and all *τ* ≥ 0, the *τ*-shifted function $\mathbb{T}\_{\tau}\s:s\mapsto\s(\tau+s)$ belongs to $\S$; * $\S$ is closed by concatenation, i.e., for all $\s\_1,\s\_2\in \S$ and all *τ* > 0 the function $\s$ defined by $\s\equiv\s\_1$ over [0, *τ*] and by $\s(\tau+t)=\s\_2(t)$ for all *t* > 0, belongs to $\S$. Let $\phi: \R\_+\times X\times \S\to X$ be a map. The triple $\Sigma=(X,\S,\phi)$ is said to be a *forward complete dynamical system* if the following properties hold: * $\forall~(x,\s)\in X \times \S$, it holds that $\phi(0,x,\s)=x$; * $\forall~(x,\s)\in X \times \S$, ∀ *t* ≥ 0, and $\forall~\tilde\s\in\S$ such $\tilde\s=\s$ over [0, *t*], it holds that $\phi(t,x,\tilde\s)=\phi(t,x,\s)$; * $\forall~(x,\s)\in X \times \S$, the map $t\mapsto \phi(t,x,\s)$ is continuous; * ∀*t*, *τ* ≥ 0, $\forall~(x,\s)\in X \times \S$, it holds that $\phi(\tau,\phi(t,x,\s),\mathbb{T}\_{t}\s)=\phi(t+\tau,x,\s)$. We will refer to *ϕ* as the *transition map* of Σ. Observe that if Σ is a forward complete dynamical system and $\S$ contains a constant function $\s$ then $(\phi(t,\cdot,\s))\_{t\geq 0}$ is a strongly continuous nonlinear semigroup, whose definition is recalled below. [Semigroup] Let *T**t* : *X* → *X*, *t* ≥ 0, be a family of nonlinear maps. We say that (*T**t*)*t* ≥ 0 is a strongly continuous nonlinear semigroup if the following properties hold: * ∀ *x* ∈ *X*, *T*0*x* = *x*; * ∀ *t*1, *t*2 ≥ 0, *T**t*1*T**t*2*x* = *T**t*1 + *t*2*x*; * ∀ *x* ∈ *X*, the map *t* ↦ *T**t**x* is continuous. An example of forward complete dynamical system is given next. [Piecewise constant switching system][PC] We denote by PC the set of piecewise constant $\s:\R\_+\to \mathcal{Q}$, and we consider here the case $\S=\mathrm{PC}$. Let $\s\in \mathrm{PC}$ be constantly equal to $\s\_k$ over [*t**k*, *t**k* + 1), with 0 = *t*0 < *t*1 < ⋯ < *t**k* < *t* < *t**k* + 1, for *k* ≥ 0. With each $\s\_k$ we associate the strongly continuous nonlinear semigroup $(T\_{\s\_k}(t))\_{t\geq 0}:=(\phi(t,\cdot,\s\_k))\_{t\geq 0}$. By concatenating the flows $(T\_{\s\_k}(t))\_{t\geq 0}$, one can associate with $\s$ the family of nonlinear evolution operators $$\label{sigalotti1} T\_{\s}(t):= T\_{\s\_k}(t-t\_{k})T\_{\s\_{k-1}}(t\_{k}-t\_{k-1})\cdots T\_{\s\_1}(t\_{1}),$$ *t* ∈ [*t**k*, *t**k* + 1). By consequence, system Σ can be identified with the piecewise constant switching system $$\label{sigalotti} x(t)=T\_{\s}(t)x\_0, \ x\_{0}\in X, \ \s\in \mathrm{PC}.$$ Thanks to the representation given by , this paper extends to the nonlinear case some of the results obtained in  on the characterization of the exponential stability of switching linear systems in Banach spaces. Various notions of uniform (with respect to the functions in $\S$) exponential stability of system Σ are given by the following definition. [0-GES def] Consider the forward complete dynamical system $\Sigma=(X,\S,\phi)$. 1. We say that Σ is *uniformly globally exponentially stable at the origin* (UGES, for short) if there exist *M* > 0 and *λ* > 0 such that the transition map *ϕ* satisfies the inequality $$\|\phi(t, x, \s)\|\leq M e^{-\lambda t} \|x\|, \ \forall~t\geq 0,\; \forall~x\in X,\; \forall~\s\in\S.$$ 2. We say that Σ is *uniformly locally exponentially stable at the origin* (ULES, for short) if there exist *R* > 0, *M* > 0, and *λ* > 0 such that the transition map *ϕ* satisfies the inequality, for every *t* ≥ 0, *x* ∈ *B**X*(0, *R*), and $\s\in\S$, $$\label{les-def} \|\phi(t, x, \s)\|\leq M e^{-\lambda t} \|x\|.$$ If inequality  holds true for a given *R* > 0 then we say that Σ is *uniformly exponentially stable at the origin in *B**X*(0, *R*)* (UES in *B**X*(0, *R*), for short). 3. We say that Σ is *uniformly semi-globally exponentially stable at the origin* (USGES, for short) if, for every *r* > 0 there exist *M*(*r*) > 0 and *λ*(*r*) > 0 such that the transition map *ϕ* satisfies the inequality $$\label{sges-def} \|\phi(t, x, \s)\|\leq M(r) e^{-\lambda(r) t} \|x\|,$$ for every *t* ≥ 0, *x* ∈ *B**X*(0, *r*), and $\s\in\S$. [remark-USGES] Up to modifying *r* ↦ *M*(*r*), one can assume without loss of generality in the definition of USGES that *r* ↦ *λ*(*r*) can be taken constant and *r* ↦ *M*(*r*) nondecreasing. Indeed, let us fix *M* :  = *M*(1) and *λ* :  = *λ*(1). One has, by definition of (*M*, *λ*), $$\|\phi(t, x, \s)\|\leq Me^{-\lambda t} \|x\|,$$ for every *t* ≥ 0, *x* ∈ *B**X*(0, 1), and $\s\in\S$. For *R* > 1, by using  there exists *t**R* such that, for every *x* ∈ *B**X*(0, *R*) with ∥*x*∥ ≥ 1, one has for *t* ≥ *t**R*, $$M(R)e^{-\lambda(R)t\_{R}}R=1,~ \|\phi(t,x,\s)\|\leq\frac{\|x\|}{R}\le \min\{1,\|x\|\}.$$ This implies that, for every *t* ≥ *t**R*, *x* ∈ *B**X*(0, *R*), and $\s\in\S$, $$\|\phi(t, x, \s)\|\leq Me^{-\lambda (t-t\_{R})} \|\phi(t\_{R},x,\s)\|\leq Me^{-\lambda (t-t\_{R})}\|x\|.$$ By setting $$\widehat M(R)=\begin{cases} M &R\leq 1,\\ \max\{M(R),M(R)e^{|\lambda-\lambda(R)|t\_{R}},Me^{\lambda t\_R}\}&R>1, \end{cases}$$ one has that $\|\phi(t, x, \s)\|\leq \widehat M(r) e^{-\lambda t} \|x\|, $ for every *t* ≥ 0, *x* ∈ *B**X*(0, *r*), and $\s\in\S$. Finally, we may replace $r\mapsto \widehat M(r)$ with the nondecreasing function $r\mapsto \inf\_{\rho\geq r}\widehat M(\rho)$. The property of semi-global exponential stability introduced in Definition [0-GES def] turns out to be satisfied by some interesting class of infinite-dimensional systems, as described in the following two examples. [example kdv without switch-old] For *L* > 0, let Ω = (0, *L*) and consider the controlled Korteweg–de Vries (KdV) equation $$\label{kdv} \begin{cases} {\eta}\_{t}+{\eta}\_{x}+{\eta}\_{xxx}+ \eta{\eta}\_{x}+\rho(t,x,\eta)=0 & (x,t)\in \Omega\times\R\_+,\\ \eta(t,0)=\eta(t,L)={\eta}\_{x}(t,L)=0 & t\in \R\_+,\\ \eta(0,x)={\eta}\_0(x) & x\in \Omega, \end{cases}$$ where $\rho:\R\_+\times\Omega\times \R\to\R$ is a sufficiently regular nonlinear function. The case *ρ* ≡ 0 is a well known model describing waves on shallow water surfaces . The controllability and stabilizability properties of  have been extensively studied in the literature (see, e.g., ). In the case where the feedback control is of the form *ρ*(*t*, *x*, *η*) = *a*(*x*)*η*, for some non-negative function *a*( ⋅ ) having nonempty support in Ω, system  is globally exponentially stable in *X* = *L*2(0, *L*). In  the authors prove that, when a saturation is introduced in the feedback control *ρ*, the system is only semi-globally exponentially stable in *X*. [example boundary wave without switch] Consider the 1D wave equation with boundary damping $$\label{boundary-wave without switch} \begin{cases} {\psi}\_{tt}-\Delta \psi=0 &(x,t)\in(0,1)\times\R\_+,\\ \psi(0,t)=0&t\in\R\_+,\\ \psi\_x(1,t)=-\sigma(t,\psi\_t(1,t)) & t\in\R\_+,\\ \psi(0)=\psi\_0, \psi\_t(0)=\psi\_1 & x\in(0,1), \end{cases}$$ where $\sigma:\R\_+\times \R\to\R$ is continuous. This system is of special interest when the damping term *σ*(*t*, *ψ**t*(1, *t*)) represents a nonlinear feedback control. Once again, different types of stability can be established (for global and semi-global exponential stability, see e.g., ). In particular, if *σ* is a nonlinearity of saturation type, only semi-global exponential stability holds true in *X* = {(*ψ*0, *ψ*1) ∣ *ψ*0(0) = 0,  *ψ*0ʹ and *ψ*1 ∈ *L*∞(0, 1)}. The systems considered in Examples [example kdv without switch-old] and [example boundary wave without switch] do not depend on *σ* ∈ S as in Definition [FC]. In Section [sec-example] we introduce and study variants of such systems in a switching framework. Main results ============ Datko-type theorems ------------------- In this section we give Datko-type theorems  for an abstract forward complete dynamical system Σ. The uniform (local, semi-global, and global) exponential stability is characterized in terms of the *L**p*-norm of the trajectories of the system. This provides a generalization of the results obtained in  for nonlinear semigroups. The following theorem characterizes the local exponential stability of system Σ. [ULES] Consider a forward complete dynamical system $\Sigma=(X,\S,\phi)$. Let *t*1, *G*0 > 0, and *β* be a function of class K∞ such that $\limsup\_{r\downarrow 0} \frac{\beta(r)}{r}$ is finite and $$\label{exp-bounded2} \|\phi(t,x,\s)\| \leq G\_0\beta(\|x\|), \quad \forall~t\in[0,t\_1],\; \forall~x\in X,\; \forall~\s\in\S.$$ The following statements are equivalent i) System Σ is ULES; ii) for every *p* > 0 there exist a nondecreasing function *k* : R+ → R+ and *R* > 0 such that $$\label{exp-lp3} \int\_{0}^{+\infty}{\|\phi(t,x,\s)\|}^{p} dt \leq k(\|x\|)^p\|x\|^{p},$$ for every *x* ∈ *B**X*(0, *R*) and $\s\in\S$; iii) there exist *p* > 0, *k* : R+ → R+ nondecreasing, and *R* > 0 such that  holds true. Observe that hypothesis  in Theorem [ULES] is global over *X*. Indeed, we do not know if the stability at 0 may be deduced from inequality  if one restricts  to a ball *B**X*(0, *r*). The following theorem characterizes the semi-global exponential stability of system Σ. [USGES] Consider a forward complete dynamical system $\Sigma=(X,\S,\phi)$. Let *t*1, *G*0 > 0, and *β* be a function of class K∞ such that $\limsup\_{r\downarrow 0} \frac{\beta(r)}{r}$ is finite and $$\|\phi(t,x,\s)\| \leq G\_0\beta(\|x\|), \quad \forall~t\in[0,t\_1],\; \forall~x\in X,\; \forall~\s\in\S.$$ The following statements are equivalent i) System Σ is USGES; ii) for every *p* > 0 there exists a nondecreasing function *k* : R+ → R+ such that, for every *x* ∈ *X* and $\s\in\S$ $$\label{lp0} \int\_{0}^{+\infty}{\|\phi(t,x,\s)\|}^{p} dt \leq k(\|x\|)^p\|x\|^{p};$$ iii) there exist *p* > 0 and *k* : R+ → R+ nondecreasing such that  holds true. The particular case of uniformly globally exponentially stable systems is considered in the following theorem. [datko] Consider a forward complete dynamical system $\Sigma=(X,\S,\phi)$. Let *t*1 > 0 and *G*0 > 0 be such that $$\label{exp-bounded} \|\phi(t,x,\s)\| \leq G\_0\|x\|, \ \forall~t\in [0,t\_1],\; \forall~x\in X,\; \forall~\s\in\S.$$ The following statements are equivalent i) System Σ is UGES; ii) for every *p* > 0 there exists *k* > 0 such that $$\label{unasoladai} \int\_{0}^{+\infty}{\|\phi(t,x,\s)\|}^{p} dt \leq k^{p}\|x\|^{p},$$ for every *x* ∈ *X* and $\s\in\S$; iii) there exist *p*, *k* > 0 such that holds true. By the shift-invariance properties given by items a) and iv) of Definition [FC], it is easy to see that  implies $$\label{rem-paolo} \|\phi(t,x,\s)\| \leq Me^{\lambda t}\|x\|,\ \forall~t\geq 0,\; \forall~x\in X,\; \forall~\s\in\S,$$ where *M* = *G*0 and $\lambda = \max\{0,\log (\frac{G\_0}{t\_1})\}$. Notice that inequality  is a nontrivial requirement on system Σ. Even in the linear case, and even if  is satisfied for each constant $\s\equiv \s\_c$, uniformly with respect to $\s\_c$, it does not follow that a similar exponential bound holds for the corresponding system Σ (see ). Lyapunov characterization of exponential stability -------------------------------------------------- In this section we characterize the exponential stability of a forward complete dynamical system through the existence of a Lyapunov functional. First, let us recall the definition of Dini derivative of a functional *V* : *X* → R+. [Dini derivative] Consider a forward complete dynamical system $\Sigma=(X,\S,\phi)$. The upper and lower Dini derivatives $\overline{D}\_{\s}V:X\to \R\cup\{\pm \infty\}$ and $\underline{D}\_{\s}V:X\to \R\cup\{\pm \infty\}$ of a functional *V* : *X* → R+ are defined, respectively, as $$\overline{D}\_{\s}V(x)=\limsup\_{h\downarrow 0}\frac{1}{h}\left(V(\phi(h,x,\s))-V(x)\right),$$ and $$\underline{D}\_{\s}V(x)=\liminf\_{h\downarrow 0}\frac{1}{h}\left(V(\phi(h,x,\s))-V(x)\right),$$ where *x* ∈ *X* and $\s\in\S$. [rem-dini-constant] When $\S$ contains PC, we can associate with every $q\in \Q$ the upper and lower Dini derivatives $\overline{D}\_{q}V$ and $\underline{D}\_{q}V$ corresponding to $\s\equiv q$. Notice that for every $\s\in \mathrm{PC}$ and sufficiently small *h* > 0, we have $\s\_{|\_{(0,h)}}\equiv q$, for some *q* ∈ Q. By consequence, we have $\overline{D}\_{\s}V(\varphi)=\overline{D}\_{q}V(\varphi)$ and $\underline{D}\_{\s}V(\varphi)=\underline{D}\_{q}V(\varphi)$. The regularity of a Lyapunov functional associated with an exponentially stable forward complete dynamical system $\Sigma=(X,\S,\phi)$ is recovered, in our results, from the regularity of the transition map *ϕ*. The $\S$-uniform continuity of the transition map *ϕ* with respect to the initial condition is defined as follows. We say that the transition map *ϕ* of $\Sigma=(X,\S,\phi)$ is $\S$-uniformly continuous if, for any $\bar t>0$, *x* ∈ *X*, and *ɛ* > 0, there exists *η* > 0 such that $$\|\phi(t,x,\s)-\phi(t,y,\s)\|\leq \varepsilon,$$ for every $t\in [0,\bar t]$, *y* ∈ *B**X*(*x*, *η*), and $\s\in \S$. Similarly, the notion of $\S$-uniform Lipschitz continuity of the transition map is given by the following definition. [def:philip] We say that the transition map *ϕ* of $\Sigma=(X,\S,\phi)$ is $\S$-uniformly Lipschitz continuous (respectively, $\S$-uniformly Lipschitz continuous on bounded sets) if, for any $\bar t>0$ (respectively, $\bar t>0$ and *R* > 0), there exists $l(\bar t)>0$ (respectively, $l(\bar t,R)>0$) such that $$\|\phi(t,x,\s)-\phi(t,y,\s)\|\leq l(\bar t)\|x-y\|,$$ for every $t\in [0,\bar t]$, *x*, *y* ∈ *X*, and $\s\in \S$ (respectively, $$\|\phi(t,x,\s)-\phi(t,y,\s)\|\leq l(\bar t,R)\|x-y\|,$$ for every $t\in [0,\bar t]$, *x*, *y* ∈ *B**X*(0, *R*), and $\s\in \S$). The following theorem shows that the existence of a non-coercive Lyapunov functional is sufficient for proving the uniform exponential stability of the forward complete dynamical system Σ, provided that inequality  holds true. [ULES-SG] Consider a forward complete dynamical system $\Sigma=(X,\S,\phi)$. Let *t*1, *G*0 > 0, and *β* be a function of class K∞ such that $\limsup\_{r\downarrow 0} \frac{\beta(r)}{r}$ is finite and $$\label{exp-bounded-bis} \|\phi(t,x,\s)\| \leq G\_0\beta(\|x\|), \quad \forall~t\in[0,t\_1],\; \forall~x\in X,\; \forall~\s\in\S.$$ Then, * if there exist *R* > 0, $V: B\_{X}(0,R)\to \R\_+$, and *p*, *c* > 0 such that, for every *x* ∈ *B**X*(0, *R*) and $\s\in \S$, $$\begin{aligned} V(x)&\leq c\|x\|^{p}, \label{noncoe1-SG}\\ \underline{D}\_{\s}V(x)&\leq -\|x\|^{p},\label{dini1-SG}\end{aligned}$$ and $V(\phi(\cdot,x,\s))$ is continuous from the left at every *t* > 0 such that $\phi(t,x,\s)\in B\_{X}(0,R)$, then system Σ is ULES; * if i) holds true for every *R* > 0 with *V* = *V**R*, *c* = *c**R* and *p* = *p**R* and $$\label{Pettersen} \limsup\_{R\to +\infty}\beta^{-1}\left(\frac{R}{G\_0}\right)\min\left\{1,\left(\frac{t\_1}{c\_R}\right)^{\frac{1}{p\_R}}\right\} =+\infty,$$ then system Σ is USGES; * if *β* in  is equal to the identity function and there exist *p*, *c* > 0 and a functional $V: X\to \R\_+$ such that, for every *x* ∈ *X* and $\s\in \S$, the map $t\mapsto V(\phi(t,x,\s))$ is continuous from the left, and *V* satisfies inequalities - in *X*, then system Σ is UGES. The following theorem states that the existence of a coercive Lyapunov functional is necessary for the uniform exponential stability of a forward complete dynamical system. [coercive1] Consider a forward complete dynamical system $\Sigma=(X,\S,\phi)$ and assume that the transition map *ϕ* is $\S$-uniformly continuous. If Σ is USGES then, for every *r* > 0 there exist $\underline{c}\_r,\overline{c}\_r>0$ and a continuous functional $V\_r: X\to \R\_+$, such that $$\begin{aligned} \label{norm comparison2} \underline{c}\_r\|x\|\leq V\_r(x)\leq \overline{c}\_r\|x\|,& \quad \forall~x\in B\_X(0,r), \\ \label{dini9} \overline{D}\_{\s}V\_r(x)\leq -\|x\|,& \quad \forall~x\in B\_X(0,r),\; \forall~\s\in\S,\\ V\_r=V\_R\mbox{ on }X, &\quad \forall R>0\mbox{ such that }\lambda(r)=\lambda(R)\nonumber\\ &\quad\mbox{ and }M(r)=M(R),\label{eq:indis}\end{aligned}$$ where *λ*( ⋅ ), *M*( ⋅ ) are as in . Moreover, in the case where the transition map *ϕ* is $\S$-uniformly Lipschitz continuous (respectively, $\S$-uniformly Lipschitz continuous on bounded sets), *V**r* can be taken Lipschitz continuous (respectively, Lipschitz continuous on bounded sets). To conclude this section, we state the following corollary which characterizes the uniform global exponential stability of a forward complete dynamical system, completing Item iii) of Theorem [ULES-SG]. [main] Consider a forward complete dynamical system $\Sigma=(X,\S,\phi)$. Assume that the transition map *ϕ* is $\S$-uniformly continuous. If there exist *t*1 > 0 and *G*0 > 0 such that $$\label{exp-bound cor} \|\phi(t,x,\s)\| \leq G\_0\|x\|, \quad \forall~t\in[0,t\_1],\; \forall~x\in X,\; \forall~\s\in\S,$$ then the following statements are equivalent: * System Σ is UGES; * there exists a continuous functional $V: X\to \R\_+$ and positive reals *p*, $\underline c$, and $\overline c$ such that $$\label{norm comparison3} \underline c\|x\|^{p}\leq V(x)\leq \overline c\|x\|^{p}, \quad \forall~x\in X,$$ and $$\label{dini10} \overline{D}\_{\s}V(x)\leq -\|x\|^{p}, \quad \forall~x\in X,\; \forall~\s\in\S;$$ * there exist a functional $V: X\to \R\_+$ and positive reals *p* and *c* such that, for every *x* ∈ *X* and $\s\in \S$, the map $t\mapsto V(\phi(t,x,\s))$ is continuous from the left, inequality  is satisfied and the following inequality holds *V*(*x*) ≤ *c*∥*x*∥*p*,  ∀ *x* ∈ *X*. The fact that item i) implies ii) is a straightforward consequence of Theorem [coercive1], using, in particular,. Moreover ii) clearly implies iii). Finally, iii) implies i), as follows from Theorem [ULES-SG]. Discussion: comparison with the current state of the art ======================================================== We compare here the results stated in Section [lyap-sec] with some interesting similar results, obtained recently in , concerning the Lyapunov characterization of the uniform global asymptotic stability of a forward complete dynamical system. In order to make this comparison, we briefly recall some definitions and assumptions from . [0-GAS def] We say that a forward complete dynamical system $\Sigma=(X,\S,\phi)$ is *uniformly globally asymptotically stable at the origin* (UGAS, for short) if there exists a function *κ* of class KL such that the transition map *ϕ* satisfies the inequality $$\|\phi(t, x, \s)\|\leq \kappa(\|x\|,t), \quad \forall~t\geq 0,\; \forall~x\in X,\; \forall~\s\in\S.$$ The notion of *robust forward completeness* of system Σ is given by the following. The forward complete dynamical system $\Sigma=(X,\S,\phi)$ is said to be *robustly forward complete* (RFC, for short) if for any *C* > 0 and any *τ* > 0 it holds that $$\label{RFC} \sup\_{\|x\|\leq C, t\in [0,\tau], \s\in \S}\|\phi(t,x,\s)\|<\infty.$$ Notice that RFC property of Σ is equivalent to inequality, although it does not necessarily imply that $\limsup\_{r\downarrow 0}\frac{\beta(r)}{r}$ is finite. The notion of *robust equilibrium point*, which may be seen as a form of weak stability at the origin, is given as follows. [REP] We say that 0 ∈ *X* is a *robust equilibrium point* (REP, for short) of the forward complete dynamical system $\Sigma=(X,\S,\phi)$ if for every *ɛ*, *h* > 0, there exists *δ* = *δ*(*ɛ*, *h*) > 0, so that $$\|x\|\leq \delta\implies \|\phi(t,x,\s)\|\leq \varepsilon,\quad \forall t\in [0,h], \forall \s\in \S.$$ One of the main results obtained in  relates the UGAS property with the existence of a non-coercive Lyapunov functional, i.e., a continuous function $V:X\to\R\_+$ satisfying *V*(0) = 0 and the two inequalities 0 < *V*(*x*) ≤ *α*1(∥*x*∥),  ∀ *x* ∈ *X* ∖ {0},  and $$\label{dini9-GAS} \overline{D}\_{\s}V(x)\leq -\alpha\_2(\|x\|), \quad \forall~x\in X,\; \forall~\s\in\S,$$ where *α*1 ∈ K∞ and *α*2 ∈ K. This is formulated by the following theorem. [][Miro19] Consider a forward complete dynamical system $\Sigma=(X,\S,\phi)$ and assume that Σ is robustly forward complete and that 0 is a robust equilibrium point of Σ. If Σ admits a non-coercive Lyapunov functional, then it is UGAS. The existence of a non-coercive Lyapunov functional *V* is not sufficient in order to get the uniform global asymptotic stability of system Σ. Indeed, as shown in , the existence of such a *V* does not imply either the RFC or REP property, hence the necessity of both assumptions. Even in the linear case, an RFC like condition is required (see ). Let us compare Theorem [Miro19] with our corresponding result for UGES. As we already noticed, RFC is equivalent to, and in particular it is ensured by the stronger condition. Moreover, it is easy to check that also implies the REP property. Thus, the hypotheses of Theorem [ULES-SG] imply those of Theorem [Miro19]. As a counterpart, a stronger stability property is obtained (namely, UGES). Also notice that our theorem relaxes the requirements on the functional *V*, since the lower Dini derivatives, instead of the upper one, is used and discontinuities of *V* along the trajectories of Σ are allowed. Applications ============ Nonlinear retarded systems with piecewise constant delays --------------------------------------------------------- Let *r* ≥ 0 and set $\C=\C([-r,0],\mathbb{R}^n)$, the set of continuous functions from [ − *r*, 0] to $\R^n$. Consider the nonlinear retarded system $$\begin{array}{ll} \dot x(t)=f\_{\s(t)}(x\_t), & \quad t\geq 0,\label{delay1}\\ x(\theta)=\varphi(\theta), &\quad \theta\in [-r,0],\end{array}$$ where $x(t)\in \R^n$, $\varphi\in \C$, $x\_t:[-r,0]\rightarrow\R^n$ is the standard notation for the history function defined by *x**t*(*θ*) = *x*(*t* + *θ*),   − *r* ≤ *θ* ≤ 0,  $\s:\R\_+\to \mathcal{Q}$ is a piecewise constant function, and $f\_{q}:\C\to \R^n$ is a continuous functional such that *f**q*(0) = 0 for all *q* ∈ Q. For every *q* ∈ Q and $\varphi\in \C$, we assume that there exists a unique solution *x* over [ − *r*,  + ∞) of  with $\s(t)=q$ for every *t* ≥ 0. This defines a family (*T**q*(*t*))*t* ≥ 0 of nonlinear maps from $\C$ into itself by setting *T**q*(*t*)*φ* = *x**t*,  for *t* ≥ 0. According to, (*T**q*(*t*))*t* ≥ 0 is a strongly continuous semigroup of nonlinear operators on $\C$. We denote by $\Sigma\_r=(\C,\mathrm{PC},\phi)$ the corresponding forward complete dynamical system constructed as in Example [PC]. As a consequence of the switching representation of the nonlinear time-varying delay system , the results of the previous section (in particular Theorem [ULES-SG] and Corollary [main]) apply to system Σ*r*. Let us explicitly provide an application of Corollary [main]. Let *L* > 0 be such that $$\label{lip} |f\_{q}(\psi\_1)-f\_{q}(\psi\_2)|\leq L\|\psi\_1-\psi\_2\|, \ \forall~\psi\_1, \psi\_2\in \C,\;\forall~q\in\Q.$$ The following statements are equivalent: * System Σ*r* is UGES; * there exists a continuous functional $V: X\to \R\_+$ and positive reals *p*, $\underline c$, and $\overline c$ such that $$\underline c\|\psi\|^{p}\leq V(\psi)\leq \overline c\|\psi\|^{p}, \quad \forall~\psi\in \C,$$ and $$\label{dini-delay} \overline{D}\_{q}V(\psi)\leq -\|\psi\|^{p}, \quad \forall~\psi\in \C,\; \forall~q\in \Q;$$ * there exist a functional $V: \C\to \R\_+$ and positive reals *p* and *c* such that, for every $\psi\in \C$ and $q\in \Q$, the map *t* ↦ *V*(*T**q*( ⋅ )*ψ*) is continuous from the left, inequality  is satisfied, and $$V(\psi)\leq c\|\psi\|^{p}, \quad \forall~\psi\in \C.$$ The proof is based on Corollary [main]. In order to apply it, we have to prove that the transition map is PC-uniformly continuous and that  holds true. Using, we easily get $$\begin{aligned} \|\phi(t,\varphi\_1,\sigma)-&\phi(t,\varphi\_2,\sigma)\|\leq L\int\_{0}^{t}{\|\phi(s,\varphi\_1,\sigma)-\phi(s,\varphi\_2,\sigma)\|ds}, \quad \forall~t\geq 0, \end{aligned}$$ which implies ∥*ϕ*(*t*, *φ*1, *σ*) − *ϕ*(*t*, *φ*2, *σ*)∥ ≤ *e**L**t*∥*φ*1 − *φ*2∥,  ∀ *t* ≥ 0. Hence, the transition map *ϕ* is PC-uniformly Lipschitz continuous and, using the fact that *f**q*(0) = 0 for every $q\in\Q$, we have that  holds true with *G*0 = *e**L**t*1. Predictor-based sampled data exponential stabilization ------------------------------------------------------ Let *X*, *U* be two Banach spaces. Consider the semilinear control system $$\label{sample1} \left\{\begin{array}{l} \dot x(t)=Ax(t)+f\_{\s(t)}(x(t),u(t)), \quad t\geq 0,\\ x(0)=x\_0\in X, \end{array}\right.$$ where $u\in \C(\R\_+,U)$ is the control input, *A* is the infinitesimal generator of a *C*0-group $(T\_{t})\_{t\in\R}$ of bounded linear operators on *X*, $\s:\R\_+\to \mathcal{Q}$ is a piecewise constant function, and *f**q* : *X* × *U* → *X*, for *q* ∈ Q, is a Lipschitz continuous nonlinear operator, with Lipschitz constant *L**f* > 0 independent of *q*, such that *f**q*(0, 0) = 0. Let *K* : *X* → *U* be a globally Lipschitz function with Lipschitz constant *L**K* > 0, satisfying *K*(0) = 0, and consider system  in closed-loop with *u*(*t*) = *K*(*x*(*t*)),  ∀ *t* ≥ 0. Observe that, since *A* is the infinitesimal generator of a *C*0-group, the following inequality holds for the corresponding induced norm: there exist Γ, *ω* > 0 such that $$\label{Groupe} \|T\_{t}\|\leq \Gamma e^{\omega |t|}, \quad \forall~t\in \R.$$ The aim of this section is to show the usefulness of our converse theorems in the study of exponential stability preservation under sampling for the semilinear control switching system . For convenience of the reader we give the following definition. Let 0 < *s*1 < ⋯ < *s**k* < ⋯ be an increasing sequence of times such that lim*k* →  + ∞*s**k* =  + ∞. The instants *s**k* are called *sampling instants* and the quantity *δ* = sup*k* ≥ 0(*s**k* + 1 − *s**k*) is called the *maximal sampling time*. By *predictor-based sampled data controller* we mean a feedback *u*( ⋅ ) of the type *u*(*t*) = *K*(*T**t* − *s**k**x*(*s**k*)),  ∀ *t* ∈ [*s**k*, *s**k* + 1),  ∀ *k* ≥ 0. ### Converse Lyapunov result for the closed-loop system – For every $\s\in \mathrm{PC}$, there exist an increasing sequence of times (*t**k*)*k* ≥ 0 and a sequence (*q**k*)*k* ≥ 0 taking values in Q such that *t*0 = 0, lim*k* → ∞*t**k* = ∞ and $\s$ is constantly equal to *q**k* on [*t**k*, *t**k* + 1) for every *k* ≥ 0. For every *q* ∈ Q and *x*0 ∈ *X*, letting $\s(t)\equiv q$, there exists a unique mild solution of  over [0,  + ∞), i.e., a continuous function *x*( ⋅ , *x*0, *q*) satisfying $$\begin{aligned} x(t,x\_0&,q)=T\_{t}x\_0+\int\_{0}^{t}T\_{t-s}f\_{q}\left(x(s,x\_0, q),K(x(s,x\_0, q))\right)ds, \end{aligned}$$ for every *t* ≥ 0. This defines a family (*T**q*(*t*))*t* ≥ 0 of nonlinear maps by setting *T**q*(*t*)*x*0 = *x*(*t*, *x*0, *q*), for *t* ≥ 0. We denote by Σ0 = (*X*, PC, *ϕ*) the corresponding forward complete dynamical system constructed as in Example [PC]. By, system Σ0 is PC-uniformly Lipschitz continuous. As a consequence of Theorem [coercive1], we have the following lemma. [coercive3] Suppose that Σ0 is USGES. Then, for every *r* > 0, there exist $\underline{c}\_r,\overline{c}\_r>0$ and a Lipschitz continuous Lyapunov functional $V\_r: X\to \R\_+$ such that $$\underline{c}\_r\|x\|\leq V\_r(x)\leq \overline{c}\_r\|x\|, \quad \forall~x\in B\_X(0,r),$$ and $$\overline{D}\_{q}V\_r(x)\leq -\|x\|, \quad \forall~x\in B\_X(0,r),\; \forall~q\in\Q.$$ ### Predictor-based sampled data feedback Consider the predictor-based sampled data switching control system $$\label{sample2} \left\{\begin{array}{l} \dot x(t)=Ax(t)+f\_{\s(t)}(x(t),K(T\_{t-s\_k}x(s\_k))),\ s\_{k}\leq t<s\_{k+1},\\ x(0)=x\_0, \end{array}\right.$$ where (*s**k*)*k* ≥ 0 is the increasing sequence of sampling instants. We will say that $x^{\Sigma}:\R\_+\to X$ is a solution of system  if *t* ↦ *x*Σ(*t*) is continuous and for every *k* ≥ 0 and *s**k* ≤ *t* < *s**k* + 1 one has $$\begin{aligned} x^{\Sigma}&(t)=T\_{t-s\_k}x^{\Sigma}(s\_k)+\int\_{0}^{t-s\_k}T\_{t-s\_k-s}f\_{\s(s+s\_k)}\left(x^{\Sigma}(s+s\_k),K\left(T\_{s}x^{\Sigma}(s\_k)\right)\right)ds,\label{sample-mild}\end{aligned}$$ that is, the restriction of *x*Σ on [*s**k*, *s**k* + 1] is a mild solution. By applying  on every interval [*s**k*, *s**k* + 1], we deduce the following. For every *x*0 ∈ *X* and $\s\in\mathrm{PC}$, system  admits a unique solution $x^{\Sigma}:\R\_+\to X$. For *x*0 ∈ *X* and $\s\in \mathrm{PC}$, following the same reasoning as before, we identify the dynamics of system  with the transition map $$\phi(t,x\_0,\s)=x^{\Sigma}(t), \quad t\geq 0,$$ where *x*Σ is given by , and we denote by Σ = (*X*, PC, *ϕ*) the corresponding forward complete dynamical system. Notice that Σ depends on the sequence of sampling instants (*s**k*)*k* ∈ N. [sampling theorem] If system Σ0 is USGES and $$\label{eq:limMr} \lim\_{r\to \infty}\frac{r}{M(r)}=+\infty,$$ where *M*(*r*) is as in, then for every *r* > 0 there exists *δ*⋆(*r*) > 0 such that Σ is UES in *B**X*(0, *r*), provided that the maximal sampling time of (*s**k*)*k* ∈ N is smaller than *δ*⋆(*r*). When Σ0 is uniformly globally exponentially stable, the associated Lyapunov functional is globally uniformly Lipschitz, and by consequence (see the proof of Theorem [sampling theorem]), the following corollary holds true. [sampling theorem-UGES] Suppose that system Σ0 is UGES. Then there exists *δ*⋆ > 0 such that system Σ is UGES provided that the maximal sampling time of (*s**k*)*k* ∈ N is smaller than *δ*⋆. Link between uniform global exponential stability and uniform input-to-state stability -------------------------------------------------------------------------------------- Let *X* and *U* be two Banach spaces and consider the control system , where *A* is the infinitesimal generator of a *C*0-semigroup of bounded linear operators (*T**t*)*t* ≥ 0 on *X*, $\s\in \mathrm{PC}$, and *f**q* : *X* × *U* → *X*, for *q* ∈ Q, is a Lipschitz continuous nonlinear operator, with Lipschitz constant *L**f* > 0 independent of *q*, such that *f**q*(0, 0) = 0. We assume here that the set of admissible controls is $L^p(U):=L^p(\R\_+,U)$ with 1 ≤ *p* ≤  + ∞. Following the reasoning in Section [sample] we can define (see, e.g., ), for every *x*0 ∈ *X*, $\s\in \mathrm{PC}$, and *u* ∈ *L**p*(*U*), the corresponding trajectory $\phi\_u(t,x\_0,\s)$ on $\R\_+$, which is absolutely continuous with respect to *t* and continuous with respect to (*x*0, *u*) ∈ *X* × *L**p*(*U*). We next provide a result of ISS type in the same spirit as those obtained in . In our particular context (UGES and global Lipschitz assumption) we are able to prove that the input-to-state map $u\mapsto \phi\_u(\cdot,0,\s)$ has finite gain. [iss theorem] Assume that the forward complete dynamical system (*X*, PC, *ϕ*0) is UGES. Then for every 1 ≤ *p* ≤  + ∞ and $\s\in \mathrm{PC}$, the input-to-state map $u\mapsto \phi\_u(\cdot,0,\s)$ is well defined as a map from *L**p*(*U*) to *L**p*(*X*) and has a finite *L**p*-gain independent of $\s$, i.e., there exists *c**p* > 0 such that $$\label{lp1} \|\phi\_u(\cdot,0,\s)\|\_{L^p(X)}\leq c\_p\|u\|\_{L^p(U)}, \quad \forall~u\in L^p(U),\; \forall~\s\in\mathrm{PC}.$$ Example: Sample-data exponential stabilization of a switching wave equation =========================================================================== Let Ω be a bounded open domain of class $\C^2$ in $\R^n$, and consider the switching damped wave equation $$\label{wave} \left\{\begin{array}{ll} \frac{\partial^2\psi}{\partial t^2}-\Delta x+\rho\_{\s(t)}(\frac{\partial\psi}{\partial t})=0\quad &\textnormal{in $\Omega\times\R\_+$},\\ \psi=0\quad & \textnormal{on $\partial\Omega\times\R\_+$},\\ \psi(0)=\psi\_0, \psi^{\prime}(0)=\psi\_1\quad & \textnormal{on $\Omega$}, \end{array}\right.$$ where $\s:\R\_+\to \mathcal{Q}$ is a piecewise constant function and $\rho\_q:\R\to\R$, for *q* ∈ Q, is a uniformly Lipschitz continuous function satisfying $$\rho\_q(0)=0, \quad \alpha |v|\leq |\rho\_q(v)|\leq \frac{|v|}{\alpha}, \quad \forall~v\in\R,\; \forall~q\in \mathcal{Q},$$ for some *α* > 0. In the case where $\tilde\rho(t,v):=\rho\_{\s(t)}(v)$ is sufficiently regular, namely a continuous function differentiable on $\R\_+\times (-\infty,0)$ and $\R\_+\times (0,\infty)$, and $v\mapsto \tilde\rho(t,v)$ is nondecreasing, for each initial condition (*ψ*0, *ψ*1) taken in *H*2(Ω) ∩ *H*01(Ω) × *H*01(Ω) there exists a unique strong solution for  in the class ${\bf H}=W^{2,\infty}\_{\rm loc}(\R\_+,L^2(\Omega))\cap W^{1,\infty}\_{\rm loc}(\R\_+,H^1\_0(\Omega))\cap L^{\infty}\_{\rm loc}(\R\_+,H^2(\Omega)\cap H^1\_0(\Omega)) $ (see for more details). For the switching damped wave equation  the existence and uniqueness of a strong solution (in ${\bf H}$) is given by concatenation. Defining the energy of the solution of  by $$E(t)=\frac{1}{2}\int\_{\Omega}\left({\frac{\partial\psi}{\partial t}}^2+|\nabla \psi|^2\right)dx,$$ we can prove, following the same lines of the proof of , that the energy of the solutions in ${\bf H}$ decays uniformly (with respect to the initial condition) exponentially to zero as *E*(*t*) ≤ *E*(0)exp(1 − *μ**t*),  ∀ *t* ≥ 0,  for some *μ* > 0 that depends only on *α*. Let *X* be the Banach space *H*01(Ω) × *L*2(Ω) endowed with the norm ∥*x*∥ = ∥∇*x*1∥*L*2(Ω)2 + ∥*x*2∥*L*2(Ω)2,  and let *A* be the linear operator defined on *X* by $$\begin{aligned} D(A)=&\Big\{x=\left(\begin{array}{c}x\_1\\x\_2\end{array}\right)\in X \mid x\_1\in H^2(\Omega)\cap H^1\_0(\Omega),\\ & \quad x\_2\in H^1\_0(\Omega)\Big\},\\ A=& \left(\begin{array}{cc}0&I\\ \Delta & 0\end{array}\right), \end{aligned}$$ where *I* is the identity operator and Δ denotes the Laplace operator. It is well known that *D*(*A*) is dense in *X* and that *A* is the infinitesimal generator of a *C*0-group of bounded linear operators $(T\_{t})\_{t\in \R}$ on *X* satisfying $$\|T\_{t}\|\leq \Gamma e^{|t|}, \quad \forall~t\in \R.$$ With this formulation, equation  can be rewritten as the initial value problem $$\label{wave-cauchy} \left\{\begin{array}{l} \dot x(t)=Ax(t)+f\_{\s(t)}(x(t),u(t)), \quad t\geq 0,\\ x(0)=x\_0, \end{array}\right.$$ with feedback $$u(t)= x\_2(t) \textnormal{ and } f\_s(x,u)=\left(\begin{array}{c}0\\ \rho\_s(u)\end{array}\right).$$ The associated transition map satisfies the inequality $$\label{decay wave} \|\phi(t,x,\s)\|\leq e^{1-\mu t}\|x\|, \quad \forall~t\geq 0.$$ Note that the constant *μ* does not depend on the solution. Using the density of *D*(*A*) in *X*, inequality  holds true for weak solutions. Thus system  is UGES. The assumptions of Corollary [sampling theorem-UGES] being satisfied, we deduce that for a sufficiently small maximal sampling time the predictor-based sampled data feedback preserves the exponential decay to zero of the energy of the solutions of . Proof of the main results ========================= As a preliminary step, let us state the following useful and straightforward lemma. Given a continuous function $\beta:\R\_+\to \R\_+$ satisfying $\limsup\_{r\downarrow 0} \frac{\beta(r)}{r}<+\infty$, the function $\alpha:\R\_+\to \R\_+$ defined by $$\label{alpha-beta} \alpha(0)=1+\limsup\_{r\downarrow 0} \frac{\beta(r)}{r},\ \alpha(r)=1+\sup\_{s\in (0,r]}\frac{\beta(s)}{s}, \ r>0,$$ is a continuous nondecreasing function satisfying *β*(*r*) ≤ *r**α*(*r*) for every *r* ≥ 0. Proof of Theorem [ULES] ----------------------- By using inequality , one deduces that i) implies ii) with $r\mapsto k(r)\equiv\frac{M}{(p\lambda)^{1/p}}$. Moreover, ii) clearly implies iii). It then remains to prove that iii) implies i). Without loss of generality, we assume that *G*0 ≥ 1. Let $C:\R\_+\to \R\_+$ be the nondecreasing function defined by $$\label{C} C(r)=G\_0\max\left\{\alpha(r),\frac{k(r)}{t\_1^{1/p}}\alpha\left(\frac{k(r)r}{t\_1^{1/p}}\right)\right\},$$ where $\alpha:\R\_+\to \R\_+$ is given by . For *t* > *t*1, *x* ∈ *X*, and $\s\in\S$, we have $$\begin{aligned} \|\phi(t,x,\s)\|&=\|\phi\left(\tau,\phi(t-\tau, x, \s), \mathbb{T}\_{t-\tau}\s\right)\| \leq G\_0\beta\left(\|\phi(t-\tau, x, \s)\|\right), \quad \forall~\tau\in [0,t\_1]. \label{bound1}\end{aligned}$$ One deduces from equations and that, for *t* > *t*1, *x* ∈ *B**X*(0, *R*), and $\s\in\S$, $$\begin{aligned} \label{bound2} t\_1\left(\beta^{-1}\left(\frac{\|\phi(t,x,\s)\|}{G\_0}\right)\right)^{p}&\leq \int\_{t-t\_1}^{t}{\|\phi(\tau,x,\s)\|^{p}d\tau}\leq \int\_{0}^{+\infty}{\|\phi(\tau,x,\s)\|^{p}d\tau}\leq k(\|x\|)^p\|x\|^p.\end{aligned}$$ Therefore, for *t* > *t*1, *x* ∈ *B**X*(0, *R*), and $\s\in\S$, we have $$\label{betaincreasing} \|\phi(t,x,\s)\|\leq G\_0\beta\left(\frac{k(\|x\|)\|x\|}{t\_1^{1/p}}\right).$$ By consequence, bundling together and, we get for *t* ≥ 0 and *x* ∈ *B**X*(0, *R*) that $$\begin{aligned} \|\phi(t,x,\s)\|&\leq G\_0\max\left\{\beta(\|x\|),\beta\left(\frac{k(\|x\|)\|x\|}{t\_1^{1/p}}\right)\right\} \leq C(\|x\|)\|x\|,\label{bound3}\end{aligned}$$ where we used that *β*(∥*x*∥) ≤ ∥*x*∥*α*(∥*x*∥). Let *r* > 0 be such that *r**C*(*r*) < *R*. In particular, for every *t* ≥ 0 and *x* ∈ *B**X*(0, *r*) we have $\|\phi(t,x,\s)\|<R$. It follows from  that, for every *t* ≥ 0 and *x* ∈ *B**X*(0, *r*), $$\begin{aligned} t\|\phi(t,x,\s)\|^{p}&=\int\_{0}^{t}{\|\phi(t-\tau,\phi(\tau,x,\s),\mathbb{T}\_{\tau}\s)\|^{p}d\tau} \leq \int\_{0}^{t}{ C(r)^{p}\|\phi(\tau,x,\s)\|^{p}d\tau}\leq \left( k(r) C(r) \right)^{p}\|x\|^{p},\end{aligned}$$ where the last inequality follows from. By consequence, one has, for every *t* ≥ 0, *x* ∈ *B**X*(0, *r*), and $\s\in\S$ $$\begin{aligned} \|\phi(t,x,\s)\|\leq \frac{k(r) C\left(r\right)}{t^{1/p}}\|x\|.\end{aligned}$$ So, for each 0 < *c* < 1, there exists a positive real number *t*0 = *t*0(*c*, *r*) such that $$\begin{aligned} \|\phi(t,x,\s)\|\leq c\|x\|, \quad \forall~t\geq t\_0,\;\forall~x\in B\_X(0,r),\;\forall~\s\in\S.\end{aligned}$$ Now, let *t* ≥ 0, *x* ∈ *B**X*(0, *r*), and $\s\in \S$ be fixed. There exists an integer *n* ≥ 0 such that *t* = *n**t*0 + *s*, with 0 ≤ *s* < *t*0. Notice that $\phi(jt\_0,x,\s)\in B\_X(0,r)$ for every *j* ∈ N. We have $$\begin{aligned} \|\phi(t,x,\s)\|&=\|\phi(s,\phi(nt\_0,x,\s),\mathbb{T}\_{nt\_0}\s)\| \leq C(r)\|\phi(nt\_0,x,\s)\|\leq C(r)c^{n}\|x\|\leq M(r)e^{-\lambda(r) t}\|x\|,\end{aligned}$$ with $M(r)=\frac{ C(r)}{c}$ and $\lambda(r)=-\frac{\log{(c)}}{t\_0(c,r)}>0$. The uniform local exponential stability of system Σ is established. Proof of Theorem [USGES] ------------------------ If Σ is USGES, then by Remark [remark-USGES] we may assume without loss of generality that *r* ↦ *λ*(*r*) is constant and *r* ↦ *M*(*r*) is nondecreasing. Hence, i) implies ii) with $k(r):= \frac{M(r)}{(p\lambda(r))^{1/p}}$. Moreover, ii) clearly implies iii). It then remains to prove that iii) implies i). For this, let *r* > 0. Let *R* > *r**C*(*r*), where *C* is defined by , and observe that   is satisfied in *B**X*(0, *R*). Following the same proof as in Theorem [ULES], we get the existence of *M*(*r*) > 0 and *λ*(*r*) > 0 such that $$\|\phi(t,x,\s)\|\leq M(r)e^{-\lambda(r) t}\|x\|,$$ for all *t* ≥ 0, *x* ∈ *B**X*(0, *r*), and $\s\in\S$, whence the uniform semi-global exponential stability of system Σ. Proof of Theorem [datko] ------------------------ The proof follows the same steps as that of Theorem [USGES] with *β* equal to the identity function and *k* equal to a constant function. In such a case, the functions *r* ↦ *M*(*r*) and *r* ↦ *λ*(*r*) are constant. Proof of Theorem [ULES-SG] -------------------------- We start by proving item i). Let *x* ∈ *B**X*(0, *R*) and $\s\in \S$. Assume that there exists a first time 0 < *t*⋆ <  + ∞ such that $\|\phi(t^{\star},x,\s)\|=R$. If *t*⋆ ≤ *t*1 then we have *R* ≤ *G*0*β*(∥*x*∥) which implies that ∥*x*∥ ≥ *β*− 1(*R*/*G*0). Then let us take *x* ∈ *B**X*(0, *β*− 1(*R*/*G*0)), so that *t*⋆ > *t*1. For every *t*1 ≤ *t* ≤ *t*⋆, by repeating the same reasoning as in , we have $$\label{bound4} t\_1\beta^{-1}\left(\frac{\|\phi(t,x,\s)\|}{G\_0}\right)^{p}\leq \int\_{t-t\_1}^{t}{\|\phi(\tau,x,\s)\|^{p}d\tau}.$$ In addition, since $t\mapsto V(\phi(t,x,\s))$ is continuous from the left, it follows from  (see ) that $$\begin{aligned} -V(x)&\leq V(\phi(t,x,\s))-V(x)\leq -\int\_{0}^{t}{\|\phi(\tau,x,\s)\|^{p}d\tau}\leq -\int\_{t-t\_1}^{t}{\|\phi(\tau,x,\s)\|^{p}d\tau}.\label{Hagood}\end{aligned}$$ By consequence, from inequalities  and together with , one gets $$\label{contradiction} t\_1\beta^{-1}\left(\frac{\|\phi(t,x,\s)\|}{G\_0}\right)^{p}\leq c\|x\|^p.$$ Thus, evaluating  at *t* = *t*⋆, one gets $$\|x\|\geq \left(\frac{t\_1}{c}\right)^{1/p}\beta^{-1}\left(\frac{R}{G\_0}\right).$$ If we take *x* ∈ *B**X*(0, *γ*) with $$\gamma:=\min\left\{\beta^{-1}\left(\frac{R}{G\_0}\right),\left(\frac{t\_1}{c}\right)^{1/p}\beta^{-1}\left(\frac{R}{G\_0}\right)\right\}$$ we get a contradiction, that is, we then have $$\|\phi(t,x,\s)\|\leq R, \quad \forall~t\geq 0,\; \forall~x\in B\_X(0,\gamma),\;\forall~\s\in \S.$$ From , we have, for every *t* ≥ 0, *x* ∈ *B**X*(0, *γ*), and $\s\in\S$, $$\underline{D}\_{\s}V(\phi(t,x,\s))\leq -\|\phi(t,x,\s)\|^{p},$$ from which, by repeating the same reasoning as in , we deduce the inequality $$\label{dini6} \int\_{0}^{\infty}{\|\phi(\tau,x,\s)\|^{p}d\tau}\leq c\|x\|^{p}, \ \forall~x\in B\_X(0,\gamma),\;\forall~\s\in \S.$$ Thanks to Theorem [ULES], the uniform local exponential stability of system Σ follows from  together with . The statement ii) of the theorem follows from ,, and the definition of *γ*. The last statement follows from the fact that  holds true for every *x* ∈ *X* with *c* independent of ∥*x*∥, using Theorem [datko]. Proof of Theorem [coercive1] ---------------------------- Fix *r* > 0 and let *M* = *M*(*r*) and *λ* = *λ*(*r*) be as in Definition [0-GES def]. Choose *γ* = *γ*(*r*) > 0 such that *γ* − *λ* < 0. Let $\bar t=\frac{\log(M)}{\lambda-\gamma}.$ Let $V\_r:X\to \R\_+$ be the functional defined by $$\label{lyapunov3} V\_r(x)=\frac1\gamma\sup\_{\s\in \S, t\in [0,\bar t]}\|e^{\gamma t}\phi(t,x,\s)\|,\quad x\in X.$$ The functional *V**r* is well defined, since Σ is USGES, which implies that, for every *R* > 0, *t* ≥ 0, and *x* ∈ *B**X*(0, *R*), $$\label{eq:decr} \|e^{\gamma t}\phi(t,x,\s)\|\le M(R)e^{(\gamma-\lambda(R))t}\|x\|.$$ Let us check inequalities  and. The right-hand inequality in  follows directly from, with $$\label{eq:upb} \overline{c}\_r=\frac{M}{\gamma}.$$ The left-hand inequality, with $$\label{eq:lob} \underline{c}\_r=\frac{1}{\gamma},$$ is a straightforward consequence of the definition of *V**r*. Concerning , remark that for all *x* ∈ *B**X*(0, *r*) and *h* ≥ 0, we have $$\begin{aligned} \sup\_{\s\in \S, t\in[\bar t,\bar t+h]}\|e^{\gamma t}\phi(t,x,\s)\|&\leq \sup\_{t\in[\bar t,\bar t+h]}Me^{(\gamma-\lambda) t}\|x\| \leq Me^{(\gamma-\lambda) \bar t}\|x\|= \|x\|,\end{aligned}$$ and hence $$\sup\_{\s\in \S, t\in[0,\bar t+h]}\|e^{\gamma t}\phi(t,x,\s)\|= \sup\_{\s\in \S, t\in[0,\bar t]}\|e^{\gamma t}\phi(t,x,\s)\|.$$ Then, for every $\tilde\s\in\S$, *x* ∈ *B**X*(0, *r*), and *h* ≥ 0, we have $$\begin{aligned} V\_r(\phi(h,x,\tilde\s))&=\frac1\gamma\sup\_{\s\in \S, t\in[0,\bar t]}\|e^{\gamma t}\phi(t,\phi(h,x,\tilde\s),\s)\| \leq \frac1\gamma\sup\_{\s\in \S, t\in[0,\bar t]}\|e^{\gamma t}\phi(t+h,x,\s)\|\nonumber\\ &= \frac{e^{-\gamma h}}{\gamma}\sup\_{\s\in \S, t\in[0,\bar t]}\|e^{\gamma (t+h)}\phi(t+h,x,\s)\| =\frac{e^{-\gamma h}}{\gamma}\sup\_{\s\in \S, t\in[h,\bar t+h]}\|e^{\gamma t}\phi(t,x,\s)\|\nonumber\\ &\leq \frac{e^{-\gamma h}}{\gamma}\sup\_{\s\in \S, t\in[0,\bar t+h]}\|e^{\gamma t}\phi(t,x,\s)\|= e^{-\gamma h} V\_r(x). \end{aligned}$$ Therefore, for all *x* ∈ *B**X*(0, *r*) and any $\tilde\s\in\S$, it follows that $$\begin{aligned} \overline D\_{\tilde\s}V\_r(x)&=\limsup\_{h\downarrow 0}\frac{V\_r(\phi(h,x,\tilde\s))-V\_r(x)}{h} \leq \limsup\_{h\downarrow 0}\frac{e^{-\gamma h}-1}{h}V\_r(x) \leq \limsup\_{h\downarrow 0}\frac{e^{-\gamma h}-1}{h}\frac{\|x\|}{\gamma}\leq -\|x\|,\end{aligned}$$ which implies that inequality  holds true. Let us prove that the functional $V\_r:X\to\R\_+$ is continuous. For *x*, *y* ∈ *X*, we have $$\begin{aligned} |V\_r(x)-V\_r(y) | = &\left|\sup\_{\s\in \S, t\in [0,\bar t]}\|e^{\gamma t}\phi(t,x,\s)\|-\sup\_{\s\in \S, t\in [0,\bar t]}\|e^{\gamma t}\phi(t,y,\s)\|\right|\nonumber\\ \leq&\left| \sup\_{\s\in\S, t\in [0,\bar t]}\left(\|e^{\gamma t}\phi(t,x,\s)\|-\|e^{\gamma t}\phi(t,y,\s)\|\right)\right|\nonumber\\ \leq& \sup\_{\s\in\S, t\in [0,\bar t]}\left|\|e^{\gamma t}\phi(t,x,\s)\|-\|e^{\gamma t}\phi(t,y,\s)\|\right|\nonumber\\ \leq& e^{\gamma \bar t}\sup\_{\s\in\S, t\in[0,\bar t]}\left| \|\phi(t,x,\s)\|-\|\phi(t,y,\s)\|\right|.\label{V-continuity}\end{aligned}$$ The continuity of *V* then follows from the $\S$-uniform continuity of *ϕ*. Moreover, if the transition map *ϕ* is $\S$-uniformly Lipschitz continuous, then we deduce from  that $$\begin{aligned} \left |V\_r(x)-V\_r(y)\right |&\leq&e^{\gamma \bar t}l(\bar t)\|x-y\|,\label{V-Lipshitz}\end{aligned}$$ where $l(\bar t)$ is as in Definition [def:philip], which implies the Lipschitz continuity of *V**r*. In the case where the transition map is Lipschitz continuous on bounded sets, we conclude similarly. The construction of the Lyapunov functional  is based on the classical construction given in  in the context of linear *C*0-semigroups. This is also used in  in order to construct a coercive input-to-state Lyapunov functional for bilinear infinite-dimensional systems with bounded input operators. An alternative construction of a coercive common Lyapunov functional can be given by $$\label{alternative1} V(x)=\sup\_{\s\in \S}\int\_{0}^{+\infty}\|\phi(t,x,\s)\|dt+\sup\_{t\geq 0,\s\in\S}\|\phi(t,x,\s)\|,$$ or also by $$\label{alternative2} V(x)=\int\_{0}^{+\infty}\sup\_{\s\in \S}\|\phi(t,x,\s)\|dt+\sup\_{t\geq 0,\s\in\S}\|\phi(t,x,\s)\|.$$ For both constructions *V* satisfies : the right-hand inequality follows directly from the uniform exponential stability assumption with ${\overline c=M\left(1+1/\lambda\right)}$, and the left-hand one holds with $\underline c=1$. Indeed, the second term appearing in  (respectively, ) guarantees the coercivity of the functional *V*. The first term appearing in  (respectively, ) is actually a (possibly non-coercive) Lyapunov functional which can be used to give a converse to Theorem [ULES-SG]. Proof of Theorem [sampling theorem] ----------------------------------- Fix *r* > 0 and let *V**r* : *X* → R+ be a *L**r*-Lipschitz continuous Lyapunov functional satisfying the conclusions of Lemma [coercive3]. Since *V**r* is decreasing along the trajectories of Σ0 then, setting $\rho:=r \underline{c}\_r/\overline{c}\_r$, we have *ϕ*Σ0(*t*, *x*, *σ*) ∈ *B**X*(0, *r*) for every *x* ∈ *B**X*(0, *ρ*), *t* ≥ 0, and $\sigma\in {\cal S}={\rm PC}$. Let *x*0 ∈ *B**X*(0, *ρ*) and $\s\in {\cal S}$. Let *t*\* > 0 be such that the trajectory *x*Σ( ⋅ ) = *ϕ*Σ( ⋅ , *x*0, *σ*) of system Σ stays in *B**X*(0, *r*) for *t* ∈ [0, *t*\*]. Computing the upper Dini derivative of *V**r* along *x*Σ( ⋅ ) gives $$\begin{aligned} \overline{D}\_{\s}V\_r(x^{\Sigma}(t))&=\limsup\_{h\downarrow 0}\frac{V\_r\left(x^{\Sigma}(t+h)\right)-V\_r\left(x^{\Sigma}(t)\right)}{h}\nonumber\\ &=\limsup\_{h\downarrow 0}\Big[\frac{V\_r\left(x^{\Sigma}(t+h)\right) -V\_r\left(x^{\Sigma\_0}(x^{\Sigma}(t),h)\right)}{h} +\frac{V\_r\left(x^{\Sigma\_0}(x^{\Sigma}(t),h)\right)-V\_r\left(x^{\Sigma}(t)\right)}{h}\Big]\nonumber\\ &\leq \limsup\_{h\downarrow 0}\frac{V\_r\left(x^{\Sigma}(t+h)\right)-V\_r\left(x^{\Sigma\_0}(x^{\Sigma}(t),h)\right)}{h} + \limsup\_{h\downarrow 0}\frac{V\_r\left(x^{\Sigma\_0}(x^{\Sigma}(t),h)\right)-V\_r\left(x^{\Sigma}(t)\right)}{h}\nonumber\\ &\leq L\_r\limsup\_{h\downarrow 0}\frac{\|x^{\Sigma}(t+h)-x^{\Sigma\_0}(x^{\Sigma}(t),h)\|}{h}-\|x^{\Sigma}(t)\|.\label{derivative-sample} \end{aligned}$$ Observe that for *k* ≥ 0, *t* ∈ [*s**k*, *s**k* + 1) ∩ [0, *t*\*], and *h* > 0 small enough we have $$\begin{aligned} x^{\Sigma}&(t+h)=T\_{h}x^{\Sigma}(t)+ \ds\int\_{0}^{h}{T\_{h-s}f\_{\s(t+s)}\left(x^{\Sigma}(t+s),K(T\_{t+s-s\_k}x^{\Sigma}(s\_k))\right)}ds,\end{aligned}$$ and $$\begin{aligned} x^{\Sigma\_0}&(x^{\Sigma}(t),h)=T\_{h}x^{\Sigma}(t)+\ds\int\_{0}^{h}{T\_{h-s}f\_{\s(t+s)}\left(x^{\Sigma\_0}(x^{\Sigma}(t),s),K(x^{\Sigma\_0}(x^{\Sigma}(t),s))\right)}ds.\end{aligned}$$ Using the fact that *f* and *K* are, respectively, *L**f*- and *L**K*-Lipschitz continuous, one gets $$\begin{aligned} \limsup\_{h\downarrow 0}&\frac{\|x^{\Sigma}(t+h)-x^{\Sigma\_0}(x^{\Sigma}(t),h)\|}{h} \leq \Gamma L\_fL\_K\|x^{\Sigma}(t)-T\_{t-s\_k}x^{\Sigma}(s\_k)\|.\end{aligned}$$ Going back to , we obtain that $$\begin{aligned} \overline{D}\_{\s}V\_r&(x^{\Sigma}(t)) \leq \Gamma L\_r L\_fL\_K\|x^{\Sigma}(t)-T\_{t-s\_k}x^{\Sigma}(s\_k)\|-\|x^{\Sigma}(t)\|.\label{dini-sampling} \end{aligned}$$ We now need to estimate *ɛ*(*t* − *s**k*) :  = ∥*x*Σ(*t*) − *T**t* − *s**k**x*Σ(*s**k*)∥,  *t* ∈ [*s**k*, *s**k* + 1). By adding and subtracting $$\int\_{0}^{t-s\_k}T\_{t-s\_k-s}f\_{\s(s+s\_k)}\left(T\_{s}x^{\Sigma}(s\_k),K\left(T\_{s}x^{\Sigma}(s\_k)\right)\right)ds$$ in , and using the identity *f**q*(0, 0) = 0, we obtain $$\begin{aligned} \varepsilon(t-s\_k)\leq&\Gamma e^{\omega\delta}L\_{f}\int\_{0}^{t-s\_k}\varepsilon(s)ds+\Gamma e^{\omega\delta}\int\_{0}^{t-s\_k}\|f\_{\s(s+s\_k)}\left(T\_{s}x^{\Sigma}(s\_k),K\left(T\_{s}x^{\Sigma}(s\_k)\right)\right)\|ds\nonumber\\ \leq&\Gamma e^{\omega\delta}L\_{f}\int\_{0}^{t-s\_k}\varepsilon(s)ds+\Gamma ^2e^{2\omega\delta}L\_{f}(1+L\_{K})\int\_{0}^{t-s\_k}\|x^{\Sigma}(s\_k)\|ds\nonumber\\ \leq&\Gamma e^{\omega\delta}L\_{f}\int\_{0}^{t-s\_k}\varepsilon(s)ds+\Gamma ^2e^{2\omega\delta}L\_{f}(1+L\_{K})\delta\|x^{\Sigma}(s\_k)\|.\end{aligned}$$ By Gronwall’s lemma, we have *ɛ*(*t* − *s**k*) ≤ *c**δ*∥*x*Σ(*s**k*)∥,  where *c* = Γ2*e*2*ω**δ**L**f*(1 + *L**K*)*e**L**f**δ*Γ*e**ω**δ*. Observe that, by , ∥*x*Σ(*s**k*)∥ ≤ Γ*e**ω**δ*∥*T**t* − *s**k**x*Σ(*s**k*)∥. Hence, from ,, and the triangular inequality, we get $$\begin{aligned} \varepsilon(t-s\_k)\leq c\delta \Gamma e^{\omega\delta}\|T\_{t-s\_k}x^{\Sigma}(s\_k)\|\leq c\delta \Gamma e^{\omega\delta}\Big(\varepsilon(t-s\_k)+\|x^{\Sigma}(t)\|\Big),\end{aligned}$$ that is, for sufficiently small *δ*, $$\varepsilon(t-s\_k)\leq \frac{c\delta \Gamma e^{\omega\delta}}{1-c\delta \Gamma e^{\omega\delta}}\|x^{\Sigma}(t)\|.$$ Let *δ*⋆ > 0 be such that $$\label{cstar} c^{\star}:=\Gamma L\_r L\_fL\_K \frac{c\delta^{\star} \Gamma e^{\omega\delta^{\star}}}{1-c\delta^{\star} \Gamma e^{\omega\delta^{\star}}}<1.$$ It follows from  that for every *δ* ∈ (0, *δ*⋆), we have $$\begin{aligned} \overline{D}\_{\s}V\_r(x^{\Sigma}(t))\leq \Gamma L\_r L\_fL\_K\varepsilon(t-t\_k)-\|x^{\Sigma}(t)\|\leq (c^{\star}-1)\|x^{\Sigma}(t)\|\leq \frac{c^{\star}-1}{\overline{c}\_r}V\_r(x^{\Sigma}(t)).\end{aligned}$$ In particular, *V**r* decreases exponentially along *x*Σ, which implies that *t*\* can be taken arbitrarily large and, since *V**r* is coercive, we conclude that Σ is UES in *B**X*(0, *ρ*). To conclude the proof, we need to verify that *ρ* can be taken arbitrarily large (as *r* → ∞) if holds true. In order to do so, notice that if *V**r* is constructed as in the proof of Theorem [coercive1], then $\overline{c}\_r$ and $\underline{c}\_r$ can be chosen as in and. In this case $\rho = r \underline{c}\_r/\overline{c}\_r = r/M(r)$, concluding the proof of the theorem. Proof of Theorem [iss theorem] ------------------------------ Applying Theorem [coercive1], there exist a *L**V*-Lipschitz continuous functional $V: X\to \R\_+$ and $\underline c,\overline c>0$ such that $$\label{eq:coerv} \underline c\|x\|\leq V(x)\leq \overline c\|x\|, \quad \forall~x\in X,$$ and $$\label{lp2} \limsup\_{h\downarrow 0}\frac{V(\phi\_0(h,x,\s))-V(x)}{h}\leq -\|x\|,$$ for every *x* ∈ *X* and $\s\in\S$. Let *t* ≥ 0, $\s\in \S$, and *u* ∈ *L**p*(*U*), 1 ≤ *p* ≤  + ∞. Set $x=\phi\_u(t,0,\s)$. For *h* > 0 small enough, one has $$\begin{aligned} \frac{V(\phi\_u(h,x,\s))-V(x)}{h} &=\frac{V(\phi\_0(h,x,\s))-V(x)}{h} +\frac{V(\phi\_u(h,x,\s))-V(\phi\_0(h,x,\s))}{h}\nonumber\\ & \leq \frac{V(\phi\_0(h,x,\s))-V(x)}{h}+L\_{V}\frac{\|\phi\_u(h,x,\s)-\phi\_0(h,x,\s)\|}{h}\label{lp3}.\end{aligned}$$ Define $e(s)=\phi\_u(s,x,\s)-\phi\_0(s,x,\s)$, for *s* ∈ [0, *h*]. By the variation of constant formula, one has $$e(s)=\int\_{0}^{s} T\_{s-\tau}\Upsilon(t,\tau,x,\s) d\tau, \quad \forall~s\in [0,h],$$ where $$\begin{aligned} \Upsilon(t,\tau,x,\s)=f\_{\s(\tau)}(\phi\_0(\tau,x,\s)+e(\tau),u(t+\tau))-f\_{\s(\tau)}(\phi\_0(\tau,x,\s),0).\end{aligned}$$ Hence, one deduces that ∥*e*(*h*)∥ ≤ Γ*L**f**e**ω**h*∫0*h*(∥*e*(*τ*)∥ + ∥*u*(*t* + *τ*)∥)*d**τ*. For *p* =  + ∞, one deduces that $$\label{linfini} \limsup\_{h\downarrow 0} \frac{\|e(h)\|}{h}\leq \Gamma L\_{f}\|u\|\_{L^{\infty}(U)}.$$ Letting *h* ↓ 0 in  and using one gets $$\begin{aligned} \limsup\_{h\downarrow 0}\frac{V(\phi\_u(t+h,0,\s))-V(\phi\_u(t,0,\s))}{h}&\leq -\|\phi\_u(t,0,\s)\|+\Gamma L\_fL\_V\|u\|\_{L^{\infty}(U)}\\ &\leq -\frac{1}{\overline c}V(\phi\_u(t,0,\s))+c\|u\|\_{L^{\infty}(U)},\end{aligned}$$ with *c* = Γ*L**f**L**V*. Using  for the function $t\mapsto e^{\frac{t}{\overline c}}V(\phi\_u(t,0,\s))$, one concludes that $$V(\phi\_u(t,0,\s))\le c\bar c\|u\|\_{L^{\infty}(U)},\quad t\ge 0.$$ The theorem follows for *p* =  + ∞. We next assume that 1 ≤ *p* <  + ∞. By a standard density argument, it is enough to prove  for those *u* ∈ *L**p*(*U*) which are, in addition, continuous, since $u\mapsto \phi\_u(t,0,\s)$ is continuous in *L**p*(*U*). For *u* continuous, we deduce, from , that $$\label{lp4} \limsup\_{h\downarrow 0} \frac{\|e(h)\|}{h}\leq L\_{f}\|u(t)\|.$$ By using ,, and  one gets $$\begin{aligned} \limsup\_{h\downarrow 0}&\frac{V(\phi\_u(t+h,0,\s))-V(\phi\_u(t,0,\s))}{h}\leq -\|\phi\_u(t,0,\s)\|+c\|u(t)\|,\label{eq:pi}\end{aligned}$$ with *c* = *L**f**L**V*. Let $\varphi\_p(r)=\frac{r^{p}}{p}$, *r* ≥ 0, and *W**p*(*x*) = *φ**p*(*x*), *x* ∈ *X*. Since *φ**p* is increasing and differentiable, one deduce from that $$\begin{aligned} \limsup\_{h\downarrow 0}&\frac{W\_p(\phi\_u(t+h,0,\s))-W\_p(\phi\_u(t,0,\s))}{h} \leq -V^{p-1}(\phi\_u(t,0,\s))\|\phi\_u(t,0,\s)\| +cV^{p-1}(\phi\_u(t,0,\s))\|u(t)\|.\end{aligned}$$ According to , it follows that $$\begin{aligned} 0\leq W\_p(\phi\_u(T,0,\s)) \leq -\int\_0^{T}V^{p-1}(\phi\_u(t,0,\s))\|\phi\_{u}(t,0,\s)\|dt +c\int\_0^{T}V^{p-1}(\phi\_u(t,0,\s))\|u(t)\|dt, \quad \forall~T>0. \end{aligned}$$ By using one gets that $$\begin{aligned} {\underline c}^{p-1}\int\_0^{T}&\|\phi\_u(t,0,\s)\|^pdt \leq c{\overline c}^{p-1}\int\_0^{T}\|\phi\_u(t,0,\s)\|^{p-1}\|u(t)\|dt, \quad \forall~T>0. \end{aligned}$$ The theorem is proved for *p* = 1, by letting *T* →  + ∞. For *p* > 1, one first applies Hölder’s inequality and then lets *T* →  + ∞ to get the conclusion. --- 1. Quartz EA 7393, ENSEA, Cergy-Pontoise, France, [email protected][↩](#fnref1) 2. Laboratoire des Signaux et Systèmes (L2S), Université Paris-Saclay, CNRS, CentraleSupélec, Université Paris-Saclay, Gif-sur-Yvette, France, [email protected][↩](#fnref2) 3. CNRS & Laboratoire des Signaux et Systèmes (L2S), Université Paris-Saclay, CNRS, CentraleSupélec, Gif-sur-Yvette, France, [email protected][↩](#fnref3) 4. Laboratoire Jacques-Louis Lions (LJLL), Inria, Sorbonne Université, Université de Paris, CNRS, Paris, France, [email protected][↩](#fnref4) Lyapunov characterization of uniform exponential stability for nonlinear infinite-dimensional systems ===================================================================================================== In this paper we deal with infinite-dimensional nonlinear forward complete dynamical systems which are subject to external disturbances. We first extend the well-known Datko lemma to the framework of the considered class of systems. Thanks to this generalization, we provide characterizations of the uniform (with respect to disturbances) local, semi-global, and global exponential stability, through the existence of coercive and non-coercive Lyapunov functionals. The importance of the obtained results is underlined through some applications concerning 1) exponential stability of nonlinear retarded systems with piecewise constant delays, 2) exponential stability preservation under sampling for semilinear control switching systems, and 3) the link between input-to-state stability and exponential stability of semilinear switching systems. *Keywords:* Infinite-dimensional systems, Nonlinear systems, Switching systems, Converse Lyapunov theorems, Exponential stability. Introduction ============ Various works have been recently devoted to the characterization of the stability of infinite-dimensional systems in Banach spaces through *non-coercive* and *coercive* Lyapunov functionals (see, e.g., ). By non-coercive Lyapunov functional, we mean a positive definite functional decaying along the trajectories of the system which satisfies 0 < *V*(*x*) ≤ *α*(∥*x*∥),  ∀ *x* ∈ *X* ∖ {0},  where *X* is the ambient Banach space and *α* belongs to the class ${\cal{K}\_{\infty}}$ of continuous increasing bijections from R+ to R+. Such a function *V* would be coercive if there existed $\alpha\_0\in{\cal{K}\_{\infty}}$ such that *V*(*x*) ≥ *α*0(∥*x*∥) for every *x* ∈ *X*. In  it has been proved that the existence of a coercive Lyapunov functional *V* represents a necessary and sufficient condition for the global asymptotic stability for a general class of infinite-dimensional forward complete dynamical systems. On the other hand, the existence of a non-coercive Lyapunov functional does not guarantee global asymptotic stability and some additional regularity assumption on the dynamics is needed (see, e.g., ). Converse Lyapunov theorems can be helpful for many applications, such as stability analysis of interconnected systems  and for the characterization of input-to-state stability (see, e.g., ). Stability results based on non-coercive Lyapunov functionals may be more easily applied in practice, while the existence of a coercive Lyapunov functional may be exploited to infer additional information on a stable nonlinear system. Here, we consider the same class of abstract forward complete dynamical systems, subject to a shift-invariant set of disturbances, as in . The novelty of our approach is that we focus on exponential (instead of asymptotic) stability. For the rest of the paper, the word *uniform* will refer to uniformity with respect to disturbances. We provide theorems characterizing different types of uniform local, semi-global, and global exponential stability, through the existence of non-coercive and coercive Lyapunov functionals. Using a standard converse Lyapunov approach, we prove that uniform semi-global exponential stability is characterized by the existence of a 1-parameter family of Lyapunov functionals, each of them decaying uniformly on a bounded set, while the union of all such bounded sets is equal to the entire Banach space *X*. Concerning the non-coercive case, we first give a generalization of the Datko lemma . Recall that the latter characterizes the exponential behavior of a linear *C*0-semigroup in a Banach space in terms of a uniform estimate of the *L**p*-norm of the solutions. This result has been extended in  to the framework of nonlinear semigroups. Here, we generalize the Datko lemma to the considered class of infinite-dimensional forward complete dynamical systems. Thanks to such a generalization, we prove that the existence of a non-coercive Lyapunov functional is sufficient, under a uniform growth estimate on the solutions of the system, for the uniform exponential stability. The importance of the obtained results is underlined through some applications as described in the sequel. Retarded functional differential equations form an interesting class of infinite-dimensional systems that we cover by our approach. Converse Lyapunov theorems have been developed for systems described by retarded and neutral functional differential equations (see, e.g., ). Such results have been recently extended in  to switching linear retarded systems through coercive and non-coercive Lyapunov characterizations. After representing a nonlinear retarded functional differential equation as an abstract forward complete dynamical system, all the characterizations of uniform exponential stability provided in the first part of the paper can be applied to this particular class of infinite-dimensional systems. In particular, we characterize the uniform global exponential stability of a retarded functional differential equation in terms of the existence of a non-coercive Lyapunov functional. Another interesting problem when dealing with a continuous-time model is the practical implementation of a designed feedback control. Indeed, in practice, due to numerical and technological limitations (sensors, actuators, and digital interfaces), a continuous measurement of the output and a continuous implementation of a feedback control are impossible. This means that the implemented input is, for almost every time, different from the designed controller. Several methods have been developed in the literature of ordinary differential equations for sampled-data observer design under discrete-time measurements (see, e.g., ), and for sampled-data control design guaranteeing a globally stable closed-loop system (see, e.g., ). Apart from time-delays systems (see, e.g.,  for sampled-data control and for sampled-data observer design), few results exist for infinite-dimensional systems. The difficulties come from the fact that the developed methods do not directly apply to the infinite-dimensional case, for which even the well-posedness of sampled-data control dynamics is not obvious (see, e.g., for more details). Some interesting results have been obtained for infinite-dimensional linear systems . In the nonlinear case no standard methods have been developed and the problem is treated case by case . Here, we focus on the particular problem of feedback stabilization under sampled output measurements of an abstract semilinear infinite-dimensional system. In particular, we consider the dynamics $$\label{semilinear intro} \dot x(t)=Ax(t)+f\_{\s(t)}(x(t),u(t)), \quad t\geq 0,$$ where *x*(*t*) ∈ *X*, *U* is a Banach space, *u* ∈ *U* is the input, *A* is the infinitesimal generator of a *C*0-semigroup of bounded linear operators (*T**t*)*t* ≥ 0 on *X*, $\s:\R\_+\to \mathcal{Q}$ is a piecewise constant switching function, and *f**q* : *X* × *U* → *X* is a Lipschitz continuous nonlinear operator, uniformly with respect to *q* ∈ Q. Assume that only discrete output measurements are available *y*(*t*) = *x*(*t**k*),  ∀ *t* ∈ [*t**k*, *t**k* + 1),  ∀ *k* ≥ 0,  where (*t**k*)*k* ≥ 0 denotes the increasing sequence of sampling times. It is well known that, in general, no feedback of the type *u*(*t*) = *K*(*y*(*t*)) stabilizes system . Moreover, suppose that system  in closed-loop with *u*(*t*) = *K*(*x*(*t*)),  ∀ *t* ≥ 0,  where *K* : *X* → *U* is a globally Lipschitz function satisfying *K*(0) = 0, is uniformly semi-globally exponentially stable. Using our converse Lyapunov theorem, we show that if the maximal sampling period is small enough then, under some additional conditions, system  in closed-loop with the predictor-based sampled-data control *u*(*t*) = *T**t* − *t**k**y*(*t**k*),  ∀ *t* ∈ [*t**k*, *t**k* + 1),  ∀ *k* ≥ 0,  is uniformly locally exponentially stable in each ball around the origin. Furthermore, if the closed loop system - is uniformly globally exponentially stable, then the same property holds for the closed loop system -, under sufficiently small sampling period. We give an example of a wave equation (see, e.g., ) showing the applicability of our result. In recent years, the problem of characterizing input-to-state stability (ISS) for infinite-dimensional systems has attracted a particular attention. Roughly speaking, the ISS property, introduced in  for ordinary differential equations, means that the trajectories of a perturbed system eventually approach a neighborhood of the origin whose size is proportional to the magnitude of the perturbation. This concept has been widely studied in the framework of complex systems such as switching systems (see, e.g., and references therein), time-delay systems (see, e.g., and references therein), and abstract infinite-dimensional systems (see, e.g., ). For example, in  a converse Lyapunov theorem characterizing the input-to-state stability of a locally Lipschitz dynamics through the existence of a locally Lipschitz continuous coercive ISS Lyapunov functional is given. Recently in  it has been shown that, under regularity assumptions on the dynamics, the existence of non-coercive Lyapunov functionals implies input-to-state stability. Here, we provide a result of ISS type, proving that the input-to-state map has finite gain, under the assumption that the unforced system corresponding to  (i.e., with *u* ≡ 0) is uniformly globally exponentially stable. The paper is organized as follows. Section [sec:problem state] presents the problem statement with useful notations and definitions. In Section [main results sec] we state our main results, namely three Datko-type theorems for uniform local, semi-global, and global exponential stability, together with direct and converse Lyapunov theorems. In Section [sec: discussion] we compare the proposed Lyapunov theorems with the current state of art. The applications are given in Section [sec: applications]. In Section [sec-example] we consider an example of a damped wave equation. The proofs are postponed to Section [s:proofs]. Notations --------- By (*X*, ∥ ⋅ ∥) we denote a Banach space with norm ∥ ⋅ ∥ and by *B**X*(*x*, *r*) the closed ball in *X* of center *x* ∈ *X* and radius *r*. By $\R$ we denote the set of real numbers and by ∣ ⋅ ∣ the Euclidean norm of a real vector. We use $\R\_+$ and $\R\_+^{\star}$ to denote the sets of non-negative and positive real numbers respectively. A function *α* : R+ → R+ is said to be of class K if it is continuous, increasing, and satisfies *α*(0) = 0; it is said to be of class K∞ if it is of class K and unbounded. A continuous function *κ* : R+ × R+ → R+ is said to be of class KL if *κ*( ⋅ , *t*) is of class K for each *t* ≥ 0 and, for each *s* ≥ 0, *κ*(*s*,  ⋅ ) is nonincreasing and converges to zero as *t* tends to  + ∞. Problem statement ================= In this paper we consider a forward complete dynamical system evolving in a Banach space *X*. Let us recall the following definition, proposed in . [FC] Let $\Q$ be a nonempty set. Denote by $\S$ a set of functions $\s:\R\_+\to \Q$ satisfying the following conditions: * $\S$ is closed by time-shift, i.e., for all $\s\in \S$ and all *τ* ≥ 0, the *τ*-shifted function $\mathbb{T}\_{\tau}\s:s\mapsto\s(\tau+s)$ belongs to $\S$; * $\S$ is closed by concatenation, i.e., for all $\s\_1,\s\_2\in \S$ and all *τ* > 0 the function $\s$ defined by $\s\equiv\s\_1$ over [0, *τ*] and by $\s(\tau+t)=\s\_2(t)$ for all *t* > 0, belongs to $\S$. Let $\phi: \R\_+\times X\times \S\to X$ be a map. The triple $\Sigma=(X,\S,\phi)$ is said to be a *forward complete dynamical system* if the following properties hold: * $\forall~(x,\s)\in X \times \S$, it holds that $\phi(0,x,\s)=x$; * $\forall~(x,\s)\in X \times \S$, ∀ *t* ≥ 0, and $\forall~\tilde\s\in\S$ such $\tilde\s=\s$ over [0, *t*], it holds that $\phi(t,x,\tilde\s)=\phi(t,x,\s)$; * $\forall~(x,\s)\in X \times \S$, the map $t\mapsto \phi(t,x,\s)$ is continuous; * ∀*t*, *τ* ≥ 0, $\forall~(x,\s)\in X \times \S$, it holds that $\phi(\tau,\phi(t,x,\s),\mathbb{T}\_{t}\s)=\phi(t+\tau,x,\s)$. We will refer to *ϕ* as the *transition map* of Σ. Observe that if Σ is a forward complete dynamical system and $\S$ contains a constant function $\s$ then $(\phi(t,\cdot,\s))\_{t\geq 0}$ is a strongly continuous nonlinear semigroup, whose definition is recalled below. [Semigroup] Let *T**t* : *X* → *X*, *t* ≥ 0, be a family of nonlinear maps. We say that (*T**t*)*t* ≥ 0 is a strongly continuous nonlinear semigroup if the following properties hold: * ∀ *x* ∈ *X*, *T*0*x* = *x*; * ∀ *t*1, *t*2 ≥ 0, *T**t*1*T**t*2*x* = *T**t*1 + *t*2*x*; * ∀ *x* ∈ *X*, the map *t* ↦ *T**t**x* is continuous. An example of forward complete dynamical system is given next. [Piecewise constant switching system][PC] We denote by PC the set of piecewise constant $\s:\R\_+\to \mathcal{Q}$, and we consider here the case $\S=\mathrm{PC}$. Let $\s\in \mathrm{PC}$ be constantly equal to $\s\_k$ over [*t**k*, *t**k* + 1), with 0 = *t*0 < *t*1 < ⋯ < *t**k* < *t* < *t**k* + 1, for *k* ≥ 0. With each $\s\_k$ we associate the strongly continuous nonlinear semigroup $(T\_{\s\_k}(t))\_{t\geq 0}:=(\phi(t,\cdot,\s\_k))\_{t\geq 0}$. By concatenating the flows $(T\_{\s\_k}(t))\_{t\geq 0}$, one can associate with $\s$ the family of nonlinear evolution operators $$\label{sigalotti1} T\_{\s}(t):= T\_{\s\_k}(t-t\_{k})T\_{\s\_{k-1}}(t\_{k}-t\_{k-1})\cdots T\_{\s\_1}(t\_{1}),$$ *t* ∈ [*t**k*, *t**k* + 1). By consequence, system Σ can be identified with the piecewise constant switching system $$\label{sigalotti} x(t)=T\_{\s}(t)x\_0, \ x\_{0}\in X, \ \s\in \mathrm{PC}.$$ Thanks to the representation given by , this paper extends to the nonlinear case some of the results obtained in  on the characterization of the exponential stability of switching linear systems in Banach spaces. Various notions of uniform (with respect to the functions in $\S$) exponential stability of system Σ are given by the following definition. [0-GES def] Consider the forward complete dynamical system $\Sigma=(X,\S,\phi)$. 1. We say that Σ is *uniformly globally exponentially stable at the origin* (UGES, for short) if there exist *M* > 0 and *λ* > 0 such that the transition map *ϕ* satisfies the inequality $$\|\phi(t, x, \s)\|\leq M e^{-\lambda t} \|x\|, \ \forall~t\geq 0,\; \forall~x\in X,\; \forall~\s\in\S.$$ 2. We say that Σ is *uniformly locally exponentially stable at the origin* (ULES, for short) if there exist *R* > 0, *M* > 0, and *λ* > 0 such that the transition map *ϕ* satisfies the inequality, for every *t* ≥ 0, *x* ∈ *B**X*(0, *R*), and $\s\in\S$, $$\label{les-def} \|\phi(t, x, \s)\|\leq M e^{-\lambda t} \|x\|.$$ If inequality  holds true for a given *R* > 0 then we say that Σ is *uniformly exponentially stable at the origin in *B**X*(0, *R*)* (UES in *B**X*(0, *R*), for short). 3. We say that Σ is *uniformly semi-globally exponentially stable at the origin* (USGES, for short) if, for every *r* > 0 there exist *M*(*r*) > 0 and *λ*(*r*) > 0 such that the transition map *ϕ* satisfies the inequality $$\label{sges-def} \|\phi(t, x, \s)\|\leq M(r) e^{-\lambda(r) t} \|x\|,$$ for every *t* ≥ 0, *x* ∈ *B**X*(0, *r*), and $\s\in\S$. [remark-USGES] Up to modifying *r* ↦ *M*(*r*), one can assume without loss of generality in the definition of USGES that *r* ↦ *λ*(*r*) can be taken constant and *r* ↦ *M*(*r*) nondecreasing. Indeed, let us fix *M* :  = *M*(1) and *λ* :  = *λ*(1). One has, by definition of (*M*, *λ*), $$\|\phi(t, x, \s)\|\leq Me^{-\lambda t} \|x\|,$$ for every *t* ≥ 0, *x* ∈ *B**X*(0, 1), and $\s\in\S$. For *R* > 1, by using  there exists *t**R* such that, for every *x* ∈ *B**X*(0, *R*) with ∥*x*∥ ≥ 1, one has for *t* ≥ *t**R*, $$M(R)e^{-\lambda(R)t\_{R}}R=1,~ \|\phi(t,x,\s)\|\leq\frac{\|x\|}{R}\le \min\{1,\|x\|\}.$$ This implies that, for every *t* ≥ *t**R*, *x* ∈ *B**X*(0, *R*), and $\s\in\S$, $$\|\phi(t, x, \s)\|\leq Me^{-\lambda (t-t\_{R})} \|\phi(t\_{R},x,\s)\|\leq Me^{-\lambda (t-t\_{R})}\|x\|.$$ By setting $$\widehat M(R)=\begin{cases} M &R\leq 1,\\ \max\{M(R),M(R)e^{|\lambda-\lambda(R)|t\_{R}},Me^{\lambda t\_R}\}&R>1, \end{cases}$$ one has that $\|\phi(t, x, \s)\|\leq \widehat M(r) e^{-\lambda t} \|x\|, $ for every *t* ≥ 0, *x* ∈ *B**X*(0, *r*), and $\s\in\S$. Finally, we may replace $r\mapsto \widehat M(r)$ with the nondecreasing function $r\mapsto \inf\_{\rho\geq r}\widehat M(\rho)$. The property of semi-global exponential stability introduced in Definition [0-GES def] turns out to be satisfied by some interesting class of infinite-dimensional systems, as described in the following two examples. [example kdv without switch-old] For *L* > 0, let Ω = (0, *L*) and consider the controlled Korteweg–de Vries (KdV) equation $$\label{kdv} \begin{cases} {\eta}\_{t}+{\eta}\_{x}+{\eta}\_{xxx}+ \eta{\eta}\_{x}+\rho(t,x,\eta)=0 & (x,t)\in \Omega\times\R\_+,\\ \eta(t,0)=\eta(t,L)={\eta}\_{x}(t,L)=0 & t\in \R\_+,\\ \eta(0,x)={\eta}\_0(x) & x\in \Omega, \end{cases}$$ where $\rho:\R\_+\times\Omega\times \R\to\R$ is a sufficiently regular nonlinear function. The case *ρ* ≡ 0 is a well known model describing waves on shallow water surfaces . The controllability and stabilizability properties of  have been extensively studied in the literature (see, e.g., ). In the case where the feedback control is of the form *ρ*(*t*, *x*, *η*) = *a*(*x*)*η*, for some non-negative function *a*( ⋅ ) having nonempty support in Ω, system  is globally exponentially stable in *X* = *L*2(0, *L*). In  the authors prove that, when a saturation is introduced in the feedback control *ρ*, the system is only semi-globally exponentially stable in *X*. [example boundary wave without switch] Consider the 1D wave equation with boundary damping $$\label{boundary-wave without switch} \begin{cases} {\psi}\_{tt}-\Delta \psi=0 &(x,t)\in(0,1)\times\R\_+,\\ \psi(0,t)=0&t\in\R\_+,\\ \psi\_x(1,t)=-\sigma(t,\psi\_t(1,t)) & t\in\R\_+,\\ \psi(0)=\psi\_0, \psi\_t(0)=\psi\_1 & x\in(0,1), \end{cases}$$ where $\sigma:\R\_+\times \R\to\R$ is continuous. This system is of special interest when the damping term *σ*(*t*, *ψ**t*(1, *t*)) represents a nonlinear feedback control. Once again, different types of stability can be established (for global and semi-global exponential stability, see e.g., ). In particular, if *σ* is a nonlinearity of saturation type, only semi-global exponential stability holds true in *X* = {(*ψ*0, *ψ*1) ∣ *ψ*0(0) = 0,  *ψ*0ʹ and *ψ*1 ∈ *L*∞(0, 1)}. The systems considered in Examples [example kdv without switch-old] and [example boundary wave without switch] do not depend on *σ* ∈ S as in Definition [FC]. In Section [sec-example] we introduce and study variants of such systems in a switching framework. Main results ============ Datko-type theorems ------------------- In this section we give Datko-type theorems  for an abstract forward complete dynamical system Σ. The uniform (local, semi-global, and global) exponential stability is characterized in terms of the *L**p*-norm of the trajectories of the system. This provides a generalization of the results obtained in  for nonlinear semigroups. The following theorem characterizes the local exponential stability of system Σ. [ULES] Consider a forward complete dynamical system $\Sigma=(X,\S,\phi)$. Let *t*1, *G*0 > 0, and *β* be a function of class K∞ such that $\limsup\_{r\downarrow 0} \frac{\beta(r)}{r}$ is finite and $$\label{exp-bounded2} \|\phi(t,x,\s)\| \leq G\_0\beta(\|x\|), \quad \forall~t\in[0,t\_1],\; \forall~x\in X,\; \forall~\s\in\S.$$ The following statements are equivalent i) System Σ is ULES; ii) for every *p* > 0 there exist a nondecreasing function *k* : R+ → R+ and *R* > 0 such that $$\label{exp-lp3} \int\_{0}^{+\infty}{\|\phi(t,x,\s)\|}^{p} dt \leq k(\|x\|)^p\|x\|^{p},$$ for every *x* ∈ *B**X*(0, *R*) and $\s\in\S$; iii) there exist *p* > 0, *k* : R+ → R+ nondecreasing, and *R* > 0 such that  holds true. Observe that hypothesis  in Theorem [ULES] is global over *X*. Indeed, we do not know if the stability at 0 may be deduced from inequality  if one restricts  to a ball *B**X*(0, *r*). The following theorem characterizes the semi-global exponential stability of system Σ. [USGES] Consider a forward complete dynamical system $\Sigma=(X,\S,\phi)$. Let *t*1, *G*0 > 0, and *β* be a function of class K∞ such that $\limsup\_{r\downarrow 0} \frac{\beta(r)}{r}$ is finite and $$\|\phi(t,x,\s)\| \leq G\_0\beta(\|x\|), \quad \forall~t\in[0,t\_1],\; \forall~x\in X,\; \forall~\s\in\S.$$ The following statements are equivalent i) System Σ is USGES; ii) for every *p* > 0 there exists a nondecreasing function *k* : R+ → R+ such that, for every *x* ∈ *X* and $\s\in\S$ $$\label{lp0} \int\_{0}^{+\infty}{\|\phi(t,x,\s)\|}^{p} dt \leq k(\|x\|)^p\|x\|^{p};$$ iii) there exist *p* > 0 and *k* : R+ → R+ nondecreasing such that  holds true. The particular case of uniformly globally exponentially stable systems is considered in the following theorem. [datko] Consider a forward complete dynamical system $\Sigma=(X,\S,\phi)$. Let *t*1 > 0 and *G*0 > 0 be such that $$\label{exp-bounded} \|\phi(t,x,\s)\| \leq G\_0\|x\|, \ \forall~t\in [0,t\_1],\; \forall~x\in X,\; \forall~\s\in\S.$$ The following statements are equivalent i) System Σ is UGES; ii) for every *p* > 0 there exists *k* > 0 such that $$\label{unasoladai} \int\_{0}^{+\infty}{\|\phi(t,x,\s)\|}^{p} dt \leq k^{p}\|x\|^{p},$$ for every *x* ∈ *X* and $\s\in\S$; iii) there exist *p*, *k* > 0 such that holds true. By the shift-invariance properties given by items a) and iv) of Definition [FC], it is easy to see that  implies $$\label{rem-paolo} \|\phi(t,x,\s)\| \leq Me^{\lambda t}\|x\|,\ \forall~t\geq 0,\; \forall~x\in X,\; \forall~\s\in\S,$$ where *M* = *G*0 and $\lambda = \max\{0,\log (\frac{G\_0}{t\_1})\}$. Notice that inequality  is a nontrivial requirement on system Σ. Even in the linear case, and even if  is satisfied for each constant $\s\equiv \s\_c$, uniformly with respect to $\s\_c$, it does not follow that a similar exponential bound holds for the corresponding system Σ (see ). Lyapunov characterization of exponential stability -------------------------------------------------- In this section we characterize the exponential stability of a forward complete dynamical system through the existence of a Lyapunov functional. First, let us recall the definition of Dini derivative of a functional *V* : *X* →�
arxiv_0000191
Breaking and trapping Cooper pairs by Rydberg-molecule spectroscopyin atomic Fermi superfluids ============================================================================================== We propose a spectroscopic probe of the breaking and localization of Cooper pairs in an atomic Fermi superfluid interacting with a Rydberg impurity. This is achieved by monitoring the formation of diatomic and triatomic ultralong-range molecular species in the superfluid across the Bardeen–Cooper–Schrieffer (BCS)-Bose Einstein condensation (BEC) crossover. The triatomic Rydberg molecule in the BEC regime heralds the trapping of a tightly-bound Cooper pair, reminiscent of pion capture in nuclear matter, while the breaking of a Cooper pair on the BCS side by a diatomic Rydberg molecule is evocative of binary-star tidal disruption by a black hole. Spectroscopy of the Fermi superfluid and Rydberg molecules allows for an estimation of the Cooper-pair size while the Rydberg molecule binding energies discern many-body pairing effects. Rydberg atom-based systems have emerged as leading platforms for demonstrating many-body correlations , quantum simulations , quantum error corrections , and quantum optics . When the excited electron scatters from a nearby ground-state atom, under certain conditions, ultralong-range molecular bonds can form . Such long-range Rydberg molecules have been realized  , see also the reviews . Interesting aspects of many-body physics, such as the formation of Bose and Fermi polarons, quantum statistics of gases exhibiting bunching and anti-bunching, with Rydberg molecules have also been reported . These studies exploit the large energy separations between the vibrational energies and the underlying primitive excitations in a quantum gas. In a different context, two-component fermions with attractive interactions form Cooper pairs and exhibit the Bardeen-Cooper-Schrieffer (BCS) to Bose-Einstein condensation (BEC) crossover pioneered by the experiments with ultracold Fermi gases  (see also ). Here, we show that by creating ultralong-range molecules with a Rydberg impurity in a background sea of Cooper pairs, it is possible to a) break the pairs on the BCS side and b) locally trap a Cooper pair on the BEC side. The former bears analogies with the breaking of a binary-star pair by a tidal disruption event into a black hole , while the latter is reminiscent of the capture of pions (quark-antiquark pairs) in hydrogen , deuterium , and helium . Rydberg molecules in Fermi superfluids offer a tabletop laboratory to simulate analogues in stellar and nuclear settings . Through radio-frequency (rf) spectroscopy of the superfluid pairing gap  or Rydberg spectroscopy of the molecular lines , one may probe the reaction of the superfluid to tackle topical problems in condensed matter physics, such as the Cooper-pair size and pairing energies . A typical Rydberg potential is illustrated in Fig. [Fig:Demo] with heteronuclear Rydberg molecules formed in Fermi superfluids. These molecules are (1) diatomic between the impurity and a fermion from a broken Cooper pair and (2) triatomic, or ``a Cooper pair in a molecule" since a Cooper pair is trapped by the Rydberg potential. To our knowledge, the latter type of Rydberg molecules has not been discussed in the literature, and they are different from the trimer Rydberg molecules emanating from two weakly interacting bosons individually trapped by a bosonic Rydberg atom that have been realized  and theoretically studied . Therefore, the Rydberg molecules in Fermi superfluids exploit the interplay and competition between two molecule-formation mechanisms, one between the fermions to bind a Cooper pair and the other among the Rydberg and its neighboring atoms to create a Rydberg molecule. [Fig:Demo] Leveraging the Bogoliubov-de Gennes (BdG) formalism  suitable for studying inhomogeneous effects in Fermi superfluids, we extract the low-lying bound states of the composite Rydberg-Fermi superfluid system. By increasing the Cooper-pair strength, distinct local reactions of the pairing gap to the Rydberg potential will occur: Breaking (trapping) of a weak (strong) Cooper pair leads to a local suppression (enhancement) of the gap function. We identify the formation of diatomic and triatomic Rydberg molecules along with their binding energies, which are raised by the many-body pairing effect when compared to those in a noninteracting gas. In contrast to previous studies  with impurities carrying onsite potentials in Fermi superfluids, the Rydberg potential has its furthest well about hundreds of nanometers away from the core with controllable width and depth, thereby giving rise to rich structures of Rydberg molecules. #### *Rydberg atoms in a Fermi superfluid*.– We consider few Rydberg atoms immersed in a two-component, spin- and mass-balanced Fermi superfluid with contact pairing interactions. Experimentally, this setup can be emulated, for instance, by bosonic 87Rb Rydberg atoms and two hyperfine states of 86Rb for the Fermi superfluid. However, our results equally hold for other Rydberg atom-Fermi superfluid systems. For simplicity, the Rydberg atoms are assumed to be immobile in real space and noninteracting with each other. A quasi one-dimensional (1D) geometry  creating a cigar-shaped cloud similarly to Refs.  is considered. It supports the off-diagonal long-range order of the superfluid while freezing out the transverse degrees-of-freedom. The many-body Hamiltonian of the composite system within the BCS-Leggett theory reads $$\begin{aligned} \label{Eq:Hint} \mathcal{H}=\mathcal{H}\_{BCS}+\sum\_{\sigma}\int dx V\_{Ryd}(x)\psi\_\sigma^{\dagger}(x)\psi\_\sigma(x) d^\dagger d, \end{aligned}$$ where $\mathcal{H}\_{BCS}=\int dx\Big[\sum\_{\sigma} \psi\_{\sigma}^{\dagger}(x)h\_{\sigma}(x)\psi\_{\sigma}(x)+ (\Delta(x) \psi\_{\uparrow}^{\dagger}(x)\psi\_{\downarrow}^{\dagger}(x)+h.c.)\Big]$. The fermion operator acting on the *σ* =  ↑ ,  ↓  component of mass *m* is *ψ**σ*, and $h\_\sigma(x)=-\frac{\hbar^2}{2m}\nabla^2+V\_{ext}(x)-\mu\_\sigma$ denotes the single-particle Hamiltonian with *V**e**x**t*(*x*) summarizing the total external confinement and influence. The order parameter of the *s*-wave Fermi superfluid is the gap function $$\begin{aligned} \label{Eq:DeltaOriginal} \Delta(x)&=-U\langle\psi\_{\downarrow}(x)\psi\_{\uparrow}(x)\rangle.\end{aligned}$$ The effective coupling *U* < 0 is related to the 1D scattering length *a*1*D*  via $U=-\frac{2\hbar^2}{ma\_{1D}}$, tunable by Feshbach resonance , and ⟨…⟩ designates the ground-state expectation value at *T* = 0. The BCS-BEC crossover occurs when the chemical potential (here *μ*↑ = *μ*↓ ≡ *μ*) crosses zero  where the minimum of the quasiparticle-spectrum shifts to zero momentum. Importantly, the second contribution in Eq.  models the Rydberg atom - fermion interaction , with *d* (*d*†) being the annihilation (creation) operator of a Rydberg atom. The ultralong-range Born-Oppenheimer potential between a Rydberg atom and a ground-state fermionic atom is given by $V\_{Ryd}(x)=\frac{2\pi\hbar^2 a\_e}{m\_e}|\psi\_e(x)|^2$. Here, *a**e* denotes the scattering length between the Rydberg electron with mass *m**e* and a fermionic atom, and *x* measures the distance from the Rydberg impurity to the fermionic atom. The atomic Rydberg wave function, *ψ**e*(*x*), is calculated with effective valence potentials . In the vicinity of a Rydberg atom, we replace *d*†*d* by ⟨*d*†*d*⟩ = 1 and hence *V**R**y**d*(*x*) acts as an external potential for the Fermi superfluid. In what follows, it will be implicitly combined with *V**e**x**t*(*x*) in *h**σ*. Moreover, the Fermi energy *E**f* = ℏ2*k**f*2/(2*m*) and wavevector *k**f* = *π**n*/2, of a noninteracting 1D two-component Fermi gas with the same total particle number *N* = ∫*n*(*x*)*d**x* as the superfluid, serve as the energy and inverse-length units. For example, *g* =  − *U**k**f*/*E**f* is the dimensionless coupling constant of pairing between the fermions. #### *BdG formalism.* – To reveal the impact of the Rydberg atoms on the Fermi superfluid, we inspect the composite system as the superfluid undergoes the BCS-BEC crossover. Specifically, H can be diagonalized by the BdG transformation : *ψ*↑(*x*) = ∑*ñ*[*u*↑*ñ*1(*x*)*γ**ñ*1 − *v*↑*ñ*2 \*(*x*)*γ**ñ*2†] and *ψ*↓(*x*) = ∑*ñ*[*u*↓*ñ*2(*x*)*γ**ñ*2 + *v*↓*ñ*1 \*(*x*)*γ**ñ*1†]. The quasiparticle wave functions *u**σ**ñ**j* and *v**σ**ñ**j* with *j* = 1, 2 are to be determined, and they satisfy ∫*d**x*(∣*u**σ**ñ**j*∣2 + ∣*v**σ**ñ**j*∣2) = 1. The BdG equation for the composite system considered here can be block-diagonalized into  $$\label{eq:subset1} \begin{pmatrix} h\_{\uparrow}(x)&\Delta(x)\\ \Delta^\*(x)&-h^\*\_{\downarrow}(x) \end{pmatrix} \begin{pmatrix} u^{\tilde{n}j}\_{\uparrow}(x)\\ v^{\tilde{n}j}\_{\downarrow}(x) \end{pmatrix} =E\_{\tilde{n}j}\begin{pmatrix} u^{\tilde{n}j}\_{\uparrow}(x)\\ v^{\tilde{n}j}\_{\downarrow}(x) \end{pmatrix}.$$ Moreover, the BdG equation has a discrete symmetry connecting the positive and negative energy states, so we drop the indices 1, 2 and  ↑ ,  ↓  from the quasi-particle wave functions. For the ground state, the gap function Eq. ([Eq:DeltaOriginal]) then becomes Δ(*x*) =  − *U*∑*ñ*ʹ*u*↑*ñ*(*x*)*v*↓*ñ* \*(*x*) and the total fermion density $n(x)=\sum\_\sigma n\_{\sigma}(x)= 2{\sum\_{\tilde n }}'|v\_{\tilde n}(x)|^2$ with *n**σ*(*x*) = ⟨*ψ**σ*†(*x*)*ψ**σ*(*x*)⟩. Here ∑*ñ**w*ʹ denotes summation over the positive-energy states. We discretize the space and implement an iterative method  to solve the BdG equation (see SM  for details). [Fig:Rbn62Profile] #### *Discussion*.– The interaction between a Rydberg impurity and nearby fermions is determined by the Rydberg potential shown in Fig. [Fig:Demo]. To account for the impenetrable core of the Rydberg atom, the system is further embedded in a 1D box of size *L* with the Rydberg atom at *x* = 0 and appropriately adjusting the relevant energy and length scales, see also SM . The gap function and density of a representative BCS (BEC) Fermi superfluid with *μ* > 0 (*μ* < 0) subjected to the Rydberg potential are depicted in the left (right) panels of Fig. [Fig:Rbn62Profile]. While the density profiles on both BCS and BEC sides show peaks evidencing the bound states due to the attractive Rydberg potential, the most prominent contrast is the enhancement (suppression) of the gap function around the minima of the Rydberg potential in the BEC (BCS) regime. The decoupling of the gap function and density of a Fermi superfluid on the BCS side has been discussed in vortex structures , and here the Rydberg potential provides another concrete example. We mention the oscillatory boundary effects on the BCS side due to its fermionic excitations, which is explained in the SM . The bound-state wave functions *v**n*(*x*) of the Rydberg potential in the BCS and BEC regimes are presented in Fig. [Fig:Rbn62BS], see the SM  for all bound-state wave functions *u**n* and *v**n*. It is evident that each well may host a series of bound states when the depth of the Rydberg potential is enough to compete with the pairing in the Fermi superfluid. Thus, there is a competition between the intercomponent fermion attraction to maintain the Cooper pairs and the attraction among the Rydberg atom and the fermions to form molecules. The bound-state energies in the BCS regime are clearly separated, and each bound state consists of a single fermion. This implies that the resulting diatomic Rydberg molecules originate from individual fermions due to broken Cooper pairs. The bound states in the BEC regime shown in Fig. [Fig:Rbn62BS](b) are more complex. Indeed, focusing on the furthest well, the first two bound states are clearly separated in energy, indicating that they correspond to diatomic Rydberg molecules. However, the subsequent two higher vibrational bound states in the same well are energetically adjacent with almost identical wave functions. Together with the enhanced gap function shown in Fig. [Fig:Rbn62Profile], the twin bound states suggest the presence of a locally trapped Cooper pair. Therefore, the furthest well hosts a triatomic Rydberg molecule as an excited vibrational state in the BEC regime due to the combination of the strong Cooper pairing and the Rydberg potential being capable of trapping the Cooper pair. There is also a pair of bound states with almost identical binding energies and wave functions localized in the secondary well illustrated in Fig. [Fig:Rbn62BS](b). These are again evident of the creation of another triatomic Rydberg molecule. Therefore, the double-well approximate Rydberg potential depicted in Fig. [Fig:Demo] is able to host both diatomic and triatomic Rydberg molecules. [Fig:Rbn62BS] The Cooper-pair size may be estimated by the BCS coherence length  $$\begin{aligned} \label{Eq:xi} \xi\approx \frac{\hbar v\_f}{\Delta},\end{aligned}$$ where *v**f* is the Fermi velocity. For the Rydberg potential and box confinement used here, the width of the furthest (secondary) well is about 0.06*L* (0.03*L*). The Cooper-pair size of the selected BCS (BEC) case of Fig. [Fig:Rbn62Profile] is *ξ*/*L* ≈ 0.07 (*ξ*/*L* ≈ 0.003) since Δ/*E**f* ≈ 0.14 and *k**f**L* ≈ 111 (Δ/*E**f* ≈ 2.5 and *k**f**L* ≈ 131). Hence, the Cooper pairs on the BCS side cannot be accommodated within the Rydberg-potential wells. In this context, only a fermion from a broken Cooper pair is captured, forming a diatomic molecule. In contrast, the Cooper pairs of the BEC case may fit into the Rydberg potential, which is deep enough to either break a Cooper pair or trap it to form a diatomic or a triatomic Rydberg molecule. For a typical cold-atom cloud with density *n* ≈ 1013 cm− 3 , *E**f* ≈ 1kHz for Rb atoms, the depth of the Rydberg potential in Fig. [Fig:Demo] reaches the order of MHz. The pairing gap is roughly of the order of *E**f* as shown in Fig. [Fig:Rbn62Profile], which can be orders of magnitude smaller than the depth of the Rydberg potential even on the BEC side. The Rydberg molecule lifetime is typically about 10-100 *μ*s , while the timescale in a Fermi gas is governed by ℏ/*E**f* ( ∼  1ms here). Therefore, the above treatment of quasi-equilibrium Fermi superfluids in the presence of Rydberg molecules is reasonable. Meanwhile, a shallow Rydberg potential discussed in the SM  with similar orders of magnitude of the pairing gap and potential depth is shown to also form diatomic and triatomic Rydberg molecules. [Fig:EbRbn62] The respective binding energies (normalized by *E**f*) obtained from the BdG equation with the Rydberg potential of Fig. [Fig:Demo] are illustrated in Fig. [Fig:EbRbn62]. The first two lowest-energy bound states in both BCS and BEC regimes have comparable binding energies since they correspond to diatomic Rydberg molecules consisting of the Rydberg atom and a broken-pair fermion. However, the binding energies of the higher vibrational bound states in the BCS and BEC regimes deviate more significantly because the triatomic Rydberg molecules in the BEC possess relatively larger binding energies within the trapped Cooper pairs. We remark that the relation between the diatomic and triatomic Rydberg molecules in Fermi superfluids is more complex than that in a BEC  due to spin statistics and many-body effects. Indeed, adding an identical boson to a Rydberg dimer leads to a triatomic molecule with twice the diatomic binding energy. However, this does not hold for fermions due to the Pauli exclusion. Specifically, the formation of diatomic and triatomic Rydberg molecules in a Fermi superfluid competes with the binding of Cooper pairs. As such, the many-body contribution of breaking or trapping a Cooper pair plays a decisive role in creating Rydberg molecules, as it becomes apparent by the BdG calculation shown in Fig. [Fig:EbRbn62]. To discern many-body from single-particle effects in the Rydberg molecule formation, we also evaluate the binding energies of diatomic Rydberg molecules with the Schrödinger equation (*h**σ* + *V**R**y**d*)*ψ**σ* = *E**n**S**ψ**σ*, with *h**σ* from H*B**C**S*; see also SM  for the underlying bound states. The normalized single-particle binding energies are presented in Fig. [Fig:EbRbn62]. The many-body binding energies obtained from the BdG equation are in general larger than the corresponding single-particle energies due to pairing effect. However, the binding energies in the BCS regime follow a similar trend with the single-particle energies, and their energy difference remains roughly constant as higher vibrational states are reached. In contrast, the BdG binding energies on the BEC side exhibit larger deviations from their single-particle counterparts. For the diatomic Rydberg molecules, this is because the stronger pairing makes the Fermi superfluid on the BEC side with *μ* < 0 quite different from a noninteracting gas. Based on the gap function shown in Fig. [Fig:Rbn62Profile], the energy difference between the many-body and single-particle binding energies of the diatomic Rydberg molecules are of the same order as the pairing energy. Moreover, the emergence of the triatomic Rydberg molecules results in a substantial energy difference from the noninteracting counterpart due to the trapped Cooper pair, with its own binding energy. #### *Implications*.– Spatially resolved rf spectroscopy of atomic Fermi superfluids , following original attempts in Refs. , map out the local pairing gap. As described in Fig. [Fig:Rbn62Profile], this will determine the types of Rydberg molecules since the pairing is suppressed (enhanced) in the diatomic (triatomic) Rydberg molecule. Meanwhile, the Rydberg molecules in a Fermi superfluid may serve as a probe for the Cooper-pair size because triatomic Rydberg-molecule formation is only possible when the Cooper-pair size is smaller than the width of the Rydberg potential. Differentiating the diatomic and triatomic Rydberg molecules is also achievable by Rydberg-molecule line spectroscopy  since the binding energy of a triatomic Rydberg molecule is substantially larger than that of a diatomic one, as shown in Fig. [Fig:EbRbn62]. The Rydberg impurity-Fermi superfluid system features several tunable parameters, including the depth, width, and location of the Rydberg potential, determined by the Rydberg excitation , and the pairing strength and particle density of the Fermi superfluid (see, e.g., Refs. ). Furthermore, the quasi-1D setup has several advantages. First, the Rydberg lifetime in a lattice is found to be longer for reduced dimensions . If the enhancement also holds in the continuum, it facilitates Rydberg-molecule formation in 1D as there is on average one fermion within the Rydberg orbit, in the cases studied here. Second, the rotational excitations of Rydberg molecules will be less relevant in 1D, significantly simplifying the bound-state spectrum. Moreover, the 1D geometry eases i) the comparison between the Cooper-pair size and the Rydberg potential width, ii) the identification of diatomic or triatomic Rydberg molecules, and iii) the characterization of the Rydberg molecules, e.g., from the density and pairing gap profiles. So far, the Rydberg atoms are assumed to be of different isotopes or species from the Fermi superfluid. Alternatively, if some of the fermions within the superfluid are excited into Rydberg atoms, forming homonuclear Rydberg molecules, this results in a reduced effective pairing gap Δʹ = Δ − *g**l**n**R*, see also the discussion in the SM . Here, *g**l* is the coupling constant for exciting the fermions to Rydberg atoms. Once the excited Rydberg atoms are present, however, the corresponding bound states can be extracted through the BdG formalism. Therefore, dimer or trimer Rydberg molecules are expected via Rydberg excitations stemming from the Fermi superfluid although the reduced effective pairing gap will favor dimer Rydberg molecules. Finally, we note that Rydberg molecules are different from the Cooper-pair splitting in superconductor heterostructures . In this case, the proximity effect is utilized by dynamically sending a Cooper pair, as an excited state with spin entanglement or momentum correlation, to two separate non-superconducting regions in real space. In contrast, the fermion bound in a diatomic Rydberg molecule no longer retains the pairing correlation, while the tightly-bound Cooper pair in a triatomic Rydberg molecule localizes in real space. Along the same lines, there are subtle differences between the Rydberg molecules in Fermi superfluids and the binary tidal disruption event and pion matter. For instance, binding in binary stars (pion matter) stems from gravity (Coulomb interactions), whereas in Rydberg molecules, it is traced back to the electron-atom scattering. #### *Summary and outlook*.– The bound states of Fermi superfluids in a Rydberg-impurity potential testify to the formation of Rydberg molecules. The tunable fermion pairing gives rise to diatomic Rydberg molecules from broken Cooper pairs and triatomic Rydberg molecules from tightly-bound Cooper pairs, exhibiting different features of the gap function due to their distinctive nature. The detection of the triatomic Rydberg molecules may reveal information about the Cooper-pair size while the bound-state energies reflect pairing effects. With the rapid developments of Rydberg physics and Fermi gases, realizations of Rydberg molecules in Fermi superfluids will provide an elegant example of interfacing few- and many-body physics in a unified platform. Furthermore, going beyond the Leggett-BCS theory  of the superfluid ground state, pre-formed Cooper pairs at finite temperatures correct the superfluid transition temperature and lead to the pseudo-gap effect away from the BCS regime . Incorporating pairing-fluctuation theories developed for homogeneous systems  into the BdG formalism remains a challenge, and finite-temperature physics of Rydberg molecules awaits future research. #### *Acknowledgements*.– C. C. C. was partly supported by the NSF (No. PHY-2310656). Support for ITAMP by the NSF is also acknowledged. 67 ifxundefined [1] ifx#1 ifnum [1] #1firstoftwo secondoftwo ifx [1] #1firstoftwo secondoftwo ““#1”” @noop [0]secondoftwo sanitize@url [0]` 12`$12`&12`#12`12`\_12`%12 @startlink[1] @endlink[0] @bib@innerbibempty   and   ,  Many-body physics with individually controlled Rydberg atoms,  [Nature Physics  **16**,  132 ( 2020)](https://doi.org/10.1038/s41567-019-0733-z)  ,  , and   ,  Quantum information with Rydberg atoms,  [Rev. Mod. Phys.  **82**,  2313 ( 2010)](https://doi.org/10.1103/RevModPhys.82.2313)  ,  , and   ,  Rydberg atom quantum technologies,  [Journal of Physics B: Atomic, Molecular and Optical Physics  **53**,  012002 ( 2019)](https://doi.org/10.1088/1361-6455/ab52ef)  ,  ,  ,  ,  , ,  , and   , A concise review of Rydberg atom based quantum computation and quantum simulation,  [Chinese Physics B  **30**,  020305 ( 2021)](https://doi.org/10.1088/1674-1056/abd76f)  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  , and   ,  Logical quantum processor based on reconfigurable atom arrays,  [Nature  **626**,  58 ( 2024)](https://doi.org/10.1038/s41586-023-06927-3)  ,  ,  ,  ,  , and   , @noop Rydberg superatoms: An artificial quantum system for quantum information processing and quantum optics ( 2024),  arXiv: 2404.05330  ,  , and   ,  Creation of Polar and Nonpolar Ultra-Long-Range Rydberg Molecules,  [Phys. Rev. Lett.  **85**,  2458 ( 2000)](https://doi.org/10.1103/PhysRevLett.85.2458)  ,  , and   ,  Energies and dipole moments of long-range molecular Rydberg states,  [J. Phys. B: At. Mol. Opt. Phys.  **35**,  L193 ( 2002)](https://doi.org/10.1088/0953-4075/35/10/101)  ,  , and   ,  Shape-resonance-induced long-range molecular Rydberg states,  [J. Phys. B: At. Mol. Opt. Phys.  **35**,  L199 ( 2002)](https://doi.org/10.1088/0953-4075/35/10/102)  ,  ,  ,  ,  , and   ,  Observation of ultralong-range rydberg molecules,  [Nature  **458**,  1005 ( 2009)](https://doi.org/10.1038/nature07945)  ,  ,  ,  ,  ,  , and   ,  Observation of pendular butterfly Rydberg molecules, [Nature Communications  **7**, 12820 ( 2016)](https://doi.org/10.1038/ncomms12820)  ,  ,  , and  ,  Exploring the vibrational series of pure trilobite Rydberg molecules,  [Nature Communications  **14**,  8108 ( 2023)](https://doi.org/10.1038/s41467-023-43818-7)  ,  ,  ,  , and  , Production of trilobite Rydberg molecule dimers with kilo-Debye permanent electric dipole moments,  [Science  **348**,  99 ( 2015)](https://doi.org/10.1126/science.1260722),  https://www.science.org/doi/pdf/10.1126/science.1260722  ,  ,  ,  ,  , , and  ,  Heteronuclear rydberg molecules,  [Phys. Rev. A  **101**,  060701 ( 2020)](https://doi.org/10.1103/PhysRevA.101.060701)  ,  , and   ,  Ultracold Rydberg molecules,  [Nature Communications  **9**,  1965 ( 2018)](https://doi.org/10.1038/s41467-018-04135-6)   and   ,  Ultralong-range Rydberg molecules,  [Molecular Physics  **118**,  e1679401 ( 2020)](https://doi.org/10.1080/00268976.2019.1679401)  ,  ,  ,  ,  ,  ,  , , ,  , and  ,  Creation of Rydberg Polarons in a Bose Gas,  [Phys. Rev. Lett.  **120**, 083401 ( 2018)](https://doi.org/10.1103/PhysRevLett.120.083401)  ,  , and   ,  Mesoscopic Rydberg Impurity in an Atomic Quantum Gas,  [Phys. Rev. Lett.  **116**, 105302 ( 2016)](https://doi.org/10.1103/PhysRevLett.116.105302)  ,  ,  ,  ,  ,  ,  , ,  ,  , and   ,  Theory of excitation of Rydberg polarons in an atomic quantum gas,  [Phys. Rev. A  **97**,  022707 ( 2018)](https://doi.org/10.1103/PhysRevA.97.022707)  ,  ,  ,  , and   , Rydberg impurity in a Fermi gas: Quantum statistics and rotational blockade,  [Phys. Rev. Res.  **2**, 023021 ( 2020)](https://doi.org/10.1103/PhysRevResearch.2.023021)  and  , @noop Phenomenology of a Rydberg impurity in an ideal Bose Einstein condensate ( 2024),  arXiv: 2404.03980  ,  , and   , Emergence of a molecular Bose–Einstein condensate from a Fermi gas, @noop Nature  **426**,  537 ( 2003)  ,  ,  ,  ,  , and   ,  Condensation of Pairs of Fermionic Atoms near a Feshbach Resonance,  [Phys. Rev. Lett.  **92**, 120403 ( 2004)](https://doi.org/10.1103/PhysRevLett.92.120403)  ,  ,  ,  ,  ,  , and   ,  Collective Excitations of a Degenerate Gas at the BEC-BCS Crossover,  [Phys. Rev. Lett.  **92**,  203201 ( 2004)](https://doi.org/10.1103/PhysRevLett.92.203201)   and   ,  [*Bose–Einstein Condensation in Dilute Gases*](https://doi.org/10.1017/CBO9780511802850),  2nd ed. ( Cambridge University Press, 2008)  ,  [*Fundamentals and New Frontiers of Bose-Einstein Condensation*](https://doi.org/10.1142/7216) ( World Scientific,  Singapore, 2010)  https://www.worldscientific.com/doi/pdf/10.1142/7216  , ed.,  [*The BCS-BEC Crossover and the Unitary Fermi Gas*](https://doi.org/10.1007/978-3-642-21978-8) ( Springer Berlin,  Heidelberg,  2012)  ,  Hyper-velocity and tidal stars from binaries disrupted by a massive galactic black hole,  [Nature  **331**,  687 ( 1988)](https://doi.org/10.1038/331687a0)  ,  ,  , and  ,  Binary disruption by massive black holes: Hypervelocity stars, s stars, and tidal disruption events,  [The Astrophysical Journal Letters  **749**,  L42 ( 2012)](https://doi.org/10.1088/2041-8205/749/2/L42)  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  , ,  ,  ,  ,  ,  ,  ,  ,  ,  , and   ,  Pionic hydrogen, in [*Precision Physics of Simple Atoms and Molecules*](https://doi.org/10.1007/978-3-540-75479-4_10), edited by   ( Springer Berlin Heidelberg,  Berlin, Heidelberg,  2008) pp.  165–186  ,  ,  ,  ,  ,  ,  ,  ,  , , ,  ,  ,  ,  ,  ,  ,  ,  ,  , and   ,  Redetermination of the strong-interaction width in pionic hydrogen,  [The European Phys. J. A  **57**,  70 ( 2021)](https://doi.org/10.1140/epja/s10050-021-00387-x)  ,  ,  ,  ,  ,  ,  ,  ,  ,  , ,  ,  ,  ,  ,  ,  ,  , and   ,  Pionic deuterium,  [The European Phys. J. A  **47**, 88 ( 2011)](https://doi.org/10.1140/epja/i2011-11088-1)  ,  ,  ,  , and   ,  Recent results of laser spectroscopy experiments of pionic helium atoms at PSI,  [SciPost Phys. Proc. ,  026 ( 2021)](https://doi.org/10.21468/SciPostPhysProc.5.026)  ,  Capture of negative exotic particles by atoms, ions and molecules,  [Rep. Progr. Phys.  **67**,  1769 ( 2004)](https://doi.org/10.1088/0034-4885/67/10/R02)  ,  ,  , and  ,  Tomographic rf spectroscopy of a trapped fermi gas at unitarity,  [Phys. Rev. Lett.  **99**,  090403 ( 2007)](https://doi.org/10.1103/PhysRevLett.99.090403)  ,  ,  ,  ,  ,  ,  ,  , , and   ,  High-temperature pairing in a strongly interacting two-dimensional Fermi gas,  [Science  **359**,  452 ( 2018)](https://doi.org/10.1126/science.aan5950),  https://www.science.org/doi/pdf/10.1126/science.aan5950  ,  ,  , and  ,  Determination of the fermion pair size in a resonantly interacting superfluid,  [Nature  **454**,  739 ( 2008)](https://doi.org/10.1038/nature07176)  ,  ,  ,  ,  ,  ,  ,  ,  ,  , and   ,  Rydberg Trimers and Excited Dimers Bound by Internal Quantum Reflection,  [Phys. Rev. Lett.  **105**, 163201 ( 2010)](https://doi.org/10.1103/PhysRevLett.105.163201) ,  , and   ,  Ultra-Long-Range Rydberg Trimers with a Repulsive Two-Body Interaction,  [Phys. Rev. Lett.  **102**, 173001 ( 2009)](https://doi.org/10.1103/PhysRevLett.102.173001)  ,  , and   ,  Ultralong-range triatomic Rydberg molecules in an electric field,  [J. Phys. B: At. Mol. Opt. Phys.  **49**,  124002 ( 2016)](https://doi.org/10.1088/0953-4075/49/12/124002)  , @noop *Superconductivity of Metals and Alloys.*,  2nd ed., Advanced Books Classics ( Chapman and Hall/CRC,  Boulder,  2018)  , @noop *Bogoliubov-de Gennes Method and Its Applications*,  1st ed., Lecture Notes in Physics, 924 ( Springer International Publishing,  Cham,  2016)  ,  ,  ,  , and   ,  Universal Impurity-Induced Bound State in Topological Superfluids,  [Phys. Rev. Lett.  **110**,  020401 ( 2013)](https://doi.org/10.1103/PhysRevLett.110.020401)  ,  ,  ,  ,  ,  ,  , and   ,  Few-body bose gases in low dimensions—a laboratory for quantum dynamics,  [Phys. Rep.  **1042**,  1 ( 2023)](https://doi.org/https://doi.org/10.1016/j.physrep.2023.10.004),  few-body Bose gases in low dimensions—A laboratory for quantum dynamics  ,  ,  , and  ,  1d to 3d Crossover of a Spin-Imbalanced Fermi Gas,  [Phys. Rev. Lett.  **117**,  235301 ( 2016)](https://doi.org/10.1103/PhysRevLett.117.235301)  , @noop *Quantum Liquids : Bose condensation and Cooper pairing in condensed-matter systems*, Oxford Graduate Texts ( Oxford University Press,  Oxford, UK,  2006)  ,  Atomic Scattering in the Presence of an External Confinement and a Gas of Impenetrable Bosons,  [Phys. Rev. Lett.  **81**,  938 ( 1998)](https://doi.org/10.1103/PhysRevLett.81.938)  ,  , and   ,  Dispersion coefficients for alkali-metal dimers, @noop Phys. Rev. A  **49**,  982 ( 1994)  ,  On the theory of superfluidity, @noop J. Phys.  **11**,  23 ( 1947)   and   ,  Proximity effect and spatial Kibble-Zurek mechanism in atomic Fermi gases with inhomogeneous pairing interactions,  [Phys. Rev. A  **107**,  063314 ( 2023)](https://doi.org/10.1103/PhysRevA.107.063314) @noop See Supplemental Material for details on the BdG formalism, the impact of higher Rydberg excitations in the molecule formation, the effect of the Rydberg potential location, further information on the bound states, the formation of homonuclear Rydberg molecules and the creation of Rydberg molecules in a noninteracting gas.  ,  ,  , and   ,  Finite temperature effects in trapped Fermi gases with population imbalance,  [Phys. Rev. A  **74**,  021602 ( 2006)](https://doi.org/10.1103/PhysRevA.74.021602)   and   , Vortex structure and spectrum of an atomic Fermi superfluid in a spherical bubble trap,  [Phys. Rev. A  **108**,  053303 ( 2023)](https://doi.org/10.1103/PhysRevA.108.053303)  ,  , and   ,  Pairing Gap and In-Gap Excitations in Trapped Fermionic Superfluids,  [Science  **305**,  1131 ( 2004)](https://doi.org/10.1126/science.1100782),  https://www.science.org/doi/pdf/10.1126/science.1100782  ,  ,  , and   , Determination of the Superfluid Gap in Atomic Fermi Gases by Quasiparticle Spectroscopy,  [Phys. Rev. Lett.  **101**, 140403 ( 2008)](https://doi.org/10.1103/PhysRevLett.101.140403)  ,  ,  ,  ,  ,  , and   ,  Coherent Many-Body Spin Dynamics in a Long-Range Interacting Ising Chain,  [Phys. Rev. X  **7**,  041063 ( 2017)](https://doi.org/10.1103/PhysRevX.7.041063)  ,  , and   , Electronic entanglement in the vicinity of a superconductor,  [The European Phys. J. B - Condensed Matter and Complex Systems **24**,  287 ( 2001)](https://doi.org/10.1007/s10051-001-8675-4)  ,  , and   ,  Andreev tunneling, coulomb blockade, and resonant transport of nonlocal spin-entangled electrons, [Phys. Rev. B  **63**, 165314 ( 2001)](https://doi.org/10.1103/PhysRevB.63.165314)  ,  ,  , and  , Cooper pair splitter realized in a two-quantum-dot Y-junction,  [Nature  **461**,  960 ( 2009)](https://doi.org/10.1038/nature08432)  ,  ,  ,  , and   ,  Real-time observation of Cooper pair splitting showing strong non-local correlations,  [Nature Communications  **12**,  6358 ( 2021)](https://doi.org/10.1038/s41467-021-26627-8)  ,  , and   , Adiabatic Cooper pair splitter, [Phys. Rev. B  **109**, L081402 ( 2024)](https://doi.org/10.1103/PhysRevB.109.L081402)  ,  , and   ,  Near-Unity Cooper Pair Splitting Efficiency,  [Phys. Rev. Lett.  **109**, 157002 ( 2012)](https://doi.org/10.1103/PhysRevLett.109.157002)  ,  ,  ,  ,  ,  , and   ,  Observation of pseudogap behaviour in a strongly interacting Fermi gas,  [Nature Phys.  **6**,  569 ( 2010)](https://doi.org/10.1038/nphys1709)  ,  ,  ,  ,  ,  , ,  ,  ,  ,  , and   ,  Observation and quantification of the pseudogap in unitary fermi gases,  [Nature  **626**, 288 ( 2024)](https://doi.org/10.1038/s41586-023-06964-y)  ,  ,  , and  ,  Comparison of different pairing fluctuation approaches to BCS–BEC crossover,  [Annals of Physics  **325**,  233 ( 2010)](https://doi.org/https://doi.org/10.1016/j.aop.2009.09.011)   and   ,  Crossover from Bardeen-Cooper-Schrieffer to Bose-Einstein Condensation and the Unitary Fermi Gas,  [Annual Review of Condensed Matter Physics  **5**,  209 ( 2014)](https://doi.org/https://doi.org/10.1146/annurev-conmatphys-031113-133829)  ,  Review of pseudogaps in strongly interacting Fermi gases,  [Reports on Progress in Physics  **80**,  104401 ( 2017)](https://doi.org/10.1088/1361-6633/aa7e53) Bogoliubov-de Gennes calculation ================================ After diagonalization by the BdG transformation (see the main text), the many-body Hamiltonian takes the form H = ∑*ñ**w*ʹ*E**ñ**w**γ**ñ**w*†*γ**ñ**w* + *E**g*. In this expression, *w* = 1, 2 represents the quasi-particle components, *E**g* is the ground-state energy, and ∑*ñ**w*ʹ denotes summation over the positive-energy states. The ground-state energy is $E\_g=-\frac{|\Delta|^2}{U}+\sum\_{\tilde n,w}(\epsilon\_{\tilde nw}-E\_{\tilde{n}w})$ with *ε**ñ**w* being the non-interacting counterpart of the excitation energy *E**ñ**w*. The BdG equation has the symmetry $\begin{pmatrix} u^{\tilde{n}2}\_{\downarrow}(x)\\ v^{\tilde{n}2}\_{\uparrow}(x) \end{pmatrix} =\begin{pmatrix} v^{\tilde{n}1 \*}\_{\downarrow}(x)\\ -u^{\tilde{n}1}\_{\uparrow}(x) \end{pmatrix}$ with *E**ñ*2 =  − *E**ñ*1. The quasi-particle operators obey ⟨*γ**ñ**w*†*γ**m̃**v*⟩ = *δ**ñ**m̃**δ**w**v**f*(*E**ñ**w*) and ⟨*γ**ñ**w**γ**m̃**v*⟩ = ⟨*γ**ñ**w*†*γ**m̃**v*†⟩ = 0 with *f*(*E*) = [*e**E*/*K**B**T* + 1]− 1 being the Fermi distribution function. At finite temperatures, the gap function becomes Δ(*x*) =  − *U*∑*ñ*ʹ*u*↑*ñ*(*x*)*v*↓*ñ* \*(*x*)tanh(*E**ñ*/*k**B**T*) and the total Fermi density $n(x)=\sum\_\sigma n\_{\sigma}(x)= n(x)=2{\sum\_{\tilde n }}' [|v\_{\tilde n}(x)|^2(1-f(E\_{\tilde{n}}))+|u\_{\tilde n}(x)|^2 f(E\_{\tilde{n}})]$. We caution that a bound state with *E**n* < 0 contributes to the density via ∣*u**n*∣2. Due to the symmetry, it is equivalent to a *E**n* > 0 state contributing to the density via ∣*v**n*∣2. In our numerical calculations, we consider a quasi-1D system in a 1D box of length *L*. We discretize the space *x*/*L* = [0, 1] using *n**x* grid points *x**j* = *j**δ**x*, where *δ**x* = *L*/*n**x* and *j* = 0, 1, 2, ...., *n**x* − 1. The BdG equation is also discretized by using the finite-difference method and becomes $$\label{Eq:DiscreteBdG} \sum\_j \begin{pmatrix} h\_{ij}&\Delta\_{ij}\\ \Delta^\*\_{ij}& -h\_{ij} \end{pmatrix} \begin{pmatrix} u\_j^{\tilde{n}}\\ v\_j^{\tilde{n}} \end{pmatrix} =E\_{\tilde{n}}\begin{pmatrix} u\_i^{\tilde{n}}\\ v\_i^{\tilde{n}} \end{pmatrix}.$$ Here Δ*i**j* = Δ*i**δ**i*, *j* for *s*-wave pairing. The BdG Hamiltonian has the size of 2*n**x* × 2*n**x* and we only take the positive energy eigenstates for the calculations of the gap function and density. For the ground state of the Fermi superfluid, the total density becomes $$\label{Eq:rhoBdG} n(x)=2{\sum\_{\tilde n }}' |v\_{\tilde n}(x)|^2.$$ The total fermion number is *N* = *N*↑ + *N*↓ = ∫0*L**n*(*x*)*d**x*. The gap function is given by $$\label{Eq:DeltaBdG} \Delta(x) =-U{\sum\_{\tilde n}}' u\_{\tilde n}(x)v\_{\tilde n}(x).$$ The Rydberg atom is placed at *x* = 0, where the wave functions should vanish due to the impenetrable core of the Rydberg atom. The box boundary at *x* = *L* is chosen such that the wave functions return to the bulk values before encountering the wall at *x* = *L*. The box introduces an energy scale *E*0 = ℏ2/(2*m**L*2), but we use the intrinsic inverse-length and energy units *k**f* and *E**f* carried by the fermions. Numerically, we start with a trial Δ(*r*) and a given set of parameters (*U*, *μ*) in order to solve the BdG equation and thus obtain the eigenvalues and eigenstates of the Rydberg atom-Fermi superfluid system. The gap function is then assembled for the next iteration. The iteration stops when the convergence condition (1/*L*)∫0*L**d**x*∣∣Δ*n**e**w*(*x*)∣ − ∣Δ*o**l**d*(*x*)∣∣/*E*0 < *ε* is satisfied, where Δ*n**e**w*/*o**l**d*(*x*) denote the gap functions between consecutive iterations. We have taken *ε* = 10− 6 and 1000 grid points and checked that further adjustments of those values do not cause qualitative changes. Let us also comment on the oscillatory boundary effects due to the box confinement on the BCS side in the profiles shown in Fig. [Fig:Rbn62Profile] of the main text. The Fermi superfluid on the BCS side has relatively weak pairing. Therefore, when the boundary enforces the gap function and density to vanish, the fermions behave as noninteracting ones exhibiting an oscillatory behavior with length scale 1/*k**f*. This is because noninteracting fermions form a Fermi sea and the perturbation of the system starts at the Fermi momentum, which then results in the aforementioned oscillations at the boundary. In contrast, the fermions form tightly bound pairs on the BEC side, which behave like composite bosons and no longer follow the Fermi statistics. On the BEC side, the composite bosons are repelled by the box boundaries, but they vanish smoothly without the oscillatory behavior. We emphasize that the box confinement is a simple choice to satisfy the impenetrability condition of the Rydberg atom, and the focus should be on the Rydberg-molecule formation due to the Rydberg potential instead of the boundary effects due to the choice of the confinement. [Fig:Rbn62pot] Higher Rydberg excitation ========================= Upon considering a higher Rydberg excitation results in a shallower Rydberg potential. As a paradigmatic example, here, we assume the Sr(71S) Rydberg state in a Fermi superfluid. Since the most relevant part of the Rydberg potential corresponds to the two dips furthest away from its core, we smooth out the highly oscillatory potential located close to the core. A comparison of the Rydberg potentials from the Rb(62S) state and that from the Sr(71S) state is given in Fig. [Fig:Rbn62pot], along with the approximation concerning their two furthest potential wells used in the BdG calculations. When the two potentials are placed in a 1D box with the furthest dips at its center and the length and energy scales properly scaled, as explained below, the potential from the Rb(62S) state is about ten times deeper. The two Rydberg potentials thus allow us to contrast the characteristics of the emergent Rydberg molecules in Fermi superfluids. Placing the approximate double-well Rydberg potential in a 1D box accounts for the impenetrable core of the Rydberg atom and provides a consistent comparison when different Rydberg states are used. We extract the distance *R*0 from the core to the furthest dip and its depth *V*0 of the Rydberg potential. For a selected Rydberg atom and its state, *R*0 and *V*0 are related. For example, *R*0 = 480 nm and *V*0 = 0.32 MHz for the Sr(71S) Rydberg state in a 87Sr Fermi superfluid. The aspect ratio of the Rydberg potential is fixed by introducing the energy scale *E**R* = ℏ2/(2*m**R*02) and obtaining the ratio *V*0/*E**R*. By setting *R*0 = *α**L* with 0 < *α* < 1, the furthest dip is located at *α**L* inside the box. This also fixes the energy relation *E**R* = *E*0/*α*2, from which the Rydberg potential can be expressed in terms of the corresponding *k**f* and *E**f*. [Fig:Srn71Profiles] By squeezing the Rydberg potential towards the left of the box with a smaller *α*, the effective depth of the potential increases, but the widths of the wells decrease. As shown below, the case with *α* = 1/4 exhibits similar behavior on the BCS and BEC sides as those where *α* = 1/2 being presented here. This indicates that the number of bound states is determined by a combination of the potential width and depth. Thus, a simple rescaling of the Rydberg potential inside the box does not lead to further qualitative changes. In the following, we will place the furthest dip of the Rydberg potential at *x* = *L*/2 and scale its depth accordingly. The left column of Figure [Fig:Srn71Profiles] shows the profiles of the gap function and the density for a selected case in the BCS regime with *μ* > 0 and the potential taken from the Sr(71S) Rydberg state. The presence of the Rydberg potential causes a dip in the gap function, which implies a suppression of pairing, and a peak in the density, indicating that unpaired fermions accumulate. [Fig:Ryd05BS] Before analyzing the energy spectrum and wave functions of the BCS case, we show in the right column of Fig. [Fig:Srn71Profiles] the profiles of the gap function and density of a Fermi superfluid on the BEC side with *μ* < 0 under the same Rydberg potential. In stark contrast to the BCS case, the gap function features a peak at the location of the Rydberg potential. The same behavior is evident on the density profile. Thus, the pairing in enhanced inside the Rydberg potential when the Fermi superfluid is BEC-like. The enhancement of pairing also suggests that Cooper pairs are trapped by the Rydberg atom. To explain the contrast between the BCS and BEC cases, we analyze the eigen-functions obtained from the BdG equation, see Fig. [Fig:Ryd05BS]. Since the symmetry of the BdG equation guarantees that every positive energy eigen-state is accompanied by a negative-energy state, a bound state of a particle is also accompanied by a bound state of a hole having the opposite energy sign. For this reason, we examine the profiles of the eigen-functions and identify the bound states with localized patterns in the potential wells. For the BCS and BEC cases presented in Fig. [Fig:Srn71Profiles], most of the eigen-states exhibit oscillatory behavior in the box (not shown for brevity). Nevertheless, we identify particular states featuring localization at the dips of the Rydberg potential. The left column of Fig. [Fig:Ryd05BS] depicts the two energetically lowest localized states on the BCS side that can be identified with this Rydberg potential. From the number of nodes inside the Rydberg potential, one may classify them as the first and second bound states localized in the rightmost potential well. The binding energies of these two bound states on the BCS regime are not adjacent to each other, implying that they are between a fermion and the Rydberg atom as only individual fermionic states well separated in energy are involved. This behavior suggests that the composite object refers to a diatomic Rydberg molecule. The suppression of the gap function on the BCS side can also be understood in terms of the formation of the Rydberg molecules, which breaks a Cooper pair when a fermion falls into the Rydberg potential. Meanwhile, the fermionic atom from a broken Cooper pair in the Rydberg molecule causes a bump in the local density, reminiscent to the occupation of the vortex core by unpaired fermions . For the BEC case demonstrated in the right column of Fig. [Fig:Srn71Profiles], there are also two bound states localized in the Rydberg potential. However, the two localized states have adjacent energies and similar wave functions, as can be seen in the right column of Fig. [Fig:Ryd05BS]. Therefore, two fermions are constituting the first bound state in the Rydberg potential in the BEC regime. This is a direct indication that a tightly-bound Cooper pair falls into the Rydberg potential and forms a three-body bound state, or a Rydberg molecule with a Rydberg atom and a Cooper pair. Therefore, a triatomic Rydberg molecule forms in the BEC case presented here. For the Sr(71S) Rydberg potential, however, the depth is too shallow to host additional bound states. We also estimate the Cooper-pair size using Eq.  in the main text and compare it with the width of the Rydberg potential. For the Sr(71S) Rydberg state, the width of the furthest well is about 0.04*L* after placing it at *x*/*L* = 0.5 of the box. For the BCS case selected above, Δ/*E**f* ≈ 0.18 and *k**f**L* ≈ 98, which lead to *ξ*/*L* ≈ 0.06. Meanwhile, Δ/*E**f* ≈ 2.9 and *k**f**L* ≈ 121 for the shallow-BEC case selected above, so *ξ*/*L* ≈ 0.003. Therefore, the width of the Cooper pairs on the BCS (BEC) side is larger (smaller) than the width of the primary Rydberg-potential well. This also corroborates that a fermion from a broken Cooper pair is trapped by the Rydberg potential to form a diatomic Rydberg molecule when the Fermi superfluid is on the BCS side. In contrast, the relatively shallow Rydberg potential traps a Cooper pair on the BEC side in its primary well, forming ``a Cooper pair in a molecule". Moreover, as compared to the relatively deep Rydberg-potential discussed in the main text, the secondary well of the shallower Rydberg potential presented here does not support bound states. [Fig:Ryd025Profile] Adjusting the location of the Rydberg potential =============================================== Here we take the potential of the Sr(71S) Rydberg state but set *R*0 = *L*/4, i.e., *α* = 1/4. Using this scaling, the depth of the potential increases due to the relatively smaller *R*0, but the width of the potential reduces. Fig. [Fig:Ryd025Profile] presents the profiles of the gap function and the density on the BCS side and BEC side, respectively, under the Sr(71S) Rydberg potential. Similar to the *α* = 1/2 case, the pairing is suppressed on the BCS side but is enhanced on the BEC side. Meanwhile, the density exhibits a peak on both BCS and BEC sides, signaling the emergence of bound states. [Fig:Ryd025states] The left column of Fig. [Fig:Ryd025states] illustrates the localized eigenstates extracted from the BdG equation corresponding to the setup on the BCS side depicted in Fig. [Fig:Ryd025Profile]. It becomes apparent that there are again two bound states with clearly separated binding energies. By examining the nodes of the wave functions, we can infer that they are the first and second bound states of the Rydberg potential. Therefore, the Rydberg atom breaks a Cooper pair and traps a fermionic atom to form a diatomic Rydberg molecule. In contrast, when the Fermi superfluid is in the BEC regime, the right column of Fig. [Fig:Ryd025states] shows the localized eigenstates corresponding to the right column of Fig. [Fig:Ryd025Profile]. There are two bound states with adjacent binding energies and similar wave functions, indicating that a Cooper pair has been trapped by the Rydberg potential. Therefore, the triatomic Rydberg molecule in this case consists of the Rydberg atom and a Cooper pair trapped by its potential. Squeezing the Rydberg potential within the box increases its depth and reduces its width, but it is still not sufficient to break a tightly bound Cooper pair in the shallow BEC regime in this case. Details of bound states ======================= The bound state wave functions *v**n*(*x*) of the Rydberg potential are summarized in Fig. [Fig:Rbn62BS] of the main text. Here, for reasons of completeness, we provide the full bound-state wave functions *u**n* and *v**n* of the Fermi superfluid under the Rydberg potential shown in Fig. [Fig:Demo] of the main text. Specifically, Fig. [Fig:Rbn62BCSstates] presents the localized eigenstates in the Rydberg potential which are extracted from the BdG analysis in the BCS case. Panels (a) to (f) depict, in a hierarchical order, the lowest-energy bound state to the highest-energy one. Since the bound-state energies are well separated and each bound state corresponds to a fermion, the bound states of Fig. [Fig:Rbn62BCSstates] are indicative of diatomic molecule formation. [Fig:Rbn62BCSstates] Fig. [Fig:Rbn62BECstates] shows the bound states calculated from the BdG equation in the BEC regime utilizing the same Rydberg potential. As can be seen, there are two energetically distinct bound states localized in the furthest well [Fig. [Fig:Rbn62BECstates](a), (b)] followed by two almost identical lowest-energy bound states localized in the secondary well [Fig. [Fig:Rbn62BECstates](c), (d)] and another two almost identical bound states localized in the furthest well [Fig. [Fig:Rbn62BECstates](e), (f)]. Each pair of the twin bound states corresponds to a Cooper pair trapped by the Rydberg potential. However, the secondary well on the left traps a Cooper pair into its lowest bound state [Fig. [Fig:Rbn62BECstates](c), (d)], while the furthest well traps a Cooper pair as its third vibrational state [Fig. [Fig:Rbn62BECstates](e), (f)]. [Fig:Rbn62BECstates] Exciting Rydberg atoms within the Fermi superfluid ================================================== Here, instead of introducing bosonic isotopes or atoms of different species as Rydberg impurities in an atomic Fermi superfluid, we consider the alternative process of exciting the atoms of the Fermi superfluid to produce Rydberg atoms. In this case, our description should be modified in order to account for the excitation of Rydberg atoms from the Fermi superfluid. This process is modelled by means of considering the additional Hamiltonian term *H**e**x* = *g**l*∫*d**x*(*d*†*ψ̃*†(*x*)*ψ*↓(*x*)*ψ*↑(*x*) + *h*.*c*.) which, in practice, converts a Cooper pair into a Rydberg atom and an unpaired fermion. In this expression, *ψ̃*† is the creation operator of a fermion from the broken Cooper pair but without excitation to the Rydberg state, *d*† refers to the creation operator of a Rydberg atom, and *g**l* is the coupling constant for exciting the Rydberg atoms. Since a Cooper pair needs to be broken to create a Rydberg atom in this case, *g**l* is expected to be larger than the pairing coupling constant *U*. If the Rydberg-atom density *n**R* from the broken Cooper pairs satisfies *n**R**R*03 ≪ 1, with *R*0 denoting the range of the Rydberg potential, we may approximate ⟨*d*†*ψ̃*†⟩ ≈ ⟨*d*†*d*⟩ = *n**R* = ∫*d**x*⟨*ψ̃*†(*x*)*ψ̃*(*x*)⟩. Following this crude approximation, the excitation Hamiltonian becomes *H**e**x* ≈ *g**l**n**R*∫*d**x*(*ψ*↓(*x*)*ψ*↑(*x*) + *h*.*c*.). This has a form similar to the pairing terms in the BCS Hamiltonian. In particular, if it is combined with the BCS Hamiltonian, the resulting Hamiltonian leads to a suppressed pairing gap Δʹ = Δ − *g**l**n**R*, where Δ is the pairing gap in the absence of the Rydberg excitation. Therefore, exciting the Rydberg atoms directly from the Fermi superfluid instead of introducing different isotopes or species results in a suppression of the pairing gap and favors the formation of a diatomic Rydberg molecule with a fermion from a broken Cooper pair. Moreover, the Rydberg molecules are homonuclear if the Rydberg atoms are excitations of the Fermi superfluid. In contrast, the Rydberg molecules discussed in the main text are heteronuclear because the Rydberg atoms are different from the fermions in the superfluid. [Fig:Rbn62SP] Rydberg molecules in noninteracting gases ========================================= The energetically lowest six bound states obtained from the Schrödinger equation with the outer double-well approximation of the Rb(62S) Rydberg potential are provided in Fig. [Fig:Rbn62SP]. Here, we follow the same scaling process (i.e., *α* = 0.5) to place the furthest dip of the Rydberg potential at the middle of the box. We consider only one component of fermions here, as the other component will result in exactly the same results in a noninteracting system. By analyzing the number of nodes and the localization position of the wave functions, one can identify the series of the bound states localized in the furthest well of the Rydberg potential and the ones located in the secondary well. For example, panels (a), (b), (d), (e) of Fig. [Fig:Rbn62SP] showcase the first four bound states localized in the furthest dip, while panels (c) and (f) depict the two lowest bound states localized in the secondary well. Given the well-separated energies of the bound states, all bound states shown here correspond to diatomic Rydberg molecules between the Rydberg atom and a fermion. Their binding energies are extracted from the eigenvalues of the Schrödinger equation with the Rydberg potential. After proper normalization with respect to the values of *E**f* for the corresponding BCS and BEC cases, the binding energies are shown in Fig. [Fig:EbRbn62] of the main text. We note in passing that triatomic Rydberg molecules binding a noninteracting spin- ↑  fermion and a noninteracting spin- ↓  fermion can also form. However, they will possess twice the binding energy of a corresponding diatomic Rydberg molecule. This is because for noninteracting fermions, each energy level can be occupied by one spin- ↑  and one spin- ↓  fermion since they do not feel the presence of each other. It should be emphasized that this binding process is different from the one of the triatomic Rydberg molecule encompassing a trapped Cooper pair, where pairing effects play a decisive role. Breaking and trapping Cooper pairs by Rydberg-molecule spectroscopyin atomic Fermi superfluids ============================================================================================== We propose a spectroscopic probe of the breaking and localization of Cooper pairs in an atomic Fermi superfluid interacting with a Rydberg impurity. This is achieved by monitoring the formation of diatomic and triatomic ultralong-range molecular species in the superfluid across the Bardeen–Cooper–Schrieffer (BCS)-Bose Einstein condensation (BEC) crossover. The triatomic Rydberg molecule in the BEC regime heralds the trapping of a tightly-bound Cooper pair, reminiscent of pion capture in nuclear matter, while the breaking of a Cooper pair on the BCS side by a diatomic Rydberg molecule is evocative of binary-star tidal disruption by a black hole. Spectroscopy of the Fermi superfluid and Rydberg molecules allows for an estimation of the Cooper-pair size while the Rydberg molecule binding energies discern many-body pairing effects. Rydberg atom-based systems have emerged as leading platforms for demonstrating many-body correlations , quantum simulations , quantum error corrections , and quantum optics . When the excited electron scatters from a nearby ground-state atom, under certain conditions, ultralong-range molecular bonds can form . Such long-range Rydberg molecules have been realized  , see also the reviews . Interesting aspects of many-body physics, such as the formation of Bose and Fermi polarons, quantum statistics of gases exhibiting bunching and anti-bunching, with Rydberg molecules have also been reported . These studies exploit the large energy separations between the vibrational energies and the underlying primitive excitations in a quantum gas. In a different context, two-component fermions with attractive interactions form Cooper pairs and exhibit the Bardeen-Cooper-Schrieffer (BCS) to Bose-Einstein condensation (BEC) crossover pioneered by the experiments with ultracold Fermi gases  (see also ). Here, we show that by creating ultralong-range molecules with a Rydberg impurity in a background sea of Cooper pairs, it is possible to a) break the pairs on the BCS side and b) locally trap a Cooper pair on the BEC side. The former bears analogies with the breaking of a binary-star pair by a tidal disruption event into a black hole , while the latter is reminiscent of the capture of pions (quark-antiquark pairs) in hydrogen , deuterium , and helium . Rydberg molecules in Fermi superfluids offer a tabletop laboratory to simulate analogues in stellar and nuclear settings . Through radio-frequency (rf) spectroscopy of the superfluid pairing gap  or Rydberg spectroscopy of the molecular lines , one may probe the reaction of the superfluid to tackle topical problems in condensed matter physics, such as the Cooper-pair size and pairing energies . A typical Rydberg potential is illustrated in Fig. [Fig:Demo] with heteronuclear Rydberg molecules formed in Fermi superfluids. These molecules are (1) diatomic between the impurity and a fermion from a broken Cooper pair and (2) triatomic, or ``a Cooper pair in a molecule" since a Cooper pair is trapped by the Rydberg potential. To our knowledge, the latter type of Rydberg molecules has not been discussed in the literature, and they are different from the trimer Rydberg molecules emanating from two weakly interacting bosons individually trapped by a bosonic Rydberg atom that have been realized  and theoretically studied . Therefore, the Rydberg molecules in Fermi superfluids exploit the interplay and competition between two molecule-formation mechanisms, one between the fermions to bind a Cooper pair and the other among the Rydberg and its neighboring atoms to create a Rydberg molecule. [Fig:Demo] Leveraging the Bogoliubov-de Gennes (BdG) formalism  suitable for studying inhomogeneous effects in Fermi superfluids, we extract the low-lying bound states of the composite Rydberg-Fermi superfluid system. By increasing the Cooper-pair strength, distinct local reactions of the pairing gap to the Rydberg potential will occur: Breaking (trapping) of a weak (strong) Cooper pair leads to a local suppression (enhancement) of the gap function. We identify the formation of diatomic and triatomic Rydberg molecules along with their binding energies, which are raised by the many-body pairing effect when compared to those in a noninteracting gas. In contrast to previous studies  with impurities carrying onsite potentials in Fermi superfluids, the Rydberg potential has its furthest well about hundreds of nanometers away from the core with controllable width and depth, thereby giving rise to rich structures of Rydberg molecules. #### *Rydberg atoms in a Fermi superfluid*.– We consider few Rydberg atoms immersed in a two-component, spin- and mass-balanced Fermi superfluid with contact pairing interactions. Experimentally, this setup can be emulated, for instance, by bosonic 87Rb Rydberg atoms and two hyperfine states of 86Rb for the Fermi superfluid. However, our results equally hold for other Rydberg atom-Fermi superfluid systems. For simplicity, the Rydberg atoms are assumed to be immobile in real space and noninteracting with each other. A quasi one-dimensional (1D) geometry  creating a cigar-shaped cloud similarly to Refs.  is considered. It supports the off-diagonal long-range order of the superfluid while freezing out the transverse degrees-of-freedom. The many-body Hamiltonian of the composite system within the BCS-Leggett theory reads $$\begin{aligned} \label{Eq:Hint} \mathcal{H}=\mathcal{H}\_{BCS}+\sum\_{\sigma}\int dx V\_{Ryd}(x)\psi\_\sigma^{\dagger}(x)\psi\_\sigma(x) d^\dagger d, \end{aligned}$$ where $\mathcal{H}\_{BCS}=\int dx\Big[\sum\_{\sigma} \psi\_{\sigma}^{\dagger}(x)h\_{\sigma}(x)\psi\_{\sigma}(x)+ (\Delta(x) \psi\_{\uparrow}^{\dagger}(x)\psi\_{\downarrow}^{\dagger}(x)+h.c.)\Big]$. The fermion operator acting on the *σ* =  ↑ ,  ↓  component of mass *m* is *ψ**σ*, and $h\_\sigma(x)=-\frac{\hbar^2}{2m}\nabla^2+V\_{ext}(x)-\mu\_\sigma$ denotes the single-particle Hamiltonian with *V**e**x**t*(*x*) summarizing the total external confinement and influence. The order parameter of the *s*-wave Fermi superfluid is the gap function $$\begin{aligned} \label{Eq:DeltaOriginal} \Delta(x)&=-U\langle\psi\_{\downarrow}(x)\psi\_{\uparrow}(x)\rangle.\end{aligned}$$ The effective coupling *U* < 0 is related to the 1D scattering length *a*1*D*  via $U=-\frac{2\hbar^2}{ma\_{1D}}$, tunable by Feshbach resonance , and ⟨…⟩ designates the ground-state expectation value at *T* = 0. The BCS-BEC crossover occurs when the chemical potential (here *μ*↑ = *μ*↓ ≡ *μ*) crosses zero  where the minimum of the quasiparticle-spectrum shifts to zero momentum. Importantly, the second contribution in Eq.  models the Rydberg atom - fermion interaction , with *d* (*d*†) being the annihilation (creation) operator of a Rydberg atom. The ultralong-range Born-Oppenheimer potential between a Rydberg atom and a ground-state fermionic atom is given by $V\_{Ryd}(x)=\frac{2\pi\hbar^2 a\_e}{m\_e}|\psi\_e(x)|^2$. Here, *a**e* denotes the scattering length between the Rydberg electron with mass *m**e* and a fermionic atom, and *x* measures the distance from the Rydberg impurity to the fermionic atom. The atomic Rydberg wave function, *ψ**e*(*x*), is calculated with effective valence potentials . In the vicinity of a Rydberg atom, we replace *d*†*d* by ⟨*d*†*d*⟩ = 1 and hence *V**R**y**d*(*x*) acts as an external potential for the Fermi superfluid. In what follows, it will be implicitly combined with *V**e**x**t*(*x*) in *h**σ*. Moreover, the Fermi energy *E**f* = ℏ2*k**f*2/(2*m*) and wavevector *k**f* = *π**n*/2, of a noninteracting 1D two-component Fermi gas with the same total particle number *N* = ∫*n*(*x*)*d**x* as the superfluid, serve as the energy and inverse-length units. For example, *g* =  − *U**k**f*/*E**f* is the dimensionless coupling constant of pairing between the fermions. #### *BdG formalism.* – To reveal the impact of the Rydberg atoms on the Fermi superfluid, we inspect the composite system as the superfluid undergoes the BCS-BEC crossover. Specifically, H can be diagonalized by the BdG transformation : *ψ*↑(*x*) = ∑*ñ*[*u*↑*ñ*1(*x*)*γ**ñ*1 − *v*↑*ñ*2 \*(*x*)*γ**ñ*2†] and *ψ*↓(*x*) = ∑*ñ*[*u*↓*ñ*2(*x*)*γ**ñ*2 + *v*↓*ñ*1 \*(*x*)*γ**ñ*1†]. The quasiparticle wave functions *u**σ**ñ**j* and *v**σ**ñ**j* with *j* = 1, 2 are to be determined, and they satisfy ∫*d**x*(∣*u**σ**ñ**j*∣2 + ∣*v**σ**ñ**j*∣2) = 1. The BdG equation for the composite system considered here can be block-diagonalized into  $$\label{eq:subset1} \begin{pmatrix} h\_{\uparrow}(x)&\Delta(x)\\ \Delta^\*(x)&-h^\*\_{\downarrow}(x) \end{pmatrix} \begin{pmatrix} u^{\tilde{n}j}\_{\uparrow}(x)\\ v^{\tilde{n}j}\_{\downarrow}(x) \end{pmatrix} =E\_{\tilde{n}j}\begin{pmatrix} u^{\tilde{n}j}\_{\uparrow}(x)\\ v^{\tilde{n}j}\_{\downarrow}(x) \end{pmatrix}.$$ Moreover, the BdG equation has a discrete symmetry connecting the positive and negative energy states, so we drop the indices 1, 2 and  ↑ ,  ↓  from the quasi-particle wave functions. For the ground state, the gap function Eq. ([Eq:DeltaOriginal]) then becomes Δ(*x*) =  − *U*∑*ñ*ʹ*u*↑*ñ*(*x*)*v*↓*ñ* \*(*x*) and the total fermion density $n(x)=\sum\_\sigma n\_{\sigma}(x)= 2{\sum\_{\tilde n }}'|v\_{\tilde n}(x)|^2$ with *n**σ*(*x*) = ⟨*ψ**σ*†(*x*)*ψ**σ*(*x*)⟩. Here ∑*ñ**w*ʹ denotes summation over the positive-energy states. We discretize the space and implement an iterative method  to solve the BdG equation (see SM  for details). [Fig:Rbn62Profile] #### *Discussion*.– The interaction between a Rydberg impurity and nearby fermions is determined by the Rydberg potential shown in Fig. [Fig:Demo]. To account for the impenetrable core of the Rydberg atom, the system is further embedded in a 1D box of size *L* with the Rydberg atom at *x* = 0 and appropriately adjusting the relevant energy and length scales, see also SM . The gap function and density of a representative BCS (BEC) Fermi superfluid with *μ* > 0 (*μ* < 0) subjected to the Rydberg potential are depicted in the left (right) panels of Fig. [Fig:Rbn62Profile]. While the density profiles on both BCS and BEC sides show peaks evidencing the bound states due to the attractive Rydberg potential, the most prominent contrast is the enhancement (suppression) of the gap function around the minima of the Rydberg potential in the BEC (BCS) regime. The decoupling of the gap function and density of a Fermi superfluid on the BCS side has been discussed in vortex structures , and here the Rydberg potential provides another concrete example. We mention the oscillatory boundary effects on the BCS side due to its fermionic excitations, which is explained in the SM . The bound-state wave functions *v**n*(*x*) of the Rydberg potential in the BCS and BEC regimes are presented in Fig. [Fig:Rbn62BS], see the SM  for all bound-state wave functions *u**n* and *v**n*. It is evident that each well may host a series of bound states when the depth of the Rydberg potential is enough to compete with the pairing in the Fermi superfluid. Thus, there is a competition between the intercomponent fermion attraction to maintain the Cooper pairs and the attraction among the Rydberg atom and the fermions to form molecules. The bound-state energies in the BCS regime are clearly separated, and each bound state consists of a single fermion. This implies that the resulting diatomic Rydberg molecules originate from individual fermions due to broken Cooper pairs. The bound states in the BEC regime shown in Fig. [Fig:Rbn62BS](b) are more complex. Indeed, focusing on the furthest well, the first two bound states are clearly separated in energy, indicating that they correspond to diatomic Rydberg molecules. However, the subsequent two higher vibrational bound states in the same well are energetically adjacent with almost identical wave functions. Together with the enhanced gap function shown in Fig. [Fig:Rbn62Profile], the twin bound states suggest the presence of a locally trapped Cooper pair. Therefore, the furthest well hosts a triatomic Rydberg molecule as an excited vibrational state in the BEC regime due to the combination of the strong Cooper pairing and the Rydberg potential being capable of trapping the Cooper pair. There is also a pair of bound states with almost identical binding energies and wave functions localized in the secondary well illustrated in Fig. [Fig:Rbn62BS](b). These are again evident of the creation of another triatomic Rydberg molecule. Therefore, the double-well approximate Rydberg potential depicted in Fig. [Fig:Demo] is able to host both diatomic and triatomic Rydberg molecules. [Fig:Rbn62BS] The Cooper-pair size may be estimated by the BCS coherence length  $$\begin{aligned} \label{Eq:xi} \xi\approx \frac{\hbar v\_f}{\Delta},\end{aligned}$$ where *v**f* is the Fermi velocity. For the Rydberg potential and box confinement used here, the width of the furthest (secondary) well is about 0.06*L* (0.03*L*). The Cooper-pair size of the selected BCS (BEC) case of Fig. [Fig:Rbn62Profile] is *ξ*/*L* ≈ 0.07 (*ξ*/*L* ≈ 0.003) since Δ/*E**f* ≈ 0.14 and *k**f**L* ≈ 111 (Δ/*E**f* ≈ 2.5 and *k**f**L* ≈ 131). Hence, the Cooper pairs on the BCS side cannot be accommodated within the Rydberg-potential wells. In this context, only a fermion from a broken Cooper pair is captured, forming a diatomic molecule. In contrast, the Cooper pairs of the BEC case may fit into the Rydberg potential, which is deep enough to either break a Cooper pair or trap it to form a diatomic or a triatomic Rydberg molecule. For a typical cold-atom cloud with density *n* ≈ 1013 cm− 3 , *E**f* ≈ 1kHz for Rb atoms, the depth of the Rydberg potential in Fig. [Fig:Demo] reaches the order of MHz. The pairing gap is roughly of the order of *E**f* as shown in Fig. [Fig:Rbn62Profile], which can be orders of magnitude smaller than the depth of the Rydberg potential even on the BEC side. The Rydberg molecule lifetime is typically about 10-100 *μ*s , while the timescale in a Fermi gas is governed by ℏ/*E**f* ( ∼  1ms here). Therefore, the above treatment of quasi-equilibrium Fermi superfluids in the presence of Rydberg molecules is reasonable. Meanwhile, a shallow Rydberg potential discussed in the SM  with similar orders of magnitude of the pairing gap and potential depth is shown to also form diatomic and triatomic Rydberg molecules. [Fig:EbRbn62] The respective binding energies (normalized by *E**f*) obtained from the BdG equation with the Rydberg potential of Fig. [Fig:Demo] are illustrated in Fig. [Fig:EbRbn62]. The first two lowest-energy bound states in both BCS and BEC regimes have comparable binding energies since they correspond to diatomic Rydberg molecules consisting of the Rydberg atom and a broken-pair fermion. However, the binding energies of the higher vibrational bound states in the BCS and BEC regimes deviate more significantly because the triatomic Rydberg molecules in the BEC possess relatively larger binding energies within the trapped Cooper pairs. We remark that the relation between the diatomic and triatomic Rydberg molecules in Fermi superfluids is more complex than that in a BEC  due to spin statistics and many-body effects. Indeed, adding an identical boson to a Rydberg dimer leads to a triatomic molecule with twice the diatomic binding energy. However, this does not hold for fermions due to the Pauli exclusion. Specifically, the formation of diatomic and triatomic Rydberg molecules in a Fermi superfluid competes with the binding of Cooper pairs. As such, the many-body contribution of breaking or trapping a Cooper pair plays a decisive role in creating Rydberg molecules, as it becomes apparent by the BdG calculation shown in Fig. [Fig:EbRbn62]. To discern many-body from single-particle effects in the Rydberg molecule formation, we also evaluate the binding energies of diatomic Rydberg molecules with the Schrödinger equation (*h**σ* + *V**R**y**d*)*ψ**σ* = *E**n**S**ψ**σ*, with *h**σ* from H*B**C**S*; see also SM  for the underlying bound states. The normalized single-particle binding energies are presented in Fig. [Fig:EbRbn62]. The many-body binding energies obtained from the BdG equation are in general larger than the corresponding single-particle energies due to pairing effect. However, the binding energies in the BCS regime follow a similar trend with the single-particle energies, and their energy difference remains roughly constant as higher vibrational states are reached. In contrast, the BdG binding energies on the BEC side exhibit larger deviations from their single-particle counterparts. For the diatomic Rydberg molecules, this is because the stronger pairing makes the Fermi superfluid on the BEC side with *μ* < 0 quite different from a noninteracting gas. Based on the gap function shown in Fig. [Fig:Rbn62Profile], the energy difference between the many-body and single-particle binding energies of the diatomic Rydberg molecules are of the same order as the pairing energy. Moreover, the emergence of the triatomic Rydberg molecules results in a substantial energy difference from the noninteracting counterpart due to the trapped Cooper pair, with its own binding energy. #### *Implications*.– Spatially resolved rf spectroscopy of atomic Fermi superfluids , following original attempts in Refs. , map out the local pairing gap. As described in Fig. [Fig:Rbn62Profile], this will determine the types of Rydberg molecules since the pairing is suppressed (enhanced) in the diatomic (triatomic) Rydberg molecule. Meanwhile, the Rydberg molecules in a Fermi superfluid may serve as a probe for the Cooper-pair size because triatomic Rydberg-molecule formation is only possible when the Cooper-pair size is smaller than the width of the Rydberg potential. Differentiating the diatomic and triatomic Rydberg molecules is also achievable by Rydberg-molecule line spectroscopy  since the binding energy of a triatomic Rydberg molecule is substantially larger than that of a diatomic one, as shown in Fig. [Fig:EbRbn62]. The Rydberg impurity-Fermi superfluid system features several tunable parameters, including the depth, width, and location of the Rydberg potential, determined by the Rydberg excitation , and the pairing strength and particle density of the Fermi superfluid (see, e.g., Refs. ). Furthermore, the quasi-1D setup has several advantages. First, the Rydberg lifetime in a lattice is found to be longer for reduced dimensions . If the enhancement also holds in the continuum, it facilitates Rydberg-molecule formation in 1D as there is on average one fermion within the Rydberg orbit, in the cases studied here. Second, the rotational excitations of Rydberg molecules will be less relevant in 1D, significantly simplifying the bound-state spectrum. Moreover, the 1D geometry eases i) the comparison between the Cooper-pair size and the Rydberg potential width, ii) the identification of diatomic or triatomic Rydberg molecules, and iii) the characterization of the Rydberg molecules, e.g., from the density and pairing gap profiles. So far, the Rydberg atoms are assumed to be of different isotopes or species from the Fermi superfluid. Alternatively, if some of the fermions within the superfluid are excited into Rydberg atoms, forming homonuclear Rydberg molecules, this results in a reduced effective pairing gap Δʹ = Δ − *g**l**n**R*, see also the discussion in the SM . Here, *g**l* is the coupling constant for exciting the fermions to Rydberg atoms. Once the excited Rydberg atoms are present, however, the corresponding bound states can be extracted through the BdG formalism. Therefore, dimer or trimer Rydberg molecules are expected via Rydberg excitations stemming from the Fermi superfluid although the reduced effective pairing gap will favor dimer Rydberg molecules. Finally, we note that Rydberg molecules are different from the Cooper-pair splitting in superconductor heterostructures . In this case, the proximity effect is utilized by dynamically sending a Cooper pair, as an excited state with spin entanglement or momentum correlation, to two separate non-superconducting regions in real space. In contrast, the fermion bound in a diatomic Rydberg molecule no longer retains the pairing correlation, while the tightly-bound Cooper pair in a triatomic Rydberg molecule localizes in real space. Along the same lines, there are subtle differences between the Rydberg molecules in Fermi superfluids and the binary tidal disruption event and pion matter. For instance, binding in binary stars (pion matter) stems from gravity (Coulomb interactions), whereas in Rydberg molecules, it is traced back to the electron-atom scattering. #### *Summary and outlook*.– The bound states of Fermi superfluids in a Rydberg-impurity potential testify to the formation of Rydberg molecules. The tunable fermion pairing gives rise to diatomic Rydberg molecules from broken Cooper pairs and triatomic Rydberg molecules from tightly-bound Cooper pairs, exhibiting different features of the gap function due to their distinctive nature. The detection of the triatomic Rydberg molecules may reveal information about the Cooper-pair size while the bound-state energies reflect pairing effects. With the rapid developments of Rydberg physics and Fermi gases, realizations of Rydberg molecules in Fermi superfluids will provide an elegant example of interfacing few- and many-body physics in a unified platform. Furthermore, going beyond the Leggett-BCS theory  of the superfluid ground state, pre-formed Cooper pairs at finite temperatures correct the superfluid transition temperature and lead to the pseudo-gap effect away from the BCS regime . Incorporating pairing-fluctuation theories developed for homogeneous systems  into the BdG formalism remains a challenge, and finite-temperature physics of Rydberg molecules awaits future research. #### *Acknowledgements*.– C. C. C. was partly supported by the NSF (No. PHY-2310656). Support for ITAMP by the NSF is also acknowledged. 67 ifxundefined [1] ifx#1 ifnum [1] #1firstoftwo secondoftwo ifx [1] #1firstoftwo secondoftwo ““#1”” @noop [0]secondoftwo sanitize@url [0]` 12`$12`&12`#12`12`\_12`%12 @startlink[1] @endlink[0] @bib@innerbibempty   and   ,  Many-body physics with individually controlled Rydberg atoms,  [Nature Physics  **16**,  132 ( 2020)](https://doi.org/10.1038/s41567-019-0733-z)  ,  , and   ,  Quantum information with Rydberg atoms,  [Rev. Mod. Phys.  **82**,  2313 ( 2010)](https://doi.org/10.1103/RevModPhys.82.2313)  ,  , and   ,  Rydberg atom quantum technologies,  [Journal of Physics B: Atomic, Molecular and Optical Physics  **53**,  012002 ( 2019)](https://doi.org/10.1088/1361-6455/ab52ef)  ,  ,  ,  ,  , ,  , and   , A concise review of Rydberg atom based quantum computation and quantum simulation,  [Chinese Physics B  **30**,  020305 ( 2021)](https://doi.org/10.1088/1674-1056/abd76f)  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  , and   ,  Logical quantum processor based on reconfigurable atom arrays,  [Nature  **626**,  58 ( 2024)](https://doi.org/10.1038/s41586-023-06927-3)  ,  ,  ,  ,  , and   , @noop Rydberg superatoms: An artificial quantum system for quantum information processing and quantum optics ( 2024),  arXiv: 2404.05330  ,  , and   ,  Creation of Polar and Nonpolar Ultra-Long-Range Rydberg Molecules,  [Phys. Rev. Lett.  **85**,  2458 ( 2000)](https://doi.org/10.1103/PhysRevLett.85.2458)  ,  , and   ,  Energies and dipole moments of long-range molecular Rydberg states,  [J. Phys. B: At. Mol. Opt. Phys.  **35**,  L193 ( 2002)](https://doi.org/10.1088/0953-4075/35/10/101)  ,  , and   ,  Shape-resonance-induced long-range molecular Rydberg states,  [J. Phys. B: At. Mol. Opt. Phys.  **35**,  L199 ( 2002)](https://doi.org/10.1088/0953-4075/35/10/102)  ,  ,  ,  ,  , and   ,  Observation of ultralong-range rydberg molecules,  [Nature  **458**,  1005 ( 2009)](https://doi.org/10.1038/nature07945)  ,  ,  ,  ,  ,  , and   ,  Observation of pendular butterfly Rydberg molecules, [Nature Communications  **7**, 12820 ( 2016)](https://doi.org/10.1038/ncomms12820)  ,  ,  , and  ,  Exploring the vibrational series of pure trilobite Rydberg molecules,  [Nature Communications  **14**,  8108 ( 2023)](https://doi.org/10.1038/s41467-023-43818-7)  ,  ,  ,  , and  , Production of trilobite Rydberg molecule dimers with kilo-Debye permanent electric dipole moments,  [Science  **348**,  99 ( 2015)](https://doi.org/10.1126/science.1260722),  https://www.science.org/doi/pdf/10.1126/science.1260722  ,  ,  ,  ,  , , and  ,  Heteronuclear rydberg molecules,  [Phys. Rev. A  **101**,  060701 ( 2020)](https://doi.org/10.1103/PhysRevA.101.060701)  ,  , and   ,  Ultracold Rydberg molecules,  [Nature Communications  **9**,  1965 ( 2018)](https://doi.org/10.1038/s41467-018-04135-6)   and   ,  Ultralong-range Rydberg molecules,  [Molecular Physics  **118**,  e1679401 ( 2020)](https://doi.org/10.1080/00268976.2019.1679401)  ,  ,  ,  ,  ,  ,  , , ,  , and  ,  Creation of Rydberg Polarons in a Bose Gas,  [Phys. Rev. Lett.  **120**, 083401 ( 2018)](https://doi.org/10.1103/PhysRevLett.120.083401)  ,  , and   ,  Mesoscopic Rydberg Impurity in an Atomic Quantum Gas,  [Phys. Rev. Lett.  **116**, 105302 ( 2016)](https://doi.org/10.1103/PhysRevLett.116.105302)  ,  ,  ,  ,  ,  ,  , ,  ,  , and   ,  Theory of excitation of Rydberg polarons in an atomic quantum gas,  [Phys. Rev. A  **97**,  022707 ( 2018)](https://doi.org/10.1103/PhysRevA.97.022707)  ,  ,  ,  , and   , Rydberg impurity in a Fermi gas: Quantum statistics and rotational blockade,  [Phys. Rev. Res.  **2**, 023021 ( 2020)](https://doi.org/10.1103/PhysRevResearch.2.023021)  and  , @noop Phenomenology of a Rydberg impurity in an ideal Bose Einstein condensate ( 2024),  arXiv: 2404.03980  ,  , and   , Emergence of a molecular Bose–Einstein condensate from a Fermi gas, @noop Nature  **426**,  537 ( 2003)  ,  ,  ,  ,  , and   ,  Condensation of Pairs of Fermionic Atoms near a Feshbach Resonance,  [Phys. Rev. Lett.  **92**, 120403 ( 2004)](https://doi.org/10.1103/PhysRevLett.92.120403)  ,  ,  ,  ,  ,  , and   ,  Collective Excitations of a Degenerate Gas at the BEC-BCS Crossover,  [Phys. Rev. Lett.  **92**,  203201 ( 2004)](https://doi.org/10.1103/PhysRevLett.92.203201)   and   ,  [*Bose–Einstein Condensation in Dilute Gases*](https://doi.org/10.1017/CBO9780511802850),  2nd ed. ( Cambridge University Press, 2008)  ,  [*Fundamentals and New Frontiers of Bose-Einstein Condensation*](https://doi.org/10.1142/7216) ( World Scientific,  Singapore, 2010)  https://www.worldscientific.com/doi/pdf/10.1142/7216  , ed.,  [*The BCS-BEC Crossover and the Unitary Fermi Gas*](https://doi.org/10.1007/978-3-642-21978-8) ( Springer Berlin,  Heidelberg,  2012)  ,  Hyper-velocity and tidal stars from binaries disrupted by a massive galactic black hole,  [Nature  **331**,  687 ( 1988)](https://doi.org/10.1038/331687a0)  ,  ,  , and  ,  Binary disruption by massive black holes: Hypervelocity stars, s stars, and tidal disruption events,  [The Astrophysical Journal Letters  **749**,  L42 ( 2012)](https://doi.org/10.1088/2041-8205/749/2/L42)  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  ,  , ,  ,  ,  ,  ,  ,  ,  ,  ,  , and   ,  Pionic hydrogen, in [*Precision Physics of Simple Atoms and Molecules*](https://doi.org/10.1007/978-3-540-75479-4_10), edited by   ( Springer Berlin Heidelberg,  Berlin, Heidelberg,  2008) pp.  165–186  ,  ,  ,  ,  ,  ,  ,  ,  , , ,  ,  ,  ,  ,  ,  ,  ,  ,  , and   ,  Redetermination of the strong-interaction width in pionic hydrogen,  [The European Phys. J. A  **57**,  70 ( 2021)](https://doi.org/10.1140/epja/s10050-021-00387-x)  ,  ,  ,  ,  ,  ,  ,  ,  ,  , ,  ,  ,  ,  ,  ,  ,  , and   ,  Pionic deuterium,  [The European Phys. J. A  **47**, 88 ( 2011)](https://doi.org/10.1140/epja/i2011-11088-1)  ,  ,  ,  , and   ,  Recent results of laser spectroscopy experiments of pionic helium atoms at PSI,  [SciPost Phys. Proc. ,  026 ( 2021)](https://doi.org/10.21468/SciPostPhysProc.5.026)  ,  Capture of negative exotic particles by atoms, ions and molecules,  [Rep. Progr. Phys.  **67**,  1769 ( 2004)](https://doi.org/10.1088/0034-4885/67/10/R02)  ,  ,  , and  ,  Tomographic rf spectroscopy of a trapped fermi gas at unitarity,  [Phys. Rev. Lett.  **99**,  090403 ( 2007)](https://doi.org/10.1103/PhysRevLett.99.090403)  ,  ,  ,  ,  ,  ,  ,  , , and   ,  High-temperature pairing in a strongly interacting two-dimensional Fermi gas,  [Science  **359**,  452 ( 2018)](https://doi.org/10.1126/science.aan5950),  https://www.science.org/doi/pdf/10.1126/science.aan5950  ,  ,  , and  ,  Determination of the fermion pair size in a resonantly interacting superfluid,  [Nature  **454**,  739 ( 2008)](https://doi.org/10.1038/nature07176)  ,  ,  ,  ,  ,  ,  ,  ,  ,  , and   ,  Rydberg Trimers and Excited Dimers Bound by Internal Quantum Reflection,  [Phys. Rev. Lett.  **105**, 163201 ( 2010)](https://doi.org/10.1103/PhysRevLett.105.163201) ,  , and   ,  Ultra-Long-Range Rydberg Trimers with a Repulsive Two-Body Interaction,  [Phys. Rev. Lett.  **102**, 173001 ( 2009)](https://doi.org/10.1103/PhysRevLett.102.173001)  ,  , and   ,  Ultralong-range triatomic Rydberg molecules in an electric field,  [J. Phys. B: At. Mol. Opt. Phys.  **49**,  124002 ( 2016)](https://doi.org/10.1088/0953-4075/49/12/124002)  , @noop *Superconductivity of Metals and Alloys.*,  2nd ed., Advanced Books Classics ( Chapman and Hall/CRC,  Boulder,  2018)  , @noop *Bogoliubov-de Gennes Method and Its Applications*,  1st ed., Lecture Notes in Physics, 924 ( Springer International Publishing,  Cham,  2016)  ,  ,  ,  , and   ,  Universal Impurity-Induced Bound State in Topological Superfluids,  [Phys. Rev. Lett.  **110**,  020401 ( 2013)](https://doi.org/10.1103/PhysRevLett.110.020401)  ,  ,  ,  ,  ,  ,  , and   ,  Few-body bose gases in low dimensions—a laboratory for quantum dynamics,  [Phys. Rep.  **1042**,  1 ( 2023)](https://doi.org/https://doi.org/10.1016/j.physrep.2023.10.004),  few-body Bose gases in low dimensions—A laboratory for quantum dynamics  ,  ,  , and  ,  1d to 3d Crossover of a Spin-Imbalanced Fermi Gas,  [Phys. Rev. Lett.  **117**,  235301 ( 2016)](https://doi.org/10.1103/PhysRevLett.117.235301)  , @noop *Quantum Liquids : Bose condensation and Cooper pairing in condensed-matter systems*, Oxford Graduate Texts ( Oxford University Press,  Oxford, UK,  2006)  ,  Atomic Scattering in the Presence of an External Confinement and a Gas of Impenetrable Bosons,  [Phys. Rev. Lett.  **81**,  938 ( 1998)](https://doi.org/10.1103/PhysRevLett.81.938)  ,  , and   ,  Dispersion coefficients for alkali-metal dimers, @noop Phys. Rev. A  **49**,  982 ( 1994)  ,  On the theory of superfluidity, @noop J. Phys.  **11**,  23 ( 1947)   and   ,  Proximity effect and spatial Kibble-Zurek mechanism in atomic Fermi gases with inhomogeneous pairing interactions,  [Phys. Rev. A  **107**,  063314 ( 2023)](https://doi.org/10.1103/PhysRevA.107.063314) @noop See Supplemental Material for details on the BdG formalism, the impact of higher Rydberg excitations in the molecule formation, the effect of the Rydberg potential location, further information on the bound states, the formation of homonuclear Rydberg molecules and the creation of Rydberg molecules in a noninteracting gas.  ,  ,  , and   ,  Finite temperature effects in trapped Fermi gases with population imbalance,  [Phys. Rev. A  **74**,  021602 ( 2006)](https://doi.org/10.1103/PhysRevA.74.021602)   and   , Vortex structure and spectrum of an atomic Fermi superfluid in a spherical bubble trap,  [Phys. Rev. A  **108**,  053303 ( 2023)](https://doi.org/10.1103/PhysRevA.108.053303)  ,  , and   ,  Pairing Gap and In-Gap Excitations in Trapped Fermionic Superfluids,  [Science  **305**,  1131 ( 2004)](https://doi.org/10.1126/science.1100782),  https://www.science.org/doi/pdf/10.1126/science.1100782  ,  ,  , and   , Determination of the Superfluid Gap in Atomic Fermi Gases by Quasiparticle Spectroscopy,  [Phys. Rev. Lett.  **101**, 140403 ( 2008)](https://doi.org/10.1103/PhysRevLett.101.140403)  ,  ,  ,  ,  ,  , and   ,  Coherent Many-Body Spin Dynamics in a Long-Range Interacting Ising Chain,  [Phys. Rev. X  **7**,  041063 ( 2017)](https://doi.org/10.1103/PhysRevX.7.041063)  ,  , and   , Electronic entanglement in the vicinity of a superconductor,  [The European Phys. J. B - Condensed Matter and Complex Systems **24**,  287 ( 2001)](https://doi.org/10.1007/s10051-001-8675-4)  ,  , and   ,  Andreev tunneling, coulomb blockade, and resonant transport of nonlocal spin-entangled electrons, [Phys. Rev. B  **63**, 165314 ( 2001)](https://doi.org/10.1103/PhysRevB.63.165314)  ,  ,  , and  , Cooper pair splitter realized in a two-quantum-dot Y-junction,  [Nature  **461**,  960 ( 2009)](https://doi.org/10.1038/nature08432)  ,  ,  ,  , and   ,  Real-time observation of Cooper pair splitting showing strong non-local correlations,  [Nature Communications  **12**,  6358 ( 2021)](https://doi.org/10.1038/s41467-021-26627-8)  ,  , and   , Adiabatic Cooper pair splitter, [Phys. Rev. B  **109**, L081402 ( 2024)](https://doi.org/10.1103/PhysRevB.109.L081402)  ,  , and   ,  Near-Unity Cooper Pair Splitting Efficiency,  [Phys. Rev. Lett.  **109**, 157002 ( 2012)](https://doi.org/10.1103/PhysRevLett.109.157002)  ,  ,  ,  ,  ,  , and   ,  Observation of pseudogap behaviour in a strongly interacting Fermi gas,  [Nature Phys.  **6**,  569 ( 2010)](https://doi.org/10.1038/nphys1709)  ,  ,  ,  ,  ,  , ,  ,  ,  ,  , and   ,  Observation and quantification of the pseudogap in unitary fermi gases,  [Nature  **626**, 288 ( 2024)](https://doi.org/10.1038/s41586-023-06964-y)  ,  ,  , and  ,  Comparison of different pairing fluctuation approaches to BCS–BEC crossover,  [Annals of Physics  **325**,  233 ( 2010)](https://doi.org/https://doi.org/10.1016/j.aop.2009.09.011)   and   ,  Crossover from Bardeen-Cooper-Schrieffer to Bose-Einstein Condensation and the Unitary Fermi Gas,  [Annual Review of Condensed Matter Physics  **5**,  209 ( 2014)](https://doi.org/https://doi.org/10.1146/annurev-conmatphys-031113-133829)  ,  Review of pseudogaps in strongly interacting Fermi gases,  [Reports on Progress in Physics  **80**,  104401 ( 2017)](https://doi.org/10.1088/1361-6633/aa7e53) Bogoliubov-de Gennes calculation ================================ After diagonalization by the BdG transformation (see the main text), the many-body Hamiltonian takes the form H = ∑*ñ**w*ʹ*E**ñ**w**γ**ñ**w*†*γ**ñ**w* + *E**g*. In this expression, *w* = 1, 2 represents the quasi-particle components, *E**g* is the ground-state energy, and ∑*ñ**w*ʹ denotes summation over the positive-energy states. The ground-state energy is $E\_g=-\frac{|\Delta|^2}{U}+\sum\_{\tilde n,w}(\epsilon\_{\tilde nw}-E\_{\tilde{n}w})$ with *ε**ñ**w* being the non-interacting counterpart of the excitation energy *E**ñ**w*. The BdG equation has the symmetry $\begin{pmatrix} u^{\tilde{n}2}\_{\downarrow}(x)\\ v^{\tilde{n}2}\_{\uparrow}(x) \end{pmatrix} =\begin{pmatrix} v^{\tilde{n}1 \*}\_{\downarrow}(x)\\ -u^{\tilde{n}1}\_{\uparrow}(x) \end{pmatrix}$ with *E**ñ*2 =  − *E**ñ*1. The quasi-particle operators obey ⟨*γ**ñ**w*†*γ**m̃**v*⟩ = *δ**ñ**m̃**δ**w**v**f*(*E**ñ**w*) and ⟨*γ**ñ**w**γ**m̃**v*⟩ = ⟨*γ**ñ**w*†*γ**m̃**v*†⟩ = 0 with *f*(*E*) = [*e**E*/*K**B**T* + 1]− 1 being the Fermi distribution function. At finite temperatures, the gap function becomes Δ(*x*) =  − *U*∑*ñ*ʹ*u*↑*ñ*(*x*)*v*↓*ñ* \*(*x*)tanh(*E**ñ*/*k**B**T*) and the total Fermi density $n(x)=\sum\_\sigma n\_{\sigma}(x)= n(x)=2{\sum\_{\tilde n }}' [|v\_{\tilde n}(x)|^2(1-f(E\_{\tilde{n}}))+|u\_{\tilde n}(x)|^2 f(E\_{\tilde{n}})]$. We caution that a bound state with *E**n* < 0 contributes to the density via ∣*u**n*∣2. Due to the symmetry, it is equivalent to a *E**n* > 0 state contributing to the density via ∣*v**n*∣2. In our numerical calculations, we consider a quasi-1D system in a 1D box of length *L*. We discretize the space *x*/*L* = [0, 1] using *n**x* grid points *x**j* = *j**δ**x*, where *δ**x* = *L*/*n**x* and *j* = 0, 1, 2, ...., *n**x* − 1. The BdG equation is also discretized by using the finite-difference method and becomes $$\label{Eq:DiscreteBdG} \sum\_j \begin{pmatrix} h\_{ij}&\Delta\_{ij}\\ \Delta^\*\_{ij}& -h\_{ij} \end{pmatrix} \begin{pmatrix} u\_j^{\tilde{n}}\\ v\_j^{\tilde{n}} \end{pmatrix} =E\_{\tilde{n}}\begin{pmatrix} u\_i^{\tilde{n}}\\ v\_i^{\tilde{n}} \end{pmatrix}.$$ Here Δ*i**j* = Δ*i**δ**i*, *j* for *s*-wave pairing. The BdG Hamiltonian has the size of 2*n**x* × 2*n**x* and we only take the positive energy eigenstates for the calculations of the gap function and density. For the ground state of the Fermi superfluid, the total density becomes $$\label{Eq:rhoBdG} n(x)=2{\sum\_{\tilde n }}' |v\_{\tilde n}(x)|^2.$$ The total fermion number is *N* = *N*↑ + *N*↓ = ∫0*L**n*(*x*)*d**x*. The gap function is given by $$\label{Eq:DeltaBdG} \Delta(x) =-U{\sum\_{\tilde n}}' u\_{\tilde n}(x)v\_{\tilde n}(x).$$ The Rydberg atom is placed at *x* = 0, where the wave functions should vanish due to the impenetrable core of the Rydberg atom. The box boundary at *x* = *L* is chosen such that the wave functions return to the bulk values before encountering the wall at *x* = *L*. The box introduces an energy scale *E*0 = ℏ2/(2*m**L*2), but we use the intrinsic inverse-length and energy units *k**f* and *E**f* carried by the fermions. Numerically, we start with a trial Δ(*r*) and a given set of parameters (*U*, *μ*) in order to solve the BdG equation and thus obtain the eigenvalues and eigenstates of the Rydberg atom-Fermi superfluid system. The gap function is then assembled for the next iteration. The iteration stops when the convergence condition (1/*L*)∫0*L**d**x*∣∣Δ*n**e**w*(*x*)∣ − ∣Δ*o**l**d*(*x*)∣∣/*E*0 < *ε* is satisfied, where Δ*n**e**w*/*o**l**d*(*x*) denote the gap functions between consecutive iterations. We have taken *ε* = 10− 6 and 1000 grid points and checked that further adjustments of those values do not cause qualitative changes. Let us also comment on the oscillatory boundary effects due to the box confinement on the BCS side in the profiles shown in Fig. [Fig:Rbn62Profile] of the main text. The Fermi superfluid on the BCS side has relatively weak pairing. Therefore, when the boundary enforces the gap function and density to vanish, the fermions behave as noninteracting ones exhibiting an oscillatory behavior with length scale 1/*k**f*. This is because noninteracting fermions form a Fermi sea and the perturbation of the system starts at the Fermi momentum, which then results in the aforementioned oscillations at the boundary. In contrast, the fermions form tightly bound pairs on the BEC side, which behave like composite bosons and no longer follow the Fermi statistics. On the BEC side, the composite bosons are repelled by the box boundaries, but they vanish smoothly without the oscillatory behavior. We emphasize that the box confinement is a simple choice to satisfy the impenetrability condition of the Rydberg atom, and the focus should be on the Rydberg-molecule formation due to the Rydberg potential instead of the boundary effects due to the choice of the confinement. [Fig:Rbn62pot] Higher Rydberg excitation ========================= Upon considering a higher Rydberg excitation results in a shallower Rydberg potential. As a paradigmatic example, here, we assume the Sr(71S) Rydberg state in a Fermi superfluid. Since the most relevant part of the Rydberg potential corresponds to the two dips furthest away from its core, we smooth out the highly oscillatory potential located close to the core. A comparison of the Rydberg potentials from the Rb(62S) state and that from the Sr(71S) state is given in Fig. [Fig:Rbn62pot], along with the approximation concerning their two furthest potential wells used in the BdG calculations. When the two potentials are placed in a 1D box with the furthest dips at its center and the length and energy scales properly scaled, as explained below, the potential from the Rb(62S) state is about ten times deeper. The two Rydberg potentials thus allow us to contrast the characteristics of the emergent Rydberg molecules in Fermi superfluids. Placing the approximate double-well Rydberg potential in a 1D box accounts for the impenetrable core of the Rydberg atom and provides a consistent comparison when different Rydberg states are used. We extract the distance *R*0 from the core to the furthest dip and its depth *V*0 of the Rydberg potential. For a selected Rydberg atom and its state, *R*0 and *V*0 are related. For example, *R*0 = 480 nm and *V*0 = 0.32 MHz for the Sr(71S) Rydberg state in a 87Sr Fermi superfluid. The aspect ratio of the Rydberg potential is fixed by introducing the energy scale *E**R* = ℏ2/(2*m**R*02) and obtaining the ratio *V*0/*E**R*. By setting *R*0 = *α**L* with 0 < *α* < 1, the furthest dip is located at *α**L* inside the box. This also fixes the energy relation *E**R* = *E*0/*α*2, from which the Rydberg potential can be expressed in terms of the corresponding *k**f* and *E**f*. [Fig:Srn71Profiles] By squeezing the Rydberg potential towards the left of the box with a smaller *α*, the effective depth of the potential increases, but the widths of the wells decrease. As shown below, the case with *α* = 1/4 exhibits similar behavior on the BCS and BEC sides as those where *α* = 1/2 being presented here. This indicates that the number of bound states is determined by a combination of the potential width and depth. Thus, a simple rescaling of the Rydberg potential inside the box does not lead to further qualitative changes. In the following, we will place the furthest dip of the Rydberg potential at *x* = *L*/2 and scale its depth accordingly. The left column of Figure [Fig:Srn71Profiles] shows the profiles of the gap function and the density for a selected case in the BCS regime with *μ* > 0 and the potential taken from the Sr(71S) Rydberg state. The presence of the Rydberg potential causes a dip in the gap function, which implies a suppression of pairing, and a peak in the density, indicating that unpaired fermions accumulate. [Fig:Ryd05BS] Before analyzing the energy spectrum and wave functions of the BCS case, we show in the right column of Fig. [Fig:Srn71Profiles] the profiles of the gap function and density of a Fermi superfluid on the BEC side with *μ* < 0 under the same Rydberg potential. In stark contrast to the BCS case, the gap function features a peak at the location of the Rydberg potential. The same behavior is evident on the density profile. Thus, the pairing in enhanced inside the Rydberg potential when the Fermi superfluid is BEC-like. The enhancement of pairing also suggests that Cooper pairs are trapped by the Rydberg atom. To explain the contrast between the BCS and BEC cases, we analyze the eigen-functions obtained from the BdG equation, see Fig. [Fig:Ryd05BS]. Since the symmetry of the BdG equation guarantees that every positive energy eigen-state is accompanied by a negative-energy state, a bound state of a particle is also accompanied by a bound state of a hole having the opposite energy sign. For this reason, we examine the profiles of the eigen-functions and identify the bound states with localized patterns in the potential wells. For the BCS and BEC cases presented in Fig. [Fig:Srn71Profiles], most of the eigen-states exhibit oscillatory behavior in the box (not shown for brevity). Nevertheless, we identify particular states featuring localization at the dips of the Rydberg potential. The left column of Fig. [Fig:Ryd05BS] depicts the two energetically lowest localized states on the BCS side that can be identified with this Rydberg potential. From the number of nodes inside the Rydberg potential, one may classify them as the first and second bound states localized in the rightmost potential well. The binding energies of these two bound states on the BCS regime are not adjacent to each other, implying that they are between a fermion and the Rydberg atom as only individual fermionic states well separated in energy are involved. This behavior suggests that the composite object refers to a diatomic Rydberg molecule. The suppression of the gap function on the BCS side can also be understood in terms of the formation of the Rydberg molecules, which breaks a Cooper pair when a fermion falls into the Rydberg potential. Meanwhile, the fermionic atom from a broken Cooper pair in the Rydberg molecule causes a bump in the local density, reminiscent to the occupation of the vortex core by unpaired fermions . For the BEC case demonstrated in the right column of Fig. [Fig:Srn71Profiles], there are also two bound states localized in the Rydberg potential. However, the two localized states have adjacent energies and similar wave functions, as can be seen in the right column of Fig. [Fig:Ryd05BS]. Therefore, two fermions are constituting the first bound state in the Rydberg potential in the BEC regime. This is a direct indication that a tightly-bound Cooper pair falls into the Rydberg potential and forms a three-body bound state, or a Rydberg molecule with a Rydberg atom and a Cooper pair. Therefore, a triatomic Rydberg molecule forms in the BEC case presented here. For the Sr(71S) Rydberg potential, however, the depth is too shallow to host additional bound states. We also estimate the Cooper-pair size using Eq.  in the main text and compare it with the width of the Rydberg potential. For the Sr(71S) Rydberg state, the width of the furthest well is about 0.04*L* after placing it at *x*/*L* = 0.5 of the box. For the BCS case selected above, Δ/*E**f* ≈ 0.18 and *k**f**L* ≈ 98, which lead to *ξ*/*L* ≈ 0.06. Meanwhile, Δ/*E**f* ≈ 2.9 and *k**f**L* ≈ 121 for the shallow-BEC case selected above, so *ξ*/*L* ≈ 0.003. Therefore, the width of the Cooper pairs on the BCS (BEC) side is larger (smaller) than the width of the primary Rydberg-potential well. This also corroborates that a fermion from a broken Cooper pair is trapped by the Rydberg potential to form a diatomic Rydberg molecule when the Fermi superfluid is on the BCS side. In contrast, the relatively shallow Rydberg potential traps a Cooper pair on the BEC side in its primary well, forming ``a Cooper pair in a molecule". Moreover, as compared to the relatively deep Rydberg-potential discussed in the main text, the secondary well of the shallower Rydberg potential presented here does not support bound states. [Fig:Ryd025Profile] Adjusting the location of the Rydberg potential =============================================== Here we take the potential of the Sr(71S) Rydberg state but set *R*0 = *L*/4, i.e., *α* = 1/4. Using this scaling, the depth of the potential increases due to the relatively smaller *R*0, but the width of the potential reduces. Fig. [Fig:Ryd025Profile] presents the profiles of the gap function and the density on the BCS side and BEC side, respectively, under the Sr(71S) Rydberg potential. Similar to the *α* = 1/2 case, the pairing is suppressed on the BCS side but is enhanced on the BEC side. Meanwhile, the density exhibits a peak on both BCS and BEC sides, signaling the emergence of bound states. [Fig:Ryd025states] The left column of Fig. [Fig:Ryd025states] illustrates the localized eigenstates extracted from the BdG equation corresponding to the setup on the BCS side depicted in Fig. [Fig:Ryd025Profile]. It becomes apparent that there are again two bound states with clearly separated binding energies. By examining the nodes of the wave functions, we can infer that they are the first and second bound states of the Rydberg potential. Therefore, the Rydberg atom breaks a Cooper pair and traps a fermionic atom to form a diatomic Rydberg molecule. In contrast, when the Fermi superfluid is in the BEC regime, the right column of Fig. [Fig:Ryd025states] shows the localized eigenstates corresponding to the right column of Fig. [Fig:Ryd025Profile]. There are two bound states with adjacent binding energies and similar wave functions, indicating that a Cooper pair has been trapped by the Rydberg potential. Therefore, the triatomic Rydberg molecule in this case consists of the Rydberg atom and a Cooper pair trapped by its potential. Squeezing the Rydberg potential within the box increases its depth and reduces its width, but it is still not sufficient to break a tightly bound Cooper pair in the shallow BEC regime in this case. Details of bound states ======================= The bound state wave functions *v**n*(*x*) of the Rydberg potential are summarized in Fig. [Fig:Rbn62BS] of the main text. Here, for reasons of completeness, we provide the full bound-state wave functions *u**n* and *v**n* of the Fermi superfluid under the Rydberg potential shown in Fig. [Fig:Demo] of the main text. Specifically, Fig. [Fig:Rbn62BCSstates] presents the localized eigenstates in the Rydberg potential which are extracted from the BdG analysis in the BCS case. Panels (a) to (f) depict, in a hierarchical order, the lowest-energy bound state to the highest-energy one. Since the bound-state energies are well separated and each bound state corresponds to a fermion, the bound states of Fig. [Fig:Rbn62BCSstates] are indicative of diatomic molecule formation. [Fig:Rbn62BCSstates] Fig. [Fig:Rbn62BECstates] shows the bound states calculated from the BdG equation in the BEC regime utilizing the same Rydberg potential. As can be seen, there are two energetically distinct bound states localized in the furthest well [Fig. [Fig:Rbn62BECstates](a), (b)] followed by two almost identical lowest-energy bound states localized in the secondary well [Fig. [Fig:Rbn62BECstates](c), (d)] and another two almost identical bound states localized in the furthest well [Fig. [Fig:Rbn62BECstates](e), (f)]. Each pair of the twin bound states corresponds to a Cooper pair trapped by the Rydberg potential. However, the secondary well on the left traps a Cooper pair into its lowest bound state [Fig. [Fig:Rbn62BECstates](c), (d)], while the furthest well traps a Cooper pair as its third vibrational state [Fig. [Fig:Rbn62BECstates](e), (f)]. [Fig:Rbn62BECstates] Exciting Rydberg atoms within the Fermi superfluid ================================================== Here, instead of introducing bosonic isotopes or atoms of different species as Rydberg impurities in an atomic Fermi superfluid, we consider the alternative process of exciting the atoms of the Fermi superfluid to produce Rydberg atoms. In this case, our description should be modified in order to account for the excitation of Rydberg atoms from the Fermi superfluid. This process is modelled by means of considering the additional Hamiltonian term *H**e**x* = *g**l*∫*d**x*(*d*†*ψ̃*†(*x*)*ψ*↓(*x*)*ψ*↑(*x*) + *h*.*c*.) which, in practice, converts a Cooper pair into a Rydberg atom and an unpaired fermion. In this expression, *ψ̃*† is the creation operator of a fermion from the broken Cooper pair but without excitation to the Rydberg state, *d*† refers to the creation operator of a Rydberg atom, and *g**l* is the coupling constant for exciting the Rydberg atoms. Since a Cooper pair needs to be broken to create a Rydberg atom in this case, *g**l* is expected to be larger than the pairing coupling constant *U*. If the Rydberg-atom density *n**R* from the broken Cooper pairs satisfies *n**R**R*03 ≪ 1, with *R*0 denoting the range of the Rydberg potential, we may approximate ⟨*d*†*ψ̃*†⟩ ≈ ⟨*d*†*d*⟩ = *n**R* = ∫*d**x*⟨*ψ̃*†(*x*)*ψ̃*(*x*)⟩. Following this crude approximation, the excitation Hamiltonian becomes *H**e**x* ≈ *g**l**n**R*∫*d**x*(*ψ*↓(*x*)*ψ*↑(*x*) + *h*.*c*.). This has a form similar to the pairing terms in the BCS Hamiltonian. In particular, if it is combined with the BCS Hamiltonian, the resulting Hamiltonian leads to a suppressed pairing gap Δʹ = Δ − *g**l**n**R*, where Δ is the pairing gap in the absence of the Rydberg excitation. Therefore, exciting the Rydberg atoms directly from the Fermi superfluid instead of introducing different isotopes or species results in a suppression of the pairing gap and favors the formation of a diatomic Rydberg molecule with a fermion from a broken Cooper pair. Moreover, the Rydberg molecules are homonuclear if the Rydberg atoms are excitations of the Fermi superfluid. In contrast, the Rydberg molecules discussed in the main text are heteronuclear because the Rydberg atoms are different from the fermions in the superfluid. [Fig:Rbn62SP] Rydberg molecules in noninteracting gases ========================================= The energetically lowest six bound states obtained from the Schrödinger equation with the outer double-well approximation of the Rb(62S) Rydberg potential are provided in Fig. [Fig:Rbn62SP]. Here, we follow the same scaling process (i.e., *α* = 0.5) to place the furthest dip of the Rydberg potential at the middle of the box. We consider only one component of fermions here, as the other component will result in exactly the same results in a noninteracting system. By analyzing the number of nodes and the localization position of the wave functions, one can identify the series of the bound states localized in the furthest well of the Rydberg potential and the ones located in the secondary well. For example, panels (a), (b), (d), (e) of Fig. [Fig:Rbn62SP] showcase the first four bound states localized in the furthest dip, while panels (c) and (f) depict the two lowest bound states localized in the secondary well. Given the well-separated energies of the bound states, all bound states shown here correspond to diatomic Rydberg molecules between the Rydberg atom and a fermion. Their binding energies are extracted from the eigenvalues of
arxiv_0000198
above, and shows how the design of an algorithm is strictly related to the problem under investigation. As already discussed in Section [acsop], our algorithm is optimized to exploit time windows. The hierarchic approach we adopt goes into this direction by preparing promising solution fragments that can be exchanged with fragments of the current solution characterized by similar time windows. This introduces a layer of complexity which slows down the method in case time windows are not present. Some instances exist for a problem which is slightly different from TOPTW: the Selective Vehicle Routing Problem with Time Windows, which is described in Boussier et al.. This problem has some complications with respect to TOPTW described in Section [pd]. A demand is associated with each node, and vehicles have a limited capacity. Moreover, the maximum length of tours is constrained, and consequently the departure time from the depot has also to be optimized. Since our method is not designed to cope with these constraints, these instances have not been considered. In particular, dealing with the optimization of the departure times would mean to redesign both the construction phase of the ACS algorithm, and the local search component. For the reasons above, we decided to carry out experiments on the same instances used for the tests summarized in Section [eoptw] for the OPTW (leaving out Solomon’s problems with 50 nodes). In this case we added a parameter indicating the number of vehicles available, *m*. In the test we present, we consider 2 ≤ *m* ≤ 4. Note that also the problems originally proposed in Chao et al. for the TOP use the same range of values for parameter *m*. ### Experimental results Results are summarized in Tables [t2], [t3] and [t4]. Columns have the same meaning as for Table [t1] in Section [res], only columns *Known bounds* and *GVNS* are not reported here (no information is available). Tables [t2], [t3] and [t4] suggest that the algorithm we propose is suitable also for problems with more than one vehicles. No upper bound has been computed for these problems, so it is not possible to estimate the absolute quality of the solutions retrieved. However, we expect these problems to be more difficult than the OPTW problems considered in Section [res] over the same instances. The reason is that the length of each feasible solution increases as *m* increases. By crosschecking the tables, it is also interesting to observe that the contribution of each additional vehicles is more and more marginal as *m* increases. It is finally interesting to observe how, on some problems (typically Cordeau’s instances *pr\**), the time at which the best solution is retrieved approaches the time limit of 3600 seconds. This suggests that longer computation times might have allowed better results. |c|ccc|ccc| [t2] Problem&& &Min&Avg&Max (Nodes)&Min&Avg&Max Problem&& &Min&Avg&Max (Nodes)&Min&Avg&Max c101\_ 100 & 580 & 588.0 & 590 (21) & 49.57 & 110.83 & 163.36 c102\_ 100 & 660 & 660.0 & 660 (22) & 279.25 & 1427.36 & 3381.34 c103\_ 100 & 710 & 710.0 & 710 (21) & 430.59 & 938.86 & 1777.64 c104\_ 100 & 750 & 754.0 & 760 (22) & 314.14 & 1355.04 & 2048.16 c105\_ 100 & 640 & 640.0 & 640 (21) & 449.59 & 1436.61 & 3038.13 c106\_ 100 & 620 & 620.0 & 620 (20) & 46.75 & 104.61 & 207.26 c107\_ 100 & 670 & 670.0 & 670 (22) & 24.37 & 76.02 & 140.65 c108\_ 100 & 680 & 680.0 & 680 (22) & 5.80 & 95.34 & 251.49 c109\_ 100 & 720 & 720.0 & 720 (22) & 282.05 & 1817.29 & 2822.51 c201\_ 100 & 1450 & 1452.0 & 1460 (66) & 93.62 & 508.59 & 1551.86 c202\_ 100 & 1440 & 1446.0 & 1460 (66) & 223.74 & 1568.22 & 3140.17 c203\_ 100 & 1440 & 1446.0 & 1460 (65) & 1846.00 & 2334.80 & 3220.08 c204\_ 100 & 1430 & 1436.0 & 1440 (63) & 206.84 & 1373.22 & 2942.62 c205\_ 100 & 1450 & 1452.0 & 1460 (65) & 31.12 & 925.37 & 2566.74 c206\_ 100 & 1460 & 1460.0 & 1460 (65) & 678.24 & 1876.39 & 2690.34 c207\_ 100 & 1450 & 1452.0 & 1460 (65) & 189.21 & 1433.98 & 3193.58 c208\_ 100 & 1460 & 1462.0 & 1470 (66) & 20.38 & 1164.20 & 1605.87 r101\_ 100 & 349 & 349.0 & 349 (16) & 2.06 & 36.48 & 107.15 r102\_ 100 & 505 & 506.6 & 508 (21) & 797.60 & 2006.59 & 3549.10 r103\_ 100 & 517 & 518.8 & 520 (22) & 459.69 & 1095.06 & 2583.94 r104\_ 100 & 538 & 540.0 & 544 (23) & 1708.55 & 2291.80 & 3367.68 r105\_ 100 & 453 & 453.0 & 453 (20) & 44.19 & 1037.28 & 1687.62 r106\_ 100 & 515 & 523.8 & 529 (21) & 1125.75 & 2034.73 & 3320.77 r107\_ 100 & 526 & 527.4 & 529 (21) & 552.34 & 2357.58 & 3421.24 r108\_ 100 & 548 & 553.6 & 556 (24) & 839.84 & 1906.24 & 2921.28 r109\_ 100 & 505 & 505.4 & 506 (22) & 90.30 & 1386.61 & 2301.20 r110\_ 100 & 523 & 523.4 & 525 (24) & 164.43 & 1180.21 & 2279.10 r111\_ 100 & 533 & 535.6 & 538 (23) & 717.88 & 1726.17 & 3292.45 r112\_ 100 & 539 & 541.6 & 543 (213 & 540.79 & 1653.61 & 3122.94 r201\_ 100 & 1231 & 1236.0 & 1239 (70) & 1766.97 & 2693.58 & 3533.81 r202\_ 100 & 1288 & 1298.6 & 1310 (78) & 538.95 & 2482.22 & 3295.88 r203\_ 100 & 1341 & 1349.4 & 1358 (80) & 2460.71 & 3057.40 & 3543.20 r204\_ 100 & 1397 & 1399.0 & 1404 (89) & 1516.41 & 2713.46 & 3625.27 r205\_ 100 & 1336 & 1342.0 & 1346 (82) & 1909.67 & 2593.74 & 3207.82 r206\_ 100 & 1367 & 1373.8 & 1381 (84) & 2347.97 & 2954.37 & 3414.95 r207\_ 100 & 1367 & 1385.0 & 1400 (88) & 1079.52 & 2624.08 & 3529.26 r208\_ 100 & 1416 & 1425.4 & 1433 (92) & 1913.39 & 2828.88 & 3475.34 r209\_ 100 & 1344 & 1350.6 & 1361 (86) & 2263.24 & 2988.88 & 3424.69 r210\_ 100 & 1351 & 1355.4 & 1360 (81) & 1062.36 & 2423.89 & 3377.75 r211\_ 100 & 1396 & 1403.0 & 1411 (90) & 2005.61 & 2726.10 & 3265.09 rc101\_ 100 & 427 & 427.0 & 427 (19) & 7.32 & 27.90 & 99.56 rc102\_ 100 & 494 & 497.2 & 505 (20) & 319.09 & 1496.91 & 3075.49 rc103\_ 100 & 506 & 510.2 & 516 (21) & 426.05 & 1965.91 & 3533.84 rc104\_ 100 & 565 & 568.8 & 575 (23) & 1583.98 & 2381.26 & 3445.15 rc105\_ 100 & 478 & 478.4 & 480 (21) & 940.80 & 1676.74 & 3369.17 rc106\_ 100 & 480 & 480.2 & 481 (20) & 1121.33 & 1865.07 & 3173.21 rc107\_ 100 & 526 & 530.2 & 534 (21) & 231.60 & 771.48 & 2045.81 rc108\_ 100 & 531 & 541.6 & 550 (22) & 194.45 & 820.99 & 2313.77 rc201\_ 100 & 1354 & 1365.4 & 1376 (66) & 528.52 & 1654.27 & 3495.90 rc202\_ 100 & 1457 & 1464.8 & 1472 (75) & 1076.83 & 2619.56 & 3516.05 rc203\_ 100 & 1523 & 1542.2 & 1573 (80) & 882.99 & 2176.18 & 3476.60 rc204\_ 100 & 1608 & 1614.8 & 1622 (89) & 1202.02 & 2561.77 & 3573.06 rc205\_ 100 & 1405 & 1417.4 & 1428 (73) & 504.28 & 2115.17 & 3322.36 rc206\_ 100 & 1473 & 1495.6 & 1514 (78) & 975.66 & 2105.05 & 3253.70 rc207\_ 100 & 1501 & 1523.0 & 1544 (80) & 1852.18 & 2937.45 & 3424.19 rc208\_ 100 & 1587 & 1608.4 & 1646 (88) & 1459.34 & 2572.30 & 3455.41 pr01\_ 48 & 502 & 502.0 & 502 (32) & 28.15 & 602.21 & 1918.67 pr02\_ 96 & 702 & 705.2 & 714 (439) & 289.31 & 1017.60 & 1850.88 pr03\_ 144 & 727 & 731.4 & 740 (40) & 1308.75 &2083.02 &2628.40 pr04\_ 192 & 877 & 883.6 & 899 (50) & 682.31 & 2439.32 & 3409.04 pr05\_ 240 & 1016 & 1021.8 & 1034 (55) & 442.58 & 2278.50 & 3383.48 pr06\_ 288 & 977 & 987.6 & 995 (51) & 369.49 & 1724.14 & 2648.17 pr07\_ 72 & 566 & 566.0 & 566 (33) & 354.18 & 972.26 & 1669.67 pr08\_ 144 & 808 & 813.0 & 819 (44) & 277.53 & 1729.99 & 2998.38 pr09\_ 216 &851 & 862.0 & 880 (48) & 2628.21 & 3130.95 & 3594.25 pr10\_ 288 & 1053 & 1064.6 & 1078 (57) & 2544.51 & 2918.58 & 3534.76 pr11\_ 48 & 544 & 544.8& 547 (37) &60.64 &880.30& 3380.38 pr12\_ 96 & 759& 763.4 &768 (44) & 1682.31& 2706.06 &3466.05 pr13\_ 144 & 788 &798.8 &816 (45) &2700.42& 3240.72& 3564.82 pr14\_ 288 & 936 & 943.6 & 952 (54) & 1240.28 & 2100.70 & 3187.72 pr15\_ 72 & 1090 & 1100.0 & 1120 (59) & 2653.13 & 3188.79 & 3447.06 pr16\_ 144 & 1089 & 1101.0 & 1119 (58) & 1924.75 & 2833.99 & 3562.92 pr17\_ 72 & 635 &646.2& 652 (39) &64.69 &1378.00 &2951.05 pr18\_ 144& 874 & 888.4 & 907 (47) & 489.81 & 1960.26 & 3372.53 pr19\_ 216& 932 & 940.8 & 950 (54) & 1419.52 & 2366.93 & 3152.26 pr20\_ 288& 1102 & 1113.0 & 1122 (62) & 2549.27 & 3192.33 & 3372.58 |c|ccc|ccc| [t3] Problem&& &Min&Avg&Max (Nodes)&Min&Avg&Max Problem&& &Min&Avg&Max (Nodes)&Min&Avg&Max c101\_ 100 & 810 & 810.0 & 810 (31) & 71.25 & 1515.58 & 3320.80 c102\_ 100 & 910 & 918.0 & 920 (34) & 106.55 & 442.80 & 1121.05 c103\_ 100 & 960 & 972.0 & 980 (33) & 91.88 & 1415.30 & 2230.89 c104\_ 100 & 990 & 1004.0 & 1020 (35) & 476.02 & 681.70 & 856.47 c105\_ 100 & 870 & 870.0 & 870 (33) & 16.07 & 1010.01 & 2143.67 c106\_ 100 & 870 & 870.0 & 870 (32) & 548.69 & 1132.31 & 2698.26 c107\_ 100 & 910 & 910.0 & 910 (34) & 184.38 & 892.65 & 2521.55 c108\_ 100 & 910 & 912.0 & 920 (33) & 82.40 & 971.24 & 3266.16 c109\_ 100 & 950 & 954.0 & 970 (34) & 383.17 & 1327.54 & 2319.47 c201\_ 100 & 1810 & 1810.0 & 1810 (100) & 4.43 & 259.20 & 1082.66 c202\_ 100 & 1780 & 1784.0 & 1790 (98) & 297.39 & 1603.23 & 3271.71 c203\_ 100 & 1710 & 1738.0 & 1760 (95) & 15.86 & 1039.94 & 3351.56 c204\_ 100 & 1750 & 1758.0 & 1770 (96) & 1.12 & 1025.80 & 3526.50 c205\_ 100 & 1790 & 1796.0 & 1800 (99) & 662.65 & 2007.29 & 2966.72 c206\_ 100 & 1780 & 1788.0 & 1800 (99) & 235.53 & 1861.98 & 2459.64 c207\_ 100 & 1780 & 1784.0 & 1790 (98) & 37.69 & 1222.95 & 3518.62 c208\_ 100 & 1780 & 1786.0 & 1800 (99) & 116.61 & 1542.72 & 2783.29 r101\_ 100 & 481 & 481.0 & 481 (21) & 35.79 & 240.56 & 659.12 r102\_ 100 & 674 & 682.0 & 691 (31) & 1930.56 & 2435.40 & 3287.68 r103\_ 100 & 726 & 729.8 & 736 (33) & 1328.00 & 2694.89 & 3592.84 r104\_ 100 & 750 & 760.6 & 773 (34) & 1394.80 & 2531.35 & 3452.78 r105\_ 100 & 619 & 619.2 &620 (29) & 517.98 & 795.97 & 1193.39 r106\_ 100 & 708 & 715.0 & 722 (31) & 353.40 & 755.45 & 985.10 r107\_ 100 & 748 & 751.6 & 757 (34) & 576.31 & 1998.89 & 2850.31 r108\_ 100 & 777 & 781.8 & 790 (36) & 1343.73 & 2388.53 & 3247.86 r109\_ 100 & 695 & 701.8 & 710(32) & 96.33 & 1098.83 & 2644.81 r110\_ 100 & 722 & 732.6 & 737 (34) & 187.77 & 1708.47 & 3466.45 r111\_ 100 & 718 & 755.4 & 770 (34) & 81.13 & 1286.72 & 3265.10 r112\_ 100 & 750 & 758.8 & 769 (33) & 1346.43 & 2091.22 & 3039.20 r201\_ 100 & 1424 & 1428.4 & 1432 (95) & 522.14 & 1399.97 & 2710.82 r202\_ 100 & 1441 & 1445.6 & 1449 (97) & 492.80 & 1966.22 & 3451.92 r203\_ 100 & 1443 & 1452.4 & 1456 (99) & 972.29 & 2564.33 & 3439.09 r204\_ 100 & 1458 & 1458.0 & 1458 (100) & 5.78 & 174.66 & 385.56 r205\_ 100 & 1449 & 1454.8 & 1458 (100) & 386.75 & 1830.71 & 2629.03 r206\_ 100 & 1455 & 1456.8 & 1458 (100) & 15.64 & 1328.82 & 3313.74 r207\_ 100 & 1458 & 1458.0 & 1458 (100) & 7.49 & 598.63 & 2466.06 r208\_ 100 & 1458 & 1458.0 & 1458 (100) & 0.13 & 425.64 & 1964.77 r209\_ 100 & 1455 & 1456.8 & 1458 (100) & 63.77 & 1163.74 & 2681.52 r210\_ 100 & 1454 & 1456.2 & 1458 (100) & 347.09 & 1428.08 & 3400.61 r211\_ 100 & 1457 & 1457.8 & 1458 (100) & 0.68 & 7.33 & 20.39 rc101\_ 100 & 618 & 620.4 & 621 (29) & 16.39 & 195.67 & 389.49 rc102\_ 100 & 695 & 698.8 & 710 (31) & 153.47 & 1145.53 & 2918.14 rc103\_ 100 & 735 & 741.4 & 747 (31) & 1399.25 & 2753.95 & 3488.21 rc104\_ 100 & 793 & 813.2 & 823 (35) & 1364.88 & 2276.97 & 3473.90 rc105\_ 100 & 669 & 677.8 & 682 (30) & 132.26 & 1092.01 & 2976.23 rc106\_ 100 & 688 & 692.4 & 695 (30) & 437.09 & 1748.26 & 3553.41 rc107\_ 100 & 742 & 748.0 & 755 (32) & 208.06 & 1158.45 & 2288.11 rc108\_ 100 & 757 & 768.2 & 783 (35) & 665.06 & 1443.67 & 2548.13 rc201\_ 100 & 1663 & 1672.0 & 1681 (92) & 234.65 & 1084.91 & 2316.65 rc202\_ 100 & 1690 & 1701.4 & 1706 (97) & 308.85 & 2230.74 & 3460.65 rc203\_ 100 & 1714 & 1719.6 & 1724 (100) & 30.28 & 1213.17 & 3513.87 rc204\_ 100 & 1721 & 1722.2 & 1724 (100) & 32.10 & 1261.26 & 3261.65 rc205\_ 100 & 1656 & 1672.2 & 1698 (96) & 22.96 & 1439.79 & 2388.08 rc206\_ 100 & 1699 & 1712.6 & 1722 (99) & 1089.45 & 2185.22 & 3358.51 rc207\_ 100 & 1702 & 1714.8 & 1722 (99) & 1355.01 & 2553.89 & 3175.64 rc208\_ 100 & 1721 & 1722.2 & 1724 (100) & 15.33 & 893.85 & 2146.41 pr01\_ 48 & 619 & 619.0 & 619 (43) & 54.34 & 116.95 & 227.27 pr02\_ 96 & 935 & 938.4 & 942 (56) & 1084.95 & 2333.11 & 3485.29 pr03\_ 144 & 984 & 991.0 &999 (60) & 1893.28 & 876.82 & 3559.88 pr04\_ 192 & 1220 & 1226.4 & 1243 (69) & 2144.70 & 2909.04 & 3561.60 pr05\_ 240 & 1379 & 1386.6 & 1417 (78) & 1983.09 & 2746.66 & 3293.53 pr06\_ 288 & 1345 & 1359.0 & 1370 (71) & 1538.31 & 2719.67 & 3467.55 pr07\_ 72 & 735 & 738.4 & 744 (49) & 246.67 & 1330.57 & 2812.40 pr08\_ 144 & 1112 & 1115.0 & 1118 (59) & 335.57 & 2544.33 & 3566.88 pr09\_ 216 & 1203 & 1210.0 & 1227 (69) & 2579.76 & 3187.73 & 3467.97 pr10\_ 288 & 1418 & 1457.2 & 1492 (80) & 1042.91 & 2873.12 & 3447.16 pr11\_ 48 & 649& 649.0 &649 (46) &62.74 & 237.25 & 404.33 pr12\_ 96 & 960 & 970.6 & 985 (59) & 2424.98 & 2883.74 & 3356.37 pr13\_ 144 & 1080 & 1088.8 & 1101 (68) & 1563.33 & 2515.51 & 2803.83 pr14\_ 192 & 1252 & 1258.6 & 1263 (74) & 1312.17 & 2084.66 & 249.92 pr15\_ 240 & 1455 & 1486.8 & 1509 (83) & 1175.35 & 2671.82 & 3404.18 pr16\_ 288 & 1461 & 1481.8 & 1516 (77) & 198.84 & 2284.53 & 3410.70 pr17\_ 72T & 821 & 822.6 & 832 (54) & 564.12 & 2049.56 & 3174.96 pr18\_ 144 & 1174 & 1209.0 & 1229 (68) & 2199.10 & 2854.33 & 3401.15 pr19\_ 216 & 1289 & 1299.4 & 1320 (75) & 2425.89 & 3094.80 & 3487.74 pr20\_ 288 & 1484 & 1491.2 & 1505 (83) & 1739.74 & 2822.96 & 3268.58 |c|ccc|ccc| [t4] Problem&& &Min&Avg&Max (Nodes)&Min&Avg&Max Problem&& &Min&Avg&Max (Nodes)&Min&Avg&Max c101\_ 100 & 1010 & 1018.0 & 1020 (41) & 105.51 & 1049.07 & 1574.07 c102\_ 100 & 1140 & 1142.0 & 1150 (45) & 195.35 & 1211.26 & 2876.07 c103\_ 100 & 1180 & 1186.0 & 1190 (46) & 1755.25 & 2329.56 & 2776.30 c104\_ 100 & 1220 & 1226.0 & 1240 (46) & 256.95 & 1493.94 & 2746.12 c105\_ 100 & 1050 & 1052.0 & 1060 (43) & 23.33 & 716.48 & 2898.51 c106\_ 100 & 1050 & 1058.0 & 1070 (42) & 32.26 & 556.80 & 1188.74 c107\_ 100 & 1110 & 1114.0 & 1120 (44) & 106.84 & 411.26 & 1204.96 c108\_ 100 & 1110 & 1112.0 & 1120 (43) & 319.07 & 820.09 & 2339.76 c109\_ 100 & 1160 & 1172.0 & 1190 (46) & 75.78 & 915.97 & 2176.46 c201\_ 100 & 1810 & 1810.0 & 1810 (100) & 0.33 & 1.15 & 2.36 c202\_ 100 & 1810 & 1810.0 & 1810 (100) & 0.32 & 8.87 & 36.38 c203\_ 100 & 1810 & 1810.0 & 1810 (100) & 7.86 & 43.90 & 68.74 c204\_ 100 & 1810 & 1810.0 & 1810 (100) & 0.47 & 1.31 & 3.10 c205\_ 100 & 1810 & 1810.0 & 1810 (100) & 0.03 & 0.25 & 0.57 c206\_ 100 & 1810 & 1810.0 & 1810 (100) & 0.00 & 0.13 & 0.39 c207\_ 100 & 1760 & 1800.0 & 1810 (100) & 0.00 & 5.80 & 28.71 c208\_ 100 & 1810 & 1810.0 & 1810 (100) & 0.02 & 0.15 & 0.48 r101\_ 100 & 608 & 608.0 & 608 (29) & 9.48 & 55.13 & 203.19 r102\_ 100 & 817 & 825.6 & 836 (41) & 788.47 & 1924.54 & 2676.62 r103\_ 100 & 893 & 902.2 & 909 (43) & 798.91 & 2622.17 & 3532.33 r104\_ 100 & 931 & 944.2 & 957 (46) & 334.64 & 2343.92 & 3192.24 r105\_ 100 & 761 & 766.0 & 771 (38) & 155.49 & 838.47 & 2394.99 r106\_ 100 & 885 & 889.8 & 893 (41) & 47.98 & 929.50 & 3402.78 r107\_ 100 & 930 & 932.4 & 937 (44) & 1357.96 & 2660.90 & 3523.34 r108\_ 100 & 964 & 976.2 & 994 (48) & 1038.25 & 2596.16 & 3533.89 r109\_ 100 & 873 & 876.6 & 879 (40) & 157.12 & 1374.46 & 2708.17 r110\_ 100 & 886 & 900.0 & 908 (43) & 370.88 & 926.27 & 1686.63 r111\_ 100 & 916 & 932.4 & 944 (45) & 855.90 & 1896.35 & 2866.03 r112\_ 100 & 940 & 947.6 & 954 (45) & 900.41 & 1662.61 & 2496.41 r201\_ 100 & 1458 & 1458.0 & 1458 (100) & 127.45 & 376.41 & 827.34 r202\_ 100 & 1458 & 1458.0 & 1458 (100) & 182.35 & 936.24 & 2728.65 r203\_ 100 & 1458 & 1458.0 & 1458 (100) & 0.02 & 73.92 & 352.55 r204\_ 100 & 1458 & 1458.0 & 1458 (100) & 0.00 & 0.01 & 0.02 r205\_ 100 & 1458 & 1458.0 & 1458 (100) & 0.00 & 1.31 & 5.59 r206\_ 100 & 1458 & 1458.0 & 1458 (100) & 0.00 & 0.01 & 0.06 r207\_ 100 & 1458 & 1458.0 & 1458 (100) & 0.00 & 0.00 & 0.00 r208\_ 100 & 3456 & 3456.0 & 1458 (100) & 0.00 & 0.01 & 0.03 r209\_ 100 & 1458 & 1458.0 & 1458 (100) & 0.00 & 0.00 & 0.00 r210\_ 100 & 1458 & 1458.0 & 1458 (100) & 0.05 & 3.13 & 11.16 r211\_ 100 & 1458 & 1458.0 & 1458 (100) & 0.00 & 0.00 & 0.00 rc101\_ 100 & 801 & 805.4 & 808 (37) & 84.48 & 1324.33 & 3199.64 rc102\_ 100 & 896 & 899.4 & 903 (42) & 1210.62 & 2218.81 & 3479.62 rc103\_ 100 & 937 & 941.8 & 948 (42) & 288.47 & 2005.21 & 3160.23 rc104\_ 100 & 982 & 1013.2 & 1052 (47) & 1331.32 & 2139.30 & 3140.49 rc105\_ 100 & 860 & 867.2 & 875 (40) & 134.31 & 1052.30 & 2201.45 rc106\_ 100 & 888 & 901.4 & 908 (39) & 559.05 & 2106.67 & 2892.46 rc107\_ 100 & 952 & 959.2 & 964 (45) & 104.30 & 1763.11 & 3285.91 rc108\_ 100 & 995 & 1000.2 & 1007 (47) & 746.11 & 2222.29 & 3534.34 rc201\_ 100 & 1724 & 1724.0 & 1724 (100) & 854.00 & 2327.40 & 3427.79 rc202\_ 100 & 1724 & 1724.0 & 1724 (100) & 27.23 & 1095.77 & 2801.97 rc203\_ 100 & 1724 & 1724.0 & 1724 (100) & 0.00 & 308.58 & 1463.78 rc204\_ 100 & 1724 & 1724.0 & 1724 (100) & 0.00 & 53.33 & 266.67 rc205\_ 100 & 1724 & 1724.0 & 1724 (100) & 360.01 & 1313.73 & 2126.78 rc206\_ 100 & 1722 & 1723.6 & 1724 (100) & 0.13 & 2.79 & 6.45 rc207\_ 100 & 1722 & 1722.4 & 1724 (100) & 1.28 & 72.12 & 172.67 rc208\_ 100 & 1724 & 1724.0 & 1724 (100) & 0.00 & 0.00 & 0.00 pr01\_ 48 & 654 & 655.2 & 657 (48) & 0.52 & 15.43 & 50.61 pr02\_ 96 & 1063 & 1067.8 & 1072 (68) & 1185.62 & 2164.96 & 3146.01 pr03\_ 144 & 1190 & 1204.6 & 1222 (77) & 817.89 & 2073.95 & 3476.42 pr04\_ 192 & 1503 & 1506.0 & 1515 (85) & 2255.22 & 2986.02 & 3414.45 pr05\_ 240 & 1710 &1722.2 &1740 (94) &1710.87 &2818.47 &3528.73 pr06\_ 288 & 1709 & 17235.4 & 1740 (95) & 1673.42 & 2935.83 & 3445.90 pr07\_ 72 & 860 & 865.4 & 872 (58) & 35.40 & 1940.19 & 3318.06 pr08\_ 144 & 1337 & 1351.6 & 1376 (79) & 1247.71 & 2804.25 & 3578.14 pr09\_ 216 & 1544 & 1554.2 & 1561 (91) & 3174.85 & 3371.92 & 3586.72 pr10\_ 288 & 1813 & 1819.8 & 1827 (101) & 2415.82 & 3365.93 & 3456.00 pr11\_ 48 & 657 & 657.0 & 657 (48) & 0.00 & 0.09 & 0.35 pr12\_ 96 & 1110 & 1114.0 & 1118 (74) & 1736.62 & 2486.72 & 3002.43 pr13\_ 144 &1303 & 1316.4 & 1329 (85) & 2039.13 & 2926.28 & 3399.21 pr14\_ 192 & 1521 & 1534.4 & 1568 (91) & 1810.47 & 2730.20 & 3268.35 pr15\_ 240& 1800 & 1822.8 & 1854 (102) & 1820.47 & 2631.27 & 3445.79 pr16\_ 288& 1833 & 1867.2 & 1887 (104) & 1140.65 & 2614.66 & 3477.03 pr17\_ 72& 923 & 923.8 & 925 (65) & 1557.48 & 2749.67 & 3450.60 pr18\_ 144& 1455 & 1464.8 & 1470 (82) & 2621.62 & 3209.13 & 3478.46 pr19\_ 216& 1567 & 1579.4 & 1596 (96) & 2740.67 & 3293.94 & 3586.13 pr20\_ 288& 1811 & 1821.4 & 1841 (98) & 2661.58 & 3193.02 & 3472.65 Conclusions =========== An algorithm based on the Ant Colony System paradigm has been discussed for the Team Orienteering Problem with Time Windows. The method is based on the solution of a hierarchic generalization of the original Team Orienteering Problem. Experimental results on benchmarks instances previously adopted in the literature confirm the practical effectiveness of the algorithm we propose. **Acknowledgement** The authors are indebted to Matteo Salani and Giovanni Righini for the useful discussions and for having provided the benchmark instances used in the experiments. AN ANT COLONY SYSTEM FOR TEAM ORIENTEERING PROBLEMS WITH TIME WINDOWS ===================================================================== This paper discusses a heuristic approach for Team Orienteering Problems with Time Windows. The method we propose takes advantage of a solution model based on a hierarchic generalization of the original problem, which is combined with an Ant Colony System algorithm. Computational results on benchmark instances previously adopted in the literature suggest that the algorithm we propose is effective in practice. team orienteering; time windows; metaheuristics; ant colony optimization. Introduction ============ Orienteering is a sport in which a competitor has to select a path from a starting point to a final destination, going through control points along the path. Each control point has a score associated with it. A travel cost is associated with each pair of control points. The competitor has to select a set of control points to be visited, so that the total score is maximized, while the total travel cost does not exceed a given threshold. As described above, the problem was first introduced as the Orienteering Problem (OP) by Tsiligrides (the problem is sometimes referred to as the Selective Traveling Salesman Problem, see Laporte and Martello ). The Team Orienteering Problem (TOP) is a generalization of the OP where competitors are organized in teams. Each team component has to coordinate itself with her/his team-mates, and the objective function is to maximize the cumulative score of the team (each point can be visited by one competitor only). The TOP was firstly studied in Butt and Cavalier under the name Multiple Path Maximum Collection Problem and its current name was introduced in Chao et al.. It is interesting to observe that, since with some elementary considerations (see Archetti et al. ), it is easy to transform paths into tours, the TOP might be seen as a member of the wide family of Vehicle Routing Problems with Profits (see, for example, Feillet et al. for a survey on the Traveling Salesman Problems with Profits). For this reason, competitors are often referred to as vehicles, the team as the fleet, points as customers, and the starting/ending point as the depot. The TOP has been recognized as a model of many different real applications, such as the multi-vehicle version of the home fuel delivery problem (Golden et al., ), the recruiting of college football players (Butt and Cavalier ), the sport game of team orienteering (Chao et al., ), some applications of pickup or delivery services involving the use of common carriers and private fleets (e.g. Hall and Racer ) and the service scheduling of routing technicians (Tang and Miller-Hooks ). Since the TOP is an NP-hard problem (Chao et al. ), the research efforts mainly focus on heuristics and metaheuristics. Butt and Cavalier presented a greedy procedure. Chao et al. proposed a five-step method and a heuristic based on a stochastic algorithm. A tabu search algorithm was proposed by Tang and Miller-Hooks. Archetti et al. proposed two tabu search algorithms and a variable neighborhood search algorithm. Ke et al. presented four methods based on Ant Colony Optimization. Not many exact algorithms have been proposed for the TOP: a column generation based algorithm (Butt and Ryan ) and a branch and price based algorithm (Boussier et al. ). Several studies are devoted to the simpler OP: heuristic algorithms (see Golden et al.,, Chao et al. ), exact approaches (see Fischetti et al. and Gendreau et al. ) and bounding procedures (see Leifer and Rosenwein and Kataoka et al. ), just to mention some of them. Several variants of the TOP exist and one of them takes Time Window constraints into account (TOPTW). Time-window constraints are motivated by different practical situations and typically arise in those routing problems where each customer/location has to be visited within a predefined time interval specified by an earliest and a latest time into which the service has to start. In our context it is easy to see how time windows can make (some of) the applications listed before more realistic. A practical application is for companies dealing in home fuel delivery, that have to select the customers to be served in a given day (prizes are priorities in this case). The only known contributions for the Orienteering Problem with Time Windows (OPTW) are due to Kantor and Rosenwein, who described a heuristic algorithm based on an exhaustive exploration of the feasible solutions space, Mansini et al., which proposed a granular variable neighborhood search heuristic algorithm, and Righini and Salani, where some exact algorithms based on dynamic programming are discussed. Two contributions for the TOPTW, Vansteenwegen et al. and Tricoire et al., are available. They both discuss metaheuristic approaches. It is interesting to observe that some publications exist for a problem similar to the TOPTW: the Selective Vehicle Routing Problem with Time Windows (SVRPTW), where vehicles capacities and customers demands are also taken into account. Boussier et al. and Gueguen proposed exact methods for this problem. The paper is organized as follows. Section [pd] formally describes the Team Orienteering Problem with Time Windows. The Ant Colony System (ACS) algorithm we propose is described in Section [acsop]. Computational results are presented and discussed in Section [exp]. Conclusions are finally drawn in Section [conc]. The Team Orienteering Problem with Time Windows =============================================== The TOPTW is defined as follows. We are given a complete undirected graph *G* = {*V*, *E*}, with a positive weight *t**i**j* associated with each edge, representing the travel time between nodes *i* and *j*. For each node *i* ∈ *V* we have the following data: *p**i* is a positive profit that is collected when the node is visited, [*a**i*, *b**i*] is a time window defining the feasible arrival time at the node and *s**i* is a non-negative service time, that is the amount of time which is spent to visit the node. Two special nodes, numbered 1 and *n*, where *n* = ∣*V*∣, are the endpoints of the path to be computed. We have *p*1 = *p**n* = 0, *s*1 = *s**n* = 0, *a*1 = *a**n* = 0 and *b*1 = *b**n* = *T*, where *T* is equal to the maximum feasible arrival time at node *n*, that is *T* = max*i* ∈ *V* ∖ {1, *n*}{*b**i* + *s**i* + *t**i**n*}. The TOPTW requires the computation of a set P of *m* non-overlapping (apart from origin and destination) elementary paths, such that each path *k* ∈ P is defined as an ordered sequence of nodes starting from node 1 and ending at node *n*. Given a solution, we indicate with *v**i* the arrival time at node *i*. We introduce a set *A* of directed arcs such that ∀{*i*, *j*} ∈ *E*, (*i*, *j*), (*j*, *i*) ∈ *A* and *t**i**j* = *t**j**i*. If we now define a binary variable *x**i**j**k* = 1 if arc (*i*, *j*) ∈ *A* is in path *k*, 0 otherwise, and an integer variable *z**i**k* containing the time customer *i* is visited in path *k* (*z**i**k* does not mean anything if customer *i* is not in path *k*, otherwise it corresponds to *v**i*), and *Q* is an arbitrary large number, a mathematical formulation for the TOPTW is the following one: $$\begin{aligned} (TOPTW) \ \ \ \ \max & \sum\_{k \in \mathcal{P}} \sum\_{(i,j) \in A} p\_i x^k\_{ij}&\label{h1} \\ \text{s.t. } & \sum\_{k \in \mathcal{P}} \sum\_{j \in V} x^k\_{ij} \leq 1 & \forall i \in V \label{h2}\\ & \sum\_{j \in V} x^k\_{1j} = 1 & \forall k \in \mathcal{P} \label{h3}\\ & \sum\_{i \in V} x^k\_{ih} - \sum\_{j \in V} x^k\_{hj} = 0 & \forall h \in V \backslash \{1, n\},\ \forall k \in \mathcal{P} \label{h4}\\ & \sum\_{i \in V} x^k\_{in} = 1 & \forall k \in \mathcal{P} \label{h5}\\ & z^{k}\_{i} + t\_{ij} + s\_i - Q(1 - x^k\_{ij}) \leq z^{k}\_{j} & \forall k \in \mathcal{P} \label{h6}\\ & a\_i \leq z^{k}\_{i} \leq b\_i & \forall i \in V,\ \forall k \in \mathcal{P} \label{h7}\\ & x^k\_{ij} \in \{0, 1\} & \forall (i,j) \in A,\ \forall k \in \mathcal{P} \label{h8}\\ & z^{k}\_{i} \in \mathcal{Z}^+ & \forall i \in V,\ \forall k \in \mathcal{P} \label{h9}\end{aligned}$$ where the objective function ([h1]) maximizes the total prize collected. Constraints ([h2]) impose that each customer is visited at most once; constraints ([h3]), ([h4]) and ([h5]) impose that paths have a feasible structure; inequalities ([h6]) and ([h7]) deal with time windows restrictions; ([h8]) and ([h9]) define the domain of variables. Note that the special case where *m* = ∣P∣ = 1 corresponds to the OPTW. An Ant Colony System for the *TOPTW* ==================================== A brief description of the general ACS paradigm is provided in Section [acs]. Section [sol] is dedicated to the introduction of the Hierarchic Team Orienteering Problem with Time Windows (HTOPTW), which is functional to the algorithm we propose. The solution model adopted by our algorithm is presented in Section [mod]. The remaining subsections are finally dedicated to the description of the ACS algorithm we propose to solve the HTOPTW. Application of an ACS algorithm to a combinatorial optimization problem (HTOPTW in our case) requires the definition of a constructive algorithm and possibly a local search. Accordingly, a constructive procedure in which a set of artificial ants builds feasible solutions to the HTOPTW is described in Section [cons]. A local search procedure specialized for the HTOPTW, that takes in input the solutions built in the constructive phase and bring them to their local optimum, is described in Section [ls]. Ant Colony System ----------------- The ACS algorithm is an element of the Ant Colony Optimization (ACO) family of methods (Dorigo et al. ). These algorithms are based on a computational paradigm inspired by real ant colonies and the way they function. The main underlying idea was to use several constructive computational agents (simulating real ants). A dynamic memory structure, which incorporates information on the effectiveness of previous choices, based on the obtained results, guides the construction process of each agent. The behavior of each single agent is therefore inspired by the behavior of real ants. The paradigm is based on the observation, made by ethologists, that the medium used by ants to communicate information regarding shortest paths to food, consists of *pheromone trails*. A moving ant lays some pheromone on the ground, thus making a path by a trail of this substance. While an isolated ant moves practically at random (exploration), an ant encountering a previously laid trail can detect it and decide, with high probability, to follow it, thus reinforcing the trail with its own pheromone (exploitation). What emerges is a form of *auto-catalytic* process where the more the ants follow a trail, the more attractive that trail becomes to be followed. The process is thus characterized by a positive feedback loop, where the probability with which an ant chooses a path increases with the number of ants that have previously chosen the same path. The mechanism above is the inspiration for the algorithms of the *ACO* family. The Hierarchic Team Orienteering Problem with Time Windows ---------------------------------------------------------- The input data for the HTOPTW are the same as for the TOPTW (see Section [pd]). The HTOPTW requires the computation of an ordered list of non-overlapping (apart from origin and destination) elementary paths L = (*P*1, *P*2, …, *P*∣L∣), where each *P**k* ∈ L is defined as an ordered sequence of nodes starting from node 1, ending at node *n*, such that *a**i* ≤ *v**i* ≤ *b**i* ∀*i* ∈ *P**k*; ∀*P**k*, *P**h* ∈ L, *P**k* and *P**h* have in common only nodes 1 and *n* and ⋃*P**k* ∈ L*P**k* = *V*. For each pair of nodes {*i*, *j*} consecutively visited along the same *P**k* ∈ L we have *v**j* = max{*v**i* + *s**i* + *t**i**j*, *a**j*}, with *v*1 = 0 on each path *P**k* ∈ L. Finally note that we set ∣L∣ equal to an upper bound of the number of paths required to visit all the customers (e.g. *n*), and therefore empty tours are allowed in solutions. The objective function of HTOPTW can be defined as follows: max(∑*P**k* ∈ L, *k* ≤ *m*(∑(*i*, *j*) ∈ *A**p**i**x**i**j**P**k*) + ∑*P**k* ∈ L, *k* > *m*(*M**m* − *k*∑(*i*, *j*) ∈ *A**p**i**x**i**j**P**k*)) such that *a**i* ≤ *v**i* ≤ *b**i*, ∀*P**k* ∈ L, ∀*i* ∈ *P**k*. Note that *M* is an arbitrary number larger than the optimal solution of the *OPTW* problem defined on the same graph *G*. A complete mathematical formulation for the HTOPTW can be obtained by applying straightforward changes to the model for problem TOPTW discussed in Section [pd]. Problem HTOPTW can be seen as a generalization of TOPTW where a set of non-overlapping tours are optimized in a hierarchic fashion. Note that if prizes are integer (as for the benchmarks we will analyze in Section [etoptw]) the first *m* tours represent the solution to the original TOPTW problem, while the remaining tours optimize over the feasible nodes that are not in the solution of the TOPTW, in a hierarchic way. From an optimization point of view, the rational behind the introduction of the problem HTOPTW for solving the TOPTW, is that keeping a hierarchic set of non-overlapping tours helps to have good fragments of tours placed in the (*m* + 1)-st to last tours. These prepared fragments are used by the local search procedure (see Section [ls]) to perform exchanges/insertions, aiming at improving the quality of the hierarchy of tours, and - as a side effect - of the first *m* tours. The drawback of the approach we propose is that an additional overhead is necessary to optimize the (*m* + 1)-st to last tours, that in fact are not part of the original objective function of the TOPTW. Computational results presented in Section [exp] suggest however that it is worth paying for this overhead. Note that a similar idea of optimizing something outside the proper solution in order to prepare good fragments to be used to improve the real solution itself, was proposed in a completely different context (and for a different optimization problem) in Lopez at al.. Again, the idea of keeping more than *m* tours had already been considered also in Archetti et al. in the context of the TOP without time windows. In this case no hierarchic mechanism was introduced, and the best *m* tours were basically selected as the current solution. Moreover, in, due to the different metaheuristic approaches implemented (namely, tabu search and variable neighborhood search), the two policies considered to evolve solutions (namely, swapping two customers of different tours and moving one customer from a tour to a different one) did not lead to an exploitation of the extra tours like in our case, where we move entire segments of routes at once (see Section [ls]). Solution model -------------- The algorithm we propose uses a solution model in which each ant builds a single, giant tour (see Gambardella et al. and Irnich ). A solution is represented as follows: nodes 1 and *n* are unified into a unique depot node, with outgoing travel times corresponding to node 1, and incoming travel times corresponding to node *n*. The depot with all its connections is then duplicated a number of times equal to the number of feasible nodes of the problem. Distances between copies of the depot are set to zero. In such a model, each time a copy of the depot is reached, tour duration is set back to zero. The use of the giant tour representation makes the HTOPTW problem closer to a traditional traveling salesman problem. A feasible solution is a tour that visits all the nodes exactly ones. Note that in case of too many duplicated depots, we will have dummy arcs of type *(depot, depot)* visited by the solution. An advantage of a solution representation like the one we adopt is that the trails in direction of the duplicated depots are less attractive than in case of a single depot. This positively affects the quality of the solutions produced in the constructive procedure described in Section [cons]. Moreover, as observed in Gambardella et al., the giant tour approach makes easier to generalize and efficiently apply generic local searches procedures (see Section [ls]). Construction phase ------------------ The construction phase specializes that of the Ant Colony System algorithm (Dorigo and Gambardella ). The goal is to build feasible solutions for the HTOPTW. It generates feasible solutions with a computational cost of order O(∣*V*∣2). The procedure works as follows. Ants are sent out sequentially (not in parallel). Each ant iteratively starts from node 1 and adds new nodes until all nodes have been visited. When in node *i*, an ant applies a so-called transition rule, that is, it probabilistically chooses the next node *j* from the set *F*(*i*) of feasible nodes. *F*(*i*) contains all the nodes *j* still to be visited and feasible in terms of the time windows associated with them. The ant in node *i* chooses the next node *j* to visit on the basis of two factors: the *pheromone trail* *τ**i**j*, that contains a measure of how good it has been in the past to include arc (*i*, *j*) into a solution (it is the “memory” of the colony) and the heuristic *desirability* *η**i**j*, which is defined as follows: $$\eta\_{ij} = \frac{p\_j}{\max \left \{t\_{ij}; a\_j - v\_i - s\_i \right \} \cdot \left ( b\_j - v\_i - s\_i - t\_{ij} \right ) + 1}\label{ff}$$ The rationale behind equation ([ff]) is that promising customers are those with a high prize, that are not far away from the current node *i*, and such that the associated time window in used in a suitable way. Note that equation ([ff]) heavily relies on time windows, that therefore tend to drive desirabilities. With probability *q*0 the next node to visit is chosen as the node *j*, *j* ∈ *F*(*i*), for which the product *τ**i**j* ⋅ *η**i**j* is highest (deterministic rule), while with probability 1 − *q*0 the node *j* is chosen with a probability given by $${p\_{ij}}\_{j \in F(i)} = \frac{\tau\_{ij} \cdot \eta\_{ij}}{\sum\_{l \in F(i)} (\tau\_{il} \cdot \eta\_{il})}$$ (i.e., nodes connected by arcs with higher values of *τ**i**j*  ⋅  *η**i**j*, *j* ∈ *F*(*i*), have higher probability of being chosen). The value *q*0 is given by *q*0 = 1 − *n̂*/*n*, where *n̂* represents the number of nodes (independently of the total number of nodes of the problem) we would like to choose using the probabilistic transition rule during each iteration of the constructive phase. Such an initialization of *q*0 is common in ACS algorithms for Vehicle Routing-like problems. Justifications for its use can be found in Gambardella et al.. In our implementation only the best ant, that is the ant that built the solution with the largest profit since the beginning of the computation, is allowed to deposit a pheromone trail. The rationale is that in this way a “preferred route” is memorized in the pheromone trail matrix, and future ants will use this information to generate new solutions in a neighborhood of this preferred route. If we refer to the most profitable solution generated since the beginning of the computation as *T**o**u**r**s**B**e**s**t*, and to its profit as *P**r**o**f**i**t**B**e**s**t*, ∀(*i*, *j*) ∈ *T**o**u**r**s**B**e**s**t*, we have the following formula for pheromone update: *τ**i**j* = (1 − *ρ*) ⋅ *τ**i**j* + *ρ* ⋅ *P**r**o**f**i**t**B**e**s**t* where *ρ* is a parameter regulating how strong the pheromone trace left by the best solution is. Pheromone is also updated during solution building. In this case, it is removed from visited arcs. In other words, each ant, when moving from node *i* to node *j*, applies a pheromone updating rule that causes the amount of pheromone trail on arc (*i*, *j*) to decrease. The rule is: *τ**i**j* = (1 − *ψ*) ⋅ *τ**i**j* + *ψ* ⋅ *τ*0 where *ψ* is a parameter regulating the evaporation of the pheromone trace over time, *τ*0 is the initial value of trails. The rationale for using formula ([2]) is that it forces ants to assure a certain variety in generated solutions (if pheromone trail was not consumed by ants, they would tend to generate very similar tours). It was found that good values for the algorithm’s parameters are *ρ* = *ψ* = 0.1, *s* = 10 and *τ*0 = (*P**r**o**f**i**t**F**i**r**s**t* ⋅ *n*)− 1, where *P**r**o**f**i**t**F**i**r**s**t* is the total prize collected by the solution with the highest profit among those generated by the ant colony following the construction procedure described above, without using the pheromone trails. Experience has shown these values - that are the same used in Gambardella et al. for the Vehicle Routing Problem with Time Windows - to be robust. We have also experimentally found that the algorithm is not affected by small changes in the values of these parameters. The number of ants in the population was set to 10 (as in ). Local search ------------ The second ingredient of the method we propose is a local search algorithm, which is run on each of the solutions produced in the construction phase and has the objective of taking them down to a local optimum. The local search procedure implemented is a specialized version of the CROSS exchange procedure described in Taillard et al.. The procedure is based on the exchange of two subchains of customers of the giant tour (see Section [mod]). One of the two sub-chains can eventually be empty, implementing therefore a more traditional insertion routine. The advantage of the use of the giant tour from the local search point of view, is that the two sub-chains can be from the same tour, or from two different tours, eventually including replicas of the depot. The use of the giant tour leads then to a more general local search, that incorporates various classic local searches within a CROSS exchange framework. The maximal length of the sub-chains is, in our implementation, variable. It starts from a value defined by parameter *L**S**i**n**i**t*. Every time no improvement has been found by the last *L**S**w**n**d* generations of ants, the value of *L**S**i**n**i**t* is incremented by *L**S**s**t**e**p*. The aim of this strategy is to makes the local search more and more accurate as the running time elapses. Note that too conservative values for these parameters may lead to a slow algorithm. Some preliminary tests suggested that *L**S**i**n**i**t* = *L**S**w**n**d* = 3 and *L**S**s**t**e**p* = 2 are robust values, probably slightly conservative and privileging solution quality over convergence speed. We will keep these values for the experiments reported in the remainder of the paper. Note that a kind of local search like the CROSS exchange we implemented (which does not perform inversions) is known to be very suitable for problems with time windows. In case of problems without time windows (or with very large time windows), different - and perhaps simpler - local search schemata are likely to be more suitable. Our implementation is based on the formal theory presented in Irnich. In particular, we used a search technique where nodes for exchanges are examined in a *random* order, and we move to the *first* improving solution encountered. In case no improving solution is found among all the possible moves, the best non-improving solution is selected (following a strategy similar to that described in Chao et al. ). This helps not being trapped into weak local optima. At most *N**I* consecutive non-improving solutions are accepted, then the local search is stopped. It was found that *N**I* = 5 is a suitable value. This setting was kept for the experiments reported in Section [exp]. Computational experiments ========================= Since in the literature the OP has often being separated from the TOP (for which, we recall, the OP is a special case), we have decided to split our computational experiments into the two classes: OPTW and TOPTW. Section [eoptw] is then devoted to the OPTW, while Section [etoptw] treats the TOPTW. OPTW ---- The reference benchmarks for the OPTW are those originally introduced in Righini and Salani and used in Mansini et al.. We therefore adopted these benchmarks in our experiments. A brief description of them is given in Section [ben]. Results are presented and discussed in Section [resaaa]. ### Benchmarks We tested our algorithms on two classes of instances, originally adopted in Righini and Salani and Mansini et al. and obtained from the well-known Solomon’s data-set of VRPTW instances (problems *c\*\_ 50*, *c\*\_ 100*, *r\*\_ 100* and *rc\*\_ 100*, see Solomon ) and from instances proposed by Cordeau et al. (problems *pr\**) for the Multi-Depot Periodic Vehicle Routing Problem (MDPVRP). The first data-set is composed by 29 instances obtained by considering the first 50 nodes of Solomon’s instances and by 48 instances obtained by considering the whole 100 nodes of the problems. The other data-set has been derived from Cordeau’s data-set (20 instances), considering all customers active in the same day. The delivery demand associated with each node in the original data-set is interpreted as the prize *p**i* for that node. Cordeau’s instances have up to 288 customers and the time windows are much wider than in Solomon’s problems. ### Experimental results [resaaa] All the experiments reported in this paper have been performed on a computer equipped with a Dual AMD Opteron 250 2.4GHz processor with 4GB of RAM. The algorithms were coded in ANSI C. A time limit of 3600 seconds has been set for each run, and statistics over five runs are reported for each instance. Results are summarized in Table [t1]. Columns have the following meaning. *Problem* identifies the problems; in *Known bounds* the best known bounds are reported (if a single value is present, it is the optimal solution value). Results are taken from Righini and Salani and Mansini et al.. Some improved lower bounds are reported in parenthesis. They were obtained and communicated to the authors by Salani () during the development of the present paper. The following set of columns report the results (prize collected and computation time) obtained by the Granular Variable Neighborhood Search (*GVNS*) algorithm published in Mansini at al.. Columns under *ACS* contain the statistics (over the five runs considered) about the total prize collected in the solutions returned by the ACS algorithm and the computation times required to retrieve the best result of each run. Minimum (*min*), average (*avg*) and maximum (*max*) prizes are reported for each problem. Note that the number of nodes visited by the best solution retrieved for each problem are also reported in brackets. Bold entries in the max prize collected column highlight new best solutions retrieved by our method, which were previously unknown (in two cases Salani matched our new best solutions, but we left our figures in bold anyway, since the results were found in parallel and independently). Entries in italics (last but one block) highlight solutions corresponding to problems for which ACS was not able to match the best known solution. The ACS algorithm we propose was able to retrieve the best known solutions for all the Solomon’s instances (first nine blocks of Table [t1]). Note that the optimal solution was found in every run for most of the Solomon’s problems. ACS was also able to improve 24 lower bounds over 27 instances for which a proven optimal solution is still unknown. Computation times indicate that our method is in general extremely fast on “easy” instances (for which an optimal solution is known). On the remaining instances computation times increase, but a lot of previously unknown best solutions are retrieved. This indicates that the strategy we propose, based on the solution of the HTOPTW (HOPTW in this case) pays. If we compare the results with those of the GVNS method (note that according to Dongarra the machine used in Mansini et al. is approximately 10% faster than the one we used), we can observe how our ACS method dominates GVNS in terms of solution quality (never worse, often better). If execution times are taken into account, we can see that ACS is also faster. This is not true for a few instances, where either the computation time is anyway comparable, or the solution found is better (and this justifies the extra time required by the algorithm). On Cordeau’s instances (last two blocks of Table [t1]) the performance of the ACS algorithm is not as brilliant as on Solomon’s problems. As observed in Righini and Salani (see also Section [ben]), these problems are characterized by particularly wide time windows, while our algorithm - the local search component in particular - has been specialized for problems with narrow time windows (see Section [ls]). Our approach is therefore a penalized on these problems. Moreover, our approach does not seem to scale up particularly well for large instances. This is a common drawback of algorithms based on Ant Colony System, because its construction phase tend to be too time consuming. We were not able to solve to optimality 3 problems over 8 instances for which the optimal solution is known. On the other hand, we were able to improve the best known lower bound for the remaining 2 instances for which a lower bound was previously known. We finally provide the first documented lower bound for the remaining 10 instances. No comparison is possible here with the GVNS algorithm since in Mansini et al. no result is reported for Cordeau’s instances. A large variance in computational times can be inferred from the last two blocks of Table [t1]. Once again, the presence of large time windows, coupled with the local search we implemented, might be an explanation for this. |c|c|cc|ccc|ccc| [t1] Problem &Known&& &bounds& Prize &Sec&& &&&& Min &Avg& Max (Nodes) &Min&Avg&Max Problem &Known&& &bounds& Prize &Sec&& &&&& Min &Avg& Max (Nodes) &Min&Avg&Max c101\_ 50 & 270 & 270 & 0.19 & 270 & 270.0 & 270 (10) & 0.06 & 0.2 & 0.31 c102\_ 50 & 300 & 300 & 0.21 & 300 & 300.0 & 300 (11) & 0.09 & 0.19 & 0.26 c103\_ 50 & 320 & 320 & 0.75 & 320 & 320.0 & 320 (11) & 0.59 & 0.76 & 0.9 c104\_ 50 & 340 & 340 & 0.77 & 340 & 340.0 & 340 (11) & 0.02 & 0.04 & 0.05 c105\_ 50 & 300 & 300 & 0.25 & 300 & 300.0 & 300 (11) & 0.07 & 0.09 & 0.13 c106\_ 50 & 280 & 280 & 0.27 & 280 & 280.0 & 280 (10) & 0.01 & 0.14 & 0.35 c107\_ 50 & 310 & 310 & 0.15 & 310 & 310.0 & 310 (10) & 0.01 & 0.11 & 0.19 c108\_ 50 & 320 & 320 & 0.28 & 320 & 320.0 & 320 (11) & 0.33 & 0.67 & 0.79 c109\_ 50 & 340 & 340 & 0.24 & 340 & 340.0 & 340 (11) & 0.39 & 0.84 & 0.97 r101\_ 50 & 126 & 126 & 22.44 & 126 & 126.0 & 126 (5) & 0 & 0 & 0.01 r102\_ 50 & 198 & 192 & 10.34 & 198 & 198.0 & 198 (9) & 0.07 & 0.08 & 0.11 r103\_ 50 & 214 & 214 & 32.46 & 214 & 214.0 & 214 (9) & 0.31 & 2.17 & 4.18 r104\_ 50 & 227 & 225 & 40.21 & 227 & 227.0 & 227 (10) & 0.84 & 2.57 & 4.11 r105\_ 50 & 159 & 159 & 38.63 & 159 & 159.0 & 159 (6) & 0.01 & 0.01 & 0.01 r106\_ 50 & 208 & 202 & 14.36 & 208 & 208.0 & 208 (10) & 0.28 & 1.97 & 6.22 r107\_ 50 & 220 & 215 & 34.98 & 220 & 220.0 & 220 (9) & 1.95 & 9.27 & 24.56 r108\_ 50 & 227 & 225 & 63.99 & 227 & 227.0 & 227 (10) & 0.39 & 1.47 & 2.88 r109\_ 50 & 192 & 192 & 9.15 & 192 & 192.0 & 192 (8) & 0.02 & 0.03 & 0.03 r110\_ 50 & 208 & 208 & 19.76 & 208 & 208.0 & 208 (9) & 0 & 0.02 & 0.03 r111\_ 50 & 223 & 223 & 36.84 & 223 & 223.0 & 223 (9) & 4.09 & 5.9 & 7.82 r112\_ 50 & 226 & 226 & 41.08 & 226 & 226.0 & 226 (10) & 0.26 & 2.79 & 4.39 rc101\_ 50 & 180 & 180 & 1.19 & 180 & 180.0 & 180 (9) & 0 & 0 & 0 rc102\_ 50 & 230 & 230 & 5.85 & 230 & 230.0 & 230 (10) & 0 & 0.12 & 0.22 rc103\_ 50 & 240 & 240 & 9.68 & 240 & 240.0 & 240 (9) & 0.29 & 0.7 & 0.99 rc104\_ 50 & 270 & 260 & 7.28 & 270 & 270.0 & 270 (10) & 7.57 & 9.37 & 10.36 rc105\_ 50 & 210 & 210 & 3.29 & 210 & 210.0 & 210 (9) & 0.19 & 0.55 & 0.93 rc106\_ 50 & 210 & 210 & 3.17 & 210 & 210.0 & 210 (8) & 0 & 0 & 0 rc107\_ 50 & 240 & 230 & 7.35 & 240 & 240.0 & 240 (8) & 0.02 & 3.52 & 5.05 rc108\_ 50 & 250 & 250 & 12.21 & 250 & 250.0 & 250 (9) & 4.19 & 8.59 & 11.72 c101\_ 100 & 320 & 320 & 0.96 & 320 & 320.0 & 320 (10) & 0.17 & 0.47 & 0.96 c102\_ 100 & 360 & 360 & 5.57 & 360 & 360.0 & 360 (11) & 0.21 & 0.69 & 1.15 c103\_ 100 & 400 & 400 & 3.08 & 400 & 400.0 & 400 (11) & 3.47 & 16.88 & 36.89 c104\_ 100 & 420 & 420 & 3.6 & 420 & 420.0 & 420 (11) & 1.65 & 33.47 & 60.16 c105\_ 100 & 340 & 340 & 3.25 & 340 & 340.0 & 340 (10) & 0.31 & 0.86 & 1.38 c106\_ 100 & 340 & 340 & 1.13 & 340 & 340.0 & 340 (10) & 0.42 & 1.02 & 1.86 c107\_ 100 & 370 & 370 & 0.35 & 370 & 370.0 & 370 (11) & 0.97 & 2.06 & 3.28 c108\_ 100 & 370 & 370 & 0.9 & 370 & 370.0 & 370 (11) & 0.31 & 0.76 & 1.23 c109\_ 100 & 380 & 380 & 1.9 & 380 & 380.0 & 380 (11) & 0.28 & 0.84 & 1.55 c201\_ 100 & 860(870)-1010 & 860 & 1.24 & 870 & 870.0 & **870 (30)** & 6.15 & 189.81 & 408.77 c202\_ 100 & 920-1040 & 920 & 1373.29 & 930 & 930.0 & **930 (32)** & 131.21 & 319.02 & 620.49 c203\_ 100 & 930-1000 & 930 & 1887.89 & 960 & 960.0 & **960 (32)** & 128.34 & 361.01 & 668.01 c204\_ 100 & 960-990 & 960 & 1908.02 & 970 & 970.0 & **970 (32)** & 415.65 & 1617.48 & 2797.30 c205\_ 100 & 910-1040 & 910 & 96.55 & 900 & 906.0 & 910 (31) & 6.97 & 28.19 & 65.89 c206\_ 100 & 920-1050 & 920 & 135.51 & 920 & 920.0 & 920 (32) & 39.29 & 87.86 & 122.00 c207\_ 100 & 920-1030 & 920 & 13.71 & 920 & 920.0 & 920 (32) & 8.07 & 48.50 & 144.60 c208\_ 100 & 940(950)-1040 & 940 & 122.53 & 940 & 940.2 & **950 (31)** & 27.81 & 89.01 & 133.42 r101\_ 100 & 198 & 198 & 41.88 & 198 & 198.0 & 198 (9) & 0.01 & 0.05 & 0.07 r102\_ 100 & 286 & 286 & 516.27 & 286 & 286.0 & 286 (11) & 0.98 & 11.11 & 25.91 r103\_ 100 & 293 & 292 & 456.62 & 292 & 292.6 & 293 (11) & 14.90 & 640.63 & 1392.33 r104\_ 100 & 303 & 303 & 749.09 & 303 & 303.0 & 303 (12) & 32.47 & 164.00 & 464.73 r105\_ 100 & 247 & 247 & 105.13 & 247 & 247.0 & 247 (11) & 0.27 & 3.01 & 6.38 r106\_ 100 & 293 & 293 & 383.55 & 293 & 293.0 & 293 (11) & 2.82 & 86.31 & 265.91 r107\_ 100 & 299 & 297 & 700.88 & 293 & 294.6 & 299 (13) & 438.23 & 922.61 & 1413.90 r108\_ 100 & 308 & 306 & 418.32 & 303 & 306.0 & 308 (13) & 41.67 & 696.10 & 1666.34 r109\_ 100 & 277 & 277 & 87.86 & 277 & 277.0 & 277 (12) & 6.59 & 27.98 & 51.16 r110\_ 100 & 284 & 283 & 289.51 & 282 & 283.2 & 284 (13) & 27.99 & 617.57 & 1569.31 r111\_ 100 & 297 & 295 & 307.14 & 295 & 296.6 & 297 (12) & 7.57 & 484.37 & 1097.63 r112\_ 100 & 298 & 297 & 444.52 & 295 & 297.4 & 298 (12) & 21.77 & 947.06 & 1605.08 r201\_ 100 & 767(788)-916 & 767 & 12.58 & 795 & 795.8 & **797 (38)** & 355.81 & 2339.94 & 3369.30 r202\_ 100 & 862-1037 & 862 & 2938.39 & 895 & 899.6 & **903 (47)** & 389.42 & 1724.42 & 3001.66 r203\_ 100 & 931-1035 & 931 & 690.92 & 983 & 989.8 & **993 (48)** & 1357.57 & 2641.66 & 3502.92 r204\_ 100 & 1014-1138 & 1014 & 14800.28 & 1035 & 1046.4 & **1053 (53)** & 658.87 & 1549.15 & 1964.07 r205\_ 100 & 873(932)-978 & 873 & 2837.95 & 933 & 939.2 & **949 (46)** & 651.50 & 1125.36 & 1740.49 r206\_ 100 & 929-1033 & 929 & 4864.67 & 961 & 983.0 & **1008 (50)** & 1439.12 & 1671.45 & 1951.73 r207\_ 100 & 943-1059 & 943 & 13839.27 & 1016 & 1026.6 & **1035 (49)** & 645.46 & 1090.34 & 1794.28 r208\_ 100 & 1029-1145 & 1029 & 5088.56 & 1048 & 1057.2 & **1071 (55)** & 907.35 & 1260.93 & 1723.50 r209\_ 100 & 879-1004 & 879 & 2453.79 & 910 & 923.4 & **938 (45)** & 856.86 & 1453.24 & 1878.32 r210\_ 100 & 907-1003 & 907 & 6877.15 & 944 & 956.4 & **970 (48)** & 390.38 & 979.10 & 1666.93 r211\_ 100 & 966(978)-1049 & 966 & 3357.44 & 1002 & 1006.8 & **1016 (49)** & 994.55 & 1288.08 & 1746.42 rc101\_ 100 & 219 & 219 & 19.96 & 219 & 219.0 & 219 (9) & 0.03 & 0.20 & 0.41 rc102\_ 100 & 266 & 266 & 38.1 & 266 & 266.0 & 266 (10) & 4.51 & 30.91 & 43.48 rc103\_ 100 & 266 & 266 & 140.84 & 266 & 266.0 & 266 (10) & 29.02 & 57.22 & 135.52 rc104\_ 100 & 301 & 301 & 209.2 & 301 & 301.0 & 301 (11) & 10.22 & 29.37 & 47.26 rc105\_ 100 & 244 & 244 & 36.53 & 244 & 244.0 & 244 (11) & 0.56 & 9.74 & 19.47 rc106\_ 100 & 252 & 250 & 38.1 & 252 & 252.0 & 252 (11) & 5.91 & 308.64 & 520.67 rc107\_ 100 & 277 & 277 & 92.18 & 277 & 277.0 & 277 (10) & 181.85 & 502.07 & 898.18 rc108\_ 100 & 298 & 298 & 104.33 & 298 & 298.0 & 298 (11) & 4.27 & 207.51 & 445.34 rc201\_ 100 & 776(789)-1065 & 776 & 588.33 & 795 & 795.0 & **795 (33)** & 236.62 & 590.80 & 850.05 rc202\_ 100 & 871-1095 & 871 & 1093.06 & 931 & 931.6 & **932 (43)** & 770.93 & 2545.89 & 3458.20 rc203\_ 100 & 907(913)-1148 & 907 & 1569.86 & 974 & 976.6 & **979 (48)** & 1175.47 & 1705.07 & 2094.52 rc204\_ 100 & 1063-1282 & 1063 & 20815.24 & 1079 & 1086.2 & **1107 (48)** & 1185.22 & 1353.17 & 1518.42 rc205\_ 100 & 810(839)-1066 & 810 & 689.72 & 844 & 848.4 & **855 (40)** & 348.01 & 1549.22 & 3403.48 rc206\_ 100 & 850(861)-1088 & 850 & 1093.93 & 881 & 884.0 & **888 (38)** & 305.30 & 1776.39 & 3481.45 rc207\_ 100 & 879(924)-1102 & 879 & 2961.96 & 954 & 960.8 & **967 (42)** & 642.29 & 1324.46 & 1752.91 rc208\_ 100 & 977(1021)-1158 & 977 & 8098.35 & 1002 & 1013.2 & **1040 (45)** & 1112.44 & 1511.37 & 1794.75 pr01\_ 48 & 308 &-&-& 308 & 308.0 & 308 (21) & 97.30 & 256.23 & 563.15 pr02\_ 96 & 404 &-&- & 403 & 403.8 & 404 (24) & 218.87 & 1147.78 & 2452.55 pr03\_ 144 & 394 &-&- & 394 & 394.0 & 394 (22) & 1089.48 & 2024.26 & 3487.34 pr04\_ 192 & 489 &-&- & 473 & 482.6 & *485 (24)* & 303.29 & 1404.67 & 3422.42 pr05\_ 240 & 595 &-&- & 576 & 576.8 & *577 (31)* & 875.62 & 2075.65 & 3574.73 pr06\_ 288 & 501 -? &-&-& 555 & 564.6 & **567 (29)** & 855.47 & 2199.78 & 3587.74 pr07\_ 72 & 298 &-&-& 298 & 298.0 & 298 (17) & 1.66 & 20.11 & 40.69 pr08\_ 144 & 463 &-&-& 461 & 462.6 & 463 (25) & 623.64 & 2475.96 & 3553.32 pr09\_ 216 & 493 &-&-& 481 & 481.8 & *482 (26)* & 1036.70 & 2318.19 & 3674.47 pr10\_ 288 & 584 -? &-&- & 578 & 588.4 & **591 (30)** & 955.54 & 2343.51 & 3042.36 pr11\_ 48 &? &-&-& 327 & 327.8 & **328 (20)** & 28.42 & 1743.72 & 3036.67 pr12\_ 96 &? &-&-& 435 & 436.4 & **437 (24)** & 1217.30 & 2017.56 & 3592.60 pr13\_ 144 &? &-&-& 439 & 441.0 & **442 (23)** & 838.62 & 2312.73 & 3490.68 pr14\_ 192 &? &-&-& 483 & 494.0 & **501 (28)** & 3.37 & 23.11 & 38.31 pr15\_ 240 &? &-&-& 522 & 524.8 & **528 (28)** & 0.00 & 18.20 & 38.86 pr16\_ 288 &? &-&-& 509 & 517.8 & **525 (28)** & 5.26 & 25.73 & 48.18 pr17\_ 72 &? &-&- & 358 & 358.0 & **358 (21)** & 75.83 & 1330.85 & 2567.23 pr18\_ 144 &? &-&-& 465 & 488.8 & **504 (25)** & 137.62 & 1350.04 & 3328.18 pr19\_ 216 &? &-&-& 467 & 475.0 & **480 (26)** & 4.87 & 30.04 & 51.44 pr20\_ 288 &? &-&-& 547 & 552.4 & **556 (29)** & 5.46 & 24.57 & 42.22 TOPTW ----- ### Benchmarks A set of 387 problems has been proposed in Chao et al. for the TOP (without time windows). 270 of the problems have been solved to optimality in Boussier et al., while for the remaining 117 instances optimality has not been proven. We tested the method we propose on these benchmarks. Our method quickly converges to the optimal value for most of the problems, but for the most difficult instances (about 5% of the whole dataset), the solutions provided by our algorithm are up to 5% worse than those retrieved by the ACO algorithm proposed in Ke et al. (state-of-the-art method for the TOP). We recall the reader that the approach discussed in, although based on the same metaheuristic paradigm of our method, is extremely different from the algorithm we propose. The main (but not only) difference is that the idea of a hierarchic problem, useful for problems with time windows, is not considered in (where time windows are not treated). The different design of the methods justifies the results discussed above, and shows how the design of an algorithm is strictly related to the problem under investigation. As already discussed in Section [acsop], our algorithm is optimized to exploit time windows. The hierarchic approach we adopt goes into this direction by preparing promising solution fragments that can be exchanged with fragments of the current solution characterized by similar time windows. This introduces a layer of complexity which slows down the method in case time windows are not present. Some instances exist for a problem which is slightly different from TOPTW: the Selective Vehicle Routing Problem with Time Windows, which is described in Boussier et al.. This problem has some complications with respect to TOPTW described in Section [pd]. A demand is associated with each node, and vehicles have a limited capacity. Moreover, the maximum length of tours is constrained, and consequently the departure time from the depot has also to be optimized. Since our method is not designed to cope with these constraints, these instances have not been considered. In particular, dealing with the optimization of the departure times would mean to redesign both the construction phase of the ACS algorithm, and the local search component. For the reasons above, we decided to carry out experiments on the same instances used for the tests summarized in Section [eoptw] for the OPTW (leaving out Solomon’s problems with 50 nodes). In this case we added a parameter indicating the number of vehicles available, *m*. In the test we present, we consider 2 ≤ *m* ≤ 4. Note that also the problems originally proposed in Chao et al. for the TOP use the same range of values for parameter *m*. ### Experimental results Results are summarized in Tables [t2], [t3] and [t4]. Columns have the same meaning as for Table [t1] in Section [res], only columns *Known bounds* and *GVNS* are not reported here (no information is available). Tables [t2], [t3] and [t4] suggest that the algorithm we propose is suitable also for problems with more than one vehicles. No upper bound has been computed for these problems, so it is not possible to estimate the absolute quality of the solutions retrieved. However, we expect these problems to be more difficult than the OPTW problems considered in Section [res] over the same instances. The reason is that the length of each feasible solution increases as *m* increases. By crosschecking the tables, it is also interesting to observe that the contribution of each additional vehicles is more and more marginal as *m* increases. It is finally interesting to observe how, on some problems (typically Cordeau’s instances *pr\**), the time at which the best solution is retrieved approaches the time limit of 3600 seconds. This suggests that longer computation times might have allowed better results. |c|ccc|ccc| [t2] Problem&& &Min&Avg&Max (Nodes)&Min&Avg&Max Problem&& &Min&Avg&Max (Nodes)&Min&Avg&Max c101\_ 100 & 580 & 588.0 & 590 (21) & 49.57 & 110.83 & 163.36 c102\_ 100 & 660 & 660.0 & 660 (22) & 279.25 & 1427.36 & 3381.34 c103\_ 100 & 710 & 710.0 & 710 (21) & 430.59 & 938.86 & 1777.64 c104\_ 100 & 750 & 754.0 & 760 (22) & 314.14 & 1355.04 & 2048.16 c105\_ 100 & 640 & 640.0 & 640 (21) & 449.59 & 1436.61 & 3038.13 c106\_ 100 & 620 & 620.0 & 620 (20) & 46.75 & 104.61 & 207.26 c107\_ 100 & 670 & 670.0 & 670 (22) & 24.37 & 76.02 & 140.65 c108\_ 100 & 680 & 680.0 & 680 (22) & 5.80 & 95.34 & 251.49 c109\_ 100 & 720 & 720.0 & 720 (22) & 282.05 & 1817.29 & 2822.51 c201\_ 100 & 1450 & 1452.0 & 1460 (66) & 93.62 & 508.59 & 1551.86 c202\_ 100 & 1440 & 1446.0 & 1460 (66) & 223.74 & 1568.22 & 3140.17 c203\_ 100 & 1440 & 1446.0 & 1460 (65) & 1846.00 & 2334.80 & 3220.08 c204\_ 100 & 1430 & 1436.0 & 1440 (63) & 206.84 & 1373.22 & 2942.62 c205\_ 100 & 1450 & 1452.0 & 1460 (65) & 31.12 & 925.37 & 2566.74 c206\_ 100 & 1460 & 1460.0 & 1460 (65) & 678.24 & 1876.39 & 2690.34 c207\_ 100 & 1450 & 1452.0 & 1460 (65) & 189.21 & 1433.98 & 3193.58 c208\_ 100 & 1460 & 1462.0 & 1470 (66) & 20.38 & 1164.20 & 1605.87 r101\_ 100 & 349 & 349.0 & 349 (16) & 2.06 & 36.48 & 107.15 r102\_ 100 & 505 & 506.6 & 508 (21) & 797.60 & 2006.59 & 3549.10 r103\_ 100 & 517 & 518.8 & 520 (22) & 459.69 & 1095.06 & 2583.94 r104\_ 100 & 538 & 540.0 & 544 (23) & 1708.55 & 2291.80 & 3367.68 r105\_ 100 & 453 & 453.0 & 453 (20) & 44.19 & 1037.28 & 1687.62 r106\_ 100 & 515 & 523.8 & 529 (21) & 1125.75 & 2034.73 & 3320.77 r107\_ 100 & 526 & 527.4 & 529 (21) & 552.34 & 2357.58 & 3421.24 r108\_ 100 & 548 & 553.6 & 556 (24) & 839.84 & 1906.24 & 2921.28 r109\_ 100 & 505 & 505.4 & 506 (22) & 90.30 & 1386.61 & 2301.20 r110\_ 100 & 523 & 523.4 & 525 (24) & 164.43 & 1180.21 & 2279.10 r111\_ 100 & 533 & 535.6 & 538 (23) & 717.88 & 1726.17 & 3292.45 r112\_ 100 & 539 & 541.6 & 543 (213 & 540.79 & 1653.61 & 3122.94 r201\_ 100 & 1231 & 1236.0 & 1239 (70) & 1766.97 & 2693.58 & 3533.81 r202\_ 100 & 1288 & 1298.6 & 1310 (78) & 538.95 & 2482.22 & 3295.88 r203\_ 100 & 1341 & 1349.4 & 1358 (80) & 2460.71 & 3057.40 & 3543.20 r204\_ 100 & 1397 & 1399.0 & 1404 (89) & 1516.41 & 2713.46 & 3625.27 r205\_ 100 & 1336 & 1342.0 & 1346 (82) & 1909.67 & 2593.74 & 3207.82 r206\_ 100 & 1367 & 1373.8 & 1381 (84) & 2347.97 & 2954.37 & 3414.95 r207\_ 100 & 1367 & 1385.0 & 1400 (88) & 1079.52 & 2624.08 & 3529.26 r208\_ 100 & 1416 & 1425.4 & 1433 (92) & 1913.39 & 2828.88 & 3475.34 r209\_ 100 & 1344 & 1350.6 & 1361 (86) & 2263.24 & 2988.88 & 3424.69 r210\_ 100 & 1351 & 1355.4 & 1360 (81) & 1062.36 & 2423.89 & 3377.75 r211\_ 100 & 1396 & 1403.0 & 1411 (90) & 2005.61 & 2726.10 & 3265.09 rc101\_ 100 & 427 & 427.0 & 427 (19) & 7.32 & 27.90 & 99.56 rc102\_ 100 & 494 & 497.2 & 505 (20) & 319.09 & 1496.91 & 3075.49 rc103\_ 100 & 506 & 510.2 & 516 (21) & 426.05 & 1965.91 & 3533.84 rc104\_ 100 & 565 & 568.8 & 575 (23) & 1583.98 & 2381.26 & 3445.15 rc105\_ 100 & 478 & 478.4 & 480 (21) & 940.80 & 1676.74 & 3369.17 rc106\_ 100 & 480 & 480.2 & 481 (20) & 1121.33 & 1865.07 & 3173.21 rc107\_ 100 & 526 & 530.2 & 534 (21) & 231.60 & 771.48 & 2045.81 rc108\_ 100 & 531 & 541.6 & 550 (22) & 194.45 & 820.99 & 2313.77 rc201\_ 100 & 1354 & 1365.4 & 1376 (66) & 528.52 & 1654.27 & 3495.90 rc202\_ 100 & 1457 & 1464.8 & 1472 (75) & 1076.83 & 2619.56 & 3516.05 rc203\_ 100 & 1523 & 1542.2 & 1573 (80) & 882.99 & 2176.18 & 3476.60 rc204\_ 100 & 1608 & 1614.8 & 1622 (89) & 1202.02 & 2561.77 & 3573.06 rc205\_ 100 & 1405 & 1417.4 & 1428 (73) & 504.28 & 2115.17 & 3322.36 rc206\_ 100 & 1473 & 1495.6 & 1514 (78) & 975.66 & 2105.05 & 3253.70 rc207\_ 100 & 1501 & 1523.0 & 1544 (80) & 1852.18 & 2937.45 & 3424.19 rc208\_ 100 & 1587 & 1608.4 & 1646 (88) & 1459.34 & 2572.30 & 3455.41 pr01\_ 48 & 502 & 502.0 & 502 (32) & 28.15 & 602.21 & 1918.67 pr02\_ 96 & 702 & 705.2 & 714 (439) & 289.31 & 1017.60 & 1850.88 pr03\_ 144 & 727 & 731.4 & 740 (40) & 1308.75 &2083.02 &2628.40 pr04\_ 192 & 877 & 883.6 & 899 (50) & 682.31 & 2439.32 & 3409.04 pr05\_ 240 & 1016 & 1021.8 & 1034 (55) & 442.58 & 2278.50 & 3383.48 pr06\_ 288 & 977 & 987.6 & 995 (51) & 369.49 & 1724.14 & 2648.17 pr07\_ 72 & 566 & 566.0 & 566 (33) & 354.18 & 972.26 & 1669.67 pr08\_ 144 & 808 & 813.0 & 819 (44) & 277.53 & 1729.99 & 2998.38 pr09\_ 216 &851 & 862.0 & 880 (48) & 2628.21 & 3130.95 & 3594.25 pr10\_ 288 & 1053 & 1064.6 & 1078 (57) & 2544.51 & 2918.58 & 3534.76 pr11\_ 48 & 544 & 544.8& 547 (37) &60.64 &880.30& 3380.38 pr12\_ 96 & 759& 763.4 &768 (44) & 1682.31& 2706.06 &3466.05 pr13\_ 144 & 788 &798.8 &816 (45) &2700.42& 3240.72& 3564.82 pr14\_ 288 & 936 & 943.6 & 952 (54) & 1240.28 & 2100.70 & 3187.72 pr15\_ 72 & 1090 & 1100.0 & 1120 (59) & 2653.13 & 3188.79 & 3447.06 pr16\_ 144 & 1089 & 1101.0 & 1119 (58) & 1924.75 & 2833.99 & 3562.92 pr17\_ 72 & 635 &646.2& 652 (39) &64.69 &1378.00 &2951.05 pr18\_ 144& 874 & 888.4 & 907 (47) & 489.81 & 1960.26 & 3372.53 pr19\_ 216& 932 & 940.8 & 950 (54) & 1419.52 & 2366.93 & 3152.26 pr20\_ 288& 1102 & 1113.0 & 1122 (62) & 2549.27 & 3192.33 & 3372.58 |c|ccc|ccc| [t3] Problem&& &Min&Avg&Max (Nodes)&Min&Avg&Max Problem&& &Min&Avg&Max (Nodes)&Min&Avg&Max c101\_ 100 & 810 & 810.0 & 810 (31) & 71.25 & 1515.58 & 3320.80 c102\_ 100 & 910 & 918.0 & 920 (34) & 106.55 & 442.80 & 1121.05 c103\_ 100 & 960 & 972.0 & 980 (33) & 91.88 & 1415.30 & 2230.89 c104\_ 100 & 990 & 1004.0 & 1020 (35) & 476.02 & 681.70 & 856.47 c105\_ 100 & 870 & 870.0 & 870 (33) & 16.07 & 1010.01 & 2143.67 c106\_ 100 & 870 & 870.0 & 870 (32) & 548.69 & 1132.31 & 2698.26 c107\_ 100 & 910 & 910.0 & 910 (34) & 184.38 & 892.65 & 2521.55 c108\_ 100 & 910 & 912.0 & 920 (33) & 82.40 & 971.24 & 3266.16 c109\_ 100 & 950 & 954.0 & 970 (34) & 383.17 & 1327.54 & 2319.47 c201\_ 100 & 1810 & 1810.0 & 1810 (100) & 4.43 & 259.20 & 1082.66 c202\_ 100 & 1780 & 1784.0 & 1790 (98) & 297.39 & 1603.23 & 3271.71 c203\_ 100 & 1710 & 1738.0 & 1760 (95) & 15.86 & 1039.94 & 3351.56 c204\_ 100 & 1750 & 1758.0 & 1770 (96) & 1.12 & 1025.80 & 3526.50 c205\_ 100 & 1790 & 1796.0 & 1800 (99) & 662.65 & 2007.29 & 2966.72 c206\_ 100 & 1780 & 1788.0 & 1800 (99) & 235.53 & 1861.98 & 2459.64 c207\_ 100 & 1780 & 1784.0 & 1790 (98) & 37.69 & 1222.95 & 3518.62 c208\_ 100 & 1780 & 1786.0 & 1800 (99) & 116.61 & 1542.72 & 2783.29 r101\_ 100 & 481 & 481.0 & 481 (21) & 35.79 & 240.56 & 659.12 r102\_ 100 & 674 & 682.0 & 691 (31) & 1930.56 & 2435.40 & 3287.68 r103\_ 100 & 726 & 729.8 & 736 (33) & 1328.00 & 2694.89 & 3592.84 r104\_ 100 & 750 & 760.6 & 773 (34) & 1394.80 & 2531.35 & 3452.78 r105\_ 100 & 619 & 619.2 &620 (29) & 517.98 & 795.97 & 1193.39 r106\_ 100 & 708 & 715.0 & 722 (31) & 353.40 & 755.45 & 985.10 r107\_ 100 & 748 & 751.6 & 757 (34) & 576.31 & 1998.89 & 2850.31 r108\_ 100 & 777 & 781.8 & 790 (36) & 1343.73 & 2388.53 & 3247.86 r109\_ 100 & 695 & 701.8 & 710(32) & 96.33 & 1098.83 & 2644.81 r110\_ 100 & 722 & 732.6 & 737 (34) & 187.77 & 1708.47 & 3466.45 r111\_ 100 & 718 & 755.4 & 770 (34) & 81.13 & 1286.72 & 3265.10 r112\_ 100 & 750 & 758.8 & 769 (33) & 1346.43 & 2091.22 & 3039.20 r201\_ 100 & 1424 & 1428.4 & 1432 (95) & 522.14 & 1399.97 & 2710.82 r202\_ 100 & 1441 & 1445.6 & 1449 (97) & 492.80 & 1966.22 & 3451.92 r203\_ 100 & 1443 & 1452.4 & 1456 (99) & 972.29 & 2564.33 & 3439.09 r204\_ 100 & 1458 & 1458.0 & 1458 (100) & 5.78 & 174.66 & 385.56 r205\_ 100 & 1449 & 1454.8 & 1458 (100) & 386.75 & 1830.71 & 2629.03 r206\_ 100 & 1455 & 1456.8 & 1458 (100) & 15.64 & 1328.82 & 3313.74 r207\_ 100 & 1458 & 1458.0 & 1458 (100) & 7.49 & 598.63 & 2466.06 r208\_ 100 & 1458 & 1458.0 & 1458 (100) & 0.13 & 425.64 & 1964.77 r209\_ 100 & 1455 & 1456.8 & 1458 (100) & 63.77 & 1163.74 & 2681.52 r210\_ 100 & 1454 & 1456.2 & 1458 (100) & 347.09 & 1428.08 & 3400.61 r211\_ 100 & 1457 & 1457.8 & 1458 (100) & 0.68 & 7.33 & 20.39 rc101\_ 100 & 618 & 620.4 & 621 (29) & 16.39 & 195.67 & 389.49 rc102\_ 100 & 695 & 698.8 & 710 (31) & 153.47 & 1145.53 & 2918.14 rc103\_ 100 & 735 & 741.4 & 747 (31) & 1399.25 & 2753.95 & 3488.21 rc104\_ 100 & 793 & 813.2 & 823 (35) & 1364.88 & 2276.97 & 3473.90 rc105\_ 100 & 669 & 677.8 & 682 (30) & 132.26 & 1092.01 & 2976.23 rc106\_ 100 & 688 & 692.4 & 695 (30) & 437.09 & 1748.26 & 3553.41 rc107\_ 100 & 742 & 748.0 & 755 (32) & 208.06 & 1158.45 & 2288.11 rc108\_ 100 & 757 & 768.2 & 783 (35) & 665.06 & 1443.67 & 2548.13 rc201\_ 100 & 1663 & 1672.0 & 1681 (92) & 234.65 & 1084.91 & 2316.65 rc202\_ 100 & 1690 & 1701.4 & 1706 (97) & 308.85 & 2230.74 & 3460.65 rc203\_ 100 & 1714 & 1719.6 & 1724 (100) & 30.28 & 1213.17 & 3513.87 rc204\_ 100 & 1721 & 1722.2 & 1724 (100) & 32.10 & 1261.26 & 3261.65 rc205\_ 100 & 1656 & 1672.2 & 1698 (96) & 22.96 & 1439.79 & 2388.08 rc206\_ 100 & 1699 & 1712.6 & 1722 (99) & 1089.45 & 2185.22 & 3358.51 rc207\_ 100 & 1702 & 1714.8 & 1722 (99) & 1355.01 & 2553.89 & 3175.64 rc208\_ 100 & 1721 & 1722.2 & 1724 (100) & 15.33 & 893.85 & 2146.41 pr01\_ 48 & 619 & 619.0 & 619 (43) & 54.34 & 116.95 & 227.27 pr02\_ 96 & 935 & 938.4 & 942 (56) & 1084.95 & 2333.11 & 3485.29 pr03\_ 144 & 984 & 991.0 &999 (60) & 1893.28 & 876.82 & 3559.88 pr04\_ 192 & 1220 & 1226.4 & 1243 (69) & 2144.70 & 2909.04 & 3561.60 pr05\_ 240 & 1379 & 1386.6 & 1417 (78) & 1983.09 & 2746.66 & 3293.53 pr06\_ 288 & 1345 & 1359.0 & 1370 (71) & 1538.31 & 2719.67 & 3467.55 pr07\_ 72 & 735 & 738.4 & 744 (49) & 246.67 & 1330.57 & 2812.40 pr08\_ 144 & 1112 & 1115.0 & 1118 (59) & 335.57 & 2544.33 & 3566.88 pr09\_ 216 & 1203 & 1210.0 & 1227 (69) & 2579.76 & 3187.73 & 3467.97 pr10\_ 288 & 1418 & 1457.2 & 1492 (80) & 1042.91 & 2873.12 & 3447.16 pr11\_ 48 & 649& 649.0 &649 (46) &62.74 & 237.25 & 404.33 pr12\_ 96 & 960 & 970.6 & 985 (59) & 2424.98 & 2883.74 & 3356.37 pr13\_ 144 & 1080 & 1088.8 & 1101 (68) & 1563.33 & 2515.51 & 2803.83 pr14\_ 192 & 1252 & 1258.6 & 1263 (74) & 1312.17 & 2084.66 & 249.92 pr15\_ 240 & 1455 & 1486.8 & 1509 (83) & 1175.35 & 2671.82 & 3404.18 pr16\_ 288 & 1461 & 1481.8 & 1516 (77) & 198.84 & 2284.53 & 3410.70 pr17\_ 72T & 821 & 822.6 & 832 (54) & 564.12 & 2049.56 & 3174.96 pr18\_ 144 & 1174 & 1209.0 & 1229 (68) & 2199.10 & 2854.33 & 3401.15 pr19\_ 216 & 1289 & 1299.4 & 1320 (75) & 2425.89 & 3094.80 & 3487.74 pr20\_ 288 & 1484 & 1491.2 & 1505 (83) & 1739.74 & 2822.96 & 3268.58 |c|ccc|ccc| [t4] Problem&& &Min&Avg&Max (Nodes)&Min&Avg&Max Problem&& &Min&Avg&Max (Nodes)&Min&Avg&Max c101\_ 100 & 1010 & 1018.0 & 1020 (41) & 105.51 & 1049.07 & 1574.07 c102\_ 100 & 1140 & 1142.0 & 1150 (45) & 195.35 & 1211.26 & 2876.07 c103\_ 100 & 1180 & 1186.0 & 1190 (46) & 1755.25 & 2329.56 & 2776.30 c104\_ 100 & 1220 & 1226.0 & 1240 (46) & 256.95 & 1493.94 & 2746.12 c105\_ 100 & 1050 & 1052.0 & 1060 (43) & 23.33 & 716.48 & 2898.51 c106\_ 100 & 1050 & 1058.0 & 1070 (42) & 32.26 & 556.80 & 1188.74 c107\_ 100 & 1110 & 1114.0 & 1120 (44) & 106.84 & 411.26 & 1204.96 c108\_ 100 & 1110 & 1112.0 & 1120 (43) & 319.07 & 820.09 & 2339.76 c109\_ 100 & 1160 & 1172.0 & 1190 (46) & 75.78 & 915.97 & 2176.46 c201\_ 100 & 1810 & 1810.0 & 1810 (100) & 0.33 & 1.15 & 2.36 c202\_ 100 & 1810 & 1810.0 & 1810 (100) & 0.32 & 8.87 & 36.38 c203\_ 100 & 1810 & 1810.0 & 1810 (100) & 7.86 & 43.90 & 68.74 c204\_ 100 & 1810 & 1810.0 & 1810 (100) & 0.47 & 1.31 & 3.10 c205\_ 100 & 1810 & 1810.0 & 1810 (100) & 0.03 & 0.25 & 0.57 c206\_ 100 & 1810 & 1810.0 & 1810 (100) & 0.00 & 0.13 & 0.39 c207\_ 100 & 1760 & 1800.0 & 1810 (100) & 0.00 & 5.80 & 28.71 c208\_ 100 & 1810 & 1810.0 & 1810 (100) & 0.02 & 0.15 & 0.48 r101\_ 100 & 608 & 608.0 & 608 (29) & 9.48 & 55.13 & 203.19 r102\_ 100 & 817 & 825.6 & 836 (41) & 788.47 & 1924.54 & 2676.62 r103\_ 100 & 893 & 902.2 & 909 (43) & 798.91 & 2622.17 & 3532.33 r104\_ 100 & 931 & 944.2 & 957 (46) & 334.64 & 2343.92 & 3192.24 r105\_ 100 & 761 & 766.0 & 771 (38) & 155.49 & 838.47 & 2394.99 r106\_ 100 & 885 & 889.8 & 893 (41) & 47.98 & 929.50 & 3402.78 r107\_ 100 & 930 & 932.4 & 937 (44) & 1357.96 & 2660.90 & 3523.34 r108\_ 100 & 964 & 976.2 & 994 (48) & 1038.25 & 2596.16 & 3533.89 r109\_ 100 & 873 & 876.6 & 879 (40) & 157.12 & 1374.46 & 2708.17 r110\_ 100 & 886 & 900.0 & 908 (43) & 370.88 & 926.27 & 1686.63 r111\_ 100 & 916 & 932.4 & 944 (45) & 855.90 & 1896.35 & 2866.03 r112\_ 100 & 940 & 947.6 & 954 (45) & 900.41 & 1662.61 & 2496.41 r201\_ 100 & 1458 & 1458.0 & 1458 (100) & 127.45 & 376.41 & 827.34 r202\_ 100 & 1458 & 1458.0 & 1458 (100) & 182.35 & 936.24 & 2728.65 r203\_ 100 & 1458 & 1458.0 & 1458 (100) & 0.02 & 73.92 & 352.55 r204\_ 100 & 1458 & 1458.0 & 1458 (100) & 0.00 & 0.01 & 0.02 r205\_ 100 & 1458 & 1458.0 & 1458 (100) & 0.00 & 1.31 & 5.59 r206\_ 100 & 1458 & 1458.0 & 1458 (100) & 0.00 & 0.01 & 0.06 r207\_ 100 & 1458 & 1458.0 & 1458 (100) & 0.00 & 0.00 & 0.00 r208\_ 100 & 3456 & 3456.0 & 1458 (100) & 0.00 & 0.01 & 0.03 r209\_ 100 & 1458 & 1458.0 & 1458 (100) & 0.00 & 0.00 & 0.00 r210\_ 100 & 1458 & 1458.0 & 1458 (100) & 0.05 & 3.13 & 11.16 r211\_ 100 & 1458 & 1458.0 & 1458 (100) & 0.00 & 0.00 & 0.00 rc101\_ 100 & 801 & 805.4 & 808 (37) & 84.48 & 1324.33 & 3199.64 rc102\_ 100 & 896 & 899.4 & 903 (42) & 1210.62 & 2218.81 & 3479.62 rc103\_ 100 & 937 & 941.8 & 948 (42) & 288.47 & 2005.21 & 3160.23 rc104\_ 100 & 982 & 1013.2 & 1052 (47) & 1331.32 & 2139.30 & 3140.49 rc105\_ 100 & 860 & 867.2 & 875 (40) & 134.31 & 1052.30 & 2201.45 rc106\_ 100 & 888 & 901.4 & 908 (39) & 559.05 & 2106.67 & 2892.46 rc107\_ 100 & 952 & 959.2 & 964 (45) & 104.30 & 1763.11 & 3285.91 rc108\_ 100 & 995 & 1000.2 & 1007 (47) & 746.11 & 2222.29 & 3534.34 rc201\_ 100 & 1724 & 1724.0 & 1724 (100) & 854.00 & 2327.40 & 3427.79 rc202\_ 100 & 1724 & 1724.0 & 1724 (100) & 27.23 & 1095.77 & 2801.97 rc203\_ 100 & 1724 & 1724.0 & 1724 (100) & 0.00 & 308.58 & 1463.78 rc204\_ 100 & 1724 & 1724.0 & 1724 (100) & 0.00 & 53.33 & 266.67 rc205\_ 100 & 1724 & 1724.0 & 1724 (100) & 360.01 & 1313.73 & 2126.78 rc206\_ 100 & 1722 & 1723.6 & 1724 (100) & 0.13 & 2.79 & 6.45 rc207\_ 100 & 1722 & 1722.4 & 1724 (100) & 1.28 & 72.12 & 172.67 rc208\_ 100 & 1724 & 1724.0 & 1724 (100) & 0.00 & 0.00 & 0.00 pr01\_ 48 & 654 & 655.2 & 657 (48) & 0.52 & 15.43 & 50.61 pr02\_ 96 & 1063 & 1067.8 & 1072 (68) & 1185.62 & 2164.96 & 3146.01 pr03\_ 144 & 1190 & 1204.6 & 1222 (77) & 817.89 & 2073.95 & 3476.42 pr04\_ 192 & 1503 & 1506.0 & 1515 (85) & 2255.22 & 2986.02 & 3414.45 pr05\_ 240 & 1710 &1722.2 &1740 (94) &1710.87 &2818.47 &3528.73 pr06\_ 288 & 1709 & 17235.4 & 1740 (95) & 1673.42 & 2935.83 & 3445.90 pr07\_ 72 & 860 & 865.4 & 872 (58) & 35.40 & 1940.19 & 3318.06 pr08\_ 144 & 1337 & 1351.6 & 1376 (79) & 1247.71 & 2804.25 & 3578.14 pr09\_ 216 & 1544 & 1554.2 & 1561 (91) & 3174.85 & 3371.92 & 3586.72 pr10\_ 288 & 1813 & 1819.8 & 1827 (101) & 2415.82 & 3365.93 & 3456.00 pr11\_ 48 & 657 & 657.0 & 657 (48) & 0.00 & 0.09 & 0.35 pr12\_ 96 & 1110 & 1114.0 & 1118 (74) & 1736.62 & 2486.72 & 3002.43 pr13\_ 144 &1303 & 1316.4 & 1329 (85) & 2039.13 & 2926.28 & 3399.21 pr14\_ 192 & 1521 & 1534.4 & 1568 (91) & 1810.47 & 2730.20 & 3268.35 pr15\_ 240& 1800 & 1822.8 & 1854 (102) & 1820.47 & 2631.27 & 3445.79 pr16\_ 288& 1833 & 1867.2 & 1887 (104) & 1140.65 & 2614.66 & 3477.03 pr17\_ 72& 923 & 923.8 & 925 (65) & 1557.48 & 2749.67 & 3450.60 pr18\_ 144& 1455 & 1464.8 & 1470 (82) & 2621.62 & 3209.13 & 3478.46 pr19\_ 216& 1567 & 1579.4 & 1596 (96) & 2740.67 & 3293.94 & 3586.13 pr20\_ 288& 1811 & 1821.4 & 1841 (98) & 2661.58 & 3193.02 & 3472.65 Conclusions =========== An algorithm based on the Ant Colony System paradigm has been discussed for the Team Orienteering Problem with Time Windows. The method is based on the solution of a hierarchic generalization of the original Team Orienteering Problem. Experimental results on benchmarks instances previously adopted in the literature confirm the practical effectiveness of the algorithm we propose. **Acknowledgement** The authors are indebted to Matteo Salani and Giovanni Righini for the useful discussions and for having provided the benchmark instances used in the experiments.
arxiv_0000201
Using rotation, magnetic activity and lithium to estimate the ages of low mass stars ==================================================================================== The rotation rate, level of magnetic activity and surface lithium abundance are age-dependent quantities in stars of about a solar mass and below. The physical reasons for the evolution of these phenomena are qualitatively understood, but accurate quantitative models remain dependent on empirical calibration using the Sun and stars of known age, chiefly in clusters. In this work I review the status of these “empirical age indicators”, outlining the astrophysics of their time dependence, describing the measurements, assessing the precision (and accuracy) of age estimates when applied to individual stars, and identifying their principle limitations in terms of the mass and age ranges over which they are useful. Finally, I discuss the “lithium depletion boundary” technique which, in contrast to the empirical methods, appears to provide robust, almost model-independent ages that are both precise and accurate, but which is only applicable to coeval groups of stars. Introduction ============ The age of a star is, along with its mass and composition, the most important quantity to know for testing ideas concerning the evolution of stars, stellar systems (clusters and galaxies) and also, by association, their circumstellar material and exoplanetary systems. However, unlike mass and composition, we have no direct means of measuring the age of any star but the Sun. The ages of other stars are inferred or estimated using a hierarchy of techniques, which can be described as (see Soderblom 2010; Soderblom  2013) semi-fundamental, model-dependent, empirical or statistical. Semi-fundamental techniques rely on age-dependent phenomena where the physics is understood, there is little tuning of model parameters required and the results are basically model-independent. Model-dependent techniques include isochrone fitting in the Hertzsprung-Russell (HR) diagram, asteroseismology and white dwarf cooling. Here the physics is mostly understood, but there are annoying gaps in our ability to accurately model the physics without making simplifying assumptions or tuning parameters (e.g. the mixing length) to match observations. Often the precision of the ages determined by such techniques is much better than their absolute accuracy and different models may yield ages that differ by more than their claimed uncertainties. At a level below the model-dependent techniques are empirical age indicators. For these, the understanding of the physics is qualitative, with significant holes in the theory that are usually bridged using semi-empirical relationships with free parameters. The general approach is to calibrate an age-dependent phenomena using similar observations of stars with “known” age (the Sun and stars with ages estimated by semi-fundamental or model-dependent techniques) and then use that calibration to estimate the ages of other stars (e.g. Barnes 2007; Vican 2012). Of course, there is a risk of circularity here; one cannot study the age dependence of a phenomenon using stars with ages estimated using that phenomenon! In this contribution I deal mainly with empirical age indicators associated with the rotation rates, levels of magnetic activity and photospheric lithium abundances of stars with masses *M* ≤ 1.3and how they apply to stars from birth to ages of 10Gyr. It is no coincidence that these phenomena each become useful below this mass. The presence of a sub-photospheric convection zone is responsible for dynamo-generated magnetic fields that are dissipated to provide non-radiative heating in the outer atmosphere and also couple to an ionised wind that drives angular momentum loss. The same convection zone is responsible for mixing pristine material down to interior regions where Li can be burned. The use of these indicators has its root in work done by Kraft and collaborators in the 1960s (e.g. Kraft & Wilson 1965; Kraft 1967), but perhaps the most influential early paper was by Skumanich (1972), who showed that both rotation and activity, and to some extent Li abundance, decayed according to the inverse square root of age. The data used were sparse, consisting of the Sun (age 4.57Gyr) and a few solar-type stars in the Pleiades (age  ≃ 125Myr) and Hyades (age  ≃ 600Myr) open clusters, but nevertheless this paper stimulated much of what follows. The utility of these empirical age indicators is mostly in estimating ages for low-mass main sequence (MS) and pre main sequence (PMS) stars that constitute the vast majority of the Galactic population. A principle advantage of the techniques I will discuss is that they are *distance independent*. With the successful launch of the *Gaia* satellite (Perryman  2001; Brown 2008), it might seem that uncertain stellar distance will be a solved problem within a few years. However, even with precisely known distances, the determination of ages for stars that have reached the main sequence and are still burning hydrogen in their cores is difficult. Position in the HR diagram is age sensitive, but also sensitive to the detailed composition of the star. Even with [Fe/H] known to a very respectable accuracy of  ± 0.05dex, the age of a 5Gyr solar-mass star could only be estimated to a precision of 20 per cent, and considerably worse for lower mass stars with longer main sequence lifetimes that consequently move more slowly in the HR diagram (e.g. see Fig. 20 of Epstein & Pinsonneault 2014). Asteroseismology may be an alternative distance-independent method for age estimation, with the advantage of a strong and well-understood physical basis, but it is not clear that pulsations can easily be detected in main-sequence stars well below a solar mass or in young, active stars (e.g. Huber  2011). Even if they are, it is unlikely that ages could presently be estimated for solar-type stars to absolute precisions better than 10–15 per cent of their main sequence lifetimes (e.g. Gai  2011; Chaplin  2014) and would rapidly become too large to be useful in stars below a solar mass. Hence, there is likely to be a need for age determinations using empirical indicators for the forseeable future. In section [sec2] I discuss measurements of rotation in low-mass stars, the physical basis on which rotation rate could be used to estimate age and review efforts to calibrate “gyrochronology”. Section [sec3] reviews the connection between rotation and magnetic activity and the various attempts to calibrate activity-age relationships using several magnetic activity indicators. Section [sec4] discusses the astrophysics of lithium depletion in solar-type stars, comparison of observations and models and the use of lithium as an empirical age indicator in PMS and MS stars separately. Also included is a description of the “lithium depletion boundary” technique in very low mass stars, which differs from the other methods discussed here in that it requires no empirical calibration and is semi-fundamental. Section [sec5] summarises the status and range of applicability of each of these techniques and briefly discusses efforts to improve empirical calibrations. Conclusions are presented in section [sec6]. Rotation rates and gyrochronology ================================= The motivation for using rotation rate as an empirical age indicator is discussed extensively by Barnes (2007). As well as being distance-independent it seems, at least for older stars (see below), there may be an almost unique relationship between rotation rate and age. Rotation rates are increasingly available; satellites such as *CoRoT* and *Kepler* have accumulated large quantities of rotation data (Affer  2012; McQuillan, Mazeh& Aigrain 2014), and ground-based experiments such as *HATNet* and *SuperWASP*, aimed primarily at variability or exoplanet searches, have the potential to provide rotation periods for vast numbers of stars (e.g. Hartman  2011; Delorme  2011). Photometric monitoring by *Gaia* will add to this haul. [sec2] Measuring rotation rates ------------------------ Rotation rates in low-mass stars can be found in a number of ways (see the review by Bouvier 2013), but only two are mentioned here; the others are generally more difficult to apply routinely. Spectroscopy can be used to estimate that component of spectral line broadening contributed by rotation – the projected equatorial velocity, *v**s**i**n**i*. This can be accomplished by a direct Fourier transform of the spectrum (e.g. Gray 1976; Dravins, Lindegren & Torkelsson 1990) and with very high quality data, it can even be feasible to measure differential rotation with latitude (e.g. Reiners 2007). More frequently, *v*sin*i* is estimated by calibrating the width of a cross-correlation peak against template stars or synthetic spectra with similar atmospheric parameters (e.g. Rhode, Herbst & Mathieu 2001). Although feasible using a single spectrum, and in the case of cross-correlation, a spectrum with very modest signal-to-noise ratio, the principle limitations of spectroscopic methods are the high resolving powers and accurate characterisation of the intrinsic (non-rotating) line profiles required to estimate *v*sin*i* for slow rotators, and the confusing sin*i* axis orientation term. The main alternative, and method of choice, is to monitor the brightness of stars and detect periodic modulation caused by dark magnetic starspots or bright chromospheric plages on their surfaces. Magnetic activity is required for this technique to work, so is best suited to low-mass stars at younger ages with vigorous rotational dynamos (see section [sec3]), where typical modulation amplitudes can be a few mmag to tenths of a magnitude. Typical examples of such studies can be found in Prosser  (1993), Allain  (1996), Herbst  (2000) and Irwin  (2009), which also demonstrate a progression in the efficiency of monitoring facilitated by the advent of large format CCDs. The principle advantage of this technique is that many stars can be almost simultaneously monitored using telescopes of modest aperture (compared with those required for spectroscopy). The disadvantages are that stars need to be monitored intensively and over at least a couple of rotation periods. There is also a potential bias towards the young, most active and most rapidly rotating stars – even in young, magnetically active cluster populations  ≤ 50 per cent of stars have measured rotation periods and older stars may have such small photometric amplitudes and long periods that only space-based photometry is good enough. Both the *Kepler* and *CoRoT* satellites have provided much more precise, lengthy time-series data for field stars (and some clusters) to partly nullify these problems (e.g. Meibom  2011a; Affer  2012; Reinhold, Reiners & Basri 2013; McQuillan  2014). Prior to *Kepler* and *CoRoT*, most information about the rotation of older, less active stars came from chromospheric inhomogeneities monitored in the Caii H and K lines (e.g. Donahue, Saar & Baliunas 1996; Baliunas  1996), because the contrast between chromospheric plages and the immaculate photosphere is greater than for starspots in stars as old/inactive as the Sun. Monitoring on decadal timescales at the Mount Wilson observatory has yielded many rotation periods for solar-type field stars as well as quantitative measurements of their magnetic activity and magnetic activity cycles (see section [sec3]). Rotational evolution and models ------------------------------- [gallet13] ### Observed rotational evolution Most progress in understanding the rotational evolution of solar-type and lower-mass stars comes from observations of rotation rates (predominantly rotation periods) in clusters of stars, whose members are assumed coeval and of similar composition. Compilations of data and reviews of the observations can be found in Irwin & Bouvier (2009), Gallet & Bouvier (2013) and Bouvier  (2013), and these sources also provide an overview of theoretical interpretations of these observations. Figure [gallet13] (from Gallet & Bouvier 2013) illustrates the main features of rotational evolution for groups of stars at around a solar mass, ranging in age from star forming regions at a few Myr, through to the ZAMS at  ∼  100Myr and onto later main sequence life beyond a Gyr. Solar-type stars evidently begin their lives with a wide range of rotation periods between about 1 and 15 days (e.g. in the Orion Nebula cluster; Herbst  2002, or NGC 2264; Makidon  2004). Over the first 10Myr of their lives this distribution changes little despite the order of magnitude reduction in moment of inertia as stars contract along their PMS tracks. Interactions between the star and its circumstellar disk are invoked to remove angular momentum, a process that ceases upon the dispersal of inner disks on timescales of a few Myr. This idea finds support from the correlation found in some star forming regions between the presence of disks/accretion and slower rotation (e.g. Edwards  1993; Rebull  2006; Cieza & Baliber 2007). The rotation rate distributions in older clusters show gradual evolution towards faster rotation rates at the ZAMS, presumably as a result of PMS contraction. Although the long-period envelope remains fairly constant in solar-type stars, there are few slow rotators among lower mass ( ≤ 0.5 *M*⊙) stars at ages of 10-200Myr, with most having rotation period *P* < 3d (e.g. Irwin  2007, 2008). The range of rotation rates in solar-type stars rapidly increases to nearly two orders of magnitude; at  ∼ 15Myr the rotation rate distribution is still quite flat but the range has grown to 0.2 < *P* < 15days (e.g in the h Per cluster; Moraux  2013). At  ∼ 50 − 150 Myr the bulk of solar-type ZAMS stars have 6 < *P* < 10days, but a tail of rapid rotators persists to periods as short as 0.3 days (e.g. in the Alpha Per and Pleiades clusters; Prosser  1993; Krishnamurthi  1998; Hartman  2010). Beyond the ZAMS, with the moment of intertia essentially fixed, the wide distribution of rotation rates in solar-type stars converges, a process thought to be driven by a magnetised stellar wind, with angular momentum losses that increase with rotation rate. Convergence is almost complete for solar-type stars at ages of  ≥ 500Myr (e.g. in the Hyades; Radick  1987, or in M37; Hartman  2009). The timescale for convergence is however mass-dependent and fast rotating K-dwarfs are still seen in clusters with ages of a few hundred Myr (e.g. in M34; Meibom  2011b), whilst M-dwarfs with rotation periods  < 1d are still observed in the Hyades and Praesepe clusters at ages of  ∼ 600Myr (Delorme  2011; Agüeros  2011). In fact if anything, the dispersion in rotation rates appears to grow with age in these lower-mass stars as evidenced in the wide range of periods found for (predominantly old) field M dwarfs (Irwin  2011; McQuillan, Aigrain & Mazeh 2013). ### Rotational evolution models Models to interpret these data are semi-empirical; there are several components that, whilst physically motivated, require calibration using cluster data and the current rotation rate of the Sun. Starting from an initial rotation period at a very young age, the effect of torques and moment of inertia changes are followed and models include some or all of the following ingredients: There is no general agreement yet on which mechanisms prevent the spin-up of contracting PMS stars, but the presence of an inner disk appears to be implicated. The necessary transfer of angular momentum may be provided via the original “disk-locking” proposed between the accretion disk and stellar magnetic field (Camenzind 1990; Koenigl 1991); more recent ideas include accretion-driven winds or magnetospheric ejections (e.g. Matt & Pudritz 2005; Zanni & Ferreira 2013). Whatever is responsible, most rotational evolution models assume that rotation rates are held constant until the inner disk disperses. This disk dispersal timescale, observationally found to be in the range 1–10Myr, almost certainly varies from star-to-star for poorly understood reasons and is a tuneable model parameter, largely set by the difference in the mean and range of rotation rates at the ZAMS compared with those in the initial distribution (e.g. Bouvier, Forestini & Allain 1997). Once disks disperse then stars are free to spin-up if they have not reached the ZAMS. The moment of inertia will decrease roughly on the Kelvin-Helmholtz timescale – around 10Myr for a solar mass star, but hundreds of Myr for lower-mass stars (i.e. much longer than any disk dispersal timescale). Stellar (surface) spin up is moderated both by angular momentum losses and the possible decoupling of radiative core and convective envelope (see below). The large scale magnetic B-field of a star will force its ionised stellar wind into co-rotation out to some distance approximated by the Alfven radius. Upon decoupling, the wind carries away angular momentum at a rate that depends on the rotation rate of the star, the mass-loss rate, the strength and geometry of the magnetic field and the details of the wind velocity profile and interaction with the magnetic field (Mestel & Spruit 1987). A common parametrisation attributable to Kawaler (1988) and Chaboyer, Demarque & Pinsonneault (1995a) is $$\frac{dJ}{dt} = f\_k K\_w \left(\frac{R}{R\_{\odot}}\right)^{2-N} \left(\frac{M}{M\_{\odot}}\right)^{-N/3} \left(\frac{\dot{M}}{10^{-14}}\right)^{1-2N/3} \Omega^{1+4N/3}\ \ (\Omega < \Omega\_{\rm crit})\,, \\ \label{jdot}$$ $$\frac{dJ}{dt} = f\_k K\_w \left(\frac{R}{R\_{\odot}}\right)^{2-N} \left(\frac{M}{M\_{\odot}}\right)^{-N/3} \left(\frac{\dot{M}}{10^{-14}}\right)^{1-2N/3} \Omega\, \Omega\_{\rm crit}^{4N/3}\ \ (\Omega \geq \Omega\_{\rm crit})\,, \label{jdotsat}$$ where *N* is an index specifying the B-field geometry (*N* = 2 is radial, *N* = 3/7 represents a dipolar field), $\dot{M}$ is the wind mass-loss rate in solar masses per year, *K**w* is a constant ( = 2.036 × 1033 in cgs units), *f**k* is a parameter that encapsulates the constant of proportionality in an assumed linear relationship between surface magnetic *flux* and rotation rate Ω, as well as uncertainties in the wind speed as it decouples from the field at the Alfven radius. The strong dependence on Ω is the main physics behind the convergence of rotation rates in later main sequence life. $\Omega\_{\rm crit}$ is a threshold rotation rate at which the B-field and consequently the angular momentum loss rate “saturate”. This is motivated by the need to ensure that fast-rotating stars do not spin-down too quickly upon reaching their ZAMS radius and the observation that saturation is observed in chromospheric and coronal indicators of magnetic activity (see section [sec3]). Of these parameters, several need to be assumed (e.g. *N*, the relationship between B-field and rotation rate) or fixed by ensuring that at 4.5Gyr, the rotation rate of the Sun is reproduced (e.g. *f**k*). If *N* ≃ 1 the angular momentum loss rate is not too dependent on the assumed $\dot{M}$, which is fortunate as there are few constraints on this for stars younger than the Sun. Some of these degrees of freedom are beginning to be constrained by new MHD simulations, albeit still with simplifying assumptions about B-field geometry (e.g. Matt  2012). Using equation [jdot] for a star with a fixed moment of inertia and $\Omega < \Omega\_{\rm crit}$ leads directly to Ω ∝ *t*− *α*, with *α* = 1/2, as suggested by Skumanich (1972). However, it is of critical importance in what follows to note that the *t*− 1/2 behaviour is very dependent on model assumptions and is by no means assured of applying at all masses. For instance Reiners & Mohanty (2012) have pointed out that if there is instead a linear relationship between magnetic *field* and rotation rate then the radius dependence of *d**J*/*d**t* is much stronger (e.g.  ∝ *R*16/3 for a radial field rather than radius-independent in equation [jdot]). As *R* changes even during main sequence evolution, this changes the form of Ω(*t*). Similarly, any mass-dependent or time-dependent changes in $\dot{M}$ or B-field topology will alter *α* and possibly give it a mass- or time-dependence. As angular momentum is lost from the surface, interior processes act to transport angular momentum within stellar radiative zones - these may include hydrodynamic instabilities, magnetic fields or gravity waves (Mestel & Weiss 1987; Chaboyer  1995a; Pinsonneault 1997; Mathis  2013; Mathis 2013; Charbonnel  2013). Some studies treat this numerically as a diffusion process within radiative zones Denissenkov  2010; Eggenberger  2012), others allow the radiative core and convective envelope to rotate as solid bodies at different rates with a coupling timescale (e.g. MacGregor & Brenner 1991; Gallet & Bouvier 2013). In either case there are free parameters associated with the diffusion coefficients or coupling timescales that can partly be constrained by what we know about the internal rotation of the Sun and also its surface lithium abundance (see section [sec4]), but otherwise must be considered free parameters that may depend on mass or surface rotation rate. ### Putting it together Parametrised models incorporating some or all of these features have been studied by many authors in the past decades; more recent studies include Denissenkov  (2010), Spada  (2011) and Gallet & Bouvier (2013). Models begin with an assumed rotation rate for young stars and typically assume this rotation rate is constant whilst the star possesses an inner disk. A prescription for wind angular momentum losses and a stellar evolution model are then used to follow rotation rate as the star contracts and loses angular momentum. The core and envelope may be treated as separate solid body rotators with a coupling timescale. There are sufficient free or assumed parameters in this model (e.g. the dynamo prescription, initial rotation rate, disk lifetime, coupling timescale, *f**k*, $\Omega\_{\rm crit}$) that reasonable fits can be found to the observed distribution of rotation rates. Another degree of freedom is the mass-dependence of these parameters. It has been known for some time that models which match the evolution of rotation rate distributions for solar-type stars do not adequately match those of lower mass stars using the same set of parameters (e.g. Sills, Pinsonneault & Terndrup 2000; Irwin  2011). A mass-dependent $\Omega\_{\rm crit}$, changes in magnetic topology or dynamo location, or wind braking laws with a more extreme mass/radius dependence (e.g. Reiners & Mohanty 2012; Brown 2014) may provide solutions, but at the moment we are some way from being able to directly estimate stellar ages from models of rotational evolution. Empirical gyrochronology ------------------------ ### Gyrochrone construction and use [gyrochrone] The observed evolution of rotation rate distributions and a semi-empirical understanding of the processes involved offer both problems and opportunities. The broad range of rotation rates seen in solar-type stars between about 1 and 200Myr, and to even older ages for lower mass stars, mean that rotation rate is a *poor* age indicator for individual stars at these ages. However, the model predictions of a convergence in rotation rates to a single-valued function of age for older main sequence stars, and observations that suggest this may actually happen at least to ages of a Gyr or so in solar-type stars, suggest that rotation *can* be a good empirical age indicator for these stars. Barnes (2003, 2007) noted that when plotted in the *P* versus *B* − *V* plane, stars in young ( < 1Gyr) clusters and older field stars appear to populate two distinct sequences (schematically illustrated in Fig. [gyrochrone]) and which Barnes termed the I- and C-sequences. The I-sequence appears well-established in a number of clusters and is formed from those stars with converged rotation rates, roughly following the Ω ∝ *t*− 1/2 Skumanich law. The C-sequence appears less tight and is populated by rapidly rotating stars, which are in the saturated regime of magnetic activity (see section [rotactivity]) with $\Omega > \Omega\_{\rm crit}$. As the convergence timescales are observed to be longer in low-mass stars, the junction of these two sequences moves redward, with everything blueward of the junction being on the I-sequence. Barnes suggested that stars on the I-sequence have a fully (magnetically) coupled core and envelope, whereas in stars on the C-sequence the core and envelope are decoupled. The dearth of objects between the two sequences reflects a short evolutionary timescale between the two states and Barnes (2003) interpreted this as a switching between a convective (C) and an interface (I) dynamo that couples the core and envelope, causing a rapid spindown. Irrespective of the (strongly debated – e.g. Denissenkov  2010; Barnes & Kim 2010; Brown 2014) physical processes at play, the phenomenon of an I-sequence can be used to estimate stellar ages. The basic procedure is to say that rotation rate is a *separable* function of both age and colour/mass/$T\_{\rm eff}$. i.e. *P*(*B* − *V*, *t*) = *f*(*B* − *V*) *g*(*t*) ,  $$f(B-V) = a\,[(B-V) - b\,]^c\,, \ \ {\rm and} \ \ \ g(t) = t^\alpha\,. \label{gyroeq2}$$ The function *f* is chosen to match the shape of the I-sequence in young clusters, whilst *α* = 0.5 is equivalent to the Skumanich spin-down law. A number of authors have calibrated relationships of this type (e.g. see Table 1 in Epstein & Pinsonneault 2014); *f* is found from fitting one (or several) clusters in the *P* vs *B* − *V* plane, whilst *α* is determined by matching the solar rotation rate. Different authors find *α* in the range 0.52–0.57 and there are significant differences in the form of *f* too (Barnes 2007; Mamajek & Hillenbrand 2008; Meibom, Mathieu & Stassun 2009, 2011a; Collier Cameron  2009). Figure [gyrochrone] shows gyrochrones (I-sequences) calculated from equations [gyroeq1] and [gyroeq2], with the parameters derived by Barnes (2007), and C-sequences generated according to the functional form defined by Barnes (2003). Two hypothetical stars, A and B, are shown along with the Sun. Star A lies just above the 1Gyr I-sequence gyrochrone *and* to the left of the C-/I-sequence junction for 1Gyr. Therefore its age can be estimated as just greater than 1Gyr. Star B however lies well below the 0.1Gyr I-sequence but *to the right* of the 0.1Gyr C-sequence. As stars with ages up to about 0.3Gyr could exist at this period/colour (on or above their C-sequences) then about the best we can say as that star B is  < 0.3Gyr. This is a fundamental limit of gyrochronology that traces back to the large scatter in rotation rates seen at ages prior to the (mass-dependent) convergence. ### Problems, precision and accuracy In addition to the fundamental problem at young ages just discussed, it can be difficult to measure the rotation period of a star. Whilst young stars (or those with short periods) *may* have a healthy photometric modulation amplitude that can be detected from the ground (actually censuses of rotation period are often  < 50 per cent complete even in young clusters), this is not generally true at older ages, where precision photometry from space is required (see Fig.9 in Reinhold, Reiners & Basri 2013). Alternatively, chromospheric modulation can be used in older stars (e.g. Donahue  1996), but is *much* more expensive in terms of observing time. The precision and accuracy of gyrochronology may also be affected by measurement uncertainties, differential rotation, limited convergence, binarity and most importantly, calibration uncertainties. The precision of most period measurements is high, but stars may have differential rotation with latitude such that the period measured at one epoch may not be that measured at another, depending on the latitudinal starspot distribution. Differential rotation has been studied using the Mt Wilson chromospheric activity time-series, finding Δ*P*/*P* ≃ 0.05 *P*0.3 (Donahue  1996; where Δ*P* is the range of periods found at a single epoch). Reinhold  (2013) use Kepler data to show that, compared with the Sun (which has Δ*P*/*P* ∼ 0.14) differential rotation increases with $T\_{\rm eff}$ and *P*. If Δ*P*/*P* = 0.1 and *P* ∝ *t*− 1/2, then this leads to an age uncertainty of 20 per cent if only a single measurement of *P* is available. The assumption of convergence to a unique I-sequence is approximate. Convergence takes longer at lower masses and so the older and more massive the star, the better this approximation is. Epstein & Pinsonneault (2014) perform simulations and show that this dominates over likely uncertainties caused by differential rotation at *M* < 0.7 *M*⊙ and grows very rapidly below 0.4 *M*⊙. Conveniently, convergence becomes a problem in stars where differential rotation is unlikely to be important and vice-versa, so the overall precision of gyrochronology is likely abut 20 per cent in most cases. The situation may be a little better in stars with ages of 0.5–1Gyr. The empirical scatter of rotation periods around the I-sequence in such clusters suggest that the combination of differential rotation and incomplete convergence could lead to age errors of only 9–15 per cent (Collier Cameron  2009; Delorme  2011). Most of the discussion in this section applies to single stars or at least stars that are effectively isolated from their companions as far as angular momentum transfer is concerned. In particular, stars in close, tidally locked binary systems may appear to rotate at rates much faster than in a single star of similar age. Gyrochronology is essentially calibrated using a group of young ( ≤ 600Myr) clusters and the Sun. In particular, the assumptions of separable mass and time dependencies and a simple, unique power-law time dependence may not be true. For example different wind angular momentum loss prescriptions give quite different predictions of the mass-dependence of Ω, even when tuned to match the rotation rate of the Sun (Reiners & Mohanty 2012). In addition, mass or time-dependencies in $\dot{M}$, B-field topology and stellar radius could lead to radically different gyrochrone shapes and spacing at lower masses and older ages. Some confidence can be gained by noting that the gyrochronological ages of the components of a few (wide) binary pairs with known rotation periods and differing masses are roughly in agreement (Barnes 2007), but they have large individual uncertainties and there are indications that these ages may not agree with those from asteroseismology (Epstein & Pinsonneault 2014). There is an urgent need for better calibrating data (stars with known age and rotation period) at lower masses than the Sun and at ages of 1-10Gyr (see section [improvements]). Magnetic activity as an age indicator ===================================== Magnetic activity indicators ---------------------------- Some of the difficulties associated with calibrating gyrochronology can be addressed by using proxies for rotation that are easier to measure. In stars with outer convection zones it appears that a rotational dynamo can sustain a magnetic field, which emerges from the photosphere and provides a source of non-radiative heating, leading to the formation of a chromosphere and a hot corona. Indicators of magnetic activity include coronal X-ray emission from gas at 106–107K. The chromosphere is at lower temperatures but there are many emission lines that can be found which act as diagnostics of the magnetic heating process(es), found mainly in the blue and ultra-violet part of the spectrum, but which also include Balmer-line emission. Each of these diagnostics demands different technologies and techniques for their study and describing these is beyond the scope of this review. Here we just need to know that usually, a *distance-independent* magnetic activity index can be formed from the excess emission beyond that expected from a normal photosphere, normalised by the bolometric luminosity. The principal examples in most of the literature on age determination are: the Mt Wilson $R^{'}\_{\rm HK}$ index, formed from the chromospheric flux found in the cores of the Caii H and K lines normalised by the bolometric flux; and the ratio of X-ray to bolometric flux. Generally speaking, magnetic activity indices are easier to measure than a rotation period and are often assessed with a single epoch of observation – though this can bring problems (see below). There are of course limitations imposed by the sensitivity of instruments, the distance to the stars in question and the contrast between the photosphere and the magnetic activity indicator, which gets weaker as stars become less active. i.e. Just like rotation, activity gets harder to measure in older stars. The rotation-activity relationship ---------------------------------- The utility of magnetic activity as an age indicator arises because of its close connection with rotation. This connection, ultimately due to the nature of the interior dynamo that amplifies the magnetic field, has been empirically understood for some time. For instance Pallavicini  (1981) noted a good correlation between X-ray luminosity and the square of the rotation velocity; Noyes  (1984) found an equivalent inverse correlation between flux in the Ca ii H and K lines and the rotation period. Noyes  also noted that a much tighter correlation could be found between the ratio of chromospheric flux to bolometric flux ($R^{'}\_{\rm HK}$) and the inverse of the Rossby number (*N**R*, the ratio of rotation period to turnover time at the base of the convection zone). The turnover time increases with decreasing mass and *N**R* has become the parameter of choice in activity-rotation correlations because it reduces the scatter when combining data for stars with a range of masses and convective turnover times (e.g. Dobson & Radick 1989; Pizzolato  2003; Jeffries  2011; Wright  2011). At small *N**R* (shorter rotation periods, or longer convective turnover times at lower masses or in PMS stars) X-ray and chromospheric activity indicators *saturate*. They reach a plateau at *N**R* ≤ 0.1 below which they do not increase further, whilst at larger *N**R*, activity decreases (Vilhu & Walter 1987; Pizzolato  2003). *N**R* = 0.1 corresponds to *P* ≃ 3d for a solar type star, but *P* ≃ 6d for an M0 star with a longer turnover time. This saturation poses serious difficulties for the use of activity as an age indicator in young stars. The period at which saturation occurs in solar mass stars is just below the I-sequence gyrochrone at 100Myr, so a large fraction of stars at this age and younger have saturated magnetic activity and therefore the observation of saturated magnetic activity in a star can only yield an upper limit to its age. This age ambiguity grows at lower masses because the increasing convective turnover times and longer spin-down timescales of lower mass stars means that a larger fraction of stars are saturated at a given age and they remain in the saturated regime for longer. Empirical activity-age relationships ------------------------------------ [lxage] ### X-ray activity Reviews of what is empirically known about the time-dependence of magnetic activity can be found in Randich (2000), Ribas  (2005) and Güdel (2007). Figure [lxage] shows an empirical relationship between X-ray luminosity and age for stars of about a solar mass. Different symbols show mean levels and the interquartile range from surveys of open clusters with “known age” (data are from Randich 2000; Flaccomio, Micela & Sciortino 2003; Preibisch & Feigelson 2005, and references therein) and data for a few field stars and the Sun, where ages have been estimated by other means, and where lines connect multiple measurements of the same star (Telleschi  2005; Güdel 2007). These data are not complete, but they illustrate the basic principles and problems of using empirical activity-age relations to estimate ages. The overall decay of X-ray activity with age is clearly seen, but the decay is not rapid for the first few hunded Myr, especially if the X-ray luminosities were normalised by bolometric luminosity to make them distance independent age indicators. In addition there is scatter at all ages that cannot be attributed to observational uncertainties. In the very young clusters there is at least an order of magnitude range of *L**x* (or $L\_x/L\_{\rm bol}$). This spread is not associated with rotation; most stars here have very low Rossby numbers that would put them in the saturated regime. Some of the spread may be associated with flaring or the presence of circumstellar material (e.g. Wolk  2005; Flaccomio, Micela & Sciortino 2012). In the young ZAMS clusters the spread in X-ray activity remains, but this is known *not* to be due to variability (e.g. Simon & Patten 1998; Jeffries  2006) and these stars have lost their circumstellar material. Instead, the rotation-activity correlation is at work. Many stars have spun down below the threshold where their activity is saturated and hence exhibit lower activity levels, whilst other stars in the same cluster remain as rapid rotators. The lack of variability and strong connection with rotation persists until at least the Hyades at an age of 600Myr, with $L\_{x}/L\_{\rm bol}$ being proportional to *N**R*− 2.7 in the unsaturated regime (Wright  2011). Beyond 1Gyr we expect from Fig. [gallet13] that rotational convergence has taken place, Ω ∝ *t*− 1/2, and hence $L\_{x}/L\_{\rm bol} \propto t^{-1.35}$. If anything, the decay looks like it may be a little steeper than this but there are no old open clusters with good ages near enough to study in detail with X-ray telescopes. Ribas  (2005) derived a time-dependence of between *t*− 1.27 and *t*− 1.92 for the soft and hard X-ray fluxes from 6 solar analogues in the field. Observations of field stars reveal, in contrast to the younger stars, a high level of variability on timescales of years. The Sun’s soft X-ray emission changes by almost two orders of magnitude on a roughly 11-year cycle (Strong & Saba 2009) and there are now observations of several solar analogues that indicate that at some point beyond 1Gyr, large (order of magnitude) and possibly cyclic variability of X-ray emission may commonly occur (e.g. Favata  2008; Robrade  2012). In terms of mass dependence, the decay of coronal activity closely follows what happens with rotation rates. Lower mass stars take longer to spin down and remain in the saturated regime for longer. So, whilst X-ray activity is a poor age indicator for the first 100Myr in a solar-type star, this period extends to 1Gyr or more in M-dwarfs. The longer spin down timescales also mean that field M-dwarfs tend to be more active than field G-dwarfs (when expressed as $L\_{x}/L\_{\rm bol}$, e.g. Preibisch & Feigelson 2005). Even once stars have spun down and reached the converged I-sequence of rotation periods, X-ray activity is still not a very good age indicator because of the high levels of variability. This limits precisions to about a factor of two in age. In principle this could be improved by monitoring the activity of a star over many years, but this is not usually practical. Instead, the recent focus of activity-age relations has turned to chromospheric emission in the form of the $R^{'}\_{\rm HK}$ index. ### Chromospheric activity [rhkplot] In general as one considers activity indicators that arise from lower/cooler layers in the magnetically heated outer atmosphere, the time dependence becomes less steep ( ∼ *t*− 1/2, e.g. Skumanich 1972; Ribas  2005; Findeisen, Hillenbrand & Soderblom 2011) and this appears to be true for G-, K- and M-dwarfs (Stelzer  2013). This is because of the rather steep slope in the correlation between chromospheric and coronal activity indicators (e.g. $L\_X/L\_{\rm bol} \propto (R^{\prime}\_{\rm HK})^{3.46}$; Mamajek & Hillenbrand 2008). However, typical levels of variability in the $R^{\prime}\_{\rm HK}$ index for stars that are young or old are only 5–10 per cent (Baliunas  1995), though young stars are still of course affected by a spread in rotation rates and saturation of chromospheric emission. So, even after taking account of the shallower $R^{\prime}\_{\rm HK}$-age relation, we see that chromospheric activity ages should suffer less than coronal activity ages from uncertainties due to magnetic activity cycles and other variability in older stars. There are several flavours of $R^{\prime}\_{\rm HK}$-age relations in the literature (e.g. Barry, Cromwell & Hege 1987; Soderblom, Duncan & Johnson 1991; Lachaume  1999), which are reviewed in some detail by Mamajek & Hillenbrand (2008), who then provide an updated calibration (see Fig. [rhkplot]) and also provide a $L\_x/L\_{\rm bol}$-age relationship that is bootstrapped from the chromospheric calibration. The $R^{\prime}\_{\rm HK}$-age relation suffers less than the $L\_x/L\_{\rm bol}$-age relation from problems with variability at older ages as discussed above, but the limitations associated with spreads in rotation rate and hence activity at younger ages are just as severe. A further key advantage of the $R^{\prime}\_{\rm HK}$-age relation is that there are good data for solar-type stars in the old open clusters M67 and some data in NGC 188. These calibration “points”, consisting of a set of coeval stars at 4 and 6.9Gyr, give an excellent estimate of the precision of the method. Further tests are provided by the comparison of chromospheric ages for stars in wide binary systems, with either similar components or components with different masses. Problems, precision and accuracy -------------------------------- Mamajek & Hillenbrand (2008) conclude that for solar-type stars older than a few hundred Myr, carefully measured $R^{\prime}\_{\rm HK}$ values yield log age to  ± 0.2 dex, or an age precision of  ∼ 60 per cent. The uncertainty grows rapidly at younger ages, due to the growing dispersion in $R^{\prime}\_{\rm HK}$ (see Fig. [rhkplot]), to become unusable at  ≤ 100Myr except in providing an upper limit to the age of a star. For reasons that are not clear (perhaps binaries have smaller amplitude activity cycles?), the dispersion in empirically determined age between the components of binary systems is lower than the dispersion implied by the spread of observed chromospheric activity in the presumably coeval stars of the Hyades and M67. A further limitation of the chromospheric activity-age relation is that calibrating data for lower mass stars is more sparse. Attempts to determine the slope of the $R^{\prime}\_{\rm HK}$-mass (or colour) relation from cluster data yield a wide diversity of results. Instead, Mamajek & Hillenbrand (2008) use a newly derived activity-*N**R* relation and combine this with a gyrochronology relation to create an activity-age-colour relationship, calibrated for F7-K2 main sequence stars. This assumes that stars have converged to the I-sequence and also makes the same assumptions about the separable nature of the colour and time dependence of rotation rate used in other gyrochronology relations. The new relation does reduce the dispersion in ages estimated for the binary components with different masses, but the dispersion estimated for stars in M67 remains stubbornly at the  ± 0.2 dex level. The situation is similar for X-ray activity indicators, though likely to be worse at older ages despite a steeper decline with age, because coronal X-ray variability is much greater than that of the chromosphere in older stars. Further limitations to the technique mirror those discussed for gyrochronology in section [gyrolimitations]. Whilst differential rotation should not be a problem, the limited convergence of rotation rates onto the I-sequence may be partly responsible for the dispersion in ages estimated for coeval stars. Like gyrochronology, activity-age relationships should not be used for close binary systems where tidal or other interactions may have affected the rotation rates of the components. Finally, like gyrochronology, the activity-age relationships (both coronal and chromospheric) are poorly calibrated at ages older than, and masses lower than, the Sun. Attempts to improve this situation are briefly reviewed in section [improvements]. Lithium depletion and age ========================= The “ecology” of lithium in the universe makes it a fascinating probe of many physical processes, ranging from the big-bang to cosmic ray spallation reactions and mixing in stellar interiors. 7Li is produced during the first minutes of a standard big-bang (Wagoner, Fowler & Hoyle 1967) at a predicted abundance (post-*Planck*) of 7Li/H = 4.89− 0.39+ 0.41 × 10− 10, or A(Li) = 2.69− 0.04+ 0.03, on a logarithmic scale where A(H) = 12 (Coc, Uzan & Vangioni 2013). This abundance is significantly higher than seen in old population II stars, which exhibit a plateau of Li abundance versus $T\_{\rm eff}$ at A(Li) = 2.20 ± 0.09 (Sbordone  2010) – the "Spite plateau” (Spite & Spite 1982). This discrepancy suggests either “new” physics beyond the standard big-bang model or that physical processes have been able to deplete Li from the photospheres of these old stars. On the other hand, the abundance of Li found in meteorites is A(Li) = 3.26 ± 0.05 (Asplund  2009), which implies that the interstellar medium in the Galaxy becomes Li-enriched with time. Several production mechanisms are under investigation; inside AGB stars, cosmic ray spallation, novae (Prantzos 2012). The photospheric solar 7Li abundance is A(Li) = 1.05 ± 0.10 and observations of solar-type stars in the field and open clusters reveal a wide dispersion of A(Li), from less than the solar value to approximately the meteoritic abundance, clearly indicating that depletion mechanisms are at work. It is the time-dependence of these depletion processes that makes Li abundance a potential age indicator. Lithium in pre main sequence stars ---------------------------------- ### The astrophysics of PMS Li depletion After PMS stars are born, they initially contract along fully-convective Hayashi tracks. Once their cores reach temperatures of  ∼ 3 × 106 K, then Li burning commences through the reaction[1](#fn1) 7Li(p,4He)4He. The reaction is extremely temperature dependent ( ∼ *T*20; Bildsten  1997), the density dependence is secondary. Li-depleted material at the core is convectively mixed upwards and replaced with fresh material and the star could then be completely depleted of Li on a timescale of a few–10Myr (much less than the Kelvin-Helmholtz contraction timescale). In stars with *M* < 0.4 *M*⊙, that remain fully convective right through to the ZAMS, this is indeed what happens. However, higher mass stars develop a central radiative core because their central opacity falls far enough to reduce the temperature gradient below the critical value necessary to trigger convection. Convective mixing to the core ceases and the extent to which *photospheric* Li depletion will continue depends now on the temperature of the convection zone base ($T\_{\rm BCZ}$). In stars of *M* ≤ 0.6*M*⊙ (based on the models of Siess, Dufour & Forestini 2000), $T\_{\rm BCZ}$ remains above the Li-burning threshold long enough to completely deplete Li in the photosphere, but in more massive stars Li-depletion should eventually be halted as the radiative core expands. If *M* > 1.1 *M*⊙ the radiative core forms before Li-depletion commences and such stars deplete very little Li. $T\_{\rm BCZ}$ is never hot enough to allow Li-burning in stars with *M* ≥ 1.3 *M*⊙ and their photospheric Li should remain at its initial value. It is worth emphasizing that the above discussion takes account only of convective mixing of material and predicts that depletion of photospheric Li should have ceased by  ∼ 100Myr in all stars with *M* ≥ 0.4 *M*⊙ and considerably earlier in stars of higher mass; i.e. the pattern of Li depletion versus mass should be settled prior to arrival on the ZAMS. Many flavours of evolutionary model have made predictions about the onset and rate of photospheric Li depletion and these can be used to define isochrones of Li depletion in the A(Li) versus $T\_{\rm eff}$ plane (see Fig. [pleiadesli]). For stars that develop a radiative core, the predicted Li depletion as a function of $T\_{\rm eff}$ is highly sensitive to the physics included in the models – the opacities (and therefore metallicity), the efficiency of convection parametrised in terms of a mixing length or overshooting, and the adopted atmospheres (e.g. Chaboyer, Pinsonneault & Demarque 1995b; Piau & Turck-Chièze 2002). For instance, at $T\_{\rm eff} \simeq 5000$K, the models of Baraffe  (2002) with mixing lengths of 1.0 pressure scale height or 1.9 pressure scale heights (the value that matches the solar luminosity at the solar age) have depleted Li by factors of 0.6 and 0.06 respectively at an age of 125Myr (see Fig. [pleiadesli]). ### Models vs observations [pleiadesli] Lithium is almost always measured using the resonance doublet of Lii at 6708Å. There are other transitions in the optical spectrum at 6104Å and 8126Å, but these are *much* weaker and blended with stronger lines. Even in cool stars Li is almost completely ionised, the line strengths are temperature sensitive (in warmer stars) and subject to NLTE effects that perturb abundances by up to 0.3dex depending on the Li abundance, $T\_{\rm eff}$ and metallicity of the atmosphere (e.g. Carlsson  1994). Basic curves of growth for the Lii6708Å feature have been calculated by a number of authors (e.g. Soderblom  1993; Palla  2007). These show that at A(Li) ∼ 3, the equivalent width (EW) is about 0.5Å and in the saturated part of the curve of growth in cool stars ($T\_{\rm eff} < 3500\,K$), while it is weaker ( ∼ 0.15Å), but more sensitive to abundance, in solar type stars. The 6708Å line is also blended with a Fei line at 6707.44Å. This is much weaker than the Li feature for stars with undepleted Li but a more accurate assessment of this blend becomes important as Li is depleted (Soderblom  1993). Other problems associated with estimating an accurate Li abundance arise from an accurate estimation of $T\_{\rm eff}$, especially in young, active stars with spots and chromospheres, and photospheric veiling by an accretion continuum may need to be accounted for in PMS stars with disks. A further problem in comparing models with observations is that models predict Li *depletion*, so an initial abundance must be adopted. For most young clusters this is usually assumed to be close to the solar meteoritic value; there is also evidence from very young (assumed to be undepleted) T-Tauri stars that A(Li)$\_{\rm init} \simeq 3.1$–3.4 (e.g. Martín  1994; Soderblom  1999). There are however plausible reasons from Galactic chemical evolution models and some observational evidence that the initial Li may be positively correlated with [Fe/H] (e.g. Ryan  2001; Cummings  2012). It seems reasonable to assume that for young stars near the Sun there could be a  ± 0.1–0.2 dex spread in the initial A(Li). Figure [pleiadesli] represents the most basic comparison of PMS Li depletion with models, showing Li abundances in the Pleiades, which has an age of 125Myr (Stauffer, Schulz & Kirkpatrick 1998), versus a number of representative model isochrones (at  ≃ 100Myr) from the literature (D’Antona & Mazzitelli 1997; Piau & Turck-Chièze 2002; Baraffe  2002). The solar photospheric Li abundance is also shown. This Figure illustrates several important points: * There appears to be little Li depletion (assuming an initial A(Li) = 3.2) among G-stars and this is predicted by most of the models. As the scatter in abundance ( ≃ 0.2 dex) is similar to the amount of depletion in G-stars and similar to the uncertainty in initial Li abundance, Li depletion will not be an accurate age indicator below 125Myr for these stars. * The models differ vastly in their predictions of PMS Li depletion in cooler stars. There are several differences between these models, but the dominant one as far as Li depletion is concerned is convective efficiency. * Models that have a convective efficiency (mixing length) tuned to match the Sun’s luminosity (the Piau & Turck-Chièze 2002 model and the Baraffe  2002 models with mixing length set to 1.9 pressure scale heights) predict too much Li depletion. A lower convective efficiency provides a better match (see also Tognelli, Degl’Innocenti & Prada Moroni 2012). * A scatter in Li abundance develops in this coeval cluster at $T\_{\rm eff}<5500$K that cannot be accounted for by observational uncertainties ( ∼ 0.1–0.2dex) or explained by the models shown. The large disagreements between the various model flavours and the failure of these models to match the Pleiades or to predict the scatter among the K-stars means that Li abundance in solar-type stars (those that develop a radiative core) cannot yet be used as anything but an *empirical* age indicator. However, the scatter at a given $T\_{\rm eff}$ (or colour) is a problem in that regard too, since stars of similar age may show a wide dispersion in their Li abundances. Unless the factors causing this dispersion can be identified, there is an inevitable uncertainty on any age inferred from an Li abundance. There is a long list of possible causes of the Li-abundance dispersion that have been considered. It seems possible that some fraction of it may be caused by atmospheric inhomogeneities or chromospheric heating of the upper photosphere (e.g. Randich 2001; King  2010), but it is unlikely to be dominant given the lack of variability in the line strengths (Jeffries 1999) and the agreement between abundances derived from the 6104Å and 6708Å features. A big clue may be the correlation with rotation, noted for the Pleiades by Soderblom et al. (1993) and García López, Rebolo & Martín (1994), and now seen in several other young clusters (though not always so strongly e.g. IC 2602, Alpha Per and several young kinematic groups – Randich  2001; da Silva  2009; Balachandran, Mallik & Lambert 2011), in the sense that fast rotators (usually only projected rotation velocity is available) have preserved more Li. Caution must be exercised in interpreting such results unless spectral syntheses have been used to estimate EWs or abundances, as line broadening and blending can cause overestimated EWs from direct integration methods (Margheim 2007). It is unlikely that the structural effects of rotation have much influence, so attention has focused on additional rotational mixing of Li, which may be greater in slower rotators with more internal differential rotation (see Fig. [gallet13] and Bouvier 2008; Eggenberger  2012), or the inhibition of convective mixing by stronger magnetic fields in rapid rotators (e.g. D’Antona, Ventura & Mazzitelli 2000; Somers & Pinsonneault 2014). [empiricalli] A further problem with using Li as an empirical age indicator in young stars is that PMS Li depletion is predicted to be very sensitive to metallicity. For example, the models of Piau & Turck-Chièze (2002) show an order of magnitude increase in ZAMS Li depletion for a solar mass star if the metallicity is increased by just 0.1 dex. The effect is smaller at a fixed $T\_{\rm eff}$ (about 0.2 dex in Li depletion per 0.1 dex change in [M/H] at $T\_{\rm eff} \simeq 5700$K – Somers & Pinsonneault 2014), but grows towards cooler stars. Fortunately, although puzzlingly for theoreticians, this extreme metallicity dependence is not observed. Pairs of clusters with similar ages but differing metallicities have only minor differences in Li depletion pattern, and pairs of clusters with similar metallicities but different ages have Li depletion patterns ordered by age (e.g. Jeffries  2002). It is possible that the metallicity dependence is mostly masked by the time-dependence of processes that either inhibit or enhance PMS Li depletion (see Somers & Pinsonneault 2014). Hence an accurate knowledge of a *young* star’s metallicity does not greatly increase the precision with which an age can be inferred from Li depletion. Figure [empiricalli] shows how Li can be used to infer ages in the form of a plot of Li i 6708Å EWs versus colour (in the absence of differential reddening, it is preferable to show the data in the untransformed observational plane, rather than Li abundance versus $T\_{\rm eff}$) for a number of clusters with ages found by more certain methods (e.g. see section [ldb]). Li is empirically sensitive to age within a mass and age range where Li depletion has begun, but Li is still detectable in the photosphere. If Li is undepleted then only an upper limit to an age is possible, whilst if all the Li has gone then an age lower limit is implied. For isolated stars this means that Li can be used to estimate ages between about 10 and 50Myr for M-dwarfs and between about 20 and a few 100Myr in K-dwarfs (Li has disappeared in K-dwarfs by 600Myr in the Hyades – Soderblom  1995). The dispersion at a given age, limits precision to about a factor of two. G-dwarfs deplete little Li on the PMS, and as this depletion is comparable to uncertainties in initial Li, then Li depletion cannot be confidently used to estimate ages in the range shown. As a result of the above discussion, Li has mainly been used in the literature for *identifying* young stars in circumstances where their ages are otherwise uncertain (e.g. they cannot be placed on an HR diagram because their distances are unknown). A boundary can be defined in Fig. [empiricalli], such that a star with an EW(Li) above the boundary is younger than some desired threshold. Examples include finding low-mass members of star forming regions, especially weak-lined T-Tauri stars with no obvious accretion or circumstellar material (Alcala  1996; Martin 1997 and many more since), or identifying members of spatially dispersed members of young, kinematic groups (e.g. Jeffries 1995; Zuckerman & Webb 2000; Montes  2001). Little effort has so far been applied to obtaining quantitative age estimates (or age probability distributions) for individual stars, though attempts have been made to put groups of coeval young stars in age order using Li (e.g. Mentuch  2008). One notable problem in this endeavour is a lack of well-populated calibrating clusters between ages of 10 and 50Myr. Lithium in main sequence stars ------------------------------ Figure [pleiadesli] shows that the Sun has depleted Li by  ∼ 2dex compared with similar mass stars at the ZAMS in the Pleiades. Such depletion is entirely unanticipated by “standard” models that include only convective mixing and is also observed in field stars at a range of $T\_{\rm eff}$ around the solar value. Additional mixing mechanisms have been proposed that will mix Li-depleted material from the hot radiative core to the base of the convection zone and hence to the photosphere. These include atomic diffusion (Richer & Michaud 1993) or mixing induced by hydrodynamcal instabilities associated with rotation or gravity waves (e.g. Vauclair 1988; García López & Spruit 1991; Pinsonneault 1997, 2010; Charbonnel & Talon 2005). At present, models of age-dependent Li depletion that incorporate these effects have significant uncertainties, including the usually unknown rotational history of the star, and require tuning to match the solar Li abundance and solar interior rotation profile derived from helioseismology (e.g. do Nascimento Jr 2009). Furthermore, these extra mixing mechanisms act in addition to standard PMS Li depletion, but we have already seen that standard models predict too much Li depletion in ZAMS clusters and fail to predict the significant dispersion that is observed. Such uncertainties merely prevent us from confidently inferring the age of a star by comparing its Li abundance to a model. Indeed, the primary use of Li abundance data for older stars and clusters has been to attempt to shed light on these uncertain interior processes. However, the option is still open to empirically calibrate Li depletion beyond the ZAMS using clusters, the Sun and other stars of “known” age. ### Li in field stars General surveys of Li abundances in field stars (e.g. Lambert & Reddy 2004; Takeda & Kawonomoto 2005; Ramírez  2012) show a strong temperature dependence – Li is depleted by  ≥ 3dex or gone in all stars with $T\_{\rm eff} < 5200\,K$, whilst stars at the solar $T\_{\rm eff}$ show a  ∼ 2dex dispersion, with the Sun towards the bottom of the distribution. At higher temperatures the dispersion may narrow again, though there are still some F-stars with very low abundances. It is worth mentioning that measuring the Li abundance in older stars is more challenging, because the EW of the Lii 6708Å feature becomes only a few mÅ at A(Li) ≃ 1 in solar-type stars. Naturally, one would like to know to what extent this scatter is due to age (at a given $T\_{\rm eff}$) and how much is due to confounding (but potentially resolvable) parameters like metallicity (higher opacities lead to a deeper convection zone, more PMS Li depletion and more effective MS mixing) or even the presence of planets (Bouvier 2008; Israelian  2009), and how much might be due to factors that are more difficult to take into account. For example, the rotational history of the star, which appears to affect PMS depletion and is predicted to be a strong influence on MS Li depletion, is not easily determined once rotational convergence has been reached ( ≥ 500Myr for solar-type stars). There is considerable debate on these points in the literature. Baumann  (2010) determine ages from HR diagrams (with typical uncertainties of  ≃  ± 1.5Gyr) for solar analogues and find that there is the expected correlation with age for stars in a tight  ± 0.1dex [Fe/H] range around the solar value. There is still a A(Li) scatter of  ∼ 1dex at a given age, but they attribute this to spreads in the initial rotation conditions of these stars (and find no evidence that the presence of exoplanets is relevant). The larger sample of Ramírez  (2012) also demonstrates that stars with *M* < 1.1 *M*⊙ have greater Li depletion with age and increasing metallicity (and again it is found that exoplanet hosts do not deplete more Li than similar stars with no detected planets), but with a large scatter around the correlations. On the other hand, for a small sample of stars with metallicity and mass very close to the solar value, Monroe  (2013) claim an extremely tight correlation between Li depletion and age, with essentially no scatter. The Sun’s Li abundance lies on this correlation and Monroe   suggest that previous studies, suggesting a large scatter in this relationship and that the solar A(Li) was low, either had insufficient spectral quality or encompassed too wide a range of metallicity and mass to eliminate the dispersion caused by these factors. This latter study, which needs confirmation with a larger sample, holds out the promise of a deterministic relation between A(Li) and age if the mass and metallicity can be accurately determined. However, it makes no reference to Li observations in older clusters, which appear to tell a different story. ### Li in older post-ZAMS clusters The progress of post-ZAMS Li depletion can be empirically followed in the Li depletion patterns of older clusters. Samples in clusters should be coeval (if membership can be established!) and have the added advantage that a good mean metallicity can often be determined from the analysis of a number of stellar spectra. Initial studies included the Hyades and Praesepe at ages of about 600Myr (e.g. Wallerstein, Herbig & Conti 1963; Boesgaard & Tripicco 1986; Boesgaard & Budge 1988; Soderblom  1990), NGC 752 (age 1.7Gyr, Hobbs & Pilachowski 1986a) and M67 (age 4Gyr, Hobbs & Pilachowski 1986b). These studies, which were focused on stars at the solar $T\_{\rm eff}$ and hotter, immediately revealed what has been termed the “F-star Li gap”. Stars in a narrow range $6400< T\_{\rm eff} < 6800$K can deplete their Li to undetectable limits (A(Li) < 1.8) by the age of the Hyades, a process that appears to have begun at ages of  ∼ 150Myr (Steinhauer & Deliyannis 2004). The cause of the “Li gap” is still not fully understood, but likely involves rotation-driven mixing (Deliyannis  1998). In principle, if $T\_{\rm eff}$ can be precisely measured, then the Li abundance in this temperature range could strongly constrain the stellar age between 150Myr and  ∼ 600Myr. Older stars of late F-type and cooler are fainter and harder to study in distant clusters. Sestito & Randich (2005) provide a review of the important literature and a homogeneous reanalysis of the Li abundances. Randich (2010) reviews subsequent observations, mostly made with the 8-m VLT. These observations of solar-type stars in  ∼ 10 clusters with ages between 600Myr and 8Gyr paint a confusing picture. There is little scatter in the A(Li) vs $T\_{\rm eff}$ relationship in the Hyades and this seems to be true in some older clusters like Be 32 and NGC 188 at ages of 6–8Gyr, but solar-type stars in these clusters have 10–20 times as much lithium as the Sun (Randich, Sestito & Pallavicini 2003). On the other hand there are also examples of old clusters (e.g. M67, NGC 6253) where there is a large dispersion in A(Li), with some stars that are as depleted as the Sun, but others with A(Li) ≃ 2.3 (e.g. Pasquini  1997, 2008). It appears that the Li in solar-type stars is slowly depleted (by a factor of 3–4) from about the age of the Pleiades to 1Gyr. In this range it seems reasonable to use Li as an age indicator – the dispersion in A(Li) among clusters in this interval suggests an age precision of only about 0.3 dex though. Beyond 1Gyr some stars appear to deplete Li further whilst others do not. It is unclear at present what factors drive this dichotomy. If a star has a very low abundance then it clearly indicates an age  > 2Gyr, but if A(Li) ∼ 2 then the constraint can only be that the age is  ≥ 500Myr. The lithium depletion boundary ------------------------------ In stars that remain fully convective all the way to the ZAMS (*M* < 0.4 *M*⊙) then Li burning will completely deplete Li from the entire star. Core Li burning begins at an age which depends upon the mass and hence luminosity of the PMS star. Li depletion occurs quickly, so that in a group of coeval stars there should be a sharp boundary between stars exhibiting complete Li depletion and stars with only slightly lower luminosities that still retain all their initial Li (Bildsten  1997). This “lithium depletion boundary” (LDB) was first used to confirm the identity of brown dwarf candidates in the Pleiades – substellar objects should have retained their Li in the Pleiades, but older, more massive objects with similar spectral types would have depleted Li (Basri, Marcy & Graham 1996; Rebolo  1996). Since then, the LDB technique has been used to estimate the ages of 10 clusters and associations by finding the luminosity (or absolute magnitude) of the faintest star which has significantly depleted its Li. The LDB method, as defined above, is almost independent of which evolutionary models are used. The luminosity of the LDB is insensitive to changes in the assumed convective efficiency, composition, atmosphere and equation of state. Burke, Pinsonneault & Sills (2004) performed a set of tests using different input physics finding systematic uncertainties in the range 3–8 per cent. It is worth noting though that ages might be perturbed due to some factor that is assumed or ignored by all models. An example could be extensive coverage by starspots; the blocking of flux at the photosphere would inflate the star leading to a lower central temperature and an underestimated LDB age (e.g. Jackson & Jeffries 2014; Somers & Pinsonneault 2014). The relationship between the luminosity of the LDB and age is steep ($L\_{\rm LDB} \propto t^{-2}$ at 20 < *t* < 100Myr. As a result, typical errors of 0.1 mag in distance moduli or bolometric corrections, lead to 10 per cent uncertainty in $L\_{\rm LDB}$ and hence only 5 per cent age uncertainties. Locating the LDB in relatively sparse datasets is usually more of an issue, and the presence of unresolved binary systems, which may appears 0.75 mag brighter than a single star of the same type, can be a confusing factor (e.g. Jeffries & Oliveira 2005). The LDB method is only applicable to groups of stars in clusters (see Table 1 in Soderblom  2013), but has also been applied to spatially dispersed members of young kinematic groups (the Beta Pic group with an LDB age of 21 ± 4Myr, Binks & Jeffries 2014; the Tuc-Hor group with an LDB age of 41 ± 2Myr, Kraus  2014). In isolated very low-mass stars, the presence of undepleted Li at a given luminosity can give an upper limit to the age, whilst if a star has depleted all its Li then a lower limit to the age is implied. Note that the above discussion refers only to the *luminosity* at the LDB, which of course depends on a distance. The temperature or spectral type at the LDB could be used as a distance-independent marker, but unfortunately the model-insensitivity of the $L\_{\rm LDB}$-age is not reproduced and the $T\_{\rm LDB}$-age relation is quite shallow. Hence such determinations are of much lower precision and subject to significant model-dependent uncertainties dominated by which atmospheres are used and the adopted convective efficiency in the models. A further limitation of the LDB technique is that below 20Myr there is an increasing dispersion in model predictions and below 10Myr some evolutionary models predict no Li depletion at any mass. At older ages the $L\_{\rm LDB}$-age relation becomes much shallower and no Li depletion is expected in objects with *M* < 0.065 *M*⊙. However, the principal limitation is telescope size. Objects around the LDB at ages  ≥ 200Myr are so intrinsically faint that the *R* ≥ 3000 spectroscopy needed to measure the Lii 6708Å feature is impractical, even with 8-m telescopes. Although the measurement of an LDB age is limited to only a few clusters, the fact that the derived ages are mostly model-independent means that they can be used to test or calibrate other age estimation techniques at 20–200Myr in the same clusters (actually the oldest LDB age so far reported is for Blanco 1 at 132 ± 24Myr; Cargile, James & Jeffries 2010). So far, systematic comparisons have only been carried out between LDB ages and ages determined by fitting the positions of high-mass stars in the HR diagram (see Soderblom  2013). Such comparisons reveal that high-mass models without “convective core overshoot” yield ages that are 50 per cent lower than the LDB ages in some clusters (e.g. the Pleiades and Alpha Per clusters; Stauffer  1998, 1999), implying that moderate core overshooting or fast rotation in the high mass stars (or some combination of the two) is required. Similar systematic comparison with empirical age indicators, such as those discussed here, are likely to be valuable additions that can improve the *accuracy* of the empirical ages. The status of rotation, activity and lithium depletion as age indicators ======================================================================== In this section I summarise the status of each of the considered age indicators and point to ongoing developments that might improve the precision and especially the accuracy of these ages. This review was written from the point of view of estimating the ages of individual stars in the field, which is likely to remain the most important application – e.g. estimating the ages of exoplanet hosts (Walkowicz & Basri 2013) or searching for spatially dispersed members of kinematic groups (e.g. Shkolnik, Liu & Reid 2009; da Silva  2009). New data from the *Gaia* satellite and large spectroscopic surveys such as the *Gaia-ESO survey* (Gilmore  2012) will add impetus to this field, with the desire to understand stellar ages and hence the chemical and dynamical history of our Galaxy. All of the techniques discussed have limitations when applied to isolated stars, caused by star-to-star dispersions in the empirical relationships. Of course, if many (assumed) coeval stars are available, then this dispersion can be overcome to give a mean age estimate for the group. I have not emphasized this application because usually in such cases there are age determination methods that are higher up in the accuracy hierarchy (e.g. fitting cluster sequences in an HR diagram, see section 1). However in the case of kinematic groups where the distances to individual members may not be well known, then the *distance-independence* of these empirical relationships may be of benefit (e.g. Mentuch  2008). [summaryplot] Applicability of the techniques ------------------------------- Figure [summaryplot] summarises in a schematic way where each technique can feasibly yield an age (or a limit to the age) as a function of age and mass. ### Rotation and gyrochronology Rotational evolution is not fully understood, however it appears that magnetised winds act to cause the convergence of an initially wide spread of rotation rates on a timescale of  ∼ 100Myr for G-type stars, but as long as 1Gyr for stars of 0.5*M*⊙, accounting for the steeply sloped lower boundary in Fig. [summaryplot]a. Prior to this convergence, stars have a dispersion in rotation rate and only age upper limits can be determined. Once convergence is achieved (or nearly achieved), then ages can be estimated with a precision of  ± 20 per cent. The precision is determined in younger stars by the remaining dispersion in rotation rates at a given age, and in older stars by differential rotation with latitude. The precision will be much poorer in stars with *M* ≤ 0.4 *M*⊙, where convergence is likely to be incomplete even at very old ages. The dark shaded region of Fig. [summaryplot]a indicates that region where gyrochronology is well-calibrated using young clusters ( ≤ 1Gyr) and the Sun. The lighter shaded region indicates where gyrochronology could work in principle, but where calibrating data are absent and so the accuracy of the technique may be poor when using calibrations extrapolated from younger and hotter stars. ### Magnetic activity As magnetic activity depends on rotation, then unsurprisingly, its region of applicability, shown in Fig. [summaryplot]b, is similar to that of gyrochronology. The lower boundary is determined by the large spread of activity levels seen in young stars as a result of their varied rotation rates. The lower boundary has a shallower slope than in Fig. [summaryplot]a because at 0.5 < *M*/*M*⊙ < 1 there should be a small mass range at a given age, just below the mass at which rotational convergence occurs, but where the *maximum* rotation rate leads to activity lower than the saturated activity level and hence an age could be estimated. However at very low masses, the rapid increase in convective turnover time leads to stars at a wide range of ages having rotation rates fast enough to cause saturated activity. Activity diagnostics are generally easier to measure than rotation periods, especially in older stars and so the well-calibrated region for the activity-age relationship is larger than for gyrochronology, with $R^{\prime}\_{\rm HK}$ data for a couple of older clusters providing more confidence in the calibration around a solar mass. Nevertheless, these relationships are poorly constrained at lower masses and the precision is roughly three times worse than gyrochronology in mature stars. This is likely due to magnetic activity cycles, so in principle could be mitigated by repeated observation on long timescales. ### Lithium depletion Lithium abundances can only yield an age in the range where Li is *being* depleted. If the Li has gone, then a lower limit to the age can be inferred; if Li is undepleted, only an upper limit can be determined. The shape of the applicability region shown in Fig. [summaryplot]c is a function of the mass-dependent timescale for PMS Li depletion for stars with *M* < 1 *M*⊙ and the observed timescale of main-sequence depletion in older solar-type stars. Precision is unlikely to be better than a factor of two until it is fully understood why there is a dispersion of Li abundance in stars of the same $T\_{\rm eff}$ and age in calibrating clusters. Beyond 1Gyr, both the theoretical and observational pictures are confused. There appears to be a wide range of possible Li abundances for stars like the Sun. This may be connected with their rotational history, the presence of planets or some other factor; but for now it seems that Li abundance cannot be used to estimate ages in older stars. At the low mass side of Fig. [summaryplot]c, the narrow stripe represents (schematically) the lithum depletion boundary (LDB). Fully convective low-mass stars deplete their Li very rapidly (in a few Myr). Thus in individual stars it would normally only be possible to provide a one-sided limit on the stellar age. However, the power of the LDB is that in a coeval group of stars with a range of masses, the transition across this diagonal boundary will take place at an age-dependent mass or luminosity, allowing the age of the ensemble to be determined accurately. Ongoing efforts to improve calibrations --------------------------------------- For all three of the discussed empirical methods there is a need for more calibration to improve the *accuracy* of the ages and assess their precision. On the gyrochronological front, determining rotation periods in stars of known age and at lower masses requires (a) relatively nearby old clusters that still have a low-mass population, despite the ongoing processes of energy equipartition, mass segregation and tidal stripping, or other samples of stars with “known” ages; (b) space-based observations because spot modulation amplitudes in older stars are very small. Meibom  (2011a) discuss results for NGC 6811, one of 4 clusters between 0.5 and 9Gyr that are present in the Kepler field. NGC 6811 has an age of 1 Gyr and a population spanning F- to early K-types. Of more interest will be the results for NGC 6819 (2.5Gyr) and NGC 6791 (9Gyr), though their low-mass populations may be very sparse. It is also worth noting that the clusters M35, Praesepe, the Hyades and the Pleiades are possible targets for the Kepler K2 mission during 2014/15 (Howell  2014). Whilst these clusters are not old, the data should constrain much better the degree of rotational convergence between 125Myr and 600Myr, providing a much more complete census of rotation periods, especially in the low-mass populations. It is also possible to use Kepler data to provide asteroseismological ages for many of the brighter (and predominantly solar-type) stars in the Kepler field. Asteroseismology can give ages to perhaps 10–15 per cent in these stars and they can then be used to calibrate rotation-age, activity-age and Li-age relationships. This work has already begun: Karoff  (2013) found *P* = *f*(*B* − *V*) *t*0.81 ± 0.10 for a small group of solar-type stars with asteroseismological ages and a Skumanich-like decay in chromospheric activity. From the Sun and a small sample of stars with 0.9 < *M*/*M*⊙ < 1.2 and ages from 1–9Gyr, Garćia  (2014) determine a relationship *P* ∝ *t*0.52 ± 0.09, in much closer accord with earlier work (see section [gyrouse]). Other approaches to fix the ages of possible calibrating stars include using objects which are in resolved binary systems with either subgiants, giants or white dwarfs. The HR diagram (or white dwarf cooling model) is a much more precise tool for estimating the companion age in these cases so, providing there is no possibility of previous interaction or exchange of angular momentum, then the main sequence companion could be used as a calibrator of empirical age indicators. Examples include Silvestri  (2006), Chanamé & Ramírez (2012), Rebassa-Mansergas  (2013) and Bochanski  (2013). No clear results have yet emerged from these programs in terms of calibrating the activity-age or rotation-age relationships. Summary ======= The need for empirical methods of age estimation in low-mass stars is likely to be present for some years to come. In this contribution I have reviewed the astrophysical reasons that rotation, magnetic activity and the photospheric abundance of lithium, change with time in low-mass stars ( ≤ 1.3 *M*⊙). Whilst theoretical models that predict these behaviours are improving rapidly, there are still very significant uncertainties and semi-empirical components that prevent their use in directly estimating stellar ages with any certainty, and which require calibration using stars of known age. Each of these empirical age indicators can play a role in various domains of mass and age, that are schematically illustrated in Fig. [summaryplot]. The rotation-age relationship (or gyrochronology) offers the best prospect of determining precise (to 20 per cent) ages in older stars, and could be complemented by the use of PMS Li depletion to estimate the ages of younger stars at low masses. Magnetic activity offers a less precise age determination in older stars, but is usually easier to measure than rotation. In terms of accuracy, all these methods are compromised to some extent by a lack of calibrating data in stars that are older than the Sun or of lower masses than the Sun. In very low mass stars, the sharp transition between stars that have depleted all their lithium and stars with similar age but only slightly lower luminosities that have preserved all their lithium (the lithium depletion boundary), offers an almost model-independent way of estimating an age for groups of coeval stars. This technique is sensitive between ages of 20 and 200Myr and can be used to investigate the uncertain physics in stellar models or calibrate empirical age indicators. [sec6] Acknowledgements ================ I would like to thank Corinne Charbonnel, Yveline Lebreton, David Valls-Gabaud and the rest of the local organising and scientific committees for arranging an exceptional meeting and for inviting me to be a lecturer. Thanks are due to the Programme National de Physique Stellaire and the CNRS for their financial support. --- 1. The isotope 6Li is burned at lower temperatures and should be completely depleted in all the stars discussed here.[↩](#fnref1) Using rotation, magnetic activity and lithium to estimate the ages of low mass stars ==================================================================================== The rotation rate, level of magnetic activity and surface lithium abundance are age-dependent quantities in stars of about a solar mass and below. The physical reasons for the evolution of these phenomena are qualitatively understood, but accurate quantitative models remain dependent on empirical calibration using the Sun and stars of known age, chiefly in clusters. In this work I review the status of these “empirical age indicators”, outlining the astrophysics of their time dependence, describing the measurements, assessing the precision (and accuracy) of age estimates when applied to individual stars, and identifying their principle limitations in terms of the mass and age ranges over which they are useful. Finally, I discuss the “lithium depletion boundary” technique which, in contrast to the empirical methods, appears to provide robust, almost model-independent ages that are both precise and accurate, but which is only applicable to coeval groups of stars. Introduction ============ The age of a star is, along with its mass and composition, the most important quantity to know for testing ideas concerning the evolution of stars, stellar systems (clusters and galaxies) and also, by association, their circumstellar material and exoplanetary systems. However, unlike mass and composition, we have no direct means of measuring the age of any star but the Sun. The ages of other stars are inferred or estimated using a hierarchy of techniques, which can be described as (see Soderblom 2010; Soderblom  2013) semi-fundamental, model-dependent, empirical or statistical. Semi-fundamental techniques rely on age-dependent phenomena where the physics is understood, there is little tuning of model parameters required and the results are basically model-independent. Model-dependent techniques include isochrone fitting in the Hertzsprung-Russell (HR) diagram, asteroseismology and white dwarf cooling. Here the physics is mostly understood, but there are annoying gaps in our ability to accurately model the physics without making simplifying assumptions or tuning parameters (e.g. the mixing length) to match observations. Often the precision of the ages determined by such techniques is much better than their absolute accuracy and different models may yield ages that differ by more than their claimed uncertainties. At a level below the model-dependent techniques are empirical age indicators. For these, the understanding of the physics is qualitative, with significant holes in the theory that are usually bridged using semi-empirical relationships with free parameters. The general approach is to calibrate an age-dependent phenomena using similar observations of stars with “known” age (the Sun and stars with ages estimated by semi-fundamental or model-dependent techniques) and then use that calibration to estimate the ages of other stars (e.g. Barnes 2007; Vican 2012). Of course, there is a risk of circularity here; one cannot study the age dependence of a phenomenon using stars with ages estimated using that phenomenon! In this contribution I deal mainly with empirical age indicators associated with the rotation rates, levels of magnetic activity and photospheric lithium abundances of stars with masses *M* ≤ 1.3and how they apply to stars from birth to ages of 10Gyr. It is no coincidence that these phenomena each become useful below this mass. The presence of a sub-photospheric convection zone is responsible for dynamo-generated magnetic fields that are dissipated to provide non-radiative heating in the outer atmosphere and also couple to an ionised wind that drives angular momentum loss. The same convection zone is responsible for mixing pristine material down to interior regions where Li can be burned. The use of these indicators has its root in work done by Kraft and collaborators in the 1960s (e.g. Kraft & Wilson 1965; Kraft 1967), but perhaps the most influential early paper was by Skumanich (1972), who showed that both rotation and activity, and to some extent Li abundance, decayed according to the inverse square root of age. The data used were sparse, consisting of the Sun (age 4.57Gyr) and a few solar-type stars in the Pleiades (age  ≃ 125Myr) and Hyades (age  ≃ 600Myr) open clusters, but nevertheless this paper stimulated much of what follows. The utility of these empirical age indicators is mostly in estimating ages for low-mass main sequence (MS) and pre main sequence (PMS) stars that constitute the vast majority of the Galactic population. A principle advantage of the techniques I will discuss is that they are *distance independent*. With the successful launch of the *Gaia* satellite (Perryman  2001; Brown 2008), it might seem that uncertain stellar distance will be a solved problem within a few years. However, even with precisely known distances, the determination of ages for stars that have reached the main sequence and are still burning hydrogen in their cores is difficult. Position in the HR diagram is age sensitive, but also sensitive to the detailed composition of the star. Even with [Fe/H] known to a very respectable accuracy of  ± 0.05dex, the age of a 5Gyr solar-mass star could only be estimated to a precision of 20 per cent, and considerably worse for lower mass stars with longer main sequence lifetimes that consequently move more slowly in the HR diagram (e.g. see Fig. 20 of Epstein & Pinsonneault 2014). Asteroseismology may be an alternative distance-independent method for age estimation, with the advantage of a strong and well-understood physical basis, but it is not clear that pulsations can easily be detected in main-sequence stars well below a solar mass or in young, active stars (e.g. Huber  2011). Even if they are, it is unlikely that ages could presently be estimated for solar-type stars to absolute precisions better than 10–15 per cent of their main sequence lifetimes (e.g. Gai  2011; Chaplin  2014) and would rapidly become too large to be useful in stars below a solar mass. Hence, there is likely to be a need for age determinations using empirical indicators for the forseeable future. In section [sec2] I discuss measurements of rotation in low-mass stars, the physical basis on which rotation rate could be used to estimate age and review efforts to calibrate “gyrochronology”. Section [sec3] reviews the connection between rotation and magnetic activity and the various attempts to calibrate activity-age relationships using several magnetic activity indicators. Section [sec4] discusses the astrophysics of lithium depletion in solar-type stars, comparison of observations and models and the use of lithium as an empirical age indicator in PMS and MS stars separately. Also included is a description of the “lithium depletion boundary” technique in very low mass stars, which differs from the other methods discussed here in that it requires no empirical calibration and is semi-fundamental. Section [sec5] summarises the status and range of applicability of each of these techniques and briefly discusses efforts to improve empirical calibrations. Conclusions are presented in section [sec6]. Rotation rates and gyrochronology ================================= The motivation for using rotation rate as an empirical age indicator is discussed extensively by Barnes (2007). As well as being distance-independent it seems, at least for older stars (see below), there may be an almost unique relationship between rotation rate and age. Rotation rates are increasingly available; satellites such as *CoRoT* and *Kepler* have accumulated large quantities of rotation data (Affer  2012; McQuillan, Mazeh& Aigrain 2014), and ground-based experiments such as *HATNet* and *SuperWASP*, aimed primarily at variability or exoplanet searches, have the potential to provide rotation periods for vast numbers of stars (e.g. Hartman  2011; Delorme  2011). Photometric monitoring by *Gaia* will add to this haul. [sec2] Measuring rotation rates ------------------------ Rotation rates in low-mass stars can be found in a number of ways (see the review by Bouvier 2013), but only two are mentioned here; the others are generally more difficult to apply routinely. Spectroscopy can be used to estimate that component of spectral line broadening contributed by rotation – the projected equatorial velocity, *v**s**i**n**i*. This can be accomplished by a direct Fourier transform of the spectrum (e.g. Gray 1976; Dravins, Lindegren & Torkelsson 1990) and with very high quality data, it can even be feasible to measure differential rotation with latitude (e.g. Reiners 2007). More frequently, *v*sin*i* is estimated by calibrating the width of a cross-correlation peak against template stars or synthetic spectra with similar atmospheric parameters (e.g. Rhode, Herbst & Mathieu 2001). Although feasible using a single spectrum, and in the case of cross-correlation, a spectrum with very modest signal-to-noise ratio, the principle limitations of spectroscopic methods are the high resolving powers and accurate characterisation of the intrinsic (non-rotating) line profiles required to estimate *v*sin*i* for slow rotators, and the confusing sin*i* axis orientation term. The main alternative, and method of choice, is to monitor the brightness of stars and detect periodic modulation caused by dark magnetic starspots or bright chromospheric plages on their surfaces. Magnetic activity is required for this technique to work, so is best suited to low-mass stars at younger ages with vigorous rotational dynamos (see section [sec3]), where typical modulation amplitudes can be a few mmag to tenths of a magnitude. Typical examples of such studies can be found in Prosser  (1993), Allain  (1996), Herbst  (2000) and Irwin  (2009), which also demonstrate a progression in the efficiency of monitoring facilitated by the advent of large format CCDs. The principle advantage of this technique is that many stars can be almost simultaneously monitored using telescopes of modest aperture (compared with those required for spectroscopy). The disadvantages are that stars need to be monitored intensively and over at least a couple of rotation periods. There is also a potential bias towards the young, most active and most rapidly rotating stars – even in young, magnetically active cluster populations  ≤ 50 per cent of stars have measured rotation periods and older stars may have such small photometric amplitudes and long periods that only space-based photometry is good enough. Both the *Kepler* and *CoRoT* satellites have provided much more precise, lengthy time-series data for field stars (and some clusters) to partly nullify these problems (e.g. Meibom  2011a; Affer  2012; Reinhold, Reiners & Basri 2013; McQuillan  2014). Prior to *Kepler* and *CoRoT*, most information about the rotation of older, less active stars came from chromospheric inhomogeneities monitored in the Caii H and K lines (e.g. Donahue, Saar & Baliunas 1996; Baliunas  1996), because the contrast between chromospheric plages and the immaculate photosphere is greater than for starspots in stars as old/inactive as the Sun. Monitoring on decadal timescales at the Mount Wilson observatory has yielded many rotation periods for solar-type field stars as well as quantitative measurements of their magnetic activity and magnetic activity cycles (see section [sec3]). Rotational evolution and models ------------------------------- [gallet13] ### Observed rotational evolution Most progress in understanding the rotational evolution of solar-type and lower-mass stars comes from observations of rotation rates (predominantly rotation periods) in clusters of stars, whose members are assumed coeval and of similar composition. Compilations of data and reviews of the observations can be found in Irwin & Bouvier (2009), Gallet & Bouvier (2013) and Bouvier  (2013), and these sources also provide an overview of theoretical interpretations of these observations. Figure [gallet13] (from Gallet & Bouvier 2013) illustrates the main features of rotational evolution for groups of stars at around a solar mass, ranging in age from star forming regions at a few Myr, through to the ZAMS at  ∼  100Myr and onto later main sequence life beyond a Gyr. Solar-type stars evidently begin their lives with a wide range of rotation periods between about 1 and 15 days (e.g. in the Orion Nebula cluster; Herbst  2002, or NGC 2264; Makidon  2004). Over the first 10Myr of their lives this distribution changes little despite the order of magnitude reduction in moment of inertia as stars contract along their PMS tracks. Interactions between the star and its circumstellar disk are invoked to remove angular momentum, a process that ceases upon the dispersal of inner disks on timescales of a few Myr. This idea finds support from the correlation found in some star forming regions between the presence of disks/accretion and slower rotation (e.g. Edwards  1993; Rebull  2006; Cieza & Baliber 2007). The rotation rate distributions in older clusters show gradual evolution towards faster rotation rates at the ZAMS, presumably as a result of PMS contraction. Although the long-period envelope remains fairly constant in solar-type stars, there are few slow rotators among lower mass ( ≤ 0.5 *M*⊙) stars at ages of 10-200Myr, with most having rotation period *P* < 3d (e.g. Irwin  2007, 2008). The range of rotation rates in solar-type stars rapidly increases to nearly two orders of magnitude; at  ∼ 15Myr the rotation rate distribution is still quite flat but the range has grown to 0.2 < *P* < 15days (e.g in the h Per cluster; Moraux  2013). At  ∼ 50 − 150 Myr the bulk of solar-type ZAMS stars have 6 < *P* < 10days, but a tail of rapid rotators persists to periods as short as 0.3 days (e.g. in the Alpha Per and Pleiades clusters; Prosser  1993; Krishnamurthi  1998; Hartman  2010). Beyond the ZAMS, with the moment of intertia essentially fixed, the wide distribution of rotation rates in solar-type stars converges, a process thought to be driven by a magnetised stellar wind, with angular momentum losses that increase with rotation rate. Convergence is almost complete for solar-type stars at ages of  ≥ 500Myr (e.g. in the Hyades; Radick  1987, or in M37; Hartman  2009). The timescale for convergence is however mass-dependent and fast rotating K-dwarfs are still seen in clusters with ages of a few hundred Myr (e.g. in M34; Meibom  2011b), whilst M-dwarfs with rotation periods  < 1d are still observed in the Hyades and Praesepe clusters at ages of  ∼ 600Myr (Delorme  2011; Agüeros  2011). In fact if anything, the dispersion in rotation rates appears to grow with age in these lower-mass stars as evidenced in the wide range of periods found for (predominantly old) field M dwarfs (Irwin  2011; McQuillan, Aigrain & Mazeh 2013). ### Rotational evolution models Models to interpret these data are semi-empirical; there are several components that, whilst physically motivated, require calibration using cluster data and the current rotation rate of the Sun. Starting from an initial rotation period at a very young age, the effect of torques and moment of inertia changes are followed and models include some or all of the following ingredients: There is no general agreement yet on which mechanisms prevent the spin-up of contracting PMS stars, but the presence of an inner disk appears to be implicated. The necessary transfer of angular momentum may be provided via the original “disk-locking” proposed between the accretion disk and stellar magnetic field (Camenzind 1990; Koenigl 1991); more recent ideas include accretion-driven winds or magnetospheric ejections (e.g. Matt & Pudritz 2005; Zanni & Ferreira 2013). Whatever is responsible, most rotational evolution models assume that rotation rates are held constant until the inner disk disperses. This disk dispersal timescale, observationally found to be in the range 1–10Myr, almost certainly varies from star-to-star for poorly understood reasons and is a tuneable model parameter, largely set by the difference in the mean and range of rotation rates at the ZAMS compared with those in the initial distribution (e.g. Bouvier, Forestini & Allain 1997). Once disks disperse then stars are free to spin-up if they have not reached the ZAMS. The moment of inertia will decrease roughly on the Kelvin-Helmholtz timescale – around 10Myr for a solar mass star, but hundreds of Myr for lower-mass stars (i.e. much longer than any disk dispersal timescale). Stellar (surface) spin up is moderated both by angular momentum losses and the possible decoupling of radiative core and convective envelope (see below). The large scale magnetic B-field of a star will force its ionised stellar wind into co-rotation out to some distance approximated by the Alfven radius. Upon decoupling, the wind carries away angular momentum at a rate that depends on the rotation rate of the star, the mass-loss rate, the strength and geometry of the magnetic field and the details of the wind velocity profile and interaction with the magnetic field (Mestel & Spruit 1987). A common parametrisation attributable to Kawaler (1988) and Chaboyer, Demarque & Pinsonneault (1995a) is $$\frac{dJ}{dt} = f\_k K\_w \left(\frac{R}{R\_{\odot}}\right)^{2-N} \left(\frac{M}{M\_{\odot}}\right)^{-N/3} \left(\frac{\dot{M}}{10^{-14}}\right)^{1-2N/3} \Omega^{1+4N/3}\ \ (\Omega < \Omega\_{\rm crit})\,, \\ \label{jdot}$$ $$\frac{dJ}{dt} = f\_k K\_w \left(\frac{R}{R\_{\odot}}\right)^{2-N} \left(\frac{M}{M\_{\odot}}\right)^{-N/3} \left(\frac{\dot{M}}{10^{-14}}\right)^{1-2N/3} \Omega\, \Omega\_{\rm crit}^{4N/3}\ \ (\Omega \geq \Omega\_{\rm crit})\,, \label{jdotsat}$$ where *N* is an index specifying the B-field geometry (*N* = 2 is radial, *N* = 3/7 represents a dipolar field), $\dot{M}$ is the wind mass-loss rate in solar masses per year, *K**w* is a constant ( = 2.036 × 1033 in cgs units), *f**k* is a parameter that encapsulates the constant of proportionality in an assumed linear relationship between surface magnetic *flux* and rotation rate Ω, as well as uncertainties in the wind speed as it decouples from the field at the Alfven radius. The strong dependence on Ω is the main physics behind the convergence of rotation rates in later main sequence life. $\Omega\_{\rm crit}$ is a threshold rotation rate at which the B-field and consequently the angular momentum loss rate “saturate”. This is motivated by the need to ensure that fast-rotating stars do not spin-down too quickly upon reaching their ZAMS radius and the observation that saturation is observed in chromospheric and coronal indicators of magnetic activity (see section [sec3]). Of these parameters, several need to be assumed (e.g. *N*, the relationship between B-field and rotation rate) or fixed by ensuring that at 4.5Gyr, the rotation rate of the Sun is reproduced (e.g. *f**k*). If *N* ≃ 1 the angular momentum loss rate is not too dependent on the assumed $\dot{M}$, which is fortunate as there are few constraints on this for stars younger than the Sun. Some of these degrees of freedom are beginning to be constrained by new MHD simulations, albeit still with simplifying assumptions about B-field geometry (e.g. Matt  2012). Using equation [jdot] for a star with a fixed moment of inertia and $\Omega < \Omega\_{\rm crit}$ leads directly to Ω ∝ *t*− *α*, with *α* = 1/2, as suggested by Skumanich (1972). However, it is of critical importance in what follows to note that the *t*− 1/2 behaviour is very dependent on model assumptions and is by no means assured of applying at all masses. For instance Reiners & Mohanty (2012) have pointed out that if there is instead a linear relationship between magnetic *field* and rotation rate then the radius dependence of *d**J*/*d**t* is much stronger (e.g.  ∝ *R*16/3 for a radial field rather than radius-independent in equation [jdot]). As *R* changes even during main sequence evolution, this changes the form of Ω(*t*). Similarly, any mass-dependent or time-dependent changes in $\dot{M}$ or B-field topology will alter *α* and possibly give it a mass- or time-dependence. As angular momentum is lost from the surface, interior processes act to transport angular momentum within stellar radiative zones - these may include hydrodynamic instabilities, magnetic fields or gravity waves (Mestel & Weiss 1987; Chaboyer  1995a; Pinsonneault 1997; Mathis  2013; Mathis 2013; Charbonnel  2013). Some studies treat this numerically as a diffusion process within radiative zones Denissenkov  2010; Eggenberger  2012), others allow the radiative core and convective envelope to rotate as solid bodies at different rates with a coupling timescale (e.g. MacGregor & Brenner 1991; Gallet & Bouvier 2013). In either case there are free parameters associated with the diffusion coefficients or coupling timescales that can partly be constrained by what we know about the internal rotation of the Sun and also its surface lithium abundance (see section [sec4]), but otherwise must be considered free parameters that may depend on mass or surface rotation rate. ### Putting it together Parametrised models incorporating some or all of these features have been studied by many authors in the past decades; more recent studies include Denissenkov  (2010), Spada  (2011) and Gallet & Bouvier (2013). Models begin with an assumed rotation rate for young stars and typically assume this rotation rate is constant whilst the star possesses an inner disk. A prescription for wind angular momentum losses and a stellar evolution model are then used to follow rotation rate as the star contracts and loses angular momentum. The core and envelope may be treated as separate solid body rotators with a coupling timescale. There are sufficient free or assumed parameters in this model (e.g. the dynamo prescription, initial rotation rate, disk lifetime, coupling timescale, *f**k*, $\Omega\_{\rm crit}$) that reasonable fits can be found to the observed distribution of rotation rates. Another degree of freedom is the mass-dependence of these parameters. It has been known for some time that models which match the evolution of rotation rate distributions for solar-type stars do not adequately match those of lower mass stars using the same set of parameters (e.g. Sills, Pinsonneault & Terndrup 2000; Irwin  2011). A mass-dependent $\Omega\_{\rm crit}$, changes in magnetic topology or dynamo location, or wind braking laws with a more extreme mass/radius dependence (e.g. Reiners & Mohanty 2012; Brown 2014) may provide solutions, but at the moment we are some way from being able to directly estimate stellar ages from models of rotational evolution. Empirical gyrochronology ------------------------ ### Gyrochrone construction and use [gyrochrone] The observed evolution of rotation rate distributions and a semi-empirical understanding of the processes involved offer both problems and opportunities. The broad range of rotation rates seen in solar-type stars between about 1 and 200Myr, and to even older ages for lower mass stars, mean that rotation rate is a *poor* age indicator for individual stars at these ages. However, the model predictions of a convergence in rotation rates to a single-valued function of age for older main sequence stars, and observations that suggest this may actually happen at least to ages of a Gyr or so in solar-type stars, suggest that rotation *can* be a good empirical age indicator for these stars. Barnes (2003, 2007) noted that when plotted in the *P* versus *B* − *V* plane, stars in young ( < 1Gyr) clusters and older field stars appear to populate two distinct sequences (schematically illustrated in Fig. [gyrochrone]) and which Barnes termed the I- and C-sequences. The I-sequence appears well-established in a number of clusters and is formed from those stars with converged rotation rates, roughly following the Ω ∝ *t*− 1/2 Skumanich law. The C-sequence appears less tight and is populated by rapidly rotating stars, which are in the saturated regime of magnetic activity (see section [rotactivity]) with $\Omega > \Omega\_{\rm crit}$. As the convergence timescales are observed to be longer in low-mass stars, the junction of these two sequences moves redward, with everything blueward of the junction being on the I-sequence. Barnes suggested that stars on the I-sequence have a fully (magnetically) coupled core and envelope, whereas in stars on the C-sequence the core and envelope are decoupled. The dearth of objects between the two sequences reflects a short evolutionary timescale between the two states and Barnes (2003) interpreted this as a switching between a convective (C) and an interface (I) dynamo that couples the core and envelope, causing a rapid spindown. Irrespective of the (strongly debated – e.g. Denissenkov  2010; Barnes & Kim 2010; Brown 2014) physical processes at play, the phenomenon of an I-sequence can be used to estimate stellar ages. The basic procedure is to say that rotation rate is a *separable* function of both age and colour/mass/$T\_{\rm eff}$. i.e. *P*(*B* − *V*, *t*) = *f*(*B* − *V*) *g*(*t*) ,  $$f(B-V) = a\,[(B-V) - b\,]^c\,, \ \ {\rm and} \ \ \ g(t) = t^\alpha\,. \label{gyroeq2}$$ The function *f* is chosen to match the shape of the I-sequence in young clusters, whilst *α* = 0.5 is equivalent to the Skumanich spin-down law. A number of authors have calibrated relationships of this type (e.g. see Table 1 in Epstein & Pinsonneault 2014); *f* is found from fitting one (or several) clusters in the *P* vs *B* − *V* plane, whilst *α* is determined by matching the solar rotation rate. Different authors find *α* in the range 0.52–0.57 and there are significant differences in the form of *f* too (Barnes 2007; Mamajek & Hillenbrand 2008; Meibom, Mathieu & Stassun 2009, 2011a; Collier Cameron  2009). Figure [gyrochrone] shows gyrochrones (I-sequences) calculated from equations [gyroeq1] and [gyroeq2], with the parameters derived by Barnes (2007), and C-sequences generated according to the functional form defined by Barnes (2003). Two hypothetical stars, A and B, are shown along with the Sun. Star A lies just above the 1Gyr I-sequence gyrochrone *and* to the left of the C-/I-sequence junction for 1Gyr. Therefore its age can be estimated as just greater than 1Gyr. Star B however lies well below the 0.1Gyr I-sequence but *to the right* of the 0.1Gyr C-sequence. As stars with ages up to about 0.3Gyr could exist at this period/colour (on or above their C-sequences) then about the best we can say as that star B is  < 0.3Gyr. This is a fundamental limit of gyrochronology that traces back to the large scatter in rotation rates seen at ages prior to the (mass-dependent) convergence. ### Problems, precision and accuracy In addition to the fundamental problem at young ages just discussed, it can be difficult to measure the rotation period of a star. Whilst young stars (or those with short periods) *may* have a healthy photometric modulation amplitude that can be detected from the ground (actually censuses of rotation period are often  < 50 per cent complete even in young clusters), this is not generally true at older ages, where precision photometry from space is required (see Fig.9 in Reinhold, Reiners & Basri 2013). Alternatively, chromospheric modulation can be used in older stars (e.g. Donahue  1996), but is *much* more expensive in terms of observing time. The precision and accuracy of gyrochronology may also be affected by measurement uncertainties, differential rotation, limited convergence, binarity and most importantly, calibration uncertainties. The precision of most period measurements is high, but stars may have differential rotation with latitude such that the period measured at one epoch may not be that measured at another, depending on the latitudinal starspot distribution. Differential rotation has been studied using the Mt Wilson chromospheric activity time-series, finding Δ*P*/*P* ≃ 0.05 *P*0.3 (Donahue  1996; where Δ*P* is the range of periods found at a single epoch). Reinhold  (2013) use Kepler data to show that, compared with the Sun (which has Δ*P*/*P* ∼ 0.14) differential rotation increases with $T\_{\rm eff}$ and *P*. If Δ*P*/*P* = 0.1 and *P* ∝ *t*− 1/2, then this leads to an age uncertainty of 20 per cent if only a single measurement of *P* is available. The assumption of convergence to a unique I-sequence is approximate. Convergence takes longer at lower masses and so the older and more massive the star, the better this approximation is. Epstein & Pinsonneault (2014) perform simulations and show that this dominates over likely uncertainties caused by differential rotation at *M* < 0.7 *M*⊙ and grows very rapidly below 0.4 *M*⊙. Conveniently, convergence becomes a problem in stars where differential rotation is unlikely to be important and vice-versa, so the overall precision of gyrochronology is likely abut 20 per cent in most cases. The situation may be a little better in stars with ages of 0.5–1Gyr. The empirical scatter of rotation periods around the I-sequence in such clusters suggest that the combination of differential rotation and incomplete convergence could lead to age errors of only 9–15 per cent (Collier Cameron  2009; Delorme  2011). Most of the discussion in this section applies to single stars or at least stars that are effectively isolated from their companions as far as angular momentum transfer is concerned. In particular, stars in close, tidally locked binary systems may appear to rotate at rates much faster than in a single star of similar age. Gyrochronology is essentially calibrated using a group of young ( ≤ 600Myr) clusters and the Sun. In particular, the assumptions of separable mass and time dependencies and a simple, unique power-law time dependence may not be true. For example different wind angular momentum loss prescriptions give quite different predictions of the mass-dependence of Ω, even when tuned to match the rotation rate of the Sun (Reiners & Mohanty 2012). In addition, mass or time-dependencies in $\dot{M}$, B-field topology and stellar radius could lead to radically different gyrochrone shapes and spacing at lower masses and older ages. Some confidence can be gained by noting that the gyrochronological ages of the components of a few (wide) binary pairs with known rotation periods and differing masses are roughly in agreement (Barnes 2007), but they have large individual uncertainties and there are indications that these ages may not agree with those from asteroseismology (Epstein & Pinsonneault 2014). There is an urgent need for better calibrating data (stars with known age and rotation period) at lower masses than the Sun and at ages of 1-10Gyr (see section [improvements]). Magnetic activity as an age indicator ===================================== Magnetic activity indicators ---------------------------- Some of the difficulties associated with calibrating gyrochronology can be addressed by using proxies for rotation that are easier to measure. In stars with outer convection zones it appears that a rotational dynamo can sustain a magnetic field, which emerges from the photosphere and provides a source of non-radiative heating, leading to the formation of a chromosphere and a hot corona. Indicators of magnetic activity include coronal X-ray emission from gas at 106–107K. The chromosphere is at lower temperatures but there are many emission lines that can be found which act as diagnostics of the magnetic heating process(es), found mainly in the blue and ultra-violet part of the spectrum, but which also include Balmer-line emission. Each of these diagnostics demands different technologies and techniques for their study and describing these is beyond the scope of this review. Here we just need to know that usually, a *distance-independent* magnetic activity index can be formed from the excess emission beyond that expected from a normal photosphere, normalised by the bolometric luminosity. The principal examples in most of the literature on age determination are: the Mt Wilson $R^{'}\_{\rm HK}$ index, formed from the chromospheric flux found in the cores of the Caii H and K lines normalised by the bolometric flux; and the ratio of X-ray to bolometric flux. Generally speaking, magnetic activity indices are easier to measure than a rotation period and are often assessed with a single epoch of observation – though this can bring problems (see below). There are of course limitations imposed by the sensitivity of instruments, the distance to the stars in question and the contrast between the photosphere and the magnetic activity indicator, which gets weaker as stars become less active. i.e. Just like rotation, activity gets harder to measure in older stars. The rotation-activity relationship ---------------------------------- The utility of magnetic activity as an age indicator arises because of its close connection with rotation. This connection, ultimately due to the nature of the interior dynamo that amplifies the magnetic field, has been empirically understood for some time. For instance Pallavicini  (1981) noted a good correlation between X-ray luminosity and the square of the rotation velocity; Noyes  (1984) found an equivalent inverse correlation between flux in the Ca ii H and K lines and the rotation period. Noyes  also noted that a much tighter correlation could be found between the ratio of chromospheric flux to bolometric flux ($R^{'}\_{\rm HK}$) and the inverse of the Rossby number (*N**R*, the ratio of rotation period to turnover time at the base of the convection zone). The turnover time increases with decreasing mass and *N**R* has become the parameter of choice in activity-rotation correlations because it reduces the scatter when combining data for stars with a range of masses and convective turnover times (e.g. Dobson & Radick 1989; Pizzolato  2003; Jeffries  2011; Wright  2011). At small *N**R* (shorter rotation periods, or longer convective turnover times at lower masses or in PMS stars) X-ray and chromospheric activity indicators *saturate*. They reach a plateau at *N**R* ≤ 0.1 below which they do not increase further, whilst at larger *N**R*, activity decreases (Vilhu & Walter 1987; Pizzolato  2003). *N**R* = 0.1 corresponds to *P* ≃ 3d for a solar type star, but *P* ≃ 6d for an M0 star with a longer turnover time. This saturation poses serious difficulties for the use of activity as an age indicator in young stars. The period at which saturation occurs in solar mass stars is just below the I-sequence gyrochrone at 100Myr, so a large fraction of stars at this age and younger have saturated magnetic activity and therefore the observation of saturated magnetic activity in a star can only yield an upper limit to its age. This age ambiguity grows at lower masses because the increasing convective turnover times and longer spin-down timescales of lower mass stars means that a larger fraction of stars are saturated at a given age and they remain in the saturated regime for longer. Empirical activity-age relationships ------------------------------------ [lxage] ### X-ray activity Reviews of what is empirically known about the time-dependence of magnetic activity can be found in Randich (2000), Ribas  (2005) and Güdel (2007). Figure [lxage] shows an empirical relationship between X-ray luminosity and age for stars of about a solar mass. Different symbols show mean levels and the interquartile range from surveys of open clusters with “known age” (data are from Randich 2000; Flaccomio, Micela & Sciortino 2003; Preibisch & Feigelson 2005, and references therein) and data for a few field stars and the Sun, where ages have been estimated by other means, and where lines connect multiple measurements of the same star (Telleschi  2005; Güdel 2007). These data are not complete, but they illustrate the basic principles and problems of using empirical activity-age relations to estimate ages. The overall decay of X-ray activity with age is clearly seen, but the decay is not rapid for the first few hunded Myr, especially if the X-ray luminosities were normalised by bolometric luminosity to make them distance independent age indicators. In addition there is scatter at all ages that cannot be attributed to observational uncertainties. In the very young clusters there is at least an order of magnitude range of *L**x* (or $L\_x/L\_{\rm bol}$). This spread is not associated with rotation; most stars here have very low Rossby numbers that would put them in the saturated regime. Some of the spread may be associated with flaring or the presence of circumstellar material (e.g. Wolk  2005; Flaccomio, Micela & Sciortino 2012). In the young ZAMS clusters the spread in X-ray activity remains, but this is known *not* to be due to variability (e.g. Simon & Patten 1998; Jeffries  2006) and these stars have lost their circumstellar material. Instead, the rotation-activity correlation is at work. Many stars have spun down below the threshold where their activity is saturated and hence exhibit lower activity levels, whilst other stars in the same cluster remain as rapid rotators. The lack of variability and strong connection with rotation persists until at least the Hyades at an age of 600Myr, with $L\_{x}/L\_{\rm bol}$ being proportional to *N**R*− 2.7 in the unsaturated regime (Wright  2011). Beyond 1Gyr we expect from Fig. [gallet13] that rotational convergence has taken place, Ω ∝ *t*− 1/2, and hence $L\_{x}/L\_{\rm bol} \propto t^{-1.35}$. If anything, the decay looks like it may be a little steeper than this but there are no old open clusters with good ages near enough to study in detail with X-ray telescopes. Ribas  (2005) derived a time-dependence of between *t*− 1.27 and *t*− 1.92 for the soft and hard X-ray fluxes from 6 solar analogues in the field. Observations of field stars reveal, in contrast to the younger stars, a high level of variability on timescales of years. The Sun’s soft X-ray emission changes by almost two orders of magnitude on a roughly 11-year cycle (Strong & Saba 2009) and there are now observations of several solar analogues that indicate that at some point beyond 1Gyr, large (order of magnitude) and possibly cyclic variability of X-ray emission may commonly occur (e.g. Favata  2008; Robrade  2012). In terms of mass dependence, the decay of coronal activity closely follows what happens with rotation rates. Lower mass stars take longer to spin down and remain in the saturated regime for longer. So, whilst X-ray activity is a poor age indicator for the first 100Myr in a solar-type star, this period extends to 1Gyr or more in M-dwarfs. The longer spin down timescales also mean that field M-dwarfs tend to be more active than field G-dwarfs (when expressed as $L\_{x}/L\_{\rm bol}$, e.g. Preibisch & Feigelson 2005). Even once stars have spun down and reached the converged I-sequence of rotation periods, X-ray activity is still not a very good age indicator because of the high levels of variability. This limits precisions to about a factor of two in age. In principle this could be improved by monitoring the activity of a star over many years, but this is not usually practical. Instead, the recent focus of activity-age relations has turned to chromospheric emission in the form of the $R^{'}\_{\rm HK}$ index. ### Chromospheric activity [rhkplot] In general as one considers activity indicators that arise from lower/cooler layers in the magnetically heated outer atmosphere, the time dependence becomes less steep ( ∼ *t*− 1/2, e.g. Skumanich 1972; Ribas  2005; Findeisen, Hillenbrand & Soderblom 2011) and this appears to be true for G-, K- and M-dwarfs (Stelzer  2013). This is because of the rather steep slope in the correlation between chromospheric and coronal activity indicators (e.g. $L\_X/L\_{\rm bol} \propto (R^{\prime}\_{\rm HK})^{3.46}$; Mamajek & Hillenbrand 2008). However, typical levels of variability in the $R^{\prime}\_{\rm HK}$ index for stars that are young or old are only 5–10 per cent (Baliunas  1995), though young stars are still of course affected by a spread in rotation rates and saturation of chromospheric emission. So, even after taking account of the shallower $R^{\prime}\_{\rm HK}$-age relation, we see that chromospheric activity ages should suffer less than coronal activity ages from uncertainties due to magnetic activity cycles and other variability in older stars. There are several flavours of $R^{\prime}\_{\rm HK}$-age relations in the literature (e.g. Barry, Cromwell & Hege 1987; Soderblom, Duncan & Johnson 1991; Lachaume  1999), which are reviewed in some detail by Mamajek & Hillenbrand (2008), who then provide an updated calibration (see Fig. [rhkplot]) and also provide a $L\_x/L\_{\rm bol}$-age relationship that is bootstrapped from the chromospheric calibration. The $R^{\prime}\_{\rm HK}$-age relation suffers less than the $L\_x/L\_{\rm bol}$-age relation from problems with variability at older ages as discussed above, but the limitations associated with spreads in rotation rate and hence activity at younger ages are just as severe. A further key advantage of the $R^{\prime}\_{\rm HK}$-age relation is that there are good data for solar-type stars in the old open clusters M67 and some data in NGC 188. These calibration “points”, consisting of a set of coeval stars at 4 and 6.9Gyr, give an excellent estimate of the precision of the method. Further tests are provided by the comparison of chromospheric ages for stars in wide binary systems, with either similar components or components with different masses. Problems, precision and accuracy -------------------------------- Mamajek & Hillenbrand (2008) conclude that for solar-type stars older than a few hundred Myr, carefully measured $R^{\prime}\_{\rm HK}$ values yield log age to  ± 0.2 dex, or an age precision of  ∼ 60 per cent. The uncertainty grows rapidly at younger ages, due to the growing dispersion in $R^{\prime}\_{\rm HK}$ (see Fig. [rhkplot]), to become unusable at  ≤ 100Myr except in providing an upper limit to the age of a star. For reasons that are not clear (perhaps binaries have smaller amplitude activity cycles?), the dispersion in empirically determined age between the components of binary systems is lower than the dispersion implied by the spread of observed chromospheric activity in the presumably coeval stars of the Hyades and M67. A further limitation of the chromospheric activity-age relation is that calibrating data for lower mass stars is more sparse. Attempts to determine the slope of the $R^{\prime}\_{\rm HK}$-mass (or colour) relation from cluster data yield a wide diversity of results. Instead, Mamajek & Hillenbrand (2008) use a newly derived activity-*N**R* relation and combine this with a gyrochronology relation to create an activity-age-colour relationship, calibrated for F7-K2 main sequence stars. This assumes that stars have converged to the I-sequence and also makes the same assumptions about the separable nature of the colour and time dependence of rotation rate used in other gyrochronology relations. The new relation does reduce the dispersion in ages estimated for the binary components with different masses, but the dispersion estimated for stars in M67 remains stubbornly at the  ± 0.2 dex level. The situation is similar for X-ray activity indicators, though likely to be worse at older ages despite a steeper decline with age, because coronal X-ray variability is much greater than that of the chromosphere in older stars. Further limitations to the technique mirror those discussed for gyrochronology in section [gyrolimitations]. Whilst differential rotation should not be a problem, the limited convergence of rotation rates onto the I-sequence may be partly responsible for the dispersion in ages estimated for coeval stars. Like gyrochronology, activity-age relationships should not be used for close binary systems where tidal or other interactions may have affected the rotation rates of the components. Finally, like gyrochronology, the activity-age relationships (both coronal and chromospheric) are poorly calibrated at ages older than, and masses lower than, the Sun. Attempts to improve this situation are briefly reviewed in section [improvements]. Lithium depletion and age ========================= The “ecology” of lithium in the universe makes it a fascinating probe of many physical processes, ranging from the big-bang to cosmic ray spallation reactions and mixing in stellar interiors. 7Li is produced during the first minutes of a standard big-bang (Wagoner, Fowler & Hoyle 1967) at a predicted abundance (post-*Planck*) of 7Li/H = 4.89− 0.39+ 0.41 × 10− 10, or A(Li) = 2.69− 0.04+ 0.03, on a logarithmic scale where A(H) = 12 (Coc, Uzan & Vangioni 2013). This abundance is significantly higher than seen in old population II stars, which exhibit a plateau of Li abundance versus $T\_{\rm eff}$ at A(Li) = 2.20 ± 0.09 (Sbordone  2010) – the "Spite plateau” (Spite & Spite 1982). This discrepancy suggests either “new” physics beyond the standard big-bang model or that physical processes have been able to deplete Li from the photospheres of these old stars. On the other hand, the abundance of Li found in meteorites is A(Li) = 3.26 ± 0.05 (Asplund  2009), which implies that the interstellar medium in the Galaxy becomes Li-enriched with time. Several production mechanisms are under investigation; inside AGB stars, cosmic ray spallation, novae (Prantzos 2012). The photospheric solar 7Li abundance is A(Li) = 1.05 ± 0.10 and observations of solar-type stars in the field and open clusters reveal a wide dispersion of A(Li), from less than the solar value to approximately the meteoritic abundance, clearly indicating that depletion mechanisms are at work. It is the time-dependence of these depletion processes that makes Li abundance a potential age indicator. Lithium in pre main sequence stars ---------------------------------- ### The astrophysics of PMS Li depletion After PMS stars are born, they initially contract along fully-convective Hayashi tracks. Once their cores reach temperatures of  ∼ 3 × 106 K, then Li burning commences through the reaction[1](#fn1) 7Li(p,4He)4He. The reaction is extremely temperature dependent ( ∼ *T*20; Bildsten  1997), the density dependence is secondary. Li-depleted material at the core is convectively mixed upwards and replaced with fresh material and the star could then be completely depleted of Li on a timescale of a few–10Myr (much less than the Kelvin-Helmholtz contraction timescale). In stars with *M* < 0.4 *M*⊙, that remain fully convective right through to the ZAMS, this is indeed what happens. However, higher mass stars develop a central radiative core because their central opacity falls far enough to reduce the temperature gradient below the critical value necessary to trigger convection. Convective mixing to the core ceases and the extent to which *photospheric* Li depletion will continue depends now on the temperature of the convection zone base ($T\_{\rm BCZ}$). In stars of *M* ≤ 0.6*M*⊙ (based on the models of Siess, Dufour & Forestini 2000), $T\_{\rm BCZ}$ remains above the Li-burning threshold long enough to completely deplete Li in the photosphere, but in more massive stars Li-depletion should eventually be halted as the radiative core expands. If *M* > 1.1 *M*⊙ the radiative core forms before Li-depletion commences and such stars deplete very little Li. $T\_{\rm BCZ}$ is never hot enough to allow Li-burning in stars with *M* ≥ 1.3 *M*⊙ and their photospheric Li should remain at its initial value. It is worth emphasizing that the above discussion takes account only of convective mixing of material and predicts that depletion of photospheric Li should have ceased by  ∼ 100Myr in all stars with *M* ≥ 0.4 *M*⊙ and considerably earlier in stars of higher mass; i.e. the pattern of Li depletion versus mass should be settled prior to arrival on the ZAMS. Many flavours of evolutionary model have made predictions about the onset and rate of photospheric Li depletion and these can be used to define isochrones of Li depletion in the A(Li) versus $T\_{\rm eff}$ plane (see Fig. [pleiadesli]). For stars that develop a radiative core, the predicted Li depletion as a function of $T\_{\rm eff}$ is highly sensitive to the physics included in the models – the opacities (and therefore metallicity), the efficiency of convection parametrised in terms of a mixing length or overshooting, and the adopted atmospheres (e.g. Chaboyer, Pinsonneault & Demarque 1995b; Piau & Turck-Chièze 2002). For instance, at $T\_{\rm eff} \simeq 5000$K, the models of Baraffe  (2002) with mixing lengths of 1.0 pressure scale height or 1.9 pressure scale heights (the value that matches the solar luminosity at the solar age) have depleted Li by factors of 0.6 and 0.06 respectively at an age of 125Myr (see Fig. [pleiadesli]). ### Models vs observations [pleiadesli] Lithium is almost always measured using the resonance doublet of Lii at 6708Å. There are other transitions in the optical spectrum at 6104Å and 8126Å, but these are *much* weaker and blended with stronger lines. Even in cool stars Li is almost completely ionised, the line strengths are temperature sensitive (in warmer stars) and subject to NLTE effects that perturb abundances by up to 0.3dex depending on the Li abundance, $T\_{\rm eff}$ and metallicity of the atmosphere (e.g. Carlsson  1994). Basic curves of growth for the Lii6708Å feature have been calculated by a number of authors (e.g. Soderblom  1993; Palla  2007). These show that at A(Li) ∼ 3, the equivalent width (EW) is about 0.5Å and in the saturated part of the curve of growth in cool stars ($T\_{\rm eff} < 3500\,K$), while it is weaker ( ∼ 0.15Å), but more sensitive to abundance, in solar type stars. The 6708Å line is also blended with a Fei line at 6707.44Å. This is much weaker than the Li feature for stars with undepleted Li but a more accurate assessment of this blend becomes important as Li is depleted (Soderblom  1993). Other problems associated with estimating an accurate Li abundance arise from an accurate estimation of $T\_{\rm eff}$, especially in young, active stars with spots and chromospheres, and photospheric veiling by an accretion continuum may need to be accounted for in PMS stars with disks. A further problem in comparing models with observations is that models predict Li *depletion*, so an initial abundance must be adopted. For most young clusters this is usually assumed to be close to the solar meteoritic value; there is also evidence from very young (assumed to be undepleted) T-Tauri stars that A(Li)$\_{\rm init} \simeq 3.1$–3.4 (e.g. Martín  1994; Soderblom  1999). There are however plausible reasons from Galactic chemical evolution models and some observational evidence that the initial Li may be positively correlated with [Fe/H] (e.g. Ryan  2001; Cummings  2012). It seems reasonable to assume that for young stars near the Sun there could be a  ± 0.1–0.2 dex spread in the initial A(Li). Figure [pleiadesli] represents the most basic comparison of PMS Li depletion with models, showing Li abundances in the Pleiades, which has an age of 125Myr (Stauffer, Schulz & Kirkpatrick 1998), versus a number of representative model isochrones (at  ≃ 100Myr) from the literature (D’Antona & Mazzitelli 1997; Piau & Turck-Chièze 2002; Baraffe  2002). The solar photospheric Li abundance is also shown. This Figure illustrates several important points: * There appears to be little Li depletion (assuming an initial A(Li) = 3.2) among G-stars and this is predicted by most of the models. As the scatter in abundance ( ≃ 0.2 dex) is similar to the amount of depletion in G-stars and similar to the uncertainty in initial Li abundance, Li depletion will not be an accurate age indicator below 125Myr for these stars. * The models differ vastly in their predictions of PMS Li depletion in cooler stars. There are several differences between these models, but the dominant one as far as Li depletion is concerned is convective efficiency. * Models that have a convective efficiency (mixing length) tuned to match the Sun’s luminosity (the Piau & Turck-Chièze 2002 model and the Baraffe  2002 models with mixing length set to 1.9 pressure scale heights) predict too much Li depletion. A lower convective efficiency provides a better match (see also Tognelli, Degl’Innocenti & Prada Moroni 2012). * A scatter in Li abundance develops in this coeval cluster at $T\_{\rm eff}<5500$K that cannot be accounted for by observational uncertainties ( ∼ 0.1–0.2dex) or explained by the models shown. The large disagreements between the various model flavours and the failure of these models to match the Pleiades or to predict the scatter among the K-stars means that Li abundance in solar-type stars (those that develop a radiative core) cannot yet be used as anything but an *empirical* age indicator. However, the scatter at a given $T\_{\rm eff}$ (or colour) is a problem in that regard too, since stars of similar age may show a wide dispersion in their Li abundances. Unless the factors causing this dispersion can be identified, there is an inevitable uncertainty on any age inferred from an Li abundance. There is a long list of possible causes of the Li-abundance dispersion that have been considered. It seems possible that some fraction of it may be caused by atmospheric inhomogeneities or chromospheric heating of the upper photosphere (e.g. Randich 2001; King  2010), but it is unlikely to be dominant given the lack of variability in the line strengths (Jeffries 1999) and the agreement between abundances derived from the 6104Å and 6708Å features. A big clue may be the correlation with rotation, noted for the Pleiades by Soder
arxiv_0000204
Hybrid Recommender Systems: A Systematic Literature Review ========================================================== Recommender systems are software tools used to generate and provide suggestions for items and other entities to the users by exploiting various strategies. Hybrid recommender systems combine two or more recommendation strategies in different ways to benefit from their complementary advantages. This systematic literature review presents the state of the art in hybrid recommender systems of the last decade. It is the first quantitative review work completely focused in hybrid recommenders. We address the most relevant problems considered and present the associated data mining and recommendation techniques used to overcome them. We also explore the hybridization classes each hybrid recommender belongs to, the application domains, the evaluation process and proposed future research directions. Based on our findings, most of the studies combine collaborative filtering with another technique often in a weighted way. Also cold-start and data sparsity are the two traditional and top problems being addressed in 23 and 22 studies each, while movies and movie datasets are still widely used by most of the authors. As most of the studies are evaluated by comparisons with similar methods using accuracy metrics, providing more credible and user oriented evaluations remains a typical challenge. Besides this, newer challenges were also identified such as responding to the variation of user context, evolving user tastes or providing cross-domain recommendations. Being a hot topic, hybrid recommenders represent a good basis with which to respond accordingly by exploring newer opportunities such as contextualizing recommendations, involving parallel hybrid algorithms, processing larger datasets, etc. Hybrid Recommendations Recommender Systems Systematic Review Recommendation Strategies Introduction ============ Historically people have relied on their peers or on experts’ suggestions for decision support and recommendations about commodities, news, entertainment, etc. The exponential growth of the digital information in the last 25 years, especially in the web, has created the problem of information overload. Information overload is defined as “stress induced by reception of more information than is necessary to make a decision and by attempts to deal with it with outdated time management practices”.[1](#fn1) This problem limits our capacity to review the specifications and choose between numerous alternatives of items in the online market. On the other hand, information science and technology reacted accordingly by developing information filtering tools to alleviate the problem. Recommender Systems (RSs) are one such tools that emerged in the mid 90s. They are commonly defined as software tools and techniques used to provide suggestions for items and other recommendable entities to users. In the early days (beginning of 90s) RSs were the study subject of other closely related research disciplines such as Human Computer Interaction (HCI) or Information Retrieval (IR). Today, RSs are found everywhere helping users in searching for various types of items and services. They also serve as sales assistants for businesses increasing their profits. Technically all RSs employ one or more recommendation strategies such as Content-Based Filtering (CBF), Collaborative Filtering (CF), Demographic Filtering (DF), Knowledge-Based Filtering (KBF), etc. described below: * **Collaborative Filtering**: The basic assumption of CF is that people who had similar tastes in the past will also have similar tastes in the future. One of its earliest definitions is “collaboration between people to help one another perform filtering by recording their reactions to documents they read”. This approach uses ratings or other forms of user generated feedback to spot taste commonalities between groups of users and then generates recommendations based on inter-user similarities. CF recommenders suffer from problems like cold-start (new user or new item), “gray sheep” (users that do not fit in any taste cluster), etc. * **Content-Based Filtering**: CBF is based on the assumption that people who liked items with certain attributes in the past, will like the same kind of items in the future as well. It makes use of item features to compare the item with user profiles and provide recommendations. Recommendation quality is limited by the selected features of the recommended items. Same as CF, CBF suffer from the cold-start problem. * **Demographic Filtering**: DF uses demographic data such as *age*, *gender*, *education*, etc. for identifying categories of users. It does not suffer from the new user problem as is doesn’t use ratings to provide recommendations. However, it is difficult today to collect enough demographic information that is needed because of online privacy concerns, limiting the utilization of DF. It is still combined with other recommenders as a reinforcing technique for better quality. * **Knowledge-Based Filtering**: KBF uses knowledge about users and items to reason about what items meet the users’ requirements, and generate recommendations accordingly. A special type of KBFs are constraint-based RSs which are capable to recommend complex items that are rarely bought (i.e. cars or houses) and manifest important constrains for the user (price). It is not possible to successfully use CF or CBF in this domain of items as few user-system interaction data are available (people rarely buy houses). One of the earliest recommender systems was Tapestry, a manual CF mail system. The first computerized RS prototypes also applied a collaborative filtering approach and emerged in mid 90s. GroupLens was a CF recommendation engine for finding news articles. In the authors present a detailed analysis and evaluation of the Bellcore video recommender algorithm and its implementation embedded in the Mosaic browser interface. Ringo used taste similarities to provide personalized music recommendations. Other prototypes like NewsWeeder and InfoFinder recommended news and documents using CBF, based on item attributes. In late 90s important commercial RS prototypes also came out with Amazon.com recommender being the most popular. Many researchers started to combine the recommendation strategies in different ways building hybrid RSs which we consider in this review. Hybrid RSs put together two or more of the other strategies with the goal of reinforcing their advantages and reducing their disadvantages or limitations. One of the first was Fab, a meta-level recommender (see section 3.4.6) which was used to suggest websites. It incorporated a combination of CF to find users having similar website preferences, with CBF to find websites with similar content. Other works such as followed shortly and hybrid RSs became a well established recommendation approach. The continuously growing industrial interest in the recent and promising domains of mobile and social web has been followed by a similar increase of academic interest in RSs. ACM RecSys annual conference[2](#fn2) is now the most significant event for presenting and discussing RS research. The work of Burke in is one of the first qualitative surveys addressing hybrid RSs. The author analyzes advantages and disadvantages of the different recommendation strategies and provides a comprehensive taxonomy for classifying the ways they combine with each other to form hybrid RSs. He also presents several hybrid RS prototypes falling into the 7 hybridization classes of the taxonomy. Another early exploratory work is where several experiments combining personalized agents with opinions of community members in a CF framework are conducted. They conclude that this combination produces high-quality recommendations and that the best results of CF are achieved using large data of user communities. Other review works are more generic and address RSs in general, not focusing in any RS type. They reflect the increasing interest in the field in quantitative terms. In the authors perform a review work of 249 journal and conference RS publications from 1995 to 2013. The peak publication period of the works they consider is between 2007 and 2013 (last one-third of the analyzed period). They emphasize the fact that the current hybrid RSs are incorporating location information into existing recommendation algorithms. They also highlight the proper combination of existing methods using different forms of data, and evaluating other characteristics (e.g., diversity and novelty) besides accuracy as future trends. In the authors review 210 recommender system articles published in 46 journals from 2001 to 2010. They similarly report a rapid increase of publications between 2007 and 2010 and predict an increase interest in mixing existing recommendation methods or using social network analysis to provide recommendations. In this review paper we summarize the state of the art of hybrid RSs in the last 10 years. We follow a systematic methodology to analyze and interpret the available facts related to the 7 research questions we defined. This methodology defined at provides an unbiased and reproducible way for undertaking a review work. Unlike the other review works not focused in any RS type, this systematic literature review is the first quantitative work that is entirely focused in recent hybrid RS publications. For this reason it was not possible for us to have a direct basis with which to compare our results. Nevertheless we provide some comparisons of results for certain aspects in which hybrid RSs do not differ from other types of RSs. To have a general idea about what percentage of total RS publications address hybrid RSs we examined, a survey work about RSs in general. Here the authors review the work of 330 papers published in computer science and information systems conferences proceedings and journals from 2006 to 2011. Their results show that hybrid recommendation paradigm is the study object of about 14.5% of their reviewed literature. We considered the most relevant problems hybrid RSs attempt to solve, the data mining and machine learning methods involved, RS technique combinations the studies utilize and the hybridization classes the proposed systems fall into. We also observed the domains in which the contributions were applied and the evaluation strategies, characteristics and metrics that were used. Based on the suggestions of the authors and the identified challenges we also present some future work directions which seem promising and in concordance with the RS trends. Many primary studies were retrieved from digital libraries and the most relevant papers were selected for more detailed processing (we use the terms paper and study interchangeably to refer to the same object / concept). We hope this work will help anyone working in the field of (hybrid) RSs, especially by providing insights about future trends or opportunities. The remainder of the paper is structured as follows. Section 2 briefly summarizes the methodology we followed, the objectives and research questions defined, the selection of papers and the quality assessment process. Section 3 introduces the results of the review organized in accordance with each research question. Section 4 discusses and summarizes each result whereas Section 5 concludes. Finally we list the selected papers in Appendix A. Methodology =========== The review work of this paper follows the guidelines that were defined by Kitchenham and Charters for systematic literature reviews in Software Engineering. ![Systematic literature review protocol](./images/Protocol7)Systematic literature review protocol The purpose of a systematic literature review is to present a verifiable and unbiased treatment of a research topic utilizing a rigorous and reproducible methodology. The guidelines that were followed are high level and do not consider the influence of research questions type on the review procedures. In Figure 1 we present the protocol of the review. It represents a clear set of steps which assist the management of the review process. The protocol was defined by the first author and verified by the second author. In the following sections we describe each step we summarized in Figure 1. Research questions, search string and digital sources ----------------------------------------------------- The primary goal of this systematic literature review is to understand what challenges hybrid RSs could successfully address, how they are developed and evaluated and in what ways or aspects they could be experimented with. To this end, we defined the following research questions: [**RQ1** ] What are the most relevant studies addressing hybrid recommender systems? [**RQ2** ] What problems and challenges are faced by the researchers in this field? Which data mining and machine learning techniques are used in hybrid RSs? What recommendation techniques are combined and which problems they solve? [**RQ4** ] What hybridization classes are used, based on the taxonomy of Burke? [**RQ5** ] In what domains are hybrid recommenders applied? What methodologies are used for the evaluation and what metrics they utilize? Which RS characteristics are evaluated and what metrics they use? What datasets are used for training and testing hybrid RSs? [**RQ7** ] Which directions are most promising for future research? Furthermore we picked five scientific digital libraries that represent our primary sources for computer science research publications. They are listed in Table 1. Other similar sources were not considered as they mainly index data from the primary sources. [ht] l l **Source** & **URL** [0.5ex] SpringerLink & http://link.springer.com Science Direct & http://www.sciencedirect.com IEEExplore & http://ieeexplore.ieee.org ACM Digital Library & http://dl.acm.org Scopus & http://www.scopus.com [0.5ex] [ht] l l **Keyword** & **Synonyms** [0.5ex] Hybrid & Hybridization, Mixed Recommender & Recommendation System & Software, Technique, Technology, Approach, Engine We defined *(“Hybrid”, “Recommender”, “Systems”)* as the basic set of keywords. Then we added synonyms to extend it and obtain the final set of keywords. The set of keywords and synonyms is listed in Table 2. The search string we defined is: *(“Hybrid” OR “Hybridization” OR “Mixed”) AND (“Recommender” OR “Recommendation”) AND (“System” OR “Software” OR “Technique” OR “Technology” OR “Approach” OR “Engine”)* Selection of papers ------------------- Following Step 4 of the protocol, we applied the search string in the search engines of the five digital libraries and found 9673 preliminary primary studies (see Table 4). The digital libraries return different numbers of papers because of the dissimilar filtering settings they use in their search engines. This retrieval process was conducted during May 2015. To objectively decide whether to select each preliminary primary study for further processing or not, we defined a set of inclusion / exclusion criteria listed in Table 3. The inclusion / exclusion criteria are considered as a basis of concentrating in the most relevant studies with which to achieve the objectives of the review. [ht] p9.7cm **Inclusion criteria** [0.5ex] Papers presenting hybrid recommender systems, algorithms, approaches, etc. [0.8ex] Papers that even though do not specifically present hybrid RSs, provide recommendations combining different data mining techniques. [0.8ex] Papers from conferences and journals [0.8ex] Papers published from 2005 to 2015 [0.8ex] Papers written in English language only [0.8ex] **Exclusion criteria** Papers not addressing recommender systems at all [0.8ex] Papers addressing RSs but not implying any hybridization or combination of different approaches or data mining techniques. [0.8ex] Papers that report only abstracts or slides of presentation, lacking detailed information [0.8ex] Grey literature [0.8ex] Duplicate papers were removed and a coarse selection phase followed. Processing all of them strictly was not practical. Therefore we decided to include journal and conference papers only, leaving out gray literature, workshop presentations or papers that report abstracts or presentation slides. We initially analyzed title, publication year and publication type (journal, conference, workshop, etc.). In many cases abstract or even more parts of each paper were examined for deciding to keep it or not. Our focus in this review work is on hybrid recommender systems. Thus we selected papers presenting mixed or combined RSs dropping out any paper addressing single recommendation strategies or papers not addressing RSs at all. Hybrid RSs represent a somehow newer family of recommender systems compared to other well known and widely used families such as CF or CBF. Therefore the last decade (2005-2015) was considered an appropriate publication period. [ht] l c c c & **Digital source** & **Search and retrieval** & **Coarse selection** & **Detailed selection** & 4152 & 50 & 13 & 3582 & 27 & 9 & 1012 & 53 & 13 & 484 & 35 & 12 & 443 & 75 & 29 & **9673** & **240** & **76** Using inclusion / exclusion and this coarse selection step we reached to a list of 240 papers. In the next step we performed a more detailed analysis and selection of the papers reviewing abstract and other parts of every paper. Besides relevance based on the inclusion / exclusion criteria, completeness (in terms of problem definition, description of the proposed method / technique / algorithm and evaluation of results) of each study was also taken into account. Finally we reached to our set of 76 included papers. The full list is presented in Appendix A together with the publication details. Quality assessment ------------------ We also defined 6 questions listed in Table 5 for the quality estimation of the selected studies. Each of the question receives score values of 0, 0.5 and 1 which represent answers “no”, “partly” and “yes” correspondingly. The questions we defined do not reflect equal level of importance in the overall quality of the studies. For this reason we decided to weight them with coefficients of 0.5 (low importance) 1 (medium importance) and 1.5 (high importance). [ht] p5.8cm c r **Quality Question** & **Score** & **Weight** [0.5ex] QQ1. Did the study clearly describe the problems that is addressing? & yes / partly / no (1 / 0.5 / 0) & 1 [0.8ex] QQ2. Did the study review the related work for the problems? & yes / partly / no (1 / 0.5 / 0) & 0.5 [0.8ex] QQ3. Did the study recommend any further research? & yes / partly / no (1 / 0.5 / 0) & 0.5 [0.8ex] QQ4. Did the study describe the components or architecture of the proposed system? & yes / partly / no (1 / 0.5 / 0) & 1.5 [0.8ex] QQ5. Did the study provide an empirical evaluation? & yes / partly / no (1 / 0.5 / 0) & 1.5 [0.8ex] QQ6. Did the study present a clear statement of findings? & yes / partly / no (1 / 0.5 / 0) & 1 [0.8ex] We set higher weight to the quality questions that address the components/architecture of the system/solution (QQ4) and the empirical evaluation (QQ5). Quality questions that address problem description (QQ1) and statement of results (QQ6) got medium importance. We set a low importance weight to the two questions that address the related studies (QQ2) and future work (QQ3). The papers were split in two disjoint subsets. Each subset of papers was evaluated by one of the authors. In cases of indecision the quality score was set after a discussion between the authors. At the end, the final weighted quality score of each study was computed using the following formula: *s**c**o**r**e* = ∑*i* = 16*w**i* \* *v**i*/6 *w**i* is the weight of question *i* *(0.5, 1, 1.5)* *v**i* is the vote for question *i* *(0, 0.5, 1)* After this evaluation, cross-checking of the assessment was done on arbitrary studies (about 40% of included papers) by the second author. At the end, an agreement on differences was reached by discussion. Data extraction --------------- Data extraction was carried on the final set of selected primary studies. We collected both paper meta-data (i.e., author, title, year, etc.) and content data important to answer our research questions like problems, application domains, etc. Table 6 presents our data extraction form. In the first column we list the extracted data, in the second column we provide an explanation for some of the extracted data which may seem unclear and in the third column the research question with which the data is related. All the extracted information was stored in Nvivo[3](#fn3) which was used to manage data extraction and synthesis process. Nvivo is a data analysis software tool that helps in automating the identification and the labeling of the initial segments of text from the selected studies. [ht] l l l **Extracted Data** & **Explanation** & **RQ** [0.5ex] ID & A unique identifier of the form Pxx we set to each paper & - Title & - & RQ1 Authors & - & - Publication year & - & RQ1 Conference year & - & - Volume & Volume of the journal & - Location & Location of the conference & - Source & Digital library from which was retrieved & - Publisher & - & - Examiner & Name of person who performed data extraction & - Participants & Study participants like students, academics, etc. Goals & Work objectives & - Application domain & Domain in which the study is applied & RQ5 Approach & Hybrid recommendation approach applied & RQ3b Contribution & Contribution of the research work & - Dataset & Public dataset used to train and evaluate the algorithm & RQ6c DM techniques & Data mining techniques used & RQ3a Evaluation methodology & Methodology used to evaluate the RS & RQ6a Evaluated characteristic & RS characteristics evaluated & RQ6b Future work & Suggested future works & RQ7 Hybrid class & Class of hybrid RS & RQ4 Research problem & - & RQ2 Score & Overall weighted quality score & - Other Information & - & - Synthesis --------- For the synthesis step we followed Cruzes and Dyba methodology for the thematic synthesis. Their methodology uses the concept of codes which are labeled segments of text to organize and aggregate the extracted information. Following the methodology we defined some initial codes which reflected the research questions. Some examples include the first research problems found, hybrid recommendation classes, first application domains, data mining techniques, recommendation approaches and evaluation methodologies. After completing the reading we had refined or detailed each of the initial codes with more precise sub-codes (leaf nodes in NVivo) which were even closer to the content of the selected papers, covering all the problems found, all the datasets used, and similar detailed data we found. We finished assigning codes to all the highlighted text segments of the papers and then the codes were aggregated in themes (of different levels if necessary) by which the papers were grouped. Afterwards a model of higher-order themes was created to have an overall picture. The research questions were mapped with the corresponding themes. Finally, the extracted data were summarized in categories which are reported in the results section (in pictures or tables) associated with the research questions they belong to. Results ======= In this section we present the results we found from the selected studies to answer each research question. We illustrate the different categories of problems, techniques, hybridization classes, evaluation methodologies, etc. with examples from the included studies. The results are further discussed in the next section. RQ1: Included studies --------------------- ![Distribution of studies per publication year](./images/Year2)Distribution of studies per publication year RQ1 addresses the most relevant studies that present Hybrid RSs. We selected 76 papers as the final ones for further processing. They were published in conference proceedings and journals from 2005 to 2015. The publication year distribution of the papers is presented in Figure 2. It shows that most of the hybrid RS papers we selected were published in the last 5 years. For the quality assessment process we used the quality questions listed in Table 5. In Figure 3, the box plots of quality score distributions per study type (conference or journal) are shown. We see that about 75% of journal studies have quality score higher than 0.9. Same is true for about 35% of conference studies. In Figure 4 we present the average quality score about each quality question. ![Boxplot of quality score per publication type](./images/BoxPlots)Boxplot of quality score per publication type ![Average score of each quality question](./images/Score2)Average score of each quality question QQ4 (Did the study describe the components or architecture of the proposed system?) has the highest average score (0.947) wheres QQ3 (Did the study suggest further research?) has the lowest (0.651). The weighted quality score is higher than 0.81 for any included paper. Only one journal study got a weighted average score of 1.0 (highest possible). RQ2: Research problems ---------------------- To answer RQ2 we summarize the most important RS problems the studies try to solve. A total of 12 problems were found. The most frequent are presented in Figure 5 with the corresponding number of studies where they appear. Studies may (and often do) address more than one problem. Same thing applies for other results (data mining techniques, domains, evaluation metrics, etc.) reported in this section. Below we describe each of the problems: **Cold-start** This problem is heavily addressed in the literature and has to do with recommendations for new users or items. In the case of new users the system has no information about their preferences and thus fails to recommend anything to them. In the case of new items the system has no ratings for these items and doesn’t know to whom recommend them. To alleviate cold-start, authors in use a probabilistic model to extract latent features from item’s representation. Using the latent features they generate accurate pseudo ratings, even in cold-start situation when few or no ratings are provided. Another example is where the authors try to solve the new user cold-start in the e-learning domain by combining CF with a CBF representation of learning contents. Cold-start problem is also treated in where the authors merge the weighted outputs of different recommendation strategies using Ordered Weighted Averaging (OWA), a mathematical technique first introduced in. In total, cold-start was found in 23 studies. **Data sparsity** This problem rises from the fact that users usually rate a very limited number of the available items, especially when the catalog is very large. The result is a sparse *user-item* rating matrix with insufficient data for identifying similar users or items, negatively impacting the quality of the recommendations. Data sparsity is prevalent in CF RSs which rely on peer feedback to provide recommendations. In data sparsity of cross-domain recommendations is solved using a factorization model of the triadic relation *user-item-domain*. Also in we find an attempt to solve data sparsity by treating each user-item rating as predictor of other missing ratings. They estimate the final ratings by merging ratings of the same item by other users, different item ratings made by the same user and ratings of other similar users on other similar items. Another example is where CF is combined with Naive Bayes in a switching way. Data sparsity was a research problem of 22 studies. **Accuracy** Recommendation accuracy is the ability of a RS to correctly predict the item preferences of each user. Much attention has been paid to improve the recommendation accuracy since the dawn of RSs. Obviously there is still place for recommendation accuracy improvements. This is especially true in data sparsity situations, as accuracy and data sparsity are two problems that appear together in 6 studies (e.g., ). In a Bayesian network model with user nodes, item nodes, and feature nodes is used to combine CF with CBF and attain better recommendation quality. Other example is where a web content RS is constructed. The authors construct user’s long term interest based on his/her navigation history. Than the similarity of user’s profile with website content is computed to decide whether to suggest the website or not. Experiments conducted with news websites show improved accuracy results. Improving accuracy was a research objective of 16 studies. **Scalability** This is a difficult to attain characteristic which is related to the number of users and items the system is designed to work for. A system designed to recommend few items to some hundreds of users will probably fail to recommend hundreds of items to millions of people, unless it is designed to be highly scalable. Hyred in is an example of a system designed to be scalable and overcome data sparsity problem as well. The authors combine a modified Pearson correlation CF with distance-to-boundary CBF. They find the nearest and furthest neighbors of each user to reduce the dataset. The use of this compressed dataset improves scalability, alleviates sparsity, and also slightly reduced the computational time of the system. In the authors propose a hybrid RS designed to recommend images in social networks. They involve CF and CBF in a weighted way and also consider aesthetic characteristics of images for a better filtering, which overcomes the problem of scalability and cold-start as well. In a system with better scalability is conceived by combining Naive Bayer and SVM with CF. Improving scalability was addressed in 11 studies. **Diversity** This is a desired characteristic that is getting attention recently. Having diverse recommendations is important as it helps to avoid the popularity bias. The latter is having a recommendation list with items very similar to each other (e.g., showing all the episodes of a very popular saga). A user that is not interested in one of them is probably not interested in any of them and gets no value from that recommendation list. *K-Furthest Neighbors*, the inverted neighborhood model of K-NN is used in for the purpose of creating more diverse recommendations. The authors report an increased diversity. However, the user study they conduct shows that the perceived usefulness of it is not different from the one of traditional CF. In the concept of Experts is utilized to find novel and relevant items to recommend. The ratings of users are analyzed and some of the users are promoted as “experts” of a certain taste. They generate recommendations of their for the rest of the “normal” users in that item taste. Diversity is also addressed in totaling in 3 studies. **Other** These are other problems appearing in few studies. They include Lack of Personalization, Privacy Preserving, Noise Reduction, Data source Integration, Lack of Novelty and User preference Adaptiveness. ![Addressed problems](./images/Problems8)Addressed problems RQ3a: Data mining and machine learning techniques ------------------------------------------------- In this section we address the distribution of the studies according to the basic Data Mining (DM) and Machine Learning (ML) techniques they use to build their hybrid RSs. The variety of DM and ML techniques or algorithms used is high. Authors typically use different techniques to build the diverse components of their solutions or prototypes. In Table 7 we present the most frequent that were found in the included studies. Below we describe some of them. More details about the characteristics of DM/ML techniques and how they are utilized to build RSs can be found at. **K-NN** *K-Nearest Neighbors* is a well known classification algorithm with several versions and implementations, widely utilized in numerous data mining and other applications. This technique is popular among collaborative filtering RSs which represent the most common family of recommenders. It is mostly utilized to analyze neighborhood and find users of similar profiles or analyze items’ catalog and find items with similar characteristics. K-NN was found in a total of 59 studies. **Clustering** There are various clustering algorithms used in RSs and other data mining applications. They typically try to put up a set of categories with which data can be identified. The most popular is *K-means* which partitions the entire data into K clusters. In RSs clustering is mostly applied to preprocess the data. In the authors experiment with K-way (similar to K-means) clustering and *Bisecting K-means* for grouping different types of learning items. They also use CBF to create learners’ profiles and build an e-learning recommender with improved accuracy. An other example is where websites are clustered using co-occurence of pages and the content data of pages. The results are aggregated to get the final recommendations and overcome data sparsity. In total clustering algorithms were used in 34 studies. **Association rules** Association rule mining tries to discover valuable relations (association rules) in large databases of data. These associations are in the form X `=>` Y, where X and Y are sets of items. The association that are above a minimum level of support with an acceptable level of confidence can be used to derive certain conclusions. In recommender systems this conclusions are of the form “X likes Y” where X is a user to whom the system can recommend item Y. In information collected from a discussion group is mined and association rules are used to form the user similarity neighborhood. Word Sense Disambiguation is also used to select the appropriate semantically related concept from posts which are then recommended to the appropriate users of the forum. This hybrid meliorates different problems such as cold-start, data sparsity and scalability. In classification based on association methods is applied to build a RS in the domain of tourism. The system is more resistant to cold-start and sparsity problems. To overcome cold-start, the authors in propose a procedure for finding similar items by association rules. Their algorithm considers the user-item matrix as a transaction database where the user Id is the transactional Id. They find the support of each item and keep items with support greater than a threshold. Afterwards, they calculate the confidence of remaining rules and rule scores by which they find the most similar item to any of the items. Association rules were found in 17 studies. **Fuzzy logic** Also called *fuzzy set theory* it is a set of mathematical methods that can be used to build hybrid RSs. Those methods are also called reclusive in the literature. Contrary to CF which relies on neighborhood preferences without considering item characteristics, they require some representation of the recommended items. Reclusive methods are complementary to collaborative methods and are often combined with them to form hybrid RSs. An example of using Fuzzy logic is where better accuracy is achieved by combining 2 CFs with a fuzzy inference system in a weighted way to recommend leaning web resources. In fuzzy clustering is used to integrate user profiles retrieved by a CF with Point Of Interest (POI) data retrieved from a context aware recommender. The system is used in the domain of tourism and provides improved accuracy. In total Fuzzy logic was found in 14 studies. **Matrix manipulation** Here we put together the different methods and algorithms that are based on matrix operations. The methods we identified are Singular Value Decomposition (SVD), Latent Dirichlet Allocation (LDA), Principal Component Analysis (PCA), Dimensionality Reduction and similar matrix factorization techniques. Matrix manipulation methods are often used to build low error collaborative RSs and were especially promoted after the Netflix challenge was launched in 2006. In a topic model based on LDA is used to learn the probability that a user rates an item. An other example is where Dimensionality Reduction is used to solve sparsity and scalability in a multi-criteria CF. They were found in 9 studies. **Other** Other less frequent techniques such as Genetic Algorithms, Naive Bayes, Neural Networks, Notion of Experts, Statistical Modeling, etc. were found in 19 papers. [ht] l c **DM/ML technique** & **Studies** [0.5ex] K-NN & 59 Clustering & 34 Association rules & 17 Fuzzy logic & 14 Matrix manipulation & 9 Other & 19 RQ3b: Recommendation technique combinations ------------------------------------------- In this section we present a list of the most common technique combinations that form hybrid RSs. We also present the problems each of this combinations is most frequently associated with. In the following subsections the construct and technical details of some of the prototypes implementing each combination is described. Table 8 presents the summarized results. [ht] l c c c c c c & **Problem** & **CF-X** & **CF-CBF** & **CF-CBF-X** & **IICF-UUCF** & **CBF-X** & **Other** & 2 & 3 & 2 & 1 & 1 & 5 & 0 & 5 & 3 & 3 & 4 & 6 & 2 & 3 & 0 & 2 & 2 & 4 & 0 & 2 & 2 & 0 & 2 & 2 & 2 & 0 & 0 & 0 & 0 & 1 & 0 & 2 & 1 & 1 & 1 & 2 & **6** & **15** & **8** & **7** & **10** & **20** ### CF-X Here we report studies that combine CF with one other technique which is not CBF (those are counted as CF-CBF). An example of this combination is where the authors go hybrid to improve the performance of a multi-criteria recommender. They base their solution on the assumption that usually only a few selection criteria are the ones which impact user preferences about items and their corresponding ratings. Clustering is used first to group users based on the items’ criteria they prefer. CF is then used within each cluster of similar users to predict the ratings. They illustrate their method by recommending hotels from TripAdvisor[4](#fn4) and report performance improvements over traditional CF. Other attempt to improve the predictive accuracy of traditional CF is. Here the authors integrate in CF discrete demographic data about the users such as *gender*, *age*, *occupation*, etc. Fuzzy logic is used to compute similarities between users utilizing this extra demographic data and integrate the extra similarities with the user-based similarities calculated from ratings history. After calculating the final user similarities their algorithm predicts the rating values. The extra performance which is gained from the better user similarities that are obtained, comes at the cost of a slightly larger computational time which is however acceptable. In total CF-X combination was found in 6 studies with X being KBF, DF or a DM/ML technique from those listed in Table 6. ### CF-CBF This is a very popular hybrid RS utilizing the two most successful recommendation strategies. In many cases the recommendations of both systems are weighted to produce the final list of predictions. In other cases the hybrid RS switches from CF to CBF or is made up of a more complex type of combination (see section 3.5). An example is where the authors develop a hybrid RS suitable for working with high volumes of data and solve scalability problems in e-commerce systems. Their solution first involves CF (Pearson’s product moment coefficients) to reduce the dataset by finding the nearest neighbors of each user, discarding the rest and reducing the dataset. Afterwards distance-to-boundary CBF is used to define the decision boundary of items purchased by the target user. The final step combines the CF score (correlation coeficient between two customers) with the distance-to-boundary score (distance between the decision boundary and each item) in a weighted linear form. The authors report an improved accuracy of their hybrid RS working in the reduced dataset, compared to other existing algorithms that use full datasets. In the authors propose a CF-CBF hybrid recommender which is based on Bayesian networks. This model they build uses probabilistic reasoning to compute the probability distribution over the expected rating. The weight of each recommending strategy (CF and CBF) is automatically selected, adapting the model to the specific conditions of the problem (it can be applied to various domains). The authors demonstrate that their combination of CF and CBF improves the recommendation accuracy. Other studies involve similar mathematical models or constructs (e.g., fuzzy logic) to put together CF and CBF and gain performance or other benefits. In total CF-CBF contributions were found in 15 studies. ### CF-CBF-X Those are cases in which CF and CBF are combined together with a third approach. One example is where CF and CBF are combined with DF to generate recommendations for groups of similar profiles (users). These kind of recommendations are particularly useful in online social networks (e.g., for advertising). The goal of the authors is to provide good recommendations in data sparsity situations. First CBF is used to analyse ratings and items’ attributes. CF is then invoked as the second stage of the cascade to generate the group recommendations. DF is used to reinforce CF in the cases of sparse profiles (users with few ratings). In total CF-CBF-X was found in 8 studies. X is mostly a clustering technique or DF. ### IICF-UUCF Item-Item CF and User-User CF are two forms of CF recommenders, differing on the way the neighborhoods are formed. Some studies combine both of them to improve overall CF performance. An example is where the authors present a hybrid recommendation framework they call Collaborative Filtering Topic Model (CFTM) which considers both user’s reviews and ratings about items of a certain topic (or domain) in e-commerce. The first stage which is offline performs sentiment analysis in the reviews to calculate the User or Item similarity. The second stage of the cascade uses IICF or UUCF (switching) to predict the ratings. The authors evaluate using 6 datasets of different domains from Amazon and report that their hybrid approach performs better than traditional CF, especially in sparsity situations. IICF-UUCF combinations were found in 7 studies. ### CBF-X There were also 10 studies in which CBF is combined with another technique X which is not CF (counted as CF-CBF). X represents different approaches like KBF and DF or DM/ML techniques like clustering etc. One example is where the authors describe and use the interesting notion of *user lifestyle*. They select demographic information, consumer credit data and TV program preferences as lifestyle indicators, and confirm their significance by performing statistical analysis on 502 users. The most significant lifestyle attributes are binary encoded and used to form the neighborhoods and ratings of each user by means of Pearson correlation. The authors call the resulting complete (in terms of ratings) matrix *pseudoUser - item* matrix. It is then used for a Pearson based (classical CF) prediction of the original *user-item* ratings. Considerable performance improvements are reported. ### Other Other implementations include combinations of the same recommendation strategy (e.g., *CF1-CF2* with different similarity measures or tuning parameters each), trust-aware recommenders that are being used in social communities, prototypes using association rules mining, neural networks, genetic algorithms, dimensionality reduction, social tagging, semantic ontologies, pattern mining or different machine learning classifiers. RQ4: Classes of hybridization ----------------------------- To answer RQ4 we classified the examined hybrid RSs according to the taxonomy proposed by Burke. This taxonomy categorizes hybrid RSs in 7 classes based on the way the different recommendations techniques are aggregated with each other. Each class is explained in the subsections below where we discuss in more details few examples from the included papers. The results are summarized in Figure 6. ![Distribution of studies per hybridization class](./images/Classes3)Distribution of studies per hybridization class ### Weighted Weighted hybrids were the most frequent. They compute the scores of the items they recommend by aggregating the output scores of each recommendation technique using weighted linear functions. One of the first weighted recommenders was P-Tango which combined CF and CBF rating scores in a linear weighted way to recommend online newspapers. In P-Tango, aggregation was made giving equal initial weights to each score and then possibly adapting by the feedback of users. The weights of CF and CBF are set on a per-user basis enabling the system to determine the optimal mix for each user and alleviating the “gray sheep” problem. In the authors propose a weighting method for combining user-user, user-tag and user-item CF relations in social media. The method they propose computes the final rating score of an item for a user as the linear combination of the above three CF relations. Unlike the traditional CF, this weighted hybrid CF recommender is completely based on tags and does not require that users provide explicit rating scores for the items that are recommended (e.g., photos). An other example is where the authors combine a content-based model with a rule-based model to recommend e-learning materials. They build their CBF using an education domain ontology and compute the scores of each learning material using *Vector Space Model* and *TF-IDF*. The rule-based recommender utilizes the ontology and the user’s previously visited concepts to realize a semantic mapping between user’s query and his/her semantic profile, resulting in adequate term recommendations about learning materials. The two RS modules set different weights to each recommended item based on user’s preferences and higher accuracy is achieved. Apparently the benefit of a weighted hybrid is the fact that it uses a straightforward way to combine the results of each involved technique. It is also easy to adjust priority assignment for each involved strategy by changing the weights. This class of hybrid RS was used in 22 (28.9%) of the included studies. ### Feature combination This type of hybrid RSs treats one recommender’s output as additional feature data, and uses the other recommender (usually content-based which makes extensive use of item features) over the new extended data. In case of a CF-CBF hybrid, the system does not exclusively rely on the collaborative data output of CF. That output is considered as additional data for the CBF which generates the final list. This reduces the sensitivity to possible sparsity of the initial data. For example, in the authors present a CF-CBF book recommender which implements an extended feature combination strategy. In the first phase new features (prefered books) are generated by applying CF among the readers. In the second phase they utilize *fuzzy c-means clustering* and *type-2 fuzzy logic* to obtained data for creating book categories of each user type (teacher, researcher, student). In the third and final phase CBF is involved to recommend the most relevant books to each user. The authors report performance improvements both in MAE and F1 accuracy scores. Also in the authors build an information system about courses and study materials for scholars. The system invokes a web crawler to collect related web pages and classifies the obtained results in different item categories (websites, courses, academic activities) using a web page classifier supported by a school ontology. An information extractor is later invoked to get significant web page features. Finally the system operates on the extra features of each item category to produce integrated recommendations based on the order of the keyword weight of each item. System verification reports higher recommendation quality and reliability. Feature combination hybrids were found in 12 (15.8%) studies. ### Cascade Cascade hybrids are examples of a staged recommendation process. First one technique is employed to generate a coarse ranking of candidate items and than a second technique refines the list from the preliminary candidate set. Cascades are order-sensitive; A CF-CBF would certainly produce different results from a CBF-CF. An example is which presents a mobile music cascade recommender combining SVM genre classification with collaborative user personality diagnosis. The first level of the recommendation process consists of a multi-class SVM classifier of songs based on their genre. The second level is a personality diagnosis which assumes that user preferences for songs constitute a characterization of their underlying personality. The personality type of each user is assumed to be the vector of ratings in the items the user has seen. The personality diagnosis approach estimates the probability that each active user is of the same personality type as other users. As a result the probability that a active user will like new songs is computed in a more personalized way. In the authors combine two CF systems with different properties. The first module is responsible for retrieving the data and generating the list of neighbors for each user. This module uses two distance measures, Pearson’s coefficient and Euclidean distance in a switching way, depending on the user’s deviation from his/her average rating. The authors report that Euclidean distance performs better than Pearson’s coefficient in most of the cases. In the second module of the cascade, they experiment switching between three predictors to generate the final recommendations: Bayesian estimator, Pearson’s weighted sum and adjusted weighted sum. They also report that the Bayesian prediction gives best results. An other example of a cascade hybrid is. It implements a cascade of item-based CF and Sequential Pattern Mining (SPM) to recommend items in an e-learning environment. To adopt the CF to the e-learning domain they introduce a damping function which decreases the importance of “old” ratings. The SPM module takes in a list of k most similar items for each item and determines it support. At the end it prunes the items with support less than the threshold and generates the recommended items. The authors also apply this recommender in P2P learning environments for resource pre-fetching. Cascade hybrids were found in 8 (10.5%) studies. ### Switching In a switching hybrid the system switches between different recommendation techniques according to some criteria. For example, a CF-CBF approach can switch to the content-based recommender only when the collaborative strategy doesn’t provide enough credible recommendations. Even different versions of the same basic strategy (e.g., CBF1-CBF2 or CF1-CF2) can be integrated in a switching form. An example is DailyLearner, an online news recommender presented in. It first employs a short-term CBF recommender which considers the recently rated news stories utilizing *Nearest Neighbor* text classification and *Vector Space Model* with *TF-IDF* weights. If a new story has no near neighbors the system switches to the long-term model which is based on data collected over a longer time period, presenting user’s general preferences. It uses a Naive Bayes classifier to estimate the probability of news being important or not. In the authors build a switching hybrid RS that is based on a Naive Bayes classifier and Item-Item CF. The classifier is trained in offline phase and used to generate the recommendations. If this recommendations have poor confidence the Item-Item CF recommendations are used instead. First, they compute the posterior probability of each class generated by the Naive Bayes classifier. Then they assume that the classifier’s confidence is high if the posterior probability of the predicted class is sufficiently higher than the ones of the other classes. Movielens and Filmtrust are employed to evaluate the approach and performance improvements are reported, both in accuracy and in coverage. An other example of a switching hybrid is where the authors describe the design and implementation of a mobile locaton-aware CF-KBF recommender of touristic sites (e.g., restaurants). Their system involves both CF and KBF modules in generating recommendations. Then 3D-GIS location data are used to compute the physical distance of the mobile user from the recommended sites. The system switches from one recommendation strategy to the other and performs a distance-based re-ranking of the recommendations, choosing the sites that are physically closer to the user with higher accuracy. In most of the cases we see that complexity of switching RSs lies in the switching criteria which are mostly based on distance or similarity measures. However, this systems are sensitive to the strengths and weaknesses of the composing techniques. This hybrid RS category was found in 7 (9.2%) studies. ### Feature augmentation In this class of hybrids, one of the combined techniques is used to produce an item prediction or classification which is then comprised in the operation of the other recommendation technique. Feature augmentation hybrids are order-sensitive as the second technique is based on the output of the first. For example an association rules engine can generate for any item, similar items which can be used as augmented item attributes inside a second recommender to improve its recommendations. Libra presented in is a content-based book recommender. It augments the textual features of the books with “related authors” and “related titles” data obtained from Amazon CF recommender to obtain a better recommendation quality. Libra uses an inductive learner to create user profiles. This inductive learner is based on vectorized bag-of-words naive Bayes text classifier. The authors report that the integrated collaborative content has a significant positive effect on recommendation performance. presents a hybrid method which combines multidimensional clustering and CF to increase recommendation diversity. They first invoke multidimensional clustering to collect and cluster user and item data. Clusters with similar features are deleted and the remaining feature clusters are fed into the CF module. Item-Item similarity is computed using an adjusted cosine similarity which works for *m* cluster features of each item. Finally the rating predictions are computed base on item-item similarity and the rating deviations from neighbors. The authors report an increase in recommendation diversity with only minimal loss in accuracy. Feature augmentation offers a means of improving the performance of a system (in the above examples the second recommender) without the need to modify it. The extra functionality is added by augmenting the processed data. This hybrid RS class was used in 7 (9.2%) studies. ### Meta level Meta levels are also an example of order-sensitive hybrid RSs that use an entire model produced by the first technique as input for the second technique. It is typical to use content-based recommenders to build item representation models, and then employ this models in collaborative recommenders to match the items with user profiles. A meta level recommendation strategy was implemented by Fab, one of the first website recommenders. Fab uses a selection agent which based on *term vector model* accumulate user-specific feedback about areas of interest for each user. There are also two collection agents: search agents which perform a search for websites, and index agents which construct queries for already found websites to avoid duplicate work. Collection agents utilize the models of the users (collaborative component) to collect the most relevant websites which are then recommended to the users. Also presents a meta level recommender used in the domain of music which integrates CF with CBF. Here each user is stochastically matched with a music genre based on the collaborative output. Then the system generates a musical piece for the user based on the acoustic features. For the integration they adopt a probabilistic generative model called *three-way aspect model*. As this model is only used for textual analysis and indexing (bag-of-words representation) they propose the *bag-of-timbres* model, an interesting approach to content-based music recommendations which represents each musical piece as a set of polyphonic timbres. The advantage this hybridization class presents is that the learned model of the first technique is compressed and thus better used from the second. However, the integration effort is considerable and use of advanced constructs is often required. This hybrid RS class was found in 7 (9.2%) studies. ### Mixed Mixed hybrids represent the simplest form of hybridization and are reasonable when it is possible to put together a high number of different recommenders simultaneously. Here the generated item lists of each technique are added to produce a final list of recommended items. One of the first examples of mixed hybrids was PTV system which used CBF to relate similar programs to the user profile and CF to relate similar user profiles together. The CBF module converts each user profile in a feature-based representation they call *profile schema* which is basically a TV program content summary represented in features. The CF module computes the similarity of two users utilizing a graded difference metric of the ranked TV programs in each user’s profile. At the end, a selection of programs recommended by the two modules is suggested. Yet another example of recommending TV programs is a CF-CBF mixed hybrid named queveo.tv described in. Here the authors use demographic information such as age, gender and profession together with user’s history to build his/her profile which is used by the CBF module. This module makes use of Vector Space Model and cosine correlation to provide the recommended TV programs. The CF module uses both user-based CF to generate the top neighbors of the active user, and item-based CF to predict the level of interest of the user for a certain item. At the end the system takes recommendations from the two modules to generates the final list of TV programs. Those TV programs that were part of both listings (CBF and CF) are highlighted as *Star Recommendations*, as they are probably the most interesting for the user. Mixed hybrid RSs are simple and can eliminate acute problems like cold-start (new user or new item). They were found in 3 (3.9%) studies only. RQ5: Application domains ------------------------ A rich collection of 18 application domains was identified. Figure 7 presents the percentage of studies for each application domain. We see that most of the studies (21 or 27.6%) are domain independent. They haven’t been applied to a particular domain. Movie domain was considered by 17 (22.3%) studies. Next comes education or e-learning considered by 9 (11.8%) studies. ![Distribution of studies according to the application domains](./images/Domains2)Distribution of studies according to the application domains Six (7.8%) studies were applied in the domain of music. There were also web service RSs implemented in 5 (6.5%) studies. Other domains are images, touristic sites, TV programs, web pages and microposts which appeared in 2 (2.6%) studies each. Domains like business, food, news, bibliography, etc. categorized as “Other” count for less than 10.5% of the total number of studies. RQ6: Evaluation --------------- Another important aspect of hybrid RSs that we examined is the evaluation process. In this section we present results about the evaluation methodologies and the corresponding involved metrics (answering RQ6a), evaluated RS characteristics and the utilized metrics for each (answering RQ6b) and finally the public datasets used to train and test the algorithms (answering RQ6c). ### RQ6a: Evaluation Methodologies [ht] l c **Methodology** & **Studies** [0.5ex] Comparison with similar method & 58 User survey & 14 Comparison and user survey & 3 No evaluation & 1 Here we try to explain how (with what methodologies) the evaluation process is performed and what metrics are involved in each methodology. Table 9 lists the distribution of studies according to the methodology they use to perform the evaluation. There are 58 (more than three-quarters) studies comparing the proposed system (or solution) with a similar well known method or technique. Usually CF-X or CF-CBF hybrid RSs are compared with pure CF or CBF. In some cases the proposed system is compared with different parameter configurations of itself. Accuracy or error measures like MAE (Mean Average Error) or RMSE (Root Mean Square Error) are very common. They estimate the divergence of the RS predictions from the actual ratings. Decision support metrics like Precision, Recall and F1 are also very frequent. Precision is the percentage of selected items that are relevant. Recall is the percentage of relevant items that are recommended. F1 is the harmonic mean of the two. User surveys are the other evaluation methodology utilized in 14 studies. They mainly perform subjective quality assessment of the RS and require the involvement of users who provide feedback for their perception about the system. Surveys are usually question based and reflect the opinion of users about different aspects of the hybrid recommender. An example of user surveys is where the participants were 30 high school students. In the users of the survey are customers of a web retail store who rated products they purchased. In a mix of real and simulated users are used to rate movies, books, etc. In total user surveys were conducted in 14 studies. Both comparisons and surveys are used in 3 studies: where the participants were 17 males along with 15 females and different versions of the system were compared with each-other, where the system was compared with CF using Movielens and the survey involved 132 participants, and where online user profiles were utilized for the survey, and the proposed fuzzy hybrid book RS was compared with traditional CF. The only study with no evaluation at all was. Here the authors present a personalized hybrid recommendation framework which integrates trust-based filtering with multi-criteria CF. This framework is specifically designed for various Government-to-Business e-service recommendations. The authors leave the evaluation of their framework as a future work. ### RQ6b: Characteristics and metrics In order to address RQ6b we analyzed the recommendation characteristics the authors evaluate, and what metrics they utilize. Five characteristics were identified, listed in Table 10. The top characteristic is accuracy measured in 62 studies. It is followed by user satisfaction, a subjective characteristic assessed in 10 studies. Diversity is about having different list of recommended items each time the user interacts with the system. In total it was measured in 7 studies. Computational complexity of the RS is measured in 6 studies. Novelty and serendipity express the capability of the hybrid RS to recommend new or even unexpected but still relevant items to the user. They were measured in 4 studies. [ht] l c **Recommendation characteristic** & **Studies** [0.5ex] Accuracy & 62 User satisfaction & 10 Diversity & 7 Computational complexity & 6 Novelty-Serendipity & 4 We also observed the metrics that authors use for each evaluated characteristic, summarized in Table 11. Accuracy is mostly measured by means of precision (31 studies), recall (23) and F1 (14). MAE and RMSE were found in 27 and 6 studies correspondingly. Other less frequent metrics used to evaluate accuracy include MSE (Mean Squared Error), nDCG (normalized Discounted Cumulative Gain), AUC, etc. They were found in 15 studies. As previously mentioned user satisfaction is measured by means of user surveys which were found in 10 studies. They usually consist of polls which aim to get the opinion of the users about different recommendation aspects of the system. Diversity is measured mostly by coverage which was found in 4 studies. In the other cases it is measured using ranking distances (3 studies). Execution time is the time it takes for the system to provide the recommendations and is a measure of the computational complexity. It was found in 6 studies. Novelty and Serendipity are measured by less known metrics such as Surprisal, Coverage in Long-Tail or Expected Popularity Complement. [ht] | | | | | --- | --- | --- | | **Characteristic** | **Metrics** | **Studies** | | [0.5ex] Accuracy | Precision | 31 | | | MAE | 27 | | | Recall | 23 | | | F1 | 14 | | | RMSE | 6 | | | Other | 15 | | User satisfaction | Qualitative Subjective Assessment | 10 | | Diversity | Coverage | 4 | | | Ranking distances | 3 | | Complexity | Execution time | 6 | | Novelty-Serendipity | Surprisal | 2 | | | Coverage in Long-Tail | 1 | | | Expected Popularity Complement | 1 | ### RQ6c: Datasets We also kept track of the public datasets used by the authors to evaluate their hybrid RSs. These datasets are used by the scientific community to replicate experiments and validate or improve their techniques. ![Distribution of studies according to the datasets they use for evaluation](./images/Datasets2)Distribution of studies according to the datasets they use for evaluation There are 55 studies that use at least one public dataset. Sometimes a study uses more than one dataset. On the other hand 21 studies do not use any dataset. Sometimes they use synthetic data or rely on user surveys or other techniques. In Figure 8 we present the datasets that were used and the number of studies in which they appear. **MovieLens** [5](#fn5) used in 26 studies, is one of the most popular public datasets used in the field of RSs. It was collected and made available by GroupLens[6](#fn6) which is still maintaining it. **EachMovie** is also a movie dataset used in 6 studies. Even though it is now retired, it was the original basis for MovieLens and has been extensively used by the RS community. **FilmTrust** is a movie dataset and a recommendation website that uses the concept of trust to recommend movies. It is smaller in size compared to the other movie datasets but it has the advantage of being more recent in content. FilmTrust was used in 5 studies. **Yahoo-Movie** is a dataset containing a subset of Yahoo Movie community preferences for movies. It also contains descriptive information about many movies released prior to November 2003. Yahoo-Movie was used in 3 studies. **Last.fm** [7](#fn7) is a music dataset crawled by last.fm website. It contains information about some of the users’ attributes, their track preferences and the artists. Last.fm was used in 3 studies. **Tripadvisor** is a dataset consisting of hotel and site reviews crawled by tripadvisor website. It is especially used to provide touristic recommendations to mobile users. Tripadvisor was used in 2 studies. **Delicious** [8](#fn8) is a dataset containing website bookmarks and tags of the form (user, tag, bookmark) shared by many users within the network. Delicious dataset was used in 2 studies. **Other** less popular datasets containing different type of recommendable items were found in 16 studies. RQ7: Future work ---------------- The last research question has to do with future work opportunities and directions. Our findings are summarized in Table 12 and shortly explained below: **Extend the proposed solution** It is a common suggestion stated by many authors. They often identify and suggest several additional parts or components which could be aggregated to the system to improve the performance, extend the functionalities, etc. It is suggested in 14 (18.4%) studies. **Perform better evaluation** It is difficult to evaluate recommender systems. The hard part is to find the most appropriate techniques or algorithms that can be used as benchmark. Performing a good evaluation of the proposed system increases its value and credibility. This suggestion appears in 11 (14.4%) studies. **Add context to recommendations** The authors suggest to make more use of contextual (location, time of day, etc.) data which are revealed by mobile users. It appears in 8 (10.5%) studies. **Consider other application domains** Some of the studies apply their contributions in a certain domain. Different authors target alternative domains or propose domain independent contributions. Considering other domains was suggested in 7 (9.2%) studies. **Use more data or item features** Some authors plan to use more data for training their algorithms or plan to extract and use more features of the recommended items. This has been stated in 7 (9.2%) studies. **Experiment with more or different algorithms** Some authors suggest to combine different recommendation or data mining algorithms and see the results they can obtain. Sometimes they suggest to use alternative similarity measures also. This has been suggested in 6 (7.9%) studies. **Try other hybridization class** Although it is not always possible, combining the applied techniques in another way could bring better results. Trying another hybridization class appeared in 5 (6.5%) studies. **Other** Other future work suggestions include applying hybrid RSs in less frequent domains or contexts, making more personalized recommendations, reducing the computational cost of the solution, improving other recommendation quality criteria (besides accuracy) like diversity or serendipity, etc. [ht] l c Future work & Studies [0.5ex] Extend the proposed solution & 14 Perform better evaluation & 11 Other & 9 Add context to recommendations & 8 Consider other application domains & 7 Use more data or item features & 7 Experiment with more or different algorithms & 6 Try other hybrid recommendation class & 5 Discussion ========== The main issues covered in this work are presented in the schematic model of Figure 9. The issues are associated with the research question they belong to. In this section we discuss the obtained results for each research question. ![RQs and higher-order themes](./images/HRS2)RQs and higher-order themes Selected studies ---------------- The quality evaluation results of the selected studies are presented in Figure 3 and Figure 4. These results indicate that journal studies have lower spread and slightly higher quality score than conference studies. The authors in, a systematic review work about linked data-based recommender systems, report similar results. Regarding the publication year of the selected studies, we see in Figure 2 a steady increase in hybrid RS publications. More than 76% of the included papers were published in the second half (from 2010 later on) of the 10 years time period. This high number of recent publications suggest that hybrid RSs are still a hot topic. As mentioned in introduction, similar increased academic interest in RSs is also reported by other surveys like or. Some factors that have boosted the publications and development of RSs are probably the Netflix Prize[9](#fn9) (2006-2009) and the boom of social networks. Problems and challanges ----------------------- Cold-start was the most acute problem that was found. CF RSs are the most affected by cold-start as they generate recommendations relying on ratings only. Hybrid RSs try to overcome the lack of ratings by combining CF or other recommendation techniques with association rule mining or other mathematical constructs which extract and use features from items. Data sparsity is also a very frequent problem in the field of RSs. It represents a recommendation quality degradation due to the insufficient number of ratings. Hybrid approaches try to solve it by combining several matrix manipulation techniques with the basic recommendation strategies. They also try to make more use of item features, item reviews, user demographic data or other known user characteristics. Accuracy has been the top desired characteristic of RSs since their dawn, as it directly influences user satisfaction. Improving recommendation accuracy is a problem that is mostly addressed by using parallel (i.e. in a weighted or switching hybrid classes) recommendation techniques. Scalability is also an important problem which is frequently found in association with data sparsity (appear together in 9 studies). Lack of diversity is a problem that has been addressed in few studies. As explained in diversity is frequently in contradiction with accuracy. Authors usually attain higher diversity by tolerable relaxations in accuracy. In general we see that hybrid RSs try to solve the most acute problems that RSs face. In Table 13 we summarize some typical solutions about each problem with examples from papers discussed in sections 3.2 - 3.5. [ht] p1.3cm p8.4cm p2.4cm **Problems** & **Possible Solutions** & **References** Cold-Start & Use association rule mining on item or user data to find relations which can compensate the lack of ratings. Mathematical constructs for feature extraction and combination of different strategies can also be used. & ,,,, [0.01ex] Sparsity & Use the few existing ratings or certain item features to generate extra pseudo ratings. Experiment with Matrix Factorization or Dimensionality Reduction. &,,, [0.01ex] Accuracy & Use Fuzzy Logic or Fuzzy Clustering in association with CF. Try putting together CF with CBF using Probabilistic Models, Bayesian Networks or other mathematical constructs. &,,,,, [0.01ex] Scalability & Try to compress or reduce the datasets with Clustering or different measures of similarity. &,, [0.01ex] Diversity & Try modifying neighborhood creation by relaxing similarity (possible loss in accuracy) or use the concept of experts for certain item tastes. &,, Techniques and combinations --------------------------- As shown in Table 7, K-NN is the most popular DM technique among hybrid RSs. This result highlights the fact that K-NN CF is one of the most successful and widespread RSs. Clustering techniques are also commonly used. There are different types of clustering algorithms with *K-means* being the most popular. Clustering as a process is mostly involved in preliminary phases to identify similar users, similar items, similar item features, etc. Association rules are also used to identify frequent relations between users and items. Fuzzy logic and matrix manipulation methods are also incorporated in hybrid RSs. In most of the cases authors combine 2 recommendation strategies. In few cases event 3 are involved. CF-CBF is the most popular combination, commonly associated with recurrent problems like data sparsity, cold-start and accuracy. CF-CBF-X is also common. Here CF and CBF are combined together and reinforced by a third technique. In CF-X combinations, X is usually integrated in CF to improve its performance and usually represents fuzzy logic (reclusive methods are complementary to collaborative methods) or clustering. IICF-UUCF is also popular as it represents the combination of two basic version of CF. In conclusion, as can be inferred from Table 8, the most common recommendation techniques (with CF been the most popular) are combined to solve the typical problems which are cold-start, data sparsity and accuracy. Actually it is not a surprise that CF combines with almost any other recommendation technique. Other surveys report similar results. In the authors present a broad survey about CF techniques. They also conclude that most of hybrid CF recommenders use CF methods in combination with content-based methods (CF-CBF is also the most frequent combination we found) or other methods to fix problems of either recommendation technique and to improve recommendation performance. CBF-X addresses problems like data sparsity, accuracy and scalability. Other combinations put together techniques like Bayesian methods, demographic filtering, neural networks, regression, association rules mining or genetic algorithms. It is important to note that in some cases hybrid RSs are not built by combining different recommendation techniques. In those cases they represent combinations of different data sources, item or user representations, etc. embedded in a single RS. For this reason the number of the reported combinations is smaller than the number of total primary studies we analyzed. Hybridization classes --------------------- Regarding the hybrid classes, weighted hybrid is the most popular. It often combines CF and CBF recommendations in a dynamic way (weights change over time). Feature combination is the second, putting together data from two or more sources. Cascade, switching, feature augmentation and meta-level have almost equal frequency of appearance whereas mixed hybrid is the least common class. There is also a last category we denoted as “Other” which includes 13.2% of the studies. It was not possible for us to identify a hybridization class of this recommenders based on Burke’s taxonomy (which might also need to be extended). In some studies hybrid RSs are not combinations of two or more recommendation strategies in a certain way. They put together different data sources and item or user representations in a single strategy. In this sense, the “Other” category means “we don’t know”. Various mathematical constructs are used as “gluing” methods between the different components of the systems based on the hybridization class. Weighted, Mixed, Switching and Feature Combination are order-insensitive; there is no difference between a switching CF-CBF and a switching CBF-CF. In this sense these 4 classes are easier to concatenate compared to Cascade, Feature Augmentation and meta-level which are inherently ordered. The few mixed systems do not need the “glue” at all as their components generate recommendations independently from each other. Our results indicate that Weighted hybrids usually rely on weighted linear functions with static or dynamic weights which are updated based on the user feedback. Switching hybrids usually rely on distance/similarity measures such as Euclidean distance, Pearson correlation, Cosine similarity, etc. to decide which of the components to activate in a certain time. Feature combinations usually involve fuzzy logic to match the features obtained by one module with those of the other module. Feature augmentation, Cascade and Meta-level hybrids rely on even more complex and advanced mathematical frameworks such as probabilistic modeling, Bayesian networks, etc. Application domains ------------------- A rich set of application domains was found as shown in Figure 7. Many of the studies are domain independent (more than a quarter). They are not limited to any particular domain and the methods or algorithms they present can be applied in different domains with minor or no changes at all. Movies are obviously the most recommended items. It is somehow because of the large amount of public and freely accessible user feedback about movie preferences (i.e. many public movie datasets on the web[10](#fn10)) which are highly helpful. There is also a rich set of algorithms and solutions (Netflix $1M challenge was a big motivation to improve movie recommenders). This allows researchers to train and test their recommendation algorithms easily. Education or e-learning is another domain in which hybrid RSs are gaining popularity. The amount of educational material on the web has been increasing dramatically in the last years and MOOCs (Massive Open Online Course) are becoming very popular. Other somehow popular domains are music and web services. More detailed information about the application domains of recommender systems can be found at where the authors illustrate each application domain category with real RS applications found in the web. Evaluation ---------- Evaluation of Recommender Systems is an essential phase which helps in choosing the right algorithm in a certain context and for a certain problem. However, as explained in, evaluating recommender systems is not an easy task. Certain algorithms may perform better or worse in different datasets and it is not easy to decide what metrics to combine when performing comparative evaluations. With the three research questions about evaluation, we addressed different aspects of this delicate process. Based on our results most of the studies evaluate hybrid RSs by comparing them with similar methods. The experiments which are usually offline utilize accuracy or error metrics like MAE or RMSE and information retrieval metrics like precision, recall and F1. Similar results are reported in where offline evaluations that typically measure accuracy are dominant. User surveys are less popular, using subjective quality assessments and occasionally precision or recall. These kind of experiments are mostly online (i.e. users interacting with the system and answering questions) and offer more direct and credible evaluation conclusions. From the results, we see that researchers find it easier to compare their system with other systems using public data rather than to perform massive user surveys for a more subjective and qualitative evaluation. Regarding RS characteristics, accuracy results to be the most commonly evaluated characteristic of the hybrid RSs. This is partly because it is easy to represent and compute it by means of various measures that exist. The most frequent metrics used to evaluate accuracy are Precision, Recall and MAE. User satisfaction (subjective recommendation quality) comes second. It is evaluated by means of user surveys. There is a lot of discussion in the literature about recommendation diversity. In the authors conclude that the user’s overall liking of recommendations goes beyond accuracy and involves other factors like diversity. On the other hand, in the authors agree that increasing diversity in recommendations comes with a cost in accuracy. Our results show that diversity is still less frequently evaluated. Actually most of the studies that try to provide diversity do it by conceding accuracy. In the authors explore the use of serendipity and coverage as both characteristics and quality measures of RSs. They suggest that serendipity and coverage are designed to account for the quality and usefulness of the recommendations better than accuracy does. In our results serendipity is rarely evaluated. It is important to note that the difference between recommendation characteristics and evaluation metrics is sometimes subtle. This is the case for coverage. Is coverage a recommendation characteristic, a recommendation metric or both? In some works like and coverage is considered as both a characteristic and metric. As a characteristic it reflects the usefulness of the system. The higher the coverage (more items predicted and recommended) the more useful the recommender system for the users. In other works like it is only considered as a metric with which the authors evaluate diversity, another recommendation characteristic. In the studies we considered for this review coverage is both considered as a metric for estimating the diversity and as a recommendation characteristic of the systems. Few studies we analyzed evaluate the computational complexity of the systems they propose by measuring the execution time. Besides the new trends, the results indicate that accuracy is still the most frequently evaluated characteristic. We also considered the public datasets used to perform the evaluation. With the exponential growth of the web content there are more and more public data and datasets which can be used to train and test new algorithms. These datasets usually come from highly visited web portals or services and represent user preferences about things like movies, music, news, books, etc. In we present the characteristics of some of the most popular public datasets and the types of RSs they can be used for. It is convenient to exploit them for evaluating novel algorithms or recommendation techniques in offline experiments. The evaluation process steps are clearly explained in. The result of this review indicate that movie datasets led by Movielens are very popular being used in more than 72% of the studies. This is somehow related with the fact that movie domain is also highly preferred. Many authors chose to experiment in the domain of movies to easily evaluate their prototypes. Music, web services, tourism, images datasets, etc. make up the rest of the datasets the studies use. Future work ----------- With RQ7 we tried to uncover the most important future work directions in hybrid recommender systems. Extending or improving the proposed solution is the most common future work the authors intend to undertake. Extension of the proposed solutions comes in diverse forms like *(i)* extend by applying more algorithms, *(ii)* extend the personalization level by adapting more to the user context and profile, *(iii)* extend by using more datasets or item features, etc. Performing a comprehensive evaluation is something in which many studies fail. This is why some authors present it as a future work. It usually happens in the cases when the authors implement their algorithm or method in a prototype. In these cases comparison with similar methods using accuracy metrics does not provide clear insights about recommendation or system quality. Reinforcing with subjective user feedback may be the best way to optimize evaluation of the system, making it more user oriented. A highly desired characteristic from RSs is adapting to the user interest shifting or evolving over time, especially as a results of rapid context changes. As a result, different authors suggest to add context to their systems or to analyze different criteria of items or users as ways to improve the recommendation quality. Context-Aware Recommender Systems (CARS) and Multi-Criteria Recommender Systems (MCRS) are relatively new approaches which are gaining popularity in the field of RSs. They are promoted by the increased use of mobile devices which reveal user details (i.e. the location) that can be used as important contextual inputs. Combining context and multiple criteria with other hybrid recommendation techniques could be a good direction in which to experiment. Considering other application domains in which hybrid RSs could be applied is also stated by some authors. Many of the works were domain independent and can be easily adapted to different recommendation domains. One step further could be to have hybrid RSs recommend items from different (changing) domains and implement the so called cross domain recommender systems. Having found the best movie for the weekend, the user may also want to find the corresponding soundtrack or the book in which the movie may be based on. Cross-domain RSs are an emerging research topic. Different recommendation strategies like CF and CBF could be specialized in different domains of interest and then joined together in a weighted, switching, mixed or other hybrid cross-domain RS which would recommend different items to its users. Combining more data from different sources or with various item features was a way to create hybrid RSs. Using more data is a common trend not only in recommender systems but in similar disciplines as well. However, having and using big volumes of data requires scaling in computations. One way to achieve this high scalability is by parallelizing the algorithms following *MapReduce* model which could be a future direction as suggested in. Experimenting with other hybrid recommendation classes is also possible in many cases. The results indicate that some hybrid classes are rarely explored (i.e. mixed hybrid appears in 3 studies only). It could be a good idea to experiment building CF-CBF, CF-CBF, CF-KBF or other types of mixed hybrids and observe what characteristics this systems could provide. Other future work suggestions include increasing personalization and reducing the computational cost of the system. Conclusions =========== In this review work we analyzed 76 primary studies from journals and conference proceedings which address hybrid RSs. We tried to identify the most acute problems they solve to provide better recommendations. We also analyzed the data mining and machine learning techniques they use, the recommendation strategies they combine, hybridization classes they belong to, application domains and dataset, evaluation process, and possible future work directions. With regard to the research problems cold-start, data sparsity and accuracy are the most recurrent problems for which hybrid approaches are explored. The authors typically use association rules mining in combination with traditional recommendation strategies to find user-item relations and compensate the lack of ratings in cold-start situations. We also found that matrix factorization techniques help to compress the existing sparse ratings and attain acceptable accuracy. It was also typical to find studies in which collaborative filtering was combined with other techniques such as fuzzy logic attempting to alleviate cold-start or data sparsity and at the same time provide good recommendation accuracy. We also presented a classification of the included studies based on the different DM/ML techniques they utilize to build the systems and their recommendation technique combinations. K-NN classifier which is commonly used to construct the neighborhood in collaborative RSs, was the most popular among the data mining technique. On the other hand, CF was the most commonly used recommendation strategy, frequently combined with each of the other strategies attempting to solve any kind of problem. We identified and classified the different hybridization approaches relying in the taxonomy proposed by Burke and found that the weighted hybrid is the most recurrent, obviously because of the simplicity and dynamicity it offers. Other hybridization classes such as meta level or feature augmentation are rare as they need complicated mathematical constructs to aggregate the results of the different recommenders they combine. Concerning evaluation, accuracy is still considered the most important characteristic. The authors predominantly use comparisons with similar methods and involve error or prediction metrics in the evaluation process. This evaluation methodology is “hermetic” and often not credible. User satisfaction is commonly evaluated with subjective data feedback from surveys which are user oriented, more credible and thus highly suggested. Additionally, computational complexity was found in few cases. We also investigated what public datasets are typically used to perform evaluation of the hybrid systems. Based on our findings movie datasets led by Movielens are the most popular, facilitating the evaluation process. Moreover movie domain was the most preferred for prototyping, among the numerous that were identified. More than three-quarters of our included studies were published in the last five years. This high and growing number of recent publications in the field lets us believe that hybrid RSs are a hot and interesting topic. Our findings indicate that future works could be focused in context awareness of recommendations and models with which to formalize and aggregate severals contextual factors inside a hybrid recommender. Such RSs could be able to respond to quick shifts of user interest with high accuracy. We also found that there are many combinations of recommendation techniques or hybridization classes which are not explored. Thus they represent a good basis for future experimentations in the field. Using more data was another possible work direction we found. In the epoch of big data, processing more or larger dataset (as even more become available) with hybrid parallel algorithms could be a good way to alleviate the problem of scalability and also provide better recommendation quality. Other future work direction could be using hybrid RSs to build cross domain recommenders or improve the computation complexity of the existing techniques. Acknowledgments =============== This work was supported by a fellowship from TIM[11](#fn11). Appendix A. Selected Papers =========================== [1] > p#1 l L2cm l L3.8cm L0.7cm L3.8cm P & Authors & Year & Title & Source & Publication details   – *Continued from previous page* P & Authors & Year & Title & Source & Publication details & Wang, J.; De Vries, P. A.; Reinders, J. T. M.; & 2006 & Unifying User-based and Item-based Collaborative Filtering Approaches by Similarity Fusion & ACM & 29th Annual International ACM SIGIR Conference on Research & Development on Information Retrieval, Seattle 2006 [1ex] & Gunawardana, A.; Meek, C.; & 2008 & Tied Boltzmann Machines for Cold Start Recommendations & ACM & 2nd ACM Conference on Recommender Systems, Lousanne, Switzerland, 23rd-25th October 2008 [1ex] P3 & Gunawardana, A.; Meek, C.; & 2009 & A Unified Approach to Building Hybrid Recommender Systems & ACM & 3rd ACM Conference on Recommender Systems, New York, October 23-25, 2009 [1ex] P4 & Park, S. T.; Chu, W.; & 2009 & Pairwise Preference Regression for Cold-start Recommendation & ACM & 3rd ACM Conference on Recommender Systems, New York, October 23-25, 2009 [1ex] & Ghazanfar, M. A.; Prugel-Bennett, A.; & 2010 & An Improved Switching Hybrid Recommender System Using Naive Bayes Classifier and Collaborative Filtering & ACM & Proceedings of the International MultiConference of Engineers and Computer Scientists 2010, Vol I, Hong Kong, March 17-19, 2010 [1ex] & Zhuhadar, L.; Nasraoui, O.; & 2010 & An Improved Switching Hybrid Recommender System Using Naive Bayes Classifier and Collaborative Filtering & ACM & Proceedings of the International MultiConference of Engineers and Computer Scientists 2010, Vol I, Hong Kong, March 17-19, 2010 & Hwang, C. S.; & 2010 & Genetic Algorithms for Feature Weighting in Multi-criteria Recommender Systems & ACM & Journal of Convergence Information Technology, Vol. 5, N. 8, October 2010 & Liu, L.; Mehandjiev, N.; Xu, D. L.; & 2011 & Multi-Criteria Service Recommendation Based on User Criteria Preferences & ACM & 5th ACM Conference on Recommender Systems, Chicago, Oct 23rd-27th 2011 P9 & Bostandjiev, S.; O’Donovan, J.; Höllerer, T.; & 2012 & TasteWeights: A Visual Interactive Hybrid Recommender System & ACM & 6th ACM Conference on Recommender Systems, Dublin, Sep. 9th-13th, 2012 & Stanescu, A.; Nagar, S.; Caragea, D.; & 2013 & A Hybrid Recommender System: User Profiling from Keywords and Ratings & ACM & A Hybrid Recommender System: User Profiling from Keywords and Ratings [1ex] & Hornung, T.; Ziegler, C. N.; Franz, S.; & 2013 & Evaluating Hybrid Music Recommender Systems & ACM & 2013 IEEE/WIC/ACM International Conferences on Web Intelligence (WI) and Intelligent Agent Technology (IAT) [1ex] & Said, A.; Fields, B.; Jain, B. J.; & 2013 & User-Centric Evaluation of a K-Furthest Neighbor Collaborative Filtering Recommender Algorithm & ACM & The 16th ACM Conference on Computer Supported Cooperative Work and Social Computing, Texas, Feb. 2013 [1ex] & Hu, L.; Cao, J.; Xu, G.; Cao, L.; Gu, Z.; Zhu, C.; & 2013 & Personalized Recommendation via Cross-Domain Triadic Factorization & Scopus & 22nd ACM International WWW Conference, May 2013, Brasil [1ex] & Christensen, I.; Schiaffino, S.; & 2014 & A Hybrid Approach for Group Profiling in Recommender Systems & ACM & Journal of Universal Computer Science, vol. 20, no. 4, 2014 [1ex] P15 & Garden, M.; Dudek, G.; & 2005 & Semantic feedback for hybrid recommendations in Recommendz & IEEE & IEEE 2005 International Conference on e-Technology, e-Commerce and e-Service [1ex] P16 & Bezerra, B. L. D.; Carvalho, F. T.; Filho, V. M.; & 2006 & C2 :: A Collaborative Recommendation System Based on Modal Symbolic User Profile & IEEE & Proceedings of the 2006 IEEE/WIC/ACM International Conference on Web Intelligence [1ex] P17 & Ren, L.; He, L.; Gu, J.; Xia, W.; Wu, F.; & 2008 & A Hybrid Recommender Approach Based on Widrow-Hoff Learning & IEEE & IEEE 2008 Second International Conference on Future Generation Communication and Networking [1ex] P18 & Godoy, D.; Amandi, A.; & 2008 & Hybrid Content and Tag-based Profiles for Recommendation in Collaborative Tagging Systems & IEEE & IEEE 2008 Latin American Web Conference [1ex] P19 & Aimeur, E.; Brassard, G.; Fernandez, J. M.; Onana, F. S. M.; Rakowski, Z.; & 2008 & Experimental Demonstration of a Hybrid Privacy-Preserving Recommender System & IEEE & The Third International Conference on Availability, Reliability and Security, IEEE 2008 [1ex] & Yoshii, K.; Goto, M.; Komatani, K.; Ogata, T.; Okuno, H. G.; & 2008 & An Efficient Hybrid Music Recommender System Using an Incrementally Trainable Probabilistic Generative Model & IEEE & IEEE TRANSACTIONS ON AUDIO, SPEECH, AND LANGUAGE PROCESSING, VOL. 16, NO. 2, FEBRUARY 2008 [1ex] & Maneeroj, S.; Takasu, A.; & 2009 & Hybrid Recommender System Using Latent Features & IEEE & IEEE 2009 International Conference on Advanced Information Networking and Applications [1ex] P22 & Meller, T.; Wang, E.; Lin, F.; Yang, C.; & 2009 & New Classification Algorithms for Developing Online Program Recommendation Systems & IEEE & IEEE 2009 International Conference on Mobile, Hybrid, and On-line Learning [1ex] & Shambour, Q.; Lu, J.; & 2010 & A Framework of Hybrid Recommendation System for Government-to-Business Personalized e-Services & IEEE & IEEE 2010 Seventh International Conference on Information Technology [1ex] & Deng, Y.; Wu, Z.; Tang, C.; Si, H.; Xiong, H.; Chen, Z.; & 2010 & A Hybrid Movie Recommender Based on Ontology and Neural Networks & IEEE & A Hybrid Movie Recommender Based on Ontology and Neural Networks [1ex] & Yang, S. Y.; Hsu, C. L.; & 2010 & A New Ontology-Supported and Hybrid Recommending Information System for Scholars & Scopus & 13th International Conference on Network-Based Information Systems [1ex] & Basiri, J.; Shakery, A.; Moshiri, B.; Hayat, M.; & 2010 & Alleviating the Cold-Start Problem of Recommender Systems Using a New Hybrid Approach & IEEE & IEEE 2010 5th International Symposium on Telecommunications (IST’2010) [1ex] & Valdez, M. G.; Alanis, A.; Parra, B.; & 2010 & Fuzzy Inference for Learning Object Recommendation & IEEE & IEEE 2010 International Conference on Fuzzy Systems [1ex] & Choi, S. H.; Jeong, Y. S.; Jeong, M. K.; & 2010 & A Hybrid Recommendation Method with Reduced Data for Large-Scale Application & IEEE & IEEE Transactions on systems, man and cybernetics - Part C: Applicatios and Reviews, VOL. 40, NO. 5, September 2010 [1ex] & Ghazanfar, M. A.; Prugel-Bennett, A.; & 2010 & Building Switching Hybrid Recommender System Using Machine Learning Classifiers and Collaborative Filtering & IEEE & IEEE IAENG International Journal of Computer Science, 37:3, IJCS\_37\_3\_09 [1ex] P30 & Castro-Herrera, C.; & 2010 & A Hybrid Recommender System for Finding Relevant Users in Open Source Forums & Scopus & IEEE 3rd International Conference on Managing Requirements Knowledge, Sept. 2010 [1ex] P31 & Tath, I.; Biturk, A.; & 2011 & A Tag-based Hybrid Music Recommendation System Using Semantic Relations and Multi-domain Information & IEEE & 11th IEEE International Conference on Data Mining Workshops, Dec. 2011 [1ex] P32 & Kohi, A.; Ebrahimi, S. J.; Jalali, M.; & 2011 & Improving the Accuracy and Efficiency of Tag Recommendation System by Applying Hybrid Methods & IEEE & IEEE 1st International eConference on Computer and Knowledge Engineering (ICCKE), October 13-14, 2011 [1ex] P33 & Kohi, A.; Ebrahimi, S. J.; Jalali, M.; & 2011 & Improving the Accuracy and Efficiency of Tag Recommendation System by Applying Hybrid Methods & IEEE & IEEE 1st International eConference on Computer and Knowledge Engineering (ICCKE), October 13-14, 2011 [1ex] & Fenza, G.; Fischetti, E.; Furno, D.; Loia, V.; & 2011 & A hybrid context aware system for tourist guidance based on collaborative filtering & Scopus & 2011 IEEE International Conference on Fuzzy Systems, June 27-30, 2011, Taipei, Taiwan [1ex] P35 & Shambour, Q.; Lu, J.; & 2011 & A Hybrid Multi-Criteria Semantic-enhanced Collaborative Filtering Approach for Personalized Recommendations & IEEE & 2011 IEEE/WIC/ACM International Conferences on Web Intelligence and Intelligent Agent Technology [1ex] & Li, X.; Murata, T.; & 2012 & Multidimensional Clustering Based Collaborative Filtering Approach for Diversified Recommendation & IEEE & The 7th International Conference on Computer Science & Education July 14-17, 2012. Melbourne, Australia [1ex] & Shahriyary, S.; Aghabab, M. P.; & 2013 & Recommender systems on web service selection problems using a new hybrid approach & IEEE & IEEE 4th International Conference on Computer and Knowledge Engineering, 2014 [1ex] & Yu, C. C.; Yamaguchi, T.; Takama, Y.; & 2013 & A Hybrid Recommender System based Non-common Items in Social Media & IEEE & IEEE International Joint Conference on Awareness Science and Technology and Ubi-Media Computing, 2013 [1ex] P39 & Buncle, J.; Anane, R.; Nakayama, M.; & 2013 & A Recommendation Cascade for e-learning & IEEE & 2013 IEEE 27th International Conference on Advanced Information Networking and Applications [1ex] & Bedi, P.; Vashisth, P.; Khurana, P.; & 2013 & Modeling User Preferences in a Hybrid Recommender System using Type-2 Fuzzy Sets & Scopus & IEEE International Conference on Fuzzy Systems, July 2013 [1ex] & Andrade, M. T.; Almeida, F.; & 2013 & Novel Hybrid Approach to Content Recommendation based on Predicted Profiles & IEEE & 2013 IEEE 10th International Conference on Ubiquitous Intelligence & Computing [1ex] P42 & Yao, L.; Sheng, Q. Z.; Segev, A.; Yu, J.; & 2013 & Recommending Web Services via Combining Collaborative Filtering with Content-based Features & IEEE & 2013 IEEE 20th International Conference on Web Services [1ex] P43 & Luo, Y.; Xu, B.; Cai, H.; Bu, F.; & 2014 & A Hybrid User Profile Model for Personalized Recommender System with Linked Open Data & IEEE & IEEE 2014 Second International Conference on Enterprise Systems [1ex] & Sharif, M. A.; Raghavan, V. V.; & 2014 & A Clustering Based Scalable Hybrid Approach for Web Page & IEEE & 2014 IEEE International Conference on Big Data [1ex] P45 & Xu, S.; Watada, J.; & 2014 & A Method for Hybrid Personalized Recommender based on Clustering of Fuzzy User Profiles & IEEE & IEEE International Conference on Fuzzy Systems (FUZZ-IEEE) July 6-11, 2014, Beijing, China [1ex] & Lee, K.; Lee, K.; & 2014 & Using Dynamically Promoted Experts for Music Recommendation & IEEE & IEEE Transactions on Multimedia, VOL. 16, NO. 5, August 2014 [1ex] & Chughtai, M. W.; Selamat, A.; Ghani, I.; Jung, J. J.; & 2014 & E-Learning Recommender Systems Based on Goal-Based Hybrid Filtering & IEEE & International Journal of Distributed Sensor Networks Volume 2014pages [1ex] P48 & Li, Y.; Lu, L.; Xufeng, L. & 2005 & A hybrid collaborative filtering method for multiple-interests and multiple-content recommendation in E-Commerce & Science Direct & Expert Systems with Applications 28 (2005) 67–77 [1ex] & Kunaver, M.; Pozrl, T.; Pogacnik, M.; Tasic, J.; & 2007 & Optimisation of combined collaborative recommender systems & Science Direct & International Journal of Electronics and Communications (AEU), 2007, 433-443 [1ex] P50 & Albadvi, A.; Shahbazi, M.; & 2009 & A hybrid recommendation technique based on product category attributes & Scopus & Expert Systems with Applications 36 (2009) 11480–11488 [1ex] & Capos, L. M.; Fernandez-Luna, J. M.; Huete, J. F.; Rueda-Morales, M. A.; & 2010 & Combining content-based and collaborative recommendations: A hybrid approach based on Bayesian networks & Science Direct & International Journal of Approximate Reasoning 51 (2010) 785–799 [1ex] & Barragans-Martínez, A. B.; Costa-Montenegro, E.; Burguillo, J. C.; Rey-Lopez, M.; Mikic-Fonte, F. A.; Peleteiro, A.; & 2010 & A hybrid content-based and item-based collaborative filtering approach to recommend TV programs enhanced with singular value decomposition & Science Direct & International Journal of Information Sciences 180 (2010) 4290–4311 [1ex] & Wen, H.; Fang, L.; Guan, L.; & 2012 & A hybrid approach for personalized recommendation of news on the Web & Science Direct & International Journal of Expert Systems with Applications 39 (2012) 5806–5814 [1ex] P54 & Porcel, C.; Tejeda-Lorente, A.; Martinez, M. A.; Herrera-Viedma, E.; & 2012 & A hybrid recommender system for the selective dissemination of research resources in a Technology Transfer Office & Science Direct & International Journal of Information Sciences 184 (2012) 1–19 [1ex] & Noguera, J. M.; Barranco, M. J.; Segura, R. J.; Martinez, L.; & 2012 & A mobile 3D-GIS hybrid recommender system for tourism & Science Direct & International Journal of Information Sciences 215 (2012) 37–52 [1ex] P56 & Salehi, M.; Pourzaferani, M.; Razavi, S. A.; & 2013 & Hybrid attribute-based recommender system for learning material using genetic algorithm and a multidimensional information model & Science Direct & Egyptian Informatics Journal (2013) 14, 67–78 [1ex] P57 & Zang, Z.; Lin, H.; Liu, K.; Wu, D.; Zhang, G.; Lu, J.; & 2013 & A hybrid fuzzy-based personalized recommender system for telecom products/services & Science Direct & International Journal of Information Sciences 235 (2013) 117–129 [1ex] & Kardan, A. A.; Ebrahimi, M.; & 2013 & A novel approach to hybrid recommendation systems based on association rules mining for content recommendation in asynchronous discussion groups & Science Direct & International Journal of Information Sciences 219 (2013) 93–110 [1ex] & Lucas, J. P.; Luz, N.; Moreno, M. N.; Anacleto, R.; Figueiredo, A. A.; Martins, C.; & 2013 & A hybrid recommendation approach for a tourism system & Science Direct & International Journal of Expert Systems with Applications 40 (2013) 3532–3550 [1ex] & Son, L. H.; & 2014 & HU-FCF: A hybrid user-based fuzzy collaborative filtering method in Recommender Systems & Science Direct & International Journal of Expert Systems with Applications 41 (2014) 6861–6870 [1ex] & Son, L. H.; & 2014 & HU-FCF++: A novel hybrid method for the new user cold-start problem in recommender systems & Scopus & Engineering Applications of Artificial Intelligence 41(2015)207–222 [1ex] P62 & Lekakos, G.; Caravelas, P.; & 2006 & A hybrid approach for movie recommendation & Springer & Multimed Tools Appl (2008) 36:55–70 DOI 10.1007/s11042-006-0082-7, Springer [1ex] & Lekakos, G.; Giaglis, G. M.; & 2007 & A hybrid approach for improving predictive accuracy of collaborative filtering algorithms & Springer & User Model User-Adap Inter (2007) 17:5–40 DOI 10.1007/s11257-006-9019-0, Springer [1ex] P64 & Degemmis, M.; Lops, P.; Semeraro, G.; & 2007 & A content-collaborative recommender that exploits WordNet-based user profiles for neighborhood formation & Springer & User Model User-Adap Inter (2007) 17:217–255, DOI 10.1007/s11257-006-9023-4, Springer [1ex] P65 & Cho, J.; Kang, E.; & 2010 & Personalized Curriculum Recommender System Based on Hybrid Filtering & Springer & ICWL 2010, LNCS 6483, pp. 62–71, 2010, Springer [1ex] P66 & Aksel, F.; Biturk, A.; & 2010 & Enhancing Accuracy of Hybrid Recommender Systems through Adapting the Domain Trends & Scopus & Workshop on the Practical Use of Recommender Systems, Algorithms and Technologies held in conjunction with RecSys 2010. Sept. 30, 2010, Barcelona [1ex] & Lampropoulos, A. S.; Lampropoulos, P. S.; Tsihrintzis, G. A.; & 2011 & A Cascade-Hybrid Music Recommender System for mobile services based on musical genre classification and personality diagnosis & Springer & Multimed Tools Appl (2012) 59:241–258 DOI 10.1007/s11042-011-0742-0, Springer [1ex] & Chen, W.; Niu, Z.; Zhao, X.; Li, Y.; & 2012 & A hybrid recommendation algorithm adapted in e-learning environments & Springer & World Wide Web (2014) 17:271–284 DOI 10.1007/s11280-012-0187-z [1ex] P69 & Sanchez, F.; Barrileo, M.; Uribe, S.; Alvarez, F.; Tena, A.; Mendez, J. M.; & 2012 & Social and Content Hybrid Image Recommender System for Mobile Social Networks & Springer & Mobile Netw Appl (2012) 17:782–795 DOI 10.1007/s11036-012-0399-6, Springer [1ex] & Zheng, X.; Ding, W.; Xu, J.; Chen, D.; & 2013 & Personalized recommendation based on review topics & Scopus & SOCA (2014) 8:15–31 DOI 10.1007/s11761-013-0140-8 [1ex] P71 & Cao, J.; Wu, Z.; Wang, Y.; Zhuang, Y.; & 2013 & Hybrid Collaborative Filtering algorithm for bidirectionalWeb service recommendation & Springer & Knowl Inf Syst (2013) 36:607–627 DOI 10.1007/s10115-012-0562-1 [1ex] P72 & Burke, R.; Vahedian, F.; Mobasher, B.; & 2014 & Hybrid Recommendation in Heterogeneous Networks & Springer & UMAP 2014, LNCS 8538, pp. 49–60, 2014, Springer [1ex] P73 & Nikulin, V.; & 2014 & Hybrid Recommender System for Prediction of the Yelp Users Preferences & Springer & ICDM 2014, LNAI 8557, pp. 85–99, 2014, Springer [1ex] P74 & Sarne, G. M. L.; & 2014 & A novel hybrid approach improving effectiveness of recommender systems & Springer & J Intell Inf Syst DOI 10.1007/s10844-014-0338-z [1ex] & Zhao, X.; Niu, Z.; Chen, W.; Shi, C.; Niu, K.; Liu, D.; & 2014 & A hybrid approach of topic model and matrix factorization based on two-step recommendation framework & Springer & J Intell Inf Syst DOI 10.1007/s10844-014-0334-3, Springer [1ex] & Nilashi, M.; Ibrahim, O. B.; Ithnin, N.; Zakaria, R.; & 2014 & A multi-criteria recommendation system using dimensionality reduction and Neuro-Fuzzy techniques & Springer & Soft Comput DOI 10.1007/s00500-014-1475-6, Springer-Verlag Berlin Heidelberg 2014 [1ex] References ========== --- 1. <http://www.businessdictionary.com/definition/information-overload.html>[↩](#fnref1) 2. <https://recsys.acm.org/>[↩](#fnref2) 3. <http://www.qsrinternational.com/products.aspx>[↩](#fnref3) 4. <http://www.tripadvisor.co.uk>[↩](#fnref4) 5. <http://grouplens.org/node/73>[↩](#fnref5) 6. <http://grouplens.org>[↩](#fnref6) 7. <http://ocelma.net/MusicRecommendationDataset/lastfm-360K.html>[↩](#fnref7) 8. <http://disi.unitn.it/~knowdive/dataset/delicious/>[↩](#fnref8) 9. http://www.netflixprize.com/[↩](#fnref9) 10. <https://gist.github.com/entaroadun/1653794>[↩](#fnref10) 11. https://www.tim.it/[↩](#fnref11) Hybrid Recommender Systems: A Systematic Literature Review ========================================================== Recommender systems are software tools used to generate and provide suggestions for items and other entities to the users by exploiting various strategies. Hybrid recommender systems combine two or more recommendation strategies in different ways to benefit from their complementary advantages. This systematic literature review presents the state of the art in hybrid recommender systems of the last decade. It is the first quantitative review work completely focused in hybrid recommenders. We address the most relevant problems considered and present the associated data mining and recommendation techniques used to overcome them. We also explore the hybridization classes each hybrid recommender belongs to, the application domains, the evaluation process and proposed future research directions. Based on our findings, most of the studies combine collaborative filtering with another technique often in a weighted way. Also cold-start and data sparsity are the two traditional and top problems being addressed in 23 and 22 studies each, while movies and movie datasets are still widely used by most of the authors. As most of the studies are evaluated by comparisons with similar methods using accuracy metrics, providing more credible and user oriented evaluations remains a typical challenge. Besides this, newer challenges were also identified such as responding to the variation of user context, evolving user tastes or providing cross-domain recommendations. Being a hot topic, hybrid recommenders represent a good basis with which to respond accordingly by exploring newer opportunities such as contextualizing recommendations, involving parallel hybrid algorithms, processing larger datasets, etc. Hybrid Recommendations Recommender Systems Systematic Review Recommendation Strategies Introduction ============ Historically people have relied on their peers or on experts’ suggestions for decision support and recommendations about commodities, news, entertainment, etc. The exponential growth of the digital information in the last 25 years, especially in the web, has created the problem of information overload. Information overload is defined as “stress induced by reception of more information than is necessary to make a decision and by attempts to deal with it with outdated time management practices”.[1](#fn1) This problem limits our capacity to review the specifications and choose between numerous alternatives of items in the online market. On the other hand, information science and technology reacted accordingly by developing information filtering tools to alleviate the problem. Recommender Systems (RSs) are one such tools that emerged in the mid 90s. They are commonly defined as software tools and techniques used to provide suggestions for items and other recommendable entities to users. In the early days (beginning of 90s) RSs were the study subject of other closely related research disciplines such as Human Computer Interaction (HCI) or Information Retrieval (IR). Today, RSs are found everywhere helping users in searching for various types of items and services. They also serve as sales assistants for businesses increasing their profits. Technically all RSs employ one or more recommendation strategies such as Content-Based Filtering (CBF), Collaborative Filtering (CF), Demographic Filtering (DF), Knowledge-Based Filtering (KBF), etc. described below: * **Collaborative Filtering**: The basic assumption of CF is that people who had similar tastes in the past will also have similar tastes in the future. One of its earliest definitions is “collaboration between people to help one another perform filtering by recording their reactions to documents they read”. This approach uses ratings or other forms of user generated feedback to spot taste commonalities between groups of users and then generates recommendations based on inter-user similarities. CF recommenders suffer from problems like cold-start (new user or new item), “gray sheep” (users that do not fit in any taste cluster), etc. * **Content-Based Filtering**: CBF is based on the assumption that people who liked items with certain attributes in the past, will like the same kind of items in the future as well. It makes use of item features to compare the item with user profiles and provide recommendations. Recommendation quality is limited by the selected features of the recommended items. Same as CF, CBF suffer from the cold-start problem. * **Demographic Filtering**: DF uses demographic data such as *age*, *gender*, *education*, etc. for identifying categories of users. It does not suffer from the new user problem as is doesn’t use ratings to provide recommendations. However, it is difficult today to collect enough demographic information that is needed because of online privacy concerns, limiting the utilization of DF. It is still combined with other recommenders as a reinforcing technique for better quality. * **Knowledge-Based Filtering**: KBF uses knowledge about users and items to reason about what items meet the users’ requirements, and generate recommendations accordingly. A special type of KBFs are constraint-based RSs which are capable to recommend complex items that are rarely bought (i.e. cars or houses) and manifest important constrains for the user (price). It is not possible to successfully use CF or CBF in this domain of items as few user-system interaction data are available (people rarely buy houses). One of the earliest recommender systems was Tapestry, a manual CF mail system. The first computerized RS prototypes also applied a collaborative filtering approach and emerged in mid 90s. GroupLens was a CF recommendation engine for finding news articles. In the authors present a detailed analysis and evaluation of the Bellcore video recommender algorithm and its implementation embedded in the Mosaic browser interface. Ringo used taste similarities to provide personalized music recommendations. Other prototypes like NewsWeeder and InfoFinder recommended news and documents using CBF, based on item attributes. In late 90s important commercial RS prototypes also came out with Amazon.com recommender being the most popular. Many researchers started to combine the recommendation strategies in different ways building hybrid RSs which we consider in this review. Hybrid RSs put together two or more of the other strategies with the goal of reinforcing their advantages and reducing their disadvantages or limitations. One of the first was Fab, a meta-level recommender (see section 3.4.6) which was used to suggest websites. It incorporated a combination of CF to find users having similar website preferences, with CBF to find websites with similar content. Other works such as followed shortly and hybrid RSs became a well established recommendation approach. The continuously growing industrial interest in the recent and promising domains of mobile and social web has been followed by a similar increase of academic interest in RSs. ACM RecSys annual conference[2](#fn2) is now the most significant event for presenting and discussing RS research. The work of Burke in is one of the first qualitative surveys addressing hybrid RSs. The author analyzes advantages and disadvantages of the different recommendation strategies and provides a comprehensive taxonomy for classifying the ways they combine with each other to form hybrid RSs. He also presents several hybrid RS prototypes falling into the 7 hybridization classes of the taxonomy. Another early exploratory work is where several experiments combining personalized agents with opinions of community members in a CF framework are conducted. They conclude that this combination produces high-quality recommendations and that the best results of CF are achieved using large data of user communities. Other review works are more generic and address RSs in general, not focusing in any RS type. They reflect the increasing interest in the field in quantitative terms. In the authors perform a review work of 249 journal and conference RS publications from 1995 to 2013. The peak publication period of the works they consider is between 2007 and 2013 (last one-third of the analyzed period). They emphasize the fact that the current hybrid RSs are incorporating location information into existing recommendation algorithms. They also highlight the proper combination of existing methods using different forms of data, and evaluating other characteristics (e.g., diversity and novelty) besides accuracy as future trends. In the authors review 210 recommender system articles published in 46 journals from 2001 to 2010. They similarly report a rapid increase of publications between 2007 and 2010 and predict an increase interest in mixing existing recommendation methods or using social network analysis to provide recommendations. In this review paper we summarize the state of the art of hybrid RSs in the last 10 years. We follow a systematic methodology to analyze and interpret the available facts related to the 7 research questions we defined. This methodology defined at provides an unbiased and reproducible way for undertaking a review work. Unlike the other review works not focused in any RS type, this systematic literature review is the first quantitative work that is entirely focused in recent hybrid RS publications. For this reason it was not possible for us to have a direct basis with which to compare our results. Nevertheless we provide some comparisons of results for certain aspects in which hybrid RSs do not differ from other types of RSs. To have a general idea about what percentage of total RS publications address hybrid RSs we examined, a survey work about RSs in general. Here the authors review the work of 330 papers published in computer science and information systems conferences proceedings and journals from 2006 to 2011. Their results show that hybrid recommendation paradigm is the study object of about 14.5% of their reviewed literature. We considered the most relevant problems hybrid RSs attempt to solve, the data mining and machine learning methods involved, RS technique combinations the studies utilize and the hybridization classes the proposed systems fall into. We also observed the domains in which the contributions were applied and the evaluation strategies, characteristics and metrics that were used. Based on the suggestions of the authors and the identified challenges we also present some future work directions which seem promising and in concordance with the RS trends. Many primary studies were retrieved from digital libraries and the most relevant papers were selected for more detailed processing (we use the terms paper and study interchangeably to refer to the same object / concept). We hope this work will help anyone working in the field of (hybrid) RSs, especially by providing insights about future trends or opportunities. The remainder of the paper is structured as follows. Section 2 briefly summarizes the methodology we followed, the objectives and research questions defined, the selection of papers and the quality assessment process. Section 3 introduces the results of the review organized in accordance with each research question. Section 4 discusses and summarizes each result whereas Section 5 concludes. Finally we list the selected papers in Appendix A. Methodology =========== The review work of this paper follows the guidelines that were defined by Kitchenham and Charters for systematic literature reviews in Software Engineering. ![Systematic literature review protocol](./images/Protocol7)Systematic literature review protocol The purpose of a systematic literature review is to present a verifiable and unbiased treatment of a research topic utilizing a rigorous and reproducible methodology. The guidelines that were followed are high level and do not consider the influence of research questions type on the review procedures. In Figure 1 we present the protocol of the review. It represents a clear set of steps which assist the management of the review process. The protocol was defined by the first author and verified by the second author. In the following sections we describe each step we summarized in Figure 1. Research questions, search string and digital sources ----------------------------------------------------- The primary goal of this systematic literature review is to understand what challenges hybrid RSs could successfully address, how they are developed and evaluated and in what ways or aspects they could be experimented with. To this end, we defined the following research questions: [**RQ1** ] What are the most relevant studies addressing hybrid recommender systems? [**RQ2** ] What problems and challenges are faced by the researchers in this field? Which data mining and machine learning techniques are used in hybrid RSs? What recommendation techniques are combined and which problems they solve? [**RQ4** ] What hybridization classes are used, based on the taxonomy of Burke? [**RQ5** ] In what domains are hybrid recommenders applied? What methodologies are used for the evaluation and what metrics they utilize? Which RS characteristics are evaluated and what metrics they use? What datasets are used for training and testing hybrid RSs? [**RQ7** ] Which directions are most promising for future research? Furthermore we picked five scientific digital libraries that represent our primary sources for computer science research publications. They are listed in Table 1. Other similar sources were not considered as they mainly index data from the primary sources. [ht] l l **Source** & **URL** [0.5ex] SpringerLink & http://link.springer.com Science Direct & http://www.sciencedirect.com IEEExplore & http://ieeexplore.ieee.org ACM Digital Library & http://dl.acm.org Scopus & http://www.scopus.com [0.5ex] [ht] l l **Keyword** & **Synonyms** [0.5ex] Hybrid & Hybridization, Mixed Recommender & Recommendation System & Software, Technique, Technology, Approach, Engine We defined *(“Hybrid”, “Recommender”, “Systems”)* as the basic set of keywords. Then we added synonyms to extend it and obtain the final set of keywords. The set of keywords and synonyms is listed in Table 2. The search string we defined is: *(“Hybrid” OR “Hybridization” OR “Mixed”) AND (“Recommender” OR “Recommendation”) AND (“System” OR “Software” OR “Technique” OR “Technology” OR “Approach” OR “Engine”)* Selection of papers ------------------- Following Step 4 of the protocol, we applied the search string in the search engines of the five digital libraries and found 9673 preliminary primary studies (see Table 4). The digital libraries return different numbers of papers because of the dissimilar filtering settings they use in their search engines. This retrieval process was conducted during May 2015. To objectively decide whether to select each preliminary primary study for further processing or not, we defined a set of inclusion / exclusion criteria listed in Table 3. The inclusion / exclusion criteria are considered as a basis of concentrating in the most relevant studies with which to achieve the objectives of the review. [ht] p9.7cm **Inclusion criteria** [0.5ex] Papers presenting hybrid recommender systems, algorithms, approaches, etc. [0.8ex] Papers that even though do not specifically present hybrid RSs, provide recommendations combining different data mining techniques. [0.8ex] Papers from conferences and journals [0.8ex] Papers published from 2005 to 2015 [0.8ex] Papers written in English language only [0.8ex] **Exclusion criteria** Papers not addressing recommender systems at all [0.8ex] Papers addressing RSs but not implying any hybridization or combination of different approaches or data mining techniques. [0.8ex] Papers that report only abstracts or slides of presentation, lacking detailed information [0.8ex] Grey literature [0.8ex] Duplicate papers were removed and a coarse selection phase followed. Processing all of them strictly was not practical. Therefore we decided to include journal and conference papers only, leaving out gray literature, workshop presentations or papers that report abstracts or presentation slides. We initially analyzed title, publication year and publication type (journal, conference, workshop, etc.). In many cases abstract or even more parts of each paper were examined for deciding to keep it or not. Our focus in this review work is on hybrid recommender systems. Thus we selected papers presenting mixed or combined RSs dropping out any paper addressing single recommendation strategies or papers not addressing RSs at all. Hybrid RSs represent a somehow newer family of recommender systems compared to other well known and widely used families such as CF or CBF. Therefore the last decade (2005-2015) was considered an appropriate publication period. [ht] l c c c & **Digital source** & **Search and retrieval** & **Coarse selection** & **Detailed selection** & 4152 & 50 & 13 & 3582 & 27 & 9 & 1012 & 53 & 13 & 484 & 35 & 12 & 443 & 75 & 29 & **9673** & **240** & **76** Using inclusion / exclusion and this coarse selection step we reached to a list of 240 papers. In the next step we performed a more detailed analysis and selection of the papers reviewing abstract and other parts of every paper. Besides relevance based on the inclusion / exclusion criteria, completeness (in terms of problem definition, description of the proposed method / technique / algorithm and evaluation of results) of each study was also taken into account. Finally we reached to our set of 76 included papers. The full list is presented in Appendix A together with the publication details. Quality assessment ------------------ We also defined 6 questions listed in Table 5 for the quality estimation of the selected studies. Each of the question receives score values of 0, 0.5 and 1 which represent answers “no”, “partly” and “yes” correspondingly. The questions we defined do not reflect equal level of importance in the overall quality of the studies. For this reason we decided to weight them with coefficients of 0.5 (low importance) 1 (medium importance) and 1.5 (high importance). [ht] p5.8cm c r **Quality Question** & **Score** & **Weight** [0.5ex] QQ1. Did the study clearly describe the problems that is addressing? & yes / partly / no (1 / 0.5 / 0) & 1 [0.8ex] QQ2. Did the study review the related work for the problems? & yes / partly / no (1 / 0.5 / 0) & 0.5 [0.8ex] QQ3. Did the study recommend any further research? & yes / partly / no (1 / 0.5 / 0) & 0.5 [0.8ex] QQ4. Did the study describe the components or architecture of the proposed system? & yes / partly / no (1 / 0.5 / 0) & 1.5 [0.8ex] QQ5. Did the study provide an empirical evaluation? & yes / partly / no (1 / 0.5 / 0) & 1.5 [0.8ex] QQ6. Did the study present a clear statement of findings? & yes / partly / no (1 / 0.5 / 0) & 1 [0.8ex] We set higher weight to the quality questions that address the components/architecture of the system/solution (QQ4) and the empirical evaluation (QQ5). Quality questions that address problem description (QQ1) and statement of results (QQ6) got medium importance. We set a low importance weight to the two questions that address the related studies (QQ2) and future work (QQ3). The papers were split in two disjoint subsets. Each subset of papers was evaluated by one of the authors. In cases of indecision the quality score was set after a discussion between the authors. At the end, the final weighted quality score of each study was computed using the following formula: *s**c**o**r**e* = ∑*i* = 16*w**i* \* *v**i*/6 *w**i* is the weight of question *i* *(0.5, 1, 1.5)* *v**i* is the vote for question *i* *(0, 0.5, 1)* After this evaluation, cross-checking of the assessment was done on arbitrary studies (about 40% of included papers) by the second author. At the end, an agreement on differences was reached by discussion. Data extraction --------------- Data extraction was carried on the final set of selected primary studies. We collected both paper meta-data (i.e., author, title, year, etc.) and content data important to answer our research questions like problems, application domains, etc. Table 6 presents our data extraction form. In the first column we list the extracted data, in the second column we provide an explanation for some of the extracted data which may seem unclear and in the third column the research question with which the data is related. All the extracted information was stored in Nvivo[3](#fn3) which was used to manage data extraction and synthesis process. Nvivo is a data analysis software tool that helps in automating the identification and the labeling of the initial segments of text from the selected studies. [ht] l l l **Extracted Data** & **Explanation** & **RQ** [0.5ex] ID & A unique identifier of the form Pxx we set to each paper & - Title & - & RQ1 Authors & - & - Publication year & - & RQ1 Conference year & - & - Volume & Volume of the journal & - Location & Location of the conference & - Source & Digital library from which was retrieved & - Publisher & - & - Examiner & Name of person who performed data extraction & - Participants & Study participants like students, academics, etc. Goals & Work objectives & - Application domain & Domain in which the study is applied & RQ5 Approach & Hybrid recommendation approach applied & RQ3b Contribution & Contribution of the research work & - Dataset & Public dataset used to train and evaluate the algorithm & RQ6c DM techniques & Data mining techniques used & RQ3a Evaluation methodology & Methodology used to evaluate the RS & RQ6a Evaluated characteristic & RS characteristics evaluated & RQ6b Future work & Suggested future works & RQ7 Hybrid class & Class of hybrid RS & RQ4 Research problem & - & RQ2 Score & Overall weighted quality score & - Other Information & - & - Synthesis --------- For the synthesis step we followed Cruzes and Dyba methodology for the thematic synthesis. Their methodology uses the concept of codes which are labeled segments of text to organize and aggregate the extracted information. Following the methodology we defined some initial codes which reflected the research questions. Some examples include the first research problems found, hybrid recommendation classes, first application domains, data mining techniques, recommendation approaches and evaluation methodologies. After completing the reading we had refined or detailed each of the initial codes with more precise sub-codes (leaf nodes in NVivo) which were even closer to the content of the selected papers, covering all the problems found, all the datasets used, and similar detailed data we found. We finished assigning codes to all the highlighted text segments of the papers and then the codes were aggregated in themes (of different levels if necessary) by which the papers were grouped. Afterwards a model of higher-order themes was created to have an overall picture. The research questions were mapped with the corresponding themes. Finally, the extracted data were summarized in categories which are reported in the results section (in pictures or tables) associated with the research questions they belong to. Results ======= In this section we present the results we found from the selected studies to answer each research question. We illustrate the different categories of problems, techniques, hybridization classes, evaluation methodologies, etc. with examples from the included studies. The results are further discussed in the next section. RQ1: Included studies --------------------- ![Distribution of studies per publication year](./images/Year2)Distribution of studies per publication year RQ1 addresses the most relevant studies that present Hybrid RSs. We selected 76 papers as the final ones for further processing. They were published in conference proceedings and journals from 2005 to 2015. The publication year distribution of the papers is presented in Figure 2. It shows that most of the hybrid RS papers we selected were published in the last 5 years. For the quality assessment process we used the quality questions listed in Table 5. In Figure 3, the box plots of quality score distributions per study type (conference or journal) are shown. We see that about 75% of journal studies have quality score higher than 0.9. Same is true for about 35% of conference studies. In Figure 4 we present the average quality score about each quality question. ![Boxplot of quality score per publication type](./images/BoxPlots)Boxplot of quality score per publication type ![Average score of each quality question](./images/Score2)Average score of each quality question QQ4 (Did the study describe the components or architecture of the proposed system?) has the highest average score (0.947) wheres QQ3 (Did the study suggest further research?) has the lowest (0.651). The weighted quality score is higher than 0.81 for any included paper. Only one journal study got a weighted average score of 1.0 (highest possible). RQ2: Research problems ---------------------- To answer RQ2 we summarize the most important RS problems the studies try to solve. A total of 12 problems were found. The most frequent are presented in Figure 5 with the corresponding number of studies where they appear. Studies may (and often do) address more than one problem. Same thing applies for other results (data mining techniques, domains, evaluation metrics, etc.) reported in this section. Below we describe each of the problems: **Cold-start** This problem is heavily addressed in the literature and has to do with recommendations for new users or items. In the case of new users the system has no information about their preferences and thus fails to recommend anything to them. In the case of new items the system has no ratings for these items and doesn’t know to whom recommend them. To alleviate cold-start, authors in use a probabilistic model to extract latent features from item’s representation. Using the latent features they generate accurate pseudo ratings, even in cold-start situation when few or no ratings are provided. Another example is where the authors try to solve the new user cold-start in the e-learning domain by combining CF with a CBF representation of learning contents. Cold-start problem is also treated in where the authors merge the weighted outputs of different recommendation strategies using Ordered Weighted Averaging (OWA), a mathematical technique first introduced in. In total, cold-start was found in 23 studies. **Data sparsity** This problem rises from the fact that users usually rate a very limited number of the available items, especially when the catalog is very large. The result is a sparse *user-item* rating matrix with insufficient data for identifying similar users or items, negatively impacting the quality of the recommendations. Data sparsity is prevalent in CF RSs which rely on peer feedback to provide recommendations. In data sparsity of cross-domain recommendations is solved using a factorization model of the triadic relation *user-item-domain*. Also in we find an attempt to solve data sparsity by treating each user-item rating as predictor of other missing ratings. They estimate the final ratings by merging ratings of the same item by other users, different item ratings made by the same user and ratings of other similar users on other similar items. Another example is where CF is combined with Naive Bayes in a switching way. Data sparsity was a research problem of 22 studies. **Accuracy** Recommendation accuracy is the ability of a RS to correctly predict the item preferences of each user. Much attention has been paid to improve the recommendation accuracy since the dawn of RSs. Obviously there is still place for recommendation accuracy improvements. This is especially true in data sparsity situations, as accuracy and data sparsity are two problems that appear together in 6 studies (e.g., ). In a Bayesian network model with user nodes, item nodes, and feature nodes is used to combine CF with CBF and attain better recommendation quality. Other example is where a web content RS is constructed. The authors construct user’s long term interest based on his/her navigation history. Than the similarity of user’s profile with website content is computed to decide whether to suggest the website or not. Experiments conducted with news websites show improved accuracy results. Improving accuracy was a research objective of 16 studies. **Scalability** This is a difficult to attain characteristic which is related to the number of users and items the system is designed to work for. A system designed to recommend few items to some hundreds of users will probably fail to recommend hundreds of items to millions of people, unless it is designed to be highly scalable. Hyred in is an example of a system designed to be scalable and overcome data sparsity problem as well. The authors combine a modified Pearson correlation CF with distance-to-boundary CBF. They find the nearest and furthest neighbors of each user to reduce the dataset. The use of this compressed dataset improves scalability, alleviates sparsity, and also slightly reduced the computational time of the system. In the authors propose a hybrid RS designed to recommend images in social networks. They involve CF and CBF in a weighted way and also consider aesthetic characteristics of images for a better filtering, which overcomes the problem of scalability and cold-start as well. In a system with better scalability is conceived by combining Naive Bayer and SVM with CF. Improving scalability was addressed in 11 studies. **Diversity** This is a desired characteristic that is getting attention recently. Having diverse recommendations is important as it helps to avoid the popularity bias. The latter is having a recommendation list with items very similar to each other (e.g., showing all the episodes of a very popular saga). A user that is not interested in one of them is probably not interested in any of them and gets no value from that recommendation list. *K-Furthest Neighbors*, the inverted neighborhood model of K-NN is used in for the purpose of creating more diverse recommendations. The authors report an increased diversity. However, the user study they conduct shows that the perceived usefulness of it is not different from the one of traditional CF. In the concept of Experts is utilized to find novel and relevant items to recommend. The ratings of users are analyzed and some of the users are promoted as “experts” of a certain taste. They generate recommendations of their for the rest of the “normal” users in that item taste. Diversity is also addressed in totaling in 3 studies. **Other** These are other problems appearing in few studies. They include Lack of Personalization, Privacy Preserving, Noise Reduction, Data source Integration, Lack of Novelty and User preference Adaptiveness. ![Addressed problems](./images/Problems8)Addressed problems RQ3a: Data mining and machine learning techniques ------------------------------------------------- In this section we address the distribution of the studies according to the basic Data Mining (DM) and Machine Learning (ML) techniques they use to build their hybrid RSs. The variety of DM and ML techniques or algorithms used is high. Authors typically use different techniques to build the diverse components of their solutions or prototypes. In Table 7 we present the most frequent that were found in the included studies. Below we describe some of them. More details about the characteristics of DM/ML techniques and how they are utilized to build RSs can be found at. **K-NN** *K-Nearest Neighbors* is a well known classification algorithm with several versions and implementations, widely utilized in numerous data mining and other applications. This technique is popular among collaborative filtering RSs which represent the most common family of recommenders. It is mostly utilized to analyze neighborhood and find users of similar profiles or analyze items’ catalog and find items with similar characteristics. K-NN was found in a total of 59 studies. **Clustering** There are various clustering algorithms used in RSs and other data mining applications. They typically try to put up a set of categories with which data can be identified. The most popular is *K-means* which partitions the entire data into K clusters. In RSs clustering is mostly applied to preprocess the data. In the authors experiment with K-way (similar to K-means) clustering and *Bisecting K-means* for grouping different types of learning items. They also use CBF to create learners’ profiles and build an e-learning recommender with improved accuracy. An other example is where websites are clustered using co-occurence of pages and the content data of pages. The results are aggregated to get the final recommendations and overcome data sparsity. In total clustering algorithms were used in 34 studies. **Association rules** Association rule mining tries to discover valuable relations (association rules) in large databases of data. These associations are in the form X `=>` Y, where X and Y are sets of items. The association that are above a minimum level of support with an acceptable level of confidence can be used to derive certain conclusions. In recommender systems this conclusions are of the form “X likes Y” where X is a user to whom the system can recommend item Y. In information collected from a discussion group is mined and association rules are used to form the user similarity neighborhood. Word Sense Disambiguation is also used to select the appropriate semantically related concept from posts which are then recommended to the appropriate users of the forum. This hybrid meliorates different problems such as cold-start, data sparsity and scalability. In classification based on association methods is applied to build a RS in the domain of tourism. The system is more resistant to cold-start and sparsity problems. To overcome cold-start, the authors in propose a procedure for finding similar items by association rules. Their algorithm considers the user-item matrix as a transaction database where the user Id is the transactional Id. They find the support of each item and keep items with support greater than a threshold. Afterwards, they calculate the confidence of remaining rules and rule scores by which they find the most similar item to any of the items. Association rules were found in 17 studies. **Fuzzy logic** Also called *fuzzy set theory* it is a set of mathematical methods that can be used to build hybrid RSs. Those methods are also called reclusive in the literature. Contrary to CF which relies on neighborhood preferences without considering item characteristics, they require some representation of the recommended items. Reclusive methods are complementary to collaborative methods and are often combined with them to form hybrid RSs. An example of using Fuzzy logic is where better accuracy is achieved by combining 2 CFs with a fuzzy inference system in a weighted way to recommend leaning web resources. In fuzzy clustering is used to integrate user profiles retrieved by a CF with Point Of Interest (POI) data retrieved from a context aware recommender. The system is used in the domain of tourism and provides improved accuracy. In total Fuzzy logic was found in 14 studies. **Matrix manipulation** Here we put together the different methods and algorithms that are based on matrix operations. The methods we identified are Singular Value Decomposition (SVD), Latent Dirichlet Allocation (LDA), Principal Component Analysis (PCA), Dimensionality Reduction and similar matrix factorization techniques. Matrix manipulation methods are often used to build low error collaborative RSs and were especially promoted after the Netflix challenge was launched in 2006. In a topic model based on LDA is used to learn the probability that a user rates an item. An other example is where Dimensionality Reduction is used to solve sparsity and scalability in a multi-criteria CF. They were found in 9 studies. **Other** Other less frequent techniques such as Genetic Algorithms, Naive Bayes, Neural Networks, Notion of Experts, Statistical Modeling, etc. were found in 19 papers. [ht] l c **DM/ML technique** & **Studies** [0.5ex] K-NN & 59 Clustering & 34 Association rules & 17 Fuzzy logic & 14 Matrix manipulation & 9 Other & 19 RQ3b: Recommendation technique combinations ------------------------------------------- In this section we present a list of the most common technique combinations that form hybrid RSs. We also present the problems each of this combinations is most frequently associated with. In the following subsections the construct and technical details of some of the prototypes implementing each combination is described. Table 8 presents the summarized results. [ht] l c c c c c c & **Problem** & **CF-X** & **CF-CBF** & **CF-CBF-X** & **IICF-UUCF** & **CBF-X** & **Other** & 2 & 3 & 2 & 1 & 1 & 5 & 0 & 5 & 3 & 3 & 4 & 6 & 2 & 3 & 0 & 2 & 2 & 4 & 0 & 2 & 2 & 0 & 2 & 2 & 2 & 0 & 0 & 0 & 0 & 1 & 0 & 2 & 1 & 1 & 1 & 2 & **6** & **15** & **8** & **7** & **10** & **20** ### CF-X Here we report studies that combine CF with one other technique which is not CBF (those are counted as CF-CBF). An example of this combination is where the authors go hybrid to improve the performance of a multi-criteria recommender. They base their solution on the assumption that usually only a few selection criteria are the ones which impact user preferences about items and their corresponding ratings. Clustering is used first to group users based on the items’ criteria they prefer. CF is then used within each cluster of similar users to predict the ratings. They illustrate their method by recommending hotels from TripAdvisor[4](#fn4) and report performance improvements over traditional CF. Other attempt to improve the predictive accuracy of traditional CF is. Here the authors integrate in CF discrete demographic data about the users such as *gender*, *age*, *occupation*, etc. Fuzzy logic is used to compute similarities between users utilizing this extra demographic data and integrate the extra similarities with the user-based similarities calculated from ratings history. After calculating the final user similarities their algorithm predicts the rating values. The extra performance which is gained from the better user similarities that are obtained, comes at the cost of a slightly larger computational time which is however acceptable. In total CF-X combination was found in 6 studies with X being KBF, DF or a DM/ML technique from those listed in Table 6. ### CF-CBF This is a very popular hybrid RS utilizing the two most successful recommendation strategies. In many cases the recommendations of both systems are weighted to produce the final list of predictions. In other cases the hybrid RS switches from CF to CBF or is made up of a more complex type of combination (see section 3.5). An example is where the authors develop a hybrid RS suitable for working with high volumes of data and solve scalability problems in e-commerce systems. Their solution first involves CF (Pearson’s product moment coefficients) to reduce the dataset by finding the nearest neighbors of each user, discarding the rest and reducing the dataset. Afterwards distance-to-boundary CBF is used to define the decision boundary of items purchased by the target user. The final step combines the CF score (correlation coeficient between two customers) with the distance-to-boundary score (distance between the decision boundary and each item) in a weighted linear form. The authors report an improved accuracy of their hybrid RS working in the reduced dataset, compared to other existing algorithms that use full datasets. In the authors propose a CF-CBF hybrid recommender which is based on Bayesian networks. This model they build uses probabilistic reasoning to compute the probability distribution over the expected rating. The weight of each recommending strategy (CF and CBF) is automatically selected, adapting the model to the specific conditions of the problem (it can be applied to various domains). The authors demonstrate that their combination of CF and CBF improves the recommendation accuracy. Other studies involve similar mathematical models or constructs (e.g., fuzzy logic) to put together CF and CBF and gain performance or other benefits. In total CF-CBF contributions were found in 15 studies. ### CF-CBF-X Those are cases in which CF and CBF are combined together with a third approach. One example is where CF and CBF are combined with DF to generate recommendations for groups of similar profiles (users). These kind of recommendations are particularly useful in online social networks (e.g., for advertising). The goal of the authors is to provide good recommendations in data sparsity situations. First CBF is used to analyse ratings and items’ attributes. CF is then invoked as the second stage of the cascade to generate the group recommendations. DF is used to reinforce CF in the cases of sparse profiles (users with few ratings). In total CF-CBF-X was found in 8 studies. X is mostly a clustering technique or DF. ### IICF-UUCF Item-Item CF and User-User CF are two forms of CF recommenders, differing on the way the neighborhoods are formed. Some studies combine both of them to improve overall CF performance. An example is where the authors present a hybrid recommendation framework they call Collaborative Filtering Topic Model (CFTM) which considers both user’s reviews and ratings about items of a certain topic (or domain) in e-commerce. The first stage which is offline performs sentiment analysis in the reviews to calculate the User or Item similarity. The second stage of the cascade uses IICF or UUCF (switching) to predict the ratings. The authors evaluate using 6 datasets of different domains from Amazon and report that their hybrid approach performs better than traditional CF, especially in sparsity situations. IICF-UUCF combinations were found in 7 studies. ### CBF-X There were also 10 studies in which CBF is combined with another technique X which is not CF (counted as CF-CBF). X represents different approaches like KBF and DF or DM/ML techniques like clustering etc. One example is where the authors describe and use the interesting notion of *user lifestyle*. They select demographic information, consumer credit data and TV program preferences as lifestyle indicators, and confirm their significance by performing statistical analysis on 502 users. The most significant lifestyle attributes are binary encoded and used to form the neighborhoods and ratings of each user by means of Pearson correlation. The authors call the resulting complete (in terms of ratings) matrix *pseudoUser - item* matrix. It is then used for a Pearson based (classical CF) prediction of the original *user-item* ratings. Considerable performance improvements are reported. ### Other Other implementations include combinations of the same recommendation strategy (e.g., *CF1-CF2* with different similarity measures or tuning parameters each), trust-aware recommenders that are being used in social communities, prototypes using association rules mining, neural networks, genetic algorithms, dimensionality reduction, social tagging, semantic ontologies, pattern mining or different machine learning classifiers. RQ4: Classes of hybridization ----------------------------- To answer RQ4 we classified the examined hybrid RSs according to the taxonomy proposed by Burke. This taxonomy categorizes hybrid RSs in 7 classes based on the way the different recommendations techniques are aggregated with each other. Each class is explained in the subsections below where we discuss in more details few examples from the included papers. The results are summarized in Figure 6. ![Distribution of studies per hybridization class](./images/Classes3)Distribution of studies per hybridization class ### Weighted Weighted hybrids were the most frequent. They compute the scores of the items they recommend by aggregating the output scores of each recommendation technique using weighted linear functions. One of the first weighted recommenders was P-Tango which combined CF and CBF rating scores in a linear weighted way to recommend online newspapers. In P-Tango, aggregation was made giving equal initial weights to each score and then possibly adapting by the feedback of users. The weights of CF and CBF are set on a per-user basis enabling the system to determine the optimal mix for each user and alleviating the “gray sheep” problem. In the authors propose a weighting method for combining user-user, user-tag and user-item CF relations in social media. The method they propose computes the final rating score of an item for a user as the linear combination of the above three CF relations. Unlike the traditional CF, this weighted hybrid CF recommender is completely based on tags and does not require that users provide explicit rating scores for the items that are recommended (e.g., photos). An other example is where the authors
arxiv_0000206
Non-integrable dimer models: universality and scaling relations =============================================================== In the last few years, the methods of constructive Fermionic Renormalization Group have been successfully applied to the study of the scaling limit of several two-dimensional statistical mechanics models at the critical point, including: weakly non-integrable 2D Ising models, Ashkin-Teller, 8-Vertex, and close-packed interacting dimer models. In this note, we focus on the illustrative example of the interacting dimer model and review some of the universality results derived in this context. In particular, we discuss the massless Gaussian free field (GFF) behavior of the height fluctuations. It turns out that GFF behavior is connected with a remarkable identity (‘Haldane’ or ’Kadanoff relation’) between an amplitude and an anomalous critical exponent, characterizing the large distance behavior of the dimer-dimer correlations. Introduction ============ The scaling limit of the Gibbs measure of a statistical mechanics model at a second order phase transition is expected to be universal, in particular, to be robust under ‘irrelevant’ perturbations and conformally invariant. Conceptually, the route towards universality is clear: one should first integrate out the small-scale degrees of freedom, then rescale the variables associated with the large-scale fluctuations, and show that the critical model reaches a fixed point, under iterations of this procedure (‘Wilsonian’ Renormalization Group (RG)). On general grounds, the fixed point is expected to be conformally invariant: therefore, in a second step, one can use the methods of Conformal Field Theory (CFT) to classify and characterize all the possible conformally invariant fixed points (CFT methods provide, in fact, a complete classification of such fixed point theories in two dimensions; remarkably, there has been recent progress in the characterization of three dimensional conformally invariant theories, too). The conformal fixed point theory of interest for the description of a given statistical mechanics system can often be identified by using specific information on the critical exponents, typically available from the RG construction. Even though widely accepted and believed to be correct, there are only few cases, mostly in two dimensions, for which this procedure and/or its predictions can be rigorously confirmed. 1. A first class of critical models for which the existence and conformal invariance of the scaling limit is rigorously known consists of two-dimensional, integrable Ising and dimer models on isoradial graphs. The key technical tool used to prove conformal invariance is discrete holomorphicity, which is a manifestation of integrability. The method is flexible enough to prove robustness of the scaling limit under geometric deformations (e.g., of the domain, or of the underlying lattice), but it is not able to explain universality under perturbations of the microscopic Hamiltonian. [For a proof of conformal invariance of crossing probabilities via discrete holomorphicity methods in a non-integrable model, see.] 2. A second class of critical models for which several predictions of Wilsonian RG and CFT have been rigorously substantiated consists of non-integrable perturbations of determinantal models, such as interacting dimers, Ashkin-Teller and vertex models. These results are based on a constructive, fermionic, version of Wilsonian RG: they allow to construct the bulk scaling limit of ‘local fermionic observables’ and to prove scaling relations among critical exponents, but they are not flexible enough yet to compute ‘non-local’ observables (such as spin-spin correlations in perturbed Ising or monomer-monomer correlations in perturbed dimers) or to accommodate geometric deformations of the domain. [For a first result on perturbed Ising models, based on probabilistic tools, see.] Not much is rigorously known about the existence and nature of the scaling limit of other critical models in two and three dimensions. It is a central challenge of mathematical physics for the coming years to extend and effectively combine the available techniques, in order to cover new models of physical interest and new instances of universality. In this paper, we review a few selected results on the universality of non-integrable two dimensional models, based on the fermionic RG methods mentioned above. For definiteness, we restrict our attention to ‘interacting dimer models’. We first introduce them informally, and give a first overview both of the ‘classical’ known results, and of our new results. Next, in the following sections, we introduce their definitions and state the relevant results on their critical behavior in a more precise way. Model and results: an overview. ------------------------------- Dimer models at close packing on planar, bipartite, graphs are highly simplified models of random surfaces or of liquids of anisotropic molecules at high density: the connection between these two apparently unrelated classes of systems is mediated by the notion of height function, which is in one-to-one correspondence with close-packed dimer configurations, as illustrated in Fig.[fig1]. [fig1] *Integrable dimer models.* A remarkable feature of close-packed dimer models is that there is a natural family of exactly solvable, critical, models, which exhibit a very rich and interesting behavior. This family is parametrized, on the one hand, by the specific periodic, infinite, bipartite planar lattice on which the dimer configuration lives; on the other hand, by the positive weights (or ‘activities’) associated to the edges of the graph. The exact solution is based on a determinantal representation of the partition function, due to Kasteleyn and Temperley-Fisher, valid for generic edge weights. The edge weights control the average slope of the height: in this sense, the family of solvable dimer models describes discrete random surfaces with different average slopes (or ‘tilt’). The solution shows that there is an open set of edge weights for which the dimer-dimer correlations decay algebraically and, correspondingly, the height fluctuations are asymptotically described by a massless Gaussian Free Field (GFF) at large distances: this critical phase is called ‘liquid’ or ‘rough’, depending on whether one focuses on the behavior of the dimer correlations or of the height profiles. This critical phase displays a subtle form of universality, in that the variance of the GFF height fluctuations is asymptotically, at large distances, *independent* of the dimer weights (in particular, of the average slope of the height profile) and of the underlying graph. In the case that the underlying lattice is the honeycomb lattice, as in Fig.[fig1], dimer configurations can be seen as stepped interfaces of minimal surface area (given the boundary height), i.e., they can be seen as domain walls for the 3D Ising model at zero temperature. In fact, the probability measure on dimer configurations induced by setting all edge weights equal to 1 is the same as the zero-temperature measure of the 3D Ising model with suitable *tilted* Dobrushin-type boundary conditions. By using this connection, one recognizes that the GFF nature of the height fluctuations proves the existence of a rough phase in 3D Ising at *T* = 0 with such boundary conditions. It is an open problem to prove that the rough phase persists at small, positive, temperatures. *Non-integrable dimers: main results (in brief).* Wilsonian RG and the bosonization method[1](#fn1) suggest that the GFF nature of the height fluctuations should be robust under non-integrable perturbations of the dimer model. In order to test this prediction in a concrete setting, we consider a class of ‘interacting’ dimer models, including the 6-vertex model (in its dimer representation ) and non-integrable variants thereof. For weak enough interactions (weak but independent of the system size), we prove that the height fluctuations still converge to GFF, as in the integrable case. However, in this case, the variance appears to depend on the interaction and on the dimer weights, see Section [sec4]. Therefore, the form of universality exhibited by the integrable dimer model seems to break down as soon as we move out of the exactly solvable case. Remarkably, a new form of universality emerges in the interacting case: the (pre-factor of the) variance equals the anomalous critical exponent of the dimer correlations. This is an instance of a ‘Kadanoff’ or ‘Haldane’ scaling relation, see for a brief introduction to these ‘weak universality’ relations. In the next sections, we will describe the models of interest and state our main results more precisely: in Section [sec2] we introduce the family of solvable dimer models and briefly review a selection of known results about its correlation functions; in Section [sec3] we introduce the class of non-integrable dimer models we consider, state our main results and give a brief sketch of the ideas involved in the proof (see for a complete proof); in Section [sec4] we illustrate the Haldane scaling relation by a first order computation: we compute both the pre-factor of the variance and the critical exponent at linear order in the perturbation strength, and check that the two results agree (the computation shows how non-trivial and remarkable the result is: already at first order the validity of the Haldane relation requires very subtle cancellations). Our explicit computations proves, in particular, that the critical exponent is non-universal, i.e., it depends both on the interaction strength and on the dimer weights. Non-interacting dimers ====================== For simplicity, we restrict to dimers on the square lattice. A close-packed dimer configuration (or perfect matching) on $\L\subset \mathbb Z^2$ is a collection of hard rods of length 1, which can be arranged on $\L$ in such a way that they cover all the vertices of $\L$ exactly once, see Fig. [fig2]. It is important that $\L$ is bipartite: we emphasize this fact by coloring white and black the vertices of the two sublattices. [fig2] Any dimer configuration is in one-to-one correspondence with a height profile, defined on the faces up to an overall additive constant. The height profile is defined by the differences between the values of the height at the face *f* and *f*ʹ: $$\label{hdiff}h(f')-h(f)=\sum\_{b\in C\_{f\to f'}}\sigma\_b({\mathds 1}\_{b}-1/4),$$ where the sum runs over the bonds crossed by a lattice path from *f* to *f*ʹ, *σ**b* is equal to  + 1 or  − 1, depending on whether *b* crossed with the white site on the right or on the left and $\mathds 1\_b$ is the indicator function of having a dimer at *b*. A key point of the definition of the height function is that the right side is independent of the choice of the lattice path from *f* to *f*ʹ (independence follows from the close-packing condition). The family of integrable, close-packed, dimer models that we informally introduced in the previous section is defined by the following partition function: $$\begin{aligned} \label{eq:Z0L} Z^0\_L=\sum\_{M\in{\Omega}\_L}p\_{L,0}(M), \qquad p\_{L,0}(M)=\prod\_{b\in M}t\_{r(b)}, \end{aligned}$$ defined on a discrete torus $\mathbb T\_L$ of side *L* (see ): here Ω*L* is the set of close-packed dimer configurations on the discrete torus, and *r*(*b*) ∈ {1, 2, 3, 4} label the ‘type’ of edge (we let *r* = 1 if the edge is horizontal with the white site on the right, *r* = 2 if it is vertical with the white site on the top, *r* = 3 if it is horizontal with the white site on the left, *r* = 4 if it is vertical with the white site on the bottom). The model is parametrized by the choice *t*1, *t*2, *t*3, *t*4; without loss of generality, we can set *t*4 = 1, and we shall do so in the following. The dimer weights *t**j* play the role of chemical potentials, fixing in particular the average slope: $$\mathbb E\_L[h(f+e\_i)-h(f)]=\rho\_i(t\_1,t\_2,t\_3), \quad i=1,2,$$ where $\mathbb E\_L$ indicates the statistical average w.r.t. the probability measure $\mathbb P\_L$ induced by the weights *p**L*, 0(*M*). As anticipated above, this dimer model is exactly solvable : for example, *Z**L*0 = ∣det*K*∣,  where $K=K(\ul t)$ is a complex adjacency matrix, known as the *Kasteleyn* matrix: its elements are labelled by a pair of sites of different color and are non-zero only if the pair forms a nearest-neighbor edge of the square lattice; the value of the element of *K* corresponding to an edge of type 1, 2, 3, 4 is, correspondingly, *K*1 = *t*1,  *K*2 = *i**t*2,  *K*3 =  − *t*3,  *K*4 =  − *i*. Remarkably, also the multipoint dimer correlation functions can be explicitly computed. For instance, if *b*(*x*, *j*) is the bond of type *j* and black site *x*, the two-point dimer correlation reads (in the special case of two dimers of type 1; similar formulas are valid for the other cases): $$\mathbb E[ {\mathds 1}\_{b(x,1)};{\mathds 1}\_{b(y,1)} ]=-t\_1^2\,K^{-1}(x,y)\,K^{-1}(y,x),$$ where: $\mathbb E$ is the expectation w.r.t. $\mathbb P$, the weak limit of $\mathbb P\_L$ as *L* → ∞; the semi-colon indicates ‘truncation’ (i.e., $\mathbb E(A;B)=\mathbb E(A;B)-\mathbb E(A)\,\mathbb E(B)$); and $$\begin{aligned} \label{K-1} && K^{-1}(x,y)={\int\_{[-\pi,\pi]^2}\frac{d k}{(2\pi)^2}}\frac{e^{-i k({ x}-{ y})}}{\mu(k)}, \nonumber\\ {\rm with} && \mu(k)=t\_1+ it\_2 e^{i k\_1}-t\_3e^{i k\_1+i k\_2}-i e^{i k\_2}.\nonumber\end{aligned}$$ Note that the zeros of *μ*(*k*) lie at the intersection of two circles in the complex plane, i.e., they are defined by the equation $$e^{i k\_2}=\frac{t\_1+i t\_2 e^{i k\_1}}{i+ t\_3 e^{i k\_1}}.$$ ‘Generically’, these two circles intersect transversally in two points: in this situation *K*− 1(*x*, *y*) decays to zero algebraically, like ∣*x* − *y*∣− 1, as ∣*x* − *y*∣ → ∞; this algebraic decay is often referred to by saying that the model is *critical*. Once the two-point dimer correlations are explicitly known, one can compute the fluctuations of the height difference. In particular, the variance w.r.t. $\mathbb P$ of the height difference diverges logarithmically: $$\label{var} {\rm Var}\_{\mathbb P}[h(f)-h(f')]= \frac{1}{\pi^2}\log |f-f'|+O(1)$$ as the distance ∣*f* − *f*ʹ∣ tends to infinity. The pre-factor 1/*π*2 in front of the logarithm is *independent* of *t*1, *t*2, *t*3: this ‘universality’ property is not accidental and is in fact related to a maximality (‘Harnack’) property of the spectral curve of integrable dimer models. The proof of is not trivial; it is based on a sufficiently smart combination of the following ingredients (the ‘smart’ part is in the use of the third ingredient): (1) the definition of height difference, see ; (2) the explicitly formula of $\mathbb E[ \mathds 1\_{b};\mathds 1\_{b'}]$; (3) the path-independence of the height (in order to get a sensible expression in the large distance limit, one first needs to properly deform the two paths from *f* to *f*ʹ involved in the computation of the variance; next, one can pass to a continuum limit, by replacing discrete sums with integrals and by replacing the finite distance dimer-dimer correlation with its large-distance asymptotic behavior, up to error terms that can be explicitly estimated ). Building upon these ingredients one can refine in various directions, in particular one can prove that: * height fluctuations converge to a massless GFF after proper coarse graining and rescaling, * the scaling limit of the height field is conformally covariant. It is very natural to ask whether these features are robust under perturbations of the Gibbs measure that break the determinant structure of Kasteleyn’s solution. This point is discussed in the following section. Interacting dimers: model and main results ========================================== We consider a family of ‘interacting’ dimer models, defined by the following partition function: $$\begin{aligned} \label{eq:ZlL} Z^\l\_L=\sum\_{M\in{\Omega}\_L}p\_{L,\l}(M), \qquad p\_{L,\l}(M)=\Big(\prod\_{b\in M}t\_{r(b)}\Big)e^{{\l} \sum\_{x\in \L}{f}(\t\_x M)},\end{aligned}$$ where: ${\l}$ is the interaction strength (to be thought of as ‘small’), $\L$ is the lattice of black sites in $\mathbb T\_L$, *f* is a local function of the dimer configuration around the origin, and $\t\_x$ is the ‘translation operator’ by the lattice vector *x*. In analogy with the non-interacting case, we let $\mathbb P\_{L,\l}$ be the finite volume Gibbs measure associated with weights $p\_{L,\l}$ and $\mathbb P\_\l$ its infinite-volume limit (existence of the limit is part of the results in ). For a special choice of *f*, the model reduces to the 6-vertex model, which is integrable by Bethe ansatz (but the solution is not determinantal), see. However, generically, the model is non-integrable. Possibly, the simplest choice of *f* that makes the model non-integrable is the ‘plaquette interaction’ previously considered in : $$\label{eq:plaq} f=\mathds 1\_{e\_1}\mathds 1\_{e\_2}+\mathds 1\_{e\_3}\mathds 1\_{e\_4}+\mathds 1\_{e\_1}\mathds 1\_{e\_5}+\mathds 1\_{e\_6}\mathds 1\_{e\_7}$$ where *e*1, …, *e*7 are the edges in Fig. [fig:esempi]. [fig:esempi] Our main results do not depend on the specific choice of *f*. They concern the asymptotic behavior of dimer-dimer correlations and of the fluctuations of the height difference and are stated next. [th:1] Let *t*1, *t*2, *t*3 be such that *μ*(*k*) has two distinct simple zeros, *p*± (in particular, the ratio of $\a\_\o:=\partial\_{k\_1}\mu(p^\o)$ and $\b\_\o:=\partial\_{k\_2}\mu(p^\o)$ is not real). Then, for $\l$ small enough, && Eł[1b(x,r);1b(0,r’)]= 142 ø= [dimdim] &&+142 ø= e-i(|pø-|p-ø) x+|Rr,r’(x),where[2](#fn2): $\bar K\_{\o, r}$, $\bar H\_{\o,r},$ $\bar \a\_\o$, $\bar \b\_\o$, $\bar p^\o$, $\n$ are analytic functions of $\l$, such that $\bar \a\_\o\big|\_{\l=0}=\a\_\o$, $\bar \b\_\o\big|\_{\l=0}=\b\_\o$, $\bar p^\o\big|\_{\l=0}=p^\o$ and $\bar K\_{\o, r}\big|\_{\l=0}=\bar H\_{\o,r}\big|\_{\l=0}=K\_r e^{-i p^\o v\_r}$. Recall that *K**r* were defined in ; moreover, $$\begin{aligned} \label{eq:vr} v\_1=(0,0), v\_2=(-1,0), v\_3=(-1,-1), v\_4=(0,-1). \end{aligned}$$ These constants satisfy the following symmetry relations: $$\begin{aligned} \label{eq:symmab} && \bar \alpha\_\o^\*=-\bar\alpha\_{-\o},\hskip.75truecm \bar\beta\_\o^\*=-\bar\beta\_{-\o} \\ \label{eq:symmK} && \bar K\_{\o, r}^\*=\bar K\_{-\o,r},\quad \bar H\_{\o, r}^\*=\bar H\_{-\o,r},\\ \label{eq:symmp} &&\bar p^++\bar p^-=(\pi,\pi).\end{aligned}$$ Moreover, $$\nu(\l)=1+a\l+O(\l^2)$$ and, generically, *a* ≠ 0. Finally, $\bar R\_{r,r'}(x,x')=O(|x-x'|^{-5/2})$ (the exponent 5/2 could be replaced by any $\d<3$ provided $\l$ is small enough). A few comments are in order: * The proof provides a constructive algorithm for computing $\bar K\_{\o,r}, \bar H\_{\o,r}$, etc, at any desired precision. However, we do not have closed formulas for these quantities, and we do not expect that it is possible to obtain any by other methods. * Generically, the *anomalous exponent* *ν* has a non-zero first order contribution in $\l$: therefore, it is larger or smaller than 1, depending on whether $a \l$ is positive or negative. In particular, the asymptotic, large-distance, behavior of the dimer-dimer correlation is dominated by the first or second term in, depending on whether $a\l$ is positive or negative. * Once we have such a refined asymptotics as, we can plug it into the formula for the height variance, $$\mathbb E\_\l[{h(f)-h(f');h(f)-h(f')}]=\sum\_{b,b'\in C\_{f\to f'}}\sigma\_b\sigma\_{b'}\mathbb E\_\l[ \mathds 1\_{b};\mathds 1\_{b'} ],$$ in order to understand its growth as ∣*f* − *f*ʹ∣ → ∞. As anticipated above, one expects that the growth is still logarithmic, like in the non-interacting case. However, this is not completely straightforward: as remarked in the previous item, if $a\l<0$ the asymptotic behavior of the dimer-dimer correlation is dominated by the second term in, which is characterized by the critical exponent *ν* < 1. Therefore, after double-summation over *b*, *b*ʹ, one may fear that the interacting variance $\mathbb E\_\l[{h(f)-h(f');h(f)-h(f')}]$ grows like ∣*f* − *f*ʹ∣2(1 − *ν*) at large distances, rather than logarithmically. Remarkably, this is not the case, thanks to the oscillating factor $e^{-i(\bar p^\o-\bar p^{-\o}) x}$ in front of the second term of. [th:2] Under the same hypothesis of Theorem 1, $${\rm Var}\_{\mathbb P\_\l}(h(f)-h(f'))= \frac{{A(\lambda)}}{\pi^2}\log |f-f'|+O(1)$$ where $$A(\l)=-\Biggl[\frac{\bar K\_{\o,1}+\bar K\_{\o,2}}{\bar \b\_\o}\Biggr]^2=-\Biggl[\frac{\bar K\_{\o,1}+\bar K\_{\o,4}}{\bar \a\_\o}\Biggr]^2.$$ In general, $A(\l)$ depends explicitly on $\l,t\_1,t\_2,t\_3$ and on the type of interaction potential in. Moreover, $$\label{Ha}{A(\l)}={\n(\l)}.$$ The proof of Theorem [th:2] provides two different algorithms for computing the coefficients of the analytic functions *A* and *ν*. At first order in $\l$, one can check directly the validity of. The computation is very instructive and, as a by-product, it shows that $A(\l)$ depends explicitly on $\l,t\_1,t\_2,t\_3$; see Sect.[sec4] for a proof. However, a direct proof of the validity of *A* = *ν* by direct inspection and comparison of the two power series defining *A* and *ν* seems hopeless (the computation in the next section is a clear indication). The proof of is based on a subtle cancellation mechanism, which uses a comparison between the lattice Ward Identities of the dimer model and the emergent chiral Ward Identities of a ‘reference’ continuum model (the ‘infrared fixed point theory’), see. The identity $A(\l)=\nu(\l)$ is the analogue of one of the scaling relations (the one between compressibility and the density-density critical exponent) proposed by D. Haldane in the context of Luttinger liquids. It is also related to one of the scaling relations among the critical exponents of the 8-vertex and of the Ashkin-Teller model, proposed by L. Kadanoff, see the discussion right before eq.(1.2) of. For this reason, we call it ‘Haldane’ or ‘Kadanoff’ relation. There are not many previous examples of rigorously established Haldane or Kadanoff relations: the known cases are mostly restricted to exactly solved models of interacting fermions or quantum spins (Luttinger model, or the XXZ chain ) and non-integrable perturbations thereof. As stated, Theorem 2 concerns only the asymptotic growth of the variance of the height difference. However, a slight extension of its proof, along the same lines as, proves that, after coarse-graining, *h*(*f*) converges to *ϕ*, the massless GFF of covariance $$\mathbb E\_\l(\phi(x) \phi(y))=-\frac{A(\lambda)}{2\pi^2}\log|x-y|,$$ in the same sense as. Theorem [th:2] and its extension mentioned in the previous item prove in a very strong and sharp sense that the random surface associated with the interacting dimer model is in a rough phase, characterized by logarithmic fluctuations. In this sense, our theorem is of interest in the context of the fluctuation theory of discrete random surface models, which is a classical topic in probability and statistical mechanics. Previous, related, results include the proofs of logarithmic height fluctuations in anharmonic crystals, SOS model, 6-vertex and Ginzburg-Landau type models. The proofs of Theorems [th:1] and [th:2] are hard and lengthy, and we refer the reader to for it. Here we limit ourselves to mention the main steps and ideas involved in the proof: 1. The starting point is a representation of the non-interacting model, characterized by Kasteleyn’s determinantal solution, in terms of a free fermionic theory in dimension *d* = 1 + 1. 2. Next, we provide an exact representation of the interacting dimer model in terms of an interacting fermionic theory in *d* = 1 + 1, the interaction being (at dominant order) a quartic fermionic interaction. In this sense, the interacting dimer model maps into a sort of ‘fermionic *ϕ*24 theory’. 3. The fermionic model, into which the original dimer model has been mapped, can be analyzed via standard fermionic multiscale cluster expansion methods (fermionic constructive RG) due, among others, to Gawedzki-Kupiainen, Lesniewski, Benfatto-Gallavotti, Feldman-Magnen-Rivasseau-Trubowitz. 4. The fermionic RG scheme turns out to be convergent if and only if the infrared flow of the ‘running coupling constants’, controlled by the so-called beta-function equation, is convergent in the limit of large scales. In order to control the RG flow, we compare it with the one of a reference continuum model: some of the ingredients involved in the comparison are the Ward Identities (both for the lattice dimer model and for the continuum reference one), the Schwinger-Dyson equation and the non-renormalization of the anomalies for the reference model. 5. In order to obtain the fine asymptotics of the dimer-dimer correlations, as well as the Haldane relation connecting *A* and *ν*, we need to compare the asymptotic, emergent, chiral Ward Identities of the reference model with the exact lattice Ward Identities of the dimer model, following from the local conservation law of the dimer number, $\sum\_{b\sim x}\mathds 1\_b=1$, with the sum running over edges incident to *x*. The comparison guarantees that the ratio *A*/*ν* is ‘protected’ by symmetry (there is no ‘dressing’ or ‘renormalization’ due to the interaction). We conclude this section by mentioning a few open problems and perspectives: * It would be interesting to study the critical theory in finite domains and, in perspective, prove its conformal covariance. From a technical point of view, going in this direction requires a non-trivial extension of the RG multiscale methods to the non-translationally-invariant setting, and a sharp control of the RG flow of the marginal boundary running coupling constants. Promising results in this direction have been recently obtained in the context of non-planar critical Ising models in the half-plane. * So far, we can only compute the scaling limit of the height function after coarse graining, cf. for instance. It would be very interesting to prove a central limit theorem on a more local scale; i.e., computing the average of *e**i**α*(*h*(*f*) − *h*(*f*ʹ)), instead of computing the characteristic function for the height function integrated against a test function. Similarly, it would be interesting to compute the scaling limit of the monomer-monomer correlations. This problem is not merely technical: it is strictly connected with the computation of the scaling limit of the spin-spin correlations in non-integrable, two-dimensional Ising models, which is currently out of reach of the current techniques. * While we expect the analogs of Theorems [th:1] and [th:2] to be true also for the interacting dimer model on the honeycomb lattice of Fig. [fig1], it is not obvious that the same qualitative behavior holds for interacting dimers on general $\mathbb Z^2$-periodic bipartite planar graphs, with an elementary cell containing more than two vertices. In the non-interacting case, using $\mathbb Z^2$ or any other $\mathbb Z^2$-periodic bipartite planar graph makes essentially no difference. However, the effective fermionic theory is different for different periodic bipartite lattices: the larger the elementary cell, the larger the number of ‘colors’ of the fermionic field associated with the fermionic description of the system. In the presence of interactions, the number of colors is known to affect the qualitative behavior of the system (as an example, compare the behavior of the Luttinger model with that of the 1D Hubbard model ): depending on it and on the specific form of the interaction, the system could display either an anomalous Fermi liquid behavior, or it could open a gap, entering more exotic quantum phases. It would be very interesting to see whether any of these exotic behaviors can arise in interacting dimer models on decorated periodic lattices. * Finally, as suggested by the remarks in the introduction, it would be nice to see whether the current methods can be used to prove the existence of a rough, logarithmically correlated, low-temperature phase of the interface of 3D Ising with tilted Dobrushin boundary conditions. As observed above, the fluctuations of the surface at zero temperature can be mapped exactly into a problem of non-interacting dimers on the hexagonal lattice. It remains to be seen whether low-temperature fluctuations can be effectively described in terms of weakly interacting dimers on the hexagonal lattice. First order computation ======================= The goal of this section is to verify, at first order in $\l$, the validity of the ‘Haldane’, or ‘Kadanoff’, relation,, and to compute explicitly (to fix ideas, for the case of the plaquette interaction ) the constant *a* in the expansion $$\begin{aligned} \label{eq:expansione} \nu(\l)=A(\l)=1+a\l+O(\l^2). \end{aligned}$$ The result is: [eqa]a=-,where we recall that *p*± are the two zeros of *μ*(*k*), which are assumed to be non-degenerate, $\a\_\pm=\partial\_{k\_1}\mu(p^\pm)$ and $\b\_\pm=\partial\_{k\_2}\mu(p^\pm)$. It is straightforward to check that the right side of explicitly depends on the dimer weights: e.g., in the simple case *t*1 = *t*3 = *t*, *t*2 = 1, in which *p*+ = 0, *p*− = (*π*, *π*), one has *a* =  − 2(1 + *t*2)/(*π**t*); in the case *t*1 = *t*2 = *t*3 = 1, the result coincides with the one found in, *a* =  − 4/*π*. We will use the same notations and conventions as in ; we do not repeat here the definitions and we assume the reader has familiarity in particular with and. We recall just that $\mathcal E\_0$ denotes the Grassmann Gaussian measure with propagator (cf. ) $$\begin{aligned} \label{eq:E0} g(x-y):= \mathcal E\_0(\psi^-\_x\psi^+\_y)=K^{-1}(x,y)\end{aligned}$$ For the following, it is convenient to introduce the rescaled coupling constant $$\begin{aligned} \label{eq:u} u:=(t\_1 t\_3+t\_2)\l.\end{aligned}$$ First-order computation of *A* ------------------------------ The two-point dimer correlation function is given as $$\begin{aligned} \label{eq:dd} G^{(0,2)}\_{r,r'}(x,0):=\mathbb E\_\l(\mathds 1\_e;\mathds 1\_{e'})=\lim\_{L\to\infty}\partial\_{A\_e} \partial\_{A\_{e'}} \mathcal W\_\L(A)|\_{A\equiv 0}, \end{aligned}$$ where *e* (resp. *e*ʹ) is the edge of type *r* (resp. *r*ʹ) with black endpoint of coordinates *x* (resp. 0) and $\mathcal W\_\L(A)$ is the moment generating function $$\begin{aligned} \label{eq:momgen} e^{\mathcal W\_\L(A)}:=\sum\_{M\in\Omega\_L}p\_{L,\l}(M)\prod\_e e^{A\_e \mathds 1\_e}.\end{aligned}$$ Our convention for the Fourier transform of the two-point dimer correlation is that $$\begin{aligned} \label{eq:Gfourier} \hat G^{(0,2)}\_{r,r'}(p)=\sum\_x e^{-i p x}G^{(0,2)}\_{r,r'}(x,0).\end{aligned}$$ Since we are interested in the long-distance behavior of correlations, we will look at the small-*p* behavior and in particular at the discontinuity of $\hat G^{(0,2)}\_{r,r'}(p)$ at *p* = 0. We recall that, since we are working on the torus, $\exp(\mathcal W\_\L(A))$ can be written as the linear combination of four Grassmann integrals with non-quadratic action, cf.. For the computation of correlations in the *L* → ∞ limit and at finite order in perturbation theory, we can safely replace with an expression involving *a single* Grassmann integral: $$\begin{aligned} \label{eq:piusemp} e^{\mathcal W\_\L(A)}=\int D\psi e^{S(\psi)+V(\psi,A)},\end{aligned}$$ with *S* and *V* as in. In the case of the plaquette interaction, the potential *V*(*ψ*, *A*) equals (neglecting terms of order $\l^2$ and higher) $$\begin{aligned} \label{eq:Vprimod} V(\psi,A)= -\sum\_e(e^{A\_e}-1)E\_e+\l\sum\_{\gamma=\{e\_1,e\_2\}\subset \Lambda}E\_{e\_1}E\_{e\_2}e^{A\_{e\_1}+A\_{e\_2}}\end{aligned}$$ where the second sum runs over pairs of parallel edges {*e*1, *e*2} on the boundary of the same face. In particular, setting *A* ≡ 0 and using the definition for *E**e* in terms of the Grassmann variables *ψ*± associated to the endpoints of the edge *e*, one finds that the potential is exactly quartic in the Grassmann fields: $$\begin{aligned} \label{eq:V4} V\_4(\psi):= V(\psi,0)&=-u\sum\_x\psi^+\_x\psi^-\_x\left[\psi^+\_{x+(0,1)}\psi^-\_{x-(1,0)}+\psi^+\_{x+(1,0)}\psi^-\_{x-(0,1)}\right]. \end{aligned}$$ From, and we see that the two-point dimer correlation function equals, at first order in *λ* and in the *L* → ∞ limit, $$\begin{aligned} \label{i1}\nonumber G^{(0,2)}\_{r,r'}(x,0)&=\mathcal E\_0( E\_{e};E\_{e'})\\&- \lambda [\mathcal E\_0( E\_{e};I^{(1)}\_{0,r'})+ \mathcal E\_0(I^{(1)}\_{x,r};E\_{e'})]+\mathcal E\_0(E\_{e};E\_{e'};V\_4)\end{aligned}$$ where $\mathcal E\_0(\dots;\dots)$ denotes truncated expectation and $$\begin{gathered} I^{(1)}\_{x,r}=\left\{ \begin{array}{lll} K\_1 K\_3\psi^+\_{x}\psi^-\_x(\psi^+\_{x+(0,1)}\psi^-\_{x-(1,0)}+\psi^+\_{x+(1,0)}\psi^-\_{x-(0,1)}) & \text{ if} & r=1 \\ K\_2 K\_4\psi^+\_{x}\psi^-\_{x+v\_2}(\psi^+\_{x+(0,1)}\psi^-\_{x}+\psi^+\_{x-(1,0)}\psi^-\_{x-(1,1)})& \text{ if} & r=2 \\ K\_1 K\_3\psi^+\_{x}\psi^-\_{x+v\_3}(\psi^+\_{x-(1,0)}\psi^-\_{x-(1,0)}+\psi^+\_{x-(0,1)}\psi^-\_{x-(0,1)}) &\text{ if} & r=3 \\ K\_2 K\_4\psi^+\_{x}\psi^-\_{x+v\_4}(\psi^+\_{x+(1,0)}\psi^-\_{x}+\psi^+\_{x-(0,1)}\psi^-\_{x-(1,1)})&\text{ if} & r=4. \end{array} \right. \end{gathered}$$ For *λ* = 0 one finds from, and from Lemma [lemma:formulozzo] below $$\begin{aligned} \label{eq:mardepanza} \left. \hat G^{(0,2)}\_{r,r'}(p)\right|\_{\lambda=0}&=-K\_r K\_{r'}{\int\_{[-\pi,\pi]^2}\frac{d k}{(2\pi)^2}}\frac{e^{-i k v\_{r}-i (k+p)v\_{r'}}}{\mu(k)\mu(k+p)}\\\nonumber&= -\frac i{2\pi}\frac{K\_r K\_{r'}} {\alpha\_+\beta\_--\alpha\_-\beta\_+}\sum\_{\omega=\pm}\frac{D\_{-\omega}(p)}{D\_\omega(p)}e^{-i p^\omega(v\_r+v\_{r'})}+R(p).\end{aligned}$$ Here and in the following, *R*(*p*) denotes a function that is continuous at *p* = 0 (the precise value of *R*(*p*) can change from line to line). In, $v\_r\in \mathbb Z^2$, *r* = 1, …, 4, is as in while $$\begin{aligned} \label{eq:Domega} D\_\omega(p)=\a\_\o p\_1+\b\_\o p\_2,\quad \o=\pm. \end{aligned}$$ Next we compute the first-order contribution $$- \lambda [\mathcal E\_0( E\_{e};I^{(1)}\_{0,r'})+ \mathcal E\_0(I^{(1)}\_{x,r};E\_{e'})]$$ in. As explained at the beginning of, by the fermionic Wick theorem, we can replace *I**x*, *r*(1) by its “linearization” $\bar I^{(1)}\_{x,r}$ obtained by contracting in all possible ways two of its four *ψ* fields. For instance, with *g*( ⋅ ) as in, $$\begin{gathered} \label{eq:Ibar1} \overline I^{(1)}\_{x,1}=K\_1K\_3\left[-g(v\_1)\left(\psi^+\_{x+(0,1)}\psi^-\_{x-(1,0)}+\psi^+\_{x+(1,0)}\psi^-\_{x-(0,1)} \right)\right.\\+g(v\_2)\left(\psi^+\_{x+(0,1)}\psi^-\_x+\psi^+\_x\psi^-\_{x-(0,1)} \right)\\ \left.+g(v\_4)\left(\psi^+\_{x+(1,0)}\psi^-\_x+\psi^+\_x\psi^-\_{x-(1,0)} \right) -2g(v\_3)\psi^+\_x\psi^-\_x \right]. \end{gathered}$$ In Fourier space (with the conventions of for the Grassmann fields in momentum space) one has $$\label{eq:Ibar1F} \overline I^{(1)}\_{x,r}={\int\_{[-\pi,\pi]^2}\frac{d k}{(2\pi)^2}}{\int\_{[-\pi,\pi]^2}\frac{d p}{(2\pi)^2}}e^{i p x}\hat\psi^+\_{k+p}W\_r(k,p)\hat\psi^-\_k$$ where $$\begin{gathered} \label{eq:W1} W\_1(k,p)=K\_1K\_3\left[-g(v\_1)\left(e^{i(k\_1+k\_2+p\_1)}+e^{i(k\_1+k\_2+p\_2)} \right)\right.\\+g(v\_2)\left(e^{i(k\_2+p\_2)}+e^{i k\_2} \right) \left.+g(v\_4)\left(e^{i(k\_1+p\_1)}+e^{i k\_1} \right) -2g(v\_3) \right].\end{gathered}$$ Similar formulas hold for *r* = 2, 3, 4, with *W*1(*k*, *p*) replaced by *W**r*(*k*, *p*). One easily checks (for *r* = 1 this can be verified immediately from ) that $$\begin{gathered} \label{eq:W0} W\_r(k,0)=2 t\_r t\_{r+2}W(k),\\ W(k)=\left[g(v\_1)e^{i(k\_1+k\_2)}-g(v\_2)e^{ik\_2}-g(v\_4)e^{i k\_1} +g(v\_3) \right]\\= {\int\_{[-\pi,\pi]^2}\frac{d k'}{(2\pi)^2}}\frac{(e^{ik\_1}-e^{i k\_1'})(e^{ik\_2}-e^{i k\_2'})}{\mu(k')}\end{gathered}$$ with the convention that *t*4 = 1 and that $t\_r:=t\_{r\!\!\mod \!4}$ if *r* > 4. Note that $$\begin{aligned} \label{eq:nota0} {W(p^+)}^\*=W(p^-),\end{aligned}$$ because *p*+ + *p*− = (*π*, *π*) and $g(v\_1),g(v\_2)\in \mathbb R$, while $g(v\_2),g(v\_4)\in i\mathbb R$. Using and, together with $$\begin{aligned} \label{eq:I0F} E\_{e}=K\_r{\int\_{[-\pi,\pi]^2}\frac{d k}{(2\pi)^2}}{\int\_{[-\pi,\pi]^2}\frac{d p}{(2\pi)^2}}e^{i p x}\hat \psi^+\_{k+p}\hat\psi^-\_ke^{-i k v\_r}, \end{aligned}$$ (if *e* is, as above, of type *r* and with black vertex of coordinates *x*) we see that $$\begin{gathered} \label{eq:vestooss} - \lambda [\mathcal E\_0( E\_{e};I^{(1)}\_{0,r'})+ \mathcal E\_0(I^{(1)}\_{x,r};E\_{e'})](p) \\= 2 \l{\int\_{[-\pi,\pi]^2}\frac{d k}{(2\pi)^2}}\frac1{\mu(k)\mu(k+p)}\times\\ \times[K\_r t\_{r'}t\_{r'+2}e^{-i k v\_r}+K\_{r'}t\_r t\_{r+2}e^{-i kv\_{r'}}]W(k) +R(p).\end{gathered}$$ Thanks to Lemma [lemma:formulozzo], we can rewrite as $$\begin{gathered} \label{eq:vestooss2} \frac{i\l}{\pi(\alpha\_+\beta\_--\alpha\_-\beta\_+)}\times \\ \nonumber\times \sum\_{\o=\pm}\frac{D\_{-\o}(p)}{D\_\o(p)}\left[ K\_r t\_{r'}t\_{r'+2}e^{-i p^\o v\_r}+K\_{r'} t\_r t\_{r+2}e^{-i p^\o v\_{r'}} \right]W(p^\o) +R(p).\end{gathered}$$ It remains to compute the term $\mathcal E\_0(E\_{e};E\_{e'};V\_4)$ in. Applying Wick’s theorem, we see that either *N* = 0 or *N* = 2 of the four fields of *V*4 are contracted among themselves, the remaining 4 − *N* ones being contracted with fields from *E**e* or *E**e*ʹ. Here we compute the contribution, call it *a**r*, *r*ʹ(0, 2)(*x*, 0), from those terms where *N* = 2. The contribution *b**r*, *r*ʹ(0, 2)(*x*, 0) from the terms with *N* = 0 is computed later. We note that $$\begin{aligned} \label{N2} \mathcal E\_0( E\_{e};E\_{e'})+\lambda a^{(0,2)}\_{r,r'}(x,0)= \mathcal E\_\l( E\_{e};E\_{e'})+O(\lambda^2)\end{aligned}$$ where $\mathcal E\_\l(\cdot)$ is the Gaussian Grassmann measure where the action *S*(*ψ*) =  − ∑*e**E**e* has been replaced by $S(\psi)+ \overline V\_4$, with $\overline V\_4$ the linearization of *V*4 (i.e. the bilinear operator obtained by contracting in every possible way two of the four fields of *V*4, as above). In our case, $$\begin{gathered} \bar V\_4=-2u\sum\_z[-g(v\_1)\psi^+\_{z}\psi^-\_{z-(1,1)}+g(v\_2)\psi^+\_{z}\psi^-\_{z-(0,1)} \\-g(v\_3)\psi^+\_z\psi^-\_z+g(v\_4)\psi^+\_z\psi^-\_{z-(1,0)}].\end{gathered}$$ Then, we see that the measure $\mathcal E\_\l( \cdot)$ is nothing but the Gaussian Grassmann measure where *μ*(*k*) is replaced by $$\bar \mu(k):=\mu(k)-2 u W(k)$$ and *W*(*k*) was defined in. As already mentioned in Theorem [th:1], the ratio of complex numbers $\alpha\_\o,\b\_\o$ is not real ; therefore we can write uniquely *W*(*p**ω*) as $$\begin{aligned} \label{eq:fpo} W(p^\omega)=c^\omega\_1\alpha\_\omega+c^\omega\_2\beta\_\omega \text{ with } c^\omega\_1,c^\omega\_1\in \mathbb R.\end{aligned}$$ Via Taylor expansion, we have then (always at first order in $\l$) $$\begin{aligned} \label{eq:Taylor} \bar \mu(k)=\bar\alpha\_\omega(k\_1-\bar p^\omega\_1)+\bar\beta\_\omega(k\_2-\bar p^\omega\_2)+O(|k-\bar p^\omega|^2)\end{aligned}$$ where the zeros of $\bar\mu(\cdot)$ are $$\begin{aligned} \label{eq:ptilde} \bar p^\omega\_j=p^\omega\_j+2u c^\omega\_j, \quad j=1,2\end{aligned}$$ and $$\begin{gathered} \label{a+b+} \bar \alpha\_\omega=\alpha\_\omega+2u[ -\partial\_{k\_1}W(p^\omega)+c^\omega\_1\partial^2\_{k\_1}\mu(p^\omega)+c^\omega\_2\partial^2\_{k\_1 k\_2}\mu(p^\omega)]\\ \bar \beta\_\omega=\beta\_\omega+2u[- \partial\_{k\_2}W(p^\omega)+c^\omega\_2\partial^2\_{k\_2}\mu(p^\omega)+c^\omega\_1\partial^2\_{k\_1 k\_2}\mu(p^\omega)].\end{gathered}$$ Explicitly, $$\begin{aligned} \nn \partial\_{k\_1}W(p^\omega)&= i [g(v\_1)e^{i (p^\o\_1+p^\o\_2)}-g(v\_4)e^{i p^\o\_1}]\\ \nn \partial\_{k\_2}W(p^\omega)&= i [g(v\_1)e^{i (p^\o\_1+p^\o\_2)}-g(v\_2)e^{i p^\o\_2}]\\ \label{adm} \partial^2\_{k\_1}\mu(p^\omega)&=i\alpha\_\o, \quad \partial^2\_{ k\_2}\mu(p^\omega)=i\beta\_\o, \quad \partial^2\_{k\_1k\_2}\mu(p^\omega)=t\_3 e^{i(p^\o\_1+p^\o\_2)}. \end{aligned}$$ This implies in particular the symmetry at first order in $\l$. Also, from it follows that $c^\o\_j=-c^{-\o}\_j$ so that holds a first order. Altogether, equals in Fourier space, for small *p* and disregarding $O(\l^2)$ terms, $$\begin{aligned} \label{eq:ordine0dress} -\frac i{2\pi}\frac{K\_r K\_{r'}} {\bar \alpha\_+\bar\beta\_--\bar \alpha\_-\bar \beta\_+}\sum\_{\o=\pm}\frac{\bar D\_{-\omega}(p)}{\bar D\_\omega(p)}e^{-i \bar p^\omega(v\_r+v\_{r'})}+R(p)\end{aligned}$$ where $\bar D\_\omega$ is defined as with $\alpha\_\o,\beta\_\o$ replaced by $\bar\alpha\_\o,\bar\beta\_\o$. Finally, we compute the contribution *b**r*, *r*ʹ(0, 2)(*p*) to the Fourier transform of $\mathcal E\_0(E\_{x,r};E\_{0,r'};V\_4) $ where none of the fields of *V*4 are contracted among themselves. First we write in Fourier space $$\begin{gathered} V\_4=-u{\int\_{[-\pi,\pi]^2}\frac{d k}{(2\pi)^2}}{\int\_{[-\pi,\pi]^2}\frac{d k'}{(2\pi)^2}}{\int\_{[-\pi,\pi]^2}\frac{d p}{(2\pi)^2}}\hat\psi^+\_{k+p}\hat\psi^-\_k\hat\psi^+\_{k'-p}\hat\psi^-\_{k'}\\ \times(e^{i(k'\_1+k'\_2-p\_2)}+e^{i(k'\_1+k'\_2-p\_1)})\\ =-\frac u4 {\int\_{[-\pi,\pi]^2}\frac{d k}{(2\pi)^2}}{\int\_{[-\pi,\pi]^2}\frac{d k'}{(2\pi)^2}}{\int\_{[-\pi,\pi]^2}\frac{d p}{(2\pi)^2}}\hat\psi^+\_{k+p}\hat\psi^-\_k\hat\psi^+\_{k'-p}\hat\psi^-\_{k'}W(k,k',p)\end{gathered}$$ where $$\begin{gathered} W(k,k',p)= e^{i(k\_1'+k\_2'-p\_2)}+e^{i(k\_1'+k\_2'-p\_1)} +e^{i(k\_1+k\_2+p\_2)} +e^{i(k\_1+k\_2+p\_1)}\\ -e^{i(k\_2+k'\_1+p\_2)}-e^{i(k\_1+k'\_2+p\_1)}-e^{i(k\_2'+k\_1-p\_2)}-e^{i(k\_1'+k\_2-p\_1)}.\label{eqW}\end{gathered}$$ The second expression for *V*4 is obtained by symmetrizing over the four possible ways of ordering the fields $\hat\psi^+\_{k+p}\hat\psi^-\_k\hat\psi^+\_{k'-p}\hat\psi^-\_{k'}$ in such a way that the order of the upper indices is ( + ,  − ,  + ,  − ). Therefore, *b**r*, *r*ʹ(0, 2)(*p*) equals $$\begin{aligned} -u K\_r K\_{r'}{\int\_{[-\pi,\pi]^2}\frac{d k}{(2\pi)^2}}{\int\_{[-\pi,\pi]^2}\frac{d k'}{(2\pi)^2}}W(k,k',p)\times\label{b02}\\ \times\frac{e^{-i(k'-p)v\_r-i(k+p)v\_{r'}}}{\mu(k)\mu(k+p)\mu(k')\mu(k'-p)}. \nonumber\end{aligned}$$ Note that $$\begin{aligned} \label{eq:W2} W(k,k',0)=2(e^{i k\_1}-e^{i k'\_1})(e^{i k\_2}-e^{i k'\_2}).\end{aligned}$$ Then, $$\begin{aligned} \label{eq:dopbol} b^{(0,2)}\_{r,r'}(p)= -2 u K\_r K\_{r'}{\int\_{[-\pi,\pi]^2}\frac{d k}{(2\pi)^2}}{\int\_{[-\pi,\pi]^2}\frac{d k'}{(2\pi)^2}}\times\\ \times\frac{e^{-ik'v\_r-ikv\_{r'}}(e^{i k\_1}-e^{i k'\_1})(e^{i k\_2}-e^{i k'\_2})}{\mu(k)\mu(k+p)\mu(k')\mu(k'-p)}+R(p).\end{aligned}$$ From this expression we want to extract the term that is discontinuous at *p* = 0. Expanding the product in we see that the integral is given by a sum of combinations of integrals of the type $\mathcal I\_{a}$ as in Lemma [lemma:formulozzo] below. Applying to each of the integrals, it is easily checked that the terms $(D\_{-\o}(p)/D\_\o(p))^2$ cancel and one is left with[3](#fn3) $$\begin{gathered} \label{eq:dopbolfor} -\frac{i u K\_r K\_{r'}}{\pi(\alpha\_+\beta\_--\alpha\_-\beta\_+)}\sum\_{\o=\pm}\frac{D\_{-\o}(p)}{D\_\o(p)}(e^{-i p^\o v\_{r'}} U^\o\_r+e^{-i p^\o v\_{r}}U\_{r'})+R(p),\\ U^\o\_r= e^{i p^\o(1,1)}C(-v\_r)+C(-v\_{r}+(1,1))\\-e^{i p^\o(1,0)}C(-v\_r+(0,1))-e^{i p^\o(0,1)}C(-v\_r+(1,0)).\end{gathered}$$ Using we see that $$\begin{aligned} \label{eq:nota7} {(U^\o\_r)^\*}=U^{-\o}\_r\end{aligned}$$ and from one checks that $$\begin{aligned} \nn U^+\_1&=&\frac{e^{i p^+\_2}}{2\pi}\int\_{p^-\_1}^{p^+\_1+2\pi}d\theta \frac{e^{i p^+\_1}-e^{i\theta}}{(K\_1+K\_2 e^{i\theta})^2}\\ \label{eq:U1} &&-\frac{2i}{\pi(\alpha\_+\beta\_--\alpha\_-\beta\_+)}\frac{\beta\_+}{\beta\_-}\cos(p^+\_1)\cos(p^+\_2)\\\nn U^+\_2&=&\frac{e^{i p^+\_2}}{2\pi}\int\_{p^-\_1}^{p^+\_1+2\pi}d\theta \frac{e^{i\theta}(e^{i p^+\_1}-e^{i\theta})}{(K\_1+K\_2 e^{i\theta})^2}\\ &&-\frac{2i}{\pi(\alpha\_+\beta\_--\alpha\_-\beta\_+)}e^{i p^-\_1}\frac{\beta\_+}{\beta\_-}\cos(p^+\_1)\cos(p^+\_2)\\\nn U^+\_3&=&\frac{1}{2\pi}\int\_{p^+\_1}^{p^-\_1}d\theta \frac{e^{i\theta}(e^{i \theta}-e^{i p^+\_1})}{(-K\_3e^{i\theta}-K\_4)^2}\\ &&-\frac{2i}{\pi(\alpha\_+\beta\_--\alpha\_-\beta\_+)}e^{i p^-\_1+ip^-\_2}\frac{\beta\_+}{\beta\_-}\cos(p^+\_1)\cos(p^+\_2)\\\nn U^+\_4&=&\frac{1}{2\pi}\int\_{p^+\_1}^{p^-\_1}d\theta \frac{e^{i \theta}-e^{i p^+\_1}}{(K\_3e^{i\theta}+K\_4)^2}\\ &&-\frac{2i}{\pi(\alpha\_+\beta\_--\alpha\_-\beta\_+)}e^{i p^-\_2}\frac{\beta\_+}{\beta\_-}\cos(p^+\_1)\cos(p^+\_2).\end{aligned}$$ Summarizing, we obtained (dropping as usual the terms $O(\l^2)$) $$\label{eq:cobinaz} \hat G^{(0,2)}\_{r,r'}(p)=-\frac{i}{2\pi(\bar\alpha\_+\bar\beta\_--\bar\alpha\_-\bar\beta\_+)}\sum\_{\o=\pm}\frac{\bar D\_{-\o}(p)}{\bar D\_\o(p)}\bar K\_{\o,r }\bar K\_{\o,r'} e^{-i \bar p^\o(v\_r+v\_{r'})}+R(p)$$ with $$\begin{aligned} \label{eq:Ktilde} \bar K\_{\o,r}:= K\_r\left(e^{-i \bar p^\o v\_r}-\frac{2\lambda t\_r t\_{r+2}}{K\_{r}}W(p^\o)+2 u U^\o\_r\right).\end{aligned}$$ Thanks to, the prefactor of is real and $$\left(\frac{\bar D\_-(p)}{\bar D\_+(p)}\right)^\*=\frac{\bar D\_+(p)}{\bar D\_-(p)}.$$ Finally, using, and, we see that the symmetry holds at first order in $\l$, so that reduces to $$\begin{aligned} \label{eq:combinaz} \hat G^{(0,2)}\_{r,r'}(p)=-\frac{i}{\pi(\bar\alpha\_+\bar\beta\_--\bar\alpha\_-\bar\beta\_+)}\Re\left[\frac{\bar D\_{-}(p)}{\bar D\_+(p)}\bar K\_{+,r} \bar K\_{+,r'} \right]+R(p).\end{aligned}$$ As discussed in, this asymptotic behavior of the Fourier transform of the dimer-dimer correlation for *p* → 0 is equivalent to the asymptotic behavior for large distances of the first line of (the part proportional to $\bar H\_{\o,r}$, instead, in momentum space around *p* = 0 can be absorbed into the error term *R*(*p*), because of the oscillating prefactor). In order to prove at first order that the variance of the height difference grows logarithmically at large distances, as stated in Theorem [th:2], we need to show that (see ) $$\begin{aligned} \label{eq:sespera} \frac{\Delta\_2}{\bar \alpha\_+}:=\frac{ \sum\_{r\in\{1,4\}}\bar K\_{+,r} }{\bar \alpha\_+}= \frac{\Delta\_1}{\bar \beta\_+}:= \frac{ \sum\_{r\in\{1,2\}}\bar K\_{+,r} }{\bar \beta\_+}\end{aligned}$$ at order *λ*. This ratio is nothing but the first-order expansion of $i \sqrt{A(\lambda)}$, cf. Theorem [th:2]. Since for *λ* = 0 both ratios equal *i* we write, at order $\l$, $$\begin{aligned} \bar\alpha\_+&=\alpha\_++2u \alpha\_+^{(1)},\qquad\qquad\qquad \bar\beta\_+=\beta\_++2u \beta\_+^{(1)},\\ \Delta\_1&=i \beta\_++2u \Delta\_1^{(1)},\qquad\qquad\quad\ \Delta\_2=i\alpha\_++2u\Delta^{(1)}\_2,\end{aligned}$$ with *α*+(1), *β*+(1), Δ1(1), Δ1(2) four *λ*-independent constants. Then, we need to prove that $$\begin{aligned} \label{eq:sp2} (\Delta^{(1)}\_2-i\alpha\_+^{(1)})\beta\_+- (\Delta^{(1)}\_1-i\beta\_+^{(1)})\alpha\_+=0.\end{aligned}$$ From and it follows that $$\begin{gathered} \label{eq:a+b+bis} \alpha\_+^{(1)}= -i g(v\_1)e^{i(p^+\_1+p^+\_2)}+ig(v\_4)e^{i p^+\_1}+i \alpha\_+ c^+\_1 + c^+\_2 t\_3 e^{i (p^+\_1+p^+\_2)}\\ \beta\_+^{(1)}=-i g(v\_1)e^{i(p^+\_1+p^+\_2)}+i g(v\_2)e^{i p^+\_2}+i \beta\_+ c^+\_2+c^+\_1 t\_3 e^{i (p^+\_1+p^+\_2)}\end{gathered}$$ and from and we see that $$\begin{aligned} \label{eq:delta1} \Delta\_2^{(1)}&=&-W(p^+)+K\_1 U^+\_1+K\_4 U^+\_4+i c^+\_2 K\_4 e^{i p^+\_2}\\ \Delta\_1^{(1)}&=&-W(p^+)+K\_1 U^+\_1+K\_2 U^+\_2+i c^+\_1 K\_2 e^{i p^+\_1}.\end{aligned}$$ Therefore, $$\begin{gathered} \Delta\_2^{(1)}-i \alpha\_+^{(1)}=K\_1 U^+\_1+K\_4 U^+\_4-g(v\_1)e^{i(p^+\_1+p^+\_2)}+g(v\_4)e^{i p^+\_1}\\ \Delta\_1^{(1)}-i \beta\_+^{(1)}=K\_1 U^+\_1+K\_2 U^+\_2-g(v\_1)e^{i(p^+\_1+p^+\_2)}+g(v\_2)e^{i p^+\_2}\end{gathered}$$ where we used *c*1+*α*+ + *c*2+*β*+ = *W*(*p*+) as in. Then, using the definition of *U**r*+, the l.h.s. of is $$\begin{gathered} \label{eq:cafebriosc} -\frac2\pi\frac{\beta\_+}{\beta\_-}\cos(p^+\_1)\cos(p^+\_2)\\-g(v\_1)e^{i(p^+\_1+p^+\_2)}(\beta\_+-\alpha\_+)+\beta\_+g(v\_4)e^{i p^+\_1}-\alpha\_+g(v\_2)e^{i p^+\_2}\\ -\alpha\_+\frac{e^{ip^+\_2}}{2\pi}\int\_{p^-\_1}^{p^+\_1+2\pi}\frac{e^{i p^+\_1}-e^{i\theta}}{K\_1+K\_2 e^{i\theta}}d\theta +\beta\_+K\_1\frac{e^{i p^+\_2}}{2\pi}\int\_{p^-\_1}^{p^+\_1+2\pi}\frac{e^{i p^+\_1}-e^{i\theta}}{[K\_1+K\_2 e^{i\theta}]^2}d\theta\\ +\beta\_+ K\_4\frac1{2\pi}\int\_{p^+\_1}^{p^-\_1}\frac{e^{i\theta}-e^{i p^+\_1}}{[K\_3 e^{i\theta}+K\_4]^2}d\theta.\end{gathered}$$ A simple application of the residue theorem shows that $$\begin{aligned} \label{eq:Goo} g(v\_1)=\frac1{2\pi}\int\_{p^-\_1}^{p^+\_1+2\pi}\frac1{K\_1+K\_2 e^{i\theta}}d\theta\\ g(v\_2)=\frac1{2\pi}\int\_{p^-\_1}^{p^+\_1+2\pi}\frac{e^{i\theta}}{K\_1+K\_2 e^{i\theta}}d\theta\\ g(v\_4)=\frac1{2\pi}\int\_{p^+\_1}^{p^-\_1}\frac1{K\_3e^{i\theta}+K\_4}d\theta.\end{aligned}$$ Therefore, the terms proportional to *α*+ in cancel and one is left with $$\begin{gathered} \label{eq:Goo1} -\frac2\pi\frac{\beta\_+}{\beta\_-}\cos(p^+\_1)\cos(p^+\_2)\\-e^{i(p^+\_1+p^+\_2)}\beta\_+\frac1{2\pi}\int\_{p^-\_1}^{p^+\_1+2\pi}\frac1{K\_1+K\_2 e^{i\theta}}d\theta+\beta\_+e^{i p^+\_1}\frac1{2\pi}\int\_{p^+\_1}^{p^-\_1}\frac1{K\_3e^{i\theta}+K\_4}d\theta\\ +\beta\_+K\_1\frac{e^{i p^+\_2}}{2\pi}\int\_{p^-\_1}^{p^+\_1+2\pi}\frac{e^{i p^+\_1}-e^{i\theta}}{[K\_1+K\_2 e^{i\theta}]^2}d\theta\\ +\beta\_+ K\_4\frac1{2\pi}\int\_{p^+\_1}^{p^-\_1}\frac{e^{i\theta}-e^{i p^+\_1}}{[K\_3 e^{i\theta}+K\_4]^2}d\theta \\ = -\frac2\pi\frac{\beta\_+}{\beta\_-}\cos(p^+\_1)\cos(p^+\_2) +\beta\_+\frac{K\_4+K\_3 e^{i p^+\_1}}{2\pi i }\int\_{e^{i p^+\_1}}^{e^{i p^-\_1}}\frac{dz} {(K\_3z+K\_4)^2}\\ -\beta\_+(K\_1+K\_2 e^{i p^+\_1})\frac{e^{i p^+\_2}}{2\pi i}\int\_{e^{i p^-\_1}}^{e^{i p^+\_1}}\frac{dz }{(K\_1+K\_2 z)^2}.\end{gathered}$$ Computing the integrals in the complex plane and using the explicit expressions for $\alpha\_\o,\beta\_\o$, which follow from their definition (see the statement of Theorem 1), one finds that is zero, so that holds. In addition, one sees that the ratio is given by $$\label{sqrtA} i \left( 1+\frac{4u}{\pi i} \frac{\cos(p^+\_1)\cos(p^+\_2)}{\alpha\_+\beta\_--\alpha\_-\beta\_+} \right)+O(\l^2).$$ Recalling that this ratio equals $i\sqrt{A(\l)}$ and that $\alpha\_\o^\*=-\alpha\_{-\o},\beta^\*\_\o=-\beta\_{-\o}$, we find that $A(\l)=1+a\l+O(\l^2)$, with *a* as in. First-order computation of $\nu(\l)$ ------------------------------------ By arguing like in, in order to compute the first order contribution to *ν*, it is enough to extract the most divergent part, as *p* → *p*− − *p*+, of the first order contribution to $\hat G^{(0,2)}\_{r,r'}(p)$. For definiteness, we assume, generically, that *p*+ − *p*− ≠ *p*− − *p*+ mod 2*π* (the complementary case, that corresponds to zero average tilt, can be treated analogously). The coefficient *ν*1 in the expansion $\nu(\l)=1+\nu\_1\l+O(\l^2)$ can be read from: [comp]G(0,2)r,r’(p--p++q)= (|q|)2+ …,where the dots indicate lower order terms in $\l$ or in *q*, as *q* → 0; i.e., they indicate terms that are either $O(\l^2)$, or less divergent than (log∣*q*∣)2 as *q* → 0. By inspection (see the analogous discussion a few lines before ), the only term that diverges like (log∣*q*∣)2 as *q* → 0 comes from *b**r*, *r*ʹ(0, 2), see ; setting, e.g., *r* = *r*ʹ = 1, $$\begin{aligned} &&\hat G^{(0,2)}\_{1,1}(p^{-}-p^++q)=-u t\_1^2 {\int\_{[-\pi,\pi]^2}\frac{d k}{(2\pi)^2}}{\int\_{[-\pi,\pi]^2}\frac{d k'}{(2\pi)^2}}W(k,k',p^--p^++q)\times\nonumber\\ &&\qquad \qquad \times \frac{1}{\mu(k)\mu(k+p^--p^++q)\mu(k')\mu(k'-p^-+p^+-q)}+ \ldots,\nonumber\end{aligned}$$ where the dots indicate lower order terms, as explained above. The dominant contribution to the right side comes from the region where *k* is close to *p*+ and *k*ʹ is close to *p*−. By explicitly computing the dominant contribution to the integrand in this region we find: $$\begin{aligned} &&\hat G^{(0,2)}\_{r,r'}(p^{-}-p^++q)=-\frac{u t\_1^2}{16\pi^4} \int\_{[-\e,\e]^2}\!\!\! ds \int\_{[-\e,\e]^2} \!\!\!ds'\ W(p^+,p^-,p^--p^+) \times\nonumber\\ &&\qquad \qquad \times \frac{1}{D\_+(s)D\_-(s+q) D\_-(s')D\_+(s'-q)}+ \ldots,\nonumber\end{aligned}$$ where $\e$ is an arbitrary, small enough, positive number. Recalling and the fact that *p*+ + *p*− = (*π*, *π*), we find: *W*(*p*+, *p*−, *p*− − *p*+) =  − 2(*e**i**p*1+ − *e**i**p*1−)(*e**i**p*2+ − *e**i**p*2−) =  − 8cos(*p*1+) cos(*p*2+). Moreover, $$\int\_{[-\e,\e]^2}\!\!\! ds\ \frac{1}{D\_+(s)D\_-(s+q)}=\frac{4i\pi}{(\a\_+\b\_--\a\_-\b\_+)}\log|q|+\ldots$$ Putting things together we find: $$\hat G^{(0,2)}\_{r,r'}(p^{-}-p^++q)=-\frac{8u t\_1^2\cos(p\_1^+)\,\cos(p\_2^+)}{\pi^2(\a\_+\b\_--\a\_-\b\_+)^2}(\log|q|)^2+\ldots$$ By comparing with, we find, as desired, $$\nu\_1 = \frac{8(u/\l)}{\pi i}\frac1{\a\_+\b\_--\a\_-\b\_+}\cos(p\_1^+)\,\cos(p\_2^+).$$ A useful integral formula ------------------------- [lemma:formulozzo] Let $a=(a\_1,a\_2)\in \mathbb Z^2$ and $$\begin{aligned} \label{eq:I} \mathcal I\_{a}(p)={\int\_{[-\pi,\pi]^2}\frac{d k}{(2\pi)^2}}\frac{e^{i k a}}{\mu(k)\mu(k+p)}.\end{aligned}$$ Assume that cos(*p*1+) > 0. Then, one has: $$\label{eq:Ibis} \mathcal I\_{a}(p)=\frac i{2\pi}\frac1{\alpha\_+\beta\_--\alpha\_-\beta\_+}\sum\_{\omega=\pm}\frac{D\_{-\omega}(p)}{D\_\omega(p)}e^{ip^\omega a} +C(a)+R\_0(p)$$ where *R*0(*p*) tends continuously to zero as *p* → 0 and $$\begin{gathered} C(a)=-\frac i{2\pi}\frac1{\alpha\_+\beta\_--\alpha\_-\beta\_+}\sum\_{\omega=\pm}\frac{\beta\_{-\omega}}{\beta\_\omega}e^{ip^\omega a}\\ +\frac{(1-a\_2)}{2\pi}\int\_{0}^{2\pi}d\theta e^{i \theta a\_1}\frac{(K\_1+K\_2 e^{i \theta})^{a\_2-2}}{(-K\_3e^{i \theta}-K\_4)^{a\_2}}\\\times \left[1\_{\{a\_2\le 0\}} 1\_{\{p\_1^-\le \theta\le p\_1^++2\pi\}}-1\_{\{a\_2\ge 1\}}1\_{\{p\_1^+\le \theta\le p\_1^-\}}\right].\end{gathered}$$ The proof is obtained with a straightforward although a bit lengthy application of the residue theorem; we skip details. Note also that $ ({\mathcal I\_{a}(p)})^\*=(-1)^{a\_1+a\_2}\mathcal I\_{a}(-p)$ so that $$\begin{aligned} \label{eq:nota6} C(a)^\*=(-1)^{a\_1+a\_2}C(a).\end{aligned}$$ **Acknowledgements.** This review is based on a longstanding collaboration with Vieri Mastropietro, whom we thank for countless inspiring discussions. This work has been supported by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (ERC CoG UniCoSM, grant agreement n.724939). F.T. was partially supported by the CNRS PICS grant 151933, by ANR-15-CE40-0020-03 Grant LSD, ANR-18-CE40-0033 Grant DIMERS and by Labex MiLyon (ANR-10-LABX-0070). This work was started during a long-term stay of A.G. at Univ. Lyon-1, co-funded by Amidex and CNRS. 999999 M. Aizenman, H. Duminil-Copin, V. Tassion, S. Warzel: *Emergent Planarity in two-dimensional Ising Models withfinite-range Interactions*, to appear on Invent. Math., arXiv:1801.04960 I.T. Alam, M.T. Batchelor: *Integrability as a consequence of discrete holomorphicity: loop models*, J. Phys. A **47**, 215201 (2014). F. Alet, J. Jacobsen, G. Misguich, V. Pasquier, F. Mila, M. Troyer: *Interacting Classical Dimers on the Square Lattice*, Phys. Rev. Lett. **94**, 235702 (2005); F. Alet, Y. Ikhlef, J. Jacobsen, G. Misguich, V. Pasquier: *Classical dimers with aligning interactions on the square lattice*, Phys. Rev. E **74** (2006), 041124. R. Cerf, R. Kenyon: *The Low-Temperature Expansion of the Wulff Crystal in the 3D Ising Model*, Comm. Math. Phys. **222** (2001), 149-179. G. Antinucci, A. Giuliani, R. Greenblatt, in preparation. R. J. Baxter: *Partition function of the Eight-Vertex lattice model*, Ann. Phys. **70**, 193-228 (1972). G. Benfatto, P. Falco, V. Mastropietro: *Universal Relations for Nonsolvable Statistical Models*, Phys. Rev. Lett. **104**, 075701 (2010); and *Extended Scaling Relations for Planar Lattice Models*, Comm. Math. Phys. **292**, 569-605 (2009). G. Benfatto, G. Gallavotti: *Perturbation theory of the Fermi surface in a quantum liquid. A general quasiparticle formalism and one-dimensional systems*, J. Stat. Phys. **59**, 541-664 (1990). G. Benfatto, V. Mastropietro: *Drude weight in non solvable quantum spin chains*, J. Stat. Phys. **143**, 251-260 (2011); G. Benfatto, V. Mastropietro: *Universality relations in non-solvable quantum spin chains*, J. Stat. Phys. **138**, 1084-1108 (2010). H. J. Brascamp, E. H. Lieb, J. L. Lebowitz: *The statistical mechanics of anharmonic lattices*, in: Nachtergaele B., Solovej J.P., Yngvason J. (eds) Statistical Mechanics (1975), Springer, Berlin, Heidelberg D. Chelkak, C. Hongler, K. Izyurov: *Conformal invariance of spin correlations in the planar Ising model*, Annals of mathematics **181** (2015), 1087-1138. J. G. Conlon, T. Spencer: *A strong central limit theorem for a class of random surfaces*, Commun. Math. Phys. **325** (2014), 1-15. J. Dubédat, *Dimers and families of Cauchy-Riemann operators I*, Journal of the American Mathematical Society **28** (2015), 1063-1167. H. Duminil-Copin, C. Hongler, P. Nolin: *Connection probabilities and RSW-type bounds for the two-dimensional FK Ising model*, Comm. Pure Appl. Math. **64**, 1165-1198 (2011). P. Falco: *Arrow-arrow correlations for the six-vertex model*, Phys. Rev. E **88**, 030103(R) (2013). J. Feldman, J. Magnen, V. Rivasseau, E. Trubowitz: *An infinite volume expansion for many fermions Green functions*, Helv. Phys. Acta **65**, 679-721 (1992). J. Fröhlich, T. Spencer: *The Kosterlitz-Thouless transition in two-dimensional Abelian spin systems and the Coulomb gas*, Comm. Math. Phys. **81** (1981), 527–602. K. Gawedzki, A. Kupiainen: *Gross-Neveu model through convergent perturbation expansions*, Comm. Math. Phys. **102**, 1-30 (1985). G. Giacomin, S. Olla, H. Spohn: *Equilibrium uctuations for the ∇*ϕ* interface model*, Ann. Probab. **29**, 1138-1172 (2001). A. Giuliani, V. Mastropietro: *Anomalous critical exponents in the anisotropic Ashkin-Teller model*, Phys. Rev. Lett. **93**, 190603 (2004); and *Anomalous universality in the anisotropic Ashkin-Teller model*, Comm. Math. Phys. **256**, 681-735 (2005). A. Giuliani, V. Mastropietro, F. Toninelli: *Height fluctuations in interacting dimers*, Ann. Inst. Henri Poincaré (Prob. Stat) **53** (2017), 98-168. A. Giuliani, V. Mastropietro, F. Toninelli: *Haldane relation for interacting dimers*, J. Stat. Mech. (2017) 034002. A. Giuliani, V. Mastropietro, F. Toninelli: *Non-integrable dimers: Universal fluctuations of tilted height profiles*, arXiv:1904.07526 F. D. M. Haldane: *General Relation of Correlation Exponents and Spectral Properties of One-Dimensional Fermi Systems: Application to the Anisotropic *S* = 1/2 Heisenberg Chain*, Phys. Rev. Lett. **45**, 1358-1362 (1980). L. P. Kadanoff: *Connections between the Critical Behavior of the Planar Model and that of the Eight-Vertex Model*, Phys. Rev. Lett. **39**, 903-905 (1977). P. W. Kasteleyn: *The statistics of dimers on a lattice: I. The number of dimer arrangements on a quadratic lattice*, Physica **27**, 1209-1225 (1961); and J. Math. Phys. **4**, 287-293 (1963). R. Kenyon: *Local statistics of lattice dimers*, Ann. Inst. H. Poincaré Prob Stat. **33**, 591-618 (1997); *Conformal Invariance of Domino Tiling*, Ann. Probab. **28**, 759-795 (2000); and *Dominos and the Gaussian Free Field*, ibid. **29**, 1128-1137 (2001). R. Kenyon: *Lectures on dimers*, Park City Math Institute Lectures, available at arXiv:0910.3129. R. Kenyon, A. Okounkov, S. Sheffield: *Dimers and amoebae*, Ann. Math. **163**, 1019-1056 (2006). D. Ioffe, S. Shlosman, Y. Velenik: *2D models of statistical physics with continuous symmetry: the case of singular interactions*, Comm. Math. Phys. **226**, 433-454 (2002). A. Lesniewski: *Effective action for the Yukawa2 quantum field theory*, Comm. Math. Phys. **108**, 437-467 (1987). E.H. Lieb, F. Y. Wu: *Absence of Mott Transition in an Exact Solution of the Short-Range, One-Band Model in One Dimension*, Phys. Rev. Lett. **20**, 1445 (1968). D.C. Mattis, E.H. Lieb: *Exact Solution of a Many-Fermion System and Its Associated Boson Field*, J. Math. Phys. **6**, 304 (1965). J. Miller: *Fluctuations for the Ginzburg-Landau ∇*ϕ* Interface Model on a Bounded Domain*, Comm. Math. Phys. **308**, 591-639 (2011). P. Milos, R. Peled: *Delocalization of Two-Dimensional Random Surfaces with Hard-Core Constraints*, Comm. Math. Phys. **340**, 1-46 (2015). A. Naddaf, T. Spencer: *On homogenization and scaling limit of some gradient perturbations of a massless free field*, Comm. Math. Phys. **183**, 55-84 (1997). D. Poland, S. Rychkov, A. Vichi: *The conformal bootstrap: Theory, numerical techniques, and applications*, Rev. Mod. Phys. **91**, 015002 (2019). S. Smirnov: *Critical percolation in the plane: conformal invariance, Cardy’s formula, scaling limits*, Comptes Rendus de l’Académie des Sciences-Series I-Mathematics, **333** (2001), 239-244. S. Smirnov: *Conformal invariance in random cluster models, I, Holomorphic fermions in the Ising model*, Ann. Math. **172**, 1435-1467 (2010). N.V. Temperley, M. E. Fisher: *Dimer problem in statistical mechanics-an exact result*, Philos. Mag. **6**, 1061-1063 (1961). --- 1. See, e.g., for a brief introduction to bosonization.[↩](#fnref1) 2. A warning on notation: given a quantity (such as $\alpha\_\o,p^\o$) referring to the non-interacting model, the corresponding *λ*-dependent quantity for the interacting model will be distinguished by a bar, such as $\bar\alpha\_\o,$ etc. On the other hand, we denote by *z*\* the complex conjugate of a number *z*.[↩](#fnref2) 3. Here we assume, without loss of generality, that cos(*p*1+) > 0. Since *p*1+ + *p*1− = *π*, we are just deciding which of the two zeros of *μ*( ⋅ ) we call *p*+.[↩](#fnref3) Non-integrable dimer models: universality and scaling relations =============================================================== In the last few years, the methods of constructive Fermionic Renormalization Group have been successfully applied to the study of the scaling limit of several two-dimensional statistical mechanics models at the critical point, including: weakly non-integrable 2D Ising models, Ashkin-Teller, 8-Vertex, and close-packed interacting dimer models. In this note, we focus on the illustrative example of the interacting dimer model and review some of the universality results derived in this context. In particular, we discuss the massless Gaussian free field (GFF) behavior of the height fluctuations. It turns out that GFF behavior is connected with a remarkable identity (‘Haldane’ or ’Kadanoff relation’) between an amplitude and an anomalous critical exponent, characterizing the large distance behavior of the dimer-dimer correlations. Introduction ============ The scaling limit of the Gibbs measure of a statistical mechanics model at a second order phase transition is expected to be universal, in particular, to be robust under ‘irrelevant’ perturbations and conformally invariant. Conceptually, the route towards universality is clear: one should first integrate out the small-scale degrees of freedom, then rescale the variables associated with the large-scale fluctuations, and show that the critical model reaches a fixed point, under iterations of this procedure (‘Wilsonian’ Renormalization Group (RG)). On general grounds, the fixed point is expected to be conformally invariant: therefore, in a second step, one can use the methods of Conformal Field Theory (CFT) to classify and characterize all the possible conformally invariant fixed points (CFT methods provide, in fact, a complete classification of such fixed point theories in two dimensions; remarkably, there has been recent progress in the characterization of three dimensional conformally invariant theories, too). The conformal fixed point theory of interest for the description of a given statistical mechanics system can often be identified by using specific information on the critical exponents, typically available from the RG construction. Even though widely accepted and believed to be correct, there are only few cases, mostly in two dimensions, for which this procedure and/or its predictions can be rigorously confirmed. 1. A first class of critical models for which the existence and conformal invariance of the scaling limit is rigorously known consists of two-dimensional, integrable Ising and dimer models on isoradial graphs. The key technical tool used to prove conformal invariance is discrete holomorphicity, which is a manifestation of integrability. The method is flexible enough to prove robustness of the scaling limit under geometric deformations (e.g., of the domain, or of the underlying lattice), but it is not able to explain universality under perturbations of the microscopic Hamiltonian. [For a proof of conformal invariance of crossing probabilities via discrete holomorphicity methods in a non-integrable model, see.] 2. A second class of critical models for which several predictions of Wilsonian RG and CFT have been rigorously substantiated consists of non-integrable perturbations of determinantal models, such as interacting dimers, Ashkin-Teller and vertex models. These results are based on a constructive, fermionic, version of Wilsonian RG: they allow to construct the bulk scaling limit of ‘local fermionic observables’ and to prove scaling relations among critical exponents, but they are not flexible enough yet to compute ‘non-local’ observables (such as spin-spin correlations in perturbed Ising or monomer-monomer correlations in perturbed dimers) or to accommodate geometric deformations of the domain. [For a first result on perturbed Ising models, based on probabilistic tools, see.] Not much is rigorously known about the existence and nature of the scaling limit of other critical models in two and three dimensions. It is a central challenge of mathematical physics for the coming years to extend and effectively combine the available techniques, in order to cover new models of physical interest and new instances of universality. In this paper, we review a few selected results on the universality of non-integrable two dimensional models, based on the fermionic RG methods mentioned above. For definiteness, we restrict our attention to ‘interacting dimer models’. We first introduce them informally, and give a first overview both of the ‘classical’ known results, and of our new results. Next, in the following sections, we introduce their definitions and state the relevant results on their critical behavior in a more precise way. Model and results: an overview. ------------------------------- Dimer models at close packing on planar, bipartite, graphs are highly simplified models of random surfaces or of liquids of anisotropic molecules at high density: the connection between these two apparently unrelated classes of systems is mediated by the notion of height function, which is in one-to-one correspondence with close-packed dimer configurations, as illustrated in Fig.[fig1]. [fig1] *Integrable dimer models.* A remarkable feature of close-packed dimer models is that there is a natural family of exactly solvable, critical, models, which exhibit a very rich and interesting behavior. This family is parametrized, on the one hand, by the specific periodic, infinite, bipartite planar lattice on which the dimer configuration lives; on the other hand, by the positive weights (or ‘activities’) associated to the edges of the graph. The exact solution is based on a determinantal representation of the partition function, due to Kasteleyn and Temperley-Fisher, valid for generic edge weights. The edge weights control the average slope of the height: in this sense, the family of solvable dimer models describes discrete random surfaces with different average slopes (or ‘tilt’). The solution shows that there is an open set of edge weights for which the dimer-dimer correlations decay algebraically and, correspondingly, the height fluctuations are asymptotically described by a massless Gaussian Free Field (GFF) at large distances: this critical phase is called ‘liquid’ or ‘rough’, depending on whether one focuses on the behavior of the dimer correlations or of the height profiles. This critical phase displays a subtle form of universality, in that the variance of the GFF height fluctuations is asymptotically, at large distances, *independent* of the dimer weights (in particular, of the average slope of the height profile) and of the underlying graph. In the case that the underlying lattice is the honeycomb lattice, as in Fig.[fig1], dimer configurations can be seen as stepped interfaces of minimal surface area (given the boundary height), i.e., they can be seen as domain walls for the 3D Ising model at zero temperature. In fact, the probability measure on dimer configurations induced by setting all edge weights equal to 1 is the same as the zero-temperature measure of the 3D Ising model with suitable *tilted* Dobrushin-type boundary conditions. By using this connection, one recognizes that the GFF nature of the height fluctuations proves the existence of a rough phase in 3D Ising at *T* = 0 with such boundary conditions. It is an open problem to prove that the rough phase persists at small, positive, temperatures. *Non-integrable dimers: main results (in brief).* Wilsonian RG and the bosonization method[1](#fn1) suggest that the GFF nature of the height fluctuations should be robust under non-integrable perturbations of the dimer model. In order to test this prediction in a concrete setting, we consider a class of ‘interacting’ dimer models, including the 6-vertex model (in its dimer representation ) and non-integrable variants thereof. For weak enough interactions (weak but independent of the system size), we prove that the height fluctuations still converge to GFF, as in the integrable case. However, in this case, the variance appears to depend on the interaction and on the dimer weights, see Section [sec4]. Therefore, the form of universality exhibited by the integrable dimer model seems to break down as soon as we move out of the exactly solvable case. Remarkably, a new form of universality emerges in the interacting case: the (pre-factor of the) variance equals the anomalous critical exponent of the dimer correlations. This is an instance of a ‘Kadanoff’ or ‘Haldane’ scaling relation, see for a brief introduction to these ‘weak universality’ relations. In the next sections, we will describe the models of interest and state our main results more precisely: in Section [sec2] we introduce the family of solvable dimer models and briefly review a selection of known results about its correlation functions; in Section [sec3] we introduce the class of non-integrable dimer models we consider, state our main results and give a brief sketch of the ideas involved in the proof (see for a complete proof); in Section [sec4] we illustrate the Haldane scaling relation by a first order computation: we compute both the pre-factor of the variance and the critical exponent at linear order in the perturbation strength, and check that the two results agree (the computation shows how non-trivial and remarkable the result is: already at first order the validity of the Haldane relation requires very subtle cancellations). Our explicit computations proves, in particular, that the critical exponent is non-universal, i.e., it depends both on the interaction strength and on the dimer weights. Non-interacting dimers ====================== For simplicity, we restrict to dimers on the square lattice. A close-packed dimer configuration (or perfect matching) on $\L\subset \mathbb Z^2$ is a collection of hard rods of length 1, which can be arranged on $\L$ in such a way that they cover all the vertices of $\L$ exactly once, see Fig. [fig2]. It is important that $\L$ is bipartite: we emphasize this fact by coloring white and black the vertices of the two sublattices. [fig2] Any dimer configuration is in one-to-one correspondence with a height profile, defined on the faces up to an overall additive constant. The height profile is defined by the differences between the values of the height at the face *f* and *f*ʹ: $$\label{hdiff}h(f')-h(f)=\sum\_{b\in C\_{f\to f'}}\sigma\_b({\mathds 1}\_{b}-1/4),$$ where the sum runs over the bonds crossed by a lattice path from *f* to *f*ʹ, *σ**b* is equal to  + 1 or  − 1, depending on whether *b* crossed with the white site on the right or on the left and $\mathds 1\_b$ is the indicator function of having a dimer at *b*. A key point of the definition of the height function is that the right side is independent of the choice of the lattice path from *f* to *f*ʹ (independence follows from the close-packing condition). The family of integrable, close-packed, dimer models that we informally introduced in the previous section is defined by the following partition function: $$\begin{aligned} \label{eq:Z0L} Z^0\_L=\sum\_{M\in{\Omega}\_L}p\_{L,0}(M), \qquad p\_{L,0}(M)=\prod\_{b\in M}t\_{r(b)}, \end{aligned}$$ defined on a discrete torus $\mathbb T\_L$ of side *L* (see ): here Ω*L* is the set of close-packed dimer configurations on the discrete torus, and *r*(*b*) ∈ {1, 2, 3, 4} label the ‘type’ of edge (we let *r* = 1 if the edge is horizontal with the white site on the right, *r* = 2 if it is vertical with the white site on the top, *r* = 3 if it is horizontal with the white site on the left, *r* = 4 if it is vertical with the white site on the bottom). The model is parametrized by the choice *t*1, *t*2, *t*3, *t*4; without loss of generality, we can set *t*4 = 1, and we shall do so in the following. The dimer weights *t**j* play the role of chemical potentials, fixing in particular the average slope: $$\mathbb E\_L[h(f+e\_i)-h(f)]=\rho\_i(t\_1,t\_2,t\_3), \quad i=1,2,$$ where $\mathbb E\_L$ indicates the statistical average w.r.t. the probability measure $\mathbb P\_L$ induced by the weights *p**L*, 0(*M*). As anticipated above, this dimer model is exactly solvable : for example, *Z**L*0 = ∣det*K*∣,  where $K=K(\ul t)$ is a complex adjacency matrix, known as the *Kasteleyn* matrix: its elements are labelled by a pair of sites of different color and are non-zero only if the pair forms a nearest-neighbor edge of the square lattice; the value of the element of *K* corresponding to an edge of type 1, 2, 3, 4 is, correspondingly, *K*1 = *t*1,  *K*2 = *i**t*2,  *K*3 =  − *t*3,  *K*4 =  − *i*. Remarkably, also the multipoint dimer correlation functions can be explicitly computed. For instance, if *b*(*x*, *j*) is the bond of type *j* and black site *x*, the two-point dimer correlation reads (in the special case of two dimers of type 1; similar formulas are valid for the other cases): $$\mathbb E[ {\mathds 1}\_{b(x,1)};{\mathds 1}\_{b(y,1)} ]=-t\_1^2\,K^{-1}(x,y)\,K^{-1}(y,x),$$ where: $\mathbb E$ is the expectation w.r.t. $\mathbb P$, the weak limit of $\mathbb P\_L$ as *L* → ∞; the semi-colon indicates ‘truncation’ (i.e., $\mathbb E(A;B)=\mathbb E(A;B)-\mathbb E(A)\,\mathbb E(B)$); and $$\begin{aligned} \label{K-1} && K^{-1}(x,y)={\int\_{[-\pi,\pi]^2}\frac{d k}{(2\pi)^2}}\frac{e^{-i k({ x}-{ y})}}{\mu(k)}, \nonumber\\ {\rm with} && \mu(k)=t\_1+ it\_2 e^{i k\_1}-t\_3e^{i k\_1+i k\_2}-i e^{i k\_2}.\nonumber\end{aligned}$$ Note that the zeros of *μ*(*k*) lie at the intersection of two circles in the complex plane, i.e., they are defined by the equation $$e^{i k\_2}=\frac{t\_1+i t\_2 e^{i k\_1}}{i+ t\_3 e^{i k\_1}}.$$ ‘Generically’, these two circles intersect transversally in two points: in this situation *K*− 1(*x*, *y*) decays to zero algebraically, like ∣*x* − *y*∣− 1, as ∣*x* − *y*∣ → ∞; this algebraic decay is often referred to by saying that the model is *critical*. Once the two-point dimer correlations are explicitly known, one can compute the fluctuations of the height difference. In particular, the variance w.r.t. $\mathbb P$ of the height difference diverges logarithmically: $$\label{var} {\rm Var}\_{\mathbb P}[h(f)-h(f')]= \frac{1}{\pi^2}\log |f-f'|+O(1)$$ as the distance ∣*f* − *f*ʹ∣ tends to infinity. The pre-factor 1/*π*2 in front of the logarithm is *independent* of *t*1, *t*2, *t*3: this ‘universality’ property is not accidental and is in fact related to a maximality (‘Harnack’) property of the spectral curve of integrable dimer models. The proof of is not trivial; it is based on a sufficiently smart combination of the following ingredients (the ‘smart’ part is in the use of the third ingredient): (1) the definition of height difference, see ; (2) the explicitly formula of $\mathbb E[ \mathds 1\_{b};\mathds 1\_{b'}]$; (3) the path-independence of the height (in order to get a sensible expression in the large distance limit, one first needs to properly deform the two paths from *f* to *f*ʹ involved in the computation of the variance; next, one can pass to a continuum limit, by replacing discrete sums with integrals and by replacing the finite distance dimer-dimer correlation with its large-distance asymptotic behavior, up to error terms that can be explicitly estimated ). Building upon these ingredients one can refine in various directions, in particular one can prove that: * height fluctuations converge to a massless GFF after proper coarse graining and rescaling, * the scaling limit of the height field is conformally covariant. It is very natural to ask whether these features are robust under perturbations of the Gibbs measure that break the determinant structure of Kasteleyn’s solution. This point is discussed in the following section. Interacting dimers: model and main results ========================================== We consider a family of ‘interacting’ dimer models, defined by the following partition function: $$\begin{aligned} \label{eq:ZlL} Z^\l\_L=\sum\_{M\in{\Omega}\_L}p\_{L,\l}(M), \qquad p\_{L,\l}(M)=\Big(\prod\_{b\in M}t\_{r(b)}\Big)e^{{\l} \sum\_{x\in \L}{f}(\t\_x M)},\end{aligned}$$ where: ${\l}$ is the interaction strength (to be thought of as ‘small’), $\L$ is the lattice of black sites in $\mathbb T\_L$, *f* is a local function of the dimer configuration around the origin, and $\t\_x$ is the ‘translation operator’ by the lattice vector *x*. In analogy with the non-interacting case, we let $\mathbb P\_{L,\l}$ be the finite volume Gibbs measure associated with weights $p\_{L,\l}$ and $\mathbb P\_\l$ its infinite-volume limit (existence of the limit is part of the results in ). For a special choice of *f*, the model reduces to the 6-vertex model, which is integrable by Bethe ansatz (but the solution is not determinantal), see. However, generically, the model is non-integrable. Possibly, the simplest choice of *f* that makes the model non-integrable is the ‘plaquette interaction’ previously considered in : $$\label{eq:plaq} f=\mathds 1\_{e\_1}\mathds 1\_{e\_2}+\mathds 1\_{e\_3}\mathds 1\_{e\_4}+\mathds 1\_{e\_1}\mathds 1\_{e\_5}+\mathds 1\_{e\_6}\mathds 1\_{e\_7}$$ where *e*1, …, *e*7 are the edges in Fig. [fig:esempi]. [fig:esempi] Our main results do not depend on the specific choice of *f*. They concern the asymptotic behavior of dimer-dimer correlations and of the fluctuations of the height difference and are stated next. [th:1] Let *t*1, *t*2, *t*3 be such that *μ*(*k*) has two distinct simple zeros, *p*± (in particular, the ratio of $\a\_\o:=\partial\_{k\_1}\mu(p^\o)$ and $\b\_\o:=\partial\_{k\_2}\mu(p^\o)$ is not real). Then, for $\l$ small enough, && Eł[1b(x,r);1b(0,r’)]= 142 ø= [dimdim] &&+142 ø= e-i(|pø-|p-ø) x+|Rr,r’(x),where[2](#fn2): $\bar K\_{\o, r}$, $\bar H\_{\o,r},$ $\bar \a\_\o$, $\bar \b\_\o$, $\bar p^\o$, $\n$ are analytic functions of $\l$, such that $\bar \a\_\o\big|\_{\l=0}=\a\_\o$, $\bar \b\_\o\big|\_{\l=0}=\b\_\o$, $\bar p^\o\big|\_{\l=0}=p^\o$ and $\bar K\_{\o, r}\big|\_{\l=0}=\bar H\_{\o,r}\big|\_{\l=0}=K\_r e^{-i p^\o v\_r}$. Recall that *K**r* were defined in ; moreover, $$\begin{aligned} \label{eq:vr} v\_1=(0,0), v\_2=(-1,0), v\_3=(-1,-1), v\_4=(0,-1). \end{aligned}$$ These constants satisfy the following symmetry relations: $$\begin{aligned} \label{eq:symmab} && \bar \alpha\_\o^\*=-\bar\alpha\_{-\o},\hskip.75truecm \bar\beta\_\o^\*=-\bar\beta\_{-\o} \\ \label{eq:symmK} && \bar K\_{\o, r}^\*=\bar K\_{-\o,r},\quad \bar H\_{\o, r}^\*=\bar H\_{-\o,r},\\ \label{eq:symmp} &&\bar p^++\bar p^-=(\pi,\pi).\end{aligned}$$ Moreover, $$\nu(\l)=1+a\l+O(\l^2)$$ and, generically, *a* ≠ 0. Finally, $\bar R\_{r,r'}(x,x')=O(|x-x'|^{-5/2})$ (the exponent 5/2 could be replaced by any $\d<3$ provided $\l$ is small enough). A few comments are in order: * The proof provides a constructive algorithm for computing $\bar K\_{\o,r}, \bar H\_{\o,r}$, etc, at any desired precision. However, we do not have closed formulas for these quantities, and we do not expect that it is possible to obtain any by other methods. * Generically, the *anomalous exponent* *ν* has a non-zero first order contribution in $\l$: therefore, it is larger or smaller than 1, depending on whether $a \l$ is positive or negative. In particular, the asymptotic, large-distance, behavior of the dimer-dimer correlation is dominated by the first or second term in, depending on whether $a\l$ is positive or negative. * Once we have such a refined asymptotics as, we can plug it into the formula for the height variance, $$\mathbb E\_\l[{h(f)-h(f');h(f)-h(f')}]=\sum\_{b,b'\in C\_{f\to f'}}\sigma\_b\sigma\_{b'}\mathbb E\_\l[ \mathds 1\_{b};\mathds 1\_{b'} ],$$ in order to understand its growth as ∣*f* − *f*ʹ∣ → ∞. As anticipated above, one expects that the growth is still logarithmic, like in the non-interacting case. However, this is not completely straightforward: as remarked in the previous item, if $a\l<0$ the asymptotic behavior of the dimer-dimer correlation is dominated by the second term in, which is characterized by the critical exponent *ν* < 1. Therefore, after double-summation over *b*, *b*ʹ, one may fear that the interacting variance $\mathbb E\_\l[{h(f)-h(f');h(f)-h(f')}]$ grows like ∣*f* − *f*ʹ∣2(1 − *ν*) at large distances, rather than logarithmically. Remarkably, this is not the case, thanks to the oscillating factor $e^{-i(\bar p^\o-\bar p^{-\o}) x}$ in front of the second term of. [th:2] Under the same hypothesis of Theorem 1, $${\rm Var}\_{\mathbb P\_\l}(h(f)-h(f'))= \frac{{A(\lambda)}}{\pi^2}\log |f-f'|+O(1)$$ where $$A(\l)=-\Biggl[\frac{\bar K\_{\o,1}+\bar K\_{\o,2}}{\bar \b\_\o}\Biggr]^2=-\Biggl[\frac{\bar K\_{\o,1}+\bar K\_{\o,4}}{\bar \a\_\o}\Biggr]^2.$$ In general, $A(\l)$ depends explicitly on $\l,t\_1,t\_2,t\_3$ and on the type of interaction potential in. Moreover, $$\label{Ha}{A(\l)}={\n(\l)}.$$ The proof of Theorem [th:2] provides two different algorithms for computing the coefficients of the analytic functions *A* and *ν*. At first order in $\l$, one can check directly the validity of. The computation is very instructive and, as a by-product, it shows that $A(\l)$ depends explicitly on $\l,t\_1,t\_2,t\_3$; see Sect.[sec4] for a proof. However, a direct proof of the validity of *A* = *ν* by direct inspection and comparison of the two power series defining *A* and *ν* seems hopeless (the computation in the next section is a clear indication). The proof of is based on a subtle cancellation mechanism, which uses a comparison between the lattice Ward Identities of the dimer model and the emergent chiral Ward Identities of a ‘reference’ continuum model (the ‘infrared fixed point theory’), see. The identity $A(\l)=\nu(\l)$ is the analogue of one of the scaling relations (the one between compressibility and the density-density critical exponent) proposed by D. Haldane in the context of Luttinger liquids. It is also related to one of the scaling relations among the critical exponents of the 8-vertex and of the Ashkin-Teller model, proposed by L. Kadanoff, see the discussion right before eq.(1.2) of. For this reason, we call it ‘Haldane’ or ‘Kadanoff’ relation. There are not many previous examples of rigorously established Haldane or Kadanoff relations: the known cases are mostly restricted to exactly solved models of interacting fermions or quantum spins (Luttinger model, or the XXZ chain ) and non-integrable perturbations thereof. As stated, Theorem 2 concerns only the asymptotic growth of the variance of the height difference. However, a slight extension of its proof, along the same lines as, proves that, after coarse-graining, *h*(*f*) converges to *ϕ*, the massless GFF of covariance $$\mathbb E\_\l(\phi(x) \phi(y))=-\frac{A(\lambda)}{2\pi^2}\log|x-y|,$$ in the same sense as. Theorem [th:2] and its extension mentioned in the previous item prove in a very strong and sharp sense that the random surface associated with the interacting dimer model is in a rough phase, characterized by logarithmic fluctuations. In this sense, our theorem is of interest in the context of the fluctuation theory of discrete random surface models, which is a classical topic in probability and statistical mechanics. Previous, related, results include the proofs of logarithmic height fluctuations in anharmonic crystals, SOS model, 6-vertex and Ginzburg-Landau type models. The proofs of Theorems [th:1] and [th:2] are hard and lengthy, and we refer the reader to for it. Here we limit ourselves to mention the main steps and ideas involved in the proof: 1. The starting point is a representation of the non-interacting model, characterized by Kasteleyn’s determinantal solution, in terms of a free fermionic theory in dimension *d* = 1 + 1. 2. Next, we provide an exact representation of the interacting dimer model in terms of an interacting fermionic theory in *d* = 1 + 1, the interaction being (at dominant order) a quartic fermionic interaction. In this sense, the interacting dimer model maps into a sort of ‘fermionic *ϕ*24 theory’. 3. The fermionic model, into which the original dimer model has been mapped, can be analyzed via standard fermionic multiscale cluster expansion methods (fermionic constructive RG) due, among others, to Gawedzki-Kupiainen, Lesniewski, Benfatto-Gallavotti, Feldman-Magnen-Rivasseau-Trubowitz. 4. The fermionic RG scheme turns out to be convergent if and only if the infrared flow of the ‘running coupling constants’, controlled by the so-called beta-function equation, is convergent in the limit of large scales. In order to control the RG flow, we compare it with the one of a reference continuum model: some of the ingredients involved in the comparison are the Ward Identities (both for the lattice dimer model and for the continuum reference one), the Schwinger-Dyson equation and the non-renormalization of the anomalies for the reference model. 5. In order to obtain the fine asymptotics of the dimer-dimer correlations, as well as the Haldane relation connecting *A* and *ν*, we need to compare the asymptotic, emergent, chiral Ward Identities of the reference model with the exact lattice Ward Identities of the dimer model, following from the local conservation law of the dimer number, $\sum\_{b\sim x}\mathds 1\_b=1$, with the sum running over edges incident to *x*. The comparison guarantees that the ratio *A*/*ν* is ‘protected’ by symmetry (there is no ‘dressing’ or ‘renormalization’ due to the interaction). We conclude this section by mentioning a few open problems and perspectives: * It would be interesting to study the critical theory in finite domains and, in perspective, prove its conformal covariance. From a technical point of view, going in this direction requires a non-trivial extension of the RG multiscale methods to the non-translationally-invariant setting, and a sharp control of the RG flow of the marginal boundary running coupling constants. Promising results in this direction have been recently obtained in the context of non-planar critical Ising models in the half-plane. * So far, we can only compute the scaling limit of the height function after coarse graining, cf. for instance. It would be very interesting to prove a central limit theorem on a more local scale; i.e., computing the average of *e**i**α*(*h*(*f*) − *h*(*f*ʹ)), instead of computing the characteristic function for the height function integrated against a test function. Similarly, it would be interesting to compute the scaling limit of the monomer-monomer correlations. This problem is not merely technical: it is strictly connected with the computation of the scaling limit of the spin-spin correlations in non-integrable, two-dimensional Ising models, which is currently out of reach of the current techniques. * While we expect the analogs of Theorems [th:1] and [th:2] to be true also for the interacting dimer model on the honeycomb lattice of Fig. [fig1], it is not obvious that the same qualitative behavior holds for interacting dimers on general $\mathbb Z^2$-periodic bipartite planar graphs, with an elementary cell containing more than two vertices. In the non-interacting case, using $\mathbb Z^2$ or any other $\mathbb Z^2$-periodic bipartite planar graph makes essentially no difference. However, the effective fermionic theory is different for different periodic bipartite lattices: the larger the elementary cell, the larger the number of ‘colors’ of the fermionic field associated with the fermionic description of the system. In the presence of interactions, the number of colors is known to affect the qualitative behavior of the system (as an example, compare the behavior of the Luttinger model with that of the 1D Hubbard model ): depending on it and on the specific form of the interaction, the system could display either an anomalous Fermi liquid behavior, or it could open a gap, entering more exotic quantum phases. It would be very interesting to see whether any of these exotic behaviors can arise in interacting dimer models on decorated periodic lattices. * Finally, as suggested by the remarks in the introduction, it would be nice to see whether the current methods can be used to prove the existence of a rough, logarithmically correlated, low-temperature phase of the interface of 3D Ising with tilted Dobrushin boundary conditions. As observed above, the fluctuations of the surface at zero temperature can be mapped exactly into a problem of non-interacting dimers on the hexagonal lattice. It remains to be seen whether low-temperature fluctuations can be effectively described in terms of weakly interacting dimers on the hexagonal lattice. First order computation ======================= The goal of this section is to verify, at first order in $\l$, the validity of the ‘Haldane’, or ‘Kadanoff’, relation,, and to compute explicitly (to fix ideas, for the case of the plaquette interaction ) the constant *a* in the expansion $$\begin{aligned} \label{eq:expansione} \nu(\l)=A(\l)=1+a\l+O(\l^2). \end{aligned}$$ The result is: [eqa]a=-,where we recall that *p*± are the two zeros of *μ*(*k*), which are assumed to be non-degenerate, $\a\_\pm=\partial\_{k\_1}\mu(p^\pm)$ and $\b\_\pm=\partial\_{k\_2}\mu(p^\pm)$. It is straightforward to check that the right side of explicitly depends on the dimer weights: e.g., in the simple case *t*1 = *t*3 = *t*, *t*2 = 1, in which *p*+ = 0, *p*− = (*π*, *π*), one has *a* =  − 2(1 + *t*2)/(*π**t*); in the case *t*1 = *t*2 = *t*3 = 1, the result coincides with the one found in, *a* =  − 4/*π*. We will use the same notations and conventions as in ; we do not repeat here the definitions and we assume the reader has familiarity in particular with and. We recall just that $\mathcal E\_0$ denotes the Grassmann Gaussian measure with propagator (cf. ) $$\begin{aligned} \label{eq:E0} g(x-y):= \mathcal E\_0(\psi^-\_x\psi^+\_y)=K^{-1}(x,y)\end{aligned}$$ For the following, it is convenient to introduce the rescaled coupling constant $$\begin{aligned} \label{eq:u} u:=(t\_1 t\_3+t\_2)\l.\end{aligned}$$ First-order computation of *A* ------------------------------ The two-point dimer correlation function is given as $$\begin{aligned} \label{eq:dd} G^{(0,2)}\_{r,r'}(x,0):=\mathbb E\_\l(\mathds 1\_e;\mathds 1\_{e'})=\lim\_{L\to\infty}\partial\_{A\_e} \partial\_{A\_{e'}} \mathcal W\_\L(A)|\_{A\equiv 0}, \end{aligned}$$ where *e* (resp. *e*ʹ) is the edge of type *r* (resp. *r*ʹ) with black endpoint of coordinates *x* (resp. 0) and $\mathcal W\_\L(A)$ is the moment generating function $$\begin{aligned} \label{eq:momgen} e^{\mathcal W\_\L(A)}:=\sum\_{M\in\Omega\_L}p\_{L,\l}(M)\prod\_e e^{A\_e \mathds 1\_e}.\end{aligned}$$ Our convention for the Fourier transform of the two-point dimer correlation is that $$\begin{aligned} \label{eq:Gfourier} \hat G^{(0,2)}\_{r,r'}(p)=\sum\_x e^{-i p x}G^{(0,2)}\_{r,r'}(x,0).\end{aligned}$$ Since we are interested in the long-distance behavior of correlations, we will look at the small-*p* behavior and in particular at the discontinuity of $\hat G^{(0,2)}\_{r,r'}(p)$ at *p* = 0. We recall that, since we are working on the torus, $\exp(\mathcal W\_\L(A))$ can be written as the linear combination of four Grassmann integrals with non-quadratic action, cf.. For the computation of correlations in the *L* → ∞ limit and at finite order in perturbation theory, we can safely replace with an expression involving *a single* Grassmann integral: $$\begin{aligned} \label{eq:piusemp} e^{\mathcal W\_\L(A)}=\int D\psi e^{S(\psi)+V(\psi,A)},\end{aligned}$$ with *S* and *V* as in. In the case of the plaquette interaction, the potential *V*(*ψ*, *A*) equals (neglecting terms of order $\l^2$ and higher) $$\begin{aligned} \label{eq:Vprimod} V(\psi,A)= -\sum\_e(e^{A\_e}-1)E\_e+\l\sum\_{\gamma=\{e\_1,e\_2\}\subset \Lambda}E\_{e\_1}E\_{e\_2}e^{A\_{e\_1}+A\_{e\_2}}\end{aligned}$$ where the second sum runs over pairs of parallel edges {*e*1, *e*2} on the boundary of the same face. In particular, setting *A* ≡ 0 and using the definition for *E**e* in terms of the Grassmann variables *ψ*± associated to the endpoints of the edge *e*, one finds that the potential is exactly quartic in the Grassmann fields: $$\begin{aligned} \label{eq:V4} V\_4(\psi):= V(\psi,0)&=-u\sum\_x\psi^+\_x\psi^-\_x\left[\psi^+\_{x+(0,1)}\psi^-\_{x-(1,0)}+\psi^+\_{x+(1,0)}\psi^-\_{x-(0,1)}\right]. \end{aligned}$$ From, and we see that the two-point dimer correlation function equals, at first order in *λ* and in the *L* → ∞ limit, $$\begin{aligned} \label{i1}\nonumber G^{(0,2)}\_{r,r'}(x,0)&=\mathcal E\_0( E\_{e};E\_{e'})\\&- \lambda [\mathcal E\_0( E\_{e};I^{(1)}\_{0,r'})+ \mathcal E\_0(I^{(1)}\_{x,r};E\_{e'})]+\mathcal E\_0(E\_{e};E\_{e'};V\_4)\end{aligned}$$ where $\mathcal E\_0(\dots;\dots)$ denotes truncated expectation and $$\begin{gathered} I^{(1)}\_{x,r}=\left\{ \begin{array}{lll} K\_1 K\_3\psi^+\_{x}\psi^-\_x(\psi^+\_{x+(0,1)}\psi^-\_{x-(1,0)}+\psi^+\_{x+(1,0)}\psi^-\_{x-(0,1)}) & \text{ if} & r=1 \\ K\_2 K\_4\psi^+\_{x}\psi^-\_{x+v\_2}(\psi^+\_{x+(0,1)}\psi^-\_{x}+\psi^+\_{x-(1,0)}\psi^-\_{x-(1,1)})& \text{ if} & r=2 \\ K\_1 K\_3\psi^+\_{x}\psi^-\_{x+v\_3}(\psi^+\_{x-(1,0)}\psi^-\_{x-(1,0)}+\psi^+\_{x-(0,1)}\psi^-\_{x-(0,1)}) &\text{ if} & r=3 \\ K\_2 K\_4\psi^+\_{x}\psi^-\_{x+v\_4}(\psi^+\_{x+(1,0)}\psi^-\_{x}+\psi^+\_{x-(0,1)}\psi^-\_{x-(1,1)})&\text{ if} & r=4. \end{array} \right. \end{gathered}$$ For *λ* = 0 one finds from, and from Lemma [lemma:formulozzo] below $$\begin{aligned} \label{eq:mardepanza} \left. \hat G^{(0,2)}\_{r,r'}(p)\right|\_{\lambda=0}&=-K\_r K\_{r'}{\int\_{[-\pi,\pi]^2}\frac{d k}{(2\pi)^2}}\frac{e^{-i k v\_{r}-i (k+p)v\_{r'}}}{\mu(k)\mu(k+p)}\\\nonumber&= -\frac i{2\pi}\frac{K\_r K\_{r'}} {\alpha\_+\beta\_--\alpha\_-\beta\_+}\sum\_{\omega=\pm}\frac{D\_{-\omega}(p)}{D\_\omega(p)}e^{-i p^\omega(v\_r+v\_{r'})}+R(p).\end{aligned}$$ Here and in the following, *R*(*p*) denotes a function that is continuous at *p* = 0 (the precise value of *R*(*p*) can change from line to line). In, $v\_r\in \mathbb Z^2$, *r* = 1, …, 4, is as in while $$\begin{aligned} \label{eq:Domega} D\_\omega(p)=\a\_\o p\_1+\b\_\o p\_2,\quad \o=\pm. \end{aligned}$$ Next we compute the first-order contribution $$- \lambda [\mathcal E\_0( E\_{e};I^{(1)}\_{0,r'})+ \mathcal E\_0(I^{(1)}\_{x,r};E\_{e'})]$$ in. As explained at the beginning of, by the fermionic Wick theorem, we can replace *I**x*, *r*(1) by its “linearization” $\bar I^{(1)}\_{x,r}$ obtained by contracting in all possible ways two of its four *ψ* fields. For instance, with *g*( ⋅ ) as in, $$\begin{gathered} \label{eq:Ibar1} \overline I^{(1)}\_{x,1}=K\_1K\_3\left[-g(v\_1)\left(\psi^+\_{x+(0,1)}\psi^-\_{x-(1,0)}+\psi^+\_{x+(1,0)}\psi^-\_{x-(0,1)} \right)\right.\\+g(v\_2)\left(\psi^+\_{x+(0,1)}\psi^-\_x+\psi^+\_x\psi^-\_{x-(0,1)} \right)\\ \left.+g(v\_4)\left(\psi^+\_{x+(1,0)}\psi^-\_x+\psi^+\_x\psi^-\_{x-(1,0)} \right) -2g(v\_3)\psi^+\_x\psi^-\_x \right]. \end{gathered}$$ In Fourier space (with the conventions of for the Grassmann fields in momentum space) one has $$\label{eq:Ibar1F} \overline I^{(1)}\_{x,r}={\int\_{[-\pi,\pi]^2}\frac{d k}{(2\pi)^2}}{\int\_{[-\pi,\pi]^2}\frac{d p}{(2\pi)^2}}e^{i p x}\hat\psi^+\_{k+p}W\_r(k,p)\hat\psi^-\_k$$ where $$\begin{gathered} \label{eq:W1} W\_1(k,p)=K\_1K\_3\left[-g(v\_1)\left(e^{i(k\_1+k\_2+p\_1)}+e^{i(k\_1+k\_2+p\_2)} \right)\right.\\+g(v\_2)\left(e^{i(k\_2+p\_2)}+e^{i k\_2} \right) \left.+g(v\_4)\left(e^{i(k\_1+p\_1)}+e^{i k\_1} \right) -2g(v\_3) \right].\end{gathered}$$ Similar formulas hold for *r* = 2, 3, 4, with *W*1(*k*, *p*) replaced by *W**r*(*k*, *p*). One easily checks (for *r* = 1 this can be verified immediately from ) that $$\begin{gathered} \label{eq:W0} W\_r(k,0)=2 t\_r t\_{r+2}W(k),\\ W(k)=\left[g(v\_1)e^{i(k\_1+k\_2)}-g(v\_2)e^{ik\_2}-g(v\_4)e^{i k\_1} +g(v\_3) \right]\\= {\int\_{[-\pi,\pi]^2}\frac{d k'}{(2\pi)^2}}\frac{(e^{ik\_1}-e^{i k\_1'})(e^{ik\_2}-e^{i k\_2'})}{\mu(k')}\end{gathered}$$ with the convention that *t*4 = 1 and that $t\_r:=t\_{r\!\!\mod \!4}$ if *r* > 4. Note that $$\begin{aligned} \label{eq:nota0} {W(p^+)}^\*=W(p^-),\end{aligned}$$ because *p*+ + *p*− = (*π*, *π*) and $g(v\_1),g(v\_2)\in \mathbb R$, while $g(v\_2),g(v\_4)\in i\mathbb R$. Using and, together with $$\begin{aligned} \label{eq:I0F} E\_{e}=K\_r{\int\_{[-\pi,\pi]^2}\frac{d k}{(2\pi)^2}}{\int\_{[-\pi,\pi]^2}\frac{d p}{(2\pi)^2}}e^{i p x}\hat \psi^+\_{k+p}\hat\psi^-\_ke^{-i k v\_r}, \end{aligned}$$ (if *e* is, as above, of type *r* and with black vertex of coordinates *x*) we see that $$\begin{gathered} \label{eq:vestooss} - \lambda [\mathcal E\_0( E\_{e};I^{(1)}\_{0,r'})+ \mathcal E\_0(I^{(1)}\_{x,r};E\_{e'})](p) \\= 2 \l{\int\_{[-\pi,\pi]^2}\frac{d k}{(2\pi)^2}}\frac1{\mu(k)\mu(k+p)}\times\\ \times[K\_r t\_{r'}t\_{r'+2}e^{-i k v\_r}+K\_{r'}t\_r t\_{r+2}e^{-i kv\_{r'}}]W(k) +R(p).\end{gathered}$$ Thanks to Lemma [
arxiv_0000212
Improved bounds for the Erdos-Rogers function ============================================= [classification=text] The Erdos-Rogers function *f**s*, *t* measures how large a *K**s*-free induced subgraph there must be in a *K**t*-free graph on *n* vertices. While good estimates for *f**s*, *t* are known for some pairs (*s*, *t*), notably when *t* = *s* + 1, in general there are significant gaps between the best known upper and lower bounds. We improve the upper bounds when *s* + 2 ≤ *t* ≤ 2*s* − 1. For each such pair we obtain for the first time a proof that *f**s*, *t* ≤ *n**α**s*, *t* + *o*(1) with an exponent *α**s*, *t* < 1/2, answering a question of Dudek, Retter and Rödl. Introduction ============ Let *G* be a graph with *n* vertices that contains no *K*4. How large a triangle-free induced subgraph must *G* have? The standard proof of Ramsey’s theorem implies that *G* contains an independent set of size *n*1/3, but can we do better? A simple argument shows that the answer is yes. Indeed, each vertex in *G* has a triangle-free neighbourhood, and either there is a vertex of degree *n*1/2 or one can find an independent set of size roughly *n*1/2 by repeatedly choosing vertices and discarding their neighbours. This stronger argument still feels a little wasteful, because in the second case one finds an independent set rather than a triangle-free subgraph. Moreover, there is no obvious example that yields a matching upper bound, so it is not immediately clear whether 1/2 is the correct exponent. The problem above is an example of a general problem that was first considered by Erdos and Rogers. Given positive integers 1 < *s* < *t* and *n* > 2, define *f**s*, *t*(*n*) to be the minimum over all *K**t*-free graphs *G* with *n* vertices of the order of the largest induced *K**s*-free subgraph of *G*. We have just been discussing the function *f*3, 4. The function *f**s*, *t* is known as the *Erdos-Rogers function*. It has been studied by several authors: for a detailed survey covering many of the known results on the subject, see. For a more recent exposition, see also section 3.5.2 of. The first bounds were obtained by Erds and Rogers who showed that for every *s* there exists a positive constant *ε*(*s*) such that *f**s*, *s* + 1(*n*) ≤ *n*1 − *ε*(*s*). About 30 years later, Bollobás and Hind improved the estimate for *ε*(*s*) and established the lower bound *f**s*, *t*(*n*) ≥ *n*1/(*t* − *s* + 1). In particular, *f**s*, *s* + 1(*n*) ≥ *n*1/2 (by the obvious generalization of the argument for *f*3, 4 above). Subsequently, Krivelevich improved these lower bounds by a small power of log*n* and also gave a new general upper bound, which is $$\label{kriv} f\_{s,t}(n)\leq O(n^{\frac{s}{t+1}}(\log n)^{\frac{1}{s-1}}).$$ Later, the lower bound was significantly improved by Sudakov. He showed that if *t* > *s* + 1, then *f**s*, *t*(*n*) ≥ Ω(*n**a**s*, *t*) where *a**s*, *t* is defined recursively. In particular, when *s* is fixed and *t* → ∞, he obtained the bound $$\label{sud} f\_{s,t}(n)\geq \Omega(n^{\frac{s}{2t}+O(1/t^2)}).$$ We remark that if *t* ≥ 2*s* then ([kriv]) is the best known upper bound, while Sudakov’s lower bound is the best known for every *t* > *s* + 1. In particular, the upper bound is roughly the square of the lower bound in the range *t* ≥ 2*s*. Recently, there has been quite a lot of progress on the case *t* = *s* + 1. First, Dudek and Rödl showed that *f**s*, *s* + 1(*n*) ≤ *O*(*n*2/3). Then Wolfovitz proved that for sufficiently large *n* we have *f*3, 4(*n*) ≤ *n*1/2(log*n*)120, yielding the slightly surprising fact that the exponent 1/2 is indeed the right one in that case. Finally, Dudek, Retter and Rödl, generalizing Wolfovitz’s construction, showed that for any *s* ≥ 3 there exist constants *c*1 and *c*2 such that *f**s*, *s* + 1(*n*) ≤ *c*1*n*1/2(log*n*)*c*2 so the exponent 1/2 is correct for all *f**s*, *s* + 1. However, the problem of finding the correct exponent of *n* for general *s*, *t* remains open. A particularly important case is when *t* = *s* + 2 since *f**s*, *t*(*n*) ≤ *f**s*, *s* + 2(*n*) for any *t* ≥ *s* + 2. Sudakov’s lower bound gives *f**s*, *s* + 2(*n*) = Ω(*n**β**s*) where $\beta\_s=1/2-\frac{1}{6s-6}$. Dudek, Retter and Rödl in showed that for any *s* ≥ 4 there exists a constant *c* depending only on *s* such that *f**s*, *s* + 2(*n*) ≤ *c**n*1/2. Note that the exponent 1/2 follows from the bound for *f**s*, *s* + 1, so this improves it by removing the log factor. Having established this, Dudek, Retter and Rödl asked the following question. Does there exist *s* ≥ 3 such that *f**s*, *s* + 2(*n*) = *o*(*n*1/2)? Another central open problem in the area is the following question of Erds. Is it true that $$\label{erdoseqn} \lim\_{n\rightarrow \infty} \frac{f\_{s+1,t}(n)}{f\_{s,t}(n)}=\infty$$ for every *t* > *s* + 1? The answer has been shown to be yes when *t* = *s* + 2 ≥ 6 and when (*s*, *t*) is one of the pairs (2, 4), (2, 5), (2, 6), (2, 7), (2, 8) or (3, 6). Our results ----------- In this paper, we prove that the answer to the first question above is yes. We also establish ([erdoseqn]) for the families of pairs *t* = *s* + 3 ≥ 7 and *t* = *s* + 2 ≥ 5. We obtain these results by proving a significant improvement for the upper bound on *f**s*, *t* when *s* + 2 ≤ *t* ≤ 2*s* − 1. The previous best upper bound for these parameters appeared in and was *f**s*, *t*(*n*) ≤ *c**n*1/2 (except for the pair *s* = 3, *t* = 5, where this bound was not established). We do not just obtain bounds of the form *o*(*n*1/2), but we improve the exponents throughout the range. Our construction is probabilistic, and has some similarities to the constructions that established the previous best upper bounds. However, an important difference is that we do not make use of algebraic objects such as projective planes. To state the bound that comes out of our argument takes a small amount of preparation. Let *s* ≥ 3 and *s* + 2 ≤ *t* ≤ 2*s* − 1. Call (*s*, *t*) *regular* if *s* ≥ 11 and *s* + 3 ≤ *t* ≤ 2*s* − 4 or if (*s*, *t*) ∈ {(10, 14), (10, 15)} and call it *exceptional* otherwise. Let $$\alpha=\alpha\_{s,t}=\begin{cases} \alpha(1)=\frac{(s-2)(t-s)(t+s-1)+2t-2s}{(2s-3)(t-s)(t+s-1)-2s+4}, & \text{ if } (s,t) \text{ is regular} \\ \alpha(2)=\frac{(s-2)(t-s)(s-1)+s-1}{(2s-3)(t-s)(s-1)+2s-t}, & \text{ if } (s,t) \text{ is exceptional} \end{cases}$$ We will prove the following theorem. [ERbound] For any *s* ≥ 3, *s* + 2 ≤ *t* ≤ 2*s* − 1, there exists some constant *c* = *c*(*s*, *t*) such that *f**s*, *t*(*n*) ≤ *n**α*(log*n*)*c*. It is not hard to check that *α* < 1/2 for all pairs (*s*, *t*) in the given range. Thus, as mentioned above, we obtain a strong answer to the question of Dudek, Retter and Rödl. For every *s* ≥ 3, we have *f**s*, *s* + 2(*n*) = *o*(*n*1/2). The simplest case where our result is new is the case *s* = 3, *t* = 5. There we obtain an upper bound of *n*6/13(log*n*)*c*. For comparison, Sudakov’s lower bound is *c**n*5/12. Since the exponent when *t* = *s* + 1 is 1/2, our result also implies a positive answer to the question of Erdos in the following family of cases. $$\lim\_{n\rightarrow \infty} \frac{f\_{s+1,s+2}(n)}{f\_{s,s+2}(n)}\rightarrow \infty$$ That is, ([erdoseqn]) holds for *t* = *s* + 2 ≥ 5. If *t* = *s* + 3, then $$\alpha=\begin{cases} \frac{3s^2-3s-3}{6s^2-4s-7}, & \text{ if } s\geq 11 \\ \frac{3s^2-8s+5}{6s^2-14s+6}, & \text{ if } 4\leq s\leq 10 \end{cases}$$ Comparing this with Sudakov’s lower bound *f**s* + 1, *s* + 3(*n*) ≥ Ω(*n**β**s* + 1), where $\beta\_{s+1}=\frac{3s-1}{6s}$, we get the following additional result. $$\lim\_{n\rightarrow \infty} \frac{f\_{s+1,s+3}(n)}{f\_{s,s+3}(n)}\rightarrow \infty$$ That is, ([erdoseqn]) holds for *t* = *s* + 3 ≥ 7. In the following table, we compare the exponent of *n* in the best known lower bound with that in our new upper bound (both rounded to three decimal places). |c||c|c| & Our new upper bound & Best known lower bound *s* = 3, *t* = 5 & 0.462 & 0.417 *s* = 4, *t* = 6 & 0.467 & 0.444 *s* = 4, *t* = 7 & 0.457 & 0.375 *s* = 5, *t* = 7 & 0.475 & 0.458 *s* = 5, *t* = 8 & 0.465 & 0.404 *s* = 5, *t* = 9 & 0.460 & 0.351 In the case *t* = *s* + 2, our bound is *f**s*, *s* + 2(*n*) ≤ *n**α* + *o*(1) for $\alpha=1/2-\frac{s-2}{8s^2-18s+8}\approx 1/2-\frac{1}{8s}$ while Sudakov’s lower bound is *f**s*, *s* + 2(*n*) ≥ *n**β* + *o*(1) for $\beta=1/2-\frac{1}{6s-6}\approx 1/2-\frac{1}{6s}$. It would be very interesting to know whether either of these two estimates reflects the true asymptotics of *f**s*, *s* + 2. It would be particularly interesting to know whether either of the exponents 5/12 or 6/13 is the correct one for (*s*, *t*) = (3, 5). We have made some effort to optimize our construction, whereas there appear to be places where Sudakov’s argument is potentially throwing information away, so our guess is that 6/13 is correct, but this guess is very tentative and could easily turn out to be wrong. An overview of the argument --------------------------- We will now sketch the key steps in our argument. For simplicity, we will focus on the *s* = 3, *t* = 5 case. Then, as mentioned above, Theorem [ERbound] says that *f*3, 5(*n*) ≤ *n*6/13(log*n*)*c*. That is, we construct a *K*5-free graph *G* in which every subset of size roughly *n*6/13 induces a triangle. The basic idea is very simple. We are looking for a graph that contains ``triangles everywhere" but does not contain any *K*5s. The obvious way to create a large number of triangles without creating *K*5s is to take a complete tripartite graph. Of course, this on its own does nothing, since a complete tripartite graph has a huge independent set, but we can use it as a building block by taking a union of many complete tripartite graphs. In previous constructions, such as Wolfovitz’s graph that gives an upper bound for *f*3, 4(*n*), the vertex sets of these tripartite graphs are chosen algebraically – in Wolfovitz’s case they are the lines of a projective plane. The main difference in our approach is that we simply choose them at random, where the number we choose and the size of each one are parameters that we optimize at the end of the argument. This creates difficulties that are not present in the earlier approaches, but in the end allows us to prove stronger bounds. Thus, we begin by taking a graph *G*0, which is a union of roughly *n*9/13 complete tripartite graphs with parts having size roughly *n*6/13 each, these parts being randomly chosen subsets of *V*(*G*0). It is not hard to prove that *G*0 contains a triangle in every set of vertices of size roughly *n*6/13. However, *G*0 also contains many *K*5s, so we have to delete some edges. It is here that the proof becomes less simple: while random constructions followed by edge deletions are very standard, in this case we need rather delicate arguments in order to prove that it can be done without removing all the triangles from a set of size around *n*6/13. First, let us check that every set of size roughly *n*6/13 does indeed induce a triangle in *G*0. Let *A* be a subset of *V*(*G*0) of size *n*6/13. A given tripartite copy will intersect *A* in at least 3 vertices with probability roughly *n*− 3/13. Thus, as we place *n*9/13 tripartites, the expected number of those tripartites that give a triangle in *A* is roughly *n*6/13. Hence, by the Chernoff bound, the probability that *A* does not contain a triangle is roughly *e*− *n*6/13. But the number of subsets of *V*(*G*0) of size *n*6/13 is *very* roughly *n**n*6/13. Modifying the parameters by log*n* factors suitably, a union bound shows that almost surely every subset *A* of size roughly *n*6/13 will contain a triangle. In fact, a slightly more careful examination of this argument reveals that almost surely every such subset will contain at least *n*6/13 triangles, each coming from a single tripartite graph such that the tripartites corresponding to different triangles are all distinct. Now let us specify which edges get deleted. We shall delete them in two stages. The first stage consists of what we call *Type 1* deletions. Given any two of our random tripartite graphs, with vertex sets *A* = *A*0 ∪ *A*1 ∪ *A*2 and *B* = *B*0 ∪ *B*1 ∪ *B*2, we remove all edges *x**y* such that *x*, *y* ∈ *A* ∩ *B*. We do not insist that *x**y* is an edge of both tripartite graphs: if, for example, *x*, *y* ∈ *A*0, *x* ∈ *B*0 and *y* ∈ *B*1, then the edge *x**y* will be removed. Let *G*1 be the resulting graph when all such edges have been deleted. The reason for these deletions is that each of our tripartite graphs contains many copies of *K*3, 1, 1, which are somewhat ``dangerous" for us, since all it takes to convert a *K*3, 1, 1 into a *K*5 is the addition of a further triangle. If we do not do Type 1 deletions, then we will obtain *K*5s in this way too frequently, with the result that most edges in the graph are contained in a *K*5. Indeed, the expected number of edges in *G*0 is roughly *n*9/13(*n*6/13)2 = *n*21/13 and the expected number of *K*5s of the above form is roughly *n*5(*n*9/13)2(*n*− 7/13)8 = *n*27/13. Type 1 deletion is feasible in the sense that it destroys only a small proportion of the edges of *G*0. That is because it is significantly less likely for a pair of vertices to be contained in two tripartite copies than for it to be contained in one tripartite copy. Thanks to Type 1 deletions, it has become ``difficult" for *K*5s to appear in *G*1, since now none of our random tripartite graphs can intersect a *K*5 in more than 3 vertices. Indeed, if one of them intersects a *K*5 in say 4 vertices, then there exist two of those vertices between which this tripartite does not provide an edge, and if one of the other tripartites gives an edge in *G*0 between those two vertices, that edge is deleted. Thus, it is easy to check that if a *K*5 appears in *G*1, then it has to do so in one of the following ways. 1. All 10 edges of the *K*5 come from distinct tripartites. 2. There is one tripartite giving a triangle in the *K*5 but all the other 7 edges come from distinct tripartites. 3. There are two tripartites that each give a triangle in the *K*5, these two triangles sharing a single vertex, and all the other 4 edges come from distinct tripartites. We now delete at least one edge from each of these remaining *K*5s. This will be done probabilistically and the precise method will be explained later. The deletions in this second round we call *Type 2* deletions. Once they have been performed, the resulting graph is our final graph *G*. The graph *G* is *K*5-free, by definition, but we now have to show that we have not inadvertently destroyed all the triangles in some set of *n*6/13(log*n*)*c* vertices. We begin by checking the more basic requirement that the Type 2 deletions destroy only a small proportion of the edges. That is, we check that the expected number of *K*5s in *G*1 is less than the expected number of edges (which is already computed to be *n*21/13). To do this, we split into the three cases mentioned above. To calculate the expected number of *K*5s of type (i), observe that there are at most *n*5 choices for the vertex set, and (*n*9/13)10 choices for the copies of tripartites giving an edge (since there are *n*9/13 tripartites to choose from and we need 10 of them), and the probability that the vertices of the *K*5 are in these tripartites as prescribed is (*n*− 7/13)20 (since the probability that a given vertex is in a given tripartite is *n*− 7/13), giving that the expected number of these *K*5 is *n*15/13. Similarly, the expected number of *K*5s of type (ii) is *n*5(*n*9/13)8(*n*− 7/13)17 = *n*18/13. Finally, the expected number of *K*5s of type (iii) is *n*5(*n*9/13)6(*n*− 7/13)14 = *n*21/13. This last number is roughly equal to the expected number of edges, therefore we will need to modify the parameters by log*n* factors. However, the main point is that after this second round of deletions, most edges of the original graph are still present. In order to finish off the proof, there are two main difficulties to overcome. The first one is that even though we have made sure that globally not too many edges are deleted, this is, as we have already mentioned, just a necessary condition for the argument to have a chance of working. What we actually need is the stronger statement that every induced subset of size *n*6/13(log*n*)*c* still contains a triangle. We can hope that the small set of edges we have removed is ``sufficiently random“ for this to be the case, but actually proving that takes some work. Let us sketch how we do it. From now on, it will be convenient to think of each tripartite as having a colour: accordingly, we call the tripartites ``colour classes”. If a vertex belongs to, say, the red tripartite, then we say that that vertex is red. Let us now fix a set *A* of size *n*6/13(log*n*)*c*. As shown above, we can take it for granted that *G*0 contains a big set T of triangles in *A*, all coming from different colour classes. Moreover, these triangles will be uniformly distributed over *A*. Let *T**C* ∈ T be a triangle coming from the colour class *C*. (Note that not every colour gives a triangle, and not every triangle in *A* comes from just one colour class.) Let us first deal with Type 1 deletions. An edge of some *T**C* gets deleted by the Type 1 deletions if the endpoints of this edge share a colour other than *C*. So intuitively we can imagine that *G*0 has already been constructed, and then we place these triangles *T**C* randomly inside *A* and hope that most triangles will not have any edge contained in another colour class. It is not too hard to show, under suitable assumptions, that with very high probability the density of pairs of vertices in *A* sharing a colour is fairly low (this essentially comes from the fact that the typical sizes of the tripartites are smaller - after adjusting the parameters by suitable log factors - than the size of *A*). Therefore for a fixed *T**C* it is indeed true that with fairly high probability its edges will not be deleted by Type 1 deletions. However, these events are not independent for different colours *C*. To overcome this difficulty, we define a set Π of roughly log*n* partitions with the property that for any pair of distinct colours *C*, *D* there is a *π* ∈ Π such that *D* is in the first part of *π* and *C* is in the second part. We now define a *π*-*dangerous pair* to be a pair of vertices that share a colour from the first part of *π*. If an edge *x**y* of a *T**C* gets deleted (by Type 1 deletions) then *x* and *y* share a colour *D* ≠ *C* and there is some *π* ∈ Π such that *D* is in the first part of *π* and *C* is in the second part of *π* and therefore (*x*, *y*) is a *π*-dangerous pair. But note that, as indicated above, the density of *π*-dangerous pairs will be fairly low, so the probability that an edge of *T**C* is deleted because of a colour in the first part of *π* is low, and, conditional on the outcome of colours in the first part of *π*, these events are now independent for all *C* in the second part of *π*. We can therefore conclude that only a small proportion of these *T**C*s will lose an edge thanks to colours in the first part of *π*. Thus, since Π is small, we deduce that most triangles *T**C* will not lose an edge. That is, we can find many triangles in *A* even after the Type 1 deletions. Now let us define Type 2 deletions. Given the graph *G*1, we order its edges randomly and keep each edge provided that it does not form a *K*5 when combined with the edges that we have already decided to keep. We remark that this construction is a variant of the so called *K*5-free process. The edges we keep will form our final graph *G*. To be more precise, we note here that in fact we keep an edge only if it does not form a so called *core* of a *K*5 of *G*1 when combined with the edges that we have already decided to keep. The core is a certain subgraph of a *K*5 defined in terms of the colours of its edges. The reader is encouraged to think of the core of a *K*5 as the *K*5 itself (especially as we can prove that the core of any *K**t* is itself, but the proof of this fact is very long and we do not include it in this paper). As shown above, the number of *K*5s in *G*1 is less than the number of edges, that is, on average an edge is contained in less than one *K*5. In fact, one can show that almost surely *every* edge will be contained in a relatively small number of *K*5s. It is not hard to see that this means that any triangle in *G*1 is also present in *G* with probability not very close to 0. Since the number of triangles in *G*1[*A*] is large, standard concentration inequalities will imply that with very high probability *G*[*A*] still contains a triangle. Using the union bound over all *A* (of size roughly *n*6/13), we conclude that almost surely every *G*[*A*] contains a triangle, finishing the proof. Let us briefly discuss how we determined the parameters of our construction. Let *n**δ* be the number of tripartite copies placed, let *n**β* be the size of each part of each of these copies, and let *n**α* be the set size that will guarantee an induced triangle. The parameters *δ*, *β* have been chosen to optimize the result: that is, to allow *α* to be as small as possible. There are three main conditions that we need to impose on these parameters. The first one is that we need enough triangles in *G*0 inside every *A* of size *n**α*. It is not hard to see that this condition is equivalent to *δ* + 3(*α* + *β* − 1) ≥ *α*. The second one comes from the fact that the parts of the tripartites will not contain a triangle in *G* (since every edge inside a part of a tripartite gets deleted by Type 1 deletions), so we trivially need *α* ≥ *β*. Finally, we want the expected number of *K*5s in *G*1 to be less than the expected number of edges in *G*1 which gives (only considering those *K*5s which are type (iii) in the sense described a few paragraphs above) *δ* + 2*β* ≥ 5 + 6*δ* + 14(*β* − 1). It is not hard to see that these conditions force *α* ≥ 6/13 and that equality is achieved by taking *δ* = 9/13, *β* = 6/13. This leads us to the other main difficulty, which arises only when we consider more general values of *s*, *t*. While ([enoughtriangles]) is essentially the same but with 3 replaced by *s*, and ([partsnotriangles]) is exactly the same, ([k5freqs]) becomes completely different. Indeed, it will be crucial to analyse all possible ways that a *K**t* can occur in *G*1 in some systematic way, rather than writing down the three possibilities (i),(ii),(iii) as we did above in the *s* = 3, *t* = 5 case, since in general there are many ways that a *K**t* can be formed from the contributions of the various *s*-partite graphs. Analysing these decompositions of *K**t*, which we shall refer to as colour schemes (again by imagining that each *s*-partite graph has its own colour), is necessary to determine the best parameters *δ*, *β*, and also to prove Theorem [ERbound] for these parameters. The complicated formula for *α* is obtained by solving the system of inequalities ([enoughtriangles]),([partsnotriangles]),([k5freqs]) that we obtain in the general case. The organization of this paper is as follows. In Section [sectionconstruction] we present our construction. In Section [sectionmain] we give the main part of the proof conditional on three lemmas. These lemmas are proved in Section [sectionauxiliary]. The first one, which asserts that each edge in *G*1 is contained in a small number of (cores of) *K**t*s, is proved in Subsection [subsectionfewcores], conditional on a lemma about colour schemes that is proved in Subsection [subsectionnegscheme]. The result that says that *G*1[*A*] contains many *K**s*s is proved in Subsection [subsectionfindlarget]. Finally, there is an appendix that contains some tedious computations and the source code of a program relevant to some results in Subsection [subsectionnegscheme]. The precise construction and the main result ============================================ Logarithms throughout the paper are to base *e*. We will not be concerned with floor signs, divisibility, and so on. Also, we will tacitly assume that *n* is sufficiently large whenever this is needed. Moreover, throughout the rest of the paper, it is to be understood that *s* ≥ 3 and that *s* + 2 ≤ *t* ≤ 2*s* − 1. Recall that a pair (*s*, *t*) is regular if *s* ≥ 11 and *s* + 3 ≤ *t* ≤ 2*s* − 4 or if (*s*, *t*) ∈ {(10, 14), (10, 15)}, and otherwise it is exceptional. Let $$\delta=s-(2s-1)\alpha=\begin{cases} \delta(1)=\frac{(2s-2)(t-s)(t+s-1)+2s^2-4st+2t+2s}{(2s-3)(t-s)(t+s-1)-2s+4}, & \text{ if } (s,t) \text{ is regular} \\ \delta(2)=\frac{(2s-2)(t-s)(s-1)-st+3s-1}{(2s-3)(t-s)(s-1)+2s-t}, & \text{ if } (s,t) \text{ is exceptional} \end{cases}$$ [alphaanddelta] *δ* < 2*α* < 1. If (*s*, *t*) is regular, then $$\begin{aligned} 2\alpha-\delta&=\frac{4st-2s^2+2t-6s-2(t-s)(t+s+1)}{(2s-3)(t-s)(t+s-1)-2s+4} \\ &=\frac{4st-2t^2-4s}{(2s-3)(t-s)(t+s-1)-2s+4}> 0, \end{aligned}$$ since *s* + 1 ≤ *t* ≤ 2*s* − 2. If (*s*, *t*) is exceptional, then $$2\alpha-\delta=\frac{st-s-1-2(t-s)(s-1)}{(2s-3)(t-s)(s-1)+2s-t}=\frac{2s^2-st+2t-3s-1}{(2s-3)(t-s)(s-1)+2s-t}>0,$$ since *s* + 1 ≤ *t* ≤ 2*s* − 1. By Lemma [app2] (e) from the appendix, we have *δ* > 2/3 > 1/2, which implies that *α* < 1/2. Intuitively, one can think of *α* as 1/2 − *ε* for *ε* quite small and *δ* = 1/2 + (2*s* − 1)*ε*. This makes *δ* significantly greater than 1/2 but less than 1. Also, it may be helpful to bear in mind the case *s* = 3, *t* = 5, where, as we have seen, *δ* = 9/13 and *α* = 6/13. Let *m* = *n**δ*(log*n*)− *c*1 *γ* = *n**α* − 1(log*n*)− *c*2 *a* = *n**α*(log*n*)*c*3 where *c*1, *c*2, *c*3 are positive constants, to be specified, that depend on *s* and *t*. (In fact, *c*1 can be taken to be 0. All we need are that *c*2 is suitably large and that *c*3 is sufficiently larger than *c*1, *c*2.) The following estimates will be used several times later in the paper. [mandgamma] *m**γ* > 1 and *m**γ*2 < 1. Note that *δ* + (*α* − 1) = (*s* − 1) − (2*s* − 2)*α* > 0 since *α* < 1/2. This implies that *m**γ* > 1. Also, *δ* + 2(*α* − 1) < 4*α* − 2 < 0, by Lemma [alphaanddelta]. This implies that *m**γ*2 < 1. We construct the graph *G*0 as follows. Let *V* = *V*(*G*0) = {1, 2, ..., *n*}. Define independent random subsets *S*1, ..., *S**m* of *V* in such a way that each *S**i* contains each *v* ∈ *V* independently with probability *γ*. We call *S**i* the *i*th *colour class*. If *v* ∈ *S**i*, we say that *v* *has colour* *i*. Now randomly partition each *S**i* into *s* sets, *S**i*1, *S**i*2, ..., *S**i**s* by placing each element of *S**i* independently at random in one of these parts, and use these sets to define a complete *s*-partite graph. Let *G*0 be the union of these *s*-partite graphs. We say that a pair of vertices has colour *i* if both its members have colour *i*. We do not require the pair to form an edge in *G*0. Remove all edges of *G*0 that have at least two colours to obtain the subgraph *G*1. Again, we do not require both colours to give an edge. Another way to state the condition is that if *x**y* is an edge of colour *i* and *x* and *y* both have colour *j* for some *j* ≠ *i*, then we remove the edge *x**y* even if *x* and *y* belong to the same set *S**j**r*. Finally, for every *K**t* in *G*1 we randomly remove a certain edge, which we shall specify in a moment. The resulting graph is called *G*. The graph *G* is obviously *K**t*-free. We shall prove that for suitable choices for the constants *c*1, *c*2, *c*3, we have the following result, which is our main theorem. [mainresult] For *n* sufficiently large, there is a positive probability that every subset *A* of *G* with ∣*A*∣ = *a* contains a *K**s*. Obviously Theorem [mainresult] implies Theorem [ERbound]. Let us now specify which edges are removed from *G*1. Suppose that *x*1, ..., *x**t* form a *K**t* in *G*1. Then necessarily any two distinct vertices *x**i* and *x**j* share precisely one colour. Indeed, they must share at least one colour since *x**i**x**j* ∈ *E*(*G*0) but they cannot share more than one since then *x**i**x**j* would have been removed from *G*0 during the first round of deletions. [scheme] A *colour scheme* for *K**t* with parameter *s*, or *scheme* for short, is a set *X* of *t* nodes and a set D of subsets of *X*, which we call *colours*, or *blocks*, such that 1. For any *x*, *y* ∈ *X*, there is a unique *D* ∈ D such that *x*, *y* both belong to *D*. 2. Every colour appears on at least two nodes. 3. Every colour appears at most *s* times. A pair of nodes is called an *edge* and the *colour* of an edge is the unique colour that contains both endpoints. (Note that a node may have several colours.) If a node *x* belongs to a colour *D*, we shall say that *D* *labels* *x*. We also define a *label* to be a pair (*x*, *D*) such that *x* is a node and *D* labels *x*. The number of labels in a scheme is thus the sum of the sizes of all the colours. If *X* = {*x*1, ..., *x**t*} forms a *K**t* in *G*1, then there is set of (at most ${t \choose 2}$) colours such that *X* is a colour scheme with respect to those colours, and no other colour labels more than one vertex in *X*. Indeed, we have already observed that property (i) holds. Choosing the colours suitably, (ii) can clearly be achieved. For property (iii), observe that if some colour *D* labels at least *s* + 1 vertices, then there must exist distinct vertices *x**i* and *x**j* that belong to the same part of the complete *s*-partite graph of colour *D*. Then *D* does not provide an edge between *x**i* and *x**j*, so some other colour must, but then *x**i* and *x**j* share at least two colours, which contradicts (i). Thus, any *K**t* in *G*1 can be viewed as a scheme in a natural way. A simple upper bound for the expected number of *K**t*s associated with a scheme *Q* is *n**t**m**b**γ**l*, where *l* is the number of labels of *Q* and *b* is the number of colours of *Q*. Indeed, the number of ways choosing the *t* nodes is at most *n**t*, the number of ways of choosing the *b* colours (from the *m* colours used to construct *G*1) is at most *m**b*, and the probability that any given choice of nodes and colours realizes the scheme is *γ**l*, since for each label the probability that the given node receives the given colour is *γ*, and all these events are independent. Now *n**t**m**b**γ**l* = *n**t* + *b**δ* + *l*(*α* − 1)(log*n*)*f* for some *f* = *f*(*s*, *t*, *b*, *l*). Also, once we know that a certain pair *u*, *v* of vertices have a colour in common, the expected number of *K**t*s associated with *Q* that contain *u* and *v* becomes at most roughly *n**t* − 2*m**b* − 1*γ**l* − 2 = *n**t* − 2 + (*b* − 1)*δ* + (*l* − 2)(*α* − 1)(log*n*)*f*ʹ. This motivates the following definition. [schemevalue] The *value* of a scheme *Q* with *b* colours and *l* labels, denoted *v*(*Q*), is given by the formula *v*(*Q*) = *t* − 2 + (*b* − 1)*δ* + (*l* − 2)(*α* − 1). Thus, roughly speaking, the expected number of *K**t*s associated with a scheme *Q* that contain a given edge in *G*1 is at most *n**v*(*Q*) up to log factors. The following lemma – proved in Subsection [subsectionnegscheme] – shows that this number is small. [negscheme] Let *Q* be a scheme. Then *v*(*Q*) ≤ 0. We shall also need a generalization of the notion of a scheme where a pair of nodes does not need to have a colour, if it does have a colour then that colour does not have to be unique, and a colour is allowed to label more than *s* nodes. [colourconfig] A *colour configuration* consists of a set of nodes and a set of colours labelling the nodes such that every colour appears on at least two nodes. Given a colour configuration *W* and a subset *S* of its nodes, we define the *subconfiguration induced by* *S* to be the configuration whose nodes are the elements of *S* and whose colours are the colours of *W* that appear at least twice on *S* (which then label the nodes in *S* that they labelled in *W*). The *value* of a configuration *W* is defined to be *v*(*W*) = *h* − 2 + (*b* − 1)*δ* + (*l* − 2)(*α* − 1),  where *h* is the number of nodes, *b* is the number of colours and *l* is the number of labels in *W* (where a label is again a pair (*x*, *D*) where *x* is a node labelled by the colour *D*). The same argument as for schemes shows that, once we condition on the event that *u* and *v* are both coloured red, the expected number of occurrences of a colour configuration *W* that contain both *u* and *v* is at most *n**v*(*W*) up to log factors. (In fact, it is smaller unless *u* and *v* share a colour in *W*.) [defncore] The *core* of a scheme *Q*, denoted *C*(*Q*), is the induced subconfiguration *S* on at least two nodes for which *v*(*S*) is minimal. If several subconfigurations have the same value then the core is the one with the maximum number of nodes. If this is still not unique, then we simply pick an arbitrary one with the given properties. We can in fact prove that *C*(*Q*) = *Q* for every scheme *Q*. Although using that fact would simplify the argument in this paper slightly, this gain does not compensate for the extra work needed to establish it, so we shall avoid using it. Nevertheless, the reader is encouraged to think of a core just as a scheme: that is, as a *K**t* in the graph *G*1 with the colours given by the *s*-partite graphs with vertex sets that contain at least two of its vertices. [corefacts] Let *Q* be a scheme. Then *C*(*Q*) has at least 3 nodes, *v*(*C*(*Q*)) ≤ 0, and *v*(*S*) ≥ *v*(*C*(*Q*)) for every induced subconfiguration *S* of *C*(*Q*) with at least two nodes. The first two assertions follow from Lemma [negscheme], since an induced subconfiguration of *Q* with two nodes has value 0. The third assertion follows immediately from the definition of the core. We can now define *G* precisely. Following an idea in, we assign independently to each edge *e* of *G*1 a birthtime *β**e*, chosen uniformly randomly from [0, 1]. Equivalently, we order the edges of *G*1 uniformly at random from all the possible orderings. To define the edge set *E*(*G*), which will be a subset of *E*(*G*1), we recursively decide for each *e* ∈ *E*(*G*1) whether *e* ∈ *E*(*G*), as follows. Suppose that the decision has been made for every *e*ʹ ∈ *E*(*G*1) with *β**e*ʹ < *β**e*. Then let *e* ∈ *E*(*G*) unless there is a *K**t* in *G*1, which we view as a scheme *Q*, for which the edges of *C*(*Q*) all have birthtime at most *β**e* and they all (apart from *e*) already belong to *E*(*G*). For any *K**t* in *G*1 there is an edge in the core of that *K**t* that is not an edge of *G*, since if all the edges in the core apart from the last one are chosen to belong to *E*(*G*), then the last one is not. Thus, *G* is *K**t*-free. It remains to prove that with positive probability every set of *a* vertices still contains a *K**s*, which was Theorem [mainresult] above. The proof of Theorem [mainresult] ================================= In this section, we shall prove Theorem [mainresult] conditional on two lemmas, which we shall prove in Section [sectionauxiliary] and which are where most of the work will be. The first one says, roughly speaking, that for any *A* of size *a*, the induced subgraph *G*1[*A*] of *G*1 contains many copies of *K**s*. [findlarget] Almost surely, for every *A* of size *a* there is a set of Ω(*m**a**s**γ**s*) monochromatic copies of *K**s* inside *G*1[*A*], each with a different colour. The second tells us that any edge in *G*1 is contained in few cores. Here, and in what follows, we use the word ``core" to refer to the core of a *K**t* in *G*1. [fewcores] Almost surely, any edge in *G*1 is contained in at most (log*n*)2*t* cores. We shall use McDiarmid’s inequality in the next proof, which for convenience we recall here. Let *Y*1, …, *Y**N* be independent random variables, taking values in a set *S*, and let *X* = *g*(*Y*1, …, *Y**N*) for some *g* : *S**N* → R with the property that if *y*, *y*ʹ ∈ *S**N* only differ in their *i*th coordinate, then ∣*g*(*y*) − *g*(*y*ʹ)∣ ≤ *c**i*. Then the inequality states that $$\mathbb{P}\big\lbrack |X-\mathbb{E}\lbrack X\rbrack|\geq r \big\rbrack\leq 2 \exp\Big(\frac{-2r^2}{\sum\_i c\_i^2}\Big).$$ The following lemma, together with Lemmas [findlarget] and [fewcores] and a union bound, implies Theorem [mainresult]. Suppose that *G*1 is such that any edge in *G*1 is contained in at most (log*n*)2*t* cores. Let *A* be a set of vertices of size *a* such that the induced subgraph *G*1[*A*] contains Ω(*m**a**s**γ**s*) monochromatic copies of *K**s*, each with a different colour. Then the probability, conditional on the graph *G*1, that *G*[*A*] does not contain any *K**s* is $o\big(\frac{1}{{n \choose a}}\big)$. Choose Ω(*m**a**s**γ**s*) monochromatic copies of *K**s* in *G*1[*A*], all of distinct colours. Let the set of these copies be T. Then by the definition of the first deletion process, the elements of T are edge disjoint. Let *T* ∈ T. Let *E**T* be the set of all edges of cores that have at least one edge that belongs to *T*, together with the edges of *T* itself. Clearly, $|E\_T|\leq \binom s2+\binom s2(\log n)^{2t}{t \choose 2}\leq (\log n)^{3t}$. Let *B**T* be the event that the birthtimes of the edges of *T* precede the birthtimes of all other edges in *E**T*. If *B**T* occurs, then the only way an edge of *T* could be deleted from *G*1 and therefore fail to be present in *G* is if *T* itself contains a core of some *K**t*. But note that there is no colour that labels every vertex in a core *C*. Indeed, if there is such a colour, then since all edges in a core belong to *G*1, there is no other colour appearing at least twice on the node set of *C*, therefore *C*, considered as a colour configuration, has value *h* − 2 + (*h* − 2)(*α* − 1) = (*h* − 2)*α* (where *h* is the number of nodes in *C*), which contradicts Lemma [corefacts]. It follows that if *B**T* occurs, then every edge of *T* is present in *G*. For a fixed *G*1, let *X* be the number of events *B**T* that occur over all *T* ∈ T. Then *X* is a random variable with the property that if *X* ≠ 0, then there is some *T* ∈ T that belongs to *G*[*A*]. It therefore suffices to prove that $\mathbb{P}\lbrack X=0\rbrack=o\big(\frac{1}{{n \choose a}}\big)$. To do this, we apply McDiarmid’s inequality when *Y**i* is the birthtime of the *i*th edge. Since the *T* ∈ T are edge disjoint, and any edge *e* in *G*1 is contained in at most (log*n*)2*t* cores, it follows that *e* is contained in at most $1+(\log n)^{2t}{t \choose 2}\leq (\log n)^{3t}$ of the graphs *E**T*. Hence, changing the birthtime *β**e* of *e* influences at most (log*n*)3*t* of the events *B**T*. Also, if *e* ∉  ∪ *T* ∈ T*E**T*, then *β**e* does not influence *any* event *B**T*. Thus, by McDiarmid’s inequality (with some *N* ≤ ∣T∣(log*n*)3*t*), we get $$\mathbb{P}\lbrack X=0\rbrack\leq 2 \exp\Big(\frac{-2(\mathbb{E}\lbrack X\rbrack)^2}{|\mathcal{T}|(\log n)^{3t}((\log n)^{3t})^2}\Big).$$ Now note that $\mathbb{P}\lbrack B\_T\rbrack\geq |E\_T|^{-{s\choose 2}}\geq (\log n)^{-3s^2t}$, so E[*X*] ≥ ∣T∣(log*n*)− 3*s*2*t*, and $$\mathbb{P}\lbrack X=0\rbrack\leq 2 \exp\Big(\frac{-2|\mathcal{T}|}{(\log n)^{6s^2t+9t}}\Big).$$ Finally, note that ${n \choose a}\leq n^a= \exp(a\log n)$. To finish the proof we just need to verify that $\frac{|\mathcal{T}|}{(\log n)^{6s^2t+9t}}=\omega(a\log n)$. Since $$\frac{|\mathcal{T}|}{a}=\Omega(ma^{s-1}\gamma^s)=n^{\delta+(s-1)\alpha+s(\alpha-1)}(\log n)^{-c\_1+(s-1)c\_3-sc\_2}=(\log n)^{-c\_1+(s-1)c\_3-sc\_2},$$ we are done provided that (*s* − 1)*c*3 − *s**c*2 − *c*1 > 6*s*2*t* + 9*t* + 1. The proofs of the auxiliary lemmas ================================== In this section we shall prove Lemmas [negscheme], [findlarget] and [fewcores], which are the results we used in the proof of Theorem [mainresult] but have not yet proved. The proof of Lemma [fewcores] ----------------------------- Let *e* be an edge in *G*1. We would like to show that it belongs to at most (log*n*)2*t* cores. Any core that contains *e* can be viewed as a core in a scheme that contains *e*, and as such it has nonpositive value. But for any colour configuration *W* (with more than two labels), the expected number of occurrences of that colour configuration in *G*0 containing a fixed edge in *G*1 is at most *n**v*(*W*)(log*n*)− *c*2 (as we remarked slightly less precisely after Definition [colourconfig]), which is at most (log*n*)− *c*2 if *v*(*W*) ≤ 0. In particular, the probability that an edge *e* in *G*1 is contained in *r* cores that are pairwise disjoint apart from their intersection on *e* is at most (log*n*)− *r**c*2. If *r* = log*n* then this is much less than 1/*n*2, and therefore almost surely no edge is contained in log*n* cores of the above form. In general, the cores containing *e* need not be disjoint. This adds a complication, and we need to introduce a few definitions to handle it, but the main reason Lemma [fewcores] holds is the one given in the previous paragraph. The next definition describes the kind of colour configuration which – if it occurs in *G*0 – can produce many cores in *G*1 (that is, cores of *K**t*s in *G*1 that we view as schemes) that contain a given edge *x**y*. Soon we shall argue that almost surely no such large configuration occurs in *G*0. [corecontainer] An *abstract core container* *W* is a colour configuration whose nodes are {*x*} ∪ {*y*} ∪ *Z* and in which every *z* ∈ *Z* is contained in at least one abstract core, where an abstract core is defined as follows. An *abstract core* in a core container is an induced subconfiguration *S* consisting of at most *t* nodes and containing *x* and *y* such that for any induced subconfiguration *S*ʹ ⊂ *S* containing *x*, *y*, we have *v*(*S*ʹ) ≥ *v*(*S*) and such that for any two distinct *u*, *v* ∈ *S* there is a unique colour that labels both *u* and *v*. The *size* of a core container is the number of nodes it contains. A core container is *irreducible* if it is not possible to remove a label or colour and still have a core container. Assume for a moment that we know that the core of a scheme is the scheme itself (see the remark after Definition [defncore]). Then Lemma [fewcores] just asserts that each edge in *G*1 is contained in few *K**t*s. Then we can replace the technical notion of abstract core container with the notion of *abstract scheme container* instead. What we mean by that is a colour configuration whose nodes are {*x*} ∪ {*y*} ∪ *Z* and in which every *z* ∈ *Z* is contained in at least one colour scheme containing *x* and *y* as well. This is a configuration that is dangerous to us since if it occurs in *G*0, then the edge *x**y* is contained in many *K**t*s (corresponding to the various schemes in the configuration). Note that as the vertices of *G*0 are coloured, we can naturally talk about *G*0 containing various colour configurations. We shall now establish that: 1. If an edge in *G*1 is contained in many cores, then there is a large irreducible core container in *G*0. 2. There are not too many irreducible abstract core containers of fixed size. 3. The expected number of occurrences in *G*0 of any large abstract core container is small. The last two points will imply that almost surely there is no large irreducible core container in *G*0, which in turn implies that there is no edge in *G*1 that is contained in many cores. Note that for the second point it is important that we count only irreducible core containers because otherwise the number of colours in the core container could be arbitrarily large. [coretocont] If the edge *e* = *u**v* is contained in at least *r* cores of *K**t*s in *G*1, then there is an irreducible core container *W* in *G*0 with *x* = *u*, *y* = *v* (as in Definition [corecontainer]) and with size between $\frac{1}{2}r^{1/t}$ and *t**r*. Define a colour configuration *W*0 as follows. Arbitrarily pick *r* cores that contain *e*. The set of nodes of *W*0 is the set of vertices of *G*1 that are in one of these *r* cores. The set of colours is the set of those colours in *G*0 that appear at least twice on this set of nodes. This does indeed define a core container, since any core of a *K**t* in *G*1 that contains *e* satisfies the two properties required of an abstract core in *W*0: the minimality of *v* follows from the definition of a core, and the condition about the colours follows from the fact that the *K**t* belongs to *G*1. How many nodes does *W*0 have? Any core consists of between 2 and *t* nodes, so if the number of nodes of *W*0 is *h*, then $r\leq \sum\_{2\leq j\leq t} {h \choose j}\leq (2h)^t$. Thus, $h\geq \frac{1}{2}r^{1/t}$. On the other hand, *h* ≤ *r**t*, since the vertex set of *W*0 is a union of *r* cores. Now remove labels or colours as long as we still get a core container; the object we end up with is an irreducible core container of the required size. [containercount] The number of distinct irreducible abstract core containers of size *h* is at most *h**t*2 ⋅ 2*h**t*2 ⋅ *h**h**t*2. First we shall prove that the number of labels in an irreducible core container of size *h* is at most $2h{t \choose 2}\leq ht^2$. For any occurrence of a colour *D* at some node *u* (that is, for any label (*u*, *D*)), there must exist *v* ∈ {*x*} ∪ {*y*} ∪ *Z* such that every abstract core containing *v* contains *u* and the colour *D*, or else we could remove the occurrence of *D* at *u* and still have a core configuration. But for any *v*, there are at most $2{t \choose 2}$ such pairs (*u*, *D*), since *u* must belong to the intersection of the vertex sets of the abstract cores containing *v*, and in a given abstract core there are at most $2{t \choose 2}$ labels. Indeed, an abstract core is an induced subconfiguration so each of its colours labels at least two nodes. Now if an abstract core has *q* colours and they label *d*1, …, *d**q* nodes, then $\sum\_{i\leq q} {d\_i \choose 2}\leq {t \choose 2}$ because the abstract core has at most ${t \choose 2}$ pairs of nodes. Since *d**i* ≥ 2 for each *i*, it follows that $\sum\_{i\leq q} d\_i\leq 2{t\choose 2}$. So there are at most *h**t*2 choices for the total number of labels. Since the partition function *p*(*k*) is at most 2*k*, it follows that for each possibility for the number of labels, there are at most 2*h**t*2 choices for the number of occurrences for each colour class. Suppose we have *b* colours and the numbers of times that they occur are *l*1, …, *l**b*. Then the number of choices for the vertices labelled by these colours is at most ${h \choose l\_1}{h \choose l\_2}\dots {h \choose l\_b}\leq h^{l\_1+\dots+l\_b}\leq h^{ht^2}$. Next, we shall investigate how many copies we expect to have in *G*0 of a given abstract core container. Let *W* be more generally any colour configuration with *h* nodes, *b* colours and *l* labels. Then the expected number of occurrences of such a configuration is at most *n**h**m**b**γ**l*. Indeed, the number of ways of choosing the *h* nodes is at most *n**h*. The number of ways of choosing the *b* colours is at most *m**b*. And for each label, the probability that the given node receives the given colour is *γ*, and all these events are independent, so the probability that any given choice of nodes and colours realizes the scheme is *γ**l*. We call *n**h**m**b**γ**l* the *frequency* of the configuration *W* and denote it by *ω*(*W*). [contfreq] Let *W* be an abstract core container of size *h*. Then $$\omega(W)\leq n^2(\log n)^{-\frac{h-2}{t}c\_2}.$$ To prove this result, we will kill some of the nodes and colours and remove some of the labels of the core container in steps. To keep track of which nodes and colours have been killed, we introduce the following definition. [partialconfig] A *partial configuration* *P* consists of four pairwise disjoint sets {*x*}, {*y*}, *Z*0 and *Z*1 of nodes, and two disjoint sets B0, B1 of colours that label those nodes in such a way that any *B* ∈ B1 labels at least two nodes. We write B for B0 ∪ B1 and *Z* for *Z*0 ∪ *Z*1. We now generalize the notion of frequency to this setting, which can be thought of as the expected number of occurrences of the colour configuration for given choices of the nodes in *Z*0 and colours in B0, which represent the nodes and colours that have already been killed. Thus, we let *r* = ∣{*x*} ∪ {*y*} ∪ *Z*1∣ be the number of nodes yet to choose, we let *g* = ∣B1∣ be the number of colours yet to choose, and we let *u* be the total number of labels, including the labels on nodes in *Z*0 and of colours in B0. Then we can choose the remaining nodes in at most *n**r* ways and the remaining colours in at most *m**g* ways, and for each label there is a probability *γ* that the given node receives the given colour. So we define the frequency *ω*(*P*) to be *n**r**m**g**γ**u*. [**Proof of Lemma [contfreq].**] We shall define a sequence *P*0, …, *P**k* of partial configurations such that *ω*(*P*0) = *ω*(*W*), *ω*(*P**k*) ≤ *n*2, $k\geq \frac{h-2}{t}$ and *ω*(*P**j*) ≥ *ω*(*P**j* − 1)(log*n*)*c*2. Clearly, this suffices to prove the lemma. We shall define the *P**j* recursively. In what follows we use the notation of Definition [corecontainer] and Definition [partialconfig]. When there is ambiguity, we will write *Z*0(*P*) to mean *Z*0 in the partial configuration *P*, and similarly for *Z*1, B0, B1. The set of all nodes (respectively, colours) for every *P**j* will be the same as the set of all nodes (respectively, colours) of *W*, namely {*x*} ∪ {*y*} ∪ *Z* (respectively, B). However, B0, B1, *Z*0, *Z*1 and the labels will be different for the various *P**j*. Let us define *P*0 to be the partial configuration whose nodes, colours and labels are the same as those of *W* and which has *Z*0 = B0 = ∅. Then *ω*(*P*0) = *ω*(*W*). Given *P**j* − 1 with *Z*1(*P**j* − 1) ≠ ∅, we define *P**j* as follows. Pick some *z* ∈ *Z*1(*P**j* − 1) arbitrarily. As *W* is a core container, we can choose an abstract core *S* in *W* that contains *z*. Let *S*1 = *S* ∩ *Z*1(*P**j* − 1). Let D be the set of those colours *B* ∈ B1(*P**j* − 1) that occur at least twice on *S* in *P**j* − 1. Then let the sets of nodes of *P**j* be *Z*0(*P**j*) = *Z*0(*P**j* − 1) ∪ *S*1 and *Z*1(*P**j*) = *Z*1(*P**j* − 1) \ *S*1, and let the sets of colours be B0(*P**j*) = B0(*P**j* − 1) ∪ D and B1(*P**j*) = B1(*P**j* − 1) \ D. The labels of *P**j* are those of *P**j* − 1 except that all occurrences of colours in B0(*P**j*) are removed from *S*. It is clear that *P**j* is a partial configuration. We want to prove that *ω*(*P**j*) ≥ *ω*(*P**j* − 1)(log*n*)*c*2. *Claim.* $\frac{\omega(P\_j)}{\omega(P\_{j-1})}\geq \frac{\omega(S\setminus S\_1)}{\omega(S)}$, where *S* and *S* \ *S*1 are identified with their induced subconfigurations from *W*. *Proof of Claim.* The contribution of the nodes is (a factor of) *n*− ∣*S*1∣ to both $\frac{\omega(P\_j)}{\omega(P\_{j-1})}$ and $\frac{\omega(S\setminus S\_1)}{\omega(S)}$. Hence it suffices to prove that the contribution of any colour (and its labels) to $\frac{\omega(P\_j)}{\omega(P\_{j-1})}$ is at least as much as its contribution to $\frac{\omega(S\setminus S\_1)}{\omega(S)}$. There are two cases to consider. *Case 1.* If *B* is a colour that occurs at most once on *S* in *W*, then its contribution to $\frac{\omega(S\setminus S\_1)}{\omega(S)}$ is 1, whereas its contribution to $\frac{\omega(P\_j)}{\omega(P\_{j-1})}$ is at least 1. (Indeed, since *m**γ*2 < 1, the contribution of *any* colour to $\frac{\omega(P\_j)}{\omega(P\_{j-1})}$ is at least 1.) *Case 2.* Suppose, then, that *B* is a colour that occurs at least twice on *S* in *W*. *Case 2a.* If *B* ∈ B0(*P**j* − 1), then let *d* be the number of occurrences of *B* on *S*1 in *W*. The contribution of *B* to $\frac{\omega(S \setminus S\_1)}{\omega(S)}$ is at most *γ*− *d*. Indeed, this is clear unless *B* occurs exactly once on *S* \ *S*1 in *W*. But if this is the case, then the contribution of *B* is precisely *m*− 1*γ*− (*d* + 1), which is at most *γ*− *d*, by Lemma [mandgamma]. Note that any node in *S*1 (and in fact more generally in *Z*1(*P**j* − 1)) that is labelled by *B* in *W* is also labelled by *B* in *P**j* − 1. Therefore, the contribution of *B* to $\frac{\omega(P\_j)}{\omega(P\_{j-1})}$ is at least *γ*− *d*. *Case 2b.* If *B* ∈ B1(*P**j* − 1), then let *d* be the number of occurrences of *B* on *S* in *W*. The contribution of *B* to $\frac{\omega(S \setminus S\_1)}{\omega(S)}$ is at most *m*− 1*γ*− *d*. Indeed, this is clear unless *B* occurs at least twice on *S* \ *S*1 in *W*. But in this case the contribution of *B* is at most *γ*− (*d* − 2), which is at most *m*− 1*γ*− *d*, by Lemma [mandgamma]. Note that any node that is labelled by *B* in *W* is also labelled by *B* in *P**j* − 1. Therefore, *B* ∈ D and the contribution of *B* to $\frac{\omega(P\_j)}{\omega(P\_{j-1})}$ is precisely *m*− 1*γ*− *d*. This completes the proof of the claim. Since *S* is an abstract core in *W*, we have *v*(*S*) ≤ *v*(*S* \ *S*1), by the minimality of *S*. Because *S*1 ≠ ∅, and every node in a core has a label on it, it follows that, considering *S* and *S* \ *S*1 as induced subconfigurations of *W*, we have *ω*(*S* \ *S*1) ≥ *ω*(*S*)(log*n*)*c*2. Using the claim above, the inequality *ω*(*P**j*) ≥ *ω*(*P**j* − 1)(log*n*)*c*2 follows. Eventually we obtain a partial configuration *P**j* with *Z*1(*P**j*) = ∅. When this happens, we set *k* = *j*. By definition, we have in that case that *ω*(*P**k*) = *n*2*m**g**γ**u* where *g* = ∣B1(*P**k*)∣ and *u* is the number of labels in *P**k*. Since any *B* ∈ B1(*P**k*) labels at least two nodes in *P**k* and *m**γ*2 ≤ 1, we find that *ω*(*P**k*) ≤ *n*2. Also note that ∣*Z*1(*P**j*)∣ ≥ ∣*Z*1(*P**j* − 1)∣ − *t* for any *j*, and ∣*Z*1(*P*0)∣ = ∣*Z*∣ = *h* − 2, so $k\geq \frac{h-2}{t}$. We are now in a position to complete the proof of Lemma [fewcores]. [**Proof of Lemma [fewcores].**] By Lemma [coretocont], it suffices to prove that in *G*0 the expected number of irreducible core containers of size between log*n* and (log*n*)3*t* is *o*(1). *Claim.* If log*n* ≤ *h* ≤ (log*n*)3*t*, then the expected number of irreducible core containers of size *h* in *G*0 is at most *n*2(log*n*)− *t*3*h*. *Proof of Claim.* By Lemmas [containercount] and [contfreq], the expected number of irreducible core containers of size *h* in *G*0 is at most $ht^2 2^{ht^2}h^{ht^2}n^2(\log n)^{-\frac{h-2}{t}c\_2}\leq h^{3ht^2}n^2(\log n)^{-\frac{h-2}{t}c\_2}$. If *c*2 ≥ 11*t*4, then this is at most *h*3*h**t*2*n*2(log*n*)− 11(*h* − 2)*t*3 ≤ *h*3*h**t*2*n*2(log*n*)− 10*h**t*3 ≤ *n*2(log*n*)− *h**t*3 so the claim is proved. But ∑*h* ≥ log*n**n*2(log*n*)− *t*3*h* = *o*(1), and the proof is complete. The proof of Lemma [findlarget] ------------------------------- Our proof is based on the following two observations. 1. For any set of vertices *A* of size *a*, *G*0[*A*] contains many monochromatic *s*-cliques with pairwise distinct colours. 2. If a monochromatic *s*-clique is present in *G*0, then it is present also in *G*1 with high probability, and, crucially, the events that various *s*-cliques are preserved are ``sufficiently independent". First, we shall construct a small set of bipartitions of the set of colours with a suitable property. In a moment it will become clear why we need this. We will refer to the two parts of a bipartition as the *first part/first half* and the *second part/second half*. There exists a constant *c* and a set Π of *c*log*n* partitions of the set of *m* colours, each into two sets of size *m*/2, such that for any two distinct colours *C* and *D* there is a *π* ∈ Π such that *D* is contained in the first part of *π* and *C* is contained in the second part of *π*. Take *l* = *c*log*n* random partitions. For any *C*, *D*, the probability that none of the partitions is suitable is less than $(1-\frac{1}{5})^l=n^{-c\log (5/4)}$. For *c* sufficiently large this is less than *n*− 2, which is in turn less than *m*− 2 and the result follows from the union bound over all choices of *C*, *D*. Let *x**y* be an edge in *G*0. Recall that it is not an edge in *G*1 if *x*, *y* have at least two colours in common. Suppose that this is the case. Then there exists some *π* ∈ Π such that *x* and *y* have a colour in common from the first half of *π* and also a colour in common from the second half of *π*. From now on, when we say ``the first *m*/2 colours“, we will mean ``the *m*/2 colours in the first part of *π*” provided it is clear which *π* we are talking about. A pair (*x*, *y*) of vertices is **π*-dangerous* for some *π* ∈ Π if there is a colour class among the first *m*/2 colours that contains both *x* and *y*. Fix a set *A* of vertices with ∣*A*∣ = *a*. Let D be the collection of colours *D* such that at least one *K**s* inside *A* is entirely coloured with colour *D* in *G*0. (We require that every edge is given by this colour: that is, the vertices of the *K**s* are in different parts of the complete *s*-partite graph with colour *D*.) For each *π* ∈ Π, let D*π* be the set of all *D* ∈ D such that *D* is one of the last *m*/2 colours. To make sense of the statement of the next lemma, the reader should recall that *a**γ* is significantly less than 1. (See the beginning of Section [sectionconstruction] for their precise values.) [largedpi] With probability $1-o(\frac{1}{{n \choose a}})$, ∣D*π*∣ = Ω(*m**a**s**γ**s*) for every *π* ∈ Π. Let *C* be any colour class. The probability that *C* intersects *A* in exactly *s* elements is $$\mathbb{P}[\Bin(a,\gamma)=s]= {a \choose s}\gamma^s(1-\gamma)^{a-s}=\Omega(a^s\gamma^s(1-\gamma)^a)=\Omega(a^s\gamma^s(e^{-2\gamma})^a)= \Omega(a^s\gamma^s),$$ where the last inequality follows from the fact that *a**γ* = *n*2*α* − 1(log*n*)*c*3 − *c*2 = *o*(1). Hence P[*C* ∈ D] = Ω(*a**s**γ**s*). Moreover, the events {*C* ∈ D} are independent. Thus, for any *π*, by the Chernoff bound we get $\mathbb{P}\big[|\mathcal{D}\_{\pi}|=o(ma^s\gamma^s)\big]\leq e^{-\Omega(ma^s\gamma^s)}$. Therefore, using the union bound over all *π* ∈ Π, it suffices to prove that $(\log n)e^{-\Omega(ma^s\gamma^s)}=o(\frac{1}{{n \choose a}})$. But ${n \choose a}\leq n^a=e^{a\log n}$. Hence, we need (log*n*)*e*− Ω(*m**a**s**γ**s*) = *o*(*e*− *a*log*n*). For this, it is enough to prove that *a*log*n* = *o*(*m**a**s**γ**s*), ie. log*n* = *o*(*m**a**s* − 1*γ**s*). Since *m**a**s* − 1*γ**s* = *n**δ* + (*s* − 1)*α* + *s*(*α* − 1)(log*n*)− *c*1 + (*s* − 1)*c*3 − *s**c*2 = (log*n*)− *c*1 + (*s* − 1)*c*3 − *s**c*2,  we are done provided that (*s* − 1)*c*3 − *s**c*2 − *c*1 > 1. Therefore, using the union bound over all sets *A* of size *a*, we may assume that ∣D*π*∣ = Ω(*m**a**s**γ**s*) for every *π* ∈ Π and every such set *A*. [dangerousdensity] With probability 1 − *o*(1) the following holds. For every *A* of size *a* and for every *π* ∈ Π, the density of *π*-dangerous pairs in *A* is $o(\frac{1}{\log n})$. This result, which we shall prove later, allows us to assume for our fixed set *A* that the following statement holds. 1. ( ⋆ ) For any *π* ∈ Π, the density of *π*-dangerous pairs in *A* is $o(\frac{1}{\log n})$. For each $C\in\mathcal D$, pick a *K**s* uniformly at random in *G*0[*A*] of colour *C*, and call it *T**C*. We can now prove that with sufficiently high probability, most *T**C* will be present in *G*1. Let *π* ∈ Π. Then with probability $1-o(\frac{1}{(\log n){n \choose a}})$, the number of colours *C* ∈ D*π* for which *T**C* has a *π*-dangerous pair of vertices is $o(\frac{|\mathcal{D}\_{\pi}|}{\log n})$. We condition everything on the already chosen first *m*/2 colour classes. Now let *C* ∈ D*π*. (Recall that this means that there is a *K**s* in *A* in the graph *G*0 with all its edges of colour *C*, and moreover that *C* is one of the last *m*/2 colours with respect to *π*.) Label the vertices of *T**C* by 1, 2, ..., *s*. Note that any pair of vertices in *A* is chosen with equal probability and, by condition ( ⋆ ), at most $o(\frac{|A|^2}{\log n})$ of them are *π*-dangerous. So the probability that the first two vertices of *T**C* form a *π*-dangerous pair is $o(\frac{1}{\log n})$. Hence, for any *C* ∈ D*π*, the probability that *T**C* has a pair of vertices which form a *π*-dangerous pair is bounded above by some $p=o(\frac{1}{\log n})$. Moreover, this holds for all such *C* *independently* of the others. Thus, the probability that *T**C* contains a *π*-dangerous pair for more than $\Omega(\frac{|\mathcal{D}\_{\pi}|}{\log n})$ choices of *C* ∈ D*π* is at most $\mathbb{P}\big[\Bin(|\mathcal{D}\_{\pi}|,p)=\Omega(\frac{|\mathcal{D}\_{\pi}|}{\log n})\big]$. But this is $e^{-\Omega(\frac{|\mathcal{D}\_{\pi}|}{\log n})}$. So it remains to show that $(\log n){n \choose a}=o(e^{\Omega(\frac{|\mathcal{D}\_{\pi}|}{\log n})})$. Since ${n \choose a}\leq n^a= e^{a\log n}$, it suffices to prove that $a\log n=o(\frac{|\mathcal{D}\_{\pi}|}{\log n})$. But ∣D*π*∣ = Ω(*m**a**s**γ**s*) so it is enough to prove that (log*n*)2 = *o*(*m**a**s* − 1*γ**s*). By equation ([eqnmagamma]), this holds provided that (*s* − 1)*c*3 − *s**c*2 − *c*1 > 2. With probability $1-o(\frac{1}{{n \choose a}})$, for all but *o*(∣D∣) colours *C* ∈ D, all the edges of *T**C* are present in *G*1. Suppose that *C* ∈ D and *T**C* has an edge *e* which is not present in *G*1. Then there exists some *π* ∈ Π such that *C* is in the second half of *π* (so *C* ∈ D*π*) and *e* is *π*-dangerous. But by the previous lemma, with probability $1-o(\frac{1}{{n \choose a}})$ the number of such colours *C* is $o(|\Pi|\cdot \frac{|\mathcal{D}|}{\log n})=o(|\mathcal{D}|)$. Using Lemma [largedpi] and the union bound over all *A*, Lemma [findlarget] follows. We now return to proving Lemma [dangerousdensity]. Recall that we want to show that almost surely for every *A* and every *π*, the density of *π*-dangerous pairs in *A* is $o(\frac{1}{\log n})$. This is essentially best possible, since if we choose *A* to contain one of our colour classes entirely (for a colour chosen from the first part of *π*), then the pairs of vertices in that colour class will all be *π*-dangerous. Moreover, as the typical size of a colour class is *n**γ* = *n**α*(log*n*)− *c*2 = *a*(log*n*)− *c*2 − *c*3, the set of these pairs will have density roughly (log*n*)− 2*c*2 − 2*c*3. Accordingly, the next lemma is to make sure that no colour class is exceptionally large. With probability 1 − *o*(1), the size of every colour class is at most 2*n**γ*. $\mathbb{P}[\Bin(n,\gamma)> 2n\gamma]=e^{-\Omega(n\gamma)}=o(\frac{1}{m})$. The result follows from the union bound over all colours. So we may assume that all colour classes have size at most 2*n**γ*. After applying the union bound over all *π* ∈ Π and *A*, the next result completes the proof of Lemma [findlarget]. Fix *π* ∈ Π and a set *A* of size *a*. With probability $1-o(\frac{1}{(\log n){n \choose a}})$, the number of pairs in *A* which are *π*-dangerous is at most $4\frac{a^2}{(\log n)^{2}}$. The number of *π*-dangerous pairs in *A* is at most $$\label{type1bound} \sum\_{i=1}^{m}\big(\min\{\Bin(a,\gamma),2n\gamma\}\big)^2.$$ Let $h=\frac{a}{m^{1/2}\log n}$. Note that $\log h=(\alpha-\frac{1}{2}\delta)\log n+O(\log \log n)$ and recall that $\alpha>\frac{1}{2}\delta$. Now let $p=\mathbb{P}(\Bin(a,\gamma)\geq h)\leq {a \choose h}\gamma^h\leq (a\gamma)^h\leq e^{-\Omega(h\log n)}$. Pick some tiny positive *ρ* > 0. Note that $$\begin{aligned} \mathbb{P}[\Bin(m,p)\geq m^{1/2+\rho}]&\leq {m \choose m^{1/2+\rho}}p^{m^{1/2+\rho}}\leq (mp)^{m^{1/2+\rho}}=e^{-\Omega(m^{1/2+\rho}h\log n)}\\ &=e^{-\Omega(am^{\rho})}= o\bigg(\frac{1}{(\log n){n \choose a}}\bigg).\\\end{aligned}$$ Therefore we may assume that at most *m*1/2 + *ρ* of the random variables $\Bin(a,\gamma)$ take value more than *h*. The total contribution to ([type1bound]) of the terms with $\Bin(a,\gamma)\leq h$ is at most $mh^2=\frac{a^2}{(\log n)^{2}}$. The random variable $X\sim \Bin(a,\gamma)$, conditional on *X* ≥ *h*, is bounded above by *h* + *X*ʹ where *X*ʹ is an independent instance of $\Bin(a,\gamma)$. As we assume that all colour classes have size at most 2*n**γ*, it follows that the total contribution to ([type1bound]) of the terms with $\Bin(a,\gamma)\geq h$ is bounded above by $$\label{type1bound2} \sum\_{i=1}^{m^{1/2+\rho}} \Big(h+\min\{\Bin(a,\gamma),2n\gamma\}\Big)^2$$ and we just need to show that this sum is less than $3\frac{a^2}{(\log n)^{2}}$ with probability $1-o(\frac{1}{{m \choose m^{1/2+\rho}}(\log n){n \choose a}})$. The sum in ([type1bound2]) is at most $m^{1/2+\rho}h^2+(2h+2n\gamma)\sum\_{i=1}^{m^{1/2+\rho}}\Bin(a,\gamma)$. The first term is at most $\frac{a^2}{(\log n)^{2}}$. Also, log(*n**γ*) = *α*log*n* + *O*(loglog*n*) and therefore *n**γ* ≥ *h*, so we just need to show that $\sum\_{i=1}^{m^{1/2+\rho}}\Bin(a,\gamma)\leq \frac{a^2}{2n\gamma(\log n)^{2}}$ with the required probability. But the left-hand side is $\Bin(m^{1/2+\rho}a,\gamma)$ and $\mathbb{P}\big[\Bin(m^{1/2+\rho}a,\gamma)\geq \frac{a^2}{2n\gamma(\log n)^{2}}\big]= e^{-\Omega(\frac{a^2}{2n\gamma(\log n)^{2}})}$ since $m^{1/2+\rho}a\gamma = o(\frac{a^2}{2n\gamma(\log n)^{2}})$. This last inequality holds because $$\log (m^{1/2+\rho}a\gamma)=\big((1/2+\rho)\delta+\alpha+(\alpha-1)\big)\log n+O(\log \log n)$$ and $$\log(\frac{a^2}{2n\gamma(\log n)^{2}})=\big(2\alpha-1-(\alpha-1)\big)\log n+O(\log\log n),$$ and (1/2 + *ρ*)*δ* + *α* < 1 for *ρ* sufficiently small (since *δ* < 1 and *α* < 1/2). Finally, ${m \choose m^{1/2+\rho}}(\log n){n \choose a}= e^{O(a\log n)}$ because *m*1/2 + *ρ* = *o*(*a*) for *ρ* sufficiently small (as *δ* < 2*α*). But $a\log n=o(\frac{a^2}{n\gamma(\log n)^{2}})$ provided that *c*3 + *c*2 > 3, so we are done. The proof of Lemma [negscheme] ------------------------------ It is convenient to introduce the parameter $$\eta=2(1-\alpha)-\delta=\begin{cases} \eta(1)=\frac{-2s^2+4st-2s-6t+8}{(2s-3)(t-s)(t+s-1)-2s+4}, & \text{ if } (s,t) \text{ is regular} \\ \eta(2)=\frac{st-s-2t+3}{(2s-3)(t-s)(s-1)+2s-t}, & \text{ if } (s,t) \text{ is exceptional} \end{cases}$$  − *η* is the contribution of a block of size two to the value of a scheme. By Lemma [alphaanddelta], we have *η* > 2 − 4*α* > 0. The next lemma follows easily from Definition [schemevalue] and is a convenient way to look at the value of a scheme. [valueconvenient] Let *Q* be a scheme. Then *v*(*Q*) = *t* + ∑*D* ∈ D(*δ* + ∣*D*∣(*α* − 1)) − (*δ* + 2*α*) where D is the set of colours in *Q* and ∣*D*∣ is the number of nodes in *Q* that are coloured with *D*. We shall now identify a scheme for which equality in Lemma [negscheme] will hold: the value of *α* was chosen so that the value of this scheme would be 0. This is the (in)equality that generalizes equation ([k5freqs]) from the introduction. This ``extremal scheme" turns out to be different in the regular and the exceptional case, which is why the formula for *α* also differs in the two cases. Let *Q*1 be the scheme where one colour gives a block of size *s* and the rest of the edges are given by pairwise distinct colours. Let *Q*2 be the scheme where one colour gives a block of size *s*, another gives a block of size *t* − *s* + 1 sharing a single vertex with the previous block and the rest of the edges are given by pairwise distinct colours. [extremal] 1. If (*s*, *t*) is regular, then *v*(*Q*1) = 0. 2. If (*s*, *t*) is exceptional, then *v*(*Q*2) = 0. 3. If (*s*, *t*) is regular, then *v*(*Q*2) ≤ 0. 4. If (*s*, *t*) is exceptional, then *v*(*Q*1) ≤ 0. We have $$v(Q\_1)=t+(\delta+s(\alpha-1))+({t \choose 2}-{s \choose 2})(\delta+2(\alpha-1))-(\delta+2\alpha),$$ and (a) follows by direct substitution. We also have $$\begin{aligned} v(Q\_2)=t&+(\delta+s(\alpha-1))+(\delta+(t-s+1)(\alpha-1))\\ &+\Bigl({t \choose 2}-{s \choose 2}-{t-s+1 \choose 2}\Bigr)(\delta+2(\alpha-1))-(\delta+2\alpha),\\\end{aligned}$$ and (b) follows by direct substitution. The difference between *Q*1 and *Q*2 is that the former contains ${t-s+1 \choose 2}$ edges of distinct colours where the latter contains a block of size *t* − *s* + 1. Using Lemmas [app1] and [app2] (a) from the appendix, we obtain statements (c) and (d). We call a block in a scheme *large* if it has size at least 3 and *small* otherwise. We call it an *s*-*block* if it has size *s*. We shall begin by proving Lemma [negscheme] in the special case when there is an *s*-block in the scheme. [copycase] If *Q* is a scheme and it has an *s*-block then *v*(*Q*) ≤ 0. Assume that *Q* is such that *v*(*Q*) is maximal. It is enough to show that *Q* = *Q*1 or *Q* = *Q*2. Since *Q* has an *s*-block, any other block must have size at most *t* − *s* + 1. By Lemmas [app1] and [app2] (c) from the appendix, any large block of size smaller than *t* − *s* gives a smaller contribution to the value than one obtains if the corresponding edges have pairwise distinct colours. Therefore, we may assume that *Q* has no such block. So every block in *Q*, other than the one of size *s*, has size 2, *t* − *s* or *t* − *s* + 1. If there is a block of size *t* − *s* + 1, then *Q* = *Q*2. If there are no large blocks, then *Q* = *Q*1. Otherwise, there is a block of size *t* − *s* ≥ 3. If there are no other large blocks, then we claim that *v*(*Q*) ≤ *v*(*Q*1) or *v*(*Q*) ≤ *v*(*Q*2). Indeed, the (*t* − *s*)-block can be modified to become a (*t* − *s* + 1)-block (and *Q* then becomes *Q*2) and this increases the value provided that (*α* − 1) ≥ (*t* − *s*)( − *η*), or equivalently (*t* − *s*)*η* ≥ (1 − *α*). So we may assume that (*t* − *s*)*η* < (1 − *α*). But *δ* = *s* − (2*s* − 1)*α* > 1 − *α*, since *α* < 1/2. Hence, (*t* − *s*)*η* < *δ*, but then *v*(*Q*) ≤ *v*(*Q*1) by Lemma [app1] from the appendix. We may therefore assume that there are at least two large blocks other than the one of size *s*, and that both have size *t* − *s*. This forces *t* − *s* to equal 3. Moreover, by Lemmas [app1] and [app2] (b), we have that *t* = 2*s* − 1. It follows that *s* = 4 and *t* = 7. So *Q* consists of a 4-block and several 3-blocks (there can be at most 3) and the rest of the edges are given by distinct colours. It is easy to check that in this case *v*(*Q*) ≤ 0. Using the previous result, to prove Lemma [negscheme], it is sufficient to prove the following statement. [reduction] Suppose that *Q* is a scheme with *v*(*Q*) as large as possible. Assume also that *Q* does not contain a block of size *s*. Then *v*(*Q*) ≤ 0. To prove Lemma [reduction], we shall introduce the following definition. Let *P* be a node in a scheme. The *local value* at *P*, which we denote by *v*(*P*), is defined by the formula *v*(*P*) = 1 + ∑*D* : *P* ∈ *D*(*δ*/∣*D*∣ + (*α* − 1)),  where the summation is over all blocks containing *P*. If *P* is in a block of size 2 and two blocks of size 4, then *v*(*P*) = 1 + 3(*α* − 1) + *δ*/2 + 2 ⋅ *δ*/4. [locsum] For any scheme *Q*, we have ∑*P**v*(*P*) = *v*(*Q*) + (*δ* + 2*α*) where the summation is over all nodes of *Q*. This statement follows easily from Lemma [valueconvenient]. The next result is the key part in the proof of Lemma [negscheme]. [localneg] Suppose that *Q* is a scheme such that *v*(*Q*) is maximal. Let *P* be a node and assume that every block containing *P* has size less than *t*/2. Then *v*(*P*) < 2*δ*/*t*. Let the blocks of *Q* that contain *P* have sizes *r*1, ..., *r**u*. Then ∑*i**r**i* = *t* + *u* − 1. Let *k* be the minimal integer greater than 2 that is equal to some *r**i* (or, if no such integer exists, then let *k* be large enough that *δ*/*k* − *δ*/(*k* + 1) < *η*/2). Let $R=\lfloor \frac{t-1}{2} \rfloor$. By assumption, *r**i* ≤ *R* for all *i*. Moreover, by the maximality of *v*(*Q*) and Lemma [app1], we have the inequality *k**η* ≥ *δ* and therefore $\delta/k-\delta/(k+1)=\frac{\delta}{k(k+1)}\leq \frac{\eta}{k+1}<\eta/2$. *Claim 1.* There exist positive integers *w* and *q*1, ..., *q**w* such that 1. 2 ≤ *q**j* ≤ *R* for all *j* 2. ∑*j**q**j* = *t* + *w* − 1 3. There is at most one *j* for which 2 < *q**j* < *k* and if there is any *i* with *q**i* = 2, then there is no *j* with 2 < *q**j* < *k*. 4. *v*(*P*) ≤ 1 + ∑*j*(*δ*/*q**j* + (*α* − 1)) 5. Either all but at most one *q**j* are equal to *R* or else *q**j* ∈ {2, *R*} for all *j* *Proof of Claim 1.* Note that *v*(*P*) = 1 + ∑*i*(*δ*/*r**i* + (*α* − 1)). Define *w*, *q*1, *q*2, ...*q**w* to be the integers that maximize the quantity 1 + ∑*j*(*δ*/*q**j* + (*α* − 1)) subject to the conditions (i),(ii) and (iii). Since the *r**i* satisfy (i),(ii),(iii), we get *v*(*P*) ≤ 1 + ∑*j*(*δ*/*q**j* + (*α* − 1)). We are left to prove (v), so let us suppose that it does not hold. There are two cases to consider. *Case 1.* If there exists some *i* with *q**i* = 2, then there is a *j* such that *q**j* ∉ {2, *R*} and by (iii) we have *q**j* ≥ *k*. Hence, *δ*/*q**j* − *δ*/(*q**j* + 1) < *η*/2. After relabelling, we may assume that *j* = *w* − 1, *i* = *w*. Now set *w*ʹ = *w* − 1, *q*ʹ*h* = *q**h* for all *h* ≤ *w* − 2 and *q*ʹ*w* − 1 = *q**w* − 1 + 1. Then *q*ʹ1, ..., *q*ʹ*w*ʹ satisfy (i),(ii),(iii) and 1 + ∑*h* ≤ *w*(*δ*/*q**h* + (*α* − 1)) < 1 + ∑*h* ≤ *w*ʹ(*δ*/*q*ʹ*h* + (*α* − 1)),  which is a contradiction. *Case 2.* If there is no *i* with *q**i* = 2, then since (v) is assumed to fail, there must exist *i* ≠ *j* with 2 < *q**i* ≤ *q**j* < *R*. Moreover, we may assume that *q**i* is minimal among all *q**h*s. Without loss of generality, *i* = *w* − 1, *j* = *w*. Now define *q*ʹ*h* = *q**h* for all *h* ≤ *w* − 2, *q*ʹ*w* − 1 = *q**w* − 1 − 1 and *q*ʹ*w* = *q**w* + 1. Then *q*ʹ1, ..., *q*ʹ*w* satisfy (i),(ii),(iii) and 1 + ∑*h* ≤ *w*(*δ*/*q**h* + (*α* − 1)) < 1 + ∑*h* ≤ *w*(*δ*/*q*ʹ*h* + (*α* − 1)),  which is a contradiction. This completes the proof of Claim 1. *Claim 2.* If *q*1, ..., *q**w* satisfy the conditions (i),(ii),(v) in Claim 1, then 1 + ∑*h* ≤ *w*(*δ*/*q**h* + (*α* − 1)) < 2*δ*/*t*. *Proof of Claim 2.* For *t* ≤ 13, this is a straightforward check, which we performed using a computer program, since it would have taken inordinately long to do it by hand. (The code, written in Matlab, can be found at the end of the appendix.) So we shall assume that *t* ≥ 14. Then $3R\geq 3\cdot\frac{t-2}{2}>t+2$, so there are at most two *q**j*s with *q**j* = *R*. Using (v), this leaves the following cases. Case 1: *q**j* = 2 for all *j* Case 2: *q*1 = *R* and *q**j* = 2 for all *j* ≥ 2 Case 3a: $q\_1=q\_2=R=\frac{t-2}{2}$ and *q*3 = *q*4 = *q*5 = 2 (*w* = 5) Case 3b: $q\_1=q\_2=R=\frac{t-1}{2}$ and *q*3 = *q*4 = 2 (*w* = 4) Case 4a: $q\_1=q\_2=R=\frac{t-2}{2},q\_3=4$ (*w* = 3) Case 4b: $q\_1=q\_2=R=\frac{t-1}{2},q\_3=3$ (*w* = 3) By Lemmas [app1] and [app2] (d) we have (*l* − 1)(*δ*/2 + (*α* − 1)) < (*δ*/*l* + (*α* − 1)) when $l=\frac{t-1}{2}$. Moreover, we have the inequality $$\big(\delta/\Big(\frac{t-2}{2}\Big)+(\alpha-1)\big)+\frac{1}{2}\big(\delta/2+(\alpha-1)\big)<\big(\delta/\Big(\frac{t-1}{2}\Big)+(\alpha-1)\big),$$ since this is equivalent to $\frac{2\delta}{(t-1)(t-2)}<\eta/4$, which holds because (*t* − 1)*η* ≥ 2*δ* and *t* − 2 > 4. It is not hard to see that these two observations allow us to deduce all Cases 1-3 from Case 3b. To prove Case 3b, we need the inequality $$1+2(\delta/(\frac{t-1}{2})+(\alpha-1))-\eta< 2\delta/t,$$ which is given in Lemma [app2] (f). Clearly, Case 4a follows from Case 4b. To prove Case 4b, we need $$1+2(\delta/(\frac{t-1}{2})+(\alpha-1))+(\delta/3+(\alpha-1))< 2\delta/t.$$ Using *α* < 1/2 and *δ* < 1, it suffices to prove that 4/(*t* − 1) − 2/*t* ≤ 1/6, which holds for *t* ≥ 14. This completes the proof of Claim 2, and the two claims imply the lemma. [contains] Suppose that *Q* is a scheme such that its *v*(*Q*) is as large as possible and such that the largest block *D* of *Q* has size at least *t*/2. Then *D* has size *s*. Suppose not. Pick a node *P* with *P* ∉ *D*. Let *D* have size *k* ≥ *t*/2. Suppose that *P* is contained in exactly *r* large blocks. Define a scheme *Q*ʹ as follows. *Q*ʹ has the same blocks as *Q* except that * *P* is removed from all large blocks, * all small blocks containing *P* and a node in *D* are deleted, * *P* is added to *D*, * the missing edges are now provided by distinct colours. We now compare the values *v*(*Q*) and *v*(*Q*ʹ). The node *P* is in only one large block in *Q*ʹ while it is in *r* large blocks in *Q*. The number of small blocks containing *P* is precisely *t* − *k* − 1 in *Q*ʹ while it is at least *k* − *r* in *Q*. That is because any large block containing *P* contains at most one element of *D*. So $$\begin{aligned} v(Q')-v(Q)&\geq (r-1)(1-\alpha)+((t-k-1)-(k-r))(\delta+2(\alpha-1))\\ &=(r-1)(1-\alpha)-(t-2k+(r-1))\eta\geq (r-1)(1-\alpha-\eta).\\ \end{aligned}$$ But 1 − *α* − *η* = *δ* − (1 − *α*) = 1/2 + (2*s* − 1)*ε* − (1/2 + *ε*) > 0. This contradicts the maximality of *v*(*Q*) if *r* ≥ 2. If *r* = 1, then let the unique large block containing *P* have size *l*. By assumption, *l* ≤ *k*. Hence, *v*(*Q*ʹ) − *v*(*Q*) = ((*t* − *k* − 1) − (*t* − *l*))(*δ* + 2(*α* − 1)) = (*k* + 1 − *l*)*η* > 0,  a contradiction. If *r* = 0, then $$v(Q')-v(Q)=-(1-\alpha)-k(\delta+2(\alpha-1))=k\eta-(1-\alpha)\geq \frac{t}{2}\eta-(1-\alpha).$$ But by Lemma [app2] (d), this is at least *δ* − (1 − *α*) > 0. This is a contradiction and the lemma is proved. We are ready to complete the proof of Lemma [negscheme]. [**Proof of Lemma [negscheme].**] We may assume that *v*(*Q*) is maximal possible among all schemes *Q*. If *Q* has a block of size *s*, then we are done by Lemma [copycase]. Otherwise, by Lemma [contains], there is no block of size greater than or equal to *t*/2. But then Lemma [locsum] and Lemma [localneg] together imply that $v(Q)\leq t\frac{2\delta}{t}-(\delta+2\alpha)=\delta-2\alpha<0$. Appendix ======== [app1] For any *k* > 2, we have $${k \choose 2}(\delta+2(\alpha-1))>\delta+k(\alpha-1) \Longleftrightarrow k\eta<\delta$$ $$\begin{aligned} {k \choose 2}(\delta+2(\alpha-1))&>\delta+k(\alpha-1)\\ \iff &(k-1)(\delta+2(\alpha-1))>2\delta/k+2(\alpha-1)\\ \iff &(k-1)\eta<2(1-\alpha)-2\delta/k=\eta+\delta(1-2/k)\\ \iff &(k-2)\eta<\delta(k-2)/k\\ \iff & k\eta<\delta.\\\end{aligned}$$ [app2] (a) (*t* − *s* + 1)*η* < *δ* if and only if (*s*, *t*) is regular. (b) (*t* − *s*)*η* < *δ* unless *t* = 2*s* − 1 (c) (*t* − *s* − 1)*η* < *δ* (d) (*t* − 1)*η* > 2*δ*. (e) *δ* > 2/3. (f) $1+2(\delta/(\frac{t-1}{2})+(\alpha-1))-\eta< 2\delta/t$ Assume first that (*s*, *t*) is regular. Then after some tedious calculations, one finds that (a) is equivalent to the inequality (*s* − 2)(*t* − *s* − 2)(2*s* − *t* − 3) − *t* − 3*s* + 8 > 0 The left hand side is a quadratic in *t* with negative leading coefficient so it is enough to check that the inequality holds when *t* = *s* + 3 and when *t* = 2*s* − 4. For *t* = *s* + 3 we require (*s* − 2)(*s* − 6) − 4*s* + 5 > 0, which holds for *s* ≥ 11, and for *t* = 2*s* − 4 we require (*s* − 2)(*s* − 6) − 5*s* + 12 > 0, which holds for *s* ≥ 11. It therefore suffices to check the inequality for the pairs (*s*, *t*) = (10, 14) and (*s*, *t*) = (10, 15). This can be done by direct substitution. So (a) is proved (when (*s*, *t*) is regular) which immediately implies (b) and (c). Now let us assume that (*s*, *t*) is exceptional. Then the inequality (*t* − *s* + *c*)*η* < *δ* is equivalent to the inequality (*s* − 2)(*t* − *s* − *c* − 1)(2*s* − *t* − 2*c* − 1) + ( − 2*c*2 − 2*c* + 1)*s* − *t* + 4*c*2 + 3*c* + 1 > 0 When *c* = 1, this says that (*s* − 2)(*t* − *s* − 2)(2*s* − *t* − 3) − 3*s* − *t* + 8 > 0, so in order to prove (a) we need to show that this does not hold. For *t* ∈ {*s* + 2, 2*s* − 3, 2*s* − 2, 2*s* − 1} that is clear, since (*s* − 2)(*t* − *s* − 2)(2*s* − *t* − 3) ≤ 0. We are left to check that the inequality fails for the pairs (7, 10), (8, 11), (8, 12), (9, 12), (9, 13), (9, 14), (10, 13), and (10, 16). If *t* = *s* + 3, then we need *s*2 − 12*s* + 17 ≤ 0 which indeed holds for 7 ≤ *s* ≤ 10. If *t* = 2*s* − 4, then we need *s*2 − 13*s* + 24 ≤ 0 which indeed holds for 7 ≤ *s* ≤ 10. We have only (*s*, *t*) = (9, 13) left to check. That is done by direct substitution. When *c* = 0, then ([ceqn]) says that (*s* − 2)(*t* − *s* − 1)(2*s* − *t* − 1) + *s* − *t* + 1 > 0. But if *s* + 2 ≤ *t* ≤ 2*s* − 2, then the left hand side is minimal at *t* = 2*s* − 2 and there it takes value (*s* − 3)2 > 0. (Note that *s* > 3 in this case.) This proves (b). When *c* =  − 1 in ([ceqn]), then it says that (*s* − 2)(*t* − *s*)(2*s* − *t* + 1) + *s* − *t* + 2 > 0. But the left hand side is minimal when *t* = 2*s* − 1, and then it is 2*s*2 − 7*s* + 7 > 0. This proves (c). (d) In the regular case the statement is equivalent to the inequality 2*s*3 − *s*2*t* − 5*s*2 + 3*s**t* − *t*2 + *s* + 3*t* − 4 > 0 But in the regular case we have 2*s* − *t* ≥ 4, so 2*s*3 − *s*2*t* ≥ 4*s*2. Since  − *s*2 + 3*s**t* − *t*2 ≥ 0 and *s* + 3*t* − 4 > 0, the statement follows. In the exceptional case, (d) is equivalent to the inequality (*s* − 2)(2*s* − *t*)2 + (*t* − *s* − 1) > 0 which is clear. (e) In the regular case, the statement is equivalent to the inequality 2*s*(*t* + *s* − 5)(*t* − *s* − 2) + 2*s*2 − 10*s* + 6*t* − 8 > 0,  which is easily seen to hold. In the exceptional case, it is equivalent to the inequality (2*s*2 − 5*s*)(*t* − *s* − 2) + *s*2 − 5*s* + 2*t* − 3 > 0,  which again clearly holds. (f) Since (by (d)) we have $\delta/(\frac{t-1}{2})<\eta$, this inequality reduces to 1 + 2(*α* − 1) + 2*δ*/(*t* − 1) < 2*δ*/*t*,  or, equivalently, to $$2\alpha<1-\frac{2\delta}{t(t-1)}.$$ Expressing *α* in terms of *δ* and performing some routine algebraic manipulations, we find that we need to prove that $$\frac{2(2s-1)}{t(t-1)}\delta<2\delta-1.$$ Since *δ* < 1, the left hand side of this inequality is less than 4/*t* < 1/3 while the right hand side is greater than 1/3, by part (e), so the proof is complete. Below we present the Matlab code that we used to perform the case check in the proof of Lemma [localneg]. ``` % go through all pairs (s,t) for t=5:13 for s=(floor(t/2)+1):(t-2) % these pairs are all exceptional alpha=((s-2)*(t-s)*(s-1)+s-1)/((2*s-3)*(t-s)*(s-1)+2*s-t); delta=s-(2*s-1)*alpha; eta=2*(1-alpha)-delta; % bad will be changed to 1 if the inequality that we want % to prove fails bad=0; R=floor((t-1)/2); % j will count the number of q_h which are equal to R for j=0:4 a=(t-1)-j*(R-1); if 0<=a % in the following case every q_h is 2 or R v=1+a*(delta/2+alpha-1)+j*(delta/R+alpha-1); % check that our inequality holds with a suitably large % difference which can't be due to rounding errors if v>2*delta/(t)-10^(-3) bad=1; end end if (2<=a+1) && (a+1<=R) % in the following case there is only one q_h that % is not equal to R v=1+(delta/(a+1)+alpha-1)+j*(delta/R+alpha-1); if v>2*delta/(t)-10^(-3) bad=1; end end end % tabulate the result: for each pair (s,t) we print % whether the inequality failed (1) or not (0) fprintf('%5d %5d %5d \n',s,t,bad) end end ``` Acknowledgments =============== The second author was in Paris, supported by the Fondation Sciences Mathématiques de Paris, while this work was carried out. The first author held an FSMP Chair for the academic year 2017-8, also while this work was carried out, and would like to thank the FSMP for its support. He would also like to thank the Équipe d’Analyse Fonctionelle at Sorbonne Université for hosting him while he was in Paris. We are grateful to the two anonymous referees for their very careful reviews. 10 Béla Bollobás and H. R. Hind, *Graphs without large triangle free subgraphs*, Discrete Mathematics **87** (1991), no. 2, 119–131. David Conlon, Jacob Fox, and Benny Sudakov, *Recent developments in graph Ramsey theory.*, Surveys in combinatorics **424** (2015), 49–118. Andrzej Dudek, Troy Retter, and Vojtěch Rödl, *On generalized Ramsey numbers of Erds and Rogers*, Journal of Combinatorial Theory, Series B **109** (2014), 213–227. Andrzej Dudek and Vojtěch Rödl, *On *K**s*-free subgraphs in *K**s* + *k*-free graphs and vertex Folkman numbers*, Combinatorica **31** (2011), no. 1, 39. Andrzej Dudek and Vojtěch Rödl, *On the function of Erds and Rogers*, Ramsey theory, Springer, 2011, pp. 63–76. Paul Erds, *Some of my recent problems in combinatorial number theory, geometry and combinatorics*, Graph theory, combinatorics, and algorithms **1,2** (1995), 335–349. Paul Erds and C. A. Rogers, *The construction of certain graphs*, Canad. J. Math **14** (1962), 702–707. Michael Krivelevich, **K**s*-free graphs without large *K**r*-free subgraphs*, Combinatorics, Probability and Computing **3** (1994), no. 3, 349–354. Michael Krivelevich, *Bounding Ramsey numbers through large deviation inequalities*, Random Structures & Algorithms **7** (1995), no. 2, 145–155. Colin McDiarmid, *On the method of bounded differences*, London Mathematical Society Lecture Note Series, pp. 148–188, Cambridge University Press, 1989. Benny Sudakov, *Large *K**r*-free subgraphs in *K**s*-free graphs and some other Ramsey-type problems*, Random Structures & Algorithms **26** (2005), no. 3, 253–265. Benny Sudakov, *A new lower bound for a Ramsey-type problem*, Combinatorica **25** (2005), no. 4, 487–498. Guy Wolfovitz, **K*4-free graphs without large induced triangle-free subgraphs*, Combinatorica **33** (2013), no. 5, 623–631. [tim] Timothy Gowers Department of Pure Mathematics and Mathematical Statistics University of Cambridge, UK wtg10dpmmscamac <https://www.dpmms.cam.ac.uk/person/wtg10> [oliver] Oliver Janzer Department of Pure Mathematics and Mathematical Statistics University of Cambridge, UK oj224camacuk <https://www.maths.cam.ac.uk/person/oj224> --- 1. Department of Pure Mathematics and Mathematical Statistics, University of Cambridge[↩](#fnref1) Improved bounds for the Erdos-Rogers function ============================================= [classification=text] The Erdos-Rogers function *f**s*, *t* measures how large a *K**s*-free induced subgraph there must be in a *K**t*-free graph on *n* vertices. While good estimates for *f**s*, *t* are known for some pairs (*s*, *t*), notably when *t* = *s* + 1, in general there are significant gaps between the best known upper and lower bounds. We improve the upper bounds when *s* + 2 ≤ *t* ≤ 2*s* − 1. For each such pair we obtain for the first time a proof that *f**s*, *t* ≤ *n**α**s*, *t* + *o*(1) with an exponent *α**s*, *t* < 1/2, answering a question of Dudek, Retter and Rödl. Introduction ============ Let *G* be a graph with *n* vertices that contains no *K*4. How large a triangle-free induced subgraph must *G* have? The standard proof of Ramsey’s theorem implies that *G* contains an independent set of size *n*1/3, but can we do better? A simple argument shows that the answer is yes. Indeed, each vertex in *G* has a triangle-free neighbourhood, and either there is a vertex of degree *n*1/2 or one can find an independent set of size roughly *n*1/2 by repeatedly choosing vertices and discarding their neighbours. This stronger argument still feels a little wasteful, because in the second case one finds an independent set rather than a triangle-free subgraph. Moreover, there is no obvious example that yields a matching upper bound, so it is not immediately clear whether 1/2 is the correct exponent. The problem above is an example of a general problem that was first considered by Erdos and Rogers. Given positive integers 1 < *s* < *t* and *n* > 2, define *f**s*, *t*(*n*) to be the minimum over all *K**t*-free graphs *G* with *n* vertices of the order of the largest induced *K**s*-free subgraph of *G*. We have just been discussing the function *f*3, 4. The function *f**s*, *t* is known as the *Erdos-Rogers function*. It has been studied by several authors: for a detailed survey covering many of the known results on the subject, see. For a more recent exposition, see also section 3.5.2 of. The first bounds were obtained by Erds and Rogers who showed that for every *s* there exists a positive constant *ε*(*s*) such that *f**s*, *s* + 1(*n*) ≤ *n*1 − *ε*(*s*). About 30 years later, Bollobás and Hind improved the estimate for *ε*(*s*) and established the lower bound *f**s*, *t*(*n*) ≥ *n*1/(*t* − *s* + 1). In particular, *f**s*, *s* + 1(*n*) ≥ *n*1/2 (by the obvious generalization of the argument for *f*3, 4 above). Subsequently, Krivelevich improved these lower bounds by a small power of log*n* and also gave a new general upper bound, which is $$\label{kriv} f\_{s,t}(n)\leq O(n^{\frac{s}{t+1}}(\log n)^{\frac{1}{s-1}}).$$ Later, the lower bound was significantly improved by Sudakov. He showed that if *t* > *s* + 1, then *f**s*, *t*(*n*) ≥ Ω(*n**a**s*, *t*) where *a**s*, *t* is defined recursively. In particular, when *s* is fixed and *t* → ∞, he obtained the bound $$\label{sud} f\_{s,t}(n)\geq \Omega(n^{\frac{s}{2t}+O(1/t^2)}).$$ We remark that if *t* ≥ 2*s* then ([kriv]) is the best known upper bound, while Sudakov’s lower bound is the best known for every *t* > *s* + 1. In particular, the upper bound is roughly the square of the lower bound in the range *t* ≥ 2*s*. Recently, there has been quite a lot of progress on the case *t* = *s* + 1
arxiv_0000214
enzeller, I. 1992,, 92, 481 Brown, S. F., Donati, J.-F., Rees, D. E., & Semel, M. 1991,, 250, 463 Browning, M. K. 2008,, 676, 1262 Calvet, N., Muzerolle, J., Briceño, C., et al. 2004,, 128, 1294 Calvet, N., & Gullbring, E. 1998,, 509, 802 Carpenter, J. M., Mamajek, E. E., Hillenbrand, L. A., & Meyer, M. R. 2006,, 651, L49 Chabrier, G., & Baraffe, I. 1997,, 327, 1039 Chuntonov, G. A., Smirnov, D. A., & Lamzin, S. A. 2007, Astronomy Letters, 33, 38 Cieza, L. A., et al.  2010,, 712, 925 Currie, T., & Kenyon, S. J. 2009,, 138, 703 D’Antona, F., Ventura, P., & Mazzitelli, I. 2000,, 543, L77 de Val-Borro, M., Gahm, G. F., Stempels, H. C., & Pepliński, A. 2011,, 413, 2679 Donati, J.-F., Gregory, S. G., Alencar, S. H. P., et al. 2012,, in press [astro-ph/1206.1770] Donati, J.-F. 2011, IAU Symposium, 271, 23 Donati, J.-F., Gregory, S. G., Montmerle, T., et al. 2011c,, 417, 1747 Donati, J.-F., Gregory, S. G., Alencar, S. H. P., et al. 2011b,, 417, 472 Donati, J.-F., Bouvier, J., Walter, F. M., et al. 2011a,, 412, 2454 Donati, J.-F., Skelly, M. B., Bouvier, J., et al. 2010b,, 409, 1347 Donati, J.-F., Skelly, M. B., Bouvier, J., et al. 2010a,, 402, 1426 Donati, J.-F., & Landstreet, J. D. 2009,, 47, 333 Donati, J.-F., Jardine, M. M., Gregory, S. G., et al. 2008b,, 386, 1234 Donati, J.-F., Morin, J., Petit, P., et al. 2008a,, 390, 545 Donati, J.-F., Jardine, M. M., Gregory, S. G., et al. 2007,, 380, 1297 Donati, J.-F., Howarth, I. D., Jardine, M. M., et al. 2006,, 370, 629 Donati, J.-F. 2003, Astronomical Society of the Pacific Conference Series, 307, 41 Donati, J.-F., Collier Cameron, A., Semel, M., et al. 2003,, 345, 1145 Donati, J.-F. 2001, Astrotomography, Indirect Imaging Methods in Observational Astronomy, 573, 207 Donati, J.-F., & Brown, S. F. 1997,, 326, 1135 Donati, J.-F., Semel, M., Carter, B. D., Rees, D. E., & Collier Cameron, A. 1997,, 291, 658 Dunstone, N. J., Hussain, G. A. J., Collier Cameron, A., et al. 2008a,, 387, 481 Dunstone, N. J., Hussain, G. A. J., Collier Cameron, A., et al. 2008b,, 387, 1525 Eisner, J. A., Monnier, J. D., Woillez, J., et al. 2010,, 718, 774 Eisner, J. A., Graham, J. R., Akeson, R. L., & Najita, J. 2009,, 692, 309 Endal, A. S., & Sofia, S. 1981,, 243, 625 Ercolano, B., Bastian, N., Spezzi, L., & Owen, J. 2011,, 416, 439 Fedele, D., van den Ancker, M. E., Henning, T., Jayawardhana, R., & Oliveira, J. M. 2010,, 510, A72 Getman, K. V., Broos, P. S., Salter, D. M., Garmire, G. P., & Hogerheijde, M. R. 2011,, 730, 6 Ghez, A. M., Neugebauer, G., & Matthews, K. 1993,, 106, 2005 Gilliland, R. L. 1986,, 300, 339 Gorti, U., Dullemond, C. P., & Hollenbach, D. 2009,, 705, 1237 Goudard, L., & Dormy, E. 2008, Europhysics Letters, 835, 59001 Gras-Velázquez, À., & Ray, T. P. 2005,, 443, 541 Gregory, S. G., & Donati, J.-F. 2011, Astronomische Nachrichten, 332, 1027 Gregory, S. G., Jardine, M., Gray, C. G., & Donati, J.-F. 2010, Reports on Progress in Physics, 73, 126901 Gregory, S. G., Matt, S. P., Donati, J.-F., & Jardine, M. 2008,, 389, 1839 Gregory, S. G., Jardine, M., Simpson, I., & Donati, J.-F. 2006,, 371, 999 Gregory, S. G., Jardine, M., Collier Cameron, A., & Donati, J.-F. 2005, 13th Cambridge Workshop on Cool Stars, Stellar Systems and the Sun, 560, 191 Gullbring, E., Hartmann, L., Briceno, C., & Calvet, N. 1998,, 492, 323 Hartmann, L. 2009, Accretion Processes in Star Formation: Second Edition, by Lee Hartmann. ISBN 978-0-521-53199-3. Published by Cambridge University Press, Cambridge, UK, 2009 Hartmann, L. 2001,, 121, 1030 Hartmann, L., Calvet, N., Gullbring, E., & D’Alessio, P. 1998,, 495, 385 Hayashi, C. 1961,, 13, 450 Henyey, L., Vardya, M. S., & Bodenheimer, P. 1965,, 142, 841 Hillenbrand, L. A., Bauermeister, A., & White, R. J. 2008, 14th Cambridge Workshop on Cool Stars, Stellar Systems, and the Sun, 384, 200 Hillenbrand, L. A., & White, R. J. 2004,, 604, 741 Hussain, G. A. J., 2012, Astronomische Nachrichten, 333, 4 Hussain, G. A. J., Collier Cameron, A., Jardine, M. M., et al. 2009,, 398, 189 Hussain, G. A. J., Jardine, M., Donati, J.-F., et al. 2007,, 377, 1488 Isella, A., Tatulli, E., Natta, A., & Testi, L. 2008,, 483, L13 Johns-Krull, C. M., Valenti, J. A., & Koresko, C. 1999a,, 516, 900 Johns-Krull, C. M., Valenti, J. A., Hatzes, A. P., & Kanaan, A. 1999b,, 510, L41 Johns-Krull, C. M., Valenti, J. A., & Linsky, J. L. 2000,, 539, 815 Johns-Krull, C. M. 2007,, 664, 975 Jung, Y. K., & Kim, Y.-C. 2007, Journal of Astronomy and Space Sciences, 24, 1 Kastner, J. H., Huenemoerder, D. P., Schulz, N. S., Canizares, C. R., & Weintraub, D. A. 2002,, 567, 434 Kim, Y.-C., & Demarque, P.  1996,, 457, 340 Königl, A. 1991,, 370, L39 Kraus, S., Preibisch, T., & Ohnaka, K. 2008,, 676, 490 Landin, N. R., Mendes, L. T. S., & Vaz, L. P. R. 2010,, 510, A46 Lavigne, J.-F., Doyon, R., Lafrenière, D., Marois, C., & Barman, T. 2009,, 704, 1098 Lejeune, T., & Schaerer, D. 2001,, 366, 538 Littlefair, S. P., Naylor, T., Harries, T. J., Retter, A., & O’Toole, S. 2004,, 347, 937 Loinard, L., Torres, R. M., Mioduszewski, A. J., & Rodríguez, L. F. 2008,, 675, L29 Long, M., Romanova, M. M., & Lamb, F. K. 2012, New Astronomy, 17, 232 Long, M., Romanova, M. M., Kulkarni, A. K., & Donati, J.-F. 2011,, 413, 1061 Long, M., Romanova, M. M., & Lovelace, R. V. E. 2008,, 386, 1274 Long, M., Romanova, M. M., & Lovelace, R. V. E. 2007,, 374, 436 Luhman, K. L., Allen, L. E., Allen, P. R., et al. 2008,, 675, 1375 Malbet, F. 2009,, 53, 285 Mamajek, E. E. 2009, American Institute of Physics Conference Series, 1158, 3 Marsden, S. C., Jardine, M. M., Ramírez Vélez, J. C., et al. 2011,, 413, 1922 Matt, S. P., Pinzón, G., Greene, T. P., & Pudritz, R. E. 2012,, 745, 101 Matt, S. P., Pinzón, G., de la Reza, R., & Greene, T. P. 2010,, 714, 989 Matt, S., & Pudritz, R. E. 2005,, 356, 167 Mayne, N. J., Naylor, T., Littlefair, S. P., Saunders, E. S., & Jeffries, R. D. 2007,, 375, 1220 Mayne, N. J., & Naylor, T. 2008,, 386, 261 Mayne, N. J. 2010,, 408, 1409 Millan-Gabet, R., Malbet, F., Akeson, R., et al. 2007, Protostars and Planets V, 539 Mohanty, S., & Shu, F. H. 2008,, 687, 1323 Morin, J., Donati, J.-F., Petit, P., et al. 2008,, 390, 567 Morin, J., Donati, J.-F., Petit, P., et al. 2010,, 407, 2269 Morin, J., Dormy, E., Schrinner, M., & Donati, J. -F. 2011,, 418, 133 Morin, J., Donati, J.-F., Petit, P., et al. 2011, IAU Symposium, 273, 181 Najita, J., Carr, J. S., & Mathieu, R. D. 2003,, 589, 931 Neuhäuser, R., Guenther, E. W., Wuchterl, G., et al. 2005,, 435, L13 Palla, F., & Stahler, S. W. 2001,, 553, 299 Palla, F., Randich, S., Flaccomio, E., & Pallavicini, R. 2005,, 626, L49 Palla, F. 2001, From Darkness to Light: Origin and Evolution of Young Stellar Clusters, 243, 525 Pinsonneault, M. 1997,, 35, 557 Pizzolato, N., Maggio, A., Micela, G., Sciortino, S., & Ventura, P. 2003,, 397, 147 Preibisch, T., et al. 2005,, 160, 401 Qi, C., et al. 2004,, 616, L11 Ragland, S., Akeson, R. L., Armandroff, T., et al. 2009,, 703, 22 Rebull, L. M., Stauffer, J. R., Ramirez, S. V., et al. 2006,, 131, 2934 Reiners, A., & Basri, G. 2009,, 496, 787 Reipurth, B., & Zinnecker, H. 1993,, 278, 81 Roberts, P. H. 1988, Geophysical and Astrophysical Fluid Dynamics, 44, 3 Rodriguez, D. R., Kastner, J. H., Wilner, D., & Qi, C. 2010,, 720, 1684 Romanova, M. M., Long, M., Lamb, F. K., Kulkarni, A. K., & Donati, J.-F. 2011,, 411, 915 Romanova, M. M., Ustyugova, G. V., Koldoba, A. V., & Lovelace, R. V. E. 2004a,, 610, 920 Romanova, M. M., Ustyugova, G. V., Koldoba, A. V., & Lovelace, R. V. E. 2004b,, 616, 151 Salter, D. M., Kóspál, Á., Getman, K. V., et al. 2010,, 521, A32 Saunders, E. S., Naylor, T., Mayne, N., & Littlefair, S. P. 2009,, 397, 405 Schrinner, M., Petitdemange, L., & Dormy, E. 2012,, 752, 121 Sicilia-Aguilar, A., Hartmann, L. W., Briceño, C., Muzerolle, J., & Calvet, N. 2004,, 128, 805 Siess, L. 2001, From Darkness to Light: Origin and Evolution of Young Stellar Clusters, Astronomical Society of the Pacific Conference Series, 243, 581 Siess, L., Dufour, E., & Forestini, M. 2000,, 358, 593 Simon, M., Dutrey, A., & Guilloteau, S. 2000,, 545, 1034 Simon, M., Howell, R. R., Longmore, A. J., et al. 1987,, 320, 344 Skelly, M. B., Donati, J.-F., Bouvier, J., et al. 2012,, submitted Skelly, M. B., Donati, J.-F., Bouvier, J., et al. 2010,, 403, 159 Soderblom, D. R. 2010,, 48, 581 Stempels, H. C., & Gahm, G. F. 2004,, 421, 1159 Symington, N. H., Harries, T. J., Kurosawa, R., & Naylor, T. 2005,, 358, 977 Tognelli, E., Prada Moroni, P. G., & Degl’Innocenti, S. 2011,, 533, A109 Valenti, J. A., & Johns-Krull, C. M. 2004,, 292, 619 Waite, I. A., Marsden, S. C., Carter, B. D., et al. 2011,, 413, 1949 Ward-Thompson, D., & Whitworth, A. P. 2011, An Introduction to Star Formation by Derek Ward-Thompson and Anthony P. Whitworth. Cambridge University Press, 2011. ISBN: 9780521630306, Williams, J. P., & Cieza, L. A. 2011,, 49, 67 Willis, D. M., & Osborne, A. D. 1982, Geophysical Journal International, 68, 765 Yang, H., & Johns-Krull, C. M. 2011,, 729, 83 --- 1. More massive T Tauri stars become entirely radiative during their PMS evolution and some develop convective cores on the main sequence if the power generated from thermonuclear reactions is sufficient (see e.g. the models of ). Stars with mass $\lesssim$0.35$\,{\rm M}\_\odot$ arrive on the main sequence and hydrogen fusion begins before a radiative core can develop, and thus retain a fully convective interior during their PMS evolution.[↩](#fnref1) 2. In this paper we refer to stars of mass $M\_\ast\lesssim0.5\,{\rm M}\_\odot$ as low mass, $0.5\lesssim M\_\ast/{\rm M}\_\odot\lesssim1.0$ as intermediate mass, and $M\_\ast\gtrsim1.0\,{\rm M}\_\odot$ as high mass PMS stars.[↩](#fnref2) 3. Recently have published a series of numerical simulations that suggest that bistable dynamo behavior can occur for all MS M-dwarfs in the fully convective regime, including those close to the fully convective limit. Presently there is no observational evidence for this, with bistable behavior only apparent for stars (both MS and PMS) located well below the fully convective limit.[↩](#fnref3) 4. This simple illustrative example ignores the tilt of the multipole components which should be accounted for when calculating the field strength at the inner disk, and consequently the disk truncation radius.[↩](#fnref4) 5. Recent theoretical models suggest only a weak stellar mass dependence on disk lifetimes. Furthermore disk lifetimes may be influenced by the star forming environment.[↩](#fnref5) 6. There is a difference in *R**t* for multipolar compared to dipolar magnetospheres, although in most cases this difference is small (see section 6 of ). Provided that the dipole component is not significantly tilted with respect to the stellar rotation axis *R**t* values can be estimated using the strength of the dipole component at the stellar rotation pole. The higher order field components, however, must be accounted for in models of accretion flow onto the star.[↩](#fnref6) 7., their figure 2, and, his figure 1, provide more complete overviews of the links between mass, rotation period, field topology, and Rossby number for MS stars. Such plots of the $M\_\ast-P\_{\rm rot}$ plane with contours of constant *R**o* are not meaningful for our small sample of PMS stars, which span a range of stellar ages, as the *R**o* contours are highly sensitive to age due to the variation of $\tau\_{\rm c}$ values with the stellar contraction and radiative core growth.[↩](#fnref7) 8. The trend in Figure [spinup] is unlikely to be caused (at least entirely) by the stellar contraction, that is by the spin-up of stars as they contract with age in order to conserve angular momentum - there is no clear trend between $P\_{\rm rot}$ and *R*\* across our sample. Furthermore, stars in Figure [spinup] on either side of the fully convective divide span a range of ages.[↩](#fnref8) Can we predict the global magnetic topology of a pre-main sequence starfrom its position in the Hertzsprung-Russell diagram? ============================================================================================================================ Zeeman-Doppler imaging studies have shown that the magnetic fields of T Tauri stars can be significantly more complex than a simple dipole and can vary markedly between sources. We collect and summarize the magnetic field topology information obtained to date and present Hertzsprung-Russell (HR) diagrams for the stars in the sample. Intriguingly, the large scale field topology of a given pre-main sequence (PMS) star is strongly dependent upon the stellar internal structure, with the strength of the dipole component of its multipolar magnetic field decaying rapidly with the development of a radiative core. Using the observational data as a basis, we argue that the general characteristics of the global magnetic field of a PMS star can be determined from its position in the HR diagram. Moving from hotter and more luminous to cooler and less luminous stars across the PMS of the HR diagram, we present evidence for four distinct magnetic topology regimes. Stars with large radiative cores, empirically estimated to be those with a core mass in excess of  ∼ 40% of the stellar mass, host highly complex and dominantly non-axisymmetric magnetic fields, while those with smaller radiative cores host axisymmetric fields with field modes of higher order than the dipole dominant (typically, but not always, the octupole). Fully convective stars stars above $\gtrsim0.5\,{\rm M}\_\odot$ appear to host dominantly axisymmetric fields with strong (kilo-Gauss) dipole components. Based on similarities between the magnetic properties of PMS stars and main sequence M-dwarfs with similar internal structures, we speculate that a bistable dynamo process operates for lower mass stars ($\lesssim0.5\,{\rm M}\_\odot$ at an age of a few Myr) and that they will be found to host a variety of magnetic field topologies. If the magnetic topology trends across the HR diagram are confirmed they may provide a new method of constraining PMS stellar evolution models. Introduction ============ At the end of the protostellar phase of spherical accretion, a newly formed and optically visible pre-main sequence (PMS) T Tauri star is highly luminous due to its large surface area (*L*\* ∝ *R*\*2). The contracting star thus begins its journey towards the main sequence in the upper right of the Hertzsprung-Russell (HR) diagram while accreting material from its circumstellar disk. At this stage the temperature *T* and density *ρ* in the central regions of the star are not sufficient for thermonuclear reactions to occur, and the stellar luminosity is supplied by the release of the gravitational potential energy via the stellar contraction. During the fully convective phase of evolution the PMS star follows an almost vertical downward path in the HR diagram, called the Hayashi track. As the gravitational contraction proceeds the opacity *κ* in the central regions becomes dominated by free-free and bound-free transitions (e.g. ), for which *κ* ∝ *ρ**T*− 7/2. As the temperature continues to rise the central opacity thus drops, the star becomes more transparent, and the radiative gradient decreases below the critical value required to support convection (see the discussion in ) and a radiative core forms. This radiative core continues to grow reducing the depth of the convective zone.[1](#fn1) Eventually the temperature and luminosity of the contracting star rise, and it leaves its Hayashi track and moves onto its Henyey track, a process sometimes referred to as the “convective-radiative transition” (e.g. ). The rapid increase in effective temperature at a slowly increasing luminosity during the Henyey phase leads to a clear “gap” in color-magnitude diagrams of PMS clusters. The size of the gap is dependent on the mass cut-off between fully and partially convective stars which itself is a function of stellar age, meaning that the gap can, in principle, be used as a distance independent age indicator. Several observational results have been attributed to the development of a radiative core at the end of the fully convective phase of evolution. argue that the ratio of X-ray to bolometric luminosity is systematically lower for stars with radiative cores compared to those on the fully convective portion of their mass tracks, with finding similar reductions in X-ray luminosities in older PMS clusters (the older the cluster the greater the fraction of stars that have ended the fully convective phase). invoke radiative core development to explain the reduction in the scatter in X-ray luminosities apparent in rotation-activity plots in older PMS star forming regions. Furthermore, the growth of a radiative core appears to coincide with a reduction in the number of periodically variable T Tauri stars. The authors attribute this result to differing cool spot distributions in fully and partially convective stars with large cool spots (where bundles of magnetic flux burst through the stellar surface into the atmosphere) on fully convective objects and smaller more numerous spots on stars with radiative cores which naturally leads to less rotationally modulated variability. All of these observational results can be qualitatively explained if the external magnetic field topology of T Tauri stars changes as the stellar internal structure transitions from fully to partially convective. T Tauri stars have long been known to possess surface-averaged magnetic fields of order a kilo-Gauss as determined from Zeeman broadening measurements (e.g. and references therein). Such strong magnetic fields can disrupt circumstellar disks at a distance of a few stellar radii, provided that they are sufficiently globally ordered, a key requirement of magnetospheric accretion models (see for a review). The disk truncation radius *R**t* is set by the interplay between the strength of the stellar magnetosphere at the inner disk, which can be approximated from the polar strength of the dipole component of the multipolar stellar magnetic field at the surface of the star $B\_{\rm dip}$, and the disk mass accretion rate $\dot{M}$. Larger disk truncation radii are expected for stronger dipole components and/or weaker mass accretion rates as $R\_t \propto B\_{\rm dip}^{4/7}\dot{M}^{-2/7}$ (e.g. ), quantities which may vary significantly with time. Typical disk truncation radii are believed to be  ∼ 5 *R*\*, or  ∼ 0.05$\,{\rm AU}$ for a prototypical $2\,{\rm R}\_{\odot}$ T Tauri star. Such small scales, within or comparable to typical dust sublimation radii, can be probed with high resolution spectroscopy of gas emission lines (e.g. ) and by long baseline interferometry (see, and for reviews of the technique). Gas is typically found to extend closer towards the star than the dusty component of the disk for both T Tauri stars and the related more massive Herbig Ae/Be stars. For some sources the gas component of the disk provides a significant amount of the detected inner disk flux. The inner disk gas couples to the field lines of the stellar magnetosphere and is channeled onto the stellar surface at high velocity, where it shocks and produces detectable hotspots that are the source of continuum emission in excess of the stellar photospheric emission, as well as soft X-rays (e.g. ). The geometry and distribution of accretion hot spots is a strong function of the stellar magnetospheric geometry. Complementing the Zeeman broadening analysis, which is carried out in unpolarized light, measurement of the level of circular polarization in both accretion related emission lines and photospheric absorption lines allows information to be derived about the field topology itself. The large scale magnetic fields of accreting T Tauri stars appear to be well-ordered, and are simpler than the complex and loopy surface field regions (e.g. ). However, although the large-scale field that is interacting with the inner disk is somewhat dipole-like in appearance, the path of field lines close to the star, and consequently the magnetospheric accretion flow, is distorted close to the stellar surface by the complex field regions. Magnetic surface maps have now been published for a number of accreting T Tauri stars, one non-accreting weak line T Tauri star, and a few older post T Tauri stars, derived from the technique of Zeeman-Doppler imaging, as we discuss in the following section. Most of the published magnetic maps have been obtained as part of the Magnetic Protostars and Planets (MaPP) project. The main goal of this large program with the ESPaDOnS spectropolarimeter at the Canada-France-Hawai’i telescope, and the twin instrument NARVAL at the Télescope Bernard Lyot in the Pyrenées, is to investigate variations in the magnetic topology of accreting T Tauri stars of different mass[2](#fn2), age, accretion rate, rotation period, and outflow properties (see for a brief introduction to the program). The initial MaPP results have demonstrated that accreting T Tauri stars possess multipolar magnetic fields but with dipole components that are strong enough to disrupt the inner disk at distances of up to several stellar radii. Intriguingly, the field complexity and the polar strength of the dipole component appear to increase and decrease respectively when comparing stars with fully convective interiors to those which have developed radiative cores (e.g. ) - a concept that we explore fully in this paper. Similar variations in magnetic field topology of main sequence (MS) M-dwarfs that span the fully convective divide have been discovered by and. [mdwarfs] [bdipmdwarf] On the MS stars below  ∼ 0.35$\,{\rm M}\_\odot$, later than roughly spectral type M4, have a fully convective internal structure while more massive stars do not. Fully convective M-dwarfs close to the fully convective limit ($\gtrsim0.2\,{\rm M}\_\odot$) host simple, axisymmetric magnetic fields with strong dipole components. M-dwarfs of earlier spectral type which are partially convective with small radiative cores have magnetic fields with weaker dipole components that are dominantly axisymmetric (most of the magnetic energy is in the poloidal field modes; ). M-dwarfs with more substantial radiative cores have both weak dipole components and more complex magnetic fields (less magnetic energy in the poloidal field modes; ). This behavior is illustrated in Figures [mdwarfs] and [bdipmdwarf]. Clear differences in (the large scale) stellar magnetic field topologies are observed as a direct manifestation of the differing stellar internal structure and therefore, presumably, the different type of dynamo mechanism operating in fully and partially convective stars. Stars with outer convective zones and radiative cores are believed to possess a solar-like tacholine, a shear layer between the core and envelope, whereas fully convective stars lack this interface, and the dynamo process is different (e.g. ). , following on from, point out that the simple large scale fields of many fully convective M-dwarfs are more akin to the simple magnetic topologies of the gas giant planets within our Solar System (e.g. ), rather than the messy and complex fields observed on active zero-age main sequence K-type stars (e.g. ). However, the lowest mass M-dwarfs ($\lesssim$0.2$\,{\rm M}\_\odot$) with very similar stellar parameters (rotation rate and mass) can show drastically different magnetic field topologies. This may be caused by a bistable dynamo process, with a weak and a strong field dynamo branch, such that the two different dynamo regimes co-exist over a certain range of parameters (see figure 4 of, adapted from ). Fully convective stars in the strong field regime maintain steady and simple axisymmetric dominantly dipolar magnetic fields while fully convective stars with identical stellar parameters but in the weak field regime host complex multipolar fields which evolve rapidly in time and have weak dipole components. In this paper we explore the trends in the magnetic field topology of PMS stars with stellar internal structure that are now emerging from spectropolarimetric observing programs. Unlike MS M-dwarfs, however, the internal structure of PMS stars is changing rapidly due to the stellar contraction and (if the star is massive enough) the development of a radiative core. The stellar age is thus an important parameter in addition to mass when examining magnetic field topology variations for PMS stars caused by changes in the internal structure of the star. Given the importance of the magnetic field topology in controlling the star-disk interaction, in §[topo] we summarize the magnetic topology information, and the fundamental stellar parameters, that have been obtained to date for accreting PMS stars. In §[hrdiagrams] we position the stars with derived magnetic maps onto HR diagrams constructed from two different PMS evolutionary models, examine the role of the development of a radiative core in setting the magnetic field topology, and compare the field topologies of PMS stars to those of MS M-dwarfs with similar internal structures. In §[predictHRD] we argue that it may be possible to predict the global magnetic topology characteristics of a given star (e.g. whether or not the field will have a strong dipole component, is dominantly axisymmetric or non-axisymmetric etc) based on its position in the HR diagram. Our ability to do this, however, is dependent on a number of caveats and assumptions including the accuracy with which the star can be positioned in the HR diagram observationally, and the veracity of the PMS evolutionary models themselves. In §[discussion] we explore the implications of magnetic topology variations in terms of the star-disk interaction, while §[conclusions] contains our conclusions. T Tauri magnetic field topology =============================== Over the past few years spectropolarimetric Zeeman-Doppler imaging (ZDI) studies have revealed that the field topology of T Tauri stars can vary significantly between sources ( and provide reviews of the basic methodology of ZDI, while details specific to accreting T Tauri stars are discussed in and ). Magnetic maps of T Tauri stars are constructed by measuring the circular polarization (Stokes V) signal in both accretion related emission lines and in photospheric absorption lines, over at least one complete stellar rotation cycle and in practice several cycles. Circular polarization can be measured directly in the accretion related emission lines, for example HeI 5876Å or the CaII infrared triplet, but the signal is often too weak in a given individual magnetically sensitive photospheric absorption line. Thus, cross-correlation techniques, such as least-squares deconvolution, are used to extract information from as many lines as possible. By monitoring how rotationally modulated distortions, generated by magnetic surface features, move through the Stokes V profile, a 2D distribution of magnetic polarities across the surface of stars can be determined using maximum entropy reconstruction techniques, as well as the field orientation within the magnetic regions. The ability to derive maps of the surface magnetic topology of T Tauri stars (and of stars generally) is subject to some limitations, as discussed in detail by. ZDI, like all polarization techniques, suffers from the effects of flux cancellation. Photons received from regions of the stellar atmosphere permeated by opposite polarity magnetic fields are polarized in the opposite sense. Their signals can therefore cancel, resulting in a net polarization signal of zero. Due to this flux cancellation effect it is possible to recover information only about the medium-to-large scale field topology. Detailed features on scales that can be resolved in solar magnetograms remain below the resolution limit achievable in stellar magnetic maps (see for further discussion). Spectropolarimetric Stokes V studies thus likely miss a large fraction of the total magnetic flux, presumably contained within the tangled and complex small scale field, perhaps on the scale of bipolar groups detected on the Sun. The resolution achievable in stellar magnetic maps is also dependent on the stellar rotation period and the inclination. Once magnetic maps have been derived the field topology can be reconstructed as the values of the coefficients of a spherical harmonic decomposition, and the strength of the various field modes determined (e.g. ). For example, it is possible to decompose the field into the sum of a poloidal plus a toroidal component, and to calculate the strength, tilt, and phase of tilt, of the various multipole moments. Due to the stellar inclination surface magnetic field information cannot be obtained across portions of the stellar surface that remain hidden from view to an observer. This limitation is important when constructing 3D models of T Tauri magnetospheres via field extrapolation from the magnetic maps (e.g. ) as an assumption must be made whether to favor the symmetric (the odd ℓ number modes e.g. the dipole, the octupole, the dotriacontapole etc) or antisymmetric (the even ℓ number modes e.g. the quadrupole, the hexadecapole etc) field modes. As part of the tomographic imaging process maps of the surface distribution of cool (dark) spots and accretion related hot spots are also derived. These maps suggest that for the bulk of accreting T Tauri stars gas in accretion columns impacts the stellar surface at high latitudes close to the poles. This suggests that it is antisymmetric field modes, like the dipole and the octupole, that dominate. If the symmetric field modes were to dominate then the majority of the gas would accrete onto the equatorial regions, for example, which is not observed. The choice of whether to favor the symmetric or anti-symmetric modes, although important when constructing 3D models of the stellar magnetosphere, does not fundamentally change the appearance of the magnetic field maps. Magnetic maps have now been published for a number of T Tauri stars (see for a review). Some T Tauri stars host simple axisymmetric large-scale magnetic fields that are dominantly dipolar (AA Tau and BP Tau) or where a higher order field mode dominates (typically, but not always, the octupole; V2129 Oph, GQ Lup, TW Hya, and MT Ori), while others host highly complex magnetic fields that are dominantly non-axisymmetric with many high order multipole components (V4046 Sgr AB, CR Cha, CV Cha, and V2247 Oph). In Appendix [starinfo] we provide detailed information about the magnetic field topology of every accreting T Tauri star for which magnetic maps have been derived to date, with the main stellar properties listed in Tables [paramsfund] and [params]. The stellar internal structure information, masses, and ages have been derived by placing the stars onto the HR diagram as discussed in §[new3.1]. lcccccccc AA Tau & K7 & 4000 ± 100 & 0.0 ± 0.1 & 8.22 & 230 & no & … & 1 BP Tau & K7 & 4055 ± 112 & -0.03 ± 0.1 & 7.6 & 237 & no & … & 2 V2129 Oph & K5 & 4500 ± 100 & 0.15 ± 0.1 & 6.53 & 182 & yes & 78 & 3,4 GQ Lup & K7 & 4300 ± 50 & 0.0 ± 0.1 & 8.4 & 199 & yes & 100 & 5 TW Hya & K7 & 4075 ± 75 & -0.46 ± 0.1 & 3.56 & 180 & no & … & 6 MT Ori & K2 & 4600 ± 100 & 1.49 ± 0.13 & 8.53 & 322 & no & … & 7 V4046 Sgr A & K5 & 4370 ± 100 & -0.39 ± 0.1 & 2.42 & 117 & yes & 0.041 & 8 V4046 Sgr B & K5 & 4100 ± 100 & -0.57 ± 0.1 & 2.42 & 130 & yes & 0.041 & 8 CR Cha & K2 & 4900 ± 100 & 0.58 ± 0.13 & 2.3 & 92 & no & … & 9 CV Cha & G8 & 5500 ± 100 & 0.89 ± 0.08 & 4.4 & 56 & yes & 1596 & 9 V2247 Oph & M1 & 3500 ± 150 & -0.33 ± 0.1 & 3.5 & 222 & yes & 36 & 10 llccccccccc AA Tau & Dec07 & 0.70 & 0.00 & 1.42 & 0.63 & 0.00 & 1.39 & 1.9 & 1 … & Jan09 & … & … & … & … & … & … & 1.7 & 1 BP Tau & Feb06 & 0.75 & 0.00 & 1.80 & 0.69 & 0.00 & 1.64 & 1.2 & 2 … & Dec06 & … & … & … & … & … & … & 1.0 & 2 V2129 Oph & Jun05 & 1.36 & 0.19 & 3.67 & 1.14 & 0.10 & 2.28 & 0.3 & 3,4 … & Jul09 & … & … & … & … & … & … & 1.0 & 4 GQ Lup & Jul09 & 1.06 & 0.13 & 3.33 & 0.93 & 0.02 & 2.39 & 1.1 & 5 … & Jun11 & … & … & … & … & … & … & 0.9 & 5 TW Hya & Mar08 & 0.83 & 0.18 & 9.17 & 0.84 & 0.27 & 7.13 & 0.4 & 6 … & Mar10 & … & … & … & … & … & … & 0.7 & 6 MT Ori & Dec08 & 2.7 &  > 0.03, < 0.36 & 0.24 & 1.96 & 0.00 & 0.18 &  < 0.1 & 7 V4046 Sgr A & Sep09 & 0.91 & 0.47 & 13.0 & 0.98 & 0.64 & 12.0 & 0.1 & 8 V4046 Sgr B & Sep09 & 0.87 & 0.40 & 13.0 & 0.85 & 0.50 & 12.1 & 0.08 & 8 CR Cha & Apr06 & 1.96 & 0.65 & 2.89 & 1.78 & 0.39 & 1.67 &  > 0.09 & 9 CV Cha & Apr06 & 2.04 & 0.98 & 4.51 & 2.19 & 0.94 & 2.97 &  > 0.02 & 9 V2247 Oph & Jul08 & 0.36 & 0.00 & 1.4 & 0.35 & 0.00 & 1.67 & 0.1 & 10 The effective temperatures $T\_{\rm eff}$ and luminosities *L*\* listed in Table [paramsfund] are the values that were adopted in each of the papers where the magnetic maps were published, as listed in the reference column of the table, and to which readers are referred for detailed discussion. The one exception is the luminosity of V2247 Oph which we have updated using a more refined estimate of the distance to the *ρ*-Oph star forming region (see §[v2247]). Typically the effective temperatures were sourced from previously published literature values as derived from high resolution spectra. The exception to this is GQ Lup for which derived a new $T\_{\rm eff}$ as previous literature values were estimated from low resolution spectra and proved to be highly discrepant. As a consistency check a new spectral classification tool (called MagIcS) has been developed and applied to the ESPaDOnS spectra and has been tested for a number of main sequence and PMS template stars (Donati, in prep.). The assumed errors in $T\_{\rm eff}$ values are also taken from the previously published values, or assumed to be $100\,{\rm K}$ when no error estimate is available (errors from the spectral classification tool are $<100\,{\rm K}$). Luminosities were derived from the visual magnitudes and distance estimates to the various star forming regions, taking account of the uncertainty associated with the presence of surface cool spots. In the papers with published magnetic maps the error in $\log(L\_\ast/{\rm L}\_\odot)$ is typically assumed to be $0.1\,{\rm dex}$, with the exception of CR Cha, CV Cha and MT Ori. Further stellar parameters, including the stellar rotation periods adopted during the magnetic map reconstruction process, are listed in Table [params]. Given the importance of the dipole component in controlling the star-disk interaction, see §[dipcomp], we also list the polar strength of the dipole component of the multipolar magnetic field of each star. In Table [paramsfund] we also highlight which stars in our sample are part of binary systems and the binary separation. For those with large separations the presence of a companion star is not expected to have any influence of over the stellar magnetic field topology. Those with the smallest separations, V2247 Oph and V4046 Sgr AB, are found to host complex non-axisymmetric magnetic fields. We find little difference when comparing the field complexity found on single and binary stars. For example, both components of HD 155555, a tidally locked close post T Tauri binary, have magnetic field topologies and surface differential rotation measurements that are consistent with those of single stars with similar spectral types. Similar results have also being found for the M-dwarf eclipsing binary YY Gem (Morin et al., in prep.). Therefore, we do not expect that binarity plays a significant role in setting the field complexity, although it clearly plays a role in the evolution of the large-scale coronal field due to the interaction between the stellar magnetospheres if the binary separation is sufficiently small, e.g. DQ Tau. Such large-scale changes in the coronal field that triggers flares appears to be generated by only small changes in the surface field topology, as determined by contemporaneous spectropolarimetric and X-ray observations. Information from the Hertzsprung-Russell diagram ================================================ In this section we construct HR diagrams using the stars listed in Table [paramsfund] and contrast mass, age, and internal structure properties derived from two different PMS evolutionary models. We discuss the variation in the magnetic field topology of stars across the diagram and explore the similarities between the field topologies of PMS stars and main sequence M-dwarfs with similar internal structures. Magnetic field topology and stellar internal structure ------------------------------------------------------ Figure [hr] shows HR diagrams constructed from the and the Pisa PMS stellar evolution models. The mass tracks are colored according to the internal structure of the star - black for fully convective stars, and red for partially convective stars with radiative cores. The HR diagrams also include internal structure contours, the solid blue lines that connect stars of different effective temperature and luminosity, equivalently mass and age, but with the same internal structure (the same values of $M\_{\rm core}/M\_\ast$). The solid blue line on the right is the fully convective limit. Stars that lie in the region of the HR diagram above and to the right of the fully convective limit have fully convective interiors, and those in the region below and to the left have radiative cores, or are entirely radiative for the more massive stars beyond a certain age. The points in Figure [hr] are the stars listed in Table [paramsfund] and Appendix [starinfo] with different symbols representing stars with different large scale magnetic field topologies, as detailed in the caption. The stellar masses, ages, and radiative core masses derived from the and the Pisa models are listed in Table [params]. Generally the masses derived from both models agree to within  ∼ 10%, at least for the small sample of stars considered in this work. The exceptions are V2129 Oph and MT Ori which have masses  ∼ 20% and  ∼ 40% larger respectively in the models. With the exception of the lowest mass star V2247 Oph, the isochronal ages are consistently younger in the models. The largest age differences occur for high mass T Tauri stars, as well as for TW Hya. It can be seen from Figure [hr] that more massive PMS stars leave their Hayashi tracks at a younger age and their radiative cores grow more rapidly than those of lower mass stars (see Appendix [fullylimit] for further details). There is no general trend in the core mass relative to the stellar mass ($M\_{\rm core}/M\_\ast$) between the models; some stars have larger cores in one model, but in the same model other stars have smaller cores. Nevertheless it is encouraging that, with the exception of MT Ori, if a given star has ended the fully convective phase in one model it has also done so in the other model. In this paper we choose to consider variations in the stellar internal structure by considering the fractional radiative core mass $M\_{\rm core}/M\_\ast$ rather than considering the variation in the fractional radiative core radius $R\_{\rm core}/R\_\ast$. This is because once a core develops a change in the ratio $M\_{\rm core}/M\_\ast$ is a direct reflection of the growth of the radiative core, assuming that in the T Tauri phase the star is no longer accumulating significant mass via spherical infall and the stellar mass is set. In contrast, changes in the ratio $R\_{\rm core}/R\_\ast$ represents both the growth of the core, and the radius decrease of the contracting PMS star, see Figure [agecore]. The mass of the radiative core is therefore our preferred internal structure proxy, although it is directly related to the core radius as $M\_{\rm core}\propto R\_{\rm core}^3$ for a polytropic star. Once a star evolves on to the main sequence and its internal structure and radius have settled, both internal structure proxies can be used. [hr] Intermediate and high mass T Tauri stars ($\gtrsim$0.5$\,{\rm M}\_\odot$) ------------------------------------------------------------------------- It appears that the general characteristics of the large scale magnetic topology of an accreting T Tauri star ($\gtrsim$0.5$\,{\rm M}\_\odot$) are strongly related to the star’s position in the HR diagram (see Figure [hr]). Stars which have similar internal structures (but very different mass/age and effective temperature/luminosity) appear to have similar magnetic field topologies: (i) stars in the completely convective regime (at least those above  ∼ 0.5$\,{\rm M}\_\odot$ at an age of  ∼ few ${\rm Myr}$; see §[lowmass] where we discuss low mass T Tauri stars) have strong dipole components to their magnetic fields and their fields are dominantly axisymmetric (AA Tau and BP Tau); (ii) stars with small radiative cores and large outer convective zones have magnetic fields that are dominantly axisymmetric and have high order components that dominate the dipole (V2129 Oph, GQ Lup, TW Hya, and MT Ori - at least in the model for the latter, see below); and (iii) more evolved stars with substantial radiative cores and small outer convective zones have complex non-axisymmetric magnetic fields with weak dipole components (V4046 Sgr AB, CR Cha and CV Cha). In general the larger the radiative core the more complex the large scale magnetic field, and the weaker the dipole component (see Table [params]). We note that the magnetic field of MT Ori is largely axisymmetric and the octupole, the dotriacontrapole, and the ℓ = 7 field mode dominate the dipole. Its magnetic field is more similar to those of V2129 Oph, TW Hya, and GQ Lup, all of which have small radiative cores, and very different to those of the fully convective stars AA Tau and BP Tau. It thus seems likely that MT Ori has developed a radiative core, and the models give a more accurate representation of the stellar structure in this region of the HR diagram (the models suggest that MT Ori is still fully convective). This argument is further supported by the observed trends in the field topology with varying internal structure for MS M-dwarfs on either side of the fully convective divide, as we discuss in §[mdwarfcomp]. Low mass T Tauri stars ($\lesssim$0.5$\,{\rm M}\_\odot$) - dynamo bistability? ------------------------------------------------------------------------------ The low mass T Tauri regime ($\lesssim0.5\,{\rm M}\_\odot$) is, with the exception of V2247 Oph, an unexplored region of the HR diagram in terms of stellar magnetic field topologies. Intriguingly the field topology of V2247 Oph is complex and non-axisymmetric with a weak dipole component, and resembles the fields of more massive T Tauri stars with substantial radiative cores rather than that of the more massive fully convective stars. We therefore speculate that a bistable dynamo process with weak and strong field branches operates amongst the lowest mass fully convective PMS stars, similar to the lowest mass MS M-dwarfs discussed at the end of §[intro] (see, their figure 4 in particular). V2247 Oph would then belong to the weak field branch, while another fully convective star with similar stellar parameters but which belonged to the strong field dynamo branch, would host a simple magnetic field with a strong dipole component. Once the low mass T Tauri regime has been explored in detail we expect that stars with a variety of field topologies will be found, some with weak dipole components corresponding to the weak field dynamo branch, and some with strong dipole components corresponding to the strong field dynamo branch. As magnetic maps have yet to be obtained for the lowest mass fully convective PMS stars the exact boundary between the strong dipole component regime and the bistable dynamo regime across the HR diagram is unconstrained observationally, and also theoretically. As bistable dynamo behavior for MS M-dwarfs only occurs for stellar masses $\lesssim0.2\,{\rm M}\_\odot$ it is tempting to use this as the boundary separating the regions of fully convective PMS stars with strong dipole components (those with $M\_\ast\gtrsim0.2\,{\rm M}\_\odot$) and those fully convective stars where some host fields with strong dipole components while other stars with similar parameters host complex fields with weak dipole components (those with $M\_\ast\lesssim0.2\,{\rm M}\_\odot$).[3](#fn3) The $0.2\,{\rm M}\_\odot$ boundary is illustrated as the right-hand dashed blue line in the HR diagrams in Figure [hr]. However, for MS M-dwarfs whose internal structure and therefore presumably the dynamo magnetic field generation process has settled, the $0.2\,{\rm M}\_\odot$ boundary is  ∼ 60% of the MS fully convective limit of  ∼ 0.35$\,{\rm M}\_\odot$. PMS stars are still contracting, however, and as the boundary between stars which are fully convective and those which are not is a function of age (see Appendix [fullylimit]), we can also speculate that the mass boundary below which bistable dynamo behavior occurs is itself a function of age. The left-hand dashed blue line in Figure [hr] illustrates this alternative boundary which occurs at a stellar mass that is 60% of the fully convective limit at that age. Thus, for a given bistable dynamo boundary, fully convective stars to the left of the boundary in the HR diagram would have simple axisymmetric fields with strong dipole components; stars to the right would be in the bistable regime and may host a variety of field topologies just like the latest spectral type (lowest mass) M-dwarfs. Taking the age dependent boundary defined in Figure [hr] would mean that both AA Tau and BP Tau are actually in the bistable dynamo regime, and with simple magnetic fields with strong dipole components they would be on the strong field dynamo branch. In reality the two dashed blue lines in Figure [hr] likely represent upper and lower limits to the true bistable dynamo limit. Clearly more data, and in particular more ZDI studies, are required for fully convective T Tauri stars to better constrain this limit and test our predictions. Comparison with the magnetic topologies of MS M-dwarfs ------------------------------------------------------ Although the links between T Tauri magnetic field topologies and stellar internal structure discussed in §[intermed] & [lowmass] are thus far based on a limited sample of PMS stars, similar trends have been found for MS M-dwarfs on either side of the fully convective divide. For MS M-dwarfs the transition from dominantly axisymmetric to non-axisymmetric fields occurs once the stellar mass exceeds  ∼ 0.5$\,{\rm M}\_\odot$ (see Figure [mdwarfs]) which corresponds roughly to $M\_{\rm core}/M\_\ast\approx0.4$ (the models give $M\_{\rm core}/M\_\ast=0.26$ for M-dwarfs of mass 0.5$\,{\rm M}\_\odot$ and 0.44 for 0.6$\,{\rm M}\_\odot$). The trends in magnetic topology of MS M-dwarfs across the fully convective limit thus roughly match those found from ZDI studies of PMS stars with similar internal structures, although there may be one subtle difference. T Tauri stars with small radiative cores ($0<M\_{\rm core}/M\_\ast\lesssim0.4$) host axisymmetric magnetic fields but field modes of higher order than the dipole dominate (typically, but not always, the octupole e.g. TW Hya, V2129 Oph, and GQ Lup). In contrast, M-dwarfs with similar fractional radiative core mass ($M\_{\rm core}/M\_\ast$) host axisymmetric fields but the dipole component (although similarly weaker than that of fully convective stars) is the dominant field mode. These M-dwarfs have small inclinations, closer to pole-on. In such cases it is difficult to recover field topology information at low stellar latitudes, and to reliably infer field modes above the dipole when the dipole component is strong (large polar cool spots, e.g. that on TW Hya, help alleviate this problem in PMS stars with similarly low inclinations). This apparent difference between the PMS and main sequence sample may just be observational bias. Alternatively, if the difference between the samples is real, it may be due to the rapidly changing internal structure of a PMS star with the core continuing to grow as the star evolves towards the main sequence. The growth rate of the core (the rate of increase of the ratio $M\_{\rm core}/M\_\ast$) is more rapid the higher the stellar mass (see Appendix [fullylimit]). Thus as higher mass stars transition from fully convective to partially convective the dipole component of their magnetic fields may decay more rapidly the faster the core develops. Taking V2129 Oph and TW Hya as examples, although both currently have small radiative cores similar in size to mid spectral type M-dwarfs ( ∼ M4-M5 or 0.35-0.5$\,{\rm M}\_\odot$) of  ∼ 20% of their stellar mass, by the time they arrive on the main sequence they will have substantial radiative cores with $M\_{\rm core}/M\_\ast\approx0.95$ and  ≈ 0.75 respectively, and be of spectral type  ∼ F7 and  ∼ K3. Although TW Hya and V2129 Oph currently have internal structures that are comparable to mid M-dwarfs they will differ substantially by the time they arrive on the main sequence. By this stage their field topologies will likely resemble the more complex fields found for stars with small outer convection zones, like CR Cha, CV Cha and V4046 Sgr AB, and the earlier spectral type M-dwarfs. In other words we are observing the PMS stars at a stage of their evolution where their large-scale magnetic fields are in the process of transitioning from simple to more complex fields. This may explain the one subtle difference between PMS magnetic field topologies and those of MS M-dwarfs with currently similar internal structures. Can we predict the magnetic field topology of T Tauri stars? ============================================================ The magnetic Hertzsprung-Russell diagram ---------------------------------------- As summarized above, ZDI studies have revealed that T Tauri stars host multipolar magnetic fields; however the field topology seems to be strongly linked to the stellar internal structure, and consequently to how the magnetic field is generated and maintained by differing dynamo mechanisms. Empirically we define four distinct magnetic topology regions across the PMS, see Figure [colorhr], defined as we move from upper-left (warm/luminous) to lower-right (cool/faint) in the HR diagram: * Region 1 (blue in Figure [colorhr]): stars with substantial radiative cores $M\_{\rm core}/M\_\ast\gtrsim0.4$. In this region stars have complex magnetic fields with many high order components. The fields are highly non-axisymmetric and the dipole component is weak. This region contains the most massive T Tauri stars, typically those of spectral type G or early K, and also older stars of later spectral type. V4046 Sgr AB, CR Cha and CV Cha lie in this region. * Region 2 (green in Figure [colorhr]): stars with small radiative cores $0<M\_{\rm core}/M\_\ast\lesssim0.4$. In this region stars have magnetic fields that are dominated by strong high order field components. The dipole component may be weak or strong but it contains less magnetic energy than the higher order field modes. The fields are largely axisymmetric. MT Ori, TW Hya, V2129 Oph, and GQ Lup, lie in this region. * Region 3 (yellow in Figure [colorhr]): fully convective stars to the left of some boundary between the dashed blue lines. In this region stellar magnetic fields are axisymmetric with strong (kilo-Gauss) dipole components. AA Tau and BP Tau likely lie in this region. * Region 4 (within the yellow region in Figure [colorhr]): fully convective stars to the right of some boundary between the dashed blue lines. The boundary between this region and region 3 is not well defined observationally. The dashed blue lines in Figure [colorhr] are possible upper and lower limits to the true boundary (see §[lowmass]). By comparison to the magnetic topologies of the lowest mass fully convective M-dwarfs we expect that this region will be populated by stars with a mix of magnetic topologies, the dynamo process being bistable with a strong and weak field branch. Stars on the strong (weak) field branch will have fields similar to stars in region 3 (1). V2247 Oph lies in this region and on the weak field dynamo branch. [colorhr] The general magnetic topology characteristics of a given T Tauri star will change with age as the star evolves down its mass track towards the MS. In the high and intermediate mass regime, stars with mass $\gtrsim0.5\,{\rm M}\_\odot$, T Tauri stars initially host magnetic fields that are axisymmetric with a strong dipole component. As the fully convective phase of evolution ends and a small radiative core develops the field remains largely axisymmetric but the dipole component decays away, leaving a field that is dominated by strong high order field components (those with ℓ > 1). By the time that the core mass exceeds $M\_{\rm core}/M\_\ast\approx0.4$ the dipole component is weak, and the field is complex having lost its earlier axisymmetry. This core mass boundary is empirical, based on the limited sample of stars where magnetic maps have been published to date, and is therefore somewhat speculative at this stage. Clearly more data is required to confirm the exact value, but we note that currently unpublished data is consistent with these trends. The boundary may be more fluid, and itself dependent on stellar mass given that for higher mass stars the growth rate of the radiative core is more rapid than for lower mass stars (see Appendix [fullylimit]). Furthermore, if the boundary between fully convective stars with simple fields and those in the bistable regime is as extreme as masses below 60% of the fully convective limit (see the discussion in §[lowmass]; and the left-hand blue dashed line in Figure [colorhr]) then this picture would have to be modified, as some stars would be born within the bistable regime and host fields with weak dipole components which would then strengthen as the stars approach the fully convective limit. It is not clear what interplay between the stellar contraction, rotation period and mass could influence the dynamo process in this way. The general T Tauri magnetic topology trends are currently empirical and are based on knowledge garnered from observationally derived magnetic maps. However, a magnetic topology change due to the transition from a fully to a partially convective stellar interior is further supported by the observed changes in periodic variability and X-ray luminosities. If ZDI data acquired in future continues to follow the empirical trends, then in principle it is possible to infer the general properties of a T Tauri star’s large scale magnetic field solely from its position in the HR diagram. We do not claim that it is possible to know the exact properties of a star’s magnetic field based solely on its effective temperature and luminosity. Indeed we expect that the large scale field topology, and the strength of the various magnetic field components, will evolve in time due to magnetic cycles (see for discussion about the changes in the fields on V2129 Oph, TW Hya, and GQ Lup, although longer timescale observing programs potentially spanning several years are required to search for and confirm the existence of magnetic cycles on T Tauri stars). Nonetheless, it appears as though it is possible to estimate the general properties of the stellar magnetosphere, for example whether or not the field will be dominantly axisymmetric with a weak dipole component, or axisymmetric with strong higher order components, or if the field will be highly complex with many multipolar components and non-axisymmetric. Our ability to do so, however, depends on both the accuracy with which the star has been positioned in the HR diagram and on the veracity of the PMS stellar evolution models. Limitations: observational & theoretical ---------------------------------------- Our ability to ascertain the magnetic topology of a given star from the HR diagram would be limited by how well we can position the star in the HR diagram and on the dependability of the PMS evolution models (see for a detailed review). Observationally the challenges lie in the assignment of a stellar spectral type, and the subsequent conversion to effective temperature with the assumption of some metallicity and surface gravity dependent scale. Likewise to discern the stellar luminosity we must carefully account for extinction; the presence of large surface cool spots and the related photometric variability; for accreting PMS stars the additional luminosity from accretion; uncertainties in the distance estimate; and for some sources the existence of unresolved close companions. Theoretically the errors that can arise from assumptions in the constituent input physics of the PMS evolution models have been succinctly summarized by, and, with the latter paper providing detailed comparison between different evolutionary models. Modeling the evolution of a forming star along its mass track and across the HR diagram is a formidable task. Errors, as well as differences between the various available PMS evolution models, arise from differing assumptions about the equation of state; the adopted boundary conditions, for example whether a grey or more realistic atmosphere model is employed; how convection is handled; the assumed metallicity; the effects of mass accretion and rotation; and the influence of different formation histories during the protostellar phase. Taking these effects into account, estimates the errors in the mass tracks to be $\Delta T\_{\rm eff}\sim100-200\,{\rm K}$ and $\Delta\log{(L\_\ast/{\rm L}\_\odot)}\sim0.1$. Additionally if magnetic fields themselves are not accounted for in models of convection, a further source of error is introduced to the models. Finally, most models do not account for episodic accretion which may alter the stellar structure and the age at which stars of a given mass develop a radiative core. The uncertainty in the stellar evolution models themselves, and the observational difficulty in accurately assigning effective temperatures and luminosities, must be kept in mind when using Figure [colorhr] to predict the general magnetic field properties of a particular T Tauri star. However, turning the problem around, rather than using the star’s position in the HR diagram to ascertain its magnetic topology, it may be possible to use the observationally derived magnetic topology to test the accuracy of certain aspects of the PMS evolution models themselves. Just as dynamical mass measurements for binary stars and for single stars with disks can be used to constrain the accuracy of mass tracks, and the amount of lithium depletion isochronal ages (e.g. ; ), magnetic field topologies may be used to test stellar internal structure information. For example, in §[canwepredict] we argued that the field topology of T Tauri stars varies from simple and axisymmetric with a strong dipole component to complex and highly non-axisymmetric with a weak dipole component with the growth of a radiative core. A difference in the external field topology is expected given the different dynamo process operating in fully convective stars compared to more evolved and/or more massive stars with outer convection zones, radiative cores, and stellar analogs of the solar tacholine. The right hand solid blue line in Figure [hr] denotes the fully convective limit. By carrying out ZDI studies for stars around this limit, stark variation in the field topology between various stars may be used as a probe to observationally constrain which regions of the HR diagram are populated by fully convective stars, and which regions are populated by stars with radiative cores. In other words, by determining the regions of the HR diagram where stars with simple and complex magnetic fields lie, we can determine whether or not the internal structure information derived from the PMS evolution models is accurate. Discussion ========== The dipole component of T Tauri magnetospheres and the star-disk interaction ---------------------------------------------------------------------------- For accreting T Tauri stars it is generally the strength of the dipole component that is the most significant in terms of controlling the disk truncation radius, even when the dipole component is weak compared to the higher order components. This can be seen by considering the field strength at the inner disk truncation radius. Let’s consider a simple example of a star with a dipole plus an octupole field component, as many accreting T Tauri stars host large scale magnetic fields of this form. In the equatorial plane the contribution to the vertical component of **B** threading the disk at a distance *r* from the stellar center from the dipole component of polar strength $B\_{\rm dip}$ is $B\_{\rm dip}(R\_\ast/r)^3/2$. For the octupole field component the equivalent expression is $3B\_{\rm oct}(R\_\ast/r)^5/8$ where $B\_{\rm oct}$ is the polar strength of the octupole. Thus, assuming a typical disk truncation radius of 5 *R*\* (which is  ∼ 70% of the equatorial corotation radius for a $2\,{\rm R}\_\odot$ solar mass star with a rotation period of $6\,{\rm d}$), then the ratio of the strength of the octupole to the dipole component at the inner disk edge is $(3/100)(B\_{\rm oct}/B\_{\rm dip})$. Taking the ratio of the polar strength of the field components as $B\_{\rm oct}/B\_{\rm dip}=10$, which is larger than thus far observed for any T Tauri star (which is $B\_{\rm oct}/B\_{\rm dip}\approx6$ for TW Hya in March 2008; ) then the contribution to the field at the inner disk from the octupole component is only 30% that of the dipole component, and becomes less significant the weaker (stronger) the octupole (dipole) component and for larger disk truncation radii[4](#fn4). The dipole is, in the majority of cases, the most significant field component in controlling the disk truncation radius. Figure [bdipctts] shows the variation in the polar strength of the dipole component for high and intermediate mass T Tauri stars listed in Table [params]. It is clear that more massive and/or older T Tauri stars, those which have ended the fully convective phase of evolution, have weaker dipole components than younger and/or lower mass stars. A possible exception is for some of the low mass T Tauri stars which may show a variety of field topologies (see §[lowmass]). [bdipctts] The observed rapid decay in the dipole component with the growth of a radiative core can influence the star-disk interaction only if the fully convective phase ends before the disk has dispersed. As demonstrated in Appendix [fullylimit] the age at which a radiative core develops is highly dependent on stellar mass, with high mass T Tauri stars ($\gtrsim1.0\,{\rm M}\_\odot$) ending the fully convective phase in $\lesssim2.6\,{\rm Myr}$ based on the models, or $\lesssim2.2\,{\rm Myr}$ based on the models. This timescale drops to as little as $0.5\,{\rm Myr}$ in both models for stars of $2\,{\rm M}\_\odot$. Therefore the drop in the dipole component, and the subsequent effect on the star-disk interaction, is more relevant for higher mass T Tauri stars than for lower mass stars as most of the latter will have lost their disks before the end of the fully convective phase. However, there is also observational evidence that the disk lifetime is mass dependent with high mass stars losing their disks faster than stars in the intermediate and low mass range [5](#fn5). Therefore, the effect of the evolution of the large scale stellar magnetic field topology becomes a question of timescales. For a given star does the radiative core develop before it stops interacting with its circumstellar disk? There are a number of well studied T Tauri stars in the high and intermediate mass range which have developed radiative cores and which show evidence for significant ongoing accretion and substantial disks, for example the stars discussed in this paper as well as those studied by. In principle the rapid drop in the dipole component, and therefore the field strength at the inner disk, at the end of fully convective phase will allow the disk to push closer to the star. This would lead to a increased spin-up torque acting on the star due to the magnetic links with the disk interior to the equatorial corotation radius that are rotating faster than the star (in addition to the spin up torques from accretion and the stellar contraction; ) and consequently an increase in the stellar rotation rate. In contrast, fully convective stars (at least those that are not in the weak field bistable dynamo regime) with strong dipole components should be able to maintain their slow rotation by truncating their disks out to, and perhaps even beyond, the corotation radius (the propeller regime; ). There is tentative evidence for this rotational evolution scenario within the sample of stars considered in this paper. In Figure [hr] the size of the symbols is proportional to the rotation period of the star (listed in Table [params]). For intermediate and high mass T Tauri stars ( > 0.5$\,{\rm M}\_\odot$) the fully convective stars which have strong dipole components are more slowly rotating than those which have ended the fully convective phase. Additionally, stars which have spent longer with radiative cores (and therefore with weak dipole components) are, on average, rotating faster than the fully convective stars. This strongly hints that the effect of the change in the magnetic topology with the development of a radiative core is that a PMS star enters a spin-up phase, if they are still interacting with their disks when this transition occurs. However, this picture may be too simplistic, as the disk truncation radius is sensitive to parameters other than the magnetic field strength which will vary with time, and disk lifetimes are likely also a function of many parameters. The disk truncation radius depends on the stellar radius, the mass accretion rate, and the polar strength of the stellar dipole component $R\_t \propto B\_{\rm dip}^{4/7}R\_\ast^{12/7}\dot{M}^{-2/7}$ (e.g. )[6](#fn6). Although a drop in the dipole component will allow the disk to push closer to the star (as will the reduction in the stellar radius, since PMS stars are contracting) the observed drop in mass accretion rate with increasing stellar age (e.g. ) has the opposite effect and allows the stellar magnetosphere to keep the disk at bay at a larger radius. Thus the disk truncation radius, and consequently the balance of torques in the star-disk system, will depend on the interplay between the rate of decay of the dipole component and the drop in the mass accretion rate with time. Additionally magnetic cycles (the beginnings of which may have already been observed in V2129 Oph and GQ Lup, see Appendix [starinfo]) will cause variations in the large scale field topology, and therefore the disk truncation radius, over time. Thus a new generation of magnetospheric accretion models that track the rotational evolution of the star incorporating the time evolution of the mass accretion rate, the stellar contraction (similar to those of ) and for the first time the time evolution of magnetic fields following the observational correlations and potential magnetic cycles, are now warranted. Magnetic field topology, rotation rate, and Rossby number --------------------------------------------------------- Throughout this paper we have concentrated on the links between the stellar mass and age, which considered in tandem reveal the stellar internal structure, and the large-scale magnetic field topology. However, the stellar rotation rate and the Rossby number, the ratio of the rotation period to the local convective turnover time in the stellar interior ($Ro=P\_{\rm rot}/\tau\_{\rm c}$), may also influence the magnetic topology. In order to search for such trends we require estimates of the convective turnover time $\tau\_{\rm c}$. As PMS stars are contracting, and especially once a radiative core begins to grow at the expense of the convective zone depth, $\tau\_{\rm c}$ values are highly sensitive to the stellar mass and age. Unfortunately most published $\tau\_{\rm c}$ estimates come from models that track the stellar evolution over timescales of order Gyr and across a limited range of stellar mass, and lack the time and mass resolution required for our purposes. An exception is the model of. Y.-C. Kim has kindly supplied us with finer resolution grids than published. Our rough convective turnover time estimates are listed in Table [paramsfund]. The $\tau\_{\rm c}$ values are calculated at a distance of half of the mixing length above the base of the convective zone. Although it is only an assumption that this is the depth where the dynamo operates, the same assumption is made in the cited models and is fully consistent with the work of others (e.g. ). [magrot] Zeeman broadening studies (e.g. ) have found no links between the mean surface magnetic field strengths of PMS stars and rotation parameters. This is perhaps not surprising given that all T Tauri stars lie in the saturated regime (with some into the supersaturated regime) of the well-defined MS rotation-activity relation: plots of the ratio of X-ray to bolometric luminosity versus Rossby number. Zeeman broadening, which probes all of the small scale magnetic field regions close to the star (the tangling and reconnection of which gives rise to the X-ray emission), does not give access to information about the large-scale field topology. In this work we are particularly interested in links between the rotation parameters and the polar strength of the dipole component, given its importance to the star-disk interaction (see §[dipcomp]). [spinup] In Figure [magrot] we present plots of $B\_{\rm dip}$ versus the rotation parameters ($P\_{\rm rot}$ and *R**o*; plots with *v*\*sin*i* as the abscissa, which are not shown, are similar to the $P\_{\rm rot}$ plots) for both intermediate/high mass PMS stars and the early/mid-spectral type MS M-dwarfs from and. The M-dwarf sample spans the unsaturated and saturated regimes of the rotation-activity relation. The saturated regime occurs at rotation periods of $P\_{\rm rot}\lesssim4\,{\rm d}$ for M-dwarfs of mass  ∼ 0.5$\,{\rm M}\_\odot$, or at $Ro\lesssim0.1$. M-dwarfs which lie in the saturated regime (like all PMS stars) show little relation between $P\_{\rm rot}$ and $B\_{\rm dip}$ with a range of polar dipole strengths, being strongest for the fully convective stars (lowest mass; circles in Figure [magrot] upper right panel), weakest for the stars with small outer convective zones (highest mass; asterisks), and of intermediate strength for stars slightly more massive than the MS fully convective limit (triangles), as already discussed - see Figure [bdipmdwarf].[7](#fn7) There does appear to be correlations between $B\_{\rm dip}$ and $P\_{\rm rot}$, and $B\_{\rm dip}$ and *R**o*, for MS M-dwarfs if the sample is considered as a whole. The latter trend is driven by the factor of  ∼ 10 range of $P\_{\rm rot}$ values across the sample rather than the range of $\tau\_{\rm c}$ values, which vary by a factor of  ∼ 2 (as listed in and ), with the overall correlations arising as all the observed fully convective stars (circles in Figure [magrot] right panel) and almost all stars which are mostly convective (triangles) are faster rotators, while substantially radiative stars (asterisks) span a range of rotation rates. There is a clearer link between $B\_{\rm dip}$ and $P\_{\rm rot}$ for the PMS sample with the stars with the strongest dipole components (fully convective stars) spinning more slowly than stars with weaker dipole components (stars with radiative cores) - see Figure [magrot]. This is likely being driven by the star-disk interaction with stars with stronger dipole components able to truncate their disks out to corotation, see §[dipcomp], and those with weaker dipole components having smaller disk truncation radii and therefore being spun-up. GQ Lup and MT Ori (the rightmost three triangles in Figure [magrot]) are not exceptions to this trend, as these stars have only recently ended the fully convective phase and presumably have not had enough time to spin-up. This argument is supported by the strong correlation in Figure [spinup] where we have estimated the time relative to the end of the fully convective phase [that is the age of a star minus the age at which a star of its mass is expected to develop a radiative core as calculated from equation ([endfully])]. It is clear that the longer a star has spent with a radiative core, the faster its rotation period.[8](#fn8) We suggest that the relation between the stellar rotation rate and the strength of the dipole component for PMS stars is driven by the star-disk interaction rather than the dynamo magnetic field generation process itself. This is further supported by the lack of any clear relation between $B\_{\rm dip}$ and Rossby number, see Figure [magrot] (lower left panel). Although $\tau\_{\rm c}$ values drop from a couple of hundred days to a few tens of days with the development of a large radiative core (e.g. ), $P\_{\rm rot}$ values are also smaller for stars with large cores (see the asterisks in Figure [magrot], upper left panel). Thus there is little variation in *R**o* across our PMS sample, all of which lie in the saturated regime of the rotation-activity relation. Likewise, there is little variation in *R**o* values for MS M-dwarfs that lie in the saturated regime (*R**o* ≤ 0.1). Non-accreting pre-main sequence stars ------------------------------------- Given the importance magnetic fields in controlling the star-disk interaction, and in turn the stellar rotational evolution, we have thus far focussed our discussion on accreting T Tauri stars. Magnetic maps have also been published for one non-accreting weak line T Tauri star, V410 Tau ; and a handful of post T Tauri stars which have long since lost their disks and have spun-up, HD 155555 (a close binary system; ), HD 141943, and HD 106506. With the exception of V410 Tau all of the non-accreting stars have substantial radiative cores ($M\_{\rm core}/M\_\ast\approx0.93$ and  ≈ 0.84 for the primary and secondary stars of HD 155555) or have entirely radiative interiors (HD 106506 and HD 141943) as inferred from the models of. The non-accreting T Tauri stars are typically faster rotators than the accreting stars considered in §[topo] with $P\_{\rm rot}\le2.2\,{\rm d}$, as is commonly found (e.g. ). The post T Tauri stars have small, or no, outer convective zones and are found to host highly complex magnetic fields with many high order field components. This is consistent with the magnetic evolutionary scenario discussed above for accreting T Tauri stars. V410 Tau is the only non-accreting star with a small radiative core for which magnetic maps have been obtained. It is a young ( ∼ 1.7$\,{\rm Myr}$) and higher mass star ($M\_\ast\approx1.4\,{\rm M}\_\odot$). Widely varying estimates of its effective temperature have been reported in the literature, and its luminosity is highly uncertain given the large spot coverage (see the discussion in ). Thus the position of V410 Tau in the HR diagram is poorly constrained. According to the models of this star has already ended the fully convective phase of evolution and has a small radiative core ($M\_{\rm core}/M\_\ast\approx0.07$), although given the uncertainty in its HR diagram position it may have an internal structure that ranges from fully convective, to having a moderately sized core ($M\_{\rm core}/M\_\ast\approx0.3$; ). Its complex magnetic field topology has more in common with the accreting T Tauri stars with large radiative cores, suggesting that it has indeed ended the fully convective phase. However, with only one genuine weak line T Tauri star studied thus far it is not clear if the magnetic fields of these stars will follow similar trends as found for accreting classical T Tauri stars; but given that accreting stars (of age a few Myr) follow a similar magnetic topology trend with internal structure as found for MS M-dwarfs (of age a few Gyr), it is reasonable to assume that the magnetic topologies of more evolved PMS stars will follow suit. If they do not, it may indicate that accretion is modifying the stellar magnetic field generation process. Furthermore, if our conclusion that it is the star-disk interaction that is driving the relation between $P\_{\rm rot}$ and $B\_{\rm dip}$ (see §[tauc] and Figure [magrot], upper left panel) is correct then we do not necessarily expect to find the same behavior for systems where the disk has dispersed. With the influence of the disk removed, underlying relationships between the stellar magnetic field topology and the dynamo properties may be revealed. Non-accreting T Tauri stars will be the target of future spectropolarimetric observing campaigns to specifically address such issues. Conclusions =========== Spectropolarimetric observations carried out over at least a full stellar rotation, and ideally several rotation periods, combined with tomographic imaging techniques have allowed maps of the magnetic fields of a small sample of accreting T Tauri stars to be derived. T Tauri magnetic field topologies are found to vary with the stellar parameters. We find that the large scale topology appears to be directly linked to the internal structure of the star, and in particular to the size of the radiative core that develops at the end of the fully convective phase of evolution. We define four regions across the HR diagram, see §[canwepredict] and Figure [colorhr], delineating stars with different magnetic topology characteristics. Stars with substantial radiative cores, $M\_{\rm core}/M\_\ast\gtrsim0.4$, have complex fields that are highly non-axisymmetric with weak dipole components, only a few tenths of a kG at most (this defines region 1 of the HR diagram as discussed in §[canwepredict] and colored blue in Figure [colorhr]). V4046 Sgr AB, CR Cha and CV Cha have this type of field topology. Stars which have small radiative cores, $0<M\_{\rm core}/M\_\ast\lesssim0.4$, have largely axisymmetric large scale magnetic field topologies but field modes of higher order than the dipole component dominate. Their dipole components are generally weaker than those found for fully convective stars and appear to range from less than $0.1\,{\rm kG}$ to around $1\,{\rm kG}$. MT Ori, TW Hya, V2129 Oph, and GQ Lup possess magnetic fields like this. Intriguingly, stars that fall in this region of the HR diagram (region 2 discussed in §[canwepredict] and colored green in Figure [colorhr]) for which magnetic maps have been published all have strong octupole components to their magnetic fields, and this is the dominant field mode in TW Hya, V2129 Oph, and GQ Lup. There are currently no theoretical models to explain this trend. We emphasize that the limit of $M\_{\rm core}/M\_\ast\approx0.4$ between the regions 1 and 2 is empirical and more observations are required to properly determine the exact boundary. The boundary may also be a function of stellar mass itself, but we note that for main sequence M-dwarfs (whose topology trends with stellar internal structure mirror the behavior of PMS stars) the transition from dominantly axisymmetric to dominantly non-axisymmetric fields occurs at *M*\* ∼ 0.5$\,{\rm M}\_\odot$ (see Figure [mdwarfs]) which (roughly) corresponds to $M\_{\rm core}/M\_\ast\approx0.4$. Fully convective stars of mass $\gtrsim0.5\,{\rm M}\_\odot$ host simple fields that are dominantly axisymmetric with strong dipole components of order one to a few kG (region 3 of the HR diagram as discussed in §[canwepredict] and colored yellow in Figure [colorhr]). AA Tau and BP Tau fall into this category. Such stars will develop radiative cores before they arrive on the main sequence, at which point it seems likely that the dipole component of their magnetic fields will decay, but initially the axisymmetric nature of their fields will be maintained. Further core growth will eventually destroy the axisymmetry of their magnetic fields leaving the stars with complex fields with weak dipole components. It appears as though this drop in the dipole component, which will reduce the disk truncation radius and increase the spin-up torque on the star, influences the stellar rotation rate with stars which have spent longer with radiative cores being faster rotators. Although the magnetic topology trends that we have observed across the PMS of the HR diagram (see Figures [hr] & [colorhr]) are thus far based on a limited sample of stars, it is the overall excellent agreement between the large-scale field topologies of PMS stars and those of MS M-dwarfs with comparable internal structures (i.e. similar ratios of core mass to stellar mass $M\_{\rm core}/M\_\ast$) that gives us confidence to define distinct magnetic topology regimes. Further spectropolarimetric studies of PMS stars across the HR diagram are now required to test our conclusions. By far the clearest trends, observed for both the MS and PMS sample, is the rapid increase in field complexity, and the rapid decrease in the dipole component, when moving from objects close to the fully convective divide to those with substantial radiative cores. The lowest mass M-dwarfs (below  ∼ 0.2$\,{\rm M}\_\odot$, or later than spectral type  ∼ M5) are found to host a variety of field topologies which may be due to a bistable dynamo process. The similarity between the field topologies of MS M-dwarfs and PMS stars allows us to predict that bistable dynamo behavior, and therefore stars with a variety of large scale field topologies, will be found for the lowest mass PMS stars. We have thus defined a fourth region of the HR diagram where such stars will be found. The exact boundary between this region 4 and region 3 is poorly constrained and more ZDI studies are required for stars in this low mass regime. The only star studied in this region thus far is V2247 Oph, a fully convective star with a complex field that likely resides on the weak field bistable dynamo branch (stars on the strong field branch would host simple fields). Although he magnetic topology trends with stellar internal structure are empirical and currently lack theoretical grounding there is additional evidence for a field topology change as stars transition from fully to partially convective. With the growth of a radiative core large-scale magnetic fields become more and more complex. argued that such a change could explain their observed reduction in the number of periodically variable PMS stars, with fully convective stars hosting simple fields with large cool spots and partially convective stars more complex fields with more numerous and distributed smaller spots (which causes less photometric rotational variability). This is fully consistent with the Doppler maps of fully convective T Tauri stars which often show large (usually slightly offset from the rotation pole) high latitude spots (e.g. ). Likewise and find a systematic reduction in the ratio of X-ray to bolometric luminosity, which is driven by the strength of the convective dynamo, for stars with radiative cores. Furthermore argue that the change from fully to partially convective stellar interiors can explain their observation that the scatter in X-ray luminosities in rotation-activity plots reduces with increasing PMS cluster age. Whilst these observations lack the clarity or precision of our own they do add significant support to the change in magnetic field topology that we observe. We conclude that it is possible to predict the general characteristics of the magnetic field of a PMS star based purely on its position in the HR diagram. For example whether the field will be axisymmetric with a strong dipole component, or axisymmetric with a field component of higher order than the dipole dominant, or complex and non-axisymmetric with a weak dipole component. Large scale magnetic field topologies are likely variable over time too, as has been observed for V2129 Oph, TW Hya (tentatively), and GQ Lup. However, although the polar strength of the various field components was observed to vary for all of these stars, the general characteristics of their large-scale fields remained the same at both epochs - dominantly octupolar and well described by a tilted dipole plus a tilted octupole component. Likewise the general properties of the magnetic fields of AA Tau and BP Tau remained the same as derived from datasets taken in different observing seasons. We do caution, however, that our ability to predict the general large scale magnetic topology characteristics of a given PMS star is reliant on the veracity of the PMS evolution models themselves and on our ability to accurately position the star in the HR diagram in the first place. An alternative point of view, however, is that we can use ZDI studies and the derived magnetic topologies of T Tauri stars as a direct test of the internal structure information derived from the PMS evolutionary models themselves, just as dynamical mass measurements can constrain the mass tracks and lithium depletion the isochrones. For example, if a star is found to host a largely axisymmetric field, but with dominant high order components, it is likely that this star has already ended the fully convective phase of evolution and has developed a small radiative core. If the star falls in a region of the HR diagram where fully convective stars lie, it could then be argued that either the models are inadequate in this region, and/or a better assignment of the stellar effective temperature and luminosity needs to be made. In principle ZDI studies can be used to provide strong observational constraints on the divide between the fully and partially convective regions of the PMS in the HR diagram; a divide that is exquisitely model dependent. Pinpointing this divide observationally should allow the detailed testing of evolutionary models of stellar internal structure. Observationally constraining the fully convective divide as a function of mass and age does not only have important implications for periodic variability and X-ray emission, as discussed above. Additionally the development of a radiative core likely leads to a dramatic redistribution of angular momentum as the convective envelope and core decouple, which, if further studied, should yield insights into such phenomena as rotationally induced mixing (e.g. ). Furthermore, as the interaction with a circumstellar disk is (in most, but crucially not all, cases) dominated by the large scale dipole component of the magnetic field, studying stars with disks across the fully convective/radiative core divide will enable us to probe the dynamics and physics of magnetopsheric accretion as a function of magnetic field topology in an extremely targeted fashion. Data from the Magnetic Protostars and Planets (MaPP) program will continue to be obtained until at least the end of 2012. This continued stream of T Tauri magnetic maps, coupled with those for stars already observed as part of MaPP but not yet published, will allow the HR diagram to be more fully populated. This will allow the boundaries separating the different topology regions within the HR diagram to be better constrained observationally. Furthermore, by repeatedly observing the same stars over timescales of several years we will gain insight into the long term variability of the large scale magnetospheres of T Tauri stars, and possibly the existence of magnetic cycles. MaPP will thus further advance our understanding of the magnetism of forming low mass, including solar-like, stars. From a theoretical perspective models of the magnetospheric accretion process that incorporate magnetic fields with an observed degree of complexity have been developed. However, the observed variations in the magnetic field topology with the development of a radiative core, and the possible bistable dynamo process that appears to operate amongst the lowest mass T Tauri stars, highlight the need for new models, similar to those of, but which take proper account of both the magnetic field complexity, and the field variation with changes in the stellar internal structure. The authors thank L. Siess, N. Baliber & F. C. Adams for insightful discussions, E. Tognelli & P. G. Prada Moroni for sending stellar internal structure information from their PMS evolution models, Y.-C. Kim for sending convective turnover time estimates, and the referee for their useful comments. SGG is supported by NASA grant HST-GO-11616.07-A. JM is supported by a postdoctoral fellowship of the Alexander von Humboldt foundation. The “Magnetic Protostars and Planets” (MaPP) project is supported by the funding agencies of CFHT and TBL (through the allocation of telescope time) and by CNRS/INSU in particular, as well as by the French “Agence Nationale pour la Recherche” (ANR). Accreting T Tauri stars with magnetic maps derived from ZDI =========================================================== Stars with strong dipole components and axisymmetric large scale magnetic fields -------------------------------------------------------------------------------- ### AA Tau AA Tau is one of the best studied accreting stars and hosts the simplest large scale magnetic field yet discovered on any T Tauri star. Its large scale field is dominantly dipolar, with weak high order field components. The dipole component is strong with a polar field strength of  ∼ 2 kG (perhaps a large as 3 kG, see the discussion in ) and is tilted by  ∼ 10–20∘ with respect to the stellar rotation axis, see Table [params]. The star was observed at two different epochs separated by a year. The large scale field topology showed no significant evolution, which may be linked to the moderate phase coverage obtained at both epochs, but repeat observations are required to confirm this. Given the weak mass accretion rate on to AA Tau during the ESPaDOnS observations (an average of $\log\dot{M}=-9.2\,{\rm M}\_\odot{\rm yr}^{-1}$; ) coupled with the strength of the dipole component the disk may be truncated close-to the equatorial coorotation radius, or even beyond at some epochs. The mass accretion rate, however, was observed to vary by an order of magnitude from $\log\dot{M}=-9.6$ to $-8.5\,{\rm M}\_\odot{\rm yr}^{-1}$. AA Tau has a completely convective interior according to both the and the PMS stellar evolution models. ### BP Tau BP Tau has long been known to possess a strong stellar-disk averaged magnetic field from Zeeman broadening measurements, with strong circular polarization measured in the accretion related HeI 5876Å emission line. Like AA Tau, ZDI has revealed that BP Tau has a strong dipole component to its magnetic field. However it also possesses a strong octupole field component (of polar field strength of 1.6-1.8 kG compared to the dipole component of polar field strength 1-1.2 kG; see footnote c of Table [params]). Both the dipole and octupole moments are tilted relative to the stellar rotation axis, but by different amounts and towards different rotation phases. Magnetic maps have been derived at two different epochs, separated by around 10 months. The large scale field topology showed little change over this time, apart from an apparent rotation of the entire surface field by 0.25 in phase. This was most likely caused be a small error in the assumed rotation period (7.6 ± 0.1 d) building up over the  ∼ 39 rotations between the observing epochs. Variations in the large scale field topology cannot, however, be ruled out on longer time scales. We further note that the magnetic maps for BP Tau were published prior to the MaPP project, using an experimental version of the magnetic imaging code. This code considered polarization signals in the photospheric absorption lines and the accretion related emission lines separately, whereas the more mature version of the code constructs the maps by considering both signals simultaneously. The archival spectropolarimetric observations of BP Tau will be re-analyzed in a forthcoming paper, and will be presented alongside new recently obtained data. From its position in the HR diagram BP Tau is a fully convective star. Stars with dominant high order magnetic field components and axisymmetric large scale fields -------------------------------------------------------------------------------------------- ### V2129 Oph V2129 Oph was the first accreting T Tauri star for which magnetic maps were published (; see also for a re-analysis of the original dataset using the latest version of the magnetic imaging code). It has been observed at two different epochs, June 2005 and July 2009. At both epochs V2129 Oph was found to host a dominantly octupolar magnetic field. The dipole component of its multipolar magnetic field was found to vary by a factor of about three, from  ∼ 0.3$\,{\rm kG}$ to  ∼ 0.9$\,{\rm kG}$, in the four years between the observing runs. The clear detection of secular evolution of the large scale magnetic field demonstrates that it is dynamo generated and not of fossil origin. At both epochs the dipole and octupole field components were found to be slightly tilted with respect to the stellar rotation axis and tilted towards different rotation phases. V2129 Oph has a binary companion although this is about 50 times fainter in the V-band than V2129 Oph itself. The projected separation of 0.65”, as measured by, translates to $78\,{\rm AU}$ assuming a distance of $120\,{\rm pc}$ to the *ρ* Oph star forming region. V2129 Oph is no longer fully convective and has developed a small radiative core, $M\_{\rm core}/M\_\ast\approx0.2$. ### GQ Lup GQ Lup was found to host a dominantly octupolar magnetic field when observed in both July 2009 and June 2011. However its large scale field weakened considerably between the two observing epochs indicating a non-stationary dynamo process. The polar strength of the octupole component dropped from $2.4\,{\rm kG}$ to $1.6\,{\rm kG}$ and that of the dipole from $1.1\,{\rm kG}$ to $0.9\,{\rm kG}$ between 2009 and 2011. At both epochs the octupole was roughly aligned with the stellar rotation axis but the dipole component was tilted by  ∼ 30∘. Of the stars for which magnetic maps have been derived to date, GQ Lup displays the strongest large scale fields yet discovered, with the longitudinal field component measured in accretion related emission lines (i.e. those which probe the field where accretion columns impact the star) reaching $6\,{\rm kG}$. GQ Lup has a known substellar companion in its outer accretion disk, orbiting at $\sim100\,{\rm AU}$, that is most likely a brown dwarf. GQ Lup has developed a small radiative core, $M\_{\rm core}/M\_\ast \approx 0.13$. ### TW Hya As with V2129 Oph and GQ Lup, TW Hya was found to host a dominantly octupolar magnetic field at the two epochs it was observed. At both epochs the dipole component was found to be weak relative to the octupole component, with the ratio of their polar strengths varying from $B\_{\rm oct}/B\_{\rm dip}\approx6$ in March 2008 to $B\_{\rm oct}/B\_{\rm dip}\approx4$ in March 2010. At the first epoch the positive pole of the dipole component was found to be tilted by about 45∘ relative to the main negative pole of the octupole component. At the second epoch the dipole and octupole moments were roughly anti-parallel, with the main negative pole of the octupole coincident with the visible rotation pole of the star. The change in the tilt of the dipole component is tentative, however, given the relative weakness of the dipole component and the limited phase coverage obtained during the first observing run. It may be indicative of a magnetic cycle, but clearly repeated observations, potentially over many years and with improved phase coverage, are required to confirm this. TW Hya is a somewhat atypical T Tauri star given its low inclination ( ≈ 7∘; ) and since it still has a significant mass accretion rate (averaging $\log\dot{M}=-8.9\,{\rm M}\_\odot{\rm yr}^{-1}$ at both observing epochs; ) despite being of an age ($\sim9\,{\rm Myr}$) where the disks of most T Tauri stars have dispersed and accretion has ceased (e.g. ). TW Hya has an interior structure that consists of a radiative core surrounded by an outer convective envelope, with a core mass of $M\_{\rm core}/M\_\ast\approx0.2$. ### MT Ori MT Ori hosts a complex magnetic field with the surface of the star covered in many regions of opposite polarity, although its large scale field is dominantly axisymmetric (i.e. the *m* = 0 field modes dominate; ). The large scale dipole component was found to be weak $<100\,{\rm G}$ with the field dominated by the octupole (ℓ = 3), the dotriacontapole (ℓ = 5), and the ℓ = 7 field modes. The total contribution from the 3 ≤ ℓ ≤ 7 field components was 13 times stronger than the dipole (ℓ = 1) component with the octupole four times stronger than the dipole. At  ∼ 2.7$\,{\rm M}\_\odot$ according to the models of, or  ∼ 2$\,{\rm M}\_\odot$ using the models of, and  ∼ 0.25$\,{\rm Myr}$ this is the highest mass and youngest star in the MaPP sample. Despite its young age, MT Ori is massive enough to have already developed a small radiative core, at least in the models of. The models suggest that MT Ori is still fully convective. Given the similarity of its magnetic field to that of TW Hya, V2129 Oph, and GQ Lup, stars which have small radiative cores in both PMS evolution models, and the dissimilarity between its field and the simple fields of the fully convective stars AA Tau and BP Tau, we suggest that MT Ori does indeed have a small radiative core. Given the large uncertainty in its effective temperature and luminosity (see ) the core mass lies somewhere in the range $0.01\le M\_{\rm core}/M\_\ast\le0.36$ according to the models. Stars with complex non-axisymmetric large scale magnetic fields with weak dipole components ------------------------------------------------------------------------------------------- ### V4046 Sgr AB Both stars of the close binary system V4046 Sgr host complex magnetic fields with many high order field components. The large scale magnetosphere of each star, their dipole components, are weak and highly tilted with respect to their rotation axes (of polar strength  ∼ 100$\,{\rm kG}$ and  ∼ 80$\,{\rm kG}$ and tilted by 60∘ and 90∘ on the primary and secondary respectively). The planes of the tilts of the dipole moments are also offset by roughy 0.7 in rotation phase, further increasing the field complexity. The binary magnetospheric structure is highly complex and will be presented in a future paper. A circular polarization signal was not detected in the accretion related emission lines, consistent with the complex magnetic geometries and likely indicative of accretion spots being distributed across many opposite polarity regions. The binary orbit is circularized and synchronized with accretion occurring from a circumbinary disk. Recent numerical simulations suggest that small local circumstellar disks, distinct from the global circumbinary disk, may also form around the individual stars. As with TW Hya (see §[twhya]), V4046 Sgr (age  ∼ 13 Myr) is still accreting at an age when most T Tauri stars have lost their disks (e.g. ). The masses of V4046 Sgr AB listed in Table [params] are derived from the PMS evolution models and placing the stars on the HR diagram. These can be compared to the more accurate dynamical masses of $0.912\,{\rm M}\_\odot$ and $0.873\,{\rm M}\_\odot$ calculated by for V4046 Sgr A and V4046 Sgr B respectively. Both binary components have ended the fully convective phase of evolution with the primary and secondary have core masses of roughly 50% and 40% of their respective stellar masses using the models of. ### CR Cha CR Cha hosts a particularly complex magnetic field with a significant fraction of the magnetic energy in high ℓ-number field modes. Unlike the other T Tauri stars discussed in this paper CR Cha (and CV Cha, see below) were observed with SemelPol, a spectropolarimeter at the Anglo-Australian telescope. As with V4046 Sgr, a circular polarization signal was not detected in the accretion related emission lines. noted that this non-detection may have been due to insufficient S/N. CR Cha is too far south to be re-observed with ESPaDOnS. However, as Stokes V signals were not detected in the emission lines in the higher S/N ESPaDOnS spectra of V4046 Sgr, and as the magnetic maps of V4046 Sgr AB and CR Cha reveal a similar level of field complexity, a more likely explanation is that magnetospheric accretion on to the surface of CR Cha occurs into several distributed opposite polarity magnetic regions, yielding a net polarization signal of zero due to the flux cancellation effect. Because of this the value listed for the dipole component of the multipolar field of CR Cha in Table [params] is a lower limit and the true value may be larger. According to the models it has a substantial radiative core $M\_{\rm core}/M\_\ast = 0.65$. ### CV Cha The magnetic field of CV Cha is highly complex with many high order field components. As with CR Cha a circular polarization signal was not detected in the accretion related emission lines, and the dipole component listed in Table [params] is likely a lower limit to the true value. CV Cha is the primary star of a large separation (1596$\,{\rm AU}$) binary system; CW Cha being the secondary star. Of the two Chamealeon I stars for which magnetic maps have been published, CV Cha has the larger mass accretion rate, $\log{\dot{M}}=-7.5\,{\rm M}\_\odot {\rm yr}^{-1}$ compared to $\log{\dot{M}}=-9.0\,{\rm M}\_\odot {\rm yr}^{-1}$ for CR Cha. CV Cha is the earliest spectral type star in our sample (see Table [paramsfund]), is well into the Henyey phase of its evolution, and is almost entirely radiative ($M\_{\rm core}/M\_\ast \approx 1.0$; ). ### V2247 Oph V2247 Oph is the lowest mass T Tauri star for which magnetic maps have been published. It is fully convective and it is found to host a complex magnetic field with a weak dipole component. Its topology is therefore similar to the more massive T Tauri stars which have developed substantial radiative cores, rather than the simple fields of the other fully convective stars AA Tau and BP Tau. Its mass of  ∼ 0.36$\,{\rm M}\_\odot$ has been obtained from the models using a luminosity appropriate for a distance to the *ρ* Oph star forming region of $120\,{\rm pc}$. V2247 Oph is weakly accreting and has previously been classified as a non-accreting weak line T Tauri star (e.g. ). However, all of the accretion related emission lines are present in the ESPaDOnS spectra, with evidence for a weak accretion rate. The accretion rate is found to be highly variable over timescales of order a week, and over several years, reaching peaks of around $\log\dot{M}\approx{-9}\,{\rm M}\_\odot{\rm yr}^{-1}$. The spectral energy distribution of V2247 Oph suggests that its disk is rather evolved with a large inner (dust) disk gap. This may be due to a nearby binary companion star, separated by $36\,{\rm AU}$ adopting the distance given above, suggesting that accretion occurs from a circumbinary disk. V2247 Oph is rotating about twice as fast ($P\_{\rm rot}\sim3.5\,{\rm d}$) as the other fully convective stars in the sample, AA Tau ($P\_{\rm rot}=8.22\,{\rm d}$) and BP Tau ($P\_{\rm rot}=7.6\,{\rm d}$). As speculated by the faster spin rate of V2247 Oph may be a direct reflection of its complex magnetic field. Stars with magnetic fields with weaker dipole components would have disks that are magnetospherically truncated closer to the star, potentially resulting in a larger spin-up torque in comparison to that experienced by stars with stronger dipole components that are able to truncate their disks at larger radii (e.g. ). The variation in field topology between the more massive fully convective PMS stars and the low mass PMS star V2247 Oph is similar to what has been found for the main sequence M-dwarfs, see §[lowmass] and. The fully convective limit as a function of stellar age ======================================================= The age at which a star ends the fully convective phase of evolution (the fully convective limit) is a function of stellar mass (see Figure [agecore2]). From the models (*Z* = 0.02 with convective overshooting) we can estimate the age at which the fully convective phase ends, $${\rm age}\,[{\rm Myr}] \approx \left( \frac{1.494}{M\_\ast/{\rm M}\_\odot}\right )^{2.364}, \label{endfully}$$ which is the power-law fit, the solid black line, in Figure [agecore2] (stars below  ∼ 0.35$\,{\rm M}\_\odot$ remain fully convective e.g. ). Thus a $0.5\,{\rm M}\_\odot$ star ends the fully convective phase and develops a radiative core at an age of  ∼ 13.3$\,{\rm Myr}$, while a $1\,{\rm M}\_\odot$ and a $2\,{\rm M}\_\odot$ star develop a radiative core at an age of  ∼ 2.6$\,{\rm Myr}$ and  ∼ 0.5$\,{\rm Myr}$ respectively. The early development of a radiative core, and its more rapid growth, for more massive stars leads to a gap in the observed color-magnitude diagrams of young PMS clusters. The size of which, as discussed by, is a function of age and can possibly be used as a distance independent age indicator. The fully convective limit as a function of age. A power-law fit (solid line) to the data from the models (points) is given by equation ([endfully]). At a given age stars of higher mass than the fully convective limit have developed radiative cores while those of lower mass remain fully convective. [agecore2] [agecore] Recently new mass tracks and isochrones have been published by ; the Pisa PMS stellar evolution models. The Pisa model equivalent of equation ([endfully]) is $${\rm age}\,[{\rm Myr}] \approx \left( \frac{1.448}{M\_\ast/{\rm M}\_\odot}\right )^{2.101} \label{endfullypisa}$$ where *Z* = 0.02, *Y* = 0.288, *α* = 1.68, and $X\_{\rm D}=2\times10^{-5}$ have been assumed. Thus in the Pisa models a $0.5\,{\rm M}\_\odot$ star ends the fully convective phase and develops a radiative core at an age of  ∼ 9.3$\,{\rm Myr}$, while a $1\,{\rm M}\_\odot$ and a $2\,{\rm M}\_\odot$ star develop a radiative core at an age of  ∼ 2.2$\,{\rm Myr}$ and  ∼ 0.5$\,{\rm Myr}$ respectively. A comparison between equations ([endfully]) and ([endfullypisa]) reveals that the fully convective phase ends at approximately the same age for high mass T Tauri stars, with differences of  < 0.5$\,{\rm Myr}$ for stars of mass  > 0.95$\,{\rm M}\_\odot$, but this difference is larger for lower mass stars and exceeds $4\,{\rm Myr}$ for stars of mass  < 0.5$\,{\rm M}\_\odot$. As lower mass stars are more likely to lose their disk before a radiative core develops, and therefore before any drastic change in the large-scale field topology occurs, then the difference in the age at which a core develops between the and the models is not too significant. For example, for a $0.5\,{\rm M}\_\odot$ star the fully convective phase ends at $13.3\,{\rm Myr}$ or $9.3\,{\rm Myr}$ in the and models respectively. However, the fraction of stars with disks (substantial, primordial, dusty disks) is only  ∼ 0.5% or  ∼ 2.4% at such ages, using the disk lifetime estimate given by. Thus the $4\,{\rm Myr}$ year difference in age between the end of the fully convective phase in the two PMS evolution models has little effect in terms of any stellar spin-up that may occur when a star is still coupled to its disk when a radiative core develops. Of course, we must also account for the fact that for a given star the different mass tracks and isochrones of each model yield different age and mass estimates. Adams, F. C., & Gregory, S. G. 2012,, 744, 55 Akeson, R. 2008,, 52, 94 Akeson, R. L., Walker, C. H., Wood, K., et al. 2005,, 622, 440 Alexander, F., & Preibisch, T. 2012,, 539, A64 Argiroffi, C., Maggio, A., Montmerle, T., et al. 2012,, 752, 100 Argiroffi, C., Flaccomio, E., Bouvier, J., et al. 2011,, 530, A1 Aurière, M. 2003, EAS Publications Series, 9, 105 Baraffe, I., & Chabrier, G. 2010,, 521, A44 Bouvier, J. 2008,, 489, L53 Bouvier, J., Cabrit, S., Fernandez, M., Martin, E. L., & Matthews, J. M. 1993,, 272, 176 Bouvier, J., & Appenzeller, I. 1992,, 92, 481 Brown, S. F., Donati, J.-F., Rees, D. E., & Semel, M. 1991,, 250, 463 Browning, M. K. 2008,, 676, 1262 Calvet, N., Muzerolle, J., Briceño, C., et al. 2004,, 128, 1294 Calvet, N., & Gullbring, E. 1998,, 509, 802 Carpenter, J. M., Mamajek, E. E., Hillenbrand, L. A., & Meyer, M. R. 2006,, 651, L49 Chabrier, G., & Baraffe, I. 1997,, 327, 1039 Chuntonov, G. A., Smirnov, D. A., & Lamzin, S. A. 2007, Astronomy Letters, 33, 38 Cieza, L. A., et al.  2010,, 712, 925 Currie, T., & Kenyon, S. J. 2009,, 138, 703 D’Antona, F., Ventura, P., & Mazzitelli, I. 2000,, 543, L77 de Val-Borro, M., Gahm, G. F., Stempels, H. C., & Pepliński, A. 2011,, 413, 2679 Donati, J.-F., Gregory, S. G., Alencar, S. H. P., et al. 2012,, in press [astro-ph/1206.1770] Donati, J.-F. 2011, IAU Symposium, 271, 23 Donati, J.-F., Gregory, S. G., Montmerle, T., et al. 2011c,, 417, 1747 Donati, J.-F., Gregory, S. G., Alencar, S. H. P., et al. 2011b,, 417, 472 Donati, J.-F., Bouvier, J., Walter, F. M., et al. 2011a,, 412, 2454 Donati, J.-F., Skelly, M. B., Bouvier, J., et al. 2010b,, 409, 1347 Donati, J.-F., Skelly, M. B., Bouvier, J., et al. 2010a,, 402, 1426 Donati, J.-F., & Landstreet, J. D. 2009,, 47, 333 Donati, J.-F., Jardine, M. M., Gregory, S. G., et al. 2008b,, 386, 1234 Donati, J.-F., Morin, J., Petit, P., et al. 2008a,, 390, 545 Donati, J.-F., Jardine, M. M., Gregory, S. G., et al. 2007,, 380, 1297 Donati, J.-F., Howarth, I. D., Jardine, M. M., et al. 2006,, 370, 629 Donati, J.-F. 2003, Astronomical Society of the Pacific Conference Series, 307, 41 Donati, J.-F., Collier Cameron, A., Semel, M., et al. 2003,, 345, 1145 Donati, J.-F. 2001, Astrotomography, Indirect Imaging Methods in Observational Astronomy, 573, 207 Donati, J.-F., & Brown, S. F. 1997,, 326, 1135 Donati, J.-F., Semel, M., Carter, B. D., Rees, D. E., & Collier Cameron, A. 1997,, 291, 658 Dunstone, N. J., Hussain, G. A. J., Collier Cameron, A., et al. 2008a,, 387, 481 Dunstone, N. J., Hussain, G. A. J., Collier Cameron, A., et al. 2008b,, 387, 1525 Eisner, J. A., Monnier, J. D., Woillez, J., et al. 2010,, 718, 774 Eisner, J. A., Graham, J. R., Akeson, R. L., & Najita, J. 2009,, 692, 309 Endal, A. S., & Sofia, S. 1981,, 243, 625 Ercolano, B., Bastian, N., Spezzi, L., & Owen, J. 2011,, 416, 439 Fedele, D., van den Ancker, M. E., Henning, T., Jayawardhana, R., & Oliveira, J. M. 2010,, 510, A72 Getman, K. V., Broos, P. S., Salter, D. M., Garmire, G. P., & Hogerheijde, M. R. 2011,, 730, 6 Ghez, A. M., Neugebauer, G., & Matthews, K. 1993,, 106, 2005 Gilliland, R. L. 1986,, 300, 339 Gorti, U., Dullemond, C. P., & Hollenbach, D. 2009,, 705, 1237 Goudard, L., & Dormy, E. 2008, Europhysics Letters, 835, 59001 Gras-Velázquez, À., & Ray, T. P. 2005,, 443, 541 Gregory, S. G., & Donati, J.-F. 2011, Astronomische Nachrichten, 332, 1027 Gregory, S. G., Jardine, M., Gray, C. G., & Donati, J.-F. 2010, Reports on Progress in Physics, 73, 126901 Gregory, S. G., Matt, S. P., Donati, J.-F., & Jardine, M. 2008,, 389, 1839 Gregory, S. G., Jardine, M., Simpson, I., & Donati, J.-F. 2006,, 371, 999 Gregory, S. G., Jardine, M., Collier Cameron, A., & Donati, J.-F. 2005, 13th Cambridge Workshop on Cool Stars, Stellar Systems and the Sun, 560, 191 Gullbring, E., Hartmann, L., Briceno, C., & Calvet, N. 1998,, 492, 323 Hartmann, L. 2009, Accretion Processes in Star Formation: Second Edition, by Lee Hartmann. ISBN 978-0-521-53199-3. Published by Cambridge University Press, Cambridge, UK, 2009 Hartmann, L. 2001,, 121, 1030 Hartmann, L., Calvet, N., Gullbring, E., & D’Alessio, P. 1998,, 495, 385 Hayashi, C. 1961,, 13, 450 Henyey, L., Vardya, M. S., & Bodenheimer, P. 1965,, 142, 841 Hillenbrand, L. A., Bauermeister, A., & White, R. J. 2008, 14th Cambridge Workshop on Cool Stars, Stellar Systems, and the Sun, 384, 200 Hillenbrand, L. A., & White, R. J. 2004,, 604, 741 Hussain, G. A. J., 2012, Astronomische Nachrichten, 333, 4 Hussain, G. A. J., Collier Cameron, A., Jardine, M. M., et al. 2009,, 398, 189 Hussain, G. A. J., Jardine, M., Donati, J.-F., et al. 2007,, 377, 1488 Isella, A., Tatulli, E., Natta, A., & Testi, L. 2008,, 483, L13 Johns-Krull, C. M., Valenti, J. A., & Koresko, C. 1999a,, 516, 900 Johns-Krull, C. M., Valenti, J. A., Hatzes, A. P., & Kanaan, A. 1999b,, 510, L41 Johns-Krull, C. M., Valenti, J. A., & Linsky, J. L. 2000,, 539, 815 Johns-Krull, C. M. 2007,, 664, 975 Jung, Y. K., & Kim, Y.-C. 2007, Journal of Astronomy and Space Sciences, 24, 1 Kastner, J. H., Huenemoerder, D. P., Schulz, N. S., Canizares, C. R., & Weintraub, D. A. 2002,, 567, 434 Kim, Y.-C., & Demarque, P.  1996,, 457, 340 Königl, A. 1991,, 370, L39 Kraus, S., Preibisch, T., & Ohnaka, K. 2008,, 676, 490 Landin, N. R., Mendes, L. T. S., & Vaz, L. P. R. 2010,, 510, A46 Lavigne, J.-F., Doyon, R., Lafrenière, D., Marois, C., & Barman, T. 2009,, 704, 1098 Lejeune, T., & Schaerer, D. 2001,, 366, 538 Littlefair, S. P., Naylor, T., Harries, T. J., Retter, A., & O’Toole, S. 2004,, 347, 937 Loinard, L., Torres, R. M., Mioduszewski, A. J., & Rodríguez, L. F. 2008,, 675, L29 Long, M., Romanova, M. M., & Lamb, F. K. 2012, New Astronomy, 17, 232 Long, M., Romanova, M. M., Kulkarni, A. K., & Donati, J.-F. 2011,, 413, 1061 Long, M., Romanova, M. M., & Lovelace, R. V. E. 2008,, 386, 1274 Long, M., Romanova, M. M., & Lovelace, R. V. E. 2007,, 374, 436 Luhman, K. L., Allen, L. E., Allen, P. R., et al. 2008,, 675, 1375 Malbet, F. 2009,, 53, 285 Mamajek, E. E. 2009, American Institute of Physics Conference Series, 1158, 3 Marsden, S. C., Jardine, M. M., Ramírez Vélez, J. C., et al. 2011,, 413, 1922 Matt, S. P., Pinzón, G., Greene, T. P., & Pudritz, R. E. 2012,, 745, 101 Matt, S. P., Pinzón, G., de la Reza, R., & Greene, T. P. 2010,, 714, 989 Matt, S., & Pudritz, R. E. 2005,, 356, 167 Mayne, N. J., Naylor, T., Littlefair, S. P., Saunders, E. S., & Jeffries, R. D. 2007,, 375, 1220 Mayne, N. J., & Naylor, T. 2008,, 386, 261 Mayne, N. J. 2010,, 408, 1409 Millan-Gabet, R., Malbet, F., Akeson, R., et al. 2007, Protostars and Planets V, 539 Mohanty, S., & Shu, F. H. 2008,, 687, 1323 Morin, J., Donati, J.-F., Petit, P., et al. 2008,, 390, 567 Morin, J., Donati, J.-F., Petit, P., et al. 2010,, 407, 2269 Morin, J., Dormy, E., Schrinner, M., & Donati, J. -F. 2011,, 418, 133 Morin, J., Donati, J.-F., Petit, P., et al. 2011, IAU Symposium, 273, 181 Najita, J., Carr, J. S., & Mathieu, R. D. 2003,, 589, 931 Neuhäuser, R., Guenther, E. W., Wuchterl, G., et al. 2005,, 435, L13 Palla, F., & Stahler, S. W. 2001,, 553, 299 Palla, F., Randich, S., Flaccomio, E., & Pallavicini, R. 2005,, 626, L49 Palla, F. 2001, From Darkness to Light: Origin and Evolution of Young Stellar Clusters, 243, 525 Pinsonneault, M. 1997,, 35, 557 Pizzolato, N., Maggio, A., Micela, G., Sciortino, S., & Ventura, P. 2003,, 397, 147 Preibisch, T., et al. 2005,, 160, 401 Qi, C., et al. 2004,, 616, L11 Ragland, S., Akeson, R. L., Armandroff, T., et al. 2009,, 703, 22 Rebull, L. M., Stauffer, J. R., Ramirez, S. V., et al. 2006,, 131, 2934 Reiners, A., & Basri, G. 2009,, 496, 787 Reipurth, B., & Zinnecker, H. 1993,, 278, 81 Roberts, P. H. 1988, Geophysical and Astrophysical Fluid Dynamics, 44, 3 Rodriguez, D. R., Kastner, J. H., Wilner, D., & Qi, C. 2010,, 720, 1684 Romanova, M. M., Long, M., Lamb, F. K., Kulkarni, A. K., & Donati, J.-F. 2011,, 411, 915 Romanova, M. M., Ustyugova, G. V., Koldoba, A. V., & Lovelace, R. V. E. 2004a,, 610, 920 Romanova, M. M., Ustyugova, G. V., Koldoba, A. V., & Lovelace, R. V. E. 2004b,, 616, 151 Salter, D. M., Kóspál, Á., Getman, K. V., et al. 2010,, 521, A32 Saunders, E. S., Naylor, T., Mayne, N., & Littlefair, S. P. 2009,, 397, 405 Schrinner, M., Petitdemange, L., & Dormy, E. 2012,, 752, 121 Sicilia-Aguilar, A., Hartmann, L. W., Briceño, C., Muzerolle, J., & Calvet, N. 2004,, 128, 805 Siess, L. 2001, From Darkness to Light: Origin and Evolution of Young Stellar Clusters, Astronomical Society of the Pacific Conference Series, 243, 581 Siess, L., Dufour, E., & Forestini, M. 2000,, 358, 593 Simon, M., Dutrey, A., & Guilloteau, S. 2000,, 545, 1034 Simon, M., Howell, R. R., Longmore, A. J., et al. 1987,, 320, 344 Skelly, M. B., Donati, J.-F., Bouvier, J., et al. 2012,, submitted Skelly, M. B., Donati, J.-F., Bouvier, J., et al. 2010,, 403, 159 Soderblom, D. R. 2010,, 48, 581 Stempels, H. C., & Gahm, G. F. 2004,, 421, 1159 Symington, N. H., Harries, T. J., Kurosawa, R., & Naylor, T. 2005,, 358, 977 Tognelli, E., Prada Moroni, P. G., & Degl’Innocenti, S. 2011,, 533, A109 Valenti, J. A., & Johns-Krull, C. M. 2004,, 292, 619 Waite, I. A., Marsden, S. C., Carter, B. D., et al. 2011,, 413, 1949 Ward-Thompson, D., & Whitworth, A. P. 2011, An Introduction to Star Formation by Derek Ward-Thompson and Anthony P. Whitworth. Cambridge University Press, 2011. ISBN: 9780521630306, Williams, J. P., & Cieza, L. A. 2011,, 49, 67 Willis, D. M., & Osborne, A. D. 1982, Geophysical Journal International, 68, 765 Yang, H., & Johns-Krull, C. M. 2011,, 729, 83 --- 1. More massive T Tauri stars become entirely radiative during their PMS evolution and some develop convective cores on the main sequence if the power generated from thermonuclear reactions is sufficient (see e.g. the models of ). Stars with mass $\lesssim$0.35$\,{\rm M}\_\odot$ arrive on the main sequence and hydrogen fusion begins before a radiative core can develop, and thus retain a fully convective interior during their PMS evolution.[↩](#fnref1) 2. In this paper we refer to stars of mass $M\_\ast\lesssim0.5\,{\rm M}\_\odot$ as low mass, $0.5\lesssim M\_\ast/{\rm M}\_\odot\lesssim1.0$ as intermediate mass, and $M\_\ast\gtrsim1.0\,{\rm M}\_\odot$ as high mass PMS stars.[↩](#fnref2) 3. Recently have published a series of numerical simulations that suggest that bistable dynamo behavior can occur for all MS M-dwarfs in the fully convective regime, including those close to the fully convective limit. Presently there is no observational evidence for this, with bistable behavior only apparent for stars (both MS and PMS) located well below the fully convective limit.[↩](#fnref3) 4. This simple illustrative example ignores the tilt of the multipole components which should be accounted for when calculating the field strength at the inner disk, and consequently the disk truncation radius.[↩](#fnref4) 5. Recent theoretical models suggest only a weak stellar mass dependence on disk lifetimes. Furthermore disk lifetimes may be influenced by the star forming environment.[↩](#fnref5) 6. There is a difference in *R**t* for multipolar compared to dipolar magnetospheres, although in most cases this difference is small (see section 6 of ). Provided that the dipole component is not significantly tilted with respect to the stellar rotation axis *R**t* values can be estimated using the strength of the dipole component at the stellar rotation pole. The higher order field components, however, must be accounted for in models of accretion flow onto the star.[↩](#fnref6) 7., their figure 2, and, his figure 1, provide more complete overviews of the links between mass, rotation period, field topology, and Rossby number for MS stars. Such plots of the $M\_\ast-P\_{\rm rot}$ plane with contours of constant *R**o* are not meaningful for our small sample of PMS stars, which span a range of stellar ages, as the *R**o* contours are highly sensitive to age due to the variation of $\tau\_{\rm c}$ values with the stellar contraction and radiative core growth.[↩](#fnref7) 8. The trend in Figure [spinup] is unlikely to be caused (at least entirely) by the stellar contraction, that is by the spin-up of stars as they contract with age in order to conserve angular momentum - there is no clear trend between $P\_{\rm rot}$ and *R*\* across our sample. Furthermore, stars in Figure [spinup] on either side of the fully convective divide span a range of ages.[↩](#fnref8)
arxiv_0000221
The mass-*L**x* relation for moderate luminosity X-ray clusters =============================================================== We present measurements of the masses of a sample of 25 moderate X-ray luminosity clusters of galaxies from the 160 square degree ROSAT survey. The masses were obtained from a weak lensing analysis of deep *F*814*W* images obtained using the Advanced Camera for Surveys (ACS). We present an accurate empirical correction for the effect of charge transfer (in)efficiency on the shapes of faint galaxies. A significant lensing signal is detected around most of the clusters. The lensing mass correlates tightly with the cluster richness. We measured the intrinsic scatter in the scaling relation between *M*2500 and *L**X* to be *σ*log*L**X*∣*M* = 0.23− 0.04+ 0.10. The best fit power law slope and normalisation are found to be *α* = 0.68 ± 0.07 and $M\_X=(1.2\pm0.12)\times h\_{70}^{-1} 10^{14}\msun$ (for *L**X* = 2 × 1044*h*70− 2 erg/s). These results agree well with a number of recent studies, but the normalisation is lower compared to the study of. One explanation for this difference may be the fact that (sub)structures projected along the line-of-sight boost both the galaxy counts and the lensing mass. Such superpositions lead to an increased mass at a given *L**X* when clusters are binned by richness. Introduction ============ Clusters of galaxies are known to exhibit correlations between their various observable properties, such as the well-known relation between X-ray luminosity and X-ray temperature. These scaling relations are the result of the various processes that govern the formation and subsequent evolution of galaxy clusters. Hence, the study of cluster scaling laws provides the basis of testing models for the formation of clusters of galaxies and of galaxies themselves. For instance, N-body codes can nowadays reliably and robustly predict the evolution of the mass function of cluster halos. Hydrodynamic simulations and semi-analytic techniques then allow us to predict the X-ray and optical appearance of these halos. The mean relation and the intrinsic dispersion of the resulting mass-observable relations provide a test of the adequacy of the physical processes included in the simulations: a realistic simulation should reproduce the mean relation, the intrinsic scatter, and its evolution (if any). Such comparisons have recently led to the realization that cluster simulations must include radiative cooling and feedback from supernovae and AGNs in order to successfully explain the observed scaling relations (see for a review). While the core regions of clusters remain difficult to model accurately, high-resolution simulations with cooling and feedback now produce clusters whose X-ray properties agree well with those of observed clusters outside the central 100 kpc or so. An empirically determined mass scaling relation is not only useful to directly test our understanding of cluster physics, it also allows one to relate the observables directly to the mass of cluster-sized halos. The latter, for instance, enables one to transform the observed luminosity function into a mass function, which in turn can be compared to different cosmological models. Such studies have already provided interesting constraints on cosmological parameters which complement constraints from other probes. To determine cluster masses, a number of methods are available. Dynamical techniques, such as the assumption of hydrostatic equilibrium in X-ray observations, have been widely used. The inferred masses, however, are likely to be systematically underestimated by up to 20% because of pressure support from turbulence and energy input from active galactic nuclei. Masses derived from weak gravitational lensing do not require assumptions about the dynamical state of the cluster and can in principle be compared directly to the outcome of numerical simulations. It is worth noting, however, that lensing is sensitive to all structure along the line-of-sight. Weak lensing is now a well-established technique and recent work has indeed provided support for non-hydrostatic gas in the outskirts of X-ray luminous clusters of galaxies. To date, most work has focussed on the most massive clusters at intermediate redshifts, because of the relatively large lensing signal. In this paper we focus on extending the mass range towards clusters with lower X-ray luminosities ($L\_x < {\rm few~}10^{44}$ erg/s). To detect the lensing signal of these lower mass clusters, the number density of lensed background galaxies needs to be increased significantly, compared to deep ground based observations. The reason for this requirement is that the error in the weak lensing mass estimate is set by the intrinsic shapes of the galaxies and their number density. To this end, we have conducted a weak-lensing analysis on images obtained during a snapshot survey of 25 moderate luminosity X-ray clusters with the Advanced Camera for Surveys (ACS) on board the Hubble Space Telescope (HST). These clusters were randomly selected from the ROSAT 160 Square Degree (160SD) Cluster Catalog. The clusters in the 160SD sample were serendipitously discovered in pointed, relatively wide-field (30’ diameter) observations of the Position-Sensitive Proportional Counter (PSPC) on board the ROSAT X-ray observatory (1990-1999). Seventy-two X-ray clusters between *z* = 0.3 − 0.7 were discovered (see Fig. [sample]). All have been identified and assigned redshifts. Because this survey included pointed observations that were quite long, some of these clusters are among the faintest clusters of galaxies known at these moderately high redshifts. Therefore, our new weak lensing measurements extend the X-ray luminosity limit of the mass-*L**x* relation by almost an order of magnitude, based on targeted observations. We note that studies of the ensemble-averaged properties of clusters discovered in the SDSS and X-ray groups in COSMOS have pushed the limits to even lower luminosities. The structure of the paper follows. In §2 we describe our data and weak lensing analysis. In particular we discuss how we correct for the effects of CTE in our ACS data and how we correct for PSF anisotropy. The measurements of the cluster masses, and the comparison to the X-ray properties are presented in §3. We compare our results to previous work and examine biases in our mass estimates that arise from uncertainties in the position of the cluster center and sample selection. Throughout this paper we assume a flat ΛCDM cosmology with Ω*m* = 0.3 and *H*0 = 70*h*70 km/s/Mpc. crccccccccc name & number & *z* & *L**X* & RA$\_{\rm X-ray}$ & DEC$\_{\rm X-ray}$ & RA$\_{\rm BCG}$ & DEC$\_{\rm BCG}$ & $Q\_{\rm BCG}$ & ⟨*β*⟩ & ⟨*β*2⟩ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) & (10) & (11) RXJ0056.9 − 2740 & 6 & 0.563 & 1.14 &  00*h*56*m*56.1*s* &  − 27∘40ʹ12ʺ &  00*h*56*m*56.98*s* &  − 27∘40ʹ29.9ʺ & 2 & 0.42 & 0.23 RXJ0110.3 + 1938 & 8 & 0.317 & 0.40 &  01*h*10*m*18.0*s* &  + 19∘38ʹ23ʺ &  01*h*10*m*18.22*s* &  + 19∘38ʹ19.4ʺ & 0 & 0.61 & 0.43 RXJ0154.2 − 5937 & 20 & 0.360 & 0.90 &  01*h*54*m*14.8*s* &  − 59∘37ʹ48ʺ &  01*h*54*m*13.72*s* &  − 59∘37ʹ31.0ʺ & 1 & 0.57 & 0.38 RXJ0522.2 − 3625 & 41 & 0.472 & 2.07 &  05*h*22*m*14.2*s* &  − 36∘25ʹ04ʺ &  05*h*22*m*15.48*s* &  − 36∘24ʹ56.1ʺ & 1 & 0.48 & 0.29 RXJ0826.1 + 2625 & 52 & 0.351 & 0.65 &  08*h*26*m*06.4*s* &  + 26∘25ʹ47ʺ &  08*h*26*m*09.45*s* &  + 26∘25ʹ03.1ʺ & 0 & 0.58 & 0.39 RXJ0841.1 + 6422 & 56 & 0.342 & 1.60 &  08*h*41*m*07.4*s* &  + 64∘22ʹ43ʺ &  08*h*41*m*07.65*s* &  + 64∘22ʹ26.0ʺ & 2 & 0.59 & 0.40 RXJ0847.1 + 3449 & 59 & 0.560 & 1.17 &  08*h*47*m*11.3*s* &  + 34∘49ʹ16ʺ &  08*h*47*m*11.79*s* &  + 34∘48ʹ51.8ʺ & 1 & 0.42 & 0.23 RXJ0910.6 + 4248 & 69 & 0.576 & 1.44 &  09*h*10*m*39.7*s* &  + 42∘48ʹ41ʺ &  09*h*10*m*40.53*s* &  + 42∘49ʹ59.1ʺ & -1&0.41 & 0.23 RXJ0921.2 + 4528 & 70 & 0.315 & 1.40 &  09*h*21*m*13.4*s* &  + 45∘28ʹ50ʺ &  09*h*21*m*13.46*s* &  + 45∘28ʹ56.1ʺ & 1 & 0.61 & 0.43 RXJ0926.6 + 1242 & 71 & 0.489 & 2.04 &  09*h*26*m*36.6*s* &  + 12∘42ʹ56ʺ &  09*h*26*m*36.70*s* &  + 12∘43ʹ03.8ʺ & 2 & 0.47 & 0.28 RXJ0957.8 + 6534 & 80 & 0.530 & 1.39 &  09*h*57*m*53.2*s* &  + 65∘34ʹ30ʺ &  09*h*57*m*51.22*s* &  + 65∘34ʹ25.1ʺ & 2 & 0.44 & 0.25 RXJ1015.1 + 4931 & 88 & 0.383 & 0.78 &  10*h*15*m*08.5*s* &  + 49∘31ʹ32ʺ &  10*h*15*m*08.44*s* &  + 49∘31ʹ50.8ʺ & 2 & 0.55 & 0.36 RXJ1117.2 + 1744 & 96 & 0.305 & 0.54 &  11*h*17*m*12.0*s* &  + 17∘44ʹ24ʺ &  11*h*17*m*11.23*s* &  + 17∘44ʹ00.5ʺ & 2 & 0.62 & 0.44 RXJ1117.4 + 0743 & 97 & 0.477 & 0.76 &  11*h*17*m*26.1*s* &  + 07∘43ʹ35ʺ &  11*h*17*m*26.04*s* &  + 07∘43ʹ38.3ʺ & 1 & 0.48 & 0.29 RXJ1123.1 + 1409 & 101 & 0.340 & 1.01 &  11*h*23*m*10.2*s* &  + 14∘09ʹ44ʺ &  11*h*23*m*10.95*s* &  + 14∘08ʹ36.4ʺ & 1 & 0.59 & 0.40 RXJ1354.2 − 0221 & 151 & 0.546 & 1.53 &  13*h*54*m*16.9*s* &  − 02∘21ʹ47ʺ &  13*h*54*m*17.19*s* &  − 02∘21ʹ59.2ʺ & 2 & 0.43 & 0.24 RXJ1524.6 + 0957 & 170 & 0.516 & 3.94 &  15*h*24*m*40.3*s* &  + 09∘57ʹ39ʺ &  15*h*24*m*41.56*s* &  + 09∘57ʹ34.3ʺ & 1 & 0.45 & 0.26 RXJ1540.8 + 1445 & 172 & 0.441 & 0.81 &  15*h*40*m*53.3*s* &  + 14∘45ʹ34ʺ &  15*h*40*m*53.96*s* &  + 14∘45ʹ56.0ʺ & 1 & 0.51 & 0.31 RXJ1642.6 + 3935 & 186 & 0.355 & 0.62 &  16*h*42*m*38.9*s* &  + 39∘35ʹ53ʺ &  16*h*42*m*38.35*s* &  + 39∘36ʹ10.4ʺ & 1 & 0.58 & 0.39 RXJ2059.9 − 4245 & 199 & 0.323 & 2.44 &  20*h*59*m*55.2*s* &  − 42∘45ʹ33ʺ &  20*h*59*m*54.92*s* &  − 42∘45ʹ32.1ʺ & 2 & 0.61 & 0.42 RXJ2108.8 − 0516 & 200 & 0.319 & 0.81 &  21*h*08*m*51.2*s* &  − 05∘16ʹ49ʺ &  21*h*08*m*51.17*s* &  − 05∘16ʹ58.4ʺ & 2 & 0.61 & 0.42 RXJ2139.9 − 4305 & 203 & 0.376 & 0.59 &  21*h*39*m*58.5*s* &  − 43∘05ʹ14ʺ &  21*h*39*m*58.22*s* &  − 43∘05ʹ13.9ʺ & 2 & 0.56 & 0.37 RXJ2146.0 + 0423 & 204 & 0.531 & 2.61 &  21*h*46*m*04.8*s* &  + 04∘23ʹ19ʺ &  21*h*46*m*05.52*s* &  + 04∘23ʹ14.3ʺ & 2 & 0.44 & 0.25 RXJ2202.7 − 1902 & 205 & 0.438 & 0.70 &  22*h*02*m*44.9*s* &  − 19∘02ʹ10ʺ &  22*h*02*m*45.50*s* &  − 19∘02ʹ21.1ʺ & 2 & 0.51 & 0.31 RXJ2328.8 + 1453 & 219 & 0.497 & 1.02 &  23*h*28*m*49.9*s* &  + 14∘53ʹ12ʺ &  23*h*28*m*52.27*s* &  + 14∘52ʹ42.8ʺ & 1 & 0.46 & 0.27   Data and analysis ================= The data studied here were obtained as part of a snapshot program (PI: Donahue) to study a sample of clusters found in the 160 square degree survey. The clusters from the latter survey were selected based on the serendipitous detection of extended X-ray emission in ROSAT PSPC observations, resulting in a total survey area of 160 deg2. A detailed discussion of the survey can be found in. The sample was reanalysed by, which also lists spectroscopic redshifts for most of the clusters. The X-ray luminosity as a function of cluster redshift is plotted in Figure [sample]. The HST snapshot program targeted clusters with 0.3 < *z* < 0.6, which were observed during HST cycles 13 and 14. This resulted in a final sample of 25 clusters that were imaged in the *F*814*W* filter with integration times of  ∼ 2200s. Note that these were drawn randomly from the sample, and therefore represent a fair sample of an X-ray flux limited survey. The clusters with HST data are indicated in Figure [sample] by the large open points. Table [tabsample] provides a list of the clusters that were observed. It also lists the restframe X-ray luminosities in the 0.1 − 2.4 keV band. The values were taken from the BAX X-Rays Galaxy Clusters Database[1](#fn1) and converted to the cosmology we adopted here. As explained on the BAX website, the luminosities were derived from the flux measurements using a Raymond-Smith type spectrum assuming a metallicity of 0.33 times solar. Each set of observations consists of four exposures, which allow for efficient removal of cosmic rays. We use the multidrizzle task to remove the geometric distortion of ACS images and to combine the multiple exposures. This task also removes cosmic rays and bad pixels, etc. Because of the large geometric distortion and offsets between the individual exposures, the images need to be resampled before co-addition. A number of options have been studied by who suggest to use a Gaussian kernel and an output pixel scale of 0.03“. However, this procedure leads to correlated noise in the final image. Instead we opt for the lanczos3 resampling kernel and keep the original pixel size of 0.05”. These choices preserve the noise properties much better, while the images of the galaxies are adequately sampled. These co-added images are subsequently used in the weak lensing analysis. Shape analysis -------------- The first step in the weak lensing analysis is the detection of stars and galaxies, for which we use SExtractor. The next important step is the unbiased measurement of the shapes of the faint background galaxies that will be used to quantify the lensing signal. To do so, we measure weighted quadrupole moments, defined as $$I\_{ij}=\int d^2{{\mbox{{\it \boldmath$x$}}}} x\_i x\_j W({{\mbox{{\it \boldmath$x$}}}}) f({{\mbox{{\it \boldmath$x$}}}}),$$ where $W({{\mbox{{\it \boldmath$x$}}}})$ is a Gaussian with a dispersion *r**g*, which is matched to the size of the object. These quadrupole moments are combined to form the two-component polarization $$e\_1=\frac{I\_{11}-I\_{22}} {I\_{11}+I\_{22}}, ~{\rm and}~e\_2=\frac{I\_{12}}{I\_{11}+I\_{22}}.$$ However, we cannot simply use the observed polarizations, because they have been modified by a number of instrumental effects. Although the PSF of HST is small compared to ground based data, the shapes of the galaxies are nonetheless slightly circularized, lowering the amplitude of the observed lensing signal. Furthermore, the PSF is not round and the resulting anisotropy in the galaxy shapes needs to be undone. To do so, we use the well established method proposed by, with the modifications provided in and. Correction for CTE degradation ------------------------------ An issue that is relevant for ACS observations is the degradation of the charge transfer efficiency (CTE) with time. Over time, cosmic rays cause an increasing number of defects in the detector. During read-out, these defects can trap charges for a while. The delayed transfer of charge leads to a trail of electrons in the read-out direction, causing objects to appear elongated. The effect is strongest for faint objects, because brighter objects quickly fill the traps. provide a detailed discussion of CTE effects and their impact on weak lensing studies. Similar to we derive an empirical correction for CTE, but our adopted approach differs in a number of ways. The presence of CTE in our data leads to a slight modification of our lensing analysis. First, we note that CTE affects only *e*1 and that the change in shape occurs during the read-out stage. Hence the measured value for *e*1 needs to be corrected for CTE *before* the correction for PSF anisotropy and circularization: $$e\_1=e\_1^{\rm obs} - e\_1^{\rm CTE},$$ where $e\_1^{\rm CTE}$ is the predicted change in polarization given by the model derived in the Appendix. To derive our CTE model, we used observations of the star cluster NGC 104 as well as 100 exposures from the COSMOS survey. The observations of a star cluster allow us to study the effect of CTE as a function of position, time and signal-to-noise ratio with high precision, because stars are intrinsically round after correction for PSF anisotropy. We find that the CTE effect increases linearly with time and distance from the read-out electronics. The amplitude of the effect is observed to be proportional to $\sqrt{S/N}$. We use 100 exposures from COSMOS to examine how the CTE effect depends on source size. We find a strong size dependence, with $e\_1^{\rm CTE}\propto r\_g^{-2}$ (where *r**g* is the dispersion of the Gaussian weight function used to measure the quadrupole moments). Correction for the PSF ---------------------- Once the CTE effect has been subtracted for all objects (including stars), the lensing analysis proceeds as described in. Hence the next step is to correct both polarization components for PSF anisotropy. The ACS PSF is time dependent and therefore a different PSF model is required for each observation. An added complication is the fact that only a limited number of stars are observed in each image. To estimate the spatial variation of the PSF anisotropy a number of procedures have been proposed. Here we opt for a simple approach, similar to the one used in. We use observations of the star cluster NGC104 to derive an adequate model for PSF anisotropy. These data were taken at the start of ACS operations (PID 9018), and therefore do not suffer from CTE effects. We model the PSF anisotropy by a third order polynomial in both *x* and *y*. Such a model would not be well constrained by our galaxy cluster data, but it is thanks to the high number density of stars in NGC104. We select one of the models as our reference, because the PSF pattern varied only mildly from exposure to exposure. The resulting reference model is shown in Figure [psfmod]. The PSF anisotropy is fairly small, but can reach  ∼ 4% towards the edges of the field. Most of the variation in the ACS PSF arises from focus changes, which are the result of changes in the telescope temperature as it orbits the Earth. We therefore expect that a scaled version of our model can capture much of the spatial variation: as the detector moves from one side of the focus to the other, the direction of PSF anisotropy changes by 90 degrees, which corresponds to a change of sign in the polarization. To account for additional low order changes, we also include a first order polynomial: $$e\_i^{\rm PSF}=\alpha e\_i^{\rm NGC104} + a\_0 +a\_1 x +a\_2 y.$$ This simple model, with only 4 parameters, is used to characterize the PSF anisotropy for each individual galaxy cluster exposure. The number of stars in the galaxy cluster images ranges from 7 to 84, with a median of 20. To examine how well our approach works, we averaged the shapes of all the stars in our 25 galaxy cluster exposures. The ensemble averaged PSF polarization as a function of position is shown in Figure [psfres]. The open points show the average PSF shapes before correction. More importantly, the filled points show the residuals in polarization after the best fit model for each galaxy cluster pointing has been subtracted. These results suggest our model is adequate for our weak lensing analysis. Lensing signal -------------- The corrected galaxy shapes are used to measure the weak lensing signal. A convenient way to quantify the lensing signal is by computing the azimuthally averaged tangential shear *γ**T* as a function of radius from the cluster center (see §2.5 for our choice of the center). It can be related to the surface density through $$\langle\gamma\_t\rangle(r)=\frac{\bar\Sigma(<r) - \bar\Sigma(r)} {\Sigma\_{\rm crit}}=\bar\kappa(<r)-\bar\kappa(r),$$ where $\bar\Sigma(<r)$ is the mean surface density within an aperture of radius *r*, and $\bar\Sigma(r)$ is the mean surface density on a circle of radius *r*. The convergence *κ*, or dimensionless surface density, is the ratio of the surface density and the critical surface density $\Sigma\_{\rm crit}$, which is given by $$\Sigma\_{\rm crit}=\frac{c^2}{4\pi G}\frac{D\_s}{D\_l D\_{ls}},$$ where *D**l* is the angular diameter distance to the lens. *D**s* and *D**l**s* are the angular diameter distances from the observer to the source and from the lens to the source, respectively. The parameter *β* = max[0, *D**l**s*/*D**s*] is a measure of how the amplitude of the lensing signal depends on the redshifts of the source galaxies (where a value of 0 is assigned to sources with a redshift lower than that of the cluster). One of the advantages of weak lensing is that the (projected) mass can be determined in a model-independent way. To derive accurate masses requires wide field imaging data, which we lack because of the small field of view of ACS. Instead we fit parametric models to the lensing signal. The singular isothermal sphere (SIS) is a simple model to describe the cluster mass distribution. In this case the convergence and tangential shear are equal: $$\kappa(r)=\gamma\_T(r)=\frac{r\_E}{2r},$$ where *r**E* is the Einstein radius. In practice we do not observe the tangential shear directly, but we observe the reduced shear *g**T* instead: $$g\_T=\frac{\gamma\_T}{1-\kappa}.$$ We correct our model predictions for this effect. Under the assumption of isotropic orbits and spherical symmetry, the Einstein radius (in radian) can be expressed in terms of the line-of-sight velocity dispersion: $$r\_E=4\pi \left(\frac{\sigma}{c}\right)^2 \beta.$$ We use this model when listing lensing inferred velocity dispersions in Table [tabresult]. A more physically motivated model is the one proposed by who provide a fitting function to the density profiles observed in numerical simulations of cold dark matter. We follow the conventions outlined in, but use an updated relation between the virial mass $M\_{\rm vir}$ and concentration *c*. studied numerical simulations using the best fit parameters of the WMAP5 cosmology. The best fit $c(M\_{\rm vir})$ is given by: $$c=7.85\left({\frac{M\_{\rm vir}}{2\times 10^{12}}}\right)^{-0.081}{(1+z)^{-0.71}}.\label{mcrel}$$ We use this relation when fitting the NFW model to the lensing signal. Rather than the virial mass, in Table [tabresult] we list *M*2500 which is the mass within a radius *r*2500 where the mean mass density of the halo is 2500 times the critical density at the redshift of the cluster. We note that a different choice for the mass-concentration relation will change the inferred masses much. We found that the inferred values for *M*2500 change by 5 − 10% if we change the pre-factor in Equation [mcrel] by  ± 1, which is much smaller than our statistical uncertainties. When comparing to ensemble averaged results from other studies, however, differences in the adopted mass-concentration relation may become a dominant source of uncertainty. Finally, we note that the lensing signal is sensitive to all matter along the line-of-sight. As shown in, the large-structure in the universe introduces cosmic noise, which increases the formal error in the mass estimate, compared to just the statistical uncertainty. The listed uncertainties in the weak lensing masses include this contribution. Cluster center -------------- We have to choose a position around which we measure the weak lensing signal as a function of radius. An offset between the adopted position and the ‘true’ center of the dark matter halo will lead to an underestimate of the cluster mass. Possible substructure in the cluster core complicates a simple definition of the cluster center, but it can also lead to biased mass estimates. We expect our results to be less affected by substructure because the ACS data used here extend to larger radii than the WFPC2 observations discussed in. The resulting bias depends on the detailed procedure that is used to to interpret the lensing signal. For instance, used wide field imaging data to measure the lensing signal out to large radii and to derive (almost) model-independent masses. This procedure minimizes the effect of centroiding errors because the large-scale signal is affected much less, compared to the signal on small scales. Unfortunately, the ACS field of view is relatively small, which prevents us from following the same approach and we fit a parameterized mass model to the lensing signal instead. To reduce the sensitivity of our results to centroiding errors and central substructure, we fit the NFW and SIS models to the tangential distortion at radii 200 < *r* < 750*h*70− 1kpc. The advantage of restricting the analysis to larger scales is also evident from Figure [biasoffset], where we show the bias in mass as a function of centroid offset $r\_{\rm off}$. The solid curve shows the results when fitting an NFW model to the signal within 200 − 750*h*70− 1 kpc, which is the range we use for the ACS data studied here. provide cluster positions based on the X-ray emission. For the clusters in our sample, the listed centroiding uncertainty depends on the X-ray luminosity (with smaller values for the more luminous systems). An alternative approach is to use the location of the brightest cluster galaxy (BCG). The resulting positions are listed in Table [tabsample]. In many cases a clear candidate can be identified (indicated by a value of $Q\_{\rm BCG}$ of 1 or 2 in Table [tabsample]), but in a number of cases the choice is ambiguous (indicated by $Q\_{\rm BCG}$ -1 or 0). High quality X-ray data can be used to refine the centers, but such data are lacking for our sample. Even if such data were available, there can be a offset between the BCG and the peak in the X-ray emission. The distribution of offsets observed by for the massive clusters in the Canadian Cluster Comparison Project, are indicated by the histogram in Figure [biasoffset]. It is clear that the offsets are typically less than 50 kpc, leading to biases less than 5%. The larger offsets are found for merging massive systems where the identification of the BCG is difficult. Such major mergers do not appear to be present in the sample of clusters studied here. We compared the offset between the X-ray position and the adopted location of the BCG to the uncertainty listed in and find fair agreement: most of the BCGs are located within the radius corresponding to the 90% confidence X-ray position error circle. Only for 4 low luminosity systems do we find an offset that is much larger. The positions of the BCGs can be determined more accurately, and we therefore adopt these as the cluster centers when listing our mass estimates. Source galaxies --------------- The lensing signal is largest for background galaxies at redshifts much larger than the cluster. We lack redshift information for our sources and instead we select a sample of faint (distant) galaxies with 24 < *F*814*W* < 26.5, which also reduces contamination by cluster members which dominate the counts at bright magnitudes. Nonetheless, contamination by cluster members cannot be ignored because we lack color information. We note that adding a single color would not improve the situation significantly because the faint members span a wide range in color, unlike the bright members, almost all of which occupy a narrow red-sequence. To account for contamination by cluster members we measure the number density of faint galaxies as a function of distance from the adopted cluster center and boost the signal by the inferred fraction of cluster members. The corrected tangential distortion for RXJ0847.1+3449 as a function of distance from the adopted cluster centre is shown in Figure [gtprof]a. The red line shows the best fit singular isothermal sphere model (fitted to radii  > 25ʺ). If the observed signal is caused by gravitational lensing, the signal should vanish if the source galaxies are rotated by 45 degrees. Figure [gtprof]b shows the results of this test, which are indeed consistent with no signal. The lower panel in Figure [gtprof] shows the excess counts $\Delta n\_{\rm bg}$ as a function of radius for the cluster RXJ0847.1+3449 (*z* = 0.56). To determine the background count levels we used the 100 COSMOS pointings that were used to measure the size-dependence of CTE. We measure the excess counts for each cluster individually, because the sample spans a fair range in redshift and mass. This cluster shows a signicant excess of faint galaxies over the background number density of  ∼ 61 arcmin− 2. We assume that these faint cluster members are oriented randomly and thus simply dillute the lensing signal. To quantify the overdensity of faint members we adopt $\Delta n\_{\rm bg}\propto r^{-1}$ and determine the best fit normalization for each cluster separately (indicated by the drawn line in Figure [gtprof]c. This simple model is used to correct the observed tangential distortion for contamination by cluster members. We find that this correction leads to an average increase in the best fit Einstein radius of  ∼ 20%. The uncertainty in the amplitude of the contamination is included in our quoted measurement errors (which are increased by less than 5%). The conversion of the lensing signal into an estimate for the mass of the cluster requires knowledge of the redshift distribution of the source galaxies. These are too faint to be included in spectroscopic redshift surveys or even from ground based photometric redshift surveys. Instead we use the photometric redshift distributions determined from the Hubble Deep Fields. For the range in apparant magnitudes used in the lensing analysis, the resulting average value for ⟨*β*⟩ and the variance ⟨*β*2⟩ are listed in Table [tabsample]. As discussed in the latter quantity is needed to account for the fact that we measure the reduced shear. We estimate the uncertainty in *β* by considering the variation between the two HDFs. As the average source redshift is much higher than the cluster redshifts, the resulting relative uncertainties are small:  ∼ 2% at *z* = 0.3, increasing to  ∼ 5% at *z* = 0.6, much smaller than our statistical errors. llllllll name & *r**E* & *σ* & *r*2500 & *M*2500 & *N*2500 & [”] & [km/s] & [*h*70− 1Mpc] & [*h*70− 11013M⊙] & RXJ0056.9 − 2740 & 5.4 ± 1.5 & 678− 101+ 88 & 0.270 & 5.2− 3.0+ 4.2 & 13 ± 4 RXJ0110.3 + 1938 & 5.8 ± 1.3 & 577− 69+ 62 & 0.293 & 5.0− 2.6+ 3.4 & 4 ± 2 RXJ0154.2 − 5937 & 3.6 ± 1.4 & 474− 101+ 83 & 0.218 & 2.2− 1.4+ 2.6 & 1 ± 1 RXJ0522.2 − 3625 & 6.8 ± 1.3 & 710− 73+ 66 & 0.313 & 7.2− 3.1+ 4.2 & 7 ± 3 RXJ0826.1 + 2625 & 2.4 ± 1.8 & 384− 192+ 124 & 0.157 & 0.8− 2.1+ 2.1 & 2 ± 1 RXJ0841.1 + 6422 & 7.5 ± 1.3 & 668− 59+ 55 & 0.354 & 9.0− 3.5+ 3.9 & 15 ± 4 RXJ0847.1 + 3449 & 10.3 ± 1.7 & 937− 80+ 74 & 0.452 & 24.2− 7.6+ 8.9 & 21 ± 5 RXJ0910.6 + 4248 & 3.5 ± 1.7 & 551− 157+ 121 & 0.246 & 4.0− 2.9+ 4.7 & 3 ± 2 RXJ0921.2 + 4528 & 2.1 ± 1.5 & 344− 164+ 108 & 0.180 & 1.2− 1.2+ 1.8 & 5 ± 2 RXJ0926.6 + 1242 & 8.9 ± 1.5 & 822− 70+ 65 & 0.407 & 16.2− 5.1+ 6.0 & 11 ± 4 RXJ0957.8 + 6534 & 4.5 ± 1.4 & 605− 105+ 89 & 0.257 & 4.3− 2.6+ 3.2 & 3 ± 2 RXJ1015.1 + 4931 & 6.0 ± 1.4 & 618− 79+ 70 & 0.296 & 5.6− 2.9+ 3.4 & 3 ± 2 RXJ1117.2 + 1744 & 1.9 ± 1.6 & 331− 185+ 114 & 0.252 & 3.1− 2.1+ 3.2 & 6 ± 3 RXJ1117.4 + 0743 & 5.5 ± 1.4 & 640− 86+ 76 & 0.280 & 5.2− 2.8+ 3.4 & 13 ± 4 RXJ1123.1 + 1409 & 5.8 ± 1.4 & 588− 78+ 69 & 0.271 & 4.0− 2.6+ 3.8 & 9 ± 3 RXJ1354.2 − 0221 & 9.6 ± 1.4 & 895− 69+ 64 & 0.428 & 20.2− 5.6+ 6.4 & 23 ± 5 RXJ1524.6 + 0957 & 4.3 ± 1.3 & 585− 98+ 84 & 0.336 & 9.5− 3.8+ 4.4 & 14 ± 4 RXJ1540.8 + 1445 & 4.0 ± 1.4 & 530− 99+ 83 & 0.279 & 4.9− 2.5+ 3.7 & 13 ± 4 RXJ1642.6 + 3935 & 2.2 ± 1.5 & 371− 156+ 107 & 0.239 & 2.8− 1.8+ 2.8 & 5 ± 2 RXJ2059.9 − 4245 & 4.4 ± 1.3 & 508− 82+ 71 & 0.280 & 4.4− 2.4+ 3.3 & 6 ± 3 RXJ2108.8 − 0516 & 3.4 ± 1.4 & 442− 100+ 81 & 0.210 & 1.8− 1.4+ 2.2 & 5 ± 2 RXJ2139.9 − 4305 & 5.2 ± 1.4 & 572− 84+ 73 & 0.292 & 5.3− 2.6+ 3.7 & 9 ± 3 RXJ2146.0 + 0423 & 8.5 ± 1.4 & 830− 73+ 67 & 0.436 & 21.0− 5.7+ 6.7 & 14 ± 4 RXJ2202.7 − 1902 & 1.5 ± 1.4 & 319− 252+ 127 & 0.152 & 0.8− 0.8+ 2.0 & 2 ± 1 RXJ2328.8 + 1453 & 4.9 ± 1.4 & 612− 93+ 81 & 0.254 & 4.0− 2.6+ 3.7 & 6 ± 3 Results ======= As discussed above, we consider two often used parametric models for the cluster mass distribution. We fit these to the observed lensing signal at 200 − 750*h*70− 1kpc from the cluster center. The resulting Einstein radii and velocity dispersions are listed in Table [tabresult]. We also list the best-fit value for *M*2500 inferred from the NFW model fit to the signal, where we use the mass-concentration relation given by Eqn. [mcrel]. For reference we also list the corresponding value for *r*2500 from the NFW fit. We use this radius to compute *N*2500, the excess number of galaxies with a rest-frame *B*-band luminosity  − 22 < *M**B* <  − 18.5 within *r*2500 (we assume passive evolution when computing the rest-frame *B*-band luminosity). Recent studies of the cluster richness consider only galaxies on the red-sequence, because of the higher contrast, which improves the signal-to-noise ratio of the measurement. Unfortunately we lack color information, and we compute the total excess of galaxies. The background count levels were determined using the COSMOS pointings that were used to measure the size-dependence of CTE. Figure [mlx]a shows the resulting lensing mass as a function of the restframe X-ray luminosity in the 0.1 − 2.4 keV band. The luminosities and masses have been scaled to redshift zero assuming self-similar evolution with respect to the critical density, where $$E(z)=\frac{H(z)}{H\_0}=\sqrt{\Omega\_m(1+z)^3+\Omega\_\Lambda}$$ for flat cosmologies. The solid points indicate the clusters from the sample studied here. The clusters from the 160SD survey for which X-ray temperatures have been determined (see §3.1) are indicated by blue points. To extend the range in X-ray luminosity we also show measurements for the massive clusters that were studied in. We note, that used bolometric X-ray luminosities from, whereas here we use the restframe luminosities in the 0.1 − 2.4 keV band (which are a factor  ∼ 4 smaller). We use the mass estimates from which used new photometric redshift distributions, which were based on much larger data sets. The agreement in lensing masses is good in the regions where the two samples overlap. However, the scatter in the mass-luminosity relation is substantial (both for the clusters studied here as well as the more massive clusters). We examined whether some of the scatter could be due to the uncertainty in the position of the cluster center, but find no difference when comparing results for clusters with different levels of confidence in the identification of the BCG (see $Q\_{\rm BCG}$ in Table [tabsample]. It is worth noting, however, that we do not find any massive clusters ( > 1014*M*⊙) that are X-ray faint (i.e., *L**X* < 1044 erg/s), implying that the dispersion in the *M* − *L**X* relation is relatively well-behaved. [mlx] studied a sample of 63 X-ray bright clusters and derived masses under the assumption of hydrostatic equilibrium. This sample of clusters spans a similar range in *L**X* as our combined sample. We converted their measurements for *M*500 to *M*2500 using the mass-concentration relation given by Eqn. [mcrel] and show the results in Figure [mlx] (small grey points). The agreement with our findings is very good. A more quantitative comparison is presented in §3.3. have shown that there is a good relation between the cluster richness and the mass (and various proxies) for massive clusters. Our study extends to lower masses and as is shown in Figures [mlx]b and c *N*2500 correlates well with both the X-ray luminosity and lensing mass[2](#fn2). Similar results have been obtained using SDSS cluster samples. We assumed that *N*2500 scales as the mass *M*2500, which is a reasonable assumption if the galaxies trace the density profile. This choice, however, does not affect our conclusions. The results agree well in the regions where the two samples overlap. The correlation between *N*2500 and *M*2500 is tighter than that of *N*2500 and *L**X*. The former is less sensitive to the projections along the line-of-sight (either substructures or an overall elongation of the cluster), because both the galaxy counts and the lensing mass are derived from projected measurements. The X-ray results provide a different probe of the distribution of baryons, which is expected to lead to additional scatter. Furthermore, some of the scatter may be caused by unknown contributions by AGNs. The importance of AGN can be evaluated using a combination of deeper, high spatial resolution $(\lesssim 5'')$ X-ray and radio imaging. Such X-ray observations would also provide estimates for the temperature of the X-ray gas (which is a better measure of the cluster mass). Unfortunately, such data exist for only five of the low-mass clusters. For these, X-ray temperatures have been derived, which are listed Table [tabtx]. For the massive clusters we use the values from that were used in. We note that all clusters follow a tight *L**x* − *T**x* relation. Comparison with X-ray temperature --------------------------------- Figure [mtx] shows the resulting plot of *M*2500 as a function of X-ray temperature. RXJ1117.4 + 0743 and RXJ1524.6 + 0957 lie on the tight relation defined by the bulk of the clusters. The measurements from also follow this relation (light grey points). However, RXJ0847.1 + 3449 and RXJ1354.2 − 0221 appear to be more massive than might be expected based on *T**X*. They appear to lie on a parallel relation, along with some of the clusters from. The latter clusters are CL0024+16 and A370, which are well known strong lensing clusters (indicated in red in Figures [mlx] and [mtx]). These clusters were observed because of their strong lensing properties and included in the search for archival CFHT data carried out by. Interestingly, all four clusters are outliers on both the *M*2500-*L**x* and *N*2500 − *L**x* relations presented in Figure [mlx], but follow the mass-richness relation. This is consistent with the presence of (sub)structures along the line-of-sight boosting both *M*2500 and *N*2500: the projection of two mass concentrations (along the line-of-sight) would increase the richness and the weak lensing mass approximately linearly. The inferred X-ray temperature on the other hand will be close to that of the more massive system, whereas the X-ray luminosity will be much lower than expected, because it is proportional to the square of the electron density. We note that both RXJ0847.1 + 3449 and RXJ1354.2 − 0221 show evidence of strong lensing. lll name & *k**T**X* [keV] & ref. RXJ0110.3 + 1938 & 1.46− 0.15+ 0.19 & 1 RXJ0847.1 + 3449 & 3.62− 0.51+ 0.58 & 2 RXJ1117.4 + 0743 & 3.3− 0.36+ 0.42 & 3 RXJ1354.2 − 0221 & 3.66− 0.5+ 0.6 & 2 RXJ1524.6 + 0957 & 5.1 ± 0.36 & 4 Interestingly, the X-ray image of RXJ0847.1 + 3449 in shows evidence for a nearby cluster candidate. The case is less clear for RXJ1354.2 − 0221, but the X-ray image shows a complex morphology. Unfortunately we lack the dynamical data to confirm whether RXJ0847.1 + 3449 and RXJ1354.2 − 0221 are projected systems. The cluster RXJ1117.4 + 0743 was studied in detail by, who find that this cluster is also a projection of two structures. For the main component infer a galaxy velocity dispersion of 592 ± 82 km/s, whereas the second structure is less massive with *σ**v* = 391 ± 85 km/s. Based on our lensing analysis we obtained a velocity dispersion of 639− 86+ 76 km/s, in good agreement with the dynamical results for the main cluster. also performed a weak lensing analysis based on ground based imaging data and obtained a velocity dispersion of *σ* = 778 ± 89 km/s (where we took the average of their results for the *g*ʹ and *r*ʹ band), implying a mass 50% larger than our estimate. We are not able identify an obvious cause for this difference, but note that PSF-related systematics have a larger impact on ground based results. [mtx] The *M*2500 − *L**X* scaling relation ------------------------------------- In this section we examine the correlation between *M*2500 and the X-ray luminosity, in particular the normalization and the power law slope. We fit a power law model to the combined sample to maximize the leverage in X-ray luminosity $$E(z)M\_{2500}=M\_x \left(\frac{L\_x/E(z)}{2 \times 10^{44}{\rm erg/s}}\right)^\alpha.$$ If we naively fit this model to the measurements we find that the value for *χ*2 of the best fit is too high (*χ*2 = 106 with 43 degrees of freedom). This indicates that there is intrinsic scatter in the relation, which is also apparent from Figure [mlx]. We need to account for the intrinsic scatter in the fitting procedure, because ignoring it will generally bias the best fit parameters. We fit the model to our measurements, which have errors that follow a normal distribution. The intrinsic scatter, however, can be described by a log-normal distribution. We will assume that the intrinsic scatter can be approximated with a normal distribution with a dispersion *σ**Q* ≈ ln(10)*Q*log*Q* (we use the log with base 10). To fit the *M*2500 − *L**X* relation we follow a maximum likelihood approach. For a model *f* with parameters ${\bf a}$, the predicted values are $y\_i=f(x\_i;{\bf a})$. The uncertainties in *x**i* and *y**i* are given by *σ**x*, *i* and *σ**y*, *i*. If we assume a Gaussian intrinsic scatter *σ**Q*, *i* in the *y**i* coordinate, the likelihood ${\cal L}$ is given by $${\cal L}=\prod\_{i=1}^{n}\frac{1}{\sqrt{2\pi}w\_i}\exp\left[-\frac{[y\_i-f(x\_i;{\bf a})]^2}{2w\_i^2}\right],$$ where *w**i* accounts for the scatter: $$w\_i^2=\left[{\frac{df}{dx}(x\_i)}\right]^2\sigma\_{x,i}^2+\sigma\_{y,i}^2+\sigma\_{Q,i}^2.$$ If we consider the logarithm of the likelihood it becomes clear why including the intrinsic scatter differs from standard least squares minimization: $$-2\ln {\cal L}=2\sum\_{i=1}^n \ln w\_i + \sum\_{i=1}^n \left(\frac{y\_i-f(x\_i;a\_j)}{w\_i}\right)^2+C,$$ where the second term corresponds to the usual *χ*2. For a linear relation with no intrinsic scatter, the first term is a constant for a given data set and the likelihood is maximimized by minimizing *χ*2. However, if intrinsic scatter is included as a free parameter, the first term acts as a penalty function, and cannot be ignored. The presence of intrinsic scatter also exacerbates the Malmquist bias for a flux limited sample, such as the 160SD survey[3](#fn3). As a result the average flux of the observed sample of clusters is biased high compared to the mean of the parent population (in particular near the flux limit of the survey). To account for Malmquist bias we follow the procedure outlined in Appendix A.2 of and correct the X-ray luminosities before fitting the *M*2500 − *L**X* relation. We find that the correction for Malmquist bias is relatively modest, increasing the normalisation *M**X* by  ∼ 5% and reducing the slope *α* by  ∼ 5%[4](#fn4). We find an intrinsic scatter of *σ*log*L**X*∣*M* = 0.23− 0.04+ 0.10 (or a relative error of  ∼ 70%), in good agreement with other studies. list a somewhat larger scatter of *σ*log*L**X*∣*M* = 0.29 for the HIFLUGCS sample, whereas and find *σ*log*L**X*∣*M* = 0.17 ± 0.02 and 0.18 ± 0.02, respectively. Our results also agree well with who find *σ*log*M*∣*L**X* = 0.19 ± 0.03, which is in good agreement with our value of 0.17− 0.03+ 0.04. [slope] The likelihood contours for the power law slope *α* of the *M*2500 − *L**X* relation and the mass *M*2500 of a cluster with a luminosity of *L**X* = 2 × 1044*h*70− 2erg/s are shown in Figure [slope]. For the combined sample of clusters we find a best fit slope of *α* = 0.68 ± 0.07 and a normalization $M\_X=(1.2\pm0.12)\times 10^{14} h\_{70}^{-1}\msun$. The inferred slope is consistent with the value of $\frac{3}{4}$ expected for self-similar evolution. It is also interesting to compare the constraints from both sub-samples separately. The best fit slope for the clusters studied in is *α* = 0.55− 0.09+ 0.10 with a normalization of $M\_X=(1.5\pm0.2)\times 10^{14}h\_{70}^{-1}\msun$ (indicated by the pink contours)[5](#fn5). The blue contours in Figure [slope] indicate the constraints for the HST sample studied here. The parameters are not well constrained, with best fit values *α* = 0.63 ± 0.24 and a normalization $M\_X=(1.0\pm0.24\times 10^{14}h\_{70}^{-1}\msun$. The difference in the constraints from the (extended) sample and the 160SD systems may hint at a deviation from a single power law relation. As discussed in §2.5, however, the uncertainty in the position of the cluster center leads to an underestimate of the cluster mass, as does the presence of substructure. The results are much less sensitive to these problems. To quantify this, we combine the results with the 160SD clusters with $Q\_{\rm BCG}=2$ (12 systems) and the ones with $Q\_{\rm BCG}<2$ (13 systems). For the former we find $M\_X=(1.30\pm0.15)\times 10^{14}h\_{70}^{-1}\msun$ and *α* = 0.63 ± 0.08, whereas requiring $Q\_{\rm BCG}<2$ yields $M\_X=(1.19\pm0.14)\times 10^{14}h\_{70}^{-1}\msun$ and *α* = 0.68 ± 0.08. This comparison suggests that our normalisation may be biased low by as much as  ∼ 10%. Comparison with X-ray samples ----------------------------- The relation between the X-ray luminosity and cluster mass has been studied extensively. In this section we compare our measurements to a number of recent results, which are shown in Figure [zoom]. Where needed, the X-ray luminosities are converted to the 0.1 − 2.4 keV band and Eqn. [mcrel] has been used to convert masses to *M*2500 and adjust the slopes (because the relation between *M*2500 and *M*200 (or *M*500) is a power law with a slope less than 1.). studied a sample of 63 clusters, with masses derived under the assumption of hydrostatic equilibrium. Their BCES bisector results for the flux-limited sample yields *α* = 0.60 ± 0.05 and $M\_x=(1.3\pm0.09)\times10^{14} \msun$ are indicated by the blue triangle in Fig. [zoom]., however, have argued that these results suffer from Malmquist bias. Instead they compared the X-ray number counts to the mass function in ΛCDM cosmologies and derived *α* = 0.54 ± 0.02 and a high normalization of $M\_X=(2.1\pm0.1)\times 10^{14}\msun$. This result, however depends strongly on the adopted value for *σ*8, and combination with the WMAP3 data lowers the normalization to $M\_X=(1.56\pm0.08)\times10^{14}\msun$ (open pink triangle in Fig. [zoom]). studied a sample of *z* ∼ 0.05 and *z* ∼ 0.5 clusters that were observed with Chandra. Their results with *α* = 0.55 ± 0.05 and $M\_x=(1.43\pm0.08)\times10^{14}\msun$ are indicated by the open orange square. Another recent study was presented by who found a fairly steep slope of *α* = 0.70 ± 0.04 and $M\_x=(1.42\pm0.28)\times10^{14}\msun$ (purple square). The constraints from these studies are largely driven by X-ray luminous clusters (this is particularly true for the results of and ). If the slope of the *M*2500 − *L**X* varies with *L**X*, then it is more appropriate to compare these studies to the results from the sample studied in, for which the agreement is indeed very good. When combined with the results from the 160SD survey, the resulting scaling relation has a lower normalisation and steeper slope. To examine whether this could be caused by a change in slope, it is interesting to compare to measurements at lower luminosities. At the low X-ray luminosity end, studied X-ray groups using COSMOS. Because of the small area surveyed, the groups are less luminous than the sample of clusters from the 160SD survey studied here. Nonetheless it is interesting to extrapolate their results for comparison. use the mass-concentration relation from. We refit the 160SD sample using this relation and find that the average *M*2500 is unchanged, but that *M*200 is increased by 24%. We account for this when converting the measurements from, and find that they imply $M\_x=(1.4\pm0.3)\times10^{14}\msun$ and a slope *α* = 0.61 ± 0.13 (indicated by the open green circle in Fig. [zoom]), in fair agreement with our results. We also note that this highlights the difficulty in comparing results, especially when the analyses differ in detail. This is particularly relevant for the comparison with SDSS results in the following section. [zoom] Comparison with SDSS -------------------- measured the scaling relation between *L**X* and *M*200 for a large sample of optically selected clusters found in the Sloan Digital Sky Survey. This allowed them to extend the study of the *M* − *L**X* relation to much lower luminosities, compared to most of the measurements presented in the previous section. The clusters were binned in richness and for each bin, the mean X-ray luminosity and weak lensing mass were determined. In the lensing analysis, described in, both the mass and concentration are fit as free parameters. The resulting values for *c* agree well with the relation presented in. Upon conversion to *M*2500 we find that the measurements from correspond to $M\_X=(1.5\pm0.1)\times 10^{14}h\_{70}^{-1}\msun$ and *α* = 0.55 ± 0.04. This result is indicated in Figure [zoom] by the red point. have pointed out that the weak lensing masses determined by may be too low (by as much as  ∼ 24%), because of a bias in the source redshift distribution. The red arrow indicates the shift in normalization if the bias pointed out by is correct. In that case the results from disagree with all other measurements. find a shallower slope and higher mass, which appears to be inconsistent with our results. As discussed above, uncertainties in the adopted cluster centers may lead us to underestimate our normalisation by as much as  ∼ 10%, which is not sufficient to remove the difference. model the centroiding uncertainty using results from mock catalogs. However, if the model overestimates the offsets the resulting masses will be biased high and argue that this may indeed be the case. Another possible explanation for the large normalisation of is that their results are expected to be biased towards a lower *L**X* at a given lensing mass. The reason for this was already alluded to in §3.1, namely that some clusters appear X-ray underluminous and have low X-ray temperatures, given their high masses. These clusters, however do follow the tight mass-richness relation. bin their clusters in richness and analyse stacked X-ray and lensing data. The fact that *N*2500 and *M*2500 are strongly correlated, then leads to a larger mean mass at a given X-ray luminosity. The presence of substructure in the cluster (as well as filaments), will boost both the lensing and richness estimates relative to the X-ray luminosity. Such structures are numerous at the low mass end: it is rare to find an alignment of rich clusters, because they are rare, whereas an alignment of groups should be more frequent. Hence, the bias is likely to be mass dependent, increasing the masses of low X-ray luminosity systems by a larger fraction, compared to X-ray luminous clusters. The consequence is a flatter slope of the *M* − *L**X* relation. Comparison with numerical simulations can help clarify the amplitudes of the various biases listed above. However, the argument presented above highlights the danger of stacking samples of clusters when parameters are correlated. Identifying and quantifying such correlation requires well-characterized (both in selection and data coverage) and large samples of clusters. The 160SD clusters provide an excellent starting point, but increasing the number of cluster in this mass range with high quality X-ray data is of great importance to better constrain the scaling relations and to help interpret the results from stacked samples of clusters. Conclusions =========== To extend the observed scaling relation between cluster mass and X-ray luminosity towards lower *L**X*, we have determined the masses of a sample of 25 clusters of galaxies drawn from the 160 square degree ROSAT survey. The clusters have redshifts 0.3 < *z* < 0.6, and the X-ray luminosities range from 2 × 1043 to 2 × 1044 erg/s. The masses were determined based on a weak lensing analysis of images in the *F*814*W* filter obtained using the ACS on the HST. To measure the mass we assume that the brightest cluster galaxy indicates the center of the cluster. In most cases this leads to an unambiguous identification, but in a number of cases the choice of center is less clear. Nonetheless, we have verified that our choice of center does not affect the results for the inferred scaling relations. To correctly interpret the weak lensing data, we derived an accurate emperical correction for the effects of CTE on the shapes of faint galaxies. We detect a significant lensing signal around most of the clusters. To increase the range in cluster properties, we extend the sample with massive clusters studied by. The lensing masses agree well where the two samples overlap in *L**X*. The inferred lensing masses correlate well with the overdensity of galaxies (i.e., cluster richness). The relation between mass and X-ray luminosity has significant intrinsic scatter. Under the assumption it follows a log-normal distribution we find a scatter of *σ*log*L**X*∣*M* = 0.23− 0.04+ 0.10, which is in good agreement with other studies. We fit a power law relation between *M*2500 and *L**X* and find a best fit slope of *α* = 0.68 ± 0.07 and a normalisation (for *L**X* = 2 × 1044*h*70− 2 erg/s) of $M\_X=(1.2\pm0.12)\times h\_{70}^{-1} 10^{14}\msun$. Comparison with other studies is complicated by the fact that the conversion of the masses depends on the assumed mass-concentration relation. We find that the results for the sample of massive clusters from agrees well with a number of recent studies. The combination with clusters from the 160SD survey lowers the normalisation, which could be caused by a steepening of the *M* − *L**X* relation. However, a study of low mass systems by finds a higher normalisation. Uncertainties in the position of the cluster center, as well as deviations from the adopted NFW profile (e.g., substructures) may bias our masses low, but we estimate this to be less than 10%, which cannot explain the difference with. On the other hand, structures along the line-of-sight, which also may simply reflect the fact that clusters themselves are highly elongated, will lead to a higher normalisation for the study by. They binned clusters discovered in the SDSS by richness and measured their ensemble averaged X-ray luminosity and lensing mass. In this case the tight correlation between lensing mass and richness results in a low average *L**X* at a given mass. Furthermore, the relative importance of substructures and projections is mass dependent, preferentially affecting low mass systems. Consequently, both the slope and the normalization of the *M* − *L**X* relation are affected. To investigate the importance of the various biases, larger samples of low mass clusters need to be observed. Better X-ray observations provide a good starting point to extend the mass range over which scaling relations are determined and to improve the interpretation of ensemble averaged samples of clusters. We thank the anonymous referee for useful suggestions that have improved the paper. HH acknowledges support by NSERC, the Canadian Institute for Advanced Research (CIfAR) and the Canadian Foundation for Innovation (CFI) as well as a VIDI grant from the Nederlandse Organisatie voor Wetenschappelijk Onderzoek (NWO). HH also thanks Andisheh Mahdavi for discussion on fitting relations with intrinsic scatter. MD and GMV acknowledge the support of two STScI/HST grants (HST-GO-10490.01 HST-GO-10152.01) and and a NASA LTSA grant (NNG-05GD82G). This research has made use of the X-Rays Clusters Database (BAX) which is operated by the Laboratoire d’Astrophysique de Tarbes-Toulouse (LATT), under contract with the Centre National d’Etudes Spatiales (CNES). Measurement of CTE effects ========================== In this appendix we present the results of our study of the effect of CTE on shape measurements. Similar to other studies we derive an empirical correction, although our actual implementation differs from these previous works in a number of ways. The CTE problem occurs during readout, and therefore the correction should be applied before the correction for PSF anisotropy. Ideally one would like to correct the images before shape measurements are done, but for our purposes such a sophisticated approach is less important. Instead we will quantify the change in *e*1 (this is the polarization component that quantifies the elongation along the *x* −  and *y* −  direction). We note that our measurements are done on images after they have been corrected for camera distortion. measured the effect of CTE by determining the mean galaxy ellipticity as a function of distance from the readout electronics. We cannot do so for a number of reasons. First, we have data from a much smaller number of images. More importantly, the presence of the lensing signal induced by the clusters also gives rise to a variation in *e*1: it is larger in the centre of the field and decreases towards the edge. Instead we examine observations of dense star fields. After correction for PSF anisotropy correction, the shape of each star provides an accurate estimate of the CTE effect, because it is intrinsically round, unlike a galaxy. As a result, we can measure the amplitude of CTE much more accurately. We use observations of the star cluster NGC104, which has been observed at regular epochs for a program to study the stability of ACS photometry. We select only the data with exposure times of 30s, which yields a uniform data set of 17 exposures, of which we omit one. These single exposures are drizzled and we measure the shape parameters from these images. The PSF anisotropy model for each exposure is determined using stars with *F*814*W* < 18, as the effects of CTE are expected to be small for such bright objects. The fainter stars are corrected for PSF anisotropy and we measure *e*1 as a function of *y* coordinate. We assume that the pattern is the same for each ACS chip and combine the data. We note that this assumption is actually supported by our measurements. For each exposure we fit a linear trend with distance from the read-out electronics $$e\_1^{\rm CTE}=e\_1^{\rm max} (y/2048),$$ where $e\_1^{\rm max}$ is the the maximum induced polarization. Figure [ctestar](a) shows the resulting value for $e\_1^{\rm max}$ as a function of time for stars with high (black points) and low (red points) signal-to-noise ratios. These measurements are consistent with a linear increase with time of the CTE induced distortion. As expected the CTE effects are also more pronounced for fainter objects. To investigate this trend further, we compute $e\_1^{\rm max}$ as a function of signal-to-noise ratio, adopting the average Modified Julian Date of our cluster observations. The results are shown in Figure [ctestar](b). We fit the following model to our measurements $$e\_1^{\rm CTE}=e\_1^{\rm max} \frac{1}{\sqrt{S/N}}\left(\frac{y}{2048}\right) (MJD-52,333).$$ The red line in Figure [ctestar](b) corresponds to the best model, for which we find $e\_1^{\rm max}=(-9.07\pm0.16)\times 10^{-5}$. Note that we have assumed that the CTE effect is proportional to $\sqrt{S/N}$, which is different from who argue for a scaling  ∝ (*S*/*N*)− 1. The latter scaling, however, is inconsistent with our findings. We have confirmed our results with other stellar fields. We note that use a different shape measurement method, which may explain some of the differences. We now have an accurate model for the effects of CTE for stellar images, but it is not clear whether it is adequate for galaxies, which are more extended. Our cluster data cannot be used for this test, because the lensing signal induced by the clusters leads to a comparable change in *e*1 with *y* position. Instead we retrieved 100 pointings taken as part of the COSMOS survey (PID:10092) and analysed these data in a similar fashion as our own. Figure [ctegal] shows the measurement of $e\_1^{\rm max}$ as a function of galaxy size *r**g* for a signal-to-noise ratio of 20 and a mean modified Julian date of 53195. The points have been corrected for the small increase in mean S/N ratio as *r**g* increases. The left-most point is the result derived from NGC104. We detect a clear size dependence of the CTE signal. We assume the dependence is a power-law, and we find a best fit slope of  − 1.85 ± 0.3; the best fit model is indicated by the solid line in Figure [ctegal]. We therefore revise our PSF-based model to account for the size dependence of the CTE effect: $$e\_1^{\rm CTE}=e\_1^{\rm max} \frac{1}{\sqrt{S/N}}\left(\frac{y}{2048}\right) \left(\frac{r\_g}{0.05''}\right)^{-1.85} (MJD-52,333),$$ where the best fit value for $e\_1^{\rm max}=(-8.3\pm0.14)\times 10^{-5}$. This model is used to correct the observed *e*1 for stars and galaxies in our data. --- 1. http://bax.ast.obs-mip.fr/[↩](#fnref1) 2. To compute *N*2500 for the sample of massive clusters we used the same selection criteria as discussed above.[↩](#fnref2) 3. We assume that the clusters studied in do not suffer from Malmquist bias.[↩](#fnref3) 4. We also computed the cluster mass function in a ΛCDM cosmology (using parameters from and assigned X-ray luminosities using our best fit *M*2500 − *L**X* relation and intrinsic scatter and find similar biases.[↩](#fnref4) 5. We note that the slope is different from the original results, who found *α* = 0.43 ± 0.1. The reason for this large change is twofold. First, did not account for intrinsic scatter in the fit of the scaling relation. Furthermore, the current analysis includes three clusters for which no X-ray data was available in. The clusters in question are A209, A383 and MS1231+15. The latter two have the lowest X-ray luminosities and drive much of the change in slope, whereas including A209 does not change the previous results. Restricting the sample to the clusters that were used by to constrain the slope, we find *α* = 0.41 ± 0.10, in agreement with the orginal result. Such a rather large variation in best-fit slope demonstrates the need for larger samples of clusters with multi-wavelength data.[↩](#fnref5) The mass-*L**x* relation for moderate luminosity X-ray clusters =============================================================== We present measurements of the masses of a sample of 25 moderate X-ray luminosity clusters of galaxies from the 160 square degree ROSAT survey. The masses were obtained from a weak lensing analysis of deep *F*814*W* images obtained using the Advanced Camera for Surveys (ACS). We present an accurate empirical correction for the effect of charge transfer (in)efficiency on the shapes of faint galaxies. A significant lensing signal is detected around most of the clusters. The lensing mass correlates tightly with the cluster richness. We measured the intrinsic scatter in the scaling relation between *M*2500 and *L**X* to be *σ*log*L**X*∣*M* = 0.23− 0.04+ 0.10. The best fit power law slope and normalisation are found to be *α* = 0.68 ± 0.07 and $M\_X=(1.2\pm0.12)\times h\_{70}^{-1} 10^{14}\msun$ (for *L**X* = 2 × 1044*h*70− 2 erg/s). These results agree well with a number of recent studies, but the normalisation is lower compared to the study of. One explanation for this difference may be the fact that (sub)structures projected along the line-of-sight boost both the galaxy counts and the lensing mass. Such superpositions lead to an increased mass at a given *L**X* when clusters are binned by richness. Introduction ============ Clusters of galaxies are known to exhibit correlations between their various observable properties, such as the well-known relation between X-ray luminosity and X-ray temperature. These scaling relations are the result of the various processes that govern the formation and subsequent evolution of galaxy clusters. Hence, the study of cluster scaling laws provides the basis of testing models for the formation of clusters of galaxies and of galaxies themselves. For instance, N-body codes can nowadays reliably and robustly predict the evolution of the mass function of cluster halos. Hydrodynamic simulations and semi-analytic techniques then allow us to predict the X-ray and optical appearance of these halos. The mean relation and the intrinsic dispersion of the resulting mass-observable relations provide a test of the adequacy of the physical processes included in the simulations: a realistic simulation should reproduce the mean relation, the intrinsic scatter, and its evolution (if any). Such comparisons have recently led to the realization that cluster simulations must include radiative cooling and feedback from supernovae and AGNs in order to successfully explain the observed scaling relations (see for a review). While the core regions of clusters remain difficult to model accurately, high-resolution simulations with cooling and feedback now produce clusters whose X-ray properties agree well with those of observed clusters outside the central 100 kpc or so. An empirically determined mass scaling relation is not only useful to directly test our understanding of cluster physics, it also allows one to relate the observables directly to the mass of cluster-sized halos. The latter, for instance, enables one to transform the observed luminosity function into a mass function, which in turn can be compared to different cosmological models. Such studies have already provided interesting constraints on cosmological parameters which complement constraints from other probes. To determine cluster masses, a number of methods are available. Dynamical techniques, such as the assumption of hydrostatic equilibrium in X-ray observations, have been widely used. The inferred masses, however, are likely to be systematically underestimated by up to 20% because of pressure support from turbulence and energy input from active galactic nuclei. Masses derived from weak gravitational lensing do not require assumptions about the dynamical state of the cluster and can in principle be compared directly to the outcome of numerical simulations. It is worth noting, however, that lensing is sensitive to all structure along the line-of-sight. Weak lensing is now a well-established technique and recent work has indeed provided support for non-hydrostatic gas in the outskirts of X-ray luminous clusters of galaxies. To date, most work has focussed on the most massive clusters at intermediate redshifts, because of the relatively large lensing signal. In this paper we focus on extending the mass range towards clusters with lower X-ray luminosities ($L\_x < {\rm few~}10^{44}$ erg/s). To detect the lensing signal of these lower mass clusters, the number density of lensed background galaxies needs to be increased significantly, compared to deep ground based observations. The reason for this requirement is that the error in the weak lensing mass estimate is set by the intrinsic shapes of the galaxies and their number density. To this end, we have conducted a weak-lensing analysis on images obtained during a snapshot survey of 25 moderate luminosity X-ray clusters with the Advanced Camera for Surveys (ACS) on board the Hubble Space Telescope (HST). These clusters were randomly selected from the ROSAT 160 Square Degree (160SD) Cluster Catalog. The clusters in the 160SD sample were serendipitously discovered in pointed, relatively wide-field (30’ diameter) observations of the Position-Sensitive Proportional Counter (PSPC) on board the ROSAT X-ray observatory (1990-1999). Seventy-two X-ray clusters between *z* = 0.3 − 0.7 were discovered (see Fig. [sample]). All have been identified and assigned redshifts. Because this survey included pointed observations that were quite long, some of these clusters are among the faintest clusters of galaxies known at these moderately high redshifts. Therefore, our new weak lensing measurements extend the X-ray luminosity limit of the mass-*L**x* relation by almost an order of magnitude, based on targeted observations. We note that studies of the ensemble-averaged properties of clusters discovered in the SDSS and X-ray groups in COSMOS have pushed the limits to even lower luminosities. The structure of the paper follows. In §2 we describe our data and weak lensing analysis. In particular we discuss how we correct for the effects of CTE in our ACS data and how we correct for PSF anisotropy. The measurements of the cluster masses, and the comparison to the X-ray properties are presented in §3. We compare our results to previous work and examine biases in our mass estimates that arise from uncertainties in the position of the cluster center and sample selection. Throughout this paper we assume a flat ΛCDM cosmology with Ω*m* = 0.3 and *H*0 = 70*h*70 km/s/Mpc. crccccccccc name & number & *z* & *L**X* & RA$\_{\rm X-ray}$ & DEC$\_{\rm X-ray}$ & RA$\_{\rm BCG}$ & DEC$\_{\rm BCG}$ & $Q\_{\rm BCG}$ & ⟨*β*⟩ & ⟨*β*2⟩ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) & (10) & (11) RXJ0056.9 − 2740 & 6 & 0.563 & 1.14 &  00*h*56*m*56.1*s* &  − 27∘40ʹ12ʺ &  00*h*56*m*56.98*s* &  − 27∘40ʹ29.9ʺ & 2 & 0.42 & 0.23 RXJ0110.3 + 1938 & 8 & 0.317 & 0.40 &  01*h*10*m*18.0*s* &  + 19∘38ʹ23ʺ &  01*h*10*m*18.22*s* &  + 19∘38ʹ19.4ʺ & 0 & 0.61 & 0.43 RXJ0154.2 − 5937 & 20 & 0.360 & 0.90 &  01*h*54*m*14.8*s* &  − 59∘37ʹ48ʺ &  01*h*54*m*13.72*s* &  − 59∘37ʹ31.0ʺ & 1 & 0.57 & 0.38 RXJ0522.2 − 3625 & 41 & 0.472 & 2.07 &  05*h*22*m*14.2*s* &  − 36∘25ʹ04ʺ &  05*h*22*m*15.48*s* &  − 36∘24ʹ56.1ʺ & 1 & 0.48 & 0.29 RXJ0826.1 + 2625 & 52 & 0.351 & 0.65 &  08*h*26*m*06.4*s* &  + 26∘25ʹ47ʺ &  08*h*26*m*09.45*s* &  + 26∘25ʹ03.1ʺ & 0 & 0.58 & 0.39 RXJ0841.1 + 6422 & 56 & 0.342 & 1.60 &  08*h*41*m*07.4*s* &  + 64∘22ʹ43ʺ &  08*h*41*m*07.65*s* &  + 64∘22ʹ26.0ʺ & 2 & 0.59 & 0.40 RXJ0847.1 + 3449 & 59 & 0.560 & 1.17 &  08*h*47*m*11.3*s* &  + 34∘49ʹ16ʺ &  08*h*47*m*11.79*s* &  + 34∘48ʹ51.8ʺ & 1 & 0.42 & 0.23 RXJ0910.6 + 4248 & 69 & 0.576 & 1.44 &  09*h*10*m*39.7*s* &  + 42∘48ʹ41ʺ &  09*h*10*m*40.53*s* &  + 42∘49ʹ59.1ʺ & -1&0.41 & 0.23 RXJ0921.2 + 4528 & 70 & 0.315 & 1.40 &  09*h*21*m*13.4*s* &  + 45∘28ʹ50ʺ &  09*h*21*m*13.46*s* &  + 45∘28ʹ56.1ʺ & 1 & 0.61 & 0.43 RXJ0926.6 + 1242 & 71 & 0.489 & 2.04 &  09*h*26*m*36.6*s* &  + 12∘42ʹ56ʺ &  09*h*26*m*36.70*s* &  + 12∘43ʹ03.8ʺ & 2 & 0.47 & 0.28 RXJ0957.8 + 6534 & 80 & 0.530 & 1.39 &  09*h*57*m*53.2*s* &  + 65∘34ʹ30ʺ &  09*h*57*m*51.22*s* &  + 65∘34ʹ25.1ʺ & 2 & 0.44 & 0.25 RXJ1015.1 + 4931 & 88 & 0.383 & 0.78 &  10*h*15*m*08.5*s* &  + 49∘31ʹ32ʺ &  10*h*15*m*08.44*s* &  + 49∘31ʹ50.8ʺ & 2 & 0.55 & 0.36 RXJ1117.2 + 1744 & 96 & 0.305 & 0.54 &  11*h*17*m*12.0*s* &  + 17∘44ʹ24ʺ &  11*h*17*m*11.23*s* &  + 17∘44ʹ00.5ʺ & 2 & 0.62 & 0.44 RXJ1117.4 + 0743 & 97 & 0.477 & 0.76 &  11*h*17*m*26.1*s* &  + 07∘43ʹ35ʺ &  11*h*17*m*26.04*s* &  + 07∘43ʹ38.3ʺ & 1 & 0.48 & 0.29 RXJ1123.1 + 1409 & 101 & 0.340 & 1.01 &  11*h*23*m*10.2*s* &  + 14∘09ʹ44ʺ &  11*h*23*m*10.95*s* &  + 14∘08ʹ36.4ʺ & 1 & 0.59 & 0.40 RXJ1354.2 − 0221 & 151 & 0.546 & 1.53 &  13*h*54*m*16.9*s* &  − 02∘21ʹ47ʺ &  13*h*54*m*17.19*s* &  − 02∘21ʹ59.2ʺ & 2 & 0.43 & 0.24 RXJ1524.6 + 0957 & 170 & 0.516 & 3.94 &  15*h*24*m*40.3*s* &  + 09∘57ʹ39ʺ &  15*h*24*m*41.56*s* &  + 09∘57ʹ34.3ʺ & 1 & 0.45 & 0.26 RXJ1540.8 + 1445 & 172 & 0.441 & 0.81 &  15*h*40*m*53.3*s* &  + 14∘45ʹ34ʺ &  15*h*40*m*53.96*s* &  + 14∘45ʹ56.0ʺ & 1 & 0.51 & 0.31 RXJ1642.6 + 3935 & 186 & 0.355 & 0.62 &  16*h*42*m*38.9*s* &  + 39∘35ʹ53ʺ &  16*h*42*m*38.35*s* &  + 39∘36ʹ10.4ʺ & 1 & 0.58 & 0.39 RXJ2059.9 − 4245 & 199 & 0.323 & 2.44 &  20*h*59*m*55.2*s* &  − 42∘45ʹ33ʺ &  20*h*59*m*54.92*s* &  − 42∘45ʹ32.1ʺ & 2 & 0.61 & 0.42 RXJ2108.8 − 0516 & 200 & 0.319 & 0.81 &  21*h*08*m*51.2*s* &  − 05∘16ʹ49ʺ &  21*h*08*m*51.17*s* &  − 05∘16ʹ58.4ʺ & 2 & 0.61 & 0.42 RXJ2139.9 − 4305 & 203 & 0.376 & 0.59 &  21*h*39*m*58.5*s* &  − 43∘05ʹ14ʺ &  21*h*39*m*58.22*s* &  − 43∘05ʹ13.9ʺ & 2 & 0.56 & 0.37 RXJ2146.0 + 0423 & 204 & 0.531 & 2.61 &  21*h*46*m*04.8*s* &  + 04∘23ʹ19ʺ &  21*h*46*m*05.52*s* &  + 04∘23ʹ14.3ʺ & 2 & 0.44 & 0.25 RXJ2202.7 − 1902 & 205 & 0.438 & 0.70 &  22*h*02*m*44.9*s* &  − 19∘02ʹ10ʺ &  22*h*02*m*45.50*s* &  − 19∘02ʹ21.1ʺ & 2 & 0.51 & 0.31 RXJ2328.8 + 1453 & 219 & 0.497 & 1.02 &  23*h*28*m*49.9*s* &  + 14∘53ʹ12ʺ &  23*h*28*m*52.27*s* &  + 14∘52ʹ42.8ʺ & 1 & 0.46 & 0.27   Data and analysis ================= The data studied here were obtained as part of a snapshot program (PI: Donahue) to study a sample of clusters found in the 160 square degree survey. The clusters from the latter survey were selected based on the serendipitous detection of extended X-ray emission in ROSAT PSPC observations, resulting in a total survey area of 160 deg2. A detailed discussion of the survey can be found in. The sample was reanalysed by, which also lists spectroscopic redshifts for most of the clusters. The X-ray luminosity as a function of cluster redshift is plotted in Figure [sample]. The HST snapshot program targeted clusters with 0.3 < *z* < 0.6, which were observed during HST cycles 13 and 14. This resulted in a final sample of 25 clusters that were imaged in the *F*814*W* filter with integration times of  ∼ 2200s. Note that these were drawn randomly from the sample, and therefore represent a fair sample of an X-ray flux limited survey. The clusters with HST data are indicated in Figure [sample] by the large open points. Table [tabsample] provides a list of the clusters that were observed. It also lists the restframe X-ray luminosities in the 0.1 − 2.4 keV band. The values were taken from the BAX X-Rays Galaxy Clusters Database[1](#fn1) and converted to the cosmology we adopted here. As explained on the BAX website, the luminosities were derived from the flux measurements using a Raymond-Smith type spectrum assuming a metallicity of 0.33 times solar. Each set of observations consists of four exposures, which allow for efficient removal of cosmic rays. We use the multidrizzle task to remove the geometric distortion of ACS images and to combine the multiple exposures. This task also removes cosmic rays and bad pixels, etc. Because of the large geometric distortion and offsets between the individual exposures, the images need to be resampled before co-addition. A number of options have been studied by who suggest to use a Gaussian kernel and an output pixel scale of 0.03“. However, this procedure leads to correlated noise in the final image. Instead we opt for the lanczos3 resampling kernel and keep the original pixel size of 0.05”. These choices preserve the noise properties much better, while the images of the galaxies are adequately sampled. These co-added images are subsequently used in the weak lensing analysis. Shape analysis -------------- The first step in the weak lensing analysis is the detection of stars and galaxies, for which we use SExtractor. The next important step is the unbiased measurement of the shapes of the faint background galaxies that will be used to quantify the lensing signal. To do so, we measure weighted quadrupole moments, defined as $$I\_{ij}=\int d^2{{\mbox{{\it \boldmath$x$}}}} x\_i x\_j W({{\mbox{{\it \boldmath$x$}}}}) f({{\mbox{{\it \boldmath$x$}}}}),$$ where $W({{\mbox{{\it \boldmath$x$}}}})$ is a Gaussian with a dispersion *r**g*, which is matched to the size of the object. These quadrupole moments are combined to form the two-component polarization $$e\_1=\frac{I\_{11}-I\_{22}} {I\_{11}+I\_{22}}, ~{\rm and}~e\_2=\frac{I\_{12}}{I\_{11}+I\_{22}}.$$ However, we cannot simply use the observed polarizations, because they have been modified by a number of instrumental effects. Although the PSF of HST is small compared to ground based data, the shapes of the galaxies are nonetheless slightly circularized, lowering the amplitude of the observed lensing signal. Furthermore, the PSF is not round and the resulting anisotropy in the galaxy shapes needs to be undone. To do so, we use the well established method proposed by, with the modifications provided in and. Correction for CTE degradation ------------------------------ An issue that is relevant for ACS observations is the degradation of the charge transfer efficiency (CTE) with time. Over time, cosmic rays cause an increasing number of defects in the detector. During read-out, these defects can trap charges for a while. The delayed transfer of charge leads to a trail of electrons in the read-out direction, causing objects to appear elongated. The effect is strongest for faint objects, because brighter objects quickly fill the traps. provide a detailed discussion of CTE effects and their impact on weak lensing studies. Similar to we derive an empirical correction for CTE, but our adopted approach differs in a number of ways. The presence of CTE in our data leads to a slight modification of our lensing analysis. First, we note that CTE affects only *e*1 and that the change in shape occurs during the read-out stage. Hence the measured value for *e*1 needs to be corrected for CTE *before* the correction for PSF anisotropy and circularization: $$e\_1=e\_1^{\rm obs} - e\_1^{\rm CTE},$$ where $e\_1^{\rm CTE}$ is the predicted change in polarization given by the model derived in the Appendix. To derive our CTE model, we used observations of the star cluster NGC 104 as well as 100 exposures from the COSMOS survey. The observations of a star cluster allow us to study the effect of CTE as a function of position, time and signal-to-noise ratio with high precision, because stars are intrinsically round after correction for PSF anisotropy. We find that the CTE effect increases linearly with time and distance from the read-out electronics. The amplitude of the effect is observed to be proportional to $\sqrt{S/N}$. We use 100 exposures from COSMOS to examine how the CTE effect depends on source size. We find a strong size dependence, with $e\_1^{\rm CTE}\propto r\_g^{-2}$ (where *r**g* is the dispersion of the Gaussian weight function used to measure the quadrupole moments). Correction for the PSF ---------------------- Once the CTE effect has been subtracted for all objects (including stars), the lensing analysis proceeds as described in. Hence the next step is to correct both polarization components for PSF anisotropy. The ACS PSF is time dependent and therefore a different PSF model is required for each observation. An added complication is the fact that only a limited number of stars are observed in each image. To estimate the spatial variation of the PSF anisotropy a number of procedures have been proposed. Here we opt for a simple approach, similar to the one used in. We use observations of the star cluster NGC104 to derive an adequate model for PSF anisotropy. These data were taken at the start of ACS operations (PID 9018), and therefore do not suffer from CTE effects. We model the PSF anisotropy by a third order polynomial in both *x* and *y*. Such a model would not be well constrained by our galaxy cluster data, but it is thanks to the high number density of stars in NGC104. We select one of the models as our reference, because the PSF pattern varied only mildly from exposure to exposure. The resulting reference model is shown in Figure [psfmod]. The PSF anisotropy is fairly small, but can reach  ∼ 4% towards the edges of the field. Most of the variation in the ACS PSF arises from focus changes, which are the result of changes in the telescope temperature as it orbits the Earth. We therefore expect that a scaled version of our model can capture much of the spatial variation: as the detector moves from one side of the focus to the other, the direction of PSF anisotropy changes by 90 degrees, which corresponds to a change of sign in the polarization. To account for additional low order changes, we also include a first order polynomial: $$e\_i^{\rm PSF}=\alpha e\_i^{\rm NGC104} + a\_0 +a\_1 x +a\_2 y.$$ This simple model, with only 4 parameters, is used to characterize the PSF anisotropy for each individual galaxy cluster exposure. The number of stars in the galaxy cluster images ranges from 7 to 84, with a median of 20. To examine how well our approach works, we averaged the shapes of all the stars in our 25 galaxy cluster exposures. The ensemble averaged PSF polarization as a function of position is shown in Figure [psfres]. The open points show the average PSF shapes before correction. More importantly, the filled points show the residuals in polarization after the best fit model for each galaxy cluster pointing has been subtracted. These results suggest our model is adequate for our weak lensing analysis. Lensing signal -------------- The corrected galaxy shapes are used to measure the weak lensing signal. A convenient way to quantify the lensing signal is by computing the azimuthally averaged tangential shear *γ**T* as a function of radius from the cluster center (see §2.5 for our choice of the center). It can be related to the surface density through $$\langle\gamma\_t\rangle(r)=\frac{\bar\Sigma(<r) - \bar\Sigma(r)} {\Sigma\_{\rm crit}}=\bar\kappa(<r)-\bar\kappa(r),$$ where $\bar\Sigma(<r)$ is the mean surface density within an aperture of radius *r*, and $\bar\Sigma(r)$ is the mean surface density on a circle of radius *r*. The convergence *κ*, or dimensionless surface density, is the ratio of the surface density and the critical surface density $\Sigma\_{\rm crit}$, which is given by $$\Sigma\_{\rm crit}=\frac{c^2}{4\pi G}\frac{D\_s}{D\_l D\_{ls}},$$ where *D**l* is the angular diameter distance to the lens. *D**s* and *D**l**s* are the angular diameter distances from the observer to the source and from the lens to the source, respectively. The parameter *β* = max[0, *D**l**s*/*D**s*] is a measure of how the amplitude of the lensing signal depends on the redshifts of the source galaxies (where a value of 0 is assigned to sources with a redshift lower than that of the cluster). One of the advantages of weak lensing is that the (projected) mass can be determined in a model-independent way. To derive accurate masses requires wide field imaging data, which we lack because of the small field of view of ACS. Instead we fit parametric models to the lensing signal. The singular isothermal sphere (SIS) is a simple model to describe the cluster mass distribution. In this case the convergence and tangential shear are equal: $$\kappa(r)=\gamma\_T(r)=\frac{r\_E}{2r},$$ where *r**E* is the Einstein radius. In practice we do not observe the tangential shear directly, but we observe the reduced shear *g**T* instead: $$g\_T=\frac{\gamma\_T}{1-\kappa}.$$ We correct our model predictions for this effect. Under the assumption of isotropic orbits and spherical symmetry, the Einstein radius (in radian) can be expressed in terms of the line-of-sight velocity dispersion: $$r\_E=4\pi \left(\frac{\sigma}{c}\right)^2 \beta.$$ We use this model when listing lensing inferred velocity dispersions in Table [tabresult]. A more physically motivated model is the one proposed by who provide a fitting function to the density profiles observed in numerical simulations of cold dark matter. We follow the conventions outlined in, but use an updated relation between the virial mass $M\_{\rm vir}$ and concentration *c*. studied numerical simulations using the best fit parameters of the WMAP5 cosmology. The best fit $c(M\_{\rm vir})$ is given by: $$c=7.85\left({\frac{M\_{\rm vir}}{2\times 10^{12}}}\right)^{-0.081}{(1+z)^{-0.71}}.\label{mcrel}$$ We use this relation when fitting the NFW model to the lensing signal. Rather than the virial mass, in Table [tabresult] we list *M*2500 which is the mass within a radius *r*2500 where the mean mass density of the halo is 2500 times the critical density at the redshift of the cluster. We note that a different choice for the mass-concentration relation will change the inferred masses much. We found that the inferred values for *M*2500 change by 5 − 10% if we change the pre-factor in Equation [mcrel] by  ± 1, which is much smaller than our statistical uncertainties. When comparing to ensemble averaged results from other studies, however, differences in the adopted mass-concentration relation may become a dominant source of uncertainty. Finally, we note that the lensing signal is sensitive to all matter along the line-of-sight. As shown in, the large-structure in the universe introduces cosmic noise, which increases the formal error in the mass estimate, compared to just the statistical uncertainty. The listed uncertainties in the weak lensing masses include this contribution. Cluster center -------------- We have to choose a position around which we measure the weak lensing signal as a function of radius. An offset between the adopted position and the ‘true’ center of the dark matter halo will lead to an underestimate of the cluster mass. Possible substructure in the cluster core complicates a simple definition of the cluster center, but it can also lead to biased mass estimates. We expect our results to be less affected by substructure because the ACS data used here extend to larger radii than the WFPC2 observations discussed in. The resulting bias depends on the detailed procedure that is used to to interpret the lensing signal. For instance, used wide field imaging data to measure the lensing signal out to large radii and to derive (almost) model-independent masses. This procedure minimizes the effect of centroiding errors because the large-scale signal is affected much less, compared to the signal on small scales. Unfortunately, the ACS field of view is relatively small, which prevents us from following the same approach and we fit a parameterized mass model to the lensing signal instead. To reduce the sensitivity of our results to centroiding errors and central substructure, we fit the NFW and SIS models to the tangential distortion at radii 200 < *r* < 750*h*70− 1kpc. The advantage of restricting the analysis to larger scales is also evident from Figure [biasoffset], where we show the bias in mass as a function of centroid offset $r\_{\rm off}$. The solid curve shows the results when fitting an NFW model to the signal within 200 − 750*h*70− 1 kpc, which is the range we use for the ACS data studied here. provide cluster positions based on the X-ray emission. For the clusters in our sample, the listed centroiding uncertainty depends on the X-ray luminosity (with smaller values for the more luminous systems). An alternative approach is to use the location of the brightest cluster galaxy (BCG). The resulting positions are listed in Table [tabsample]. In many cases a clear candidate can be identified (indicated by a value of $Q\_{\rm BCG}$ of 1 or 2 in Table [tabsample]), but in a number of cases the choice is ambiguous (indicated by $Q\_{\rm BCG}$ -1 or 0). High quality X-ray data can be used to refine the centers, but such data are lacking for our sample. Even if such data were available, there can be a offset between the BCG and the peak in the X-ray emission. The distribution of offsets observed by for the massive clusters in the Canadian Cluster Comparison Project, are indicated by the histogram in Figure [biasoffset]. It is clear that the offsets are typically less than 50 kpc, leading to biases less than 5%. The larger offsets are found for merging massive systems where the identification of the BCG is difficult. Such major mergers do not appear to be present in the sample of clusters studied here. We compared the offset between the X-ray position and the adopted location of the BCG to the uncertainty listed in and find fair agreement: most of the BCGs are located within the radius corresponding to the 90% confidence X-ray position error circle. Only for 4 low luminosity systems do we find an offset that is much larger. The positions of the BCGs can be determined more accurately, and we therefore adopt these as the cluster centers when listing our mass estimates. Source galaxies --------------- The lensing signal is largest for background galaxies at redshifts much larger than the cluster. We lack redshift information for our sources and instead we select a sample of faint (distant) galaxies with 24 < *F*814*W* < 26.5, which also reduces contamination by cluster members which dominate the counts at bright magnitudes. Nonetheless, contamination by cluster members cannot be ignored because we lack color information. We note that adding a single color would not improve the situation significantly because the faint members span a wide range in color, unlike the bright members, almost all of which occupy a narrow red-sequence. To account for contamination by cluster members we measure the number density of faint galaxies as a function of distance from the adopted cluster center and boost the signal by the inferred fraction of cluster members. The corrected tangential distortion for RXJ0847.1+3449 as a function of distance from the adopted cluster centre is shown in Figure [gtprof]a. The red line shows the best fit singular isothermal sphere model (fitted to radii  > 25ʺ). If the observed signal is caused by gravitational lensing, the signal should vanish if the source galaxies are rotated by 45 degrees. Figure [gtprof]b shows the results of this test, which are indeed consistent with no signal. The lower panel in Figure [gtprof] shows the excess counts $\Delta n\_{\rm bg}$ as a function of radius for the cluster RXJ0847.1+3449 (*z* = 0.56). To determine the background count levels we used the 100 COSMOS pointings that were used to measure the size-dependence of CTE. We measure the excess counts for each cluster individually, because the sample spans a fair range in redshift and mass. This cluster shows a signicant excess of faint galaxies over the background number density of  ∼ 61 arcmin− 2. We assume that these faint cluster members are oriented randomly and thus simply dillute the lensing signal. To quantify the overdensity of faint members we adopt $\Delta n\_{\rm bg}\propto r^{-1}$ and determine the best fit normalization for each cluster separately (indicated by the drawn line in Figure [gtprof]c. This simple model is used to correct the observed tangential distortion for contamination by cluster members. We find that this correction leads to an average increase in the best fit Einstein radius of  ∼ 20%. The uncertainty in the amplitude of the contamination is included in our quoted measurement errors (which are increased by less than 5%). The conversion of the lensing signal into an estimate for the mass of the cluster requires knowledge of the redshift distribution of the source galaxies. These are too faint to be included in spectroscopic redshift surveys or even from ground based photometric redshift surveys. Instead we use the photometric redshift distributions determined from the Hubble Deep Fields. For the range in apparant magnitudes used in the lensing analysis, the resulting average value for ⟨*β*⟩ and the variance ⟨*β*2⟩ are listed in Table [tabsample]. As discussed in the latter quantity is needed to account for the fact that we measure the reduced shear. We estimate the uncertainty in *β* by considering the variation between the two HDFs. As the average source redshift is much higher than the cluster redshifts, the resulting relative uncertainties are small:  ∼ 2% at *z* = 0.3, increasing to  ∼ 5% at *z* = 0.6, much smaller than our statistical errors. llllllll name & *r**E* & *σ* & *r*2500 & *M*2500 & *N*2500 & [”] & [km/s] & [*h*70− 1Mpc] & [*h*70− 11013M⊙] & RXJ0056.9 − 2740 & 5.4 ± 1.5 & 678− 101+ 88 & 0.270 & 5.2− 3.0+ 4.2 & 13 ± 4 RXJ0110.3 + 1938 & 5.8 ± 1.3 & 577− 69+ 62 & 0.293 & 5.0− 2.6+ 3.4 & 4 ± 2 RXJ0154.2 − 5937 & 3.6 ± 1.4 & 474− 101+ 83 & 0.218 & 2.2− 1.4+ 2.6 & 1 ± 1 RXJ0522.2 − 3625 & 6.8 ± 1.3 & 710− 73+ 66 & 0.313 & 7.2− 3.1+ 4.2 & 7 ± 3 RXJ0826.1 + 2625 & 2.4 ± 1.8 & 384− 192+ 124 & 0.157 & 0.8− 2.1+ 2.1 & 2 ± 1 RXJ0841.1 + 6422 & 7.5 ± 1.3 & 668− 59+ 55 & 0.354 & 9.0− 3.5+ 3.9 & 15 ± 4 RXJ0847.1 + 3449 & 10.3 ± 1.7 & 937− 80+ 74 & 0.452 & 24.2− 7.6+ 8.9 & 21 ± 5 RXJ0910.6 + 4248 & 3.5 ± 1.7 & 551− 157+ 121 & 0.246 & 4.0− 2.9+ 4.7 & 3 ± 2 RXJ0921.2 + 4528 & 2.1 ± 1.5 & 344− 164+ 108 & 0.180 & 1.2− 1.2+ 1.8 & 5 ± 2 RXJ0926.6 + 1242 & 8.9 ± 1.5 & 822− 70+ 65 & 0.407 & 16.2− 5.1+ 6.0 & 11 ± 4 RXJ0957.8 + 6534 & 4.5 ± 1.4 & 605− 105+ 89 & 0.257 & 4.3− 2.6+ 3.2 & 3 ± 2 RXJ1015.1 + 4931 & 6.0 ± 1.4 & 618− 79+ 70 & 0.296 & 5.6− 2.9+ 3.4 & 3 ± 2 RXJ1117.2 + 1744 & 1.9 ± 1.6 & 331− 185+ 114 & 0.252 & 3.1− 2.1+ 3.2 & 6 ± 3 RXJ1117.4 + 0743 & 5.5 ± 1.4 & 640− 86+ 76 & 0.280 & 5.2− 2.8+ 3.4 & 13 ± 4 RXJ1123.1 + 1409 & 5.8 ± 1.4 & 588− 78+ 69 & 0.271 & 4.0− 2.6+ 3.8 & 9 ± 3 RXJ1354.2 − 0221 & 9.6 ± 1.4 & 895− 69+ 64 & 0.428 & 20.2− 5.6+ 6.4 & 23 ± 5 RXJ1524.6 + 0957 & 4.3 ± 1.3 & 585− 98+ 84 & 0.336 & 9.5− 3.8+ 4.4 & 14 ± 4 RXJ1540.8 + 1445 & 4.0 ± 1.4 & 530− 99+ 83 & 0.279 & 4.9− 2.5+ 3.7 & 13 ± 4 RXJ1642.6 + 3935 & 2.2 ± 1.5 & 371− 156+ 107 & 0.239 & 2.8− 1.8+ 2.8 & 5 ± 2 RXJ2059.9 − 4245 & 4.4 ± 1.3 & 508− 82+ 71 & 0.280 & 4.4− 2.4+ 3.3 & 6 ± 3 RXJ2108.8 − 0516 & 3.4 ± 1.4 & 442− 100+ 81 & 0.210 & 1.8− 1.4+ 2.2 & 5 ± 2 RXJ2139.9 − 4305 & 5.2 ± 1.4 & 572− 84+ 73 & 0.292 & 5.3− 2.6+ 3.7 & 9 ± 3 RXJ2146.0 + 0423 & 8.5 ± 1.4 & 830− 73+ 67 & 0.436 & 21.0− 5.7+ 6.7 & 14 ± 4 RXJ2202.7 − 1902 & 1.5 ± 1.4 & 319− 252+ 127 & 0.152 & 0.8− 0.8+ 2.0 & 2 ± 1 RXJ2328.8 + 1453 & 4.9 ± 1.4 & 612− 93+ 81 & 0.254 & 4.0− 2.6+ 3.7 & 6 ± 3 Results ======= As discussed above, we consider two often used parametric models for the cluster mass distribution. We fit these to the observed lensing signal at 200 − 750*h*70− 1kpc from the cluster center. The resulting Einstein radii and velocity dispersions are listed in Table [tabresult]. We also list the best-fit value for *M*2500 inferred from the NFW model fit to the signal, where we use the mass-concentration relation given by Eqn. [mcrel]. For reference we also list the corresponding value for *r*2500 from the NFW fit. We use this radius to compute *N*2500, the excess number of galaxies with a rest-frame *B*-band luminosity  − 22 < *M**B* <  − 18.5 within *r*2500 (we assume passive evolution when computing the rest-frame *B*-band luminosity). Recent studies of the cluster richness consider only galaxies on the red-sequence, because of the higher contrast, which improves the signal-to-noise ratio of the measurement. Unfortunately we lack color information, and we compute the total excess of galaxies. The background count levels were determined using the COSMOS pointings that were used to measure the size-dependence of CTE. Figure [mlx]a shows the resulting lensing mass as a function of the restframe X-ray luminosity in the 0.1 − 2.4 keV band. The luminosities and masses have been scaled to redshift zero assuming self-similar evolution with respect to the critical density, where $$E(z)=\frac{H(z)}{H\_0}=\sqrt{\Omega\_m(1+z)^3+\Omega\_\Lambda}$$ for flat cosmologies. The solid points indicate the clusters from the sample studied here. The clusters from the 160SD survey for which X-ray temperatures have been determined (see §3.1) are indicated by blue points. To extend the range in X-ray luminosity we also show measurements for the massive clusters that were studied in. We note, that used bolometric X-ray luminosities from, whereas here we use the restframe luminosities in the 0.1 − 2.4 keV band (which are a factor  ∼ 4 smaller). We use the mass estimates from which used new photometric redshift distributions, which were based on much larger data sets. The agreement in lensing masses is good in the regions where the two samples overlap. However, the scatter in the mass-luminosity relation is substantial (both for the clusters studied here as well as the more massive clusters). We examined whether some of the scatter could be due to the uncertainty in the position of the cluster center, but find no difference when comparing results for clusters with different levels of confidence in the identification of the BCG (see $Q\_{\rm BCG}$ in Table [tabsample]. It is worth noting, however, that we do not find any massive clusters ( > 1014*M*⊙) that are X-ray faint (i.e., *L**X* < 1044 erg/s), implying that the dispersion in the *M* − *L**X* relation is relatively well-behaved. [mlx] studied a sample of 63 X-ray bright clusters and derived masses under the assumption of hydrostatic equilibrium. This sample of clusters spans a similar range in *L**X* as our combined sample. We converted their measurements for *M*500 to *M*2500 using the mass-concentration relation given by Eqn. [mcrel] and show the results in Figure [mlx] (small grey points). The agreement with our findings is very good. A more quantitative comparison is presented in §3.3. have shown that there is a good relation between the cluster richness and the mass (and various proxies) for massive clusters. Our study extends to lower masses and as is shown in Figures [mlx]b and c *N*2500 correlates well with both the X-ray luminosity and lensing mass[2](#fn2). Similar results have been obtained using SDSS cluster samples. We assumed that *N*2500 scales as the mass *M*2500, which is a reasonable assumption if the galaxies trace the density profile. This choice, however, does not affect our conclusions. The results agree well in the regions where the two samples overlap. The correlation between *N*2500 and *M*2500 is tighter than that of *N*2500 and *L**X*. The former is less sensitive to the projections along the line-of-sight (either substructures or an overall elongation of the cluster), because both the galaxy counts and the lensing mass are derived from projected measurements. The X-ray results provide a different probe of the distribution of baryons, which is expected to lead to additional scatter. Furthermore, some of the scatter may be caused by unknown contributions by AGNs. The importance of AGN can be evaluated using a combination of deeper, high spatial resolution $(\lesssim 5'')$ X-ray and radio imaging. Such X-ray observations would also provide estimates for the temperature of the X-ray gas (which is a better measure of the cluster mass). Unfortunately, such data exist for only five of the low-mass clusters. For these, X-ray temperatures have been derived, which are listed Table [tabtx]. For the massive clusters we use the values from that were used in. We note that all clusters follow a tight *L**x* − *T**x* relation. Comparison with X-ray temperature --------------------------------- Figure [mtx] shows the resulting plot of *M*2500 as a function of X-ray temperature. RXJ1117.4 + 0743 and RXJ1524.6 + 0957 lie on the tight relation defined by the bulk of the clusters. The measurements from also follow this relation (light grey points). However, RXJ0847.1 + 3449 and RXJ1354.2 − 0221 appear to be more massive than might be expected based on *T**X*. They appear to lie on a parallel relation, along with some of the clusters from. The latter clusters are CL0024+16 and A370, which are well known strong lensing clusters (indicated in red in Figures [mlx] and [mtx]). These clusters were observed because of their strong lensing properties and included in the search for archival CFHT data carried out by. Interestingly, all four clusters are outliers on both the *M*2500-*L**x* and *N*2500 − *L**x* relations presented in Figure [mlx], but follow the mass-richness relation. This is consistent with the presence of (sub)structures along the line-of-sight boosting both *M*2500 and *N*2500: the projection of two mass concentrations (along the line-of-sight) would increase the richness and the weak lensing mass approximately linearly. The inferred X-ray temperature on the other hand will be close to that of the more massive system, whereas the X-ray luminosity will be much lower than expected, because it is proportional to the square of the electron density. We note that both RXJ0847.1 + 3449 and RXJ1354.2 − 0221 show evidence of strong lensing. lll name & *k**T**X* [keV] & ref. RXJ0110.3 + 1938 & 1.46− 0.15+ 0.19 & 1 RXJ0847.1 + 3449 & 3.62− 0.51+ 0.58 & 2 RXJ1117.4 + 0743 & 3.3− 0.36+ 0.42 & 3 RXJ1354.2 − 0221 & 3.66− 0.5+ 0.6 & 2 RXJ1524.6 + 0957 & 5.1 ± 0.36 & 4 Interestingly, the X-ray image of RXJ0847.1 + 3449 in shows evidence for a nearby cluster candidate. The case is less clear for RXJ1354.2 − 0221, but the X-ray image shows a complex morphology. Unfortunately we lack the dynamical data to confirm whether RXJ0847.1 + 3449 and RXJ1354.2 − 0221 are projected systems. The cluster RXJ1117.4 + 0743 was studied in detail by, who find that this cluster is also a projection of two structures. For the main component infer a galaxy velocity dispersion of 592 ± 82 km/s, whereas the second structure is less massive with *σ**v* = 391 ± 85 km/s. Based on our lensing analysis we obtained a velocity dispersion of 639− 86+ 76 km/s, in good agreement with the dynamical results for the main cluster. also performed a weak lensing analysis based on ground based imaging data and obtained a velocity dispersion of *σ* = 778 ± 89 km/s (where we took the average of their results for the *g*ʹ and *r*ʹ band), implying a mass 50% larger than our estimate. We are not able identify an obvious cause for this difference, but note that PSF-related systematics have a larger impact on ground based results. [mtx] The *M*2500 − *L**X* scaling relation ------------------------------------- In this section we examine the correlation between *M*2500 and the X-ray luminosity, in particular the normalization and the power law slope. We fit a power law model to the combined sample to maximize the leverage in X-ray luminosity $$E(z)M\_{2500}=M\_x \left(\frac{L\_x/E(z)}{2 \times 10^{44}{\rm erg/s}}\right)^\alpha.$$ If we naively fit this model to the measurements we find that the value for *χ*2 of the best fit is too high (*χ*2 = 106 with 43 degrees of freedom). This indicates that there is intrinsic scatter in the relation, which is also apparent from Figure [mlx]. We need to account for the intrinsic scatter in the fitting procedure, because ignoring it will generally bias the best fit parameters. We fit the model to our measurements, which have errors that follow a normal distribution. The intrinsic scatter, however, can be described by a log-normal distribution. We will assume that the intrinsic scatter can be approximated with a normal distribution with a dispersion *σ**Q* ≈ ln(10)*Q*log*Q* (we use the log with base 10). To fit the *M*2500 − *L**X* relation we follow a maximum likelihood approach. For a model *f* with parameters ${\bf a}$, the predicted values are $y\_i=f(x\_i;{\bf a})$. The uncertainties in *x**i* and *y**i* are given by *σ**x*, *i* and *σ**y*, *i*. If we assume a Gaussian intrinsic scatter *σ**Q*, *i* in the *y**i* coordinate, the likelihood ${\cal L}$ is given by $${\cal L}=\prod\_{i=1}^{n}\frac{1}{\sqrt{2\pi}w\_i}\exp\left[-\frac{[y\_i-f(x\_i;{\bf a})]^2}{2w\_i^2}\right],$$ where *w**i* accounts for the scatter: $$w\_i^2=\left[{\frac{df}{dx}(x\_i)}\right]^2\sigma\_{x,i}^2+\sigma\_{y,i}^2+\sigma\_{Q,i}^2.$$ If we consider the logarithm of the likelihood it becomes clear why including the intrinsic scatter differs from standard least squares minimization: $$-2\ln {\cal L}=2\sum\_{i=1}^n \ln w\_i + \sum\_{i=1}^n \left(\frac{y\_i-f(x\_i;a\_j)}{w\_i}\right)^2+C,$$ where the second term corresponds to the usual *χ*2. For a linear relation with no intrinsic scatter, the first term is a constant for a given data set and the likelihood is maximimized by minimizing *χ*2. However, if intrinsic scatter is included as a free parameter, the first term acts as a penalty function, and cannot be ignored. The presence of intrinsic scatter also exacerbates the Malmquist bias for a flux limited sample, such as the 160SD survey[3](#fn3). As a result the average flux of the observed sample of clusters is biased high compared to the mean of the parent population (in particular near the flux limit of the survey). To account for Malmquist bias we follow the procedure outlined in Appendix A.2 of and correct the X-ray luminosities before fitting the *M*2500 − *L**X* relation. We find that the correction for Malmquist bias is relatively modest, increasing the normalisation *M**X* by  ∼ 5% and reducing the slope *α* by  ∼ 5%[4](#fn4). We find an intrinsic scatter of *σ*log*L**X*∣*M* = 0.23− 0.04+ 0.10 (or a relative error of  ∼ 70%), in good agreement with other studies. list a somewhat larger scatter of *σ*log*L**X*∣*M* = 0.29 for the HIFLUGCS sample, whereas and find *σ*log*L**X*∣*M* = 0.17 ± 0.02 and 0.18 ± 0.02, respectively. Our results also agree well with who find *σ*log*M*∣*L**X* = 0.19 ± 0.03, which is in good agreement with our value of 0.17− 0.03+ 0.04. [slope] The likelihood contours for the power law slope *α* of the *M*2500 − *L**X* relation and the mass *M*2500 of a cluster with a luminosity of *L**X* = 2 × 1044*h*70− 2erg/s are shown in Figure [slope]. For the combined sample of clusters we find a best fit slope of *α* = 0.68 ± 0.07 and a normalization $M\_X=(1.2\pm0.12)\times 10^{14} h\_{70}^{-1}\msun$. The inferred slope is consistent with the value of $\frac{3}{4}$ expected for self-similar evolution. It is also interesting to compare the constraints from both sub-samples separately. The best fit slope for the clusters studied in is *α* = 0.55− 0.09+ 0.10 with a normalization of $M\_X=(1.5\pm0.2)\times 10^{14}h\_{70}^{-1}\msun$ (indicated by the pink contours)[5](#fn5). The blue contours in Figure [slope] indicate the constraints for the HST sample studied here. The parameters are not well constrained, with best fit values *α* = 0.63 ± 0.24 and a normalization $M\_X=(1.0\pm0.24\times 10^{14}h\_{70}^{-1}\msun$. The difference in the constraints from the (extended) sample and the 160SD systems may hint at a deviation from a single power law relation. As discussed in §2.5, however, the uncertainty in the position of the cluster center leads to an underestimate of the cluster mass, as does the presence of substructure. The results are much less sensitive to these problems. To quantify this, we combine the results with the 160SD clusters with $Q\_{\rm BCG}=2$ (12 systems) and the ones with $Q\_{\rm BCG}<2$ (13 systems). For the former we find $M\_X=(1.30\pm0.15)\times 10^{14}h\_{70}^{-1}\msun$ and *α* = 0.63 ± 0.08, whereas requiring $Q\_{\rm BCG}<2$ yields $M\_X=(1.19\pm0.14)\times 10^{14}h\_{70}^{-1}\msun$ and *α* = 0.68 ± 0.08. This comparison suggests that our normalisation may be biased low by as much as  ∼ 10%. Comparison with X-ray samples ----------------------------- The relation between the X-ray luminosity and cluster mass has been studied extensively. In this section we compare our measurements to a number of recent results, which are shown in Figure [zoom]. Where needed, the X-ray luminosities are converted to the 0.1 − 2.4 keV band and Eqn. [mcrel] has been used to convert masses to *M*2500 and adjust the slopes (because the relation between *M*2500 and *M*200 (or *M*500) is a power law with a slope less than 1.). studied a sample of 63 clusters, with masses derived under the assumption of hydrostatic equilibrium. Their BCES bisector results for the flux-limited sample yields *α* = 0.60 ± 0.05 and $M\_x=(1.3\pm0.09)\times10^{14} \msun$ are indicated by the blue triangle in Fig. [zoom]., however, have argued that these results suffer from Malmquist bias. Instead they compared the X-ray number counts to the mass function in ΛCDM cosmologies and derived *α* = 0.54 ± 0.02 and a high normalization of $M\_X=(2.1\pm0.1)\times 10^{14}\msun$. This result, however depends strongly on the adopted value for *σ*8, and combination with the WMAP3 data lowers the normalization to $M\_X=(1.56\pm0.08)\times10^{14}\msun$ (open pink triangle in Fig. [zoom]). studied a sample of *z* ∼ 0.05 and *z* ∼ 0.5 clusters that were observed with Chandra. Their results with *α* = 0.55 ± 0.05 and $M\_x=(1.43\pm0.08)\times10^{14}\msun$ are indicated by the open orange square. Another recent study was presented by who found a fairly steep slope of *α* = 0.70 ± 0.04 and $M\_x=(1.42\pm0.28)\times10^{14}\msun$ (purple square). The constraints from these studies are largely driven by X-ray luminous clusters (this is particularly true for the results of and ). If the slope of the *M*2500 − *L**X* varies with *L**X*, then it is more appropriate to compare these studies to the results from the sample studied in, for which the agreement is indeed very good. When combined with the results from the 160SD survey, the resulting scaling relation has a lower normalisation and steeper slope. To examine whether this could be caused by a change in slope, it is interesting to compare to measurements at lower luminosities. At the low X-ray luminosity end, studied X-ray groups using COSMOS. Because of the small area surveyed, the groups are less luminous than the sample of clusters from the 160SD survey studied here. Nonetheless it is interesting to extrapolate their results for comparison. use the mass-concentration relation from. We refit the 160SD sample using this relation and find that the average *M*2500 is unchanged, but that *M*200 is increased by 24%. We account for this when converting the measurements from, and find that they imply $M\_x=(1.4\pm0.3)\times10^{14}\msun$ and a slope *α* = 0.61 ± 0.13 (indicated by the open green circle in Fig. [zoom]), in fair agreement with our results. We also note that this highlights the difficulty in comparing results, especially when the analyses differ in detail. This is particularly relevant for the comparison with SDSS results in the following section. [zoom] Comparison with SDSS -------------------- measured the scaling relation between *L**X* and *M*200 for a large sample of optically selected clusters found in the Sloan Digital Sky Survey. This allowed them to extend the study of the *M* − *L**X* relation to much lower luminosities, compared to most of the measurements presented in the previous section. The clusters were binned in richness and for each bin, the mean X-ray luminosity and weak lensing mass were determined. In the lensing analysis, described in, both the mass and concentration are fit as free parameters. The resulting values for *c* agree well with the relation presented in. Upon conversion to *M*2500 we find that the measurements from correspond to $M\_X=(1.5\pm0.1)\times 10^{14}h\_{70}^{-1}\msun$ and *α* = 0.55 ± 0.04. This result is indicated in Figure [zoom] by the red point. have pointed out that the weak lensing masses determined by may be too low (by as much as  ∼ 24%), because of a bias in the source redshift distribution. The red arrow indicates the shift in normalization if the bias pointed out by is correct. In that case the results from disagree with all other measurements. find a shallower slope and higher mass,
arxiv_0000224
to the high propagation loss of mmWave signals, beamforming techniques have to be employed to achieve sufficiently high signal-to-noise ratios (SNRs) in mmWave communications. Fortunately, benefiting from the small wavelength of mmWave signals, a large number of antennas can be equipped in a small area to realize high array gains. Furthermore, the resulting highly directional mmWave beams improve transmission security by reducing the power of the signals received by eavesdroppers. However, a drawback of mmWave communications is that obstacles on the ground may prevent the establishment of line-of-sight (LoS) links, which leads to severely attenuated received signal powers even if beamforming is applied. To address this issue, a novel heterogeneous multi-beam cloud radio access network and a decentralized algorithm for beam pair selection were proposed for seamless mmWave coverage in. On the other hand, unmanned aerial vehicle (UAV) communication has attracted significant attention during the past few years, and the integration of UAV into wireless communications is expected to play an important role in B5G and 6G. Benefiting from their mobility, UAVs can be flexibly deployed in areas without infrastructure coverage, e.g., deserts, oceans, and disaster areas where the terrestrial base stations (BSs) may be broken. Compared with conventional terrestrial BSs, UAVs operate at much higher altitudes, and typically have a high probability of being able to establish a line-of-sight (LoS) communication link with the ground user equipment (UE). However, UAVs may also suffer from strong interference from neighboring infrastructures/equipments, including neighboring BSs, ground UEs, and other aircrafts. Thus, interference management is one of the key challenges in UAV communications. To address these problems, the combination of mmWave communications and UAV communications is promising and has unique advantages. First, due to the poor diffraction ability and high propagation loss of mmWave signals, the coverage range of mmWave networks is limited. Energy-efficient UAVs can be flexibly deployed and reconstituted to form a multi-hop network to enlarge the coverage range of mmWave communication networks. Second, at high UAV altitudes, the probability of an LoS link is high because shadowing of the air-to-ground link and the air-to-air link by buildings is unlikely to occur. This property is ideal for the highly directional mmWave signals, for which the non-LoS (NLoS) paths are highly attenuated. Third, large numbers of antennas can be integrated in the small area available at UAVs because of the small wavelengths of mmWave signals. Hence, directional beamforming can be used to effectively enhance the power of the target signal and to suppress the interference at the UAV. Motivated by these advantages, integrating UAVs into mmWave cellular has attracted considerable attention recently. In, the potential of and approaches for combining UAV and mmWave communication were investigated, where fast beamforming training and tracking, spatial division multiple access, blockage, and user discovery were considered. In, the channel characteristics and precoder design for mmWave-UAV systems were analyzed, and several general challenges and possible solutions were presented for mmWave-UAV cellular networks. The use of UAVs for dynamic routing in mmWave backhaul networks was proposed in, where the outage probability, spectral efficiency, and outage and non-outage duration distributions were analyzed. In, multiple access schemes for mmWave-UAV communications were introduced, and a novel link-adaptive constellation-division multiple access technique was proposed. In, a blind beam tracking approach was proposed for a UAV-satellite communication system employing a large-scale antenna array. In, a beam tracking protocol for mmWave UAV-to-UAV communication was designed, where the position and altitude of the UAV were predicted via a Gaussian process based learning algorithm. Due to the unstable beam pointing in mmWave-UAV communications, an optimized beamforming scheme taking into account beam deviation was proposed to overcome beam misalignment in. In, the two-dimensional position and the downlink beamformer of a fixed-altitude UAV were jointly optimized to mitigate the UAV jittering and user location uncertainty. Different from the works above, in this paper, we propose to use a full-duplex UAV (FD-UAV) relay to facilitate mmWave communication. Specifically, an FD-UAV relay is deployed between a source node (SN) and a destination node (DN) to establish an LoS link, where large antenna arrays are employed for beamforming to enable directional beams facilitating high channel gains. Although physically separated antenna panels and directional antennas are usually used for mmWave transceivers, the small sidelobes of the radiation pattern, which are inevitable, may result in significant self-interference (SI) for FD relays. The authors of have shown that, in addition to 70-80 dB physical isolation realized by increasing the distance between a transmitter (Tx) antenna panel and an adjacent receiver (Rx) antenna panel, 35-50 dB isolation via SI reduction[6](#fn6) is needed to enable successful reception of mmWave signals in in-band FD wireless backhaul links. This motivates us to investigate SI mitigation via mmWave beamforming. In, an orthogonal matching pursuit-based (OMP-based) SI-cancellation precoding algorithm was proposed to eliminate the SI and to improve the spectral efficiency in an FD relaying system. In, the impact of the beamwidth and the SI coefficient on the maximum achievable data rate was analyzed for a two-hop amplified-and-forward mmWave relaying system. However, the 3-dimensional (3-D) positioning of the UAV relay, which is investigated in this paper, has not been considered. Besides, the placement, trajectory, resource allocation, and transceiver design of UAVs have also been widely investigated. However, the effects of the mmWave channel and 3-D analog beamforming were not studied in these works. In the considered mmWave communication system, the position of the FD-UAV relay, the beamforming, and the power control have a significant impact on performance. Thus, these variables have to be carefully optimized. The main contributions of this paper can be summarized as follows. 1. We propose to deploy an FD-UAV relay to improve the end-to-end performance of a mmWave communication system. We formulate a corresponding optimization problem for maximization of the achievable rate between the SN and the DN. Thereby, Tx and Rx beamforming are utilized to mitigate the SI at the FD-UAV relay. To the best of our knowledge, this is the first work which investigates the joint optimization of positioning, beamforming, and power control for mmWave FD-UAV relays. 2. To handle the formulated non-convex optimization problem with high-dimensional, highly coupled variable vectors, we first assume an LoS environment and ideal beamforming, where the full array gains can be obtained for the SN-to-UAV (S2V) link and the UAV-to-DN (V2D) link, while the interference can be completely suppressed in the beamforming domain. Based on this assumption, we obtain the corresponding conditional optimal solution for the position of the FD-UAV relay in closed form. Then, we deploy the UAV to the position which is closest to the conditional optimal position and yields LoS paths for both the S2V and the V2D links. 3. We propose an alternating interference suppression (AIS) algorithm for the joint design of the beamforming vectors (BFVs) and the power control variables. In each iteration, the beam gains for the target signals of the S2V and the V2D links are alternatingly maximized, while the interference is successively reduced. Meanwhile, the optimal power allocation to the SN and FD-UAV relay is updated in closed form for the given position and BFVs. 4. Simulation results show that the proposed joint positioning, beamforming, and power control scheme outperforms three benchmark schemes. In fact, our results reveal that the proposed joint optimization method can closely approach a performance upper bound for mmWave FD-UAV relay systems. The rest of this paper is organized as follows. In Section II, we introduce the system model and formulate the proposed joint positioning, beamforming, and power control problem. In Section III, we provide our solution for the formulated problem. Simulation results are presented in Section IV, and the paper is concluded in Section V. *Notation*: *a*, **a**, **A**, and A denote a scalar, a vector, a matrix, and a set, respectively. $(\cdot)^{\rm{T}}$, ( ⋅ )\*, and $(\cdot)^{\rm{H}}$ denote transpose, conjugate, and conjugate transpose, respectively. ∣*a*∣ and ∥**a**∥ denote the absolute value of *a* and the Frobenius norm of **a**, respectively. ⌈*a*⌉ represents the minimum integer no smaller than real number *a*. E( ⋅ ) denotes the expected value of a random variable. R( ⋅ ) and ∠( ⋅ ) denote the real part and the phase of a complex number, respectively. [**a**]*i* and [**A**]*i*, *j* denote the *i*-th entry of vector **a** and the entry in the *i*-th row and *j*-th column of matrix **A**, respectively. System Model and Problem Formulation ==================================== We consider an end-to-end transmission scenario, where a SN serves a remote DN as shown in Fig. [fig:system][7](#fn7). The SN and the DN are equipped with uniform planar arrays (UPAs) employing *N*Stot = *M*S × *N*S and *N*Dtot = *M*D × *N*D antennas, respectively, to overcome the high path loss in the mmWave band. Due to obstacles such as ground buildings, the channel from the SN to the DN may be blocked. Thus, an FD-UAV relay, equipped with an *N*ttot = *M*t × *N*t Tx-UPA and an *N*rtot = *M*r × *N*r Rx-UPA, is deployed between the SN and the DN to improve system performance. [fig:system] Signal Model ------------ In the considered system, the SN transmits signal *s*1 to the UAV with power *P*S, and concurrently, the UAV transmits signal *s*2 to the DN with power *P*V, where E(∣*s**i*∣2) = 1 for *i* = 1, 2. Thus, the received signal at the UAV is given by[8](#fn8) $$\label{eq\_signal\_UAV} \begin{aligned} &\bar{y}\_{\mathrm{V}}=\mathbf{w}\_{\mathrm{r}}^{\rm{H}}\mathbf{H}\_{\mathrm{S2V}}\mathbf{w}\_{\mathrm{S}} \sqrt{P\_{\mathrm{S}}}s\_{1} + \mathbf{w}\_{\mathrm{r}}^{\rm{H}}\mathbf{H}\_{\mathrm{SI}}\mathbf{w}\_{\mathrm{t}} \sqrt{P\_{\mathrm{V}}}s\_{2} + n\_{1}, \end{aligned}$$ where **H**S2V ∈ C*N*rtot × *N*Stot is the channel matrix between the SN and the UAV. **H**SI ∈ C*N*rtot × *N*ttot is the SI channel matrix between the Tx-UPA and the Rx-UPA at the FD-UAV relay. *n*1 denotes the white Gaussian noise at the UAV having zero mean and power *σ*12. **w**S ∈ C*N*Stot × 1, **w**r ∈ C*N*rtot × 1, and **w**t ∈ C*N*ttot × 1 represent the SN-BFV, the Rx-BFV at the UAV, and the Tx-BFV at the UAV, respectively. The received signal at the DN is given by $$\label{eq\_signal\_DN} \bar{y}\_{\mathrm{D}}=\mathbf{w}\_{\mathrm{D}}^{\rm{H}}\mathbf{H}\_{\mathrm{S2D}}\mathbf{w}\_{\mathrm{S}} \sqrt{P\_{\mathrm{S}}}s\_{1} + \mathbf{w}\_{\mathrm{D}}^{\rm{H}}\mathbf{H}\_{\mathrm{V2D}}\mathbf{w}\_{\mathrm{t}} \sqrt{P\_{\mathrm{V}}}s\_{2} + n\_{2},$$ where **H**V2D ∈ C*N*Dtot × *N*ttot is the channel matrix between the UAV and the DN. **H**S2D ∈ C*N*Dtot × *N*Stot is the channel matrix between the SN and the DN. **w**D ∈ C*N*Dtot × 1 denotes the DN-BFV. *n*2 denotes the white Gaussian noise at the DN having zero mean and power *σ*22. In general, there are two main strategies for mmWave beamforming, i.e., digital beamforming and analog beamforming. For digital beamforming, each antenna is connected to an independent radio frequency (RF) chain, and thus flexible beamforming is possible due to the large degrees of freedom (DoFs) of the digital beamforming matrices. However, for mmWave systems, the hardware cost and power consumption for digital beamforming are high. In contrast, analog beamforming is more energy efficient, as multiple antennas are connected to only one RF chain via phase shifters. In addition, for FD communication, analog-circuit-domain SI cancellation is usually performed before digital sampling to avoid saturation due to strong SI. For these reasons, analog beamforming is adopted for the considered mmWave FD-UAV relay, which has limited battery capacity and may experience strong SI. The employed analog BFVs impose a constant-modulus (CM) constraint, i.e., $$\label{eq\_CM\_B} \left|\left[ \mathbf{w}\_{\tau}\right]\_{n}\right|=\frac{1}{\sqrt{N\_{\tau}^{\mathrm{tot}}}}, ~ 1 \leq n \leq N\_{\tau}^{\mathrm{tot}}, \tau=\left\{\mathrm{S}, \mathrm{r}, \mathrm{t}, \mathrm{D}\right\}.$$ Then, we can obtain the achievable rates of the S2V and V2D links as follows $$\label{eq\_Rate\_S2V} R\_{\mathrm{S2V}}=\log\_{2}\left (1+ \frac{\left | \mathbf{w}\_{\mathrm{r}}^{\rm{H}}\mathbf{H}\_{\mathrm{S2V}}\mathbf{w}\_{\mathrm{S}} \right |^{2}P\_{\mathrm{S}}}{\left | \mathbf{w}\_{\mathrm{r}}^{\rm{H}}\mathbf{H}\_{\mathrm{SI}}\mathbf{w}\_{\mathrm{t}} \right |^{2}P\_{\mathrm{V}}+\sigma\_{1}^{2}}\right ),$$ $$\label{eq\_Rate\_V2D} R\_{\mathrm{V2D}}=\log\_{2}\left (1+ \frac{\left | \mathbf{w}\_{\mathrm{D}}^{\rm{H}}\mathbf{H}\_{\mathrm{V2D}}\mathbf{w}\_{\mathrm{t}} \right |^{2}P\_{\mathrm{V}}} {\left | \mathbf{w}\_{\mathrm{D}}^{\rm{H}}\mathbf{H}\_{\mathrm{S2D}}\mathbf{w}\_{\mathrm{S}} \right |^{2}P\_{\mathrm{S}}+\sigma\_{2}^{2}}\right ).$$ Since the S2D link has a small channel gain due to the assumed blockage, the signal received via the S2D link is treated as interference at DN. Note that the achievable rates in and hold for coherent detection. Therefore, the FD-UAV relay and DN need to know the effective channel gains $\mathbf{w}\_{\mathrm{r}}^{\rm{H}}\mathbf{H}\_{\mathrm{S2V}}\mathbf{w}\_{\mathrm{S}}$ and $\mathbf{w}\_{\mathrm{D}}^{\rm{H}}\mathbf{H}\_{\mathrm{V2D}}\mathbf{w}\_{\mathrm{t}}$, respectively. The achievable rate between the SN and the DN is the minimum of the rates of the S2V and V2D links, i.e., *R*S2D = min{*R*S2V, *R*V2D}. Channel Model ------------- Due to the directivity and sparsity of the far-field mmWave-channel, the channel matrices of the S2V and V2D links can be expressed as a superposition of multipath components, where different paths have different angles of departure (AoDs) and angles of arrival (AoAs). Hence, the channel matrices of the S2V, V2D, and SN-to-DN (S2D) links are modeled as follows $$\label{Channel\_S2V} \begin{aligned} \mathbf{H}\_{\mathrm{S2V}} &= \chi\_{\mathrm{S2V}} \beta\_{\mathrm{S2V}}^{(0)} \mathbf{a}\_{\mathrm{r}}(\theta\_{\mathrm{r}}^{(0)},\phi\_{\mathrm{r}}^{(0)})\mathbf{a}\_{\mathrm{S}}^{\mathrm{H}}(\theta\_{\mathrm{S}}^{(0)},\phi\_{\mathrm{S}}^{(0)}) \\ &~~+ \sum \limits\_{\ell=1}^{L\_{\mathrm{S2V}}} \beta\_{\mathrm{S2V}}^{(\ell)} \mathbf{a}\_{\mathrm{r}}(\theta\_{\mathrm{r}}^{(\ell)},\phi\_{\mathrm{r}}^{(\ell)})\mathbf{a}\_{\mathrm{S}}^{\mathrm{H}}(\theta\_{\mathrm{S}}^{(\ell)},\phi\_{\mathrm{S}}^{(\ell)}), \end{aligned}$$ $$\label{Channel\_V2D} \begin{aligned} \mathbf{H}\_{\mathrm{V2D}} &= \chi\_{\mathrm{V2D}} \beta\_{\mathrm{V2D}}^{(0)} \mathbf{a}\_{\mathrm{D}}(\theta\_{\mathrm{D}}^{(0)},\phi\_{\mathrm{D}}^{(0)})\mathbf{a}\_{\mathrm{t}}^{\mathrm{H}}(\theta\_{\mathrm{t}}^{(0)},\phi\_{\mathrm{t}}^{(0)}) \\ &~~+ \sum \limits\_{\ell=1}^{L\_{\mathrm{V2D}}} \beta\_{\mathrm{V2D}}^{(\ell)} \mathbf{a}\_{\mathrm{D}}(\theta\_{\mathrm{D}}^{(\ell)},\phi\_{\mathrm{D}}^{(\ell)})\mathbf{a}\_{\mathrm{t}}^{\mathrm{H}}(\theta\_{\mathrm{t}}^{(\ell)},\phi\_{\mathrm{t}}^{(\ell)}), \end{aligned}$$ $$\label{Channel\_S2D} \begin{aligned} \mathbf{H}\_{\mathrm{S2D}}= \sum \limits\_{\ell=1}^{L\_{\mathrm{S2D}}} \beta\_{\mathrm{S2D}}^{(\ell)} \mathbf{a}\_{\mathrm{D}}(\theta\_{\mathrm{\widetilde{D}}}^{(\ell)},\phi\_{\mathrm{\widetilde{D}}}^{(\ell)}) \mathbf{a}\_{\mathrm{S}}^{\mathrm{H}}(\theta\_{\mathrm{\widetilde{S}}}^{(\ell)},\phi\_{\mathrm{\widetilde{S}}}^{(\ell)}), \end{aligned}$$ where index ℓ = 0 represents the LoS component and indices ℓ ≥ 1 represent the NLoS components. *L*S2V, *L*V2D, and *L*S2D are the total number of NLoS components for the S2V, V2D, and S2D channels, respectively. Random variables *χ*S2V and *χ*V2D are equal to 1 if the LoS path exists and equal to 0 otherwise. Furthermore, the LoS path from the SN to the DN is assumed to be blocked, which is the main motivation for deploying an FD-UAV relay. *β*S2V(ℓ), *β*V2D(ℓ), and *β*S2D(ℓ) are the complex coefficients of the S2V, V2D, and S2D paths, respectively. *θ*S(ℓ), *ϕ*S(ℓ), *θ*r(ℓ), and *ϕ*r(ℓ) represent the elevation AoD (E-AoD), azimuth AoD (A-AoD), elevation AoA (E-AoA), and azimuth AoA (A-AoA) of the S2V path, respectively. *θ*t(ℓ), *ϕ*t(ℓ), *θ*D(ℓ), and *ϕ*D(ℓ) represent the E-AoD, A-AoD, E-AoA, and A-AoA of the V2D path, respectively. $\theta\_{\mathrm{\widetilde{B}}}^{(\ell)}$, $\phi\_{\mathrm{\widetilde{B}}}^{(\ell)}$, $\theta\_{\mathrm{\widetilde{U}}}^{(\ell)}$, and $\phi\_{\mathrm{\widetilde{U}}}^{(\ell)}$ represent the E-AoD, A-AoD, E-AoA, and A-AoA of the S2D path, respectively. **a**S( ⋅ ), **a**r( ⋅ ), **a**t( ⋅ ), and **a**D( ⋅ ) are the steering vectors of the UPA at the SN, the Rx-UPA at the FD-UAV relay, the Tx-UPA at the FD-UAV relay, and the UPA at the DN, respectively. The steering vectors are given as follows $$\label{eq\_steeringVCT} \begin{aligned} &\mathbf{a}\_{\tau}(\theta\_{\tau},\phi\_{\tau})= [1,\cdots,e^{j2\pi\frac{d}{\lambda}\cos\theta\_{\tau}[(m-1)\cos \phi\_{\tau}+(n-1)\sin \phi\_{\tau}]},\\ &~~~~\cdots,e^{j2\pi\frac{d}{\lambda}\cos\theta\_{\tau}[(M\_{\tau}^{\mathrm{tot}}-1)\cos \phi\_{\tau}+(N\_{\tau}^{\mathrm{tot}}-1)\sin \phi\_{\tau}]} ]^{\mathrm{T}}, \end{aligned}$$ where *d* is the spacing between adjacent antennas, *λ* is the carrier wavelength, 0 ≤ *m* ≤ *M**τ*tot − 1, 0 ≤ *n* ≤ *N**τ*tot − 1, and *τ* = {S, r, t, D}. Particularly, for half-wavelength spacing arrays, we have *d* = *λ*/2. For the LoS path of the SI channel at the FD-UAV relay, the far-field range condition, *R* ≥ 2*D*2/*λ*, where *R* is the distance between the Tx antenna and the Rx antenna and *D* is the diameter of the antenna aperture, does not hold in general. Thus, the SI channel has to be modeled using the near-field model as follows $$\label{Channel\_SI} \left[\mathbf{H}\_{\mathrm{SI}}\right]\_{m, n}=\beta\_{\mathrm{SI}}^{(m,n)} \exp \left(-j 2 \pi \frac{r\_{m, n}}{\lambda}\right),$$ where *β*SI(*m*, *n*) are the complex coefficients of the SI channel, and *r**m*, *n* is the distance between the *m*-th Tx array element and the *n*-th Rx array element. Note that for the SI channel, NLoS paths may also exist, due to reflectors around the FD-UAV relay. Since the propagation distances of the NLoS paths are much longer than that of the LoS path, which leads to a higher attenuation, we focus on the LoS component of the SI channel. Although the SI channel model is more complicated compared to the far-field channel model, the FD-UAV relay is expected to be able to acquire the corresponding channel state information (CSI), as the SI channel is only slowly varying. In this paper, we assume that for a given fixed position of the FD-UAV relay, instantaneous CSI is available at the SN, FD-UAV relay, and DN via channel estimation. However, the FD-UAV can acquire only the CSI for the position it is at. Next, we provide the models for the parameters of the channel matrices in -,. As shown in Fig. [fig:system], we establish a coordinate system with the origin at the SN, and the three axes *x*, *y*, and *z*, are separately aligned with the directions of east, north, and vertical (upward), respectively. Without loss of generality, we assume the SN and the DN both have zero altitude, and the UPAs are parallel to the plane spanned by the *x* and *y* axes. Then, the coordinates of the DN are (*x*D, *y*D, 0), and the coordinates of the FD-UAV relay are (*x*V, *y*V, *h*V). According to basic geometry, we obtain the parameters of the S2V link, including the distance and the AoDs and AoAs of the LoS path, as follows $$\label{anlge\_S2V} \left \{ \begin{aligned} &{d\_{\mathrm{S} 2 \mathrm{V}}=\sqrt{x\_{\mathrm{V}}^{2}+y\_{\mathrm{V}}^{2}+h\_{\mathrm{V}}^{2}}}, \\ &{\theta\_{\mathrm{S}}^{(0)}=\theta\_{\mathrm{r}}^{(0)}=\arctan \frac{h\_{\mathrm{V}}}{\sqrt{x\_{\mathrm{V}}^{2}+y\_{\mathrm{V}}^{2}}}}, \\ &{\phi\_{\mathrm{S}}^{(0)}=\phi\_{\mathrm{r}}^{(0)}=\arctan \frac{y\_{\mathrm{V}}}{x\_{\mathrm{V}}}}. \end{aligned} \right.$$ Similarly, we obtain the parameters of the V2D link as $$\label{anlge\_V2D} \left \{ \begin{aligned} &{d\_{\mathrm{V} 2 \mathrm{D}}=\sqrt{\left(x\_{\mathrm{V}}-x\_{\mathrm{D}}\right)^{2}+\left(y\_{\mathrm{V}}-y\_{\mathrm{D}}\right)^{2}+h\_{\mathrm{V}}^{2}}}, \\ &{\theta\_{\mathrm{t}}^{(0)}=\theta\_{\mathrm{D}}^{(0)}=\arctan \frac{h\_{\mathrm{V}}}{\sqrt{\left(x\_{\mathrm{V}}-x\_{\mathrm{D}}\right)^{2}+\left(y\_{\mathrm{V}}-y\_{\mathrm{D}}\right)^{2}}}}, \\ &{\phi\_{\mathrm{t}}^{(0)}=\phi\_{\mathrm{D}}^{(0)}=\arctan \frac{y\_{\mathrm{V}}-y\_{\mathrm{D}}}{x\_{\mathrm{V}}-x\_{\mathrm{D}}}}. \end{aligned} \right.$$ For the S2V, V2D, and S2D links, which are characterized by far-field channels, the AoDs and AoAs of the NLoS paths are assumed to be uniformly distributed. Considering the propagation conditions at mmWave frequencies, the complex coefficients of the LoS and NLoS paths are modeled as $$\label{coef\_LoS} \beta\_{\mathrm{S2V}}^{(0)}=\frac{c}{4\pi f\_{c}}d\_{\mathrm{S2V}}^{-\alpha\_{\mathrm{LoS}}/2}, \beta\_{\mathrm{V2D}}^{(0)}=\frac{c}{4\pi f\_{c}}d\_{\mathrm{V2D}}^{-\alpha\_{\mathrm{LoS}}/2},$$ $$\label{coef\_NLoS} \left \{ \begin{aligned} \beta\_{\mathrm{S2V}}^{(\ell)}=\frac{c}{4\pi f\_{c}}d\_{\mathrm{S2V}}^{-\alpha\_{\mathrm{NLoS}}/2}X\_{1}, ~~\text{for}~ \ell\geq 1,\\ \beta\_{\mathrm{V2D}}^{(\ell)}=\frac{c}{4\pi f\_{c}}d\_{\mathrm{V2D}}^{-\alpha\_{\mathrm{NLoS}}/2}X\_{2}, ~~\text{for}~ \ell\geq 1,\\ \beta\_{\mathrm{S2D}}^{(\ell)}=\frac{c}{4\pi f\_{c}}d\_{\mathrm{S2D}}^{-\alpha\_{\mathrm{NLoS}}/2}X\_{3}, ~~\text{for}~ \ell\geq 1, \end{aligned} \right.$$ where *c* is the constant speed of light, *f**c* is the carrier frequency, and $d\_{\mathrm{S2D}}=\sqrt{x\_{\mathrm{D}}^{2}+y\_{\mathrm{D}}^{2}}$ is the distance of the S2D link. *α*LoS and *α*NLoS are the large-scale path loss exponents for the LoS and NLoS links, respectively. *X**i*, *i* = 1, 2, 3, are the gains for the NLoS paths, which are assumed to be circular symmetric complex Gaussian random variables with zero mean and standard deviation *σ**f*, i.e., Rayleigh fading is assumed. For the SI channel, the complex coefficient is given by $$\label{coef\_SI} \beta\_{\mathrm{SI}}^{(m,n)}=\frac{c}{4\pi f\_{c}}r\_{m,n}^{-\alpha\_{\mathrm{LoS}}/2}.$$ Besides, due to obstacles on the ground, the probabilities that an LoS path exists for the S2V and V2D links are modelled as logistic functions of the elevation angles, i.e., $$\label{prob\_LoS\_S2V} \hat{P}\_{\mathrm{S2V}}^{\mathrm{LoS}}=\frac{1}{1+a\exp{(-b(\frac{180}{\pi}\theta\_{\mathrm{r}}^{(0)}-a)})},$$ $$\label{prob\_LoS\_V2D} \hat{P}\_{\mathrm{V2D}}^{\mathrm{LoS}}=\frac{1}{1+a\exp{(-b(\frac{180}{\pi}\theta\_{\mathrm{t}}^{(0)}-a)})},$$ where *a* and *b* are positive modelling parameters whose values depend on the propagation environment. Random variables *χ*S2V and *χ*V2D in and are generated based on the LoS probabilities in and, respectively. Hereto, the statistical channel models for S2V, V2D, and S2D links have been provided. For the communication scenario considered in this paper, the instantaneous channel responses are generated according to these statistical models. From the above, we observe that the S2V and V2D channels, including the propagation loss, the spatial angles, and the probabilities that an LoS link exists, depend on the position of the UAV. Thus, the position of the FD-UAV relay has significant influence on the achievable data rate. However, in practice, the instantaneous CSI is not a priori known by the SN, UAV, and DN before the UAV is deployed at a given fixed position and performs channel estimation. This property distinguishes the considered FD-UAV relay system from traditional FD relay networks on the ground where the position of the relay is fixed. Problem Formulation ------------------- To maximize the achievable rate from the SN to the DN, we formulate the following problem for joint optimization of the UAV positioning, BFVs, and transmit powers: $$\label{eq\_problem} \begin{aligned} \mathop{\mathrm{Maximize}}\limits\_{\Psi}~~ &\min \left\{R\_{\mathrm{S2V}}, R\_{\mathrm{V2D}}\right\}\\ \mathrm{Subject~ to}~~ &\left(x\_{\mathrm{V}}, y\_{\mathrm{V}}\right) \in\left[0, x\_{\mathrm{D}}\right] \times\left[ 0, y\_{\mathrm{D}} \right], \\ &h\_{\min} \leq h\_{\mathrm{V}} \leq h\_{\max},\\ &0 \leq P\_{\mathrm{S}} \leq P\_{\mathrm{S}}^{\mathrm{tot}}, \\ &0 \leq P\_{\mathrm{V}} \leq P\_{\mathrm{V}}^{\mathrm{tot}}, \\ &\left|\left[ \mathbf{w}\_{\tau}\right]\_{n}\right|=\frac{1}{\sqrt{N\_{\tau}^{\mathrm{tot}}}}, ~ \tau=\left\{\mathrm{S}, \mathrm{r}, \mathrm{t}, \mathrm{D}\right\}, ~\forall n, \end{aligned}$$ where Ψ = {*x*V, *y*V, *h*V, **w**S, **w**D, **w**r, **w**t, *P*S, *P*V}. The first constraint indicates that the FD-UAV relay should be deployed between the SN and the DN. The second constraint limits the altitude of the FD-UAV relay, where *h*min and *h*max are the minimum and maximum values, respectively. The third and fourth constraints indicate that the transmit powers are nonnegative and cannot exceed a maximum value, where *P*Stot and *P*Vtot are the maximum transmit powers of the SN and the FD-UAV relay, respectively. The fifth constraint is the CM constraint on the analog BFVs. Due to the non-convex nature and high-dimensional, highly coupled variable vectors, Problem cannot be directly solved with existing optimization tools. Thus, we develop a solution for in the next section. Solution of the Problem ======================= Since in Problem the position variables, BFVs, and power control variables are highly coupled, it is difficult to obtain a globally optimal solution. In this section, we develop a sub-optimal solution for Problem. Since the position of the FD-UAV relay crucially affects the S2V and V2D channel matrices, we first optimize *x*V, *y*V, and *h*V. Then, given the position of the FD-UAV relay and the corresponding instantaneous CSI, we develop the proposed AIS algorithm for joint optimization of the BFVs and the power control variables. Finally, we summarize the proposed overall solution for joint positioning, beamforming, and power control in mmWave FD-UAV relay systems. Positioning Under Ideal Beamforming ----------------------------------- Since the LoS path is much stronger than the NLoS paths at mmWave frequencies in general, we neglect the NLoS paths for optimization of the position of the FD-UAV relay in this subsection. Furthermore, the motivation for deploying an FD-UAV relay is to establish LoS communication links for both the S2V and the V2D links, otherwise the communication quality will be poor. Thus, we assume that both the S2V and the V2D links have an LoS path[9](#fn9), and optimize the position of the FD-UAV relay under the assumption of ideal beamforming. [DefiidealBF] **(Ideal Beamforming)** For ideal BFVs **w***τ*, *τ* = {S, r, t, D}, assuming an LoS environment, the FD-UAV relay system achieves the full array gains for the S2V and V2D links, respectively, while the SI and the interference caused by the S2D link are completely eliminated in the beamforming domain, i.e., $$\label{eq\_idealBF} \left\{ \begin{aligned} &\left | \mathbf{w}\_{\mathrm{r}}^{\mathrm{H}}\mathbf{H}\_{\mathrm{S2V}}\mathbf{w}\_{\mathrm{S}} \right |^{2}=\left|\beta\_{\mathrm{S2V}}^{(0)}\right|^{2} N\_{\mathrm{S}}^{\mathrm{tot}} N\_{\mathrm{r}}^{\mathrm{tot}},\\ &\left | \mathbf{w}\_{\mathrm{D}}^{\mathrm{H}}\mathbf{H}\_{\mathrm{V2D}}\mathbf{w}\_{\mathrm{t}} \right |^{2}=\left|\beta\_{\mathrm{V2D}}^{(0)}\right|^{2} N\_{\mathrm{t}}^{\mathrm{tot}} N\_{\mathrm{D}}^{\mathrm{tot}},\\ &\left | \mathbf{w}\_{\mathrm{r}}^{\mathrm{H}}\mathbf{H}\_{\mathrm{SI}}\mathbf{w}\_{\mathrm{t}} \right |^{2}=\left | \mathbf{w}\_{\mathrm{D}}^{\rm{H}}\mathbf{H}\_{\mathrm{S2D}}\mathbf{w}\_{\mathrm{S}} \right |^{2}=0. \end{aligned} \right.$$ Substituting and into and, for a pure LoS environment, we obtain upper bounds for the achievable rates of the S2V and V2D links as follows $$\label{eq\_RateBound\_S2V} \bar{R}\_{\mathrm{S2V}}=\log \_{2}\left(1+\frac{c^{2}}{16\pi^{2} f\_{c}^{2}} \frac{N\_{\mathrm{S}}^{\mathrm{tot}} N\_{\mathrm{r}}^{\mathrm{tot}} P\_{\mathrm{S}}^{\mathrm{tot}}}{d\_{\mathrm{S2V}}^{\alpha\_{\mathrm{LoS}}} \sigma\_{1}^{2}}\right),$$ $$\label{eq\_RateBound\_V2D} \bar{R}\_{\mathrm{V2D}}=\log \_{2}\left(1+\frac{c^{2}}{16\pi^{2} f\_{c}^{2}} \frac{N\_{\mathrm{t}}^{\mathrm{tot}} N\_{\mathrm{D}}^{\mathrm{tot}} P\_{\mathrm{V}}^{\mathrm{tot}}}{d\_{\mathrm{V2D}}^{\alpha\_{\mathrm{LoS}}} \sigma\_{2}^{2}}\right).$$ Note that the upper bounds given by and are valid for a pure LoS environment without NLoS paths. When the NLoS paths are also considered, we obtain upper bounds for the achievable rates of the S2V and V2D links as follows $$\label{eq\_RateBound2\_S2V} \bar{\bar{R}}\_{\mathrm{S2V}}=\log \_{2}\left(1+\sum \limits\_{\ell=0}^{L\_{\mathrm{S2V}}} \left|\beta\_{\mathrm{S2V}}^{(\ell)}\right|^{2} \frac{N\_{\mathrm{S}}^{\mathrm{tot}} N\_{\mathrm{r}}^{\mathrm{tot}} P\_{\mathrm{S}}^{\mathrm{tot}}}{\sigma\_{1}^{2}}\right),$$ $$\label{eq\_RateBound2\_V2D} \bar{\bar{R}}\_{\mathrm{V2D}}=\log \_{2}\left(1+\sum \limits\_{\ell=0}^{L\_{\mathrm{V2D}}} \left|\beta\_{\mathrm{V2D}}^{(\ell)}\right|^{2} \frac{N\_{\mathrm{t}}^{\mathrm{tot}} N\_{\mathrm{D}}^{\mathrm{tot}} P\_{\mathrm{V}}^{\mathrm{tot}}}{\sigma\_{2}^{2}}\right).$$ We refer to the achievable rates in and as *approximate upper bounds*, and to the achievable rates in and as *strict upper bounds*. Since the NLoS paths are not a priori known for different positions of the FD-UAV relay, the approximate upper bounds are used for UAV positioning. The performance gap between the approximate upper bounds and the strict upper bounds will be evaluated via simulations in Section IV. As can be seen, for an LoS environment and ideal beamforming, the achievable rates in and depend only on the distances *d*S2V, *d*V2D, and the transmit powers *P*S, *P*V. Note that the achievable rates are both monotonically increasing in the transmit power. Hence, *P*Stot and *P*Vtot are the optimal transmit powers maximizing the upper-bound rate for an LoS environment and ideal beamforming. In the following theorem, we provide the corresponding optimal position of the FD-UAV relay. [Theoposi] For an LoS environment and ideal beamforming, the optimal solution for the UAV’s position is given by (*x*V⋆, *y*V⋆, *h*V⋆) = (*ρ*⋆*x*D, *ρ*⋆*y*D, *h*min) with $$\label{OptPosition} \rho^{\star}=\left \{ \begin{aligned} &0,~\text{if}~\frac{N\_{\mathrm{S}}^{\mathrm{tot}} N\_{\mathrm{r}}^{\mathrm{tot}} P\_{\mathrm{S}}^{\mathrm{tot}}\sigma\_{2}^{2}}{N\_{\mathrm{t}}^{\mathrm{tot}} N\_{\mathrm{D}}^{\mathrm{tot}} P\_{\mathrm{V}}^{\mathrm{tot}}\sigma\_{1}^{2}} \leq \frac{h\_{\min}^{\alpha\_{\mathrm{LoS}}}}{\left(x\_{\mathrm{D}}^{2}+y\_{\mathrm{D}}^{2}+h\_{\min}^{2}\right)^{\frac{\alpha\_{\mathrm{LoS}}}{2}}}, \\ &1,~\text{if}~\frac{N\_{\mathrm{S}}^{\mathrm{tot}} N\_{\mathrm{r}}^{\mathrm{tot}} P\_{\mathrm{S}}^{\mathrm{tot}}\sigma\_{2}^{2}}{N\_{\mathrm{t}}^{\mathrm{tot}} N\_{\mathrm{D}}^{\mathrm{tot}} P\_{\mathrm{V}}^{\mathrm{tot}}\sigma\_{1}^{2}} \geq \frac{\left(x\_{\mathrm{D}}^{2}+y\_{\mathrm{D}}^{2}+h\_{\min}^{2}\right)^{\frac{\alpha\_{\mathrm{LoS}}}{2}}}{h\_{\min}^{\alpha\_{\mathrm{LoS}}}}, \\ &\frac{1}{2},~\text{if}~\frac{N\_{\mathrm{S}}^{\mathrm{tot}} N\_{\mathrm{r}}^{\mathrm{tot}} P\_{\mathrm{S}}^{\mathrm{tot}}\sigma\_{2}^{2}}{N\_{\mathrm{t}}^{\mathrm{tot}} N\_{\mathrm{D}}^{\mathrm{tot}} P\_{\mathrm{V}}^{\mathrm{tot}}\sigma\_{1}^{2}}=1,\\ &\frac{-b'-\sqrt{b'^2-4a'c'}}{2a'},~\text{otherwise}, \end{aligned} \right.$$ where parameters *a*ʹ, *b*ʹ, and *c*ʹ are given by $$\left \{ \begin{aligned} &a'=\left(\left(\frac{N\_{\mathrm{S}}^{\mathrm{tot}} N\_{\mathrm{r}}^{\mathrm{tot}} P\_{\mathrm{S}}^{\mathrm{tot}}}{\sigma\_{1}^{2}}\right)^{\frac{2}{\alpha\_{\mathrm{LoS}}}}-\left(\frac{N\_{\mathrm{t}}^{\mathrm{tot}} N\_{\mathrm{D}}^{\mathrm{tot}} P\_{\mathrm{V}}^{\mathrm{tot}}}{\sigma\_{2}^{2}}\right)^{\frac{2}{\alpha\_{\mathrm{LoS}}}}\right)\\ &~~~~~~~~~\times \left(x\_{\mathrm{D}}^{2}+y\_{\mathrm{D}}^{2}\right),\\ &b'=-2\left(\frac{N\_{\mathrm{S}}^{\mathrm{tot}} N\_{\mathrm{r}}^{\mathrm{tot}} P\_{\mathrm{S}}^{\mathrm{tot}}}{\sigma\_{1}^{2}}\right)^{\frac{2}{\alpha\_{\mathrm{LoS}}}}\left(x\_{\mathrm{D}}^{2}+y\_{\mathrm{D}}^{2}\right),\\ &c'=\left(\frac{N\_{\mathrm{S}}^{\mathrm{tot}} N\_{\mathrm{r}}^{\mathrm{tot}} P\_{\mathrm{S}}^{\mathrm{tot}}}{\sigma\_{1}^{2}}\right)^{\frac{2}{\alpha\_{\mathrm{LoS}}}}\left(x\_{\mathrm{D}}^{2}+y\_{\mathrm{D}}^{2}\right)+\\ &\left(\left(\frac{N\_{\mathrm{S}}^{\mathrm{tot}} N\_{\mathrm{r}}^{\mathrm{tot}} P\_{\mathrm{S}}^{\mathrm{tot}}}{\sigma\_{1}^{2}}\right)^{\frac{2}{\alpha\_{\mathrm{LoS}}}}-\left(\frac{N\_{\mathrm{t}}^{\mathrm{tot}} N\_{\mathrm{D}}^{\mathrm{tot}} P\_{\mathrm{V}}^{\mathrm{tot}}}{\sigma\_{2}^{2}}\right)^{\frac{2}{\alpha\_{\mathrm{LoS}}}}\right) h\_{\min}^{2}. \end{aligned} \right.$$ See Appendix A. Since an LoS environment and ideal beamforming are assumed in Theorem [Theoposi], in the following, we refer to as the *conditional optimal position* of the FD-UAV relay. However, due to possible obstacles on the ground, the LoS path for the S2V and V2D links may be blocked. Since the existence of an LoS path depends on the actual environment and is not a priori known by the SN, UAV, and DN, it is necessary for the FD-UAV relay to adjust its position if needed. To this end, the UAV is initially deployed to the conditional optimal position (*x*V⋆, *y*V⋆, *h*V⋆) and the instantaneous CSI is acquired. If there exist LoS paths for both the S2V and the V2D links, the UAV remains at position (*x*V⋆, *y*V⋆, *h*V⋆) as it is optimal for an LoS environment. Otherwise, if an LoS path for the S2V link and/or the V2D link does not exist for position (*x*V⋆, *y*V⋆, *h*V⋆), the UAV moves around the initial position until LoS links are established. Specifically, we start an iterative process indexed by *t*. The *t*-th neighborhood for the position of the FD-UAV relay is defined as C*t* = {(*x*V⋆ ± *i**ε**x*, *y*V⋆ ± *j**ε**y*, *h*min + *k**ε**h*) ∈ C ∣ *i*, *j*, *k* = 0, 1, ⋯, *t*}, where *ε**x*, *ε**y*, and *ε**h* determine the granularity of the search space for directions *x*, *y*, and *z*, respectively. C = [0, *x*D] × [0, *y*D] × [*h*min, *h*max] denotes the feasible region for the position of the FD-UAV relay. During the search, the UAV gradually increases its distance from (*x*V⋆, *y*V⋆, *h*V⋆), i.e., index *t* is increased by 1 in each iteration. The iteration terminates when a point in C*t* is found which yields LoS paths for both the S2V and the V2D links, and the selected position of the FD-UAV relay is given by $$\label{AdjPosition} \left(x\_{\mathrm{V}}^{\circ},y\_{\mathrm{V}}^{\circ},h\_{\mathrm{V}}^{\circ}\right)=\mathop{\mathrm{arg~min}}\limits\_{(x,y,h)\in \mathcal{L}\_{t}} d\_{x,y,h},$$ where L*t* ⊆ C*t* \ C*t* − 1 denotes the set of coordinates which yield LoS paths for both the S2V and the V2D links in the *t*-th neighborhood, and C*t* \ C*t* − 1 contains the elements of C*t* that are not included in C*t* − 1. $d\_{x,y,h}=\sqrt{(x-x\_{\mathrm{V}}^{\star})^{2}+(y-y\_{\mathrm{V}}^{\star})^{2}+(h-h\_{\mathrm{V}}^{\star})^{2}}$ is the Euclidean distance between the candidate coordinates (*x*, *y*, *h*) and (*x*V⋆, *y*V⋆, *h*V⋆). If L*t* contains multiple sets of coordinates which have the smallest distance from the initial position, one set of the coordinates is selected at random from these candidates. Hereto, the position of the FD-UAV relay is determined. Note that the transmit powers at the SN and FD-UAV relay are set to the maximal possible values. However, this may result in a waste of power. For instance, when the achievable rate of the S2V link is always smaller than that of the V2D link, increasing the FD-UVA’s transmit power can not enlarge the achievable rate of the DN because the rate is limited by the S2V link. Besides, if the SI is not completely suppressed for non-ideal beamforming, increasing the FD-UAV’s transmit power may also increase the interference for the S2V link, and thus the achievable rate decreases. For these reasons, in the following, we first design the BFVs before we optimize the power control to maximize the achievable rate. Beamforming Design ------------------ In this subsection, we design the BFVs for the given coordinates of the FD-UAV relay. It is assumed that full CSI is available at the SN, the DN, and the FD-UAV relay, where both the LoS and NLoS components are considered for the S2V and the V2D links. Due to the non-convex CM constraints and the coupled variables, it is challenging to jointly optimize the BFVs at the SN, UAV, and DN. To address this issue, we propose the AIS algorithm, which employs alternating optimization to design the BFV at the SN, the BFV at the DN, and the Tx/Rx-BFV at the FD-UAV relay. First, we initialize the BFVs with the normalized steering vectors corresponding to the LoS paths for the S2V and V2D channels, i.e., $$\label{eq\_BFV\_SN} \mathbf{w}\_{\tau}^{(0)}=\frac{1}{\sqrt{N\_{\tau}^{\mathrm{tot}}}}\mathbf{a}\_{\tau}(\theta\_{\tau}^{(0)},\phi\_{\tau}^{(0)}), \tau=\left\{\mathrm{S}, \mathrm{r}, \mathrm{t}, \mathrm{D}\right\}.$$ Then, we start an iterative process. Given an SN-BFV, a DN-BFV, and a Tx-BFV, such that the received signal power of the V2D link and the interference from the S2D link are fixed, motivated by, we optimize the Rx-BFV to maximize the received signal power of the S2V link, while suppressing the SI. Specifically, in the *k*-th iteration, we solve the following problem: $$\label{eq\_problem\_sub1} \begin{aligned} \mathop{\mathrm{Maximize}}\limits\_{\mathbf{w}\_{\mathrm{r}}}~~~~~ &\left|\mathbf{w}\_{\mathrm{r}}^{\mathrm{H}} \mathbf{H}\_{\mathrm{S2V}}\mathbf{w}\_{\mathrm{S}}^{(k-1)}\right|\\ \mathrm{Subject~ to}~~~~~ &\left | \mathbf{w}\_{\mathrm{r}}^{\mathrm{H}}\mathbf{H}\_{\mathrm{SI}}\mathbf{w}\_{\mathrm{t}}^{(k-1)} \right | \leq \eta^{(k)}\_{1}, \\ &\left|\left[ \mathbf{w}\_{\mathrm{r}}\right]\_{n}\right| \leq \frac{1}{\sqrt{N\_{\mathrm{r}}^{\mathrm{tot}}}}, ~ 1 \leq n \leq N\_{\mathrm{r}}^{\mathrm{tot}}, \end{aligned}$$ where **w**S(*k* − 1) and **w**t(*k* − 1) are the fixed SN-BFV and Tx-BFV obtained in the (*k* − 1)-th iteration, respectively, and *η*1(*k*) is the interference suppression factor. The suppression factor successively decreases in each iteration. Besides, the CM constraint on the BFV is relaxed to a convex constraint in Problem. We will show later that this relaxation has little influence on the performance. Similarly, given the Rx-BFV obtained in Problem, i.e., **w**r(*k*), and the DN-BFV **w**D(*k* − 1), such that the received signal power of the S2V link and the interference from the S2D link are fixed, motivated by,, we optimize the Tx-BFV to maximize the received signal power of the V2D link, while suppressing the SI. Specifically, we solve the following problem: $$\label{eq\_problem\_sub2} \begin{aligned} \mathop{\mathrm{Maximize}}\limits\_{\mathbf{w}\_{\mathrm{t}}}~~~~~ &\left | \mathbf{w}\_{\mathrm{D}}^{(k-1)\mathrm{H}}\mathbf{H}\_{\mathrm{V2D}}\mathbf{w}\_{\mathrm{t}} \right |\\ \mathrm{Subject~ to}~~~~~ &\left | \mathbf{w}\_{\mathrm{r}}^{(k)\mathrm{H}}\mathbf{H}\_{\mathrm{SI}}\mathbf{w}\_{\mathrm{t}} \right | \leq \eta^{(k)}\_{2}, \\ &\left|\left[ \mathbf{w}\_{\mathrm{t}}\right]\_{n}\right| \leq \frac{1}{\sqrt{N\_{\mathrm{t}}^{\mathrm{tot}}}}, ~ 1 \leq n \leq N\_{\mathrm{t}}^{\mathrm{tot}}, \end{aligned}$$ where *η*2(*k*) is the interference suppression factor. After obtaining the Rx-BFV **w**r(*k*) and the Tx-BFV **w**t(*k*) in the *k*-th iteration, we optimize the SN-BFV and DN-BFV in a similar manner. Specifically, given the fixed DN-BFV **w**D(*k* − 1), we optimize the SN-BFV to maximize the received signal power of the S2V link, while suppressing the interference caused by the S2D link, i.e., $$\label{eq\_problem\_sub3} \begin{aligned} \mathop{\mathrm{Maximize}}\limits\_{\mathbf{w}\_{\mathrm{S}}}~~~~~ &\left|\mathbf{w}\_{\mathrm{r}}^{(k)\mathrm{H}} \mathbf{H}\_{\mathrm{S2V}}\mathbf{w}\_{\mathrm{S}}\right|\\ \mathrm{Subject~ to}~~~~~ &\left | \mathbf{w}\_{\mathrm{D}}^{(k-1)\mathrm{H}}\mathbf{H}\_{\mathrm{S2D}}\mathbf{w}\_{\mathrm{S}} \right | \leq \eta^{(k)}\_{3}, \\ &\left|\left[ \mathbf{w}\_{\mathrm{S}}\right]\_{n}\right| \leq \frac{1}{\sqrt{N\_{\mathrm{S}}^{\mathrm{tot}}}}, ~ 1 \leq n \leq N\_{\mathrm{S}}^{\mathrm{tot}}, \end{aligned}$$ Finally, we optimize the DN-BFV to maximize the received signal power of the V2D link, while suppressing the interference caused by the S2D link, i.e., $$\label{eq\_problem\_sub4} \begin{aligned} \mathop{\mathrm{Maximize}}\limits\_{\mathbf{w}\_{\mathrm{D}}}~~~~~ &\left | \mathbf{w}\_{\mathrm{D}}^{\mathrm{H}}\mathbf{H}\_{\mathrm{V2D}}\mathbf{w}\_{\mathrm{t}}^{(k)} \right |\\ \mathrm{Subject~ to}~~~~~ &\left | \mathbf{w}\_{\mathrm{D}}^{\mathrm{H}}\mathbf{H}\_{\mathrm{S2D}}\mathbf{w}\_{\mathrm{S}}^{(k)} \right | \leq \eta^{(k)}\_{4}, \\ &\left|\left[ \mathbf{w}\_{\mathrm{D}}\right]\_{n}\right| \leq \frac{1}{\sqrt{N\_{\mathrm{D}}^{\mathrm{tot}}}}, ~ 1 \leq n \leq N\_{\mathrm{D}}^{\mathrm{tot}}, \end{aligned}$$ To ensure that the interferences from the SI channel and the S2D channel are reduced in each iteration, we set *η**i*(*k*) = *η* + *μ**i*(*k*) for *i* = {1, 2, 3, 4}, where *η* is a nonnegative lower bound for the interference suppression factor. One possible choice is $\mu^{(k)}\_{1}=\frac{\mu^{(k-1)}\_{2}}{\kappa}$, $\mu^{(k)}\_{2}=\frac{\mu^{(k)}\_{1}}{\kappa}$, $\mu^{(k)}\_{3}=\frac{\mu^{(k-1)}\_{4}}{\kappa}$, and $\mu^{(k)}\_{4}=\frac{\mu^{(k)}\_{3}}{\kappa}$, where *κ* is defined as the step size for the reduction of the interference suppression factor. The iterative process can be stopped when the increase of the achievable rate is no larger than a threshold *ε**r*. Problems,,, and have a similar form. Thus, we only develop the solution of Problem in detail, and the other problems can be solved in the same manner. For Problem, a convex objective function is maximized, which makes it a non-convex problem. Fortunately, a phase rotation of the BFVs does not impact the optimality of this problem. If **w**r\star is an optimal solution, then **w**r\star*e**j**π**ω* is also an optimal solution. Exploiting this property, we can always find an optimal solution, where the argument of the magnitude operator ∣ ⋅ ∣ in the objective function of Problem is a real number. Then, Problem becomes equivalent to $$\label{eq\_problem\_sub1\_eq} \begin{aligned} \mathop{\mathrm{Maximize}}\limits\_{\mathbf{w}\_{\mathrm{r}}}~~~~~ &\mathfrak{R}\left(\mathbf{w}\_{\mathrm{r}}^{\mathrm{H}} \mathbf{H}\_{\mathrm{S2V}}\mathbf{w}\_{\mathrm{S}}^{(k-1)}\right)\\ \mathrm{Subject~ to}~~~~~ &\left | \mathbf{w}\_{\mathrm{r}}^{\rm{H}}\mathbf{H}\_{\mathrm{SI}}\mathbf{w}\_{\mathrm{t}}^{(k-1)} \right | \leq \eta^{(k)}\_{1},\\ &\left|\left[ \mathbf{w}\_{\mathrm{r}}\right]\_{n}\right| \leq \frac{1}{\sqrt{N\_{\mathrm{r}}^{\mathrm{tot}}}}, ~ 1 \leq n \leq N\_{\mathrm{r}}^{\mathrm{tot}}, \end{aligned}$$ where R( ⋅ ) denotes the real part of a complex number. Problem is a convex problem and can be solved by utilizing standard optimization tools such as CVX. After obtaining the optimal solution of Problems,,, and, which we denote by **w**r\circ, **w**t\circ, **w**S\circ, and **w**D\circ, respectively, we normalize the modulus of the BFVs’ elements to satisfy the CM constraint, i.e., $$\label{eq\_normal1} \left[ \mathbf{w}\_{\tau}^{(k)}\right]\_{n} = \frac{1}{\sqrt{N\_{\tau}^{\mathrm{tot}}}}\frac{\left[ \mathbf{w}\_{\tau}^{\mathrm{\circ}}\right]\_{n}}{\left|\left[ \mathbf{w}\_{\tau}^{\mathrm{\circ}}\right]\_{n}\right|}, ~ 1 \leq n \leq N\_{\tau}^{\mathrm{tot}}, \tau=\left\{\mathrm{S}, \mathrm{r}, \mathrm{t}, \mathrm{D}\right\}.$$ During the alternating optimization of the Tx-BFV and Rx-BFV in Problems and, respectively, the SI at the FD-UAV relay decreases successively, because the interference suppression factor decreases in each iteration. Similarly, the interference from the S2D link decreases successively, benefiting from the alternating optimization of the SN-BFV and DN-BFV in Problems and, respectively. Meanwhile, the beam gains of the target signals are maximized. With the AIS algorithm, the interference suppression factor finally converges to its lower bound *η*, and thus the powers of the SI and the interference from the S2D link are no larger than *η*2*P*Vtot and *η*2*P*Stot, respectively. To maximize the achievable rate, the interference powers should be restricted to be smaller than the noise powers, i.e., *η*2*P*Vtot < *σ*12 and *η*2*P*Stot < *σ*22. Hence, a small *η* is preferable to minimize the influence of the SI. However, a too small value of *η* leads to smaller gains of the target signals because of the stricter interference constraints in,,, and. In fact, there is a tradeoff between the powers of the interferences and the powers of the target signals. Now, the influence of the relaxation and normalization of the BFVs remains to be analyzed. To this end, we provide the following theorem. [Theomodulus] There always exists an optimal solution of Problem, where at most one element of the optimal BFV does not satisfy the CM constraint. See Appendix B. Theorem [Theomodulus] suggests that the relaxation and normalization of the BFVs in have little influence on the rate performance because they impact at most one of their elements. In particular, when the number of antennas is large, the impact of a single element’s normalization on the effective channel gain is small. Power Control ------------- As we have discussed before, to maximize the achievable rate from the SN to the DN and to avoid a waste of transmit power, the power control at the SN and FD-UAV relay should be carefully designed. Substituting the designed BFVs into and, we obtain the achievable rates of the S2V and V2D links as follows $$\label{eq\_Rate\_S2V3} \tilde{R}\_{\mathrm{S2V}}=\log\_{2}\left (1+ \frac{G\_{\mathrm{S2V}} P\_{\mathrm{S}}}{G\_{\mathrm{SI}} P\_{\mathrm{V}}+\sigma\_{1}^{2}}\right ),$$ $$\label{eq\_Rate\_V2D3} \tilde{R}\_{\mathrm{V2D}}=\log\_{2}\left (1+ \frac{G\_{\mathrm{V2D}} P\_{\mathrm{V}}}{G\_{\mathrm{S2D}}P\_{\mathrm{S}}+\sigma\_{2}^{2}}\right ),$$ where $G\_{\mathrm{S2V}}=\left | \mathbf{w}\_{\mathrm{r}}^{(k)\rm{H}}\mathbf{H}\_{\mathrm{S2V}}\mathbf{w}\_{\mathrm{S}}^{(k)} \right |^{2}$, *G*SI = ∣**w**r(*k*)H**H**SI**w**t(*k*)∣2, *G*V2D = ∣**w**D(*k*)H**H**V2D**w**t(*k*)∣2, and $G\_{\mathrm{S2D}}=\left | \mathbf{w}\_{\mathrm{D}}^{(k)\rm{H}}\mathbf{H}\_{\mathrm{S2D}}\mathbf{w}\_{\mathrm{S}}^{(k)} \right |^{2}$. To maximize the minimum of *R̃*S2V and *R̃*V2D as well as minimize the total transmit power, we provide the following theorem. [Theopower] For given position and BFVs, the optimal power allocation for the SN and FD-UAV relay is given as follows $$\label{eq\_opt\_power} \footnotesize \begin{aligned} &\left\{ \begin{aligned} &P\_{\mathrm{S}}^{\star}=P\_{\mathrm{S}}^{\mathrm{tot}},\\ &P\_{\mathrm{V}}^{\star}=\frac{-b\_{1}+\sqrt{b\_{1}^2-4a\_{1}c\_{1}}}{2a\_{1}}, \end{aligned} \right. ~\text{if}~\frac{G\_{\mathrm{S2V}} P\_{\mathrm{S}}^{\mathrm{tot}}}{G\_{\mathrm{SI}} P\_{\mathrm{V}}^{\mathrm{tot}}+\sigma\_{1}^{2}}<\frac{G\_{\mathrm{V2D}} P\_{\mathrm{V}}^{\mathrm{tot}}}{G\_{\mathrm{S2D}}P\_{\mathrm{S}}^{\mathrm{tot}}+\sigma\_{2}^{2}};\\ &\left\{ \begin{aligned} &P\_{\mathrm{S}}^{\star}=\frac{-b\_{2}+\sqrt{b\_{2}^2-4a\_{2}c\_{2}}}{2a\_{2}},\\ &P\_{\mathrm{V}}^{\star}=P\_{\mathrm{V}}^{\mathrm{tot}}, \end{aligned} \right. ~\text{if}~\frac{G\_{\mathrm{S2V}} P\_{\mathrm{S}}^{\mathrm{tot}}}{G\_{\mathrm{SI}} P\_{\mathrm{V}}^{\mathrm{tot}}+\sigma\_{1}^{2}} \geq \frac{G\_{\mathrm{V2D}} P\_{\mathrm{V}}^{\mathrm{tot}}}{G\_{\mathrm{S2D}}P\_{\mathrm{S}}^{\mathrm{tot}}+\sigma\_{2}^{2}}; \end{aligned}$$ where *a*1 = *G*SI*G*V2D, *b*1 = *G*V2D*σ*12, *c*1 =  − *G*S2V*P*Stot(*G*S2D*P*Stot + *σ*22), and *a*2 = *G*S2D*G*S2V, *b*2 = *G*S2V*σ*22, *c*2 =  − *G*V2D*P*Vtot(*G*SI*P*Vtot + *σ*12). Note that our goal is to maximize the minimum of *R̃*S2V and *R̃*V2D. Assume that the optimal transmit powers at the SN and the FD-UAV relay are both smaller than their maximum values, i.e., *P*S⋆ < *P*Stot and *P*V⋆ < *P*Vtot. We set *P*S∘ = (1 + *δ*)*P*S⋆ and *P*V∘ = (1 + *δ*)*P*V⋆, where *δ* is positive and small enough to ensure that (*P*S∘, *P*V∘) do not exceed the maximum values of the transmit powers. It can be verified that (*P*S∘, *P*V∘) yield a larger achievable rate than (*P*S⋆, *P*V⋆), which contradicts the assumption that (*P*S⋆, *P*V⋆) is optimal. Thus, we conclude that for the optimal power allocation, at least one of the transmit powers assumes the maximum possible value. When $\frac{G\_{\mathrm{S2V}} P\_{\mathrm{S}}^{\mathrm{tot}}}{G\_{\mathrm{SI}} P\_{\mathrm{V}}^{\mathrm{tot}}+\sigma\_{1}^{2}}<\frac{G\_{\mathrm{V2D}} P\_{\mathrm{V}}^{\mathrm{tot}}}{G\_{\mathrm{S2D}}P\_{\mathrm{S}}^{\mathrm{tot}}+\sigma\_{2}^{2}}$, we have *R̃*S2V < *R̃*V2D. Thus, *P*S⋆ = *P*Stot maximizes the achievable rate of the S2V link. Meanwhile, to avoid the waste of transmit power and to maximize the achievable rate, *P*V should be reduced, whereby the achievable rate of the V2D link decreases while the achievable rate of the S2V link increases. Solving equation *R̃*V2D = *R̃*S2V for *P*S⋆ = *P*Stot, we obtain the optimal transmit power of the FD-UAV relay as $P\_{\mathrm{V}}^{\star}=\frac{-b\_{1}+\sqrt{b\_{1}^2-4a\_{1}c\_{1}}}{2a\_{1}}$. Similarly, when $\frac{G\_{\mathrm{S2V}} P\_{\mathrm{S}}^{\mathrm{tot}}}{G\_{\mathrm{SI}} P\_{\mathrm{V}}^{\mathrm{tot}}+\sigma\_{1}^{2}} \geq \frac{G\_{\mathrm{V2D}} P\_{\mathrm{V}}^{\mathrm{tot}}}{G\_{\mathrm{S2D}}P\_{\mathrm{S}}^{\mathrm{tot}}+\sigma\_{2}^{2}}$, we have *R̃*S2V ≥ *R̃*V2D. Thus, *P*V⋆ = *P*Vtot maximizes the achievable rate of the V2D link. Meanwhile, to avoid the waste of transmit power and to maximize the achievable rate, *P*S should be reduced. Then, the achievable rate of the S2V link decreases while the achievable rate of the V2D link increases. Solving equation *R̃*V2D = *R̃*S2V for *P*V⋆ = *P*Vtot, the optimal transmit power of the SN is obtained as $P\_{\mathrm{S}}^{\star}=\frac{-b\_{2}+\sqrt{b\_{2}^2-4a\_{2}c\_{2}}}{2a\_{2}}$. This concludes the proof. Hereto, we have obtained the optimal solution of the transmit power variables. Overall Solution ---------------- We summarize the overall solution of the joint positioning, beamforming, and power control problem for mmWave FD-UAV relay systems in Algorithm 1. In line 1, we obtain the conditional optimal position of the FD-UAV relay based on Theorem [Theoposi], assuming an LoS environment and ideal beamforming. In lines 2-11, we find the position of the FD-UAV relay in a neighborhood of the conditional optimal position. Then, in lines 17-31, we successively decrease the interferences by alternately solving Problems,,, and, where the optimal power allocation according to Theorem [Theopower] is incorporated in each iteration to maximize the achievable rate, see line 29. Note that the position of the FD-UAV relay is not updated during the iterative process as the obtained solution achieves a near-optimal performance if the proposed algorithm approaches ideal beamforming. The algorithm terminates if the improvement in the achievable rate from one iteration to the next falls below a threshold *ε**r*. The convergence of Algorithm 1 will be studied via simulations in Section IV. [h] [1]  *M*S, *N*S, *M*D, *N*D, *M*t, *N*t, *M*r, *N*r, *x*D, *y*D, *h*min, *h*max *P*Stot, *P*Vtot, *σ*1, *σ*2, *f**c* *α*LoS, *α*NLoS, *σ**f*, *a*, *b*, *ε**x*, *ε**y*, *ε**h*, *η*, *κ*, *ε**r*.  *x*V\circ, *y*V\circ, *h*V\circ, **w**S\star, **w**D\star, **w**r\star, **w**t\star, *P*S⋆, *P*V⋆. Calculate (*x*V\star, *y*V\star, *h*V\star) based on Theorem [Theoposi]. Set (*x*V\circ, *y*V\circ, *h*V\circ) = (*x*V\star, *y*V\star, *h*V\star). Initialize *t* = 0, C0 = {(*x*V\star, *y*V\star, *h*V\star)}, $\mathcal{L}\_{0}={\O}$. Update *t* = *t* + 1. Obtain C*t* and L*t*. Determine (*x*V\circ, *y*V\circ, *h*V\circ) based on. Estimate channel matrices **H**S2V, **H**V2D, **H**S2D, and **H**SI. Initialize *k* = 0. Initialize **w**S(0), **w**D(0), **w**r(0) and **w**t(0) according to. Initialize $\mu^{(0)}\_{2}=\left | \mathbf{w}\_{\mathrm{r}}^{\rm{(0)H}}\mathbf{H}\_{\mathrm{SI}}\mathbf{w}\_{\mathrm{t}}^{(0)} \right |$. Calculate *R*S2D(0) according to and define *R*S2D( − 1) =  − ∞. *k* = *k* + 1. Update the suppression factor $\mu^{(k)}\_{i}=\frac{\mu^{(k-1)}\_{i+1}}{\kappa}$ and *η**i*(*k*) = *η* + *μ**i*(*k*) for *i* = 1, 3. Update the suppression factor $\mu^{(k)}\_{i}=\frac{\mu^{(k)}\_{i-1}}{\kappa}$ and *η**i*(*k*) = *η* + *μ**i*(*k*) *i* = 2, 4. Solve Problem to obtain **w**r∘. Normalize **w**r∘ according to and obtain **w**r(*k*). Solve Problem to obtain **w**t∘. Normalize **w**t∘ according to and obtain **w**t(*k*). Solve Problem to obtain **w**S∘. Normalize **w**S∘ according to and obtain **w**S(*k*). Solve Problem to obtain **w**D∘. Normalize **w**D∘ according to and obtain **w**D(*k*). Obtain *P*S(*k*) and *P*V(*k*) according to Theorem [Theopower]. Calculate *R*S2D(*k*) according to. **w**r\star = **w**r(*k*), **w**t\star = **w**t(*k*), *P*S⋆ = *P*S(*k*), and *P*V⋆ = *P*V(*k*). *x*V\star, *y*V\star, *h*V\star, **w**S\star, **w**D\star, **w**r\star, **w**t\star, *P*S⋆, *P*V⋆. In the proposed joint positioning, beamforming, and power control algorithm, the FD-UAV positioning is determined first and entails a maximum computational complexity of O(*K**x**K**y**K**h*), where $K\_{x}=\lceil\frac{x\_{\mathrm{D}}}{\epsilon\_{x}}\rceil$, $K\_{y}=\lceil\frac{y\_{\mathrm{D}}}{\epsilon\_{y}}\rceil$, and $K\_{h}=\lceil\frac{h\_{\max}-h\_{\min}}{\epsilon\_{h}}\rceil$ are the maximum possible numbers of candidate coordinates for directions *x*, *y*, and *z*, respectively. The complexity of solving Problem by using the interior point method and the normalization of the Rx-BFV is O(*N*rtot3.5) and O(*N*rtot), respectively. Then, the complexity of the joint beamforming and power control process from line 19 to 30 in Algorithm 1 is O(*N*maxtot3.5), where *N*maxtot = max{*N*rtot, *N*ttot, *N*Stot, *N*Dtot}. As a result, the overall computational complexity of Algorithm 1 is O(*K**x**K**y**K**h* + *T**N*maxtot3.5), where *T* is the maximum number of iterations of the AIS algorithm. Performance Evaluation ====================== In this section, we provide simulation results to evaluate the performance of the proposed joint positioning, beamforming, and power control scheme for mmWave FD-UAV relay systems. Simulation Setup and Benchmark Schemes -------------------------------------- We adopt the channel models in,,, and, where the probabilities that an LoS path exists for the S2V and V2D channels are given by and, respectively. The number of NLoS components for the S2V, V2D, and S2D channels are assumed to be identical, i.e., *L*S2V = *L*V2D = *L*S2D = *L*. The adopted simulation parameter settings are provided in Table [tab:para], unless specified otherwise. Half-wavelength spacing UPAs are used at all nodes, and the Tx-UPA and Rx-UPA at the FD-UAV relay are parallel to each other with a distance of 10*λ* ( ≈  8 cm). For the proposed AIS algorithm, the lower bound for the SI suppression factor is set to $\eta=\min\left\{\frac{\sigma\_{1}}{10\sqrt{P\_{\mathrm{S}}^{\mathrm{tot}}}},\frac{\sigma\_{2}}{10\sqrt{P\_{\mathrm{V}}^{\mathrm{tot}}}}\right\}$, such that the interference power is in the same range as the noise power. Each simulation point is averaged over 103 node distributions and channel realizations, where the DN is randomly distributed in a disk of radius 500 m, with the SN at its center. [h] [tab:para] | **Parameter** | **Description** | **Value** | | --- | --- | --- | | *h*min | Minimum altitude of UAV | 100 m | | *h*max | Maximum altitude of UAV | 300 m | | *P*Stot | Maximum transmit power of the SN | 20 dBm | | *P*Vtot | Maximum transmit power of the UAV | 20 dBm | | *σ*12 | Power of the noise at the UAV | -110 dBm | | *σ*22 | Power of the noise at the DN | -110 dBm | | *f**c* ( = *c*/*λ*) | Carrier frequency | 38 GHz | | *α*LoS | Path loss exponent for LoS paths | 1.9 | | *α*NLoS | Path loss exponent for NLoS paths | 3.3 | | *L* | Number of NLoS components | 4 | | *σ**f* | Standard deviation of shadow factor | $1/\sqrt{L}$ | | *a* | Environment parameter in and | 11.95 | | *b* | Environment parameter in and | 0.14 | | *M*S × *N*S | Antenna array size at the SN | 4 × 4 | | *M*D × *N*D | Antenna array size at the DN | 4 × 4 | | *M*t × *N*t | Antenna array size of Tx-UPA at the UAV | 4 × 4 | | *M*r × *N*r | Antenna array size of Rx-UPA at the UAV | 4 × 4 | | *ε**x* | Granularity for coordinate *x* | 1 m | | *ε**y* | Granularity for coordinate *y* | 1 m | | *ε**h* | Granularity for coordinate *h* | 1 m | | *κ* | Step size for AIS | 10 | | *ε**r* | Threshold for convergence of Algorithm 1 | 0.01 bps/Hz | Two upper bounds for the achievable rate for mmWave FD-UAV relay systems are considered. The proposed approximate upper bound is obtained as the minimum of and, while the proposed strict upper bound is the minimum of and. For both upper bounds, the FD-UAV relay is assumed to be at the designed position (*x*V\circ, *y*V\circ, *h*V\circ). Furthermore, three benchmark schemes are used for comparison, namely “RandPos & AIS”, “DesPos & steer”, and “DesPos & OMP”, respectively. For the “RandPos & AIS” scheme, the position of the FD-UAV relay is randomly selected from the feasible region of Problem, and the proposed AIS algorithm is employed for beamforming. For the “DesPos & steer” scheme, the designed position for the FD-UAV relay, i.e., (*x*V\circ, *y*V\circ, *h*V\circ) given by, is employed, and the steering vectors in are used for beamforming. For the “DesPos & OMP” scheme, the designed position for the FD-UAV relay is employed, and the BFVs are obtained by utilizing the OMP-based SI-cancellation precoding algorithm in, where the number of RF chains is denoted by *N*RF. For all benchmark schemes, the optimal transmit powers from Theorem [Theopower] are adopted at the SN and the FD-UAV relay. Simulation Results ------------------ [Fig:iterationka] [Fig:iterationPUAV] First, in Fig. [Fig:iterationka], we evaluate the convergence of the proposed AIS beamforming method (Algorithm 1) for different step sizes for the reduction of the interference suppression factor (i.e., *κ* in Algorithm 1). Identical sizes are adopted for the UPA at the SN, the UPA at the DN, and the Tx and Rx UPAs at the FD-UAV relay, i.e., 4 × 4 or 8 × 8. As can be observed, the proposed ASIS beamforming method converges very fast to a value close to the performance upper bound, and the approximate upper bound is very close to the strict upper bound. These results confirm the assumption of a pure LoS environment in Section III-A because the LoS path has much higher power compared to the NLoS paths. When the antenna array size is 4 × 4 at the FD-UAV relay, after convergence, the performance gap between the proposed method and the upper bound is no more than 0.3 bps/Hz, and this gap reduces to 0.1 bps/Hz when the antenna array size is 8 × 8. For larger numbers of antennas, there are more DoFs for minimization of the SI. Thus, the performance gap between the proposed method and the upper bound becomes smaller. The results in Fig. [Fig:iterationka] demonstrate that the proposed method can achieve a near-upper-bound performance in terms of the achievable rate. In addition, the speed of convergence of the proposed AIS algorithm depends on the step size for the reduction of the suppression factor. For larger *κ*, the AIS algorithm converges faster. However, if *κ* is chosen too large, for example, *κ* →  + ∞, the SI decreases too fast in the first iteration for designing **w**r(*k*). As such, the effective channel gain of the S2V link may be much smaller than that of the V2D link, which negatively affects the achievable rate of the DN. Thus, to achieve a favorable tradeoff between the achievable rate and computational complexity, we set *κ* = 10 for the following simulations. Fig. [Fig:iterationPUAV] shows the convergence performance of the proposed AIS algorithm for different maximum transmit powers of the FD-UAV relay. For all considered cases, the proposed algorithm converges to a near-upper-bound achievable rate within few iterations, where all curves reach steady state after 4 iterations. Particularly, as the maximum transmit power at the FD-UAV relay increases, the number of the iterations required for convergence increases. The reason is that a higher transmit power of the UAV causes more SI, and thus more iterations are required to successively reduce the SI. [Fig:GainPower] To shed more light on the properties of Algorithm 1, in Fig. [Fig:GainPower], we show the change of the channel gains and transmit powers during the iterations. In particular, we show the normalized channel gains, which are the ratios of the effective channel gains and the noise power in and, i.e., $\left | \mathbf{w}\_{\mathrm{r}}^{\rm{H}}\mathbf{H}\_{\mathrm{S2V}}\mathbf{w}\_{\mathrm{S}} \right |^{2}/\sigma\_{1}^{2}$, $\left | \mathbf{w}\_{\mathrm{r}}^{\rm{H}}\mathbf{H}\_{\mathrm{SI}}\mathbf{w}\_{\mathrm{t}} \right |^{2}/\sigma\_{1}^{2}$, $\left | \mathbf{w}\_{\mathrm{D}}^{\rm{H}}\mathbf{H}\_{\mathrm{V2D}}\mathbf{w}\_{\mathrm{t}} \right |^{2}/\sigma\_{1}^{2}$, and $\left | \mathbf{w}\_{\mathrm{D}}^{\rm{H}}\mathbf{H}\_{\mathrm{S2D}}\mathbf{w}\_{\mathrm{S}} \right |^{2}/\sigma\_{1}^{2}$. As can be observed, the channel gain of the SI channel decreases fast and converges to the lower bound *η*2/*σ*12, since the SI suppression factor is reduced in each iteration in and. The channel gain of the S2D channel is always lower than that of the SI channel because of the long transmission distance and the blockage of the LoS link between SN and DN. Besides, the channel gains of the S2V and V2D links remain almost unchanged during the iterations, which confirms the rational behind the proposed AIS beamforming algorithm. This is also the reason for why the achievable rate of the proposed scheme can approach the performance upper bound. For the variation of transmit powers, during the first iteration, the transmit power of the FD-UAV relay is very low, while the SN transmits with the maximal power. This is because the S2V link suffers from high SI for the initially chosen BFVs, and thus the FD-UAV reduces the transmit power to decrease the SI. After several iterations, the effective channel gain of the SI channel becomes lower, and thus the FD-UAV relay can increase its transmit power to improve the achievable rate of the V2D link. [Fig:ratePBS] [Fig:ratePUAV] Fig. [Fig:ratePBS] compares the achievable rate performance of different methods as a function of the SN transmit power. As can be observed, the proposed joint position, beamforming, and power control method achieves a performance very close to the performance upper bound, and outperforms all benchmark schemes. In addition, as *P*Stot increases, the speed of the increase of the achievable rate becomes smaller. The reason for this behavior is as follows. According to Theorem [Theoposi], the conditional optimal position of the FD-UAV relay moves towards the DN as the transmit power of the SN increases. When *P*Stot is sufficiently large, the conditional optimal position of the FD-UAV relay is right above the DN, and the achievable rate of the V2D link cannot increase anymore. In other words, the overall achievable rate is limited by the rate of the V2D link. We also observe that for one RF chain, the OMP-based SI-cancellation precoding algorithm in yields a similar performance as the steering vector-based beamforming scheme. When the number of RF chains increases, more SI can be mitigated in the digital beamforming domain, and the performance of the “DesPos & OMP” scheme improves. Fig. [Fig:ratePUAV] compares the achievable rate performance of different methods as a function of the FD-UAV relay transmit power. The proposed scheme outperforms again all benchmark schemes. As *P*Vtot increases, the achievable rate of the proposed method improves, but the rate of improvement decreases. The reason for this is that the position of the FD-UAV relay moves towards the SN as *P*V increases. When the transmit power of the FD-UAV relay is sufficiently large, the conditional optimal position of the FD-UAV relay is right above the SN, and the achievable rate of the S2V link cannot be further improved and limits the overall performance. In addition, as the transmit power of the FD-UAV relay increases, the achievable rate of the “DesPos & steer” scheme remains low because the SI is high at the FD-UAV relay if the steering vectors are employed for beamforming. The results in Figs. [Fig:ratePBS] and [Fig:ratePUAV] indicate that both the UAV positioning and the BFVs have a significant impact on the achievable-rate performance of mmWave FD-UAV relay systems. [Fig:rateD] [Fig:rateMN] Fig. [Fig:rateD] compares the achievable rate of different methods as a function of the SN-DN distance. For each point on the horizontal axis, the DN is randomly distributed on a circle with the SN at its center and a fixed radius, i.e., the SN-DN distance. As can be observed, the achievable rates for the five considered schemes all decrease as the distance increases because the path loss increases. In particular for the “RandPos & AIS” scheme, the achievable rate decreases rapidly with increasing distance. The reason for this behaviour is that, for larger SN-DN distances, the range of possible UAV positions increases, and the randomly deployed UAV may be further from the conditional optimal position. Fig. [Fig:rateMN] compares the achievable rate of different methods as a function of the antenna array size for *M**τ* = *N**τ* = *N*a and *τ* = {t, r, S, D}. As the antenna array size increases, the achievable rate of the proposed joint positioning, beamforming, and power control method also increases because higher array gains can be obtained and more DoFs are available for suppression of the SI. However, due to the jitter of the UAV, the elevation angles and the azimuth angles of the air-to-ground channels may change rapidly, which results in beam misalignment. To evaluate the impact of beam misalignment, we model the real AoDs/AoAs of the S2V link and the V2D link as uniformly distributed random variables with fixed means and deviation *δ*m, i.e., *θ̄**τ*(ℓ) ∈ [*θ**τ*(ℓ) − *δ*m/2, *θ**τ*(ℓ) + *δ*m/2] and *ϕ̄**τ*(ℓ) ∈ [*ϕ**τ*(ℓ) − *δ*m/2, *ϕ**τ*(ℓ) + *δ*m/2] for *τ* = {t, r, S, D}. The BFVs and power control are designed based on the estimated AoDs and AoAs (*θ**τ*(ℓ) and *ϕ**τ*(ℓ)), while the achievable rates are calculated based on the real AoDs and AoAs (*θ̄**τ*(ℓ) and *ϕ̄**τ*(ℓ)). As can be observed from Fig. [Fig:rateMN], the achievable rates are very close to the upper bound for *δ*m = 1∘, *δ*m = 5∘, and *δ*m = 10∘. The reason is as follows. According to the array theory, the half-power beamwidth for a linear phased array employing steering vectors is Θ = 2∣*θ**m* − *θ**h*∣, where $\theta\_{m}=\cos^{-1}\left(\frac{\beta \lambda}{2\pi d}\right)$ is the angle maximizing the array gain, $\theta\_{h}=\cos^{-1}\left[\frac{\lambda}{2\pi d}(-\beta \pm \frac{2.782}{N})\right]$ is the 3-dB point for the array gain, *β* is the difference in phase excitation between the antenna elements, and *N* is the array size. For *N* = 9, *β* = 0, and *d*/*λ* = 1/2, the half-power beamwidth is Θ ≈ 11.3∘. Thus, beam misalignments with deviations not exceeding 10∘ have little impact on the achievable rate. For larger array sizes, the beamwidth decreases and the impact of beam misalignment becomes more significant. The results in Fig. [Fig:rateMN] demonstrate the robustness of the proposed AIS beamforming algorithm with respect to beam misalignment. Conclusion ========== In this paper, we proposed to employ an FD-UAV relay to improve the achievable rate of a mmWave communication system, where the SN, DN, and FD-UAV relay are all equipped with UPAs and use directional beams to overcome the high path loss of mmWave signals. Analog beamforming was utilized to mitigate the SI at the FD-UAV relay. We formulated a joint optimization problem for the UAV positioning, analog beamforming, and power control for maximization of the minimum of the achievable rates of the S2V and V2D links. To solve this highly non-convex, highly coupled, and high-dimensional problem, we first obtained the conditional optimal position of the FD-UAV relay for maximization of an approximate upper bound for the achievable rate, under the assumption of an LoS environment and ideal beamforming. Then, the UAV was deployed at the position which was closest to the conditional optimal position and yielded LoS paths for both the S2V and the V2D links. Subsequently, we developed an iterative algorithm for joint optimization of the BFVs and the power control variables. In each iteration, the BFVs were optimized for maximization of the beam gains of the target signals and successive reduction of the interference, and the optimal power control variables were updated in closed form. Simulation results demonstrated that the proposed joint positioning, beamforming, and power control method for mmWave FD-UAV relay system can closely approach a performance upper bound in terms of the achievable rate and significantly outperforms three benchmark schemes. Proof of Theorem [Theoposi] =========================== Based on and, we find that to maximize the achievable rate, the FD-UAV relay should always be deployed on the line segment between the SN and the DN with the minimum altitude. Otherwise, the S2V and V2D distances would both increase, which results in an additional propagation loss. Thus, we can set the coordinates of the UAV as (*x*V, *y*V, *h*V) = (*ρ**x*D, *ρ**y*D, *h*min), where 0 ≤ *ρ* ≤ 1. Notice that the objective in Problem is to maximize the minimal rate of the S2V and V2D links. If $\frac{N\_{\mathrm{S}}^{\mathrm{tot}} N\_{\mathrm{r}}^{\mathrm{tot}} P\_{\mathrm{S}}^{\mathrm{tot}}\sigma\_{2}^{2}}{N\_{\mathrm{t}}^{\mathrm{tot}} N\_{\mathrm{D}}^{\mathrm{tot}} P\_{\mathrm{V}}^{\mathrm{tot}}\sigma\_{1}^{2}} \leq \frac{h\_{\min}^{\alpha\_{\mathrm{LoS}}}}{\left(x\_{\mathrm{D}}^{2}+y\_{\mathrm{D}}^{2}+h\_{\min}^{2}\right)^{\frac{\alpha\_{\mathrm{LoS}}}{2}}}$, we have *R̄*S2V ≤ *R̄*V2D for any *ρ* ∈ [0, 1]. Thus, the FD-UAV relay should be deployed right at the SN to maximize the minimal rate, i.e., *R̄*S2V. As a result, the optimal coordinates of the UAV are obtained for *ρ*⋆ = 0. Similarly, if $\frac{N\_{\mathrm{S}}^{\mathrm{tot}} N\_{\mathrm{r}}^{\mathrm{tot}} P\_{\mathrm{S}}^{\mathrm{tot}}\sigma\_{2}^{2}}{N\_{\mathrm{t}}^{\mathrm{tot}} N\_{\mathrm{D}}^{\mathrm{tot}} P\_{\mathrm{V}}^{\mathrm{tot}}\sigma\_{1}^{2}} \geq \frac{\left(x\_{\mathrm{D}}^{2}+y\_{\mathrm{D}}^{2}+h\_{\min}^{2}\right)^{\frac{\alpha\_{\mathrm{LoS}}}{2}}}{h\_{\min}^{\alpha\_{\mathrm{LoS}}}}$, we have *R̄*S2V ≥ *R̄*V2D for any *ρ* ∈ [0, 1]. Thus, the FD-UAV relay should be deployed right at the DN to maximize the minimal rate, i.e., *R̄*V2D. As a result, the optimal coordinates of the UAV are obtained for *ρ*⋆ = 1. For the case $ \frac{h\_{\min}^{\alpha\_{\mathrm{LoS}}}}{\left(x\_{\mathrm{D}}^{2}+y\_{\mathrm{D}}^{2}+h\_{\min}^{2}\right)^{\frac{\alpha\_{\mathrm{LoS}}}{2}}}< \frac{N\_{\mathrm{S}}^{\mathrm{tot}} N\_{\mathrm{r}}^{\mathrm{tot}} P\_{\mathrm{S}}^{\mathrm{tot}}\sigma\_{2}^{2}}{N\_{\mathrm{t}}^{\mathrm{tot}} N\_{\mathrm{D}}^{\mathrm{tot}} P\_{\mathrm{V}}^{\mathrm{tot}}\sigma\_{1}^{2}} <\frac{\left(x\_{\mathrm{D}}^{2}+y\_{\mathrm{D}}^{2}+h\_{\min}^{2}\right)^{\frac{\alpha\_{\mathrm{LoS}}}{2}}}{h\_{\min}^{\alpha\_{\mathrm{LoS}}}} $, the relative size of *R̄*S2V and *R̄*V2D depends on the value of *ρ*. It is easy to verify that *R̄*S2V is decreasing in *ρ*, while *R̄*V2D is increasing in *ρ*. Thus, the minimal rate is maximized if and only if *R̄*S2V = *R̄*V2D. This is an equation for variable *ρ*. When $\frac{N\_{\mathrm{S}}^{\mathrm{tot}} N\_{\mathrm{r}}^{\mathrm{tot}} P\_{\mathrm{S}}^{\mathrm{tot}}\sigma\_{2}^{2}}{N\_{\mathrm{t}}^{\mathrm{tot}} N\_{\mathrm{D}}^{\mathrm{tot}} P\_{\mathrm{V}}^{\mathrm{tot}}\sigma\_{1}^{2}}=1$, we obtain a linear equation with solution $\rho^{\star}=\frac{1}{2}$. For the other cases, we have a quadratic equation with solution $\rho^{\star}=\frac{-b'-\sqrt{b'^2-4a'c'}}{2a'}$ as shown in, which is the unique solution located in the interval [0, 1]. This completes the proof. Proof of Theorem [Theomodulus] ============================== For notational simplicity, we employ the definitions **h**S2V = **H**S2V**w**S(*k* − 1) and **h**SI = **H**SI**w**t(*k* − 1) in. Note that Problems,,, and have a similar form, Theorem [Theomodulus] holds for all four problems. We only present the proof for Problem. A similar proof can be provided for the other problems. Let **w**r∘ denote the optimal solution of Problem, which satisfies $$\left\{ \begin{aligned} &\mathbf{w}\_{\mathrm{r}}^{\circ\mathrm{H}} \mathbf{h}\_{\mathrm{S2V}} = l\_{1}e^{j\omega\_{1}} \\ &\mathbf{w}\_{\mathrm{r}}^{\circ\mathrm{H}}\mathbf{h}\_{\mathrm{SI}} = l\_{2}e^{j\omega\_{2}}, \end{aligned} \right.$$ where *l*1 and *ω*1 denote the modulus and phase of **w**r∘ H**h**S2V, respectively. *l*2 and *ω*2 denote the modulus and phase of **w**r∘ H**h**SI, respectively. According to the formulation of Problem, we know that *l*2 ≤ *η*1(*k*) and *l*1 is the maximum of the objective function. Note that *N*rtot ≥ 2 is an implicit precondition for beamforming at the mmWave FD-UAV relay. Assume that **w**r∘ has two elements which do not satisfy the CM constraint, i.e., $\left |[\mathbf{w}\_{\mathrm{r}}^{\circ}]\_{\pi\_1}\right |<\frac{1}{\sqrt{N\_{\mathrm{r}}^{\mathrm{tot}}}}$ and $\left |[\mathbf{w}\_{\mathrm{r}}^{\circ}]\_{\pi\_2}\right |<\frac{1}{\sqrt{N\_{\mathrm{r}}^{\mathrm{tot}}}}$, where $\{\pi\_{n}\}\subseteqq\{1,2,\cdots,N\_{\mathrm{r}}^{\mathrm{tot}}\}$ is the sequence of the BFV’s indices. Furthermore, we keep [**w**r]*π**n* = [**w**r∘]*π**n* fixed for *n* = 3, 4, ⋯, *N*rtot, and construct a new solution by adjusting [**w**r]*π*1 and [**w**r]*π*2, which can be obtained by solving the following problem: $$\label{eq\_problem\_sub1\_pro} \begin{aligned} \mathop{\mathrm{Maximize}}\limits\_{[\mathbf{w}\_{\mathrm{r}}]\_{\pi\_1},[\mathbf{w}\_{\mathrm{r}}]\_{\pi\_2}}~~~~~ &\left|\mathbf{w}\_{\mathrm{r}}^{\mathrm{H}} \mathbf{h}\_{\mathrm{S2V}}\right|\\ \mathrm{Subject~ to}~~~~~ &\mathbf{w}\_{\mathrm{r}}^{\mathrm{H}}\mathbf{h}\_{\mathrm{SI}} = l\_{2}e^{j\omega\_{2}}, \\ &\left|\left[ \mathbf{w}\_{\mathrm{r}}\right]\_{\pi\_1}\right| \leq \frac{1}{\sqrt{N\_{\mathrm{r}}^{\mathrm{tot}}}},\\ &\left|\left[ \mathbf{w}\_{\mathrm{r}}\right]\_{\pi\_2}\right| \leq \frac{1}{\sqrt{N\_{\mathrm{r}}^{\mathrm{tot}}}}. \end{aligned}$$ Based on the assumption that **w**r∘ is the optimal solution of Problem, we know that **w**r∘ is also the optimal solution of Problem, because the feasible region of Problem is a subset of that of Problem. Next, we provide the following two lemmas to illustrate a key property of the solution, for $\frac{\left[\mathbf{h}\_{\mathrm{S2V}}\right]\_{\pi\_1}}{\left[\mathbf{h}\_{\mathrm{S2V}}\right]\_{\pi\_2}} \neq \frac{\left[\mathbf{h}\_{\mathrm{SI}}\right]\_{\pi\_1}}{\left[\mathbf{h}\_{\mathrm{SI}}\right]\_{\pi\_2}}$ and $\frac{\left[\mathbf{h}\_{\mathrm{S2V}}\right]\_{\pi\_1}}{\left[\mathbf{h}\_{\mathrm{S2V}}\right]\_{\pi\_2}} = \frac{\left[\mathbf{h}\_{\mathrm{SI}}\right]\_{\pi\_1}}{\left[\mathbf{h}\_{\mathrm{SI}}\right]\_{\pi\_2}}$, respectively. If $\frac{\left[\mathbf{h}\_{\mathrm{S2V}}\right]\_{\pi\_1}}{\left[\mathbf{h}\_{\mathrm{S2V}}\right]\_{\pi\_2}} \neq \frac{\left[\mathbf{h}\_{\mathrm{SI}}\right]\_{\pi\_1}}{\left[\mathbf{h}\_{\mathrm{SI}}\right]\_{\pi\_2}}$ holds, the assumption $\left |[\mathbf{w}\_{\mathrm{r}}^{\circ}]\_{\pi\_1}\right |<\frac{1}{\sqrt{N\_{\mathrm{r}}^{\mathrm{tot}}}}$ and $\left |[\mathbf{w}\_{\mathrm{r}}^{\circ}]\_{\pi\_2}\right |<\frac{1}{\sqrt{N\_{\mathrm{r}}^{\mathrm{tot}}}}$ cannot hold. If $\frac{\left[\mathbf{h}\_{\mathrm{S2V}}\right]\_{\pi\_1}}{\left[\mathbf{h}\_{\mathrm{S2V}}\right]\_{\pi\_2}} \neq \frac{\left[\mathbf{h}\_{\mathrm{SI}}\right]\_{\pi\_1}}{\left[\mathbf{h}\_{\mathrm{SI}}\right]\_{\pi\_2}}$ holds, according to the first constraint in Problem, we can express [**w**r]*π*2 as a function of [**w**r]*π*1, i.e., $$\label{eq\_w2} \begin{aligned} \left[\mathbf{w}\_{\mathrm{r}}\right]\_{\pi\_2}^{\*} &=\frac{l\_{2}e^{j\omega\_{2}}-\sum \limits\_{n=3}^{N\_{\mathrm{r}}^{\mathrm{tot}}} [\mathbf{w}\_{\mathrm{r}}^{\circ}]\_{\pi\_n}^{\*}\left[\mathbf{h}\_{\mathrm{SI}}\right]\_{\pi\_n}} {\left[\mathbf{h}\_{\mathrm{SI}}\right]\_{\pi\_2}} -[\mathbf{w}\_{\mathrm{r}}]\_{\pi\_1}^{\*} \frac{\left[\mathbf{h}\_{\mathrm{SI}}\right]\_{\pi\_1}}{\left[\mathbf{h}\_{\mathrm{SI}}\right]\_{\pi\_2}}\\ &\triangleq f\_{1}\left([\mathbf{w}\_{\mathrm{r}}]\_{\pi\_1}\right). \end{aligned}$$ Substituting into the objective function of Problem, we obtain $$\label{eq\_obj\_w1}\small \begin{aligned} &\mathbf{w}\_{\mathrm{r}}^{\mathrm{H}} \mathbf{h}\_{\mathrm{S2V}}\\ =&[\mathbf{w}\_{\mathrm{r}}]\_{\pi\_1}^{\*}\left[\mathbf{h}\_{\mathrm{S2V}}\right]\_{\pi\_1}+[\mathbf{w}\_{\mathrm{r}}]\_{\pi\_2}^{\*}\left[\mathbf{h}\_{\mathrm{S2V}}\right]\_{\pi\_2} +\sum \limits\_{n=3}^{N\_{\mathrm{r}}^{\mathrm{tot}}} [\mathbf{w}\_{\mathrm{r}}^{\circ}]\_{\pi\_n}^{\*}\left[\mathbf{h}\_{\mathrm{S2V}}\right]\_{\pi\_n}\\ =&[\mathbf{w}\_{\mathrm{r}}]\_{\pi\_1}^{\*}\underbrace{\left(\left[\mathbf{h}\_{\mathrm{S2V}}\right]\_{\pi\_1}-\left[\mathbf{h}\_{\mathrm{S2V}}\right]\_{\pi\_2} \frac{\left[\mathbf{h}\_{\mathrm{SI}}\right]\_{\pi\_1}}{\left[\mathbf{h}\_{\mathrm{SI}}\right]\_{\pi\_2}}\right)} \limits\_{=\hat{k}}\\ &~+\underbrace{\left[\mathbf{h}\_{\mathrm{S2V}}\right]\_{\pi\_2}\frac{l\_{2}e^{j\omega\_{2}}-\sum \limits\_{n=3}^{N\_{\mathrm{r}}^{\mathrm{tot}}} [\mathbf{w}\_{\mathrm{r}}^{\circ}]\_{\pi\_n}^{\*}\left[\mathbf{h}\_{\mathrm{SI}}\right]\_{\pi\_n}} {\left[\mathbf{h}\_{\mathrm{SI}}\right]\_{\pi\_2}} \sum \limits\_{n=3}^{N\_{\mathrm{r}}^{\mathrm{tot}}} [\mathbf{w}\_{\mathrm{r}}^{\circ}]\_{\pi\_n}^{\*}\left[\mathbf{h}\_{\mathrm{S2V}}\right]\_{\pi\_n}} \limits\_{=\hat{b}}\\ \triangleq & \hat{k}[\mathbf{w}\_{\mathrm{r}}]\_{\pi\_1}^{\*}+\hat{b} \triangleq f\_{2}\left([\mathbf{w}\_{\mathrm{r}}]\_{\pi\_1}\right). \end{aligned}$$ Note that $\frac{\left[\mathbf{h}\_{\mathrm{S2V}}\right]\_{\pi\_1}}{\left[\mathbf{h}\_{\mathrm{S2V}}\right]\_{\pi\_2}} \neq \frac{\left[\mathbf{h}\_{\mathrm{SI}}\right]\_{\pi\_1}}{\left[\mathbf{h}\_{\mathrm{SI}}\right]\_{\pi\_2}}$ holds in Lemma 1. Thus, we have *k̂* ≠ 0 in. Because of the assumption $\left |[\mathbf{w}\_{\mathrm{r}}^{\circ}]\_{\pi\_1}\right |<\frac{1}{\sqrt{N\_{\mathrm{r}}^{\mathrm{tot}}}}$ and $\left |[\mathbf{w}\_{\mathrm{r}}^{\circ}]\_{\pi\_2}\right |<\frac{1}{\sqrt{N\_{\mathrm{r}}^{\mathrm{tot}}}}$, we can always find a real number *δ*, which is positive and small enough to satisfy $$\left\{ \begin{aligned} &\left |[\mathbf{w}\_{\mathrm{r}}^{\circ}]\_{\pi\_1} \pm \delta\right |<\frac{1}{\sqrt{N\_{\mathrm{r}}^{\mathrm{tot}}}},\\ &\left |f\_{1}\left([\mathbf{w}\_{\mathrm{r}}^{\circ}]\_{\pi\_1} \pm \delta\right)\right |<\frac{1}{\sqrt{N\_{\mathrm{r}}^{\mathrm{tot}}}}. \end{aligned} \right.$$ This means that ([**w**r∘]*π*1 + *δ*) and ([**w**r∘]*π*1 − *δ*) are both located in the feasible region of Problem. Since [**w**r∘]*π*1 is the optimal solution of Problem, the objective function at [**w**r∘]*π*1 + *δ* and [**w**r∘]*π*1 − *δ* is no larger than at [**w**r∘]*π*1, i.e., $$\left\{ \begin{aligned} &\left |f\_{2}\left([\mathbf{w}\_{\mathrm{r}}^{\circ}]\_{\pi\_1}+\delta \right)\right |^{2} \leq \left |f\_{2}\left([\mathbf{w}\_{\mathrm{r}}^{\circ}]\_{\pi\_1}\right)\right |^{2},\\ &\left |f\_{2}\left([\mathbf{w}\_{\mathrm{r}}^{\circ}]\_{\pi\_1}-\delta \right)\right |^{2} \leq \left |f\_{2}\left([\mathbf{w}\_{\mathrm{r}}^{\circ}]\_{\pi\_1}\right)\right |^{2}, \end{aligned} \right.$$ According to the definition in, we obtain $$\small \begin{aligned} &\left\{ \begin{aligned} &\left |\hat{k}[\mathbf{w}\_{\mathrm{r}}^{\circ}]\_{\pi\_1}^{\*}+\hat{b}+\hat{k}\delta\right |^{2} \leq \left |\hat{k}[\mathbf{w}\_{\mathrm{r}}^{\circ}]\_{\pi\_1}^{\*}+\hat{b}\right |^{2}\\ &\left |\hat{k}[\mathbf{w}\_{\mathrm{r}}^{\circ}]\_{\pi\_1}^{\*}+\hat{b}-\hat{k}\delta\right |^{2} \leq \left |\hat{k}[\mathbf{w}\_{\mathrm{r}}^{\circ}]\_{\pi\_1}^{\*}+\hat{b}\right |^{2}\\ \end{aligned} \right. \Rightarrow \\& \left\{ \begin{aligned} &\mathfrak{R}\left(\left(\hat{k}[\mathbf{w}\_{\mathrm{r}}^{\circ}]\_{\pi\_1}^{\*}+\hat{b}\right)^{\*}\hat{k}\delta\right)+ \left|\hat{k}\delta\right |^{2} \leq 0\\ &-\mathfrak{R}\left(\left(\hat{k}[\mathbf{w}\_{\mathrm{r}}^{\circ}]\_{\pi\_1}^{\*}+\hat{b}\right)^{\*}\hat{k}\delta\right)+ \left|\hat{k}\delta\right |^{2} \leq 0\\ \end{aligned} \right. \Rightarrow 2\left|\hat{k}\delta\right |^{2}\leq 0, \end{aligned}$$ which contradicts the fact that *k̂* ≠ 0 and *δ* > 0. Thus, we can conclude that the assumption that **w**r∘ has two elements that do not satisfy the CM constraint cannot hold when $\frac{\left[\mathbf{h}\_{\mathrm{S2V}}\right]\_{\pi\_1}}{\left[\mathbf{h}\_{\mathrm{S2V}}\right]\_{\pi\_2}} \neq \frac{\left[\mathbf{h}\_{\mathrm{SI}}\right]\_{\pi\_1}}{\left[\mathbf{h}\_{\mathrm{SI}}\right]\_{\pi\_2}}$. In other words, if there are any two elements that do not satisfy the CM constraint, they always have $\frac{\left[\mathbf{h}\_{\mathrm{S2V}}\right]\_{\pi\_1}}{\left[\mathbf{h}\_{\mathrm{S2V}}\right]\_{\pi\_2}} = \frac{\left[\mathbf{h}\_{\mathrm{SI}}\right]\_{\pi\_1}}{\left[\mathbf{h}\_{\mathrm{SI}}\right]\_{\pi\_2}}$. If $\frac{\left[\mathbf{h}\_{\mathrm{S2V}}\right]\_{\pi\_1}}{\left[\mathbf{h}\_{\mathrm{S2V}}\right]\_{\pi\_2}} = \frac{\left[\mathbf{h}\_{\mathrm{SI}}\right]\_{\pi\_1}}{\left[\mathbf{h}\_{\mathrm{SI}}\right]\_{\pi\_2}}$ holds, there always exists another optimal solution of Problem, where at least one of [**w**r]*π*1 and [**w**r]*π*2 satisfies the CM constraint. Based on $\frac{\left[\mathbf{h}\_{\mathrm{S2V}}\right]\_{\pi\_1}}{\left[\mathbf{h}\_{\mathrm{S2V}}\right]\_{\pi\_2}} = \frac{\left[\mathbf{h}\_{\mathrm{SI}}\right]\_{\pi\_1}}{\left[\mathbf{h}\_{\mathrm{SI}}\right]\_{\pi\_2}}$, we obtain $$\begin{aligned} &\frac{\left[\mathbf{h}\_{\mathrm{S2V}}\right]\_{\pi\_1}}{\left[\mathbf{h}\_{\mathrm{SI}}\right]\_{\pi\_1}} =\frac{\left[\mathbf{h}\_{\mathrm{S2V}}\right]\_{\pi\_2}}{\left[\mathbf{h}\_{\mathrm{SI}}\right]\_{\pi\_2}}\\ =&\frac{[\mathbf{w}\_{\mathrm{r}}]\_{\pi\_1}^{\*}\left[\mathbf{h}\_{\mathrm{S2V}}\right]\_{\pi\_1}+ [\mathbf{w}\_{\mathrm{r}}]\_{\pi\_2}^{\*}\left[\mathbf{h}\_{\mathrm{S2V}}\right]\_{\pi\_2}}{[\mathbf{w}\_{\mathrm{r}}]\_{\pi\_1}^{\*} \left[\mathbf{h}\_{\mathrm{SI}}\right]\_{\pi\_1}+ [\mathbf{w}\_{\mathrm{r}}]\_{\pi\_2}^{\*}\left[\mathbf{h}\_{\mathrm{SI}}\right]\_{\pi\_2}} \triangleq \chi. \end{aligned}$$ This indicates that [**w**r]*π*1\*[**h**S2V]*π*1 + [**w**r]*π*2\*[**h**S2V]*π*2 and [**w**r]*π*1\*[**h**SI]*π*1 + [**w**r]*π*2\*[**h**SI]*π*2 always have the same ratio regardless of the values of [**w**r]*π*1 and [**w**r]*π*2. We call this property the *constant-ratio property*. Since $\left |[\mathbf{w}\_{\mathrm{r}}^{\circ}]\_{\pi\_1}\right |<\frac{1}{\sqrt{N\_{\mathrm{r}}^{\mathrm{tot}}}}$ and $\left |[\mathbf{w}\_{\mathrm{r}}^{\circ}]\_{\pi\_2}\right |<\frac{1}{\sqrt{N\_{\mathrm{r}}^{\mathrm{tot}}}}$, it is easy to see that $$\begin{aligned} 0 &\leq \left |[\mathbf{w}\_{\mathrm{r}}^{\circ}]\_{\pi\_1}^{\*}\left[\mathbf{h}\_{\mathrm{S2V}}\right]\_{\pi\_1}+ [\mathbf{w}\_{\mathrm{r}}^{\circ}]\_{\pi\_2}^{\*}\left[\mathbf{h}\_{\mathrm{S2V}}\right]\_{\pi\_2} \right |\\ & < \frac{1}{\sqrt{N\_{\mathrm{r}}^{\mathrm{tot}}}}\left(|\left[\mathbf{h}\_{\mathrm{S2V}}\right]\_{\pi\_1}|+|\left[\mathbf{h}\_{\mathrm{S2V}}\right]\_{\pi\_2}|\right), \end{aligned}$$ and $$\begin{aligned} 0 &\leq \left |[\mathbf{w}\_{\mathrm{r}}^{\circ}]\_{\pi\_1}^{\*}\left[\mathbf{h}\_{\mathrm{SI}}\right]\_{\pi\_1}+ [\mathbf{w}\_{\mathrm{r}}^{\circ}]\_{\pi\_2}^{\*}\left[\mathbf{h}\_{\mathrm{SI}}\right]\_{\pi\_2} \right |\\ & < \frac{1}{\sqrt{N\_{\mathrm{r}}^{\mathrm{tot}}}}\left(|\left[\mathbf{h}\_{\mathrm{SI}}\right]\_{\pi\_1}| +|\left[\mathbf{h}\_{\mathrm{SI}}\right]\_{\pi\_2}|\right). \end{aligned}$$ [Fig:wadj] Next, we will consider two cases shown in Fig. [Fig:wadj]. We define *ā* = ∣[**w**r∘]*π*1\*[**h**S2V]*π*1 + [**w**r∘]*π*2\*[**h**S2V]*π*2∣, $\bar{b}=\frac{1}{\sqrt{N\_{\mathrm{r}}^{\mathrm{tot}}}}|\left[\mathbf{h}\_{\mathrm{S2V}}\right]\_{\pi\_1}|$, and $\bar{c}=\frac{1}{\sqrt{N\_{\mathrm{r}}^{\mathrm{tot}}}}|\left[\mathbf{h}\_{\mathrm{S2V}}\right]\_{\pi\_2}|$. The corresponding angles in Fig. [Fig:wadj] are defined as follows $$\left\{ \begin{aligned} &u=\angle \left([\mathbf{w}\_{\mathrm{r}}^{\circ}]\_{\pi\_1}^{\*}\left[\mathbf{h}\_{\mathrm{S2V}}\right]\_{\pi\_1}+ [\mathbf{w}\_{\mathrm{r}}^{\circ}]\_{\pi\_2}^{\*}\left[\mathbf{h}\_{\mathrm{S2V}}\right]\_{\pi\_2}\right),\\ &v\_{1}=\arccos\frac{\bar{a}^2+\bar{b}^2-\bar{c}^2}{2\bar{a}\bar{b}},\\ &v\_{2}=\arccos\frac{\bar{a}^2+\bar{c}^2-\bar{b}^2}{2\bar{a}\bar{c}}. \end{aligned} \right.$$ *Case 1*: *ā* ≥ ∣*b̄* − *c̄*∣. In this case, according to the constant-ratio property, it is easy to verify that $\left |[\mathbf{w}\_{\mathrm{r}}^{\circ}]\_{\pi\_1}^{\*}\left[\mathbf{h}\_{\mathrm{SI}}\right]\_{\pi\_1}+ [\mathbf{w}\_{\mathrm{r}}^{\circ}]\_{\pi\_2}^{\*}\left[\mathbf{h}\_{\mathrm{SI}}\right]\_{\pi\_2} \right | \geq \frac{1}{\sqrt{N\_{\mathrm{r}}^{\mathrm{tot}}}}\left(\left|\left[\mathbf{h}\_{\mathrm{SI}}\right]\_{\pi\_1}| +|\left[\mathbf{h}\_{\mathrm{SI}}\right]\_{\pi\_2}\right|\right)$ holds. According to the triangle inequality, we can always find other [**w**r]*π*1 and [**w**r]*π*2 which satisfy the CM constraint. The basic idea is to adjust the phases of the two complex elements, and keep [**w**r]*π*1\*[**h**S2V]*π*1 + [**w**r]*π*2\*[**h**S2V]*π*2 = *ā**e**j**u* unchanged in Fig. [Fig:wadj]. The new solutions are generated as follows $$\label{eq\_w\_adj} \left\{ \begin{aligned} &[\mathbf{w}\_{\mathrm{r}}^{\diamond}]\_{\pi\_1}=\frac{1}{\sqrt{N\_{\mathrm{r}}^{\mathrm{tot}}}}e^{-j(u-v\_{1}-\vartheta\_{1})},\\ &[\mathbf{w}\_{\mathrm{r}}^{\diamond}]\_{\pi\_2}=\frac{1}{\sqrt{N\_{\mathrm{r}}^{\mathrm{tot}}}}e^{-j(u+v\_{2}-\vartheta\_{2})}, \end{aligned} \right.$$ where *ϑ*1 = ∠([**h**S2V]*π*1) and *ϑ*2 = ∠([**h**S2V]*π*2). Then, it is easy to verify that [**w**r⋄]*π*1 and [**w**r⋄]*π*2 in satisfy $$\label{eq\_invir} \left\{ \begin{aligned} &[\mathbf{w}\_{\mathrm{r}}^{\diamond}]\_{\pi\_1}^{\*}\left[\mathbf{h}\_{\mathrm{S2V}}\right]\_{\pi\_1}+ [\mathbf{w}\_{\mathrm{r}}^{\diamond}]\_{\pi\_2}^{\*}\left[\mathbf{h}\_{\mathrm{S2V}}\right]\_{\pi\_2} \\ &~~~~~~=[\mathbf{w}\_{\mathrm{r}}^{\circ}]\_{\pi\_1}^{\*}\left[\mathbf{h}\_{\mathrm{S2V}}\right]\_{\pi\_1} +[\mathbf{w}\_{\mathrm{r}}^{\circ}]\_{\pi\_2}^{\*}\left[\mathbf{h}\_{\mathrm{S2V}}\right]\_{\pi\_2},\\ &[\mathbf{w}\_{\mathrm{r}}^{\diamond}]\_{\pi\_1}^{\*}\left[\mathbf{h}\_{\mathrm{SI}}\right]\_{\pi\_1}+ [\mathbf{w}\_{\mathrm{r}}^{\diamond}]\_{\pi\_2}^{\*}\left[\mathbf{h}\_{\mathrm{SI}}\right]\_{\pi\_2}\\ &~~~~~~=[\mathbf{w}\_{\mathrm{r}}^{\circ}]\_{\pi\_1}^{\*}\left[\mathbf{h}\_{\mathrm{SI}}\right]\_{\pi\_1}+ [\mathbf{w}\_{\mathrm{r}}^{\circ}]\_{\pi\_2}^{\*}\left[\mathbf{h}\_{\mathrm{SI}}\right]\_{\pi\_2}, \end{aligned} \right.$$ which means that the designed [**w**r⋄]*π*1 and [**w**r⋄]*π*2 in are also optimal solutions of Problem for which all elements satisfy the CM constraint. *Case 2*: *ā* > ∣*b̄* − *c̄*∣. In this case, according to the constant-ratio property, it is easy to verify that $\left |[\mathbf{w}\_{\mathrm{r}}^{\circ}]\_{\pi\_1}^{\*}\left[\mathbf{h}\_{\mathrm{SI}}\right]\_{\pi\_1}+ [\mathbf{w}\_{\mathrm{r}}^{\circ}]\_{\pi\_2}^{\*}\left[\mathbf{h}\_{\mathrm{SI}}\right]\_{\pi\_2} \right | < \frac{1}{\sqrt{N\_{\mathrm{r}}^{\mathrm{tot}}}}\left(\left|\left[\mathbf{h}\_{\mathrm{SI}}\right]\_{\pi\_1}| +|\left[\mathbf{h}\_{\mathrm{SI}}\right]\_{\pi\_2}\right|\right)$ holds. This indicates that [**w**r]*π*1 and [**w**r]*π*2 cannot be adjusted such that both satisfy the CM constraint because the triangle inequality is not satisfied, i.e., the difference between the lengths of two sides is less than the length of the third side. However, we can adjust them such that one element satisfies the CM constraint. The basic idea is to enlarge the shorter side to satisfy the CM constraint, and then adjust the longer side to keep [**w**r]*π*1\* + [**w**r]*π*2\*[**h**S2V]*π*2 = *ā**e**j**u* unchanged in Fig. [Fig:wadj]. Without loss of generality, we assume *b̄* ≥ *c̄* as shown in Fig. [Fig:wadj].[10](#fn10) Then, we can generate a new solution as follows $$\label{eq\_w\_adj1} \left\{ \begin{aligned} &[\mathbf{w}\_{\mathrm{r}}^{\diamond}]\_{\pi\_1}=\left(\frac{1}{\sqrt{N\_{\mathrm{r}}^{\mathrm{tot}}}}+\frac{a}{\left|\left[\mathbf{h}\_{\mathrm{S2V}}\right]\_{\pi\_1}\right|}\right)e^{-j(u-\vartheta\_{1})},\\ &[\mathbf{w}\_{\mathrm{r}}^{\diamond}]\_{\pi\_2}=\frac{1}{\sqrt{N\_{\mathrm{r}}^{\mathrm{tot}}}}e^{-j(u-\vartheta\_{2}+\pi)}. \end{aligned} \right.$$ It is easy to verify that [**w**r⋄]*π*1 and [**w**r⋄]*π*2 in satisfy, which means that they are also an optimal solution of Problem for which only one element does not satisfy the CM constraint. Thus, we can conclude that if $\frac{\left[\mathbf{h}\_{\mathrm{S2V}}\right]\_{\pi\_1}}{\left[\mathbf{h}\_{\mathrm{S2V}}\right]\_{\pi\_2}} = \frac{\left[\mathbf{h}\_{\mathrm{SI}}\right]\_{\pi\_1}}{\left[\mathbf{h}\_{\mathrm{SI}}\right]\_{\pi\_2}}$ holds, we can always construct an optimal solution of Problem, where at most one element does not satisfy the CM constraint. Based on Lemma 1, we know that for any two elements of the BFV which do not satisfy the CM constraint, $\frac{\left[\mathbf{h}\_{\mathrm{S2V}}\right]\_{\pi\_1}}{\left[\mathbf{h}\_{\mathrm{S2V}}\right]\_{\pi\_2}} \neq \frac{\left[\mathbf{h}\_{\mathrm{SI}}\right]\_{\pi\_1}}{\left[\mathbf{h}\_{\mathrm{SI}}\right]\_{\pi\_2}}$ cannot hold. In other words, these elements always satisfy the *constant-ratio property* in Lemma 2. Then, for any two elements that do not satisfy the CM constraint, we can always construct a new solution based on Lemma 2, where at most one element does not satisfies the CM constraint. Note that if there are three or more elements that do not satisfy the CM constraint, this construction can be repeated until only one or zero elements do not satisfy the CM constraint. Thus, we can conclude that there always exists an optimal solution of Problem, for which at most one element of the optimal BFV does not satisfy the CM constraint. --- 1. Manuscript received September 14, 2019; revised January 15, 2020; accepted February 16, 2020. Date of publication XX, 2020; date of current version XX, 2020. This work was supported in part by the National Natural Science Foundation of China (NSFC) under Grant 61827901, Grant 91538204, and Grant 91738301, and in part by the Open Research Fund of Key Laboratory of Space Utilization, Chinese Academy of Sciences, under Grant LSU-DZXX-2017-02. The associate editor coordinating the review of this paper and approving it for publication was Jiayi Zhang. (*Corresponding authors:Jun Zhang; Zhenyu Xiao*.)[↩](#fnref1) 2. L. Zhu, Z. Xiao and X. Cao are with the School of Electronic and Information Engineering, Beihang University, Beijing 100191, China. ([email protected], [email protected], [email protected])[↩](#fnref2) 3. J. Zhang is with the Advanced Research Institute of Multidisciplinary Science, Beijing Institute of Technology, Beijing 100081, China. ([email protected])[↩](#fnref3) 4. X.-G. Xia is with the Department of Electrical and Computer Engineering, University of Delaware, Newark, DE 19716, USA. ([email protected])[↩](#fnref4) 5. R. Schober is with the Institute for Digital Communications, Friedrich-Alexander University of Erlangen-Nuremberg, Erlangen 91054, Germany. ([email protected])[↩](#fnref5) 6. SI reduction methods for FD terminals are usually partitioned into three classes: propagation-domain, analog-circuit-domain, and digital-domain techniques. Tx and Rx beamforming at the FD-UAV relay can be categorized as propagation-domain and analog-circuit-domain approaches, respectively.[↩](#fnref6) 7. FD-UAV relays can be used to increase the end-to-end data rate between two ground nodes with poor link quality in B5G mmWave networks. Exemplary application scenarios include BS-to-UE communication, backhaul links, device-to-device communications, and communication between two terrestrial mobile BSs in emergency situations.[↩](#fnref7) 8. We assume that a hovering rotary-wing UAV is deployed at a fixed position to support the communication between SN and DN. Thus, the Doppler effect is not considered in this paper.[↩](#fnref8) 9. For a sufficiently large *h*min, the probabilities that LoS paths exist, given by and, approach 1, and thus the LoS-environment assumption adopted for positioning is reasonable. If an LoS path does not exist for the S2V and/or the V2D links at the optimized position, we resort to the strategy specified after Theorem 1.[↩](#fnref9) 10. When *b̄* < *c̄*, we can construct new optimal solutions in a similar manner.[↩](#fnref10)
arxiv_0000235
Sparse Regression at Scale: Branch-and-Bound rooted in First-Order Optimization =============================================================================== ### April, 2021 We consider the least squares regression problem, penalized with a combination of the ℓ0 and squared ℓ2 penalty functions (a.k.a. ℓ0ℓ2 regularization). Recent work shows that the resulting estimators are of key importance in many high-dimensional statistical settings. However, exact computation of these estimators remains a major challenge. Indeed, modern exact methods, based on mixed integer programming (MIP), face difficulties when the number of features *p* ∼ 104. In this work, we present a new exact MIP framework for ℓ0ℓ2-regularized regression that can scale to *p* ∼ 107, achieving speedups of at least 5000x, compared to state-of-the-art exact methods. Unlike recent work, which relies on modern commercial MIP solvers, we design a specialized nonlinear branch-and-bound (BnB) framework, by critically exploiting the problem structure. A key distinguishing component in our framework lies in efficiently solving the node relaxations using a specialized first-order method, based on coordinate descent (CD). Our CD-based method effectively leverages information across the BnB nodes, through using warm starts, active sets, and gradient screening. In addition, we design a novel method for obtaining dual bounds from primal CD solutions, which certifiably works in high dimensions. Experiments on synthetic and real high-dimensional datasets demonstrate that our framework is not only significantly faster than the state of the art, but can also deliver certifiably optimal solutions to statistically challenging instances that cannot be handled with existing methods. We open source the implementation through our toolkit L0BnB. Introduction ============ We consider the sparse linear regression problem with additional ℓ2 regularization. A natural way to impose sparsity in this context is through controlling the ℓ0 (pseudo) norm of the estimator, which counts the number of nonzero entries. More concretely, let *X* ∈ R*n* × *p* be the data matrix, with *n* samples and *p* features, and *y* ∈ R*n* be the response vector. We focus on the least squares problem with a combination of ℓ0 and ℓ2 regularization: $$\begin{aligned} \label{eq:main} \min\_{\beta \in \mathbb{R}^p} \frac{1}{2} \| y - X \beta \|^{2}\_2 + \lambda\_0 \| \beta \|\_0+ \lambda\_2 \|\beta\|^{2}\_2, \end{aligned}$$ where ∥*β*∥0 is defined as the number of nonzero entries in the regression coefficients *β* ∈ R*p*, and ∥*β*∥22 is the squared ℓ2-norm of *β* (also referred to as ridge regularization). The regularization parameters *λ*0 and *λ*2 are assumed to be specified by the practitioner[2](#fn2). We note that the presence of ridge regularization (i.e., *λ*2 > 0) can be important from a statistical viewpoint—see for example  for further discussions on this matter. Statistical properties of ℓ0-based estimators have been extensively studied in the statistics literature . Specifically, under suitable assumptions on the underlying data and appropriate choices of tuning parameters, global solutions of  have optimal support recovery properties ; and optimal prediction error bounds that do not depend upon *X* . Appealing prediction error bounds available from optimal solutions of  for the low-signal regime are discussed in . Such strong guarantees are generally not satisfied by heuristic solutions to —see for theoretical support and  for numerical evidence. From a practical perspective, the ability to certify the optimality of solutions (e.g., via dual bounds) is important and can engender trust in mission critical applications such as healthcare. Despite its appeal, Problem  is labeled as NP-Hard  and poses computational challenges. Recently, there has been exciting work in developing Mixed Integer Programming (MIP)-based approaches to solve , e.g.,. Specifically, demonstrated that off-the-shelf MIP solvers can handle problem instances for *p* up to a thousand. Larger instances can be handled when *λ*2 is sufficiently large and the feature correlations are low—for example, see the approach of ; and the method in  for the classification variant of . The approaches of  rely on commercial MIP solvers such as Gurobi. While these state-of-the-art global optimization approaches show very promising results, they are still relatively slow for practical usage . For example, our experiments show that these methods cannot terminate in two hours for typical instances with *p* ∼ 104. On the other hand, the fast Lasso solvers, e.g., `glmnet`, and local optimization methods for, such as `L0Learn`, can handle much larger instances, and they typically terminate in the order of milliseconds to seconds. Our goal in this paper is to advance the computational methodology for the global optimization of Problem . In particular, we aim to (i) reduce the run time for solving problem instances with *p* ∼ 104 from hours to seconds, and (ii) scale to larger problem instances with *p* ∼ 107 in reasonable times (order of minutes to hours). To this end, we propose a specialized nonlinear branch-and-bound (BnB) framework that does not rely on commercial MIP solvers. We employ a first-order method, which carefully exploits the problem structure, to solve the node relaxations. This makes our approach quite different from prior work on global optimization for Problem, which rely on commercial MIP solvers, e.g., Gurobi and CPLEX. These MIP solvers are also based on a BnB framework, but they are equipped with general-purpose relaxation solvers and heuristics that do not take into account the specific structure in Problem. Our BnB solves a mixed integer second order cone program (MISOCP) that is based on a perspective reformulation of Problem. The algorithm exploits the sparsity structure in the problem during different stages: when solving node relaxations, branching, and obtaining upper bounds. The continuous node relaxations that appear in our BnB have not been studied at depth in earlier work. A main contribution of our work is to show that these relaxations, which involve seemingly complicated linear and conic constraints, can be efficiently handled using a primal coordinate descent (CD)-based algorithm. Indeed, this represents a radical change from the primal-dual relaxation solvers commonly used in state-of-the-art MIP solvers. Our choice of CD is motivated by its ability to effectively share information across the BnB nodes (such as warm starts), and more generally by its high scalability in the context of sparse learning, e.g., see. Along with CD, we propose additional strategies, namely, active set updates and gradient screening, which reduce the coordinate update complexity by exploiting the information shared across the BnB tree. Although our CD-based algorithm for solving BnB node relaxations is highly scalable, it only generates primal solutions. However, dual bounds are required for search space pruning in BnB. Thus, we propose a novel method to efficiently generate dual bounds from the primal solutions. We analyze these dual bounds and prove that their tightness is not affected by the number of features *p*, but rather by the number of nonzeros in the primal solution. This result serves as a theoretical justification for why our CD-based algorithm can lead to tight dual bounds in high dimensions. **Contributions and Structure:** We summarize our key contributions below. * We formulate Problem as a MISOCP, based on a perspective formulation. We provide a new analysis of the relaxation tightness, which identifies parameter ranges for which the perspective formulation can outperform popular formulations (see Section [sec:formulations]). * To solve the MISOCP, we design a specialized nonlinear BnB, with the following main contributions (see Section [sec:bnb]): + We show that the node relaxations, which involve linear and conic constraints, can be reformulated as a least squares problem with a non-differentiable but separable penalty. To solve the latter reformulation, we develop a primal CD algorithm, along with active set updates and gradient screening that use information shared across the BnB tree to reduce the coordinate update cost. + We develop a new efficient method for obtaining dual bounds from the primal solutions. We analyze these dual bounds and show that their tightness depends on the sparsity level rather than *p*. + We introduce efficient methods that exploit sparsity when selecting branching variables and obtaining incumbents[3](#fn3). * We perform a series of experiments on high-dimensional synthetic and real datasets, with *p* up to 8.3 × 106. We study the effect of the regularization parameters and dataset characteristics on the run time, and perform ablation studies. The results indicate that our approach can be 5000x faster than the state of the art in some settings, and is capable of handling difficult statistical instances which were virtually unsolvable before (See Section [sec:experiments]). We open source the implementation through our toolkit L0BnB: <https://github.com/alisaab/L0BnB> **Related Work:** As mentioned earlier, an impressive line of recent work considers solving Problem, or its cardinality-constrained variant, to optimality. used Gurobi on a Big-M formulation, which can handle *n* ∼ *p* ∼ 103 in the order of minutes to hours. scale the problem even further by applying outer-approximation (using Gurobi) on a boolean reformulation  of the problem. Their approach can handle *p* ∼ 105 in the order of minutes when *n* and *λ*2 are sufficiently large, and the feature correlations are sufficiently small. This outer-approximation approach has also been generalized to sparse classification in. consider solving a perspective formulation of the problem directly using Gurobi and reported timings that compare well with —the largest problem instances they consider have *p* ∼ 103.  show that a variant of Problem for classification can be solved through a sequence of MIPs (solved using Gurobi), each having a small number of binary variables, as opposed to *p* binary variables in the common approaches. Their approach can handle *n* = 103 and *p* = 50, 000 in minutes if the feature correlations are sufficiently small. Our specialized BnB, on the other hand, can solve all the instances mentioned above with speed-ups that exceed 5000x, and can scale to problems with *p* ∼ 107. Moreover, our numerical experiments show that, unlike prior work, our BnB can handle difficult problems with relatively small *λ*2 and/or high feature correlations. In addition to the global optimization approaches discussed above, there is an interesting body of work in the broader optimization community on improved relaxations for sparse ridge regression, e.g.,, and algorithms that locally optimize an ℓ0-based objective. There is also a rich literature on solving mixed integer nonlinear programs (MINLPs) using BnB, e.g., see . Our approach is based on the nonlinear BnB framework , where a nonlinear subproblem is solved at every node of the search tree. Interior point methods are a popular choice for these nonlinear subproblems, especially for MISOCPs, e.g., they are used in MOSEK and are also one of the supported options in CPLEX. Generally, interior point based nonlinear solvers are not as effective in exploiting warm starts and sparsity as linear programming solvers, which led to an alternative approach known as outer-approximation (OA). In OA, a sequence of relaxations, consisting of mixed integer linear programs, are solved until converging to a solution of the MINLP. State-of-the-art solvers such as BARON and Gurobi apply OA on extended formulations— laid the ground work for this approach. There is also a line of work on specialized OA reformulations and algorithms for mixed integer conic programs (which include MISOCPs), e.g., see and the references therein. In this paper, we pursue a different approach: making use of problem-specific structure, we show that the relaxation of the MISOCP can be effectively handled by our proposed CD-based algorithm. **Notation and Supplementary Material:** We denote the set {1, 2, …, *p*} by [*p*]. For any set *A*, the complement is denoted by *A**c*. We let ∥ ⋅ ∥*q* denote the standard ℓ*q* norm with *q* ∈ {0, 1, 2, ∞}. For any vector *v* ∈ R*k*, sign(*v*) ∈ R*k* refers to the vector whose *i*th component is given by sign(*v**i*) = *v**i*/∣*v**i*∣ if *v**i* ≠ 0 and sign(*v**i*) ∈ [ − 1, 1] if *v**i* = 0. We denote the support of *β* ∈ R*p* by Supp(*β*) = {*i* : *β**i* ≠ 0, *i* ∈ [*p*]}. For a set *S* ⊆ [*p*], use *β**S* ∈ R∣*S*∣ to denote the subvector of *β* with indices in *S*. Similarly, *X**S* refers to the submatrix of *X* whose columns correspond to *S*. For a scalar *a*, we denote [*a*]+ = max{*a*, 0}. Given a set of real numbers {*a**i*}*i* = 1*N* and a scalar *c*, we use {*a**i*}*i* = 1*N* ⋅ *c* to denote {*c**a**i*}*i* = 1*N*. The proofs of all propositions, lemmas, and theorems are in the appendix. MIP Formulations and Relaxations ================================ In this section, we present MIP formulations for Problem and study their corresponding relaxations. MIP Formulations ---------------- **The Big-M Formulation:** We assume that there is a finite scalar *M* > 0 (a-priori specified) such that an optimal solution of Problem , say *β*\*, satisfies: ∥*β*\*∥∞ ≤ *M*. This allows for modeling as a mixed integer quadratic program (MIQP) using the following Big-M formulation: $$\label{eq:bigM} \begin{aligned} \text{B}(M): ~~~ \min\_{\beta, z} ~~&~\frac{1}{2} \| y - X \beta \|^{2}\_2 + \lambda\_0 \sum\_{i\in [p]} z\_i+ \lambda\_2 \|\beta\|^{2}\_2 \\ \text{s.t.}~~&~~ -M z\_i \leq \beta\_i \leq { M} z\_i,~ i \in [p] \\ &~~z\_i \in \{0,1\}, ~ i \in [p], \end{aligned}$$ where each binary variable *z**i* controls whether *β**i* is zero or not via the first constraint in —i.e., if *z**i* = 0 then *β**i* = 0. Such Big-M formulations are widely used in mixed integer programming and have been recently explored in multiple works on ℓ0 regularization, e.g.,. See for a discussion on how to estimate *M* in practice. **The Perspective Formulation:** In MIP problems where bounded continuous variables are activated by indicator variables, perspective reformulations can lead to stronger MIP relaxations and thus improve the run time of BnB algorithms. Here, we apply a perspective reformulation to the ridge term ∥*β*∥22 in Problem. Specifically, we introduce the auxiliary continuous variables *s**i* ≥ 0, *i* ∈ [*p*] and rotated second order cone constraints *β**i*2 ≤ *s**i**z**i*, *i* ∈ [*p*]. We then replace the term ∥*β*∥22 with ∑*i* ∈ [*p*]*s**i*. Thus, each *s**i* takes the place of *β**i*2. This leads to the following reformulation of : $$\label{eq:conicbigM} \begin{aligned} \text{PR}(M): ~~~ \min\_{\beta, z, s} ~~&~ \frac{1}{2} \| y - X \beta \|^{2}\_2 + \lambda\_0 \sum\_{i \in [p]} z\_i+ \lambda\_2 \sum\_{i \in [p]} s\_i \\ \text{s.t.}~~&~~ \beta\_i^2 \leq s\_i z\_i, ~ i \in [p]\\ & ~~-M z\_i \leq \beta\_{i} \leq M z\_i, ~ i \in [p] \\ &~~ z\_i \in \{0,1\}, s\_i \geq 0, ~ i \in [p]. \end{aligned}$$ Problem  can be expressed as a MISOCP. Similar to , formulation  is equivalent to Problem (as long as *M* is suitably chosen). Algorithms for formulation  will be the main focus of our paper. If we set *M* = ∞ in , then the constraints *β**i* ∈ [ − *M**z**i*, *M**z**i*], *i* ∈ [*p*] can be dropped, which makes PR(∞) independent of a Big-M parameter. If *λ*2 > 0, then PR(∞) is equivalent to —this holds since *β**i*2 ≤ *s**i**z**i* enforces $z\_i = 0 \implies \beta\_i = 0$. We note that have studied Problem and focused on the special case of PR(∞). shows that PR(∞) is equivalent to the pure binary formulations considered in. We also note that have considered a similar perspective formulation for the cardinality constrained variant of Problem. In Proposition [prop:v1v2] below, we present new bounds that quantify the relaxation strengths of PR(*M*), PR(∞), and B(*M*). Moreover, we are the first to present a tailored BnB procedure for formulation. Relaxation of the Perspective Formulation ----------------------------------------- In this section, we study the *interval relaxation* of Problem, which is obtained by relaxing all binary *z**i*’s to the interval [0, 1]. Specifically, we present a new compact reformulation of the interval relaxation that leads to useful insights and facilitates our algorithm development. We also discuss how this reformulation compares with the interval relaxations of B(*M*) and PR(∞). Theorem [theorem:relaxation] shows that the interval relaxation of  can be reformulated purely in the *β* space. This leads to a regularized least squares criterion, where the regularizer involves the reverse Huber  penalty—a hybrid between the ℓ1 and ℓ2 (squared) penalties. The reverse Huber penalty B : R → R, is given by: $$\begin{aligned} \label{eq:reverse\_huber} \mathcal{B}(t) = \begin{cases} |t| & |t| \leq 1 \\ (t^2+1)/{2} & |t| \geq 1. \end{cases} \end{aligned}$$ (Reduced Relaxation) [theorem:relaxation] Let us define the functions *ψ*, *ψ*1, *ψ*2 as $$\begin{aligned} \psi(\beta\_i; \lambda\_0, \lambda\_2, M) = \begin{cases} \psi\_1(\beta\_i; \lambda\_0, \lambda\_2) := 2 \lambda\_0 \mathcal{B}( \beta\_i \sqrt{{\lambda\_2}/{\lambda\_0}}) & \text{if}~\sqrt{{\lambda\_0}/{\lambda\_2}} \leq M \\ \psi\_2(\beta\_i; \lambda\_0, \lambda\_2, M) := (\lambda\_0/M + \lambda\_2 M) |\beta\_i| & \text{if}~\sqrt{{\lambda\_0}/{\lambda\_2}} > M. \end{cases} \end{aligned}$$ The interval relaxation of is equivalent to: $$\begin{aligned} \label{eq:relaxation} \min\_{\beta \in \mathbb{R}^p} ~ F(\beta):= \frac{1}{2} \| y - X \beta \|\_2^2 + \sum\_{i \in [p]} \psi(\beta\_i; \lambda\_0, \lambda\_2, M) ~~ \text{s.t.} ~~ \|\beta\|\_{\infty} \leq M, \end{aligned}$$ and we let *V*PR(*M*) denote the optimal objective value of . The reduced formulation  has an important role in both the subsequent analysis and the algorithmic development in Section [sec:bnb]. Theorem [theorem:relaxation] shows that the conic and Big-M constraints in the relaxation of can be completely eliminated, at the expense of introducing (in the objective) the non-differentiable penalty function ∑*i**ψ*(*β**i*; *λ*0, *λ*2, *M*) which is separable across the *p* coordinates *β**i*, *i* ∈ [*p*]. Depending on the value of $\sqrt{{\lambda\_0}/{\lambda\_2}}$ compared to *M*, the penalty *ψ*(*β**i*; *λ*0, *λ*2, *M*) is either the reverse Huber penalty (i.e., *ψ*1) or the ℓ1 penalty (i.e., *ψ*2), both of which are sparsity-inducing. In Figure [fig:penalties] (left panel), we plot *ψ*1(*β*; *λ*0, *λ*2) for *λ*2 = 1 at different values of *λ*0. In Figure 1 (right panel), we plot *ψ*(*β*; *λ*0, *λ*2, *M*) at *λ*0 = *λ*2 = 1 and for different values of *M*. The appearance of a pure ℓ1 penalty in the objective is interesting in this case since the original formulation in has a ridge term in the objective. Informally speaking, when $\sqrt{{\lambda\_0}/{\lambda\_2}} > M$, the constraint ∣*β**i*∣ ≤ *M**z**i* becomes active at any optimal solution, which turns the ridge term into an ℓ1 penalty—for further discussion on this matter, see the proof of Theorem [theorem:relaxation]. | | | | --- | --- | | | | [fig:penalties] Next, we analyze the tightness of the perspective relaxation in . **Tightness of the Perspective Relaxation :** has shown that the interval relaxation of PR(∞) can be written as: $$\begin{aligned} \label{eq:conic\_relaxation} V\_{\text{PR}(\infty)} = \min\_{\beta \in \mathbb{R}^p} G(\beta):= \frac{1}{2} \| y - X \beta \|\_2^2 + \sum\_{i\in [p]} \psi\_1(\beta\_i; \lambda\_0, \lambda\_2), \end{aligned}$$ where *ψ*1(*β**i*; *λ*0, *λ*2) is defined in Theorem [theorem:relaxation]. Note that can also be obtained from Theorem [theorem:relaxation] (with *M* = ∞). For $\sqrt{{\lambda\_0}/{\lambda\_2}} \leq M$, the relaxation of PR(*M*) in Theorem [theorem:relaxation] matches that in, but with the additional box constraint ∥*β*∥∞ ≤ *M*. When *M* is large, this box constraint becomes inactive, making the interval relaxations of PR(*M*) and PR(∞) equivalent. However, when $\sqrt{{\lambda\_0}/{\lambda\_2}} > M$, the interval relaxation of PR(*M*) can have a (strictly) larger objective than that of both B(*M*) and PR(∞), as we show in Proposition [prop:v1v2]. [prop:v1v2] Let *V*B(*M*) denote the optimal objective of the interval relaxation of B(*M*). Let *β*\* be an optimal solution to, and define the function $h(\lambda\_0, \lambda\_2, M) = {\lambda\_0}/{M} + \lambda\_2 M - 2 \sqrt{\lambda\_0 \lambda\_2}$. Then, the following holds for $\sqrt{{\lambda\_0}/{\lambda\_2}} > M$: $$\begin{aligned} & V\_{\text{PR}(M)} \geq V\_{\text{B}(M)} + \lambda\_2 (M \| \beta^{\*} \|\_1 - \| \beta^{\*} \|\_2^2) \label{eq:v14}\\ & V\_{\text{PR}(M)} \geq V\_{\text{PR}(\infty)} + h(\lambda\_0, \lambda\_2, M) \| \beta^{\*} \|\_1. \label{eq:v123} \end{aligned}$$ We make a couple of remarks. For bound, we always have (*M*∥*β*\*∥1 − ∥*β*\*∥22) ≥ 0 since ∥*β*\*∥∞ ≤ *M*. If there is an *i* ∈ [*p*] with 0 < ∣*β**i*\*∣ < *M*, then (*M*∥*β*\*∥1 − ∥*β*\*∥22) > 0, and consequently *V*PR(*M*) > *V*B(*M*) (as long as *λ*2 > 0). In bound, for $\sqrt{{\lambda\_0}/{\lambda\_2}} > M$, *h*(*λ*0, *λ*2, *M*) is strictly positive and monotonically decreasing in *M*, which implies that *V*PR(*M*) > *V*PR(∞) (as long as *β*\* ≠ 0). In Section [sec:timingcomparison], our experiments empirically validate Proposition [prop:v1v2]: using PR(*M*) with a sufficiently tight (but valid) *M* can speed up the same BnB solver by more than 90x compared to PR(∞). Note that Proposition [prop:v1v2] does not directly compare *V*PR(∞) with *V*B(*M*). In Proposition [prop:relaxobj], we establish a new result which compares *V*PR(∞) with *V*B(*M*). Before we present Proposition [prop:relaxobj], we introduce some notation. Let S(*λ*2) be the set of optimal solutions (in *β*) of the interval relaxation of PR(∞); and define $$\begin{aligned} \label{eq:lambda2star} \mathcal{L}(M) := \{ \lambda\_2 > 0 \ | \ \exists \beta \in \mathcal{S}(\lambda\_2) \text{ s.t. } \| \beta \|\_{\infty} \leq M \}. \end{aligned}$$ [prop:relaxobj] Let L(*M*) be as defined in. Then, the following holds: $$\begin{aligned} & V\_{\text{B}(M)} \geq V\_{\text{PR}(\infty)}, \text{ if } M \leq \tfrac12\sqrt{{\lambda\_0}/{\lambda\_2}} \label{eq:relax\_obj\_1} \\ & V\_{\text{B}(M)} \leq V\_{\text{PR}(\infty)}, \text{ if } M \geq \sqrt{{\lambda\_0}/{\lambda\_2}} \text{ and } \lambda\_2 \in \mathcal{L}(M). \label{eq:relax\_obj\_3} \end{aligned}$$ Proposition [prop:relaxobj] implies that if *λ*2 is relatively small (with other parameters remaining fixed), then the relaxation of the Big-M formulation (with objective *V*B(*M*)) will have a higher objective than the relaxation of PR(∞). On the other hand, if *λ*2 is sufficiently large, then the relaxation of PR(∞) will have a higher objective. We note that the result of Proposition [prop:relaxobj] applies for any *M* ≥ 0, even if it is mis-specified. However, in the case of mis-specification, *V*B(*M*) (and also *V*PR(*M*)) may no longer correspond to a valid relaxation of Problem. In contrast, *V*PR(∞) is always a valid relaxation of . A Specialized Branch-and-Bound (BnB) Framework ============================================== In this section, we develop a specialized nonlinear BnB framework for solving the perspective formulation in. First, we briefly recall the high-level mechanism behind nonlinear BnB, in the context of our problem. **Nonlinear BnB at a Glance:** The algorithm starts by solving the (nonlinear) interval relaxation of, i.e., the root node. Then, it selects a branching variable, say variable *j* ∈ [*p*], and creates two new nodes (optimization subproblems): one node with *z**j* = 0 and another with *z**j* = 1, where all the other *z**i*’s are relaxed to the interval [0, 1]. For every unvisited node, the algorithm proceeds recursively, i.e., by solving an optimization subproblem at the current node and then branching on a new variable to create two new nodes. This leads to a search tree with nodes corresponding to optimization subproblems and edges representing branching decisions. To reduce the size of the search tree, BnB prunes a node (i.e., does not branch on it) in either one of the following situations: (i) an optimal solution to the relaxation at the current node has an integral *z* or (ii) the objective of the current relaxation exceeds the best available upper bound on . In case (ii), the relaxation need not be solved exactly: lower bounds on the relaxation’s objective (a.k.a. dual bounds) can be used for pruning. However, in case (i), if the dual bound does not exceed the best upper bound, the node should be solved to optimality in order to ensure correctness of the pruning decision[4](#fn4). As BnB explores more nodes, its (global) lower bound is guaranteed to converge to the optimal objective of. In practice, we can terminate the algorithm early, if the gap between the best lower and upper bounds is below a pre-specified user-defined threshold. **Overview of our Strategies:** The discussion above outlines how nonlinear BnB operates in general. The specific strategies used such as solving the relaxations, passing information across the nodes, and selecting branching variables, can have a key impact on scalability. In the rest of this section, we will give a detailed account of the strategies used in our BnB. We first provide an overview of these strategies: * **A Primal Relaxation Solver**: Unlike state-of-the-art approaches for nonlinear BnB, which employ primal-dual interior point solvers for node relaxations , we rely solely on a primal method which consists of a highly scalable CD-based algorithm. The algorithm solves the node relaxations of in the *β*-space as opposed to the extended (*β*, *s*, *z*) space—these relaxations are variants of the reduced relaxation introduced in. The algorithm heavily shares and exploits warm starts, active sets, and information on the gradients, across the BnB tree. This will be developed in Section [sec:CD]. * **Dual Bounds**: Dual bounds on the objective of node relaxations are required by BnB for search space pruning, yet our relaxation solver works in the primal space for scalability considerations. We develop a new efficient method for obtaining dual bounds from the primal solutions. We provide an analysis of this method and show that the tightness of the dual bounds depends on the sparsity level and *not* on the number of features *p*. See Section [sec:dualbds]. * **Branching and Incumbents**: We present an efficient variant of strong branching, which leverages the solutions and active sets of previous node relaxations to make optimization tractable. Moreover, we employ several efficient heuristics to obtain incumbents. See Section [sec:branching]. For simplicity of exposition, in the remainder of Section [sec:bnb], we assume that the columns of *X* and *y* have unit ℓ2 norm. Primal Relaxation Solver: Active-set Coordinate Descent ------------------------------------------------------- To simplify the presentation, we will focus on solving the root relaxation. To this end, we solve the reduced formulation in the *β*-space . We operate on the reduced formulation compared to the interval relaxation of  in the extended (*β*, *s*, *z*) space due to computational reasons. After solving, we use the resulting solution, say *β*\*, to construct a corresponding solution (*β*\*, *s*\*, *z*\*) to the interval relaxation of —see the proof of Theorem [theorem:relaxation] for how to obtain (*β*\*, *s*\*, *z*\*) from *β*\*. We then use *z*\* for branching. The rest of the nodes in BnB involve fixing some binary variables in to 0 or 1, so their subproblems can be obtained by minor modifications to the root relaxation. For completeness, in Appendix [appendix:nodesubproblems], we discuss how to formulate and solve these node subproblems. Problem  is of the composite form : the objective is the sum of a smooth loss function and a non-smooth but separable penalty. In addition, the feasible set, consisting of the constraints ∣*β**i*∣ ≤ *M*, *i* ∈ [*p*] is separable across the coordinates. This makes Problem amenable to cyclic CD . To our knowledge, the use of cyclic CD for Problem  is novel. We also emphasize that a direct application of cyclic CD to Problem will face scalability issues, and more importantly, it does not readily deliver dual bounds. The additional strategies we develop later in this section are essential for both achieving scalability and obtaining (provably) high quality dual bounds. Cyclic CD visits the coordinates according to a fixed ordering, updating one coordinate at a time, as detailed in Algorithm 1. * **Algorithm 1: Cyclic CD for Relaxation** * **Input:** Initialization *β̂* * **While** not converged: + **For** *i* ∈ [*p*] :  $$\begin{aligned} \label{eq:univariate} \hat{\beta}\_i \gets \operatorname\*{arg\,min}\_{\beta\_i \in \mathbb{R}} F(\hat{\beta}\_1, \dots, \beta\_i, \dots, \hat{\beta}\_p) ~~ \text{ s.t. }~~ |\beta\_i| \leq M. \end{aligned}$$ Every limit point of Algorithm 1 is a global minimizer of, and the algorithm has a sublinear rate of convergence. We will show that the solution of can be computed in closed-form. To this end, since the columns of *X* have unit ℓ2-norm, we note that  is equivalent to: $$\begin{aligned} \label{eq:proximal} \min\_{\beta\_i \in \mathbb{R}}~~\frac{1}{2} ( \beta\_i - \tilde{\beta}\_i )^2 + \psi(\beta\_i; \lambda\_0, \lambda\_2, M) ~~~ \text{ s.t. }~~~ |\beta\_i| \leq M, \end{aligned}$$ where *β̃**i* :  = ⟨*y* − ∑*j* ≠ *i**X**j**β̂**j*, *X**i*⟩. Given non-negative scalar parameters *a* and *m*, we define the *boxed soft-thresholding operator* *T* : R → R as $$T(t; a, m) := \begin{cases} 0 & \text{ if } |t| \leq a \\ (|t| - a) \operatorname{sign}(t) & \text{ if } a < |t| \leq a + m\\ m \operatorname{sign}(t) & \text{ otherwise. } \end{cases}$$ For $\sqrt{{\lambda\_0}/{\lambda\_2}} \leq M$, the solution of is given by: $$\begin{aligned} \label{eq:case1sol} \hat{\beta}\_i = \begin{cases} T(\tilde{\beta}\_i; 2 \sqrt{\lambda\_0 \lambda\_2}, M) & \text{ if } |\tilde{\beta}\_i| \leq 2 \sqrt{\lambda\_0 \lambda\_2} + \sqrt{{\lambda\_0}/{\lambda\_2}} \\ T(\tilde{\beta}\_i(1 + 2 \lambda\_2)^{-1}; 0, M) & \text{ otherwise } \end{cases} \end{aligned}$$ and for $\sqrt{{\lambda\_0}/{\lambda\_2}} > M$, the solution of is: $$\begin{aligned} \label{eq:case2sol} \hat{\beta}\_i = T(\tilde{\beta}\_i; \lambda\_0/M + \lambda\_2 M, M). \end{aligned}$$ Thus, update in Algorithm 1 can be computed in closed-form using expressions and. See Appendix [appendix:thresholding] for a derivation of the update rules. **Warm Starts:** Interior-point methods used in state-of-the-art nonlinear BnB solvers cannot easily use warm starts. On the other hand, cyclic CD has proven to be very effective in exploiting warm starts, e.g., see. For every node in BnB, we use its parent’s primal solution as a warm start. Recall that the parent and children nodes have the same relaxations, except that a single binary variable (used for branching) is fixed to zero or one in the children[5](#fn5). Intuitively, this can make the warm start close to the optimal solution. Moreover, the supports of the optimal solutions of the parent and children nodes are typically very close. We exploit this observation by sharing information about the supports across the BnB tree, as we describe next in Section [sec:activesets]. ### Active Sets Update in Algorithm 1 requires O(*n*) operations, so every full cycle (across all *p* coordinates) of the algorithm has cost of O(*n**p*). This becomes a major bottleneck for large *n* or *p*, since CD can require many cycles before convergence. In practice, the majority of the variables stay zero during the course of Algorithm 1 (assuming that the regularization parameters are chosen so that the optimal solution of the relaxation is sparse, and good warm starts are used). Motivated by this observation, we run Algorithm 1 restricted to a small subset of the variables A ⊆ [*p*], which we refer to as the *active set*, i.e., we solve the following restricted problem: $$\begin{aligned} \label{eq:restricted\_problem} \hat{\beta} \in \operatorname\*{arg\,min}\_{\beta \in \mathbb{R}^{p}} F(\beta) ~~~~ \text{s.t. } ~~~~ \|\beta \|\_{\infty} \leq M, ~~ \beta\_{\mathcal{A}^c} = 0. \end{aligned}$$ After solving, we augment the active set with any variable *i* ∈ Supp(*β̂*)*c* that violates the following optimality condition: $$0 = \operatorname\*{arg\,min}\_{\beta\_i \in \mathbb{R}} F(\hat{\beta}\_1, \dots, \beta\_i, \dots, \hat{\beta}\_p) ~~ \text{ s.t. } ~~ |\beta\_i| \leq M.$$ We repeat this procedure of solving the restricted problem and then augmenting A with violating variables, until there are no more violations. The algorithm is summarized below. * **Algorithm 2: The Active-set Algorithm[6](#fn6)** * **Input:** Initial solution *β̂* and initial active set A. * **Repeat**: + Step 1: Solve using Algorithm 1 to get a solution *β̂*. + Step 2: $\mathcal{V} \gets \{i \in \text{Supp}(\hat{\beta})^c \ | \ 0 \neq \operatorname\*{arg\,min}\_{|\beta\_i| \leq M} F(\hat{\beta}\_1, \dots, \beta\_i, \dots, \hat{\beta}\_p) \}$. + Step 3: If V is empty **terminate**, otherwise, A ← A ∪ V. The quality of the initial active set affects the number of iterations in Algorithm 2. For example, if A is a superset of the support of an optimal solution to, then V in the first iteration of Algorithm 2 will be empty, and the algorithm will terminate in a single iteration. For every node in BnB, we choose the initial active set to be the same as the final active set of the parent node. This works well in practice because parent and children nodes typically have similar supports. For the root relaxation, we obtain the initial active set by choosing a small subset of features which have the highest (absolute) correlation with *y*. In Section [sec:dualbds], we present a novel method that makes use of the updates in Algorithm 2 to obtain provably high-quality dual bounds. We note that our approach goes beyond standard active-set methods. In particular, (i) we use active sets in the context of a BnB tree, percolating information from parents to children (as opposed to warm-start continuation across a grid of regularization parameters); and (ii) we exploit the active sets to deliver dual bounds. In the next remark, we discuss how Step 1 in Algorithm 2 can be performed inexactly. [remark:inexactness] In practice, we solve the restricted optimization problem in Step 1 of Algorithm 2 inexactly. Specifically, in Step 1, we terminate Algorithm 1 when the relative change in the objective values is below a small numerical tolerance[7](#fn7). For Step 3, we ensure that V is (exactly) empty before terminating the algorithm, which guarantees that there are no optimality violations outside the support. Having no optimality violations outside the support will be essential to obtain the tight dual bounds discussed in Section [sec:dualbds]. ### Gradient Screening Algorithm 2 effectively reduces the number of coordinate updates, through restricting optimization to the active set. However, checking the optimality conditions outside the active set, i.e., performing Step 2, requires O(*n**p*) operations. This is a bottleneck when *p* is large, even if a small number of such checks (or passes) are performed. To mitigate this, we present a *gradient screening* method which reduces the time complexity of these optimality checks. Our method is inspired by the gradient screening technique proposed in for a different problem: learning sparse hierarchical interactions via a convex optimization formulation. In the current paper, the optimality checks in Step 2 of Algorithm 2 essentially require computing a gradient of the least squares loss, in order to construct V. Loosely speaking, gradient screening is designed to avoid computing the “non-essential” parts of this gradient by using previously computed quantities in the BnB tree. In the following proposition, we give an explicit way to construct the set V in Step 2 of Algorithm 2. [prop:Vexplicit] Let *β̂* and V be as defined in Algorithm 2, and define *r̂* = *y* − *X**β̂*. Then, the set V can be equivalently written as follows: $$\begin{aligned} \label{eq:V} \mathcal{V} = \{ i \in \text{Supp}(\hat{\beta})^c \ | \ | \langle \hat{r}, X\_i \rangle| > c(\lambda\_0, \lambda\_2, M) \}, \end{aligned}$$ where *X**i* denotes the *i*-th column of *X*; and $c(\lambda\_0, \lambda\_2, M) = 2 \sqrt{\lambda\_0 \lambda\_2}$ if $\sqrt{{\lambda\_0}/{\lambda\_2}} \leq M$, and *c*(*λ*0, *λ*2, *M*) = (*λ*0/*M* + *λ*2*M*) if $\sqrt{{\lambda\_0}/{\lambda\_2}} > M$. By Proposition [prop:Vexplicit], constructing V directly costs O(*n*(*p* − ∥*β̂*∥0)). Next, we will discuss how to compute V with a lower cost, by making use of previously computed quantities. Suppose we have access to a set $\hat{\mathcal{V}} \subseteq [p]$ such that $\mathcal{V} \subseteq \hat{\mathcal{V}}$. Then, we can construct V by restricting the checks in to $\hat{\mathcal{V}}$ instead of Supp(*β̂*)*c*, i.e., the following holds: $$\begin{aligned} \label{eq:Vrestricted} \mathcal{V} = \{ i \in \hat{\mathcal{V}} \ | \ | \langle \hat{r}, X\_i \rangle| > c(\lambda\_0, \lambda\_2, M) \}. \end{aligned}$$ Assuming $\hat{\mathcal{V}}$ is available, the cost of computing V in is $\mathcal{O}(n |\hat{\mathcal{V}}|)$. This cost can be significantly smaller than that of, if $|\hat{\mathcal{V}}|$ is sufficiently small. Next, we present a method that can obtain a relatively small $\hat{\mathcal{V}}$ in practice, thereby speeding up the computation of V. **Computation of $\hat{\mathcal{V}}$:** Proposition [prop:screening] presents a method to construct a set $\hat{\mathcal{V}}$ which satisfies $\mathcal{V} \subseteq \hat{\mathcal{V}}$ (as discussed above), using information from a warm start *β*0 (e.g., the solution of the relaxation from the parent node in BnB). [prop:screening] Let *β̂* and V be as defined in Algorithm 2. Let *β*0 be an arbitrary vector in R*p*. Define *r̂* = *y* − *X**β̂*, *r*0 = *y* − *X**β*0, and *ε* = ∥*X**β*0 − *X**β̂*∥2. Then, the following holds: $$\begin{aligned} \label{eq:V\_hat} \mathcal{\mathcal{V}} \subseteq \hat{\mathcal{V}} := \Big \{ i \in \text{Supp}(\hat{\beta})^c \ | \ | \langle r^0, X\_i \rangle | > c(\lambda\_0, \lambda\_2, M) - \epsilon \Big\}. \end{aligned}$$ The set $\hat{\mathcal{V}}$ defined in depends solely on *ε* and the quantities ∣⟨*r*0, *X**i*⟩∣, *i* ∈ Supp(*β̂*)*c*, which we will assume to be sorted and available along with the warm start *β*0. This is in contrast to a direct evaluation of V in, which requires the costly computation of the terms ∣⟨*r̂*, *X**i*⟩∣, *i* ∈ Supp(*β̂*)*c*. Given the sorted values ∣⟨*r*0, *X**i*⟩∣, *i* ∈ Supp(*β̂*)*c*, we can identify $\hat{\mathcal{V}}$ in with O(log*p*) operations, using a variant of binary search. Thus, the overall cost of computing V using is $\mathcal{O}(n|\hat{\mathcal{V}}| + \log p)$. We also note that in practice the inclusion ${\mathcal V} \subseteq \hat{\mathcal V}$ in is strict. Proposition [prop:screening] tells us that the closer *X**β*0 is to *X**β̂*, the smaller $\hat{\mathcal{V}}$ is, i.e., the lower is the cost of computing V in. Thus, for the method to be successful, we need a good warm start *β*0. In practice, we obtain *β*0 for the current node in BnB from its parent, and we update *β*0 as necessary during the course of Algorithm 2 (as detailed below). Let *ε*gs ∈ (0, 1) be a pre-specified parameter for the gradient screening procedure. The procedure, which replaces Step 2 of Algorithm 2, is defined as follows: 1. **Gradient Screening** 2. If this is the root node and the first iteration of Algorithm 2, then: update *β*0 ← *β̂*, *r*0 = *y* − *X**β*0; compute and sort the terms ∣⟨*r*0, *X**i*⟩∣, *i* ∈ [*p*]; compute V using ; and skip all the steps below. 3. If this is the first iteration of Algorithm 2, get *β*0 from the parent node in the BnB tree. 4. Compute $\hat{\mathcal{V}}$ using and then V using [8](#fn8). 5. If $|\hat{\mathcal{V}}| > \epsilon\_{\text{gs}} p$, update *β*0 ← *β̂*, *r*0 = *y* − *X**β*0; re-compute and sort the terms ∣⟨*r*0, *X**i*⟩∣, *i* ∈ [*p*]. If *X**β*0 is not a good estimate of *X**β̂*, then $\hat{\mathcal{V}}$ might become large. To avoid this issue, we update *β*0 in Step 4 above, every time the set $\hat{\mathcal{V}}$ becomes relatively large ($|\hat{\mathcal{V}}| > \epsilon\_{\text{gs}} p$). The parameter *ε*gs controls how often *β*0 is updated. In our implementation, we use *ε*gs = 0.05 by default, but we note that the parameter can be generally tuned in order to improve the running time. The updated *β*0 will be passed to the children in the BnB tree. While Step 4 above can be costly, it is not performed often in practice as the solutions of the relaxations in the parent and children nodes are typically close. Based on our current implementation and experiments, we see notable benefits from gradient screening when *p* ≥ 104 (e.g., more than 2x speedup for *p* = 106). For smaller values of *p*, we typically observe a small additional overhead from gradient screening. Moreover, we note that gradient screening will increase memory consumption, because each of the open nodes in the tree will need to store a *p*-dimensional vector (consisting of the quantities ∣⟨*r*0, *X**i*⟩∣, *i* ∈ [*p*], which are maintained by gradient screening). Dual Bounds ----------- In practice, we use Algorithm 2 to obtain inexact primal solutions for relaxation, as discussed in Remark [remark:inexactness]. However, dual bounds are needed to perform search space pruning in BnB. Here, we present a new efficient method to obtain dual bounds from the primal solutions. Moreover, we prove that our method can obtain dual bounds whose tightness depends on the sparsity level of the relaxation rather than *p*. We start by introducing a Lagrangian dual of relaxation in Theorem [theorem:duals]. [theorem:duals] For $\sqrt{{\lambda\_0}/{\lambda\_2}} \leq M $, a dual of Problem is given by: $$\begin{aligned} \max\_{ \alpha \in \mathbb{R}^n, \gamma \in {\mathbb{R}}^{p}} & ~~ h\_1(\alpha, \gamma) := - \frac{1}{2} \| \alpha \|\_2^2 - \alpha^T y - \sum\_{i \in [p]} v(\alpha, \gamma\_i), \label{eq:objdualcase1} \end{aligned}$$ where *v* : R*n* + 1 → R is defined as follows: $$\begin{aligned} \label{thm2-defn-v1} v(\alpha, \gamma\_i) := \Big[ \frac{(\alpha^T X\_i - \gamma\_i)^2}{4 \lambda\_2} - \lambda\_0 \Big]\_{+} + M |\gamma\_i|. \end{aligned}$$ Otherwise, if $\sqrt{{\lambda\_0}/{\lambda\_2}} > M $, a dual of Problem is given by $$\label{eq:objdualcase2} \begin{aligned} \max\_{\rho \in \mathbb{R}^n, \mu \in \mathbb{R}^{p}} &~ h\_2(\rho, \mu) := - \frac{1}{2} \| \rho \|\_2^2 - \rho^T y - M \|\mu\|\_1 \\ \text{s.t.}~~&~~ |\rho^T X\_i| - \mu\_i \leq \lambda\_0/M + \lambda\_2 M, ~ i \in [p]. \end{aligned}$$ Let *β*\* be an optimal solution to Problem and define *r*\* = *y* − *X**β*\*. The optimal dual variables for are given by: $$\label{thm2-statement-pr-dual1} \alpha^{\*} = -r^{\*}~~\text{and}~~\gamma^{\*}\_i = \mathds{1}\_{[|\beta\_i^{\*}| = M]}({\alpha}^{\*T} X\_i - 2M\lambda\_2 \operatorname{sign}({\alpha}^{\*T} X\_i)), i \in [p].$$ Moreover, the optimal dual variables for are: $$\begin{aligned} \label{thm2-statement-pr-dual2} \rho^{\*} = -r^{\*}~~~\text{and}~~~\mu^{\*}\_i = \mathds{1}\_{[|\beta\_i^{\*}| = M]}(|\rho^{\*T} X\_i| - \lambda\_0/M - \lambda\_2 M), i \in [p]. \end{aligned}$$ Note that strong duality holds for relaxation since it satisfies Slater’s condition, so the optimal objective of the dual in Theorem [theorem:duals] matches that of. **Dual Feasible Solutions:** Let *β̂* be an inexact primal solution obtained using Algorithm 2 and define *r̂* = *y* − *X**β̂*. We discuss next how to construct a dual feasible solution using *β̂*, i.e., without solving the dual in Theorem [theorem:duals]. Here, we construct a dual solution (*α̂*, *γ̂*) for Problem as follows: $$\begin{aligned} \label{eq:case1dualsol} \hat{\alpha} = - \hat{r} ~~ \text{and} ~~ \hat{\gamma} \in \operatorname\*{arg\,max}\_{\gamma \in \mathbb{R}^p} h\_1(\hat{\alpha}, \gamma). \end{aligned}$$ The choice *α̂* =  − *r̂* is motivated by the optimality conditions in Theorem [theorem:duals], while *γ̂* maximizes the dual objective (with *α* fixed to *α̂*). Note that the constructed solution is (trivially) feasible since the dual in is unconstrained. Since is separable across the *γ**i*’s, *γ̂**i* is equivalently the solution of min*γ**i* ∈ R*v*(*α̂*, *γ**i*), whose corresponding solution is given by: $$\begin{aligned} \label{eq:case1dualsolexplicit} \hat{\gamma}\_i = T(\hat{\alpha}^T X\_i; 2M\lambda\_2,\infty). \end{aligned}$$ Thus, *γ̂* can be obtained in closed-form using. Since *β̂* is the output of Algorithm 2, we know that the corresponding set V in Algorithm 2 is empty. Thus, by Proposition [prop:Vexplicit], we have $|\hat{r}^T X\_i| \leq 2\sqrt{\lambda\_0 \lambda\_2}$ for any *i* ∈ Supp(*β̂*)*c*. Using *r̂* =  − *α̂* and $\sqrt{{\lambda\_0}/{\lambda\_2}} \leq M$ in the previous in inequality, we get ∣*α̂**T**X**i*∣ ≤ 2*M**λ*2. Using the latter bound in, we get that *γ̂**i* = 0 for any *i* ∈ Supp(*β̂*)*c*. However, for *i* ∈ Supp(*β̂*), *γ̂**i* can be potentially nonzero. We construct a dual feasible solution (*ρ̂*, *μ̂*) for  as follows: $$\begin{aligned} \label{eq:case2dualsol} \hat{\rho} = - \hat{r} ~~ \text{and} ~~ \hat{\mu} \in \operatorname\*{arg\,max}\_{\mu \in \mathbb{R}^p} & ~~ h\_2(\hat{\rho}, \mu)~~ \text{ s.t. } ~~ (\hat{\rho}, \mu) \text { is feasible for } \eqref{eq:objdualcase2}. \end{aligned}$$ Similar to, the choice *ρ̂* =  − *r̂* is motivated by the optimality conditions in Theorem [theorem:duals], whereas the choice *μ̂* maximizes the dual objective under the condition that *ρ* =  − *r̂* (while ensuring feasibility). It can be readily seen that *μ̂**i* is given in closed form by: $$\begin{aligned} \label{eq:mu\_hat} \hat{\mu}\_i = \Big [|\hat{\rho}^T X\_i| - \lambda\_0/M - \lambda\_2 M \Big]\_{+}. \end{aligned}$$ Note that for any *i* ∈ Supp(*β̂*)*c*, we have ∣*ρ̂**T**X**i*∣ = ∣*r̂**T**X**i*∣ ≤ *λ*0/*M* + *λ*2*M* (by Proposition [prop:Vexplicit]), which implies that *μ̂**i* = 0. However, for *i* ∈ Supp(*β̂*), *μ̂**i* can be potentially nonzero. **Quality of the Dual Bounds:** In Theorem [theorem:dualitygap], we quantify the tightness of the dual bounds obtained from the dual feasible solutions and. [theorem:dualitygap] Let *α*\*, *γ*\*, *ρ*\*, and *μ*\* be the optimal dual variables defined in Theorem [theorem:duals]. Let *β*\* be an optimal solution to, and *β̂* be an inexact solution obtained using Algorithm 2. Define the primal gap *ε* = ∥*X*(*β*\* − *β̂*)∥2. Let *k* = ∥*β̂*∥0 denote the number of nonzeros in the inexact solution to . For a fixed (*M*, *λ*2), the following holds for the dual solution (*α̂*, *γ̂*) defined in : $$\begin{aligned} \label{eq:dualbd1} & h\_1(\hat{\alpha}, \hat{\gamma}) \geq h\_1(\alpha^{\*}, \gamma^{\*}) - k \mathcal{O}(\epsilon) - k \mathcal{O}(\epsilon^2), \end{aligned}$$ and for the dual solution (*ρ̂*, *μ̂*) defined in, we have: $$\begin{aligned} \label{eq:dualbd2} & h\_2(\hat{\rho}, \hat{\mu}) \geq h\_2(\rho^{\*}, \mu^{\*}) - k \mathcal{O}(\epsilon) - \mathcal{O}(\epsilon^2). \end{aligned}$$ Interestingly, the bounds established in Theorem [theorem:dualitygap] do not depend on *p*, but rather on the support size *k*. Specifically, the constants in O(*ε*) and O(*ε*2) only involve *M* and *λ*2 (these constants are made explicit in the proof). In practice, we seek highly sparse solutions, i.e., *k* ≪ *p*—suggesting that the quality of the dual bounds deteriorates with *k* and not *p*. The main driver behind these tight bounds is Algorithm 2, which performs optimality checks on the coordinates outside the support. If vanilla CD was used instead of Algorithm 2, then the term *k* appearing in the bounds  and will be replaced by *p*, making the bounds loose[9](#fn9). **Efficient Computation of the Dual Bounds:** A direct computation of the dual bound *h*1(*α̂*, *γ̂*) or *h*2(*ρ̂*, *μ̂*) costs O(*n**p*) operations. Interestingly, we show that this cost can be reduced to O(*n* + *n*∥*β̂*∥0) (where we recall that *β̂* is a solution from Algorithm 2). First, we consider the case of $\sqrt{{\lambda\_0}/{\lambda\_2}} \leq M $, where the goal is to compute *h*1(*α̂*, *γ̂*). By Lemma [lemma:v] (in the appendix), we have *v*(*α̂*, *γ̂**i*) = 0 for every *i* ∈ Supp(*β̂*)*c*. Thus, *h*1(*α̂*, *γ̂*) can be simplified to: $$\begin{aligned} \label{eq:h1\_simplified} h\_1(\hat{\alpha}, \hat{\gamma}) = - \frac{1}{2} \|\hat{\alpha} \|\_2^2 - \hat{\alpha}^T y - \sum\_{i \in \text{Supp}(\hat{\beta})} v(\hat{\alpha}, \hat{\gamma}\_i). \end{aligned}$$ Now, we consider the case of $\sqrt{{\lambda\_0}/{\lambda\_2}} > M $, where the goal is to evaluate *h*2(*ρ̂*, *μ̂*). By construction, the solution is dual feasible, which means that the constraints in need not be checked when computing the bound. Moreover, *μ̂**i* = 0 for every *i* ∈ Supp(*β̂*)*c* (see the discussion after ). Thus, the dual bound can be expressed as follows: $$\begin{aligned} \label{eq:h2\_simplified} h\_2(\hat{\rho}, \hat{\mu}) = - \frac{1}{2} \| \hat{\rho} \|\_2^2 - \hat{\rho}^T y - M \sum\_{i \in \text{Supp}(\hat{\beta})} |\hat{\mu}\_i|. \end{aligned}$$ The expressions in and can be computed in O(*n* + *n*∥*β̂*∥0) operations. Branching and Incumbents ------------------------ Many branching strategies for BnB have been explored in the literature, e.g., random branching, strong branching, and pseudo-cost branching. Among these strategies, strong branching has proven to be very effective in minimizing the size of the search tree. Strong branching selects the variable which leads to the maximum increase in the lower bounds of the children nodes. To select such a variable, two temporary node subproblems should be solved for every non-integral variable in the current relaxation. This can become a computational bottleneck, as each temporary subproblem involves solving a nonlinear optimization problem similar to. To address this challenge, we use a fast (approximate) version of strong branching, in which we restrict the optimization in these temporary subproblems to the active set of the current node (instead of optimizing over all *p* variables). In practice, this often leads to very similar search trees compared to exact strong branching, since the active set of the parent is typically close to the support of the children. We obtain the initial upper bound using `L0Learn`, which uses CD and efficient local search algorithms to obtain good quality feasible solutions for Problem. Moreover, at every node in the BnB tree, we attempt to improve the incumbent by making use of the support of a solution to the node relaxation. Specifically, we solve the following ℓ2 regularized least squares problem restricted to the relaxation’s support *S*: $ \min\_{\beta\_{{S}} \in \mathbb{R}^{|{S}|}} \frac{1}{2} \| y - X\_{{S}} \beta\_{{S}} \|^{2}\_2 + \lambda\_2 \|\beta\_{{S}}\|^{2}\_2. $ Since *S* is typically small, this problem can be solved efficiently by inverting a small ∣*S*∣ × ∣*S*∣ matrix. If *S* is similar to the support of the parent node, then a solution for the current node can be computed via a low-rank update . Experiments =========== We perform a series of high-dimensional experiments to study the run time of our BnB, understand its sensitivity to parameter and algorithm choices, and compare to state-of-the-art approaches. While our dataset and parameter choices are motivated by statistical considerations, our focus here is not to study the statistical properties of ℓ0 estimators. We refer the reader to for empirical studies on statistical properties. Experimental Setup ------------------ **Synthetic Data Generation:** We generate a multivariate Gaussian data matrix with samples drawn from MVN(0, Σ*p* × *p*), a sparse coefficient vector *β*† ∈ R*p* with *k*† equi-spaced nonzero entries all set to 1, and a noise vector $\epsilon\_{i} \stackrel{\text{iid}}{\sim} N(0, \sigma^2)$. We denote the support of the true regression coefficients *β*† by *S*†. The response is then obtained from the linear model *y* = *X**β*† + *ε*. We define the *signal-to-noise ratio (SNR)* as follows: SNR = Var(*X**β*†)/*σ*2. Unless otherwise specified, we set *σ*2 to achieve SNR = 5—this is a relatively difficult setting which still allows for full support recovery, under suitable choices of *n*, *p*, and Σ (see for a discussion on appropriate levels of SNR). We perform mean-centering followed by normalization (to have unit ℓ2 norm) on *y* and each column of *X*. To help in exposition, we denote the resulting processed dataset by (*ỹ*, *X̃*) and the (scaled) regression coefficients by *β̃*†. **Warm Starts, *λ*0, *λ*2, and *M*:** The parameters *λ*2 and *M* can affect the run time significantly, so we study the sensitivity to these choices in our experiments. We consider choices that are relevant from a statistical perspective, as we discuss next. For a fixed *λ*2, let *β*(*λ*2) be the solution of ridge regression restricted to the support of the true solution *β̃*†: $$\begin{aligned} \beta(\lambda\_2) \in \operatorname\*{arg\,min}\_{\beta \in \mathbb{R}^p} \frac{1}{2} \| \tilde{y} - \tilde{X} \beta \|^{2}\_2 + \lambda\_2 \|\beta\|^{2}\_2 ~~~~ \text{s.t.} ~~~~ \beta\_{(S^{\dagger})^c} = 0, \end{aligned}$$ We define *λ*2\* as the *λ*2 which minimizes the ℓ2 estimation error of *β*(*λ*2), i.e., $$\begin{aligned} \label{eq:estimation\_error} \lambda\_2^{\*} \in \operatorname\*{arg\,min}\_{\lambda\_2 \geq 0} \| \tilde{\beta}^{\dagger} - \beta(\lambda\_2) \|\_2. \end{aligned}$$ We estimate *λ*2\* using grid search, with 50 points equi-spaced on a logarithmic scale in the range [10− 4, 104]. In the experiments, we report our *λ*2 choices as a fraction or multiple of *λ*2\* (e.g., *λ*2 = 0.1*λ*2\*). Moreover, for each *λ*2, we define *M*\*(*λ*2) = ∥*β*(*λ*2)∥∞ and report our choices of the Big-M in terms of *M*\*(*λ*2)—we also use the notation *M*\* to refer to *M*\*(*λ*2) when *λ*2 is clear from context. Note that if Problem has a unique solution, and this solution has support *S*†, then *M*\*(*λ*2) is the smallest valid choice of *M* in formulation. Unless otherwise specified, for each *λ*2 considered, we fix *λ*0 to a value *λ*0\* (which is a function of *λ*2) that leads to the true support size *k*†. We obtain *λ*0\* using `L0Learn`[10](#fn10). Note that *λ*0\* might not exist in general, but in all the experiments we considered, `L0Learn` was able to such a value. Moreover, Section [sec:varygrid], presents an experiment where *λ*0 is varied over a range that is independent of `L0Learn`. We obtain warm starts from `L0Learn`[11](#fn11) and use them for all the MIP solvers considered. **Solvers and Settings:** Our solver, L0BnB[12](#fn12), is written in Python with critical code sections optimized using Numba. We compare L0BnB with Gurobi (GRB), MOSEK (MSK), and BARON (B) on formulation. We also compare with who solve the cardinality-constrained variant of with the number of nonzeros set to *k*†. In all solvers (including L0BnB), we use the following settings: * Relative Optimality Gap: Given an upper bound UB and a lower bound LB, the relative optimality gap is defined as (UB − LB)/UB. We set this to 1%. * Integer Feasibility Tolerance (*ε*if): This tolerance is used to determine whether a variable obtained from a node relaxation will be declared as integral (for the purpose of branching). Specifically, in our context, if a variable *z**i* is within *ε*if from 0 or 1, then it is declared as integral. We set *ε*if = 10− 4. * Primal-Dual Optimality Gap (*ε*pd): This is the relative gap between the primal and dual bounds at a given node, which is used as termination criterion for the subproblem solver. We set *ε*pd = 10− 5. In L0BnB, this gap is satisfied when an integral solution to a node’s relaxation is encountered. Comparison with State-of-the-art solvers ---------------------------------------- ### Varying Number of Features In this section, we study the scalability of the different solvers in terms of the number of features *p*. We generate synthetic datasets[13](#fn13) with *n* = 103, *p* ∈ {103, 104, 105, 106}, and *k*† = 10. We consider a constant correlation setting, where Σ*i**j* = 0.1 ∀*i* ≠ *j* and 1 otherwise. The parameters *λ*2 and *M* can have a significant effect on the run time. Thus, we report the timings for different choices of these parameters. In particular, in Table [table:fixedM] (top panel), we report the timings for *λ*2 ∈ {0.1*λ*2\*, *λ*2\*, 10*λ*2\*}, where for each *λ*2, we fix *M* = 1.5*M*\*(*λ*2) (where *λ*2\* and *M*\* are defined in Section [section:setup]). In Table [table:fixedM] (bottom panel), we fix *λ*2 = *λ*2\* and report the timings for *M* ∈ {*M*\*(*λ*2), 2*M*\*(*λ*2), 4*M*\*(*λ*2), ∞}. Note that the results for L0BnB in Table [table:fixedM] are without gradient screening; in Table [table:gradinetscreening] in Appendix [appendix:experimentvarypm], we report the timings with gradient screening enabled. Moreover, in Appendix [appendix:experimentvarypm], we report the values of *λ*0\* and *λ*2\*, along with the number of BnB nodes explored. [h] [table:fixedM] l|ccccc|ccccc|ccccc| & & & & L0BnB & GRB & MSK & B & & L0BnB & GRB & MSK & B & & L0BnB & GRB & MSK & B & & **0.7** & 70 & 92 & (4%) & (34%) & **2** & 57 & 154 & - & - & **0.01** & 148 & 28 & (5%) & 0.08 & **3** & (15%) & 1697 & - & (78%) & **12** & (12%) & 3872 & - & - & **0.06** & (11%) & 314 & - & 5 & **34** & - & - & - & (86%) & **545** & - & - & - & - & **0.5** & - & - & - & 8 & **1112** & - & - & - & - & **(23%)** & - & - & - & - & **8** & - & - & - & 46 l|cccc|cccc|cccc|cccc| & & & & & L0BnB & GRB & MSK & B & L0BnB & GRB & MSK & B & L0BnB & GRB & MSK & B & L0BnB & GRB & MSK & B & **0.2** & 27 & 57 & (2%) & **0.8** & 112 & 128 & - & **0.8** & 1219 & 137 & - & **0.8** & 3974 & 91 & - & **0.9** & 10571 & 665 & - & **5** & (24%) & 3260 & - & **5** & (50%) & 3289 & - & **6** & - & 9025 & - & **8** & - & - & - & **70** & - & - & - & **81** & - & - & - & **81** & - & - & - & **121** & - & - & - & **9265** & - & - & - & **10986** & - & - & - & **11010** & - & - & - [h] [table:fixedM] l|cccc|cccc|cccc| & & & & & & & & & & & & & & & & **6 & 95 & 216 & 1079 & **2 & 81 & 273 & - & **0.01 & 2372 & 54 & 0.4 & **14 & - & 5354 & (45%) & **33 & - & 6693 & - & 0.8 & - & 1511 & **0.6 & **255 & - & (35%) & (70%) & **544 & - & (41%) & - & **10 & - & (3%) & 13 & **3468 & - & - & (88%) & **(7%) & - & - & - & **43 & - & - & 4123************************ l|ccc|ccc|ccc|ccc| & & & & & & & MSK & & & MSK & & & MSK & L0BnB & GRB & MSK & **0.7 & 35 & 106 & **1 & 199 & 279 & **1 & 1636 & 259 & **1 & 2307 & 336 & **2 & - & 1909 & **21 & - & 6646 & **23 & - & (7%) & **23 & - & - & **25 & - & (9%) & **543 & - & (59%) & **588 & - & - & **628 & - & - & **309 & - & - & **7180 & - & - & **(3%) & - & - & **(3%) & - & -******************************** The results in the top panel of Table [table:fixedM] indicate significant speed-ups, reaching over 200, 000x compared to Gurobi, 28, 000x compared to MOSEK, and 20, 000x compared to. At *λ*2 = *λ*2\*, L0BnB is the only solver that can handle *p* ≥ 105, and can, in fact, handle *p* = 106 in less than 20 minutes. Recall that *λ*2\* minimizes the ℓ2 estimation error and leads to an estimator that is the closest to the ground truth. When the ridge parameter is weak i.e., for *λ*2 = 0.1*λ*2\*, L0BnB is again the only solver that can handle *p* ≥ 105. When the ridge parameter is large, i.e., for *λ*2 = 10*λ*2\*, the optimization problem seems to become easier: L0BnB can be more than 100x faster compared to *λ*2\*, and can handle up to *p* ≈ 106. The speed-ups for *λ*2 = 10*λ*2\* can be attributed to the fact that a larger *λ*2 adds a large amount of regularization to the objective (via the perspective term)—improving the performance of the relaxation solvers[14](#fn14). It is also worth emphasizing that L0BnB is prototyped in Python, as opposed to the highly efficient BnB routines available in commercial solvers such as Gurobi and MOSEK. Ideally, we desire a solver that can solve Problem  over a range of *λ*2 values, which includes values in the neighborhood of *λ*2\*. However, the results in Table [table:fixedM] suggest that the state-of-the-art methods (except L0BnB) seem to only work for quite large values of *λ*2 (which, in this case, do not correspond to solutions that are interesting from a statistical viewpoint). On the other hand, L0BnB seems to be the only method that can scale to *p* ∼ 106 while being relatively robust to the choice of *λ*2. In the bottom panel of Table [table:fixedM], the results also indicate that L0BnB significantly outperforms Gurobi, MOSEK, and BARON for different choices of *M*. For all the solvers, the run time increases with *M*, and the longest run times are for *M* = ∞. This empirically validates our result in Proposition [prop:v1v2], where we show that for a sufficiently small (but valid) value of *M*, PR(*M*) can be better than PR(∞), in terms of the relaxation quality. However, even with *M* = ∞, L0BnB can solve *p* = 106 in around 3 hours, whereas all other solvers have a 100% gap after 4 hours. We also note that Table [table:fixedM] considers both the two settings in Theorem [theorem:relaxation]: $\sqrt{\lambda\_0/\lambda\_2} > M$ and $\sqrt{\lambda\_0/\lambda\_2} \leq M$. Specifically, we have $\sqrt{\lambda\_0^{\*}/\lambda\_2^{\*}} > M$ for *M* ∈ {*M*\*(*λ*2\*), 1.5*M*\*(*λ*2\*), 2*M*\*(*λ*2\*)}, and $\sqrt{\lambda\_0^{\*}/\lambda\_2^{\*}} \leq M$ for *M* ∈ {4*M*\*(*λ*2\*), ∞}. L0BnB achieves notable improvements over other solvers for both of these settings. ### Varying Signal-to-Noise Ratio (SNR) Here we investigate the effect of SNR on the running time of the different solvers. To this end, we generate synthetic datasets with *n* = 103, *p* = 103, *k*† = 10, under a constant correlation setting, where Σ*i**j* = 0.1 ∀*i* ≠ *j* and 1 otherwise. We vary SNR in {0.5, 1, 2, 3, 4, 5}. We set *λ*0 = *λ*0\*, *λ*2 = *λ*2\* (where *λ*0\* and *λ*2\* depend on SNR), and *M* = 1.5*M*\*(*λ*2). We report the running time versus SNR in Table [table:varysnr]. The corresponding values of *λ*0\* and *λ*2\*, and the number of BnB nodes are reported in Appendix [appendix:varysnr]. The results indicate that L0BnB is much faster, and less sensitive to changes in SNR, compared to the other solvers. For example, L0BnB can handle SNR=0.5 in less than 4 minutes, whereas the fastest competing solver (MOSEK) takes around 46 minutes. [h] [table:varysnr] | SNR | M | L0BnB | GRB | MSK | B | | --- | --- | --- | --- | --- | --- | | 0.5 | 0.269 | **231** | (2%) | 2757 | - | | 1 | 0.307 | **1.7** | 1213 | 1046 | - | | 2 | 0.333 | **1.4** | 119 | 288 | - | | 3 | 0.346 | **1.3** | 102 | 173 | - | | 4 | 0.347 | **1.0** | 77 | 113 | - | | 5 | 0.348 | **0.8** | 77 | 92 | - | ### Varying *λ*0 and *λ*2 In the experiment of Section [sec:timingcomparison], we studied the running time for *λ*0 = *λ*0\* so that the model recovers the true support size. In this experiment, we study the running time over a grid of *λ*0 and *λ*2 values, which includes various support sizes. We consider synthetic instances with *p* ∈ {103, 104}, *n* = 103, *k*† = 10, under a constant correlation setting, where Σ*i**j* = 0.1 ∀*i* ≠ *j* and 1 otherwise. We vary *λ*2 ∈ {0.1, 0.5, 1, 2, 10} ⋅ *λ*2\* and *λ*0 ∈ {0.5, 0.1, 0.01} ⋅ *λ*0m, where *λ*0m is a value of *λ*0 which sets all coefficients to zero. Following [15](#fn15), we use *λ*0m = (2 + 4*λ*2)− 1∥*X**T**y*∥∞2. Next, we describe, how given a pair (*λ*0, *λ*2) in the grid, we estimate the corresponding Big-M value *M*(*λ*0, *λ*2). Let *β*(*λ*0, *λ*2) be the solution of Problem restricted to the true support: $$\beta(\lambda\_0, \lambda\_2) \in \operatorname\*{arg\,min}\_{\beta} \frac{1}{2} \| y - X \beta \|^{2}\_2 + \lambda\_0 \| \beta \|\_0+ \lambda\_2 \|\beta\|^{2}\_2 ~~ \text{s.t.} ~~ \beta\_{(S^{\dagger})^c} = 0.$$ We compute *β*(*λ*0, *λ*2) exactly using formulation with *M* = ∞. We then compute an estimate of the Big-M value: *M̃*(*λ*0, *λ*2) :  = ∥*β*(*λ*0, *λ*2)∥∞. For every (*λ*0, *λ*2) in the grid, we solve Problem for *M*(*λ*0, *λ*2) = 1.5*M̃*(*λ*0, *λ*2). In Table [table:varyl0l2], we report the running time for *p* = 103 (top panel) and *p* = 104 (bottom panel). The values of *λ*2\* and *λ*0, along with the number of BnB nodes explored, are reported in Appendix [appendix:varyl0l2]. In Table [table:varyl0l2], for all values of *λ*2 considered and *λ*0 ∈ {0.5, 0.1} ⋅ *λ*0m, L0BnB is faster than the competing methods, with speed-ups exceeding 10,000x compared to both Gurobi and MOSEK. However, for *λ*0 = 0.01*λ*0m, none of the solvers are able to solve the problem in 1 hour, and the gaps seem to be comparable. We also note that for *p* = 104, L0BnB is able to solve (to optimality) 13 out of the 20 instances in the 1 hour limit. In contrast, Gurobi could not solve any of the instances and MOSEK could only solve 6. [htbp] [table:varyl0l2] Sensitivity to Data Parameters ------------------------------ We study how L0BnB’s running time is affected by the following data specific parameters: number of samples (*n*), feature correlations (Σ), and number of nonzero coefficients (*k*†). In the experiments below, we fix *λ*2 = *λ*2\* and *M* = 1.5*M*\*(*λ*2\*). For all the experiments in this section, the values of *λ*0, *λ*2, and *M* used are reported in Appendix [appendix:L0BnBexps]. **Number of Samples:** We fix *k*† = 10, *p* = 104, and consider a constant correlation setting Σ*i**j* = 0.1 for *i* ≠ *j* and 1 otherwise[16](#fn16). The timings for *n* ∈ {102, 103, 104, 105} are in Table [table:sensitivity]. The results indicate that the problem can be solved in reasonable times (order of seconds to minutes) for *n* ≥ 103. The problems can be solved the fastest when *n* is close to *p*, and the extreme cases *n* = 102 and *n* = 105 are the slowest. We contend that for large *n*, the CD updates (which cost O(*n*) each) become a bottleneck. For *n* = 102, the CD updates are cheaper, however, the size of the search tree is significantly larger—suggesting that a small value of *n* can lead to loose relaxations and require more branching. Note also that the underlying statistical problem is the most difficult for *n* = 102 compared to other larger values of *n*. **Feature Correlations:** We generate synthetic datasets with *p* ∈ {104, 105}, *n* = 103, *k*† = 10, and Σ = Diag(Σ(1), Σ(2), …, Σ(*k*†)) is block-diagonal. Such block structures are commonly used in the sparse learning literature . Each Σ(*l*) is a correlation matrix with the same dimension. Given a correlation parameter *ρ* ∈ (0, 1), for each *l* ∈ [*k*†], we assume that Σ*i**j*(*l*) = *ρ*∣*i* − *j*∣ for any *i*, *j* ∈ [*p*/*k*†], i.e., the correlation is exponentially decaying in each block. We report the timings for different values of the parameter *ρ* in Table [table:sensitivity]. The results indicate that higher correlations lead to an increase in the run time, but even the high correlation instances can be solved in reasonable time. For example, when *p* = 104, *ρ* = 0.9 can be solved in less than 10 minutes. When *p* = 105, *ρ* = 0.8 can be solved in around 13 minutes, and *ρ* = 0.9 can be solved to a 13% gap in 2 hours. **Sparsity of the Regression Coefficients:** We consider datasets with *n* = 103, *p* = 104, and the same correlation setting as the experiment for the number of samples. The results in Table [table:sensitivity] show that problems with 15 nonzeros can be handled in 166 seconds, and for 20 and 25 nonzeros, decent gaps ( ≤ 10%) can be obtained in 2 hours. We also note that for larger *λ*2 or tighter *M* choices, larger values of *k*† can be handled. [h] [table:sensitivity] |c|c|c|c|c| $\boldsymbol n$ & 102 & 103 & 104 & 105 **t & (42%) & 11 & 30 & 1701** |c|c|c|c|c|c|c| $\boldsymbol \rho$ & 0.1 & 0.3 & 0.5 & 0.7 & 0.8 & 0.9 **t(*p* = 104) & 4 & 4 & 5 & 16 & 55 & 530 **t(*p* = 105) & 35 & 39 & 60 & 231 & 808 & (13%)**** |c|c|c|c|c| $\boldsymbol k^{\dagger}$ & 10 & 15 & 20 & 25 **t & 4 & 166 & (10%) & (6%)** Real Data and Ablation Studies (Algorithm Settings) --------------------------------------------------- Here we investigate the run time of L0BnB on the Riboflavin dataset —a genetics dataset used for predicting Vitamin B2 production levels. The original dataset has *p* = 4088 and *n* = 71. We augment the dataset with pairwise feature interactions to get *p* = 8, 362, 004. We mean center and normalize the response and the columns of the data matrix. We then run 5-fold cross-validation in `L0Learn` (with default parameters) to find the optimal regularization parameters *λ̂*0 and *λ̂*2. We set *M* to be 10 times the ℓ∞ norm of the solution obtained from cross-validation. In L0BnB, we solve the problem for *λ*2 ∈ {0.1*λ̂*2, *λ̂*2} and vary *λ*0 to generate solutions of different support sizes. The run time, for each *λ*2, as a function of the support size is reported in Figure [fig:real]. Interestingly, at *λ̂*2, all the support sizes obtained (up to 15 nonzeros) can be handled in less than a minute. Moreover, the increase in time is relatively slow as the number of nonzeros increases. When *λ*2 becomes smaller (i.e., *λ*2 = 0.1*λ̂*2), the run times increase (though they are reasonable) as the problem becomes more difficult. When an optimal solution has six nonzeros, L0BnB for *λ*2 = 0.1*λ̂*2 is approximately 20x slower than *λ*2 = *λ̂*2. We perform an ablation study to measure the effect of key choices in our BnB on the run time. Particularly, we consider the following changes: (i) replacing our relaxation solver with MOSEK, (ii) turning off warm starts in our solver, and (iii) turning off active sets in our solver. We measure the run time before and after these changes on synthetic data with *n* = 103, *k*† = 10, Σ*i**j* = 0.1 for all *i* ≠ *j* and 1 otherwise. We use the parameters *λ*0 = *λ*0\*, *λ*2 = *λ*2\*, and *M* = 1.5*M*\*(*λ*2\*) (with the same values as those used in the experiment of Section [sec:timingcomparison]). The run times for different choices of *p* are reported in Table [table:ablation]. The results show that replacing our relaxation solver with MOSEK will slow down the BnB by more than 1200x at *p* = 104. The results for MOSEK are likely to be even slower for *p* = 106 as it still had a 100% gap after 2 hours. We note that MOSEK employs a state-of-the-art conic optimization solver (based on an interior point method). The significant speed-ups here can be attributed to our CD-based algorithm, which is designed to effectively exploit the sparsity structure in the problem. The results also indicate that warm starts and active sets are important for run time, e.g., removing warm starts and active sets at *p* = 105 can slow down the algorithm by more than 16x and 67x, respectively. [c]0.5 [table:ablation] [c]0.5![](Timings-Real.png "fig:") [fig:real] Conclusion ========== We considered the exact computation of estimators from the ℓ0ℓ2-regularized least squares problem. While current approaches for this problem rely on commercial MIP-solvers, we propose a highly specialized nonlinear BnB framework for solving the problem. A key workhorse in our approach is a fast coordinate descent procedure to solve the node relaxations along with active set updates and gradient screening, which exploit information across the search tree for computational efficiency. Moreover, we proposed a new method for obtaining dual bounds from our primal coordinate descent solutions and showed that the quality of these bounds depend on the sparsity level, rather than the number of features *p*. Our experiments on both real and synthetic data indicate that our method exhibits over 5000x speedups compared to the fastest solvers, handling high-dimensional instances with *p* = 8.3 × 106 in the order of seconds to few minutes. Our method appears to be more robust to the choices of the regularization parameters and can handle difficult statistical problems (e.g., relatively high correlations or small number of samples). Our work demonstrates for the first time that carefully designed first-order methods can be highly effective within a BnB framework; and can perhaps, be applied to more general mixed integer programs involving sparsity. There are multiple directions for future work. recently showed that safe screening rules, which eliminate variables from the optimization problem, can be a very effective preprocessing step for ℓ0ℓ2 regularized regression. One promising direction is to extend such rules to dynamically eliminate variables during the course of BnB. Another important direction is to develop specialized methods that can dynamically infer and tighten the Big-M values used in our formulation. Acknowledgements ---------------- Hussein Hazimeh acknowledges research support from the Office of Naval Research ONR-N000141812298. Rahul Mazumder acknowledges research funding from the Office of Naval Research ONR-N000141812298 (Young Investigator Award), the National Science Foundation (NSF-IIS-1718258) and IBM. The authors would like to thank the referees for their constructive comments, which led to an improvement in the paper. Proofs of Technical Results =========================== **Proof of Theorem [theorem:relaxation]:** The interval relaxation of can be expressed as: [eq:relaxintermediate] $$\begin{aligned} \min\_{\beta} ~~~ & \Big \{ \frac{1}{2} \| y - X \beta \|\_2^2 + \sum\_{i \in [p]} {\min\_{s\_i,z\_i} (\lambda\_0 z\_i + \lambda\_2 s\_i)} \Big \} \\ & \beta\_i^2 \leq s\_i z\_i, ~ i \in [p] \label{eq:secondcone} \\ & -{M} z\_i \leq \beta\_i \leq {M} z\_i, ~ i \in [p] \label{eq:M}\\ & z\_i \in [0,1], s\_i \geq 0, ~ i \in [p] \label{eq:szvars} \end{aligned}$$ Let $\omega(\beta\_i; \lambda\_0, \lambda\_2, M) := \min\_{s\_i,z\_i} (\lambda\_0 z\_i + \lambda\_2 s\_i) ~ \text{s.t.} ~ \eqref{eq:secondcone}, \eqref{eq:M}, \eqref{eq:szvars}.$ Next we obtain an expression for *ω*(*β**i*; *λ*0, *λ*2, *M*). Let (*β**i*, *z**i*, *s**i*) be a feasible solution for . Note that $\hat{z}\_i := \max \{ \frac{\beta\_i^2}{s\_i}, \frac{|\beta\_i|}{{M}} \} $ is the smallest possible value of *z**i*, which satisfies constraints and (for the case of *β**i* = *s**i* = 0, we define *β**i*2/*s**i* = 0). Thus, the objective value corresponding to (*β**i*, *ẑ**i*, *s**i*) is less than or equal to that of a feasible solution (*β**i*, *z**i*, *s**i*). This implies that we can replace constraints and with the constraint $z\_i = \max \{ \frac{\beta\_i^2}{s\_i}, \frac{|\beta\_i|}{{M}} \}$ without changing the optimal objective of the problem. This replacement leads to: $$\omega(\beta\_i; \lambda\_0, \lambda\_2, M) = \min\_{s\_i,z\_i}~ (\lambda\_0 z\_i + \lambda\_2 s\_i) ~~~ \text{s.t.} ~~~ z\_i = \max \Big\{ \frac{\beta\_i^2}{s\_i}, \frac{|\beta\_i|}{{M}} \Big\}, ~ z\_i \in [0,1], ~ s\_i \geq 0.$$ In the above, we can eliminate the variable *z**i*, leading to: $$\begin{aligned} \label{eq:pintermediate} \omega(\beta\_i; \lambda\_0, \lambda\_2, M) = \min\_{s\_i} & \max \Big \{ \underbrace{ \lambda\_0 \frac{\beta\_i^2}{s\_i} + \lambda\_2 s\_i }\_{\text{Case I}}, \underbrace{ \lambda\_0 \frac{|\beta\_i|}{{M}} + \lambda\_2 s\_i}\_{\text{Case II}} \Big \} ~~ \text{s.t.} ~~ s\_i \geq \beta\_i^2, ~ |\beta\_i| \leq M. \end{aligned}$$ Suppose that Case I attains the maximum in. This holds if *s**i* ≤ ∣*β**i*∣*M*. The function $\lambda\_0 \frac{\beta\_i^2}{s\_i} + \lambda\_2 s\_i $ is convex in *s**i*, and the optimality conditions of this function imply that the optimal solution is $s\_i^{\*} = |\beta\_i| \sqrt{{\lambda\_0}/{\lambda\_2}}$ if $|\beta\_i| \leq \sqrt{{\lambda\_0}/{\lambda\_2}} \leq M$ and *s**i*\* = *β**i*2 if $\sqrt{{\lambda\_0}/{\lambda\_2}} \leq |\beta\_i| \leq M$. Plugging *s**i*\* into  leads to $\omega(\beta\_i; \lambda\_0, \lambda\_2, M) = 2 \lambda\_0 \mathcal{B}( \beta\_i \sqrt{{\lambda\_2}/{\lambda\_0}})$, assuming $\sqrt{{\lambda\_0}/{\lambda\_2}} \leq M$. Now suppose Case II attains the maximum in. This holds if *s**i* ≥ ∣*β**i*∣*M*. The function $\lambda\_0 \frac{|\beta\_i|}{M} + \lambda\_2 s\_i$ is monotonically increasing in *s**i*, where *s**i* is lower bounded by ∣*β**i*∣*M* and *β**i*2. But we always have ∣*β**i*∣*M* ≥ *β**i*2 (since ∣*β**i*∣ ≤ *M*), which implies that the optimal solution is *s**i*\* = ∣*β**i*∣*M*. Substituting *s**i*\* into we get *ω*(*β**i*; *λ*0, *λ*2, *M*) = (*λ*0/*M* + *λ*2*M*)∣*β**i*∣, and this holds as long as $\sqrt{{\lambda\_0}/{\lambda\_2}} > M$. **Proof of Proposition [prop:v1v2]:** First, we show . Note that *V*B(*M*) can be simplified to: $$\begin{aligned} \label{eq:bigm\_relax\_temp} V\_{\text{B}(M)} = \min\_{ \|\beta\|\_{\infty} \leq M} H(\beta) := \frac{1}{2} \| y - X \beta \|\_2^2 + \sum\_{i \in [p]} \Big( \frac{\lambda\_0}{M} |\beta\_i| + \lambda\_2 \beta\_i^2 \Big). \end{aligned}$$ Recalling Theorem [theorem:relaxation] and the definition of *F*(*β*) in , note that for $\sqrt{{\lambda\_0}/{\lambda\_2}} > M$: $$\begin{aligned} V\_{\text{PR}(M)} - V\_{\text{B}(M)} \geq F(\beta^{\*}) - H(\beta^{\*}) =& \sum\nolimits\_{i \in [p]} \Big\{ \psi\_2(\beta\_i^{\*}; \lambda\_0, \lambda\_2,M) - \frac{\lambda\_0}{M} |\beta\_i^{\*}| - \lambda\_2 (\beta\_i^{\*})^2 \Big\} \\ =& \lambda\_2 \sum\nolimits\_{i \in [p]} (M|\beta\_i^{\*}| - (\beta\_i^{\*})^2) \end{aligned}$$ where the inequality holds since *β*\*, an optimal solution to , is feasible for . This establishes. We now show . Since ∣*β**i*\*∣ ≤ *M* and $\sqrt{{\lambda\_0}/{\lambda\_2}} > M$, we must have $\psi\_1(\beta\_i^{\*}; \lambda\_0, \lambda\_2) = 2 |\beta\_i^{\*}| \sqrt{\lambda\_0 \lambda\_2}$ (this corresponds to the first case in ). Also recall that *V*PR(∞) = min*β**G*(*β*), where *G*(*β*) is defined in (this follows from applying Theorem [theorem:relaxation] with *M* = ∞). The following then holds: $$\label{append-dummy1} \begin{aligned} V\_{\text{PR}(M)} - V\_{\text{PR}(\infty)} \geq F(\beta^{\*}) - G(\beta^{\*}) =& \sum\nolimits\_{i \in [p]} \Big\{\psi\_2(\beta\_i^{\*}; \lambda\_0, \lambda\_2,M) - \psi\_1(\beta\_i^{\*}; \lambda\_0, \lambda\_2) \Big\}\\ =& h(\lambda\_0, \lambda\_2, M) \| \beta^{\*} \|\_1, \end{aligned}$$ where the inequality holds since *β*\*, an optimal solution to , is feasible for . **Proof of Proposition [prop:relaxobj]:** First, we recall that *V*PR(∞) = min*β**G*(*β*), where *G*(*β*) is defined in, and that *V*B(*M*) is defined in. Define a function *t* : R → R as follows $$t(\beta\_i) := 2 \lambda\_0 \mathcal{B}( \beta\_i \sqrt{{\lambda\_2}/{\lambda\_0}}) - \frac{\lambda\_0}{M} |\beta\_i| - \lambda\_2 \beta\_i^2.$$ Next, we prove . **Proof of :** Suppose that $M \leq \frac{1}{2}\sqrt{{\lambda\_0}/{\lambda\_2}}$ and ∣*β**i*∣ ≤ *M*. Using the fact that $|\beta\_i| \leq \sqrt{{\lambda\_0}/{\lambda\_2}}$ and the definition of B, we have $\mathcal{B}( \beta\_i \sqrt{{\lambda\_2}/{\lambda\_0}}) = \sqrt{{\lambda\_2}/{\lambda\_0}} |\beta\_i|$. This leads to: $$\begin{aligned} \label{eq:t\_first\_part} t(\beta\_i) = \Big(2 \sqrt{\lambda\_0 \lambda\_2} - \tfrac{\lambda\_0}{M} \Big) |\beta\_i| - \lambda\_2 \beta\_i^2 \leq 0, \end{aligned}$$ where the inequality follows since $2 \sqrt{\lambda\_0 \lambda\_2} - {\lambda\_0}/{M} \leq 0$ for $M \leq \frac{1}{2}\sqrt{{\lambda\_0}/{\lambda\_2}}$. Now, let *β*† be an optimal solution to. Then, the following holds: $$\begin{aligned} \label{eq:v13\_part1} V\_{\text{B}(M)} - V\_{\text{PR}(\infty)} \geq H(\beta^{\dagger}) - G(\beta^{\dagger}) = - \sum\nolimits\_{i \in [p]} t(\beta\_i^{\dagger}) \geq 0, \end{aligned}$$ where the first inequality follows from *V*B(*M*) = *H*(*β*†) and *V*PR(∞) ≤ *G*(*β*†). The second inequality in  follows from. This establishes. To show, we will need the following lemma. [lemma:intermediateproofprop1] Let $M \geq \sqrt{{\lambda\_0}/{\lambda\_2}}$, then for any *β**i* ∈ [ − *M*, *M*], we have *t*(*β**i*) ≥ 0. [Proof of Lemma [lemma:intermediateproofprop1]] Suppose that $M \geq \sqrt{{\lambda\_0}/{\lambda\_2}}$ and ∣*β**i*∣ ≤ *M*. There are two cases to consider here: Case (i): $|\beta\_i| \leq \sqrt{{\lambda\_0}/{\lambda\_2}}$ and Case (ii): $|\beta\_i| \geq \sqrt{{\lambda\_0}/{\lambda\_2}}$. For Case (i), from the definition of B, we have $2 \lambda\_0 \mathcal{B}( \beta\_i \sqrt{{\lambda\_2}/{\lambda\_0}}) = 2 \sqrt{\lambda\_0 \lambda\_2} |\beta\_i|$. Therefore, $$t(\beta\_i) = \Big(2 \sqrt{\lambda\_0 \lambda\_2} - \frac{\lambda\_0}{M} \Big) |\beta\_i| - \lambda\_2 \beta\_i^2.$$ In the above, it is easy to check that for $M \geq \sqrt{{\lambda\_0}/{\lambda\_2}}$ and $|\beta\_i| \leq \sqrt{{\lambda\_0}/{\lambda\_2}}$, we have *t*(*β**i*) ≥ 0. Now for Case (ii), we have $2 \lambda\_0 \mathcal{B}( \beta\_i \sqrt{{\lambda\_2}/{\lambda\_0}}) = \lambda\_0 + \lambda\_2 \beta\_i^2$—this leads to $t(\beta\_i) = \lambda\_0 \Big( 1 - \frac{|\beta\_i|}{M} \Big)$, which is non-negative since we assume that ∣*β**i*∣ ≤ *M*. This establishes Lemma [lemma:intermediateproofprop1]. **Proof of :** Define *v*\*(*M*) = min∥*β*∥∞ ≤ *M**G*(*β*) (recall, *G*(*β*) is defined in ) and let *β*\* be an optimal solution i.e., *v*\*(*M*) = *G*(*β*\*). Suppose that $M \geq \sqrt{{\lambda\_0}/{\lambda\_2}}$. Then, the following holds: $$\begin{aligned} \label{eq:relax\_obj\_2} v^{\*}(M) - V\_{\text{B}(M)} \geq G(\beta^{\*}) - H(\beta^{\*}) = \sum\_{i \in [p]} t(\beta\_i^{\*}) \geq 0 \end{aligned}$$ where the first inequality holds since *β*\* is a feasible solution to so *V*B(*M*) ≤ *H*(*β*\*), and the last inequality is due to Lemma [lemma:intermediateproofprop1]. Now suppose that *λ*2 ∈ L(*M*) (defined in ) and let *β̂* ∈ S(*λ*2) (i.e., *β̂* is optimal for ) be such that it satisfies ∥*β̂*∥∞ ≤ *M*. Since *V*PR(∞) ≤ *v*\*(*M*) and *β̂* is feasible for the problem corresponding to *v*\*(*M*), then *v*\*(*M*) = *G*(*β̂*). But by we have *v*\*(*M*) ≥ *V*B(*M*), which combined with *v*\*(*M*) = *G*(*β̂*), leads to *G*(*β̂*) ≥ *V*B(*M*). This establishes (since by definition *G*(*β̂*) = *V*PR(∞)); and completes the proof of Proposition [prop:relaxobj]. **Proof of Proposition [prop:Vexplicit]:** Fix some *i* ∈ Supp(*β̂*)*c*. Recall that the one-dimensional problem: min∣*β**i*∣ ≤ *M**F*(*β̂*1, …, *β**i*, …, *β̂**p*) is equivalent to Problem, where *β̃**i* = ⟨*y* − ∑*j* ≠ *i**X**j**β̂**j*, *X**i*⟩. Since *β̂**i* = 0, we have *β̃**i* = ⟨*r̂*, *X**i*⟩ where, *r̂* = *y* − *X**β̂*. For $\sqrt{{\lambda\_0}/{\lambda\_2}} \leq M$, implies that the solution of is nonzero iff $|\tilde{\beta}\_i| > 2 \sqrt{\lambda\_0 \lambda\_2}$. Similarly, for $\sqrt{{\lambda\_0}/{\lambda\_2}} > M$, by, the solution of is nonzero iff ∣*β̃**i*∣ > *λ*0/*M* + *λ*2*M*. Using these observations in the definition of V in Algorithm 2, leads to the result of the proposition. **Proof of Proposition [prop:screening]:** Fix some *i* ∈ V. Note that ∣⟨*r̂*, *X**i*⟩ − ⟨*r*0, *X**i*⟩∣ = ∣⟨*X**β*0 − *X**β̂*, *X**i*⟩∣ ≤ ∥*X**β*0 − *X**β̂*∥2∥*X**i*∥2 ≤ *ε*. Using the triangle inequality and the bound above, we get: $$\begin{aligned} |\langle \hat{r}, X\_i \rangle| & \leq |\langle {r}^{0}, X\_i \rangle | + | \langle \hat{r}, X\_i \rangle - \langle {r}^{0}, X\_i \rangle | \leq |\langle {r}^{0}, X\_i \rangle | + \epsilon. \end{aligned}$$ Therefore, if *i* ∈ V, i.e., ∣⟨*r̂*, *X**i*⟩∣ > *c*(*λ*0, *λ*2, *M*), then ∣⟨*r*0, *X**i*⟩∣ + *ε* > *c*(*λ*0, *λ*2, *M*), implying that $i \in \hat{\mathcal{V}}$, as defined in . **Proof of Theorem [theorem:duals]:** Problem can be written equivalently as: $$\begin{aligned} \label{eq:primalrewrite} \min\_{\beta \in \mathbb{R}^{p}, r \in \mathbb{R}^n} ~~ & \frac{1}{2} \| r \|\_2^2 + \sum\_{i\in [p]} \psi(\beta\_i; \lambda\_0, \lambda\_2, M) ~~ \text{s.t.} ~~ r = y - X \beta, ~~ | \beta\_i | \leq M, \ \forall i \in [p]. \end{aligned}$$ **Case of $\sqrt{{\lambda\_0}/{\lambda\_2}} \leq M$:** We first consider the case when $\sqrt{{\lambda\_0}/{\lambda\_2}} \leq M$. We dualize all the constraints in , leading to the following Lagrangian dual: $$\begin{aligned} \label{eq:dualintermediate} \max\_{\alpha \in \mathbb{R}^n, \eta \in \mathbb{R}^p\_{\geq 0}} ~~ \min\_{\beta \in \mathbb{R}^p, r \in \mathbb{R}^n} ~~ L(\beta, r, \alpha, \eta) \end{aligned}$$ where *L*(*β*, *r*, *α*, *η*) is the Lagrangian function defined as follows: $$L(\beta, r, \alpha, \eta) := \frac{1}{2} \| r \|\_2^2 + \sum\_{i \in [p]} \psi\_1(\beta\_i; \lambda\_0, \lambda\_2) + \alpha^T(r - y + X \beta) + \sum\_{i \in [p]} \eta\_i (| \beta\_i | - M).$$ Next, we discuss how to solve the inner minimization problem in. Let us write: $$\begin{aligned} \label{eq:inner\_min\_L} \min\_{\beta, r} ~~ L(\beta, r, \alpha, \eta) =\min\_{r} \Big \{ \frac{1}{2} \| r \|\_2^2 + \alpha^T r \Big \} + \sum\_{i \in [p]} \min\_{\beta\_i} \Big \{ D\_i(\beta\_i) \Big \} - \alpha^T y - \sum\_{i \in [p]} M \eta\_i \end{aligned}$$ where *D**i*(*β**i*) :  = *ψ*1(*β**i*; *λ*0, *λ*2) + *α**T**X**i**β**i* + *η**i*∣*β**i*∣. Note that the minimizer wrt *r* is given by *r̃*\* =  − *α*. In what follows, we consider some *i* ∈ [*p*] and derive an optimal solution *β̃**i*\* of min*β**i**D**i*(*β**i*). There are two cases to consider here. $|\beta\_i| \leq \sqrt{\lambda\_0/\lambda\_2}$. Here, $\psi\_1(\beta\_i; \lambda\_0, \lambda\_2) = 2 \sqrt{\lambda\_0 \lambda\_2} |\beta\_i|$. Note that $$\label{thm2-case11} \tilde{\beta}^\*\_{i} = \operatorname\*{arg\,min}\_{\beta\_i} ~ D\_i(\beta\_i) = \begin{cases} 0 & \text{if~~~$2 \sqrt{\lambda\_0 \lambda\_2} + \eta\_i - | \alpha^T X\_i | \geq 0$}\\ -\sqrt{\lambda\_0/\lambda\_2}\operatorname{sign}(\alpha^TX\_{i})& \text{otherwise} \end{cases}$$ and as we will see, the second case above is a special case of Case 2 below. $|\beta\_i| \geq \sqrt{\lambda\_0/\lambda\_2}$. In this case, *ψ*1(*β**i*; *λ*0, *λ*2) = *λ*2*β**i*2 + *λ*0. Note that $$\label{thm2-case12} \tilde{\beta}\_i^{\*}=\operatorname\*{arg\,min}\_{\beta\_i} D\_i(\beta\_i) = \operatorname\*{arg\,min}\_{\beta\_i} \lambda\_2 \beta\_i^2 + \alpha^T X\_i \beta\_i + \eta\_i |\beta\_i| = {T({-\alpha^T X\_i}; {\eta\_i}, \infty)}/{(2\lambda\_2)}$$ where, *T* above is the soft-thresholding operator . But *β̃**i*\* in  should satisfy $|\tilde{\beta}\_i^{\*}| \geq \sqrt{\lambda\_0/\lambda\_2}$ in this case. Using the definition of *T*(.), the latter condition can be written as: $(| \alpha^T X\_i | - \eta\_i)/(2\lambda\_2) \geq \sqrt{\lambda\_0/\lambda\_2}$, which simplifies to $| \alpha^T X\_i | - \eta\_i \geq 2 \sqrt{\lambda\_0 \lambda\_2}$. In this case, *D**i*(*β̃**i*\*) can be written as: $$\begin{aligned} \label{eq:D\_2} D\_i(\tilde{\beta}\_i^{\*}) = - \frac{1}{4 \lambda\_2} \Big( |\alpha^T X\_i| - \eta\_i \Big)^2 + \lambda\_0 ~~ \text{ if } ~~ | \alpha^T X\_i | - \eta\_i \geq 2 \sqrt{\lambda\_0 \lambda\_2}. \end{aligned}$$ Combining  and, we have: $D\_i(\tilde{\beta}\_i^{\*}) = - \Big[ \frac{(|\alpha^T X\_i| - \eta\_i)^2}{4 \lambda\_2} - \lambda\_0 \Big]\_{+}.$ Plugging the above expression of *D**i*(*β̃**i*\*) along with *r̃*\* =  − *α* in, we get the following dual problem: $$\begin{aligned} \label{thm2-append-line1} \max\_{\alpha \in \mathbb{R}^n, \eta \in \mathbb{R}^p\_{\geq 0}} & - \frac{1}{2} \| \alpha \|\_2^2 - \alpha^T y - \sum\_{i=1}^{p} \Big[ \frac{(|\alpha^T X\_i| - \eta\_i)^2}{4 \lambda\_2} - \lambda\_0 \Big]\_{+} - \sum\_{i=1}^{p} M \eta\_i. \end{aligned}$$ Note we can drop the non-negativity constraint *η**i* ≥ 0 above. Using a new variable *γ* in place of *η* (note that *η**i* = ∣*γ**i*∣ for all *i*), Problem  can be reformulated as: $$\begin{aligned} \label{thm2-append-line11} \max\_{\alpha \in \mathbb{R}^n, \gamma \in \mathbb{R}^p } & - \frac{1}{2} \| \alpha \|\_2^2 - \alpha^T y - \sum\_{i=1}^{p} \Big[ \frac{(\alpha^T X\_i - \gamma\_i)^2}{4 \lambda\_2} - \lambda\_0 \Big]\_{+} - \sum\_{i=1}^{p} M |\gamma\_i| \end{aligned}$$ which is the formulation in. We now show . Let (*α*\*, *η*\*) be an optimal solution of . First, it is easy to see $r^{\*} = \operatorname\*{arg\,min}\_{r} L(\beta^{\*}, r, \alpha^{\*}, \eta^{\*}) = - \alpha^{\*}$. For the second part of , note by complementary slackness: if ∣*β**i*\*∣ < *M* then *η**i*\* = 0 and consequently *γ**i*\* = 0. Next, we will derive *γ**i*\* for the case of ∣*β**i*\*∣ = *M* > 0. We first consider the case of *β**i*\* = *M*. Using with the identifications: *β̃**i*\* = *M*, *α* = *α*\*, and *η* = *η*\*, we get *M* = ( − *α*\* *T**X**i* − *η**i*\*sign( − *α*\* *T**X**i*))/(2*λ*2), where sign(*α*\* *T**X**i*) =  − 1. Using this along with the fact that *γ**i*\* = *η**i*\*sign(*α*\* *T**X**i*), we obtain *γ**i*\* = *α*\* *T**X**i* + 2*λ*2*M* = *α*\* *T**X**i* − 2*λ*2*M*sign(*α*\* *T**X**i*). Using this identity along with a similar argument for *β**i* =  − *M*, we arrive at the expression in . By the optimality conditions, the following holds: *α*\* *T**X**i* ≤  − 2*λ*2*M* (this can be verified from along with the fact that *η**i* ≥ 0). Plugging $\sqrt{\lambda\_0/\lambda\_2} \leq M$ in the latter inequality, we get $\alpha^{\*T} X\_i \leq - 2 \sqrt{\lambda\_0 \lambda\_2}$. The optimal solution *γ*\* must satisfy $\gamma^{\*} \in \operatorname\*{arg\,max}\_{\gamma} h\_1(\alpha^{\*}, \gamma)$. By the separability of the objective (in *γ**j*), we get: $$\begin{aligned} \label{eq:gamma\_i\_temp} \gamma\_i^{\*} \in \operatorname\*{arg\,min}\_{\gamma\_i} \Big[ \underbrace{\frac{(\alpha^{\*T} X\_i - \gamma\_i)^2}{4 \lambda\_2} - \lambda\_0}\_{:= J(\gamma\_i)} \Big]\_{+} + M |\gamma\_i|. \end{aligned}$$ Next, we show, using contradiction, that the optimal solution *γ**i*\* must satisfy $\gamma^{\*}\_i \geq \alpha^{\*T} X\_i + 2\sqrt{\lambda\_0 \lambda\_2}$. Suppose, for the purpose of contradiction that $\gamma^{\*}\_i < \gamma\_i^{\dagger} := \alpha^{\*T} X\_i + 2\sqrt{\lambda\_0 \lambda\_2}$. Note that *γ**i*† ≤ 0 (this follows since $\alpha^{\*T} X\_i \leq - 2\sqrt{\lambda\_0 \lambda\_2}$, which we established previously). Since *γ**i* = *α*\* *T**X**i* is the minimizer of *J*(*γ**i*), the optimal solution must satisfy *γ**i*\* ≥ *α*\* *T**X**i* (because otherwise, if *γ**i*\* < *α*\* *T**X**i*, then *γ**i* = *α*\* *T**X**i* will have a strictly lower objective in than *γ**i*\*). For *γ**i* ∈ [*α*\* *T**X**i*, *γ**i*†], we have *J*(*γ**i*) ≤ 0. But since *γ**i*\* and *γ**i*† fall in the latter interval, we have [*J*(*γ**i*\*)]+ = [*J*(*γ**i*†)]+ = 0. Moreover, ∣*γ**i*\*∣ > ∣*γ**i*†∣ (this holds since *γ**i*\* < *γ**i*† ≤ 0). Therefore, the solution *γ**i*† has a strictly lower objective in than *γ**i*\*. This contradicts the optimality of *γ**i*\*, so we conclude that $\gamma^{\*}\_i \geq \alpha^{\*T} X\_i + 2\sqrt{\lambda\_0 \lambda\_2}$. Since the optimal solution satisfies $\gamma^{\*}\_i \geq \alpha^{\*T} X\_i + 2\sqrt{\lambda\_0 \lambda\_2}$, is equivalent to $$\begin{aligned} \label{eq:gamma\_i\_temp\_2} \gamma\_i^{\*} \in \operatorname\*{arg\,min}\_{\gamma\_i} J(\gamma\_i) + M |\gamma\_i| ~~~ \text{s.t.} ~~ \gamma\_i \geq \alpha^{\*T} X\_i + 2\sqrt{\lambda\_0 \lambda\_2} \end{aligned}$$ where we used the fact that for $\gamma\_i \geq \alpha^{\*T} X\_i + 2\sqrt{\lambda\_0 \lambda\_2}$, we have *J*(*γ**i*) ≥ 0, and thus [*J*(*γ**i*)]+ = *J*(*γ**i*). To solve, we first consider a simpler (unconstrained) problem; specifically, min*γ**i* ∈ R{*J*(*γ**i*) + *M*∣*γ**i*∣} whose optimal solution *γ̃**i* is given by soft thresholding: *γ̃**i* = *T*(*α*\* *T**X**i*; 2*λ*2*M*, ∞). Recall that *α*\* *T**X**i* ≤  − 2*λ*2*M*—this implies ∣*α*\* *T**X**i*∣ ≥ 2*λ*2*M*, which leads to *T*(*α*\* *T**X**i*; 2*λ*2*M*, ∞) = *α*\* *T**X**i* − 2*λ*2*M*sign(*α*\* *T**X**i*) = *α*\* *T**X**i* + 2*λ*2*M*, i.e., *γ̃**i* = *α*\* *T**X**i* + 2*λ*2*M*. Plugging the inequality $\sqrt{\lambda\_0/\lambda\_2} \leq M$ into the expression of *γ̃**i*, we get that *γ**i* = *γ̃**i* satisfies the constraint in. Thus, *γ̃**i* is an optimal solution for. This concludes the proof for the case of *β**i* = *M*. For *β**i* =  − *M*, the corresponding optimal solution *γ**i*\* can be obtained using the same steps described above, which leads to *γ**i*\* = *α*\* *T**X**i* − 2*λ*2*M*. **Case of $\sqrt{{\lambda\_0}/{\lambda\_2}} > M$:** The proof for this case follows along the lines similar to what was shown above. We omit the proof. The following lemma is useful for proving Theorem [theorem:dualitygap]. [lemma:v] Suppose $\sqrt{{\lambda\_0}/{\lambda\_2}} \leq M $. Let *β̂* be a solution from Algorithm 2 and (*α̂*, *γ̂*) be the corresponding dual solution defined in. Let *β*\* and *r*\* be as defined in Theorem [theorem:duals], and define the primal gap *ε* = ∥*X*(*β*\* − *β̂*)∥2. Then, for every *i* ∈ Supp(*β̂*)*c*, we have *v*(*α̂*, *γ̂**i*) = 0 (see  for definition of *v*); and for every *i* ∈ Supp(*β̂*), we have $$\begin{aligned} \label{eq:vupbd} v(\hat{\alpha}, \hat{\gamma}\_i) \leq c\_i \epsilon + (4 \lambda\_2)^{-1} {\epsilon^2} + v(\alpha^{\*}, \gamma^{\*}\_i), \end{aligned}$$ where *c**i* = (2*λ*2)− 1 if ∣*β**i*\*∣ < *M*, and *c**i* = *M* if ∣*β**i*\*∣ = *M*. [Proof of Lemma [lemma:v]] Fix some *i* ∈ Supp(*β̂*)*c*. Since *β̂* is the output of Algorithm 2, the set V in Algorithm 2 must be empty. Thus, using Proposition [prop:Vexplicit], we have: $|\hat{r}^{T} X\_i| \leq 2 \sqrt{\lambda\_0 \lambda\_2}$. As *r̂* =  − *α̂*, and consequently using the definition of *γ̂* in , we have *γ̂**i* = 0. Thus, (*α̂**T**X**i* − *γ̂**i*)2/(4*λ*2) = (*r̂**T**X**i*)2/(4*λ*2) ≤ *λ*0,  which implies that *v*(*α̂*, *γ̂**i*) = 0. Now fix some *i* ∈ Supp(*β̂*), and let *a* :  = (*α̂* − *α*\*)*T**X**i* and *b* :  = *α*\* *T**X**i* − *γ**i*\*. By and the definition of *h*1 in , we have $\hat{\gamma}\_i = \operatorname\*{arg\,min}\_{\gamma\_i} v(\hat{\alpha}, \gamma\_i)$, which leads to *v*(*α̂*, *γ̂**i*) ≤ *v*(*α̂*, *γ**i*\*). An upper bound on *v*(*α̂*, *γ̂**i*) can be then obtained as follows: $$\begin{aligned} v(\hat{\alpha}, \hat{\gamma}\_i) \leq v(\hat{\alpha}, \gamma^{\*}\_i) & = \Big[ \frac{(\hat{\alpha}^T X\_i - \gamma^{\*}\_i)^2}{4 \lambda\_2} - \lambda\_0 \Big]\_{+} + M |\gamma^{\*}\_i| \nonumber \\ & = \Big[ \frac{((\hat{\alpha} - \alpha^{\*})^T X\_i + \alpha^{\*T} X\_i - \gamma^{\*}\_i)^2}{4 \lambda\_2} - \lambda\_0 \Big]\_{+} + M |\gamma^{\*}\_i| \nonumber \\ & = \Big[ \frac{a^2 + 2ab + b^2}{4 \lambda\_2} - \lambda\_0 \Big]\_{+} + M |\gamma^{\*}\_i| \nonumber \\ & \leq \frac{a^2 + 2|a| |b|}{4 \lambda\_2} + \Big[ \frac{b^2}{4 \lambda\_2} - \lambda\_0 \Big]\_{+} + M |\gamma^{\*}\_i| \nonumber \\ & \leq \frac{a^2 + 2|a| |b|}{4 \lambda\_2} + v(\alpha^{\*}, \gamma^{\*}\_i)\label{eq:bdv}. \end{aligned}$$ Next, we obtain upper bounds on ∣*a*∣ and ∣*b*∣. By the Cauchy-Schwarz inequality, we have ∣*a*∣ ≤ ∥*X*(*β*\
arxiv_0000244
.16 ± 0.540.54 44 & PSZ2 G150.56+58.32 & CLGJ1115+5319 & 0.47 & S & 12.77 ± 2.40 & 10.18 ± 13.31 & 34.06 ± 18.57 & 0.13 ± 0.02 & 8.14 ± 1.49 & 10.04 ± 1.61 & 7.44 ± 0.530.50 45 & PSZ2 G170.98+39.45 & [SPD2011] 16774 & 0.5131 & S & 10.11 ± 1.38 & 31.48 ± 10.20 &  − 30.87 ± 12.67 & 0.12 ± 0.02 & 6.43 ± 0.86 & 8.24 ± 1.30 & 7.55 ± 0.710.65 46 & PSZ2 G094.56+51.03 & N/A & 0.5392 & S & 10.83 ± 1.43 & 81.61 ± 8.09 & 52.86 ± 8.80 & 0.13 ± 0.02 & 6.85 ± 0.88 & 6.46 ± 0.93 & 5.90 ± 0.440.45 47 & PSZ2 G228.16+75.20 & CLGJ1149+2223 & 0.545 & S & 15.63 ± 1.66 &  − 15.49 ± 5.32 & 17.11 ± 4.75 & 0.13 ± 0.01 & 9.78 ± 1.01 & 9.64 ± 0.94 & 9.69 ± 0.550.53 48 & PSZ2 G213.39+80.59 & SDSSCGB41791 & 0.5586 & S & 9.31 ± 1.32 &  − 9.73 ± 11.90 & 69.37 ± 12.14 & 0.13 ± 0.02 & 5.89 ± 0.81 & 8.03 ± 1.39 & 6.77 ± 0.650.63 49 & PSZ2 G066.41+27.03 & N/A & 0.5699 & S & 13.23 ± 2.05 &  − 33.18 ± 11.12 & 97.03 ± 11.32 & 0.13 ± 0.02 & 8.27 ± 1.25 & 7.33 ± 0.82 & 7.72 ± 0.540.52 50 & PSZ2 G144.83+25.11 & CLGJ0647+7015 & 0.584 & S & 11.69 ± 1.46 & 4.15 ± 7.87 &  − 1.21 ± 8.54 & 0.13 ± 0.02 & 7.32 ± 0.89 & 8.50 ± 1.27 & 7.80 ± 0.740.72 51 & PSZ2 G045.87+57.70 & N/A & 0.611 & S & 9.22 ± 1.97 & 11.71 ± 14.87 & 24.21 ± 12.21 & 0.13 ± 0.02 & 5.78 ± 1.20 & 8.49 ± 1.61 & 7.05 ± 0.710.66 52 & PSZ2 G108.27+48.66 & N/A & 0.674 & S & 9.31 ± 1.46 & 9.99 ± 11.34 & 35.79 ± 11.45 & 0.13 ± 0.02 & 5.77 ± 0.88 & 8.44 ± 1.58 & 4.96 ± 0.520.48 53 & PSZ2 G086.93+53.18 & N/A & 0.6752 & P & 9.85 ± 1.69 &  − 47.72 ± 14.38 & 27.69 ± 10.67 & 0.13 ± 0.02 & 6.10 ± 1.01 & 6.07 ± 1.09 & 5.46 ± 0.520.51 54 & PSZ2 G141.77+14.19 & N/A & 0.83 & P & 10.99 ± 1.50 &  − 4.36 ± 8.54 &  − 19.02 ± 8.85 & 0.13 ± 0.02 & 6.61 ± 0.87 & 9.94 ± 2.01 & 7.77 ± 0.950.90 AMI simulations with PSZ2 mass inputs ===================================== To investigate further the discrepancies between the mass estimates, it was decided to create simulated data based on the PSZ2 mass estimates obtained from the slicing methodology, which were then ‘observed’ by AMI. The data from these simulated observations were analysed the same way as the real data. The simulations were carried out using the in-house AMI simulation package Profile, which has been used in various forms in e.g.,, and. The input parameters for the simulation– which uses the physical model to create the cluster– are the sampling parameters of the model. Since does not give a method for calculating *M*(*r*200) it was calculated as follows. First *r*500 was calculated by solving $M\_{\rm SZ} = 500 \times \frac{4\pi}{3} \rho\_{\rm crit}(z) r\_{500}^{3}$ for *r*500. *r*200 can be determined from *r*500, but we note that the function mapping from *r*200 to *r*500 is non-invertible, thus *r*200 had to be calculated by solving equation [eqn:r200r5001] iteratively. *M*(*r*200) can then be calculated from $M(r\_{200}) = 200 \times \frac{4\pi}{3} \rho\_{\rm crit}(z) r\_{200}^{3}$. As well as the values of *M*(*r*200) derived from PSZ2 mass estimates, values for the other inputs were also required. We used $f\_{\rm gas}(r\_{200}) = 0.13$, $z = z\_{\rm Planck}$, and $x\_{\rm c} = y\_{\rm c} = 0$ arcsec. The objective of these simulations was to see whether we could recover the mass input into the simulation to create a cluster using the physical model, ‘observed’ by AMI and then analysed using the same model. We tried this for the four sets of simulations described below. For each simulation different noise / canonical radio-source environment realisations (where relevant) were used each time. Due to the large sample size this should not affect any systematic trends seen in the results, and it avoids having to pick a particular realisation to be used in all the simulations. . The points with circular markers correspond to clusters whose redshifts were measured photometrically (as listed in Table [tab:results]). [graph:m500planck] [graph:m500planckfrac] Simulations of clusters plus instrumental noise ----------------------------------------------- For each cluster, *M*(*r*200) was calculated and Gaussian instrumental noise was added to the sky. The RMS of the noise added was $0.7~ \rm{Jy}$ per channel per baseline per second, a value typical of an AMI cluster observation. Figure [graph:simNSNBmap] shows the map produced from the simulated data of cluster PSZ2 G044.20+48.66 plus this instrumental noise. The mass estimate derived from the Bayesian analysis of this cluster is 0.56 standard deviations above the input value. [graph:simNSNBmap] Figure [graph:simulatednsnb] shows the difference between the input masses and the ones recovered from running the simulated observations through McAdam, visualised using a histogram. All but three of the clusters lie within one standard deviation of the input mass, and even these clusters (PSZ2 G154.13+40.19, PSZ2 G207.88+81.31 and PSZ2 G213.39+80.59) give an output mass 1.01, 1.26 and 1.08 standard deviations below the input mass. [graph:simulatednsnb] Simulations further adding confusion noise and primordial CMB ------------------------------------------------------------- Confusion noise is defined to be the flux from radio-sources below a certain limit (here $S\_{\rm{conf}} = 0.3~\rm{mJy}$). In this Section all radio-source realisations only contribute to the confusion noise. However in Sections [subsec:CSsims] and [subsec:sourcesims] sources above $S\_{\rm{conf}}$ are included. The confusion noise contributions (see e.g. Section 5.3 of FF09) were sampled from the probability density function corresponding to the 10C source counts given in, and placed at positions chosen at random. Similarly, the primordial CMB realisations were sampled from an empirical distribution, and randomly added to the maps. Figure [graph:simNSmap] shows the map produced from the simulated data of cluster PSZ2 G044.20+48.66, including the three noise contributions. The mass estimate derived from the Bayesian analysis of this cluster is 0.22 standard deviations above the input value. [graph:simNSmap] The differences between output and input masses are shown in Figure [graph:simulatedns]. This time eight out of the 54 clusters cannot recover the input mass to within one standard deviation. In all eight of these cases, the mass is underestimated with respect to the input value. Five of the outlier values correspond to clusters at low redshift (*z* < 0.2). [graph:simulatedns] Simulations further adding a canonical radio-source environment --------------------------------------------------------------- The third set of simulations included recognised radio-sources, which formed a canonical radio-source environment. They were created in the same way as with the confusion noise described above, but with higher flux limits so that in reality, the LA would have been able to recognise them. The upper flux limit was set to $25~\rm{mJy}$. Figure [graph:simCSmap] shows the map produced from the simulated data of cluster PSZ2 G044.20+48.66, including a canonical source environment and background noise. The mass estimate derived from the Bayesian analysis of this cluster is 0.51 standard deviations below the input value. [graph:simCSmap] Figure [graph:simulatedcs] shows that the canonical radio-source environment have little effect on the mass estimation relative to Section [subsec:noisesims], as there are still 8 clusters which give mass estimates greater than one standard deviation away from the input value. Note that in this case, the outliers occurred across the entire range of redshifts, which suggests that in Section [subsec:noisesims] the low redshift trend was just a coincidence. [graph:simulatedcs] Simulations with LA observed radio-source environment plus instrumental, confusion and CMB noise ------------------------------------------------------------------------------------------------ The final set of simulations included the radio-source environment measured by the LA during the real observation for each cluster. These are only estimates of the actual source environments, and are only as reliable as the LA’s ability to measure them. Figure [graph:simrsmap] shows the maps produced from the real and simulated data of cluster PSZ2 G044.20+48.66. The mass estimate derived from the Bayesian analysis of the simulated dataset is just 0.08 standard deviations above the input value. Figure [graph:simulatedrs] shows that including the LA observed radio-source environment has a large effect on the results, as this time there are 16 clusters which are more than one standard deviation away from the input mass. Furthermore, three of these overestimated the mass relative to the input, the first time we have seen this occur in any of the simulations. A possible source of bias could be due to for example, the empirical prior on the spectral index incorrectly modelling some radio-sources. Another source of bias could be the position of a source relative to the cluster, and the magnitude of the source flux. For example, if a high flux radio-source is close to the centre of the galaxy cluster, then even a slight discrepancy between the real and the modelled values for the source could have a large effect on the cluster parameter estimates. We now compare these results to the simulations in YP15 (which concluded that the underestimation of the simulation input values could be due to deviation from the ‘universal’ profile, see Figure 23a in the paper). The results of the large cluster simulations (total integrated Comptonisation parameter  = 7 × 103 arcmin2 and $\theta\_{\rm p} = 7.4$ arcmin) in YP15 seem biased low at a more significant level than those in Figure [graph:simulatedrs], as in the former case less than half of the clusters recover the true value within two standard deviations. For the smaller clusters however, YP15 found a slight upward bias in the simulation results, but this is probably smaller in magnitude than the bias found in this Section. (a) 0.45(b) [graph:simrsmap] Statistics of results of real and simulated data ------------------------------------------------ Looking at the histograms produced in Sections [subsec:NSNBsims], [subsec:noisesims], [subsec:CSsims], and [subsec:sourcesims], in the last three cases it is apparent that there is a negative skew in the data, i.e. the output masses are systematically low relative to the input masses. The skews calculated from the samples associated with the four histograms are  − 0.17,  − 1.30,  − 0.91, and  − 0.96 respectively in units of standard deviations of the output mass. This suggests that the inclusion of confusion and CMB noise bias the AMI cluster mass estimates. We also calculate the median values associated with the histograms, and compare them with the medians corresponding to the real AMI and PSZ2 masses given in Figure [graph:m500planck]. The median values for the four histograms are  − 0.24, 0.09,  − 0.27 and  − 0.34 respectively in units of standard deviations of the output mass. For the real data the median values for $(M\_{\rm AMI}(r\_{500}) - M\_{\rm Pl,\, marg}(r\_{500})) / \sigma\_{\rm AMI}$ and $(M\_{\rm AMI}(r\_{500}) - M\_{\rm Pl,\, slice}(r\_{500})) / \sigma\_{\rm AMI}$ are  − 1.57 and  − 0.56. It makes sense to compare the second of these real data values with those obtained from the simulations, as it was $M\_{\rm Pl,\, slice}(r\_{500})$ which was used to derive the input masses. The fact that the median from the real data is greater in magnitude than the values from the simulations implies in general, our simulations can recover their input values with better agreement than that obtained between real AMI estimates and those obtained from *Planck* data using the slicing function methodology. This seems plausible as you would expect that inferring results from data which was created using the same model used in the inference would be more accurate than results from data taken from two different telescopes, which use different models in their inference. Furthermore the simulation medians tell us that AMI is capable of inferring the masses derived with the slicing methodology, if the cluster is created using the model used in the inference and assuming there are no large discrepancies between the real and simulated AMI observations. Conclusions =========== We have made observations of galaxy clusters detected by the *Planck* space telescope, with the Arcminute Microkelvin Imager (AMI) radio interferometer system in order to compare mass estimates obtained from their data. We then analysed this data using a physical model based on the one described in, following largely the data analysis method outlined in. This allowed us to derive physical parameter estimates for each cluster, in particular the total mass out to a given radius. We have also calculated two mass estimates for each cluster from *Planck*’s PowellSnakes detection algorithm data following (PSZ2). We found the following. * For the AMI mass estimates of *Planck* selected clusters there is generally a steeping in the mass of galaxy clusters as a function of redshift, which flattens out at around *z* ≈ 0.5. * AMI *M*(*r*500) estimates are within one combined standard deviation of the PSZ2 slicing function mass estimates for 31 out of the final sample of 54 clusters. However, the AMI masses are lower than both PSZ2 estimates for 37 out of the 54 cluster sample. To investigate further the possible biasing of AMI mass estimates, we created simulations of AMI data with input mass values from the PSZ2 slicing methodology. We considered four different cases for the simulations: 1) galaxy cluster plus instrumental noise; 2) galaxy cluster plus instrumental plus confusion and CMB noise; 3) galaxy cluster plus instrumental, confusion and CMB noise, plus a randomly positioned radio-source environment; 4) galaxy cluster plus instrumental, confusion and CMB noise, plus the radio-source environment detected by the LA in the real observations. These simulated datasets were analysed in the same way as the real datasets, and we found the following. * For case 1), the physical model recovered the input mass to within one standard deviation for 51 of the 54 clusters. The three which did not give an underestimate relative to the masses input to the simulation. * For case 2), eight of the simulations gave results which were more than one standard deviation lower than the input values. This highlights the effect of incorporating the noise sources into the error covariance matrix rather than trying to model the associated signals explicitly. * Case 3) shows similar results to case 2), which implies that ‘ideal’ radio-sources placed randomly in the sky have little effect on cluster mass estimates. * However in case 4) with real source environments, 16 simulations did not recover the input mass to within one standard deviation. This suggests that real radio-source environments, which can include sources with high flux values, and often sources which are located very close to the cluster centre, introduce biases in the cluster mass estimates. In real observations there are also additional issues (the sources are not ‘ideal’), such as sources being extended and emission not being circularly symmetric on the sky. * Cases 2), 3) and 4) give distributions of output  −  input mass which are negatively skewed. Thus AMI mass estimates are expected to be systematically lower than the PSZ2 slicing methodology values. * Compared to the results of simulations of large clusters carried out in, which test the robustness of the ‘universal’ pressure profile, the case 4) bias appears relatively small in magnitude, and in the same direction (downward). When comparing the case 4) results with the small cluster simulations of, the latter shows a relatively small bias in the opposite direction. * The median values of the distributions of output  −  input mass of the simulations in each of the four cases are smaller in magnitude than those obtained from comparing AMI and PSZ2 estimates from real data. This is expected as we are using the same model to simulate and analyse the clusters in all four cases. * The simulated and real data medians also indicate that while the simulations have shown that AMI mass estimates are systematically low, this does not fully accommodate for the discrepancies in the results obtained from the real data. This suggests that there is a systematic difference between the AMI and *Planck* data and / or the cluster models used to determine the mass estimates (which generally leads to PSZ2 estimates being higher than those obtained from AMI data). In a forthcoming paper, comparison of the ‘observational’ parameters (i.e. the integrated Comptonisation parameter *Y* and the angular radius *θ*) obtained from AMI data will be analysed. Furthermore, in and, different dark matter and pressure models will be considered, and in, Bayesian analysis will be performed on joint AMI-*Planck* datasets. [graph:simulatedrs] Acknowledgements ================ This work was performed using the Darwin Supercomputer of the University of Cambridge High Performance Computing (HPC) Service (<http://www.hpc.cam.ac.uk/>), provided by Dell Inc. using Strategic Research Infrastructure Funding from the Higher Education Funding Council for England and funding from the Science and Technology Facilities Council. The authors would like to thank Stuart Rankin from HPC and Greg Willatt and David Titterington from Cavendish Astrophysics for computing assistance. They would also like to thank Dave Green for his invaluable help using LaTeX. Kamran Javid acknowledges an STFC studentship. Yvette Perrott acknowledges support from a Trinity College Junior Research Fellowship. Arnaud, M., Pratt, G. W., Piffaretti, R., Böhringer, H., Croston, J. H., Pointecouteau, E. 2010. The universal galaxy cluster pressure profile from a representative sample of nearby systems (REXCESS) and the Y*S**Z* - M500 relation. Astronomy and Astrophysics 517, A92. Böhringer, H., and 33 colleagues 2007. The representative XMM-Newton cluster structure survey (REXCESS) of an X-ray luminosity selected galaxy cluster sample. Astronomy and Astrophysics 469, 363. Bahcall, J. N., Sarazin, C. L. 1977. Parameters and predictions for the X-ray emitting gas of Coma, Perseus, and Virgo.. The Astrophysical Journal 213, L99. Carvalho, P., Rocha, G., Hobson, M. P., Lasenby, A. 2012. PowellSnakes II: a fast Bayesian approach to discrete object detection in multi-frequency astronomical data sets. Monthly Notices of the Royal Astronomical Society 427, 1384. Chevallier, M., Polarski, D. 2001. Accelerating Universes with Scaling Dark Matter. International Journal of Modern Physics D 10, 213. Corless, V. L., King, L. J., Clowe, D. 2009. A new look at massive clusters: weak lensing constraints on the triaxial dark matter haloes of A1689, A1835 and A2204. Monthly Notices of the Royal Astronomical Society 393, 1235. Davies, J. I., and 8 colleagues 2011. The Arecibo Galaxy Environment Survey - IV. The NGC 7448 region and the H I mass function. Monthly Notices of the Royal Astronomical Society 415, 1883. AMI Consortium, and 18 colleagues 2011. 10C survey of radio sources at 15.7 GHz - II. First results. Monthly Notices of the Royal Astronomical Society 415, 2708. Feroz, F., Hobson, M. P., Bridges, M. 2009. MULTINEST: an efficient and robust Bayesian inference tool for cosmology and particle physics. Monthly Notices of the Royal Astronomical Society 398, 1601. Feroz, F., Hobson, M. P., Zwart, J. T. L., Saunders, R. D. E., Grainge, K. J. B. 2009. Bayesian modelling of clusters of galaxies from multifrequency-pointed Sunyaev-Zel’dovich observations. Monthly Notices of the Royal Astronomical Society 398, 2049. AMI Consortium, and 17 colleagues 2011. 10C survey of radio sources at 15.7 GHz - I. Observing, mapping and source extraction. Monthly Notices of the Royal Astronomical Society 415, 2699. Gioia, I. M., and 6 colleagues 1990. The Einstein Observatory Extended Medium-Sensitivity Survey. I. X-Ray Data and Analysis. The Astrophysical Journal Supplement Series 72, 567. Grainge, K., and 6 colleagues 2002. Measuring the Hubble constant from Ryle Telescope and X-ray observations, with application to Abell 1413. Monthly Notices of the Royal Astronomical Society 333, 318. Haehnelt, M. G., Tegmark, M. 1996. Using the Kinematic Sunyaev-Zeldovich effect to determine the peculiar velocities of clusters of galaxies.. Monthly Notices of the Royal Astronomical Society 279, 545. Hao, J., and 13 colleagues 2010. A GMBCG Galaxy Cluster Catalog of 55,424 Rich Clusters from SDSS DR7. The Astrophysical Journal Supplement Series 191, 254. Herranz, D., and 6 colleagues 2002. Filtering techniques for the detection of Sunyaev-Zel’dovich clusters &lt;br/&gt; in multifrequency maps. Monthly Notices of the Royal Astronomical Society 336, 1057. Hickish, J., and 20 colleagues 2018. A digital correlator upgrade for the Arcminute MicroKelvin Imager. Monthly Notices of the Royal Astronomical Society 475, 5677. Hinshaw, G., and 20 colleagues 2013. Nine-year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Cosmological Parameter Results. The Astrophysical Journal Supplement Series 208, 19. Hobson, M. P., Maisinger, K. 2002. Maximum-likelihood estimation of the cosmic microwave background power spectrum from interferometer observations. Monthly Notices of the Royal Astronomical Society 334, 569. Hu, W., Kravtsov, A. V. 2003. Sample Variance Considerations for Cluster Surveys. The Astrophysical Journal 584, 702. Javid, K., Perrott, Y. C., Hobson, M. P., Olamaie, M., Rumsey, C., Saunders, R. D. E. 2018. Comparison of physical and observational galaxy cluster modelling. arXiv e-prints arXiv:1805.01968, DOI:<https://doi.org/10.17863/CAM.38865> Javid, K. 2019. Physical modelling of galaxy clusters and Bayesian inference in astrophysics (Doctoral thesis). University of Cambridge, DOI:<https://doi.org/10.17863/CAM.40616> Javid, K., Perrott, Y. C., Rumsey, C., Saunders, R. D. E. 2019. Physical modelling of galaxy cluster Sunyaev-Zel’dovich data using Einasto dark matter profiles. Monthly Notices of the Royal Astronomical Society 489, 3135, DOI:<https://doi.org/10.1093/mnras/stz2341>. Komatsu, E., and 20 colleagues 2011. Seven-year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Cosmological Interpretation. The Astrophysical Journal Supplement Series 192, 18. Kravtsov, A. V., Vikhlinin, A., Nagai, D. 2006. A New Robust Low-Scatter X-Ray Mass Indicator for Clusters of Galaxies. The Astrophysical Journal 650, 128. Mehrtens, N., and 38 colleagues 2012. The XMM Cluster Survey: optical analysis methodology and the first data release. Monthly Notices of the Royal Astronomical Society 423, 1024. Melin, J.-B., Bartlett, J. G., Delabrouille, J. 2006. Catalog extraction in SZ cluster surveys: a matched filter approach. Astronomy and Astrophysics 459, 341. Nagai, D., Kravtsov, A. V., Vikhlinin, A. 2007. Effects of Galaxy Formation on Thermodynamics of the Intracluster Medium. The Astrophysical Journal 668, 1. Navarro, J. F., Frenk, C. S., White, S. D. M. 1995. Simulations of X-ray clusters. Monthly Notices of the Royal Astronomical Society 275, 720. Neto, A. F., and 8 colleagues 2007. The statistics of Λ CDM halo concentrations. Monthly Notices of the Royal Astronomical Society 381, 1450. Olamaie, M., Hobson, M. P., Grainge, K. J. B. 2013. Mass and pressure constraints on galaxy clusters from interferometric Sunyaev-Zel’dovich observations. Monthly Notices of the Royal Astronomical Society 430, 1344. Olamaie, M., Hobson, M. P., Grainge, K. J. B. 2012. A simple parametric model for spherical galaxy clusters. Monthly Notices of the Royal Astronomical Society 423, 1534. AMI Consortium, and 18 colleagues 2012. Parametrization effects in the analysis of AMI Sunyaev-Zel’dovich observations. Monthly Notices of the Royal Astronomical Society 421, 1136. Perley, R. A., Butler, B. J. 2013. An Accurate Flux Density Scale from 1 to 50 GHz. The Astrophysical Journal Supplement Series 204, 19. Perrott, Y. C., and 41 colleagues 2015. Comparison of Sunyaev-Zel’dovich measurements from Planck and from the Arcminute Microkelvin Imager for 99 galaxy clusters. Astronomy and Astrophysics 580, A95. Perrott, Y. C., and 7 colleagues 2019. Sunyaev-Zel’dovich profile fitting with joint AMI-Planck analysis. Monthly Notices of the Royal Astronomical Society 486, 2116, DOI:<https://doi.org/10.1093/mnras/stz826>. Petrov, L., Kovalev, Y. Y., Fomalont, E. B., Gordon, D. 2008. The Sixth VLBA Calibrator Survey: VCS6. The Astronomical Journal 136, 580. Piffaretti, R., Arnaud, M., Pratt, G. W., Pointecouteau, E., Melin, J.-B. 2011. The MCXC: a meta-catalogue of x-ray detected clusters of galaxies. Astronomy and Astrophysics 534, A109. Planck Collaboration, and 275 colleagues 2014. Planck 2013 results. XXIX. The Planck catalogue of Sunyaev-Zeldovich sources. Astronomy and Astrophysics 571, A29. Planck Collaboration, and 277 colleagues 2015. Planck 2013 results. XXXII. The updated Planck catalogue of Sunyaev-Zeldovich sources. Astronomy and Astrophysics 581, A14. Planck Collaboration, and 254 colleagues 2014. Planck 2013 results. XX. Cosmology from Sunyaev-Zeldovich cluster counts. Astronomy and Astrophysics 571, A20. Planck Collaboration, and 185 colleagues 2013. Planck intermediate results. III. The relation between galaxy cluster mass and Sunyaev-Zeldovich signal. Astronomy and Astrophysics 550, A129. Planck Collaboration, and 237 colleagues 2011. Planck early results. VIII. The all-sky early Sunyaev-Zeldovich cluster sample. Astronomy and Astrophysics 536, A8. Planck Collaboration, and 209 colleagues 2011. Planck early results. XI. Calibration of the local galaxy cluster Sunyaev-Zeldovich scaling relations. Astronomy and Astrophysics 536, A11. Planck Collaboration, and 259 colleagues 2016. Planck 2015 results. XXVII. The second Planck catalogue of Sunyaev-Zeldovich sources. Astronomy and Astrophysics 594, A27. Planck Collaboration, and 199 colleagues 2015. Planck intermediate results. XXVI. Optical identification and redshifts of Planck clusters with the RTT150 telescope. Astronomy and Astrophysics 582, A29. Planck Collaboration, and 190 colleagues 2016. Planck intermediate results. XXXVI. Optical identification and redshifts of Planck SZ sources with telescopes at the Canary Islands observatories. Astronomy and Astrophysics 586, A139. Planck Collaboration, and 203 colleagues 2011. Planck early results. IX. XMM-Newton follow-up for validation of Planck cluster candidates. Astronomy and Astrophysics 536, A9. Rudy, D. J., Muhleman, D. O., Berge, G. L., Jakosky, B. M., Christensen, P. R. 1987. Mars: VLA observations of the Northern Hemisphere and the north polar region at wavelengths of 2 and 6 cm. Icarus 71, 159. Rykoff, E. S., and 14 colleagues 2014. redMaPPer. I. Algorithm and SDSS DR8 Catalog. The Astrophysical Journal 785, 104. Sunyaev, R. A., Zeldovich, Y. B. 1970. The Spectrum of Primordial Radiation, its Distortions and their Significance. Comments on Astrophysics and Space Physics 2, 66. Vikhlinin, A., and 6 colleagues 2006. Chandra Sample of Nearby Relaxed Galaxy Clusters: Mass, Gas Fraction, and Mass-Temperature Relation. The Astrophysical Journal 640, 691. Voges, W., and 19 colleagues 1999. The ROSAT all-sky survey bright source catalogue. Astronomy and Astrophysics 349, 389. Waldram, E., Bolton, R., Pooley, G. G., Riley, J. M. 2007. The radio source population at frequencies of 15 to 70GHz: some deductions from 9C follow-up observations. From Planets to Dark Energy: The Modern Radio Universe 140. Wechsler, R. H., Bullock, J. S., Primack, J. R., Kravtsov, A. V., Dekel, A. 2001. Concentrations and Assembly Histories of Dark Matter Halos. arXiv e-prints astro-ph/0111069. Wen, Z. L., Han, J. L., Liu, F. S. 2012. A Catalog of 132,684 Clusters of Galaxies Identified from Sloan Digital Sky Survey III. The Astrophysical Journal Supplement Series 199, 34. Willis, J. P., and 14 colleagues 2013. Distant galaxy clusters in the XMM Large Scale Structure survey. Monthly Notices of the Royal Astronomical Society 430, 134. York, D. G., and 144 colleagues 2000. The Sloan Digital Sky Survey: Technical Summary. The Astronomical Journal 120, 1579. Zwart, J. T. L., and 60 colleagues 2008. The Arcminute Microkelvin Imager. Monthly Notices of the Royal Astronomical Society 391, 1545. Zwicky, F. 1933. Die Rotverschiebung von extragalaktischen Nebeln. Helvetica Physica Acta 6, 110. Zwicky, F. 1937. On the Masses of Nebulae and of Clusters of Nebulae. The Astrophysical Journal 86, 217. [lastpage] --- 1. E-mail: [email protected][↩](#fnref1) 2. <http://getdist.readthedocs.io/en/latest/>.[↩](#fnref2) 3. <http://aips.nrao.edu/>.[↩](#fnref3) 4. <https://pla.esac.esa.int/pla/catalogues>.[↩](#fnref4) Physical modelling of galaxy clusters detected by the *Planck* satellite ======================================================================== ### Accepted XXX. Received YYY; in original form ZZZ [firstpage] We present a comparison of mass estimates for 54 galaxy cluster candidates from the second *Planck* catalogue (PSZ2) of Sunyaev–Zel’dovich sources. We compare the mass values obtained with data taken from the Arcminute Microkelvin Imager (AMI) radio interferometer system and from the *Planck* satellite. The former of these uses a Bayesian analysis pipeline that parameterises a cluster in terms of its physical quantities, and models the dark matter and baryonic components of a cluster using Navarro-Frenk-White (NFW) and generalised-NFW profiles respectively. Our mass estimates derived from *Planck* data are obtained from the results of the Bayesian detection algorithm PowellSnakes, are based on the methodology detailed in the PSZ2 paper, and produce two sets of mass estimates; one estimate is calculated directly from the angular radius *θ* – integrated Comptonisation parameter *Y* posterior distributions, and the other uses a ‘slicing function’ to provide information on *θ* based on X-ray measurements and previous *Planck* mission samples. We find that for 37 of the clusters, the AMI mass estimates are lower than both values obtained from *Planck* data. However the AMI and slicing function estimates are within one combined standard deviation of each other for 31 clusters. We also generate cluster simulations based on the slicing-function mass estimates, and analyse them in the same way as we did the real AMI data. We find that inclusion in the simulations of radio-source confusion, CMB noise and measurable radio-sources causes AMI mass estimates to be systematically low. methods: data analysis – galaxies: clusters: general – cosmology: observations. Introduction ============ In the local Universe and out to redshifts of around two, clusters of galaxies are observed as massive gravitationally bound structures, often roughly spherical and with very dense central cores. It is over eighty years ago that it was first postulated that a galaxy cluster’s mass is dominated by dark matter ( and ). More recently it has been shown that dark matter contributes  ≈ 90% of the cluster mass (see e.g. and ). Stars, gas and dust in galaxies, as well as a hot ionised intra-cluster medium (ICM), make up the rest of the mass in a cluster, with the latter being the most massive baryonic component. The galaxies emit in the optical and infrared wavebands, whilst the ICM emits in X-ray via thermal Bremsstrahlung and also interacts with cosmic microwave background (CMB) photons via inverse Compton scattering. This last effect is what is known as the Sunyaev–Zel’dovich (SZ) effect. It is this effect which is detected by the *Planck* satellite and the Arcminute Microkelvin Imager (AMI) radio interferometer system, which are the telescopes featured in this analysis. The clusters detected by *Planck* form the basis of the sample considered in this work. (from here on YP15) present the results of the AMI follow-up of *Planck* clusters– this follow-up is analysed using the ‘observational model’, which parameterises a cluster in terms of its integrated Comptonisation parameter *Y* and angular scale *θ*. YP15 find that these AMI estimates for *Y* are consistently lower than the values obtained from *Planck* data, and conclude that this may indicate that the cluster pressure profiles are deviating from the ‘universal’ one. Here, we try to overcome this by considering a model which uses redshift information to break this degeneracy. We use a physical model largely based on the one described in (from here on MO12), with data obtained from AMI of clusters detected by *Planck* (including ones which were detected after the analysis in YP15 was carried out). We also consider the cluster mass estimates given in the *Planck* cluster catalogue and compare them with the values obtained using AMI data. Furthermore we use the *Planck* cluster catalogue mass estimates as inputs to simulations which are then analysed in the same way as real AMI observations. In Section [sec:telescopes] we give an overview of the *Planck* mission and AMI in the context of *Planck* observed clusters. In Section [sec:models] we review how the physical modelling process works with data obtained from AMI, and we summarise the methodology used to obtain the mass estimates given in. Sections [sec:resultsi] and [sec:resultsii] present the results of our analysis, including simulated AMI data which used mass estimates obtained from *Planck* data as inputs. A ‘concordance’ flat ΛCDM cosmology is assumed: $\Omega\_{\rm M} = 0.3$, ΩΛ = 0.7, $\Omega\_{\rm R} = 0$, $\Omega\_{\rm K} = 0$, *h* = 0.7, $H\_{0} = 100~h~\rm km~s^{-1}$ Mpc− 1, *σ*8 = 0.8, *w*0 =  − 1, and $w\_{\rm a} = 0$. The first four parameters correspond to the (dark + baryonic) matter, the cosmological constant, the radiation, and the curvature densities respectively. *h* is the dimensionless Hubble parameter, while *H*0 is the Hubble parameter now and *σ*8 is the power spectrum normalisation on the scale of 8 *h*− 1 Mpc now. $w\_{\rm 0}$ and $w\_{\rm a}$ are the equation of state parameters of the Chevallier-Polarski-Linder parameterisation. *Planck* and AMI telescopes, and the cluster sample =================================================== *Planck* mission ---------------- The combination of the *Planck* satellite’s low frequency and high frequency instruments provide nine frequency channels in the range 37 GHz – 857 GHz. Of particular importance for the work described here are the *Planck* cluster-catalogues (see, and for papers relating to catalogues PSZ1, PSZ1.2 and PSZ2 respectively, where ‘PSZX’ refers to the Xth *Planck* SZ catalogue). These provide, for example, cluster candidate positions, redshift (*z*) values, integrated Comptonisation parameter values and mass (*M*) estimates. PSZ2 is the most recent all-sky *Planck* cluster catalogue, and is the one which we refer to in this paper unless stated otherwise. PSZ2 redshift values -------------------- Catalogue *z* values are measured in the optical / infrared or X-ray, with major input from the Sloan Digital Sky Survey. A number of cluster catalogues have been extracted from these data (see e.g.,, and ), providing estimates of both spectroscopic and photometric *z* values, the reliability of the latter values falls as *z* increases. In the X-ray part of the spectrum, the Meta-Catalogue of X-ray detected Clusters of galaxies, or MCXC has a substantial number of matches with the *Planck*-catalogue clusters. The MCXC is from the available catalogues based on the ROSAT All-Sky Survey as well as serendipitous X-ray catalogues (see e.g. ). MCXC contains only clusters with measured *z*, but does not state the redshift type or source. Further sources of *Planck* catalogue clusters candidate *z*s are the Russian-Turkish Telescope and the ENO telescopes in the Canary Islands ; for each *z* these state whether it was obtained photometrically or spectroscopically. AMI --- AMI is an interferometer system near Cambridge, UK, designed for SZ studies (see e.g. ). It consists of two arrays: the Small Array (SA), optimised to couple to SZ signal, with an angular resolution of  ≈ 3 arcmin and sensitivity to structures up to  ≈ 10 arcmin in scale; and the Large Array (LA), with angular resolution of  ≈ 30 arcsec, which is largely insensitive to SZ, and is used to characterise and subtract confusing radio-sources. Both arrays operate at a central frequency of  ≈ 15.7 GHz, and at the time the AMI data for this paper were taken, with a bandwidth of  ≈ 4.3  GHz, divided into six channels. A summary of AMI’s characteristics is given in Table [tab:ami]. Note that AMI has recently received a new digital correlator, but all real data used in this work were obtained by the system with its old analogue correlator. Our pointing strategy for each cluster was as follows. Clusters were observed using a single pointing centre on the SA, which has a primary beam of size  ≈ 20 arcmin FWHM, to noise levels of $\lessapprox 120~\mu \rm{Jy}~\rm{beam}^{-1}$. To cover the same area with the LA, which has a primary beam of size  ≈  six arcmin FWHM, the cluster field was observed as a 61-point hexagonal raster. The noise level of the raster was $\lessapprox 100~\mu \rm{Jy}~\rm{beam}^{-1}$ in the central 19 pointings, and slightly higher in the outer regions. The observations for a given cluster field were carried out simultaneously on both arrays, and the average observation time per cluster was  ≈ 30  hours. The observations were carried out between 2013 and 2015, and so they began before PSZ2 was published. This means that the AMI pointing centre coordinates in general were not the same as those published in the final *Planck* catalogue which was released in 2015. This is discussed in the context of the cluster centre offset parameters in Section [subsub:cluspriors]. Data from both arrays were flagged for interference and calibrated using the AMI in-house software package REDUCE. Flux calibration was applied using contemporaneous observations of the primary calibration sources 3C 286, 3C 48, and 3C 147. The assumed flux densities for 3C 286 were converted from Very Large Array total-intensity measurements and are consistent with. The assumed flux densities for 3C 48 and 3C 147 were based on long-term monitoring with the SA using 3C 286 for flux calibration. Phase calibration was applied using interleaved observations of a nearby bright source selected from the Very Long Baseline Array Calibrator survey ; in the case of the LA, a secondary amplitude calibration was also applied using observations of the phase calibration source on the SA. lcc & SA & LA Antenna diameter & $3.7~\rm{m}$ & $12.8~\rm{m}$ Number of antennas & 10 & 8 Baseline lengths (current) & $5-20~\rm{m}$ & $18-110~\rm{m}$ Primary beam FWHM (at $15.7~\rm{GHz}$) & $20.1~\rm{arcmin}$ & $5.5~\rm{arcmin}$ Typical synthesised beam FWHM & $3~\rm{arcmin}$ & $30~\rm{arcsec}$ Flux sensitivity & $30~\rm{mJy}~\rm{s}^{1/2}$ & $3~\rm{mJy}~\rm{s}^{1/2}$ [tab:ami] Selection of the cluster sample ------------------------------- PSZ2 contains 1653 cluster candidates detected in the all-sky 29 month mission. The initial cluster selection criteria for AMI closely resembles that described in YP15, with a few modifications as follows. * The lower *z* limit *z* ≤ 0.100 was relaxed here, to see how well AMI data can constrain the the physical model parameters at low redshift. However it is important to realise that the sample at *z* ≤ 0.100 were not observed specifically for the purpose of this work, but were part of other observation projects. * The *Planck* signal-to-noise ratio (S/N) lower bound was reduced to 4.5. * The automatic radio-source environment rejection remained the same. However the manual rejection was done on a map-by-map basis– see Section [sec:resultsi]. * Note that the observation declination limits 20∘ < *δ* < 87∘ were kept. This led to an initial sample size of 199 clusters, The maximum and minimum values of some key parameters for this sample from the *Planck* catalogue are given in Table [tab:initialsample]. Note that $M\_{\rm{SZ}}$ is taken in PSZ2 as the hydrostatic equilibrium mass *M*(*r*500), assuming the best-fit *Y* − *M* relation (see Section [subsubsec:planckmass]). lcc Parameter & Minimum value & Maximum value Declination & 20.31∘ & 86.24∘ *z* & 0.045 & 0.83 S/N & 4.50 & 28.40 $M\_{\rm{SZ}}$ ( × 1014 *M*Sun) & 1.83 & 10.80 [tab:initialsample] AMI data analysis and PSZ2 scaling relations methodology ======================================================== Our AMI Bayesian data analysis pipeline, McAdam closely resembles the one described in (FF09 from here on). In this Section the key aspects of the analysis are summarised, and also we note modifications specific to the work of this paper. A physical model for AMI data ----------------------------- In the implementation of McAdam used here, we in large employed the model of MO12 to derive physical properties of a galaxy cluster (i.e. mass, pressure, density, radius and temperature values) from the data obtained from an SZ-detecting interferometer plus *z*-information. The model assumes an Navarro-Frenk-White (NFW) profile for the dark matter density as a function of cluster radius *r*, $$\label{eqn:nfw} \rho\_{\rm dm}(r) = \frac{\rho\_{\rm s}}{\left(\frac{r}{r\_{\rm s}}\right)\left(1+\frac{r}{r\_{\rm s}}\right)^{2}},$$ where $\rho\_{\rm s}$ is an overall density normalisation coefficient and $r\_{\rm s}$ is a characteristic radius defined by $r\_{\rm s} \equiv r\_{200}/c\_{200}$ and is the radius at which the logarithmic slope of the profile d ln*ρ*(*r*)/d ln*r* is  − 2. *r*200 is the radius at which the *mean* cluster density is $200 \times \rho\_{\rm crit}(z)$. $\rho\_{\rm crit}(z)$ is the critical density of the Universe at the cluster *z* which is given by $\rho\_{\rm crit}(z) = 3H(z)^{2}/8\pi G$ where *H*(*z*) is the Hubble parameter (at the cluster redshift) and *G* is Newton’s constant. *c*200 is the concentration parameter at this radius. Following, we calculate *c*200 for an NFW dark matter density profile taken from the expression in $$\label{eqn:c200} c\_{200} = \frac{5.26}{1+z} \left( \frac{M(r\_{200})}{10^{14}h^{-1}M\_{\mathrm{Sun}}} \right)^{-0.1}.$$ The 1/(1 + *z*) factor comes from and is obtained from N-body simulated dark matter halos between *z* = 0 and *z* = 7. The remainder of the relation was derived in by fitting a power-law for *c*200 to N-body simulated cluster data. Note that the sample used in was assumed to contain clusters that are relaxed. In equation [eqn:c200] *M*(*r*200) is the mass enclosed at radius *r*200. Thus for given values of *z* and *M*(*r*200) (which are input parameters to the model, see Section [subsub:cluspriors]), *c*200 can be calculated. Furthermore if we take $M(r\_{200}) = 200 \times \frac{4\pi}{3} \rho\_{\rm crit}(z) r\_{200}^{3}$ then we can also calculate *r*200 and so $r\_{\rm s}$. Following, the generalised NFW (GNFW) model is used to parameterise the electron pressure as a function of radius from the cluster centre $$\label{eqn:gnfw} P\_{\rm e}(r) = \frac{P\_{\rm ei}}{\left(\frac{r}{r\_{\rm p}}\right)^{c}\left(1+\left(\frac{r}{r\_{\rm p}}\right)^{a}\right)^{(b-c)/a}},$$ where $P\_{\rm ei}$ is an overall pressure normalisation factor and $r\_{\rm p}$ is another characteristic radius, defined by $r\_{\rm p} \equiv r\_{500}/c\_{500}$ (*r*500 is the radius at which the cluster density is $500 \times \rho\_{\rm crit}(z)$). The parameters *a*, *b* and *c* describe the slope of the pressure profile at $ r / r\_{\rm p} \approx 1$, $r / r\_{\rm p} \gg 1 $ and $r / r\_{\rm p} \ll 1$ respectively. For values $r / r\_{\rm p} \ll 1$ the logarithmic slope (dln*P*e(*r*)/dln*r*) converges to  − *c*. For values $r / r\_{\rm p} \gg 1$ the logarithmic slope converges to  − *b*. The value of *a* dictates how quickly (in terms of *r*) the slope switches between these two values, and in the limit that *a* tends to zero, the logarithmic slope is  − (*b* + *c*)/2 for all *r*. Consistent with many of the *Planck* follow-up papers (see e.g. ) and with MO12 the slope parameters are taken to be *a* = 1.0620, *b* = 5.4807 and *c* = 0.3292. These ‘universal’ values are from and are the GNFW slope parameters derived for the standard self-similar case using scaling relations from a REXCESS sub-sample (of 20 well-studied low-*z* clusters observed with XMM-Newton), as described in Appendix B of the paper. We also use the Arnaud et al. value for the concentration parameter $c\_{500} \equiv r\_{500} / r\_{\rm p}$ of 1.156. We note however that in YP15 using simulations it was shown that the disagreement between *Planck* and AMI parameter estimates may indicate pressure profiles deviating from the ‘universal’ profile. For any model it is important to know the underlying assumptions which allow it to be valid. The four relevant assumptions in MO12 are * The cluster is spherically symmetric. * The cluster is in hydrostatic equilibrium up to radius *r*200. This means at any radius up to *r*200 the outward pushing pressure force created by the pressure differential at that point must be equal to the gravitational binding force due to the mass enclosed within that radius (, see equation 4 of MO12). * The gas mass fraction $f\_{\rm gas}(r)$ is much less than unity up to radius *r*200, so that the total mass is $M(r\_{200}) \approx M\_{\rm dm}(r\_{200})$. Consequently the total mass out to *r*200 is given by the integral of the dark matter density along the radius of the cluster (equation 5 of MO12). * The cluster gas is assumed to be an ideal gas, so that its temperature can be trivially represented in terms of its pressure (equations 13 and 14 of MO12). The calculation steps used for the present paper are as described in MO12 except for one modification. Previously, the mapping from *r*200 to *r*500 was a constant factor $\frac{2}{3}$ which was derived by solving the equation $$\label{eqn:r200r5001} \begin{split} \left( \frac{r\_{\rm s}}{r\_{500}} \right)^{3} \left[ \ln \left( 1+\frac{r\_{500}}{r\_{\rm s}} \right) - \left( 1 + \frac{r\_{\rm s}}{r\_{500}} \right)^{-1} \right] = \frac{5}{2} \left( \frac{r\_{\rm s}}{r\_{200}} \right)^{3} \times \\ \left[ \ln \left( 1+\frac{r\_{200}}{r\_{\rm s}} \right) - \left( 1 + \frac{r\_{\rm s}}{r\_{200}} \right)^{-1} \right] \end{split}$$ for a range of values of *M*(*r*200) and *z*. However, following, there is an analytic mapping from *r*200 to *r*500. Consider the equation $$\label{eqn:r200r5002} g(r\_{\rm s}/r\_{500}) = \frac{5}{2} g(r\_{\rm s}/r\_{200}),$$ where *g*(*x*) = *x*3[ln(1 + *x*− 1) − (1 + *x*)− 1]. Equation [eqn:r200r5002] requires that $g(r\_{\rm s}/r\_{500})$ be inverted so that $$\label{eqn:r200r5004} \frac{r\_{\rm s}}{r\_{500}} = x \left( g\_{500}=\frac{5}{2}f(r\_{\rm s}/r\_{200}) \right),$$ where $$\label{eqn:r200r5005} x(g\_{500}) = \left[ a\_{1}g\_{500}^{2p} + \frac{9}{16} \right] ^{-1/2} + 2g\_{500}.$$ Here *p* = *a*2 + *a*3ln*g*500 + *a*4(ln*g*500)2, and the four fitting parameters correspond to *a*1 = 0.5116, *a*2 =  − 0.4283, *a*3 =  − 3.13 × 10− 3 and *a*4 =  − 3.52 × 10− 5. This gives a fit to better than 0.3% accuracy for 0 < *c*200 < 20 and is exact in the limit that *c*200 → 0. Bayesian parameter estimation ----------------------------- Given a model M and a data vector ${\boldsymbol{\mathcal{D}}}$, one can obtain model parameters (also known as input or sampling parameters) ${\boldsymbol{\Theta}}$ conditioned on M and ${\boldsymbol{\mathcal{D}}}$ using Bayes’ theorem: $$\label{eqn:bayes} Pr\left({\boldsymbol{\Theta}}|{\boldsymbol{\mathcal{D}}},\mathcal{M}\right) = \frac{Pr\left({\boldsymbol{\mathcal{D}}}|{\boldsymbol{\Theta}},\mathcal{M}\right)Pr\left({\boldsymbol{\Theta}}|\mathcal{M}\right)}{Pr\left({\boldsymbol{\mathcal{D}}}|\mathcal{M}\right)},$$ where $Pr({\boldsymbol{\Theta}}|{\boldsymbol{\mathcal{D}}},\mathcal{M}) \equiv \mathcal{P}({\boldsymbol{\Theta}})$ is the posterior distribution of the model parameter set, $Pr({\boldsymbol{\mathcal{D}}}|{\boldsymbol{\Theta}},\mathcal{M}) \equiv \mathcal{L}({\boldsymbol{\Theta}})$ is the likelihood function for the data, $Pr({\boldsymbol{\Theta}}|\mathcal{M}) \equiv \pi({\boldsymbol{\Theta}})$ is the prior probability distribution for the model parameter set, and $Pr({\boldsymbol{\mathcal{D}}}|\mathcal{M}) \equiv \mathcal{Z}({\boldsymbol{\mathcal{D}}})$ is the Bayesian evidence of the data given a model M. The evidence can be interpreted as the factor required to normalise the posterior over the model parameter space: $$\label{eqn:evidence} \mathcal{Z}({\boldsymbol{\mathcal{D}}}) = \int \mathcal{L}({\boldsymbol{\Theta}}) \pi({\boldsymbol{\Theta}})\, \mathrm{d}{\boldsymbol{\Theta}},$$ where the integral is carried out over the *N*-dimensional parameter space. Although $\mathcal{Z}({\boldsymbol{\mathcal{D}}})$ is not important in the context of parameter estimation, it is central to the way that the posterior distributions are determined using the nested sampling algorithm MultiNest. MultiNest is a Monte Carlo algorithm which makes use of a transformation of the *N*-dimensional evidence integral into a much easier to evaluate one-dimensional integral, and generates samples from the posterior distribution $\mathcal{P}({\boldsymbol{\Theta}})$ as a by-product. The input parameters can be split into two subsets (which are assumed to be independent of one another): cluster parameters ${\boldsymbol{\Theta}}\_{\rm cl}$ and radio-source or ‘nuisance’ parameters ${\boldsymbol{\Theta}}\_{\rm rs}$. ### Cluster parameter prior distributions Following FF09, the cluster parameters are assumed to be independent of one another, so that $$\label{eqn:cluspriors} \pi\left({\boldsymbol{\Theta}}\_{\rm cl}\right) = \pi(M(r\_{200}))\pi(f\_{\rm gas}(r\_{200}))\pi(z)\pi(x\_{\rm c})\pi(y\_{\rm c}).$$ $x\_{\rm c}$ and $y\_{\rm c}$ are the cluster centre offsets from the SA pointing centre, measured in arcseconds. The prior distributions assigned to the cluster parameters are the same as the ones used in, but with an alteration to the mass limits. Upon running McAdam on data from a few of the *Planck* clusters, it was found that the posterior distributions of *M*(*r*200) were hitting the lower bound 1 × 1014 *M*Sun used in. Hence for this analysis the lower limit on *M*(*r*200) was decreased. Table [tab:clusterpriors] lists the type of prior used for each cluster parameter and the probability distribution parameters. Values for $z\_{\rm Planck}$ were taken from the PSZ2 catalogue. lc Parameter & Prior distribution $x\_{\rm c}$ & N(0ʺ, 60ʺ) $y\_{\rm c}$ & N(0ʺ, 60ʺ) *z* & $\delta(z\_{\rm Planck})$ *M*(*r*200) & $\mathcal{U} [ \log (0.5\times 10^{14} M\_{\rm{Sun}}), \log (50\times 10^{14} M\_{\rm{Sun}})]$ $f\_{\rm gas}(r\_{200})$ & N(0.13, 0.02) [tab:clusterpriors] ### Measured radio-source parameter prior distributions Each radio-source recognised and measured by the LA can also be modelled in the analysis. Following FF09, each source can be parameterised by four variables: its position on the sky ($x\_{\rm rs}$, $y\_{\rm rs}$), its measured flux density $S\_{\rm rs}$, and its spectral index $\alpha\_{\rm rs}$. The latter of these quantities describes how the flux density of a radiating object depends on the frequency of the radiation. Assuming these are independent, then for source *i* $$\label{eqn:rspriors} \pi({\boldsymbol{\Theta}}\_{\mathrm{rs, } i}) = \pi(x\_{\mathrm{rs, } i})\pi(y\_{\mathrm{rs, } i})\pi(S\_{\mathrm{rs, } i})\pi(\alpha\_{\mathrm{rs, } i}).$$ Delta functions are applied to the distributions on $x\_{\rm rs}$ and $y\_{\rm rs}$, due to the LA’s ability to measure spatial positions to high accuracy: $\pi(x\_{\rm rs}) = \delta(x\_{\mathrm{rs, \, LA}})$, $\pi(y\_{\rm rs}) = \delta(y\_{\mathrm{rs, \, LA}})$. Delta priors were also set on $S\_{\rm rs}$ and $\alpha\_{\rm rs}$ (centred on the values measured by the LA), if the measured $S\_{\rm rs}$ was less than four times the instrumental noise associated with the observation, and the source was more than 5 arcminutes away from the SA pointing centre: $\pi(S\_{\rm rs}) = \delta(S\_{\mathrm{rs, \, LA}})$, $\pi(\alpha\_{\rm rs}) = \delta(\alpha\_{\mathrm{rs, \, LA}})$. Otherwise, a Gaussian prior was set on $S\_{\rm{rs}}$ centred at the LA measured value with a standard deviation equal to 40% of the measured value ($\sigma\_{\rm rs} = 0.4 \times S\_{\rm rs, \, LA}$): $\pi(S\_{\rm rs}) \sim \mathcal{N}(S\_{\rm rs, \, LA}, \sigma\_{\rm rs})$. The spectral index $\alpha\_{\rm rs}$ was modelled using the empirical distribution determined in : $\pi(\alpha\_{\rm rs}) = \mathcal{W}(\alpha\_{\rm rs})$. ### Likelihood function Following and FF09, the likelihood function is given by: $$\label{eqn:likelihood} \mathcal{L}({\boldsymbol{\Theta}}) = \frac{1}{Z\_{D}}e^{-\frac{1}{2}\chi^{2}}.$$ Here *χ*2 is a measure of the goodness-of-fit between the real and modelled data and can be expressed as $$\label{eqn:chisq} \chi^{2} = \sum\limits\_{\nu,\nu'} ({\boldsymbol{d}}\_{\nu} - {\boldsymbol{d}}\_{\nu}^{\rm p}({\boldsymbol{\Theta}}))^{\rm T} \mathbfss{C}\_{\nu,\nu'}^{-1} ({\boldsymbol{d}}\_{\nu'} - {\boldsymbol{d}}\_{\nu'}^{\rm p}({\boldsymbol{\Theta}})).$$ In this expression ${\boldsymbol{d}}\_{\nu}$ are the real data observed by AMI at frequency *ν*, and ${\boldsymbol{d}}\_{\nu}^{\rm p}({\boldsymbol{\Theta}})$ are the predicted data generated by the model also at frequency *ν*. AMI data are measured in six neighbouring frequency channels as described in. To generate the predicted data points, values are sampled from $\pi({\boldsymbol{\Theta}})$ which are used in the calculations outlined in MO12 and to generate a pressure profile for the cluster which can be used to replicate the SZ signal measured by an interferometer as detailed in Sections 4 and 5 of FF09. $\mathbfss{C}\_{\nu,\nu'} \equiv \mathbfss{C}^{\rm ins}\_{\nu,\nu'} + \mathbfss{C}^{\rm CMB}\_{\nu,\nu'} + \mathbfss{C}^{\rm conf}\_{\nu,\nu'}$ is the theoretical covariance matrix, which includes instrumental, primordial CMB and source confusion noise as described in FF09 and. Source confusion noise allows for the remaining radio-sources with flux densities less than some flux limit $S\_{\rm lim}$, that cannot be measured accurately by the LA. The instrumental noise is actually measured during the observation and so does not need to be predicted. Referring back to equation [eqn:likelihood], *Z**D* is a normalisation constant given by $(2\pi)^{D / 2} |\mathbfss{C}|^{1/2}$ where *D* is the length of ${\boldsymbol{d}}$ (the vector of data from all frequencies). PSZ2 methodology for deriving cluster mass estimates ---------------------------------------------------- For comparison with the mass values obtained with AMI data, we look at the PSZ2 mass estimates obtained from *Planck* data and the requisite scaling relations. The mass values published in PSZ2 are derived from data from one of three detection algorithms: MMF1, MMF3 (both of which are extensions of the matched multi-filter algorithm suitable for SZ studies (MMF, see, and ), over the whole sky) and PowellSnakes (PwS, ). The former two rely on multi-frequency matched-filter detection methods, whilst PwS is a fully Bayesian method. Since the PwS methodology most closely matches the Bayesian analysis pipeline used for AMI data, we focus on the cluster parameter values from PwS. The observable quantity measured by *Planck* is the integrated Comptonisation parameter *Y*. As described in Section 5 of the PSZ2 paper, for each cluster candidate a two-dimensional posterior for the integrated Comptonisation parameter within the radius 5*r*500, *Y*(5*r*500) and the angular scale radius of the GNFW pressure, $\theta\_{\rm p}$ ($= r\_{\rm p}/D\_{A}$). The values for *Y*(5*r*500) published in PSZ2 are obtained by marginalising over $\theta\_{\rm p}$ and then taking the expected value of *Y*(5*r*500). We will refer to this value as $Y\_{\rm marg}(5r\_{500})$. As described in Sections 5.2 and 5.3 of, this ‘blind’ measurement of the integrated Comptonisation parameter may not be reliable when the underlying cluster pressure distribution deviates from that given by the GNFW model. To overcome this, a function relating *Y*(5*r*500) and $\theta\_{\rm p}$ is derived in an attempt to provide prior information on the angular scale of the cluster based on X-ray measurements and earlier *Planck* mission samples. We refer to this function as the slicing function. ### Derivation of the slicing function The scaling relations considered here are given in. Of particular importance to deriving the slicing function, are the *Y*(*r*500) − *M*(*r*500) and *θ*500 − *M*(*r*500) relations. The first of these is given by $$\label{eqn:y500m500} E(z)^{-2/3}\left[\frac{D\_{A}^{2}Y(r\_{500})}{10^{-4}\rm{Mpc}^{2}}\right] = 10^{-0.19 \pm 0.02} \left[\frac{(1-b)M(r\_{500})}{6 \times 10 ^{14}~M\_{\rm{Sun}}}\right] ^{1.79 \pm 0.08},$$ where $E(z) = \sqrt{\Omega\_{\rm M}(1 + z)^{3} + \Omega\_{\Lambda}}$ and is equal to the ratio of the Hubble parameter evaluated at redshift *z* to its value now for a flat ΛCDM Universe. The factor in the exponent  − 2/3 arises from the scaling relations between mass, temperature and Comptonisation parameter given by equations 1–5 in. (1 − *b*) represents a bias factor, which is assumed in to contain four possible observational biases of departure from hydrostatic equilibrium, absolute instrument calibration, temperature inhomogeneities and residual selection bias. Its value is calculated to be (1 − *b*) = 0.80− 0.01+ 0.02 from numerical simulations as described in Appendix A.4 of. Equation [eqn:y500m500] uses the fitting parameters from the relation between $Y\_{\rm X}(r\_{500})$ (the X-ray ‘analogue’ of the integrated Comptonisation parameter see e.g., $Y\_{\rm X}(r\_{500}) \equiv M\_{\rm g}(r\_{500}) T\_X$ where $M\_{\rm g}$ is the cluster gas mass within *r*500 and *T**X* is the spectroscopic temperature in the range [0.15, 0.75]*r*500) and the X-ray hydrostatic mass, $M\_{\rm HE}(r\_{500})$ (which is equal to (1 − *b*)*M*(*r*500)), established for 20 local *relaxed* clusters by to give the relation between the X-ray mass proxy *M**Y**X*(*r*500) and *M*(*r*500). Finally, the fitting parameters for the *Y*(*r*500) − *M**Y**X*(*r*500) relation are obtained empirically from a 71-cluster sample consisting of SZ data from the *Planck* Early SZ clusters, of *Planck*-detected LoCuSS clusters and from the XMM-Newton validation programme, all with X-ray data taken from XMM-Newton observations ( and ). The *θ*500 − *M*(*r*500) relation is based on the equation $M(r\_{500}) = 500 \times \frac{4\pi}{3} \rho\_{\rm crit}(z) r\_{500}^{3}$ and is given by $$\label{eqn:theta500m500} \theta\_{500} = 6.997 \left[\frac{h}{0.7}\right]^{-2/3} \left[ \frac{(1-b)M\_{500}}{3 \times 10^{14}~M\_{\rm{Sun}}} \right]^{1/3} E(z)^{-2/3}\left[ \frac{D\_{A}}{500~\rm{Mpc}} \right].$$ Equations ([eqn:y500m500]) and ([eqn:theta500m500]) can be solved for (1 − *b*)*M*(*r*500) and equated to give *Y*(*r*500) as a function of *θ*500 $$\label{eqn:y500theta500} Y(r\_{500}) = \left[ \frac{\theta\_{500}}{6.997} \right] ^{5.4 \pm 0.2} \left[ \frac{h}{0.7} \right] ^{3.60 \pm 0.13} \left[\frac{E(z)^{4.26 \pm 0.13} D\_{A}^{3.4 \pm 0.2}}{10^{19.29 \pm 0.54} ~ \rm{Mpc}^{3.4 \pm 0.2}} \right],$$ where *Y*(*r*500) is in $\rm{sr}$. Assuming a GNFW pressure profile, *Y*(*r*500) can be converted to the corresponding value of *Y*(5*r*500), through the relation $$\label{eqn:yr500y5r500} \frac{Y(r\_{500})}{Y(5r\_{500})} = \frac{B \left( \frac{(c\_{500})^{a}}{1+(c\_{500})^{a}}; \frac{3 - c}{a}, \frac{b - 3}{a} \right)}{B \left( \frac{(5c\_{500})^{a}}{1+(5c\_{500})^{a}}; \frac{3 - c}{a}, \frac{b - 3}{a} \right)},$$ where $B(x;y,z) = \int\_{0}^{x} t^{y-1}(1-t)^{z-1}\rm{d}t$ is the incomplete beta function. For the GNFW parameter values used in equation [eqn:gnfw], equation [eqn:yr500y5r500] gives a value of 0.55. Similarly, *θ*500 can be related to $\theta\_{\rm p}$ through the relation $\theta\_{\rm p} = \theta\_{500} / c\_{500} $. ### Mass estimates For a given cluster, the resulting *Y*(5*r*500) function is used to ‘slice’ the posterior, and the value where the function intersects the posterior ‘ridge’ is taken to be the most reliable estimate of *Y*(5*r*500), given the external information. The posterior ridge (see Figure [graph:slicing]) is defined to be the value of *Y*(5*r*500) which gives the highest probability density for a given $\theta\_{\rm p}$. The error estimates are obtained by considering where the slicing function intersects with the ridges defined by the 68% maximum likelihood confidence intervals for *Y*(5*r*500) at each $\theta\_{\rm p}$. *Y*(5*r*500) is then converted to *Y*(*r*500) using the the reciprocal of the value given by equation [eqn:yr500y5r500], and this is used to derive a value for *M*(*r*500) using equation [eqn:y500m500], but with the (1 − *b*) term excluded. The bias term is not included in the *M*(*r*500) calculation because it has already been accounted for in the slicing function. Note that this value of *M*(*r*500) is what is referred to as $M\_{\rm SZ}$ in PSZ2. [graph:slicing] AMI and PSZ2 mass estimates =========================== First we describe how we arrived at a final sample of clusters for which the AMI mass estimates are compared with those derived from *Planck* data. Final cluster sample -------------------- ### Well constrained posterior sample McAdam was used on data from the initial sample of 199 clusters. MultiNest failed to produce posterior distributions for two clusters. These clusters were surrounded by high flux, extended radio-sources. Of the 197 clusters for which posterior distributions were produced, 73 clusters show good constraints (adjudged by physical inspection) on the sampling parameters *M*(*r*200), $f\_{\rm gas}(r\_{200})$, $x\_{\rm c}$ and $y\_{\rm c}$; with *z*s ranging from 0.089 to 0.83. We illustrate a ‘well constrained’ posterior distribution (for cluster PSZ2 G184.68+28.91) in the first half of Figure [graph:posteriorconstraints], plotted using GetDist[2](#fn2). In contrast the second half of Figure [graph:posteriorconstraints] is an example of a cluster (PSZ2 G121.77+51.75) which shows poor constraints on mass as the posterior distribution is peaked at the lower boundary of the mass sampling range ($5\times 10^{13} M\_{\rm{Sun}}$) which could not be classed as a detection within our mass prior range. We also note that in the latter case the mass posterior largely resembles the log uniform prior distribution. (a) 0.45(b) [graph:posteriorconstraints] ### Moderate radio-source environment sample For the 197 cluster sample, AMI data maps were produced using the software package AIPS[3](#fn3) using the automated CLEAN procedure with a limit determined using IMEAN. Source-finding was carried out at a four *σ* level on the LA continuum map, as described in and. For each cluster both a non-source-subtracted and a source-subtracted map was produced. The values used to subtract the sources from the maps were the mean values of the one-dimensional marginalised posterior distributions of the sources’ position, flux and spectral index produced by McAdam. Maps of the 73 cluster sample were inspected in detail. It was found that for seven of these clusters, even though the posterior distributions were well constrained, that the radio-source and primordial CMB contamination could bias the cluster parameter estimates in an unpredictable way. In these cases it was found that the subtracted maps contained residual flux close to the cluster centre, from either radio-sources (some of which were extended), radio-frequency interference, or CMB. PSZ2 G125.37-08.67 is an example of one of these clusters and its non-source-subtracted and source-subtracted maps are shown in Figure [graph:badrs]. We thus arrived at a 66 cluster sample. (a) 0.45(b) [graph:badrs] ### Well defined cluster-centre sample The posteriors of $x\_{\rm c}$ and $y\_{\rm c}$ give the position of the modelled cluster centre relative to the actual SA pointing centre used for the observation. For seven of the 66 cluster sample, it was found that the mean posterior values of $x\_{\rm c}$ and $y\_{\rm c}$ changed dramatically between different runs of McAdam (on the same cluster data), by up to 70 arcseconds in either direction, leading to differences in mass estimates of up to 70%. The estimates for these clusters are not reliable, since the model was creating a completely different cluster between runs, and so these clusters were excluded leaving a 59 cluster sample. For the remaining clusters, the change in *M*(*r*200) between runs was much smaller than the standard deviation of the corresponding posterior distributions. Figure [graph:offsetmap] shows the subtracted map for PSZ2 G183.90+42.99, which we consider to be an example of a cluster with an ill-defined centre. The map shows three flux decrement peaks close to the cluster centre. Movement of the centre between these peaks with the current source environment modelling would lead to a change in the size of the predicted cluster, and consequently different mass estimates each time. [graph:offsetmap] ### PwS detected cluster sample For five of the 59 cluster sample, the data available on the *Planck* website[4](#fn4) did not contain a detection using the PwS algorithm, and so no mass estimates based on PwS data could be calculated. Hence the final sample size for which we present the mass estimates from both AMI and *Planck* data is 54. It is important to realise that selection biases are introduced in reducing the sample size from 199 to 54. In particular, selecting only the clusters which showed good AMI posterior constraints means that clusters corresponding to a signal too faint for AMI to detect, clusters with large enough angular size for AMI’s shortest baselines not to be able to measure the signal from the outskirts of the cluster (“resolved clusters”), and clusters where the radio-source and CMB contamination dwarfs the signal of the cluster, are all likely to have been excluded from the sample to some extent. In addition, removing the seven clusters with an ill defined centre likely removes some unrelaxed clusters from the sample. AMI and PSZ2 mass estimates --------------------------- The AMI and PSZ2 parameter estimates for the 54 clusters are given in Table [tab:results]. The clusters are listed in ascending order of *z*. Note that whether a redshift is photometric or spectroscopic is stated in the fifth column. All AMI values are the mean values of the corresponding parameter posterior distributions, with the error taken as the standard deviation. The estimates of the sampling parameters are included for comparison with each other, and with the sampling prior ranges and associated parameters given in Table [tab:clusterpriors]. The AMI values for *M*(*r*500) are given for comparison with the corresponding PSZ2 estimates. Two values for the PSZ2 mass estimates are given, $M\_{\rm Pl,\, marg}(r\_{500})$ and $M\_{\rm Pl,\, slice}(r\_{500})$. $M\_{\rm Pl,\, marg}(r\_{500})$ corresponds to the mass given by the *Y*(*r*500) − *M*(*r*500) relation when the marginalised integrated Comptonisation parameter is used as described in Section [subsec:pws]. The uncertainties associated with these *Y* values are taken as the standard deviations of the marginalised posteriors. $M\_{\rm Pl,\, slice}(r\_{500})$ is detailed in Section [subsubsec:planckmass]; its associated errors are calculated from the *Y*(5*r*500) values where the slicing function intersects with the two ridges formed by the 68% confidence interval values of the *Y*(5*r*500) probability densities over the posterior domain of $\theta\_{\rm p}$. Figure [graph:m200z] shows *M*(*r*200) as a function of *z*. Excluding the clusters at *z* = 0.089,  0.4 and 0.426, there is a steepening in mass between $ 0.1 \lessapprox z \lessapprox 0.5$ before it flattens off at higher *z*. This result is roughly consistent with the PSZ2 mass estimates (at *r*500) obtained in. [graph:m200z] We now focus on the comparison between AMI and *Planck* mass estimates. Note that do not provide any means for estimating *M*(*r*200) from their data, as *r*200 is the distance related to the scale radius ($r\_{200} = c\_{200} \times r\_{\rm s}$) for the NFW dark matter profile given by equation [eqn:nfw], which they do not incorporate into their modelling process. Figure [graph:m500planck] gives the AMI and two *Planck* estimates for *M*(*r*500) vs the row number, in Table [tab:results]. We have not used *z* as the independent variable in this plot for clarity. The row number is monotonically related to *z*, as Table [tab:results] is sorted by ascending *z*. From Figure [graph:m500planck] it is clear that AMI underestimates the mass relative to both PSZ2 values. In fact *M*(*r*500) is lower than $M\_{\rm Pl,\, slice}(r\_{500})$ in 37 out of 54 cases. *M*(*r*500) is lower than $M\_{\rm Pl,\, marg}(r\_{500})$ in 45 out of 54 cases. 31 of the AMI masses are within one combined standard deviation of $M\_{\rm Pl,\, slice}(r\_{500})$, while 46 are within two. Four clusters have discrepancies larger than three combined standard deviations. Three of these clusters are at relatively low redshift ( ≤ 0.25), whilst one is at *z* = 0.43. Figure [graph:m500planckfrac] shows the pairwise ratios of mass estimates between the three different methods. The most obvious thing to note is that the ratio of PSZ2 masses is consistently greater than one, which again emphasises the fact that the marginalisation method attributes a much higher mass to the clusters than the slicing method. Furthermore, the ratio of AMI mass to the marginalised mass is small at medium redshift, which suggests that the marginalised mass is systematically high in this range. This graph also emphasises that the AMI mass and the slicing methodology mass are the most consistent with one another. llllllllllll [tab:results] & & & & & & & & & & & & & & & & & & & & & & =1mm = 1 & PSZ2 G044.20+48.66 & ACO2142 & 0.0894 & S & 13.49 ± 2.35 & 9.14 ± 18.20 & 8.80 ± 15.08 & 0.13 ± 0.02 & 9.25 ± 1.58 & 10.81 ± 0.42 & 8.76 ± 0.210.19 2 & PSZ2 G053.53+59.52 & ACO2034 & 0.113 & S & 8.51 ± 1.28 &  − 1.80 ± 13.10 & 19.39 ± 9.86 & 0.13 ± 0.02 & 5.87 ± 0.86 & 5.38 ± 0.39 & 5.48 ± 0.240.24 3 & PSZ2 G151.90+11.63 & CIZAJ0515.3+5845 & 0.12 & S & 5.74 ± 1.24 & 67.58 ± 27.09 & 68.01 ± 18.58 & 0.13 ± 0.02 & 3.99 ± 0.84 & 4.23 ± 1.03 & 3.65 ± 0.470.50 4 & PSZ2 G218.59+71.31 & ACO1272 & 0.137 & S & 2.70 ± 0.99 & 2.82 ± 25.21 &  − 16.62 ± 25.98 & 0.13 ± 0.02 & 1.90 ± 0.68 & 4.79 ± 0.80 & 3.62 ± 0.300.30 5 & PSZ2 G226.18+76.79 & ACO1413 & 0.1427 & S & 8.19 ± 1.23 &  − 35.33 ± 10.98 &  − 1.13 ± 13.44 & 0.13 ± 0.02 & 5.62 ± 0.82 & 6.14 ± 0.55 & 5.98 ± 0.250.25 6 & PSZ2 G165.06+54.13 & ACO990 & 0.144 & S & 7.80 ± 1.35 & 32.43 ± 13.21 &  − 27.57 ± 15.52 & 0.14 ± 0.02 & 5.36 ± 0.90 & 5.13 ± 0.51 & 4.83 ± 0.290.28 7 & PSZ2 G077.90-26.63 & ACO2409 & 0.147 & S & 9.09 ± 1.32 &  − 26.87 ± 10.89 & 18.00 ± 11.85 & 0.14 ± 0.02 & 6.22 ± 0.88 & 5.92 ± 0.58 & 5.08 ± 0.270.27 8 & PSZ2 G050.40+31.17 & ACO2259 & 0.164 & S & 5.52 ± 1.19 & 35.72 ± 21.77 & 9.31 ± 19.56 & 0.13 ± 0.02 & 3.80 ± 0.80 & 4.53 ± 0.62 & 4.36 ± 0.360.35 9 & PSZ2 G097.72+38.12 & ACO2218 & 0.1709 & S & 10.65 ± 1.68 & 31.99 ± 15.25 &  − 0.95 ± 13.52 & 0.13 ± 0.02 & 7.23 ± 1.11 & 7.44 ± 0.40 & 6.64 ± 0.170.17 10 & PSZ2 G099.30+20.92 & MCXCJ1935.3+6734 & 0.171 & S & 5.57 ± 1.24 &  − 37.19 ± 19.92 &  − 24.50 ± 21.16 & 0.13 ± 0.02 & 3.83 ± 0.83 & 5.88 ± 0.93 & 3.91 ± 0.250.23 11 & PSZ2 G067.17+67.46 & ACO1914 & 0.1712 & S & 10.45 ± 1.49 & 31.39 ± 12.81 &  − 33.15 ± 11.99 & 0.13 ± 0.02 & 7.09 ± 0.99 & 7.14 ± 0.47 & 7.04 ± 0.270.26 12 & PSZ2 G167.67+17.63 & RXJ0638.1+4747 & 0.174 & S & 4.78 ± 1.36 &  − 28.70 ± 31.24 & 10.76 ± 28.64 & 0.13 ± 0.02 & 3.30 ± 0.92 & 7.72 ± 0.81 & 6.31 ± 0.340.33 13 & PSZ2 G066.68+68.44 & ACO1902 & 0.181 & S & 4.95 ± 1.43 & 56.07 ± 25.47 & 8.14 ± 33.23 & 0.13 ± 0.02 & 3.41 ± 0.97 & 5.27 ± 0.84 & 3.98 ± 0.370.33 14 & PSZ2 G065.28+44.53 & ACO2187 & 0.183 & S & 5.24 ± 1.28 &  − 16.66 ± 22.61 &  − 16.54 ± 21.65 & 0.13 ± 0.02 & 3.60 ± 0.86 & 3.89 ± 0.98 & 3.56 ± 0.510.47 15 & PSZ2 G084.47+12.63 & MCXCJ1948.3+5113 & 0.185 & S & 4.79 ± 1.22 &  − 73.73 ± 31.17 &  − 16.97 ± 20.93 & 0.13 ± 0.02 & 3.30 ± 0.82 & 5.98 ± 0.65 & 4.94 ± 0.340.33 16 & PSZ2 G100.04+23.73 & ACO2317 & 0.21 & S & 5.44 ± 1.13 & 20.24 ± 19.02 &  − 22.73 ± 20.90 & 0.13 ± 0.02 & 3.72 ± 0.75 & 4.10 ± 0.80 & 3.73 ± 0.310.29 17 & PSZ2 G180.60+76.65 & SDSSCGB26344.3 & 0.2138 & S & 5.38 ± 1.21 & 37.81 ± 15.59 &  − 66.98 ± 19.41 & 0.13 ± 0.02 & 3.68 ± 0.81 & 6.76 ± 0.75 & 6.00 ± 0.340.35 18 & PSZ2 G166.09+43.38 & ACO773N & 0.2172 & S & 9.84 ± 1.39 &  − 5.35 ± 10.66 &  − 3.98 ± 9.70 & 0.13 ± 0.02 & 6.63 ± 0.92 & 7.76 ± 0.73 & 6.87 ± 0.320.34 19 & PSZ2 G125.30-27.99 & N/A & 0.223 & P & 4.51 ± 1.31 &  − 8.08 ± 26.99 & 8.82 ± 30.24 & 0.13 ± 0.02 & 3.09 ± 0.87 & 5.54 ± 0.98 & 4.70 ± 0.550.56 20 & PSZ2 G060.13+11.44 & N/A & 0.224 & S & 7.47 ± 1.22 &  − 64.79 ± 12.50 &  − 49.27 ± 14.16 & 0.13 ± 0.02 & 5.06 ± 0.80 & 7.55 ± 1.09 & 5.34 ± 0.500.49 21 & PSZ2 G166.62+42.13 & ACO746 & 0.232 & P & 3.56 ± 1.07 &  − 38.98 ± 29.87 &  − 38.09 ± 37.84 & 0.13 ± 0.02 & 2.44 ± 0.72 & 5.60 ± 0.71 & 5.36 ± 0.410.39 22 & PSZ2 G097.94+19.43 & 4C 65.28 & 0.25 & S & 5.01 ± 1.31 &  − 114.76 ± 22.50 &  − 13.64 ± 34.07 & 0.13 ± 0.02 & 3.40 ± 0.87 & 5.69 ± 0.85 & 4.04 ± 0.330.30 23 & PSZ2 G164.29+08.94 & N/A & 0.251 & P & 5.97 ± 1.06 &  − 62.17 ± 14.03 & 18.12 ± 17.06 & 0.13 ± 0.02 & 4.04 ± 0.70 & 7.91 ± 1.36 & 6.24 ± 0.640.62 24 & PSZ2 G133.60+69.04 & RXJ1229.0+4737 & 0.254 & S & 5.26 ± 1.60 & 5.87 ± 25.04 & 59.40 ± 37.35 & 0.13 ± 0.02 & 3.57 ± 1.06 & 7.04 ± 0.97 & 5.42 ± 0.430.38 25 & PSZ2 G086.47+15.31 & MCXCJ1938.3+5409 & 0.26 & S & 10.89 ± 1.87 &  − 39.65 ± 13.24 & 19.83 ± 12.61 & 0.13 ± 0.02 & 7.25 ± 1.21 & 9.54 ± 0.63 & 7.76 ± 0.280.29 26 & PSZ2 G139.62+24.18 & N/A & 0.2671 & S & 8.13 ± 1.28 & 36.66 ± 11.64 &  − 12.58 ± 10.80 & 0.13 ± 0.02 & 5.45 ± 0.84 & 8.34 ± 1.06 & 7.11 ± 0.470.48 27 & PSZ2 G184.68+28.91 & ACO611 & 0.288 & S & 7.90 ± 1.02 & 22.61 ± 10.45 & 13.48 ± 9.97 & 0.13 ± 0.02 & 5.28 ± 0.67 & 11.44 ± 2.30 & 5.61 ± 0.530.52 28 & PSZ2 G154.13+40.19 & ACO747 & 0.29 & P & 6.46 ± 1.13 & 70.99 ± 14.72 &  − 42.86 ± 13.25 & 0.13 ± 0.02 & 4.33 ± 0.74 & 6.09 ± 1.10 & 5.48 ± 0.460.45 29 & PSZ2 G095.49+16.41 & N/A & 0.3 & S & 5.43 ± 1.12 &  − 24.47 ± 19.10 &  − 102.18 ± 18.33 & 0.13 ± 0.02 & 3.65 ± 0.74 & 4.91 ± 0.99 & 4.38 ± 0.490.48 30 & PSZ2 G109.52-19.16 & N/A & 0.3092 & P & 8.53 ± 1.40 &  − 30.38 ± 13.77 &  − 15.21 ± 15.15 & 0.13 ± 0.02 & 5.66 ± 0.91 & 8.34 ± 1.79 & 5.78 ± 0.520.48 31 & PSZ2 G198.90+18.16 & [SPD2011] 298 & 0.3184 & P & 7.61 ± 1.18 & 26.76 ± 14.62 &  − 58.07 ± 11.95 & 0.13 ± 0.02 & 5.06 ± 0.77 & 7.99 ± 1.47 & 5.87 ± 0.570.55 32 & PSZ2 G152.33+81.28 & MCXCJ1230.7+3439 & 0.333 & S & 6.27 ± 1.12 &  − 52.81 ± 20.78 & 44.11 ± 14.62 & 0.13 ± 0.02 & 4.17 ± 0.73 & 5.08 ± 0.96 & 5.05 ± 0.570.53 33 & PSZ2 G108.17-11.56 & N/A & 0.336 & S & 8.00 ± 1.23 & 35.19 ± 13.14 &  − 70.15 ± 19.09 & 0.13 ± 0.02 & 5.29 ± 0.80 & 9.82 ± 1.29 & 7.42 ± 0.600.57 34 & PSZ2 G132.47-17.27 & MCXCJ0142.9+4438 & 0.341 & S & 12.43 ± 1.85 & 31.87 ± 10.19 & 15.27 ± 12.93 & 0.13 ± 0.02 & 8.13 ± 1.18 & 8.27 ± 1.12 & 8.07 ± 0.650.61 35 & PSZ2 G207.88+81.31 & ACO1489 & 0.353 & S & 11.26 ± 1.61 & 68.55 ± 8.44 & 62.56 ± 11.55 & 0.13 ± 0.02 & 7.36 ± 1.02 & 8.01 ± 0.95 & 7.54 ± 0.450.45 36 & PSZ2 G157.32-26.77 & MCSJ0308.9+2645 & 0.356 & S & 14.28 ± 2.12 & 0.33 ± 8.12 & 17.65 ± 11.53 & 0.13 ± 0.02 & 9.27 ± 1.34 & 10.95 ± 1.12 & 10.67 ± 0.650.64 37 & PSZ2 G071.21+28.86 & RXSJ175201.5+444046 & 0.366 & S & 9.26 ± 1.51 &  − 29.82 ± 9.95 &  − 12.58 ± 13.26 & 0.13 ± 0.02 & 6.07 ± 0.96 & 6.15 ± 0.80 & 6.70 ± 0.460.44 38 & PSZ2 G194.98+54.12 & MCSJ1006.9+3200 & 0.375 & P & 8.90 ± 1.56 & 32.58 ± 12.17 &  − 0.22 ± 19.18 & 0.13 ± 0.02 & 5.83 ± 1.00 & 6.31 ± 1.38 & 5.30 ± 0.680.65 39 & PSZ2 G109.86+27.94 & N/A & 0.4 & S & 4.57 ± 1.28 & 3.98 ± 22.50 & 7.39 ± 18.70 & 0.13 ± 0.02 & 3.03 ± 0.83 & 5.23 ± 0.91 & 5.23 ± 0.480.45 40 & PSZ2 G083.29-31.03 & MCXCJ2228.6+2036 & 0.412 & S & 11.85 ± 1.73 & 81.05 ± 13.29 &  − 3.42 ± 12.73 & 0.13 ± 0.02 & 7.65 ± 1.09 & 9.21 ± 0.95 & 8.31 ± 0.450.44 41 & PSZ2 G063.38+53.44 & NSCJ1537+392702 & 0.422 & S & 12.17 ± 1.94 & 46.13 ± 12.01 & 46.02 ± 9.37 & 0.13 ± 0.02 & 7.84 ± 1.22 & 7.78 ± 1.54 & 6.17 ± 0.620.58 42 & PSZ2 G063.80+11.42 & N/A & 0.426 & S & 5.13 ± 1.19 &  − 36.41 ± 22.22 &  − 47.14 ± 19.79 & 0.13 ± 0.02 & 3.37 ± 0.76 & 5.53 ± 0.63 & 6.41 ± 0.580.57 43 & PSZ2 G157.43+30.34 & RXJ0748.6+5940 & 0.45 & P & 11.64 ± 1.56 &  − 61.32 ± 7.38 & 4.53 ± 8.27 & 0.13 ± 0.02 & 7.47 ± 0.98 & 6.71 ± 0.44 & 8.16 ± 0.540.54 44 & PSZ2 G150.56+58.32 & CLGJ1115+5319 & 0.47 & S & 12.77 ± 2.40 & 10.18 ± 13.31 & 34.06 ± 18.57 & 0.13 ± 0.02 & 8.14 ± 1.49 & 10.04 ± 1.61 & 7.44 ± 0.530.50 45 & PSZ2 G170.98+39.45 & [SPD2011] 16774 & 0.5131 & S & 10.11 ± 1.38 & 31.48 ± 10.20 &  − 30.87 ± 12.67 & 0.12 ± 0.02 & 6.43 ± 0.86 & 8.24 ± 1.30 & 7.55 ± 0.710.65 46 & PSZ2 G094.56+51.03 & N/A & 0.5392 & S & 10.83 ± 1.43 & 81.61 ± 8.09 & 52.86 ± 8.80 & 0.13 ± 0.02 & 6.85 ± 0.88 & 6.46 ± 0.93 & 5.90 ± 0.440.45 47 & PSZ2 G228.16+75.20 & CLGJ1149+2223 & 0.545 & S & 15.63 ± 1.66 &  − 15.49 ± 5.32 & 17.11 ± 4.75 & 0.13 ± 0.01 & 9.78 ± 1.01 & 9.64 ± 0.94 & 9.69 ± 0.550.53 48 & PSZ2 G213.39+80.59 & SDSSCGB41791 & 0.5586 & S & 9.31 ± 1.32 &  − 9.73 ± 11.90 & 69.37 ± 12.14 & 0.13 ± 0.02 & 5.89 ± 0.81 & 8.03 ± 1.39 & 6.77 ± 0.650.63 49 & PSZ2 G066.41+27.03 & N/A & 0.5699 & S & 13.23 ± 2.05 &  − 33.18 ± 11.12 & 97.03 ± 11.32 & 0.13 ± 0.02 & 8.27 ± 1.25 & 7.33 ± 0.82 & 7.72 ± 0.540.52 50 & PSZ2 G144.83+25.11 & CLGJ0647+7015 & 0.584 & S & 11.69 ± 1.46 & 4.15 ± 7.87 &  − 1.21 ± 8.54 & 0.13 ± 0.02 & 7.32 ± 0.89 & 8.50 ± 1.27 & 7.80 ± 0.740.72 51 & PSZ2 G045.87+57.70 & N/A & 0.611 & S & 9.22 ± 1.97 & 11.71 ± 14.87 & 24.21 ± 12.21 & 0.13 ± 0.02 & 5.78 ± 1.20 & 8.49 ± 1.61 & 7.05 ± 0.710.66 52 & PSZ2 G108.27+48.66 & N/A & 0.674 & S & 9.31 ± 1.46 & 9.99 ± 11.34 & 35.79 ± 11.45 & 0.13 ± 0.02 & 5.77 ± 0.88 & 8.44 ± 1.58 & 4.96 ± 0.520.48 53 & PSZ2 G086.93+53.18 & N/A & 0.6752 & P & 9.85 ± 1.69 &  − 47.72 ± 14.38 & 27.69 ± 10.67 & 0.13 ± 0.02 & 6.10 ± 1.01 & 6.07 ± 1.09 & 5.46 ± 0.520.51 54 & PSZ2 G141.77+14.19 & N/A & 0.83 & P & 10.99 ± 1.50 &  − 4.36 ± 8.54 &  − 19.02 ± 8.85 & 0.13 ± 0.02 & 6.61 ± 0.87 & 9.94 ± 2.01 & 7.77 ± 0.950.90 AMI simulations with PSZ2 mass inputs ===================================== To investigate further the discrepancies between the mass estimates, it was decided to create simulated data based on the PSZ2 mass estimates obtained from the slicing methodology, which were then ‘observed’ by AMI. The data from these simulated observations were analysed the same way as the real data. The simulations were carried out using the in-house AMI simulation package Profile, which has been used in various forms in e.g.,, and. The input parameters for the simulation– which uses the physical model to create the cluster– are the sampling parameters of the model. Since does not give a method for calculating *M*(*r*200) it was calculated as follows. First *r*500 was calculated by solving $M\_{\rm SZ} = 500 \times \frac{4\pi}{3} \rho\_{\rm crit}(z) r\_{500}^{3}$ for *r*500. *r*200 can be determined from *r*500, but we note that the function mapping from *r*200 to *r*500 is non-invertible, thus *r*200 had to be calculated by solving equation [eqn:r200r5001] iteratively. *M*(*r*200) can then be calculated from $M(r\_{200}) = 200 \times \frac{4\pi}{3} \rho\_{\rm crit}(z) r\_{200}^{3}$. As well as the values of *M*(*r*200) derived from PSZ2 mass estimates, values for the other inputs were also required. We used $f\_{\rm gas}(r\_{200}) = 0.13$, $z = z\_{\rm Planck}$, and $x\_{\rm c} = y\_{\rm c} = 0$ arcsec. The objective of these simulations was to see whether we could recover the mass input into the simulation to create a cluster using the physical model, ‘observed’ by AMI and then analysed using the same model. We tried this for the four sets of simulations described below. For each simulation different noise / canonical radio-source environment realisations (where relevant) were used each time. Due to the large sample size this should not affect any systematic trends seen in the results, and it avoids having to pick a particular realisation to be used in all the simulations. . The points with circular markers correspond to clusters whose redshifts were measured photometrically (as listed in Table [tab:results]). [graph:m500planck] [graph:m500planckfrac] Simulations of clusters plus instrumental noise ----------------------------------------------- For each cluster, *M*(*r*200) was calculated and Gaussian instrumental noise was added to the sky. The RMS of the noise added was $0.7~ \rm{Jy}$ per channel per baseline per second, a value typical of an AMI cluster observation. Figure [graph:simNSNBmap] shows the map produced from the simulated data of cluster PSZ2 G044.20+48.66 plus this instrumental noise. The mass estimate derived from the Bayesian analysis of this cluster is 0.56 standard deviations above the input value. [graph:simNSNBmap] Figure [graph:simulatednsnb] shows the difference between the input masses and the ones recovered from running the simulated observations through McAdam, visualised using a histogram. All but three of the clusters lie within one standard deviation of the input mass, and even these clusters (PSZ2 G154.13+40.19, PSZ2 G207.88+81.31 and PSZ2 G213.39+80.59) give an output mass 1.01, 1.26 and 1.08 standard deviations below the input mass. [graph:simulatednsnb] Simulations further adding confusion noise and primordial CMB ------------------------------------------------------------- Confusion noise is defined to be the flux from radio-sources below a certain limit (here $S\_{\rm{conf}} = 0.3~\rm{mJy}$). In this Section all radio-source realisations only contribute to the confusion noise. However in Sections [subsec:CSsims] and [subsec:sourcesims] sources above $S\_{\rm{conf}}$ are included. The confusion noise contributions (see e.g. Section 5.3 of FF09) were sampled from the probability density function corresponding to the 10C source counts given in, and placed at positions chosen at random. Similarly, the primordial CMB realisations were sampled from an empirical distribution, and randomly added to the maps. Figure [graph:simNSmap] shows the map produced from the simulated data of cluster PSZ2 G044.20+48.66, including the three noise contributions. The mass estimate derived from the Bayesian analysis of this cluster is 0.22 standard deviations above the input value. [graph:simNSmap] The differences between output and input masses are shown in Figure [graph:simulatedns]. This time eight out of the 54 clusters cannot recover the input mass to within one standard deviation. In all eight of these cases, the mass is underestimated with respect to the input value. Five of the outlier values correspond to clusters at low redshift (*z* < 0.2). [graph:simulatedns] Simulations further adding a canonical radio-source environment --------------------------------------------------------------- The third set of simulations included recognised radio-sources, which formed a canonical radio-source environment. They were created in the same way as with the confusion noise described above, but with higher flux limits so that in reality, the LA would have been able to recognise them. The upper flux limit was set to $25~\rm{mJy}$. Figure [graph:simCSmap] shows the map produced from the simulated data of cluster PSZ2 G044.20+48.66, including a canonical source environment and background noise. The mass estimate derived from the Bayesian analysis of this cluster is 0.51 standard deviations below the input value. [graph:simCSmap] Figure [graph:simulatedcs] shows that the canonical radio-source environment have little effect on the mass estimation relative to Section [subsec:noisesims], as there are still 8 clusters which give mass estimates greater than one standard deviation away from the input value. Note that in this case, the outliers occurred across the entire range of redshifts, which suggests that in Section [subsec:noisesims] the low redshift trend was just a coincidence. [graph:simulatedcs] Simulations with LA observed radio-source environment plus instrumental, confusion and CMB noise ------------------------------------------------------------------------------------------------ The final set of simulations included the radio-source environment measured by the LA during the real observation for each cluster. These are only estimates of the actual source environments, and are only as reliable as the LA’s ability to measure them. Figure [graph:simrsmap] shows the maps produced from the real and simulated data of cluster PSZ2 G044.20+48.66. The mass estimate derived from the Bayesian analysis of the simulated dataset is just 0.08 standard deviations above the input value. Figure [graph:simulatedrs] shows that including the LA observed radio-source environment has a large effect on the results, as this time there are 16 clusters which are more than one standard deviation away from the input mass. Furthermore, three of these overestimated the mass relative to the input, the first time we have seen this occur in any of the simulations. A possible source of bias could be due to for example, the empirical prior on the spectral index incorrectly modelling some radio-sources. Another source of bias could be the position of a source relative to the cluster, and the magnitude of the source flux. For example, if a high flux radio-source is close to the centre of the galaxy cluster, then even a slight discrepancy between the real and the modelled values for the source could have a large effect on the cluster parameter estimates. We now compare these results to the simulations in YP15 (which concluded that the underestimation of the simulation input values could be due to deviation from the ‘universal’ profile, see Figure 23a in the paper). The results of the large cluster simulations (total integrated Comptonisation parameter  = 7 × 103 arcmin2 and $\theta\_{\rm p} = 7.4$ arcmin) in YP15 seem biased low at a more significant level than those in Figure [graph:simulatedrs], as in the former case less than half of the clusters recover the true value within two standard deviations. For the smaller clusters however, YP15 found a slight upward bias in the simulation results, but this is probably smaller in magnitude than the bias found in this Section. (a) 0.45(b) [graph:simrsmap] Statistics of results of real and simulated data ------------------------------------------------ Looking at the histograms produced in Sections [subsec:NSNBsims], [subsec:noisesims], [subsec:CSsims], and [subsec:sourcesims], in the last three cases it is apparent that there is a negative skew in the data, i.e. the output masses are systematically low relative to the input masses. The skews calculated from the samples associated with the four histograms are  − 0.17,  − 1.30,  − 0.91, and  − 0.96 respectively in units of standard deviations of the output mass. This suggests that the inclusion of confusion and CMB noise bias the AMI cluster mass estimates. We also calculate the median values associated with the histograms, and compare them with the medians corresponding to the real AMI and PSZ2 masses given in Figure [graph:m500planck]. The median values for the four histograms are  − 0.24, 0.09,  − 0.27 and  − 0.34 respectively in units of standard deviations of the output mass. For the real data the median values for $(M\_{\rm AMI}(r\_{500}) - M\_{\rm Pl,\, marg}(r\_{500})) / \sigma\_{\rm AMI}$ and $(M\_{\rm AMI}(r\_{500}) - M\_{\rm Pl,\, slice}(r\_{500})) / \sigma\_{\rm AMI}$ are  − 1.57 and  − 0.56. It makes sense to compare the second of these real data values with those obtained from the simulations, as it was $M\_{\rm Pl,\, slice}(r\_{500})$ which was used to derive the input masses. The fact that the median from the real data is greater in magnitude than the values from the simulations implies in general, our simulations can recover their input values with better agreement than that obtained between real AMI estimates and those obtained from *Planck* data using the slicing function methodology. This seems plausible as you would expect that inferring results from data which was created using the same model used in the inference would be more accurate than results from data taken from two different telescopes, which use different models in their inference. Furthermore the simulation medians tell us that AMI is capable of inferring the masses derived with the slicing methodology, if the cluster is created using the model used in the inference and assuming there are no large discrepancies between the real and simulated AMI observations. Conclusions =========== We have made observations of galaxy clusters detected by the *Planck* space telescope, with the Arcminute Microkelvin Imager (AMI) radio interferometer system in order to compare mass estimates obtained from their data. We then analysed this data using a physical model based on the one described in, following largely the data analysis method outlined in. This allowed us to derive physical parameter estimates for each cluster, in particular the total mass out to a given radius. We have also calculated two mass estimates for each cluster from *Planck*’s PowellSnakes detection algorithm data following (PSZ2). We found the following. * For the AMI mass estimates of *Planck* selected clusters there is generally a steeping in the mass of galaxy clusters as a function of redshift, which flattens out at around *z* ≈ 0.5. * AMI *M*(*r*500) estimates are within one combined standard deviation of the PSZ2 slicing function mass estimates for 31 out of the final sample of 54 clusters. However, the AMI masses are lower than both PSZ2 estimates for 37 out of the 54 cluster sample. To investigate further the possible biasing of AMI mass estimates, we created simulations of AMI data with input mass values from the PSZ2 slicing methodology. We considered four different cases for the simulations: 1) galaxy cluster plus instrumental noise; 2) galaxy cluster plus instrumental plus confusion and CMB noise; 3) galaxy cluster plus instrumental, confusion and CMB noise, plus a randomly positioned radio-source environment; 4) galaxy cluster plus instrumental, confusion and CMB noise, plus the radio-source environment detected by the LA in the real observations. These simulated datasets were analysed in the same way as the real datasets, and we found the following. * For case 1), the physical model recovered the input mass to within one standard deviation for 51 of the 54 clusters. The three which did not give an underestimate relative to the masses input to the simulation. * For case 2), eight of the simulations gave results which were more than one standard deviation lower than the input values. This highlights the effect of incorporating the noise sources into the error covariance matrix rather than trying to model the associated signals explicitly. * Case 3) shows similar results to case 2), which implies that ‘ideal’ radio-sources placed randomly in the sky have little effect on cluster mass estimates. * However in case 4) with real source environments, 16 simulations did not recover the input mass to within one standard deviation. This suggests that real radio-source environments, which can include sources with high flux values, and often sources which are located very close to the cluster centre, introduce biases in the cluster mass estimates. In real observations there are also additional issues (the sources are not ‘ideal’), such as sources being extended and emission not being circularly symmetric on the sky. * Cases 2), 3) and 4) give distributions of output  −  input mass which are negatively skewed. Thus AMI mass estimates are expected to be systematically lower than the PSZ2 slicing methodology values. * Compared to the results of simulations of large clusters carried out in, which test the robustness of the ‘universal’ pressure profile, the case 4) bias appears relatively small in magnitude, and in the same direction (downward). When comparing the case 4) results with the small cluster simulations of, the latter shows a relatively small bias in the opposite direction. * The median values of the distributions of output  −  input mass of the simulations in each of the four cases are smaller in magnitude than those obtained from comparing AMI and PSZ2 estimates from real data. This is expected as we are using the same model to simulate and analyse the clusters in all four cases. * The simulated and real data medians also indicate that while the simulations have shown that AMI mass estimates are systematically low, this does not fully accommodate for the discrepancies in the results obtained from the real data. This suggests that there is a systematic difference between the AMI and *Planck* data and / or the cluster models used to determine the mass estimates (which generally leads to PSZ2 estimates being higher than those obtained from AMI data). In a forthcoming paper, comparison of the ‘observational’ parameters (i.e. the integrated Comptonisation parameter *Y* and the angular radius *θ*) obtained from AMI data will be analysed. Furthermore, in and, different dark matter and pressure models will be considered, and in, Bayesian analysis will be performed on joint AMI-*Planck* datasets. [graph:simulatedrs] Acknowledgements ================ This work was performed using the Darwin Supercomputer of the University of Cambridge High Performance Computing (HPC) Service (<http://www.hpc.cam.ac.uk/>), provided by Dell Inc. using Strategic Research Infrastructure Funding from the Higher Education Funding Council for England and funding from the Science and Technology Facilities Council. The authors would like to thank Stuart Rankin from HPC and Greg Willatt and David Titterington from Cavendish Astrophysics for computing assistance. They would also like to thank Dave Green for his invaluable help using LaTeX. Kamran Javid acknowledges an STFC studentship. Yvette Perrott acknowledges support from a Trinity College Junior Research Fellowship. Arnaud, M., Pratt, G. W., Piffaretti, R., Böhringer, H., Croston, J. H., Pointecouteau, E. 2010. The universal galaxy cluster pressure profile from a representative sample of nearby systems (REXCESS) and the Y*S**Z* - M500 relation. Astronomy and Astrophysics 517, A92. Böhringer, H., and 33 colleagues 2007. The representative XMM-Newton cluster structure survey (REXCESS) of an X-ray luminosity selected galaxy cluster sample. Astronomy and Astrophysics 469, 363. Bahcall, J. N., Sarazin, C. L. 1977. Parameters and predictions for the X-ray emitting gas of Coma, Perseus, and Virgo.. The Astrophysical Journal 213, L99. Carvalho, P., Rocha, G., Hobson, M. P., Lasenby, A. 2012. PowellSnakes II: a fast Bayesian approach to discrete object detection in multi-frequency astronomical data sets. Monthly Notices of the Royal Astronomical Society 427, 1384. Chevallier, M., Polarski, D. 2001. Accelerating Universes with Scaling Dark Matter. International Journal of Modern Physics D 10, 213. Corless, V. L., King, L. J., Clowe, D. 2009. A new look at massive clusters: weak lensing constraints on the triaxial dark matter haloes of A1689, A1835 and A2204. Monthly Notices of the Royal Astronomical Society 393, 1235. Davies, J. I., and 8 colleagues 2011. The Arecibo Galaxy Environment Survey - IV. The NGC 7448 region and the H I mass function. Monthly Notices of the Royal Astronomical Society 415, 1883. AMI Consortium, and 18 colleagues 2011. 10C survey of radio sources at 15.7 GHz - II. First results. Monthly Notices of the Royal Astronomical Society 415, 2708. Feroz, F., Hobson, M. P., Bridges, M. 2009. MULTINEST: an efficient and robust Bayesian inference tool for cosmology and particle physics. Monthly Notices of the Royal Astronomical Society 398, 1601. Feroz, F., Hobson, M. P., Zwart, J. T. L., Saunders, R. D. E., Grainge, K. J. B. 2009. Bayesian modelling of clusters of galaxies from multifrequency-pointed Sunyaev-Zel’dovich observations. Monthly Notices of the Royal Astronomical Society 398, 2049. AMI Consortium, and 17 colleagues 2011. 10C survey of radio sources at 15.7 GHz - I. Observing, mapping and source extraction. Monthly Notices of the Royal Astronomical Society 415, 2699. Gioia, I. M., and 6 colleagues 1990. The Einstein Observatory Extended Medium-Sensitivity Survey. I. X-Ray Data and Analysis. The Astrophysical Journal Supplement Series 72, 567. Grainge, K., and 6 colleagues 2002. Measuring the Hubble constant from Ryle Telescope and X-ray observations, with application to Abell 1413. Monthly Notices of the Royal Astronomical Society 333, 318. Haehnelt, M. G., Tegmark, M. 1996. Using the Kinematic Sunyaev-Zeldovich effect to determine the peculiar velocities of clusters of galaxies.. Monthly Notices of the Royal Astronomical Society 279, 545. Hao, J., and 13 colleagues 2010. A GMBCG Galaxy Cluster Catalog of 55,424 Rich Clusters from SDSS DR7. The Astrophysical Journal Supplement Series 191, 254. Herranz, D., and 6 colleagues 2002. Filtering techniques for the detection of Sunyaev-Zel’dovich clusters &lt;br/&gt; in multifrequency maps. Monthly Notices of the Royal Astronomical Society 336, 1057. Hickish, J., and 20 colleagues 2018. A digital correlator upgrade for the Arcminute MicroKelvin Imager. Monthly Notices of the Royal Astronomical Society 475, 5677. Hinshaw, G., and 20 colleagues 2013. Nine-year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Cosmological Parameter Results. The Astrophysical Journal Supplement Series 208, 19. Hobson, M. P., Maisinger, K. 2002. Maximum-likelihood estimation of the cosmic microwave background power spectrum from interferometer observations. Monthly Notices of the Royal Astronomical Society 334, 569. Hu, W., Kravtsov, A. V. 2003. Sample Variance Considerations for Cluster Surveys. The Astrophysical Journal 584, 702. Javid, K., Perrott, Y. C., Hobson, M. P., Olamaie, M., Rumsey, C., Saunders, R. D. E. 2018. Comparison of physical and observational galaxy cluster modelling. arXiv e-prints arXiv:1805.01968, DOI:<https://doi.org/10.17863/CAM.38865> Javid, K. 2019. Physical modelling of galaxy clusters and Bayesian inference in astrophysics (Doctoral thesis). University of Cambridge, DOI:<https://doi.org/10.17863/CAM.40616> Javid, K., Perrott, Y. C., Rumsey, C., Saunders, R. D. E. 2019. Physical modelling of galaxy cluster Sunyaev-Zel’dovich data using Einasto dark matter profiles. Monthly Notices of the Royal Astronomical Society 489, 3135, DOI:<https://doi.org/10.1093/mnras/stz2341>. Komatsu, E., and 20 colleagues 2011. Seven-year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Cosmological Interpretation. The Astrophysical Journal Supplement Series 192, 18. Kravtsov, A. V., Vikhlinin, A., Nagai, D. 2006. A New Robust Low-Scatter X-Ray Mass Indicator for Clusters of Galaxies. The Astrophysical Journal 650, 128. Mehrtens, N., and 38 colleagues 2012. The XMM Cluster Survey: optical analysis methodology and the first data release. Monthly Notices of the Royal Astronomical Society 423, 1024. Melin, J.-B., Bartlett, J. G., Delabrouille, J. 2006. Catalog extraction in SZ cluster surveys: a matched filter approach. Astronomy and Astrophysics 459, 341. Nagai, D., Kravtsov, A. V., Vikhlinin, A. 2007. Effects of Galaxy Formation on Thermodynamics of the Intracluster Medium. The Astrophysical Journal 668, 1. Navarro, J. F., Frenk, C. S., White, S. D. M. 1995. Simulations of X-ray clusters. Monthly Notices of the Royal Astronomical Society 275, 720. Neto, A. F., and 8 colleagues 2007. The statistics of Λ CDM halo concentrations. Monthly Notices of the Royal Astronomical Society 381, 1450. Olamaie, M., Hobson, M. P., Grainge, K. J. B. 2013. Mass and pressure constraints on galaxy clusters from interferometric Sunyaev-Zel’dovich observations. Monthly Notices of the Royal Astronomical Society 430, 1344. Olamaie, M., Hobson, M. P., Grainge, K. J. B. 2012. A simple parametric model for spherical galaxy clusters. Monthly Notices of the Royal Astronomical Society 423, 1534. AMI Consortium, and 18 colleagues 2012. Parametrization effects in the analysis of AMI Sunyaev-Zel’dovich observations. Monthly Notices of the Royal Astronomical Society 421, 1136. Perley, R. A., Butler, B. J. 2013. An Accurate Flux Density Scale from 1 to 50 GHz. The Astrophysical Journal Supplement Series 204, 19. Perrott, Y. C., and 41 colleagues 2015. Comparison of Sunyaev-Zel’dovich measurements from Planck and from the Arcminute Microkelvin Imager for 99 galaxy clusters. Astronomy and Astrophysics 580, A95. Perrott, Y. C., and 7 colleagues 2019. Sunyaev-Zel’dovich profile fitting with joint AMI-Planck analysis. Monthly Notices of the Royal Astronomical Society 486, 2116, DOI:<https://doi.org/10.1093/mnras/stz826>. Petrov, L., Kovalev, Y. Y., Fomalont, E. B., Gordon, D. 2008. The Sixth VLBA Calibrator Survey: VCS6. The Astronomical Journal 136, 580. Piffaretti, R., Arnaud, M., Pratt, G. W., Pointecouteau, E., Melin, J.-B. 2011. The MCXC: a meta-catalogue of x-ray detected clusters of galaxies. Astronomy and Astrophysics 534, A109. Planck Collaboration, and 275 colleagues 2014. Planck 2013 results. XXIX. The Planck catalogue of Sunyaev-Zeldovich sources. Astronomy and Astrophysics 571, A29. Planck Collaboration, and 277 colleagues 2015. Planck 2013 results. XXXII. The updated Planck catalogue of Sunyaev-Zeldovich sources. Astronomy and Astrophysics 581, A14. Planck Collaboration, and 254 colleagues 2014. Planck 2013 results. XX. Cosmology from Sunyaev-Zeldovich cluster counts. Astronomy and Astrophysics 571, A20. Planck Collaboration, and 185 colleagues 2013. Planck intermediate results. III. The relation between galaxy cluster mass and Sunyaev-Zeldovich signal. Astronomy and Astrophysics 550, A129. Planck Collaboration, and 237 colleagues 2011. Planck early results. VIII. The all-sky early Sunyaev-Zeldovich cluster sample. Astronomy and Astrophysics 536, A8. Planck Collaboration, and 209 colleagues 2011. Planck early results. XI. Calibration of the local galaxy cluster Sunyaev-Zeldovich scaling relations. Astronomy and Astrophysics 536, A11. Planck Collaboration, and 259 colleagues 2016. Planck 2015 results. XXVII. The second Planck catalogue of Sunyaev-Zeldovich sources. Astronomy and Astrophysics 594, A27. Planck Collaboration, and 199 colleagues 2015. Planck intermediate results. XXVI. Optical identification and redshifts of Planck clusters with the RTT150 telescope. Astronomy and Astrophysics 582, A29. Planck Collaboration, and 190 colleagues 2016. Planck intermediate results. XXXVI. Optical identification and redshifts of Planck SZ sources with telescopes at the Canary Islands observatories. Astronomy and Astrophysics 586, A139. Planck Collaboration, and 203 colleagues 2011. Planck early results. IX. XMM-Newton follow-up for validation of Planck cluster candidates. Astronomy and Astrophysics 536, A9. Rudy, D. J., Muhleman, D. O., Berge, G. L., Jakosky, B. M., Christensen, P. R. 1987. Mars: VLA observations of the Northern Hemisphere and the north polar region at wavelengths of 2 and 6 cm. Icarus 71, 159. Rykoff, E. S., and 14 colleagues 2014. redMaPPer. I. Algorithm and SDSS DR8 Catalog. The Astrophysical Journal 785, 104. Sunyaev, R. A., Zeldovich, Y. B. 1970. The Spectrum of Primordial Radiation, its Distortions and their Significance. Comments on Astrophysics and Space Physics 2, 66. Vikhlinin, A., and 6 colleagues 2006. Chandra Sample of Nearby Relaxed Galaxy Clusters: Mass, Gas Fraction, and Mass-Temperature Relation. The Astrophysical Journal 640, 691. Voges, W., and 19 colleagues 1999. The ROSAT all-sky survey bright source catalogue. Astronomy and Astrophysics 349, 389. Waldram, E., Bolton, R., Pooley, G. G., Riley, J. M. 2007. The radio source population at frequencies of 15 to 70GHz: some deductions from 9C follow-up observations. From Planets to Dark Energy: The Modern Radio Universe 140. Wechsler, R. H., Bullock, J. S., Primack, J. R., Kravtsov, A. V., Dekel, A. 2001. Concentrations and Assembly Histories of Dark Matter Halos. arXiv e-prints astro-ph/0111069. Wen, Z. L., Han, J. L., Liu, F. S. 2012. A Catalog of 132,684 Clusters of Galaxies Identified from Sloan Digital Sky Survey III. The Astrophysical Journal Supplement Series 199, 34. Willis, J. P., and 14 colleagues 2013. Distant galaxy clusters in the XMM Large Scale Structure survey. Monthly Notices of the Royal Astronomical Society 430, 134. York, D. G., and 144 colleagues 2000. The Sloan Digital Sky Survey: Technical Summary. The Astronomical Journal 120, 1579. Zwart, J. T. L., and 60 colleagues 2008. The Arcminute Microkelvin Imager. Monthly Notices of the Royal Astronomical Society 391, 1545. Zwicky, F. 1933. Die Rotverschiebung von extragalaktischen Nebeln. Helvetica Physica Acta 6, 110. Zwicky, F. 1937. On the Masses of Nebulae and of Clusters of Nebulae. The Astrophysical Journal 86, 217. [lastpage] --- 1. E-mail: [email protected][↩](#fnref1) 2. <http://getdist.readthedocs.io/en/latest/>.[↩](#fnref2) 3. <http://aips.nrao.edu/>.[↩](#fnref3) 4. <https://pla.esac.esa.int/pla/catalogues>.[↩](#fnref4)
arxiv_0000246
Monte Carlo on compact complex manifoldsusing Bergman kernels ============================================================= In this paper, we propose a new randomized method for numerical integration on a compact complex manifold with respect to a continuous volume form. Taking for quadrature nodes a suitable determinantal point process, we build an unbiased Monte Carlo estimator of the integral of any Lipschitz function, and show that the estimator satisfies a central limit theorem, with a faster rate than under independent sampling. In particular, seeing a complex manifold of dimension *d* as a real manifold of dimension $d\_\R=2d$, the mean squared error for *N* quadrature nodes decays as *N*− 1 − 2/*d*R; this is faster than previous DPP-based quadratures and reaches the optimal worst-case rate investigated by in Euclidean spaces. The determinantal point process we use is characterized by its kernel, which is the Bergman kernel of a holomorphic Hermitian line bundle, and we strongly build upon the work of Berman that led to the central limit theorem in. We provide numerical illustrations for the Riemann sphere. ***Keywords—*** Bergman kernel, complex manifolds, determinantal point processes, Monte Carlo integration Introduction ============ Numerical integration on complex manifolds ------------------------------------------ Numerical integration, also known as quadrature, is the task of approximating an integral by a weighted sum of evaluations of the integrand. The points at which the integrand is evaluated are called the nodes of the quadrature. One usually distinguishes between Monte Carlo methods (MC; ), where the nodes are taken to be a random configuration of points, and quasi-Monte Carlo methods (QMC; ), which rely on deterministic configurations such as low-discrepancy sequences. Both approaches come with different measures of efficiency: for deterministic configurations, one usually wants to bound the worst-case error over a (large) class of functions. For random configurations, it is typical to consider a single integrand and derive a concentration inequality or a central limit theorem, and characterize their rate of convergence as the number of nodes tends to infinity. Both MC and QMC methods have their strong points: MC error bounds are usually easier to intepret and estimate, while QMC convergence rates usually decrease faster than MC for smooth integrands. There is a growing body of literature on methods that try to take the best of both worlds, like randomized QMC or variants of kernel herding, to cite only a few influential papers. Similarly, and proved fast central limit theorems for integration with particular classes of determinantal point processes (DPPs, ). Together with their computational tractability, the tunable negative dependence among the points of a DPP makes this distribution a natural candidate for structured Monte Carlo integration; see also. While previous work on Monte Carlo with DPPs has focused on integrals over the Euclidean space, we investigate in this paper the use of a DPP for integrating over a compact complex manifold. Real and complex manifolds naturally arise in domains as diverse as theoretical physics, information geometry, or even computer vision. They often represent a space of states or parameters, and the geometric properties of the manifold reflect the inherent properties of the underlying model. Integrating over manifolds is a key task, for instance, in Bayesian inference when the manifold represents the considered probabilistic models. Numerical integration on manifolds is a relatively recent topic, but the field is growing, see e.g. the QMC methods in and the MC methods in, to cite only a few. In this article, we follow and introduce a well-chosen DPP on a compact complex manifold, namely the DPP with kernel the Bergman kernel of the manifold. This allows us to generalize the orthogonal polynomials-based arguments of, and actually improve on the rate in their central limit theorem, to now match a lower bound dating back to. Our paper can also be seen as a Monte Carlo counterpart of, where the worst-case error of a DPP over the sphere known as the *spherical ensemble* is investigated, comparing it to the QMC designs studied by. Our proofs heavily rely on the seminal papers by, and a third way to see our paper is as an extension of a central limit theorem in to an unbiased estimator of the integral of interest. We also point to recent work by, who derive variance estimates for linear statistics under DPPs on the sphere using very different analytic techniques that may generalize to more manifolds. In the rest of this introduction, we state our main result, give some context about other methods of integration on manifolds, and give the outline of the paper. Main result ----------- We now define our setting and state our main result. The setting is fairly technical; the reader not accustomed to the vocabulary of complex manifolds is referred to the more exhaustive development in Section [sec:Cpxmfd]. Let *L* be a holomorphic line bundle over a compact complex manifold $\M$ of dimension *d*. If *h* is an Hermitian metric on *L* with local weight *ϕ*, we shall denote respectively by ⟨ ⋅ ,  ⋅ ⟩*ϕ* and | ⋅ |*ϕ* the associated inner product and norm on each fiber. Given a section *e**U* of *L* that does not vanish on an open subset *U* (a *trivializing section*), any section $s\in H^0(\M,L^k)$ can be represented on *U* by a function $f:U\to\C$ as *s*(*x*) = *f*(*x*)*e**U*(*x*),  ∀*x* ∈ *U*. In particular, the local weight *ϕ* is characterized by *e**U* as follows: for any two sections *s*1, *s*2 respectively represented by *f*1, *f*2, $$h\_x(s\_1(x),s\_2(x))=f\_1(x)\overline{f\_2}(x)h\_x(e\_U(x),e\_U(x))=f\_1(x)\overline{f\_2}(x)\e^{-\phi(x)}.$$ Furthermore, if *μ* is a finite measure on $\M$, then (*ϕ*, *μ*) is called a *weighted measure*. Such a pair induces an inner product on the space $H^0(\M,L)$ of holomorphic sections of *L*, $$\label{eq:prodscal} \langle s^{(1)},s^{(2)}\rangle\_{(\phi,\mu)}=\int\_\M h\_x(s^{(1)}(x),s^{(2)}(x))\d\mu(x).$$ In the present article, we consider a semiclassical setting: we replace *L* by *L**k* = *L*⊗ *k* and let *k* → ∞. We endow *L**k* with the metric *h**k* with weight *k**ϕ*, and the corresponding weighted measure is then (*k**ϕ*, *μ*). The inner product space $(H^0(\M,L^k),\langle\cdot,\cdot\rangle\_{(k\phi,\mu)})$ is in fact a finite-dimensional Hilbert space; we denote by *N**k* its dimension. There exists a reproducing kernel *B*(*k**ϕ*, *μ*) of $H^0(\M,L^k)$ called the *Bergman kernel*, which intuitively is the integral kernel of the projection $L^2(\M, L^k)\rightarrow H^0(\M,L^k)$. If (*s**i*)1 ≤ *i* ≤ *N**k* is an orthonormal basis of $H^0(\M,L^k)$, *B*(*k**ϕ*, *μ*) admits the following decomposition: $$B\_{(k\phi,\mu)}(x,y) = \sum\_{i=1}^{N\_k} s\_i(x)\otimes\overline{s\_i}(y),$$ and on the diagonal it even becomes a well-defined function $\M\to\R$, *B*(*k**ϕ*, *μ*)(*x*, *x*) = ∑*i* = 1*N**k**h**x**k*(*s**i*(*x*), *s**i*(*x*)),  thanks to the isomorphism of fibers $L\_x\otimes\overline{L\_x}\cong\C$. Under suitable assumptions on the metric *h* and the measure *μ*, the *Bergman measures* $$\d\mu\_{k\phi}(x)=\frac{1}{N\_k}B\_{(k\phi,\mu)}(x,x)\d\mu(x)$$ converge weakly to an equilibrium measure when *k* tends to infinity. More precisely, the weighted measure (*ϕ*, *μ*) is called *strongly regular*[1](#fn1) if (*i*) the weight *ϕ* is locally C1, 1-smooth, *i.e.* it is differentiable and its first partial derivatives $\frac{\partial\phi}{\partial z\_i},\frac{\partial\phi}{\partial\bar{z}\_j}$ are locally Lipschitz continuous, and (*i**i*) the measure *μ* corresponds to the volume form $$\omega\_d = \det (h\_0) \left(\frac{i}{2}\right)^d \d z\_1\wedge \d\bar{z}\_1\wedge\cdots\wedge \d z\_d\wedge \d\bar{z}\_d,$$ with respect to a continuous Hermitian metric *h*0 on $\M$ (possibly different from the metric with local weight *ϕ*). For any *k* ≥ 1, let *μ**k**ϕ* be the measure on $\M$ defined by $$\d\mu\_{k\phi}(x)=\frac{1}{N\_k}B\_{(k\phi,\mu)}(x,x)\d\mu(x).$$ If the weighted measure is strongly regular, proved the weak convergence of measures $$\label{eq:cv\_eq\_mes} \mu\_{k\phi}{\mathrel{\mathop{\kern 0pt\longrightarrow}\limits\_{k\to\infty}^{}}} \mu\_\eq,$$ where $\mu\_\eq$ is the pluripotential equilibrium measure (or *Monge–Ampère measure*) $$\label{eq:equilibrium\_measure} \mu\_\eq = \frac{1}{\vol(L)}\left(\ddc\phi\_e\right)^d.$$ Here, *ϕ**e* denotes the upper semicontinuous regularization of the plurisubharmonic enveloppe of *ϕ*, and $\vol(L)$ denotes the volume of the line bundle, defined by $$\vol(L) = \limsup\_{k\to\infty} \frac{d!N\_k}{k^d},$$ while the operator $\d\d^c$ in  is defined as follows. The Dolbeault operators ∂ and $\bar\partial$ on *L* give rise to the operator $\ddbar$, which maps (*p*, *q*)-forms to (*p* + 1, *q* + 1)-forms on $\M$. Consider the real operators $\d=\partial+\bar\partial$ and $\d^c=\frac{1}{4\i\pi}(\partial - \bar\partial)$, now we obtain $$\ddc = \frac{\i}{2\pi}\ddbar.$$ In our setting, $\ddbar\phi\_e$ is then a complex (1, 1)-form, and $\ddc\phi\_e$ a real 2-form, whose *d*th exterior power leads to a volume form $\omega\_\phi=(\ddc\phi\_e)^d$ that can be normalized to produce the equilibrium measure $\mu\_\eq$. Note that in the particular case where $\ddc\phi>0$, we have *ϕ**e* = *ϕ*. All *μ**k**ϕ*, as well as $\mu\_\eq$, are absolutely continuous with respect to *μ*, with respective densities[2](#fn2) $$\beta\_k(x)=\frac{1}{N\_k}B\_{(k\phi,\mu)}(x,x)\ \text{and} \ \beta\_\eq(x)=\frac{\det(\ddc\phi\_e)(x)}{\vol(L)}.$$ There is a subset of $\M$ called the *weak bulk*, or simply *bulk*, such that the weak convergence can be replaced by a pointwise convergence $\beta\_k(x)\to \beta\_\eq(x)$ for all *x* in the bulk. For almost every point *x* outside the bulk, *β**k*(*x*) → 0. In the simpler case $\ddc\phi>0$, the weak bulk is the whole manifold. Note that, according to, we always have $\ddc\phi>0$ in the weak bulk. The 2-form $\ddc\phi$ also induces a metric whose associated inner product and norm are denoted by $\langle\cdot,\cdot\rangle\_{\ddc\phi}$ and $\vert\cdot\vert\_{\ddc\phi}$. Most of the ideas and results about the convergence of Bergman measures are related to the idea of having (at least locally) a positive curvature form $\ddc\phi$. It might seem restrictive with respect to the choice of the manifold, but in fact one can endow any compact complex manifold with a positive line bundle: for instance, although the standard torus $\C/\Z^2$ endowed with the metric inherited by $\C$ is flat (in the sense that its tangent bundle has zero curvature), one can still consider another line bundle which has positive curvature. It is important to make a distinction between the geometry of the underlying manifold (which is related to the geometry of its (co)tangent bundle, and captured by the Borel measure *μ*) and the geometry of the line bundle. Beside this purely deterministic construction, one can consider a family (*X*1, …, *X**N**k*) of random variables on $\M$, whose joint density with respect to *μ*⊗ *N**k* is proportional to |det(*s**i*(*x**j*))|*k**ϕ*2 = ∑*σ*, *τ* ∈ S*N**k**ɛ*(*σ*)*ɛ*(*τ*)⟨*s**σ*(1)(*x*1), *s**τ*(1)(*x*1)⟩*k**ϕ*⋯⟨*s**σ*(*N**k*)(*x**N**k*), *s**τ*(*N**k*)(*x**N**k*)⟩*k**ϕ*,  where (*s**i*) is an orthonormal basis of $H^0(\M,L^k)$ with respect to the inner product. This density is symmetric with respect to the *x**i*’s and vanishes as soon as there are *i*, *j* such that *i* ≠ *j* and *x**i* = *x**j*: the family (*X*1, …, *X**N**k*) therefore defines a random configuration, or a simple point process, and belongs to the subclass of *determinantal point processes* (DPPs); see for some generalities and relations with models of noninteracting fermions in quantum mechanics. The Bergman kernel contains most of the required information to study the distribution of the point process, and complex geometers provided many powerful asymptotic results about this kernel. In particular, Berman proved the following central limit theorem. [, Theorem 1.5][thm:CLT] Let *L* be a holomorphic line bundle over a compact complex manifold $\M$. Let (*ϕ*, *μ*) be a strongly regular weighted measure and $\mu\_\eq$ be the associated Monge–Ampère measure. For any $k\in\N^\*$, let (*X*1, …, *X**N**k*) be a DPP with kernel *B*(*k**ϕ*, *μ*). For any Lipschitz continuous $f:\M\to\R$ with compact support included in the weak bulk, $$\sqrt{N\_k^{1+\frac{1}{d}}}\left(\frac{1}{N\_k}\sum\_{i=1}^{N\_k}f(X\_i)-\E\left[\frac{1}{N\_k}\sum\_{i=1}^{N\_k}f(X\_i)\right]\right) {\mathrel{\mathop{\kern 0pt\longrightarrow}\limits\_{k\to\infty}^{(d)}}}\mathcal{N}(0,\tfrac12\Vert \d f\Vert\_{\ddc\phi}^2),$$ where $\Vert \d f\Vert\_{\ddc\phi}^2$ is the *Dirichlet norm* $$\label{eq:asymp\_var} \Vert \d f\Vert\_{\ddc\phi}^2 = \int\_\M\vert\nabla f\vert\_{\ddc\phi}^2(\ddc\phi)^d.$$ Just like, this theorem is not a result on Monte Carlo integration as it stands, since the linear statistics in the left-hand side has no reason to converge to the target integral $\int f\d\mu$ in any useful sense. Actually, $$\E\left[\frac{1}{N\_k}\sum\_i f(X\_i)\right]=\frac{1}{N\_k}\int\_\M f(x)B\_{(k\phi,\mu)}(x,x)\d\mu(x)$$ depends on *k* and converges to the integral of *f* with respect to the equilibrium measure $\mu\_\eq$. The main result of the present paper is a variant of Theorem [thm:CLT] akin to, where we introduce the inverse of a kernel diagonal as a weight in the linear statistic. To state our result, let the *equilibrium weight* be $$w\_\eq^\phi = \frac{\vol(L)}{d!\det(\ddc\phi)},$$ which is well-defined in the weak bulk. [thm:MCmain] Let *L* be a holomorphic line bundle over a compact complex manifold $\M$, (*ϕ*, *μ*) be a strongly regular weighted measure, and *F* be a line bundle endowed with a continuous local weight *ψ*. For any $k\in\N^\*$ let (*X*1, …, *X**N**k*) be a DPP with kernel *B*(*k**ϕ* + *ψ*, *μ*). For any Lipschitz continuous $f:\M\to\C$ with compact support included in the weak bulk and such that $\sigma\_{f,\phi}^2:=\frac12 \Vert \d (w\_\eq^\phi f)\Vert\_{\ddc\phi}^2<\infty$, $$\sqrt{N\_k^{1+\frac{1}{d}}}\left(\sum\_{i=1}^{N\_k}\frac{f(X\_i)}{B\_{(k\phi+\psi,\mu)}(X\_i,X\_i)}-\int\_\M f(x)\d\mu(x)\right) {\mathrel{\mathop{\kern 0pt\longrightarrow}\limits\_{k\to\infty}^{(d)}}}\mathcal{N}\left(0,\sigma\_{f,\phi}^2\right).$$ Unlike in Theorem [thm:CLT], the expectation of the estimator in the left-hand side does not depend on *k*, and is directly the integral of *f* with respect to the target measure *μ*. It is also interesting to remark that the assumptions on *f* slightly differ between the two theorems, because we need to ensure that the asymptotic variance is finite, which was trivially true in Theorem [thm:CLT]. The proof of Theorem [thm:MCmain] will be given in Section [sec:proofs]. It mostly follows the steps of the proof of Theorem [thm:CLT] by, with different estimates due to the dependence in *k* of the integrand. Finally, let us stress the importance of the ``universality" of Theorem [thm:MCmain]: if a DPP with kernel *B*(*k**ϕ* + *ψ*, *μ*) can produce an estimator of $\int\_\M f(x)\d\mu(x)$, then it can also produce an estimator of $\int\_\M f(x)\e^{-V(x)}\d\mu(x)$ for many weight functions $V:\M\to\R$, because the asymptotic variance does not depend on *ψ*. [cor:universality] Let *L* be a holomorphic line bundle over a compact complex manifold $\M$, (*ϕ*, *μ*) be a strongly regular weighted measure, and $\psi:\M\to\R$ be a continuous local weight. If (*X*1, …, *X**N**k*) is a DPP with kernel *B*(*k**ϕ*, *μ*), then it is also a DPP with kernel $B\_{(k\phi-\psi,\e^{-\psi}\mu)}$, and for any Lipschitz continuous $f:\M\to\C$ with compact support included in the weak bulk and such that $\Vert \d (w\_\eq^\phi)f\Vert\_{\ddc\phi}^2<\infty$, $$\sqrt{N\_k^{1+\frac{1}{d}}}\left(\sum\_{i=1}^{N\_k}\frac{f(X\_i)}{B\_{(k\phi,\e^{-\psi}\mu)}(X\_i,X\_i)}-\int\_\M f(x)\e^{-\psi(x)}\d\mu(x)\right) {\mathrel{\mathop{\kern 0pt\longrightarrow}\limits\_{k\to\infty}^{(d)}}}\mathcal{N}\left(0,\widehat{\sigma}\_{f,\phi}^2\right).$$ In particular, the asymptotic variance remains the same as in Theorem [thm:MCmain], even though we use the same DPP but a different target measure. This invariance was expected since a similar property holds for multivariate orthogonal ensembles under the so-called *Nevai condition*; see. However, it remains suprising if we compare the situation to classical (independent) importance sampling, where sampling the quadrature nodes from a distribution different from the target *μ* has an impact on the asymptotic variance. Comparison with other methods ----------------------------- In a previous paper, the second author and Hardy provided a method for integration over the hypercube $I^d=[-1,1]^d\subset\R^d$ in any dimension for a measure *μ*. They started by a general CLT analogous to Theorem [thm:CLT] based on a DPP with kernel *K**N*(*x*, *y*) = ∑*k* = 0*N* − 1*ϕ**k*(*x*)*ϕ**k*(*y*),  where (*ϕ**i*) is a family of orthonormal functions in *L*2(*μ*), and they expressed the asymptotic variance in terms of Fourier coefficients. When *f* is a linear combination of monomials *x*1*α*1⋯*x**n**α**n* with *α**i* ∈ {0, 1}, this asymptotic variance coincides with the Dirichlet norm $\frac12\Vert \d f\Vert\_{\omega\_\eq^{\otimes d}}^2$, where $\omega\_\eq^{\otimes d}$ is the equilibrium weight corresponding to $\ddc\phi$ in our setting. They finally proved the following convergence result. [, Theorem 3][thm:BH] Let $\mu(\d x)=\omega(x)\d x$ be a positive measure absolutely continuous with respect to the Lebesgue measure, with density *ω*(*x*) = ∏*i* = 1*d**ω**i*(*x**i*), such that supp(*μ*) ⊂ *I**d* and *ω* is C1 and positive on ( − 1, 1)*d*. Assume further that for any *ɛ* > 0, $$\frac{1}{N}\sup\_{x\in [-1+\varepsilon,1-\varepsilon]^d}\vert\nabla K\_N(x,x)\vert <\infty.$$ If (*X*1, …, *X**N*) is the multivariate orthonormal ensemble with respect to *μ*, that is, the DPP with kernel *K**N*, then for every $f\in\mathscr{C}^1(I^d,\R)$ supported in [ − 1 + *ɛ*, 1 − *ɛ*]*d* for a fixed *ɛ* > 0, we have the following central limit theorem: $$\sqrt{N^{1+\frac1d}}\left(\sum\_{i=1}^N\frac{f(X\_i)}{K\_N(X\_i,X\_i)}-\int\_{I^d}f(x)\d\mu(x)\right){\mathrel{\mathop{\kern 0pt\longrightarrow}\limits\_{N\to\infty}^{law}}}\mathcal{N}(0,\sigma\_f^2),$$ where *σ**f*2 is an asymptotic variance that depends on *f* and *ω*. Although Theorem [thm:MCmain] seems to simply generalize this result (and that was what we first expected), it happens not to be the case. Indeed, in the authors considered *real* *d*-dimensional spaces. if $\M$ is a complex manifold of complex dimension $d\_\C$, it is in particular a real manifold of dimension $d\_\R=2d\_\C$, therefore the speed of convergence that appears in Theorem [thm:MCmain] is $$\frac{1}{\sqrt{N^{1+\frac{1}{d\_\C}}}}=\frac{1}{\sqrt{N^{1+\frac{2}{d\_\R}}}},$$ which is actually better than the speed obtained in. It is in fact the same order as the worst-case mean square error of any randomized integration algorithm on $\R^d$ for C1 functions, according to a result by, see also. Note that a key assumption of Theorem [thm:BH] is the positivity of the density *ω* on ( − 1, 1)*d*, which is similar to the property $\ddc\phi>0$ satisfied in the weak bulk in Theorem [thm:MCmain]. Quasi-Monte Carlo methods on a compact real Riemannian manifold $\M$ of dimension *d*, endowed with a Riemannian measure $\vol$, have been studied by. The authors provide upper bounds of quadrature errors in the following setting: assume that $\M=U\_1\cup \cdots \cup U\_N$ is a partition of $\M$ in disjoint subsets. For any 1 ≤ *i* ≤ *N*, set $w\_i=\vol(U\_i)$. [] For every *d*/2 < *α* < *d*/2 + 1 there exists a constant *c* > 0 and points *z**i* ∈ *U**i*, 1 ≤ *i* ≤ *N*, such that $$\left\vert \sum\_{i=1}^N w\_i f(z\_i) - \int\_\M f(x)\d x\right\vert \leq c\max\_{1\leq i\leq N}\{\mathrm{diameter}(U\_i)^\alpha\}\Vert f\Vert\_{W^{\alpha,2}(\M)}.$$ They improved this result by controlling the difference between any probability measure on $\M$ and the uniform measure $\d x$, so that their method works for any probability measure, provided that the integrand is regular enough. An important difference between our approach and theirs is the fact that their bound relies on the maximum diameter of the subsets appearing in the partition of $\M$, whereas ours relies on the volume of the line bundle over $\M$. At the intersection of Monte Carlo methods and QMC guarantees, in the case of the sphere $\Sbb^2$, there is a result due to that we now explain. Let *X* = (*x*1, …, *x**N**k*) be a *N**k*-point configuration on $\Sbb^2$. The *worst-case error* for the Monte Carlo method with respect to the smoothness parameter *s* ∈ (1, ∞) is defined by $$\wce(X,s) = \sup\_{f:\Vert f\Vert\_{H^s(\Sbb^2)}\leq 1}\left\vert \int\_{\Sbb^2} f(x)\d\vol\_{S^2}(x)-\sum\_{i=1}^{N\_k}\frac{f(x\_i)}{B\_k(x\_i,x\_i)}\right\vert,$$ where $\Vert f\Vert\_{H^s(\Sbb^2)}$ is the norm of the Sobolev space $H^s(\Sbb^2)$. Recalling[3](#fn3) that *B**k*(*x*, *x*) = *k* + 1 for all $x\in\Sbb$, we remark that if *X* is the DPP with kernel the Bergman kernel on $\Sbb^2$ (the so-called *spherical ensemble*) and $\d\hat{\mu}\_k$ is its empirical measure, then $$\sum\_{i=1}^{N\_k}\frac{f(x\_i)}{B\_k(x\_i,x\_i)} = \int\_{\Sbb^2} f(x)d\hat{\mu}\_k(x).$$ We also notice that $\d\mu\_\eq^\phi=\d\vol\_{S^2}$, therefore the statements of Theorems [thm:CLT] and [thm:MCmain] are equivalent, and in this case Berman already estimated the worst-case error. [, Theorem 1.1] Let *X* = (*x*1, …, *x**N*) be the spherical ensemble with *N* particles. For any *s* ∈ (1, 2], there exists a constant *C*(*s*) such that for any $R\in[\log(N)^{-\frac12},N\log(N)^{-\frac12}]$, $$\P\_N\left(\wce(X,s) \leq R^{\frac{s}{2}}\frac{\log(N)^{\frac{s}{4}}}{N^{\frac{s}{2}}}\right)\geq 1 - \frac{1}{N^{R^2/C(s)}-C(s)}.$$ Finally, a string of works have investigated Markov chain Monte Carlo (MCMC) on manifolds embedded in Euclidean spaces, for integration with respect to the Hausdorff measure; see e.g. the seminal and, as well as references in the latter paper. Although not explicitly mentioned, with the right assumptions, we expect a central limit theorem to hold for these chains, but with the usual rate of the inverse of the square root of the number of integrand evaluations. Convergence will thus be slower than our DPP-based method, when measured in number of integrand evaluations. Let us also emphasize that our method is a paradigm shift in the sense of requiring only minimal geometric information on the underlying space. Outline of the article ---------------------- The rest of the paper is organized as follows. In Section [sec:Cpxmfd], we introduce the necessary background in complex geometry, culminating with the Bergman kernel. In Section [sec:DPP], we introduce the DPPs that we will be using as quadrature nodes, which we call Bergman ensembles. Section [sec:kernelestimates] contains kernel estimates that are necessary in the proof of Theorem [thm:MCmain], to which Section [sec:proofs] is dedicated. We specify all notions in the case of the Riemann sphere in Section [sec:sphere], and we give experimental illustrations of Theorem [thm:MCmain] in that case, where the reference process can be easily sampled using random matrix models. Finally, we discuss limitations of the method, as well as current and future work, in Section [sec:discussion]. Complex manifolds and Bergman kernels ===================================== In this section, we recall enough notions on complex manifolds and Hermitian line bundles to define the Bergman kernel, which is the main geometric object involved in our model. A reader accustomed to the vocabulary of smooth (real) manifolds should quickly see the modifications, essentially adding holomorphicity requirements. Everything in section is standard; we refer to for complex manifolds and line bundles, and for Bergman kernels. In the specific context of determinantal point processes, the reader may also refer to. We provide a few examples at the end of the section, but the special case of the Riemann sphere will be treated in more detail, and numerically illustrated, in Section [sec:sphere]. Basic definitions ----------------- We start with the notion of complex manifold. [def:cpx1] A *complex manifold* of dimension *d* is a topological space $\M$ endowed with a family (*U**i*, *φ**i*)*i* ∈ *I* of open subsets $U\_i\subset \M$ and homeomorphisms $\varphi\_i:U\_i\to\varphi\_i(U\_i)\subset\C^d$ such that, if $U\_i\cap U\_j\neq \varnothing$, *φ**i* ∘ *φ**j*− 1 : *φ**j*(*U**i* ∩ *U**j*) → *φ**i*(*U**i* ∩ *U**j*) is a biholomorphism[4](#fn4) between open subsets of $\C^d$. The open subsets *U**i* are called *charts*, and the maps *φ**i* *local coordinates*. The fundamental idea of complex manifolds is that the compatibility of charts and coordinates, which is illustrated in Figure [fig:manifold], enables the use of the usual tools of complex analysis on $\C^d$. For instance, a function $f:\M\to\C$ is holomorphic (resp. C*s*) if for any *i* ∈ *I* the function $f\circ\varphi\_i^{-1}:\varphi\_i(U\_i)\to \C$ is holomorphic (resp. C*s*). [fig:manifold] [def:cpx2] Let $\M$ be a complex manifold. A *holomorphic vector bundle of rank* *r* over $\M$ is a complex manifold *E* endowed with a holomorphic surjective map $\pi:E\to \M$ such that, for any $x\in \M$, the *fiber* *E**x* = *π*− 1(*x*) is an *r*-dimensional vector space over $\C$. A holomorphic vector bundle of rank *r* is *locally trivial* if there exist an open covering (*V**j*)*j* ∈ *J* of $\M$ and biholomorphic maps $\psi\_j:\pi^{-1}(V\_j)\to V\_j\times\C^r$ such that the diagram $$\begin{tikzcd} \pi^{-1}(V\_j) \ar[rr, "\psi\_j"] \ar[dr, "\pi"'] & & V\_j\times\C^r \ar[dl, "\mathrm{pr}\_1"]\\ & V\_j & \end{tikzcd}$$ commutes, and such that the restriction of *ψ**j* to *E**x* is a $\C$-linear map for all $x\in \M$. The maps *ψ**j* are called *trivialization functions*. Vector bundles are usually denoted as $E\rightarrow \M$. A graphical depiction of a vector bundle of rank 1 (a so-called *line* bundle) is given in Figure [fig:bundle]. In words, in a locally trivial vector bundle, on each *V**j*, *π* is akin to the canonical projection of *V**j* × C*r* onto *V**j*. Finally, we define *sections*, for which we can talk of holomorphicity. [fig:bundle] [def:cpx3] Let *E* be a holomorphic vector bundle over a complex manifold $\M$. A *local section* of *E* is a continuous map *s* : *U* → *E*, for $U\subset \M$ open, such that $\pi\circ s = \Id\_\M$ on *U*. A local section defined on $\M$ is called a *global section*. Heuristically, taking a section of the vector bundle is equivalent to taking a continuous family of vectors indexed by an open subset of $\M$, that is, a vector field on $\M$. We denote by $\mathscr{C}^s(\M,E)$ (resp. $H^0(\M,E)$) the space of C*s* (resp. holomorphic) sections of *E*, for any 0 ≤ *s* ≤ ∞. Operations on vector bundles ---------------------------- As a holomorphic vector bundle *E* over a complex manifold $\M$ induces a complex vector space on each fiber, one can leverage this linear algebraic structure in two ways: to perform algebraic transformations (direct sum, tensor product, wedge product) or to enrich the line bundle. The idea is that any such operation can be done on a fiber, in a way which is explicit whenever one works in a local trivialization. **Hermitian metrics.** If we endow *E* with a family of Hermitian inner products $h\_x:E\_x\times E\_x\to\C$, the vector bundle is said to be *Hermitian*, and *h* is called a *Hermitian metric*. The regularity (e.g. continuous, differentiable, smooth) of the metric, by convention, will be the regularity of the map *x* ↦ *h**x*. This regularity can be made more explicit by using the notion of local weight: let $U\subset \M$ be an open subset where *E* can be trivialized. There exists a section *e**U* : *U* → *E* such that *e**U*(*x*) ≠ 0 for all *x* ∈ *U*, called *local frame*, such that for all $s\in H^0(\M,E)$, there exists $f:U\to\C$ that satisfies *s*(*x*) = *f*(*x*)*e**U*(*x*),  ∀*x* ∈ *U*. In this case, the Hermitian metric *h* reads $$h\_x(s\_1(x),s\_2(x))=f\_1(x)\overline{f\_2}(x) h\_x(e\_U(x),e\_U(x)) = f\_1(x)\overline{f\_2}(x) \e^{-\phi(x)},$$ where $\phi:U\to\R$ is defined by *ϕ*(*x*) =  − log*h**x*(*e**U*(*x*), *e**U*(*x*)) and is called the *local weight* of *h*. The regularity of the metric *h* is then equivalent to the regularity of the weight *ϕ* as a function on (an open subset of) the complex manifold $\M$. We will often identify the metric *h* with its local weight for the sake of simplicity. **Pullback bundle.** If $E\to\M\_2$ is a vector bundle with projection *π* and $f:\M\_1\to\M\_2$ is a holomorphic function, then we define the *pullback bundle* $f^\*E\to\M\_1$ as $$f^\*E=\{(m,v)\in\M\times E: f(m)=\pi(v)\}\subset\M\times E,$$ with projection *π̃*(*m*, *v*) = *m*. **Dual bundle.** If $E\to\M$ is a vector bundle, one can define its *dual bundle* $E^\*\to\M$ as the vector bundle whose fibers are the dual fibers of *E*: (*E*\*)*x* = (*E**x*)\*. If *E* is endowed with a Hermitian metric *h*, we shall denote $\overline{E}$ its dual. For any section *s*1 of *E*, there is a unique[5](#fn5) section $\overline{s\_1}$ of $\overline{E}$ such that $$\label{eq:contraction} (\overline{s\_1}(x), s\_2(x)) = h\_x(s\_2(x),s\_1(x)), \quad \forall s\_2\in H^0(\M,E), \forall x\in \M,$$ where parentheses in the left-hand side denote the duality pairing. **Tensor products of bundles.** If $E\_1\to \M$ and $E\_2\to \M$ are two vector bundles over the same manifold, their *tensor product* is the vector bundle $E\_1\otimes E\_2\to \M$ whose fibers are defined by tensor products of the fibers of *E*1 and *E*2. In a similar fashion, if $E\_1\to \M\_1$ and $E\_2\to \M\_2$ are two vector bundles over two separate manifolds, their *external tensor product* is the vector bundle $$E\_1\boxtimes E\_2 = \mathrm{pr}\_1^\*E\_1\otimes\mathrm{pr}\_2^\*E\_2\to \M\_1\times \M\_2,$$ where we denoted by $\mathrm{pr}\_1:\M\_1\times \M\_2\to \M\_1$ and $\mathrm{pr}\_2:\M\_1\times \M\_2\to \M\_2$ the standard projections. It must be distinguished from the previously defined tensor product of two bundles over the same manifold $\M$. Differential forms and integration on complex manifolds ------------------------------------------------------- Any complex manifold $\M$ of dimension *d* is also a smooth real manifold of dimension 2*d*, *i.e.* it can be locally modelled on $\R^{2d}$ by the natural identification $\C\cong\R^2$. The *tangent bundle* $T\M$ of $\M$ is the smooth vector bundle[6](#fn6) of rank 2*d* such that for any $x\in \M$, the fibre $T\_x\M$ is the tangent space of $\M$ at *x*. There is a morphism of line bundles $J:T\M\to T\M$ such that $J^2=-\Id\_{T\M}$, defined on each fibre by $J\_x:T\_x\M\to T\_x\M$, which satisfies the Cauchy–Riemann equations on open subsets $U\subset \M$ $$\d f\_x(J\_xv)=\i \times \d f\_x(v),\ \forall v\in T\_x\M,\ \forall f\in\mathscr{O}(U),\ \forall x\in U.$$ The cotangent bundle $T^\*\M=\Hom\_\R(T\M,\C)$ splits into $\Hom\_\C(T\M,\C)\oplus\Hom\_\C(T\M,\bar\C)$ of $\C$-linear and $\C$-antilinear maps, where $T\M$ is endowed with the complex structure induced by the morphism *J*. We denote respectively by $T^{\*(1,0)}\M$ and $T^{\*(0,1)}\M$ the subspaces of this decomposition. If (*z*1, …, *z**d*) is a local holomorphic coordinate system in an open subset $U\subset \M$ (for instance, in the atlas (*U**i*, *φ**i*), it means that we note $\varphi\_i(x)=(z\_1,\ldots,z\_d)\in\C^d$ for any *x* ∈ *U**i*), and if we set $z\_j=x\_j+\i y\_j$, then (*x**j*, *y**j*) is a local smooth coordinate system of $\M$ as a real manifold, and $(\d z\_1,\ldots,\d z\_d)$ is a local frame of $T^{\*(1,0)}\M$, where $\d z\_j=\d x\_j+\i \d y\_j$. Analogously, $(\d\bar{z}\_1,\ldots,\d\bar{z}\_d)$ is a local frame of $T^{\*(0,1)}\M$, where $\d\bar{z}\_j=\d x\_j-\i \d y\_j$. The *bundle of* (*p*, *q*)-*forms* on a complex manifold $\M$ is the vector bundle $\Lambda^{p,q}(T^\*\M)=\Lambda^p(T^{\*(1,0)}\M)\otimes\Lambda^q(T^{\*(0,1)}\M)$. We denote by $\Omega^{p,q}(\M)$ the subspace of smooth (*p*, *q*)-forms on $\M$. Any (*p*, *q*)-form *ω* on $\M$ can be expressed as follows in local coordinates: $$\label{eq:local\_form} \omega(x) = \sum\_{\substack{1\leq i\_1<\cdots<i\_p\leq d\\ 1\leq j\_1<\cdots<j\_q\leq d}} u\_{i\_1,\ldots,i\_p,j\_1,\ldots,j\_q}(x) \d z\_{i\_1}\wedge\cdots\wedge \d z\_{i\_p}\wedge \d\bar{z}\_{j\_1}\wedge\cdots\wedge \d\bar{z}\_{j\_q},$$ and in particular any (0, 0)-form is simply a function $f:\M\to\C$. There are two differential operators of interest, called the *Dolbeault operators*: $$\partial:\left\lbrace\begin{array}{ccc} \Omega^{(p,q)}(\M) & \to & \Omega^{(p+1,q)}(\M)\\ f(x)\omega(x) & \mapsto & \sum\_i\frac{\partial f(x)}{\partial z\_i}\d z\_i\wedge\omega(x) \end{array}\right.,\ \bar\partial:\left\lbrace\begin{array}{ccc} \Omega^{(p,q)}(\M) & \to & \Omega^{(p+1,q)}(\M)\\ f(x)\omega(x) & \mapsto & \sum\_i\frac{\partial f(x)}{\partial \bar{z}\_i}\d \bar{z}\_i\wedge\omega(x) \end{array}\right.,$$ and they can be turned into operators $\d=\partial+\bar\partial$ and $\d^c=\frac{1}{4\i\pi}(\partial-\bar\partial)$, where $\d$ coincides with the exterior derivative. A *volume form* on $\M$ is a volume form in the differential sense, therefore a nonvanishing section of $\Lambda^{2d}(T^\*\M)\simeq\Lambda^{d,d}(T^\*\M)$, and it can be written locally on $U\subset \M$ as $$\omega(x)=u(x)\d z\_1\wedge\cdots\wedge \d z\_d\wedge \d\bar{z}\_1\wedge\cdots\wedge \d\bar{z}\_d,$$ where $u:\M\to\C$ is a function that does not vanish. The volume form is continuous (resp. smooth, holomorphic) if and only if *u* is continuous (resp. smooth, holomorphic). Any volume form on $\M$ can be identified to a Borel measure $\d\mu$ on $\M$ by setting $\int\_U \d\mu = \int\_U \omega$ for any Borel set $U\subset \M$. Line bundles and Bergman kernel ------------------------------- In the sequel, we will consider vector bundles of rank 1, also called *line bundles*. We shall usually denote *L* such a line bundle, rather than *E*. Let *L* be a holomorphic line bundle over $\M$, endowed with an Hermitian metric *h*. Let *μ* be a continuous volume form on $\M$ and *ϕ* be the local weight corresponding to *h*. The *Bergman kernel* *B*(*ϕ*, *μ*) of *L* with respect to the weighted measure (*ϕ*, *μ*) is the Schwartz kernel (cf. for a detailed introduction of such kernels) of the orthogonal projection $$P\_{(\phi,\mu)}:L^2(\M,L) \longrightarrow H^0(\M,L)$$ with respect to *μ*. Namely, it is a section of $L\boxtimes\overline{L}\rightarrow \M\times \M$, and it can be written as $$B\_{(\phi,\mu)}(x,y) = \sum\_{i=1}^Ns\_i(x)\otimes\overline{s\_i(y)},\ \forall x,y\in \M,$$ where (*s**i*) is an orthornormal basis of $H^0(\M,L)$ for the inner product ⟨ ⋅ ,  ⋅ ⟩(*ϕ*, *μ*). By construction, *B*(*ϕ*, *μ*) is the reproducing kernel of the Hilbert space $(H^0(\M,L),\langle\cdot,\cdot\rangle\_{(\phi,\mu)})$, which means that $$\label{eq:reprod\_Bergman} \int\_\M B\_{(\phi,\mu)}(x,y)\cdot s(y)\d\mu(y) = s(x),\ \forall s\in H^0(\M,L),\ \forall x\in \M.$$ The dot in represents the contraction between the Bergman kernel and the section *s* induced by, so that the left-hand side of is the decomposition of *s* onto the orthogonal basis (*s**i*); see for more details. As we shall see later, the correlation functions of our point processes will be expressed as determinants of the Bergman kernel. Let us stress that such a determinant is not obvious to define: for instance, formally, $$\label{eq:wrong\_det} \left\vert\begin{matrix} B\_\phi(x,x) & B\_\phi(x,y)\\ B\_\phi(y, x) & B\_\phi(y,y) \end{matrix}\right\vert = B\_\phi(x,x)\otimes B\_\phi(y,y) - B\_\phi(x,y)\otimes B\_\phi(y,x),\ \forall x,y\in \M,$$ but this equality does not really make sense because we are adding elements of two different vector spaces: one is a section of $L\_x\otimes\overline{L}\_x\otimes L\_y\otimes\overline{L}\_y$ and the other is a section of $L\_x\otimes\overline{L}\_y\otimes L\_y\otimes\overline{L}\_x$. We circumvent this difficulty by using contractions on tensor products of each fiber and its dual, thanks to the following isomorphism of vector spaces, for any finite-dimensional vector space *E*: $$\left\lbrace\begin{array}{ccc} L\_x\otimes E\otimes\overline{L\_x} & {\mathrel{\mathop{\kern 0pt\longrightarrow}\limits\_{}^{\sim}}} & E\\ u\_x\otimes w\otimes\overline{v\_x} & \longmapsto & h\_x(u\_x,v\_x)w \end{array}\right..$$ In particular, the Bergman kernel on the diagonal can be identified with a function $\M\to\C$ by $$B\_\phi(x,x) = \sum\_{i=1}^N h\_x(s\_i(x),s\_i(x)),\ \forall x\in \M,$$ and Equation becomes $$\left\vert\begin{matrix} B\_\phi(x,x) & B\_\phi(x,y)\\ B\_\phi(y, x) & B\_\phi(y,y) \end{matrix}\right\vert = B\_\phi(x,x)B\_\phi(y,y) - B\_\phi(x,y)\cdot B\_\phi(y,x),\ \forall x,y\in \M.$$ Note the  ⋅  in the right-hand side, which is a contraction in the sense of. It leads to the following definition of the determinant of the Bergman kernel, cf.. [def:detBk] The *determinant* det(*B**ϕ*(*x**i*, *x**j*))1 ≤ *i*, *j* ≤ *n* is defined by det(*B**ϕ*(*x**i*, *x**j*))1 ≤ *i*, *j* ≤ *n* = ∑*σ* ∈ S*n**ɛ*(*σ*)∑*i*1, …, *i**n* = 1*N*∏*j* = 1*n**h**x**j*(*s**i**j*(*x**j*), *s**i**σ*− 1(*j*)(*x**j*)). We finish this section with a result that will be instrumental later, the so-called *extremal property* of the Bergman kernel: $$\label{eq:ext\_prop\_Bergman} B\_\phi(x,x) = \sup\{\vert s(x)\vert\_\phi^2:\ s\in H^0(\M,L), \Vert s\Vert\_{(\phi,\mu)}^2\leq 1\}.$$ It is a direct consequence of the reproducing property of the kernel. Examples -------- Let us give a few examples without proofs, which are all standard and can be found for instance in. **The plane.** The complex plane $\C$ is obviously a complex manifold, but it is not compact. However, most of the aforementioned constructions still hold, and are intuitive. We endow the real plane $\R^2$ with the standard symplectic form $\omega=\d x\wedge \d y$, and identify it with $\C$ by setting $z=\frac{x+\i y}{\sqrt{2}}$, so that $\omega = \i \d z\wedge\d\bar{z}$. We can consider the trivial line bundle $L=\R^2\times\C\to\R^2$, endowed with its standard Hermitian metric $$h\_{x,y}(z,w)=z\overline{w},\ \forall (x,y)\in\R^2,\ \forall z,w\in\C.$$ A global nonvanishing section of *L* is given by $$\psi:\left\lbrace\begin{array}{ccc} \C & \to & \C \\ z & \mapsto & e^{-\frac{\vert z\vert^2}{2}}. \end{array}\right.$$ As $\C$ is not compact, the space *H*0(*M*, *L**k*) is infinite-dimensional, and it is not even a subspace of $L^2(\C,L^k)$. However, if we replace it by $$\mathcal{H}\_k=H^0(\C,L^k)\cap L^2(\C,L^k),$$ we obtain the so-called *Bargmann spaces*, which are Hilbert spaces for all *k* ≥ 1. They have the following explicit description: $$\mathcal{H}\_k = \{ f\psi^k:\ f:\C\to\C \text{ is holomorphic}, \int\_\C f(z)e^{-k\vert z\vert^2}\d z\wedge\d\bar{z} < +\infty\}.$$ This Hilbert spaces is generated by the functions (*z**n**ψ**k*)*n* ≥ 0, and one can check that they form an orthonormal family of H*k*, hence an orthogonal basis. A straightforward computation of their norm yields the orthonormal basis (*s**k*, *n*)*n* ≥ 0 with $$s\_{k,n}(z) = \sqrt{\frac{k^{n+1}}{2\pi n!}}z^n\psi^k(z),\ \forall z\in\C,\ \forall n\geq 0.$$ The Bergman kernel is $$B\_{k}(z,w)=\sum\_{n\geq 0} s\_{k,n}(z)\overline{s\_{k,n}}(w) = \frac{k}{2\pi}\sum\_{n\geq 0}\frac{1}{n!}\left(kz\bar{w}\right)^n \psi^k(z)\psi^k(w) = \frac{k}{2\pi}e^{kz\bar{w}-\frac{k}{2}\vert z\vert^2 - \frac{k}{2}\vert w\vert}.$$ In this context, *B**k* is rather called the *Christoffel–Darboux kernel*, and the orthonormal family (*s**k*, *n*) is related to the Hermite polynomials. If we fix *k* = 1, the kernel is related to the *infinite Ginibre ensemble* in random matrix theory. **Projective spaces.** For any *d* ≥ 1, the complex projective space $\CP^d$ is defined as the quotient $\C^{d+1}\setminus\{0\}/\C^\*$ for the action $$\lambda\cdot(z\_0,\ldots,z\_d) = (\lambda z\_0,\ldots,\lambda z\_d),\ \forall \lambda\in\C^\*, \ \forall (z\_0,\ldots,z\_d)\in\C^{d+1}\setminus\{0\}.$$ We denote by [*Z*0 : ⋯ : *Z**d*] the equivalence class of (*Z*0, …, *Z**d*) in $\CP^d$, known as the *homogeneous coordinates*. We also denote by *π* the projection $\C^{d+1}\setminus\{0\}\to\CP^d$ induced by the action of $\C^\*$. The structure of complex manifold is for instance given by the atlas (*U**i*, *φ**i*)0 ≤ *i* ≤ *d*, with $$U\_i = \{[Z\_0:\cdots:Z\_d]\in\CP^d, Z\_i\neq 0\},$$ $$\varphi\_i:[Z\_0:\cdots:Z\_d]\mapsto \left(\frac{Z\_0}{Z\_i},\ldots,\widehat{\frac{Z\_i}{Z\_i}},\ldots,\frac{Z\_d}{Z\_i}\right) = (z\_1,\ldots,z\_d)\in\C^d\setminus\{0\},$$ where the hat indicates that the term is omitted. Consider the set $$\mathcal{O}(-1)= \{([u],\lambda u), u\in\C^{d+1}\setminus\{0\}, \lambda\in\C\},$$ together with the projection $\pi:\mathcal{O}(1)\to\CP^d$ given by *π*([*u*], *v*) = [*u*]. It defines a holomorphic line bundle, called the *tautological line bundle*. The fibers over $\CP^d$ are exactly the lines generated by the homogeneous coordinates. It is endowed with a Hermitian metric induced by the one from $\C^{d+1}$: $$h\_{[u]}(v,w) = \sum\_{i=1}^{d+1}v\_i\overline{w\_i},\ \forall [u]\in\CP^d,\ \forall v,w\in\C^{d+1}.$$ The complex projective spaces can be somehow related to the unit spheres $\Sbb^{2d+1}\subset\R^{2d+2}$ by the following construction: the action of $\C^\*$ on $\C^{d+1}\setminus\{0\}\simeq\R^{2d+2}$ can be decomposed into two successive actions of $\R\_+^\*$ and $\Sbb^1$, thanks to the polar decomposition of nonzero complex numbers. Let us denote by $\pi\_1:\C^{d+1}\to\Sbb^{2d+1}$ and $\pi\_2:\Sbb^{2d+1}\to\Sbb^{2d+1}/\Sbb^1$ the corresponding projections. The following diagram commutes: $$\begin{tikzcd} \C^{d+1}\setminus\{0\} \ar[r, "\pi\_1"] \ar[d, "\pi"'] & \Sbb^{2d+1} \ar[d, "\pi\_2"] \\ \CP^d \ar[r, dashed, "\simeq"] & \Sbb^{2d+1}/\Sbb^1. \end{tikzcd}$$ The bottom arrow is indeed a bijection. The *Fubini–Study* metric on $\CP^d$ is the Hermitian metric on $\CP^d\simeq\Sbb^{2d+1}/\Sbb^1$ induced by the round metric (which is a Riemannian metric) on $\Sbb^{2d+1}$. In the case *d* = 1, there is an additional relationship $\CP^1\simeq\Sbb^2$ that we will develop in Section [sec:sphere]. Bergman ensembles ================= Definitions ----------- Let $\M$ be a compact complex manifold of dimension *d* endowed with a Borel probability measure *μ*, *L* be a holomorphic line bundle over $\M$ with Hermitian metric *h*, represented by a local weight function *ϕ*. Set $N=\dim H^0(\M,L)$, and denote by P*ϕ* the probability measure on $\M^{N}$ defined by $$\label{eq:P\_phi} \d\mathcal{P}\_\phi(x\_1,\ldots,x\_N)=\frac{1}{Z\_N(\phi)}\vert\det(s\_i(x\_j))\vert\_{\phi}^2\d\mu^{\otimes N}(x\_1,\ldots,x\_N),$$ where (*s**i*) is an orthonormal basis of $(H^0(\M,L),\langle\cdot,\cdot\rangle\_{(\phi,\mu)})$, and *Z**N*(*ϕ*) is a normalization constant called *partition function*. Using the generalized Cauchy–Binet identity, the partition function satisfies *Z**N*(*ϕ*) = *N*! for any weight *ϕ*. We will denote by $\E\_\phi$ the expectation with respect to P*ϕ*, i.e., for any bounded measurable $F:\M^N\to\C$, $$\E\_\phi[F(X\_1,\ldots,X\_N)] = \int\_{\M^N} F \d\mathcal{P}\_\phi.$$ Recall that a (simple) point process on $\M$ is a random configuration on $\M$, or equivalently the counting measure of this configuration. In particular, given a family (*X*1, …, *X**N*) of almost-surely distinct random variables on $\M$, the random measure ∑*i* = 1*N**δ**X**i* defines almost-surely a simple point process. The *n*-point correlation function $\rho\_n:\M^n\to\R$ of such point process, when it exists, is characterized by the following property: for any bounded measurable function $f:\M^n\to\R$, $$\E\_\phi\left[\sum\_{i\_1\neq\cdots\neq i\_n} f(X\_{i\_1},\ldots,X\_{i\_n})\right] = \int\_{\M^n}f(x\_1,\ldots,x\_n)\rho\_n(x\_1,\ldots,x\_n)\d\mu^{\otimes N}(x\_1,\ldots,x\_n).$$ The *Bergman ensemble* for the weighted measure (*ϕ*, *μ*) is the simple point process ∑*i* = 1*N**δ**X**i*,  where (*X*1, …, *X**N*) is a family of random variables on $\M$ with distribution P*ϕ*. It was proved in that such ensemble is a *determinantal point process* with kernel *B*(*ϕ*, *μ*), which means by definition that $$\rho\_n(x\_1,\ldots,x\_N) = \det(B\_{(\phi,\mu)}(x\_i,x\_j)),\ \forall n\geq 1,\ \forall x\_1,\ldots,x\_n\in\M,$$ where the determinant of the Bergman kernel is given in Definition [def:detBk]. Although this construction holds for any Hermitian line bundle, we shall focus on the case where *L* is replaced by *L**k* ⊗ *F*, for large *k*, with corresponding weight *k**ϕ* + *ϕ**F*. Convergence of the Bergman measures ----------------------------------- The first macroscopic estimation of the Bergman ensemble for the weighted measure (*k**ϕ*, *μ*) for large *k* is given by the asymptotics of the *Bergman measures* $\d\beta\_k(x)=\frac{1}{N\_k}B\_{(k\phi,\mu)}(x,x)\d\mu(x)$. Indeed, as $$\E\_{k\phi}\left[\frac{1}{N\_k}\sum\_{i=1}^{N\_k} f(X\_i)\right] = \int\_\M f(x)\d\beta\_k(x),$$ the weak convergence in expectation of the empirical measures of the DPP is equivalent to the weak convergence of the Bergman measures. In the case of a compact complex manifold $\M$ endowed with a positive Hermitian line bundle *L*, the Bergman measures converge pointwise (hence weakly) to an equilibrium measure $\mu\_\eq$ because of the diagonal expansion of the Bergman kernel. In the more general case studied by that we consider here, the (weak or pointwise) convergence of Bergman measures is not automatic. If $\Omega\subset\C^d$ is an open set, a function *f* : Ω → [ − ∞,  + ∞] is called *plurisubharmonic* (psh) if it is upper semicontinuous, and if for all *z* ∈ Ω and $\xi\in\C^d$ such that |*ξ*| < *d*(*z*, Ω*c*), $$f(z) \leq \frac{1}{2\pi}\int\_0^{2\pi}f(z + e^{i\theta}\xi)\d\theta.$$ Given a complex manifold $\M$ of dimension *d*, a function $f:\M\to[-\infty,+\infty]$ is called plurisubharmonic if for any chart (*U*, *φ*), *f* ∘ *φ*− 1 : *φ*− 1(*U*) → [ − ∞,  + ∞] is plurisubharmonic. A function $f\in\mathscr{C}^{1,1}(\M)$ is plurisubharmonic if and only if $\ddbar f$ is a nonnegative (1, 1)-form, namely the coefficients of the decomposition of $\ddbar f$ in are nonnegative for all $x\in\M$. Equivalently, it means that $\ddc f$ is a nonnegative 2-form. Now, let us go back to the Hermitian line bundle $L\to\M$ endowed with a local weight *ϕ*. If *ϕ* is only continuous, one can define its *plurisubharmonic envelope* $\phi\_e:\M\to[-\infty,+\infty]$ by *ϕ**e*(*x*) = sup{*ψ*(*x*) : *ψ* continuous and psh s.t. *ψ* ≤ *ϕ*}. [def:weakbulk] The *weak bulk* is the largest open subset of $\M$ where the following equality holds pointwise: $$\det(\ddc\phi\_e) = \det(\ddc\phi).$$ See for a description of the weak bulk and its properties. A noticeable fact is that it is contained in the set $\{x\in\M:\ddc\phi(x)>0\}$. The Bergman measure *β**k* has density $\frac{1}{N\_k}B\_{(k\phi,\mu)}$ with respect to $\d\mu$; we will actually control the convergence of the inverse of this density, for a reason that will appear later. Let $w\_\eq^\phi = \frac{\vol(L)}{d!\det(\ddc\phi)}$ be the *equilibrium weight*. The following lemma is an analog of a classical result about the Christoffel–Darboux kernel for orthogonal polynomials on the unit circle and on the real segment [ − 1, 1]. [lem:cvbergmanmeasure] Let $\M$ be a compact complex manifold of dimension *d* endowed with a continuous volume form *ω*, and *μ* be the Borel measure on $\M$ corresponding to *ω*. Let *L* be a holomorphic line bundle over $\M$ endowed with an Hermitian metric *h* corresponding to a local weight *ϕ* such that (*ϕ*, *μ*) is strongly regular. For all *x* in the weak bulk, $$\label{eq:Totik} \frac{N\_k}{B\_{(k\phi,\mu)}(x,x)} = w\_\eq^\phi + O(k^{-1}),$$ where the remainder is uniformly bounded in $x\in \M$. The result will follow from two separate estimates that we pull together from the literature: one for $N\_k=\dim H^0(\M,L^k)$ and one for *B*(*k**ϕ* + *ϕ**k*, *μ*) on the diagonal. Both are quite standard but we expose them for readers who are less familiar with complex geometry. We define the space of *L**k*-valued (*p*, *q*)-forms as $$\Omega^{p,q}(L^k)=\mathscr{C}^\infty(\Omega^{p,q}(\M)\otimes L^k).$$ The Dolbeault operator $\bar\partial$ extends to an operator $\bar\partial\_{L^k}:\Omega^{p,q}(L^k)\to\Omega^{p,q+1}(L^k)$, and we define for all *q* ≥ 0 the vector space $$H^q(M,L^k)=\ker(\bar\partial\_{L^k}:\Omega^{0,q}(L^k)\to \Omega^{0,q+1}(L^k))/\bar\partial\_{L^k}(\Omega^{0,q-1}(L^k).$$ Note that for *q* = 0, it coincides with the space of holomorphic sections *H*0(*M*, *L**k*) which is at the center of the present paper. The dimensions of these spaces are involved in the definition of the *Euler characteristic* of the line bundle: $$\chi(\M,L^k) := \sum\_{q=0}^d (-1)^q\dim H^q(\M,L^k).$$ The Kodaira–Serre vanishing theorem states that $\dim H^q(\M,L^k)=0$ for all *q* > 0 and *k* large enough, leaving $$\chi(\M,L^k) = \dim H^0(\M,L^k) = N\_k.$$ We can then apply the Hirzebruch–Riemann–Roch formula (see for instance), and there is a polynomial *P**d* − 1 ∈ Q[*X*] of degree at most *d* − 1 such that $$\label{eq:dvp\_Nk} \chi(\M,L^k)=\frac{\vol(L)}{d!}k^d + P\_{d-1}(k)=N\_k.$$ On the diagonal, the Bergman kernel admits the following expansion : for any *x* in the weak bulk, $$\label{eq:dvp\_Bk} B\_{(k\phi,\mu)}(x,x) = \det(\ddc\phi)k^d + O(k^{d-1}),$$ where the big *O* only depends on *k* and is uniform in *x*. Equation results trivially from and. Laplace transform of linear statistics -------------------------------------- DPPs are known to have tractable Laplace transforms of linear statistics, and has used this Laplace transform to obtain central limit theorems for $\d\mathcal{P}\_\phi$. For any nonnegative and measurable $\psi: \M\rightarrow \mathbb{R}$, define the log Laplace transform of the linear statistics ∑*i**ψ*(*X**i*) as $$K\_\phi^\psi(t)=\log\E\_\phi\left[\e^{-t\sum\_{i=1}^N\psi(X\_i)}\right].$$ The first result used by Berman is the following, relating the derivatives of *K**ϕ**ψ* and the expectation and variance of the linear statistics. [][prop:expvar] The log Laplace transform *K**ϕ**ψ* is at least twice derivable with respect to *t*, and satisfies $$\frac{\d}{\d t}K\_\phi^\psi(t) = -\E\_{\phi+t\psi}\left[\sum\_{i=1}^N\psi(X\_i)\right] = -\int\_\M \psi(x)B\_{(\phi+t\psi,\mu)}(x,x)\d\mu(x),$$ $$\begin{aligned} \frac{\d^2}{\d t^2}K\_\phi^\psi(t) &= \Var\_{\phi+t\psi}\left[\sum\_{i=1}^N\psi(X\_i)\right]\nonumber\\ &= \frac12\int\_{\M^2} (\psi(x)-\psi(y))^2\vert B\_{(\phi+t\psi,\mu)}(x,y)\vert\_{\phi+t\psi}^2\d\mu^{\otimes 2}(x,y).\end{aligned}$$ The second one is a control of the asymptotics of the integral in the expression of the variance in Proposition [prop:expvar], when *ϕ* is replaced by *k**ϕ* and *k* → ∞. [, Theorem 5.8] [thm:fluctuationsvariance0] Let $\M$ be a compact complex manifold of dimension *d* endowed with a Borel measure *μ* associated with a continuous volume form, *L* be a big line bundle over $\M$ endowed with a C1, 1 metric *ϕ*, *F* be a line bundle endowed with a continuous metric with weight *ϕ**F*, and *B**k**ϕ* + *ϕ**F* be the Bergman kernel of $H^0(\M,L^k\otimes F)$. If *f* is a Lipschitz function with compact support included in the bulk, then $$\lim\_{k\to\infty} \frac{1}{2}\iint\_{\M^2} k^{1-d} |B\_{k\phi+\phi\_F}(x,y)|^2(f(x)-f(y))^2\d\mu^{\otimes 2}(x,y) = \Vert \d f\Vert\_{\ddc\phi}^2.$$ Let $f:\M\to\C$ be a Lipschitz continuous function with compact support included in the weak bulk, and set $$f\_k:x\mapsto \frac{N\_k}{B\_{(k\phi,\mu)}(x,x)}f(x).$$ Our proof of Theorem [thm:MCmain] will follow the steps of Theorem [thm:CLT] by replacing *f* with *f**k*. In particular, we will prove a generalization of Theorem [thm:fluctuationsvariance0] taking into account the dependence on *k* in both *f* and *ϕ**F*. See Theorem [thm:fluctuationsvariance] for the precise statement. As we will see, almost all arguments from will remain unchanged. Bergman kernel estimates ======================== Before we dive into the proofs of our main Theorem, let us state a few technical estimates of the Bergman kernel. They generalize slightly some results by, yet their proofs are almost identical. They are based on *local* properties of the kernel, meaning that they are proved by using adequate sets of coordinates. Local coordinates *z*1, …, *z**d* are called *normal* if the (1, 1)-form *ω* that induces the continuous volume form *ω**d* associated with the measure *μ* has the following expression when |*z*| → 0: $$\label{eq:normal\_coord} \omega(z) = \frac{i}{2}\sum\_{i,j=1}^d h\_{ij}^{(0)}(z) \d z\_j\wedge\d\bar{z}\_j,\quad h\_{ij}^{(0)}(z) = \delta\_{ij} + O(\vert z\vert^2).$$ A local trivialization *e**U* with weight $\phi=-\ddbar\log\vert e\_U\vert\_\phi$ is *normal* if the weight *ϕ* has the following expression when |*z*| → 0: *ϕ*(*z*) = ∑*j* = 1*d**λ**j*|*z**j*|2 + *O*(|*z*|3). See or for an explanation of why such coordinates exist. In this setting, we have an explicit expression of the curvature $\ddc\phi$ in the centered coordinate: $$\ddc\phi(0) = \frac{\i}{2\pi}\sum\_{j=1}^d\lambda\_j \d z\_j\wedge\d\bar{z}\_j,$$ so that $$\omega\_\phi=(\ddc\phi)^d = \frac{\det\lambda}{\pi^d} \omega^d,$$ where we denote by $\lambda=\diag(\lambda\_1,\ldots,\lambda\_d)$ the diagonal matrix of eigenvalues of the curvature of *ϕ*. Given a family (*ϕ**k*) of local weights on *F* that converges uniformly to a weight *ϕ**F*, we will study the convergence of the Bergman kernel *B*(*k**ϕ* + *ϕ**k*, *μ*) at two different scales: near the diagonal, and far from the diagonal. All results are actually variants of results by, which were stated for *B*(*k**ϕ* + *ϕ**F*, *μ*), and their proofs are extremely similar, but we shall recall them for the sake of completeness. They will rely on, as well on the the following estimate. [lem:kphi] Let $\M$ be a compact complex manifold of dimension *d* endowed with a continuous volume form *ω**d*, and *μ* be the Borel measure on $\M$ corresponding to *ω**d*. Let *L* be a holomorphic line bundle over $\M$ endowed with an Hermitian metric *h* corresponding to a local weight *ϕ* such that (*ϕ*, *μ*) is strongly regular. For all *x* in the weak bulk, in a normal trivialization and a normal coordinate system *z* centered at *x*, $$\label{eq:kphi} k\phi(z)+\phi\_k(z) = k\left(\sum\_i \lambda\_i\vert z\_i\vert^2 + {\mathrel{\mathop{\kern 0ptO(\vert z\vert^3)}\limits\_{\vert z\vert\to 0}^{}}}\right).$$ One can choose the trivialization such that *ϕ**F*(0) = 0, and in this case the uniform convergence of (*ϕ**k*) and the continuity of *ϕ**F* yield *ϕ**k*(*z*) = *ϕ**F*(*z*) + *o**k* → ∞(1) = *o*|*z*| → 0(|*z*|) + *o**k* → ∞(1),  so that *k**ϕ*(*z*) + *ϕ**k*(*z*) and *k**ϕ*(*z*) have the same asymptotic expression for |*z*| → 0 and *k* → ∞ (which actually does not depend on *ϕ**F*). In particular, we have the following rescaled uniform convergence for |*z*| ≤ *R*, with *R* > 0 fixed: $$\label{eq:kphi\_rescaled} k\phi\left(\frac{z}{\sqrt{k}}\right)+\phi\_k\left(\frac{z}{\sqrt{k}}\right)-\sum\_i\lambda\_i\vert z\_i\vert^2 = \frac{1}{\sqrt{k}}O(\vert z\vert^3).$$ Scaling limit near the diagonal ------------------------------- [thm:localuniv] Let *L* be a big line bundle with a weight *ϕ* which is locally C1, 1, and *F* be another line bundle endowed with a continuous local weight *ϕ**F*. Let (*ϕ**k*)*k* be a sequence of continuous local weights on *F* that converges uniformly to *ϕ**F*. Assume that *μ* is a continuous volume form on $\M$. Let *x* be a fixed point in the weak bulk and take normal local coordinates *z* centered at *x* and a normal trivialization of *L* ⊗ *F*. Then $$\label{eq:Bergman\_expansion} k^{-d}B\_{(k\phi+\phi\_k,\mu)}\left(\frac{z}{\sqrt{k}},\frac{w}{\sqrt{k}}\right){\mathrel{\mathop{\kern 0pt\longrightarrow}\limits\_{k\to\infty}^{}}}\frac{\det\lambda}{\pi^d}\e^{\langle\lambda z,w\rangle}$$ in the C∞-topology on compact subsets of $\C^d\times\C^d$. In order to prove this Theorem, we will need two inequalities that generalize well-known estimates in the case of *B*(*k**ϕ* + *ϕ**F*, *μ*). [Holomorphic Morse inequality][lem:Morse] Let $x\in\M$ be a point such that the second order derivatives of *ϕ* exist, and let *z* be a normal local coordinate system centered at *x*. Then $$\limsup\_k k^{-d} B\_{(k\phi+\phi\_k,\mu)}\left(\frac{z}{\sqrt{k}},\frac{z}{\sqrt{k}}\right) \leq \frac{\det(\lambda)}{\pi^d}.$$ Let *ɛ* > 0 be fixed. Using the local normal coordinate system *z* and a normal trivialization *e**U* in a local neighborhood of *x*, given an orthonormal basis (*s**i*) of $H^0(\M,L^k\otimes F)$, there are holomorphic functions (*f**i*) such that for all *i* $$\left\vert s\_i\left(\frac{z}{\sqrt{k}}\right)\right\vert\_{k\phi+\phi\_k}^2 = \left\vert f\_i\left(\frac{z}{\sqrt{k}}\right)\right\vert^2 e^{-k\phi\left(\frac{z}{\sqrt{k}}\right)-\phi\_k\left(\frac{z}{\sqrt{k}}\right)},$$ therefore $$B\_{(k\phi+\phi\_k,\mu)}\left(\frac{z}{\sqrt{k}},\frac{z}{\sqrt{k}}\right) = \sum\_i \left\vert f\_i\left(\frac{z}{\sqrt{k}}\right)\right\vert^2e^{-k\phi\left(\frac{z}{\sqrt{k}}\right)-\phi\_k\left(\frac{z}{\sqrt{k}}\right)}.$$ Analogously, we have *B*(*k**ϕ* + *ϕ**k*, *μ*)(0, 0) = ∑*i*|*f**i*(0)|2*e*− *k**ϕ*(0) − *ϕ**k*(0) = ∑*i*|*f**i*(0)|2. For all 1 ≤ *i* ≤ *N**k* there are integers *n**i* such that for all *k* ≥ *n**i*,  $$\left\vert f\_i\left(\frac{z\_i}{\sqrt{k}}\right)\right\vert^2 \leq \vert f\_i(0)\vert^2(1 + \varepsilon).$$ For *k* large enough, we also have $$\left\vert k\phi\left(\frac{z}{\sqrt{k}}\right)+\phi\_k\left(\frac{z}{\sqrt{k}}\right)\right\vert \leq \varepsilon,$$ therefore, for *k* large enough, $$B\_{(k\phi+\phi\_k,\mu)}\left(\frac{z}{\sqrt{k}},\frac{z}{\sqrt{k}}\right) \leq \sum\_i\vert f\_i(0)\vert^2(1+\varepsilon)e^{\varepsilon}=B\_{(k\phi+\phi\_k,\mu)}(0,0)(1+\varepsilon)e^{\varepsilon}.$$ Using the extremal property of the Bergman kernel, for any fixed *R* > 0, $$\begin{aligned} k^{-d}B\_{(k\phi+\phi\_k,\mu)}(0,0) & = \sup\_{s\in H^0(\M,L^k\otimes F)}\frac{\vert s(0)\vert\_{k\phi+\phi\_k}^2}{k^d\int\_\M \vert s(x)\vert\_{k\phi+\phi\_k}^2\d\mu(x)}\\ & \leq \sup\_{s\in H^0(\M,L^k\otimes F)}\frac{\vert s(0)\vert\_{k\phi+\phi\_k}^2}{k^d\int\_{\vert z\vert\leq \frac{R}{\sqrt{k}}} \vert s(z)\vert\_{k\phi+\phi\_k}^2\d\mu(z)}.\end{aligned}$$ We can replace *z* by $\frac{z}{\sqrt{k}}$ in the integral, giving $$k^d\int\_{\vert z\vert\leq \frac{R}{\sqrt{k}}} \vert s(z)\vert\_{k\phi+\phi\_k}^2\d\mu(z) = \int\_{\vert z\vert\leq R}\left\vert f\left(\frac{z}{\sqrt{k}}\right)\right\vert e^{-k\phi\left(\frac{z}{\sqrt{k}}\right)-\phi\_k\left(\frac{z}{\sqrt{k}}\right)}\d\mu(z),$$ where *f* is a holomorphic function such that locally *s*(*z*) = *f*(*z*)*e**U*(*z*). From we deduce that $$\sup\_{\vert z\vert\leq R}\left\vert k\phi\left(\frac{z}{\sqrt{k}}\right)+\phi\_k\left(\frac{z}{\sqrt{k}}\right)-\sum\_i\lambda\_i\vert z\_i\vert^2\right\vert{\mathrel{\mathop{\kern 0pt\longrightarrow}\limits\_{k\to\infty}^{}}} 0,$$ and by continuity of *f* we get $$\lim\_{k\to\infty}\int\_{\vert z\vert\leq R}\left\vert f\left(\frac{z}{\sqrt{k}}\right)\right\vert e^{-k\phi\left(\frac{z}{\sqrt{k}}\right)-\phi\_k\left(\frac{z}{\sqrt{k}}\right)}\d\mu(z) = \int\_{\vert z\vert \leq R}e^{-\sum\_i\lambda\_i\vert z\_i\vert^2}\prod\_i \frac{\i}{2}\d z\_i\wedge\d\bar{z}\_i.$$ So far, we have obtained that, for *ɛ* > 0 fixed, for *k* large enough, $$k^{-d}B\_{(k\phi+\phi\_k,\mu)}\left(\frac{z}{\sqrt{k}},\frac{z}{\sqrt{k}}\right) \leq \sup\_{s\in H^0(\M,L^k\otimes F)}\frac{\vert s(0)\vert\_{k\phi+\phi\_k}^2(1+\varepsilon)e^{\varepsilon}}{k^d\int\_{\vert z\vert\leq \frac{R}{\sqrt{k}}} \vert s(z)\vert\_{k\phi+\phi\_k}^2\d\mu(z)}.$$ Taking the limsup over *k* and then letting *R* → ∞ yields $$\limsup\_k k^dB\_{(k\phi+\phi\_k,\mu)}\left(\frac{z}{\sqrt{k}},\frac{z}{\sqrt{k}}\right) \leq \frac{(1+\varepsilon)e^\varepsilon}{\int\_{\C^d}e^{-\sum\_i\lambda\_i\vert z\_i\vert^2}\prod\_i \frac{\i}{2}\d z\_i\wedge\d\bar{z}\_i}=\frac{\det(\lambda)}{\pi^d}(1+\varepsilon)e^\varepsilon.$$ As it holds for any *ɛ* > 0, letting *ɛ* → 0 yields the lemma. [lem:lowboundbk] Let $x\in\M$ be a point in the weak bulk, and *z* be a normal local coordinate system centered at *x*. Then $$\liminf\_k\frac{1}{k^d}\left\vert B\_{(k\phi+\phi\_k,\mu)}\left(\frac{z}{\sqrt{k}},\frac{z}{\sqrt{k}}\right)\right\vert\_{k\phi+\phi\_k}\geq \frac{\det(\lambda)}{\pi^d}.$$ **Step 1: construction of a smooth extremal section.** Let $x\in\M$ be in the weak bulk. We will prove that there exists a smooth section *σ**k* of *L**k* ⊗ *F* such that for *z*0 fixed and for (*ϕ**k*) a sequence of weights on *F* such that *ϕ**k*(0) = 0 for all *k* and *ϕ**k* → *ϕ**F* uniformly, $$\label{eq:sigma\_k} (i)\ \lim\_{k\to\infty}\frac{\vert\sigma\_k\vert\_{k\phi}^2\left(\frac{z\_0}{\sqrt{k}}\right)}{k^d\Vert\sigma\_k\Vert\_{k\phi+\phi\_k}^2} = \det\_\omega(\ddc\phi)(x),\quad (ii)\ \Vert\bar\partial\sigma\_k\Vert\_{k\phi+\phi\_k}^2 \leq C e^{-k/C}.$$ In order to do so, we take a smooth function *χ* which is constant and equal to 1 on {*z* : |*z*| ≤ *δ*/2} and supported in {*z* : |*z*| ≤ *δ*} for a given *δ* that we will fix later. Let $\sqrt{\lambda}=\diag(\sqrt{\lambda\_1},\ldots,\sqrt{\lambda\_d})$ be the squared root of the matrix *λ*, and set for any *z* such that |*z*| ≤ *λ* $$\sigma\_k(z) = \chi(z)e^{-\frac{k}{2}\left(\left\vert \sqrt{\lambda}(z - \frac{z\_0}{\sqrt{k}})\right\vert^2 - \vert \sqrt{\lambda}z\vert^2\right)}e\_U(z)=\chi(z) e^{k\left(\langle \sqrt{\lambda}z,\sqrt{\lambda}\frac{z\_0}{\sqrt{k}}\rangle - \frac{1}{2k}\vert\sqrt{\lambda}z\_0\vert^2 \right)}e\_U(z),$$ where *e**U* is a local frame such that |*e**U*(*z*)|*k**ϕ*2 = *e*− *k**ϕ*(*z*). We extend *σ**k* to the value 0 for |*z*| > *δ*, which indeed defines a smooth section. We claim that *σ**k* satisfies. Indeed, $$\vert\sigma\_k\vert\_{k\phi}^2\left(\frac{z\_0}{\sqrt{k}}\right) = \chi\left(\frac{z\_0}{\sqrt{k}}\right)^2e^{\vert\sqrt{\lambda}z\_0\vert^2}e^{-k\phi\left(\frac{z\_0}{\sqrt{k}}\right)},$$ and for *k* large enough we have $\vert z\_0/\sqrt{k}\vert\leq \delta/2$, so that yields $$\lim\_{k\to\infty}\vert\sigma\_k\vert\_{k\phi}^2\left(\frac{z\_0}{\sqrt{k}}\right) = 1.$$ Likewise, let us compute the denominator of (*i*) in. Using the fact that the support of *σ**k* is included in {|*z*| ≤ *δ*}, $$\begin{aligned} \Vert\sigma\_k\Vert\_{(k\phi+\phi\_k,\mu)}^2 & = \int\_\M \vert\sigma\_k\vert\_{k\phi+\phi\_k}^2(x)\omega\_d(x)\\ & = \int\_{\vert z\vert \leq \delta} \chi(z)^2e^{k\left(\left\vert \sqrt{\lambda}(z - \frac{z\_0}{\sqrt{k}})\right\vert^2-\vert\sqrt{\lambda}z\vert^2\right)}e^{-k\phi(z)-\phi\_k(z)}\det h^{(0)}(z)\bigwedge\_i\frac{\i}{2}\d z\_i\wedge\d\bar{z}\_i.\end{aligned}$$ We split the domain of integration into two regions: $$A\_k = \{\vert z\vert \leq R/\sqrt{k}\},\quad B\_k = \{R/\sqrt{k}\leq\vert z\vert \leq \delta\},$$ and we perform a change of variables $\zeta =\sqrt{k}z$. First, $$\Vert\sigma\_k\Vert\_{(k\phi+\phi\_k,\mu),A\_k}^2 = k^{-d}\int\_{\vert\zeta\vert\leq R} \chi\left(\frac{\zeta}{\sqrt{k}}\right)^2e^{\vert\sqrt{\lambda}(\zeta-z\_0)\vert^2}e^{k\vert\sqrt{\lambda}\zeta\vert^2-k\phi\left(\frac{\zeta}{\sqrt{k}}\right)-\phi\_k\left(\frac{\zeta}{\sqrt{k}}\right)}\det h^{(0)}\left(\frac{\zeta}{\sqrt{k}}\right)\bigwedge\_i \frac{\i}{2} \d\zeta\_i\wedge\d\bar{\zeta}\_i$$ and a dominated convergence combined with yields $$\lim\_{k\to\infty}k^d\Vert\sigma\_k\Vert\_{(k\phi+\phi\_k,\mu),A\_k}^2 = \int\_{\vert\zeta\vert\leq R}e^{\vert\sqrt{\lambda}(\zeta-z\_0)\vert^2}\left(\frac{\i}{2}\right)^d \d\zeta\_1\wedge\d\bar{\zeta}\_1\wedge\cdots\wedge\d\zeta\_d\wedge\d\bar{\zeta}\_d.$$ The RHS converges then to (2*π*)*d*(det*λ*)− 1 as *R* → ∞. Using similar arguments we see that *k**d*‖*σ**k*‖(*k**ϕ* + *ϕ**k*, *μ*), *B**k*2 converges, as *k* → ∞, to the tail of a multidimensional Gaussian integral, and letting *R* → ∞ make it finally vanish. We have proved (*i*). Now, let us prove the point (*i**i*). We have for any *z* $$\bar\partial \sigma\_k(z) = \left(\bar\partial\chi(z)\right)e^{-\frac{k}{2}\left(\left\vert \sqrt{\lambda}(z - \frac{z\_0}{\sqrt{k}})\right\vert^2 - \vert \sqrt{\lambda}z\vert^2\right)}e\_U(z)$$ because the exponential part is holomorphic. In partiular, it means that $\bar\partial\sigma\_k(z)=0$ for all |*z*| ≤ *δ*/2, and we get $$\begin{aligned} \Vert\bar\partial\sigma\_k\Vert\_{(k\phi+\phi\_k,\mu)}^2 = & \int\_{\delta/2\leq \vert z\vert\leq \delta}\vert\bar\partial\sigma\_k\vert\_{k\phi+\phi\_k}^2(z)\det h^{(0)}(z)\bigwedge\_i\frac{\i}{2}\d z\_i\wedge\d\bar{z}\_i\\ = & \int\_{\delta/2\leq \vert z\vert\leq \delta}\left(\bar\partial\chi(z)\right)^2e^{-k\vert\sqrt{\lambda}(z-\frac{z\_0}{k})\vert^2+k\vert\sqrt{\lambda}z\vert^2-k\phi(z)-\phi\_k(z)}\det h^{(0)}(z)\bigwedge\_i\frac{\i}{2}\d z\_i\wedge\d\bar{z}\_i.\end{aligned}$$ By we know that there exists a constant *C*1 > 0 such that $$\left\vert k\vert\sqrt{\lambda}z\vert^2-k\phi(z)-\phi\_k(z)\right\vert\leq C\_1k\vert z\vert^3,$$ so that if we take *δ* small enough we get, for |*z*| ≤ *δ*, $$\left\vert k\vert\sqrt{\lambda}z\vert^2-k\phi(z)-\phi\_k(z)\right\vert\leq \frac{k}{4}\inf\_i\lambda\_i\vert z\vert^2\leq \frac{k}{4}\vert\sqrt{\lambda}z\vert^2.$$ For *k* large enough, we also have $$\vert\sqrt{\lambda}(z-\frac{z\_0}{k})\vert^2 \geq \frac14 \vert\sqrt{\lambda} z\vert^2.$$ Let us combine these estimates and set $C\_2=\sup\_{\vert z\vert\leq \delta}\left(\bar\partial\chi(z)\det h^{(0)}(z)\right)^2$, yielding $$\Vert\bar\partial\sigma\_k\Vert\_{(k\phi+\phi\_k,\mu)}^2 \leq C\_2 \int\_{\delta/2\leq \vert z\vert\leq \delta} e^{-\frac{k}{2}\vert\sqrt{\lambda}z\vert^2}\bigwedge\_i\frac{\i}{2}\d z\_i\wedge\d\bar{z}\_i.$$ Equation (*i**i*) follows. **Step 2: perturbation to a holomorphic extremal section.** We equip *L**k* ⊗ *F* with a strictly positively curved modification *ψ**k* of the weight *k**ϕ* + *ϕ**k* (see Lemma 2.5 in ). Let $g\_k=\bar\partial\sigma\_k$. According to the Hörmander–Kodaira estimate (see Theorem 4.1 in ), there exists for all *k* a smooth section *u**k* with values in $L\otimes K\_\M$, where $K\_\M$ is the canonical bundle of $\M$, such that $$\bar\partial u\_k = g\_k,\quad \Vert u\_k\Vert\_{(\psi\_k,\mu)}\leq C\Vert g\_k\Vert\_{(\psi\_k,\mu)}.$$ We conclude by setting *α**k* = *σ**k* − *u**k*, which is indeed holomorphic, and satisfies. [Proof of Theorem [thm:localuniv]] It follows more or less directly from Lemmas [lem:Morse] and [lem:lowboundbk]. The proof of can also be adapted verbatim using our estimates. Off-diagonal decay ------------------ The following result is adapted from, and has more or less the same proof. [thm:offdiag] Let *L* be a big line bundle with a C*l**o**c*1, 1 weight *ϕ* and *F* be another line bundle endowed with a continuous local weight *ϕ**F*. Assume that *μ* is the Borel measure associated to a continuous volume form on $\M$, and that (*ϕ**k*)*k* is a sequence of weights on *F* that converge uniformly to *ϕ**F*. Let *E* be a compact subset of the interior of the bulk. There is a constant *C* such that for any *k*, any *x* ∈ *E* and $y\in \M$, $$\label{eq:off\_diag} k^{-2d}|B\_{k\phi+\phi\_k}(x,y)|^2 \leq C \e^{-\frac{\sqrt{k}d(x,y)}{C}},$$ where *d*(*x*, *y*) is the distance with respect to a smooth metric on $\M$. Convergence to the equilibrium weight ------------------------------------- A sequence (*f**k*) of functions on $\M$ with values in $\R$ is said to *converge uniformly* to $f:\M\to\R$ *with speed* *u**k*, where (*u**k*) is an increasing sequence of positive real numbers with limit ∞, if there exists *C* > 0 such that $$\sup\_{x\in \M}\vert f\_k(x)-f(x)\vert \leq \frac{C}{u\_k}.$$ If (*v**k*) is another sequence of positive real numbers with limit ∞ such that *u**k* > *v**k* for all *k*, then we also say that *f**k* converges uniformly to *f* *faster than* *v**k*. Note that she speed of convergence is not unique: if (*u**k*) and (*v**k*) are two sequences of positive numbers such that *u**k* ≤ *v**k* and if *f**k* converge to *f* uniformly with speed *u**k*, then *a fortiori* it also converges with speed *v**k*. The following proposition is a trivial consequence of Lemma [lem:cvbergmanmeasure]. [cor:convergencefk] Let $\M$ be a compact complex manifold of dimension *d* endowed with a Borel measure *μ* associated with a continuous volume form, *L* be a line bundle over $\M$ endowed with a C1, 1 weight *ϕ* such that (*ϕ*, *μ*) is strongly regular, *F* be a line bundle endowed with a continuous weight *ϕ**k*, and *B**k**ϕ* + *ϕ**k* be the Bergman kernel of $H^0(\M,L^k\otimes F)$. Assume that the sequence (*ϕ**k*) converges uniformly to a continuous weight *ϕ**F*. Let $f:\M\to\C$ be a Lipschitz function with compact support included in the weak bulk. If we set $f\_k:x\mapsto \frac{N\_k}{B\_{(k\phi,\mu)}(x,x)}f(x)$ for any *k*, then all *f**k* are Lipschitz and they converge uniformly to $f\_\eq=w\_\eq^\phi f$ with speed $\tfrac{1}{k}$. We are now able to state and prove a variant of Theorem [thm:fluctuationsvariance0]. [thm:fluctuationsvariance] Let $\M$ be a compact complex manifold of dimension *d* endowed with a Borel measure *μ* associated with a continuous volume form, *L* be a line bundle over $\M$ endowed with a C1, 1 weight *ϕ* such that (*ϕ*, *μ*) is strongly regular, *F* be a line bundle endowed with a continuous weight *ϕ**k*, and *B**k**ϕ* + *ϕ**k* be the Bergman kernel of $H^0(\M,L^k\otimes F)$. Let (*f**k*) be a sequence of Lipschitz functions with compact support included in the weak bulk. Assume that: 1. the sequence (*ϕ**k*) converges uniformly to a continuous weight *ϕ**F*; 2. the sequence (*f**k*) converges uniformly to a Lipschitz continuous function *f* faster than $\tfrac{1}{\sqrt{k}}$. Then, $$\label{eq:double\_int} \lim\_{k\to\infty} \frac{1}{2}\iint\_{\M^2} k^{1-d} |B\_{k\phi+\phi\_k}(x,y)|^2(f\_k(x)-f\_k(y))^2\d\mu^{\otimes 2}(x,y) = \Vert \d f\Vert\_{\ddc \phi}^2.$$ We follow closely the proof of Theorem 5.8 in, because most of the arguments still apply. Let *d* be the distance on $\M$ induced by any continous metric. For fixed *k* ≥ 1 and *R* > 0, we split the integral in into three parts *A**k*, *R*, *B**k*, *R*, *C**k*, *R*, corresponding to integrating respectively over *d*(*x*, *y*) ≥ 1, $k^{-\frac12}R\leq d(x,y)\leq 1$ and $0\leq d(x,y)\leq k^{-\frac12}R$. The idea is to let *k* → ∞ then *R* → ∞. The first two contributions vanish in the large *k* limit thanks to the off-diagonal decay of the Bergman kernel (Theorem [thm:offdiag]); we will then focus on the third one. The key point is to prove that for *x* in the bulk and *z* ∈ C*d* a normal coordinate, as well as normal trivializations of *L* and *F*, all centered at *x*, $$\label{eq:double\_cv} \sup\_{|z|<R}\left\vert \sqrt{k}\left(f\_k\left(\frac{z}{\sqrt{k}}\right)-f\_k(0)\right)-\nabla f(0)\cdot z\right\vert \longrightarrow 0.$$ We have, for any ∣*z*∣ < *R*, $$\begin{aligned} \sqrt{k}\left(f\_k\left(\frac{z}{\sqrt{k}}\right)-f\_k(0)\right)-\nabla f(0)\cdot z = & \sqrt{k}\left(f\_k\left(\frac{z}{\sqrt{k}}\right)-f\left(\frac{z}{\sqrt{k}}\right)\right) - \sqrt{k}(f\_k(0)-f\_k(0)) \\ & + \sqrt{k}\left(f\left(\frac{z}{\sqrt{k}}\right)-f(0)\right)-\nabla f(0)\cdot z\end{aligned}$$ First, as *f* is differentiable at 0, we have $$\left\vert \sqrt{k}\left(f\left(\frac{z}{\sqrt{k}}\right)-f(0)\right)-\nabla f(0)\cdot z\right\vert = |z|\varepsilon\left(\frac{1}{\sqrt{k}}\right){\mathrel{\mathop{\kern 0pt\longrightarrow}\limits\_{k\to\infty}^{}}}0.$$ Then, using the uniform convergence faster than $\tfrac{1}{\sqrt{k}}$, we get that for all *ɛ* > 0, there exists *k*0 such that for all *k* ≥ *k*0, for all $z\in \C^d$, $$\sqrt{k}\vert f\_k(z)-f(z)\vert \leq \varepsilon.$$ It is in particular true if *z* = 0, and if one replaces *z* by $\frac{z}{\sqrt{k}}$ because of the uniformity. We conclude by the triangle inequality that is satisfied. Once we get this estimate, the rest of the proof is identical to the proof of Theorem [thm:fluctuationsvariance0]. Proof of the main results ========================= We are now in position to prove the main results of this paper. [Proof of Theorem [thm:MCmain]] We want to prove that if (*X*1, …, *X**N**k*) is distributed according P*k**ϕ* + *ψ*, $$\sqrt{N\_k^{1+\frac1d}}\left(\sum\_i\frac{1}{N\_k}f\_k(X\_i)-\int\_\M f(x)\d\mu(x)\right) \Rightarrow \mathcal{N}(0,\sigma^2).$$ It is equivalent to consider the convergence of the rescaled random variables $$\label{eq:cv\_shift} \Xi\_k = N\_k^{\frac{1-d}{2d}}\left(\sum\_i f\_k(X\_i)-N\_k\int f(x)\d\mu(x)\right).$$ We set $u\_k:x\mapsto N\_k^{\frac{1-d}{2d}}\left[f\_k(x)-\int f(y)\d\mu(y)\right]$, so that Ξ*k* = ∑*i**u**k*(*X**i*) is a linear statistic of the point process. We set, for any $t\in\C$, $$\begin{aligned} F\_k(t) &= -\log\E\_{k\phi+\psi+tu\_{k,\phi,d}}\left[\e^{-t\sum\_i u\_{k,\phi,d}(X\_i)}\right]\\ &= -\log\E\_{k\phi+\psi+tu\_{k,\phi,d}}\left[e^{tN\_k^{\frac{1-d}{2d}}\sum\_i f\_k(X\_i)}\right]+ N\_k^{\frac{1-d}{2d}}N\_k\int\_\M f(x)\d \mu(x).\end{aligned}$$ It clearly defines a holomorphic function on $\C$, and it is uniformly bounded on any compact subset of $\C$. Our goal is to demonstrate that *F**k*(*i**ξ*) converges to the Fourier transform of the right Gaussian, for all $\xi\in\R$; the proof will be decomposed into three steps. **Step 1: convergence on $\R$.** Let *t* ∈ R. According to Proposition [prop:expvar], we have $$\begin{aligned} \frac{\d}{\d t} F\_k(t) &= \E\_{k\phi+\psi+tu\_k}\left[ \Xi\_k\right] = N\_k^{\frac{1-d}{2d}}\E\_{k\phi+\psi+tu\_k}\left[\sum\_i f\_k(X\_i)\right]-N\_k^{\frac{1-d}{2d}}N\_k\int\_\M f(x)\d\mu(x),\end{aligned}$$ and in particular it vanishes at *t* = 0 because $$\E\_{k\phi+\psi}\left[\sum\_i f\_k(X\_i)\right] = N\_k\E\_{k\phi}\left[\sum\_i \frac{f(X\_i)}{B\_{(k\phi,\mu)}(X\_i,X\_i)}\right] = N\_k\int\_\M f(x)\d\mu(x).$$ We also have $$\begin{aligned} \frac{\d^2 F\_k(t)}{\d t^2} &= \Var\_{k\phi+\psi+tu\_k}\left[ \Xi\_k\right]\\ &= N\_k^{\frac{1-d}{d}}\frac12\iint\_{\M^2} (f\_k(x)-f\_k(y))^2\vert B\_{(k\phi+\psi+tu\_k,\mu)}(x,y)\vert\_{k\phi+\psi+tu\_k}^2\d\mu^{\otimes 2}(x,y)\end{aligned}$$ On the one hand, we know by Proposition [cor:convergencefk] that the sequence (*f**k*) converges uniformly to $f\_\eq^\phi$, which is Lipschitz continuous. On the other hand, we also have the uniform convergence of (*u**k*). If *d* = 1, then $N\_k^{\frac{1-d}{d}}=1$ and the uniform limit is $f\_\eq^\phi$, otherwise the uniform limit is 0. In any case, we can apply Theorem [thm:fluctuationsvariance] and get $$\lim\_{k\to\infty}\frac{\d^2 F\_k(t)}{\d t^2} = \Vert \d f\_\eq^\phi\Vert\_{\ddc\phi}^2,\ \forall t\geq 0.$$ Now, we can rewrite *F**k* as $$F\_k(t) = F\_k(0) + \int\_0^t \frac{\d F\_k(s)}{\d t} \d s = \int\_0^t\left( \frac{\d F\_k(0)}{\d t}+\int\_0^s \frac{\d^2 F\_k(u)}{\d t^2}\d u\right) \d s.$$ Since $F\_k(0)=\frac{\d F\_k(0)}{\d t}=0$, this simplifies to $$F\_k(t)= \int\_0^t \int\_0^s \frac{\d^2 F\_k(u)}{\d t^2} \d u \d s,$$ and dominated convergence (induced by the uniform convergence and integration on compact sets) yields $$\lim\_{k\to\infty} F\_k(t) = -\int\_0^t\int\_0^s\Vert \d f\_\eq^\phi\Vert^2\d u\d s = -\frac{t^2}{2}\Vert \d f\_\eq\Vert^2.$$ **Step 2: analytic continuation.** As stated earlier in the proof, we know that (*F**k*) is locally uniformly bounded, so that Montel’s theorem states that the family (*F**k*)*k* ≥ 0 is normal. Thus, from any subsequence of the family (*F**k*) we can extract a subsubsequence that converges uniformly to some holomorphic function *F*∞ on any compact of $\C$; it is in particular the case for the sequence (*F**k*). At that point, *F*∞ might depend on the subsequence; however we know that all these limits coincide on $\R$, according to Step 1. From the analytic extension Theorem, we obtain that these limits also coincide on $\C$, hence we have the uniform convergence *F**k* → *F*∞ on all compacts subsets of $\C$, where $F\_\infty(t)=-\frac{t^2}{2}\Vert \d f\_\eq\Vert^2$. **Step 3: restriction to $i\R$.** If we restrict the previous convergence to the imaginary line $i\R$, we obtain $$\lim\_{k\to\infty}\E\_{k\phi+\psi+tu\_k}\left[\e^{i\xi \sum\_i u\_k(X\_i)}\right] = \exp\left(-\frac{\xi^2}{2}\Vert \d f\_\eq\Vert^2\right),$$ and this holds for all compact subsets of $i\R$. We recognize, in the right-hand side, the characteristic function of a Gaussian distribution, and the convergence follows. [Proof of Corollary [cor:universality]] Let (*X*1, …, *X**N*) be a DPP with kernel *B*(*k**ϕ* + *ψ*, *μ*). Its distribution is given by $$\frac{1}{N\_k!}\vert\det(s\_i(x\_j))\vert\_{k\phi+\psi}^2 \d\mu^{\otimes N\_k}(x\_1,\ldots,x\_{N\_k}) = \frac{1}{N\_k!}\vert\det(s\_i(x\_j))\vert\_{k\phi}^2 e^{-2\sum\_i\psi(x\_i)}\d\mu^{\otimes N\_k}(x\_1,\ldots,x\_{N\_k}).$$ Here, (*s**i*) is an orthonormal basis of *H*0(*M*, *L**k*) for the inner product ⟨ ⋅ ,  ⋅ ⟩(*k**ϕ* + *ψ*, *μ*), which is equal to the inner product ⟨ ⋅ ,  ⋅ ⟩(*k**ϕ*, *e*− 2*ψ**μ*): for any sections *s*1, *s*2, $$\begin{aligned} \langle s\_1, s\_2\rangle\_{(k\phi+\psi,\mu)} & = \int\_\M \langle s\_1(x),s\_2(x)\rangle\_{k\phi+\psi}\d\mu(x)\\ & = \int\_\M \langle s\_1(x),s\_2(x)\rangle\_{k\phi}e^{-2\psi(x)}\d\mu(x)\\ & = \langle s\_1,s\_2\rangle\_{(k\phi,e^{-2\psi}\mu)}.\end{aligned}$$ Let $V:\M\to\R$ be a fixed continuous local weight. The result follows then from a direct application of Theorem [thm:MCmain] for the weighted measure (*k**ϕ*, *e*− 2*ψ**μ*) where $\psi:\M\to\R$ is defined by $\psi(x)=\frac{1}{2}V(x)$. Application to the Riemann sphere ================================= We shall illustrate our result to the simplest possible model, where computations can be made explicit and simulations are affordable. Complex structure and Bergman kernel ------------------------------------ Consider the unit sphere $\Sbb^2\subset\R^3$; we will simultaneously see it as a submanifold of $\R^3$ and as a complex manifold of dimension 1. As a submanifold, it is defined by $\Sbb^2=F^{-1}(0)$, where $F:\R^3\to\R,(x,y,z)\mapsto x^2+y^2+z^2-1$. It can be endowed with the atlas (*U*0, *U*1) such that *U*0 is the sphere without the North pole (0, 0, 1) and *U*1 the sphere without the South pole (0, 0,  − 1). The corresponding charts are given by stereographic projections: $$\varphi\_0:\left\lbrace\begin{array}{ccc} S^2\setminus\{(0,0,1)\} & \longrightarrow & \C\\ (x,y,z) & \longmapsto & \frac{x+iy}{1-z} \end{array}\right.,\ \varphi\_1:\left\lbrace\begin{array}{ccc} S^2\setminus\{(0,0,-1)\} & \longrightarrow & \C\\ (x,y,z) & \longmapsto & \frac{x+iy}{1+z} \end{array}\right..$$ We denote by *ζ* the local complex coordinate given by those charts. Note that *φ*0 (resp. *φ*1) is centered in the South pole (resp. the North pole). We will usually stick to *U*0 but everything is similar in *U*1. If we take *Z* ∈ *φ*0(*U*0 ∩ *U*1), then $$\varphi\_1\circ\varphi\_0(\zeta)=\frac{\zeta}{|\zeta|^2},$$ which is holomorphic on $\C^\*=\varphi\_0(U\_0\cap U\_1)$, and it is an involution, hence a biholomorphism. It follows that $\Sbb^2$ is a complex manifold of dimension 1, *i.e.* a Riemann surface. Note that, as *F*− 1(0), it is a closed subset of $\R^3$, and it is obviously bounded, therefore it is compact. We endow this manifold with the following volume form in the local chart *U*0: $$\omega(\zeta) = \frac{\i\d\zeta\wedge \d\bar{\zeta}}{2\pi(1+|\zeta|^2)^2}.$$ A quick computation shows that $$\int\_\C \omega = 1,$$ therefore it corresponds to a probability measure that we will denote by $\d\vol\_{\Sbb^2}$. We will denote by $\d m(\zeta)=\frac{\i}{2}\d\zeta\wedge \d\bar{\zeta}$ the Lebesgue measure on $\C$, so that $$\d\vol\_{\Sbb^2}(\zeta)=\frac{\d m(\zeta)}{\pi(1+|\zeta|^2)^{2}}.$$ Let us consider the line bundle[7](#fn7) *L* defined as follows: for any point $P=(x,y,z)\in \Sbb^2$, the fiber *L**P* is the line in $\R^3$ generated by *P*. We have the open covering (*U*0, *U*1) of *S*2, and the associated trivialization functions are *ψ*0 : (*P*, *λ**P*) ∈ *π*− 1(*U*0) ↦ (*P*, *λ*) and *ψ*1 : (*P*, *λ**P*) ∈ *π*− 1(*U*1) ↦ (*P*, *λ*). The transition function *γ*10 is then the identity. Let us restrict again to *U*0: a section of *L**U*0 is a function $f:U\_0\to\C$ that maps *P* to *λ* = *f*(*P*), that is, a choice of a coordinate *λ* in the complex line $L\_P=\C P$. The restriction to *U*0 of a holomorphic section of *L* is then a holomorphic function $f:U\_0\to\C$, that we can identify to a holomorphic function $f\_0:\C\to\C$ *via* the stereographic projection *f*0 = *f* ∘ *φ*0− 1. Let us endow *L**k* with the metric *h**k* given in the local coordinate *ζ* on *U*0 by $$h^k(s\_1(\zeta),s\_2(\zeta))=f\_1(\zeta)\overline{f\_2}(\zeta)\e^{-k\phi(\zeta)},$$ with a weight *ϕ*(*ζ*) = log(1 + ∣*ζ*∣2) that has positive curvature: $$\ddc\phi(\zeta) = \frac{\i}{2\pi}\ddbar\phi(\zeta) = \omega(\zeta).$$ The inner product $\langle\cdot,\cdot\rangle:=\langle\cdot,\cdot\rangle\_{(k\phi,d\vol\_{\Sbb^2})}$ is then given by $$\langle s\_1,s\_2\rangle = \int\_\C f\_1(\zeta)\overline{f\_2}(\zeta)\frac{\d m(\zeta)}{\pi(1+|\zeta|^2)^{2+k}}.$$ An orthonormal basis of $H^0(\Sbb^2,L^k),\langle\cdot,\cdot\rangle\_{(k\phi,\d\vol\_{\Sbb^2})}$ is given by (*s*ℓ)0 ≤ ℓ ≤ *k*, where the sections *s*ℓ are the spherical harmonics, given in complex stereographic coordinates in *U*0 by $$s\_\ell(\zeta)=\sqrt{k+1}\sqrt{\binom{k}{\ell}}\zeta^\ell,\ \forall 0\leq\ell\leq k.$$ In particular, we find that $H^0(\Sbb^2,L^k)$ has dimension *N**k* = *k* + 1. The Bergman kernel can be written in local coordinates $$B\_{(k\phi,d\vol\_{\Sbb^2})}(\zeta,\xi)=(k+1)\e^{-\frac{k}{2}(\phi(\zeta)+\phi(\xi))}\sum\_{\ell=0}^k\binom{k}{\ell}(\zeta\overline{\xi})^\ell=(k+1) \frac{(1+\zeta\overline{\xi})^k}{(1+\vert \zeta\vert^2)^{\frac{k}{2}}(1+\vert \xi\vert^2)^{\frac{k}{2}}}.$$ In particular, the Bergman kernel is constant on the diagonal: $$B\_{(k\phi,d\vol\_{\Sbb^2})}(\zeta,\zeta) = k+1,\ \forall \zeta\in\C.$$ The spherical ensemble ---------------------- Now we can describe the DPP whose kernel is *B**k*: it is the point process associated to the (*k* + 1)-tuple of random variables on $\Sbb^2$ whose joint distribution is given, in the stereographic coordinate *ζ* on *U*0, by $$\frac{1}{(k+1)!}\vert\det(s\_{i-1}(\zeta\_j))\_{1\leq i,j\leq k+1}\vert^2\prod\_{i=1}^{k+1}\frac{\d m(\zeta\_i)}{(1+|\zeta\_i|^2)^{k+2}}.$$ proved that the *spherical ensemble*, defined as the distribution of the eigenvalues of *A**B*− 1 where *A* and *B* are independent standard Gaussian complex matrices, has joint distribution $$\frac{1}{Z\_N}\prod\_{i<j} |\zeta\_i-\zeta\_j|^2\prod\_{i=1}^{N}\frac{dm(\zeta\_i)}{(1+|\zeta\_i|^2)^{N+1}}.$$ We see that in the case of *N* = *N**k* = *k* + 1, we obtain the DPP on the sphere associated to the normalized volume form $\d\vol\_{\Sbb^2}$ and the metric *h* with Kähler potential *ϕ*(*ζ*) = log(1 + ∣*ζ*∣2), which has a curvature everywhere positive (in particular, it means that the bulk is the whole sphere). If we denote by (*X*1, …, *X**k* + 1) the spherical ensemble, Theorem [thm:MCmain] states that for any Lipschitz function $f:\Sbb^2\to\R$, the integral $$\int\_{\Sbb^2} f(x)\d\vol\_{\Sbb^2}(x) = \int\_\C f\circ\varphi\_0^{-1}(\zeta) \frac{\d m(\zeta)}{\pi(1+|\zeta|^2)^2}$$ can be approximated by $$\sum\_{i=1}^{k+1} \frac{f\circ\varphi\_0^{-1}(X\_i)}{B\_k(X\_i,X\_i)}.$$ Numerical experiments --------------------- We shall now proceed to a comparison of our Monte Carlo method, which reduces to using the empirical measure of the spherical ensemble as a Monte Carlo estimator, with a few other estimators. We consider a standard Monte Carlo estimator with a i.i.d. uniform sample, a Monte Carlo estimator based on a DPP in [ − 1, 1]2 mapped to the sphere, and a randomized Quasi Monte Carlo estimator. The latter two are now introduced in more detail, before showing the experimental results. ### Legendre DPP The *Jacobi measure* of parameters *α*1, *β*1, …, *α**d*, *β**d* >  − 1 is the measure on ( − 1, 1)*d* given by $$\d\mu\_{\alpha,\beta}(x\_1,\ldots,x\_d) = \prod\_{j=1}^d(1-x\_j)^{\alpha\_j}(1+x\_j)^{\beta\_j}\d x\_j.$$ The corresponding orthonormal polynomials are the so-called multivariate *Jacobi polynomials*; see e.g.. It has been shown by that, using a suitable ordering (*p**k*) of these multivariate Jacobi polynomials, the projection determinantal point process with kernel ∑*k* = 1*N**p**k*(*x*)*p**k*(*y*) satisfies the assumptions of Theorem [thm:BH]. An interesting fact is that the integration on $\Sbb^2$ with respect to the uniform measure boils down to an integration on ( − 1, 1)2 with respect to the uniform measure, which is actually the Jacobi measure of parameters (0, 0), as explained in the following proposition. [prop:Jacobiint] For any $f:\Sbb^2\to\R$ measurable and bounded, $$\int\_{\Sbb^2} f(x,y,z)\d\vol\_{\Sbb^2}(x,y,z) = \frac{1}{4}\int\_{(-1,1)^2}f\circ\Phi(x,y) \d x\d y,$$ where $\Phi:[-1,1]^2\to\Sbb^2$ is the function defined by $$\Phi(x,y) = (\sqrt{1-x^2}\cos(\pi(y+1)),\sqrt{1-x^2}\sin(\pi(y+1)),x).$$ Let $\Phi\_1:[0,2\pi]\times[0,\pi]\to\Sbb^2$ be the change of variable from spherical to Cartesian coordinates in the sphere, namely Φ1(*θ*, *ϕ*) = (cos*θ*sin*ϕ*, sin*θ*sin*ϕ*, cos*ϕ*). It is a diffeomorphism from (0, 2*π*) × (0, *π*) onto its image, whose complement is negligible in $\Sbb^2$ with respect to $\d\vol\_{\Sbb^2}$. Moreover, $$\Phi\_1^\*\d\vol\_{\Sbb^2}(\theta,\phi) = \frac{1}{4\pi}\sin\phi\d\theta\d\phi.$$ Hence, for any bounded measurable function $f:\Sbb^2\to\R$, $$\begin{aligned} \int\_{\Sbb^2}f(x,y,z)\d\vol\_{\Sbb^2}(x,y,z) & = \int\_{\Phi\_1((0,2\pi)\times(0,\pi)} f(x,y,z)\d\vol\_{\Sbb^2}(x,y,z)\\ & = \int\_{(0,2\pi)\times(0,\pi)} f\circ\Phi\_1(\theta,\phi)\sin\phi\frac{\d\theta\d\phi}{4\pi}.\end{aligned}$$ We also introduce the diffeomorphism $$\Phi\_2:\left\lbrace\begin{array}{ccc} (0,2\pi)\times(0,\pi) & \to & (-1,1)^2\\ (\theta,\phi) & \mapsto & (\cos\phi, \frac{\theta}{\pi}-1). \end{array}\right.$$ The result follows from the fact that Φ = Φ1 ∘ Φ2− 1. We conclude with a remark that in our case, where *d* = 2 and *α* = *β* = 0, the multivariate Jacobi polynomials specialize to the Legendre polynomials. To sample the corresponding DPP, we use the classical algorithm by, in the specific implementation of the Python library DPPy for multivariate Jacobi ensembles. ### Randomized spiral points Following or, given a fixed parameter *C* > 0 and a fixed sample size *N*, the *generalized spiral points* are the points of the sphere with spherical coordinates (*θ**i*, *ϕ**i*)1 ≤ *i* ≤ *N* defined by an iterative procedure: for any 1 ≤ *i* ≤ *N*,  set $z\_i=1-\frac{2i-1}{N}$ and $$\theta\_i = \arccos z\_i, \ \phi\_i = C\sqrt{N}\theta\_i.$$ It provides a deterministic low-discrepancy family {(*x**i*, *y**i*, *z**i*), 1 ≤ *i* ≤ *N*} of points of $\Sbb^2$, which can be randomized through a random (uniform) rotation $R\in\SO(3)$. Although the QMC method using spiral points was studied in the aforementioned papers, we are unaware of any theoretical estimation of the variance of the corresponding randomized QMC. Yet we expect it to be competitive in our low-dimensional setting. ### Results In Figure [fig:samples], we display samples of all the models we consider. Note that all spiral points of the sample are randomized through the same rotation, which makes them look like usual spiral points. In the case of a Jacobi ensemble, we take a Jacobi ensemble of parameters (0, 0) (or equivalently, a Legendre ensemble) on [ − 1, 1]2 mapped onto the sphere through the diffeomorphism Φ introduced in Proposition [prop:Jacobiint]. It corresponds to the method of. As expected, we see that a cluster appears in the image of the boundary of the square [ − 1, 1]2. [fig:samples] Figure [fig:integrals] displays the logarithm of each sample variance as a function of log*N*, across 200 independent repetitions for each model and each *N*. Keeping in mind that the variance should be proportional to *N**α* for various values of *α* depending on the model, all plots are supposed to be linear. [fig:integrals] We consider two functions $f\_1,f\_2:\R^3\to\R$ that we restrict to $\Sbb^2$, and we choose them with support in $$\Sbb\_+^2=\{(x,y,z)\in\Sbb^2: z\geq 0\}$$ in order to avoid numerical errors that could happen in stereographic coordinates for points which are close to the South pole (corresponding to the point at infinity). We take *f*1(*x*, *y*, *z*) = *z*2**1***z* ≥ 0,  *f*2(*x*, *y*, *z*) = |*x*|3/2*y**z***1***z* ≥ 0. Both functions are C1 on $\Sbb\_+^2$, but *f*1 is actually smooth. Besides, *f*2 is supported in the image of the open square ( − 1, 1)2 and satisfies therefore the assumptions of, whereas *f*1 is nonzero on the image of the boundary of the square. Both functions also naturally satisfy the assumptions of the CLT for the i.i.d. Monte Carlo method. In both cases, the variances of the estimators have the same rankings, and the slopes are quite close to their theoretical values. It is interesting to see that for *f*2, on low values of *N*, the estimator for the Jacobi ensemble has a slightly lower variance than the spherical ensemble, although it decays more slowly when *N* grows. It does not happen for *f*1, which is not surprising because it is an edge case for the method by. We also remark that the randomized spiral points seem to provide an overall better performing estimator than all other methods, although there is no theoretical result to support that, with a similar slope to the spherical ensemble. Conclusion and perspectives =========================== Building on Berman’s seminal work that led to the central limit theorem in,
arxiv_0000247
3/ASY-1994-9301>. P. Azérad, F. Guillén; Mathematical justification of the hydrostatic approximation in the primitive equations of geophysical fluid dynamics, SIAM J. Math. Anal. 33(4) (2001), pp. 847–859, <https://doi.org/10.1137/S0036141000375962>. G. Bayada, M. Chambat; The Transition Between the Stokes Equations and the Reynolds Equation: A Mathematical Proof, Appl. Math. Optim. 14 (1986), pp. 73–93, <https://doi.org/10.1007/BF01442229>. G. Bayada, M. Chambat, I. Ciuperca; Asymptotic Navier–Stokes equations in a thin moving boundary domain, Asymptotic Analysis 21(2) (1999), pp. 117–132, <https://content.iospress.com/articles/asymptotic-analysis/asy362>. A. Bermúdez, J. M. Viaño; Une justification des équations de la thermoélasticité des poutres à section variable par des méthodes asymptotiques, RAIRO Analyse Numérique 18(4) (1984), pp. 347–376, <https://doi.org/10.1051/m2an/1984180403471>. O. Besson, M. R. Laydi, Some estimates for the anisotropic Navier-Stokes equations and for the hydrostatic approximation. ESAIM: Mathematical Modelling and Numerical Analysis - Modélisation Mathématique et Analyse Numérique 26(7) (1992), pp. 855–865, <http://www.numdam.org/item/M2AN_1992__26_7_855_0/>. D. Bresch, J. Lemoine, J. Simon, Écoulement engendré par le vent et la force de Coriolis dans un domaine mince: I Cas stationnaire, Comptes Rendus de l’Académie des Sciences - Series I - Mathematics 325(7) (1997), pp. 807–812, <https://doi.org/10.1016/S0764-4442(97)80064-X>. D. Bresch, J. Lemoine, J. Simon, Écoulement engendré par le vent et la force de Coriolis dans un domaine mince: II cas d’évolution, Comptes Rendus de l’Académie des Sciences - Series I - Mathematics 327(3) (1998), pp. 329–334, <https://doi.org/10.1016/S0764-4442(98)80155-9>. D. Bresch, P. Noble; Mathematical justification of a shallow water model, Methods and Applications of Analysis 14(2) (2007), pp. 87–118, <https://dx.doi.org/10.4310/MAA.2007.v14.n2.a1>. G. Castiñeira, J. M. Rodríguez; Asymptotic Analysis of a Viscous Fluid in a Curved Pipe with Elastic Walls, F. Ortegón Gallego, M. Redondo Neble, J. Rodríguez Galván (eds), Trends in Differential Equations and Applications, SEMA SIMAI Springer Series 8, Springer, Cham. (2016), pp. 73–87, <https://doi.org/10.1007/978-3-319-32013-7_5>. G. Castiñeira, E. Marušić-Paloka, I. Pažanin, J. M. Rodríguez; Rigorous justification of the asymptotic model describing a curved-pipe flow in a time-dependent domain, Z Angew Math Mech. 99(1) (2019), 99:e201800154, <https://doi.org/10.1002/zamm.201800154>. J.P. Chaomleffel, Influece des forces d’inertie en lubrification hybride, Thèse Mécanique, INSA, Lyon, 1983, <http://www.sudoc.fr/043093701>. G. Chipot, M. Luskin; Existence and Uniqueness of Solutions to the Compressible Reynolds Lubrication Equation, Siam J. Math. Anal. 17(6) (1986), pp. 1390–1399, <https://doi.org/10.1137/0517098>. M. Chipot; On the Reynolds lubrication equation, Nonlinear Analysis: Theory, Methods & Applications, 12(7) (1988), pp. 699–718, <https://doi.org/10.1016/0362-546X(88)90023-5>. P. G. Ciarlet; A justification of the von Kármán equations, Arch. Rational Mech. Anal. 73 (1980), pp. 349–389, <https://doi.org/10.1007/BF00247674>. P. G. Ciarlet, P. Destuynder; A justification of the two dimensional linear plate model, J. Mec. 18 (1979), pp. 315–344, <https://zbmath.org/0415.73072>. P. G. Ciarlet, P. Destuynder; A justification of a nonlinear model in plate theory, Comp. Methods Appl. Mech. Engrg. 17-18 (1979), pp. 227–258, <https://doi.org/10.1016/0045-7825(79)90089-6>. G. Cimatti; How the Reynolds equation is related to the Stokes equations, Appl. Math. Optim. 10 (1983), pp. 267–274, <https://doi.org/10.1007/BF01448389>. G. Cimatti; Existence and uniqueness for non linear Reynolds equation, International Journal of Engineering Science 24(5) (1986), pp. 827–834, <https://doi.org/10.1016/0020-7225(86)90116-3>. G. Cimatti; A rigorous justification of the Reynolds equation, Quart. Appl. Math. 45 (1987), pp. 627–644, <https://doi.org/10.1090/qam/917014>. W. R. Dean; Note on the motion of fluid in a curved pipe, The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science 4(20) (1927), pp. 208–223, <https://doi.org/10.1080/14786440708564324>. W. R. Dean; The stream-line motion of fluid in a curved pipe (second paper), The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science 5(30) (1928), pp. 673–695, <https://doi.org/10.1080/14786440408564513>. A. Decoene, L. Bonaventura, E. Miglio, F. Saleri; Asymptotic derivation of the section-averaged shallow water equations for natural river hydraulics, Mathematical Models and Methods in Applied Sciences 19(3) (2009), pp. 387–417, <https://doi.org/10.1142/S0218202509003474>. H. Dridi; Comportement asymptotique des équations de Navier-Stokes dans des domaines “aplatis”, Bull. Sc. Math. 106 (1982), pp. 369–385, <https://zbmath.org/?q=an:0512.35015>. H. G. Elrod; A derivation of the basic equations for hydrodynamic lubrication with a fluid having constant properties, Quart. Appl. Math. 17 (1960), pp. 349–359, <https://doi.org/10.1090/qam/109552>. S. Ferrari, F. Saleri; A new two-dimensional shallow water model including pressure effects and slow varying bottom topography, ESAIM: M2AN 38(2) (2004), pp. 211–234, <https://doi.org/10.1051/m2an:2004010>. B. Fantino, J. Frene, M. Godet; Conditions d’utilisation de l’équation de Reynolds en mécanique des films minces, C. R. Acad. Sci. Paris 272A (1971), pp. 691–693, <https://gallica.bnf.fr/ark:/12148/bpt6k480300n>. K. O. Friedrichs, R. F. Dressler; A boundary-layer theory for elastic plates, Comm. Pure Appl. Math. 14 (1961), pp. 1–33, <https://doi.org/10.1002/cpa.3160140102>. J.-F. Gerbeau, B. Perthame; Derivation of viscous Saint-Venant system for laminar shallow water; numerical validation, Discrete & Contin. Dyn. Syst.- Series B 1(1) (2001), pp. 89–102, <http://dx.doi.org/10.3934/dcdsb.2001.1.89>. A. L. Goldenveizer; Derivation of an approximated theory of bending of a plate by the method of asymptotic integration of the equations of the theory of elasticity, Prikl. Mat. Mekh. 26(4) (1962), pp. 668–686, <https://doi.org/10.1016/0021-8928(62)90161-2>. E. Grenier; On the derivation of homogeneous hydrostatic equations, ESAIM: Mathematical Modelling and Numerical Analysis - Modélisation Mathématique et Analyse Numérique 33(5) (1999), pp. 965–970, <http://www.numdam.org/item/M2AN_1999__33_5_965_0/>. Changbing Hu; Asymptotic analysis of the primitive equations under the small depth assumption, Nonlinear Analysis: Theory, Methods & Applications 61(3) (2005), pp. 425–460, <https://doi.org/10.1016/j.na.2004.12.005>. F. Marche; Derivation of a new two-dimensional viscous shallow water model with varying topography, bottom friction and capillary effects, European Journal of Mechanics B/Fluids 26(1) (2007), pp. 49–63, <https://doi.org/10.1016/j.euromechflu.2006.04.007>. E. Marušić-Paloka; The effects of flexion and torsion on a fluid flow through a curved pipe, Appl. Math. Optim. 44(3) (2001), pp. 245–272, <https://doi.org/10.1007/s00245-001-0021-y>. E. Marušić-Paloka, I. Pažanin; Fluid flow through a helical pipe, Z. Angew. Math. Phys. 58(1) (2007), pp. 81–99, <https://doi.org/10.1007/s00033-006-0073-6>. I. Moise, R. Temam, M. Ziane; Asymptotic analysis of the Navier-Stokes equations in thin domains, Topol. Methods Nonlinear Anal. 10(2) (1997), pp. 249–282, <https://projecteuclid.org/euclid.tmna/1476842206>. S. A. Nazarov; Asymptotic solution of the Navier-Stokes problem on the flow of a thin layer of fluid, Sib Math J 31(2) (1990), pp. 296–307, <https://doi.org/10.1007/BF00970660>. J. T. Oden, S. R. Wu; Existence of solutions to the Reynolds’ equation of elastohydrodynamic lubrication, International Journal of Engineering Science 23(2) (1985), pp. 207–215, <https://doi.org/10.1016/0020-7225(85)90075-8>. P. Orenga; Un théorème d’existence de solutions d’un problème de shallow water, Arch. Rational Mech. Anal. 130 (1995), pp. 183–204, <https://doi.org/10.1007/BF00375155>. G. Panasenko, R. Stavre; Asymptotic analysis of the stokes flow in a cylindrical elastic tube, Appl. Anal. 91(11) (2012), pp. 1999–2027, <https://doi.org/10.1080/00036811.2011.584187>. G. Panasenko, K. Pileckas; Asymptotic analysis of the non-steady Navier-Stokes equations in a tube structure. I. The case without boundary layer-in-time, Nonlinear Analysis: Theory, Methods & Applications 122 (2015), pp. 125–168, <https://doi.org/10.1016/j.na.2015.03.008>. G. Panasenko, K. Pileckas; Asymptotic analysis of the non-steady Navier-Stokes equations in a tube structure. II. General case, Nonlinear Analysis 125 (2015), pp. 582–607, <https://doi.org/10.1016/j.na.2015.05.018>. A. Rigolot; Sur une théorie asymptotique des poutres, J. Mécanique 11(4) (1972), pp. 673–703, <https://zbmath.org/?q=an:0257.73013>. O. Reynolds; On the theory of lubrication and its application to Mr Beauchamp tower’s experiments, Phil. Trans. Roy Soc. London 117 (1886), pp. 157–234, <https://www.jstor.org/stable/109480>. J. M. Rodríguez, R. Taboada-Vázquez; From Navier-Stokes equations to Shallow Waters with viscosity by asymptotic analysis, Asymptotic Analysis 43(4) (2005), pp. 267–285, <https://content.iospress.com/articles/asymptotic-analysis/asy691>. J. M. Rodríguez, R. Taboada-Vázquez; From Euler and Navier-Stokes Equations to Shallow Waters by Asymptotic Analysis, Advances in Engineering Software 38(6) (2007), pp. 399-409, <https://doi.org/10.1016/j.advengsoft.2006.09.011>. J. M. Rodríguez, R. Taboada-Vázquez; A new shallow water model with polynomial dependence on depth, Mathematical Methods in the Applied Sciences 31(5) (2008), pp. 529–549, <https://doi.org/10.1002/mma.924>. J. M. Rodríguez, R. Taboada-Vázquez; A new shallow water model with linear dependence on depth, Mathematical and Computer Modelling 48(3-4) (2008), pp. 634–655, <https://doi.org/10.1016/j.mcm.2007.11.002>. J. M. Rodríguez, R. Taboada-Vázquez; Bidimensional shallow water model with polynomial dependence on depth through vorticity, Journal of Mathematical Analysis and Applications 359(2) (2009), pp. 556-569, <https://doi.org/10.1016/j.jmaa.2009.06.003>. J. M. Rodríguez, R. Taboada-Vázquez; Derivation of a new asymptotic viscous shallow water model with dependence on depth, Applied Mathematics and Computation 219(7) (2012), pp. 3292-3307, <https://doi.org/10.1016/j.amc.2011.08.053>. A. J. C. de Saint-Venant; Théorie du mouvement non-permanent des eaux, avec application aux crues des rivières et á l’introduction des marées dans leur lit, C. R. Acad. Sci. Paris 73 (1871), pp. 147–154, <https://gallica.bnf.fr/ark:/12148/bpt6k3030d>. J. J. Stoker; Differential Geometry, in Wiley Classics Library, Wiley, 1989, <https://www.wiley.com/en-es/Differential+Geometry-p-9780471504030>. L. Sundbye; Global Existence for the Cauchy Problem for the Viscous Shallow Water Equations, Rocky Mountain J. Math. 28(3) (1998), pp. 1135–1152, <https:doi.org/10.1216/rmjm/1181071760>. Z. Tutek, I. Aganovič, J.- C. Nedelec; A justification of the one-dimensional model of an elastic beam, Math. Methods in Applied Sci. 8 (1986), pp. 502–515, <https://doi.org/10.1002/mma.1670080133>. R. K. Zeytounian; Modélisation asymptotique en mécanique des fluides newtoniens, Springer-Verlag, 1994, <https://www.springer.com/gp/book/9783540578383>. Change of variable ================== Let us consider the change of variable - between the original domain and the reference domain. Its jacobian matrix is $$\mathbf{J}^{\varepsilon}= \begin{pmatrix} \dfrac{ \partial x\_1^{\varepsilon}}{\partial \xi\_1} & \dfrac{ \partial x\_1^{\varepsilon}}{\partial \xi\_2} & \dfrac{ \partial x\_1^{\varepsilon}}{\partial \xi\_3} & \dfrac{ \partial x\_1^{\varepsilon}}{\partial t}\\ {}\\ \dfrac{ \partial x\_2^{\varepsilon}}{\partial \xi\_1} & \dfrac{ \partial x\_2^{\varepsilon}}{\partial \xi\_2} & \dfrac{ \partial x\_2^{\varepsilon}}{\partial \xi\_3} & \dfrac{ \partial x\_2^{\varepsilon}}{\partial t}\\ {}\\ \dfrac{ \partial x\_3^{\varepsilon}}{\partial \xi\_1}& \dfrac{ \partial x\_3^{\varepsilon}}{\partial \xi\_2}& \dfrac{ \partial x\_3^{\varepsilon}}{\partial \xi\_3}& \dfrac{ \partial x\_3^{\varepsilon}}{\partial t}\\ {}\\ \dfrac{ \partial t^{\varepsilon}}{\partial \xi\_1}& \dfrac{ \partial t^{\varepsilon}}{\partial \xi\_2}& \dfrac{ \partial t^{\varepsilon}}{\partial \xi\_3} & \dfrac{ \partial t^{\varepsilon}}{\partial t} \\ \end{pmatrix}$$ and it is clear from - that $$\begin{aligned} \dfrac{ \partial x\_i^{\varepsilon}}{\partial \xi\_j} &=& a\_{ji}+\varepsilon \xi\_3 \dfrac{\partial h}{\partial \xi\_j} a\_{3i} +\varepsilon \xi\_3 h\dfrac{\partial a\_{3i}}{\partial \xi\_j}, \quad (i=1,2,3; j=1,2) \label{parcial\_xiuv\_ap}\\ \dfrac{ \partial x\_i^{\varepsilon}}{\partial \xi\_3} &=& \varepsilon h a\_{3i}, \quad (i=1,2,3) \label{parcial\_xiw\_ap}\\ \dfrac{ \partial x\_i^{\varepsilon}}{\partial t} &=& \dfrac{\partial x\_i}{\partial t}+\varepsilon \xi\_3 \dfrac{\partial h}{\partial t}a\_{3i} +\varepsilon \xi\_3 h\dfrac{\partial a\_{3i}}{\partial t}, \quad (i=1,2,3) \label{parcial\_xit\_ap} \\ \dfrac{ \partial t^{\varepsilon}}{\partial \xi\_1} &=&\dfrac{ \partial t^{\varepsilon}}{\partial \xi\_2}=\dfrac{ \partial t^{\varepsilon}}{\partial \xi\_3}=0, \label{parcial\_tuvw\_ap}\\ \dfrac{ \partial t^{\varepsilon}}{\partial t} &=&1 \label{parcial\_tt\_ap} \end{aligned}$$ We can compute $$(\mathbf{J}^{\varepsilon})^{-1}= \begin{pmatrix} \dfrac{ \partial \xi\_1}{\partial x\_1^{\varepsilon}} & \dfrac{ \partial \xi\_1}{\partial x\_2^{\varepsilon}} & \dfrac{ \partial \xi\_1}{\partial x\_3^{\varepsilon}} & \dfrac{ \partial \xi\_1}{\partial t^{\varepsilon}} \\ {}\\ \dfrac{ \partial \xi\_2}{\partial x\_1^{\varepsilon}} & \dfrac{ \partial \xi\_2}{\partial x\_2^{\varepsilon}} & \dfrac{ \partial \xi\_2}{\partial x\_3^{\varepsilon}} & \dfrac{ \partial \xi\_2}{\partial t^{\varepsilon}}\\ {}\\ \dfrac{ \partial \xi\_3}{\partial x\_1^{\varepsilon}}& \dfrac{ \partial \xi\_3}{\partial x\_2^{\varepsilon}}& \dfrac{ \partial \xi\_3}{\partial x\_3^{\varepsilon}}& \dfrac{ \partial \xi\_3}{\partial t^{\varepsilon}}\\ {}\\ \dfrac{ \partial t}{\partial x\_1^{\varepsilon}}& \dfrac{ \partial t}{\partial x\_2^{\varepsilon}}& \dfrac{ \partial t}{\partial x\_3^{\varepsilon}} & \dfrac{ \partial t}{\partial t^{\varepsilon}} \\ \end{pmatrix} \label{J-1\_ap}$$ writing its components in the basis {*a⃗*1, *a⃗*2, *a⃗*3}: $$\begin{aligned} \left(\dfrac{ \partial \xi\_1}{\partial x\_1^{\varepsilon}}, \dfrac{ \partial \xi\_1}{\partial x\_2^{\varepsilon}}, \dfrac{ \partial \xi\_1}{\partial x\_3^{\varepsilon}} \right) &=& \alpha\_1 \vec{a}\_1 + \beta\_1 \vec{a}\_2 +\gamma\_1 \vec{a}\_3 \label{Dxu\_base\_a\_ap} \\ \left(\dfrac{ \partial \xi\_2}{\partial x\_1^{\varepsilon}}, \dfrac{ \partial \xi\_2}{\partial x\_2^{\varepsilon}}, \dfrac{ \partial \xi\_2}{\partial x\_3^{\varepsilon}} \right) &=& \alpha\_2 \vec{a}\_1 + \beta\_2 \vec{a}\_2 +\gamma\_2 \vec{a}\_3 \\ \left(\dfrac{ \partial \xi\_3}{\partial x\_1^{\varepsilon}}, \dfrac{ \partial \xi\_3}{\partial x\_2^{\varepsilon}}, \dfrac{ \partial \xi\_3}{\partial x\_3^{\varepsilon}} \right) &=& \alpha\_3 \vec{a}\_1 + \beta\_3 \vec{a}\_2 +\gamma\_3 \vec{a}\_3 \\ \left(\dfrac{ \partial t}{\partial x\_1^{\varepsilon}}, \dfrac{ \partial t}{\partial x\_2^{\varepsilon}}, \dfrac{ \partial t}{\partial x\_3^{\varepsilon}} \right) &=& \alpha\_4 \vec{a}\_1 + \beta\_4 \vec{a}\_2 +\gamma\_4 \vec{a}\_3 \label{parcial\_uvw\_xit\_ap}\end{aligned}$$ and using that (**J***ɛ*)− 1**J***ɛ* = *I* Taking into account that $$\begin{aligned} &&\vec{a}\_i \cdot \vec{a}\_3=0, \quad (i=1,2),\label{a1a30}\\ &&\|\vec{a}\_3\|=1\label{a31}\\ &&\vec{a}\_3 \cdot \dfrac{\partial \vec{a}\_3}{\partial \xi\_i}=0 \quad (i=1,2)\label{a3da30}\end{aligned}$$ and introducing the following notation for the coefficients of the first and second fundamental forms of the surface parametrized by *X⃗* (here *t* acts only as a parameter): $$\begin{aligned} E&=&\vec{a}\_1 \cdot\vec{a}\_1 \label{coef-E} \\ F&=&\vec{a}\_1 \cdot\vec{a}\_2 \label{coef-F} \\ G&=&\vec{a}\_2 \cdot\vec{a}\_2 \label{coef-G}\end{aligned}$$ $$\begin{aligned} e&=& -\vec{a}\_1 \cdot \dfrac{\partial \vec{a}\_{3}}{\partial \xi\_1} = \vec{a}\_3 \cdot \dfrac{\partial \vec{a}\_{1}}{\partial \xi\_1} \label{coef-e} \\ f&=&- \vec{a}\_1 \cdot \dfrac{\partial \vec{a}\_{3}}{\partial \xi\_2}=-\vec{a}\_2 \cdot\dfrac{\partial \vec{a}\_{3}}{\partial \xi\_1}= \vec{a}\_3 \cdot \dfrac{\partial \vec{a}\_{1}}{\partial \xi\_2}= \vec{a}\_3 \cdot \dfrac{\partial \vec{a}\_{2}}{\partial \xi\_1} \label{coef-f} \\ g&=&-\vec{a}\_2 \cdot\dfrac{\partial \vec{a}\_{3}}{\partial \xi\_2}= \vec{a}\_3 \cdot \dfrac{\partial \vec{a}\_{2}}{\partial \xi\_2} \label{coef-g} \end{aligned}$$ we deduce from : $$\begin{aligned} \alpha\_1 &=&\dfrac{\|\vec{a}\_2\|^2+ \varepsilon \xi\_3 h \left(\vec{a}\_2 \cdot \dfrac{\partial \vec{a}\_{3}}{\partial \xi\_2}\right)}{A(\varepsilon)} = \dfrac{G - \varepsilon \xi\_3 h g}{A(\varepsilon)}\label{alfa1}\\ \beta\_1& =&-\dfrac{ \vec{a}\_1 \cdot \vec{a}\_{2} +\varepsilon \xi\_3 h \left(\vec{a}\_1 \cdot\dfrac{\partial \vec{a}\_{3}}{\partial \xi\_2}\right)}{A(\varepsilon)} = -\dfrac{F -\varepsilon \xi\_3 h f}{A(\varepsilon)}\label{beta1}\\ \alpha\_2 &=&-\dfrac{\vec{a}\_2 \cdot \vec{a}\_{1} + \varepsilon \xi\_3 h\left( \vec{a}\_2 \cdot \dfrac{\partial \vec{a}\_{3}}{\partial \xi\_1}\right)}{A(\varepsilon)}=\beta\_1 \label{alfa2}\\ \beta\_2& =&\dfrac{ \|\vec{a}\_1\|^2 +\varepsilon \xi\_3 h\left( \vec{a}\_1 \cdot\dfrac{\partial \vec{a}\_{3}}{\partial \xi\_1}\right)}{A(\varepsilon)} = \dfrac{ E -\varepsilon \xi\_3 h e}{A(\varepsilon)} \label{beta2}\\ \gamma\_i&=&0 \quad (i=1,2) \label{gamma12}\\ \alpha\_3 &=&-\dfrac{ \xi\_3 }{ h}\left( \alpha\_1 \dfrac{\partial h}{\partial \xi\_1} + \alpha\_2 \dfrac{\partial h}{\partial \xi\_2} \right)\label{alfa312}\\ \beta\_3&=& -\dfrac{ \xi\_3 }{ h}\left( \beta\_1\dfrac{\partial h}{\partial \xi\_1} + \beta\_2 \dfrac{\partial h}{\partial \xi\_2}\right) \label{beta312}\\ \gamma\_3&=&\dfrac{1}{\varepsilon h} \label{gamma3}\end{aligned}$$ $$\begin{aligned} \dfrac{ \partial \xi\_1}{\partial t^{\varepsilon}} &=&-(\alpha\_1 \vec{a}\_1 + \beta\_1 \vec{a}\_2)\cdot\left( \dfrac{\partial \vec{X}}{\partial t} + \varepsilon \xi\_3 h \dfrac{\partial \vec{a}\_{3}}{\partial t} \right) \label{parcial\_u\_teps\_alfa\_beta\_ap}\\ \dfrac{ \partial \xi\_2}{\partial t^{\varepsilon}}& =&-( \alpha\_2 \vec{a}\_1 + \beta\_2 \vec{a}\_2)\cdot \left(\dfrac{\partial \vec{X}}{\partial t} + \varepsilon \xi\_3 h \dfrac{\partial \vec{a}\_{3}}{\partial t} \right)\label{parcial\_v\_teps\_alfa\_beta\_ap}\\ \dfrac{ \partial \xi\_3}{\partial t^{\varepsilon}}& =&- (\alpha\_3 \vec{a}\_1 +\beta\_3 \vec{a}\_2) \cdot\left(\dfrac{\partial \vec{X}}{\partial t} +\varepsilon \xi\_3 h \dfrac{\partial \vec{a}\_{3}}{\partial t} \right) -\dfrac{1}{\varepsilon h} \vec{a}\_3 \cdot\dfrac{\partial \vec{X}}{\partial t}- \dfrac{\xi\_3}{ h} \dfrac{\partial h}{\partial t} \label{parcial\_w\_teps\_alfa\_beta\_ap}\\ \alpha\_4 &=&\beta\_4=\gamma\_4=0\\ \dfrac{\partial t}{\partial x^{\varepsilon}\_i}&=&0, \quad (i=1,2,3) \label{dtdxi\_0\_ap} \\ \dfrac{\partial t}{\partial t^{\varepsilon}}&=&1 \label{dtdteps\_1\_ap}\end{aligned}$$ where $$\begin{aligned} A(\varepsilon)&=&\|\vec{a}\_1\|^2 \|\vec{a}\_2\|^2-\left( \vec{a}\_1 \cdot \vec{a}\_{2}\right)^2 \nonumber \\ &+&\varepsilon \xi\_3 h\left[\|\vec{a}\_2\|^2 \left(\vec{a}\_1 \cdot\dfrac{\partial \vec{a}\_{3}}{\partial \xi\_1}\right) + \|\vec{a}\_1\|^2 \left(\vec{a}\_2 \cdot \dfrac{\partial \vec{a}\_{3}}{\partial \xi\_2}\right)- \left( \vec{a}\_1 \cdot \vec{a}\_{2}\right)\left(\vec{a}\_1 \cdot\dfrac{\partial \vec{a}\_{3}}{\partial \xi\_2}+\vec{a}\_2 \cdot \dfrac{\partial \vec{a}\_{3}}{\partial \xi\_1}\right)\right]\nonumber \\ &+& \varepsilon^2 \xi\_3^2 h^2 \left[\left(\vec{a}\_1 \cdot\dfrac{\partial \vec{a}\_{3}}{\partial \xi\_1} \right)\left(\vec{a}\_2 \cdot \dfrac{\partial \vec{a}\_{3}}{\partial \xi\_2} \right) -\left( \vec{a}\_1 \cdot\dfrac{\partial \vec{a}\_{3}}{\partial \xi\_2}\right) \left( \vec{a}\_2 \cdot \dfrac{\partial \vec{a}\_{3}}{\partial \xi\_1} \right)\right]\nonumber\\ &=&EG-F^2 + \varepsilon \xi\_3 h\left(-G e - E g+2 f F \right) + \varepsilon^2 \xi\_3^2 h^2 \left(e g -f^2\right) \label{A}\end{aligned}$$ If we denote by $$\begin{aligned} A^0&=&\|\vec{a}\_1\|^2 \|\vec{a}\_2\|^2-\left( \vec{a}\_1 \cdot \vec{a}\_{2}\right)^2 =EG-F^2=\|\vec{a}\_1 \times \vec{a}\_2\|^2 \label{A0} \\ A^1&=&\|\vec{a}\_2\|^2 \left(\vec{a}\_1 \cdot\dfrac{\partial \vec{a}\_{3}}{\partial \xi\_1}\right) + \|\vec{a}\_1\|^2 \left(\vec{a}\_2 \cdot \dfrac{\partial \vec{a}\_{3}}{\partial \xi\_2}\right) \nonumber \\ &&{} - \left( \vec{a}\_1 \cdot \vec{a}\_{2}\right)\left(\vec{a}\_1 \cdot\dfrac{\partial \vec{a}\_{3}}{\partial \xi\_2}+\vec{a}\_2 \cdot \dfrac{\partial \vec{a}\_{3}}{\partial \xi\_1}\right) = -eG-gE+2fF \label{A^1} \\ A^2&=&\left(\vec{a}\_1 \cdot\dfrac{\partial \vec{a}\_{3}}{\partial \xi\_1} \right)\left(\vec{a}\_2 \cdot \dfrac{\partial \vec{a}\_{3}}{\partial \xi\_2} \right) -\left( \vec{a}\_1 \cdot\dfrac{\partial \vec{a}\_{3}}{\partial \xi\_2}\right) \left( \vec{a}\_2 \cdot \dfrac{\partial \vec{a}\_{3}}{\partial \xi\_1} \right)=eg-f^2\label{A2}\end{aligned}$$ then we obtain that *A*(*ɛ*) = *A*0 + *ɛ**ξ*3*h**A*1 + *ɛ*2*ξ*32*h*2*A*2 We remark (see ) that *A*0, *A*1 and *A*2 are related to the Gaussian curvature (*K**G*) of the surface parametrized by *X⃗* and its mean curvature (*K**m*), since $$\begin{aligned} K\_G &=& \dfrac{eg-f^2}{EG-F^2} = \dfrac{A^2}{A^0} \label{KG} \\ K\_m &=& \dfrac{eG+gE-2fF}{2(EG-F^2)} = - \dfrac{A^1}{2A^0} \label{Km}\end{aligned}$$ Furthermore, the principal curvatures of *X⃗* are the solutions of the equation *A*0*K**n*2 + *A*1*K**n* + *A*2 = 0 Coefficients definition ======================= In this appendix, we introduce some coefficients that depend only on the lower bound surface parametrization, *X⃗* and other coefficients that depend both on the parametrization and on the gap *h*. We will use these coefficients throughout this article. In addition to the coefficients that will be defined below, others have been introduced in the body of the paper and in [ApendiceA]: the coefficients of the first and second fundamental forms of the surface parametrized by *X⃗* (denoted by *E*, *F*, *G* and *e*, *f*, *g*, respectively), defined in - and - from the basis {*a⃗*1, *a⃗*2, *a⃗*3} (see -), the coefficients *α**i*, *β**i* and *γ**i* (*i* = 1, 2, 3) in -, and their development in powers of *ɛ* in -, *A*(*ɛ*) and its development in powers of *ɛ* in -, along with its relation with the Gaussian curvature and the mean curvature of the surface parametrized by *X⃗* in -, and, finally, the definition of *Â**i*0 (*i* = 1, 2) in. The following coefficients depend only on the parametrization *X⃗*: $$\begin{aligned} B\_{11}&=&Ge-Ff \label{B11}\\ B\_{12}&=&Gf -Fg\label{B12}\\ B\_{21}&=& Ef-Fe \label{B21}\\ B\_{22}&=& Eg-Ff \label{B22}\\ C^0\_l&=& \alpha\_l^0 \left( \vec{a}\_1 \cdot \dfrac{\partial \vec{X}}{\partial t}\right) + \beta\_l^0 \left( \vec{a}\_2 \cdot \dfrac{\partial \vec{X}}{\partial t}\right) \quad (l=1,2) \label{C}\\ H^0\_{ilk}&=& \alpha\_i^0\left(\vec{a}\_1\cdot \dfrac{\partial \vec{a}\_k}{\partial \xi\_l}\right)+\beta\_i^0\left(\vec{a}\_2 \cdot \dfrac{\partial \vec{a}\_k}{\partial \xi\_l}\right)\quad (i,l=1,2; \quad k=1,2,3)\label{H}\\ I&=&\left(\vec{a}\_1 \times \dfrac{\partial \vec{a}\_3}{\partial \xi\_2}\right)\cdot \vec{a}\_3 + \left(\dfrac{\partial \vec{a}\_3}{\partial \xi\_1} \times \vec{a}\_2\right)\cdot \vec{a}\_3 \label{I}\\ J^0\_{lm}&=& \alpha\_l^0 \alpha\_m^0 E + (\beta\_l^0 \alpha\_m^0 + \alpha\_l^0 \beta\_m^0) F + \beta\_l^0 \beta\_m^0 G \nonumber \\ &=&\alpha\_l^0 \delta\_{m1}+\beta\_l^0\delta\_{m2} \quad (l,m=1,2) \label{J}\\ {L}^0\_{kli} &=& \sum\_{m=1}^2 \left[ \left( \dfrac{\partial\alpha\_l^0}{\partial \xi\_m} \delta\_{m1}+\dfrac{\partial\beta\_l^0}{\partial \xi\_m} \delta\_{m2}+\alpha\_l^0 H^0\_{mm1} +\beta\_l^0H^0\_{mm2}\right) \delta\_{ki}\right.\nonumber\\ &+&\left. 2 H^0\_{imk} J^0\_{lm}\right] \quad (i,l=1,2; \quad k=1,2,3) \label{L}\\ Q^0\_{ik}&=&\alpha\_i^0 \left( \vec{a}\_1 \cdot \dfrac{\partial \vec{a}\_k}{\partial t}\right) + \beta\_i^0 \left( \vec{a}\_2 \cdot \dfrac{\partial \vec{a}\_k}{\partial t}\right)-\sum\_{l=1}^2 H^0\_{ilk}C^0\_l \nonumber \\ &&(i=1,2; \,k=1,2,3) \label{Q} \\ R^0\_{ik} &=& Q^0\_{ik}+ H^0\_{ik3} \left( \dfrac{\partial \vec{X}}{\partial t} \cdot \vec{a}\_3\right) \quad (i=1,2; \,k=1,2) \label{R} \end{aligned}$$ $$\begin{aligned} S^0\_{ik} &=& \dfrac{I \sqrt{A^0} -A^1}{A^0} \left[ \alpha\_i^0\left( \dfrac{\partial \vec{a}\_k}{\partial \xi\_1} \cdot \vec{a}\_3\right) +\beta\_i^0 \left( \dfrac{\partial \vec{a}\_k}{\partial \xi\_2} \cdot \vec{a}\_3\right) \right]\nonumber\\ &+& \sum\_{m=1}^2 \sum\_{l=1}^2 \left[ \left( \alpha^0\_i \left( \vec{a}\_1 \cdot \dfrac{\partial^2 \vec{a}\_k}{\partial \xi\_l \partial \xi\_m}\right) + \beta^0\_i \left( \vec{a}\_2 \cdot \dfrac{\partial^2 \vec{a}\_k}{\partial \xi\_l \partial \xi\_m}\right)\right)J^0\_{lm}\right.\nonumber\\ &+&\left.\left( \dfrac{\partial\alpha\_l^0}{\partial \xi\_m} \delta\_{m1}+\dfrac{\partial\beta\_l^0}{\partial \xi\_m} \delta\_{m2}+\alpha\_l^0 H^0\_{mm1} +\beta\_l^0H^0\_{mm2}\right)H^0\_{ilk}\right]\nonumber\\ &-&\dfrac{1}{\sqrt{A^0}}\left[ \left(\vec{a}\_1 \times \dfrac{\partial \vec{a}\_3}{\partial \xi\_2}\right)+ \left(\dfrac{\partial \vec{a}\_3}{\partial \xi\_1} \times \vec{a}\_2\right)\right] \cdot \left( \alpha^0\_i \dfrac{\partial \vec{a}\_k}{\partial \xi\_1} + \beta^0\_i\dfrac{\partial \vec{a}\_k}{\partial \xi\_2} \right) \nonumber \\ &&(i,k=1,2)\label{S} \end{aligned}$$ Coefficients *B**i**l*0 and *H**i**l*30 are related in the following way: $${H^0\_{il3}}=-\dfrac{B\_{il}}{A^0} \quad (i,l=1,2)$$ The following coefficients depend on the parametrization *X⃗* and on function *h*: $$\begin{aligned} F^0\_i(h)&=&\int\_0^1 f^0\_i \, d\xi\_3+\dfrac{s\_0}{\rho h}(\vec{f}^1\_{R\_1} + \vec{f}^1\_{R\_0}) \cdot \left( \alpha^0\_i \vec{a}\_1 + \beta^0\_i \vec{a}\_2 \right) \quad (i=1,2) \label{F} \\ \vec{\eta}(h)&=&\dfrac{\partial h}{\partial \xi\_2} (\vec{a}\_1 \times \vec{a}\_3) + h \left(\vec{a}\_1 \times \dfrac{\partial \vec{a}\_3}{\partial \xi\_2}\right) + \dfrac{\partial h}{\partial \xi\_1} (\vec{a}\_3\times \vec{a}\_2) + h \left(\dfrac{\partial \vec{a}\_3}{\partial \xi\_1} \times \vec{a}\_2\right)\label{eta}\\ \psi(h)\_{ijl}^0&=& \dfrac{1}{h} \left[ \left( \alpha\_l^0 \dfrac{\partial h}{\partial \xi\_1} +\beta\_l^0 \dfrac{\partial h}{\partial \xi\_2} \right) \delta\_{ij} + \dfrac{\partial h}{\partial \xi\_j} \left( \alpha\_l^0 \delta\_{1i} + \beta\_l^0 \delta\_{2i}\right)\right] \quad (i,j,l=1,2) \label{psi}\\ \chi(h)\_{ik}^0&=&\dfrac{1}{h} \left\{ \dfrac{\partial h}{\partial \xi\_1} \left[\sum\_{l=1}^2H^0\_{ilk}\alpha\_l^0 -\dfrac{1}{\sqrt{A^0}}(\vec{a}\_3\times \vec{a}\_2)\cdot \left( \alpha\_i^0 \dfrac{ \partial \vec{a}\_{k}}{\partial \xi\_1}+\beta\_i^0 \dfrac{ \partial \vec{a}\_{k}}{\partial \xi\_2}\right)\right]\right.\nonumber\\ &+&\left. \dfrac{\partial h}{\partial \xi\_2} \left[ \sum\_{l=1}^ 2H^0\_{ilk}\beta\_l^0 -\dfrac{1}{\sqrt{A^0}} (\vec{a}\_1 \times \vec{a}\_3) \cdot \left( \alpha\_i^0 \dfrac{ \partial \vec{a}\_{k}}{\partial \xi\_1}+ \beta\_i^0 \dfrac{ \partial \vec{a}\_{k}}{\partial \xi\_2} \right) \right]\right\} \nonumber\\ && (i=1,2, \,k=1,2,3)\label{chiik}\\ \kappa(h)^0\_i&=& -\dfrac{1}{h^2}\left[\alpha^0\_i \dfrac{\partial}{\partial t}\left(h \dfrac{ \partial h}{\partial \xi\_1} \right) +\beta^0\_i \dfrac{\partial}{\partial t}\left(h \dfrac{ \partial h}{\partial \xi\_2} \right) \right]\nonumber\\ &+& \left[ \dfrac{\partial }{\partial \xi\_1}\left( \dfrac{\partial \vec{X}}{\partial t} \cdot \vec{a}\_3\right) \left( L^0\_{31i} - \dfrac{A^1}{A^0} \alpha^0\_i \right)+\dfrac{\partial }{\partial \xi\_2}\left( \dfrac{\partial \vec{X}}{\partial t} \cdot \vec{a}\_3\right) \left( L^0\_{32i} - \dfrac{A^1}{A^0} \beta^0\_i \right)\right]\nonumber\\ &+& \left( \dfrac{\partial \vec{X}}{\partial t} \cdot \vec{a}\_3\right) \left\{ \chi(h)^0\_{i3} + \sum\_{m=1}^2 \sum\_{l=1}^2 \left[ \left( \alpha^0\_i \left( \vec{a}\_1 \cdot \dfrac{\partial^2 \vec{a}\_3}{\partial \xi\_l \partial \xi\_m}\right) + \beta^0\_i \left( \vec{a}\_2 \cdot \dfrac{\partial^2 \vec{a}\_3}{\partial \xi\_l \partial \xi\_m}\right)\right)J^0\_{lm}\right.\right.\nonumber\\ &+&\left. \left.\left( \dfrac{\partial\alpha\_l^0}{\partial \xi\_m} \delta\_{m1}+\dfrac{\partial\beta\_l^0}{\partial \xi\_m} \delta\_{m2}+\alpha\_l^0 H^0\_{mm1} +\beta\_l^0H^0\_{mm2}\right)H^0\_{il3}\right]\right.\nonumber\\ &-&\left.\dfrac{1}{\sqrt{A^0}}\left[ \left(\vec{a}\_1 \times \dfrac{\partial \vec{a}\_3}{\partial \xi\_2}\right)+ \left(\dfrac{\partial \vec{a}\_3}{\partial \xi\_1} \times \vec{a}\_2\right)\right] \cdot \left( \alpha^0\_i \dfrac{\partial \vec{a}\_3}{\partial \xi\_1} + \beta^0\_i\dfrac{\partial \vec{a}\_3}{\partial \xi\_2} \right)\right\} \quad (i=1,2)\label{kappa} \\ \hat{\kappa}(h)^0\_i &=& \kappa(h)^0\_i - \left( \alpha\_i^0 \dfrac{ \partial}{\partial \xi\_1} \left ( \frac{2}{h} \dfrac{\partial h}{\partial t} \right ) + \beta\_i^0 \dfrac{ \partial}{\partial \xi\_2} \left ( \frac{2}{h} \dfrac{\partial h}{\partial t} \right ) \right) \quad (i=1,2)\label{kappa-hat}\end{aligned}$$ where *δ**i**j* is the Kronecker Delta. Derivation of equations to calculate *V⃗*0 ========================================== Let us identify the terms of order *ɛ*0 in the equation. We simplify that equation, taking into account -,, and -. Then we multiply the equation obtained by *a*1*i* and we yield: $$\begin{aligned} &&\hspace\*{-0.5cm} \sum\_{k=1}^3 \left(\dfrac{ \partial V\_k^0}{\partial t} (\vec{a}\_1 \cdot \vec{a}\_{k}) + V\_k^0 \left(\vec{a}\_1 \cdot \dfrac{ \partial \vec{a}\_{k}}{\partial t} \right)\right) \nonumber\\ &&{}+ \sum\_{l=1}^2 \sum\_{k=1}^3\left( \dfrac{ \partial V\_k^0}{\partial \xi\_l} (\vec{a}\_1 \cdot \vec{a}\_{k})+ V\_k^0 \left(\vec{a}\_1 \cdot \dfrac{ \partial \vec{a}\_{k}}{\partial \xi\_l} \right) \right)\left( V\_l^0-C^0\_l \right) \nonumber\\ &&=-\dfrac{1}{\rho\_0} \dfrac{ \partial p^0}{\partial \xi\_1} + \nu \left\{ \sum\_{m=1}^2 \sum\_{l=1}^2 \sum\_{k=1}^3 \dfrac{\partial}{\partial \xi\_m}\left[ \dfrac{ \partial (V\_k^0 \vec{a}\_k)}{\partial \xi\_l} (\alpha\_l^0 {a}\_{1j} + \beta\_l^0 {a}\_{2j}) \right]\cdot \vec{a}\_1 (\alpha\_m^0 {a}\_{1j} + \beta\_m^0 {a}\_{2j})\right.\nonumber\\ && \left.{} +\dfrac{1 }{h} \dfrac{A^1}{A^0} \sum\_{k=1}^2 \dfrac{ \partial u\_k^1}{\partial \xi\_3} (\vec{a}\_1 \cdot \vec{a}\_{k}) +\dfrac{1 }{h^2} \sum\_{k=1}^2 \dfrac{ \partial^2 u\_k^2}{\partial \xi\_3^3} (\vec{a}\_1 \cdot \vec{a}\_{k}) \right\}+ \sum\_{k=1}^2 f\_k^0 (\vec{a}\_1 \cdot \vec{a}\_{k}) \label{ec\_ns\_eps0\_abrev\_a1}\end{aligned}$$ where we have denoted by *V*30 = *u*30 (to achieve a more compact expression), and coefficients *C**l*0, (*l* = 1, 2), are given by. Analogously, if we multiply the same equation by *a*2*i* and *a*3*i*, we obtain, respectively: $$\begin{aligned} &&\hspace\*{-0.5cm} \sum\_{k=1}^3 \left(\dfrac{ \partial V\_k^0}{\partial t} (\vec{a}\_2 \cdot \vec{a}\_{k}) + V\_k^0 \left(\vec{a}\_2 \cdot \dfrac{ \partial \vec{a}\_{k}}{\partial t} \right)\right) \nonumber\\ &&{}+ \sum\_{l=1}^2 \sum\_{k=1}^3 \left( \dfrac{ \partial V\_k^0}{\partial \xi\_l} (\vec{a}\_2 \cdot \vec{a}\_{k})+ V\_k^0 \left(\vec{a}\_2 \cdot \dfrac{ \partial \vec{a}\_{k}}{\partial \xi\_l} \right) \right)\left( V\_l^0- C^0\_l \right) \nonumber\\ &&=-\dfrac{1}{\rho\_0} \dfrac{ \partial p^0}{\partial \xi\_2} + \nu \left\{ \sum\_{m=1}^2 \sum\_{l=1}^2 \sum\_{k=1}^3 \dfrac{\partial}{\partial \xi\_m}\left[ \dfrac{ \partial (V\_k^0 \vec{a}\_k)}{\partial \xi\_l} (\alpha\_l^0 {a}\_{1j} + \beta\_l^0 {a}\_{2j}) \right]\cdot \vec{a}\_2 (\alpha\_m^0 {a}\_{1j} + \beta\_m^0 {a}\_{2j})\right.\nonumber\\ && \left.{} +\dfrac{1 }{h} \dfrac{A^1}{A^0} \sum\_{k=1}^2 \dfrac{ \partial u\_k^1}{\partial \xi\_3} (\vec{a}\_2 \cdot \vec{a}\_{k}) +\dfrac{1 }{h^2} \sum\_{k=1}^2 \dfrac{ \partial^2 u\_k^2}{\partial \xi\_3^3} (\vec{a}\_2 \cdot \vec{a}\_{k}) \right\}+ \sum\_{k=1}^2 f\_k^0 (\vec{a}\_2 \cdot \vec{a}\_{k}) \label{ec\_ns\_eps0\_abrev\_a2}\end{aligned}$$ $$\begin{aligned} &&\hspace\*{-0.5cm}\dfrac{ \partial V\_3^0}{\partial t}+ \sum\_{k=1}^2 V\_k^0 \left(\vec{a}\_3 \cdot \dfrac{ \partial \vec{a}\_{k}}{\partial t} \right)\nonumber\\ &&{}+ \sum\_{l=1}^2 \left( \dfrac{ \partial V\_3^0}{\partial \xi\_l}+ \sum\_{k=1}^2 V\_k^0 \left(\vec{a}\_3 \cdot \dfrac{ \partial \vec{a}\_{k}}{\partial \xi\_l} \right) \right)\left( V\_l^0- C^0\_l\right) \nonumber\\ &&=-\dfrac{1}{\rho\_0 h} \dfrac{ \partial p^1}{\partial \xi\_3} + \nu \left\{ \sum\_{m=1}^2 \sum\_{l=1}^2 \sum\_{k=1}^3 \dfrac{\partial}{\partial \xi\_m}\left[ \dfrac{ \partial (V\_k^0 \vec{a}\_k)}{\partial \xi\_l} (\alpha\_l^0 {a}\_{1j} + \beta\_l^0 {a}\_{2j}) \right]\cdot \vec{a}\_3 (\alpha\_m^0 {a}\_{1j} + \beta\_m^0 {a}\_{2j})\right.\nonumber\\ && \left.{} +\dfrac{1 }{h} \dfrac{A^1}{A^0} \dfrac{\partial h}{\partial t} +\dfrac{1 }{h^2} \dfrac{ \partial^2 u\_3^2}{\partial \xi\_3^3} \right\}+ f\_3^0 \label{ec\_ns\_eps0\_abrev\_a3}\end{aligned}$$ Next we multiply equation by *α*10 and we add equation multiplied by *α*20 to get: $$\begin{aligned} &&\hspace\*{-0.5cm}\dfrac{ \partial V\_1^0}{\partial t} + \sum\_{l=1}^2 \alpha\_l^0 \sum\_{k=1}^3 \left( V\_k^0 \left(\vec{a}\_l \cdot \dfrac{ \partial \vec{a}\_{k}}{\partial t} \right)\right) \nonumber\\ &&{}+ \sum\_{l=1}^2 \left( \dfrac{ \partial V\_1^0}{\partial \xi\_l} + \sum\_{m=1}^2 \alpha\_m^0 \sum\_{k=1}^3 V\_k^0 \left(\vec{a}\_m \cdot \dfrac{ \partial \vec{a}\_{k}}{\partial \xi\_l} \right) \right)\left( V\_l^0- C^0\_l \right)=-\dfrac{1}{\rho\_0} \sum\_{l=1}^2 \alpha\_l^0 \dfrac{ \partial p^0}{\partial \xi\_l} \nonumber\\ &&{} + \nu \left\{ \sum\_{m=1}^2\sum\_{p=1}^2\alpha\_p^0 \sum\_{l=1}^2 \sum\_{k=1}^3 \dfrac{\partial}{\partial \xi\_m}\left[ \dfrac{ \partial (V\_k^0 \vec{a}\_k)}{\partial \xi\_l} (\alpha\_l^0 {a}\_{1j} + \beta\_l^0 {a}\_{2j}) \right]\cdot \vec{a}\_p (\alpha\_m^0 {a}\_{1j} + \beta\_m^0 {a}\_{2j})\right.\nonumber\\ && \left.{} +\dfrac{1 }{h} \dfrac{A^1}{A^0} \dfrac{ \partial u\_1^1}{\partial \xi\_3} +\dfrac{1 }{h^2} \dfrac{ \partial^2 u\_1^2}{\partial \xi\_3^3} \right\}+ f\_1^0 \label{ec\_ns\_eps0\_abrev\_a1b}\end{aligned}$$ In the same way, we multiply equation by *β*20 and we add equation multiplied by *β*10 to obtain: $$\begin{aligned} &&\hspace\*{-0.5cm}\dfrac{ \partial V\_2^0}{\partial t} + \sum\_{l=1}^2 \beta\_l^0 \sum\_{k=1}^3 \left( V\_k^0 \left(\vec{a}\_l \cdot \dfrac{ \partial \vec{a}\_{k}}{\partial t} \right)\right) \nonumber\\ &&{}+ \sum\_{l=1}^2 \left( \dfrac{ \partial V\_2^0}{\partial \xi\_l} + \sum\_{m=1}^2 \beta\_m^0 \sum\_{k=1}^3 V\_k^0 \left(\vec{a}\_m \cdot \dfrac{ \partial \vec{a}\_{k}}{\partial \xi\_l} \right) \right)\left( V\_l^0-C^0\_l\right) =-\dfrac{1}{\rho\_0} \sum\_{l=1}^2 \beta\_l^0 \dfrac{ \partial p^0}{\partial \xi\_l} \nonumber\\ &&{} + \nu \left\{ \sum\_{m=1}^2\sum\_{p=1}^2\beta\_p^0 \sum\_{l=1}^2 \sum\_{k=1}^3 \dfrac{\partial}{\partial \xi\_m}\left[ \dfrac{ \partial (V\_k^0 \vec{a}\_k)}{\partial \xi\_l} (\alpha\_l^0 {a}\_{1j} + \beta\_l^0 {a}\_{2j}) \right]\cdot \vec{a}\_p (\alpha\_m^0 {a}\_{1j} + \beta\_m^0 {a}\_{2j})\right.\nonumber\\ && \left.{} +\dfrac{1 }{h} \dfrac{A^1}{A^0} \dfrac{ \partial u\_2^1}{\partial \xi\_3} +\dfrac{1 }{h^2} \dfrac{ \partial^2 u\_2^2}{\partial \xi\_3^3} \right\}+ f\_2^0 \label{ec\_ns\_eps0\_abrev\_a2b}\end{aligned}$$ We yield the following equations by integrating - over *ξ*3 from 0 to 1, and using expressions -, - and -: $$\begin{aligned} &&\hspace\*{-0.5cm}\dfrac{ \partial V\_1^0}{\partial t} + \sum\_{l=1}^2 \left( V\_l^0-C^0\_l \right) \dfrac{ \partial V\_1^0}{\partial \xi\_l} + \sum\_{k=1}^3 \left( Q^0\_{1k}+\sum\_{l=1}^2 H^0\_{1lk} V\_l^0\right) V\_k^0 \nonumber\\ &&=-\dfrac{1}{\rho\_0} \sum\_{l=1}^2 \alpha\_l^0 \dfrac{ \partial p^0}{\partial \xi\_l}+\nu \left\{ \sum\_{m=1}^2 \sum\_{l=1}^2 \dfrac{ \partial^2 V\_1^0 }{\partial \xi\_m \partial \xi\_l} J^0\_{lm} + \sum\_{k=1}^2 \sum\_{l=1}^2 \dfrac{ \partial V\_k^0 }{\partial \xi\_l}( L^0\_{kl1} +\psi(h)^0\_{1kl} )\right.\nonumber\\ && \left. {} + \sum\_{k=1}^2 V\_k^0 ({S}\_{1k}^0+\chi(h)^0\_{1k}) + \kappa(h)^0\_1 \right\}+ {F}^0\_1(h) \label{ec\_V10s} \end{aligned}$$ $$\begin{aligned} &&\hspace\*{-0.5cm}\dfrac{ \partial V\_2^0}{\partial t}+ \sum\_{l=1}^2 \left( V\_l^0-C^0\_l \right) \dfrac{ \partial V\_2^0}{\partial \xi\_l} +\sum\_{k=1}^3 \left(Q^0\_{2k} + \sum\_{l=1}^2 H^0\_{2lk} V\_l^0 \right) V\_k^0\nonumber\\ &&=-\dfrac{1}{\rho\_0} \sum\_{l=1}^2 \beta\_l^0 \dfrac{ \partial p^0}{\partial \xi\_l} + \nu \left\{ \sum\_{m=1}^2 \sum\_{l=1}^2 \dfrac{ \partial^2 V\_2^0}{\partial \xi\_m \partial \xi\_l} J^0\_{lm} + \sum\_{k=1}^2 \sum\_{l=1}^2 \dfrac{ \partial V\_k^0 }{\partial \xi\_l}( L^0\_{kl2} +\psi(h)^0\_{2kl} ) \right.\nonumber\\ && \left.{} +\sum\_{k=1}^2 V\_k^0 ({S}^0\_{2k}+\chi(h)^0\_{2k}) + \kappa(h)^0\_2 \right\} + {F}^0\_2(h) \label{ec\_V20s}\end{aligned}$$ where *H**i**l**k*0, *J**l**m*0, *L**k**l**i*0, *Q**i**k*0, *S**i**k*0, *F**i*0(*h*), *ψ*(*h*)*i**k**l*0, *χ*(*h*)*i**k*0 and *κ*(*h*)*i*0 are given by, -, - and -. Finally, from last equations, taking into account that *V*30 = *u*30,, and rearranging terms, we obtain $$\begin{aligned} &&\hspace\*{-0.5cm}\dfrac{ \partial V\_i^0}{\partial t} + \sum\_{l=1}^2 \left( V\_l^0-C^0\_l \right) \dfrac{ \partial V\_i^0}{\partial \xi\_l} + \sum\_{k=1}^2 \left( R^0\_{ik}+\sum\_{l=1}^2 H^0\_{ilk} V\_l^0 \right) V\_k^0\nonumber\\ &&=-\dfrac{1}{\rho\_0}\left( \alpha\_i^0 \dfrac{ \partial \pi\_0^0}{\partial \xi\_1} + \beta\_i^0 \dfrac{ \partial \pi\_0^0}{\partial \xi\_2} \right) \nonumber \\ &&{} +\nu \left\{ \sum\_{m=1}^2 \sum\_{l=1}^2 \dfrac{ \partial^2 V\_i^0 }{\partial \xi\_m \partial \xi\_l} J^0\_{lm} + \sum\_{k=1}^2 \sum\_{l=1}^2 \dfrac{ \partial V\_k^0 }{\partial \xi\_l}( L^0\_{kli} +\psi(h)^0\_{ikl} )\right.\nonumber\\ && \left. {} + \sum\_{k=1}^2 V\_k^0 ({S}\_{ik}^0+\chi(h)^0\_{ik}) + \hat{\kappa}(h)^0\_i \right\}+ {F}^0\_i(h)- Q^0\_{i3} \left ( \frac{\partial \vec{X}}{\partial t} \cdot \vec{a}\_3\right ) \quad (i=1,2)\label{ec\_Vi0} \end{aligned}$$ where *R**i**k*0 and *κ̂*(*h*)*i*0 are given by and. Asymptotic analysis of a thin fluid layer flow between two moving surfaces ========================================================================== , In this paper we study the behavior of an incompressible viscous fluid moving between two very close surfaces also in motion. Using the asymptotic expansion method we formally justify two models, a lubrication model and a shallow water model, depending on the boundary conditions imposed. Finally, we discuss under what conditions each of the models would be applicable. Lubrication Shallow waters Asymptotic analysis. 35Q35 76M45 35C20 41A60 76D08 Introduction ============ The asymptotic analysis method is a mathematical tool that has been widely used to obtain and justify reduced models, both in solid and fluid mechanics, when one or two of the dimensions of the domain in which the model is formulated are much smaller than the others. After the pioneering works of Friedrichs, Dressler and Goldenveizer (see and ) the asymptotic development technique has been used successfully to justify beam, plate and shell theories (see, for example,,,,,,, and many others). This same technique has also been used in fluid mechanics to justify various types of models, such as lubrication models, shallow water models, tube flow models, etc. (see, for example,,,,,,,,,,,,,,,,, -,,,,,,,,,, and many others). In this work, we are interested in justifying, again using the asymptotic development technique, a lubrication model in a thin domain with curved mean surface. Following the steps of, but with a different starting point, we devote sections [sec-domain] and [sec-lubrication] to this justification. During the above process we have observed that, depending on the boundary conditions, other models can be obtained, which we show in section [sec-thin-layer]. In this section we derive a shallow water model changing the boundary conditions that we had imposed in section [sec-lubrication]: instead of assuming that we know the velocities on the upper and lower boundaries of the domain, we assume that we know the tractions on these upper and lower boundaries. Thus, two new models are presented in sections 3 and 4 of this article. These models can not be found in the literature, as far as we know. In addition, the method used to justify them allows us to answer the question of when each of them is applicable. In section [conc] we discuss the models yielded, as well as the difference between one model and another depending on the boundary conditions, reaching the conclusion that the magnitude of the pressure differences at the lateral boundary of the domain is key when deciding which of the two models best describes the fluid behavior. Derivation of the model ======================= Original domain --------------- Let us consider a three-dimensional thin domain, Ω*t**ɛ*, filled by a fluid, that varies with time *t* ∈ [0, *T*], given by $$\begin{aligned} \Omega^{\varepsilon}\_t&=&\left\{ (x\_1^{\varepsilon},x\_2^{\varepsilon},x\_3^{\varepsilon})\in{R}^3:x\_i(\xi\_1,\xi\_2,t)\leq x\_i^{\varepsilon} \leq x\_i(\xi\_1,\xi\_2,t)+ h^\varepsilon(\xi\_1,\xi\_2,t)N\_i(\xi\_1,\xi\_2,t), \nonumber\right.\\ &&\left. (i=1,2,3), \ (\xi\_1,\xi\_2)\in D\subset \mathbb{R}^2 \right\} \label{eq-o-domain} \end{aligned}$$ where *X⃗**t*(*ξ*1, *ξ*2) = *X⃗*(*ξ*1, *ξ*2, *t*) = (*x*1(*ξ*1, *ξ*2, *t*), *x*2(*ξ*1, *ξ*2, *t*), *x*3(*ξ*1, *ξ*2, *t*)) is the lower bound surface parametrization, *h**ɛ*(*ξ*1, *ξ*2, *t*) is the gap between the two surfaces in motion, and *N⃗*(*ξ*1, *ξ*2, *t*) is the unit normal vector: $$\vec{N}(\xi\_1,\xi\_2,t)=\dfrac{ \dfrac{\partial\vec{X}}{\partial \xi\_1} \times \dfrac{\partial\vec{X}}{\partial \xi\_2} }{\left\|\dfrac{\partial\vec{X}}{\partial \xi\_1} \times \dfrac{\partial\vec{X}}{\partial \xi\_2}\right\|}$$ The lower bound surface is assumed to be regular, $$\dfrac{\partial\vec{X}}{\partial \xi\_1} \times \dfrac{\partial\vec{X}}{\partial \xi\_2} \neq \vec{0} \quad \forall \ (\xi\_1,\xi\_2)\in D\subset \mathbb{R}^2, \ \forall \ t\in [0,T],$$ and the gap is assumed to be small with regard to the dimension of the bound surfaces. We take into account that the fluid film between the surfaces is thin by introducing a small non-dimensional parameter *ɛ*, and setting that *h**ɛ*(*ξ*1, *ξ*2, *t*) = *ɛ**h*(*ξ*1, *ξ*2, *t*) where *h*(*ξ*1, *ξ*2, *t*) ≥ *h*0 > 0,  ∀ (*ξ*1, *ξ*2) ∈ *D* ⊂ R2,  ∀ *t* ∈ [0, *T*]. Construction of the reference domain ------------------------------------ Let us consider Ω = *D* × [0, 1] a domain independent of *ɛ* and *t*, which is related to Ω*t**ɛ* by the following change of variable: $$\begin{aligned} t^\varepsilon&=&t \label{eq-1-cv} \\ x\_i^\varepsilon &=& x\_i(\xi\_1,\xi\_2,t)+\varepsilon \xi\_3 h(\xi\_1,\xi\_2,t)N\_i(\xi\_1,\xi\_2,t) \label{eq-2-cv}\end{aligned}$$ where (*ξ*1, *ξ*2) ∈ *D* and *ξ*3 ∈ [0, 1]. Let us define the basis {*a⃗*1, *a⃗*2, *a⃗*3} $$\begin{aligned} \vec{a}\_1(\xi\_1,\xi\_2,t)&=&\dfrac{\partial \vec{X}(\xi\_1,\xi\_2,t)}{\partial \xi\_1} \label{base\_a1} \\ \vec{a}\_2(\xi\_1,\xi\_2,t)&=&\dfrac{\partial \vec{X}(\xi\_1,\xi\_2,t)}{\partial \xi\_2}\\ \vec{a}\_3(\xi\_1,\xi\_2,t)&=& \vec{N}(\xi\_1,\xi\_2,t) \label{base\_a3}\end{aligned}$$ In [ApendiceA] we obtain $$\begin{aligned} \dfrac{ \partial x\_i^{\varepsilon}}{\partial \xi\_j} &=& a\_{ji}+\varepsilon \xi\_3 \dfrac{\partial h}{\partial \xi\_j} a\_{3i} +\varepsilon \xi\_3 h\dfrac{\partial a\_{3i}}{\partial \xi\_j}, \quad (i=1,2,3; j=1,2) \label{parcial\_xiuv}\\ \dfrac{ \partial x\_i^{\varepsilon}}{\partial \xi\_3} &=& \varepsilon h a\_{3i}, \quad (i=1,2,3) \label{parcial\_xiw}\\ \dfrac{ \partial x\_i^{\varepsilon}}{\partial t} &=& \dfrac{\partial x\_i}{\partial t}+\varepsilon \xi\_3 \dfrac{\partial h}{\partial t}a\_{3i} +\varepsilon \xi\_3 h\dfrac{\partial a\_{3i}}{\partial t}, \quad (i=1,2,3) \label{parcial\_xit} \\ \dfrac{ \partial t^{\varepsilon}}{\partial \xi\_1} &=&\dfrac{ \partial t^{\varepsilon}}{\partial \xi\_2}=\dfrac{ \partial t^{\varepsilon}}{\partial \xi\_3}=0, \quad (i=1,2,3) \label{parcial\_tuvw}\\ \dfrac{ \partial t^{\varepsilon}}{\partial t} &=&1, \label{parcial\_tt} \end{aligned}$$ where *a**i**j* = *a⃗**i* ⋅ *e⃗**j*, (*i*, *j* = 1, 2, 3), {*e⃗*1, *e⃗*2, *e⃗*3} is the canonical basis of R3, and $$\begin{aligned} &&\left(\dfrac{ \partial \xi\_1}{\partial x\_1^{\varepsilon}}, \dfrac{ \partial \xi\_1}{\partial x\_2^{\varepsilon}}, \dfrac{ \partial \xi\_1}{\partial x\_3^{\varepsilon}} \right) = \alpha\_1 \vec{a}\_1 + \beta\_1 \vec{a}\_2 +\gamma\_1 \vec{a}\_3 \label{Dx\_xi1\_base\_a} \\ &&\left(\dfrac{ \partial \xi\_2}{\partial x\_1^{\varepsilon}}, \dfrac{ \partial \xi\_2}{\partial x\_2^{\varepsilon}}, \dfrac{ \partial \xi\_2}{\partial x\_3^{\varepsilon}} \right) = \alpha\_2 \vec{a}\_1 + \beta\_2 \vec{a}\_2 +\gamma\_2 \vec{a}\_3 \\ &&\left(\dfrac{ \partial \xi\_3}{\partial x\_1^{\varepsilon}}, \dfrac{ \partial \xi\_3}{\partial x\_2^{\varepsilon}}, \dfrac{ \partial \xi\_3}{\partial x\_3^{\varepsilon}} \right) = \alpha\_3 \vec{a}\_1 + \beta\_3 \vec{a}\_2 +\gamma\_3 \vec{a}\_3\label{Dx\_xi3\_base\_a}\end{aligned}$$ $$\begin{aligned} &&\dfrac{ \partial \xi\_1}{\partial t^{\varepsilon}} = -(\alpha\_1 \vec{a}\_1 + \beta\_1 \vec{a}\_2)\cdot\left( \dfrac{\partial \vec{X}}{\partial t} + \varepsilon \xi\_3 h \dfrac{\partial \vec{a}\_{3}}{\partial t} \right) \label{parcial\_xi1\_teps\_alfa\_beta}\\ &&\dfrac{ \partial \xi\_2}{\partial t^{\varepsilon}} = -( \alpha\_2 \vec{a}\_1 + \beta\_2 \vec{a}\_2)\cdot \left(\dfrac{\partial \vec{X}}{\partial t} + \varepsilon \xi\_3 h \dfrac{\partial \vec{a}\_{3}}{\partial t} \right)\label{parcial\_xi2\_teps\_alfa\_beta}\\ &&\dfrac{ \partial \xi\_3}{\partial t^{\varepsilon}} = - (\alpha\_3 \vec{a}\_1 +\beta\_3 \vec{a}\_2) \cdot\left(\dfrac{\partial \vec{X}}{\partial t} +\varepsilon \xi\_3 h \dfrac{\partial \vec{a}\_{3}}{\partial t} \right) -\dfrac{1}{\varepsilon h} \vec{a}\_3 \cdot\dfrac{\partial \vec{X}}{\partial t}- \dfrac{\xi\_3}{ h} \dfrac{\partial h}{\partial t} \label{parcial\_xi3\_teps\_alfa\_beta}\\ && \dfrac{\partial t}{\partial x^{\varepsilon}\_i}= 0 \quad (i=1,2,3) \label{dtdxi\_0} \\ && \dfrac{\partial t}{\partial t^{\varepsilon}} = 1 \label{dtdteps\_1}\end{aligned}$$ where *α**i*, *β**i*, *γ**i* (*i* = 1, 2, 3) are given by - in [ApendiceA]. Given any function *F**ɛ*(*x*1*ɛ*, *x*2*ɛ*, *x*3*ɛ*, *t**ɛ*) defined on Ω*t**ɛ*, we can define another function *F*(*ɛ*)(*ξ*1, *ξ*2, *ξ*3, *t*) on Ω using the change of variable *F*(*ɛ*)(*ξ*1, *ξ*2, *ξ*3, *t*) = *F**ɛ*(*x*1*ɛ*, *x*2*ɛ*, *x*3*ɛ*, *t**ɛ*) and the relation between its partial derivatives is trivially: $$\begin{aligned} \dfrac{ \partial F^{\varepsilon}}{\partial x^{\varepsilon}\_i} &=& \displaystyle \frac{ \partial F(\varepsilon)}{\partial \xi\_1} \displaystyle \frac{ \partial \xi\_1}{\partial x^{\varepsilon}\_i}+ \displaystyle \frac{ \partial F(\varepsilon)}{\partial \xi\_2} \displaystyle \frac{ \partial \xi\_2}{\partial x^{\varepsilon}\_i}+ \displaystyle \frac{ \partial F(\varepsilon)}{\partial \xi\_3} \displaystyle \frac{ \partial \xi\_3}{\partial x^{\varepsilon}\_i} \label{der\_F\_xi}\\ \dfrac{ \partial F^{\varepsilon}}{\partial t^{\varepsilon}} &=& \displaystyle \frac{ \partial F(\varepsilon)}{\partial t} + \displaystyle \frac{ \partial F(\varepsilon)}{\partial \xi\_1} \displaystyle \frac{ \partial \xi\_1}{\partial t^{\varepsilon}}+ \displaystyle \frac{ \partial F(\varepsilon)}{\partial \xi\_2} \displaystyle \frac{ \partial \xi\_2}{\partial t^{\varepsilon}}+ \displaystyle \frac{ \partial F(\varepsilon)}{\partial \xi\_3} \displaystyle \frac{ \partial \xi\_3}{\partial t^{\varepsilon}}\label{der\_F\_t}\end{aligned}$$ where $\dfrac{\partial \xi\_j}{\partial x\_i^{\varepsilon}}$, $\dfrac{ \partial \xi\_j}{\partial t^{\varepsilon}}$ are given by -. Navier-Stokes Equations ----------------------- Let us consider an incompressible newtonian fluid, so we can assume that the fluid motion is governed by Navier-Stokes equations (*i* = 1, 2, 3): $$\begin{aligned} && \rho\_0\left(\dfrac{\partial u\_i^{\varepsilon}}{\partial t^{\varepsilon}}+ \dfrac{\partial u\_i^{\varepsilon}}{\partial x\_j^{\varepsilon}} u\_j^{\varepsilon}\right)=-\dfrac{\partial p^{\varepsilon}}{\partial x\_i^{\varepsilon}}+ \mu \left( \dfrac{\partial^2 u\_i^{\varepsilon}}{\partial (x\_1^{\varepsilon})^2} + \dfrac{\partial^2 u\_i^{\varepsilon}}{\partial (x\_2^{\varepsilon})^2} + \dfrac{\partial^2 u\_i^{\varepsilon}}{\partial (x\_3^{\varepsilon})^2}\right)+ \rho\_0 f\_i^{\varepsilon} \label{ec\_ns\_ij}\\ &&\dfrac{\partial u\_j^{\varepsilon}}{\partial x\_j^{\varepsilon}}=0 \label{div\_i}\end{aligned}$$ where repeated indices indicate summation (*j* takes values from 1 to 3), *ρ*0 is the fluid density, assumed to be constant, *u⃗**ɛ* = (*u*1*ɛ*, *u*2*ɛ*, *u*3*ɛ*) is the fluid velocity, *p**ɛ* is the pressure, *μ* is the dynamic viscosity and *f⃗**ɛ* denotes the external density of volume forces. Let us write *u⃗**ɛ* and *f⃗**ɛ* in the new basis - (repeated indices *i* and *k* indicate summation from 1 to 3): $$\begin{aligned} \vec{u}^{\varepsilon} &=& u\_i^{\varepsilon} \vec{e}\_i = u\_k(\varepsilon)\vec{a}\_{k} \\ \vec{f}^\varepsilon &=& f\_i^{\varepsilon}\vec{e}\_i = f\_k(\varepsilon)\vec{a}\_{k}\end{aligned}$$ so we have $$\begin{aligned} u\_i^{\varepsilon}&=& \left ( u\_k(\varepsilon)\vec{a}\_{k} \right ) \cdot \vec{e}\_i = u\_k(\varepsilon) a\_{ki} \label{cambio\_base\_u}\\ f\_i^{\varepsilon}&=& \left ( f\_k(\varepsilon)\vec{a}\_{k} \right ) \cdot \vec{e}\_i = f\_k(\varepsilon){a}\_{ki} \label{cambio\_base\_f}\end{aligned}$$ Taking into account -, equations - yield (*i* = 1, 2, 3): $$\begin{aligned} && \rho\_0\left(\dfrac{\partial (u\_k(\varepsilon){a}\_{ki})}{\partial t^{\varepsilon}}+ \dfrac{\partial (u\_k(\varepsilon){a}\_{ki})}{\partial x\_j^{\varepsilon}} (u\_k(\varepsilon){a}\_{kj})\right)=-\dfrac{\partial p(\varepsilon)}{\partial x\_i^{\varepsilon}}\nonumber\\ &&{}\hspace\*{+0.5cm}+ \mu \left( \dfrac{\partial^2 (u\_k(\varepsilon){a}\_{ki})}{\partial (x\_1^{\varepsilon})^2} + \dfrac{\partial^2 (u\_k(\varepsilon){a}\_{ki})}{\partial (x\_2^{\varepsilon})^2} + \dfrac{\partial^2 (u\_k(\varepsilon){a}\_{ki})}{\partial (x\_3^{\varepsilon})^2}\right)+ \rho\_0 f\_k(\varepsilon){a}\_{ki} \label{ec\_ns\_ij\_a}\\ &&\dfrac{\partial (u\_k(\varepsilon){a}\_{kj})}{\partial x\_j^{\varepsilon}}=0 \label{div\_i\_a}\end{aligned}$$ Equations - can be written in the reference domain Ω, using - and -, as follows (repeated indices indicates summation from 1 to 3; *i* = 1, 2, 3): $$\begin{aligned} &&\dfrac{ \partial u\_k(\varepsilon)}{\partial t} {a}\_{ki} + u\_k(\varepsilon)\dfrac{ \partial {a}\_{ki}}{\partial t} \nonumber\\ &&\hspace\*{+0.5cm}{}+\left({a}\_{ki} \dfrac{ \partial u\_k(\varepsilon)}{\partial \xi\_l} + u\_k(\varepsilon)\dfrac{ \partial {a}\_{ki}}{\partial \xi\_l} \right)\left[ -(\alpha\_l \vec{a}\_1 + \beta\_l \vec{a}\_2)\cdot\left( \dfrac{\partial \vec{X}}{\partial t} + \varepsilon \xi\_3 h \dfrac{\partial \vec{a}\_{3}}{\partial t} \right) \right] \nonumber\\ &&\hspace\*{+0.5cm}{}+\left({a}\_{ki}\dfrac{ \partial u\_k(\varepsilon)}{\partial \xi\_3} + u\_k(\varepsilon)\dfrac{ \partial {a}\_{ki}}{\partial \xi\_3} \right)\left( -\dfrac{1}{\varepsilon h} \vec{a}\_3 \cdot\dfrac{\partial \vec{X}}{\partial t}- \dfrac{\xi\_3}{ h} \dfrac{\partial h}{\partial t} \right)\nonumber\\ &&\hspace\*{+0.5cm}{}+u\_k(\varepsilon){a}\_{kj} \left({a}\_{ki} \dfrac{ \partial u\_k(\varepsilon)}{\partial \xi\_l} +u\_k(\varepsilon) \dfrac{ \partial {a}\_{ki}}{\partial \xi\_l} \right) \left( \alpha\_l {a}\_{1j} + \beta\_l {a}\_{2j}+ \gamma\_l {a}\_{3j}\right)\nonumber\\ &&\hspace\*{+0.5cm}=-\dfrac{1}{\rho\_0}\dfrac{ \partial p(\varepsilon)}{\partial \xi\_l}\left(\alpha\_l {a}\_{1i} + \beta\_l {a}\_{2i}+ \gamma\_l {a}\_{3i}\right)\nonumber\\ &&\hspace\*{+0.5cm}{} + \nu \left\{ \left[\dfrac{ \partial^2 (u\_k(\varepsilon){a}\_{ki})}{\partial \xi\_l \partial \xi\_m} \left( \alpha\_l {a}\_{1j} + \beta\_l {a}\_{2j}+ \gamma\_l {a}\_{3j} \right)\right.\right. \nonumber\\ &&\hspace\*{+0.5cm}\left.\left.{}+ \dfrac{ \partial (u\_k(\varepsilon){a}\_{ki})}{\partial \xi\_l}\dfrac{\partial}{\partial \xi\_m}\left( \alpha\_l {a}\_{1j} + \beta\_l {a}\_{2j}+ \gamma\_l {a}\_{3j} \right) \right] \left( \alpha\_m {a}\_{1j} + \beta\_m {a}\_{2j}+ \gamma\_m {a}\_{3j} \right) \right\} \nonumber\\ &&\hspace\*{+0.5cm}+ f\_k(\varepsilon){a}\_{ki}\label{ec\_ns\_ij\_alfa\_beta}\\ && \left({a}\_{kj} \dfrac{ \partial u\_k(\varepsilon)}{\partial \xi\_l} + u\_k(\varepsilon)\dfrac{ \partial {a}\_{kj}}{\partial \xi\_l}\right) \left(\alpha\_l {a}\_{1j} + \beta\_l {a}\_{2j}+ \gamma\_l {a}\_{3j}\right) =0 \label{div\_i\_dr\_alfa\_beta}\end{aligned}$$ Asymptotic Analysis ------------------- Let us assume that *u**i*(*ɛ*), *f**i*(*ɛ*) (*i* = 1, 2, 3) and *p*(*ɛ*) can be developed in powers of *ɛ*, that is: $$\begin{aligned} && u\_i(\varepsilon) = u\_i^0 + \varepsilon u\_i^1 + \varepsilon^2 u\_i^2 + \cdots \quad (i=1,2,3) \label{ansatz\_1}\\ &&p(\varepsilon) =\varepsilon^{-2} p^{-2} + \varepsilon^{-1} p^{-1} +p^0 + \varepsilon p^1 + \varepsilon^2 p^2 + \cdots \label{ansatz\_2}\\ && f\_i(\varepsilon) = f\_i^0 + \varepsilon f\_i^1 + \varepsilon^2 f\_i^2 + \cdots \quad (i=1,2,3) \label{ansatz\_3}\end{aligned}$$ In making this choice, we follow,, and. Before substituting *α**i*, *β**i*, *γ**i* (*i* = 1, 2, 3) in -, we must develop - in powers of *ɛ*. It is easy to check that $$\begin{aligned} \alpha\_i &=& \alpha\_i^0+\varepsilon \xi\_3 h \alpha\_i^1 +\varepsilon^2 \xi\_3^2h^2 \alpha\_i^2+\cdots, \quad (i=1,2) \label{alfaides}\\ \alpha\_3 &=&\dfrac{ \xi\_3 }{ h}(\alpha\_3^0+\varepsilon \xi\_3 h \alpha\_3^1 +\varepsilon^2 \xi\_3^2h^2 \alpha\_3^2+\cdots), \label{alfa3des}\\ \beta\_i &=&\beta\_i^0+\varepsilon \xi\_3 h \beta\_i^1 +\varepsilon^2 \xi\_3^2h^2 \beta\_i^2+\cdots, \quad (i=1,2) \label{betaides}\\ \beta\_3&=&\dfrac{ \xi\_3 }{ h}(\beta\_3^0+\varepsilon \xi\_3 h \beta\_3^1 +\varepsilon^2 \xi\_3^2h^2 \beta\_3^2+\cdots), \label{beta3des} \\ \gamma\_3&=&\dfrac{1}{\varepsilon h}, \quad \gamma\_1 = \gamma\_2 = 0, \label{gammades} \end{aligned}$$ where $$\begin{aligned} &&\alpha\_1^0=\dfrac{\|\vec{a}\_2\|^2}{A^0}=\dfrac{G}{EG-F^2}\label{alfa10}\\ &&\alpha\_1^1= \dfrac{\vec{a}\_2\cdot \dfrac{\partial \vec{a}\_{3}}{\partial \xi\_2}- \alpha\_1^0 A^1}{ A^0}=- \dfrac{g + \alpha\_1^0 A^1}{ A^0}\label{alfa11} \\ &&\alpha\_1^n= -\dfrac{\alpha\_1^{n-2} A^2+\alpha\_1^{n-1} A^1}{ A^0}, \quad n\geq 2 \label{alfa1n}\\ &&\alpha\_2^0=\beta\_1^0=-\dfrac{ \vec{a}\_2\cdot \vec{a}\_{1}}{ A^0} =-\dfrac{ F}{ A^0} \label{alfa20\_beta10}\\ &&\alpha\_2^1=\beta\_1^1 =- \dfrac{\vec{a}\_2\cdot \dfrac{\partial \vec{a}\_{3}}{\partial \xi\_1} +\alpha\_2^0 A^1}{ A^0}= \dfrac{f- \alpha\_2^0 A^1}{ A^0} \label{alfa21\_beta11}\\ &&\alpha\_2^n = \beta\_1^n= -\dfrac{ \alpha\_2^{n-2} A^2 + \alpha\_2^{n-1} A^1}{ A^0}, \quad n\geq 2 \label{alfa2n\_beta1n}\\ &&\alpha\_3^0=\dfrac{\dfrac{\partial h}{\partial \xi\_2} \vec{a}\_{1} \cdot \vec{a}\_2 - \dfrac{\partial h}{\partial \xi\_1} \|\vec{a}\_2\|^2 }{A^0} =\dfrac{\dfrac{\partial h}{\partial \xi\_2} F - \dfrac{\partial h}{\partial \xi\_1} G }{A^0} \label{alfa30}\\ &&\alpha\_3^1= \dfrac{\vec{a}\_2\cdot\left[\dfrac{\partial h}{\partial \xi\_2} \dfrac{\partial \vec{a}\_{3}}{\partial \xi\_1} - \dfrac{\partial h}{\partial \xi\_1} \dfrac{\partial \vec{a}\_{3}}{\partial \xi\_2}\right]-\alpha\_3 ^0 A^1 } { A^0} = \dfrac{-\dfrac{\partial h}{\partial \xi\_2} f + \dfrac{\partial h}{\partial \xi\_1} g-\alpha\_3 ^0 A^1 } { A^0} \label{alfa31}\\ && \alpha\_3^n=-\dfrac{\alpha\_3^{n-2} A^2 + \alpha\_3^{n-1} A^1} {A^0}, \quad n\geq 2 \label{alfa3n}\end{aligned}$$ $$\begin{aligned} &&\beta\_2^0=\dfrac{ \|\vec{a}\_{1}\|^2}{ A^0} =\dfrac{ E}{ A^0} \label{beta20}\\ &&\beta\_2^1 = \dfrac{\vec{a}\_1\cdot \dfrac{\partial \vec{a}\_{3}}{\partial \xi\_1} -\beta\_2^0 A^1}{ A^0} =- \dfrac{e +\beta\_2^0 A^1}{ A^0}\label{beta21}\\ &&\beta\_2^n =-\dfrac{ \beta\_2^{n-2} A^2 + \beta\_2^{n-1} A^1}{ A^0}, \quad n\geq 2 \label{beta22}\\ &&\beta\_3^0=\dfrac{\dfrac{\partial h}{\partial \xi\_1} \vec{a}\_{1} \cdot \vec{a}\_2 - \dfrac{\partial h}{\partial \xi\_2}\|\vec{a}\_1\|^2 }{A^0} =\dfrac{\dfrac{\partial h}{\partial \xi\_1} F - \dfrac{\partial h}{\partial \xi\_2}E }{A^0} \label{beta30}\\ &&\beta\_3^1= \dfrac{\dfrac{\partial h}{\partial \xi\_1} \left( \vec{a}\_1 \cdot\dfrac{\partial \vec{a}\_{3}}{\partial \xi\_2}\right) - \dfrac{\partial h}{\partial \xi\_2} \left( \vec{a}\_1 \cdot\dfrac{\partial \vec{a}\_{3}}{\partial \xi\_1}\right)-\beta\_3 ^0 A^1 } { A^0} = \dfrac{-\dfrac{\partial h}{\partial \xi\_1} f + \dfrac{\partial h}{\partial \xi\_2} e-\beta\_3 ^0 A^1 } { A^0} \label{beta31}\\ && \beta\_3^n=-\dfrac{\beta\_3^{n-2} A^2 + \beta\_3^{n-1} A^1} {A^0}, \quad n\geq 2 \label{beta3n}\end{aligned}$$ The substitution of the developments - and - in -, and the identification of the terms multiplied by the same power of *ɛ*, lead to a series of equations that will allow us to determine $\vec{\bf{u}}^0$, *p*− 2, etc. In what follows, we will use the standard summation convention that repeated indices indicate summation from 1 to 3, unless we indicate otherwise. In this way, we first identify the terms multiplied by *ɛ*− 3: $$-\dfrac{1}{\rho\_0}\dfrac{ \partial p^{-2}}{\partial \xi\_3} \dfrac{1}{h} {a}\_{3i}=0 \quad (i=1,2,3)$$ so we have $$\begin{aligned} \dfrac{ \partial p^{-2}}{\partial \xi\_3} =0\label{dp-2\_dxi30}\end{aligned}$$ As a second step, we identify the terms multiplied by *ɛ*− 2. Multiplying by *a⃗**i*,  (*i* = 1, 2, 3), we obtain: $$\begin{aligned} && \dfrac{\mu}{ h^2}\left( E\dfrac{ \partial^2 u\_1^0}{\partial \xi\_3^2}+ F\dfrac{ \partial^2 u\_2^0}{\partial \xi\_3^2}\right) = \dfrac{ \partial p^{-2} }{\partial \xi\_1} \label{u10u20p-2a}\\ && \dfrac{\mu}{ h^2}\left(F \dfrac{ \partial^2 u\_1^0}{\partial \xi\_3^2} + G \dfrac{ \partial^2 u\_2^0}{\partial \xi\_3^2}\right) = \dfrac{ \partial p^{-2} }{\partial \xi\_2}\label{u10u20p-2b}\\ &&\dfrac{ \partial p^{-1}}{\partial \xi\_3} \dfrac{1}{ h}= \dfrac{\mu}{ h^2}\dfrac{ \partial^2 u\_3^0}{\partial \xi\_3^2} \label{eps-2\_ecns3}\end{aligned}$$ The terms multiplied by *ɛ*− 1 in are: $$\dfrac{ \partial u\_k^0}{\partial \xi\_3} {a}\_{kj} \dfrac{1}{h} {a}\_{3j}= \dfrac{ \partial u\_3^0}{\partial \xi\_3} \dfrac{1}{h} =0 \label{du30\_xi3\_0}$$ and using this equality in, we deduce: $$\dfrac{ \partial p^{-1}}{\partial \xi\_3} =0\label{dp-1\_dxi30}$$ From the terms multiplied by *ɛ*− 1 in we obtain: $$\begin{aligned} &&\dfrac{\rho\_0 A\_0}{ h} \left( E \dfrac{ \partial u\_1^0}{\partial \xi\_3}+F \dfrac{ \partial u\_2^0}{\partial \xi\_3} \right) \left( u\_3^0 - \vec{a}\_3 \cdot\dfrac{\partial \vec{X}}{\partial t} \right)\nonumber \\ &&\hspace\*{+0.5cm}{}+\xi\_3 h\left[\dfrac{ \partial p^{-2}}{\partial \xi\_1} \left( G e -f F \right) +\dfrac{ \partial p^{-2}}{\partial \xi\_2} \left(f E -e F \right) \right] + \dfrac{\partial p^{-1}}{\partial \xi\_1} A^0 \nonumber \\ &&\hspace\*{+0.5cm}= \mu \dfrac{A^0}{ h^2}\left( \dfrac{ \partial^2 u\_1^1 }{\partial \xi\_3^2} E+ \dfrac{ \partial^2 u\_2^1 }{\partial \xi\_3^2} F\right)+ \dfrac{\mu A^1}{ h}\left(\dfrac{ \partial u\_1^0 }{\partial \xi\_3} E + \dfrac{ \partial u\_2^0 }{\partial \xi\_3} F\right)\label{ns1\_eps-1}\\ &&\dfrac{\rho\_0 A\_0}{ h} \left( F \dfrac{ \partial u\_1^0}{\partial \xi\_3} + G \dfrac{ \partial u\_2^0}{\partial \xi\_3}\right) \left(u\_3^0- \vec{a}\_3 \cdot\dfrac{\partial \vec{X}}{\partial t} \right)\nonumber \\ &&\hspace\*{+0.5cm}{}+\xi\_3 h\left[\dfrac{ \partial p^{-2}}{\partial \xi\_1} \left( f G - g F \right)+\dfrac{ \partial p^{-2}}{\partial \xi\_2} \left( E g -f F\right) \right] + \dfrac{\partial p^{-1}}{\partial \xi\_2}A^0\nonumber \\ &&\hspace\*{+0.5cm} = \mu \dfrac{A^0}{ h^2}\left( \dfrac{ \partial^2 u\_1^1 }{\partial \xi\_3^2} F + \dfrac{ \partial^2 u\_2^1 }{\partial \xi\_3^2} G \right)+ \dfrac{\mu A^1}{ h}\left(\dfrac{ \partial u\_1^0 }{\partial \xi\_3} F + \dfrac{ \partial u\_2^0 }{\partial \xi\_3} G \right)\label{ns2\_eps-1}\\ &&\dfrac{ \partial p^0}{\partial \xi\_3}h = \mu \dfrac{ \partial^2 u\_3^1 }{\partial \xi\_3^2} \label{dp0\_dxi3}\end{aligned}$$ Finally, the term of order 0 in is: $$\begin{aligned} &&\dfrac{1}{ h} \dfrac{ \partial u\_3^1}{\partial \xi\_3} =-\dfrac{\hat{A}\_1^0}{A^0}u\_1^0-\dfrac{\hat{A}\_2^0}{A^0}u\_2^0-\dfrac{A^1}{A^0}u\_3^0\nonumber\\ &&{}- \dfrac{ \partial u\_1^0}{\partial \xi\_1} - \dfrac{ \partial u\_2^0}{\partial \xi\_2} +\dfrac{ \xi\_3 }{ h } \left(\dfrac{ \partial u\_1^0}{\partial \xi\_3} \ \dfrac{\partial h}{\partial \xi\_1} + \dfrac{ \partial u\_2^0}{\partial \xi\_3}\dfrac{\partial h}{\partial \xi\_2} \right) \label{ec\_div\_eps0}\end{aligned}$$ where $$\begin{aligned} \hat{A}\_i^0 &=& \|\vec{a}\_2 \|^2\left(\vec{a}\_1 \cdot \dfrac{ \partial \vec{a}\_{i}}{\partial \xi\_1} \right)- ( \vec{a}\_1 \cdot \vec{a}\_{2})\left(\vec{a}\_2 \cdot \dfrac{ \partial \vec{a}\_{i}}{\partial \xi\_1} +\vec{a}\_1 \cdot \dfrac{ \partial \vec{a}\_{i}}{\partial \xi\_2}\right)+\|\vec{a}\_1\|^2 \left(\vec{a}\_2 \cdot \dfrac{ \partial \vec{a}\_{i}}{\partial \xi\_2} \right)\nonumber\\ &=& \dfrac{1}{2}G\dfrac{\partial E}{\partial \xi\_i}-\dfrac{1}{2}\dfrac{\partial F^2}{\partial \xi\_i}+ \dfrac{1}{2}E\dfrac{\partial G}{\partial \xi\_i} = \dfrac{1}{2}\dfrac{\partial(EG- F^2)}{\partial \xi\_i}= \dfrac{1}{2}\dfrac{\partial A^0}{\partial \xi\_i}, \quad (i=1,2). \label{Bi}\end{aligned}$$ A new generalized lubrication model =================================== Reynolds wrote, in 1886, a seminal work on lubrication theory (see ), where he introduced heuristically the Reynolds equation. This two-dimensional equation describing the stationary flow of a thin layer of fluid is considered to be the key element for modelling lubrication phenomena. Since then, we can find numerous works in which more general physical models have been considered. Most models dedicated to the study of thin film flow, specially in lubrication, are derived from the Stokes equation. These first works were focused on stationary models in which the gap and the boundary conditions were fixed with respect to time (see,, ). These assumptions were considered no longer valid in some devices, so variation with respect to time of the domain was introduced (see ). In the same way, in some cases, the inertial effects can not be ignored (see ), so the studies using Navier-Stokes equation, as ours, turned out to be relevant (see, for example). It was in 1959, in, when full Navier-Stokes equations were used firstly. Various boundary conditions for the velocity of the surfaces (see ), and other types of generalizations have also been studied (see,, or ). In this work, as we have stated in the previous section, we will use Navier-Stokes equations to derive a new generalized lubrication model. We are considering a three-dimensional thin domain, that varies with time, whose mean surface can be chosen without any restriction (in particular, neither the lower boundary surface, nor the upper boundary surface, need to be flat). With respect to boundary conditions, we assume that the fluid slips at the lower surface (*ξ*3 = 0), and at the upper surface (*ξ*3 = 1), but there is continuity in the normal direction, so the tangential velocities at the lower and upper surfaces are known, and the normal velocity of each of them must match the fluid velocity. $$\begin{aligned} u\_k^{\varepsilon} \vec{e}\_k = u\_k(\varepsilon)\vec{a}\_{k}&=& V\_1(\varepsilon) \vec{a}\_1+V\_2(\varepsilon) \vec{a}\_2+\left( \dfrac{\partial \vec{X}}{\partial t} \cdot \vec{a}\_3\right)\vec{a}\_3 \ \textrm{on }\xi\_3=0 \label{cc\_xi3\_0}\\ u\_k^{\varepsilon} \vec{e}\_k =u\_k(\varepsilon)\vec{a}\_{k}&=& W\_1(\varepsilon) \vec{a}\_1+ W\_2(\varepsilon) \vec{a}\_2+\left( \dfrac{\partial (\vec{X}+ \varepsilon h\vec{a}\_3)}{\partial t} \cdot \vec{a}\_3\right)\vec{a}\_3 \ \textrm{on }\xi\_3=1 \label{cc\_xi3\_1}\end{aligned}$$ where *V*1*a⃗*1 + *V*2*a⃗*2 is the tangential velocity at the lower surface and *W*1*a⃗*1 + *W*2*a⃗*2 is the tangential velocity at the upper surface. So we have, $$\begin{aligned} u\_k(\varepsilon)&=& V\_k(\varepsilon) \quad (k=1,2) \quad \textrm{on }\xi\_3=0 \label{cc\_uk\_xi3\_0}\\ u\_3(\varepsilon) &=& \dfrac{\partial \vec{X}}{\partial t} \cdot \vec{a}\_3\quad \textrm{on }\xi\_3=0 \label{cc\_u3\_xi3\_0}\\ u\_k(\varepsilon)&=& W\_k(\varepsilon) \quad (k=1,2) \quad \textrm{on }\xi\_3=1 \label{cc\_uk\_xi3\_1}\\ u\_3(\varepsilon)&=& \dfrac{\partial (\vec{X}+\varepsilon h\vec{a}\_3)}{\partial t} \cdot \vec{a}\_3 \quad \textrm{on }\xi\_3=1 \label{cc\_u3\_xi3\_1}\end{aligned}$$ If we assume, in the same way as in -, that $$\begin{aligned} && V\_i(\varepsilon) = V\_i^0 + \varepsilon V\_i^1 + \varepsilon^2 V\_i^2 + \cdots \quad (i=1,2) \label{ansatzcc0} \\ && W\_i(\varepsilon) = W\_i^0 + \varepsilon W\_i^1 + \varepsilon^2 W\_i^2 + \cdots \quad (i=1,2) \label{ansatzcc}\end{aligned}$$ we yield from -: $$\begin{aligned} u\_k^l&=& V\_k^{l} \quad (k=1,2, \quad l=0,1,2,\dots) \quad \textrm{on }\xi\_3=0 \label{cc\_ukl\_xi3\_0}\\ u\_k^l&=& W\_k^{l} \quad (k=1,2, \quad l=0,1,2,\dots) \quad \textrm{on }\xi\_3=1 \label{cc\_ukl\_xi3\_1}\\ u\_3^0 &=& \dfrac{\partial \vec{X}}{\partial t} \cdot \vec{a}\_3\quad \textrm{on }\xi\_3=0 \label{cc\_u30\_xi3\_0}\\ u\_3^l &=&0 \quad (l=1,2,\dots) \quad \textrm{on }\xi\_3=0 \label{cc\_u3l\_xi3\_0}\\ u\_3^0 &=& \dfrac{\partial \vec{X}}{\partial t}\cdot \vec{a}\_3\quad \textrm{on }\xi\_3=1 \label{cc\_u30\_xi3\_1}\\ u\_3^1&=& \dfrac{\partial ( h\vec{a}\_3)}{\partial t} \cdot \vec{a}\_3= \dfrac{\partial h}{\partial t} \quad \textrm{on }\xi\_3=1 \label{cc\_u31\_xi3\_1}\\ u\_3^l &=&0 \quad (l=2,3, \dots) \quad \textrm{on }\xi\_3=1 \label{cc\_u3l\_xi3\_1}\end{aligned}$$ From - we can deduce: $$\begin{aligned} &&\hspace\*{-0.2cm}\dfrac{ \partial^2 u\_2^0}{\partial (\xi\_3)^2} =\dfrac{h^2}{\mu A^0}\left( E \dfrac{ \partial p^{-2}}{\partial \xi\_2}- F \dfrac{ \partial p^{-2}}{\partial \xi\_1}\right)\\ &&\hspace\*{-0.2cm} \dfrac{ \partial^2 u\_1^0}{\partial (\xi\_3)^2} =\dfrac{h^2}{\mu A^0}\left(G \dfrac{ \partial p^{-2}}{\partial \xi\_1}- F \dfrac{ \partial p^{-2}}{\partial \xi\_2}\right)\end{aligned}$$ As *p*− 2 does not depend on *ξ*3 (see ), we can integrate the previous equations in *ξ*3 and impose - $$\begin{aligned} &&\hspace\*{-0.2cm} u\_1^0 =\dfrac{h^2 (\xi\_3^2-\xi\_3)}{2\mu A^0}\left( G \dfrac{ \partial p^{-2}}{\partial \xi\_1}- F \dfrac{ \partial p^{-2}}{\partial \xi\_2}\right)+\xi\_3(W\_1^{0}- V\_1^{0})+ V\_1^{0}\label{u1\_0\_lub}\\ &&\hspace\*{-0.2cm} u\_2^0=\dfrac{h^2 (\xi\_3^2-\xi\_3)}{2\mu A^0}\left(E\dfrac{ \partial p^{-2}}{\partial \xi\_2}- F \dfrac{ \partial p^{-2}}{\partial \xi\_1}\right)+\xi\_3 (W\_2^{0}-V\_2^{0})+ V\_2^{0} \label{u2\_0\_lub}\end{aligned}$$ From, and we know: $$u\_3^0 = \dfrac{\partial \vec{X}}{\partial t} \cdot \vec{a}\_3 \label{u30\_lub}$$ Now, we yield the following equation by substituting *u**i*0 (*i* = 1, 2, 3) into equation by their expressions -, integrating over *ξ*3 from 0 to 1, and evaluating by using and : $$\begin{aligned} &&\hspace\*{-0.5cm}\dfrac{ \partial }{\partial \xi\_1} \left[ \dfrac{h^3 }{ A^0}\left(G \dfrac{ \partial p^{-2}}{\partial \xi\_1}- F \dfrac{ \partial p^{-2}}{\partial \xi\_2}\right)\right] + \dfrac{ \partial }{\partial \xi\_2} \left[\dfrac{h^3 }{ A^0} \left( E \dfrac{ \partial p^{-2}}{\partial \xi\_2}- F \dfrac{ \partial p^{-2}}{\partial \xi\_1}\right)\right]\nonumber\\ &&{}=12\mu \dfrac{\partial h}{\partial t} + 12\mu\dfrac{h A^1}{A^0} \left(\dfrac{\partial \vec{X}}{\partial t} \cdot \vec{a}\_3\right)\nonumber\\ &&{}- \dfrac{h^3 \hat{A}\_1^0}{ (A^0)^2} \left( G \dfrac{ \partial p^{-2}}{\partial \xi\_1}- F \dfrac{ \partial p^{-2}}{\partial \xi\_2}\right)-\dfrac{h^3 \hat{A}\_2^0}{ (A^0)^2} \left( E \dfrac{ \partial p^{-2}}{\partial \xi\_2}- F \dfrac{ \partial p^{-2}}{\partial \xi\_1}\right) \nonumber\\ &&{}+6\mu\dfrac{h \hat{A}\_1^0}{A^0}(W\_1^{0}+ V\_1^{0}) - 6\mu\dfrac{\partial h}{\partial \xi\_1} (W\_1^{0}- V\_1^{0}) + 6\mu h\dfrac{ \partial }{\partial \xi\_1}(W\_1^{0} + V\_1^{0})\nonumber\\ &&{}+ 6\mu\dfrac{h \hat{A}\_2^0}{A^0} (W\_2^{0} + V\_2^{0}) - 6\mu \dfrac{\partial h}{\partial \xi\_2} (W\_2^{0}-V\_2^{0}) + 6\mu h \dfrac{ \partial }{\partial \xi\_2} (W\_2^{0} + V\_2^{0})\end{aligned}$$ If we denote by $$\begin{aligned} &&\textrm{div}(f\_1,f\_2)= \dfrac{ \partial f\_1}{\partial \xi\_1} + \dfrac{ \partial f\_2}{\partial \xi\_2}\label{div\_def} \\ &&\nabla f= \left( \dfrac{ \partial f}{\partial \xi\_1}, \dfrac{ \partial f}{\partial \xi\_2}\right) \label{grad\_def}\\ &&\vec{V}^{0}=(V\_1^{0},V\_2^{0}), \quad \vec{W}^{0}=(W\_1^{0},W\_2^{0}), \quad (\hat{A}\_1^0,\hat{A}\_2^0)=\dfrac{1}{2}\nabla A^0 \label{vectores}\\ &&M=\begin{pmatrix}G & -F\\ -F & E\end{pmatrix}\end{aligned}$$ and we take into account that $$\textrm{div}(\vec{\omega})+\dfrac{1}{2A^0}\nabla A^0 \cdot \vec{\omega} = \dfrac{1}{\sqrt{A^0}}\textrm{div}(\sqrt{A^0}\vec{\omega}) \label{div\_A0\_w}$$ we arrive at the equation: $$\begin{aligned} &&\hspace\*{-0.9cm} \dfrac{1}{\sqrt{A^0}} \textrm{div}\left(\dfrac{h^3 }{ \sqrt{A^0}} M \nabla p^{-2} \right) =12\mu \dfrac{\partial h}{\partial t} + 12\mu\dfrac{h A^1}{A^0} \left(\dfrac{\partial \vec{X}}{\partial t} \cdot \vec{a}\_3\right)\nonumber\\ &&\hspace\*{-0.5cm}{} - 6\mu \nabla h\cdot (\vec{W}^{0}-\vec{V}^{0}) + \dfrac{6\mu h}{\sqrt{A^0}} \textrm{div} (\sqrt{A^0}(\vec{W}^{0} + \vec{V}^{0})) \label{Reynolds\_gen}\end{aligned}$$ that can be considered a generalization of Reynolds equation. We claim that is a new generalized Reynolds equation because, if we consider the classic assumptions to derive Reynolds equations, we re-obtain the classic Reynolds equation from. For example, in,, and the domain considered is independent of time, *x*3 = 0 in, the upper surface is fixed ($\vec{W}=\vec{0}$) and the lower surface is moving in the *x*1-direction with constant velocity (*V⃗* = (*s*, 0)). Under these assumptions, we can choose a surface parametrization, *X⃗*, such that *E* = *G* = 1 and *F* = 0, and then equation writes as the classical Reynolds equation: $$\textrm{div}\left(h^3 \nabla p^{-2} \right) = 6\mu s \frac{\partial h}{\partial \xi\_1} \label{eq-Reynolds-clasica}$$ In time is taken into account, allowing *h* to depend on time, and then the term $\dfrac{\partial h}{\partial t}$ appears: $$\textrm{div}\left(h^3 \nabla p^{-2} \right) = 12\mu \frac{\partial h}{\partial t} + 6\mu s \frac{\partial h}{\partial \xi\_1} \label{eq-Reynolds-clasica-2}$$ The matrix *M* and the coefficients *A*0, *A*1, that appear in, depend only on the geometry of the surface parametrized by *X⃗*. In fact, the matrix $\dfrac{1}{A^0}M$ is the inverse of the matrix of the first fundamental form of *X⃗*, and the term $\dfrac{A^1}{A^0}=-2K\_{m}$ (see ). [rmk-b-c-p] Equation must be completed with boundary conditions at ∂*D*, usually the value of *p*− 2 at ∂*D*. [rmk-eq-h] Equation can be re-scaled, and then *p**ɛ* is approximated by *p*− 2, *ɛ* = *ɛ*− 2*p*− 2, solution of $$\begin{aligned} &&\hspace\*{-0.9cm} \dfrac{1}{\sqrt{A^0}} \textrm{div}\left(\dfrac{(h^{\varepsilon})^3 }{ \sqrt{A^0}} M \nabla p^{-2, \varepsilon} \right) =12\mu \dfrac{\partial h^{\varepsilon }}{\partial t^{\varepsilon }} + 12\mu\dfrac{h^{\varepsilon } A^1}{A^0} \left(\dfrac{\partial \vec{X}}{\partial t} \cdot \vec{a}\_3\right)\nonumber\\ &&\hspace\*{-0.5cm}{} - 6\mu \nabla h^{\varepsilon } \cdot (\vec{W}^{0}-\vec{V}^{0}) + \dfrac{6\mu h^{\varepsilon }}{\sqrt{A^0}} \textrm{div} (\sqrt{A^0}(\vec{W}^{0} + \vec{V}^{0})) \label{Reynolds\_gen\_resc}\end{aligned}$$ We must point out that the expression $$\dfrac{1}{\sqrt{A^0}}\textrm{div}(\sqrt{A^0}\vec{\omega})= \omega^1\_{,1}+\omega^2\_{,2}$$ is exactly the covariant divergence of *ω⃗*, where *ω*, *β**α* stands for the covariant devirative of *ω**α* with respect to *ξ**β*. A new thin fluid layer model ============================ Thin fluid layer models are widely used for the analysis and numerical simulation of a large number of geophysical phenomena, such as rivers or coastal flows and other hydraulic applications. Saint-Venant firstly derived in his paper a shallow water model, since then numerous authors have studied this type of models (see, for example,,, -,,, ), on many occasions using asymptotic analysis techniques to justify them (see,-). With this aim, in this section we will study what happens when, instead of considering that the tangential and normal velocities are known on the upper and lower surfaces, as we have done in -, we assume that the normal component of the traction on *ξ*3 = 0 and on *ξ*3 = 1 are known pressures, and that the tangential component of the traction on these surfaces are friction forces depending on the value of the velocities on ∂*D*. Therefore, we assume that $$\begin{aligned} &&\vec{T}^{\varepsilon}\cdot \vec{n}^{\varepsilon}\_0 = (\sigma^{\varepsilon} \vec{n}^{\varepsilon}\_0)\cdot \vec{n}^{\varepsilon}\_0=-\pi^{\varepsilon}\_0 \textrm{ on } \xi\_3=0, \label{Tn0\_xi3\_0}\\ &&\vec{T}^{\varepsilon}\cdot \vec{n}^{\varepsilon}\_1 = (\sigma^{\varepsilon} \vec{n}^{\varepsilon}\_1)\cdot \vec{n}^{\varepsilon}\_1=-\pi^{\varepsilon}\_1 \textrm{ on } \xi\_3=1, \label{Tn1\_xi3\_1} \\ &&\vec{T}^{\varepsilon}\cdot \vec{a}\_i= (\sigma^{\varepsilon} \vec{n}^{\varepsilon}\_0)\cdot \vec{a}\_i=-\vec{f}^{\varepsilon}\_{R0}\cdot \vec{a}\_i \textrm{ on } \xi\_3=0,\quad (i=1,2) \label{Tai\_xi3\_0}\\ &&\vec{T}^{\varepsilon}\cdot \vec{v}\_i^{\varepsilon}= (\sigma^{\varepsilon} \vec{n}^{\varepsilon}\_1)\cdot \vec{v}\_i^{\varepsilon}=-\vec{f}^{\varepsilon}\_{R1}\cdot \vec{v}\_i^{\varepsilon} \textrm{ on } \xi\_3=1,\quad (i=1,2) \label{Tvi\_xi3\_1}\end{aligned}$$ where *T⃗**ɛ* is the traction vector and *σ**ɛ* is the stress tensor given by $$\sigma^{\varepsilon}\_{ij}= -p^{\varepsilon}\delta\_{ij}+\mu \left( \dfrac{\partial u\_i^{\varepsilon}}{\partial x^{\varepsilon}\_j} + \dfrac{\partial u\_j^{\varepsilon}}{\partial x^{\varepsilon}\_i}\right), \quad (i,j=1,2,3)\label{sigmaij}$$ vectors *n⃗*0*ɛ*, *n⃗*1*ɛ* are, respectively, the outward unit normal vectors to the lower and the upper surfaces, that is $$\begin{aligned} &&\vec{n}^{\varepsilon}\_0=s\_0 \vec{a}\_3 \label{n0}\\ &&\vec{n}^{\varepsilon}\_1=-s\_0\dfrac{\vec{v}^{\varepsilon}\_3}{\|\vec{v}^{\varepsilon}\_3\|} \label{n1} \end{aligned}$$ where *s*0 =  − 1or *s*0 = 1 is fixed (*n⃗*0*ɛ* = *a⃗*3 or *n⃗*0*ɛ* =  − *a⃗*3, depending on the orientation of the parametrization *X⃗*), and $$\begin{aligned} &&\vec{v}^{\varepsilon}\_1=\vec{a}\_1+\varepsilon \left(\dfrac{\partial h}{\partial \xi\_1} \vec{a}\_3 + h \dfrac{\partial \vec{a}\_3}{\partial \xi\_1}\right) \label{v1}\\ &&\vec{v}^{\varepsilon}\_2=\vec{a}\_2+\varepsilon \left(\dfrac{\partial h}{\partial \xi\_2} \vec{a}\_3 + h \dfrac{\partial \vec{a}\_3}{\partial \xi\_2}\right) \label{v2}\\ &&\vec{v}^{\varepsilon}\_3=\vec{v}^{\varepsilon}\_1 \times \vec{v}^{\varepsilon}\_2 \label{v3\_p}\end{aligned}$$ From the identities -, we also have the equalities: $$\begin{aligned} &&\vec{v}^{\varepsilon}\_3=\vec{a}\_1 \times \vec{a}\_2 +\varepsilon \left[\dfrac{\partial h}{\partial \xi\_2} (\vec{a}\_1 \times \vec{a}\_3) + h \left(\vec{a}\_1 \times \dfrac{\partial \vec{a}\_3}{\partial \xi\_2}\right) + \dfrac{\partial h}{\partial \xi\_1} (\vec{a}\_3\times \vec{a}\_2) + h \left(\dfrac{\partial \vec{a}\_3}{\partial \xi\_1} \times \vec{a}\_2\right)\right]\nonumber\\ &&\hspace\*{0.5cm}+\varepsilon^2 \left[ \left(\dfrac{\partial h}{\partial \xi\_1} \vec{a}\_3 + h \dfrac{\partial \vec{a}\_3}{\partial \xi\_1}\right)\times\left(\dfrac{\partial h}{\partial \xi\_2} \vec{a}\_3 + h \dfrac{\partial \vec{a}\_3}{\partial \xi\_2}\right) \right]\label{v3}\\ &&\|\vec{v}^{\varepsilon}\_3\|=\|\vec{a}\_1 \times \vec{a}\_2\| + \varepsilon h \left[ \vec{a}\_3 \cdot \left(\vec{a}\_1 \times \dfrac{\partial \vec{a}\_3}{\partial \xi\_2}\right) +\vec{a}\_3 \cdot \left(\dfrac{\partial \vec{a}\_3}{\partial \xi\_1} \times \vec{a}\_2\right)\right] + O(\varepsilon^2)\label{mod\_v3}\end{aligned}$$ Typically, the friction force is of the form $$\vec{f}^{\varepsilon}\_{R\alpha} = \rho\_0 C\_R^\varepsilon \| \vec{{u}}^\varepsilon \| \vec{{u}}^\varepsilon \textrm{ on } \xi\_3=\alpha, \quad (\alpha=0,1)$$ where *C**R**ɛ* is a small constant. Let us assume that it is of order *ɛ*, that is, *C**R**ɛ* = *ɛ**C**R*1 Now, taking into account, -, we have the following development in powers of *ɛ*: $$\begin{aligned} \sigma\_{ij}(\varepsilon)&=& -\sum\_{r=-2}^{\infty} \varepsilon^r p^r \delta\_{ij} \nonumber\\ &+& \mu \sum\_{r=0}^{\infty} \varepsilon^r \left[ \left(\dfrac{ \partial u\_k^r}{\partial \xi\_l}{a}\_{ki} +u\_k^r \dfrac{ \partial {a}\_{ki}}{\partial \xi\_l} \right)\dfrac{ \partial \xi\_l}{\partial x^{\varepsilon}\_j} + \left(\dfrac{ \partial u\_k^r}{\partial \xi\_l}{a}\_{kj} +u\_k^r \dfrac{ \partial {a}\_{kj}}{\partial \xi\_l}\right)\dfrac{ \partial \xi\_l}{\partial x^{\varepsilon}\_i} \right] \nonumber\\ &=& -\varepsilon^{-2} p^{-2} \delta\_{ij}+ \varepsilon^{-1} \left\{ -p^{-1} \delta\_{ij} +\mu \left[ \dfrac{{a}\_{3j}}{ h} \dfrac{ \partial u\_k^0}{\partial \xi\_3}{a}\_{ki
arxiv_0000248
resolution, so the greatest effect would be on the Sersic indices of small, high $\mathrm{n\_{s}}$ ellipticals. The greater concern with these results is the relaxation that occurs in the bulges of progenitor spirals. The effective radius of the M31 model is 1.5 kpc, but most galaxies are scaled to smaller sizes than this, with 0.5 to 1 kpc bulge *R**e**f**f*. Moreover, in groups with larger numbers of galaxies, total particle counts are larger, but individual spirals can have as few as 60,000 stellar particles, of which only 20,000 are in the bulge. The bulge is partially stabilized (and flattened) by the disk, but the disk forms a core near the center of the galaxy, and so one might expect the behaviour of these compact, marginally resolved bulges to be similar to the Sersic plus halo models. We will now test this hypothesis with convergence studies of group mergers. Group Simulation Convergence ---------------------------- We test numerical convergence in the groups by running a selected sample with a factor of eight higher and lower resolution and comparing parameters after the usual elapsed times (5.0, 7.7 and 10.3 Gyr). As all of the groups are resolved with an average of over a million stellar particles, numerical convergence is expected to be good once groups have merged. However, as detailed above, the least massive spirals in more massive groups are not as well resolved, so not all groups are expected to be converged at our standard resolution. ### Parameter Recovery Fig. [fig:lreconv] shows convergence for several identical groups on the size-luminosity and size-\sigma relations after 10.3 Gyr. Central remnant luminosities are fairly constant across all resolutions, but low resolutions can have slightly lower values. Sizes and dispersions are larger at low resolutions. Both trends continue from fiducial/medium to high resolution, although it is not as extreme - sizes are usually not more than 10 percent smaller between medium and high resolution. Sersic indices are systematically lower at low resolution by a factor of 1 to 2 (Fig. [fig:nreconv]). The trend persists at high resolution, although $\mathrm{n\_{s}}$ typically increases by a smaller factor of 0.2 to 0.3 between mid to high resolutions. Of the four parameters tested, then, luminosity appears to be the most robust, while the Sersic index is most sensitive to resolution effects. The effects on sizes are too small to fully reconcile the mismatch between sizes of faint simulated galaxies compared to observed ellipticals (Fig. [fig:sizelum]). Dispersions generally decrease with increasing resolution, and so numerical effects also cannot explain the lower intercept of the simulated Faber-Jackson relation compared to that of observed ellipticals (Fig. [fig:fj]). In general, increasing resolution by a factor of eight produces similar trends in the group simulations as in isolated Sersic plus halo models - Sersic indices increase, while sizes and dispersions decrease. The effects are not very large going from our standard (medium) to high resolution but are considerable when stepping down to low resolution. We recommend that a minimum of a million stellar particles be used to adequately resolve spheroidal galaxies. While luminosities and masses remain converged at low resolution, sizes and dispersions are overestimated. Sersic indices are especially untrustworthy, being systematically offset lower by one or two from higher resolutions. Scaling Relations at Different Times ==================================== The scaling relations presented in §[subsec:scalerel] are nominally for a zero-redshift galaxy population, assuming evolution from z=2. We can instead consider scaling relations at younger ages, assuming a fixed formation time for all groups. This is equivalent to assuming evolution from z=1 or z=0.5, since the only initial redshift-dependent parameter in the initial conditions is the group size. One might also consider combining groups from different snapshots into a single sample to simulate a sample with galaxies of different ages; however, this is best left to purely cosmological initial conditions with known merger trees and formation times. Simulations, Sersic model L and *r**e**f**f*, Unweighted | B.$\mathrm{n\_{s}}$ | Time | Sample | Slope | Intercept | R.M.S. | | --- | --- | --- | --- | --- | --- | | All | 5.0 | All | 0.51  ±  0.01 | -4.73  ±  0.06 | 0.11 | | All | 7.7 | All | 0.53  ±  0.01 | -4.88  ±  0.05 | 0.11 | | All | 10.3 | All | 0.58  ±  0.01 | -5.32  ±  0.06 | 0.12 | | All | 5.0 | Many | 0.57  ±  0.01 | -5.29  ±  0.10 | 0.11 | | All | 7.7 | Many | 0.58  ±  0.01 | -5.30  ±  0.06 | 0.10 | | All | 10.3 | Many | 0.62  ±  0.01 | -5.69  ±  0.06 | 0.10 | | All | 5.0 | Few | 0.47  ±  0.01 | -4.36  ±  0.06 | 0.10 | | All | 7.7 | Few | 0.50  ±  0.01 | -4.61  ±  0.06 | 0.10 | | All | 10.3 | Few | 0.54  ±  0.01 | -4.96  ±  0.08 | 0.12 | [tab:lreffevol] With these caveats in mind, we now present predictions for the evolution of the slope and scatter of selected scaling relations assuming a fixed formation time for all groups. The best-fit relations measured in Tab. [tab:lreffevol] show slight evolution with time in the slopes (increasing) and intercepts (decreasing) and limited evolution in scatter. The steepening of the slope and lowering of the intercept would seem to suggest that brighter ellipticals grow off the relation at later times while fainter ellipticals grow slowly, if it all - in our case largely by construction, since the Few-merger sample does not have any late-time mergers. This interpretation is complicated by the fact that some of the largest groups do not have a relaxed, early-type central remnant formed in the earlier time steps and so are not included in the sample at earlier times but are included later on. Thus, as in most observational catalogs, not all of the descendants can necessarily be clearly identified with a previous early-type ancestor. Simulations, Sersic model L, Unweighted | B.$\mathrm{n\_{s}}$ | Time | Sample | Slope | Intercept | R.M.S. | | --- | --- | --- | --- | --- | --- | | All | 5.0 | All | 0.31  ±  0.00 | -1.08  ±  0.02 | 0.05 | | All | 7.7 | All | 0.30  ±  0.00 | -0.99  ±  0.02 | 0.04 | | All | 10.3 | All | 0.28  ±  0.00 | -0.86  ±  0.02 | 0.04 | | All | 5.0 | Many | 0.29  ±  0.00 | -0.89  ±  0.03 | 0.05 | | All | 7.7 | Many | 0.28  ±  0.00 | -0.83  ±  0.02 | 0.04 | | All | 10.3 | Many | 0.27  ±  0.00 | -0.72  ±  0.02 | 0.04 | | All | 5.0 | Few | 0.32  ±  0.00 | -1.21  ±  0.02 | 0.04 | | All | 7.7 | Few | 0.31  ±  0.00 | -1.09  ±  0.02 | 0.04 | | All | 10.3 | Few | 0.30  ±  0.00 | -0.98  ±  0.03 | 0.04 | [tab:fjevol] The best-fit Faber-Jackson relations measured in Tab. [tab:lreffevol] also show slight evolution of the slope, but in the opposite sense (decreasing/flattening), with a corresponding increase in the intercept. However, the scatter remains largely unchanged at 0.04 dex. --- 1. Available at http://www.mpa-garching.mpg.de/SDSS/DR7/[↩](#fnref1) Mergers in Galaxy Groups - I. Structure and Properties of Elliptical Remnants ============================================================================= We present collisionless simulations of dry mergers in groups of three to twenty-five galaxies to test the hypothesis that elliptical galaxies form at the centers of such groups. Mock observations of the central remnants confirm their similarity to ellipticals, despite having no dissipational component. We vary the profile of the original spiral’s bulge and find that ellipticals formed from spirals with exponential bulges have too low Sersic indices. Mergers of spirals with de Vaucouleurs (classical) bulges produce remnants with larger Sersic indices correlated with luminosity, as with SDSS ellipticals. Exponential bulge mergers are better fits to faint ellipticals, whereas classical bulge mergers better match luminous ellipticals. Similarly, luminous ellipticals are better reproduced by remnants undergoing many ( > 5) mergers, and fainter ellipticals by those with fewer mergers. The remnants follow tight size-luminosity and velocity dispersion-luminosity (Faber-Jackson) relations ( < 0.12 dex scatter), demonstrating that stochastic merging can produce tight scaling relations if the merging galaxies also follow tight scaling relations. The slopes of the size-luminosity and Faber-Jackson relations are close to observations but slightly shallower in the former case. Both relations’ intercepts are offset - remnants are too large but have too low dispersions at fixed luminosity. Some remnants show substantial (v/*σ* >  0.1) rotational support, although most are slow rotators and few are very fast rotators (v/*σ* >  0.5). These findings contrast with previous studies concluding that dissipation necessary to produce ellipticals from binary mergers of spirals. Multiple, mostly minor and dry mergers can produce bright ellipticals, whereas significant dissipation could be required to produce faint, rapidly-rotating ellipticals. Introduction ============ Merging of spiral galaxies is a promising mechanism for producing elliptical galaxies. Although it was perhaps not until that interacting spirals became widely accepted as elliptical progenitors, simulations of interacting spirals date back at least to and arguably as far as. Much of this work has focused on major mergers (mass ratio  < 3:1) of pairs of spiral galaxies on parabolic orbits. While such binary major mergers certainly are observed in the local universe - and will likely be the ultimate fate of the Milky Way and M31 - they may not be as common as minor hierarchical mergers. Observational evidence and numerical simulations suggest that most L\* galaxies are found in groups, where the central galaxy is likely to have experienced multiple mergers and have several surviving satellites. Furthermore, late-type galaxies in groups follow a Schechter luminosity function similar to those in other environments. Thus, if high-redshift groups are composed primarily of spiral galaxies, they are likely dominated by several bright spirals with a larger number of fainter satellites, as in our Local Group. Our hypothesis is that groups of three or more spiral galaxies with luminosity distributions following a Schechter function will naturally merge to produce a central elliptical galaxy, and possibly fainter satellites. We aim to test this hypothesis with numerical experiments. More specifically, we test whether the properties of the central galaxies formed through collisionless mergers in groups of spirals are consistent with observations of local ellipticals. This first paper in a series outlines our methodology and demonstrates that the results are both qualitatively and quantitatively different from both the more prevalent studies on binary mergers and also the less abundant literature on galaxy group mergers. We present results on morphological and kinematical measures as well as two dimensional scaling relations. Paper II will focus on the three dimensional fundamental plane scaling relation. To further motivate this endeavour, we will outline some of the key results of the past several decades of work in this field. and were amongst the first to produce simulations of mergers in groups of galaxies (10-20 each), using 20 and 100 particles per galaxy, respectively. introduced separate stellar and dark matter profiles, with 30 and 270 particles for each component, respectively. added stellar disks and bulges to the galaxy models, taking advantage of the newly invented N-body tree code to increase resolution to 4096 luminous and dark particles each. While the arrangement of orbits was somewhat artificial (a pair of triple systems each consisting of a binary orbiting a more massive single galaxy), the study showed that mergers of realistic galaxies in compact groups can be rapid and produce central remnants with de Vaucouleurs profiles and shell structures, similar to local ellipticals. A problem with the collisionless merger scenario was identified by. Collisionless mergers cannot increase central phase space density, which is necessary if disks are to merge and produce ellipticals with higher central densities than their progenitors, as is often the case. directly addressed this issue with simulations of binary mergers of spirals with bulges, which, unlike mergers of bulgeless spirals, are capable of producing sufficiently centrally dense remnants. extended this methodology to mergers of groups of six equal-mass spirals. The resolution of these simulations was a factor of 18 higher than, with almost 150,000 particles per galaxy, and included a comparison sample of pair mergers. Group mergers were shown to produce remnants with some rotation, in contrast to the non-rotating remnants typical of dry binary mergers. Both varieties of mergers produced early-type galaxies well fit by de Vaucouleurs profiles. However, group mergers with bulges did not maintain centrally concentrated profiles, exhibiting the same low central phase space density as found in bulgeless pair mergers. This may be seen as once again disfavoring the group merging scenario; however, it should be noted these mergers were of equal-mass galaxies and hence of very large mass ratios, which would tend to maximize this problem. Furthermore, argues that ellipticals have lower central densities (’cores’) than expected from inward extrapolation of their outer surface brightness profiles, which is consistent with the group merging scenario and essentially the opposite of the central phase space density problem, although attribute these cores to scouring of inner stars by inspiraling supermassive black holes rather than purely stellar dynamical processes. Since, the relatively rapid pace of advancements in the field of simulations of merging in galaxy groups has slowed, with the focus shifting to studies of hydrodynamical processes in group environments. This can be partially attributed to the findings of that collisionless binary mergers are unable to produce ellipticals following a tilted fundamental plane relation, although found an appreciable tilt by merging spirals sampled from an appropriate Schechter luminosity function. also found that collisionless binary mergers could only produce very slow-rotating remnants and that dissipation was required to produce significantly rotationally-supported ellipticals. However, between and, very few studies have tested whether these results apply to multiple collisionless mergers as well. Galaxy clusters and starbursts in groups have been considered. simulated consecutive mergers of spheroidal galaxies, roughly approximating hierarchical group merging. While this approach has provided useful estimates of the growth of stellar mass and size, the use of purely spheroidal progenitors is questionable given the prevalence of disks at high redshift. combined the results of binary merger simulations with cosmological merger trees, using empirical halo occupation models and spiral scaling relations to predict the evolution of early-type scaling relations. However, the models only incorporated multiple mergers by allowing binary merger remnants sufficient time to relax dynamically, whereas find that halos undergoing multiple mergers are likely to have two mergers in quick succession. Galaxies in groups are thuse likely to undergo multiple mergers within a relatively short periods rather than a steady stream of isolated mergers. More recently, fully cosmological simulations of mergers in a group or several groups of galaxies have been performed by, for example, and. Such simulations naturally incorporate hierarchical merging, typically by using the ’zoom-in’ method of re-simulating a small sub-volume of a large dark matter-only cosmological simulation at higher resolution. and demonstrated that ellipticals can form in groups, with minor mergers being an important mechanism in controlling the evolution of the central galaxy size and mass. showed success in producing not only a central elliptical but early-type satellites by simulating a single group using this method. However, such ab initio simulations encounter difficulties achieving sufficient spatial resolution to produce realistic spiral galaxies. Typical softening lengths in such simulations are between 500 to 1000 pc, which significantly alters the inner profile of ellipticals, especially at low masses. Increasing the resolution can mitigate this problem but also greatly increases computational cost and limits the possible sample size to a few groups. It is clear that there is a gap in the literature on multiple mergers in groups, even though multiple mergers likely create brightest cluster galaxies. By contrast, observations of elliptical galaxies have advanced tremendously in recent years, providing public catalogs of morphologies of thousands of nearby ellipticals and spirals alike, mainly based on Sloan Digital Sky Survey (SDSS) images. The SAURON project and its volume-limited successor survey Atlas3D have provided integral-field kinematics of hundreds of early type galaxies at a comparable resolution to SDSS. It is increasingly necessary to match the large samples of observations with simulations and explore the vast parameter space of conditions in galaxy groups. To meet the requirement of a large simulation sample, it is currently necessary to focus on dry merging and gravitational dynamics alone. Hydrodynamical simulations are more computationally expensive and add numerous parameters to initial conditions: disk gas fractions, gas disk scale heights and lengths relative to the stellar disk, the presence of a gaseous halo, etc. More importantly, existing literature has yet to establish the effects of collisionless gravitational dynamics in group mergers on central remnant structure. There is observational evidence suggesting that dry merging contributes significantly to the growth of massive galaxies, particularly ellipticals. Even if exclusively dry merging is not the most common mechanism for forming ellipticals, many ellipticals will have experienced at least one dry merger in their lifetimes and it is instructive to ask what collisionless dynamics alone would predict before moving on to hydrodynamical processes. The remainder of the paper is structured as follows: §[sec:simulations] motivates and details the methods used in creating the simulations, while §[sec:analysis] details the analysis methodology and pipeline, with additional tests presented in App. [app:analysistesting]. A more detailed examination of numerical convergence can be found in App. [app:numerics]. Key results on morphology, scaling relations and kinematics of central remnants are presented in §[sec:results]. The implications of these results on theories of elliptical galaxy formation are detailed in §[sec:discussion], with reference to prior studies on the subject. The conclusions are summarized in §[sec:conclusions]. Simulations =========== The simulations are designed to extend the methodology of binary galaxy merger simulations to groups of galaxies. This section details the parameters of the group sample (§[subsec:icgroupsample]), as well as the two key ingredients in the initial conditions: group configuration (§[subsec:icgroupconfig]) and galaxy models (§[subsec:icgalaxymodels]). Finally, the code and parameters used for the simulations are described in §[subsec:simcode]. Our choice of initial condition parameterization is designed to evenly sample the parameter space of groups which are likely to produce a central elliptical remnants, rather than be a unbiased sampling of real, nearby galaxy groups. This approach is similar to that used in binary mergers simulations, in which the orbits are typically nearly parabolic, with some cosmologically-motivated distribution of pericentric distance and disk alignment. In our case, we model group-sized halos at the turnaround radius at z=1-2, such that the subhalos are likely to contain spiral galaxies which will eventually merge to form one central elliptical. We use two galaxy models designed to reproduce the surface brightness profile and rotation curve of M31 and scale these models according to the Tully-Fisher relation. The only parameter we vary between the models is the profile of the bulge, which has a substantial impact on the structure of the merger remnant ( §[subsec:morphology]). While this approach does not reproduce the variety of spiral galaxies found in the local universe, let alone at high redshift, it maintains the simplicity of the initial conditions. We do not vary the bulge fraction in the progenitors, as pre-formed bulges are required to produce sufficient central densities in the merger remnant. We discuss these choices further in §[sec:discussion]. Although the simulations are nominally scale-free, as with any system of units having G=1, our simulations assume units of length in kpc, velocity in 100 $\mathrm{km s^{-1}}$, time in 9.73 million years and mass in 2.325 × 109*M*⊙. The luminosity function sampling and initial group radius impose a unique, preferred scaling to each simulation, such that mergers of groups with the same number of galaxies but different luminosities are not simply re-scaled versions of each other. Group Sample ------------ We create groups with total luminosities from 0.1-10L\* and masses between 2 × 1011 − 2 × 1013*M*⊙. We incorporate several basic assumptions consistent with observations and cosmological simulation predictions. More massive groups contain more galaxies on average, with galaxies preferentially located closer to the center of the group. The group as a whole is initially collapsing, with galaxies located within *R**m**a**x* = 2 × *R*200, *z* = 2 but having insufficient orbital energy to prevent collapse (i.e. the groups are sub-virial). We simulate each group configuration twice, with each simulation containing either spiral galaxies with exponential bulges or classical bulges (but not both), referring to the former sample as B.n*s*=1 and the latter as B.n*s*=4 for short. There are 3 sets of simulations, each with different random seeds for the initial conditions. Each set has 7 target luminosity (or mass) bins, ranging from 1/8 to 8 L\* and increasing by factors of 2. Each bin contains 8 groups, for a total of 3 × 7 × 8 = 168 simulations, of which 3 × 7 × 2 = 42 are mergers of spirals with equal masses, while the remaining 126 are sampled from a realistic luminosity function. Since each simulation is run twice (with different spiral bulge profiles), there are nominally 336 simulations, but only 168 different sets of galaxy masses and orbits. Each group has a number of galaxies between *N**m**i**n* = 3 and *N**m**a**x* = 2 + (5/6) × 10 ⋅ (*L*/*L* \* )1/2. Within each group luminosity bin, the number of galaxies in each simulation varies linearly from the minimum (3) to the maximum, so that L\* groups have between 3 and 10 galaxies and the largest groups have 25 galaxies. This range of galaxy numbers roughly covers the number of bright galaxies one would expect in poor groups. The mass range covered by the groups is 2.0 × 1011*M*⊙ to 3.0 × 1013*M*⊙. cccccc Group Mass (M\*) & N*m**i**n* & N*m**a**x*,F & N*m**i**n*,M & N*m**a**x* & 1/8 & 3 & 3 & 4 & 4 1/4 & 3 & 4 & 5 & 6 1/2 & 3 & 5 & 6 & 8 1 & 3 & 6 & 7 & 10 2 & 3 & 7 & 9 & 13 4 & 3 & 9 & 12 & 18 8 & 3 & 12 & 16 & 25 [tab:ngals] We further subdivide the sample into groups with relatively many mergers (the Many-merger or ’M’ subsample) or relatively few (Few-merger or ’F’). The groups in each mass bin with the three lowest initial galaxy counts are part of the F subsample, while the groups with the three largest initial galaxy counts qualify for the M subsample. Because the maximum number of galaxies changes in each mass bin, the dividing line between the Many-merger and Few-merger subsamples depends on mass and is not a fixed number of galaxies or mergers. Each mass bin also contains two groups with equal-mass galaxies (’Eq’), one with three galaxies (’F-Eq’) and the other with the same number of galaxies - N*m**i**n*,M - as the fourth group in the LF-sampled simulations (i.e., the group in the ’M’ subsample with the fewest galaxies). The number of galaxies in a representative number of groups is listed in Tab. [tab:ngals]. Group Configuration ------------------- Once the target luminosity and number of galaxies are selected, each group is initialized through the following steps: 1. Randomly select luminosities for all of the galaxies from a restricted range of the spiral galaxy luminosity function. 2. Set the maximum radius within which to spawn galaxies, *R**m**a**x* = 2*R*200, *z* = 2. 3. Place the most luminous galaxy in the center. 4. Place all other galaxies in order of decreasing luminosity. 5. Compute the group’s gravitational potential energy. 6. Assign random velocities to the satellite galaxies, applying an inward and radial bias and re-selecting any velocities with *v* > *v**e**s**c*. Galaxy luminosities are randomly sampled from the inclination- and extinction-corrected spiral luminosity (Schechter) function of. For the r-band, the faint-end slope *α* =  − 1.26 and *M* \* *r* =  − 20.99 + 5log(*h* = 0.71), or *M* \* *r* =  − 21.73, which is nearly identical to our standard M31 model’s absolute magnitude of *M**r* =  − 21.69. We set a minimum luminosity of 0.01*L* \*  for the spirals, as we do not expect the luminosity function to continue to arbitrarily faint magnitudes. Luminosities are drawn from a restricted range of the luminosity function with a width equal to (*N**g**a**l**s* + 2)/10 dex, such that the integral under the curve is equal to the target group luminosity. This limited range produces groups with smaller magnitude differences between the brightest central galaxy and the next brightest satellite, making major mergers more likely - especially in groups with few galaxies. We avoid simulating groups with a single luminous spiral and several much fainter satellites, since these groups would only produce relatively minor mergers and would be unlikely to produce ellipticals. The 42 groups with equal-mass spirals (’Eq’) are not sampled from an LF and instead have exactly the target luminosity, split evenly between three or a larger number of galaxies. Once a luminosity is determined for each group galaxy, the galaxies are randomly assigned locations within the group in order of decreasing luminosity. The most luminous galaxy is placed at the center of the halo, while subsequent galaxies are given a random radius with a likelihood inversely proportional to radius, i.e., *ρ* ∝ *r*− 1, and with *r* < *R**m**a**x*, where *R**m**a**x* = 2*R*200, *z* = 2. *R*200, *z* = 2 is the radius at which a volume enclosing the total group mass has a mean density of 200 times the critical density at z=2. Next, a random polar and azimuthal angle is given. The minimum distance between galaxies is set by 0.5 × *R**m**a**x* \* (*M**g**a**l**a**x**y*/*M**g**r**o**u**p*)1/3, which allows galaxy halos to be in contact but not overlap significantly. All of the galaxies are given preferentially inward and radial orbits. The group itself is sub-virial to ensure collapse and no satellite is given a speed ∣*v*∣ > *v**e**s**c**a**p**e*. This is accomplished by giving each group a target virial ratio *Q**t**a**r**g**e**t* =  − 2*T*/*W* = 0.5, where W is the gravitational potential energy of the group. This is equivalent to a zero-energy parabolic orbit in a galaxy pair, where T=-W. The group velocity dispersion is determined by *Q**t**a**r**g**e**t*: *σ* = ( − *Q* × (*W*/*a*)/*M*)0.5, where *a* = 3*s* − 2*β* and *β* is an orbital anisotropy parameter. Each galaxy’s radial velocity is then *σ* \* *s*, where s is a number randomly selected from a unit Gaussian centered on s=0.5. On average more than 70% of galaxies will have inward radial velocities. The azimuthal and polar velocities are given by *σ*(*s* × (1 − *β*)0.5), where *β* = 0.5 directs most of the velocity of the galaxy radially. In two of the three sets of simulations, the initial conditions in groups of similar mass (1/8 to 1/2, 1 to 4, and 8) are correlated, in the sense that galaxy positions are seeded in the same order (but not individual galaxy masses or orbits). This is intended to test the effect of adding additional galaxies to otherwise similar initial conditions. In the third set, all of the initial conditions are completely randomized. We note no statistically significant differences between the partially and completely random initial conditions for the relations presented in this paper; however, we will note some differences in the fundamental plane parameters in Paper II. Finally, each galaxy has its own massive, extended dark halo. In practice, the these individual halos overlap in their outer regions, leaving little or no ’empty’ space between galaxies. We have also experimented with including a separate dark matter halo for the group, not associated with any particular galaxy, but find that group galaxies then (unrealistically) merge with this invisible dark halo rather than with each other. Galaxy Models ------------- Initial spiral galaxy models are created using the GalactICS galaxy initial condition code. This code generates equilibrium models of galaxies with a bulge, disk and halo through spherical harmonic expansions of analytic potentials. Although the models begin in equilibrium and do not require additional time to settle, they have been tested in isolation. All models remain (statistically) unchanged for at least a Hubble time, even at the lowest resolution (55,000 particles per galaxy), which the vast majority of galaxies exceed. The models are similar to the ’M31c’ model of, with some parameters adjusted following the approach of to better reproduce the surface brightness profile and rotation curve of M31. M31 was chosen as a well-studied, nearby spiral having a sufficiently massive bulge to produce concentrated merger remnants. The models are bar-stable and contain a massive, non-rotating bulge, as well as a dark matter halo. The first variant uses a nearly exponential bulge with $\mathrm{n\_{s}}=0.93$, which will be referred to as exponential for convenience. A second variant uses an $\mathrm{n\_{s}}=4$ de Vaucouleurs or classical bulge but otherwise identical parameters. The halo density profile is designed to match an NFW profile at large radii and smoothly drop to zero at large radii. The halo has a 6.07 kpc scale radius, *ρ* ∝ *r*− 1 inner cusp and *r*− 2.3 outer slope, an outer radius of 300 kpc and a total mass of 1185 units, or 2.75 × 1012*M*⊙. This profile produces a 30:1 ratio in dark:baryonic (stellar) mass, which is a factor of two larger than estimates for M31 and the Milky Way, but smaller than global estimate for the universal dark:stellar mass ratio. The disk has a 5.8 kpc scale radius and a 750 pc sech^2 scale height, equivalent to a 375 pc exponential scale height. The disk is cut off past 6 scale radii, or 35 kpc, for a total mass of 25 simulation units, or 5.8 × 1010*M*⊙. We adopt a disk stellar mass-to-light ratio of (*M*/*L**r*)*D* = 3.4. Each bulge has a 1.5 kpc effective radius. The exponential bulge and de Vaucouleurs bulge have masses of 14.75 units (3.4 × 1010*M*⊙) and 15 units (3.5 × 1010*M*⊙), respectively. The bulge-to-total mass ratio *B*/*T**M* is about 33% in both models, larger than the 20-30% estimate for the R-band B/T ratio *B*/*T**R* in M31. The models could compensate by using a lower (*M*/*L**r*)*B*; however, the bulge kinematics favor a lower value of 1.9, which we adopt here. While (*M*/*L**r*)*B* does not affect the simulations, the resulting bulge-to-total light ratio *B*/*T**r* is 50%, and so mock images are more strongly weighted to bulge stars than disk stars. The rotation curve for both models is shown in Fig. [fig:m31rotationcurve]. The bulge dominates within the inner 4-5 kpc and the halo thereafter, with the disk contribution typically half that of the halo. The non-maximal disk is both consistent with recent observations of spiral galaxies (see for a review) and also promotes bar stability. Although the exponential bulge can be torqued into a bar, the intrinsic bar stability means that any remnant properties such as rotation are a result of the merging process and not secular instabilities. The rotation curve of the M31 models used in these simulations. The rotation curve is dominated by the bulge inside in the inner 5 kpc and by the halo thereafter. The more concentrated $\mathrm{n\_{s}=4}$ bulge also produces a sharper, inner peak in the rotation curve at 1 kpc. [fig:m31rotationcurve] To scale our model to different masses, we multiply all masses by a factor *m* while retaining the same *M*/*L**R*. Velocities are scaled by *m*0.29, assuring that the galaxies follow a Tully-Fisher relation *V* ∝ *L*0.29. To maintain virial equilibrium (*R* ∝ *M*/*σ*2, or, log(*R*) ∝ log(*M*) − 2log(*σ*)), particle distances from the center of the galaxy are scaled by a factor of *R* ∝ *M*/*σ*2 ∝ *m*1 − 2 × 0.29, or, *R* ∝ *m*0.42. As a result, surface brightness scales weakly with mass - *L*/*R*2 ∝ *m*0.16 - consistent with the Tully-Fisher relation’s assumption of nearly constant effective and/or central surface brightnesses. We do not incorporate scatter into the input galaxy scaling relations, so that scatter in the merger remnant scaling relations is both a lower limit and dependent on the formation process (merging) and bulge profile, rather than an additional input parameter like the Tully-Fisher relation’s scatter. Similarly, we use the same bulge fraction for all galaxies. We deliberately avoid using bulgeless disks, as existing literature (e.g., ) shows that bulgeless disk mergers do not produce sufficiently high central densities. We will further discuss the implications of these choices in §[sec:discussion]. The lowest resolution model has 5,000 bulge, 10,000 disk and 40,000 halo particles, for a 1:2:8 bulge:disk:halo ratio, and 15,000 stellar particles. More massive galaxies have larger particle counts by factors of two, up to a maximum of 480,000 disk particles. Most groups have at least three galaxies with 60,000 stellar particles and only a few tens of galaxies have fewer than 30,000 stellar particles. By scaling resolution this way, stellar particles all have the same mass within a factor of three, while dark particles are not more than 10 times more massive than star particles, limiting spurious numerical artifacts. App. [app:numerics] discusses the effects of numerical resolution in greater detail; in summary, this resolution is more than sufficient for the more massive galaxies and adequate for the least massive satellites. Simulation Code and Parameters ------------------------------ Each group is simulated for 10 Gyr with the parallel N-body tree code PARTREE. Figures [fig:initcondimages] and [fig:finalimages] show a typical evolution for one such group. The simulations use 52,000 fixed timesteps of 0.02 units - about 195,000 years - and a softening length (spatial resolution) of 100 pc. We use an opening angle of *θ* = 0.9 with forces computed to quadrupole order. While this opening angle is somewhat larger than typical values of 0.7 to 0.8, PARTREE calculates forces between nearby particles in different trees directly, eliminating the source of the largest force errors. For *θ* = 1.0, PARTREE has been shown to produce median force errors under 0.2%, with 90% of force errors under 0.5% ; force errors with *θ* = 0.9 are considerably smaller. Analysis ======== The simulations are analyzed at three different epochs after 5.0, 7.7 and 10.3 Gyr, which correspond to formation redshifts of about 0.5, 1, and 2, respectively, if one assumes that the group formed at t=0 Gyr. Since the only redshift-dependent parameter in the initial conditions is the maximum radius of the group, analysis of the same group at different epochs is equivalent to assuming a different age for the group. Also, since galaxies are given an initial separation sufficient to prevent their halos from overlapping significantly, it typically takes 1-2 Gyr for the first mergers to occur. Groups with fewer galaxies complete the merger process after another 2-3 Gyr and so are not sensitive to the choice of formation time, while groups with more galaxies continue slowly accreting lower-mass satellites and growing even after 10 Gyr. Although we do not introduce additional galaxies into the simulation to mimic cosmological accretion, we note that the long merging time for less massive galaxies still allows for late-time mergers in richer groups. Analysis Pipeline ----------------- Once the simulations are complete, we create mock r-band photometry and kinematics of each group at the three different epochs, placing the group at a mock redshift of 0.025 (about 100 Mpc away). In brief, we create SDSS-like photometry of the central galaxy out to 8 effective radii, including a sky background and appropriate signal-to-noise ratio. We use GALFIT   to fit a single Sersic profile to each galaxy. We also use GALMORPH to fit a de Vaucouleurs profile to the sky- and satellite-subtracted image, both for comparison to general Sersic fits and to the de Vaucouleurs fits of. Finally, we create spatially resolved kinematics at the same scale, and use these maps to measure kinematical quantities within the central region and the effective radius of the central galaxy. Although our simulations do not resolve faint satellites particular well, our pipeline is able to recover the properties of the central ellipticals with precision comparable to SDSS observations. Simulations are processed with our own imaging pipeline, which is intended to create images of the central galaxy in each group equivalent to those produced by the SDSS. We convert mass to luminosity to create nominal r-band images, using fixed stellar mass-to-light ratios for the bulge and disk components. We then extract a one-dimensional profile of the central galaxy in circular bins, masking out the central regions of satellite galaxies. A single Sersic profile is then fit to produce a rough estimate of the effective radius of the central galaxy (*R**e**f**f*, *e**s**t*). Example mock image of the major axis projection of the central galaxy from a typical L\* group after 10 Gyr of simulation. The image shows SDSS-equivalent r-band photometry down to the mean sky level, overlaying contours from the image itself (dashed, black) and the best fit GALFIT Sersic model (solid, black). The gray ellipse shows the effective radius but with no boxiness parameter altering its shape. The image is 29 kpc or 150 SDSS pixels across. [fig:pipeline] Next, we create a FITS image out to 8*R**e**f**f*, *e**s**t* around the central galaxy. The image is smoothed by a point spread function (PSF) with a full-width at half-maximum (FWHM) of 1.43 arcseconds, typical for SDSS r-band observations. Galaxies are imaged at a mock redshift of *z**o**b**s* = 0.025, typical for the SDSS spectroscopic sample used in. Fig. [fig:pipeline] shows an example image of a typical galaxy. The pixel scale is identical to that used by SDSS, 0.396 arcsec/pixel. Most importantly, we add a sky background with both a mean surface brightness and variations comparable to SDSS observations. In the r band, the mean sky value is 20.86 and variations are Gaussian distributed with a standard deviation of 2.65%, equivalent to the SDSS asinh zero-flux magnitude of 24.80 (which itself was chosen to be approximately 1-sigma of typical sky noise). We also create maps of the projected dark matter distribution using the same pixel scale (but no PSF). In addition to photometry, we create kinematical maps of the first four moments of the luminosity-weighted velocities of particles in each pixel (velocity, r.m.s. velocity dispersion *σ*, and v3 and v4). Although we do smooth these maps by the same PSF and use the same pixel scale as the photometry, we do not add a sky background or any instrument-specific noise. We do not perform any fitting to the kinematic quantities, choosing r.m.s. velocity dispersions rather than fitting any profiles, and so the kinematical maps remain largely instrument-agnostic beyond the choice of pixel scale and PSF. The maps can then be used both to measure central velocity dispersions and spatially resolved kinematics, comparing to SDSS and Atlas3D respectively. Finally, we create an error map for the photometry, which will be used to perform *χ*2 minimization in fitting profiles to the galaxies. The error is the square root of the luminosity in each pixel multiplied by some constant factor, which scales the signal-to-noise ratio across the image. The constant itself is simply related to the image exposure time, given a certain zero-point equivalent to 1 count per second (for SDSS r-band, this is about 26.7) and mean sky variation. This scheme contrasts with, e.g.,, and other simulations which use the square root of the number of particles in each pixel as the error. The per-pixel errors do not scale directly with the resolution of the simulation but should instead converge with increasing resolution. Similarly, setting a non-zero floor to the error map ensures that pixels with no signal are not ignored in the fit, which is necessary since the absence of a signal is meaningful. For each galaxy, we create images in 10 randomly oriented but evenly spaced projections. These are the ten projections passing through opposite faces of a regular icosahedron, but arbitrarily rotated with respect to the central galaxy. We also use the three projections corresponding to estimates of the principal axes of the central galaxy. We fit every galaxy in the image with a single Sersic profile using GALFIT. Sufficiently large galaxies (including the central galaxy) fit a boxiness parameter (C0) as well, which allows for elliptical isophotes to vary from diamond-shaped (C0  <  0) to rectangular- or box-shaped (C0  >  0). For highly inclined disks with a bulge, this can also provide a better fit than an unmodified ellipse. The GALFIT fits also include a fixed sky background equal to the mean sky brightness. We do not allow for the sky value to vary, as doing so would result in over-fitting the sky, a common problem in observations. Since different surveys and even different data releases of the SDSS have employed various methods for fitting sky backgrounds, we opt to avoid the difficulty of reproducing each methodology and simply fit with the known mean sky value. This does not, however, remove the pixel-to-pixel variation in sky brightness, which sets the effective limiting surface brightness in the image. We use the GALFIT fits to create a sky- and satellite-subtracted image of the central galaxy in each frame. This image is used to measure various quantities, including alternative non-parametric half-light radii. We also use GALMORPH to fit a single de Vaucouleurs profile to this sky-substracted image. This provides a direct comparison to the methodology used in, with the caveat that our use of GALFIT to fit the profiles of satellite galaxies may not match the exact methodology employed in masking nearby sources in SDSS or other surveys. Photometric and Kinematic Measures ---------------------------------- Sizes and luminosities of the central remnants are measured several different ways. The preferred luminosity measure is the total luminosity within the deconvolved model image of the central galaxy, roughly equivalent to model magnitudes in SDSS and other surveys. For comparison, we also measure several other sizes and luminosities, including non-parametric Petrosian radii (see for the SDSS implementation and for analysis thereof). A thorough analysis of the suitability of these measures is presented in App. [app:analysistesting]. Kinematical maps are used to measure the velocity distributions - mean velocities (for rotation measures), dispersions, and higher order moments. Generally, we use central dispersions within 1/8 *R**e**f**f* and rotation measures within *R**e**f**f*. Velocity dispersions in the central remnants do vary radially, generally dropping from peak central values. Integral field surveys such as Atlas3D can measure dispersions out to 0.5 to 1 *R**e**f**f*, whereas fiber dispersions from SDSS are measured within fixed angular diameters, and hence variable fractions of *R**e**f**f*. Aperture corrections are often applied to fiber dispersions to convert them to a fixed fraction of *R**e**f**f*, with 1/8 *R**e**f**f* a typical choice for SDSS observations. However, we find that central dispersions are nearly identical to effective dispersions (within 1 *R**e**f**f*), with most galaxies lying on a linear relation and only a handful of outliers, so aperture corrections are not necessary for the simulations. The central velocity dispersions in simulations can be artificially depressed by softening of the gravitational potential. We mask out the central 300 pc to compensate, and measure central dispersions within 1/8 *R**e**f**f* where possible. For the few galaxies where 1/8 *R**e**f**f* is smaller than 300 pc, we enlarge the aperture by factors of 1/8 *R**e**f**f* until a reliable estimate is obtained. We have also considered the kinetic energy measure $S=\sqrt{\sigma^2+v^2}$, or equivalently $S=\sigma \times \sqrt{1+(v/\sigma)^2}$. This is a more accurate measure of the stellar kinetic energy for galaxies with significant rotation. However, most simulated galaxies do not have sufficient rotational support for this correction to be significant, and there are not yet any large samples of galaxies with published dispersions and *v*/*σ* to compare to. Results ======= The main results presented in this paper are the morphologies and kinematics of central group galaxies. Although we do fit satellites as well, this is mainly to exclude their contribution from the profile of the central galaxies. Few satellite galaxies are sufficiently well resolved to recover sizes and Sersic indices accurately, but we only require their total luminosities to be recovered and subtracted from the central galaxy’s profile. Unless otherwise noted, all radii measured with elliptical annuli are $\sqrt{(a \times b)}$, where a and b are the major and minor axis lengths. Observational Comparisons ------------------------- Our results are compared to three published data sets for nearby galaxies. The Atlas3D survey is a volume-limited integral field unit survey of the kinematics of 260 nearby early-type galaxies. A3D provides kinematical maps with a pixel size about twice as large as that of SDSS. This is mitigated by the larger aperture and very low redshifts (z < 0.01) of the sample as compared to our nominal mock sample redshift (0.025). Sersic profile fits are also available from, with photometry from a variety of sources but typically comparable to or better than SDSS. published three different profile fits for over a million SDSS galaxies. We use the single Sersic decompositions for direct comparison and the free Sersic (bulge) plus exponential (disk) decompositions for diagnostic purposes. Although these fits were performed with a different code - GIM2D - the fitting procedure is similar to our GALFIT fits. We select galaxies with spectroscopic redshifts 0.01 < *z* < 0.3 to ensure availability of a reliable V$\mathrm{\_{max}}$ volume correction term. We use the logarithmic median velocity dispersion between two sources - the SDSS DR7 and Princeton measurements (both included in DR7, ). Stellar masses are based on the MPA-JHU DR7 catalog [1](#fn1), using fits to the multi-band photometry. Detailed visual classification of of nearly 6,000 early-type galaxies from SDSS with z < 0.1 is provided by. Volume corrections are applied with the standard 1/V$\mathrm{\_{max}}$ weighting scheme. Profile fits from this catalog include Petrosian sizes from the SDSS pipeline and Sersic fits from S+11. Although the original catalog of N+10 contained over 14,000 galaxies, eliminating bad fits and unmatched/misclassified objects provides just over 11,000 galaxies, of which nearly 5,000 are early-types. We exclude all SDSS galaxies with extreme velocity dispersions (\sigma < 20 $\mathrm{km s^{-1}}$or \sigma > 400 $\mathrm{km s^{-1}}$) or effective radii smaller than 0.3 kpc. Where visual classifications are available (A3D, N+10) we select galaxies with Hubble T-types less than 0 as early-types. T-types less than -3 are included in the elliptical sample while the remainder are classified as S0s. The majority of the S+11 sample does not have visual classifications, other than the small subset classified by N+10. We adopt a series of empirical cuts similar to those of to identify early-type galaxies, testing these against the N+10 subset. The early-type sample contains galaxies with $\mathrm{n\_{s} > 1}$, and – from the disk plus free $\mathrm{n\_{s}}$ fits – r-band bulge to total luminosity ratio *B*/*T**r* > 0.4, disk inclination less than 63 degrees, and bulge $\mathrm{r\_{eff} > 0.5 kpc}$. Early-types must also have a spectroscopic eclass value less than -0.1 (see, but note that the sign convention in SDSS is opposite), which selects galaxies with spectra consistent with a passive population. This early-type sample is subdivided into an elliptical subset, which imposes further cuts based on the single Sersic fits: g-band image smoothness S2 <0.08, or g-band image smoothness S2 <0.12 and $\mathrm{B/T\_{r} < 0.6}$. These cuts are similar to those suggested by to select early-type galaxies from morphology alone, but also serve to decrease contamination from S0s and early-type spirals in the elliptical sample. All galaxies not classified as early-type but meeting the dispersion and *R**e**f**f* cut are identified as spirals. | S+11 Subsample | N+10 Es | N+10 S0s | N+10 Spirals | | --- | --- | --- | --- | | Es | 1874 | 1095 | 93 | | S0s | 98 | 350 | 323 | | Spirals | 193 | 1265 | 5272 | | Unclassified | 13 | 54 | 604 | | Total | 2178 | 2764 | 6292 | [tab:morphcuts] The samples obtained by applying these cuts to the N+10 catalog are listed in Tab. [tab:morphcuts]. The elliptical sample is 86% complete. While it is only 61% pure, the contamination mainly comes from S0s and not spirals. No cuts appear to be able to reliably classify S0s, which contaminate both elliptical and spiral samples. In principle, we could instead use the S+11 cuts on the N+10 sample rather than relying on visual classifications at all; however, visual classifications are repeatable and fairly robust (see for comparisons to previous classifications), and as seen in A3D, there are significant differences in rotational support between the elliptical and S0 population, even if no automated morphological classification can separate them. Additional weightings are necessary to compare these catalogs to our own simulations, which probe a range of about 5 in absolute magnitude and have a nearly flat luminosity function. We produce r-band luminosity functions for each sample, then weight by the ratio of the simulated luminosity function to the observed one. Elliptical and S0 subsamples use all simulated galaxies versus E/S0 classifications from observed catalogs - i.e., we do not morphologically classify simulated merger remnants. This weighting procedure turns each observational sample into a catalog with equal numbers of galaxies at each luminosity, directly comparable to our simulations. Although the weightings are not vital for tight scaling relations like the fundamental plane, they are necessary for fair comparisons of weaker correlations and histograms marginalizing over luminosity. Morphology ---------- As detailed in §[subsec:analysismethods], the central galaxies are fit with a single Sersic profile. Each profile has six free parameters (in addition to the two coordinates for the centre of the galaxy): the Sersic index $\mathrm{n\_{s}}$, an effective half-light radius *r**e**f**f*, a surface brightness at this radius, an ellipticity *ε*, a position angle, and a boxiness parameter C0 modifying the shape of the ellipse from diamond (negative C0) to box-shaped (positive C0). Ellipticals have long been known to be best fit by larger Sersic indices than disks, to have small ellipticities and several correlations between size, luminosity and Sersic index. Any satellite galaxies in the image are also fit with a single Sersic profile. ### Sersic Indices  Figure [fig:sersicnhist] shows various histograms of the Sersic index distribution for the B.n*s*=1, B.n*s*=4 and B.n*s*=all samples. Each individual bulge type produces a narrow distribution of Sersic indices. The B.n*s*=4 sample’s distribution is narrower and peaked at a larger value of $\mathrm{n\_{s}}=5$ than the observational distributions. The B.n*s*=1 sample’s peak at $\mathrm{n\_{s}}=3$ is significantly lower than those of N+10 and A3D ellipticals, and the distribution is narrower still than that of B.n*s*=4. The combined B.n*s*=all sample’s $\mathrm{n\_{s}}$ distribution is nearly bimodal due to this separation and approximately twice as broad as B.n*s*=1 alone. By contrast, most observed distributions are unimodal, although the S+11 distributions show a larger peak of high $\mathrm{n\_{s}}$ galaxies, which is only reproduced in the B.n*s*=4 sample. There is also a hint of bimodality in the S0 distributions, which we have diminished by setting a lower limit of $\mathrm{n\_{s}}=1$. The peak of the S0 distribution is best reproduced by the B.n*s*=1, but, as will be demonstrated in §[subsubsec:rotation], the remnants’ rotational support is far lower than that of typical S0s. None of the simulation samples can reproduce the width of the observed S0 distributions. Although each of the B.n*s*=1 and B.n*s*=4 samples are individually a poor fit to the elliptical data - particularly being too narrow of a distribution - the naive linear combination of the two (B.N*s*=all) provides a better match. The B.n*s*=all sample is also a better match to the elliptical distributions than the S0, the latter of which tend to smaller Sersic indices. While it is not a particularly realistic distribution - assuming that half of the groups in the universe contain galaxies with only exponential bulges while the other half contain de Vaucouleurs bulges - we will elaborate on the implications for more realistic bulge profile distributions in §[sec:discussion]. The difference in Sersic index between the Many- and Few-merger subsamples is small in the L.F.-sampled case but is maximized at about 0.5 for equal-mass mergers. Furthermore, the distributions of the Few-, equal-mass merger remnants in the different bulge samples are sufficiently narrow that the combined B.n*s*=all, Few-merger subsample is distinctly bimodal. Thus, it appears that multiple mergers are sufficient to broaden the distributions of remnant Sersic indices, but sampling progenitors from a realistic luminosity function can accomplish the same purpose, even with relatively few mergers. Major axis projections of central remnants have systematically larger Sersic indices than the medial or minor axis projections (bottom left and middle panels of Fig. [fig:sersicnhist]), with the peak of the distribution shifted by about 1. Medial and minor axes have nearly identical distributions, even though their ellipticities and semi-major axes are not necessarily the same. As Fig. [fig:lsersicn] shows, the variation in Sersic index for a single galaxy over different viewing angles is not usually much larger than one (and often smaller), so projections aligned near the major axis appear to produce the largest $\mathrm{n\_{s}}$ profiles. Only B.n*s*=4 mergers produce a correlation between luminosity and $\mathrm{n\_{s}}$, as shown in Fig. [fig:lsersicn]. This is partly a result of more massive ellipticals being produced by more mergers. In both B.n*s* samples, Many-merger remnants tend to have larger $\mathrm{n\_{s}}$ at fixed luminosity. However, in the B.n*s*=4 sample, even the Few-merger subsample shows a small positive slope in Sersic index, whereas the trend is flat or even slightly negative for B.n*s*=1. The overall trend is dependent both on the initial bulge profile and the number of mergers. A positive dependence of merger rate on halo mass is a prediction of ΛCDM (e.g. ). Exponential bulges, however, are simply not concentrated enough to create merger remnants with $\mathrm{n\_{s}} > 4$, even with repeated merging. Thus, luminous ellipticals are unlikely to be the product of only exponential bulge mergers. The degree of agreement between simulations and observations is difficult to judge, since the observational samples do not completely agree. The N+10 $\mathrm{n\_{s}}$-L relation appears to flatten above $\mathrm{10^{10}L\_{\odot}}$. This could be due to the larger redshift range of the S+11 sample; however, we find that GALFIT-derived Sersic fits to mock images at higher redshift tend to fit lower Sersic indices, so this systematic trend would have to be reversed in observed ellipticals to explain the shift. ### Ellipticities Ellipticities of the remnants are on the average slightly larger than observed elliptical samples but lower and more sharply peaked than S0s.  Fig. [fig:ehist] shows that there is only a small difference between the Many- and Few-merger subsamples, while there is about a 0.05 shift towards rounder remnants from the B.n*s*=4 to B.n*s*=1 samples. On the whole the distributions are not unreasonable, lying closer to observations of ellipticals than of S0s, while lacking the tail of highly elliptical shapes found in S0s. Although the Many-merger remnants are slightly rounder on average than the Few-merger, the difference is not large even in equal-mass mergers. This is somewhat surprising, considering that the progenitor galaxy orbits are nearly isotropic and should tend to produce spheroidal remnants as the number of mergers increases. We will elaborate on this point further in §[sec:discussion]. The intrinsic ellipticities of the remnants along the principal axis projections are also shown in the bottom left and middle panel of Fig. [fig:ehist]. The distributions are consistent with the remnants being triaxial, with the median value in each projection being both different than the others and greater than zero. The smallest axis ratios are found for the minor axis projection, which would be the case for ellipsoids closer to prolate than oblate. Most galaxies have a medial axis ellipticity of around 0.4, with few being rounder than 0.2, indicating that almost all galaxies have a significantly shorter minor axis than the major axis. In addition to having larger Sersic indices, brighter galaxies trend toward smaller ellipticities and rounder shapes (Fig. [fig:lell]). This trend might be expected for more luminous galaxies with many mergers. If the orbits of the merging galaxies are isotropically distributed, the resulting remnant should be close to spherical. Such a trend is present in the simulations, although it appears stronger for the B.n*s*=4 sample. Much of the scatter in the relation appears to be due to projection effects of the inherently triaxial simulated galaxies, although median ellipticities show significant scatter as well. The B.n*s*=1 sample also appears to have few very round (*ε* < 0.1) remnants, especially at low luminosities. Scaling Relations ----------------- ### Size-Luminosity/Stellar Mass and Kormendy Relations The size-luminosity relation of merger remnants after 10 Gyr. Each point shows one of the principal axis projections. Light (green) lines connect different projections of the same galaxy. Darker (red and black) lines connect the same projection for groups with different progenitor bulge profiles but otherwise identical initial conditions. The light (green) lines can be viewed as contributions to scatter in the relation from projection effects, while the darker lines show differences from progenitor galaxies. [fig:sizelum]  Fig. [fig:sizelum] shows the Sersic model size-luminosity relation for principal axis projection of simulated galaxies after 10.3 Gyr, connecting otherwise identical groups with different spiral bulges. All relations have very small scatter. Part of the scatter is caused by the B.n*s*=1 sample having smaller sizes (a real effect) and lower luminosities (partly a real effect, but largely systematic, as will be shown in App. [app:analysistesting]). Regardless, both projection effects and different progenitor bulge profiles contribute to the scatter in the relation. Simulations: Ten equally-spaced projections, randomly oriented | B.$\mathrm{n\_{s}}$ | Subsample | Slope | Intercept | R.M.S. | | --- | --- | --- | --- | --- | | 1 | Unweighted | 0.53  ±  0.01 | -4.89  ±  0.07 | 0.10 | | 1 | Weighted | 0.58  ±  0.01 | -5.36  ±  0.06 | 0.10 | | 1 | Many | 0.55  ±  0.01 | -5.08  ±  0.05 | 0.07 | | 1 | Few | 0.50  ±  0.01 | -4.67  ±  0.08 | 0.10 | | 4 | Unweighted | 0.61  ±  0.01 | -5.66  ±  0.10 | 0.12 | | 4 | Weighted | 0.67  ±  0.01 | -6.20  ±  0.07 | 0.12 | | 4 | Many | 0.65  ±  0.01 | -5.95  ±  0.08 | 0.09 | | 4 | Few | 0.55  ±  0.01 | -5.11  ±  0.11 | 0.11 | | All | Unweighted | 0.58  ±  0.01 | -5.32  ±  0.06 | 0.12 | | All | Weighted | 0.63  ±  0.01 | -5.92  ±  0.06 | 0.12 | | All | Many | 0.62  ±  0.01 | -5.69  ±  0.06 | 0.10 | | All | Few | 0.54  ±  0.01 | -4.96  ±  0.08 | 0.12 | Principal axis projections, unweighted | B.$\mathrm{n\_{s}}$ | Projection | Slope | Intercept | R.M.S. | | --- | --- | --- | --- | --- | | 1 | Major axis | 0.55  ±  0.02 | -5.12  ±  0.19 | 0.10 | | 1 | Minor axis | 0.52  ±  0.02 | -4.70  ±  0.21 | 0.10 | | 4 | Major axis | 0.60  ±  0.02 | -5.53  ±  0.24 | 0.11 | | 4 | Minor axis | 0.58  ±  0.02 | -5.25  ±  0.29 | 0.11 | Observations | Cat. | Type | Weight | Slope | Intercept | R.M.S. | | --- | --- | --- | --- | --- | --- | | S+11 | E | N | 0.85  ±  0.00 | -8.34  ±  0.01 | 0.12 | | S+11 | E | Y | 0.80  ±  0.00 | -7.82  ±  0.02 | 0.12 | | N+10 | E | N | 0.66  ±  0.01 | -6.40  ±  0.10 | 0.09 | | N+10 | S0 | N | 0.63  ±  0.01 | -6.03  ±  0.11 | 0.12 | | N+10 | E | Y | 0.65  ±  0.02 | -6.29  ±  0.21 | 0.09 | | N+10 | S0 | Y | 0.61  ±  0.02 | -5.82  ±  0.24 | 0.12 | | A3D | E | N | 0.72  ±  0.03 | -6.90  ±  0.35 | 0.12 | | A3D | S0 | N | 0.57  ±  0.05 | -5.41  ±  0.24 | 0.12 | | A3D | E | Y | 0.72  ±  0.04 | -6.97  ±  0.65 | 0.12 | | A3D | S0 | Y | 0.60  ±  0.05 | -5.70  ±  0.44 | 0.12 | [tab:sizelum] Table [tab:sizelum] lists best-fit Sersic model size-luminosity relations for simulations and observations alike, obtained by least-squares minimization of the orthogonal scatter. In all of the simulation samples, the scatter is relatively small at about 0.1 dex. The scatter does not appear to be mainly due to projection effects or combining progenitors. Fits to major axis projections have similar scatter to the ten equidistant but randomly aligned projections. Similarly, though some groups show projection-dependent sizes and luminosities, these variations are smaller than the scatter in median values, and are likely a result of the mild correlation between Sersic index and luminosity of projections of the same galaxy (evidenced in Fig. [fig:lsersicn]). If sizes and luminosities are generally accurate to within 10-20% or 0.04-0.08 dex, as suggested by our testing, then some of the scatter could be intrinsic. The scatter in the unweighted simulation data is comparable to that in observed ellipticals (slightly larger than N+10), while the slope is considerably shallower and the intercepts larger. Separate fits to the Many- and Few-merger subsamples show a large difference of 0.05 to 0.1 in slope. Also, as Fig. [fig:sizelum] demonstrates, the Many-merger subsample is larger at fixed luminosity than the few merger sample. Thus, the slope of the predicted relation can be maximized by giving a larger weight to luminous, Many-merger remnants (and a smaller weight to faint galaxies), while applying the opposite weighting to groups of relatively few galaxies, such that their weights are largest at low luminosities. We apply such a weighting in Tab. [tab:sizelum] and find that it can steepen the slope of the size-luminosity relation further than even the Many-merger subsample alone, bringing it close to observed values for N+10 but still short of S+11 and A3D. Table [tab:sizelum] also lists values for observational data, with both 1/V$\mathrm{\_{max}}$ corrections and optional weighting to match the luminosity function of the simulations. This weighting scheme only makes a significant difference in the S+11 sample - otherwise, most scaling relations are insensitive to weighting method, as one would expect if they are truly linear with uniform scatter. Some curvature may exist at the low- or high-luminosity extremes, but it is unclear whether it is real or systematic. Simulations: Ten equally-spaced projections, randomly oriented | B.$\mathrm{n\_{s}}$ | Subsample | Slope | Intercept | R.M.S. | | --- | --- | --- | --- | --- | | 1 | Unweighted | 0.50  ±  0.01 | -4.62  ±  0.06 | 0.10 | | 1 | Weighted | 0.54  ±  0.01 | -5.06  ±  0.05 | 0.10 | | 1 | Many | 0.52  ±  0.01 | -4.81  ±  0.06 | 0.08 | | 1 | Few | 0.48  ±  0.01 | -4.43  ±  0.09 | 0.09 | | 4 | Unweighted | 0.49  ±  0.01 | -4.47  ±  0.05 | 0.10 | | 4 | Weighted | 0.54  ±  0.01 | -4.99  ±  0.06 | 0.10 | | 4 | Many | 0.52  ±  0.01 | -4.72  ±  0.07 | 0.08 | | 4 | Few | 0.46  ±  0.01 | -4.22  ±  0.07 | 0.09 | | All | Unweighted | 0.49  ±  0.00 | -4.50  ±  0.04 | 0.10 | | All | Weighted | 0.54  ±  0.01 | -5.04  ±  0.04 | 0.10 | | All | Many | 0.52  ±  0.00 | -4.76  ±  0.06 | 0.08 | | All | Few | 0.47  ±  0.01 | -4.34  ±  0.06 | 0.09 | Observations | Cat. | Type | Weight | Slope | Intercept | R.M.S. | | --- | --- | --- | --- | --- | --- | | N+10 | E | N | 0.62  ±  0.01 | -6.04  ±  0.07 | 0.08 | | N+10 | S0 | N | 0.57  ±  0.01 | -5.53  ±  0.07 | 0.10 | | N+10 | E | Y | 0.58  ±  0.01 | -5.61  ±  0.22 | 0.08 | | N+10 | S0 | Y | 0.54  ±  0.01 | -5.14  ±  0.09 | 0.10 | [tab:petrosizelum] The Petrosian *R*50 size-luminosity relation (Tab. [tab:petrosizelum]) shows smaller scatter than Sersic sizes, despite the fact that uncorrected Petrosian half-light radii systematically underestimate the luminosities of pure Sersic profiles and simulated galaxies alike. This is especially true for the B.n*s*=4 sample, which has slightly lower scatter than B.n*s*=1 mergers, despite having greater systematic errors on *R**e**f**f* due to its larger mean $\mathrm{n\_{s}}$. The slopes are still shallower than those observed in N+10, but the difference can shrink to less than 0.05 if considering weightings for both simulations and observations. The implications of these results will be discussed further in §[sec:discussion]. | Cat. | Type | Weight | Slope | Intercept | R.M.S. | | --- | --- | --- | --- | --- | --- | | S+11 | E | N | 0.78  ±  0.00 | -7.95  ±  0.02 | 0.13 | | S+11 | E | Y | 0.75  ±  0.00 | -7.53  ±  0.03 | 0.13 | | N+10 | E | N | 0.64  ±  0.01 | -6.39  ±  0.06 | 0.09 | | N+10 | S0 | N | 0.57  ±  0.01 | -5.61  ±  0.09 | 0.12 | | N+10 | E | Y | 0.60  ±  0.02 | -5.89  ±  0.20 | 0.09 | | N+10 | S0 | Y | 0.48  ±  0.02 | -4.63  ±  0.17 | 0.13 | [tab:sizemstellar] The best-fit relations between size and stellar mass for the S+11 and N+10 catalogs are listed in Tab. [tab:sizemstellar]. The slopes are slightly shallower than those for the size-luminosity relations and closer to (but not quite matching) those predicted by the simulations, which do not have significant variations in the stellar mass-to-light ratio. Thus some of the tension between the slopes of the simulated and observed size-luminosity relations can be resolved by accounting for the variable stellar mass-to-light ratio of observed galaxies, which increases in more luminous observed ellipticals but is nearly constant by construction in the simulated remnants. | Cat. | Type | Weight | Slope | Intercept | R.M.S. | | --- | --- | --- | --- | --- | --- | | N+10 | E | N | 0.59  ±  0.01 | -5.97  ±  0.09 | 0.09 | | N+10 | S0 | N | 0.50  ±  0.01 | -4.922  ±  0.08 | 0.11 | | N+10 | E | Y | 0.52  ±  0.02 | -5.21  ±  0.21 | 0.09 | | N+10 | S0 | Y | 0.41  ±  0.01 | -4.03  ±  0.11 | 0.12 | [tab:petrosizemstellar] The Petrosian size-stellar mass relation shows slightly shallower slope, as with Sersic models. In fact, the slope and scatter of the weighted simulations (0.52 and 0.09) are within the quite small bootstrap errors (0.01) of the weighted observations (0.51 and 0.09), while the intercept is higher (-4.81 versus -5.08) but still also within the more generous error bars. Thus, it is entirely possible to match the slopes, and, to a lesser extent, the intercepts of the size-mass relation, depending on the fitting technique and sample weights. However, this alone does not justify either weighting scheme. The observational scheme is reasonable, since matching luminosity functions is necessary in order to make a fair comparison. The simulation scheme is not as well justified, since the number of mergers per group is somewhat arbitrary. The Kormendy relation, shown in Fig. [fig:kormendy], has large scatter and shallow slope, especially for the B.n*s*=1 relation, which is nearly flat. None of the observed relations are quite linear. While the kink at small sizes is likely a systematic artifact, the curvature near 5-6 kpc appears more robust and also more significant than the equivalent curvature in the size-luminosity relation. As in the size-luminosity relation, it appears as if the simulated galaxies are either too faint for their size or too large to be so faint. Interestingly, the relation for large ellipticals appears to asymptote towards the slope of constant luminosity ($\mathrm{d\log(\mu\_{e})/d\log(R\_{eff})}$ = 5), which suggests that bright ellipticals can grow significantly in size without adding a large amount of stellar mass. In fact, many BCGs have exceptionally large effective radii and faint mean surface brightnesses. However, most of the similar simulated remnants in Fig. [fig:sizelum] are mergers of many equal-mass spirals, rather than luminosity function-sampled remnants - without this M-Eq subsample, the simulated Kormendy relation is rather weak, especially for the B.n*s*=1 sample. ### Faber-Jackson Relation [fig:fj] Simulations: Ten equally-spaced projections, randomly oriented | B.$\mathrm{n\_{s}}$ | Subsample | Slope | Intercept | R.M.S. | | --- | --- | --- | --- | --- | | 1 | Unweighted | 0.28  ±  0.00 | -0.85  ±  0.02 | 0.04 | | 1 | Weighted | 0.27  ±  0.00 | -0.68  ±  0.02 | 0.05 | | 1 | Many | 0.27  ±  0.00 | -0.68  ±  0.03 | 0.03 | | 1 | Few | 0.30  ±  0.00 | -0.98  ±  0.04 | 0.04 | | 4 | Unweighted | 0.29  ±  0.00 | -0.90  ±  0.02 | 0.04 | | 4 | Weighted | 0.27  ±  0.00 | -0.74  ±  0.02 | 0.04 | | 4 | Many | 0.27  ±  0.00 | -0.75  ±  0.02 | 0.03 | | 4 | Few | 0.30  ±  0.00 | -0.99  ±  0.03 | 0.04 | | All | Unweighted | 0.28  ±  0.00 | -0.86  ±  0.02 | 0.04 | | All | Weighted | 0.27  ±  0.00 | -0.71  ±  0.02 | 0.05 | | All | Many | 0.27  ±  0.00 | -0.72  ±  0.02 | 0.04 | | All | Few | 0.30  ±  0.00 | -0.98  ±  0.03 | 0.04 | Principal axis projections, unweighted | B.$\mathrm{n\_{s}}$ | Projection | Slope | Intercept | R.M.S. | | --- | --- | --- | --- | --- | | 1 | Major axis | 0.27  ±  0.01 | -0.70  ±  0.06 | 0.04 | | 1 | Minor axis | 0.30  ±  0.01 | -1.01  ±  0.06 | 0.04 | | 4 | Major axis | 0.27  ±  0.01 | -0.69  ±  0.05 | 0.04 | | 4 | Minor axis | 0.30  ±  0.01 | -1.12  ±  0.06 | 0.04 | Observations | Cat. | Type | Weight | Slope | Intercept | R.M.S. | | --- | --- | --- | --- | --- | --- | | S+11 | E | N | 0.27  ±  0.00 | -0.63  ±  0.01 | 0.08 | | S+11 | E | Y | 0.30  ±  0.00 | -0.92  ±  0.02 | 0.08 | | N+10 | E | N | 0.28  ±  0.01 | -0.67  ±  0.10 | 0.07 | | N+10 | S0 | N | 0.36  ±  0.01 | -1.57  ±  0.09 | 0.10 | | N+10 | E | Y | 0.37  ±  0.01 | -1.67  ±  0.31 | 0.08 | | N+10 | S0 | Y | 0.48  ±  0.03 | -2.85  ±  0.20 | 0.11 | [tab:fj] The Faber-Jackson (velocity dispersion-luminosity, ) relation is shown in Fig. [fig:fj], with best fits tabulated in Tab. [tab:fj]. The simulated relations have slopes fairly close to the observations, though the intercepts are significantly lower. The turnover or curvature at low velocity dispersions ( < 100 $\mathrm{km s^{-1}}$) is likely not entirely real, since such low dispersions are near the spectrograph’s resolution limit and unlikely to be reliable. The luminosity function weightings make a significant difference in slope for the N+10 sample, which is likely due to this same curvature. The scatter appears to be mostly due to projection effects at the low-luminosity end but increases at high luminosities, where the Many- and Few-merger samples appear to diverge. The most robust conclusions from the data are that the slope for the S0 sample is significantly steeper than that for ellipticals, which in turn is slightly steeper than the canonical slope of 0.25, depending on the weighting scheme used. The scatter in the simulated relations is also significantly lower than in the observations, even when both bulge samples are combined. | Cat. | Type | Weight | Slope | Intercept | R.M.S. | | --- | --- | --- | --- | --- | --- | | S+11 | E | N | 0.26  ±  0.00 | -0.60  ±  0.01 | 0.07 | | S+11 | E | Y | 0.28  ±  0.00 | -0.88  ±  0.01 | 0.07 | | N+10 | E | N | 0.29  ±  0.00 | -0.89  ±  0.05 | 0.06 | | N+10 | S0 | N | 0.36  ±  0.01 | -1.74  ±  0.08 | 0.09 | | N+10 | E | Y | 0.36  ±  0.03 | -1.70  ±  0.26 | 0.07 | | N+10 | S0 | Y | 0.43  ±  0.01 | -2.45  ±  0.10 | 0.09 | [tab:sigmamstellar] Unlike the size-mass/luminosity relations, the velocity dispersion-stellar mass relation (Tab. [tab:sigmamstellar]) is hardly changed from the velocity dispersion-luminosity relation, although the scatter shrinks slightly. The velocity dispersion-stellar mass relation also deviates from the canonical Faber-Jackson relation slope of 0.25, showing a scaling closer to $\mathrm{\sigma \propto M\_{\*}^{0.3}}$. Rotational Support ------------------ Rotational support of simulated galaxies by classical *v*/*σ* measure. Atlas3D ellipticals are shown with areas of points roughly corresponding to their relative weights on a logarithmic scale. Most simulated ellipticals are slow rotators, but some have modest rotational support. [fig:evdivsigma] We measure *v*/*σ* as the luminosity-weighted average within *R**e**f**f*, as used in IFU observations like Atlas3D. Most simulated ellipticals are slow rotators (Fig. [fig:evdivsigma]). However, some projections show *v*/*σ* as large as 0.35 and can be classified as fast rotators despite having been formed from dry mergers. found that dry binary mergers only form slow rotators (*v*/*σ* <  0.1), whereas some group mergers are clearly capable of producing fast rotators. Nonetheless, the scarcity of remnants with *v*/*σ* > 0.3 strongly suggests that dissipation is necessary to form fast rotators, as will be elaborated further in §[sec:discussion]. We also measure rotation in Fig. [fig:eL] by the more physically motivated measure *λ* - essentially a radially-weighted *v*/*σ* tracing net projected angular momentum. While the distribution of rotational support is not wildly different from Atlas3D, there is a significant excess of slow rotators (especially flattened ones) and a complete absence of simulated galaxies with *λ* > 0.4. B.n*s*=1 mergers and Many-merger remnants tend to be slightly slower rotators, but the differences in both cases are not large. Figure [fig:Leprax] shows rotational support for the principal axis projections. The minor axis projection shows minimal rotation, which is expected if there are no stable orbits about the major axis. In general, B.n*s*=4 mergers are rounder despite having faster rotation for the same set of initial conditions. As with random projections, there appear to be two distinct tracks for galaxies, which is more readily apparent in the B.n*s*=4 mergers. Most galaxies have a range of ellipticities in their major axis projections but only show modest increases in rotational support from the minor to medial axis projections. These appear as horizontal lines with a shallow slope near the bottom of the figure. A smaller subset of galaxies are nearly round in the minor axis projections, with very modest rotation (*λ* < 0.1), but are significantly flattened (*ε* > 0.2 and rotationally supported (*λ* > 0.2) in major and medial axes alike. In fact, for most of these galaxies it appears as if the minor and medial axes are nearly identical, and so these galaxies are probably prolate spheroids. In this case, the distinction between major and medial axis projection is not very meaningful. Rotational support of simulated galaxies by dimensionless angular momentum measure *λ*. Observed data are shown with point sizes proportional to the logarithm of the relative weights to match the luminosity function of the simulated galaxies. Despite this weighting scheme, too many simulated galaxies have low rotational support, and none have very high support (*λ* >  0.4). [fig:eL] Rotational support of elliptical galaxies by dimensionless angular momentum measure *λ* as a function of luminosity. Observational data show a trend of lower rotational support at higher luminosity, whereas the simulated trend is nearly flat. Almost all S0s are faster rotators than simulated remnants. [fig:lL] Rotational support decreases with increasing luminosity in A3D ellipticals but not in simulations, as shown in Fig. [fig:lL]. This is largely due to the inability of dry mergers to produce fast rotating, faint ellipticals. Furthermore, even if the morphological properties of some remnants (particularly Sersic indices of B.n*s*=1 remnants) are more consistent with S0s than ellipticals, observed S0s have far more rotational support than the vast majority of the simulated galaxies. There does not appear to be any strong correlation between rotational support and number of mergers in Fig. [fig:evdivsigma], or with total group mass or central galaxy luminosity. One might expect rotational support to at least correlate with net group specific angular momentum, assuming most of the halos merge and this angular momentum is conserved - however, Fig. [fig:angmom] does not show any such correlation. It appears that repeated, mostly isotropic mergers cannot produce very fast rotators, even if the group itself has some net orbital angular momentum in one or more satellite galaxies. Discussion ========== The main results of §[sec:results] are that collisionless mergers in groups can produce central remnants with properties very similar to nearby elliptical galaxies. However, we do note several key differences between the simulation predictions and observed elliptical galaxies, not all of which are easily reconciled with dissipationless merging. We will also highlight how and why these results differ from previously published simulations. Morphology ---------- In App. [app:analysistesting], it is shown that at the resolutions used in this study, luminosities, sizes and Sersic indices of spherical Sersic model galaxies can be recovered within about 5%, usually underestimating the true values. For group merger remnants, Sersic fits typically recover luminosities and sizes to within 10%, although luminosities tend to be more precisely recovered. By contrast, Petrosian radii systematically underestimate galaxy sizes and luminosities, negating the advantage of a non-parametric fit unless corrected for. Thus we conclude that single Sersic model fits are suitable for the simulated galaxies and can be compared directly to S+11 catalog fits, with the caveat that Sersic indices are the least robust parameter at low resolutions and are likely systematically underestimated. However, it is also true that in practice, Sersic fits can produce larger scatter on size than on luminosity, whereas Petrosian half-light radii appear to limit scatter in sizes - likely because they systematically underestimate the total luminosity of galaxies with large Sersic indices. Given these issues, our solution is to compare sizes between simulations and observations as fairly as possible, so that any systematic errors are likely to be shared between simulated and observed galaxies. We have compared Sersic index and ellipticity distribution to single Sersic profile fits of local (A3D) and SDSS (N+10, S+11) galaxies. Neither B.$\mathrm{n\_{s}}$ sample is a good fit to observed ellipticals alone, but the naive linear combination of the two is a better fit while remaining inconsistent with S0s. However, such a naive combination still produces a near-bimodal distribution, in contrast to the single peak typical of observed ellipticals. A more natural choice of progenitors would likely smooth out this bimodality. For example, groups with half of the spirals having exponential bulges and the other half de Vaucouleurs would likely produce remnants with intermediate properties, filling in the gap between the two peaks of single-progenitor distributions. A smooth, realistic distribution of bulge profiles and bulge fractions would likely flatten the peaks and further broaden the distribution of remnant Sersic indices. Sersic indices of observed ellipticals are generally larger for more luminous galaxies, a trend reproduced by the simulated galaxies in Fig. [fig:lsersicn]. predicted that the dissipationless component in mergers (including both binary spiral mergers and some re-mergers of the resulting remnants) should show only a weak increase with luminosity and have low median values of about $\mathrm{n\_{s}}$=3, with scatter of about 1. We find similar results for the B.$\mathrm{n\_{s}}$=1 sample, for which $\mathrm{n\_{s}}$ is nearly constant with luminosity at a mean of 3 and with a range from 2 to 4. By contrast, the B.$\mathrm{n\_{s}}$=4 sample not only has larger mean $\mathrm{n\_{s}}$ at about 5, but the median $\mathrm{n\_{s}}$ increases with luminosity by  ∼ 0.5-1 per dex. This slope is close to that observed for N+10 and shallower than that in S+11, the discrepancy between these two samples having no obvious cause beyond probable contamination by S0s in the S+11 elliptical sample. Since the simulations have the same initial conditions other than their bulge profiles, this demonstrates that sufficiently concentrated progenitors can produce remnants with large $\mathrm{n\_{s}}$ through dissipationless merging. Furthermore, bulge $\mathrm{n\_{s}}$=4 mergers appear to be a better fit for luminous ellipticals, while bulge $\mathrm{n\_{s}}$=1 remnants match the less luminous ellipticals. If progenitor bulge profiles scale with luminosity (i.e., luminous spirals have larger bulge $\mathrm{n\_{s}}$ and merge to form luminous ellipticals), the scaling of elliptical Sersic index with luminosity can be matched more closely. Both simulations and observations show a slight tendency for more luminous ellipticals to be rounder, especially above 1011*L*⊙ (Fig. [fig:lell]). Again, the S+11 sample differs from N+10, in this case being more flattened on average - likely due to S0 contamination. Nonetheless, we also find similar luminosity-dependent behaviour as for Sersic indices, as the B.n*s*=4 sample is a better match for bright, rounder ellipticals. The B.n*s*=1 sample has a shallower slope and appears to be too flattened on average. The observed distribution does not obviously require a combination of both simulation samples in the same way as the Sersic index-luminosity relation does, but such a combination does not disagree with the S+11 relation either. The N+10 ellipticity-luminosity relation is considerably flatter and could be reproduced by the B.n*s*=4 sample alone, with the scatter due to the substantial projection effects. Scaling Relations ----------------- ### Size-Luminosity Relation Despite having randomized initial conditions, the simulated galaxies typically produce tight size-luminosity relations, with slight dependence on which size measure is used and whether different B.n*s* samples are combined. The Sersic model relations are somewhat tighter ( ∼ 0.1 − 0.12 dex scatter) than those reported by ( ∼ 0.12 − 0.15 dex scatter). This is partly systematic, since used circular Sersic fits provided by. Using the elliptical Sersic model fits of S+11 - which are more directly comparable to our own methodology and overlap with the N+10 sample - yields smaller scatter in the N+10 size-luminosity relation of  ∼ 0.09 dex. We also find slightly tighter scatter in the remnant Petrosian size-luminosity relation ( ∼ 0.09 dex), whereas the scatter for N+10 ellipticals remains largely unchanged whether Sersic or Petrosian model sizes are used. The small scatter in the size-luminosity relation should allay concerns that stochastic merging processes cannot produce tight scaling relations. used simulations with multiple mergers of spheroidal galaxies to conclude that “a remarkable degree of fine tuning is required to reproduce the tightness of the local scaling relations with dry mergers”. Instead, we find that mergers of many galaxies typically produce slightly tighter correlations than those with fewer galaxies, and the relations are tight regardless of which formation time is assumed for the groups (App. [app:evolution]). No fine tuning in galaxy orbits, number of mergers or any other parameters are required to produce tight scaling relations. Moreover, the Faber-Jackson relation has even tighter scatter than the size-luminosity relation. Rather than scattering galaxies away from existing scaling relations, multiple mergers appear to converge remnants towards a common relation, a behavior somewhat like the central limit theorem. However, it is still true that dry mergers of spirals in groups produce remnants with larger sizes and smaller velocity dispersions at fixed mass or luminosity, a problem shared with mergers of spheroids. Also, the scatter does appear to increase slightly with luminosity. This could simply be a reflection of the wide range of galaxy and merger counts for the luminous groups, which may not match the true range of cosmological merger histories for galaxy groups. We have tested mergers of spirals following a zero-scatter Tully-Fisher relation. The estimates for the scatter of merger remnant scaling relations can be considered lower limits, as they would likely have been higher had progenitors followed Tully-Fisher relations with intrinsic scatter and/or evolving slope and scatter. The observed Tully-Fisher relation does have significant scatter, even at low redshift (about 0.12 dex, from ), but the intrinsic scatter could be much lower. estimate that a scatter of 0.1 dex in the Tully-Fisher relation contributes about 0.04 dex scatter in the Fundamental Plane scaling relation; comparable scatter added to the existing size-luminosity relation scatter of 0.10-0.12 dex would make little difference if added in quadrature. While limiting scatter does not appear to be a challenge, in almost all cases the slope of the size-luminosity relation is shallower than observed and the intercept larger, so most galaxies are too large for their luminosities. The slopes of the remnant size-luminosity relations (typically *R* ∝ *L*0.5 − 0.6) are steeper than the progenitor spiral scaling relation (*R* ∝ *L*0.42) and the group scaling relation (*ρ*=constant, *R* ∝ *L*1/3). However, the remnants slopes are still shallower than those for the observations, which range from *R* ∝ *L*0.6 to *R* ∝ *L*0.8 depending on the observational sample and size measure. Encouragingly, the best matches are found between simulated remnants and N+10 ellipticals (*R* ∝ *L*0.66), the largest sample for which visual classifications are available. The steeper slope of the S+11 elliptical Sersic size-luminosity relation (0.75 to 0.78) is of some concern. However, the elliptical classification for S+11 is based on empirical cuts on various parameters and results in significant ( ∼ 30%) contamination by S0s (Tab. [tab:morphcuts]). The much smaller A3D elliptical sample also has a slightly larger slope than N+10, and the luminosity function weighting does not change the slope. Since A3D used a slightly different fitting methodology with a much smaller volume sample, it is not clear whether this discrepancy is significant. The size-luminosity relation slopes for the simulated remnants are also steeper than that of *R* ∝ *L*∼ 0.3 predicted for binary mergers remnants by. However, those simulations began with a spiral scaling relation of similar slope (0.3), and so the merging process did not steepen the size-mass relation. By contrast, we have shown that group mergers are capable of steepening the slope of the size-luminosity relation by  ∼ 0.1 − 0.2 from progenitors to merger remnants without dissipation. Our models predict virtually no dependence of stellar mass-to-light ratio on luminosity - while the bulge and disk stellar mass-to-light ratios have different values, the fraction of disk stars within the effective radius varies little. However, luminous ellipticals do tend to have larger stellar mass-to-light ratios, so comparing to observed size-stellar mass relations lessens the discrepancies in the slopes by about 0.05 dex, depending on the sample and size measure. Such a dependence could be produced by more massive progenitor spirals having larger mean stellar mass-to-light ratios. We also did not include any scatter in the progenitor spiral Tully-Fisher relation or any scatter or luminosity dependence in bulge fractions. Extra scatter in either of these input galaxy properties would likely result in increased scatter in the remnant scaling relations. Any realistic luminosity dependence in the large M31 model bulge fraction would likely flatten the slope still further, since faint ellipticals would be produced by faint spirals with weak bulges. Dissipation is a tempting solution to the shallow size-luminosity relation slope problem. Dissipation should decrease sizes at fixed luminosities and preferentially shrink faint ellipticals if their progenitors had larger gas fractions, resulting in a remnant with a larger fraction of stars formed in a central starburst. Luminosity-dependent gas fractions have been proposed by as the source of the tilt in the fundamental plane scaling relation, a hypothesis which will be addressed in Paper II. Another possible remedy to the shallower slopes of the simulated size-luminosity relations is to weight the contributions from various simulation subsamples differently. Applying a simple linear weighting scheme of favoring B.n*s*=1 groups at low luminosity and B.n*s*=4 at high luminosities yields a steeper slope than a uniform weighting and a closer match with observations. Such a weighting also produces steeper slopes than either the Few- or Many-merger relations alone and can be justified if more massive halos undergo more mergers. While average halo merger rates are not strongly mass dependent, the groups we have simulated here would likely be those with higher than average merger rates. Although these schemes could resolve the mismatch in slopes, none save dissipation are viable solutions to the problem that simulated remnants are generally too large at fixed luminosity (Fig. [fig:sizelum]). Our estimated stellar mass-to-light ratios are already quite low, so making small galaxies brighter appears to be out of the question. Numerical resolution effects are not large (App. [app:numerics]). Barring a strong redshift dependence in the sizes of observed ellipticals, this discrepancy is real. As a result, the Kormendy relation (Fig. [fig:kormendy]) is poorly reproduced. Remnants are too faint at fixed sizes, so their effective surface brightnesses are also too faint by about a magnitude for small galaxies. The shallower slopes of the size-mass relation also translate into a weak simulated Kormendy relation (nearly non-existent in the case of the B.n*s*=1 sample). ### Faber-Jackson Relation The Faber-Jackson relation of simulated galaxies shows even smaller scatter (0.04 dex) than their size-luminosity relation or any observed Faber-Jackson relation (typically 0.08 dex, as in Fig. [fig:fj] and Tab. [tab:fj]). The simulated remnants also have slightly shallower slopes (*σ* ∝ *L*0.28) than the observations (*σ* ∝ *L*0.27 − 0.37), again depending on sample and weighting scheme. Curiously, the slope of the Faber-Jackson relation is nearly identical to that of the progenitor spiral Tully-Fisher relation (*V* ∝ *L*0.29), so multiple mergers appear to preserve the scaling of orbital velocity with mass while converting ordered rotation into random motions. This is despite the fact that the virial ratio in each group varies significantly, and so galaxy orbits within each group are not scaled uniformly the same way that stellar orbits within galaxies are. In virtually all cases, the slope of the remnant Faber-Jackson relation is steeper than the canonical value of 0.25 (or L ∝ *σ*4). However, the observed relations show similar deviations and there is no compelling reason why ellipticals should follow this canonical relation. Indeed, the simulations of predict scalings as steep as M ∝ *σ*12 for major mergers with very small pericentric distances, so such deviations from the canonical relation are not unexpected. In most samples, the simulations have smaller dispersions than observed galaxies of the same luminosity. No weighting scheme can resolve this mismatch in the intercept of the Faber-Jackson relation, which is of similar magnitude (but opposite sign) as the offset in the intercept of the size-luminosity relation. Increasing the stellar mass-to-light ratios of the simulations would make galaxies of the same dispersion fainter but would worsen the match to the size-luminosity relation by making small remnants even fainter. Dissipation appears to be necessary here - central starbursts have been shown to increase velocity dispersions and shrink effective radii compared to purely dissipationless mergers. However, it is not clear whether a mass-dependent gas fraction would preserve the slope or flatten it. The mild curvature in the observational relations may be a systematic effect at low dispersions, although we have attempted to minimize such systematics by including two independent dispersion measurements. On the other hand, the simulated relations are insensitive to the choice of velocity dispersion measure (central or effective; including rotational support or not) and most observed relations are insensitive to various weighting schemes. ### Time or Redshift Dependence All of the results presented above apply to simulations analyzed after 10.3 Gyr, assuming an initial formation redshift of z=2.0 - a redshift at which pure dry mergers of disks are not likely to be common. However, the first merger in the group typically only occurs after another 1-2 Gyr. The scaling relations of remnants after 5 and 7.7 Gyr are similar to those at 10.3 Gyr, as shown in App. [app:evolution], and so similar conclusions would be reached by assuming that the first merger occurred at z=0.5, when mergers were more likely to be dry or gas-poor. At face value, this also implies that the evolution in scaling relations is minimal; however, we caution that all of the groups are effectively the same age, so this prediction does not include any evolution from varied ages and assembly histories of real group galaxies. Spiral Progenitors and Their Bulges ----------------------------------- In the case of Sersic index distributions and scaling relations, it is tempting to consider whether a combination of progenitor bulge types (and possibly bulge fractions) could resolve the tensions with observations. To examine this further it is useful to ask what the distributions of bulge Sersic index and bulge fraction are for spirals as a function of luminosity. Relations between total luminosity and best-fit bulge Sersic index as a function of r-band bulge-to-total luminosity ratio in S+11 spiral galaxies. Plots show probability density on a logarithmic scale, with dark red the highest and blue the lowest densities. Only galaxies with a distinct bulge component are included. All galaxies have an F-test probability that a bulge component is not required for a good fit of less than 0.32. Each galaxy is weighted by 1/V$\mathrm{\_{max}}$ to correct for incompleteness. [fig:spiralbulges2] Spiral galaxy bulge properties as in Fig. [fig:spiralbulges2], but now for visually classified N+10 spirals. Although the statistics are barely sufficient, there does appear to be a weak correlation between luminosity and bulge Sersic index for spirals with large bulge fractions, more so than in Fig. [fig:spiralbulges2]. [fig:spiralbulges2n] Not all S+11 spirals have a distinct bulge, nor are most images of sufficient quality to accurately measure bulge properties, so we consider the subset for which a bulge plus disk fit is required - that is, those with an F-test probability that a de Vaucouleurs bulge is not required is less than 0.32. This is about half of the spiral sample. The proportion for which a free Sersic bulge is required over a de Vaucouleurs bulge is much smaller, so we do not limit the sample any further. Fig. [fig:spiralbulges2] shows the probability densities of bulge Sersic indices as a function of galaxy luminosity, split into different bulge fraction bins. In all bins, classical ($\mathrm{n\_{s}}$=4) bulges are at least a local maximum, although extreme bulge Sersic indices ($\mathrm{n\_{s}}$=0.5 and $\mathrm{n\_{s}}$=8, which are the lower and upper limits for S+11) are often the most common. The dependence on luminosity is not very strong, but in most bulge fraction bins, fainter spirals are slightly more likely to have low Sersic index bulges than high. The S+11 spiral sample is known to be contaminated by S0s. In Fig. [fig:spiralbulges2n], we instead use the much smaller but visually classified sample of spirals from N+10. This smaller sample does slight evidence for correlation between luminosity and bulge Sersic index, at least for more bulge-dominated spirals. Also, the large fraction of bulge Sersic indices below 1 is greatly diminished, suggesting that those could be primarily S0 contaminants in the S+11 sample, or possibly more poorly resolved, higher-redshift spirals which appear in S+11 but not N+10. In either case, both samples contain substantial fractions of spirals with large bulge fractions. The M31 model used in our simulations has a large bulge mass fraction (0.33) and luminosity fraction (0.5). Such fractions are not uncommon, even at low luminosities. de Vaucouleurs bulges are also quite common, whereas exponential bulges are at least not exceptionally rare, especially for bulge-dominated spirals. Even if groups of spirals have broad distributions of bulge profiles, as in Fig. [fig:spiralbulges2], their median values could also lie close to the limiting cases of exponential or de Vaucouleurs in our simulations. Also, a wide distribution of bulge profiles is indeed a realistic solution to the problem of single-progenitor mergers producing remnants with narrow Sersic index distributions. Real mergers in groups would likely produce wider, less bimodal distributions of Sersic indices than the single-progenitor simulations. Rotational Support ------------------ The abundance of fast-rotating, faint ellipticals is at odds with the simulation predictions. However, as Figs. [fig:evdivsigma] and [fig:eL] show, multiple mergers can product remnants with moderate rotational support. This contrasts with the results of that dissipationless binary mergers only produce slow-rotating remnants with v/\sigma < 0.15 - all the more so because measured major axis rotation curves, whereas our simulations (and A3D) average over *R**e**f**f*. Nonetheless, our simulations are unable to produce any remnants with \lambda > 0.35. The simulated remnants show little or no change in rotational support as a function of luminosity (Fig. [fig:lL]), unlike observations, and do not produce any of the fast-rotating, moderately luminous S0s found in A3D. There is also an abundance of flattened remnants with minimal rotation, unlike in Atlas3D. and others have shown that significant rotation can be easily produced in gas-rich mergers. Dissipation is likely necessary to produce some ellipticals, particularly faint ones, and most likely a large fraction of S0s - if S0s are formed through mergers. However, it should emphasized that many of the simulated galaxies are consistent with the properties of some A3D galaxies, particularly bright ellipticals, so dissipation may not be necessary in all cases. Conclusions =========== We have investigated the hypothesis that elliptical galaxies can form through collisionless mergers of spiral galaxies by creating a sample of numerical simulations of such mergers and comparing the results directly with observations of local ellipticals. We draw the following key conclusions: 1. For a given fixed bulge type, central remnants have narrow distributions of Sersic indices, with mergers of spirals with exponential bulges producing less concentrated remnants ($\sim \mathrm{n\_{s}}$=3) than classical-bulge merger remnants ($\sim \mathrm{n\_{s}}$=5). Although classical-bulge mergers alone are a better fit than exponential-bulge mergers, a combination of progenitor bulge profiles is required to reproduce observed Sersic index distributions. 2. Classical-bulge mergers produce a correlation between luminosity and Sersic index, whereas exponential bulge mergers do not. The observed correlation is best reproduced if exponential bulge mergers preferentially produce faint ellipticals and classical bulge mergers produce bright ellipticals. 3. Every simulation sample produces tight scaling relations, with approximately 0.1 dex scatter for the size-mass relation and 0.04 dex scatter in the Faber-Jackson relation. Thus, even multiple dry mergers can produce ellipticals with exceptionally tight scaling relations. However, the scatter estimates represent a lower limit, because the progenitor spirals in our simulations follow a fixed, zero-scatter Tully-Fisher relations. The scatter in the remnants scaling relations would likely increase (though not necessarily significantly) if the progenitor scaling relations had larger intrinsic scatter or evolved with redshift. 4. The remnant size-luminosity relation typically has a shallower slope (*R* ∝ *L*0.5 − 0.6) than observed relations (*R* ∝ *L*0.6 − 0.8), depending on the sample and weighting scheme used. The simulated slope is also steeper than that of the progenitor spiral size-luminosity relation (*R* ∝ *L*0.42), suggesting that mergers can steepen the size-luminosity relation. 5. As a consequence of the shallower slopes and larger intercepts of the simulated size-luminosity relation, the simulated Kormendy relation is shallower than observed - nearly flat for exponential bulge mergers - and has larger scatter. 6. The remnant Faber-Jackson relation has a slightly shallower slope (*σ* ∝ *L*0.28) than most of the observed relations (*σ* ∝ *L*0.27 − 0.37), but is virtually unchanged from the progenitor spiral Tully-Fisher relation, *V* ∝ *L*0.29. 7. The slopes of the scaling relations can be better reproduced if massive ellipticals are produced by many mergers and less massive by fewer mergers, or if stellar mass is compared instead of luminosity. 8. The intercepts of the size-mass and Faber-Jackson relations can be individually matched by adjusting the stellar mass-to-light ratios of the galaxies; however, each relation requires adjustment in the opposite sense (remnants of a fixed luminosity being too large and having too low of a velocity dispersion), so it is not possible to match both intercepts simultaneously. 9. Multiple mergers can produce remnants with modest rotational support (v/*σ* >  0.1); however, most remnants are slow rotators, and there is no correlation between luminosity and v/*σ*, whereas such a correlation is found in Atlas3D ellipticals. These results demonstrate that many of the properties of elliptical galaxies are consistent with their emergence through multiple dry mergers of spiral galaxies. Perhaps most importantly, these properties also differ significantly from those of remnants formed through binary dry mergers of spirals, as reported in previous studies. This not only adds to an increasing body of evidence supporting the case for multiple mergers but also demonstrates that such mergers can produce tight scaling relations - in some cases tighter than observed ellipticals - as long as the progenitor spirals are drawn from a realistic luminosity function and scaled appropriately. Several major concerns remain for a purely dissipationless formation scenario for elliptical galaxies. The first is the limited amount of rotational support in the merger remnants and the absence of any correlation between rotation and luminosity. The second is the large sizes (and low velocity dispersions) of faint ellipticals, which result in a shallow size-luminosity relation and poorly reproduced Kormendy relation. While this second point could be resolved without dissipation (e.g. by merging more compact disks at high redshift), dissipation does appear to be necessary to produce fast-rotating ellipticals. Dissipation could also solve the second problem, as central starbursts would produce more compact remnants with higher dispersions. Perhaps the greatest challenge for dry mergers lies in matching the tilt of the fundamental plane with respect to the virial relation. Previous work has suggested that dissipational processes are the cause of this tilt and that dry mergers cannot produce any tilt. This point will be addressed in Paper II of this series. Acknowledgments =============== D.T. would like to thank B. Abraham, L. Bai, D. Krajnovic, T. Mendel and P. Nair for fruitful discussions and for providing data used herein, as well as the anonymous referee for helpful suggestions. D.T. acknowledges the support of Ontario Graduate Scholarships for this work. Simulations and analyses were performed on the Canadian Institute for Theoretical Astrophysics’ Sunnyvale cluster and the University of Toronto’s SciNet cluster. H.Y. acknowledges support from grants from the National Science Engineering Research Council of Canada and the Canada Research Chair program. Analysis Pipeline Testing ========================= Several aspects of the simulation analysis pipeline merit further testing. First, we would like to determine if the pipeline can recover known or measurable quantities such as the total mass/luminosity and half-light radii in single galaxies. This is accomplished by analysing a sample of spherical, pure Sersic profile plus dark matter halo galaxies generated with GalactICS. This allows us to simultaneously test whether GalactICS can generate equilibrium Sersic profile models (which is how the bulges of progenitor spirals are initialized) and whether the analysis pipeline can successfully recover input parameters at arbitrary resolutions. We analyse these models before simulating them in any way. In  App. [app:numerics], we examine the results of simulating these simple models with PARTREE to test numerical convergence. We can also use the group simulations themselves to test the analysis pipeline. Although we do not know the structural parameters of merger remnants a priori - indeed, they do not necessarily follow a single Sersic profile at all - the total luminosity is known in groups which have merged to a single remnant. Similarly, we can directly measure a half-light radius from mock images with no PSF or sky background in these cases and compare to observational estimates from the SDSS-equivalent mock images. This procedure allows us to determine whether single Sersic profile fits can simultaneously recover the total luminosity of a galaxy and its half-light radius. In addition to Sersic fits from GALFIT, we fit de Vaucouleurs profiles with GALMORPH and measure non-parametric Petrosian radii to determine if these size measures can consistently recover the true half-light radius of a galaxy. Sersic plus Halo Models ----------------------- Our reference Sersic plus halo models consist of a single Sersic profile bulge and a dark halo with the same baryonic mass ratio as our fiducial M31 models. We produce $\mathrm{n\_{s}}=2$ and $\mathrm{n\_{s}}=4$ models to cover most of the range of typical elliptical surface brightness profiles. We create models with *R**e**f**f* of 2, 4, 8 and 16 kpc, again covering ranges of typical elliptical galaxies and massive spiral bulges. The 2-kpc model is slightly larger than the 1.5-kpc bulge in our fiducial M31 model. All models are in virial equilibrium and follow a size-luminosity relation log(*R**e**f**f*) = 0.7log(*L**r*), with one model exactly on this relation and an extra model either over- or under-luminous for its size. Each model is imaged at mock redshifts of 0.01, 0.025 and 0.1. These models will be used in the future to test recovery of scaling relations. However, for now we are mainly interested in whether the pipeline can recover the known values of $\mathrm{n\_{s}}$, *R**e**f**f* and *L* for each model and whether the systematics depend on any of those parameters. In addition to varying the galaxy luminosity as a function of size, each set of Sersic plus halo models is simulated at three resolutions. The lowest resolution has 15,000 star and 40,000 dark matter particles, identical to the lowest resolution model used in the simulations. The resolution increases by a factor of 8 each step such that the highest resolution model has 7,680,000 star and 20,480,000 dark particles, or at least a factor of two more than the total particle counts of the most massive group simulations. In principle these models should be rescaled versions of each other; however, the nominal SDSS PSF and signal-to-noise ratio set a physical scale for mock images, while our fixed softening length sets another physical scale for the simulation. Sersic Quantities ----------------- For $\mathrm{n\_{s}}=2$ models imaged at z=0.025, GALFIT Sersic fits show excellent agreement with expectations, even at low numerical resolution. With just 15,000 star particles, sizes are recovered to within 1 ± 0.5% for 2 kpc radius, although larger galaxies have underestimated sizes to the level of 3 ± 1% at *R**e**f**f* = 16 kpc. However, $\mathrm{n\_{s}}$ is underestimated by 10% for *R**e**f**f* = 2 kpc galaxies, which improves to 4 ± 1% at *R**e**f**f* = 16 kpc. Luminosities, in turn, are underestimated at fairly constant levels of 3%, with standard deviations increasing with size from 0.1 to 1%. Similar trends are found at medium resolution but with smaller amplitudes - the largest errors on $\mathrm{n\_{s}}$ are just 1.4 ± 0.3% at 16 kpc, while errors on *R**e**f**f* are at most 3.5 ± 0.5% at 2 kpc and shrink to half of that value at 2 kpc. Errors on parameters are reduced by about a factor of two by imaging at nearby redshifts (z=0.01) and increase by about the same factor by imaging at z=0.1. These errors are not eliminated by increasing the image size (and shrinking the PSF relative to *R**e**f**f*) but shrink dramatically at the highest numerical resolution, to well under 1% in $\mathrm{n\_{s}}$ and *L* and about 1% in *R**e**f**f*. This suggests that these parameters are in principle completely recoverable with SDSS-equivalent imaging and good sky subtraction. We have also fit $\mathrm{n\_{s}}=4$ de Vaucouleurs profiles using GALMORPH, as in. GALMORPH fits to the $\mathrm{n\_{s}}=2$ models show expectedly poor results. Sizes are overestimated by factors from 1.6 (at 2kpc) to 2.4 (at 16kpc) while luminosities are overestimated by 30 − 60%. These results are not entirely unexpected – $\mathrm{n\_{s}}=4$ models have shallower outer profiles and hence more light at large radii compared to profiles with lower $\mathrm{n\_{s}}=2$. However, they do demonstrate that other free parameters such as the effective radius and mean surface brightness cannot adjust to compensate for an incorrect profile choice, and so pure de Vaucouleurs profiles are not a good choice to fit ellipticals if their underlying surface brightness profiles are truly Sersic profile with $\mathrm{n\_{s}}$ significantly lower than 4. For $\mathrm{n\_{s}}=4$ models, GALFIT free-$\mathrm{n\_{s}}$ fits show curiously constant fractional errors on sizes, consistently underestimating *R**e**f**f* by 8 − 9 ± 1%. Underestimates of $\mathrm{n\_{s}}$ vary from a substantial 14 ± 1% at 2 kpc to 4 ± 0.6% at 16 kpc. Luminosity underestimates shrink from 5 ± 0.6% to 1.5 ± 0.3%. These errors are not improved by imaging at lower redshift and so are unrelated to the relative size of the PSF. Instead, they are reduced substantially by increasing numerical resolution. At the highest resolution, size estimates shrink to 5 − 6 ± 1% at all sizes, while $\mathrm{n\_{s}}$ underestimates now scale from 8 − 2%. Since typical resolutions for central group galaxies are at the medium level or slightly higher, we expect that $\mathrm{n\_{s}}$ is underestimated by 7% on average for a pure de Vaucouleurs fit, with better performance at smaller $\mathrm{n\_{s}}$. Size estimates are a minimum of 2% lower for large galaxies and up to 10% off for *R**e**f**f* = 2kpc. By contrast, the GALMORPH fits with fixed $\mathrm{n\_{s}}=4$ accurately recover sizes and luminosities to better than 2% even at the smallest sizes and at medium resolution. We conclude that even in ideal situations, free-$\mathrm{n\_{s}}$ fits will systematically underestimate sizes and luminosities at the 5% level, whereas fixed-$\mathrm{n\_{s}}$ fits only perform better if the exact value of $\mathrm{n\_{s}}$ is known. In App. [app:numerics] we detail how these results change after 2 Gyr of simulation with PARTREE using a fixed 100 pc softening length, as in the simulations. Group Simulation Results ------------------------ As Fig. [fig:sersiclrat] shows, Sersic models generally do an acceptable job recovering central galaxy luminosities. For the $\mathrm{n\_{s}}=1$ sample, model luminosities are typically 85-90% of the total in groups with no satellites with relatively small scatter. The Sersic luminosities of $\mathrm{n\_{s}}=4$ central galaxies appear to have little or no systematic deviations from the true luminosities, although the scatter appears somewhat larger than in the $\mathrm{n\_{s}}=1$ case. The largest discrepancies are found for groups with many mergers, particularly equal mass mergers, in which case the models can overestimate the central galaxy’s luminosity by at least 20-30%, largely due to runaway growth of the effective radius and Sersic index. However, in most cases Sersic profiles appear to be appropriate fits to the galaxies. The underestimation of $\mathrm{n\_{s}}=1$ merger luminosities appears to be a systematic effect. The underestimation of luminosities in groups with satellite galaxies is difficult to quantify, as the total luminosity in satellites is not easily separable from that of the central galaxy. Testing whether half-light radii are recovered is also complicated by the presence of satellite galaxies. Nonetheless, we attempt to measure how closely *R**e**f**f* matches the ’true’ half-light radius *R*50 of the central galaxy in Fig. [fig:sersicrrat]. We estimate *R*50 as the radius enclosing half of the group luminosity in a given sky- and satellite-subtracted image, using the same best-fit ellipse as the Sersic model. The ratio should be unity if there are no satellite galaxies in the group and less than unity if there are. As Fig. [fig:sersicrrat] shows, half-light radii are more difficult to measure than total galaxy luminosities - or rather, errors on half-light radii from Sersic fits are considerably larger than for luminosities, which likely contributes to the significant scatter in the Sersic size-luminosity relation compared to the Faber-Jackson relation. At first glance, the large scatter in the ratio of Sersic model to ’true’ half-light radius might suggest that much of the error in the Sersic size-luminosity relation is due to systematics rather than any intrinsic scatter. However, the size-luminosity relation using total group luminosity and ’true’ half-light radius still shows significant scatter (0.08 dex) even when limited to galaxies with no satellites. A much larger sample of higher-resolution simulations would be required to determine if this scatter is due to numerical effects or genuinely intrinsic. Petrosian Radii --------------- As in SDSS, the Petrosian radius *R**P* is given by the radius at which the mean surface brightness in the ring bounded by 0.8*R**P* < *r* < 1.25*R**P* is 0.2 times the mean surface brightness within *R**P*. As a non-parametric size measure, it requires no fitting to measure, unlike the Sersic *R**e**f**f*. Since the Sersic profile is an analytical solution, one can compute *R**P* uniquely for any given $\mathrm{n\_{s}}$. For $\mathrm{n\_{s}}=$ 3 to 6, *R**P*/*R**e**f**f* ranges from 1.5 to 2. The Petrosian magnitude of a galaxy is often estimated as the flux contained within a radius of a factor *N**P* larger than this Petrosian radius; SDSS uses *N**P* = 2. Petrosian magnitudes effectively measure half-light radii within 3 − 4*R**e**f**f* rather than the nominal 8*R**e**f**f* bounding box for the FITS images used to derive SDSS-equivalent magnitudes. We measure Petrosian radii using both circular apertures and elliptical apertures, using the best-fit ellipse from Sersic model fits in the latter case. Unfortunately, as shown in Fig. [fig:l2repratn], Petrosian luminosities appear to underestimate the true galaxy luminosity by a similar amount to the analytical relation for purely circular profiles (see for a reference to various Sersic quantities). Sizes are also underestimated to a similar degree as predicted for a pure circular Sersic profile, which suggests that most galaxies do not deviate greatly from a pure Sersic profile. The slight excess could be due to a number of factors, including the Sersic models underestimating the true half-light radii and/or Sersic indices, radial variations in the ellipticity or shape of the isophotes, or deviations of the underlying profile from a pure Sersic model, all of which are plausible. In principle, one can correct for this ’missing’ flux using fitting formulae valid for a wide range of Sersic or other profiles, but this seems unnecessary given that the Sersic fits appear sufficient and are available for all of the simulations and observational catalogs alike. Numerical Convergence ===================== We test the numerical convergence of the spherical Sersic plus halo models by simulating every galaxy for 2 Gyr at 3 different resolutions (differing in particle number by a factor of 8 in each step). We also test a subset of the group simulations at similar resolutions. All measurements are made using the same analysis pipeline as the results above; the images also have the same nominal redshift of *z* = 0.025. Convergence is generally quite good. With a 0.2 Myr timestep, total energy is conserved to better than one part in 105. With the initial conditions re-centered to the barycenter, linear momentum remains small. The net angular momentum vector is the least well conserved quantity in Sersic plus halo models; each orthogonal component can vary by up to 5% of the net rotation. However, the total angular momentum is usually dominated by a small number of dark matter halo particles at large distances from the galaxy center. Angular momentum conservation for baryons in isolated galaxies is considerably better, and deviations of 1 to 2% are typical for groups where the bulk of the angular momentum is initially in galaxy orbits. Having tested input parameter (Sersic index and effective radius) recovery with the analysis pipeline, we now turn to examining how these same parameters evolve in a 100 pc softened potential with a fixed, 0.2 Myr timestep, as in the group simulations. While idealized, these simulations are comparable to both the central ellipticals (which are slowly rotating and close to Sersic profiles, albeit somewhat flattened) and the bulges of the input spirals (which are smaller than the Sersic models and also slightly flattened by the presence of the disk) and will give estimates for how galaxy structure is affected by numerical resolution. Sersic plus Halo Model Convergence ---------------------------------- For a typical model (*R**e**f**f*=8 kpc) at very high resolution (7.68 million star particle), convergence of all parameters is achieved at the 1 to 2% level, with sizes, Sersic indices and dispersions shrinking slightly over 2 Gyr. Convergence is considerably worse for the $\mathrm{n\_{s}=4}$ model and is strongly resolution dependent. A factor eight drop to high resolution (0.96 million star particles) approximately double errors in all parameters to 2-4%. For medium resolution (120,000 star particle), $\mathrm{n\_{s}=4}$ models, parameters can shrink by over 10% - typical values being 5 to 15% for $\mathrm{n\_{s}}$ (4 to 3.4), 15% for sizes (8 kpc to 6.8 kpc) and 5% for dispersions. Thus, for larger ellipticals to be suitably resolved, a million or more stellar particles are required, especially if the profiles are as or more centrally concentrated than an $\mathrm{n\_{s}=4}$ model. Less centrally concentrated models such as $\mathrm{n\_{s}=2}$ are much less sensitive to numerical resolution and can be resolved by 100,000 stellar particles with at most 3 to 4% level drops in sizes and Sersic index. Only 4 simulations in the sample have fewer than 720,000 stellar particles, so central remnants are largely unaffected by numerical relaxation after formation regardless of their central concentration. Unfortunately, the results are not as encouraging for smaller models. For the smallest *R**e**f**f*=2 kpc model at low (15,000 star particle) resolution, Sersic indices shrink up to 50% (from 2 to 1.5, or 4 to 2.3). Sizes typically drop by less than 10%, but dispersions also shrink up to 20%. At high resolution, Sersic indices converge at the 5 to 15% level (from 2 to 1.9 and 4 to 3.4). Sizes remain constant for $\mathrm{n\_{s}=2}$ and drop at most 5% for $\mathrm{n\_{s}=4}$, with dispersions also shrinking by 3 to 5%. Typical remnants are resolved at close to this high resolution, so the greatest effect would be on the Sersic indices of small, high $\mathrm{n\_{s}}$ ellipticals. The greater concern with these results is the relaxation that occurs in the bulges of progenitor spirals. The effective radius of the M31 model is 1.5 kpc, but most galaxies are scaled to smaller sizes than this, with 0.5 to 1 kpc bulge *R**e**f**f*. Moreover, in groups with larger numbers of galaxies, total particle counts are larger, but individual spirals can have as few as 60,000 stellar particles, of which only 20,000 are in the bulge. The bulge is partially stabilized (and flattened) by the disk, but the disk forms a core near the center of the galaxy, and so one might expect the behaviour of these compact, marginally resolved bulges to be similar to the Sersic plus halo models. We will now test this hypothesis with convergence studies of group mergers. Group Simulation Convergence ---------------------------- We test numerical convergence in the groups by running a selected sample with a factor of eight higher and lower resolution and comparing parameters after the usual elapsed times (5.0, 7.7 and 10.3 Gyr). As all of the groups are resolved with an average of over a million stellar particles, numerical convergence is expected to be good once groups have merged. However, as detailed above, the least massive spirals in more massive groups are not as well resolved, so not all groups are expected to be converged at our standard resolution. ### Parameter Recovery Fig. [fig:lreconv] shows convergence for several identical groups on the size-luminosity and size-\sigma relations after 10.3 Gyr. Central remnant luminosities are fairly constant across all resolutions, but low resolutions can have slightly lower values. Sizes and dispersions are larger at low resolutions. Both trends continue from fiducial/medium to high resolution, although it is not as extreme - sizes are usually not more than 10 percent smaller between medium and high resolution. Sersic indices are systematically lower at low resolution by a factor of 1 to 2 (Fig. [fig:nreconv]). The trend persists at high resolution, although $\mathrm{n\_{s}}$ typically increases by a smaller factor of 0.2 to 0.3 between mid to high resolutions. Of the four parameters tested, then, luminosity appears to be the most robust, while the Sersic index is most sensitive to resolution effects. The effects on sizes are too small to fully reconcile the mismatch between sizes of faint simulated galaxies compared to observed ellipticals (Fig. [fig:sizelum]). Dispersions generally decrease with increasing resolution, and so numerical effects also cannot explain the lower intercept of the simulated Faber-Jackson relation compared to that of observed ellipticals (Fig. [fig:fj]). In general, increasing resolution by a factor of eight produces similar trends in the group simulations as in isolated Sersic plus halo models - Sersic indices increase, while sizes and dispersions decrease. The effects are not very large going from our standard (medium) to high resolution but are considerable when stepping down to low resolution. We recommend that a minimum of a million stellar particles be used to adequately resolve spheroidal galaxies. While luminosities and masses remain converged at low resolution, sizes and dispersions are overestimated. Sersic indices are especially untrustworthy, being systematically offset lower by one or two from higher resolutions. Scaling Relations at Different Times ==================================== The scaling relations presented in §[subsec:scalerel] are nominally for a zero-redshift galaxy population, assuming evolution from z=2. We can instead consider scaling relations at younger ages, assuming a fixed formation time for all groups. This is equivalent to assuming evolution from z=1 or z=0.5, since the only initial redshift-dependent parameter in the initial conditions is the group size. One might also consider combining groups from different snapshots into a single sample to simulate a sample with galaxies of different ages; however, this is best left to purely cosmological initial conditions with known merger trees and formation times. Simulations, Sersic model L and *r**e**f**f*, Unweighted | B.$\mathrm{n\_{s}}$ | Time | Sample | Slope | Intercept | R.M.S. | | --- | --- | --- | --- | --- | --- | | All | 5.0 | All | 0.51  ±  0.01 | -4.73  ±  0.06 | 0.11 | | All | 7.7 | All | 0.53  ±  0.01 | -4.88  ±  0.05 | 0.11 | | All | 10.3 | All | 0.58  ±  0.01 | -5.32  ±  0.06 | 0.12 | | All | 5.0 | Many | 0.57  ±  0.01 | -5.29  ±  0.10 | 0.11 | | All | 7.7 | Many | 0.58  ±  0.01 | -5.30  ±  0.06 | 0.10 | | All | 10.3 | Many | 0.62  ±  0.01 | -5.69  ±  0.06 | 0.10 | | All | 5.0 | Few | 0.47  ±  0.01 | -4.36  ±  0.06 | 0.10 | | All | 7.7 | Few | 0.50  ±  0.01 | -4.61  ±  0.06 | 0.10 | | All | 10.3 | Few | 0.54  ±  0.01 | -4.96  ±  0.08 | 0.12 | [tab:lreffevol] With these caveats in mind, we now present predictions for the evolution of the slope and scatter of selected scaling relations assuming a fixed formation time for all groups. The best-fit relations measured in Tab. [tab:lreffevol] show slight evolution with time in the slopes (increasing) and intercepts (decreasing) and limited evolution in scatter. The steepening of the slope and lowering of the intercept would seem to suggest that brighter ellipticals grow off the relation at later times while fainter ellipticals grow slowly, if it all - in our case largely by construction, since the Few-merger sample does not have any late-time mergers. This interpretation is complicated by the fact that some of the largest groups do not have a relaxed, early-type central remnant formed in the earlier time steps and so are not included in the sample at earlier times but are included later on. Thus, as in most observational catalogs, not all of the descendants can necessarily be clearly identified with a previous early-type ancestor. Simulations, Sersic model L, Unweighted | B.$\mathrm{n\_{s}}$ | Time | Sample | Slope | Intercept | R.M.S. | | --- | --- | --- | --- | --- | --- | | All | 5.0 | All | 0.31  ±  0.00 | -1.08  ±  0.02 | 0.05 | | All | 7.7 | All | 0.30  ±  0.00 | -0.99  ±  0.02 | 0.04 | | All | 10.3 | All | 0.28  ±  0.00 | -0.86  ±  0.02 | 0.04 | | All | 5.0 | Many | 0.29  ±  0.00 | -0.89  ±  0.03 | 0.05 | | All | 7.7 | Many | 0.28  ±  0.00 | -0.83  ±  0.02 | 0.04 | | All | 10.3 | Many | 0.27  ±  0.00 | -0.72  ±  0.02 | 0.04 | | All | 5.0 | Few | 0.32  ±  0.00 | -1.21  ±  0.02 | 0.04 | | All | 7.7 | Few | 0.31  ±  0.00 | -1.09  ±  0.02 | 0.04 | | All | 10.3 | Few | 0.30  ±  0.00 | -0.98  ±  0.03 | 0.04 | [tab:fjevol] The best-fit Faber-Jackson relations measured in Tab. [tab:lreffevol] also show slight evolution of the slope, but in the opposite sense (decreasing/flattening), with a corresponding increase in the intercept. However, the scatter remains largely unchanged at 0.04 dex. --- 1. Available at http://www.mpa-garching.mpg.de/SDSS/DR7/[↩](#fnref1)
arxiv_0000250
Measuring Metallicities with HST/WFC3 photometry ================================================ We quantified and calibrated the metallicity and temperature sensitivities of colors derived from nine Wide Field Camera 3 (WFC3) filters aboard the Hubble Space Telescope (HST) using Dartmouth isochrones and Kurucz atmospheres models. The theoretical isochrone colors were tested and calibrated against observations of five well studied galactic clusters: M92, NGC 6752, NGC 104, NGC 5927, and NGC 6791, all of which have spectroscopically determined metallicities spanning  − 2.30 <   <  + 0.4. We found empirical corrections to the Dartmouth isochrone grid for each of the following color magnitude diagrams (CMD) (F555W–F814W, F814W), (F336W–F555W, F814W), (F390M–F555W, F814W) and (F390W–F555W, F814W). Using the empirical corrections we tested the accuracy and spread of the photometric metallicities assigned from CMDs and color-color diagrams (which are necessary to break the age-metallicity degeneracy). Testing three color-color diagrams [(F336W–F555W),(F390M–F555W),(F390W–F555W), vs (F555W–F814W)], we found the colors (F390M–F555W) and (F390W–F555W), to be the best suited to measure photometric metallicities. The color (F390W–F555W) requires much less integration time, but generally produces wider metallicity distributions, and, at very-low metallicity, the MDF from (F390W–F555W) is  ∼ 60% wider than that from (F390M–F555W). Using the calibrated isochrones we recovered the overall cluster metallicity to within  ∼  0.1 dex in when using CMDs (i.e. when the distance, reddening and ages are approximately known). The measured metallicity distribution function (MDF) from color-color diagrams show this method measures metallicities of stellar clusters of unknown age and metallicity with an accuracy of  ∼  0.2 - 0.5 dex using F336W–F555W,  ∼ 0.15 - 0.25 dex using F390M–F555W, and  ∼ 0.2 - 0.4 dex with F390W–F555W, with the larger uncertainty pertaining to the lowest metallicity range. Introduction ============ Metallicity, age and mass are fundamental characteristics of a stellar population. Metallicity distributions, in conjunction with chemical evolution models, provide evolutionary information about both enrichment and gas inflow and outflow in the form of galactic winds. While in recent years measuring large numbers of spectroscopic metallicities has become more feasible with multiobject spectrographs, it is still challenging to observe samples large enough to find rare objects or substructure in metallicity distribution functions, or to observe faint enough to build up large samples in nearby galaxies. Photometric metallicities, though not as accurate as spectra, provide measurements for every star in the field, including those with fainter magnitudes than can be reached spectroscopically. The general technique to assign photometric metallicities relates color to metallicity. For instance, in a color magnitude diagram (CMD) fiducial ridgelines from several clusters with similar age, and a range of known metallicities can be interpolated to estimate the metallicity of an unmeasured cluster based upon the location of its ridgeline, provided that the cluster is also of similar age (e.g., Saviane et al. 2000; Da Costa et al. 2000,2002). Fiducial ridgelines have been used to derive empirical relations between color and metallicity at a given absolute magnitude. Alternatively, one can use theoretical isochrones, however, the isochrones need to be empirically calibrated to match observed sequences (e.g., Brown et al. 2005; Lianou et al. 2011). An issue with deriving metallicities from color arises because a giant can be redder either because it is older, or because it is more metal rich. One method to break the age-metallicity degeneracy uses a color-color diagram, where one color is constructed using a metallicity sensitive filter, and the other color is constructed from a pair of filters that provide a temperature estimate with minimal dependence on metallicity. There is a long history in astronomy of using specifically designed filters to isolate stellar characteristics such as metallicity. proposed photometric indices as a means of stellar classification. The *m*1 = (*v* − *b*) − (*b* − *y*) index estimates the stellar metallicity of horizontal branch, red giant branch and main sequence stars when used in conjunction with a temperature-sensitive index such as (*b* − *y*). The Washington system was developed to measure photometric metallicities, temperatures and the amount of CN line blanketing for giant G and K stars. [ht] Another consideration with photometric metallicities is that medium and broad band colors lose nearly all sensitivity at low metallicity. The Caby system, developed in the 1990’s, modified the Strömgren system, replacing the v filter with a filter centered on the Ca H & K lines. The Caby system measures stellar metallicities 3 times more accurately, especially at low metallicity where the m1 index loses sensitivity. Several filters on the HST/WFC3 were designed to provide information on the metallicities of resolved populations. Our WFC3 calibration program (11729, PI Holtzman) was designed to collect images of clusters with well established spectroscopic metallicities in order to map WFC3 colors to metallicity. Another program designed to study the galactic bulge (11664, PI Brown) imaged the same clusters with different filters, also as a calibration. This paper utilizes observations from both programs to present a broad range of calibrating filters and their metallicity sensitivities. Observations were obtained of M92, NGC 6752, NGC 104, NGC 5927, and NGC 6791. The five calibration clusters are well studied spectroscopically, and span a wide range of metallicity:  − 2.3 <   <  + 0.4. Individual stars can be observed throughout the Local Group, opening up possibilities for studying populations within the Milky Way, as well as Local Group galaxies. Our primary interest is in using these calibrations to measure metallicity distribution functions in several Local Group dwarf galaxies where we only have sufficient accuracy to measure giants, but the calibrations presented here may have broader applicability. As will be shown, photometric indices that measure metallicity also have a sensitivity to surface gravity, but for objects outside the Milky Way, it is trivial to separate giants from dwarfs based on their observed brightness. We structure the paper as follows: in Section 2 we use model atmospheres and isochrones to explore the capacity of various filter combinations to measure metallicity; in Section 3 we present the observations of the stellar clusters. Section 4 describes how we calibrate a set of isochrones to the observed sequences, and demonstrate how well these can be used to recover metallicities. In Section 5 we summarize our conclusions. Deriving Metallicity ==================== WFC3 filters ------------ HST WFC3 observations were obtained in the following filters: F336W, F390M, F390W, F395N, F410M, F467M, F547M, F555W, F814W, F110W and F160W. Information on the filter widths and system throughputs are listed in Table [throughput]. The system responses for the UVIS filters are shown in Figure [bandpasses], along with Kurucz model stellar spectra of typical giant branch stars with log g = 1.5, T = 4000 K and three different metallicities,  = 0.0,  − 1.5 and  − 3.0. llrcl[b] F336W &u, Strömgren &51.1 &0.20 &5.04 F390W &C, Washington &89.6 &0.25 &4.47 F390M &Ca II continuum &20.4 &0.22 &4.63 F395N &Ca II, 3933/3968 & 8.5 &0.22 &4.58 F410M &v, Strömgren &17.2 &0.27 &4.42 F467M &b, Strömgren &20.1 &0.28 &3.79 F547M &y, Strömgren &65.0 &0.26 &3.12 F555W &WFPC2 V &156.2 &0.28 &3.16 F814W &WFPC2 Wide I &153.6 &0.23 &1.83 F110W &Wide YJ &443.0 &0.56 &1.02 F160W &WFC3 H &268.3 &0.56 &0.63 Many of these filters are comparable to those from well established systems. For example, F336W, F410M, F467M and F547M are analogous to Strömgren u, v, b and y, respectively. While designed for measuring the Balmer decrement, the F336W and F410M filters both cover spectral regions with many absorption features from metals. The other two Strömgren filters, F467M and F547M, sample regions mostly clear of spectral features; historically F467M–F547M (i.e. b–y) has been used as a temperature indicator. Filters F390M and F395N cover the Ca H & K spectral features. The F395N filter is narrow (85 Å) while the F390M filter is broader, leading to a higher throughput, but also including a CN feature at *λ*  ≈  3885 Å. The F390W filter has a wide bandpass (896 Å), and is similar to the ground-based Washington C filter. The Washington C filter was designed to evaluate the total effects of line blanketing by CN (bands at 3595, 3883 and 4215 Å) as well as the CH molecular transition at 4304 Å, commonly known as the G band. F555W and F814W are wide band filters designed to cover the same spectral regions as the WFPC2 and ACS filters of the same name; they are similar to, but broader than, Johnson-Cousins V and I. The F555W and F814W filters measure mostly continuum; one notable exception is the MgH feature at  ∼  5100 Å in the F555W bandpass. As part of the observation campaign images in two near IR filters, F110W and F160W, were obtained in an effort to explore reddening free indices. However, we found that the photometric uncertainties from a two color reddening free index seemed to be too large to significantly add to our present analysis. Age - Metallicity Degeneracy ---------------------------- Stellar colors are a function of gravity, metallicity and effective temperature. For a given star, increasing the metallicity lowers the effective temperature and enhances line blanketing effects at a given mass, both of which cause redder color. For populations of comparable age the color is directly related to the metallicity. This relation has been used extensively in older stellar populations, e.g., globular clusters, to determine metallicities. For an old population, we calculate the metallicity sensitivity of CMD colors using Dartmouth stellar isochrones. These cover the metallicity range  − 2.5 <   < 0.5. Since we wish to understand sensitivity to metallicity at lower metallicities, we extend these down to  =  − 5 by assuming that stars with  <  − 2.5 have the same effective temperatures and luminosities as those with  =  − 2.5, while adopting the colors from a grid of model atmospheres that extends down to  =  − 5. In Table [cmdsens], we report metallicity sensitivity for RGB stars, in units of dex of per 0.01 mag of color change, for a range of different color choices, including most of the wideband UVIS WFC3 filters. In these units, small numbers represent higher sensitivity to metallicity. These sensitivities were computed for stars at M*F*814*W* =  − 1; for more luminous giants, the sensitivity is better, while for fainter ones, it is worse. Sensitivities are reported for several different ranges in metallicity, demonstrating the smaller color sensitivity at lower metallicity. Generally, sensitivity increases as the wavelength separation of the filters increases. However, each filter has different photometric precision for a fixed exposure time. The second to last column in Table [cmdsens] gives the relative color errors (normalized to *σ**F*555*W* − *F*814*W*), estimated using the exposure time calculator (ETC) on the WFC3 online data handbook for a K5 III giant and a fixed exposure time. The last column of Table [cmdsens] lists the sensitivity, over the metallicity range  − 1.5 <   <  − 0.5, scaled by the relative photometric error. The implication is that ideal filter choices rely on both the color sensitivity to metallicity and the precision to which the color can be measured. The most metallicity sensitive colors will not be the optimum choice when observing time and errors are considered. However, if you use a color that only changes minimally with metallicity, no amount of increased photometric accuracy will improve the metallicity determination. The last column of Table [cmdsens] suggests that F475W–F814W is the optimal color for maximum metallicity sensitivity, and this has been adopted by many studies (e.g., Gallart 2008); although this choice does depend to some extent on the color of the target stars. Other commonly used metallicity sensitive colors are F390W–F814W and F555W–F814W. As pointed out in the last paragraph there is a trade off between metallicity sensitivity and photometric accuracy (from a reasonable amount of integration time). The sensitivities in Table [cmdsens] show that these wide band colors have simultaneously greater throughput and less metallicity sensitivity than the medium and narrow band filters listed in Table [cmdsens]. Additionaly, at very-low metallicity, these broad-band colors have little sensitivity ( > 0.65 and  > 1 dex, respectively, per 0.01 mag of color change), and since it is challenging to reduce photometric errors below 0.01 mag, this leads to a fundamental limit on the accuracy of derived metallicities. For some colors, at low metallicity the color change from metallicity becomes smaller than the typical photometric accuracy of 0.01 mag. In Table [cmdsens] we report this as  >  1 dex of /0.01 mag color change. The narrower filters, such as F395N and F390M, retain sufficient sensitivity ( < 0.15 dex per 0.01 mag of color change) to provide useful metallicity estimates even at very-low metallicity. lccccccc F336W–F814W &0.270 & 0.100 &0.021 & 0.007 & 0.005 & 44.2 & 0.31 F390W–F814W &0.667 & 0.185 &0.036 & 0.012 & 0.009 & 4.5 & 0.05 F390M–F814W &0.454 & 0.133 &0.028 & 0.008 & 0.007 & 31.3 & 0.26 F395N–F814W &0.333 & 0.094 &0.027 & 0.010 & 0.007 & 69.7 & 0.72 F410M–F814W &  > 1 & 0.476 &0.060 & 0.014 & 0.009 & 11.6 & 0.17 F438W–F814W &  > 1 & 0.400 &0.053 & 0.020 & 0.013 & 2.9 & 0.06 F467W–F814W &  > 1 &  > 1 &0.149 & 0.038 & 0.017 & 3.6 & 0.13 F475W–F814W &  > 1 &  > 1 &0.097 & 0.031 & 0.017 & 1.3 & 0.04 F547M–F814W &  > 1 &  > 1 &0.198 & 0.050 & 0.023 & 1.4 & 0.07 F555W–F814W &  > 1 &  > 1 &0.182 & 0.048 & 0.022 & 1.0 & 0.05 F606W–F814W &  > 1 &  > 1 &0.284 & 0.075 & 0.027 & 0.79 & 0.06 F625W–F814W &  > 1 &  > 1 &0.402 & 0.114 & 0.033 & 0.90 & 0.10 F775W–F814W &  > 1 &  > 1 &  > 1 & 0.741 & 0.270 & 0.94 & 0.70 For a population of mixed age, the color-metallicity relation breaks down because younger giants, which are more massive, are hotter than older giants of the same metallicity. The color changes due to age and metallicity are demonstrated in Figure [agez], for two different ages (12.5 and 4 Gyr, solid and dashed lines, respectively), and for a range of metallicities (different colors). The effect is quantitatively shown in Figure [agecol], where color as a function of metallicity is plotted for several ages (2, 7 and 12 Gyr) for a giant branch star with a *M**F*814*W* =  − 1.0 (with comparable results along the entire giant branch). At higher metallicity this leads to an uncertainty in derived metallicity of a few tenths of a dex, but the uncertainty can be significantly larger at lower metallicity. The uncertainties in metallicity also become greater with younger populations; as Figure [agecol] shows, the color difference between 2 and 7 Gyr is almost double that between 7 and 12 Gyr. Separating atmospheric parameters using photometric colors ---------------------------------------------------------- The key to breaking the age-metallicity degeneracy is separating the color effects of metallicity and temperature. Two color indices have the potential to break the degeneracy because one color can measure a metallicity dependent feature in the spectrum, while the other can control for the temperature. The more the two color’s sensitivities to metallicity and temperature differ, the more effective the color combination will be at measureing metallicity. We examine the temperature and metallicity sensitivity for all the filter combinations by using Kurucz stellar atmosphere models of metallicity ranging from  − 5 >   >  + 1. We integrate the WFC3 transmission curve for each filter over the synthetic spectra with parameters of typical giant branch stars (log g = 2.5 and T*e**f**f* = 4500 K). We compute the color change with temperature (Δ color/Δ T*e*) and metallicity (Δ color / Δ ), at a range of metallicities. The ratio of the two gives the relative sensitivity to temperature and metallicity at equal color difference, where small values of Δ T*e*/Δ indicate a smaller dependence on metallicity than temperature. The relative sensitivity of temperature to metallicity (Δ T*e*/Δ ) for all colors is plotted in Figure [dtdz]. As expected, the colors whose bandpasses contain fewer metal-features (e.g F555W–F814W and F467M–F547M) are the ones that are least sensitive to metallicity. Figure [dtdz] shows that the F467M–F547M and F555W–F814W colors have comparably smaller sensitivities to metallicity. However, the relative color error for F467M–F547M is over 3.5 times larger than F555W–F814W for a fixed exposure time. F555W–F814W stands out as the optimal temperature index, considering minimal metallicity sensitivity and photometric accuracy. While F555W–F814W certainly changes with metallicity (cf. its use as a metallicity indicator in clusters at fixed age as discussed in the previous section) it has the largest relative sensitivity of temperature to metallicity of all the filters compared in this section (see Figure [dtdz]). Figure [agez2] demonstrates the age independence and metallicity separation of a color-color diagram. The plot includes isochrones of two different ages, 12.5 and 4 Gyr, as solid and dashed lines respectively, for a range of metallicities in steps of 0.5 dex. The solid and dashed lines closely follow each other throughout the color-color diagram. The bifurcation of the isochrones at cooler temperatures (especially seen at higher metallicity with the dashed and solid purple lines) is due to dwarf and giant stars colors showing increased gravity sensitivity at higher metallicity. The gravity sensitivity causes the metallicity-sensitive color of dwarfs to be bluer than giants at the same value of the temperature-sensitive color, however, for targets at a common distance, apparent magnitude can be used to separate the evolutionary stage. Based upon the color sensitivities shown in Figure [dtdz], we selected the more metallicity-sensitive colors to use with the temperature-sensitive color (F555W–F814W) in color-color diagrams. Figure [sensitivity] shows giant branch isochrones in color-color diagrams for five metallicity sensitive colors: F336W–F555W, F390M–F555W, F390W–F555W, F395N–F555W and F410M–F555W. The effectiveness of these color combinations to determine metallicity are quantified in Table [spans] by the color separation due to metallicity predicted from stellar isochrones in color-color diagrams, using units of dex of /0.01 mag of color change. While all the filter sets have similar sensitivity around solar metallicity ( + 0.5 >   >  − 0.5) this breaks down at lower metallicities. lcccccccc[h] F336W–F555W & 0.9 & 0.427 & 0.143 &0.047 &0.023 &0.026 & 9.8 & 0.24 & 1.0 & 0.290 & 0.107 &0.045 &0.023 &0.024 & 9.8 & 0.23 & 1.1 & 0.236 & 0.087 &0.051 &0.023 &0.024 & 9.8 & 0.23 F390W–F555W & 0.9 & 0.943 & 0.287 &0.098 &0.041 &0.049 & 1.0 & 0.04 & 1.0 & 0.704 & 0.199 &0.090 &0.038 &0.046 & 1.0 & 0.04 & 1.1 & 0.541 & 0.149 &0.084 &0.038 &0.043 & 1.0 & 0.04 F390M–F555W & 0.9 & 0.667 & 0.206 &0.074 &0.027 &0.036 & 6.9 & 0.19 & 1.0 & 0.493 & 0.146 &0.062 &0.024 &0.035 & 6.9 & 0.16 & 1.1 & 0.370 & 0.110 &0.052 &0.023 &0.034 & 6.9 & 0.16 F395N–F555W & 0.9 & 0.515 & 0.151 &0.066 &0.036 &0.049 & 15.4 & 0.56 & 1.0 & 0.369 & 0.104 &0.057 &0.037 &0.047 & 15.4 & 0.56 & 1.1 & 0.277 & 0.079 &0.052 &0.038 &0.047 & 15.4 & 0.60 F410M–F555W & 0.9 &  >  1 & 0.820 &0.254 &0.090 &0.063 & 2.6 & 0.23 & 1.0 &  >  1 & 0.510 &0.288 &0.070 &0.051 & 2.6 & 0.16 & 1.1 &  >  1 & 0.364 &0.350 &0.063 &0.044 & 2.6 & 0.16 Weighting the metallicity sensitivity by the relative uncertainty (last column of Table [spans]) shows that F390W–F555W is the most sensitive for a fixed exposure time. Below  <  − 1.5 the color separation for F395N–F555W, F390M–F555W and F336W–F555W are the most sensitive to metallicity. Although the F395N–F555W color retains sensitivity it also requires 2 to 3 times more exposure time to get comparable accuracy, making it prohibitive to use. At extremely low metallicity the most sensitive color is F395N–F555W, with sensitivity decreasing respectively for the colors: F336W–F555W, F390M–F555W and F390W–F555W. Based upon the metallicity sensitivity and photometric efficiency we find that the most promising metallicity indicating filters are the F390W, F390M and F336W filters. In the remainder of the paper we will focus our analysis on these filters. Reddening Effects ----------------- Reddening adds uncertainty to any photometric metallicity derivation, especially when the uncertainty in reddening is large, or if there is differential reddening throughout the field. In a color-color plot uncorrected reddening will be confused with a change in metallicity if the reddening vector is in the direction of the metallicity separation. For the majority of the color-color diagrams the reddening vectors are predominantly in the same direction as the isochrones, mitigating the influence of reddening. However, for (F336W–F555W, F555W–F814W) the reddening vector is  ∼  40∘ shallower than the isochrones, which increases the uncertainty in derived metallicities when there is differential reddening or if the uncertainty in the reddening is large. Reddening vectors are indicated in Figure [sensitivity] for E(B–V) = 0.1. The vector is defined by the reddening of the colors along the X and Y axes. For a given color, the reddening E(Filter1–Filter2) = *A**F**i**l**t**e**r*1 − *A**F**i**l**t**e**r*2, where A*F**i**l**t**e**r*1 = R*F**i**l**t**e**r*1 E(B–V), and the R*F**i**l**t**e**r* values are listed in Table [throughput]. To get an estimate of the effect of reddening on metallicities we consider an uncertainty of *σ**E*(*B* − *V*) = 0.02, as might be appropriate in a low reddening target. At shorter wavelengths, where all of our metallicity sensitive filters are located, the uncertainty in reddening will have an increased effect due to the UV extinction law increasing as wavelength decreases. In this case the *σ**E*(*B* − *V*) adds a systematic color uncertainty of  ∼ 0.1 mag to all of the metallicity sensitive colors. However, the color change from reddening will be along the reddening vector, and because the angles between the isochrones and reddening vectors vary (e.g. Figure [sensitivity]) and the colors have different sensitivity to metallicity (e.g. Table [spans]), this leads to metallicity uncertainties ranging from a tenth to a few tenths of a dex. The resulting systematic uncertainties in are reported in Table [error]. As expected, larger uncertainties are found at lower metallicities. Historically, reddening-free indices have been proposed as a means to correct colors for extinction. A reddening-free index, Q, is defined as: $$Q=(m\_1-m\_2) - \frac{E(m\_1 - m\_2)}{E(m\_2-m\_3)}(m\_2-m\_3)$$ where E(*m*1 − *m*2) is the color excess for the color (*m*1 − *m*2). However, making our metallicity indicator reddening-free still leaves the problem that the temperature indicator is affected by reddening. To completely remove reddening effects from the color-color diagram, both indices need to be reddening-free. suggest using two reddening-free indices (5 total filters, including two in the IR) to predict metallicity. We looked into using these additional filters and found that with our low reddening targets the additional filters do not significantly add to the metallicity resolution. lccc[Hb] F336W–F555W & 0.30 & 0.14 & 0.15 F390M–F555W & 0.20 & 0.09 & 0.13 F390W–F555W & 0.23 & 0.11 & 0.12 F395N–F555W & 0.10 & 0.07 & 0.09 F410M–F555W & 0.33 & 0.12 & 0.08 Abundance Ratio Variations -------------------------- While is commonly used as a proxy for the overall stellar metallicity, elements do not vary in lockstep from star to star. Variations in abundance ratios can alter spectral features within a bandpass, changing the stellar color and causing the overall photometric metallicity to be misinterpreted if fixed abundance ratios are assumed. A common abundance variation is *α* element enhancement (e.g. ), often present at low metallicity in globular clusters (see and references there in). The CMD of an *α*-enhanced population resembles a CMD of higher with solar abundance ratios. Figure [cmdafeh] shows how *α* enhancements affect different colors at low and high metallicity. Different combinations of and [*α*/Fe] can give the same colors. Using Dartmouth isochrones, we compute the relative color change from *α* abundances (Δ color/ Δ [*α*/Fe]) and metallicity (Δ color/ Δ ), to find the relative sensitivity of various colors to and *α* enhancement. The ratio of the two as a function of metallicity is presented in Table [dadz]; unsurprisingly the colors with the most *α* sensitivity are the the two colors (F395N–F555W and F390M–F555W) whose bandpasses are dominated by the H & K lines. Metallicities assigned assuming solar abundance ratios can be considered as a metallicity indicator that is a combination of the actual and [*α*/Fe], following the relation: $$\textrm{[m/H]}\_{phot}=\textrm{[Fe/H]}+ \left( \frac{\Delta \textrm{[Fe/H]} }{ \Delta[\alpha/\textrm{Fe}]} \right) [\alpha/\textrm{Fe}]$$ where the amount of relative color change due to *α* enhancements (listed in Table [dadz]) can be used to get the relation. The differing sensitivity of (Δ/Δ[*α*/Fe]) in different colors implies that *α* abundance can be separated with photometry using similar techniques described in Section 2.3. However, additional colors with differing sensitivities would be necessary to isolate the *α* abundance, such as F395N–F555W and F410M–F555W or F390M–F555W and F390W–F555W. We should note that the majority of globular clusters show large star-to-star variations in light elements (Li to Si) (for a review see Gratton et al. 2012, and references therein). Additionally there are cluster-to-cluster abundance variations, which also have the potential to affect the stellar colors. found that abundance enhancements of Mg, Si, and to some extent Ca, will change the color and thus location of the giant branch, while variations in oxygen will affect the height of the subgiant branch and the luminosity of the MS turnoff. Of particular interest in this study, C and N enhancements can affect the stellar colors measured with F390M and F390W, since a few CN and CH bands fall in the same spectral region covered by these filter bandpasses; see Section 2.1 for details. Since we cannot explicitily measure any of these abundance variation in this study due to the limited resolution of photometric metallicities, any such variations could affect the stellar colors and our inferred metallicities. lccc[Hb] F336W–F555W & 0.27 & 0.39 & 0.64 F390W–F555W & 0.22 & 0.19 & 0.09 F390M–F555W & 0.65 & 0.37 & 0.34 F395N–F555W & 0.88 & 0.84 & 0.79 F410M–F555W & 0.09 & 0.17 & -0.09 Testing the method: Observations of calibration clusters ======================================================== WFC3 images were obtained of five clusters during cycle 17 as part of the calibration program 11729, (PI Holtzman). Additional images of the five clusters come from the calibration portion of the program 11664 (PI Brown). The five clusters were chosen because they are well studied and span a range of ; 5 of the 6 clusters are discussed by for ACS calibration. lccccc[H] F336W &850, 30 &1000, 30 &1160, 30 &950, 30 &800, 30 F390M &1400, 50 &1400, 50 &1400, 50 &1400, 50 &1400, 50 F390W &2290, 10 &2460, 10 &2596, 10 &2376, 10 &2250, 10 F395N &1930, 90 &2100, 90 &2240, 90 &1015, 90 &1850, 90 F410M &1530, 40 &1600, 40 &1600, 40 &1600, 40 &1490, 40 F467M &700, 40 &800, 40 &900, 40 &730, 40 &700, 40 F547M &445 &445 &445 &445 &445 F555W &1360.5 &1360.5 &1360.48 &1381 &1360.5 F814W & 860.5 &1020.5 &1160.48 &961 &810.5 F160W &1245, 8.3 &1245, 12.5 &1245, 16.7 &1245, 12.5 &1245, 4.2 F110W &1245, 8.3 &1245, 12.5 &1245, 16.7 &1245, 12.5 &1245, 4.2 [exposure] The Observations ---------------- We present data for four globular clusters, M92 (NGC 6341), NGC 6752, NGC 104 (47 Tuc), NGC 5927, and one open cluster NGC 6791. Exposure times are listed in Table [exposure], and include short and long exposures to increase the dynamic range of the photometry. Photometry ---------- Reduced images were obtained from the HST archive using its on-the-fly processing. A deep image was created by summing all of the exposures in F467M, and stars were identified on this image. For each individual frame, a pixel area correction was applied to account for the modification in fluxes by flat fielding, and an astrometric solution was derived relative to the reference frame. Aperture photometry was done on all of the stars using aperture radii of 0.12, 0.25, 0.375, and 0.5 arcsec. Using this initial aperture photometry and model PSFs calculated using TinyTim, photometry was redone on each individual star after subtracting off all neighbors; this process was iterated twice. Cluster parameters ------------------ To adopt isochrones for the clusters we search the literature for well established cluster parameters. The distance modulus and the reddening are needed to get the absolute magnitudes of the cluster stars. Additionally we use the age, metallicity, and *α* enhancement to select isochrones for our study. We have compiled a sampling of the reported cluster parameters, and list the literature values adopted in this study in Table [obsdat]. **M92** is the only very metal-poor cluster in our sample. Spectroscopic metallicity measurements include those by,  =  − 2.24 ± 0.08,,  =  − 2.16 ± 0.02, and,⟨ ⟩ =  − 2.38 ± 0.02 and ⟨ ⟩ =  − 2.50 ± 0.12.. We average the spectroscopic metallicities, and adopt  =  − 2.30. Isochrone studies by and both found *α* enhancements of [*α*/Fe]  =  + 0.3, however our isochrone grid is in steps of 0.2, so we adopt [*α*/Fe] = 0.4. We adopt the age, and reddening found by, 13.5 Gyr and E(B–V)  = 0.023. After adopting these isochrone parameters we adjust the distance modulus to be fully consistent with our isochrone models, by assuming (m–M)$\_{\rm{V}} = 14.60$, which is consistent with the values used by. **NGC 6752**. measured spectra of seven stars for an overall metallicity  =  − 1.48 ± 0.02, and [*α*/Fe]  =  + 0.27  ± 0.01. measured  =  − 1.555 ± 0.07 from high resolution spectra of 14 stars, they also measured multiple *α* elments for an average [*α*/Fe]  =  + 0.35. In a previous study, found  =  − 1.43 ± 0.04. Other spectroscopic metallicity measurements include those by,  =  − 1.54 ± 0.09,,  =  − 1.42 ± 0.08,, ⟨ ⟩ =  − 1.50 ± 0.02. We average the spectroscopic and adopt a  =  − 1.45, which is slightly lower than the average but is consistent with the larger [*α*/Fe] adopted to fit our isochrone grid, [*α*/Fe] = 0.4. We assume a reddening of E(B–V)  = 0.046 ± 0.005 as measured by from 118 stars. We adopt an age of 12.5 Gyr, reported by, and also by in a study of comparative ages of globular clusters. From these adopted isochrone prameters we adjust our distance modulus to (m–M)$\_{\rm{V}} = 13.20$, which is consistent with the distance to NGC 6752 measured by using the white dwarf cooling sequence; (m–M)$\_{\rm{V}}$= 13.17. **47 Tuc**. Spectroscopic measurement include those by,  =  − 0.71 ± 0.08, who found  =  − 0.70 ± 0.07,, ⟨ ⟩ =  − 0.70 ± 0.05, found  =  − 0.60 ± 0.2 based on seven RGB stars, and measured  =  − 0.74 ± 0.02 from the medium resolution spectra of 147 stars and  =  − 0.77 ± 0.03 from the high resolution spectra of 11 stars, and measured an [*α*/Fe] = 0.44. We average the adopting values of  =  − 0.70 and [*α*/Fe] = 0.4. We use the E(B–V)  = 0.024 reported by. found an age of 12 Gyr, derived an age of 12.9 Gyr using diffusive models, which we average for an adopted age of 12.5 Gyr. Based upon the adopted isochrone parameters we adjust the distance modulus to (m–M)$\_{\rm{V}}=13.20$ which is smaller than the values measured by and who both measured the distance modulus (13.36 and 13.27, respectively) to 47 Tuc using white dwarf cooling models. [ht] **NGC 5927** is more complicated to study photometrically due to the high differential reddening within the cluster ( Δ E(V-I)=0.27 measured by and *δ*(E(B–V)=0.169 measured by Bonatto et al. 2013). found  =  − 0.30 ± 0.09; measured  =  − 0.31, while found  =  − 0.67. We adopt a of  − 0.40 and [*α*/Fe] of +0.2. found the reddening to be E(B–V)= 0.45. did a study on the relative ages of GC’s and found the age to be  ≈  10 Gyr, while the relative GC age study by determined an age of  ∼  11.3 Gyr, we assume an age of 11.5 Gyr. Based upon the adopted isochrone parameters we adjust the distance modulus to (m–M)$\_{\rm{V}}=15.78$. **NGC 6791**, unlike the rest of our calibrating clusters, is an open cluster (although recent work by suggests that NGC 6791 might be the remains of a stripped globular cluster). determined a metallicity of  =  + 0.37. used infrared spectroscopy of 6 stars to measure a =  + 0.35 ± 0.02 and solar *α* abundance ratio. found a  =  + 0.4. We average the spectroscopic metallicities and round to the nearest isochrone grid spacing and adopt  =  + 0.4 and [*α*/Fe]=0.0. We assume the reddening found by, E(B–V) = 0.10− 0.02+ 0.03. We use the age found by both and of 8 Gyr. found an age of  ∼ 8.3 Gyr, and also reported a consistent age of 8 Gyr between the white dwarf cooling sequence and the MSTO age. From this we find a distance modulus of (m–M)$\_{\rm{V}}=13.45$, which is consisten with who used an eclipsing binary to determine (m–M)$\_{\rm{V}}$ = 13.46 ±  0.1, and with, who found (m–M)$\_{\rm{v}} =13.45^{+0.03}\_{-0.12}$. As previously stated the adopted and [*α*/Fe] are not perfectly matched to the literature values but are rounded to the spacing within the isochrone grid (in steps of 0.05 of and 0.2 of [*α*/Fe]). When adjusting the metallicity to our grid we note that the adjustment for a change in [*α*/Fe] varied for each color according to equation (2) and Table [dadz]; therefore we made an average metallicity adjustment that best fit all colors simultaneously. llrlllrr[Ht] M 92 &17 17 07.05 &+43 07 58.2 &0.023 &14.60 &–2.50, –2.14 (–2.30) &+0.30 (+0.4) &13.5 NGC 6752 &19 10 54.86 &–59 59 11.2 &0.046 &13.20 &–1.54, –1.43 (–1.45) &+0.27 (+0.4) &12.5 NGC 104 &00 24 15.26 &–72 05 47.9 &0.024 &13.20 &–0.77, –0.60 (–0.70) &+0.44 (+0.4) &12.5 NGC 5927 &15 28 00.20 &–50 40 26.2 &0.45 &15.78 &–0.67, –0.30 (–0.40) &... (+0.2) &11.5 NGC 6791 &19 20 53.00 &+37 46 30.0 &0.10 &13.45 &+0.35, 0.40 ( + 0.40) & 0.0 ( 0.0) &8.0 [ht] Absolute Magnitudes ------------------- From the reported reddening, and distance modulus we calculate the Absolute magnitudes for each filter. All magnitudes reported are in the Vegamag system with zeropoints taken from the WFC3 handbook. The absolute magnitudes for each filter were calculated using: M(*f**i**l**t**e**r*) =  m*o**b**s**e**r**v**e**d* - ((m–M)$\_{\rm{v}}$ - A$\_{\rm{v}}$) - A*f**i**l**t**e**r*. The distance modulus, (m–M)$\_{\rm{v}}$, as reported in Table [obsdat], was corrected for the V band extinction, $A\_{\rm{v}}$ = 3.1E(B–V), and addtionally corrected for the extinction within the given filter A(*f**i**l**t**e**r*) = R(*f**i**l**t**e**r*)E(B–V). The ratio of total to selective extinction, R(*f**i**l**t**e**r*) as listed in Table [throughput], was calculated by taking the integral over the stellar atmosphere SED convolved with the filter transmission curve, divided by the integral over the same atmosphere and filter convolved with the galactic extinction curve. We used an atmosphere with  =  − 1.5, temperature of 5000, and log g = 2.0. [ht] Cluster Isochrone comparison ============================ It is well known that theoretical isochrone models do not perfectly match observed colors due to a combination of uncertainties from stellar evolution models, model atmospheres, and instrumental systematics. When comparing the isochrone of the mean spectroscopic value to the CMDs the models often did not match the shape of the cluster ridgelines. However, it should be noted that uncertainties in distance, age, reddening and composition complicate the fitting process. The cluster CMDs are shown in Figures [CMDs] - [medCMD], where dashed lines represent the literature valued isochrones adopted for each cluster as reported in Table [obsdat]. In order to improve the metallicity determinations we derived an empirical correction to the isochrones using the five cluster CMDs as benchmarks. For small steps in magnitude along the giant branch, we found the distance between the theoretical isochrone colors and the cluster ridgeline, then squared and summed the distances. Using the canonical metallicities we found the empirical corrections with the smallest summed distance between the corrected isochrone and the cluster. These corrections lead to consistent cluster metallicities across all CMDs. All isochrone corrections were based on color, gravity and metallicity adjustments. The various filter combinations require different functional forms of the generic equation: $$\begin{aligned} {\rm(color)}\_{\text{corrected}}=(\alpha +\beta\text{\textrm{[Fe/H]}}){\rm(color)}\_{\rm{obs}} \\ + \gamma(\text{\textrm{[Fe/H]}}+\delta)^2 +\epsilon \nonumber \\ + ( \zeta +\eta \textrm{[Fe/H]})(\theta - \text{log g}) \nonumber \end{aligned}$$ Equation (3) was modifed by inspection of the various CMDs. The (F555W–F814W, F814W) CMD in Figure [CMDs] shows the uncorrected literature valued isochrones falling redward of the cluster ridgelines to varying degrees depending on the magnitude and metallicity of the isochrone. The most metal-poor cluster was especially discrepant, with larger deviations along the lower giant branch than the upper. For this CMD the first order color correction includes a dependence on metallicity, which allows for larger corrections at lower metallicity. The first order color dependent term shifts and changes the shape of the isochrone, while the second order metallicity dependent term adds the same shift to a given isochrone without changing the shape. The additional gravity dependent term corrects the bend of the giant branch (where log g  <  3.4), with more bend required as metallicity decreases. [ht] In Figure [uCMD] the (F336W–F555W, F814W) CMD shows the uncorrected isochrones redward of the cluster ridgelines, except for M92 where the isochrone along the giant branch falls blueward. This CMDs shows the largest difference between the clusters and the models. The bend of the isochrone along the giant branch (where log g  <  3.4) is corrected with a gravity and metallicity dependent term, with increasing tilt as metallicity decreases. In this CMD the corrected isochrone for 47 Tuc (NGC 104) falls  ∼ 0.03 mag redward of the cluster ridgeline, while all of the other clusters are within  ∼ 0.01 mag of their ridgelines. We believe this slight offset to be an artifact of rounding the average spectroscopic metallicity to fit the grid spacing. This color offset is smaller than the color difference between a isochrones of [Fe/H]  =  − 0.70 and − 0.75, therefore we do not force and overcorrection which would potentially lead to distorted MDFs. In Figure [wideCMD] the uncorrected isochrones in the (F390W–F555W, F814W) CMDs are the least discrepant of the CMDs we examined. However a few corrections are still needed to improve the fits, including a metallicity dependent gravity term, with smaller corrections required for lower metallicities. lllll[hb] (F336W–F555W, F814W) & M92 & -2.28 & 0.114 & 937 ................& N6752 & -1.45 & 0.073 & 481 ................& N104 & -0.74 & 0.059 & 2096 ................& N5927 & -0.37 & 0.067 & 1518 ................& N6791 & 0.39 & 0.079 & 320 (F390M–F555W, F814W) & M92 & -2.32 & 0.089 & 899 ................& N6752 & -1.43 & 0.087 & 468 ................& N104 & -0.68 & 0.066 & 2071 ................& N5927 & -0.42 & 0.108 & 1504 ................& N6791 & 0.40 & 0.098 & 311 (F390W–F555W, F814W) & M92 & -2.33 & 0.092 & 906 ................& N6752 & -1.39 & 0.060 & 462 ................& N104 & -0.66 & 0.060 & 2067 ................& N5927 & -0.46 & 0.124 & 1443 ................& N6791 & 0.39 & 0.082 & 297 The (F390M–F555W, F814W) CMDs presented in Figure [medCMD] shows the uncorrected isochrones blueward of the clusters. Additionally the isochrone curve along the giant branch is too steep to match the shape of the clusters. We applied a first order color term that has an additional metallicity dependence, making the color coefficient greater for lower metallicities. [ht] The isochrone corrections for all CMDs are listed in Table 8. Finding a single correction to align all five isochrones to the clusters gives us reasonable confidence to interpolate our corrections. Smoothly changing corrections can be applied to the entire set of isochrones, then fit to an unknown population, although caution must be taken when extending beyond our calibrating clusters, i.e. M92 at  =  − 2.3, and NGC 6791 at  =  + 0.40. For the majority of cluster CMDs the empirical corrections improve the isochrone-ridgeline alignment such that the average difference between the two is $\lesssim$ 0.01 mag along the giant branch. In Figures [CMDs] - [medCMD] the dash-dot lines represent the uncorrected isochrones, while the solid lines are the empirically corrected isochrones. All the color corrections listed in Table 8 are shown to  =  − 2.5. Below this metallicity we replace all of the metallicity dependent terms with the value of the term when  =  − 2.5 keeping the corrections constant where the color change is small and we are unable to verify the corrections. Metallicity Determinations -------------------------- We tested the empirical isochrone corrections by re-deriving the cluster metallicities. We assign photometric metallicities to stars in the CMDs by searching isochrone grids with spacing in of 0.05 dex; each star was assigned the metallicity of the closest isochrone. We adopted [*α*/Fe]  = 0.0 since we cannot separate the color effects due to *α* and with only three filters. We selected giant branch stars with errors  < 0.03 mag in all filters. ### Metallicities From CMDs The photometric metallicities adopted from (F336W–F555W, F814W), (F390M–F555W, F814W) and (F390W–F555W, F814W) were used to create MDFs for every cluster, which are shown on the left side of Figure [plotmdf]. For all the clusters the MDF peaks are within  ±  0.06 dex of the spectroscopically derived values. The recovered peaks, widths and the number of stars measured are listed in Table 9. The MDF dispersion within each cluster does not vary much across the metallicity range, although we do see slightly larger numbers for the differentially reddened NGC 5927 and for the lowest metallicity cluster, M92, reflecting the diminished accuracy expected at low metallicity, as discussed in Section 2.2. Additionally for NGC 104 and NGC 5927 the horizontal branch stars are clearly seen as small shoulders left of the MDF peak. The horizontal branch metallicities are systematically lower than the true metallicities because we only use the isochrone colors along the giant branch to assign metallicity, so horizontal branch stars were incorrectly matched to GB stars. Overall, the average MDF dispersion for all clusters is  ∼ 0.10 dex and is consistent with what is expected from photometric errors. We simulated this by creating synthetic CMDs based on the IMF and photometric errors within our data set. At each magnitude along the GB ridgeline we used the measured photometric errors to randomly distribute the relative number of stars found at each magnitude. The synthetic CMD was then put through the same isochrone matching processes to get photometric metallicities and a MDF for each cluster. In order to determine the systematic effect of using uncorrected isochrones to assign photometric metallicities we compared the spectroscopically determined metallicities to those adopted using the uncorrected isochrone grid to assign metallicities. Consistent cluster metallicities could not be found across the various CMDs. For a given CMD the peak of the metallicity distribution varied by  ±  0.4 dex from the canonical values, with the larger discrepancies occurring at low metallicity. cllll[H] -2.30 & 0.6208& -0.0853& 0.3473& 0.2647 -2.30 & 0.6212& -0.0839& 0.3476& 0.2651 -2.30 & 0.6217& -0.0825& 0.3480& 0.2657 ....&....&....&....&.... -1.45 & 0.6804& -0.0160& 0.4533& 0.3075 -1.45 & 0.6807& -0.0149& 0.4534& 0.3077 -1.45 & 0.6809& -0.0137& 0.4536& 0.3080 ....&....&....&.... &.... -0.70 & 0.7542& 0.3142& 0.8049& 0.5268 -0.70 & 0.7545& 0.3152& 0.8052& 0.5270 -0.70 & 0.7550& 0.3165& 0.8058& 0.5276 ....&....&....&....&.... -0.40 & 0.7740& 0.4647& 0.9201& 0.6125 -0.40 & 0.7743& 0.4656& 0.9203& 0.6129 -0.40 & 0.7746& 0.4668& 0.9209& 0.6133 ....&....&....&.... &.... +0.40 & 0.9087& 1.0424& 1.4213& 0.9728 +0.40 & 0.9089& 1.0435& 1.4217& 0.9731 +0.40 & 0.9094& 1.0451& 1.4223& 0.9737 ### Metallicities From Color-Color Diagrams To test our metallicity recovery without using age as a known parameter, we apply the empirical corrections to isochrones in color-color diagrams (Figure [cc]) then adopt metallicities. For each star we found the closest isochrone in the grid (spaced by 0.05 dex in with the [*α*/Fe] held constant at solar) and assigned the corresponding metallicity. The corrected fiducial isochrone colors shown as black lines in the color-color diagram of Figure [cc] are available in the online version in Table 10. The MDFs derived from color-color diagrams are shown on the right side of Figure [plotmdf]. The recovered peaks, widths and the number of stars measured are listed in Table 11. We selected the same giant branch stars as before, i.e. stars with photometry better than 0.03 mag in all filters. We exclude poorly fit stars, e.g. stars more than 0.25 mag outside of the color range covered by the isochrone grid. This additional constraint accounts for the varying number of stars measured for different color-color diagrams in the last column of Table 11, though the variation is  ∼ 1%. For four of the five clusters, only the giant branch stars are used to adopt metallicities in the color-color plots. For the fifth cluster, NGC 6791, there are not enough prominent giant branch stars, necessitating that we additionally use the sub-giant branch and main sequence stars. Theoretically the different evolutionary states should not affect the metallicity determination because we selected the corresponding portion of the isochrone to match the less evolved stars. The MDFs from color-color diagrams, as compared to those from CMDs, tend to have slightly larger offsets from the spectroscopically derived values, and systematically larger MDF dispersions. However, these metallicities were adopted without any age information, which is necessary for measuring metallicities for a population of mixed ages. The MDFs from the (F390M–F555W) color-color diagrams have mean metallicities within  ±  0.07 dex for all clusters. The MDFs from the (F390W–F555W) color-color diagrams produces mean metallicities within  ±  0.05 dex for M92, NGC 104 (47Tuc) and NGC 6791, and peaks within  ∼  0.2 dex for NGC 6752 and the differentially reddened NGC 5927. The MDFs from the (F336W–F555W) color-color diagram finds mean metallicities of  ∼ 0.05 dex for most clusters, with the exception of NGC 104 which is discrepant by 0.1 dex. The larger MDF widths from the color-color diagrams as opposed to the CMD are a natural consequence of using color on the y-axis as opposed to magnitude. The change in color for the total range in metallicity is generally smaller than that for magnitude (see Section 2), thus there is decreased resolution when using color-color diagrams; additionally, the errors for colors are larger than for magnitudes. Together, the decreased color span and the increased errors both add to the uncertainty in the photometric metallicities adopted from color-color diagrams widening the MDFs. The derived MDF widths are consistent with what is expected based upon the photometric errors. lllll[Hb] (F336W–F555W, & M92 & -2.22 & 0.471 & 879 F555W–F814W) & N6752 & -1.73 & 0.484 & 454 .......... & N104 & -0.79 & 0.285 & 2067 .......... & N5927 & -0.38 & 0.365 & 1147 .......... & N6791 & 0.37 & 0.075 & 320 (F390M–F555W, & M92 & -2.25 & 0.244 & 906 F555W–F814W) & N6752 & -1.46 & 0.172 & 448 .......... & N104 & -0.68 & 0.201 & 2110 .......... & N5927 & -0.47 & 0.264 & 1322 .......... & N6791 & 0.37 & 0.061 & 314 (F390W–F555W, & M92 & -2.27 & 0.396 & 891 F555W–F814W) & N6752 & -1.57 & 0.250 & 447 .......... & N104 & -0.70 & 0.232 & 2107 .......... & N5927 & -0.58 & 0.296 & 1298 .......... & N6791 & 0.36 & 0.073 & 303 [mdfcc] Of the three color-color diagrams tested to assign metallicity the (F336W–F555W, F555W–F814W) diagram does the worst. For NGC 104 the peak metallicity is off by 0.09 dex and the MDF dispersions are twice to three times the dispersions from (F390M–F555W, F555W–F814W) depending on the cluster. F336W, while predicted to remain sensitive at low metallicity, does not show this in practice, as seen from the wide dispersions for the clusters. The overestimate of the sensitivity could be due to the fact that the color separation, as seen from the clusters, is smaller than predicted from the isochrones, both of which are shown in Figure [uCMD]. The (F390M–F555W, F555W–F814W) color-color diagram generally does the best job deriving metallicities of all the colors. The MDFs have peaks within  ±  0.07 dex of the spectroscopically determined metallicities for all clusters. The MDF dispersions are $\lesssim$ 0.2 dex, except for the very-low metallicity M92 at 0.24 dex and the differentially reddened NGC 5927 at 0.26 dex. The MDFs derived using the (F390M–F555W) color are better than those from (F390W–F555W) and (F336W–F555W). In particular the MDF dispersion in the very-metal poor regime is the narrowest of all the MDFs from color-color diagrams by almost a factor of two. The greater accuracy in photometric metallicities below  <  − 1.0 when using the F390M makes this filter the most useful when working in the very-low metallicity regime. The (F390W–F555W, F555W–F814W) color-color diagram recovers peaks within  ±  0.2 dex of the spectroscopically determined metallicities. The MDF dispersions are  ∼  0.25 dex, except for M92, the very-low metallicity cluster, where the dispersion is 0.4 dex. However, F390W is very economical in terms of integration time, and accurate to  ∼ 0.25 dex above  >  − 2.0, therefore this filter is most useful when working in the more metal rich regimes. In order to determine the systematic effect of using uncorrected isochrones to assign photometric metallicities we compared the spectroscopically determined metallicities to those adopted using the uncorrected isochrone grid in color-color diagrams to assign metallicities. In the MDFs from the color-color diagrams of (F336W–F555W, F555W–F814W) and (F390W–F555W, F555W–F814W) the adopted cluster metallicities were within  ∼ 0.2 dex of the spectroscopically derived values for the clusters with metallicities between  − 1.5 <   < 0, while the very metal poor and metal rich cluster are 3 to 4 times more discrepant. In the color-color diagram of (F390M–F555W, F555W–F814W) the metallicities are all systematically higher by 0.4 to 0.8 dex, again, with more disparate values for very metal poor and metal rich clusters. Conclusion ========== We explored the metallicity and temperature sensitivities of colors created from nine WFC3/UVIS filters aboard the HST using Dartmouth isochrones and Kurucz atmospheres models. The theoretical isochrone colors were tested and calibrated against observations of five well studied clusters. We found that (F390W–F555W), and (F390M–F555W), are the most promising colors in terms of metallicity sensitivity. For almost all of the clusters F390M has slightly better metallicity sensitivity, and narrower MDF dispersions, although the F390W filter requires much less integration time. Additionally, at low metallicity the photometric metallicities from (F390M–F555W) are nearly twice as accurate as those from (F390W–F555W). Using photometry of M92, NGC 6752, NGC 104, NGC 5927 and NGC 6791, all of which have spectroscopically determined metallicities spanning  − 2.30 <   <  + 0.4, we found empirical corrections to the Dartmouth isochrone grid for each of the following CMDs (F555W–F814W, F814W), (F336W–F555W, F814W), (F390M–F555W, F814W) and (F390W–F555W, F814W). Using the empirical corrections we tested the accuracy and spread of the photometric metallicities adopted from CMDs and color-color diagrams. From the color-color diagrams we were able to recover the spectroscopic metallicities independent from any assumptions about cluster ages, which allows us to apply the color-color diagram method of determining metallicities to complex stellar populations with confidence that the method breaks the age-metallicity degeneracy. When using color-color diagrams to assign metallicity, we found the (F390M–F555W) color to have the greatest accuracy and consistency across the entire metallicity range, with the main advantage being the increased sensitivity at low metallicity, while (F336W–F555W) and (F390W–F555W) both lose accuracy in this range. We showed that by using the calibrated isochrones we could assign the overall cluster metallicity to within  ∼  0.1 dex in when using CMDs (i.e when the distance, reddening and ages are approximately known). The measured MDFs from color-color diagrams show this method measures metallicities of stellar clusters of unknown age and metallicity with an accuracy of  ∼  0.2 - 0.58 dex using F336W,  ∼ 0.15 - 0.3 dex using F390M, and  ∼ 0.2 - 0.52 dex with F390W, with the larger uncertainty pertaining to lowest metallicity range. Acknowledgements ================ Support for program 11729 was provided by NASA through a grant from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. ll[H] $\rm{(F555W-F814W)}\_{n}=(0.89+0.028\rm{\textrm{[Fe/H]}}) \rm{(F555W-F814W)}\_{o} + 0.136 $ & log g  >  3.4, & -2.5  ≤   <  -1.4 $\rm{(F555W-F814W)}\_{n}=(0.89+0.028\rm{\textrm{[Fe/H]}}) \rm{(F555W-F814W)}\_{o} + 0.136 - (0.015\rm{\textrm{[Fe/H]}})(3.4-\log{g})$ &log g  <  3.4, & -2.5  ≤   <  -1.4 $\rm{(F555W-F814W)}\_{n}=(0.89+0.028\rm{\textrm{[Fe/H]}}) \rm{(F555W-F814W)}\_{o} + 0.05(\rm{\textrm{[Fe/H]}}+0.55)^2+0.10 $ &log g  >  3.4, & -1.4  ≤   <  0.5 $\rm{(F555W-F814W)}\_{n}=(0.89+0.028\rm{\textrm{[Fe/H]}}) \rm{(F555W-F814W)}\_{o}+0.05(\rm{\textrm{[Fe/H]}}+0.55)^2 +0.10-(0.015\rm{\textrm{[Fe/H]}})(3.4-\log{g})$ &log g  <  3.4, & -1.4  ≤   <  0.5 $\rm{(F336W-F555W)}\_{n}=0.88\rm{(F336W-F555W)}\_{o} - 0.009 $ & log g  >  3.4, & -2.5  <   ≤  -1.65 $\rm{(F336W-F555W)}\_{n}=0.88\rm{(F336W-F555W)}\_{o} - 0.009 + (0.015-0.036\rm{\textrm{[Fe/H]}})(3.4-\log{g})$ &log g  <  3.4, & -2.5  <   ≤  -1.65 $\rm{(F336W-F555W)}\_{n}=0.88\rm{(F336W-F555W)}\_{o} + 0.04(\rm{\textrm{[Fe/H]}}+0.80)^2-0.02 $ & log g  >  3.4, & -1.65 <   ≤  0.05 $\rm{(F336W-F555W)}\_{n}=0.88\rm{(F336W-F555W)}\_{o} + 0.04(\rm{\textrm{[Fe/H]}}+0.80)^2 -0.02 + (0.015-0.036\rm{\textrm{[Fe/H]}}(3.4-\log{g}) $ &log g  <  3.4, & -1.65 <   ≤  0.05 $\rm{(F336W-F555W)}\_{n}=0.88\rm{(F336W-F555W)}\_{o}- 0.009 $ & log g  >  3.4, & 0.05 <   ≤  0.5 $\rm{(F336W-F555W)}\_{n}=0.88\rm{(F336W-F555W)}\_{o} - 0.009 +(0.015-0.036\rm{\textrm{[Fe/H]}})(3.4-\log{g}) $ &log g  <  3.4, & 0.05 <   ≤  0.5 $\rm{(F390W-F555W)}\_{n}=(0.89+0.012\rm{\textrm{[Fe/H]}}) \rm{(F390W-F555W)}\_{o}+0.01(\rm{\textrm{[Fe/H]}}+2.5)^2 + 0.05 $ &log g  >  3.3, & -2.5  <   <  1.65 $\rm{(F390W-F555W)}\_{n}=(0.89+0.012\rm{\textrm{[Fe/H]}}) \rm{(F390W-F555W)}\_{o}+0.01(\rm{\textrm{[Fe/H]}}+2.5)^2 + 0.05 + (0.07 +0.006\rm{\textrm{[Fe/H]}})(3.3-\log{g}) $ &log g  <  3.3, & -2.5  <   <  1.65 $\rm{(F390W-F555W)}\_{n}=(0.89+0.012\rm{\textrm{[Fe/H]}}) \rm{(F390W-F555W)}\_{o}+0.057 $ &log g  >  3.3, & -1.65  <   <  +0.5 $\rm{(F390W-F555W)}\_{n}=(0.89+0.012\rm{\textrm{[Fe/H]}}) \rm{(F390W-F555W)}\_{o}+0.057 + (0.07 +0.006\rm{\textrm{[Fe/H]}})(3.3-\log{g}) $ &log g  <  3.3, & -1.65  <   <  0.5 $\rm{(F390M-F555W)}\_{n}=(1.01-0.005\rm{\textrm{[Fe/H]}}) \rm{(F390M-F555W)}\_{o} +0.07(\rm{\textrm{[Fe/H]}}+2.50)^2+0.005 $ &log g  >  3.2, & -2.5  <   <  -1.65 $\rm{(F390M-F555W)}\_{n}=(1.01-0.005\rm{\textrm{[Fe/H]}}) \rm{(F390M-F555W)}\_{o} +0.07(\rm{\textrm{[Fe/H]}}+2.50)^2+0.005 +(0.018-0.011\rm{\textrm{[Fe/H]}})(3.2-\log{g})$ & log g  <  3.2, & -2.5  <   <  -1.65 $\rm{(F390M-F555W)}\_{n}=(1.01-0.005\rm{\textrm{[Fe/H]}}) \rm{(F390M-F555W)}\_{o}+0.056$ &log g  >  3.2, & -1.65  <   <  +0.5 $\rm{(F390M-F555W)}\_{n}=(1.01-0.005\rm{\textrm{[Fe/H]}}) \rm{(F390M-F555W)}\_{o}+0.056 +(0.018-0.011 \rm{\textrm{[Fe/H]}})(3.2-\log{g})$ & log g  <  3.2, & -1.65  <   < +0.5 49 natexlab#1#1 Anthony-Twarog, B. J., Twarog, B. A., Laird, J. B., & Payne, D. 1991,, 101, 1902 Armandroff, T. E., & Zinn, R. 1988,, 96, 92 Bonatto, C., Campos, F., & Kepler, S. O. 2013,, 84, 187 Brogaard, K., VandenBerg, D. A., Bruntt, H., et al. 2012,, 543, A106 Brown, T. 2008, in HST Proposal, 11664 Brown, T. M., Ferguson, H. C., Smith, E., et al. 2005,, 130, 1693 Brown, T. M., Sahu, K., Zoccali, M., et al. 2009,, 137, 3172 Calamida, A., Bono, G., Stetson, P. B., et al. 2007,, 670, 400 Canterna, R. 1976,, 81, 228 Carraro, G., Villanova, S., Demarque, P., et al. 2006,, 643, 1151 Carretta, E., Bragaglia, A., Gratton, R., D’Orazi, V., & Lucatello, S. 2009,, 508, 695 Carretta, E., & Gratton, R. G. 1997,, 121, 95 Castelli, F., & Kurucz, R. L. 2004, ArXiv Astrophysics e-prints Chaboyer, B., Green, E. M., & Liebert, J. 1999,, 117, 1360 Da Costa, G. S., & Armandroff, T. E. 1990,, 100, 162 Da Costa, G. S., Armandroff, T. E., & Caldwell, N. 2002,, 124, 332 Da Costa, G. S., Armandroff, T. E., Caldwell, N., & Seitzer, P. 2000,, 119, 705 De Angeli, F., Piotto, G., Cassisi, S., et al. 2005,, 130, 116 di Cecco, A., Becucci, R., Bono, G., et al. 2010,, 122, 991 Dotter, A., Chaboyer, B., Jevremović, D., et al. 2008,, 178, 89 Gallart, C. 2008, in Astronomical Society of the Pacific Conference Series, Vol. 390, Pathways Through an Eclectic Universe, ed. J. H. Knapen, T. J. Mahoney, & A. Vazdekis, 278 García-Berro, E., Torres, S., Isern, J., et al. 2013, in European Physical Journal Web of Conferences, Vol. 43, European Physical Journal Web of Conferences, 5003 Geisler, D., Villanova, S., Carraro, G., et al. 2012, ArXiv e-prints Gratton, R. G., Bragaglia, A., Carretta, E., et al. 2003,, 408, 529 —. 2005,, 440, 901 Gratton, R. G., Carretta, E., & Bragaglia, A. 2012,, 20, 50 Grundahl, F., Clausen, J. V., Hardis, S., & Frandsen, S. 2008,, 492, 171 Harris, W. E. 1996,, 112, 1487 Heitsch, F., & Richtler, T. 1999,, 347, 455 Holtzman, J. 2008, in HST Proposal, 11729 Holtzman, J. 2009, in HST Proposal, 12304 Johnson, H. L., & Morgan, W. W. 1953,, 117, 313 Kirby, E. N., Lanfranchi, G. A., Simon, J. D., Cohen, J. G., & Guhathakurta, P. 2011,, 727, 78 Kraft, R. P., & Ivans, I. I. 2003,, 115, 143 Lianou, S., Grebel, E. K., & Koch, A. 2011,, 531, A152 Marín-Franch, A., Aparicio, A., Piotto, G., et al. 2009,, 694, 1498 McWilliam, A. 2010, in Nuclei in the Cosmos Origlia, L., Valenti, E., Rich, R. M., & Ferraro, F. R. 2006,, 646, 499 Renzini, A., Bragaglia, A., Ferraro, F. R., et al. 1996,, 465, L23 Saviane, I., Rosenberg, A., Piotto, G., & Aparicio, A. 2000,, 355, 966 Strömgren, B. 1966,, 4, 433 Tolstoy, E., Hill, V., & Tosi, M. 2009,, 47, 371 VandenBerg, D. A. 2000,, 129, 315 VandenBerg, D. A., Bergbusch, P. A., Dotter, A., et al. 2012,, 755, 15 VandenBerg, D. A., & Clem, J. L. 2003,, 126, 778 Woodley, K. A., Goldsbury, R., Kalirai, J. S., et al. 2012,, 143, 50 Wylie, E. C., Cottrell, P. L., Sneden, C. A., & Lattanzio, J. C. 2006,, 649, 248 Zinn, R., & West, M. J. 1984,, 55, 45 Zoccali, M., Renzini, A., Ortolani, S., et al. 2001,, 553, 733 Measuring Metallicities with HST/WFC3 photometry ================================================ We quantified and calibrated the metallicity and temperature sensitivities of colors derived from nine Wide Field Camera 3 (WFC3) filters aboard the Hubble Space Telescope (HST) using Dartmouth isochrones and Kurucz atmospheres models. The theoretical isochrone colors were tested and calibrated against observations of five well studied galactic clusters: M92, NGC 6752, NGC 104, NGC 5927, and NGC 6791, all of which have spectroscopically determined metallicities spanning  − 2.30 <   <  + 0.4. We found empirical corrections to the Dartmouth isochrone grid for each of the following color magnitude diagrams (CMD) (F555W–F814W, F814W), (F336W–F555W, F814W), (F390M–F555W, F814W) and (F390W–F555W, F814W). Using the empirical corrections we tested the accuracy and spread of the photometric metallicities assigned from CMDs and color-color diagrams (which are necessary to break the age-metallicity degeneracy). Testing three color-color diagrams [(F336W–F555W),(F390M–F555W),(F390W–F555W), vs (F555W–F814W)], we found the colors (F390M–F555W) and (F390W–F555W), to be the best suited to measure photometric metallicities. The color (F390W–F555W) requires much less integration time, but generally produces wider metallicity distributions, and, at very-low metallicity, the MDF from (F390W–F555W) is  ∼ 60% wider than that from (F390M–F555W). Using the calibrated isochrones we recovered the overall cluster metallicity to within  ∼  0.1 dex in when using CMDs (i.e. when the distance, reddening and ages are approximately known). The measured metallicity distribution function (MDF) from color-color diagrams show this method measures metallicities of stellar clusters of unknown age and metallicity with an accuracy of  ∼  0.2 - 0.5 dex using F336W–F555W,  ∼ 0.15 - 0.25 dex using F390M–F555W, and  ∼ 0.2 - 0.4 dex with F390W–F555W, with the larger uncertainty pertaining to the lowest metallicity range. Introduction ============ Metallicity, age and mass are fundamental characteristics of a stellar population. Metallicity distributions, in conjunction with chemical evolution models, provide evolutionary information about both enrichment and gas inflow and outflow in the form of galactic winds. While in recent years measuring large numbers of spectroscopic metallicities has become more feasible with multiobject spectrographs, it is still challenging to observe samples large enough to find rare objects or substructure in metallicity distribution functions, or to observe faint enough to build up large samples in nearby galaxies. Photometric metallicities, though not as accurate as spectra, provide measurements for every star in the field, including those with fainter magnitudes than can be reached spectroscopically. The general technique to assign photometric metallicities relates color to metallicity. For instance, in a color magnitude diagram (CMD) fiducial ridgelines from several clusters with similar age, and a range of known metallicities can be interpolated to estimate the metallicity of an unmeasured cluster based upon the location of its ridgeline, provided that the cluster is also of similar age (e.g., Saviane et al. 2000; Da Costa et al. 2000,2002). Fiducial ridgelines have been used to derive empirical relations between color and metallicity at a given absolute magnitude. Alternatively, one can use theoretical isochrones, however, the isochrones need to be empirically calibrated to match observed sequences (e.g., Brown et al. 2005; Lianou et al. 2011). An issue with deriving metallicities from color arises because a giant can be redder either because it is older, or because it is more metal rich. One method to break the age-metallicity degeneracy uses a color-color diagram, where one color is constructed using a metallicity sensitive filter, and the other color is constructed from a pair of filters that provide a temperature estimate with minimal dependence on metallicity. There is a long history in astronomy of using specifically designed filters to isolate stellar characteristics such as metallicity. proposed photometric indices as a means of stellar classification. The *m*1 = (*v* − *b*) − (*b* − *y*) index estimates the stellar metallicity of horizontal branch, red giant branch and main sequence stars when used in conjunction with a temperature-sensitive index such as (*b* − *y*). The Washington system was developed to measure photometric metallicities, temperatures and the amount of CN line blanketing for giant G and K stars. [ht] Another consideration with photometric metallicities is that medium and broad band colors lose nearly all sensitivity at low metallicity. The Caby system, developed in the 1990’s, modified the Strömgren system, replacing the v filter with a filter centered on the Ca H & K lines. The Caby system measures stellar metallicities 3 times more accurately, especially at low metallicity where the m1 index loses sensitivity. Several filters on the HST/WFC3 were designed to provide information on the metallicities of resolved populations. Our WFC3 calibration program (11729, PI Holtzman) was designed to collect images of clusters with well established spectroscopic metallicities in order to map WFC3 colors to metallicity. Another program designed to study the galactic bulge (11664, PI Brown) imaged the same clusters with different filters, also as a calibration. This paper utilizes observations from both programs to present a broad range of calibrating filters and their metallicity sensitivities. Observations were obtained of M92, NGC 6752, NGC 104, NGC 5927, and NGC 6791. The five calibration clusters are well studied spectroscopically, and span a wide range of metallicity:  − 2.3 <   <  + 0.4. Individual stars can be observed throughout the Local Group, opening up possibilities for studying populations within the Milky Way, as well as Local Group galaxies. Our primary interest is in using these calibrations to measure metallicity distribution functions in several Local Group dwarf galaxies where we only have sufficient accuracy to measure giants, but the calibrations presented here may have broader applicability. As will be shown, photometric indices that measure metallicity also have a sensitivity to surface gravity, but for objects outside the Milky Way, it is trivial to separate giants from dwarfs based on their observed brightness. We structure the paper as follows: in Section 2 we use model atmospheres and isochrones to explore the capacity of various filter combinations to measure metallicity; in Section 3 we present the observations of the stellar clusters. Section 4 describes how we calibrate a set of isochrones to the observed sequences, and demonstrate how well these can be used to recover metallicities. In Section 5 we summarize our conclusions. Deriving Metallicity ==================== WFC3 filters ------------ HST WFC3 observations were obtained in the following filters: F336W, F390M, F390W, F395N, F410M, F467M, F547M, F555W, F814W, F110W and F160W. Information on the filter widths and system throughputs are listed in Table [throughput]. The system responses for the UVIS filters are shown in Figure [bandpasses], along with Kurucz model stellar spectra of typical giant branch stars with log g = 1.5, T = 4000 K and three different metallicities,  = 0.0,  − 1.5 and  − 3.0. llrcl[b] F336W &u, Strömgren &51.1 &0.20 &5.04 F390W &C, Washington &89.6 &0.25 &4.47 F390M &Ca II continuum &20.4 &0.22 &4.63 F395N &Ca II, 3933/3968 & 8.5 &0.22 &4.58 F410M &v, Strömgren &17.2 &0.27 &4.42 F467M &b, Strömgren &20.1 &0.28 &3.79 F547M &y, Strömgren &65.0 &0.26 &3.12 F555W &WFPC2 V &156.2 &0.28 &3.16 F814W &WFPC2 Wide I &153.6 &0.23 &1.83 F110W &Wide YJ &443.0 &0.56 &1.02 F160W &WFC3 H &268.3 &0.56 &0.63 Many of these filters are comparable to those from well established systems. For example, F336W, F410M, F467M and F547M are analogous to Strömgren u, v, b and y, respectively. While designed for measuring the Balmer decrement, the F336W and F410M filters both cover spectral regions with many absorption features from metals. The other two Strömgren filters, F467M and F547M, sample regions mostly clear of spectral features; historically F467M–F547M (i.e. b–y) has been used as a temperature indicator. Filters F390M and F395N cover the Ca H & K spectral features. The F395N filter is narrow (85 Å) while the F390M filter is broader, leading to a higher throughput, but also including a CN feature at *λ*  ≈  3885 Å. The F390W filter has a wide bandpass (896 Å), and is similar to the ground-based Washington C filter. The Washington C filter was designed to evaluate the total effects of line blanketing by CN (bands at 3595, 3883 and 4215 Å) as well as the CH molecular transition at 4304 Å, commonly known as the G band. F555W and F814W are wide band filters designed to cover the same spectral regions as the WFPC2 and ACS filters of the same name; they are similar to, but broader than, Johnson-Cousins V and I. The F555W and F814W filters measure mostly continuum; one notable exception is the MgH feature at  ∼  5100 Å in the F555W bandpass. As part of the observation campaign images in two near IR filters, F110W and F160W, were obtained in an effort to explore reddening free indices. However, we found that the photometric uncertainties from a two color reddening free index seemed to be too large to significantly add to our present analysis. Age - Metallicity Degeneracy ---------------------------- Stellar colors are a function of gravity, metallicity and effective temperature. For a given star, increasing the metallicity lowers the effective temperature and enhances line blanketing effects at a given mass, both of which cause redder color. For populations of comparable age the color is directly related to the metallicity. This relation has been used extensively in older stellar populations, e.g., globular clusters, to determine metallicities. For an old population, we calculate the metallicity sensitivity of CMD colors using Dartmouth stellar isochrones. These cover the metallicity range  − 2.5 <   < 0.5. Since we wish to understand sensitivity to metallicity at lower metallicities, we extend these down to  =  − 5 by assuming that stars with  <  − 2.5 have the same effective temperatures and luminosities as those with  =  − 2.5, while adopting the colors from a grid of model atmospheres that extends down to  =  − 5. In Table [cmdsens], we report metallicity sensitivity for RGB stars, in units of dex of per 0.01 mag of color change, for a range of different color choices, including most of the wideband UVIS WFC3 filters. In these units, small numbers represent higher sensitivity to metallicity. These sensitivities were computed for stars at M*F*814*W* =  − 1; for more luminous giants, the sensitivity is better, while for fainter ones, it is worse. Sensitivities are reported for several different ranges in metallicity, demonstrating the smaller color sensitivity at lower metallicity. Generally, sensitivity increases as the wavelength separation of the filters increases. However, each filter has different photometric precision for a fixed exposure time. The second to last column in Table [cmdsens] gives the relative color errors (normalized to *σ**F*555*W* − *F*814*W*), estimated using the exposure time calculator (ETC) on the WFC3 online data handbook for a K5 III giant and a fixed exposure time. The last column of Table [cmdsens] lists the sensitivity, over the metallicity range  − 1.5 <   <  − 0.5, scaled by the relative photometric error. The implication is that ideal filter choices rely on both the color sensitivity to metallicity and the precision to which the color can be measured. The most metallicity sensitive colors will not be the optimum choice when observing time and errors are considered. However, if you use a color that only changes minimally with metallicity, no amount of increased photometric accuracy will improve the metallicity determination. The last column of Table [cmdsens] suggests that F475W–F814W is the optimal color for maximum metallicity sensitivity, and this has been adopted by many studies (e.g., Gallart 2008); although this choice does depend to some extent on the color of the target stars. Other commonly used metallicity sensitive colors are F390W–F814W and F555W–F814W. As pointed out in the last paragraph there is a trade off between metallicity sensitivity and photometric accuracy (from a reasonable amount of integration time). The sensitivities in Table [cmdsens] show that these wide band colors have simultaneously greater throughput and less metallicity sensitivity than the medium and narrow band filters listed in Table [cmdsens]. Additionaly, at very-low metallicity, these broad-band colors have little sensitivity ( > 0.65 and  > 1 dex, respectively, per 0.01 mag of color change), and since it is challenging to reduce photometric errors below 0.01 mag, this leads to a fundamental limit on the accuracy of derived metallicities. For some colors, at low metallicity the color change from metallicity becomes smaller than the typical photometric accuracy of 0.01 mag. In Table [cmdsens] we report this as  >  1 dex of /0.01 mag color change. The narrower filters, such as F395N and F390M, retain sufficient sensitivity ( < 0.15 dex per 0.01 mag of color change) to provide useful metallicity estimates even at very-low metallicity. lccccccc F336W–F814W &0.270 & 0.100 &0.021 & 0.007 & 0.005 & 44.2 & 0.31 F390W–F814W &0.667 & 0.185 &0.036 & 0.012 & 0.009 & 4.5 & 0.05 F390M–F814W &0.454 & 0.133 &0.028 & 0.008 & 0.007 & 31.3 & 0.26 F395N–F814W &0.333 & 0.094 &0.027 & 0.010 & 0.007 & 69.7 & 0.72 F410M–F814W &  > 1 & 0.476 &0.060 & 0.014 & 0.009 & 11.6 & 0.17 F438W–F814W &  > 1 & 0.400 &0.053 & 0.020 & 0.013 & 2.9 & 0.06 F467W–F814W &  > 1 &  > 1 &0.149 & 0.038 & 0.017 & 3.6 & 0.13 F475W–F814W &  > 1 &  > 1 &0.097 & 0.031 & 0.017 & 1.3 & 0.04 F547M–F814W &  > 1 &  > 1 &0.198 & 0.050 & 0.023 & 1.4 & 0.07 F555W–F814W &  > 1 &  > 1 &0.182 & 0.048 & 0.022 & 1.0 & 0.05 F606W–F814W &  > 1 &  > 1 &0.284 & 0.075 & 0.027 & 0.79 & 0.06 F625W–F814W &  > 1 &  > 1 &0.402 & 0.114 & 0.033 & 0.90 & 0.10 F775W–F814W &  > 1 &  > 1 &  > 1 & 0.741 & 0.270 & 0.94 & 0.70 For a population of mixed age, the color-metallicity relation breaks down because younger giants, which are more massive, are hotter than older giants of the same metallicity. The color changes due to age and metallicity are demonstrated in Figure [agez], for two different ages (12.5 and 4 Gyr, solid and dashed lines, respectively), and for a range of metallicities (different colors). The effect is quantitatively shown in Figure [agecol], where color as a function of metallicity is plotted for several ages (2, 7 and 12 Gyr) for a giant branch star with a *M**F*814*W* =  − 1.0 (with comparable results along the entire giant branch). At higher metallicity this leads to an uncertainty in derived metallicity of a few tenths of a dex, but the uncertainty can be significantly larger at lower metallicity. The uncertainties in metallicity also become greater with younger populations; as Figure [agecol] shows, the color difference between 2 and 7 Gyr is almost double that between 7 and 12 Gyr. Separating atmospheric parameters using photometric colors ---------------------------------------------------------- The key to breaking the age-metallicity degeneracy is separating the color effects of metallicity and temperature. Two color indices have the potential to break the degeneracy because one color can measure a metallicity dependent feature in the spectrum, while the other can control for the temperature. The more the two color’s sensitivities to metallicity and temperature differ, the more effective the color combination will be at measureing metallicity. We examine the temperature and metallicity sensitivity for all the filter combinations by using Kurucz stellar atmosphere models of metallicity ranging from  − 5 >   >  + 1. We integrate the WFC3 transmission curve for each filter over the synthetic spectra with parameters of typical giant branch stars (log g = 2.5 and T*e**f**f* = 4500 K). We compute the color change with temperature (Δ color/Δ T*e*) and metallicity (Δ color / Δ ), at a range of metallicities. The ratio of the two gives the relative sensitivity to temperature and metallicity at equal color difference, where small values of Δ T*e*/Δ indicate a smaller dependence on metallicity than temperature. The relative sensitivity of temperature to metallicity (Δ T*e*/Δ ) for all colors is plotted in Figure [dtdz]. As expected, the colors whose bandpasses contain fewer metal-features (e.g F555W–F814W and F467M–F547M) are the ones that are least sensitive to metallicity. Figure [dtdz] shows that the F467M–F547M and F555W–F814W colors have comparably smaller sensitivities to metallicity. However, the relative color error for F467M–F547M is over 3.5 times larger than F555W–F814W for a fixed exposure time. F555W–F814W stands out as the optimal temperature index, considering minimal metallicity sensitivity and photometric accuracy. While F555W–F814W certainly changes with metallicity (cf. its use as a metallicity indicator in clusters at fixed age as discussed in the previous section) it has the largest relative sensitivity of temperature to metallicity of all the filters compared in this section (see Figure [dtdz]). Figure [agez2] demonstrates the age independence and metallicity separation of a color-color diagram. The plot includes isochrones of two different ages, 12.5 and 4 Gyr, as solid and dashed lines respectively, for a range of metallicities in steps of 0.5 dex. The solid and dashed lines closely follow each other throughout the color-color diagram. The bifurcation of the isochrones at cooler temperatures (especially seen at higher metallicity with the dashed and solid purple lines) is due to dwarf and giant stars colors showing increased gravity sensitivity at higher metallicity. The gravity sensitivity causes the metallicity-sensitive color of dwarfs to be bluer than giants at the same value of the temperature-sensitive color, however, for targets at a common distance, apparent magnitude can be used to separate the evolutionary stage. Based upon the color sensitivities shown in Figure [dtdz], we selected the more metallicity-sensitive colors to use with the temperature-sensitive color (F555W–F814W) in color-color diagrams. Figure [sensitivity] shows giant branch isochrones in color-color diagrams for five metallicity sensitive colors: F336W–F555W, F390M–F555W, F390W–F555W, F395N–F555W and F410M–F555W. The effectiveness of these color combinations to determine metallicity are quantified in Table [spans] by the color separation due to metallicity predicted from stellar isochrones in color-color diagrams, using units of dex of /0.01 mag of color change. While all the filter sets have similar sensitivity around solar metallicity ( + 0.5 >   >  − 0.5) this breaks down at lower metallicities. lcccccccc[h] F336W–F555W & 0.9 & 0.427 & 0.143 &0.047 &0.023 &0.026 & 9.8 & 0.24 & 1.0 & 0.290 & 0.107 &0.045 &0.023 &0.024 & 9.8 & 0.23 & 1.1 & 0.236 & 0.087 &0.051 &0.023 &0.024 & 9.8 & 0.23 F390W–F555W & 0.9 & 0.943 & 0.287 &0.098 &0.041 &0.049 & 1.0 & 0.04 & 1.0 & 0.704 & 0.199 &0.090 &0.038 &0.046 & 1.0 & 0.04 & 1.1 & 0.541 & 0.149 &0.084 &0.038 &0.043 & 1.0 & 0.04 F390M–F555W & 0.9 & 0.667 & 0.206 &0.074 &0.027 &0.036 & 6.9 & 0.19 & 1.0 & 0.493 & 0.146 &0.062 &0.024 &0.035 & 6.9 & 0.16 & 1.1 & 0.370 & 0.110 &0.052 &0.023 &0.034 & 6.9 & 0.16 F395N–F555W & 0.9 & 0.515 & 0.151 &0.066 &0.036 &0.049 & 15.4 & 0.56 & 1.0 & 0.369 & 0.104 &0.057 &0.037 &0.047 & 15.4 & 0.56 & 1.1 & 0.277 & 0.079 &0.052 &0.038 &0.047 & 15.4 & 0.60 F410M–F555W & 0.9 &  >  1 & 0.820 &0.254 &0.090 &0.063 & 2.6 & 0.23 & 1.0 &  >  1 & 0.510 &0.288 &0.070 &0.051 & 2.6 & 0.16 & 1.1 &  >  1 & 0.364 &0.350 &0.063 &0.044 & 2.6 & 0.16 Weighting the metallicity sensitivity by the relative uncertainty (last column of Table [spans]) shows that F390W–F555W is the most sensitive for a fixed exposure time. Below  <  − 1.5 the color separation for F395N–F555W, F390M–F555W and F336W–F555W are the most sensitive to metallicity. Although the F395N–F555W color retains sensitivity it also requires 2 to 3 times more exposure time to get comparable accuracy, making it prohibitive to use. At extremely low metallicity the most sensitive color is F395N–F555W, with sensitivity decreasing respectively for the colors: F336W–F555W, F390M–F555W and F390W–F555W. Based upon the metallicity sensitivity and photometric efficiency we find that the most promising metallicity indicating filters are the F390W, F390M and F336W filters. In the remainder of the paper we will focus our analysis on these filters. Reddening Effects ----------------- Reddening adds uncertainty to any photometric metallicity derivation, especially when the uncertainty in reddening is large, or if there is differential reddening throughout the field. In a color-color plot uncorrected reddening will be confused with a change in metallicity if the reddening vector is in the direction of the metallicity separation. For the majority of the color-color diagrams the reddening vectors are predominantly in the same direction as the isochrones, mitigating the influence of reddening. However, for (F336W–F555W, F555W–F814W) the reddening vector is  ∼  40∘ shallower than the isochrones, which increases the uncertainty in derived metallicities when there is differential reddening or if the uncertainty in the reddening is large. Reddening vectors are indicated in Figure [sensitivity] for E(B–V) = 0.1. The vector is defined by the reddening of the colors along the X and Y axes. For a given color, the reddening E(Filter1–Filter2) = *A**F**i**l**t**e**r*1 − *A**F**i**l**t**e**r*2, where A*F**i**l**t**e**r*1 = R*F**i**l**t**e**r*1 E(B–V), and the R*F**i**l**t**e**r* values are listed in Table [throughput]. To get an estimate of the effect of reddening on metallicities we consider an uncertainty of *σ**E*(*B* − *V*) = 0.02, as might be appropriate in a low reddening target. At shorter wavelengths, where all of our metallicity sensitive filters are located, the uncertainty in reddening will have an increased effect due to the UV extinction law increasing as wavelength decreases. In this case the *σ**E*(*B* − *V*) adds a systematic color uncertainty of  ∼ 0.1 mag to all of the metallicity sensitive colors. However, the color change from reddening will be along the reddening vector, and because the angles between the isochrones and reddening vectors vary (e.g. Figure [sensitivity]) and the colors have different sensitivity to metallicity (e.g. Table [spans]), this leads to metallicity uncertainties ranging from a tenth to a few tenths of a dex. The resulting systematic uncertainties in are reported in Table [error]. As expected, larger uncertainties are found at lower metallicities. Historically, reddening-free indices have been proposed as a means to correct colors for extinction. A reddening-free index, Q, is defined as: $$Q=(m\_1-m\_2) - \frac{E(m\_1 - m\_2)}{E(m\_2-m\_3)}(m\_2-m\_3)$$ where E(*m*1 − *m*2) is the color excess for the color (*m*1 − *m*2). However, making our metallicity indicator reddening-free still leaves the problem that the temperature indicator is affected by reddening. To completely remove reddening effects from the color-color diagram, both indices need to be reddening-free. suggest using two reddening-free indices (5 total filters, including two in the IR) to predict metallicity. We looked into using these additional filters and found that with our low reddening targets the additional filters do not significantly add to the metallicity resolution. lccc[Hb] F336W–F555W & 0.30 & 0.14 & 0.15 F390M–F555W & 0.20 & 0.09 & 0.13 F390W–F555W & 0.23 & 0.11 & 0.12 F395N–F555W & 0.10 & 0.07 & 0.09 F410M–F555W & 0.33 & 0.12 & 0.08 Abundance Ratio Variations -------------------------- While is commonly used as a proxy for the overall stellar metallicity, elements do not vary in lockstep from star to star. Variations in abundance ratios can alter spectral features within a bandpass, changing the stellar color and causing the overall photometric metallicity to be misinterpreted if fixed abundance ratios are assumed. A common abundance variation is *α* element enhancement (e.g. ), often present at low metallicity in globular clusters (see and references there in). The CMD of an *α*-enhanced population resembles a CMD of higher with solar abundance ratios. Figure [cmdafeh] shows how *α* enhancements affect different colors at low and high metallicity. Different combinations of and [*α*/Fe] can give the same colors. Using Dartmouth isochrones, we compute the relative color change from *α* abundances (Δ color/ Δ [*α*/Fe]) and metallicity (Δ color/ Δ ), to find the relative sensitivity of various colors to and *α* enhancement. The ratio of the two as a function of metallicity is presented in Table [dadz]; unsurprisingly the colors with the most *α* sensitivity are the the two colors (F395N–F555W and F390M–F555W) whose bandpasses are dominated by the H & K lines. Metallicities assigned assuming solar abundance ratios can be considered as a metallicity indicator that is a combination of the actual and [*α*/Fe], following the relation: $$\textrm{[m/H]}\_{phot}=\textrm{[Fe/H]}+ \left( \frac{\Delta \textrm{[Fe/H]} }{ \Delta[\alpha/\textrm{Fe}]} \right) [\alpha/\textrm{Fe}]$$ where the amount of relative color change due to *α* enhancements (listed in Table [dadz]) can be used to get the relation. The differing sensitivity of (Δ/Δ[*α*/Fe]) in different colors implies that *α* abundance can be separated with photometry using similar techniques described in Section 2.3. However, additional colors with differing sensitivities would be necessary to isolate the *α* abundance, such as F395N–F555W and F410M–F555W or F390M–F555W and F390W–F555W. We should note that the majority of globular clusters show large star-to-star variations in light elements (Li to Si) (for a review see Gratton et al. 2012, and references therein). Additionally there are cluster-to-cluster abundance variations, which also have the potential to affect the stellar colors. found that abundance enhancements of Mg, Si, and to some extent Ca, will change the color and thus location of the giant branch, while variations in oxygen will affect the height of the subgiant branch and the luminosity of the MS turnoff. Of particular interest in this study, C and N enhancements can affect the stellar colors measured with F390M and F390W, since a few CN and CH bands fall in the same spectral region covered by these filter bandpasses; see Section 2.1 for details. Since we cannot explicitily measure any of these abundance variation in this study due to the limited resolution of photometric metallicities, any such variations could affect the stellar colors and our inferred metallicities. lccc[Hb] F336W–F555W & 0.27 & 0.39 & 0.64 F390W–F555W & 0.22 & 0.19 & 0.09 F390M–F555W & 0.65 & 0.37 & 0.34 F395N–F555W & 0.88 & 0.84 & 0.79 F410M–F555W & 0.09 & 0.17 & -0.09 Testing the method: Observations of calibration clusters ======================================================== WFC3 images were obtained of five clusters during cycle 17 as part of the calibration program 11729, (PI Holtzman). Additional images of the five clusters come from the calibration portion of the program 11664 (PI Brown). The five clusters were chosen because they are well studied and span a range of ; 5 of the 6 clusters are discussed by for ACS calibration. lccccc[H] F336W &850, 30 &1000, 30 &1160, 30 &950, 30 &800, 30 F390M &1400, 50 &1400, 50 &1400, 50 &1400, 50 &1400, 50 F390W &2290, 10 &2460, 10 &2596, 10 &2376, 10 &2250, 10 F395N &1930, 90 &2100, 90 &2240, 90 &1015, 90 &1850, 90 F410M &1530, 40 &1600, 40 &1600, 40 &1600, 40 &1490, 40 F467M &700, 40 &800, 40 &900, 40 &730, 40 &700, 40 F547M &445 &445 &445 &445 &445 F555W &1360.5 &1360.5 &1360.48 &1381 &1360.5 F814W & 860.5 &1020.5 &1160.48 &961 &810.5 F160W &1245, 8.3 &1245, 12.5 &1245, 16.7 &1245, 12.5 &1245, 4.2 F110W &1245, 8.3 &1245, 12.5 &1245, 16.7 &1245, 12.5 &1245, 4.2 [exposure] The Observations ---------------- We present data for four globular clusters, M92 (NGC 6341), NGC 6752, NGC 104 (47 Tuc), NGC 5927, and one open cluster NGC 6791. Exposure times are listed in Table [exposure], and include short and long exposures to increase the dynamic range of the photometry. Photometry ---------- Reduced images were obtained from the HST archive using its on-the-fly processing. A deep image was created by summing all of the exposures in F467M, and stars were identified on this image. For each individual frame, a pixel area correction was applied to account for the modification in fluxes by flat fielding, and an astrometric solution was derived relative to the reference frame. Aperture photometry was done on all of the stars using aperture radii of 0.12, 0.25, 0.375, and 0.5 arcsec. Using this initial aperture photometry and model PSFs calculated using TinyTim, photometry was redone on each individual star after subtracting off all neighbors; this process was iterated twice. Cluster parameters ------------------ To adopt isochrones for the clusters we search the literature for well established cluster parameters. The distance modulus and the reddening are needed to get the absolute magnitudes of the cluster stars. Additionally we use the age, metallicity, and *α* enhancement to select isochrones for our study. We have compiled a sampling of the reported cluster parameters, and list the literature values adopted in this study in Table [obsdat]. **M92** is the only very metal-poor cluster in our sample. Spectroscopic metallicity measurements include those by,  =  − 2.24 ± 0.08,,  =  − 2.16 ± 0.02, and,⟨ ⟩ =  − 2.38 ± 0.02 and ⟨ ⟩ =  − 2.50 ± 0.12.. We average the spectroscopic metallicities, and adopt  =  − 2.30. Isochrone studies by and both found *α* enhancements of [*α*/Fe]  =  + 0.3, however our isochrone grid is in steps of 0.2, so we adopt [*α*/Fe] = 0.4. We adopt the age, and reddening found by, 13.5 Gyr and E(B–V)  = 0.023. After adopting these isochrone parameters we adjust the distance modulus to be fully consistent with our isochrone models, by assuming (m–M)$\_{\rm{V}} = 14.60$, which is consistent with the values used by. **NGC 6752**. measured spectra of seven stars for an overall metallicity  =  − 1.48 ± 0.02, and [*α*/Fe]  =  + 0.27  ± 0.01. measured  =  − 1.555 ± 0.07 from high resolution spectra of 14 stars, they also measured multiple *α* elments for an average [*α*/Fe]  =  + 0.35. In a previous study, found  =  − 1.43 ± 0.04. Other spectroscopic metallicity measurements include those by,  =  − 1.54 ± 0.09,,  =  − 1.42 ± 0.08,, ⟨ ⟩ =  − 1.50 ± 0.02. We average the spectroscopic and adopt a  =  − 1.45, which is slightly lower than the average but is consistent with the larger [*α*/Fe] adopted to fit our isochrone grid, [*α*/Fe] = 0.4. We assume a reddening of E(B–V)  = 0.046 ± 0.005 as measured by from 118 stars. We adopt an age of 12.5 Gyr, reported by, and also by in a study of comparative ages of globular clusters. From these adopted isochrone prameters we adjust our distance modulus to (m–M)$\_{\rm{V}} = 13.20$, which is consistent with the distance to NGC 6752 measured by using the white dwarf cooling sequence; (m–M)$\_{\rm{V}}$= 13.17. **47 Tuc**. Spectroscopic measurement include those by,  =  − 0.71 ± 0.08, who found  =  − 0.70 ± 0.07,, ⟨ ⟩ =  − 0.70 ± 0.05, found  =  − 0.60 ± 0.2 based on seven RGB stars, and measured  =  − 0.74 ± 0.02 from the medium resolution spectra of 147 stars and  =  − 0.77 ± 0.03 from the high resolution spectra of 11 stars, and measured an [*α*/Fe] = 0.44. We average the adopting values of  =  − 0.70 and [*α*/Fe] = 0.4. We use the E(B–V)  = 0.024 reported by. found an age of 12 Gyr, derived an age of 12.9 Gyr using diffusive models, which we average for an adopted age of 12.5 Gyr. Based upon the adopted isochrone parameters we adjust the distance modulus to (m–M)$\_{\rm{V}}=13.20$ which is smaller than the values measured by and who both measured the distance modulus (13.36 and 13.27, respectively) to 47 Tuc using white dwarf cooling models. [ht] **NGC 5927** is more complicated to study photometrically due to the high differential reddening within the cluster ( Δ E(V-I)=0.27 measured by and *δ*(E(B–V)=0.169 measured by Bonatto et al. 2013). found  =  − 0.30 ± 0.09; measured  =  − 0.31, while found  =  − 0.67. We adopt a of  − 0.40 and [*α*/Fe] of +0.2. found the reddening to be E(B–V)= 0.45. did a study on the relative ages of GC’s and found the age to be  ≈  10 Gyr, while the relative GC age study by determined an age of  ∼  11.3 Gyr, we assume an age of 11.5 Gyr. Based upon the adopted isochrone parameters we adjust the distance modulus to (m–M)$\_{\rm{V}}=15.78$. **NGC 6791**, unlike the rest of our calibrating clusters, is an open cluster (although recent work by suggests that NGC 6791 might be the remains of a stripped globular cluster). determined a metallicity of  =  + 0.37. used infrared spectroscopy of 6 stars to measure a =  + 0.35 ± 0.02 and solar *α* abundance ratio. found a  =  + 0.4. We average the spectroscopic metallicities and round to the nearest isochrone grid spacing and adopt  =  + 0.4 and [*α*/Fe]=0.0. We assume the reddening found by, E(B–V) = 0.10− 0.02+ 0.03. We use the age found by both and of 8 Gyr. found an age of  ∼ 8.3 Gyr, and also reported a consistent age of 8 Gyr between the white dwarf cooling sequence and the MSTO age. From this we find a distance modulus of (m–M)$\_{\rm{V}}=13.45$, which is consisten with who used an eclipsing binary to determine (m–M)$\_{\rm{V}}$ = 13.46 ±  0.1, and with, who found (m–M)$\_{\rm{v}} =13.45^{+0.03}\_{-0.12}$. As previously stated the adopted and [*α*/Fe] are not perfectly matched to the literature values but are rounded to the spacing within the isochrone grid (in steps of 0.05 of and 0.2 of [*α*/Fe]). When adjusting the metallicity to our grid we note that the adjustment for a change in [*α*/Fe] varied for each color according to equation (2) and Table [dadz]; therefore we made an average metallicity adjustment that best fit all colors simultaneously. llrlllrr[Ht] M 92 &17 17 07.05 &+43 07 58.2 &0.023 &14.60 &–2.50, –2.14 (–2.30) &+0.30 (+0.4) &13.5 NGC 6752 &19 10 54.86 &–59 59 11.2 &0.046 &13.20 &–1.54, –1.43 (–1.45) &+0.27 (+0.4) &12.5 NGC 104 &00 24 15.26 &–72 05 47.9 &0.024 &13.20 &–0.77, –0.60 (–0.70) &+0.44 (+0.4) &12.5 NGC 5927 &15 28 00.20 &–50 40 26.2 &0.45 &15.78 &–0.67, –0.30 (–0.40) &... (+0.2) &11.5 NGC 6791 &19 20 53.00 &+37 46 30.0 &0.10 &13.45 &+0.35, 0.40 ( + 0.40) & 0.0 ( 0.0) &8.0 [ht] Absolute Magnitudes ------------------- From the reported reddening, and distance modulus we calculate the Absolute magnitudes for each filter. All magnitudes reported are in the Vegamag system with zeropoints taken from the WFC3 handbook. The absolute magnitudes for each filter were calculated using: M(*f**i**l**t**e**r*) =  m*o**b**s**e**r**v**e**d* - ((m–M)$\_{\rm{v}}$ - A$\_{\rm{v}}$) - A*f**i**l**t**e**r*. The distance modulus, (m–M)$\_{\rm{v}}$, as reported in Table [obsdat], was corrected for the V band extinction, $A\_{\rm{v}}$ = 3.1E(B–V), and addtionally corrected for the extinction within the given filter A(*f**i**l**t**e**r*) = R(*f**i**l**t**e**r*)E(B–V). The ratio of total to selective extinction, R(*f**i**l**t**e**r*) as listed in Table [throughput], was calculated by taking the integral over the stellar atmosphere SED convolved with the filter transmission curve, divided by the integral over the same atmosphere and filter convolved with the galactic extinction curve. We used an atmosphere with  =  − 1.5, temperature of 5000, and log g = 2.0. [ht] Cluster Isochrone comparison ============================ It is well known that theoretical isochrone models do not perfectly match observed colors due to a combination of uncertainties from stellar evolution models, model atmospheres, and instrumental systematics. When comparing the isochrone of the mean spectroscopic value to the CMDs the models often did not match the shape of the cluster ridgelines. However, it should be noted that uncertainties in distance, age, reddening and composition complicate the fitting process. The cluster CMDs are shown in Figures [CMDs] - [medCMD], where dashed lines represent the literature valued isochrones adopted for each cluster as reported in Table [obsdat]. In order to improve the metallicity determinations we derived an empirical correction to the isochrones using the five cluster CMDs as benchmarks. For small steps in magnitude along the giant branch, we found the distance between the theoretical isochrone colors and the cluster ridgeline, then squared and summed the distances. Using the canonical metallicities we found the empirical corrections with the smallest summed distance between the corrected isochrone and the cluster. These corrections lead to consistent cluster metallicities across all CMDs. All isochrone corrections were based on color, gravity and metallicity adjustments. The various filter combinations require different functional forms of the generic equation: $$\begin{aligned} {\rm(color)}\_{\text{corrected}}=(\alpha +\beta\text{\textrm{[Fe/H]}}){\rm(color)}\_{\rm{obs}} \\ + \gamma(\text{\textrm{[Fe/H]}}+\delta)^2 +\epsilon \nonumber \\ + ( \zeta +\eta \textrm{[Fe/H]})(\theta - \text{log g}) \nonumber \end{aligned}$$ Equation (3) was modifed by inspection of the various CMDs. The (F555W–F814W, F814W) CMD in Figure [CMDs] shows the uncorrected literature valued isochrones falling redward of the cluster ridgelines to varying degrees depending on the magnitude and metallicity of the isochrone. The most metal-poor cluster was especially discrepant, with larger deviations along the lower giant branch than the upper. For this CMD the first order color correction includes a dependence on metallicity, which allows for larger corrections at lower metallicity. The first order color dependent term shifts and changes the shape of the isochrone, while the second order metallicity dependent term adds the same shift to a given isochrone without changing the shape. The additional gravity dependent term corrects the bend of the giant branch (where log g  <  3.4), with more bend required as metallicity decreases. [ht] In Figure [uCMD] the (F336W–F555W, F814W) CMD shows the uncorrected isochrones redward of the cluster ridgelines, except for M92 where the isochrone along the giant branch falls blueward. This CMDs shows the largest difference between the clusters and the models. The bend of the isochrone along the giant branch (where log g  <  3.4) is corrected with a gravity and metallicity dependent term, with increasing tilt as metallicity decreases. In this CMD the corrected isochrone for 47 Tuc (NGC 104) falls  ∼ 0.03 mag redward of the cluster ridgeline, while all of the other clusters are within  ∼ 0.01 mag of their ridgelines. We believe this slight offset to be an artifact of rounding the average spectroscopic metallicity to fit the grid spacing. This color offset is smaller than the color difference between a isochrones of [Fe/H]  =  − 0.70 and − 0.75, therefore we do not force and overcorrection which would potentially lead to distorted MDFs. In Figure [wideCMD] the uncorrected isochrones in the (F390W–F555W, F814W) CMDs are the least discrepant of the CMDs we examined. However a few corrections are still needed to improve the fits, including a metallicity dependent gravity term, with smaller corrections required for lower metallicities. lllll[hb] (F336W–F555W, F814W) & M92 & -2.28 & 0.114 & 937 ................& N6752 & -1.45 & 0.073 & 481 ................& N104 & -0.74 & 0.059 & 2096 ................& N5927 & -0.37 & 0.067 & 1518 ................& N6791 & 0.39 & 0.079 & 320 (F390M–F555W, F814W) & M92 & -2.32 & 0.089 & 899 ................& N6752 & -1.43 & 0.087 & 468 ................& N104 & -0.68 & 0.066 & 2071 ................& N5927 & -0.42 & 0.108 & 1504 ................& N6791 & 0.40 & 0.098 & 311 (F390W–F555W, F814W) & M92 & -2.33 & 0.092 & 906 ................& N6752 & -1.39 & 0.060 & 462 ................& N104 & -0.66 & 0.060 & 2067 ................& N5927 & -0.46 & 0.124 & 1443 ................& N6791 & 0.39 & 0.082 & 297 The (F390M–F555W, F814W) CMDs presented in Figure [medCMD] shows the uncorrected isochrones blueward of the clusters. Additionally the isochrone curve along the giant branch is too steep to match the shape of the clusters. We applied a first order color term that has an additional metallicity dependence, making the color coefficient greater for lower metallicities. [ht] The isochrone corrections for all CMDs are listed in Table 8. Finding a single correction to align all five isochrones to the clusters gives us reasonable confidence to interpolate our corrections. Smoothly changing corrections can be applied to the entire set of isochrones, then fit to an unknown population, although caution must be taken when extending beyond our calibrating clusters, i.e. M92 at  =  − 2.3, and NGC 6791 at  =  + 0.40. For the majority of cluster CMDs the empirical corrections improve the isochrone-ridgeline alignment such that the average difference between the two is $\lesssim$ 0.01 mag along the giant branch. In Figures [CMDs] - [medCMD] the dash-dot lines represent the uncorrected isochrones, while the solid lines are the empirically corrected isochrones. All the color corrections listed in Table 8 are shown to  =  − 2.5. Below this metallicity we replace all of the metallicity dependent terms with the value of the term when  =  − 2.5 keeping the corrections constant where the color change is small and we are unable to verify the corrections. Metallicity Determinations -------------------------- We tested the empirical isochrone corrections by re-deriving the cluster metallicities. We assign photometric metallicities to stars in the CMDs by searching isochrone grids with spacing in of 0.05 dex; each star was assigned the metallicity of the closest isochrone. We adopted [*α*/Fe]  = 0.0 since we cannot separate the color effects due to *α* and with only three filters. We selected giant branch stars with errors  < 0.03 mag in all filters. ### Metallicities From CMDs The photometric metallicities adopted from (F336W–F555W, F814W), (F390M–F555W, F814W) and (F390W–F555W, F814W) were used to create MDFs for every cluster, which are shown on the left side of Figure [plotmdf]. For all the clusters the MDF peaks are within  ±  0.06 dex of the spectroscopically derived values. The recovered peaks, widths and the number of stars measured are listed in Table 9. The MDF dispersion within each cluster does not vary much across the metallicity range, although we do see slightly larger numbers for the differentially reddened NGC 5927 and for the lowest metallicity cluster, M92, reflecting the diminished accuracy expected at low metallicity, as discussed in Section 2.2. Additionally for NGC 104 and NGC 5927 the horizontal branch stars are clearly seen as small shoulders left of the MDF peak. The horizontal branch metallicities are systematically lower than the true metallicities because we only use the isochrone colors along the giant branch to assign metallicity, so horizontal branch stars were incorrectly matched to GB stars. Overall, the average MDF dispersion for all clusters is  ∼ 0.10 dex and is consistent with what is expected from photometric errors. We simulated this by creating synthetic CMDs based on the IMF and photometric errors within our data set. At each magnitude along the GB ridgeline we used the measured photometric errors to randomly distribute the relative number of stars found at each magnitude. The synthetic CMD was then put through the same isochrone matching processes to get photometric metallicities and a MDF for each cluster. In order to determine the systematic effect of using uncorrected isochrones to assign photometric metallicities we compared the spectroscopically determined metallicities to those adopted using the uncorrected isochrone grid to
arxiv_0000251
Coherence for adjunctions in a 3-category via string diagrams ============================================================= We construct a 3-categorical presentation Adj(3, 1) and define a coherent adjunction in a strict 3-category C as a map Adj(3, 1) → C. We use string diagrams to show that any adjunction in C can be extended to a coherent adjunction in an essentially unique way. The results and their proofs will apply in the context of Gray 3-categories after the string diagram calculus is shown to hold in that context in an upcoming paper. Introduction ============ In this paper, we construct a 3-categorical presentation Adj(3, 1) containing 1-cells *l* and *r* and we define a **coherent adjunction** in a strict 3-category C as a functor Adj(3, 1) → C. We then prove our Main Theorem, stating that any adjunction in C (by which we mean an adjunction in its homotopy 2-category) can be promoted to a coherent adjunction in an essentially unique way. In order to state this Theorem precisely, denote by *θ*(1) the computad consisting of a single 1-cell, so that Map(*θ*(1), C) is the 3-groupoid of 1-morphisms in C, and let Map*L*(*θ*(1), C) be its full 3-subgroupoid whose objects are the left adjoint 1-morphisms in C. The map *E**l* : Map(Adj(3, 1), C) → Map(*θ*(1), C) given by restriction to the 1-cell *l* factors through Map*L*(*θ*(1), C). [Main Theorem] Given a strict 3-category C, the restriction map *E**l* : Map(Adj(3, 1), C) → Map*L*(*θ*(1), C) is a weak equivalence of strict 3-groupoids. The basic idea of the proof is that the map *E**l* is a fibration of 3-groupoids, by the main result of. This allows us to make use of a long exact sequence in homotopy groups to reduce the problem to showing that the homotopy groups of the fibre are trivial and the map is surjective on objects. We then prove this is the case by constructing trivialising morphisms for arbitrary elements of these homotopy groups. We do this one cell at a time, by using the lifting properties of fibrations and using the string diagram calculus developed in,, and for explicit constructions. The restriction to strict 3-categories is a consequence of the fact that we use a string diagram calculus. In an upcoming paper we show that this string diagram calculus holds in Gray 3-categories and then all results in this paper will hold in that setting, with the same proofs. Pullbacks and the long exact sequence for a fibration ----------------------------------------------------- In the proof of the main Theorem, we need to make use of the long exact sequence in homotopy groups corresponding to a fibration of *n*-groupoids. We will also need to use pullbacks of maps of *n*-groupoids along a fibration. We therefore state and prove all the necessary results. These should in principle follow from model theoretic arguments in the folk model structure on strict *n*-categories of. We prefer to give different proofs here for completeness and also because we want proofs that will be applicable in the context of Gray 3-categories and other types of semistrict *n*-categories. Relation to other work ---------------------- We start by defining a 2-categorical presentation Adj(2, 1) and proving the corresponding coherence result for adjunctions in a 2-category. This presentation can be deduced directly from the definition of an adjunction as a pair of 1-morphisms, together with unit and counit 2-morphisms satisfying two relations, known as the snake relations, or triangle identities. The coherence result in this case is essentially equivalent to the well known result that adjoints are unique up to isomorphism. The essential difference between Adj(3, 1) and Adj(2, 1) is the appearance of a **swallowtail relation**, named after the well known singularity. Singularity theory and adjunctions in higher categories are related by the Cobordism Hypothesis (see ). The swallowtail relations were fist introduced in the context of adjunctions in. There they are part of the definition of a *locally adjoint biadjoint pair* in a *strongly bicategory enriched category* (a kind of semistrict 3-category). Note that what is called a biadjoint pair in is what we call an adjunction in the present paper. In that paper, it is proved that given a biadjoint pair one can modify the triangle isomorphisms in such a way that two additional relations (later called swallowtail relations) are satisfied, yielding a locally adjoint biadjoint pair. A string diagram proof of the analogous result for a strict 3-category has appeared in. There is also a formalized string diagram proof in the proof assistant Globular (see ). The swallowtail relations also appear in, in the more general context of *biadjunctions in tricategories*. There they are depicted in terms of pasting diagrams and the complexity of these diagrams is increased by the presence of various morphisms implementing the weak coherence laws which hold in a tricategory. In that paper, it is proved that any any biequivalence in a tricategory extends to a biadjoint biequivalence, satisfying the swallowtail relations. Again, note that what is called a biequivalence in we call here simply an equivalence. What is called a biadjunction in we call here an adjunction satisfying the swallowtail relations, i.e. a map Adj(3, 1) → C. In the author gives a definition of a *coherent adjoint equivalence* between 2-categories. This is in particular an adjunction in the 3-category of 2-categories, and so the swallowtail relations appear in the definition. The composite 3-morphisms whose equality is asserted by these relations are depicted as movies of 2-dimensional string diagrams. This seems to be the first place in the literature where these equations relating the cusp isomorphisms for an adjunction are given the name of swallowtail relations, by analogy with the singularity. In the author proves a coherence result for *duals in monoidal bicategories*. More precisely, they prove that the 2-groupoid of objects in a monoidal bicategory which admit a dual is equivalent to a 2-groupoid of *coherent dual pairs*, which can be seen as the 2-groupoid of maps out of a certain computad, which plays the same role as Adj(3, 1) in the present paper. We can specialize the result in to the case of strict monoidal 2-categories. On the other hand, using the fact that monoidal 2-categories are just 3-categories with one object, with duals corresponding to adjoints, we can also specialize the result in the present paper to the context of strict monoidal 2-categories. The two results on coherence for duals in strict monoidal 2-categories thus obtained are essentially the same. The main advantage of the methods in the present paper is that the proof is made much simpler by the use of string diagrams, a method which we can currently extend to 4-categories. Moreover, the result in the present paper does not follow from the one in, except in the special case where one considers adjunctions in 3-category with only one object. In the authors construct an (∞, 2)-category $\underline{\operatorname{Adj}}$ and prove that the space of functors $\underline{\operatorname{Adj}}\to \operatorname{Cat}\_{\infty}$ is equivalent to the space of adjunctions in Cat∞. Informally, we can think of Adj(3, 1) as a finite presentation for the homotopy 3-category of $\underline{\operatorname{Adj}}$. This seems to be the first place in the literature where coherence for adjunctions in a higher category C is stated in terms of an equivalence of spaces between the space of morphisms in C which admit an adjoint and the space of maps into C out of a category consisting of a free adjunction. The *strictly undulating squiggles* used there are also a kind of string diagram calculus. Future work ----------- The use of string diagrams comes with the limitation of applying only to strict 3-categories. However, we will prove in an upcoming paper that this string diagram calculus is applicable also in Gray 3-categories and therefore the proofs in and the present paper also hold more generally. A string diagram calculus for Gray 3-categories with duals already appears in. The methods used in this paper are also extended to dimension 4 in an upcoming paper, where we construct a 4-categorical presentation Adj(4, 1) and prove an analogous result for adjunctions in 4-categories. We will then use this result to give a new proof of the coherence result for fully dualizable objects in a strict symmetric monoidal 3-category in the author’s PhD Thesis. The cobordism hypothesis allows us to interpret the corresponding presentation as a finite presentation of the 3-dimensional fully extended framed bordism category, although this would require the coherence result to be extended to weak symmetric monoidal 3-categories. Definitions and basic results ============================= We now give some necessary definitions and recall the main result from which we will need in the present paper. Strict *n*-categories --------------------- We think of a strict *n*-category as an algebra over a certain monad *T**n* : gSet*n* → gSet*n* on the category of *n*-globular sets. This is the monad defined in, Chapter 8. Alternatively one can think of a strict *n*-category as a category enriched in strict (*n* − 1)-categories with the cartesian product. Given a strict *n*-category C, we denote by *s*, *t* its source and target maps. Equivalences ------------ In a strict *n*-category, we say that a *k*-morphism *f* : *x* → *y* is an **isomorphism** if there exists another *k*-morphism *f* : *y* → *x* such that *f* ∘ *g* = id*y* and *g* ∘ *f* = id*x*. We also say that *f* is **invertible** and we call *g* its **inverse** (one can show that it is unique). However, we are more interested in a weaker version of this, known as **equivalence**. Let C be a strict *n*-category. An *n*-morphism *f* : *x* → *y* in C is an **equivalence** if it is an isomorphism. When *k* < *n*, a *k*-morphism *f* : *x* → *y* in C is an **equivalence** when there is another *k*-morphism *g* : *y* → *x* and equivalences *f* ∘ *g* → id*y* and *g* ∘ *f* → id*x* in C. We say that *x* is **equivalent** to *y*, and write *x* ≃ *y*, if there is an equivalence *x* → *y*. When *f* : *x* → *y* is an equivalence, we also call it **weakly invertible** and any morphism *g* : *y* → *x* such that *f* ∘ *g* ≃ id*y* and *g* ∘ *f* ≃ id*x* is called a **weak inverse** to *f*. When *f* is a *k*-morphism and an equivalence we also call it a ***k*-equivalence**. An ***n*-groupoid** is an *n*-category all of whose morphisms are equivalences. Finally, we use the following notion of weak equivalence for functors, which coincides with the one in the folk model structure of. A functor *F* : C → D between strict *n*-categories is called **essentially surjective** if for every object *d* ∈ D there exists an object *c* ∈ C and an equivalence *F*(*c*) → *d* in D. A functor *F* : C → D between strict *n*-categories is called a **weak equivalence** if it is essentially surjective and for all objects *c*1, *c*2 ∈ C the induced functor C(*c*1, *c*2) → D(*F*(*c*1), *F*(*c*2)) is a weak equivalence of (*n* − 1)-categories. An *n*-groupoid *G* is called **weakly contractible** if the map *G* →  \*  is a weak equivalence. Adjunctions ----------- An **adjunction** in a strict 2-category C is a pair of 1-morphisms *l* : *X* → *Y* and *r* : *Y* → *X* together with 2-morphisms *u* : id*X* → *r* ∘ *l* and *c* : *l* ∘ *r* → id*Y* called the unit and the counit, which satisfy two standard relations, called zigzag, snake or triangle identities. Let C be a strict *n*-category. We define its **homotopy 2-category** to be the strict 2-category h2(C) obtained by declaring equivalent 2-morphisms to be equal. The following definitions of adjunctions in *n*-categories are adapted from the ones given in for the case of (∞, *n*)-categories. An **adjunction** between 1-morphisms in a strict *n*-category C is a pair of 1-morphisms *l* : *X* → *Y* and *r* : *Y* → *X* together with 2-morphisms *u* : id*X* → *r* ∘ *l* and *c* : *l* ∘ *r* → id*Y* called the unit and the counit, which determine an adjunction in the homotopy 2-category h2(C). This means that an adjunction in a 3-category consists of a pair of 1-morphisms *l* : *X* → *Y* and *r* : *Y* → *X* together with unit and counit 2-morphisms satisfying the usual snake relations or triangle identities up to 3-isomorphism. An adjunction between *k*-morphisms in a strict *n*-category C is an adjunction between 1-morphisms in an appropriate (*n* − *k* + 1)-category of morphisms in C. The following Lemma relating equivalences and adjunctions is well known. [adjeq] Let C be a strict *n*-category, *f* : *x* → *y* a *k*-equivalence in C, *g* : *y* → *x* a weak inverse and *u* : id*x* → *g* ∘ *f* a (*k* + 1)-equivalence. Then there exists a (*k* + 1)-equivalence *c* : *f* ∘ *g* → id*y* such that (*f*, *g*, *u*, *c*) is an adjunction in C. By passing to h2(Hom(*s*(*x*), *t*(*x*))) we can reduce to the case where *n* = 2 and *k* = 1. Now we just need to find *c* : *y* → *x* satisfying the two snake relations. This can be done by using string diagrams, as on the nLab page for adjoint equivalence. Presentations ------------- An ***n*-categorical presentation** is simply a collection of *k*-cells for every *k* ≤ *n* + 1, whose sources and targets are composites of lower dimensional cells. We interpret the (*n* + 1)-cells as relations. Given an *n*-categorical presentation P we denote by *F*(P) the ***n*-category generated by P**. Its *k*-morphisms are arbitrary composites of the *k*-cells in P. Two *n*-morphisms are declared equal when they are related by an (*n* + 1)-cell. We sometimes write P → C to refer to a functor *F*(P) → C. This can be made precise by using the theory of **computads**. See for a detailed treatment of computads and for our simplified exposition of how we use them. We denote by *θ*(*k*) the computad generated by a single *k*-cell, so that functors *θ*(*k*) → C are in canonical bijection with the set of *k*-morphisms in C. String diagrams --------------- In a string diagram calculus for 4-categorical compositions is introduced. The authors introduce the notion of a **signature**, which consists of sets of generating *k*-cells, for each *k* ≤ 5. They then define a *k*-**diagram** over a signature, be a 4-categorical composite of cells. They also introduce **homotopy generators** which are certain cells encoding coherent versions of the interchange laws that hold in strict 4-categories. Finally they define a **signature with homotopy generators** as a signature in which we have specified cells implementing these coherent laws. In we introduced a monad *T**n**D**s* on globular sets encoding the compositional structure of *n*-dimensional string diagrams. Its algebras are called *n*-sesquicategories and they are *n*-globular sets equipped with strictly associative and unital composition and whiskerig operations, but not satisfying the Godement interchange laws. The notion of a signature then coincides with that of a computad for *T**n**D**s*. In an upcoming paper, we will show how one can define a monad *T*3*s**s* by adding to *T*3*D**s* certain operations encoding the homotopy generators. The *T*3*s**s*-algebras are called semistrict 3-categories and they are defined precisely so that the string diagram calculus from applies. We also show that they are the same as Gray 3-categories. We are working on extending this to higher dimensions. In we explain how to interpret diagrams over a 4-signature with homotopy generators as specifying composites in a strict 4-category, by interpreting the homotopy generators as identity morphisms. In the present paper, we will use these string diagrams in the *n* = 3 case, so we include below an informal description of these diagrams. Given a strict 3-category C, we use the string diagram calculus to describe composites of morphisms in any dimension, and to prove identities between composite 3-morphisms. We read odd dimensional diagrams from left to right and even dimensional diagrams from top to bottom. This means the source of an odd dimensional morphism appears on its left and the source of an even dimensional morphism appears above it. We denote the composite of two composable 1-morphisms *f*, *g* by the labelled diagram . Similarly, we can also denote the composite of *n* composable 1-morphisms by a diagram consisting of *n* labeled dots on a line. Given 2-morphisms *α*, *β* such that *t*(*α*) = *s*(*β*), we can denote their composite by the labeled diagram . If *f*, *g* are 1-morphisms such that *t*(*f*) = *s*2(*α*) and *s*(*g*) = *t*2(*α*), we can also denote the whiskering of *α* with *f* or *g* by | | | | | --- | --- | --- | | | or | . | In general, a diagram such as labeled by morphisms in C, subject to compatibility conditions on their sources and targets, determines a composite 2-morphism in C. We can also consider 2-morphisms whose source and target are composites of 1-morphisms. Given composable 1-morphisms *i*, *j* and *f*, *g* we can denote a 2-morphism *η* : *g* ∘ *f* → *j* ∘ *i* by . These can also be composed and whiskered with other morphisms, so we can form general 2-diagrams, which when given a compatible labeling by morphisms in C denote composite 2-morphisms. Here is an example of such a diagram. . Now we come to 3-dimensional diagrams. We denote the composite of two 3-morphisms by a labeling of . The whiskering of a 3-morphism with a 2-morphism corresponds to the diagram | | | | | --- | --- | --- | | | or | . | The whiskering of a 3-morphism with a 1-morphism corresponds to the diagram | | | | | --- | --- | --- | | | or | . | These basic composition operations can be iterated to form 3-diagrams such as , which when given compatible labelings by morphisms in C denote composite 3-morphisms. We can also consider 3-morphisms whose source and target are arbitrary composites of 1 and 2-morphisms in C, which we denote by labeling a diagram such as . We can then compose these to get general 3-diagrams, such as . Notice that the two string diagrams | | | | | --- | --- | --- | | | and | | determine the same composition operation on 2-morphisms in a strict 3-category, as they both correspond to the pasting diagram $$\xymatrix@1{\bullet\rtwocell & \bullet \rtwocell & \bullet}.$$ So we introduce 3-dimensional cells | | | | | --- | --- | --- | | | and | | called interchangers (or type I2 homotopy generators in ) which when labeled by compatible morphisms in C compose to the appropriate identity 3-morphism in C. In a semistrict 3-category the interchangers become isomorphisms instead of identities. Then we need to introduce some equations between 3-diagrams. First there is interchanger cancellation: ccc &  =  & ; &  =  & . Then we have the type I3 homotopy generator ccc &  =  & , and finally we have the type II3 homotopy generators: ccc &  =  & ; &  =  & . Functor categories ------------------ Using the left and right internal Hom from the monoidal biclosed structure on *n*-categories associated to the Crans-Gray tensor product () one can define *n*-categories Fun*l**a**x*(C, D) and Fun*o**p**l**a**x*(C, D) for *n*-categories C and D. One can check that a *k*-morphism in Fun*o**p**l**a**x*(C, D) is a rule that associates to each ℓ-morphism in C a map *θ*(*k*); (ℓ) → D, satisfying certain relations of compatibility with composition. Here *θ*(*k*); (ℓ) is the (*k* + ℓ)-computad explictly constructed in. It can also be described as the Crans-Gray tensor product *θ*(*k*) ⊗ *θ*(ℓ). Similarly, a *k*-morphism in Fun*l**a**x*(C, D) is a rule that associates to each ℓ-morphism in C a map *θ*(ℓ); (*k*) → D. One can then define the *n*-category Fun(C, D) as the subcategory of Fun*o**p**l**a**x*(C, D) consisting of those *k*-morphisms which associate to an ℓ-morphism in C a (*k* + ℓ)-equivalence in D, for *k*, ℓ ≥ 1. The *k*-morphisms in Fun(C, D) are called *k*-transfors. For *k* = 1, 2, 3 they are also called natural transformations, modifications and perturbations, respectively (see the nLab page “transfor” for a discussion of this terminology). Similarly, $\overline{\operatorname{Fun}}({\mathcal{C}},{\mathcal{D}})$ is the analogous subcategory of Fun*l**a**x*(C, D). Finally Map(C, D) and $\overline{\operatorname{Map}}({\mathcal{C}},{\mathcal{D}})$ are defined as the underlying subgroupoids in Fun(C, D) and $\overline{\operatorname{Fun}}({\mathcal{C}},{\mathcal{D}})$. Given a presentation P we write Fun(P, D) instead of Fun(*F*(P), D) and similarly for Map. In we gave an explicit description of Fun(C, D) in terms of string diagrams, when C and D are 4-categories. We include here, for convenience, the string diagram description of *k*-transfors between 3-categories. ### Natural transformations Given functors *F*, *G* : C → D, a **natural transformation**, or 1-transfor, *α* : *F* → *G* consists of the following data. We use and to denote the images of objects and morphisms under *F* and *G*, respectively. 1. For each object *Y* ∈ C a 1-morphism *α**Y* : *F*(*Y*) → *G*(*Y*): | | | | | --- | --- | --- | | *Y* =  |  ↦  | *α**Y* =  | ; 2. For each 1-morphism *g* : *X* → *Y* in C an invertible 2-morphism *α**g* in D: | | | | | --- | --- | --- | | *g* =  |  ↦  | *α**g* =   :   →  | ; 3. For each 2-morphism *ζ* : *f* → *g* in C an invertible 3-morphism *α**ζ* in D: | | | | | --- | --- | --- | | *ζ* =  |  ↦  | *α**ζ* =   :   →  | ; 4. For each 3-morphism *t* : *η* → *ζ* in C a relation *α**t* in D: | | | | | --- | --- | --- | | *t* =  |  ↦  |  :   =  | . This data is subject to relations equating the values of *α* on composite morphisms with the corresponding composites of values of *α* given by stacking diagrams. ### Modifications Given natural transformations *α*, *β* : *F* → *G*, a **modification**, or 2-transfor, *m* : *α* → *β* consists of the following data. We use for *α* and for *β*. 1. For each object *Y* ∈ C a 2-morphism *m**Y* : *α**Y* → *β**Y* in D: | | | | | --- | --- | --- | | *Y* =  |  ↦  | *m**Y* =   :   →  | ; 2. For each 1-morphism *g* : *X* → *Y* in C an invertible 3-morphism *m**g* in D: | | | | | --- | --- | --- | | *g* =  |  ↦  | *m**g* =   :   →  ; | 3. For each 2-morphism $\zeta=\includegraphics[scale=1.5,align=c]{morphisms/eta.pdf}:f\to g$ in C a relation *m**ζ* in D:  :   =  . This data is subject to relations equating the values of *m* on composite morphisms with the corresponding composites of values of *m* given by stacking diagrams. ### Perturbations Given modifications *l*, *m* : *α* → *β*, a **perturbation**, or 3-transfor, A : *l* → *m* consists of the following data. We use for *l* and for *m*. 1. For each object *Y* ∈ C a 3-morphism A*Y* : *l**Y* → *m**Y* in D: | | | | | --- | --- | --- | | *Y* =  |  ↦  | A*Y* =   :   →  | ; 2. For each 1-morphism $g=\includegraphics[scale=1.2,align=c]{morphisms/f.pdf}:X\to Y$ in C a relation A*g* in D:  :   =  . This data is subject to relations equating the values of A on composite morphisms with the corresponding composites of values of A given by stacking diagrams. Fibrations ---------- A map of *n*-groupoids *p* : *E* → *B* is called a **fibration** if, given any *k*-morphism *f* : *x* → *y* in *B* and a lift *x̃* of its source along *p*, there exists a lift *f̃* : *x̃* → *ỹ* of *f* along *p*. Given *n*-groupoids *E* and *B*, it is natural to ask whether a map *f* : *E* → *B* is a fibration in the sense of this paper if and only if it is is a fibration in the folk model structure on strict *n*-categories defined in. This is plausible, since the generating trivial cofibrations in that model structure are the inclusions of a free *k*-cell as the source a free fully coherent (*k* + 1)-equivalence (*j**k* : **O***k* → **P***k* in the notation there). One would therefore have to show that any morphism in an *n*-groupoid can be extended to a fully coherent equivalence. We have not tried to give a proof of this fact. Note that in the authors construct a model structure on the category of strict *n*-groupoids. However, they define a strict *n*-groupoid as a strict *n*-category where every *k*-morphism has a strict inverse, rather than a weak one. See also. [from ][fibration] Let C a strict 4-category, P a presentation and Q another presentation, obtained by adding a finite number of cells to P. Then the restriction map Map(Q, C) → Map(P, C) is a fibration of 4-groupoids. In it is shown that the category of strict *n*-categories equipped with the Crans-Gray tensor product and the folk model structure is a biclosed monoidal model category. This implies that the internal Hom functors Fun*l**a**x*( − , D) and Fun*o**p**l**a**x*( − , D) send cofibrations to fibrations. From one can deduce that an inclusion of presentations induces a cofibration between the presented *n*-categories. Therefore one can deduce that the restriction map on (op)lax functor categories is a fibration in the folk model structure. Note also that in it is proved that Map(C, D) is the underlying *n*-groupoid of Fun*o**p**l**a**x*(C, D). One might then be able to prove that a folk fibration between lax functor *n*-categories restricts to a fibration (in our sense) between the underlying *n*-groupoids. In this way one might be able to give a different proof of Theorem [fibration] for all *n*. In we give an explicit string diagram proof of this Theorem in the case *n* = 4, which would also apply in any model of semistrict 4-categories admitting a string diagram calculus. Coherence for adjunctions in a 2-category ========================================= We start by proving coherence for adjunctions in a 2-category. This is not a new result, as it essentially amounts to uniqueness of adjoints in a 2-category. We give the proof only to illustrate the general method that will be applied to 3-categorical and, in a subsequent paper, 4-categorical adjunctions. An adjunction in a 2-category consists of 1-morphisms *l* : *X* → *Y* and *r* : *Y* → *X* together with unit and counit 2-morphisms satisfying the snake relations. In string diagram notation we can write *l* =  and *r* = , where we use to denote *X* and to denote *Y*. If we denote the unit and counit morphisms by | | | | | --- | --- | --- | | | and | | , then the snake relations look like | | | | | | | | | --- | --- | --- | --- | --- | --- | --- | | |  =  | | | |  =  | . | This leads us to make the following definition. The presentation Adj(2, 1) consists of 1. 0-cells *X* =  and *Y* =  ; 2. 1-cells *l* =   : *X* → *Y* and *r* =   : *Y* → *X* ; 3. 2-cells cccccc *u* =  & &  :  & &  ⇒  & ; *c* =  & &  :  & &  ⇒  & ; 4. Relations lllllllll *C**l* =  & &  :  & &  =  & ; *C**r* =  & &  :  & &  =  & . The 2-category *F*(Adj(2, 1)) is canonically isomorphic to the 2-category Adj defined in. Given this definition, we want to prove the following statement. [2d] Given a 2-category C, the restriction map *E**l* : Map(Adj(2, 1), C) → Map*L*(*θ*(1), C) is a weak equivalence of 2-groupoids. Let *f* : *E* → *B* be a fibration of 2-groupoids and *b* ∈ *B* an object. The fibre *f*− 1(*b*) is the 2-subcategory of *E* consisting of objects that map to *b*, 1-morphisms that map to id*b* and 2-morphisms that map to Idid*b*. By Theorem [fibration], *E**l* is a fibration of 2-groupoids. In the following sections, we will prove that the fibre of a fibration of *n*-groupoids is an *n*-groupoid and that a fibration which is surjective on objects and has weakly contractible fibres is a weak equivalence. We will also define the homotopy groups *π**k*(*G*) of an *n*-groupoid *G* and show that *G* is weakly contractible if and only if *π**k*(*G*) is trivial for all *k* ≤ *n*. An *n*-groupoid *G* is **connected** if for any objects *x*, *y* ∈ *G* there exists a morphism *x* → *y* in *G*. A connected groupoid *G* is **1-connected** if given a 1-morphism *f* : *x* → *x* in *G* there exists a 2-morphism *f* → id*x* in *G*. Once we have defined homotopy groups it will be obvious that *G* is connected if and only if *π*0(*G*) is trivial and 1-connected when *π*0 and *π*1 are both trivial. In this section, we will show that *E**l* is surjective on objects and that its fibres are 1-connected 1-groupoids, and therefore weakly contractible. The following is an explicit description of Map(Adj(2, 1), C). An object *F* in Map(Adj(2, 1), C) is a functor, which consists of a choice of *k*-morphism *F*(*x*) for each *k*-cell *x* in Adj(2, 1), subject to source and target compatibilities. Given functors *F* and *G*, a 1-morphism *α* : *F* → *G* is a weakly invertible natural transformation. Using and to denote the images of generating cells under *F* and *G*, respectively, *α* consists of 1. weakly invertible 1-morphisms *α**X* =   : *F*(*X*) → *G*(*X*) and *α**Y* =   : *F*(*Y*) → *G*(*Y*); 2. invertible 2-morphisms llllll *α**l* =  & &  :  & &  ⇒  & ; *α**r* =  & &  :  & &  ⇒  & ; 3. relations | | | | | --- | --- | --- | | $\alpha\_{u} : \includegraphics[scale=1.5,align=c]{adj/1morph/a\_u\_s.pdf} = \includegraphics[scale=1.5,align=c]{adj/1morph/a\_u\_t.pdf}$ | and | $\alpha\_{c}: \includegraphics[scale=1.5,align=c]{adj/1morph/a\_c\_s.pdf} = \includegraphics[scale=1.5,align=c]{adj/1morph/a\_c\_t.pdf}$. | Finally, given weakly invertible natural transformations *α*, *β* : *F* → *G*, a 2-morphism *α* ⇒ *β* is an invertible modification. Using and to denote the components of *α* and *β*, respectively, *m* consists of 1. invertible 2-morphisms | | | | | --- | --- | --- | | $m\_X=\includegraphics[scale=1.5,align=c]{adj/2morph/m\_x.pdf}: \includegraphics[scale=1,align=c]{adj/2morph/a\_x.pdf} \Longrightarrow \includegraphics[scale=1,align=c]{adj/2morph/b\_x.pdf}$ | and | $m\_Y=\includegraphics[scale=1.5,align=c]{adj/2morph/m\_y.pdf} : \includegraphics[scale=1,align=c]{adj/2morph/a\_y.pdf} \Longrightarrow \includegraphics[scale=1,align=c]{adj/2morph/b\_y.pdf}$ ; | 2. relations | | | | | --- | --- | --- | | $m\_l:\includegraphics[scale=1.5,align=c]{adj/2morph/m\_l\_s.pdf} = \includegraphics[scale=1.5,align=c]{adj/2morph/m\_l\_t.pdf}$ | and | $m\_r: \includegraphics[scale=1.5,align=c]{adj/2morph/m\_r\_s.pdf}= \includegraphics[scale=1.5,align=c]{adj/2morph/m\_r\_t.pdf}$. | Given a 2-category C, the restriction map *E**l* : Map(Adj(2, 1), C) → Map*L*(*θ*(1), C) is surjective on objects. Let *F*(*l*) : *F*(*X*) → *F*(*Y*) be a 1-morphism in C which is a left adjoint. We can pick a right adjoint *F*(*r*) to *F*(*l*), together with unit and counit 2-morphisms satisfying the snake relations and this data determines a functor Adj(2, 1) → C. Now we show that the fibres are connected. For this we will need the following Lemmas. [ardef] Let C be a 2-category. Given *F*, *G* ∈ Map(Adj(2, 1), C) with *F* = *G* on {*X*, *Y*, *l*} there exists an equivalence *α* : *F* → *G* in Map({*X*, *Y*, *l*, *r*}, C), which is the identity on {*X*, *Y*, *l*}. We use red to denote the images of cells under *F* and blue for their images under *G*. For the images of cells where *F* = *G* we use black. So we denote $F(u):=\includegraphics[scale=1,align=c]{adj/1morph/f\_u.pdf}$, $F(c):=\includegraphics[scale=1,align=c]{adj/1morph/f\_c.pdf}$, $G(u):=\includegraphics[scale=1,align=c]{adj/1morph/g\_u.pdf}$ and $G(c):=\includegraphics[scale=1,align=c]{adj/1morph/g\_c.pdf}$. We need an isomorphism *α**r* : *F*(*r*) → *G*(*r*), so take *α**r* :  =  . [audef] Let C be a 2-category. Given *F*, *G* ∈ Map(Adj(2, 1), C) with *F* = *G* on {*X*, *Y*, *l*, *r*} there exists an equivalence *α* : *F* → *G* in Map({*X*, *Y*, *l*, *r*, *u*}, C), which is the identity on {*X*, *Y*, *l*}. We use green to denote the values of *α*. Since *α* is the identity on *X*, *Y* and *l*, the relation *α**u* will be of the form . So we define $$\alpha\_r=\includegraphics[scale=1,align=c]{adj/1morph/a\_r\_special.pdf}:=\includegraphics[scale=1,align=c]{adj/1morph/a\_r\_def2.pdf}$$ and then the following is a proof of *α**u* :  . [acdef] Let C be a 2-category. Given *F*, *G* ∈ Map(Adj(2, 1), C) with *F* = *G* on {*X*, *Y*, *l*, *r*, *u*} we have *F* = *G*. We have the following proof that *F*(*c*) = *G*(*c*) :  . [connected] Given a 2-category C, the fibres of *E**l* : Map(Adj(2, 1), C) → Map*L*(*θ*(1), C) are connected. Consider *F*, *G* ∈ Map(Adj(2, 1), C) which agree on *X*,*Y* and *l*. We want to define an equivalence *α* : *F* → *G* in Map(Adj(2, 1), C) which restricts to the identity on *X*, *Y* and *l*. By Lemma [ardef] there exists an equivalence *α* : *F* → *G* in Map({*X*, *Y*, *l*, *r*}, C), which is the identity on {*X*, *Y*, *l*}. Since the restriction map Map(Adj(2, 1), C) → Map({*X*, *Y*, *l*, *r*}, C) is a fibration, one can extend this to an equivalence *α* : *F* → *F*1 in Map(Adj(2, 1), C), where *F*1 agrees with *G* on *X*, *Y*, *l*, *r* and *α* is the identity on *X*, *Y*, *l*. So now it is enough to find an equivalence *F*1 → *G* which is the identity on *X*, *Y*, *l*, where *F*1 = *G* on {*X*, *Y*, *l*, *r*}. So we can repeat this process, applying the above Lemmas, to get the an equivalence *α* : *F* → *G* in Map({*X*, *Y*, *l*, *r*, *u*, *c*}, C), which is the identity on {*X*, *Y*, *l*}. Now C is a 2-category and {*X*, *Y*, *l*, *r*, *u*, *c*} is the 2-skeleton of Adj(2, 1), so 1-morphisms in Map({*X*, *Y*, *l*, *r*, *u*, *c*}, C) are the same thing as 1-morphisms in Map(Adj(2, 1), C). Now we show that the fibres are 1-connected. [mrdef] Let C be a 2-category. Given a 1-morphism *α* : *F* → *F* in Map(Adj(2, 1), C) such that *α* is the identity on {*X*, *Y*, *l*}, we have *α* = Id. Denote *α**r* : *F*(*r*) → *F*(*r*) by and consider the relation *α**u* :  . The following is a proof that *α**r* = Id*r* :  . Given a 2-category C, the fibres of *E**l* : Map(Adj(2, 1), C) → Map*L*(*θ*(1), C) are 1-connected. This follows directly from the previous Lemma. Given a 2-category C, the fibres of *E**l* : Map(Adj(2, 1), C) → Map*L*(*θ*(1), C) are weakly contractible. Since C is a 2-category and *θ*(1) contains the 0-skeleton of Adj(2, 1), the fibres of this map are 1-groupoids. Therefore, since the fibres are 1-connected, they are weakly contractible. The homotopy pullback ===================== Given a diagram of *n*-groupoids $$\xymatrix{ & X\ar[d]^{F} \\ Y\ar[r]\_{G} & Z,}$$ its pullback *X* × *Z**Y* in Cat*n* is just the pullback in gSet*n*, equipped with an obvious *T**n*-algebra structure. We want to show that *X* × *Z**Y* is actually an *n*-groupoid, provided either *F* or *G* is a fibration. One could give a direct proof of this, using the lifting properties of the fibration and the fact that equivalences can be promoted to adjoint equivalences to construct weak inverses. However, we prefer to give another proof, using computads and Theorem [fibration], which is more in tune with the general idea of this paper. Our strategy is to define a homotopy pullback *X* × *Z**h**Y*, which we can show is always an *n*-groupoid, and then to show that when *F* or *G* is a fibration the natural map *X* × *Z**Y* → *X* × *Z**h**Y* is a weak equivalence of *n*-categories. We define the **homotopy pullback** of a diagram $$\xymatrix{ & X\ar[d]^{F} \\ Y\ar[r]\_{G} & Z}$$ of *n*-groupoids to be the pullback $$\xymatrix{X\times\_{Z}^h Y\ar[r]\ar[d] & X\times Y\ar[d]^{F\times G} \\ \overline{\operatorname{Map}}(\theta^{(1)},Z)\ar[r]\_-{s\times t} & Z\times Z }$$ in Cat*n*. Given an *n*-groupoid *Z*, a *k*-morphism in $\overline{\operatorname{Map}}(\theta^{(\ell)},Z)$ is the same thing as an ℓ-morphism in Map(*θ*(*k*), *Z*), since they both correspond to maps *θ*(ℓ); (*k*) → *Z*. So the set of *k*-morphisms in the homotopy pullback is the pullback of sets $$\xymatrix{\operatorname{Hom}(\theta^{(k)},X\times\_{Z}^h Y)\ar[r]\ar[d] & \operatorname{Hom}(\theta^{(k)},X)\times\operatorname{Hom}(\theta^{(k)},Y)\ar[d] \\ \operatorname{Hom}(\theta^{(k)},\overline{\operatorname{Map}}(\theta^{(1)},Z))\ar[r] & \operatorname{Hom}(\theta^{(k)},Z\times Z),}$$ which is equal to the pullback of sets $$\xymatrix{\operatorname{Hom}(\theta^{(k)},X\times\_{Z}^h Y)\ar[r]\ar[d] & \operatorname{Hom}(\theta^{(k)},X)\times\operatorname{Hom}(\theta^{(k)},Y)\ar[d] \\ \operatorname{Hom}(\theta^{(1)},\operatorname{Map}(\theta^{(k)},Z))\ar[r] & \operatorname{Hom}(\partial \theta^{(1)},\operatorname{Map}(\theta^{(k)},Z)).}$$ This means that a *k*-morphism in *X* × *Z**h**Y* consists of a diagram $$\xymatrix{\theta^{(k)}\ar[r]\ar[d] & X\ar@2[dl]\ar[d] \\ Y\ar[r] & Z.}$$ Now we show that the homotopy pullback of *n*-groupoids is an *n*-groupoid. Let Iso(*k*, *k*) be the computad with two parallel *m*-cells *x**m* and *y**m* for each *m* < *k*, two *k*-cells *l* : *x**k* − 1 → *y**k* − 1 and *r* : *y**k* − 1 → *x**k* − 1 and two (*k* + 1)-cells *u* : id*x**k* − 1 → *r* ∘ *l* and *c* : *l* ∘ *r* → id*y**k* − 1. Let Adj(*k* + 1, *k*) be the computad obtained from Iso(*k*, *k*) by adding two (*k* + 2)-cells, corresponding to the two snake relations. Let C be an *n*-category and suppose all (*k* + 1)-morphisms are weakly invertible in C. Then a *k*-morphism in C is weakly invertible if and only if the corresponding map *θ*(*k*) → C extends to Iso(*k*, *k*) → C. Given a *k*-morphism *l* : *x* → *y* in C, an extension of the corresponding map *θ*(*k*) → C to Iso(*k*, *k*) → C consists of a choice of *k*-morphism *r* : *y* → *x* in C and (*k* + 1)-morphisms id*x* → *r* ∘ *l* and *l* ∘ *r* → id*y*. Since all (*k* + 1)-morphisms are weakly invertible in C, such choices exist if and only if *l* is weakly invertible. Let *X* be an *n*-groupoid and consider *F*, *G* ∈ Map(Adj(*k* + 1, *k*), *X*) where *F*(*l*) = *G*(*l*). Then there exists an equivalence *α* : *F* → *G* in Map(Iso(*k*, *k*), *X*), restricting to the identity on *l*. We need to construct *α**r*, *α**u* and *α**c* in *X*. By passing to h2(Hom*X*(*F*(*x**k* − 2), *F*(*y**k* − 2))) we can reduce to the case where *n* = 2 and *k* = 1 and apply Lemma [connected]. From this we get *α**r* as a 2-morphism and *α**u*, *α**c* as identities between 2-morphisms in h2(Hom*X*(*F*(*x**k* − 2), *F*(*y**k* − 2))). These correspond to the required (*k* + 1)-morphism *α**r* and the (*k* + 2)-morphisms *α**u* and *α**c* in *X*. Let *X* be an *n*-groupoid, *F*, *G* ∈ Map(Adj(*k* + 1, *k*), *X*) functors and *α* : *F* → *G* in Map(*θ*(*k*), *X*) an equivalence. Then *α* extends to an equivalence *F* → *G* in Map(Iso(*k*, *k*), *X*). Passing to a Hom*X*(*F*(*x**k* − 2), *F*(*y**k* − 2)) we can reduce to the case where *k* = 1. Now Adj(2, 1) only has cells of dimension  ≤ 3, so extending *α* only involves constructing composites of dimension  ≤ 4 in Hom*X*(*F*(*x**k* − 2), *F*(*y**k* − 2)). Therefore we can apply Theorem [fibration] to h3(Hom*X*(*F*(*x**k* − 2), *F*(*y**k* − 2))) and lift *α* starting at *F* to get an equivalence *F* → *G*1 in Map(Adj(*k* + 1, *k*), *X*), where *G*1(*l*) = *G*(*l*). Now we just need to find an equivalence *G*1 → *G* in Map(Iso(*k*, *k*), *X*) restricting to the identity on *l*, which we can do by the previous Lemma. The homotopy pullback of a diagram of *n*-groupoids is an *n*-groupoid. Consider a diagram $$\xymatrix{ & X\ar[d]^{F} \\ Y\ar[r]\_{G} & Z}$$ of *n*-groupoids. Let *k* ≤ *n* and suppose all (*k* + 1)-morphisms are weakly invertible in the homotopy pullback. Note that this condition vacuously holds for *k* = *n*. Let $$\xymatrix{\theta^{(k)}\ar[r]^{x}\ar[d]\_{y} & X\ar@2[dl]\_{\alpha}\ar[d]^{F} \\ Y\ar[r]\_{G} & Z}$$ be a *k*-morphism in the homotopy pullback. We want to show that it is weakly invertible, so we need to find an extension $$\xymatrix{\theta^{(k)}\ar[dr]\ar@/^/[drr]^{x}\ar@/\_/[ddr]\_{y} & & \\ &\operatorname{Iso}\_{(k,k)}\ar@{.>}[r]\ar@{.>}[d] & X\ar@{:>}[dl]\ar[d]^{F} \\ & Y\ar[r]\_{G} & Z.}$$ Since *X* and *Y* are *n*-groupoids, Lemma [adjeq] tells us we can find extensions of *x* and *y* to Adj(*k* + 1, *k*), to get a diagram of the form $$\xymatrix{\theta^{(k)}\ar[dr]\ar@/^/[drr]^{x}\ar@/\_/[ddr]\_{y} & & \\ &\operatorname{Adj}\_{(k+1,k)}\ar[r]^{x}\ar[d]\_{y} & X \\ & Y. & }$$ The previous Lemma now allows us to extend *α* : *F* ∘ *x* → *G* ∘ *y* from Map(*θ*(*k*), *Z*) to Map(Iso(*k*, *k*), *Z*) as desired. Given a diagram of *n*-groupoids $$\xymatrix{ & X\ar[d]^{F} \\ Y\ar[r]\_{G} & Z,}$$ where *F* is a fibration, the canonical map *X* × *Z**Y* → *X* × *Z**h**Y* is a weak equivalence of *n*-categories. Take a *k*-morphism $$\xymatrix{\theta^{(k)}\ar[r]^{x}\ar[d]\_{y} & X\ar@2[dl]\_{\alpha}\ar[d]^{F} \\ Y\ar[r]\_{G} & Z.}$$ in the homotopy pullback, whose source and target are in the image of the map *X* × *Z**Y* → *X* × *Z**h**Y*. This means that *α*∂*θ*(*k*) is the identity natural transformation and so *α* is simply an equivalence *F*(*x*) → *G*(*y*) in *Z*. Since *F* is a fibration, we can find a lift *ϕ* in $$\xymatrix{\theta^{(k)}\ar[r]^{x}\ar[d]\_{s} & X \ar[d]^{F} \\ \theta^{(k+1)}\ar@{.>}[ru]^{\phi}\ar[r]\_{\alpha} & Z.}$$ Now *ϕ* is a (*k* + 1)-morphism *x* → *x̄* such that *F*(*ϕ*) = *α* : *F*(*x*) → *G*(*y*), so *F*(*x̄*) = *G*(*y*) and we have a *k*-morphism $$\xymatrix{\theta^{(k)}\ar[r]^{\bar{x}}\ar[d]\_{y} & X\ar@{=}[dl]\ar[d]^{F} \\ Y\ar[r]\_{G} & Z}$$ in the pullback *X* × *Z**Y*. In order to show that the canonical map is essentially surjective on *k*-morphisms, we need to show that this is equivalent to the original *k*-morphism in the homotopy pullback. Since *X* × *Z**h**Y* is an *n*-groupoid, it’s enough to show that there is a (*k* + 1)-morphism between them. So we need to find a natural transformation $$\xymatrix{\theta^{(k+1)}\ar[r]^{\phi}\ar[d]\_{\operatorname{id}\_y} & X\ar@{:>}[dl]\ar[d]^{F} \\ Y\ar[r]\_{G} & Z}$$ whose restrictions along $\xymatrix{\theta^{(k)}\ar[r]^{s} & \theta^{(k+1)}}$ and $\xymatrix{\theta^{(k)}\ar[r]^{t} & \theta^{(k+1)}}$ are $$\xymatrix{\theta^{(k)}\ar[r]^{x}\ar[d]\_{y} & X\ar@2[dl]\_{\alpha}\ar[d]^{F} & & \theta^{(k)}\ar[r]^{\bar{x}}\ar[d]\_{y} & X\ar@{=}[dl]\ar[d]^{F} & \\ Y\ar[r]\_{G} & Z & \text{and} & Y\ar[r]\_{G} & Z & \text{respectively.}}$$ We can extend to the (*k* + 1)-cell by $$\xymatrix{F(x)\ar[r]^{F(\phi)}\ar[d]\_{\alpha} & F(\bar{x})\ar@{=}[dl]\ar[d]^{\operatorname{id}} \\ G(y)\ar[r]\_{\operatorname{id}} & G(y).}$$ Consider a weak equivalence of *n*-categories C → D and suppose that D is an *n*-groupoid. Then C is an *n*-groupoid. Consider a *k*-morphim *f* : *x* → *y* in C and let *F*(*f*)− 1 : *F*(*y*) → *F*(*x*) be a weak inverse for *F*(*f*). Since *F* is a weak equivalence, there exists an $\overline{f}:y\to x$ in C with $F(\overline{f})\simeq F(f)^{-1}$. Then $F(\overline{f}\circ f)\simeq F(f)^{-1}\circ F(f) \simeq \operatorname{id}\_{F(x)}$. Since *F* is a weak equivalence, this implies $\overline{f}\circ f\simeq \operatorname{id}\_{x}$. Similarly, we have $f\circ\overline{f}\simeq \operatorname{id}\_{y}$, so $\overline{f}$ is weak inverse to *f*. [pbngrpd] Given a diagram of *n*-groupoids $$\xymatrix{ & X\ar[d]^{F} \\ Y\ar[r]\_{G} & Z,}$$ where *F* is a fibration, the pullback *X* × *Z**Y* is an *n*-groupoid. This follows from the fact that *X* × *Z**h**Y* is an *n*-groupoid and the canonical map *X* × *Z**Y* → *X* × *Z**h**Y* is a weak equivalence. The long exact sequence for a fibration ======================================= Now we show that a fibration of *n*-groupoids is a weak equivalence if and only if its fibres are weakly contractible, by using an analog of the long exact sequence in homotopy groups for a fibration of spaces. [homfib] Let *p* : *E* → *B* be a fibration of *n*-groupoids. Then, for any 0 ≤ *k* ≤ *n* − 1 and any *k*-morphisms *x*, *y* in *E*, the induced map *E*(*x*, *y*) → *B*(*p*(*x*), *p*(*y*)) is a fibration of (*n* − *k* − 1)-groupoids. The lifting condition for ℓ-morphisms in *B*(*p*(*x*), *p*(*y*)) follows easily from the lifting condition for (ℓ + *k* + 1)-morphisms in *B*. Given a map of *n*-groupoids *f* : *A* → *B* and an object *b* ∈ *B*, we define the **fibre** *f*− 1(*b*) of *f* over *b* to be the pullback $$\xymatrix{f^{-1}(b)\ar[r]\ar[d] & A\ar[d]^{f} \\ \theta^{(0)} \ar[r]\_{b} & B}$$ in Cat*n*. So a *k*-morphism in *f*− 1(*b*) is a *k*-morphism *a* ∈ *A**k* such that *f*(*a*) = Id*b*(*k*) (defined inductively by saying that Id*b*(*k*) is the identity morphism on Id*b*(*k* − 1)). If *p* : *E* → *B* is a fibration between *n*-groupoids and *b* ∈ *B* is an object, then *p*− 1(*b*) is an *n*-groupoid. This follows from Proposition [pbngrpd]. Let *G* be an *n*-groupoid and *x* ∈ *G* an object. Define id*x*(0) :  = *x* and id*x*(*k*) = idid*x*(*k* − 1). Denote by Ω*x**G* the (*n* − 1)-groupoid Hom(*x*, *x*). This comes equipped with a strictly associative monoidal structure, given by composition in *G*. Denote Ω*x**k**G* :  = Ωid*x*(*k* − 1)Ω*x**k* − 1*G*. Finally, denote by *G*0 the set of objects of *G*. Let *G* be an *n*-groupoid. We define *π*0(*G*) :  = *G*0/ ∼ , where the equivalence relation  ∼  is equivalence in *G*. Now let *x* ∈ *G* be an object. Define *π*0(*G*, *x*) to be the pointed set (*π*0(*G*), [*x*]), were [*x*] denotes the equivalence class of *x* in *π*0(*G*). Finally, for 1 ≤ *k* ≤ *n*, define *π**k*(*G*, *x*) :  = *π**k* − 1(Ω*x**G*, id*x*) with monoid structure induced by composition. Note that, for *k* ≥ 1, the monoids *π**k*(*G*, *x*) are actually groups and for *k* ≥ 2 they are abelian, by an Eckmann-Hilton argument with pasting diagrams. Moreover, given a map of *n*-groupoids *f* : *A* → *B* and an object *a* ∈ *A* one can also define *π**k*(*f*, *a*) : *π**k*(*A*, *a*) → *π**k*(*B*, *f*(*a*)), making *π**k* into a functor on pointed *n*-groupoids. [htpygrps] Let *f* : *A* → *B* be a map of *n*-groupoids. Then *f* is a weak equivalence if and only if the maps *π**k*(*f*, *a*) : *π**k*(*A*, *a*) → *π**k*(*B*, *f*(*a*)) are isomorphisms, for all *a* ∈ *A*0 and for all *k* ≥ 0. The proof is by induction on *n*. Suppose *f* is a weak equivalence. Then it is essentially surjective, so it is surjective on *π*0. Moreover, for any objects *x*, *y* ∈ *A*, the map *A*(*x*, *y*) → *B*(*f*(*x*), *f*(*y*)) is a weak equivalence of (*n* − 1)-groupoids, so by the induction hypothesis it induces isomorphisms on all homotopy groups. In particular, it induces isomorphisms *π**k*(*A*(*x*, *x*), id*x*) → *π**k*(*B*(*f*(*x*), *f*(*x*)), id*f*(*x*)), for 0 ≤ *k* ≤ *n* − 1. Now *π**k*(*A*(*x*, *x*), id*x*) = *π**k* + 1(*A*, *x*) and *π**k*(*B*(*f*(*x*), *f*(*x*)), id*f*(*x*)) = *π**k* + 1(*B*, *f*(*x*)),  so we conclude that *π**l*(*f*, *x*) : *π**l*(*A*, *x*) → *π**l*(*B*, *f*(*x*)) is an isomorphism, for 1 ≤ *l* ≤ *n*. So all that is left to do is to show that *π*0(*f*) is injective. So suppose [*f*(*x*)] = [*f*(*y*)] in *π*0(*B*) and pick an equivalence *β* : *f*(*x*) → *f*(*y*) in *B*. Since *A*(*x*, *y*) → *B*(*f*(*x*), *f*(*y*)) is essentially surjective, there exists *α* : *x* → *y* and an equivalence *f*(*α*) ⇒ *β*. In particular, we have [*x*] = [*y*] in *π*0(*A*). Now suppose the maps *π**k*(*f*, *a*) : *π**k*(*A*, *a*) → *π**k*(*B*, *f*(*a*)) are isomorphisms, for all objects *a* ∈ *A* and for all *k* ≥ 0. Then *f* is essentially surjective, being an isomorphism on *π*0. Now let *x*, *y* ∈ *A* and consider the map *A*(*x*, *y*) → *B*(*f*(*x*), *f*(*y*)). We need to show that this map is a weak equivalence of (*n* − 1)-groupoids, and by the induction hypothesis it is enough to show that the maps *π**k*(*A*(*x*, *y*), *α*) → *π**k*(*B*(*f*(*x*), *f*(*y*)), *f*(*α*)) are isomorphisms, for 0 ≤ *k* ≤ *n* − 1 and *α* : *x* → *y* in *A*. Consider the maps *A*(*x*, *x*) → *A*(*x*, *y*) and *B*(*f*(*x*), *f*(*x*)) → *B*(*f*(*x*), *f*(*y*)) defined by wiskering with *α* and *f*(*α*), respectively. These induce isomorphisms *π**k*(*A*(*x*, *x*), id*x*) → *π**k*(*A*(*x*, *y*), *α*) and *π**k*(*B*(*f*(*x*), *f*(*x*)), id*f*(*x*)) → *π**k*(*B*(*f*(*x*), *f*(*y*)), *f*(*α*)) for 0 ≤ *k* ≤ *n* − 1, which fit in a commuting diagram $$\xymatrix{\pi\_k(A(x,y),\alpha) \ar[r] & \pi\_k(B(f(x),f(y)),f(\alpha)) \\ \pi\_k(A(x,x),\operatorname{id}\_x)\ar[u]^\*[@]-{\simeq} \ar[r]^-{\simeq} & \pi\_k(B(f(x),f(x)),\operatorname{id}\_{f(x)})\ar[u]^\*[@]-{\simeq} \\ \pi\_{k+1}(A,x)\ar[r]^{\simeq}\ar@{=}[u] & \pi\_{k+1}(B,f(x)).\ar@{=}[u] }$$ This shows that *π**k*(*A*(*x*, *y*), *α*) → *π**k*(*B*(*f*(*x*), *f*(*y*)), *f*(*α*)) is an isomorphism. Now we construct the long exact sequence in homotopy groups associated with a fibration of *n*-groupoids. Let *p* : *E* → *B* be a fibration of *n*-groupoids, *x* ∈ *E* an object, *b* = *p*(*x*) ∈ *E* and let *F**b* be the fibre of *p* over *b*. Then the induced map Ω*x**E* → Ω*b**B* is a fibration of (*n* − 1)-groupoids, whose fibre over id*b* is Ω*x**F**b*. The fact that Ω*x**E* → Ω*b**B* is a fibration follows from Lemma [homfib]. The fact that the fibre is Ω*x**F**b* follows from unraveling the definitions. Let *p* : *E* → *B* be a fibration of *n*-groupoids, *x* ∈ *E* an object, *b* = *p*(*x*) ∈ *E* and let *F**b* be the fibre of *p* over *b*. We define a pointed map ∂ : *π*0(Ω*b**B*, id*b*) → *π*0(*F**b*, *x*) sending the class [*α*] of a 1-morphism *α* : *b* → *b* in *B* to the class [*y*] where *y* is the endpoint of a lift of *α* along *p*, starting at *x*. The above procedure gives a well defined pointed map ∂ : *π*0(Ω*b**B*, id*b*) → *π*0(*F**b*, *x*). Suppose we have 1-morphisms *f*, *g* : *b* → *b* in *B*, and a 2-morphism *α* : *f* → *g*, so that [*f*] = [*g*] in *π*0(Ω*b**B*, id*b*). Let *f̄* : *x* → *y* and *ḡ* : *x* → *z* be lifts of *f* and *g* along *p*. We want to show that [*y*] = [*z*] in *F**b*, so we need to find a morphism *y* → *z* in *E* that maps to id*b*. Let *f̄*− 1 be a weak inverse for *f̄* and let *f*− 1 :  = *p*(*f̄*− 1). Consider the map *ḡ* ∘ *f̄*− 1 : *y* → *z*. This maps down to *g* ∘ *f*− 1. Composing *α*− 1 with a 2-morphism *f* ∘ *f*− 1 → id*b*, we get a 2-morphism *g* ∘ *f*− 1 → id*b*. Now we can lift this 2-morphism starting at *ḡ* ∘ *f̄*− 1 and the target of the resulting 2-morphism is a 1-morphism *y* → *z* which maps to id*b* under *p*. Let *p* : *E* → *B* be a fibration of *n*-groupoids and *F**b* its fibre over *b* ∈ *B*. Then the map $\xymatrix{\pi\_k(B,b)\ar[r]^{\partial} & \pi\_{k-1}(F\_b,x)}$ is a group homomorphism for *k* ≥ 2. It is enough to show that $\xymatrix{\pi\_0(\Omega^2\_b B,\operatorname{Id}\_{\operatorname{id}\_b})\ar[r]^{\partial} & \pi\_{0}(\Omega\_xF\_b,\operatorname{id}\_x)}$ is a group homomorphism. This follows from the fact that $$\xymatrix@1{b\ruppertwocell^{\operatorname{id}\_b}\rlowertwocell\_{\operatorname{id}\_b}\ar[r] & b} = \xymatrix@1{b\rtwocell^{\operatorname{id}\_b}\_{\operatorname{id}\_b} & b \rtwocell^{\operatorname{id}\_b}\_{\operatorname{id}\_b} & b}.$$ Let *p* : *E* → *B* be a fibration, *x* ∈ *E* an object, set *b* :  = *p*(*x*) and let *F**b* be the fibre of *f* over *b*. Then the following is an exact sequence of groups and pointed sets $$\xymatrix{\cdots\ar[r] & \pi\_1(E,x)\ar[r] & \pi\_1(B,b)\ar[r]^{\partial} & \pi\_0(F\_b,x)\ar[r] & \pi\_0(E,x)\ar[r] & \pi\_0(B,b)}.$$ It is enough to show that $$\xymatrix{\pi\_0(\Omega\_xE,\operatorname{id}\_x)\ar[r] & \pi\_0(\Omega\_bB,\operatorname{id}\_b)\ar[r]^{\partial} & \pi\_0(F\_b,x)\ar[r] & \pi\_0(E,x)\ar[r] & \pi\_0(B,b)}$$ is exact. This in turn is easy to check directly. A fibration of *n*-groupoids *f* : *A* → *B* is a weak equivalence if and only if for every object *b* ∈ *B* the fibre *f*− 1(*b*) is weakly contractible. Coherence for adjunctions in a 3-category ========================================= We now define the presentation Adj(3, 1) consisting of coherence data for an adjunction between 1-morphisms in a 3-category and prove our Main Theorem, which we restate here for convenience. [main] Given a strict 3-category C, the restriction map *E**l* : Map(Adj(3, 1), C) → Map*L*(*θ*(1), C) is a weak equivalence of strict 3-groupoids. The presentation Adj(3, 1) consists of 1. objects *X* =  and *Y* =  ; 2. 1-cells *l*= $:\xymatrix@1{X\ar@1@<1ex>[r] & \ar@1@<1ex>[l]Y}:$  = *r*; 3. 2-cells llllll *u* =  & &  :  & & $\xymatrix@1{\ar@2[r] & }$ & ; *c* =  & &  :  & & $\xymatrix@1{\ar@2[r] & }$ & ; 4. 3-cells lllllllll *C**l* =  & &  :  & & $\xymatrix@1{\ar@3@<1ex>[r] & \ar@3@<1ex>[l]}$ & &  :  & &  = *C**l*− 1 ; *C**r* =  & &  :  & & $\xymatrix@1{\ar@3@<1ex>[r] & \ar@3@<1ex>[l]}$ & &  :  & &  = *C**r*− 1 ; 5. relations lllll &  :  & &  =  & Id*l*(2) ; &  :  & &  =  & ; &  :  & &  =  & Id*r*(2) ; &  :  & &  =  & ; &  :  & &  =  & . An adjunction in a 3-category is a pair of 1-morphisms, together with unit and counit 2-morphisms satisfying the snake relations up to isomorphism. By picking inverse pairs of 3-morphisms - which are usually called **cusp 3-morphisms** - implementing these snake relations and then considering the relations implied in witnessing that these are in fact inverse pairs of 3-morphisms, one obtains all of the cells in the above presentation, except for the final one. This is the well known **swallowtail relation**. In and the definitions of coherent adjunction include two swallowtail relations, but in the author shows that one follows from the other. We will give a string diagram proof of this fact. Theorem [main] will be a consequence of the following Lemma. [pullback] Given a 3-category C, the map Map(Adj(3, 1), C) → Map(Adj(3, 1), h2C) × Map*L*(*θ*(1), h2C)Map*L*(*θ*(1), C) induced by the square $$\xymatrix{\operatorname{Map}(\operatorname{Adj}\_{(3,1)},{\mathcal{C}})\ar[d]\ar[r]^-{E\_l} & \operatorname{Map}^L(\theta^{(1)}, {\mathcal{C}})\ar[d] \\ \operatorname{Map}(\operatorname{Adj}\_{(3,1)},\operatorname{h}\_2{{\mathcal{C}}})\ar[r]\_-{E\_l} & \operatorname{Map}^L(\theta^{(1)}, \operatorname{h}\_2{{\mathcal{C}}})}$$ is a weak equivalence. First we explain how Theorem [main] follows from this Lemma. We need one more standard definition and a another standard Lemma. A map of *n*-groupoids *f* : *X* → *Y* is a **trivial fibration** if it is both a fibration and a weak equivalence. [trivfib] Consider a pullback $$\xymatrix{X\times\_B Y\ar[r]\ar[d] & Y \ar[d]^g \\ X\ar[r]\_-{f} & B, }$$ where *f* is a trivial fibration. Then *π*2 : *X* × *B**Y* → *Y* is a trivial fibration. The map *π*2 is a fibration, because a lifting problem for *π*2 reduces to a lifting problem for *f*, which is a fibration. The map *π*2 is essentially surjective (on objects) because *f* is essentially surjective. Finally, *π*2− 1(*y*) = *f*− 1(*g*(*y*)) is weakly contractible, because *f* is a fibration and a weak equivalence. [Proof of Theorem [main]] Note that the bottom map in the square from Lemma [pullback] is the same as Map(Adj(2, 1), h2C) → Map*L*(*θ*(1), h2C) which is a fibration by Theorem [fibration] and a weak equivalence of 2-groupoids by Proposition [2d]. Therefore, applying Lemma [trivfib], the map *π*2 : Map(Adj(3, 1), h2C) × Map*L*(*θ*(1), h2C)Map*L*(*θ*(1), C) → Map*L*(*θ*(1), C) is a trivial fibration of 3-groupoids. By Lemma [pullback], the composite Map(Adj(3, 1), C) → Map*L*(*θ*(1), C) is a weak equivalence. So we need to prove Lemma [pullback], which is to say we need to show that the map Map(Adj(3, 1), C) → Map(Adj(3, 1), h2C) × Map*L*(*θ*(1), h2C)Map*L*(*θ*(1), C) is a weak equivalence of 3-groupoids. Since this map is a fibration, it is enough to show that it is surjective on objects and has weakly contractible fibres. Surjective on objects --------------------- We show that the map Map(Adj(3, 1), C) → Map(Adj(3, 1), h2C) × Map*L*(*θ*(1), h2C)Map*L*(*θ*(1), C) is surjective on objects. We define $\overline{\operatorname{Adj}}\_{(3,1)}$ to be the presentation obtained from Adj(3, 1) by removing the swallowtail relation. Let C be a 3-category and consider a functor $$\overline{F}:\overline{\operatorname{Adj}}\_{(3,1)}\to{\mathcal{C}}.$$ Then there exists a functor *F* : Adj(3, 1) → C which agrees with $\overline{F}$ on all cells in $\overline{\operatorname{Adj}}\_{(3,1)}$ except for *C**l* and *C**l*− 1. We use the same notation for the image of a cell under $\overline{F}$ as for the cell itself. We define $$F(C\_l)=\includegraphics[scale=1,align=c]{adj/obj/cusp\_l\_def.pdf}$$ and $$F(C\_l^{-1})=\includegraphics[scale=1,align=c]{adj/obj/cusp\_l\_inv\_def.pdf}.$$ These are clearly inverse to each other and the following is a proof that the swallowtail relation holds with this choice of cusps: . The map Map(Adj(3, 1), C) → Map(Adj(3, 1), h2C) × Map*L*(*θ*(1), h2C)Map*L*(*θ*(1), C) is surjective on objects. Given a map *F* : Adj(3, 1) → h2C we must lift it to Adj(3, 1) → C. From *F* : Adj(3, 1) → h2C we can build a map $\overline{F}:\overline{\operatorname{Adj}}\_{(3,1)}\to{\mathcal{C}}$ whose image in Map(Adj(3, 1), h2C) is *F*, so we can apply the previous Lemma. Additional swallowtail relations -------------------------------- Denote by (SW) the swallowtail relation in Adj(3, 1). We show that there are three additional swallowtail relations $(\overline{\operatorname{SW}})$, (SW2) and $(\overline{\operatorname{SW}\_2})$ which hold in *F*(Adj(3, 1)). We will therefore use all four relations freely in the rest of the paper. The relation (SW2) is usually included in the definition of a coherent adjunction. The fact that it follows from (SW) is originally due to, in the context of duals in monoidal bicategories. We present here a new string diagram proof. The sources of the relations $(\overline{\operatorname{SW}})$ and $(\overline{\operatorname{SW}\_2})$ are inverse to those of (SW) and (SW2), so they follow trivially. We define Adj(3, 1)+ to be the presentation obtained from Adj(3, 1) by adding the relations llllll $(\overline{\operatorname{SW}})$ & &  :  & &  =  & ; (SW2) & &  :  & &  =  & ; $(\overline{\operatorname{SW}\_2})$ & &  :  & &  =  & . We have *F*(Adj(3, 1)+) = *F*(Adj(3, 1)). We have to show that the extra relations are already satisfied in *F*(Adj(3, 1)). The source of $\overline{\operatorname{SW}}$ is inverse to the source of the swallowtail relation in Adj(3, 1), so it follows. We have the following proof for SW2: . Finally, the source of $\overline{\operatorname{SW}\_2}$ is inverse to the source of SW2. Fibres are connected -------------------- Now we prove that the fibres are connected. Given *F*, *G* ∈ Map(Adj(3, 1), C) with the same image under the map Map(Adj(3, 1), C) → Map(Adj(3, 1), h2C) × Map*L*(*θ*(1), h2C)Map*L*(*θ*(1), C),  we need to find an equivalence *α* : *F* → *G* in Map(Adj(3, 1), C) that maps to the identity. Equivalently, given *F*, *G* ∈ Map(Adj(3, 1), C) with *F* = *G* on the 2-skeleton of Adj(3, 1), we need to find an equivalence *α* : *F* → *G* in Map(Adj(3, 1), C) which is the identity on the 1-skeleton of Adj(3, 1). We use and for the images of cells under functors *F* and *G*, respectively. When *F* and *G* agree on a cell, we use black. We use for the values of *α*. [acldef] Let C be a 3-category. Given *F*, *G* ∈ Map(Adj(3, 1), C) with *F* = *G* on sk2(Adj(3, 1)) there exists an equivalence *α* : *F* → *G* in Map(sk2(Adj(3, 1)) ∪ {*C**l*}, C) which is the identity on sk1(Adj(3, 1)) ∪ {*u*}. We need to construct $\alpha\_c=\includegraphics[scale=1,align=c]{adj/1morph/a\_c\_special.pdf}$ and show that the relation *α**C**l* is satisfied: . We define *α**c* :  =  and then we have the following proof of *α**C**l* :  . Let C be a 3-category. Given *F*, *G* ∈ Map(Adj(3, 1), C) with *F* = *G* on sk2(Adj(3, 1)) ∪ {*C**l*}, we have *F* = *G*. First we have *F*(*C**l*− 1) = *G*(*C**l*− 1) because both are both inverse to *F*(*C**l*) = *G*(*C**l*). Then we have *F*(*C**r*) = *G*(*C**r*) by . Finally, this implies *F*(*C**r*− 1) = *G*(*C**r*− 1). Let C be a 3-category. The fibres of Map(Adj(3, 1), C) → Map(Adj(3, 1), h2C) × Map*L*(*θ*(1), h2C)Map*L*(*θ*(1), C) are connected. Given *F*, *G* ∈ Map(Adj(3, 1), C) with *F* = *G* on {*X*, *Y*, *l*, *r*, *u*, *c*}, we need to find an equivalence *α* : *F* → *G* in Map(Adj(3, 1), C) which is the identity on {*X*, *Y*, *l*, *r*}. By Lemma [acldef], there exists an equivalence *α* : *F* → *G* in Map({*X*, *Y*, *l*, *r*, *u*, *c*, *C**l*}, C), which is the identity on {*X*, *Y*, *l*, *r*, *u*}. We can lift this to an equivalence *F* → *F*1 in Map(Adj(3, 1), C). Then *F*1 = *G* on {*X*, *Y*, *l*, *r*, *u*, *c*, *C**l*}, so *F*1 = *G*. Fibres are 1-connected ---------------------- Now we prove that the fibres are 1-connected. Given *F* ∈ Map(Adj(3, 1), C) and an equivalence *α* : *F* → *F* which is the identity on the 1-skeleton, we need to find a 2-equivalence *m* : *α* → Id in Map(Adj(3, 1), C) which is the identity on {*X*, *Y*, *l*}. We use and for the values of *α* and *m*, respectively. [mudef] Let C be a 3-category. Given a 1-morphism *α* : *F* → *F* in Map(Adj(3, 1), C) such that *α* is the identity on sk1(Adj(3, 1)), there exists a 2-morphism *m* : *α* → Id*F* in Map(sk1(Adj(3, 1)) ∪ {*u*}, C) which is the identity on {*X*, *Y*, *l*}. Denote *α**u* : *F*(*u*) → *F*(*u*) by . We want to define a 3-morphism *m**r* : Id*r* → Id*r*, which we denote by, such that the relation *m**u* :  =  is satisfied. Define *m**r* :  =  and then the following is a proof of *m**u* :  . [mcdef] Let C be a 3-category. Given a 1-morphism *α* : *F* → *F* in Map(Adj(3, 1), C) such that *α* is the identity on sk1(Adj(3, 1)) ∪ {*u*}, we have *α* = Id. Denote *α**c* : *F*(*c*) → *F*(*c*) by and consider the relation *α**C**l*− 1 :  . The following is a proof that *α**c* = Id*c* :  . Let C be a 3-category. The fibres of Map(Adj(3, 1), C) → Map(Adj(3, 1), h2C) × Map*L*(*θ*(1), h2C)Map*L*(*θ*(1), C) are 1-connected. Consider *F* ∈ Map(Adj(3, 1), C) and an equivalence *α* : *F* → *F* which is the identity on {*X*, *Y*, *l*, *r*}. By Lemma [mudef], there exists a 2-morphism *m* : *α* → Id*F* in Map({*X*, *Y*, *l*, *r*, *u*}, C) which is the identity on {*X*, *Y*, *l*}. We extend this to an equivalence *α* → *α*1 in Map(Adj(3, 1), C) with *α*1 = Id on {*X*, *Y*, *l*, *r*, *u*}. Then, by Lemma [mcdef], we have *α*1 = Id, so we have an equivalence *α* → Id which is the identity on {*X*, *Y*, *l*}. Fibres are 2-connected ---------------------- Now we prove that the fibres of Map(Adj(3, 1), C) → Map(Adj(3, 1), h2C) × Map*L*(*θ*(1), h2C)Map*L*(*θ*(1), C) are 2-connected. [aardef] Let C be a 3-category. Given a 2-morphism *m* : Id*F* → Id*F* in Map(Adj(3, 1), C) such that *m* is the identity on {*X*, *Y*, *l*}, we have *m* = Id. Denote *m**r* :  =  and consider the relation *m**c* :  . We have the following proof that *m**r* = Id*r*(2) :  . Let C be a 3-category. The fibres of Map(Adj(3, 1), C) → Map(Adj(3, 1), h2C) × Map*L*(*θ*(1), h2C)Map*L*(*θ*(1), C) are 2-connected. This follows from the previous Lemma. Let C be a 3-category. The fibres of Map(Adj(3, 1), C) → Map(Adj(3, 1), h2C) × Map*L*(*θ*(1), h2C)Map*L*(*θ*(1), C) are weakly contractible. Since morphisms in the fibres have to restrict to the identity on the 0-skeleton of Adj(3, 1), the fibres are 2-groupoids. Since we have shown they are 2-connected, they are therefore weakly contractible. This concludes the proof of Lemma [pullback] and therefore of Theorem [main]. Acknowledgements ---------------- I am grateful to Pedro Boavida, Christopher Douglas, John Huerta and Roger Picken, as well as the two anonymous reviewers, for many useful comments and suggestions. This work was partially supported by the FCT project grant **Higher Structures and Applications**, PTDC/MAT-PUR/31089/2017. The work that lead to this paper was also carried out while the author was visiting the **Max Planck Institute for Mathematics** and later a postdoc with the RTG 1670 **Mathematics inspired by String Theory and Quantum Field Theory** at the University of Hamburg. Coherence for adjunctions in a 3-category via string diagrams ============================================================= We construct a 3-categorical presentation Adj(3, 1) and define a coherent adjunction in a strict 3-category C as a map Adj(3, 1) → C. We use string diagrams to show that any adjunction in C can be extended to a coherent adjunction in an essentially unique way. The results and their proofs will apply in the context of Gray 3-categories after the string diagram calculus is shown to hold in that context in an upcoming paper. Introduction ============ In this paper, we construct a 3-categorical presentation Adj(3, 1) containing 1-cells *l* and *r* and we define a **coherent adjunction** in a strict 3-category C as a functor Adj(3, 1) → C. We then prove our Main Theorem, stating that any adjunction in C (by which we mean an adjunction in its homotopy 2-category) can be promoted to a coherent adjunction in an essentially unique way. In order to state this Theorem precisely, denote by *θ*(1) the computad consisting of a single 1-cell, so that Map(*θ*(1), C) is the 3-groupoid of 1-morphisms in C, and let Map*L*(*θ*(1), C) be its full 3-subgroupoid whose objects are the left adjoint 1-morphisms in C. The map *E**l* : Map(Adj(3, 1), C) → Map(*θ*(1), C) given by restriction to the 1-cell *l* factors through Map*L*(*θ*(1), C). [Main Theorem] Given a strict 3-category C, the restriction map *E**l* : Map(Adj(3, 1), C) → Map*L*(*θ*(1), C) is a weak equivalence of strict 3-groupoids. The basic idea of the proof is that the map *E**l* is a fibration of 3-groupoids, by the main result of. This allows us to make use of a long exact sequence in homotopy groups to reduce the problem to showing that the homotopy groups of the fibre are trivial and the map is surjective on objects. We then prove this is the case by constructing trivialising morphisms for arbitrary elements of these homotopy groups. We do this one cell at a time, by using the lifting properties of fibrations and using the string diagram calculus developed in,, and for explicit constructions. The restriction to strict 3-categories is a consequence of the fact that we use a string diagram calculus. In an upcoming paper we show that this string diagram calculus holds in Gray 3-categories and then all results in this paper will hold in that setting, with the same proofs. Pullbacks and the long exact sequence for a fibration ----------------------------------------------------- In the proof of the main Theorem, we need to make use of the long exact sequence in homotopy groups corresponding to a fibration of *n*-groupoids. We will also need to use pullbacks of maps of *n*-groupoids along a fibration. We therefore state and prove all the necessary results. These should in principle follow from model theoretic arguments in the folk model structure on strict *n*-categories of. We prefer to give different proofs here for completeness and also because we want proofs that will be applicable in the context of Gray 3-categories and other types of semistrict *n*-categories. Relation to other work ---------------------- We start by defining a 2-categorical presentation Adj(2, 1) and proving the corresponding coherence result for adjunctions in a 2-category. This presentation can be deduced directly from the definition of an adjunction as a pair of 1-morphisms, together with unit and counit 2-morphisms satisfying two relations, known as the snake relations, or triangle identities. The coherence result in this case is essentially equivalent to the well known result that adjoints are unique up to isomorphism. The essential difference between Adj(3, 1) and Adj(2, 1) is the appearance of a **swallowtail relation**, named after the well known singularity. Singularity theory and adjunctions in higher categories are related by the Cobordism Hypothesis (see ). The swallowtail relations were fist introduced in the context of adjunctions in. There they are part of the definition of a *locally adjoint biadjoint pair* in a *strongly bicategory enriched category* (a kind of semistrict 3-category). Note that what is called a biadjoint pair in is what we call an adjunction in the present paper. In that paper, it is proved that given a biadjoint pair one can modify the triangle isomorphisms in such a way that two additional relations (later called swallowtail relations) are satisfied, yielding a locally adjoint biadjoint pair. A string diagram proof of the analogous result for a strict 3-category has appeared in. There is also a formalized string diagram proof in the proof assistant Globular (see ). The swallowtail relations also appear in, in the more general context of *biadjunctions in tricategories*. There they are depicted in terms of pasting diagrams and the complexity of these diagrams is increased by the presence of various morphisms implementing the weak coherence laws which hold in a tricategory. In that paper, it is proved that any any biequivalence in a tricategory extends to a biadjoint biequivalence, satisfying the swallowtail relations. Again, note that what is called a biequivalence in we call here simply an equivalence. What is called a biadjunction in we call here an adjunction satisfying the swallowtail relations, i.e. a map Adj(3, 1) → C. In the author gives a definition of a *coherent adjoint equivalence* between 2-categories. This is in particular an adjunction in the 3-category of 2-categories, and so the swallowtail relations appear in the definition. The composite 3-morphisms whose equality is asserted by these relations are depicted as movies of 2-dimensional string diagrams. This seems to be the first place in the literature where these equations relating the cusp isomorphisms for an adjunction are given the name of swallowtail relations, by analogy with the singularity. In the author proves a coherence result for *duals in monoidal bicategories*. More precisely, they prove that the 2-groupoid of objects in a monoidal bicategory which admit a dual is equivalent to a 2-groupoid of *coherent dual pairs*, which can be seen as the 2-groupoid of maps out of a certain computad, which plays the same role as Adj(3, 1) in the present paper. We can specialize the result in to the case of strict monoidal 2-categories. On the other hand, using the fact that monoidal 2-categories are just 3-categories with one object, with duals corresponding to adjoints, we can also specialize the result in the present paper to the context of strict monoidal 2-categories. The two results on coherence for duals in strict monoidal 2-categories thus obtained are essentially the same. The main advantage of the methods in the present paper is that the proof is made much simpler by the use of string diagrams, a method which we can currently extend to 4-categories. Moreover, the result in the present paper does not follow from the one in, except in the special case where one considers adjunctions in 3-category with only one object. In the authors construct an (∞, 2)-category $\underline{\operatorname{Adj}}$ and prove that the space of functors $\underline{\operatorname{Adj}}\to \operatorname{Cat}\_{\infty}$ is equivalent to the space of adjunctions in Cat∞. Informally, we can think of Adj(3, 1) as a finite presentation for the homotopy 3-category of $\underline{\operatorname{Adj}}$. This seems to be the first place in the literature where coherence for adjunctions in a higher category C is stated in terms of an equivalence of spaces between the space of morphisms in C which admit an adjoint and the space of maps into C out of a category consisting of a free adjunction. The *strictly undulating squiggles* used there are also a kind of string diagram calculus. Future work ----------- The use of string diagrams comes with the limitation of applying only to strict 3-categories. However, we will prove in an upcoming paper that this string diagram calculus is applicable also in Gray 3-categories and therefore the proofs in and the present paper also hold more generally. A string diagram calculus for Gray 3-categories with duals already appears in. The methods used in this paper are also extended to dimension 4 in an upcoming paper, where we construct a 4-categorical presentation Adj(4, 1) and prove an analogous result for adjunctions in 4-categories. We will then use this result to give a new proof of the coherence result for fully dualizable objects in a strict symmetric monoidal 3-category in the author’s PhD Thesis. The cobordism hypothesis allows us to interpret the corresponding presentation as a finite presentation of the 3-dimensional fully extended framed bordism category, although this would require the coherence result to be extended to weak symmetric monoidal 3-categories. Definitions and basic results ============================= We now give some necessary definitions and recall the main result from which we will need in the present paper. Strict *n*-categories --------------------- We think of a strict *n*-category as an algebra over a certain monad *T**n* : gSet*n* → gSet*n* on the category of *n*-globular sets. This is the monad defined in, Chapter 8. Alternatively one can think of a strict *n*-category as a category enriched in strict (*n* − 1)-categories with the cartesian product. Given a strict *n*-category C, we denote by *s*, *t* its source and target maps. Equivalences ------------ In a strict *n*-category, we say that a *k*-morphism *f* : *x* → *y* is an **isomorphism** if there exists another *k*-morphism *f* : *y* → *x* such that *f* ∘ *g* = id*y* and *g* ∘ *f* = id*x*. We also say that *f* is **invertible** and we call *g* its **inverse** (one can show that it is unique). However, we are more interested in a weaker version of this, known as **equivalence**. Let C be a strict *n*-category. An *n*-morphism *f* : *x* → *y* in C is an **equivalence** if it is an isomorphism. When *k* < *n*, a *k*-morphism *f* : *x* → *y* in C is an **equivalence** when there is another *k*-morphism *g* : *y* → *x* and equivalences *f* ∘ *g* → id*y* and *g* ∘ *f* → id*x* in C. We say that *x* is **equivalent** to *y*, and write *x* ≃ *y*, if there is an equivalence *x* → *y*. When *f* : *x* → *y* is an equivalence, we also call it **weakly invertible** and any morphism *g* : *y* → *x* such that *f* ∘ *g* ≃ id*y* and *g* ∘ *f* ≃ id*x* is called a **weak inverse** to *f*. When *f* is a *k*-morphism and an equivalence we also call it a ***k*-equivalence**. An ***n*-groupoid** is an *n*-category all of whose morphisms are equivalences. Finally, we use the following notion of weak equivalence for functors, which coincides with the one in the folk model structure of. A functor *F* : C → D between strict *n*-categories is called **essentially surjective** if for every object *d* ∈ D there exists an object *c* ∈ C and an equivalence *F*(*c*) → *d* in D. A functor *F* : C → D between strict *n*-categories is called a **weak equivalence** if it is essentially surjective and for all objects *c*1, *c*2 ∈ C the induced functor C(*c*1, *c*2) → D(*F*(*c*1), *F*(*c*2)) is a weak equivalence of (*n* − 1)-categories. An *n*-groupoid *G* is called **weakly contractible** if the map *G* →  \*  is a weak equivalence. Adjunctions ----------- An **adjunction** in a strict 2-category C is a pair of 1-morphisms *l* : *X* → *Y* and *r* : *Y* → *X* together with 2-morphisms *u* : id*X* → *r* ∘ *l* and *c* : *l* ∘ *r* → id*Y* called the unit and the counit, which satisfy two standard relations, called zigzag, snake or triangle identities. Let C be a strict *n*-category. We define its **homotopy 2-category** to be the strict 2-category h2(C) obtained by declaring equivalent 2-morphisms to be equal. The following definitions of adjunctions in *n*-categories are adapted from the ones given in for the case of (∞, *n*)-categories. An **adjunction** between 1-morphisms in a strict *n*-category C is a pair of 1-morphisms *l* : *X* → *Y* and *r* : *Y* → *X* together with 2-morphisms *u* : id*X* → *r* ∘ *l* and *c* : *l* ∘ *r* → id*Y* called the unit and the counit, which determine an adjunction in the homotopy 2-category h2(C). This means that an adjunction in a 3-category consists of a pair of 1-morphisms *l* : *X* → *Y* and *r* : *Y* → *X* together with unit and counit 2-morphisms satisfying the usual snake relations or triangle identities up to 3-isomorphism. An adjunction between *k*-morphisms in a strict *n*-category C is an adjunction between 1-morphisms in an appropriate (*n* − *k* + 1)-category of morphisms in C. The following Lemma relating equivalences and adjunctions is well known. [adjeq] Let C be a strict *n*-category, *f* : *x* → *y* a *k*-equivalence in C, *g* : *y* → *x* a weak inverse and *u* : id*x* → *g* ∘ *f* a (*k* + 1)-equivalence. Then there exists a (*k* + 1)-equivalence *c* : *f* ∘ *g* → id*y* such that (*f*, *g*, *u*, *c*) is an adjunction in C. By passing to h2(Hom(*s*(*x*), *t*(*x*))) we can reduce to the case where *n* = 2 and *k* = 1. Now we just need to find *c* : *y* → *x* satisfying the two snake relations. This can be done by using string diagrams, as on the nLab page for adjoint equivalence. Presentations ------------- An ***n*-categorical presentation** is simply a collection of *k*-cells for every *k* ≤ *n* + 1, whose sources and targets are composites of lower dimensional cells. We interpret the (*n* + 1)-cells as relations. Given an *n*-categorical presentation P we denote by *F*(P) the ***n*-category generated by P**. Its *k*-morphisms are arbitrary composites of the *k*-cells in P. Two *n*-morphisms are declared equal when they are related by an (*n* + 1)-cell. We sometimes write P → C to refer to a functor *F*(P) → C. This can be made precise by using the theory of **computads**. See for a detailed treatment of computads and for our simplified exposition of how we use them. We denote by *θ*(*k*) the computad generated by a single *k*-cell, so that functors *θ*(*k*) → C are in canonical bijection with the set of *k*-morphisms in C. String diagrams --------------- In a string diagram calculus for 4-categorical compositions is introduced. The authors introduce the notion of a **signature**, which consists of sets of generating *k*-cells, for each *k* ≤ 5. They then define a *k*-**diagram** over a signature, be a 4-categorical composite of cells. They also introduce **homotopy generators** which are certain cells encoding coherent versions of the interchange laws that hold in strict 4-categories. Finally they define a **signature with homotopy generators** as a signature in which we have specified cells implementing these coherent laws. In we introduced a monad *T**n**D**s* on globular sets encoding the compositional structure of *n*-dimensional string diagrams. Its algebras are called *n*-sesquicategories and they are *n*-globular sets equipped with strictly associative and unital composition and whiskerig operations, but not satisfying the Godement interchange laws. The notion of a signature then coincides with that of a computad for *T**n**D**s*. In an upcoming paper, we will show how one can define a monad *T*3*s**s* by adding to *T*3*D**s* certain operations encoding the homotopy generators. The *T*3*s**s*-algebras are called semistrict 3-categories and they are defined precisely so that the string diagram calculus from applies. We also show that they are the same as Gray 3-categories. We are working on extending this to higher dimensions. In we explain how to interpret diagrams over a 4-signature with homotopy generators as specifying composites in a strict 4-category, by interpreting the homotopy generators as identity morphisms. In the present paper, we will use these string diagrams in the *n* = 3 case, so we include below an informal description of these diagrams. Given a strict 3-category C, we use the string diagram calculus to describe composites of morphisms in any dimension, and to prove identities between composite 3-morphisms. We read odd dimensional diagrams from left to right and even dimensional diagrams from top to bottom. This means the source of an odd dimensional morphism appears on its left and the source of an even dimensional morphism appears above it. We denote the composite of two composable 1-morphisms *f*, *g* by the labelled diagram . Similarly, we can also denote the composite of *n* composable 1-morphisms by a diagram consisting of *n* labeled dots on a line. Given 2-morphisms *α*, *β* such that *t*(*α*) = *s*(*β*), we can denote their composite by the labeled diagram . If *f*, *g* are 1-morphisms such that *t*(*f*) = *s*2(*α*) and *s*(*g*) = *t*2(*α*), we can also denote the whiskering of *α* with *f* or *g* by | | | | | --- | --- | --- | | | or | . | In general, a diagram such as labeled by morphisms in C, subject to compatibility conditions on their sources and targets, determines a composite 2-morphism in C. We can also consider 2-morphisms whose source and target are composites of 1-morphisms. Given composable 1-morphisms *i*, *j* and *f*, *g* we can denote a 2-morphism *η* : *g* ∘ *f* → *j* ∘ *i* by . These can also be composed and whiskered with other morphisms, so we can form general 2-diagrams, which when given a compatible labeling by morphisms in C denote composite 2-morphisms. Here is an example of such a diagram. . Now we come to 3-dimensional diagrams. We denote the composite of two 3-morphisms by a labeling of . The whiskering of a 3-morphism with a 2-morphism corresponds to the diagram | | | | | --- | --- | --- | | | or | . | The whiskering of a 3-morphism with a 1-morphism corresponds to the diagram | | | | | --- | --- | --- | | | or | . | These basic composition operations can be iterated to form 3-diagrams such as , which when given compatible labelings by morphisms in C denote composite 3-morphisms. We can also consider 3-morphisms whose source and target are arbitrary composites of 1 and 2-morphisms in C, which we denote by labeling a diagram such as . We can then compose these to get general 3-diagrams, such as . Notice that the two string diagrams | | | | | --- | --- | --- | | | and | | determine the same composition operation on 2-morphisms in a strict 3-category, as they both correspond to the pasting diagram $$\xymatrix@1{\bullet\rtwocell & \bullet \rtwocell & \bullet}.$$ So we introduce 3-dimensional cells | | | | | --- | --- | --- | | | and | | called interchangers (or type I2 homotopy generators in ) which when labeled by compatible morphisms in C compose to the appropriate identity 3-morphism in C. In a semistrict 3-category the interchangers become isomorphisms instead of identities. Then we need to introduce some equations between 3-diagrams. First there is interchanger cancellation: ccc &  =  & ; &  =  & . Then we have the type I3 homotopy generator ccc &  =  & , and finally we have the type II3 homotopy generators: ccc &  =  & ; &  =  & . Functor categories ------------------ Using the left and right internal Hom from the monoidal biclosed structure on *n*-categories associated to the Crans-Gray tensor product () one can define *n*-categories Fun*l**a**x*(C, D) and Fun*o**p**l**a**x*(C, D) for *n*-categories C and D. One can check that a *k*-morphism in Fun*o**p**l**a**x*(C, D) is a rule that associates to each ℓ-morphism in C a map *θ*(*k*); (ℓ) → D, satisfying certain relations of compatibility with composition. Here *θ*(*k*); (ℓ) is the (*k* + ℓ)-computad explictly constructed in. It can also be described as the Crans-Gray tensor product *θ*(*k*) ⊗ *θ*(ℓ). Similarly, a *k*-morphism in Fun*l**a**x*(C, D) is a rule that associates to each ℓ-morphism in C a map *θ*(ℓ); (*k*) → D. One can then define the *n*-category Fun(C, D) as the subcategory of Fun*o**p**l**a**x*(C, D) consisting of those *k*-morphisms which associate to an ℓ-morphism in C a (*k* + ℓ)-equivalence in D, for *k*, ℓ ≥ 1. The *k*-morphisms in Fun(C, D) are called *k*-transfors. For *k* = 1, 2, 3 they are also called natural transformations, modifications and perturbations, respectively (see the nLab page “transfor” for a discussion of this terminology). Similarly, $\overline{\operatorname{Fun}}({\mathcal{C}},{\mathcal{D}})$ is the analogous subcategory of Fun*l**a**x*(C, D). Finally Map(C, D) and $\overline{\operatorname{Map}}({\mathcal{C}},{\mathcal{D}})$ are defined as the underlying subgroupoids in Fun(C, D) and $\overline{\operatorname{Fun}}({\mathcal{C}},{\mathcal{D}})$. Given a presentation P we write Fun(P, D) instead of Fun(*F*(P), D) and similarly for Map. In we gave an explicit description of Fun(C, D) in terms of string diagrams, when C and D are 4-categories. We include here, for convenience, the string diagram description of *k*-transfors between 3-categories. ### Natural transformations Given functors *F*, *G* : C → D, a **natural transformation**, or 1-transfor, *α* : *F* → *G* consists of the following data. We use and to denote the images of objects and morphisms under *F* and *G*, respectively. 1. For each object *Y* ∈ C a 1-morphism *α**Y* : *F*(*Y*) → *G*(*Y*): | | | | | --- | --- | --- | | *Y* =  |  ↦  | *α**Y* =  | ; 2. For each 1-morphism *g* : *X* → *Y* in C an invertible 2-morphism *α**g* in D: | | | | | --- | --- | --- | | *g* =  |  ↦  | *α**g* =   :   →  | ; 3. For each 2-morphism *ζ* : *f* → *g* in C an invertible 3-morphism *α**ζ* in D: | | | | | --- | --- | --- | | *ζ* =  |  ↦  | *α**ζ* =   :   →  | ; 4. For each 3-morphism *t* : *η* → *ζ* in C a relation *α**t* in D: | | | | | --- | --- | --- | | *t* =  |  ↦  |  :   =  | . This data is subject to relations equating the values of *α* on composite morphisms with the corresponding composites of values of *α* given by stacking diagrams. ### Modifications Given natural transformations *α*, *β* : *F* → *G*, a **modification**, or 2-transfor, *m* : *α* → *β* consists of the following data. We use for *α* and for *β*. 1. For each object *Y* ∈ C a 2-morphism *m**Y* : *α**Y* → *β**Y* in D: | | | | | --- | --- | --- | | *Y* =  |  ↦  | *m**Y* =   :   →  | ; 2. For each 1-morphism *g* : *X* → *Y* in C an invertible 3-morphism *m**g* in D: | | | | | --- | --- | --- | | *g* =  |  ↦  | *m**g* =   :   →  ; | 3. For each 2-morphism $\zeta=\includegraphics[scale=1.5,align=c]{morphisms/eta.pdf}:f\to g$ in C a relation *m**ζ* in D:  :   =  . This data is subject to relations equating the values of *m* on composite morphisms with the corresponding composites of values of *m* given by stacking diagrams. ### Perturbations Given modifications *l*, *m* : *α* → *β*, a **perturbation**, or 3-transfor, A : *l* → *m* consists of the following data. We use for *l* and for *m*. 1. For each object *Y* ∈ C a 3-morphism A*Y* : *l**Y* → *m**Y* in D: | | | | | --- | --- | --- | | *Y* =  |  ↦  | A*Y* =   :   →  | ; 2. For each 1-morphism $g=\includegraphics[scale=1.2,align=c]{morphisms/f.pdf}:X\to Y$ in C a relation A*g* in D:  :   =  . This data is subject to relations equating the values of A on composite morphisms with the corresponding composites of values of A given by stacking diagrams. Fibrations ---------- A map of *n*-groupoids *p* : *E* → *B* is called a **fibration** if, given any *k*-morphism *f* : *x* → *y* in *B* and a lift *x̃* of its source along *p*, there exists a lift *f̃* : *x̃* → *ỹ* of *f* along *p*. Given *n*-groupoids *E* and *B*, it is natural to ask whether a map *f* : *E* → *B* is a fibration in the sense of this paper if and only if it is is a fibration in the folk model structure on strict *n*-categories defined in. This is plausible, since the generating trivial cofibrations in that model structure are the inclusions of a free *k*-cell as the source a free fully coherent (*k* + 1)-equivalence (*j**k* : **O***k* → **P***k* in the notation there). One would therefore have to show that any morphism in an *n*-groupoid can be extended to a fully coherent equivalence. We have not tried to give a proof of this fact. Note that in the authors construct a model structure on the category of strict *n*-groupoids. However, they define a strict *n*-groupoid as a strict *n*-category where every *k*-morphism has a strict inverse, rather than a weak one. See also. [from ][fibration] Let C a strict 4-category, P a presentation and Q another presentation, obtained by adding a finite number of cells to P. Then the restriction map Map(Q, C) → Map(P, C) is a fibration of 4-groupoids. In it is shown that the category of strict *n*-categories equipped with the Crans-Gray tensor product and the folk model structure is a biclosed monoidal model category. This implies that the internal Hom functors Fun*l**a**x*( − , D) and Fun*o**p**l**a**x*( − , D) send cofibrations to fibrations. From one can deduce that an inclusion of presentations induces a cofibration between the presented *n*-categories. Therefore one can deduce that the restriction map on (op)lax functor categories is a fibration in the folk model structure. Note also that in it is proved that Map(C, D) is the underlying *n*-groupoid of Fun*o**p**l**a**x*(C, D). One might then be able to prove that a folk fib
arxiv_0000257
Security of round-robin differential-phase-shift quantum key distribution protocol with correlated light sources ================================================================================================================ Among various quantum key distribution (QKD) protocols, the round-robin differential-phase- shift (RRDPS) protocol has a unique feature that its security is guaranteed without monitoring the signal disturbance. Moreover, this protocol has a remarkable property of being robust against source imperfections assuming that the emitted pulses are independent. Unfortunately, some experiments with high-speed QKD systems confirmed the violation of the independence due to pulse correlations, and therefore the lack of a security proof with taking into account this effect is an obstacle for guaranteeing the implementation security. In this paper, we show that the RRDPS protocol is secure against any source imperfections by establishing a security proof with the pulse correlations. The proof is simple in the sense that we make only three experimentally simple assumptions on the source. Our numerical simulation based on the proof shows that the long-range pulse correlation does not cause a significant impact on the key rate, which reveals another striking feature of the RRDPS protocol. Our security proof is thus effective and applicable to wide range of practical sources and paves the way to realize truly secure QKD in high-speed systems. Introduction ============ Quantum key distribution (QKD) offers information-theoretically secure communication between two distant parties, Alice and Bob . To prove the security of QKD, we suppose mathematical models on the users’ devices. If these models are discrepant from the physical properties of the actual devices, the security of actual QKD systems cannot be guaranteed. Hence, it is important to establish a security proof by reflecting the actual properties of the devices as accurately as possible. One of the serious imperfections in the source device is the pulse correlation, which becomes a problem especially in high-speed QKD systems. Due to experimental imperfections, signal modulation for each emitted pulse affects the modulation of subsequent pulses. This means that information of Alice’s setting choices, such as a bit choice and an intensity choice of the current pulse, is propagated to the subsequent pulses. Indeed, in , it is experimentally observed that the intensities are correlated among the adjacent pulses with GHz-clock QKD system. Even though tremendous efforts have been made so far to accommodate imperfections in the source into the security proofs (see e.g. ), such pulse correlation violates the assumption of most security proofs. The exceptions are the results in , where the intensity correlations between the nearest-neighbor pulses and arbitrary intensity correlations are respectively accommodated in  and , and the pulse correlation in terms of Alice’s bit choice information is taken into account in . Note that the result in  provides a security proof incorporating the correlation among the emitted pulses, but this correlation is assumed to be independent of Alice’s setting choices. Among various QKD protocols, the round-robin differential-phase-shift (RRDPS) protocol  is one of the promising protocols, which has a unique feature that its security is guaranteed without monitoring the signal disturbance such as the bit error rate. Thanks to this property, the RRDPS protocol has a better tolerance on the bit error rate than the other protocols and the fast convergence in the finite key regime. For this protocol, a number of works have been done theoretically   and experimentally . Moreover, the RRDPS protocol is shown to be robust against most of source imperfections , which is a remarkable property. However, this robustness is maintained only when the pulses emitted from the source are independent, which is also assumed in all the previous security proofs of the RRDPS protocol  . Unfortunately, some experiment  confirms the violation of this independence due to the pulse correlations, and hence the lack of a security proof with taking into account this effect is an obstacle for guaranteeing the implementation security of the RRDPS protocol. In this paper, we show that the RRDPS protocol is secure against any source imperfections by establishing the security proof with the pulse correlations. We adopt a general correlation model in which a bit information Alice selected is encoded not only on the current pulse but also on the subsequence pulses. In our security proof, we make only three experimentally simple source assumptions, which would be useful for simple source characterization. More specifically, we assume the length of the correlation among the emitted pulses, the fidelity between two emitted states when the correlation patterns are different, and the lower bounds on the vacuum emission probabilities of each emitted pulse. It is remarkable that no other detailed characterization is required for the source and any side-channels in the source can be accommodated. In the security proof, we exploit the reference technique  that is a general framework of a security proof to deal with source imperfections, including the pulse correlation. As a result of our security proof, we show that the long-range pulse correlation does not cause a significant impact on the key rate under a realistic experimental setting, which reveals another striking property of the RRDPS protocol. The paper is organized as follows. In section [sec:idea], we explain how to apply the reference technique to deal with the pulse correlation in the RRDPS protocol and why our protocol employs multiple interferometers in Bob’s measurement depending on the length of the correlation. In sections [sec:ass] and [sec:pro], we describe the assumptions that we make on Alice and Bob’s devices and introduce the protocol considered, respectively. In section [sec:security], we first summarize the security proof and state our main result about the amount of the privacy amplification, followed by providing its proof. Then in section [sec:simul], we present our numerical simulation results for the key generation rate and show that the long-range pulse correlation does not cause a significant impact on the key rate. Finally, in section [sec:dis], we wrap up our security proof and refer to some open problems. The idea to apply reference technique to RRDPS protocol ======================================================= Here, we explain how to apply the reference technique (RT)  to deal with the pulse correlations in the RRDPS protocol. In the original RRDPS protocol , Alice sends a block of pulses from which Alice and Bob try to extract one-bit key using a variable-delay interferometer. On the other hand, in our protocol with the correlation length of *l**c*, Alice and Bob divide each emitted block into (*l**c* + 1) groups and try to extract (*l**c* + 1)-bit key from each of the groups. In so doing, Bob employs (*l**c* + 1) variable-delay interferometers so that the pulses belonging to the same group interfere. In other terms, our protocol can be regarded as running (*l**c* + 1) RRDPS protocols simultaneously. We adopt such a modification for enabling us to apply the RT. Below, we explain why the modification is needed. In the RT, we consider an entanglement-based picture where each $k^{\U{th}}$ emitted pulse is entangled with the qubit. To discuss the security of the $k^{\U{th}}$ bit *j**k* that is obtained by measuring the qubit in the *Z*-basis (whose eigenstates are denoted by {∣0⟩, ∣1⟩}), each qubit is measured in the *X*-basis (whose eigenstates are denoted by ${| \pm \rangle}:=({| 0 \rangle}\pm{| 1 \rangle})/\sqrt{2}$). Since how well Alice can predict the *X*-basis measurement outcome is directly related to the amount of privacy amplification , this estimation is crucial in proving the security. The RT provides a method for its estimation under the pulse correlation, but one vital point is that the set of the $k^{\U{th}}$ emitted states must be fixed just before the emission of the $k^{\U{th}}$ pulse. To fix the set, we consider to measure the previous *l**c* qubits in the *Z*-basis. For instance, if *l**c* = 1, to discuss the security of the even-indexed bit *j*2*k*, the previous odd-indexed qubit must be measured in the *Z*-basis. These *Z*-basis measurements of the previous *l**c* qubits conflict the original security proof  of the RRDPS protocol. This is because to estimate the aforementioned *X*-basis statistics, all the qubits in the block are measured in the *X*-basis since any two pulses in the block can interfere in Bob’s measurement. To avoid this conflict, for instance if *l**c* = 1, we modify the RRDPS protocol such that the even-indexed and the odd-indexed pulses interfere separately, and the secret keys are separately extracted from each interference using two interferometers. In doing so, when we discuss the security of the even-indexed bit, only the even-indexed qubits in the block are measured in the *X*-basis while the odd-indexed ones are measured in the *Z*-basis. Hence, thanks to this modification, we can realize both the *X*- and the *Z*-basis measurements at the same time. By generalizing this idea to any *l**c* ≥ 2, if we use (*l**c* + 1) interferometers and consider the protocol that extracts the keys from each interferometer, these two basis measurements become compatible, and hence we can apply the RT for proving the security. We remark that when *l**c* = 1, the security proofs for the even- and the odd-indexed keys are mutually exclusive in the sense that the proof for the odd-indexed (even-indexed) key provides us with how much privacy amplification needs to be applied to the odd-indexed (even-indexed) key, but it does not offer the security of the even-indexed (odd-indexed) key. Fortunately, thanks to the universal composability  of the two security proofs, the amount of privacy amplification to generate the key both from the odd- and the even-indexed bits simultaneously is equivalent to those obtained from the mutually exclusive proofs. This argument holds for any *l**c* ≥ 2 due to the universal composability of the (*l**c* + 1) security proofs. Assumptions on the devices ========================== [t] [fig:device] Before describing the protocol, we summarize the assumptions we make on the source and the receiver. Figure [fig:device] depicts the setups of Alice and Bob’s devices employed in the protocol. Throughout the paper, we adopt the following notations. Let *N* be the total number of pulses sent by Alice in the protocol, and for any symbol *A*, we define $\bm{A}\_i:=A\_i,A\_{i-1},...,A\_1$ with *i* ∈ N. First, we list up the assumptions on Alice’s source as follows. As long as the following assumptions hold, any side-channel in the source can be accommodated. 1. For each $k^{\U{th}}$ emitted pulse (1 ≤ *k* ≤ *N*), Alice chooses a random bit *j**k* ∈ {0, 1}. The bit *j**k* is encoded not only to the $k^{\U{th}}$ emitted pulse but also to the subsequence pulses. Let *l**c* ≥ 0 be the number of pulses that the information *j**k* is propagated, and we call *l**c* correlation length. Let ${| \psi\_{j\_k|j\_{k-1},...,j\_1} \rangle}\_{B\_k}={| \psi\_{j\_k|\bm{j}\_{k-1}} \rangle}\_{B\_k}$ be the state of the $k^{\U{th}}$ emitted signal to Bob, where the subscripts *j**k* − 1, ..., *j*1 indicate the dependency of the previous information *j**k* − 1, ..., *j*1. Note that *j*0 represents having no condition. In defining the state ${| \psi\_{j\_k|\bm{j}\_{k-1}} \rangle}\_{B\_k}$, we have the freedom in the choice of its global phase. Throughout this paper, we fix the global phase of the state ${| \psi\_{j\_k|\bm{j}\_{k-1}} \rangle}\_{B\_k}$ such that the coefficient of the vacuum state is non-negative. In this paper, we consider the case where Alice employs *L̃* pulses contained in a single-block, where *L̃* is set to be (*l**c* + 1)*L* for *L* ≥ 3. We call *L̃* pulses of systems *B*(*i* − 1)*L̃* + 1, ..., *B**i**L̃* the $i^{\U{th}}$ block. 2. When *l**c* ≥ 1, for any *k* (1 ≤ *k* ≤ *N*) and any *ζ* (*k* + 1 ≤ *ζ* ≤ min{*N*, *k* + *l**c*}), the following parameter *ε**ζ* − *k* ≥ 0 characterizing the correlation is available. $$\begin{aligned} &\left|{\left\langle \psi\_{j\_{\zeta}|j\_{\zeta-1},...,j\_{k+1},j\_k=1,\bm{j}\_{k-1}}|\psi\_{j\_{\zeta}| j\_{\zeta-1},...,j\_{k+1},j\_k=0,\bm{j}\_{k-1}} \right\rangle}\right|^2{\notag}\\ &\ge1-\epsilon\_{\zeta-k}. \label{secII:LRASS1}\end{aligned}$$ Note that the difference between both states in the inner product is in the value of *j**k*. The parameter *ε**ζ* − *k* depends only on the difference *ζ* − *k*, but it is independent of *j**ζ*, *j**ζ* − 1, ..., *j**k* + 1, *j**k* − 1, ..., *j*1. Note that by the assumption (A1), if *ζ* ≥ *k* + *l**c* + 1, the left hand size of Eq. ([secII:LRASS1]) is equal to 1 since the bit information *j**k* does not propagate to the $\zeta^{\U{th}}$ state. 3. For any *k* (1 ≤ *k* ≤ *N*) and any *j**k* ∈ {0, 1}, the squared overlap of the vacuum state ${| \U{vac} \rangle}$ and the state ∣*ψ**j**k*∣*j**k* − 1, ..., *j*1⟩ is lower-bounded by $p^{\U{L}}\_{\vac,j\_k}$ regardless of *k* and the previous choices of *j**k* − 1, ..., *j*1. Mathematically, we suppose that $$\begin{aligned} \tr\left[{| \vac \rangle}\langle{\vac}|\psi\_{j\_k|\bm{j}\_{k-1}}\rangle{\langle \psi\_{j\_k|\bm{j} \_{k-1}} |}\right] \ge p^{\U{L}}\_{\vac,j\_k}. \label{secII:vacP}\end{aligned}$$ Providing the method for experimentally measuring the bounds in Eqs. ([secII:LRASS1]) and ([secII:vacP]) is beyond the scope of this paper. Note that the assumption (A2) can be alternatively expressed by using $p^{\U{L}}\_{\vac,0}$ and $p^{\U{L}}\_{\vac,1}$ in Eq. ([secII:vacP]) because as we will show in Appendix [sec:Appe1], the inner product in Eq. ([secII:LRASS1]) can be lower-bounded as $$\begin{aligned} &\left|{\left\langle \psi\_{j\_{\zeta}|j\_{\zeta-1},...,j\_{k+1},j\_k=1,\bm{j}\_{k-1}}|\psi\_{j\_{\zeta}| j\_{\zeta-1},...,j\_{k+1},j\_k=0,\bm{j}\_{k-1}} \right\rangle}\right|{\notag}\\ &\ge \begin{cases} 2p^{\U{L}}\_{\vac,j\_{\zeta}}-1&\U{if}~2p^{\U{L}}\_{\vac,j\_{\zeta}}\ge1\\ 0&\U{otherwise}. \end{cases} \label{eq:onlyvac}\end{aligned}$$ Next, we list up the assumptions on Bob’s measurement. As explained in Sec. [sec:idea], we consider that Alice and Bob try to extract (*l**c* + 1) secret keys i.e., they divide each block into (*l**c* + 1) groups and try to generate a one bit key from each of the groups. In so doing, Bob employs (*l**c* + 1) variable-delay interferometers with (*L* − 1) delays followed by two detectors [1](#fn1). To explain this more clearly, we classify the set {(*i* − 1)*L̃* + 1, (*i* − 1)*L̃* + 2, ..., *i**L̃*} of the positions of the emitted pulses associated with the $i^{\U{th}}$ block into (*l**c* + 1) groups, and the $w^{\U{th}}$ group (*w* ∈ {1, 2..., *l**c* + 1}) for the $i^{\U{th}}$ block is defined by $$\begin{aligned} \mathcal{G}\_{w}^{(i)}:=\{(l\_c+1)(m-1)+w+(i-1)\tilde{L}\}\_{m=1}^L. \label{eq:wth-ith}\end{aligned}$$ Note that $w^{\U{th}}$ group G*w*(*i*) is constructed by picking up all the $k^{\U{th}}$ pulses from the $i^{\U{th}}$ block with *k* ≡ *w* in modulo (*l**c* + 1). For instance, if *i* = 1, *l**c* = 2, *L* = 10 and *L̃* = 30, G*w* = 1(1) = {1, 4, 7, ..., 28}, G*w* = 2(1) = {2, 5, 8, ..., 29} and G*w* = 3(1) = {3, 6, 9, ..., 30}. Then, Bob prepares (*l**c* + 1) interferometers, and for each $i^{\U{th}}$ block, he feeds the incoming pulses of systems {*B**k*}*k* ∈ G*w*(*i*) to the $w^{\U{th}}$ interferometer. 1. Bob uses an active optical splitter with one-input and (*l**c* + 1)-output to feed the pulses in the $i^{\U{th}}$ block into the (*l**c* + 1) interferometers. This splitter actively sorts the incoming pulses to an appropriate interferometer, where the $k^{\U{th}}$ pulse with *k* ∈ G*w*(*i*) is fed to the $w^{\U{th}}$ interferometer. 2. Followed by the active optical splitter, Bob employs the (*l**c* + 1) variable-delay interferometers with two 50:50 beam splitters (BSs), where the delay of the interferometer is chosen uniformly at random from a set {1, 2, ..., *L* − 1}. When *r*-bit delay (*r* ∈ {1, 2, ..., *L* − 1}) is chosen in the interferometer, two pulses that are *r*(*l**c* + 1)-pulses apart in terms of the pulses Alice emitted interfere. 3. After the interferometer, the pulses are detected at time slots 1 through *L* + *r* by two photon-number-resolving (PNR) detectors, which discriminate the vacuum, a single-photon, and two or more photons of a specific optical mode. Each of the detectors is associated to bit values 0 and 1, respectively. We suppose that the quantum efficiencies and dark countings are the same for both detectors. 4. We suppose that there are no side-channels in Bob’s measurement device. Protocol ======== In this section, we describe the actual protocol of the RRDPS protocol under the pulse correlations in the source device. Let $N\_{\U{em}}$ be the number of emitted blocks sent by Alice, and the total number of pulses sent by Alice is $N=N\_{\U{em}}\tilde{L}$. As we will see below, our protocol can be regarded as running (*l**c* + 1) RRDPS protocols simultaneously, each of which employs a block containing *L* pulses. More specifically, our protocol runs as follows. In the description, $|\bm{z}|$ denotes the length of the bit sequence $\bm{z}$. 1. Alice and Bob respectively repeat steps 2 and 3 for $i=1,...,N\_{\U{em}}$. 2. Alice chooses a sequence of random bits *j*(*i* − 1)*L̃* + 1, ..., *j**i**L̃* ∈ {0, 1}*L̃*, and sends Bob the pulses in the following state through the quantum channel: $$\begin{aligned} \bigotimes\_{k=(i-1)\tilde{L}+1}^{i\tilde{L}}{| \psi\_{j\_k|j\_{k-1},...,j\_1} \rangle}\_{B\_k}. \label{eq:emittedActual}\end{aligned}$$ 3. By the active optical splitter with one-input and (*l**c* + 1)-output, the pulses in the $i^{\U{th}} $ block are split to feed into the (*l**c* + 1) variable-delay interferometers. Among the pulses in the $i^{\U{th}}$ block, the $k^{\U{th}}$ pulse with *k* ∈ G*w*(*i*) is fed to the $w^{\U{th}}$ interferometer. Bob executes the following for *w* = 1, ..., *l**c* + 1. At the $w^{\U{th}}$ interferometer, Bob randomly selects the delay *r* ∈ {1, 2, ..., *L* − 1}, splits *L* incoming pulses into two trains of pulses using a 50:50 BS, and shifts backwards only one of the two trains by *r*. Recall that the time of a single shift is equal to (*l**c* + 1)-times as long as the interval of the neighboring emitted pulses. Then, Bob lets each of the first *L* − *r* pulses in the shifted train interfere with each of the last *L* − *r* pulses in the other train with the other 50:50 BS, and detects photons with the two PNR detectors at time slots 1 through *L* + *r*. 1. When Bob detects exactly one photon among the $(r+1)^{\U{th}}$ to the $L^{\U{th}}$ time slots and observes no detection at the other time slots, he records a sifted key bit *z**B*, *i*(*w*) ∈ {0, 1} depending on which detector reported the single photon. He also records the unordered pair {*u**i*(*w*), *v**i*(*w*)}, which are the positions of the pulse pair that resulted in the successful detection (*u**i*(*w*), *v**i*(*w*) ∈ {1, 2, ..., *L*}, ∣*u**i*(*w*) − *v**i*(*w*)∣ = *r*). He announces ``success" and {*u**i*(*w*), *v**i*(*w*)} over the classical channel. 2. In all the cases other than (a), Bob announces ``failure" and *w* through the classical channel. 4. Bob executes the following for *w* = 1, ..., *l**c* + 1. Let $N^{(w)}\_{\U{suc}}$ be the number of success blocks observed at the $w^{\U{th}}$ interferometer. For these blocks, Bob defines his $w^{\U{th}}$ type sifted key $\bm{z}^{(w)}\_{B}$ by concatenating his sifted key bits *z**B*, *i*(*w*) for $i\in\mathcal{B}^{(w)}\_{{\U{suc}}}$. Here, the set $\mathcal{B}^{(w)}\_{{\U{suc}}}$ is composed of the block-index *i* where the pulses whose indices in the set G*w*(*i*) result in the successful detection. 5. Alice executes the following for *w* = 1, ..., *l**c* + 1. Alice calculates her sifted key bit *z**A*, *i*(*w*) = *j**k*1 ⊕ *j**k*2 for $i\in \mathcal{B}^{(w)}\_{{\U{suc}}}$ with *k*1 and *k*2 being the *u**i*(*w*)-th and the *v**i*(*w*)-th elements of G*w*(*i*), and defines her $w^{\U{th}}$ type sifted key $\bm{z}^{(w)}\_A$ by concatenating her raw key bits *z**A*, *i*(*w*) for $i\in\mathcal{B}^{(w)}\_{{\U{suc}}}$. 6. Bob corrects the bit errors in $\bm{z}\_B:=(\bm{z}^{(1)}\_B,...,\bm{z}^{(l\_c+1)}\_B)$ to make it coincide with $\bm{z}\_A:=(\bm{z}^{(1)}\_A,...,\bm{z}^{(l\_c+1)}\_A)$ by sacrificing $|\bm{z}\_A|f\_{\U{EC}}$ bits of encrypted public communication from Alice by consuming the same length of a pre-shared secret key. 7. Alice and Bob executes the following for *w* = 1, ..., *l**c* + 1. For each $w^{\U{th}}$ type reconciled key, Alice and Bob conduct privacy amplification by shortening their keys by $|\bm{z}^{(w)} \_A|f^{(w)}\_{\U{PA}}$ to obtain the final keys. In this paper, we only consider the secret key rate in the asymptotic limit of an infinite sifted key length. We consider the asymptotic limit of large $N\_{\U{em}}$ while the following observed parameters are fixed: $$\begin{aligned} 0\le Q^{(w)}:=\frac{N^{(w)}\_{\U{suc}}}{N\_{\U{em}}}\le1. \label{eq:Qw}\end{aligned}$$ Note that $f\_{\U{EC}}$ in step 6 is determined as a function of the bit error rate $e\_{\U{bit}}$ in $\bm{z}\_A$ and $\bm{z}\_B$, where $e\_{\U{bit}}$ can be estimated by random sampling whose cost is negligible in the asymptotic limit. Also, the fraction of privacy amplification $f^{(w)}\_{\U{PA}}$ in step 7 is determined by the experimentally available observables *Q*(*w*) in Eq. ([eq:Qw]), {*ε**d*}*d* = 1*l**c* in Eq. ([secII:LRASS1]), $p^{\U{L}} \_{\vac,0}$ and $p^{\U{L}}\_{\vac,1}$ in Eq. ([secII:vacP]), whose explicit form is given the next section. Security proof ============== Summary of security proof ------------------------- Here, we summarize the result of the security proof of the protocol described above and determine the amount of privacy amplification $|\bm{z}^{(w)}\_A|f^{(w)}\_{\U{PA}}$ for the $w^{\U{th}}$ type sifted key in the asymptotic limit. As will be explained in this section, our security proof is based on the complementarity scenario  in which estimation of an upper bound on the phase error rate assures the security. The main result is this upper bound, which is given in Theorem [th1], and we provide its derivation in Sec. [securityproof]. Here, we outline the crux of the discussions. The difficulty of our phase error rate estimation comes from the correlations among the emitted pulses that have not been accommodated in the previous security proofs of the RRDPS protocol  . We solve this problem by exploiting the *reference technique* established in . This is a technique that simplifies the estimation of the phase error rate when the actually employed states are close to the ones whose formula associated to the phase error rate is easily derived. In this technique, we consider reference states, which are fictitious states that are not prepared in the protocol but close to the actual state. The key intuition is rather simple; when the reference states and the actual states are close, the deviation between probabilities associated to the reference states and those associated to the actual states should not be large. Therefore, we can obtain the phase error rate formula for the actual states by slightly modifying the formula for the reference states. We emphasize that Alice does not need to generate the reference states in the protocol, and they are purely a mathematical tool for phase error rate estimation. In particular, we choose the reference states regarding the $k^{\U{th}}$ emitted pulse such that the information *j**k* is *only* encoded to system *B**k* (see Eq. ([defPHIGC]) for the explicit formula). By exploiting this property, it is simple to obtain the probabilities for the reference states, which will be given by *T* in Eq. ([cond1]). Depending on the fidelity between the actual and reference states, which will be given by *S* in Eq. ([cond2]), by slightly modifying the relationship for the reference states, we finally obtain the target probability with the actual ones. In the rest of this section, we first explain the structure of the security proof, define the parameters that are needed to present the main result, and then describe Theorem [th1]. For the security proof with complementarity, we consider alternative entanglement-based procedures for Alice’s state preparation at step 2 and calculation of her raw key bit *z**A*, *i*(*w*) at step 5. These alternative procedures can be employed to prove the security of the actual protocol because the states sent to an eavesdropper (Eve), Bob’s measurement, and the final key are identical to those in the actual protocol. Also, Bob’s public announcement of the unordered pair {*u**i*(*w*), *v**i*(*w*)} in the actual protocol is identical to the one in the alternative protocol. As for Alice’s state preparation at step 2, she alternatively prepares *N* auxiliary qubits in systems $\bm{A}\_N$, which remain at Alice’s laboratory during the whole protocol, and the *N* pulses in systems $\bm{B}\_N$ to be sent, in the following state $$\begin{aligned} &{| \Psi \rangle}\_{\bm{A}\_N\bm{B}\_N}{\notag}\\ &:=\frac{1}{\sqrt{2^N}} \sum^1\_{j\_N=0}\cdots\sum^1\_{j\_1=0}\bigotimes\_{k=1}^N{| j\_k \rangle}\_{A\_k} e^{\U{i}\theta\_{j\_k|\bm{j}\_{k-1}}} {| \psi\_{j\_k|\bm{j}\_{k-1}} \rangle}\_{B\_k}. \label{eq:entbased}\end{aligned}$$ Here, the phase factors $e^{\U{i}\theta\_{j\_k|\bm{j}\_{k-1}}}$ can be chosen arbitrary because from Eve’s perspective, the states of system *B**k* in Eqs. ([eq:emittedActual]) and ([eq:entbased]) are equivalent. However, these factors must be adequately chosen to apply the reference technique for each $w^{\U{th}}$ type sifted key, which will be explained in Sec. [app:dec]. As for calculation of the sifted key bit *z**A*, *i*(*w*) = *j**k*1 ⊕ *j**k*2 at step 5, this bit can be alternatively extracted by applying the controlled-not (CNOT) gate (defined on the *Z*-basis) on the $k\_1^{\U{th}}$ and $k\_2^{\U{th}}$ auxiliary qubits of systems *A**k*1 and *A**k*2 with the $k\_1^{\U{th}}$ one being the control and the $k\_2^{\U{th}}$ one being the target followed by measuring the $k\_2^{\U{th}}$ auxiliary qubit in the *Z*-basis to obtain *z**A*, *i*(*w*). In the complementarity scenario, the discussion of the security of the key $\bm{z}\_{A}^{(w)}$ is equivalent to consider a virtual scenario of how well Alice can predict the outcome of the measurement complementary to the one to obtain *z**A*, *i*(*w*). In particular, we take the *X*-basis measurement as the complementary basis, and we need to quantify how well Alice can predict its outcome *x**k*2 ∈ { + ,  − } on system *A**k*2. As for Bob, instead of aiming at learning *z**A*, *i*(*w*), he performs the alternative measurement that determines which of the $k^{\U{th}}$ pulse in the group G*w*(*i*) contains the single-photon. This measurement is complementary to the one for obtaining his sifted key bit *z**B*, *i*(*w*). With this alternative measurement, Bob announces the pair {*u**i*(*w*), *v**i*(*w*)} such that the first index *u**i*(*w*) corresponds to the location of the single-photon and the second index *v**i*(*w*) is chosen uniformly at random from the set {1, 2..., *i* − 1, *i* + 1, *i* + 2..., *L*} [2](#fn2). Hence, in this virtual scenario, Alice’s task is to predict the outcome *x**k*2 where *k*2 is chosen uniformly at random from the group G*w*(*i*) except for *k*1. We define the occurrence of *phase error* to be the case where Alice fails in her prediction of the outcome *x**k*2. Let $N\_{\U{ph}}^{(w)}$ denote the number of phase errors of the $w^{\U{th}}$ type sifted key among $|\bm{z}\_{A}^{(w)}|$ trials. Suppose that the upper bound $N\_{\U{ph}}^{(w),\U{U}}$ on $N\_{\U{ph}}^{(w)}$ is obtained as a function of the experimentally available observables *Q*(*w*) in Eq. ([eq:Qw]), {*ε**d*}*d* = 1*l**c* in Eq. ([secII:LRASS1]), $p^{\U{L}}\_{\vac,0}$ and $p^{\U{L}}\_{\vac,1}$ in Eq. ([secII:vacP]). In this case, in the asymptotic limit, a sufficient fraction of privacy amplification is given by  $$\begin{aligned} f\_{\U{PA}}^{(w)}=h\left(N\_{\U{ph}}^{(w),\U{U}}/N^{(w)}\_{\U{suc}}\right),\end{aligned}$$ where *h*(*x*) is defined by *h*(*x*) =  − *x*log2*x* − (1 − *x*)log2(1 − *x*) for 0 ≤ *x* ≤ 0.5 and *h*(*x*) = 1 for *x* > 0.5. Our main result, Theorem [th1], derives the upper bound $e^{(w),\U{U}}\_{\U{ph}}$ on the phase error rate $e^{(w)}\_{\U{ph}}:=N\_{\U{ph}}^{(w)}/N^{(w)}\_{\U{suc}}$ with *Q*(*w*), {*ε**d*}*d* = 1*l**c*, $p^{\U{L}}\_{\vac,0}$ and $p^{\U{L}}\_{\vac,1}$ (see Sec. [securityproof] for the proof). [th1] In the asymptotic limit of large key length of the $w^{\U{th}}$ type sifted key $|\bm{z}^{(w)}\_A|$, the upper bound on the phase error rate for the $w^{\U{th}}$ type sifted key of the RRDPS protocol is given by $$\begin{aligned} e^{(w),\U{U}}\_{\U{ph}}=\sum\_{s=0}^{L-2}\frac{1}{L-1}\min\left\{\frac{\nu(L,s,C)}{Q^{(w)}},1\right\}. \label{eq:resulteph}\end{aligned}$$ Here, function *ν*(*L*, *s*, *C*) is defined by $$\begin{aligned} \nu(L,s,C):=\sum\_{y=s+1}^L\binom{L}{y}C^y(1-C)^{L-y}, {\notag}\end{aligned}$$ where *C* = *g*(*T*, *S*) if *T* ≤ *S*2 and *C* = 1 otherwise with $T:=1-\left(\sqrt{p\_{\vac,0}^{\U{L}}}+\sqrt{p\_{\vac,1}^{\U{L}}}\right)^2/4$, $S:=\left(1+\prod\_{d=1}^{l\_c}\sqrt{1-\epsilon\_d}\right)/2$ for *l**c* ≥ 1 and S=1 for *l**c* = 0, and $g(x,y):=x+(1-y^2)(1-2x)+2y\sqrt{(1-y^2)x(1-x)}$. We remark that once characterizations of the source device are completed (i.e., {*ε**d*}*d* = 1*l**c*, $p^{\U{L}}\_{\vac,0}$ and $p^{\U{L}}\_{\vac,1}$ are obtained), *C* becomes a constant. Theorem [th1] reveals that as the correlation length *l**c* gets larger, *ν*(*L*, *s*, *C*) in the expression of the phase error rate in Eq. ([eq:resulteph]) generally gets larger because *C* = *g*(*T*, *S*) is a monotonically decreasing function of *S* and *S* generally gets smaller as *l**c* becomes larger. Finally, using Theorem [th1], the secret key rate per pulse is given by $$\begin{aligned} R=\sum\_{w=1}^{l\_c+1}Q^{(w)}\left[1-f\_{\U{EC}}-h\left(e^{(w),\U{U}}\_{\U{ph}}\right) \right]/(l\_c+1)L, \label{eq:keyrate}\end{aligned}$$ where we provide its proof in Appendix [ap:all]. Proof of the main result ------------------------ In this section, we prove our main result, Theorem [th1]. ### Derivation of the phase error rate for the $w^{\U{th}}$ type sifted key Here, we derive the upper bound $e^{(w),\U{U}}\_{\U{ph}}$ on the phase error rate $e^{(w)}\_{\U{ph}}:=N\_{\U{ph}}^{(w)}/N^{(w)}\_{\U{suc}}$ for the $w^{\U{th}}$ type sifted key (*w* ∈ {1, 2, ..., *l**c* + 1}). We remark that the following discussions hold for any *w*. To derive $e^{(w),\U{U}}\_{\U{ph}}$, we consider performing the *X*-basis measurement on system *A**k* of ${| \Psi \rangle}\_{\bm{A}\_N\bm{B}\_N}$ in Eq. ([eq:entbased]) with *k* belonging to the $w^{\U{th}}$ group of indices $\bigcup\_{i=1}^{N\_{\U{em}}}\mathcal{G}^{(i)}\_{w}$, and we define the total number of the minus *n**w*,  −(*i*) with $1\le i\le N\_{\U{em}}$ obtained through measuring *L* qubit systems {*A**k*}*k* ∈ G*w*(*i*). Thanks to these *X*-basis measurements, we can classify $N^{(w)}\_{\U{suc}}$ successfully detected blocks according to *n**w*,  −, which leads to $N^{(w)}\_{\U{ph}}=\sum\_{s=0}^LN^{(w)}\_{\U{ph},n\_{w,-}=s}$. Here, $N^{(w)}\_{\U{ph},n\_{w,-}=s}$ denotes the number of the phase error events for the $w^{\U{th}}$ type sifted key when *n**w*,  − = *s*. By considering Bob’s alternative measurement explained in Sec. [sec:securityA], the probability of failing in the prediction of the *X*-basis measurement outcome (namely, having the occurrence of the phase error) when *n**w*,  − = *s* is *s*/(*L* − 1). More precisely, the phase error is defined by obtaining the outcome of the minus in the *X*-basis measurement on the target qubit, which is randomly chosen with probability 1/(*L* − 1) . Since there are *s*-outcomes of the minus when *n**w*,  − = *s*, the probability of obtaining the phase error is at most *s*/(*L* − 1). Importantly, the probability 1/(*L* − 1) comes from the random choice of the delay at Bob’s measurement assumed in (B2), and Eve cannot distort this probability distribution . With this phase error probability *s*/(*L* − 1) when *n**w*,  − = *s*, the Chernoff bound leads the following for any *ζ* > 0 $$\begin{aligned} e^{(w)}\_{\U{ph}}& :=\frac{N\_{\U{ph}}^{(w)}}{N^{(w)}\_{\U{suc}}} =\frac{ \sum\_{s=0}^LN^{(w)}\_{\U{ph},n\_{w,-}=s}}{N^{(w)}\_{\U{suc}}} {\notag}\\ &\le\sum\_{s=0}^{L-1}\frac{s}{L-1} \frac{N^{(w)}\_{\U{suc},n\_{w,-}=s}}{N^{(w)}\_{\U{suc}}}+\zeta+ \frac{N^{(w)}\_{\U{suc},n\_{w,-}=L}}{N^{(w)}\_{\U{suc}}}{\notag}\\ &=\sum\_{s=0}^{L-2}\frac{1}{L-1} \frac{N^{(w)}\_{\U{suc},n\_{w,-}>s}}{N^{(w)}\_{\U{suc}}}+\zeta. \label{eq:Nph}\end{aligned}$$ To upper-bound $N^{(w)}\_{\U{suc},n\_{w,-}>s}/N^{(w)}\_{\U{suc}}$, whose trivial upper bound is 1, in addition to the successfully detected blocks, Alice measures the total number of the minus *n**w*,  −(*i*) for the non-detected blocks (namely, all the $i^\U{th}$ block with $i\notin\mathcal{B}^{(w)}\_{\U{suc}}$). In doing so, it is obvious to see that the number $N^{(w)}\_{\U{suc},n\_{w,-}>s}$ of obtaining *n**w*,  − > *s* among the detected blocks can never be larger than the one $N^{(w)}\_{\U{em},n\_{w,-}>s}$ among the *emitted* blocks [3](#fn3). We note that the number $N^{(w)}\_{\U{em},n\_{w,-}>s}$ of the emitted blocks is fixed once Alice prepares the state ${| \Psi \rangle}\_{\bm{A}\_N\bm{B}\_N}$ in Eq. ([eq:entbased]). By overestimating $N^{(w)}\_{\U{suc},n\_{w,-}>s}$ as $$\begin{aligned} N^{(w)}\_{\U{suc},n\_{w,-}>s}\le N^{(w)}\_{\U{em},n\_{w,-}>s},\end{aligned}$$ Eq. ([eq:Nph]) results in $$\begin{aligned} e^{(w)}\_{\U{ph}}\le\sum\_{s=0}^{L-2}\frac{1}{L-1} \min\left\{ \frac{N^{(w)}\_{\U{em},n\_{w,-}>s}}{N^{(w)}\_{\U{suc}}},1\right\}+\zeta. \label{eq:NphEm}\end{aligned}$$ Hence, the remaining task is to derive the upper bound on $N^{(w)}\_{\U{em},n\_{w,-}>s}$. For this, we evaluate the upper bound on the probability of obtaining the outcome of the minus when system *A**k* of state ${| \Psi \rangle}\_{\bm{A}\_N\bm{B}\_N}$ is measured in the *X*-basis with *k* belonging to the $m^{\U{th}}$ element (1 ≤ *m* ≤ *L*) of set G*w*(*i*). Mathematically, the target for computation is the probability Pr[*x**t* =  − ∣{*x**k*}*k* ∈ P*i*, *m*(*w*)] with $$\begin{aligned} t&:=(i-1)\tilde{L}+w+(m-1)(l\_c+1), \notag\\ \mathcal{P}^{(w)}\_{i,m}&:=\bigcup\_{a=1}^{i}\{k|k\in\mathcal{G}\_{w}^{(a)}, k<t\}. \notag\end{aligned}$$ If $$\begin{aligned} \Pr[x\_t=-|\{x\_k\}\_{k\in\mathcal{P}^{(w)}\_{i,m}}, j\_{t-1},...,j\_{t-l\_c}]\le C \label{pAllZ}\end{aligned}$$ with constant *C* holds for any *j**t* − 1, ..., *j**t* − *l**c* ∈ {0, 1}*l**c*, where *j**k* ∈ {0, 1} denotes the *Z*-basis measurement outcome on system *A**k* of Eq. ([eq:entbased]), applying the Bayes rule leads that the target probability is also upper-bounded by *C* [4](#fn4) : $$\begin{aligned} \Pr[x\_t=-|\{x\_k\}\_{k\in\mathcal{P}^{(w)}\_{i,m}}]\le C. \label{jyouC}\end{aligned}$$ The derivation of the upper-bound *C* in Eq. ([pAllZ]) involves the reference technique established in , and we explain its detail in Sec. [app:dec]. Note that as mentioned in Sec. [sec:idea], to apply the reference technique to estimate the statistics of *x**t* in Eq. ([pAllZ]), the previous *l**c* *Z*-basis measurement outcomes *j**t* − 1, ..., *j**t* − *l**c* must be fixed, where these *Z*-basis measurements are possible because we can discuss the security of each $w^{\U{th}}$ type sifted key separately thanks to the modification of the RRDPS protocol. With Eq. ([jyouC]) in hand, by considering the binomial trial with success probability *C*, the total number of the minus *n**w*,  −(*i*) obtained through measuring *L* systems {*A**k*}*k* ∈ G*w*(*i*) obeys the following probability distribution when conditioned on the previous outcomes *n**w*,  −(*i* − 1), ..., *n**w*,  −(1): $$\begin{aligned} &\Pr[n\_{w,-}^{(i)}>s|n\_{w,-}^{(i-1)},...,n\_{w,-}^{(1)}]\le{\notag}\\ &\sum\_{y=s+1}^L\binom{L}{y}C^y(1-C)^{L-y}=:\nu(L,s,C). \label{bound:minus}\end{aligned}$$ Here, *s* denotes the integer. Once *s* is fixed, *ν*(*L*, *s*, *C*) is constant independently of the block index *i* $(1\le i\le N\_{\U{em}})$. Since the probability of obtaining *n**w*,  − > *s* for any $i^{\U{th}}$ block is upper-bounded by *ν*(*L*, *s*, *C*) from Eq. ([bound:minus]), for deriving $N^{(w)}\_{\U{em},n\_{w,-}>s}$, we can imagine independent trials with probability *ν*(*L*, *s*, *C*). Therefore, again by using the Chernoff bound, we have from Eq. ([eq:NphEm]) and $N^{(w)}\_{\U{em},n\_{w,-}>s}/N^{(w)}\_{\U{suc}}=1/Q^{(w)}\cdot N^{(w)}\_{\U{em},n\_{w,-}>s}/N\_{\U{em}}$ that for any *χ* > 0 and *ζ* > 0 $$\begin{aligned} e^{(w)}\_{\U{ph}}\le\sum\_{s=0}^{L-2}\frac{1}{L-1}\min\left\{\frac{\nu(L,s,C)+\chi}{Q^{(w)}},1\right\}+\zeta. \label{finaleph}\end{aligned}$$ When we increase $N^{(w)}\_{{\U{suc}}}$ for any fixed *ζ* and *χ*, the probability of violating Eq. ([finaleph]) decreases exponentially. Therefore, in the limit of large $N^{(w)}\_{{\U{suc}}}$, we can neglect these terms and finally obtain our main result in Eq. ([eq:resulteph]). Note that *Q*(*w*) defined in Eq. ([eq:Qw]) is experimentally observed data. As will be shown in Sec. [app:dec], *C* is determined by the assumptions (A2) and (A3), namely, the parameters *ε*1, ..., *ε**l**c* in Eq. ([secII:LRASS1]) and the probabilities $p^{\U{L}}\_{\vac,0}$ and $p^{\U{L}}\_{\vac,1}$ in Eq. ([secII:vacP]). ### Derivation of *X*-basis measurement statistics using reference technique Here, we derive the upper bound on Pr[*x**t* =  − ∣{*x**k*}*k* ∈ P*i*, *m*(*w*), *j**t* − 1, ..., *j**t* − *l**c*] in Eq. ([pAllZ]), regardless of *j**t* − 1, ..., *j**t* − *l**c* ∈ {0, 1}*l**c* that are the *Z*-basis measurement outcomes of systems *A**t* − 1, ..., *A**t* − *l**c* in Eq. ([eq:entbased]). The crucial point for its computation is that once *j**t* − 1, ..., *j**t* − *l**c* are fixed, we find from Eq. ([eq:entbased]) that the state of the systems $\bm{A}\_{t-1}\bm{B}\_{t-1}$ and the one of the systems $\bm{A}\_{\ge t}\bm{B}\_{\ge t}$ are decoupled, i.e, they are in the tensor product. Here, we define $\bm{A}\_{\ge i}:=A\_N,A\_{N-1},...,A\_i$. The *Z*-basis measurement outcomes *j**t* − 1, ..., *j**t* − *l**c* have an influence on determining the set of the $t^{\U{th}}$ states $\{{| \psi\_{j\_t|\bm{j}\_{t-1}} \rangle}\_{B\_t}\}\_{j\_t}$, but the previous *X*-basis measurement outcomes {*x**k*}*k* ∈ P*i*, *m*(*w*) have no influence on the state of the systems $\bm{A}\_{\ge t}\bm{B}\_{\ge t}$ thanks to the tensor product structure. Therefore, when conditioned on *j**t* − 1, ..., *j**t* − *l**c*, we only focus on the state of the systems $\bm{A}\_{\ge t}\bm{B}\_{\ge t}$ to calculate the target probability. From Eq. ([eq:entbased]), conditioned on the outcomes *j**t* − 1, ..., *j**t* − *l**c*, the state of systems $\bm{A}\_{\ge t}\bm{B}\_{\ge t}$ is written as $$\begin{aligned} {| \Gamma^{\U{Act}}\_{\bm{j}\_{t-1}} \rangle}\_{\bm{A}\_{\ge t}\bm{B}\_{\ge t}}:=\frac{1} {\sqrt{2}}\sum\_{j\_t=0}^1{| j\_t \rangle}\_{A\_t} {| \psi^{\U{Act}}\_{j\_t|\bm{j}\_{t-1}} \rangle}\_{\bm{A}\_{\ge t+1}\bm{B}\_{\ge t}} \label{defGNR}\end{aligned}$$ with $$\begin{aligned} &{| \psi^{\U{Act}}\_{j\_t|\bm{j}\_{t-1}} \rangle}\_{\bm{A}\_{\ge t+1}\bm{B}\_{\ge t}}:= e^{\U{i}\theta\_{j\_t|\bm{j}\_{t-1}}}{| \psi\_{j\_t|\bm{j}\_{t-1}} \rangle}\_{B\_t}\otimes{\notag}\\ &\frac{1}{\sqrt{2^{N-t}}}\sum\_{j\_N}\cdots\sum\_{j\_{t+1}} \bigotimes\_{\zeta=t+1}^N{| j\_{\zeta} \rangle}\_{A\_{\zeta}} e^{\U{i}\theta\_{j\_\zeta|\bm{j}\_{\zeta-1}}} {| \psi\_{j\_{\zeta}|\bm{j}\_{\zeta-1}} \rangle}\_{B\_{\zeta}}. \label{maink-1}\end{aligned}$$ As shown in , and also in Appendix [sec:ANYCOMP], Eq. ([maink-1]) is rewritten as $$\begin{aligned} &{| \psi^{\U{Act}}\_{j\_t|\bm{j}\_{t-1}} \rangle}\_{\bm{A}\_{\ge t+1}\bm{B}\_{\ge t}} =e^{\U{i}\theta\_{j\_t|\bm{j}\_{t-1}}} {| \psi\_{j\_t|\bm{j}\_{t-1}} \rangle}\_{B\_t}{\notag}\\ &\left[ a\_{j\_t,\bm{j}\_{t-1}}{| \Phi\_{\bm{j}\_{t-1}} \rangle}\_{\bm{A}\_{\ge t+1}\bm{B}\_{\ge t+1}} +b\_{j\_t,\bm{j}\_{t-1}}{| \Phi^{\perp}\_{j\_t,\bm{j}\_{t-1}} \rangle}\_{\bm{A}\_{\ge t+1} \bm{B}\_{\ge t+1}}\right]. \label{mainstate}\end{aligned}$$ Here, ${| \Phi\_{\bm{j}\_{t-1}} \rangle}$ and ${| \Phi^{\perp}\_{j\_t,\bm{j}\_{t-1}} \rangle}$ are normalized states that respectively does not contain the information of *j**t* and does contain its information. Note that the state ${| \Phi\_{\bm{j}\_{t-1}} \rangle}$ represents a side-channel-free state, while the state ${| \Phi^{\perp}\_{j\_t,\bm{j}\_{t-1}} \rangle}$ represents the state of the side-channel since the information of *j**t* is propagated to the subsequence pulses. In our security proof, ${| \Phi^{\perp}\_{j\_t,\bm{j}\_{t-1}} \rangle}$ can be taken as any form in any-dimensional Hilbert space as long as it is orthogonal to ${| \Phi\_{\bm{j}\_{t-1}} \rangle}$, and the characterization of ${| \Phi^{\perp}\_{j\_t,\bm{j}\_{t-1}} \rangle}$ is not required. As stated below Eq. ([eq:entbased]), the phase factors $e^{\U{i}\theta\_{j\_k|\bm{j}\_{k-1}}}$ in Eq. ([eq:entbased]) can be chosen arbitrary, but to derive the lower bound on $a\_{j\_t,\bm{j}\_{t-1}}$ in Eq. ([mainstate]), for each *k* of *k* ≡ *w* in modulo *l**c* + 1, the phase factors $e^{\U{i}\theta\_{j\_k|\bm{j}\_{k-1}}}$ must be set as $$\begin{aligned} e^{\U{i}\theta\_{j\_k|\bm{j}\_{k-1}}}=1, \label{eq:main\_ph\_1}\end{aligned}$$ and for each *k* of *k* ≡ *w* in modulo *l**c* + 1, the phase factors $\{e^{\U{i}\theta\_{j\_\zeta|\bm{j}\_{\zeta-1}}} \}\_{\zeta=k+1}^{k+l\_c}$ must be chosen as $$\begin{aligned} &e^{\U{i}\theta\_{j\_\zeta|\bm{j}\_{\zeta-1}}}{\notag}\\ &:= \frac{\left|{\left\langle \psi\_{j\_\zeta|j\_{\zeta-1},...,j\_{k+1},j\_k=0,\bm{j}\_{k-1}}|\psi\_{j\_\zeta|j\_{\zeta-1},...,j\_{k+1},j\_k,\bm{j}\_{k-1}} \right\rangle}\right|} {{\left\langle \psi\_{j\_\zeta|j\_{\zeta-1},...,j\_{k+1},j\_k=0,\bm{j}\_{k-1}}|\psi\_{j\_\zeta|j\_{\zeta-1},...,j\_{k+1},j\_k,\bm{j}\_{k-1}} \right\rangle}}. \label{eq:main\_ph}\end{aligned}$$ Note that $e^{\U{i}\theta\_{j\_t|\bm{j}\_{t-1}}}=1$ holds in Eqs. ([maink-1]) and ([mainstate]) because of Eq. ([eq:mainph1]) and *t* ≡ *w* in modulo *l**c* + 1. In doing so, the coefficient $a\_{j\_t,\bm{j}\_{t-1}}$ is positive and can be lower-bounded by using Eq. ([secII:LRASS1]) as $$\begin{aligned} a\_{j\_t=0,\bm{j}\_{t-1}}&=1,~~a\_{j\_t=1,\bm{j}\_{t-1}} \ge \prod\_{d=1}^{l\_c}\sqrt{1-\epsilon\_{d}} \label{eq:lowa}\end{aligned}$$ if *l**c* ≥ 1. If *l**c* = 0, $a\_{j\_t,\bm{j}\_{t-1}}=1$ for both *j**t* = 0, 1 (see Appendix [sec:ANYCOMP] for the detail). Using Eq. ([defGNR]), we have that the probability of our interest leads to $$\begin{aligned} &\Pr[x\_t=-|\{x\_k\}\_{k\in\mathcal{P}^{(w)}\_{i,m}}, j\_{t-1},...,j\_{t-l\_c}]{\notag}\\ &=\tr\left[{| - \rangle}{\langle - |}\_{A\_t}{| \Gamma^{\U{Act}}\_{\bm{j}\_{t-1}} \rangle} {\langle \Gamma^{\U{Act}}\_{\bm{j}\_{t-1}} |}\_{\bm{A}\_{\ge t}\bm{B}\_{\ge t}} \right]. \label{eq:TC}\end{aligned}$$ To calculate Eq. ([eq:TC]), we introduce the reference states $\{{| \phi^{\U{Ref}}\_{j\_t|\bm{j}\_{t-1}} \rangle}\_{\bm{A}\_{\ge t+1}\bm{B}\_{\ge t}}\}\_{j\_t}$ that are associated with the actual states $\{{| \psi^{\U{Act}}\_{j\_t|\bm{j}\_{t-1}} \rangle}\_{\bm{A}\_{\ge t+1}\bm{B}\_{\ge t}}\}\_{j\_t}$. The reference states, which are close to the actual states prepared by the protocol, need to be chosen such that the following two conditions are satisfied. In its description, we use the notation $$\begin{aligned} {| \Gamma^{\U{Ref}}\_{\bm{j}\_{t-1}} \rangle}\_{\bm{A}\_{\ge t}\bm{B}\_{\ge t}}:=\frac{1} {\sqrt{2}}\sum\_{j\_t=0}^1{| j\_t \rangle}\_{A\_t} {| \phi^{\U{Ref}}\_{j\_t|\bm{j}\_{t-1}} \rangle}\_{\bm{A}\_{\ge t+1}\bm{B}\_{\ge t}}. \label{eq:RS}\end{aligned}$$ 1. For the reference state, the probability of obtaining the outcome of the minus when system *A**t* is measured in the *X*-basis is upper-bounded by constant *T* > 0, which is expressed as $$\begin{aligned} \Pr\left[x\_t=-~|~{| \Gamma^{\U{Ref}}\_{\bm{j}\_{t-1}} \rangle}\right]\le T. \label{cond1}\end{aligned}$$ 2. The fidelity between ${| \Gamma^{\U{Act}}\_{\bm{j}\_{t-1}} \rangle}$ and $ {| \Gamma^{\U{Ref}}\_{\bm{j}\_{t-1}} \rangle}$ is lower-bounded by constant *S* > 0, that is $$\begin{aligned} \left|{\left\langle \Gamma^{\U{Ref}}\_{\bm{j}\_{t-1}}|\Gamma^{\U{Act}}\_{\bm{j}\_{t-1}} \right\rangle}\right|\ge S. \label{cond2}\end{aligned}$$ Once the reference states satisfy Eqs. ([cond1]) and ([cond2]), the upper bound on Eq. ([eq:TC]) can be obtained by using the function *g*(*x*, *y*)  that relates the statistics of the actual and the reference states [5](#fn5). Specifically, the *X*-basis measurement statistics of these two states are related as $$\begin{aligned} &\Pr\left[x\_t=-~|~{| \Gamma^{\U{Act}}\_{\bm{j}\_{t-1}} \rangle}\right] {\notag}\\ &\le g\left(\Pr\left[x\_t=-~|~{| \Gamma^{\U{Ref}}\_{\bm{j}\_{t-1}} \rangle}\right], \left|{\left\langle \Gamma^{\U{Ref}}\_{\bm{j}\_{t-1}}|\Gamma^{\U{Act}}\_{\bm{j}\_{t-1}} \right\rangle} \right|\right), \label{PRGAMMA}\end{aligned}$$ where $g(x,y)=x+(1-y^2)(1-2x)+2y\sqrt{(1-y^2)x(1-x)}$ if *x* ≤ *y*2 and *g*(*x*, *y*) = 1 if *x* > *y*2. A direct calculation reveals that if *x* ≤ *y*2, $$\begin{aligned} g(x,y)\le g(x^{\U{U}},y^{\U{L}}) \label{eq:gUL}\end{aligned}$$ holds, where U (L) indicates the upper (lower) bound. Then, combining Eqs. ([cond1])-([eq:gUL]) gives $$\begin{aligned} \Pr\left[x\_t=-~|~{| \Gamma^{\U{Act}}\_{\bm{j}\_{t-1}} \rangle}\right] \le C= \begin{cases} g(T,S)& (\U{if}~T\le S^2) \\ 1 & (\U{if}~T> S^2). \end{cases} \label{eq:fe}\end{aligned}$$ Hence, the remaining task for obtaining Eq. ([jyouC]) is to derive the two bounds *T* and *S*, which are calculated below. In so doing, we take the reference state ${| \phi^{\U{Ref}}\_{j\_t|\bm{j}\_{t-1}} \rangle}\_{\bm{A}\_{\ge t+1}\bm{B}\_{\ge t}}$ for *j**t* ∈ {0, 1}, which are associated with the actual state ${| \psi^{\U{Act}}\_{j\_t|\bm{j}\_{t-1}} \rangle}\_{\bm{A}\_{\ge t+1}\bm{B}\_{\ge t}}$ in Eq. ([mainstate]), such that it is the first term of ${| \psi^{\U{Act}}\_{j\_t|\bm{j}\_{t-1}} \rangle}\_{\bm{A}\_{\ge t+1}\bm{B}\_{\ge t}}$: $$\begin{aligned} {| \phi^{\U{Ref}}\_{j\_t|\bm{j}\_{t-1}} \rangle}\_{\bm{A}\_{\ge t+1}\bm{B}\_{\ge t}}= {| \psi\_{j\_t|\bm{j}\_{t-1}} \rangle}\_{B\_t}\otimes{| \Phi\_{\bm{j}\_{t-1}} \rangle}\_{\bm{A}\_{\ge t+1}\bm{B}\_{\ge t+1}}. \label{defPHIGC}\end{aligned}$$ **Calculation of *T* in Eq. ([cond1]):** We calculate the upper bound on $\Pr\left[x\_t=-~|~{| \Gamma^{\U{Ref}}\_{\bm{j} \_{t-1}} \rangle}\right]$ as follows: $$\begin{aligned} &\Pr\left[x\_t=-~|~{| \Gamma^{\U{Ref}}\_{\bm{j}\_{t-1}} \rangle}\right]=1-\Pr\left[x\_t=+~|~{| \Gamma^{\U{Ref}}\_{\bm{j}\_{t-1}} \rangle}\right] \notag\\ &=1-\sum\_{n=0}^{\infty}\Pr\left[n\_t=n, x\_t=+~|~{| \Gamma^{\U{Ref}}\_{\bm{j}\_{t-1}} \rangle}\right]\notag\\ &\le1-\Pr\left[n\_t=0, x\_t=+~|~{| \Gamma^{\U{Ref}}\_{\bm{j}\_{t-1}} \rangle}\right], \notag\end{aligned}$$ where *n**t* denotes the number of photons contained in system *B**t*. By rewriting ${| \Gamma^{\U{Ref}}\_{\bm{j}\_{t-1}} \rangle}$ using the *X*-basis states ∣ ± ⟩*A**t*: $$\begin{aligned} &{| \Gamma^{\U{Ref}}\_{\bm{j}\_{t-1}} \rangle}\_{\bm{A}\_{\ge t}\bm{B}\_{\ge t}}= {| \Phi\_{\bm{j}\_{t-1}} \rangle}\_{\bm{A}\_{\ge t+1}\bm{B}\_{\ge t+1}}\otimes{\notag}\\ &\frac{{| + \rangle}\_{A\_t}\sum\_{j\_t}{| \psi\_{j\_t|\bm{j}\_{t-1}} \rangle}\_{B\_t}+{| - \rangle}\_{A\_t} \sum\_{j\_t}(-1)^{j\_t}{| \psi\_{j\_t|\bm{j}\_{t-1}} \rangle}\_{B\_t}}{2}, \label{eq:XREF}\end{aligned}$$ we find that the statistics of *n**t* only depends on system *A**t*. Importantly, in obtaining Eq. ([eq:XREF]), we used the fact that ${| \Phi\_{\bm{j}\_{t-1}} \rangle}\_{\bm{A}\_{\ge t+1}\bm{B}\_{\ge t+1}}$ is independent of *j**t* as stated in Sec. [app:dec]. Then, combining Eq. ([eq:XREF]) and the assumption (A3) in Sec. [sec:ass] gives the lower-bound on $\Pr\left[n\_t=0, x\_t=+~|~{| \Gamma^{\U{Ref}}\_{\bm{j}\_{t-1}} \rangle}\right]$ as $$\begin{aligned} &\Pr\left[n\_t=0, x\_t=+~|~{| \Gamma^{\U{Ref}}\_{\bm{j}\_{t-1}} \rangle}\right] =\left|\frac{{\langle \vac |}\sum\_{j\_t}{| \psi\_{j\_t|\bm{j}\_{t-1}} \rangle}}{2}\right|^2{\notag}\\ &\ge\left[\sqrt{p\_{\vac,0}^{\U{L}}}+\sqrt{p\_{\vac,1}^{\U{L}}}\right]^2/4. \notag\end{aligned}$$ In the inequality, we employ the fact that the coefficient of the vacuum state of ${| \psi\_{j\_t|\bm{j}\_{t-1}} \rangle}$ is non-negative, which is stated in assumption (A1). Therefore, $$\begin{aligned} \Pr\left[x\_t=-~|~{| \Gamma^{\U{Ref}}\_{\bm{j}\_{t-1}} \rangle}\right] \le1-\left(\sqrt{p\_{\vac,0}^{\U{L}}}+\sqrt{p\_{\vac,1}^{\U{L}}}\right)^2/4=T. {\notag}\end{aligned}$$ **Calculation of *S* in Eq. ([cond2]):** Next, we calculate the fidelity between ${| \Gamma^{\U{Act}}\_{\bm{j}\_{t-1}} \rangle}$ and ${| \Gamma^{\U{Ref}}\_{\bm{j}\_{t-1}} \rangle}$. We have that for *l**c* ≥ 1 $$\begin{aligned} &\left|{\left\langle \Gamma^{\U{Ref}}\_{\bm{j}\_{t-1}}|\Gamma^{\U{Act}}\_{\bm{j}\_{t-1}} \right\rangle} \right| = \frac{\left|\sum\_{j\_t}{\left\langle \phi^{\U{Ref}}\_{j\_t|\bm{j}\_{t-1}}|\psi^{\U{Act}}\_{j\_t|\bm{j}\_{t-1}} \right\rangle}\right|}{2} {\notag}\\ &=\frac{\left|\sum\_{j\_t}a\_{j\_t,\bm{j}\_{t-1}}\right|}{2} \ge \frac{1+\prod\_{d=1}^{l\_c}\sqrt{1-\epsilon\_d}}{2}=S. \notag\end{aligned}$$ The first equality follows from Eqs. ([defGNR]) and ([eq:RS]), the second equality comes from Eqs. ([mainstate]) and ([defPHIGC]), and the inequality follows from Eq. ([eq:lowa]). If *l**c* = 0, *S* = 1 holds since $a\_{j\_t,\bm{j}\_{t-1}}=1$ for both *j**t* = 0, 1. Simulation of secure key rates ============================== [fig:rate] Here, we show the simulation results of asymptotic key rate *R* per pulse given by Eq. ([eq:keyrate]) as a function of the overall channel transmission *η* including the detector efficiency. For the simulation, we assume that each emitted pulse is a coherent pulse from a conventional laser with mean photon number *μ* and only the phases of the coherent pulses are correlated. In this case, the lower bound on the vacuum emission probability $p^{\U{L}}\_{\U{vac},j\_k}$ in Eq. ([secII:vacP]) is given by *e*− *μ* and hence *T* defined above is 1 − *e*− *μ*. For the simulation, we consider the cases of the correlation length *l**c* = 0, 1, 2 and 10, and for all the cases, we adopt $f\_{\U{EC}}=h(e\_{\U{bit}})$ with $e\_{\U{bit}}$ denoting the bit error rate in the protocol and suppose the successful detection rate for any *w* as *Q*(*w*) = *L**η**μ**e*− *L**η**μ*/2. In the case of *l**c* = 1, namely, the case of the nearest-neighbor correlation, we assume that the $k^{\U{th}}$ emitted state is written as $$\begin{aligned} {| \psi\_{j\_k|j\_{k-1}} \rangle}= \delta\_{j\_{k-1},0}{| (-1)^{j\_k}\sqrt{\mu} \rangle} +\delta\_{j\_{k-1},1}{| (-1)^{j\_k}\sqrt{\mu} e^{\U{i}\Delta} \rangle}. \label{eq:nn}\end{aligned}$$ Here, *δ**x*, *y* is the Kronecker delta and ${| e^{\U{i}\theta}\sqrt{\mu} \rangle}$ denotes the coherent state with the complex amplitude being $e^{\U{i}\theta} \sqrt{\mu}$. This state represents the pulse correlation that if the previous choice of bit *j**k* − 1 is 0, then the next $k^{\U{th}}$ states are the ideal states $\{{| \sqrt{\mu} \rangle},{| -\sqrt{\mu} \rangle}\}$, but if *j**k* − 1 is 1, then the phases are deviated by Δ from the ideal ones. In this setting, *S* defined above is given by $$\begin{aligned} S=\frac{1+\sqrt{1-\epsilon\_1}}{2}=\frac{1+|{\left\langle \sqrt{\mu}|\sqrt{\mu} e^{\U{i}\Delta} \right\rangle}|}{2}=\frac{1+e^{\mu(\cos\Delta-1)}}{2}. {\notag}\end{aligned}$$ In the case of *l**c* = 2, we assume that the $k^{\U{th}}$ emitted state is written as ${| \psi\_{j\_k|j\_{k-1},j\_{k-2}} \rangle}=\delta\_{j\_{k-1},0}\delta\_{j\_{k-2},0}{| (-1)^{j\_k} \sqrt{\mu} \rangle}+ \delta\_{j\_{k-1},0}\delta\_{j\_{k-2},1}{| (-1)^{j\_k}\sqrt{\mu}e^{\U{i}\Delta/2} \rangle} +\delta\_{j\_{k-1},1}\delta\_{j\_{k-2},0}{| (-1)^{j\_k}\sqrt{\mu}e^{\U{i}\Delta} \rangle}+ \delta\_{j\_{k-1},1}\delta\_{j\_{k-2},1}{| (-1)^{j\_k}\sqrt{\mu}e^{\U{i}3\Delta/2} \rangle}$. This state represents that if *j**k* − 1 = 1 (*j**k* − 2 = 1), the phases of the $k^{\U{th}}$ pulses are rotated by Δ (Δ/2). This means that the influence of the second-previous bit *j**k* − 2 to the $k^{\U{th}}$ pulse is half of that of the previous bit *j**k* − 1. In this setting, *S* is given by $$\begin{aligned} S=\frac{1+\sqrt{1-\epsilon\_1}\sqrt{1-\epsilon\_2}}{2} =\frac{1+e^{\mu(\cos\Delta-1)}e^{\mu(\cos\frac{\Delta}{2}-1)}}{2}. {\notag}\end{aligned}$$ As for the case of *l**c* = 10, the $k^{\U{th}}$ state is set to be analogous to the one for *l**c* = 1, 2, where if *j**k* − *d* = 1 (with *d* = 1, ..., 10), the phases of the $k^{\U{th}}$ pulses are rotated by Δ/2*d* − 1. A direct calculation shows $S=\left(1+\prod\_{d=1}^{10}\sqrt{1-\epsilon\_d}\right)/2=\left(1+\prod\_{d=1}^{10} e^{\mu(\cos(\Delta/2^{d-1})-1)}\right)/2.$ In Fig. [fig:rate], we plot the key rates for $e\_{\U{bit}}=0.03$, *L* = 32 and Δ = 0.2 rad for the cases of *l**c* = 0, 1, 2, 10 from top to bottom. The top line is the key rate with no pulse correlation (i.e., *l**c* = 0) that corresponds to Δ = 0 in Eq. ([eq:nn]). The key rates are optimized over mean photon number *μ* for each value of channel transmission *η*. From these lines, we see that the pulse correlation slightly degrades the key rate (about 0.7 times lower than the one without pulse correlation), but the three lines with *l**c* = 1, 2 and 10 are almost superposed. This implies that when the pulse correlation gets weaker as the pulses are farther apart, which is assumed in our simulation, the long-range pulse correlation does not cause a significant impact on the key rate. Discussion ========== In this paper, we have provided the information theoretic security proof of the RRDPS protocol with the pulse correlation in Alice’s source by using the reference technique. The pulse correlation is one of the serious imperfections in high-speed QKD systems where Alice’s random bit choice is propagated to the subsequent emitted pulses. Once the number of propagated pulses (*l**c*) is fixed, our security proof only requires the two experimentally simple assumptions on the source: the lower bound on the fidelity between the two $k^{\U{th}}$ states when the correlation patterns are different and the lower bounds on the vacuum emission probabilities of each emitted pulse. Our numerical simulations have shown the key rates up to *l**c* = 10 and have revealed that the long-range pulse correlation does not cause a significant impact on the key rate in a realistic experimental setting. Therefore, our security proof is effective and applicable to wide range of practical sources, and thus paves the way to realize the truly secure and high-speed QKD systems. We end with some open questions. It has an practical importance to simulate the key rates based on another source correlation model such as an intensity correlation that is beyond the one we have supposed in our simulation shown in Fig. [fig:rate]. Also, it is interesting to extend our security proof without the modification of the protocol, namely, with a single variable-delay interferometer assuming the same source correlation. Another interesting topic is to extend the security proof to accommodate quantum correlations among the emitted signals. Acknowledgements ================ This work is supported in part by the JSPS Grant-in-Aid for Scientific Research (C) No. 20K03779, (C) No. 21K03388, JST Moonshot R&D-MILLENNIA Program (grant number JPMJMS2061), JSPS KAKENHI Research (S) (Grants No: JP18H05237), and by CREST (Japan Science and Technology Agency) Grant No: JPMJCR1671. AM is supported by JST, ACT-X Grant Number JPMJAX210O, Japan. Proof of Eq. ([eq:onlyvac]) =========================== In this appendix, we prove Eq. ([eq:onlyvac]). For this, once we obtain the following proposition, by substituting ∣*ψ*⟩ = ∣*ψ**j**ζ*∣*j**ζ* − 1, ..., *j**k* + 1, *j**k* = 1, *j**k* − 1, ..., *j*1⟩, ∣*ϕ*⟩ = ∣*ψ**j**ζ*∣*j**ζ* − 1, ..., *j**k* + 1, *j**k* = 0, *j**k* − 1, ..., *j*1⟩ and the lower bounds in Eq. ([secII:vacP]) to Eq. ([eq:exp]), Eq. ([eq:onlyvac]) can be obtained. For any state ∣*ψ*⟩ and ∣*ϕ*⟩, a lower bound on the fidelity between these two states is given by $$\begin{aligned} |{\left\langle \psi|\phi \right\rangle}|\ge \begin{cases} 2\sqrt{p\_{\vac,\phi}p\_{\vac,\psi}}-1 & \U{if}~2\sqrt{p\_{\vac,\phi}p\_{\vac,\psi}} \ge1\\ 0 & \U{otherwise}, \end{cases} \label{eq:exp}\end{aligned}$$ where $p\_{\vac,\phi}:=\tr[{| \vac \rangle}{\langle \vac |}{| \phi \rangle}{\langle \phi |}]$. (Proof) We expand ∣*ψ*⟩ and ∣*ϕ*⟩ using the photon number states in all the optical modes, ${| \vac \rangle}$ and {∣*n*⟩}*n* ≥ 1, as follows: $$\begin{aligned} {| \psi \rangle}&=\sqrt{p\_{\vac,\psi}}{| \vac \rangle}+\sum\_{n\ge1}\beta\_n{| n \rangle},{\notag}\\ {| \phi \rangle}&=\sqrt{p\_{\vac,\phi}}{| \vac \rangle}+\sum\_{n\ge1}\gamma\_n{| n \rangle}.{\notag}\end{aligned}$$ We here choose the global phase of ∣*ψ*⟩ and ∣*ϕ*⟩ such that the coefficients of ${| \vac \rangle}$ being positive, and *β**n* ∈ C and *γ**n* ∈ C are the coefficients for *n* ≥ 1 of ∣*ψ*⟩ and ∣*ϕ*⟩, respectively. By using this, ∣⟨*ψ*∣*ϕ*⟩∣ is written as $$\begin{aligned} |{\left\langle \psi|\phi \right\rangle}|=\left|\sqrt{p\_{\vac,\psi}p\_{\vac,\phi}}+e^{\U{i}\theta} \left|\sum\_{n\ge1}\beta\_n^{\ast}\gamma\_n\right|\right| \label{eq:expeq},\end{aligned}$$ where *θ* = arg(∑*n* ≥ 1*β**n*\**γ**n*). We next derive the upper bound on ∣∑*n* ≥ 1*β**n*\**γ**n*∣ by exploiting the triangle inequality and the Cauchy-Schwarz inequality: $$\begin{aligned} \left|\sum\_{n\ge1}\beta\_n^{\ast}\gamma\_n\right| &\le\sum\_{n\ge1}\left| \beta\_n\right|\left|\gamma\_n\right| \le\sqrt{\left(\sum\_{n\ge1}|\beta\_n|^2\right)\left(\sum\_{n\ge1}|\gamma\_n| ^2\right)}\notag\\ &=\sqrt{(1-p\_{\vac,\psi})(1-p\_{\vac,\phi})}=:\tau. {\notag}\end{aligned}$$ 1. If $2\sqrt{p\_{\vac,\phi}p\_{\vac,\psi}}\ge1$, since $\sqrt{p\_{\vac,\phi}p\_{\vac,\psi}}\ge\tau$ holds, Eq. ([eq:expeq]) is lower-bounded as follows: $$\begin{aligned} &|{\left\langle \psi|\phi \right\rangle}|\notag\\ \ge&\left|\sqrt{p\_{\vac,\phi}p\_{\vac,\psi}}+e^{\U{i}\pi}\tau\right|{\notag}\\ =&\sqrt{p\_{\vac,\phi}p\_{\vac,\psi}}-\sqrt{(1-p\_{\vac,\psi})(1-p\_{\vac,\phi})}{\notag}\\ \ge&\sqrt{p\_{\vac,\phi}p\_{\vac,\psi}}-\sqrt{1+p\_{\vac,\psi}p\_{\vac,\phi}-2\sqrt{p\_{\vac,\phi}p\_{\vac,\psi}}}{\notag}\\ =&2\sqrt{p\_{\vac,\phi}p\_{\vac,\psi}}-1 {\notag}.\end{aligned}$$ The second inequality follows from the fact that $a+b\ge2\sqrt{ab}$ holds for any *a*, *b* ≥ 0. 2. If $2\sqrt{p\_{\vac,\phi}p\_{\vac,\psi}}<1$, we only have the trivial lower bound: $$\begin{aligned} |{\left\langle \psi|\phi \right\rangle}|\ge0.\end{aligned}$$ Proof of Eqs. ([mainstate]) and ([eq:lowa]) =========================================== In this appendix, we prove Eqs. ([mainstate]) and ([eq:lowa]). We start form Eq. ([maink-1]): $$\begin{aligned} {| \psi^{\U{Act}}\_{j\_t|\bm{j}\_{t-1}} \rangle}\_{\bm{A}\_{\ge t+1}\bm{B}\_{\ge t}} =e^{\U{i}\theta\_{j\_t|\bm{j}\_{t-1}}}{| \psi\_{j\_t|\bm{j}\_{t-1}} \rangle}\_{B\_t} \left( \frac{1}{\sqrt{2^{N-t}}} \sum\_{j\_N}\cdots\sum\_{j\_{t+1}}\bigotimes\_{\zeta=t+1}^N{| j\_{\zeta} \rangle}\_{A\_{\zeta}} e^{\U{i}\theta\_{j\_\zeta|\bm{j}\_{\zeta-1}}}{| \psi\_{j\_{\zeta}|j\_{\zeta-1},...,j\_{t+1},j\_t, \bm{j}\_{t-1}} \rangle}\_{B\_{\zeta}} \right). \label{Scha}\end{aligned}$$ To see how the information *j**t* is encoded to the state ${| \psi^{\U{Act}}\_{j\_t|\bm{j}\_{t-1}} \rangle}\_{\bm{A}\_{\ge t+1}\bm{B}\_{\ge t}}$, we expand it using ${| \Phi\_{\bm{j}\_{t-1}} \rangle}\_{\bm{A}\_{\ge t+1}\bm{B}\_{\ge t+1}}$ and ${| \Phi^{\perp}\_{j\_t,\bm{j}\_{t-1}} \rangle}\_{\bm{A}\_{\ge t+1}\bm{B}\_{\ge t+1}}$ to have $$\begin{aligned} &{| \psi^{\U{Act}}\_{j\_t|\bm{j}\_{t-1}} \rangle}\_{\bm{A}\_{\ge t+1}\bm{B}\_{\ge t}} =e^{\U{i}\theta\_{j\_t|\bm{j}\_{t-1}}}{| \psi\_{j\_t|\bm{j}\_{t-1}} \rangle}\_{B\_t} \otimes{\notag}\\ &\left(a\_{j\_t,\bm{j}\_{t-1}}{| \Phi\_{\bm{j}\_{t-1}} \rangle}\_{\bm{A}\_{\ge t+1}\bm{B}\_{\ge t+1}}+ b\_{j\_t,\bm{j}\_{t-1}}{| \Phi^{\perp}\_{j\_t,\bm{j}\_{t-1}} \rangle}\_{\bm{A}\_{\ge t+1}\bm{B}\_{\ge t+1}}\right), \label{SDEC}\end{aligned}$$ where ${| \Phi\_{\bm{j}\_{t-1}} \rangle}\_{\bm{A}\_{\ge t+1}\bm{B}\_{\ge t+1}}$ and $ {| \Phi^{\perp}\_{j\_t,\bm{j}\_{t-1}} \rangle}\_{\bm{A}\_{\ge t+1}\bm{B}\_{\ge t+1}}$ denote some normalized states, and these are orthogonal each other. The subscripts in $a\_{j\_t,\bm{j}\_{t-1}}$, $b\_{j\_t,\bm{j}\_{t-1}}$, ${| \Phi\_{\bm{j}\_{t-1}} \rangle} \_{\bm{A}\_{\ge t+1}\bm{B}\_{\ge t+1}}$, and ${| \Phi^{\perp}\_{j\_t,\bm{j}\_{t-1}} \rangle}\_{\bm{A}\_{\ge t+1}\bm{B}\_{\ge t+1}}$ indicate the dependency on the previous setting choices [6](#fn6). Importantly, ${| \Phi\_{\bm{j}\_{t-1}} \rangle}\_{\bm{A}\_{\ge t+1}\bm{B}\_{\ge t+1}}$ does not depend on *j**t* but ${| \Phi^{\perp}\_{j\_t,\bm{j}\_{t-1}} \rangle}\_{\bm{A}\_{\ge t+1}\bm{B}\_{\ge t+1}}$ does. This means that ${| \Phi^{\perp}\_{j\_t,\bm{j}\_{t-1}} \rangle}\_{\bm{A}\_{\ge t+1}\bm{B}\_{\ge t+1}}$ represents the side-channel state of *j**t*. For ${| \Phi\_{\bm{j}\_{t-1}} \rangle}\_{\bm{A}\_{\ge t+1}\bm{B}\_{\ge t+1}}$, we can take any state as long as it is independent of *j**t*. Here, we choose it as $$\begin{aligned} {| \Phi\_{\bm{j}\_{t-1}} \rangle}\_{\bm{A}\_{\ge t+1}\bm{B}\_{\ge t+1}}=\frac{1}{\sqrt{2^{N-t}}} & \left(\sum\_{j\_{t+l\_c}}\cdots\sum\_{j\_{t+1}}\bigotimes\_{\zeta=t+1}^{t+l\_c} {| j\_\zeta \rangle}\_{A\_\zeta}{| \psi\_{j\_{\zeta}|j\_{\zeta-1},...,j\_{t+1},j\_t=0,\bm{j}\_{t-1}} \rangle} \_{B\_{\zeta}}\right){\notag}\\ \otimes& \left(\sum\_{j\_N}\cdots\sum\_{j\_{t+l\_c+1}}\bigotimes\_{\zeta=t+l\_c+1}^{N} e^{\U{i}\theta\_{j\_\zeta|\bm{j}\_{\zeta-1}}}{| j\_\zeta \rangle}\_{A\_\zeta}{| \psi\_{j\_{\zeta}| j\_{\zeta-1},...,j\_{t+1},j\_t=0,\bm{j}\_{t-1}} \rangle}\_{B\_{\zeta}}\right) \label{wechoose}\end{aligned}$$ that corresponds to the *N* − *t* systems of Eq. ([Scha]) with *j**t* being fixed to be 0 and with omitting the phase from the state $\{{| \psi\_{j\_{\zeta}|j\_{\zeta-1},...,j\_{t+1},j\_t=0,\bm{j}\_{t-1}} \rangle}\_{B\_{\zeta}}\}\_{\zeta=t+1}^{t+l\_c}$. The reason for omitting the phase is to guarantee the positivity of $a\_{j\_t,\bm{j}\_{t-1}}$ in Eq. ([SDEC]). The remaining task is to derive the lower bound on $a\_{j\_t,\bm{j}\_{t-1}}$ for *j**t* ∈ {0, 1} using the assumption (A2). Since $a\_{j\_t,\bm{j}\_{t-1}}$ is the inner product between Eq. ([wechoose]) and the vector $$\begin{aligned} &\frac{1}{\sqrt{2^{N-t}}} \sum\_{j\_N}\cdots\sum\_{j\_{t+1}}{\notag}\\ &\bigotimes\_{\zeta=t+1}^N{| j\_{\zeta} \rangle}\_{A\_{\zeta}} e^{\U{i}\theta\_{j\_\zeta|\bm{j}\_{\zeta-1}}}{| \psi\_{j\_{\zeta}|j\_{\zeta-1},...,j\_{t+1},j\_t,\bm{j}\_{t-1}} \rangle}\_{B\_{\zeta}}, {\notag}\end{aligned}$$ which is the state of the *N* − *t* systems shown in the parenthesis of Eq. ([Scha]), we have $$\begin{aligned} a\_{j\_t,\bm{j}\_{t-1}}&= \frac{1}{2^{N-t}} \left(\sum\_{j\_{t+l\_c}}\cdots\sum\_{j\_{t+1}}\prod\_{\zeta=t+1}^{t+l\_c} e^{\U{i}\theta\_{j\_\zeta|\bm{j}\_{\zeta-1}}} {\left\langle \psi\_{j\_{\zeta}|j\_{\zeta-1},...,j\_{t+1},j\_t=0,\bm{j}\_{t-1}}|\psi\_{j\_{\zeta}| j\_{\zeta-1},...,j\_{t+1},j\_t, \bm{j}\_{t-1}} \right\rangle} \right) \left(\sum\_{j\_N}\cdots\sum\_{j\_{t+l\_c+1}}1\right) \notag\\ &= \frac{1}{2^{l\_c}} \sum\_{j\_{t+l\_c}}\cdots\sum\_{j\_{t+1}}\prod\_{\zeta=t+1}^{t+l\_c} e^{\U{i}\theta\_{j\_\zeta|\bm{j}\_{\zeta-1}}} {\left\langle \psi\_{j\_{\zeta}|j\_{\zeta-1},...,j\_{t+1},j\_t=0,\bm{j}\_{t-1}}|\psi\_{j\_{\zeta}| j\_{\zeta-1},...,j\_{t+1},j\_t, \bm{j}\_{t-1}} \right\rangle} {\notag}\\ &= \frac{1}{2^{l\_c}} \sum\_{j\_{t+l\_c}}\cdots\sum\_{j\_{t+1}}\prod\_{\zeta=t+1}^{t+l\_c} \left|{\left\langle \psi\_{j\_{\zeta}|j\_{\zeta-1},...,j\_{t+1},j\_t=0,\bm{j}\_{t-1}}|\psi\_{j\_{\zeta}|j\_{\zeta-1},...,j\_{t+1},j\_t, \bm{j}\_{t-1}} \right\rangle}\right|. \label{aabs}\end{aligned}$$ In the second equality, we set the phases $e^{\U{i}\theta\_{j\_\zeta|\bm{j}\_{\zeta-1}}}$ for any *ζ* (*t* + 1 ≤ *ζ* ≤ *t* + *l**c*) and $\bm{j}\_N$ as $$\begin{aligned} &e^{\U{i}\theta\_{j\_\zeta|\bm{j}\_{\zeta-1}}}{\notag}\\ &:= \frac{\left|{\left\langle \psi\_{j\_\zeta|j\_{\zeta-1},...,j\_{t+1},j\_t=0,\bm{j}\_{t-1}}|\psi\_{j\_\zeta|j\_{\zeta-1},...,j\_{t+1},j\_t,\bm{j}\_{t-1}} \right\rangle}\right|} {{\left\langle \psi\_{j\_\zeta|j\_{\zeta-1},...,j\_{t+1},j\_t=0,\bm{j}\_{t-1}}|\psi\_{j\_\zeta|j\_{\zeta-1},...,j\_{t+1},j\_t,\bm{j}\_{t-1}} \right\rangle}}. \label{ephaseA}\end{aligned}$$ Since the only difference between both states in the inner product of Eq. ([aabs]) is in the $j\_t^{\U{th}}$ index, we have $$a\_{j\_t=0,\bm{j}\_{t-1}}=1.$$ On the other hand, if *j**t* = 1, by applying Eq. ([secII:LRASS1]) to Eq. ([aabs]), we obtain $$\begin{aligned} a\_{j\_t=1,\bm{j}\_{t-1}} & \ge\frac{1}{2^{l\_c}}\sum\_{j\_{t+l\_c}}\cdots\sum\_{j\_{t+1}}\prod\_{\zeta=t+1}^{t+l\_c}\sqrt{1-\epsilon\_{\zeta-t}} {\notag}\\ &=\prod\_{d=1}^{l\_c}\sqrt{1-\epsilon\_{d}}.{\notag}\end{aligned}$$ This ends the proof of Eqs. ([mainstate]) and ([eq:lowa]). $\blacksquare$ Proof of Eq. ([eq:keyrate]) =========================== In this appendix, we prove Eq. ([eq:keyrate]) that is our result of the security proof. Our security proof adopts the composability definition , where the security of our RRDPS protocol is evaluated by the correctness and secrecy parameters. As shown in , these parameters can be quantified separately, and since the correctness parameter is obtained by a verification step of the protocol, our target is to compute the secrecy parameter *ε**s*. The protocol is *ε**s*-secret if and only if $$\begin{aligned} d:=||\rho^{{\U{final}}}\_{\bm{A}\_{{\U{final}}}E}-\rho^{{\U{ideal}}}\_{\bm{A}\_{{\U{final}}}E}|| \le \epsilon\_s. \label{def:tra}\end{aligned}$$ Here, we define trace distance $||X||:=\tr[\sqrt{X^\dagger X}]/2$, $\rho^{{\U{final}}}\_{\bm{A}\_{{\U{final}}}E}$ denotes the state of Alice’s actual final keys and Eve’s quantum system, and $\rho^{{\U{ideal}}}\_{\bm{A}\_{{\U{final}}}E}$ denotes the state of ideal final keys that are completely secret from Eve and Eve’s quantum system. These final keys can be obtained by applying a quantum circuit (composed of a lot of CNOT gates), which is determined by random matrices used in privacy amplification. We introduce a quantum operation $\mathcal{E}^{(w)}\_{\U{act}}$ that extracts the $w^{\U{th}}$-type final key from $w^{\U{th}}$-type sifted qubits of systems $\bm{A}\_{{\U{sift}}}^{(w)}$. The operation $\mathcal{E}^{(w)}\_{\U{act}}$ is composed of CNOT gates acting on systems $\bm{A}^{(w)}\_{{\U{sift}}}$ and *Z*-basis measurements. The total operation in privacy amplification to obtain the final keys, which acts on all the sifted qubits of systems $\bm{A}\_{{\U{sift}}}:=\bm{A}^{(1)}\_{{\U{sift}}}\bm{A}^{(2)}\_{{\U{sift}}}...\bm{A}^{(l\_c+1)}\_{{\U{sift}}}$, is then written as $$\begin{aligned} \mathcal{E}\_{\U{act}}:=\bigotimes\_{w=1}^{l\_c+1}\mathcal{E}^{(w)}\_{\U{act}}. \label{Ea}\end{aligned}$$ Using this definition, $\rho^{{\U{final}}}\_{\bm{A}\_{{\U{final}}}E}$ and $\rho^{{\U{ideal}}}\_{\bm{A}\_{{\U{final}}}E}$ in Eq. ([def:tra]) can be written as $$\begin{aligned} &\rho^{{\U{final}}}\_{\bm{A}\_{{\U{final}}}E}=\mathcal{E}\_{\U{act}}(\rho\_{\bm{A}\_{{\U{sift}}}E}),\label{decomp2}\\ &\rho^{{\U{ideal}}}\_{\bm{A}\_{{\U{final}}}E}=\mathcal{E}\_{\U{act}}({| \bm{+} \rangle}{\langle \bm{+} |}\_{\bm{A}\_{{\U{sift}}}}\otimes \tr\_{\bm{A} \_{{\U{sift}}}}[\rho\_{\bm{A}\_{{\U{sift}}}E}]), \label{decomp}\end{aligned}$$ where $\rho\_{\bm{A}\_{{\U{sift}}}E}$ is the state of Alice’s all the sifted qubits and Eve’s quantum system just before executing privacy amplification $\mathcal{E}\_{\U{act}}$. Note that ${| + \rangle}:=({| 0 \rangle}+{| 1 \rangle})/\sqrt{2}$ is the *X*-basis eigenstate from which an ideal key can be extracted. We use the notation ${| \bm{+} \rangle}\_{\bm{A}\_{{\U{sift}}}}$ to express all the qubits of systems $\bm{A}\_{{\U{sift}}}$ being in ∣ + ⟩. Below, we show that the above trace distance *d* can be upper-bounded by using our Theorem [th1]. In this theorem, as we consider the asymptotic limit of an infinite sifted key length, we neglect the probability of failing in obtaining the upper bound of Eq. ([eq:resulteph]). For a general discussion here, we denote its negligible probability of failing in obtaining Eq. ([eq:resulteph]) for *w* by *ξ*(*w*). With these failure probabilities and Theorem [th1], we have the following proposition. When Eq. ([eq:resulteph]) in our Theorem [th1] holds except for probability *ξ*(*w*), and if the amount of privacy amplification applied for the $w^{\U{th}}$-type reconciled key is set to be $$\begin{aligned} N^{(w)}\_{\U{suc}}h(e^{(w),\U{U}}\_{\U{ph}})+\log\_2\frac{1}{\eta^{(w)}} \label{result:phw}\end{aligned}$$ for any *η*(*w*) > 0, the secrecy parameter *ε**s* of the RRDPS protocol is given by $$\begin{aligned} \epsilon\_s=\sum\_{w=1}^{l\_c+1}\sqrt{2}\sqrt{\xi^{(w)}+\eta^{(w)}}. \label{pro}\end{aligned}$$ Here, *h*(*x*) denotes the binary entropy function, and $N^{(w)}\_{\U{suc}}$ is the number of sifted qubits of systems $\bm{A}^{(w)}\_{{\U{sift}}}$. In the asymptotic limit ($N^{(w)}\_{\U{suc}}\to\infty$), as *ξ*(*w*) → 0 and *η*(*w*) → 0, *ε**s* in Eq. ([pro]) results in negligible. Therefore, using this proposition by setting *ξ*(*w*) → 0 and *η*(*w*) → 0, we finally obtain the secret key rate shown in Eq. ([eq:keyrate]) with *ε**s*-secret. The rest of this appendix is devoted to proving this proposition. (Proof) First, when the amount of privacy amplification is set to be as Eq. ([result:phw]), it is straightforward from  to derive the secrecy parameter for the $w^{\U{th}}$-type key, that is, we obtain for any *w* ∈ {1, 2, ..., *l**c* + 1}, $$\begin{aligned} &||\mathcal{E}^{(\ge w)}\_{\U{act}}(\tr\_{\bm{A}^{(0,...,w-1)}\_{{\U{sift}}}}[\rho\_{\bm{A}\_{{\U{sift}}}E}]){\notag}\\ &-\mathcal{E}^{(\ge w)}\_{\U{act}}({| \bm{+} \rangle}{\langle \bm{+} |}\_{\bm{A}^{(w)}\_{{\U{sift}}}}\otimes \tr\_{\bm{A}^{(1,...,w)}\_{{\U{sift}}}}[\rho\_{\bm{A}\_{{\U{sift}}}E}])||{\notag}\\ &\le\sqrt{2}\sqrt{\xi^{(w)}+\eta^{(w)}}=:\Delta^{(w)}. \label{eaches}\end{aligned}$$ Here, we define $\mathcal{E}^{(\ge w)}\_{\U{act}}:=\bigotimes\_{x=w}^{l\_c+1}\mathcal{E}^{(x)}\_{\U{act}}$ and $\tr\_{\bm{A}^{(0)}\_{{\U{sift}}}}$ means that no system is traced out. This bound Δ(*w*) is obtained by executing phase error correction to correct all the qubits of systems $\bm{A}^{(w)}\_{{\U{sift}}}$ to ∣ + ⟩. This operation of its correction does not change any statistics of the measurement outcomes obtained by $\mathcal{E}^{(\ge w)}\_{\U{act}}$, and hence we can insert this operation to upper-bound the trace distance in Eq. ([eaches]). Then, based on , this trace distance can be evaluated by the failure probability *η*(*w*) of phase error correction when $e^{(w),\U{U}}\_{\U{ph}}$ is obtained and the failure probability *ξ*(*w*) of obtaining the upper-bound on the phase error rate $e^{(w),\U{U}}\_{\U{ph}}$. We remark that in doing this argument of phase error correction, the state $\rho\_{\bm{A}\_{{\U{sift}}}E}$ must be dependent on *w* ∈ {1, 2, ..., *l**c* + 1}. This is because we define the virtual state ${| \Psi \rangle}\_{\bm{A}\_N\bm{B}\_N}$ in Eq. ([eq:entbased]) for each *w* differently (due to the phase factors $e^{\U{i}\theta\_{j\_k|\bm{j}\_{k-1}}}$ defined in Eqs. ([eq:mainph1]) and ([eq:mainph])) in order to obtain the upper bound on the phase error rate for each *w*, which is explained in Sec. [app:dec]. The differences of the states $\rho\_{\bm{A}\_{{\U{sift}}}E}$ for *w* become apparent in correcting the phase errors for each $w^{\U{th}}$-type sifted qubits. Importantly, however, these differences do not change any statistics of the final keys obtained through the operation $\mathcal{E}\_{\U{act}}$ [7](#fn7). Therefore, we can take the state $\rho\_{\bm{A}\_{{\U{sift}}}E}$ independently of *w* when we consider the security of the final keys in Eq. ([eaches]). This is the reason why the state $\rho\_{\bm{A}\_{{\U{sift}}}E}$ in Eq. ([eaches]) does not depend on *w*. Then, by exploiting Eq. ([eaches]) and substituting Eqs. ([decomp2]) and ([decomp]) to Eq. ([def:tra]), *d* is calculated as follows: $$\begin{aligned} d&\le ||\mathcal{E}\_{\U{act}}(\rho\_{\bm{A}\_{{\U{sift}}}E})-\mathcal{E}\_{\U{act}}({| \bm{+} \rangle} {\langle \bm{+} |}\_{\bm{A}^{(w=1)}\_{{\U{sift}}}} \otimes\tr\_{\bm{A}^{(w=1)}\_{\U{sift}}}[\rho\_{\bm{A}\_{\U{sift}}E}])||{\notag}\\ &+ ||\mathcal{E}\_{\U{act}}({| \bm{+} \rangle}{\langle \bm{+} |}\_{\bm{A}^{(w=1)}\_{{\U{sift}}}} \otimes\tr\_{\bm{A}^{(w=1)}\_{\U{sift}}}[\rho\_{\bm{A}\_{\U{sift}}E}])- \mathcal{E}\_{\U{act}}({| \bm{+} \rangle}{\langle \bm{+} |}\_{\bm{A}\_{{\U{sift}}}}\otimes \tr\_{\bm{A} \_{{\U{sift}}}}[\rho\_{\bm{A}\_{{\U{sift}}}E}])||\\ &\le\Delta^{(1)}+ ||\mathcal{E}^{(\ge2)}\_{\U{act}}(\tr\_{\bm{A}^{(w=1)}\_{\U{sift}}}[\rho\_{\bm{A}\_{\U{sift}}E}])- \mathcal{E}^{(\ge2)}\_{\U{act}}({| \bm{+} \rangle}{\langle \bm{+} |}\_{\bm{A}^{(w\ge2)}\_{{\U{sift}}}} \otimes \tr\_{\bm{A}\_{{\U{sift}}}}[\rho\_{\bm{A}\_{{\U{sift}}}E}])||\\ &\le\Delta^{(1)}+ ||\mathcal{E}^{(\ge2)}\_{\U{act}}(\tr\_{\bm{A}^{(w=1)}\_{\U{sift}}}[\rho\_{\bm{A}\_{\U{sift}}E}])- \mathcal{E}^{(\ge2)}\_{\U{act}}({| \bm{+} \rangle}{\langle \bm{+} |}\_{\bm{A}^{(w=2)}\_{{\U{sift}}}} \otimes \tr\_{\bm{A}^{(w=1,2)}\_{{\U{sift}}}}[\rho\_{\bm{A}\_{{\U{sift}}}E}])||{\notag}\\ &+||\mathcal{E}^{(\ge2)}\_{\U{act}}({| \bm{+} \rangle}{\langle \bm{+} |}\_{\bm{A}^{(w=2)} \_{{\U{sift}}}}\otimes \tr\_{\bm{A}^{(w=1,2)}\_{{\U{sift}}}}[\rho\_{\bm{A}\_{{\U{sift}}}E}]) -\mathcal{E}^{(\ge2)}\_{\U{act}}({| \bm{+} \rangle}{\langle \bm{+} |}\_{\bm{A}^{(w\ge2)}\_{{\U{sift}}}} \otimes \tr\_{\bm{A}\_{{\U{sift}}}}[\rho\_{\bm{A}\_{{\U{sift}}}E}])||\\ &\le\Delta^{(1)}+\Delta^{(2)} +||\mathcal{E}^{(\ge2)}\_{\U{act}}({| \bm{+} \rangle}{\langle \bm{+} |}\_{\bm{A}^{(w=2)}\_{{\U{sift}}}} \otimes \tr\_{\bm{A}^{(w=1,2)}\_{{\U{sift}}}}[\rho\_{\bm{A}\_{{\U{sift}}}E}]) -\mathcal{E}^{(\ge2)}\_{\U{act}}({| \bm{+} \rangle}{\langle \bm{+} |}\_{\bm{A}^{(w\ge2)}\_{{\U{sift}}}} \otimes \tr\_{\bm{A}\_{{\U{sift}}}}[\rho\_{\bm{A}\_{{\U{sift}}}E}])||.\end{aligned}$$ The first and third inequalities come from the triangle inequality of trace distance, and the second and fourth ones follow from Eq. ([eaches]). So far, we have quantified the security of the $w^{\U{th}}$-type keys for *w* = 1, 2. By repeating the same arguments for *w* = 3, 4, ..., *l**c*, we have $$\begin{aligned} d\le \sum\_{w=1}^{l\_c}\Delta^{(w)} +||\mathcal{E}^{(\ge l\_c+1)}\_{\U{act}}(\tr\_{\bm{A}^{(w=1,2...,l\_c)}\_{\U{sift}}}[\rho\_{\bm{A} \_{\U{sift}}E}])- \mathcal{E}^{(\ge l\_c+1)}\_{\U{act}}({| \bm{+} \rangle}{\langle \bm{+} |}\_{\bm{A}^{(w=l\_c+1)} \_{{\U{sift}}}}\otimes \tr\_{\bm{A}\_{{\U{sift}}}}[\rho\_{\bm{A}\_{{\U{sift}}}E}])||.\end{aligned}$$ Finally, applying Eq. ([eaches]) for *w* = *l**c* + 1, we obtain $$\begin{aligned} d&\le \sum\_{w=1}^{l\_c+1}\Delta^{(w)},\end{aligned}$$ which ends the proof. $\blacksquare$ 99 H.-K. Lo, M. Curty, and K. Tamaki, Nature Photonics **8**, 595 (2014). K. Yoshino, M. Fujiwara, K. Nakata, T. Sumiya, T. Sasaki, M. Takeoka, M. Sasaki, A. Tajima, M. Koashi, and A. Tomita, npj Quantum Information **4**, 8 (2018). E. Diamanti, H.-K. Lo, B. Qi, and Z. Yuan, npj Quantum Information **2**, 16025 (2016). V. Zapatero, A. Navarrete, K. Tamaki, and M. Curty, arXiv:2105.11165v1 (2021). M. Pereira, G. Kato, A. Mizutani, M. Curty, and K. Tamaki, Science Advances **6**, eaaz4487 (2020). A. Mizutani, G. Kato, K. Azuma, M. Curty, R. Ikuta, T. Yamamoto, N. Imoto, H. K. Lo, and K. Tamaki, npj Quantum Information **5**, 8 (2019). T. Sasaki, Y. Yamamoto, and M. Koashi, Nature **509**, 475 (2014). A. Mizutani, N. Imoto, and K. Tamaki, Phys. Rev. A **92**, 060303 (2015). H.-L. Yin, Y. Fu, Y. Mao, and Z.-B. Chen, Phys. Rev. A **93**, 022330 (2016). Z. Zhang, X. Yuan, Z. Cao, and X. Ma, New Journal of Physics **19**, 033013 (2017). T. Sasaki and M. Koashi, Quantum Science and Technology **2**, 024006 (2017). Y. Hatakeyama, A. Mizutani, G. Kato, N. Imoto, and K. Tamaki, Phys. Rev. A **95**, 042301 (2017). L. Wang and S. Zhao, Quantum Information Processing **16**, 100 (2017). L. Liu, F.-Z. Guo, S.-J. Qin, and Q.-Y. Wen, Scientific Reports **7**, 42261 (2017). Z.-Q. Yin, S. Wang, W. Chen, Y.-G. Han, R. Wang, G.-C. Guo, and Z.-F. Han, Nature Communications **9**, 457 (2018). T. Matsuura, T. Sasaki, and M. Koashi, Phys. Rev. A **99**, 042303 (2019). H. Takesue, T. Sasaki, K. Tamaki, and M. Koashi, Nature Photonics **9**, 827 (2015). S. Wang, Z.-Q. Yin, W. Chen, D.-Y. He, X.-T. Song, H.- W. Li, L.-J. Zhang, Z. Zhou, G.-C. Guo, and Z.-F. Han, Nature Photonics **9**, 832 (2015). J.-Y. Guan, Z. Cao, Y. Liu, G.-L. Shen-Tu, J. S. Pelc, M. M. Fejer, C.-Z. Peng, X. Ma, Q. Zhang, and J.-W. Pan, Phys. Rev. Lett. **114**, 180502 (2015). Y.-H. Li, Y. Cao, H. Dai, J. Lin, Z. Zhang, W. Chen, Y. Xu, J.-Y. Guan, S.-K. Liao, J. Yin, et al., Phys. Rev. A **93**, 030302 (2016). F. Bouchard, A. Sit, K. Heshami, R. Fickler, and E. Karimi, Phys. Rev. A **98**, 010301 (2018). M. Koashi, New Journal of Physics **11**, 045018 (2009). M. Ben-Or, M. Horodecki, D. W. Leung, D. Mayes, and J. Oppenheim, Second Theory of Cryptography Conf. TCC (Lecture Notes in Computer Science vol 3378) (Berlin: Springer) pp 386-406 (2005). A. Mizutani, T. Sasaki, Y. Takeuchi, K. Tamaki, and M. Koashi, npj Quantum Information **5**, 87 (2019). M. Hayashi, R. Nakayama, New Journal of Physics **16**, 063009 (2014). --- 1. Note that each interferometer followed by two detectors considered in this paper is the same configuration as the one in the original RRDPS protocol .[↩](#fnref1) 2. Note that with additional 50% loss followed by this alternative measurement, it is equivalent to the actual measurement in Fig. [fig:device] .[↩](#fnref2) 3. Note that the relation of the numbers $N^{(w)}\_{\U{suc},n\_{w,-}>s}\le N^{(w)}\_{\U{em},n\_{w,-}>s}$ just comes from the inclusion relation between the emitted blocks and the detected blocks with *n**w*,  − > *s*, and this relation holds independently of whether the emitted pulses are correlated or not. Similar discussions are seen in Eq. (12) in the original RRDPS protocol  and in Eq. (45) in the DPS protocol .[↩](#fnref3) 4. Note that in Eq. ([jyouC]), *x**t* and {*x**k*}*k* ∈ P*i*, *m*(*w*) are not independent. For example, when *l**c* = 1, Pr[*x*3∣*x*1] ≠ Pr[*x*3]. This is because *x*1 influences *j*2 and *j*2 does *x*3. However,we can remove the condition *x*1 when conditioned on *j*2, namely, Pr[*x*3∣*x*1, *j*2] = Pr[*x*3∣*j*2], which will be explained in Sec. [app:dec].[↩](#fnref4) 5. Note that the exact statement presented in  is that for any two normalized states ∣*A*⟩ and ∣*R*⟩ and any POVM (positive-operator-valued measure) {*M*, *I* − *M*}, $$\tr[{| A \rangle}{\langle A |}M]\le g(\tr[{| R \rangle}{\langle R |}M], |{\left\langle A|R \right\rangle}|).$$ [↩](#fnref5) 6. Note that these subscripts are *j**t*, *j**t* − 1, ..., *j*1 if 1 ≤ *t* ≤ *l**c* + 1, and *j**t*, *j**t* − 1, ..., *j**t* − *l**c* if *l**c* + 2 ≤ *t*.[↩](#fnref6) 7. In other words, the unitary operator acting on Alice’s virtual qubits of systems $ \bm{A}\_N$ to add the adequate phase factors for each *w* commutes with $\mathcal{E}^{(w)}\_{\U{act}}$ since this unitary operator is diagonal in the *Z*-basis.[↩](#fnref7) Security of round-robin differential-phase-shift quantum key distribution protocol with correlated light sources ================================================================================================================ Among various quantum key distribution (QKD) protocols, the round-robin differential-phase- shift (RRDPS) protocol has a unique feature that its security is guaranteed without monitoring the signal disturbance. Moreover, this protocol has a remarkable property of being robust against source imperfections assuming that the emitted pulses are independent. Unfortunately, some experiments with high-speed QKD systems confirmed the violation of the independence due to pulse correlations, and therefore the lack of a security proof with taking into account this effect is an obstacle for guaranteeing the implementation security. In this paper, we show that the RRDPS protocol is secure against any source imperfections by establishing a security proof with the pulse correlations. The proof is simple in the sense that we make only three experimentally simple assumptions on the source. Our numerical simulation based on the proof shows that the long-range pulse correlation does not cause a significant impact on the key rate, which reveals another striking feature of the RRDPS protocol. Our security proof is thus effective and applicable to wide range of practical sources and paves the way to realize truly secure QKD in high-speed systems. Introduction ============ Quantum key distribution (QKD) offers information-theoretically secure communication between two distant parties, Alice and Bob . To prove the security of QKD, we suppose mathematical models on the users’ devices. If these models are discrepant from the physical properties of the actual devices, the security of actual QKD systems cannot be guaranteed. Hence, it is important to establish a security proof by reflecting the actual properties of the devices as accurately as possible. One of the serious imperfections in the source device is the pulse correlation, which becomes a problem especially in high-speed QKD systems. Due to experimental imperfections, signal modulation for each emitted pulse affects the modulation of subsequent pulses. This means that information of Alice’s setting choices, such as a bit choice and an intensity choice of the current pulse, is propagated to the subsequent pulses. Indeed, in , it is experimentally observed that the intensities are correlated among the adjacent pulses with GHz-clock QKD system. Even though tremendous efforts have been made so far to accommodate imperfections in the source into the security proofs (see e.g. ), such pulse correlation violates the assumption of most security proofs. The exceptions are the results in , where the intensity correlations between the nearest-neighbor pulses and arbitrary intensity correlations are respectively accommodated in  and , and the pulse correlation in terms of Alice’s bit choice information is taken into account in . Note that the result in  provides a security proof incorporating the correlation among the emitted pulses, but this correlation is assumed to be independent of Alice’s setting choices. Among various QKD protocols, the round-robin differential-phase-shift (RRDPS) protocol  is one of the promising protocols, which has a unique feature that its security is guaranteed without monitoring the signal disturbance such as the bit error rate. Thanks to this property, the RRDPS protocol has a better tolerance on the bit error rate than the other protocols and the fast convergence in the finite key regime. For this protocol, a number of works have been done theoretically   and experimentally . Moreover, the RRDPS protocol is shown to be robust against most of source imperfections , which is a remarkable property. However, this robustness is maintained only when the pulses emitted from the source are independent, which is also assumed in all the previous security proofs of the RRDPS protocol  . Unfortunately, some experiment  confirms the violation of this independence due to the pulse correlations, and hence the lack of a security proof with taking into account this effect is an obstacle for guaranteeing the implementation security of the RRDPS protocol. In this paper, we show that the RRDPS protocol is secure against any source imperfections by establishing the security proof with the pulse correlations. We adopt a general correlation model in which a bit information Alice selected is encoded not only on the current pulse but also on the subsequence pulses. In our security proof, we make only three experimentally simple source assumptions, which would be useful for simple source characterization. More specifically, we assume the length of the correlation among the emitted pulses, the fidelity between two emitted states when the correlation patterns are different, and the lower bounds on the vacuum emission probabilities of each emitted pulse. It is remarkable that no other detailed characterization is required for the source and any side-channels in the source can be accommodated. In the security proof, we exploit the reference technique  that is a general framework of a security proof to deal with source imperfections, including the pulse correlation. As a result of our security proof, we show that the long-range pulse correlation does not cause a significant impact on the key rate under a realistic experimental setting, which reveals another striking property of the RRDPS protocol. The paper is organized as follows. In section [sec:idea], we explain how to apply the reference technique to deal with the pulse correlation in the RRDPS protocol and why our protocol employs multiple interferometers in Bob’s measurement depending on the length of the correlation. In sections [sec:ass] and [sec:pro], we describe the assumptions that we make on Alice and Bob’s devices and introduce the protocol considered, respectively. In section [sec:security], we first summarize the security proof and state our main result about the amount of the privacy amplification, followed by providing its proof. Then in section [sec:simul], we present our numerical simulation results for the key generation rate and show that the long-range pulse correlation does not cause a significant impact on the key rate. Finally, in section [sec:dis], we wrap up our security proof and refer to some open problems. The idea to apply reference technique to RRDPS protocol ======================================================= Here, we explain how to apply the reference technique (RT)  to deal with the pulse correlations in the RRDPS protocol. In the original RRDPS protocol , Alice sends a block of pulses from which Alice and Bob try to extract one-bit key using a variable-delay interferometer. On the other hand, in our protocol with the correlation length of *l**c*, Alice and Bob divide each emitted block into (*l**c* + 1) groups and try to extract (*l**c* + 1)-bit key from each of the groups. In so doing, Bob employs (*l**c* + 1) variable-delay interferometers so that the pulses belonging to the same group interfere. In other terms, our protocol can be regarded as running (*l**c* + 1) RRDPS protocols simultaneously. We adopt such a modification for enabling us to apply the RT. Below, we explain why the modification is needed. In the RT, we consider an entanglement-based picture where each $k^{\U{th}}$ emitted pulse is entangled with the qubit. To discuss the security of the $k^{\U{th}}$ bit *j**k* that is obtained by measuring the qubit in the *Z*-basis (whose eigenstates are denoted by {∣0⟩, ∣1⟩}), each qubit is measured in the *X*-basis (whose eigenstates are denoted by ${| \pm \rangle}:=({| 0 \rangle}\pm{| 1 \rangle})/\sqrt{2}$). Since how well Alice can predict the *X*-basis measurement outcome is directly related to the amount of privacy amplification , this estimation is crucial in proving the security. The RT provides a method for its estimation under the pulse correlation, but one vital point is that the set of the $k^{\U{th}}$ emitted states must be fixed just before the emission of the $k^{\U{th}}$ pulse. To fix the set, we consider to measure the previous *l**c* qubits in the *Z*-basis. For instance, if *l**c* = 1, to discuss the security of the even-indexed bit *j*2*k*, the previous odd-indexed qubit must be measured in the *Z*-basis. These *Z*-basis measurements of the previous *l**c* qubits conflict the original security proof  of the RRDPS protocol. This is because to estimate the aforementioned *X*-basis statistics, all the qubits in the block are measured in the *X*-basis since any two pulses in the block can interfere in Bob’s measurement. To avoid this conflict, for instance if *l**c* = 1, we modify the RRDPS protocol such that the even-indexed and the odd-indexed pulses interfere separately, and the secret keys are separately extracted from each interference using two interferometers. In doing so, when we discuss the security of the even-indexed bit, only the even-indexed qubits in the block are measured in the *X*-basis while the odd-indexed ones are measured in the *Z*-basis. Hence, thanks to this modification, we can realize both the *X*- and the *Z*-basis measurements at the same time. By generalizing this idea to any *l**c* ≥ 2, if we use (*l**c* + 1) interferometers and consider the protocol that extracts the keys from each interferometer, these two basis measurements become compatible, and hence we can apply the RT for proving the security. We remark that when *l**c* = 1, the security proofs for the even- and the odd-indexed keys are mutually exclusive in the sense that the proof for the odd-indexed (even-indexed) key provides us with how much privacy amplification needs to be applied to the odd-indexed (even-indexed) key, but it does not offer the security of the even-indexed (odd-indexed) key. Fortunately, thanks to the universal composability  of the two security proofs, the amount of privacy amplification to generate the key both from the odd- and the even-indexed bits simultaneously is equivalent to those obtained from the mutually exclusive proofs. This argument holds for any *l**c* ≥ 2 due to the universal composability of the (*l**c* + 1) security proofs. Assumptions on the devices ========================== [t] [fig:device] Before describing the protocol, we summarize the assumptions we make on the source and the receiver. Figure [fig:device] depicts the setups of Alice and Bob’s devices employed in the protocol. Throughout the paper, we adopt the following notations. Let *N* be the total number of pulses sent by Alice in the protocol, and for any symbol *A*, we define $\bm{A}\_i:=A\_i,A\_{i-1},...,A\_1$ with *i* ∈ N. First, we list up the assumptions on Alice’s source as follows. As long as the following assumptions hold, any side-channel in the source can be accommodated. 1. For each $k^{\U{th}}$ emitted pulse (1 ≤ *k* ≤ *N*), Alice chooses a random bit *j**k* ∈ {0, 1}. The bit *j**k* is encoded not only to the $k^{\U{th}}$ emitted pulse but also to the subsequence pulses. Let *l**c*
arxiv_0000261
mathscr{l} }\! {\boldsymbol{E}}}_i \right)}_{i\in\mathcal{I}}, \ell\in\mathcal{J}. The component representation of {\left( {{}^{ 1 }\! {\boldsymbol{E}}}_i \right)}_{i\in\mathcal{I}} w.r.t. {\left( {\boldsymbol{E}}_i \right)}_{i\in\mathcal{I}} is {\left( {\left( 0,1,0 \right)},{\left( 1,0,0 \right)},{\left( 0,0,1 \right)} \right)}; of {\left( {{}^{ 2 }\! {\boldsymbol{E}}}_i \right)}_{i\in\mathcal{I}} is {\left( {\left( -\frac{2}{\sqrt{229}},\frac{225}{229},-\frac{30}{229} \right)},{\left( \frac{15}{\sqrt{229}},\frac{30}{229},-\frac{4}{229} \right)},{\left( 0,\frac{2}{\sqrt{229}},\frac{15}{\sqrt{229}} \right)} \right)}; of {\left( {{}^{ 3 }\! {\boldsymbol{E}}}_i \right)}_{i\in\mathcal{I}} is {\left( {\left( 0,1,0 \right)},{\left( \frac{5}{\sqrt{26}},0,-\frac{1}{\sqrt{26}} \right)},{\left( \frac{1}{\sqrt{26}},0,\frac{5}{\sqrt{26}} \right)} \right)}; and of {\left( {{}^{ 4 }\! {\boldsymbol{E}}}_i \right)}_{i\in\mathcal{I}} is {\left( {\left( 0,1,0 \right)},{\left( -\frac{5}{\sqrt{26}},0,-\frac{1}{\sqrt{26}} \right)},{\left( -\frac{1}{\sqrt{26}},0,\frac{5}{\sqrt{26}} \right)} \right)}. We apply the \sqrt{\text{AO}}-algorithm to the accelerometer data from the four virtual accelerometers {{}^{ \mathscr{l} }\! {\mathcal{X}}}, \ell\in\mathcal{J} to predict the acceleration of the material particle {{}^{ 5 }\! {\mathcal{X}}}. The reference position vector of {{}^{ 5 }\! {\mathcal{X}}} is -c{\boldsymbol{E}}_3. (modified from, copyright 2020, Elsevier)](figEllipsoidAccPos "fig:") [fig:accposition] ![Configuration of the rigid ellipsoid at different time instances in the simulation of it impacting an elastic half space (see §[subsec:Rigid-ellipsoid-impact] for details). In the simulation, the ellipsoid, \mathcal{B}, is dropped onto an elastic half-space, H, under the action of gravity with the initial angular and translational velocities prescribed. The ellipsoid’s initial position in \mathcal{E} is prescribed by taking {{\boldsymbol{\sf c}}}(0)={\left( 0,0,0.75 \right)}, and {{\boldsymbol{\sf Q}}}(0)=\text{diag}{\left( 1,1,1 \right)}. Its initial velocities are prescribed by setting \star({{\boldsymbol{\sf W}}}(0))={\left( 5,5,5 \right)}, and {{\boldsymbol{\sf c}}}'(0)={\left( 0.75,0,0 \right)}. (modified from, copyright 2020, Elsevier)](DataGenEllipsoidConfig "fig:") [fig:impact] ![The acceleration components {{}^{ 1 }\! \alpha}_{i}(\tau), i\in\mathcal{I}, of the virtual accelerometer {{}^{ 1 }\! {\mathcal{}}}{X} before and after addition of synthetic errors (see §[subsec:Simulating-noise-and] for details). (a) shows the acceleration components before the addition of synthetic errors. (b)–(d) show the error-inclusive acceleration component {{}^{ 1 }\! \alpha}_{2}^{\rm Error}, which is generated by adding different errors to the acceleration component {{}^{ 1 }\! \alpha}_{2}. In (b), (c), and (d) the error time signals are particular realizations of the OU process for the OU parameter sets (\mu, \sigma, \beta)=(5,0,10^3), (0,10^2,10^3), and (5,10^2,10^3), respectively. The error in (b) corresponds to [category1] (exclusively bias type errors); in (c) to [category2] (exclusively noise type errors); and in (d) to [category3] (a combination of bias and noise type errors). ](Accinput "fig:") [fig:accinput] Adding synthetic errors to virtual accelerometer data[subsec:Simulating-noise-and] ---------------------------------------------------------------------------------- The acceleration components l *α**i*, ℓ ∈ J, from the simulation do not contain any errors; other than, of course, the errors that arise due to numerical discretization of the balance equations, numerical round-off, etc. However, those type of errors are of insignificant magnitude. Using the error free virtual accelerometer data l *α**i*, ℓ ∈ J, from the simulation we generate virtual error-inclusive accelerometer data ${{}^{ \mathscr{l} }\! \alpha}\_{i}^{\rm Error}$, ℓ ∈ J, as $${{}^{ \mathscr{l} }\! \alpha\_{i}^{\rm Error}(\tau)}={{}^{ \mathscr{l} }\! \alpha\_{i}(\tau)}+\eta\_{\tau}. \label{eq:error}$$ In equation $\eqref{eq:error}$ *η**τ* denotes a particular realization of the Ornstein-Uhlenbeck (OU) process. We will describe shortly what we mean by a “realization”. The OU process is a continuous time and state stochastic process that is defined by the integral equation *η**τ*1 + *τ*2 − *η**τ*1 = *β*∫*τ*1*τ*1 + *τ*2(*μ* − *η**τ*) *d**τ* + *σ*∫*τ*1*τ*1 + *τ*2 *d**W**τ*,  where the second integral on the right is an Itô integral and *W**τ* is the Wiener process. The real number *μ* is called the mean value, *σ* ≥ 0 the diffusion coefficient, and *β* > 0 the drift coefficient. The symbols *τ*1, *τ*2 denote any two (non-dimensional) time instances. Since the OU process is a stochastic process, a given set of OU parameters, i.e., a particular set of *μ*, *σ*, and *β* values, define an entire family or population of real valued functions on R. For a given OU parameter set, a particular realization of the OU process is obtained by drawing *η*0 from a Gaussian distribution of mean *μ* and variance *σ*2/(2*β*) and solving $\eqref{eq:OUprocess}$. As a consequence of $\eqref{eq:error}$, ${{}^{ \mathscr{l} }\! \alpha}\_{i}^{\rm Error}$, ℓ ∈ J, too are stochastic processes. For *i* ∈ I and ℓ ∈ J, when *μ* ≠ 0 and *σ* = 0, any particular realization of ${{}^{ \mathscr{l} }\! \alpha}\_{i}^{\rm Error}$ will contain only bias type errors. A representative realization of ${{}^{ 1 }\! \alpha}\_{2}^{\rm Error}$ for *μ* = 5, *σ* = 0, and *β* = 103 is shown in Fig. [fig:accinput](b). Alternatively, when *μ* = 0 and *σ* ≠ 0 any particular realization of ${{}^{ \mathscr{l} }\! \alpha}\_{i}^{\rm Error}$ will only contain noise type errors. A representative realization of ${{}^{ 1 }\! \alpha}\_{2}^{\rm Error}$ for *μ* = 0, *σ* = 102, and *β* = 103 is shown in Fig. [fig:accinput](c). In general when *μ* and *σ* are both non-zero, realizations of ${{}^{ \mathscr{l} }\! \alpha}\_{i}^{\rm Error}$ will contain both bias and noise type errors. A representative realization of ${{}^{ 1 }\! \alpha}\_{2}^{\rm Error}$ for *μ* = 5, *σ* = 102, and *β* = 103 is shown in Fig. [fig:accinput](d). From here on unless otherwise specified the value of *β* will always be equal to 103. Comparison of $\sqrt{\text{AO}}$ and AO algorithms using virtual error-inclusive accelerometer data[subsec:Comparison-of-AO-alogrithm] -------------------------------------------------------------------------------------------------------------------------------------- We compare the predictions of the $\sqrt{\text{AO}}$ and AO algorithms for the following categories of OU parameter sets. 1. Exclusively bias type errors: *μ* = 0, 0.1, 0.2, 0.5, and 1, and *σ* = 0 (see Table. [Tab:musigma0]).[category1] 2. Exclusively noise type errors: *μ* = 0, and *σ* = 0, 1, 10, 50, and 102 (see Table. [Tab:sigma]).[category2] 3. Both bias and noise type errors: *μ* = 0, 0.1, 0.2, 0.5, and 1, and *σ* = 10 (see Table. [Tab:mu]).[category3] For a given OU parameter set we generate a large number of ${{}^{ \mathscr{l} }\! \alpha}\_{i}^{\rm Error}$, ℓ ∈ J, realizations. We apply the $\sqrt{\text{AO}}$ and AO algorithms to each of those realizations and derive a population of predictions for the acceleration of the material particle 5 X (see Fig. [fig:accposition]). We denote the error-free (non-dimensional) acceleration of 5 X, which we know from the rigid-ellipsoid-impact-simulation’s results, at the time instance *τ* as ${{}^{ 5 }\! {{\boldsymbol{\sf A}}}}(\tau)\in\mathcal{M}\_{3,1}{\left( \mathbb{R} \right)}$. The components of ${{}^{ 5 }\! {{\boldsymbol{\sf A}}}}(\tau)$, i.e., ${\left( {{}^{ 5 }\! {{\boldsymbol{\sf A}}}}(\tau) \right)}\_{i}, i\in\mathcal{I}$, for a sequence of time instances are, respectively, shown in subfigures (a), (b), and (c) in each of Figs. [fig:sigma0mu1]–[fig:sigma10mu1]. They are shown using thick gray curves. Since the predictions of the $\sqrt{\text{AO}}$ and AO algorithms are derived by, respectively, feeding the $\sqrt{\text{AO}}$ and AO algorithms the stochastic processes ${{}^{ \mathscr{l} }\! \alpha}\_{i}^{\rm Error}$, ℓ ∈ J, they too, in fact, are stochastic processes. Representative realizations of the predictions from the $\sqrt{\text{AO}}$ (resp. AO) algorithm for different OU parameter sets are, respectively, shown in Figs. [fig:sigma0mu1]–[fig:sigma10mu1] in green (resp. red). Results and Discussion[sec:Results-and-Discussion] ================================================== Category I[subsec:Category-I] ----------------------------- Among the OU parameter sets belonging to [category1] the set corresponding to the most amount of error is (*μ*, *σ*) = (1.0, 0.0). Representative realizations of the predictions from the $\sqrt{\text{AO}}$ and AO algorithms for this parameter set are, respectively, shown in green and red in Fig. [fig:sigma0mu1]. In Fig. [fig:sigma0mu1] the realization of the $\sqrt{\text{AO}}$-algorithm’s prediction appears to be more accurate than that of the AO-algorithm’s prediction, especially with increasing time. In order to make a more quantitative comparison between the $\sqrt{\text{AO}}$ and AO algorithms’ predictions, we focus on the time interval [0, 1] and make use of the the metrics $$\begin{aligned} \epsilon\_{2}{\left( \sqrt{\text{AO}} \right)}&:=\frac{\lVert\sqrt{\text{AO}}\left({{}^{ 5 }\! {{\boldsymbol{\sf }}}}{A}\right)-{{}^{ 5 }\! {{\boldsymbol{\sf }}}}{A}\rVert\_{2}}{\lVert{{}^{ 5 }\! {{\boldsymbol{\sf }}}}{A}\rVert\_{2}}, \label{equ:L2errorSqrtAO} \\ \epsilon\_{2}{\left( \text{AO} \right)}&:=\frac{\lVert\text{AO}\left({{}^{ 5 }\! {{\boldsymbol{\sf }}}}{A}\right)-{{}^{ 5 }\! {{\boldsymbol{\sf }}}}{A}\rVert\_{2}}{\lVert{{}^{ 5 }\! {{\boldsymbol{\sf }}}}{A}\rVert\_{2}}, \label{equ:L2errorAO}\end{aligned}$$ where $\lVert f\rVert\_{2}:=\sqrt{\int\_{0}^{1}\lVert f(\tau)\rVert^{2}\,d\tau}$; and $\mathbb{R}\ni\tau\mapsto\sqrt{\text{AO}}{\left( {{}^{ 5 }\! {{\boldsymbol{\sf A}}}} \right)}(\tau)\in\mathcal{M}\_{3,1}(\mathbb{R})$, and $\mathbb{R}\ni\tau\mapsto\text{AO}{\left( {{}^{ 5 }\! {{\boldsymbol{\sf A}}}} \right)}(\tau)\in\mathcal{M}\_{3,1}(\mathbb{R})$ are, respectively, particular realizations of the $\sqrt{\text{AO}}$ and AO algorithms’ predictions for $\mathbb{R}\ni\tau\mapsto{{}^{ 5 }\! {{\boldsymbol{\sf A}}}}(\tau)\in\mathcal{M}\_{3,1}(\mathbb{R})$. The metric $\epsilon\_{2}{\left( \sqrt{\text{AO}} \right)}$ (resp. *ε*2(AO)) is constructed such that the smaller its value the more accurate the realization used in computing it. The values of $\epsilon\_{2}{\left( \sqrt{\text{AO}} \right)}$ and *ε*2(AO) for the realizations shown in Fig. [fig:sigma0mu1] are, respectively, 8.79% and 47.11%. The metric $\epsilon\_{2}{\left( \sqrt{\text{AO}} \right)}$’s smaller value in comparison to that of *ε*2(AO) corroborates our earlier assertion that among the $\sqrt{\text{AO}}{\left( {{}^{ 5 }\! {{\boldsymbol{\sf A}}}} \right)}$ and $\sqrt{\text{AO}}{\left( {{}^{ 5 }\! {{\boldsymbol{\sf A}}}} \right)}$ shown in Fig. [fig:sigma0mu1] the realization $\sqrt{\text{AO}}{\left( {{}^{ 5 }\! {{\boldsymbol{\sf A}}}} \right)}$ is more accurate. This comparison between the $\sqrt{\text{AO}}$ and AO algorithms’ predictions’ particular realizations prompts us to hypothesize that the $\sqrt{\text{AO}}$ algorithm is more accurate than the AO algorithm. In order to compare the $\sqrt{\text{AO}}$ and AO algorithms’ predictions in a more well-balanced and comprehensive manner we calculated $\epsilon\_{2}{\left( \sqrt{\text{AO}} \right)}$ and *ε*2(AO), respectively, for a large number of realizations (population size *N* = 200) of the predictions from the $\sqrt{\text{AO}}$ and AO algorithms. The mean values of the thus generated populations of $\epsilon\_{2}{\left( \sqrt{\text{AO}} \right)}$ and *ε*2(AO) are 8.788% and 47.114%, respectively (see row number 5 of Table. [Tab:musigma0]). The mean value of the population of $\epsilon\_{2}{\left( \sqrt{\text{AO}} \right)}$ being lower than the mean value of the population of *ε*2(AO) further supports our earlier hypothesis that $\sqrt{\text{AO}}$-algorithm is more accurate than the AO-algorithm. To recall, the discussion so far in this section exclusively relates to the (*μ*, *σ*) parameter set (1.0, 0.0). We performed analysis similar to the one discussed in the previous paragraph for the parameter sets (0.0, 0.0), (0.1, 0.0), (0.2, 0.0), and (0.5, 0.0) as well. The means of $\epsilon\_{2}{\left( \sqrt{\text{AO}} \right)}$’s and *ε*2(AO)’s populations for these other parameter sets are, respectively, given in the first and second columns of Table. [Tab:musigma0]. It can be seen from Table. [Tab:musigma0] that the means of the $\epsilon\_{2}{\left( \sqrt{\text{AO}} \right)}$ populations are consistently smaller than those of *ε*2(AO) populations across all the parameter sets considered. Furthermore, in Table. [Tab:musigma0] the difference between the means of a $\epsilon\_{2}{\left( \sqrt{\text{AO}} \right)}$ population and a *ε*2(AO) population corresponding to the same parameter set increases with the amount of error, i.e., with the magnitude of *μ* in the present category. Thus for the category of exclusively bias type errors, in addition to the $\sqrt{\text{AO}}$-algorithm appearing to be more accurate than the AO-algorithm, it further appears that the $\sqrt{\text{AO}}$-algorithm’s performance over the AO-algorithm increases with increasing amount of error. Category II[subsec:Category-II] ------------------------------- In [category2] we consider the (*μ*, *σ*) parameter sets (0.0, 0.0), (0.0, 1.0), (0.0, 10.0), (0.0, 50.0), and (0.0, 100.0). The means of $\epsilon\_{2}{\left( \sqrt{\text{AO}} \right)}$’s and *ε*2(AO)’s populations for these parameter sets are, respectively, given in the first and second columns of Table. [Tab:sigma]. It can be seen from Table. [Tab:sigma] that the means of the $\epsilon\_{2}{\left( \sqrt{\text{AO}} \right)}$ populations are approximately the same as those of *ε*2(AO) populations across all the parameter sets considered. Thus, for the category of exclusively noise type errors the $\sqrt{\text{AO}}$-algorithm appears to perform on par with the AO-algorithm. Among the OU parameter sets belonging to [category2] the set corresponding to the most amount of error is (*μ*, *σ*) = (0.0, 100.0). Representative realizations of the predictions from the $\sqrt{\text{AO}}$ and AO algorithms for this parameter set are, respectively, shown in green and red in Fig. [fig:sigma100mu0]. In Fig. [fig:sigma100mu0], at least at the earlier time instances, the $\sqrt{\text{AO}}$ and AO algorithms’ predictions’ realizations are almost indistinguishable from one another. However, at later time instances the $\sqrt{\text{AO}}$-algorithm seems to be performing better than the AO-algorithm (This feature is likely not reflected in the results presented in Table. [Tab:sigma] because they are calculated only using data from the initial time instances, or, to be more precise, from the [0, 1] time interval.). Based on this observation we venture to speculate that even when the errors are predominantly of the noise type, the $\sqrt{\text{AO}}$-algorithm will eventually begin to outperform the AO-algorithm. Category III ------------ In [category3] we consider the (*μ*, *σ*) parameter sets (0.0, 10.0), (0.1, 10.0), (0.2, 10.0), (0.5, 10.0), and (1, 10.0). The means of $\epsilon\_{2}{\left( \sqrt{\text{AO}} \right)}$’s and *ε*2(AO)’s populations for these parameter sets are, respectively, given in the first and second columns of Table. [Tab:mu]. Among the OU parameter sets belonging to [category3] the set corresponding to the most amount of error is (*μ*, *σ*) = (1.0, 10.0). Representative realizations of the predictions from the $\sqrt{\text{AO}}$ and AO algorithms for this parameter set are, respectively, shown in green and red in Fig. [fig:sigma10mu1]. It can be seen from Table. [Tab:mu] that the means of the $\epsilon\_{2}{\left( \sqrt{\text{AO}} \right)}$ populations are consistently smaller than those of *ε*2(AO) populations across all the parameter sets considered. Furthermore, in Table. [Tab:mu] the difference between the means of a $\epsilon\_{2}{\left( \sqrt{\text{AO}} \right)}$ population and a *ε*2(AO) population corresponding to the same parameter set increases with the amount of error, i.e., with the magnitudes of *σ* and *μ*. Thus, in [category3] the relative performance of the $\sqrt{\text{AO}}$ and AO algorithms is very similar to that in [category1]. From the discussion in $\S$[subsec:Category-I] we know that the AO-algorithm is more sensitive to bias type errors than the $\sqrt{\text{AO}}$-algorithm and from the discussion in $\S$[subsec:Category-II] we know that the $\sqrt{\text{AO}}$ and AO algorithms are, approximately, equally sensitive to noise type errors. From the results in this section it appears that the $\sqrt{\text{AO}}$-algorithm outperforms the AO-algorithm as long as the errors have some bias type component in them, irrespective of what the amount of the noise type component in them is. [ht!] |>c|l|l| & & $\sqrt{\text{AO}}$-algorithm & AO-algorithm 0 & 0.01 & 0.02 0.1 & 8.87 & 41.02 0.2 & 17.73 & 83.63 0.5 & 44.22 & 220.28 1 & 87.88 & 471.14 [ht!] |>c|l|l| & & $\sqrt{\text{AO}}$-algorithm & AO-algorithm 0 & 0.01 & 0.02 1 & 1.03 ± 0.09 & 1.16 ± 0.21 10 & 10.22 ± 0.64 & 11.46 ± 1.80 50 & 52.79 ± 3.33 & 56.87 ± 7.79 100 & 115.85 ± 10.11 & 112.83 ± 15.54 [ht!] |>c|l|l| & & $\sqrt{\text{AO}}$-algorithm & AO-algorithm 0 & 10.22 ± 0.64 & 11.46 ± 1.80 0.1 & 13.42 ± 1.14 & 42.61 ± 5.05 0.2 & 20.19 ± 1.43 & 84.52 ± 5.57 0.5 & 45.18 ± 1.47 & 221.08 ± 6.02 1 & 88.27 ± 1.44 & 472.26 ± 6.47 ![Comparison of the predictions from the \sqrt{\text{AO}} and \text{AO} algorithms for the acceleration of the material particle {{}^{ 5 }\! {\mathcal{X}}} (see Fig. [fig:accposition]) in the rigid ellipsoid impact simulation (see §[subsec:Rigid-ellipsoid-impact] for details). Both the \sqrt{\text{AO}} and \text{AO} algorithms were fed the same virtual error-inclusive accelerometer data. The data was generated by adding a particular realization of the OU process to the virtual accelerometer data from the rigid ellipsoid impact simulation. The OU realization corresponded to the OU parameter set (\mu,\sigma,\beta)=(1,0, 10^3). Subfigures (a), (b), and (c), respectively, show the comparison for the component of {{}^{ 5 }\! {\mathcal{X}}}’s acceleration in the \boldsymbol{e}_{i}, i\in \mathcal{I}, directions. ](Acc5Ellipsoidsigma0mu1 "fig:") [fig:sigma0mu1] ![Comparison of the predictions from the \sqrt{\text{AO}} and \text{AO} algorithms for the acceleration of the material particle {{}^{ 5 }\! {\mathcal{X}}} (see Fig. [fig:accposition]) in the rigid ellipsoid impact simulation (see §[subsec:Rigid-ellipsoid-impact] for details). Both the \sqrt{\text{AO}} and \text{AO} algorithms were fed the same virtual error-inclusive accelerometer data. The data was generated by adding a particular realization of the OU process to the virtual accelerometer data from the rigid ellipsoid impact simulation. The OU realization corresponded to the OU parameter set (\mu,\sigma,\beta)=(0,10^2, 10^3). Subfigures (a), (b), and (c), respectively, show the comparison for the component of {{}^{ 5 }\! {\mathcal{X}}}’s acceleration in the \boldsymbol{e}_{i}, i\in \mathcal{I}, directions. ](Acc5Ellipsoidsigma100mu0 "fig:") [fig:sigma100mu0] ![Comparison of the predictions from the \sqrt{\text{AO}} and \text{AO} algorithms for the acceleration of the material particle {{}^{ 5 }\! {\mathcal{X}}} (see Fig. [fig:accposition]) in the rigid ellipsoid impact simulation (see §[subsec:Rigid-ellipsoid-impact] for details). Both the \sqrt{\text{AO}} and \text{AO} algorithms were fed the same virtual error-inclusive accelerometer data. The data was generated by adding a particular realization of the OU process to the virtual accelerometer data from the rigid ellipsoid impact simulation. The OU realization corresponded to the OU parameter set (\mu,\sigma,\beta)=(1,10, 10^3). Subfigures (a), (b), and (c), respectively, show the comparison for the component of {{}^{ 5 }\! {\mathcal{X}}}’s acceleration in the \boldsymbol{e}_{i}, i\in \mathcal{I}, directions. The predictions of the \sqrt{\text{AO}} algorithm when fed just the virtual accelerometer data, i.e., with no added errors, is also shown in (a), (b), and (c) using black open circles. ](Acc5Ellipsoidsigma10mu1 "fig:") [fig:sigma10mu1] Concluding remarks[Sec:Con] =========================== 1. The results discussed in $\S$[sec:Results-and-Discussion] show that the $\sqrt{\text{AO}}$-algorithm provides a valid approach to determine the complete motion of a rigid body using only data from four tri-axial accelerometers. However, the $\sqrt{\text{AO}}$-algorithm’s practical validity in the field still remains to be explored. In the future, we plan to conduct an experimental evaluation of the $\sqrt{\text{AO}}$-algorithm to compliment its *in silico* validation that we presented in this paper. 2. The comparison in $\S$[sec:Results-and-Discussion] shows that for the cases we considered the $\sqrt{\text{AO}}$-algorithm is less sensitive to bias type errors compared to the AO-Algorithm. However, we have not provided a mathematical proof that the $\sqrt{\text{AO}}$-algorithm is better than the AO-algorithm with regard to bias type errors. Thus, though the comparison presented in $\S$[sec:Results-and-Discussion] provides strong support to the hypothesis that the $\sqrt{\text{AO}}$-algorithm is less sensitive to bias type errors than the AO-algorithm, it by no means provides a proof for the hypothesis. A definitive resolution to the question of whether the hypothesis is true requires an error analysis of both the $\sqrt{\text{AO}}$-algorithm as well as the AO-algorithm. We currently have not carried out such analyses. Nevertheless, irrespective of the relative merit of the $\sqrt{\text{AO}}$ over the AO algorithm, it is quite clear from its derivation and the results discussed in $\S$[sec:Results-and-Discussion] that it provides a valid approach for determining the complete motion of a rigid body from accelerometer data. 3. The $\sqrt{\text{AO}}$-algorithm retains all the benefits of the AO-algorithm. Both algorithms provide the complete motion of the rigid body in the fixed laboratory frame. Without integration or differentiation, both algorithms are able to determine the pseudo acceleration field, providing the magnitude of acceleration for all material particles. Both algorithms can be applied to any arrangement of four tri-axial accelerometers as long as they do not lie in the same plane. There is no restriction on the orientation of the tri-axial accelerometers. 4. In the *in silico* validation, evaluation, and comparison of the $\sqrt{\text{AO}}$-algorithm that we set up in $\S$[sec:In-silico-validation] we used the OU process to model experimental errors. Even more specifically, we took the magnitude of the parameter *μ* in the OU process as a measure of the bias type errors in the OU process’ realizations. There of course exist bias type errors that cannot be modeled in this manner. Thus, our evaluation of the relative sensitivities of the $\sqrt{\text{AO}}$ and the AO algorithms to bias type errors was carried out using a limited form of bias type errors. A more general method to represent bias type errors in the virtual accelerometer data would provide a more comprehensive comparison of the relative sensitivities of the $\sqrt{\text{AO}}$ and the AO algorithms to bias type errors. 5. We expect the $\sqrt{\text{AO}}$-algorithm to be especially useful for constructing inputs to the upcoming finite element based brain injury criteria. The finite element based brain injury criteria are based on the mechanics of head motion and brain deformation, while the traditional brain injury criteria have mostly been developed empirically. Therefore, we expect the finite element based brain injury criteria to find increased use in the future. Funding information =================== The authors gratefully acknowledge support from the Panther Program and the Office of Naval Research (Dr. Timothy Bentley) under grants N000141812494 and N000142112044. Declaration of Competing Interest ================================= The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Acknowledgments =============== The authors thank Sayaka Kochiyama for her help in preparing some of the figures in the manuscript. Definition of the map  ⋆ ( ⋅ )[subsec:hodgestar] ================================================ For ${{\rm n}\_{\rm sd}}=2$, the map  ⋆ ( ⋅ ) : so(R, 2) → R is defined by the equation  ⋆ ( ⋅ ) = ( ⋅ )21. The inverse of  ⋆ ( ⋅ ) is the map  ⋆ − 1( ⋅ ) : R → so(R, 2) defined by the equation $$\begin{aligned} \star{\left( \alpha \right)}=\begin{pmatrix} 0 & -\alpha\\ \alpha & 0 \end{pmatrix}. \label{equ:hodge2d}\end{aligned}$$ For ${{\rm n}\_{\rm sd}}=3$, the map  ⋆ ( ⋅ ) : so(R, 3) → M3, 1(R) is defined by the equation  ⋆ ( ⋅ ) = (( ⋅ )32, ( ⋅ )13, ( ⋅ )21). The inverse of  ⋆ ( ⋅ ) is the map ⋆ − 1( ⋅ ) : M3, 1(R) → so(R, 3) defined by the equation $$\begin{aligned} \star{\left( {\left( \alpha\_1,\alpha\_2,\alpha\_3 \right)} \right)}=\begin{pmatrix} 0 & -\alpha\_3 &\alpha\_2\\ \alpha\_3 & 0 & -\alpha\_1\\ -\alpha\_2 &\alpha\_1 &0 \end{pmatrix}. \label{equ:hodge3d}\end{aligned}$$ To make our notation appear less cumbersome we denote  ⋆ − 1( ⋅ ) too as  ⋆ ( ⋅ ). Whether we mean  ⋆ ( ⋅ ) or  ⋆ − 1( ⋅ ) will be clear from the argument of  ⋆ ( ⋅ ). Derivation of $~\eqref{equ:sym-1}$, i.e., proof of the statement that square of ${\overline{{{\boldsymbol{\sf W}}}}}(\tau)$ is equal to the symmetric part of ${{\boldsymbol{\sf P}}}(\tau)$ ============================================================================================================================================================================================ The following lemmas can be shown to be equivalent to some of the standard results in the mechanics of rigid solids, see the work in and, which treats the rigid body motion in a modern continuum mechanics style; or see the work in and, which treats the rigid body motion from a perspective of geometric mechanics. However, at a cursory level, due to our notation and formalism, those results might appear to be different from the below lemmas. The differences in notation and formalism are primarily due to the fact that in our work we distinguish between the vector spaces to which the various physical quantities, e.g., the rotation operation, belong to and the (non-dimensional) matrix vector spaces to which the component representations of those quantities belong to. For that reason, we believed that it would be helpful to the reader if we presented the following lemmas using the notation and formalism that we use in the current work. Skew symmetry of ${\overline{{{\boldsymbol{\sf W}}}}}(\tau)$ ------------------------------------------------------------ The matrix ${\overline{{{\boldsymbol{\sf W}}}}}(\tau)$, defined in $\eqref{eq:busfW}$, is skew-symmetric. [lemma:skew] *Proof*. It can be shown using ${\overline{{{\boldsymbol{\sf W}}}}}(\tau)$’s definition $\eqref{eq:busfW}$ and equations $\eqref{equ:wdeNDF}$ and $\eqref{equ:QQ:1}$ that $${\overline{{{\boldsymbol{\sf W}}}}}(\tau)={\left. {{\boldsymbol{\sf Q}}} \right.^{\sf T}}(\tau)\,{{\boldsymbol{\sf Q}}}'(\tau). \label{eq:WbeqQtQp}$$ Differentiating $\eqref{equ:QQ:1}$ we get $$\left({{\boldsymbol{\sf Q}}}^{\sf T}\right)'(\tau)\,{{\boldsymbol{\sf Q}}}(\tau)+{\left. {{\boldsymbol{\sf Q}}} \right.^{\sf T}}(\tau)\,{{\boldsymbol{\sf Q}}}'(\tau)={{\boldsymbol{\sf 0}}}.\label{equ:QQ0}$$ Noting that ${\left( {\left. {{\boldsymbol{\sf Q}}} \right.^{\sf T}} \right)}'(\tau)={\left( {{\boldsymbol{\sf Q}}}' \right)^{\sf T}}(\tau)$ we see that the first term on the left hand side of is equal to ${\left( {{\boldsymbol{\sf Q}}}' \right)^{\sf T}}(\tau)\,{{\boldsymbol{\sf Q}}}(\tau)$, which, in fact, is equal to the transpose of the second term on the left hand side of. Thus, it follows from that $\text{sym}{\left( {\left. {{\boldsymbol{\sf Q}}} \right.^{\sf T}}(\tau)\,{{\boldsymbol{\sf Q}}}'(\tau) \right)}={{\boldsymbol{\sf 0}}}$. That is, that ${\left. {{\boldsymbol{\sf Q}}} \right.^{\sf T}}(\tau)\,{{\boldsymbol{\sf Q}}}'(\tau)$ is skew-symmetric. The result that ${\overline{{{\boldsymbol{\sf W}}}}}(\tau)$ too is skew-symmetric now immediately follows from $\eqref{eq:WbeqQtQp}$. Derivation of equation $~\eqref{equ:sym-1}$[theorem:skeww] ---------------------------------------------------------- The symmetric part of ${{\boldsymbol{\sf P}}}(\tau)$ is equal to the square of ${\overline{{{\boldsymbol{\sf W}}}}}(\tau)$. *Proof*. Differentiating $\eqref{equ:QQ:1}$ twice and rearranging we get that $$\begin{aligned} \left({{\boldsymbol{\sf Q}}}^{\sf T}\right)''(\tau)\,{{\boldsymbol{\sf Q}}}(\tau)+{\left. {{\boldsymbol{\sf Q}}} \right.^{\sf T}}(\tau)\,{{\boldsymbol{\sf Q}}}''(\tau)&=-2\left({{\boldsymbol{\sf Q}}}^{\sf T}\right)'(\tau)\,{{\boldsymbol{\sf Q}}}'(\tau). \label{equ:diffQ}\end{aligned}$$ The equation $\eqref{equ:diffQ}$ on noting that the first and second terms on its left hand side are in fact transposes of each other simplifies to $$\begin{aligned} \text{sym}{\left( {\left. {{\boldsymbol{\sf Q}}} \right.^{\sf T}}(\tau)\,{{\boldsymbol{\sf Q}}}''(\tau) \right)}&=-\left({{\boldsymbol{\sf Q}}}^{\sf T}\right)'(\tau)\,{{\boldsymbol{\sf Q}}}'(\tau). \label{equ:diffQ2}\end{aligned}$$ Writing the term on the right hand side of $\eqref{equ:diffQ2}$ as $-\left({{\boldsymbol{\sf Q}}}^{\sf T}\right)'(\tau)\,{{\boldsymbol{\sf I}}}\,{{\boldsymbol{\sf Q}}}'(\tau)$, and then using $\eqref{equ:QQ:2}$ and replacing the ${{\boldsymbol{\sf I}}}$ in the resulting equation with ${{\boldsymbol{\sf Q}}}(\tau){\left. {{\boldsymbol{\sf Q}}} \right.^{\sf T}}(\tau)$, we get $$\text{sym}{\left( {\left. {{\boldsymbol{\sf Q}}} \right.^{\sf T}}(\tau)\,{{\boldsymbol{\sf Q}}}''(\tau) \right)}=- {\left( \left({{\boldsymbol{\sf Q}}}^{\sf T}\right)'(\tau)\,{{\boldsymbol{\sf Q}}}(\tau) \right)}\,{\left( {\left. {{\boldsymbol{\sf Q}}} \right.^{\sf T}}(\tau)\,{{\boldsymbol{\sf Q}}}'(\tau) \right)}. \label{equ:diffQ3}$$ Noting that ${\left( {\left. {{\boldsymbol{\sf Q}}} \right.^{\sf T}} \right)}'(\tau)={\left( {{\boldsymbol{\sf Q}}}' \right)^{\sf T}}(\tau)$ we see that the first factor on the right hand side of $\eqref{equ:diffQ3}$ is equal to ${\left( {{\boldsymbol{\sf Q}}}' \right)^{\sf T}}(\tau)\,{{\boldsymbol{\sf Q}}}(\tau)$, which is the transpose of the second factor on the right hand side of $\eqref{equ:diffQ3}$, namely ${\left. {{\boldsymbol{\sf Q}}} \right.^{\sf T}}(\tau)\,{{\boldsymbol{\sf Q}}}'(\tau)$. We, however, know from $\eqref{eq:WbeqQtQp}$ that this second factor is equal to ${\overline{{{\boldsymbol{\sf W}}}}}(\tau)$. Thus, we get from $\eqref{equ:diffQ3}$ that $$\begin{aligned} \text{sym}{\left( {\left. {{\boldsymbol{\sf Q}}} \right.^{\sf T}}(\tau)\,{{\boldsymbol{\sf Q}}}''(\tau) \right)}&= -{\left. {\overline{{{\boldsymbol{\sf W}}}}} \right.^{\sf T}}(\tau)\,{\overline{{{\boldsymbol{\sf W}}}}}(\tau), \label{eq:PW1} \intertext{which simplifies on using Lemma \ref{lemma:skew} to} \text{sym}{\left( {\left. {{\boldsymbol{\sf Q}}} \right.^{\sf T}}(\tau)\,{{\boldsymbol{\sf Q}}}''(\tau) \right)}&= {\overline{{{\boldsymbol{\sf W}}}}}^{2}(\tau). \label{eq:PW2}\end{aligned}$$ [eq:PW] Using $\eqref{equ:pde}$ and replacing the quantity ${\left. {{\boldsymbol{\sf Q}}} \right.^{\sf T}}(\tau)\,{{\boldsymbol{\sf Q}}}''(\tau)$ appearing on the left hand side of $\eqref{eq:PW2}$ with ${{\boldsymbol{\sf P}}}(\tau)$, we get $$\text{sym}\left({{\boldsymbol{\sf P}}}(\tau)\right)={\overline{{{\boldsymbol{\sf W}}}}}^{2}(\tau).$$ The matrix $\text{sym}{\left( {{\boldsymbol{\sf P}}}(\tau) \right)}$ is negative semidefinite and its negative eigenvalues, if they exist, have even algebraic multiplicities ============================================================================================================================================================================= The entries of ${\overline{{{\boldsymbol{\sf W}}}}}(\tau)$ and $\text{sym}{\left( {{\boldsymbol{\sf P}}}(\tau) \right)}$ are all real numbers. However, in this section we consider ${\overline{{{\boldsymbol{\sf W}}}}}(\tau)$ and $\text{sym}{\left( {{\boldsymbol{\sf P}}}(\tau) \right)}$ to be elements of $\mathcal{M}\_{{{\rm n}\_{\rm sd}},{{\rm n}\_{\rm sd}}}(\mathbb{C})$, where $\mathcal{M}\_{{{\rm n}\_{\rm sd}},{{\rm n}\_{\rm sd}}}(\mathbb{C})$ is the space of all ${{\rm n}\_{\rm sd}}\times{{\rm n}\_{\rm sd}}$ matrices whose entries belong to C, the space of complex numbers. Negative semi-definiteness of $\text{sym}{\left( {{\boldsymbol{\sf P}}}(\tau) \right)}$ --------------------------------------------------------------------------------------- The matrix $\text{sym}{\left( {{\boldsymbol{\sf P}}}(\tau) \right)}$ is negative semidefinite. [lemma:negative] *Proof*. The matrix $\text{sym}{\left( {{\boldsymbol{\sf P}}}(\tau) \right)}$ is self-adjoint since it is equal to its transpose, which, as all of $\text{sym}{\left( {{\boldsymbol{\sf P}}}(\tau) \right)}$’s entries are real, is equal to its conjugate-transpose, i.e., to its adjoint. Hence it follows from that the matrix $\text{sym}{\left( {{\boldsymbol{\sf P}}}(\tau) \right)}$ is negative semidefinite iff the inner product $\langle\text{sym}{\left( {{\boldsymbol{\sf P}}}(\tau) \right)}{{\boldsymbol{\sf X}}},{{\boldsymbol{\sf X}}}\rangle$, where ${{\boldsymbol{\sf X}}}\in\mathcal{M}\_{{{\rm n}\_{\rm sd}},1}(\mathbb{C})$ but is otherwise arbitrary, is always non-positive. It follows from $\eqref{equ:sym-1}$ that $$\langle\text{sym}{\left( {{\boldsymbol{\sf P}}}(\tau) \right)}{{\boldsymbol{\sf X}}},{{\boldsymbol{\sf X}}}\rangle=\langle{\overline{{{\boldsymbol{\sf W}}}}}^2(\tau){{\boldsymbol{\sf X}}},{{\boldsymbol{\sf X}}}\rangle. \label{equ:sym1xxt}$$ Since we know from Lemma [lemma:skew] that ${\overline{{{\boldsymbol{\sf W}}}}}(\tau)$ is skew-symmetric, we can write ${\overline{{{\boldsymbol{\sf W}}}}}^2(\tau)$ on the right hand side of $\eqref{equ:sym1xxt}$ as $-{\left. {\overline{{{\boldsymbol{\sf W}}}}} \right.^{\sf T}}(\tau)\,{\overline{{{\boldsymbol{\sf W}}}}}(\tau)$. On doing so and using the properties of the inner product we get $$\langle\text{sym}{\left( {{\boldsymbol{\sf P}}}(\tau) \right)}{{\boldsymbol{\sf X}}},{{\boldsymbol{\sf X}}}\rangle=-\langle{\overline{{{\boldsymbol{\sf W}}}}}(\tau){{\boldsymbol{\sf X}}},{\overline{{{\boldsymbol{\sf W}}}}}(\tau){{\boldsymbol{\sf X}}}\rangle. \label{equ:W2eqngWtW}$$ It also follows from the properties of the inner product that $\langle{\overline{{{\boldsymbol{\sf W}}}}}(\tau){{\boldsymbol{\sf X}}},{\overline{{{\boldsymbol{\sf W}}}}}(\tau){{\boldsymbol{\sf X}}}\rangle$ is always non-negative. Therefore we get from $\eqref{equ:W2eqngWtW}$ that $\langle\text{sym}{\left( {{\boldsymbol{\sf P}}}(\tau) \right)}{{\boldsymbol{\sf X}}},{{\boldsymbol{\sf X}}}\rangle$ is always non-positive, or equivalently that $\text{sym}{\left( {{\boldsymbol{\sf P}}}(\tau) \right)}$ is negative semidefinite. Form of the eigenvalues of $\text{sym}{\left( {{\boldsymbol{\sf P}}}(\tau) \right)}$ ------------------------------------------------------------------------------------ The matrix $\text{sym}{\left( {{\boldsymbol{\sf P}}}(\tau) \right)}$’s negative eigenvalues, if they exist, have even algebraic multiplicities. [lemma:form] *Proof.* Say ${{\boldsymbol{\sf S}}}\in\mathcal{M}\_{{{\rm n}\_{\rm sd}},{{\rm n}\_{\rm sd}}}(\mathbb{C})$ then ${{\boldsymbol{\sf S}}}$ is said to be normal when it commutes with its conjugate-transpose ${\left. {{\boldsymbol{\sf S}}} \right.^{\sf H}}$, i.e., when ${{\boldsymbol{\sf S}}}{\left. {{\boldsymbol{\sf S}}} \right.^{\sf H}}={\left. {{\boldsymbol{\sf S}}} \right.^{\sf H}}{{\boldsymbol{\sf S}}}$. Note that $${\overline{{{\boldsymbol{\sf W}}}}}(\tau){\left. {\overline{{{\boldsymbol{\sf W}}}}} \right.^{\sf T}}(\tau)={\overline{{{\boldsymbol{\sf W}}}}}(\tau){\left( -{\overline{{{\boldsymbol{\sf W}}}}}(\tau) \right)}={\left( -{\overline{{{\boldsymbol{\sf W}}}}}(\tau) \right)}{\overline{{{\boldsymbol{\sf W}}}}}(\tau)={\left. {\overline{{{\boldsymbol{\sf W}}}}} \right.^{\sf T}}(\tau){\overline{{{\boldsymbol{\sf W}}}}}(\tau). \label{equ:WbisNormal}$$ The first and the third equalities in $\eqref{equ:WbisNormal}$ follow from the fact that ${\overline{{{\boldsymbol{\sf W}}}}}(\tau)$ is skew-symmetric (Lemma [lemma:skew]). Since all the entries of ${\overline{{{\boldsymbol{\sf W}}}}}(\tau)$ are real, the transpose of ${\overline{{{\boldsymbol{\sf W}}}}}(\tau)$ is equal to its conjugate-transpose. For this reason it follows from $\eqref{equ:WbisNormal}$ that ${\overline{{{\boldsymbol{\sf W}}}}}(\tau){\left. {\overline{{{\boldsymbol{\sf W}}}}} \right.^{\sf H}}(\tau)={\left. {\overline{{{\boldsymbol{\sf W}}}}} \right.^{\sf H}}(\tau){\overline{{{\boldsymbol{\sf W}}}}}(\tau)$, or equivalently that ${\overline{{{\boldsymbol{\sf W}}}}}(\tau)$ is normal. Since ${\overline{{{\boldsymbol{\sf W}}}}}(\tau)$ is normal it follows from the *Complex Spectral Theorem* e.g., see, that there exists an unitary matrix ${{\boldsymbol{\sf U}}}(\tau)\in\mathcal{M}\_{{{\rm n}\_{\rm sd}},{{\rm n}\_{\rm sd}}}(\mathbb{C})$ such that $\textrm{\ensuremath{{\left. {{\boldsymbol{\sf U}}} \right.^{\sf H}}(\tau){\overline{{{\boldsymbol{\sf W}}}}}(\tau){{\boldsymbol{\sf U}}}(\tau)} }=\text{diag}{\left( \mu\_i(\tau) \right)}\_{i\in\mathcal{I}}$, where *μ**i*(*τ*) ∈ C and diag(*μ**i*(*τ*))*i* ∈ I is a diagonal matrix whose diagonal entries are $\mu\_1(\tau),\mu\_2(\tau)\ldots,\mu\_{{{\rm n}\_{\rm sd}}}(\tau)$. That is, $$\textrm{\ensuremath{{\left. {{\boldsymbol{\sf U}}} \right.^{\sf H}}(\tau){\overline{{{\boldsymbol{\sf W}}}}}(\tau){{\boldsymbol{\sf U}}}(\tau)} }= \begin{pmatrix}\mu\_{1}(\tau) & 0 &\dots & 0\\0 &\mu\_{2}(\tau) &\dots & 0\\\vdots &\vdots &\ddots &\vdots\\0 & 0 &\dots &\mu\_{{{\rm n}\_{\rm sd}}}(\tau)\end{pmatrix}. \label{equ:UhWbUeqDiagMus}$$ The complex numbers *μ**i*(*τ*), not necessarily distinct, are the eigenvalues of ${\overline{{{\boldsymbol{\sf W}}}}}(\tau)$ and the columns of ${{\boldsymbol{\sf U}}}(\tau)$ are the eigenvectors of ${\overline{{{\boldsymbol{\sf W}}}}}(\tau)$. To be more specific, $${\overline{{{\boldsymbol{\sf W}}}}}(\tau){{\boldsymbol{\sf u}}}\_i(\tau)=\mu\_i(\tau){{\boldsymbol{\sf u}}}\_i(\tau)~\text{(no sum over $i$)}, \label{equ:Wbuieqmuiui}$$ where ${{\boldsymbol{\sf u}}}\_i(\tau) ={\left( {\left( {{\boldsymbol{\sf U}}}(\tau) \right)}\_{ji} \right)}\_{j\in\mathcal{I}}$. Applying the operation of complex-conjugation to both sides of $\eqref{equ:Wbuieqmuiui}$ and noting that the entries of ${\overline{{{\boldsymbol{\sf W}}}}}(\tau)$ are all real we get that $$\begin{aligned} {\overline{{{\boldsymbol{\sf W}}}}}(\tau){{\boldsymbol{\sf u}}}\_i^\*(\tau) &=\mu\_i^\*(\tau)\,{{\boldsymbol{\sf u}}}\_i^\*(\tau) ~\text{(no sum over $i$)}, \label{equ:WbuieqmuiuiCC}\end{aligned}$$ where *μ**i*\*(*τ*) and ${{\boldsymbol{\sf u}}}\_i^{\ast}(\tau)$ are, respectively, the complex-conjugates of *μ**i*(*τ*) and ${{\boldsymbol{\sf u}}}\_i(\tau)$. It follows from $\eqref{equ:WbuieqmuiuiCC}$ that if *μ**i*(*τ*) is an eigenvalue of ${\overline{{{\boldsymbol{\sf W}}}}}(\tau)$ then so is *μ**i*\*(*τ*). Thus (*μ**i*(*τ*))*i* ∈ I has the form $\varsigma{\left( z\_{1}(\tau), z\_1^\*(\tau), z\_2(\tau), z\_2^\*(\tau),\ldots, z\_k(\tau), z\_k^\*(\tau),\alpha\_1(\tau),\alpha\_2(\tau),\ldots,\alpha\_l(\tau) \right)}$, where $\varsigma(\cdot)$ is the permutation operation, *z**i*(*τ*) ∈ C with Im(*z**i*(*τ*)) ≠ 0, *z**i*\*(*τ*) is the complex-conjugate of *z**i*(*τ*), $0\le k\le\left\lfloor{{\rm n}\_{\rm sd}}/2\right\rfloor$[6](#fn6), *α**i*(*τ*) ∈ R, and $l={{\rm n}\_{\rm sd}}-2 k$. It is not necessary that the complex numbers *z**i*(*τ*) be distinct from one another. The same is the case with the real numbers *α**i*(*τ*). Taking the square on both sides of $\eqref{equ:UhWbUeqDiagMus}$ and using our knowledge about the form of (*μ**i*(*τ*))*i* ∈ I we get that $${\left( \textrm{\ensuremath{{\left. {{\boldsymbol{\sf U}}} \right.^{\sf H}}(\tau){\overline{{{\boldsymbol{\sf W}}}}}(\tau){{\boldsymbol{\sf U}}}(\tau)} } \right)}^2=\text{diag}\,\varsigma{\left( {\left. z\_{1} \right.}^2(\tau),{\left. z\_1^\* \right.}^2(\tau),\ldots,{\left. z\_{k} \right.}^2(\tau),{\left. z\_{k}^\* \right.}^2(\tau),\alpha\_1^2(\tau),\ldots,\alpha\_l^2(\tau) \right)}.\label{equ:UhWbUeqDiagMusSq}$$ The expression on the left hand side of $\eqref{equ:UhWbUeqDiagMusSq}$ can be simplified as ${\left( \textrm{\ensuremath{{\left. {{\boldsymbol{\sf U}}} \right.^{\sf H}}(\tau){\overline{{{\boldsymbol{\sf W}}}}}(\tau){{\boldsymbol{\sf U}}}(\tau)} } \right)}^2={\left. {{\boldsymbol{\sf U}}} \right.^{\sf H}}(\tau){\overline{{{\boldsymbol{\sf W}}}}}(\tau){{\boldsymbol{\sf U}}}(\tau){\left. {{\boldsymbol{\sf U}}} \right.^{\sf H}}(\tau){\overline{{{\boldsymbol{\sf W}}}}}(\tau){{\boldsymbol{\sf U}}}(\tau)={\left. {{\boldsymbol{\sf U}}} \right.^{\sf H}}(\tau){\overline{{{\boldsymbol{\sf W}}}}}^2(\tau){{\boldsymbol{\sf U}}}(\tau)={\left. {{\boldsymbol{\sf U}}} \right.^{\sf H}}(\tau)\text{sym}{\left( {{\boldsymbol{\sf P}}}(\tau) \right)}{{\boldsymbol{\sf U}}}(\tau)$, where the second equality follows from the fact that ${{\boldsymbol{\sf U}}}(\tau)$ is an unitary matrix, and the third equality from. Thus we can get from$\eqref{equ:UhWbUeqDiagMusSq}$ that $$\begin{aligned} {\left. {{\boldsymbol{\sf U}}} \right.^{\sf H}}(\tau)\text{sym}{\left( {{\boldsymbol{\sf P}}}(\tau) \right)}{{\boldsymbol{\sf U}}}(\tau)&=\text{diag}\,\varsigma{\left( {\left. z\_{1} \right.}^2(\tau),{\left. z\_1^\* \right.}^2(\tau),\ldots,{\left. z\_{k} \right.}^2(\tau),{\left. z\_{k}^\* \right.}^2(\tau),\alpha\_1^2(\tau),\ldots,\alpha\_l^2(\tau) \right)}, \label{eq:FormofD}\end{aligned}$$ [equ:DeCompOfSymP] which implies that ${{\boldsymbol{\sf u}}}\_i(\tau)$ are the eigenvectors of $\text{sym}{\left( {{\boldsymbol{\sf P}}}(\tau) \right)}$ as well, but with their corresponding eigenvalues being $\varsigma{\left( {\left. z\_{1} \right.}^2(\tau),{\left. z\_1^\* \right.}^2(\tau),\ldots,{\left. z\_{k} \right.}^2(\tau),{\left. z\_{k}^\* \right.}^2(\tau),\alpha\_1^2(\tau),\ldots,\alpha\_l^2(\tau) \right)}$. We know from Lemma [lemma:negative] that all of $\text{sym}{\left( {{\boldsymbol{\sf P}}}(\tau) \right)}$’s eigenvalues are non-positive. Therefore, each *z**i*(*τ*) must be of the form $\lambda\_i(\tau)\sqrt{-1}$, where *λ**i*(*τ*) ∈ R and *λ**i*(*τ*) ≠ 0, and *α**i*(*τ*) = 0. Hence, we get from $\eqref{eq:FormofD}$ that $${\left. {{\boldsymbol{\sf U}}} \right.^{\sf H}}(\tau)\text{sym}{\left( {{\boldsymbol{\sf P}}}(\tau) \right)}{{\boldsymbol{\sf U}}}(\tau)=\text{diag}\,\varsigma{\left( -\lambda\_1^2(\tau),-\lambda\_1^2(\tau),\ldots, -\lambda\_k^2(\tau),-\lambda\_k^2(\tau), 0,\ldots,0 \right)}, \label{equ:ComplexPsymDecomposition}$$ where, to reiterate, $0\le k\le\left\lfloor{{\rm n}\_{\rm sd}}/2\right\rfloor$ and *λ**i*(*τ*), when they exist, are non-zero and not necessarily distinct. It can be noted from this last assertion that all of $\text{sym}{\left( {{\boldsymbol{\sf P}}}(\tau) \right)}$’s negative eigenvalues, specifically those corresponding to *λ**i*(*τ*), are of even geometric multiplicities. For the case of symmetric matrices, algebraic and geometric multiplicities are one and the same. Therefore, $\text{sym}{\left( {{\boldsymbol{\sf P}}}(\tau) \right)}$’s negative eigenvalues, when they exist, are also of even algebraic multiplicities. Calculating ${\overline{{{\boldsymbol{\sf W}}}}}(\tau)$ as the square-root of $\text{sym}{\left( {{\boldsymbol{\sf P}}}(\tau) \right)}$[subsec:sqrtprocedure] ============================================================================================================================================================= A spectral decomposition of $\text{sym}{\left( {{\boldsymbol{\sf P}}}(\tau) \right)}$[subsec:A-spectral-decomposition] ---------------------------------------------------------------------------------------------------------------------- Since $\text{sym}{\left( {{\boldsymbol{\sf P}}}(\tau) \right)}$ is a real symmetric matrix it follows from the *Real Spectral Theorem* that it can be decomposed as $${{\boldsymbol{\sf N}}}(\tau)\,{{\boldsymbol{\sf D}}}(\tau)\,{\left. {{\boldsymbol{\sf N}}} \right.^{\sf T}}(\tau), \label{eq:RealSpectralDecompositionofSymP}$$ where ${{\boldsymbol{\sf D}}}(\tau)$ and ${{\boldsymbol{\sf N}}}(\tau)$ belong to $\mathcal{M}\_{{{\rm n}\_{\rm sd}},{{\rm n}\_{\rm sd}}}(\mathbb{R})$. We first describe ${{\boldsymbol{\sf N}}}(\tau)$ and then ${{\boldsymbol{\sf D}}}(\tau)$, in the next paragraph. The matrix ${{\boldsymbol{\sf N}}}(\tau):={\left( {{\boldsymbol{\sf n}}}\_1(\tau),\ldots,{{\boldsymbol{\sf n}}}\_{{{\rm n}\_{\rm sd}}}(\tau) \right)^{\sf T}}$ where ${{\boldsymbol{\sf n}}}\_i(\tau)\in\mathcal{M}\_{{{\rm n}\_{\rm sd}},1}(\mathbb{R})$ are $\text{sym}{\left( {{\boldsymbol{\sf P}}}(\tau) \right)}$’s eigenvectors that are constructed such that ${{\boldsymbol{\sf n}}}\_i(\tau)\cdot{{\boldsymbol{\sf n}}}\_{j}(\tau)=\delta\_{ij}$, or equivalently $${{\boldsymbol{\sf N}}}(\tau){\left. {{\boldsymbol{\sf N}}} \right.^{\sf T}}(\tau)={{\boldsymbol{\sf I}}}. \label{eq:NOrthonormalityCondition}$$ Using $\eqref{equ:ComplexPsymDecomposition}$ it can be shown that for ${{\rm n}\_{\rm sd}}=1$, ${{\boldsymbol{\sf D}}}(\tau)={\left( 0 \right)}$; for ${{\rm n}\_{\rm sd}}=2$, ${{\boldsymbol{\sf D}}}(\tau)=\text{diag}{\left( 0,0 \right)}$ or diag( − *λ*12(*τ*),  − *λ*12(*τ*)), where *λ*1(*τ*) ≠ 0; and for ${{\rm n}\_{\rm sd}}=3$, ${{\boldsymbol{\sf D}}}(\tau)=\text{diag}\,(0,0,0)$ or $\text{diag}\,\varsigma{\left( -\lambda\_1^2(\tau),-\lambda\_1^2(\tau),0 \right)}$. The last two results can be summarized by saying that when ${{\rm n}\_{\rm sd}}=2$, $${{\boldsymbol{\sf D}}}(\tau)=\text{diag}{\left( -\lambda^2(\tau),-\lambda^2(\tau) \right)}, \label{eq:DExpFormnsd2}$$ where *λ*(*τ*) ∈ R, and when ${{\rm n}\_{\rm sd}}=3$, ${{\boldsymbol{\sf D}}}(\tau)=\text{diag}\,\varsigma(-\lambda(\tau)^2,-\lambda(\tau)^2,0)$. Without loss of generality, we can choose the order of ${{\boldsymbol{\sf n}}}\_i(\tau)$ so that their respective eigenvalues form a non-increasing sequence[7](#fn7). Therefore, for concreteness in the case of ${{\rm n}\_{\rm sd}}=3$ we take $$\begin{aligned} {{\boldsymbol{\sf D}}}(\tau)&=\begin{pmatrix}0 & 0 & 0\\0 & -\lambda^2(\tau) & 0\\0 &0 & -\lambda^2(\tau)\end{pmatrix}. \label{eq:DExpFormnsd3}\end{aligned}$$ Calculation of ${\overline{{{\boldsymbol{\sf W}}}}}(\tau)$ from $\text{sym}{\left( {{\boldsymbol{\sf P}}}(\tau) \right)}$ using $\eqref{equ:sym-1}$[subsec:Calculation-of-] --------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Let $${{\boldsymbol{\sf F}}}(\tau):={\left. {{\boldsymbol{\sf N}}} \right.^{\sf T}}(\tau)\,{\overline{{{\boldsymbol{\sf W}}}}}(\tau)\,{{\boldsymbol{\sf N}}}(\tau).\label{eq:FDef}$$ It can be shown using ${{\boldsymbol{\sf F}}}(\tau)$’s definition, equations $\eqref{eq:NOrthonormalityCondition}$, and $\eqref{equ:sym-1}$, and $\text{sym}{\left( {{\boldsymbol{\sf P}}}(\tau) \right)}$’s decomposition that is derived in [subsec:A-spectral-decomposition] and summarized in $\eqref{equ:SymPTheoreticalSpecDecomposition}$ that $${{\boldsymbol{\sf F}}}^2(\tau)={{\boldsymbol{\sf D}}}(\tau). \label{eq:F2D}$$ Substituting ${{\boldsymbol{\sf D}}}(\tau)$ in $\eqref{eq:F2D}$ from $\eqref{eq:DExpFormnsd2}$ and $\eqref{eq:DExpFormnsd3}$ for then noting from Lemma [lemma:skew] and ${{\boldsymbol{\sf F}}}(\tau)$’s definition that ${{\boldsymbol{\sf F}}}(\tau)$ is skew-symmetric, it can be shown that for ${{\rm n}\_{\rm sd}}=2$ and 3 $${{\boldsymbol{\sf F}}}(\tau)= \pm \star{\left( \lambda(\tau) \right)} \label{equ:FfromLambda2}$$ and $${{\boldsymbol{\sf F}}}(\tau)= \pm \star{\left( {\left( \lambda(\tau),0,0 \right)} \right)}, \label{equ:FfromLambda3}$$ respectively. Equations $\ref{equ:WbTheoreticalDecomposition}$ follow from $\eqref{eq:FDef}$, $\eqref{equ:FfromLambda2}$, and $\eqref{equ:FfromLambda3}$. References ========== --- 1. A graphical user interface for applying the AO algorithm to different types of data sets and visualizing its results is freely available [↩](#fnref1) 2. In our previous paper, we denoted the set of bounded linear operators from U to W as *B*(U, W). As a linear operator on a finite dimensional normed space is automatically bounded, here we use L(U, W) instead of *B*(U, W) to denote the set of all linear operators from U to W.[↩](#fnref2) 3. For the definition of Fréchet derivative in the context of the current work see [↩](#fnref3) 4. [fn:Or-to-be]Or to be mathematically precise, in the ${{}^{ \mathscr{l} }\! {\boldsymbol{a}}}\_{\tau i}\in{\mathbb{A}}$ direction that is defined such that ${\left( {{}^{ \mathscr{l} }\! {\boldsymbol{a}}}\_{\tau i}{\boldsymbol{s}} \right)}{\boldsymbol{s}}={{}^{ \mathscr{l} }\! {\boldsymbol{e}}}\_{\tau i}$.[↩](#fnref4) 5. We plan to publish this result along with its proof elsewhere.[↩](#fnref5) 6. Here ⌊ ⋅ ⌋ is the *floor* function.[↩](#fnref6) 7. The matrix ${\overline{{{\boldsymbol{\sf W}}}}}(\tau)$ as the square root of $\text{sym}{\left( {{\boldsymbol{\sf P}}}(\tau) \right)}$ does not depend on the order of ${{\boldsymbol{\sf n}}}\_i(\tau)$. Different orders will lead to the same ${\overline{{{\boldsymbol{\sf W}}}}}(\tau)$.[↩](#fnref7) Determining rigid body motion from accelerometer data through the square-root of a negative semi-definite tensor, with applications in mild traumatic brain injury ================================================================================================================================================================== Mild Traumatic Brain Injuries (mTBI) are caused by violent head motions or impacts. Most mTBI prevention strategies explicitly or implicitly rely on a “brain injury criterion”. A brain injury criterion takes some descriptor of the head’s motion as input and yields a prediction for that motion’s potential for causing mTBI as the output. The inputs are descriptors of the head’s motion that are usually synthesized from accelerometer and gyroscope data. In the context of brain injury criterion the head is modeled as a rigid body. We present an algorithm for determining the complete motion of the head using data from only four head mounted tri-axial accelerometers. In contrast to inertial measurement unit based algorithms for determining rigid body motion the presented algorithm does not depend on data from gyroscopes; which consume much more power than accelerometers. Several algorithms that also make use of data from only accelerometers already exist. However, those algorithms, except for the recently presented AO-algorithm [Rahaman MM, Fang W, Fawzi AL, Wan Y, Kesari H (2020): *J Mech Phys Solids* 104014], give the rigid body’s acceleration field in terms of the body frame, which in general is unknown. Compared to the AO-algorithm the presented algorithm is much more insensitive to bias type errors, such as those that arise from inaccurate measurement of sensor positions and orientations. **Keywords:** mTBI, Tensor square root, Rigid body motion, Accelerometers, Inertial navigation, Continuum mechanics Introduction ============ Mild Traumatic Brian Injury (mTBI) is the most common injury among military personnel and it is estimated that as many as 600 per 100,000 people experience mTBIs each year across the world. Mild Traumatic Brain Injuries are caused by violent head motions, that may occur from intense blunt impacts to the head in contact sports, motor vehicle accidents, falls following blasts, etc. In mTBI, the motion of the head causes the soft tissue of the brain to deform. The magnitude and time rate of brain deformation can cause brain cells to die. There have been many strategies aimed at preventing mTBI. In sports, new rules aim to modify player behavior in order to decrease or eliminate exposure to blunt impacts. Helmets and neck collars are examples of equipment that can alter the motion experienced by the head. A jugular vein compression collar aims to change the stiffness of the brain making it less susceptible to injury. Most mTBI prevention strategies explicitly or implicitly rely on a “brain injury criterion” for their effective synthesis, implementation, and evaluation. A brain injury criterion takes some descriptor of the head’s motion as input and yields a prediction for that motion’s potential for causing mTBI as the output. When we refer to any aspect of the head’s motion, we, in fact, are referring to that aspect as it pertains to the skull’s motion; since it is the skull’s motion that is, at least currently, observable and quantifiable in the field, either using video recording equipment or inertial sensor systems. The Young’s modulus of bone from human skulls generally lies in the 2–13 GPa range. In comparision, brain tissue is extremely compliant. Recent indentation tests on brain slices that were kept hydrated show that the Young’s modulus of brain tissue lies in the 1–2 kPa range (white matter 1.9 ± 0.6 kPa, and gray matter 1.4 ± 0.3 kPa). Due to this large disparity between the skull’s and the brain’s stiffnesses, in biomechanical investigations of mTBI the skull is usually modeled as a rigid body. Thus, inputs to the brain injury criteria are rigid body motion descriptors, such as angular velocity time series, translational acceleration times series, etc., or a combination of such time series. Rigid body motion can be thought of as a composition of translatory and rotatory motions. In initial brain injury criteria the focus was on the head’s translatory motion. Two of the first published injury criteria are the Gadd Severity Index (SI) and the Head Injury Criterion (HIC). Both SI and HIC ignore the head’s rotations and take the head’s translational acceleration as their input. Later, however, it was realized that in the context of mTBI the head’s rotations play an even more important role in causing injury than its translations. The first brain injury criterion to take the rotational aspect of the head’s motion into consideration was GAMBIT. The input to GAMBIT is the tuple of center-of-mass-acceleration and angular-acceleration time series. Following the development of GAMBIT, brain injury criteria that use descriptors that only depend on the head’s rotational motion as inputs have also been put forward. One such criteria is the Brain Injury Criteria (BrIC). Aiming to compliment HIC, BrIC only uses the head’s angular velocity time series as input. We also note that there is currently significant activity in applying finite element modeling using 2D/3D anatomically consistent discrete geometry head models to evaluate or develop new brain injury criteria. Irrespective of which existing, or yet to be developed, brain injury criterion will be used in the future, its successful application will hinge on the availability of a robust algorithm for constructing the motion descriptor that the criterion takes as input from easily measurable data. Currently, different algorithms are used to obtain the descriptors taken by the injury criteria as inputs. The inputs to GAMBIT can be obtained from the measurements of one tri-axial accelerometer and one tri-axial gyroscope mounted in a mouthguard if the center-of-mass-acceleration and angular-acceleration are obtained by processing the data using the algorithm in. In another example the input to BrIC (i.e., angular velocity) is prepared by numerically integrating the angular acceleration, which is determined by applying the 6DOF algorithm to the data from 12 single-axis accelerometers mounted in a helmet. Interestingly the inputs to most of the currently employed brain injury criteria can be prepared from the knowledge of a few key rigid body motion descriptors. To make this idea more concrete, consider the following equation, which is often used to describe rigid body motion, $${{\boldsymbol{\sf x}}}(\tau)={{\boldsymbol{\sf Q}}}(\tau){{\boldsymbol{\sf X}}}+{{\boldsymbol{\sf c}}}(\tau). \label{eq:NondimensionalRotationDeformationMapping}$$ In $\eqref{eq:NondimensionalRotationDeformationMapping}$ *τ* is a real number that denotes a non-dimensional time instant; ${{\boldsymbol{\sf X}}}$ is a column matrix of real numbers that denotes the initial position vector of a rigid body material particle; ${{\boldsymbol{\sf x}}}(\tau)$ is the column matrix of real numbers that denotes that material particle’s position vector at the time instance *τ*; ${{\boldsymbol{\sf Q}}}(\tau)$ is a time dependent square matrix of real numbers with positive determinant whose transpose equals its inverse; and ${{\boldsymbol{\sf c}}}(\tau)$ is a time dependent column matrix of real numbers. The matrix ${{\boldsymbol{\sf Q}}}(\tau)$ quantifies the rotation or orientation of the rigid body at the time instance *τ*, while ${{\boldsymbol{\sf c}}}(\tau)$ quantifies the rigid body’s translation at that time instance. The inputs to most current brain injury criteria can be computed from a knowledge of the maps ${{\boldsymbol{\sf Q}}}$ and ${{\boldsymbol{\sf c}}}$ and their first and second-order time derivatives, i.e., ${{\boldsymbol{\sf Q}}}'$, ${{\boldsymbol{\sf c}}}'$, ${{\boldsymbol{\sf Q}}}''$, ${{\boldsymbol{\sf c}}}''$. In this manuscript we present an algorithm for determining these maps and their derivatives using data from only four tri The presented algorithm has some similarities to the one recently presented by Rahaman *et al.* [1](#fn1), which is referred to as the AO (accelerometer-only) algorithm. For reasons that will become clear shortly, we refer to the algorithm that we present in this manuscript as the $\sqrt{\text{AO}}$-algorithm. The AO-algorithm also presents a framework for completely determining the rigid body’s motion, i.e., for constructing the maps ${{\boldsymbol{\sf Q}}}$ and ${{\boldsymbol{\sf c}}}$ and their time derivatives, using data only from four tri-axial accelerometers. The $\sqrt{\textrm{AO}}$-algorithm has all the advantages of the AO-algorithm. There are existing algorithms for completely determining a rigid body’s motion using sensor data. However, these algorithms take data from sensor systems called inertial measurement units (IMUs). These units contain one or more gyroscopes. One of the primary advantages of the AO and $\sqrt{\text{AO}}$ algorithms is their non-dependence on gyroscopes. For a detailed discussion on why accelerometers are preferable over gyroscopes in the context of mTBI please see $\S$1 in. Briefly, gyroscopes’ power requirements are much higher than those of accelerometers (gyroscopes consume approximately 25 times more power than accelerometers ), and algorithms that aim to construct descriptors of a rigid body’s acceleration using data from gyroscopes add a significant amount of noise to those descriptors. Several algorithms exist for constructing inputs to brain injury criteria that that too only make use of data from accelerometers (Padgaonkar *et al.* , Genin *et al.*  and Naunheim *et al.* ). These algorithms, however, give much more limited information than is given by the AO and the $\sqrt{\text{AO}}$ algorithms. For example, all these algorithms give the rigid body’s acceleration field in terms of the body frame, which is a set of vectors that are attached to the rigid body, and hence move with it. These algorithms do not provide any information of how the body frame is oriented in space. However, that information is critical for constructing inputs for the upcoming finite element based brain injury criteria. The AO and the $\sqrt{\text{AO}}$ algorithms provide complete information of how the body frame is oriented in space. See $\S$1 in for further discussion on the advantages of the AO and the $\sqrt{\text{AO}}$ algorithms over other algorithms that also make use of only accelerometer data. Despite its many advantages we note that the AO-algorithm has one critical limitation. It is quite sensitive to bias type errors in the accelerometer data. Bias type errors are distinct from random errors in that they do not arise as a consequence of stochastic processes. For accelerometers, bias type errors can arise as a consequence of inaccurately defining sensor position and orientation (see Fig. [fig:errors]). As we explain below, the advantage of the $\sqrt{\text{AO}}$-algorithm over the AO-algorithm is that it is far less sensitive to bias type errors than the AO-algorithm. One of the critical steps in the $\sqrt{\text{AO}}$ and the AO algorithms is the determination of the map $\tau\mapsto{\overline{{{\boldsymbol{\sf W}}}}}(\tau)$. Here ${\overline{{{\boldsymbol{\sf W}}}}}(\tau)$ is time dependent skew-symmetric matrix of real numbers that is related to the rigid body’s angular velocity. In the AO-algorithm ${\overline{{{\boldsymbol{\sf W}}}}}$ is determined by numerically integrating the equation () $$\overline{{{\boldsymbol{\sf W}}}}'(\tau)=\textrm{skew part of }{{\boldsymbol{\sf P}}}(\tau).\label{equ:skew}$$ Here ${{\boldsymbol{\sf P}}}(\tau)$ is a square matrix of real numbers that is to be computed from the accelerometers’ data, relative locations, and orientations. Due to numerical integration any bias type errors in ${{\boldsymbol{\sf P}}}$ will give rise to errors in ${\overline{{{\boldsymbol{\sf W}}}}}$ that grow with time. In the $\sqrt{\text{AO}}$-algorithm we alternatively determine ${\overline{{{\boldsymbol{\sf W}}}}}$ by taking the square-root of the equation $$\overline{{{\boldsymbol{\sf W}}}}(\tau)\overline{{{\boldsymbol{\sf W}}}}(\tau)=\textrm{symmetric part of }{{\boldsymbol{\sf P}}}(\tau).\label{equ:sym-1}$$ We derive in [subsec:square]. Due to the elimination of the numerical integration step associated with the solution of $\eqref{equ:skew}$, the $\sqrt{\textrm{AO}}$-algorithm gives much better persistent accuracy over time when applied to the data containing bias type errors, compared to the AO-algorithm. In $\S$[Sec:Math] we present the mathematics and mechanics of rigid body motion from that is needed for the development of the $\sqrt{\text{AO}}$-algorithm. In $\S$[subsec:Review-of-the] we review the AO-algorithm as preparation for the development of the $\sqrt{\text{AO}}$-algorithm. In $\S$[Sec:PM] we detail the $\sqrt{\textrm{AO}}$-algorithm and present a procedure for taking the square root of $\eqref{equ:sym-1}$. In $\S$[sec:In-silico-validation] we check the validity and robustness of the $\sqrt{\text{AO}}$-algorithm. We do so by feeding in virtual accelerometer data, to which differing amounts of bias and noise type errors have been added, to both the $\sqrt{\text{AO}}$ and AO algorithms and comparing their predictions. Using those predictions in $\S$[sec:Results-and-Discussion] we show that the $\sqrt{\textrm{AO}}$-algorithm is less sensitive to bias type errors than the AO-algorithm. We make a few concluding remarks in $\S$[Sec:Con]. ![Bias and noise type errors in acceleration component measurements. When there are errors \delta{\boldsymbol{X}} and \delta\theta in accurately defining an accelerometer’s position and orientation, respectively (see (a)), bias type errors can occur in the measurement of acceleration components (see (b)). Noise type errors in the acceleration component measurements are usually a consequence of seismic, electrical, and other types of noise.](biasnoiseerror "fig:") [fig:errors] Preliminary mathematics and kinematics of rigid body motion =========================================================== In this section we briefly recapitulate the mathematics and kinematics of rigid body motion from that are needed for the development of the proposed $\sqrt{\textrm{AO}}$-algorithm. Notation -------- Let E be a finite dimensional, oriented, Hilbert space, i.e., a Euclidean vector space. The Euclidean point space E has E as its associated vector space. Let *o* ∈ E be E’s origin. The spaces E and E are related to each other such that for any point *x* ∈ E there exists a vector ${\boldsymbol{x}}\in{\mathbb{E}}$ such that $o+\boldsymbol{x}=x$. The topological space B serves as our model for a rigid body that executes its motion in E. For that reason, we refer to E and E as the physical Euclidean vector space and point space, respectively. The spaces $\mathbb{E}\_{{\rm R}}$ and $\mathcal{E}\_{\rm R}$ are another pair of Euclidean vector and point spaces, respectively, that are related to each other in the same way that E and E are related to each other. We refer to $\mathbb{E}\_{{\rm R}}$ and $\mathcal{E}\_{\rm R}$ as the reference Euclidean vector and point spaces, respectively. The spaces E, E, $\mathbb{E}\_{{\rm R}}$, and $\mathcal{E}\_{\rm R}$ have the same dimension, which we denote as $n\_{\rm sd}$. The dimension of B is less than or equal to ${{\rm n}\_{\rm sd}}$. We call a select continuous, injective map from B into $\mathbb{E}\_{{\rm R}}$ the reference configuration and denote it as $\boldsymbol{\kappa}\_{{\rm R}}$. The elements of B are called material particles. We call ${\boldsymbol{X}}=\boldsymbol{\kappa}\_{{\rm R}}(\mathcal{X})$, where X ∈ B, the reference position vector of the material particle X, and we call the set ${\boldsymbol{\kappa}}\_{{\rm R}}(\mathcal{B})={\left\{{\boldsymbol{\kappa}}\_{{\rm R}}(\mathcal{X})\in\mathbb{E}\_{{\rm R}}\, \big|\, \mathcal{X}\in\mathcal{B}\right\}}$ the reference body (see Fig. [fig:motion]). When we refer to ${\boldsymbol{X}}$ as a material particle we in fact mean the material particle $\boldsymbol{\kappa}\_{{\rm R}}^{-1}(\boldsymbol{X})\in\mathcal{B}$. We model time as a one-dimensional normed vector space T and denote a typical element in it as ${\boldsymbol{\tau}}=\tau{\boldsymbol{s}}$, where *τ* ∈ R and ${\boldsymbol{s}}$ is a fixed vector in T of unit norm. We model the rigid body’s motion using the one-parameter family of maps $\boldsymbol{{\boldsymbol{x}}}\_{{\rm \tau}}:\mathcal{\mathbb{\mathbb{\mathbb{E}\_{{\rm R}}}}}\rightarrow\mathbb{E}$ (see Fig. [fig:motion]). We call ${\boldsymbol{x}}\_{\tau}$ the deformation map and ${\boldsymbol{x}}={\boldsymbol{x}}\_{\tau}({\boldsymbol{X}})$ the material particle ${\boldsymbol{X}}$’s position vector at the time instance ${\boldsymbol{\tau}}$. The set ${\boldsymbol{\kappa}}\_{\tau}{\left( \mathcal{B} \right)}$=${\left\{{\boldsymbol{x}}\_{\tau}({\boldsymbol{X}})\in\mathbb{E}\, \big|\, {\boldsymbol{X}}\in{\boldsymbol{\kappa}}\_{\rm R}{\left( \mathcal{B} \right)}\right\}}$ (see Fig. [fig:motion]) is called the current body. Components ---------- The sets ${\left( {\boldsymbol{E}}\_{i} \right)}\_{i\in\mathcal{I}}$ and ${\left( {\boldsymbol{e}}\_{i} \right)}\_{i\in\mathcal{I}}$, where $\mathcal{I}={\left( 1,\ldots,{{\rm n}\_{\rm sd}}\right)}$, are orthonormal sets of basis vectors for $\mathbb{E}\_{{\rm R}}$ and E, respectively. By orthonormal we mean that the inner product between ${\boldsymbol{E}}\_i$ and ${\boldsymbol{E}}\_j$, or ${\boldsymbol{e}}\_{i}$ and ${\boldsymbol{e}}\_j$, where *i*, *j* ∈ I, equals *δ**i**j*, the Kronecker delta symbol, which equals unity iff *i* = *j* and zero otherwise. We call *X**i* the component of ${\boldsymbol{X}}$ w.r.t. ${\boldsymbol{E}}\_{i}$ iff $X\_i={\boldsymbol{X}}\cdot{\boldsymbol{E}}\_i$, where the dot denotes the inner product in $\mathbb{E}\_{\rm R}$. The dot in other expressions is to be similarly interpreted noting the space to which the vectors belong. We call the ordered set (*X**i*)*i* ∈ I the component form of ${\boldsymbol{X}}$ w.r.t. ${\left( {\boldsymbol{E}}\_{i} \right)}\_{i\in\mathcal{I}}$ and denote it as ${{\boldsymbol{\sf X}}}$ or $\mathcal{M}\!{\boldsymbol{X}}$. We denote the space of all *m* × *n* real matrices, where *m*, *n* ∈ N, M*m*, *n*(R); here N and R denote the set of natural numbers and the space of real numbers, respectively. Thus, ${{\boldsymbol{\sf X}}}\in\mathcal{M}\_{n\_{{\rm sd}},1}(\mathbb{R})$. We access the $i^{\rm th}$ component, where *i* ∈ I, of ${{\boldsymbol{\sf X}}}$, which of course is *X**i*, as ${\left( {{\boldsymbol{\sf X}}} \right)}\_{i}$. Similarly, we denote the component of ${\boldsymbol{x}}$ w.r.t. ${\boldsymbol{e}}\_{i}$ as *x**i* and call ${{\boldsymbol{\sf x}}}={\left( x\_{i} \right)}\_{i\in\mathcal{I}}\in\mathcal{M}\_{n\_{{\rm sd}},1}(\mathbb{R})$ the component form of ${\boldsymbol{x}}$ w.r.t. ${\left( {\boldsymbol{e}}\_{i} \right)}\_{i\in\mathcal{I}}$. Say W and U are two arbitrary, oriented, finite dimensional Hilbert spaces; for instance, they can be $\mathbb{E}\_{\rm R}$ and E. We denote the space of all linear maps (transformations/operators) from W to U as L(W, U)[2](#fn2). We denote the norm of a vector ${\boldsymbol{w}}\_{1}$ in W that is induced by W’s inner product, i.e., $({\boldsymbol{w}}\_{1}\cdot{\boldsymbol{w}}\_{1})^{1/2}$, as $\lVert{\boldsymbol{w}}\_{1}\rVert$. For ${\boldsymbol{u}}\_{1}\in\mathbb{U}$, the expression ${\boldsymbol{u}}\_{1}\otimes{\boldsymbol{w}}\_{1}$ denotes the linear map from W to U defined as $${\left( {\boldsymbol{u}}\_{1}\otimes{\boldsymbol{w}}\_{1} \right)}{\boldsymbol{w}}\_{2}={\boldsymbol{u}}\_{1}{\left( {\boldsymbol{w}}\_{1}\cdot{\boldsymbol{w}}\_{2} \right)},$$ where ${\boldsymbol{w}}\_{2}\in\mathbb{W}$. If the sets ${\left( {\boldsymbol{u}}\_i \right)}\_{i\in\mathcal{I}}$ and ${\left( {\boldsymbol{w}}\_i \right)}\_{i\in\mathcal{I}}$ provide bases for U and W, respectively, then it can be shown that ${\left( {\left( {\boldsymbol{u}}\_{i}\otimes{\boldsymbol{w}}\_j \right)}\_{j\in\mathcal{I}} \right)}\_{i\in\mathcal{I}}$, which we will henceforth abbreviate as ${\left( {\boldsymbol{u}}\_{i}\otimes{\boldsymbol{w}}\_j \right)}\_{i,j\in\mathcal{I}}$, provides a basis for L(W, U). The number *T**i**j*, where *i*, *j* ∈ I, is called the component of ${\boldsymbol{T}}\in\mathcal{L}(\mathbb{W},\mathbb{U})$ w.r.t. ${\boldsymbol{u}}\_{i}\otimes{\boldsymbol{w}}\_j$ iff $T\_{ij}={\boldsymbol{u}}\_{i}\cdot{\left( {\boldsymbol{T}}{\boldsymbol{w}}\_j \right)}$. We call the nested ordered set (*T**i**j*)*i*, *j* ∈ I the component form of ${\boldsymbol{T}}$ w.r.t. ${\left( {\boldsymbol{u}}\_{i}\otimes{\boldsymbol{w}}\_j \right)}\_{i,j\in\mathcal{I}}$, and denote it as $\mathcal{M}{\boldsymbol{T}}$, or, when possible, briefly as ${{\boldsymbol{\sf T}}}$. We sometimes access the $i^{\rm th}$, $j^{\rm th}$ component of ${{\boldsymbol{\sf T}}}$, where *i*, *j* ∈ I, as ${\left( {{\boldsymbol{\sf T}}} \right)}\_{ij}$. ![Some mathematical quantities used in the description of motion. Illustration of the reference Euclidean vector space \mathbb{E}_{\rm R}, reference body {\boldsymbol{\kappa}}_{{\rm R}}(\mathcal{B}), a material particle \boldsymbol{X}, the deformation map \boldsymbol{x}_{\tau}, current body {\boldsymbol{\kappa}}_{\tau}{\left( \mathcal{B} \right)}, the (physical) Euclidean vector space \mathbb{E}, and the location of the material particle {\boldsymbol{X}} in \mathbb{E}, i.e., the material particle \boldsymbol{X}’s spatial position vector \boldsymbol{x}. See \S[subsec:notation] for details.](VectorSpace "fig:") [fig:motion] From here on, unless otherwise specified, we will be following the Einstein summation convention. As per this convention a repeated index in a term will imply a sum over that term with the repeated index taking values in I. For example, the expression $X\_{i}{\boldsymbol{E}}\_{i}$ represents the sum $\sum\_{i\in\mathcal{I}}X\_{i}{\boldsymbol{E}}\_{i}$. And an unrepeated index in a term will signify a set of ${{\rm n}\_{\rm sd}}$ terms. For example, the term ${\boldsymbol{E}}\_{i}$ represents the set ${\left\{{\boldsymbol{E}}\_{i}\, \big|\, i\in\mathcal{I}\right\}}$. Velocities and Accelerations ---------------------------- For the case of rigid body motion ${\boldsymbol{x}}\_{\tau}$ takes the form $$\boldsymbol{x}\_{\tau}(\boldsymbol{X})=\boldsymbol{Q}\_{\tau}\boldsymbol{X}+\boldsymbol{c}(\boldsymbol{\tau}), \label{equ:position}$$ where ${\boldsymbol{Q}}\_{\tau}$ is a proper (orientation preserving), linear isometry from $\mathbb{E}\_{{\rm R}}$ into E and ${\boldsymbol{c}}({\boldsymbol{\tau}})=c\_{i}(\tau){\boldsymbol{e}}\_{i}$, where *c**i* belongs to the space of twice continuously differentiable real valued functions over R, i.e., to *C*2(R, R). The operator ${\boldsymbol{Q}}\_{\tau}$ can be written as $Q\_{ij}(\tau){\boldsymbol{e}}\_{i}\otimes{\boldsymbol{E}}\_{j}$, where *Q**i**j* ∈ *C*2(R, R) and satisfy *Q**k**i*(*τ*)*Q**k**j*(*τ*) = *δ**i**j* for all *τ* ∈ R. We abbreviate ${\left( Q\_{ij}(\tau) \right)}\_{i,j\in\mathcal{I}}\in\mathcal{M}\_{{{\rm n}\_{\rm sd}},{{\rm n}\_{\rm sd}}}(\mathbb{R})$, ${\left( c\_{i}(\tau) \right)}\_{i\in\mathcal{I}}\in\mathcal{M}\_{{{\rm n}\_{\rm sd}},1}(\mathbb{R})$, and $(\delta\_{ij})\_{i,j\in\mathcal{I}}\in\mathcal{M}\_{{{\rm n}\_{\rm sd}},{{\rm n}\_{\rm sd}}}(\mathbb{R})$ as ${{\boldsymbol{\sf Q}}}(\tau)$, ${{\boldsymbol{\sf c}}}(\tau)$, and ${{\boldsymbol{\sf I}}}$, respectively. The component or non-dimensional form of is $\eqref{eq:NondimensionalRotationDeformationMapping}$. Since ${\boldsymbol{Q}}\_{\tau}$ is a proper isometry, it follows that ${{\boldsymbol{\sf Q}}}(\tau)$, which we refer to as the rotation matrix, belongs to the special orthogonal group $SO({{\rm n}\_{\rm sd}})$. As a consequence of belonging to $SO({{\rm n}\_{\rm sd}})$ the matrix ${{\boldsymbol{\sf Q}}}(\tau)$ satisfies the equations $$\begin{aligned} {\left. {{\boldsymbol{\sf Q}}} \right.^{\sf T}}(\tau)\,{{\boldsymbol{\sf Q}}}(\tau)&={{\boldsymbol{\sf I}}},\label{equ:QQ:1} \intertext{and} {{\boldsymbol{\sf Q}}}(\tau)\,{\left. {{\boldsymbol{\sf Q}}} \right.^{\sf T}}(\tau)&={{\boldsymbol{\sf I}}},\label{equ:QQ:2}\end{aligned}$$ [eq:QQ] where ${\left. {{\boldsymbol{\sf Q}}} \right.^{\sf T}}(\tau)$ is the transpose of ${{\boldsymbol{\sf Q}}}(\tau)$, i.e., ${\left. {{\boldsymbol{\sf Q}}} \right.^{\sf T}}(\tau)={\left. {\left( {{\boldsymbol{\sf Q}}}(\tau) \right)} \right.^{\sf T}}$. We call L(T, E) the physical velocity vector space and denote it as V. It can be shown that the set ${\left( {\boldsymbol{v}}\_{i} \right)}\_{i\in\mathcal{I}}$, where ${\boldsymbol{v}}\_{i}\in\mathbb{V}$ and are defined such that ${\boldsymbol{v}}\_{i}{\boldsymbol{\tau}}=\tau{\boldsymbol{e}}\_i$, provides an orthonormal basis for V. The velocity of a material particle ${\boldsymbol{X}}$ executing its motion in E lies in V. The velocity of the material particle ${\boldsymbol{X}}$ at the instant ${\boldsymbol{\tau}}$, which we denote as ${\boldsymbol{V}}\_{\tau}({\boldsymbol{X}})$, equals the value of the Fréchet derivative[3](#fn3) of the map $\mathbb{T}\ni{\boldsymbol{\tau}}\mapsto{\boldsymbol{x}}\_{{\boldsymbol{X}}}({\boldsymbol{\tau}})\in\mathbb{E}$, where ${\boldsymbol{x}}\_{{\boldsymbol{X}}}({\boldsymbol{\tau}})={\boldsymbol{x}}\_{\tau}({\boldsymbol{X}})$, at the time instance ${\boldsymbol{\tau}}$. Thus, it follows from that $${\boldsymbol{V}}\_{\tau}({\boldsymbol{X}})={\boldsymbol{L}}\_{\tau}{\boldsymbol{X}}+{\boldsymbol{c}}'({\boldsymbol{\tau}}), \label{equ:vel1}$$ where ${\boldsymbol{L}}\_{\tau}:=Q'\_{ij}(\tau){\boldsymbol{v}}\_{i}\otimes{\boldsymbol{E}}\_j$ and ${\boldsymbol{c}}'({\boldsymbol{\tau}}):=c\_{i}'(\tau){\boldsymbol{v}}\_{i}$, and *Q*ʹ*i**j* and *c**i*ʹ are the derivatives of *Q**i**j* and *c**i*, respectively. We abbreviate ${\left( Q\_{ij}'(\tau) \right)}\_{i,j\in\mathcal{I}}\in\mathcal{M}\_{{{\rm n}\_{\rm sd}},{{\rm n}\_{\rm sd}}}(\mathbb{R})$ and ${\left( c\_{i}'(\tau) \right)}\_{i\in\mathcal{I}}\in\mathcal{M}\_{{{\rm n}\_{\rm sd}},1}(\mathbb{R})$ as ${{\boldsymbol{\sf Q}}}'(\tau)$ and ${{\boldsymbol{\sf c}}}'(\tau)$, respectively. Using and it can be shown that the velocity at the time instance ${\boldsymbol{\tau}}$ of the material particle occupying the spatial position ${\boldsymbol{x}}\in\mathbb{E}$ at the time instance ${\boldsymbol{\tau}}$ is ${\boldsymbol{W}}\_{\tau}{\left( \boldsymbol{x}-{\boldsymbol{c}}({\boldsymbol{\tau}}) \right)}+{\boldsymbol{c}}'({\boldsymbol{\tau}})$, where the linear map ${\boldsymbol{W}}\_{\tau}:\mathbb{E}\to\mathbb{V}$ is defined by the formula $${\boldsymbol{W}}\_{\tau}{\boldsymbol{x}}={\boldsymbol{L}}\_{\tau}{\boldsymbol{Q}}\_{\tau}^{\*}{\boldsymbol{x}}, \label{equ:wde}$$ for all ${\boldsymbol{x}}\in{\mathbb{E}}$. The operator ${\boldsymbol{Q}}\_{\tau}^{\*}$ is the Hilbert-adjoint of ${\boldsymbol{Q}}\_{\tau}$ and is equal to $Q\_{ji}(\tau){\boldsymbol{E}}\_{i}\otimes{\boldsymbol{e}}\_{j}$. Let the component form of ${\boldsymbol{W}}\_{\tau}$ w.r.t. ${\left( {\boldsymbol{v}}\_i\otimes{\boldsymbol{e}}\_j \right)}\_{i,j\in\mathcal{I}}$ be ${\left( W\_{ij}(\tau) \right)}\_{i,j\in\mathcal{I}}\in\mathcal{M}\_{{{\rm n}\_{\rm sd}},{{\rm n}\_{\rm sd}}}(\mathbb{R})$, which we abbreviate as ${{\boldsymbol{\sf W}}}(\tau)$. It follows from that *W**i**j*(*τ*) = *Q*ʹ*i**k*(*τ*)*Q**j**k*(*τ*), or equivalently, $${{\boldsymbol{\sf W}}}(\tau)={{\boldsymbol{\sf Q}}}'(\tau){\left. {{\boldsymbol{\sf Q}}} \right.^{\sf T}}(\tau). \label{equ:wdeNDF}$$ We call L(T, V) the physical acceleration vector space and denote it as A. It can be shown that the set ${\left( {\boldsymbol{a}}\_{i} \right)}\_{i\in\mathcal{I}}$, where ${\boldsymbol{a}}\_{i}\in\mathbb{A}$ and are defined such that ${\boldsymbol{a}}\_{i}{\boldsymbol{\tau}}=\tau{\boldsymbol{v}}\_i$, provides an orthonormal basis for A. The acceleration of a material particle ${\boldsymbol{X}}$ executing its motion in E lies in A. The acceleration of ${\boldsymbol{X}}$ at the time instance ${\boldsymbol{\tau}}$ equals the value of the Fréchet derivative of the map $\mathbb{T}\ni{\boldsymbol{\tau}}\mapsto{\boldsymbol{V}}\_{{\boldsymbol{X}}}({\boldsymbol{\tau}})\in\mathbb{V}$, where ${\boldsymbol{V}}\_{{\boldsymbol{X}}}({\boldsymbol{\tau}})={\boldsymbol{V}}\_{\tau}({\boldsymbol{X}})$, at the time instance ${\boldsymbol{\tau}}$. Thus, it follows from that $${\boldsymbol{A}}\_{\tau}({\boldsymbol{X}})={\color{red}}{\boldsymbol{M}}\_{\tau}{\boldsymbol{X}}+{\boldsymbol{c}}''({\boldsymbol{\tau}}),$$ where the map ${\boldsymbol{M}}\_{\tau}:{\mathbb{E}}\_{\rm R}\to{\mathbb{A}}$ is defined by the equation $${\boldsymbol{M}}\_{\tau}:=Q''\_{ij}(\tau){\boldsymbol{a}}\_{i}\otimes{\boldsymbol{E}}\_j \label{eq:MtauDef}$$ and ${\boldsymbol{c}}''({\boldsymbol{\tau}}):=c\_{i}''(\tau){\boldsymbol{a}}\_{i}$, where *Q*ʺ*i**j* and *c*ʺ*i* are the derivatives of *Q*ʹ*i**j* and *c*ʹ*i*, respectively. Let $A\_{\tau i}(\boldsymbol{X})$ be the component of ${\boldsymbol{A}}\_{\tau}({\boldsymbol{X}})$ w.r.t. ${\boldsymbol{a}}\_{i}$. We abbreviate the ordered sets ${\left( A\_{\tau i}(\boldsymbol{X}) \right)}\_{i\in\mathcal{I}}\in\mathcal{M}\_{n\_{{\rm sd}},1}(\mathbb{R})$, ${\left( Q\_{ij}''(\tau) \right)}\_{i,j\in\mathcal{I}}\in\mathcal{M}\_{{{\rm n}\_{\rm sd}},{{\rm n}\_{\rm sd}}}(\mathbb{R})$, and ${\left( c\_{i}''(\tau) \right)}\_{i\in\mathcal{I}}\in\mathcal{M}\_{{{\rm n}\_{\rm sd}},1}(\mathbb{R})$ as ${{\boldsymbol{\sf A}}}\_{\tau}({{\boldsymbol{\sf X}}})$, ${{\boldsymbol{\sf Q}}}''(\tau)$, and ${{\boldsymbol{\sf c}}}''(\tau)$, respectively. We will predominantly be presenting the ensuing results in component form. The component form can be converted into physical or dimensional form. Therefore, from here on we will be often omit explicitly using the qualification “is the component form of” when referring to the component form of a physical quantity. For example, instead of saying “${{\boldsymbol{\sf A}}}\_{\tau}({{\boldsymbol{\sf X}}})$ as the component form of the acceleration of the material particle ${\boldsymbol{X}}$ at the time instance ${\boldsymbol{\tau}}$”, we will often write “${{\boldsymbol{\sf A}}}\_{\tau}({{\boldsymbol{\sf X}}})$ is the acceleration of the material particle ${{\boldsymbol{\sf X}}}$ at the time instance ${\boldsymbol{\tau}}$”. The acceleration ${{\boldsymbol{\sf A}}}\_{\tau}({{\boldsymbol{\sf X}}})$ can be interpreted as the value of the (non-dimensional) acceleration field ${{\boldsymbol{\sf A}}}\_{\tau}:B\_{\rm R}(\mathcal{B})\to\mathbb{R}^3$, where we call $B\_{\rm R}(\mathcal{B}):={\left\{(X\_1,X\_2,X\_3)\in\mathbb{R}^3\, \big|\, X\_i{\boldsymbol{E}}\_i\in{\boldsymbol{\kappa}}\_{{\rm R}}(\mathcal{B})\right\}}$ the non-dimensional reference body. Review of the AO-algorithm ========================== Let $\overline{{\boldsymbol{Q}}}\_{\tau}:{\mathbb{A}}\to{\mathbb{E}}\_{\rm R}$ be defined by the equation $$\overline{{\boldsymbol{Q}}}\_{\tau}=Q\_{ji}(\tau){\boldsymbol{E}}\_i\otimes{\boldsymbol{a}}\_{j}, \label{eq:OverbarQtauDef}$$ then we call the map $\overline{{\boldsymbol{A}}}\_{\tau}:{\boldsymbol{\kappa}}\_{{\rm R}}(\mathcal{B})\to{\mathbb{E}}\_{\rm R}$ defined by the equation $$\overline{\boldsymbol{A}}\_{\tau}({\boldsymbol{X}})=\overline{\boldsymbol{Q}}\_{\tau}\boldsymbol{A}\_{\tau}({\boldsymbol{X}})\label{eq:barAtauDef}$$ the “Pseudo-acceleration field”. Say $\overline{A}\_{\tau i}(\boldsymbol{X})$ is the component of $\overline{{\boldsymbol{A}}}\_{\tau}({\boldsymbol{X}})$ w.r.t. ${\boldsymbol{E}}\_{i}$, then we abbreviate ${\left( \overline{A}\_{\tau i}(\boldsymbol{X}) \right)}\_{i\in\mathcal{I}}\in\mathcal{M}\_{n\_{{\rm sd}},1}(\mathbb{R})$, the component form of $\overline{{\boldsymbol{A}}}\_{\tau}({\boldsymbol{X}})$ w.r.t. ${\boldsymbol{E}}\_i$, as ${\overline{{{\boldsymbol{\sf A}}}}}\_{\tau}({{\boldsymbol{\sf X}}})$. From the definitions of the pseudo acceleration field $\overline{{\boldsymbol{A}}}\_{\tau}$, and ${\overline{{{\boldsymbol{\sf A}}}}}\_{\tau}({{\boldsymbol{\sf X}}})$, and the definitions of ${{\boldsymbol{\sf A}}}\_{\tau}({{\boldsymbol{\sf X}}})$, and ${{\boldsymbol{\sf Q}}}(\tau)$, which are given in $\S$[subsec:Velocities-and-Accelerations], it follows that $$\begin{aligned} {{\boldsymbol{\sf A}}}\_{\tau}({{\boldsymbol{\sf X}}})={{\boldsymbol{\sf Q}}}(\tau){\overline{{{\boldsymbol{\sf A}}}}}\_{\tau}({{\boldsymbol{\sf X}}}).\label{equ:acce}\end{aligned}$$ In it was shown that $${\overline{{{\boldsymbol{\sf A}}}}}\_{\tau}({{\boldsymbol{\sf X}}})={{\boldsymbol{\sf P}}}(\tau){{\boldsymbol{\sf X}}}+{{\boldsymbol{\sf q}}}(\tau), \label{equ:psuedo}$$ where $${{\boldsymbol{\sf P}}}(\tau)={{\boldsymbol{\sf Q}}}^{\sf T}(\tau)\,{{\boldsymbol{\sf Q}}}''(\tau)\label{equ:pde}$$ is the component form of the linear map ${\boldsymbol{P}}\_{\tau}:=\overline{{\boldsymbol{Q}}}\_{\tau}\circ{\boldsymbol{M}}{}\_{\tau}$ w.r.t. $\left({\boldsymbol{E}}\_{i}\otimes{\boldsymbol{E}}\_{j}\right)\_{i,j\in\mathcal{I}}$, and ${{\boldsymbol{\sf q}}}(\tau)\in\mathcal{M}\_{n\_{{\rm sd}},1}(\mathbb{R})$ is the component form of $${\boldsymbol{q}}({\boldsymbol{\tau}}):=\overline{{\boldsymbol{Q}}}\_{\tau}{\boldsymbol{c}}''({\boldsymbol{\tau}})$$ w.r.t. ${\left( {\boldsymbol{E}}\_{i} \right)}\_{i\in\mathcal{I}}$. Thus, the acceleration field ${{\boldsymbol{\sf A}}}\_{\tau}$ is taken to be fully determined once ${{\boldsymbol{\sf Q}}}(\tau)$, ${{\boldsymbol{\sf P}}}(\tau)$, and ${{\boldsymbol{\sf q}}}(\tau)$ have been computed. Both the AO and the $\sqrt{\text{AO}}$ algorithms can be described as consisting of three primary steps. The AO-algorithm’s three steps can briefly be described as follows: 1. Compute (time discrete versions of) the maps $\tau\mapsto{{\boldsymbol{\sf P}}}(\tau)$ and $\tau\mapsto{{\boldsymbol{\sf q}}}(\tau)$ using the measurements and the geometry of the arrangement of the four tri-axial accelerometers.[AOstep1] 2. Compute the map $\tau\mapsto{\overline{{{\boldsymbol{\sf W}}}}}(\tau)$, where $${\overline{{{\boldsymbol{\sf W}}}}}(\tau):={\left. {{\boldsymbol{\sf Q}}} \right.^{\sf T}}(\tau){{\boldsymbol{\sf W}}}(\tau){{\boldsymbol{\sf Q}}}(\tau) \label{eq:busfW},$$ using the map ${{\boldsymbol{\sf P}}}$ computed in [AOstep1] and numerical integrating. From Lemma [lemma:skew] we have that the matrix ${\overline{{{\boldsymbol{\sf W}}}}}(\tau)$ belongs to the space of ${{\rm n}\_{\rm sd}}\times{{\rm n}\_{\rm sd}}$ real skew-symmetric matrices, which we denote as $\mathfrak{so}(\mathbb{R},{{\rm n}\_{\rm sd}})$.[AOstep2] 3. Compute the map $\tau\mapsto{{\boldsymbol{\sf Q}}}(\tau)$ using the ${\overline{{{\boldsymbol{\sf W}}}}}$ map computed in [AOstep2] and numerically integrating the equation $${{\boldsymbol{\sf Q}}}'(\tau)={{\boldsymbol{\sf Q}}}(\tau){\overline{{{\boldsymbol{\sf W}}}}}(\tau). \label{equ:rotation}$$ Equation $\eqref{equ:rotation}$ is from, where it appears as equation 3.14.[AOstep3] Step one of the $\sqrt{\text{AO}}$-algorithm has two sub-steps: the *predictor* step and the *corrector* step. The predictor step is the same as [AOstep1] of the AO-algorithm. The corrector step is necessary for carrying out step two of the $\sqrt{\text{AO}}$-algorithm. In step two of the $\sqrt{\text{AO}}$-algorithm instead of obtaining ${\overline{{{\boldsymbol{\sf W}}}}}$ from $\eqref{equ:skew}$, as is done in the AO-algorithm, we obtain it from. More precisely, in the $\sqrt{\text{AO}}$-algorithm ${\overline{{{\boldsymbol{\sf W}}}}}(\tau)$ is obtained as the square root of the symmetric part of ${{\boldsymbol{\sf P}}}(\tau)$. We use $\text{sym}{\left( {{\boldsymbol{\sf P}}}(\tau) \right)}$ to denote the symmetric part of ${{\boldsymbol{\sf P}}}(\tau)$. The derivation of is presented in [subsec:square]. A procedure for determining $\overline{{{\boldsymbol{\sf W}}}}(\tau)$ as the square root of $\text{sym}{\left( {{\boldsymbol{\sf P}}}(\tau) \right)}$, i.e., for solving for ${\overline{{{\boldsymbol{\sf W}}}}}(\tau)$ with given ${{\boldsymbol{\sf P}}}(\tau)$, is presented in $\S$[step2]. The goal of step three of the $\sqrt{\text{AO}}$-algorithm is to compute ${{\boldsymbol{\sf Q}}}$ using the ${\overline{{{\boldsymbol{\sf W}}}}}$ computed in step two. It involves using a slightly modified version of the numerical integration scheme described by equations 3.15, 3.16, and 3.17 in to solve $\eqref{equ:rotation}$. We discuss it in $\S$[step3]. The $\sqrt{\text{AO}}$-algorithm ================================ As we mentioned in $\S$[subsec:Review-of-the] the $\sqrt{\text{AO}}$ algorithm consists of three primary steps. Those steps are as follows: 1. Compute (time discrete versions of) the maps $\tau\mapsto{{\boldsymbol{\sf P}}}(\tau)$ and $\tau\mapsto{{\boldsymbol{\sf q}}}(\tau)$ using the measurements and the geometry of the arrangement of the four tri-axial accelerometers (see $\S\ref{step1}$ for details).[enu:sqrtAO-Step1] 2. Use and the ${{\boldsymbol{\sf P}}}$ map obtained from [enu:sqrtAO-Step1] to solve for $\tau\mapsto{\overline{{{\boldsymbol{\sf W}}}}}(\tau)$. That is, for each *τ* in a discrete sequence of time instances, compute ${\overline{{{\boldsymbol{\sf W}}}}}(\tau)$ as the square root of the symmetric part of ${{\boldsymbol{\sf P}}}(\tau)$ (for details see $\S\ref{step2}$).[enu:sqrtAO-Step2] 3. Compute (a time discrete version of) the map $\tau\mapsto{{\boldsymbol{\sf Q}}}(\tau)$ using the ${\overline{{{\boldsymbol{\sf W}}}}}$ map computed in [enu:sqrtAO-Step2] and numerically integrating $\eqref{equ:rotation}$ (details in $\S$[step3]).[enu:sqrtAO-Step3] $\sqrt{\text{AO}}$-algorithm, [enu:sqrtAO-Step1] of 3 ----------------------------------------------------- In $\S3.1$ of a method was presented to estimate ${{\boldsymbol{\sf P}}}(\tau)$ and ${{\boldsymbol{\sf q}}}(\tau)$ from the accelerometer measurements corresponding to the time instance *τ*. Applying that method for each *τ* in a discrete time sequence yields a numerical approximation for the maps $\tau\mapsto{{\boldsymbol{\sf P}}}(\tau)$ and $\tau\mapsto{{\boldsymbol{\sf q}}}(\tau)$. We present here an augmented version of that method for computing similar numerical approximations. The primary difference between our method and that presented in is that the estimate for ${{\boldsymbol{\sf P}}}(\tau)$ yielded by our method is certain to retain some of the mathematical properties that are expected of ${{\boldsymbol{\sf P}}}(\tau)$ based on our theoretical analysis. Specifically, it follows from Lemmas  [lemma:negative] and [lemma:form] that $\textrm{sym}{\left( {{\boldsymbol{\sf P}}}(\tau) \right)}$ is a negative semidefinite matrix with its negative eigenvalues, if any, being of even algebraic multiplicities. These mathematical properties of ${{\boldsymbol{\sf P}}}(\tau)$ are critical for carrying out [enu:sqrtAO-Step2] of the $\sqrt{\text{AO}}$-algorithm. We found that experimental noise and errors can cause the estimate for ${{\boldsymbol{\sf P}}}(\tau)$ provided by the method presented in to lose the aforementioned mathematical properties. Our method, on the contrary, ensures that the symmetric part of the estimated ${{\boldsymbol{\sf P}}}(\tau)$ is negative semidefinite and that its negative eigenvalues, when they exist, are of even algebraic multiplicities. Once ${{\boldsymbol{\sf P}}}(\tau)$ is estimated, our method to estimate ${{\boldsymbol{\sf q}}}(\tau)$ is exactly the same as that in. We review it in [subsec:Estimating-1]. ### Estimating ${{\boldsymbol{\sf P}}}(\tau)$[subsec:Estimating] Our method for estimating ${{\boldsymbol{\sf P}}}(\tau)$ can be described as consisting of two steps: a *predictor step* and a *corrector step*. In the predictor step we use the method presented in for estimating ${{\boldsymbol{\sf P}}}(\tau)$ to compute a prediction for ${{\boldsymbol{\sf P}}}(\tau)$. We denote this prediction as ${{\boldsymbol{\sf P}}}(\tau)^{\rm p}$. In the corrector step we estimate ${{\boldsymbol{\sf P}}}(\tau)$ as the sum of ${{\boldsymbol{\sf P}}}(\tau)^{\rm p}$ and a correction term, which we construct using ${{\boldsymbol{\sf P}}}(\tau)^{\rm p}$. The correction term is constructed such that the estimated ${{\boldsymbol{\sf P}}}(\tau)$ is as close as possible to ${{\boldsymbol{\sf P}}}(\tau)^{\rm p}$ under the constraint that the estimated ${{\boldsymbol{\sf P}}}(\tau)$’s symmetric part is negative semidefinite and its negative eigenvalues (if they exist) are of even algebraic multiplicities. #### Predictor step Say the four tri-axial accelerometers are attached to the rigid body B at the material particles (l X)ℓ ∈ J, where J :  = (1, …, 4), and let the position vectors of those particles in ${\mathbb{E}}\_{\rm R}$, respectively, be ${\left( {{}^{ \mathscr{l} }\! {\boldsymbol{X}}} \right)}\_{\ell\in\mathcal{J}}$ (see Fig. [fig:EllipsoidAccDir]). A tri-axial accelerometer is capable of measuring the components of its acceleration in three mutually perpendicular directions. We refer to those directions as the accelerometer’s measurement directions. The measurement directions are usually marked on the accelerometer package by the manufacturer as arrows that are labeled *x*, *y*, and *z*. As B moves in E, the attached accelerometers move with it, and, therefore, the measurement directions (in E) can change with time. For an accelerometer ℓ, where ℓ ∈ J, we denote its time varying measurement directions in E using the orthonormal set ${\left( {{}^{ \mathscr{l} }\! {\boldsymbol{e}}}\_{\tau i} \right)}\_{i\in\mathcal{I}}$. Assuming that the accelerometers remain rigidly attached to B, i.e., their positions and orientations w.r.t. B do not change as B moves in E, it can be shown that ${\boldsymbol{Q}}^\*\_{\tau}\,{{}^{ \mathscr{l} }\! {\boldsymbol{e}}}\_{\tau i}$, where ℓ ∈ J, *i* ∈ I, is a constant vector in ${\mathbb{E}}\_{\rm R}$, which we denote as ${{}^{ \mathscr{l} }\! {\boldsymbol{E}}}\_i$. The position vectors ${\left( {{}^{ \mathscr{l} }\! {\boldsymbol{X}}} \right)}\_{\ell\in\mathcal{J}}$ and the directions ${\left( {{}^{ \mathscr{l} }\! {\boldsymbol{E}}}\_{i} \right)}\_{\ell\in\mathcal{J}, i\in\mathcal{I}}$ are known from the arrangement and orientation of the accelerometers at the experiment’s beginning. For ℓ ∈ J, let ${{}^{ \mathscr{l} }\! {\overline{{{\boldsymbol{\sf A}}}}}}(\tau):={\left( {{}^{ \mathscr{l} }\! \alpha}\_{j}(\tau){{}^{ \mathscr{l} }\! {\boldsymbol{E}}}\_{j}\cdot{\boldsymbol{E}}\_{i} \right)}\_{i\in\mathcal{I}}$ (no sum over ℓ), where l *α**i*(*τ*), *i* ∈ I, is the measurement reported by accelerometer ${{}^{ \mathscr{l} }\! {\boldsymbol{X}}}$ for the (non-dimensional) component of its acceleration in the ${{}^{ \mathscr{l} }\! {\boldsymbol{e}}}\_{\tau i}$ direction[4](#fn4) at the time instance *τ*. And let ${{}^{ \mathscr{l} }\! {{\boldsymbol{\sf X}}}}:={\left( {{}^{ \mathscr{l} }\! {\boldsymbol{X}}}\cdot{\boldsymbol{E}}\_i \right)}\_{i\in\mathcal{I}}$. Then, we compute ${{\boldsymbol{\sf P}}}(\tau)^{\rm p}$ as ${{\boldsymbol{\sf P}}}(\tau)$ is estimated in using the equation $${{\boldsymbol{\sf P}}}(\tau)^{{\rm p}}={\left( {\left( {{}^{ i }\! \Delta{\overline{{{\boldsymbol{\sf A}}}}}}(\tau) \right)^{\sf T}}\,{\left( {{}^{ j }\! {\Delta\boldsymbol{\sf X}}} \right)} \right)}\,{\left( {\left( {{}^{ i }\! {\Delta\boldsymbol{\sf Y}}} \right)}\,{\left( {{}^{ j }\! {\Delta\boldsymbol{\sf Y}}} \right)^{\sf T}} \right)},\,\label{equ:p}$$ where ${{}^{ i }\! \Delta{\overline{{{\boldsymbol{\sf A}}}}}}(\tau):={{}^{ i+1 }\! {\overline{{{\boldsymbol{\sf A}}}}}}(\tau)-{{}^{ 1 }\! {\overline{{{\boldsymbol{\sf A}}}}}}(\tau)$, ${{}^{ i }\! \Delta{{\boldsymbol{\sf X}}}}:={{}^{ i+1 }\! {{\boldsymbol{\sf X}}}}-{{}^{ 1 }\! {{\boldsymbol{\sf X}}}}$. The ordered sets ${{}^{ i }\! {\Delta\boldsymbol{\sf }}}{Y}$ belong to $\mathcal{M}\_{{{\rm n}\_{\rm sd}},1}(\mathbb{R})$ and are defined by the equation $${\left( {{}^{ 1 }\! \Delta{{\boldsymbol{\sf Y}}}},\ldots,{{}^{ {{\rm n}\_{\rm sd}}}\! \Delta{{\boldsymbol{\sf Y}}}} \right)}={\left. {\left( {{}^{ 1 }\! \Delta{{\boldsymbol{\sf X}}}},\ldots,{{}^{ {{\rm n}\_{\rm sd}}}\! \Delta{{\boldsymbol{\sf X}}}} \right)} \right.^{-\sf T}},$$ where ${\left. {\left( \cdot \right)} \right.^{-\sf T}}$ is the operator that acts on an invertable element of $\mathcal{M}\_{{{\rm n}\_{\rm sd}},{{\rm n}\_{\rm sd}}}(\mathbb{R})$ and returns the transpose of its inverse. ![Schematic of the locations and orientations of four tri-axial accelerometers (left) and their motion (right) (modified from, copyright 2020, Elsevier).](EllipsoidAccDir "fig:") [fig:EllipsoidAccDir] #### Corrector step[par:Corrector-step] In [subsec:A-spectral-decomposition] we show that $\text{sym}({{\boldsymbol{\sf P}}}(\tau))$ allows itself to be decomposed as $$\begin{aligned} &{{\boldsymbol{\sf N}}}(\tau)\,{{\boldsymbol{\sf D}}}(\tau)\,{\left. {{\boldsymbol{\sf N}}} \right.^{\sf T}}(\tau),\label{equ:SymPTheoreticalSpecDecomposition\_P}\intertext{where ${{\boldsymbol{\sf N}}}(\tau)\in\mathcal{M}\_{{{\rm n}\_{\rm sd}},{{\rm n}\_{\rm sd}}}(\mathbb{R})$ is an orthogonal matrix, i.e.,} &{\left. {{\boldsymbol{\sf N}}} \right.^{\sf T}}(\tau){{\boldsymbol{\sf N}}}(\tau)={{\boldsymbol{\sf I}}},\label{equ:SymPTheoreticalSpecDecomposition\_N} \intertext{and ${{\boldsymbol{\sf D}}}(\tau)\in\mathcal{M}\_{{{\rm n}\_{\rm sd}},{{\rm n}\_{\rm sd}}}({\mathbb{R}})$ is a diagonal matrix that for ${{\rm n}\_{\rm sd}}=2$ and $3$, respectively, has the form} &{{\boldsymbol{\sf D}}}(\tau)=\text{diag}{\left( -\lambda(\tau)^2,-\lambda(\tau)^2 \right)}~\text{and} ~\text{diag}{\left( 0,-\lambda(\tau)^2,-\lambda(\tau)^2 \right)}, \label{equ:SymPTheoreticalSpecDecomposition\_D}\end{aligned}$$ [equ:SymPTheoreticalSpecDecomposition] where *λ*(*τ*) ∈ R and the function $\text{diag}(\cdot):{\mathbb{F}}^{{{\rm n}\_{\rm sd}}}\to{\mathcal{M}}\_{{{\rm n}\_{\rm sd}},{{\rm n}\_{\rm sd}}}({\mathbb{F}})$, where F is either R or C, is defined such that $\text{diag}(a\_1,\ldots, a\_{{{\rm n}\_{\rm sd}}})$ is a diagonal matrix with diagonal entries $a\_1,\ldots,a\_{{{\rm n}\_{\rm sd}}}$. The matrix $\text{sym}({{\boldsymbol{\sf P}}}(\tau))$ allowing the decomposition $\eqref{equ:SymPTheoreticalSpecDecomposition}$ is critical for carrying out [enu:sqrtAO-Step2] of the $\sqrt{\text{AO}}$-algorithm. In an ideal scenario, in which there are no experimental errors or noise in the accelerometer measurements, ${{\boldsymbol{\sf P}}}(\tau)^{\rm p}$ would be the same as ${{\boldsymbol{\sf P}}}(\tau)$. However, due to the experimental noise and other errors ${{\boldsymbol{\sf P}}}(\tau)^{\rm p}$ will generally be different from ${{\boldsymbol{\sf P}}}(\tau)$. In general, such a deviation would not be of much consequence, since, experimental measurements of physical quantities, more often than not, are different from the true values of those quantities. Thus, generally, we would, as done by, take ${{\boldsymbol{\sf P}}}(\tau)^{\rm p}$ to be the final estimate for ${{\boldsymbol{\sf P}}}(\tau)$ and no longer distinguish between ${{\boldsymbol{\sf P}}}(\tau)^{\rm p}$ and ${{\boldsymbol{\sf P}}}(\tau)$. However, in the present case the deviation of ${{\boldsymbol{\sf P}}}(\tau)^{\rm p}$ from ${{\boldsymbol{\sf P}}}(\tau)$ has an important consequence which requires us to not take ${{\boldsymbol{\sf P}}}(\tau)^{\rm p}$ as ${{\boldsymbol{\sf P}}}(\tau)$’s final estimate. The important consequence is that in general $\text{sym}({{\boldsymbol{\sf P}}}(\tau)^{\rm p})$ will not allow a decomposition of the form. In general, it will only allow itself to be decomposed as ${{\boldsymbol{\sf N}}}\_{\mathscr{p}}(\tau)\,\text{diag}{\left( \lambda\_i(\tau),\ldots,\lambda\_{{{\rm n}\_{\rm sd}}}(\tau) \right)}\,{\left. {{\boldsymbol{\sf N}}}\_{\mathscr{p}} \right.^{\sf T}}(\tau)$, where ${{\boldsymbol{\sf N}}}\_{\mathscr{p}}(\tau)$’s columns are the eigenvectors of $\text{sym}({{\boldsymbol{\sf P}}}(\tau)^{\rm p})$ that are chosen such that ${{\boldsymbol{\sf N}}}\_{\mathscr{p}}(\tau)$ is orthogonal and their corresponding eigenvalues *λ**i*(*τ*) ∈ R form a non-increasing sequence, i.e., $\lambda\_{1}(\tau)\ge\ldots\ge\lambda\_{{{\rm n}\_{\rm sd}}}(\tau)$. Therefore, instead of taking ${{\boldsymbol{\sf P}}}(\tau)^{\rm p}$ as the final estimate of ${{\boldsymbol{\sf P}}}(\tau)$ we derive the final estimate for ${{\boldsymbol{\sf P}}}(\tau)$, as we detail next, using ${{\boldsymbol{\sf P}}}(\tau)^{\rm p}$ so that its symmetric part does allow a decomposition of the form $\eqref{equ:SymPTheoreticalSpecDecomposition}$. We take the skew-symmetric part of our final estimate for ${{\boldsymbol{\sf P}}}(\tau)$ to be the same as that of ${{\boldsymbol{\sf P}}}(\tau)^{\rm p}$. We take its symmetric part to be $$\begin{aligned} &{{\boldsymbol{\sf N}}}\_{\mathscr{p}}(\tau)\,{{\check{{\boldsymbol{ \sf D}}}}}(\tau)\,{\left. {{\boldsymbol{\sf N}}} \right.^{\sf T}}\_{\mathscr{p}}(\tau), \intertext{where} {{\check{{\boldsymbol{ \sf D}}}}}(\tau)&:=\text{diag}{\left( 0,-\check{\lambda}(\tau)^2,-\check{\lambda}(\tau)^2 \right)}, \label{equ:DConstruction\_D\_nsd3} \intertext{with} \check{\lambda}(\tau)&:=\begin{cases}\sqrt{{-\frac{\lambda\_{2}(\tau)+\lambda\_{3}(\tau)}{2}}}, &\lambda\_{2}(\tau)+\lambda\_{3}(\tau)\leq0,\\0, &\lambda\_{2}(\tau)+\lambda\_{3}(\tau)>0,\end{cases} \label{equ:DConstruction\_lambda\_nsd3} \intertext{for ${{\rm n}\_{\rm sd}}=3$, and} {{\check{{\boldsymbol{ \sf D}}}}}(\tau)&:=\text{diag}{\left( -\check{\lambda}(\tau)^2,-\check{\lambda}(\tau)^2 \right)},\label{equ:DConstruction\_D\_nsd2} \intertext{with} \check{\lambda}(\tau)&:= \begin{cases} \sqrt{{-\frac{\lambda\_{1}(\tau)+\lambda\_{2}(\tau)}{2}}}, &\lambda\_{1}(\tau)+\lambda\_{2}(\tau)\leq0,\\0, &\lambda\_{1}(\tau)+\lambda\_{2}(\tau)>0, \end{cases} \label{equ:DConstruction\_lambda\_nsd2} \intertext{for ${{\rm n}\_{\rm sd}}=2$.}\notag\end{aligned}$$ [equ:FinalEstimateSymPDecomposition] The orthogonal matrix ${{\boldsymbol{\sf N}}}\_{\mathscr{p}}(\tau)$ and the eigenvalues *λ**i*(*τ*) can be obtained from the spectral or symmetric-Schur decomposition of $\text{sym}({{\boldsymbol{\sf P}}}(\tau)^{\rm p})$. Since $\text{sym}({{\boldsymbol{\sf P}}}(\tau)^{\rm p})$ is a real symmetric matrix, it is always possible to carry out $\text{sym}({{\boldsymbol{\sf P}}}(\tau)^{\rm p})$’s spectral or symmetric Schur decomposition. To summarize, we take ${{\boldsymbol{\sf P}}}(\tau)$’s final estimate to be $$\begin{aligned} &{{\boldsymbol{\sf P}}}^{\rm p}(\tau)+\Delta{{\boldsymbol{\sf P}}}(\tau), \intertext{where} \Delta{{\boldsymbol{\sf P}}}(\tau)&:=\text{sym}{\left( {{\check{{\boldsymbol{ \sf P}}}}}(\tau)-{{\boldsymbol{\sf P}}}^{\rm p}(\tau) \right)},\\ {{\check{{\boldsymbol{ \sf P}}}}}(\tau)&:={{\boldsymbol{\sf N}}}\_{\mathscr{p}}(\tau)\check{{{\boldsymbol{\sf D}}}}(\tau){\left. {{\boldsymbol{\sf N}}} \right.^{\sf T}}\_{\mathscr{p}}(\tau).\end{aligned}$$ [equ:PConstruction] It can be ascertained that the symmetric part of our final estimate for ${{\boldsymbol{\sf P}}}(\tau)$, namely ${{\check{{\boldsymbol{ \sf P}}}}}(\tau)$, allows a decomposition of the form $\eqref{equ:SymPTheoreticalSpecDecomposition}$. In fact, that decomposition is precisely the one given by. For ${{\rm n}\_{\rm sd}}=2$ or 3 it can be shown that ${{\check{{\boldsymbol{ \sf P}}}}}(\tau)$ is the best approximation[5](#fn5), in the Frobenius norm, to $\text{sym}{\left( {{\boldsymbol{\sf P}}}^{\rm p}(\tau) \right)}$ in the set of ${{\rm n}\_{\rm sd}}\times{{\rm n}\_{\rm sd}}$ real symmetric negative-semidefinite matrices whose negative eigenvalues (when they exist) are of even algebraic multiplicities. ### Estimating ${{\boldsymbol{\sf q}}}(\tau)$[subsec:Estimating-1] After estimating ${{\boldsymbol{\sf P}}}(\tau)$ as described in $\S$[subsec:Estimating] using $\eqref{equ:PConstruction}$ we estimate ${{\boldsymbol{\sf q}}}(\tau)$ as $${{}^{ \mathscr{l} }\! \bar{{{\boldsymbol{\sf A}}}}}(\tau)-\boldsymbol{\sf P}(\tau)\,{{}^{ \mathscr{l} }\! {{\boldsymbol{\sf X}}}},$$ where l is some particular integer in J. $\sqrt{\text{AO}}$-algorithm, [enu:sqrtAO-Step2] of 3 ----------------------------------------------------- In [subsec:Calculation-of-] we show using ${{\boldsymbol{\sf P}}}(\tau)$’s decomposition $\eqref{equ:SymPTheoreticalSpecDecomposition}$ that ${\overline{{{\boldsymbol{\sf W}}}}}(\tau)$ can be computed from $\eqref{equ:sym-1}$ as $${\overline{{{\boldsymbol{\sf W}}}}}(\tau)= \pm \begin{cases} {{\boldsymbol{\sf N}}}(\tau) \star{\left( \lambda(\tau) \right)} {\left. {{\boldsymbol{\sf N}}} \right.^{\sf T}}(\tau),&{{\rm n}\_{\
arxiv_0000279
large team projects in capstone courses & ASEE Annual Conference and Exposition S31 & Engelsma, J. R. & 2014 & Best practices for industry-sponsored CS capstone courses & Journal of Computing Sciences in Colleges S32 & Matthies, C., Teusner, R., Hesse, G. & 2019 & Beyond Surveys: Analyzing Software Development Artifacts to Assess Teaching Efforts & IEEE Frontiers in Education Conference, FIE 2018 S33 & Ziv, H., Patil, S. & 2010 & Capstone project: From software engineering to ``Informatics`` & 23rd IEEE Conference on Software Engineering Education and Training, CSEE&T 2010 S34 & Anderson, Ruth E.; Borriello, Gaetano; Martin, Hélène; Black, Leonard & 2009 & Capstone projects as community connectors & Journal of Computing Sciences in Colleges S35 & Paasivaara, M., Vanhanen, J., Lassenius, C. & 2019 & Collaborating with industrial customers in a capstone project course: The customers’ perspective & IEEE/ACM 41st International Conference on Software Engineering: Software Engineering Education and Training, ICSE-SEET 2019 S36 & Adams, R., Kleiner, C. & 2016 & Collaboration support in an international computer science capstone course & International Conference on Social Computing and Social Media S37 & Watkins, K.Z., Barnes, T. & 2010 & Competitive and agile software engineering education & IEEE SoutheastCon, SoutheastCon 2010 S38 & Gustavsson, H., Brohede, M. & 2019 & Continuous assessment in software engineering project course using publicly available data from GitHub & 15th International Symposium on Open Collaboration, OpenSym 2019 S39 & Hadfield, Steven M.; Jensen, Nathan A. & 2007 & Crafting a software engineering capstone project course & Journal of Computing Sciences in Colleges S40 & Rong, G., Shao, D. & 2012 & Delivering software process-specific project courses in tertiary education environment: Challenges and solution & 25th IEEE Conference on Software Engineering Education and Training, CSEE&T 2012 S41 & Nguyen, D.M., Truong, T.V., Le, N.B. & 2013 & Deployment of capstone projects in software engineering education at Duy Tan university as part of a university-wide project-based learning effort & Learning and Teaching in Computing and Engineering, LaTiCE 2013 S42 & Lago, P., Schalken, J., Vliet, H.V. & 2009 & Designing a multi-disciplinary software engineering project & 22nd IEEE Conference on Software Engineering Education and Training, CSEE&T 2009 S43 & Angelov, S., de Beer, P. & 2017 & Designing and applying an approach to software architecting in agile projects in education & Journal of Systems and Software S44 & Anderson, R.E., Kolko, B. & 2011 & Designing technology for resource-constrained environments: A multidisciplinary capstone sequence & Frontiers in Education, FIE 2012 S45 & Leilde, V., Ribaud, V. & 2017 & Does Process Assessment Drive Process Learning? the Case of a Bachelor Capstone Project & 30th IEEE Conference on Software Engineering Education and Training, CSEE&T 2017 S46 & Brown, Q., Lee, F., Alejandre, S. & 2009 & Emphasizing soft skills and team development in an educational digital game design course & 4th International Conference on the Foundations of Digital Games, FDG 2009 S47 & Takala, T. M., Malmi, L., Pugliese, R., Takala, T. & 2016 & Empowering students to create better virtual reality applications: A longitudinal study of a VR capstone course & Informatics in Education S48 & Marques, M., Ochoa, S.F., Bastarrica, M.C., Gutierrez, F.J. & 2018 & Enhancing the Student Learning Experience in Software Engineering Project Courses & IEEE Transactions on Education S49 & De Souza, R.T., Zorzo, S.D., Da Silva, D.A. & 2015 & Evaluating capstone project through flexible and collaborative use of Scrum framework & Frontiers in Education Conference, FIE 2015 S50 & Vu, J.H., Frojd, N., Shenkel-Therolf, C., Janzen, D.S. & 2009 & Evaluating test-driven development in an industry-sponsored capstone project & 6th International Conference on Information Technology: New Generations, ITNG 2009 S51 & Laplante, P.A., Defranco, J.F., Guimaraes, E. & 2019 & Evolution of a graduate software engineering capstone course - A course review & International Journal of Engineering Education S52 & Lederman, Timoth C. & 2010 & Evolution of capstone-courses in software engineering a finishing school & Journal of Computing Sciences in Colleges S53 & Delgado, D., Velasco, A., Aponte, J., Marcus, A. & 2017 & Evolving a Project-Based Software Engineering Course: A Case Study & 30th IEEE Conference on Software Engineering Education and Training, CSEE&T 2017 [width=1,cols=5] p0.07p0.19p0.03p0.31p0.28 ID & Author(s) & Year & Title & Source title S55 & Ras, Eric and Carbon, Ralf and Decker, Björn and Rech, Jörg & 2007 & Experience Management Wikis for Reflective Practice in Software Capstone Projects & IEEE Transactions on Education S56 & Schorr, R. & 2020 & Experience Report on Key Success Factors for Promoting Students’ Engagement in Software Development Group Projects & 4th IEEE World Conference on Engineering Education, EDUNINE 2020 S57 & Longstreet, C. Shaun; Cooper, Kendra & 2013 & Experience report: A sustainable serious educational game capstone project & CGAMES’2013 USA S58 & Dupuis, R., Champagne, R., April, A., Séguin, N. & 2010 & Experiments with Adding to the Experience that Can be Acquired from Software Courses & 7th International Conference on the Quality of Information and Communications Technology, QUATIC 2010 S59 & Burge, J. & 2007 & Exploiting Multiplicity to Teach Reliability and Maintainability in a Capstone Project & 20th IEEE Conference on Software Engineering Education and Training, CSEE&T 2007 S60 & Marshall, L., Pieterse, V., Thompson, L., Venter, D.M. & 2016 & Exploration of Participation in Student Software Engineering Teams & ACM Transactions on Computing Education, TOCE S61 & Ganci, A., Ramnath, R., Ribeiro, B., Stone, R.B. & 2011 & Exploring collaboration between computer science engineers and visual communication designers in educational settings & 13th International Conference on Engineering and Product Design Education, E&PDE 2011 S62 & Burden, H., Steghöfer, J.-P., Hagvall Svensson, O. & 2019 & Facilitating entrepreneurial experiences through a software engineering project course & 41st International Conference on Software Engineering: Software Engineering Education and Training, ICSE-SEET 2019 S63 & Basholli, A., Baxhaku, F., Dranidis, D., Hatziapostolou, T. & 2013 & Fair assessment in software engineering capstone projects & 6th Balkan Conference in Informatics S64 & Magana, A. J., Seah, Y. Y., Thomas, P. & 2018 & Fostering cooperative learning with Scrum in a semi-capstone systems analysis and design course & Journal of Information Systems Education S65 & Sievi-Korte, O., Systä, K., Hjelsvold, R. & 2015 & Global vs. local – Experiences from a distributed software project course using agile methodologies & Frontiers in Education, FIE 2015 S66 & Hebig, R., Ho-Quang, T., Jolak, R., Schröder, J., Linero, H., Ågren, M., Maro, S.H. & 2020 & How do students experience and judge software comprehension techniques? & 28th International Conference on Program Comprehension S67 & Verdicchio, Michael & 2021 & Hurricanes and pandemics: an experience report on adapting software engineering courses to ensure continuity of instruction & Journal of Computing Sciences in Colleges S68 & Włodarski, R., Poniszewska-Marańda, A., Falleri, J.-R. & 2022 & Impact of software development processes on the outcomes of student computing projects: A tale of two universities & Information and Software Technology S69 & Izu, Cruz & 2018 & Improving Outcomes for a Masters Capstone IT Project & IEEE International Conference on Teaching, Assessment, and Learning for Engineering, TALE 2018 S70 & Flowers, J.G. & 2008 & Improving the Capstone project experience: a case study in software engineering & 46th Annual Southeast Regional Conference on XX S71 & Gannod, Gerald C.; Bachman, Kristen M.; Troy, Douglas A.; Brockman, Steve D. & 2010 & Increasing alumni engagement through the capstone experience & Frontiers in Education, FIE 2010 S72 & Zilora, S.J. & 2015 & Industry-emulated projects in the classroom & 16th Annual ACM Conference on Information Technology Education, SIGITE 2015 S73 & Spichkova, M. & 2019 & Industry-oriented project-based learning of software engineering & 24th International Conference on Engineering of Complex Computer Systems, ICECCS 2019 S74 & Carvalho, J.A., Sousa, R.D., Sá, J.O. & 2010 & Information systems development course: Integrating business, IT and IS competencies & 2010 IEEE Transforming Engineering Education: Creating Interdisciplinary Skills for Complex Global Environments S75 & Palacin-Silva, M.V., Seffah, A., Porras, J. & 2018 & Infusing sustainability into software engineering education: Lessons learned from capstone projects & Journal of Cleaner Production S76 & Kumar, S., Wallace, C. & 2015 & Instruction in software project communication through guided inquiry and reflection & Frontiers in Education, FIE 2015 S77 & Zeid, A. & 2012 & Integrating international students’ contests with computer science capstone: Lessons learned and best practices & Frontiers in Education, FIE 2012 S78 & Lundqvist, K., Ahmed, A., Fridman, D., Bernard, J.-G. & 2019 & Interdisciplinary Agile Teaching & Frontiers in Education, FIE 2019 S79 & Santoso, H.B., Lawanto, O., Purwandari, B., Isal, R.Y.K., Fitriansyah, R. & 2018 & Investigating Students’ Metacognitive Skills while Working on Information Systems Development Projects & 7th World Engineering Education Forum, WEEF 2017 S80 & Christensen, E.L., Paasivaara, M. & 2022 & Learning Soft Skills through Distributed Software Development & International Conference on Software and System Processes and Internation Conference on Global Software Engineering S81 & Rout, Terence P.; Seagrott, John & 2007 & Maintaining High Process Capability in a Student Project Course & 20th Conference on Software Engineering Education & Training, CSEE&T 2007 S82 & Rodriguez, G., Soria, A., Campo, M. & 2016 & Measuring the Impact of Agile Coaching on Students’ Performance & IEEE Transactions on Education S83 & Linhoff, J., Settle, A. & 2009 & Motivating and evaluating game development capstone projects & 4th International Conference on Foundations of Digital Games [width=1,cols=5] p0.07p0.19p0.03p0.31p0.28 ID & Author(s) & Year & Title & Source title S84 & Haddad, H.M. & 2013 & One-semester CS capstone: A 40-60 teaching approach & 10th International Conference on Information Technology: New Generations, ITNG 2013 S85 & Fan, Xiaocong & 2018 & Orchestrating Agile Sprint Reviews in Undergraduate Capstone Projects & Frontiers in Education, FIE 2018 S86 & Fagerholm, F., Vihavainen, A. & 2013 & Peer assessment in experiential learning: Assessing tacit and explicit skills in agile software engineering capstone projects & Frontiers in Education, FIE 2013 S87 & Vasankari, T., Majanoja, A.-M. & 2019 & Practical Software Engineering Capstone Course – Framework for Large, Open-Ended Projects to Graduate Student Teams & Internation Conference on Computer Supported Education S88 & Karunasekera, S., Bedse, K. & 2007 & Preparing software engineering graduates for an industry career & 20th Conference on Software Engineering Education & Training, CSEE&T 2007 S89 & Weerawarana, S.M., Perera, A.S., Nanayakkara, V. & 2012 & Promoting creativity, innovation and engineering excellence: A case study from Sri Lanka & IEEE International Conference on Teaching, Assessment, and Learning for Engineering, TALE 2012 S90 & Fornaro, R.J., Heil, M.R., Tharp, A.L. & 2007 & Reflections on 10 years of sponsored senior design projects: Students win-clients win! & Journal of Systems and Software S91 & Roach, S. & 2011 & Retrospectives in a software engineering project course: Getting students to get the most from a project experience & 24th IEEE-CS Conference on Software Engineering Education and Training, CSEE&T 2011 S92 & Mäkiaho, P., Poranen, T. & 2018 & Risks management in software development capstone projects & 19th International Conference on Computer Systems and Technologies S93(a,b) & MacKellar, B. K., Sabin, M., Tucker, A. & 2013 & Scaling a framework for client-driven open source software projects: A report from three schools & Journal of Computing Sciences in Colleges S94 & Yuen, T.T. & 2015 & Scrumming with educators: Cross-departmental collaboration for a summer software engineering capstone & International Conference on Learning and Teaching in Computing and Engineering, LaTiCE 2015 S95 & Isomöttönen, V., Daniels, M., Cajander, Å., Pears, A., McDermott, R. & 2019 & Searching for global employability: Can students capitalize on enabling learning environments? & ACM Transactions on Computing Education S96 & Maxim, B. & 2008 & Serious games as software engineering capstone projects & ASEE Annual Conference and Exposition S97 & Krogstie, B.R., Divitini, M. & 2009 & Shared timeline and individual experience: Supporting retrospective reflection in student software engineering teams & 22nd Conference on Software Engineering Education and Training, CSEE&T 2009 S98 & Johns-Boast, L., Flint, S. & 2013 & Simulating industry: An innovative software engineering capstone design course & Frontiers in Education, FIE 2013 S99 & Boti, E., Damasiotis, V., Fitsilis, P. & 2021 & Skills Development Through Agile Capstone Projects & International Conference on Frontiers in Software Engineering S100 & Paiva, S.C., Carvalho, D.B.F. & 2018 & Software creation workshop: A capstone course for business-oriented software engineering teaching & XXXII Brazilian Symposium on Software Engineering S101 & Saeedi, K., Visvizi, A. & 2021 & Software development methodologies, HEIs, and the digital economy & Education Sciences S102 & Smith, T., Cooper, K.M.L., Longstreet, C.S. & 2011 & Software engineering senior design course: Experiences with agile game development in a capstone project & International Conference on Software Engineering S103 & Jaccheri, L., Sindre, G. & 2007 & Software engineering students meet interdisciplinary project work and art & 11th International Conference on Information Visualisation, IV 2007 S104 & Krusche, S., Dzvonyar, D., Xu, H., Bruegge, B. & 2018 & Software Theater—Teaching Demo-Oriented Prototyping & ACM Transactions on Computing Education, TOCE S105 & Budd, A.J., Ellis, H.J.C. & 2008 & Spanning the gap between software engineering instructor and student & Frontiers in Education, FIE 2008 S106 & Decker, A., Egert, C.A., Phelps, A. & 2016 & Splat! er, shmup? A postmortem on a capstone production experience & Frontiers in Education, FIE 2008 S107 & Kerbs, R. & 2007 & Student teamwork: A capstone course in game programming & Frontiers in Education, FIE 2007 S108 & Tadros, Ibrahem; Hammami, Samir; Al-Zoubi, Khaled & 2008 & Systems Development Projects & 3rd International Conference on Information and Communication Technologies: From Theory to Applications S109 & Jarzabek, S. & 2013 & Teaching advanced software design in team-based project course & 26th IEEE International Conference on Software Engineering Education and Training, CSEE&T 2013 S110 & Lu, Baochuan; DeClue, Tim & 2011 & Teaching agile methodology in a software engineering capstone course & Journal of Computing Sciences in Colleges S111 & Cagiltay, N.E. & 2007 & Teaching software engineering by means of computer-game development: Challenges and opportunities & British Journal of Educational Technology [width=1,cols=5] p0.07p0.19p0.03p0.31p0.28 ID & Author(s) & Year & Title & Source title S112 & Tafliovich, A., Caswell, T., Estrada, F. & 2019 & Teaching software engineering with free open source software development: An experience report & Annual Hawaii International Conference on System Sciences S113 & Paasivaara, M., Lassenius, C., Damian, D., Raty, P., Schroter, A. & 2013 & Teaching students global software engineering skills using distributed Scrum & 35th International Conference on Software Engineering, ICSE 2013 S114 & Khmelevsky, Y. & 2016 & Ten years of capstone projects at Okanagan College: A retrospective analysis & 21st Western Canadian Conference on Computing Education S115 & Mahnič, V. & 2015 & The capstone course as a means for teaching agile software development through project-based learning & World Transactions on Engineering and Technology Education S116 & Broman, D., Sandahl, K., Baker, M.A. & 2012 & The company approach to software engineering project courses & IEEE Transactions on Education S117 & Khakurel, J., Porras, J. & 2020 & The Effect of Real-World Capstone Project in an Acquisition of Soft Skills among Software Engineering Students & 32nd IEEE Conference on Software Engineering Education and Training, CSEE&T 2020 S118 & Iacob, C., Faily, S. & 2020 & The impact of undergraduate mentorship on student satisfaction and engagement, teamwork performance, and team dysfunction in a software engineering group project & 51st ACM Technical Symposium on Computer Science Education, SIGCSE 2020 S119 & Hoar, R. & 2014 & The real world web: How institutional IT affects the delivery of a capstone web development course & 19th Western Canadian Conference on Computing Education, WCCCE 2014 S120 & Yue, K. B., Damania, Z., Nilekani, R., Abeysekera, K. & 2011 & The use of free and open source software in real-world capstone projects & Journal of Computing Sciences in Colleges S121 & Isomöttönen, V., Kärkkäinen, T. & 2008 & The value of a real customer in a capstone project & 21st Conference on Software Engineering Education and Training, CSEE&T 2008 S122 & Mohan, S., Chenoweth, S., Bohner, S. & 2012 & Towards a better capstone experience & 43rd ACM Technical Symposium on Computer Science Education, SIGCSE’12 S123 & Rico, D.F., Sayani, H.H. & 2009 & Use of agile methods in software engineering education & Agile Conference, AGILE 2009 S124 & Tribelhorn, B., Nuxoll, A.M. & 2021 & Using Agile and Active Learning in Software Development Curriculum & ASEE Virtual Annual Conference and Exposition S125 & McDonald, J., Wolfe, R. & 2008 & Using computer graphics to foster interdisciplinary collaboration in capstone courses & Journal of Computing Sciences in Colleges S126 & Ju, A., Hemani, A., Dimitriadis, Y., Fox, A. & 2020 & What agile processes should we use in software engineering course projects? & 51st ACM Technical Symposium on Computer Science Education, SIGCSE 2020 S127 & Bastarrica, M.C., Perovich, D., Samary, M.M. & 2017 & What can students get from a software engineering capstone course? & 39th IEEE/ACM International Conference on Software Engineering: Software Engineering and Education Track, ICSE-SEET 2017 A systematic literature review of capstone courses in software engineering ========================================================================== [type=editor, auid=000,bioid=1, orcid=0000-0002-4894-8365] Context: Tertiary education institutions aim to prepare their computer science and software engineering students for working life. While much of the technical principles are covered in lower-level courses, team-based capstone projects are a common way to provide students with hands-on experience and teach soft skills.Objective: This paper explores the characteristics of software engineering capstone courses presented in the literature. The goal of this work is to understand the pros and cons of different approaches by synthesising the various aspects of software engineering capstone courses and related experiences.Method: In a systematic literature review for 2007–2007, we identified 127 primary studies. These studies were analysed based on their presented course characteristics and the reported course outcomes. Results: The characteristics were synthesised into a taxonomy consisting of duration, team sizes, client and project sources, project implementation, and student assessment. We found out that capstone courses generally last one semester and divide students into groups of 4–5 where they work on a project for a client. For a slight majority of courses, the clients are external to the course staff and students are often expected to produce a proof-of-concept level software product as the main end deliverable. The courses also offer versatile assessments for students throughout the project. Conclusions: This paper provides researchers and educators with a classification of characteristics of software engineering capstone courses based on previous research. We also further synthesise insights on the reported outcomes of capstone courses. Our review study aims to help educators to identify various ways of organising capstones and effectively plan and deliver their own capstone courses. The characterisation also helps researchers to conduct further studies on software engineering capstones. Our taxonomy of course features is based on ACM/IEEE guide for capstone courses There is a vast diversity in how capstone courses in SE are implemented More research is needed to compare different course implementation strategies Many of the capstone courses are shorter than the recommended two semesters Many of the capstone courses are missing an external client capstone project course computer science education software engineering education Introduction ============ Universities and other tertiary education institutions should provide their students with sufficient skills and abilities before the students enter working life. In software engineering-related programs, this entails having an understanding of the common principles and theory in computer science and technical competencies and knowledge demanded by the industry. Any recent graduate should also be able to apply this technical knowledge in practice. While much of the technical knowledge and theories are covered in lower-level courses, many institutions hold team-based capstone project courses to ensure students are ready to apply the knowledge in a workplace environment. A ``capstone course`` usually means a course that finishes an academic degree. The main goal of a capstone project is to provide hands-on experience in applying the tools, techniques, principles and best practices that are taught more theoretically in previous courses. Capstone projects are also regarded as crucial in teaching students the necessary soft skills such as teamwork, verbal and written communication, time management, problem solving and project management. In computer science (CS) and software engineering (SE) programs, capstone courses generally last one or two semesters, and they include assigning students into teams and having them work on various kinds of software engineering projects. In these projects, they are expected to experience stages of the software development life-cycle from requirements solicitation to software maintenance. Given the general acceptance of capstones as a practical way of teaching industry-relevant skills, a high number of institutions have implemented their own capstone courses. This has resulted in a great deal of research done on capstone courses and their outcomes. In order to provide a coherent and compact view of software engineering capstones, this research synthesises the current body of knowledge on the topic in a systematic manner. We believe that such a review gives educators an effective tool for planning and implementing their own capstone courses. Researchers can also benefit from a systematic review of capstones to conduct further comparative studies on the impact of the varying course forms. This study is organised as follows. The next section focus on the previous literature reviews on SE capstones, as well as general characteristics of such courses. Section [researchmethod] describes the research questions and the related methods, including how the articles were selected. Section [results] presents the results of the literature review. The main findings and their validity are discussed in Section [discussion]. Suggestions for future research are also given. Finally, Section [conclusions] concludes the research. Previous work ============= Systematic literature reviews of SE capstones --------------------------------------------- Many literature reviews have been written in the general area of software engineering education (SEE). Usually, they focus on specific sub-areas of SEE such as teaching methods in software engineering, practical approaches to SEE, trends in SEE or teaching global software engineering. As the focus of this research is especially on project-based capstone courses in software engineering, we carefully sought any earlier systematic reviews done on them. The search was conducted on May 17th, 2022, first in the citation database Scopus and secondly in Google Scholar. Table [tab:reviewsearchterms] lists the search terms used in both databases. All search results produced by Scopus were checked to see whether they include an SLR of SE capstones, whereas for Google Scholar, each search produced tens of thousands of hits, so we went through the first 20 pages of each search (200 hits). At this point, the results started to become highly irrelevant, and often repetitive. Based on our search, we believe that the three review papers presented in Table [tab:capstonereviews] are the ones that have been published so far on this topic. Table [tab:capstonereviews] also presents the characteristics of capstone courses each of these reviews investigated. Next, we will briefly present these studies and discuss the necessity of this review. [width=1,cols=3] [tab:reviewsearchterms] p0.1p0.77p0.05 Database & Search term & Hits Scopus & TITLE-ABS-KEY( software AND engineering AND capstone AND literature AND review ) & 10 Scopus & TITLE-ABS-KEY( software AND engineering AND capstone AND review ) & 61 Scopus & TITLE-ABS-KEY( education AND ``software engineering`` AND literature AND mapping ) & 40 Scopus & TITLE-ABS-KEY( project AND course AND software AND engineering AND systematic AND review ) & 20 Scopus & TITLE-ABS-KEY( ``computer science`` AND capstone AND literature AND review ) & 8 Scopus & TITLE-ABS-KEY( software AND engineering AND project AND course AND systematic AND literature AND review ) & 17 Scopus & TITLE-ABS-KEY( ``software engineering`` AND education AND literature AND review ) & 182 Google Scholar & software engineering capstone systematic review & 20 300 Google Scholar & software engineering capstone characteristics & 31 400 Google Scholar & computer science capstone literature review & 53 400 Google Scholar & capstone literature review & 94 800 [width=1,cols=4,pos=h] [tab:capstonereviews] p0.3p0.05p0.17p0.38 Title & Year & Ref & Course characteristics examined in the survey A survey of computer science capstone course literature & 2011 & & Course-related: models, learning theories, goals, topics, student evaluation, evaluation.Project-related: software process models, phases, type, documentation, tools, groups, instructor administration. Designing the IT capstone course & 2019 & & Course duration, learning of new skills, project identification and selection, teams sizes, team formation, followed methodologies, assessment of learning outcomes, team and project supervision\* A review of literature on assessment practices in capstone engineering design courses: Implications for formative assessment & 2006 & & Connection to student achievement \*For the survey by, these are characteristics, which would have been examined in the actual survey. presents a survey done on the literature related to undergraduate computer science capstone courses. The survey is comprehensive, comprising 200 papers on the subject and summarising them under two major themes: course issues and project issues (Table [tab:capstonereviews]). Out of these, course issues include aspects related to the general course organisation, such as course models, learning theories present in the course and student evaluation. Project issues, on the other hand, categorise and describe the projects and how they are implemented. The category includes things like software process phases, project type and documentation of the projects. has provided an abstract of a systematic literature review for designing an IT capstone course. The review plans to provide answers to several questions relating to capstone course design, such as the optimal team size for project teams, identifying and selecting suitable projects and determining the correct duration for the course (Table [tab:capstonereviews]). We are not aware that the research proposed in the abstract would have been completed. have performed a systematic review on the assessment practices in capstone engineering design courses. They were especially interested in discovering the extent to which classroom assessment has received attention in the capstone literature. The paper included 32 journal articles and conference proceedings presenting varying assessment techniques and their use. In addition to the presented three literature reviews, a study on the dimensions of SE and CS capstone projects has been conducted by. Said dimensions are roughly divided into two groups: project dimensions such as customer identity and development dimensions such as project type and source code visibility. The purpose of their study is to provide a framework for analysing capstone courses, especially in terms of risk and realism. While their categorisation is versatile, their study does not, however, include a thorough systematic literature review. The categorisation presented is more of an experience-based proposal, and therefore the study is left out of Table [tab:capstonereviews]. To the best of our knowledge, there is no extensive, recent literature review done on software engineering capstone courses. A survey conducted by is comprehensive but dated to 2011 and therefore does not cover the large number of primary studies published in the past decade. It also does not provide any quantitative statistics of the course characteristics, which would enable educators or researchers to assess how common some aspect in reality is. provide a review on capstone literature, but it is limited to continuous assessment techniques and is dated to 2006. aims to provide a systematic literature review on IT capstone design and characteristics, but as of now, the paper has not proceeded beyond the original abstract. In light of this, current research does not provide an up-to-date view of how SE capstone courses generally are organised and with what kind of outcomes. Such a view on the software engineering capstones would not only provide educators with an important tool for planning their own capstone courses but also give researchers a basis for performing comparative studies on these courses. Background: Capstone course characteristics ------------------------------------------- ACM/IEEE Curriculum Guidelines for Software Engineering (SE) Degree Programs view the capstone project as an essential element of a SE degree programme and state that the main goal of a capstone course is to ensure that the curriculum has a significant real-world basis. According to, incorporating real-world elements into the curriculum is necessary to enable effective learning of software engineering skills and concepts. The ACM/IEEE Curriculum Guidelines for Computer Science (CS) degree programs align with these views and state that all graduates of CS programs should have been involved in at least one substantial project. Such projects should challenge students by being integrative, requiring evaluation of potential solutions and working on a larger scale than typical course projects. For students, a capstone project typically represents a culmination of their studies and is one of the last milestones before graduation. Indeed, since the 1970s, hundreds of primary studies have been written on this large, final-year project course. The also lists a set of key recommendations that a capstone course should follow. The recommendations are listed word by word in Table [tab:capstonerecommendations]. We decided to use these recommendations as the basis for formulating our research questions. They give a general outline of capstone courses and therefore provide a valid starting point for the categorisation done in this research. [width=1,cols=2,pos=h] [tab:capstonerecommendations] p0.1p0.8 CR # & Recommendation & The project should span a full academic year, giving students adequate time to reflect upon experiences and retry solutions as appropriate. & Where possible, this should preferably be undertaken as a group project. If such factors as assessment make this difficult, it is essential that there should be a separate group project of substantial size. & Where possible, a project should have a “customer” other than the supervisor so that the student gains fuller experience with product development life-cycle activities. & A project should have some form of implementation as its end deliverable so that the students can experience a wide set of software development activities and adequately evaluate these experiences. Theory-based projects such as the development of formal specifications is therefore inappropriate for this role. & Evaluation of project outcomes should go beyond concept implementation (“we built it, and it worked” ), using walkthroughs, interviews, or simple experiments to assess the effectiveness and limitations of the deliverables. & Assessment of a capstone project should consider how effectively software engineering practices and processes have been employed, including the quality of student reflection on the experience, and not be based only on the delivery of a working system. Thus according to these guidelines, there are some basic characteristics that capstone courses have. They can be characterised as long and substantial projects (CR1, CR2) that should preferably be completed in a team (CR2). Projects should have customers (CR3) for whom the students are expected to deliver some form of real implementation at the end of the course (CR4). Students should therefore engage in real software development activities and not just complete simple, theory-based assignments provided by the teacher (CR4). Evaluation of the project outcomes should focus not only on the fact that the project ``works``, but also assess the deliverables on how well they have been completed (CR5). Finally, the focus of the course and its assessment should be on software engineering practices and processes and students should give adequate opportunities to reflect on the experience (CR6). The next section describes in more detail the process of how we derived the research questions based on these basic characteristics. Research questions and method ============================= Research questions ------------------ Characteristics of capstone courses (described in Section [capstonecharacteristics]) can be achieved in many ways. The main goal of this research was to understand these differences in how capstone courses are implemented in universities and other tertiary education institutions, and thus provide a holistic view over the various capstone course implementations. We decided to use the ACM/IEEE Curriculum Guidelines for Undergraduate SE Degree Programmes as the basis for starting to explore these characteristics. The recommendations are listed in Table [tab:capstonerecommendations]. Although some of these aspects, e.g., team formation, have been addressed in the previous reviews, previous literature reviews are slightly outdated. Moreover, we are not aware of any study covering all the aspects mentioned in the ACM/IEEE recommendations. Related to, we were interested in the duration of the courses and what rationale primary studies provide for choosing a specific course duration if any: RQ1 What is the duration of SE capstone courses, and what advantages or disadvantages are related to a certain duration? Related to, we wanted to find out if these projects are conducted in teams, how teams are composed and what is the rationale behind choosing a certain team size: RQ2 What team sizes do SE capstone courses have, and how are team sizes justified? Based on the ACM/IEEE recommendations (i.e., ), a project should have a customer other than the teacher of the course. An alternative approach to bringing an outside view to a project is to outsource project topics. Thus, our third research question, *how are the project and client sourcing handled in SE capstone courses*, was divided into two sub-questions: RQ3.1 Who acts as the client for capstone projects? RQ3.2 How are the ideas for projects sourced? Related to, we asked: *How are the projects in capstone courses implemented (RQ4)*. We wanted to uncover what students do in these courses and therefore, we looked into the actual project implementation. As `project implementation` can mean a multitude of things, we decided to divide this research question into smaller, more concrete sub-questions: RQ4.1 What artefacts are students expected to produce on capstone courses? RQ4.2 What is the software life-cycle gone through during these projects? RQ4.3 How are the implementation technologies chosen for capstone projects? With RQ4.1 we aimed to find out what students actually produce in these courses and whether any software is being developed. RQ4.2 helped us to find out if the capstone project is as integrative experience on software engineering practices as curriculum guidelines suggest. Finally, finding out how educators make the choices for implementation technologies and what implications these choices have, gave some insight into project implementation. As speaks about assessment, our last research question also asked: *How is the student assessment conducted on SE capstone courses? (RQ5)*. As assessment can be divided into continuous feedback and final grading, RQ5 was also split into two: RQ5.1 How are the students assessed at the end of SE capstone courses? RQ5.2 How are students guided, if at all, during SE capstone courses? The rationalisation here was that we wanted to uncover whether the evaluation is based on a multitude of factors like suggests and whether students are given adequate possibilities to reflect on their experiences. In order to get a comprehensive representation of how project-based capstone courses are generally organised, relevant research articles were searched. One could argue that the characteristics and any organisational details of these courses could be derived from the web pages of universities and other tertiary institutions. However, we wanted not only to produce a list of characteristics such as the duration and workload of the courses but also to reveal more about the contents of these courses. An important part of the research was also to provide educators with insights related to the various characteristics. Without any evaluation or assessment of the chosen structure and characteristics, this would have been impossible to achieve. Search strategy --------------- The method used in this study follows the SLR method by. The initial data collection was done by finding relevant sources from scientific databases: Scopus, ACM Digital Library, IEEE Xplore and ScienceDirect. Some preliminary searches were conducted on these databases to find out to which extent research articles use the word ``capstone`` and its synonyms when describing large, degree-culminating project courses in software engineering-related programs. It turned out that the term ``capstone`` is well-known and widely used in research articles. It was also used by in their earlier work. Therefore, the first search string was simply constructed as: ``` software AND capstone ``` In order to have a complete picture of the project course landscape in software engineering, a second search was performed using the second search string: ``` software AND 'project course' ``` This was deemed necessary as not all sources had the word ``capstone`` present in the metadata even though they clearly were describing courses relevant to this research. Searches with the two search strings were conducted sequentially in each database. used ``software engineering course`` as another search term in their study, but we did not want to limit ourselves to the SE discipline, as relevant software-related courses might be presented, for instance, in computer science. Using only the words ``software`` and ``course`` on the other hand, provided too many irrelevant hits. Scopus alone produced nearly 30 000 hits of which only a small fraction would have been relevant to our study. A total of 981 unique papers were found after combining the papers found from all four databases using the search strings and removing duplicates. The databases were searched on June 11th and June 12th 2022, one after the other, starting with Scopus, moving on to ACM Digital Library, followed by ScienceDirect and finishing with IEEE Xplore. As the search fields and filters are slightly different in each of these databases, the search strings were adjusted to match each specific set-up. They were, however, kept semantically the same across the searches. Exact search strings and initial search results are listed in Table [tab:initialsearchresults]. As we wanted to identify current ways of organising capstone courses, the searches in all four databases were limited to the years 2007 to the search day in June 2022. This time period was regarded as sufficiently long to provide a holistic view of the current capstone courses. It overlaps with by a few years but also uncovers 11 years of research done on the area that has not been systematically reviewed since. Three stages of selection were applied to this initial set, after which 127 primary studies remained. Fig. [fig:searchstrategy] summarises the search and selection process, and the following subsections will describe it in greater detail. ![Search strategy](figs/search_strategy "fig:") [fig:searchstrategy] [width=1,cols=3,pos=h] [tab:initialsearchresults] p0.18p0.66p0.08 Database & Search strings & Hits Scopus & TITLE-ABS-KEY ( software AND capstone ) & 762 Scopus & TITLE-ABS-KEY ( software AND ``project course`` ) & 262 ACM Digital Library & [Title: software] AND [Title: capstone] & 24 ACM Digital Library & [Keywords: software] AND [Keywords: capstone] & 32 ACM Digital Library & [Abstract: software] AND [Abstract: capstone] & 130 ACM Digital Library & [Title: software] AND [Title: ``project course``] & 6 ACM Digital Library & [Keywords: software] AND [Keywords: ``project course``] & 7 ACM Digital Library & [Abstract: software] AND [Abstract: ``project course``] & 44 ScienceDirect & (TITLE ABS KEY: SOFTWARE CAPSTONE) & 22 ScienceDirect & (TITLE ABS KEY: SOFTWARE PROJECT COURSE) & 12 IEEE Xplore & (``All Metadata``:Software) AND (``All Metadata``:capstone) & 223 IEEE Xplore & (``All Metadata``:software) AND (``All Metadata``:``project course``) & 86 Paper selection --------------- The paper selection was conducted from the initial set of 981 sources by the first author (Fig. [fig:searchstrategy]). The details of inclusion and exclusion are explained next. ### The first stage - Inclusion criteria *The titles, abstracts and keywords* of the initial papers were read and evaluated against the inclusion criteria presented below (IC1-IC3). After the first stage, 398 papers remained. IC1 The title or abstract strongly hints that the study presents frameworks or case studies of software engineering capstones or other large, project-based courses in software engineering IC2 Based on the title or abstract, the study describes real experiences of implementing a software engineering capstone course IC3 The title or abstract indicates that the study assesses the outcomes of the course or its characteristics The first inclusion criterion was developed to set the focus on software engineering courses in particular. A large number of the articles in the initial set were ruled out due to the first criterion (IC1). The papers were found to research, for instance, mechanical engineering courses, which were out of the scope of this research. We also wanted to rule out any purely hypothetical papers, where the researchers show no course that follows the frameworks or structures presented. The second inclusion criterion (IC2) aimed to ensure that all included papers would present a real-world course. The final inclusion criterion (IC3) was generated so that all papers would also evaluate the outcomes of the various course implementations. ### The second stage - Exclusion criteria The second stage was performed on the 398 papers remaining from the first stage. Any article that, based on reading the *full paper*, met at least one of the presented exclusion criteria was excluded at this stage. After this selection, 171 articles remained for the final evaluation. The used exclusion criteria were: EC1 The length of the study is less than four pages EC2 The study is not published in conference proceedings or as a journal article EC3 The study does not have full text available in English EC4 The study turned out not to describe a software engineering capstone course in a tertiary institution EC5 The study is not able to provide answers to most of the research questions Exclusion criteria from EC1 through EC3 aimed to ensure that the study was of sufficient quality. According to workshop proceedings often do not provide sufficient input for the purposes of an SLR. Additionally, quite many of the papers that were first published as short workshop proceedings or abstracts were also found to have a conference proceeding or a journal article published later on. EC3 relates to the language skills of the authors as well as the status of English as the primary language in software engineering-related research. These exclusion criteria led to some papers being rejected before reading their entire content. For exclusion criteria EC4–EC5, the content of the article was examined more carefully and, in most cases, read in its entirety to make a justified decision. Exclusion criteria EC4 and EC5 relate to our research goal. For instance, many articles were found to describe courses in computer engineering or mini-projects conducted prior to SE capstones which meant that they were out of the scope of this research and excluded based on EC4. Most of the papers left out during this stage met EC4. As for EC5, some studies were, for example, found to describe a whole curriculum with capstone courses playing only a minor part in the research, and they could therefore not provide answers to our research questions. A number of studies also evaluated a tool, method or framework relevant to the software engineering industry, not the capstone course itself. In many of these studies, the capstone course presented the researchers merely a convenient way of gaining study participants, which is why they did not fit this research. This led them to be excluded due to EC5. ### The third stage - Removal of duplicates and studies of poor quality The third stage was included mainly to rule out any duplicate data and studies of poor quality from our research. After the third stage, the final set of 127 papers remained. **Duplicate data** – All 171 primary studies remaining after the second stage describe real-life software engineering capstones. As educators often like to modify their courses over time to find the best ways of teaching, studies here too reflect on the changes done to the courses. Some authors also have written multiple articles based on the same capstone course. In such cases, the most recent article was chosen. Similarly, if a study describes several instances of the course in one paper, the principal characteristics of the most recent course instance were chosen for the data extraction. Choosing the latest instance of each course stems from the goal of this research to synthesise the current state of knowledge on capstone implementations. In addition, the decision of whether two descriptions of the same course are different enough for them to be included as their own capstone courses would have been too ambiguous and open for interpretation. also state that it is important not to include multiple publications of the same data, as it would seriously bias any results. Due to this procedure, 42 studies were removed from the final set. **Quality assessment**[qualityassessment] – In addition to inclusion/exclusion criteria, state that it is critical to perform a quality assessment on the primary studies. We also conducted a such assessment and used it to ensure that our final data set is of sufficient quality. At this stage, two studies were filtered out. Any tables or graphs presented from here on do not include these two excluded studies, and therefore represent the final set of 127 studies. Table [tab:qualityassessment] lists the set of questions used by which we also used to determine the quality of primary studies. Originally the questions were supposed to be graded on a dichotomous (``Yes`` = 1 or ``No`` = 0) scale, but we decided to use a three-point scale of ``Yes`` (= 1), ``To some extent`` (= 0.5) and ``No`` (= 0). This three-point scale has also been adopted by and and allowed us better to assess the studies where authors only provided some answers to the question. The two articles filtered out had a quality score of less than 4. [hb!]![Quality scores of the final set of studies](figs/quality_scores.png "fig:") [fig:qualityscores] [width=.9,cols=2] [tab:qualityassessment] p0.05p0.8 No. & Question Q1 & Is there a rationale for why the study was undertaken? Q2 & Is there an adequate description of the context (e.g. industry, laboratory setting, products used, etc.) in which the research was carried out? Q3 & Is there a justification and description for the research design? Q4 & Has the researcher explained how the study sample (participants or cases) was identified and selected, and what was the justification for such selection? Q5 & Is it clear how the data was collected (e.g. through interviews, forms, observation, tools, etc.)? Q6 & Does the study provide a description and justification of the data analysis approaches? Q7 & Has ‘sufficient’ data been presented to support the findings? Q8 & Is there a clear statement of the findings? Q9 & Did the researcher critically examine their own role, potential bias and influence during the formulation of research questions, sample recruitment, data collection, and analysis and selection of data for presentation? Q10 & Do the authors discuss the credibility of their findings? Q11 & Are limitations of the study discussed explicitly? In our assessment, we decided to group the first eight questions to represent the quality of reporting and rigour of the studies and final three questions to represent the credibility of evidence, similarly to. The grouped scores are presented in Fig. [fig:qualityscores] and individual scores for each study can be found at [https://github.com/article-additions](https://github.com/UniversityOfHelsinkiCS/article-additions/blob/main/capstone-courses-in-software-engineering/Quality_Assessment_Scores.png). Regarding the quality of reporting, the selected primary studies performed fairly well. It was mostly clear how the data had been collected, and the relevance of the study was explicitly discussed. However, the aspect most studies were lacking was providing justifications either for the sample selection or research designs. Regarding the credibility of evidence, the studies performed fairly poorly. Interestingly, many of the otherwise well-established studies did not include a section for explicitly discussing the limitations of the study or the author’s role in data and sample selection. This is indicated in the low averages of the credibility category. ### Overview of the final papers The three stages taken resulted in 127 primary studies, published between 2007 and June 2022. Research activity in this area has been fairly steady over the years, as depicted in Figure [fig:yearsofstudies]. It is worth noting, that we searched for papers in June 2022, making the study amount for 2022 partial. Also, as explained in Section [thirdstage], 42 earlier studies, which otherwise would have been valid for this research, were excluded from the final set of papers as there was a newer study of the same course available. This procedure skews the year distribution towards the end of the scale. The figure also shows the distribution by study type. In total, of studies 73% were conference proceedings, and 27% of studies were published as journal articles. ![Timeline and types of primary studies](figs/years_of_studies.png "fig:") [fig:yearsofstudies] All the articles included for further analysis are listed in Appendix A, Table [tab:finalsources], and referenced later in this section with their publication ID in the table (i.e., S1–S127.) Data extraction and synthesis ----------------------------- After applying the study selection process, the properties presented in Table [tab:dataextraction] were extracted from the remaining 127 studies to a common datasheet. Table [tab:dataextraction] defines how each extracted field relates to the research questions of this study. [width=.9,cols=3,pos=ht!] [tab:dataextraction] @ LLL@ Identifier & Field & RQ F1 & Title & metadata F2 & Author(s) & metadata F3 & Year & metadata F4 & Publication venue & metadata F5 & Duration of the course & RQ1 F6 & Course workload & RQ1 F7 & Team sizes & RQ2 F8 & Clients & RQ3.1 F9 & Project sources & RQ3.2 F10 & Artefacts produced & RQ4.1 F11 & Project phases & RQ4.2 F12 & Technologies & RQ4.3 F13 & Student assessment & RQ5 F14 & Outcomes of the course & RQ1–RQ5 F15 & Quality score & QA Values F1–F4 were extracted for basic documentation purposes. Items F5–F14 concern the course and its organisation presented in the study. Two of the studies present multiple separate capstone courses from different institutions. For these two studies (S8 and S93 in Table [tab:finalsources]), the items F5–F14 were extracted for each of the courses they presented. For F5–F14, we were not only interested in quantifying these characteristics into statistics but also in providing implications of different course design choices. Therefore, if the study stated, for example, that they had a two-semester capstone course because it provided students adequate time to learn, we recorded both of these information pieces: the quantifiable duration as well as any such insight relating to the characteristic. This enabled us to analyse and discuss the course characteristics better in Section [results]. A data-driven thematic analysis was applied to synthesise the qualitative data extracted as part of F5–F14. F5 and F6 were considered essential in assessing the general workload and duration of the course from the student’s perspective. Few sources have given the duration of their course (F5) as months or weeks. These were rounded to the nearest amount of semesters. Courses lasting less than 4 months were categorised as ``less than one semester``, 4 to 6 months as ``one semester`` and anything more than 6 but less than 10 months as ``two semesters``. Team sizes (F7) included the number of students per team. The courses were also examined on whether the projects in the course were done for a client (F8). The client could be external to the course staff, the role of a client could be played by the course staff, and some projects did not have clients at all. Some sources present a mix of these categories, in which case, the source was labelled by the client category, which we thought was the most prevalent in the course. We were also interested in how the project topics were generated (F9). Three main sources for projects were identified during the data extraction: course staff, external clients and the students themselves. The projects were also found to vary regarding whether the students were working on the same project idea or whether each team had their own initial problem to solve. We extracted all the artefacts that students were expected to produce during the course (F10). This included both the deliverables used for grading the course as well as artefacts produced for project management reasons, as in most cases, it was hard to draw a distinction between the two. Evidence of project phases (F11) was extracted to find out which software life-cycle activities are gone through in these courses. F12 describes the technologies used in the course. We found that most of the studies do not explicitly specify all the technologies used for the projects in their courses, and moreover, these technologies could potentially include any software technologies available. We, therefore, categorised these into two categories, based on whether the main technology selections are made team-wise, or all use a common technology stack. We extracted information on how the students learning process was assessed and improved throughout the course and how the student’s progress and achievement were assessed at the end of the course (F13). The key outcomes in the study (F14) were also extracted to assess the advantages and disadvantages of the presented capstone characteristics. F15 was extracted as presented in Section [qualityassessment] for quality assessment and study filtering. Results and analysis ==================== This section represents quantitative statistics and qualitative outcomes of capstone characteristics extracted from the primary studies. The characterisation enables us to answer our research questions and ultimately helps educators when they are planning their capstone courses. Duration (RQ1) -------------- [width=1,cols=4,pos=hb!] [tab:duration] p0.22p0.15p0.08p0.45 Category & Number of studies & Percentage & Study identifiers Less than one semester & 10 & 8% & S16, S18, S40, S46, S55, S61, S78, S94, S99, S113 One semester & 87 & 66% & S1, S2, S3, S4, S5, S8b, S8d, S9, S11, S13, S15, S20, S21, S22, S23, S24, S25, S26, S27, S30, S31, S32, S34, S35, S36, S37, S38, S42, S43, S44, S45, S47, S48, S51, S53, S54, S56, S57, S58, S60, S62, S63, S64, S65, S66, S67, S68, S69, S70, S73, S74, S75, S76, S77, S79, S80, S81, S82, S83, S84, S86, S89, S90, S92, S93a, S93b, S95, S97, S100, S102, S103, S104, S105, S106, S107, S112, S115, S116, S117, S119, S120, S121, S123, S124, S125, S126, S127 Two semesters & 32 & 24% & S6, S7, S8a, S8c, S10, S12, S14, S17, S19, S28, S33, S39, S41, S50, S52, S59, S71, S72, S85, S87, S88, S91, S96, S98, S101, S108, S109, S110, S111, S114, S118, S122 More than two semesters & 1 & 1% & S49 Not specified & 1 & 1% & S29 Regarding the actual course characteristics, we first looked into the reported duration of these courses (F5). A clear majority of institutions conduct capstone courses that last one semester (Table [tab:duration]). Interestingly, this is in conflict with the recommendations for undergraduate capstone courses, which propose having capstones lasting the entire academic year. However, the unfortunate reality is that not all curricula can absorb a full-year implementation [S117]. The capstone courses often are very labour-intensive for the teaching staff, with many teams to manage and evaluate throughout the projects [S39], [S112]. Students might have full- or part-time work, which makes the longer courses harder to arrange [S49], [S58], [S73]. Students also perceive two-semester capstone courses as laborious [S73], [S109], and some even the one-semester ones [S23], [S41]. In order to provide an intensive and realistic experience, many of these courses take up at least half a work-week [S18], [S28], [S41], [S69], [S73], [S109], [S127]. This again might make other courses taken simultaneously suffer [S41], which limits the possibilities for an intensive, year-long capstone. However, the educators who had experiences with both, shorter and longer duration, had shifted to the longer duration since they felt it was impossible to reach the wanted depth in just a few months. S6 describes how they switched to a two-semester capstone as they found the one-semester projects inadequate in skill coverage and depth. S70 is written by students of the course, and they strongly recommend that their course be lengthened into one academic year from the current duration of one semester. S33 have had experiences with one-period, i.e. a quarter of an academic year, and three-period courses, and stated that the change to a longer version received overwhelmingly positive feedback from all the participating parties. Students were able to gain more hands-on experience in applying new and familiar tools and project management. Additionally, they learned to act when faced with unanticipated events as the teams experienced surprises – regarding both technologies and people – multiple times during the year. Industrial clients received more ambitious and polished products as a result of the course, and the course staff felt that the learning objectives for the course were finally truly met. Team sizes (RQ2) ---------------- ![Team sizes in capstone courses](figs/team_sizes.png "fig:") [fig:teamsizes] To find out how many students there generally are in a project group, we extracted the reported team sizes (F7) for each course in Fig. [fig:teamsizes]. If a study refers to their course having teams of 4–5 students, this is thus reflected in both columns 4 and 5. By looking at Fig. [fig:teamsizes], it is evident that capstone courses are almost always conducted as group projects. Only three institutions in our research allow their capstone or senior project courses to be completed as single-student endeavours [S11], [S89], [S111]. Team sizes vary a great deal, ranging from 1 to 35. Research has found that in very small groups, e.g., 2–3 students, the teams are unlikely to generate the dynamics and issues that are common in collaborative software development [S36], [S53], [S56], [S58]. Such a small team size does not present enough of a challenge [S36], and smaller groups are unable to complete substantial projects in a typical one-semester course [S53]. Having very small teams might also be unmaintainable in large programs with hundreds of, or even a thousand, students due to the extra organisational overhead each team causes [S19]. Going to the other extreme, larger groups with 7 or more students have often been found to be facing other kinds of problems, such as the inability to meet all together and other management and coordination issues [S30], [S36], [S39], [S53], [S56], [S78]. ``Free-rider`` problem is also reportedly common in larger teams, where it is possible for few students to take the bigger responsibility for ensuring the overall success, and the small contribution of others might go unnoticed [S9], [S56], [S58], [S69], [S87], [S121]. In larger teams, ensuring fair grading and an equal balance of work and responsibilities requires more attention from the course staff [S56], [S87]. The course conducted in S106 had one of the largest team sizes found in our research. The course had 15 students, all working on the same game project in one team. The idea was to simulate what large-scale game development in a diverse team feels like and what it takes to create production-quality games. The authors share that their approach was not entirely successful. In the aftermath of the course, it came up that some students wanted explicit direction while others felt that they wanted more autonomy and control. According to the authors, for the latter group of students, it was clear that they were uncomfortable following the leadership of the vision team and would have preferred to work on a project of their own design. However, the authors also mention that getting to work with your own project vision is a very unlikely case for any recent graduate, which is why they did try to come up with such a real-world teamwork scenario. The majority of educators do seem to opt for the middle ground regarding team sizes and have 4 to 5 people working in a single group (Fig. [fig:teamsizes]). This size is perceived as the sweet spot, cancelling out the negatives of the two extremes [S36], [S52], [S53], [S56], [S58]. Students themselves have also reported being satisfied with such a team size [S56]. Additional measures for combating any non-productive and opportunistic group behaviour, such as social loafing and free riding, have also been proposed. Conducting peer reviews has been proven to mitigate the risk of such behaviour [S72], [S98]. Some periodic monitoring should also be done by the course staff to ensure working team dynamics [S69]. Both of these will be discussed further in Section [assessmentofstudents]. Clients and project ideas (RQ3) ------------------------------- ### Clients (RQ3.1) [width=1,cols=4,pos=ht!] [tab:clients] p0.22p0.15p0.08p0.45 Category & Number of studies & Percentage & Study identifiers Clients external to the course staff & 76 & 58% & S2, S3, S4, S5, S7, S8a, S8d, S9, S10, S11, S14, S15, S17, S18, S19, S23, S24, S27, S28, S29, S30, S31, S33, S34, S35, S37, S38, S39, S41, S46, S48, S49, S50, S52, S55, S58, S59, S60, S61, S62, S63, S66, S67, S69, S71, S73, S78, S80, S81, S84, S85, S86, S87, S88, S90, S91, S92, S93a, S93b, S94, S96, S97, S98, S104, S108, S110, S112, S113, S114, S117, S120, S121, S122, S124, S126, S127 Course staff acts as clients & 16 & 12% & S1, S6, S12, S21, S40, S43, S53, S54, S57, S68, S76, S82, S99, S115, S116, S123 No clients & 38 & 29% & S8b, S8c, S13, S20, S22, S25, S26, S32, S36, S42, S44, S45, S47, S51, S56, S64, S65, S70, S72, S74, S75, S77, S79, S83, S89, S95, S100, S101, S102, S103, S105, S106, S107, S109, S111, S118, S119, S125 Not specified & 1 & 1% & S16 We also looked at who is in the role of a client for these projects (F8) and how the project ideas are sourced (F9). Almost half of the studies (42 %) report conducting their capstone courses without clients that are external to the course (Table [tab:clients]). In these courses, the course staff may act as clients or Product Owners for the projects or alternatively, the student teams work on their own and only report progress regularly to the course staff. S6 explains that they have instructors playing clients due to the difficulty of finding suitable clients. Being a small program in a rural institution makes the businesses and organisations suited for such collaboration far and scarce. Also, the institution’s wish to own the intellectual property rights for the developed products puts off potential clients. For institutions, that would have suitable clients available, there is always the upfront investment in time and effort that the course staff has to make to contact said clients, guiding them through creating project proposals and assigning the students to these projects [S118]. S42 aimed to create a course with students from five different technical and non-technical disciplines, such as computer science and business informatics. S42 mentions that they need to be careful in how they organise the course so that it would suit the needs of all disciplines. Bringing an external client into the mix might not fulfil the learning goals for all students. Some studies also explain how the course outcomes are less predictable with multiple external clients. S114 have experienced several cases when the project sponsors did not show up for the bi-weekly meetings with the students. Such client behaviour caused very low motivation in the student teams, and some capstone projects failed due to client unavailability. S3, S23, S78 and S122 have made similar observations and stress the importance of finding committed clients to ensure a good experience for the students. Despite these risks having real, external clients other than the course supervisor for the projects is recommended for both undergraduate and graduate capstones. These clients can be from other units within the university [S7], [S9], [S14], [S52], [S58], [S86], [S87], [S94], local businesses [S7], [S9], [S86], [S87], [S98], [S110], [S122] or various non-profit organisations [S7], [S9], [S11], [S16], [S58], [S110]. Graduates of the program who already work in the industry are also a convenient way to find clients [S35]. Working closely with real-world clients has also often received highly positive feedback from students [S14], [S15], [S52], [S73], [S98], [S117] and organising staff alike [S14], [S35], [S84], [S98], [S117]. It has been found to increase the motivation and commitment of students, when there is an actual client with a real need behind the project [S9], [S14], [S15], [S35], [S66], [S73], [S84], [S121]. It has helped to keep the experience more realistic and credible in the students’ eyes [S19]. Having industry clients improves the students’ technical and nontechnical skills and better prepares them for the challenges they will face in the work-life [S14]. The collaboration has been reported to have benefits for the client too. S35 conducted a study to find out the reasons why clients participate in such project courses. The reasons included getting a tailored software product, researching new technologies and, as a clear number one, recruitment. Recruiting students could happen directly from the team or more indirectly by adding visibility among the students as potential employers. Others have noticed this benefit, too; it is not uncommon for students to get hired by the industry partner who sponsored their capstone project [S15], [S28], [S35], [S73], [S84], [S114], [S121]. S73 report having at least 60 out of a few hundred students gaining full- or part-time job offers based on the capstone project outcomes, in mere few years. Keeping the experience positive also for the clients, might make them come back with further project ideas [S35]. This helps to reduce the client acquisition overhead for years to come. Some organising institutions have even managed to attract more external clients than there are student teams, which has enabled them to collect a small fee from the ones participating in the course [S35], [S121]. ### Project sources (RQ3.2) [width=1,cols=4,pos=ht!] [tab:projectsources] p0.22p0.15p0.08p0.45 Category & Number of courses & Percentage & Study identifiers External stakeholders propose project ideas & 81 & 62% & S2, S3, S4, S5, S7, S8a, S8b, S8c, S8d, S9, S10, S11, S14, S15, S17, S18, S19, S23, S24, S27, S28, S29, S30, S31, S33, S34, S35, S37, S38, S39, S41, S46, S48, S49, S50, S52, S55, S58, S59, S60, S61, S62, S63, S66, S67, S69, S71, S72, S73, S77, S78, S80, S81, S84, S85, S86, S87, S88, S90, S91, S92, S93a, S93b, S94, S96, S97, S98, S104, S108, S110, S112, S113, S114, S117, S120, S121, S122, S124, S125, S126, S127 Course staff provides project ideas & 27 & 21% & S1, S6, S12, S13, S16, S20, S21, S25, S40, S42, S43, S45, S54, S57, S65, S68, S74, S75, S79, S82, S99, S105, S106, S109, S115, S116, S123 Students generate their own project ideas & 22 & 17% & S22, S26, S36, S44, S47, S51, S53, S56, S64, S70, S76, S83, S89, S95, S100, S101, S102, S103, S107, S111, S118, S119 Not specified & 1 & 1% & S32 In addition to finding out the clients for these projects, we also looked into how the projects for these courses are sourced (F9). Three main ways for project sourcing were identified (Table [tab:projectsources]). As the majority of courses have multiple external clients, the project ideas in these courses are mainly derived from the needs of the customer. In these cases, the organising staff often performs some pre-screening and scoping in collaboration with the clients, to ensure that the expectations for the projects are realistic and that the project scopes suit the intended learning outcomes [S9], [S24], [S35], [S52], [S61], [S87], [S90], [S94], [S98], [S121]. Capstone projects should generally not be on the critical path of any external organisation, as the course is intended to remain a safe learning place for the students [S35], [S52], [S87], [S90], [S121]. Some studies also emphasise that students are not supposed to be working for these clients, but be in collaboration with them [S9], [S52], [S87], [S121]. Thus, the projects need to remain such that the students can have a say in how the project will be developed. For 17% of the courses, the students themselves are the main source for project ideas. These studies state that students are more motivated if they get to choose the project idea rather than have teachers assigning the projects [S36], [S53]. According to S53, if the team selects and defines the projects, their level of commitment and excitement to the project rises as the software system grows. At the end of the semester, the students have a strong sense of ownership towards the project, rather than feeling that they have just done one additional assignment [S36], [S53]. However, there are some potential pitfalls with this approach that educators should be aware of. S114 states that students should not be allowed to bring project ideas from the companies they work at or from their own businesses. S114 have found that it causes a conflict of interests for the student with the proposal and creates an unfair situation for the rest of the team. S36 lets students form their own teams and generate their own project ideas, but states this might not accurately reflect the situation in the students’ future professional lives. If the project idea comes from the team itself, all complexities associated with requirements elicitation and analysis are eliminated [S73], making the experience less realistic. Real-life projects come with challenges relating to contradicting expectations coming from various external and internal stakeholders [S73]. For 21% of the courses, the course staff provides the project specifications. Some educators have assigned the same project idea to all the student teams [S40], [S45], [S72], [S82] or even in some cases, all the students work on the exact same project in one team [S106], [S112]. Having the same project has the benefit of giving the course staff a consistent basis for grading and teaching [S40], [S59], [S72] and providing technical assistance to the students [S40], [S53]. In such cases, all teams will need to deal with the same complexities, project management issues and technology demands as in a typically constructed course, which makes the experience more predictable [S72]. Having one project idea also opens up the possibility for competition amongst the teams, e.g., which team will create the best design and implementation [S5], [S30], [S37], [S42], [S54], [S72], [S106] potentially even for an external client [S30]. It also possibly allows the course to focus more on the quality of the developed software [S5]. S5 has experience with both approaches, having multiple project ideas and having only one project. They have found it more productive and rewarding to focus on doing one project really well rather than juggling multiple projects and obtaining partial results. Project implementation (RQ4) ---------------------------- We also aimed to uncover information on how capstone projects are implemented and what students are expected to do in these courses. For this, we extracted various information on the project implementation (F10–F12). ### Produced artifacts (RQ4.1) ![Most commonly mentioned artefacts produced by students in capstone courses](figs/produced_artifacts.png "fig:") [fig:producedartifacts] We were interested in finding out what kind of artefacts students are expected to produce on the course (F10). It was often difficult to determine which artefacts were used for the final student assessment (i.e. grading) and which were only produced to manage the project in some way. Therefore we could not produce a list specifically of graded artefacts. Table [fig:producedartifacts] provides a rough view of the explicitly mentioned artefacts that students are expected to produce. We were, however, able to determine that all but three courses [S16], [S17], [S103] expect some form of a software prototype, a software product or source code as the end deliverable. Out of the courses which did not include producing software, S16 conducted a course where the process and end deliverables are focused on students completing various research items, such as testing new team-based technologies (e.g. pair programming). For S17 the end deliverable was often a solution proposal for the client organisation’s IT department, and S103 delivered an inter-disciplinary course which might result in software deliverables or alternatively in reports describing a software-based solution to the defined problems, e.g. using 3D engines for art. Most studies mention requiring some documentation for the software project (Fig. [fig:producedartifacts]). The actual number of courses that require documentation might be larger than reported here, as we only counted the times the study explicitly mentions that project documentation is done on the course. Studies at the beginning of our time range are often following more ``plan-first`` software development approaches, such as the waterfall model, and as a result, the quantity and detail of non-software artefacts are substantial [S2], [S11], [S22], [S70]. The documentation usually starts heavily upfront by students producing project plans [S2], [S68], [S70], detailed designs [S2], [S22], [S52], [S68], architecture plans [S22], [S68] and test plans [S2], [S70]. In later, more agile, courses, there is less evidence of extensive documentation and planning. With agile projects, the system documentation is largely developed as the project evolves [S14], [S65], [S94]. Students in these courses often also produce agile artefacts, such as product backlogs, sprint backlogs and burndown charts for project management and planning purposes [S6], [S65], [S94], [S124]. A large share of the studies (60%) explicitly mention that students must present or demonstrate their projects to wider audiences than just the immediate project team. This is a way to teach students how to present and explain their work also to a non-technical audience [S6]. The most common time for presentations is at the end of the semester, and typically these presentations are given at fairs or class sessions where all stakeholders of the course are invited [S2], [S4], [S6], [S33], [S95]. The format of final presentations varies from poster sessions [S6], [S28], [S34], [S44] to live demonstration sessions of the software [S89], [S95] to different kinds of demo videos [S1], [S73], [S118]. Students are sometimes also expected to draw up a project proposal or a pitch and then present it to the teachers or the class before starting to work on the implementation [S87]. In these cases, the purpose is to offer the students a chance to practice their pitching skills [S100]. Software requirements are something that students are often expected to detail during the course. These can be written down as a Software Requirements Specification created before the implementation phase begins [S30]. For courses with agile methodologies, software requirements are often documented in an initial backlog with user stories, and the backlog is then updated as the project continues [S124]. Students are also quite often expected to write some form of a report at the end of the experience, either individually or as a group. The reports generally involve students reflecting on the learning done and development processes employed throughout the course [S6], [S19], [S21], [S24], [S27], [S41], [S63], [S65]. The balance between too little and too many other artefacts is a delicate one. Having more documentation and deliverables presents teachers with more opportunities to grade and assess students’ understanding of software development processes [S110]. On the other hand, more documentation means less product which might not be in the interests of project clients [S49]. Indeed, some educators mandate only basic time tracking and reflective reporting from the students and have left the majority of the deliverables for the external client and team to decide [S24]. ### Project phases (RQ4.2) We sought evidence of the project phases and software life-cycle gone through on these courses (F11). Software life-cycle models include, with varying frequency and order, phases such as requirements gathering or solicitation, planning and designing, developing, testing and maintaining the product. Out of the studies that discuss the development process and end product quality, the projects generally proceed from ideas to robust proof-of-concepts or products with few core requirements implemented [S1], [S6], [S7], [S11], [S12] [S16], [S18], [S20], [S21], [S22], [S26], [S40], [S42], [S45], [S52], [S54], [S55], [S62], [S64], [S65], [S68], [S79], [S70], [S72], [S75], [S77], [S78], [S87], [S90], [S94], [S95], [S99], [S100], [S106], [S107], [S109], [S115], [S118], [S122], [S123], [S124], [S125]. Students thus get to get experience the phases of planning, designing, developing and testing the products in these projects. In courses with clients, either external or internal, the students usually have to solicit the requirements from the clients (Table [tab:projectsources]). However, sometimes the teachers provide students with ready-made feature or requirements lists [S12], [S21], [S45], [S79], [S82] and in some courses, students generate their own project proposals (Table [projectsources]). The experience of requirements gathering is somewhat diminished in these cases. Additionally, as the projects proceed from ideas to proofs-of-concept or simple, handed-off products, the projects generally do not include developing existing products, especially ones that are in production-use during the course. There are some courses where some of the projects have been production-ready at the end of the course, but these too were then handed over to the customer [S5], [S9], [S117]. This practice leaves students without the experience of working with existing products or products in the true maintenance phase of their software life-cycle. Assigning students to contribute to Free and Open Source Software (FOSS) projects is an emerging approach to remedy these shortcomings. The idea is to allow students to deal with existing codebases, often large and complex, such as the one they will face when working in the industry [S8b], [S8c], [S112], [S113]. Some courses have also had a continuation of earlier projects in the course to expose students to code generated by other people [S14], [S15], [S19], [S38], [103]. Both of these approaches allow students to maintain existing code, but they still present a minority in our research. ### Project technologies (RQ4.3) [width=1,cols=4,pos=h!] [tab:technologyselection] p0.22p0.15p0.08p0.45 Category & Number of courses & Percentage & Study identifiers Projects use common technologies & 33 & 25% & S1, S5, S8a, S8d, S11, S13, S22, S32, S37, S38, S42, S43, S45, S46, S48, S53, S54, S57, S59, S72, S76, S79, S82, S83, S93b, S99, S105, S106, S107, S109, S112, S113, S124 Choices done primarily team-wise & 65 & 50% & S2, S3, S4, S8b, S8c, S9, S10, S12, S14, S19, S24, S26, S29, S31, S33, S35, S36, S39, S44, S47, S50, S51, S52, S56, S58, S61, S62, S65, S66, S68, S69, S71, S73, S74, S75, S77, S80, S84, S85, S86, S87, S88, S89, S90, S92, S95, S96, S98, S100, S101, S102, S103, S104, S110, S111, S114, S116, S117, S118, S119, S120, S121, S122, S125, S127 Not specified & 33 & 25% & S6, S7, S15, S16, S17, S18, S20, S21, S23, S25, S27, S28, S30, S34, S40, S41, S49, S55, S60, S63, S64, S67, S70, S78, S81, S91, S93a, S94, S97, S108, S115, S123, S126 We also looked into the development technologies used in capstone courses and how they are selected (F12). Commonly in multi-customer courses or in courses with otherwise very differing project ideas, the technology choices are made based on the project (Table [tab:technologyselection]). In these cases, the course staff does not impose an entirely common technology stack for all the projects. For some of these projects, the technology stack is based on the client’s infrastructure [S35] and in some cases, the students get to make manager-like decisions on the suitable development technologies [S6], [S84]. Having the teams decide on the tools and technologies makes the students explore available options and justify their selections [S6], [S56], [S84]. S84 states that not only give them autonomy but also make them responsible for their own successes and failures. However, even though the majority of technologies would be selected based on the project and client, some studies recommend having some shared infrastructural tools and technologies [S12], [S19], [S36], [S65], [S102]. Version-control [S6], [S12], [S19], [S29], [S36], [S67], [S102], project management and communication tools [S19], [S29], [S65], [S67], [S80] and tools for continuous integration and delivery are examples of these [S32], [S78]. It has been found to make the management and evaluation of projects easier [S19], [S29]. Then again, having common development technologies for all projects is fairly common in cases, where the teachers provide students with the project requirements [S42]. In some cases, the evaluation methods focus heavily on the technical implementation, and the course graders might, for example, have sets of tests they like to run on each project to determine the quality [S109]. Some educators have the students compete on the same project proposal, which makes choosing a common stack justifiable [S37]. Assessment of students (RQ5) ---------------------------- We were also interested in finding out how the assessment is conducted in these courses (F13), and to which extent students are given possibilities to reflect on their experiences. We looked at both the end-of-course student assessment (Section [studentassessment]) and any continuous guidance and feedback students are given during the course (Section [studentguidance]). ### End of course student assessment (RQ5.1) All studies which explicitly described the course evaluation process had course teachers involved in it (Table [tab:summativeassessors]). Teachers are generally the ones assessing any artefacts produced by students, such as the software product itself, reports and documentation done of the project [S118]. However, due to the shift from traditional development approaches to more agile ones, the quantity of delivered artefacts has decreased over time, leaving teachers with fewer data points for assessing students’ comprehension of software development processes [S110]. In addition, many of the learning goals in capstone courses relate to soft skill development, such as the ability to work in a team or with a client and being able to manage a software project [S6], [S86], [S122]. These skills are generally employed when the teaching staff is not present, and teams work on the projects on their own, making the evaluation of these skills harder [S52], [S56], [S86]. For these reasons, several studies report having additional sources for student assessment beyond the teachers’ evaluation of produced artefacts. Different kinds of (anonymous) peer evaluations are fairly common (31%). They give course staff a look into the team dynamics during the course and help in detecting social loafing or free-rider behaviour [S12], [S86], [S112]. Similarly, self-evaluation is often done either in combination with peer evaluations [S56] or as a part of the reflection done in a project’s final report [S65]. Some studies report utilising the client’s opinion in the course assessment process. Clients can fill out a questionnaire considering each student’s performance during the course [S86] or only evaluate the team’s deliverables or presentations and their value from the client’s point-of-view [S35], [S52], [S89], [S127]. As with self- and peer-reviews, educators use the client’s opinion as a complementary source of assessment when grading students (Table [tab:summativeassessors]). [width=1,cols=4,pos=ht!] [tab:summativeassessors] p0.22p0.15p0.08p0.45 Category & Number of courses & Percentage & Study identifiers Course staff & 93 & 71% & S1, S2, S4, S6, S7, S8a, S8b, S8c, S8d, S9, S12, S13, S17, S18, S19, S20, S22, S23, S24, S25, S26, S27, S28, S29, S30, S31, S33, S35, S36, S37, S39, S40, S41, S42, S43, S44, S46, S47, S48, S51, S52, S53, S56, S58, S62, S63, S64, S66, S67, S68, S69, S71, S72, S73, S74, S77, S80, S81, S82, S84, S85, S86, S87, S88, S89, S90, S94, S95, S96, S97, S98, S99, S100, S101, S102, S103, S104, S105, S106, S107, S108, S109, S110, S111, S112, S116, S117, S118, S120, S123, S124, S126, S127 Students’ peer-evaluations & 40 & 31% & S2, S4, S7, S8a, S8b, S12, S13, S27, S30, S31, S33, S37, S41, S42, S46, S48, S51, S52, S56, S58, S63, S64, S67, S72, S73, S74, S81, S84, S86, S87, S90, S98, S106, S107, S108, S110, S112, S117, S124, S127 Students’ self-evaluations & 24 & 18% & S1, S2, S4, S22, S23, S27, S37, S41, S42, S47, S51, S56, S62, S64, S69, S73, S74, S80, S81, S86, S87, S116, S126, S127 External project clients & 17 & 13% & S4, S20, S24, S29, S33, S35, S37, S42, S58, S73, S84, S85, S86, S89, S96, S98, S127 Others & 1 & 1% & S17 Not specified & 38 & 29% & S3, S5, S10, S11, S14, S15, S16, S21, S32, S34, S38, S45, S49, S50, S54, S55, S57, S59, S60, S61, S64, S65, S70, S75, S76, S78, S79, S83, S91, S92, S93a, S93b, S113, S114, S115, S119, S121, S122 ### Continuous student assessment and guidance (RQ5.2) [width=1,cols=4,pos=ht!] [tab:studentguidance] p0.22p0.15p0.08p0.45 Category & Number of courses & Percentage & Study identifiers Course staff & 99 & 76% & S2, S3, S4, S6, S8a, S8b, S8d, S9, S12, S15, S16, S17, S18, S19, S20, S22, S23, S24, S25, S27, S28, S29, S30, S31, S32, S35, S36, S37, S39, S40, S41, S43, S44, S45, S46, S47, S48, S49, S50, S51, S52, S53, S54, S55, S56, S57, S59, S60, S62, S63, S64, S65, S66, S67, S68, S69, S71, S73, S74, S75, S76, S77, S78, S80, S81, S82, S84, S85, S86, S87, S88, S90, S91, S92, S93a, S94, S95, S96, S97, S98, S99, S100, S101, S102, S103, S105, S106, S108, S109, S110, S112, S113, S114, S115, S116, S122, S123, S124, S126, S127 More experienced students & 23 & 18% & S5, S8a, S12, S14, S17, S19, S35, S40, S48, S58, S61, S68, S81, S85, S92, S95, S104, S105, S112, S113, S118, S119, S121 Industry advisers (other than project clients) & 22 & 17% & S5, S8a, S8b, S8c, S10, S20, S25, S52, S58, S65, S71, S72, S73, S77, S80, S83, S89, S93b, S98, S106, S118, S120 Not specified & 15 & 11% & S1, S7, S11, S13, S21, S26, S33, S34, S38, S42, S70, S79, S107, S111, S117 Many studies specifically mention that the teams should not be left entirely on their own to complete the course project and should be guided along the way [S6], [S9], [S10], [S16], [S18], [S53], [S69], [S72], [S86]. Three main ways for conducting such continuous assessment and guidance were found based on our research (Table [tab:studentguidance]). Studies where there is no mention of the teacher, or anyone else, having an active role in how the teams work during the course, fall into the category ``Not specified``. If the course staff only passively receives reports of the student’s progress and evaluates the course outcomes after its completion, these are not the active guidance of the teams we were looking for. Some courses have several types of guidance present, in which case the study has been listed under each corresponding category in Table [tab:studentguidance]. Only 11% of the studies do not explicitly specify having any ongoing feedback and guidance system present during the course. The most convenient way is to have the course staff, such as the responsible teacher or hired teaching assistants, acting in an advisory role (76%). The intensity of the guidance given by course staff varies a great deal between these courses, or even within these courses. Sometimes course staff provides oversight in a more supervisory role and intervenes in the team’s work if any conflicts arise or the team includes clearly non-contributing students [S6]. On the other end, some instructors have weekly meetings with the students where the teachers actively propose solutions and guide the teams with technical and non-technical issues and team dynamics [S6], [S88]. Some teachers prefer even to practically manage the team [S6]. S38 explains that the most successful changes made on the course were those that allowed the course staff to take a more active role in each team. The grades of students improved, and the teams were able to complete more functionality of the software products. Another, often complementary, guidance form is to have industry experts occasionally participate in the course (16%). This can be seen as especially relevant when the course projects are focused around a common theme, for instance, the gaming industry [S25], [S106]. However, finding the correct balance in this type of assessment, without a client relationship, has sometimes proven to be tricky. S106 had industry experts from the gaming and software industries participating as advisers on their course. The advisers’ feedback on the students’ game product was mainly positive and encouraging. While the staff took their remarks to mean that the game concept and development for the moment were commendable, the students took the feedback to mean that the prototype was, as presented, worthy of praise. This presented a dichotomy that never really resolved: students felt that the project was near-complete, whereas the instructors felt that the project was, at best, a rough sketch. Many authors have noticed the upsides of having more experienced students outside of the course staff, mentoring the students in the course (18%). These can be, for instance, students who have completed the project course themselves in the past year [S122]. This has been found to benefit both the project implementation and group dynamics: an active and knowledgeable coach can, for example, help students ask clarifying questions of the customer, overcoming fear of these being stupid and saving days or weeks [S122]. Interestingly, forming a capstone team of the final year students with similar skill levels are in accordance with the ACM/IEEE Curriculum Guidelines for Undergraduate SE Degree Programmes, but leaves out an integral part of the real software development team experience: junior and senior positions. This discrepancy has been noted in some studies [S19], [S35], [S58], [S98], where the course implementation has gone beyond having senior students just as advisors. In these capstones, less-experienced students work as junior developers and more-experienced students as senior developers or team leaders. S19 organised their capstone course in a way that students are required to work two course units on the same project, one unit as a junior member and one unit as a senior member. Each unit lasts one period (a quarter of an academic year), but the periods do not have to be consecutive to allow some flexibility for students in organising their studies. In order for such an arrangement to work, the projects in the course are large, long-term products, which undergo enhancements over a number of semesters. S19 found that for junior students, this setup allowed a smooth transition to the project, up-skilling on relevant skills and acquiring the necessary orientation from senior students. Senior students, on the other hand, were enthusiastic about mentoring junior students and finding answers to their questions ranging from project requirements to the technology stack. S98 have similarly split their capstone project into two parts with junior and senior positions. They also had faculty mentors with industrial experience mentoring the student teams working with external clients. According to S98, having this course design enabled them to create an effective industrial simulation. S98 reports that students used tools and practices prevalent in the industry but frequently not taught in university and were able to develop professional and team working skills more intensively. Discussion ========== In this research, the main objective was to understand how tertiary education institutions conduct their SE capstone courses. This was done by looking at the characteristics of capstone courses through an extensive literature review. Firstly, we summarise the main findings of this study and compare them to the findings of previous systematic literature reviews, whenever appropriate (Section [mainfindings]). Secondly, we present suggestions for further research in this area in Section [implications]. Finally, we discuss the validity of the results in Section [validity]. Main findings ------------- ### Duration (RQ1) Despite recommending that undergraduate SE capstones should span the whole academic year, most of courses identified here last only one semester. This is in line with findings regarding course models by. In our research, the studies presenting two-semester capstones often found one-semester courses inadequate in depth and breadth of skills they can provide. These studies reported that a longer course better prepares students for the experiences they can expect in their working life in software engineering. There are, however, some real-world constraints to why the courses generally are shorter, such as cramped curricula and the time and effort capstones require from both, the staff and the students. ### Team sizes (RQ2) We found that capstone courses are generally conducted as large-scale group projects. The team sizes varied greatly between courses, ranging from 1 to 35 students in one team. found no agreement in the literature on the appropriate team sizes. Our results were somewhat contradictory to this. In our research, several studies that reported experiences with different team sizes had found the optimal to be 4–6 students per group. This was also reflected in average team sizes for capstones we found based on our research. For a group of 2–3 students, there is no communication challenge to solve, and smaller groups often are not able to accomplish larger projects. In contrast, larger teams often present too many issues for communication, coordination and fair grading of students. Larger teams also require more effort from the teaching staff to ensure an even distribution of work. ### Clients and project ideas (RQ3) In our research, 58% of the studies reported having external clients for student projects (RQ3.1) and projects are based on the real needs of external stakeholders in 62% of the courses (RQ3.2). Having clients outside the immediate course staff presents more work for the teachers, but is often rewarding for students when they get to work for real clients on real projects. The motivation boost in students, as well as the positive implications in their skills and employment after the course, were found to be among the top reasons for having real clients with real projects. Using external clients is also recommended for both undergraduate and graduate degree programmes in software engineering. Despite these benefits, there still is a considerable number of capstone courses (42%), where the course staff acts as the client for these projects, or there is no client for students to interact with regularly. In addition, there are quite many courses where the teacher provides the students with project specifications (21%) or students themselves generate project proposals (17%). Such courses generally are less burdensome for teachers who do not have to get involved in sourcing multiple clients and projects. Students are often also motivated when they get to choose a project topic of their own. However, in these cases, students do not get to experience requirement solicitation and planning the project with an external stakeholder, who might not be technically knowledgeable at all. The taxonomy used by relates to project topics and not particularly project sources or clients making it difficult to assess whether there has been changes in this over the years. Moreover, our work demonstrates that external stakeholders can get involved in many ways. In the future, it would be important to explore this in more detail, and evaluate the consequences as demonstrated by. ### Project implementation (RQ4) To understand the project implementation, we first looked into the artefacts that students are expected to produce throughout the course (RQ4.1). We found that in 97% of the courses, students were involved in developing some form of a software product as an end deliverable. Additionally, students were often required to produce agile development artefacts (e.g. product and sprint backlogs), project plans and software documentation. The produced artefacts, therefore, supported the idea of a well-rounded software development experience. In our research, the role of documentation was slightly different from the role it had in the survey done by. In their classification, the core written documents involve project proposals, requirements documents, project plans, designs, test plans and user manuals. We, on the other hand, found that when courses had shifted towards more agile development approaches, the number of written assignments had reduced, and the documentation was being generated more throughout the course rather than as detailed plans up front. Secondly, we investigated the phases that these projects generally go through (RQ4.2). We found that projects often proceeded from an idea or a ready-made list of requirements into project delivery. For a large number of courses, the students start from scratch and produce a prototype or a software product that is handed off to the clients or teachers at the end of the course. Therefore, the maintenance of existing software products and working with existing codebases is often not experienced. In contrast states that regardless of the software process model, the common phases were requirements, design, implementation, testing, presentation and maintenance. However, they do make the same finding we made that, maintenance was frequently mentioned in the literature with little supporting detail of its actual implementation. Maintenance thus still remains an issue that is left with little attention in SE capstone literature, despite its high relevance in the industry. Some educators have solved this by involving large projects, which go through various incremental improvements over the years in their courses. Others have students contributing to large Open Source projects, but both of these approaches were still found to present a minority. makes no remarks on Open Source projects being used as a solution to this problem like we did, which would indicate that it is an upcoming solution to this problem. Indeed, the studies in our research covering large Open Source projects were written after the survey by. Finally, we investigated what technologies are used in these courses and how the selections are made (RQ4.3). We found that studies mostly do not explicitly describe all the technologies used in these courses. We also found that for the majority of courses, the technology selections are made based on the project specifications and needs. This often entails students learning new technologies and having to justify their selections. ### Assessment of students (RQ5) Regarding end-of-course student assessment (RQ5.1) we aimed to find out what constitutes the course grade in the end. A large number of studies gave inconclusive answers in this regard and did not describe grading rubrics in detail. Therefore, we were unable to draw any conclusions about what artefacts or assignments formed the final grades. However, we were able to determine fairly well, who does the final assessment and how well students are given a chance to reflect on their experiences. In all studies that discussed the student assessment, the teachers were involved in determining the final grades. Any sort of concrete deliverables (produced software, plans, agile artefacts, reports) were generally graded by the teacher. These artefacts provided teachers with some understanding of how well students understood the phases of software development. A minority of studies also mention including students in the assessment process, in the form of self- and peer-reviews. These both have proven not only to hold the individual students more accountable during project work but also to give valuable insight for the teacher into the soft skill development of an individual student. Continuous assessment and guidance during the course (RQ5.2) were explicitly addressed in most studies (89%). The survey by focused on similar sort of assessment practices when they sought to find out, how often engineering capstones implement classroom assessment. They found only 32 articles in all engineering disciplines between the years 1994 and 2006 which discussed classroom assessment schemes. While their scope is tighter, it seems that the guidance of students during the course would have increased in the past 15 years or so. also reflects on this, stating that the importance of classroom assessment has gained traction in recent years. Our research showed that any sort of mentoring or coaching was found to be highly beneficial for the students. It increased the success rate of projects and helped teachers to identify problems early on. Course staff were the ones that most often guided students during their projects, and several courses had hired teaching assistants for such positions. Some studies also found that having more experienced students advising the capstone participants was a rewarding experience for both groups. Mentoring activities are also common in real-life companies where graduates generally join an existing team with various skill levels. Implications for practitioners and researchers ---------------------------------------------- A large amount of research found on software engineering capstones shows that capstones are a common way for educators to prepare students for varying aspects of working life. For an educator to find ways to implement a capstone course, it would be too a time-consuming task to go through all the published primary studies and distil the experience and evidence into concrete suggestions. For such situations, we have provided an overview of the most common characteristics of capstone courses, and what kinds of choices regarding each of these can be made. The overriding guideline set by the for undergraduate SE capstone courses is that they should help to ensure that the curriculum has a significant real-world basis. Capstones are expected to be the culminating experience that ties everything learned so far together and prepares students for the working life in software engineering. Interestingly, our research revealed that primary studies on capstone courses only rarely included experiences, of course, alumni or industry clients. This would be important to see how well these courses reflect what students are expected to do in real life. We would like to see more research done on how well these courses capture what students face later on in their careers. It would also be worthwhile to see more controlled, comparative studies where one of the presented characteristics is changed and its impact on the course outcomes. Most of the research identified here does not provide controlled, comparative results on the capstone characteristics. Threats to validity ------------------- ### Deviations from the procedures for systematic reviews Although we aimed to use the guidelines provided in to perform our systematic review, we had deviations from their procedures. In our research, the study selection and data extraction were carried out by the first author rather than by a group of researchers. This means that some relevant papers might have been excluded or that some of the collected data may be erroneous. ### Inaccuracy and bias in selected papers for review One of the main limitations of any review is the possible bias in the study selection process. In our case, we included only studies considering software-related capstone courses with a relatively tight scope; for instance, we did not include any studies with courses on embedded systems or computer engineering. However, we were clear in our goals of describing only software capstones. A similar kind of search has also been conducted in the earlier survey done by, whose search strings were ``capstone`` and ``software engineering course`` into a selected set of journals in SEE and CS education. And to ensure that the selection process was as unbiased as possible, we described the employed search strategy and the inclusion and exclusion criteria in detail. This way, we aimed to make the selection process as visible to the reader as possible. The primary studies only represent the capstone courses with some aspects or outcomes worthy of publication. Therefore the study sample in our research might be skewed toward successful, well-planned courses or easy to research courses. We believe this is not a problem in describing how versatile the projects can be. However, quantitative aspects of the data (e.g., the portion of the courses having an external client) should be addressed with caution as certain kinds of courses may be more likely in real life than in SE education research. Finally, we also acknowledge that there are similar courses organised under other related disciplines, such as data science and computer engineering. We, however, knowingly chose to leave other disciplines out of the scope of this research as we wanted to provide a classification and insights specifically on software-related capstones. The recommendations we derived our research questions on, were also provided specifically for software engineering capstones. ### Inaccuracy and bias in data extraction As with any systematic review, one of the main limitations is the potential bias and inaccuracy of the data extraction procedure. This is also the most likely step with inaccuracy in our research. For example, the quality assessment was done by the first author, whose interpretation of quality might differ from that of another researcher. The distinction between whether a study has explicitly discussed limitations (``Yes``) or they have only shortly referred to a limitation of the study (``Somewhat``) is something that another researcher might view differently. However, the two summed-up categories presented, rigour and credibility, aimed to diminish the impact of a single quality assessment question and evaluate the study rather as a whole. It is also worth noting that the primary studies presented in this review are not exclusively written to provide course descriptions or general course evaluations. Some studies have a section dedicated to the course overview, which might have provided all the details of the course structure we needed. Then again, some studies had the relevant details scattered across various sections and might not have been explicitly referred to as our categories suggest. Especially regarding the produced artefacts and student assessment, the descriptions varied greatly in terms of detail and clarity. In situations like these, some interpretation was needed. To mitigate this problem, we tried to keep the categories generic and descriptive, so that it would be easy to grasp the general outline of each course. We also refrained from reading too much into the text itself. For instance, if the study mentioned that the student teams were composed of ``at most 5 students``, we left these courses in the category ``not specified``. ### Lack of third party assessment of capstone courses Usually, at least one of the authors of the study was somehow involved in organising the course in question. Additionally, quite a large portion of these reports lacked an honest evaluation of the author bias, as can be seen in Section [qualityassessment]. Therefore there is an inherent lack of truly objective third-party assessment of these SE capstone courses in the literature. This is something that we were unable to affect but is worth noting. We would welcome more research on capstone courses, or on SE education in general, where the author is an unbiased third party. ### Evaluation of review For evaluating any SLR, present criteria based on four quality assessment (QA) questions. We will briefly provide answers to each one of these. QA1 — Are the review’s inclusion and exclusion criteria described and appropriate? Yes, we have explicitly defined and described the inclusion and exclusion criteria. The foundation for the criteria stems from our research objectives and aims to ensure that the studies included in the review are of sufficient quality and help to answer our research questions. QA2 – Is the literature search likely to have covered all relevant studies? state that if the authors have searched 4 or more digital libraries and included additional search strategies, this criterion is met. In this research, we did search 4 digital libraries and included a description of our search strategies, so this criterion is fulfilled. QA3 – Did the reviewers assess the quality/validity of the included studies? We did use a question set used by many similar SLRs to assess the quality and validity of the included studies. Therefore this criterion is also met. QA4 – Were basic data/studies adequately described? We provided bibliographical references to each of the studies used, described from various viewpoints the target of their research (i.e. the capstone course presented in each of them), described how the data was collected in each of the studies and synthesised the reported outcomes. Therefore it is safe to say, that this criterion was met as well. Conclusions =========== This research aimed to understand how software engineering capstone courses are organised in tertiary education institutions. For this purpose, we conducted a systematic literature review, including 127 primary studies on SE capstone courses. The characteristics were synthesised into a taxonomy consisting of duration, team sizes, clients and project sources, project implementation and student assessment. Based on the synthesised justifications and outcomes for these characteristics, we provided suggestions on how the courses can be organised and what the trade-offs are to be weighted regarding each characteristic. The main curriculum guideline that capstones should help to accomplish is ``The curriculum should have a significant real-world basis``. In our research, we focused on the concrete recommendations given to accomplish this goal and formulated our research questions based on them. We found out that the courses have a software implementation as the main deliverable, the students are assessed based on various factors, not just the delivery of a working system, and the projects in these courses are almost always completed as group assignments. Students were also often given guidance and continuous assessment throughout the course via written and oral feedback on their progress and deliverables. The area which educators should pay attention to is the duration of the course which in practice is one semester, whilst for instance, recommends having two-semester courses to reach adequate depth and breadth in skills and experiences. A considerable number of courses also did not have a client external to the course staff, despite external clients being recommended for undergraduate and graduate capstones. In these cases, the project specifications were generated by the course staff or the students themselves. Such arrangements tend to leave students without the experience of having to solicit, negotiate and implement requirements set by a real client. In addition, the projects usually progress from idea to product, and often do not include maintenance, especially that of pre-existing projects. These characteristics somewhat diminish the real-world compatibility of the course. Primary studies =============== [width=1,cols=5, pos=!b] [tab:finalsources] p0.07p0.19p0.03p0.31p0.28 ID & Author(s) & Year & Title & Source title S1 & Marzolo, P., Guazzaloca, M., Ciancarini, P. & 2021 & “Extreme Development” as a Means for Learning Agile & International Conference on Frontiers in Software Engineering S2 & Tan, J., Jones, M. & 2008 & A case study of classroom experience with client-based team projects & Journal of Computing Sciences in Colleges S3 & Wong, W., Pepe, J., Stahl, J., Englander, I. & 2013 & A collaborative capstone to develop a mobile hospital clinic application through a student team competition & Information Systems Education Journal S4 & Tappert, C. C., Stix, A. & 2011 & A decade review of a masters-level real-world-projects capstone course & Info. Systems Educators Conf., ISECON 2011 S5 & Gotel, O., Kulkarni, V., Say, M., Scharff, C., Sunetnanta, T. & 2009 & A global and competition-Based model for fostering technical and soft skills in software engineering education & 22nd Conference on Software Engineering Education and Training, CSEE&T 2009 S6 & Scott, A., Kreahling, W., Holliday, M., Barlowe, S. & 2017 & A holistic capstone experience: Beyond technical ability & 18th Annual Conference on Information Technology Education S7 & Koolmanojwong, S., Boehm, B. & 2013 & A look at software engineering risks in a team project course & 26th International Conference on Software Engineering Education and Training, CSEE&T 2013 S8abcd & Braught, G., et al. & 2018 & A multi-institutional perspective on H/FOSS projects in the computing curriculum & ACM Transactions on Computing Education S9 & Mertz, J., Quesenberry, J. & 2019 & A scalable model of community-based experiential learning through courses and international projects & 2018 World Engineering Education Forum - Global Engineering Deans Council, WEEF-GEDC 2018 S10 & Bloomfield, A., Sherriff, M., Williams, K. & 2014 & A Service Learning Practicum capstone & 45th ACM technical symposium on Computer science education S11 & Brazier, P., Garcia, A., Vaca, A. & 2007 & A software engineering senior design project inherited from a partially implemented software engineering class project & 37th Annual Frontiers in Education Conference - Global Engineering S12 & Morales-Trujillo, M.E., Galster, M., Gilson, F., Mathews, M. & 2021 & A Three-Year Study on Peer Evaluation in a Software Engineering Project Course & IEEE Transactions on Education S13 & Liang, Z., Chapa-Martell, M.A. & 2019 & A Top-Down Approach to Teaching Web Development in the Cloud & IEEE International Conference on Teaching, Assessment, and Learning for Engineering, TALE 2018 S14 & Murphy, C., Sheth, S., Morton, S. & 2017 & A Two-Course Sequence of Real Projects for Real Customers & Conference on Integrating Technology into Computer Science Education, ITiCSE 2017 S15 & Rusu, A., Rusu, A., Docimo, R., Santiago, C., Paglione, M. & 2009 & Academia-academia-industry collaborations on software engineering projects using local-remote teams & 40th ACM Technical Symposium on Computer Science Education, SIGCSE’09 S16 & Stettina, C.J., Zhao, Z., Back, T., Katzy, B. & 2013 & Academic education of software engineering practices: towards planning and improving capstone courses based upon intensive coaching and team routines & 26th International Conference on Software Engineering Education and Training, CSEE&T 2013 S17 & Venson, E., Figueiredo, R., Silva, W., Ribeiro, L.C.M. & 2016 & Academy-industry collaboration and the effects of the involvement of undergraduate students in real world activities & IEEE Frontiers in Education Conference, FIE 2016 S18 & Eloe, N., Hoot, C. & 2020 & Accommodating Shortened Term Lengths in a Capstone Course using Minimally Viable Prototypes & IEEE Frontiers in Education Conference, FIE 2020 S19 & Schneider, J.-G., Eklund, P.W., Lee, K., Chen, F., Cain, A., Abdelrazek, M. & 2020 & Adopting industry agile practices in large-scale capstone education & 42nd International Conference on Software Engineering: Software Engineering Education and Training, ICSE-SEET 2020 S20 & Ye, H. & 2009 & An academia-industry collaborative teaching and learning model for software engineering education & 21st International Conference on Software Engineering and Knowledge Engineering, SEKE 2009 S21 & Demuth, B., Kandler, M. & 2017 & An Approach for Project Task Approximation in a Large-Scale Software Project Course & 30th IEEE Conference on Software Engineering Education and Training, CSEE&T 2017 S22 & Ellis, H.J.C. & 2007 & An assessment of a self-directed learning approach in a graduate web application design and development course & IEEE Transactions on Education S23 & Anslow, C., Maurer, F. & 2015 & An experience report at teaching a group based agile software development project course & 46th ACM Technical Symposium on Computer Science Education S24 & Bareiss, R., Katz, E. & 2011 & An exploration of knowledge and skills transfer from a formal software engineering curriculum to a capstone practicum project & 24th IEEE-CS Conference on Software Engineering Education and Training, CSEE&T 2011 S25 & Stephenson, B., James, M., Brooke, N., Aycock, J. & 2016 & An Industrial Partnership Game Development Capstone Course & 17th Annual Conference on Information Technology Education [width=1,cols=5] p0.07p0.19p0.03p0.31p0.28 ID & Author(s) & Year & Title & Source title S26 & Bell, J.T., Prabhu, A. & 2015 & An innovative approach to Software Engineering term projects, coordinating student efforts between multiple teams over multiple semesters & IEEE Frontiers in Education Conference, FIE 2014 S27 & Vasilevskaya, M., Broman, D., Sandahl, K. & 2015 & Assessing large-project courses: Model, activities, and lessons learned & ACM Transactions on Computing Education, TOCE S28 & von Konsky, B.R., Ivins, J. & 2008 & Assessing the capability and maturity of capstone software engineering projects & Tenth conference on Australasian computing education - Volume 78 S29 & Fontao, A., Gadelha, B., Junior, A.C. & 2019 & Balancing Theory and Practice in Software Engineering Education - A PBL, toolset based approach & IEEE Frontiers in Education Conference, FIE 2019 S30 & Harding, T. & 2007 & Benefits and struggles of using large team projects in capstone courses & ASEE Annual Conference and Exposition S31 & Engelsma, J. R. & 2014 & Best practices for industry-sponsored CS capstone courses & Journal of Computing Sciences in Colleges S32 & Matthies, C., Teusner, R., Hesse, G. & 2019 & Beyond Surveys: Analyzing Software Development Artifacts to Assess Teaching Efforts & IEEE Frontiers in Education Conference, FIE 2018 S33 & Ziv, H., Patil, S. & 2010 & Capstone project: From software engineering to ``Informatics`` & 23rd IEEE Conference on Software Engineering Education and Training, CSEE&T 2010 S34 & Anderson, Ruth E.; Borriello, Gaetano; Martin, Hélène; Black, Leonard & 2009 & Capstone projects as community connectors & Journal of Computing Sciences in Colleges S35 & Paasivaara, M., Vanhanen, J., Lassenius, C. & 2019 & Collaborating with industrial customers in a capstone project course: The customers’ perspective & IEEE/ACM 41st International Conference on Software Engineering: Software Engineering Education and Training, ICSE-SEET 2019 S36 & Adams, R., Kleiner, C. & 2016 & Collaboration support in an international computer science capstone course & International Conference on Social Computing and Social Media S37 & Watkins, K.Z., Barnes, T. & 2010 & Competitive and agile software engineering education & IEEE SoutheastCon, SoutheastCon 2010 S38 & Gustavsson, H., Brohede, M. & 2019 & Continuous assessment in software engineering project course using publicly available data from GitHub & 15th International Symposium on Open Collaboration, OpenSym 2019 S39 & Hadfield, Steven M.; Jensen, Nathan A. & 2007 & Crafting a software engineering capstone project course & Journal of Computing Sciences in Colleges S40 & Rong, G., Shao, D. & 2012 & Delivering software process-specific project courses in tertiary education environment: Challenges and solution & 25th IEEE Conference on Software Engineering Education and Training, CSEE&T 2012 S41 & Nguyen, D.M., Truong, T.V., Le, N.B. & 2013 & Deployment of capstone projects in software engineering education at Duy Tan university as part of a university-wide project-based learning effort & Learning and Teaching in Computing and Engineering, LaTiCE 2013 S42 & Lago, P., Schalken, J., Vliet, H.V. & 2009 & Designing a multi-disciplinary software engineering project & 22nd IEEE Conference on Software Engineering Education and Training, CSEE&T 2009 S43 & Angelov, S., de Beer, P. & 2017 & Designing and applying an approach to software architecting in agile projects in education & Journal of Systems and Software S44 & Anderson, R.E., Kolko, B. & 2011 & Designing technology for resource-constrained environments: A multidisciplinary capstone sequence & Frontiers in Education, FIE 2012 S45 & Leilde, V., Ribaud, V. & 2017 & Does Process Assessment Drive Process Learning? the Case of a Bachelor Capstone Project & 30th IEEE Conference on Software Engineering Education and Training, CSEE&T 2017 S46 & Brown, Q., Lee, F., Alejandre, S. & 2009 & Emphasizing soft skills and team development in an educational digital game design course & 4th International Conference on the Foundations of Digital Games, FDG 2009 S47 & Takala, T. M., Malmi, L., Pugliese, R., Takala, T. & 2016 & Empowering students to create better virtual reality applications: A longitudinal study of a VR capstone course & Informatics in Education S48 & Marques, M., Ochoa, S.F., Bastarrica, M.C., Gutierrez, F.J. & 2018 & Enhancing the Student Learning Experience in Software Engineering Project Courses & IEEE Transactions on Education S49 & De Souza, R.T., Zorzo, S.D., Da Silva, D.A. & 2015 & Evaluating capstone project through flexible and collaborative use of Scrum framework & Frontiers in Education Conference, FIE 2015 S50 & Vu, J.H., Frojd, N., Shenkel-Therolf, C., Janzen, D.S. & 2009 & Evaluating test-driven development in an industry-sponsored capstone project & 6th International Conference on Information Technology: New Generations, ITNG 2009 S51 & Laplante, P.A., Defranco, J.F., Guimaraes, E. & 2019 & Evolution of a graduate software engineering capstone course - A course review & International Journal of Engineering Education S52 & Lederman, Timoth C. & 2010 & Evolution of capstone-courses in software engineering a finishing school & Journal of Computing Sciences in Colleges S53 & Delgado, D., Velasco, A., Aponte, J., Marcus, A. & 2017 & Evolving a Project-Based Software Engineering Course: A Case Study & 30th IEEE Conference on Software Engineering Education and Training, CSEE&T 2017 [width=1,cols=5] p0.07p0.19p0.03p0.31p0.28 ID & Author(s) & Year & Title & Source title S55 & Ras, Eric and Carbon, Ralf and Decker, Björn and Rech, Jörg & 2007 & Experience Management Wikis for Reflective Practice in Software Capstone Projects & IEEE Transactions on Education S56 & Schorr, R. & 2020 & Experience Report on Key Success Factors for Promoting Students’ Engagement in Software Development Group Projects & 4th IEEE World Conference on Engineering Education, EDUNINE 2020 S57 & Longstreet, C. Shaun; Cooper, Kendra & 2013 & Experience report: A sustainable serious educational game capstone project & CGAMES’2013 USA S58 & Dupuis, R., Champagne, R., April, A., Séguin, N. & 2010 & Experiments with Adding to the Experience that Can be Acquired from Software Courses & 7th International Conference on the Quality of Information and Communications Technology, QUATIC 2010 S59 & Burge, J. & 2007 & Exploiting Multiplicity to Teach Reliability and Maintainability in a Capstone Project & 20th IEEE Conference on Software Engineering Education and Training, CSEE&T 2007 S60 & Marshall, L., Pieterse, V., Thompson, L., Venter, D.M. & 2016 & Exploration of Participation in Student Software Engineering Teams & ACM Transactions on Computing Education, TOCE S61 & Ganci, A., Ramnath, R., Ribeiro, B., Stone, R.B. & 2011 & Exploring collaboration between computer science engineers and visual communication designers in educational settings & 13th International Conference on Engineering and Product Design Education, E&PDE 2011 S62 & Burden, H., Steghöfer, J.-P., Hagvall Svensson, O. & 2019 & Facilitating entrepreneurial experiences through a software engineering project course & 41st International Conference on Software Engineering: Software Engineering Education and Training, ICSE-SEET 2019 S63 & Basholli, A., Baxhaku, F., Dranidis, D., Hatziapostolou, T. & 2013 & Fair assessment in software engineering capstone projects & 6th Balkan Conference in Informatics S64 & Magana, A. J., Seah, Y. Y., Thomas, P. & 2018 & Fostering cooperative learning with Scrum in a semi-capstone systems analysis and design course & Journal of Information Systems Education S65 & Sievi-Korte, O., Systä, K., Hjelsvold, R. & 2015 & Global vs. local – Experiences from a distributed software project course using agile methodologies & Frontiers in Education, FIE 2015 S66 & Hebig, R., Ho-Quang, T., Jolak, R., Schröder, J., Linero, H., Ågren, M., Maro, S.H. & 2020 & How do students experience and judge software comprehension techniques? & 28th International Conference on Program Comprehension S67 & Verdicchio, Michael & 2021 & Hurricanes and pandemics: an experience report on adapting software engineering courses to ensure continuity of instruction & Journal of Computing Sciences in Colleges S68 & Włodarski, R., Poniszewska-Marańda, A., Falleri, J.-R. & 2022 & Impact of software development processes on the outcomes of student computing projects: A tale of two universities & Information and Software Technology S69 & Izu, Cruz & 2018 & Improving Outcomes for a Masters Capstone IT Project & IEEE International Conference on Teaching, Assessment, and Learning for Engineering, TALE 2018 S70 & Flowers, J.G. & 2008 & Improving the Capstone project experience: a case study in software engineering & 46th Annual Southeast Regional Conference on XX S71 & Gannod, Gerald C.; Bachman, Kristen M.; Troy, Douglas A.; Brockman, Steve D. & 2010 & Increasing alumni engagement through the capstone experience & Frontiers in Education, FIE 2010 S72 & Zilora, S.J. & 2015 & Industry-emulated projects in the classroom & 16th Annual ACM Conference on Information Technology Education, SIGITE 2015 S73 & Spichkova, M. & 2019 & Industry-oriented project-based learning of software engineering & 24th International Conference on Engineering of Complex Computer Systems, ICECCS 2019 S74 & Carvalho, J.A., Sousa, R.D., Sá, J.O. & 2010 & Information systems development course: Integrating business, IT and IS competencies & 2010 IEEE Transforming Engineering Education: Creating Interdisciplinary Skills for Complex Global Environments S75 & Palacin-Silva, M.V., Seffah, A., Porras, J. & 2018 & Infusing sustainability into software engineering education: Lessons learned from capstone projects & Journal of Cleaner Production S76 & Kumar, S., Wallace, C. & 2015 & Instruction in software project communication through guided inquiry and reflection & Frontiers in Education, FIE 2015 S77 & Zeid, A. & 2012 & Integrating international students’ contests with computer science capstone: Lessons learned and best practices & Frontiers in Education, FIE 2012 S78 & Lundqvist, K., Ahmed, A., Fridman, D., Bernard, J.-G. & 2019 & Interdisciplinary Agile Teaching & Frontiers in Education, FIE 2019 S79 & Santoso, H.B., Lawanto, O., Purwandari, B., Isal, R.Y.K., Fitriansyah, R. & 2018 & Investigating Students’ Metacognitive Skills while Working on Information Systems Development Projects & 7th World Engineering Education Forum, WEEF 2017 S80 & Christensen, E.L., Paasivaara, M. & 2022 & Learning Soft Skills through Distributed Software Development & International Conference on Software and System Processes and Internation Conference on Global Software Engineering S81 & Rout, Terence P.; Seagrott, John & 2007 & Maintaining High Process Capability in a Student Project Course & 20th Conference on Software Engineering Education & Training, CSEE&T 2007 S82 & Rodriguez, G., Soria, A., Campo, M. & 2016 & Measuring the Impact of Agile Coaching on Students’ Performance & IEEE Transactions on Education S83 & Linhoff, J., Settle, A. & 2009 & Motivating and evaluating game development capstone projects & 4th International Conference on Foundations of Digital Games [width=1,cols=5] p0.07p0.19p0.03p0.31p0.28 ID & Author(s) & Year & Title & Source title S84 & Haddad, H.M. & 2013 & One-semester CS capstone: A 40-60 teaching approach & 10th International Conference on Information Technology: New Generations, ITNG 2013 S85 & Fan, Xiaocong & 2018 & Orchestrating Agile Sprint Reviews in Undergraduate Capstone Projects & Frontiers in Education, FIE 2018 S86 & Fagerholm, F., Vihavainen, A. & 2013 & Peer assessment in experiential learning: Assessing tacit and explicit skills in agile software engineering capstone projects & Frontiers in Education, FIE 2013 S87 & Vasankari, T., Majanoja, A.-M. & 2019 & Practical Software Engineering Capstone Course – Framework for Large, Open-Ended Projects to Graduate Student Teams & Internation Conference on Computer Supported Education S88 & Karunasekera, S., Bedse, K. & 2007 & Preparing software engineering graduates for an industry career & 20th Conference on Software Engineering Education & Training, CSEE&T 2007 S89 & Weerawarana, S.M., Perera, A.S., Nanayakkara, V. & 2012 & Promoting creativity, innovation and engineering excellence: A case study from Sri Lanka & IEEE International Conference on Teaching, Assessment, and Learning for Engineering, TALE 2012 S90 & Fornaro, R.J., Heil, M.R., Tharp, A.L. & 2007 & Reflections on 10 years of sponsored senior design projects: Students win-clients win! & Journal of Systems and Software S91 & Roach, S. & 2011 & Retrospectives in a software engineering project course: Getting students to get the most from a project experience & 24th IEEE-CS Conference on Software Engineering Education and Training, CSEE&T 2011 S92 & Mäkiaho, P., Poranen, T. & 2018 & Risks management in software development capstone projects & 19th International Conference on Computer Systems and Technologies S93(a,b) & MacKellar, B. K., Sabin, M., Tucker, A. & 2013 & Scaling a framework for client-driven open source software projects: A report from three schools & Journal of Computing Sciences in Colleges S94 & Yuen, T.T. & 2015 & Scrumming with educators: Cross-departmental collaboration for a summer software engineering capstone & International Conference on Learning and Teaching in Computing and Engineering, LaTiCE 2015 S95 & Isomöttönen, V., Daniels, M., Cajander, Å., Pears, A., McDermott, R. & 2019 & Searching for global employability: Can students capitalize on enabling learning environments? & ACM Transactions on Computing Education S96 & Maxim, B. & 2008 & Serious games as software engineering capstone projects & ASEE Annual Conference and Exposition S97 & Krogstie, B.R., Divitini, M. & 2009 & Shared timeline and individual experience: Supporting retrospective reflection in student software engineering teams & 22nd Conference on Software Engineering Education and Training, CSEE&T 2009 S98 & Johns-Boast, L., Flint, S. & 2013 & Simulating industry: An innovative software engineering capstone design course & Frontiers in Education, FIE 2013 S99 & Boti, E., Damasiotis, V., Fitsilis, P. & 2021 & Skills Development Through Agile Capstone Projects & International Conference on Frontiers in Software Engineering S100 & Paiva, S.C., Carvalho, D.B.F. & 2018 & Software creation workshop: A capstone course for business-oriented software engineering teaching & XXXII Brazilian Symposium on Software Engineering S101 & Saeedi, K., Visvizi, A. & 2021 & Software development methodologies, HEIs, and the digital economy & Education Sciences S102 & Smith, T., Cooper, K.M.L., Longstreet, C.S. & 2011 & Software engineering senior design course: Experiences with agile game development in a capstone project & International Conference on Software Engineering S103 & Jaccheri, L., Sindre, G. & 2007 & Software engineering students meet interdisciplinary project work and art & 11th International Conference on Information Visualisation, IV 2007 S104 & Krusche, S., Dzvonyar, D., Xu, H., Bruegge, B. & 2018 & Software Theater—Teaching Demo-Oriented Prototyping & ACM Transactions on Computing Education, TOCE S105 & Budd, A.J., Ellis, H.J.C. & 2008 & Spanning the gap between software engineering instructor and student & Frontiers in Education, FIE 2008 S106 & Decker, A., Egert, C.A., Phelps, A. & 2016 & Splat! er, shmup? A postmortem on a capstone production experience & Frontiers in Education, FIE 2008 S107 & Kerbs, R. & 2007 & Student teamwork: A capstone course in game programming & Frontiers in Education, FIE 2007 S108 & Tadros, Ibrahem; Hammami, Samir; Al-Zoubi, Khaled & 2008 & Systems Development Projects & 3rd International Conference on Information and Communication Technologies: From Theory to Applications S109 & Jarzabek, S. & 2013 & Teaching advanced software design in team-based project course & 26th IEEE International Conference on Software Engineering Education and Training, CSEE&T 2013 S110 & Lu, Baochuan; DeClue, Tim & 2011 & Teaching agile methodology in a software engineering capstone course & Journal of Computing Sciences in Colleges S111 & Cagiltay, N.E. & 2007 & Teaching software engineering by means of computer-game development: Challenges and opportunities & British Journal of Educational Technology [width=1,cols=5] p0.07p0.19p0.03p0.31p0.28 ID & Author(s) & Year & Title & Source title S112 & Tafliovich, A., Caswell, T., Estrada, F. & 2019 & Teaching software engineering with free open source software development: An experience report & Annual Hawaii International Conference on System Sciences S113 & Paasivaara, M., Lassenius, C., Damian, D., Raty, P., Schroter, A. & 2013 & Teaching students global software engineering skills using distributed Scrum & 35th International Conference on Software Engineering, ICSE 2013 S114 & Khmelevsky, Y. & 2016 & Ten years of capstone projects at Okanagan College: A retrospective analysis & 21st Western Canadian Conference on Computing Education S115 & Mahnič, V. & 2015 & The capstone course as a means for teaching agile software development through project-based learning & World Transactions on Engineering and Technology Education S116 & Broman, D., Sandahl, K., Baker, M.A. & 2012 & The company approach to software engineering project courses & IEEE Transactions on Education S117 & Khakurel, J., Porras, J. & 2020 & The Effect of Real-World Capstone Project in an Acquisition of Soft Skills among Software Engineering Students & 32nd IEEE Conference on Software Engineering Education and Training, CSEE&T 2020 S118 & Iacob, C., Faily, S. & 2020 & The impact of undergraduate mentorship on student satisfaction and engagement, teamwork performance, and team dysfunction in a software engineering group project & 51st ACM Technical Symposium on Computer Science Education, SIGCSE 2020 S119 & Hoar, R. & 2014 & The real world web: How institutional IT affects the delivery of a capstone web development course & 19th Western Canadian Conference on Computing Education, WCCCE 2014 S120 & Yue, K. B., Damania, Z., Nilekani, R., Abeysekera, K. & 2011 & The use of free and open source software in real-world capstone projects & Journal of Computing Sciences in Colleges S121 & Isomöttönen, V., Kärkkäinen, T. & 2008 & The value of a real customer in a capstone project & 21st Conference on Software Engineering Education and Training, CSEE&T 2008 S122 & Mohan, S., Chenoweth, S., Bohner, S. & 2012 & Towards a better capstone experience & 43rd ACM Technical Symposium on Computer Science Education, SIGCSE’12 S123 & Rico, D.F., Sayani, H.H. & 2009 & Use of agile methods in software engineering education & Agile Conference, AGILE 2009 S124 & Tribelhorn, B., Nuxoll, A.M. & 2021 & Using Agile and Active Learning in Software Development Curriculum & ASEE Virtual Annual Conference and Exposition S125 & McDonald, J., Wolfe, R. & 2008 & Using computer graphics to foster interdisciplinary collaboration in capstone courses & Journal of Computing Sciences in Colleges S126 & Ju, A., Hemani, A., Dimitriadis, Y., Fox, A. & 2020 & What agile processes should we use in software engineering course projects? & 51st ACM Technical Symposium on Computer Science Education, SIGCSE 2020 S127 & Bastarrica, M.C., Perovich, D., Samary, M.M. & 2017 & What can students get from a software engineering capstone course? & 39th IEEE/ACM International Conference on Software Engineering: Software Engineering and Education Track, ICSE-SEET 2017
arxiv_0000284
the factorization structure of soft-collinear divergences leads to more powerful identities. Recall that the box coefficients are so-called ‘quad-cuts’ (co-dimension four residues) of the loop integrand; let us now consider ‘triple-cuts’—co-dimension three residues. Because the loop integral is four-dimensional, a triple-cut still depends on one integration variable; by applying Cauchy’s residue theorem to the integral over this remaining variable, we find that the sum of all box coefficients sharing a triple-cut must vanish[6](#fn6). The richest of these residue theorems arise for triple-cuts involving at least one massless corner, as these separate into two distinct classes depending on whether the 3-particle amplitude at the massless corner is A3( − 1) or A3(0). Importantly, Cauchy’s theorem applies to these two cases *separately*, leading to a *pair* of identities: where *τ**a*, *b*, *d*  =   ± 1 if $d\!=\!b{\hspace{0.5pt}\text{{\small+}}\hspace{-0.5pt}}\,1$ or $d\!=\!a\,{{\rm\rule[2.4pt]{6pt}{0.65pt}}}\,1$, respectively, and vanishes otherwise. The sums appearing in the left-hand side of ([residuetheorems]) are over all box coefficients sharing a particular (chiral) triple-cut, while the right-hand sides follow from the universal structure of soft-collinear divergences (and when *τ*  ≠  0, ([residuetheorems]) simply represents the famous ‘BCF’ formula for tree-amplitudes, ). It is easy to see that these constitute $2n(n\,{{\rm\rule[2.4pt]{6pt}{0.65pt}}}\,3)$ linearly-independent equations. Averaging the two lines of ([residuetheorems]) results in $n(n\,{{\rm\rule[2.4pt]{6pt}{0.65pt}}}\,3)$ relations among the (parity-even) box coefficients—which is twice as many as the “standard” IR-equations. This doubling of equations is a consequence of the fact that each collinear divergence of the integrated amplitude can arise from *two* distinct integration regions; and requiring factorization at the *integrand* level results in identities for each integration region separately. To better understand the nature of the vanishing terms appearing on the right-hand side of ([residuetheorems]), let us consider a loop-integrand in the neighborhood of a soft-collinear divergence—where three consecutive propagators are simultaneously put on-shell. For the sake of concreteness, we may parameterize the loop integrand in such a region by writing ℓ  →  (*L*+, *L*−, ℓ⊥) (see for a similar discussion): We can take a residue in *L*− which puts the first propagator on-shell, resulting in (for the regime in which *L*+, ℓ⊥ are both small). If we now write $\ell\_\perp^2\!=\!z{\widetilde}z$ and drop the condition that ${\widetilde}z$ is the complex conjugate of *z*, we see that we can take two more residues—for instance *L*+ and then *z*—to localize to the triple-cut, and then finally take a residue involving the pole in ${\widetilde}{z}$. Notice that this results in a ‘quad-cut’ (a co-dimension four residue) which only involves *three* propagators! This is a simple one-loop example of the phenomenon of ‘composite leading singularities’ discussed in refs. , and is the physical origin of the terms on the right-hand side of ([residuetheorems]). It is not hard to show that the criterion for the convergence of an integral discussed in is equivalent to the requirement that *all* composite leading singularities vanish (in general, composite leading singularities are in one to one correspondence with IR-divergent triangle topologies, which are in correspondence with these equations); this implies that for ratio functions in N  =  4 SYM the right-hand sides of ([residuetheorems]) must vanish. Moreover, it is possible to show that the difference between the ‘DCI’-regularized boxes of ([finiteboxexpansionratiofunction]), and more familiar—e.g. dimensionally-regulated— expressions for the box integrals is necessarily proportional to the the right-hand-sides of ([residuetheorems])—which provides an alternate proof of the equivalence between the two regularization schema when applied to manifestly finite observables. We leave the details of this discussion as an exercise for the interested reader. In summary, the residue theorems ([residuetheorems]) encode the physical fact that IR-factorization occurs at the *integrand* level, and constitute $n(n\,{{\rm\rule[2.4pt]{6pt}{0.65pt}}}\,3)$ independent parity-even relations among the box coefficients—*twice* as many as the IR-equations arising after integration. These relations are *general*—that is, not specific to N  =  4 SYM—provided that triangle coefficients are included where they are required. Applications to Non-Planar Theories ----------------------------------- Although we have largely focused our attention on planar theories, it is worth emphasizing that the increased number of IR equations upon considering integrands instead of integrals is completely unrelated to planarity: the same result holds for non-planar theories as well—including gravity. This motivates a generalization of the integrand-level regulator ([definitionofregulator]) to the non-planar case, at least at one-loop order. As a simple illustration, consider the 4-particle amplitude in N  =  8 supergravity. By the no-triangle property,, it is a combination of at most three box functions. It is not hard to see that the absence of collinear divergences—the vanishing of the right-hand-sides of the analogs of ([residuetheorems])—uniquely fixes their relative coefficients, leading to the following form for the amplitude: where *s*, *t*, *u* are the usual Mandelstam invariants and the box integrals are labelled by the momenta coming into the vertices. It is not difficult to check that this matches the correct expression. As a less trivial example, consider the 6-graviton amplitude (with arbitrary heliciticies). In principle, this amplitude could involve any combination of 195 boxes; but the integrand-level IR-equations of the preceding subsection shows that the amplitude can be expressed in terms of at most 120 combinations. That is, we find 75 linearly-independent constraints of the form ([residuetheorems]), as opposed to the mere 25 constraints arising from the previously known IR-equations. It would be interesting to explore if these relations could be used to obtain more compact analytic forms for one-loop graviton amplitudes. The *Chiral* Box Expansion for One-Loop *Integrands* ==================================================== As we have seen, while the familiar box expansion reproduces all one-loop amplitudes *post-integration*, it does *not* match the full structure of the actual loop integrand. These can be easily understood in the case of MHV loop amplitudes, where the *actual* MHV loop integrand *only* has support on two-mass easy boxes involving ${\color{cut1}\ell\_1}$—with vanishing support on all quad-cuts involving ${\color{cut2}\ell\_2}$. However, because the scalar box integrals are *parity-even*, their integrands always have unit-magnitude residues for both quad-cuts. This is easy to understand, as scattering amplitude integrands are generally *chiral*, while the scalar boxes are manifestly *non-chiral*. In this section, we describe a slight modification to the scalar-box integrands given above which leads to a fully-chiral generalization of the box expansion, allowing us to represent all one-loop *integrands* of N  =  4 SYM. In fact, such a modification was discovered for MHV and NMHV one-loop integrands in ref. , but the generalization to more complicated amplitudes was unclear. Here, by revisiting the special case of MHV, we will find that the underlying structure naturally generalizes to *all* N*k*MHV one-loop integrands, A*n*(*k*), 1. All of the essential structure needed is already present in the case of the MHV one-loop integrands, A*n*(0), 1. In ref. , the one-loop MHV integrand was written: Here, *X* is an arbitrary, auxiliary line in momentum-twistor space (of course, the integrand is ultimately, algebraically independent of *X*). We will mostly take this formula for granted here; but let us see what role is played by the auxiliary line *X*, and how we may generalize this to reproduce any one-loop integrand. Because a pentagon integral has five propagators, it has $2\!\times\!\binom{5}{4}\!=\!10$ fourth-degree residues—from the two ways of cutting any four of the five propagators. But because one of its propagators involves this auxiliary line *X*, only two such residues are physically meaningful: those which cut the lines $\{(a{{\rm\rule[2.4pt]{6pt}{0.65pt}}}1\,a),(a\,a{\hspace{0.5pt}\text{{\small+}}\hspace{-0.5pt}}1),(c{{\rm\rule[2.4pt]{6pt}{0.65pt}}}1\,c),(c\,c{\hspace{0.5pt}\text{{\small+}}\hspace{-0.5pt}}1)\}$. These are obviously ‘two-mass easy’ quad-cuts; but because the numerator of the integrand in ([pentagonexpansionformhvoneloop]) is proportional to ${\langle \ell\,(a{{\rm\rule[2.4pt]{6pt}{0.65pt}}}1\,a\,a{\hspace{0.5pt}\text{{\small+}}\hspace{-0.5pt}}1)\!{\mathrm{\raisebox{0.75pt}{{$\,\bigcap\,$}}}}\!(c{{\rm\rule[2.4pt]{6pt}{0.65pt}}}1\,c\,c{\hspace{0.5pt}\text{{\small+}}\hspace{-0.5pt}}1)\rangle}$, the pentagon’s residue on ${\color{cut2}\ell\_2}$ vanishes, while the residue on ${\color{cut1}\ell\_1}$ is unity (see ). But this is perfect: the one-loop MHV integrand only has support on ${\color{cut1}\ell\_1}$! Because of this, we choose to view the *pentagon* contributions to ([pentagonexpansionformhvoneloop]) as something like a ‘chiralized’ version of the scalar two-mass easy box: We are motivated to draw this as a *box* because it has precisely *one* physically-meaningful quad-cut (in this case ${\color{cut1}\ell\_1}$), upon which it has residue ${{\rm\rule[2.4pt]{6pt}{0.65pt}}}1$. Although not relevant for MHV integrands, we could similarly define the parity-conjugate version, which has residue of ${{\rm\rule[2.4pt]{6pt}{0.65pt}}}1$ on the quad-cut ${\color{cut2}\ell\_2}$, and vanishing residue on ${\color{cut1}\ell\_1}$. Although it may seem like we’re nearly done, we must step back to observe that *not all the terms in ([pentagonexpansionformhvoneloop]) are pentagons!* This is indeed a good thing, because as described in ref. , all such chiral pentagons are *convergent*, while the actual MHV one-loop amplitude is of course divergent! The easy-to-overlook, non-pentagon contributions to the one-loop MHV integrand of ([pentagonexpansionformhvoneloop]), come from the boundary terms when $c\!=\!a{\hspace{0.5pt}\text{{\small+}}\hspace{-0.5pt}}1$: We are motivated to draw this as a triangle, because it has only three non-*X* propagators. In fact, in the particular case where *X* is taken to be the point at infinity, I*a*div becomes precisely a scalar triangle. Therefore, although somewhat less concise than ([pentagonexpansionformhvoneloop]) (which is deceptively so), we can write the MHV one-loop *integrand* in the somewhat more suggestive form, where the divergent part is expressed solely in terms of triangles. (Although we have not yet defined *all* chiral boxes ${\color{cut1}{\widetilde}{\mathcal{I}}^1\_{a,b,c,d}}$, the two-mass easy boxes given in ([chiraltwomasseasyone]) are the only ones relevant for MHV (*k*  =  0).) From this simple example, it should be clear that if we have ‘chiral’ versions of all the scalar boxes—ones with precisely *one* physical quad-cut with residue ${{\rm\rule[2.4pt]{6pt}{0.65pt}}}1$ on either ${\color{cut1}\ell\_1}$ and ${\color{cut2}\ell\_2}$—then would have a ‘chiral’ version of the box expansion: Notice that this expression will be valid *before* integration—unlike the more familiar *scalar* box expansion described in. To see that this formula must be right, first observe that the chiral boxes with coefficients are specifically engineered to have *precisely* the same residues on all physical quad-cuts as the actual amplitude’s integrand. However, as mentioned above, *every* chiral box integral ${\widetilde}{I}^{{\color{cut1}1},{\color{cut2}2}}\_{a,b,c,d}\!\equiv\!\int\!\!d^4\ell\,\,\widetilde{\mathcal{I}}^{{\color{cut1}1},{\color{cut2}2}}\_{a,b,c,d}$ is *convergent*—and so the chiral boxes alone cannot fully represent the amplitude. The remaining contributions must therefore encode the divergence of the one-loop amplitude, which from ([chiralboxexpansionformhv]) together with integrand-level factorization is simply the sum of the divergent “triangles” I*a*div. Thus, by construction, ([chiralboxexpansion]) will have the correct leading singularities on all physical quad-cuts (those that do not involve *X*) and have the correct infrared divergences. In order for ([chiralboxexpansion]) to reproduce the *integrand*, it obviously must be *X*-independent; and so it must have vanishing support on all quad-cuts involving *X*. The preceding arguments only fix the infrared-divergent part but not any potential four-mass integrals involving *X*. Thus, in order to uniquely fix ${\color{cut1}{\widetilde}{\mathcal{I}}^1\_{a,b,c,d}}$ and ${\color{cut2}{\widetilde}{\mathcal{I}}^2\_{a,b,c,d}}$, we must also require that they are *parity-odd* on all *four-mass* quad-cuts involving the auxiliary line *X*—that is, they must have *equal* residues (in both sign *and* magnitude) on all ‘spurious’ quad-cuts (as any parity-even integrand must have *opposite* residues on parity-conjugate quad-cuts, this will ensure that no four-mass integrals involving *X* will survive integration over a parity-invariant contour). Given chiral box integrands ${\color{cut1}{\widetilde}{\mathcal{I}}^1\_{a,b,c,d}}$ and ${\color{cut2}{\widetilde}{\mathcal{I}}^2\_{a,b,c,d}}$ which satisfy the conditions described above—being infrared convergent, having only one nonvanishing physical quad-cut and being parity-odd on spurious quad-cuts— it is not hard to prove that ([chiralboxexpansion]) matches *all* physical residues of the actual loop integrand, that it is ultimately free of any quad-cuts involving *X*, and that it is moreover algebraically independent of *X*. And so, ([chiralboxexpansion]) must give the correct integrand for the amplitude. (Using the Mathematica package documented in, it is easy to verify that ([chiralboxexpansion]) directly matches the one-loop integrands obtained using the BCFW recursion relations (which is described in ).) [t\*][chiralboxintegrandsandintegralstable] In terms of the chiral boxes, the ratio function becomes, Notice that the divergent integrands I*a*div are *manifestly* canceled in the ratio function, leading to an expression involving only *manifestly convergent* chiral boxes. This is remarkable, as it provides an analytic form of any one-loop ratio function for which no regularization is needed! However, although the complete integrand is of course independent of *X*, each chiral box *individually* depends on *X*. Nevertheless, although it is generally difficult (algebraically) to prove the *X*-independence of the integrated expressions (this can be seen to amount to the integrand-level IR equations ([residuetheorems])), this remarkable fact can easily be verified numerically—for example, using the Mathematica package ‘loopamplitudes’ which is made available with this note on the arXiv, and documented in.    Conclusions ===========   In this note, we have revisited the familiar story of using generalized unitarity to reconstruct one-loop amplitudes, especially for the case of planar, N  =  4 SYM. In order to make manifest all the known symmetries of the theory, we reconsidered the regularization of IR-divergences, and found a new, ‘DCI’-regularization scheme ([definitionofregulator]) which makes *manifest*—term-by-term—the dual-conformal invariance of all finite observables of N  =  4 SYM at one-loop, including all N*k*MHV ratio functions, ([finiteboxexpansionratiofunction]). The existence of such a regularization scheme was motivated by considering the remarkable properties of one-loop amplitudes *prior-to* integration. Such considerations also led to integrand-level IR-equations, ([residuetheorems])—giving novel constraints among box coefficients, and having applications beyond the planar limit. And in, we found that the familiar box expansion could be upgraded to a *chiral* box expansion for one-loop *integrands*, reproducing both the parity-odd and parity-even contributions to scattering amplitudes, and making the factorization of IR-divergences manifest. For the sake of completeness and reference, we have comprehensively described all the ingredients required to compute any one-loop amplitude in planar N  =  4 SYM: we gave explicit formulae for all ‘DCI’-regularized scalar box integrals in, and we gave expressions for all one-loop box coefficients in. Remarkably, these tables are incredibly redundant: all degenerate cases of *both* tables follow smoothly from the generic case—from the four-mass integral, ([symmetricfourmassintegral]), and the four-mass functions, ([fourmasscoefficient1]) and ([fourmasscoefficient2]), respectively. The notation used throughout this paper is reviewed in. In, we use the BCFW recursion relations described in to explicitly represent all one-loop integrands in N  =  4 SYM. All the results described in this paper have been implemented in a Mathematica package, ‘loopamplitudes’; instructions for obtaining this package, and complete documentation of the functions made available by it are described in. Natural extensions of this work include applying the ‘DCI’-regulator to scalar integrals beyond one-loop, finding chiral representations of higher-loop integrands, and using the integrand-level IR-equations to find better representations of one-loop amplitudes beyond the planar limit. Acknowledgements ================   We are especially grateful to Nima Arkani-Hamed and Freddy Cachazo for their helpful comments and suggestions regarding this work. This work was supported in part by the Harvard Society of Fellows and a grant form the Harvard Milton Fund (JB), Department of Energy contract (JB), and NSF grants (JT) and (SCH). Review of Momentum-Twistor Variables and Notation ================================================= Momentum-twistor variables, introduced by Hodges in ref. , trivialize the two ubiquitous constraints imposed on the external kinematics for all scattering amplitudes: the on-shell condition, *p**a*2  =  0, and momentum conservation, ∑*a**p**a*  =  0. By this we mean that *generic* momentum twistor variables $Z\!\equiv\!\big(z\_1\cdots z\_n\big)\!\in\!G(4,n)$, with *z**a*  ∈  C4, *always* correspond to a set of momentum-conserving, on-shell external momenta. This makes them especially convenient for use in scattering amplitudes. In, we used region-momentum variables *x**a* to encode the external four-momenta according to *p**a*  ≡  *x**a* + 1 − *x**a*. Momentum-twistors are so-called because they represent points in the twistor-space of *region-momentum* *x*-space: The *x*-space polygon, whose definition depends on a choice for the cyclic ordering of the external legs, encodes the external momenta in a simple way. Each line in twistor space—spanned, say by the twistors (*z**a* − 1 *z**a*)—corresponds to a *point* in *x*-space—in this case, the point *x**a*. This is why, for example, integration over a point *x*ℓ in region-momentum space translates to integration over a *line* (ℓ)  ≡  (ℓ*A*ℓ*B*) in momentum-twistor space (see e.g. ). Given a set of momentum-twistors *z**a*—viewed as columns of the *Z*—it is easy to construct the corresponding set of four-momenta. If we decompose each momentum-twistor *z**a* according to and define a (2  ×  *n*)-matrix $\widetilde{\lambda}\!\equiv\!\big(\widetilde\lambda\_1\cdots\widetilde\lambda\_n\big)\!\subset\!(\lambda^{\perp})$ according to, where ⟨*a* *b*⟩  ≡  det{*λ**a*, *λ**b*}, then $\lambda\!\cdot\!\widetilde{\lambda}\!=\!0$ because *Q*  ⋅  *λ* = 0; as such, we may identify $p\_a\!\equiv\!\lambda\_a\widetilde{\lambda}\_a$, and these (on-shell) four-momenta will *automatically* conserve momentum. Conversely, given four-momenta written in terms of spinor-helicity variables according to $p\_a\!\equiv\!\lambda\_a\widetilde{\lambda}\_a$, momentum twistors *z**a* can be constructed by joining each *λ**a* as in ([componentsofz]) with a corresponding *μ**a* given by Supersymmetry is encoded by dressing each momentum-twistor *z**a* with an anti-commuting four-vector *η**a*—collected into a (4  ×  *n*)-matrix *η* acted upon by the *S**U*(4) *R*-symmetry of N  =  4 SYM. If we similarly define $\widetilde{\eta}\!\equiv\!\eta\cdot Q$, then the kinematical data specified by $\{\lambda,\widetilde\lambda,\widetilde\eta\}$ will automatically be *super*momentum-conserving. Dual-conformal transformations in region-momentum space translate to mere *S**L*(4)-transformations in momentum-twistor space; hence, dual-conformal invariants are written in terms of simple determinants: ⟨*a* *b* *c* *d*⟩  ≡  det{*z**a*, *z**b*, *z**c*, *z**d*}. The simplest dual-*super*conformal invariant, however, involves five momentum-twistors, and is given by the familiar 5-bracket (sometimes called an ‘*R*-invariant’), : All one-loop leading singularities *except the four-masses*[7](#fn7) can be written directly as products of 5-brackets—as evidenced by and the fact that BCFW recursion (see ) directly gives tree-amplitudes in terms of products of 5-brackets. These often involve geometrically-defined, auxiliary points in momentum-twistor space such as “$(a\,b){\mathrm{\raisebox{0.75pt}{{$\,\bigcap\,$}}}}(c\,d\,e)$” which represents “$\mathrm{span}\{z\_a,z\_b\}{\mathrm{\raisebox{0.75pt}{{$\,\bigcap\,$}}}}\mathrm{span}\{z\_c,z\_d,z\_e\}$”. All such objects are trivially found via *Cramer’s rule*, which represents the unique identity satisfied by any five generic four-vectors: from this, it is easy to see that A similar, geometrically-defined object which appears in the BCFW recursion of one-loop amplitudes (see ) is “$(a\,b\,c){\mathrm{\raisebox{0.75pt}{{$\,\bigcap\,$}}}}(d\,e\,f)$”, which simply represents “”—which, in this case, is a (projective) *line* in twistor space; for the sake of concreteness, we can always write this explicitly as: Finally, we should recall that the Jacobian arising form the change of variables from momentum-space to momentum-twistor space is the full Parke-Taylor MHV super-amplitude, which explains why (when written in momentum-twistors) all MHV amplitudes are simply A*n*(0), 0  =  1, ensuring the dual-conformal symmetry of all amplitudes in planar N  =  4 SYM, ; and so, throughout this paper, A*n*(*k*), ℓ should be understood as the color-ordered, single-trace contribution to the ℓ-loop integrand for the *n*-point N*k*MHV scattering amplitude divided by ([parketaylor]), and in units of (*g*2*N**C*/(16*π*2))ℓ. The BCFW Representation of One-Loop Integrands ============================================== As described in ref.  (see also ), all ℓ-loop integrands for scattering amplitudes in planar N  =  4 SYM can be found by the BCFW recursion relations. In terms of on-shell diagrams, the BCFW recursion relations correspond to: Being more explicit about the ranges for the terms involved, the recursion becomes, Working in momentum-twistor space, the BCFW bridge operation corresponding to the shift $z\_n\!\to\!z\_n{\hspace{0.5pt}\text{{\small+}}\hspace{-0.5pt}}\,\alpha\, z\_{n-1}$, for *n**R*  >  3, is given by: where ${{\widehat}{a}}\equiv(a\,a{{\rm\rule[2.4pt]{6pt}{0.65pt}}}1){\mathrm{\raisebox{0.75pt}{{$\,\bigcap\,$}}}}(n{{\rm\rule[2.4pt]{6pt}{0.65pt}}}1\,n\,1)$ and ${\widehat}{n}\equiv(n\,n{{\rm\rule[2.4pt]{6pt}{0.65pt}}}1){\mathrm{\raisebox{0.75pt}{{$\,\bigcap\,$}}}}(a{{\rm\rule[2.4pt]{6pt}{0.65pt}}}1\,a\,1)$; when *n**R*  =  3 and $n\_L\!=\!n{{\rm\rule[2.4pt]{6pt}{0.65pt}}}1$, the bridge simply results in, And so, the ‘bridge’ terms of ([bcfwalllooprecursion]) are fairly straightforward to compute in momentum-twistor space, and the operations involved are the same regardless of the loop-levels ℓ*L* and ℓ*R* of the amplitudes being bridged. (Of course, at tree-level, *only* the bridge terms contribute to the recursion; and so the discussion so far suffices to recursively compute all tree-amplitudes, A*n*(*k*), 0, in N  =  4 SYM.) More interesting are the “forward-limit” contributions, $\mathrm{FL}\big(\mathcal{A}\_{n+2}^{(k+1),\ell-1}\big)$. It is easy to see that ([bcfwalllooprecursion]) gives rise to ℓ levels of nested forward-limits. As described in, determining which terms from the lower-loop amplitude are non-vanishing in the forward limit is generally difficult (even the *number* of terms which survive becomes scheme-dependent beyond one-loop). However, for one-loop amplitudes, we only need the forward-limits of trees; and as described in ref. , if the tree-amplitudes are obtained by BCFW,deforming the legs which are to be identified in the forward-limit, then the terms which vanish are *precisely* those involving three-particle amplitudes on either side of the bridge. Therefore, the only non-vanishing contributions are: where ${{\widehat}{a}}\!\equiv\!(a\,a{{\rm\rule[2.4pt]{6pt}{0.65pt}}}1){\mathrm{\raisebox{0.75pt}{{$\,\bigcap\,$}}}}(\ell\_A\ell\_B\,1)$, ${\widehat}{n}\equiv(n\,n{{\rm\rule[2.4pt]{6pt}{0.65pt}}}1){\mathrm{\raisebox{0.75pt}{{$\,\bigcap\,$}}}}(\ell\_A\ell\_B\,1)$, and ${\widehat}{\ell}\!\equiv\!(\ell\_A\ell\_B){\mathrm{\raisebox{0.75pt}{{$\,\bigcap\,$}}}}(n{{\rm\rule[2.4pt]{6pt}{0.65pt}}}1\,n\,1)$, and the “kermit” *K*[(*a* *b* *c*); (*d* *e* *f*)]—written in terms of the line ℓ  ≡  (ℓ*A*ℓ*B*)—is given by: Putting everything together, the one-loop integrand for any amplitude is: Here, the first two lines represent ‘bridge’ contributions—identical in form to the tree-level recursion—while the last line represents the forward-limits. Notice the striking similarity of the roles of the 5-bracket $[1\,a{{\rm\rule[2.4pt]{6pt}{0.65pt}}}1\,a\,n{{\rm\rule[2.4pt]{6pt}{0.65pt}}}1\,n]$ in the bridge-terms and the ‘kermit’ $K[(1\,a\,{{\rm\rule[2.4pt]{6pt}{0.65pt}}}1\,a);(1\,n{{\rm\rule[2.4pt]{6pt}{0.65pt}}}1\,n)]$ in the forward-limit terms. Indeed, the forward-limit terms can be understood as unitarity-cuts which are “bridged” by the kermit. This analysis can of course be continued to higher loop-orders by repeatedly substituting the structure above into the forward-limit contributions appearing in ([bcfwalllooprecursion]); this results in higher-loop “kermits” which can similarly be understood as ‘bridging’ amplitudes across a unitarity cut. However, as our present work requires only one-loop integrands (and as the the complexity involved in the higher-loop ‘kermits’ is considerable), we will leave a more general discussion to future work. Mathematica Implementation of Results ===================================== In order to make the results described in this paper most useful to researchers, we have prepared a Mathematica package called ‘loopamplitudes’ which implements all of our results. In addition to providing fast numerical evaluation of loop amplitudes and ratio functions, the loopamplitudes package also serves as a reliable reference for the many results tabulated above (as any transcription error would obstruct numerical consistency checks). The package together with a notebook illustrating much of its functionality are included with the submission files for this paper on the arXiv, which can be obtained as follows. From the abstract page for this paper on the arXiv, look for the “download” options in the upper-right corner of the page, follow the link to “other formats” (below the option for “PDF”), and download the “source” files for the submission. The source will contain[8](#fn8) the primary package ( loop --- amplitudes.m ), together with a notebook ( loop --- amplitudes --- demo.nb ) which includes many detailed examples of the package’s functionality. Upon obtaining the source files, one should open and evaluate the Mathematica notebook ` loop --- amplitudes --- demo.nb ’; in addition to walking the user through many example computations, this notebook will copy the file loop --- amplitudes.m to the user’s ApplicationDirectory[]; this will make the package available to run in any future notebook via the simple command “<<loopamplitudes.m”: Glossary of Functions Defined By the Mathematica Package --------------------------------------------------------                 [2]#2 10 Z. Bern, G. Chalmers, L. J. Dixon, and D. A. Kosower, “One-Loop *N* Gluon Amplitudes with Maximal Helicity Violation via Collinear Limits,” [*Phys. Rev. Lett.* **72** (1994) 2134–2137](http://dx.doi.org/10.1103/PhysRevLett.72.2134), [arXiv:hep-ph/9312333](http://arxiv.org/abs/hep-ph/9312333). Z. Bern, L. J. Dixon, D. C. Dunbar, and D. A. Kosower, “One-Loop *n*-Point Gauge Theory Amplitudes, Unitarity and Collinear Limits,” [*Nucl. Phys.* **B425** (1994) 217–260](http://dx.doi.org/10.1016/0550-3213(94)90179-1), [arXiv:hep-ph/9403226](http://arxiv.org/abs/hep-ph/9403226). Z. Bern, L. J. Dixon, D. C. Dunbar, and D. A. Kosower, “Fusing Gauge Theory Tree Amplitudes into Loop Amplitudes,” [*Nucl. Phys.* **B435** (1995) 59–101](http://dx.doi.org/10.1016/0550-3213(94)00488-Z), [arXiv:hep-ph/9409265](http://arxiv.org/abs/hep-ph/9409265). Z. Bern and G. Chalmers, “Factorization in One-Loop Gauge Theory,” [*Nucl. Phys.* **B447** (1995) 465–518](http://dx.doi.org/10.1016/0550-3213(95)00226-I), [arXiv:hep-ph/9503236](http://arxiv.org/abs/hep-ph/9503236). R. Britto, F. Cachazo, and B. Feng, “Computing One-Loop Amplitudes from the Holomorphic Anomaly of Unitarity Cuts,” [*Phys. Rev.* **D71** (2005) 025012](http://dx.doi.org/10.1103/PhysRevD.71.025012), [arXiv:hep-th/0410179](http://arxiv.org/abs/hep-th/0410179). R. Britto, F. Cachazo, and B. Feng, “Generalized Unitarity and One-Loop Amplitudes in N = 4 Super-Yang-Mills,” [*Nucl. Phys.* **B725** (2005) 275–305](http://dx.doi.org/10.1016/j.nuclphysb.2005.07.014), [arXiv:hep-th/0412103](http://arxiv.org/abs/hep-th/0412103). J. M. Drummond, J. Henn, G. P. Korchemsky, and E. Sokatchev, “On Planar Gluon Amplitudes/Wilson Loops Duality,” [*Nucl. Phys.* **B795** (2008) 52–68](http://dx.doi.org/10.1016/j.nuclphysb.2007.11.007), [arXiv:0709.2368 [hep-th]](http://arxiv.org/abs/0709.2368). F. Cachazo, “Sharpening The Leading Singularity,” [arXiv:0803.1988 [hep-th]](http://arxiv.org/abs/0803.1988). S. Caron-Huot, “Loops and Trees,” [*JHEP* **1105** (2011) 080](http://dx.doi.org/10.1007/JHEP05(2011)080), [arXiv:1007.3224 [hep-ph]](http://arxiv.org/abs/1007.3224). J. M. Drummond and J. M. Henn, “Simple Loop Integrals and Amplitudes in N = 4 SYM,” [*JHEP* **1105** (2011) 105](http://dx.doi.org/10.1007/JHEP05(2011)105), [arXiv:1008.2965 [hep-th]](http://arxiv.org/abs/1008.2965). N. Arkani-Hamed, J. L. Bourjaily, F. Cachazo, A. B. Goncharov, A. Postnikov, *et al.*, “Scattering Amplitudes and the Positive Grassmannian,” [arXiv:1212.5605 [hep-th]](http://arxiv.org/abs/1212.5605). J. Drummond, J. Henn, G. Korchemsky, and E. Sokatchev, “Dual Superconformal Symmetry of Scattering Amplitudes in N = 4 super Yang-Mills Theory,” [*Nucl. Phys.* **B828** (2010) 317–374](http://dx.doi.org/10.1016/j.nuclphysb.2009.11.022), [arXiv:0807.1095 [hep-th]](http://arxiv.org/abs/0807.1095). J. Drummond, J. Henn, G. Korchemsky, and E. Sokatchev, “Generalized Unitarity for N = 4 Super-Amplitudes,” [*Nucl. Phys.* **B869** (2013) 452–492](http://dx.doi.org/10.1016/j.nuclphysb.2012.12.009), [arXiv:0808.0491 [hep-th]](http://arxiv.org/abs/0808.0491). H. Elvang, D. Z. Freedman, and M. Kiermaier, “Dual Conformal Symmetry of 1-Loop NMHV Amplitudes in N = 4 SYM Theory,” [*JHEP* **1003** (2010) 075](http://dx.doi.org/10.1007/JHEP03(2010)075), [arXiv:0905.4379 [hep-th]](http://arxiv.org/abs/0905.4379). N. Arkani-Hamed, J. L. Bourjaily, F. Cachazo, S. Caron-Huot, and J. Trnka, “The All-Loop Integrand For Scattering Amplitudes in Planar N=4 SYM,” [*JHEP* **1101** (2011) 041](http://dx.doi.org/10.1007/JHEP01(2011)041), [arXiv:1008.2958 [hep-th]](http://arxiv.org/abs/1008.2958). N. Arkani-Hamed, J. L. Bourjaily, F. Cachazo, and J. Trnka, “Local Integrals for Planar Scattering Amplitudes,” [*JHEP* **1206** (2012) 125](http://dx.doi.org/10.1007/JHEP06(2012)125), [arXiv:1012.6032 [hep-th]](http://arxiv.org/abs/1012.6032). Z. Bern, L. J. Dixon, and D. A. Kosower, “Dimensionally Regulated One Loop Integrals,” [*Phys. Lett.* **B302** (1993) 299–308](http://dx.doi.org/10.1016/0370-2693(93)90400-C), [arXiv:hep-ph/9212308](http://arxiv.org/abs/hep-ph/9212308). Z. Bern, L. J. Dixon, and D. A. Kosower, “Dimensionally Regulated Pentagon Integrals,” [*Nucl. Phys.* **B412** (1994) 751–816](http://dx.doi.org/10.1016/0550-3213(94)90398-0), [arXiv:hep-ph/9306240](http://arxiv.org/abs/hep-ph/9306240). J. Drummond, J. Henn, V. Smirnov, and E. Sokatchev, “Magic Identities for Conformal Four-Point Integrals,” [*JHEP* **0701** (2007) 064](http://dx.doi.org/10.1088/1126-6708/2007/01/064), [arXiv:hep-th/0607160](http://arxiv.org/abs/hep-th/0607160). L. F. Alday, J. M. Henn, J. Plefka, and T. Schuster, “Scattering into the Fifth Dimension of N = 4 super Yang-Mills,” [*JHEP* **1001** (2010) 077](http://dx.doi.org/10.1007/JHEP01(2010)077), [arXiv:0908.0684 [hep-th]](http://arxiv.org/abs/0908.0684). A. Hodges, “The Box Integrals in Momentum-Twistor Geometry,” [arXiv:1004.3323 [hep-th]](http://arxiv.org/abs/1004.3323). L. Mason and D. Skinner, “Amplitudes at Weak Coupling as Polytopes in *A**d**S*5,” [*J. Phys.* **A44** (2011) 135401](http://dx.doi.org/10.1088/1751-8113/44/13/135401), [arXiv:1004.3498 [hep-th]](http://arxiv.org/abs/1004.3498). J. L. Bourjaily, “Positroids, Plabic Graphs, and Scattering Amplitudes in Mathematica,” [arXiv:1212.6974 [hep-th]](http://arxiv.org/abs/1212.6974). A. Hodges, “Eliminating Spurious Poles from Gauge-Theoretic Amplitudes,” [arXiv:0905.1473 [hep-th]](http://arxiv.org/abs/0905.1473). R. Penrose, “Twistor Algebra,” [*J. Math. Phys.* **8** (1967) 345](http://dx.doi.org/10.1063/1.1705200). L. Mason and D. Skinner, “Dual Superconformal Invariance, Momentum Twistors and Grassmannians,” [*JHEP* **0911** (2009) 045](http://dx.doi.org/10.1088/1126-6708/2009/11/045), [arXiv:0909.0250 [hep-th]](http://arxiv.org/abs/0909.0250). J. L. Bourjaily, A. DiRe, A. Shaikh, M. Spradlin, and A. Volovich, “The Soft-Collinear Bootstrap: N=4 Yang-Mills Amplitudes at Six and Seven Loops,” [*JHEP* **1203** (2012) 032](http://dx.doi.org/10.1007/JHEP03(2012)032), [arXiv:1112.6432 [hep-th]](http://arxiv.org/abs/1112.6432). B. Eden, P. Heslop, G. P. Korchemsky, and E. Sokatchev, “Hidden Symmetry of Four-Point Correlation Functions and Amplitudes in N = 4 SYM,” [*Nucl. Phys.* **B862** (2012) 193–231](http://dx.doi.org/10.1016/j.nuclphysb.2012.04.007), [arXiv:1108.3557 [hep-th]](http://arxiv.org/abs/1108.3557). J. Golden and M. Spradlin, “Collinear and Soft Limits of Multi-Loop Integrands in N = 4 Yang-Mills,” [*JHEP* **1205** (2012) 027](http://dx.doi.org/10.1007/JHEP05(2012)027), [arXiv:1203.1915 [hep-th]](http://arxiv.org/abs/1203.1915). W. L. van Neerven and J. A. M. Vermaseren, “Large Loop Integrals,” [*Phys. Lett.* **B137** (1984) 241](http://dx.doi.org/10.1016/0370-2693(84)90237-5). A. I. Davydychev, “General Results for Massive *N*-Point Feynman Diagrams with Different Masses,” [*J. Math. Phys.* **33** (1992) 358–369](http://dx.doi.org/10.1063/1.529914). Z. Bern, J. Rozowsky, and B. Yan, “Two-Loop Four-Gluon Amplitudes in N = 4 SuperYang-Mills,” [*Phys. Lett.* **B401** (1997) 273–282](http://dx.doi.org/10.1016/S0370-2693(97)00413-9), [arXiv:hep-ph/9702424](http://arxiv.org/abs/hep-ph/9702424). R. Britto, F. Cachazo, and B. Feng, “New Recursion Relations for Tree Amplitudes of Gluons,” [*Nucl. Phys.* **B715** (2005) 499–522](http://dx.doi.org/10.1016/j.nuclphysb.2005.02.030), [arXiv:hep-th/0412308](http://arxiv.org/abs/hep-th/0412308). R. Roiban, M. Spradlin, and A. Volovich, “Dissolving N = 4 Loop Amplitudes into QCD Tree Amplitudes,” [*Phys. Rev. Lett.* **94** (2005) 102002](http://dx.doi.org/10.1103/PhysRevLett.94.102002), [arXiv:hep-th/0412265](http://arxiv.org/abs/hep-th/0412265). Z. Bern, L. J. Dixon, and D. A. Kosower, “On-Shell Methods in Perturbative QCD,” [*Annals Phys.* **322** (2007) 1587–1634](http://dx.doi.org/10.1016/j.aop.2007.04.014), [arXiv:0704.2798 [hep-ph]](http://arxiv.org/abs/0704.2798). W. Giele and E. N. Glover, “Higher Order Corrections to Jet Cross-Sections in *e*+ *e*− Annihilation,” [*Phys. Rev.* **D46** (1992) 1980–2010](http://dx.doi.org/10.1103/PhysRevD.46.1980). Z. Kunszt, A. Signer, and Z. Trocsanyi, “Singular Terms of Helicity Amplitudes at One-Loop in QCD and the Soft Limit of the Cross-Sections of Multiparton Processes,” [*Nucl. Phys.* **B420** (1994) 550–564](http://dx.doi.org/10.1016/0550-3213(94)90077-9), [arXiv:hep-ph/9401294](http://arxiv.org/abs/hep-ph/9401294). Z. Bern, V. Del Duca, L. J. Dixon, and D. A. Kosower, “All Non-Maximally- Helicity-Violating One-Loop Seven-Gluon Amplitudes in N = 4 Super-Yang-Mills Theory,” [*Phys. Rev.* **D71** (2005) 045006](http://dx.doi.org/10.1103/PhysRevD.71.045006), [arXiv:hep-th/0410224](http://arxiv.org/abs/hep-th/0410224). N. Arkani-Hamed, F. Cachazo, and J. Kaplan, “What is the Simplest Quantum Field Theory?,” [*JHEP* **1009** (2010) 016](http://dx.doi.org/10.1007/JHEP09(2010)016), [arXiv:0808.1446 [hep-th]](http://arxiv.org/abs/0808.1446). E. I. Buchbinder and F. Cachazo, “Two-Loop Amplitudes of Gluons and Octa-Cuts in N = 4 super Yang-Mills,” [*JHEP* **0511** (2005) 036](http://dx.doi.org/10.1088/1126-6708/2005/11/036), [arXiv:hep-th/0506126](http://arxiv.org/abs/hep-th/0506126). N. Bjerrum-Bohr and P. Vanhove, “Absence of Triangles in Maximal Supergravity Amplitudes,” [*JHEP* **0810** (2008) 006](http://dx.doi.org/10.1088/1126-6708/2008/10/006), [arXiv:0805.3682 [hep-th]](http://arxiv.org/abs/0805.3682). N. Arkani-Hamed, F. Cachazo, and C. Cheung, “The Grassmannian Origin Of Dual Superconformal Invariance,” [*JHEP* **1003** (2010) 036](http://dx.doi.org/10.1007/JHEP03(2010)036), [arXiv:0909.0483 [hep-th]](http://arxiv.org/abs/0909.0483).         --- 1. In a general quantum field theory, the integrand may also have contributions where only co-dimension three (triangles) or co-dimension two (bubbles) residues exist; it is a non-trivial fact that N  =  4 does not need contributions from triangles or bubbles (closely related to its UV-finiteness).[↩](#fnref1) 2. For notational simplicity, we have left implicit a factor *π*2 in the definition of the measure of the integrand ([deffourmassbox])—the integrated expression, ([symmetricfourmassintegral]), is of course the standard one; we also omit factors of *i* associated with the Wick rotation: “*d*4ℓ” denotes the Euclidean integration measure.[↩](#fnref2) 3. We note that this expression for the four-mass function appears to differ from that given in which we suspect to be incomplete.[↩](#fnref3) 4. We should note that the coefficient of log(*ε*)2 in the amplitude is *twice* the coefficient of log(*μ*IR2)2 of the Higgs regulator described in ref. —equivalently, it is *twice* the coefficient of 1/(2(*D* − 4)2) appearing in dimensional-regularization. This can be understood from the way ([definitionofregulator]) cuts-out the soft-collinear divergent regions of integration.[↩](#fnref4) 5. The factor of *n* in ([scalarboxexpansionintegratedF2]) should be thought of as ‘2*n*/2’, arising from the fact that each two-mass hard box diverges like $\frac{1}{2}\log(\epsilon)^2$ (one-mass boxes counted twice by symmetry), and the quad-cut coefficients ${\color{cut1}f^1}$ and ${\color{cut2}f^2}$ *separately* generate the tree-amplitude for each of the *n* sets of adjacent legs.[↩](#fnref5) 6. We should mention that there can be residues of poles at infinity—the parity-even combinations of such are simply the so-called triangle coefficients; the inclusion of these terms would have no substantive effect on the results of this section.[↩](#fnref6) 7. Recall that the four-mass leading singularities require a prefactor of $\varphi\_{{\color{cut1}1},{\color{cut2}2}}$ (see equation ([fourmasscoefficient1])).[↩](#fnref7) 8. Occasionally, the “source” file downloaded from the arXiv is saved without any extension; this can be ameliorated by manually appending “.tar.gz” to the name of the downloaded file.[↩](#fnref8) =1 Introduction ============ One-loop amplitudes have been extensively studied in recent decades, leading to many important insights and discoveries about the structure of scattering amplitudes, and frequently serve as an important source of theoretical ‘data’ with which to test new ideas. A powerful approach to computing loop amplitudes in any quantum field theory is the unitarity-based method, in which the amplitude is expanded into a basis of standardized scalar Feynman integrals (regulated if necessary) with coefficients fixed by on-shell scattering processes. Although very familiar and reasonably well understood, the way this approach has been realized in terms of existing technology does not make manifest several recently discovered aspects of loop-amplitudes—especially for the particularly rich case of scattering amplitudes in planar, N  =  4 super Yang-Mills (SYM). The two principle shortcomings about the way generalized unitarity has been realized in terms of existing tools (at least for N  =  4 SYM) are: (1) that it fails to reflect the rich symmetries observed in loop amplitudes *prior to integration*; and (2) that even those symmetries which survive integration—such as the dual-conformal invariance (DCI) of the ratios of scattering amplitudes (see e.g. )—are *severely* obfuscated in all existing regularization schema for infrared-divergent contributions. Because of this, manifestly-DCI expressions for ratio functions are known only in a few exceptionally simple cases. In this note, we revisit this story and fully address both shortcomings, providing manifestly-DCI expressions for *all* one-loop ratio functions in N  =  4 SYM, and describing how the familiar box expansion can be upgraded to a *chiral* box expansion which matches all one-loop *integrands*. This paper is organized as follows. In we review how generalized unitarity can be used to reproduce integrated amplitudes in N  =  4 SYM, and in we (heuristically) derive ‘DCI’-regularized expressions for all scalar box integrals, which are given in. In we summarize the computation of scalar box coefficients using momentum-twistor variables, and write explicit formulae for all one-loop box coefficients in. In we explore the general features of the ‘DCI’-regularization proposal. In we show that this proposal correctly reproduces all finite observables of any planar theory, thereby justifying its description as a ‘regularization scheme’. In we describe how this scheme can be extended beyond one-loop, and compare it with existing approaches. The ‘DCI’-regulator is closely related to (and motivated by) the way that infrared divergences arise at the level of the loop-integrand. In, we describe how the IR-divergences of loop amplitudes appear in terms of the ‘DCI’-regularization scheme, and in we show how generalized unitarity realized at the *integrand*-level can be used to generate more powerful identities than would be possible after integration. In we illustrate the how these features persist beyond the planar-level. In we return to the familiar box expansion, and describe how it can be made ‘chiral’ in a natural way, allowing us to match the full, chiral *integrand* of any one-loop scattering amplitude in N  =  4. For the purposes of concreteness and completeness, we review the basic kinematical variables—momentum twistors—used in most of this paper in ; and in, we use these variables to give a closed-form specialization of the BCFW recursion relations for all one-loop integrands in N  =  4 SYM. And finally, we have implemented the results described in this paper in a Mathematica package called ‘loopamplitudes’ which is documented in. Revisiting Generalized Unitarity at One-Loop ============================================ A major triumph of the unitarity-based approach to quantum field theory was the discovery that any one-loop amplitude can be written as a linear combination of standardized, scalar integrals with coefficients expressed as *on-shell diagrams*, historically known as ‘leading singularities’ (for a comprehensive review, see ). Because of the good UV-behavior of N  =  4 SYM, only *box* integrals contribute—those involving *four* loop-momentum propagators—giving rise to the familiar ‘box expansion’: where *f**a*, *b*, *c*, *d* are on-shell functions, and *I**a*, *b*, *c*, *d* are the standard scalar box integrals, (To be clear, throughout this paper we will refer to (∫ *d*4ℓ)A*n*(*k*), 1 as the (integrated) one-loop N*k*MHV amplitude, expressed in units of *g*2*N**c*/(16*π*2).) The objects appearing in ([scalarboxexpansion]) will each be described in detail below. But let us briefly remark on the motivation underlying the box expansion. Loop amplitudes are obtained by integrating the loop integrand over a four-dimensional contour of real loop momenta, ℓ  ∈  R3, 1. If this integrand were obtained from the Feynman expansion, for example, it would include many propagators for the internal, ‘loop’ particles. A co-dimension one residue of this integral ‘enclosing’ one propagator—a ‘single-cut’—would correspond to putting one internal particle on-shell. Because the loop integral is four-dimensional, the highest-degree residues would be co-dimension four; these are the so-called *leading singularities* or ‘quad-cuts’ of the integrand[1](#fn1) and are computed in terms of on-shell diagrams of the form shown in ([termsinboxexpansion]). The scalar box integrals *I**a*, *b*, *c*, *d* are simply those involving precisely four Feynman propagators of a scalar field theory—normalized to have co-dimension four residues of unit magnitude. As such, we should be able to represent any integrated loop amplitude in N  =  4 by dressing each box integral with the actual co-dimension four residues ‘enclosing’ the corresponding propagators as for the full loop integrand. A slight subtlety is that the residues of scalar boxes come in parity conjugate pairs, so in order to agree with the complete *integrand* the scalar boxes should be supplemented by parity-odd integrals,. Since these integrate to zero, they are often ignored. For the purpose of computing the *integrated* amplitude in this section—as opposed to the *integrand*—we will also ignore them here. This mismatch will be addressed in greater detail in, where we show how to upgrade the box expansion ([scalarboxexpansion]) in a way which allows us to match the full amplitude *prior to* integration. Scalar Box Integrals and their Divergences ------------------------------------------ Let us start our analysis with the generic, ‘four-mass’, scalar box integral —one for which all four corners are ‘massive’ (involving at least two massless momenta): where with $p\_a\!\equiv\!x\_{a+1}\,{{\rm\rule[2.4pt]{6pt}{0.65pt}}}\,x\_a$ and where 1/(ℓ, *a*) denotes the standard propagator, When all the corners are massive, this integral is a transcendentality-two, completely finite, symmetric function of the dual-conformally invariant cross-ratios (*u*, *v*): with Notice that this form is *manifestly* symmetric under the exchange *u*  ↔  *v* as this exchange results only in *α*  ↔  *β*, under which ([symmetricfourmassintegral]) is obviously symmetric. The equivalence of ([symmetricfourmassintegral]) to existing formulae in the literature is easily verified[2](#fn2). It is easy to see that this integral becomes divergent when any corner becomes massless—for example, identifying legs *p**a* and *p**B* results in: This causes the cross ratio *u* to vanish, introducing a logarithmic singularity from the term $-\frac{1}{2}\log(u)\log(v)$ in ([symmetricfourmassintegral]). It is worth noting that ([symmetricfourmassintegral]) simplifies considerably when *u* is taken to be parametrically small, *u*  →  O(*ε*): $\Delta\!\to\! (1\,{{\rm\rule[2.4pt]{6pt}{0.65pt}}}\,v){\hspace{0.5pt}\text{{\small+}}\hspace{-0.5pt}}\,\mathcal{O}(\epsilon)$,, and $\beta\!\to\!(1\,{{\rm\rule[2.4pt]{6pt}{0.65pt}}}\,v){\hspace{0.5pt}\text{{\small+}}\hspace{-0.5pt}}\,\mathcal{O}(\epsilon)$, resulting in This divergence can be regulated in a number of ways, including dimensional regularization (see e.g.  for standard formulae). Another canonical way to regulate such divergences uses the Higgs mechanism,. In the simplest setup, this is used to give masses only to *internal* propagators, and leads to the mass-regularized formulae found in e.g. . However, because such regularization schema are predicated on a dimensionful parameter, the regulated formulae that result *severely* break dual-conformal invariance, obscuring the ultimate invariance of even absolutely convergent (and hence DCI) combinations of box integrals. (A complete basis involving only *manifestly* finite integrals for all convergent one-loop integrals was given in ref. .) We are therefore motivated to find some way to make all the singular limits of the general integral ([symmetricfourmassintegral]) as dual-conformally invariant as possible, regulating the divergence caused by *u*  →  0 in a way which depends only on some dimensionless parameter, denoted *ε*, and dual-conformal cross-ratios. Such a regularization scheme is described in the next subsection. The ‘DCI’ Regularization of Scalar Box Integrals ------------------------------------------------ We propose the following regulator for one-loop integrals: render all external legs slightly off-shell by displacing the coordinates according to, this transformation can be understood graphically as follows: We will prove that this regulator produces the correct result for all finite observables and discuss its implications, generalizations, and extensions in greater detail in ; but first, let us understand its consequences for the one-loop integrals appearing in the box expansion ([scalarboxexpansion]). If the dimensionless parameter *ε* is small (the only regime in which we will be interested) then the invariants (*a*, *b*) which are already non-vanishing are modified by a negligible amount. However, when as in ([unregulatedscalarthreemass]), for example, the invariants will be regularized by ([xshift]): This implies that the integrated expressions for all box integrals will be given by limits of the four-mass-box ([symmetricfourmassintegral]); in particular, it is not necessary for us to add any ‘discontinuity functions’ such as those discussed in e.g. . Consider the leading, degenerate limit of the box function, the so-called ‘three-mass’ integral. This corresponds to taking $b\!=\!a{\hspace{0.5pt}\text{{\small+}}\hspace{-0.5pt}}1$ in ([symmetricfourmassintegral]), keeping all other external corners massive: (*b*, *c*), (*c*, *d*), (*d*, *a*) ≠ 0. The cross-ratio *v* is unaffected by the regulator to O(*ε*), while *u* transforms according to Using ([limitoffourmassintegral]), the limit is easily seen to be given by All further singular limits of ([symmetricfourmassintegral]) are similarly regulated according to the shifts ([xshift]). This results in ‘DCI-regularized’ forms for all scalar box integrals, which we have listed in detail in. Importantly, these integrals depend *only* on the dimensionless parameter *ε* and dual-conformally invariant cross-ratios. One special case not given in is the so-called ‘massless’ scalar box—which is only relevant for *n*  =  4. In this case, the shifts ([xshift]) are not strictly defined, but can be generalized in simple way so that *u*, *v*  →  *ε*2, resulting in the following, ‘DCI’-regulated massless box function: [h\*] Scalar Box Coefficients: One-Loop Leading Singularities ------------------------------------------------------- Factorization dictates that the residues of the loop integrand—called *leading singularities* —are simply the products of tree-amplitudes, summed-over all the internal particles which can be exchanged, and integrated over the on-shell phase space of each. As mentioned above, we represent such functions graphically as *on-shell diagrams* of the form shown in ([termsinboxexpansion]). These are simply algebraic (super)functions of the external kinematical data—almost always rational, and at one-loop involving at most the solution to a quadratic equation. Leading singularities have of course been known to the literature for quite some time, and can be computed in many ways. A comprehensive summary of these objects—their classification, evaluation, and relations—was described recently in The physical content of any on-shell diagram (after blowing up all tree-amplitudes at the vertices themselves into on-shell diagrams) is encoded by a permutation, and the permutations of the corners are simply ‘glued’ together to give the permutation of the ‘one-loop’ diagram: Given the permutation labeling an on-shell diagram, it is trivial to construct an explicit formula for the corresponding on-shell function. This is done most directly in terms of an auxiliary Grassmannian integral as described in. (All the necessary tools involved in this story have been made available in a Mathematica package called ‘positroids’, which is documented in.) We will not review these ideas here, but simply give the formulae that result. The most compact expressions for leading singularities are found when they are written in terms of the momentum-twistor variables introduced in ref.  since they simultaneously trivialize the two ubiquitous kinematical constraints—the on-shell condition and momentum conservation. Momentum-twistors are simply the twistor-variables associated to the region-momentum coordinates *x**a* defined in. (A brief introduction to momentum-twistor variables and an explanation of the notation used throughout this section is given in.) Assuming a modicum of familiarity with momentum-twistors, let us now describe the form that leading singularities take. We start with the most general case: a leading singularity involving four massive corners. It turns out that this case is the only one we need to consider, as it will smoothly generate all the others in a very natural way. The most important data are the two solutions ${\color{cut1}\ell\_1},{\color{cut2}\ell\_2}$ to the kinematical constraints of putting all the internal lines on-shell, Here, the lines (*A**a*), …, (*D**d*) in momentum-twistor space correspond to the region-momenta *x**a*, …, *x**d* used in ; and the lines ${\color{cut1}\ell\_{1}}$ and ${\color{cut2}\ell\_{2}}$ correspond to the two solutions to the problem of putting four propagators on-shell. These ‘quad-cuts’ are the solution to a simple geometry problem in momentum-twistor space (viewed projectively as P3): ${\color{cut1}\ell\_{1}}$ and ${\color{cut2}\ell\_{2}}$ are the two lines which simultaneously intersect the four generic lines (*A**a*), …, (*D**d*): (The fact that there are *two* solutions to the problem of putting four-propagators on-shell is a classic result of the Schubert calculus—and continues to hold even when the four lines are non-generic; see for an exposition of these ideas.) We will give explicit formulae for the twistors ${\color{cut1}\alpha\_1},\ldots,{\color{cut1}\delta\_1}$ and ${\color{cut2}\alpha\_2},\ldots,{\color{cut2}\delta\_2}$ corresponding to the two lines ${\color{cut1}\ell\_{1}}$ and ${\color{cut2}\ell\_{2}}$ intersecting the lines (*A**a*), …, (*D**d*), respectively, in ; but for now let us take it for granted that they are known. Given these twistors, it is easy to write the leading singularity for each particular quad-cut $\ell\_{{\color{cut1}1},{\color{cut2}2}}$ in terms of momentum-twistors. In momentum-twistor space, we are dealing with MHV-stripped amplitudes (so (N*k* = 0)MHV tree-amplitudes are simply the identity), and polarization sums become simple multiplication of the corresponding MHV-stripped amplitudes. This allows us to ‘peel-off’ the tree-amplitudes at each corner from a standard on-shell graph involving only MHV amplitudes, : where the on-shell graph on the right is the N*k* = 2MHV ‘four-mass’ function[3](#fn3), While simply replacing $({\color{cut1}\alpha\_1},\ldots,{\color{cut1}\delta\_1})\!\to\!({\color{cut2}\alpha\_2},\ldots,{\color{cut2}\delta\_2})$ in the formula above would give (minus) the other leading singularity, we will find it advantageous to use an alternate form of the four-mass function involving the ${\color{cut2}\ell\_2}$ solution: In, we give the particular solutions to the quad-cuts represented graphically in ([fourmassquadcutfigure])—where Δ is defined as in equation ([definitionoffourmassdelta]). (The notation used here is fully defined in.) The motivation for using two separate formulae for the four-mass leading singularities is that they separately evolve smoothly to all other cases. This is made possible by the fact that the multiplicative factors appearing in, which encode the shifts of the quad-cuts from each ‘corner’ of the box (see equation ([fourmassquadcutfigure])), are all smooth and non-singular in limits where some of the legs are identified. (Notice that $\varphi\_{{\color{cut1}1},{\color{cut2}2}}\!\to\!1$ in all limits where a pair of legs are identified.) As promised, the formulae given above for the four-mass leading singularities smoothly generate *all* one-loop leading singularities, which for the purposes of completeness and reference have been written explicitly in. The formulae given in have been organized in order to highlight how each case descends smoothly from the general cases given above for the four-mass functions. The complete box coefficient for a given topology is given by the sum of the two corresponding on-shell diagrams, $f\_{a,b,c,d}\!\equiv\!{\color{cut1}f^1\_{a,b,c,d}}+{\color{cut2}f^2\_{a,b,c,d}}$, which involve the quad-cuts ${\color{cut1}\ell\_1}$ and ${\color{cut2}\ell\_2}$, respectively: Here, each graph represents a sum over all such graphs with the same topology—involving any four amplitudes A*n**a*(*k**a*), …, A*n**d*(*k**d*) such that $k\_a{\hspace{0.5pt}\text{{\small+}}\hspace{-0.5pt}}\,k\_b{\hspace{0.5pt}\text{{\small+}}\hspace{-0.5pt}}\,k\_c{\hspace{0.5pt}\text{{\small+}}\hspace{-0.5pt}}\,k\_d=k\,{{\rm\rule[2.4pt]{6pt}{0.65pt}}}\,2$, and for which $n\_a{\hspace{0.5pt}\text{{\small+}}\hspace{-0.5pt}}\,n\_b{\hspace{0.5pt}\text{{\small+}}\hspace{-0.5pt}}\,n\_c{\hspace{0.5pt}\text{{\small+}}\hspace{-0.5pt}}\,n\_d\!=\!n{\hspace{0.5pt}\text{{\small+}}\hspace{-0.5pt}}8$, with $0\leq\!k\leq n\,{{\rm\rule[2.4pt]{6pt}{0.65pt}}}\,4$ for each (except when *n*  =  3, for which A3( − 1) is allowed). [b\*]   [h\*] Properties and Extensions of ‘DCI’ Regularization ================================================= Using the ‘DCI’-regularized box integrals *I**a*, *b*, *c*, *d**ε* and the box coefficients described in, the scalar-box expansion becomes Let us now prove that this produces the correct result for all finite observables in planar N  =  4 SYM by showing that the expressions given for *I**a*, *b*, *c*, *d**ε* can be obtained from a very concrete and simple regularization procedure. The ‘DCI’ Regularization Scheme ------------------------------- Given any four-dimensional integrand I(ℓ), we define its ‘DCI’-regulated integral by deforming the integrand in the following way: The regulating factor *R*(ℓ) suppresses all IR-divergent regions (for all integrands), making the result manifestly IR-finite. Ultimately, this works because all infrared singularities in a planar integral arise from predetermined integration regions—namely, the collinear regions, where the factor *R*(ℓ) becomes O(*ε*). It is trivial to see that the regulated expression ([definitionofregulator]) produces the correct result for all finite observables in the limit of *ε*  →  0: since the factor everywhere except in the isolated regions responsible for infrared divergences (where it is O(*ε*)), it can be ignored for any *convergent* integral. We should emphasize a distinction we are making between *convergent* integrals and so-called “finite” integrals. Tautologically, a “convergent” integral is one which can be evaluated *without* regularization; this requires that it have *no* collinearly-divergent regions. This notion of convergence precludes the combinations of divergent integrals which happen to be “finite” in some particular regularization scheme (but for which integration *does* require regularization). The convergence of an integral can be tested as follows: multiply the integrand by any two adjacent propagators, and verify that the product vanishes in the corresponding collinear region: As pointed out in ref.  the integrand for the ratio function is in fact *convergent*. A similar integrand-level test of (partial) convergence for the logarithm of the 4-particle amplitude has been shown sufficient to completely fix the integrand through seven-loops,, and has also been used to find amplitudes involving more particles. It remains for us to show that the regulated amplitude ([definitionofregulator]) is actually given by ([scalarboxexpansiontwotypesofcoefficients]). To see this, consider the box expansion as an integrand-level statement: we can decompose any one-loop *integrand* into parity-even and parity-odd sectors: (Recall that I*a*, *b*, *c*, *d*(ℓ) denotes the *integrand* of the scalar box *I**a*, *b*, *c*, *d*.) The scalar boxes form a complete basis for parity-even four-dimensional integrands (see e.g. ), and so the first term of ([scalarboxesandparityoddexpansion]) completely captures all parity-even contributions. The parity-odd contributions in ([scalarboxesandparityoddexpansion]) are often ignored because they vanish when integrated over the parity-even contour of R3, 1. Importantly, *all* parity-odd integrals are not merely vanishing upon integration, but are in fact *convergent* in the sense described above,. (This is not too surprising since the requirement for an integrand to vanish in the limit ([convergencetest]) is itself *parity-invariant*.) And because all parity-odd integrands are convergent, the regulator *R*(ℓ) is $1{\hspace{0.5pt}\text{{\small+}}\hspace{-0.5pt}}\,\mathcal{O}(\epsilon)$ everywhere, and can therefore be ignored. For the parity-even sector—that is, the scalar box expansion, ([scalarboxexpansion]) (but now understood at the *integrand*-level)—we only need to verify the following: This identity is proved by noting that all other factors in *R*(ℓ) are approximately unity except in very small regions, where the unregulated box is not singular for lack of divergent propagators; with the explicit propagators regulated, the singular regions are all removed, resulting in precisely *I**a*, *b*, *c*, *d**ε* as described in and given in. This concludes our proof that ([scalarboxexpansiontwotypesofcoefficients]), using the ‘DCI’-regulated scalar box integrals, correctly reproduces all regulator-independent contributions to loop amplitudes, and therefore leads to correct formulae for all finite one-loop observables. Generalization of ‘DCI’-Regularization to Higher Loop-Orders ------------------------------------------------------------ The failure of simple off-shell regularization beyond one-loop has been known for quite some time (see e.g. ). For example, for the two-loop 4-particle amplitude in N  =  4 SYM, a simple off-shell prescription fails even to give the correct coefficient for the double-logarithmic divergence! Because the ‘DCI’-regulator ([definitionofregulator]) is somewhat similar to an off-shell regulator at one-loop (but one involving non-uniform masses), this may seem like a bad omen for extending it beyond one-loop. However, the most natural generalization of ([definitionofregulator]) beyond one-loop *cannot* be interpreted as an off-shell regulator—which is good news, indeed! Let us first describe the generalization of the ‘DCI’-regulator to higher loop-orders, and then illustrate how it differs from an of-shell regulator in the case of 4-particles. The integrand-level understanding of the ‘DCI’-regulator, ([definitionofregulator]), provides an obvious generalization to higher loops-orders, We suspect that this regularization prescription will render all multi-loop integrands IR-finite; moreover, using the same arguments as in, we expect that ([multiloopregulator]) will generate the correct result for all finite multi-loop observables. As one simple example of a regularized integral beyond one-loop—and as an illustration of the differences between ([multiloopregulator]) and off-shell regularization—consider the ‘DCI’-regularized 4-particle double-box integrand, : Because of the part of *R*(ℓ1)*R*(ℓ2) which survives, this is clearly *not* an off-shell version of the double-box! In particular, the integrand ([dciregulateddoublebox]) includes regions (both collinear and soft-collinear) where the regularization-factor *cannot* be approximated by unity. This is good news, since this had to happen for the regulator to have any chance of working beyond one-loop. It would be interesting to compute this integral explicitly, and verify for example that scheme-independent quantities such as the two-loop double-logarithmic divergences (the so called cusp anomalous dimension) are correctly reproduced; we leave this, however, to future work. The Infrared-Divergences of One-Loop Amplitudes ----------------------------------------------- Let us now consider how the IR-divergences of scattering amplitudes are organized in the ‘DCI’-regulated box expansion of ([scalarboxexpansiontwotypesofcoefficients]). These divergences arise from the parts of *I**a*, *b*, *c*, *d**ε* proportional to log(*ε*) or log(*ε*)2; let us denote the combined coefficients of each of these divergences as follows: It is easy to identify the coefficient *F*2 from :[4](#fn4) the only integrals which include a factor of log(*ε*)2 are the so-called ‘two-mass hard’ and ‘one-mass’ boxes, which can be understood[5](#fn5) as the ‘BCF’ representation of tree-amplitudes discovered in ref.  (see also ). Interestingly, the coefficient *F*1 in ([scalarboxdivergences]) is *also* proportional to the tree-amplitude. (This follows from the ideas discussed in.) Because the coefficient of log(*ε*) must be of transcendentality-one, it should involve the logarithm of some dual-conformal cross ratio, which we denote Ω*n*; ultimately, *F*1 is found to be simply, An important object that arises in the discussion of N*k*MHV amplitudes A*n*(*k*), ℓ is the so-called *ratio function*, denoted R*n*(*k*), ℓ, which at one-loop is given by, Because the divergences *F*1 and *F*2 are always proportional to the tree-amplitude, A*n*(*k*), 0—and because in momentum-twistor space A*n*(0), 0 is simply the identity—we see that R*n*(*k*), 1 is finite in the limit *ε*  →  0. Therefore, any one-loop ratio function can be computed more simply as, where *f**a*, *b*, *c*, *d*(MHV) are the MHV one-loop box coefficients, and *I**a*, *b*, *c*, *d*fin are the *ε*-independent (“finite”) parts of the *ε*-regulated scalar box integrals *I**a*, *b*, *c*, *d**ε* as listed in. Notice that because *I**a*, *b*, *c*, *d**ε* depended *only* on *ε* and DCI cross-ratios, the integrals *I**a*, *b*, *c*, *d*fin are *manifestly* dual-conformally invariant! And so ([finiteboxexpansionratiofunction]) provides a *manifestly* dual-conformal representation of any one-loop ratio function in N  =  4 SYM. Theories with Triangle Contributions ------------------------------------ We should briefly mention that the regulator ([definitionofregulator]) can be applied to any planar theory, not just N = 4 SYM. (Planarity being a consequence of the way the momenta *p**a* are associated with region-momentum coordinates *x**a*; a possible generalization to the non-planar case will be discussed shortly.) In a more general planar field theory, one-loop amplitudes may also require contributions from ‘triangle’ and ‘bubble’ integrals in addition to the scalar boxes. These integrals manifestly break dual-conformal symmetry, but at least the triangles can be regulated in the same way as the boxes, ([definitionofregulator]). Indeed, regulated triangle integrals can be obtained from the ‘DCI’-regulated scalar-box integrals without any additional work: one need only to send one of the points *x**a*, …, *x**d* to (space-like) ‘infinity’, denoted *x*∞. The correctness of this is easily seen from the geometry of the four-mass integral, ([deffourmassbox]). Thus, for example, the (no-longer ‘DCI’, but) *ε*-regulated two-mass triangle integral can be found by simply sending a point of the three-mass integral to infinity: where we simply take *x**d*  →  *x*∞ so that, Here, dual-conformal invariance is broken *explicitly* by the fact that ‘*d*  →  ∞’ picks out a preferred point—namely, *x*∞—in region-momentum space. The so-called ‘bubble’ and rational terms are unaffected by the infrared regulator. Our claim is that in any planar quantum field theory requiring triangle-contributions (which are absent for N  =  4), these contributions are correctly reproduced by simply augmenting the box expansion with triangle integrals obtained from *I**a*, *b*, *c*, *d**ε* as described above, with coefficients fixed by on-shell diagrams. The inclusion of bubble and rational terms would be unchanged from their usual form (see e.g. ). Integrand-Level Infrared Equations and Residue Theorems ------------------------------------------------------- The well-understood factorization structure of infrared divergences of one-loop amplitudes in gauge theory leads to constraints on integral coefficients in the framework of generalized unitarity, resulting in the so-called ‘infrared (IR) equations’ (see refs. for a few applications). These equations can be understood in terms of unitarity cuts, and in the planar case lead to $n(n{{\rm\rule[2.4pt]{6pt}{0.65pt}}}3)/2$ relations among the integral coefficients. By considering generalized unitarity at the *integrand* level, however, the factorization structure of soft-collinear divergences leads to more powerful identities. Recall that the box coefficients are so-called ‘quad-cuts’ (co-dimension four residues) of the loop integrand; let us now consider ‘triple-cuts’—co-dimension three residues. Because the loop integral is four-dimensional, a triple-cut still depends on one integration variable; by applying Cauchy’s residue theorem to the integral over this remaining variable, we find that the sum of all box coefficients sharing a triple-cut must vanish[6](#fn6). The richest of these residue theorems arise for triple-cuts involving at least one massless corner, as these separate into two distinct classes depending on whether the 3-particle amplitude at the massless corner is A3( − 1) or A3(0). Importantly, Cauchy’s theorem applies to these two cases *separately*, leading to a *pair* of identities: where *τ**a*, *b*, *d*  =   ± 1 if $d\!=\!b{\hspace{0.5pt}\text{{\small+}}\hspace{-0.5pt}}\,1$ or $d\!=\!a\,{{\rm\rule[2.4pt]{6pt}{0.65pt}}}\,1$, respectively, and vanishes otherwise. The sums appearing in the left-hand side of ([residuetheorems]) are over all box coefficients sharing a particular (chiral) triple-cut, while the right-hand sides follow from the universal structure of soft-collinear divergences (and when *τ*  ≠  0, ([residuetheorems]) simply represents the famous ‘BCF’ formula for tree-amplitudes, ). It is easy to see that these constitute $2n(n\,{{\rm\rule[2.4pt]{6pt}{0.65pt}}}\,3)$ linearly-independent equations. Averaging the two lines of ([residuetheorems]) results in $n(n\,{{\rm\rule[2.4pt]{6pt}{0.65pt}}}\,3)$ relations among the (parity-even) box coefficients—which is twice as many as the “standard” IR-equations. This doubling of equations is a consequence of the fact that each collinear divergence of the integrated amplitude can arise from *two* distinct integration regions; and requiring factorization at the *integrand* level results in identities for each integration region separately. To better understand the nature of the vanishing terms appearing on the right-hand side of ([residuetheorems]), let us consider a loop-integrand in the neighborhood of a soft-collinear divergence—where three consecutive propagators are simultaneously put on-shell. For the sake of concreteness, we may parameterize the loop integrand in such a region by writing ℓ  →  (*L*+, *L*−, ℓ⊥) (see for a similar discussion): We can take a residue in *L*− which puts the first propagator on-shell, resulting in (for the regime in which *L*+, ℓ⊥ are both small). If we now write $\ell\_\perp^2\!=\!z{\widetilde}z$ and drop the condition that ${\widetilde}z$ is the complex conjugate of *z*, we see that we can take two more residues—for instance *L*+ and then *z*—to localize to the triple-cut, and then finally take a residue involving the pole in ${\widetilde}{z}$. Notice that this results in a ‘quad-cut’ (a co-dimension four residue) which only involves *three* propagators! This is a simple one-loop example of the phenomenon of ‘composite leading singularities’ discussed in refs. , and is the physical origin of the terms on the right-hand side of ([residuetheorems]). It is not hard to show that the criterion for the convergence of an integral discussed in is equivalent to the requirement that *all* composite leading singularities vanish (in general, composite leading singularities are in one to one correspondence with IR-divergent triangle topologies, which are in correspondence with these equations); this implies that for ratio functions in N  =  4 SYM the right-hand sides of ([residuetheorems]) must vanish. Moreover, it is possible to show that the difference between the ‘DCI’-regularized boxes of ([finiteboxexpansionratiofunction]), and more familiar—e.g. dimensionally-regulated— expressions for the box integrals is necessarily proportional to the the right-hand-sides of ([residuetheorems])—which provides an alternate proof of the equivalence between the two regularization schema when applied to manifestly finite observables. We leave the details of this discussion as an exercise for the interested reader. In summary, the residue theorems ([residuetheorems]) encode the physical fact that IR-factorization occurs at the *integrand* level, and constitute $n(n\,{{\rm\rule[2.4pt]{6pt}{0.65pt}}}\,3)$ independent parity-even relations among the box coefficients—*twice* as many as the IR-equations arising after integration. These relations are *general*—that is, not specific to N  =  4 SYM—provided that triangle coefficients are included where they are required. Applications to Non-Planar Theories ----------------------------------- Although we have largely focused our attention on planar theories, it is worth emphasizing that the increased number of IR equations upon considering integrands instead of integrals is completely unrelated to planarity: the same result holds for non-planar theories as well—including gravity. This motivates a generalization of the integrand-level regulator ([definitionofregulator]) to the non-planar case, at least at one-loop order. As a simple illustration, consider the 4-particle amplitude in N  =  8 supergravity. By the no-triangle property,, it is a combination of at most three box functions. It is not hard to see that the absence of collinear divergences—the vanishing of the right-hand-sides of the analogs of ([residuetheorems])—uniquely fixes their relative coefficients, leading to the following form for the amplitude: where *s*, *t*, *u* are the usual Mandelstam invariants and the box integrals are labelled by the momenta coming into the vertices. It is not difficult to check that this matches the correct expression. As a less trivial example, consider the 6-graviton amplitude (with arbitrary heliciticies). In principle, this amplitude could involve any combination of 195 boxes; but the integrand-level IR-equations of the preceding subsection shows that the amplitude can be expressed in terms of at most 120 combinations. That is, we find 75 linearly-independent constraints of the form ([residuetheorems]), as opposed to the mere 25 constraints arising from the previously known IR-equations. It would be interesting to explore if these relations could be used to obtain more compact analytic forms for one-loop graviton amplitudes. The *Chiral* Box Expansion for One-Loop *Integrands* ==================================================== As we have seen, while the familiar box expansion reproduces all one-loop amplitudes *post-integration*, it does *not* match the full structure of the actual loop integrand. These can be easily understood in the case of MHV loop amplitudes, where the *actual* MHV loop integrand *only* has support on two-mass easy boxes involving ${\color{cut1}\ell\_1}$—with vanishing support on all quad-cuts involving ${\color{cut2}\ell\_2}$. However, because the scalar box integrals are *parity-even*, their integrands always have unit-magnitude residues for both quad-cuts. This is easy to understand, as scattering amplitude integrands are generally *chiral*, while the scalar boxes are manifestly *non-chiral*. In this section, we describe a slight modification to the scalar-box integrands given above which leads to a fully-chiral generalization of the box expansion, allowing us to represent all one-loop *integrands* of N  =  4 SYM. In fact, such a modification was discovered for MHV and NMHV one-loop integrands in ref. , but the generalization to more complicated amplitudes was unclear. Here, by revisiting the special case of MHV, we will find that the underlying structure naturally generalizes to *all* N*k*MHV one-loop integrands, A*n*(*k*), 1. All of the essential structure needed is already present in the case of the MHV one-loop integrands, A*n*(0), 1. In ref. , the one-loop MHV integrand was written: Here, *X* is an arbitrary, auxiliary line in momentum-twistor space (of course, the integrand is ultimately, algebraically independent of *X*). We will mostly take this formula for granted here; but let us see what role is played by the auxiliary line *X*, and how we may generalize this to reproduce any one-loop integrand. Because a pentagon integral has five propagators, it has $2\!\times\!\binom{5}{4}\!=\!10$ fourth-degree residues—from the two ways of cutting any four of the five propagators. But because one of its propagators involves this auxiliary line *X*, only two such residues are physically meaningful: those which cut the lines $\{(a{{\rm\rule[2.4pt]{6pt}{0.65pt}}}1\,a),(a\,a{\hspace{0.5pt}\text{{\small+}}\hspace{-0.5pt}}1),(c{{\rm\rule[2.4pt]{6pt}{0.65pt}}}1\,c),(c\,c{\hspace{0.5pt}\text{{\small+}}\hspace{-0.5pt}}1)\}$. These are obviously ‘two-mass easy’ quad-cuts; but because the numerator of the integrand in ([pentagonexpansionformhvoneloop]) is proportional to ${\langle \ell\,(a{{\rm\rule[2.4pt]{6pt}{0.65pt}}}1\,a\,a{\hspace{0.5pt}\text{{\small+}}\hspace{-0.5pt}}1)\!{\mathrm{\raisebox{0.75pt}{{$\,\bigcap\,$}}}}\!(c{{\rm\rule[2.4pt]{6pt}{0.65pt}}}1\,c\,c{\hspace{0.5pt}\text{{\small+}}\hspace{-0.5pt}}1)\rangle}$, the pentagon’s residue on ${\color{cut2}\ell\_2}$ vanishes, while the residue on ${\color{cut1}\ell\_1}$ is unity (see ). But this is perfect: the one-loop MHV integrand only has support on ${\color{cut1}\ell\_1}$! Because of this, we choose to view the *pentagon* contributions to ([pentagonexpansionformhvoneloop]) as something like a ‘chiralized’ version of the scalar two-mass easy box: We are motivated to draw this as a *box* because it has precisely *one* physically-meaningful quad-cut (in this case ${\color{cut1}\ell\_1}$), upon which it has residue ${{\rm\rule[2.4pt]{6pt}{0.65pt}}}1$. Although not relevant for MHV integrands, we could similarly define the parity-conjugate version, which has residue of ${{\rm\rule[2.4pt]{6pt}{0.65pt}}}1$ on the quad-cut ${\color{cut2}\ell\_2}$, and vanishing residue on ${\color{cut1}\ell\_1}$. Although it may seem like we’re nearly done, we must step back to observe that *not all the terms in ([pentagonexpansionformhvoneloop]) are pentagons!* This is indeed a good thing, because as described in ref. , all such chiral pentagons are *convergent*, while the actual MHV one-loop amplitude is of course divergent! The easy-to-overlook, non-pentagon contributions to the one-loop MHV integrand of ([pentagonexpansionformhvoneloop]), come from the boundary terms when $c\!=\!a{\hspace{0.5pt}\text{{\small+}}\hspace{-0.5pt}}1$: We are motivated to draw this as a triangle, because it has only three non-*X* propagators. In fact, in the particular case where *X* is taken to be the point at infinity, I*a*div becomes precisely a scalar triangle. Therefore, although somewhat less concise than ([pentagonexpansionformhvoneloop]) (which is deceptively so), we can write the MHV one-loop *integrand* in the somewhat more suggestive form, where the divergent part is expressed solely in terms of triangles. (Although we have not yet defined *all* chiral boxes ${\color{cut1}{\widetilde}{\mathcal{I}}^1\_{a,b,c,d}}$, the two-mass easy boxes given in ([chiraltwomasseasyone]) are the only ones relevant for MHV (*k*  =  0).) From this simple example, it should be clear that if we have ‘chiral’ versions of all the scalar boxes—ones with precisely *one* physical quad-cut with residue ${{\rm\rule[2.4pt]{6pt}{0.65pt}}}1$ on either ${\color{cut1}\ell\_1}$ and ${\color{cut2}\ell\_2}$—then would have a ‘chiral’ version of the box expansion: Notice that this expression will be valid *before* integration—unlike the more familiar *scalar* box expansion described in. To see that this formula must be right, first observe that the chiral boxes with coefficients are specifically engineered to have *precisely* the same residues on all physical quad-cuts as the actual amplitude’s integrand. However, as mentioned above, *every* chiral box integral ${\widetilde}{I}^{{\color{cut1}1},{\color{cut2}2}}\_{a,b,c,d}\!\equiv\!\int\!\!d^4\ell\,\,\widetilde{\mathcal{I}}^{{\color{cut1}1},{\color{cut2}2}}\_{a,b,c,d}$ is *convergent*—and so the chiral boxes alone cannot fully represent the amplitude. The remaining contributions must therefore encode the divergence of the one-loop amplitude, which from ([chiralboxexpansionformhv]) together with integrand-level factorization is simply the sum of the divergent “triangles” I*a*div. Thus, by construction, ([chiralboxexpansion]) will have the correct leading singularities on all physical quad-cuts (those that do not involve *X*) and have the correct infrared divergences. In order for ([chiralboxexpansion]) to reproduce the *integrand*, it obviously must be *X*-independent; and so it must have vanishing support on all quad-cuts involving *X*. The preceding arguments only fix the infrared-divergent part but not any potential four-mass integrals involving *X*. Thus, in order to uniquely fix ${\color{cut1}{\widetilde}{\mathcal{I}}^1\_{a,b,c,d}}$ and ${\color{cut2}{\widetilde}{\mathcal{I}}^2\_{a,b,c,d}}$, we must also require that they are *parity-odd* on all *four-mass* quad-cuts involving the auxiliary line *X*—that is, they must have *equal* residues (in both sign *and* magnitude) on all ‘spurious’ quad-cuts (as any parity-even integrand must have *opposite* residues on parity-conjugate quad-cuts, this will ensure that no four-mass integrals involving *X* will survive integration over a parity-invariant contour). Given chiral box integrands ${\color{cut1}{\widetilde}{\mathcal{I}}^1\_{a,b,c,d}}$ and ${\color{cut2}{\widetilde}{\mathcal{I}}^2\_{a,b,c,d}}$ which satisfy the conditions described above—being infrared convergent, having only one nonvanishing physical quad-cut and being parity-odd on spurious quad-cuts— it is not hard to prove that ([chiralboxexpansion]) matches *all* physical residues of the actual loop integrand, that it is ultimately free of any quad-cuts involving *X*, and that it is moreover algebraically independent of *X*. And so, ([chiralboxexpansion]) must give the correct integrand for the amplitude. (Using the Mathematica package documented in, it is easy to verify that ([chiralboxexpansion]) directly matches the one-loop integrands obtained using the BCFW recursion relations (which is described in ).) [t\*][chiralboxintegrandsandintegralstable] In terms of the chiral boxes, the ratio function becomes, Notice that the divergent integrands I*a*div are *manifestly* canceled in the ratio function, leading to an expression involving only *manifestly convergent* chiral boxes. This is remarkable, as it provides an analytic form of any one-loop ratio function for which no regularization is needed! However, although the complete integrand is of course independent of *X*, each chiral box *individually* depends on *X*. Nevertheless, although it is generally difficult (algebraically) to prove the *X*-independence of the integrated expressions (this can be seen to amount to the integrand-level IR equations ([residuetheorems])), this remarkable fact can easily be verified numerically—for example, using the Mathematica package ‘loopamplitudes’ which is made available with this note on the arXiv, and documented in.    Conclusions ===========   In this note, we have revisited the familiar story of using generalized unitarity to reconstruct one-loop amplitudes, especially for the case of planar, N  =  4 SYM. In order to make manifest all the known symmetries of the theory, we reconsidered the regularization of IR-divergences, and found a new, ‘DCI’-regularization scheme ([definitionofregulator]) which makes *manifest*—term-by-term—the dual-conformal invariance of all finite observables of N  =  4 SYM at one-loop, including all N*k*MHV ratio functions, ([finiteboxexpansionratiofunction]). The existence of such a regularization scheme was motivated by considering the remarkable properties of one-loop amplitudes *prior-to* integration. Such considerations also led to integrand-level IR-equations, ([residuetheorems])—giving novel constraints among box coefficients, and having applications beyond the planar limit. And in, we found that the familiar box expansion could be upgraded to a *chiral* box expansion for one-loop *integrands*, reproducing both the parity-odd and parity-even contributions to scattering amplitudes, and making the factorization of IR-divergences manifest. For the sake of completeness and reference, we have comprehensively described all the ingredients required to compute any one-loop amplitude in planar N  =  4 SYM: we gave explicit formulae for all ‘DCI’-regularized scalar box integrals in, and we gave expressions for all one-loop box coefficients in. Remarkably, these tables are incredibly redundant: all degenerate cases of *both* tables follow smoothly from the generic case—from the four-mass integral, ([symmetricfourmassintegral]), and the four-mass functions, ([fourmasscoefficient1]) and ([fourmasscoefficient2]), respectively. The notation used throughout this paper is reviewed in. In, we use the BCFW recursion relations described in to explicitly represent all one-loop integrands in N  =  4 SYM. All the results described in this paper have been implemented in a Mathematica package, ‘loopamplitudes’; instructions for obtaining this package, and complete documentation of the functions made available by it are described in. Natural extensions of this work include applying the ‘DCI’-regulator to scalar integrals beyond one-loop, finding chiral representations of higher-loop integrands, and using the integrand-level IR-equations to find better representations of one-loop amplitudes beyond the planar limit. Acknowledgements ================   We are especially grateful to Nima Arkani-Hamed and Freddy Cachazo for their helpful comments and suggestions regarding this work. This work was supported in part by the Harvard Society of Fellows and a grant form the Harvard Milton Fund (JB), Department of Energy contract (JB), and NSF grants (JT) and (SCH). Review of Momentum-Twistor Variables and Notation ================================================= Momentum-twistor variables, introduced by Hodges in ref. , trivialize the two ubiquitous constraints imposed on the external kinematics for all scattering amplitudes: the on-shell condition, *p**a*2  =  0, and momentum conservation, ∑*a**p**a*  =  0. By this we mean that *generic* momentum twistor variables $Z\!\equiv\!\big(z\_1\cdots z\_n\big)\!\in\!G(4,n)$, with *z**a*  ∈  C4, *always* correspond to a set of momentum-conserving, on-shell external momenta. This makes them especially convenient for use in scattering amplitudes. In, we used region-momentum variables *x**a* to encode the external four-momenta according to *p**a*  ≡  *x**a* + 1 − *x**a*. Momentum-twistors are so-called because they represent points in the twistor-space of *region-momentum* *x*-space: The *x*-space polygon, whose definition depends on a choice for the cyclic ordering of the external legs, encodes the external momenta in a simple way. Each line in twistor space—spanned, say by the twistors (*z**a* − 1 *z**a*)—corresponds to a *point* in *x*-space—in this case, the point *x**a*. This is why, for example, integration over a point *x*ℓ in region-momentum space translates to integration over a *line* (ℓ)  ≡  (ℓ*A*ℓ*B*) in momentum-twistor space (see e.g. ). Given a set of momentum-twistors *z**a*—viewed as columns of the *Z*—it is easy to construct the corresponding set of four-momenta. If we decompose each momentum-twistor *z**a* according to and define a (2  ×  *n*)-matrix $\widetilde{\lambda}\!\equiv\!\big(\widetilde\lambda\_1\cdots\widetilde\lambda\_n\big)\!\subset\!(\lambda^{\perp})$ according to, where ⟨*a* *b*⟩  ≡  det{*λ**a*, *λ**b*}, then $\lambda\!\cdot\!\widetilde{\lambda}\!=\!0$ because *Q*  ⋅  *λ* = 0; as such, we may identify $p\_a\!\equiv\!\lambda\_a\widetilde{\lambda}\_a$, and these (on-shell) four-momenta will *automatically* conserve momentum. Conversely, given four-momenta written in terms of spinor-helicity variables according to $p\_a\!\equiv\!\lambda\_a\widetilde{\lambda}\_a$, momentum twistors *z**a* can be constructed by joining each *λ**a* as in ([componentsofz]) with a corresponding *μ**a* given by Supersymmetry is encoded by dressing each momentum-twistor *z**a* with an anti-commuting four-vector *η**a*—collected into a (4  ×  *n*)-matrix *η* acted upon by the *S**U*(4) *R*-symmetry of N  =  4 SYM. If we similarly define $\widetilde{\eta}\!\equiv\!\eta\cdot Q$, then the kinematical data specified by $\{\lambda,\widetilde\lambda,\widetilde\eta\}$ will automatically be *super*momentum-conserving. Dual-conformal transformations in region-momentum space translate to mere *S**L*(4)-transformations in momentum-twistor space; hence, dual-conformal invariants are written in terms of simple determinants: ⟨*a* *b* *c* *d*⟩  ≡  det{*z**a*, *z**b*, *z**c*, *z**d*}. The simplest dual-*super*conformal invariant, however, involves five momentum-twistors, and is given by the familiar 5-bracket (sometimes called an ‘*R*-invariant’), : All one-loop leading singularities *except the four-masses*[7](#fn7) can be written directly as products of 5-brackets—as evidenced by and the fact that BCFW recursion (see ) directly gives tree-amplitudes in terms of products of 5-brackets. These often involve geometrically-defined, auxiliary points in momentum-twistor space such as “$(a\,b){\mathrm{\raisebox{0.75pt}{{$\,\bigcap\,$}}}}(c\,d\,e)$” which represents “$\mathrm{span}\{z\_a,z\_b\}{\mathrm{\raisebox{0.75pt}{{$\,\bigcap\,$}}}}\mathrm{span}\{z\_c,z\_d,z\_e\}$”. All such objects are trivially found via *Cramer’s rule*, which represents the unique identity satisfied by any five generic four-vectors: from this, it is easy to see that A similar, geometrically-defined object which appears in the BCFW recursion of one-loop amplitudes (see ) is “$(a\,b\,c){\mathrm{\raisebox{0.75pt}{{$\,\bigcap\,$}}}}(d\,e\,f)$”, which simply represents “”—which, in this case, is a (projective) *line* in twistor space; for the sake of concreteness, we can always write this explicitly as: Finally, we should recall that the Jacobian arising form the change of variables from momentum-space to momentum-twistor space is the full Parke-Taylor MHV super-amplitude, which explains why (when written in momentum-twistors) all MHV amplitudes are simply A*n*(0), 0  =  1, ensuring the dual-conformal symmetry of all amplitudes in planar N  =  4 SYM, ; and so, throughout this paper, A*n*(*k*), ℓ should be understood as the color-ordered, single-trace contribution to the ℓ-loop integrand for the *n*-point N*k*MHV scattering amplitude divided by ([parketaylor]), and in units of (*g*2*N**C*/(16*π*2))ℓ. The BCFW Representation of One-Loop Integrands ============================================== As described in ref.  (see also ), all ℓ-loop integrands for scattering amplitudes in planar N  =  4 SYM can be found by the BCFW recursion relations. In terms of on-shell diagrams, the BCFW recursion relations correspond to: Being more explicit about the ranges for the terms involved, the recursion becomes, Working in momentum-twistor space, the BCFW bridge operation corresponding to the shift $z\_n\!\to\!z\_n{\hspace{0.5pt}\text{{\small+}}\hspace{-0.5pt}}\,\alpha\, z\_{n-1}$, for *n**R*  >  3, is given by: where ${{\widehat}{a}}\equiv(a\,a{{\rm\rule[2.4pt]{6pt}{0.65pt}}}1){\mathrm{\raisebox{0.75pt}{{$\,\bigcap\,$}}}}(n{{\rm\rule[2.4pt]{6pt}{0.65pt}}}1\,n\,1)$ and ${\widehat}{n}\equiv(n\,n{{\rm\rule[2.4pt]{6pt}{0.65pt}}}1){\mathrm{\raisebox{0.75pt}{{$\,\bigcap\,$}}}}(a{{\rm\rule[2.4pt]{6pt}{0.65pt}}}1\,a\,1)$; when *n**R*  =  3 and $n\_L\!=\!n{{\rm\rule[2.4pt]{6pt}{0.65pt}}}1$, the bridge simply results in, And so, the ‘bridge’ terms of ([bcfwalllooprecursion]) are fairly straightforward to compute in momentum-twistor space, and the operations involved are the same regardless of the loop-levels ℓ*L* and ℓ*R* of the amplitudes being bridged. (Of course, at tree-level, *only* the bridge terms contribute to the recursion; and so the discussion so far suffices to recursively compute all tree-amplitudes, A*n*(*k*), 0, in N  =  4 SYM.) More interesting are the “forward-limit” contributions, $\mathrm{FL}\big(\mathcal{A}\_{n+2}^{(k+1),\ell-1}\big)$. It is easy to see that ([bcfwalllooprecursion]) gives rise to ℓ levels of nested forward-limits. As described in, determining which terms from the lower-loop amplitude are non-vanishing in the forward limit is generally difficult (even the *number* of terms which survive becomes scheme-dependent beyond one-loop). However, for one-loop amplitudes, we only need the forward-limits of trees; and as described in ref. , if the tree-amplitudes are obtained by BCFW,deforming the legs which are to be identified in the forward-limit, then the terms which vanish are *precisely* those involving three-particle amplitudes on either side of the bridge. Therefore, the only non-vanishing contributions are: where ${{\widehat}{a}}\!\equiv\!(a\,a{{\rm\rule[2.4pt]{6pt}{0.65pt}}}1){\mathrm{\raisebox{0.75pt}{{$\,\bigcap\,$}}}}(\ell\_A\ell\_B\,1)$, ${\widehat}{n}\equiv(n\,n{{\rm\rule[2.4pt]{6pt}{0.65pt}}}1){\mathrm{\raisebox{0.75pt}{{$\,\bigcap\,$}}}}(\ell\_A\ell\_B\,1)$, and ${\widehat}{\ell}\!\equiv\!(\ell\_A\ell\_B){\mathrm{\raisebox{0.75pt}{{$\,\bigcap\,$}}}}(n{{\rm\rule[2.4pt]{6pt}{0.65pt}}}1\,n\,1)$, and the “kermit” *K*[(*a* *b* *c*); (*d* *e* *f*)]—written in terms of the line ℓ  ≡  (ℓ*A*ℓ*B*)—is given by: Putting everything together, the one-loop integrand for any amplitude is: Here, the first two lines represent ‘bridge’ contributions—identical in form to the tree-level recursion—while the last line represents the forward-limits. Notice the striking similarity of the roles of the 5-bracket $[1\,a{{\rm\rule[2.4pt]{6pt}{0.65pt}}}1\,a\,n{{\rm\rule[2.4pt]{6pt}{0.65pt}}}1\,n]$ in the bridge-terms and the ‘kermit’ $K[(1\,a\,{{\rm\rule[2.4pt]{6pt}{0.65pt}}}1\,a);(1\,n{{\rm\rule[2.4pt]{6pt}{0.65pt}}}1\,n)]$ in the forward-limit terms. Indeed, the forward-limit terms can be understood as unitarity-cuts which are “bridged” by the kermit. This analysis can of course be continued to higher loop-orders by repeatedly substituting the structure above into the forward-limit contributions appearing in ([bcfwalllooprecursion]); this results in higher-loop “kermits” which can similarly be understood as ‘bridging’ amplitudes across a unitarity cut. However, as our present work requires only one-loop integrands (and as the the complexity involved in the higher-loop ‘kermits’ is considerable), we will leave a more general discussion to future work. Mathematica Implementation of Results ===================================== In order to make the results described in this paper most useful to researchers, we have prepared a Mathematica package called ‘loopamplitudes’ which implements all of our results. In addition to providing fast numerical evaluation of loop amplitudes and ratio functions, the loopamplitudes package also serves as a reliable reference for the many results tabulated above (as any transcription error would obstruct numerical consistency checks). The package together with a notebook illustrating much of its functionality are included with the submission files for this paper on the arXiv, which can be obtained as follows. From the abstract page for this paper on the arXiv, look for the “download” options in the upper-right corner of the page, follow the link to “other formats” (below the option for “PDF”), and download the “source” files for the submission. The source will contain[8](#fn8) the primary package ( loop --- amplitudes.m ), together with a notebook ( loop --- amplitudes --- demo.nb ) which includes many detailed examples of the package’s functionality. Upon obtaining the source files, one should open and evaluate the Mathematica notebook ` loop --- amplitudes --- demo.nb ’; in addition to walking the user through many example computations, this notebook will copy the file loop --- amplitudes.m to the user’s ApplicationDirectory[]; this will make the package available to run in any future notebook via the simple command “<<loopamplitudes.m”: Glossary of Functions Defined By the Mathematica Package --------------------------------------------------------                 [2]#2 10 Z. Bern, G. Chalmers, L. J. Dixon, and D. A. Kosower, “One-Loop *N* Gluon Amplitudes with Maximal Helicity Violation via Collinear Limits,” [*Phys. Rev. Lett.* **72** (1994) 2134–2137](http://dx.doi.org/10.1103/PhysRevLett.72.2134), [arXiv:hep-ph/9312333](http://arxiv.org/abs/hep-ph/9312333). Z. Bern, L. J. Dixon, D. C. Dunbar, and D. A. Kosower, “One-Loop *n*-Point Gauge Theory Amplitudes, Unitarity and Collinear Limits,” [*Nucl. Phys.* **B425** (1994) 217–260](http://dx.doi.org/10.1016/0550-3213(94)90179-1), [arXiv:hep-ph/9403226](http://arxiv.org/abs/hep-ph/9403226). Z. Bern, L. J. Dixon, D. C. Dunbar, and D. A. Kosower, “Fusing Gauge Theory Tree Amplitudes into Loop Amplitudes,” [*Nucl. Phys.* **B435** (1995) 59–101](http://dx.doi.org/10.1016/0550-3213(94)00488-Z), [arXiv:hep-ph/9409265](http://arxiv.org/abs/hep-ph/9409265). Z. Bern and G. Chalmers, “Factorization in One-Loop Gauge Theory,” [*Nucl. Phys.* **B447** (1995) 465–518](http://dx.doi.org/10.1016/0550-3213(95)00226-I), [arXiv:hep-ph/9503236](http://arxiv.org/abs/hep-ph/9503236). R. Britto, F. Cachazo, and B. Feng, “Computing One-Loop Amplitudes from the Holomorphic Anomaly of Unitarity Cuts,” [*Phys. Rev.* **D71** (2005) 025012](http://dx.doi.org/10.1103/PhysRevD.71.025012), [arXiv:hep-th/0410179](http://arxiv.org/abs/hep-th/0410179). R. Britto, F. Cachazo, and B. Feng, “Generalized Unitarity and One-Loop Amplitudes in N = 4 Super-Yang-Mills,” [*Nucl. Phys.* **B725** (2005) 275–305](http://dx.doi.org/10.1016/j.nuclphysb.2005.07.014), [arXiv:hep-th/0412103](http://arxiv.org/abs/hep-th/0412103). J. M. Drummond, J. Henn, G. P. Korchemsky, and E. Sokatchev, “On Planar Gluon Amplitudes/Wilson Loops Duality,” [*Nucl. Phys.* **B795** (2008) 52–68](http://dx.doi.org/10.1016/j.nuclphysb.2007.11.007), [arXiv:0709.2368 [hep-th]](http://arxiv.org/abs/0709.2368). F. Cachazo, “Sharpening The Leading Singularity,” [arXiv:0803.1988 [hep-th]](http://arxiv.org/abs/0803.1988). S. Caron-Huot, “Loops and Trees,” [*JHEP* **1105** (2011) 080](http://dx.doi.org/10.1007/JHEP05(2011)080), [arXiv:1007.3224 [hep-ph]](http://arxiv.org/abs/1007.3224). J. M. Drummond and J. M. Henn, “Simple Loop Integrals and Amplitudes in N = 4 SYM,” [*JHEP* **1105** (2011) 105](http://dx.doi.org/10.1007/JHEP05(2011)105), [arXiv:1008.2965 [hep-th]](http://arxiv.org/abs/1008.2965). N. Arkani-Hamed, J. L. Bourjaily, F. Cachazo, A. B. Goncharov, A. Postnikov, *et al.*, “Scattering Amplitudes and the Positive Grassmannian,” [arXiv:1212.5605 [hep-th]](http://arxiv.org/abs/1212.5605). J. Drummond, J. Henn, G. Korchemsky, and E. Sokatchev, “Dual Superconformal Symmetry of Scattering Amplitudes in N = 4 super Yang-Mills Theory,” [*Nucl. Phys.* **B828** (2010) 317–374](http://dx.doi.org/10.1016/j.nuclphysb.2009.11.022), [arXiv:0807.1095 [hep-th]](http://arxiv.org/abs/0807.1095). J. Drummond, J. Henn, G. Korchemsky, and E. Sokatchev, “Generalized Unitarity for N = 4 Super-Amplitudes,” [*Nucl. Phys.* **B869** (2013) 452–492](http://dx.doi.org/10.1016/j.nuclphysb.2012.12.009), [arXiv:0808.0491 [hep-th]](http://arxiv.org/abs/0808.0491). H. Elvang, D. Z. Freedman, and M. Kiermaier, “Dual Conformal Symmetry of 1-Loop NMHV Amplitudes in N = 4 SYM Theory,” [*JHEP* **1003** (2010) 075](http://dx.doi.org/10.1007/JHEP03(2010)075), [arXiv:0905.4379 [hep-th]](http://arxiv.org/abs/0905.4379). N. Arkani-Hamed, J. L. Bourjaily, F. Cachazo, S. Caron-Huot, and J. Trnka, “The All-Loop Integrand For Scattering Amplitudes in Planar N=4 SYM,” [*JHEP* **1101** (2011) 041](http://dx.doi.org/10.1007/JHEP01(2011)041), [arXiv:1008.2958 [hep-th]](http://arxiv.org/abs/1008.2958). N. Arkani-Hamed, J. L. Bourjaily, F. Cachazo, and J. Trnka, “Local Integrals for Planar Scattering Amplitudes,” [*JHEP* **1206** (2012) 125](http://dx.doi.org/10.1007/JHEP06(2012)125), [arXiv:1012.6032 [hep-th]](http://arxiv.org/abs/1012.6032). Z. Bern, L. J. Dixon, and D. A. Kosower, “Dimensionally Regulated One Loop Integrals,” [*Phys. Lett.* **B302** (1993) 299–308](http://dx.doi.org/10.1016/0370-2693(93)90400-C), [arXiv:hep-ph/9212308](http://arxiv.org/abs/hep-ph/9212308). Z. Bern, L. J. Dixon, and D. A. Kosower, “Dimensionally Regulated Pentagon Integrals,” [*Nucl. Phys.* **B412** (1994) 751–816](http://dx.doi.org/10.1016/0550-3213(94)90398-0), [arXiv:hep-ph/9306240](http://arxiv.org/abs/hep-ph/9306240). J. Drummond, J. Henn, V. Smirnov, and E. Sokatchev, “Magic Identities for Conformal Four-Point Integrals,” [*JHEP* **0701** (2007) 064](http://dx.doi.org/10.1088/1126-6708/2007/01/064), [arXiv:hep-th/0607160](http://arxiv.org/abs/hep-th/0607160). L. F. Alday, J. M. Henn, J. Plefka, and T. Schuster, “Scattering into the Fifth Dimension of N = 4 super Yang-Mills,” [*JHEP* **1001** (2010) 077](http://dx.doi.org/10.1007/JHEP01(2010)077), [arXiv:0908.0684 [hep-th]](http://arxiv.org/abs/0908.0684). A. Hodges, “The Box Integrals in Momentum-Twistor Geometry,” [arXiv:1004.3323 [hep-th]](http://arxiv.org/abs/1004.3323). L. Mason and D. Skinner, “Amplitudes at Weak Coupling as Polytopes in *A**d**S*5,” [*J. Phys.* **A44** (2011) 135401](http://dx.doi.org/10.1088/1751-8113/44/13/135401), [arXiv:1004.3498 [hep-th]](http://arxiv.org/abs/1004.3498). J. L. Bourjaily, “Positroids, Plabic Graphs, and Scattering Amplitudes in Mathematica,” [arXiv:1212.6974 [hep-th]](http://arxiv.org/abs/1212.6974). A. Hodges, “Eliminating Spurious Poles from Gauge-Theoretic Amplitudes,” [arXiv:0905.1473 [hep-th]](http://arxiv.org/abs/0905.1473). R. Penrose, “Twistor Algebra,” [*J. Math. Phys.* **8** (1967) 345](http://dx.doi.org/10.1063/1.1705200). L. Mason and D. Skinner, “Dual Superconformal Invariance, Momentum Twistors and Grassmannians,” [*JHEP* **0911** (2009) 045](http://dx.doi.org/10.1088/1126-6708/2009/11/045), [arXiv:0909.0250 [hep-th]](http://arxiv.org/abs/0909.0250). J. L. Bourjaily, A. DiRe, A. Shaikh, M. Spradlin, and A. Volovich, “The Soft-Collinear Bootstrap: N=4 Yang-Mills Amplitudes at Six and Seven Loops,” [*JHEP* **1203** (2012) 032](http://dx.doi.org/10.1007/JHEP03(2012)032), [arXiv:1112.6432 [hep-th]](http://arxiv.org/abs/1112.6432). B. Eden, P. Heslop, G. P. Korchemsky, and E. Sokatchev, “Hidden Symmetry of Four-Point Correlation Functions and Amplitudes in N = 4 SYM,” [*Nucl. Phys.* **B862** (2012) 193–231](http://dx.doi.org/10.1016/j.nuclphysb.2012.04.007), [arXiv:1108.3557 [hep-th]](http://arxiv.org/abs/1108.3557). J. Golden and M. Spradlin, “Collinear and Soft Limits of Multi-Loop Integrands in N = 4 Yang-Mills,” [*JHEP* **1205** (2012) 027](http://dx.doi.org/10.1007/JHEP05(2012)027), [arXiv:1203.1915 [hep-th]](http://arxiv.org/abs/1203.1915). W. L. van Neerven and J. A. M. Vermaseren, “Large Loop Integrals,” [*Phys. Lett.* **B137** (1984) 241](http://dx.doi.org/10.1016/0370-2693(84)90237-5). A. I. Davydychev, “General Results for Massive *N*-Point Feynman Diagrams with Different Masses,” [*J. Math. Phys.* **33** (1992) 358–369](http://dx.doi.org/10.1063/1.529914). Z. Bern, J. Rozowsky, and B. Yan, “Two-Loop Four-Gluon Amplitudes in N = 4 SuperYang-Mills,” [*Phys. Lett.* **B401** (1997) 273–282](http://dx.doi.org/10.1016/S0370-2693(97)00413-9), [arXiv:hep-ph/9702424](http://arxiv.org/abs/hep-ph/9702424). R. Britto, F. Cachazo, and B. Feng, “New Recursion Relations for Tree Amplitudes of Gluons,” [*Nucl. Phys.* **B715** (2005) 499–522](http://dx.doi.org/10.1016/j.nuclphysb.2005.02.030), [arXiv:hep-th/0412308](http://arxiv.org/abs/hep-th/0412308). R. Roiban, M. Spradlin, and A. Volovich, “Dissolving N = 4 Loop Amplitudes into QCD Tree Amplitudes,” [*Phys. Rev. Lett.* **94** (2005) 102002](http://dx.doi.org/10.1103/PhysRevLett.94.102002), [arXiv:hep-th/0412265](http://arxiv.org/abs/hep-th/0412265). Z. Bern, L. J. Dixon, and D. A. Kosower, “On-Shell Methods in Perturbative QCD,” [*Annals Phys.* **322** (2007) 1587–1634](http://dx.doi.org/10.1016/j.aop.2007.04.014), [arXiv:0704.2798 [hep-ph]](http://arxiv.org/abs/0704.2798). W. Giele and E. N. Glover, “Higher Order Corrections to Jet Cross-Sections in *e*+ *e*− Annihilation,” [*Phys. Rev.* **D46** (1992) 1980–2010](http://dx.doi.org/10.1103/PhysRevD.46.1980). Z. Kunszt, A. Signer, and Z. Trocsanyi, “Singular Terms of Helicity Amplitudes at One-Loop in QCD and the Soft Limit of the Cross-Sections of Multiparton Processes,” [*Nucl. Phys.* **B420** (1994) 550–564](http://dx.doi.org/10.1016/0550-3213(94)90077-9), [arXiv:hep-ph/9401294](http://arxiv.org/abs/hep-ph/9401294). Z. Bern, V. Del Duca, L. J. Dixon, and D. A. Kosower, “All Non-Maximally- Helicity-Violating One-Loop Seven-Gluon Amplitudes in N = 4 Super-Yang-Mills Theory,” [*Phys. Rev.* **D71** (2005) 045006](http://dx.doi.org/10.1103/PhysRevD.71.045006), [arXiv:hep-th/0410224](http://arxiv.org/abs/hep-th/0410224). N. Arkani-Hamed, F. Cachazo, and J. Kaplan, “What is the Simplest Quantum Field Theory?,” [*JHEP* **1009** (2010) 016](http://dx.doi.org/10.1007/JHEP09(2010)016), [arXiv:0808.1446 [hep-th]](http://arxiv.org/abs/0808.1446). E. I. Buchbinder and F. Cachazo, “Two-Loop Amplitudes of Gluons and Octa-Cuts in N = 4 super Yang-Mills,” [*JHEP* **0511** (2005) 036](http://dx.doi.org/10.1088/1126-6708/2005/11/036), [arXiv:hep-th/0506126](http://arxiv.org/abs/hep-th/0506126). N. Bjerrum-Bohr and P. Vanhove, “Absence of Triangles in Maximal Supergravity Amplitudes,” [*JHEP* **0810** (2008) 006](http://dx.doi.org/10.1088/1126-6708/2008/10/006), [arXiv:0805.3682 [hep-th]](http://arxiv.org/abs/0805.3682). N. Arkani-Hamed, F. Cachazo, and C. Cheung, “The Grassmannian Origin Of Dual Superconformal Invariance,” [*JHEP* **1003** (2010) 036](http://dx.doi.org/10.1007/JHEP03(2010)036), [arXiv:0909.0483 [hep-th]](http://arxiv.org/abs/0909.0483).         --- 1. In a general quantum field theory, the integrand may also have contributions where only co-dimension three (triangles) or co-dimension two (bubbles) residues exist; it is a non-trivial fact that N  =  4 does not need contributions from triangles or bubbles (closely related to its UV-finiteness).[↩](#fnref1) 2. For notational simplicity, we have left implicit a factor *π*2 in the definition of the measure of the integrand ([deffourmassbox])—the integrated expression, ([symmetricfourmassintegral]), is of course the standard one; we also omit factors of *i* associated with the Wick rotation: “*d*4ℓ” denotes the Euclidean integration measure.[↩](#fnref2) 3. We note that this expression for the four-mass function appears to differ from that given in which we suspect to be incomplete.[↩](#fnref3) 4. We should note that the coefficient of log(*ε*)2 in the amplitude is *twice* the coefficient of log(*μ*IR2)2 of the Higgs regulator described in ref. —equivalently, it is *twice* the coefficient of 1/(2(*D* − 4)2) appearing in dimensional-regularization. This can be understood from the way ([definitionofregulator]) cuts-out the soft-collinear divergent regions of integration.[↩](#fnref4) 5. The factor of *n* in ([scalarboxexpansionintegratedF2]) should be thought of as ‘2*n*/2’, arising from the fact that each two-mass hard box diverges like $\frac{1}{2}\log(\epsilon)^2$ (one-mass boxes counted twice by symmetry), and the quad-cut coefficients ${\color{cut1}f^1}$ and ${\color{cut2}f^2}$ *separately* generate the tree-amplitude for each of the *n* sets of adjacent legs.[↩](#fnref5) 6. We should mention that there can be residues of poles at infinity—the parity-even combinations of such are simply the so-called triangle coefficients; the inclusion of these terms would have no substantive effect on the results of this section.[↩](#fnref6) 7. Recall that the four-mass leading singularities require a prefactor of $\varphi\_{{\color{cut1}1},{\color{cut2}2}}$ (see equation ([fourmasscoefficient1])).[↩](#fnref7) 8. Occasionally, the “source” file downloaded from the arXiv is saved without any extension; this can be ameliorated by manually appending “.tar.gz” to the name of the downloaded file.[↩](#fnref8)
arxiv_0000291
Rossini, G. M. Andolina, D. Rosa, M. Carrega, M. Polini, Phys. Rev. Lett. 125, 236402 (2020). rosaarxiv07247 D. Rosa, D. Rossini, G. M. Andolina, M. Polini, M. Carrega, J. High Energ. Phys. 2020, 67 (2020). carregaarxiv03551 M. Carrega, J. Kim, D. Rosa, Entropy 2021, 23(5), 587 (2021). caravelliprr023095 F. Caravelli, G. Coulter-De Wit, L. P. García-Pintos, and A. Hamma, Phys. Rev. Research 2, 023095 (2020). barraprl210601 F. Barra, Phys. Rev. Lett. 122, 210601 (2019). hovhannisyannjp052001 K. V. Hovhannisyan and A. Imparato, New J. Phys. **21**, 052001 (2019). liujpc18303 J. Liu, D. Segal, G. Hanna, J. Phys. Chem. C 123, 30, 18303 (2019). hovhannisyanprr033413 K. V. Hovhannisyan, F. Barra, A. Imparato, Phys. Rev. Research 2, 033413 (2020). quacharxiv10044 J. Q. Quach, W. J. Munro, Phys. Rev. Applied 14, 024092 (2020). carregaarxiv14034 M. Carrega, A. Crescente, D. Ferraro, M. Sassetti, New J. Phys. 22, 083085 (2020). zakavatiarxiv09814 S. Zakavati, F.T.Tabesh, S.Salimi, arXiv:2003.09814(2020). ghosharxiv12859 S. Ghosh, T. Chanda, S. Mal, A. Sen De, Phys. Rev. A 104, 032207 (2021). baiarxiv06982 S. Bai, J. An, Phys. Rev. A 102, 060201(R) (2020). crescentenjp063057 A. Crescente, M. Carrega, M. Sassetti, D. Ferraro, New J. Phys. 22 063057 (2020). purvesarxiv09065 T. Purves, T, Short, Phys. Rev. E 104, 014111 (2021). mitchisonquantum500 Mark T. Mitchison, John Goold, Javier Prior, Quantum 5, 500 (2021). quacharxiv06026 J. Q. Quach, K. E. McGhee, L. Ganzer, D. M. Rouse, B. W. Lovett, E. M. Gauger, J. Keeling, G. Cerullo, D. G. Lidzey, T. Virgili, arXiv:2012.06026 (2020). monselprl130601 J. Monsel, M. Fellous-Asiani, B. Huard, and A. Auff’eves, Phys. Rev. Lett. 124, 130601 (2020). franciaprl180603 G. Francica, F. C. Binder, G. Guarnieri, M. T. Mitchison, J. Goold, and F. Plastina, Phys. Rev. Lett. 125, 180603 (2020). allahverdyanpa542 A. E. Allahverdyan and Th. M. Nieuwenhuizen, Physica A 305, 542 (2002). francicanpj12 G. Francica, J. Goold, F. Plastina, and M. Paternostro, npj Quantum Inf. 3, 12 (2017). deffnerjpa335302 S. Deffner and E. Lutz, J. Phys. A Math. Theor. 46, 335302 (2013). deffnerjpa453001 S. Deffner and S. Campbell, J. Phys. A Math. Theor. 50, 453001 (2017). bengtssoncambridgebook I. Bengtsson and K. Zyczkowski, *Geometry of Quantum States: An Introduction to Quantum Entanglement, 2nd ed.* (Cambridge University Press, Cambridge, 2017). nielsencambridgebook M. L. Nielsen and I. L. Chuang, *Quantum Computation and Quantum Information* (Cambridge University Press, Cambridge, 2000). mandelstamjp249 L. Mandelstam, I. Tamm, J. Phys. 9, 249 (1945). levitinprl160502 L. B. Levitin, Y. Toffoli, Phys. Rev. Lett. 103, 160502 (2009). margoluspd188 N. Margolus, L. B. Levitin, Physica D 120, 188 (1998). gurvitspra062311 L. Gurvits and H. Barnum, Phys. Rev. A 66, 062311 (2002). gurvitspra032322 L. Gurvits and H. Barnum, Phys. Rev. A 72, 032322 (2005). chenpra052302 Z. Chen, Phys. Rev. A 71, 052302 (2005). guhnenjp229 O. Guhne, G. Toth, and H. J. Briegel, New J. Phys. 7, 229 (2005). tothpra022322 G. Toth, Phys. Rev. A 85, 022322 (2012). hylluspra022321 P. Hyllus, W. Laskowski, R. Krischek, C. Schwemmer, W. Wieczorek, H. Weinfurter, L. Pezzé, and A. Smerzi, Phys. Rev. A 85, 022321 (2012). kimarxiv02491 J. Kim, D. Rosa, D. Safranek, arXiv:2108.02491(2021). lipkinnpa188 H. J. Lipkin, N. Meshkov, and A. J. Glick, Nucl. Phys. A 62, 188 (1965). dickepr99 R. H. Dicke, Phys. Rev. 93, 99 (1954). andersonpr1492 P. W. Anderson, Phys. Rev. 109, 1492 (1958). nandkishorearcmp15 R. Nandkishore and D. A. Huse, Annu. Rev. Condens. Matter Phys. 6, 15 (2015). aletcrp498 F. Alet and N. Laflorencie, C. R. Phys. 19, 498 (2018). abaninrmp021001 D. A. Abanin, E. Altman, I. Bloch, and M. Serbyn, Rev. Mod. Phys. 91, 021001 (2019). vidalprl227902 G. Vidal, J. I. Latorre, E. Rico, and A. Kitaev, Entanglement in Quantum Critical Phenomena, Phys. Rev. Lett. 90, 227902 (2003). calabresejsmtep06002 P. Calabrese and J. Cardy, Entanglement entropy and quantum field theory, J. Stat. Mech.: Theory Exp. 2004, P06002. santospre032107 A. C. Santos, B. Cakmak, S. Campbell, and N. T. Zinner, Phys. Rev. E 100, 032107 (2019). santospre062114 A. C. Santos, A. Saguia, and M. S. Sarandy Phys. Rev. E 101, 062114 (2020). gherardiniprr013095 S. Gherardini, F. Campaioli, F. Caruso, and F. C. Binder, Phys. Rev. Research 2, 013095 (2020). caravelliquantum505 F. Caravelli, B. Yan, L. P. Garcia-Pintos, A. Hamma, Quantum 5, 505 (2021). kaminnjp083007 F. H. Kamin, F. T. Tabesh, S. Salimi, F. Kheirandish and A. C. Santos, New J. Phys. 22 (8), 083007 (2020). tavakolipra012315 A. Tavakoli, G. Haack, N. Brunner, and J. B. Brask, Phys. Rev. A 101, 012315 (2020). khandelwalnjp073039 S. Khandelwal, N. Palazzo, N. Brunner and G. Haack, New J. Phys. 22, 073039 (2020). josefssonprb081408 M. Josefsson and M. Leijnse, Phys. Rev. B 101, 081408(R) (2020). bresquearxiv03239 L. Bresque, P. A. Camati, S. Rogers, K. Murch, A. N. Jordan, A. Auffèves, Phys. Rev. Lett. 126, 120605 (2021). yipre022108 J. Yi, P. Talkner, and Y. W. Kim, Phys. Rev. E 96, 022108 (2017). dingpre042122 X. Ding, J. Yi, Y. W. Kim, and P. Talkner, Phys. Rev. E 98, 042122 (2018). Quantum thermal machines and batteries ====================================== The seminal work by Sadi Carnot in the early nineteenth century provided the blueprint of a reversible heat engine and the celebrated second law of thermodynamics eventually followed. Almost two centuries later, the quest to formulate a quantum theory of the thermodynamic laws has thus unsurprisingly motivated physicists to visualise what are known as ‘quantum thermal machines’ (QTMs). In this article, we review the prominent developments achieved in the theoretical construction as well as understanding of QTMs, beginning from the formulation of their earliest prototypes to recent models. We also present a detailed introduction and highlight recent progress in the rapidly developing field of ‘quantum batteries’. Introduction ============ Thermal machines refer to a broad class of devices whose operation is associated with some form of exchange and/or conversion of heat energy. They usually consist of two or more ‘heat reservoirs’ and a ‘working fluid’ (WF) which facilitates the intended process. Commonly known examples are the classical heat engines and refrigerators which form the backbone of almost all mechanical and industrial machines that utilize thermal energy, ranging from the household air conditioners and refrigerators to the fuel based vehicles and the propeller of a spaceship blasting off into space. Such devices, by construction, operate irreversibly in far from equilibrium settings. The dynamics of the quantities of interest, such as the heat transferred or the work extracted, are governed by the commonly known laws of thermodynamics. However, the tremendous technological advancement achieved in the past few decades has both necessitated as well as facilitated a rapid miniaturization of such thermal machines over the years. The frontiers of such miniaturization has been pushed down to astonishingly small scales, where quantum effects are prominent and can therefore no longer be ignored. This has resulted in the resurgence of a few age-old questions that have persisted throughout the last century − how do thermodynamics, usually associated with macroscopic phenomena, reconcile with quantum mechanics, which describes equations of motion at the microscopic level that are inherently time-reversal symmetric? It is worth noting that quantum mechanics and thermodynamics belong to a set of two immensely successful, albeit independent theoretical frameworks that have withstood the test of rigorous experimental verification. Yet, it remains an open question as to how the laws of thermodynamics manifest at very small scales, particularly, where quantum effects are expected to dominate the dynamics of a physical system kosloffent2100, vinjanampathyctp545, gemmerberlinbook, campaiolispringerbook, deffneriopbook, kosloffjcp204105, lindenprl130401, andersnjp010201, horodeckinatcom2059, jahnkeepl50008, tuncerarxiv04387. A priori, one may argue that the well-known laws of thermodynamics (with an exception to the first law) are defined for macroscopic systems described by statistical averages, and hence the question of their validity for microscopic systems consisting of a few particles or qubits may itself appear meaningless. However, in 1959, Scovil et. al. scovilprl262 demonstrated that the working of a quantum three-level maser coupled to two thermal reservoirs resembles that of a heat engine; with an efficiency upper bounded by the Carnot limit carnot. This work provided the first hint that the laws of thermodynamics, particularly the second law, may have a more fundamental and even quantum origin. In other words, it might be possible to naturally arrive at the thermodynamic laws starting from a microscopic quantum framework. However, the only known model of a ‘quantum heat engine’ at the time, i.e. the three-level maser, relied on a quasi-static description based on the equilibrium population of the energy-levels and hence could not provide any further insight into the dynamical processes involved. It was not until the 1990’s when researchers, motivated by the developing field of open quantum systems, began to look for new toy models of ‘quantum thermal machines’ (QTMs). The aim was to design simplistic models that have the same functional behavior as classical thermal machines, i.e. conversion of heat energy into useful work and vice-versa, yet at the same time that could be analyzed within the dynamical framework of open quantum systems. The advent of the Lindbladian framework made plausible the construction of physically meaningful models of QTMs that could operate at far from equilibrium settings. The challenge was however to demarcate the dynamical energy exchanges into parts that could be associated with quantum analogues of ‘heat’ and ‘work’ as well as to formulate a second law in terms of the entropic changes involved in a cycle of operation. The formulation of minimal working models of reciprocating or stroke engines soon followed; these works paved the way for an explosion of research that extensively analyzed the performance of such models in diverse scenarios, from exploring the role of entanglement and coherences to the consequences of using non-thermal baths and many more. However, to this day, the debate regarding a unique definition of quantities such as quantum work and heat as well as a universal formulation of the second law is far from being fully settled. Over the years, a plethora of such simple models of QTMs have been proposed to probe the thermodynamic laws at the quantum domain, a few of which have also been realized experimentally. Extensive analysis of these quantum models have strongly pointed to the presence of an upper bound to the efficiency and performance of such quantum heat engines and refrigerators. The existence of the Carnot bound, which is a manifestation of the celebrated second law of thermodynamics, at such small scales re-established a strong case for the validity of thermodynamics principles down to microscopic scales, thereby necessitating further scrutiny of the emergence of the thermodynamic laws at the fundamental level. The exhaustive analysis of QTMs, nevertheless, has led to an unprecedented understanding of how simple few-level quantum systems exchange energy as well as information with other such systems or with an external environment. Such understanding has in turn opened up the possibility of engineering microscopic devices in a way that may revolutionize nano-scale engineering. As for example, the application of ‘quantum probes’, which are essentially simple quantum systems such as a qubit or a harmonic oscillator, to quantum metrology have only been recently realized. giovannettinp222, degenrmp035002, kurizkitech1, pezzermp035005 Particularly, they have been shown to be suitable candidates for high precision measurements in thermometry hoferprl090603, mukherjeecp162, mondalarxiv08509 (i.e., temperature measurements of nanoscale devices) as well as magnetometry bhattacharjeenjp013024 (magnetic field measurements). As much as the focus has been on thermal energy conversion in the quantum regime, the subtleties of quantum phenomena affecting the process of energy storage and its subsequent extraction had not received much attention until recently. Alicki and Fannes, in their pioneering work, showed that a *quantum battery* composed of many identical copies of a single quantum system can, in principle, facilitate a higher energy extraction per cell through cyclic unitary process when compared to a single cell. In addition, they concluded that the maximal work extraction is possible only if the battery is driven through intermediate entangled states while *discharging* the battery. However, it was later proved that although  entangling’ or non-local operations are required for maximal work extraction, it is not necessary to generate entanglement, per se, in the battery during the discharging process. Further, the use of non-local operations was also shown to result in a faster scaling of the speed of discharging or charging (depositing energy) the battery with the battery size, as compared to local driving protocols. The above results have also been verified in a number of models. In spite of rapid developments, a robust mechanism to identify and utilize quantum affects for optimizing the usage of quantum batteries has not been established yet. In this article, we present a brief review of the basic design of some of the broad class of QTMs that are widely studied in literature. As already mentioned, the concepts of quantum heat and quantum work as such are not yet uniquely defined and several definitions of these can be found in literature (see Ref. [] for a review). However, in this review article, we limit ourselves to a handful of these definitions relevant for understanding the working of the thermal machines discussed. Two parameters of paramount importance which characterize the performance of QTMs and which we will repeatedly encounter in the course of discussion, are the efficiency (*η*) and the coefficient of performance (COP). The efficiency is defined in as the ratio of work output to the heat supplied from the hot bath, when the QTM operates as an engine. Similarly, the COP is defined as the ratio of heat extracted from the cold bath to the work performed on the working fluid, when the QTM acts as a refrigerator. In this regard, it is useful to recall the second law which states that the maximum efficiency *η**c* = 1 − *T**c*/*T**h* and COP*c* = *T**c*/(*T**h* − *T**c*) is attained in a Carnot cycle which is a reversible cycle operating between baths with temperatures *T**h* and *T**c*. We also review a couple of applications of QTMs in the field of quantum metrology, particularly in thermometry and magnetometry. Finally, we also outline recent developments in theoretical modeling of quantum batteries, with majority of the discussion focused on cyclic unitary protocols. We would like to mention here that this review article is in no way exhaustive. In particular, given the vast amount of literature available as far as QTMs are concerned, this review aims for a brisk introduction to the basics of QTMs outlining the essential underlying principles. On the other hand, given that the theoretical concept of quantum batteries is relatively new and still in its nascent stages, we have thus taken care to provide a more in-depth discussions on its fundamentals as well as recent developments. Finally, we have tried to maintain a consistent notational convention throughout the article as best as possible. However, in some sections, we had to resort to use particular notational conventions used in previous works in order to maintain consistency with the relevant figures that we have reused from those works. Hence, the reader may come across a few instances where different symbols may have been used to represent the same quantity in different sections, although we have taken care to clearly spell out all such redefinitions. Preliminary models ================== In this section, we outline the working of two very simple yet insightful models of QTMs, which operate quasi-statically and are capable of working as quantum heat engines or quantum refrigerators. The first model we introduce is the three-level maser scovilprl262 which, as already mentioned, is the earliest prototype of a quantum heat engine. The other model we discuss, is the realisation of a four-stroke Carnot engine where the working substance is comprised of the text-book system of a single particle in a one-dimensional box potential as working fluid. This model is unique in the way that it first identifies the analogue of classical ‘force’ and uses the same to calculate the quantum work performed. Although numerous other models were also proposed in the early days of QTMs, we however, begin by discussing the two models mentioned above. We highlight these particular models to make the reader appreciate the fact that thermodynamic signatures, as these toy models demonstrate, can manifest in the working of QTMs even when one does not explicitly resort to open system dynamics to analyze them. Three level maser ----------------- In 1959, Scovil *et.al.* scovilprl262 realized that the steady-state operation of a three-level maser (see Fig. [fig3lvl]) connected to two thermal baths through appropriately chosen frequency filters resemble that of a heat engine or refrigerator. The ‘quantum’ working fluid is a three-level system with energy levels *E*1, *E*2 and *E*3 (*E*1 > *E*2 > *E*3) and populations *n*1, *n*2 and *n*3, respectively. Herein and throughout the rest of the review, we shall set the Boltzman and Planck’s constants to unity, *k**B* = ℏ = 1, unless explicitly mentioned otherwise. A *hot* bath with temperature *T**h* is coupled to the system through a frequency filter so that it can induce excitations only between *E*1 and *E*3 with energy *ω**h* ∼ ∣*E*1 − *E*3∣. Similarly, a cold bath with temperature *T**c* is allowed to induce transitions between *E*2 and *E*3 with energy *ω**c* = ∣*E*1 − *E*2∣ in the system. Further, the system is also coupled resonantly with a radiation field with frequency *ω**p* = ∣*E*2 − *E*3∣. After the system attains (dynamical) equilibrium, the populations of the energy-levels satisfy, $$\frac{n\_1}{n\_3}=e^{-\frac{\omega\_h}{T\_h}},$$ $$\frac{n\_1}{n\_2}=e^{-\frac{\omega\_c}{T\_c}}.$$ Within this equilibrium regime, for each quanta of excitation *ω**h* induced by the hot bath, the system loses energy *ω**c* to the cold bath and *ω**p* to the radiation field, so that the populations are held steady. The energy exchanged with the baths can be thought of as ‘heat’ transferred while the energy supplied to the radiation field is identified as the *work* extracted from the system with the radiation field playing the role of the classical ‘piston’. Note that the preceding characterization of heat and work trivially satisfies the first law, *ω**h* = *ω**c* + *ω**p*. The most remarkable result however appears when the efficiency of the system is considered. An engine-like operation is possible when a ‘population inversion’ is achieved between the levels *E*2 and *E*3, i.e. *n*2 ≥ *n*3, which leads to the condition, $$\frac{n\_2}{n\_3}=\frac{n\_2}{n\_1}\frac{n\_1}{n\_3}=\exp\left(\frac{\omega\_c}{T\_c}-\frac{\omega\_h}{T\_h}\right)\geq 1$$ or, $$\frac{\omega\_c}{\omega\_h}\geq\frac{T\_c}{T\_h}.$$ The efficiency, i..e., the ratio of the work extracted to the energy supplied by the hot reservoir, is obtained as, $$\eta=\frac{\omega\_p}{\omega\_h}=1-\frac{\omega\_c}{\omega\_h}\leq 1-\frac{T\_c}{T\_h},$$ where we made use of the first law (see Eq. ) for obtaining the second equality. Note that the dynamical equilibrium assumed at all instants of time means that the maser operation is reversible. In other words, the engine-like operation discussed above can be reversed to obtain refrigerator-like operation, in which a quanta of excitation *ω**p* induced by the radiation field leads to the extraction of an energy quanta *ω**c* from the cold bath and the simultaneous loss of an energy quanta *ω**h* to the hotter bath. The coefficient of performance is easily calculated as, $$\mathrm{COP}=\frac{\omega\_c}{\omega\_p}=\frac{\omega\_c}{\omega\_h-\omega\_c}\leq\frac{T\_c}{T\_h-T\_c}.$$ The efficiency as well as the coefficient of performance of the three-level maser when working as a heat engine and refrigerator, respectively, therefore appears to be identical to those known for the classical Carnot cycle. This observation hence provided the astonishing result that the second law can hold true even for few-level quantum systems. Particle in a one-dimensional box --------------------------------- Another simple model of a quantum Carnot engine was provided by Bender *et.al.* benderjpa4427, benderprsla1519 which, much like its classical counter-part, is operated in a cycle comprising of discrete strokes. The working fluid is made up of a particle confined in a one-dimensional hard-walled box, in which one of the two walls is movable. The length of the box *L* is therefore an continuous variable which can be controlled externally to perform work on the system. To elucidate, let us consider the particle wave function $\ket{\psi}=\sum\_na\_n\ket{\phi\_n}$, where $\ket{\phi\_n}$ are the energy eigen states of the system. The energy expectation value of the system is ⟨*E*⟩ = ∑*n*∣*a**n*∣2*E**n*(*L*),  with *E**n*(*L*) = *n*2*π*2/2*m**L*2, where *m* is the mass of the particle. Invoking upon the notion of classical mechanics, the instantaneous ‘force’ which performs the work is defined as $$\begin{aligned} F(L)&=-d{\langle}E{\rangle}/dL=-\sum\_n|a\_n|^2\frac{dE\_n(L)}{dL}\nonumber\\ &=\sum\_n|a\_n|^2\frac{n^2\pi^2}{mL^3}.\end{aligned}$$ It is important to note that in defining the force we have used ⟨*E*⟩. In general, the derivative with respect to the energy expectation value in the above expression should also contain terms such as *E*(*L*)*d*∣*a**n*∣2/*d**L*. However, we recall that any change in the populations ∣*a**n*∣2 is necessarily associated with changes in the Von-Neumann entropy of the system. For quasi-static processes, which are the only processess involved in the working of the quantum Carnot cycle, an entropy change can result only from heat transfer with external baths or environment and not from any external force acting on the system. Hence, it is justified to neglect derivatives with respect to ∣*a**n*∣2 while defining the force. Having defined the force, the quantum analogues of adiabats and isotherms are identified as follows. The *quantum adiabats* correspond to the processes in which the system is kept isolated from the environment and the length *L* of the box is slowly changed. In the ideal situation, quantum mechanical adiabatic theorem then implies that the populations *a**n* are held constant. On the other hand, the *quantum isotherms* are defined as operations in which the coefficients *a**n* change along with *E**n* on tuning *L*, but in such a way that the energy expectation value defined in Eq.  remain constant. Strictly speaking, ‘isotherms’ demand a constant temperature which is itself not well-defined for an isolated single particle quantum system. Nevertheless, the quantum isotherms are defined here in analogy with classical isothermal processes of ideal systems (e.g. ideal gas) in which the temperature is equivalent to the internal energy and remains constant during a classical isothermal process. For purpose of simplicity, consider the situation in which only the two lowest lying energy eigen states contribute to the wave function, so that *a*1 + *a*2 = 1. Let us assume that the system is initialized in the ground state, with the box length *L* = *L**A*, so that *a*1(*L**A*) = 1. The energy expectation ⟨*E**A*⟩ is, $$\label{eq\_carnot\_en1} {\langle}E\_A{\rangle}=E\_1(L\_A)=\frac{\pi^2}{2mL\_A^2},$$ where the subscript *A* marks the initial point of the cycle. The start of the subsequent strokes will likewise be labeled as *B*, *C* and *D*. The quantum Carnot cycle is then constructed as follows (see Fig. [figcarnot]): 1. *Adiabatic compression (*A* → *B*)* – Length of the box is quasi-statically compressed to *L* = *L**B* with *L**B* < *L**A* so that the system remains in the instantaneous ground state. The force acting on the system during this stroke (*A* → *B*) is given by $$F\_{AB}(L)=\frac{\pi^2}{mL^3},$$ which performs work and increases the energy of the system to $${\langle}E\_B{\rangle}=\frac{\pi^2}{2mL\_B^2},$$ where ⟨*E**B*⟩ is the energy expectation value at the end of this stroke. The populations do not change and hence we have *a*1(*L**B*) = *a*1(*L**A*) = 1. 2. *Isothermal expansion (*B* → *C*)* – Length of the box is expanded to *L* = *L**C* = 2*L**B* such that the energy expectation value remains constant throughout, i.e. $$\label{eq\_iso2} |a\_1(L)|^2\frac{\pi^2}{2mL^2}+(1-|a\_1(L)|^2)\frac{2\pi^2}{mL^2}=\frac{\pi^2}{2mL\_B^2}.$$ Note that the populations depend on *L* during the process *B* → *C* due to the constraint on the energy expectation. At the end of the expansion, the system therefore reaches the first excited state with *a*1(*L**C* = 2*L**B*) = 0. Hence, we have, $${\langle}E\_C{\rangle}={\langle}E\_B{\rangle}=\frac{\pi^2}{2mL\_B^2}$$ The force acting on the system during this step is, $$F\_{BC}(L)=|a\_1(L)|^2\frac{\pi^2}{mL^3}+(1-|a\_1(L)|^2)\frac{4\pi^2}{mL^3},$$ where *a*1(*L*) is constrained by Eq. . 3. *Adiabatic expansion (*C* → *D*)* – As in step 1, the length is quasi-statically changed to *L* = *L**D* = 2*L**A* with *L**D* > *L**C* such that the system remains in the first excited state, i.e. *a*1(*L**D*) = *a*1(*L**C*) = 0. The energy expectation at the end of this stroke is therefore, $${\langle}E\_D{\rangle}=\frac{4\pi^2}{2mL\_D^2}=\frac{\pi^2}{2mL\_A^2},$$ and the instantaneous force acting on the system during the expansion is $$F\_{CD}(L)=\frac{4\pi^2}{mL^3}$$ 4. *Isothermal compression (*D* → *A*)* – Finally, the length is tuned back to *L* = *L**A* in a way that the energy expectation value remains constant, $$|a'\_1(L)|^2\frac{\pi^2}{2mL^2}+(1-|a'\_1(L)|^2)\frac{2\pi^2}{mL^2}=\frac{\pi^2}{2mL\_A^2},$$ where the population functions are primed to distinguish them from the functions in the isothermal stroke *B* → *C*. This also ensures that the system returns to the instantaneous ground state, *a*1ʹ(*L**A*) = *a*1(*L**A*) = 1, hence closing the cycle. The force acting on the system during this final stroke is given by, $$F\_{DA}(L)=|a'\_1(L)|^2\frac{\pi^2}{mL^3}+(1-|a'\_1(L)|^2)\frac{4\pi^2}{mL^3}.$$ The total work performed during the cycle is now calculated as $$\begin{gathered} W=\int\_{L\_A}^{L\_B}F\_{AB}(L)dL+\int\_{L\_B}^{L\_C=2L\_B}F\_{BC}(L)dL{\nonumber}\\+\int\_{L\_C}^{L\_D=2L\_A}F\_{CD}(L)dL+\int\_{L\_D}^{L\_A}F\_{DA}(L)dL{\nonumber}\\ =\frac{\pi^2}{m}\left(\frac{1}{L\_B^2}-\frac{1}{L\_A^2}\right)\log 2\end{gathered}$$ Heat is absorbed by the system only during the isothermal expansion and equals the work done during the expansion, i.e., $$Q\_H=\int\_{L\_B}^{2L\_B}F\_{BC}(L)dL=\frac{\pi^2}{mL\_B^2}\log 2.$$ The efficiency of this quantum engine cycle is therefore $$\eta=\frac{W}{Q\_H}=1-\frac{L\_B^2}{L\_A^2}=1-\frac{{\langle}E\_A{\rangle}}{{\langle}E\_B {\rangle}}$$ Note that, by construction, ⟨*E**A*⟩ and ⟨*E**B*⟩ play the role of constant temperatures during the isothermal compression and expansion steps, respectively. The efficiency therefore turns out to be identical to that of the classical Carnot cycle. Despite its elegance, the setup discussed above is however a highly idealistic one that neglects the effect of the ‘actual temperatures’ of the external environment, which shall inevitably effect the population of the energy levels. Nevertheless, the fact that a Carnot like bound emerged even within such idealistic setups was an intriguing result that invited further scrutiny of the second law. As already mentioned, the preliminary models discussed in this section are based on quasi-static processes which can be readily analyzed without any need for explicit equations of motion that govern the dynamics of the working fluid. On the down side, such simplistic description, besides being highly idealistic, do not provide any fundamental insights regarding the dynamical processes involved. In the following sections, we will take a look at models that operate at far from equilibrium settings and importantly whose operation bear the hallmarks of classical thermodynamic principles such as a maximum efficiency bounded by the Carnot efficiency. Open Quantum Systems ==================== An open quantum system refers to a quantum system *S* which obeys the quantum mechanical laws of motion and is coupled to an external environment *E*. The environment, in principle, can either be quantum in nature with a discreet energy spectrum, or classical in the continuum limit of vanishing energy gaps. Although, the dynamics of the composite system (i.e., the system *S* and the environment *E* taken together) can be described by the Schrodinger or the Von-Neumann equation, the exponentially large degrees of freedom of the environment renders it practically impossible to analytically solve these equations. One of the ways to resolve this issue is to formulate equations in terms of the reduced state of the system by tracing out the relevant degrees of freedom of the environment from the dynamics of the composite system. In this section, we discuss one such equation, namely the Gorini–Kossakowski–Sudarshan–Lindblad (GKSL) equation, which is relevant for the rest of this review. Dynamical maps -------------- To begin with, let us assume that the system and the environment are initially decoupled, *ρ*(0) = *ρ**s*(0) ⊗ *ρ**E*(0), where *ρ**s*(0), *ρ**E*(0) and *ρ*(0) represent the initial states of the system, environment and the composite system, respectively. The temporal evolution of the system is described by a dynamical map *V*(*t*, 0), which satisfies, *ρ**s*(*t*) = Tr*E*[*U*(*t*, 0)*ρ**s*(0) ⊗ *ρ**E*(0)*U*†(*t*, 0)] = *V*(*t*, 0)*ρ**s*(0),  where *ρ**s*(*t*) = Tr*E**ρ*(*t*) is the reduced state of the system obtained by tracing over the environmental degrees of freedom and *U*(*t*, 0) is the unitary evolution operator acting on the composite system. A few remarks about the dynamical map *V*(*t*, 0) are in order. Firstly, *V*(*t*, 0) is a self-mapping on the density matrix space which implies that it must be completely positive (maps only to positive eigen-values) and trace preserving (so that all density matrices have unit trace), as can be verified from Eq. . Importantly, one can also check that *V*(*t*, 0) only comprises of operators defined on the Hilbert space of *S*. Secondly, if the dynamics is Markovian (memory-less) in nature, the family of maps *V*(*t*, 0), ∀*t* > 0 constitutes what is known as a *quantum dynamical semi-group*, which satisfies *V*(*t*, 0) = *V*(*t*, *t*ʹ)*V*(*t*ʹ, 0). In the absence of explicit time dependence of the system Hamiltonian and environmental couplings, the semi-group property requires the dynamical map *V*(*t*, 0) to be of the form *V*(*t*, 0) = *e*L*t*, where L is referred to as the Linbladian super-operator. Substituting in Eq. , we therefore find that the *master equation* governing the evolution of the system is of the general form, $$\label{eq\_gen\_lindblad} \frac{d\rho\_s(t)}{dt}=\mathcal{L}\rho\_s(t).$$ Note that in the case of isolated system, the evolution of the system is governed by the Von-Neumann equation, $$\frac{d\rho\_s(t)}{dt}=-i\left[H\_s,\rho\_s(t)\right],$$ from which one can immediately identify the super-operator L*U* for unitary evolutions as, L*U* =  − *i*[*H**s*, *ρ**s**i**s**o*(*t*)]. In what follows, we will explicitly see these emergence of the features discussed above as we outline a short derivation of the GKSL master equation. We shall keep ourselves restricted to the cases where the Hamiltonian of the system is either independent or periodically modulated in time, as these are the ones relevant in the context of QTMs. Although periodically modulated Hamiltonians are strictly not time-independent, the invariance of the Floquet Hamiltonian at stroboscopic instants, as we shall elaborate below, permits a dynamical equation of the form in Eq. . The GKSL equation for static Hamiltonians ----------------------------------------- Consider a system-environment composite represented by a time-independent Hamiltonian, *H* = *H**s* + *H**E* + *H**I*,  where *H**s* and *H**E* correspond to the Hamiltonians of the system and the environment, respectively, while *H**I* encapsulates the coupling between them and is of the form, *H**I* = ∑*i**S**i* ⊗ *B**i*. The operators *S**i* and *B**i* are local Hermitian operators pertaining to the Hilbert spaces of the system and the environment, respectively. As an illustration, the interaction between a two-level system and a photonic cavity (bath) can be of the form, *H**I* = *σ**x* ⊗ (*a*† + *a*), where *σ**x* is a Pauli matrix and *a* (*a*†) is the photonic annihilation (creation) operator. We start with the von-Neumann equation governing the evolution of the composite system in the interaction picture, $$\label{eq\_lind\_1} \frac{d{\tilde{\rho}}(t)}{dt}=-i\left[{\tilde{H}}\_I(t),{\tilde{\rho}}(0)\right]\\-\int\_0^t\left[{\tilde{H}}\_I(t),\left[{\tilde{H}}\_I(t'),{\tilde{\rho}}(t')\right]\right]dt'.$$ In the above equation, for any observable *O*, *Õ* represents the same in the interaction picture. We now make an important approximation, namely the *Born* or the *weak-coupling* approximation, which assumes that (i) the system does not influence the environment so that *ρ̃**E*(*t*) = *ρ̃**E* and (ii) the composite system exists in a tensor-product state at all times, *ρ̃*(*t*) = *ρ̃**s*(*t*) ⊗ *ρ̃**E*. The Born approximation is valid for fast decaying environmental correlations and we shall return to it later. Under the above approximations, the equation of motion for the reduced state of the system can be obtained as, $$\label{eq\_lind\_2} \frac{d{\tilde{\rho}}\_s(t)}{dt}=-i{\operatorname{Tr}}\_E\left[{\tilde{H}}\_I(t),{\tilde{\rho}}\_s(0)\otimes{\tilde{\rho}}\_E\right]\\-\int\_0^t{\operatorname{Tr}}\_E\left[{\tilde{H}}\_I(t),\left[{\tilde{H}}\_I(t'),{\tilde{\rho}}\_s(t')\otimes{\tilde{\rho}}\_E\right]\right]dt'$$ Next, we decompose the system operators *S**i* into projections on the eigen-space of the Hamiltonian *H**s* as, $$S\_i=\sum\_{\varepsilon,\varepsilon'}\bra{\varepsilon}S\_i\ket{\varepsilon'}\ket{\varepsilon}\bra{\varepsilon'}=\sum\_{\omega=\varepsilon-\varepsilon'}S\_i(\omega)\ket{\varepsilon}\bra{\varepsilon'},$$ where $\ket{\varepsilon}$ are the energy eigen states of the system and *ω* = *ɛ* − *ɛ*ʹ are the possible excitation energies. The above decomposition leads to a particularly simple form of the Hamiltonian *H**I* in the interaction picture, $$\begin{aligned} \label{eq\_hi\_int} {\tilde{H}}\_I(t)&=\sum\_ie^{iH\_st}S\_ie^{-iH\_st}\otimes\sum\_ie^{iH\_Et}B\_ie^{-iH\_Et}{\nonumber}\\ &=\sum\_{i,\omega}e^{-i\omega t}S\_i(\omega)\otimes B\_i(t),\end{aligned}$$ where, *B**i*(*t*) = ∑*i**e**i**H**E**t**B**i**e*− *i**H**E**t* One can now derive the following relations, Tr*E*[*H̃**I*(*t*), *ρ̃**s*(0) ⊗ *ρ̃**E*] = ∑*i*, *ω*[*e*− *i**ω**t**S**i*(*ω*), *ρ̃**s*(0)]⟨*B**i*(*t*)⟩*E*,  Tr*E*[*H̃**I*(*t*), [*H̃**I*(*t*ʹ), *ρ̃**s*(*t*ʹ) ⊗ *ρ̃**E*]] = ∑*i*, *j*, *ω*, *ω*ʹ*e**i*(*ω*ʹ*t* − *ω**t*ʹ)(*S**j*†(*ω*ʹ)*S**i*(*ω*)*ρ̃**s*(*t*ʹ) − *S**i*(*ω*)*ρ̃**s*(*t*ʹ)*S**j*†(*ω*ʹ))⟨*B**j*†(*t*)*B**i*(*t*ʹ)⟩*E* + *h*.*c*.,  where ⟨ ⋅ ⟩*E* denotes averaging over the state *ρ̃**E*. In most cases, the average of the environmental operators ⟨*B**i*(*t*)⟩*E* vanish. However, even if they are finite, one can re-scale the system Hamiltonian appropriately to set ⟨*B**i*(*t*)⟩*E* = 0 and thus we shall ignore these averages in all the scenarios which we will consider in the rest of the paper. Substituting the above relations in Eq.  and shifting the time-coordinate *t*ʹ = *t* − *t*ʹ, we obtain, $$\label{eq\_lind\_3} \frac{d{\tilde{\rho}}\_s(t)}{dt}=\sum\_{i,j,\omega,\omega'}\int\_0^tdt'e^{i(\omega'-\omega)t} \Big(S\_i(\omega){\tilde{\rho}}\_s(t-t')S\_j^\dagger(\omega')\\-S\_j^\dagger(\omega')S\_i(\omega){\tilde{\rho}}\_s(t-t')\Big)e^{i\omega t'}{\langle}B\_j^\dagger(t)B\_i(t-t'){\rangle}\_E+h.c.$$ The next step is to invoke upon the Markovian approximation, which assumes that the two-time environment correlation functions decay rapidly within increasing time separation *t* − *t*ʹ, so much so that the system does not change appreciably within their decay time. In other words, if the environment correlations decay within a time *τ**B* and the system relaxes over a time-scale *τ**R*, then *τ**R* ≫ *τ**B*. Thus, if we consider a *coarse-grained* evolution of the system with a time-scale  ∼ *τ**R*, we can replace *ρ̃**s*(*t* − *t*ʹ) with *ρ*(*t*) in Eq.  and extend the integral to infinity, as all contributions from time *t*ʹ > *τ**B* can be neglected. We therefore arrive at, $$\label{eq\_lind\_4} \frac{d{\tilde{\rho}}\_s(t)}{dt}=\sum\_{i,j,\omega,\omega'}e^{i(\omega'-\omega)t} \Big(S\_i(\omega){\tilde{\rho}}\_s(t)S\_j^\dagger(\omega')\\-S\_j^\dagger(\omega')S\_i(\omega){\tilde{\rho}}\_s(t)\Big)\Gamma\_{i,j}(\omega)+h.c.$$ where $$\begin{aligned} \Gamma\_{i,j}(\omega)&=\int\_0^\infty dt' e^{i\omega t'}{\langle}B\_j^\dagger(t)B\_i(t-t'){\rangle}\_E{\nonumber}\\ &=\int\_0^\infty dt' e^{i\omega t'}{\langle}B\_j^\dagger(t')B\_i(0){\rangle}\_E.\end{aligned}$$ We have used the fact that *ρ̃**E* is stationary, [*ρ̃**E*, *H**E*] = 0 in deriving the second equality. Note that in Eq. , the evolution at a particular time *t* is only determined by the state of the system at time *t*; hence there are no ‘memory’ effects. Finally, we make the rotating wave or secular approximation which allows us to ignore all fast oscillating terms, i.e. we retain only terms with *ω*ʹ = *ω*. Note that this approximation, like to Markovian approximation, is also valid under a coarse-grained picture of the time evolution. The secular approximation allows to cast Eq.  in the Linbladian form (see Eq. ), [eqgksl] $$\label{eq\_static\_gksl} \frac{d{\tilde{\rho}}\_s(t)}{dt}=\tilde{\mathcal{L}}{\tilde{\rho}}\_s(t),$$ where, $$\label{eq\_lind\_static} \tilde{\mathcal{L}}{\tilde{\rho}}\_s(t)=-i\left[H\_{LS},{\tilde{\rho}}\_s(t)\right]\\+\sum\_{\omega,i,j}\gamma\_{i,j}(\omega)\left(S\_i(\omega){\tilde{\rho}}\_s(t)S\_j^\dagger(\omega)-\frac{1}{2}\{S\_j^\dagger(\omega)S\_i(\omega),{\tilde{\rho}}\_s(t)\}\right).$$ The renormalized or *Lamb shifted* Hamiltonian *H**L**S* commutes with *H**s* and is given by, *H**L**S* = ∑*ω*, *i*, *j**η**i*, *j*(*ω*)*S**j*†(*ω*)*S**i*(*ω*),  where *γ**i*, *j*(*ω*) = 2Re[Γ*i*, *j*(*ω*)] and *η**i*, *j*(*ω*) = Im[Γ*i*, *j*(*ω*)]. he equation derived in Eq.  is famously known as the GKSL master equation. Note that the first term in Eq.  captures the unitary part of the evolution while the second term encapsulates the dissipative part of the evolution. Let us now return to the Born approximation. The assumption *ρ**E*(*t*) = *ρ**E* is strictly not valid because there always exist a finite relaxation time which the environment requires to equilibriate, even if this relaxation time is very small in comparison with the relaxation time of the system. However, if one considers a ‘coarse-graining’ of the time-scale as we have done above in the case of the Markovian and secular approximations, the above assumption is perfectly valid as. Similarly, let us consider that a finite correlation *χ* is built up between the system and the environment, so that at a given time *t*, *ρ*(*t*) = *ρ**s*(*t*) ⊗ *ρ**E* + *χ*(*t*). If one explicitly calculates the contribution of these correlation in the evolution after a small time Δ*t* by integrating Eq. , one gets, $$\begin{aligned} \Delta{\tilde{\rho}}\_s^{cor}(t)&=-\int\_t^{t+\Delta t}dt\int\_0^t{\operatorname{Tr}}\_E\left[{\tilde{H}}\_I(t),\left[{\tilde{H}}\_I(t'),\chi(t')\right]\right]dt'{\nonumber}\\ &\propto \sum\_{i,j}\int\_t^{t+\Delta t}dt\int\_0^t{\langle}B\_j^\dagger(t)B\_i(t') {\rangle}\_E dt',\end{aligned}$$ where Δ*ρ̃**s**c**o**r* quantifies the extra contribution arising from the correlations *χ*(*t*). One can see that a finite contribution will only arise for Δ*t* < *τ**B* as the environmental correlations ⟨*B**j*†(*t*)*B**i*(*t*ʹ)⟩*E* are negligible for ∣*t* − *t*ʹ∣ > *τ**B*. Hence, once again, the coarse-grained picture of the time evolution permits us to neglect this extra contribution arising only for a very short duration. To obtain the asymptotic steady-state attained by the system, we revert back to the Schrodinger picture. In the static case, the GKSL equation in Eq.  assumes the form, $$\frac{d\rho\_s(t)}{dt}=\mathcal{L}\rho\_s(t) =-i[H\_s,\rho\_s(t)] +\tilde{\mathcal{L}}\rho\_s(t),$$ in the Schrodinger picture. The steady state *ρ**s**s* is therefore determined by solving the characteristic equation L*ρ**s**s* = 0. We note in passing that the set of reasoning in the derivation above can also be generalized for slowly varying Hamiltonians *H**s*(*t*). The rate of change should be slow enough so that the *quantum adiabaticity* is maintained. In other words, the time-scale over which *H**s*(*t*) changes appreciably is much greater than the time-scale of relaxation of the system as well as the baths. Under such conditions, the GKSL equation assumes the form, $$\frac{d{\tilde{\rho}}\_s(t)}{dt}=\mathcal{L}(t){\tilde{\rho}}\_s(t),$$ where all the quantities in the time-dependent super-operator is derived in terms of the instantaneous Hamiltonian of the system. Likewise, the steady state *ρ**s**s*(*t*) is also slowly-varying and is dependent on the instantaneous energy eigen-values of the system. The GKSL equation for periodic Hamiltonians ------------------------------------------- We now consider the case of a periodically driven system coupled to an external environment, *H*(*t*) = *H**s*(*t*) + *H**E* + *H**I*,  where *H**s*(*t* + *T*) = *H**s*(*t*). A dynamical equation in Linbladian form can be derived in a way similar to the case of static Hamiltonian discussed above, albeit with some alterations. The essential ingredient in the case of periodic Hamiltonians is the so called Floquet Hamiltonian *H**F*, which is defined as follows. Consider the unitary evolution operator over a single time period *T*, in the absence of any coupling to the environment, *U**s*(*T*, 0) = T*e*− *i*∫0*T**H**s*(*t*)*d**t* = *e*− *i**H**F**T*,  where T is the time-ordering operator and the Floquet Hamiltonian *H**F* is defined as, $$H\_F=\frac{i}{T}\ln U\_s(T,0).$$ The Floquet Hamiltonian *H**F* thus acts as an effective static Hamiltonian which drives the evolution of the system when observed at stroboscopic instants of time, i.e., *U**s*(*m**T*, 0) = exp( − *i**m**H**F**T*). It possesses a set of time-independent *quasi-energy* eigen-states $\ket{\phi\_n}$, which satisfy $H\_F\ket{\phi\_n}=\varepsilon\_n\ket{\phi\_n}$, where *ɛ**n* are refereed to as the quasi-energies. Now, let us consider the time evolution operator generating the evolution up to an arbitrary time, *U**s*(*t*, 0) = *U**s*(*t*, 0)*e**i**H**F**t**e*− *i**H**F**t* = *R*(*t*, 0)*e*− *i**H**F**t*,  where *R*(*t*) = *U**s*(*t*, 0)*e**i**H**F**t*. Using the relations *U**s*(*m**T*, 0) = exp( − *i**m**H**F**T*) and *U**s*(*m**T* + *t*ʹ, *m**T*) = *U**s*(*t*ʹ, 0) where 0 < *t*ʹ < *T*, one can easily verify that *R*(*t* + *T*) = *R*(*t*). Thus, *R*(*t*) has a discrete time-translational invariance, which permits its Fourier decomposition as, *U**s*(*t*) = ∑*q**R*(*q*)*e*− *i**q*Ω*t**e*− *i**H**F**t*,  where Ω = 2*π*/*T* is the frequency of the periodic modulation and $$R(q) = \frac{1}{T}\int\_0^TR(t)e^{iq\Omega t}dt.$$ The operators *S**i* are transformed in the interaction picture as, $$\begin{aligned} \tilde{S}\_i&=U^\dagger\_s(t,0)S\_iU\_s(t,0){\nonumber}\\ &=e^{iH\_Ft}\sum\_{q',q}\left(R^\dagger(q')S\_iR(q)e^{-i(q-q')\Omega t}\right)e^{-iH\_Ft}{\nonumber}\\ &=\sum\_me^{iH\_Ft}S(m)e^{-im\Omega t}e^{-iH\_Ft}{\nonumber}\\ &=\sum\_{m}\sum\_{\omega=\varepsilon\_{n'}-\varepsilon\_{n}}\bra{\phi\_n}S\_i(m)\ket{\phi\_{n'}}e^{-im\Omega t}e^{i(\varepsilon\_n-\varepsilon\_{n'})t}{\nonumber}\\ &=\sum\_{m,\omega}S\_i(m,\omega)e^{-i(\omega+m\Omega)t}\end{aligned}$$ where *ω* = *ɛ**n*ʹ − *ɛ**n* now denotes the difference in quasi-energies or eigen-values of the Floquet Hamiltonian *H**F*. This is unlike the static case where *ω* refereed to the difference in energy eigen-values of the time independent *H**s*. The Hamiltonian *H**I* hence assumes a similar form in the interaction picture as in Eq. , *H̃**I* = ∑*i*, *ω*, *m**e*− *i*(*ω* + *m*Ω)*t**S**i*(*m*, *ω*) ⊗ *B**i*(*t*),  where *B**i*(*t*) is given by Eq. . Assuming the same set of approximations as in Sec. [subsecopenstatic], one arrives at the following equation of the super-operator L, $$\label{eq\_gksl\_periodic} \tilde{\mathcal{L}}{\tilde{\rho}}\_s(t)=-i\left[H\_{LS},{\tilde{\rho}}\_s(t)\right]+\\\sum\_{m,\omega,i,j}\gamma\_{i,j}(\omega+m\Omega)\Big(S\_i(m,\omega){\tilde{\rho}}\_s(t)S\_j^\dagger(m,\omega)\\-\frac{1}{2}\{S\_j^\dagger(m,\omega)S\_i(m,\omega),{\tilde{\rho}}\_s(t)\}\Big),$$ where *H**L**S* commutes with the Floquet Hamiltonian *H**F* and is given by, *H**L**S* = ∑*m*, *ω*, *i*, *j**η**i*, *j*(*ω* + *m*Ω)*S**j*†(*m*, *ω*)*S**i*(*m*, *ω*). Unlike the static case, transforming the GKSL equation to the Schrodinger picture is not trivial in general. Instead, we first obtain the steady state in the interaction picture itself by solving $\tilde{\mathcal{L}}{\tilde{\rho}}\_{ss}=0$. The corresponding state in the Schrodinger picture is given by, *ρ**s**s*(*t*) = *U**s*(*t*, 0)*ρ̃**s**s**U**s*†(*t*, 0),  and satisfies *ρ**s**s*(*t* + *T*) = *ρ**s**s*(*t*) as *U**s*(*t* + *T*, 0) = *U**s*(*t*, 0). Hence, the steady state is periodic in nature and is in sync with the modulating Hamiltonian. Continuous thermal machines =========================== The three-level maser discussed in Sec. [subsecmaser], apart from being the earliest prototype of quantum thermal machines, can also be identified as the simplest form of *continuous thermal machines* geusicpr343, gevapre3903,kosloffarpc365, mukherjeepre062109 (CTMs). This class of thermal machines are characterized by their perpetual (continuous) coupling with both the heat source as well as the sink, unlike the reciprocating class of machines discussed in the next section. CTMs have a greater experimental relevance as they, unlike their reciprocating counter-parts, do not require intermittent couplings and decouplings between the working fluid and the baths that are particularly difficult to implement at microscopic scales. Moreover, such intermittent coupling-decoupling mechanisms are bound to generate some finite energetic-costs on the performance of reciprocating machines which are mostly ignored in theoretical calculations. Work extraction or refrigeration in CTMs is usually enforced by a periodic modulation of the system Hamiltonian, which in general, drives the system to a periodic steady state. The dynamical exchanges of energy between the system and the baths as well as the *work reservoir* (energy source of the external agent which modulates the system) in the steady state are, in general, out-of-equilibrium processes. In this regard, the three-level maser can be considered as a special case of CTMs in which a dynamical equilibrium (populations of energy levels held steady) is assumed throughout the energy conversion process of each quanta of excitation, thereby rendering the operation perfectly reversible. On the other hand, for a generic out-of-equilibrium process, the dynamics is irreversible in nature and the performance is found to be worse than the reversible Carnot engine or refrigerator. In what follows, we shall adopt the master equation approach, detailed in Sec. [subsecopenperiodic], to analyze the performance of a simple CTM operating in the steady state. For purpose of simplicity, let us consider a periodically modulated two-level system (TLS) in contact with two thermal baths having temperatures *T**h* and *T**c* (*T**h* > *T**c*) klimovskypre012140. The Hamiltonian of the composite system reads, *H*(*t*) = *H**s*(*t*) + *H**h* + *H**c* + *H**I*,  where *H**s*(*t*) and *H**h*(*H**c*) are the Hamiltonians of the system and the hot (cold) bath, respectively. The modulation is performed on the energy-gap of the TLS so that *H**s*(*t*) is of the form, $$\label{eq\_cont\_hs} H\_s(t)=\frac{1}{2}\omega\_s(t)\sigma\_z,$$ with *ω**s*(*t* + *T*) = *ω**s*(*t*) and *σ**z* being a Pauli matrix. The interaction between the system and the baths is chosen to be, *H**I* = *σ**x* ⊗ (*B**h* ⊗ I*c* + I*h* ⊗ *B**c*),  where *σ**x* is a Pauli matrix, while *B**h* and *B**c* are Hermitian operators which act locally on the Hilbert spaces of the hot and cold baths, respectively. Note that the modulation imposed is such that [*H**s*(*t*), *H**s*(*t*ʹ)] = 0, which ensures that the external driving only modulates the energy levels and do not generate any excitations in the system. On the other hand, the interaction between the system and the baths is chosen such that [*H**s*(*t*), *H**I*] ≠ 0 and thus the baths can induce excitations and affect the population of the energy levels. On plugging in Eqs.  and  in the GKSL equation derived Eq.  and simplifying, we obtain, [eqcont] $$\frac{\partial\rho(t)}{\partial t}=\mathcal{L}\rho(t)=\sum\_{j,m}\mathcal{L}\_{m}^j\rho(t),$$ where, $$\mathcal{L}\_{m}^j\rho=P\_m\Big(\gamma^j(\omega\_0+m\Omega)(\sigma^-\rho\sigma^+-\frac{1}{2}\{\sigma^+\sigma^-,\rho\})\\+\gamma^j(-\omega\_0-m\Omega)(\sigma^+\rho\sigma^--\frac{1}{2}\{\sigma^-\sigma^+,\rho\})\Big),$$ where we have ignored the Lamb-shift corrections to the energy levels. The superscript *j* = *h*, *c* label the operators or correlation functions defined on the hot and cold baths, respectively. Likewise, *ω*0 denotes the mean gap of the two-level system averaged over *T*, Ω = 2*π*/*T* is the frequency of modulation and *m* = 0,  ± 1,  ± 2… corresponds to the different photon sectors or side-bands created as a result of the modulation. The coefficient *P**m* assigns a weight to the contribution from the *m**t**h* side-band and is given by, $$P\_m = \left|\frac{1}{T}\int\_0^Te^{-i\int\_0^t\left(\omega\_s(t')-\omega\_0\right)dt'}e^{-im\Omega t}dt\right|^2$$ Note that, unlike in the previous subsections, we have not used *Õ* to denote an operator *O* in the interaction picture for simplicity in notation. A closer look at Eq.  suggests that the effective action of the periodic modulation in conjugation with the coupling to thermal baths can be interpreted as a dissipative evolution driven by infinite copies of each of the thermal baths. Each of the copies, which we henceforth refer to as sub-baths, of a particular thermal bath couples to different side-bands of the Floquet spectrum. The super-operator L*m**j* encodes the dissipative action arising due to the coupling of one of the copies of the *j**t**h* bath to the *m**t**h* side-band. The sub-baths therefore induce excitations in the system having energies equal to the energy gaps of the different side-bands. As discussed in Sec. [subsecopenperiodic], the steady state *ρ**s**s* in the interaction picture is obtained by solving the eigenvalue equation L*ρ**s**s* = 0, which transforms to a periodic steady state in the Schrodinger picture. A crucial assumption in determing the steady state is that the thermal baths satisfy the Kubo-Martin-Schwinger condition *γ**j*( − *ω*) = *e*− *β**j**ω**γ**j*(*ω*), where *β**j* = 1/*T**j* is the inverse temperature of *j**t**h* bath. The steady state is then found to be, [eqctmsteady] $$\rho\_{ss}=\frac{1}{1+r}\begin{pmatrix} r & 0\\0 & 1\end{pmatrix},$$ where, $$\label{eq\_r\_ori} r=\frac{\sum\_{m,j}P\_m \gamma^j\left(\omega\_0+m\Omega\right)e^{-\frac{\omega\_0+m\Omega}{T\_j}}}{\sum\_{m,j}P\_m \gamma^j\left(\omega\_0+m\Omega\right)},$$ To determine the heat currents and work, we first note that each sub-bath, when acting independently on the system, can in principle drive the system to a Gibbs-like steady state determined by the eigenvalue equation L*m**j**ρ**m*, *s**s**j* = 0. These steady states are of the form, $$\label{eq\_cont\_sub\_steady} \rho\_{m,ss}^j=\frac{1}{\mathcal{Z}}\exp{\left(\frac{\omega\_0+m\Omega}{\omega\_0}\beta\_j H\_{F}\right)},$$ where *H**F* = *ω*0*σ**z*/2 and $\mathcal{Z}={\operatorname{Tr}}\left(\exp{\left(\frac{\omega\_0+m\Omega}{\omega\_0}\beta\_j H\_{F}\right)}\right)$. Next, we calculate the rate of change of von-Neumann entropy *S*(*t*) =  − Tr(*ρ*(*t*)ln*ρ*(*t*)), $$\frac{d S(t)}{dt}=-{\operatorname{Tr}}\left(\dot{\rho}(t)\ln\rho(t)\right)=-\sum\_{j,m}{\operatorname{Tr}}\left(\mathcal{L}\_{m}^j\rho(t)\ln\rho(t)\right).$$ Further, it follows from from Spohn’s inequality spohnjmp1227 for Markovian dynamics that Tr(L*m**j**ρ*(ln*ρ* − ln*ρ**m*, *s**s**j*) ≤ 0. Substituting this inequality in the above equation yields, $$\frac{d S(t)}{dt}\geq-\sum\_{j,m}{\operatorname{Tr}}\left(\mathcal{L}\_{m}^j\rho(t)\ln\rho^j\_{m,ss}\right)=\sum\_j\frac{J\_j(t)}{T\_j}.$$ The above inequality can be considered to be a dynamical version of the second law. In the steady state, the Von-Neumann entropy maximizes, and we obtain, $$\sum\_j\frac{J\_j}{T\_j}=-\sum\_{j,m}{\operatorname{Tr}}\left(\mathcal{L}^j\_{m}\rho\_{ss}\ln \rho^j\_{m,ss}\right)\leq 0.$$ Substituting Eq. in the first equality above, the steady state heat currents can be identified as, $$J\_j=\sum\_m\left(\frac{\omega\_0+m\Omega}{\omega\_0}\right){\operatorname{Tr}}\Big(\mathcal{L}\_{m}^j\rho\_{ss}H\_F\Big),$$ while the power is calculated from the principle of energy conservation (first law), $$P=\dot{W}=-\sum\_jJ\_j.$$ The working and the mode of operation of the thermal machine discussed above depend crucially on the form of modulation as well as the bath spectral functions *γ**j*(*ω*). To act as a heat engine or refrigerator, the two baths need to be spectrally separated klimovskypre012140, klimovskyatmop329. As for example, consider the case of a sinusoidal modulation of the system with the bath spectra separated as *γ**h*(*ω*) = 0 ∀*ω* < *ω*0, *γ**c*(*ω*) = 0 ∀*ω* > *ω*0. One can then show that the QTM operates as a refrigerator, i.e. *J**h* < 0 and *J**c*, *P* > 0, if the modulation frequency Ω > Ω*c**r*, where the critical frequency Ω*c**r* given by klimovskypre012140, $$\Omega\_{cr}=\omega\_0\frac{T\_h-T\_c}{T\_h+T\_c}.$$ For Ω < Ω*c**r*, the QTM operates as a heat engine, characterized by reversal of the signs of the heat currents and power, i.e. *J**h* > 0 and *J**c*, *P* < 0. In this regime, the efficiency is found to be $$\eta=\frac{2\Omega}{\omega\_0+\Omega},$$ The maximum efficiency is achieved as Ω → Ω*c**r*; at Ω = Ω*c**r* the machine achieves the Carnot efficiency *η* = 1 − *T**c*/*T**h*, but the power as well as the heat current vanishes. Similarly, in the refrigerator regime, the COP is also found to be limited by the Carnot bound. Note that, although the efficiency and COP in general depends upon the choice of bath spectral function, they are nevertheless, always restricted by the Carnot bounds. Recently, it has also been demonstrated that using an asymmetric pulse, the switching between different modes of operation can also be achieved by tuning the up (or equivalently down) time duration of the pulse. Additionally, when modulated at resonance Ω = *ω*0, tuning the up time duration also allows to QTM to function as a heater. In this mode of operation, the power supplied to the system *P* > 0 is used to heat up both the thermal baths, i.e. *J**h* < 0, *J**c* < 0. The above framework of continuous thermal machines with TLSs as working fluid has also been extended to the case of multi-level systems with degenerate excited states. When compared with the performance of TLSs, it is found that the presence of degeneracy in the case of multi-level systems can boost the heat currents and power of the thermal machine; however, the efficiency or the coefficient of performance remains identical to that of the case of TLSs. Similarly, it has been shown that using *N* two-level atoms as working fluid in place of the TLS enhances the power output (as well as cooling capacity in refrigerator mode) of the thermal machine when compared with the net power output from *N* independent machines. Further, it has also been demonstrated that the hot bath can be cooled to very low temperatures if one considers a modified version of the continuous thermal machine discussed above, with the system and interaction Hamiltonians chosen as, $$H\_s(t)=\frac{1}{2}\omega\_0(t)\sigma\_z+\frac{1}{2}g\left(\sigma\_+e^{-i\nu t}+\sigma\_-e^{i\nu t}\right)$$ and *H**I* = *σ**z* ⊗ *B**h* ⊗ I*c* + *σ**x* ⊗ I*h* ⊗ *B**c*,  respectively. The first equation models a laser driven TLS with the coupling strength between the TLS and the laser as *g*, while the second equation describes the coupling of the TLS to a dephasing (does not induce transitions between energy levels of *σ**z*) hot bath and a cold bath. The particular advantage of this model is that, unlike the model governed by Eqs.  and , no spectral separation of the two baths is required for the cooling operation. Finally, we note that the working of CTMs in the regime of non-Markovian dynamics has aslo been explored. Reciprocating thermal machines ============================== A reciprocating thermal machine operates in a cycle which is composed of discrete strokes. Notably, the working fluid is coupled to the baths only in some of the strokes of the cycle. This class of thermal machines are much simpler to analyze in comparison to continuous ones, although they are in general trickier to implement experimentally. The particle in box system, discussed in Sec. [subsecbox] is an example of reciprocating thermal machine. In spite of it’s tantalizing similarity to the classical Carnot cycle, it does not offer much insight into the dynamical exchange of energy between the working fluid and the environment. In particular, the heat exchanged with the baths is indirectly found from the work performed in the isothermal strokes where the net energy change is zero. In this section, we will first discuss the common notions of quantum heat and work widely used in literature, which will be subsequently used to review some of the commonly studied reciprocating thermal machines. Although there are no universally accepted definitions of ‘quantum work’ or ‘quantum heat’ till date, we briefly underline here a set of definitions that are particularly useful in the context of reciprocating devices, especially, in the limit of weak system-bath coupling. Consider a system *S* with reduced density matrix *ρ**s*(*t*) evolving under a time-dependent Hamiltonian *H**s*(*t*) and coupled to an external environment. A change in energy expectation value of *S* after duration *τ* can be expressed as alickijpaL103, vinjanampathyctp545, $$\begin{aligned} \label{eq\_heatwork} \Delta E&=\int\_0^\tau\frac{d}{dt}\left[{\rm Tr}\left(\rho\_s(t)H\_s(t)\right)\right]dt{\nonumber}\\&=\int\_{0}^{\tau}{\rm Tr}\left(\frac{d\rho\_s(t)}{d t}H\_s(t)\right)dt+\int\_{0}^{\tau}{\rm Tr}\left (\rho\_s(t)\frac{d H\_s(t)}{d t}\right)dt{\nonumber}\\&=W+Q,\end{aligned}$$ where the work is identified as, $$\label{eq\_qwork} W=\int\_{0}^{\tau}{\rm Tr}\left(\rho\_s(t)\frac{d H\_s(t)}{d t}\right)dt,$$ and the heat exchanged with the environment as, $$\label{eq\_qheat} Q=\int\_{0}^{\tau}{\rm Tr}\left(\frac{d\rho\_s(t)}{d t}H\_s(t)\right)dt.$$ The above definitions of quantum work and heat can be intuitively justified as follows: If the system *S* was isolated, any change in energy expectation can only be associated with a work performed, as there is no environment with which *S* can exchange heat. This is consistent with the definition of heat in Eq.  as for an isolated system, one can easily check that, $$\begin{aligned} Q\_{iso}&=\int\_{0}^{\tau}{\rm Tr}\left(\frac{d\rho\_s(t)}{d t}H\_s(t)\right)dt{\nonumber}\\&=-i\int\_{0}^{\tau}{\rm Tr}\left(\left[H\_s(t),\rho\_s(t)\right]H\_s(t)\right)dt=0,\end{aligned}$$ where we have used the Liouville’s equation to arrive at the second equality. Further, both the quantum work and heat, as defined above, depend on the evolution process and are therefore not state functions, similar to their classical counterparts. One can therefore, consider Eq.  as the quantum equivalent of the first law. Further, the definitions in Eq.  also permits a natural formulation of a dynamical version of the second law. Within the framework of GKSL equations derived for time-independent and slowly varying system Hamiltonians *H**s*(*t*) in Sec. [secopen], the rate of heat exchanged can be calculated using Eq.  as, $$J(t)=\frac{dQ}{dt}={\rm Tr}\left(\frac{d\rho\_s(t)}{d t}H\_s(t)\right)={\rm Tr}\left(\mathcal{L}(t)\rho\_s(t)H\_s(t)\right)$$ Next, we invoke the Spohn’s inequality, Tr[L(*t*)*ρ**s*(*t*)(ln*ρ**s*(*t*) − ln*ρ**s**s*)] ≤ 0,  where *ρ**s**s* is the steady state which satisfies L*ρ**s**s* = 0. For baths obeying the Kubo-Martin-Schwinger (KMS) condition, *ρ**s**s*(*t*) corresponds to the thermal Gibbs state *ρ**s**s*(*t*) = *e*− *β**H**s*(*t*)/Tr(*e*− *β**H**s*(*t*)), where *β* = 1/*T* is the inverse bath temperature. Substituting in the above equation, we arrive at, $$\frac{dS(t)}{dt}-\frac{J(t)}{T}\geq 0,$$ where *S*(*t*) =  − Tr(*ρ**s*(*t*)ln*ρ**s*(*t*)) is the von Neumann entropy of the system. The above equation is the dynamical version of the second law which we had also obtained in Sec. [secconti] for time-periodic Hamiltonians. Finally, we would like to mention here that the above definitions of work and heat need to be modified in some cases, such as in the case of autonomous or self-contained quantum thermal machines tonnerpre066118,niedenzuq195, brasknjp113029, klimovskyatmop329, klimovskypre022102, klimovskyscirep7809, strasbergprl180605, hammamnjp043024. Four stroke devices ------------------- Carnot, Otto and Stirling engines are prime examples of four stroke thermal machines as their operation are based on cycles that are made up of four sequential strokes. The quantum analogues of these four stroke machines have been extensively studied, particularly the Otto engine has received the most attention followed by the Carnot engine. This is because the Otto cycle is the simplest to analyze owing to the fact that the heat and work exchanges occur separately in different strokes, unlike the Carnot and Stirling cycles, in which both the processes occur simultaneously in the isothermal strokes. In particular, note that the isothermal stroke requires a dissipative evolution with a time varying Hamiltonian. However, exact formulations in terms of the Lindbladian framework exist only for the cases of static and infinitesimally slowly varying Hamiltonians. Thus, the Lindbladian framework becomes inadequate when one is interested in studying finite-time performances of the thermal machines. On the contrary, the Otto cycle provides no such hindrance, as the natural separation of the work producing and heat exchanging strokes allows the former to be rephrased in terms of unitary evolution in isolated conditions and the latter in terms of dissipative dynamics with time-independent Hamiltonians. In the following, we therefore restrict ourselves to reviewing the working principles of the quantum Otto cycle using the definitions of quantum heat and work discussed previously. We note at the outset that a vast amount of research have gone into analyzing numerous aspects of the quantum Otto cycle and as such, a full-fledged discussion of all such work is not feasible. We therefore, resort to highlighting only fundamental aspects of its working and outline some of the interesting results that have been reported over the past few years. ![Schematic representation of the Otto cycle in the T-S plane, where T denotes the temperature and S is the Von-Neumann entropy. The strokes AB and CD are isentropic strokes in which the energetic changes occur only in the form of work. On the other hand, the thermalization with the hot and cold baths occur in the strokes DA and BC, respectively, with no work performed during these strokes whatsoever. ](Otto.png "fig:")[figotto] ### The Otto Cycle For the Otto cycle (see Fig. [figotto]), we consider a quantum harmonic oscillator (QHO) as the working fluid with Hamiltonian, $$H=\frac{\hat{p}^2}{2m}+\frac{1}{2}m\omega^2(t)\hat{x}^2=\omega(t)\left(a^\dagger a+1/2\right),$$ where *x̂* and *p̂* are the position and conjugate momentum operators, respectively. In addition, *a* denotes the annihilation operator and *ω* is the natural frequency of the oscillator which can be controlled externally. The QHO can be coupled to a hot or a cold thermal bath, having temperatures *T**h* and *T**c* (*T**h* > *T**c*), respectively. Further, we assume that the initial frequency is *ω*(0) = *ω**A*, i.e., *H*(0) = *H**A* = *ω**A*(*a*†*a* + 1/2), and the QHO is initialized in thermal equilibrium with bath *T**h* so that, $$\label{eq\_otto\_ini\_den} \rho\_A = \frac{e^{-H\_A/T\_h}}{\mathcal{Z}(\omega\_A,T\_h)},$$ where $\mathcal{Z}(\omega,T)={\rm Tr}(e^{-H\_A/T})$. The energy expectation value is found to be, $${\langle}E\_A{\rangle}={\operatorname{Tr}}(\rho\_AH\_A)=\omega\_A\left({\langle}n{\rangle}+\frac{1}{2}\right)=\frac{\omega\_A}{2}\coth \left(\frac{\omega\_A}{2T\_h}\right),$$ where ⟨*n*⟩ is the mean occupation number. The quantum Otto cycle is now constructed as follows: 1. *Isentropic compression* – The QHO is decoupled from the hot bath and its Hamiltonian is tuned from *H**A* → *H**B*, where *H**B* = *ω**B*(*a*†*a* + 1/2) and *ω**B* ≤ *ω**A*. The unitary evolution in isolated conditions does not affect the Von-Neumann entropy of the QHO; hence it is an isentropic process. In the ideal cycle, the tuning occurs adiabatically, which ensures that the occupation probabilities of the instantaneous eigen-energy levels remain unchanged. The energy exchange, which importantly occurs only in the form of work, is given by $$\label{eq\_otto\_W1} W\_{AB}=\frac{\omega\_B-\omega\_A}{2}\coth \left(\frac{\omega\_A}{2T\_h}\right).$$ 2. *Cold isochore* – In this stroke, the QHO is coupled to the cold bath and allowed to thermalize with its Hamiltonian held constant at *H* = *H**B*. By virtue of Eq. , this ensures that no work is performed. The only energy exchange occurs in the form of heat transfer and is given by, $$Q\_c=\frac{\omega\_B}{2}\left(\coth \left(\frac{\omega\_B}{2T\_c}\right)-\coth \left(\frac{\omega\_A}{2T\_h}\right)\right)$$ 3. *Isentropic expansion* – As in the compression stroke, the QHO is decoupled from bath and the Hamiltonian is tuned as *H**B* → *H**A*. Once again, in the ideal case, the populations remain invariant, and the work performed is found to be, $$\label{eq\_otto\_W2} W\_{BA}=\frac{\omega\_A-\omega\_B}{2}\coth \left(\frac{\omega\_B}{2T\_c}\right).$$ 4. *Hot isochore* – The cycle is completed by coupling the QHO to the hot bath so that it thermalises back to its initial state as given in Eq. . The heat exchanged in the process is, $$Q\_h=\frac{\omega\_A}{2}\left(\coth \left(\frac{\omega\_A}{2T\_h}\right)-\coth \left(\frac{\omega\_B}{2T\_c}\right)\right)$$ We reemphasize that the work is performed only during the isentropes while heat is exchanged with the baths only during the isochores. As the QHO returns to its intial state after each cycle, the total energy is conserved, i.e. *Q**c* + *W**A**B* + *Q**h* + *W**B**A* = 0. The efficiency can be calculated as, $$\label{eq\_otto\_eff} \eta\_o=-\frac{W\_{AB}+W\_{BA}}{Q\_h}=\frac{Q\_h+Q\_c}{Q\_h}=1-\frac{\omega\_B}{\omega\_A},$$ where we have used the convention that *W* is positive if work is done on the system. When operating as a heat engine, the net work output is positive, i.e. *W**A**B* + *W**B**A* ≤ 0. Using Eqs.  and , this inequality is simplified to *ω**B*/*ω**A* ≥ *T**c*/*T**h*. Hence the efficiency is limited by, $$\eta\_o\leq 1-\frac{T\_c}{T\_h}.$$ Thus, we find that the efficiency of the quantum Otto engine is bounded by the Carnot limit. As in the case of CTMs, one can check that at the Carnot point, *ω**B*/*ω**A* = *T**c*/*T**h*, the net work done as well as the heat exchanges vanish. For *ω**B*/*ω**A* ≤ *T**c*/*T**h*, the machine operates as a quantum refrigerator, where the COP is also found to be upper-bounded by the COP of the Carnot refrigerator. ### Efficiency at maximum power In practice, the power output of an ideal Otto engine, as in the case of Carnot engine, is zero. This is because of the fact that each of the adiabats as well as the isochores ideally requires an infinite time to achieve perfect adiabatic evolution and thermalization, respectively. The efficiency at maximum power is therefore an important figure of merit to analyze the engine’s performance. The power of a heat engine can be maximized in two ways. In the first case, let us assume that each of the strokes are carried out over arbitrarily long but finite times so as to achieve near perfect adiabaticity and thermalization. Maximizing the power in this case thus amounts to maximizing the net work output. To evaluate the efficiency under such a restraint, we expand the net work output in the high temperature limit as, $$\begin{aligned} W &= -\left(W\_{AB} + W\_{BA}\right)= \left(\omega\_B-\omega\_A\right)\left(\frac{T\_c}{\omega\_B}-\frac{T\_h}{\omega\_A}\right){\nonumber}\\ &=\left(\frac{\omega\_B}{\omega\_A}-1\right)\left(\frac{T\_c}{\left(\frac{\omega\_B}{\omega\_A}\right)}-T\_h\right)\end{aligned}$$ *W* is maximum when the ratio *ω**B*/*ω**A* satisfies $\omega\_B/\omega\_A=\sqrt{T\_c}/\sqrt{T\_h}$. The efficiency at maximum power *η̄* is thus, $$\label{eq\_otto\_max} \bar{\eta}=1-\sqrt{\frac{T\_c}{T\_h}},$$ which is surprisingly identical to the Curzon-Ahlborn (CA) efficiency of an ideal engine operating with finite power. However, a more meaningful way to achieve finite power at maximum efficiency is to devise ways that can lead to both perfect adiabaticity and thermalization within shorter times. The time required in the former is in general comparatively much longer than the latter; a greater effort has thus been devoted to engineer methods to achieve adiabatic evolutions in finite time, which we shall be briefly discussing below. Nevertheless, we note that protocols to achieve fast thermalization have also been explored recently. ### Quantum friction and shortcuts to adiabaticity The fact that the strokes associated with work extraction are required to be adiabatic is better explained in terms of quantum coherence. Diabatic excitations, associated with the build up of coherence in between the energy levels of the system, are generated when the time allocated to the isentropic processes is finite and [*H*(*t*), *H*(*t*ʹ)] ≠ 0. The build up of coherence costs additional work, which effectively reduces the net useful work extracted from the heat reservoirs, thus undermining the efficiency of the engine. Coherence therefore plays the role of ‘quantum friction’ which hampers the engine’s ability to extract useful work. A powerful technique which can mitigate work losses due to non-adiabatic driving is to utilize certain *shortcuts to adiabaticity*. One way in which this can be realized is by driving the working fluid along a certain path which ensures that the final state reached at the end of the isentropic stroke does not have any coherence although they may exist at intermediate times and the populations of the energy eigen-states are thus the same at the beginning and at the end of the stroke. To illustrate, let us consider an isentropic stroke in which the frequency of the QHO is tuned from *ω*(0) = *ω**i* to *ω*(*τ*) = *ω**f* in a finite duration of time *τ*. Now, consider the operator, $$I(t) = \frac{1}{2}\left[\frac{m\omega\_0^2}{b^2}\hat{x}^2+\frac{1}{m}\left(b\hat{p}-m\dot{b}\hat{x}\right)^2\right],$$ where *b* is a time-dependent parameter and *ω*0 = *c**o**n**s**t* > 0. This operator becomes an invariant of evolution if the parameter *b* satisfies the Ermakov equation, $$\label{eq\_ermakov} \ddot{b} + \omega(t)^2b=\frac{\omega\_0^2}{b^3}.$$ Let us now impose the boundary conditions *b*(0) = 1, $\dot{b}(0)=\ddot{b}(0)=0$ and $b(\tau)=\sqrt{\omega\_0/\omega\_f}$, $\dot{b}(\tau)=\ddot{b}(\tau)=0$. These choices of boundary conditions lead to *ω*0 = *ω**i*, *I*(0) = *H*(0) and *I*(*τ*) = *ω*0*H*(*τ*)/*ω**f*. The set of these six boundary conditions allows one to calculate a polynomial form of *b*(*t*) from which the required time-dependent tuning of *ω*(*t*) can be determined. The above results imply that an initial set of eigen-states of the Hamiltonian *H*(*t*) are also the eigen-states of the operator *I*(*t*) at *t* = 0. Let us consider a generic initial state $\ket{\psi(0)}=\sum c\_n(0)\ket{\phi\_n}$, where $\ket{\phi\_n}$ are the eigen-states of both *I*(0) and *H*(0). If *ω*(*t*) is subsequently tuned following Eq. , *I*(*t*) remains invariant and the state evolves as $\ket{\psi(t)}=\sum c\_n(0)e^{i\theta\_n(t)}\ket{\phi\_n(t)}$, where $\ket{\phi\_n(t)}$ are instantaneous eigen-states of *I*(*t*) but not *H*(*t*) (0 < *t* < *τ*) and *θ*(*t*) is some time-dependent phase. Finally, since $\ket{\phi\_n(\tau)}$ are also the eigen-states of *H*(*τ*), we conclude that the population of each of the eigen-states are restored at *t* = *τ*. An alternative but similar approach to adiabatic shortcuts is provided by counter-diabatic (CD) protocols. These protocols involve adding additional interactions to the system Hamiltonian; the dynamics resulting from the inclusion of these additional CD interactions suppresses the diabatic excitations which would have been otherwise generated in the system due to fast driving of the bare Hamiltonian. A general form of the CD interactions is obtained as follows. Consider a system Hamiltonian of the form, $$H\_0(t) = \sum\_n\varepsilon\_n(t)\ket{n(t)}\bra{n(t)}.$$ Under adiabatic driving, the system initialized in a given energy eigen-state at *t* = 0 follows the same instantaneous state throughout the evolution. The time evolution operator is therefore required to be of the form, $$U(t)=\sum\_ne^{i\phi\_n(t)}\ket{n(t)}\bra{n(0)}.$$ where *ϕ**n*(*t*) is the phase acquired by the *n**t**h* eigen-state during the unitary evolution and is given by, $$\phi\_n(t)=-\int\_0^tdt'\left(\varepsilon\_n(t')-i\braket{n(t')|\partial\_{t'}n(t')}\right).$$ We wish to find the Hamiltonian which mimics the above time evolution without any adiabatic approximations. This is easily done by substituting *U*(*t*) in the (Schrodinger) equation, $H(t)=i\dot{U}(t)U^\dagger(t)$, which leads to, $$H(t)=H\_0(t)+H\_{CD}(t)=\sum\_n\varepsilon\_n(t)\ket{n(t)}\bra{n(t)}+i\sum\_n\Big(\ket{\partial\_tn(t)}\bra{n(t)}-\braket{n(t)|\partial\_tn(t)}\ket{n(t)}\bra{n(t)}\Big),$$ where *H**C**D* encodes the CD interactions that are to be added to ensure an adiabatic evolution with reference to the bare Hamiltonian *H*0. Of course, one must be careful to take into account the extra work done by the additional counter-diabatic terms while calculating the net work output and efficiency of the thermal machine. Finally, it turns out that when coherence is generated in a quantum Otto engine during the isentropic strokes, an incomplete thermalization in between the two isentropic strokes can lead to a better engine performance. An incomplete thermalization, typically realized by allowing the working fluid to interact with the hot bath for a short time during the isochoric strokes, fails to destroy the coherence generated in the energy basis during the preceeding isentropic stroke. This leads to an interference like effect between this residual coherence and the coherence generated in the subsequent isentropic stroke after the incomplete thermalization step. This has been explicitly demonstrated in Ref. [], where the authors analyze a quantum Otto engine operating with a TLS as working medium, which is allowed to thermalize only partially during the hot isochoric stroke. The performance of this engine is compared to a second one which additionally incorporates a dephasing process after the incomplete thermalization step, so as to erase any residual coherence. Through numerical analysis, the authors demonstrate that a careful tuning of the finite times allocated to the isentropic and thermalization strokes can lead to a better power output and efficiency in the first engine when compared to the second one. ### Non-thermal baths It is important to realize that coherence is not always detrimental to the performance of QTMs. In fact, coherence plays the central role in boosting the work output of QTMs which utilize non-thermal baths as heat sources. In 2003, Scully *et. al.* demonstrated that it is possible to extract work effectively from a single ‘phaseonium’ bath, which consist of three-level atoms that have small amount of coherence between almost- degenerate lower energy levels,. Dubbed as the photo-Carnot engine, the working fluid here is the radiation field generated by the atoms operating between a phaseonium bath with temperature *T**h* and a bath *T**c* with *T**h* > *T**c*. The working fluid relaxes to a thermal steady state with a temperature *T**ϕ* = *T**h*(1 − *n̄**ε*cos*ϕ*), where *T**h* is the temperature of the hot bath, *n̄* is the average photon number in the absence of coherence, *ε* is a measure of the magnitude of the coherence and *ϕ* is the associated phase. By appropriately tuning *ϕ*, it is therefore possible for the working fluid to attain a higher temperature than the hot bath. This in turn allows work extraction even when *T**h* = *T**c*, thus effectively permitting work extraction from a single reservoir. Similar results were also reported in the context of quantum Otto engines, where the use of such ‘quantum coherent fuels’ was shown to enhance performance. A slightly different mechanism by which the use of non-thermal baths can boost engine performance is when the working fluid itself is rendered non-thermal after interaction with the bath. This can be done, for example, by using a squeezed thermal bath as the hot bath with squeezing parameter *r*. A QHO coupled to such a bath thermalizes to a squeezed thermal state with mean phonon number ⟨*n*⟩ = ⟨*n*⟩0 + (2⟨*n*⟩0 + 1)sinh2(*r*). Returning to the analysis of the Otto cycle in Sec. [subsubsecotto], we note that the initial energy expectation *E**A* is modified as, $${\langle}E\_A{\rangle}= \frac{\omega\_A}{2}\coth\left(\frac{\omega\_A}{2T\_h}\right)\Delta H\_r,$$ where Δ*H**r* = 1 + (2 + 1/⟨*n*0⟩)sinh2(*r*) with ⟨*n*0⟩ = [exp(*ω**A*/*T**h*) − 1]− 1. Proceeding as in Sec. [subsubsecotto], the total work done is found to be, $$W = \frac{\left(\omega\_B-\omega\_A\right)}{2}\left[\coth\left(\frac{\omega\_B}{2T\_c}\right)-\coth\left(\frac{\omega\_A}{2T\_h}\right)\Delta H\_r\right],$$ while the heat extracted is given by, $$Q\_h = \frac{\omega\_A}{2}\left[\coth\left(\frac{\omega\_A}{2T\_h}\right)\Delta H\_r-\coth\left(\frac{\omega\_B}{2T\_c}\right)\right].$$ The efficiency therefore turns out to be the same as the Otto engine efficiency *η**o* = 1 − *ω**B*/*ω**A*. However, in the high temperature limit, we have Δ*H**r* = 1 + 2sinh2(*r*) and the total work can be approximated as, $$W = \left(\frac{\omega\_B}{\omega\_A}-1\right)\left(\frac{T\_c}{\left(\frac{\omega\_B}{\omega\_A}\right)}-T\_h\left(1+2\sinh^2(r)\right)\right)$$ The efficiency at maximum power is then found by maximizing the work with respect to *ω**B*/*ω**A* and is given by, $$\bar{\eta} = 1-\sqrt{\frac{T\_c}{T\_h\left(1+2\sinh^2(r)\right)}}$$ For *r* = 0, *η̄* reduces to the CA efficiency obtained in Eq.  for the case of thermal reservoirs. On the other hand, for *r* → ∞, we find *η̄* → 1, which appears to surpass the Carnot limit. Nevertheless, this does not violate the second law as the non-thermal hot bath is found to have an effective higher temperature *T**h*\* > *T**h*. On properly accounting for this effective temperature, the upper bound of the efficiency is found to be bounded by the generalized Carnot limit, $$\eta\_{gen}=1-\frac{T\_c}{T\_h\left(1+2\sinh^2(r)\right)}.$$ Using squeezed thermal baths, it was also shown that work extraction is possible even from a single squeezed bath without violating the laws of thermodynamics. More recently, the efficiency bound on such quantum thermal machines has been quantitatively estimated using the notion of ergotropy, which quantifies the maximum amount of work that can be extracted from non-passive states through unitary protocols. ### Four-stroke QTM based on many-body systems An increasing amount of research in recent times is focusing on deploying quantum many-body quantum systems as working fluid in quantum stroke engines. This is because many-body systems, besides having the potential to naturally scale up the work output and efficiency per cycle of QTMs, are also capable of hosting certain novel phenomena which have no single-particle counterparts and can serve as thermodynamic resources. As for example, finite size scaling theory predicts that if the working fluid is operated closed to its critical point, the efficiency of the Otto engine can approach the Carnot limit at finite power. Similarly, using results known from energy-level statistics and localization properties of many-body localized phases, it has been shown that a quantum Otto engine operated with the working fluid ramped between a localized and a thermal phase has significant advantages. In particular, this engine exhibits lesser fluctuations in work output and can be easily scaled up in size as the localization ensures that different ‘sub-engines’ work independently of each other. At the same time, it is to be noted that ensuring adiabatic driving protocols is a more challenging task in many-body systems as compared to single-particle systems. Recent works have therefore focused on exploring viable shortcuts to adiabaticity protocols for many body quantum heat engines. A remarkable feature which emerges in many-body quantum engines is that non-adiabatic affects in some cases may even lead to enhancement of the engine’s performance. Such an enhancement has been demonstrated in the case of an Otto engine where the working fluid is an interacting Bose gas confined in a time-dependent harmonic trap. The efficiency achieved using the many-particle system is greater than the efficiency of an ensemble of single particle heat engines that have the same amount of thermodynamic resources at their disposal. Another mechanism though which non-adiabatic affects can be exploited to tap into the cooperative resources of many-body systems is tied to the notion of passive states. Technically, passive states are characterized by density matrices which are diagonal in energy basis and the population decrease with increase in energy. If a system is initially prepared in a passive state, then no work can be extracted out of it through cyclic unitary protocols. In a single-particle Otto engine, the final states after the completion of the isentropic strokes are passive states and the efficiency, as we have seen, is maximized when the strokes are adiabatic. However, it can be shown that the direct product state of multiple identical copies of a passive state, which is not additionally thermal, need not be a passive. This opens up the possibility of extracting extra amount of work in many-body systems. For maximizing efficiency, the direct product state at the end of the isentropic strokes is required to be passive, which in turn necessitates non-adiabatic excitations so that the populations among different copies can be interchanged. We will return to passive states in more detail when we address quantum batteries in Sec. [secbattery]. ### Non-Markovian QTMs In all the examples of four-stroke thermal machines considered thus far, the underlying assumption is that the dynamics of the QTM is strictly Markovian in nature. However, significant new results have also been reported recently for QTMs operating in the non-Markovian regime. In general, non-Markovian dynamics can result in a number of scenarios; such as in the limit of strong system-bath couplings and long decay times of bath correlation functions which implies a bath with memory. In Ref [], it was shown that frequent quantum nondemolition measurements can lead to extraction of useful work from the system-bath correlation energy if the cycle is operated within the bath memory time. Similarly, it was pointed out in Ref [] that in an Otto cycle with a TLS as the working fluid, thermalizing with a non-Markovian bath is not necessarily accompanied by a monotonous increase or decrease in the effective temperature of the TLS. This in turn allows the Otto engine to attain an efficiency which can apparently exceed the Carnot limit when operated in finite time. However, this apparent violation of the second law is resolved when the heat exchanged with the non-Markovian reservoirs is appropriately redefined by incorporating the minimum costs associated with the non-Markovianity of the dynamics. Another key result demonstrated in Ref. [] was that even if the working fluid shares correlations with only some degrees of freedom with the baths with the overall evolution remaining Markovian, the power output can still be boosted. In this regard, it is interesting to note a different model of a quantum Otto engine explored in Ref. []. This engine consists of a TLS strongly coupled with multimode harmonic oscillator reservoirs and can be mapped to a similar setup as studied in Ref. []. Through the so-called reaction-coordinate mapping, the total system of the TLS and the harmonic oscillator environment can be mapped to an enlarged system in which the TLS couples strongly with a single collective mode (called RC) of the environment. Further, the collective mode in turn couples weakly to the residual environment. This allows the strongly coupled TLS-RC system to be treated exactly by tracing out the weakly coupled residual environment using the Born-Markov approximations. Using this formalism, Ref. [] investigated the consequences of strong-coupling between the TLS and the reservoirs, incorporating the additional costs associated with sudden coupling/decoupling between the TLS and the reservoirs which becomes prominent in a strongl coupled setup. Indeed, the authors find that the performance of the engine is lowered due to these additional costs. In conclusion, it is fair to say that the impact of non-Markovian dynamics on the operation of QTMs is far from being fully explored and more research in this direction is expected in near future. Two stroke devices ------------------ Apart from four-stroke thermal machines, it is also possible to construct reciprocating thermal machines based on *two-stroke* cycles allahverdyanpre041118, allahverdyanpre051129,campisinjp035012, uzdinprx031044, campisijpa345002, buffoniprl070603,piccionepra032211. The reduction in number of strokes per cycle is compensated by increasing the number of systems or working fluids to two. As for example, consider two qubits *S*1 and *S*2 with energy gaps *ω*1 and *ω*2, respectively. These qubits can be individually coupled to two thermal baths with temperatures *T*1 and *T*2, such that *T*1 > *T*2. The two-stroke cycle consists of the following sequential strokes – (1) a thermalization stroke in which the qubits *S*1 and *S*2 are coupled with the baths having temperatures *T*1 and *T*2, respectively, and allowed to thermalize; (2) a unitary stroke in which the two qubits interact with each other, with no contact whatsoever with the baths, and work is performed on the composite system. The thermal machine based on this two-stroke cycle can work in three different modes depending on the ratio of the energy gaps *ω*1 and *ω*2. The different modes are characterized by the relative sign of the heat gained from the hot (cold) bath *Q**h* (*Q**c*) and the work performed *W*. These modes and their regime of operation are listed below: * Refrigerator (*Q**h* < 0, *Q**c* > 0, *W* > 0), when *ω*1/*ω*2 > *T*1/*T*2. * Engine (*Q**h* > 0, *Q**c* < 0, *W* < 0), when 1 < *ω*1/*ω*2 < *T*1/*T*2. * Accelerator (*Q**h* > 0, *Q**c* < 0, *W* > 0), when *ω*1/*ω*2 < 1. In the engine mode of operation, the efficiency is identical to that of the Otto cycle and is upper bounded by the Carnot efficiency, *η**t**w**o* − *s**t**r**o**k**e* = 1 − *ω*2/*ω*1 < 1 − *T*2/*T*1. In Sec. [subsecmagneto], we will discuss the working of the two-stroke cycle in more detail with an application of the same to quantum magnetometry. Quantum enhancement in performance of outcoupled QTMs ----------------------------------------------------- The purpose of a heat engine is to supply power to an external system. It is therefore plausible that any enhancement of quantum origin in the performance of quantum heat engines may manifest itself in the dynamics of the external system on which the engine performs work. This motivated the investigation of ‘outcoupled’ quantum heat engines in which the engine acts as a work resource for an external system and the performance of the engine is analyzed through energy measurements on the external system. ### Enhancement in performance from inter-cycle coherence Using such an outcoupled quantum heat engine, it was shown in Ref. [] that the average work performed by the engine over *n* cycles of the engine operation can be greater than *n* times the work performed over a single cycle. To this end, the authors considered a total Hamiltonian of the form, *H*(*t*) = *H**E*(*t*) + *H**B* + *H**E**B*(*t*) + *H**S* + *H**S**E*(*t*),  where *H**E*(*t*), *H**B* and *H**E**B*(*t*) represent the Hamiltonians of the engine (working fluid), baths and the engine-baths interaction respectively. The external system on which the engine perform work is represented by the time-independent Hamiltonian *H**S* and it is coupled with the engine through the interaction *H**S**E*(*t*). We would like to point out that, following the convention used in Ref. [], the term ‘engine’ here corresponds to only the working fluid and the baths are treated separately from the engine. All the time-dependent terms in the above Hamiltonian are periodic over a time *T* such that the total Hamiltonian also satisfies *H*(*t* + *T*) = *H*(*t*). The unitary time evolution operator over *n* cycles is thus given by *U**n**T* = *U**T**n* where *U**T* = T∫0*T*exp[ − *i**H*(*t*)*t*]*d**t*. In the above setup, the average work done by the engine is measured by projective energy measurements on the external system and calculating their differences. This can be done in two ways over *n* cycles. In the first case, the projective measurements are made at the start (*t* = 0) and at the end of *n* cycles (*t* = *n**T*). In the other case, *n* + 1 projective measurements are made, the first one at *t* = 0 followed by subsequent measurements at the end of each cycle. Note that in the second case, the per-cycle measurements destroy any coherence built up in the energy basis after each cycle. To compare the two cases discussed above, consider the Hamiltonian of the external system to be of the form $H\_S=\sum\_{i}E\_i\ket{i}$ and assume that the external system is initially prepared in the ground state $\ket{i=0}$. Thus, starting from a total initial state of the form $\rho\_0=\rho\_0^{EB}\otimes\ket{0}\bra{0}$, where *ρ*0*E**B* is the initial state of the bath-engine composite, it follows that the average work done in the first case is given by, $$\label{eq\_work\_many} {\langle}w{\rangle}\_n = \sum\_i\left(E\_i-E\_0\right)\sum\_{\bf {k}, \bf{k'}}\mathcal{U}\_{i,0}^{\bf{k};\bf{k'}},$$ where $\mathcal{U}\_{i,0}^{\bf{k};\bf{k'}}$ is of the form, $$\mathcal{U}\_{i,0}^{\bf{k};\bf{k'}}={\operatorname{Tr}}\_{EB}\left[\bra{i}U\_T\ket{k\_{n-1}}\dots\bra{k\_2}U\_T\ket{k\_1}\bra{k\_1}U\_T\ket{0}\rho\_0^{EB}\bra{0}U\_T^\dagger\ket{k'\_1}\bra{k'\_1}U\_T^\dagger\ket{k'\_2}\dots\bra{k'\_{n-1}}U\_T^\dagger\ket{i}\right].$$ In the above equations, $\ket{k\_j}, \ket{k'\_j}$ are the eigenstates of *H**S* with ${\bf k}=\{k\_1,k\_2,k\_3,\dots,k\_{n-1}\}$ and ${\bf k'}=\{k'\_1,k'\_2,k'\_3,\dots,k'\_{n-1}\}$. On the contrary, the average work done in the second case when projective measurements are performed at the end of each cycle is given by, $$\label{eq\_work\_proj} {\langle}\tilde{w}{\rangle}\_n = \sum\_i\left(E\_i-E\_0\right)\sum\_{\bf {k}. \bf{k}}\mathcal{U}\_{i,0}^{\bf{k};\bf{k}},$$ Note that the sum over the intermediate states in the above equation runs only over $\bf k$, unlike that in Eq.  where the sum runs over $\bf k$ and $\bf k'$. This is an artefact of the fact that the ‘inter-cycle’ coherence is suppressed in the second case. ![(a) Schematic representation of the outcoupled quantum Otto engine discussed in Sec. [subsubseccycles]. The inset represents the adiabatic increase in the energy gap of the TLS engine during the isentropic compression stroke. (b) Numerical comparison (dots) of the average work performed by an Otto engine using Eqs.  and . The solid lines are analytical values calculated from a perturbative treatment with g\ll 1 (see Ref.  for details). The other relevant parameters chosen for the numerics are g=0.02, b=0.1/\Delta, \omega T=2\pi\times0.05, v=0.5\Delta^2, T=20/\Delta, \beta_c=1/\Delta and \beta_h=1/4 E_{max} where E_{max}=2\sqrt{\Delta^2+(vT)^2/4}.](campo_cycles_num.png "fig:")[figcampocycles] An Otto engine is now constructed choosing a TLS as the working fluid and a harmonic oscillator as the external system as follows (see Fig. [figcampocycles] (a) for a schematic representation). In the thermalization steps, the TLS with Hamiltonian *H**E* = Δ*σ**x* is decoupled from the external system and is allowed to thermalize with one of the (thermal) baths. In the isentropic steps, the engine is decoupled from the baths and interacts with the external system, with Hamiltonian *H**S* = *ω**a*†*a*, via an impulse type interaction of the form *H**S**E*(*t*) = *g**S**E*(*t*)*σ**x*(*a* + *a*†) where, *g**S**E*(*t*) = *g*∑*m* = 0∞*δ*[*t* − (*m* + *b*)*T*],  with *g* ≪ 1 and 0 < *b* < 1. The unitary evolution in the isentropic steps is driven by the Hamiltonian *H*1(*t*) = *H**E*(*t*) + *H**S* + *H**S**E*(*t*) where, *H**E*(*t*) = Δ*σ**x* + Ω(*t*)*σ**z*,  where Ω(*t*) =  − *v**t* during the isentropic compression stroke (0 < *t* < *T*/2) and Ω(*t*) =  − *v*(*T* − *t*) in the isentropic expansion stroke (*T*/2 < *t* < *T*), with *v* being a constant. Each of the isentropic strokes takes a duration of time *T*/2 while the thermalization strokes are assumed to be completed in negligible times. A numerical comparison of the average work performed over *n* cycles using Eqs.  and  is given in Fig. [figcampocycles] (b). It can be clearly seen that ⟨*w*⟩*n* can be greater than ⟨*w̃*⟩*n* for appropriate choices of *n*, thus highlighting the possible enhancement in performance when inter-cycle coherence is preserved. Similar enhancements are also found to occur if one considers more generic interactions *g**S**E*(*t*) instead of the impulse type coupling considered in Eq.  (see Ref.  for details). ### Enhancement in performance from quantum statistical effects In a classical setting, it is reasonable to assume that the work performed on an external system by multiple heat engines working in parallel between a set of heat baths is proportional to the number of heat engines (Once again, ‘engine’ here refers to the working fluid and the baths are treated separately). However, it was shown in Ref. [] that this need not be true in a quantum setting, particularly when the engines are composed of indistinguishable systems. In fact, the bosonic statistics arising in such a scenario can boost the performance as we discuss below. In Ref. [], the authors considered an outcoupled quantum heat engine constructed from *N* two-level atoms as the working fluid (see Fig. [figcampomultiple] (a) for a schematic representation). The external system is once again chosen to be a harmonic oscillator with *H**S* = *ω**a*†*a*. As in Sec. [subsubseccycles], the total Hamiltonian is given by Eq. , where *H**E*(*t*) is the total Hamiltonian of all the atoms. Further, the interaction between the engines and the external system is of the form, *H**S**E*(*t*) = *g**S**E*(*t*)*V**E* ⊗ (*a* + *a*†) where *V**E* is a local operator defined on the Hilbert space of the engines. The coupling *g**S**E*(*t*) = *g**δ*(*t* − *t*1) with 0 < *t*1 < *T*/2 is chosen to be an impulse type coupling for simplicity in calculations. When the working fluid is composed of *N* distinguishable atoms, we have *V**E* = ∑*j* = 1*N**σ**x**j*. Similarly, the Hamiltonian of the engines during the isentropic strokes are given by, *H**E*(*t*) = Δ∑*j* = 1*N**σ**x**j* + Ω(*t*)∑*j* = 1*N**σ**z**j*. where Ω(*t*) = Ω(0) + *v**t* for the isentropic compression stroke and Ω(*t*) = Ω(0) + *v*(*T* − *t*) for the isentropic expansion stroke. ![(a) Schematic representation of the mutliple atoms outcoupled heat engine discussed in [subsubsecmultiple]. (b) Ratio of the work done \mathcal{E}={\langle}w{\rangle}_{indist}/{\langle}w{\rangle}_{dist} as a function of number of atoms N. The solid lines correspond to the analytical values obtained using Eq.  and the dots correspond to numerically calculated values. Here \theta_t=\tan^{-1}\left(\Omega(t)/\Delta\right) and the parameters chosen for numerics are g=0.01, v=0.1\Omega(0)^2, T=20/\Omega(0), t_1=0.35T/2, \omega T=2\pi\times0.05,\beta_c=2/\epsilon_0 and \beta_h=1/4\epsilon_{T/2} where \epsilon_t=\sqrt{\Omega(t)^2+\Delta^2} (see Ref. [] for details). ](ca.png "fig:")[figcampomultiple] On the other hand, when the atoms are considered to be bosonic and indistinguishable, we have *V**E* = *S**x* and *H**E*(*t*) during the isentropic strokes assume the form, *H**E*(*t*) = Δ*S**x* + Ω(*t*)*S**z*,  where *S**x* = *b*0†*b*1 + *b*1†*b*0 and *S**z* = *b*0†*b*0 − *b*1†*b*1, where *b*0(*b*0†) and *b*1(*b*1†) are the bosonic annhilation (creation) operators of the ground-state and excited-state atoms, respectively. The Otto engine is constructed in the same way as the mutli-cycle Otto engine discussed in [subsubseccycles], except that the measurements are made only over a single cycle. Therefore, the work performed can be evaluated by calculating the difference between the average energy of the external system at the start and the end of the cycle. Setting *E*0 = 0, we have, $${\langle}w{\rangle}= \sum\_iE\_i{\operatorname{Tr}}\_{EB}\left[\bra{i}U\_T^I\rho\_0U\_T^{I\dagger}\ket{i}\right],$$ where *ρ*0 is the initial state of the total system and *U**T**I* = Texp( − *i*∫0*T**H**S**E*(*t*)*d**t*) is the unitary evolution operator in the rotating frame with respect to *H*0(*t*) = *H**E*(*t*) + *H**B* + *H**E**B*(*t*) + *H**S*. After extensive calculations (see Ref. [] for details), an approximate analytical expression for the work done is found to be, $$\label{eq\_multiple\_ana} {\langle}w{\rangle}\simeq g^2{\langle}[V\_{E}^I(t\_1)]^2{\rangle}\_{\rho\_0^{EB}}\sum\_{i\neq 0}E\_i\left|\bra{i}V\_S^I(t\_1)\ket{0}\right|^2,$$ where *V**S**I*(*t*) = *e**i**ω**a*†*a**t*(*a* + *a*†)*e*− *i**ω**a*†*a**t*, *V**E**I*(*t*) = *U**E**B*†(*t*)*V**E**U**E**B*†(*t*) with $U\_{EB}(t)=\mathcal{T}\exp{[-i\int\_0^t(H\_E(t')+H\_B+\\H\_{EB}(t'))dt']}$ and ⟨…⟩*ρ*0*E**B* = Tr*E**B*[…*ρ*0*E**B*]. It is important to note that in the distinguishable case, all the possible 2*N* configurations of the atomic pseudospins are taken into account while calculating the trace over the eigenstates of the engine. On the contrary, the trace is taken only over the *N* + 1 symmetrized eigenstates of *S**z* in the case of indistinguishable atoms. Defining E = ⟨*w*⟩*i**n**d**i**s**t*/⟨*w*⟩*d**i**s**t* where ⟨*w*⟩*i**n**d**i**s**t* and ⟨*w*⟩*i**n**d**i**s**t* are the work performed with indistinguishable and distinguishable atoms, respectively, the authors of Ref. [] compared the work performed in the two cases. The results are illustrated in Fig. [figcampomultiple] (b) which clearly shows that the Bosonic statistics of the indistinguisable particles result in greater work output in a cycle as compared to the work output with distinguishable particles. Equivalence of thermal machines =============================== Naively, the two broad class of quantum thermal machines we have discussed, namely the continuous and reciprocating thermal machines, may appear to be vastly different in terms of their construction and operation. However, it was pointed out that in the limit of small bath action, they are indeed thermodynamically equivalent. The equivalence is valid within the Markovian and rotating wave approximations. To elaborate, consider the mapping $\rho\_{N\times N}\to\ket{\rho}\_{1\times N^2}$ of the density matrix of a *N*-level system. The GKLS master equation, governing the evolution of the density matrix can therefore be represented as, $$i\hbar\frac{d\ket{\rho}}{dt}=\mathcal{H}(t)\ket{\rho},$$ where H*N*2 × *N*2 is the super-operator which consists of terms arising from the system Hamiltonian as well as Lindblad operators. The bath action is then defined as, $$s=\int\_0^{\tau\_{cyc}}\|\mathcal{\tilde{H}}(t)\|dt,$$ where *τ**c**y**c* is the duration of one full cycle of operation, $\tilde{\mathcal{H}}$ denotes the super-operator H in the interaction picture and ∥ ⋅ ∥ is the operator norm defined as $\|\cdot\|=\mathrm{max}\sqrt{eig(\cdot^\dagger\cdot)}$. In the regime of small bath action with respect to the Planck’s constant, *s* ≪ ℏ, it can be shown that the state of the system in the continuous and reciprocating thermal machines differs by order O(*s*/ℏ) before completion of the cycle and by O((*s*/ℏ)3) at the end of the cycle. In fact, the work and heat transferred also differ by the same order of magnitude. Physically, the emergence of the equivalence is explained as follows. In general, work can be extracted through a coherent mechanism which involves alteration of the off-diagonal terms of the density matrix in energy eigen basis, as well as a stochastic mechanism which involves alteration of populations. In continuous machines, only the coherent mechanism is present while in the reciprocating machines, the stochastic mechanism is also present. In the limit of small bath action, the coherent mechanism strongly dominates and hence the thermal machines types become thermodynamically equivalent. Later, this equivalence was also extended to non-Markovian systems. ![Schematic representation of the Szilard engine. In the first step (step A), a single molecule gas is assumed to be in thermal equilibrium within a cylinder. A piston is then inserted (step B) in the middle of the cylinder and the molecule can be present in either part of the cylinder. An intelligent demon then measures the position of the gas molecule, depending on whose outcome, it attaches a pulley on the appropriate side of the piston. Finally, in step D, the gas expands by absorbing heat from a heat bath which pulls the load, thus performing useful work.](szilard.png "fig:")[figszilard] Quantum Szilard Engine ====================== Before concluding our discussion on QTMs, let us briefly mention the Szilard engine which, though not as technologically relevant, is tremendously important to gain a better understanding of the rapport between information and thermodynamics. When Maxwell proposed using ‘information’ as a resource to conceive what came to be famously known as the Maxwell’s demon, the second law appeared to be under some serious challenge. Building on his work, Szilard proposed an engine in which a feedback assisted cyclic process appeared to allow conversion of all the heat extracted from a single reservoir into work. This conceptual and ideal engine, operating with a single molecule gas as the working fluid, works as follows (see Fig. [figszilard] for a schematic representation). Consider a single molecule gas confined in a cylinder which is in contact with a heat reservoir. A piston, having a wide opening in its centre that can be closed with a friction less shutter, is placed in the middle of the cylinder. When the shutter is closed, the piston divides the cylinder volume into two equal parts and the molecule can be present on either side of the piston with equal probability. An intelligent demon then performs a *measurement* to determine on which side of the piston the molecule is located. Depending on the measurement outcome, the demon attaches a string to the right or left side of the piston such that the isothermal expansion of the single molecule gas pulls the string, thus lifting any weight attached to the other end of the string. Once the required work is completed, the shutter is opened so that the single molecule gas can once again occupy the whole volume of the cylinder. Therefore, the expansion of the gas driven by the heat extracted from the reservoir is completely converted into work. Classically, the widely accepted solution to this paradox is given by the Landauer’s erasure principle, which associates an energetic cost with any *logically irreversible manipulation of information*. As for example, resetting the information stored in a memory bit amounts to an increase of entropy  ∼ *k**B*log2 where *k**B* is the Boltzmann constant and hence, a minimum amount of heat  ∼ *k**B**T*log2 is dissipated to the environment at temperature *T*. A truly cyclic process demands that the memory of the demon is also restored at the end of the complete cycle. This requires an erasure of the information acquired by the demon during the measurement which results in heat dissipation, thus accounting for a second heat reservoir or sink. Thus it became apparent that *information* should be visualized as a physical entity that has a direct bearing on thermodynamic processes. Note that the Szilard engine, by construction, is a microscopic engine working with a single molecule and hence, a rigorous analysis must also incorporate the quantum effects in play. With this motivation, numerous quantum models of the Szilard engine has been investigated over the years. While early works had explored the consequences of endowing a quantum nature to the measurement and information erasure processes, many recent works have also investigated exotic variations of the Szilard engine. Similarly, it was pointed out in Ref [], that an energetic cost should be associated with the insertion or removal of piston in the quantum case, resulting from changes of boundary conditions. Recently, it has also been shown that it is possible to operate a Szilard engine working without any thermal source by drawing energy from projective measurements, although it may lose some of its characteristics in the process. Nevertheless, there is no universal consensus regarding the significance and implications of a fully quantized Szilard engine and much remains to be understood. Applications in quantum metrology ================================= In this section, we review a couple of recently proposed applications of quantum thermal machines in the field of quantum metrology. The basic idea behind the protocols is to use quantum thermal machines as sensitive probes, which can be coupled to a given system and some parameter of the system is then estimated through an indirect measurement on the probe. The precision attained by using a particular protocol is assessed through a comparison with the minimum bound on the relative error dictated by the quantum Cramer-Rao bound. This bound is quantified by the so called quantum Fisher information, which unlike the classical Fisher information, has a geometrical origin and is therefore uniquely determined by the state of the system on which the measurement is to be performed. It is therefore imperative that we present a brief summary of the QFI before moving on to discuss the protocols in detail. Quantum Fisher Information -------------------------- Let us consider that we wish to estimate a parameter *θ* through an indirect measurement on a random variable *X*. The probability that the outcome of a measurement on *X* is *x**i* ∈ *X*, is determined by the conditional probability (*x*∣*θ*). The parameter *θ* is read off from the measurement on *X* through the estimator *θ̂*(*X*). Let us assume that the true value of the parameter is *θ* = *θ*0. Further, we assume an *unbiased* estimator, i.e. ⟨*θ̂*(*x*) − *θ*0⟩ = 0. Taking a partial derivative w.r.t. *θ*0, we get, $$\frac{\partial}{\partial\theta\_0}{\langle}\hat{\theta}(x)-\theta\_0{\rangle}=\int\left(\hat{\theta}(x)-\theta\_0\right)\frac{\partial p(x|\theta\_0)}{\partial\theta\_0}dx-\int p(x|\theta\_0)dx=0.$$ or, $$\label{eq\_cfi\_deriv} \int\left(\hat{\theta}(x)-\theta\_0\right)p(x|\theta\_0)\frac{\partial}{\partial\theta\_0} \log p(x|\theta\_0)dx=1,$$ where we have used the equalities ∂*θ*0*p*(*x*∣*θ*0) = *p*(*x*∣*θ*0)∂*θ*0log*p*(*x*∣*θ*0) and ∫*p*(*x*∣*θ*0)*d**x* = 1. Next, consider the following relation by virtue of the Cauchy-Schwarz inequality, $$\begin{gathered} \label{ineq\_cauchy} \int\Big[\left(\hat{\theta}(x)-\theta\_0\right)\sqrt{p(x|\theta\_0)}\Big]\Big[\sqrt{p(x|\theta\_0)}\frac{\partial}{\partial\theta\_0} \log p(x|\theta\_0)\Big]dx\\ \leq\sqrt{\Big[\int\left(\hat{\theta}(x)-\theta\_0\right)^2p(x|\theta\_0)dx\Big]\Big[\int p(x|\theta\_0)\left(\frac{\partial}{\partial\theta\_0} \log p(x|\theta\_0)\right)^2dx\Big]}.\end{gathered}$$ Recognizing, ∫(*θ̂*(*x*) − *θ*0)2*p*(*x*∣*θ*0)*d**x* = Var(*θ*),  and rearranging Eq.  appropriately, we arrive at the (classical) Cramer-rao bound, $$\mathrm{Var}(\theta)\geq\frac{1}{\mathcal{I}(\theta\_0)},$$ where I(*θ*0) is the classical Fisher information (CFI) given by, $$\label{eq\_fi} \mathcal{I}(\theta\_0)=\int p(x|\theta\_0)\left(\frac{\partial}{\partial\theta\_0} \log p(x|\theta\_0)\right)^2dx=\int\frac{1}{p(x|\theta\_0)}\left(\frac{\partial p(x|\theta\_0)}{\partial\theta\_0}\right)^2dx.$$ The CFI, as defined above, depends on the probability distribution *p*(*x*∣*θ*). In other words, the CFI is influenced by the choice of the random variable *X* used for estimating the parameter *θ*. From a quantum mechanical viewpoint, the probability distribution *p*(*x*∣*θ*) is obtained as *p*(*x*∣
arxiv_0000293
4 & 2.08 & -1.89 & 1.56 & …& v2 J231433.38-523439.3 & -82.1 & 4925 & 1.90 & -2.11 & 2.71 & …& v2 J231529.71-462235.1 & -52.6 & 4622 & 0.95 & -2.57 & 3.27 & -0.20 & v2 J231645.30-404728.2 & …& …& …& …& …& …& v2 & 1 J231756.18-555332.4 & 267.3 & 4649 & 0.77 & -2.10 & 3.86 & …& v2 J231845.22-280744.8 & -38.1 & 4885 & 1.55 & -1.73 & 2.02 & …& v2 J231849.69-575012.6 & 121.9 & 4896 & 2.06 & -1.31 & 1.78 & …& v2 J232105.51-093254.7 & 60.9 & 4720 & 3.80 & -0.27 & 0.70 & …& v2 & 2 J232205.07-414138.3 & 97.6 & 5400 & 4.90 & -0.60 & 1.60 & …& v2 J232354.58-473023.4 & -52.5 & 4820 & 2.15 & -1.26 & 1.53 & …& v2 J232715.93-595258.3 & -32.6 & 5000 & 4.20 & -0.07 & 0.80 & …& v2 J232752.28-475202.3 & 106.8 & 4844 & 1.62 & -1.78 & 1.94 & …& v2 J232922.88-102343.6 & -95.8 & 5106 & 2.58 & -2.16 & 1.23 & …& v2 J232956.10-565659.8 & -26.6 & 5453 & 4.53 & -0.70 & 2.30 & …& v2 J233348.55-163352.9 & -6.3 & 4980 & 4.84 & -0.43 & 2.09 & …& v2 J233624.25-470759.8 & 29.7 & 4884 & 1.90 & -1.81 & 2.20 & …& v2 J233628.41-553742.6 & 21.7 & 4981 & 1.41 & -1.80 & 2.70 & …& v2 J233652.73-574249.6 & 73.5 & …& …& …& …& …& v2 & 2,3 J234331.22-450953.3 & 46.5 & …& …& …& …& …& v2 & 2,3,4 J234343.67-161522.4 & 29.3 & …& …& …& …& …& v2 & 2,3,4 J234921.12-405759.0 & -12.8 & 4750 & 4.20 & -1.05 & 1.95 & …& v2 J235146.46-541146.8 & -2.3 & 4350 & 3.45 & -0.90 & 2.30 & …& v2 J235242.16-451355.1 & 0.6 & …& …& …& …& …& v1 & 2,3 J235351.50-224735.2 & 1.4 & 4627 & 0.97 & -1.47 & 2.29 & …& v1 J235637.15-515552.9 & -1.4 & 4858 & 3.90 & -0.30 & 0.82 & …& v2 J235714.41-474944.8 & -25.4 & 4852 & 4.00 & -0.99 & 1.01 & …& v2 J235834.61-554917.0 & 26.4 & 5400 & 2.30 & 0.30 & 1.05 & …& v2 lccccccccc J121325.53-380021.6 & 12:13:25.5 & -38:00:21 & 11.27 & 0.81 & 9.92 & 9.49 & 9.43 & 9.36 & 9.39 J121812.33-372106.5 & 12:18:12.3 & -37:21:06 & 14.91 & 1.09 & 12.82 & 12.24 & 12.11 & 12.05 & 12.06 J122800.24-303137.6 & 12:28:00.2 & -30:31:38 & 14.12 & 1.12 & 11.94 & 11.35 & 11.24 & 11.17 & 11.20 J122851.02-300335.0 & 12:28:51.0 & -30:03:35 & 13.17 & 0.92 & 11.24 & 10.75 & 10.60 & 10.61 & 10.60 J122859.57-293619.2 & 12:28:59.6 & -29:36:19 & 13.29 & 1.29 & 10.64 & 10.07 & 9.84 & 9.79 & 9.77 J122934.47-323307.4 & 12:29:34.4 & -32:33:07 & 12.59 & 0.97 & 10.62 & 10.09 & 9.97 & 9.91 & 9.94 J122950.34-304017.2 & 12:29:50.3 & -30:40:17 & 12.84 & 0.84 & 11.07 & 10.57 & 10.46 & 10.40 & 10.42 J123015.17-305541.1 & 12:30:15.2 & -30:55:41 & 13.07 & 0.87 & 11.46 & 10.98 & 10.92 & 10.87 & 10.89 J123100.36-300953.2 & 12:31:00.4 & -30:09:53 & 11.92 & 0.83 & 10.17 & 9.70 & 9.50 & 9.46 & 9.46 J123118.86-311401.9 & 12:31:18.9 & -31:14:02 & 13.30 & 1.14 & 11.32 & 10.73 & 10.63 & 10.59 & 10.62 J123122.75-301530.1 & 12:31:22.8 & -30:15:30 & 12.71 & 0.72 & 11.16 & 10.68 & 10.60 & 10.51 & 10.51 J123221.73-285313.6 & 12:32:21.7 & -28:53:14 & 13.61 & 1.32 & 10.84 & 10.24 & 10.07 & 9.92 & 9.89 J123227.40-301457.3 & 12:32:27.4 & -30:14:57 & 13.14 & 0.82 & 11.39 & 10.89 & 10.78 & 10.72 & 10.74 J123249.92-300831.9 & 12:32:49.9 & -30:08:32 & 13.13 & 0.97 & 11.22 & 10.70 & 10.64 & 10.56 & 10.58 J123359.78-294404.2 & 12:33:59.8 & -29:44:04 & 11.20 & 0.82 & 9.55 & 9.04 & 8.97 & 8.88 & 8.89 J123539.62-272331.4 & 12:35:39.6 & -27:23:31 & 13.23 & 0.86 & 11.61 & 11.14 & 11.06 & 11.03 & 11.04 J123543.81-303212.0 & 12:35:43.8 & -30:32:12 & 13.56 & 0.98 & 11.54 & 10.96 & 10.87 & 10.82 & 10.78 J123643.63-024917.9 & 12:36:43.6 & -02:49:18 & 13.20 & 0.82 & 11.31 & 10.77 & 10.69 & 10.65 & 10.66 J123800.46-282708.6 & 12:38:00.5 & -28:27:09 & 13.84 & 1.02 & 11.98 & 11.47 & 11.36 & 11.28 & 11.29 J123851.73-305616.1 & 12:38:51.7 & -30:56:16 & 11.83 & 0.81 & 10.19 & 9.68 & 9.67 & 9.57 & 9.60 J123918.93-264941.9 & 12:39:18.9 & -26:49:42 & 13.48 & 1.04 & 11.47 & 10.96 & 10.79 & 10.71 & 10.69 J124055.64-251247.9 & 12:40:55.6 & -25:12:48 & 12.19 & 0.89 & 10.29 & 9.80 & 9.67 & 9.61 & 9.61 J124253.59-252644.1 & 12:42:53.6 & -25:26:44 & 13.53 & 0.83 & 11.87 & 11.38 & 11.34 & 11.25 & 11.28 J124331.99-240837.8 & 12:43:32.0 & -24:08:38 & 11.79 & 0.92 & 9.92 & 9.38 & 9.28 & 9.21 & 9.24 J124404.57-303844.2 & 12:44:04.6 & -30:38:44 & 13.51 & 0.88 & 11.84 & 11.36 & 11.28 & 11.24 & 11.28 J124624.44-270602.0 & 12:46:24.4 & -27:06:02 & 13.72 & 0.89 & 11.88 & 11.33 & 11.24 & 11.15 & 11.17 J124650.85-202135.2 & 12:46:50.9 & -20:21:35 & 13.18 & 1.03 & 11.18 & 10.64 & 10.49 & 10.48 & 10.48 J124712.11-211350.6 & 12:47:12.1 & -21:13:51 & 13.07 & 0.82 & 11.29 & 10.83 & 10.68 & 10.64 & 10.63 J124724.33-253720.7 & 12:47:24.3 & -25:37:21 & 13.70 & 0.97 & 11.67 & 11.08 & 11.01 & 10.95 & 10.99 J124727.86-143301.8 & 12:47:27.9 & -14:33:02 & 13.51 & 1.12 & 11.32 & 10.74 & 10.61 & 10.55 & 10.54 J124729.01-224535.4 & 12:47:29.0 & -22:45:35 & 12.83 & 0.95 & 11.03 & 10.48 & 10.37 & 10.33 & 10.36 J124748.75-202233.0 & 12:47:48.8 & -20:22:33 & 12.10 & 0.86 & 10.46 & 10.00 & 9.91 & 9.86 & 9.87 J124820.62-201506.9 & 12:48:20.6 & -20:15:07 & 12.99 & 0.86 & 11.42 & 10.95 & 10.88 & 10.83 & 10.86 J124945.02-264618.5 & 12:49:45.0 & -26:46:19 & 12.87 & 0.97 & 10.91 & 10.33 & 10.22 & 10.14 & 10.16 J125006.07-203416.2 & 12:50:06.1 & -20:34:16 & 13.08 & 0.87 & 11.35 & 10.86 & 10.76 & 10.72 & 10.73 J125036.76-211943.5 & 12:50:36.8 & -21:19:44 & 12.97 & 1.00 & 10.99 & 10.46 & 10.34 & 10.30 & 10.30 J125148.15-272525.7 & 12:51:48.2 & -27:25:26 & 13.32 & 0.71 & 11.69 & 11.23 & 11.15 & 11.10 & 11.10 J125258.31-191635.8 & 12:52:58.3 & -19:16:36 & 10.90 & 0.73 & 9.55 & 9.05 & 8.95 & 8.91 & 8.94 J125307.86-225638.2 & 12:53:07.9 & -22:56:38 & 13.11 & 0.83 & 11.29 & 10.76 & 10.68 & 10.61 & 10.63 J125335.64-261607.7 & 12:53:35.6 & -26:16:08 & 13.25 & 0.81 & 11.64 & 11.19 & 11.12 & 11.07 & 11.09 J125357.66-265849.9 & 12:53:57.7 & -26:58:50 & 13.42 & 0.92 & 11.47 & 10.88 & 10.82 & 10.75 & 10.77 J125404.87-144553.2 & 12:54:04.9 & -14:45:53 & 10.51 & 0.82 & 9.04 & 8.55 & 8.54 & 8.36 & 8.36 J125607.34-254505.3 & 12:56:07.3 & -25:45:05 & 12.81 & 0.94 & 10.96 & 10.38 & 10.29 & 10.22 & 10.26 J125704.60-105407.3 & 12:57:04.6 & -10:54:07 & 13.73 & 0.99 & 11.66 & 11.11 & 11.00 & 10.93 & 10.92 J125817.07-182848.5 & 12:58:17.1 & -18:28:49 & 13.54 & 0.98 & 11.62 & 11.08 & 10.97 & 10.90 & 10.93 J125845.73-171730.7 & 12:58:45.7 & -17:17:31 & 12.96 & 0.87 & 11.31 & 10.79 & 10.73 & 10.60 & 10.60 J130202.89-133005.1 & 13:02:02.9 & -13:30:05 & 13.57 & 1.07 & 11.44 & 10.88 & 10.75 & 10.67 & 10.68 J130815.23-053601.9 & 13:08:15.2 & -05:36:02 & 12.97 & 1.03 & 10.91 & 10.37 & 10.27 & 10.19 & 10.20 J130958.14-175101.9 & 13:09:58.1 & -17:51:02 & 11.57 & 0.90 & 9.75 & 9.24 & 9.14 & 9.06 & 9.08 J131110.83-052449.0 & 13:11:10.8 & -05:24:49 & 12.88 & 0.68 & 11.43 & 10.93 & 10.88 & 10.86 & 10.88 J131211.17-082754.5 & 13:12:11.2 & -08:27:55 & 13.20 & 0.93 & 11.37 & 10.88 & 10.72 & 10.66 & 10.65 J131224.90-053931.7 & 13:12:24.9 & -05:39:32 & 13.39 & 0.97 & 11.56 & 10.96 & 10.88 & 10.84 & 10.84 J131311.19-205222.6 & 13:13:11.2 & -20:52:23 & 13.49 & 0.97 & 11.59 & 11.02 & 10.93 & 10.84 & 10.86 J131404.35-083246.9 & 13:14:04.3 & -08:32:47 & 12.79 & 0.96 & 10.88 & 10.32 & 10.21 & 10.13 & 10.15 J131543.49-213024.1 & 13:15:43.5 & -21:30:24 & 12.72 & 0.92 & 10.82 & 10.30 & 10.18 & 10.10 & 10.12 J131549.29-140429.0 & 13:15:49.3 & -14:04:29 & 13.32 & 0.85 & 11.63 & 11.11 & 11.00 & 10.97 & 10.96 J131612.23-141749.9 & 13:16:12.2 & -14:17:50 & 13.62 & 1.10 & 11.31 & 10.73 & 10.62 & 10.53 & 10.53 J132109.71-095656.5 & 13:21:09.7 & -09:56:57 & 13.77 & 1.11 & 11.46 & 10.98 & 10.74 & 10.55 & 10.52 J132206.81-125711.5 & 13:22:06.8 & -12:57:12 & 13.28 & 1.02 & 11.31 & 10.77 & 10.64 & 10.56 & 10.58 J132310.08-053143.0 & 13:23:10.1 & -05:31:43 & 11.18 & 0.78 & 9.46 & 8.94 & 8.84 & 8.76 & 8.76 J132328.18-084924.5 & 13:23:28.2 & -08:49:25 & 13.21 & 0.78 & 11.55 & 11.08 & 11.01 & 10.93 & 10.93 J132339.07-161622.7 & 13:23:39.1 & -16:16:23 & 13.62 & 0.92 & 11.72 & 11.17 & 11.06 & 10.99 & 11.00 J132352.61-063823.0 & 13:23:52.6 & -06:38:23 & 13.09 & 0.91 & 11.07 & 10.56 & 10.44 & 10.37 & 10.36 J132400.06-174810.1 & 13:24:00.1 & -17:48:10 & 12.40 & 0.86 & 10.64 & 10.12 & 10.06 & 9.99 & 9.99 J132442.75-160700.9 & 13:24:42.8 & -16:07:01 & 12.22 & 0.87 & 10.39 & 9.83 & 9.72 & 9.65 & 9.68 J132544.89-150639.3 & 13:25:44.9 & -15:06:39 & 13.42 & 0.96 & 11.41 & 10.84 & 10.73 & 10.65 & 10.68 J132604.50-152502.0 & 13:26:04.5 & -15:25:02 & 11.46 & 1.00 & 9.54 & 8.95 & 8.85 & 8.74 & 8.77 J132649.82-065500.5 & 13:26:49.8 & -06:55:01 & 13.60 & 0.96 & 11.74 & 11.21 & 11.08 & 10.96 & 10.97 J132701.65-070301.5 & 13:27:01.6 & -07:03:02 & 13.30 & 1.07 & 11.07 & 10.51 & 10.44 & 10.25 & 10.26 J132722.65-165145.1 & 13:27:22.6 & -16:51:45 & 12.62 & 0.82 & 10.92 & 10.43 & 10.36 & 10.28 & 10.31 J132736.76-171038.5 & 13:27:36.8 & -17:10:39 & 12.53 & 0.74 & 10.85 & 10.40 & 10.32 & 10.23 & 10.24 J132751.40-180718.0 & 13:27:51.4 & -18:07:18 & 12.44 & 0.88 & 11.54 & 11.02 & 10.88 & 10.89 & 10.90 J132809.17-154633.6 & 13:28:09.2 & -15:46:34 & 12.78 & 0.81 & 11.00 & 10.51 & 10.41 & 10.33 & 10.34 J132841.04-011109.1 & 13:28:41.0 & -01:11:09 & 13.68 & 1.06 & 11.69 & 11.15 & 11.00 & 10.93 & 10.94 J133030.29-155143.0 & 13:30:30.3 & -15:51:43 & 11.34 & 0.91 & 9.41 & 8.84 & 8.71 & 8.62 & 8.64 J133142.29-001859.8 & 13:31:42.3 & -00:19:00 & 13.61 & 1.02 & 11.45 & 10.85 & 10.78 & 10.70 & 10.72 J133456.89-115027.0 & 13:34:56.9 & -11:50:27 & 13.98 & 1.10 & 11.74 & 11.15 & 11.00 & 10.96 & 10.98 J133514.03-011052.4 & 13:35:14.0 & -01:10:52 & 11.93 & 0.98 & 9.90 & 9.34 & 9.23 & 9.16 & 9.19 J133846.30-081659.4 & 13:38:46.3 & -08:16:59 & 13.20 & 0.83 & 11.51 & 10.99 & 10.93 & 10.85 & 10.87 J134254.05-071700.5 & 13:42:54.1 & -07:17:01 & 12.03 & 1.11 & 10.03 & 9.45 & 9.30 & 9.19 & 9.20 J134346.34-080605.8 & 13:43:46.4 & -08:06:06 & 12.06 & 1.06 & 10.26 & 9.71 & 9.57 & 9.44 & 9.46 J134521.11-073054.5 & 13:45:21.1 & -07:30:55 & 12.46 & 1.07 & 10.45 & 9.87 & 9.75 & 9.66 & 9.68 J135216.59-355425.8 & 13:52:16.5 & -35:54:25 & 14.54 & 0.79 & 12.94 & 12.47 & 12.37 & 12.32 & 12.31 J135841.47-315110.2 & 13:58:41.4 & -31:51:10 & 11.27 & 0.86 & 9.75 & 9.34 & 9.22 & 9.18 & 9.22 J140100.52-283026.4 & 14:01:00.5 & -28:30:26 & 12.63 & 0.81 & 11.11 & 10.64 & 10.59 & 10.53 & 10.56 J140132.79-281333.2 & 14:01:32.8 & -28:13:33 & 13.23 & 1.02 & 11.32 & 10.73 & 10.65 & 10.58 & 10.62 J140746.32-280908.1 & 14:07:46.3 & -28:09:08 & 12.81 & 0.77 & 11.43 & 10.95 & 10.88 & 10.85 & 10.88 J140825.63-292902.6 & 14:08:25.6 & -29:29:03 & 13.51 & 1.06 & 11.54 & 10.97 & 10.89 & 10.79 & 10.80 J140955.13-284425.9 & 14:09:55.1 & -28:44:26 & 11.83 & 0.74 & 10.22 & 9.74 & 9.67 & 9.59 & 9.60 J141032.84-281633.9 & 14:10:32.8 & -28:16:34 & 12.23 & 1.15 & 10.12 & 9.55 & 9.40 & 9.32 & 9.34 J141110.08-230207.4 & 14:11:10.1 & -23:02:07 & 13.22 & 0.90 & 11.58 & 11.09 & 11.04 & 10.99 & 11.01 J141403.15-234440.0 & 14:14:03.2 & -23:44:40 & 13.58 & 0.89 & 11.95 & 11.47 & 11.41 & 11.37 & 11.40 J141544.41-251932.2 & 14:15:44.4 & -25:19:32 & 13.14 & 1.46 & 10.01 & 9.43 & 9.18 & 9.05 & 9.03 J141546.80-223818.0 & 14:15:46.8 & -22:38:18 & 13.52 & 0.73 & 11.86 & 11.36 & 11.29 & 11.20 & 11.21 J141712.51-203858.0 & 14:17:12.5 & -20:38:58 & 12.95 & 0.90 & 11.19 & 10.72 & 10.61 & 10.56 & 10.58 J141733.51-274514.4 & 14:17:33.5 & -27:45:14 & 13.51 & 1.07 & 11.70 & 11.20 & 11.11 & 11.04 & 11.07 J141738.40-264733.4 & 14:17:38.4 & -26:47:33 & 13.16 & 0.88 & 11.40 & 10.85 & 10.77 & 10.67 & 10.71 J141759.95-241546.3 & 14:18:00.0 & -24:15:46 & 12.80 & 0.90 & 10.88 & 10.36 & 10.24 & 10.14 & 10.15 J141809.26-185954.9 & 14:18:09.3 & -18:59:55 & 12.65 & 1.42 & 9.81 & 9.22 & 9.03 & 8.90 & 8.89 J141826.98-232010.1 & 14:18:27.0 & -23:20:10 & 13.60 & 0.99 & 11.64 & 11.07 & 11.02 & 10.92 & 10.96 J141924.83-230737.1 & 14:19:24.8 & -23:07:37 & 13.24 & 1.06 & 11.20 & 10.69 & 10.53 & 10.46 & 10.49 J142030.32-272946.0 & 14:20:30.3 & -27:29:46 & 12.09 & 0.87 & 10.44 & 9.91 & 9.85 & 9.76 & 9.79 J142046.66-200010.0 & 14:20:46.7 & -20:00:10 & 13.24 & 0.89 & 11.34 & 10.79 & 10.71 & 10.63 & 10.65 J142051.35-235202.5 & 14:20:51.4 & -23:52:03 & 14.06 & 1.16 & 11.75 & 11.18 & 11.04 & 10.99 & 11.00 J142110.86-214813.6 & 14:21:10.9 & -21:48:14 & 12.22 & 0.88 & 10.59 & 10.09 & 10.02 & 9.94 & 9.97 J142149.04-241054.7 & 14:21:49.0 & -24:10:55 & 13.50 & 0.85 & 11.71 & 11.15 & 11.11 & 11.01 & 11.01 J142332.96-185957.3 & 14:23:33.0 & -18:59:57 & 13.69 & 0.95 & 11.63 & 11.16 & 10.98 & 10.93 & 10.92 J142552.89-264742.8 & 14:25:52.9 & -26:47:43 & 13.47 & 1.13 & 11.46 & 10.86 & 10.75 & 10.68 & 10.71 J142605.25-135652.2 & 14:26:05.3 & -13:56:52 & 13.62 & 1.07 & 11.47 & 10.87 & 10.72 & 10.67 & 10.69 J142805.73-135317.4 & 14:28:05.7 & -13:53:17 & 12.05 & 0.92 & 10.12 & 9.57 & 9.50 & 9.41 & 9.44 J142916.03-143535.7 & 14:29:16.0 & -14:35:36 & 12.01 & 0.76 & 10.38 & 9.91 & 9.80 & 9.74 & 9.76 J142924.23-155149.2 & 14:29:24.2 & -15:51:49 & 13.29 & 0.76 & 11.71 & 11.20 & 11.10 & 11.05 & 11.06 J143057.62-152345.9 & 14:30:57.6 & -15:23:46 & 13.16 & 0.87 & 11.16 & 10.65 & 10.55 & 10.46 & 10.47 J143409.52-102449.5 & 14:34:09.5 & -10:24:50 & 13.80 & 0.99 & 11.69 & 11.13 & 11.01 & 10.94 & 10.96 J143412.73-120856.6 & 14:34:12.7 & -12:08:57 & 13.58 & 1.02 & 11.32 & 10.75 & 10.61 & 10.53 & 10.51 J143428.10-121449.7 & 14:34:28.1 & -12:14:50 & 13.88 & 0.99 & 11.90 & 11.32 & 11.23 & 11.17 & 11.20 J143630.55-134523.2 & 14:36:30.6 & -13:45:23 & 13.54 & 1.02 & 11.46 & 10.90 & 10.80 & 10.68 & 10.68 J143642.51-071510.4 & 14:36:42.5 & -07:15:10 & 11.93 & 0.79 & 10.27 & 9.80 & 9.71 & 9.64 & 9.65 J143643.71-115050.4 & 14:36:43.7 & -11:50:50 & 13.65 & 0.95 & 11.83 & 11.25 & 11.17 & 11.11 & 11.13 J143715.34-093654.3 & 14:37:15.3 & -09:36:54 & 12.95 & 1.05 & 10.97 & 10.37 & 10.25 & 10.19 & 10.21 J143801.19-120444.3 & 14:38:01.2 & -12:04:44 & 13.17 & 1.04 & 11.22 & 10.66 & 10.55 & 10.47 & 10.49 J143816.29-115033.6 & 14:38:16.3 & -11:50:34 & 13.07 & 1.05 & 11.12 & 10.56 & 10.47 & 10.16 & 10.18 J144254.60-124252.9 & 14:42:54.6 & -12:42:53 & 13.04 & 0.95 & 11.24 & 10.74 & 10.65 & 10.59 & 10.61 J144314.82-020618.1 & 14:43:14.8 & -02:06:18 & 12.07 & 1.13 & 9.88 & 9.29 & 9.18 & 9.07 & 9.10 J144357.61-054833.1 & 14:43:57.6 & -05:48:33 & 12.72 & 0.89 & 10.86 & 10.30 & 10.22 & 10.11 & 10.14 J144421.17-033727.0 & 14:44:21.2 & -03:37:27 & 13.14 & 0.87 & 11.28 & 10.74 & 10.65 & 10.54 & 10.57 J144458.47-030536.1 & 14:44:58.5 & -03:05:36 & 13.38 & 0.73 & 11.73 & 11.26 & 11.17 & 11.10 & 11.11 J144747.67-012248.2 & 14:47:47.7 & -01:22:48 & 13.18 & 0.87 & 11.50 & 10.96 & 10.88 & 10.79 & 10.82 J144758.79-001415.8 & 14:47:58.8 & -00:14:16 & 13.36 & 0.83 & 11.58 & 11.07 & 10.98 & 10.89 & 10.89 J144809.90-000442.9 & 14:48:09.9 & -00:04:43 & 12.97 & 0.91 & 11.23 & 10.69 & 10.63 & 10.54 & 10.58 J145108.09-014250.2 & 14:51:08.1 & -01:42:50 & 13.26 & 0.88 & 11.49 & 11.02 & 10.89 & 10.80 & 10.79 J150141.47-202820.6 & 15:01:41.5 & -20:28:21 & 13.94 & 1.00 & 11.97 & 11.47 & 11.39 & 11.20 & 11.19 J150159.91-261349.4 & 15:01:59.9 & -26:13:49 & 11.51 & 0.98 & 9.48 & 8.95 & 8.82 & 8.74 & 8.77 J150237.42-244219.1 & 15:02:37.4 & -24:42:19 & 11.60 & 0.93 & 9.88 & 9.41 & 9.30 & 9.23 & 9.27 J150240.73-224229.8 & 15:02:40.7 & -22:42:30 & 13.66 & 0.93 & 11.83 & 11.31 & 11.19 & 11.15 & 11.18 J150244.65-224204.0 & 15:02:44.7 & -22:42:04 & 14.04 & 0.99 & 11.98 & 11.44 & 11.33 & 11.23 & 11.24 J150403.34-203127.7 & 15:04:03.4 & -20:31:28 & …& …& 10.84 & 10.30 & 10.24 & 10.15 & 10.18 J150425.56-223148.7 & 15:04:25.6 & -22:31:49 & 13.17 & 0.87 & 11.33 & 10.83 & 10.74 & 10.63 & 10.65 J150514.41-211719.6 & 15:05:14.4 & -21:17:20 & 13.94 & 0.96 & 11.97 & 11.41 & 11.32 & 11.24 & 11.25 J150523.83-232311.0 & 15:05:23.8 & -23:23:11 & 12.99 & 1.05 & 10.71 & 10.12 & 9.99 & 9.91 & 9.89 J150528.98-222309.3 & 15:05:29.0 & -22:23:09 & 13.01 & 0.91 & 11.14 & 10.65 & 10.51 & 10.44 & 10.45 J150545.31-210120.0 & 15:05:45.3 & -21:01:20 & 13.09 & 0.89 & 11.23 & 10.69 & 10.60 & 10.55 & 10.56 J150547.08-221557.3 & 15:05:47.1 & -22:15:57 & 13.68 & 0.95 & 11.97 & 11.46 & 11.38 & 11.31 & 11.34 J150600.72-215741.5 & 15:06:00.7 & -21:57:42 & 12.99 & 0.92 & 11.08 & 10.56 & 10.43 & 10.32 & 10.34 J150601.36-250831.9 & 15:06:01.3 & -25:08:31 & 12.45 & 0.98 & 10.50 & 10.00 & 9.87 & 9.79 & 9.79 J150628.63-210438.7 & 15:06:28.6 & -21:04:39 & 13.78 & 1.01 & 11.96 & 11.40 & 11.31 & 11.21 & 11.25 J150652.36-254707.1 & 15:06:52.3 & -25:47:07 & 13.14 & 1.17 & 11.15 & 10.59 & 10.42 & 10.35 & 10.38 J150657.58-223255.0 & 15:06:57.6 & -22:32:55 & 13.10 & 0.92 & 11.35 & 10.86 & 10.77 & 10.72 & 10.75 J150710.80-205510.8 & 15:07:10.8 & -20:55:11 & 13.86 & 0.99 & 11.94 & 11.42 & 11.31 & 11.26 & 11.28 J150725.60-203421.8 & 15:07:25.6 & -20:34:22 & 13.80 & 0.95 & 11.89 & 11.36 & 11.23 & 11.17 & 11.19 J150817.45-200757.4 & 15:08:17.5 & -20:07:57 & 13.83 & 0.81 & 11.94 & 11.42 & 11.32 & 11.23 & 11.24 J150852.39-224919.0 & 15:08:52.4 & -22:49:19 & 14.20 & 1.14 & 11.98 & 11.42 & 11.30 & 11.21 & 11.24 J150901.99-215246.9 & 15:09:02.0 & -21:52:47 & 12.22 & 0.78 & 10.51 & 10.05 & 9.97 & 9.88 & 9.90 J150939.48-211928.4 & 15:09:39.5 & -21:19:28 & 12.86 & 0.99 & 10.86 & 10.30 & 10.18 & 10.09 & 10.12 J150943.10-202529.9 & 15:09:43.1 & -20:25:29 & …& …& 11.68 & 11.21 & 11.08 & 11.03 & 11.04 J151001.59-192643.3 & 15:10:01.6 & -19:26:43 & 13.68 & 0.84 & 11.86 & 11.35 & 11.34 & 11.21 & 11.22 J151010.07-202030.7 & 15:10:10.1 & -20:20:31 & 13.92 & 0.98 & 11.91 & 11.38 & 11.27 & 11.18 & 11.20 J151113.24-213003.2 & 15:11:13.2 & -21:30:03 & 12.66 & 0.85 & 10.65 & 10.13 & 10.03 & 9.93 & 9.91 J151158.29-202516.0 & 15:11:58.3 & -20:25:16 & 13.54 & 0.86 & 11.67 & 11.19 & 11.13 & 11.02 & 11.04 J151204.86-212845.2 & 15:12:04.9 & -21:28:45 & 13.52 & 0.97 & 11.28 & 10.72 & 10.53 & 10.47 & 10.45 J151300.69-212000.0 & 15:13:00.7 & -21:20:00 & 13.80 & 0.80 & 11.91 & 11.42 & 11.35 & 11.29 & 11.29 J151305.06-214546.5 & 15:13:05.1 & -21:45:47 & 13.63 & 0.94 & 11.90 & 11.43 & 11.34 & 11.29 & 11.31 J154825.93-395925.8 & 15:48:25.9 & -39:59:25 & 13.56 & 1.11 & 11.35 & 10.88 & 10.79 & 10.59 & 10.59 J155145.68-393538.2 & 15:51:45.6 & -39:35:38 & 12.96 & 1.10 & 10.78 & 10.36 & 10.20 & 10.12 & 10.12 J155212.12-393422.7 & 15:52:12.1 & -39:34:22 & 13.63 & 0.87 & 11.92 & 11.55 & 11.41 & 11.36 & 11.37 J155410.62-325516.7 & 15:54:10.6 & -32:55:16 & 13.82 & 1.01 & 11.66 & 11.07 & 10.98 & 10.90 & 10.92 J155422.58-334156.7 & 15:54:22.5 & -33:41:56 & 11.39 & 0.97 & 9.51 & 9.08 & 8.94 & 8.89 & 8.91 J155730.10-293922.7 & 15:57:30.1 & -29:39:23 & 13.15 & 0.97 & 11.13 & 10.61 & 10.50 & 10.39 & 10.40 J155837.56-373411.3 & 15:58:37.5 & -37:34:11 & 10.77 & 0.65 & 9.24 & 8.91 & 8.79 & 8.72 & 8.71 J155848.49-360337.0 & 15:58:48.5 & -36:03:37 & 12.96 & 1.08 & 10.93 & 10.40 & 10.28 & 10.13 & 10.16 J155921.32-341626.1 & 15:59:21.3 & -34:16:26 & 12.57 & 0.84 & 10.86 & 10.52 & 10.39 & 10.32 & 10.36 J155922.28-385356.0 & 15:59:22.2 & -38:53:56 & 12.92 & 1.09 & 10.81 & 10.26 & 10.18 & 10.03 & 10.07 J155952.35-320738.8 & 15:59:52.3 & -32:07:38 & 14.61 & 0.68 & 12.78 & 12.38 & 12.28 & 12.18 & 12.22 J160022.63-003139.7 & 16:00:22.6 & -00:31:40 & 13.43 & 0.91 & 11.54 & 11.05 & 10.97 & 10.92 & 10.94 J160058.80-330756.4 & 16:00:58.8 & -33:07:56 & 13.48 & 0.90 & 11.70 & 11.32 & 11.25 & 11.17 & 11.18 J160114.19-002734.9 & 16:01:14.2 & -00:27:35 & 13.21 & 0.86 & 11.36 & 10.82 & 10.76 & 10.67 & 10.69 J160226.98-220056.9 & 16:02:27.0 & -22:00:57 & 12.79 & 0.88 & 10.77 & 10.24 & 10.12 & 10.04 & 10.04 J160527.88-204219.3 & 16:05:27.9 & -20:42:19 & 11.54 & 0.96 & 9.55 & 9.06 & 8.96 & 8.88 & 8.89 J160529.44-254335.1 & 16:05:29.4 & -25:43:35 & 12.03 & 1.57 & 9.72 & 9.15 & 8.98 & 8.90 & 8.93 J160604.52-350819.5 & 16:06:04.5 & -35:08:19 & 13.53 & 1.04 & 11.56 & 11.10 & 10.91 & 10.81 & 10.83 J160611.18-362035.6 & 16:06:11.1 & -36:20:35 & 13.45 & 1.24 & 11.17 & 10.60 & 10.47 & 10.33 & 10.37 J160616.44-323710.3 & 16:06:16.4 & -32:37:10 & 12.23 & 0.80 & 10.62 & 10.22 & 10.16 & 10.03 & 10.05 J160630.04-000408.3 & 16:06:30.0 & -00:04:08 & 13.41 & 1.04 & 11.29 & 10.71 & 10.62 & 10.51 & 10.53 J160645.42-321617.9 & 16:06:45.4 & -32:16:17 & 13.86 & 0.77 & 12.28 & 11.90 & 11.82 & 11.71 & 11.74 J160648.60-295316.3 & 16:06:48.6 & -29:53:16 & 13.22 & 0.94 & 11.32 & 10.80 & 10.73 & 10.65 & 10.67 J160656.06-370731.4 & 16:06:56.0 & -37:07:31 & 11.69 & 0.69 & 10.31 & 9.99 & 9.86 & 9.81 & 9.80 J160728.40-235137.9 & 16:07:28.4 & -23:51:38 & 12.44 & 1.35 & 9.65 & 9.09 & 8.86 & 8.76 & 8.72 J160729.51-002428.9 & 16:07:29.5 & -00:24:29 & 13.38 & 1.10 & 11.18 & 10.62 & 10.48 & 10.38 & 10.39 J160732.39-225038.1 & 16:07:32.3 & -22:50:38 & 11.66 & 0.67 & 10.26 & 9.91 & 9.83 & 9.75 & 9.75 J160735.32-320847.2 & 16:07:35.3 & -32:08:47 & …& …& 12.58 & 12.04 & 11.91 & 11.77 & 11.78 J160741.38-331457.5 & 16:07:41.3 & -33:14:57 & 10.98 & 0.75 & 9.63 & 9.22 & 9.12 & 9.00 & 9.03 J160808.90-294321.0 & 16:08:08.9 & -29:43:21 & 13.14 & 0.87 & 11.62 & 11.20 & 11.10 & 11.01 & 11.03 J160813.89-201718.1 & 16:08:13.9 & -20:17:18 & 12.79 & 0.83 & 11.02 & 10.66 & 10.55 & 10.47 & 10.49 J160832.69-343859.5 & 16:08:32.6 & -34:38:59 & 14.61 & 0.87 & 12.99 & 12.56 & 12.43 & 12.43 & 12.44 J160842.93-282018.2 & 16:08:42.9 & -28:20:18 & 12.20 & 0.83 & 10.66 & 10.27 & 10.19 & 10.14 & 10.14 J160949.84-202604.1 & 16:09:49.9 & -20:26:04 & 12.56 & 0.94 & 10.66 & 10.16 & 10.10 & 10.01 & 10.02 J161011.26-344622.6 & 16:10:11.2 & -34:46:22 & 13.36 & 0.98 & 11.33 & 10.85 & 10.70 & 10.60 & 10.63 J161025.60-212539.8 & 16:10:25.6 & -21:25:40 & 13.37 & 0.91 & 11.44 & 10.97 & 10.86 & 10.77 & 10.77 J161039.68-244356.5 & 16:10:39.6 & -24:43:56 & 13.42 & 1.04 & 11.33 & 10.78 & 10.67 & 10.57 & 10.60 J161058.76-281143.5 & 16:10:58.7 & -28:11:43 & 13.99 & 1.08 & 11.97 & 11.46 & 11.33 & 11.29 & 11.32 J161114.94-320449.4 & 16:11:14.9 & -32:04:49 & 10.87 & 0.88 & 9.05 & 8.65 & 8.52 & 8.45 & 8.48 J161120.84-240248.2 & 16:11:20.9 & -24:02:48 & 13.46 & 1.01 & 11.40 & 10.91 & 10.78 & 10.71 & 10.69 J161146.67-242705.5 & 16:11:46.6 & -24:27:05 & 14.69 & 0.73 & 12.93 & 12.59 & 12.45 & 12.38 & 12.40 J161149.93-001057.3 & 16:11:49.9 & -00:10:57 & 13.51 & 0.85 & 11.75 & 11.24 & 11.13 & 11.10 & 11.12 J161211.01-281812.7 & 16:12:11.0 & -28:18:12 & 10.30 & 0.72 & 9.16 & 8.72 & 8.65 & 8.48 & 8.48 J161246.46-290019.6 & 16:12:46.5 & -29:00:20 & 10.38 & 0.73 & 9.15 & 8.64 & 8.62 & 8.54 & 8.57 J161310.50-255622.1 & 16:13:10.5 & -25:56:22 & 13.27 & 0.86 & 11.38 & 10.91 & 10.77 & 10.70 & 10.70 J161328.28-201340.6 & 16:13:28.2 & -20:13:40 & 12.46 & 0.87 & 10.84 & 10.48 & 10.41 & 10.30 & 10.32 J161343.74-251738.5 & 16:13:43.7 & -25:17:38 & 11.10 & 0.79 & 9.30 & 8.89 & 8.77 & 8.69 & 8.70 J161357.00-275710.1 & 16:13:57.0 & -27:57:10 & 12.69 & 0.91 & 11.03 & 10.69 & 10.54 & 10.49 & 10.51 J161411.20-203338.2 & 16:14:11.2 & -20:33:38 & 13.30 & 0.70 & 11.88 & 11.56 & 11.44 & 11.34 & 11.33 J161458.93-203934.1 & 16:14:58.9 & -20:39:34 & 13.07 & 0.93 & 11.13 & 10.62 & 10.53 & 10.46 & 10.49 J161500.40-214855.7 & 16:15:00.4 & -21:48:56 & 13.54 & 0.97 & 11.35 & 10.77 & 10.64 & 10.56 & 10.57 J161501.58-233935.6 & 16:15:01.5 & -23:39:35 & 12.30 & 0.95 & 10.28 & 9.77 & 9.61 & 9.58 & 9.60 J161517.54-215105.2 & 16:15:17.5 & -21:51:05 & 13.19 & 0.98 & 11.40 & 10.90 & 10.86 & 10.81 & 10.84 J161535.45-215344.0 & 16:15:35.5 & -21:53:44 & 13.44 & 0.94 & 11.32 & 10.79 & 10.68 & 10.61 & 10.60 J161548.06-235537.7 & 16:15:48.0 & -23:55:37 & 11.16 & 0.76 & 9.54 & 9.19 & 9.07 & 8.98 & 9.02 J161554.20-040406.8 & 16:15:54.2 & -04:04:07 & 13.66 & 1.01 & 11.79 & 11.27 & 11.16 & 11.13 & 11.15 J161627.72-064016.5 & 16:16:27.7 & -06:40:17 & 13.92 & 0.99 & 11.85 & 11.29 & 11.20 & 11.13 & 11.14 J161641.59-221829.6 & 16:16:41.6 & -22:18:30 & …& …& 10.91 & 10.33 & 10.22 & 10.12 & 10.13 J161642.05-325700.1 & 16:16:42.0 & -32:57:00 & 12.17 & 0.83 & 10.35 & 9.89 & 9.83 & 9.71 & 9.73 J161652.02-215423.7 & 16:16:52.0 & -21:54:23 & 12.98 & 1.08 & 10.59 & 10.02 & 9.86 & 9.74 & 9.73 J161659.56-050617.1 & 16:16:59.6 & -05:06:17 & 13.68 & 0.88 & 11.72 & 11.20 & 11.08 & 11.01 & 11.03 J161707.08-053448.0 & 16:17:07.1 & -05:34:48 & 13.86 & 0.90 & 11.81 & 11.28 & 11.18 & 11.08 & 11.09 J161708.40-054214.1 & 16:17:08.4 & -05:42:14 & 14.06 & 0.85 & 11.88 & 11.34 & 11.24 & 11.16 & 11.15 J161739.03-260632.0 & 16:17:39.0 & -26:06:32 & 13.98 & 1.07 & 11.11 & 10.62 & 10.47 & 10.30 & 10.27 J161751.67-334450.6 & 16:17:51.6 & -33:44:50 & 11.72 & 0.73 & 10.15 & 9.78 & 9.69 & 9.61 & 9.62 J161755.06-250116.5 & 16:17:55.0 & -25:01:16 & 13.19 & 1.07 & 10.85 & 10.38 & 10.23 & 10.12 & 10.12 J161756.72-061016.1 & 16:17:56.7 & -06:10:16 & 13.64 & 0.97 & 11.39 & 10.89 & 10.75 & 10.63 & 10.64 J161804.87-250859.4 & 16:18:04.9 & -25:08:59 & 13.52 & 1.14 & 11.01 & 10.51 & 10.33 & 10.22 & 10.20 J161808.01-035215.0 & 16:18:08.0 & -03:52:15 & 13.92 & 0.98 & 11.84 & 11.30 & 11.20 & 11.10 & 11.13 J161829.54-253223.8 & 16:18:29.6 & -25:32:24 & 13.25 & 1.08 & 10.65 & 10.13 & 9.99 & 9.82 & 9.81 J161835.53-261536.3 & 16:18:35.5 & -26:15:36 & 13.88 & 1.08 & 11.04 & 10.51 & 10.33 & 10.17 & 10.13 J161842.77-201326.6 & 16:18:42.8 & -20:13:27 & 14.33 & 1.25 & 11.40 & 10.80 & 10.58 & 10.47 & 10.43 J161843.39-020452.4 & 16:18:43.4 & -02:04:52 & 13.47 & 0.97 & 11.53 & 10.98 & 10.94 & 10.85 & 10.86 J161849.74-014439.0 & 16:18:49.8 & -01:44:39 & 12.78 & 0.74 & 11.34 & 10.80 & 10.71 & 10.65 & 10.67 J161901.32-053606.7 & 16:19:01.3 & -05:36:07 & 12.98 & 0.99 & 10.88 & 10.39 & 10.28 & 10.21 & 10.23 J161903.88-032338.4 & 16:19:03.9 & -03:23:38 & 12.25 & 0.99 & 10.09 & 9.59 & 9.47 & 9.42 & 9.44 J161933.95-222829.6 & 16:19:34.0 & -22:28:30 & 11.45 & 1.05 & 9.23 & 8.66 & 8.51 & 8.44 & 8.46 J161943.09-022431.2 & 16:19:43.1 & -02:24:31 & 12.30 & 0.79 & 10.48 & 9.94 & 9.86 & 9.76 & 9.79 J162034.43-205657.8 & 16:20:34.4 & -20:56:57 & …& …& 9.74 & 9.29 & 9.15 & 9.06 & 9.08 J162042.39-030651.8 & 16:20:42.4 & -03:06:52 & 14.00 & 1.01 & 11.91 & 11.39 & 11.32 & 11.23 & 11.26 J162047.14-260616.3 & 16:20:47.1 & -26:06:16 & 14.67 & 1.04 & 12.36 & 11.89 & 11.79 & 11.63 & 11.66 J162318.05-311043.1 & 16:23:18.0 & -31:10:43 & 14.42 & 1.10 & 12.64 & 12.13 & 11.98 & 11.90 & 11.92 J162330.94-041243.3 & 16:23:30.9 & -04:12:43 & 13.87 & 1.07 & 11.55 & 10.98 & 10.81 & 10.72 & 10.71 J162352.93-202413.4 & 16:23:52.9 & -20:24:13 & 14.28 & 1.26 & 11.35 & 10.76 & 10.58 & 10.42 & 10.40 J162353.51-315154.7 & 16:23:53.5 & -31:51:54 & 12.97 & 1.20 & 10.62 & 10.05 & 9.86 & 9.73 & 9.71 J162358.69-030356.9 & 16:23:58.7 & -03:03:57 & 12.31 & 0.86 & 10.49 & 9.97 & 9.90 & 9.82 & 9.86 J162457.57-030428.7 & 16:24:57.6 & -03:04:29 & 12.68 & 0.98 & 10.66 & 10.12 & 9.98 & 9.88 & 9.89 J162546.95-335412.0 & 16:25:46.9 & -33:54:11 & 12.51 & 0.88 & 10.51 & 10.14 & 10.00 & 9.88 & 9.89 J162607.92-202528.0 & 16:26:07.9 & -20:25:27 & 14.50 & 1.20 & 11.93 & 11.39 & 11.25 & 11.12 & 11.15 J162719.51-023807.9 & 16:27:19.5 & -02:38:08 & 13.11 & 1.06 & 11.01 & 10.45 & 10.31 & 10.19 & 10.19 J162747.03-331403.1 & 16:27:47.0 & -33:14:03 & 12.81 & 1.44 & 10.11 & 9.53 & 9.35 & 9.22 & 9.26 J162811.79-015518.9 & 16:28:11.8 & -01:55:19 & 12.64 & 1.05 & 10.49 & 9.90 & 9.81 & 9.71 & 9.73 J163026.14-334249.8 & 16:30:26.1 & -33:42:49 & 12.70 & 0.77 & 10.99 & 10.64 & 10.54 & 10.46 & 10.47 J163114.30-001305.5 & 16:31:14.3 & -00:13:06 & 13.84 & 1.06 & 11.87 & 11.28 & 11.18 & 11.07 & 11.10 J163124.37-000005.3 & 16:31:24.4 & -00:00:05 & 11.72 & 1.05 & 9.59 & 9.00 & 8.86 & 8.81 & 8.80 J163524.07-335950.9 & 16:35:24.0 & -33:59:50 & 13.54 & 0.85 & 11.78 & 11.39 & 11.26 & 11.20 & 11.24 J171257.15-221421.5 & 17:12:57.1 & -22:14:21 & 13.30 & 1.08 & 10.97 & 10.53 & 10.34 & 10.23 & 10.24 J171305.97-344246.8 & 17:13:05.9 & -34:42:46 & 12.31 & 1.02 & 10.12 & 9.73 & 9.60 & 9.48 & 9.51 J171346.40-225227.0 & 17:13:46.4 & -22:52:27 & 11.46 & 1.01 & 9.57 & 9.11 & 8.97 & 8.87 & 8.89 J171514.13-225143.6 & 17:15:14.1 & -22:51:43 & 11.07 & 1.01 & 9.05 & 8.67 & 8.50 & 8.40 & 8.42 J180856.81-210630.0 & 18:08:56.8 & -21:06:30 & 12.92 & 0.70 & 11.31 & 11.01 & 10.88 & 10.78 & 10.77 J181503.64-375120.7 & 18:15:03.6 & -37:51:20 & 12.87 & 1.01 & 10.80 & 10.29 & 10.19 & 10.12 & 10.13 J181919.19-202925.4 & 18:19:19.2 & -20:29:25 & 11.09 & 1.06 & 9.37 & 9.00 & 8.83 & 8.75 & 8.78 J181931.28-371313.6 & 18:19:31.2 & -37:13:13 & 11.63 & 1.05 & 9.57 & 9.01 & 8.90 & 8.80 & 8.84 J182030.65-201601.3 & 18:20:30.6 & -20:16:01 & 10.72 & 0.81 & 9.34 & 9.01 & 8.89 & 8.80 & 8.83 J182049.23-341948.0 & 18:20:49.2 & -34:19:48 & …& …& 11.23 & 10.69 & 10.53 & 10.40 & 10.44 J182938.67-201048.4 & 18:29:38.5 & -20:10:48 & …& …& 10.12 & 9.72 & 9.61 & 9.49 & 9.53 J183214.23-382940.7 & 18:32:14.2 & -38:29:40 & 13.87 & 0.80 & 12.20 & 11.70 & 11.64 & 11.58 & 11.62 J183713.28-314109.3 & 18:37:13.2 & -31:41:09 & 12.47 & 0.82 & 10.65 & 10.12 & 10.04 & 9.90 & 9.90 J192532.78-282858.1 & 19:25:32.7 & -28:28:58 & 14.61 & 0.85 & 12.90 & 12.43 & 12.32 & 12.26 & 12.29 J192608.45-292547.2 & 19:26:08.4 & -29:25:47 & 13.61 & 0.96 & 11.64 & 11.07 & 10.97 & 10.87 & 10.88 J192812.00-290022.4 & 19:28:12.0 & -29:00:22 & 12.03 & 1.01 & 10.06 & 9.48 & 9.37 & 9.28 & 9.28 J195414.01-512227.7 & 19:54:14.0 & -51:22:28 & 13.22 & 1.00 & 11.29 & 10.73 & 10.62 & 10.57 & 10.59 J195415.28-522918.7 & 19:54:15.3 & -52:29:19 & 12.99 & 0.74 & 11.87 & 11.35 & 11.28 & 11.26 & 11.29 J195426.85-482214.9 & 19:54:26.9 & -48:22:15 & 13.47 & 0.92 & 11.72 & 11.20 & 11.15 & 11.05 & 11.08 J195526.11-485701.4 & 19:55:26.1 & -48:57:01 & 13.65 & 0.90 & 11.94 & 11.41 & 11.35 & 11.32 & 11.35 J195600.80-502028.4 & 19:56:00.8 & -50:20:28 & 13.23 & 0.95 & 11.61 & 11.08 & 11.00 & 10.98 & 11.02 J195722.73-451203.5 & 19:57:22.7 & -45:12:04 & 13.40 & 0.89 & 11.70 & 11.22 & 11.13 & 11.09 & 11.11 J195746.62-445128.9 & 19:57:46.6 & -44:51:29 & 14.26 & 1.20 & 11.94 & 11.35 & 11.19 & 11.11 & 11.11 J195800.96-520536.2 & 19:58:01.0 & -52:05:36 & 12.78 & 0.83 & 11.10 & 10.53 & 10.49 & 10.40 & 10.44 J195804.09-474938.9 & 19:58:04.1 & -47:49:39 & 13.23 & 0.89 & 11.37 & 10.82 & 10.70 & 10.66 & 10.66 J195823.33-460902.1 & 19:58:23.3 & -46:09:02 & 13.70 & 1.05 & 11.95 & 11.42 & 11.35 & 11.33 & 11.37 J195827.53-492944.6 & 19:58:27.5 & -49:29:45 & 13.63 & 1.04 & 11.55 & 10.98 & 10.87 & 10.79 & 10.82 J195844.58-510307.9 & 19:58:44.6 & -51:03:08 & 13.49 & 1.01 & 11.41 & 10.86 & 10.68 & 10.64 & 10.63 J195857.47-504311.4 & 19:58:57.5 & -50:43:12 & 13.22 & 0.83 & 11.57 & 11.08 & 10.97 & 10.95 & 10.98 J195926.37-460340.4 & 19:59:26.4 & -46:03:40 & 13.71 & 1.28 & 11.55 & 10.95 & 10.83 & 10.76 & 10.78 J195949.45-460620.5 & 19:59:49.5 & -46:06:21 & 14.06 & 1.51 & 11.06 & 10.47 & 10.21 & 10.10 & 10.07 J195953.38-455541.4 & 19:59:53.4 & -45:55:41 & 12.86 & 1.12 & 10.71 & 10.15 & 10.01 & 9.94 & 9.94 J200008.05-525151.1 & 20:00:08.1 & -52:51:51 & 12.78 & 1.04 & 10.87 & 10.29 & 10.21 & 10.13 & 10.15 J200012.21-502509.4 & 20:00:12.2 & -50:25:09 & 10.59 & 0.70 & 9.43 & 8.91 & 8.84 & 8.78 & 8.81 J200355.30-502810.2 & 20:03:55.3 & -50:28:10 & 11.93 & 0.85 & 10.11 & 9.52 & 9.44 & 9.36 & 9.37 J200405.97-555517.3 & 20:04:06.0 & -55:55:17 & 13.66 & 0.95 & 11.70 & 11.14 & 11.05 & 10.99 & 11.02 J200426.29-441911.4 & 20:04:26.3 & -44:19:11 & 13.18 & 0.70 & 11.78 & 11.27 & 11.21 & 11.02 & 11.03 J200516.78-561311.1 & 20:05:16.8 & -56:13:11 & 13.64 & 0.99 & 11.51 & 10.94 & 10.83 & 10.74 & 10.76 J200528.77-543126.0 & 20:05:28.8 & -54:31:26 & 12.09 & 0.76 & 10.37 & 9.90 & 9.80 & 9.74 & 9.75 J200713.06-435609.7 & 20:07:13.1 & -43:56:10 & 13.36 & 1.17 & 10.99 & 10.39 & 10.24 & 10.18 & 10.20 J200814.18-542913.1 & 20:08:14.2 & -54:29:13 & 13.46 & 1.03 & 11.38 & 10.81 & 10.69 & 10.58 & 10.59 J200830.53-462846.6 & 20:08:30.5 & -46:28:47 & 13.24 & 0.90 & 11.49 & 10.96 & 10.89 & 10.80 & 10.82 J200949.94-531934.0 & 20:09:50.0 & -53:19:34 & 13.37 & 0.90 & 11.57 & 11.04 & 10.87 & 10.66 & 10.65 J201033.49-445401.9 & 20:10:33.5 & -44:54:02 & 13.42 & 1.06 & 11.42 & 10.85 & 10.73 & 10.64 & 10.66 J201211.39-561618.1 & 20:12:11.4 & -56:16:18 & 11.15 & 0.82 & 9.44 & 8.92 & 8.80 & 8.73 & 8.75 J201335.24-540449.3 & 20:13:35.2 & -54:04:49 & 13.13 & 0.83 & 11.42 & 10.89 & 10.80 & 10.73 & 10.76 J201524.98-222100.7 & 20:15:24.9 & -22:21:00 & 12.19 & 1.05 & 10.06 & 9.48 & 9.36 & 9.30 & 9.32 J201531.32-571947.0 & 20:15:31.3 & -57:19:47 & 11.87 & 0.92 & 10.07 & 9.52 & 9.39 & 9.34 & 9.34 J201558.31-564847.0 & 20:15:58.3 & -56:48:47 & 13.41 & 1.00 & 11.53 & 10.95 & 10.83 & 10.77 & 10.75 J201749.89-440400.0 & 20:17:49.9 & -44:04:00 & 12.47 & 0.77 & 10.96 & 10.50 & 10.44 & 10.35 & 10.36 J201910.38-433849.6 & 20:19:10.4 & -43:38:50 & 12.62 & 0.85 & 10.87 & 10.37 & 10.27 & 10.21 & 10.23 J201914.28-540455.3 & 20:19:14.3 & -54:04:55 & 13.03 & 0.81 & 11.32 & 10.78 & 10.70 & 10.65 & 10.68 J201940.99-292226.8 & 20:19:41.0 & -29:22:27 & 11.65 & 0.87 & 9.81 & 9.28 & 9.15 & 8.97 & 8.96 J202009.20-481343.9 & 20:20:09.2 & -48:13:44 & 12.44 & 0.97 & 10.48 & 9.92 & 9.78 & 9.71 & 9.70 J202123.89-515742.7 & 20:21:23.9 & -51:57:43 & 13.27 & 0.90 & 11.44 & 10.87 & 10.83 & 10.67 & 10.67 J202140.45-535023.0 & 20:21:40.5 & -53:50:23 & 11.98 & 0.88 & 10.18 & 9.67 & 9.60 & 9.50 & 9.53 J202141.07-555031.4 & 20:21:41.1 & -55:50:31 & 11.84 & 0.74 & 10.29 & 9.83 & 9.74 & 9.67 & 9.69 J202148.39-291746.7 & 20:21:48.3 & -29:17:46 & 13.57 & 0.85 & 11.99 & 11.59 & 11.47 & 11.43 & 11.44 J202217.63-555640.7 & 20:22:17.6 & -55:56:41 & 12.81 & 0.85 & 11.14 & 10.60 & 10.52 & 10.44 & 10.47 J202226.93-301151.0 & 20:22:26.9 & -30:11:51 & 13.52 & 0.93 & 11.78 & 11.24 & 11.16 & 11.08 & 11.11 J202310.52-250158.8 & 20:23:10.5 & -25:01:59 & 13.39 & 0.79 & 11.65 & 11.15 & 11.05 & 10.98 & 11.00 J202326.86-234543.5 & 20:23:26.9 & -23:45:44 & 13.05 & 0.96 & 11.02 & 10.46 & 10.34 & 10.29 & 10.27 J202345.93-245456.9 & 20:23:45.9 & -24:54:57 & 12.10 & 1.01 & 10.08 & 9.56 & 9.44 & 9.39 & 9.40 J202413.10-285218.0 & 20:24:13.1 & -28:52:18 & 13.56 & 0.81 & 11.91 & 11.45 & 11.37 & 11.31 & 11.33 J202421.22-265643.6 & 20:24:21.2 & -26:56:44 & 12.88 & 1.16 & 10.69 & 10.10 & 9.98 & 9.88 & 9.89 J202424.59-252955.1 & 20:24:24.6 & -25:29:55 & 12.27 & 0.99 & 10.30 & 9.72 & 9.60 & 9.54 & 9.55 J202442.85-261900.3 & 20:24:42.8 & -26:19:00 & 12.41 & 0.69 & 11.03 & 10.68 & 10.58 & 10.51 & 10.53 J202509.07-221500.3 & 20:25:09.1 & -22:15:00 & 12.00 & 0.60 & 11.92 & 11.45 & 11.32 & 11.32 & 11.36 J202512.12-311007.1 & 20:25:12.1 & -31:10:07 & 12.80 & 0.84 & 11.18 & 10.67 & 10.60 & 10.55 & 10.58 J202512.44-315254.3 & 20:25:12.4 & -31:52:54 & 13.05 & 0.99 & 11.91 & 11.43 & 11.33 & 11.37 & 11.40 J202526.29-320856.8 & 20:25:26.3 & -32:08:57 & 13.49 & 0.80 & 11.89 & 11.42 & 11.34 & 11.28 & 11.29 J202543.67-271736.6 & 20:25:43.7 & -27:17:37 & 13.13 & 0.89 & 11.43 & 10.94 & 10.79 & 10.72 & 10.73 J202556.64-491552.6 & 20:25:56.6 & -49:15:53 & 12.01 & 0.92 & 10.22 & 9.71 & 9.58 & 9.49 & 9.50 J202607.83-315423.7 & 20:26:07.8 & -31:54:24 & 13.46 & 1.01 & 11.60 & 11.06 & 10.97 & 10.94 & 10.98 J202647.30-285509.6 & 20:26:47.3 & -28:55:10 & 14.00 & 1.10 & 11.52 & 10.94 & 10.79 & 10.82 & 10.83 J202649.50-473428.1 & 20:26:49.5 & -47:34:28 & 12.22 & 0.85 & 10.47 & 9.96 & 9.88 & 9.79 & 9.82 J202659.69-312311.2 & 20:26:59.7 & -31:23:11 & 13.69 & 0.92 & 11.80 & 11.20 & 11.13 & 11.05 & 11.07 J202708.93-350541.7 & 20:27:08.9 & -35:05:42 & 13.45 & 0.99 & 11.54 & 11.01 & 10.88 & 10.80 & 10.82 J202712.82-284827.8 & 20:27:12.8 & -28:48:28 & 13.08 & 0.84 & 11.29 & 10.76 & 10.66 & 10.56 & 10.59 J202726.00-243042.4 & 20:27:26.0 & -24:30:42 & 13.33 & 0.86 & 11.61 & 11.06 & 11.00 & 10.95 & 10.98 J202730.71-255837.6 & 20:27:30.7 & -25:58:38 & 13.48 & 0.96 & 11.71 & 11.17 & 11.09 & 11.04 & 11.06 J202735.43-205418.4 & 20:27:35.4 & -20:54:18 & 13.69 & 0.91 & 11.97 & 11.46 & 11.40 & 11.34 & 11.38 J202737.90-262741.7 & 20:27:37.9 & -26:27:41 & 10.63 & 0.83 & 9.12 & 8.65 & 8.57 & 8.52 & 8.56 J202744.84-422357.3 & 20:27:44.8 & -42:23:57 & 11.52 & 0.76 & 10.01 & 9.55 & 9.47 & 9.38 & 9.39 J202753.01-514113.7 & 20:27:53.0 & -51:41:14 & 12.19 & 0.99 & 10.22 & 9.62 & 9.50 & 9.43 & 9.45 J202755.18-213413.2 & 20:27:55.2 & -21:34:13 & 13.04 & 0.92 & 11.15 & 10.67 & 10.53 & 10.43 & 10.43 J202800.57-561727.8 & 20:28:00.6 & -56:17:28 & 13.93 & 1.19 & 11.68 & 11.10 & 10.95 & 10.76 & 10.74 J202822.77-530218.3 & 20:28:22.8 & -53:02:18 & 13.37 & 0.91 & 11.50 & 10.93 & 10.84 & 10.77 & 10.79 J202845.49-263810.3 & 20:28:45.4 & -26:38:10 & 10.29 & 0.67 & 9.22 & 8.88 & 8.73 & 8.64 & 8.68 J202852.85-222246.4 & 20:28:52.9 & -22:22:46 & 13.45 & 0.81 & 11.75 & 11.27 & 11.20 & 11.11 & 11.13 J202900.62-215735.4 & 20:29:00.6 & -21:57:35 & 12.56 & 0.87 & 10.92 & 10.44 & 10.34 & 10.28 & 10.31 J202916.37-352641.1 & 20:29:16.4 & -35:26:41 & 12.99 & 0.74 & 11.35 & 10.85 & 10.81 & 10.72 & 10.72 J202920.10-451346.8 & 20:29:20.1 & -45:13:47 & 11.34 & 0.90 & 9.78 & 9.20 & 9.08 & 8.91 & 8.93 J202925.48-333339.7 & 20:29:25.5 & -33:33:40 & 13.69 & 1.04 & 11.57 & 10.99 & 10.85 & 10.79 & 10.78 J202936.21-365222.0 & 20:29:36.2 & -36:52:22 & 12.39 & 0.84 & 10.83 & 10.26 & 10.20 & 10.17 & 10.21 J202951.78-573836.3 & 20:29:51.8 & -57:38:36 & 11.42 & 0.75 & 9.92 & 9.46 & 9.36 & 9.30 & 9.30 J203014.43-215726.1 & 20:30:14.4 & -21:57:26 & 12.24 & 0.84 & 10.58 & 10.13 & 10.03 & 9.99 & 10.01 J203019.13-284439.7 & 20:30:19.1 & -28:44:39 & 12.98 & 0.87 & 11.28 & 10.85 & 10.71 & 10.66 & 10.65 J203054.99-294100.8 & 20:30:55.0 & -29:41:01 & 13.73 & 0.89 & 11.90 & 11.38 & 11.26 & 11.22 & 11.21 J203113.20-191700.9 & 20:31:13.2 & -19:17:01 & 13.13 & 0.92 & 11.12 & 10.55 & 10.39 & 10.30 & 10.29 J203120.45-275434.7 & 20:31:20.4 & -27:54:35 & 13.88 & 0.91 & 11.94 & 11.40 & 11.29 & 11.24 & 11.25 J203123.84-504730.6 & 20:31:23.8 & -50:47:31 & 13.76 & 1.10 & 11.54 & 10.96 & 10.84 & 10.73 & 10.75 J203133.17-305412.5 & 20:31:33.1 & -30:54:12 & 12.25 & 0.82 & 10.84 & 10.43 & 10.37 & 10.29 & 10.33 J203138.97-344547.7 & 20:31:39.0 & -34:45:48 & 13.65 & 0.85 & 11.92 & 11.41 & 11.30 & 11.27 & 11.29 J203142.15-345010.9 & 20:31:42.1 & -34:50:11 & 12.82 & 0.86 & 11.10 & 10.60 & 10.54 & 10.48 & 10.51 J203212.28-285119.1 & 20:32:12.3 & -28:51:19 & 13.27 & 0.81 & 11.61 & 11.10 & 10.99 & 10.91 & 10.92 J203312.13-420027.3 & 20:33:12.1 & -42:00:27 & 12.20 & 0.72 & 10.71 & 10.21 & 10.17 & 10.10 & 10.11 J203316.09-504712.7 & 20:33:16.1 & -50:47:13 & 12.38 & 0.75 & 10.95 & 10.48 & 10.42 & 10.32 & 10.34 J203327.33-322508.8 & 20:33:27.3 & -32:25:09 & 12.23 & 0.90 & 10.57 & 10.07 & 9.99 & 9.93 & 9.96 J203331.76-274815.8 & 20:33:31.8 & -27:48:16 & 13.05 & 0.87 & 11.35 & 10.84 & 10.79 & 10.71 & 10.73 J203335.69-591206.1 & 20:33:35.7 & -59:12:06 & 12.65 & 0.84 & 11.04 & 10.52 & 10.46 & 10.35 & 10.38 J203337.10-501731.3 & 20:33:37.1 & -50:17:31 & 13.21 & 0.88 & 11.41 & 10.86 & 10.77 & 10.71 & 10.74 J203342.88-243451.9 & 20:33:42.9 & -24:34:52 & 13.75 & 0.94 & 11.86 & 11.34 & 11.26 & 11.19 & 11.20 J203352.43-303452.8 & 20:33:52.4 & -30:34:53 & 13.11 & 0.89 & 11.45 & 10.92 & 10.88 & 10.81 & 10.86 J203408.07-370639.5 & 20:34:08.1 & -37:06:40 & 13.20 & 0.96 & 11.17 & 10.61 & 10.46 & 10.38 & 10.38 J203413.87-324149.9 & 20:34:13.9 & -32:41:50 & 13.08 & 0.90 & 11.40 & 10.84 & 10.79 & 10.70 & 10.71 J203416.24-295108.3 & 20:34:16.2 & -29:51:08 & 12.62 & 0.96 & 10.66 & 10.17 & 10.04 & 9.94 & 9.94 J203523.60-211434.9 & 20:35:23.6 & -21:14:35 & 13.56 & 0.90 & 11.88 & 11.40 & 11.34 & 11.28 & 11.30 J203617.27-394943.8 & 20:36:17.3 & -39:49:44 & …& …& 11.74 & 11.28 & 11.14 & 11.13 & 11.14 J203632.12-330152.1 & 20:36:32.1 & -33:01:52 & 13.93 & 1.09 & 11.66 & 11.07 & 10.94 & 10.86 & 10.87 J203655.25-365819.9 & 20:36:55.2 & -36:58:20 & 13.35 & 0.88 & 11.57 & 11.06 & 10.97 & 10.93 & 10.96 J203659.73-462518.2 & 20:36:59.7 & -46:25:18 & 11.94 & 1.02 & 9.83 & 9.28 & 9.11 & 9.00 & 9.00 J203702.60-325204.4 & 20:37:02.6 & -32:52:04 & 12.77 & 0.82 & 11.13 & 10.60 & 10.55 & 10.47 & 10.49 J203711.26-282234.7 & 20:37:11.3 & -28:22:35 & 12.68 & 0.80 & 11.03 & 10.55 & 10.51 & 10.41 & 10.44 J203725.57-231906.2 & 20:37:25.6 & -23:19:06 & 12.66 & 0.76 & 11.01 & 10.52 & 10.44 & 10.40 & 10.41 J203733.35-364545.3 & 20:37:33.3 & -36:45:45 & 13.24 & 1.13 & 10.95 & 10.36 & 10.24 & 10.18 & 10.19 J203753.63-302313.8 & 20:37:53.6 & -30:23:14 & 13.20 & 0.98 & 11.33 & 10.81 & 10.74 & 10.66 & 10.69 J203757.80-251825.3 & 20:37:57.8 & -25:18:25 & 14.47 & 0.79 & 12.82 & 12.41 & 12.29 & 12.22 & 12.22 J203819.45-275047.7 & 20:38:19.4 & -27:50:47 & 12.70 & 0.68 & 11.20 & 10.80 & 10.70 & 10.67 & 10.69 J203824.20-481757.5 & 20:38:24.2 & -48:17:58 & 12.94 & 0.99 & 10.96 & 10.39 & 10.26 & 10.22 & 10.25 J204106.23-325135.9 & 20:41:06.2 & -32:51:35 & 12.76 & 0.80 & 11.13 & 10.71 & 10.62 & 10.55 & 10.59 J204430.69-293653.4 & 20:44:30.7 & -29:36:53 & 11.79 & 1.05 & 10.07 & 9.61 & 9.50 & 9.46 & 9.49 J204611.88-383311.8 & 20:46:11.8 & -38:33:11 & 12.37 & 0.76 & 10.85 & 10.38 & 10.28 & 10.23 & 10.24 J210143.95-491358.8 & 21:01:44.0 & -49:13:59 & 10.80 & 0.84 & 9.25 & 8.77 & 8.66 & 8.60 & 8.61 J210351.66-453103.5 & 21:03:51.7 & -45:31:04 & 12.73 & 0.71 & 11.18 & 10.73 & 10.68 & 10.60 & 10.62 J210540.68-452056.9 & 21:05:40.7 & -45:20:57 & 11.79 & 0.75 & 10.09 & 9.57 & 9.53 & 9.41 & 9.41 J210754.89-584958.0 & 21:07:54.9 & -58:49:58 & 13.40 & 1.12 & 11.06 & 10.47 & 10.32 & 10.24 & 10.26 J210914.42-472152.2 & 21:09:14.4 & -47:21:52 & 12.47 & 0.91 & 10.71 & 10.17 & 10.05 & 9.94 & 9.95 J210922.17-425049.1 & 21:09:22.2 & -42:50:49 & 11.73 & 1.04 & 9.81 & 9.22 & 9.11 & 9.00 & 9.03 J211105.33-423922.4 & 21:11:05.3 & -42:39:22 & 12.24 & 1.00 & 10.22 & 9.66 & 9.54 & 9.46 & 9.48 J211151.21-525707.8 & 21:11:51.2 & -52:57:08 & 10.78 & 0.80 & 9.10 & 8.56 & 8.50 & 8.42 & 8.45 J211716.58-411532.2 & 21:17:16.6 & -41:15:32 & 11.57 & 0.90 & 9.76 & 9.23 & 9.09 & 8.97 & 8.97 J212002.20-511539.9 & 21:20:02.2 & -51:15:40 & 13.42 & 1.10 & 11.29 & 10.70 & 10.56 & 10.47 & 10.48 J212014.02-585922.1 & 21:20:14.0 & -58:59:22 & 12.20 & 0.97 & 10.36 & 9.81 & 9.72 & 9.61 & 9.64 J212035.69-532142.8 & 21:20:35.7 & -53:21:43 & 12.27 & 0.79 & 10.66 & 10.16 & 10.11 & 10.04 & 10.06 J212328.32-532829.1 & 21:23:28.3 & -53:28:29 & 12.09 & 0.80 & 10.43 & 9.89 & 9.83 & 9.76 & 9.80 J212400.61-524152.3 & 21:24:00.6 & -52:41:52 & 11.69 & 0.83 & 10.00 & 9.46 & 9.40 & 9.32 & 9.35 J212928.33-555826.2 & 21:29:28.3 & -55:58:26 & 12.45 & 0.98 & 10.55 & 9.99 & 9.86 & 9.80 & 9.80 J213517.05-553312.1 & 21:35:17.1 & -55:33:12 & 11.91 & 0.80 & 10.26 & 9.76 & 9.67 & 9.62 & 9.64 J213740.00-244649.4 & 21:37:40.0 & -24:46:49 & 12.16 & 0.95 & 10.26 & 9.73 & 9.60 & 9.54 & 9.54 J213900.32-565630.9 & 21:39:00.3 & -56:56:31 & 12.54 & 1.04 & 10.44 & 9.86 & 9.66 & 9.54 & 9.54 J213933.94-584554.7 & 21:39:34.0 & -58:45:55 & 12.51 & 1.09 & 10.53 & 9.96 & 9.83 & 9.72 & 9.74 J214019.85-572217.2 & 21:40:19.9 & -57:22:17 & 12.74 & 0.97 & 10.71 & 10.16 & 10.03 & 9.96 & 9.97 J214140.00-285424.0 & 21:41:40.0 & -28:54:24 & 12.06 & 0.92 & 10.27 & 9.69 & 9.60 & 9.52 & 9.55 J214254.50-592131.2 & 21:42:54.5 & -59:21:31 & 13.14 & 1.07 & 10.92 & 10.32 & 10.18 & 10.11 & 10.12 J214611.50-542049.1 & 21:46:11.5 & -54:20:49 & 11.77 & 1.05 & 9.50 & 8.93 & 8.80 & 8.73 & 8.74 J214848.44-591925.7 & 21:48:48.4 & -59:19:26 & 12.35 & 0.80 & 10.78 & 10.29 & 10.21 & 10.15 & 10.18 J215518.91-540822.0 & 21:55:18.9 & -54:08:22 & 13.05 & 1.07 & 10.98 & 10.38 & 10.29 & 10.18 & 10.22 J215656.87-354415.1 & 21:56:56.9 & -35:44:15 & 13.50 & 1.05 & 11.42 & 10.89 & 10.76 & 10.67 & 10.69 J215735.54-030804.7 & 21:57:35.5 & -03:08:05 & 11.74 & 0.77 & 10.03 & 9.54 & 9.46 & 9.41 & 9.43 J215746.83-030301.9 & 21:57:46.8 & -03:03:02 & 12.89 & 1.02 & 10.84 & 10.29 & 10.14 & 10.11 & 10.12 J220318.61-015628.2 & 22:03:18.6 & -01:56:28 & 11.21 & 0.82 & 9.53 & 9.03 & 8.97 & 8.86 & 8.89 J220352.61-044731.0 & 22:03:52.6 & -04:47:31 & 13.26 & 0.95 & 11.32 & 10.80 & 10.66 & 10.59 & 10.61 J220358.17-053221.2 & 22:03:58.2 & -05:32:21 & 13.56 & 0.92 & 11.71 & 11.17 & 11.09 & 11.02 & 11.04 J220625.47-021521.6 & 22:06:25.5 & -02:15:22 & 12.56 & 1.05 & 10.32 & 9.75 & 9.57 & 9.54 & 9.52 J220906.23-342504.8 & 22:09:06.2 & -34:25:05 & 12.90 & 0.96 & 11.03 & 10.47 & 10.34 & 10.26 & 10.25 J220926.71-380821.5 & 22:09:26.7 & -38:08:22 & 13.15 & 1.07 & 11.11 & 10.53 & 10.42 & 10.33 & 10.35 J221124.59-375310.2 & 22:11:24.6 & -37:53:10 & 10.87 & 0.93 & 9.05 & 8.54 & 8.42 & 8.34 & 8.35 J221149.19-020946.3 & 22:11:49.2 & -02:09:46 & 12.69 & 0.78 & 11.02 & 10.56 & 10.47 & 10.40 & 10.42 J221153.49-120918.3 & 22:11:53.5 & -12:09:18 & 11.83 & 0.83 & 10.09 & 9.61 & 9.51 & 9.45 & 9.43 J221254.29-023542.1 & 22:12:54.3 & -02:35:42 & 12.23 & 0.68 & 10.75 & 10.23 & 10.18 & 10.13 & 10.14 J221254.59-040859.3 & 22:12:54.6 & -04:08:59 & 11.67 & 0.97 & 9.76 & 9.20 & 9.11 & 9.02 & 9.05 J221329.39-021012.3 & 22:13:29.4 & -02:10:12 & 13.32 & 0.97 & 11.17 & 10.58 & 10.47 & 10.41 & 10.44 J221844.15-374404.5 & 22:18:44.2 & -37:44:05 & 13.21 & 1.07 & 11.19 & 10.61 & 10.49 & 10.42 & 10.43 J221856.98-391548.8 & 22:18:57.0 & -39:15:49 & 11.74 & 0.69 & 9.88 & 9.42 & 9.26 & 9.29 & 9.28 J221956.01-314530.5 & 22:19:56.0 & -31:45:31 & 12.98 & 0.99 & 11.22 & 10.75 & 10.61 & 10.48 & 10.51 J222624.18-512711.7 & 22:26:24.2 & -51:27:12 & …& …& 9.71 & 9.22 & 9.11 & 9.09 & 9.11 J222940.83-330540.2 & 22:29:40.8 & -33:05:40 & 12.05 & 0.85 & 10.26 & 9.76 & 9.65 & 9.58 & 9.59 J223026.41-194952.8 & 22:30:26.4 & -19:49:53 & 12.83 & 1.05 & 10.80 & 10.26 & 10.11 & 10.02 & 10.03 J223039.54-180906.8 & 22:30:39.5 & -18:09:07 & 11.50 & 1.12 & 9.21 & 8.62 & 8.51 & 8.42 & 8.44 J223215.63-485715.7 & 22:32:15.6 & -48:57:16 & 12.54 & 0.91 & 10.65 & 10.16 & 10.00 & 9.92 & 9.91 J223541.38-430555.0 & 22:35:41.4 & -43:05:55 & 12.10 & 0.81 & 10.38 & 9.86 & 9.78 & 9.72 & 9.73 J223557.26-243411.6 & 22:35:57.3 & -24:34:12 & 12.35 & 1.07 & 10.26 & 9.70 & 9.52 & 9.37 & 9.38 J223733.15-434118.5 & 22:37:33.2 & -43:41:19 & 10.96 & 0.77 & 9.38 & 8.85 & 8.79 & 8.70 & 8.72 J224010.66-373826.1 & 22:40:10.7 & -37:38:26 & 11.43 & 0.89 & 9.67 & 9.11 & 9.04 & 8.94 & 8.96 J224011.39-231340.5 & 22:40:11.4 & -23:13:41 & 12.09 & 0.86 & 10.40 & 9.91 & 9.81 & 9.71 & 9.73 J224126.33-362730.6 & 22:41:26.3 & -36:27:31 & 11.85 & 0.79 & 10.12 & 9.62 & 9.56 & 9.47 & 9.47 J224153.62-335032.0 & 22:41:53.6 & -33:50:32 & 12.56 & 0.75 & 11.18 & 10.72 & 10.65 & 10.57 & 10.60 J224734.48-551153.4 & 22:47:34.5 & -55:11:53 & 12.01 & 0.67 & 10.33 & 9.84 & 9.70 & 9.66 & 9.65 J224813.13-520018.5 & 22:48:13.1 & -52:00:19 & 13.38 & 1.12 & 11.18 & 10.60 & 10.47 & 10.38 & 10.40 J224814.53-570307.2 & 22:48:14.5 & -57:03:07 & 11.94 & 0.65 & 10.61 & 10.15 & 10.08 & 10.05 & 10.07 J224903.77-553625.7 & 22:49:03.8 & -55:36:26 & 11.02 & 0.87 & 9.28 & 8.72 & 8.64 & 8.51 & 8.53 J225118.77-381438.6 & 22:51:18.8 & -38:14:39 & 11.24 & 0.82 & 9.68 & 9.15 & 9.09 & 8.99 & 9.03 J225127.45-504940.9 & 22:51:27.5 & -50:49:41 & 11.11 & 0.83 & 9.51 & 9.01 & 8.95 & 8.87 & 8.89 J225629.26-472146.3 & 22:56:29.3 & -47:21:46 & 11.58 & 0.96 & 9.47 & 8.95 & 8.82 & 8.75 & 8.74 J225755.49-562253.9 & 22:57:55.5 & -56:22:54 & 12.67 & 0.91 & 10.81 & 10.22 & 10.10 & 10.06 & 10.08 J225910.89-482942.6 & 22:59:10.9 & -48:29:43 & 12.18 & 0.85 & 10.53 & 10.03 & 9.94 & 9.88 & 9.90 J230027.75-343043.4 & 23:00:27.8 & -34:30:43 & 13.22 & 0.89 & 11.55 & 11.07 & 10.96 & 10.84 & 10.86 J230035.35-381608.7 & 23:00:35.4 & -38:16:09 & 13.12 & 1.18 & 10.74 & 10.21 & 10.02 & 9.96 & 9.95 J230058.16-393501.4 & 23:00:58.2 & -39:35:01 & 13.22 & 1.42 & 10.66 & 10.07 & 9.86 & 9.76 & 9.76 J230140.88-413753.6 & 23:01:40.9 & -41:37:54 & 13.04 & 1.06 & 11.10 & 10.50 & 10.36 & 10.29 & 10.32 J230156.02-410843.4 & 23:01:56.0 & -41:08:43 & 12.82 & 0.90 & 11.07 & 10.57 & 10.49 & 10.42 & 10.41 J230208.96-405043.1 & 23:02:09.0 & -40:50:43 & 12.66 & 0.88 & 11.08 & 10.55 & 10.37 & 10.11 & 10.10 J230251.15-411451.4 & 23:02:51.1 & -41:14:51 & 12.79 & 0.87 & 11.08 & 10.56 & 10.49 & 10.39 & 10.42 J230448.59-431103.0 & 23:04:48.6 & -43:11:03 & 12.90 & 1.08 & 10.67 & 10.07 & 9.91 & 9.85 & 9.86 J230634.68-385535.9 & 23:06:34.7 & -38:55:36 & 11.27 & 0.98 & 9.13 & 8.59 & 8.42 & 8.40 & 8.37 J230639.92-294818.9 & 23:06:39.9 & -29:48:19 & 13.28 & 1.01 & 11.39 & 10.82 & 10.71 & 10.64 & 10.66 J230647.47-303939.0 & 23:06:47.5 & -30:39:39 & 12.83 & 0.85 & 11.16 & 10.62 & 10.55 & 10.48 & 10.51 J230745.33-281307.8 & 23:07:45.3 & -28:13:08 & 11.88 & 1.04 & 9.71 & 9.15 & 9.05 & 8.96 & 8.97 J230859.70-532018.8 & 23:08:59.7 & -53:20:19 & 13.16 & 0.90 & 11.36 & 10.79 & 10.71 & 10.63 & 10.64 J230917.73-532004.2 & 23:09:17.7 & -53:20:04 & 13.06 & 1.04 & 11.03 & 10.49 & 10.36 & 10.28 & 10.29 J230948.74-384120.0 & 23:09:48.8 & -38:41:20 & 12.74 & 1.30 & 10.19 & 9.60 & 9.39 & 9.36 & 9.36 J230958.05-580926.2 & 23:09:58.0 & -58:09:26 & 12.07 & 0.73 & 10.71 & 10.22 & 10.16 & 10.08 & 10.11 J231041.88-401411.7 & 23:10:41.9 & -40:14:12 & 13.30 & 0.90 & 11.47 & 10.96 & 10.84 & 10.80 & 10.82 J231223.21-462509.1 & 23:12:23.2 & -46:25:09 & 12.55 & 0.83 & 10.79 & 10.31 & 10.19 & 10.14 & 10.14 J231242.11-015305.1 & 23:12:42.1 & -01:53:05 & 11.48 & 0.94 & 9.50 & 8.91 & 8.78 & 8.70 & 8.72 J231251.96-440022.8 & 23:12:52.0 & -44:00:23 & 13.08 & 0.90 & 11.35 & 10.84 & 10.68 & 10.58 & 10.59 J231342.08-465046.3 & 23:13:42.1 & -46:50:46 & 12.76 & 0.69 & 11.28 & 10.82 & 10.77 & 10.70 & 10.73 J231433.38-523439.3 & 23:14:33.4 & -52:34:39 & 12.23 & 0.85 & 10.49 & 9.99 & 9.90 & 9.82 & 9.84 J231529.71-462235.1 & 23:15:29.7 & -46:22:35 & 12.13 & 0.92 & 10.26 & 9.71 & 9.60 & 9.53 & 9.54 J231645.30-404728.2 & 23:16:45.3 & -40:47:28 & 11.58 & 0.86 & 9.77 & 9.24 & 9.12 & 9.05 & 9.06 J231756.18-555332.4 & 23:17:56.2 & -55:53:32 & 12.16 & 1.04 & 10.18 & 9.61 & 9.47 & 9.38 & 9.40 J231845.22-280744.8 & 23:18:45.2 & -28:07:45 & 12.44 & 0.81 & 10.80 & 10.32 & 10.22 & 10.15 & 10.16 J231849.69-575012.6 & 23:18:49.7 & -57:50:13 & 12.25 & 0.85 & 10.64 & 10.05 & 9.97 & 9.90 & 9.94 J232105.51-093254.7 & 23:21:05.5 & -09:32:55 & 13.36 & 1.12 & 11.14 & 10.55 & 10.46 & 10.34 & 10.37 J232205.07-414138.3 & 23:22:05.1 & -41:41:38 & 12.56 & 0.81 & 10.89 & 10.43 & 10.31 & 10.26 & 10.27 J232354.58-473023.4 & 23:23:54.6 & -47:30:23 & 10.80 & 0.77 & 9.05 & 8.47 & 8.39 & 8.33 & 8.37 J232715.93-595258.3 & 23:27:15.9 & -59:52:58 & 13.21 & 0.93 & 11.21 & 10.67 & 10.54 & 10.45 & 10.46 J232752.28-475202.3 & 23:27:52.3 & -47:52:02 & 11.30 & 0.82 & 9.72 & 9.19 & 9.11 & 9.03 & 9.06 J232922.88-102343.6 & 23:29:22.9 & -10:23:44 & 11.92 & 0.61 & 10.61 & 10.14 & 10.12 & 10.03 & 10.04 J232956.10-565659.8 & 23:29:56.1 & -56:57:00 & 12.77 & 0.74 & 11.24 & 10.73 & 10.68 & 10.57 & 10.61 J233348.55-163352.9 & 23:33:48.5 & -16:33:53 & 12.14 & 1.04 & 10.04 & 9.52 & 9.35 & 9.21 & 9.18 J233624.25-470759.8 & 23:36:24.3 & -47:08:00 & 11.30 & 0.82 & 9.64 & 9.14 & 9.05 & 8.97 & 8.99 J233628.41-553742.6 & 23:36:28.4 & -55:37:43 & 11.40 & 0.73 & 9.76 & 9.26 & 9.19 & 9.13 & 9.16 J233652.73-574249.6 & 23:36:52.7 & -57:42:50 & 11.72 & 1.03 & 9.52 & 8.96 & 8.80 & 8.79 & 8.79 J234331.22-450953.3 & 23:43:31.2 & -45:09:53 & 13.37 & 0.91 & 11.59 & 11.04 & 10.96 & 10.89 & 10.90 J234343.67-161522.4 & 23:43:43.7 & -16:15:22 & 12.61 & 0.85 & 11.00 & 10.46 & 10.39 & 10.33 & 10.33 J234921.12-405759.0 & 23:49:21.1 & -40:57:59 & 13.66 & 1.20 & 11.31 & 10.73 & 10.54 & 10.47 & 10.48 J235146.46-541146.8 & 23:51:46.5 & -54:11:47 & 13.67 & 1.20 & 11.22 & 10.63 & 10.48 & 10.31 & 10.33 J235242.16-451355.1 & 23:52:42.2 & -45:13:55 & …& …& 9.92 & 9.33 & 9.21 & 9.09 & 9.11 J235351.50-224735.2 & 23:53:51.5 & -22:47:35 & 12.60 & 1.40 & 9.95 & 9.35 & 9.12 & 8.97 & 8.96 J235637.15-515552.9 & 23:56:37.2 & -51:55:53 & 13.50 & 0.97 & 11.52 & 11.00 & 10.87 & 10.81 & 10.83 J235714.41-474944.8 & 23:57:14.4 & -47:49:45 & 13.02 & 0.87 & 11.23 & 10.70 & 10.65 & 10.56 & 10.59 J235834.61-554917.0 & 23:58:34.6 & -55:49:17 & 12.87 & 0.99 & 10.89 & 10.33 & 10.23 & 10.16 & 10.19 llcccr J121846.49-161050.8 & HE 1216-1554 & …& …&  − 3.33 & J123916.23-264746.8 &            …& …& …&  − 2.23 & J123918.93-264941.9 &            …& …& …&  − 2.23 & J125045.87-282656.3 &            …& …& …&  − 2.01 & J131947.00-042310.2 & HE 1317-0407 & 4525 & 0.30 &  − 3.10 & J143355.92-124035.7 & HE 1431-1227 & …& …&  − 2.58 & J212138.43-395612.9 & BPS CS 30492-0116 & …& …&  − 2.05 & J220216.36-053648.5 & HE 2159-0551 & …& …&  − 3.03 & J220841.10-394512.2 & BPS CS 22881-0040 & …& …&  − 2.27 & J221059.73-022525.2 & HE 2208-0240 & …& …&  − 2.06 & J222025.81-102320.2 & BPS CS 22886-0042 & …& …&  − 2.61 & J230449.13-424348.0 & HE 2302-4259 & …& …&  − 3.15 & J231300.04-450706.6 & HE 2310-4523 & …& …&  − 2.60 & J232607.41-055006.9 & BPS CS 22949-0048 & 4620 & 0.95 &  − 3.37 & Data Quality Flags ================== We make four WISE data quality checks. First, that the WISE *W*1, *W*2, and *W*3 photometry be free of artifacts. Second, that the objects are fully consistent with a point source. Third, that the quality of the photometry in both *W*1 and *W*2 has been rated ‘A’. Fourth, that the level of contamination by the Moon in *W*1, *W*2, and *W*3 be consistent with zero. The following SQL commands can be used to reproduce our initial selection by setting limits on an ``All Sky Search" of the AllWISE Source catalog available from the NASA/IPAC Infrared Science Archive[7](#fn7). ``` j_m_2mass - h_m_2mass between 0.45 and 0.6 and w3mpro > 8 and w1mpro - w2mpro between
arxiv_0000294
The 4d superconformal index near roots of unity and3d Chern-Simons theory ========================================================================= Introduction ============ In the last few years there has been a renewed interest in the study of the superconformal index of 4d N = 1 superconformal field theories (SCFTs) and, in particular, N = 4 super Yang-Mills (SYM). The index in question is the supersymmetric partition function of the SCFT on *S*3 × *S*1 which receives contributions from BPS states that preserve two supercharges $({\mathcal Q}, {\overline{\mathcal{Q}}})$. In the large-*N* limit, the expectation from AdS/CFT is that the index should account for the entropy of the BPS black holes (BH) that preserve the same two supercharges in the dual supergravity on AdS5. This question was introduced in , and the work of the last few years has shown that the index indeed captures the BH entropy in different asymptotic limits . The focus of the present paper is the *Cardy-like limit* in which the BH entropy becomes very large. In the canonical ensemble, this translates to the study of the exponential growth of the index as *τ* → 0, where the parameter *τ* is the chemical potential dual to the charge. As pointed out in , the *τ* → 0 limit is in fact one of an infinite number of inequivalent Cardy-like limits in which the index is expected to grow exponentially. These limits correspond to *τ* approaching a rational number or, equivalently, $q = e^{2\pi {{\rm i}}{\tau}}$ approaching a root of unity. In this paper we analyze the 4d superconformal index near a general root of unity, and find interesting relations to three-dimensional Chern-Simons (CS) theory. The main statement is that the asymptotics of the index near a rational point  − *n*/*m* is equal (to all orders in perturbation theory in deviations ${\widetilde}{\tau}= m{\tau}+n$ from the rational point) to the partition function of a certain 3d N = 2 gauge theory with Chern-Simons couplings that involve background as well as dynamical fields on an *S*3/Z*m* orbifold. The background couplings give rise to singular terms at ${\text{O}}(1/{\widetilde}{\tau}^2)$ and ${\text{O}}(1/{\widetilde}{\tau})$ that govern the growth of the index, while the constant O(1) term receives contributions from both background fields and the dynamical Chern-Simons theory. We demonstrate this statement from two points of view—by direct asymptotic analysis of the index near rational points, and from an analysis of the reduced three-dimensional theory and calculating the various couplings using high-temperature effective-field theory (EFT) techniques. The latter method, based on , relates the high-temperature asymptotics of the index to a low-energy effective field theory, in the spirit of the Cardy formula.[1](#fn1) 0.4cm **The four-dimensional superconformal index and its asymptotic growth** 0.1cm In this paper we study N = 1 gauge theories with a Lagrangian description and a *U*(1)*R* symmetry, with a focus on N = 4 SYM which we use to illustrate some statements in detail. The symmetry algebra of N = 1 SCFT on *S*1 × *S*3 is *S**U*(2, 2∣1), which includes the energy *E* which generates translations around *S*1, the angular momenta *J*1, *J*2 on *S*3, and the *U*(1) R-charge *Q*. One can pick a complex supercharge obeying the following algebra, [QQbaralg] {Q, } = E-J1-J2-Q. The most general index built out of the N = 1 superconformal algebra is an extension of the Witten index of ${\mathcal Q}$ and is defined as the following trace over the physical Hilbert space, [defindex] I(, ) = Tr (-1)F e- {Q, } +2 i(J1+Q)+2 i(J2+Q). The trace  only receives contributions from states annihilated by the supercharges ($\frac14$-BPS states) so that the right-hand side of  vanishes for these states. This index ${\mathcal I}({\sigma},{\tau})$ can be calculated from either Hamiltonian or functional integral methods and reduces to a unitary matrix integral , which can be written as an integral over the space of gauge holonomies around the *S*1 of certain infinite products, as written in Equation . Our focus in this paper is the analog, in the present context, of the high-temperature Cardy limit of 2d CFT. This means fixing the rank and taking the charges (*J**i*, *Q*) to be larger than any other scale in the theory. In the canonical ensemble this translates to taking Im *σ*,  Im *τ* → 0 at fixed rank. In order to calculate the asymptotic growth of states along a certain direction in the charge lattice, one needs to fix the relation between *σ* and *τ*. We study[2](#fn2) the slice *σ* = *τ* − *n*0 with *n*0 an integer, as in . Setting 2*J* = *J*1 + *J*2, the resulting canonical index ${\mathcal I}$ is given by [eq:n0Index] I(;n0) = Tr (-1)F e-{Q, } -2 in0 (J1+Q)+2 i(2J+Q). The large-charge asymptotics then implies Im *τ* → 0, while Re *τ* is not fixed a priori by the limit. We consider asymptotic limits as *τ* approaches a rational number *τ* →  − *n*/*m* with gcd(*m*, *n*) = 1, introduced in the present context as new Cardy-like limits in . The index I clearly depends on the gauge group *G*. We generally suppress it in our notation, but sometimes use the notation I*N* to emphasize the dependence on *N* for *U*(*N*) or *S**U*(*N*) N = 4 SYM theory (which should be clear from the context). Our motivation to consider these rational points comes from the study of the index ${\mathcal I}\_N({\tau})$ of N = 4 SYM in the large-*N* limit.[3](#fn3) In this limit one considers charges scaling as *N*2 as *N* → ∞, which translates to *N* → ∞ at fixed *τ* in the canonical ensemble . In this large-*N* limit one expects the field theory index ${\mathcal I}\_N({\tau})$ to be written as a sum over saddles. This picture has been partially realized in the last few years using two different approaches—the Bethe-ansatz-like approach developed in , and the direct study of large-*N* saddle points using an elliptic extension of the action . In particular, the large-*N* approach in  found a class of saddles labelled by rational numbers  − *n*/*m*, where the perturbation expansion around each saddle is given by the asymptotic limit *τ* →  − *n*/*m*.[4](#fn4) Setting *n*0 =  − 1, we have [mnasymp] IN() ~ -S(m,n; ), -n/m, where the effective action at each saddle is given by [actionEll] S(m,n; ) =, m+n. where *χ*1(*n*) is the Dirichlet character equal to 0,  ± 1 when *n* ≡ 0,  ± 1 (mod 3), respectively. There was one caveat in the above result, which was stressed in , namely that the pure-imaginary ${\widetilde}{\tau}$-independent term could not be fixed by the methods used in those papers. The constant term in the effective action , therefore, was a convenient choice made using inputs coming from outside the field-theory analysis. Although we do not have a rigorous notion of the sum over saddles yet, it should be clear that if the effective action of the (*m*, *n*) saddle has negative real part it dominates over the others as *m**τ* + *n* → 0. It is also clear from  that the fastest growth among these saddles comes from (*m*, *n*) = (1, 0). The (1, 0) saddle in the SYM theory is identified as a fully deconfined phase whose entropy scales as *N*2, while the other (*m*, *n*) saddles have entropies that are suppressed by a factor of *m*. For this reason they can be called partially deconfined saddles (in the sense of asymptotic growth, but not in the sense of center symmetry breaking—cf. ). On the gravitational side, the action *S*eff(1, 0; *τ*) agrees precisely with the canonical on-shell action of the black hole solution in the dual AdS5 supergravity , which leads to the identification of the AdS5 BH as the saddle (1, 0). The (*m*, *n*) solutions have been identified with orbifolds of the Euclidean AdS5 BH . Because of the dominance of the (1, 0) saddle near *τ* → 0, one can capture it directly in an asymptotic expansion—even for finite *N*. In this calculation, one writes the index  as an integral over gauge holonomies *u**i* (see  below), estimates the integrand in the Cardy-like limit *τ* → 0, and then performs the integrals. The initial studies  successfully reproduced the singular parts of the action as *τ* → 0, i.e. the 1/*τ*2 and the 1/*τ* terms with the correct coefficients. More recently, the complete action  for (*m*, *n*) = (1, 0) was obtained in  by a direct method, involving a careful analysis of all perturbative terms in the Cardy-like limit. (See for more recent related work.) Our first goal in this paper is to obtain the complete perturbative action at all the (*m*, *n*) saddles by a direct asymptotic analysis of the index as *τ* →  − *n*/*m*. This analysis is described in Section [sec:SCI], the result of which is a perfect agreement with the action , up to the constant terms as mentioned above. The asymptotic analysis requires developing the asymptotics of the elliptic gamma function near rational points. The *τ* → 0 asymptotic estimates were available in previous literature . Here we develop the analysis for *τ* approaching rational numbers. The analysis is presented in Appendix [app:Estimates]. (See also  for related work motivated by integrable-systems considerations.) Furthermore, we note that for given *m*, *n*, depending on the sign of $\mathrm{arg}{\widetilde}{\tau}-\pi/2$ the action in  can have negative or positive real part, which respectively yields a growing or decaying contribution to the index. Therefore in essentially half of the parameter space the saddles in  do not capture any growth in the index. As demonstrated in Section [sec:Ccenter], when the (*m*, *n*) saddle in  gives a decaying contribution to the index as ${\widetilde}{\tau}\to0$, a “2-center saddle” takes over which yields exponential growth again. In other words, in half of the parameter space the growth of the index I*N*(*τ*) is captured by 2-center saddles. (These turn out to be partially deconfined saddles both in the sense of asymptotic growth and in the sense of center symmetry breaking—cf. .) 0.4cm **Chern-Simons theory from the asymptotics of the 4d index** 0.1cm The second goal of the paper is partly inspired by an interesting pattern appearing in the asymptotic calculations. As emphasized in the context of SU(*N*) N = 4 SYM in , in the part of the parameter space where the index is dominated by isolated, 1-center saddles, the complete asymptotic expansion in *τ* terminates at *O*(*τ*)—i.e. the perturbation theory only contains 1/*τ*2, 1/*τ*, *τ*0 and *τ* up to exponentially suppressed corrections. (This is, in fact, more generally true when the index is dominated by isolated saddles, and not true when there are flat directions; see .) Interestingly, it was found in  that the constant term in the expansion contains the partition function of SU(*N*) pure Chern-Simons theory on *S*3 at level  ± *N*. In this paper we find that the same structure persists at all rational points. We see that the constant term in the expansion as *τ* →  − *n*/*m* involves Chern-Simons theory whose action is 1/*m* times the action as *τ* → 0. We present evidence that this corresponds to CS theory on an orbifold space *S*3/Z*m* (with the action of Z*m* depending on *n* such that the orbifold coincides with the lens space *L*(*m*,  − 1) when *n* = 1) at level  ± *N* . In other words, the 4d SYM index appears to play the role of a master index which governs the partition function of three-dimensional CS theory on an infinite family of *S*3 orbifolds. The appearance of 3d Chern-Simons theory from the 4d superconformal index is intriguing, and gives rise to two related questions: (a) is there a direct three-dimensional physics explanation of the appearance of Chern-Simons theory? (b) can we also understand the singular terms in the asymptotic expansions around rational points as being related to 3d Chern-Simons theory? The answers to both these questions are positive, as we now explain. 0.4cm **The asymptotics of the 4d index from supersymmetric Chern-Simons theory** 0.1cm The natural idea is that the reduction of the four-dimensional theory on *S*1 gives rise to a three-dimensional theory on *S*3 in a ``high-temperature" expansion in powers of the circumference *β* of the shrinking circle. If we calculate the functional integral of the three-dimensional theory, we should recover the four-dimensional functional integral as *β* → 0. The three-dimensional effective field theory is known to have a derivative expansion, where the most relevant terms are Chern-Simons terms . This EFT approach was developed in the supersymmetric context in  who presented *supersymmetrized* CS actions involving the dynamical as well as background fields, which are necessary for preserving supersymmetry on *S*3 × *S*1. In particular, the 1/*β*2 and 1/*β* effective actions derived this way in  reproduced the asymptotics of the the index as found in  for *n*0 = 0 and arg(*τ*) = *π*/2. (Note that when the metric on *S*3 × *S*1 has a direct product form with *S*3 the unit round three-sphere, a real value of *β* determines a purely imaginary ${\tau}=\frac{{{\rm i}}\beta}{2\pi}$.) The coefficient of the leading 1/*β*2 term in these works is pure imaginary, and also does not grow as *N*2 (it is in fact zero for non-chiral theories), therefore the exponential growth of states corresponding to the BH is not captured there. One of the motivations for the current paper is to explain the exponential growth associated to the bulk black holes from the three-dimensional point of view, which requires arg(*τ*) ≠ *π*/2 and *n*0 ≠ 0.[5](#fn5) (Note, in particular, that the (*m*, *n*) = (1, 0) saddle in given for *n*0 =  − 1, would have its leading piece a pure phase if arg(*τ*) = *π*/2.) For this purpose we consider, as in , a background geometry of the form *S*3 × Ω*S*1, with *S*3 the unit round three-sphere, *γ* the circumference of the circle, and Ω a twist parameter[6](#fn6) controlling the deviation of the metric from a direct product form (Equation ). The imaginary part of the twist parameter determines a non-zero real part of *τ* via (Equation ) = (1-Ø). [eq:tauAndOmega] As shown in , the integer *n*0 in  controls the periodicity of the fermions in this background, and *n*0 =  ± 1 (which is naturally dual to the BH) corresponds to anti-periodic fermions, i.e., as in a Scherck-Schwarz reduction. In the present context we insist on supersymmetry being preserved—and that necessitates the turning on of other background fields under which the fermions are charged. In the three-dimensional background supergravity, we have a non-zero graviphoton from the fibration as well as non-zero auxiliary background gauge and scalar fields. As we explain in Section [sec:4dto3d], the resulting configuration is effectively described by a circle of radius *R*, which in the limit *γ* → 0, Ω → ∞ with *τ* fixed obeys *R* → *τ*. Now, what is the actual calculation? There are two types of fields in the three-dimensional functional integral—background fields which take constant values, and dynamical modes which fluctuate in the integral. The latter is further made up of light modes (with zero momentum around *S*1) and heavy (Kaluza-Klein) modes. The first step is to integrate out the heavy modes in order to obtain an effective action for the light modes. The integration over heavy modes also generates corrections to the coefficients of the supersymmetric Chern-Simons terms of the non-zero background fields, see e.g. . In these calculations, we need to include, in addition to the couplings discussed in , the supersymmetrized RR and gravitational CS actions which were discussed in . The effective actions of the background gauge fields turn out to produce precisely the singular pieces 1/*τ*2 and 1/*τ* in the asymptotic expansion of the index, as well as a constant piece. The remaining functional integral is described by an N = 2 SYM theory with a certain one-loop induced CS coupling on *S*3, whose partition function is known to agree, up to a sign, with that of pure Chern-Simons theory . This explains the appearance of the dynamical Chern-Simons theory in the constant term of the asymptotic expansion. Two technical remarks are in order. Firstly, recall that supersymmetry implies that the 4d superconformal index should not depend on *γ* and Ω separately, but only on their combination *τ* as in . In  this was shown to be true in 5d gravitational variables, as well as through a localization computation in 4d field theory. In this paper we verify this also in 3d effective field theory. Secondly, the order of limits is important to have a smooth calculational set up. We first send *γ* → 0 keeping Ω fixed, so that the three-dimensional geometry is smooth and finite. Then we take Ω → ∞ at fixed *τ* and express the result in terms of *τ* using . We find there are no singularities generated in the latter step and thus the limiting procedure is perfectly smooth. Finally, we repeat the same analysis as *τ* approaches rational points. The dimensional reduction in this case naturally leads us to considering orbifolds of *S*3 × *S*1, which, as far as we understand, are related to the orbifolds discussed in . The three-dimensional calculation then leads to the $1/{\widetilde}{\tau}^2$ and $1/{\widetilde}{\tau}$ terms as well as a constant piece from the background fields, and we provide evidence that the remaining dynamical piece is the partition function of N = 2 SYM with a one-loop induced CS coupling on *S*3/Z*m*. 0.4cm **Notation**. We have *σ*, *τ* ∈ H and *z*, *u* ∈ C, and we set $p={{\rm e}}^{2\pi i{\sigma}}$, $q={{\rm e}}^{2 \pi {{\rm i}}{\tau}}$, ${\zeta}={{\rm e}}^{2\pi{{\rm i}}z}$. We use  ≃  to mean an all-order asymptotic equality of the logarithms of the two sides. The 4d superconformal index and its asymptotic expansion [sec:SCI] ================================================================== We consider a four-dimensional N = 1 gauge theory which flows to a superconformal fixed point. The theory has gauge group *G* (which we take to be semi-simple, and separately comment on the *U*(*N*) case), and a number of chiral multiplets labelled by *I* with R-charge *r**I* and in the representation R*I* of the gauge group. We assume 0 < *r**I* < 2 for all chiral multiplets. The superconformal index for these theories on *S*3 × *S*1 has been calculated in the Hamiltonian as well as functional integral formalism , and the answer is expressed as an integral over the Cartan torus which we parameterize by the vector of gauge holonomies ${\underline{u}}= (u\_1, \dots, u\_{\text{rk}(G)})$, with *u**i* ∈ R/Z. The index is given by the following expression  [eq:pqIndex] I(,) = vec (;,) chi (;,). Here we have used the measure $D{\underline{u}}= \frac{1}{|\mathcal{W}|} \prod\_{i=1}^{\text{rk}(G)} {\mathrm{d}}u\_i$ with ∣W∣ the order of the Weyl group of *G*. For *U*(*N*) we have $D{\underline{u}}= \frac{1}{N!} \prod\_{i=1}^{N} {\mathrm{d}}u\_i$, while for *S**U*(*N*) one can work with *u*1, …, *u**N* subject to ∑*i**u**i* ∈ Z and $D{\underline{u}}= \frac{1}{N!} \prod\_{i=1}^{N-1} {\mathrm{d}}u\_i$. The factors $\mathcal{Z}\_{\rm vec}$, $\mathcal{Z}\_{\rm chi}$ denote the vector multiplet and chiral multiplet contribution respectively given by vec (;,) &= (p;p)rk(G)(q;q)rk(G) + (+u + + ;, ) ( - +u + + ;, ), chi (;,) &= I I (u+ (+);,). [ZvecZchi] Here *α*+ runs over the set of positive roots of the gauge group *G*, *I* runs over all the chiral multiplets of the theory, and *ρ* is the weight of the gauge representation R*I*. The *Pochhammer symbol* is defined by (;q) = k=0(1- qk), [eq:PochDef] and the *elliptic gamma function*  is defined by the infinite product formula (z;,) = j,k0. [eq:GammaDef] From now on in this section we set *σ* = *τ* − *n*0, and use the notation Γe(*z*) = Γe(*z*; *τ*, *τ*). We have I(;n0) = ( -S(;) ), [eq:SYMIndex] where the action $S\_\text{ind}({\underline{u}}) = S\_\text{ind}({\underline{u}};{\tau})$ is given by -S() & = 2 rk(G) (q;q) + + ( (+u + 2) ( - +u + 2) ) & + I I (u+ rI (- 12 n0) ). [defVmicro] Our goal now is to calculate the asymptotics of the function ${\mathcal I}({\tau},n\_0)$ as *τ* approaches a rational number or, equivalently, $q=e^{2 \pi {{\rm i}}\tau}$ approaches a root of unity. For N = 4 SYM we have -S=4() = & 2N (q;q) + 3N (13 (2- n0)) & + ij (uij +2) + 3 ij (uij + 13 (2- n0) ) [eq:N4Index] for *U*(*N*) and a similar expression for *S**U*(*N*) with *N* replaced by *N* − 1. Using the product expression  we see that for N = 4 SYM the index ${\mathcal I}\_N({\tau};n\_0)$ has the symmetry *τ* ↦ *τ* + 3 for fixed *n*0, so that we can restrict our attention to, say, *τ* ∈ [0, 3]. Relatedly, the independent values of *n*0 are 0,  ± 1. More generally, the periodicity of *τ* depends on the quantization of R-charge in the theory. Before analyzing these asymptotic limits we briefly discuss a slightly independent motivation to study these new limits and, relatedly, the origin of the number *n*0 in ,. One of the motivations in the recent developments in this subject has been to ``find the dual black hole" in the superconformal index. In terms of the microcanonical Fourier coefficients IN (;n0) = n dN(n;n0) e2 in , the problem in the context of the Cardy-like limit is to check if ∣*d**N*(*n*; *n*0)∣ ∼ *N*2 *s*(*n*/*N*2) as *n* → ∞ . In the canonical setting this is reflected by a corresponding asymptotic growth of the function ${\mathcal I}({\tau})$ as *τ* approaches the real axis or, equivalently, as $q = e^{2 \pi {{\rm i}}{\tau}}$ approaches the unit circle. The leading asymptotics of the growth of microcanonical degeneracies is governed by the dominant singularity of ${\mathcal I}$. As it turns out, the index ${\mathcal I}\_N({\tau};0)$ of N = 4 SYM does not have any exponential growth as *τ* → 0 (the growth is power-law ). It is the asymptotic growth of $\log {\mathcal I}\_N({\tau};0)$ as *τ* →  ± 1 instead that matches the on-shell action of the AdS5 BH (the two points giving growth of equal magnitude and opposite phases). From a numerical study of the microcanonical degeneracies one can deduce that this is, in fact, the leading growth of the index . In this case, noting that I*N*(*τ* ± 1; *n*0) = I*N*(*τ*; *n*0 ∓ 1), we see that the leading growth can be stated as coming from the growth of the function ${\mathcal I}\_N({\tau},\mp1)$ as *τ* → 0. Actually, one finds that the growth of states at *n*0 =  ± 1 matches the BH growth of states for very large classes of N = 1 SCFTs . Once we understand that the growth can come from a region with Im *τ* → 0 but Re *τ* ≠ 0, it is perhaps more natural to set *n*0 = 0 (for which the two regions of leading growth have a symmetric placement around *τ* = 0). We will, nevertheless, keep *n*0 as an explicit parameter in the following to make contact with related literature. It is clear from the above discussion that one should equally well explore other points on the unit circle in *q*.[7](#fn7) As it turns out there is exponential growth near any root of unity consistent with ,, i.e. partial deconfinement in the sense of asymptotic growth. In the following subsections we proceed to analyze the asymptotic behavior of the index as *τ* → 0 and then as *τ* approaches any rational number. Asymptotics of the index as *τ* → 0 ----------------------------------- In this subsection we perform an all-order asymptotic analysis of the integral  as *τ* → 0. This calculation was done for N = 4 SYM recently in  using a saddle-point analysis. Here we find the asymptotics for the general class of theories discussed in the introduction, using the rigorous method of  (see in particular Section 3.1 of ). The application in  was restricted to real *τ* and *n*0 = 0, but the method is more general and we apply it to the case of complex *τ* and general *n*0. We first calculate the all-order asymptotic expansion of the integrand. In order to do this we need the the asymptotic behavior of the elements in , namely the Pocchammer symbol and the elliptic gamma function, which we review in Equations ([eq:PochEst]), ([eq:numEst]),. Using these estimates we find that in the range ${\alpha}\_+ \cdot {\underline{u}}\in (-1+\delta,1-\delta)$ (for fixed small *δ*) the integrand of  can be written, up to exponentially suppressed corrections, as ( -S(;) ) +42() ( -2iE -V () ). [eq:In0semi-simpleAsy0] The all-order effective potential as *τ* → 0 is given by V() = V2()+ V1()+V0(), [defVsum] with V2()&=I, I 3 (I - 12 rI n0 ), V1()&=I,I i(rI-1) 2 (I - 12 rI n0)+ i (()2 + ), V0()&=I,I i( (rI-1)2- ) 1 ( I - 12 rI n0), [eq:defVs] where *α* runs over all the roots of *G* including the rk(*G*) zero roots, *I* runs over all the chiral multiplets, and *ρ**I* runs over all the weights of the representation R*I*. Note that in  we have separated the supersymmetric Casimir energy given by  E = R3 - R. [Esusy] We make a brief comparison to  in which the singular pieces were studied. The potential *V*2 in  coincides (up a multiplicative $-{{\rm i}}\pi/6$ factor) with the *V*2 studied in . At finite ${\underline{u}}$, the sinh$^2(\frac{\pi\alpha\_+\cdot{\underline{u}}}{-{{\rm i}}\tau})$ factors in  also contribute to O(1/*τ*). Including this piece in *V*1 renormalizes it to *V*1*r* as[8](#fn8) $$V\_1({\underline{u}})\to V^r\_1({\underline{u}}) {\;= \;}\sum\_{I,\,\rho\_I} \, {{\rm i}}\pi(r\_I-1) \, \overline{B}\_2 \bigl(\rho\_I \cdot {\underline{u}}- \tfrac12 r\_I n\_0\bigr)+ \sum\_{\alpha} {{\rm i}}\pi \, \overline{B}\_2 \bigl(\alpha \cdot {\underline{u}}\bigr) \,.\label{eq:V1ren}$$ The potential *V*1*r* coincides (up to a multiplicative $-{{\rm i}}\pi$ factor) with the *V*1 studied in . In our treatment below we keep the sinh2 factors separate and place them in the ``dynamical measure" +42(). Compared to , here we also include the O(*τ*0) piece corresponding to exp( − *V*0). Finally, the O(*τ*) piece of the exponent is determined by the supersymmetric Casimir energy and, notably, there are no O(*τ*2) or higher corrections in the perturbative effective potential. We now investigate the local behavior of the potential near ${\underline{u}}=0$. The potentials *V*2, *V*1, *V*0 are piecewise polynomials, and using $\overline{B}'\_j=j \overline{B}\_{j-1}$ we obtain their Taylor expansion near ${\underline{u}}=0$ as V2() &=I,I ( 3 (- 12 rI n0 ) + i 2 (-12 rI n0 )I+ 2i 1(-12 rI n0 ) ) &+ I,I 2i, V1() &=I,I i(rI-1) 2(-12 rI n0)+ G+ I2i (rI-1) 1(-12 rI n0) I &+ I,I 2i(rI-1) + i ()2, V0() &=I,I i((rI-1)2- ) 1(-12 rI n0) &+ I,I i((rI-1)2- )I. Importantly, the second lines of *V*2, *V*1, *V*0 above vanish due to gauge3, *U*(1)*R*-gauge2, and *U*(1)*R*2-gauge and gravity2-gauge anomaly cancellations. Therefore *V*2 is actually piecewise quadratic in ${\underline{u}}$, while *V*1 is piecewise linear and *V*0 is piecewise constant. This is similar to Section 3.1 of . The leading asymptotic behavior of *V* as *τ* → 0 is determined by *V*2. In order to obtain a local minimum of Re(*V*) at ${\underline{u}}=0$, we want (i) the linear term in *V*2 to vanish, and (ii) the quadratic term to be on the negative (respectively positive) imaginary axis for $\mathrm{arg}(\tau)-\frac{\pi}{2}>0$ (respectively $\mathrm{arg}(\tau)-\frac{\pi}{2}<0$). As found in , we can achieve both of these requirements in any theory in which 0 < *r**I* < 2 by specializing to *n*0 =  ± 1. Explicitly, for *n*0 =  ± 1 we can use the fact that for ∣*x*∣ < 1 we have $\overline{B}\_2(x)=x^2-|x|+\frac{1}{6}$ to deduce that the linear term in *V*2 is equal to I,I ( (rI-1)2- ) I, which vanishes thanks to the *U*(1)*R*2-gauge and gravity2-gauge anomaly cancellations. Similarly we can use the fact that for 0 < ∣*x*∣ < 1 we have $\overline{B}\_1(x)=x-\frac{\mathrm{sign}(x)}{2}$ to deduce that the quadratic term in *V*2 is equal to -in0 I,I (rI-1)īn0 , where the equality follows from *U*(1)*R*-gauge2 anomaly cancellation, and we have used that sign(*n*0) = *n*0 for *n*0 =  ± 1. This quadratic piece is on the positive (respectively negative) imaginary axis for *n*0 =  + 1 (respectively *n*0 =  − 1). In this manner we see that ${\underline{u}}=0$ is a local minimum of Re(*V*). Therefore in the rest of this subsection we focus on *n*0 =  ± 1, and take $\mathrm{arg}(\tau)-\frac{\pi}{2}$ to have the opposite sign to *n*0, i.e. $n\_0\, (\mathrm{arg}(\tau)-\frac{\pi}{2})<0$. Using the explicit expressions of $\overline{B}\_{1,2,3}(x)$ in the range 0 < ∣*x*∣ < 1, and using the anomaly cancellation conditions, the potentials *V*2, *V*1, *V*0 simplify, for *n*0 =  ± 1, to V2()&=- (R3-R) + in0 , V1()&= (3R3-R), V0()&=I,I i ( - (rI-1)3). [eq:simplifiedVs] Note that, as a bonus, *V*1 also becomes independent of ${\underline{u}}$ for *n*0 =  ± 1 and small enough ${\underline{u}}$.[9](#fn9) We now consider a small neighborhood h*c**l**ε* around ${\underline{u}}=0$, defined by the cutoff ∣*u**j*∣ < *ε*, whose contribution to the index is (;n0)|uj|< e-2iEcl +42() ( -V() ). [eq:In0semi-simpleAsy1] From the above discussion we have that (;n0=1)|uj|< e-2iE Z(;n0)   Z(;n0), [eq:In0semi-simpleAsy2] where the *background piece* is & Z(;n0) = & ( (R3-R) -(3R3-R) + I ((rI-1)3-16(rI-1) ) ) and the *dynamical piece* is Z(;n0)=cl +42() (( )2 ).[eq:ZdynComplex] Here we suppress the dependence of these functions on the gauge group and the matter content. To simplify *Z**ε*dyn further, we first define $x\_j=\frac{u\_j}{-{{\rm i}}\tau}$, so that the integral becomes along straight contours from $x\_j=-\frac{\epsilon}{-{{\rm i}}\tau}$ to $x\_j=+\frac{\epsilon}{-{{\rm i}}\tau}$. With our choice of *n*0 and arg(*τ*), the integrand is locally exponentially suppressed away from ${\underline{u}}=0$, so we can complete the tails of the contours along straight lines to infinity (i.e. send *ε* →  + ∞) introducing only exponentially small error. The contours make an angle $\frac{\pi}{2}-\mathrm{arg}(\tau)$ with the positive real axis. However, observing that (i) the integrand is exponentially suppressed as ∣*x**j*∣ → ∞, and (ii) there are no poles between the contour of *x**j* and the real axis, we can deform the contours back to the real axis. We thus obtain, with $ {\underline{x}}= (x\_1,\dots,x\_n)$ Z(;n0) -D +42(+ ) ( ()2 ) =: Z(n0).[eq:ZdynS3] As noted in  for N = 4 SYM, and in for more general groups, *Z*dyn is related to the partition function of pure Chern-Simons theory  on *S*3 as Z ( n0)(-1)(G-G)/2 Z(kij), [dynCSrel] with the gauge group implicit and the same on both sides, and with Chern-Simons coupling given by kij=- i j. Notice that *Z*dyn(*n*0) is independent of *τ*. The tails completion (i.e. sending *ε* → ∞) and contour deformation mentioned above removed the *τ*-dependence of *Z**ε*dyn at the cost of an exponentially small error. The considerations of three-dimensional effective field theory in the next section show that *Z*dyn arises naturally in fact as the partition function of three-dimensional N = 2 gauge theory on *S*3 with the same gauge group and the same CS coupling. (The latter is well-known to be related to *Z*CS exactly as in .) 0.4cm The above analysis was local around ${\underline{u}}=0$. We now focus on SU(*N*) N = 4 SYM for which we know that ${\underline{u}}=0$ is a global minimum of the leading potential *V*2/*τ*2 for *n*0 =  ± 1 and $n\_0(\mathrm{arg}({\tau})-\frac{\pi}{2})<0$.[10](#fn10) However, this is not the only global minimum—there are *N* isolated global mimima labelled by *k* = 1, …, *N* which are related to *u**j* = 0 by center symmetry, namely the points *u**j* = (*k* − 1)/*N*, *k* = 1, …, *N*. Upon summing over these minima we obtain IN(;n0) N e-2iE Z(;n0) Z(n0). [eq:almostThere] The factor of *N* arises from the sum over *N* minima as explained above. The other three factors can be calculated by specializing our general discussion to this case: E& = & (N2-1), Z(;n0) & = & (-(N2-1) (( )3 + )+2i(N2-1)), [eq:Zback] Z(n0) & = & -D i<j 4 2 (xij ) ( in0 N j=1N xj2 ). [Zdyn] In this case the matrix of Chern-Simons couplings reduces to a single level (*k**i**j* = *k**δ**i**j*), and we have Z(n0) = (-1)N(N-1)/2 Z(k), k = -n0 N. [ZdynCS] For *n*0 =  ± 1 the SU(*N*) Chern-Simons partition was found in to simplify to $$Z^\text{CS}(-n\_0 N) {\;= \;}(-1)^{N(N-1)/2}\exp \bigl(5 {{\rm i}}\pi \,n\_0 \ (N^2-1)/12 \bigr) \,.$$ Upon combining this equation with  and, we obtain[11](#fn11) $$\begin{split} {\mathcal I}\_N({\tau};n\_0) \; \simeq \; N \, \exp \biggl(-\frac{{{\rm i}}\pi}{\tau^2} \, (N^2-1) \Bigl(\frac{-n\_0+2\tau}{3} \Bigr)^3 \biggr) \,. \label{eq:N=4indexAsyFinal} \end{split}$$ The analogous result for the case with *U*(*N*) gauge group is obtained by adding the contribution of a decoupled *U*(1) N = 4 multiplet to that of the SU(*N*) theory: $$\begin{split} \mathcal{I}^{U(N)}({\tau};n\_0) \; \simeq \; N \, \frac{1}{-{{\rm i}}\tau} \, \exp \biggl( -\frac{{{\rm i}}\pi}{\tau^2} \, N^2 \Bigl(\frac{-n\_0+2\tau}{3} \Bigr)^3 -\frac{5\pi {{\rm i}}\, n\_0}{12} \biggr) \,. \label{eq:N=4indexAsyFinalU(N)} \end{split}$$ This finishes the discussion of our methods illustrated in the special case *τ* → 0. Before moving on to the more general case of rational points, we make a few technical remarks. Firstly, since we are analyzing the index by estimating its integrand, we need uniform estimates. For *n*0 =  ± 1, the estimate  when applied to the chiral multiplet gamma functions gives uniformly valid asymptotics near ${\underline{u}}=0$, because the  − *n*0 *r**I*/2 shift in the argument takes us safely into the domain of validity of . For the vector multiplet gamma functions, however, there is no finite shift in the argument, so  does not apply uniformly around ${\underline{u}}=0$. We had to use instead  to obtain uniform asymptotics near ${\underline{u}}=0$ for the vector multiplet gammas. Secondly, we emphasize that our asymptotic analysis is essentially real-analytic (as in ). We only appeal to complex-analytic tools (specifically, contour deformation), after having done the asymptotic analysis, to simplify the final answer for *Z**ε*dyn in  to the more familiar form . Thirdly, we note that when actually doing the saddle-point analysis, one finds that the dominant holonomy configurations spread into the complex plane, as in the analysis of . Upon taking the *τ* → 0 limit the spreading shrinks, and the answers from those approaches indeed agree with our results. Asymptotics of the index as *τ* → Q ----------------------------------- We now study the index  in the limit $${\widetilde}\tau \; \equiv \; m\tau+n \to 0 \,, \label{eq:rationalLim}$$ with *m*, *n* relatively prime, keeping $\mathrm{arg}({\widetilde}\tau)$ away from integer multiples of *π*/2. As in the previous subsection we first calculate the all-order asymptotics of the integrand. The required small-${\widetilde}{\tau}$ estimates for the Pocchammer symbol and the elliptic gamma function are given in Equations ([eq:PochRationalEst]), ([eq:gammaRationalEstWithR]),. Using these estimates we find that in the range ${\alpha}\_+ \cdot {\underline{u}}\in (-\frac{1}{m}+\delta,\frac{1}{m}-\delta)$, for some fixed small *δ*, the integrand of  can be written up to exponentially suppressed corrections as ( -S(;) ) +42() ( -2i - ). [eq:In0semi-simpleAsyRat0] The all-order effective potential as ${\widetilde}{\tau}\to 0$ is given by V() = V2()+ V1()+V0(), [defVsumRat] with V2()&=I, I 3 (mI +mI ), V1()&=I,I i(rI-1) 2 (mI +mI)+ i ((m)2 + ), V0()&=-2i(G) s(n,m)+I,I 2i C(m,n,I-rI,rI), [eq:defVsRat] where *α* runs over all the roots of *G* including the rk(*G*) zero roots, *I* runs over all the chiral multiplets, and *ρ**I* runs over all the weights of the representation R*I*. Here *s*(*n*, *m*) is the Dedekind sum defined in  and the function *C*(*m*, *n*, *r*, *z*) is defined in . We have defined $$\xi\_I{\; \coloneqq \;}-\frac{r\_I}{2}\big(n\_0+\frac{2n}{m}\big),\label{eq:defXiGen}$$ to emphasize an analogy with the analysis in  of the index with flavor chemical potential *ξ*, although we do not have flavor fugacities in our problem. Note also that we have separated the supersymmetric Casimir energy in  as in the *τ* → 0 case. Next, as in the previous subsection we expand the potentials near ${\underline{u}}=0$. Anomaly cancellations again lead to simplifications, but here we further assume the theory is non-chiral (i.e. that *ρ**I* come in pairs of opposite sign) so that the answer takes a particularly simple form. Analogously to  we obtain V2() &=I,I ( 3 (mI ) + 2i 1(mI ) ), V1() &=I,I i(rI-1) 2(mI)+ (G), V0() &=V0(0)= -2i (G) s(n,m) +I,I 2i C(m,n,-rI,rI). [eq:simplifiedVtildes] Next we focus on a small neighborhood h*c**l**ε* around ${\underline{u}}=0$, defined by the cutoff ∣*u**j*∣ < *ε*, whose contribution to the index as ${\widetilde}{\tau}\to0$ is (;n0)|uj|< e-2i cl +42() ( - ). [eq:In0semi-simpleAsyRat1] Upon putting the above discussion together we obtain (;n0)|uj|< e-2i Z(;m,n,n0)   Z(;m,n,n0), [eq:In0semi-simpleAsyRat2] where the *background piece* is & Z(;m,n,n0) = & ( -I,I 3 (mI ) -I,I i(rI-1) 2(mI)+ (G) + V0(0) ), and the *dynamical piece* is Z(;m,n,n0)=cl +42() (+ I,I 1 (mI )( )2 ). Upon defining the rescaled variable $x\_j=\frac{u\_j}{-{{\rm i}}{\widetilde}\tau}$, we recognize *Z**ε*dyn(*τ*; *m*, *n*, *n*0) as the CS partition function on *S*3 with gauge group *G* and level $k^{ij}{\;= \;}-\frac{1}{m}\sum\_{I,\rho\_I}\overline{B}\_1 \bigl(m\xi\_I \bigr) \rho\_I^i \rho\_I^j$. We will see momentarily that it is more natural to define the rescaled variable as $x\_j=\frac{mu\_j}{-{{\rm i}}{\widetilde}\tau}$. Upon tails completion and deforming the integration contour we obtain Z(;m,n,n0) & m-(G)- D +42() (+ I,I 1 (mI )(I)2 ) &=: m-(G) Z(m,n,n0).[eq:Z0Rational] Up to the overall *m*− rk(*G*) factor, this coincides  with the topologically trivial sector of the *S*3/Z*m* partition function of N = 2 SYM with Chern-Simons coupling kij=-I,I1 (mI ) Ii Ij. [eq:CScouplingRat] While the explicit expression for the dominant potential ${\widetilde}V\_2$ in  was derived in a neighborhood $(-\frac{1}{m}+\delta,\frac{1}{m}-\delta)$ of ${\underline{u}}=0$, it is actually correct more generally, because it follows from  which we can use as long as $m\rho\_I \cdot {\underline{u}}+m\xi\_I\notin\mathbb{Z}$. Moreover, since *ρ**I**j* are integers and *u**j* appears in ${\widetilde}V\_2$ in the combination $m\,\rho\_I\cdot{\underline{u}}$, the 1-periodicity of ${\widetilde}V\_2$ implies that any holonomy configuration with *u**j* a multiple of 1/*m* gives the same leading asymptotics as the ${\underline{u}}=0$ configuration. In the SU(*N*) case these non-trivial holonomy configurations correspond to = = (,…, ), [eq:nontrivialSectors] with *m**j* ∈ Z/*m*Z, and ∑*j* = 1*N**m**j* = 0 (mod *m*). For *n* = 1, we can use the estimate  for the vector multiplet gamma functions to compute the contribution of the saddles . The result is similar to , with the same ${\widetilde}V\_{2,1}$ and the same SUSY Casimir piece, but with the dynamical piece modified to (modulo an overall constant factor) Z(;m,n,n0) & - D’ +42() (+ I,I 1 (mI )(I’ )2 ) &=: Z(m,n,n0),[eq:Z0RationalNonTrivialSect] where $\epsilon\_{{\underline{m}}}$ indicates that we are considering the contribution from a neighborhood ∣*u**j* − *m**j*/*m*∣ < *ε*, and the re-scaled integration variable arises as $x\_j'=\frac{m(u\_j-m\_j/m)}{-{{\rm i}}{\widetilde}{\tau}}$. This coincides (again up to an overall constant factor) with the topologically non-trivial sector of the partition function of SU(*N*) Chern-Simons theory with coupling  on the lens space *L*(*m*,  − 1) . We expect that similarly for general *n*, including the contribution of the non-trivial saddles  to the index would complete $Z\_{\underline{0}}^{\text{dyn}}(m,n,n\_0)$ to the full *S*3/Z*m* partition function, including all the topologically non-trivial sectors. We motivate this expectation further from an EFT perspective in the next section where we also present the (*n*-dependent) action of Z*m* on the *S*3. The explicit demonstration, which we leave to future work, requires generalizing the estimate  to arbitrary *n*, and improving it to include the overall constant. The above analysis was local in nature: we considered the contribution to the index from only a small neighborhood of ${\underline{u}}=0$. We now study the specific case of SU(*N*) N = 4 SYM for which we present a global picture of the dominant holonomy configurations. Note that in the previous subsection rather than performing the global analysis from scratch we had borrowed the result of  that in a certain domain of parameters (*n*0(arg(*τ*) − *π*/2) < 0) the ${\underline{u}}=0$ configuration is globally dominant (see Section [sec:Ccenter] for the complementary domain). ### The global structure of the leading potential for N = 4 SYM For SU(*N*) N = 4 theory the potential ${\widetilde}V\_2$ reads $$\begin{split} {\widetilde}V\_2 ({\underline{u}};\xi) {\;= \;}\frac{{{\rm i}}\pi}{3} \times 3 \; \Bigl((N-1)\overline{B}\_3(m\xi)+ \sum\_{i<j} \bigl( \overline{B}\_3(m\xi+m u\_{ij})+ \overline{B}\_3 (m\xi + m u\_{ji}) \bigr)\Bigr) \,, \label{eq:QhDef} \end{split}$$ where the factor of 3 comes from the sum over three chiral multiplets, and with = - 13 (n0 + ). [defxi] As mentioned below the expression applies as long as *u**i**j* + *ξ**I* avoid $\frac{\mathbb{Z}}{m}$. We now have to minimize the real part of ${\widetilde}V\_2({\underline{u}})/{\widetilde}{\tau}^{\,2}$ as $|{\widetilde}{\tau}| \to 0$. Since the *u**i**j*-independent piece and the real positive overall multiplicative constants are irrelevant in finding the dominant holonomy configurations, our problem boils down to minimizing the potential $$\begin{split} V^Q(u\_{ij};\arg{\widetilde}{\tau},\xi){\;= \;}-\mathrm{sign}(\arg{\widetilde}{\tau}-\frac{\pi}{2})\, \left(\overline{B}\_3(m\xi+ m u\_{ij})+\overline{B}\_3(m\xi- m u\_{ij})\right) \,, \label{eq:QhPot} \end{split}$$ which is analogous to the *pairwise holonomy potential* in . As in that work, we first consider the *qualitative* behavior of *V**Q*. We assume $\mathrm{arg}({\widetilde}{\tau})-\frac{\pi}{2}>0$, and comment below on what happens for the opposite sign. We find that the potential is (see Figure [fig:MW]) $$\begin{split} \text{M-shaped for }&0<\{m\xi\}<1/2 \,,\\ \text{W-shaped for }&1/2<\{m\xi\}<1 \,. \end{split}$$ We also see from Equation ([defxi]) that we have $\{m\xi\}\in\{0,\frac{1}{3},\frac{2}{3}\}$. ![The catastrophic behavior of V^Q(u_{ij}), drawn over the range mu_{ij}\in(-1,1), for \mathrm{arg}\widetilde{\tau}>\frac{\pi}{2}. The control parameter m\xi determines the M or W type behavior. [fig:MW]](MWlineF)The catastrophic behavior of *V**Q*(*u**i**j*), drawn over the range *m**u**i**j* ∈ ( − 1, 1), for $\mathrm{arg}\widetilde{\tau}>\frac{\pi}{2}$. The control parameter *m**ξ* determines the M or W type behavior. [fig:MW] ### The ${\text{O}}(1/{\widetilde}{\tau}^2)$ exponent Let us now assume *m*, *n* are chosen such that $\{m\xi\}=\{\frac{-m n\_0-2n}{3}\}=\frac{1}{3}$, so we are in the M-region with the dominant holonomy configurations corresponding to {*m**u**i**j*} = 0. Although this is analogous to the *1-center phase* in , as mentioned around  here in fact *u**i**j* can be any integer multiple of $\frac{1}{m}$. All these saddles contribute equally to the ${\text{O}}(1/{\widetilde}{\tau}^2)$ exponent though, and hence the preceding analysis around ${\underline{u}}=0$ gives the correct leading asymptotics of the index, which up to ${\text{O}}(1/{\widetilde}{\tau})$ corrections in the exponent reads $$\begin{split} &\exp \Bigl(-\frac{\pi {{\rm i}}}{m \, {\widetilde}{\tau}^{\, 2}}(N^2-1) \overline{B}\_3(m\xi) \Bigr) {\;= \;}\exp\Bigl(-\frac{{{\rm i}}\pi }{27 m \, {\widetilde}{\tau}^{\, 2}}(N^2-1) \Bigr) \,, \label{eq:n0IndexRationalAsy1} \\ &\qquad \text{for} \quad \mathrm{arg} ({\widetilde}{\tau})>\frac{\pi}{2} \,, \quad \{m\xi\}{\;= \;}\{\frac{-m n\_0-2n}{3}\} {\;= \;}\frac{1}{3} \,. \end{split}$$ For $\mathrm{arg} ({\widetilde}{\tau})-\frac{\pi}{2}<0$, the M- and W-regions switch places. So in order to have *u**i**j* = 0 as the dominant saddle we must assume *m*, *n* are such that $ \{m\xi\}=\{\frac{-m n\_0-2n}{3}\}=\frac{2}{3}$. In this case we have $\overline{B}\_3(2/3) = - \overline{B}\_3(1/3) =1/27$, which leads to $$\begin{split} &\exp \Bigl(-\frac{\pi {{\rm i}}}{m \, {\widetilde}{\tau}^{\, 2}}(N^2-1) \overline{B}\_3(m\xi) \Bigr) {\;= \;}\exp\Bigl(\frac{{{\rm i}}\pi }{27 m \, {\widetilde}{\tau}^{\, 2}}(N^2-1) \Bigr) \,, \label{eq:n0IndexRationalAsy2} \\ &\qquad \text{for} \quad \mathrm{arg} ({\widetilde}{\tau})<\frac{\pi}{2} \,, \quad \{m\xi\}{\;= \;}\{\frac{-m n\_0-2n}{3}\} {\;= \;}\frac{2}{3} \,. \end{split}$$ In the remaining case where $\{m\xi\}=\{\frac{-m n\_0-2n}{3}\}=0$, we have ${\widetilde}V\_2 ({\underline{u}};\xi)=0$ and hence no ${\text{O}}(\frac{1}{{\widetilde}\tau^2})$ exponent. As we discuss momentarily there is no ${\text{O}}(\frac{1}{{\widetilde}\tau})$ exponent in this case either. There are thus rk(*G*) flat directions in the moduli space, leading to a $(1/{\widetilde}{{\tau}})^{\mathrm{rk}(G)}$ growth for the index, as in the *n*0 = 0 and *τ* pure imaginary case studied in . ### The ${\text{O}}(1/{\widetilde}{\tau})$ exponent The ${\text{O}}(1/{\widetilde}{\tau})$ exponent comes from ${\widetilde}V\_1/m{\widetilde}{\tau}$. Although the expression for ${\widetilde}V\_1$ in  was obtained near ${\underline{u}}=0$, the ${\text{O}}(1/{\widetilde}{\tau})$ exponent is correctly captured by , which implies that  remains correct near the nontrivial saddles with $u\_{ij}\in\frac{1}{m}\mathbb{Z}$ as well. So we can specialize ${\widetilde}V\_1$ in  to the SU(*N*) N = 4 theory and obtain $$\exp \Bigl(- \frac{\pi {{\rm i}}}{m {\widetilde}{\tau}} (N^2-1) \, \bigl(-\overline{B}\_2(m\xi)+\tfrac{1}{6} \bigr) \Bigr) \,.\label{eq:N=4at1/tau}$$ In this case we have that $\overline{B}\_2(2/3) = +\overline{B}\_2(1/3) = - 1/18$, which leads to $$\exp \biggl(-\frac{2\pi {{\rm i}}\,}{9}\frac{(N^2-1)}{m {\widetilde}{\tau}}\biggr) \,,$$ for $\mathrm{arg} ({\widetilde}{\tau})>\frac{\pi}{2}$ as well as $\mathrm{arg} ({\widetilde}{\tau})<\frac{\pi}{2}$. Note that since $\overline{B}\_2(0) = 1/6$, we see from  that there is no ${\text{O}}(1/{\widetilde}{\tau})$ exponent for {*m**ξ*} = 0, as alluded to above. ### The Chern-Simons coupling Specializing the Chern-Simons coupling to SU(*N*) N = 4 theory we find kij=- N ij, with 61(m)= 61(). We emphasize that all the topologically nontrivial sectors necessary for agreement with an *S*3/Z*m* partition function are present in our analysis, but we leave the investigation of their explicit contributions to future work. ### The ${\text{O}}({\widetilde}{\tau})$ exponent The linear (in ${\widetilde}{\tau}$) exponent can be read from  to be $-2\pi i{\widetilde}\tau E\_{\mathrm{susy}}/m$. Note again that while  was derived near ${\underline{u}}=0$, as the estimate  shows the ${\text{O}}({\widetilde}{\tau})$ exponent remains valid near ${\underline{u}}\in\frac{\mathbb{Z}}{m}$ as well (at least for *n* = 1, and we expect more generally as well). Since for SU(*N*) N = 4 theory $E\_{\mathrm{susy}}= \frac{4}{27}\big(N^2-1\big)$, we have the $\mathcal{O}({\widetilde}{\tau})$ exponent as in $$\exp \biggl(-\frac{8\pi {{\rm i}}}{27m}(N^2-1) {\widetilde}{\tau} \biggr) \,.$$ Summary: the small-${\widetilde}{\tau}$ asymptotics for N = 4 SYM ----------------------------------------------------------------- We can summarize the asymptotics of the SU(*N*) N = 4 SYM index analyzed above as follows $${\mathcal I}\_N({\tau};n\_0) \; \simeq \; N \, {\widetilde}{C}\_N(n\_0,m,n)\, \exp \biggl(-\frac{{{\rm i}}\pi}{m \,{\widetilde}\tau^2} \, (N^2-1) \Bigl(\frac{-{\widetilde}\eta+2{\widetilde}\tau}{3} \Bigr)^3 \biggr) \; Z^{\text{CS}}\_{S^3/\mathbb{Z}\_m}(k) \,, \label{eq:N=4indexAsyRational}$$ for *τ* near any rational point  − *n*/*m*, with = m+n, = 61() = -(()-), k =- N, and with ${\widetilde}{C}\_N(n\_0,m,n)$ an overall constant. Note that we have used ${\widetilde}{\eta}^3={\widetilde}{\eta}=\pm1$ to simplify the final expression. Also, by completing the cube inside the exponent we have introduced an $\mathcal{O}({\widetilde}{\tau}^0)$ factor at the cost of redefining ${\widetilde}{C}\_N(n\_0,m,n)$. We have only demonstrated that there is a contribution to *Z**S*3/Z*m*CS(*k*) from near ${\underline{u}}=0$ that coincides with the topologically trivial sector of the *S*3/Z*m* partition function of Chern-Simons theory with coupling *k*. As mentioned below  we expect that summing over the contributions from neighborhoods of the non-trivial configurations *u**j* = *m**j*/*m* would lead to the complete orbifold partition function. We can include the contribution of a decoupled *U*(1) N = 4 multiplet in a straightforward manner. This effectively changes the dimension of the group in the exponent to *N*2, introduces a prefactor $1/{\widetilde}{\tau}$, and change the constant from ${\widetilde}{C}\_N(n\_0,m,n)$ to a new constant ${\widetilde}{C'}\_N(n\_0,m,n)$, so that we have $${\mathcal I}^{U(N)}({\tau};n\_0) \; \simeq \; \frac{N}{{{\rm i}}{\widetilde}{\tau}} \, {\widetilde}{C'}\_N(n\_0,m,n) \, \exp \biggl(-\frac{{{\rm i}}\pi}{m \,{\widetilde}\tau^2} \, N^2 \Bigl(\frac{-{\widetilde}\eta+2{\widetilde}\tau}{3} \Bigr)^3 \biggr) \; Z^{\text{CS}}\_{S^3/\mathbb{Z}\_m}(k) \,. \label{eq:N=4indexAsyRationalU(N)}$$ We see that the background (and the SUSY Casimir) piece in  matches the effective action  and, in addition, we have a dynamical Chern-Simons term. In the following section we explain both these pieces from the point of view of 3d N = 2 field theory. *C*-center phases ----------------- Focussing on SU(*N*) N = 4 theory, we now move on to studying the ${\widetilde}{\tau}\to0$ limit of the index in the *W region*, which as shown in Figure [fig:MW] for $\mathrm{arg}{\widetilde}{\tau}>\pi/2$ corresponds to 1/2 < {*m**ξ*} < 1. As before we assume $\mathrm{arg}{\widetilde}{\tau}$ is in compact domains avoiding integer multiples of *π*/2 as $|{\widetilde}{\tau}|\to0$. Recall from ([defxi]) that only the values {*m**ξ*} = 0, 1/3, 2/3 are realized in our problem. But to highlight the parallels with the analysis of partially-deconfined phases in the W regions of the (flavored) 4d N = 4 index in, we will study the phase structure for arbitrary $\{m\xi\}\in(\frac{1}{2},1)$ below, and only at the end specialize our result to the single “physical” point {*m**ξ*} = 2/3 in that interval. Asymptotic analysis of the index for arbitrary $\{m\xi\}\in(\frac{1}{2},1)$ is difficult for general *N*, because finding the dominant holonomy configurations is not possible analytically in the W regions. Analogously to we consider now the large-*N* limit (on top of the ${\widetilde}{\tau}\to0$ limit), and conjecture that the *C*-center phases suffice for extremizing the potential in the W region. Also, similarly to we consider only the leading (here ${\text{O}}(1/{\widetilde}{\tau}^2)$) exponent of the index in the W region. A *C*-center holonomy configuration consists of *C* packs of *N*/*C* holonomies uniformly distributed on the circle such that the SU(*N*) constraint is satisfied. While at finite *N* it is possible to have such configurations only for *C* a divisor of *N*, in the large-*N* limit any integer *C* ≥ 1 provides an acceptable *C*-center configuration. For such a distribution the “on-shell” value of the potential ${\widetilde}V\_2$ in ([eq:QhDef]) becomes $${\widetilde}V\_2^{(C)}={{\rm i}}\pi\Bigl((N-1)\overline{B}\_3(m\xi)+\frac{N}{d}\frac{d(d-1)}{2}\, 2\overline{B}\_3(m\xi)+d^2 \sum\_{J=1}^{C-1}J\big(\overline{B}\_3(m\xi+m\frac{J}{C}) +\overline{B}\_3(m\xi-m\frac{J}{C})\big)\Bigr),$$ where *d* :  = *N*/*C*. The second term above is the contribution from pairs in the same pack, and the third term is from pairs with each end on a different pack. To simplify the above expression further, we use the following identity which can be proven from ([eq:Raabe]) and ([eq:remarkableId]): $$\sum\_{J=1}^{C-1}J\big(\overline{B}\_3(\Delta+m\frac{J}{C})+\overline{B}\_3(\Delta-m\frac{J}{C})\big) =\frac{g^2\overline{B}\_3(C'\Delta)}{C'}-C\overline{B}\_3(\Delta),$$ where *g* :  = gcd(*m*, *C*) and *C*ʹ :  = *C*/*g*. Keeping only the *O*(*N*2) terms we hence end up with $${\widetilde}V\_2^{(C)}={{\rm i}}\pi N^2\,\frac{\overline{B}\_3(C'm\xi)}{C'^3}.$$ Since the leading asymptotics of the index is given as $\exp(-{\widetilde}V\_2/{\widetilde}{\tau}^2)$, we then find the analog of the main result of (Equation (3.19) of that work) for our case to be $$\mathcal{I}\_{N\to\infty}\xrightarrow{{\widetilde}{\tau}\to0}\sum\_{C=1}^{\infty} \exp\left(-\frac{i\pi N^2}{m{\widetilde}{\tau}^{\,2}}\,\frac{\overline{B}\_3(C'm\xi)}{C'^3}\right), \label{eq:doubleScalingConjecture}$$ with $m\xi=-\frac{mn\_0+2n}{3}$ as before. The competition between various terms in ([eq:doubleScalingConjecture]) can be visualized by comparing the exponents as in Figure [fig:singleDelta], which shows the range of Δ :  = {*m**ξ*} for which a given phase dominates when $\mathrm{arg}{\widetilde}{\tau}-\pi/2>0$. The figure implies that for the “physical” values {*m**ξ*} = 1/3, 2/3, the index is respectively in the 1-center, and 2-center phase when $\mathrm{arg}{\widetilde}{\tau}-\pi/2>0$, and vice versa for $\mathrm{arg}{\widetilde}{\tau}-\pi/2<0$. As mentioned above, for {*m**ξ*} = 0 the index is in a confined phase and does not yield exponential *O*(*N*2) growth. Therefore up to an $o(N^2/{\widetilde}{\tau}^{2})$ error in the exponents we have the following simplification of ([eq:doubleScalingConjecture]) by restricting to *C*ʹ = 1, 2: $$\mathcal{I}\_{N\to\infty}\xrightarrow{{\widetilde}{\tau}\to0}e^{-\frac{i\pi N^2}{m{\widetilde}{\tau}^2}\, \overline{B}\_3(m\xi)}+e^{-\frac{i\pi N^2}{m{\widetilde}{\tau}^2}\,\frac{\overline{B}\_3(2m\xi)}{8}}. \label{eq:doubleScalingConjecture2}$$ This is the analog of Conjecture 1 in. Since $\overline{B}\_3(2/3)=-\overline{B}\_3(1/3)$, we see from  that the action of the 2-center saddle has the opposite sign and is smaller in absolute value by a factor of 8 compared to that of the 1-center saddle. ![The functions C'^{-3}\overline{B}_3(C'\Delta) with C'=1,\cdots,13, for 0\le\Delta\le1. For 0<\Delta<1/2 the blue curve corresponding to the fully-deconfined phase takes over. The take-over of the orange curve signifies the partially-deconfined 2-center phase in the corresponding region (1/2<\Delta\lesssim.72), and so on. [fig:singleDelta]](singleDelta)The functions $C'^{-3}\overline{B}\_3(C'\Delta)$ with *C*ʹ = 1, ⋯, 13, for 0 ≤ Δ ≤ 1. For 0 < Δ < 1/2 the blue curve corresponding to the fully-deconfined phase takes over. The take-over of the orange curve signifies the partially-deconfined 2-center phase in the corresponding region ($1/2<\Delta\lesssim.72$), and so on. [fig:singleDelta] Asymptotics of the 4d index from 3d field theory [sec:4dto3d] ============================================================= In this section we consider the dimensional reduction of the four-dimensional N = 1 gauge theory on a Hopf surface. This surface is topologically *S*3 × *S*1 and we reduce along the *S*1 fiber. The dimensionally reduced theory describes a three-dimensional dynamical gauge supermultiplet coupled to background three-dimensional supergravity on *S*3. The Wilsonian effective action of the gauge multiplet can be calculated by integrating out the tower of massive Kaluza-Klein modes, and the resulting theory is described by a functional integral over the gauge multiplet fields with this effective action. We find that the functional integral of the three-dimensional theory can be written as a perturbative expansion in *τ*. The singular terms in the expansion behave as O(1/*τ*2) and O(1/*τ*), and are captured by three-dimensional effective field theory. In particular, these terms are independent of the dynamical fields, and are completely accounted by the (supersymmetrized) Chern-Simons couplings of the background supergravity. The result agrees with the corresponding singular terms in the microscopic expansion , . The all-order asymptotic formula from the microscopic index includes, in addition to these singular terms, constant and linear terms in *τ*. Using a localization argument we show that the constant term in *τ*, besides a background part, has a dynamical piece captured by the integral over the fluctuations of the dynamical fields in three-dimensional path integral, which is essentially the partition function of N = 2 supersymmetric CS theory at level  ± *N*. Finally, the linear term in the microscopic formula is precisely the supersymmetric Casimir energy which is needed to translate between the microscopic Hamiltonian index and the macroscopic functional integral.[12](#fn12) In this manner the full asymptotic formula for the four-dimensional index is explained by three-dimensional physics. The fact that the asymptotic formula does not contain any higher order terms in *τ* implies a non-renormalization theorem, namely that there are no corrections to the three-dimensional effective action at any polynomial order in *τ*. We leave the explanation of this interesting point to future work. Finally, we show that corresponding statements also hold near rational points when *τ* →  − *n*/*m*. Here we present evidence that the relevant three-dimensional manifold is a Z*m* orbifold of *S*3 and the results agree with the microscopic asymptotic expansion given in . 0.4cm We begin by recalling the functional integral definition of the N = 1 superconformal index on *S*3 × *S*1. In the Hamiltonian trace definition  we have two chemical potentials that couple to linear combinations of the two angular momenta *J*1, *J*2 on *S*3 and the *U*(1) R-charge *Q*. This is equal to the supersymmetric functional integral of the theory on *S*3 × *S*1 with twisted boundary conditions on the fields as we go around the *S*1. Equivalently, one can explicitly introduce a background gauge field (for the R charge) and background off-diagonal terms in the metric (for the angular momenta) in a manner, so as to preserve supersymmetry. As explained in, such background configurations can be obtained as solutions to the condition of vanishing gravitino variations of off-shell supergravity (and then taking a rigid limit so as to decouple the fluctuations of gravity). The relevant background configuration for the calculation of the 4d superconformal index for complex *τ* and nonzero *n*0 was studied in  in the context of 4d new minimal supergravity . Recall that the bosonic fields of new minimal supergravity are the metric, a gauge field *A*nm, and another vector field *V*nm which is covariantly conserved. The background configuration  preserving the supercharges $({\mathcal Q}, {\overline{\mathcal{Q}}})$ is[13](#fn13) [4dbackgnd] s42 & = tE2 + 2 + 2 (1 -i1 tE )2 + 2 (2 -i 2 tE )2, A & = i (- ) tE, V = -i tE. Here *θ* ∈ [0, *π*/2], the angles *ϕ*1, *ϕ*2 are 2*π*-periodic, and the Euclidean time coordinate has the independent periodicity condition[14](#fn14) tE ~tE +. This configuration admits the following Killing spinor which is identified with ${\mathcal Q}$, [KSsol] = ( c ei z tE 0 0 e- i z tE ), z =. The twist parameters Ω*i*, Φ are related to the chemical potentials *σ*, *τ* in the index as follows[15](#fn15) i = 1 +, = 32 + ( - i n0 ), [Omomrel] with 1 = 2 i, 2 = 2 i. In this section for ease of presentation we focus on the case with Ω1 = Ω2 = Ω,  which implies $\sigma=\tau=\frac{{{\rm i}}{\gamma}}{2\pi}(1-\Omega)$. The partition function on the above background is related to the index ${\mathcal I}(\sigma-n\_0,\tau)$, which for *σ* = *τ* coincides with the index ${\mathcal I}(\tau;n\_0)$ in . In Appendix [sec:diffomegas] we comment on the more general case with Ω1 ≠ Ω2 and hence *σ* ≠ *τ*. The four-dimensional supersymmetric partition function of the theory corresponding to the Hamiltonian index  can then be expressed as a functional integral of the gauge theory with 4d N = 1 chiral and vector multiplets on the background .[16](#fn16) As discussed in , this functional integral localizes to an integral over flat connections of the gauge field on the KK circle, [AYval] Ai = 2 ui. The Wilson loop  maps to the scalar in the three-dimensional vector multiplet in the KK reduction. We now proceed to derive an expression for the supersymmetric partition function of the three-dimensional gauge theory. Dimensional reduction to three dimensions [sec:dimred] ------------------------------------------------------ We first consider the reduction of the above four-dimensional background as a configuration in three-dimensional supergravity. In three dimensions we use the off-shell supergravity formalism , and follow the treatment  for the reduction from four to three dimensions. The bosonic fields in the off-shell three-dimensional supergravity are the metric, the KK gauge field (the graviphoton) written as a one-form *c*, a two-form *B*, and the R-symmetry gauge field one-form A*R*. The equations are often presented in terms of the dual one-form $v=-{{\rm i}}\*{\mathrm{d}}c$ and the dual scalar $H = {{\rm i}}\* {\mathrm{d}}B$. We begin by writing the background in  as a Kaluza-Klein (KK) compactification to three dimensions, i.e. a circle fibration on a 3-manifold M3. We define the rescaled *S*1 coordinate [X4period] Y = tE, which obeys the periodicity condition Y ~Y + 2 R, R =. [Rgamrel] Writing the metric  in the KK form, [4dmetricKKform] s42 = s32 + (Y + c)2, we find that the graviphoton field is [3dKKc] c = c x= -i ( 2 1 + 2 2 ), and the metric on the 3-manifold M3 is [3dmetric1] s32 = g x x= 2 + 2 12 + 2 22 - c2. The three-dimensional metric obeys [sqrtg] =. We see that we effectively have a KK reduction on a circle of radius *R*. 0.4cm In order to study the effective theory in three dimensions, we consider the limit *R* → 0. From the relation  we see that this is implemented by taking the original circle size *γ* → 0. Our eventual interest is in the limit *τ* → 0. The question is how to correlate these two limits of *γ* and *τ*. If we take *γ* → 0 first, then we see from the relation  that Ω → ∞ and from  that M3 shrinks to zero size. Although the local Lagrangian involves background fields and terms such as the Ricci scalar which diverge in this limit, the three-dimensional effective action turns out to be finite. We can understand this in a cleaner manner as follows. We first scale *τ* and *γ* to zero at the same rate keeping Ω finite and fixed, i.e. take *γ* = *ɛ**τ* with fixed ${\varepsilon}= 2 \pi {{\rm i}}/({\Omega}-1)$, and only take *ɛ* → 0 at the end of all calculations. In particular, the three-dimensional calculations are all performed at finite *ɛ*, i.e. on smooth backgrounds. The action turns out to have two pieces, one of which stays finite and the other vanishing in the limit *ɛ* → 0, and, in particular, there are no diverging terms in this limit. Thus we can safely take the limit Ω → ∞ at the end of calculations. In this limit we have that *R* → *τ*, so that the effective field theory answers are effectively written as a perturbative series in *τ*. 0.4cm In the treatment of three-dimensional background supergravity we need the Hodge dual of the graviphoton, [defv] v = v x= - i c, whose value in the above background is v = ( 2 1 + 2 2 ), so that *v**μ* = 2 Ω(1, 1, 0). The associated Chern-Simons action is S (c) = 3 c c = i3 3 x v c. 0.4cm The identification between the four-dimensional and the three-dimensional gauge fields is made by comparing the respective Killing spinor equations. As shown in , one has[17](#fn17) 12 v= V- VY c, H = VY, R= A- AY c+ 12 v. The background gauge fields in  are given by [AYVYval] A = ( -+ ) Y, V = - Y, so that the auxiliary fields in the background supergravity multiplet are [vHvalues] v= - 2 VY c, H = VY, R= - (AY + VY) c. (The above equation for *v**μ* is consistent with Equations , .) 0.4cm We now discuss the Kaluza-Klein reduction of the dynamical gauge multiplet. The N = 1 gauge multiplet in four dimensions reduces to an N = 2 gauge multiplet in three dimensions, whose bosonic field content is a vector A*μ*, a scalar *σ*, and the auxiliary D field. These are related to the four-dimensional fields as follows, [eq:susy3dvec] i = AiY, i= Ai- AiY c, i = Di - AiY H, and the three-dimensional fermions are the reduction of the corresponding four-dimensional fermions. As discussed above, the theory localizes on the BPS configurations given by Ai = dY, Di = 0, [AYval] with vanishing values of all other fields in the off-shell gauge and chiral multiplets. In the three-dimensional theory the non-zero fields on the BPS locus are i =, i= - c, i = - H. [AYval3d] Effective action and functional integral of the three-dimensional theory [sec:3deffact] --------------------------------------------------------------------------------------- We now turn to the calculation of the partition function of the three-dimensional supersymmetric theory that we just discussed. Our strategy is to first calculate the three-dimensional Wilsonian effective action of *u**i*, and then use this to calculate the three-dimensional partition function. The tree-level action (coming from a mode expansion of the four-dimensional theory) consists of matter-coupled super Yang-Mills theory. The full quantum effective action of the three-dimensional theory is obtained by integrating out the tower of massive KK modes on the circle. In order to calculate this action, we draw from known results in the effective field theory in three dimensions. The effective field theory on backgrounds of the type M3 × *S**R*1 was studied in a general context in , and in the special context of supersymmetry in . The resulting three-dimensional action begins with a term proportional to 1/*R*2, and continues as a perturbation expansion as the radius *R* → 0. At each order in *R* one has a combination of three-dimensional actions of the background and the dynamical fields, which are all related by supersymmetry to a certain Chern-Simons term. The Chern-Simons terms are of the form ∫M3*A**x* ∧ d*A**y*, where *A**x* and *A**y* represent the various gauge fields. As discussed in the previous subsection, these are the dynamical gauge field, the background graviphoton, the background R-gauge field, and the spin connection. We follow, and review in Appendix [sec:CSactions], the treatment of  for the supersymmetrized Chern-Simons action of all the background and the dynamical gauge fields up to *O*(*R*0). The full effective action also includes RR and gravitational supersymmetrized CS terms discussed in, which turn out to be crucial for our purposes. It follows from the above discussion that the overall coefficient at each order in *R* can be fixed by calculating the coefficient of the Chern-Simons terms themselves. These coefficients, in turn, can be obtained by integrating out all the fermions coupling to the corresponding gauge fields. The resulting induced Chern-Simons coefficient is one-loop exact. Thus the strategy is to integrate out the fermions in each KK mode, write the resulting Chern-Simons action, and sum over all the fermions in the theory. The KK momenta of the fermions take the values  *p**Y* = *k**Y*/*R*, with $k\_Y =n + \frac{n\_0}{2}$, *n* ∈ Z. The shift *n*0/2 appears because of the gauge fields in the background . (Recall, for example, that the four-dimensional Killing spinor  has momentum *n*0/2.) The result for the complete action obtained by integrating out a fermion f of R-charge *r*f and transforming in a representation of weight *ρ*f under the gauge group is given in Appendix [sec:CSactions] and take the following form, S = S + 2 S + S + S. [Sftot1] The terms in  depend on the real mass *m*f (related to the central charge appearing in the three-dimensional algebra). The first two terms depend on the dynamical gauge field. On the configuration  they take the following values, S & =-i (- kY )2 A3, 2 S & =-i 2 r (- kY ) L3, [S1S2fin] where *A*M3 and *L*M3 are functions of the three-dimensional background given in . The last two terms in  do not depend on the dynamical gauge field, and given by S&=-i (r2-) R3, S&=-i G3, [S:susy:explicit3] where *R*M3 and *G*M3 are functions of the three-dimensional background given in . In Appendix [sec:CSactionvals] we calculate the values of these background actions.[18](#fn18) As explained above, we perform the calculations keeping *R*, Ω finite so that the three-dimensional physics is manifestly smooth. The result is that there is a smooth limit as *γ* → 0 keeping fixed *τ*. The limiting values of the actions are as follows, &A3 = - 4, L3 = - 4 ( 1 - ), & R3 = - 4 ( 1 - )2, G3 = - 16 + 4 R3. [ALvals] Using these values, we obtain the total effective action of the fermion f to be S & = i (- kY - n0 r)2 + i r (- kY - n0 r) & + i r2 - i. [defSf] 0.4cm Now we turn to the sum over all the fermions in the theory. The value of the real mass is given in  to be, as *R* → 0,[19](#fn19) m,n = - (- n - 12 n0 (r+1) ), In order to obtain the full effective action we now have to sum over all the fermions. For the chiral multiplets, this implies summing over all the weights in representations *ρ*f ∈ Rf, as well as over all momenta labelled by *n* ∈ Z. The summation over KK modes can be evaluated using $$\sum\_{n\in\mathbb{Z}}\mathrm{sgn}(n+x)(n+x)^{j-1} {\;= \;}-\frac{2}{j} \, \overline{B}\_{j}(x) \,, \label{eq:KKsumBbar}$$ with $x=\rho\_{\text{f}}\cdot {\underline{u}}- \tfrac12 n\_0 \, r\_I $, for *j* = 1, 2, 3 (cf. Section 4 of ).[20](#fn20) Here we have used the relation *r*f = *r**I* − 1 between the R-charge of the fermion and that of the bottom component of the multiplet *I* to which the fermion belongs. 0.4cm For the vector multiplet contribution the analysis is quite similar: there is a tower of massive KK gaugino modes that are integrated out. These generate CS actions whose supersymmetrization yields the vector multiplet contribution to *δ**S*1-loop. In the present context there is an important difference with the chiral multiplet analysis however. Near ${\underline{u}}=0$ there is a single gaugino mode in the tower that has real mass of order $\alpha\cdot{\underline{u}}/R$, and is therefore considered a “light” mode for small enough $|\alpha\cdot{\underline{u}}|$. Therefore we do not integrate out this mode and, instead, keep it as a dynamical mode in the path integral of the three-dimensional theory. More precisely, recall that the *n*th KK gaugino mode associated to a root *α* of the gauge group has *p**Y* = (*n* + *n*0)/*R* and hence a real mass $(\alpha\cdot{\underline{u}}-n-n\_0)/R$. Therefore the mode corresponding to *n* =  − *n*0 is light near $\alpha\cdot{\underline{u}}=0$. We now describe how removing this term from the sum over the KK tower modifies the result compared to the chiral multiplet computation. The vector multiplet contributions is a sum over roots *α* that come in pairs  ± *α*+, as a result of which they give vanishing contributions to the quadratic and constant terms in ${\underline{u}}$ in the action of a single KK mode. We therefore focus on the contribution to the linear term in ${\underline{u}}\,,$ which is proportional to 1/*R*. The calculation is similar to the corresponding chiral multiplet calculation. Upon summing over all the KK modes, we obtain the vector multiplet contribution from a root *α* to be - n’ (-n-n0) (-n-n0)[eq:sumPrimeVec], where the prime indicates that we are not including the light mode corresponding to *n* =  − *n*0. Upon adding and subtracting the *n* =  − *n*0 contribution, we obtain, using , ( 2()+ || ). Now, since we are interested in the proximity of ${\underline{u}}=0$, we use the fact that for ∣*x*∣ < 1 we have $\overline{B}\_2(x)=x^2-|x|+\frac{1}{6}$, to simplify the result to (()2 + ). Upon putting all the pieces together, we obtain the total one-loop correction to the Wilsonian action of the three-dimensional theory, which we call $V\_\text{eff}({\underline{u}})$ (we justify this name below). We have V() = & S = & iI, I ( 3 (- n0 rI ) + 2 (- n0 rI ) & + (()2 + ) + ( (rI-1)2 - ) 1 (- n0 rI ) ). [Veffans3d] We now localize the path integral of the light gauge multiplet mode that was excluded from the sum , using its Wilsonian effective action, which consists of the tree-level action coming from the light *n* =  − *n*0 mode in 4d, as well as the one-loop action *δ**S*1-loop derived above (in the bosonic sector, which is relevant for the localization calculation) from integrating out the heavy modes. It is useful to keep in mind the different but related problem of calculating the partition function of superconformal CS theory coupled to matter on M3,. In that case the theory localizes onto arbitrary constant values of the scalar *σ* and is supported by the auxiliary scalar *H*. The measure including the one-loop determinant of the localizing action in the non-BPS directions is[21](#fn21) +4( ) ( ), [1loopM3] with *ω*1, 2thf the moduli of the transversely holomorphic foliation (THF)  of M3, which we expect to be 1=2ī.[eq:thfModuli] Recalling from  that *σ**i*  =  *u**i*/*R*, and adding the contribution from *δ**S*1-loop in ([Veffans3d]) (which although arises at one-loop in high-temperature EFT, contributes as a “classical” piece in the localization computation), we obtain the final result for the three-dimensional partition function Z() = +42() ( - V() ). [Z3d] Noting that the supersymmetric partition function and the Hamiltonian index are related as Z() = e2iE I(), we see that the result  agrees precisely with the microscopic result –. We emphasize that while the above derivation of *V*eff in  applies to ${\underline{u}}$ near 0, it can be easily extended to generic finite ${\underline{u}}$ by modifying the vector multiplet discussion. For generic ${\underline{u}}$, the non-Cartan components of the *n* =  − *n*0 mode of the vector multiplet are also heavy, and ought to be integrated out. Consequently the sum in  would no longer have a prime, and we end up with *V*1*r* as in  rather than *V*1 in . This is the EFT derivation of the finite-${\underline{u}}$ potentials *V*1, 2 found microscopically in . On the other hand, when *n*0 = 0, the small-${\underline{u}}$ discussion leading up to  needs to be modified because now the chiral multiplets have light modes (corresponding to *n* = 0). As in the discussion around  the light mode should be removed from the KK sum and instead be included in the dynamical part (to be localized). Indeed, it is well-known that for *n*0 = 0 the constant piece of the small-*τ* expansion coming from the ${\underline{u}}=0$ saddle contains the (localized) *S*3 partition function of the dimensionally reduced chiral as well as vector multiplets  (see  for earlier work on the connection between 4d indices and *S*3 partition functions). A technical remark is in order regarding our EFT derivation of . To reproduce the desired asymptotics, we have sent $\varepsilon(=\frac{{\gamma}}{\tau}=\frac{2\pi{{\rm i}}}{\Omega-1})\to0$, and hence Ω → ∞, when evaluating the CS actions in Appendix [sec:CSactionvals]. It would be interesting to have a formula of the type  for *γ* → 0 at finite *ɛ*, which would imply that Ω and the resulting 3d geometry would be finite-sized.[22](#fn22) Finally, as discussed in Appendix [sec:diffomegas], we find that the effective potential for *τ* and *σ* not necessarily equal is given by making the replacement, in the effective potential . The singular pieces are indeed in agreement with the microscopic calculations reported in . Rational points --------------- We now turn our attention to the limit of *τ* approaching a rational point. In the discussion of the previous subsection we used the fact that the radius of the circle *R* equals *τ* which becomes small in the limit, so that we could use an effective three-dimensional description. Now we are interested in ${\widetilde}{\tau}= m{\tau}+n \to 0$, with *n*, *m* ∈ Z (with no common factor) as in . In terms of the variable ${\widetilde}{\tau}$ we have that $\omega = 2 \pi {{\rm i}}{\tau}= 2 \pi {{\rm i}}({\widetilde}{\tau}-n)/m$ so that Ø= 1+ = 1- +, and the four-dimensional metric background  is now [4dbackgndrat] s42 = & tE2 + 2 + 2 (1 - tE -i (1 + ) tE )2 & + 2 (2 - tE -i (1+ ) tE )2. In terms of the following new coordinates and new parameters, = m, Ø= 1+, i = i - tE, the above metric is [4dbackgndrat1] s42 = tE2 + 2 + 2 (1 -i Ø tE )2 + 2 (2 -i ØtE )2, with ${\widetilde}\phi\_1$, ${\widetilde}\phi\_2$ being 2*π*-periodic as before, and the periodic identification going around the time circle is [newident] (tE, 1, 2 ) ~ ( tE +, 1 -, 2 - ). The metric configuration  with the identifications  is simply a global identification, or orbifold, of the configuration considered in the previous subsection with the new parameters $({\widetilde}{\gamma}, {\widetilde}{\tau}, {\widetilde}{\Omega})$ replacing (*γ*, *τ*, Ω). [23](#fn23) On the covering space, going around the time circle shifts ${\widetilde}t\_E \to {\widetilde}t\_E + {\widetilde}{\gamma}$ and ${\widetilde}\phi\_i \to {\widetilde}\phi\_i + 2 \pi n$. The latter identification can be trivialized by using the independent 2*π*-periodicity of ${\widetilde}\phi\_i$, so that we have the identification $\bigl(\,{\widetilde}t\_E \,, {\widetilde}\phi\_1 \,, {\widetilde}\phi\_2 \, \bigr) \sim \bigl(\, {\widetilde}t\_E + {\widetilde}{\gamma}\,, {\widetilde}\phi\_1 \,, {\widetilde}\phi\_2 \, \bigr)$. On this configuration we can perform the dimensional reduction to three dimensions. The relevant considerations of the previous subsection go through exactly as before with the replacement $({\gamma},{\tau}, {\Omega}) \mapsto ({\widetilde}{\gamma}, {\widetilde}{\tau}, {\widetilde}{\Omega})$. Actually, because the gauge holonomies on the cover wrap a circle *m* times larger than the original *S*1, we also get a replacement *u**j* → *m**u**j*. Moreover, since *ξ**I* (which equals  − *n*0*r**I*/2 for (*m*, *n*) = (1, 0)) effectively plays the role of a flavor chemical potential in our problem as mentioned around , we expect a similar replacement  − *n*0*r**I*/2 → *m**ξ**I*. We can see this replacement arise more directly as follows. We multiply the first term in  by $\frac{m^2}{m^2}$, and the second term by $\frac{m}{m}$. This amounts to *γ* → *m**γ* and *u**j* → *m**u**j* as mentioned above, but also *k**Y* → *m**k**Y* (which corresponds to keeping only the singlet modes under the Z*m* quotient) as well as *n*0 → *m**n*0. On the other hand, writing *A**Y*nm in  in terms of ${\widetilde}{\tau}$ instead of *τ* amounts to yet another replacement $n\_0\to n\_0+\frac{2n}{m}$. Combining these two effects yields the desired  − *n*0*r**I*/2 → *m**ξ**I* replacement. With the preceding substitutions in the results of the previous subsection, we thus arrive at the potentials ${\widetilde}V\_{2,1}$ in . We then take the Z*m* quotient which has two effects as usual. Firstly it reduces the volume of the three-dimensional space, and secondly it introduces new topologically non-trivial sectors in the path integral over the gauge-field configurations. The change in calculations involving local gauge-invariant Lagrangians will therefore be only a reduction in the action by a factor of *m*. This explains the reduction of the effective potential by a factor of *m* as in . Finally we discuss the constant terms (in ${\widetilde}{\tau}$) arising from the functional integral over the dynamical gauge multiplet. There are a few subtleties. Firstly the actions like the gravitational CS action will depend on the global properties of the orbifold. Then we need to calculate the partition function of the orbifold space with a background graviphoton. Assuming as in the previous subsection that the expected THF moduli arise, and that by re-scaling and contour deformation (as discussed around ) the THF moduli can be replaced with those of round *S*3, the calculation presumably reduces to an *S*3/Z*m* partition function as in , with the Z*m* action following from  to be (1,2) ~ (1-,2-), which for *n* = 1 coincides with that of the lens space *L*(*m*,  − 1). Here one has to be careful about how the measure on the space of constant scalars *σ**i* is affected by the four-dimensional orbifold . We leave these interesting questions to future work, noting that the result of these considerations indeed agrees with the microscopic answer , with the ${\text{O}}({\widetilde}{\tau})$ piece explained by the supersymmetric Casimir energy factor as before. **Note Added.** The paper , which appeared on the arXiv the same day as the first version of this paper, has some overlap with our section [sec:4dto3d]. The paper , which appeared on the arXiv soon after, has some overlap with our section [sec:SCI]. Acknowledgements ================ We would like to thank Alejandro Cabo-Bizet, Daniel Butter, Davide Cassani, Cyril Closset, Zohar Komargodski, Neil Lambert, Stavros Garoufalidis, Bernard de Wit, and Don Zagier for useful discussions and comments. This work is supported by the ERC Consolidator Grant N. 681908, “Quantum black holes: A macroscopic window into the microstructure of gravity”, and by the STFC grant ST/P000258/1. AAA would like to especially thank Junho Hong for several helpful discussions and collaboration on a related project. Asymptotic estimates of the special functions [app:Estimates] ============================================================= *τ* → 0 ------- We first consider the limit *τ* → 0. More precisely, in the rest of this subsection we assume arg(*τ*) is in compact domains avoiding integer multiples of $\frac{\pi}{2}$ as ∣*τ*∣ → 0. For the Pochhammer symbol (*q*; *q*) the small-*τ* asymptotics is standard: $$(q;q) \; \simeq \; \frac{1}{\sqrt{-{{\rm i}}\,\tau}} \; \exp \Bigl( -\frac{2\pi {{\rm i}}}{24\, \tau} -\frac{2\pi {{\rm i}}\,\tau}{24} \Bigr) \qquad(\text{as }|\tau|\to0) \,. \label{eq:PochEst}$$ Recall that the symbol  ≃  means that logarithms (on appropriate branches) of the two sides (assumed to be non-zero) are equal to all orders in the small parameter (here in ∣*τ*∣). For the chiral multiplet elliptic gamma functions we have the following estimate, valid for any *r* ∈ R, uniformly in *z* over compact subsets of R \ Z (see Proposition 2.11 of  or Equation (3.53) of ): $$\begin{split} {\Gamma\_\text{e}}(r\tau+z)& \;\simeq \; \exp \biggl(-2\pi {{\rm i}}\, \biggl(\,\frac{\overline{B}\_3(z)}{6\tau^2} +(r -1) \, \frac{\overline{B}\_2(z)}{2\tau}+\frac{(r-1)^2-\frac{1}{6}}{2} \, \overline{B}\_1(z) +\frac{(r-1)^3-\frac{r-1}{2}}{6}\,\tau \,\biggr) \biggr) \,, \label{eq:numEst} \end{split}$$ as ∣*τ*∣ → 0. Here $\overline{B}\_j(z)$ are the *periodic Bernoulli polynomials* defined, for *z* ∈ R through their Fourier series expansion, $$-\frac{(2\pi {{\rm i}})^j}{j!} \, \overline{B}\_j(z) {\;= \;}\sum\_{k\in\mathbb{Z}}{}^{'}\;\frac{{{\rm e}}^{2\pi {{\rm i}}k z}}{k^j} \qquad (z\in\mathbb{R}\,,\ j\ge1) \,. \label{eq:FourierBernoulli}$$ The prime in the above formula means that *k* = 0 has to be omitted, and that in the *j* = 1 case—where the series is not absolutely convergent—the sum is in the sense of Cauchy principal value. For *x* ∈ R \ Z, we have $\overline{B}\_j(x)=B\_j(\{x\})$ with { ⋅ } :  =  ⋅  − ⌊ ⋅ ⌋ the fractional-part function. When *j* > 1 this also holds for *x* ∈ Z (and so $\overline{B}\_j(\mathbb{Z})=B\_j(0)$). When *j* = 1 on the other hand $\overline{B}\_1(\mathbb{Z})=0$, while *B*1(0) =  − 1/2. The Bernoulli polynomials are uniquely characterized by *B*0(*u*)  =  1,   *B*ʹ*j*(*u*)  =  *j**B**j* − 1(*u*),   *B**j*(0)  =  *B**j*(1) for $j>1$ ,  and the first three non-trivial ones are explicitly $$\begin{split} B\_1(x)&{\;= \;}x-\frac{1}{2} \,,\\ B\_2(x)&{\;= \;}x^2-x+\frac{1}{6} \,,\\ B\_3(x)&{\;= \;}x^3-\frac{3}{2}x^2+\frac{1}{2}x \,. \end{split}$$ The connection between $\overline{B}\_j$ and the Bernoulli polynomials can be verified by first noting that for *j* = 1 the left-hand side of is essentially the Taylor expansion of the logarithm function, and then observing that *B**j* are uniquely characterized by $${\nonumber}B\_0(u){\;= \;}1,\qquad B'\_j(u){\;= \;}jB\_{j-1}(u),\qquad B\_j(0){\;= \;}B\_j(1)\quad \text{for $j>1$}\,.$$ With the aid of ([eq:FourierBernoulli]) one can easily prove relations such as $$\sum\_{\ell=1}^{C-1}\overline{B}\_3 \Bigl(x+\frac{\ell}{C} \Bigr){\;= \;}\frac{\overline{B}\_3(Cx)}{C^{2}}-\overline{B}\_3(x)\,,\qquad(\text{Raabe's formula})\label{eq:Raabe}$$ and $$\begin{split} \sum\_{\ell=1}^{n-1}\Bigl(\overline{B}\_{2} \Bigl(x+m\frac{\ell}{n} \Bigr) -\overline{B}\_{2} \Bigl(x-m\frac{\ell}{n} \Bigl) \Bigr)&{\;= \;}0\,,\\ \sum\_{\ell=1}^{n-1}\Bigl(\overline{B}\_2 \Bigl(\frac{\ell}{n} \Bigr)\,\overline{B}\_{2} \Bigl(x+m\frac{\ell}{n} \Bigr) -\overline{B}\_2\Bigl(\frac{\ell}{n}\Bigl) \, \overline{B}\_{2}\Bigl(x-m\frac{\ell}{n}\Bigr)\Bigl)&{\;= \;}0 \,, \label{eq:BernoulliBarIdentities} \end{split}$$ valid for *m*, *n* ∈ Z> 0 relatively prime and *x* ∈ R, by using the Fourier expansion of the Bernoulli functions, and swapping the sum over Fourier modes with the sum over ℓ.[24](#fn24) The estimate ([eq:numEst]) is particularly useful for the chiral multiplet gamma functions in ([defVmicro]) when the integral is dominated by the 1-center holonomy configurations with *z**i* − *z**j* = 0. This is because the complex phase 2*π**n*0/3 shifts the argument of the chiral multiplet gamma functions safely into the interior of the domain *z* ∈ R \ Z where the estimate is uniformly valid. On the other hand, since the vector multiplet gamma functions in ([defVmicro]) lack such phase shifts in their arguments, the estimate ([eq:numEst]) is not appropriate for them near the 1-center holonomy configurations when *τ* → 0. The estimate is not uniformly valid, with respect to *z*, over intervals containing Z. There is a well-known improvement of it around *z* = 0 however, which is valid uniformly over compact subsets of ( − 1, 1), and we will use for vector multiplet elliptic gamma functions in the index. It reads (see Proposition 2.10 of  or Equation (2.16) of ) $$\begin{split} \Gamma\_{{\rm e}}(r\tau+z)\simeq {{\rm e}}^{2\pi i R\_{0}(r\tau+z;\tau)}\, \Gamma\_h(r\tau+z;\tau,\tau),\label{eq:GammaToHypGamma} \end{split}$$ where $$R\_0(z;\tau)=-\frac{z^3}{6\tau^2}+\frac{z^2}{2\tau}-\frac{(1+5\tau^2)z}{12\tau^2} + \frac{1}{12\tau} + \frac{\tau }{12},$$ and Γ*h*(*x*; *ω*1, *ω*2) is the hyperbolic gamma function. Using the estimate and the “product formula” $$\frac{1}{\Gamma\_h(x;\omega\_1,\omega\_2)\Gamma\_h(-x;\omega\_1,\omega\_2)} {\;= \;}-4\sin \big(\frac{\pi x}{\omega\_1}\big)\sin \big(\frac{\pi x}{\omega\_2}\big),\label{eq:gammaHprodGen}$$ the next estimate follows (cf. Equation (2.18) of ): $$\frac{1}{{\Gamma\_\text{e}}(z)\, {\Gamma\_\text{e}}(-z)} \;\simeq \; {{\rm e}}^{-4\pi {{\rm i}}\,R^+\_0(z;\tau)} \; 4\sin \Bigl(\frac{\pi z}{\tau}\Bigr) \, \sin \Bigl(-\frac{\pi z}{\tau}\Bigr), \label{eq:denomEstSins}$$ valid uniformly in *z* over compact subsets of ( − 1, 1), with $$R^+\_0(z;\tau) {\; \coloneqq \;}\frac{R\_0(z;\tau)+R\_0(-z;\tau)}{2}= \frac{z^2}{2\tau} + \frac{1}{12\tau} + \frac{\tau }{12} \,. \label{eq:R0}$$ Note that since ${\Gamma\_\text{e}}(z+2\tau)\, {\Gamma\_\text{e}}(-z+2\tau)=\frac{1}{{\Gamma\_\text{e}}(z)\, {\Gamma\_\text{e}}(-z)}$, and $\sin({{\rm i}}\, x)={{\rm i}}\sinh(x)$, we can write alternatively as (z+2) (-z+2) e-4iR+0(z;) 4 2 ().[eq:denomEst] While we have presented two separate estimates ([eq:numEst]) and ([eq:denomEst]) for the chiral and vector multiplet gamma functions, both of them can in fact be derived from the “central estimate”. Deriving ([eq:numEst]) from the central estimate requires only an extra step to simplify the hyperbolic gamma functions arising from using Corollary 2.3 of , as explained in Proposition 2.11 there. *τ* → Q ------- We now consider $${\widetilde}\tau \;\equiv \; m\tau+n \to 0 \,,$$ with *m*, *n* relatively prime. More precisely, in the rest of this subsection we assume that $\mathrm{arg}({\widetilde}{\tau})$ is in compact domains avoiding integer multiples of *π*/2 as $|{\widetilde}{\tau}|\to0$. To obtain the asymptotics of the Pochhammer symbol we note that for integer *a*, *b*, *c*, *d* satisfying *a**d* − *b**c* = 1 with *c* > 0, we have $$\eta\left(\frac{a\tau+b}{c\tau+d}\right) {\;= \;}\exp \Bigl(2\pi {{\rm i}}\Bigl(\frac{a+d}{24c}-\frac{1}{8}-\frac{s(d,c)}{2} \Bigr)\Bigr) \, (c\tau+d)^{1/2} \, \eta(\tau) \,, \label{eq:modularEta}$$ with *s*(*d*, *c*) the Dedekind sum $$s(d,c){\;= \;}\sum\_{\ell=1}^{c-1} \, \frac{\ell}{c} \, \overline{B}\_1\big(d\frac{\ell}{c}\big) \,. \label{eq:DedSumDef}$$ Since the gcd(*m*, *n*) = 1, there exist integers *a*, *b* such that *a**n* − *b**m* = 1. Now we use ([eq:modularEta]) with (*c*, *d*) = (*m*, *n*). Noting that $a {\tau}+ b = a({\widetilde}{\tau}-n)/m + b = a {\widetilde}{\tau}/m -1/m$, we obtain $$(q;q)\;\simeq \; \frac{1}{\sqrt{-{{\rm i}}{\widetilde}{\tau}}} \; \exp \Bigl( -\frac{2\pi {{\rm i}}}{24m{\widetilde}{\tau}} - \frac{2\pi {{\rm i}}\, {\widetilde}\tau}{24m} + {{\rm i}}\pi s(n,m) \Bigr) \,, \label{eq:PochRationalEst}$$ in the limit of our interest. Our Dedekind sum is explicitly $$s(n,m){\;= \;}\sum\_{\ell=1}^{m-1}\frac{\ell}{m} \,\overline{B}\_1\big(n\frac{\ell}{m}\big) \,. \label{eq:ourDedekind Sum}$$ To obtain an estimate for the elliptic gamma function we first note the identity $$\Gamma(\zeta;q,q){\;= \;}\prod\_{\ell=0}^{2(m-1)}\Gamma(\zeta q^\ell;q^m,q^m)^{m-|\ell-(m-1)|} {\;= \;}\prod\_{\ell=0}^{2(m-1)}\Gamma(\zeta {{\rm e}}^{-2\pi {{\rm i}}n\frac{\ell}{m}} {\widetilde}{q}^{\frac{\ell}{m}};{\widetilde}{q},{\widetilde}{q})^{m-|\ell-(m-1)|},\label{eq:gammaFVprod}$$ with ${\widetilde}{q}={{\rm e}}^{2\pi {{\rm i}}{\widetilde}{\tau}}$. Using ([eq:numEst]) on the right-hand side of ([eq:gammaFVprod]) we get $$\begin{split} & \frac{1}{2\pi {{\rm i}}\,}\log{\Gamma\_\text{e}}(z) \; \sim \\ &-\frac{1}{6{\widetilde}{\tau}^2} \biggl(\,\sum\_{\ell=1}^{m-1}\ell \Bigl(\overline{B}\_3 \Bigl(z+\frac{n}{m}-n\frac{\ell}{m} \Bigr) +\overline{B}\_3 \Bigl(z+\frac{n}{m}+n\frac{\ell}{m} \Bigr)\Bigr) +m\,\overline{B}\_3\Bigl(z+\frac{n}{m} \Bigr) \biggr)\\ &\ -\frac{1}{2{\widetilde}{\tau}} \biggl(\, \sum\_{\ell=1}^{n-1}\ell\Bigl( \Bigl(\frac{\ell-1}{m}-1 \Bigr) \, \overline{B}\_2 \Bigl(z+\frac{n}{m}-n\frac{\ell}{m} \Bigr)+ \Bigl(\frac{2m-\ell-1}{m}-1 \Bigr) \, \overline{B}\_2 \Bigl(z+\frac{n}{m}+n\frac{\ell}{m} \Bigr) \Bigr)\\ &\ \ \ +m \,\Bigl(\frac{m-1}{m}-1 \Bigr) \,\overline{B}\_2\Bigl(z+\frac{n}{m}\Bigr) \biggr)\\ &\ -\frac{1}{2} \biggl( \, \sum\_{\ell=1}^{m-1}\ell \Bigl( \Bigl(\frac{\ell-1}{m}-1 \Bigr)^2 -\frac{1}{6} \Bigr) \, \overline{B}\_1 \Bigl(z+\frac{n}{m}-n\frac{\ell}{m} \Bigr)+ \Bigl( \Bigr(\frac{2m-\ell-1}{m}-1\Bigr)^2 -\frac{1}{6} \Bigr)\,\overline{B}\_1 \Bigl(z+\frac{n}{m}+n\frac{\ell}{m} \Bigr) \Bigr)\\ &\ \ \ +m \, \Bigl( \Bigl(\frac{m-1}{m}-1 \Bigr)^2-\frac{1}{6}\Bigr) \,\overline{B}\_1 \Bigl(z+\frac{n}{m} \Bigr) \biggr)\\ &\ - {\widetilde}{\tau} \, \biggl( \, \sum\_{\ell=1}^{m-1}\ell \, \Bigl(\frac{1}{6}\, \Bigl(\frac{\ell-1}{m}-1 \Bigr)^3 -\frac{1}{12}\, \Bigl(\frac{\ell-1}{m}-1 \Bigr) +\frac{1}{6} \, \Bigl(\frac{2m-\ell-1}{m}-1 \Bigr)^3 -\frac{1}{12} \, \Bigl(\frac{2m-\ell-1}{m}-1 \Bigr) \Bigr) \\ &\ \ \ +m \, \Bigl(\frac{1}{6} \, \Bigl(\frac{m-1}{m}-1 \Bigr)^3 -\frac{1}{12}\, \Bigl(\frac{m-1}{m}-1 \Bigr) \Bigr)\biggr) \,. \end{split}\label{eq:longRationalEst}$$ Now using the identity[25](#fn25), for gcd(*m*, *n*) = 1, *k* > 1, $$\sum\_{\ell=1}^{m-1}\ell \, \Bigl( \overline{B}\_k \, \Bigl(x-n\frac{\ell}{m} \Bigr) +\overline{B}\_k \, \Bigl(x+n\frac{\ell}{m} \Bigr)\Bigr)+m \, \overline{B}\_k(x) {\;= \;}\frac{1}{m^{k-2}} \, \overline{B}\_k(mx) \,, \label{eq:remarkableId}$$ and ([eq:FourierBernoulli]), we can simplify ([eq:longRationalEst]) to $${\Gamma\_\text{e}}( z) \; \simeq \; \exp \biggl(-\frac{2\pi {{\rm i}}}{m} \Bigl(\, \frac{\overline{B}\_3(mz)}{6 {\widetilde}{\tau}^2} -\frac{\overline{B}\_2(mz)}{2 {\widetilde}{\tau}}+C(m,n,z) -\frac{1}{12} {\widetilde}{\tau} \Bigr) \biggr), \label{eq:gammaRationalEst}$$ for *m**z* ∈ R \ Z, as ${\widetilde}{\tau}\to0$. Here *C*(*m*, *n*, *z*) stands for ( − *m* times) the fourth and fifth lines of the right-hand side of ([eq:longRationalEst]). Generalizing the above derivation in a straightforward manner leads to $$\begin{split} & {\Gamma\_\text{e}}( z+r\tau)\simeq \\ & \quad \exp \biggl(-\frac{2\pi {{\rm i}}}{m} \Bigl(\frac{\overline{B}\_3(mz-nr)}{6 {\widetilde}{\tau}^2} +(r-1)\, \frac{\overline{B}\_2(mz-nr)}{2 {\widetilde}{\tau}}+C(m,n,z,r) +\frac{(r-1)^3-\frac{r-1}{2}}{6} {\widetilde}{\tau} \Bigr) \biggr) \,, \label{eq:gammaRationalEstWithR} \end{split}$$ for *r* ∈ R. This is the analog of ([eq:numEst]) for ${\widetilde}\tau\to0$. The explicit expression for *C*(*m*, *n*, *z*, *r*) is $$\begin{split} C(m,n,z,r)&{\;= \;}-\frac{m}{2}\big[\sum\_{\ell=1}^{m-1}\ell\big(((\frac{\ell+r-1}{m}-1)^2 -\frac{1}{6})\overline{B}\_1(z+\frac{n}{m}-n\frac{\ell+r}{m})\\ &\ \ \ +((\frac{2m-\ell+r-1}{m}-1)^2-\frac{1}{6})\overline{B}\_1(z+\frac{n}{m}+n\frac{\ell-r}{m})\big)\\ &\ \ \ +m((\frac{m+r-1}{m}-1)^2-\frac{1}{6})\overline{B}\_1(z+\frac{n}{m}-\frac{nr}{m})\big] \,. \label{eq:C(m,n,r,z)} \end{split}$$ The estimate ([eq:gammaRationalEstWithR]) is important to derive our results for the asymptotic expansion of the index near the roots of unity. It is valid uniformly over compact subsets of $z\in\mathbb{R}\setminus\frac{\mathbb{Z}}{m}$, because using ([eq:numEst]) on the right-hand side of ([eq:gammaFVprod]) is allowed only if $z-n\frac{l}{m}\notin\mathbb{Z}$ for ℓ = 0, …, *m* − 1. We can also use ([eq:gammaFVprod]) for *z* near 0. More precisely, on the RHS of ([eq:gammaFVprod]), for fixed $z \in (-\frac{1}{m}, \frac{1}{m})$, we can use for the ℓ = 0, *m* terms, and use for all other ℓ. With the aid of the “reflection formula” $$\Gamma\_h(x+\frac{\omega\_1+\omega\_2}{2};\omega\_1,\omega\_2) \Gamma\_h(-x+\frac{\omega\_1+\omega\_2}{2};\omega\_1,\omega\_2)=1.\label{eq:gammaHrefl}$$ which gets rid of the hyperbolic gammas arising from ℓ = *m*, and using the “product formula” to trade the hyperbolic gammas arising from ℓ = 0 for hyperbolic sines, we obtain $$\frac{1}{{\Gamma\_\text{e}}(z){\Gamma\_\text{e}}(-z)}\;\simeq \; \exp \Bigl( -\frac{4\pi {{\rm i}}}{m} {\widetilde}{R}^+\_0(z; {\widetilde}{\tau}) +4\pi{{\rm i}}s(n,m)\Bigr) \, 4\sinh^2 \Bigl(\frac{\pi z}{-{{\rm i}}{\widetilde}\tau}\Bigr) \,, \label{eq:denomEstRational}$$ where $${\widetilde}{R}^+\_0(z; {\widetilde}{\tau}) {\; \coloneqq \;}\frac{m^2 z^2}{2 {\widetilde}{\tau}} + \frac{1}{12 {\widetilde}{\tau}} + \frac{{\widetilde}{\tau}}{12} \,. \label{eq:R0Rational}$$ This is the analog of ([eq:denomEst])–([eq:R0]) for ${\widetilde}\tau\to0$, and is similarly useful (i.e. uniformly valid) in a neighborhood of *z* = 0. An estimate similar to for *z* near general nonzero $\frac{\mathbb{Z}}{m}$ can be obtained as well. We focus for simplicity on the *n* = 1 case (i.e. $\tau\to-\frac{1}{m}$). We write *z* = ℓ0/*m* + *z*ʹ and appeal to. We have to use the estimate for ℓ = ℓ0, ℓ0 + *m*, and the estimate for all other ℓ in the product for $\Gamma\_{{\rm e}}(z)$. Similarly we have to use the estimate for ℓ =  − ℓ0 + *m*,  − ℓ0 + 2*m*, and the estimate for all other ℓ in the product for $\Gamma\_{{\rm e}}(-z)$. The result is (up to a constant phase that we suppress) $$\begin{split} \frac{1}{{\Gamma\_\text{e}}(z){\Gamma\_\text{e}}(-z)}\;\simeq \; & \exp\Bigl(-2\pi {{\rm i}}[\frac{\frac{1}{6}+m^2z'^2}{m{\widetilde}{\tau}}-\frac{1}{2}+\frac{{\widetilde}{\tau}}{6m}] \Bigr) \; \times \\ & \qquad \bigl[\Gamma\_h(z'+\frac{\ell\_0}{m}{\widetilde}{\tau},{\widetilde}{\tau},{\widetilde}{\tau})^{\ell\_0+1} \Gamma\_h(-z'+\frac{m-\ell\_0}{m}{\widetilde}{\tau},{\widetilde}{\tau},{\widetilde}{\tau})^{m-\ell\_0+1}\\ &\qquad \Gamma\_h(z'+\frac{\ell\_0+m}{m}{\widetilde}{\tau},{\widetilde}{\tau},{\widetilde}{\tau})^{m-\ell\_0-1} \Gamma\_h(-z'+\frac{2m-\ell\_0}{m}{\widetilde}{\tau},{\widetilde}{\tau},{\widetilde}{\tau})^{\ell\_0-1}\bigr]^{-1} \,. \label{eq:denomEstRational0} \end{split}$$ Using the reflection formula we can simplify the above product of the hyperbolic gamma functions to find (up to the neglected constant phase) $$\begin{split} \frac{1}{{\Gamma\_\text{e}}(z){\Gamma\_\text{e}}(-z)}\;\simeq \; & \exp\Bigl(-2\pi {{\rm i}}[\frac{\frac{1}{6}+m^2z'^2}{m{\widetilde}{\tau}}-\frac{1}{2}+\frac{{\widetilde}{\tau}}{6m}] \Bigr) \; \times \\ & \qquad \bigl[\Gamma\_h(z'+\frac{\ell\_0}{m}{\widetilde}{\tau},{\widetilde}{\tau},{\widetilde}{\tau})\, \Gamma\_h(-z'+\frac
arxiv_0000295
Purification in Rapid Repeated Interaction Systems ================================================== We investigate the open dynamics of a quantum system when it is rapidly repeatedly updated by a quantum channel. Specifically, we analyze when this dynamics can purify the system. We develop a necessary and sufficient condition for such purification effects to occur and characterize their strength. We thoroughly analyze the specific scenario of a quantum system undergoing rapid unitary interactions with a sequence of ancillary quantum systems. We find that while the purification effects are generally present, in order for these effects to be strong compared to the decoherence effects the interaction Hamiltonian must have a minimum degree of complexity. Specifically, a tensor product interaction *Q̂*S ⊗ *R̂*A, as well as many common light-matter interactions cannot purify efficiently. Introduction ============ The study of the interaction of quantum systems with an unknown environment is relevant to a host of different disciplines ranging from applied physics and engineering to the foundations of quantum theory. For instance, phenomena such as environment-induced decoherence and dephasing hinder our ability to control quantum systems and the flow of quantum information. Thus it is of capital importance to understand these effects in our efforts to build a quantum computer. Furthermore, such studies are also crucially relevant in our understanding of fundamental topics in quantum theory such as the *measurement problem*, and more generally, in the context of quantum thermodynamics. When one thinks of the interaction between a quantum system and its environment, the words dephasing (loss of purity) and decoherence come to mind: even if the system and the environment together evolve unitarily, the system’s effective dynamics will experience non-unitary evolution. However, not all non-unitary effects decrease purity, so it is thinkable that interaction with an environment can, in principle, also decrease the entropy of the system. Open dynamics can indeed be useful in many different ways. For example, a system could be driven by open dynamics to a fixed point which has some useful property, such as enabling entanglement farming. In this paper we consider the open dynamics that emerges out of the rapid repeated application of a—perhaps stochastic— completely positive trace preserving (CPTP) map. Within these setups, we will especially focus on the particular CPTP maps generated by the sequential interaction of a system with an ensemble of ancillae. This setup can be thought of as modeling the environment as a sequence of (maybe unknown) constituents which repeatedly couple to the system in rapid succession. In the literature, these scenarios are referred to as *Repeated Interaction Systems* or *Collision Models*. These models have been successfully applied to varied phenomena such as, for instance, the study of quantum coherence, quantum thermodynamics, the measurement problem (through its close relationship with the quantum Zeno effect, ) and even decoherence in gravitation and cosmology. Here, we will investigate the particularly interesting possibility that rapid repeated interactions can cause the system’s purity to increase rather than introduce decoherence. Specifically, we will show that that many common types of simple repeated interactions cannot efficiently purify in the rapid interaction regime. We will demonstrate that the interaction between a system and the constituents of its environment needs to have a minimum degree of complexity in order to cause significant purification. We will identify the necessary and sufficient conditions for a rapid repeated interaction scenario to have significant purification effects on the system. We will also provide particular examples of interactions that can and cannot increase a system’s purity under rapid repeated interactions. We will pay special attention to more experimentally relevant setups such as spin J-coupling, the coupling of a qubit to an environment of harmonic oscillators, and we will report particularly surprising results concerning the interaction of the degrees of freedom of electrons in atomic orbitals with relativistic quantum fields, such as the electromagentic field. This paper is structured as follows: Section [RRIF] reviews the rapid repeated interaction formalism developed in. Section [ECfP] studies when and how strongly rapid repeated interactions can purify. Section [AB] addresses the specific scenario of *ancillary bombardment*. And finally, Section [Examples] presents examples of classes of interactions which can or cannot purify, including the light-matter interaction. Rapid repeated interactions formalism ===================================== In this section we review the results in, paying close attention to more subtle aspects of their formulation in terms of open dynamics of rapid repeated interactions. Generally, a rapid repeated interaction scenario consists of a quantum system being frequently updated by a quantum channel. Before we present the formalism for updating by a general quantum channel, it may be helpful to have a more concrete setup in mind. Specifically, a very natural way of thinking about this kind of setup is to consider a quantum system being bombarded by a sequence of ancillary quantum systems, undergoing a brief unitary interaction with each of them. This scenario, which we term *ancillary bombardment*, generates non-trivial open dynamics in the system, as discussed broadly in. In section [AB] of this paper we analyze this scenario’s ability to cause the system state to increase its purity. As an example, ancillary bombardment could be used to model a system interacting with its environment by assuming that it repeatedly interacts unitarily with individual constituents of the environment. Another example of such a scenario is a laboratory system that is repeatedly bombarded by probes (See for examples). With this concrete scenario in mind we can proceed with more formal analysis. A rapid repeated interaction scenario considers a quantum system, labeled S, which evolves (in time steps of duration *δ**t*) by the repeated application of a quantum channel, *ϕ*(*δ**t*). At each time *T* = *n* *δ**t*, the discrete-time evolution of the system’s density matrix is given by (n t) (t)n[(0)], for integer *n*. We make the natural assumption that the strength of each individual interaction is finite, so that in the continuous interaction limit, that is as *δ**t* → 0, we have that $\phi(\delta t)\to\openone$ (nothing happens in no time). This is in contrast to approaches where the strength of the interaction is taken to infinity as *δ**t* → 0 (for an in-depth comparison with previous work see ). Note that since $\phi(\delta t)\to\openone$ as *δ**t* → 0, for small enough *δ**t*, we know *ϕ*(*δ**t*) is invertible. Additionally, we assume that *ϕ*(*δ**t*) is differentiable at *δ**t* = 0, with derivative *ϕ*ʹ(0) (things happen at a finite rate). Given such a discrete update map, *ϕ*(*δ**t*), we can construct a continuous-time interpolation scheme for the dynamics given by. Specifically, we find a unique interpolation scheme by making the following three assumptions for the continuous-time evolution: 1. The evolution is Markovian, such that, (t) (t t)[(0)], or equivalently, (t)=t[(t)], where L*δ**t* (the effective time-independent Liouvillian) is some superoperator which generates time translations for the system. 2. The evolution exactly matches the discrete dynamics at the end of every time step. Using this means, (n t t) =(t)n or equivalently, (t t) =(t). 3. The evolution’s effective Liouvillian, L*δ**t*, is well defined in the continuous interaction limit, that is as *δ**t* → 0. These three conditions uniquely specify the interpolation scheme that is generated by t ((t)), where we have taken the logarithm’s principal branch cut, that is the one with $\text{log}(\openone)=0$. Note that our assumption that $\phi(\delta t)\to\openone$ as *δ**t* → 0 guarantees that *ϕ*(*δ**t*) will be nonsingular in the short time regime, and hence will have a well defined logarithm. The first condition guarantees that the interpolation scheme is generated by some effective time-independent Liouvillian, and the second condition forces this Liouvillian to have the form. The third condition resolves the ambiguity of the logarithm’s branch cut by forcing $\text{log}(\openone)=0$, which is necessary to make L*δ**t* well defined as *δ**t* → 0. Moreover, this branch cut allows us to calculate L*δ**t* as *δ**t* → 0 (using L’Hôpital’s rule) to be $$\begin{aligned} \mathcal{L}\_0 \coloneqq \lim\_{\delta t\to0}\mathcal{L}\_{\delta t} &=\frac{{\mathrm{d}}}{{\mathrm{d}}\, \delta t} \bigg\vert\_{\delta t=0} \text{log}(\phi(\delta t))\\ \nonumber &=\phi^{-1}(0) \, \phi'(0)\\ \nonumber &=\phi'(0).\end{aligned}$$ Thus, in the continuum limit, evolution is generated by the derivative of the update map. This result was first explicated in. Taking all this into account, we can faithfully describe the discrete-time evolution,, of a quantum system using the continuous-time interpolation scheme, generated by. If in addition to the minimal regularity assumed above (that is, $\phi(\delta t)\to\openone$ as *δ**t* → 0 and *ϕ*ʹ(0) exists), we also have that *ϕ*(*δ**t*) is analytic at *δ**t* = 0, we can then form the series expansion (t) =+t 1 +t2 2 +t3 3 +…and from this t =0 +t 1 +t2 2 +t3 3 +…. As shown in, the first few superoperator coefficients are given recursively by $$\begin{aligned} \label{L0def} \mathcal{L}\_0 \coloneqq\phi\_1&,\\ \label{L1def} \mathcal{L}\_1 \coloneqq\phi\_2 &-\frac{1}{2}\mathcal{L}\_0{}^2,\\ \label{L2def} \mathcal{L}\_2 \coloneqq\phi\_{3} &-\frac{1}{2}(\mathcal{L}\_0\mathcal{L}\_1+\mathcal{L}\_1\mathcal{L}\_0) -\frac{1}{6}\mathcal{L}\_0{}^3\\ \label{L3def} \mathcal{L}\_3 \coloneqq\phi\_{4} &-\frac{1}{2}(\mathcal{L}\_0\mathcal{L}\_2+\mathcal{L}\_2\mathcal{L}\_0)\\ &\nonumber -\frac{1}{6}(\mathcal{L}\_0{}^2\mathcal{L}\_1+\mathcal{L}\_0\mathcal{L}\_1\mathcal{L}\_0+\mathcal{L}\_1\mathcal{L}\_0{}^2) -\frac{1}{24}\mathcal{L}\_0{}^4.\end{aligned}$$ with the higher order terms following a similar pattern. From the series, the master equation for the interpolation scheme becomes, (t) =0[(t)] +t 1[(t)] +t22[(t)] +…. Given such an update map *ϕ*(*δ**t*) we can compute these coefficient maps and analyze their effects in the system dynamics. For instance, in the case of a ancillary bombardment defined above, L0 generates unitary dynamics. Thus within this model any decoherence effects require finite interaction times. In it was shown that decoherence effects generically appear in L1, that is at first order in *δ**t*. In the following sections we will analyze under what conditions rapid repeated interactions can increase the purity of a system, rather than just introducing decoherence. Purification Conditions ======================= In this section, we find a necessary and sufficient condition for when the discrete dynamics given by can cause purification of a finite dimensional system. By this we mean that there exists some system state, *ρ*S, whose purity, P(*ρ*S) = Tr(*ρ*S2), increases under these dynamics. In section [RRIF] we converted the discrete-time dynamics into the continuous-time Markovian dynamics, generated by the effective Liouvillian. We will now discuss the exact conditions for such an interpolation scheme to cause purification, show that this interpolation scheme purifies if and only if the discrete dynamics does too, and finally characterize the strength of such purification effects. Markovian Purification ---------------------- For finite *d*-dimensional systems the dynamics generated by a Liouvillian, L, can cause purification if and only if the dynamics which it generates is not unital, that is, [I]0, where *I* is the identity matrix. Recalling that the maximally mixed state is given by, we can restate this as: Markovian dynamics can purify if and only if it *moves* the maximally mixed state. Throughout this paper we will refer to *I* and the maximally mixed state synonymously. The condition is clearly sufficient for the dynamics to cause purification since if the maximally mixed state is moved by the dynamics, its purity must increase. This follows from the maximally mixed state being the unique minimum purity state. Note, however, that this is not true for infinite dimensional systems. The question of purification of infinite dimensional systems under Markovian dynamics has been analyzed in depth, with the result that L not being unital is still necessary for causing purification, but is no longer sufficient. The necessity of to cause purification follows from the claim () = (2) ([I] 2) whose proof we reproduce with our notation in Appendix [Necessity]. Interpolation Faithfully Captures Purification Effects ------------------------------------------------------ In the following, we prove that, in the rapid interaction regime, the interpolation scheme faithfully captures the presence of purification effects in the discrete dynamics. First we argue that any purification effects in the discrete dynamics is captured by the interpolation scheme. Suppose that applying the update map, *ϕ*(*δ**t*), increases the purity of some state. By construction, applying the interpolation scheme for a duration *δ**t* to this state has to yield the same result. Because the interpolation scheme is smooth, at some point in this duration it must have increased some state’s purity. Next, we consider the possibility that the interpolation scheme could indicate that there is purification when none is present in the discrete dynamics. Suppose that the interpolation scheme instantaneously purifies some state. Then by the purification condition discussed in section [MarkPure], we must have L*δ**t*[*I*] ≠ 0. From the matching condition, this implies that $$\begin{aligned} \phi(\delta t)[I] &=\exp(\delta t \, \mathcal{L}\_{\delta t})[I]\\ &\nonumber =I+\delta t \, \mathcal{L}\_{\delta t}[I] +(\delta t^2/2) \, \mathcal{L}\_{\delta t}[\mathcal{L}\_{\delta t}[I]] +\dots \,.\end{aligned}$$ and we therefore conclude *ϕ*[*I*] ≠ *I*. For interaction times small enough, we can neglect the O(*δ**t*2) terms. Thus the discrete dynamics do in fact purify, the discrete update map moving (and hence purifying) the maximally mixed state. Note that this argument relies on the maximally mixed state being the unique minimum purity state and so does not work in infinite dimensions. From these arguments, we conclude that in the rapid repeated interaction regime, the discrete dynamics generated by *ϕ*(*δ**t*) can purify if and only if the continuous dynamics generated by L*δ**t* can. It then follows from that repeated applications of *ϕ*(*δ**t*) can purify if and only (or in other words *ϕ*(*δ**t*) is not unital). Purification Strength --------------------- Now that we have identified a necessary and sufficient condition for the dynamics generated by repeated applications of *ϕ*(*δ**t*) to purify, we will quantify the strength of this purification. Assuming that the quantum channel, *ϕ*(*δ**t*) is analytic at *δ**t* = 0, we can make use of the series expansion to quantify this strength by noting at what order in *δ**t* the maximally mixed state is moved by the effective Liouvillian, L*δ**t*. We say that the dynamics purifies at order *m* if, t[I] =(tm) where $\text{ord}(\phi)\coloneqq m$ is the *purification order* of the dynamics. The smaller ord(*ϕ*) is the stronger the purification effects. Recall that in infinite dimensions, dynamics being non-unital is not sufficient for purification effects. Therefore this measure of purification strength only makes sense for finite dimensional systems. This notion of purification strength can be translated to the discrete updater *ϕ*(*δ**t*) by using the recursive structure of the coefficient maps in. Concretely, t[I] =(tm) (t)[I] =I+(tm+1), such that the orders of the non-unital effects are offset by one between the discrete update map and the interpolation scheme. In order to make use of purification effects experimentally—for example in an algorithmic cooling setup,— we would like to manufacture interactions which can purify at the lowest possible order. One may wonder if this is possible by combining different interaction maps to engineer a new map with a lower purification order. However, two simple ways of combining maps together, namely concatenation of different maps or applying maps out of a statistical ensemble (taking convex combinations), cannot lower the resultant purification order below those of the original maps. Specifically, if we take *ϕ*(*δ**t*) to be a concatenation of a finite number of maps as, (t)= (1)(t) (2)(t) … (N)(t), then (){((n))}, such that *ϕ*’s strength is bounded by the strongest *χ*. Additionally, taking *ϕ*(*δ**t*) to be a convex combination of maps as, (t)= k pk (k)(t), with ∑*k**p**k* = 1 we find (){((k))}, such that *ϕ*’s strength is bounded by the strongest *ψ*. We prove these claims in Appendix [CaCC]. Ancillary Bombardment ===================== We now apply the characterization of purification effects developed in the previous section to a specific physically motivated class of update maps given by, (t)[] =((- t /)()) where ad*Ĥ*(*A*) = [*Ĥ*, *A*] for any operator *A*. Physically, this map describes the system, S, first engaging with an ancilla, A, which is in the state *ρ*A, then interacting for a time *δ**t* under the joint Hamiltonian =+ +, and finally decoupling from ancilla, which is discarded. This update map could be used to model a wide variety of scenarios. For example, it could model each discrete step of the dynamics of a system repeatedly interacting with the constituents of its environment, or an atom being bombarded with light/other atoms in a laboratory setting (both examples of ancillary bombardment). Note that the necessary and sufficient condition to cause purification,, which was discussed in section [ECfP], requires that S be finite dimensional. However, there is no such restriction on the ancillary systems, A, to which S couples. This update map is sufficiently well behaved in the rapid interaction limit — recall that we require $\phi(\delta t)\to\openone$ as *δ**t* → 0 and that *ϕ*ʹ(0) exists — and so we can construct the unique Markovian interpolation scheme as prescribed in section [RRIF]. Moreover, since the update map is analytic around *δ**t* = 0, we can expand it in powers of *δ**t* as in : $$\begin{aligned} \label{TimelessPhi1} &\phi\_1[\rho\_\text{S}] =\frac{-{\mathrm{i}}}{\hbar} \, \text{Tr}\_\text{A}\Big( [\hat{H},\rho\_\text{S}\otimes \rho\_\text{A}]\Big)\\ \label{TimelessPhi2} &\phi\_2[\rho\_\text{S}] =\frac{1}{2!}\Big(\frac{-{\mathrm{i}}}{\hbar}\Big)^2 \text{Tr}\_\text{A}\Big( [\hat{H},[\hat{H},\rho\_\text{S}\otimes \rho\_\text{A}]]\Big)\\ \label{TimelessPhi3} &\phi\_3[\rho\_\text{S}] =\frac{1}{3!}\Big(\frac{-{\mathrm{i}}}{\hbar}\Big)^3 \text{Tr}\_\text{A}\Big( [\hat{H},[\hat{H},[\hat{H},\rho\_\text{S}\otimes \rho\_\text{A}]]]\Big)\\ \label{TimelessPhi4} &\phi\_4[\rho\_\text{S}] =\frac{1}{4!}\Big(\frac{-{\mathrm{i}}}{\hbar}\Big)^4 \text{Tr}\_\text{A}\Big( [\hat{H},[\hat{H},[\hat{H},[\hat{H},\rho\_\text{S}\otimes \rho\_\text{A}]]]]\Big)\end{aligned}$$ and so on. We can thus expand the effective Liouvillian as in. In, a general family of update maps including were analyzed at zeroth and first order using the rapid repeated interaction formalism discussed in section [RRIF]. The full generality of the interactions considered in includes allowing time dependence in the interaction Hamiltonian as well as taking an arbitrary convex combination of multiple interaction types, with different types of ancilla and with different couplings. Remarkably, in, it was found that L0 generates unitary evolution. For the dynamics generated by we have, $$\begin{aligned} \label{TimelessL0explicit} \mathcal{L}\_0[\rho\_\text{S}] &=\frac{-{\mathrm{i}}}{\hbar} \, [\hat{H}\_\text{eff},\rho\_\text{S}]\end{aligned}$$ where the effective Hamiltonian *Ĥ*eff is given by +(0), that is, the system’s free Hamiltonian plus a new term, *Ĥ*(0), which comes from the repeated interactions. This new contribution to the dynamics is given by, (0) (). Note that since the leading order dynamics is unitary, it cannot affect the purity of the system. Thus any decoherence effects must arise at subleading order in the dynamics. The leading possible order for purification effects is thus first order. The first order dynamics, L1, was analyzed in full detail in, and was generally seen to give rise to dephasing effects. For the dynamics generated by, L1 is given by $$\begin{aligned} \label{L1DefAB} \mathcal{L}\_1[\rho\_\text{S}] &=\frac{-{\mathrm{i}}}{\hbar} \big[\hat{H}^{(1)},\rho\_\text{S}\big] -\frac{1}{2}\Big(\frac{-{\mathrm{i}}}{\hbar}\Big)^2 \, \big[\hat{H}^{(0)},[\hat{H}^{(0)},\rho\_\text{S}]\big]\\ &\nonumber +\frac{1}{2}\Big(\frac{-{\mathrm{i}}}{\hbar}\Big)^2 \, \text{Tr}\_\text{A}\Big( \big[\hat{H}\_\text{SA}, [\hat{H}\_\text{SA}, \rho\_\text{S}\otimes\rho\_\text{A}]\big]\Big)\end{aligned}$$ where, H(1)=( [,]). The first order dynamics, L1[*ρ*S], consists of two different contributions. One is a new unitary contribution to the dynamics, *Ĥ*(1), which (after examination of and ) can be understood as correction to *Ĥ*(0) accounting for the ancilla evolving under its free Hamiltonian during the interaction. Secondly, there are two other terms that are not unitary and will, in general, affect the purity of the system. Since generically introduces dephasing effects at order *δ**t*, for any purification effects to have a comparable impact on the dynamics they must also appear at first order. That is $$\begin{aligned} \label{L1oI} \mathcal{L}\_1[I] =\frac{1}{2}\Big(\frac{-{\mathrm{i}}}{\hbar}\Big)^2 \, \text{Tr}\_\text{A}\Big( \big[\hat{H}\_\text{SA}, [\hat{H}\_\text{SA}, I\otimes\rho\_\text{A}]\big]\Big)\neq0 \end{aligned}$$ or in other words L1 must already be able to move the maximally mixed state. Note that this just depends on the interaction Hamiltonian and the state of the ancilla, and not on either of the free Hamiltonians. In the following subsections we investigate the algebraic conditions that an interaction Hamiltonian needs in order to purify at leading possible order, that is to satisfy. Tensor Product Interaction -------------------------- We begin by analyzing the simplest model for an interaction Hamiltonian, namely the tensor product of scalar observables. The joint Hamiltonian under this type of coupling is, =+ +, where *Q̂*S and *R̂*A are observables of the system and ancilla respectively. This type of interaction is a very common interaction model considered throughout the literature of rapid repeated interaction. In Appendix [TPCalc], we show that the effect of the first order dynamics on the maximally mixed state vanishes, L1[*I*] = 0. Thus rapid repeated interaction under the Hamiltonian cannot purify at leading order in decoherence effects. In fact we also show that the second order effects vanish, L2[*I*] = 0. Continuing on, we find the leading order purification effect is given by 3[I] = [,[,]] ([,[,]]). Note that if the ancillae are infinite dimensional then the above calculations require that any relevant permutations of *R̂*A, *Ĥ*A, and *ρ*A are trace class. Thus a tensor product interaction will in general only be able to purify at third order, that is two orders lower than the leading order decoherence effects. The conclusion of this analysis is that, perhaps unintuitively, a tensor product interaction of the kind *Ĥ*SA = *Q̂*S ⊗ *R̂*A will in general strictly decrease purity at leading order in decoherence effects. This analysis allows us to conclude that any rapid repeated tensor product interaction model cannot capture phenomena involving an entropy decrease in S, such as cooling. Interaction via non-product Hamiltonians ---------------------------------------- After having established that tensor product Hamiltonians cannot purify at leading order in dechoerence effects, we now investigate whether it is possible to do so through rapid repeated interaction under a Hamiltonian that is the sum of two scalar couplings, i.e., = +. In Appendix [TPCalc], we show that the effect of L1 on the maximally mixed state,, is, $$\begin{aligned} \mathcal{L}\_1[I] &=\frac{1}{({\mathrm{i}}\hbar)^2} [\hat{Q}\_\text{S},\hat{S}\_\text{S}] \ \text{Tr}\_\text{A}\Big([\hat{R}\_\text{A},\hat{T}\_\text{A}]\rho\_\text{A}\Big).\end{aligned}$$ Again, if the ancillae are infinite dimensional, then the above calculation requires that all relevant permutations of *R̂*A, *T̂*A, and *ρ*A are trace class. In contrast to the simple interaction, the interaction Hamiltonian can purify at leading order in decoherence effects. Specifically it will purify if and only if the two system observables (*Q̂*S and *Ŝ*S) do not commute, and the two ancilla observables (*R̂*A and *T̂*A) do not commute on average with respect to the initial state of the ancilla, *ρ*A. From this we can move to the most general case, by noting that any interaction Hamiltonian, *H*SA, can be decomposed as a sum of tensor products =j,j,j In Appendix [TPCalc] we show that, for the general case of, the effect of L1 on the maximally mixed state,, is 1[I] =()2 i,j[,i,,j]   ([,i,,j]) and as before, if the ancillae are infinite dimensional, then the above calculation requires that all relevant permutations of *R̂*S, *i*, *R̂*S, *j*, and *ρ*A are trace class. Thus the condition that an interaction Hamilton to be able to purify at leading possible order is, i,j ([,i,,j])   [,i,,j] 0 In order for to be non-zero, a Hamiltonian of the form must have a pair of terms whose system parts do not commute and whose ancilla parts do not commute on average. Thus rapid repeated interactions with ancillae under an arbitrary Hamiltonian will in general be able to purify at leading order in decoherence effects. In section [Examples], we will show some simple non-product interactions that can purify at leading order. Conversely, we will also show some remarkable common types of non-product coupling that, nevertheless, cannot purify at leading order due to cancellations within. Note that while the above analysis only considers the specific form of the update map given by, we can extend these results to a much wider class of update maps by making use of the results described at the end of Section [PStrength]. Time-dependent interactions --------------------------- Additionally our analysis easily extends to include cases of ancillary bombardment where the Hamiltonian is explicitly time dependent. The dissipation effects in this scenario were analyzed in. In particular they considered the Hamiltonian to be of the form t(t) = + +(t/t). The update map for such an interaction is given by (t)[] =(Ut(t)() Ut(t)) where *U**δ**t*(*t*) is the unitary transformation, Ut(t) =(0t t()) and T is the time-ordering operation. This unitary transformation is generated by a time-dependent Hamiltonian, *Ĥ**δ**t*(*t*). From, we can compute the effect of L1 on the maximally mixed state,, as 1[I] = ()2 ( ]) where G0()01 () dis the unweighted time average of the interaction Hamiltonian. Thus we can see that the ability of an interaction Hamiltonian to purify at leading order in decoherence effects only depends on its time average. Thus a time dependent interaction can purify at leading order if and only if its time average can. Examples ======== In this section, we investigate several specific interaction Hamiltonians in light of the necessary and sufficient condition to purify at leading order which we described in the previous section. Namely, that when written as a sum of tensor products,, it must satisfy. Isotropic spin coupling () -------------------------- As an example of an interaction capable of purifying at leading order, we consider the isotropic spin-spin interactions, = J = J jj where we use Einstein’s summation notation of implicitly summing over all repeated indices. From we can compute the effect of L1 on the maximally mixed state as, $$\begin{aligned} \mathcal{L}\_1[I] &\nonumber =\frac{1}{2}\Big(\frac{-{\mathrm{i}}}{\hbar}\Big)^2 (\hbar J)^2 \ \big\langle[\hat{\sigma}\_\text{A}{}\_i,\hat{\sigma}\_\text{A}{}\_j]\big\rangle \ [\hat{\sigma}\_\text{S}{}^i,\hat{\sigma}\_\text{S}{}^j]\\ &=4 \, J^2 \ \langle \bm{\hat{\sigma}}\_\text{A}\rangle \,\cdot\, \bm{\hat{\sigma}}\_\text{S} \label{sig-sig}\end{aligned}$$ where, for convenience, we have introduced the notation ⟨  ⋅  ⟩ = TrA(  ⋅  *ρ*A). In terms of Bloch vectors, eq. expresses the intuitive result that the maximally mixed state, $\bm{a}\_\text{S}=0$, is moved in the direction of the ancilla’s Bloch vector, $\bm{a}\_\text{A}=\langle \bm{\hat{\sigma}}\_\text{A}\rangle$. Thus unless the ancillae are in the maximally mixed state there will be purification effects in the dynamics. Qubit-Harmonic Oscillator coupling ---------------------------------- We find another example of an interaction Hamiltonian that can purify by considering a qubit which repeatedly interacts with sequence of harmonic oscillators via the interaction Hamiltonian = (x +y) where *x̂* = (*â* + *â*†)/2 and *p̂* = i(*â* − *â*†)/2 are quadrature operators for a harmonic oscillator. From we can compute the effect of L1 on the maximally mixed state as, $$\begin{aligned} \mathcal{L}\_1[I] &\nonumber =\Big(\frac{-{\mathrm{i}}}{\hbar}\Big)^2 (\hbar \omega)^2 \ \big\langle[\hat{x},\hat{p}]\big\rangle \ [\hat{\sigma}\_x,\hat{\sigma}\_y]\\ &=2 \, \omega^2 \ \hat{\sigma}\_z.\end{aligned}$$ Thus the maximally mixed state is initially polarized in the *z* direction under this interaction regardless of the state of the harmonic oscillator ancillae. This type of interaction can, in principle, be implemented in superconducting circuits, achieving fast switching times in the ultra strong switchable coupling regime. Vector-vector Couplings ----------------------- As discussed in section [TPI] rapidly interacting with an ancilla via a tensor product of two scalar observables, *Ĥ*SA = *Q̂*S ⊗ *R̂*A, cannot purify at leading order. A natural generalization of this coupling is to instead couple two vector observables component-wise (through their dot product) as, = jj. From, the effect of L1 on the maximally mixed state is 1[I] =()2    [i,j]. Thus, for repeated interactions under to purify, the components of $\bm{\hat{V}}$ must not commute amongst themselves, and the components of $\bm{\hat{W}}$ must not either. Many common vector observables such as $\bm{\hat{x}}$, $\bm{\hat{p}}$, $\bm{\hat{E}}(\bm{x})$, and $\bm{\hat{B}}(\bm{x})$, do not pass this test, while others such as $\bm{\hat{L}}$ and $\bm{\hat{\sigma}}$ do. Thus vector-vector couplings involving any of $\bm{\hat{x}}$, $\bm{\hat{p}}$, $\bm{\hat{E}}(\bm{x})$, or $\bm{\hat{B}}(\bm{x})$ can not purify whereas couplings involving $\bm{\hat{L}}$ or $\bm{\hat{\sigma}}$ potentially can depending on what they are coupled to. From this we can generalize further to the case of two vector fields coupled component-wise throughout all of space as, $$\begin{aligned} \label{VectorFieldCoupling} H\_\text{SA} =\int {\mathrm{d}}\bm{x} \ \bm{\hat{V}}\_\text{S}(\bm{x})\cdot\bm{\hat{W}}\_\text{A}(\bm{x}) =\int {\mathrm{d}}\bm{x} \ \hat{V}\_\text{S}{}^j(\bm{x})\otimes\hat{W}\_\text{A}{}\_j(\bm{x}).\end{aligned}$$ A necessary condition for repeated interaction this type of Hamiltonian to purify is that at least one of the following two conditions holds: 1. Neither $\bm{\hat{V}}(\bm{x})$ nor $\bm{\hat{W}}(\bm{x})$ is microcausal. (Recall that an observable $\hat{\bm{X}}$ is microcausal if only has support on $\bm{x}=\bm{x}'$). 2. Neither $\bm{\hat{V}}(\bm{x})$ nor $\bm{\hat{W}}(\bm{x})$ has its components commute locally amongst themselves, i.e.. To see that this is the case, we compute the effect of L1 on the maximally mixed state from as, $$\begin{aligned} \label{{L1oIVW}} &\mathcal{L}\_1[I]\\ &\nonumber =\Big(\frac{-{\mathrm{i}}}{\hbar}\Big)^2\!\!\!\int\!\!\! {\mathrm{d}}\bm{x}\!\!\int\!\!\! d\bm{x'} \, \big\langle [\hat{W}\_i(\bm{x}),\hat{W}\_j(\bm{x'})]\big\rangle \, [\hat{V}^i(\bm{x}),\hat{V}^j(\bm{x'})]\end{aligned}$$ If one of $\bm{V}(\bm{x})$ or $\bm{W}(\bm{x})$ is microcausal then the integral’s domain can be reduced to the $\bm{x}=\bm{x}'$ region. From there, whichever of $\bm{V}(\bm{x})$ or $\bm{W}(\bm{x})$ has its components locally commuting causes the integrand to vanish. Thus such interactions cannot purify at leading order. Light-matter Interaction ------------------------ Let us now focus on a concrete relevant model used in quantum optics: we will analyze the ability of the light-matter interaction to purify in the context of rapid repeated interactions. Let us consider an atom interacting with a second-quantized electromagnetic field. Let us take the atom as the target system, S, and the field as the ancilla, A, to which the system is repeatedly coupled. Physically, one can imagine atoms bombarded by pulses of light. We begin by showing that any single multipolar coupling of the electric field to an atom cannot purify at leading order in rapid repeated interactions. First, we consider the electric dipole interaction given by $$\begin{aligned} \hat{H}\_\text{SA} =q \, \hat{x}^j\hat{E}\_j &\nonumber =\int {\mathrm{d}}\bm{x} \ q \, \hat{x}^j \, \ket{\bm{x}}\!\bra{\bm{x}}\otimes\hat{E}\_j(\bm{x})\\ &=\int {\mathrm{d}}\bm{x} \ \hat{d}^j(\bm{x})\otimes\hat{E}\_j(\bm{x})\end{aligned}$$ where $\hat{d}^j(\bm{x})=q \, x^j \, \ket{\bm{x}}\!\bra{\bm{x}}$ is the dipole moment operator at a position $\bm{x}$. In this form, the interaction is written as the coupling of two vector fields throughout all of space. This is the scenario that was analyzed at the end of the previous section. It is enough to note that the electric field is microcausal and the components of $\bm{\hat{d}}(\bm{x})$ commute amongst themselves locally (in fact, both observables have both properties) to conclude that the electric dipole interaction cannot purify at leading order on its own. Similarly, if we consider the electric quadrupole coupling given by H =q ijij =  ij()ij() where $\hat{Q}^i{}^j(\bm{x})=q \, x^i x^j \ket{\bm{x}}\!\bra{\bm{x}}$ is the quadrupole moment operator at a position $\bm{x}$, we find that it cannot purify at leading order. This is again because $\bm{\nabla}\_i\hat{E}\_j(\bm{x})$ is microcausal and the components of $\hat{Q}^i{}^j(\bm{x})$ commute amongst themselves locally. Similarly, every higher multipolar electric coupling cannot purify on its own at leading order since higher derivatives of the electric field remain microcausal and the components of the higher moment operators always commute amongst themselves locally. A similar analysis can be carried out with the magnetic dipole interaction given by ={k,k} =d  k()k() where $\hat{\mu}^k(\bm{x})=(q/2m)(\hat{L}^k\ket{\bm{x}}\!\bra{\bm{x}}+\ket{\bm{x}}\!\bra{\bm{x}}\hat{L}^k)$ is the magnetic dipole operator at a position $\bm{x}$. The Hamitonian is again the coupling of two vector fields throughout all of space. We conclude as before that the magnetic dipole interaction cannot purify at leading order since the magnetic field is both microcausal and has its components commute amongst themselves locally. Furthermore, linear combinations of different electric multipole couplings cannot purify at leading order either. For example consider the combination of electric dipole and electric quadrupole interactions $$\begin{aligned} H\_\text{SA} &=q \, \hat{x}^k\hat{E}\_k +q \, \hat{x}^i\hat{x}^j\nabla\_i\hat{E}\_j\\ &\nonumber =\!\!\int\!\! {\mathrm{d}}\bm{x} \ \Big(\hat{d}^k(\bm{x})\otimes\hat{E}\_k(\bm{x}) +\hat{Q}^i{}^j(\bm{x})\otimes\bm{\nabla}\_i\hat{E}\_j(\bm{x})\Big).\end{aligned}$$ In computing the effect of L1 on the maximally mixed state,, the cross terms within the dipole coupling will vanish, as will the cross terms within the quadrupole coupling. Only the cross terms between the two couplings remain, yielding, $$\begin{aligned} &\mathcal{L}\_1[I]\\ &\nonumber =\Big(\frac{-{\mathrm{i}}}{\hbar}\Big)^2\!\!\!\int\!\!\! {\mathrm{d}}\bm{x}\!\!\int\!\!\! {\mathrm{d}}\bm{x'} \, \big\langle [\hat{E}\_k(\bm{x}),\nabla\_i\hat{E}\_j(\bm{x'})]\big\rangle \, [\hat{d}^k(\bm{x}),\hat{Q}^i{}^j(\bm{x'})].\end{aligned}$$ However, this too vanishes since, =0 for all $\bm{x}$ and $\bm{x}'$. This can be easily seen by noting that both $\hat{d}^k(\bm{x})$ and $\hat{Q}^i{}^j(\bm{x})$ are diagonal in the position basis. In the same fashion, any combination of any electric multipolar couplings will not be able to purify at leading order. Thus rapid repeated light-matter interactions where matter couples only to the electric field are unable to purify at leading order. Thus if we have any hope of purifying at leading order we must involve the magnetic field. A first very simple combination of electric and magnetic couplings that we can consider is the combination of the electric dipole and magnetic dipole couplings: $$\begin{aligned} H\_\text{SA} &=q \, \hat{x}^j\hat{E}\_j +\frac{q}{2m}\{\hat{L}^k,\hat{B}\_k\}\\ &\nonumber =\int {\mathrm{d}}\bm{x} \ \Big( \hat{d}^j(\bm{x})\otimes\hat{E}\_j(\bm{x}) +\hat{\mu}^k(\bm{x})\otimes\hat{B}\_k(\bm{x})\Big).\end{aligned}$$ This interaction Hamiltonian satisfies the necessary condition for purification to appear at leading order, as discussed in previous sections. Namely, $[\hat{d}^j(\bm{x}),\hat{\mu}^k(\bm{x}')]\neq0$ and $[\hat{E}\_j(\bm{x}),\hat{B}\_k(\bm{x}')]\neq0$. As above the cross terms within each of the electric and magnetic couplings will vanish and only the commutators mixing the electric and magnetic field will survive. Computing the effect of L1 on the maximally mixed states,, yields, 1[I] =()2 d [j(),k()] This integrand is non-zero but, remarkably, the integral over $\bm{x}$ and $\bm{x}'$ vanishes. The mechanism for this cancellation is particularly interesting, and is discussed in detail in Appendix [EDiMDi]. Therefore, despite the fact that the coupling satisfies the necessary condition for purification discussed in section [NPIH], rapid repeated interactions which involve both the electric and magnetic dipole couplings cannot purify at leading order. This result is easily extended to more general light-matter couplings. For example, in the case of the more physically relevant combination of both the electric quadrupole and magnetic couplings $$\begin{aligned} H\_\text{SA} &=q \, \hat{x}^i\hat{x}^j\nabla\_i\hat{E}\_j +\frac{q}{2m}\{\hat{L}^k,\hat{B}\_k\}\\ &\nonumber =\int {\mathrm{d}}\bm{x} \ \Big( \hat{Q}^i{}^j(\bm{x})\otimes\nabla\_i\hat{E}\_j(\bm{x}) +\hat{\mu}^k(\bm{x})\otimes\hat{B}\_k(\bm{x})\Big).\end{aligned}$$ we find a similar cancellation that yields no purification at leading order. Any higher order electric couplings will exhibit the same cancellation as will the combination of several electric multipolar moments along with the magnetic dipole moment. Summarizing, we have proven that the most common models of light-matter interactions employed in quantum optics —i.e., those involving any combination of electric multipolar couplings and the magnetic dipole coupling—cannot purify at leading order under rapid repeated interactions. This leads us to conjecture that the light-matter interaction in general cannot purify at leading order in decoherence effects. Conclusion ========== We analyzed the ability of rapid repeated interactions to purify a quantum system. In particular, we considered the formalism developed in, where a quantum system evolves (in discrete time steps of duration *δ**t*) under the repeated application of a quantum channel. We have studied and characterized the strength of purification effects of these rapid repeated interactions, namely, at what order in *δ**t* the dynamics can lead to purification. We have shown that, perhaps contrary to intuition, the purification strength cannot be increased by combining different rapid repeated interaction dynamics by composition or convex combination. After this general study, we have investigated in-depth the purifying power of a particularly relevant scenario that we called *ancillary bombardment*. In this scenario a quantum system is bombarded by a sequence of ancillae, undergoing a brief unitary interaction with each of them. For instance, one can think of an atom interacting with its environment, under the assumption that it repeatedly interacts unitarily with its individual constituents. Another example of such a scenario would be a laboratory system that is repeatedly measured by probes. We have shown that simple interaction Hamiltonians (including some considered in previous literature on rapid repeated interactions ) cannot purify at leading order if their interaction strength remains finite. Furthermore, we have shown that for an ancillary bombardment to purify at leading order it must be mediated by a sufficiently complicated Hamiltonian. Specifically, an interaction consisting of the tensor products of two scalar observables will not purify at leading order. We have found necessary and sufficient conditions for a ancillary bombardment to purify a quantum system. We studied what kinds of couplings satisfy them and what kind of couplings do not. For illustration, we have shown how an isotropic spin-spin coupling, as well as a specific experimentally feasible interaction of a qubit with a harmonic oscillator can purify at leading order under ancillary bombardment. Furthermore, we have paid special attention to the case of couplings of system observables to vector fields, and in particular the case of the multipole moments of an atom coupled to the fully quantized electromagnetic field (EM). For the case of interaction with relativistic quantum fields (such as the EM field) we have found necessary conditions for purification involving the microcausality of the theory. Remarkably, we have shown that any combinations of electric multipole couplings and the magnetic dipole coupling cannot purify at leading order under repeated interaction. This casts fundamental doubt on the ability of simple quantum optical setups to increase the purity of atomic qubits under fast interaction. These results may perhaps be relevant to the field of algorithmic cooling and can be used to design setups to prolong the life of quantum coherence through a controlled exposure to an environment. The particular implications of these results in quantum thermodynamics are intriguing and will be analyzed elsewhere. Acknowledgements ================ This work was supported in part by the Natural Sciences and Engineering Research Council of Canada through the NSERC Discovery programme. Necessity of Non-unitality for Purification =========================================== In this appendix we reproduce (in our notation) a proof given in. Specifically we prove that, for a finite dimensional systems, in order for the dynamics generated by a Liouvillian, L, to cause purification it is necessary that the dynamics are not unital, that is, L[*I*] ≠ 0, where and *I* is the identity matrix. In order to show this we derive the inequality, () = (2) ([I] 2). from which our claim follows directly. We first write the Liouvillian in a standard form called the Lindblad form, that is, [] =+n n (nn-{nn,}), where *Ĥ* is a Hermitian operator, *F̂**n* are operators, and Γ*n* are non-negative numbers. The operator *Ĥ* is the effective Hamiltonian of the dyanamics and is said to generate the unitary part of the dynamics. The operators *F̂**n* are the dynamics decoherence modes and Γ*n* are their respective decoherence rates. Any Liouvillian can be written in this form. Note that the effect of the dynamics on the maximally mixed state is, [I] =n n [Fn,Fn]. Using the cyclic property of trace we find the rate of change of systems purity is, () =(2) =2 ([] ). The unitary part of the dynamics does not change the purity, as expected, since ([H,] ) =(H [,]) =0. Thus we can focus our attention on the decoherence modes. Using,, and the cyclic property of trace we have $$\begin{aligned} \frac{{\mathrm{d}}}{{\mathrm{d}}t}\mathcal{P}(\rho) &=2 \, \text{Tr} \, \big(\mathcal{L}[\rho] \, \rho\big)\\ &\nonumber =2 \, \text{Tr} \, \Big(\sum\_n \Gamma\_n \, \big(F\_n\rho F\_n^\dagger-\{F\_n^\dagger F\_n,\rho\}/2\big)\rho\Big)\\ &\nonumber =\sum\_n \Gamma\_n \, 2 \, \text{Tr} \, \big( F\_n\rho F\_n^\dagger\rho -F\_n^\dagger F\_n\rho^2\big).\end{aligned}$$ For Hermitian *ρ*, we have the identity, 2 (AA- AA2) =([A,A]2 -), which yields, $$\begin{aligned} \frac{{\mathrm{d}}}{{\mathrm{d}}t}\mathcal{P}(\rho) &=\sum\_n \Gamma\_n \, \text{Tr} \, \big( [F\_n,F\_n^\dagger]\rho^2 -[F\_n,\rho]^\dagger [F\_n,\rho]\big)\\ &\nonumber =\text{Tr} \, \big(\mathcal{L}[I] \, \rho^2\big) -\sum\_n \Gamma\_n \, \text{Tr} \, \big([F\_n,\rho]^\dagger [F\_n,\rho]\big)\end{aligned}$$ where we have made use of to identify L[*I*] in the first term. Since the second term is manifestly negative we have the inequality, $$\begin{aligned} \frac{{\mathrm{d}}}{{\mathrm{d}}t}\mathcal{P}(\rho) \leq\text{Tr} \, \big(\mathcal{L}[I] \, \rho^2\big)\end{aligned}$$ claimed in. If L[*I*] = 0 then the dynamics will either maintain or decrease the purity of any state. In, this proof is shown to hold as well for infinite dimensional systems following some assumptions on the decoherence modes. In particular, it holds if all *F̂**n* are bounded. Concatenation And Convex Combinations ===================================== In this appendix, we prove that constructing a new map by taking either concatenations or convex combinations of different maps cannot lower the resultant purification order, defined in, below those of the original maps. Concatenation ------------- Suppose we have an update map *ϕ*(*δ**t*) that is the concatenation of two maps *χ*(1)(*δ**t*) and *χ*(2)(*δ**t*): (t) =(1)(t)(2)(t). We can take these two maps to have series expansions about *δ**t* = 0 $$\begin{aligned} \label{Chi1Series} \chi^{(1)}(\delta t) &=\openone +\delta t \, \chi^{(1)}\_1 +\delta t^2 \, \chi^{(1)}\_2 +\delta t^3 \, \chi^{(1)}\_3 +\dots\\ \label{Chi2Series} \chi^{(2)}(\delta t) &=\openone +\delta t \, \chi^{(2)}\_1 +\delta t^2 \, \chi^{(2)}\_2 +\delta t^3 \, \chi^{(2)}\_3 +\dots \, \end{aligned}$$ and define *m*1 = ord(*χ*(1)) and *m*2 = ord(*χ*(2)) to be the purification orders of *χ*(1)(*δ**t*) and *χ*(2)(*δ**t*) respectively, with *m* = min(*m*1, *m*2). Recall that a map’s purification order is defined in terms of its interpolation scheme. Converting this into terms of the update maps we have such that, $$\begin{aligned} &\chi^{(1)}[I]=I+\mathcal{O}(\delta t^{m\_1+1})\\ &\nonumber \chi^{(2)}[I]=I+\mathcal{O}(\delta t^{m\_2+1}).\end{aligned}$$ Thus we have $$\begin{aligned} \label{LesserOrder1} &\chi^{(1)}\_1[I]=\dots=\chi^{(1)}\_{m}[I]=0\\ &\nonumber \chi^{(2)}\_1[I]=\dots=\chi^{(2)}\_{m}[I]=0.\end{aligned}$$ Evaluating *ϕ*(*δ**t*) on the maximally mixed state yields $$\begin{aligned} &\phi(\delta t) =\chi^{(1)}(\delta t)\chi^{(2)}(\delta t)\\ &\nonumber =\big(\openone +\sum\_{k=1}^\infty \delta t^k \, \chi^{(1)}\_k\big)\big(\openone +\sum\_{n=1}^\infty \delta t^n \, \chi^{(2)}\_n\big)[I]\\ &\nonumber =\big(\openone +\sum\_{k=1}^\infty \delta t^k \, \chi^{(1)}\_k\big)\big(I +\sum\_{n=1}^\infty \delta t^n \, \chi^{(2)}\_n[I]\big)\\ &\nonumber =I +\sum\_{k=1}^\infty \delta t^k \, \chi^{(1)}\_k[I] +\sum\_{n=1}^\infty \delta t^n \, \chi^{(2)}\_n[I] +\sum\_{k=1}^\infty\sum\_{n=1}^\infty \delta t^{k+n} \, \chi^{(1)}\_k[\chi^{(2)}\_n[I]]\\ &\nonumber =I +\sum\_{k=m+1}^\infty \delta t^k \, \chi^{(1)}\_k[I] +\sum\_{n=m+1}^\infty \delta t^n \, \chi^{(2)}\_n[I]\\ &+\sum\_{k=1}^\infty\sum\_{n=m+1}^\infty \delta t^{k+n} \, \chi^{(1)}\_k[\chi^{(2)}\_n[I]]\end{aligned}$$ where we have used to drop terms from the sums. From this we can see that any non-unital effects in *ϕ*(*δ**t*) appear at at least order *m* + 1 and thus ord(*ϕ*) ≥ *m*. By applying this proof repeatedly one can conclude that if *ϕ*(*δ**t*) is a concatenation of a finite number of maps as, (t)= (1)(t) (2)(t) … (N)(t), then (){((n))} as claimed. Convex Combinations ------------------- Suppose we have an update map *ϕ*(*δ**t*) which is a convex combination of maps as, (t)= k pk (k)(t), with ∑*k**p**k* = 1. We can take these maps to have series expansions about *δ**t* = 0 as, (k)(t) =+t (k)1 +t2 (k)2 +t3 (k)3 +…. Let *m**k* = ord(*ψ*(*k*)) be the purification orders of *ψ*(*k*)(*δ**t*) and *m* = min{*m**k*}. Recall that a map’s purification order is defined in terms of its interpolation scheme. Converting this into terms of the update maps we have such that, (k)[I]=I+(tmk+1) and thus for every *k*, (k)1[I]=…=(k)m[I]=0. Evaluating *ϕ*(*δ**t*) on the maximally mixed state yields $$\begin{aligned} \phi(\delta t)[I] &=\sum\_k p\_k \, \psi^{(k)}(\delta t)[I]\\ &\nonumber =\sum\_k p\_k \, \big(\openone +\sum\_{n=1}^\infty \delta t^n \, \psi^{(k)}\_n\big)[I]\\ &\nonumber =\big(\openone +\sum\_{n=1}^\infty \delta t^n \, \sum\_k p\_k \, \psi^{(k)}\_n\big)[I]\\ &\nonumber =I+\sum\_{n=1}^\infty \delta t^n \, \sum\_k p\_k \, \psi^{(k)}\_n[I]\\ &\nonumber =I+\sum\_{n=m+1}^\infty \delta t^n \, \sum\_k p\_k \, \psi^{(k)}\_n[I]\end{aligned}$$ where in the last step we have used to drop terms from the sums. From this we can see that any non-unital effects in *ϕ*(*δ**t*) appear at at least order *m* + 1 and thus ord(*ϕ*) ≥ *m* as claimed. Calculation of Purification Orders ================================== In this appendix we find the leading order purification effects in the ancillary bombardment scenario discussed in section [AB], for several different interaction Hamiltonians. History Reduction ----------------- In the general case, the system and ancilla interact via the joint Hamiltonian, =+ +. To find the leading order purification effects we compute L*δ**t*[*I*] order by order until we find the first non-zero contribution. However due to the recursive structure of the coefficient maps in we can simply look for the smallest *m* such that *ϕ**m* that moves the identity, *ϕ**m*[*I*] ≠ *I*. These maps are given by the partial trace of *m* nested commutations with *Ĥ* applied to *I* ⊗ *ρ*A. For instance, 4[I] =()4 ( [,[,[,[,I]]]]). By the linearity of the commutator, these computations involve all possible ways of picking one of the three terms from for each of the *m* commutators. In a sum over histories sense, *ϕ**m* involves all possible ways of the system and ancilla meeting *m* times, each time selecting one of *Ĥ*SA, $\hat{H}\_\text{S}\otimes\openone$, or $\openone\otimes \hat{H}\_\text{A}$ to evolve under. In human terms, each day they may either interact with the wider world or stay home and reflect on their lives. In order to simplify the following computations we first work out some immediate reductions that happen when choosing either of the free Hamiltonians for either the inner most or outer most commutator. First we see that picking the ancilla’s free Hamiltonian for the outermost commutator causes a history’s contribution to vanish. This follows directly from the cyclic property of partial trace, namely ( ()) =( ()). for any *Ẑ*SA such that ( [,])=0. Thus choosing *Ĥ*A for the outer most commutator yields $$\begin{aligned} \text{Tr}\_\text{A}\Big( [\openone\otimes\hat{H}\_\text{A},[\hat{H},[\dots,[\hat{H},I\otimes \rho\_\text{A}]]]]\Big)=0.\end{aligned}$$ Note, if the ancillae are infinite dimensional then the above calculation requires that all relevant ancilla observables are trace class. Additionally, if one selects the system free Hamiltonian for the inner most commutator one finds, $$\begin{aligned} \text{Tr}\_\text{A}\Big( [\hat{H},[\dots,[\hat{H},[\hat{H}\_\text{S}\otimes\openone,I\otimes \rho\_\text{A}]]]]\Big)=0\end{aligned}$$ since $\hat{H}\_\text{S}\otimes\openone$ and *I* ⊗ *ρ*A act on disjoint sectors of the Hilbert space. On the other hand, if one selects the ancilla free Hamiltonian for the innermost commutator the result is expressible in terms of *ϕ**m* − 1[*I*]. Specifically one finds, $$\begin{aligned} &\text{Tr}\_\text{A}\Big( [\hat{H},[\dots,[\hat{H},[\openone\otimes\hat{H}\_\text{A},I\otimes \rho\_\text{A}]]]]\Big)\\ &\nonumber =\text{Tr}\_\text{A}\Big( [\hat{H},[\dots,[\hat{H},I\otimes[\hat{H}\_\text{A},\rho\_\text{A}]]]]\Big)\\ &\nonumber \sim\phi\_{m-1}[I] \ \ \text{but with }\rho\_\text{A}\to[\hat{H}\_\text{A},\rho\_\text{A}].\end{aligned}$$ In particular, if *ϕ**m* − 1[*I*] = 0 for any initial ancilla state, then picking *H*A for the inner most commutator does not add anything to the final result. Finally, if one chooses *Ĥ*S for the outer most commutator we also find that the result is expressible in terms of *ϕ**m* − 1[*I*]. To show this we first realize that, when acting on a tensor product, the actions ‘to commute with $\hat{H}\_\text{S}(\otimes\openone)$’ and ‘to take the partial trace over A’ commute. Concretely, ( [,]) =. for any *Ẑ*SA. By linearity of the commutator and of partial trace we need only consider the case when *Ẑ*SA is a tensor product. In this case we find $$\begin{aligned} \text{Tr}\_\text{A}\big( [\hat{H}\_\text{S}\otimes\openone,\hat{X}\_\text{S}\otimes\hat{Y}\_\text{A}]\big) &=\text{Tr}\_\text{A}\Big(\big[\hat{H}\_\text{S},\hat{X}\_\text{S}\big]\otimes\hat{Y}\_\text{A}\Big)\\ &\nonumber =\big[\hat{H}\_\text{S},\hat{X}\_\text{S}\big] \, \text{Tr}\_\text{A}\big(\hat{Y}\_\text{A}\big)\\ &\nonumber =\Big[\hat{H}\_\text{S},\hat{X}\_\text{S} \, \text{Tr}\_\text{A}\big(\hat{Y}\_\text{A}\big)\Big]\\ &\nonumber =\Big[\hat{H}\_\text{S},\text{Tr}\_\text{A}\big(\hat{X}\_\text{S}\otimes\hat{Y}\_\text{A}\big)\Big].\end{aligned}$$ Thus choosing *Ĥ*S for the outermost commutator results in an expression of the form $$\begin{aligned} &\text{Tr}\_\text{A}\Big( [\hat{H}\_\text{S}\otimes\openone,[\hat{H},[\dots,[\hat{H},I\otimes \rho\_\text{A}]]]]\Big)\\ &\nonumber =\Big[\hat{H}\_\text{S}, \text{Tr}\_\text{A}\big([\hat{H},[\dots,[\hat{H},I\otimes \rho\_\text{A}]]]\big)\Big]\\ &\nonumber \sim\big[\hat{H}\_\text{S}, \phi\_{m-1}[I]\big].\end{aligned}$$ In particular, if *ϕ**m* − 1[*I*] = 0, then picking *H*S for the outermost commutator does not add anything to the final result. Taking these four cases into account we have the result that if *ϕ**m* − 1[*I*] = 0 for every *ρ*A then the innermost and outermost commutators are forced to be *Ĥ*SA. Tensor Product Interaction -------------------------- In this subsection we show that in the ancillary bombardment scenario discussed in section [AB], if the system and ancilla interact via a tensor product of scalar observables as, =+ +, then the leading order purification effects are given by 3[I] = [,[,]] ([,[,]]). Proceeding order by order we first use, and to compute, $$\begin{aligned} \phi\_1[I] &=\frac{-{\mathrm{i}}}{\hbar} \, \text{Tr}\_\text{A}\Big( [\hat{H},I\otimes \rho\_\text{A}]\Big)\\ &\nonumber =0\end{aligned}$$ Next using we have 2[I]=()2 ( [,[,I]]). Recalling the result derived earlier in this appendix, we know that, since *ϕ*1[*I*] = 0 for every *ρ*A, we must select the interaction Hamiltonian in both the inner most and outer most commutators. Thus, 2[I] =()2 ( [,[,I]]). Computing this yields zero. Pressing on, from we have 3[I] =()3 ( [,[,[,I]]]). Again, since *ϕ*2[*I*] = 0 for every *ρ*A, all histories without an interaction at the start and end vanish. Thus 3[I] =()3 ( [,[,[,I]]]). The *Ĥ* in this expression yields three terms, all of which vanish. Finally, from we have 4[I] =()4 ( [,[,[,[,I]]]]). Once again, since *ϕ*3[*I*] = 0 for every *ρ*A we have, 4[I] =()4 ( [,[,[,[,I]]]]). The two *Ĥ* in this expression yield nine terms to check. All of them vanish except for the two terms with the free Hamiltonians in the middle. Thus, $$\begin{aligned} &\phi\_4[I]\\ &\nonumber =\frac{1}{4! \, \hbar^4} \text{Tr}\_\text{A}\Big( [\hat{Q}\_\text{S}\otimes\hat{R}\_\text{A}, [\hat{H}\_\text{S}\otimes\openone, [\openone\otimes\hat{H}\_\text{A}, [\hat{Q}\_\text{S}\otimes\hat{R}\_\text{A}, I\otimes\rho\_\text{A}]]]] \Big)\\ &\nonumber +\frac{1}{4!\, \hbar^4}\text{Tr}\_\text{A}\Big( [\hat{Q}\_\text{S}\otimes\hat{R}\_\text{A}, [\openone\otimes\hat{H}\_\text{A}, [\hat{H}\_\text{S}\otimes\openone, [\hat{Q}\_\text{S}\otimes\hat{R}\_\text{A}, I\otimes\rho\_\text{A}]]]] \Big)\\ &\nonumber =\frac{1}{12\hbar^4} [\hat{Q}\_\text{S},[\hat{H}\_\text{S},\hat{Q}\_\text{S}]] \, \text{Tr}\_\text{A}\Big([\hat{R}\_\text{A},[\hat{H}\_\text{A},\hat{R}\_\text{A}]]\rho\_\text{A}\Big).\end{aligned}$$ Heuristically, in a sum over histories sense, this process involves the system and ancillae interacting with each other, then each evolving freely, and finally interacting again. Using we find $$\begin{aligned} \mathcal{L}\_3[I] &=\frac{1}{12\hbar^4} [\hat{Q}\_\text{S},[\hat{H}\_\text{S},\hat{Q}\_\text{S}]] \, \text{Tr}\_\text{A}\Big([\hat{R}\_\text{A},[\hat{H}\_\text{A},\hat{R}\_\text{A}]]\rho\_\text{A}\Big) \end{aligned}$$ as claimed. Non-tensor product interactions ------------------------------- In this subsection we show that in the ancillary bombardment scenario discussed in section [AB], if the system and ancilla interact via a sum of tensor products as, =+ + +, then the leading order purification effects are given by 1[I] = [,]   ([,]).. Proceeding order by order we first use, and to compute, $$\begin{aligned} &\phi\_1[I] =\frac{-{\mathrm{i}}}{\hbar} \, \text{Tr}\_\text{A}\Big( [\hat{H},I\otimes \rho\_\text{A}]\Big)\\ &\nonumber =0\end{aligned}$$ Next, from we have 2[I]=()2 ( [,[,I]]). Recalling the result derived earlier in this appendix, we know that since *ϕ*1[*I*] = 0 for every *ρ*A, we must select the interaction Hamiltonian in both the inner most and outer most commutators. Thus, $$\begin{aligned} &\phi\_2[I] =\frac{1}{2!}\Big(\frac{-{\mathrm{i}}}{\hbar}\Big)^2\\ \nonumber &\text{Tr}\_\text{A}\Big( [\hat{Q}\_\text{S}\otimes\hat{R}\_\text{A} +\hat{S}\_\text{S}\otimes \hat{T}\_\text{A} ,[\hat{Q}\_\text{S}\otimes\hat{R}\_\text{A} +\hat{S}\_\text{S}\otimes \hat{T}\_\text{A},I\otimes \rho\_\text{A}]]\Big). \end{aligned}$$ Computing this yields $$\begin{aligned} \mathcal{L}\_1[I] \nonumber &=\frac{1}{2({\mathrm{i}}\hbar)^2}\text{Tr}\_\text{A}\Big( [\hat{Q}\_\text{S}\otimes\hat{R}\_\text{A}, [\hat{S}\_\text{S}\otimes \hat{T}\_\text{A}, I\otimes\rho\_\text{A}]] \Big)\\ &\nonumber +\frac{1}{2({\mathrm{i}}\hbar)^2}\text{Tr}\_\text{A}\Big( [\hat{S}\_\text{S}\otimes \hat{T}\_\text{A}, [\hat{Q}\_\text{S}\otimes\hat{R}\_\text{A}, I\otimes\rho\_\text{A}]] \Big)\\ &=\frac{1}{({\mathrm{i}}\hbar)^2} [\hat{Q}\_\text{S},\hat{S}\_\text{S}] \ \text{Tr}\_\text{A}\Big([\hat{R}\_\text{A},\hat{T}\_\text{A}]\rho\_\text{A}\Big)\end{aligned}$$ as claimed. Heuristically, in a sum over histories sense, this process involves the system and ancilla interacting with each other twice via different terms in the full interaction Hamiltonian. The general expression is a direct generalization of this case. EM Dipole Cancellation ====================== In this appendix, we show that the combination of any electric multipolar coupling with the magnetic dipole couplings cannot purify at leading order in dechoerence effect. We begin with the simplest combination of electric and magnetic couplings $$\begin{aligned} H\_\text{SA} &=q \, \hat{x}^j\hat{E}\_j +\frac{q}{2m}\{\hat{L}^k,\hat{B}\_k\}\\ &=\int {\mathrm{d}}\bm{x} \ \hat{d}^j(\bm{x})\otimes\hat{E}\_j(\bm{x}) +\hat{\mu}^k(\bm{x})\otimes\hat{B}\_k(\bm{x}).\end{aligned}$$ The cross terms within each of the electric and magnetic couplings will vanish and only the cross terms between them will survive. Computing the effect of L1 on the maximally mixed states yields 1[I] =()2 d [j(),k()] This integrand is non-zero but, as as we will show the integral vanishes. Recall that the electric and magnetic fields have the commutator [i(),j()] = n(-’)  . where ∇*n* = ∂/∂*x**n* acts on the $\bm{x}$ vector. From this we can see why this interaction cannot purify at leading order in rapid repeated interactions. Integrating by parts to move the ∇*n* from the delta function onto $\bm{\hat{d}}(\bm{x})$ has the effect of transforming $\bm{\hat{d}}^i(\bm{x})\to\bm{\hat{\mu}}\_j(\bm{x})$ upon using $-{\mathrm{i}}\hbar\tensor{\varepsilon}{\_i\_j^ n}$. This then leads to a vanishing commutator. Using we have, $$\begin{aligned} \mathcal{L}\_1[I] =&\nonumber \frac{1}{\hbar^2}\!\! \int\!\!\!{\mathrm{d}}\bm{x}\!\!\int\!\!\!{\mathrm{d}}\bm{x'} \ \Big\langle \frac{-{\mathrm{i}}\hbar}{\epsilon\_0} \tensor{\varepsilon}{\_i\_j^ n}\nabla\_n\delta(\bm{x}-\bm{x'})\boldsymbol{\hat{1}}\Big\rangle [\hat{d}^i(\bm{x}),\hat{\mu}^j(\bm{x'})]\\ =&\nonumber \frac{-1}{\hbar^2}\!\! \int\!\!\!{\mathrm{d}}\bm{x}\!\!\int\!\!\!{\mathrm{d}}\bm{x'} \ \Big\langle \frac{-{\mathrm{i}}\hbar}{\epsilon\_0} \tensor{\varepsilon}{\_i\_j^ n}\delta(\bm{x}-\bm{x'})\boldsymbol{\hat{1}}\Big\rangle [\nabla\_n\hat{d}^i(\bm{x}),\hat{\mu}^j(\bm{x'})]\\ =&\nonumber \frac{-1}{\hbar^2\epsilon\_0}\!\! \int\!\!\!{\mathrm{d}}\bm{x}\!\!\int\!\!\!{\mathrm{d}}\bm{x}' \ \delta(\bm{x}-\bm{x'}) \big\langle \boldsymbol{\hat{1}}\big\rangle [-{\mathrm{i}}\hbar \, \tensor{\varepsilon}{\_i\_j^ n}\nabla\_n\hat{d}^i(\bm{x}),\hat{\mu}^j(\bm{x'})]\\ =&\label{EdipoleBdipoleInt} \frac{-1}{\hbar^2\epsilon\_0}\!\! \int\!\!\!{\mathrm{d}}\bm{x} \, [-{\mathrm{i}}\hbar \, \tensor{\varepsilon}{\_i\_j^ n}\nabla\_n\hat{d}^i(\bm{x}),\hat{\mu}^j(\bm{x})].\end{aligned}$$ and (as we shall demonstrate), since - ni() =-2j() the commutator thus vanishes. This is not unexpected since $\bm{\hat{d}}\sim\bm{\hat{x}}$ and $\bm{\hat{\mu}}\sim\bm{\hat{L}}$. In order to show we must first note that the Levi-Cevita symbol, $\tensor{\varepsilon}{\_i\_j^ n}$, forces *i* ≠ *n* such that *x**i* and ∇*n* commute. Secondly we must recognize that $$\begin{aligned} -{\mathrm{i}}\hbar\nabla\_n\big(\ket{\bm{x}}\!\bra{\bm{x}}\big) =\big\{\hat{p}\_n,\ket{\bm{x}}\!\bra{\bm{x}}\big\}\end{aligned}$$ which can be seen by computing $$\begin{aligned} &\bra{\psi}-{\mathrm{i}}\hbar\nabla\_n\big(\ket{\bm{x}}\!\bra{\bm{x}}\big)\ket{\phi}\\ &\nonumber =-{\mathrm{i}}\hbar\nabla\_n\big(\braket{\psi|\bm{x}}\braket{\bm{x}|\phi}\big)\\ &\nonumber =-{\mathrm{i}}\hbar\nabla\_n\big(\psi^\*(\bm{x})\phi(\bm{x})\big)\\ &\nonumber =\big(-{\mathrm{i}}\hbar\nabla\_n\psi^\*(\bm{x})\big)\phi(\bm{x}) +\psi^\*(\bm{x})\big(-{\mathrm{i}}\hbar\nabla\_n\phi(\bm{x})\big)\\ &\nonumber =\bra{\psi}\hat{p}\_n\ket{\bm{x}}\braket{\bm{x}|\phi} +\braket{\psi|\bm{x}}\bra{\bm{x}}\hat{p}\_n\ket{\psi}\\ &\nonumber =\bra{\psi}\big\{\hat{p}\_n,\ket{\bm{x}}\!\bra{\bm{x}}\big\}\ket{\phi}.\end{aligned}$$ Using these two results we can straightforwardly compute $$\begin{aligned} -{\mathrm{i}}\hbar \, \tensor{\varepsilon}{\_i\_j^ n}\nabla\_n\hat{d}^i(\bm{x}) &=-{\mathrm{i}}\hbar \, \tensor{\varepsilon}{\_i\_j^ n}\nabla\_n\big(q \, x^i\ket{\bm{x}}\!\bra{\bm{x}}\big)\\ &\nonumber =-{\mathrm{i}}\hbar \, \tensor{\varepsilon}{\_i\_j^ n} q \,\hat{x}^i\nabla\_n\big(\ket{\bm{x}}\!\bra{\bm{x}}\big)\\ &\nonumber =q \,\tensor{\varepsilon}{\_i\_j^ n} \hat{x}^i\{\hat{p}\_n,\ket{\bm{x}}\!\bra{\bm{x}}\}\\ &\nonumber =q \,\{\tensor{\varepsilon}{\_i\_j^ n}\hat{x}^i\hat{p}\_n,\ket{\bm{x}}\!\bra{\bm{x}}\}\\ &\nonumber =-q \,\{\hat{L}\_j,\ket{\bm{x}}\!\bra{\bm{x}}\}\\ &\nonumber =-2 \, m \, \hat{\mu}\_j(\bm{x})\end{aligned}$$ Thus the commutator in vanishes. In fact, taking a combination of both the electric quadrupole and magnetic couplings as, $$\begin{aligned} H\_\text{SA} &=q \, \hat{x}^i\hat{x}^j\nabla\_i\hat{E}\_j +\frac{q}{2m}\{\hat{L}^k,\hat{B}\_k\}\\ &=\int {\mathrm{d}}\bm{x} \ \hat{Q}^i{}^j(\bm{x})\otimes\nabla\_i\hat{E}\_j(\bm{x}) +\hat{\mu}^k(\bm{x})\otimes\hat{B}\_k(\bm{x}).\end{aligned}$$ we find a similar cancellation such that will yield no purification at leading order. Any higher order electric couplings will exibit the same cancellation as will the combination of several electric multipolar moments along with the magnetic dipole moment. Thus if there are any light-atom interactions capable of purifying at leading order they must involve quadrupolar or higher magnetic couplings. Purification in Rapid Repeated Interaction Systems ================================================== We investigate the open dynamics of a quantum system when it is rapidly repeatedly updated by a quantum channel. Specifically, we analyze when this dynamics can purify the system. We develop a necessary and sufficient condition for such purification effects to occur and characterize their strength. We thoroughly analyze the specific scenario of a quantum system undergoing rapid unitary interactions with a sequence of ancillary quantum systems. We find that while the purification effects are generally present, in order for these effects to be strong compared to the decoherence effects the interaction Hamiltonian must have a minimum degree of complexity. Specifically, a tensor product interaction *Q̂*S ⊗ *R̂*A, as well as many common light-matter interactions cannot purify efficiently. Introduction ============ The study of the interaction of quantum systems with an unknown environment is relevant to a host of different disciplines ranging from applied physics and engineering to the foundations of quantum theory. For instance, phenomena such as environment-induced decoherence and dephasing hinder our ability to control quantum systems and the flow of quantum information. Thus it is of capital importance to understand these effects in our efforts to build a quantum computer. Furthermore, such studies are also crucially relevant in our understanding of fundamental topics in quantum theory such as the *measurement problem*, and more generally, in the context of quantum thermodynamics. When one thinks of the interaction between a quantum system and its environment, the words dephasing (loss of purity) and decoherence come to mind: even if the system and the environment together evolve unitarily, the system’s effective dynamics will experience non-unitary evolution. However, not all non-unitary effects decrease purity, so it is thinkable that interaction with an environment can, in principle, also decrease the entropy of the system. Open dynamics can indeed be useful in many different ways. For example, a system could be driven by open dynamics to a fixed point which has some useful property, such as enabling entanglement farming. In this paper we consider the open dynamics that emerges out of the rapid repeated application of a—perhaps stochastic— completely positive trace preserving (CPTP) map. Within these setups, we will especially focus on the particular CPTP maps generated by the sequential interaction of a system with an ensemble of ancillae. This setup can be thought of as modeling the environment as a sequence of (maybe unknown) constituents which repeatedly couple to the system in rapid succession. In the literature, these scenarios are referred to as *Repeated Interaction Systems* or *Collision Models*. These models have been successfully applied to varied phenomena such as, for instance, the study of quantum coherence, quantum thermodynamics, the measurement problem (through its close relationship with the quantum Zeno effect, ) and even decoherence in gravitation and cosmology. Here, we will investigate the particularly interesting possibility that rapid repeated interactions can cause the system’s purity to increase rather than introduce decoherence. Specifically, we will show that that many common types of simple repeated interactions cannot efficiently purify in the rapid interaction regime. We will demonstrate that the interaction between a system and the constituents of its environment needs to have a minimum degree of complexity in order to cause significant purification. We will identify the necessary and sufficient conditions for a rapid repeated interaction scenario to have significant purification effects on the system. We will also provide particular examples of interactions that can and cannot increase a system’s purity under rapid repeated interactions. We will pay special attention to more experimentally relevant setups such as spin J-coupling, the coupling of a qubit to an environment of harmonic oscillators, and we will report particularly surprising results concerning the interaction of the degrees of freedom of electrons in atomic orbitals with relativistic quantum fields, such as the electromagentic field. This paper is structured as follows: Section [RRIF] reviews the rapid repeated interaction formalism developed in. Section [ECfP] studies when and how strongly rapid repeated interactions can purify. Section [AB] addresses the specific scenario of *ancillary bombardment*. And finally, Section [Examples] presents examples of classes of interactions which can or cannot purify, including the light-matter interaction. Rapid repeated interactions formalism ===================================== In this section we review the results in, paying close attention to more subtle aspects of their formulation in terms of open dynamics of rapid repeated interactions. Generally, a rapid repeated interaction scenario consists of a quantum system being frequently updated by a quantum channel. Before we present the formalism for updating by a general quantum channel, it may be helpful to have a more concrete setup in mind. Specifically, a very natural way of thinking about this kind of setup is to consider a quantum system being bombarded by a sequence of ancillary quantum systems, undergoing a brief unitary interaction with each of them. This scenario, which we term *ancillary bombardment*, generates non-trivial open dynamics in the system, as discussed broadly in. In section [AB] of this paper we analyze this scenario’s ability to cause the system state to increase its purity. As an example, ancillary bombardment could be used to model a system interacting with its environment by assuming that it repeatedly interacts unitarily with individual constituents of the environment. Another example of such a scenario is a laboratory system that is repeatedly bombarded by probes (See for examples). With this concrete scenario in mind we can proceed with more formal analysis. A rapid repeated interaction scenario considers a quantum system, labeled S, which evolves (in time steps of duration *δ**t*) by the repeated application of a quantum channel, *ϕ*(*δ**t*). At each time *T* = *n* *δ**t*, the discrete-time evolution of the system’s density matrix is given by (n t) (t)n[(0)], for integer *n*. We make the natural assumption that the strength of each individual interaction is finite, so that in the continuous interaction limit, that is as *δ**t* → 0, we have that $\phi(\delta t)\to\openone$ (nothing happens in no time). This is in contrast to approaches where the strength of the interaction is taken to infinity as *δ**t* → 0 (for an in-depth comparison with previous work see ). Note that since $\phi(\delta t)\to\openone$ as *δ**t* → 0, for small enough *δ**t*, we know *ϕ*(*δ**t*) is invertible. Additionally, we assume that *ϕ*(*δ**t*) is differentiable at *δ**t* = 0, with derivative *ϕ*ʹ(0) (things happen at a finite rate). Given such a discrete update map, *ϕ*(*δ**t*), we can construct a continuous-time interpolation scheme for the dynamics given by. Specifically, we find a unique interpolation scheme by making the following three assumptions for the continuous-time evolution: 1. The evolution is Markovian, such that, (t) (t t)[(0)], or equivalently, (t)=t[(t)], where L*δ**t* (the effective time-independent Liouvillian) is some superoperator which generates time translations for the system. 2. The evolution exactly matches the discrete dynamics at the end of every time step. Using this means, (n t t) =(t)n or equivalently, (t t) =(t). 3. The evolution’s effective Liouvillian, L*δ**t*, is well defined in the continuous interaction limit, that is as *δ**t* → 0. These three conditions uniquely specify the interpolation scheme that is generated by t ((t)), where we have taken the logarithm’s principal branch cut, that is the one with $\text{log}(\openone)=0$. Note that our assumption that $\phi(\delta t)\to\openone$ as *δ**t* → 0 guarantees that *ϕ*(*δ**t*) will be nonsingular in the short time regime, and hence will have a well defined logarithm. The first condition guarantees that the interpolation scheme is generated by some effective time-independent Liouvillian, and the second condition forces this Liouvillian to have the form. The third condition resolves the ambiguity of the logarithm’s branch cut by forcing $\text{log}(\openone)=0$, which is necessary to make L*δ**t* well defined as *δ**t* → 0. Moreover, this branch cut allows us to calculate L*δ**t* as *δ**t* → 0 (using L’Hôpital’s rule) to be $$\begin{aligned} \mathcal{L}\_0 \coloneqq \lim\_{\delta t\to0}\mathcal{L}\_{\delta t} &=\frac{{\mathrm{d}}}{{\mathrm{d}}\, \delta t} \bigg\vert\_{\delta t=0} \text{log}(\phi(\delta t))\\ \nonumber &=\phi^{-1}(0) \, \phi'(0)\\ \nonumber &=\phi'(0).\end{aligned}$$ Thus, in the continuum limit, evolution is generated by the derivative of the update map. This result was first explicated in. Taking all this into account, we can faithfully describe the discrete-time evolution,, of a quantum system using the continuous-time interpolation scheme, generated by. If in addition to the minimal regularity assumed above (that is, $\phi(\delta t)\to\openone$ as *δ**t* → 0 and *ϕ*ʹ(0) exists), we also have that *ϕ*(*δ**t*) is analytic at *δ**t* = 0, we can then form the series expansion (t) =+t 1 +t2 2 +t3 3 +…and from this t =0 +t 1 +t2 2 +t3 3 +…. As shown in, the first few superoperator coefficients are given recursively by $$\begin{aligned} \label{L0def} \mathcal{L}\_0 \coloneqq\phi\_1&,\\ \label{L1def} \mathcal{L}\_1 \coloneqq\phi\_2 &-\frac{1}{2}\mathcal{L}\_0{}^2,\\ \label{L2def} \mathcal{L}\_2 \coloneqq\phi\_{3} &-\frac{1}{2}(\mathcal{L}\_0\mathcal{L}\_1+\mathcal{L}\_1\mathcal{L}\_0) -\frac{1}{6}\mathcal{L}\_0{}^3\\ \label{L3def} \mathcal{L}\_3 \coloneqq\phi\_{4} &-\frac{1}{2}(\mathcal{L}\_0\mathcal{L}\_2+\mathcal{L}\_2\mathcal{L}\_0)\\ &\nonumber -\frac{1}{6}(\mathcal{L}\_0{}^2\mathcal{L}\_1+\mathcal{L}\_0\mathcal{L}\_1\mathcal{L}\_0+\mathcal{L}\_1\mathcal{L}\_0{}^2) -\frac{1}{24}\mathcal{L}\_0{}^4.\end{aligned}$$ with the higher order terms following a similar pattern. From the series, the master equation for the interpolation scheme becomes, (t) =0[(t)] +t 1[(t)] +t22[(t)] +…. Given such an update map *ϕ*(*δ**t*) we can compute these coefficient maps and analyze their effects in the system dynamics. For instance, in the case of a ancillary bombardment defined above, L0 generates unitary dynamics. Thus within this model any decoherence effects require finite interaction times. In it was shown that decoherence effects generically appear in L1, that is at first order in *δ**t*. In the following sections we will analyze under what conditions rapid repeated interactions can increase the purity of a system, rather than just introducing decoherence. Purification Conditions ======================= In this section, we find a necessary and sufficient condition for when the discrete dynamics given by can cause purification of a finite dimensional system. By this we mean that there exists some system state, *ρ*S, whose purity, P(*ρ*S) = Tr(*ρ*S2), increases under these dynamics. In section [RRIF] we converted the discrete-time dynamics into the continuous-time Markovian dynamics, generated by the effective Liouvillian. We will now discuss the exact conditions for such an interpolation scheme to cause purification, show that this interpolation scheme purifies if and only if the discrete dynamics does too, and finally characterize the strength of such purification effects. Markovian Purification ---------------------- For finite *d*-dimensional systems the dynamics generated by a Liouvillian, L, can cause purification if and only if the dynamics which it generates is not unital, that is, [I]0, where *I* is the identity matrix. Recalling that the maximally mixed state is given by, we can restate this as: Markovian dynamics can purify if and only if it *moves* the maximally mixed state. Throughout this paper we will refer to *I* and the maximally mixed state synonymously. The condition is clearly sufficient for the dynamics to cause purification since if the maximally mixed state is moved by the dynamics, its purity must increase. This follows from the maximally mixed state being the unique minimum purity state. Note, however, that this is not true for infinite dimensional systems. The question of purification of infinite dimensional systems under Markovian dynamics has been analyzed in depth, with the result that L not being unital is still necessary for causing purification, but is no longer sufficient. The necessity of to cause purification follows from the claim () = (2) ([I] 2) whose proof we reproduce with our notation in Appendix [Necessity]. Interpolation Faithfully Captures Purification Effects ------------------------------------------------------ In the following, we prove that, in the rapid interaction regime, the interpolation scheme faithfully captures the presence of purification effects in the discrete dynamics. First we argue that any purification effects in the discrete dynamics is captured by the interpolation scheme. Suppose that applying the update map, *ϕ*(*δ**t*), increases the purity of some state. By construction, applying the interpolation scheme for a duration *δ**t* to this state has to yield the same result. Because the interpolation scheme is smooth, at some point in this duration it must have increased some state’s purity. Next, we consider the possibility that the interpolation scheme could indicate that there is purification when none is present in the discrete dynamics. Suppose that the interpolation scheme instantaneously purifies some state. Then by the purification condition discussed in section [MarkPure], we must have L*δ**t*[*I*] ≠ 0. From the matching condition, this implies that $$\begin{aligned} \phi(\delta t)[I] &=\exp(\delta t \, \mathcal{L}\_{\delta t})[I]\\ &\nonumber =I+\delta t \, \mathcal{L}\_{\delta t}[I] +(\delta t^2/2) \, \mathcal{L}\_{\delta t}[\mathcal{L}\_{\delta t}[I]] +\dots \,.\end{aligned}$$ and we therefore conclude *ϕ*[*I*] ≠ *I*. For interaction times small enough, we can neglect the O(*δ**t*2) terms. Thus the discrete dynamics do in fact purify, the discrete update map moving (and hence purifying) the maximally mixed state. Note that this argument relies on the maximally mixed state being the unique minimum purity state and so does not work in infinite dimensions. From these arguments, we conclude that in the rapid repeated interaction regime, the discrete dynamics generated by *ϕ*(*δ**t*) can purify if and only if the continuous dynamics generated by L*δ**t* can. It then follows from that repeated applications of *ϕ*(*δ**t*) can purify if and only (or in other words *ϕ*(*δ**t*) is not unital). Purification Strength --------------------- Now that we have identified a necessary and sufficient condition for the dynamics generated by repeated applications of *ϕ*(*δ**t*) to purify, we will quantify the strength of this purification. Assuming that the quantum channel, *ϕ*(*δ**t*) is analytic at *δ**t* = 0, we can make use of the series expansion to quantify this strength by noting at what order in *δ**t* the maximally mixed state is moved by the effective Liouvillian, L*δ**t*. We say that the dynamics purifies at order *m* if, t[I] =(tm) where $\text{ord}(\phi)\coloneqq m$ is the *purification order* of the dynamics. The smaller ord(*ϕ*) is the stronger the purification effects. Recall that in infinite dimensions, dynamics being non-unital is not sufficient for purification effects. Therefore this measure of purification strength only makes sense for finite dimensional systems. This notion of purification strength can be translated to the discrete updater *ϕ*(*δ**t*) by using the recursive structure of the coefficient maps in. Concretely, t[I] =(tm) (t)[I] =I+(tm+1), such that the orders of the non-unital effects are offset by one between the discrete update map and the interpolation scheme. In order to make use of purification effects experimentally—for example in an algorithmic cooling setup,— we would like to manufacture interactions which can purify at the lowest possible order. One may wonder if this is possible by combining different interaction maps to engineer a new map with a lower purification order. However, two simple ways of combining maps together, namely concatenation of different maps or applying maps out of a statistical ensemble (taking convex combinations), cannot lower the resultant purification order below those of the original maps. Specifically, if we take *ϕ*(*δ**t*) to be a concatenation of a finite number of maps as, (t)= (1)(t) (2)(t) … (N)(t), then (){((n))}, such that *ϕ*’s strength is bounded by the strongest *χ*. Additionally, taking *ϕ*(*δ**t*) to be a convex combination of maps as, (t)= k pk (k)(t), with ∑*k**p**k* = 1 we find (){((k))}, such that *ϕ*’s strength is bounded by the strongest *ψ*. We prove these claims in Appendix [CaCC]. Ancillary Bombardment ===================== We now apply the characterization of purification effects developed in the previous section to a specific physically motivated class of update maps given by, (t)[] =((- t /)()) where ad*Ĥ*(*A*) = [*Ĥ*, *A*] for any operator *A*. Physically, this map describes the system, S, first engaging with an ancilla, A, which is in the state *ρ*A, then interacting for a time *δ**t* under the joint Hamiltonian =+ +, and finally decoupling from ancilla, which is discarded. This update map could be used to model a wide variety of scenarios. For example, it could model each discrete step of the dynamics of a system repeatedly interacting with the constituents of its environment, or an atom being bombarded with light/other atoms in a laboratory setting (both examples of ancillary bombardment). Note that the necessary and sufficient condition to cause purification,, which was discussed in section [ECfP], requires that S be finite dimensional. However, there is no such restriction on the ancillary systems, A, to which S couples. This update map is sufficiently well behaved in the rapid interaction limit — recall that we require $\phi(\delta t)\to\openone$ as *δ**t* → 0 and that *ϕ*ʹ(0) exists — and so we can construct the unique Markovian interpolation scheme as prescribed in section [RRIF]. Moreover, since the update map is analytic around *δ**t* = 0, we can expand it in powers of *δ**t* as in : $$\begin{aligned} \label{TimelessPhi1} &\phi\_1[\rho\_\text{S}] =\frac{-{\mathrm{i}}}{\hbar} \, \text{Tr}\_\text{A}\Big( [\hat{H},\rho\_\text{S}\otimes \rho\_\text{A}]\Big)\\ \label{TimelessPhi2} &\phi\_2[\rho\_\text{S}] =\frac{1}{2!}\Big(\frac{-{\mathrm{i}}}{\hbar}\Big)^2 \text{Tr}\_\text{A}\Big( [\hat{H},[\hat{H},\rho\_\text{S}\otimes \rho\_\text{A}]]\Big)\\ \label{TimelessPhi3} &\phi\_3[\rho\_\text{S}] =\frac{1}{3!}\Big(\frac{-{\mathrm{i}}}{\hbar}\Big)^3 \text{Tr}\_\text{A}\Big( [\hat{H},[\hat{H},[\hat{H},\rho\_\text{S}\otimes \rho\_\text{A}]]]\Big)\\ \label{TimelessPhi4} &\phi\_4[\rho\_\text{S}] =\frac{1}{4!}\Big(\frac{-{\mathrm{i}}}{\hbar}\Big)^4 \text{Tr}\_\text{A}\Big( [\hat{H},[\hat{H},[\hat{H},[\hat{H},\rho\_\text{S}\otimes \rho\_\text{A}]]]]\Big)\end{aligned}$$ and so on. We can thus expand the effective Liouvillian as in. In, a general family of update maps including were analyzed at zeroth and first order using the rapid repeated interaction formalism discussed in section [RRIF]. The full generality of the interactions considered in includes allowing time dependence in the interaction Hamiltonian as well as taking an arbitrary convex combination of multiple interaction types, with different types of ancilla and with different couplings. Remarkably, in, it was found that L0 generates unitary evolution. For the dynamics generated by we have, $$\begin{aligned} \label{TimelessL0explicit} \mathcal{L}\_0[\rho\_\text{S}] &=\frac{-{\mathrm{i}}}{\hbar} \, [\hat{H}\_\text{eff},\rho\_\text{S}]\end{aligned}$$ where the effective Hamiltonian *Ĥ*eff is given by +(0), that is, the system’s free Hamiltonian plus a new term, *Ĥ*(0), which comes from the repeated interactions. This new contribution to the dynamics is given by, (0) (). Note that since the leading order dynamics is unitary, it cannot affect the purity of the system. Thus any decoherence effects must arise at subleading order in the dynamics. The leading possible order for purification effects is thus first order. The first order dynamics, L1, was analyzed in full detail in, and was generally seen to give rise to dephasing effects. For the dynamics generated by, L1 is given by $$\begin{aligned} \label{L1DefAB} \mathcal{L}\_1[\rho\_\text{S}] &=\frac{-{\mathrm{i}}}{\hbar} \big[\hat{H}^{(1)},\rho\_\text{S}\big] -\frac{1}{2}\Big(\frac{-{\mathrm{i}}}{\hbar}\Big)^2 \, \big[\hat{H}^{(0)},[\hat{H}^{(0)},\rho\_\text{S}]\big]\\ &\nonumber +\frac{1}{2}\Big(\frac{-{\mathrm{i}}}{\hbar}\Big)^2 \, \text{Tr}\_\text{A}\Big( \big[\hat{H}\_\text{SA}, [\hat{H}\_\text{SA}, \rho\_\text{S}\otimes\rho\_\text{A}]\big]\Big)\end{aligned}$$ where, H(1)=( [,]). The first order dynamics, L1[*ρ*S], consists of two different contributions. One is a new unitary contribution to the dynamics, *Ĥ*(1), which (after examination of and ) can be understood as correction to *Ĥ*(0) accounting for the ancilla evolving under its free Hamiltonian during the interaction. Secondly, there are two other terms that are not unitary and will, in general, affect the purity of the system. Since generically introduces dephasing effects at order *δ**t*, for any purification effects to have a comparable impact on the dynamics they must also appear at first order. That is $$\begin{aligned} \label{L1oI} \mathcal{L}\_1[I] =\frac{1}{2}\Big(\frac{-{\mathrm{i}}}{\hbar}\Big)^2 \, \text{Tr}\_\text{A}\Big( \big[\hat{H}\_\text{SA}, [\hat{H}\_\text{SA}, I\otimes\rho\_\text{A}]\big]\Big)\neq0 \end{aligned}$$ or in other words L1 must already be able to move the maximally mixed state. Note that this just depends on the interaction Hamiltonian and the state of the ancilla, and not on either of the free Hamiltonians. In the following subsections we investigate the algebraic conditions that an interaction Hamiltonian needs in order to purify at leading possible order, that is to satisfy. Tensor Product Interaction -------------------------- We begin by analyzing the simplest model for an interaction Hamiltonian, namely the tensor product of scalar observables. The joint Hamiltonian under this type of coupling is, =+ +, where *Q̂*S and *R̂*A are observables of the system and ancilla respectively. This type of interaction is a very common interaction model considered throughout the literature of rapid repeated interaction. In Appendix [TPCalc], we show that the effect of the first order dynamics on the maximally mixed state vanishes, L1[*I*] = 0. Thus rapid repeated interaction under the Hamiltonian cannot purify at leading order in decoherence effects. In fact we also show that the second order effects vanish, L2[*I*] = 0. Continuing on, we find the leading order purification effect is given by 3[I] = [,[,]] ([,[,]]). Note that if the ancillae are infinite dimensional then the above calculations require that any relevant permutations of *R̂*A, *Ĥ*A, and *ρ*A are trace class. Thus a tensor product interaction will in general only be able to purify at third order, that is two orders lower than the leading order decoherence effects. The conclusion of this analysis is that, perhaps unintuitively, a tensor product interaction of the kind *Ĥ*SA = *Q̂*S ⊗ *R̂*A will in general strictly decrease purity at leading order in decoherence effects. This analysis allows us to conclude that any rapid repeated tensor product interaction model cannot capture phenomena involving an entropy decrease in S, such as cooling. Interaction via non-product Hamiltonians ---------------------------------------- After having established that tensor product Hamiltonians cannot purify at leading order in dechoerence effects, we now investigate whether it is possible to do so through rapid repeated interaction under a Hamiltonian that is the sum of two scalar couplings, i.e., = +. In Appendix [TPCalc], we show that the effect of L1 on the maximally mixed state,, is, $$\begin{aligned} \mathcal{L}\_1[I] &=\frac{1}{({\mathrm{i}}\hbar)^2} [\hat{Q}\_\text{S},\hat{S}\_\text{S}] \ \text{Tr}\_\text{A}\Big([\hat{R}\_\text{A},\hat{T}\_\text{A}]\rho\_\text{A}\Big).\end{aligned}$$ Again, if the ancillae are infinite dimensional, then the above calculation requires that all relevant permutations of *R̂*A, *T̂*A, and *ρ*A are trace class. In contrast to the simple interaction, the interaction Hamiltonian can purify at leading order in decoherence effects. Specifically it will purify if and only if the two system observables (*Q̂*S and *Ŝ*S) do not commute, and the two ancilla observables (*R̂*A and *T̂*A) do not commute on average with respect to the initial state of the ancilla, *ρ*A. From this we can move to the most general case, by noting that any interaction Hamiltonian, *H*SA, can be decomposed as a sum of tensor products =j,j,j In Appendix [TPCalc] we show that, for the general case of, the effect of L1 on the maximally mixed state,, is 1[I] =()2 i,j[,i,,j]   ([,i,,j]) and as before, if the ancillae are infinite dimensional, then the above calculation requires that all relevant permutations of *R̂*S, *i*, *R̂*S, *j*, and *ρ*A are trace class. Thus the condition that an interaction Hamilton to be able to purify at leading possible order is, i,j ([,i,,j])   [,i,,j] 0 In order for to be non-zero, a Hamiltonian of the form must have a pair of terms whose system parts do not commute and whose ancilla parts do not commute on average. Thus rapid repeated interactions with ancillae under an arbitrary Hamiltonian will in general be able to purify at leading order in decoherence effects. In section [Examples], we will show some simple non-product interactions that can purify at leading order. Conversely, we will also show some remarkable common types of non-product coupling that, nevertheless, cannot purify at leading order due to cancellations within. Note that while the above analysis only considers the specific form of the update map given by, we can extend these results to a much wider class of update maps by making use of the results described at the end of Section [PStrength]. Time-dependent interactions --------------------------- Additionally our analysis easily extends to include cases of ancillary bombardment where the Hamiltonian is explicitly time dependent. The dissipation effects in this scenario were analyzed in. In particular they considered the Hamiltonian to be of the form t(t) = + +(t/t). The update map for such an interaction is given by (t)[] =(Ut(t)() Ut(t)) where *U**δ**t*(*t*) is the unitary transformation, Ut(t) =(0t t()) and T is the time-ordering operation. This unitary transformation is generated by a time-dependent Hamiltonian, *Ĥ**δ**t*(*t*). From, we can compute the effect of L1 on the maximally mixed state,, as 1[I] = ()2 ( ]) where G0()01 () dis the unweighted time average of the interaction Hamiltonian. Thus we can see that the ability of an interaction Hamiltonian to purify at leading order in decoherence effects only depends on its time average. Thus a time dependent interaction can purify at leading order if and only if its time average can. Examples ======== In this section, we investigate several specific interaction Hamiltonians in light of the necessary and sufficient condition to purify at leading order which we described in the previous section. Namely, that when written as a sum of tensor products,, it must satisfy. Isotropic spin coupling () -------------------------- As an example of an interaction capable of purifying at leading order, we consider the isotropic spin-spin interactions, = J = J jj where we use Einstein’s summation notation of implicitly summing over all repeated indices. From we can compute the effect of L1 on the maximally mixed state as, $$\begin{aligned} \mathcal{L}\_1[I] &\nonumber =\frac{1}{2}\Big(\frac{-{\mathrm{i}}}{\hbar}\Big)^2 (\hbar J)^2 \ \big\langle[\hat{\sigma}\_\text{A}{}\_i,\hat{\sigma}\_\text{A}{}\_j]\big\rangle \ [\hat{\sigma}\_\text{S}{}^i,\hat{\sigma}\_\text{S}{}^j]\\ &=4 \, J^2 \ \langle \bm{\hat{\sigma}}\_\text{A}\rangle \,\cdot\, \bm{\hat{\sigma}}\_\text{S} \label{sig-sig}\end{aligned}$$ where, for convenience, we have introduced the notation ⟨  ⋅  ⟩ = TrA(  ⋅  *ρ*A). In terms of Bloch vectors, eq. expresses the intuitive result that the maximally mixed state, $\bm{a}\_\text{S}=0$, is moved in the direction of the ancilla’s Bloch vector, $\bm{a}\_\text{A}=\langle \bm{\hat{\sigma}}\_\text{A}\rangle$. Thus unless the ancillae are in the maximally mixed state there will be purification effects in the dynamics. Qubit-Harmonic Oscillator coupling ---------------------------------- We find another example of an interaction Hamiltonian that can purify by considering a qubit which repeatedly interacts with sequence of harmonic oscillators via the interaction Hamiltonian = (x +y) where *x̂* = (*â* + *â*†)/2 and *p̂* = i(*â* − *â*†)/2 are quadrature operators for a harmonic oscillator. From we can compute the effect of L1 on the maximally mixed state as, $$\begin{aligned} \mathcal{L}\_1[I] &\nonumber =\Big(\frac{-{\mathrm{i}}}{\hbar}\Big)^2 (\hbar \omega)^2 \ \big\langle[\hat{x},\hat{p}]\big\rangle \ [\hat{\sigma}\_x,\hat{\sigma}\_y]\\ &=2 \, \omega^2 \ \hat{\sigma}\_z.\end{aligned}$$ Thus the maximally mixed state is initially polarized in the *z* direction under this interaction regardless of the state of the harmonic oscillator ancillae. This type of interaction can, in principle, be implemented in superconducting circuits, achieving fast switching times in the ultra strong switchable coupling regime. Vector-vector Couplings ----------------------- As discussed in section [TPI] rapidly interacting with an ancilla via a tensor product of two scalar observables, *Ĥ*SA = *Q̂*S ⊗ *R̂*A, cannot purify at leading order. A natural generalization of this coupling is to instead couple two vector observables component-wise (through their dot product) as, = jj. From, the effect of L1 on the maximally mixed state is 1[I] =()2    [i,j]. Thus, for repeated interactions under to purify, the components of $\bm{\hat{V}}$ must not commute amongst themselves, and the components of $\bm{\hat{W}}$ must not either. Many common vector observables such as $\bm{\hat{x}}$, $\bm{\hat{p}}$, $\bm{\hat{E}}(\bm{x})$, and $\bm{\hat{B}}(\bm{x})$, do not pass this test, while others such as $\bm{\hat{L}}$ and $\bm{\hat{\sigma}}$ do. Thus vector-vector couplings involving any of $\bm{\hat{x}}$, $\bm{\hat{p}}$, $\bm{\hat{E}}(\bm{x})$, or $\bm{\hat{B}}(\bm{x})$ can not purify whereas couplings involving $\bm{\hat{L}}$ or $\bm{\hat{\sigma}}$ potentially can depending on what they are coupled to. From this we can generalize further to the case of two vector fields coupled component-wise throughout all of space as, $$\begin{aligned} \label{VectorFieldCoupling} H\_\text{SA} =\int {\mathrm{d}}\bm{x} \ \bm{\hat{V}}\_\text{S}(\bm{x})\cdot\bm{\hat{W}}\_\text{A}(\bm{x}) =\int {\mathrm{d}}\bm{x} \ \hat{V}\_\text{S}{}^j(\bm{x})\otimes\hat{W}\_\text{A}{}\_j(\bm{x}).\end{aligned}$$ A necessary condition for repeated interaction this type of Hamiltonian to purify is that at least one of the following two conditions holds: 1. Neither $\bm{\hat{V}}(\bm{x})$ nor $\bm{\hat{W}}(\bm{x})$ is microcausal. (Recall that an observable $\hat{\bm{X}}$ is microcausal if only has support on $\bm{x}=\bm{x}'$). 2. Neither $\bm{\hat{V}}(\bm{x})$ nor $\bm{\hat{W}}(\bm{x})$ has its components commute locally amongst themselves, i.e.. To see that this is the case, we compute the effect of L1 on the maximally mixed state from as, $$\begin{aligned} \label{{L1oIVW}} &\mathcal{L}\_1[I]\\ &\nonumber =\Big(\frac{-{\mathrm{i}}}{\hbar}\Big)^2\!\!\!\int\!\!\! {\mathrm{d}}\bm{x}\!\!\int\!\!\! d\bm{x'} \, \big\langle [\hat{W}\_i(\bm{x}),\hat{W}\_j(\bm{x'})]\big\rangle \, [\hat{V}^i(\bm{x}),\hat{V}^j(\bm{x'})]\end{aligned}$$ If one of $\bm{V}(\bm{x})$ or $\bm{W}(\bm{x})$ is microcausal then the integral’s domain can be reduced to the $\bm{x}=\bm{x}'$ region. From there, whichever of $\bm{V}(\bm{x})$ or $\bm{W}(\bm{x})$ has its components locally commuting causes the integrand to vanish. Thus such interactions cannot purify at leading order. Light-matter Interaction ------------------------ Let us now focus on a concrete relevant model used in quantum optics: we will analyze the ability of the light-matter interaction to purify in the context of rapid repeated interactions. Let us consider an atom interacting with a second-quantized electromagnetic field. Let us take the atom as the target system, S, and the field as the ancilla, A, to which the system is repeatedly coupled. Physically, one can imagine atoms bombarded by pulses of light. We begin by showing that any single multipolar coupling of the electric field to an atom cannot purify at leading order in rapid repeated interactions. First, we consider the electric dipole interaction given by $$\begin{aligned} \hat{H}\_\text{SA} =q \, \hat{x}^j\hat{E}\_j &\nonumber =\int {\mathrm{d}}\bm{x} \ q \, \hat{x}^j \, \ket{\bm{x}}\!\bra{\bm{x}}\otimes\hat{E}\_j(\bm{x})\\ &=\int {\mathrm{d}}\bm{x} \ \hat{d}^j(\bm{x})\otimes\hat{E}\_j(\bm{x})\end{aligned}$$ where $\hat{d}^j(\bm{x})=q \, x^j \, \ket{\bm{x}}\!\bra{\bm{x}}$ is the dipole moment operator at a position $\bm{x}$. In this form, the interaction is written as the coupling of two vector fields throughout all of space. This is the scenario that was analyzed at the end of the previous section. It is enough to note that the electric field is microcausal and the components of $\bm{\hat{d}}(\bm{x})$ commute amongst themselves locally (in fact, both observables have both properties) to conclude that the electric dipole interaction cannot purify at leading order on its own. Similarly, if we consider the electric quadrupole coupling given by H =q ijij =  ij()ij() where $\hat{Q}^i{}^j(\bm{x})=q \, x^i x^j \ket{\bm{x}}\!\bra{\bm{x}}$ is the quadrupole moment operator at a position $\bm{x}$, we find that it cannot purify at leading order. This is again because $\bm{\nabla}\_i\hat{E}\_j(\bm{x})$ is microcausal and the components of $\hat{Q}^i{}^j(\bm{x})$ commute amongst themselves locally. Similarly, every higher multipolar electric coupling cannot purify on its own at leading order since higher derivatives of the electric field remain microcausal and the components of the higher moment operators always commute amongst themselves locally. A similar analysis can be carried out with the magnetic dipole interaction given by ={k,k} =d  k()k() where $\hat{\mu}^k(\bm{x})=(q/2m)(\hat{L}^k\ket{\bm{x}}\!\bra{\bm{x}}+\ket{\bm{x}}\!\bra{\bm{x}}\hat{L}^k)$ is the magnetic dipole operator at a position $\bm{x}$. The Hamitonian is again the coupling of two vector fields throughout all of space. We conclude as before that the magnetic dipole interaction cannot purify at leading order since the magnetic field is both microcausal and has its components commute amongst themselves locally. Furthermore, linear combinations of different electric multipole couplings cannot purify at leading order either. For example consider the combination of electric dipole and electric quadrupole interactions $$\begin{aligned} H\_\text{SA} &=q \, \hat{x}^k\hat{E}\_k +q \, \hat{x}^i\hat{x}^j\nabla\_i\hat{E}\_j\\ &\nonumber =\!\!\int\!\! {\mathrm{d}}\bm{x} \ \Big(\hat{d}^k(\bm{x})\otimes\hat{E}\_k(\bm{x}) +\hat{Q}^i{}^j(\bm{x})\otimes\bm{\nabla}\_i\hat{E}\_j(\bm{x})\Big).\end{aligned}$$ In computing the effect of L1 on the maximally mixed state,, the cross terms within the dipole coupling will vanish, as will the cross terms within the quadrupole coupling. Only the cross terms between the two couplings remain, yielding, $$\begin{aligned} &\mathcal{L}\_1[I]\\ &\nonumber =\Big(\frac{-{\mathrm{i}}}{\hbar}\Big)^2\!\!\!\int\!\!\! {\mathrm{d}}\bm{x}\!\!\int\!\!\! {\mathrm{d}}\bm{x'} \, \big\langle [\hat{E}\_k(\bm{x}),\nabla\_i\hat{E}\_j(\bm{x'})]\big\rangle \, [\hat{d}^k(\bm{x}),\hat{Q}^i{}^j(\bm{x'})].\end{aligned}$$ However, this too vanishes since, =0 for all $\bm{x}$ and $\bm{x}'$. This can be easily seen by noting that both $\hat{d}^k(\bm{x})$ and $\hat{Q}^i{}^j(\bm{x})$ are diagonal in the position basis. In the same fashion, any combination of any electric multipolar couplings will not be able to purify at leading order. Thus rapid repeated light-matter interactions where matter couples only to the electric field are unable to purify at leading order. Thus if we have any hope of purifying at leading order we must involve the magnetic field. A first very simple combination of electric and magnetic couplings that we can consider is the combination of the electric dipole and magnetic dipole couplings: $$\begin{aligned} H\_\text{SA} &=q \, \hat{x}^j\hat{E}\_j +\frac{q}{2m}\{\hat{L}^k,\hat{B}\_k\}\\ &\nonumber =\int {\mathrm{d}}\bm{x} \ \Big( \hat{d}^j(\bm{x})\otimes\hat{E}\_j(\bm{x}) +\hat{\mu}^k(\bm{x})\otimes\hat{B}\_k(\bm{x})\Big).\end{aligned}$$ This interaction Hamiltonian satisfies the necessary condition for purification to appear at leading order, as discussed in previous sections. Namely, $[\hat{d}^j(\bm{x}),\hat{\mu}^k(\bm{x}')]\neq0$ and $[\hat{E}\_j(\bm{x}),\hat{B}\_k(\bm{x}')]\neq0$. As above the cross terms within each of the electric and magnetic couplings will vanish and only the commutators mixing the electric and magnetic field will survive. Computing the effect of L1 on the maximally mixed states,, yields, 1[I] =()2 d [j(),k()] This integrand is non-zero but, remarkably, the integral over $\bm{x}$ and $\bm{x}'$ vanishes. The mechanism for this cancellation is particularly interesting, and is discussed in detail in Appendix [EDiMDi]. Therefore, despite the fact that the coupling satisfies the necessary condition for purification discussed in section [NPIH], rapid repeated interactions which involve both the electric and magnetic dipole couplings cannot purify at leading order. This result is easily extended to more general light-matter couplings. For example, in the case of the more physically relevant combination of both the electric quadrupole and magnetic couplings $$\begin{aligned} H\_\text{SA} &=q \, \hat{x}^i\hat{x}^j\nabla\_i\hat{E}\_j +\frac{q}{2m}\{\hat{L}^k,\hat{B}\_k\}\\ &\nonumber =\int {\mathrm{d}}\bm{x} \ \Big( \hat{Q}^i{}^j(\bm{x})\otimes\nabla\_i\hat{E}\_j(\bm{x}) +\hat{\mu}^k(\bm{x})\otimes\hat{B}\_k(\bm{x})\Big).\end{aligned}$$ we find a similar cancellation that yields no purification at leading order. Any higher order electric couplings will exhibit the same cancellation as will the combination of several electric multipolar moments along with the magnetic dipole moment. Summarizing, we have proven that the most common models of light-matter interactions employed in quantum optics —i.e., those involving any combination of electric multipolar couplings and the magnetic dipole coupling—cannot purify at leading order under rapid repeated interactions. This leads us to conjecture that the light-matter interaction in general cannot purify at leading order in decoherence effects. Conclusion ========== We analyzed the ability of rapid repeated interactions to purify a quantum system. In particular, we considered the formalism developed in, where a quantum system evolves (in discrete time steps of duration *δ**t*) under the repeated application of a quantum channel. We have studied and characterized the strength of purification effects of these rapid repeated interactions, namely, at what order in *δ**t* the dynamics can lead to purification. We have shown that, perhaps contrary to intuition, the purification strength cannot be increased by combining different rapid repeated interaction dynamics by composition or convex combination. After this general study, we have investigated in-depth the purifying power of a particularly relevant scenario that we called *ancillary bombardment*. In this scenario a quantum system is bombarded by a sequence of ancillae, undergoing a brief unitary interaction with each of them. For instance, one can think of an atom interacting with its environment, under the assumption that it repeatedly interacts unitarily with its individual constituents. Another example of such a scenario would be a laboratory system that is repeatedly measured by probes. We have shown that simple interaction Hamiltonians (including some considered in previous literature on rapid repeated interactions ) cannot purify at leading order if their interaction strength remains finite. Furthermore, we have shown that for an ancillary bombardment to purify at leading order it must be mediated by a sufficiently complicated Hamiltonian. Specifically, an interaction consisting of the tensor products of two scalar observables will not purify at leading order. We have found necessary and sufficient conditions for a ancillary bombardment to purify a quantum system. We studied what kinds of couplings satisfy them and what kind of couplings do not. For illustration, we have shown how an isotropic spin-spin coupling, as well as a specific experimentally feasible interaction of a qubit with a harmonic oscillator can purify at leading order under ancillary bombardment. Furthermore, we have paid special attention to the case of couplings of system observables to vector fields, and in particular the case of the multipole moments of an atom coupled to the fully quantized electromagnetic field (EM). For the case of interaction with relativistic quantum fields (such as the EM field) we have found necessary conditions for purification involving the microcausality of the theory. Remarkably, we have shown that any combinations of electric multipole couplings and the magnetic dipole coupling cannot purify at leading order under repeated interaction. This casts fundamental doubt on the ability of simple quantum optical setups to increase the purity of atomic qubits under fast interaction. These results may perhaps be relevant to the field of algorithmic cooling and can be used to design setups to prolong the life of quantum coherence through a controlled exposure to an environment. The particular implications of these results in quantum thermodynamics are intriguing and will be analyzed elsewhere. Acknowledgements ================ This work was supported in part by the Natural Sciences and Engineering Research Council of Canada through the NSERC Discovery programme. Necessity of Non-unitality for Purification =========================================== In this appendix we reproduce (in our notation) a proof given in. Specifically we prove that, for a finite dimensional systems, in order for the dynamics generated by a Liouvillian, L, to cause purification it is necessary that the dynamics are not unital, that is, L[*I*] ≠ 0, where and *I* is the identity matrix. In order to show this we derive the inequality, () = (2) ([I] 2). from which our claim follows directly. We first write the Liouvillian in a standard form called the Lindblad form, that is, [] =+n n (nn-{nn,}), where *Ĥ* is a Hermitian operator, *F̂**n* are operators, and Γ*n* are non-negative numbers. The operator *Ĥ* is the effective Hamiltonian of the dyanamics and is said to generate the unitary part of the dynamics. The operators *F̂**n* are the dynamics decoherence modes and Γ*n* are their respective decoherence rates. Any Liouvillian can be written in this form. Note that the effect of the dynamics on the maximally mixed state is, [I] =n n [Fn,Fn]. Using the cyclic property of trace we find the rate of change of systems purity is, () =(2) =2 ([] ). The unitary part of the dynamics does not change the purity, as expected, since ([H,] ) =(H [,]) =0. Thus we can focus our attention on the decoherence modes. Using,, and the cyclic property of trace we have $$\begin{aligned} \frac{{\mathrm{d}}}{{\mathrm{d}}t}\mathcal{P}(\rho) &=2 \, \text{Tr} \, \big(\mathcal{L}[\rho] \, \rho\big)\\ &\nonumber =2 \, \text{Tr} \, \Big(\sum\_n \Gamma\_n \, \big(F\_n\rho F\_n^\dagger-\{F\_n^\dagger F\_n,\rho\}/2\big)\rho\Big)\\ &\nonumber =\sum\_n \Gamma\_n \, 2 \, \text{Tr} \, \big( F\_n\rho F\_n^\dagger\rho -F\_n^\dagger F\_n\rho^2\big).\end{aligned}$$ For Hermitian *ρ*, we have the identity, 2 (AA- AA2) =([A,A]2 -), which yields, $$\begin{aligned} \frac{{\mathrm{d}}}{{\mathrm{d}}t}\mathcal{P}(\rho) &=\sum\_n \Gamma\_n \, \text{Tr} \, \big( [F\_n,F\_n^\dagger]\rho^2 -[F\_n,\rho]^\dagger [F\_n,\rho]\big)\\ &\nonumber =\text{Tr} \, \big(\mathcal{L}[I] \, \rho^2\big) -\sum\_n \Gamma\_n \, \text{Tr} \, \big([F\_n,\rho]^\dagger [F\_n,\rho]\big)\end{aligned}$$ where we have made use of to identify L[*I*] in the first term. Since the second term is manifestly negative we have the inequality, $$\begin{aligned} \frac{{\mathrm{d}}}{{\mathrm{d}}t}\mathcal{P}(\rho) \leq\text{Tr} \, \big(\mathcal{L}[I] \, \rho^2\big)\end{aligned}$$ claimed in. If L[*I*] = 0 then the dynamics will either maintain or decrease the purity of any state. In, this proof is shown to hold as well for infinite dimensional systems following some assumptions on the decoherence modes. In particular, it holds if all *F̂**n* are bounded. Concatenation And Convex Combinations ===================================== In this appendix, we prove that constructing a new map by taking either concatenations or convex combinations of different maps cannot lower the resultant purification order, defined in, below those of the original maps. Concatenation ------------- Suppose we have an update map *ϕ*(*δ**t*) that is the concatenation of two maps *χ*(1)(*δ**t*) and *χ*(2)(*δ**t*): (t) =(1)(t)(2)(t). We can take these two maps to have series expansions about *δ**t* = 0 $$\begin{aligned} \label{Chi1Series} \chi^{(1)}(\delta t) &=\openone +\delta t \, \chi^{(1)}\_1 +\delta t^2 \, \chi^{(1)}\_2 +\delta t^3 \, \chi^{(1)}\_3 +\dots\\ \label{Chi2Series} \chi^{(2)}(\delta t) &=\openone +\delta t \, \chi^{(2)}\_1 +\delta t^2 \, \chi^{(2)}\_2 +\delta t^3 \, \chi^{(2)}\_3 +\dots \, \end{aligned}$$ and define *m*1 = ord(*χ*(1)) and *m*2 = ord(*χ*(2)) to be the purification orders of *χ*(1)(*δ**t*) and *χ*(2)(*δ**t*) respectively, with *m* = min(*m*1, *m*2). Recall that a map’s purification order is defined in terms of its interpolation scheme. Converting this into terms of the update maps we have such that, $$\begin{aligned} &\chi^{(1)}[I]=I+\mathcal{O}(\delta t^{m\_1+1})\\ &\nonumber \chi^{(2)}[I]=I+\mathcal{O}(\delta t^{m\_2+1}).\end{aligned}$$ Thus we have $$\begin{aligned} \label{LesserOrder1} &\chi^{(1)}\_1[I]=\dots=\chi^{(1)}\_{m}[I]=0\\ &\nonumber \chi^{(2)}\_1[I]=\dots=\chi^{(2)}\_{m}[I]=0.\end{aligned}$$ Evaluating *ϕ*(*δ**t*) on the maximally mixed state yields $$\begin{aligned} &\phi(\delta t) =\chi^{(1)}(\delta t)\chi^{(2)}(\delta t)\\ &\nonumber =\big(\openone +\sum\_{k=1}^\infty \delta t^k \, \chi^{(1)}\_k\big)\big(\openone +\sum\_{n=1}^\infty \delta t^n \, \chi^{(2)}\_n\big)[I]\\ &\nonumber =\big(\openone +\sum\_{k=1}^\infty \delta t^k \, \chi^{(1)}\_k\big)\big(I +\sum\_{n=1}^\infty \delta t^n \, \chi^{(2)}\_n[I]\big)\\ &\nonumber =I +\sum\_{k=1}^\infty \delta t^k \, \chi^{(1)}\_k[I] +\sum\_{n=1}^\infty \delta t^n \, \chi^{(2)}\_n[I] +\sum\_{k=1}^\infty\sum\_{n=1}^\infty \delta t^{k+n} \, \chi^{(1)}\_k[\chi^{(2)}\_n[I]]\\ &\nonumber =I +\sum\_{k=m+1}^\infty \delta t^k \, \chi^{(1)}\_k[I] +\sum\_{n=m+1}^\infty \delta t^n \, \chi^{(2)}\_n[I]\\ &+\sum\_{k=1}^\infty\sum\_{n=m+1}^\infty \delta t^{k+n} \, \chi^{(1)}\_k[\chi^{(2)}\_n[I]]\end{aligned}$$ where we have used to drop terms from the sums. From this we can see that any non-unital effects in *ϕ*(*δ**t*) appear at at least order *m* + 1 and thus ord(*ϕ*) ≥ *m*. By applying this proof repeatedly one can conclude that if *ϕ*(*δ**t*) is a concatenation of a finite number of maps as, (t)= (1)(t) (2)(t) … (N)(t), then (){((n))} as claimed. Convex Combinations ------------------- Suppose we have an update map *ϕ*(*δ**t*) which is a convex combination of maps as, (t)= k pk (k)(t), with ∑*k**p**k* = 1. We can take these maps to have series expansions about *δ**t* = 0 as, (k)(t) =+t (k)1 +t2 (k)2 +t3 (k)3 +…. Let *m**k* = ord(*ψ*(*k*)) be the purification orders of *ψ*(*k*)(*δ**t*) and *m* = min{*m**k*}. Recall that a map’s purification order is defined in terms of its interpolation scheme. Converting this into terms of the update maps we have such that, (k)[I]=I+(tmk+1) and thus for every *k*, (k)1[I]=…=(k)m[I]=0. Evaluating *ϕ*(*δ**t*) on the maximally mixed state yields $$\begin{aligned} \phi(\delta t)[I] &=\sum\_k p\_k \, \psi^{(k)}(\delta t)[I]\\ &\nonumber =\sum\_k p\_k \, \big(\openone +\sum\_{n=1}^\infty \delta t^n \, \psi^{(k)}\_n\big)[I]\\ &\nonumber =\big(\openone +\sum\_{n=1}^\infty \delta t^n \, \sum\_k p\_k \, \psi^{(k)}\_n\big)[I]\\ &\nonumber =I+\sum\_{n=1}^\infty \delta t^n \, \sum\_k p\_k \, \psi^{(k)}\_n[I]\\ &\nonumber =I+\sum\_{n=m+1}^\infty \delta t^n \, \sum\_k p\_k \, \psi^{(k)}\_
arxiv_0000298
variable accretion rate. If the mass accretion rate increases, the radiated energy will increase, thus, the intrinsic luminosity. Hence, $\dot{m} \propto L\_{in}$, or *λ**E**d**d* ∝ *L**i**n*. Again, if the luminosity increases, it will cool down the Compton cloud more efficiently implying a steeper photon index, i.e. *L**i**n* ∝ Γ. These relations are well known for Galactic black holes. [sec:reflection] Reflection & Line Emission -------------------------- Hard X-ray emission from the Compton cloud is reflected from a cold neutral medium surrounding the AGN. Within our spectral domain, the reflection component consists of a reflection hump and an iron fluorescent emission line. Due to higher inclination, the Seyfert 2 galaxies usually show stronger reflection than the Seyfert 1 galaxies. The strength of the reflection component can be obtained from the pexrav (*R*, relative reflection) or MYTORUS (*A**S*; relative normalization of scattering) model. As the reflection hump is observed in 15 − 30 keV, *R* may not be constrained with the *Chandra* data. In this case, the MYTORUS model can constrain *A**S* as both the reflected and the line emission are computed self-consistently. Along with the reflection hump, the iron line emission is a good indicator of reflection. NGC 6300 is known to emit strong reflection component with unusually high *R* (> 4). The reflection dominated *RXTE* spectrum was expected to show EW  ∼ 1 keV although the observed EW was about  ∼ 470 eV. argued that the sub-solar abundances were the reason behind the observation of narrow line-width than expected. The EW of Fe fluorescent line was decreased in 1999 and 2001 observations with 140 eV and 69 eV, respectively although the relative reflection was still high. 0.5cm [fig:6] In our analysis, the Fe K*α* line was observed in the spectra of NGC 6300 during 2007, 2013 & 2016 epochs from the powerlaw model fitting. However, we did not detect the Fe K*α* line in the spectra from the 2009 epochs while fitting the data with the powerlaw model. On the other hand, the MYTORUS model fitting result showed the presence of Fe line in all observations. This discrepancy arose as the MYTORUS model compute Fe line self-consistently, while an ad-hoc Gaussian component was added as Fe line in the powerlaw model. Thus, we made our discussion on the reflection and Fe K*α* line based on the results obtained from the spectral analysis with MYTORUS model. During 2007, the iron K*α* line was observed with an equivalent width of 120− 16+ 14 eV (see Table [tab:mytd]). In 2009 epoch, a marginally narrower line (95-114 eV) was observed from the *Chandra* observations. Though similar EW was observed in 2007 & 2009 epochs, a different relative scattering normalization was found. In 2007, *A**S* = 1.06 indicates that the reflection dominates over the primary continuum. In 2009, the reflection was weaker compared to 2007 (*A**S* < 1). However, in 2013 & 2016 epochs, the reflection became stronger (*A**S* > 2 with broad EW (>300 eV). The observed EW is consistent for the reflection from a Compton thin reprocessor for all the observations. We found that the EW and *A**S* are strongly correlated with Pearson correlation index of 0.9893. This indicates that the EW strongly depends on the strength of the reflection. The Fe K*α* line flux is found to be strongly correlated with the EW and *A**S* with Pearson correlation indices of 0.9347 and 0.9275, respectively. This suggests that the Fe-line flux strongly depends on the strength of the reflection. The EW also correlates with the Compton cloud temperature (*k**T**e*) and has a correlation value of 0.9146. From this, one can implicate that the hotter Compton cloud causes more reflection from various regions which broadens the line-width. However, the Pearson correlation index between Fe K*α* line flux and *L**b**o**l* is estimated to be -0.5187, which suggests a moderate anti-correlation. This X-ray Baldwin effect was observed in several other sources. Nonetheless, the 2009 epoch seems to exhibit outliers of the Baldwin effect for NGC 6300. The reason behind this could be a separate origin of the iron line than the other epochs. Later, we focus on the Fe K*α* line emitting region. We calculated the Fe K*α* line emitting region from observed FWHM. In 2007, the FWHM of the Fe K-line was derived to be 5550 km s− 1 which indicates that the line emitting region would be  ∼ 3900 *r**g* away from the central source, i.e. the broad-line region (BLR). In 2009, we observed a narrow FWHM (< 100 km s− 1), which means the line emitting region is at  > 107 *r**g*, i.e.  > 17 pc located near the ‘torus’ region. In 2013, the *NuSTAR* observations revealed a broad FWHM with  ∼  30000 km s− 1. It implies that the line emitting region is  ∼ 130 *r**g* away. In Jan 2016 & Aug 2016 observations, the values of FWHM were  ∼ 28000 km s− 1 and 39000 km s− 1, respectively. In these two epochs also, the line emitting region would be  ∼ 150 *r**g* and  ∼ 80 *r**g* away, respectively. It is believed that the narrow Fe line is ubiquitous and can be emitted from the ‘torus’, the BLR or the accretion disc. From our analysis of NGC 6300, we found that the Fe fluorescent line was emitted from separate regions during various epochs. estimated the Fe K*α* line emitting region for NGC 6300 to be  ∼ 104 *r**g* in 2001, i.e. torus. During 2007, we find that the Fe-line was emitted from the BLR region. In 2009 epoch, we observed that the line emitting region as the ‘torus’. But, during 2013 and 2016 epochs, the iron line was emitted from the accretion disc. It is also possible that the narrow Fe K*α* line was emitted from the ‘torus’ in 2013 and 2016 epochs, but, could not be detected due to the presence of broader Fe K*α* line emission from the accretion disc. Considering the time-delay patterns between Fe-line and continuum during 2013 and 2016 epochs (see Figure [zdcf]), one can get a rough estimate of the size of the Compton cloud. Since the delay between these two is minimal compared to an AGN, it is possible that the Comptonized and reflected Fe-line component originated from a similar vicinity. Bearing this in mind, the broad iron line emitting region could be the farthest extents of Compton cloud during 2013 and 2016 epochs. [tab:lum] lcccccc ID &Date&*N**H*, *Z* &Γ&*L**b**o**l* & *λ**E**d**d* &(MJD)&(1023 cm− 2)& &(1043 ergs s− 1) & R1& 50493.50 &5.80 ± 0.22&1.71 ± 0.20 & 0.69\*& 0.005\* B2& 51418.50 &2.10 ± 0.10&2.19 ± 0.10 & 5.23\*& 0.041\* X3& 51971.50 &2.40 ± 0.15&1.94 ± 0.09 & - & - S1 & 54390.51 &2.18 ± 0.04&1.84 ± 0.04 & 3.82 ± 0.21& 0.029 ± 0.001 C1 & 54985.27 &2.04 ± 0.08&1.75 ± 0.03 & 2.68 ± 0.20& 0.021 ± 0.001 C2 & 54989.17 &2.06 ± 0.10&1.73 ± 0.02 & 3.00 ± 0.20& 0.023 ± 0.001 C3 & 54991.01 &2.10 ± 0.08&1.79 ± 0.05 & 2.72 ± 0.19& 0.021 ± 0.001 C4 & 54992.88 &1.93 ± 0.10&1.79 ± 0.04 & 3.09 ± 0.20& 0.024 ± 0.001 C5 & 54996.33 &2.11 ± 0.10&1.73 ± 0.08 & 3.53 ± 0.20& 0.027 ± 0.001 N1 & 56348.89 &1.44 ± 0.07&1.62 ± 0.04 & 2.93 ± 0.10& 0.023 ± 0.001 N2 & 57411.03 &1.23 ± 0.07&1.46 ± 0.02 & 2.36 ± 0.09& 0.018 ± 0.001 N3 & 57624.35 &1.08 ± 0.06&1.51 ± 0.05 & 2.80 ± 0.08& 0.022 ± 0.001 [sec:soft] Soft Excess ----------- Soft excess ( <  3 keV) is found almost in every AGN. However, the origin of the soft excess is poorly understood. One possible origin of soft excess could be the reflection from an optically thick warm Comptonizing region or the reflection from the ionized accretion disc. The origin of soft excess can be explained by the heating of circumnuclear gas from the shock produced by AGN outflows or photoexcitation and photoionization of circumnuclear gas of the primary emission of the AGN. The high-resolution capabilities of X-ray observatories such as *XMM-Newton* and *Chandra* have supported the later scenario in recent studies, e.g. NGC 1068 ;, the Circinus galaxy, Mkn 3 ; ;,. proposed shock heating near the ISCO could produce the soft excess. The theory successfully demonstrated the spectra of Seyfert 1 galaxy Ark 120. presented Bulk Motion Comptonization as a plausible cause of the soft excess. presented a new perspective on the soft excess by attaching the component with the high mass accretion rate of the disc itself. In our observations of NGC 6300, we modelled the soft excess with constant\*(powerlaw + apec). The powerlaw normalization and Γ were tied with the corresponding parameters of the primary component. We find that the soft excess contributed about up to  ∼ 6 per cent. The APEC model temperature varied between 0.2 and 0.5 keV, indicating a warm absorber. We also checked the fractional variability rms amplitude (*F**v**a**r*) for the soft excess, i.e. in the energy band of 0.5 − 3 keV. We found variability in this energy band in some observations (see Table [tab:fvar]). The average variability is higher in the soft excess than the primary emission. claimed that for lower *L*/*L**E**d**d* sources, the energy-dependent variabilities is less for the soft excess part. However, this contradicts our findings. Higher variabilities in the soft excess indicate that the soft excess could be the scattered primary emission from the warm reflector or accretion disc. The absence of variability in some observations infers that it might have originated from the ambient medium during those observations. From the spectral studies (see Table [tab:PL], [tab:mytd]), we also found variable *f**s* which implicates a dynamic mechanism such as reflection or complex absorbing medium could be responsible for the origin of the soft excess. Overall, the origin of the soft excess is complex. It is plausible that more than one factor contributed to the soft excess part of the spectra. ![Variation of (a) line-of-sight column density (N_H) in 10^{23} cm^{-2} unit, (b) photon index (\Gamma), (c) 2-10 keV absorbed flux (F) in 10^{-11} ergs cm^{-2} s^{-1} unit and (d) 2-10 keV bolometric luminosity (L_{bol}) in 10^{43} ergs s^{-1} unit are shown over the years.](past "fig:") [fig:past] [sec:evolution] Evolution of the System ----------------------- We studied the Seyfert 2 galaxy NGC 6300 between 2007 and 2016. During the period of observation, the source evolved over the years. The source was observed in the Compton-thick region in 1997, though, it was observed in the Compton-thin region in 1999. This made NGC 6300 a changing-look AGN. However, during our observation period between 2007 and 2016, NGC 6300 remained Compton-thin (as *N**H* < 1.5 × 1024 cm− 2). Although, we observed a change in the line-of-sight column density (*N**H*, *Z*) over the years, the global averaged column density (*N**H*, *S*) remained constant. It can be explained with the transiting clouds along the line-of-sight (see section [sec:torus]). In Figure [fig:past] (a), we show the evolution of the line-of-sight column density over the years. In addition to the evolution of the circumnuclear properties, the nuclear region also evolved over the years (see Table [tab:lum]). The photon index (Γ), absorbed flux, and luminosity changed over the years. In 1997, the source was observed with very low luminosity with bolometric luminosity, *L**b**o**l* = 6.9 × 1042 ergs s− 1. In 1999, the bolometric luminosity increased to *L**b**o**l* = 5.23 × 1043 ergs s− 1. The photon index also increased to Γ = 2.11 from Γ = 1.71 in 1999. Since then, the photon index decreased over the years till 2016. However, it would be naive to conclude that the Γ decreased gradually since the source was not observed on a regular basis. Nonetheless, both the flux and luminosity changed over the years. The change in the luminosity can be explained with the evolution of the mass accretion rate. We also checked for a correlation between the intrinsic luminosity *L**i**n* and the line-of-sight column density (*N**H*, *Z*) which we fail to observe. Such a correlation would indicate luminosity dependent covering factor. Since no correlation was found between these two, it is likely that the ‘torus’ and the AGN evolved independently. Conclusion ========== We study NGC 6300 between 2007 & 2016. Over the 9 years of observations, we investigated and found that NGC 6300 evolves with time. NGC 6300 was previously reported as changing-look AGN. However, we find it in Compton-thin region in every observation. Following are the findings from our work. 1. The obscured torus is not uniform; rather, it is clumpy. Global averaged column density is constant over the years. However, the line-of-sight column density changes with time. This change is interpreted as due to the transiting clouds along the line-of-sight. 2. The nuclear region was found to evolve over the years. The intrinsic luminosity of the source changes with time. The change in the mass accretion rate is likely to be responsible for that. 3. The torus and primary nucleus evolved independently, at least during our observation. We did not find any relation between column density and intrinsic luminosity. 4. The Fe K*α* line emitting region is different in different epochs. During 2007, 2009, and 2013-16, the primary source of Fe K*α* emission was BLR, ‘torus’ and accretion disc, respectively. Narrow Fe K*α* line is originated in the torus and could be present in all epochs. Although, narrow Fe K*α* lines were not detected in every epoch in presence of broad Fe K*α* line. 5. We find variability in both soft excess (0.5-3 keV range) and primary emission (> 3 keV). The variability in the soft excess infers that it could be scattered primary emission. However, a lack of variability in some observations infers that the origin of the soft excess is complex. Acknowledgements ================ We acknowledge the anonymous Reviewer for the constructive review which improved the clarity of the manuscript. A.J. and N. K. acknowledges support from the research fellowship from Physical Research Laboratory, Ahmedabad, India, funded by the Department of Space, Government of India for this work. AC acknowledges Post-doctoral fellowship of S. N. Bose National Centre for Basic Sciences, Kolkata India, funded by Department of Science and Technology (DST), India. PN acknowledges Council of Scientific and Industrial Research (CSIR) fellowship for this work. This research has made use of data and/or software provided by the High Energy Astrophysics Science Archive Research Center (HEASARC), which is a service of the Astrophysics Science Division at NASA/GSFC and the High Energy Astrophysics Division of the Smithsonian Astrophysical Observatory. This work has made use of data obtained from the *Suzaku*, a collaborative mission between the space agencies of Japan (JAXA) and the USA (NASA). The scientific results reported in this article are in part based on observations made by the Chandra X-ray Observatory. This research has made use of software provided by the Chandra X-ray Center (CXC) in the application packages CIAO, ChIPS, and Sherpa. This work has made use of data obtained from the *NuSTAR* mission, a projects led by Caltech, funded by NASA and managed by NASA/JPL, and has utilised the NuSTARDAS software package, jointly developed by the ASDC, Italy and Caltech, USA. This research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France. Data Availability ================= We have used archival data for our analysis in this manuscript. All the models and software used in this manuscript are publicly available. Appropriate links are given in the manuscript. 99 Alexander, T. 1997, in Astrophysics and Space Science Library, Vol. 218, Astronomical Time Series, ed.D. Maoz, A. Sternberg, & E. M. Leibowitz, 163 Antonucci, R. R. J., & Miller, J. S. 1985,, 297, 621 Antonucci, R. 1993,, 31, 473 Arnaud, K. A., Branduardi-Raymont, G., Culhane, J. L., et al. 1985,, 217, 105 Arnaud, K. A. 1996, Astronomical Data Analysis Software and Systems V, 17 Awaki H., Murakami H., Leighly K. M., Matsumoto C., Hayashida K., Grupe D.,  2005,, 632, 793 Baloković, M., Brightman, M., Harrison, F. A., et al. 2018,, 854, 42 Beckerman, E., Aldcroft, T., Gaetz, T. J., et al. 2004,, 445 Bennett C. L. et al.,  2003,, 148, 1 Bianchi, S., Miniutti, G., Fabian, A. C., & Iwasawa, K.  2005b,, 360, 380 Boorman, P. G., Gandhi, P., Baloković, M., et al. 2018,, 477, 3775 Brightman, M., & Nandra, K. 2011,, 413, 1206 Brightman, M., Baloković, M., Ballantyne, D. R., et al. 2017,, 844, 10 Broos, P. S., Townsley, L. K., Feigelson, E. D., et al. 2010,, 714, 1582 Chevallier L., Collin S., Dumont A.-M., Czerny B., Mouchet M.,Gon ̧calves A. C.& Goosmann R., 2006, A&A, 449, 493 Davis, J. E.  2001,, 562, 575 Dai, X., Chartas, G., Eracleous, M., et al. 2004,, 605, 45 Done, C., Davis, S. W., Jin, C., et al. 2012,, 420, 1848 Edelson R. A., et al.,  1996,, 470, 364 Edelson R., Griffiths G., Markowitz A., Sembay S., TurnerM. J. L., Warwick R.,  2001,, 554, 274 Edelson R., Turner T. J., Pounds K., Vaughan S., Markowitz A.,Marshall H., Dobbie P., Warwick R.,  2002,, 568, 610 Edelson R., Malkan M.,  2012,, 751, 52 Fabian, A. C., Ballantyne, D. R., Merloni, A., et al.  2002,, 331, L35 Fabian A. C., Lohfink A., & Kara E., et al.  2015,, 451, 4375 Fruscione, A., McDowell, J. C., Allen, G. E., et al.  2006, Proc. SPIE, 6270, 62701V Fukazawa, Y., Mizuno, T., Watanabe, S., et al.  2009,, 61, S17 Fukumura, K., Hendry, D., Clark, P., et al. 2016,, 827, 31 García, J. A., Kara, E., Walton, D., et al. 2019,, 871, 88 Gaspar G., Díaz R. J., Mast D., D’Ambra A., Agüero M. P., Günthardt G.,  2019,, 157, 44 George, I. M., & Fabian, A. C. 1991,, 249, 352 Gierliński, M., & Done, C. 2004,, 349, L7 Gierliński, M., Middleton, M., Ward, M., et al. 2008,, 455, 369 Gruber, D. E., Matteson, J. L., Peterson, L. E., & Jung, G. V.  1999,, 520, 124 Guainazzi M.,  2002,, 329, L13 Guainazzi M., & Bianchi S.,  2007,, 374, 1290 Guainazzi, M., Risaliti, G., Awaki, H., et al. 2016,, 460, 1954 Haardt, F., & Maraschi, L. 1991,, 380, L51 Halpern, J. P. 1984,, 281, 90 Harrison, F. A., Craig, W. W., Christensen, F. E., et al. 2013,, 770, 103 Hernández-García, L., Masegosa, J., González-Martín, O., & Marquez, I.  2015, å,579, A90 HI4PI Collaboration, et al.,  2016, A&A, 594, 116 Huenemoerder, D. P., Mitschang, A., Dewey, D., et al.  2011,, 141, 129 Ikeda, S., Awaki, H., & Terashima, Y. 2009,, 692, 608 Iwasawa, K., & Taniguchi, Y. 1993,, 413, L15 LaMassa, S. M., Cales, S., Moran, E. C., et al. 2015,, 800, 144 Lansbury, G. B., Stern, D., Aird, J., et al. 2017,, 836, 99 Leighly K. M., Halpern J. P., Awaki H., Cappi M., Ueno S., Siebert J.,  1999,, 522, 209 Lohfink, A. M., Reynolds, C. S., Miller, J. M., et al. 2012,, 758, 67 Kaufman, J., Blaes, O. M., & Hirose, S. 2017,, 467, 1734 Khorunzhev, G. A., Sazonov, S. Y., Burenin, R. A., & Tkachenko, A. Y.,  2012, AstL, 38, 475 King, A.  2005,, 635, L121 Kinkhabwala, A., Sako, M., Behar, E., et al.  2002,, 575, 732 Koyama, K., Tsunemi, H., Dotani, T., et al. 2007,, 59, 23 Lu, Y., & Yu, Q. 1999,, 526, L5 Madsen, K. K., Reynolds, S., Harrison, F., et al.  2015,, 220, 8 Madsen, K. K., Forster, K., Grefenstette, B. W., et al.  2017,, 841, 56 Magdziarz P., Zdziarski A. A.,  1995,, 273, 837 Magdziarz, P., Blaes, O. M., Zdziarski, A. A., et al. 1998,, 301, 179 Matsumoto C., Nava A., Maddox L. A., Leighly K. M., Grupe D., Awaki H., Ueno S.,  2004,, 617, 930 Matt, G., Perola, G. C., & Piro, L. 1991,, 247, 25 Matt G., Guainazzi M., Maiolino R., 2003, mnras, 342, 422 Markowitz, A. G., Krumpe, M., & Nikutta, R. 2014,, 439, 1403 Mehdipour, M., Branduardi-Raymont, G., Kaastra, J. S., et al. 2011,, 534, A39 Meyer M. J., et al.,  2004,, 350, 1195 Murphy K. D., Yaqoob T.,  2009,, 397, 1549 Mushtukov, A. A., Suleimanov, V. F., Tsygankov, S. S., et al. 2015,, 447, 1847 Nandra K., George I. M., Mushotzky R. F., Turner T. J., YaqoobT.,  1997,, 476, 70 Nandra, K. 2006,, 368, L62 Nardini, E., Fabian, A. C., Reis, R. C., et al. 2011,, 410, 1251 Netzer, H. 2013, The Physics and Evolution of Active Galactic Nuclei Panessa, F., Bassani, L., Landi, R., et al. 2016,, 461, 3153 Petrucci, P. O., Haardt, F., Maraschi, L., et al.  2001,, 556, 716 Rees, M. J. 1984,, 22, 471 Remillard, R. A., & McClintock, J. E. 2006,, 44, 49 Ricci, C., Walter, R., Courvoisier, T. J.-L., et al. 2011,, 532, A102 Ricci, C., Paltani, S., Ueda, Y., et al. 2013a,, 435, 1840 Ricci, C., Paltani, S., Awaki, H., et al. 2013b,, 553, A29 Risaliti, G., Elvis, M., & Nicastro, F. 2002,, 571, 234 Risaliti, G., Young, M., & Elvis, M. 2009,, 700, L6 Rodríguez-Pascual P.M., Alloin D., Clavel J., et al.,  1997,, 110, 9 Ross R. R., & Fabian A. C.,  2005,, 358, 211 Saez, C., Chartas, G., Brandt, W. N., et al. 2008,, 135, 1505 Sako, M., Kahn, S. M., Paerels, F., & Liedahl, D. A.  2000,, 543, L115 Sambruna, R. M., Netzer, H., Kaspi, S., et al.  2001,, 546, L13 Shakura, N. I., & Sunyaev, R. A. 1973,, 500, Shemmer, O., Brandt, W. N., Netzer, H., et al. 2006,, 646, L29 Shemmer, O., Brandt, W. N., Netzer, H., et al. 2008,, 682, 8133 Sikora, M., Stawarz, Ł., & Lasota, J.-P.  2007,, 658, 815 Singh, K. P., Garmire, G. P., & Nousek, J. 1985,, 297, 633 Skrutskie M. F., et al., 2006,, 131, 1163 Sobolewska M. A. & Done C.,, 2007, 374, 150 Pounds, K. A., & Page, K. L.  2005,, 360, 1123 Takahashi, T., Abe, K., Endo, M., et al. 2007,, 59, 35 Tetarenko, B. E., Sivakoff, G. R., Heinke, C. O., et al. 2016,, 222, 15 Titarchuk, L. 1994,, 434, 570 Uchiyama, Y., Maeda, Y., Ebara, M., et al. 2008,, 60, S35 Urry, C. M., & Padovani, P. 1995,, 107, 803 Vaughan, S., Edelson, R., Warwick, R. S., et al. 2003,, 345, 1271 Walton, D. J., Nardini, E., & Fabian, A. C., et al.  2013,, 428, 2901, Wilms, J., Allen, A., & McCray, R. 2000,, 542, 914 Wu, X.-B., & Liu, F. K. 2004,, 614, 91 Yaqoob T., 2012, MNRAS, 423, 3360 Yaqoob, T., Tatum, M. M., Scholtes, A., et al. 2015,, 454, 973 Young, A. J., Wilson, A. S., & Shopbell, P. L. 2001,, 556, 6 [lastpage] --- 1. <!-- h='&#112;&#114;&#108;&#46;&#114;&#x65;&#x73;&#46;&#x69;&#110;';a='&#64;';n='&#x61;&#114;&#x67;&#104;&#x61;';e=n+a+h; document.write('<a h'+'ref'+'="ma'+'ilto'+':'+e+'">'+e+'<\/'+'a'+'>'); // --> argha at prl dot res dot in[↩](#fnref1) 2. <!-- h='&#98;&#x6f;&#x73;&#x65;&#46;&#114;&#x65;&#x73;&#46;&#x69;&#110;';a='&#64;';n='&#x61;&#114;&#x6b;&#x61;&#x63;&#104;&#x61;&#116;&#116;&#x65;&#114;&#106;&#x65;&#x65;';e=n+a+h; document.write('<a h'+'ref'+'="ma'+'ilto'+':'+e+'">'+e+'<\/'+'a'+'>'); // --> arkachatterjee at bose dot res dot in[↩](#fnref2) 3. <http://heasarc.gsfc.nasa.gov/>[↩](#fnref3) 4. <http://heasarc.gsfc.nasa.gov/docs/suzaku/analysis/abc/>[↩](#fnref4) 5. <http://www.astro.isas.jaxa.jp/suzaku/caldb/>[↩](#fnref5) 6. <https://cxc.harvard.edu/ciao/download/>[↩](#fnref6) 7. <https://cxc.harvard.edu/ciao/download/caldb.html>[↩](#fnref7) 8. <https://cxc.cfa.harvard.edu/ciao/ahelp/chandra_repro.html>[↩](#fnref8) 9. <https://heasarc.gsfc.nasa.gov/xanadu/xspec/manual/node304.html>[↩](#fnref9) 10. <http://heasarc.gsfc.nasa.gov/FTP/caldb/data/nustar/fpm/>[↩](#fnref10) 11. 12. <https://www.mytorus.com/>[↩](#fnref12) Probing Nuclear and Circumnuclear Properties of NGC 6300 using X-ray Observations ================================================================================= ### Accepted XXX. Received YYY; in original form ZZZ [firstpage] We present the results obtained from a detailed X-ray timing and spectral analysis of Seyfert 2 galaxy NGC 6300 by using observations with the *Suzaku*, *Chandra* and *NuSTAR* observatories between 2007 and 2016. We calculate variance, rms fractional variability of the source in different energy bands and find variabilities in various energy bands. Spectral properties of the source are studied by using various phenomenological and physical models. The properties of the Compton clouds, reflection, Fe K*α* line emission and soft X-ray excess are studied in detail. Several physical parameters of the source are extracted and investigated to establish the presence/absence of any correlation between them. We also investigate the nature of the circumnuclear ‘torus’ and find that the torus is not uniform, rather clumpy. The observed changes in the line-of-sight column density can be explained in terms of transiting clouds. The iron line emitting region is found to be different in the different epoch of observations. We also observe that the torus and the nucleus independently evolve over the years. galaxies: active – galaxies: Seyfert – X-rays: galaxies – X-rays: individual: NGC 6300 Introduction ============ Active galactic nuclei (AGNs) are the most energetic persistent objects in the universe. The AGNs are powered by the accreting supermassive black holes (SMBH) which reside at the centre of each galaxy. The matter from the surrounding medium is accreted in the form of a geometrically thin, optically thick accretion disc around the blackhole, also known as the standard disc. The accretion disc, in case of AGNs, predominately radiates in the UV/optical wavebands and creates the so-called ‘big-blue-bump’ in the broadband spectral energy distribution (SED). The X-rays, on the other hand, are emitted from a Compton cloud located within a few tens of Schwarzschild radii around the central engine. The X-rays can be produced by the inverse-Compton scattering of the UV/optical photons originating from the accretion disc. The hard X-ray photons are reflected at the relatively cold matter and produce Fe fluorescent line. Thus, along with the primary X-ray continuum, a reflection hump at  ∼ 15-30 keV energy range, and a Fe fluorescent line at  ∼ 6.4 keV are observed in the X-ray spectrum of AGNs. An additional soft X-ray (< 2 keV) component, known as ‘soft excess’ is often observed in the AGNs spectra. The ‘soft excess’ could have a completely different origin than the primary continuum, and is often associated with the host galaxy. However, more recent studies indicate that the ‘soft-excess’ could be generated via reflection mechanism or due to the variable nature of the hydrogen column density (*N**H*) along the line of sight. Inverse Comptonization by warm and optically thick region and relativistic blurred reflection of the hot coronal photons by the inner disc are also considered to be the origin of the soft excess. suggested that hot corona to be the origin of the soft excess. explored both the possibilities, i.e. relativistic reflection as well as Comptonization form warm corona in Mrk 509. The AGNs also produce powerful relativistic jets which are best observed in radio bands. About  ∼ 15 per cent AGNs show relativistic jets, leading to a classification of radio-loud and radio-quiet AGNs. Recently, reported that from the total AGN population, about  ∼ 7 − 10 per cent galaxies are the radio galaxies. AGNs are classified as type-I or type-II based on the observation of broad line emission (originate in the broad line emitting region or BLR) or narrow line emission (originate from narrow line emitting region or NLR) in the optical wavebands. In type-I AGN, both broad lines and narrow lines are observed, while in type-II AGN, only narrow lines are observed in the optical spectra. Observation of broad lines in the polarized light in NGC 1068 revealed that the BLR is obstructed by an absorbing dusty ‘torus’ surrounding the AGN. This leads to the unified model for AGN, where the classification is due to the orientation effect. The unified model of AGN from the optical wavebands transforms into the X-ray wavebands as the hydrogen column density (*N**H*) in the line-of-sight of the absorbing material. The type-I AGNs are observed in unobstructed way with *N**H* < 1022 cm− 2, while the type-II AGNs are observed through the obstruction where *N**H* > 1023 cm− 2. In the case of type-II AGNs, a dusty torus around the accretion disc is considered as the major absorbing medium along the line-of-sight of the observer although the possibility of contribution from NLRs and BLRs towards total *N**H* can not be ignored. The obscuring torus is characterized by its hydrogen column density (*N**H*). If *N**H* > 1.5 × 1024 atoms *c**m*− 2, the torus is considered as Compton thick; otherwise, the torus is Compton thin. Generally, photons with energy  < 2 keV suffers absorption and above  ∼ 2 keV, the unabsorbed spectrum is expected. The absorption of X-rays increases as the column density of absorbing material increases. In the case of torus geometry, flux suppression in the energy range below 10 keV gets flatten off in case of Compton-thick source, i.e. *N**H* > 1.5 × 1024 cm− 2, as there is always reflected flux contribution from the torus irrespective of the obscuration level of the torus. Seyfert 2 galaxies are the radio-quiet type-II AGNs. In this case, a dusty torus surrounds the circumnuclear region. It is believed that the ‘torus’ is located far away (a few parsecs) from the nucleus. In general, the nature of the torus does not change over the years. Although, for several AGNs, it is observed that the opacity of the torus changes from Compton-thin to Compton-thick and vice-versa in a timescale ranging from months to years. This type of AGNs are known as changing-look AGN. The variation of column density is believed to be occurred due to the transition of cloud along the line-of-sight. The location of these transiting clouds commensurate with the outer region of the BLR or dusty inner torus and sometimes with the inner BLR. showed that Compton-thin and changing-look AGNs show more variabilities than the Compton-thick AGN. NGC 6300 is a nearby Seyfert 2 galaxy with *z* = 0.0037. It is located at R.A. = 17*h* 16*m* 59.473*s*; DEC =  =  − 6249ʹ 13ʺ.98. It is a ring, barred spiral galaxy and classified as SBb-type galaxy from its morphology. NGC 6300 was observed a few times in X-ray bands with the RXTE, BeppoSAX and XMM-Newton, in February 1997, August 1999 and March 2001, respectively. From the variability studies with the XMM-Newton, the mass of NGC 6300 was estimated to be  ∼ 2.8 × 105 *M*⊙. The *M**B**H* − *σ* relation yields the mass of NGC 6300 to be 107 *M*⊙. However, with various uncertainties, the mass is estimated to be  < 107 *M*⊙. estimated the mass of the BH to be 107.59 *M*⊙ from mass-bulge luminosity correlation. The NIR study of the molecular radial velocity yields the mass of the BH as  < 6.25 × 107 *M*⊙. In this article, we study the Seyfert 2 galaxy NGC 6300 by using observations between 2007 & 2016 at five epochs (2007, 2009, 2013, January 2016 & August 2016) with the *Suzaku*, *Chandra*, and *NuSTAR* observatories. We investigate the characteristics of the nucleus as well as the nature of the circumnuclear torus. In §[sec:observation], we briefly discuss the observations and data reduction processes. In §[sec:analysis], we present timing and spectral analysis methods and the corresponding results. In §[sec:discussion], we discuss our findings. Finally, we draw our conclusions in §[sec:conclusion]. Observation and Data Reduction ============================== [tab:1] | | | | | | | --- | --- | --- | --- | --- | | ID | Date | Obs. ID | Instrument | Exposures | | | (yyyy-mm-dd) | | | (ks) | | S1 | 2007-10-17 | 702049010 | *Suzaku* | 82.5 | | C1 | 2009-06-03 | 10289 | *Chandra*/ACIS | 10.2 | | C2 | 2009-06-07 | 10290 | *Chandra*/ACIS | 9.8 | | C3 | 2009-06-09 | 10291 | *Chandra*/ACIS | 10.2 | | C4 | 2009-06-10 | 10292 | *Chandra*/ACIS | 10.2 | | C5 | 2009-06-14 | 10293 | *Chandra*/ACIS | 10.2 | | N1 | 2013-02-25 | 60061277002 | *NuSTAR* | 17.7 | | N2 | 2016-01-24 | 60261001002 | *NuSTAR* | 20.4 | | N3 | 2016-08-24 | 60261001004 | *NuSTAR* | 23.5 | We searched and acquired the publicly available archival data of NGC 6300 from *Suzaku*, *Chandra* and *Nustar* observatories by using HEASARC[3](#fn3). [sec:suzaku] Suzaku ------ NGC 6300 was observed with *Suzaku* on 2007 October 17 (Obs ID: 702049010). The *Suzaku* observatory consisted of two sets of instruments: the X-ray Imaging Spectrometer (XIS) and the Hard X-ray Detector (HXD). There are four units of XIS among which XIS-0, XIS-2, and XIS-3 were front-side-illuminated CCDs (FI-XISs), while XIS-1 was back-side-illuminated one (BI-XIS). The HXD was a non-imaging Instrument consisting of Si PIN photo-diodes and GSO scintillation counters. We followed[4](#fn4) standard procedures and recommended screening criteria while extracting *Suzaku*/XIS spectra and light-curves. We reprocessed the event files by using the latest calibration data files [5](#fn5), released on 2014-02-03, through the software package FTOOLS 6.25. We chose a circular region with a radius of 210 arcsecs and source coordinates as the centre, for source extraction. The background spectra were extracted from source-free regions by selecting a circular region of 210 arcsecs radius. With the xisrmfgen and xisarfgen task, we generated Response matrices and ancillary response files, respectively. As XIS-2 was not operational, the data from the other three XISs are used in the present analysis. The 2 − 10 keV front-illuminated XIS-0 and XIS-3 spectra are co-added using the addascaspec task, whereas the 0.5 − 10 keV back-illuminated XIS-1 spectrum was treated separately. The XIS spectra in 1.6 − 2 keV range was ignored due to the presence of known Si edge. All the spectra are re-binned to achieve  > 20 counts per channel bins by using the grppha task. For *Suzaku* HXD/PIN spectra, cleaned event files were generated using the aepipeline task. With the hxdpinxbpi task, the deadtime corrected source and background spectra were generated. The background spectra included non-X-ray background (nxb; ) and the simulated cosmic X-ray background (cxb; ). We used HXD/PIN spectrum in 15 − 40 keV energy range in our analysis. [sec:chandra] Chandra ------- The *Chandra*/ACIS observed NGC 6300 five times between 2009 June 03 and 2009 June 14. We summarized these observations in Table [tab:1]. All the observations were carried out in Faint data mode. The data were processed with the Chandra Interactive Analysis of Observations tools (CIAO v.4.11[6](#fn6);) by using corresponding Calibration Database (CALDB v.4.8.5)[7](#fn7). We first reprocessed the level-2 event files by applying the updated calibration data using CIAO script chandra\_repro [8](#fn8). After that, we used the CIAO tool axbary to apply the barycentre correction on the reprocessed level-2 event files. We considered a circular region of 2.46 arcsec radius, centred at the source coordinates, to extract the source light curves and spectra. The background light curves and spectra were extracted by selecting a circular region of 10 arcsec radius and away from the source. Finally, we used specextract and dmextract to extract the spectra and light curves of the source and background, respectively. To check the amount of pile-up in each observation, we used the CIAO tool PILEUP\_MAP. Although it is less than 10 per cent, we account this effect in spectral fitting by using the convolution model pileup [9](#fn9) with all spectral model, with frame time set to 0.5s. We did not see any changes with or without pileup model. [sec:nustar] NuSTAR ------ NGC 6300 was observed with *NuSTAR* three times; once in 2013, and twice in 2016 (see Table [tab:1] for details). The *NuSTAR* consists of two identical modules: FPMA and FPMB. Reduction of the raw data was performed with the NuSTAR Data Analysis Software (NuSTARDAS, version 1.4.1). Cleaned event files were generated and calibrated by using the standard filtering criteria with the nupipeline task and the latest calibration data files available in the NuSTAR calibration database (CALDB)[10](#fn10). Both the extraction radii for the source and the background products were set to be 80 arcsec. With the nuproduct task, the spectra and light-curves were extracted. The light curves were binned over 100s. Considering the background counts, we limited our spectral analysis within 3 − 40 keV range. We re-binned the spectra to achieve 20 counts per bin by using the grppha task. Results ======= We used *Suzaku*, *Chandra*, and *NuSTAR* data between 2007 and 2016 for our analysis. All the instruments have different effective area, thus, one needs to take care of this. To address this issue, we used ‘ancillary response file (arf)’ in the spectra. For the lightcurves, we used a cross-normalization factor to normalize the count rate using crab light curve. We used cross-normalization factor, *N**F**P**M**A* = 1.0, *N**A**C**I**S* = 1.10 ± 0.05, *N**X**I**S* = 0.95 ± 0.03. We used the following cosmological parameters in this work: *H*0 = 70 km s− 1 Mpc − 1, Λ0 = 0.73, and Ω*M* = 0.27. Timing Analysis --------------- We considered all the X-ray light curves obtained from the *Suzaku*, *Chandra* and *NuSTAR* observations of NGC 6300 with 100 s bin for timing analysis. We segregated the total energy bins into various segments for variability analysis. We divided the low energy data ( ≤ 10 keV) into two energy bands, namely 0.5 − 3 keV and 3 − 10 keV for variability analysis for *Suzaku*/XIS and *Chandra* observations. The *NuSTAR* data (3 − 40 keV range) were divided into two chunks: 3 − 10 keV and 10 − 40 keV ranges for variability analysis. To examine the time delay between the Fe K*α* line and continuum, we considered the light curve of 3 − 40 keV with the Fe line counts (in 6 − 6.7 keV energy band) for *NuSTAR* observations. The 10 − 40 keV light curves were taken into account to analyze the variability in the high energy part. lcccccc ID & *N* &*x**m**a**x* &*x**m**i**n*&*R* &*σ**N**X**S*2 &*F**v**a**r* & & & & & ( × 10− 3) & S1(0.5-3)& 198 &0.196 &0.021 &9.40 & -0.287 ± 16.5 &  −  C1(0.5-3)& 18 &0.020 &0.002 &9.11 & 0.191 ± 115.4 & 0.17 ± 0.36 C2(0.5-3)& 20 &0.027 &0.005 &6.00 & 0.779 ± 53.0 & 0.24 ± 0.15 C3(0.5-3)& 19 &0.022 &0.002 &9.88 & 1.180 ± 73.3 & 0.36 ± 0.16 C4(0.5-3)& 21 &0.028 &0.002 &12.1 & 1.510 ± 55.8 & 0.35 ± 0.13 C5(0.5-3)& 20 &0.020 &0.002 &9.11 &-0.361 ± 59.7 &  −  &&&&&& S1(3-10)& 203 &0.49 &0.11 &4.31 & 7.41 ± 3.73 & 0.19 ± 0.02 C1(3-10)& 21 &0.09 &0.02 &4.55 & 2.54 ± 10.3 & 0.19 ± 0.06 C2(3-10)& 20 &0.30 &0.15 &2.08 & 3.47 ± 3.18 & 0.13 ± 0.03 C3(3-10)& 22 &0.21 &0.01 &2.32 & 12.9 ± 5.36 & 0.32 ± 0.05 C4(3-10)& 21 &0.30 &0.12 &2.56 & 66.1 ± 3.41 & 0.18 ± 0.04 C5(3-10)& 21 &0.25 &0.07 &3.65 & 8.33 ± 3.83 & 0.22 ± 0.04 N1(3-10)& 44 &1.01 &0.32 &3.17 & 35.9 ± 1.89 & 0.25 ± 0.03 N2(3-10)& 49 &0.74 &0.20 &3.75 & 24.9 ± 1.78 & 0.23 ± 0.03 N3(3-10)& 62 &0.84 &0.31 &2.68 & 14.7 ± 4.98 & 0.16 ± 0.03 &&&&&& N1(10-40)& 44 &0.63 &0.27 &2.33 & 13.1 ± 2.67 & 0.18 ± 0.03 N2(10-40)& 49 &0.52 &0.19 &2.61 & 11.5 ± 2.23 & 0.19 ± 0.02 N3(10-40)& 62 &0.93 &0.26 &3.60 &-1.25 ± 12.0 &  −  &&&&&& [tab:fvar] ### **Variablility** To examine the temporal variability in X-ray emission from NGC 6300, in the different energy bands during the period of 17th October 2007 to 24th August 2016, we estimated numerous parameters. The fractional variability *F**v**a**r* (; ; ; ; ; ) for a light curve of *x**i* count/s with finite measurement error *σ**i* of length *N* with a mean *μ* and standard deviation *σ* is given by: $$F\_{var}=\sqrt{\frac{\sigma^2\_{XS}}{\mu^2}}$$ where *σ**X**S*2 is excess variance (; ), an estimator of the intrinsic source variance and is given by: $$\sigma^2\_{XS}=\sigma^2 - \frac{1}{N}\sum\_{i=1}^{N} \sigma^2\_{i}.$$ The normalized excess variance is given by *σ**N**X**S*2 = *σ**X**S*2/*μ*2. The uncertainties in *σ**N**X**S*2 and *F**v**a**r* are taken from and. The peak to peak amplitude is defined as *R* = *x**m**a**x*/*x**m**i**n* (where *x**m**a**x* and *x**m**i**n* are the maximum and minimum flux, respectively) to investigate the variability in the X-ray light curves. The X-ray photons from NGC 6300 in different energy bands (0.5–3 keV, 3–10 keV, 10–40 keV ranges) showed different magnitude of variabilities. The results are shown in Table [tab:fvar]. From the low energy part (0.5 − 3 keV range), we obtained an average *R* value of 9.27 which has a range from 6.00 to 12.1. However, for higher energy part (3–10 keV range), we observed  < *R* >  = 3.02, ranging from 2.08 to 4.55. Thus, the high energy part showed fewer variabilities in terms of the average values and range of *R*. Although, *σ**N**X**S*2 exhibited opposite nature. A trend of increasing normalized excess variance can be seen from Table [tab:fvar]. It should also be noted that the errors of *σ**N**X**S*2 (calculated considering variance $\sim \overline{\sigma^2\_{err}}$) are larger than the values for 0.5–3 keV energy band. Thus, the conclusions solely based on *σ**N**X**S*2 would be erroneous. We calculated the fractional variability (*F**v**a**r*) in terms of normalized excess variance (*σ**N**X**S*2) to investigate the variabilities in different wavebands. These quantities describe the variability strength present in AGN light curves. Though *σ**N**X**S*2 and *F**v**a**r* represent similar information, the latter was used as *F**v**a**r* is independent of the signal-to-noise ratio of the light curve. From Table [tab:fvar], we can infer that NGC 6300 had a constant decrease of variability strength with increasing energy ( < *F**v**a**r*0.5 − 3 >  = 0.28;  < *F**v**a**r*3 − 10 >  = 0.208;  < *F**v**a**r*10 − 40 >  = 0.185) over the entire period of time. ### **Correlation** In order to investigate the physical connection between the Fe line and the X-ray continuum, we computed the Pearson correlation coefficient (*r*) and Spearman’s rank correlation coefficient (*r**s*) using the *NuSTAR* observation. Quantitive analysis of the correlation based on the Pearson coefficient provided a degree of linear correlation between the Fe-line emission and the X-ray continuum. We correlated 3 − 40 keV light curves (for X-ray continuum) with the Fe line light curves. The Fe line light curves are computed in the energy range of 6 − 6.7 keV. The positive and negative values of the Pearson coefficient for all three observations indicated a very weak linear correlation between the Fe-flux and the X-ray continuum. This implies a plausible disjoint mechanism involved in the emission of the Fe line and the X-ray continuum. We estimated the Spearman’s rank correlation (*r**s*) to further quantify the degree of correlation between the Fe flux and X-ray continuum. It reflects a similar result for the first two observations. The result of this correction study is quoted in Table [tab:correlation]. We also investigate the time-delay between the Fe-line flux and the X-ray continuum by using the *ζ*-transformed discrete correlation function (ZDCF) method fully described by [11](#fn11). The ZDCF-code is publicly available for estimating the cross-correlation function of unevenly sampled light curves. We considered the X-ray light curves in 3 − 40 keV range with the Fe-line flux for three NuSTAR observations to estimate the ZDCF coefficient. We calculated ZDCF for two different cases: omit zero lag points and include zero lag points, and in both cases, we got a similar result. The values of ZDCF coefficient with time delay is presented in Table [tab:correlation]. Also, the variation of coefficient with time delay is shown in Figure [zdcf]. Although there is no prominent peak in the correlation function, we found a moderate peak at the time delay of 1.67 ± 1.5 minutes for the first observation with ZDCF coefficient value of 0.18 ± 0.12 and peak at  − 3.30 ± 1.5 minutes for second observation with ZDCF coefficient value of 0.45 ± 0.12. In the case of third observation, we did not notice any peak in the correlation function. [zdcf] [tab:correlation] | | | | | | | | --- | --- | --- | --- | --- | --- | | ID | *r* | *r**s* | *p* − *v**a**l**u**e* | *Z**D**C**F* | *t**i**m**e* − *d**e**l**a**y* | | | | | | | (min) | | N1 | 0.2385 | 0.1835 |.0115 | 0.265 ± 0.122 | 1.67 ± 1.5 | | N2 | 0.3490 | 0.3494 | <.001 | 0.445 ± 0.123 |  − 3.30 ± 1.5 | | N3 | -0.2700 | 0.3676 | <.001 |  −  |  −  | Spectral Analysis ----------------- Spectral analysis of data obtained from the *Suzaku*, *Chandra* and *NuSTAR* observations of NGC 6300 was carried out by using the software package XSPEC v12.10. For spectral fitting, we explored several phenomenological and physical models in order to understand the core region of NGC 6300. We used powerlaw, compTT, pexrav, and MYTORUS models to approximate the primary continuum and reflection components. While fitting the data with the first three continuum models separately, a Gaussian component is considered for the iron fluorescent emission line. The soft X-ray excess observed in the spectra of many AGNs, is usually modelled with the combination of powerlaw and APEC model. The normalization parameter was tied with the primary component in order to get an idea of scattering fraction *f**s*; modelled with ‘constant’ in XSPEC. The soft excess component in XSPEC reads as: constant\*(powerlaw + apec). Along with these components, we used two absorption components, namely TBabs and zTBabs. TBabs was used for Galactic absorption and hydrogen column density (*N**H*, *G**a**l*) was fixed at 8.01 × 1020 *c**m*− 2. We calculated error for each spectral parameters with 90 per cent confidence level (1.6 *σ*). The errors are calculated using ‘error’ command in XSPEC. [sec:powerlaw] ### **Power law** We started the spectral analysis with the simple powerlaw model. Our baseline model in XSPEC reads as: TBabs1\*(zTBabs2\*(zpowerlaw + zGaussian) + soft excess). We started our analysis using data from the *Suzaku* observation in 2007. The 0.5 − 40 keV spectrum was fitted with the above spectral model. The parameters obtained from the fitting are *N**H* = 2.09 × 1023 *c**m*− 2, Γ = 1.81, an Fe *K**α* line at 6.37 keV with equivalent width (EW) of 114 eV and the reduced chi-square (*χ*2/*d**o**f*) = 1.03 (for 2549 dof). Next, we analyzed the data from the 2009 epoch when NGC 6300 was observed five times within 11 days with the *Chandra* observatory. We did not detect Fe K*α* line in all five spectra. We verified this by using the ftest task in XSPEC. We fitted all the *Chandra* spectra by removing the Gaussian component from the baseline model. The hydrogen column density (*N**H*) along the line of sight varied between 1.77 × 1023 *c**m*− 2 and 1.89 × 1023 *c**m*− 2. The power-law photon indices were similar during the 2009 epoch of observations. NGC 6300 was observed with the *NuSTAR* observatory in February 2013. The data obtained from the *NuSTAR* observation were fitted with the model yielding parameters such as *N**H* = 1.49 × 1023 *c**m*− 2, Γ = 1.61, an Fe *K**α* line at 6.36 keV with an equivalent width of 258 eV and the *χ*2/*d**o**f* = 758/702. The iron line was found to be broader than the previous observations. The spectra became harder with photon indices of Γ = 1.52 and 1.54 during the observations in Jan 2016 and Aug 2016, respectively. The iron line width was increased to 280 and 299 eV during these two observations. However, the hydrogen column density was decreased in the 2016 epochs to 1.08 × 1023 *c**m*− 2 which is almost  ∼ 50 per cent of the value of the column density during 2007 epoch of observation. In Figure [fig:pl-spec], the powerlaw model fitted spectra are shown. In Figure [fig:contour], the contour plots for Γ vs *N**H* are shown for S1, C2, & N2. The powerlaw model fitted spectral analysis result is shown in Table [tab:PL]. [fig:pl-spec] [sec:comptt] ### **CompTT model** The powerlaw model provided valuable information on the variations of spectral hardness and hydrogen column density over the observation duration of  ∼ 9 years. However, the fundamental properties such as electron temperature (*k**T**e*), optical depth of the medium (*τ*), and approximate shape of accretion geometry (slab or spherical) are necessary to gain a deeper understanding of the system. To estimate these quantities, we replaced the powerlaw model with a more physical compTT model in our spectral fitting. The X-ray emitting Compton cloud is characterized by the hot electron temperature (*k**T**e*) and optical depth (*τ*). The model considered here can be expressed as: TBabs1\*(zTBabs2 \* (compTT + zGaussian) + soft excess). While fitting with the compTT model, the *χ*2 values obtained were acceptable for all observations. We found that the electron temperature varied over the years, indicating a change in the spectral state. In fact, *k**T**e* increased over the period of  ∼ 9 years from 37.2 keV to 88.6 keV. The value of the optical depth varied dramatically over the entire span of the observation duration. The optical depth remained mostly constant during the 2009 epoch except for the *Chandra* observation on 3 June 2009 (‘C1’). Particularly on that date, we found significantly high electron temperature (much higher than the *Suzaku* observation - S1) along with the higher value of the optical depth, suggesting a denser and hotter Compton cloud around the black hole. Results presented here were obtained by considering a spherical cloud with seed photon temperature fixed at 30 eV, which is likely to be the inner disc temperature for a BH with a mass of  ∼ 107 *M*⊙. We also considered the slab geometry for which the parameter variations remained reasonably consistent as obtained with the spherical geometry. Thus, the geometry of the Compton cloud remained unclear with the compTT model. [sec:pexrav] ### **Pexrav** The reported inclination for of NGC 6300 is around 77∘. As a high inclination source, the X-ray spectrum from the inner region is more prone to suffer reflection from the disc. To estimate the reflection coefficient over the entire period of our observation, we applied reflection model pexrav. The pexrav model contains a powerlaw continuum and a reflected component from an infinite neutral slab. We estimated relative reflection (*R*) of the source. We fixed the photon index (Γ*p**e**x**r**a**v*) with the value of Γ obtained from the powerlaw model. We fixed abundances for heavy elements and iron at Solar value (i.e. 1). In the beginning, we froze cos*i* at 0.22 (*θ**o**b**s* = 77). Later, we re-analyzed the data by fixing cos*i* at 0.15 (*θ**o**b**s* = 80; obtained from MYTROUS model fit, see [sec:MYT]). As we could not constrain the cut-off energy of the Compton cloud, we froze it at 1000 keV. We found that during the 2007 *Suzaku* observation, the reflection was moderate (*R* = 0.62). However, in 2009 epoch, *R* increased and dominated during the three observations (*R* > 1). During the other two observations in 2009 epoch, *R* is estimated to be in the range of 0.8 and 1, indicating a strong reflection. The reflection hump was seen in  ∼ 15 − 30 keV energy range; thus, the value of *R* obtained is in the range of 0.5 to 8.0 from the *Chandra* data and are not well constrained. The 3 − 40 keV spectra from the *NuSTAR*observations in 2013 & 2016 epochs gave us a good estimation of *R*. During all three *NuSTAR* observations, we found that the reflection was strong with *R* > 0.74. The model fitted results are given in Table [tab:PL]. [fig:mytd-spec] [sec:MYT] ### **MYTORUS** The AGNs are surrounded by circumnuclear absorbing clouds popularly referred to as ‘torus’. The phenomenological models do not consider the complex structure of the ‘torus’. Several physical models have been developed which account for the ‘toroidal geometry’ and calculate reflection and line spectra self-consistently. We studied NGC 6300 with a physical model, namely the MYTORUS[12](#fn12) model which describes an absorbing torus surrounding the nuclear region with half opening angle fixed at 60. This model consists of three components: absorbed primary continuum or *zeroth-ordered* component (MYTZ), a scattered/reflected component (MYTS) and an iron fluorescent line component (MYTL; Fe K*α* and Fe K*β* line). The MYTORUS model can be used in two configurations: coupled and decoupled. The coupled configuration describes a uniform circumnuclear torus while the decoupled configuration attributes the nonuniformity in the torus shape and density profiles. In the beginning, we tested the default or coupled configuration of the MYTORUS model. The model reads as, TBabs(powerlaw\*MYTZ + *A**S*MYTS + *A**L*MYTL + soft excess). In this configuration, the equatorial hydrogen column density (*N**H*, *e**q*), photon index of the incident primary continuum (Γ), inclination angle (*θ**o**b**s*), and model normalization are tied together. As recommended, the relative normalizations of the scattered and line components are tied together, i.e. *A**S* = *A**L*. We allowed the inclination angle (*θ**o**b**s*) to vary freely. We found that the inclination angle varied between 76and 82. With this model, we achieved a good *χ*2 fit for all observations. The photon indices and equatorial column density obtained using this model followed a similar trend as obtained from the powerlaw model. In the MYTROUS model, the equivalent width is not a free parameter; instead, it is computed self-consistently. The full width half maxima (FWHM) and flux (*F**K**α*) of iron K*α* line are obtained from a Gaussian convolution model gsmooth which can be used with MYTL component (for details, see ). The results obtained using this model are presented in Table [tab:mytc]. Later, we tested the decoupled configuration of the MYTORUS model. This can be achieved by decoupling the column density of different components. Initially, we fixed *θ* of MYTZ component to 90. This component describes the absorbed transmitted component. The hydrogen column density of MYTZ will provide us with a column density (*N**H*, *Z*) along the line-of-sight. Next, we fixed inclination angle of MYTS and MYTL at 0. The column density is decoupled from the MYTZ component. The column density of scattered component describes averaged global column density (*N**H*, *S*). This component describes a component which is scattered from the backside and coming to us through the hole of a patchy torus (see Figure 2 of ). The model can be read as: TBabs(powerlaw \* MYTZ + *A**S*00MYTS + *A**L*00MYTL+ soft excess). The Decoupled configuration did not give us a significantly deviated fit from the Coupled model. Γ was found to follow the same trend as the Coupled model. The line-of-sight column density (*N**H*, *Z*) was observed to follow the same pattern as *N**H*, *e**q*. However, the averaged-global column density (*N**H*, *S*) was roughly constant for all nine observations. The scattering fraction (*A**S*) was 1.06 in 2007, which indicates that the reflection was dominating and delayed. In 2009 epoch, *A**S* < 1, which infers a weaker reflection compared to the primary emission. In 2013, & 2016 epochs, we observed *A**S* > 2 from this model. This indicates that the spectra were dominated by the reflection. We calculated FWHM and line flux for Fe K*α* line emission from the Gaussian convolution model gsmooth. We also calculated intrinsic luminosity (*L**i**n**t*) of the source using ‘clum’ command in XSPEC in the energy band of 2 − 10 keV. We computed the intrinsic luminosity for MYTZ component. The results of this model are tabulated in Table [tab:mytd]. The MYTORUS model fitted (decoupled configuration) spectra are shown in Figure [fig:mytd-spec]. In Figure [fig:contour], we show the contour plot for *N**H*, *Z* vs *N**H*, *S*. [tab:PL] lcccccccccccccc ID& *N**H* & Γ & PL. norm & Line E& *F**K**α*& EW & *f**s* &*k**T**a**p**e**c* & *χ*2/dof&*k**T**e**c**o**m**p**T**T* & *τ* & *R* & *F*2 − 10*k**e**V*, *o**b**s* &(1022 cm− 2)& &(10− 3 ph cm− 2 s− 1)&(keV) & (10− 5 ph cm2 s− 1)&(eV) &(10− 2) &(keV)& &(keV)& & & (10− 11 ergs cm− 2 s− 1) S1 & 20.9− 0.3+ 0.3 & 1.81− 0.03+ 0.04 & 33.2− 0.22+ 0.24 & 6.37− 0.12+ 0.08 & 1.52− 0.10+ 0.09 & 114 & 1.94− 0.13+ 0.17 & 0.23− 0.08+ 0.06 & 2614/2549 &37.2− 1.8+ 2.1&3.08− 0.25+ 0.22 &0.62− 0.09+ 0.10 &1.68− 0.14+ 0.04 C1 & 18.9− 0.3+ 0.3 & 1.69− 0.05+ 0.05 & 6.94− 0.30+ 0.33 &  −  &  −  &  −  & 1.68− 0.17+ 0.12 & 0.35− 0.11+ 0.07 & 283/ 293 &44.7− 2.3+ 1.8&2.97− 0.34+ 0.19 &1.09− 0.14+ 0.13 &1.23− 0.02+ 0.22 C2 & 17.7− 0.4+ 0.4 & 1.72− 0.04+ 0.04 & 8.06− 0.79+ 0.64 &  −  &  −  &  −  & 2.58− 0.10+ 0.14 & 0.45− 0.07+ 0.10 & 305/ 321 &46.6− 4.1+ 3.1&0.39− 0.09+ 0.05 &1.35− 0.19+ 0.22 &1.68− 0.21+ 0.35 C3 & 18.3− 0.6+ 0.7 & 1.78− 0.05+ 0.06 & 8.04− 0.13+ 0.14 &  −  &  −  &  −  & 2.35− 0.12+ 0.08 & 0.44− 0.13+ 0.10 & 289/ 293 &43.1− 5.4+ 5.4&0.51− 0.10+ 0.07 &0.83− 0.15+ 0.11 &1.52− 0.21+ 0.33 C4 & 18.5− 0.5+ 0.6 & 1.78− 0.04+ 0.06 & 7.72− 0.37+ 0.47 &  −  &  −  &  −  & 1.78− 0.19+ 0.16 & 0.37− 0.12+ 0.07 & 295/ 315 &44.1− 4.9+ 5.1&0.49− 0.05+ 0.09 &1.74− 0.22+ 0.19 &1.62− 0.10+ 0.24 C5 & 18.3− 0.3+ 0.5 & 1.79− 0.04+ 0.04 & 9.38− 0.49+ 0.42 &  −  &  −  &  −  & 1.61− 0.27+ 0.22 & 0.47− 0.05+ 0.05 & 306/ 299 &46.1− 5.4+ 6.2&0.89− 0.14+ 0.14 &0.82− 0.10+ 0.07 &1.44− 0.25+ 0.16 N1 & 14.9− 0.3+ 0.3 & 1.61− 0.02+ 0.02 & 4.67− 0.53+ 0.41 & 6.36− 0.34+ 0.29 & 1.18− 0.24+ 0.30 & 258 &  −  &  −  & 758/ 702 &55.6− 2.2+ 1.9&0.77− 0.08+ 0.09 &0.86− 0.17+ 0.13 &1.59− 0.05+ 0.03 N2 & 11.7− 0.5+ 0.8 & 1.52− 0.02+ 0.02 & 3.46− 0.33+ 0.25 & 6.38− 0.09+ 0.10 & 7.39− 0.17+ 0.11 & 280 &  −  &  −  & 802/ 715 &79.6− 2.8+ 3.2&1.12− 0.16+ 0.09 &1.05− 0.19+ 0.16 &1.27− 0.02+ 0.04 N3 & 10.7− 0.6+ 0.5 & 1.54− 0.04+ 0.04 & 4.59− 0.44+ 0.37 & 6.32− 0.15+ 0.13 & 9.29− 0.10+ 0.14 & 299 &  −  &  −  & 781/ 821 &88.6− 3.5+ 2.8&0.54− 0.11+ 0.06 &0.74− 0.20+ 0.06 &1.60− 0.06+ 0.02 [tab:mytc] lcccccccccccc ID &Γ&PL. norm &*N**H*, *e**q* &*θ**o**b**s* & *A**S* = *A**L* & EW &*F**K**α* &FWHM &*f**s*& *k**T**a**p**e**c* &*χ*2/dof & &(10− 3 ph cm− 2 s− 1)&(1024 cm− 2)&(degree)& & (eV) &(10− 13 ergs cm− 2 s− 1) &(km s− 1)&(10− 2) &(keV)& S1& 1.84− 0.04+ 0.04 & 38.2− 2.92+ 3.42 & 0.232− 0.010+ 0.009 & 78.91− 2.75+ 2.67 & 1.36− 0.03+ 0.08 & 119− 10+ 13 &1.81− 0.14+ 0.13 & 5290− 471+ 529 & 1.30− 0.12+ 0.14 & 0.24− 0.01+ 0.02 &2476/2380 C1& 1.73− 0.03+ 0.05 & 8.08− 0.41+ 0.43 & 0.207− 0.006+ 0.009 & 79.20− 2.41+ 2.56 & 1.15− 0.13+ 0.10 & 112− 12+ 9 &1.94− 0.10+ 0.12 & 71− 7+ 5 & 1.35− 0.15+ 0.10 & 0.29− 0.04+ 0.03 & 278/292 C2& 1.75− 0.06+ 0.05 & 11.6− 0.49+ 0.41 & 0.199− 0.012+ 0.007 & 79.98− 1.94+ 2.09 & 0.62− 0.10+ 0.05 & 96− 10+ 8 &1.43− 0.08+ 0.10 & 52− 3+ 6 & 4.48− 0.21+ 0.17 & 0.28− 0.03+ 0.04 & 311/314 C3& 1.76− 0.04+ 0.03 & 8.86− 0.32+ 0.25 & 0.196− 0.008+ 0.004 & 76.17− 3.04+ 2.92 & 1.00− 0.22+ 0.16 & 88− 7+ 10 &1.93− 0.19+ 0.15 & 45− 4+ 8 & 1.25− 0.28+ 0.25 & 0.37− 0.04+ 0.07 & 295/294 C4& 1.79− 0.07+ 0.09 & 9.28− 0.16+ 0.14 & 0.203− 0.005+ 0.003 & 78.52− 1.79+ 1.75 & 0.94− 0.15+ 0.21 & 105− 11+ 6 &0.96− 0.08+ 0.12 & 81− 9+ 6 & 1.99− 0.14+ 0.10 & 0.40− 0.07+ 0.05 & 285/313 C5& 1.75− 0.06+ 0.04 & 10.6− 0.44+ 0.39 & 0.216− 0.008+ 0.006 & 82.45− 2.02+ 1.92 & 0.76− 0.09+ 0.07 & 102− 18+ 13 &1.67− 0.10+ 0.15 & 47− 5+ 5 & 4.77− 0.32+ 0.54 & 0.38− 0.06+ 0.05 & 305/296 N1& 1.63− 0.03+ 0.03 & 5.55− 0.58+ 0.71 & 0.159− 0.012+ 0.008 & 81.37− 3.55+ 2.48 & 2.18− 0.28+ 0.38 & 321− 13+ 14 &3.00− 0.35+ 0.32 & 28882− 1820+ 1711 &  −  &  −  & 763/719 N2& 1.48− 0.05+ 0.03 & 3.84− 0.41+ 0.55 & 0.132− 0.007+ 0.003 & 81.29− 2.23+ 1.70 & 3.40− 0.24+ 0.20 & 415− 21+ 15 &3.55− 0.22+ 0.33 & 28114− 3145+ 2987 &  −  &  −  & 798/711 N3& 1.51− 0.05+ 0.04 & 5.06− 0.22+ 0.14 & 0.119− 0.008+ 0.002 & 81.37− 1.65+ 1.38 & 3.03− 0.13+ 0.19 & 366− 19+ 12 &3.57− 0.21+ 0.28 & 37812− 4821+ 4547 &  −  &  −  & 812/819 [tab:mytd] lccccccccccccc ID &Γ&PL. norm &*N**H*, *Z* & *N**H*, *S* &*A**S* = *A**L*& EW & *F**K**α* & FWHM & *f**s*& *k**T**a**p**e**c* &*L*2 − 10*k**e**V*, *i**n**t* &*χ*2/dof & &( 10− 3 ph cm− 2 s− 1)& (1024 cm− 2) & (1024 cm− 2) & & (eV)&(10− 13 ergs cm2 s− 1) &(km s− 1) &(10− 2) & (keV)& (1041 ergs s− 1) & S1 & 1.84− 0.03+ 0.04 & 40.2− 0.37+ 0.41 & 0.218− 0.003+ 0.004 & 0.115− 0.005+ 0.005 & 1.06− 0.05+ 0.03 & 120− 16+ 14 &1.81− 0.14+ 0.12 & 5551− 711+ 682 & 1.39− 0.41+ 0.46 & 0.25− 0.02+ 0.02 & 13.86− 0.52+ 0.58 &2480/2379 C1 & 1.75− 0.05+ 0.04 & 7.65− 0.72+ 0.67 & 0.204− 0.006+ 0.009 & 0.113− 0.004+ 0.004 & 0.97− 0.12+ 0.11 & 109− 12+ 10 &1.88− 0.10+ 0.11 & 72− 8+ 5 & 1.25− 0.18+ 0.15 & 0.24− 0.03+ 0.04 & 9.73− 0.81+ 0.89 &285 /291 C2 & 1.73− 0.02+ 0.02 & 8.84− 0.52+ 0.45 & 0.206− 0.009+ 0.011 & 0.118− 0.004+ 0.004 & 0.79− 0.10+ 0.14 & 100− 7+ 9 &1.27− 0.15+ 0.10 & 49− 3+ 4 & 4.62− 0.38+ 0.35 & 0.33− 0.05+ 0.04 & 10.89− 0.70+ 0.62 &347 /346 C3 & 1.79− 0.06+ 0.04 & 8.07− 0.27+ 0.32 & 0.210− 0.008+ 0.007 & 0.116− 0.005+ 0.008 & 0.75− 0.14+ 0.10 & 89− 18+ 14 &1.87− 0.13+ 0.17 & 49− 6+ 6 & 1.43− 0.15+ 0.10 & 0.36− 0.08+ 0.04 & 9.88− 0.68+ 0.74 &285 /290 C4 & 1.79− 0.03+ 0.04 & 8.55− 0.23+ 0.19 & 0.193− 0.009+ 0.011 & 0.117− 0.005+ 0.004 & 0.65− 0.09+ 0.07 & 114− 10+ 8 &1.04− 0.10+ 0.08 & 82− 9+ 3 & 2.04− 0.09+ 0.13 & 0.39− 0.04+ 0.03 & 11.23− 0.81+ 0.75 &288 /312 C5 & 1.73− 0.07+ 0.09 & 9.66− 0.29+ 0.20 & 0.211− 0.009+ 0.011 & 0.112− 0.003+ 0.003 & 0.36− 0.10+ 0.05 & 95− 7+ 13 &1.93− 0.08+ 0.12 & 49− 7+ 6 & 4.17− 0.22+ 0.17 & 0.45− 0.10+ 0.07 & 12.84− 0.93+ 1.22 &355 /311 N1 & 1.62− 0.04+ 0.03 & 6.01− 0.32+ 0.21 & 0.144− 0.006+ 0.008 & 0.112− 0.003+ 0.003 & 2.28− 0.21+ 0.18 & 329− 21+ 15 &3.05− 0.21+ 0.23 & 30483− 2214+ 2114 &  −  &  −  & 10.66− 0.17+ 0.28 &747 /716 N2 & 1.46− 0.02+ 0.03 & 3.89− 0.20+ 0.16 & 0.123− 0.007+ 0.007 & 0.117− 0.003+ 0.002 & 3.08− 0.22+ 0.29 & 419− 22+ 17 &3.57− 0.41+ 0.35 & 28407− 3218+ 2983 &  −  &  −  & 8.56− 0.27+ 0.24 &755 /710 N3 & 1.51− 0.06+ 0.04 & 5.63− 0.21+ 0.19 & 0.108− 0.005+ 0.006 & 0.114− 0.003+ 0.003 & 2.61− 0.25+ 0.17 & 368− 19+ 14 &3.57− 0.33+ 0.29 & 39116− 4214+ 3671 &  −  &  −  & 10.16− 0.22+ 0.23 &811 /816 [t]0.33 [fig:figure141] [t]0.33 [fig:figure142] [t]0.33 [fig:figure143] [t]0.33 [fig:figure144] [t]0.33 [fig:figure145] [t]0.33 [fig:figure146] [fig:contour] Discussion ========== We studied NGC 6300 between 2007 and 2016 by using the data from the *Suzaku*, *Chandra*, and *NuSTAR* observations. We mainly concentrate our study on the nuclear core and circumnuclear torus region. However, there is a possibility that the extra-nuclear X-ray sources could contribute to the X-ray emission and affect our analysis. The X-ray luminosity of NGC 6300 is  ∼ 1043 ergs s− 1. Although, the time-averaged luminosity of X-ray binaries can reach reach up to  ∼ 1037 ergs s− 1. The maximum luminosity of an accretion powered X-ray Pulsar can reach up to  ∼ 1037 − 1039 ergs s− 1 depending on a range of magnetic field between  ∼ 1014 − 1015 G. The ultra-luminous X-ray (ULX) sources show luminosity about  < 1041 ergs s− 1. The X-ray luminosity of supernovae can reach as high as  ∼ 1041 ergs *s*− 1 in the initial days. Thus, the luminosity of supernovae is  ∼  0.01 − 1 per cent of our source of interest in this work. Even then, within the period of observations used in present work, no such events were recorded from the optical observations around the field of view of NGC 6300. Thus, contamination from these sources can be neglected for variability studies. The positional accuracy of the *Suzaku*/XIS is around 19ʺ at 90 per cent confidence level. At a similar confidence level, *Chandra* has 0.8ʺ − 2ʺ positional accuracy depending on the brightness of the source. In case of *NuSTAR*, the positional accuracy varies between 8ʺ − 20ʺ. For *Suzaku* and *NuSTAR*, there can be thousands of transient sources within the source region. The number reduces to hundreds for *Chandra*. Nevertheless, none of them can be as bright as the AGN core itself. Thus, the X-ray emission must originate from the nucleus. [sec:torus] Properties of torus ------------------- We refer the circumnuclear absorbing material as ‘torus’. It does not have to be a uniform torus; it could be a clumpy or patchy torus. We obtained the line-of-sight column density (*N**H*, *Z*) and equatorial column density (*N**H*, *e**q*) from the powerlaw and MYTORUS-coupled model fitting, respectively. We observed that *N**H*, *e**q* is slightly higher than *N**H*, *Z*. This is expected as matter density is expected to be high in the equatorial region. Nonetheless, they both show a similar trend over the years. During the 2007 epoch, the column density was high with *N**H*, *e**q* = 2.32 × 1023 cm− 2. It decreased over the years and after 9 years, it became almost half with *N**H*, *e**q* = 1.19 × 1023 cm− 2. The overall column density of the absorber is found to be varying over nine years. *Chandra* observed the source five times within 11 days in 2009. During that epoch, the *N**H*, *e**q* varied between 1.99 × 1023 cm− 2 and 2.16 × 1023 cm− 2. Considering a uniform torus around the AGN, the variation of column density within a few days remains a puzzle. However, in an alternative scenario, showed that the column density could be changed in timescale as short as a day if the torus is clumpy. To further investigate the nature of the torus, we used the decoupled configuration of the MYTORUS model. We find (see Table [tab:mytd]) that the line-of-sight column density (*N**H*, *Z*) changed over the years while the global-averaged column density (*N**H*, *S*) remained almost same. The variation of *N**H*, *Z* can be explained by the migrating clouds in the line of sight. During the *Chandra* observations in 2009, minimum change in the line of sight column density was  ∼ 2 × 1021 cm− 2 (between C1 & C2). If we assume that the column density of each cloud is  ∼ 2 × 1021 cm− 2, then the number of clouds in the line of sight in 2009 would be 102 ± 10. Similarly, reported that the number of clouds in the line of sight for Mkn 3 was 17 ± 5 in 2014. NGC 6300 is known as ‘changing look’ AGN. argued that the temporary ‘shut-off’ of the nucleus could be the reason for the ‘changing-look’. However, the transiting clouds in the line of sight would naturally explain the variation of observed flux and the line of sight column density. The observed flux depends on both lines of sight column density and nuclear activity. Thus, the observed flux can be changed due to the transiting clouds in the line-of-sight even if the nucleus remains unchanged. Therefore, the ‘changing look’ AGN can be explained naturally by the transition of clouds in the line of sight. Another possible explanation for *N**H* variation could be the ionization of the ‘torus’. The outer surface of the torus can be ionized by the radiation from the AGN that has leaked through the clumpy torus. showed that the transition of changing look quasar SDSS J015957.64+003310.5 could be explained by the ionization, but not with the transiting clouds. Nevertheless, in the case of NGC 6300, no ionized line was observed. Thus, it is improbable that the variable column density is due to ionization. [sec:nuclear] Nuclear Emission ---------------- We find the variability above 3 keV for NGC 6300 in all nine observations. The calculated rms fractional variability amplitude in 3 − 10 keV or higher (10 − 40 keV) energy bands for all the observations are presented in Table [tab:fvar]. The observed variability of the primary emission (>3 keV) is consistent with the other Seyfert 2 galaxies. In the case of supermassive black holes, the X-ray emission is mostly originated from the Compton cloud. The properties of the X-ray spectra (popularly charactarized with the photon index Γ) depend on the properties of the Compton cloud which is charactarized by hot electron temperature (*k**T**e*), optical depth (*τ*) and cut-off energy (*E**c*). We obtained Γ from the powerlaw model, *k**T**e* and *τ* from the compTT model (see Table [tab:PL]). We tried to estimate *E**c**u**t* using the cutoffpl model, but were unable to constrain it. All observations indicated *E**c* > 500 keV which is the upper limit of the cutoffpl model. The highecut model also failed to constrain *E**c**u**t* of the source. Nevertheless, considering an isotropic electron cloud with seed photon source placed at the centre, as sugested by, one can get an estimate of cutoff energy using *E**c* ∼ 3*k**T**e* for *τ* ≫ 1, and *E**c* ∼ 2*k**T**e* for *τ* ≤ 1. Using the relation and the parameters obtained from the compTT model, it can be implied that the cut-off energy might be 60.0 < *E**c* < 200.0 keV and rise up neatly within this period of time. We also noticed the increase in *E**c* with the time considering the simple theoretical approach. However, it would be naive to draw any conclusion based on the cut-off energy variation. The cut-off energy (*E**c*) could not be constrained by earlier observations too. In our observation, we found Γ ∼ 1.7 − 1.8 during the 2007 & 2009 epochs. During these epochs, the Compton cloud temperature was  ∼ 40 keV. Later, in 2013 epoch, *k**T**e* was found to increase, whereas the value of Γ decreased. This is expected since the hotter Compton cloud emits harder and flatter spectrum. We found an anti-correlation between *k**T**e* and Γ with Pearson correlation index of -0.9187. This strong anti-correlation indicates that the hot electron cloud is the primary source for the harder spectra. The correlation is presented in Figure [fig:6]. We calculated source intrinsic luminosity (*L**i**n*) in 2 − 10 keV energy range and found to be varied over the years. *L**i**n* was 1.4 × 1042 ergs s− 1 in 2007. After that, it decreased in later epochs. *L**i**n* varied between 0.86 × 1042 ergs s− 1 and 1.18 × 1042 ergs s− 1 in 2009, 2013 & 2016 epochs. The variation of accretion rate could be responsible for the variation of *L**i**n*. To get an estimate of the accretion rate, we calculated the bolometric luminosity with bolometric correction *κ**b**o**l* = 1.44 ± 0.12 dex. We estimated the Eddington luminosity of the source as 1.3 × 1045 ergs s− 1 assuming the mass of the BH as 107 *M*⊙. The bolometric luminosity depends on the mass accretion rate as, $L\_{bol} = \eta \dot{M} c^2$, where *η* is the energy conversion efficiency. The Eddington luminosity is $L\_{Edd} = \eta \dot{M}\_{Edd} c^2$. Thus, the mass accretion rate ($\dot{M}$) in terms of Eddington mass accretion rate ($\dot{M}\_{Edd}$) is, $\dot{m} = \dot{M}/\dot{M}\_{Edd}$. The Eddington ratio (*λ**E**d**d*) is defined as *λ**E**d**d* = *L**b**o**l*/*L**E**d**d*. Since *η* is the energy conversion efficiency, $L\_{Edd} = \eta \dot{M}\_{Edd} c^2$ and $L\_{bol} = \eta \dot{M} c^2$. Therfore, $L\_{bol}/L\_{Edd} = \lambda\_{Edd} = \dot{m}$. The observed $\dot{m}$ or the Eddington ratio is consistent with other Seyfert 2 galaxies. We found that the intrinsic luminosity (*L**i**n*) is strongly correlated with the Eddington ratio (*λ**E**d**d*) with the Pearson correlation index as 0.9967. We also found a strong correlation between *λ**E**d**d* and Γ with Pearson correction index of 0.7472. The correlation of *λ**E**d**d* and Γ is studied by many authors and a positive correlation is found for NGC 6300 as well. We also found a correlation between *L**i**n* and Γ with Pearson correlation index as 0.7604. This positive correlation of Γ and *L**i**n* is observed in high redshift objects, but not in the local universe. Above three correlations (*L**i**n* − *λ**E**d**d*, *λ**E**d**d* − Γ, *L**i**n* − Γ) can be explained in a single framework of variable accretion rate. If the mass accretion rate increases, the radiated energy will increase, thus, the intrinsic luminosity. Hence, $\dot{m} \propto L\_{in}$, or *λ**E**d**d* ∝ *L**i**n*. Again, if the luminosity increases, it will cool down the Compton cloud more efficiently implying a steeper photon index, i.e. *L**i**n* ∝ Γ. These relations are well known for Galactic black holes. [sec:reflection] Reflection & Line Emission -------------------------- Hard X-ray emission from the Compton cloud is reflected from a cold neutral medium surrounding the AGN. Within our spectral domain, the reflection component consists of a reflection hump and an iron fluorescent emission line. Due to higher inclination, the Seyfert 2 galaxies usually show stronger reflection than the Seyfert 1 galaxies. The strength of the reflection component can be obtained from the pexrav (*R*, relative reflection) or MYTORUS (*A**S*; relative normalization of scattering) model. As the reflection hump is observed in 15 − 30 keV, *R* may not be constrained with the *Chandra* data. In this case, the MYTORUS model can constrain *A**S* as both the reflected and the line emission are computed self-consistently. Along with the reflection hump, the iron line emission is a good indicator of reflection. NGC 6300 is known to emit strong reflection component with unusually high *R* (> 4). The reflection dominated *RXTE* spectrum was expected to show EW  ∼ 1 keV although the observed EW was about  ∼ 470 eV. argued that the sub-solar abundances were the reason behind the observation of narrow line-width than expected. The EW of Fe fluorescent line was decreased in 1999 and 2001 observations with 140 eV and 69 eV, respectively although the relative reflection was still high. 0.5cm [fig:6] In our analysis, the Fe K*α* line was observed in the spectra of NGC 6300 during 2007, 2013 & 2016 epochs from the powerlaw model fitting. However, we did not detect the Fe K*α* line in the spectra from the 2009 epochs while fitting the data with the powerlaw model. On the other hand, the MYTORUS model fitting result showed the presence of Fe line in all observations. This discrepancy arose as the MYTORUS model compute Fe line self-consistently, while an ad-hoc Gaussian component was added as Fe line in the powerlaw model. Thus, we made our discussion on the reflection and Fe K*α* line based on the results obtained from the spectral analysis with MYTORUS model. During 2007, the iron K*α* line was observed with an equivalent width of 120− 16+ 14 eV (see Table [tab:mytd]). In 2009 epoch, a marginally narrower line (95-114 eV) was observed from the *Chandra* observations. Though similar EW was observed in 2007 & 2009 epochs, a different relative scattering normalization was found. In 2007, *A**S* = 1.06 indicates that the reflection dominates over the primary continuum. In 2009, the reflection was weaker compared to 2007 (*A**S* < 1). However, in 2013 & 2016 epochs, the reflection became stronger (*A**S* > 2 with broad EW (>300 eV). The observed EW is consistent for the reflection from a Compton thin reprocessor for all the observations. We found that the EW and *A**S* are strongly correlated with Pearson correlation index of 0.9893. This indicates that the EW strongly depends on the strength of the reflection. The Fe K*α* line flux is found to be strongly correlated with the EW and *A**S* with Pearson correlation indices of 0.9347 and 0.9275, respectively. This suggests that the Fe-line flux strongly depends on the strength of the reflection. The EW also correlates with the Compton cloud temperature (*k**T**e*) and has a correlation value of 0.9146. From this, one can implicate that the hotter Compton cloud causes more reflection from various regions which broadens the line-width. However, the Pearson correlation index between Fe K*α* line flux and *L**b**o**l* is estimated to be -0.5187, which suggests a moderate anti-correlation. This X-ray Baldwin effect was observed in several other sources. Nonetheless, the 2009 epoch seems to exhibit outliers of the Baldwin effect for NGC 6300. The reason behind this could be a separate origin of the iron line than the other epochs. Later, we focus on the Fe K*α* line emitting region. We calculated the Fe K*α* line emitting region from observed FWHM. In 2007, the FWHM of the Fe K-line was derived to be 5550 km s− 1 which indicates that the line emitting region would be  ∼ 3900 *r**g* away from the central source, i.e. the broad-line region (BLR). In 2009, we observed a narrow FWHM (< 100 km s− 1), which means the line emitting region is at  > 107 *r**g*, i.e.  > 17 pc located near the ‘torus’ region. In 2013, the *NuSTAR* observations revealed a broad FWHM with  ∼  30000 km s− 1. It implies that the line emitting region is  ∼ 130 *r**g* away. In Jan 2016 & Aug 2016 observations, the values of FWHM were  ∼ 28000 km s− 1 and 39000 km s− 1, respectively. In these two epochs also, the line emitting region would be  ∼ 150 *r**g* and  ∼ 80 *r**g* away, respectively. It is believed that the narrow Fe line is ubiquitous and can be emitted from the ‘torus’, the BLR or the accretion disc. From our analysis of NGC 6300, we found that the Fe fluorescent line was emitted from separate regions during various epochs. estimated the Fe K*α* line emitting region for NGC 6300 to be  ∼ 104 *r**g* in 2001, i.e. torus. During 2007, we find that the Fe-line was emitted from the BLR region. In 2009 epoch, we observed that the line emitting region as the ‘torus’. But, during 2013 and 2016 epochs, the iron line was emitted from the accretion disc. It is also possible that the narrow Fe K*α* line was emitted from the ‘torus’ in 2013 and 2016 epochs, but, could not be detected due to the presence of broader Fe K*α* line emission from the accretion disc. Considering the time-delay patterns between Fe-line and continuum during 2013 and 2016 epochs (see Figure [zdcf]), one can get a rough estimate of the size of the Compton cloud. Since the delay between these two is minimal compared to an AGN, it is possible that the Comptonized and reflected Fe-line component originated from a similar vicinity. Bearing this in mind, the broad iron line emitting region could be the farthest extents of Compton cloud during 2013 and 2016 epochs. [tab:lum] lcccccc ID &Date&*N**H*, *Z* &Γ&*L**b**o**l* & *λ**E**d**d* &(MJD)&(1023 cm− 2)& &(1043 ergs s− 1) & R1& 50493.50 &5.80 ± 0.22&1.71 ± 0.20 & 0.69\*& 0.005\* B2& 51418.50 &2.10 ± 0.10&2.19 ± 0.10 & 5.23\*& 0.041\* X3& 51971.50 &2.40 ± 0.15&1.94 ± 0.09 & - & - S1 & 54390.51 &2.18 ± 0.04&1.84 ± 0.04 & 3.82 ± 0.21& 0.029 ± 0.001 C1 & 54985.27 &2.04 ± 0.08&1.75 ± 0.03 & 2.68 ± 0.20& 0.021 ± 0.001 C2 & 54989.17 &2.06 ± 0.10&1.73 ± 0.02 & 3.00 ± 0.20& 0.023 ± 0.001 C3 & 54991.01 &2.10 ± 0.08&1.79 ± 0.05 & 2.72 ± 0.19& 0.021 ± 0.001 C4 & 54992.88 &1.93 ± 0.10&1.79 ± 0.04 & 3.09 ± 0.20& 0.024 ± 0.001 C5 & 54996.33 &2.11 ± 0.10&1.73 ± 0.08 & 3.53 ± 0.20& 0.027 ± 0.001 N1 & 56348.89 &1.44 ± 0.07&1.62 ± 0.04 & 2.93 ± 0.10& 0.023 ± 0.001 N2 & 57411.03 &1.23 ± 0.07&1.46 ± 0.02 & 2.36 ± 0.09& 0.018 ± 0.001 N3 & 57624.35 &1.08 ± 0.06&1.51 ± 0.05 & 2.80 ± 0.08& 0.022 ± 0.001 [sec:soft] Soft Excess ----------- Soft excess ( <  3 keV) is found almost in every AGN. However, the origin of the soft excess is poorly understood. One possible origin of soft excess could be the reflection from an optically thick warm Comptonizing region or the reflection from the ionized accretion disc. The origin of soft excess can be explained by the heating of circumnuclear gas from the shock produced by AGN outflows or photoexcitation and photoionization of circumnuclear gas of the primary emission of the AGN. The high-resolution capabilities of X-ray observatories such as *XMM-Newton* and *Chandra* have supported the later scenario in recent studies, e.g. NGC 1068 ;, the Circinus galaxy, Mkn 3 ; ;,. proposed shock heating near the ISCO could produce the soft excess. The theory successfully demonstrated the spectra of Seyfert 1 galaxy Ark 120. presented Bulk Motion Comptonization as a plausible cause of the soft excess. presented a new perspective on the soft excess by attaching the component with the high mass accretion rate of the disc itself. In our observations of NGC 6300, we modelled the soft excess with constant\*(powerlaw + apec). The powerlaw normalization and Γ were tied with the corresponding parameters of the primary component. We find that the soft excess contributed about up to  ∼ 6 per cent. The APEC model temperature varied between 0.2 and 0.5 keV, indicating a warm absorber. We also checked the fractional variability rms amplitude (*F**v**a**r*) for the soft excess, i.e. in the energy band of 0.5 − 3 keV. We found variability in this energy band in some observations (see Table [tab:fvar]). The average variability is higher in the soft excess than the primary emission. claimed that for lower *L*/*L**E**d**d* sources, the energy-dependent variabilities is less for the soft excess part. However, this contradicts our findings. Higher variabilities in the soft excess indicate that the soft excess could be the scattered primary emission from the warm reflector or accretion disc. The absence of variability in some observations infers that it might have originated from the ambient medium during those observations. From the spectral studies (see Table [tab:PL], [tab:mytd]), we also found variable *f**s* which implicates a dynamic mechanism such as reflection or complex absorbing medium could be responsible for the origin of the soft excess. Overall, the origin of the soft excess is complex. It is plausible that more than one factor contributed to the soft excess part of the spectra. ![Variation of (a) line-of-sight column density (N_H) in 10^{23} cm^{-2} unit, (b) photon index (\Gamma), (c) 2-10 keV absorbed flux (F) in 10^{-11} ergs cm^{-2} s^{-1} unit and (d) 2-10 keV bolometric luminosity (L_{bol}) in 10^{43} ergs s^{-1} unit are shown over the years.](past "fig:") [fig:past] [sec:evolution] Evolution of the System ----------------------- We studied the Seyfert 2 galaxy NGC 6300 between 2007 and 2016. During the period of observation, the source evolved over the years. The source was observed in the Compton-thick region in 1997, though, it was observed in the Compton-thin region in 1999. This made NGC 6300 a changing-look AGN. However, during our observation period between 2007 and 2016, NGC 6300 remained Compton-thin (as *N**H* < 1.5 × 1024 cm− 2). Although, we observed a change in the line-of-sight column density (*N**H*, *Z*) over the years, the global averaged column density (*N**H*, *S*) remained constant. It can be explained with the transiting clouds along the line-of-sight (see section [sec:torus]). In Figure [fig:past] (a), we show the evolution of the line-of-sight column density over the years. In addition to the evolution of the circumnuclear properties, the nuclear region also evolved over the years (see Table [tab:lum]). The photon index (Γ), absorbed flux, and luminosity changed over the years. In 1997, the source was observed with very low luminosity with bolometric luminosity, *L**b**o**l* = 6.9 × 1042 ergs s− 1. In 1999, the bolometric luminosity increased to *L**b**o**l* = 5.23 × 1043 ergs s− 1. The photon index also increased to Γ = 2.11 from Γ = 1.71 in 1999. Since then, the photon index decreased over the years till 2016. However, it would be naive to conclude that the Γ decreased gradually since the source was not observed on a regular basis. Nonetheless, both the flux and luminosity changed over the years. The change in the luminosity can be explained with the evolution of the mass accretion rate. We also checked for a correlation between the intrinsic luminosity *L**i**n* and the line-of-sight column density (*N**H*, *Z*) which we fail to observe. Such a correlation would indicate luminosity dependent covering factor. Since no correlation was found between these two, it is likely that the ‘torus’ and the AGN evolved independently. Conclusion ========== We study NGC 6300 between 2007 & 2016. Over the 9 years of observations, we investigated and found that NGC 6300 evolves with time. NGC 6300 was previously reported as changing-look AGN. However, we find it in Compton-thin region in every observation. Following are the findings from our work. 1. The obscured torus is not uniform; rather, it is clumpy. Global averaged column density is constant over the years. However, the line-of-sight column density changes with time. This change is interpreted as due to the transiting clouds along the line-of-sight. 2. The nuclear region was found to evolve over the years. The intrinsic luminosity of the source changes with time. The change in the mass accretion rate is likely to be responsible for that. 3. The torus and primary nucleus evolved independently, at least during our observation. We did not find any relation between column density and intrinsic luminosity. 4. The Fe K*α* line emitting region is different in different epochs. During 2007, 2009, and 2013-16, the primary source of Fe K*α* emission was BLR, ‘torus’ and accretion disc, respectively. Narrow Fe K*α* line is originated in the torus and could be present in all epochs. Although, narrow Fe K*α* lines were not detected in every epoch in presence of broad Fe K*α* line. 5. We find variability in both soft excess (0.5-3 keV range) and primary emission (> 3 keV). The variability in the soft excess infers that it could be scattered primary emission. However, a lack of variability in some observations infers that the origin of the soft excess is complex. Acknowledgements ================ We acknowledge the anonymous Reviewer for the constructive review which improved the clarity of the manuscript. A.J. and N. K. acknowledges support from the research fellowship from Physical Research Laboratory, Ahmedabad, India, funded by the Department of Space, Government of India for this work. AC acknowledges Post-doctoral fellowship of S. N. Bose National Centre for Basic Sciences, Kolkata India, funded by Department of Science and Technology (DST), India. PN acknowledges Council of Scientific and Industrial Research (CSIR) fellowship for this work. This research has made use of data and/or software provided by the High Energy Astrophysics Science Archive Research Center (HEASARC), which is a service of the Astrophysics Science Division at NASA/GSFC and the High Energy Astrophysics Division of the Smithsonian Astrophysical Observatory. This work has made use of data obtained from the *Suzaku*, a collaborative mission between the space agencies of Japan (JAXA) and the USA (NASA). The scientific results reported in this article are in part based on observations made by the Chandra X-ray Observatory. This research has made use of software provided by the Chandra X-ray Center (CXC) in the application packages CIAO, ChIPS, and Sherpa. This work has made use of data obtained from the *NuSTAR* mission, a projects led by Caltech, funded by NASA and managed by NASA/JPL, and has utilised the NuSTARDAS software package, jointly developed by the ASDC, Italy and Caltech, USA. This research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France. Data Availability ================= We have used archival data for our analysis in this manuscript. All the models and software used in this manuscript are publicly available. Appropriate links are given in the manuscript. 99 Alexander, T. 1997, in Astrophysics and Space Science Library, Vol. 218, Astronomical Time Series, ed.D. Maoz, A. Sternberg, & E. M. Leibowitz, 163 Antonucci, R. R. J., & Miller, J. S. 1985,, 297, 621 Antonucci, R. 1993,, 31, 473 Arnaud, K. A., Branduardi-Raymont, G., Culhane, J. L., et al. 1985,, 217, 105 Arnaud, K. A. 1996, Astronomical Data Analysis Software and Systems V, 17 Awaki H., Murakami H., Leighly K. M., Matsumoto C., Hayashida K., Grupe D.,  2005,, 632, 793 Baloković, M., Brightman, M., Harrison, F. A., et al. 2018,, 854, 42 Beckerman, E., Aldcroft, T., Gaetz, T. J., et al. 2004,, 445 Bennett C. L. et al.,  2003,, 148, 1 Bianchi, S., Miniutti, G., Fabian, A. C., & Iwasawa, K.  2005b,, 360, 380 Boorman, P. G., Gandhi, P., Baloković, M., et al. 2018,, 477, 3775 Brightman, M., & Nandra, K. 2011,, 413, 1206 Brightman, M., Baloković, M., Ballantyne, D. R., et al. 2017,, 844, 10 Broos, P. S., Townsley, L. K., Feigelson, E. D., et al. 2010,, 714, 1582 Chevallier L., Collin S., Dumont A.-M., Czerny B., Mouchet M.,Gon ̧calves A. C.& Goosmann R., 2006, A&A, 449, 493 Davis, J. E.  2001,, 562, 575 Dai, X., Chartas, G., Eracleous, M., et al. 2004,, 605, 45 Done, C., Davis, S. W., Jin, C., et al. 2012,, 420, 1848 Edelson R. A., et al.,  1996,, 470, 364 Edelson R., Griffiths G., Markowitz A., Sembay S., TurnerM. J. L., Warwick R.,  2001,, 554, 274 Edelson R., Turner T. J., Pounds K., Vaughan S., Markowitz A.,Marshall H., Dobbie P., Warwick R.,  2002,, 568, 610 Edelson R., Malkan M.,  2012,, 751, 52 Fabian, A. C., Ballantyne, D. R., Merloni, A., et al.  2002,, 331, L35 Fabian A. C., Lohfink A., & Kara E., et al.  2015,, 451, 4375 Fruscione, A., McDowell, J. C., Allen, G. E., et al.  2006, Proc. SPIE, 6270, 62701V Fukazawa, Y., Mizuno, T., Watanabe, S., et al.  2009,, 61, S17 Fukumura, K., Hendry, D., Clark, P., et al. 2016,, 827, 31 García, J. A., Kara, E., Walton, D., et al. 2019,, 871, 88 Gaspar G., Díaz R. J., Mast D., D’Ambra A., Agüero M. P., Günthardt G.,  2019,, 157, 44 George, I. M., & Fabian, A. C. 1991,, 249, 352 Gierliński, M., & Done, C. 2004,, 349, L7 Gierliński, M., Middleton, M., Ward, M., et al. 2008,, 455, 369 Gruber, D. E., Matteson, J. L., Peterson, L. E., & Jung, G. V.  1999,, 520, 124 Guainazzi M.,  2002,, 329, L13 Guainazzi M., & Bianchi S.,  2007,, 374, 1290 Guainazzi, M., Risaliti, G., Awaki, H., et al. 2016,, 460, 1954 Haardt, F., & Maraschi, L. 1991,, 380, L51 Halpern, J. P. 1984,, 281, 90 Harrison, F. A., Craig, W. W., Christensen, F. E., et al. 2013,, 770, 103 Hernández-García, L., Masegosa, J., González-Martín, O., & Marquez, I.  2015, å,579, A90 HI4PI Collaboration, et al.,  2016, A&A, 594, 116 Huenemoerder, D. P., Mitschang, A., Dewey, D., et al.  2011,, 141, 129 Ikeda, S., Awaki, H., & Terashima, Y. 2009,, 692, 608 Iwasawa, K., & Taniguchi, Y. 1993,, 413, L15 LaMassa, S. M., Cales, S., Moran, E. C., et al. 2015,, 800, 144 Lansbury, G. B., Stern, D., Aird, J., et al. 2017,, 836, 99 Leighly K. M., Halpern J. P., Awaki H., Cappi M., Ueno S., Siebert J.,  1999,, 522, 209 Lohfink, A. M., Reynolds, C. S., Miller, J. M., et al. 2012,, 758, 67 Kaufman, J., Blaes, O. M., & Hirose, S. 2017,, 467, 1734 Khorunzhev, G. A., Sazonov, S. Y., Burenin, R. A., & Tkachenko, A. Y.,  2012, AstL, 38, 475 King, A.  2005,, 635, L121 Kinkhabwala, A., Sako, M., Behar, E., et al.  2002,, 575, 732 Koyama, K., Tsunemi, H., Dotani, T., et al. 2007,, 59, 23 Lu, Y., & Yu, Q. 1999,, 526, L5 Madsen, K. K., Reynolds, S., Harrison, F., et al.  2015,, 220, 8 Madsen, K. K., Forster, K., Grefenstette, B. W., et al.  2017,, 841, 56 Magdziarz P., Zdziarski A. A.,  1995,, 273, 837 Magdziarz, P., Blaes, O. M., Zdziarski, A. A., et al. 1998,, 301, 179 Matsumoto C., Nava A., Maddox L. A., Leighly K. M., Grupe D., Awaki H., Ueno S.,  2004,, 617, 930 Matt, G., Perola, G. C., & Piro, L. 1991,, 247, 25 Matt G., Guainazzi M., Maiolino R., 2003, mnras, 342, 422 Markowitz, A. G., Krumpe, M., & Nikutta, R. 2014,, 439, 1403 Mehdipour, M., Branduardi-Raymont, G., Kaastra, J. S., et al. 2011,, 534, A39 Meyer M. J., et al.,  2004,, 350, 1195 Murphy K. D., Yaqoob T.,  2009,, 397, 1549 Mushtukov, A. A., Suleimanov, V. F., Tsygankov, S. S., et al. 2015,, 447, 1847 Nandra K., George I. M., Mushotzky R. F., Turner T. J., YaqoobT.,  1997,, 476, 70 Nandra, K. 2006,, 368, L62 Nardini, E., Fabian, A. C., Reis, R. C., et al. 2011,, 410, 1251 Netzer, H. 2013, The Physics and Evolution of Active Galactic Nuclei Panessa, F., Bassani, L., Landi, R., et al. 2016,, 461, 3153 Petrucci, P. O., Haardt, F., Maraschi, L., et al.  2001,, 556, 716 Rees, M. J. 1984,, 22, 471 Remillard, R. A., & McClintock, J. E. 2006,, 44, 49 Ricci, C., Walter, R., Courvoisier, T. J.-L., et al. 2011,, 532, A102 Ricci, C., Paltani, S., Ueda, Y., et al. 2013a,, 435, 1840 Ricci, C., Paltani, S., Awaki, H., et al. 2013b,, 553, A29 Risaliti, G., Elvis, M., & Nicastro, F. 2002,, 571, 234 Risaliti, G., Young, M., & Elvis, M. 2009,, 700, L6 Rodríguez-Pascual P.M., Alloin D., Clavel J., et al.,  1997,, 110, 9 Ross R. R., & Fabian A. C.,  2005,, 358, 211 Saez, C., Chartas, G., Brandt, W. N., et al. 2008,, 135, 1505 Sako, M., Kahn, S. M., Paerels, F., & Liedahl, D. A.  2000,, 543, L115 Sambruna, R. M., Netzer, H., Kaspi, S., et al.  2001,, 546, L13 Shakura, N. I., & Sunyaev, R. A. 1973,, 500, Shemmer, O., Brandt, W. N., Netzer, H., et al. 2006,, 646, L29 Shemmer, O., Brandt, W. N., Netzer, H., et al. 2008,, 682, 8133 Sikora, M., Stawarz, Ł., & Lasota, J.-P.  2007,, 658, 815 Singh, K. P., Garmire, G. P., & Nousek, J. 1985,, 297, 633 Skrutskie M. F., et al., 2006,, 131, 1163 Sobolewska M. A. & Done C.,, 2007, 374, 150 Pounds, K. A., & Page, K. L.  2005,, 360, 1123 Takahashi, T., Abe, K., Endo, M., et al. 2007,, 59, 35 Tetarenko, B. E., Sivakoff, G. R., Heinke, C. O., et al. 2016,, 222, 15 Titarchuk, L. 1994,, 434, 570 Uchiyama, Y., Maeda, Y., Ebara, M., et al. 2008,, 60, S35 Urry, C. M., & Padovani, P. 1995,, 107, 803 Vaughan, S., Edelson, R., Warwick, R. S., et al. 2003,, 345, 1271 Walton, D. J., Nardini, E., & Fabian, A. C., et al.  2013,, 428, 2901, Wilms, J., Allen, A., & McCray, R. 2000,, 542, 914 Wu, X.-B., & Liu, F. K. 2004,, 614, 91 Yaqoob T., 2012, MNRAS, 423, 3360 Yaqoob, T., Tatum, M. M., Scholtes, A., et al. 2015,, 454, 973 Young, A. J., Wilson, A. S., & Shopbell, P. L. 2001,, 556, 6 [lastpage] --- 1. <!-- h='&#112;&#114;&#108;&#46;&#114;&#x65;&#x73;&#46;&#x69;&#110;';a='&#64;';n='&#x61;&#114;&#x67;&#104;&#x61;';e=n+a+h; document.write('<a h'+'ref'+'="ma'+'ilto'+':'+e+'">'+e+'<\/'+'a'+'>'); // --> argha at prl dot res dot in[↩](#fnref1) 2. <!-- h='&#98;&#x6f;&#x73;&#x65;&#46;&#114;&#x65;&#x73;&#46;&#x69;&#110;';a='&#64;';n='&#x61;&#114;&#x6b;&#x61;&#x63;&#104;&#x61;&#116;&#116;&#x65;&#114;&#106;&#x65;&#x65;';e=n+a+h; document.write('<a h'+'ref'+'="ma'+'ilto'+':'+e+'">'+e+'<\/'+'a'+'>'); // --> arkachatterjee at bose dot res dot in[↩](#fnref2) 3. <http://heasarc.gsfc.nasa.gov/>[↩](#fnref3) 4. <http://heasarc.gsfc.nasa.gov/docs/suzaku/analysis/abc/>[↩](#fnref4) 5. <http://www.astro.isas.jaxa.jp/suzaku/caldb/>[↩](#fnref5) 6. <https://cxc.harvard.edu/ciao/download/>[↩](#fnref6) 7. <https://cxc.harvard.edu/ciao/download/caldb.html>[↩](#fnref7) 8. <https://cxc.cfa.harvard.edu/ciao/ahelp/chandra_repro.html>[↩](#fnref8) 9. <https://heasarc.gsfc.nasa.gov/xanadu/xspec/manual/node304.html>[↩](#fnref9) 10. <http://heasarc.gsfc.nasa.gov/FTP/caldb/data/nustar/fpm/>[↩](#fnref10) 11. 12. <https://www.mytorus.com/>[↩](#fnref12)
arxiv_0000299
because we want to obtain the best spatial resolution using SHARP (see section 2), and because 350 *μ*m emission is weighted towards the warmer emission of the cores immediately surrounding a YSO rather than towards the extended cooler core envelope. This paper gives a summary of our results on three of the YSOs in our study, L1527, B335, and IC348-SMM2. Section 2 outlines our observations and summarizes our results, Section 3 compares these results to magnetically regulated models, and Section 4 gives our conclusions. Observations and Results ======================== Polarimetry Observations and Data Analysis ------------------------------------------ The observations presented here were carried out at the Caltech Submillimeter Observatory (CSO) using the SHARP polarimeter. SHARP is a fore-optics module that provides polarimetric capability to SHARC-II, the CSO’s 32  ×  12 pixel submillimeter camera. SHARP divides the incident submillimeter radiation into two orthogonally polarized beams that are then reimaged onto the two ends of the “long and skinny” SHARC-II detector. A half-wave plate upstream of the polarization-splitting optics is rotated every few minutes during data collection, and the two orthogonally polarized 12  ×  12 pixel “sub-images” acquired for four different half-wave plate angles are combined in software to determine the total flux as well as the linear polarization state of the radiation. SHARC-II and SHARP are described respectively by Dowell et al. (2003) and Li et al. (2008). The 350 *μ*m observations described here were made with a beam size of 10$\arcsec$ (FWHM). The data were collected in chop-nod mode. This involves rapidly modulating the tilt of the CSO secondary mirror in cross-elevation (“chopping”) while more slowly “nodding” the entire telescope back and forth, thereby making near simultaneous measurements of the source and two nearby sky reference positions. A chop-nod observation is carried out at each of the four half-wave plate angles (0∘, 22.5∘, 45∘, and 67.5∘) thereby forming a “half-wave plate cycle”. As described by Dowell et al. (1998) and Kirby et al. (2005), the net effect is that the total source flux and a component of the linearly polarized flux are measured for each half-wave plate position in a cycle, while removing the spatially extended “background” that includes atmospheric and telescope emission as well as any Galactic emission covering a sky area large compared to the chopper throw. For the observations described here, the chopper throw ranged from 120$\arcsec$ (for B335 and L1527) to 300$\arcsec$ (for IC348-SMM2) and the chopping frequency was  ∼ 1 Hz. Because of sky rotation, the reference beams rotate about the main beam on the sky as the source is tracked. Our three targets do have extended flux, so it was necessary to avoid making observations at hour angles for which significant source flux contaminates the reference beams. Each SHARP half-wave plate cycle requires about 7 minutes of elapsed time. In between successive half-wave plate cycles, the pointing position is dithered by about 10-20$\arcsec$. Data were obtained for B335 during the nights of 28, 29, and 30 April 2007 ( ∼ 80 cycles; zenith submillimeter opacity *τ*350*μ**m* ∼  1.0 - 1.8), for IC348-SMM2 on the night of 10 November 2007 ( ∼ 20 cycles; *τ*350*μ**m* ∼  1.0), and for L1527 during the nights of 9 and 10 November 2007 and 6 and 10 September 2008 ( ∼ 85 cycles, *τ*350*μ**m* ∼  0.8 - 1.2). SHARP data analysis is carried out in two steps. In the first step, each individual half-wave plate cycle is processed to obtain three 12  ×  12 pixel maps, one for each of the Stokes parameters *I*, *Q*, and *U*.  (Parameter *I* corresponds to the total flux while *Q* and *U* fully characterize the linearly polarized flux.) A detailed discussion of the techniques used during this first step can be found in Section 3.4 of Hildebrand et al. (2000) and Equations 2, 3, and 4 of Attard et al. (2008). Corresponding error-maps are also obtained for each of the three Stokes parameters. This is done by first using the variance in the individual total and polarized flux measurements to estimate the uncertainties in this time stream (as in section 4.1 of Dowell et al. 1998) and then propagating these uncertainties to the Stokes parameter maps. At this point, we remove from the Stokes parameter maps any pixels that have abnormally high errors. The second step of the analysis involves combining these single-cycle 12  ×  12 pixel maps of *I*, *Q*, and *U*, to form final maps of *I*, *Q*, and *U*. We account for the dithering and the sky rotation by interpolating the single-cycle maps onto a regular equatorial-coordinate grid, which causes a modest loss of angular resolution (Houde & Vaillancourt 2007). Errors in the final maps are propagated from the corresponding errors in the single-cycle maps. Corrections for changing atmospheric opacity (Kirby et al. 2005) as well as for instrumental polarization and polarimetric efficiency (Li et al. 2008) are also made during this second analysis step. An important question that we can ask during the second step of the analysis is whether the single-cycle *Q* and *U* maps are consistent with one another within their nominal errors. Recall that these nominal errors are computed during the first step of the analysis. A quantitative answer to this question is provided by the reduced chi squared, *χ**r*2. Averaging together the *χ**r*2 values found over the *Q* and *U* maps of each source, we obtain mean *χ**r*2 values of 1.7 for L1527, 1.5 for IC348-SMM2, and 2.1 for B335. We do not know the origin of the extra errors that cause these elevated *χ**r*2 values. However, for each of our three data sets (one for each source) we were able to verify that these errors occur mainly on time scales that are short compared to the total duration of the data set. (This duration is several hours for IC348-SMM2, and several days or even longer for the other two sources.) Thus, it seems reasonable to treat the extra errors as if they are statistical in nature. Accordingly, we inflate our nominal errors by the square root of *χ**r*2. Finally, the degree *P* and angle *ϕ* of polarization and their associated errors *σ**P* and *σ**ϕ* are computed for each sky position via standard techniques (see Section 3.4 of Hildebrand et al. 2000). The uncertainties *σ**P* and *σ**ϕ* are affected by both the polarized flux errors (*σ**Q*, *σ**U*) and the total flux errors (*σ**I*). The latter have a negligible effect, however, because *P* <  < 100%. We consider sky positions for which *P* ≥ 2*σ**P* to be polarization detections and sky positions for which *P* + 2*σ**P* < 1.0% to be low upper limits on *P*.  *P* is then corrected downward to account for polarization bias (as described in Section 4.2 of Hildebrand et al. 2000 and in Vaillancourt 2006). We keep track of the sky positions where the polarimetric significance drops below 2*σ* after this bias correction, as discussed below. Note that we have chosen to set our detection threshold at *P* ≥ 2*σ**P* rather than applying the more conservative criterion *P* ≥ 3*σ**P* used for previous SHARP observations (e.g., Attard et al. 2009). When using the latter, more conservative threshold, the 1*σ* uncertainties *σ**ϕ* in the angles of polarization (which translate into 1*σ* uncertainties in the magnetic field angles) are below $\sim9.5\degr$ (Serkowski 1962). With our more lenient threshold, these 1*σ* error bars range up to almost $15\degr$ (see Table 1). Because our goal in this paper is to test the gross predictions of the magnetically regulated collapse models rather than its fine details, we believe that this degree of uncertainty is acceptable. However, it should be kept in mind when interpreting our polarimetry results. Polarimetry Results ------------------- Our polarization detections for L1527, IC348-SMM2, and B335 are presented in Table 1. Figures 2a, 3a, and 4a illustrate the results listed in Table 1. Contours indicate the total intensity, I, measured by SHARP. The highest contour level corresponds to 90% of the peak flux, and each subsequent contour represents a decrement of 10% of the peak flux. Except for very narrow strips at the map edges, all sky positions mapped have total flux errors well below 10% of the peak flux. The morphologies seen in our total intensity contour maps thus represent real structures, with the exception of a few very small flux peaks and flux holes seen at the map edges (e.g., southwest edge of Figure 3a, and northeast and northwest edges of Figure 4a). All three sources have been mapped at 850 *μ*m (Chandler & Richer 2000, Hatchell et al. 2005, Jørgensen et al. 2007), and the morphologies seen at this longer wavelength are similar to those seen in our SHARP maps. The bars in Figures 2a, 3a, and 4a indicate polarization detections. The length of each bar is proportional to *P* (see key at bottom left of each figure) and its orientation indicates the direction of the E-vector of the polarized radiation. The thickness of each bar shows its significance: sky points having greater than 3*σ* significance after bias correction are thickest, points with greater than 2*σ* post-correction significance but less than 3*σ* have intermediate thickness, and sky points for which the significance drops below 2*σ* after bias correction are shown using the thinnest bars. For two sky positions, we obtained 2*σ* upper limits below 1.0%. These correspond to the peak of B335, where we find a 2*σ* limit of 0.6% and a position offset from the peak of L1527 by (Δ*α*, Δ*δ*) = (+9.5$\arcsec$, -9.5$\arcsec$), where our 2*σ* limit is 0.9%. Although we do not make use of these upper limits in this paper, we include them in our figures since they represent sensitive measurements that may one day be useful. The two limits are indicated with open circles in Figures 2a and 4a. Comparing our polarization results for B335 with the 850 *μ*m polarimetry of this source presented by Wolf, Launhardt & Henning (2003; hereafter WLH03), we find reasonable agreement. The WLH03 map is discussed further in section 3.2.2 below. Similarly, the 850 *μ*m polarization map of L1527 that is presented with no interpretation in the SCUPOL archive paper (Matthews et al. 2009) agrees with our 350 *μ*m polarization map of this source. In making these comparisons, we refer only to the measured angles of polarization since the degree of polarization of thermal dust emission is known to generally have considerable wavelength dependence (Vaillancourt et al. 2008, and references therein). B335 Outflow Observations and Data Analysis ------------------------------------------- The B335 350 *μ*m emission is very compact (see Figure 4a). To date, outflow maps for B335 have not been made with sufficient spatial resolution to determine the structure and kinematics of the outflow within the compact core region of B335. As part of our survey, we measured the outflow within the B335 core using the CARMA interferometer. The CARMA observations of B335 were made with the 15-element array in the D-configuration in July 2008. The total duration of the observations was 8.4 hours. The QSO 1925+211 was the phase calibrator; its flux was determined to be 1.2 Jy, using both Uranus and Neptune as flux calibrators. Observations of 3C454.3 were used to calibrate the passband structure. The correlator was configured to place the CO 1-0 line in the upper sideband; with 63 channels across a 7.6 MHz bandwidth, the velocity resolution at 115.271 GHz is approximately 0.32 kms− 1. Phase calibration was carried out on observations of the 2.7mm continuum emission in two 500 MHz bands. The gains determined for these wide (continuum) bands were then applied to the narrow (line) bands. The data were calibrated and mapped using the MIRIAD software package. The Fourier transform of u,v visibilities was taken, constrained by an image cell size of 1$\arcsec$ and a natural weighting function. The size of the synthesized beam is 4.4$\arcsec$  ×  3.8$\arcsec$. The peak intensity in the continuum emission (Figure 4b, grayscale) is 16.8 mJy/beam; the rms noise is 1.8 mJy/beam. The blue and red contours in Figure 4b show the CO 1-0 line emission integrated over the velocity ranges 5.4 - 8.0 kms− 1 and 8.6 - 11.2 kms− 1, respectively. The contours are at intervals of 3*σ*, where *σ* is 174 mJy/beam  ×  kms− 1. The integrated intensity peak in the blueshifted outflow is 5.6 Jy/beam  ×  kms− 1; the peak in the redshifted outflow is 4.0 Jy/beam  ×  kms− 1. Discussion ========== Core Orientation with respect to the Outflows --------------------------------------------- Magnetically regulated models predict that a YSO will be surrounded by a pseudo-disk a few thousand AU in size, and that the pseudo-disk symmetry axis will be aligned or nearly so with that of the YSO bipolar outflow. In this sub-section we will explore the extent to which this is the case for our three sources. In the case of L1527, Hogerheijde & Sandell (2000) used a two Gaussian model to separate the 450 and 850 *μ*m emission of an inner compact core immediately surrounding the embedded YSO from the more extended emission. Their Table 3 shows a deconvolved FWHM compact core size of 10$\arcsec$  ×  5$\arcsec$ (1400 AU  ×  700 AU) with a position angle of the major axis  ∼  30∘ east of north. However, the interferometric C18O map of this core by Ohashi et al. (1997; see Figure 5a), made with slightly higher spatial resolution and with the advantage that the extended emission is automatically removed, implies the major axis of the core is oriented north-south, but that the outer regions of the flattened core are distorted – possibly by the bipolar outflow – thus giving an impression of a tilt away from the north-south orientation when mapped with slightly poorer resolution and extended emission confusion. The bipolar outflow measured by Hogerheijde et al. (1998) in 12CO and by Zhou, Evans & Wang (1996) in 13CO has a position angle of 90∘ east of north. Thus L1527 is observed to have a flattened core on the scale of a few thousand AU, consistent with an edge-on pseudo-disk with its symmetry axis aligned with the outflow axis. In the case of B335, Chandler & Sargent (1993) observed a highly flattened core at 2.7 mm that is resolved only along the major axis, which has a size of $8\farcs2$ (2000 AU) with a position angle of 20∘ east of north (see Figure 7a). The bipolar outflow measured in 12CO by ourselves in this paper, Hirano et al. (1988) and others has a position angle of 90∘ east of north. Thus B335 is observed to have a flattened core on the scale of a few thousand AU, consistent with an edge-on pseudo-disk with its symmetry axis slightly tilted relative to the outflow axis by about 20∘. The core emission from IC 348-SMM2 has not previously been observed in detail. We use our data to study the morphology of the core surrounding this source. Figure 3a shows the underlying 350 *μ*m continuum emission intensity from IC348-SMM2 which we measured with SHARP. The 50% 350 *μ*m contour of this source has a major axis with a position angle of 60∘ east of north. The diameter of this major axis is 27$\arcsec$, compared to the perpendicular minor axis diameter of 20$\arcsec$. However, the 50% contour probably does not represent the FWHM of the core surrounding IC348-SMM2. A more realistic estimation of the FWHM of the core would be the 65% contour, which represents the 50% contour level of the core after the subtraction of a more extended cloud background set at about the 30% contour level in Figure 3a. The 65% contour major axis has a position angle of 56∘ east of north, and the diameters of the major and minor 65% contours are 21$\arcsec$ and 15$\arcsec$, respectively. If we assume an elliptical Gaussian core, we can deconvolve these FWHM estimates with a 10$\arcsec$ FWHM Gaussian beam to get a core size of 18$\arcsec$  ×  11$\arcsec$ (5500 AU  ×  3300 AU) at a position angle of 56∘. The size measured here is larger than for a typical pseudo-disk quoted in the literature by about a factor of two, but at our spatial resolution we are probably not resolving the true FWHM of the pseudo-disk, we are more likely measuring the outer extended region. The bipolar outflow measured by Tafalla, Kumar & Bachiller (2006) in 12CO has a position angle of 17∘ west of north. Thus IC348-SMM2 is observed to have a flattened core on the scale of several thousand AU, consistent with the outer regions of an edge-on pseudo-disk with its symmetry axis tilted within 20∘ relative to the outflow. In summary, we do see evidence for pseudo-disks and a tendency for alignment, within 20∘, of the pseudo-disk symmetry axis and the outflow axis for each of the YSOs in our sample. Inferred Magnetic Field Structures ---------------------------------- Our submillimeter polarization measurements displayed in Figures 2a, 3a, and 4a do not give a measure of the strength of the magnetic field, but they do give an indication of the net magnetic field direction (Lazarian 2007) and the level of uniformity (Hildebrand et al. 2009) along a given line-of-sight. The net magnetic field direction along a given line-of-sight is perpendicular to the submillimeter polarization vector measured for that line-of-sight (Lazarian 2007). Figures 2b, 3b, and 4b display the direction of the magnetic fields surrounding L1527, IC348-SMM2 and B335, respectively, as determined by the polarimetry *E*-vectors rotated by 90 degrees. We compare these field directions with our observations of the cores’ density structure (i.e., pseudo-disk) and velocity structure (i.e., outflows, infall radii). We note, however, that our polarization data for the three cores likely contain contributions from both the magnetic field associated with the core under study as well as that associated with the larger surrounding cloud. To see this, recall (section 1) that the maps shown by Kirk et al. (2006) reveal large changes in field direction as one moves away from the center of a core into the surrounding cloud material; and note that these changes occur at positions located about one arcminute ( ∼ 8000 AU) from the centers of the Kirk et al. (2006) cores, while our own maps extend out to distances of 7000-15,000 AU from the central YSO. Thus a significant fraction of the emission we have observed may originate from the cloud at large, not the core under study. One way to account for such contamination is to flag polarization measurements made at positions having lower flux density. Such measurements could be contaminated to a large degree by line-of-sight emission associated with the larger parent cloud. The fluxes we measure at the edges of our three maps range from  ∼ 5% to  ∼ 40% of the respective peak flux (see contours in Figures 2a, 3a, and 4a). We have chosen to set a “contamination threshold” at 25% of peak flux; we will consider polarization measurements obtained at positions having flux below this threshold to be at risk of large amounts of contamination by polarized signals originating in the larger cloud. This choice is somewhat arbitrary, but it has the benefit of flagging polarization measurements coming from regions of very low flux while preserving most of our measurements; we flag as unreliable just four measurements out of 22. All of our *B*-vectors shown in Figures 2b, 3b, and 4b are drawn with the same length and thickness, and those associated with total intensity levels below our contamination threshold are shown using dashed lines. Figures 2b and 4b each contain one such suspect *B*-vector, and Figure 3b contains two. Also shown in Figures 2b, 3b, and 4b are the bipolar outflow morphologies in these sources (see the figure captions for references). In addition, these figures show circles depicting the measured outer limits of the infall regions in L1527 and B335 based on inverse P-Cygni line profiles (see captions for references). No measurement of the infall radius for IC348-SMM2 has been made to date, but such an infall region is likely to have a radius range of 20$\arcsec$ to 30$\arcsec$ for an infall age similar to the ages of L1527 and B335 (see Table 2). Below we will assume an infall radius of 25$\arcsec$ when we compare our observational results to models. The models we will compare our data to are those of ASL03a and ALS03b. These models consist of self-similar, self-gravitating, singular, isothermal toroids with various amounts of rotation and magnetization. The rotation speeds of the cores range from 0 to 0.5 times their thermal sound speeds and the magnetic-flux-to-mass-ratios of the cores range from from 0 to 0.5. All models are supercritical in order for collapse to occur without external pressure. Presumably the collapse phase occurs after ambipolar diffusion has occurred, producing the supercritical state in the core. ALS03b note, however, that even in the relatively weakened state of the fields, these fields are responsible for the formation of pseudo-disks, considerable transport of angular momentum, and the resulting size of the centrifugally supported Keplerian disk during the collapse phase, and so cannot be ignored. In Figures 5, 6, and 7 we compare our data on L1527, IC348-SMM2, and B335, respectively, to the model displayed in Figure 8c of ALS03b, a model with intermediate rotation and magnetic field strengths. It should be noted here that, except for the case where there is no magnetic field to flatten the core and provide polarization, the spatial resolution of our data precludes us from discerning between the various models presented in ASL03a and ALS03b (see Figures 7 and 8 in ALS03b). ALS03b show (their Figure 7) that rotation has only a minor effect on the gas and magnetic field geometry at the spatial scales we are measuring here. However, the aim of our current study is not to test the finer points of magnetic regulated collapse models, but the gross predictions represented in Figure 1 and evidence for magnetic field pinches. In addition to our spatial resolution constraints, it is important to bear in mind that our results represent an integration of polarizations along each line of sight, whereas the magnetic field given in Figure 8c of ALS03b represents only the cross-section of the poloidal field on the plane of the sky for an edge-on pseudo-disk. If this cross-section were rotated round the symmetry axis of the pseudo-disk, an integration along a line-of-sight would result in a weakening of the pinch geometry to a more uniform field aligned with the symmetry axis. ### Magnetic Field Structure around L1527 and IC348-SMM2 Scrutiny of the results displayed in Figures 2 & 3 reveals that the field structures in L1527 and IC348-SMM2 are generally consistent with the magnetically regulated dynamical collapse models cited in the introduction in that they show: (1) pinched field line vectors on the scale of the measured or inferred infall regions for these cores; and (2) field line vectors (with a few exceptions discussed below) that are basically aligned with the bipolar outflows (once the distortion of a pinch is subtracted by eye using Figures 5 & 6). The exceptions mentioned in (2) are the three polarization vectors in the low-flux region to the south of IC348-SMM2, which imply an east-west magnetic field, and a vector immediately east of the emission peak of L1527. A possible explanation for this latter vector is given later in this section. However, the east-west field lines in IC348-SMM2 cannot be explained in the context of a magnetically regulated model. It is possible these vectors may not be associated with the core of IC348-SMM2, since two of the three reside in a region which has emission less than 25% of the peak emission for the source, and the third resides in a region with emission that is 25.5% of the peak. Although in general the polarization results agree with magnetically regulated models, in detail we see significant discrepancies beyond the exceptions mentioned above. The scale of the pinched structure in magnetically regulated models depends on the size (i.e., age) of the infall region. Outside the infall region, the magnetic fields should be uniform or nearly uniform, depending on the details of the pre-collapse, quasi-static contraction of the core. Only within the infall regions should pinches of the field lines be significant. Figure 6 shows how our *B*-vectors in IC348-SMM2 (with our assumed infall radius and ignoring the three *B*-vector exceptions to the south) show a remarkable agreement with Figure 8c from ALS03b if the symmetry axis of the pseudo-disk has a 17∘ tilt with respect to the outflow as we measure for this source. The agreement of the predicted field geometry from the ALS03b model with our measured vector position angles is somewhat surprising since ALS03b Figure 8c gives a cross-section of the poloidal field on the plane of the sky for an edge-on pseudo-disk, while our measurements are an integration along the line of sight. Such an integration should smooth out the pinched structure to some degree. The fit by eye between the model and our measurements is not as good if the pseudo-disk symmetry axis in the model is instead aligned with the outflow axis. The fact that the magnetic field fit is better when the model is aligned to the measured pseudo-disk axis, rather than the outflow axis, is consistent with the hypothesis that the pinched magnetic field structure and the pseudo-disk are both products of magnetized contraction and collapse, whereas the outflow is probably a product of rotation. In IC348-SMM2, the rotation axis may not be exactly aligned with the overall magnetic field axis for the core. Figure 5 shows that in L1527 our *B*-vectors broadly agree with a pinched magnetic field structure aligned with the pseudo-disk axis as shown in Figure 8c of ALS03b, but the figure also shows that there are significant distortions beyond the uncertainties ( ± 11∘) in the angles of the vectors. We see these distortions close to the edge and in one case beyond the infall boundary. A possible explanation for this additional distortion could be the bipolar outflow in this source which overlaps a significant portion of the region containing our measured vectors (see Figure 2). The effects of bipolar outflows are not included in the models to which we are comparing our results (ASL03a; ALS03b; Galli & Shu 1993a,b). The bipolar outflow in L1527 may also be responsible for the exceptional vector identified previously, which lies immediately east of the L1527 emission peak. This vector implies a magnetic field direction that is almost perpendicular to the outflow. In this scenario, the outflow pushes core material, and therefore also core magnetic field lines, into two polar cones surrounding the bipolar outflow near the emission peak in L1527, thereby giving rise to both the additional distortions near the edge of the infall boundary as well as the exceptional vector just east of the peak. Evidence for conical cavities such as would be required in this scenario has been obtained via mid-IR scattered light observations by Tobin et al. (2008) as well as interferometric measurements of HCO+ by Hogerheijde et al. (1998). The latter is shown in Figure 2b. In addition, the submillimeter maps of Hogerheijde & Sandell (2000) and our 350 *μ*m map (Figure 2a) show evidence of an X-like structure in the extended background emission about L1527 that overlaps the observed outflow in that region, implying that significant submillimeter emission (and any polarization of such emission) must be coming from the surface of the outflow conical cavities. Could the bipolar outflow observed in L1527 have enough energy to distort the magnetic field structure linked to the gas it entrains? The diagrams in Figure 8 in ALS03b show the pinched magnetic field structures for a number of rotating, dynamically-collapsing toroids of different initial cloud magnetic field strengths. These diagrams also show the contours of *β* (the ratio of the thermal to magnetic energy density). Close to the axis of symmetry *β* is much less than one ( ∼  0.1), further from the axis of symmetry *β* increases to values above one. These values of *β* can be used together with estimates of the thermal energy within a core to determine the energy density required of an outflow to distort the magnetic field within that core. The core thermal energy density for L1527 can be approximated by ${3 \over 2} \rho c\_s^2$, where *ρ* is given by the mass density of the 106 cm− 3 molecular gas measured for the L1527 core about 10$\arcsec$ away from the center (Hogerheijde & Sandell 2000) and c*s* is the sound speed which is approximately 0.25kms− 1 (Zhou, Evans & Wang 1996). The lower limit to the outflow energy density in L1527 is expressed as ${ 1 \over 2} M\_{\_L} V^2\_{\_L}$ divided by the observed volume of the outflow lobes, where the measured values for *M**L* (the mass entrained in the outflows) and *V**L* (the velocity of the entrained gas in the outflows) are given in Table 2 from Hogerheijde et al. (1998). The volume of both outflow lobes can be approximated by ${2 \over 3} \Omega\_{\_L} R^3\_{\_L}$, where *R**L*, the extent of one lobe in L1527, is given in Table 2 and Ω*L* ≈  0.14 sr based on Figure 6 in Hogerheijde et al. (1998) showing the extent of the 12CO outflow in L1527. The above imply that the energy density in the outflow is greater than the thermal energy density in the region 10$\arcsec$ from the peak of L1527 by about a factor of 100. Since in the ALS03b models the magnetic energy density is less than the thermal energy density away from the axis of symmetry, this implies that the outflow does have the energy required to distort the magnetic field where it disturbs the gas in the core away from the axis of symmetry. Indeed, with the energy density of the outflow observed, even the magnetic field very close to the axis of symmetry could be distorted. The ratio of the outflow energy density to core thermal energy density in IC348-SMM2 is similar to the ratio in L1527, if one assumes similar gas densities and sound speeds for IC348-SMM2 as measured for L1527 and the outflow parameters in Table 2 for IC348-SMM2. But the outflow in IC348-SMM2 does not overlap with our measured polarization vectors to a very great degree, so the outflow cannot affect the alignment of the magnetic fields which are inferred by these polarization measurements. Where there is overlap, the distortion would be minimal for field lines already parallel to the axis of the outflow if the opening angle of the outflow at that location is small – as it is for IC348-SMM2 ( ± 15∘; see Figure 3). In summary, if one assumes that the low-flux exceptions in IC348-SMM2 are not part of the core and one includes the effects of bipolar outflows on field alignment, then our observations of L1527 and IC348-SMM2 are consistent with magnetically regulated models. ### Magnetic Field Structure around B335 At first glance, our two polarization vectors in B335 imply a magnetic field more perpendicular than parallel to the outflow axis. However, our two inferred *B*-vectors are not too different from what would be expected in the region just south-east of the B335 center based on Figure 8c of ALS03b when this figure is aligned with the minor axis of the flattened core as measured by Chandler & Sargent (1993) (i.e., the symmetry axis of the inferred pseudo-disk). In this region of the model, the field lines are pinched towards the axis of symmetry (see Figure 7). Figure 7 also shows the *B*-vectors inferred from the 850 *μ*m polarimetry measurements of WLH03 as well as those inferred from our 350 *μ*m polarimetry measurements. Our results are broadly consistent with the results of WLH03, in that they agree with four of the six vectors in the south-east quadrant region of the B335 core, but not, however, with the one vector with which our measurements most nearly overlap. The 20 *B*-vectors of WLH03 and our 2 all lie inside the infall region outlined in Figures 4 and 7. The WLH03 results imply on average a more N-S magnetic field structure within B335, but there is considerable distortion evident in the field lines implied by the 20 vectors, since the standard deviation of the position angles of the *B*-vectors is about three times the average measurement error for each vector. WLH03 concluded that the average field they measure in B335 is the direction of the field in the core when it collapsed, and that the flattened core seen in B335 is prolate (rather than oblate) with its symmetry axis parallel to the magnetic field. If this is the case, then B335 presents a counter example to the results obtained by Tassis et al. (2009) who concluded the core model that best fits their sample of 24 high-mass cloud cores is an oblate core with the mean magnetic field more aligned with the short axis than the long axis. Alternatively, the WLH03 data and ours could imply a more toroidal field in B335 than poloidal. But if this were the case then this is inconsistent with the model of ALS03b which includes rotation. In this model, only a small volume of a dynamically collapsing core contains twisted (i.e., toroidal) magnetic fields. Outside this small region the collapsing flow has yet to be spun up, and inside this region *β* ≪ 1 so the field lines are rigid. If B335 does contain a toroidal field configuration, then the ALS03b model fails to describe it; there appears to be more twisting of the original poloidal magnetic field lines than can be explained in their model. Yet another interpretation of the data, and the one we advocate in this paper, is that B335 is an extreme example of what is happening in L1527; the outflow in B335 has distorted the field lines in the core of B335, either directly or indirectly by exciting more turbulence within the core, so the magnetic field seems to align N-S on average. Could the outflow in B335 cause large field distortions in the core? If we use the same analysis we used for L1527, but with a core thermal speed of 0.23kms− 1 (Zhou et al.1993), a core density at about 10$\arcsec$ from the center of  ∼  105 cm− 3 ( Zhou et al. 1990; Harvey et al. 2003), the B335 values for the outflow parameters given in Table 2, and Ω*L* =  0.6 sr (Figure 2 in Hirano et al. 1988), then we find that the outflow energy density is about a factor of 6 higher than the core thermal energy density (10$\arcsec$ from the peak) in B335. Our own outflow data shown in Figure 4b give a total kinetic energy for the outflow within the region mapped (i.e., ${1 \over 2}M\_{\_{chan}} V^2\_{\_{chan}}$, summed over the velocity channels in the ranges 5.4 - 8.0 kms− 1 and 8.6 - 11.2 kms− 1) of 1.8  ×  1042 erg, once a correction for the 10∘ inclination of the outflow to the plane of the sky has been made. Dividing this by the volume of the outflow as defined in Figure 4b, we get a kinetic energy density of  ∼  10− 9 erg cm− 3, which is a factor of 3 higher than the core thermal energy density. This outflow energy density although not enough to distort the magnetic field close to the axis of symmetry of the pseudo-disk in B335, is enough to distort the field further out. The outflow cavity outlined by the CO observations in Figure 4b is coincident with the region within the B335 core that most likely would have a magnetic field energy density that is greater than the outflow energy density; beyond this outflow cavity the reverse is likely to be true. The outflow cavity is a region in which the gas and dust densities are low, and so it is a region of low 350 *μ*m emission along our line of sight. Our polarization measurements are thus weighted away from the outflow cavity region towards those regions along our line of sight where the energy density of the outflow could distort the magnetic field structure. But why is this distortion observed to be so large in B335 compared to the distortion in L1527? The difference between B335 and the other two YSOs presented here is that the bipolar outflow in B335 is much larger in length, width and apparent age than the outflows in L1527 and IC348-SMM2 (Table 2). Therefore, the field lines within the B335 core could be highly distorted because the outflow has had time to plow through a greater portion of the core or excite greater gas turbulence in the core. ### Chi-Square Tests of Various Magnetic Field Geometries In order to give some quantitative assessment of the magnetic field scenarios discussed above, and how they compare to other possible configurations, we carried out reduced chi-squared tests of our data using a number of different theoretical magnetic field configurations for each source. For each source we compared the *B*-vectors implied by our data to: (1) a uniform magnetic field model where the angle of the field is aligned with the mean *B*-vector angle implied by our data for that source; (2) a uniform magnetic field model where the angle of the field is aligned with the outflow; (3) a uniform magnetic field model where the angle of the field is aligned with the symmetry axis of the observed pseudo-disk for that source; (4) a uniform magnetic field model where the angle of the field is aligned with the major axis of the observed flattened core (or pseudo-disk) for that source; and (5) a pinched magnetic field aligned with the symmetry axis of the observed pseudo-disk as presented in Figure 8c of ALS03b. Note that model (4) also corresponds to the case where the magnetic field is toroidal. Table 3 summarizes our results. Each number in the table represents the reduced chi-squared value, *χ**r*2, for the data specified for that column against the model for that row. The reduced chi-squared is calculated as $\chi^2\_r = {1 \over {\nu}} \Sigma\_i ((\theta\_i - \theta\_{Mi})^2 / \sigma\_i^2)$ where *θ**i* are the data representing the angles of the *B*-vector at various locations, i; *σ**i* is the uncertainty in each data angle; *θ**M**i* is the angle of the magnetic field at the location of each data point for a particular model; and *ν* is the number of degrees of freedom for the data set. For these calculations, the values of ∣(*θ**i* − *θ**M**i*)∣ were constrained to be  ≤  90$\arcdeg$ since our *B*-vectors have been derived from our polarization *E*-vectors which are invariant under 180 degree rotations. For cases where $|(\theta\_i - \theta\_{Mi})| > 90 \arcdeg$, a value of $|[(\theta\_i - \theta\_{Mi} \pm 180)]| \le 90\arcdeg$ was substituted. For B335, we carried out the same calculations using the data of WLH03 based on their Figure 1. *χ**r*2 should be close to 1 for a good fit. An inspection of Table 3 shows that *χ**r*2 is close to this value only for the ALS03b model for IC348-SMM2 and B335 when we consider only our data minus the exceptions discussed in Section 3.2.1. However the fits are not good for this model if we include the exceptions for IC348-SMM2, and if we include the WLH03 data for B335. This is in agreement with our qualitative assessment above. For L1527, although no model gives a good fit to our data, the two most favored are the ALS03b model and the uniform field aligned with the mean *B*-vector of our data for L1527 (i.e., 60$\arcdeg$ east of north). Our data do not differentiate between these two models, since our vectors lie mostly in two diagonal quadrants. If we had more data in the other two quadrants, a *χ**r*2 test would be able to differentiate between these two models. As it is, the *χ**r*2 results re-enforce our qualitative assessment above that although our data agree with the ALS03b model in a broad sense, there are enough deviations to it to require an explanation; possibly a disturbance to the magnetic field structure due to the outflow in L1527. Further, based on Table 3, our data for L1527, IC348-SMM2, and B335 clearly favor the ALS03b model over the ``toroidal" models, and the ALS03b model is slightly more favored than the models with a uniform field aligned with the outflow and a uniform field aligned with the pseudo-disk symmetry axis. This implies there is some evidence for a pinch in the magnetic field configuration for these sources. However, the data of WLH03 for B335, in contradiction, favor the model with a uniform field aligned with the major axis of the flattened core over all the other models, although the fit is not good for this model, thus implying some other underlying magnetic field configuration must be present - possibly a configuration disrupted by the outflow in B335. In summary, the chi-squared results in Table 3 support the qualitative discussions we presented in sections 3.2.1 and 3.2.2. We also compared our data to a random magnetic field model for each source. We did this by calculating the root-mean-square (RMS) of the differences between the directions of the *B*-vectors at points lying spatially adjacent to one another in each of our sources. For L1527 we obtained 11 pairs of measurements separated by 10$\arcsec$ or $\sqrt{2} \times 10 \arcsec$ (see Figure 2) and for IC348-SMM2 we obtained 10 such pairs (see Figure 3) for our RMS calculations. Points separated by $\sim 10\arcsec$ (i.e., one beam diameter) or greater represent independent measurements. For L1527 we calculated an RMS value for these differences in adjacent *B*-vector directions to be 19$\arcdeg$, and for IC348-SMM2 a value of 15$\arcdeg$. We then carried out the same calculation on 10 randomly generated numbers ranging between -90 and 90, and did this 50 times. The resulting RMS values formed a peaked gaussian distribution with a mean of 51 and a dispersion *σ* = 6. (This mean is close to the theoretical RMS of  ∼  52 for randomly selected numbers between -90 and 90.) Thus our RMS values for L1527 and IC348-SMM2 lie 5.5*σ* and 6*σ*, respectively, below the expected RMS for randomly distributed *B*-vectors, implying that the magnetic fields in L1527 and in IC348-SMM2 are not random in nature. We carried out the same RMS calculation on two different sets of 13 pairs of adjacent *B*-vector measurements made by WLH03 in B335 (see Figure 7), resulting in RMS values of 42$\arcdeg$ and 34$\arcdeg$. Both values are much larger than the average uncertainties ($\le 10\arcdeg$) for the measurements reported in WLH03. These RMS values imply that the magnetic field is possibly more randomized in B335 than in the other sources. This latter point is consistent with our suggestion that the magnetic field in B335 is more distorted by its outflow than are the magnetic fields in L1527 and IC348-SMM2. Conclusions =========== We have used the SHARP polarimeter on the CSO to obtain 350 *μ*m intensity maps and polarization measurements on L1527, IC348-SMM2, and B335 to test whether or not magnetically regulated models for low-mass star formation are consistent with observations of sources which should have little distortions in their various geometries due to projection effects (i.e., sources with bipolar outflows which lie close to the plane-of-the-sky). (1) Our data for IC348-SMM2 combined with the data of others for L1527 and B335 show flattened cores consistent with edge-on pseudo-disks having symmetry axes that are nearly, but in two cases not exactly, parallel to the bipolar outflows in these sources. (2) There is evidence that the sources L1527 and IC348-SMM2 each contains a pinched magnetic field structure with its symmetry axis approximately aligned with the symmetry axis of the inferred embedded pseudo-disk in each. The evidence is strong (i.e., the goodness-of-fit to the data is good) for IC348-SMM2, if certain low-flux polarimetry measurements, which could be associated more with the background cloud than the core, are ignored. In IC348-SMM2, where the outflow and pseudo-disk axes are not exactly aligned, the pinched magnetic field structure fits the data better when it is aligned with the symmetry axis of the pseudo-disk rather than with the outflow axis. This is consistent with the hypothesis that the pinched magnetic field structure and the pseudo-disk are both products of magnetized contraction and collapse, whereas the outflow is only indirectly related to the core magnetic field structure inasmuch as that magnetic field structure has influenced the orientation of the rotation axis of the core. (3) In L1527, however, the magnetic field structure shows considerable distortion from an ideal pinched field line structure given the measured infall radius of this source. Our hypothesis is that this distortion is caused by the bipolar outflow in L1527. We show that the outflow has sufficient energy density to distort the magnetic field structure in the core of this source. This distortion is not seen in IC348-SMM2, because the only inferred *B*-vectors that overlap with the outflow in this source are vectors that would not have been altered by the outflow. (4) The magnetic field structure observed in the B335 core is not aligned with the outflow axis of this source, but our two *B*-vectors are consistent with models with pinched field lines through a pseudo-disk, if the pseudo-disk is tilted with respect to the outflow axis by 20$\arcdeg$ (i.e., consistent with the pseudo-disk observed for B335 by Chandler & Sargent (1993)). However, if we combine our data with those of WLH03, the fit to these pinched field line models is not good. Our explanation is that B335 is an extreme example of the bipolar outflow driven field distortion that we are seeing in L1527. The main difference between L1527 and B335 is that the outflow in B335 is much larger in extent than the one in L1527, is assumed therefore to be much older, and so has had time to cause a greater degree of distortion of the core magnetic field lines. This is not the interpretation given by WLH03 of their B335 observations. They interpreted their results to imply a near uniform field lying perpendicular to the outflow in this source - in contradiction to the predictions of magnetically regulated collapse as summarized in Figure 1. (5) More core magnetic field structures need to be mapped to elucidate the overall dynamical collapse story, given the considerable variation in core structures observed in our sample of just three. In short, the gross predictions of the magnetically regulated models (i.e., as summarized in Figure 1) need to be tested further. Acknowledgements ================ We are grateful to the Caltech Submillimeter Observatory TAC, management, and staff for making this study possible over the many observing runs that it has consumed. We thank Darren Dowell, Megan Krejny, Woojin Kwon, and Hiroko Shinnaga for help with the observations, and Ron Taam for valuable discussions. We are grateful to the National Science Foundation for supporting the Caltech Submillimeter Observatory via grant AST-0838261, and for supporting the operation of SHARP via grant AST-0909030 to Northwestern University. Astronomical Research by P.F.G. and N.L.C. is supported by the Jet Propulsion Laboratory, California Institute of Technology. Allen, A., Shu, F.H., & Li, Z.-Y. 2003,, 599, 351 (ASL03a) Allen, A., Li, Z.-Y., & Shu, F.H. 2003,, 599, 363 (ALS03b) Attard, M., Houde, M., Novak, G., & Vaillancourt, J.E. 2008,, 120, 805 Attard, M., Houde, M., Novak, G.,Vaillancourt, J.E., Li, H., Dowell, C.D., Davidson, J.A., and Shinnaga, H. 2009,, 702, 1584 Basu, S. & Mouschovias, T.C. 1994, 432, 720 Chandler, C.J., & Sargent, A.I. 1993,, 414, L29 Chandler, C.J., & Richer, J.S. 2000,, 530, 851 Dowell, C.D., Hildebrand, R.H., Schleuning, D.A, Vaillancourt, J.E., Dotson, J.L., Novak, G., Renbarger, T., & Houde, M. 1998,, 504, 588 Dowell, C.D. et al. 2003, Proc. SPIE 4855: Millimeter and Submillimeter Detectors for Astronomy, eds. T.G. Phillips & J. Zmuidzinas, 73 Galli, D., & Shu, F.H. 1993a,, 417, 220 Galli, D., & Shu, F.H. 1993b,, 417, 243 Girart, J. M., Rao, R. & Marrone, D. P. 2006, Science, 313, 812 Goldsmith, P.F., & Arquilla, R. 1985, Protostars and Planets II, 137 Goldsmith, P.F., Heyer, M., Narayanan, G., Snell, R., Li D., & Brunt, C. 2008,, 680, 428 Goodman, A.A., Benson, P.J., Fuller, G.A., & Myers, P.C. 1993,, 406, 528 Hatchell, J., Richer, J.S., Fuller, G.A., Qualtrough, C. J., Ladd, E. F., & Chandler, C. J. 2005,, 440, 151 Harvey, D.W.A., Wilner, D.J., Myers, P., & Tafalla, M. 2003,, 596, 383 Henning, T., Wolf, S., Launhardt, R., & Waters, R. 2001,, 561, 871 Hildebrand, R.H., Davidson, J.A., Dotson, J.L., Dowell, C.D., Novak, G., & Vaillancourt, J.E. 2000,, 112, 1215 Hildebrand, R.H., Kirby, L., Dotson, J.L, Houde, M., & Vaillancourt, J.E. 2009,, 696, 567 Hirano, N., Kameya, O., Nayayama, M., & Takakubo,K. 1988,, 327, L69 Houde, M., & Vaillancourt, J.E. 2007,, 119, 871 Hogerheijde, M.R, van Dishoeck, E.F, Blake, G.A., van Langevelde, H.J. 1998,, 502, 315 Hogerheijde, M.R., & Sandell, G. 2000,, 534, 880 Jørgensen, J.K., et al. 2007,, 659, 479 Kirby, L., Davidson, J.A., Dotson, J.L., Dowell, C.D., & Hildebrand, R.H. 2005,, 117, 991 Kirk, J.M., Ward-Thompson, D., & Crutcher, R.M. 2006,, 369, 1445 Konigl, A. & Pudritz, R. E. 2000, Protostars and Planets IV, 759 Kwon, W., Looney, L.W., Crutcher, R. M., & Kirk, J. M. 2006,, 653, 1358 Lazrarian, A. 2007,, 106, 225 Li, H., Dowell, C. D., Kirby, L., Novak, G., & Vaillancourt, J. E. 2008, App. Opt., 47, 422 Mac Low, M.-M. & Klessen, R.S. 2004, Reviews of Modern Physics, 76, 125 Mathews, B.C., & Wilson, C.D. 2002,, 574, 822 Matthews, B.C., McPhee, C.A., Fissel, L.M., & Curran, R.L. 2009,, 182, 143 McKee, C.F. & Ostriker, E.C. 2007,, 45, 565 Ménard, F. & Duchêne, G. 2004,, 425, 973 Mestel, L. 1985, Protostars and Planets II, 320 Mouschovias, T.C. & Paleologou, E. V. 1979, 230 204 Mouschovias, T.C. & Paleologou, E. V. 1980, 237, 877 Mouschovias, T.C. & Ciolek, G.E. 1999, The Origin of Stars and Planetary Systems, eds. C.J. Lada & N.D. Kylafis, 305 Myers, P.C., Bachiller, R., Caselli, P., Fuller, G.A., Mardones, D., Tafalla, M., & Wilner, D.J. 1995,, 449, L65 Ohashi, N., Hayashi, M., Ho, P.T.P, & Momose, M. 1997,, 475, 211 Serkowski, K. 1962, in Advances in Astronomy and Astrophysics, ed. Z. Kopal (New York: Academic), 290 Shu, F.H., Adams, F.C., & Lizano, S. 1987,, 25, 23 Shu, F. H., Najita, J.R., Shang, H., & Li, Z-Y. 2000, Protostars and Planets IV, 789 Shu, F.H., Li, Z-Y., & Allen, A. 2004,, 601, 930 Tafalla, M., Kumar, M.S.N., & Bachiller, R. 2006,, 456, 179 Tamura, M., Ohashi, N., Hirano, N., Itoh, Y., & Moriarty-Schieven, G.H. 1996,, 112, 2076 Tassis, K., Dowell, C.D., Hildebrand, R.H., Kirby, L., and Vaillancourt, J.E. 2009,, 399, 168 Tobin, J.J., Hartmann, L., Calvet. N, & D’Alessio, P. 2008,, 679, 1364 Tomisaka, K. 1998,, 502, L163 Vaillancourt, J. E. 2006,, 118, 1340 Vaillancourt, J.E., Dowell, C.D., Hildebrand, R.H., Kirby, L., Krejny, M.M., Li, H., Novak, G., Houde, M., Shinnaga, H., Attard, M. 2008,, 679, L25 Vallée, J.P., Bastien, P., & Greaves, J.S. 2000,, 542, 352 Vallée, J.P., Greaves, J.S., & Fiege, J.D. 2003,, 588, 910 Ward-Thompson, D., Sen, A.K., Kirk, J.M., & Nutter, D. 2009,, 398, 394 Wolf, S., Launhardt, R., & Henning, T. 2003,, 592, 233 (WLH03) Zhou, S., Evans, N.J., Butner, H.M., Kutner, M.L., Leung, C.M., Mundy, L.G. 1990,, 363, 168 Zhou, S., Evans, N.J., II, Carsten, K., & Walmsley, C.M. 1993,, 404, 232 Zhou, S., Evans, N.J., II, & Wang, Y. 1996,, 466, 296 lrrrrrr L1527 & 38.0 & 9.5 & 5.6 & 2.1 & -51.0 & 9.5 & 28.5 & 19.0 & 3.3 & 1.6 & -56.1 & 12.0 & 19.0 & 9.5 & 2.5 & 0.9 & -63.7 & 9.7 & 19.0 & 19.0 & 3.5 & 1.1 & -41.7& 10.1 & 9.5 & -28.5 & 5.8 & 1.8 & 18.2 & 9.0 & 9.5 & 0.0 & 1.0 & 0.5 & -66.4 & 12.3 & 9.5 & 9.5 & 1.6 & 0.5 & -36.5 & 9.6 & -9.5 & -19.0 & 2.8 & 1.3 & 3.1& 10.0 & -9.5 & -9.5 & 1.8 & 0.7 & -13.2 & 10.3 & -19.0 & -9.5 & 2.2 & 1.2 & -28.4 & 14.5 IC348-SMM2&9.5 & 19.0& 5.7& 2.1& 84.8&10.5 & 0.0& 19.0& 6.3& 2.4& 81.6&9.4 & -9.5& -28.5& 9.2& 3.7& -9.8&10.0 & -9.5& 19.0& 5.1& 2.8& 61.9&12.8 & -19.0& -28.5& 9.5& 4.4& -4.7&11.0 & -19.0& -19.0& 5.8& 3.0& 4.0&11.8 & -19.0& 19.0& 7.9& 3.4& 54.4&13.3 & -28.5& -9.5& 5.9& 2.9& 33.7&11.4 & -28.5& 0.0& 9.4& 3.1& 32.3&8.2 & -28.5& 9.5 & 8.0& 4.1& 26.3&12.0 B335 & 9.5& -19.0& 3.2& 1.5& 56.5&11.9 & 9.5& -9.5& 1.3& 0.7&58.8&14.6 lrrr RA (2000) & 04:39:53.9 & 03:43:56.9 & 19:37:01.03 Dec (2000) & 26:03:11 & 32:03:06 & 07:34:11 Distance (pc) & 140 & 300 & 250 Infall Radius, rinf (pc) & 0.026 (38) & & 0.04 (33) Sound Speed, c*s* (kms− 1) & 0.25 & & 0.23 Infall Age ( ∼  rinf/c*s*) (yr) & $\thicksim 1 \times 10^5 $ & & $\thicksim 1.5 \times 10^5 $ 12CO Outflow Extent, R*L* (pc) & 0.095 (140) & 0.14 (95) & 0.3 (250) PA of outflow axis E from N & 90∘ &  − 17∘& 90∘ Outflow Mass, M*L* (M$\_\sun$) & 0.18 & 0.033 & 0.44 Characistic Outflow Velocity, & & & V*L* (kms− 1) & 20 & 54 & 13 Outflow Age ( ∼  R*L*/V*L*) (yr) & $\thicksim 5 \times 10^3 $ & $\thicksim 2 \times 10^3 $ & $ \thicksim 2 \times 10^4 $ lcccccc *θ**M**i* = Mean *θ**i* & 8.3 &8.2 & 10.4 & 5.5 & 0.02 & 13.7 *θ**M**i*= *θ**o**u**t**f**l**o**w* & 16.4 & 15.0 & 20.9 & 8.1 & 19.4 & 36.9 *θ**M**i*= *θ**P**D**S**A*& 16.4 & 15.0 & 12.7 & 4.8 & 8.2 & 38.5 *θ**M**i*= *θ**P**D**M**A*& 30.3 & 33.2 & 31.7 & 42.0 &16.2 & 14.2 ALS03b Model & 9.9 & 7.8 & 14.0 & 0.4 & 0.8 & 31.5 A cartoon summarizing the magnetically regulated core collapse scenario outlined in Section 1. Typically, the diameter of the infall region is  ∼ 10,000 AU, the diameter of a pseudo-disk is  ∼ 2,000 AU, and the diameter of a Keplerian disk is  ∼ 100 AU. In the magnetically regulated core collapse scenario: the pseudo-disk symmetry axis is aligned with the core magnetic field; and magnetic braking tends to align the core rotation axis with the magnetic field, but this alignment may not be exact. The pseudo-disk is a dynamically collapsing object formed by the magnetic fields, not rotation. The Keplerian disk is an object formed by rotation and so its symmetry axis is aligned with the core’s rotation axis, as too is the outflow axis if the outflow is driven by rotation. [fig:cartoon] (a) 350 *μ*m polarization vectors for L1527. The thickness of each vector indicates the significance, with all vectors having significance greater than 2*σ* before bias correction (see section 2.2). Note the very low polarization ( ≤ 0.9%) marked by a small open circle near the flux peak (see section 2.2). The contours indicate the total intensity, I, measured by SHARP. The highest contour level corresponds to 90% of the peak flux, and each subsequent contour represents a decrement of 10% of the peak flux. (b) Inferred magnetic field directions superimposed on a section of a figure from Hogerheijde et al. (1998) showing 12CO 3-2 observations of the bipolar outflow (contours) and a HCO+ map tracing gas on the cavity walls surrounding the outflow (gray area). B-vectors in regions having 350 *μ*m flux less than 25% of the peak are shown as dashed lines. The full Hogerheijde et al. figure shows the blueshifted outflow (solid contours) extending 2$\arcmin$ east and the redshifted outflow (dashed contours) extending 2$\arcmin$ west from the YSO in L1527. The large dashed circles in (a) and (b) show the extent of the measured infall region for this object (Myers et al. 1995; Zhou, Evans & Wang 1996). [fig:L1527] [fig:IC348] [fig:B335] [fig:L1527b] [fig:IC348b] [fig:B335b] Magnetic Field Structure around Low-Mass Class 0 Protostars: B335, L1527 and IC348-SMM2 ======================================================================================= We report new 350 *μ*m polarization observations of the thermal dust emission from the cores surrounding the low-mass, Class 0 YSOs L1527, IC348-SMM2 and B335. We have inferred magnetic field directions from these observations, and have used them together with results in the literature to determine whether magnetically regulated core-collapse and star-formation models are consistent with the observations. These models predict a pseudo-disk with its symmetry axis aligned with the core magnetic field. The models also predict a magnetic field pinch structure on a scale less than or comparable to the infall radii for these sources. In addition, if the core magnetic field aligns (or nearly aligns) the core rotation axis with the magnetic field before core collapse, then the models predict the alignment (or near alignment) of the overall pinch field structure with the bipolar outflows in these sources. We show that if one includes the distorting effects of bipolar outflows on magnetic fields, then in general the observational results for L1527 and IC348-SMM2 are consistent with these magnetically regulated models. We can say the same for B335 only if we assume the distorting effects of the bipolar outflow on the magnetic fields within the B335 core are much greater than for L1527 and IC348-SMM2. We show that the energy densities of the outflows in all three sources are large enough to distort the magnetic fields predicted by magnetically regulated models. Introduction ============ Only a few percent of the mass of a molecular cloud is typically converted into stars (McKee & Ostriker 2007; Goldsmith et al. 2008). The rate of star formation inferred for molecular clouds in general is significantly less than that expected from free-fall gravitational collapse of the gravitationally bound mass in these regions. Consequently, there must be some mechanism which regulates the rate of star formation within molecular clouds. Two mechanisms have been proposed; magnetic support (e.g., Shu, Adams & Lizano 1987; Mouschovias & Ciolek 1999) and super-Alfvenic turbulence (e.g., Mac Low & Klessen 2004). In addition, observations (Goldsmith & Arquilla 1985; Goodman et al. 1993) show that the rotation rates of cores (density $\gtrsim$ 104 cm− 3; size  ∼  0.1 pc) are less than expected for cores condensed from their less dense background clouds. In the models favoring magnetic support, this slow core rotation is the result of magnetic braking (Mestel 1985; Basu & Mouschovias 1994). Mouschovias & Paleologou (1979, 1980) showed analytically that the braking timescale for a cloud rotating with its rotation axis perpendicular to the magnetic field is typically an order of magnitude smaller than for a cloud rotating with its axis of rotation parallel to the magnetic field. This results in core rotation rates reduced in amplitude and core rotation axes tending to align with the cloud magnetic field immediately in the vicinity of cores. In this paper, we use our submillimeter polarimetry results to test gross features of the magnetically regulated core-collapse and star-formation models in the literature. A key feature of many such magnetically regulated dynamical collapse models (Shu, Adams & Lizano 1987; Allen, Shu & Li 2003; Allen, Li & Shu 2003; Galli & Shu 1993a,b; Shu, Li & Allen 2004; Tomisaka 1998) is the formation of a flattened inner core (or “pseudo-disk”) of several thousand AU in extent with its symmetry axis parallel to the core magnetic field. This pseudo-disk is the inner part of the infall region of the core surrounding a YSO; it is not formed by rotation, but by the geometry of the magnetic field. The pseudo-disk is a dynamically collapsing entity, accreting onto the protostar and its associated Keplerian disk (size  ∼  100 AU). Signatures of magnetic regulation include a fairly uniform magnetic field outside the infall region of a core, a pinched magnetic field line structure within this region, and the overall direction of the magnetic field parallel to the axis of symmetry of the pseudo-disk. The level of uniformity of the field outside the infall region depends on the quasi-static evolution prior to the dynamical collapse of the core. In the Galli & Shu (1993a,b) models, the field lines outside the infall region are uniform, but in the Allen, Shu & Li (2003; hereafter ASL03a) and Allen, Li & Shu (2003; hereafter ALS03b) models, they are already inclined towards a gentle pinch configuration through quasi-static contraction of the core as a whole before the onset of the dynamical inside-out collapse of the inner core. Various mechanisms have been proposed to remove angular momentum from the gas falling onto a protostar through a Keplerian disk within the inner core; many use the observed, ubiquitous, bipolar outflows which turn on during the Class 0 phase of star formation (Konigl & Pudritz 2000; Shu et al. 2000; Tomisaka 1998). The general expectation from the above magnetically regulated collapse and outflow models is that the outflow axis will be parallel or nearly so to the core magnetic field, since in these models the core rotation axis is aligned or nearly aligned with the core magnetic field, and so the Keplerian disk and bipolar outflow axes are also parallel or nearly parallel to the core magnetic field lines. The cartoon and caption in Figure 1 summarize the above magnetically regulated scenario. The basic scenario depicted in Figure 1 has not been observationally verified; so far observations tailored to test parts of this scenario have not provided irrefutable evidence for the general existence of pseudo-disks, let alone the alignment of the mean magnetic fields with respect to the symmetry axis of these disks. Recent polarimetry studies by Ward-Thompson et al. (2009) and Tassis et al. (2009) imply mean magnetic field directions more aligned with the short axis of observed flattened cores than not (consistent with Figure 1), but by no means aligned. The statistical analysis of Tassis et al. (2009) of their 24 high-mass star forming regions assumed random line-of-sight angles for the cores and magnetic fields, resulting in a best-fit model consisting of a thick oblate core with a magnetic field on average having an angle of 24∘ to the short axis of the core. Tassis et al.  (2009) state that more observations are required in order to reduce the inaccuracies caused by unknown line-of-sight projection effects, but their results do provide some validity to the pseudo-disk scenario, albeit with slightly mis-aligned magnetic field configurations. In regards to the near alignment of the outflow to the magnetic field shown in Figure 1, Ménard & Duchêne (2004) used polarization measurements of background stars to study the relative orientations of the jets from Classical T Tauri stars (CTTS) embedded in the Taurus-Auriga molecular cloud and the magnetic field in their vicinity. Typically the background stars used in this study were separated by about 0.5∘ or more from the evolved CTTS being examined, so the magnetic field being probed in this way is not that of the field of a remnant core surrounding the CTTS, but the magnetic field of the molecular cloud in the vicinity of the CTTS. This study concluded that the jets of embedded CTTS in the molecular cloud Taurus-Auriga are not aligned with the large-scale molecular cloud magnetic fields. Indeed the jets within this one molecular cloud are not aligned with each other. Large scale turbulence may play a role - possibly producing angular momentum and magnetic structures within the cores which are not aligned with the major axis of symmetry or the large-scale magnetic field of the much larger parent molecular cloud. Possible evidence for this can be seen in Taurus-Auriga in the two pre-stellar cores observed by Kirk, Ward-Thompson, & Crutcher (2006), for which the polarized 850 *μ*m emission indicates changes in magnetic field directions in the cores relative to the large-scale magnetic field in the cloud. It is the magnetic field in these cores, not the large-scale magnetic field, that may align the direction of future outflows from that core if magnetic regulation occurs within the cores. Hence the results of Ménard & Duchêne (2004) do not necessarily disprove the magnetically regulated scenario. Vallée, Bastien & Greaves (2000), Henning et al. (2001), Matthews & Wilson (2002), Vallée, Greaves & Fiege (2003), Wolf, Launhardt & Henning (2003), Girart, Rao & Marrone (2006), Kwon et al. (2006) and Attard et al. (2009), among others, undertook observational studies of magnetic field structures within cores containing low-mass, embedded YSOs with bipolar outflows using polarized submillimeter continuum emission. Low-mass YSOs are good objects to study because they tend to be embedded in simple, relatively isolated regions. The results of the above studies taken as a whole do not show a clear case for the alignment of outflows with core magnetic fields. However, most of the YSOs in this combined sample are binaries, are very distant, are not Class 0, or do not have their outflow axis lying close to the plane-of-the-sky. The last criterion is an important one because a pinched magnetic field structure, which is symmetric about an axis of an outflow with a large line-of-sight component, would produce polarization vectors with a large variation in position angles. To minimize projection effects when testing the theory of alignment between outflows and core magnetic fields, the axis of the outflow should lie close to the plane-of-the-sky. We have begun a survey of low-mass, isolated (single), nearby ( ≤  400 pc), Class 0 YSOs that have well defined bipolar outflows which lie nearly in the plane-of-the-sky. Our study will provide maps of the 350 *μ*m polarization vectors within a 10,000 AU radius around each embedded YSO in our survey. We will also provide interferometric maps of the outflows within the cores for each of our survey YSOs where such maps do not already exist. This is so we can better determine the orientation of the outflow and its interaction with the core gas. We are using 350 *μ*m rather than longer wavelength polarization because we want to obtain the best spatial resolution using SHARP (see section 2), and because 350 *μ*m emission is weighted towards the warmer emission of the cores immediately surrounding a YSO rather than towards the extended cooler core envelope. This paper gives a summary of our results on three of the YSOs in our study, L1527, B335, and IC348-SMM2. Section 2 outlines our observations and summarizes our results, Section 3 compares these results to magnetically regulated models, and Section 4 gives our conclusions. Observations and Results ======================== Polarimetry Observations and Data Analysis ------------------------------------------ The observations presented here were carried out at the Caltech Submillimeter Observatory (CSO) using the SHARP polarimeter. SHARP is a fore-optics module that provides polarimetric capability to SHARC-II, the CSO’s 32  ×  12 pixel submillimeter camera. SHARP divides the incident submillimeter radiation into two orthogonally polarized beams that are then reimaged onto the two ends of the “long and skinny” SHARC-II detector. A half-wave plate upstream of the polarization-splitting optics is rotated every few minutes during data collection, and the two orthogonally polarized 12  ×  12 pixel “sub-images” acquired for four different half-wave plate angles are combined in software to determine the total flux as well as the linear polarization state of the radiation. SHARC-II and SHARP are described respectively by Dowell et al. (2003) and Li et al. (2008). The 350 *μ*m observations described here were made with a beam size of 10$\arcsec$ (FWHM). The data were collected in chop-nod mode. This involves rapidly modulating the tilt of the CSO secondary mirror in cross-elevation (“chopping”) while more slowly “nodding” the entire telescope back and forth, thereby making near simultaneous measurements of the source and two nearby sky reference positions. A chop-nod observation is carried out at each of the four half-wave plate angles (0∘, 22.5∘, 45∘, and 67.5∘) thereby forming a “half-wave plate cycle”. As described by Dowell et al. (1998) and Kirby et al. (2005), the net effect is that the total source flux and a component of the linearly polarized flux are measured for each half-wave plate position in a cycle, while removing the spatially extended “background” that includes atmospheric and telescope emission as well as any Galactic emission covering a sky area large compared to the chopper throw. For the observations described here, the chopper throw ranged from 120$\arcsec$ (for B335 and L1527) to 300$\arcsec$ (for IC348-SMM2) and the chopping frequency was  ∼ 1 Hz. Because of sky rotation, the reference beams rotate about the main beam on the sky as the source is tracked. Our three targets do have extended flux, so it was necessary to avoid making observations at hour angles for which significant source flux contaminates the reference beams. Each SHARP half-wave plate cycle requires about 7 minutes of elapsed time. In between successive half-wave plate cycles, the pointing position is dithered by about 10-20$\arcsec$. Data were obtained for B335 during the nights of 28, 29, and 30 April 2007 ( ∼ 80 cycles; zenith submillimeter opacity *τ*350*μ**m* ∼  1.0 - 1.8), for IC348-SMM2 on the night of 10 November 2007 ( ∼ 20 cycles; *τ*350*μ**m* ∼  1.0), and for L1527 during the nights of 9 and 10 November 2007 and 6 and 10 September 2008 ( ∼ 85 cycles, *τ*350*μ**m* ∼  0.8 - 1.2). SHARP data analysis is carried out in two steps. In the first step, each individual half-wave plate cycle is processed to obtain three 12  ×  12 pixel maps, one for each of the Stokes parameters *I*, *Q*, and *U*.  (Parameter *I* corresponds to the total flux while *Q* and *U* fully characterize the linearly polarized flux.) A detailed discussion of the techniques used during this first step can be found in Section 3.4 of Hildebrand et al. (2000) and Equations 2, 3, and 4 of Attard et al. (2008). Corresponding error-maps are also obtained for each of the three Stokes parameters. This is done by first using the variance in the individual total and polarized flux measurements to estimate the uncertainties in this time stream (as in section 4.1 of Dowell et al. 1998) and then propagating these uncertainties to the Stokes parameter maps. At this point, we remove from the Stokes parameter maps any pixels that have abnormally high errors. The second step of the analysis involves combining these single-cycle 12  ×  12 pixel maps of *I*, *Q*, and *U*, to form final maps of *I*, *Q*, and *U*. We account for the dithering and the sky rotation by interpolating the single-cycle maps onto a regular equatorial-coordinate grid, which causes a modest loss of angular resolution (Houde & Vaillancourt 2007). Errors in the final maps are propagated from the corresponding errors in the single-cycle maps. Corrections for changing atmospheric opacity (Kirby et al. 2005) as well as for instrumental polarization and polarimetric efficiency (Li et al. 2008) are also made during this second analysis step. An important question that we can ask during the second step of the analysis is whether the single-cycle *Q* and *U* maps are consistent with one another within their nominal errors. Recall that these nominal errors are computed during the first step of the analysis. A quantitative answer to this question is provided by the reduced chi squared, *χ**r*2. Averaging together the *χ**r*2 values found over the *Q* and *U* maps of each source, we obtain mean *χ**r*2 values of 1.7 for L1527, 1.5 for IC348-SMM2, and 2.1 for B335. We do not know the origin of the extra errors that cause these elevated *χ**r*2 values. However, for each of our three data sets (one for each source) we were able to verify that these errors occur mainly on time scales that are short compared to the total duration of the data set. (This duration is several hours for IC348-SMM2, and several days or even longer for the other two sources.) Thus, it seems reasonable to treat the extra errors as if they are statistical in nature. Accordingly, we inflate our nominal errors by the square root of *χ**r*2. Finally, the degree *P* and angle *ϕ* of polarization and their associated errors *σ**P* and *σ**ϕ* are computed for each sky position via standard techniques (see Section 3.4 of Hildebrand et al. 2000). The uncertainties *σ**P* and *σ**ϕ* are affected by both the polarized flux errors (*σ**Q*, *σ**U*) and the total flux errors (*σ**I*). The latter have a negligible effect, however, because *P* <  < 100%. We consider sky positions for which *P* ≥ 2*σ**P* to be polarization detections and sky positions for which *P* + 2*σ**P* < 1.0% to be low upper limits on *P*.  *P* is then corrected downward to account for polarization bias (as described in Section 4.2 of Hildebrand et al. 2000 and in Vaillancourt 2006). We keep track of the sky positions where the polarimetric significance drops below 2*σ* after this bias correction, as discussed below. Note that we have chosen to set our detection threshold at *P* ≥ 2*σ**P* rather than applying the more conservative criterion *P* ≥ 3*σ**P* used for previous SHARP observations (e.g., Attard et al. 2009). When using the latter, more conservative threshold, the 1*σ* uncertainties *σ**ϕ* in the angles of polarization (which translate into 1*σ* uncertainties in the magnetic field angles) are below $\sim9.5\degr$ (Serkowski 1962). With our more lenient threshold, these 1*σ* error bars range up to almost $15\degr$ (see Table 1). Because our goal in this paper is to test the gross predictions of the magnetically regulated collapse models rather than its fine details, we believe that this degree of uncertainty is acceptable. However, it should be kept in mind when interpreting our polarimetry results. Polarimetry Results ------------------- Our polarization detections for L1527, IC348-SMM2, and B335 are presented in Table 1. Figures 2a, 3a, and 4a illustrate the results listed in Table 1. Contours indicate the total intensity, I, measured by SHARP. The highest contour level corresponds to 90% of the peak flux, and each subsequent contour represents a decrement of 10% of the peak flux. Except for very narrow strips at the map edges, all sky positions mapped have total flux errors well below 10% of the peak flux. The morphologies seen in our total intensity contour maps thus represent real structures, with the exception of a few very small flux peaks and flux holes seen at the map edges (e.g., southwest edge of Figure 3a, and northeast and northwest edges of Figure 4a). All three sources have been mapped at 850 *μ*m (Chandler & Richer 2000, Hatchell et al. 2005, Jørgensen et al. 2007), and the morphologies seen at this longer wavelength are similar to those seen in our SHARP maps. The bars in Figures 2a, 3a, and 4a indicate polarization detections. The length of each bar is proportional to *P* (see key at bottom left of each figure) and its orientation indicates the direction of the E-vector of the polarized radiation. The thickness of each bar shows its significance: sky points having greater than 3*σ* significance after bias correction are thickest, points with greater than 2*σ* post-correction significance but less than 3*σ* have intermediate thickness, and sky points for which the significance drops below 2*σ* after bias correction are shown using the thinnest bars. For two sky positions, we obtained 2*σ* upper limits below 1.0%. These correspond to the peak of B335, where we find a 2*σ* limit of 0.6% and a position offset from the peak of L1527 by (Δ*α*, Δ*δ*) = (+9.5$\arcsec$, -9.5$\arcsec$), where our 2*σ* limit is 0.9%. Although we do not make use of these upper limits in this paper, we include them in our figures since they represent sensitive measurements that may one day be useful. The two limits are indicated with open circles in Figures 2a and 4a. Comparing our polarization results for B335 with the 850 *μ*m polarimetry of this source presented by Wolf, Launhardt & Henning (2003; hereafter WLH03), we find reasonable agreement. The WLH03 map is discussed further in section 3.2.2 below. Similarly, the 850 *μ*m polarization map of L1527 that is presented with no interpretation in the SCUPOL archive paper (Matthews et al. 2009) agrees with our 350 *μ*m polarization map of this source. In making these comparisons, we refer only to the measured angles of polarization since the degree of polarization of thermal dust emission is known to generally have considerable wavelength dependence (Vaillancourt et al. 2008, and references therein). B335 Outflow Observations and Data Analysis ------------------------------------------- The B335 350 *μ*m emission is very compact (see Figure 4a). To date, outflow maps for B335 have not been made with sufficient spatial resolution to determine the structure and kinematics of the outflow within the compact core region of B335. As part of our survey, we measured the outflow within the B335 core using the CARMA interferometer. The CARMA observations of B335 were made with the 15-element array in the D-configuration in July 2008. The total duration of the observations was 8.4 hours. The QSO 1925+211 was the phase calibrator; its flux was determined to be 1.2 Jy, using both Uranus and Neptune as flux calibrators. Observations of 3C454.3 were used to calibrate the passband structure. The correlator was configured to place the CO 1-0 line in the upper sideband; with 63 channels across a 7.6 MHz bandwidth, the velocity resolution at 115.271 GHz is approximately 0.32 kms− 1. Phase calibration was carried out on observations of the 2.7mm continuum emission in two 500 MHz bands. The gains determined for these wide (continuum) bands were then applied to the narrow (line) bands. The data were calibrated and mapped using the MIRIAD software package. The Fourier transform of u,v visibilities was taken, constrained by an image cell size of 1$\arcsec$ and a natural weighting function. The size of the synthesized beam is 4.4$\arcsec$  ×  3.8$\arcsec$. The peak intensity in the continuum emission (Figure 4b, grayscale) is 16.8 mJy/beam; the rms noise is 1.8 mJy/beam. The blue and red contours in Figure 4b show the CO 1-0 line emission integrated over the velocity ranges 5.4 - 8.0 kms− 1 and 8.6 - 11.2 kms− 1, respectively. The contours are at intervals of 3*σ*, where *σ* is 174 mJy/beam  ×  kms− 1. The integrated intensity peak in the blueshifted outflow is 5.6 Jy/beam  ×  kms− 1; the peak in the redshifted outflow is 4.0 Jy/beam  ×  kms− 1. Discussion ========== Core Orientation with respect to the Outflows --------------------------------------------- Magnetically regulated models predict that a YSO will be surrounded by a pseudo-disk a few thousand AU in size, and that the pseudo-disk symmetry axis will be aligned or nearly so with that of the YSO bipolar outflow. In this sub-section we will explore the extent to which this is the case for our three sources. In the case of L1527, Hogerheijde & Sandell (2000) used a two Gaussian model to separate the 450 and 850 *μ*m emission of an inner compact core immediately surrounding the embedded YSO from the more extended emission. Their Table 3 shows a deconvolved FWHM compact core size of 10$\arcsec$  ×  5$\arcsec$ (1400 AU  ×  700 AU) with a position angle of the major axis  ∼  30∘ east of north. However, the interferometric C18O map of this core by Ohashi et al. (1997; see Figure 5a), made with slightly higher spatial resolution and with the advantage that the extended emission is automatically removed, implies the major axis of the core is oriented north-south, but that the outer regions of the flattened core are distorted – possibly by the bipolar outflow – thus giving an impression of a tilt away from the north-south orientation when mapped with slightly poorer resolution and extended emission confusion. The bipolar outflow measured by Hogerheijde et al. (1998) in 12CO and by Zhou, Evans & Wang (1996) in 13CO has a position angle of 90∘ east of north. Thus L1527 is observed to have a flattened core on the scale of a few thousand AU, consistent with an edge-on pseudo-disk with its symmetry axis aligned with the outflow axis. In the case of B335, Chandler & Sargent (1993) observed a highly flattened core at 2.7 mm that is resolved only along the major axis, which has a size of $8\farcs2$ (2000 AU) with a position angle of 20∘ east of north (see Figure 7a). The bipolar outflow measured in 12CO by ourselves in this paper, Hirano et al. (1988) and others has a position angle of 90∘ east of north. Thus B335 is observed to have a flattened core on the scale of a few thousand AU, consistent with an edge-on pseudo-disk with its symmetry axis slightly tilted relative to the outflow axis by about 20∘. The core emission from IC 348-SMM2 has not previously been observed in detail. We use our data to study the morphology of the core surrounding this source. Figure 3a shows the underlying 350 *μ*m continuum emission intensity from IC348-SMM2 which we measured with SHARP. The 50% 350 *μ*m contour of this source has a major axis with a position angle of 60∘ east of north. The diameter of this major axis is 27$\arcsec$, compared to the perpendicular minor axis diameter of 20$\arcsec$. However, the 50% contour probably does not represent the FWHM of the core surrounding IC348-SMM2. A more realistic estimation of the FWHM of the core would be the 65% contour, which represents the 50% contour level of the core after the subtraction of a more extended cloud background set at about the 30% contour level in Figure 3a. The 65% contour major axis has a position angle of 56∘ east of north, and the diameters of the major and minor 65% contours are 21$\arcsec$ and 15$\arcsec$, respectively. If we assume an elliptical Gaussian core, we can deconvolve these FWHM estimates with a 10$\arcsec$ FWHM Gaussian beam to get a core size of 18$\arcsec$  ×  11$\arcsec$ (5500 AU  ×  3300 AU) at a position angle of 56∘. The size measured here is larger than for a typical pseudo-disk quoted in the literature by about a factor of two, but at our spatial resolution we are probably not resolving the true FWHM of the pseudo-disk, we are more likely measuring the outer extended region. The bipolar outflow measured by Tafalla, Kumar & Bachiller (2006) in 12CO has a position angle of 17∘ west of north. Thus IC348-SMM2 is observed to have a flattened core on the scale of several thousand AU, consistent with the outer regions of an edge-on pseudo-disk with its symmetry axis tilted within 20∘ relative to the outflow. In summary, we do see evidence for pseudo-disks and a tendency for alignment, within 20∘, of the pseudo-disk symmetry axis and the outflow axis for each of the YSOs in our sample. Inferred Magnetic Field Structures ---------------------------------- Our submillimeter polarization measurements displayed in Figures 2a, 3a, and 4a do not give a measure of the strength of the magnetic field, but they do give an indication of the net magnetic field direction (Lazarian 2007) and the level of uniformity (Hildebrand et al. 2009) along a given line-of-sight. The net magnetic field direction along a given line-of-sight is perpendicular to the submillimeter polarization vector measured for that line-of-sight (Lazarian 2007). Figures 2b, 3b, and 4b display the direction of the magnetic fields surrounding L1527, IC348-SMM2 and B335, respectively, as determined by the polarimetry *E*-vectors rotated by 90 degrees. We compare these field directions with our observations of the cores’ density structure (i.e., pseudo-disk) and velocity structure (i.e., outflows, infall radii). We note, however, that our polarization data for the three cores likely contain contributions from both the magnetic field associated with the core under study as well as that associated with the larger surrounding cloud. To see this, recall (section 1) that the maps shown by Kirk et al. (2006) reveal large changes in field direction as one moves away from the center of a core into the surrounding cloud material; and note that these changes occur at positions located about one arcminute ( ∼ 8000 AU) from the centers of the Kirk et al. (2006) cores, while our own maps extend out to distances of 7000-15,000 AU from the central YSO. Thus a significant fraction of the emission we have observed may originate from the cloud at large, not the core under study. One way to account for such contamination is to flag polarization measurements made at positions having lower flux density. Such measurements could be contaminated to a large degree by line-of-sight emission associated with the larger parent cloud. The fluxes we measure at the edges of our three maps range from  ∼ 5% to  ∼ 40% of the respective peak flux (see contours in Figures 2a, 3a, and 4a). We have chosen to set a “contamination threshold” at 25% of peak flux; we will consider polarization measurements obtained at positions having flux below this threshold to be at risk of large amounts of contamination by polarized signals originating in the larger cloud. This choice is somewhat arbitrary, but it has the benefit of flagging polarization measurements coming from regions of very low flux while preserving most of our measurements; we flag as unreliable just four measurements out of 22. All of our *B*-vectors shown in Figures 2b, 3b, and 4b are drawn with the same length and thickness, and those associated with total intensity levels below our contamination threshold are shown using dashed lines. Figures 2b and 4b each contain one such suspect *B*-vector, and Figure 3b contains two. Also shown in Figures 2b, 3b, and 4b are the bipolar outflow morphologies in these sources (see the figure captions for references). In addition, these figures show circles depicting the measured outer limits of the infall regions in L1527 and B335 based on inverse P-Cygni line profiles (see captions for references). No measurement of the infall radius for IC348-SMM2 has been made to date, but such an infall region is likely to have a radius range of 20$\arcsec$ to 30$\arcsec$ for an infall age similar to the ages of L1527 and B335 (see Table 2). Below we will assume an infall radius of 25$\arcsec$ when we compare our observational results to models. The models we will compare our data to are those of ASL03a and ALS03b. These models consist of self-similar, self-gravitating, singular, isothermal toroids with various amounts of rotation and magnetization. The rotation speeds of the cores range from 0 to 0.5 times their thermal sound speeds and the magnetic-flux-to-mass-ratios of the cores range from from 0 to 0.5. All models are supercritical in order for collapse to occur without external pressure. Presumably the collapse phase occurs after ambipolar diffusion has occurred, producing the supercritical state in the core. ALS03b note, however, that even in the relatively weakened state of the fields, these fields are responsible for the formation of pseudo-disks, considerable transport of angular momentum, and the resulting size of the centrifugally supported Keplerian disk during the collapse phase, and so cannot be ignored. In Figures 5, 6, and 7 we compare our data on L1527, IC348-SMM2, and B335, respectively, to the model displayed in Figure 8c of ALS03b, a model with intermediate rotation and magnetic field strengths. It should be noted here that, except for the case where there is no magnetic field to flatten the core and provide polarization, the spatial resolution of our data precludes us from discerning between the various models presented in ASL03a and ALS03b (see Figures 7 and 8 in ALS03b). ALS03b show (their Figure 7) that rotation has only a minor effect on the gas and magnetic field geometry at the spatial scales we are measuring here. However, the aim of our current study is not to test the finer points of magnetic regulated collapse models, but the gross predictions represented in Figure 1 and evidence for magnetic field pinches. In addition to our spatial resolution constraints, it is important to bear in mind that our results represent an integration of polarizations along each line of sight, whereas the magnetic field given in Figure 8c of ALS03b represents only the cross-section of the poloidal field on the plane of the sky for an edge-on pseudo-disk. If this cross-section were rotated round the symmetry axis of the pseudo-disk, an integration along a line-of-sight would result in a weakening of the pinch geometry to a more uniform field aligned with the symmetry axis. ### Magnetic Field Structure around L1527 and IC348-SMM2 Scrutiny of the results displayed in Figures 2 & 3 reveals that the field structures in L1527 and IC348-SMM2 are generally consistent with the magnetically regulated dynamical collapse models cited in the introduction in that they show: (1) pinched field line vectors on the scale of the measured or inferred infall regions for these cores; and (2) field line vectors (with a few exceptions discussed below) that are basically aligned with the bipolar outflows (once the distortion of a pinch is subtracted by eye using Figures 5 & 6). The exceptions mentioned in (2) are the three polarization vectors in the low-flux region to the south of IC348-SMM2, which imply an east-west magnetic field, and a vector immediately east of the emission peak of L1527. A possible explanation for this latter vector is given later in this section. However, the east-west field lines in IC348-SMM2 cannot be explained in the context of a magnetically regulated model. It is possible these vectors may not be associated with the core of IC348-SMM2, since two of the three reside in a region which has emission less than 25% of the peak emission for the source, and the third resides in a region with emission that is 25.5% of the peak. Although in general the polarization results agree with magnetically regulated models, in detail we see significant discrepancies beyond the exceptions mentioned above. The scale of the pinched structure in magnetically regulated models depends on the size (i.e., age) of the infall region. Outside the infall region, the magnetic fields should be uniform or nearly uniform, depending on the details of the pre-collapse, quasi-static contraction of the core. Only within the infall regions should pinches of the field lines be significant. Figure 6 shows how our *B*-vectors in IC348-SMM2 (with our assumed infall radius and ignoring the three *B*-vector exceptions to the south) show a remarkable agreement with Figure 8c from ALS03b if the symmetry axis of the pseudo-disk has a 17∘ tilt with respect to the outflow as we measure for this source. The agreement of the predicted field geometry from the ALS03b model with our measured vector position angles is somewhat surprising since ALS03b Figure 8c gives a cross-section of the poloidal field on the plane of the sky for an edge-on pseudo-disk, while our measurements are an integration along the line of sight. Such an integration should smooth out the pinched structure to some degree. The fit by eye between the model and our measurements is not as good if the pseudo-disk symmetry axis in the model is instead aligned with the outflow axis. The fact that the magnetic field fit is better when the model is aligned to the measured pseudo-disk axis, rather than the outflow axis, is consistent with the hypothesis that the pinched magnetic field structure and the pseudo-disk are both products of magnetized contraction and collapse, whereas the outflow is probably a product of rotation. In IC348-SMM2, the rotation axis may not be exactly aligned with the overall magnetic field axis for the core. Figure 5 shows that in L1527 our *B*-vectors broadly agree with a pinched magnetic field structure aligned with the pseudo-disk axis as shown in Figure 8c of ALS03b, but the figure also shows that there are significant distortions beyond the uncertainties ( ± 11∘) in the angles of the vectors. We see these distortions close to the edge and in one case beyond the infall boundary. A possible explanation for this additional distortion could be the bipolar outflow in this source which overlaps a significant portion of the region containing our measured vectors (see Figure 2). The effects of bipolar outflows are not included in the models to which we are comparing our results (ASL03a; ALS03b; Galli & Shu 1993a,b). The bipolar outflow in L1527 may also be responsible for the exceptional vector identified previously, which lies immediately east of the L1527 emission peak. This vector implies a magnetic field direction that is almost perpendicular to the outflow. In this scenario, the outflow pushes core material, and therefore also core magnetic field lines, into two polar cones surrounding the bipolar outflow near the emission peak in L1527, thereby giving rise to both the additional distortions near the edge of the infall boundary as well as the exceptional vector just east of the peak. Evidence for conical cavities such as would be required in this scenario has been obtained via mid-IR scattered light observations by Tobin et al. (2008) as well as interferometric measurements of HCO+ by Hogerheijde et al. (1998). The latter is shown in Figure 2b. In addition, the submillimeter maps of Hogerheijde & Sandell (2000) and our 350 *μ*m map (Figure 2a) show evidence of an X-like structure in the extended background emission about L1527 that overlaps the observed outflow in that region, implying that significant submillimeter emission (and any polarization of such emission) must be coming from the surface of the outflow conical cavities. Could the bipolar outflow observed in L1527 have enough energy to distort the magnetic field structure linked to the gas it entrains? The diagrams in Figure 8 in ALS03b show the pinched magnetic field structures for a number of rotating, dynamically-collapsing toroids of different initial cloud magnetic field strengths. These diagrams also show the contours of *β* (the ratio of the thermal to magnetic energy density). Close to the axis of symmetry *β* is much less than one ( ∼  0.1), further from the axis of symmetry *β* increases to values above one. These values of *β* can be used together with estimates of the thermal energy within a core to determine the energy density required of an outflow to distort the magnetic field within that core. The core thermal energy density for L1527 can be approximated by ${3 \over 2} \rho c\_s^2$, where *ρ* is given by the mass density of the 106 cm− 3 molecular gas measured for the L1527 core about 10$\arcsec$ away from the center (Hogerheijde & Sandell 2000) and c*s* is the sound speed which is approximately 0.25kms− 1 (Zhou, Evans & Wang 1996). The lower limit to the outflow energy density in L1527 is expressed as ${ 1 \over 2} M\_{\_L} V^2\_{\_L}$ divided by the observed volume of the outflow lobes, where the measured values for *M**L* (the mass entrained in the outflows) and *V**L* (the velocity of the entrained gas in the outflows) are given in Table 2 from Hogerheijde et al. (1998). The volume of both outflow lobes can be approximated by ${2 \over 3} \Omega\_{\_L} R^3\_{\_L}$, where *R**L*, the extent of one lobe in L1527, is given in Table 2 and Ω*L* ≈  0.14 sr based on Figure 6 in Hogerheijde et al. (1998) showing the extent of the 12CO outflow in L1527. The above imply that the energy density in the outflow is greater than the thermal energy density in the region 10$\arcsec$ from the peak of L1527 by about a factor of 100. Since in the ALS03b models the magnetic energy density is less than the thermal energy density away from the axis of symmetry, this implies that the outflow does have the energy required to distort the magnetic field where it disturbs the gas in the core away from the axis of symmetry. Indeed, with the energy density of the outflow observed, even the magnetic field very close to the axis of symmetry could be distorted. The ratio of the outflow energy density to core thermal energy density in IC348-SMM2 is similar to the ratio in L1527, if one assumes similar gas densities and sound speeds for IC348-SMM2 as measured for L1527 and the outflow parameters in Table 2 for IC348-SMM2. But the outflow in IC348-SMM2 does not overlap with our measured polarization vectors to a very great degree, so the outflow cannot affect the alignment of the magnetic fields which are inferred by these polarization measurements. Where there is overlap, the distortion would be minimal for field lines already parallel to the axis of the outflow if the opening angle of the outflow at that location is small – as it is for IC348-SMM2 ( ± 15∘; see Figure 3). In summary, if one assumes that the low-flux exceptions in IC348-SMM2 are not part of the core and one includes the effects of bipolar outflows on field alignment, then our observations of L1527 and IC348-SMM2 are consistent with magnetically regulated models. ### Magnetic Field Structure around B335 At first glance, our two polarization vectors in B335 imply a magnetic field more perpendicular than parallel to the outflow axis. However, our two inferred *B*-vectors are not too different from what would be expected in the region just south-east of the B335 center based on Figure 8c of ALS03b when this figure is aligned with the minor axis of the flattened core as measured by Chandler & Sargent (1993) (i.e., the symmetry axis of the inferred pseudo-disk). In this region of the model, the field lines are pinched towards the axis of symmetry (see Figure 7). Figure 7 also shows the *B*-vectors inferred from the 850 *μ*m polarimetry measurements of WLH03 as well as those inferred from our 350 *μ*m polarimetry measurements. Our results are broadly consistent with the results of WLH03, in that they agree with four of the six vectors in the south-east quadrant region of the B335 core, but not, however, with the one vector with which our measurements most nearly overlap. The 20 *B*-vectors of WLH03 and our 2 all lie inside the infall region outlined in Figures 4 and 7. The WLH03 results imply on average a more N-S magnetic field structure within B335, but there is considerable distortion evident in the field lines implied by the 20 vectors, since the standard deviation of the position angles of the *B*-vectors is about three times the average measurement error for each vector. WLH03 concluded that the average field they measure in B335 is the direction of the field in the core when it collapsed, and that the flattened core seen in B335 is prolate (rather than oblate) with its symmetry axis parallel to the magnetic field. If this is the case, then B335 presents a counter example to the results obtained by Tassis et al. (2009) who concluded the core model that best fits their sample of 24 high-mass cloud cores is an oblate core with the mean magnetic field more aligned with the short axis than the long axis. Alternatively, the WLH03 data and ours could imply a more toroidal field in B335 than poloidal. But if this were the case then this is inconsistent with the model of ALS03b which includes rotation. In this model, only a small volume of a dynamically collapsing core contains twisted (i.e., toroidal) magnetic fields. Outside this small region the collapsing flow has yet to be spun up, and inside this region *β* ≪ 1 so the field lines are rigid. If B335 does contain a toroidal field configuration, then the ALS03b model fails to describe it; there appears to be more twisting of the original poloidal magnetic field lines than can be explained in their model. Yet another interpretation of the data, and the one we advocate in this paper, is that B335 is an extreme example of what is happening in L1527; the outflow in B335 has distorted the field lines in the core of B335, either directly or indirectly by exciting more turbulence within the core, so the magnetic field seems to align N-S on average. Could the outflow in B335 cause large field distortions in the core? If we use the same analysis we used for L1527, but with a core thermal speed of 0.23kms− 1 (Zhou et al.1993), a core density at about 10$\arcsec$ from the center of  ∼  105 cm− 3 ( Zhou et al. 1990; Harvey et al. 2003), the B335 values for the outflow parameters given in Table 2, and Ω*L* =  0.6 sr (Figure 2 in Hirano et al. 1988), then we find that the outflow energy density is about a factor of 6 higher than the core thermal energy density (10$\arcsec$ from the peak) in B335. Our own outflow data shown in Figure 4b give a total kinetic energy for the outflow within the region mapped (i.e., ${1 \over 2}M\_{\_{chan}} V^2\_{\_{chan}}$, summed over the velocity channels in the ranges 5.4 - 8.0 kms− 1 and 8.6 - 11.2 kms− 1) of 1.8  ×  1042 erg, once a correction for the 10∘ inclination of the outflow to the plane of the sky has been made. Dividing this by the volume of the outflow as defined in Figure 4b, we get a kinetic energy density of  ∼  10− 9 erg cm− 3, which is a factor of 3 higher than the core thermal energy density. This outflow energy density although not enough to distort the magnetic field close to the axis of symmetry of the pseudo-disk in B335, is enough to distort the field further out. The outflow cavity outlined by the CO observations in Figure 4b is coincident with the region within the B335 core that most likely would have a magnetic field energy density that is greater than the outflow energy density; beyond this outflow cavity the reverse is likely to be true. The outflow cavity is a region in which the gas and dust densities are low, and so it is a region of low 350 *μ*m emission along our line of sight. Our polarization measurements are thus weighted away from the outflow cavity region towards those regions along our line of sight where the energy density of the outflow could distort the magnetic field structure. But why is this distortion observed to be so large in B335 compared to the distortion in L1527? The difference between B335 and the other two YSOs presented here is that the bipolar outflow in B335 is much larger in length, width and apparent age than the outflows in L1527 and IC348-SMM2 (Table 2). Therefore, the field lines within the B335 core could be highly distorted because the outflow has had time to plow through a greater portion of the core or excite greater gas turbulence in the core. ### Chi-Square Tests of Various Magnetic Field Geometries In order to give some quantitative assessment of the magnetic field scenarios discussed above, and how they compare to other possible configurations, we carried out reduced chi-squared tests of our data using a number of different theoretical magnetic field configurations for each source. For each source we compared the *B*-vectors implied by our data to: (1) a uniform magnetic field model where the angle of the field is aligned with the mean *B*-vector angle implied by our data for that source; (2) a uniform magnetic field model where the angle of the field is aligned with the outflow; (3) a uniform magnetic field model where the angle of the field is aligned with the symmetry axis of the observed pseudo-disk for that source; (4) a uniform magnetic field model where the angle of the field is aligned with the major axis of the observed flattened core (or pseudo-disk) for that source; and (5) a pinched magnetic field aligned with the symmetry axis of the observed pseudo-disk as presented in Figure 8c of ALS03b. Note that model (4) also corresponds to the case where the magnetic field is toroidal. Table 3 summarizes our results. Each number in the table represents the reduced chi-squared value, *χ**r*2, for the data specified for that column against the model for that row. The reduced chi-squared is calculated as $\chi^2\_r = {1 \over {\nu}} \Sigma\_i ((\theta\_i - \theta\_{Mi})^2 / \sigma\_i^2)$ where *θ**i* are the data representing the angles of the *B*-vector at various locations, i; *σ**i* is the uncertainty in each data angle; *θ**M**i* is the angle of the magnetic field at the location of each data point for a particular model; and *ν* is the number of degrees of freedom for the data set. For these calculations, the values of ∣(*θ**i* − *θ**M**i*)∣ were constrained to be  ≤  90$\arcdeg$ since our *B*-vectors have been derived from our polarization *E*-vectors which are invariant under 180 degree rotations. For cases where $|(\theta\_i - \theta\_{Mi})| > 90 \arcdeg$, a value of $|[(\theta\_i - \theta\_{Mi} \pm 180)]| \le 90\arcdeg$ was substituted. For B335, we carried out the same calculations using the data of WLH03 based on their Figure 1. *χ**r*2 should be close to 1 for a good fit. An inspection of Table 3 shows that *χ**r*2 is close to this value only for the ALS03b model for IC348-SMM2 and B335 when we consider only our data minus the exceptions discussed in Section 3.2.1. However the fits are not good for this model if we include the exceptions for IC348-SMM2, and if we include the WLH03 data for B335. This is in agreement with our qualitative assessment above. For L1527, although no model gives a good fit to our data, the two most favored are the ALS03b model and the uniform field aligned with the mean *B*-vector of our data for L1527 (i.e., 60$\arcdeg$ east of north). Our data do not differentiate between these two models, since our vectors lie mostly in two diagonal quadrants. If we had more data in the other two quadrants, a *χ**r*2 test would be able to differentiate between these two models. As it is, the *χ**r*2 results re-enforce our qualitative assessment above that although our data agree with the ALS03b model in a broad sense, there are enough deviations to it to require an explanation; possibly a disturbance to the magnetic field structure due to the outflow in L1527. Further, based on Table 3, our data for L1527, IC348-SMM2, and B335 clearly favor the ALS03b model over the ``toroidal" models, and the ALS03b model is slightly more favored than the models with a uniform field aligned with the outflow and a uniform field aligned with the pseudo-disk symmetry axis. This implies there is some evidence for a pinch in the magnetic field configuration for these sources. However, the data of WLH03 for B335, in contradiction, favor the model with a uniform field aligned with the major axis of the flattened core over all the other models, although the fit is not good for this model, thus implying some other underlying magnetic field configuration must be present - possibly a configuration disrupted by the outflow in B335. In summary, the chi-squared results in Table 3 support the qualitative discussions we presented in sections 3.2.1 and 3.2.2. We also compared our data to a random magnetic field model for each source. We did this by calculating the root-mean-square (RMS) of the differences between the directions of the *B*-vectors at points lying spatially adjacent to one another in each of our sources. For L1527 we obtained 11 pairs of measurements separated by 10$\arcsec$ or $\sqrt{2} \times 10 \arcsec$ (see Figure 2) and for IC348-SMM2 we obtained 10 such pairs (see Figure 3) for our RMS calculations. Points separated by $\sim 10\arcsec$ (i.e., one beam diameter) or greater represent independent measurements. For L1527 we calculated an RMS value for these differences in adjacent *B*-vector directions to be 19$\arcdeg$, and for IC348-SMM2 a value of 15$\arcdeg$. We then carried out the same calculation on 10 randomly generated numbers ranging between -90 and 90, and did this 50 times. The resulting RMS values formed a peaked gaussian distribution with a mean of 51 and a dispersion *σ* = 6. (This mean is close to the theoretical RMS of  ∼  52 for randomly selected numbers between -90 and 90.) Thus our RMS values for L1527 and IC348-SMM2 lie 5.5*σ* and 6*σ*, respectively, below the expected RMS for randomly distributed *B*-vectors, implying that the magnetic fields in L1527 and in IC348-SMM2 are not random in nature. We carried out the same RMS calculation on two different sets of 13 pairs of adjacent *B*-vector measurements made by WLH03 in B335 (see Figure 7), resulting in RMS values of 42$\arcdeg$ and 34$\arcdeg$. Both values are much larger than the average uncertainties ($\le 10\arcdeg$) for the measurements reported in WLH03. These RMS values imply that the magnetic field is possibly more randomized in B335 than in the other sources. This latter point is consistent with our suggestion that the magnetic field in B335 is more distorted by its outflow than are the magnetic fields in L1527 and IC348-SMM2. Conclusions =========== We have used the SHARP polarimeter on the CSO to obtain 350 *μ*m intensity maps and polarization measurements on L1527, IC348-SMM2, and B335 to test whether or not magnetically regulated models for low-mass star formation are consistent with observations of sources which should have little distortions in their various geometries due to projection effects (i.e., sources with bipolar outflows which lie close to the plane-of-the-sky). (1) Our data for IC348-SMM2 combined with the data of others for L1527 and B335 show flattened cores consistent with edge-on pseudo-disks having symmetry axes that are nearly, but in two cases not exactly, parallel to the bipolar outflows in these sources. (2) There is evidence that the sources L1527 and IC348-SMM2 each contains a pinched magnetic field structure with its symmetry axis approximately aligned with the symmetry axis of the inferred embedded pseudo-disk in each. The evidence is strong (i.e., the goodness-of-fit to the data is good) for IC348-SMM2, if certain low-flux polarimetry measurements, which could be associated more with the background cloud than the core, are ignored. In IC348-SMM2, where the outflow and pseudo-disk axes are not exactly aligned, the pinched magnetic field structure fits the data better when it is aligned with the symmetry axis of the pseudo-disk rather than with the outflow axis. This is consistent with the hypothesis that the pinched magnetic field structure and the pseudo-disk are both products of magnetized contraction and collapse, whereas the outflow is only indirectly related to the core magnetic field structure inasmuch as that magnetic field structure has influenced the orientation of the rotation axis of the core. (3) In L1527, however, the magnetic field structure shows considerable distortion from an ideal pinched field line structure given the measured infall radius of this source. Our hypothesis is that this distortion is caused by the bipolar outflow in L1527. We show that the outflow has sufficient energy density to distort the magnetic field structure in the core of this source. This distortion is not seen in IC348-SMM2, because the only inferred *B*-vectors that overlap with the outflow in this source are vectors that would not have been altered by the outflow. (4) The magnetic field structure observed in the B335 core is not aligned with the outflow axis of this source, but our two *B*-vectors are consistent with models with pinched field lines through a pseudo-disk, if the pseudo-disk is tilted with respect to the outflow axis by 20$\arcdeg$ (i.e., consistent with the pseudo-disk observed for B335 by Chandler & Sargent (1993)). However, if we combine our data with those of WLH03, the fit to these pinched field line models is not good. Our explanation is that B335 is an extreme example of the bipolar outflow driven field distortion that we are seeing in L1527. The main difference between L1527 and B335 is that the outflow in B335 is much larger in extent than the one in L1527, is assumed therefore to be much older, and so has had time to cause a greater degree of distortion of the core magnetic field lines. This is not the interpretation given by WLH03 of their B335 observations. They interpreted their results to imply a near uniform field lying perpendicular to the outflow in this source - in contradiction to the predictions of magnetically regulated collapse as summarized in Figure 1. (5) More core magnetic field structures need to be mapped to elucidate the overall dynamical collapse story, given the considerable variation in core structures observed in our sample of just three. In short, the gross predictions of the magnetically regulated models (i.e., as summarized in Figure 1) need to be tested further. Acknowledgements ================ We are grateful to the Caltech Submillimeter Observatory TAC, management, and staff for making this study possible over the many observing runs that it has consumed. We thank Darren Dowell, Megan Krejny, Woojin Kwon, and Hiroko Shinnaga for help with the observations, and Ron Taam for valuable discussions. We are grateful to the National Science Foundation for supporting the Caltech Submillimeter Observatory via grant AST-0838261, and for supporting the operation of SHARP via grant AST-0909030 to Northwestern University. Astronomical Research by P.F.G. and N.L.C. is supported by the Jet Propulsion Laboratory, California Institute of Technology. Allen, A., Shu, F.H., & Li, Z.-Y. 2003,, 599, 351 (ASL03a) Allen, A., Li, Z.-Y., & Shu, F.H. 2003,, 599, 363 (ALS03b) Attard, M., Houde, M., Novak, G., & Vaillancourt, J.E. 2008,, 120, 805 Attard, M., Houde, M., Novak, G.,Vaillancourt, J.E., Li, H., Dowell, C.D., Davidson, J.A., and Shinnaga, H. 2009,, 702, 1584 Basu, S. & Mouschovias, T.C. 1994, 432, 720 Chandler, C.J., & Sargent, A.I. 1993,, 414, L29 Chandler, C.J., & Richer, J.S. 2000,, 530, 851 Dowell, C.D., Hildebrand, R.H., Schleuning, D.A, Vaillancourt, J.E., Dotson, J.L., Novak, G., Renbarger, T., & Houde, M. 1998,, 504, 588 Dowell, C.D. et al. 2003, Proc. SPIE 4855: Millimeter and Submillimeter Detectors for Astronomy, eds. T.G. Phillips & J. Zmuidzinas, 73 Galli, D., & Shu, F.H. 1993a,, 417, 220 Galli, D., & Shu, F.H. 1993b,, 417, 243 Girart, J. M., Rao, R. & Marrone, D. P. 2006, Science, 313, 812 Goldsmith, P.F., & Arquilla, R. 1985, Protostars and Planets II, 137 Goldsmith, P.F., Heyer, M., Narayanan, G., Snell, R., Li D., & Brunt, C. 2008,, 680, 428 Goodman, A.A., Benson, P.J., Fuller, G.A., & Myers, P.C. 1993,, 406, 528 Hatchell, J., Richer, J.S., Fuller, G.A., Qualtrough, C. J., Ladd, E. F., & Chandler, C. J. 2005,, 440, 151 Harvey, D.W.A., Wilner, D.J., Myers, P., & Tafalla, M. 2003,, 596, 383 Henning, T., Wolf, S., Launhardt, R., & Waters, R. 2001,, 561, 871 Hildebrand, R.H., Davidson, J.A., Dotson, J.L., Dowell, C.D., Novak, G., & Vaillancourt, J.E. 2000,, 112, 1215 Hildebrand, R.H., Kirby, L., Dotson, J.L, Houde, M., & Vaillancourt, J.E. 2009,, 696, 567 Hirano, N., Kameya, O., Nayayama, M., & Takakubo,K. 1988,, 327, L69 Houde, M., & Vaillancourt, J.E. 2007,, 119, 871 Hogerheijde, M.R, van Dishoeck, E.F, Blake, G.A., van Langevelde, H.J. 1998,, 502, 315 Hogerheijde, M.R., & Sandell, G. 2000,, 534, 880 Jørgensen, J.K., et al. 2007,, 659, 479 Kirby, L., Davidson, J.A., Dotson, J.L., Dowell, C.D., & Hildebrand, R.H. 2005,, 117, 991 Kirk, J.M., Ward-Thompson, D., & Crutcher, R.M. 2006,, 369, 1445 Konigl, A. & Pudritz, R. E. 2000, Protostars and Planets IV, 759 Kwon, W., Looney, L.W., Crutcher, R. M., & Kirk, J. M. 2006,, 653, 1358 Lazrarian, A. 2007,, 106, 225 Li, H., Dowell, C. D., Kirby, L., Novak, G., & Vaillancourt, J. E. 2008, App. Opt., 47, 422 Mac Low, M.-M. & Klessen, R.S. 2004, Reviews of Modern Physics, 76, 125 Mathews, B.C., & Wilson, C.D. 2002,, 574, 822 Matthews, B.C., McPhee, C.A., Fissel, L.M., & Curran, R.L. 2009,, 182, 143 McKee, C.F. & Ostriker, E.C. 2007,, 45, 565 Ménard, F. & Duchêne, G. 2004,, 425, 973 Mestel, L. 1985, Protostars and Planets II, 320 Mouschovias, T.C. & Paleologou, E. V. 1979, 230 204 Mouschovias, T.C. & Paleologou, E. V. 1980, 237, 877 Mouschovias, T.C. & Ciolek, G.E. 1999, The Origin of Stars and Planetary Systems, eds. C.J. Lada & N.D. Kylafis, 305 Myers, P.C., Bachiller, R., Caselli, P., Fuller, G.A., Mardones, D., Tafalla, M., & Wilner, D.J. 1995,, 449, L65 Ohashi, N., Hayashi, M., Ho, P.T.P, & Momose, M. 1997,, 475, 211 Serkowski, K. 1962, in Advances in Astronomy and Astrophysics, ed. Z. Kopal (New York: Academic), 290 Shu, F.H., Adams, F.C., & Lizano, S. 1987,, 25, 23 Shu, F. H., Najita, J.R., Shang, H., & Li, Z-Y. 2000, Protostars and Planets IV, 789 Shu, F.H., Li, Z-Y., & Allen, A. 2004,, 601, 930 Tafalla, M., Kumar, M.S.N., & Bachiller, R. 2006,, 456, 179 Tamura, M., Ohashi, N., Hirano, N., Itoh, Y., & Moriarty-Schieven, G.H. 1996,, 112, 2076 Tassis, K., Dowell, C.D., Hildebrand, R.H., Kirby, L., and Vaillancourt, J.E. 2009,, 399, 168 Tobin, J.J., Hartmann, L., Calvet. N, & D’Alessio, P. 2008,, 679, 1364 Tomisaka, K. 1998,, 502, L163 Vaillancourt, J. E. 2006,, 118, 1340 Vaillancourt, J.E., Dowell, C.D., Hildebrand, R.H., Kirby, L., Krejny, M.M., Li, H., Novak, G., Houde, M., Shinnaga, H., Attard, M. 2008,, 679, L25 Vallée, J.P., Bastien, P., & Greaves, J.S. 2000,, 542, 352 Vallée, J.P., Greaves, J.S., & Fiege, J.D. 2003,, 588, 910 Ward-Thompson, D., Sen, A.K., Kirk, J.M., & Nutter, D. 2009,, 398, 394 Wolf, S., Launhardt, R., & Henning, T. 2003,, 592, 233 (WLH03) Zhou, S., Evans, N.J., Butner, H.M., Kutner, M.L., Leung, C.M., Mundy, L.G. 1990,, 363, 168 Zhou, S., Evans, N.J., II, Carsten, K., & Walmsley, C.M. 1993,, 404, 232 Zhou, S., Evans, N.J., II, & Wang, Y. 1996,, 466, 296 lrrrrrr L1527 & 38.0 & 9.5 & 5.6 & 2.1 & -51.0 & 9.5 & 28.5 & 19.0 & 3.3 & 1.6 & -56.1 & 12.0 & 19.0 & 9.5 & 2.5 & 0.9 & -63.7 & 9.7 & 19.0 & 19.0 & 3.5 & 1.1 & -41.7& 10.1 & 9.5 & -28.5 & 5.8 & 1.8 & 18.2 & 9.0 & 9.5 & 0.0 & 1.0 & 0.5 & -66.4 & 12.3 & 9.5 & 9.5 & 1.6 & 0.5 & -36.5 & 9.6 & -9.5 & -19.0 & 2.8 & 1.3 & 3.1& 10.0 & -9.5 & -9.5 & 1.8 & 0.7 & -13.2 & 10.3 & -19.0 & -9.5 & 2.2 & 1.2 & -28.4 & 14.5 IC348-SMM2&9.5 & 19.0& 5.7& 2.1& 84.8&10.5 & 0.0& 19.0& 6.3& 2.4& 81.6&9.4 & -9.5& -28.5& 9.2& 3.7& -9.8&10.0 & -9.5& 19.0& 5.1& 2.8& 61.9&12.8 & -19.0& -28.5& 9.5& 4.4& -4.7&11.0 & -19.0& -19.0& 5.8& 3.0& 4.0&11.8 & -19.0& 19.0& 7.9& 3.4& 54.4&13.3 & -28.5& -9.5& 5.9& 2.9& 33.7&11.4 & -28.5& 0.0& 9.4& 3.1& 32.3&8.2 & -28.5& 9.5 & 8.0& 4.1& 26.3&12.0 B335 & 9.5& -19.0& 3.2& 1.5& 56.5&11.9 & 9.5& -9.5& 1.3& 0.7&58.8&14.6 lrrr RA (2000) & 04:39:53.9 & 03:43:56.9 & 19:37:01.03 Dec (2000) & 26:03:11 & 32:03:06 & 07:34:11 Distance (pc) & 140 & 300 & 250 Infall Radius, rinf (pc) & 0.026 (38) & & 0.04 (33) Sound Speed, c*s* (kms− 1) & 0.25 & & 0.23 Infall Age ( ∼  rinf/c*s*) (yr) & $\thicksim 1 \times 10^5 $ & & $\thicksim 1.5 \times 10^5 $ 12CO Outflow Extent, R*L* (pc) & 0.095 (140) & 0.14 (95) & 0.3 (250) PA of outflow axis E from N & 90∘ &  − 17∘& 90∘ Outflow Mass, M*L* (M$\_\sun$) & 0.18 & 0.033 & 0.44 Characistic Outflow Velocity, & & & V*L* (kms− 1) & 20 & 54 & 13 Outflow Age ( ∼  R*L*/V*L*) (yr) & $\thicksim 5 \times 10^3 $ & $\thicksim 2 \times 10^3 $ & $ \thicksim 2 \times 10^4 $ lcccccc *θ**M**i* = Mean *θ**i* & 8.3 &8.2 & 10.4 & 5.5 & 0.02 & 13.7 *θ**M**i*= *θ**o**u**t**f**l**o**w* & 16.4 & 15.0 & 20.9 & 8.1 & 19.4 & 36.9 *θ**M**i*= *θ**P**D**S**A*& 16.4 & 15.0 & 12.7 & 4.8 & 8.2 & 38.5 *θ**M**i*= *θ**P**D**M**A*& 30.3 & 33.2 & 31.7 & 42.0 &16.2 & 14.2 ALS03b Model & 9.9 & 7.8 & 14.0 & 0.4 & 0.8 & 31.5 A cartoon summarizing the magnetically regulated core collapse scenario outlined in Section 1. Typically, the diameter of the infall region is  ∼ 10,000 AU, the diameter of a pseudo-disk is  ∼ 2,000 AU, and the diameter of a Keplerian disk is  ∼ 100 AU. In the magnetically regulated core collapse scenario: the pseudo-disk symmetry axis is aligned with the core magnetic field; and magnetic braking tends to align the core rotation axis with the magnetic field, but this alignment may not be exact. The pseudo-disk is a dynamically collapsing object formed by the magnetic fields, not rotation. The Keplerian disk is an object formed by rotation and so its symmetry axis is aligned with the core’s rotation axis, as too is the outflow axis if the outflow is driven by rotation. [fig:cartoon] (a) 350 *μ*m polarization vectors for L1527. The thickness of each vector indicates the significance, with all vectors having significance greater than 2*σ* before bias correction (see section 2.2). Note the very low polarization ( ≤ 0.9%) marked by a small open circle near the flux peak (see section 2.2). The contours indicate the total intensity, I, measured by SHARP. The highest contour level corresponds to 90% of the peak flux, and each subsequent contour represents a decrement of 10% of the peak flux. (b) Inferred magnetic field directions superimposed on a section of a figure from Hogerheijde et al. (1998) showing 12CO 3-2 observations of the bipolar outflow (contours) and a HCO+ map tracing gas on the cavity walls surrounding the outflow (gray area). B-vectors in regions having 350 *μ*m flux less than 25% of the peak are shown as dashed lines. The full Hogerheijde et al. figure shows the blueshifted outflow (solid contours) extending 2$\arcmin$ east and the redshifted outflow (dashed contours) extending 2$\arcmin$ west from the YSO in L1527. The large dashed circles in (a) and (b) show the extent of the measured infall region for this object (Myers et al. 1995; Zhou, Evans & Wang 1996). [fig:L1527] [fig:IC348] [fig:B335] [fig:L1527b] [fig:IC348b] [fig:B335b]
arxiv_0000300
*Monthly Notices of the Royal Astronomical Society*: LaTeX guide for authors ============================================================================ ### Last updated 2020 June 10; in original form 2013 September 5 [firstpage] This is a guide for preparing papers for *Monthly Notices of the Royal Astronomical Society* using the `mnras` LaTeX package. It provides instructions for using the additional features in the document class. This is not a general guide on how to use LaTeX, and nor does it replace the journal’s instructions to authors. See `mnras_template.tex` for a simple template. editorials, notices – miscellaneous Introduction ============ The journal *Monthly Notices of the Royal Astronomical Society* (MNRAS) encourages authors to prepare their papers using LaTeX. The style file `mnras.cls` can be used to approximate the final appearance of the journal, and provides numerous features to simplify the preparation of papers. This document, `mnras_guide.tex`, provides guidance on using that style file and the features it enables. This is not a general guide on how to use LaTeX, of which many excellent examples already exist. We particularly recommend *Wikibooks LaTeX*[3](#fn3), a collaborative online textbook which is of use to both beginners and experts. Alternatively there are several other online resources, and most academic libraries also hold suitable beginner’s guides. For guidance on the contents of papers, journal style, and how to submit a paper, see the MNRAS Instructions to Authors[4](#fn4). Only technical issues with the LaTeX class are considered here. Obtaining and installing the MNRAS package ========================================== Some LaTeX distributions come with the MNRAS package by default. If yours does not, you can either install it using your distribution’s package manager, or download it from the Comprehensive TeX Archive Network[5](#fn5) (CTAN). The files can either be installed permanently by placing them in the appropriate directory (consult the documentation for your LaTeX distribution), or used temporarily by placing them in the working directory for your paper. To use the MNRAS package, simply specify `mnras` as the document class at the start of a `.tex` file: ``` \documentclass{mnras} ``` Then compile LaTeX (and if necessary BibTeX) in the usual way. Preparing and submitting a paper ================================ We recommend that you start with a copy of the `mnras_template.tex` file. Rename the file, update the information on the title page, and then work on the text of your paper. Guidelines for content, style etc. are given in the instructions to authors on the journal’s website$^{\ref{foot:itas}}$. Note that this document does not follow all the aspects of MNRAS journal style (e.g. it has a table of contents). If a paper is accepted, it is professionally typeset and copyedited by the publishers. It is therefore likely that minor changes to presentation will occur. For this reason, we ask authors to ignore minor details such as slightly long lines, extra blank spaces, or misplaced figures, because these details will be dealt with during the production process. Papers must be submitted electronically via the online submission system; paper submissions are not permitted. For full guidance on how to submit a paper, see the instructions to authors. Class options ============= There are several options which can be added to the document class line like this: ``` \documentclass[option1,option2]{mnras} ``` The available options are: * `letters` – used for papers in the journal’s Letters section. * `onecolumn` – single column, instead of the default two columns. This should be used *only* if necessary for the display of numerous very long equations. * `doublespacing` – text has double line spacing. Please don’t submit papers in this format. * `referee` – *(deprecated)* single column, double spaced, larger text, bigger margins. Please don’t submit papers in this format. * `galley` – *(deprecated)* no running headers, no attempt to align the bottom of columns. * `landscape` – *(deprecated)* sets the whole document on landscape paper. * `usenatbib` – *(all papers should use this)* this uses Patrick Daly’s `natbib.sty` package for citations. * `usegraphicx` – *(most papers will need this)* includes the `graphicx` package, for inclusion of figures and images. * `useAMS` – adds support for upright Greek characters `\upi`, `\umu` and `\upartial` ($\upi$, $\umu$ and $\upartial$). Only these three are included, if you require other symbols you will need to include the `amsmath` or `amsymb` packages (see section [sec:packages]). * `usedcolumn` – includes the package `dcolumn`, which includes two new types of column alignment for use in tables. Some of these options are deprecated and retained for backwards compatibility only. Others are used in almost all papers, but again are retained as options to ensure that papers written decades ago will continue to compile without problems. If you want to include any other packages, see section [sec:packages]. Title page ========== If you are using `mnras_template.tex` the necessary code for generating the title page, headers and footers is already present. Simply edit the title, author list, institutions, abstract and keywords as described below. Title ----- There are two forms of the title: the full version used on the first page, and a short version which is used in the header of other odd-numbered pages (the ‘running head’). Enter them with `\title[]{}` like this: ``` \title[Running head]{Full title of the paper} ``` The full title can be multiple lines (use `\\` to start a new line) and may be as long as necessary, although we encourage authors to use concise titles. The running head must be  ≤  45 characters on a single line. See appendix [sec:advanced] for more complicated examples. Authors and institutions ------------------------ Like the title, there are two forms of author list: the full version which appears on the title page, and a short form which appears in the header of the even-numbered pages. Enter them using the `\author[]{}` command. If the author list is more than one line long, start a new line using `\newauthor`. Use `\\` to start the institution list. Affiliations for each author should be indicated with a superscript number, and correspond to the list of institutions below the author list. For example, if I were to write a paper with two coauthors at another institution, one of whom also works at a third location: ``` \author[K. T. Smith et al.]{ Keith T. Smith,$^{1}$ A. N. Other,$^{2}$ and Third Author$^{2,3}$ \\ $^{1}$Affiliation 1\\ $^{2}$Affiliation 2\\ $^{3}$Affiliation 3} ``` Affiliations should be in the format ‘Department, Institution, Street Address, City and Postal Code, Country’. Email addresses can be inserted with the `\thanks{}` command which adds a title page footnote. If you want to list more than one email, put them all in the same `\thanks` and use `\footnotemark[]` to refer to the same footnote multiple times. Present addresses (if different to those where the work was performed) can also be added with a `\thanks` command. Abstract and keywords --------------------- The abstract is entered in an `abstract` environment: ``` \begin{abstract} The abstract of the paper. \end{abstract} ``` Note that there is a word limit on the length of abstracts. For the current word limit, see the journal instructions to authors$^{\ref{foot:itas}}$. Immediately following the abstract, a set of keywords is entered in a `keywords` environment: ``` \begin{keywords} keyword 1 -- keyword 2 -- keyword 3 \end{keywords} ``` There is a list of permitted keywords, which is agreed between all the major astronomy journals and revised every few years. Do *not* make up new keywords! For the current list of allowed keywords, see the journal’s instructions to authors$^{\ref{foot:itas}}$. Sections and lists ================== Sections and lists are generally the same as in the standard LaTeX classes. Sections -------- Sections are entered in the usual way, using `\section{}` and its variants. It is possible to nest up to four section levels: ``` \section{Main section} \subsection{Subsection} \subsubsection{Subsubsection} \paragraph{Lowest level section} ``` The other LaTeX sectioning commands `\part`, `\chapter` and `\subparagraph{}` are deprecated and should not be used. Some sections are not numbered as part of journal style (e.g. the Acknowledgements). To insert an unnumbered section use the ‘starred’ version of the command: `\section*{}`. See appendix [sec:advanced] for more complicated examples. Lists ----- Two forms of lists can be used in MNRAS – numbered and unnumbered. For a numbered list, use the `enumerate` environment: ``` \begin{enumerate} \item First item \item Second item \item etc. \end{enumerate} ``` which produces 1. First item 2. Second item 3. etc. Note that the list uses lowercase Roman numerals, rather than the LaTeX default Arabic numerals. For an unnumbered list, use the `description` environment without the optional argument: ``` \begin{description} \item First item \item Second item \item etc. \end{description} ``` which produces First item Second item etc. Bulleted lists using the `itemize` environment should not be used in MNRAS; it is retained for backwards compatibility only. Mathematics and symbols ======================= The MNRAS class mostly adopts standard LaTeX handling of mathematics, which is briefly summarised here. See also section [sec:packages] for packages that support more advanced mathematics. Mathematics can be inserted into the running text using the syntax `$1+1=2$`, which produces 1 + 1 = 2. Use this only for short expressions or when referring to mathematical quantities; equations should be entered as described below. Equations --------- Equations should be entered using the `equation` environment, which automatically numbers them: ``` \begin{equation} a^2=b^2+c^2 \end{equation} ``` which produces *a*2 = *b*2 + *c*2 By default, the equations are numbered sequentially throughout the whole paper. If a paper has a large number of equations, it may be better to number them by section (2.1, 2.2 etc.). To do this, add the command `\numberwithin{equation}{section}` to the preamble. It is also possible to produce un-numbered equations by using the LaTeX built-in `\[``\]` and `$$``$$` commands; however MNRAS requires that all equations are numbered, so these commands should be avoided. Special symbols --------------- [tab:anysymbols] @l@ l@ l@ Command & Output & Meaning `\sun` & & Sun, solar `\earth` & & Earth, terrestrial `\micron` & & microns `\degr` & & degrees `\arcmin` & & arcminutes `\arcsec` & & arcseconds `\fdg` & & fraction of a degree `\farcm` & & fraction of an arcminute `\farcs` & & fraction of an arcsecond `\fd` & & fraction of a day `\fh` & & fraction of an hour `\fm` & & fraction of a minute `\fs` & & fraction of a second `\fp` & & fraction of a period `\diameter` & & diameter `\sq` & & square, Q.E.D. [tab:mathssymbols] l@ l@ l Command & Output & Meaning `\upi` & $\upi$ & upright pi `\umu` & $\umu$ & upright mu `\upartial` & $\upartial$ & upright partial derivative `\lid` & $\lid$ & less than or equal to `\gid` & $\gid$ & greater than or equal to `\la` & $\la$ & less than of order `\ga` & $\ga$ & greater than of order `\loa` & $\loa$ & less than approximately `\goa` & $\goa$ & greater than approximately `\cor` & $\cor$ & corresponds to `\sol` & $\sol$ & similar to or less than `\sog` & $\sog$ & similar to or greater than `\lse` & $\lse$ & less than or homotopic to `\gse` & $\gse$ & greater than or homotopic to `\getsto` & $\getsto$ & from over to `\grole` & $\grole$ & greater over less `\leogr` & $\leogr$ & less over greater Some additional symbols of common use in astronomy have been added in the MNRAS class. These are shown in tables [tab:anysymbols]–[tab:mathssymbols]. The command names are – as far as possible – the same as those used in other major astronomy journals. Many other mathematical symbols are also available, either built into LaTeX or via additional packages. If you want to insert a specific symbol but don’t know the LaTeX command, we recommend using the Detexify website[6](#fn6). Sometimes font or coding limitations mean a symbol may not get smaller when used in sub- or superscripts, and will therefore be displayed at the wrong size. There is no need to worry about this as it will be corrected by the typesetter during production. To produce bold symbols in mathematics, use `\bmath` for simple variables, and the `bm` package for more complex symbols (see section [sec:packages]). Vectors are set in bold italic, using `\mathbfit{}`. For matrices, use `\mathbfss{}` to produce a bold sans-serif font e.g. ; this works even outside maths mode, but not all symbols are available (e.g. Greek). For ∇ (del, used in gradients, divergence etc.) use `$\nabla$`. Ions ---- A new `\ion{}{}` command has been added to the class file, for the correct typesetting of ionisation states. For example, to typeset singly ionised calcium use `\ion{Ca}{ii}`, which produces. Figures and tables ================== Figures and tables (collectively called ‘floats’) are mostly the same as built into LaTeX. Basic examples -------------- ![An example figure.](example "fig:") [fig:example] Figures are inserted in the usual way using a `figure` environment and `\includegraphics`. The example Figure [fig:example] was generated using the code: ``` \begin{figure} \includegraphics[width=\columnwidth]{example} \caption{An example figure.} \label{fig:example} \end{figure} ``` [tab:example] | | | | | --- | --- | --- | | Star | Mass | Luminosity | | | $M\_{\sun}$ | $L\_{\sun}$ | | Sun | 1.00 | 1.00 | | *α* Cen A | 1.10 | 1.52 | | *ε* Eri | 0.82 | 0.34 | The example Table [tab:example] was generated using the code: ``` \begin{table} \caption{An example table.} \label{tab:example} \begin{tabular}{lcc} \hline Star & Mass & Luminosity\\ & $M_{\sun}$ & $L_{\sun}$\\ \hline Sun & 1.00 & 1.00\\ $\alpha$~Cen~A & 1.10 & 1.52\\ $\epsilon$~Eri & 0.82 & 0.34\\ \hline \end{tabular} \end{table} ``` Captions and placement ---------------------- Captions go *above* tables but *below* figures, as in the examples above. The LaTeX float placement commands `[htbp]` are intentionally disabled. Layout of figures and tables will be adjusted by the publisher during the production process, so authors should not concern themselves with placement to avoid disappointment and wasted effort. Simply place the LaTeX code close to where the figure or table is first mentioned in the text and leave exact placement to the publishers. By default a figure or table will occupy one column of the page. To produce a wider version which covers both columns, use the `figure*` or `table*` environment. If a figure or table is too long to fit on a single page it can be split it into several parts. Create an additional figure or table which uses `\contcaption{}` instead of `\caption{}`. This will automatically correct the numbering and add ‘*continued*’ at the start of the caption. [tab:continued] | | | | | --- | --- | --- | | Star | Mass | Luminosity | | | $M\_{\sun}$ | $L\_{\sun}$ | | *τ* Cet | 0.78 | 0.52 | | *δ* Pav | 0.99 | 1.22 | | *σ* Dra | 0.87 | 0.43 | Table [tab:continued] was generated using the code: ``` \begin{table} \contcaption{A table continued from the previous one.} \label{tab:continued} \begin{tabular}{lcc} \hline Star & Mass & Luminosity\\ & $M_{\sun}$ & $L_{\sun}$\\ \hline $\tau$~Cet & 0.78 & 0.52\\ $\delta$~Pav & 0.99 & 1.22\\ $\sigma$~Dra & 0.87 & 0.43\\ \hline \end{tabular} \end{table} ``` To produce a landscape figure or table, use the `pdflscape` package and the `landscape` environment. The landscape Table [tab:landscape] was produced using the code: ``` \begin{landscape} \begin{table} \caption{An example landscape table.} \label{tab:landscape} \begin{tabular}{cccccccccc} \hline Header & Header &...\\ Unit & Unit &...\\ \hline Data & Data &...\\ Data & Data &...\\ ...\\ \hline \end{tabular} \end{table} \end{landscape} ``` Unfortunately this method will force a page break before the table appears. More complicated solutions are possible, but authors shouldn’t worry about this. [tab:landscape] | | | | | | | | | | | | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | | Header | Header | Header | Header | Header | Header | Header | Header | Header | Header | | Unit | Unit | Unit | Unit | Unit | Unit | Unit | Unit | Unit | Unit | | Data | Data | Data | Data | Data | Data | Data | Data | Data | Data | | Data | Data | Data | Data | Data | Data | Data | Data | Data | Data | | Data | Data | Data | Data | Data | Data | Data | Data | Data | Data | | Data | Data | Data | Data | Data | Data | Data | Data | Data | Data | | Data | Data | Data | Data | Data | Data | Data | Data | Data | Data | | Data | Data | Data | Data | Data | Data | Data | Data | Data | Data | | Data | Data | Data | Data | Data | Data | Data | Data | Data | Data | | Data | Data | Data | Data | Data | Data | Data | Data | Data | Data | References and citations ======================== Cross-referencing ----------------- The usual LaTeX commands `\label{}` and `\ref{}` can be used for cross-referencing within the same paper. We recommend that you use these whenever relevant, rather than writing out the section or figure numbers explicitly. This ensures that cross-references are updated whenever the numbering changes (e.g. during revision) and provides clickable links (if available in your compiler). It is best to give each section, figure and table a logical label. For example, Table [tab:mathssymbols] has the label `tab:mathssymbols`, whilst section [sec:packages] has the label `sec:packages`. Add the label *after* the section or caption command, as in the examples in sections [sec:sections] and [sec:figtable]. Enter the cross-reference with a non-breaking space between the type of object and the number, like this: `see Figure~\ref{fig:example}`. The `\autoref{}` command can be used to automatically fill out the type of object, saving on typing. It also causes the link to cover the whole phrase rather than just the number, but for that reason is only suitable for single cross-references rather than ranges. For example, `\autoref{tab:journal_abbr}` produces. Citations --------- MNRAS uses the Harvard – author (year) – citation style, e.g.. This is implemented in LaTeX via the `natbib` package, which in turn is included via the `usenatbib` package option (see section [sec:options]), which should be used in all papers. Each entry in the reference list has a ‘key’ (see section [sec:reflist]) which is used to generate citations. There are two basic `natbib` commands: `\citet{key}` produces an in-text citation: `\citep{key}` produces a bracketed (parenthetical) citation: Citations will include clickable links to the relevant entry in the reference list, if supported by your LaTeX compiler. [tab:natbib] | Command | Ouput | Note | | --- | --- | --- | | `\citet{key}` | | | | `\citep{key}` | | | | `\citep{key,key2}` | | Multiple papers | | `\citet[table 4]{key}` | | | | `\citep[see][figure 7]{key}` | | | | `\citealt{key}` | | For use with manual brackets | | `\citeauthor{key}` | | If already cited in close proximity | | `\defcitealias{key}{Paper~I}` | | Define an alias (doesn’t work in floats) | | `\citetalias{key}` | | | | `\citepalias{key}` | | | There are a number of other `natbib` commands which can be used for more complicated citations. The most commonly used ones are listed in Table [tab:natbib]. For full guidance on their use, consult the `natbib` documentation[7](#fn7). If a reference has several authors, `natbib` will automatically use ‘et al.’ if there are more than two authors. However, if a paper has exactly three authors, MNRAS style is to list all three on the first citation and use ‘et al.’ thereafter. If you are using BibTeX (see section [sec:reflist]) then this is handled automatically. If not, the `\citet*{}` and `\citep*{}` commands can be used at the first citation to include all of the authors. The list of references ---------------------- It is possible to enter references manually using the usual LaTeX commands, but we strongly encourage authors to use BibTeX instead. BibTeX ensures that the reference list is updated automatically as references are added or removed from the paper, puts them in the correct format, saves on typing, and the same reference file can be used for many different papers – saving time hunting down reference details. An MNRAS BibTeX style file, `mnras.bst`, is distributed as part of this package. The rest of this section will assume you are using BibTeX. References are entered into a separate `.bib` file in standard BibTeX formatting. This can be done manually, or there are several software packages which make editing the `.bib` file much easier. We particularly recommend JabRef[8](#fn8), which works on all major operating systems. BibTeX entries can be obtained from the NASA Astrophysics Data System[9](#fn9) (ADS) by clicking on ‘Bibtex entry for this abstract’ on any entry. Simply copy this into your `.bib` file or into the ‘BibTeX source’ tab in JabRef. Each entry in the `.bib` file must specify a unique ‘key’ to identify the paper, the format of which is up to the author. Simply cite it in the usual way, as described in section [sec:cite], using the specified key. Compile the paper as usual, but add an extra step to run the `bibtex` command. Consult the documentation for your compiler or latex distribution. Correct formatting of the reference list will be handled by BibTeX in almost all cases, provided that the correct information was entered into the `.bib` file. Note that ADS entries are not always correct, particularly for older papers and conference proceedings, so may need to be edited. If in doubt, or if you are producing the reference list manually, see the MNRAS instructions to authors$^{\ref{foot:itas}}$ for the current guidelines on how to format the list of references. Appendices and online material ============================== To start an appendix, simply place the `\appendix` command before the next `\section{}`. This will automatically adjust the section headings, figures, tables, and equations to reflect the fact that they are part of an appendix. It is only necessary to enter the `\appendix` command once – everything after that command is in an appendix. Remember that appendices should be placed *after* the list of references. Unlike other astronomy class files, there are no special commands for online material. If your paper has any online material, it should be placed in a separate file. See our instructions to authors$^{\ref{foot:itas}}$ for guidance. Packages and custom commands ============================ Additional packages ------------------- Sometimes authors need to include additional LaTeX packages, which provide extra features. For example, the `bm` package provides extra bold maths symbols, whilst the `pdflscape` package adds support for landscape pages. Packages can be included by adding the `\usepackage{}` command to the preamble of the document (not the main body). Please *only include packages which are actually used in the paper*, and include a comment to explain what each one does. This will assist the typesetters. If you are using `mnras_template.tex`, it includes a specific section for this purpose, near the start of the file with the header ’authors - place your own packages here’. For example, to include `pdflscape`, use: ``` \usepackage{pdflscape} % Landscape pages ``` Consult the documentation for that package for instructions on how to use the additional features. Custom commands --------------- Authors should avoid duplicating or redefining commands which are already available in LaTeX or `mnras.cls`. However it may sometimes be necessary to introduce a custom command e.g. as a shortcut while writing the paper. Please *only include commands which are actually used in the paper*, and include a comment to explain what each one does. This will assist the typesetters. Use `\newcommand`, *not* `\def`, as this will avoid accidentally overwriting existing commands. Place custom commands in the preamble of the document (not the main body). If you are using `mnras_template.tex`, it includes a specific section for this purpose, near the start of the file with the header ’authors - place your own commands here’. As an example, a shortcut for the unit kms− 1can be defined like this: ``` \newcommand{\kms}{\,km\,s$^{-1}$} % kilometres per second ``` Velocities can then be written as e.g. `2.3\kms` which produces 2.3kms− 1. Similar shortcuts can be used for frequently quoted object designations. Acknowledgements ================ This guide replaces an earlier one originally prepared by Cambridge University Press (CUP) in 1994, and last updated in 2002 by Blackwell Publishing. Some code segments are reproduced from, and some examples are based upon, that guide. The authors were: A. Woollatt, M. Reed, R. Mulvey, K. Matthews, D. Starling, Y. Yu, A. Richardson (all CUP), and Penny Smith, N. Thompson and Gregor Hutton (all Blackwell), whose work is gratefully acknowledged. The accompanying BibTeX style file was written by John Sleath, Tim Jenness and Norman Gray, without whom BibTeX support would not have been possible. Some special symbols in tables [tab:anysymbols]–[tab:mathssymbols] were taken from the Springer Verlag *Astronomy & Astrophysics* LaTeX class, with their permission. KTS thanks Nelson Beebe (University of Utah) for helpful advice regarding CTAN. Data Availability ================= The inclusion of a Data Availability Statement is a requirement for articles published in MNRAS. Data Availability Statements provide a standardised format for readers to understand the availability of data underlying the research results described in the article. The statement may refer to original data generated in the course of the study or to third-party data analysed in the article. The statement should describe and provide means of access, where possible, by linking to the data or providing the required accession numbers for the relevant databases or DOIs. 99 Author A. N., 2013, Journal of Improbable Astronomy, 1, 1 Jones C. D., 2015, Journal of Interesting Stuff, 17, 198 Smith A. B., 2014, The Example Journal, 12, 345 (Paper I) Journal abbreviations ===================== Abbreviations for cited journals can be accessed using the commands listed in table [tab:journalabbr]. Although some of these may appear to be outdated or rarely cited, they have been selected to be compatible with the BibTeX output by the NASA Astrophysics Data System$^{\ref{foot:ads}}$, commands used by other astronomy journals, and with additional entries for journals with non-standard abbreviations in MNRAS. For journals which are not on this list, see our instructions to authors$^{\ref{foot:itas}}$ for guidance on how to abbreviate titles. [tab:journalabbr] @l@l@l@ Command & Output & Journal name `\aap` or `\astap` & & Astronomy and Astrophysics*a* `\aapr` & & The Astronomy and Astrophysics Review `\aaps` & & Astronomy and Astrophysics Supplement Series `\actaa` & & Acta Astronomica `\afz` & & Astrofizika `\aj` & & The Astronomical Journal `\ao` or `\applopt` & & Applied Optics `\aplett` & & Astrophysics Letters `\apj` & & The Astrophysical Journal `\apjl` or `\apjlett` & & The Astrophysical Journal Letters*a* `\apjs` or `\apjsupp` & & The Astrophysical Journal Supplement Series `\apss` & & Astrophysics and Space Science `\araa` & & Annual Review of Astronomy and Astrophysics `\arep` & & Astronomy Reports*b* `\aspc` & & Astronomical Society of the Pacific Conference Series `\azh` & & Astronomicheskii Zhurnal*c* `\baas` & & Bulletin of the American Astronomical Society `\bac` & & Bulletin of the Astronomical Institutes of Czechoslovakia `\bain` & & Bull. Astron. Inst. Netherlands `\caa` & & Chinese Astronomy and Astrophysics `\cjaa` & & Chinese Journal of Astronomy and Astrophysics `\fcp` & & Fundamentals of Cosmic Physics `\gca` & & Geochimica Cosmochimica Acta `\grl` & & Geophysics Research Letters `\iaucirc` & & International Astronomical Union Circulars `\icarus` & & Icarus `\japa` & & Journal of Astrophysics and Astronomy `\jcap` & & Journal of Cosmology and Astroparticle Physics `\jcp` & & Journal of Chemical Physics `\jgr` & & Journal of Geophysics Research `\jqsrt` & & Journal of Quantitiative Spectroscopy and Radiative Transfer `\jrasc` & & Journal of the Royal Astronomical Society of Canada `\memras` & & Memoirs of the Royal Astronomical Society `\memsai` & & Memoire della Societa Astronomica Italiana `\mnassa` & & Monthly Notes of the Astronomical Society of Southern Africa `\mnras` & & Monthly Notices of the Royal Astronomical Society*a* `\na` & & New Astronomy `\nar` & & New Astronomy Review `\nat` & & Nature `\nphysa` & & Nuclear Physics A `\pra` & & Physical Review A: Atomic, molecular, and optical physics `\prb` & & Physical Review B: Condensed matter and materials physics `\prc` & & Physical Review C: Nuclear physics `\prd` & & Physical Review D: Particles, fields, gravitation, and cosmology `\pre` & & Physical Review E: Statistical, nonlinear, and soft matter physics `\prl` & & Physical Review Letters `\pasa` & & Publications of the Astronomical Society of Australia `\pasp` & & Publications of the Astronomical Society of the Pacific `\pasj` & & Publications of the Astronomical Society of Japan `\physrep` & & Physics Reports `\physscr` & & Physica Scripta `\planss` & & Planetary and Space Science `\procspie` & & Proceedings of the Society of Photo-Optical Instrumentation Engineers `\rmxaa` & & Revista Mexicana de Astronomia y Astrofisica `\qjras` & & Quarterly Journal of the Royal Astronomical Society `\sci` & & Science `\skytel` & & Sky and Telescope `\solphys` & & Solar Physics `\sovast` & & Soviet Astronomy*b* `\ssr` & & Space Science Reviews `\zap` & & Zeitschrift fuer Astrophysik Advanced formatting examples ============================ Sometimes formatting doesn’t behave exactly as expected when used in titles or section headings, and must be modified to obtain the correct appearance. Generally the publishers can fix these problems during the typesetting process after a paper is accepted, but authors may wish to adjust these themselves to minimise the possibility of errors and/or for the benefit of the refereeing process. Below are some examples of output, followed by the LaTeX code which produces them. Most mathematics and text formatting works as expected, but some commands might not be the correct size, bold or italic. If so they can be finessed by hand, as in the bold mathematics here: ``` \title{\textit{Herschel} observations of galaxies at $\bm{\delta > 60\degr}$} ``` Most fonts do not provide bold and italic versions of small capitals, so the `\ion{}{}` command doesn’t produce the expected output in headings. The effect has to be ‘faked’ using font size commands, remembering that the running head is a different style: ``` \title [Abundances in H\,{\normalsize \textit{II}} regions] {Abundances in H\,{\Large \textbf{II}} regions} ``` Complex mathematics can cause problems with links, so might require adding a less formatted short version of the heading: ``` \section [Finding Mg II absorbers at z > 2] {Finding M\lowercase{g}\,{\sevensize II} absorbers at $\lowercase{\bm{z > 2}}$} ``` Using square brackets in headings can cause additional linking problems, which are solved by wrapping them in {}: ``` \subsection [{[C II] 158$\umu$m emission}] {[C\,{\sevensize II}] 158$\bmath{\umu}$m emission} ``` Use `\text{}` (not `\rm`) for non-variables in mathematics, which preserves the formatting of the surrounding text. For the same reasons, use `\textit{}` for italics (not `\it`). ``` \subsection{Measuring $\bm{T}_\text{eff}$ from \textit{Gaia} photometry} ``` Additional commands for editors only ==================================== The following commands are available for the use of editors and production staff only. They should not be used (or modified in the template) by authors. `\maketitle` inserts the title, authors and institution list in the correct formatting. `\nokeywords` tidies up the spacing if there are no keywords, but authors should always enter at least one. `\volume{}` sets the volume number (default is 000) `\pagerange{}` sets the page range. The standard template generates this automatically, starting from 1. `\bsp` adds the ‘This paper has been typeset’ comment at the end of the paper. The command name refers to Blackwell Science Publishing, who were the publishers at the time when MNRAS began accepting LaTeX submissions in 1993. `\mniiiauth{}` used by the BibTeX style to handle MNRAS style for citing papers with three authors. It should not be used manually. `\eprint{}` used by the BibTeX style for citing arXiv eprints. `\doi{}` used by the BibTeX style for citing Digital Object Identifiers. [lastpage] --- 1. Contact e-mail: <!-- h='&#114;&#x61;&#x73;&#46;&#x61;&#x63;&#46;&#x75;&#x6b;';a='&#64;';n='&#x6d;&#110;';e=n+a+h; document.write('<a h'+'ref'+'="ma'+'ilto'+':'+e+'">'+e+'<\/'+'a'+'>'); // --> mn at ras dot ac dot uk[↩](#fnref1) 2. Present address: Science magazine, AAAS Science International,  Hills Road, Cambridge CB2 1LQ, UK[↩](#fnref2) 3. <https://en.wikibooks.org/wiki/LaTeX>[↩](#fnref3) 4. [foot:itas]<http://www.oxfordjournals.org/our_journals/mnras/for_authors/>[↩](#fnref4) 5. <http://www.ctan.org/tex-archive/macros/latex/contrib/mnras>[↩](#fnref5) 6. <http://detexify.kirelabs.org>[↩](#fnref6) 7. <http://www.ctan.org/pkg/natbib>[↩](#fnref7) 8. <http://jabref.sourceforge.net/>[↩](#fnref8) 9. [foot:ads]<http://adsabs.harvard.edu>[↩](#fnref9) The MSPSR*π* catalogue: VLBA astrometry of 18 millisecond pulsars ================================================================= ### Accepted XXX. Received YYY; in original form ZZZ [firstpage] With unparalleled rotational stability, millisecond pulsars (MSPs) serve as ideal laboratories for numerous astrophysical studies, many of which require precise knowledge of the distance and/or velocity of the MSP. Here, we present the astrometric results for 18 MSPs of the ``MSPSR*π*" project focusing exclusively on astrometry of MSPs, which includes the re-analysis of 3 previously published sources. On top of a standardized data reduction protocol, more complex strategies (i.e., normal and inverse-referenced 1D interpolation) were employed where possible to further improve astrometric precision. We derived astrometric parameters using sterne, a new Bayesian astrometry inference package that allows the incorporation of prior information based on pulsar timing where applicable. We measured significant ( > 3 *σ*) parallax-based distances for 15 MSPs, including 0.81 ± 0.02kpc for PSR J1518 + 4904 — the most significant model-independent distance ever measured for a double neutron star system. For each MSP with a well-constrained distance, we estimated its transverse space velocity and radial acceleration. Among the estimated radial accelerations, the updated ones of PSR J1012 + 5307 and PSR J1738 + 0333 impose new constraints on dipole gravitational radiation and the time derivative of Newton’s gravitational constant. Additionally, significant angular broadening was detected for PSR J1643 − 1224, which offers an independent check of the postulated association between the HII region Sh 2-27 and the main scattering screen of PSR J1643 − 1224. Finally, the upper limit of the death line of *γ*-ray-emitting pulsars is refined with the new radial acceleration of the hitherto least energetic *γ*-ray pulsar PSR J1730 − 2304. radio continuum: stars – stars: kinematics and dynamics – gravitation – gamma-rays: stars – pulsars: individual: PSR J0030 + 0451, PSR J0610 − 2100, PSR J0621 + 1002, PSR J1024 − 0719, PSR J1537 + 1155, PSR J1853 + 1303, PSR J1910 + 1256, PSR J1918 − 0642, PSR J1939 + 2134  Introduction ============ Millisecond pulsars: a key for probing theories of gravity and detecting the gravitational-wave background ---------------------------------------------------------------------------------------------------------- Pulsars are an observational manifestation of neutron stars (NSs) that emit non-thermal electromagnetic radiation while spinning. Over 3000 radio pulsars have been discovered to date throughout the Galaxy and the nearest members of the Local Group. Due to the large moment of inertia of pulsars, the pulses we receive on Earth from a pulsar exhibit highly stable periodicity. By measuring a train of pulse time-of-arrivals (ToAs) of a pulsar and comparing it against the model prediction, a long list of model parameters can be inferred. This procedure to determine ToA-changing parameters is known as pulsar timing, hereafter referred to as timing. In the pulsar family, recycled pulsars (commonly refereed to as millisecond pulsars, or MSPs), have the shortest rotational periods. They are believed to have been spun-up through the accretion from their donor stars during a previous evolutionary phase as a low-mass X-ray binary (LMXB). As the duration of the recycling phase (and hence the degree to which the pulsar is spun-up) can vary depending on the nature of the binary, there is no clear spin period threshold that separates MSPs from canonical pulsars. In this paper, we define MSPs as pulsars with spin periods of $\lesssim 40$ms and magnetic fields $\lesssim10^{10}$ G. This range encompasses most partially recycled pulsars with NS companions, such as PSR J1537 + 1155 (also known as PSR B1534 + 12) and PSR J1518 + 4904. Compared to non-recycled pulsars, ToAs from MSPs can be measured to higher precision due to both the narrower pulse profiles and larger number of pulses. Additionally, MSPs exhibit more stable rotation ; both factors promise a lower level of random timing noise. Consequently, MSPs outperform non-recycled pulsars in the achievable precision for probing theories underlying ToA-changing astrophysical effects. In particular, MSPs provide the hitherto most precise tests for gravitational theories. Einstein’s theory of general relativity (GR) is the simplest form among a group of possible candidate post-Newtonian gravitational theories. The discovery of highly relativistic double neutron star (DNS) systems and their continued timing have resulted in many high-precision tests of GR and other gravity theories (, and especially ). The precise timing, optical spectroscopy and VLBI observations of pulsar-white-dwarf (WD) systems have, in addition, achieved tight constraints on several classes of alternative theories of gravity. Gravitational Waves (GWs) are changes in the curvature of spacetime (generated by accelerating masses), which propagate at the speed of light. Individual GW events in the Hz—kHz range have been detected directly with GW observatories (e.g. ; see the third Gravitational-Wave Transient Catalog[3](#fn3)), and indirectly using the orbital decay of pulsar binaries. Collectively, a gravitational wave background (GWB), formed with primordial GWs and GWs generated by later astrophysical events, is widely predicted, but has not yet been confirmed by any observational means. In the range of 10− 9 Hz − 0.1Hz, supermassive black hole binaries are postulated to be the primary sources of the GWB. In this nano-hertz regime, the most stringent constraints on the GWB are provided by pulsar timing. To enhance the sensitivity for the GWB hunt with pulsar timing, and to distinguish GWB-induced ToA signature from other sources of common timing “noise” (e.g., Solar-system planetary ephemeris error, clock error and interstellar medium, ), a pulsar timing array (PTA), composed of MSPs scattered across the sky (see for spatial distribution requirement), is necessary. After 2 decades of efforts, no GWB has yet been detected by a PTA, though common steep-spectrum timing noise (in which GWB signature should reside) has already been confirmed by several radio PTA consortia. At *γ*-rays, a competitive GWB amplitude upper limit was recently achieved using the Fermi Large Area Telescope with 12.5 years of data. Very long baseline astrometry of millisecond pulsars ---------------------------------------------------- In timing analysis, astrometric information for an MSP (reference position, proper motion, and annual geometric parallax) can form part of the global ensemble of parameters determined from ToAs. However, the astrometric signatures can be small compared to the ToA precision and/or covariant with other parameters in the model, especially for new MSPs that are timed for less than a couple of years. Continuing to add newly discovered MSPs into PTAs is considered the best pathway to rapidly improve the PTA sensitivity, and is particularly important for PTAs based around newly commissioned high-sensitivity radio telescopes. Therefore, applying priors to the astrometric parameters can be highly beneficial for the timing of individual MSPs (especially the new ones) and for enhancing PTA sensitivities. Typically, the best approach to independently determine precise astrometric parameters for MSPs is the use of phase-referencing very long baseline interferometry (VLBI) observations, which can achieve sub-mas positional precision (relative to a reference source position) for MSPs in a single observation. By measuring the sky position of a Galactic MSP a number of times and modeling the position evolution, VLBI astrometry can obtain astrometric parameters for the MSP. Compared to pulsar timing, VLBI astrometry normally takes much shorter time to reach a given astrometric precision. One of the limiting factors on searching for the GWB with PTAs is the uncertainties on the Solar-system planetary ephemerides (SSEs), which are utilized to convert geocentric ToAs to ones measured in the (Solar-system) barycentric frame (i.e., the reference frame with respect to the barycentre of the Solar system). Various space-mission-driven SSEs have been released mainly by two SSE providers — the NASA Jet Propulsion Laboratory and the IMCCE. In pulsar timing analysis, adopting different SSEs may lead to discrepant timing parameters. On the other hand, VLBI astrometry measures offsets with respect to a source whose position is measured in a quasi-inertial (reference) frame defined using remote quasars. Although VLBI astrometry also relies on SSEs to derive annual parallax, it is robust against SSE uncertainties. In other words, for VLBI astrometry, using different SSEs in parameter inference would not lead to a noticeable difference in the inferred parameters. Therefore, VLBI astrometry of MSPs can serve as an objective standard to be used to discriminate between various SSEs. Specifically, if an SSE is inaccurate, the barycentric frame based on the SSE would display rotation with respect to the quasar-based frame. This frame rotation can be potentially detectable by comparing VLBI positions of multiple MSPs against their timing positions. By eliminating inaccurate SSEs, VLBI astrometry of MSPs can suppress the SSE uncertainties, and hence enhance the PTA sensitivities. Besides the GWB-related motivations, interferometer-based astrometric parameters (especially distances to MSPs) have been adopted to sharpen the tests of gravitational theories for individual MSPs. Such tests are normally made by comparing the model-predicted and observed post-Keplerian (PK) parameters that quantify excessive gravitational effects beyond a Newtonian description of the orbital motion. Among the PK parameters is the orbital decay $\dot{P}\_\mathrm{b}$ (or the time derivative of orbital period). The intrinsic cause of $\dot{P}\_\mathrm{b}$ in double neutron star systems is dominated by the emission of gravitational waves, which can be predicted using the binary constituent masses and orbital parameters. To test this model prediction, however, requires any extrinsic orbital decay $\dot{P}\_\mathrm{b}^\mathrm{ext}$ due to relative acceleration between the pulsar and the observer to be removed from the observed $\dot{P}\_\mathrm{b}$. Such extrinsic terms depend crucially on the proper motion and the distance of the pulsar, however these (especially the distance) can be difficult to estimate from pulsar timing. Precise VLBI determination of proper motions and distances can yield precise estimates of these extrinsic terms and therefore play an important role in orbital-decay tests of gravitational theories. Likewise, Gaia astrometry on nearby pulsar-WD systems can potentially serve the same scientific goal, though the method is only applicable to a small number of pulsar-WD systems where the WDs are sufficiently bright for the Gaia space observatory (see Section [subsec:Gaiaresults]). Last but not least, pulsar astrometry is crucial for understanding the Galactic free-electron distribution, or the Galactic free-electron number density *n*e(*x⃗*) as a function of position. An *n*e(*x⃗*) model is normally established by using pulsars with well determined distances as benchmarks. As the pulsations from a pulsar allow precise measurement of its dispersion measure (DM), the average *n*e between the pulsar and the Earth can be estimated given the pulsar distance. Accordingly, a large group of such benchmark pulsars across the sky would enable the establishment of an *n*e(*x⃗*) model. In a relevant research field, extragalactic fast radio bursts (FRBs) have been used to probe intergalactic medium distribution on a cosmological scale, which, however, demands the removal of the DMs of both the Galaxy and the FRB host galaxy. The Galactic DM cannot be determined without a reliable *n*e(*x⃗*) model, which, again, calls for precise astrometry of pulsars across the Galaxy. The MSPSR*π* project -------------------- Using the Very Long Baseline Array (VLBA), the PSR*π* project tripled the sample of pulsars with precisely measured astrometric parameters, but included just three MSPs. The successor project, MSPSR*π*, is a similarly designed VLBA astrometric program targeting exclusively MSPs. Compared to canonical pulsars, MSPs are generally fainter. To identify MSPs feasible for VLBA astrometry, a pilot program was conducted, which found 31 suitable MSPs. Given observational time constraints, we selected 18 MSPs as the targets of the MSPSR*π* project, focusing primarily on sources observed by pulsar timing arrays. The 18 MSPs are listed in Table [tab:MSPs] along with their spin periods *P*s and orbital periods *P*b (if available) that have been obtained from the ATNF Pulsar Catalogue[4](#fn4). The astrometric results for 3 sources (PSR J1012 + 5307, PSR J1537 + 1155, PSR J1640 + 2224) involved in the project have already been published. In this paper, we present the astrometric results of the remaining 15 MSPs studied in the MSPSR*π* project. We also re-derived the results for the 3 published MSPs, in order to ensure consistent and systematic astrometric studies. [tab:MSPs] Along with the release of the catalogue results, this paper covers several scientific and technical perspectives. First, this paper explores novel data reduction strategies such as inverse-referenced 1D phase interpolation (see Section [subsec:sophisticateddatareduction]). Second, a new Bayesian astrometry inference package is presented (see Section [sec:parameterinference]). Third, with new parallax-based distances and proper motions, we discriminate between the two prevailing *n*e(*x⃗*) models (see Section [subsubsec:DMdistances]), and investigate the kinematics of MSPs in Section [subsec:vt]. Fourth, with new parallax-based distances of two MSPs, we re-visit the constraints on alternative theories of gravity (see Section [sec:orbitaldecaytests]). Finally, discussions on individual pulsars are given in Section [sec:individualpulsars], which includes a refined “death line” upper limit of *γ*-ray pulsars (see Section [subsec:J1730]). The study of SSE-dependent frame rotation, which depends on an accurate estimation of the reference points of our calibrator sources in the quasi-inertial VLBI frame, requires additional multi-frequency observations and will be presented in a follow-up paper. Throughout this paper, we abide by the following norms unless otherwise stated. **1)** The uncertainties are provided at 68% confidence level. **2)** Any mention of flux density refers to unresolved flux density *S*unres in our observing configuration (e.g., a 10-mJy source means *S*unres = 10mJy). **3)** All bootstrap and Bayesian results adopt the 50th, 16th and 84th percentile of the marginalized (and sorted) value chain as, respectively, the estimate and its 1-*σ* error lower and upper bound. **4)** Where an error of an estimate is required for a specific calculation but an asymmetric error is reported for the estimate, the mean of upper and lower errors is adopted for the calculation. **5)** VLBI positional uncertainties will be broken down into the uncertainty of the offset from a chosen calibrator reference point, and the uncertainty in the location of that chosen reference point. This paper focuses on the relative offsets which are relevant for the measurement of proper motion and parallax, and the uncertainty in the location of the reference source is presented separately. Observations and Correlation ============================ As is mentioned in Section [subsec:MSPVLBIastrometry], to achieve high-precision pulsar astrometry requires the implementation of a VLBI phase referencing technique. There are, however, a variety of such techniques, including the normal phase referencing, relayed phase referencing, inverse phase referencing and interpolation. These techniques are described and discussed in Chapter 2 of. Generally, a given phase referencing approach and hence observational setup maps directly to a corresponding data reduction procedure, though occasionally other data reduction opportunities could arise by chance (see Section [sec:datareduction]). The MSPSR*π* project systematically employs the relayed phase referencing technique, in which a secondary phase reference source (explained in Chapter 2 of ) very close to the target on the sky is observed to refine direction-dependent calibration effects. The observing and correlation tactics are identical to those of the PSR*π* project. All MSPs in the MSPSR*π* catalogue (see Table[tab:MSPs]) were observed at L band with the VLBA at 2Gbps data rate (256 MHz total bandwidth, dual polarisation) from mid-2015 to no later than early 2018. To minimise radio-frequency interference (RFI) at L band, we used eight 32MHz subbands with central frequencies of 1.41, 1.44, 1.47, 1.50, 1.60, 1.66, 1.70 and 1.73GHz, corresponding to an effective central frequency of 1.55GHz. The primary phase calibrators were selected from the Radio Fundamental Catalogue[5](#fn5). The secondary phase calibrators were identified from the FIRST (Faint Images of the Radio Sky at Twenty-cm) catalogue or the NVSS (NRAO VLA sky survey) catalogue (for sky regions not covered by the FIRST survey) using a short multi-field observation. Normally, more than one secondary phase calibrators were observed together with the target. Among them, a main one that is preferably the brightest and the closest to the target is selected to carry out self-calibration; the other secondary phase calibrators are hereafter referred to as redundant secondary phase calibrators. The primary and the main secondary phase calibrators for the astrometry of the 18 MSPs are summarized in Table [tab:MSPs], alongside the project codes. At correlation time, pulsar gating was applied to improve the S/N on the target pulsars. The median values of the gating gain, defined as (*S*/*N*)gated/(*S*/*N*)ungated, are provided in Table [tab:MSPs]. Data Reduction and fiducial systematic errors ============================================= We reduced all data with the psrvlbireduce pipeline[6](#fn6) written in parseltongue, a python-based interface for running functions provided by AIPS and DIFMAP. The procedure of data reduction is identical to that outlined in, except for four MSPs — PSR J1518 + 4904, PSR J0621 + 1002, PSR J1824 − 2452A and PSR J1939 + 2134. For PSR J1518 + 4904, the self-calibration solutions acquired with NVSS J151733 + 491626, a 36-mJy secondary calibrator 138 away from the pulsar, are extrapolated to both the pulsar and NVSS J151815 + 491105 — a 4.5-mJy source about a factor of two closer to PSR J1518 + 4904 than NVSS J151733 + 491626. The positions relative to NVSS J151815 + 491105 are used to derive the astrometric parameters of PSR J1518 + 4904. For the other exceptions, the data reduction procedures as well as fiducial systematics estimation are described in Sections [subsec:dualphscal] and [subsec:sophisticateddatareduction]. At the end of the data reduction, a series of positions as well as their random errors *σ**i*R (where *i*  =  1, 2, 3, ... refers to right ascension or declination at different epochs) are acquired for each pulsar. For each observation, on top of the random errors due to image noise, ionospheric fluctuations would introduce systematic errors that distort and translate the source, the magnitude of which generally increases with the angular separation between a target and its (secondary) phase calibrator. We estimate fiducial values for these systematic errors *σ**i*S of pulsar positions using the empirical relation (i.e., Equation 1 of ) derived from the whole PSR*π* sample. While this empirical relation has proven a reasonable approximation to the actual systematic errors for a large sample of sources, for an individual observational setup *σ**i*S may overstate or underestimate the true systematic error (see Section [sec:parameterinference]). We can account for our uncertainty in this empirical estimator by re-formulating the total positional uncertainty as $$\label{eq:EFAC} \begin{split} \sigma\_{i}\left(\eta\_\mathrm{EFAC}\right)=\sqrt{(\sigma\_i^\mathcal{R})^2+(\eta\_\mathrm{EFAC} \cdot \sigma\_i^\mathcal{S})^2} \,, \end{split}$$ where *η*EFAC is a positive correction factor on the fiducial systematic errors. In this work, we assume *η*EFAC stays the same for each pulsar throughout its astrometric campaign. The inference of *η*EFAC is described in Section [sec:parameterinference]. We reiterate that the target image frames have been determined by the positions assumed for our reference sources (or virtual calibrators, see Section [subsec:dualphscal]), and that any change in the assumed reference source position would transfer directly into a change in the recovered position for the target pulsar. Accordingly, the uncertainty in the reference source position must be accounted for in the pulsar’s reference position error budget, after fitting the pulsar’s astrometric parameters. All pulsar positions and their error budgets are provided in the online[7](#fn7) “pmpar.in.preliminary” and “pmpar.in” files. The only difference between “pmpar.in.preliminary” and “pmpar.in” (for each pulsar) files are the position uncertainties: “pmpar.in.preliminary” and “pmpar.in” offer, respectively, position uncertainties *σ**i*(0) = *σ**i*R and $\sigma\_i(1)=\sqrt{(\sigma\_i^\mathcal{R})^2+(\sigma\_i^\mathcal{S})^2}$. As an example, the pulsar positions for PSR J1738 + 0333 are presented in Table [tab:pulsarpositions], where the values on the left and right side of the “ | ” sign stand for, respectively, *σ**i*(0) and *σ**i*(1). Additionally, to facilitate reproducibility, the image models for all primary and secondary phase calibrators listed in Table [tab:MSPs] are released[8](#fn8) along with this paper. Following, the calibrator models were made with the calibrator data concatenated from all epochs in an iterative manner. [tab:pulsarpositions] 1D interpolation on PSR J0621 + 1002 and PSR J1824 − 2452A ---------------------------------------------------------- One can substantially reduce propagation-related systematic errors using 1D interpolation with two phase calibrators quasi-colinear with a target. After 1D interpolation is applied, the target should in effect be referenced to a “virtual calibrator” much closer (on the sky) than either of the two physical phase calibrators, assuming the phase screen can be approximated by a linear gradient with sky position. According to Table [tab:MSPs], 7 secondary phase calibrators (or the final reference sources) are more than 20’ away from their targets, which would generally lead to relatively large systematic errors. Fortunately, there are 3 MSPs — PSR J0621 + 1002, PSR J1824 − 2452A and PSR J1939 + 2134, for which the pulsar and its primary and secondary phase calibrators are near-colinear (see online[footnote:pulsarpositions] calibrator plans as well as Figure [fig:J1939calibrationplan]). Hence, by applying 1D interpolation, each of the 3 “1D-interpolation-capable” MSPs can be referenced to a virtual calibrator much closer than the physical secondary phase calibrator (see Table [tab:MSPs]). We implemented 1D interpolation on PSR J0621 + 1002 and PSR J1824 − 2452A in the same way as the astrometry of the radio magnetar XTE J1810 − 197 carried out at 5.7GHz. Nonetheless, due to our different observing frequency (i.e., 1.55GHz), we estimated *σ**i*S differently. The post-1D-interpolation systematic errors should consist of **1)** first-order residual systematic errors related to the target-to-virtual-calibrator offset Δpsr-VC and **2)** higher-order terms. Assuming negligible higher-order terms, we approached post-1D-interpolation *σ**i*S with Equation 1 of, using Δpsr-VC as the calibrator-to-target separation. The assumption of negligible higher-order terms will be tested later and discussed in Section [subsubsec:implicationsfor1Dinterpolation]. Inverse-referenced 1D interpolation on PSR J1939 + 2134 ------------------------------------------------------- For PSR J1939 + 2134, normal 1D interpolation, with respect to the primary phase calibrator ICRF J193510.4 + 203154 (J1935) and the brightest secondary reference source NVSS J194104 + 214913 (J194104), is still not the optimal calibration strategy. The  ≈ 10-mJy (at 1.55GHz) PSR J1939 + 2134 is the brightest MSP in the northern hemisphere and only second to PSR J0437 − 4715 in the whole sky. After pulsar gating, PSR J1939 + 2134 is actually brighter than J194104. PSR J1939 + 2134 is unresolved on VLBI scales, and does not show long-term radio feature variations (frequently seen in quasars), which makes it an ideal secondary phase calibrator. Both factors encouraged us to implement the inverse-referenced 1D interpolation (or simply inverse 1D interpolation) on PSR J1939 + 2134, where PSR J1939 + 2134 is the de-facto secondary phase calibrator and the two “secondary phase calibrators” serve as the targets. To avoid confusion, we refer to the two “secondary phase calibrators” for PSR J1939 + 2134 (see Table [tab:MSPs]) as secondary reference sources or simply reference sources. Though inverse phase referencing (without interpolation) has been an observing/calibration strategy broadly used in VLBI astrometry, inverse interpolation is new, with the 2D approach of at 8.3 GHz being a recent and independent development. We implemented inverse 1D interpolation at 1.55GHz on PSR J1939 + 2134 in three steps (in addition to the standard procedure) detailed as follows. ### Tying PSR J1939 + 2134 to the primary-calibrator reference frame Inverse 1D interpolation relies on the residual phase solutions Δ*ϕ**n*(*x⃗*, *t*) of self-calibration on PSR J1939 + 2134 (where *x⃗*, *t* and *n* refers to, respectively, sky position, time and the *n*-th station in a VLBI array), which, however, change with Δ*x⃗*psr — the displacement from the “true” pulsar position to its model position. When ∣Δ*x⃗*psr∣ is much smaller than the synthesized beam size *θ*syn, the changes in Δ*ϕ**n*(*x⃗*, *t*) would be equal across all epochs, hence not biasing the resultant parallax and proper motion. However, if $|\Delta\vec{x}\_\mathrm{psr}| \gtrsim \theta\_\mathrm{syn}$, then the phase wraps of Δ*ϕ**n*(*x⃗*, *t*) would likely become hard to uncover. The main contributor of considerable ∣Δ*x⃗*psr∣ is an inaccurate pulsar position. The proper motion of the pulsar would also increase ∣Δ*x⃗*psr∣ with time, if it is poorly constrained (or neglected). For PSR J1939 + 2134, the effect of proper motion across our observing duration is small ($\lesssim1$mas across the nominal observing span of 2.5 years; see the timing proper motion in Section [sec:inferencewithpriors]) compared to *θ*syn ∼ 10mas. In order to minimize ∣Δ*x⃗*psr∣, we shifted the pulsar model position, on an epoch-to-epoch basis, by Δ*x⃗*cor (which ideally should approximate  − Δ*x⃗*psr), to the position measured in the J1935 reference frame (see Section 4.1 of for explanation of “reference frame”). This J1935-frame position was derived with the method for determining pulsar absolute position (where J194104 was used temporarily as the secondary phase calibrator) except that there is no need to quantify the position uncertainty. We typically found ∣Δ*x⃗*psr∣ ∼ 50mas, which is well above *θ*syn ∼ 10mas. After the map centre shift, PSR J1939 + 2134 becomes tied to the J1935 frame. ### 1D interpolation on the tied PSR J1939 + 2134 The second step of inverse 1D interpolation is simply the normal 1D interpolation on PSR J1939 + 2134 that has been tied to the J1935 frame as described above (in Section [subsubsec:tyingJ1939toJ1935]). When there is only one secondary reference source, optimal 1D interpolation should see the virtual calibrator moved along the interpolation line (that passes through both J1935 and PSR J1939 + 2134) to the closest position to the secondary reference source. However, there are two reference sources for PSR J1939 + 2134 (see Table [tab:MSPs]), and the virtual calibrator point can be set at a point that will enable both of them to be used. After calibration, a separate position series can be produced for each reference source. While we used each reference-source position series to infer astrometric parameters separately, we can also directly infer astrometric parameters with the combined knowledge of the two position series (which can be realized with $\tt sterne$[9](#fn9)). If the errors in the two position series are (largely) uncorrelated, this can provide superior astrometric precision. Since position residuals should be spatially correlated, we would ideally set the virtual calibrator at a location such that the included angle between the two reference sources is $90\degr$. While achieving this ideal is not possible, we chose a virtual calibrator location that forms the largest possible included angle (657) with the two reference sources to minimise spatially correlated errors (see Figure [fig:J1939calibrationplan]). This virtual calibrator is 1.2836 times further away from J1935 than PSR J1939 + 2134. Accordingly, the Δ*ϕ**n*(*x⃗*, *t*) solutions (obtained from the self-calibration on the tied PSR J1939 + 2134) were multiplied by 1.2836, and applied to the two reference sources. [fig:J1939calibrationplan] ### De-shifting reference source positions After data reduction involving the two steps outlined in Sections [subsubsec:tyingJ1939toJ1935] and [subsubsec:1DinterontiedJ1939], one position series was acquired for each reference source. At this point, however, the two position series are not yet ready for astrometric inference, mainly because both proper motion and parallax signatures have been removed in the first step (see Section [subsubsec:tyingJ1939toJ1935]) when PSR J1939 + 2134 was shifted to its J1935-frame position. Therefore, the third step of inverse 1D interpolation is to cancel out the PSR J1939 + 2134 shift (made in the first step) by moving reference source positions by  − 1.2836 ⋅ Δ*x⃗*cor, where the multiplication can be understood by considering Figure 1 of. This de-shifting operation was carried out separately outside the data reduction pipeline[footnote:parseltongue]. After the operation, we estimated *σ**i**j*S of the reference sources (where *j*  =  1, 2 refers to an individual reference source) following the method described in Section [subsec:dualphscal]. The final position series of the reference sources are available online[footnote:pulsarpositions]. The astrometric parameter inference based on these position series is outlined in Section [sec:parameterinference]. Astrometric inference methods and quasi-VLBI-only astrometric results ===================================================================== After gathering the position series[footnote:pulsarpositions] with basic uncertainty estimation (see Section [sec:datareduction]), we proceed to infer the astrometric parameters. The inference is made by three different methods: **a)** direct fitting of the position series with pmpar[10](#fn10), **b)** bootstrapping (see ) and **c)** Bayesian analysis using sterne[footnote:sterne] (see ). The two former methods directly adopt $\sigma\_i(1)=\sqrt{(\sigma\_i^\mathcal{R})^2+(\sigma\_i^\mathcal{S})^2}$ as the position errors. In Bayesian analysis, however, we inferred *η*EFAC along with other model parameters using the likelihood terms $$\label{eq:probability} \begin{split} P\_1 \propto \left(\prod\_{i} \sigma\_i\right)^{-1} \exp{\left[-\frac{1}{2} \sum\_i \left(\frac{\Delta \epsilon\_i}{\sigma\_i}\right)^2\right]} \,, \end{split}$$ where *σ**i* = *σ**i*(*η*EFAC) obeys Equation [eq:EFAC]; Δ*ε**i* refers to the model offsets from the measured positions. As is discussed in Section [subsec:Bayesianasmajor], Bayesian inference outperforms the other two methods, and is hence consistently used to present final results in this work. In all cases, the uncertainty in the reference source position should be added in quadrature to the uncertainty in the pulsar’s reference position acquired with any method (of the three), in order to obtain a final estimate of the absolute positional uncertainty of the pulsar. To serve different scientific purposes, we present two sets of astrometric results in two sections (i.e., Sections [sec:parameterinference] and [sec:inferencewithpriors]), which differ in whether timing proper motions and parallaxes are used as prior information in the inference. ### Priors of canonical model parameters used in Bayesian analysis To facilitate reproduction of our Bayesian results, the priors (of Bayesian inference) we use for canonical model parameters and *η*EFAC are detailed as follows. Priors for the two orbital parameters can be found in Section [subsec:reflexmotioninference]. We universally adopt the prior uniform distribution U(0, 15) (i.e., uniformly distributed between 0 and 15) for *η*EFAC. This prior distribution can be refined for future work with an ensemble of results across many pulsars. With regard to the canonical astrometric parameters (7 parameters for PSR J1939 + 2134 and 5 for the other pulsars), we adopt U(*X*0(DF) − 20 *σ̃**X*(DF),   *X*0(DF) + 20 *σ̃**X*(DF)) for each *X*, where *X* refers to one of *α*ref, *δ*ref, *μ**α*, *μ**δ* and $\varpi$. Here, *X*0(DF) stands for the direct-fitting estimate of *X*; *σ̃**X*(DF) represents the direct-fitting error corrected by the reduced chi-square *χ**ν*2 (see Table [tab:astrometricparameters]) with $\tilde{\sigma}\_X^\mathrm{(DF)} \equiv \sigma\_{X}^\mathrm{(DF)} \cdot \sqrt{\chi^2\_\nu}$. The calculation of prior range of *X* is made with the function $\tt sterne.priors.generate\\_initsfile$[footnote:sterne]. We note that the adopted priors are relaxed enough to ensure robust outcomes: shrinking or enlarging the prior ranges by a factor of two would not change the inferred values. Meanwhile, the specified prior ranges are also reasonably small so that the global minimum of Equation [eq:probability] can be reached. Astrometric inference disregarding orbital motion ------------------------------------------------- ### Single-reference-source astrometric inferences All MSPs (in this work) excepting PSR J1939 + 2134 have only one reference source. For each of these single-reference-source MSPs, we fit for the five canonical astrometric parameters, i.e., reference position (*α*ref and *δ*ref), proper motion ($\mu\_\alpha \equiv \dot{\alpha} \cos{\delta}$ and *μ**δ*) and parallax ($\varpi$). In the Bayesian analysis alone, *η*EFAC is also inferred alongside the astrometric parameters. At this stage, we neglect any orbital reflex motion for binary pulsars – the effects of orbital reflex motion are addressed in Section [subsec:reflexmotioninference]. The proper motions and parallaxes derived with single-reference-source astrometry and disregarding orbital motion are summarized in Table [tab:astrometricparameters]. The reference positions are presented in Section [subsec:astrometricresultsnonPMpriors]. [tab:astrometricparameters] ### Multi-source astrometry inferences When multiple sources share proper motion and/or parallax (while each source having its own reference position), a joint multi-source astrometry inference can increase the degrees of freedom of inference (i.e., the number of measurements reduced by the number of parameters to infer), and tighten constraints on the astrometric parameters. Multi-source astrometry inference has been widely used in maser astrometry (where maser spots with different proper motions scatter around a region of high-mass star formation, ), but has not yet been used for any pulsar, despite the availability of several bright pulsars with multiple in-beam calibrators (e.g., PSR J0332 + 5434, PSR J1136 + 1551) in the PSR*π* project. PSR J1939 + 2134 is the only source (in this work) that has multiple (i.e., two) reference sources, which provides a rare opportunity to test multi-reference-source astrometry. We assumed that the position series of J194104 is uncorrelated with that of NVSS J194106 + 215304 (hereafter J194106), and utilized sterne[footnote:sterne] to infer the common parallax and proper motion, alongside two reference positions (one for each reference source). The acquired proper motion and parallax are listed in Table [tab:astrometricparameters]. As inverse phase referencing is applied for PSR J1939 + 2134, the parallax and proper motion of PSR J1939 + 2134 are the inverse of the direct astrometric measurements. For comparison, the proper motion and parallax inferred solely with one reference source are also reported in Table [tab:astrometricparameters]. Due to the relative faintness of J194106 (see Table [tab:MSPs]), the inclusion of J194106 only marginally improves the astrometric results (e.g., $\varpi$) over those inferred with J194104 alone. The constraints on the parallax (as well as the proper motion) are visualized in Figure [fig:J1939parallax]. The best-inferred model (derived from the J194104 and J194106 positions) is illustrated with a bright magenta curve, amidst two sets of Bayesian simulations — each set for a reference source. Each simulated curve is a time series of simulated positions, with the best-inferred reference position (*α*ref, *j* and *δ*ref, *j*, where *j* refers to either J194104 or J194106) and proper-motion-related displacements (i.e., *μ**α*Δ*t* and *μ**δ*Δ*t*, where Δ*t* is the time delay from the reference epoch) subtracted. As the simulated curve depends on the underlying model parameters, the degree of scatter of simulated curves would increase with larger uncertainties of model parameters. Though sharing simulated parallaxes and proper motions with J194104, the simulated curves for J194106 exhibits broader scatter (than the J194104 ones) owing to more uncertain reference position (see Section [subsec:astrometricresultsnonPMpriors] for *α*ref,J194106 and *δ*ref,J194106). The large scatter implies that the J194106 position measurements impose relatively limited constraints on the common model parameter (i.e., parallax and proper motion), which is consistent with the findings from Table [tab:astrometricparameters]. [fig:J1939parallax] ### Implications for 1D/2D interpolation On the three 1D-interpolation-capable MSPs, we compared astrometric inference with both the 1D-interpolated and non-1D-interpolated position series (one at a time). For PSR J1939 + 2134, the *η*EFAC of the three 1D-interpolated realizations are consistent with each other, but larger than the non-1D-interpolated counterpart. This post-1D-interpolation inflation of *η*EFAC also occurs to the other two 1D-interpolation-capable pulsars (see Table [tab:astrometricparameters]), which suggests the post-1D-interpolation fiducial systematic errors *σ**i*S might be systematically under-estimated. One obvious explanation for this under-estimation is that the higher-order terms of systematic errors are non-negligible (as opposed to the assumption we started with in Section [subsec:dualphscal]): they might be actually comparable to the first-order residual systematic errors (that are related to Δpsr-VC) at the  ∼ 1.55GHz observing frequencies. On the other hand, the astrometric results based on the non-1D-interpolated J194104 positions inverse-referenced to PSR J1939 + 2134 are less precise than the 1D-interpolated counterpart by  ≈ 40%, as is also the case for PSR J0621 + 1002 (see Table [tab:astrometricparameters]). Moreover, the post-1D-interpolation parallax of PSR J1824 − 2452A becomes relatively more accurate than the negative parallax obtained without applying 1D interpolation. All of these demonstrate the utility of 1D/2D interpolation, even in the scenario of in-beam astrometry that is already precise. In the remainder of this paper, we only focus on the 1D-interpolated astrometric results for the three 1D-interpolation-capable MSPs. Bayesian inference as the major method for MSPSR*π* --------------------------------------------------- We now compare the three sets of astrometric parameters (in Table [tab:astrometricparameters]) obtained with different inference methods, and seek to proceed with only one set in order to simplify the structure of this paper. Among the three inference methods we use in this work, direct least-square fitting is the most time-efficient, but is also the least robust against improperly estimated positional uncertainties. Conversely, the other two methods (i.e., bootstrap and Bayesian methods) do not rely solely on the input positional uncertainties, and can still estimate the model parameters and their uncertainties *σ**X*(*Y*) (*X*  =  *μ**α*, *μ**δ* or $\varpi$; *Y*  = “Bo” or “Ba”) more robustly in the presence of incorrectly estimated positional errors. Generally speaking, *σ**X*(*Y*) inferred from a pulsar position series are expected to change with the corresponding *χ**ν*2-corrected direct-fitting error $\tilde{\sigma}\_X^\mathrm{(DF)} \equiv \sigma\_{X}^\mathrm{(DF)} \cdot \sqrt{\chi^2\_\nu}$. In order to investigate the relation between *σ**X*(*Y*) and *σ̃**X*(DF), we divided *σ**X*(*Y*) by *σ̃**X*(DF) for each pulsar entry in the top block of Table [tab:astrometricparameters]. The results are displayed in Figure [fig:BoerrsvsBaerrs]. For the convenience of illustration, we calculated the dimensionless *σ̃**X*(DF) defined as *σ̃**X*(DF)/*s**X*(*D**F*) (where *s**X*(*D**F*) represents the standard deviation of *σ̃**X*(DF) over the group *X*), which allows all the three sets (i.e., *μ**α*, *μ**δ* and $\varpi$) of dimensionless *σ̃**X*(DF) to be horizontally more evenly plotted in Figure [fig:BoerrsvsBaerrs]. Across the entire MSPSR*π* sample, we see that *σ**X*(*Y*) scales with *σ̃**X*(DF) in a near-linear fashion. The mean scaling factors across all of the three parameter groups (i.e., *μ**α*, *μ**δ* and $\varpi$) are $\left<\sigma^\mathrm{(Bo)}\_X / \tilde{\sigma}\_X^\mathrm{(DF)}\right>=1.67\pm 0.85$ and $\left<\sigma^\mathrm{(Ba)}\_X / \tilde{\sigma}\_X^\mathrm{(DF)}\right>=1.49\pm0.24$ (see Figure [fig:BoerrsvsBaerrs]). The two mean scaling factors show that parameter uncertainties inferred using either a bootstrap or Bayesian approach will be slightly higher (and on average, consistent between the two approaches) than would be obtained utilising direct-fitting (illustrated with the cyan dashed line in Figure [fig:BoerrsvsBaerrs]). The more optimistic uncertainty predictions of *σ̃**X*(DF) can be understood as resulting from two causes: first, it neglects both the finite width and the skewness of the *χ*2 distribution, and second, to achieve the expected *χ*2 it scales the *total* uncertainty contribution at each epoch, rather than the systematic uncertainty contribution alone. When (as is typical for pulsar observations) the S/N and hence statistical positional precision can vary substantially between observing epochs, this simplified approach preserves the relative weighting between epochs, whereas increasing the estimated systematic uncertainty contribution acts to equalise the weighting between epochs (by reducing the position precision more for epochs where the pulsar was bright and the statistical precision high, than for epochs where the pulsar was faint and the statistical precision is already low). While the consistency between $\left<\sigma^\mathrm{(Bo)}\_X / \tilde{\sigma}\_X^\mathrm{(DF)}\right>$ and $\left<\sigma^\mathrm{(Ba)}\_X / \tilde{\sigma}\_X^\mathrm{(DF)}\right>$ suggests that both approaches can overcome this shortcoming in the direct fitting method, *σ**X*(Bo)/*σ̃**X*(DF) shows a much larger scatter (3.5 times) compared to *σ**X*(Ba)/*σ̃**X*(DF) (see Figure [fig:BoerrsvsBaerrs]). To determine which approach best represents the true (and unknown) parameter uncertainties, it is instructive to consider the outliers in the bootstrap distribution results. First, consider cases where the bootstrap results in a lower uncertainty than *σ̃**X*(DF). For the reasons noted above, we expect *σ̃**X*(DF) to yield the most optimistic final parameter uncertainty estimates, and yet the bootstrap returns a lower uncertainty than *σ̃**X*(DF) in a number of cases. Second, the cases with the highest values of *σ**X*(Bo)/*σ̃**X*(DF) reach $\gtrsim$3 on a number of occasions, which imply an extremely large (or very non-Gaussian) systematic uncertainty contribution, which would lead (in those cases) to a surprisingly low reduced *χ*2 for the best-fitting model. Given the frequency with which these outliers arise, we regard it likely that bootstrap approach mis-estimates parameter uncertainties at least occasionally, likely due to the small number of observations available. Therefore, we consider the Bayesian method described in this paper as the preferred inference method for the MSPSR*π* sample, and consistently use the Bayesian results in the following discussions. We note that as continued VLBI observing campaigns add more results, the systematic uncertainty estimation scheme applied to Bayesian inference can be further refined in the future. [fig:BoerrsvsBaerrs] Astrometric inference accounting for orbital motion --------------------------------------------------- For some binary pulsars, VLBI astrometry can also refine parameters related to the binary orbit, on top of the canonical astrometric parameters. The orbital inclination *i* and the orbital ascending node longitude Ωasc have been previously constrained for a few nearby pulsars, such as PSR J1022 + 1001, PSR J2145 − 0750 and PSR J2222 − 0137. To assess the feasibility of detecting orbital reflex motion with VLBI, we computed $$\label{eq:reflex\_motion\_measurability} \eta\_\mathrm{orb} \equiv \frac{2a\_1}{1\,\mathrm{AU}} \cdot \frac{\varpi}{\sigma\_{\varpi}} = 2a\_1 \cdot \left( \frac{1\,\mathrm{AU}}{\varpi}\right)^{-1} \cdot \frac{1}{\sigma\_{\varpi}} = \frac{2a\_1}{D} \cdot \frac{1}{\sigma\_\varpi} \,,$$ where *D* and *a*1 ≡ *a*sin*i* stands for, respectively, the distance (to the pulsar) and the orbital semi-major axis projected onto the sightline. On the other hand, *θ̃*orb ≡ 2*a*/*D* reflects the apparent angular size of orbit. Provided the parallax uncertainty $\sigma\_{\varpi}$, $\tilde{\theta}\_\mathrm{orb} / \sigma\_{\varpi}$ quantifies the detectability of orbital parameters using VLBI astrometry. Hence, $$\label{eq:theta\_orb} \frac{\tilde{\theta}\_\mathrm{orb}}{\sigma\_{\varpi}} \equiv \frac{2a}{D}\cdot \frac{1}{\sigma\_{\varpi}} \geq \eta\_\mathrm{orb} \,.$$ Since *i* is usually unknown, the *η*orb defined in Equation [eq:reflexmotionmeasurability] serves as a lower limit for $\tilde{\theta}\_\mathrm{orb} / \sigma\_{\varpi}$, and is used in this work to find out pulsar systems with *i* and Ωasc potentially measurable with VLBI observations. In general, the orbital reflex motion should be negligible when *η*orb ≪ 1, easily measurable when *η*orb ≫ 1, and difficult to constrain (but non-negligible) when *η*orb ∼ 1. By way of comparison, were able to firmly constrain Ωasc and *i* for PSR J2222 − 0137 (*η*orb = 10.2), while could place weak constraints for PSR J1022 + 1001 and PSR J2145 − 0750 (*η*orb = 3.2 and 1.6, respectively) Accordingly, in this work, we fit for orbital reflex motion if all the following conditions are met: 1. *a*1 is well determined with pulsar timing; 2. *η*orb > 1; 3. the orbital period *P*b < 2yr, where 2yr is the nominal time span of an MSPSR*π* astrometric campaign. For the calculation of *η*orb, we simply use the direct-fitting parallax $\varpi^{\mathrm{(DF)}}$ for $\varpi$, and its *χ**ν*2-corrected uncertainty $\sigma\_{\varpi}^\mathrm{(DF)} \cdot \sqrt{\chi^2\_\nu}$ for $\sigma\_{\varpi}$ (see Table [tab:astrometricparameters]). We note that this choice of parallax and its uncertainty would generally lead to slightly larger *η*orb compared to using $\varpi^{\mathrm{(Ba)}}$ and $\sigma\_{\varpi}^\mathrm{(Ba)}$, according to Figure [fig:BoerrsvsBaerrs] and the discussion in Section [subsec:Bayesianasmajor]. Nevertheless, the choice **1)** enables the comparison with *η*orb of the historically published pulsars (that do not have $\varpi^{\mathrm{(Ba)}}$ and $\sigma\_{\varpi}^\mathrm{(Ba)}$), **2)** simplifies the procedure of analysis, **3)** facilitates the reproduction of *η*orb by other researchers, and **4)** is more conservative in the sense that more candidates with *η*orb > 1 would be found. The calculated *η*orb as well as *P*b are summarized in Table [tab:astrometricparameters]. Among the 18 MSPSR*π* pulsars, PSR J1518 + 4904, PSR J1640 + 2224, PSR J1643 − 1224 and PSR J1853 + 1303 meet our criteria (see Table [tab:astrometricparameters]), where PSR J1518 + 4904 is a DNS system and the others are pulsar-WD binaries. Hereafter, the 4 pulsars are referred to as the “8P” pulsars for the sake of brevity, as we would perform 8-parameter (i.e., the 5 canonical astrometric parameters and *η*EFAC plus *i* and Ωasc) inference on them. For the 8-parameter inference, prior probability distributions of the canonical parameters and *η*EFAC are described in Section [subsubsec:parameterpriors]. Both *i* and Ωasc are defined in the TEMPO2 convention. The prior probability distribution of Ωasc follows U(0, 360). Sine distribution S(0, 180) is used for *i* of the four 8P pulsars (i.e., the probability density *p*(*i*) ∝ sin*i*, $i \in \left[0, 180\degr\right]$). Where available, tighter constraints are applied to *i* in accordance with Table [tab:7parameterinference] (also see the descriptions in Section [sec:individualpulsars]). Moreover, extra prior constraints can be applied to *i* and Ωasc based on $\dot{a}\_1$, the time derivative of *a*1. As *a*1 ≡ *a*sin*i*, $$\label{eq:a1dot} \begin{split} \frac{\dot{a}\_1}{a\_1} = \frac{\dot{a}}{a} + \frac{\partial i}{\partial t} \cot{i} \approx \frac{\partial i}{\partial t} \cot{i}\,. \end{split}$$ Here, the $\dot{a}/a$ term reflects the intrinsic variation of the semi-major axis *a* due to GR effects, which is however  ∼ 8 and  ∼ 5 orders smaller than $\dot{a}\_1/a\_1$ for the 8P WD-pulsar systems and the DNS system PSR J1518 + 4904, respectively (see for an analogy). Accordingly, the apparent $\dot{a}\_1/a\_1$ is predominantly caused by apparent *i* change as a result of the sightline shift. When proper motion contributes predominantly to the sky position shift (as is the case for the 8P pulsars), $$\label{eq:i\_dot} \begin{split} \frac{\partial i}{\partial t} = \mu \sin{\left(\theta\_\mu-\Omega\_\mathrm{asc}\right)}\,, \end{split}$$ where *θ**μ* refers to the position angle (east of north) of the proper motion *μ*. We incorporated the $\dot{a}\_1/a\_1$ measurements (with Equations [eq:a1dot] and [eq:idot]) on top of other prior constraints, and inferred *i*, Ωasc, *η*EFAC and the canonical five astrometric parameters for the 8P pulsars with sterne[footnote:sterne], following similar approaches taken by. While we ultimately did not significantly constrain *i* or Ωasc for any pulsar, including their non-negligible reflex motion in the inference is still necessary for correctly inferring the uncertainties of the non-orbital model parameters. The non-orbital inferred parameters are provided in Section [subsec:astrometricresultsnonPMpriors] below, along with all the non-8P pulsars. As we found minimal differences between the constraints obtained on orbital parameters with or without the adoption of priors based on pulsar timing, we defer the presentation of the posterior constraints on orbital inclinations and ascending node longitudes (of the 8P pulsars) to Section [sec:inferencewithpriors] in order to avoid repetition. [tab:7parameterinference] The quasi-VLBI-only astrometric results --------------------------------------- To wrap up this section, we summarize in Table [tab:modelsnopmprior] the full (including *α*ref and *δ*ref) final astrometric results obtained with no exterior prior proper motion or parallax constraints, which we simply refer to as quasi-VLBI-only astrometric results (we add “quasi” because timing constraints on two orbital parameters, i.e., *i* and $\dot{a}\_1$, have already been used for the 8P pulsars). These quasi-VLBI-only results are mainly meant for independent checks of timing results (which would enable the frame connection mentioned in Section [subsec:MSPVLBIastrometry]), or as priors for future timing analyses. For the most precise possible pulsar parallaxes and hence distances, we recommend the use of the “VLBI  +  timing” results presented in Section [sec:inferencewithpriors]. The reference positions *α*ref and *δ*ref we provide in Table [tab:modelsnopmprior] are precisely measured, but only with respect to the assumed location of the in-beam calibrator source for each pulsar. In all cases, the uncertainties on the in-beam source locations (also shown in Table [tab:modelsnopmprior]) dominate the total uncertainty in the pulsar’s reference position. A future work, incorporating additional multi-frequency observations of the in-beam calibrations, will enable significantly more precise pulsar reference positions to be obtained, as is discussed in Section [subsec:mspsrpi]. [tab:modelsnopmprior] VLBI+timing astrometric results =============================== In Bayesian inference, the output of a model parameter *X**j* (where *j* refers to various model parameters) hinges on its prior probability distribution: generally speaking, tighter prior constraints (on *X**j*) that are consistent with data (in the sense of Bayesian analysis) would sharpen the output *X**j*. In cases where a strong correlation between *X**j* and another model parameter *X**k* is present, tighter prior *X**j* constraints that are consistent with the data would potentially sharpen both the output *X**j* and the output *X**k*. As noted in Section [subsec:MSPVLBIastrometry], VLBI astrometry serves as the prime method to measure parallaxes of Galactic pulsars. A VLBI astrometric campaign (on a Galactic pulsar) normally spans  ∼ 2 years, as a substantial parallax can likely be achieved in this timespan. On the other hand, most MSPSR*π* pulsars have been timed routinely for $\gtrsim10$ years, which allows their proper motions to be precisely determined, as the precision on proper motion grows with *t*3/2 (see, e.g., Section 4.4 of ) for a regularly observed pulsar. In Table [tab:VLBItimingresults], we collect one timing proper motion (denoted as *μ**α*(Ti) and *μ**δ*(Ti)) and one timing parallax ($\varpi^\mathrm{(Ti)}$) for each MSPSR*π* pulsar. Among the published timing results, we select the timing proper motions measured over the longest timespan, and the $\varpi^\mathrm{(Ti)}$ having the smallest uncertainties. According to Tables [tab:modelsnopmprior] and [tab:VLBItimingresults], most timing proper motions are more precise than the quasi-VLBI-only counterparts. On the other hand, timing parallaxes are mostly less precise than the quasi-VLBI-only counterparts. Nevertheless, adopting appropriate timing parallaxes as priors can still effectively lower parallax uncertainties. The precisely measured *μ**α*(Ti) and *μ**δ*(Ti) provide the opportunity to significantly refine the quasi-VLBI-only proper motions. Furthermore, as shown with the Pearson correlation coefficients $\rho\_{\mu\_\alpha,\varpi}$ and $\rho\_{\mu\_\delta,\varpi}$ that we summarized in Table [tab:modelsnopmprior], large correlation between parallax and proper motion is not rare for VLBI astrometry. Therefore, using the *μ**α*(Ti) and *μ**δ*(Ti) measurements as the prior proper motion constraints in Bayesian inference can potentially refine both proper motion and parallax determination. The astrometric results inferred with timing priors, hereafter referred to as VLBI+timing results, are reported in Table [tab:VLBItimingresults]. To differentiate from the notation of quasi-VLBI-only astrometric parameter *Y*, we denote a VLBI+timing model parameter in the form of *Y*ʹ. Comparing Tables [tab:modelsnopmprior] and [tab:VLBItimingresults], we find almost all VLBI+timing proper motions and parallaxes more precise than the quasi-VLBI-only counterparts; the most significant parallax precision enhancement occurs to PSR J1918 − 0642 (by 42%), followed by PSR J1939 + 2134 (by 36%) and PSR J1537 + 1155 (by 33%). Hence, we use the VLBI+timing results in the remainder of this paper. In 7 cases (i.e., PSR J0610 − 2100, PSR J1643 − 1224, PSR J1730 − 2304, PSR J1738 + 0333, PSR J1853 + 1303, PSR J1824 − 2452A, PSR J1910 + 1256), one of *μ**α*(Ti), *μ**δ*(Ti) or $\varpi^\mathrm{(Ti)}$ is more than 2*σ* discrepant from the quasi-VLBI-only counterpart. Using such timing priors may widen the uncertainties of resultant model parameters, as *η*EFAC would be lifted to counter-balance the increased *χ**ν*2. Without any indication that the discrepant timing values are less reliable, we use them as priors regardless. However, we caution the use of these 7 sets of VLBI+timing results, and would recommend the quasi-VLBI-only results to be considered if our adopted timing priors are proven inaccurate in future. We also now consider any possible effects that could, despite our best efforts to characterise all sources of position noise, bias the fitted VLBI positions. For any given VLBI calibrator source, evolution in the source structure can lead to a detectable position offset that is then transferred to the target pulsar. Due to the long timescales of AGN structure evolution, over the  ∼ 2-year timescale of the MSPSR*π* observations, this error may be quasi-linear in time and be absorbed into the pulsar proper motion. Redundant secondary calibrators can be used to probe the astrometric effect of structure evolution. However, with small numbers of redundant calibrator sources, such probes are hardly conclusive, as the structure evolution of the redundant calibrators would also be involved. Among the 7 pulsars showing  > 2 *σ* discrepancy between quasi-VLBI-only and timing results (see Table [tab:VLBItimingresults]), PSR J0030 + 0451, PSR J1643 − 1224, PSR J1730 − 2304, PSR J1738 + 0333 and PSR J1824 − 2452A either display no relative motion between the redundant secondary calibrators and the main secondary calibrators or do not have any redundant calibrator (i.e. PSR J1643 − 1224), although the sub-optimal main secondary calibrators of PSR J1643 − 1224 and PSR J1824 − 2452A (see Sections [subsec:J1643] and [subsec:J1824]) may likely affect the astrometric performance. For PSR J1853 + 1303, the main secondary calibrator has a clear jet aligned roughly with the right ascension (RA) direction, and thus source structure evolution is potentially significant. The two redundant calibrators for PSR J1853 + 1303 do display a relative proper motion of up to 0.2 mas/yr with respect to the main secondary calibrator, so while the mean relative motion seen between the two redundant secondary calibrators is small, calibrator structure evolution remains a possible explanation for the VLBI-timing discrepancy. Finally, the main secondary calibrator of PSR J1910 + 1256 also exhibits a jet structure at a position angle of  ∼ 45. When using the only redundant calibrator of PSR J1910 + 1256 as the reference source, we obtained the VLBI-only result *μ**α* = 0.25 ± 0.06$\rm mas~yr^{-1}$, *μ**δ* =  − 7.3 ± 0.1$\rm mas~yr^{-1}$ and $\varpi=0.61\pm0.05$mas with Bayesian inference, where *μ**α* becomes consistent with *μ**α*(Ti) but *μ**δ* and $\varpi$ are further away from the timing counterparts. The *μ**α* consistency between VLBI and timing indicates that structure evolution in our chosen calibrator is likely contributing to the VLBI-timing discrepancy. However, as the redundant calibrator is both fainter and further away from PSR J1910 + 1256 (compared to the main secondary calibrator), we do not use this source as the final reference source. [tab:VLBItimingresults] The posterior orbital inclinations and ascending node longitudes ---------------------------------------------------------------- For the four 8P pulsars, orbital inclinations *i*ʹ and ascending node longitudes Ωʹasc are also inferred alongside the five canonical parameters and *η*ʹEFAC (see Section [subsec:reflexmotioninference]). The full 8D corner plots out of the 8-parameter inferences are available online[footnote:pulsarpositions]. Prior constraints on *i*ʹ and Ωʹasc have been provided in Section [subsubsec:parameterpriors]. Owing to bi-modal features of all 1D histograms of *i*ʹ, no likelihood component is substantially favored over the other. Hence, no tight posterior constraint on *i*ʹ is achieved for any 8P pulsar. Likewise, all 1D histograms of Ωʹasc show multi-modal features, which precludes stringent constraints on Ωʹasc. Comparison with Gaia results ---------------------------- From the Gaia Data Release 2, Gaia counterparts for pulsars with optically bright companions have been identified and studied by. In the MSPSR*π* sample, PSR J1012 + 5307 and PSR J1024 − 0719 have secure Gaia counterparts, while PSR J1910 + 1256 has a proposed Gaia counterpart candidate. In Table [tab:Gaiaresults], we updated the Gaia results for these three Gaia sources to the Gaia Data Release 3 (DR3, ). For PSR J1024 − 0719, the Gaia proper motion {*μ**α*(G), *μ**δ*(G)} and parallax $\varpi\_1^\mathrm{(G)}$ are highly consistent with the VLBI+timing ones, which further strengthens the proposal that PSR J1024 − 0719 is in an ultra-wide orbit with a companion star (, also see Sections [subsec:vt] and [subsec:AGal]). The Gaia proper motion and parallax of PSR J1012 + 5307 is largely consistent with the VLBI+timing counterparts. The  > 1 *σ* discrepancy between *μ**δ*(G) and $\varpi\_1^\mathrm{(G)}$ and the respective VLBI+timing counterparts can be explained by non-optimal goodness of (Gaia astrometric) fitting (GoF) (see Table [tab:Gaiaresults]). On the other hand, the Gaia counterpart candidate for PSR J1910 + 1256 (proposed by ) possesses a *μ**α*(G) 4*σ* discrepant from the VLBI+timing one. Though this discrepancy is discounted by the relatively bad GoF by roughly a factor of 1.9 (see Table [tab:Gaiaresults]), the connection between the Gaia source and PSR J1910 + 1256 remains inconclusive. We note that the parallax zero-points $\varpi\_0^\mathrm{(G)}$ of the three Gaia sources are negligible and hence not considered, as $\varpi\_0^\mathrm{(G)}$ is small ($|\varpi\_0^\mathrm{(G)}|\lesssim0.02$mas, ) compared to the uncertainty of $\varpi\_1^\mathrm{(G)}$ (see Table [tab:Gaiaresults]). [tab:Gaiaresults] Distances and Space velocities ============================== In this section, we derive pulsar distances *D* from parallaxes $\varpi'$ (see Section [sec:inferencewithpriors]), and compare them to the dispersion-measure-based distances. Incorporating the proper motions {*μ*ʹ*α*, *μ*ʹ*δ*} (see Section [sec:inferencewithpriors]), we infer the transverse space velocity *v*⊥ (i.e., the velocity with respect to the stellar neighbourhood) for each pulsar, in an effort to enrich the sample of  ∼ 40 MSPs with precise *v*⊥ and refine the *v*⊥ distributions of MSP subgroups such as binary MSPs and solitary MSPs. Parallax-based distances ------------------------ Inferring a source distance from a measured parallax requires assumptions about the source properties, for which a simple inversion implicitly makes unphysical assumptions. Various works have contributed to developing and consolidating the mathematical formalism of parallax-based distance inference, which we briefly recapitulate as follows, in order to facilitate comprehension and ready the mathematical formalism for further discussion. A parallax-based distance *D* can be approached from the conditional probability density function (PDF) $$\label{eq:px\_based\_D} \begin{split} p(D|\varpi', l, b) \propto p(\varpi'|D) p(D, l, b), \end{split}$$ where *l* and *b* stands for Galactic longitude and latitude, respectively; $\varpi'=\varpi'\_0 \pm \sigma\_{\varpi'}$. The first term on the right takes the form of $$\label{eq:gaussian\_distribution} \begin{split} p(\varpi'|D) \propto \exp\left[{-\frac{1}{2}\left(\frac{1/D-\varpi'\_0}{\sigma\_{\varpi'}}\right)^2}\right], \end{split}$$ assuming $\varpi\_0'$ is Gaussian-distributed, or more specifically, $\varpi\_0' \sim \mathcal{N}\left(1/D, \sigma\_{\varpi'}^2\right)$. The second term on the right side of Equation [eq:pxbasedD] can be approximated as *p*(*D*, *l*, *b*) ∝ *D*2, when the parent population Ψ of the target celestial body is uniformly distributed spatially. Given a postulated (Galactic) spatial distribution *ρ*(*D*, *l*, *b*) of Ψ, *p*(*D*, *l*, *b*) ∝ *D*2*ρ*(*D*, *l*, *b*). Hence, $$\label{eq:p\_D\_extended} \begin{split} p(D|\varpi',l,b) \propto D^2 \rho(D,l,b) \exp\left[{-\frac{1}{2}\left(\frac{1/D-\varpi'\_0}{\sigma\_{\varpi'}}\right)^2}\right] \,. \end{split}$$ We join and to adopt the *ρ*(*D*, *l*, *b*) (of the “Model C”) determined by for Galactic pulsars. While calculating the *ρ*(*D*, *l*, *b*) with Equations 10 and 11 of, we follow and to increase the scale height (i.e., the parameter “*E*” of ) to 0.5kpc to accommodate the MSP population. In addition, the distance to the Galactic centre (GC) in Equation 10 of is updated to *d*⊙ = 8.12 ± 0.03kpc. We do not follow to use pulsar radio fluxes to constrain pulsar distances, as pulsar luminosity is relatively poorly constrained. Using the aforementioned mathematical formalism, we calculated $p(D|\varpi',l,b)$ for each MSPSR*π* pulsar, and integrated it into the cumulative distribution function (CDF) $\Phi(D|\varpi',l,b)=\int^{D}\_{\,0} p(D'|\varpi',l,b)\,dD'$. The $p(D|\varpi',l,b)$ and $\Phi(D|\varpi',l,b)$ is plotted for each pulsar and made available online[footnote:pulsarpositions]. An example of these plots are presented in Figure [fig:J1012dist]. The median distances *D*median corresponding to $\Phi(D|\varpi',l,b)\!=\!0.5$ are taken as the pulsar distances, and summarized in Table [tab:VLBItimingresults]. The distances matching $\Phi(D|\varpi',l,b)\!=\!0.16$ and $\Phi(D|\varpi',l,b)\!=\!0.84$ are respectively used as the lower and upper bound of the 1*σ* uncertainty interval. [fig:J1012dist] ### Comparison with DM distances As mentioned in Section [subsec:MSPVLBIastrometry], the precise DM measured from a pulsar can be used to assess the pulsar distance, provided an *n*e(*x⃗*) model. Using pygedm[11](#fn11), we compile into Table [tab:VLBItimingresults] the DM distances (i.e., *d*DM(NE) and *d*DM(YMW)) of each pulsar based on the two latest realisations of *n*e(*x⃗*) model — the NE2001 model and the YMW16 model. For all the DM distances, we adopt typical 20% fractional uncertainties. We have obtained significant ( ≥ 3 *σ*) parallax-based distances *D* for 15 out of 18 MSPSR*π* pulsars. These distances enable an independent quality check of both *n*e(*x⃗*) models. Among the 15 pulsars with parallax-based distance measurements, YMW16 is more accurate than NE2001 in three cases (i.e., PSR J1012 + 5307, PSR J1643 − 1224 and PSR J1939 + 2134), but turns out to be the other way around in four cases (i.e. PSR J0621 + 1002, PSR J1853 + 1303, PSR J1910 + 1256 and PSR J1918 − 0642). In other 8 cases, the *D* cannot discriminate between the two models. The small sample of 15 *D* measurements shows that NE2001 and YMW16 remain comparable in terms of outliers. In 2 (out of the 15) cases (i.e., PSR J0610 − 2100, PSR J1024 − 0719), *D* is about 2.6 *σ* and 6.8 *σ* away from either DM distance, which reveals the need to further refine the *n*e(*x⃗*) models. Such a refinement can be achieved with improved pulsar distances including the ones determined in this work. Transverse space velocities --------------------------- Having determined the parallax-based distances *D* and the proper motions {*μ*ʹ*α*, *μ*ʹ*δ*}, we proceed to calculate transverse space velocities *v*⊥ for each pulsar, namely the transverse velocity with respect to the neighbouring star field of the pulsar. In estimating the transverse velocity of a pulsar neighbourhood, we assume the neighbourhood observes circular motion about the axis connecting the North and South Galactic Poles, which is roughly valid given that all MSPSR*π* pulsars with significant ( > 3 *σ*) *D* share a median ∣*z*∣ = *D*sin∣*b*∣ of 0.3kpc. Using the Galactic rotation curve from and the full circular velocity of the Sun 247 ± 1$\rm km~s^{-1}$, we derived the apparent transverse velocity of the neighbourhood *v*⊥ , *N*, thus obtaining *v*⊥ by subtracting the apparent transverse velocity of the pulsar by *v*⊥ , *N*. Here, the full circular velocity (denoted as $\Theta\_0+{\ifmmode\mbox{V}\_{\odot}\else$\mbox{V}\_{\odot}$\fi}$ in ) is calculated with *d*⊙ = 8.12  ±  0.03kpc and the proper motion of Sgr A^\* from. To estimate the uncertainty of *v*⊥, we simulated a chain of 50,000 distances for each pulsar based on the $p(D|\varpi',l,b)$ that we have obtained in Section [subsec:pxbasedD]. Besides, we also acquired chains of 50,000 *μ*ʹ*α* and *μ*ʹ*δ* given the VLBI+timing proper motions of Table [tab:VLBItimingresults], assuming *μ*ʹ*α* and *μ*ʹ*δ* follow Gaussian distributions. With these chains of *D*, *μ*ʹ*α* and *μ*ʹ*δ*, we calculated 50,000 *v*⊥ values, which form a PDF of *v*⊥ for each pulsar. The *v*⊥ inferred from the PDFs are summarized in Table [tab:VLBItimingresults]. In Figure [fig:vt], we illustrate the *v*⊥ in relation to ∣*z*∣ for 16 pulsars with precise distance estimates. Among the 16 pulsars, only PSR J1824 − 2452A does not have a significant parallax-based distance. Nevertheless, its *v*⊥ can be inferred by incorporating its proper motion with the astrometric information (i.e., distance and proper motion) of its host globular cluster (see Section [subsec:J1824]). No clear correlation is revealed between *v*⊥ and ∣*z*∣, which reinforces our decision to treat all MSPSR*π* pulsars across the $|z|\lesssim1$kpc regime equally. By concatenating the simulated *v*⊥ chains, we acquired the PDF for the 16 MSPs (see Figure [fig:vt]), which gives *v*⊥(MSP) = 53− 37+ 48$\rm km~s^{-1}$. Amongst the MSPSR*π* sources, PSR J1024 − 0719 is an obvious outlier, with a velocity of  ∼ 300 km s− 1 that is 3*σ* above the mean. As proposed by and, PSR J1024 − 0719 is theorized to have been ejected from a dense stellar region, thus possibly following a different *v*⊥ distribution from typical field MSPs (isolated along with their respective companions throughout their lives). In this regard, we turn our attention to the binary sample of pulsars with well determined orbital periods *P*b (see *P*b of Table [tab:astrometricparameters]), and obtain *v*⊥(BI) = 50− 34+ 49$\rm km~s^{-1}$ for field binary MSPs. Based on this small sample, we do not find the *v*⊥ of the three solitary MSPs (i.e., PSR J0030 + 0451, PSR J1730 − 2304 and PSR J1939 + 2134) to be inconsistent with *v*⊥(BI). Neither are the two DNSs (i.e., PSR J1518 + 4904 and PSR J1537 + 1155). If we exclude the two DNSs from the binary sample, we would come to *v*⊥(WD) = 50− 31+ 46$\rm km~s^{-1}$ for the MSPSR*π* pulsars with WD companions, which is highly consistent with *v*⊥(BI) and *v*⊥(MSP). Compared to 113 ± 17$\rm km~s^{-1}$ previously estimated for a sample of  ∼ 40 MSPs, our *v*⊥(MSP) is largely consistent but on the smaller side. recently shows that MSP space velocities have to be near zero to explain the Galactic Centre *γ*-ray excess. Interestingly, the *v*⊥ PDF based on our small sample of 16 shows a multi-modal feature, with the lowest mode consistent with zero. Specifically, the 7 MSPSR*π* pulsars with the smallest *v*⊥ share an equally weighted mean *v*⊥ of only 25$\rm km~s^{-1}$, which suggests MSPs with extremely low space velocities are not uncommon. Accordingly, we suspect the MSP origin of the GC *γ*-ray excess can still not be ruled out based on our sample of *v*⊥. [fig:vt] Radial accelerations of pulsars and orbital-decay tests of gravitational theories ================================================================================= As described in Section [subsec:MSPVLBIastrometry], VLBI astrometry of pulsars, in conjunction with pulsar timing, can enhance the orbital-decay tests of gravitational theories. For binary systems involved in this work, the observed orbital decay has three significant components: $$\label{eq:Pb\_budget} \begin{split} \dot{P}\_\mathrm{b}^\mathrm{obs} = \dot{P}\_\mathrm{b}^\mathrm{GW} + \dot{P}\_\mathrm{b}^\mathrm{Shk} + \dot{P}\_\mathrm{b}^\mathrm{Gal}\,, \end{split}$$ where $\dot{P}\_\mathrm{b}^\mathrm{GW}$ reflects the effect of gravitational-wave damping intrinsic to a binary system, while $\dot{P}\_\mathrm{b}^\mathrm{Shk}$ and $\dot{P}\_\mathrm{b}^\mathrm{Gal}$ are both extrinsic contributions caused, respectively, by relative line-of-sight accelerations (of pulsars) AShk and AGal. Specifically, $\dot{P}\_\mathrm{b}^\mathrm{Shk}=\mathcal{A}\_\mathrm{Shk}/c \cdot P\_\mathrm{b} =\mu^2 D/c \cdot P\_\mathrm{b}$ (where *μ*2 = *μ*ʹ*α*2 + *μ*ʹ*δ*2) is the radial acceleration caused by the tangential motion of pulsars, which becomes increasingly crucial for pulsars with larger *μ* (e.g. PSR J1537 + 1155, ), as AShk ∝ *μ*2. On the other hand, $$\label{eq:PbGal} \begin{split} \dot{P}\_\mathrm{b}^\mathrm{Gal} &= \frac{\mathcal{A}\_\mathrm{Gal}}{c} P\_\mathrm{b} = \frac{\left[-\nabla \varphi\left(\vec{x}\right)\right]\big|^{\vec{x}\_\mathrm{target}}\_{\vec{x}\_{\odot}} \cdot \vec{e}\_{r}}{c} P\_\mathrm{b} \end{split}$$ is a consequence of the gravitational pull (or push) exerted by the Galaxy. Here, *φ*(*x⃗*) and *e⃗**r* are, respectively, the Galactic gravitational potential (as a function of Galactic position *x⃗*) and the unit vector in the Earth-to-pulsar direction. In order to test any theoretical prediction of $\dot{P}\_\mathrm{b}^\mathrm{GW}$, it is necessary to estimate AShk and AGal and remove their effect on $\dot{P}\_\mathrm{b}^\mathrm{obs}$. Besides this impact, the radial accelerations AShk and AGal would, more generally, affect the time derivative of all periodicities intrinsic to a pulsar system, which include the pulsar spin period derivative $\dot{P}\_\mathrm{s}$. Similar to $\dot{P}\_\mathrm{b}^\mathrm{Shk}$ and $\dot{P}\_\mathrm{b}^\mathrm{Gal}$, $\dot{P}\_\mathrm{s}^\mathrm{Shk}=\mathcal{A}\_\mathrm{Shk}/c \cdot P\_\mathrm{s}$ and $\dot{P}\_\mathrm{s}^\mathrm{Gal}=\mathcal{A}\_\mathrm{Gal}/c \cdot P\_\mathrm{s}$ (where *P*s stands for the spin period of a pulsar). As MSPs consist of nearly half of the *γ*-ray pulsar population, determining the extrinsic terms of $\dot{P}\_\mathrm{s}$ and the intrinsic spin period derivative $\dot{P}\_\mathrm{s}^\mathrm{int}=\dot{P}\_\mathrm{s}^\mathrm{obs}-\dot{P}\_\mathrm{s}^\mathrm{Shk}-\dot{P}\_\mathrm{s}^\mathrm{Gal}$ is essential for exploring the “death line” (i.e., the lower limit) of high-energy emissions from pulsars. In Sections [subsec:Shk] and [subsec:AGal], we evaluate AShk and AGal one after another. The evaluation only covers pulsars with significant *D*, as both AShk and AGal are distance-dependent. Shklovkii effects ----------------- We estimate the model-independent AShk in a way similar to the estimation of *v*⊥ (see Section [subsec:vt]). Three chains of 50,000 *μ*ʹ*α*, *μ*ʹ*δ* and *D* were simulated from their respective PDFs. Using the relation AShk = (*μ*ʹ*α*2 + *μ*ʹ*δ*2)*D*, 50,000 AShk were calculated to assemble the PDF of AShk for each pulsar with significant *D*. The AShk inferred from the PDFs are compiled in Table [tab:Shk] along with their resultant $\dot{P}\_\mathrm{s}^\mathrm{Shk}$ and $\dot{P}\_\mathrm{b}^\mathrm{Shk}$. [tab:Shk] Relative radial accelerations due to Galactic gravitational pull ---------------------------------------------------------------- We estimate AGal in the same way as, following the pioneering work of. To briefly demonstrate this method, we present, in Table [tab:AGal], the AGal based on five different *φ*(*x⃗*) models for the 15 pulsars with significant *D* measurements. The five *φ*(*x⃗*) models are denoted as NT95, DB98, BT08, P14 and M17 in this paper. The results obtained with NT95, which uses a simple analytical approach, are frequently discrepant compared to the other 4 *φ*(*x⃗*) models. Accordingly, and following, we exclude it and use the remaining 4 models to derive the estimate for AGal and its uncertainty, which we present in Table [tab:Shk] (along with $\dot{P}\_\mathrm{b}^\mathrm{Gal}$ and $\dot{P}\_\mathrm{s}^\mathrm{Gal}$). Incorporating the $\dot{P}\_\mathrm{s}^\mathrm{Shk}$ derived in Section [subsec:Shk], we calculated the intrinsic spin period derivative $\dot{P}\_\mathrm{s}^\mathrm{int}=\dot{P}\_\mathrm{s}^\mathrm{obs}-\dot{P}\_\mathrm{s}^\mathrm{Shk}-\dot{P}\_\mathrm{s}^\mathrm{Gal}$. We note that the negative $\dot{P}\_\mathrm{s}^\mathrm{int}$ of PSR J1024 − 0719 is probably the consequence of radial acceleration induced by a putative companion in an extremely wide orbit with PSR J1024 − 0719 (, also see Section [subsec:Gaiaresults]). In addition to $\dot{P}\_\mathrm{s}^\mathrm{int}$, $\dot{P}\_\mathrm{b}^\mathrm{int}=\dot{P}\_\mathrm{b}^\mathrm{obs}-\dot{P}\_\mathrm{b}^\mathrm{Shk}-\dot{P}\_\mathrm{b}^\mathrm{Gal}$ are estimated for four pulsar systems with reported $\dot{P}\_\mathrm{b}^\mathrm{obs}$. The improved PSR J1738 + 0333 parallax as well as the re-assessed PSR J1012 + 5307 parallax calls for an update to the constraint on alternative theories of gravity, which is discussed in Section [subsec:alternativegravity]. While performing the AGal analysis, we found an error in the code that had been used to implement the calculation of Equation [eq:PbGal] for the work (which, to be clear, is not an error in the GalPot[12](#fn12) package that provides the *φ*(*x⃗*) models). Therefore, we note that the $\dot{P}\_\mathrm{b}^\mathrm{Gal}$ of PSR J1537 + 1155 in Table [tab:Shk] is a correction to the counterpart. Further discussions on PSR J1537 + 1155 can be found in Section [subsec:J1537]. Last but not least, assuming GR is correct, the approach taken above can be inverted to infer $\mathcal{A}\_\mathrm{Gal}^\mathrm{(GR)}\!=\!(\dot{P}\_\mathrm{b}^\mathrm{obs} - \dot{P}\_\mathrm{b}^\mathrm{GW} - \dot{P}\_\mathrm{b}^\mathrm{Shk})c/P\_\mathrm{b}$, which can be used to constrain Galactic parameters for the local environment (of the Solar system), or probe the Galactic dark matter distribution in the long run. The AGal(GR) for the three viable pulsars are listed in Table [tab:AGal]. [tab:AGal] New constraints on alternative theories of gravity -------------------------------------------------- In the GR framework, the excess orbital decay $\dot{P}\_\mathrm{b}^\mathrm{ex}=\dot{P}\_\mathrm{b}^\mathrm{int}-\dot{P}\_\mathrm{b}^\mathrm{GW}$ is expected to agree with zero. However, some alternative theories of gravity expect otherwise due to their predictions of non-zero dipole gravitational radiation and time-varying Newton’s gravitational constant *G*. Both phenomena are prohibited by GR. Namely, in GR, the dipole gravitational radiation coupling constant *κ**D* = 0, and $\dot{G}/G=0$. The large asymmetry of gravitational binding energy of pulsar-WD systems makes them ideal testbeds for dipole gravitational emissions. In an effort to test (and possibly eliminate) alternative theories of gravity, increasingly tight constraints on *κ**D* and $\dot{G}/G$ have been placed using multiple pulsar-WD systems. With the reassessed astrometric results of PSR J1012 + 5307, the $\dot{P}\_\mathrm{b}^\mathrm{ex}$ of PSR J1012 + 5307 changes from $10.6\pm6.1\,\mathrm{fs~s^{-1}}$ in to $5.1\pm5.1\,\mathrm{fs~s^{-1}}$. This change is mainly caused by three reasons: **1)** priors are placed on the proper motion during inference in this work (but not in ); **2)** a Bayesian framework is applied in this work (while reported bootstrap results); **3)** this work adopts PDF medians as the estimates (while used PDF modes). Though barely affecting this work (see Figure [fig:J1012dist]), the choice between PDF mode and median makes a difference to given that their parallax PDF is more skewed (see Figure 4 of ). After employing the new VLBI+timing distance, the $\dot{P}\_\mathrm{b}^\mathrm{ex}$ of PSR J1738 + 0333 has shifted from $2.0\pm3.7\,\mathrm{fs~s^{-1}}$ to $1.6\pm3.5\,\mathrm{fs~s^{-1}}$. More discussions on PSR J1738 + 0333 can be found in Section [subsec:J1738]. With the new $\dot{P}\_\mathrm{b}^\mathrm{ex}$ of PSR J1012 + 5307 and PSR J1738 + 0333, we updated the constraints on *κ**D* and $\dot{G}/G$ in exactly the same way as. The prerequisites of this inference are reproduced in Table [tab:PbdotEx], where the two underlined $\dot{P}\_\mathrm{b}^\mathrm{ex}$ are the only difference from the Table 6 of. We obtained $$\label{eq:gdot\_kD} \begin{split} \dot{G}/G &= -1.6^{\,+5.3}\_{\,-4.8}\times10^{-13}\,\mathrm{yr^{-1}}, \\ \kappa\_D &= -1.1^{\,+2.4}\_{\,-0.9}\times10^{-4}. \end{split}$$ Compared to, *κ**D* becomes more consistent with zero, while the new uncertainties of *κ**D* and $\dot{G}/G$ remain at the same level. [tab:PbdotEx] Individual pulsars ================== In this section, we discuss the impacts of the new astrometric measurements (particularly the new distances) on the scientific studies around individual pulsars. Accordingly, special attention is paid to the cases where there is no published timing parallax $\varpi^\mathrm{(Ti)}$. In addition, we also look into the two pulsars (i.e. PSR J1721 − 2457  and PSR J1824 − 2452A) that have $\varpi'$ consistent with zero, in an effort to understand the causes of parallax non-detection. PSR J0610 − 2100 ---------------- PSR J0610 − 2100 is the third black widow pulsar discovered, which is in a 7-hr orbit with an extremely low-mass ( ≈ 0.02M⊙, ) star. Adopting a distance of around 2.2kpc, obtained a *γ*-ray emission efficiency $\eta\_\gamma \equiv 4 \pi F\_\mathrm{\gamma} D^2 / \dot{E}^\mathrm{int}$ in the range of 0.5–3.7, where $\dot{E}^\mathrm{int}$ and *F*\gamma are, respectively, the intrinsic NS spin-down power and the *γ*-ray flux above 100MeV. In addition, estimated a mass function $$\label{eq:mass\_func} f(m\_\mathrm{p},q)=m\_\mathrm{p} \frac{\sin^3{i}}{q(q+1)^2}=\frac{4\pi^2 a\_1^3}{G P\_\mathrm{b}^2}$$ of 5.2 × 10− 6M⊙ for the PSR J0610 − 2100 system (where *q* ≡ *m*p/*m*c). Besides, they determined the irradiation temperature (of the companion) *T*irr = 2820 ± 190K as well as the projected orbital semi-major axis *a*1 = 7.3 × 10− 3lt-s. Combining these three estimates, we calculated the heating luminosity $$\label{eq:L\_irr} \begin{split} L\_\mathrm{irr} &\equiv 4 \pi \left[\frac{a\_1 (1+q)}{\sin{i}}\right]^2 \sigma\_\mathrm{SB} T\_\mathrm{irr}^4 \\ &\approx 4 \pi a\_1^2 \left[ \frac{m\_\mathrm{p}}{f(m\_\mathrm{p},q)} \right]^{2/3} \sigma\_\mathrm{SB} T\_\mathrm{irr}^4 \\ &\sim 9.1\times10^{32} \left( \frac{m\_\mathrm{p}}{1.4\,{\ifmmode\mbox{M}\_{\odot}\else$\mbox{M}\_{\odot}$\fi}}\right)^{2/3} \mathrm{erg~s^{-1}}, \end{split}$$ where *σ*SB represents the Stefan-Boltzmann constant. Our new distance *D* = 1.5− 0.2+ 0.3kpc to PSR J0610 − 2100 is less than half the DM-based distances (see Table [tab:VLBItimingresults]), and significantly below that assumed by. Assuming a NS moment of inertia *I*NS = 1045 g~cm^2, the $\dot{P}\_\mathrm{s}^\mathrm{int}$ of PSR J0610 − 2100 (see Table [tab:Shk]) corresponds to an intrinsic spin-down power $$\label{eq:Edot} \dot{E}^\mathrm{int} \equiv 4 \pi^2 I\_\mathrm{NS} \dot{P}\_\mathrm{s}^\mathrm{int}/P\_\mathrm{s}^3$$ of $(5.1\pm0.5)\times10^{33}\,\mathrm{erg~s^{-1}}$, which is roughly twice as large as the $\dot{E}^\mathrm{int}$ range calculated by. In conjunction with a smaller *γ*-ray luminosity *L**γ* = 4*π**F*\gamma*D*2 (due to closer distance), the $\dot{E}^\mathrm{int}$ reduced *η**γ* to around 0.37 (from 0.5 < *η**γ* < 3.7 estimated by ), disfavoring unusually high *γ*-ray beaming towards us. Moreover, the heating efficiency *ε**T* drops to  ∼ 0.17 (from 0.15 < *ε**T* < 0.77 evaluated by ), disfavoring the scenario where the NS radiation is strongly beamed towards the companion. ### On the DM discrepancy In Section [subsubsec:DMdistances], we noted that our VLBI parallax-derived distance and the DM model-inferred distance to this pulsar differed substantially. Specifically, PSR J0610 − 2100 has a measured DM = 60.7 pc cm− 3 while the NE2001 model predicts 27.5 pc cm− 3 for a line of sight of length 1.5kpc. We attribute this discrepancy to thermal plasma or ``free electrons’’ along the line of sight that is not captured fully by a ``clump’’ in the NE2001 model. The NE2001 model includes this ``clump’’ to describe the effects due to the Mon R2 region, centered at a Galactic longitude and latitude of (214, -12.6), located at an approximate distance of  ∼ 0.9kpc. However, the WHAM survey shows considerable H*α* in this direction, extending over tens of degrees. Lines of sight close to the pulsar show changes in the H*α* intensity by factors of two, but an approximate value toward the pulsar is roughly 13 Rayleighs, equivalent to an emission measure EM = 29 pc cm− 6 (for a temperature *T* = 8000 K). Using standard expressions, as provided in the NE2001 model, to convert EM to, there is sufficient H*α* intensity along the line of sight to account for the excess DM that we infer from the difference between our parallax-derived distance and the NE2001 model distance. PSR J1518 + 4904 ---------------- The 41-ms PSR J1518 + 4904, discovered by, is one of the only two DNSs in the current sample. According to, the non-detection of Shapiro delay effects suggests sin*i* ≤ 0.73 at 99% confidence level. Accordingly, we adopted 0.73 as the upper limit of sin*i*, and carried out 8-parameter Bayesian inference, which led to a bi-modal posterior PDF on *i*ʹ and a multi-modal PDF on Ωʹasc (see the online corner plot[footnote:pulsarpositions]). The predominant constraints on both *i*ʹ and Ωʹasc come from the $\dot{a}\_1$ measurement ( or see Table [tab:7parameterinference]). Though there are 3 major likelihood peaks for the Ωʹasc, two of them gather around 171, making the PDF relatively more concentrated. When a much more precise $\dot{a}\_1$ measurement is reached with new timing observations, the existing VLBI data will likely place useful constraints on *i*ʹ and Ωʹasc. So will additional VLBI observations. In addition to *i*ʹ and Ωʹasc, the 8-parameter Bayesian inference also renders a 40 *σ* parallax $\varpi'$, which becomes the most significant parallax achieved for a DNS. In contrast, to detect a timing parallax $\varpi^\mathrm{(Ti)}$ for PSR J1518 + 4904 would take $\gtrsim600$ years, due to its relatively high ecliptic latitude of 63. PSR J1537 + 1155 ---------------- PSR J1537 + 1155, also known as PSR B1534 + 12, is the second discovered DNS. The DNS displays an exceptionally high proper motion amongst all Galactic DNSs (see Table 3 of ), leading to an unusually large Shklovskii contribution to observed timing parameters. Therefore, precise astrometry of the DNS plays an essential role in its orbital decay test of GR. The most precise astrometric parameters of PSR J1537 + 1155 are provided by based on the same dataset used in this work, which result in $\dot{P}\_\mathrm{b}^\mathrm{Shk}=53\pm4$$\mathrm{fs~s^{-1}}$. Subsequently, estimated $\dot{P}\_\mathrm{b}^\mathrm{Gal}=-1.9\pm0.2$$\mathrm{fs~s^{-1}}$, and concluded $\dot{P}\_\mathrm{b}^\mathrm{int}/\dot{P}\_\mathrm{b}^\mathrm{GW}=0.977\pm0.020$. In this work, we inferred *η*EFAC on top of the canonical astrometric parameters, which is the only difference from the Bayesian method of. Despite this difference, our astrometric results of PSR J1537 + 1155 remain almost the same as. So is our re-derived $\dot{P}\_\mathrm{b}^\mathrm{Shk}=53.3^{+3.8}\_{-3.3}$$\mathrm{fs~s^{-1}}$. However, as is mentioned in Section [subsec:AGal], the $\dot{P}\_\mathrm{b}^\mathrm{Gal}$
arxiv_0000302
Adaptive Equi-Energy Sampler: Convergence and Illustration ========================================================== Abstract ======== Markov chain Monte Carlo (MCMC) methods allow to sample a distribution known up to a multiplicative constant. Classical MCMC samplers are known to have very poor mixing properties when sampling multimodal distributions. The Equi-Energy sampler is an interacting MCMC sampler proposed by Kou, Zhou and Wong in 2006 to sample difficult multimodal distributions. This algorithm runs several chains at different temperatures in parallel, and allow lower-tempered chains to jump to a state from a higher-tempered chain having an energy ‘close’ to that of the current state. A major drawback of this algorithm is that it depends on many design parameters and thus, requires a significant effort to tune these parameters. In this paper, we introduce an Adaptive Equi-Energy (AEE) sampler which automates the choice of the selection mecanism when jumping onto a state of the higher-temperature chain. We prove the ergodicity and a strong law of large numbers for AEE, and for the original Equi-Energy sampler as well. Finally, we apply our algorithm to motif sampling in DNA sequences. Keywords: interacting Markov chain Monte Carlo, adaptive sampler, equi-energy sampler, ergodicity, law of large numbers, motif sampling. Author’s email addresses: [email protected] This work is partially supported by the French National Research Agency, under the program ANR-08-BLAN-0218 BigMC. Introduction ============ Markov Chain Monte Carlo (MCMC) methods are well-known tools for sampling a target distribution *π* known up to a multiplicative constant. MCMC algorithms sample *π* by constructing a Markov chain admitting *π* as unique invariant distribution. A canonical example is the the Metropolis-Hastings algorithm : given the current value *X**n* of the chain {*X**j*,  *j* ≥ 0}, it consists in proposing a move *Y**n* + 1 under a proposal distribution *Q*(*X**n*,  ⋅ ). This move is then accepted with probability $$\begin{aligned} \alpha\_n=1 \wedge \pi(Y\_{n+1}) Q(Y\_{n+1},X\_n) / [\pi(X\_n) Q(X\_n,Y\_{n+1})] \eqsp,\end{aligned}$$ where *a* ∧ *b* stands for min(*a*, *b*); otherwise, *X**n* + 1 = *X**n*. It is known that the efficiency of MCMC methods depends upon the choice of the proposal distribution. For example, when sampling multi-modal distributions, a Metropolis-Hastings algorithm with *Q*(*X**n*,  ⋅ ) equal to a Gaussian distribution centered in *X**n* tends to be stuck in one of the modes. So the convergence of such an algorithm will be slow, and the target distribution will not be correctly approximated unless a huge number of points is sampled. Efficient implementations of MCMC rely on a strong expertise of the user in order to choose a proposal kernel and, more generally, design parameters adapted to the target *π*. This is the reason why *adaptive* and *interacting* MCMC methods have been introduced. Adaptive MCMC methods consist in choosing, at each iteration, a transition kernel *P**θ* among a family {*P**θ*, *θ* ∈ Θ} of kernels with invariant distribution *π*: the conditional distribution of *X**n* + 1 given the past is *P**θ**n*(*X**n*,  ⋅ ) where the parameter *θ**n* is chosen according to the past values of the chain {*X**n*, *n* ≥ 0}. From the pioneering Adaptive Metropolis algorithm of, many adaptive MCMC have been proposed and successfully applied (see the survey papers by ,, for example). Interacting MCMC methods rely on the (parallel) construction of a family of processes with distinct stationary distributions; the key behind these techniques is to allow interactions when sampling these different processes. At least one of these processes has *π* as stationary distribution. The stationary distributions of the auxiliary processes are chosen in such a way that they have nice convergence properties, hoping that the process under study will inherit them. For example, in order to sample multi-modal distributions, a solution is to draw auxiliary processes with target distributions equal - up to the normalizing constant - to tempered versions *π*1/*T**i*, *T**i* > 1. This solution is the basis of the parallel *tempering algorithm* , where the states of two parallel chains are allowed to swap. Following this tempering idea, different interacting MCMC algorithms have been proposed and studied so far . The *Equi-Energy sampler* of Kou, Zhou and Wong is an example of such interacting MCMC algorithms. *K* processes are sampled in parallel, with target distributions (proportional to) *π**β**k*, 1 = *β**K* > *β**K* − 1 > ⋯ > *β*1. The first chain *Y*(1) = {*Y**n*(1), *n* ≥ 0} is usually a Markov chain; then *Y*(*k*) is built from *Y*(*k* − 1) as follows: with a fixed probability $\InterractionProba$, the current state *Y**n*(*k*) is allowed to jump onto a past state of the auxiliary chain {*Y*ℓ(*k* − 1), ℓ ≤ *n*}, and with probability $(1-\InterractionProba)$, *Y**n*(*k*) is obtained using a ``local" MCMC move (such as a random walk Metropolis step or a Metropolis-adjusted Langevin step). This mechanism includes the computation of an acceptance ratio so that the chain *Y*(*k*) will have *π**β**k* as target density. As the acceptance probability of such a jump could be very low, only jumps toward selected past values of *Y*(*k* − 1), namely those with an *energy* close to that of the current state *Y**n*(*k*), are allowed. This selection step allows higher acceptance rates of the jump, and a faster convergence of the algorithm is expected. The Equi-Energy sampler has many design parameters: the interacting probability $\InterractionProba$, the number *K* of parallel chains, the temperatures *T**k* = 1/*β**k*, *k* ∈ {1, …, *K*} and the selection function. It is known that all of these design parameters play a role on the efficiency of the algorithm. suggest some values for all these parameters, designed for practical implementation and based on empirical results on some simple models. discuss the choice of the interacting probability $\InterractionProba$ in similar contexts; discuss the choice of the temperatures *T**k* of the chains for the Parallel Tempering algorithm. Recently, an algorithm combining parallel tempering with equi-energy moves have been proposed by. In this paper, we discuss the choice of the energy rings and the selection function, when the jump probability $\InterractionProba$, the number *K* of auxiliary processes and the temperatures are fixed. We introduce a new algorithm, called *Adaptive Equi-Energy sampler* in which the selection function is defined adaptively based on the past history of the sampler. We also address the convergence properties of this new sampler. Different kinds of convergence of adaptive MCMC methods have been addressed in the literature: convergence of the marginals, the law of large numbers (LLN) and central limit theorems (CLT) for additive functionals (see e.g. for convergence of the marginals and weak LLN of general adaptive MCMC, or for LLN and CLT for adaptive Metropolis algorithms, and for convergence of the marginals, LLN and CLT for general adaptive MCMC algorithms - see also the survey paper by ). There are quite few analysis of the convergence of interacting MCMC samplers. The original proof of the convergence of the Equi-Energy sampler in (resp. ) contains a serious gap, mentioned in (resp. ). established a strong LLN of a simplified version of the Equi-Energy sampler, in which the number of levels is set to *K* = 2 and the proposal during the interaction step are drawn uniformly at random in the past of the auxiliary process. Finally, Fort, Moulines and Priouret established the convergence of the marginals and a strong LLN for the same simplified version of the Equi-Energy sampler (with no selection) but have removed the limitations on the number of parallel chains. The paper addresses the convergence of an interacting MCMC sampler in which the proposal are selected from energy rings which are constructed adaptively at each levels. In this paper, we obtain the convergence of the marginals and a strong LLN of a smooth version of the Equi-Energy sampler and its adaptive variant. We illustrate our results in several difficult scenarios such as sampling mixture models with ``well-separated" modes and motif sampling in biological sequences. The paper is organized as follows: in Section [presentation], we derive our algorithm and set the notations that are used throughout the paper. The convergence results are presented in Section [main:results]. Finally, Section [illustration] is devoted to the application to motif sampling in biological sequences. The proofs of the results are postponed to the Appendix. Presentation of the algorithm ============================= Notations --------- Let (**X**, X) be a measurable Polish state space and *P* be a Markov transition kernel on (**X**, X). *P* operates on bounded functions *f* on **X** and on finite positive measures *μ* on X: $$Pf(x) = \int P(x,\rmd y) f(y), \qquad \mu P(A) = \int \mu(dx) P(x,A) \eqsp.$$ The *n*-iterated transition kernel *P**n*, *n* ≥ 0 is defined by: $$P^n(x,A)=\int P^{n-1}(x,\rmd y)P(y,A)= \int P(x,\rmd y) P^{n-1}(y,A) \eqsp;$$ by convention, *P*0(*x*, *A*) is the identity kernel. For a function *V* : **X** → [1,  + ∞[, we denote by ∣*f*∣*V* the V-norm of a function *f* : **X** → R: $$|f|\_V = \sup\_{x \in \mathbf{X}} \frac{|f(x)|}{V(x)} \eqsp.$$ If *V* = 1, this norm is the usual uniform norm. Let L*V* = {*f* : **X** → R, ∣*f*∣*V* <  + ∞}. We also define the V-distance between two probability measures *μ*1 and *μ*2 by: $$\|\mu\_1 - \mu\_2 \|\_V = \sup\_{f,|f|\_V\leq1} |\mu\_1(f) - \mu\_2(f) | \eqsp.$$ When *V* = 1, the V-distance is the total-variation distance and will be denoted by ∥*μ*1 − *μ*2∥*T**V*. Let (Θ, T) be a measurable space, and {*P**θ*, *θ* ∈ Θ} be a family of Markov transition kernels; Θ can be finite or infinite dimensional. It is assumed that for all *A* ∈ X, (*x*, *θ*) → *P**θ*(*x*, *A*) is (X ⊗ T∣B([0, 1]))-measurable, where B([0, 1]) denotes the Borel *σ*-field on [0, 1]. The Equi-Energy sampler ----------------------- Let *π* be the probability density of the target distribution with respect to a dominating measure *μ* on (**X**, X). In many applications, *π* is known up to a multiplicative constant; therefore, we will denote by *π**u* the (unnormalized) density. We denote by *P* the Metropolis-Hastings kernel with proposal density kernel *q* and invariant distribution *π* defined by: $$P(x,A)= \int\_A r(x,y) q(x,y) {\mu}(\rmd y) + \mathbf{1}\_A (x) \int (1-r(x,y)) q(x,y) {\mu}(\rmd y) \eqsp,$$ where (*x*, *y*) ↦ *r*(*x*, *y*) is the acceptance ratio given by $$r(x,y) = 1 \wedge \frac{\pi(y) q(y,x)}{\pi(x) q(x,y)} \eqsp.$$ The Equi-Energy (EE) sampler proposed by exploits the fact that it is often easier to sample a tempered version *π**β*, 0 < *β* < 1, of the target distribution than *π* itself. This is why the algorithm relies on an auxiliary process {*Y**n*, *n* ≥ 0}, run independently from {*X**n*} and admitting *π**β* as stationary distribution (up to a normalizing constant). This mechanism can be repeated yielding to a multi-stages Equi-Energy sampler. We denote by *K* the number of processes run in parallel. Let $\InterractionProba \in (0,1)$. Choose *K* temperatures *T*1 > … > *T**K* = 1 and set *β**k* = 1/*T**k*; and *K* MCMC kernels {*P*(*k*), 1 ≤ *k* ≤ *K*} such that *π**β**k**P*(*k*) = *π**β**k*. *K* processes *Y*(*k*) = {*Y**n*(*k*), *n* ≥ 0}, 1 ≤ *k* ≤ *K*, are defined by induction on the probability space (Ω, F, P). The first auxiliary process *Y*(1) is a Markov chain, with *P*(1) as transition kernel. Given the auxiliary process *Y*(*k* − 1) up to time *n*, {*Y**m*(*k* − 1), *m* ≤ *n*}, and the current state *Y**n*(*k*) of the process of level *k*, the Equi-Energy sampler draws *Y**n* + 1(*k*) as follows: * (Metropolis-Hastings step) with probability $1-\InterractionProba$, *Y**n* + 1(*k*) ∼ *P*(*k*)(*Y**n*(*k*),  ⋅ ). * (equi-energy step) with probability $\InterractionProba$, the algorithm selects a state *Z**n* + 1 from the auxiliary process having an energy close to that of the current state. An acceptance-rejection ratio is then computed and if accepted, *Y**n* + 1(*k*) = *Z**n* + 1; otherwise, *Y**n* + 1(*k*) = *Y**n*(*k*). In practice, only apply the equi-energy step when there is at least one point in each ring. In, the distance between the energy of two states is defined as follows. Consider an increasing sequence of positive real numbers $$\label{eq:definition-bornea} {\xi}\_0=0<{\xi}\_1<\dots<{\xi}\_S=+\infty \eqsp.$$ If the energies of two states *x* and *y* belong to the same energy ring, i.e. if there exists 1 ≤ ℓ ≤ *S* such that *ξ*ℓ − 1 ≤ *π**u*(*x*), *π**u*(*y*) < *ξ*ℓ, then the two states are said to have “close energy”. The choice of the energy rings is most often a difficult task. As shown in Figure [comparaison10D][right], the Equi-Energy sampler is inefficient when the energy rings are not appropriately defined. The efficiency of the sampler is increased when the variation of *π**u* in each ring is small enough so that the equi-energy move is accepted with high probability. The Adaptive Equi-Energy sampler -------------------------------- We propose to modify the Equi-Energy sampler by adapting the energy rings ``on the fly", based on the history of the algorithm. Our new algorithm, so called *Adaptive Equi-Energy* sampler (AEE) is similar to the Equi-Energy sampler of except for the equi-energy step, which relies on adaptive boundaries of the rings. For the definition of the process *Y*(*k*), *k* ≥ 2, adaptive boundaries computed from the process *Y*(*k* − 1) are used. For a distribution *θ* in Θ, denote by *ξ**θ*, ℓ, ℓ ∈ {1, ⋯, *S* − 1} the bounds of the rings, computed from r.v. with distribution *θ*; by convention, *ξ**θ*, 0 = 0 ≤ *ξ**θ*, 1 ≤ ⋯ ≤ *ξ**θ*, *S* − 1 ≤ *ξ**θ*, *S* =  + ∞. Define the associated energy rings *H**θ*, ℓ = [*ξ**θ*, ℓ − 1, *ξ**θ*, ℓ) for ℓ ∈ {1, ⋯, *S*}. We consider selection functions *g**θ*(*x*, *y*) of the form $$\label{eq:FonctionSelectionEE} g\_{{\theta}}(x,y) = \sum\_{\ell=1}^S h\_{{\theta},\ell}(x) h\_{{\theta}, \ell}(y) \eqsp, \quad h\_{{\theta}, \ell}(x) = \left(1-d({\pi\_u}(x),{H}\_{{\theta},\ell}) \right)\_+ \eqsp,$$ where *d*(*π**u*(*x*), *H**θ*, ℓ) measures the distance between *π**u*(*x*) and the ring *H**θ*, ℓ. By convention *h**θ*, ℓ = 0 if *H**θ*, ℓ = ∅. We finally introduce a set of selection kernels {*K**θ*(*k*), *θ* ∈ Θ} for all *k* ∈ {2, ⋯, *K*} defined by $$\begin{aligned} \label{eq:DefinitionKthetaAEE} K\_{\theta}^{(k)}(x,A) &=\int\_{A} \alpha\_{\theta}^{(k)}(x,y)\frac{g\_{\theta}(x,y)\theta(\rmd y)}{\int g\_{\theta}(x,z)\theta(\rmd z)} +\mathbf{1}\_A(x) \int \{1-\alpha\_{\theta}^{(k)}(x,y)\} \frac{g\_{\theta}(x,y)\theta(\rmd y)}{\int g\_{\theta}(x,z)\theta(\rmd z)} \eqsp,\end{aligned}$$ where $$\label{eq:DefinitionAlphaAEE} \alpha\_{\theta}^{(k)}(x,y) =1 \wedge \left( \frac{\pi^{\beta\_{k}-\beta\_{k-1}}(y) \int g\_{\theta} (x,z) \theta(\rmd z)}{\pi^{\beta\_{k}-\beta\_{k-1}}(x)\int g\_{\theta} (y,z) \theta(\rmd z)} \right) \eqsp.$$ *K**θ*(*k*) is associated to the equi-energy step when defining *Y*(*k*): a draw under the selection kernel proportional to $ g\_{\theta}(x,y)\theta(\rmd y)$ is combined with an acceptance-rejection step. The acceptance-rejection step is defined so that when *θ* ∝ *π**β**k* − 1, *π**β**k* is invariant for *K**θ*(*k*) . This equi-energy step is only allowed when each ring contains at least one point (of the auxiliary process *Y*(*k* − 1) up to time *n*). We therefore introduce, for all positive integer *m*, the set Θ*m*: $$\label{eq:ThetaM} {{\Theta}}\_m {\ensuremath{\stackrel{\mathrm{def}}{=}}}\left\{\theta \in \Theta: \frac{1}{m} \leq \inf\_x \int g\_\theta(x,y) \theta(\rmd y) \right\} \eqsp.$$ With these notations, AEE satisfies for any *n* ≥ 0 and *k* ∈ {1, ⋯, *K*}, $$\label{eq:interactingMCMC} {\mathbb{E}}[f(Y\_{n+1}^{(k)})|\mathcal{F}\_n^{(k)}] = {\mathbb{E}}[f(Y\_{n+1}^{(k)})|Y\_n^{(k)}, Y\_m^{(k-1)}, 1 \leq m \leq n] = P\_{{\theta}\_n^{(k-1)}}^{(k)}f(Y\_n^{(k)}) \eqsp,$$ where {F*n*(*k*), *n* ≥ 0} is the filtration defined by F*n*(*k*) = *σ*({*Y**m*(*l*), 1 ≤ *m* ≤ *n*, 1 ≤ *l* ≤ *k*}); the transition kernel is given by *P**θ*(1) = *P*(1) and for *k* ≥ 2, $$P\_{\theta}^{(k)} = (1-\InterractionProba \un\_{{\theta}\in \bigcup\_{m \geq 1} \Theta\_m}) P^{(k)} + \InterractionProba \un\_{{\theta}\in \bigcup\_{m \geq 1} \Theta\_m} \ K\_{{\theta}}^{(k)} \eqsp;$$ and *θ**n*(*k*) is the empirical distribution $$\label{eq:DefinitionThetan} {\theta}\_n^{(k)} = \frac{1}{n} \sum\_{m=1}^n \delta\_{Y\_m^{(k)}} \eqsp, \qquad k \in \{1, \cdots, K \}, n \geq 1 \eqsp.$$ Different functions *d* can be chosen. For example, the function given by $$\label{eq:hard-distance} d({\pi\_u}(x),{H}\_{\theta,\ell})= \un\_{ \Rset \setminus {H}\_{\theta,\ell}}({\pi\_u}(x))= \left\{ \begin{array}{ll} 0 & \text{if} \ {\pi\_u}(x) \in {H}\_{\theta,\ell}, \\ 1 & \text{otherwise} \end{array} \right.$$ yields to a selection function *g**θ* such that *g**θ*(*x*, *y*) = 1 iff *x*, *y* are in the same energy ring and *g**θ*(*x*, *y*) = 0 otherwise. In this case, the acceptance-rejection ratio *α**θ*(*k*)(*x*, *y*) is equal to 1 ∧ (*π**β**k* − *β**k* − 1(*y*)/*π**β**k* − *β**k* − 1(*x*)) upon noting that by definition of the proposal kernel, the points *x* and *y* are in the same energy ring. By using this ``hard" distance during the equi-energy jump, all the states of the auxiliary process having their energy in the same ring as the energy of the current state are chosen with the same probability, while the other auxiliary states have no chance to be selected. Other functions *d* could be chosen, such as ``soft" selections of the form $$\label{eq:soft:selection:function} d({\pi\_u}(x),{H}\_{\theta,l})= \frac{1}{r} \min\_{y \in {H}\_{\theta,\ell}} |{\pi\_u}(x) -y| \eqsp,$$ where *r* > 0 is fixed. With this “soft” distance, given a current state *Y**n*(*k*), the probability for each auxiliary state *Y**i*(*k* − 1), *i* ≤ *n*, to be chosen is proportional to *g**θ**n*(*k* − 1)(*Y**n*(*k*), *Y**i*(*k* − 1)). Then, the ``soft" selection function allows auxiliary states having an energy in a *r*-neighborhood of the energy ring of *π**u*(*Y**n*(*k*)) to be chosen, as well as states having their energy in this ring. Nevertheless, this selection function yields an acceptance-rejection ratio *α**θ*(*k*) which may reveal to be quite costly to evaluate. The asymptotic behavior of AEE will be addressed in Section [main:results]. The intuition is that when the empirical distribution *θ**n*(*k* − 1) of the auxiliary process of order *k* − 1 converges (in some sense) to *θ*⋆(*k* − 1), the process {*Y**n*(*k*), *n* ≥ 0} will behave (in some sense) as a Markov chain with transition kernel *P**θ*⋆(*k* − 1)(*k*). A toy example (I) ----------------- To highlight the interest of our algorithm, we consider toy examples: the target density *π* is a mixture of $\Rset^d$-valued Gaussian[1](#fn1). This model is known to be difficult, as illustrated (for example) in for a random walk Metropolis-Hastings sampler (SRWM), an EE-sampler and a parallel tempering algorithm. Indeed, if the modes are well separated, a Metropolis-Hastings algorithm using only ``local moves" is likely to remain trapped in one of the modes for a long-period of time. In the following, AEE is implemented with ring boundaries computed as described in Section [ring:construction]. Figure [comparaison].(a) displays the target density *π* and the simulated one for three different algorithms (SRWM, EE and AEE) in one dimension. The histograms are obtained with 105 samples; for EE and AEE, the probability of interaction is $\InterractionProba=0.1$, the number of parallel chains is equal to *K* = 5 and the number of rings is *S* = 5. For the adaptive definition of the rings in AEE, we choose the ``hard" selection ([eq:hard-distance]) and the construction of the rings defined in Section [ring:construction]. In the same vein, Figure [comparaison2D] displays the points obtained by the three algorithms when sampling a mixture of two Gaussian distributions in two dimensions. As expected, in both figures, SRWM never explores one of the modes, while EE and AEE are far more efficient. ![image](comparaison_densite) [comparaison] ![image](nuage1) [comparaison2D] To compare EE and AEE in a more challenging situation, we consider the case of a mixture with two components in ten dimensions. We run EE and AEE with *K* = 3 parallel chains with respective temperatures *T*1 = 1, *T*2 = 9, *T*3 = 60, the probability of jump $\InterractionProba$ is equal to 0.1, and the number of rings is *S* = 50. Both algorithms are initialized in one of the two modes of the distribution. For the Metropolis-Hastings step, we use a Symmetric Random Walk with Gaussian proposal; the covariance matrix of the proposal is of the form *c* *I* where *c* is calibrated so that the mean acceptance rate is approximatively 0.25. Figure [comparaison10D] displays, for each algorithm, the *L*1-norm of the empirical mean, averaged over 10 independent trajectories, as a function of the length of the chains. In order to show that the efficiency of EE depends crucially upon the choice of the rings, we choose a set of boundaries so that in practice, along one run of the algorithm, some of the rings are never reached. Figure [comparaison10D](a) compares EE and AEE in this extreme case: even after 2 × 105 iterations, all of the equi-energy jumps are rejected for the (non-adaptive) EE, and the algorithm is trapped in one of the modes. This does not occur for AEE, and the *L*1-error tends to zero as the number of iterations increases. This illustrates that our adaptive algorithm avoids the poor behaviors that EE can have when the choice of its design parameters is inappropriate. We now run EE in a less extreme situation: we choose (fixed) energy rings so that the sampler can jump more easily than in the previous experiment between the modes. Figure [comparaison10D](b) illustrates that the adaptive choice of the energy rings speeds up the convergence, as it makes the equi-energy jumps be more often accepted. To have a numerical comparison, the equi-energy jumps were accepted about ten times more often for AEE than for EE. | | | | --- | --- | | image | image | | (a) | (b) | [comparaison10D] Toy example (II) ---------------- For a better understanding on how our algorithm behaves, Figure [erreur:epsilon:strates].(a) displays the evolution of the ring bounds used in the definition of *Y*(*K*). In this numerical application, the target density is a mixture of two Gaussian distributions in one dimension; EE and AEE are run with *K* = 5 chains, *S* = 5 rings and $\InterractionProba=0.1$, for a number of iterations varying from 0 to 105. As expected, the ring bounds become stable after a reasonable number of iterations. Moreover, we observed that the (non-adaptive) EE run with the rings fixed to the limiting values obtained with AEE behaves remarkably well. Finally, to have an idea on the role played by $\InterractionProba$, Figure [erreur:epsilon:strates].(b) displays the average *L*1 error of AEE for a mixture of two Gaussian distributions in one dimension, after 2 × 105 iterations and for 100 independent trajectories when $\InterractionProba$ is varying from 0 to 1. If $\InterractionProba$ is too small, AEE is not mixing well enough, and if $\InterractionProba$ is too large, the algorithm jumps easily from one mode to another but does not explore well enough each mode, which explains the ‘u’ shape of the curve. This experiment suggests that there exists an optimal value for $\InterractionProba$, but to our best knowledge, the optimal choice of this design parameter is an open problem. | | | | --- | --- | | image | image | | (a) | (b) | [erreur:epsilon:strates] Convergence of the Adaptive Equi-Energy sampler =============================================== In this section, the convergence of the *K*-stages Adaptive Equi-Energy sampler is established. In order to make the proof easier, we consider the case when the distance function *d* in the definition of the selection function ([eq:FonctionSelectionEE]) is given by ([eq:soft:selection:function]). provide sufficient conditions for the convergence of the marginals and the strong LLN (s-LLN) of interacting MCMC samplers. We use their results and show the convergence of the marginals i.e. $$\lim\_{n \rightarrow \infty} {\mathbb{E}}\left[f(Y\_n^{(K)}) \right] = \pi(f) \eqsp,$$ for any continuous bounded functions *f*. Note that this implies that this limit holds for any indicator function $f =\un\_A$ such that P(∂*A*) = 0 where ∂*A* denotes the boundary of *A*. We then establish the s-LLN: for a wide class of continuous (un)bounded functions *f*, $$\lim\_{n \rightarrow \infty} \frac{1}{n} \sum\_{m=0}^{n-1} f(Y\_m^{(K)}) = \pi(f) \eqsp, {\mathbb{P}}-\mathrm{a.s.}$$ Assumptions ----------- Our results are established for target distributions *π* satisfying [EE1] 1. [EE1:a] *π* is the density of a probability distribution on the measurable Polish space $({\mathbf{X}}, \Xsigma)$ and sup**X***π* < ∞ and for any *s* ∈ (0, 1], $\int \pi^s(x) \ \rmd x < \infty$. 2. [EE1:b] *π* is continuous and positive on **X**. Usually, the user knows *π* up to a normalizing constant: hereafter, *π**u* will denote this available (unnormalized) density. As in, we first introduce a set of conditions that will imply the geometric ergodicity of the kernels *P**θ*(*k*), and the existence of an invariant probability measure for *P**θ*(*k*) (see conditions E[EE3]). We finally introduce conditions on the boundaries of the adaptive energy rings (see conditions E[EE:ring]). Examples of boundaries satisfying E[EE:ring] and computed from quantile estimators are given in Section [ring:construction] (see also for stochastic approximation-based adapted boundaries). Convergence of adaptive and interacting MCMC samplers is addressed in the literature by assuming containment conditions and diminishing adaptations (so called after ). Assumptions E[EE3] is the main tool to establish a (generalized) containment condition. In our algorithm, the adaptation mechanism is due to (a) the interaction with an auxiliary process and (b) the adaption of the rings. Therefore, assumptions E[EE3] and E[EE:ring] are related to the diminishing adaptation condition (see e.g. Lemma [lemme:cv:dv:proba] in Section [secproof:ergoees]). [EE3] For each *k* ∈ {1, …, *K*}: 1. [EE3:a] *P*(*k*) is a *ϕ*-irreducible transition kernel which is Feller on (**X**, X) and such that *π**β**k**P*(*k*) = *π**β**k*. 2. [EE3:b] There exist *λ**k* ∈ (0, 1), *b**k* <  + ∞ and *τ**k* ∈ (0, *τ**k* − 1*β**k* − 1/*β**k*) such that *P*(*k*)*W**k* ≤ *λ**k**W**k* + *b**k* with $$\label{eq:DefinitionW} W\_k(x) = \left( \frac{\pi^{\beta\_k}(x)}{\sup\_{{\mathbf{X}}} \pi^{\beta\_k}}\right)^{-\tau\_k} \eqsp;$$ by convention, *τ*0*β*0 = *β*1. 3. [EE3:c] For all *p* ∈ (0, sup**X***π*), the sets {*π* ≥ *p*} are 1-small for *P*(*k*). Note that by definition of *τ**k* and E[EE1][EE1:a], *W**k* + 1 ∈ L*W**k* and $\int W\_{k}(x) \pi^{\beta\_{k}}(x) \rmd x < \infty$. E[EE3] is satisfied for example if for each *k*, *P*(*k*) is a symmetric random walk Metropolis Hastings kernel; and *π* is a sub-exponential target density . In our algorithm, *Y*(1) is a Markov chain with transition kernel *P*(1). As discussed in [chapters 13 and 17], E[EE3] is sufficient to prove ergodicity and a s-LLN for *Y*(1). E[EE3] also implies uniform *W*1-moments for *Y*(1). These results, which initializes our proof by recurrence of the convergence for the process number *K*, is given in Proposition [prop:lgn:first]. Define the probability distributions $$\label{eq:ThetaStar} \theta\_\star^{(k)}(\rmd x) = \frac{\pi^{\beta\_{k}}(x)}{\int \pi^{\beta\_{k}}(z) {\mu}(\rmd z)}{\mu}(\rmd x)\eqsp, \qquad k \in \{1, \cdots, K \} \eqsp.$$ [prop:lgn:first] Assume E[EE1][EE1:a], E[EE3] and E[*W*1(*Y*0(1))] < ∞. Then, 1. [EE2:c] For all bounded measurable functions *f*, lim*n* → ∞E[*f*(*Y**n*(1))] = *θ*⋆(1)(*f*). 2. [EE2:a] *θ*⋆(1)(*W*2) <  + ∞, and for any measurable function *f* in L*W*1, lim*n* → ∞*θ**n*(1)(*f*) = *θ*⋆(1)(*f*) a.s. 3. [EE2:b] sup*n*E[*W*1(*Y**n*(1))] < ∞. [EE:ring] 1. [ring:c] For any *k* ∈ {1, …, *K* − 1}, $\inf\_{\ell \in \{1, \cdots, S-1 \} } \int h\_{\theta\_\star^{(k)},\ell}(y) \ \theta\_\star^{(k)}(\rmd y) >0$. 2. [ring:a] For any *k* ∈ {1, …, *K* − 1} and ℓ ∈ {1, ⋯, *S* − 1}, lim*n* → ∞∣*ξ**θ**n*(*k*), ℓ − *ξ**θ*⋆(*k*), ℓ∣ = 0 w.p.1 3. [ring:b] There exists Γ > 0 such that for any *k* ∈ {1, …, *K* − 1}, any ℓ ∈ {1, ⋯, *S* − 1}, and any *γ* ∈ (0, Γ), limsup*n* *n**γ* ∣*ξ**θ**n* + 1(*k*), ℓ − *ξ**θ**n*(*k*), ℓ∣ < ∞ w.p.1. Note that by definition of *h**θ*, ℓ (see ([eq:FonctionSelectionEE])) $$\label{eq:CS:EE:ringa} \int h\_{\theta,\ell}(y) \ \theta(\rmd y) \geq \theta (\{y: \pi\_u(y) \in H\_{\theta,\ell} \}) \eqsp.$$ Condition E[EE:ring][ring:a] states that the rings {*H**θ**n*(*k*), ℓ), *n* ≥ 0} converge to *H**θ*⋆(*k*), ℓ w.p.1; therefore, E[EE:ring][ring:c] is satisfied as soon as the limiting rings are of positive probability under the distribution of *π**u*(*Z*) when *Z* ∼ *θ*⋆(*k*). When the energy bounds are fixed, the conditions E[EE:ring][ring:a]-[ring:b] are clearly satisfied and E[EE:ring][ring:c] holds under convenient choice of the rings. We will discuss in Section [ring:construction] how to check the condition E[EE:ring] with adaptive energy bounds. Convergence results ------------------- Proposition [drift:small:aees] shows that the kernels *P**θ*(*k*) satisfy a geometric drift inequality and a minorization condition, with constants in the drift independent of *θ* for *θ* ∈ Θ*m* (Θ*m* being defined in ([eq:ThetaM])). The proof is in Appendix [secproof:drift:small:aees]. [drift:small:aees] Assume E[EE1][EE1:a] and E[EE3]. For all *k* ∈ {1, …, *K*}: 1. There exist *λ̃**k* ∈ (0, 1) and *b̃**k* <  + ∞ such that for all *m* ≥ 1 and any *θ* ∈ Θ*m*, $$\label{eq:drift} P\_{\theta}^{(k)}W\_k \leq \tilde{\lambda}\_k W\_k +\tilde{b}\_k \ m \, \theta (W\_k) \ \eqsp.$$ For all *p* ∈ (0, sup**X***π*) and all *θ* ∈ ⋃*m*Θ*m*, the sets {*π* ≥ *p*} are 1-small for *P**θ*(*k*) and the minorization constants depend neither upon *θ* nor on *m*. 2. [drift:small:aees:item2] For all *θ* ∈ ⋃*m*Θ*m*, there exists a probability measure *π**θ*(*k*) invariant for *P**θ*(*k*). In addition, *π**θ*(*k*)(*W**k*) ≤ *b̃**k*(1 − *λ̃**k*)− 1 *m**θ*(*W**k*) for *θ* ∈ Θ*m*. Theorem [theo:tout] is proved in Section [proof]. Theorem [theo:tout] shows that there exists *m*⋆ ≥ 1 such that w.p.1, for all *n* large enough *θ**n*(*k*) belongs to some Θ*m*⋆. Note that in, a s-LLN for the Equi-Energy sampler is established by assuming that there exists a deterministic positive integer *m* such that w.p.1, *θ**n*(*k*) ∈ Θ*m* for any *n*. Such a condition is quite strong since roughly speaking, it means that after *n* steps (even for small *n*), all the rings contain a number of point which is proportional to *n*, w.p.1. This is all the more difficult to guarantee in practice, that the rings have to be chosen prior to any exploration of *π*. Our approach allows to relax this strong condition. The convergence of the marginals and the law of large numbers both require the convergence in *n* (*k* fixed) of {*π**θ**n*(*k*)(*k* + 1)(*f*), *n* ≥ 0} for some functions *f*. Such a convergence is addressed in Theorem [theo:tout]. We will then have the main ingredients to establish the convergence results for the processes *Y*(*k*), *k* ≥ 1. [theo:tout] Assume E[EE1], E[EE3], E[EE:ring] and E[*W**k*(*Y*0(*k*))] < ∞ for all *k* ∈ {1, ⋯, *K*}. 1. [prop:subsetTheta] There exists *m*⋆ ≥ 1 such that for all *k* ∈ {1, …, *K* − 1} $$\begin{aligned} \label{proba:teta:u} {\mathbb{P}}\left( \bigcup\_{q \geq 1} \bigcap\_{n \geq q} \{\theta\_n^{(k)} \in {\Theta}\_{m\_\star} \} \right)=1 \eqsp.\end{aligned}$$ 2. [prop:cvg:pi] For any *k* ∈ {1, ⋯, *K*}, any *a* ∈ (0, 1) and any continuous function *f* ∈ L*W**k**a*, $$\lim\_{n \to \infty} \pi\_{\theta\_n^{(k-1)}}^{(k)}(f) = \theta\_{\star}^{(k)}(f) \eqsp, \ \text{w.p.1} \eqsp.$$ 3. For any *k* ∈ {1, ⋯, *K*} and for all bounded continuous function $f: {\mathbf{X}}\to \Rset$, lim*n* → ∞E[*f*(*Y**n*(*k*))] = *θ*⋆(*k*)(*f*). 4. Let $a \in (0,\frac{1+{\Gamma}}{2} \wedge 1)$. For any *k* ∈ {1, ⋯, *K*} and for all continuous function *f* in L*W**k**a* $$\begin{aligned} \lim\_{n \to \infty} \frac{1}{n} \sum\_{m=1}^n f(Y\_m^{(k)}) = \theta\_{\star}^{(k)}(f) \qquad {\mathbb{P}}-\text{a.s.} \eqsp.\end{aligned}$$ Observe that, for the process $\{Y^{(k)}, k \in \Nset\}$, the family of functions for which the law of large numbers holds depends *(i)* upon Γ given by EE[EE:ring]([ring:b]) i.e. in some sense, depends upon the adaptation rate; and *(ii)* the temperature ladder. In the case *τ**k* can be chosen arbitrarily close to *β*1/*β**k* for any *k* (see comments after ), this family of functions only depends upon Γ and the lowest inverse temperature : it is all the more restrictive than *β*1 is small. To our best knowledge, we are the first to prove such convergence results for AEE (and EE): previous works consider the simpler case when there is no selection i.e. *g**θ*(*x*, *y*) = 1. Comments on Assumption E[EE:ring] --------------------------------- We propose to choose the adaptive boundaries *ξ**θ*, ℓ as the *p*ℓ-quantile of the distribution of *π**u*(*Z*) when *Z* is sampled under the distribution *θ*. This section proves that empirical quantiles of regularly spaced orders are examples of adaptive boundaries *ξ**θ**n*(*k*), ℓ satisfying E[EE:ring]. Let *F**θ* be the cumulative distribution function (cdf) of the r.v. *π**u*(*Z*) when *Z* ∼ *θ*: $$\begin{aligned} F\_{{\theta}}(x) = \int \un\_{\{ {\pi\_u}(z) \leq x \}} {\theta}(\rmd z) \eqsp, \qquad x \in [0, \infty)\eqsp.\end{aligned}$$ We denote the quantile function associated to *π**u*(*Z*) by: $$\begin{aligned} F\_{{\theta}}^{-1}(p) = \inf \{x \geq 0, F\_{{\theta}}(x) \geq p\} \quad \forall p>0 \eqsp; \qquad \qquad F\_{{\theta}}^{-1}(0)=0 \eqsp.\end{aligned}$$ With this definition, for 0 < *p*1 < ⋯ < *p**S* − 1 < 1, we set ${\xi}\_{{\theta},\ell} {\ensuremath{\stackrel{\mathrm{def}}{=}}}F\_{{\theta}}^{-1} \left( p\_\ell \right)$. With this choice of the boundaries, the condition E[EE:ring][ring:c] holds: by ([eq:CS:EE:ringa]), E[EE:ring][ring:c] is satisfied because *π* is continuous. The conditions E[EE:ring][ring:a]-[ring:b] require the convergence of the quantile estimators and a rate of convergence of the variation of two successive boundaries. To prove such conditions, we use an Hoeffding-type inequality. [prop:vitesse:quantiles] Assume 1. [hyp:fd] The cumulative distribution function *F**θ*⋆(1) where *θ*⋆(1) is given by ([eq:ThetaStar]), is differentiable with positive derivative on *F**θ*⋆(1)− 1((0, 1)). 2. there exists $\overline{W}$ such that *Y*(1) is a $\overline{W}$-uniformly ergodic Markov chain with initial distribution satisfying E[*Y*0(1)] < ∞. Then E[EE:ring][ring:a]-[ring:b] hold with Γ = 1/2 and *K* = 2. The proof is in Section [secproof:prop:vitesse:quantiles]. Extensions of Proposition [prop:vitesse:quantiles] to the case when *Y*(1) is not a uniformly ergodic Markov chain is, to our best knowledge, an open question. Therefore, our convergence result of AEE when the boundaries are the quantiles defined by inversion of the cdf of the auxiliary process applies to the 2-stage level and seems difficult to extend to the *K*-stage, *K* > 2. We proved recently in that when the quantiles are defined by a stochastic approximation procedure, the conditions E[EE:ring][ring:a]-[ring:b] hold even under very weak conditions on the auxiliary *Y*(*k*), *k* ≥ 2. In this case, the convergence of the *K*-level AEE with *K* > 2 is established. Application to motif sampling in biological sequences ===================================================== One of the challenges in biology is to understand how gene expression is regulated. Biologists have found that proteins called transcription factors play a role in this regulation. Indeed, transcription factors bind on special motifs of DNA and then attract or repulse the enzymes that are responsible of transcription of DNA sequences into proteins. This is the reason why finding these binding motifs is crucial. But binding motifs do not contain deterministic start and stop codons: they are only random sequences that occurs more frequently than expected under the background model. Several methods have been proposed so far to retrieve binding motifs , which yields to a complete Bayesian model . Among the Bayesian approach, one effective method is based on the Gibbs sampler  - it has been popularized by software programs. Nevertheless, as discussed in, it may happen that classical MCMC algorithms are inefficient for this Bayesian approach. Therefore, show the interest of the Equi-Energy sampler when applied to this Bayesian inverse problem; more recently, proposed a Gibbs-based algorithm for a similar model (their model differs from the following one through the assumptions on the background sequence). We start with a description of our model for motif sampling in biological sequences - this section is close to the description in   but is provided to make this paper self-contained. We then apply AEE and compare it to the Interacting MCMC of (hereafter called I-MCMC), and to a Metropolis-Hastings algorithm (MH). Comparison with Gibbs-based algorithms (namely BioProspector and AlignACE) can be found in the paper of. The available data is a DNA sequence, which is modeled by a background sequence in which some motifs are inserted. The background sequence is represented by a vector S = (*s*1, *s*2, …, *s**L*) of length *L*. Each element *s**i* is a nucleotide in {*A*, *C*, *G*, *T*}; in this paper, we will choose the convention *s**i* ∈ {1, 2, 3, 4}. The length *w* of a motif is assumed to be known. The motif positions are collected in a vector *A* = (*a*1, …, *a**L*), with the convention that *a**i* = *j* iff the nucleotide *s**i* is located at position number *j* of a motif; and *a**i* = 0 iff *s**i* is not in the motif. The goal of the statistical analysis of the data S is to explore the distribution of *A* given the sequence S. We now introduce notations and assumptions on the model in order to define this conditional distribution. We denote by *p*0 the probability that a sub-sequence of length *w* of S is a motif. It is assumed that the background sequence is a Markov chain with (deterministic) transition matrix $\param\_0 =\{\param\_0(i,j)\}\_{1 \leq i,j \leq 4}$ on {1, ⋯, 4}; and the nucleotide in a sequence are sampled from a multinomial distribution of parameter $\param= \{\param(i,j)\}\_{1 \leq i \leq 4, 1 \leq j \leq w}$, $\param(i,j)$ being the probability for the *j*-th element of a motif to be equal to *i*. In practice, it has been observed that approximating $\param\_0(i,j)$ by the frequency of jumps from *i* to *j* in the (whole) sequence S is satisfying. It is assumed that the r.v. $(\param, p\_0)$ are independent with prior distribution $\prod\_{j=1}^w \prior(\param(\cdot,j))$ and $\prior'(p\_0)$; $\prior(\param(\cdot, j))$ is a Dirichlet distribution with parameters *ι**j* = (*ι**j*, 1, ⋯, *ι**j*, 4) and $\prior'(p\_0)$ is a Beta distribution with parameters (*b*1, *b*2). *ι**j*, *b*1 and *b*2 are assumed to be known. Therefore, given $(\param, p\_0)$, (*A*, S) is a Markov chain described as follows: * If *a**k* − 1 ∈ {1, …, *w* − 1} then *a**k* = *a**k* − 1 + 1; else ${\mathbb{P}}(a\_k =1 \vert a\_{k-1} \in \{0, w\}, p\_0, \param) = 1- {\mathbb{P}}(a\_k=0\vert a\_{k-1} \in \{0, w\}, p\_0, \param) =p\_0$. * If *a**k* = 0, $s\_k \sim \param\_0(s\_{k-1},.)$; else *s**k* is drawn from a Multinomial distribution with parameter $\param(\cdot,a\_k)$. The chains are initialized with P(*a*1 = 1|*p*0) = 1 − P(*a*1 = 0|*p*0) = *p*0; the distribution of *s*1 given *a*1 = 0 and $\param$ (resp. given *a*1 = 1 and $\param$) is uniform on {1, ⋯, 4} (resp. a Multinomial distribution with parameter $\param(\cdot, 1)$). This description yields to the following conditional distribution of *A* given S: (up to a multiplicative constant) - see for similar derivation - $$\begin{aligned} \mathrm{P}(A|S) &\propto \frac{\Gamma(N\_1(A)+b\_1) \Gamma(N\_0(A)+b\_2)}{\Gamma(N\_1(A)+N\_0(A)+b\_1+b\_2)} \ \prod\_{i=1}^w \frac{\prod\_{j=1}^4 \Gamma(c\_{j,i}(A)+\iota\_{j,i})}{\Gamma(\sum\_{\ell=1}^4 c\_{\ell,i}(A) + \iota\_{\ell,i})} \cdots \\ & \ \times \prod\_{k=2}^L (\delta\_{a\_{k-1}+1}(a\_k))^{\mathbf{1}\_{a\_{k-1} \in \{1, \dots,w-1\}}} \prod\_{k=2}^L \left(\param\_0(s\_{k-1},s\_k)\right)^{\un\_{a\_k=0}} \left(\un\_{\{0\}}(a\_1) \frac{1}{4} + \un\_{\{1\}}(a\_1) \right)\end{aligned}$$ where * *N*1(*A*) = #{*k*, *a**k* = 1} is the number of elements of *A* equal to 1. * *N*0(*A*) = #{*k*, *a**k* = 0} is the number of elements of *A* equal to 0. * *c**j*, *i*(*A*) = ∑*k* = 1*L***1***a**k* = *i***1***s**k* = *j* is the number of pairs (*a**k*, *s**k*) equal to (*i*, *j*). ![image](bio_5) [comparaisonbio] To highlight the major role of the equi-energy jumps, and the importance of the construction of the rings to make the acceptance probability of the jumps large enough, we compare AEE to I-MCMC, and to MH. The data are obtained with values of $p\_0, \param\_0$ and $\param$ similar to those of : *p*0 = 0.005, *b*1 = 2, *b*2 = 200, *ι**j*, *i* = 1 for all *j*, *i*, and $$\param\_0 = \left( \begin{array}{cccc} 0.1 & 0.7 & 0.1 & 0.1\\ 0.1 & 0.1 & 0.7 & 0.1\\ 0.1 & 0.1 & 0.1 & 0.7\\ 0.7 & 0.1 & 0.1 & 0.1\end{array} \right), \, \param = \left( \begin{array}{cccccccccccc} 0.5 & 0.6 & 0.2 & 0.4 & 0.1 & 0.3 & 0.6 & 0.1 & 0.4 & 0.4 & 0.3 & 0 \\ 0 & 0.2 & 0 & 0.2 & 0.8 & 0.7 & 0 & 0.9 & 0 & 0 & 0.2 & 0.3\\ 0 & 0 & 0.8 & 0 & 0 & 0 & 0 & 0 & 0.3 & 0.5 & 0.4 & 0.1\\ 0.5 & 0.2 & 0 & 0.4 & 0.1 & 0 & 0.4 & 0 & 0.3 & 0.1 & 0.1 & 0.6 \end{array} \right).$$ We sample a sequence S of length *L* = 2000 and the size of the motif is *w* = 12. We now detail how the MH and the Metropolis-Hastings steps of AEE and I-MCMC are run. For the Metropolis-Hastings stage, the proposal distribution $p(A\_n, \tilde A\_{n+1})$ is of the form $$p(A\_n, \tilde A\_{n+1}) = q\_0(\tilde a\_1^{n+1}) \ \prod\_{j=1}^{L-1} q\_j(\tilde a\_{j}^{n+1},\tilde a\_{j+1}^{n+1}; A\_n) \eqsp,$$ where we set $ \tilde A\_{n+1} = (\tilde a\_1^{n+1}, \cdots, \tilde a\_L^{n+1})$. The proposed state *Ã**n* + 1 of the Metropolis-Hastings step is then sampled element by element; the distributions are designed to be close to the previous model: *ã**j* + 1*n* + 1 equal to *ã**j**n* + 1 + 1 if *ã**j**n* + 1 ∈ {1, …, *w* − 1}, and else, *ã**j* + 1*n* + 1 is sampled under a Bernoulli distribution of parameter $$\label{eq:proposal:bio} \frac{\hat{p}\_0 \prod\_{i=1}^{w} \hat{\param}\_{A\_n}(s\_{j+i-1},i)}{\hat{p}\_0 \prod\_{i=1}^{w} \hat{\param}\_{A\_n}(s\_{j+i-1},i)+(1-\hat{p}\_0) \prod\_{i=1}^{w-1} \param\_0(s\_{j+i},s\_{j+i+1})} \eqsp;$$ the replacement constant *p̂*0 is fixed by the users and $\hat \param\_{A\_n}$ is given by $\hat{\param}\_{A\_n}(s,i) \propto c\_{s,i}(A\_n)+c$ - where *c* is a value fixed by the users. $q\_0(\tilde a\_1^{n+1})$ is the Bernoulli distribution with parameter ([eq:proposal:bio]). Finally, the candidate *Ã**n* + 1 is accepted with probability $$1 \wedge \frac{\mathrm{P}(\tilde A\_{n+1}|S)^{1/T\_k}}{\mathrm{P}(A\_{n}|S)^{1/T\_k}} \frac{p(\tilde{A}\_{n+1}, A\_n )}{p(A\_n, \tilde{A}\_{n+1} ) } \eqsp.$$ Figure [comparaisonbio] displays the results obtained by AEE, I-MCMC and a MH sampler. Each subplot displays two horizontal lines with length equal to the length of the observed DNA sequence. The upper line represents the actual localization of the motifs, and the lower line represents in gray-scale the probability for each position to be part of a motif computed by one run of each algorithm after 2000 iterations. For AEE and I-MCMC, we choose $\InterractionProba=0.1$, *K* = 5, *S* = 3. The acceptance rate of the jump for AEE was about five times higher than for I-MCMC, which confirms the interest of the rings. As expected, AEE performs better than the other algorithms: there were 13 actual motifs, and AEE retrieved 10 motifs, whereas the I-MCMC and the MH retrieved respectively 7 and 6 motifs. Conclusion ========== As illustrated by the numerical examples, the efficiency of EE depends upon the choice of the energy rings. The adaptation we proposed improves this efficiency since it makes the probability of accepting a jump more stable. It is known that adaptation can destroy the convergence of the samplers: we proved that AEE converges under quite general conditions on the adapted bounds and these general conditions can be used to prove the convergence of AEE when applied with other adaptation strategies . It is also the first convergence result for an interacting MCMC algorithm including a selection mechanism. Our sketch of proof can be a basis for the proof of other interacting MCMC such as the SIMCMC algorithm of , the Non-Linear MCMC algorithms described in  or the PTEEM algorithm of . Results on the transition kernels *P**θ*(*k*) ============================================= Define $$\label{eq:ThetaTilde} G\_\theta(x) {\ensuremath{\stackrel{\mathrm{def}}{=}}}\int g\_\theta(x,z) \theta(\rmd z) \eqsp, \qquad \tilde \theta(x,\rmd y) {\ensuremath{\stackrel{\mathrm{def}}{=}}}\frac{g\_\theta(x,y) \theta(\rmd y)}{G\_\theta(x)} \eqsp.$$ Proof of Proposition  [drift:small:aees] ---------------------------------------- The case *k* = 1 is a consequence of E[EE3] since *P**θ*(1) = *P*(1) for any *θ* so that *π**θ*(1) ∝ *π**β*1. We now consider the case *k* ∈ {2, ⋯, *K*}: in the proof below, for ease of notations we will write *P*, *P**θ*, *W*, *λ*, *b* and *π**θ* instead of *P*(*k*), *P**θ*(*k*), *W**k*, *λ**k*, *b**k* and *π**θ*(*k*). *(a)* Let *m* ≥ 1 and *θ* ∈ Θ*m*. By definition of *g**θ* (see ([eq:FonctionSelectionEE])) and of Θ*m* (see ([eq:ThetaM])), $1/m \leq \int g\_{\theta}(x,y) \theta(\rmd y) \leq S$. Moreover, by E[EE3][EE3:b] $$P\_{\theta}W(x) = (1-\InterractionProba) PW(x) + \InterractionProba K\_{\theta}W(x) \leq (1-\InterractionProba) (\lambda W(x) + b )+ \InterractionProba K\_{\theta}W(x) \eqsp.$$ We have by ([eq:DefinitionKthetaAEE]), ([eq:DefinitionW]) and ([eq:ThetaTilde]) $$\begin{aligned} K\_\theta W(x) &=W(x)+ \int W(y) \alpha\_{\theta}(x,y) \left( 1-\frac{\pi^{\tau\_k \beta\_k}(y)}{\pi^{\tau\_k \beta\_k}(x)}\right) \tilde{\theta} (x,\rmd y) \eqsp.\end{aligned}$$ By ([eq:DefinitionAlphaAEE]), $$K\_\theta W(x) \leq W(x) + m \int\_{\{y, \pi(y) \leq \pi(x) \}} W(y) \frac{\pi^{\beta\_{k} -\beta\_{k-1}}(y)}{\pi^{\beta\_{k} -\beta\_{k-1}}(x)} \left( 1-\frac{\pi^{\tau\_k \beta\_k}(y)}{\pi^{\tau\_k \beta\_k}(x)}\right) g\_\theta(x,y) {\theta} (\rmd y) \eqsp.$$ Defining *ψ* by *ψ*(*σ*) = *σ*/(*σ* + 1)(*σ* + 1)/*σ* gives the upper bound sup*z* ∈ [0, 1]*z*(1 − *z**σ*) ≤ *ψ*(*σ*). Hence, *K**θ**W*(*x*) ≤ *W*(*x*) + *S**m* *ψ*(*τ**k**β**k*/(*β**k* − *β**k* − 1))*θ*(*W*). This yields $P\_\theta W(x) \leq \tilde \lambda W(x) + \tilde b m \theta(W)$ with $\tilde{\lambda} = (1-\InterractionProba)\lambda+\InterractionProba<1$ and $\tilde b = \InterractionProba S \ \psi \left(\tau\_k \beta\_k /(\beta\_{k} - \beta\_{k-1}) \right) + (1-\InterractionProba) b$. The minorization condition comes from the lower bound $ P\_{\theta}(x,A) \geq (1-\InterractionProba) P(x,A)$. *(b)* Let *m* ≥ 1 and *θ* ∈ Θ*m*. By E[EE3][EE3:a], *P* is *φ*-irreducible and so is *P**θ*; *P**θ* possesses a 1-small set and is thus aperiodic. In addition, $ P\_{{\theta}}W \leq ( 1 + \tilde \lambda) W/ 2 + \tilde{b} {\theta}(W) \un\_{\{W \leq c \}}$, with $c{\ensuremath{\stackrel{\mathrm{def}}{=}}}2 \tilde{b} m \ {\theta}(W) (1-\tilde{\lambda})^{-1}$ and {*W* ≤ *c*} is a 1-small set for *P**θ*. By, *π**θ* exists and $\pi\_\theta(W) \leq \tilde b m \theta(W) (1-\tilde \lambda)^{-1}$. Ergodic behavior ---------------- [lemme:unif:ergo] Assume E[EE1][EE1:a] and E[EE3]. Then for all *a* ∈ (0, 1), for all *m* ≥ 1 and all *θ* ∈ Θ*m*, there exist *C**θ* and *ρ**θ* ∈ (0, 1) such that for all *x* ∈ **X** and any *j* ≥ 1 and any *k* ∈ {1, ⋯, *K*}, $$\begin{aligned} \label{control:normev} \|\left(P\_{{\theta}}^{(k)}\right)^j(x,.)-\pi\_{{\theta}}^{(k)}\|\_{W\_{k}^a} \leq C\_{{\theta}} \ \rho\_{{\theta}}^j \ W^a\_k(x) \eqsp. \end{aligned}$$ Let *k* ∈ {1, ⋯, *K* − 1} and assume in addition that lim*n* → ∞*θ**n*(*k*)(*W**k*) = *θ*⋆(*k*)(*W**k*) w.p.1. Then for any positive integer *q*, on the set ⋂*n* ≥ *q*{*θ**n* ∈ Θ*m*⋆} $$\begin{aligned} \limsup\_n \rho\_{\theta\_n^{(k)}}<1, \limsup\_n C\_{\theta\_n^{(k)}} < +\infty , {\mathbb{P}}-\text{a.s. } \eqsp. \label{lemme:unif:ergo:tool1} \end{aligned}$$ The proof in the case *k* = 1 is a consequence of E[EE3] and since *P**θ*(1) = *P*(1). Consider the case *k* ≥ 2. Here again, the dependence upon *k* is omitted: *P**θ*, *W*, *θ**n* denote *P**θ*(*k*), *W**k* and *θ**n*(*k*). *Proof of ([control:normev])* Let *a* ∈ (0, 1) and set *V* = *W**a*. By the Jensen’s inequality and Proposition [drift:small:aees], there exists $\bar \lambda \in (0,1)$ and $\bar b$ such that for any *m* ≥ 1 and any *θ* ∈ Θ*m*, $$P\_\theta V \leq \bar \lambda V + \bar b \, m \, \theta(W)^a \eqsp.$$ Let *m* ≥ 1 and *θ* ∈ Θ*m*. By, ([control:normev]) holds and there exist constants *C*, *γ* > 0 such that for any *θ* ∈ Θ*m*, $$C\_\theta \vee (1-\rho\_\theta)^{-1} \leq C \left( \bar{b} \ m \theta(W) \vee \delta\_\theta^{-1} \vee (1-\bar{\lambda})^{-1} \right)^\gamma \eqsp,$$ where *δ**θ* is the minorizing constant of *P**θ* on the set $\{x: W(x) \leq 2 \bar b m \, \theta(W) \, (1-\bar \lambda)^{-1} -1 \}$. *Proof of ([lemme:unif:ergo:tool1])* For all *ω* ∈ ⋂*n* ≥ *q*{*θ**n* ∈ Θ*m*⋆}, $$\limsup\_n \{ C\_{\theta\_n(\omega)} \vee (1-\rho\_{\theta\_n(\omega)})^{-1} \} \leq C \left( \bar{b} \, m \, \limsup\_n \theta\_n(W) \vee \limsup\_n \delta\_{\theta\_n(\omega)}^{-1} \vee (1-\bar{\lambda})^{-1} \right)^\gamma \eqsp.$$ Since limsup*n**θ**n*(*W*) = *θ*⋆(*W*) < ∞ w.p.1, limsup*n**δ**θ**n*(*ω*)− 1 < ∞ w.p.1. thus showing that on the set ⋂*n* ≥ *q*{*θ**n* ∈ Θ*m*⋆}, limsup*n*{*C**θ**n*(*ω*) ∨ (1 − *ρ**θ**n*(*ω*))− 1} < ∞. This implies ([lemme:unif:ergo:tool1]). Moment conditions ----------------- Let *m*⋆ > 0. Define for any positive integer *q* and any *k* ∈ {1, ⋯, *K* − 1}, *A**q*, *n*(*k*) = ⋂ℓ ≤ *k*⋂*q* ≤ *j* ≤ *n*{*θ**j*(ℓ) ∈ Θ*m*⋆} if $q \leq n$, and *A**q*, *n*(*k*) = Ω otherwise; by convention, *A**q*, *n*(0) = Ω for any *q*, *n* ≥ 0. [lemme:unif:moments] Assume E[EE1][EE1:a], E[EE3] and E[*W**k*(*Y*0(*k*))] < ∞ for any *k* ∈ {1, ⋯, *K*}. Then for any *k* ∈ {1, ⋯, *K*}, $$\label{lemme:unif:ergo:tool2} \sup\_{j\geq 1} {\mathbb{E}}\left[W\_k(Y\_j^{(k)}) \un\_{A\_{q,j-1}^{(k-1)}} \right] < \infty \eqsp.$$ The proof is by induction on *k*. The case *k* = 1 is a consequence of E[EE3] since *P**θ*(1) = *P*(1). Assume the property holds for *k* ∈ {2, ⋯, *K* − 1}. In this proof, *W**k* + 1, *P**θ*(*k* + 1), *θ**n*(*k*), *Y*(*k*), *Y*(*k* + 1), *P*(*k* + 1), *K**θ*(*k* + 1) will be denoted by *W*, *P**θ*, *θ**n*, *Y*, *X*, *P*, *K**θ*. By ([eq:interactingMCMC]) and Proposition [drift:small:aees] we obtain, for *j* > *q* $$\begin{aligned} {\mathbb{E}}\left[ W(X\_j) \un\_{A\_{q,j-1}^{(k)}} \right] & \leq {\mathbb{E}}\left[ P\_{\theta\_{j-1}} W(X\_{j-1}) \un\_{A\_{q,j-1}^{(k)}}\right] \\ &\leq \tilde{\lambda} {\mathbb{E}}\left[ W(X\_{j-1}) \un\_{A\_{q,j-2}^{(k)} }\right] +\tilde{b} \, m\_\star \, {\mathbb{E}}\left[\theta\_{j-1}(W) \un\_{A\_{q,j-1}^{(k-1)} } \right] \\ & \leq \tilde{\lambda} {\mathbb{E}}\left[ W(X\_{j-1}) \un\_{A\_{q,j-2}^{(k)} }\right] +\tilde{b} \, m\_\star \, \sup\_l {\mathbb{E}}\left[W(Y\_l) \un\_{A\_{q,l-1}^{(k-1)} } \right] \eqsp.\end{aligned}$$ Since *W**k* + 1 ∈ L*W**k*, the induction assumption implies that $ \sup\_l {\mathbb{E}}\left[W(Y\_l) \un\_{A\_{q,l-1}^{(k-1)} } \right]< \infty$. Iterating this inequality allows to write that for some constant *C*ʹ $$\sup\_{j \geq q} {\mathbb{E}}\left[ W(X\_j) \un\_{A\_{q,j-1}^{(k)}} \right] \leq C' \ {\mathbb{E}}\left[ W(X\_q) \right] \eqsp.$$ Finally, by definition of *P**θ**j*, either *P**θ**j* = *P* if $ {\theta}\_j \notin \bigcup\_m \Theta\_m$, or *P**θ**j* = (1 − *ε*)*P* + *ε**K**θ**j* otherwise; note that if *θ**j* ∈ ⋃*m*Θ*m* then *θ**j* ∈ Θ1/*j*. Since both *P* and *P**θ* for *θ* ∈ ⋃*m*Θ*m* satisfy a drift inequality (see E[EE3] and Proposition [drift:small:aees]), E[*W*(*X**q*)] < ∞ by ([eq:interactingMCMC]). Proof of Theorem [theo:tout] ============================ (*k*) [hyp:proof:subset] There exists *m*⋆ > 0 such that P(⋃*q* ≥ 1⋂*n* ≥ *q*{*θ**n*(*k*) ∈ Θ*m*⋆}) = 1. (*k*) [hyp:proof:pi] for any *a* ∈ (0, 1) and any continuous function *f* ∈ L*W**k**a*, $$\lim\_{n \to \infty} \pi\_{{\theta}\_n^{(k-1)}}^{(k)}(f) = {\theta}\_\star^{(k)}(f) \eqsp.$$ (*k*)[hyp:proof:ergo] For all bounded continuous function *f*, lim*n* → ∞E[*f*(*Y**n*(*k*))] = *θ*⋆(1)(*f*). (*k*) [hyp:proof:lln] *θ*⋆(*k*)(*W**k* + 1) <  + ∞, and for any $a \in (0, \frac{1 + \Gamma}{2} \wedge 1)$ and any continuous function *f* in L*W**k**a*, *θ**n*(*k*)(*f*) → *θ*⋆(*k*)(*f*) a.s. By Proposition [prop:lgn:first], the conditions R[hyp:proof:ergo] and R[hyp:proof:lln] hold for *k* = 1; R[hyp:proof:pi] also holds for *k* = 1 since *π**θ*(1) = *θ*⋆(1) for any *θ*. We assume that for any *j* ≤ *k*, for *k* ∈ {1, ⋯, *K* − 1}, the conditions R[hyp:proof:subset](*j* − 1), R[hyp:proof:pi](*j*), R[hyp:proof:ergo](*j*) and R[hyp:proof:lln](*j*) hold. We prove that R[hyp:proof:subset](*k*), R[hyp:proof:pi](*k* + 1), R[hyp:proof:ergo](*k* + 1) and R[hyp:proof:lln](*k* + 1) hold. To make the notations easier, the superscript *k* is dropped from the notations: the auxiliary process *Y*(*k*) will be denoted by *Y*, and the process *Y*(*k* + 1) by *X*; *P*(*k* + 1), *W**k* + 1, *K**θ*(*k* + 1), *P**θ*(*k* + 1), *α**θ*(*k* + 1), *π**θ*(*k* + 1) and *θ**n*(*k*), *θ*⋆(*k*) are resp. denoted by *P*, *W*, *K**θ*, *P**θ*, *α**θ*, *π**θ* and *θ**n*, *θ*⋆. Finally, we define the V-variation of the two kernels *P**θ* and *P**θ*ʹ by: $$D\_V(\theta,\theta')=\sup\_{x\in \mathbf{X}} \left( \frac{\|P\_{\theta}(x,.)-P\_{\theta'}(x,.) \|\_V}{V(x)}\right) \eqsp.$$ When *V* = 1, we will simply write *D*. Proof of R[hyp:proof:subset](*k*) --------------------------------- The proof is prefaced with a preliminary lemma. [lemme:conv:h] For all *l* ∈ {1, ⋯, *S* − 1} and any *θ*, *θ*ʹ, $$\sup\_{x \in \mathbf{X}} \left|h\_{\theta,l}(x) - h\_{\theta',l}(x) \right| \leq \frac{1}{r} \sup\_{l \in \{1,\cdots, S-1 \}}\left| {\xi}\_{\theta,l} - {\xi}\_{\theta',l} \right| \eqsp.$$ Note that ∣(1 − *a*)+ − (1 − *b*)+∣ ≤ ∣*b* − *a*∣. Therefore, for all *x* ∈ **X** : $$|h\_{\theta,l}(x) - h\_{\theta',l}(x)| \leq \frac{\left| d({\pi\_u}(x), {H}\_{\theta,l}) -d({\pi\_u}(x),{H}\_{\theta',l}) \right|}{r} \eqsp.$$ This concludes the proof. *(Proof of R[hyp:proof:subset](*k*))* We prove there exist an integer *m*⋆ ≥ 1 and a positive r.v. *N* such that $${\mathbb{P}}(N < \infty)=1 \eqsp, \qquad {\mathbb{P}}\left( \bigcap\_{n \geq N} \left \{ \inf\_x \int g\_{\theta\_n}(x,y) \theta\_n(\rmd y) \geq 1/m\_\star \right\}\right)=1 \eqsp.$$ To that goal, we prove that with probability 1, for all *n* large enough, $$\begin{aligned} \inf\_x \int g\_{\theta\_n}(x,y) \theta\_n(\rmd y) \geq \inf\_{\ell \in \{1, \cdots, S-1 \}} \int h\_{\theta\_\star, \ell}(y) \ \theta\_\star(\rmd y) \eqsp, \label{eq:tool2:prop:subsetTheta}\end{aligned}$$ and use the assumption E[EE:ring][ring:c]. For all *x* and *θ*, there exists a ring index *l**x*, *θ* ∈ {1, ⋯, *S*} such that *π**u*(*x*) ∈ *H**θ*, *l**x*, *θ*. Upon noting that *d*(*π**u*(*x*), *H**θ*, ℓ*x*, *θ*) = 0, it holds $$\begin{aligned} \liminf\_n \inf\_x \int g\_{\theta\_n}(x,y) \theta\_n(\rmd y) \geq \liminf\_n \inf\_{l \in \{1, \cdots, S\}} \int h\_{\theta\_n, l}(y) \theta\_n(\rmd y) \eqsp.\end{aligned}$$ We write $$\begin{aligned} \int h\_{\theta\_n, l}(y) \theta\_n(\rmd y) & \geq \int h\_{\theta\_\star, l}(y) \theta\_n(\rmd y) - \int \left| h\_{\theta\_n, l}(y) - h\_{\theta\_\star, l}(y) \right| \theta\_n(\rmd y) \\ & \geq \int h\_{\theta\_\star, l}(y) \theta\_n(\rmd y) - \sup\_{y \in {\mathbf{X}}} \left| h\_{\theta\_n, l}(y) - h\_{\theta\_\star, l}(y) \right| \eqsp.\end{aligned}$$ By definition of *h**θ*⋆, ℓ, *y* ↦ *h**θ*⋆, *l*(*y*) is continuous and bounded. Therefore, by R[hyp:proof:lln](k), Lemma [lemme:conv:h] and E[EE:ring][ring:a], the proof of ([eq:tool2:prop:subsetTheta]) is concluded by $$\liminf\_n \int h\_{\theta\_n, l}(y) \theta\_n(\rmd y) > \int h\_{\theta\_\star, l}(y) \theta\_\star(\rmd y) \eqsp.$$ Proof of R[hyp:proof:pi](*k* + 1) --------------------------------- First of all, observe that by definition of *π**θ* (see Proposition [drift:small:aees]) and the expression of *P**θ*, *π**θ*⋆ ∝ *π**β**k* + 1. We check the conditions of. By Proposition [prop:subsetTheta] it is sufficient to prove that for any *q* ≥ 1, $\lim\_{n \to \infty} |\pi\_{\theta\_n}(f) - \pi\_{\theta\_\star}(f) | \un\_{\bigcap\_{j \geq q} \{\theta\_j \in {\Theta}\_{m\_\star} \}} =0$ w.p.1. #### Case *f* bounded Lemma [lemme:unif:ergo] and R[hyp:proof:lln](*k*) show that on the set ⋂*j* ≥ *q*{*θ**j* ∈ Θ*m*⋆}, limsup*n**C**θ**n* < ∞ and limsup*n*(1 − *ρ**θ**n*)− 1 < ∞ w.p.1. Equicontinuity of the class {*P**θ**f*, *θ* ∈ Θ*m*⋆}, where *f* is a bounded continuous function on **X**, will follow from Lemmas [lem:ReguX:G] to [lem:Equicont]. Finally, the weak convergence of the transition kernels is proved in Lemma [lem:AScvgNoyaux]. #### Case *f* unbounded Following the same lines as in the proof of, it can be proved that the above discussion for *f* bounded and Proposition [drift:small:aees] imply $$\begin{aligned} \lim\_{n \to \infty} \{\pi\_{\theta\_n}(f) - \pi\_{{\theta}\_\star}(f) \}\un\_{\cap\_{j \geq q} \{\theta\_j \in \Theta\_{m\_\star} \}} =0\end{aligned}$$ w.p.1. for any continuous function *f* such that ∣*f*∣*W**k* + 1*a* < ∞. [lem:ReguX:G] For all *θ* ∈ ⋃*m*Θ*m*, and *x*, *x*ʹ, $\sup\_y \left| g\_\theta(x,y) - g\_\theta(x',y) \right| \leq \frac{S}{r} | \pi(x) - \pi(x')|$. By ([eq:FonctionSelectionEE]), $$\begin{aligned} |g\_{\theta}(x,y)-g\_{\theta}(x',y)| \leq \sum\_{l=1}^S |h\_{\theta,l}(x)-h\_{\theta,l}(x')| h\_{\theta,l}(y) \leq \sum\_{l=1}^S |h\_{\theta,l}(x)-h\_{\theta,l}(x')| \eqsp.\end{aligned}$$ The proof is completed since $$|h\_{\theta,l}(x)-h\_{\theta,l}(x')| \leq \frac{|d(\pi(x),{H}\_{\theta,l})-d(\pi(x'),{H}\_{\theta,l})|}{r} \leq \frac{|\pi(x)-\pi(x')|}{r} \eqsp.$$ [lem:ReguX:alpha] Assume E[EE1][EE1:a]. For all *m* ≥ 1, there exists a constant *C**m* such that for all *x*, *x*ʹ, *y*, *y*ʹ ∈ **X** and *θ* ∈ Θ*m* $$\begin{aligned} \label{majoration:alpha:x} \left| \alpha\_\theta(x,y) - \alpha\_{\theta}(x',y) \right| &\leq C\_m \, \left[ \left| \pi^{\beta\_k-\beta\_{k+1}}(x)-\pi^{\beta\_k-\beta\_{k+1}}(x') \right| + \left| \pi(x) - \pi(x')\right| \right] \eqsp, \\ \label{majoration:alpha:y} \left| \alpha\_\theta(x,y) - \alpha\_{\theta}(x,y') \right| & \leq C\_m \left[\left| \pi^{\beta\_k-\beta\_{k+1}}(y)-\pi^{\beta\_k-\beta\_{k+1}}(y') \right| + \left| \pi(y) - \pi(y')\right| \right] \eqsp.\end{aligned}$$ By definition of *α**θ* (see ([eq:DefinitionAlphaAEE])), *α**θ*(*x*, *y*) − *α**θ*(*x*ʹ, *y*) = (1 ∧ *a*) − (1 ∧ *b*), with $$\begin{aligned} a=\frac{\pi^{\beta\_{k+1}-\beta\_{k}}(y)}{\pi^{\beta\_{k+1}-\beta\_{k}}(x)} \frac{\int g\_{\theta}(x,z) \theta(\rmd z)}{\int g\_{\theta}(y,z) \theta(\rmd z) } \quad \textrm{and} \quad b=\frac{\pi^{\beta\_{k+1}-\beta\_{k}}(y)}{\pi^{\beta\_{k+1}-\beta\_{k}}(x')} \frac{\int g\_{\theta}(x',z) \theta(\rmd z)}{\int g\_{\theta}(y,z) \theta(\rmd z) } \eqsp.\end{aligned}$$ Note that $|(1 \wedge a) - (1 \wedge b)| \leq |a-b| \left( \un\_{a \leq 1} + \un\_{ b \leq 1, a > 1 } \right)$. By symmetry, we can assume that *b* ≤ 1 and this implies $$\begin{aligned} \frac{\pi^{\beta\_{k+1}-\beta\_{k}}(y)}{\pi^{\beta\_{k+1}-\beta\_{k}}(x')} \leq \frac{\int g\_{\theta}(y,z) \theta(\rmd z) }{\int g\_{\theta}(x',z) \theta(\rmd z)} \leq S m \eqsp,\end{aligned}$$ since *g**θ*(*x*, *y*) ≤ *S*. Therefore, $$\begin{gathered} |a-b| = \frac{\pi^{\beta\_{k+1}-\beta\_{k}}(y)}{\int g\_{\theta}(y,z) \theta(\rmd z) } \left|\frac{\int g\_{\theta}(x,z) \theta(\rmd z)}{\pi^{\beta\_{k+1}-\beta\_{k}}(x)} -\frac{\int g\_{\theta}(x',z) \theta(\rmd z)}{\pi^{\beta\_{k+1}-\beta\_{k}}(x')} \right| \\ \leq S m \left[ \pi^{\beta\_{k+1}-\beta\_{k}}(y) \left| \pi^{\beta\_k - \beta\_{k+1}}(x)-\pi^{\beta\_k -\beta\_{k+1}}(x') \right| + m \left|\int (g\_{\theta}(x,z) - g\_{\theta}(x',z)) \theta(\rmd z) \right| \right] \eqsp.\end{gathered}$$ The proof of ([majoration:alpha:x]) is concluded by Lemma [lem:ReguX:G]. The proof of ([majoration:alpha:y]) is on the same lines and omitted. [lem:Equicont] Assume E[EE1] and E[EE3][EE3:a]. For any *m* ≥ 1 and for any continuous bounded function *f*, the class of functions {*P**θ**f*, *θ* ∈ Θ*m*} is equicontinuous. Let *f* be a continuous function on **X**, bounded by 1. Let *m* ≥ 1 and *θ* ∈ Θ*m*. We have $$\begin{aligned} P\_\theta f(x) - P\_{\theta} f(x') = & (1-\InterractionProba) \left( Pf(x) - Pf(x') \right) + \InterractionProba \left(f(x) - f(x') \right) \left(1-\int \alpha\_\theta(x',y) \tilde \theta(x,\rmd y)\right) \\ & + \InterractionProba \int \left( f(y) -f(x) \right) \left( \alpha\_\theta(x,y) - \alpha\_{\theta}(x',y) \right) \tilde {\theta}(x,\rmd y) \\ & + \InterractionProba \int \alpha\_{\theta}(x',y) (f(y)-f(x')) (\tilde{\theta}(x,\rmd y)-\tilde{\theta}(x',\rmd y)) \eqsp,\end{aligned}$$ where $\tilde \theta$ is given by ([eq:ThetaTilde]). This yields to $$\begin{aligned} \left| P\_\theta f(x) - P\_{\theta} f(x') \right| \leq & \left| Pf(x) - Pf(x') \right| + \left|f(x) -f(x') \right| \\ & + 2 \sup\_y \left| \alpha\_\theta(x,y)- \alpha\_{\theta}(x',y)\right| +2 \left\| \tilde \theta(x,.) - \tilde {\theta}(x',.) \right\|\_{\mathrm{TV}} \eqsp.\end{aligned}$$ We have $$\begin{aligned} \|\tilde {\theta}(x,.) - \tilde {\theta}(x',.) \|\_{\mathrm{TV}} & \leq \frac{1}{G\_\theta(x)} \sup\_y \left| g\_\theta(x,y) - g\_\theta(x',y) \right| + \frac{S}{G\_\theta(x) G\_\theta(x')} \left| G\_\theta(x) - G\_\theta(x') \right| \\ & \leq m \sup\_y \left| g\_\theta(x,y) - g\_\theta(x',y) \right| + S m^2 \sup\_y \left| g\_\theta(x,y) - g\_\theta(x',y) \right| \eqsp,\end{aligned}$$ where *G**θ* is given by ([eq:ThetaTilde]). So Lemmas [lem:ReguX:G] and [lem:ReguX:alpha] imply that for all *m* ≥ 1, there exists a constant *C**m* such that for all *θ* ∈ Θ*m*: $$\begin{aligned} \label{difference:pteta:x} \left| P\_\theta f(x) - P\_{\theta} f(x') \right| \leq & \left| Pf(x) - Pf(x') \right| + \left|f(x) -f(x') \right| \nonumber\\ & + C\_m \left( | \pi(x) - \pi(x')|+ |\pi^{\beta\_k-\beta\_{k+1}}(x) - \pi^{\beta\_k-\beta\_{k+1}}(x') |\right) \eqsp.\end{aligned}$$ The proof is concluded since *P* is Feller and *π* is continuous. [lem:AScvgNoyaux] Let *m* ≥ 1. Assume E[EE1], E[EE:ring][ring:a] and R[hyp:proof:lln](k). For all *x* ∈ **X**, there exists a set Ω*x* such that P(Ω*x*) = 1 and for all *ω* ∈ Ω*x* and any bounded continuous function *f* $$\lim\_{n \to \infty} \left| P\_{{\theta}\_n(\omega)}f(x) - P\_{{\theta}\_\star}f(x) \right| \un\_{\bigcap\_j \{\theta\_j \in {\Theta}\_m \}}= 0 \eqsp.$$ Following the same lines as in the proof of, it is sufficient to prove that for any *x* ∈ **X** and any bounded continuous function *f*, lim*n* → ∞*P**θ**n*(*f*) = *P**θ*⋆(*f*) w.p.1 on the set ⋂*j*{*θ**j* ∈ Θ*m*}. Let *f* and *x* be fixed. We write $$\begin{aligned} \label{diff:P} P\_\theta f(x) - P\_{\theta'} f(x) = & \InterractionProba \int \left( \alpha\_\theta(x,y) - \alpha\_{\theta'}(x,y) \right) \left(f(y) -f(x) \right) \tilde \theta(x, \rmd y) \nonumber \\ & + \InterractionProba \int \alpha\_{\theta'}(x,y) \left(f(y) -f(x) \right) \left( \tilde{{\theta}}(x,\rmd y) - \tilde{{\theta}'}(x,\rmd y) \right) \eqsp,\end{aligned}$$ where $\tilde \theta$ is given by ([eq:ThetaTilde]). Moreover, $$\begin{aligned} \tilde{{\theta}}(x,\rmd y) - \tilde{{\theta}'}(x,\rmd y) & =\frac{g\_\theta(x,y) \theta(\rmd y) -g\_{\theta'}(x,y) \theta'(\rmd y)}{G\_{\theta}(x)} + g\_{\theta'}(x,y) \theta'(\rmd y) \, \frac{(G\_{{\theta}'}(x) - G\_{\theta}(x))}{G\_{\theta}(x)G\_{{\theta}'}(x) } \eqsp.\end{aligned}$$ This yields to $$\begin{gathered} \InterractionProba^{-1} \left( P\_{\theta\_n}f(x) - P\_{\theta\_\star} f(x) \right) = \int \left( \alpha\_{\theta\_n}(x,y) - \alpha\_{\theta\_\star}(x,y) \right)\ \left( f(y) - f(x) \right) \tilde \theta\_n(x, \rmd y) \\ - \int \frac{g\_{\theta\_{\star}}(x,y)}{G\_{\theta\_n}(x)}F(x,y)\left( \theta\_\star( \rmd y) - \theta\_n(\rmd y) \right) \\ - \int F(x,y) \left( (g\_{{\theta}\_n}(x,y)-g\_{{\theta}\_{\star}}(x,y)) \frac{{\theta}\_n(\rmd y)}{G\_{{\theta}\_n}(x)} + g\_{\theta\_{\star}}(x,y) \theta\_{\star}(\rmd y) \, \frac{(G\_{{\theta}\_{\star}}(x) - G\_{{\theta}\_n}(x))}{G\_{{\theta}\_n}(x)G\_{{\theta}\_{\star}}(x) } \right) \eqsp, \end{gathered}$$ where *F*(*x*, *y*) = *α**θ*⋆(*x*, *y*)(*f*(*y*) − *f*(*x*)). There exists a constant *C**m* such that on the set ⋂*n*{*θ**n* ∈ Θ*m*}, (see the proof of Lemma [lem:ReguX:alpha] for similar upper bounds) $$\begin{gathered} \left| \alpha\_{\theta\_n}(x,y) - \alpha\_{\theta\_\star}(x,y)\right| \leq C\_m \, \left| \frac{G\_{\theta\_n}(x)}{G\_{\theta\_n}(y)} - \frac{G\_{\theta\_\star}(x)}{G\_{\theta\_\star}(y)}\right| \\ \leq m^2 S C\_m \, \left(\left| G\_{\theta\_n}(x) - G\_{\theta\_\star}(x) \right|+\left| G\_{\theta\_n}(y) - G\_{\theta\_\star}(y) \right| \right) \end{gathered}$$ where *G**θ*(*x*) is defined by ([eq:ThetaTilde]). We write by definition of the function *g**θ* (see ([eq:FonctionSelectionEE])) $$\begin{gathered} \sup\_{x} \left| G\_{\theta\_n}(x) - G\_{\theta\_\star}(x) \right| \leq \sup\_{x,z} |g\_{\theta\_n}(x,z) - g\_{\theta\_\star}(x,z)| + \sup\_{x} \left| \int g\_{\theta\_\star}(x,z) \theta\_n(\rmd z) - \int g\_{\theta\_\star}(x,z) \theta\_\star(\rmd z) \right| \\ \leq 2 \sum\_{l=1}^S \sup\_z |h\_{\theta\_n,l}(z) - h\_{\theta\_\star,l}(z)| + \sum\_{l=1}^S \left| \int h\_{\theta\_\star,l}(z) \theta\_n(\rmd z) - \int h\_{\theta\_\star,l}(z) \theta\_\star(\rmd z) \right| \eqsp. \end{gathered}$$ By Lemma [lemme:conv:h] and E[EE:ring][ring:a], the first term converges to zero w.p.1. Since *t* ↦ *h**θ*⋆, *l*(*t*) is continuous, R[hyp:proof:lln](k) implies that the second term tends to zero w.p.1. Therefore, on the set ⋂*n*{*θ**n* ∈ Θ*m*}, sup*x*, *y*∣*α**θ**n*(*x*, *y*) − *α**θ*⋆(*x*, *y*)∣ converges to zero w.p.1, as well as sup*x*, *y*∣*g**θ**n*(*x*, *y*) − *g**θ*⋆(*x*, *y*)∣, and sup*x*∣*G**θ**n*(*x*) − *G**θ*⋆(*x*)∣. Note that by Lemma [lem:ReguX:alpha], *y* ↦ *F*(*x*, *y*) is bounded and continuous. Therefore, following the same lines as above, it can be proved that under R[hyp:proof:lln](k) and E[EE:ring][ring:a], on the set ⋂*n*{*θ**n* ∈ Θ*m*}, $\lim\_{n \to \infty} |\int F(x,y)\theta\_n(x, \rmd y) -\int F(x,y) \theta\_\star(x, \rmd y)|$ =0 w.p.1 Proof of R[hyp:proof:ergo](k+1) ------------------------------- We check the conditions of. Let *f* be a bounded continuous function on **X**. By R[hyp:proof:pi](k+1), lim*n* → ∞*π**θ**n*(*f*) = *π**θ*⋆(*f*) ∝ *π**β**k* + 1 w.p.1. Let *δ* > 0. By Proposition [prop:subsetTheta], there exists *q* ≥ 1 such that P(⋂*n* ≥ *q*{*θ**n* ∈ Θ*m*⋆}) ≥ 1 − *δ*. Following the same lines as in the proof of, it can be proved by using Lemmas [lemme:unif:ergo], [lemme:conv:h] and [lemme:cv:dv:proba] and the condition E[EE:ring][ring:b] that $\lim\_{n \to \infty} {\mathbb{E}}\left[ \left(f(X\_n) - \pi\_{\theta\_n}(f) \right) \un\_{\bigcap\_{n \geq q} \{\theta\_n \in {\Theta}\_{m\_\star}\}} \right] =0$. This concludes the proof. [lemme:cv:dv:proba] For all *m* ≥ 1, there exists a constant *C**m* such that for any *θ*, *θ*ʹ ∈ Θ*m*, $$D(\theta, \theta') \leq C\_m \left( \| \theta - \theta' \|\_{\mathrm{TV}} + \sup\_{l, x} \left| h\_{\theta,l}(x) -h\_{\theta',l}(x) \right|\right) \eqsp.$$ By definition of *P**θ*, for all function *f* bounded by 1, ([diff:P]) holds. So $$\begin{aligned} D(\theta,\theta') &\leq 2 \InterractionProba \sup\_{x,y} |\alpha\_{\theta}(x,y) - \alpha\_{\theta'}(x,y) | \\ & +2 \InterractionProba S m^2 \, \left(\sup\_{x,y} |g\_{\theta}(x,y)- g\_{\theta'}(x,y)| + \|{\theta}- {\theta}'\|\_{\mathrm{TV}} + \sup\_x |G\_{{\theta}'}(x) - G\_{\theta}(x)| \right) \eqsp.\end{aligned}$$ The term ∣*α**θ*(*x*, *y*) − *α**θ*ʹ(*x*, *y*)∣ is equal to ∣1 ∧ *a* − 1 ∧ *b*∣ with $$\begin{aligned} a= \frac{\pi^{ \beta\_{k+1}-\beta\_k}(y) \int g\_{\theta} (x,z) \theta(\rmd z)}{\pi^{\beta\_{k+1}-\beta\_k}(x)\int g\_{\theta} (y,z) \theta(\rmd z)} \quad \textrm{and} \quad b= \frac{\pi^{\beta\_{k+1}-\beta\_k}(y) \int g\_{\theta'} (x,z) \theta'(\rmd z)}{\pi^{\beta\_{k+1}-\beta\_k}(x)\int g\_{\theta'} (y,z) \theta'(\rmd z)} \eqsp.\end{aligned}$$ Note that $ |1 \wedge a - 1 \wedge b| \leq |b-a| \left( \un\_{\{b \leq 1, a > 1 \}} + \un\_{a \leq 1} \right)$. Therefore, for all *θ*, *θ*ʹ ∈ Θ*m*, $$\sup\_{x,y}| \alpha\_\theta(x,y) - \alpha\_{\theta'}(x,y)| \leq S^2 m^2 \ \left(\sup\_{x,y}| g\_\theta(x,y) - g\_{\theta'}(x,y) | + \|\theta-\theta' \|\_{\mathrm{TV}} \right) \eqsp.$$ The term ∣*G**θ*ʹ(*x*) − *G**θ*(*x*)∣ is upper bounded by $$\begin{aligned} |G\_{{\theta}'}(x) - G\_{\theta}(x)| \leq \sup\_{x,y} | g\_\theta(x,y) - g\_{\theta'}(x,y) | + S \|\theta-\theta' \|\_{\mathrm{TV}} \eqsp.\end{aligned}$$ Moreover, $$\left| g\_{\theta}(x,y)-g\_{\theta'}(x,y) \right| = \left| \sum\_{l=1}^S [h\_{\theta,l}(x)h\_{\theta,l}(y)-h\_{\theta',l}(x)h\_{\theta',l}(y)] \right| \leq 2S \sup\_{l,x} |h\_{\theta,l}(x)-h\_{\theta',l}(x)| \eqsp.$$ This concludes the proof. Proof of R[hyp:proof:lln](*k* + 1) ---------------------------------- Let $a \in (0,\frac{1+{\Gamma}}{2} \wedge 1)$ and set *V* = *W**a*. We check the conditions of. By Proposition [drift:small:aees], condition A3 of holds. By R[hyp:proof:pi](*k* + 1), lim*n* → ∞*π**θ**n*(*f*) = *π**θ*⋆(*f*) w.p.1 for any continuous function *f* in L*W**a*. Condition A4 (resp. A5) of is proved in Lemma [lemma:A5] (resp. Lemma [lemma:a6]). [lemma:A5] Assume E[EE1], E[EE3], E[EE:ring], R[hyp:proof:lln](k), R[hyp:proof:subset](j) and E[*W**j*(*Y*0(*j*))] < ∞ for all *j* ≤ *k*. Then for any $a \in (0,\frac{1+{\Gamma}}{2} \wedge 1)$ ∑*j* ≥ 1*j*− 1(*L**θ**j* ∨ *L**θ**j* − 1)6*D**W**a*(*θ**j*, *θ**j* − 1)*W**a*(*X**j*) < ∞   P − a.s., where *L**θ* = *C**θ* ∨ (1 − *ρ**θ*)− 1. By R[hyp:proof:subset](j) for all *j* ≤ *k*, it is sufficient to prove that for any positive integer *q* $$\sum\_{j \geq 1} j^{-1} (L\_{\theta\_j} \vee L\_{\theta\_{j-1}})^6 D\_V(\theta\_j,\theta\_{j-1}) V(X\_j) \ \un\_{A\_{q,j}^{(k)}} < \infty \qquad {\mathbb{P}}-\text{a.s.}$$ where *A**q*, *j*(*k*) is defined in Appendix [app:lemme:unif:moments]. Following the same lines as in the proof of Lemma [lemme:unif:moments], it can be proved that ∑*j* = 1*q**j*− 1(*L**θ**j* ∨ *L**θ**j* − 1)6*D**V*(*θ**j*, *θ**j* − 1)*V*(*X**j*) < ∞ w.p.1. By Lemma [lemme:unif:ergo] and R[hyp:proof:lln](k), on the set ⋂*l* ≥ *q*{*θ**l* ∈ Θ*m*⋆}, limsup*n**L**θ**n* < ∞ w.p.1. Therefore, we have to prove that $ \sum\_{j \geq q} j^{-1} D\_V(\theta\_j,\theta\_{j-1}) V(X\_j) \un\_{A\_{q,j}^{(k)}} < \infty$ w.p.1. Following the same lines as in the proof of Lemma [lemme:cv:dv:proba], we obtain that on the set *A**q*, *j*(*k*), there exists a constant *C**m* such that $$\begin{aligned} D\_V(\theta\_j,\theta\_{j-1}) & \leq C\_m \left( \sup\_{l} \left| {\xi}\_{\theta\_j,l} -{\xi}\_{\theta\_{j-1},l} \right| + \| \theta\_j - \theta\_{j-1} \|\_{\mathrm{TV}} \right) \left(\|\theta\_j\|\_V + \|\theta\_{j-1}\|\_V \right) \\ & + C\_m \ \| \theta\_j-\theta\_{j-1}\|\_V \eqsp.\end{aligned}$$ Set *s*, *γ*, such that *s* = 1 ∨ (2*a*) < 1 + *γ* < 1 + Γ. By E[EE:ring][ring:b], there exists a r.v. *Z* finite w.p.1 such that P-a.s. $$\begin{aligned} |{\xi}\_{{\theta}\_n,l}-{\xi}\_{{\theta}\_{n-1},l}| + \| \theta\_n - \theta\_{n-1} \|\_{\mathrm{TV}} \leq Z \left( \frac{1}{ n^{{\gamma}}} + \frac{1}{n} \right) \eqsp.\end{aligned}$$ Therefore, it holds $$\begin{gathered} \mathcal{I}\_{\gamma}{\ensuremath{\stackrel{\mathrm{def}}{=}}}{\mathbb{E}}\left[\left( \sum\_{j \geq q} j^{-1} \left( j^{-{\gamma}} + j^{-1} \right) \left(\|\theta\_j\|\_V + \|\theta\_{j-1}\|\_V \right)V(X\_j) \un\_{A\_{q,j}^{(k)}} \right)^{\frac{1}{s}} \right] \\ \leq \sum\_{j \geq q} j^{-1/s} \left( j^{-{\gamma}/s} + j^{-1/s} \right) {\mathbb{E}}\left[ \left(\|\theta\_j\|\_V + \|\theta\_{j-1}\|\_V \right)^{\frac{1}{s}} V^{\frac{1}{s}}(X\_j) \un\_{A\_{q,j}^{(k)}} \right] \eqsp.\end{gathered}$$ We have, $$\mathcal{I}\_{\gamma}\leq 2 C({\gamma}) \ \sup\_j{\mathbb{E}}\left[\left(\|\theta\_j\|\_V \right)^{\frac{2}{s}} \un\_{A\_{q,j}^{(k)}} \right]^{1/2} \ \sup\_j {\mathbb{E}}\left[V(X\_j)^{\frac{2}{s}} \un\_{A\_{q,j}^{(k)}} \right]^{1/2} \ \eqsp,$$ where $C( {\gamma}) {\ensuremath{\stackrel{\mathrm{def}}{=}}}\sum\_{j \geq q} \left( j^{(-1-{\gamma})/s} + j^{-2/s} \right)$ is finite since 2/*s* > 1 and 1 + *γ* > *s*. Since *V*2/*s* ≤ *W*, Lemma [lemme:unif:moments] implies that $\sup\_j {\mathbb{E}}\left[ W(X\_j) \un\_{A\_{q,j}^{(k)}} \right] < \infty$. In addition, since 2/*s* > 1 we have, by Jensen’s inequality, $$\begin{aligned} {\mathbb{E}}\left[\|\theta\_j\|\_V^{\frac{2}{s}} \un\_{A\_{q,j}^{(k)}} \right] & \leq {\mathbb{E}}\left[\left( \frac{1}{j} \sum\_{p=1}^j V(Y\_p) \right)^{\frac{2}{s}} \un\_{A\_{q,j}^{(k-1)}} \right] \leq {\mathbb{E}}\left[ \frac{1}{j} \sum\_{p=1}^j V^{\frac{2}{s}}(Y\_p) \un\_{A\_{q,j}^{(k-1)}} \right] \\ & \leq \sup\_p {\mathbb{E}}\left[ W(Y\_p) \un\_{A\_{q,p-1}^{(k-1)}} \right]\end{aligned}$$ which is finite under Lemma [lemme:unif:moments]. Similarly, we prove that $ \sum\_{j \geq q} j^{-1} \| \theta\_j-\theta\_{j-1}\|\_V V(X\_j) \un\_{A\_{q,j}^{(k)}} < \infty$ w.p.1, upon noting that ∥*θ**j* − *θ**j* − 1∥*V* ≤ *j*− 1(*V*(*Y**j*) + *θ**j* − 1(*V*)). [lemma:a6] Assume E[EE1], E[EE3], E[EE:ring][ring:c]-[ring:a], R[hyp:proof:lln](k), R[hyp:proof:subset](j) and E[*W**j*(*Y*0(*j*))] < ∞ for all *j* ≤ *k*. For any *a* ∈ (0, 1), $$\sum\_{j \geq 1} j^{-1/a} L\_{\theta\_j}^{2/a} \ P\_{\theta\_j} W(X\_j) < \infty \eqsp, \qquad {\mathbb{P}}-\text{a.s.}$$ By R[hyp:proof:subset](j) for all *j* ≤ *k*, it is sufficient to prove that for any positive integer *q* $$\sum\_{j \geq 1} j^{-1/a} L\_{\theta\_j}^{2/a} \ P\_{\theta\_j} W(X\_j) \ \un\_{A\_{q,j}^{(k)}} < \infty \qquad {\mathbb{P}}-\text{a.s.}$$ where *A**q*, *j*(*k*) is defined in Appendix [app:lemme:unif:moments]. Let *q* ≥ 1. By Lemma [lemme:unif:ergo], $\sup\_j L\_{\theta\_{j}} \un\_{A\_{q,j}^{(k)}} < \infty$ w.p.1; and, as in the proof of Lemma [lemme:unif:moments], it can be proved that $\sup\_j {\mathbb{E}}\left[ P\_{\theta\_j} W(X\_j) \un\_{A\_{q,j}^{(k)} } \right] < \infty$. The proof is concluded since ∑*k**k*− 1/*a* < ∞. Proof of Proposition [prop:vitesse:quantiles] --------------------------------------------- The proof uses a Hoeffding inequality for (non-stationary) Markov chains. The following result is proved in. [hoeffding] Let (*Y**k*)*k* ∈ N be a Markov chain on (**X**, X), with transition kernel *Q* and initial distribution *η*. Assume *Q* is $\overline{W}$-uniformly ergodic, and denote by *θ*⋆ its unique invariant distribution. Then there exists a constant *K* such that for any *t* > 0 and for any bounded function *f* : **X** → R $${\mathbb{P}}\left( \sum\_{i=1}^n f(Y\_i) - n {\theta}\_{\star}(f) \geq t \right) \leq K \eta(\overline{W}) \exp \left[-\frac{1}{K} \left( \frac{t^2}{n |f|\_{\infty}^2} \wedge \frac{t}{|f|\_{\infty}} \right) \right] \eqsp.$$ [lemme:appl:hoeffding] Assume that there exists $\overline{W}$ such that {*Y**n*, *n* ≥ 0} is a $\overline{W}$-uniformly ergodic Markov chain with initial distribution *η* with $\eta(\overline{W}) < \infty$. Let *l* ∈ {1, ⋯, *S* − 1} and *p**l* ∈ (0, 1); and set *ξ**l* = *F**θ*⋆− 1(*p**l*). For all *ε* > 0 and any *n* ≥ 1, $$\begin{aligned} {\mathbb{P}}\left(|{\xi}\_{{\theta}\_n,l}-\xi\_l|>\epsilon\right) \leq 2 K \eta(\overline{W}) \exp \left(- \frac{n}{K} \left(\delta\_{\epsilon}^2 \wedge \delta\_{\epsilon} \right) \right) \eqsp,\end{aligned}$$ where *δ**ε* = *m**i**n*{*F**θ*⋆(*ξ**l* + *ε*) − *p**l*, *p**l* − *F**θ*⋆(*ξ**l* − *ε*)}. Let *ε* > 0. We write P(∣*ξ**θ**n*, *l* − *ξ**l*∣ > *ε*) ≤ P(*ξ**θ**n*, *l* ≥ *ξ**l* + *ε*) + P(*ξ**θ**n*, *l* < *ξ**l* − *ε*). Since *F**θ**n*(*x*) ≤ *t* iff *x* ≤ *F**θ**n*− 1(*t*), $$\begin{aligned} {\mathbb{P}}\left({\xi}\_{{\theta}\_n,l} \geq \xi\_l + \epsilon \right) &= {\mathbb{P}}\left(F\_{\theta\_n}^{-1}(p\_l) \geq \xi\_l + \epsilon \right) = {\mathbb{P}}\left( p\_l \geq F\_{{\theta}\_{n}}(\xi\_l + \epsilon) \right)\\ & = {\mathbb{P}}\left(\sum\_{k=1}^n \un\_{\{{\pi\_u}(Y\_k) > \xi\_l + \epsilon \}} \geq n(1-p\_l)\right) \eqsp.\end{aligned}$$ Proposition [hoeffding] is then applied with $f(x) = \un\_{\{{\pi\_u}(x)>\xi\_l + \epsilon\}}$. As $$\begin{aligned} {\theta}\_{\star}(f) = \int \un\_{\{{\pi\_u}(x)>\xi\_l + \epsilon\}} {\theta}\_{\star}(dx) = 1 - F\_{{\theta}\_{\star}}(\xi\_l + \epsilon) \eqsp,\end{aligned}$$ we obtain $$\begin{aligned} {\mathbb{P}}\left({\xi}\_{{\theta}\_n,l} \geq \xi\_l + \epsilon \right) &={\mathbb{P}}\left( \sum\_{k=1}^n f(Y\_k) - n {\theta}\_{\star}(f) \geq n \left( F\_{{\theta}\_{\star}}(\xi\_l + \epsilon) - p\_l\right) \right)\\ & \leq K \eta(\overline{W}) \exp \left( - \frac{n}{K} \left[ \left( F\_{{\theta}\_{\star}}(\xi\_l + \epsilon) - p\_l \right)^2 \wedge \left( F\_{{\theta}\_{\star}}(\xi\_l + \epsilon) -p\_l \right) \right]\right) \eqsp.\end{aligned}$$ for some constant *K* independent of *n*, *l*, *ε*. Similarly, $$\begin{aligned} {\mathbb{P}}\left({\xi}\_{{\theta}\_n,l} < \xi\_l - \epsilon \right) \leq K \eta(\overline{W}) \, \exp \left( - \frac{n}{K} \left[ \left( p\_l -F\_{{\theta}\_{\star}}(\xi\_l - \epsilon) \right)^2 \wedge \left( p\_l -F\_{{\theta}\_{\star}}(\xi\_l - \epsilon) \right)\right] \right) \eqsp,\end{aligned}$$ which concludes the proof. *Proof of Proposition [prop:vitesse:quantiles]* Let *f**θ*⋆ = *F*ʹ*θ*⋆ and *ε**n* be defined by $$\begin{aligned} \epsilon\_n = \frac{2 \sqrt{2}}{f\_{{\theta}\_{\star}}(\xi\_l)} \sqrt{K} \sqrt{\frac{\log(n)}{n}} \eqsp,\end{aligned}$$ where *K* is given by Lemma [lemme:appl:hoeffding]. Note that under ([hyp:fd]), *f**θ*⋆(*ξ**l*) > 0 since *p**l* ∈ (0, 1). By ([hyp:fd]), *F**θ*⋆ is differentiable and we write when *n* → ∞ $$\begin{aligned} F\_{{\theta}\_{\star}}(\xi\_l + \epsilon\_n) - p\_l = F\_{{\theta}\_{\star}}(\xi\_l + \epsilon\_n) - F\_{{\theta}\_{\star}}(\xi\_l) = f\_{{\theta}\_{\star}}(\xi\_l) \epsilon\_n + o(\epsilon\_n) \eqsp.\end{aligned}$$ Hence $F\_{{\theta}\_{\star}}(\xi\_l + \epsilon\_n) - p\_l \geq \sqrt{2 K} \sqrt{\frac{\log(n)}{n}}$ for *n* large enough. Similarly, $p\_l-F\_{{\theta}\_{\star}}(\xi\_l - \epsilon\_n)\geq \sqrt{2 K} \sqrt{\frac{\log(n)}{n}}$ for *n* large enough. So when *n* is large enough, *n**K*− 1(*δ**ε**n*2 ∧ *δ**ε**n*) ≥ 2log(*n*) with *δ**ε* defined in Lemma [lemme:appl:hoeffding]. By Lemma [lemme:appl:hoeffding], for *n* large enough, to $$\begin{aligned} {\mathbb{P}}\left(|{\xi}\_{{\theta}\_n,l}-\xi\_l|>\epsilon\_n \right) \leq \frac{2 K \eta(\bar W)}{n^2} \eqsp.\end{aligned}$$ As ∑*n* = 1∞P(∣*ξ**θ**n*, *l* − *ξ**l*∣ > *ε**n*) < ∞, the Borel-Cantelli lemma yields limsup*n**ε**n*− 1∣*ξ**θ**n*, *l* − *ξ**l*∣ < ∞ w.p.1. This concludes the proof. 10 C. Andrieu, A. Jasra, A. Doucet, and P. Del Moral. Non-linear Markov chain Monte Carlo. In *Conference Oxford sur les méthodes de Monte Carlo séquentielles*, volume 19 of *ESAIM Proc.*, pages 79–84. 2007. C. Andrieu, A. Jasra, A. Doucet, and P. Del Moral. A note on convergence of the Equi-Energy Sampler., 26(2):298–312, 2008. C. Andrieu, A. Jasra, A. Doucet, and P. Del Moral. On nonlinear Markov chain Monte Carlo., 17(3):987–1014, 2011. C. Andrieu and E. Moulines. On the ergodicity property of some adaptive MCMC algorithms., 16(3):1462–1505, 2006. C. Andrieu and J. Thoms. A tutorial on adaptive MCMC., 18(4):343–373, 200
arxiv_0000309
A diffuse interface model for the analysis of propagating bulges in cylindrical balloons ======================================================================================== Axisymmetric membranes Bifurcation Asymptotic analysis Strain gradient elasticity With the aim to characterize the formation and propagation of bulges in cylindrical rubber balloons, we carry out an expansion of the non-linear axisymmetric membrane model assuming slow axial variations. We obtain a diffuse interface model similar to that introduced by van der Waals in the context of liquid-vapor phase transitions. This provides a quantitative basis to the well-known analogy between propagating bulges and phase transitions. The diffuse interface model is amenable to numerical as well as analytical solutions, including linear and non-linear bifurcation analyses. Comparisons to the original membrane model reveal that the diffuse interface model captures the bulging phenomenon very accurately, even for well-localized phase boundaries. Introduction ============ Problem and background ---------------------- Bulges in cylindrical rubber balloons are a classical example of localization in solid mechanics. When a balloon is inflated, an initial regime of uniform inflation is followed by the formation of a bulge: the bulge appears initially as a long-wavelength buckling mode that localizes rapidly and then grows locally, until it propagates and eventually invades the entire balloon . As in other localization phenomena, the formation of a bulge reflects the non-convexity of the strain energy when restricted to homogeneous deformations: the onset of bulging occurs quickly after the maximum in the volume-pressure loading curve, and the propagation pressure can be predicted by Maxwell’s equal-area rule . Several other localization phenomena have been studied in solid mechanics, such as stress-induced phase transformations , the necking of bars , kink bands in compressed fiber composites , as well as localized structures in thin elastic shells  and tape springs . These localization phenomena have been investigated based on two types of models, as discussed in  for example. On the one hand, *non-regularized models*, also known as sharp interface models, make use of a classical strain energy functional depending solely on the strain: the onset of localization is associated with the loss of ellipticity of the equations of equilibrium at a critical value of the load . Such models can typically predict the critical load, the formation of different phases and the orientation of the phase boundaries, but cannot predict their subsequent evolution, nor their number or distribution in space; they cannot resolve the displacement inside the localized region either. On the other hand, *regularized models*, also known as diffuse interface models, make use of a stored elastic energy functional depending on both the strain and the *strain gradient*: such models remedy the limitations of the non-regularized models, and in particular remain well posed beyond the onset of localization . Regularized models are often introduced heuristically, but can in some cases be justified mathematically. Such a justification has been done in the case of periodic elastic solids, such as elastic crystals , trusses made of elastic bars or beams , or elastic solids with a periodic micro-structure . In these works on periodic solids, the ratio *R*/*L* ≪ 1 of microscopic cell size *R* to the macroscopic dimension *L* of the structure is used as an expansion parameter, and the homogenized properties of the periodic medium are derived through a systematic expansion in terms of the macroscopic strain and of its successive gradients. The goal of this paper is to derive a one-dimensional, regularized model applicable to the analysis of axisymmetric bulges in cylindrical rubber balloons. It is part of a general effort to characterize localization phenomena occurring in slender structures, which have been much less studied than in periodic solids. In slender structures, regularized models can be derived by an asymptotic expansion as well, using now the aspect ratio *R*/*L* ≪ 1 as the expansion parameter where *R* is the typical transverse dimension of the structure and *L* its length. This approach has been carried out for the analysis of necking, and diffuse interface models have been derived asymptotically, first for a two-dimensional hyperelastic strip by Mielke  and later for a general prismatic solid in three dimensions by Audoly & Hutchinson . These authors proposed an expansion method upon which we build ours (and which we improve). Our asymptotic expansion starts from the axisymmetric membrane model, which has been used extensively to analyze bulges in cylindrical balloons . Its outcome is a one-dimensional diffuse interface model, exactly similar to that introduced heuristically by van der Waals  to analyze the liquid-vapor phase transitions at a mesoscopic level. The analogy between bulges in balloons and phase transitions has been known for a long time: Chater & Hutchinson  have adapted Maxwell’s rule for the the coexistence of two phases to derive the pressure at which a bulge can propagate in a balloon, while Müller & Strehlow  have proposed a pedagogical introduction to the theory of phase transitions based on the mechanics of rubber balloons. Here, we push the analogy further, and show that the diffuse interface model can provide a quantitative description of bulges in balloons, not only accounting for the propagation pressure, but also for the domain boundary between the bulged and unbulged phases, as well as for its formation via a bifurcation—borrowing from the theory of phase transitions, we will refer to this boundary as a ‘diffuse interface’. The diffuse interface model is classical, tractable, and amenable to analytical bifurcation and post-bifurcation analysis, as we demonstrate. It is also simpler than the axisymmetric membrane model on which it is based. There is a vast body of work on the bulging of cylindrical balloons, all of which have used the theory of axisymmetric membranes as a starting point. The stability and bifurcations from homogeneous solutions have been analyzed in . Non-linear solutions comprising bulges have been derived in . The analysis of stability has been later extended to arbitrary incompressible hyperelastic materials, to various closure conditions at the ends of the tube, as well as to various type of loading controls based on either the internal pressure, the mass or the volume of the enclosed gas . In a recent series of four papers, Fu *et al.*  revisit the bifurcation problem, complement it with the weakly non-linear post-bifurcation analysis in the case of an infinite tube, and address imperfection sensitivity. Besides these theoretical studies, there has been a number of experimental and numerical papers on balloons. A compelling agreement between experiments and numerical simulations of the non-linear membrane model has been obtained by Kyriakides & Chang , who provide detailed experimental and numerical results on the initiation, growth and propagation of bulges, highlighting the analogy with phase transitions. Given that the agreement between experiments and the non-linear membrane theory has already been covered thoroughly in this work, our focus here will be on comparing the diffuse interface model to the non-linear membrane model, using exactly the same material model as in Kyriakides’ simulations and experiments. Outline of the main results --------------------------- Our work focusses on solutions to the non-linear axisymmetric membrane model that vary slowly in the axial direction, as happens typically at the onset of localization. A systematic expansion of the membrane energy is obtained in terms of the aspect-ratio parameter *ɛ* = *R*/*L* ≪ 1, where *R* is the initial radius of the balloon and *L* its initial length. The result reads $$\mathcal{E} [p, \mu] = \int\_{0}^{L} \left[ G\_0 (p, \mu (Z)) + \frac{1}{2} B\_0 (p, \mu (Z)) \mu^{\prime 2} (Z) \right] \mathrm{d} Z \label{eq:diffuseInterfaceModel-Announce}$$ where *p* denotes the (scaled) internal pressure, a control parameter in the experiments, *Z* is the axial coordinate and *μ*(*Z*) is a strain measure, see figure [fig:experiment]. Specifically, *μ* is the orthoradial stretch, defined as the ratio $\mu(Z)= \frac{r (Z)}{R}$ of the current radius *r* to the initial radius. ![Inflation of a cylindrical membrane: (a) reference configuration, (b) sketch of an equilibrium configuration when the membrane is subject to an axial force F and an internal pressure p. (c) Equivalent diffuse interface model derived in this paper: a bar having an order parameter \mu(Z) undergoes a phase transformation.](balloon-geom "fig:") [fig:experiment] The potential *G*0 appearing in the first term is a non-convex function of the stretch *μ*, much like in Ericksen’s bar : the non-regularized model for the balloon would correspond to the energy functional ∫0*L**G*0(*p*, *μ*(*Z*)) d*Z*. Values of *p* such that several minima of *G*0 exist correspond to pressures for which different phases (associated with different values of the stretch *μ*) are in competition. The second term $\frac{1}{2}\,B\_{0}\,\mu'^{2}$ in the integrand is a correction of order *ɛ*2, that accounts for the energetic cost of inhomogeneity; in the theory of phase transitions, this is the term that would account for surface tension at an interface. We provide simple and explicit formulas for both the potential *G*0 characterizing homogeneous solutions, see §[s:homsolbarmod], equations ([eq:strainEnergy]), ([eq:H0Decomposition]) ([eq:lambda0def]) and ([eq:Gdef]) in particular, and for the modulus *B*0 of the regularizing term, see §[s:reducedmodel] and equation ([eq:Bexpression]). The diffuse interface model is obtained by a systematic, formal expansion. It is asymptotically exact and does not rely on any unjustified kinematic assumptions: equation ([eq:diffuseInterfaceModel-Announce]) approximates the energy of the original membrane model with an error of order *ɛ*4 that is negligible compared to the smallest term retained, namely the gradient term of order *ɛ*2. By contrast, regularized models for slender structures have been proposed in earlier work starting from kinematic hypotheses, which appeared to be incorrect: see the treatment of necking in an elastic cylinder in  as well as the critical discussion in . Our derivation is based on a *finite-strain* membrane model. The non-linear features of the elastic constitutive law at finite strain are ultimately reflected in the diffuse interface model through the non-linear potential *G*0(*p*, *μ*) and through the dependence of the second gradient coefficient *B*0(*p*, *μ*) on the current strain *μ*. By contrast, an assumption of small strain has been used in previous work  on the justification of a diffuse interface model to analyze phase transformations in an elastic cylinder: this assumption is questionable since the presence of coexisting phases involves finite variations of strain across the interface. The outline of the paper is as follows. In §[s:nonlinearMembraneModel], we introduce the non-linear membrane model. In §[s:homsolbarmod] we analyze its homogeneous solutions, and derive an expression for the potential *G*0. Section [s:reducedmodel] is the core of the paper, and establishes the diffuse interface model ([eq:diffuseInterfaceModel-Announce]) by an asymptotic method. Section [sec:application] derives solutions to the diffuse interface model using various methods, and compares them with the predictions of the original membrane model. Non-linear membrane model ========================= We consider a cylindrical membrane with uniform initial thickness *H* and radius *R*. We use the cylindrical coordinates (*Z*, *θ*) in reference configuration as Lagrangian variables. When subject to external load, the cylinder deforms into an axisymmetric membrane, see figure [fig:experiment]a. The cylindrical coordinates of a material point in actual configuration are written as (*r*(*Z*), *θ*), corresponding to a position **x**(*Z*, *θ*) = *z*(*Z*) **e***z* + *r*(*Z*) **e***r*(*θ*), where (**e***r*(*θ*), **e***θ*(*θ*), **e***z*) is the local cylindrical basis. In the axisymmetric membrane theory, the deformation gradient is a 3 × 2 matrix which writes $$\mathbf{F}= \mu\,\mathbf{e\_{\theta}}\otimes \mathbf{e}\_{\theta} + (R\,\mu'\,\mathbf{e}\_{r} + \lambda\,\mathbf{e}\_{z})\otimes \mathbf{e}\_{z}$$ where we have defined an apparent axial stretch *λ* and the circumferential stretch *μ* as $$\begin{aligned} \lambda(Z) & = \frac{\mathrm{d} z }{\mathrm{d} Z}(Z),\\ \mu(Z) & = \frac{r (Z)}{R}.\end{aligned}$$ The Green-Lagrange strain tensor $\mathbf{E}= \frac{1}{2} (\mathbf{F}^T \cdot \mathbf{F}-\mathbf{1})$ is a 2 × 2 diagonal matrix in the basis (**e***θ*, **e***z*) tangent to the undeformed mid-surface: it will be represented compactly as a vector, whose entries are the diagonal components *E**θ* and *E**z* of the matrix, $$\mathbf{E} (\lambda, \mu, \mu') = \left(\begin{array}{c} E\_{\theta}\\ E\_z \end{array}\right) = \frac{1}{2} \begin{pmatrix} \mu^2 - 1\\ \lambda^2 + (R \mu')^2 - 1 \end{pmatrix}, \label{eq:genericMembraneStrain}$$ where $\mu' = \frac{\mathrm{d}\mu}{\mathrm{d}Z}$ is a stretch gradient, namely the axial gradient of circumferential stretch. A material model is now specified through a strain energy per unit volume *w*\*(*E**θ*, *E**z*). In previous work on axisymmetric membranes , the 2d material model proposed by Ogden  for incompressible rubber has been used: $$w^{\*}(E\_{\theta}, E\_z)= \sum\_{i = 1}^3 \frac{S\_i}{\alpha\_i} \left(\ell\_{\theta}^{\alpha\_i} + \ell\_z^{\alpha\_i} + \left( \frac{1}{\ell\_{\theta} \ell\_z} \right)^{\alpha\_i} \right), \quad \textrm{with } \begin{cases} \ell\_{\theta} = \sqrt{2 E\_{\theta} + 1} & \textrm{(circumfer.\ stretch)}\textrm{,} \\ \ell\_z = \sqrt{2 E\_z + 1}& \textrm{(axial stretch)}\textrm{.} \end{cases} \label{eq:ogdenlaw}$$ We use this model as well for our numerical examples, with the same set of material parameters *α**i*’s and *S**i*’s as used in previous work, namely *S*1 = 617 kPa, *S*2 = 1.86kPa, *S*3 =  − 9.79kPa, *α*1 = 1.3, *α*2 = 5.08 and *α*3 =  − 2. All our results can be easily adapted to a different constitutive law. For this constitutive law, the initial shear modulus *S*ini can be obtained as *S*ini = ∑*i* = 13*α**i* *S**i*. The domain 0 ≤ *Z* ≤ *L* represents one half of a balloon comprising a single bulge center at *Z* = 0, with symmetry conditions *μ*ʹ = *λ*ʹ = 0 enforced at *Z* = 0. At the other endpoint *Z* = *L*, we consider the ideal boundary conditions sketched in figure [fig:experiment]b, whereby the terminal section of the balloon is resting and freely sliding on a planar ‘plug’. These conditions would be difficult to achieve in experiments but they offer the advantage of being compatible with a uniform expansion of the membrane, which simplifies the analysis. By contrast, actual cylindrical balloons are typically closed up on their ends and cannot be inflated in a homogeneous manner due to end effects; these end effects could reproduced by employing different boundary conditions, but we prefer to ignore them. Note that Kyriakides & Chang use a rigid plug condition on one end, *μ*(*L*) = 1, which is not realistic either. Our boundary conditions, sketched in figure [fig:experiment] and provided in explicit form in §[pre-ssec:fullSimulations][ssec:fullSimulations], are natural: the applicable equilibrium condition will emerge automatically from the condition that the energy is stationary. As in the experiments of Kyriakides & Chang , the membrane is subject to an interior pressure *p*\* and to a stretching force *F*\* applied along the axis, see figure [fig:experiment]. The total potential energy reads $$\mathcal{E}\_{\mathrm{memb}}^{\ast} = \int\_{0}^{L}\Big( w^{\*}(\mathbf{E})\,2\,\pi\,R\,H\,\mathrm{d}Z - \pi\,(R\,\mu)^{2}\,(\lambda\,\mathrm{d}Z)\,p^{\ast} - (\lambda\,\mathrm{d}Z)\,F^{\*} \Big) \nonumber$$ where 2 *π* *R* *H* d*Z* is the initial volume element, *π* (*R* *μ*)2 (*λ* d*Z*) = *π* *r*2 d*z* is the current enclosed volume element, and *λ* d*Z* = d*z* is the current axial length element. We introduce a rescaled energy, denoted without an asterisk as $\mathcal{E}\_{\mathrm{memb}} = \frac{\mathcal{E}\_{\mathrm{memb}}^{\ast}}{ (2\pi\,R\,H)\,S\_{\mathrm{ini}}}$: $$\mathcal{E}\_{\mathrm{memb}}[p,\lambda,\mu]= \int\_{0}^{L}\left( w (\mathbf{E}(\lambda,\mu,\mu')) - p \frac{e}{2}\, \lambda \, \mu^2 - F \lambda \right) \mathrm{d} Z \textrm{.} \label{eq:completeEnergy}$$ The strain energy, the force and pressure have been rescaled as well, as $w = \frac{w^{\*}}{S\_{\mathrm{ini}}}$, $F = \frac{F^{\ast}}{2 \pi R H S\_{\mathrm{ini}} }$ and $p=\frac{p^{\ast}}{S\_{\mathrm{ini}}}$, respectively, and $e = \frac{R}{H}$ is an initial aspect ratio. In our numerical examples, we use the same value $e = \frac{55}{16}$ as in : even though this balloon is relatively thick prior to deformation, the non-linear membrane model has been checked to match the experimental results accurately in . We also use the same value of the load *F* = 1.149 as in these experiments. The parameter *F* will never be changed, and we do not keep track of how the various quantities depend on *F*; the argument *F* will systematically be omitted in functions, as we did already in the left hand side of ([eq:completeEnergy]). The functions *λ*(*Z*) and *μ*(*Z*) that make the energy ([eq:completeEnergy]) stationary yield the axisymmetric equilibra of the balloon. These solutions are obtained by a numerical method described in section [pre-ssec:fullSimulations][ssec:fullSimulations], and are plotted as the black curves in figure [fig:fullnonlinearsolutions], where they are used as a reference. Analysis of homogeneous solutions ================================= Our general goal is to justify the diffuse interface model when *λ*(*Z*) and *μ*(*Z*) vary slowly as a function of *Z*. In this section, we start by considering the case where *λ* and *μ* do not depend on *Z*, $$\frac{\mathrm{d}\lambda}{\mathrm{d}Z} = 0,\qquad \frac{\mathrm{d}\mu}{\mathrm{d}Z} = 0 \textrm{.}$$ This corresponds to homogeneous solutions, *i.e.* to solutions with uniform inflation. These homogeneous solutions are well known, and are re-derived here for the sake of completeness. A catalog of such homogeneous solutions will be obtained, which plays a key role in the subsequent derivation of the diffuse interface model. Kinematics of homogeneous solutions ----------------------------------- For homogeneous solutions, the gradient term *μ*ʹ in ([eq:genericMembraneStrain]) vanishes and the membrane strain reads $$\mathbf{E}\_0 (\lambda, \mu) = \begin{pmatrix} E\_{0}^{\theta} \\ E\_{0}^{z} \end{pmatrix} = \frac{1}{2} \left( \begin{array}{c} \mu^2 - 1\\ \lambda^2 - 1 \end{array} \right). \label{eq:E0}$$ All the quantities pertaining to homogeneous solutions are denoted using a subscript ‘0’. In the homogeneous case, the strain energy becomes $$w\_0 (\lambda, \mu) = \frac{1}{S\_{\mathrm{ini}}} \sum\_{i = 1}^3 \frac{S\_i}{\alpha\_i} \left(\lambda^{\alpha\_i} + \mu^{\alpha\_i} + \left( \frac{1}{\lambda\, \mu} \right)^{\alpha\_i} \right) \textrm{.} \label{eq:strainEnergy}$$ Of particular importance will be the second Piola-Kirchhoff membrane stress $\boldsymbol{\Sigma}\_{0}$, defined as the gradient of the strain energy with respect to the strain: $$\boldsymbol{\Sigma}\_0 (\lambda, \mu) = \begin{pmatrix} \Sigma\_{0}^{\theta} \\ \Sigma\_{0}^{z} \end{pmatrix} = \begin{pmatrix} \frac{\partial w}{\partial E\_{\theta}} \\ \frac{\partial w}{\partial E\_{z}} \end{pmatrix}\_{\mathbf{E} = \mathbf{E}\_{0}} = \begin{pmatrix} \frac{1}{\mu} \frac{\partial w\_{0}}{\partial \mu} (\lambda, \mu)\\ \frac{1}{\lambda} \frac{\partial w\_{0}}{\partial \lambda} (\lambda, \mu) \end{pmatrix}\textrm{.} \label{eq:sigma0}$$ Equilibrium of homogeneous solutions ------------------------------------ In view of ([eq:completeEnergy]), the total potential energy of a homogeneous solution per unit reference length is $$g\_0 (p, \lambda, \mu) = w\_0 (\lambda, \mu) - p \frac{e}{2} \,\lambda \, \mu^2 - F \lambda. \label{eq:H0Decomposition}$$ Given the load parameters *p* (and *F*) the equilibrium values of *λ* and *μ* are found by the condition of equilibrium in the axial and transverse directions, $$\begin{aligned} \frac{\partial g\_0}{\partial \lambda} (p, \lambda, \mu) & = & 0, \label{eq:g0EquiAxial}\\ \frac{\partial g\_0}{\partial \mu} (p, \lambda, \mu) & = & 0. \label{eq:g0EquiTransv}\end{aligned}$$ We leave the load *p* left unspecified for the moment, and we view the axial equilibrium ([eq:g0EquiAxial]) as an implicit equation for *λ* = *λ*0(*p*, *μ*) in terms of *p* and *μ*: by definition, *λ*0(*p*, *μ*) is the solution to the implicit equation $$\frac{\partial g\_0}{\partial \lambda} (p, \lambda\_0 (p, \mu), \mu) = 0. \label{eq:lambda0def}$$ From now on, we will systematically eliminate *λ* = *λ*0(*p*, *μ*) in favor of the second unknown *μ*. Starting with the potential *g*0, we define a reduced potential *G*0 as *G*0(*p*, *μ*) = *g*0(*p*, *λ*0(*p*, *μ*), *μ*), as well as the stress *n*0 dual to *μ*, $$n\_0 (p, \mu) = - \frac{\partial G\_0}{\partial \mu} (p, \mu) \textrm{.} \label{eq:n0label}$$ This *n*0(*p*, *μ*) can be interpreted as an imbalance of hoop stress; it vanishes at equilibrium, *n*0(*p*, *μ*) = 0. Indeed, we have $n\_0 = - \frac{\partial G\_{0}}{\partial \mu} = - \frac{\mathrm{d} g\_0 (p, \lambda\_0 (p, \mu), \mu)}{\mathrm{d} \mu} = - \frac{\partial g\_0}{\partial \mu} - \frac{\partial \lambda\_0}{\partial \mu} \frac{\partial g\_0}{\partial \lambda}$, where the both terms are zero by the equilibrium conditions ([eq:g0EquiTransv]–[eq:lambda0def]). To summarize, we view *λ* as an internal variable slaved to the ‘macroscopic’ variable *μ* (the roles of *λ* and *μ* could be exchanged but the other way around would be more complicated as the mapping from *λ* to *μ* is not single-valued). A catalog of homogeneous solutions can be obtained by (*i*) solving the axial equilibrium ([eq:g0EquiAxial]) for *λ* = *λ*0(*p*, *μ*), (*ii*) defining a reduced potential energy *G*0(*p*, *μ*) by ([eq:Gdef]), and (*iii*) solving the equilibrium condition *n*0(*p*, *μ*) = 0 in the (*p*, *μ*) plane. This program has been carried out and the results are shown in figure [fig:homsolutions]. ![Analysis of the homogeneous solutions, with F=1.149. (a) For any value of the pressure p, the axial equilibrium ([eq:lambda0def]) yields an implicit curve in the (\mu,\lambda) plane that defines the stretch \lambda_{0}(p,\mu) in terms of \mu; the different curves correspond to p = 0.0885, 0.1002, 0.1087, 0.1187, 0.1285, 0.140, 0.1646, 0.200. (b) Reduced potential G_0 as a function of \mu for the same set of values of p. Critical points are shown in red: Considère points \mathrm{C} and \mathrm{C}' where the pressure is extremal (disks), and Maxwell construction (line and diamonds). (c) Homogeneous solutions in the (\mu,p)-plane, as determined by solving the transverse equilibrium n_{0}(p,\mu)=0. The dashed part of the curve between the Considère points is unstable, as \partial^{2}G_{0}/\partial \mu^{2} < 0. (d) Same set of homogeneous solutions, now represented in the (v_{0},p) plane where v_{0}=\mu^2 \, \lambda_{0}(p,\mu) is the ratio of the final to the initial volume. With this set of conjugate variables, Maxwell’s rule applies and the two shaded regions have the same area.](homo-sols "fig:") [fig:homsolutions] The homogeneous stretch *λ*0(*p*, *μ*) and the potential *G*0(*p*, *μ*) are shown in parts a and b of the figure. In figure [fig:homsolutions]c, the pressure is plotted in terms of *μ* and is seen to increase, attain a local maximum *p*C = 0.1646, decrease, attain a local minimum *p*Cʹ = 0.1002, and finally increase again. The points of extremal pressure are where the onset of localization is expected to occur in a infinite medium (*L* = ∞) according to Considère’s criterion : we will refer to them as *Considère points*. For intermediate values of the pressure, *p*C < *p* < *p*C', the potential *G*0(*p*, *μ*) plotted in figure [fig:homsolutions]b has two minima and one maximum as a function of *μ*. The non-convexity of *G*0 makes it possible for the bulged and unbulged domains to coexist, as recalled in the next section; the diffuse interface model derived later in §[s:reducedmodel] will be able to account for the boundary between these domains. Maxwell’s construction ---------------------- In a first attempt to address inhomogeneities, we consider solutions made up of two phases, with respective properties (*λ*a = *λ*0(*p*, *μ*a), *μ*a) and (*λ*b = *λ*0(*p*, *μ*b), *μ*b). Discontinuities are allowed for the moment, their contribution to the energy being ignored: gradient term *μ*ʹ appearing in the membrane model are simply discarded. Let *c* denote the fraction of the phase ‘a’, and (1 − *c*) the fraction of the phase ‘b’, as measured after pulling everything back in the reference configuration. Under these assumptions, the membrane energy ([eq:completeEnergy]) takes the form $$\mathcal{E}\_{0} \left( p, c, \mu\_{\mathrm{a}}, \mu\_{\mathrm{b}} \right) = L \left( c\,G\_0 \left( p, \mu\_{\mathrm{a}} \right) + (1 - c) \, G\_0 \left( p, \mu\_{\mathrm{b}} \right) \right). \nonumber$$ Optimizing with respect to the *μ**i*’s and to *c* successively, we find $$\begin{array}{rclll} n\_0 (p, \mu\_i) & = & 0 & \text{\qquad} & \text{(mechanical equilibrium in each phase)}\\ G\_0 \left( p, \mu\_{\mathrm{b}} \right) - G\_0 \left( p, \mu\_{\mathrm{a}} \right) & = & 0 & & \text{(chemical equilibrium)} \end{array} \label{eq:twoPhaseEquilConds}$$ These equations can be solved for *p* and the *μ**i*’s: in particular this selects a value of the pressure *p* = *p*M, known as Maxwell’s pressure, where the two phases can coexist. The propagation pressure *p*M is a function of both the applied force *F* and of the constitutive model for the membrane, but this is implicit in our notation. For *F* = 1.149 and for the particular values of the constitutive parameters used here, we have obtained the Maxwell load as *p*M = 0.1087, see the red line joining the points labeled Ma and Mb in figure [fig:homsolutions] Maxwell’s equal-area rule for the propagation pressure can be rederived as follows. The quantity *G*0(*p*, *μ*b) − *G*0(*p*, *μ*a) appearing in ([eq:twoPhaseEquilConds]) can be written as the integral of d*G*0 along the curve corresponding to homogeneous solutions in the (*p*, *μ*) plane. Along this curve, $\frac{\partial G\_0}{\partial \mu} = - n\_0 = 0$ by ([eq:n0label]). Therefore, $\mathrm{d} G\_0 = \frac{\partial G\_0}{\partial p} (p, \mu)\, \mathrm{d} p = \frac{\mathrm{d} g\_0 (p, \lambda\_0 (p, \mu), \mu)}{\mathrm{d} p} \,\mathrm{d} p = \left( \frac{\partial g\_0}{\partial p} + \frac{\partial g\_0}{\partial \lambda} \frac{\partial \lambda\_0}{\partial \mu} \right) \mathrm{d} p = \frac{\partial g\_0}{\partial p} \mathrm{d} p $ after using ([eq:lambda0def]). In view of ([eq:H0Decomposition]), this can be written as $\mathrm{d} G\_0 = \frac{e}{2}\, v\_{0}\,\mathrm{d}p$, where $v\_{0}(p,\mu)= \lambda\_{0}(p,\mu) \mu^2 = \frac{\pi\,r^{2}\,\mathrm{d}z}{\pi\,R^{2}\,\mathrm{d}Z}$ denotes the ratio of the deformed to the undeformed volume of homogeneous solutions. Using ([eq:twoPhaseEquilConds]), the variation of *G*0 from one Maxwell point *μ*a to the other *μ*b is zero, and so $$\int\_{\mu\_{\mathrm{a}}}^{\mu\_{\mathrm{b}}} v\_{0}(p,\mu) \,\mathrm{d} p = 0 \textrm{.} \nonumber$$ This equality implies the equality of the area of the shaded regions in figure [fig:homsolutions]d, which uses *v*0 as the horizontal axis and *p* as the vertical axis. Derivation of the diffuse interface model ========================================= We proceed to derive the diffuse interface model from the non-linear membrane theory. This reduction combines an assumption of scale separation, whereby the solution is assumed to vary on a length scale *L* much larger than the radius *R*, and the elimination of the unknown *λ* in favor of *μ* by means of the relation *λ* = *λ*0(*p*, *μ*). Principle of the expansion -------------------------- We assume scale separation and use the convention that the radius *R* is fixed and finite while *L* = *R*/*ɛ* goes to infinity: the solution is sought in terms of a scaled variable *Z̃* = *ɛ* *Z* through scaled functions *λ̃* and *μ̃*, where *ɛ* ≪ 1 is our expansion parameter, $$\lambda\_{\varepsilon}(Z) = \tilde{\lambda}(\varepsilon\,Z),\qquad \mu\_{\varepsilon}(Z) = \tilde{\mu}(\varepsilon\,Z) \textrm{.} \nonumber$$ As a consequence of this scaling assumption, any derivative with respect to the slow axial variable *Z* entails a multiplication by the small parameter *ɛ*: $$\begin{aligned} {2} &\lambda\_{\varepsilon}(Z) = \tilde{\lambda}(\varepsilon\,Z) = \mathcal{O}(1),&\qquad &\mu\_{\varepsilon}(Z) = \tilde{\mu}(\varepsilon\,Z) = \mathcal{O}(1),\nonumber\\ &\frac{\mathrm{d}\lambda\_{\varepsilon}}{\mathrm{d}Z} = \epsilon\, \frac{\mathrm{d}\tilde{\lambda}}{\mathrm{d}\tilde{Z}}(\varepsilon\,Z) = \mathcal{O}(\varepsilon),&\qquad& \frac{\mathrm{d}\mu\_{\varepsilon}}{\mathrm{d}Z} = \varepsilon\, \frac{\mathrm{d}\tilde{\mu}}{\mathrm{d}\tilde{Z}}(\varepsilon\,Z) = \mathcal{O}(\varepsilon) \textrm{.} \nonumber\end{aligned}$$ For the sake of legibility, we drop the subscripts *ɛ* and remove any reference to the scaled functions *λ̃* and *μ̃* in the following: it will be sufficient for us to use the above order of magnitude estimates. Derivation of the gradient effect by a formal expansion ------------------------------------------------------- The general expression of the membrane strain ([eq:genericMembraneStrain]) can be split in two terms **E**(*λ*, *μ*, *μ*ʹ) = **E**0(*λ*, *μ*) + **E**1(*μ*ʹ),  where the first one depends on the stretch and the second one on the stretch gradient, $$\begin{aligned} \mathbf{E}\_0 (\lambda, \mu) & = & \frac{1}{2} \left( \begin{array}{c} \mu^2 - 1\\ \lambda^2 - 1 \end{array}\right), \\ \mathbf{E}\_1 (\mu') & = & \frac{1}{2} \left( \begin{array}{c} 0\\ (R \mu')^2 \end{array}\right). \label{eq:E1} \end{aligned}$$ In view of the results from the previous section, their orders of magnitude are **E**0(*λ*, *μ*) = O(1)  **E**1(*μ*ʹ) = O(*ɛ*2). In line with the fact that we use the finite elasticity theory, the strain **E**0 is of order 1. **E**1 being a small correction to **E**0, the strain energy density can be expanded as $$\begin{array}{rcl} w (\mathbf{E}) & = & w (\mathbf{E}\_0 (\lambda, \mu) +\mathbf{E}\_1 (\mu'))\\ & = & w (\mathbf{E}\_0 (\lambda, \mu)) + \frac{\partial w}{\partial \mathbf{E}}(\mathbf{E}\_0 (\lambda, \mu)) \cdot \mathbf{E}\_1 (\mu') +\mathcal{O} (|\mathbf{E}\_1|^{2})\\ & = & w\_0 (\lambda, \mu) +\boldsymbol{\Sigma}\_0 (\lambda, \mu) \cdot \mathbf{E}\_1 (\mu') +\mathcal{O} (\varepsilon^4) \end{array} \label{eq:wOfEExpansion}$$ where we have used the definition of the membrane stress $\boldsymbol{\Sigma}\_0$ in ([eq:sigma0]). Inserting this into ([eq:completeEnergy]) yields the following approximation of the energy $$\mathcal{E}\_{\text{memb}} [p, \mu] = \int\_{0}^{L} \Big( g\_0 (p, \lambda (Z), \mu (Z)) + \boldsymbol{\Sigma}\_0 (\lambda (Z), \mu (Z)) \cdot \mathbf{E}\_1 (\mu' (Z)) \Big)\, \mathrm{d} Z +\mathcal{O} (L \, \varepsilon^4). \label{eq:E-tmp1}$$ Note that the gradient of axial stretch *λ*ʹ does not appear in this expression. Energy of the diffuse interface model ------------------------------------- An important result, proved in appendix [B:Redmodel], is that it is consistent, at this order of approximation, to replace the unknown *λ*(*Z*) with the axial stretch *λ*0(*p*, *μ*(*Z*)) of the homogeneous solution having the local value of *μ*(*Z*) as its circumferential stretch. This eliminates *λ*(*Z*) from the equations, and we obtain the *diffuse interface model* [subeqs:EfinalAndB0] $$\mathcal{E} [p, \mu] = \int\_{0}^{L} G\_0 (p, \mu (Z))\, \mathrm{d} Z + \frac{1}{2} \int\_{0}^{L} B\_{0} (p, \mu (Z)) \, \mu^{\prime 2} (Z) \, \mathrm{d} Z. \label{eq:Efinal}$$ where we have omitted terms of order *L* *ɛ*4 and higher. The coefficient *B*0 of the regularizing term has a simple expression which is found by identifying with ([eq:E-tmp1]) and ([eq:sigma0]) as $$B\_{0} (p, \mu(Z)) = R^2 \, \left[ \frac{1}{\lambda} \, \frac{\partial w\_{0}}{\partial \lambda} (\lambda, \mu(Z)) \right]\_{\lambda = \lambda\_0 (p, \mu(Z))}. \label{eq:Bexpression}$$ This defines the regularizing term in terms of the energy *w*0(*λ*, *μ*) of homogeneous solutions, see ([eq:strainEnergy]). Even though this is implicit in our notation, both *G*0 and *B*0 depend on the force *F*. Equations ([eq:Efinal]–[eq:Bexpression]) are our main result, and can be restated as follows. The energy Ememb of the full non-linear membrane model can be approximated as the sum of (*i*) the non-regularized energy ∫*G*0 d*Z* which depends on the stretch *μ* but not on its gradient, and is of order *L*, and (*ii*) a much smaller correction $\frac{1}{2} \int B\_0 \, \mu^{\prime 2} \mathrm{d} Z$, of order *ɛ*2, that depends on the strain *μ* and as well as on its gradient $\mu' = \frac{\mathrm{d} \mu}{\mathrm{d} Z}$. These two terms provide an approximation of the full energy E of the non-linear membrane model which is accurate up to order *L* *ɛ*4. Non-linear equilibrium of the diffuse interface model ----------------------------------------------------- The equilibrium equations are obtained from ([eq:Efinal]) by the Euler-Lagrange method as [subeqs:BVpbDiffuseInterfacee] $$n\_0 (p, \mu (Z)) - \frac{1}{2} \, \frac{\partial B\_{0}}{\partial \mu} (p, \mu (Z)) \, \mu^{\prime 2} (Z) + \frac{\mathrm{d}}{\mathrm{d} Z} (B\_{0} (p, \mu (Z)) \, \mu' (Z)) = 0. \label{eq:equil2dGrdModel}$$ In the absence of kinematic constraints, the variational method yields the natural conditions at the endpoints as well, *μ*ʹ(0) = *μ*ʹ(*L*) = 0. Here, *μ*ʹ(*L*) = 0 is consistent with the symmetry condition at the center *Z* = 0 of the bulge. The equilibrium condition ([eq:equil2dGrdModel]) reduces to the condition ([eq:dGdmuIsZero]) applicable to homogeneous solutions, namely *n*0(*p*, *μ*) = 0, when the gradient effect is removed, by setting *B*0 = 0. Solution for a domain boundary in an infinite balloon ----------------------------------------------------- The existence of a first integral associated with the equilibrium ([eq:equil2dGrdModel]) has been noted by a number of authors such as Coleman & Newman . It can be obtained by expanding the derivative in the last term in the right-hand side, and by multiplying the entire side by *μ*ʹ(*Z*); the result is $\frac{\mathrm{d}(-G\_{0}+B\_{0}\,\mu'^{2})}{\mathrm{d}Z} = 0$. This shows that the following quantity is conserved:  − *G*0(*p*, *μ*(*Z*)) + *B*0(*p*, *μ*(*Z*)) *μ*ʹ2(*Z*) = *C*. This equation can be used to solve for *μ*(*Z*) by quadrature. However, this method is impractical for numerical calculations as it involves evaluating integrals that are close to singular, even when the singular parts are taken care of analytically . This is why our numerical simulations in §[pre-ssec:fullSimulations][ssec:fullSimulations] use a direct integration method of the equilibrium ([eq:equil2dGrdModel]) rather than the quadrature method. In the case of the boundary separating two domains in an infinite medium, however, the quadrature method is tractable. Then, the pressure matches Maxwell’s pressure, *p* = *p*M, and *μ*(*Z*) tends to *μ*a and *μ*b for *Z* →  ± ∞, respectively. The value of *C* consistent with these asymptotic behaviors is the common value *C* = *G*0(*p*M, *μ*a) = *G*0(*p*M, *μ*b) of the potential, see ([eq:twoPhaseEquilConds]). The implicit equation ([eq:conservedIntegral]) can then be plotted in the phase space (*μ*(*Z*), *μ*ʹ(*Z*)) using a contour plot method. We have checked that the resulting curve (not shown) falls on top of the dotted green curve labeled Aʹ in figure [fig:fullnonlinearsolutions]c, obtained by numerical integration of the equilibrium with a large but finite aspect ratio, *L*/*R* = 30: the analytical solution ([eq:conservedIntegral]) in an infinite balloon provides an excellent approximation to a propagating interface in a finite balloon, as long as it is sufficiently remote from the endpoints. In the bifurcation diagram, the numerical solutions appears as a point Aʹ lying almost exactly on Maxwell’s plateau, see figure [fig:fullnonlinearsolutions]a. This analytical solution is also an excellent approximation to the domain boundary predicted by the original membrane model, as discussed below in §[pre-ssec:fullSimulations][ssec:fullSimulations]. Comparison of the diffuse interface and membrane models ======================================================= [pre-ssec:fullSimulations] Using a formal expansion method, we have shown that the 2d non-linear axisymmetric membrane model (§[s:nonlinearMembraneModel]) is asymptotically equivalent to the 1d diffuse interface model in ([subeqs:EfinalAndB0]). This equivalence holds for ‘slowly’ varying solutions, *i.e.* when the axial gradients involve a length scale much larger than the tube radius, ∣d*μ*/d*Z*∣ ≪ 1/*R*. Here, we compare the predictions of the approximate diffuse interface model to those of the original membrane model. The goal is twofold. First, we verify our asymptotic expansion by checking consistency for slowly-varying solutions. Second, we push the diffuse interface model outside its domain of strict mathematical validity, by applying it to problems involving sharp boundaries and comparing to the predictions of the original membrane model. Comparison of the full bifurcation diagrams ------------------------------------------- We start by comparing the bifurcation diagrams obtained with each one of the models for balloons of finite length *L*, see figure [fig:fullnonlinearsolutions]a. ![Comparison of the predictions of the non-linear membrane model and of the diffuse interface model, for F=1.149. (a) Bifurcation diagrams for different values of the aspect-ratio L/R: homogeneous solutions (dark blue curve), bifurcated branches of the membrane model (double-struck black curves) and of the diffuse interface model (green dots); Considère’s points (red dots) and Maxwell’s plateau (dotted red line). Note that we use the logarithm of the scaled volume v on the horizontal axis, so that Maxwell’s equal-area rule does not apply directly. (b) Comparison of the deformed configurations in physical space (z/R,\mu=r/R) for L/R = 30, for the three configurations labeled \mathrm{A}, \mathrm{A}' and \mathrm{A}'' in (a), corresponding to (v,p) = (2.39, 0.146), (45., 0.109) and (77.43, 0.106) respectively. (c) Same solutions visualized in the phase space: a small discrepancy is visible in the center of the sharp interface (arrow).](membrane-sols "fig:") [fig:fullnonlinearsolutions] In this numerical example, both the membrane and the diffuse interval models use the constitutive law in ([eq:ogdenlaw]), the standard set of material parameters listed below this equation, and the value of the pulling force *F* in ([eq:forceF]). We limit our attention to solutions that are either homogeneous or comprise a single bulge centered at *Z* = 0: recall that the simulation domain (0, *L*) represents one half of a real balloon. The equilibrium equations of the membrane model are obtained from the energy ([eq:completeEnergy]) by an Euler-Lagrange method, and are solved numerically. These equations of equilibrium and their numerical solution have already been documented in , and we refer to this work for details; in our work, the solution branches were calculated using the path-following method from the AUTO-07p library . While used boundary conditions representing rigid plugs, we use instead the natural boundary conditions, namely the axial and radial equilibria $F=\frac{\partial w}{\partial \ell\_z} \, \frac{z'}{\ell\_z} - p \, e \frac{\ell\_\theta^2}{2}$ and *μ*ʹ = 0. These boundary conditions are relevant to the soft boundary device sketched in figure [fig:experiment]b, and are enforced at *Z* = *L*. At the center of bulge *Z* = 0, we impose the symmetry conditions *μ*ʹ = 0 and *z* = 0. To solve the diffuse interface model ([subeqs:BVpbDiffuseInterfacee]) numerically, we first sample the functions *n*0(*p*, *μ*) and *B*0(*p*, *μ*) numerically. This tabulation is available as by-product of the analysis of homogeneous solutions from Section [s:homsolbarmod]. Next, the solution branches are generated by solving the boundary value problem ([subeqs:BVpbDiffuseInterfacee]) using the path-following library AUTO-07p. Alternatively, we tried solving this boundary value problem by the quadrature method described earlier, but it did not work well for the reason already explained. The bifurcation diagrams are shown in figure [fig:fullnonlinearsolutions]a. The homogeneous solutions are plotted using the thick, dark blue curve: they are identical for both models, and are also identical to those derived earlier in figure [fig:homsolutions]d. Bifurcated solutions are shown as black double-struck curves (membrane model) and green dots (diffuse interface model) for different value of $\overline{L} = L/R$. The bifurcation diagram uses the natural logarithm of the scaled volume *v* on the horizontal axis, $$v = \frac{1}{L/R}\,\int\_0^{L} \mu^2 \, \lambda\_{0}(p,\mu)\,\frac{\mathrm{d}Z}{R} \textrm{.} \label{eq:v}$$ This is consistent with the definition of the scaled volume *v*0 used in the analysis of homogeneous solutions. For large values of the aspect-ratio *L*/*R*, the bifurcated branches display a plateau corresponding to Maxwell’s pressure *p*M. The diffuse interface model appears to be highly accurate, as its bifurcation diagram is almost identical to that of the membrane model: in the figure, the green dots fall exactly onto the double-struck curves. Given that the diffuse interface model has been derived under an assumption of ‘slow’ axial variations, it could be expected that the models would agree near the bifurcation points (in the neighborhood of the dark blue curve) where the localization is mild. We did not anticipate the good agreement far from the bifurcation point, for configurations featuring relatively sharp interfaces such as that labeled Aʹ in the figure: for this solution, the largest value of the stretch gradient is 1.2, see figure [fig:fullnonlinearsolutions]c—even though this is not a small number, the diffuse interface model remains remarkably accurate. Selected deformed configurations are plotted in figure [fig:fullnonlinearsolutions]b in real space: the predictions of both models are still indistinguishable, even inside the domain boundary. The predictions of the two models are not exactly identical, however: a small difference is visible when these solutions are represented in phase space, see figure [fig:fullnonlinearsolutions]c; in phase space, the subtle features of the interface are highlighted, while the uniform domains shrink to the points labeled ‘a’ and ‘b’ in the figure. To sum up, the diffuse interface model reproduces the entire bifurcation diagram of the original membrane model with good accuracy, even for well localized domain boundaries. In the following sections, we show that it is also well suited to linear and non-linear buckling analyses. Onset of bulging: linear bifurcation analysis, finite length ------------------------------------------------------------ We now compare the bifurcation load at the onset of bulging, as predicted by the diffuse interface model on the one hand, and by the membrane model on the other hand. The diffuse interface model yields a simple analytical prediction, that matches that of the membrane model exactly. The critical load of the diffuse interface model is derived by a classical linear bifurcation analysis as follows. Consider a perturbation to a homogeneous solution *μ*0, in the form *μ*(*Z*) = *μ*0 + *μ*1(*Z*). Linearizing the equilibrium equation of the diffuse interface model in ([eq:equil2dGrdModel]) with respect to *μ*1, we obtain $$\frac{\partial n\_0}{\partial \mu} (p, \mu\_{0}) \, \mu\_1(Z) + B\_{0} (p, \mu\_{0})\, \mu\_1'' (Z) = 0. \label{eq:bifurcation\_cond}$$ The boundary conditions are *μ*1ʹ(0) = 0 and *μ*1ʹ(*L*) = 0. The first critical mode $\mu\_1(Z) = \cos \frac{\pi\,Z}{L}$ corresponds to half a bulge in the simulation domain (0, *L*). When inserted into the above expression, this yields $$\frac{\partial n\_0}{\partial \mu} (p, \mu\_{0})=\frac{B\_{0}(p, \mu\_{0})}{R^2} \, \frac{\pi^2}{\overline{L}^2} \quad \textrm{(diffuse interface model)} \textrm{.} \label{eq:stability\_secondgrad}$$ This equation must be solved together with the axial equilibrium condition for the unperturbed solution ([eq:dGdmuIsZero]), *n*0(*p*, *μ*0) = 0. For any given value of the aspect-ratio $\overline{L}$, the roots $(p\_{\star}(\overline{L}), \mu\_{\star}(\overline{L}) = \mu\_{0}(p\_{\star}(\overline{L})))$ of these two equations define the critical parameters where the bifurcation occurs. The corresponding scaled volume can then be reconstructed as $v\_{\star}(\overline{L}) = v\_{0}(p\_{\mathrm{\star}}(\overline{L}), \mu\_{\star}(\overline{L}))$. The dependence of the critical volume *v*⋆ on the aspect-ratio is shown by the green dots in figure [fig:onsetbifurc]a. ![Linear and non-linear bifurcation analyses of the bulging instability based on the diffuse interface model, and comparison with the membrane model. (a) Critical volume v_\star at the onset of bulging as a function of the aspect-ratio \overline{L} = L/R. The diffuse interface model predicts the bifurcation load exactly (the green dots fall exactly onto the double-struck curve). Inset: bifurcation point shown in the bifurcation diagram as in figure [fig:fullnonlinearsolutions]a for a particular value \overline{L} = L/R = 5.8. (b) Initial tangents to the bifurcated branches for different values of the aspect-ratio \overline{L} = L/R, as predicted by a weakly non-linear expansion of the membrane model (green dotted lines); comparison with the non-linear branches predicted by the membrane model (double-struck curves).](stability "fig:") [fig:onsetbifurc] For comparison, we derive the bifurcation load predicted by the membrane model. The critical load for hard plugs has been obtained by Kyriakides & Chang . Adapting their bifurcation analysis to soft plugs, we obtain the bifurcation condition as $$\lambda\_0(p,\mu\_0)\, \frac{ (p \, e \, \lambda\_0(p, \mu\_0) - w\_{0,\mu^{2}} )\, w\_{0,\lambda^{2}} + ( w\_{0,\lambda\mu} - p \, e \, \mu\_0 )^2 }{ w\_{0,\lambda} \, w\_{0,\lambda^{2}} } = \frac{\pi^2}{\overline{L}^2} \quad \textrm{(membrane model)} \label{eq:bifurcationMembraneModel}$$ where commas in subscripts denote partial derivatives of the homogeneous strain energy *w*0 defined in ([eq:strainEnergy]), all of which are evaluated at (*λ*, *μ*) = (*λ*0(*p*, *μ*0), *μ*0). Solving this equation together with the axial equilibrium yields *p*⋆ and *μ*⋆ = *μ*0(*p*⋆) as a function of $\overline{L}$, as earlier. The bifurcation loads of the diffuse interface model (green dots) and of the membrane model (solid dark gray curve) are compared in figure [fig:onsetbifurc]a. They agree exactly. This is not surprising as, close to bifurcation, the solutions of the membrane model depart from a uniform solution by an arbitrarily small perturbation, implying that the axial gradients are arbitrarily small: the assumptions underlying the diffuse interface model are satisfied close to the bifurcation point. The diffuse interface model captures exactly the retardation of the onset of buckling in balloons of finite length. Similarly, the critical load predicted by the one-dimensional diffuse interface model for the analysis of necking in solid cylinders has been found in  to agree exactly with that based on the three-dimensional analysis of . For large values of the aspect-ratios $\overline{L}$, the bifurcation equation ([eq:stabilitysecondgrad]) can be simplified by noticing that the left-hand side goes to zero, as the bifurcations takes place closer and closer to the Considère point (*p*C, *μ*C) where the load is maximum, $\frac{\partial n\_0}{\partial \mu} (p\_{\text{C}}, \mu\_{\text{C}})=0$. Expanding the left-hand side accordingly, one obtains from ([eq:stabilitysecondgrad]) $$\mu\_\star \approx \mu\_{\text{C}} + \frac{B\_{0} (p\_{\text{C}}, \mu\_{\text{C}})}{R^2\, \frac{\partial^2 n\_0}{\partial \mu^2} (p\_{\text{C}}, \mu\_{\text{C}}) }\, \frac{\pi^2}{\overline{L}^2} \quad\textrm{(diffuse interface model, large $\overline{L}$).} \label{eq:stability\_secondgrad\_considere}$$ This yields the dotted curve in figure [fig:onsetbifurc]a, which is indeed consistent with the two other curves in the limit $\overline{L}\to\infty$. Onset of bulging: weakly non-linear bifurcation analysis, finite length ----------------------------------------------------------------------- Following the method of Lyapunov and Koiter , an expansion of the bifurcated branch can be found by introducing a small arc-length parameter *η* and expanding *p* and *μ* as [eq:weakNLexp] $$\begin{aligned} {4} \mu &= \mu\_0(p) &\;+\;& \eta \, \mu\_1 (Z) &\;+\;& \eta^2 \, \mu\_2(Z) &\;+\;&\mathcal{O}(\eta^3)\\ p &= p\_{ \star} && &\;+\;& \eta^2 \, p\_2 &\;+\;&\mathcal{O}(\eta^3), \label{eq:pExpansionInTermsOfEta} \end{aligned}$$ where *μ*0(*p*) denotes the branch of homogeneous solutions satisfying the equilibrium condition *n*0(*p*, *μ*0(*p*)) = 0, see ([eq:dGdmuIsZero]), $p\_{\star} = p\_{\star} (\overline{L})$ is the critical pressure found by the linear stability analysis, $\mu\_1(Z) = \cos \frac{\pi Z}{L}$ is the linear bifurcation mode, and *μ*2(*Z*) and *p*2 are higher-order corrections. The latter are now determined by inserting this expansion into the non-linear equilibrium ([subeqs:BVpbDiffuseInterfacee]), and by solving it order by order in *η*. It is actually preferable to work with the weak form of the equilibrium (principle of virtual work), which formally writes E, *μ*(*p*, *μ*) ⋅ [*μ̂*] = 0, for any kinematically admissible virtual stretch *μ̂*. Here, E, *μ*(*p*, *μ*) denotes the first variation of the total potential energy, defined as $$\mathcal{E}\_{,\mu}(p, \mu )\cdot[\hat{\mu}]= \lim\_{t \rightarrow 0} \frac{\mathcal{E} (p,\mu + t\, \widehat{\mu}) -\mathcal{E} (p,\mu)}{t} \textrm{.} \nonumber$$ Higher-order variations of the energy are defined similarly. When the expansion ([eq:weakNLexp]) is inserted into the principle of virtual work, one obtains, at order *η*, the condition ∀*μ̂*,  E, *μ*2(*p*⋆, *μ*0(*p*⋆)) ⋅ [*μ*1, *μ̂*] = 0. We have recovered the bifurcation condition ([eq:bifurcationcond]), which is automatically satisfied by the linear mode *μ*1(*Z*). At order *η*2 the expansion yields $$\forall\hat{\mu},\quad \mathcal{E}\_{,\mu^{2}}(p\_\star, \mu\_0(p\_\star) )\cdot[\mu\_2,\hat{\mu}] + \frac{1}{2}\mathcal{E}\_{,\mu^{3}}(p\_\star, \mu\_0(p\_\star) )\cdot[\mu\_1,\mu\_1,\hat{\mu}]=0. \label{eq:mu2}$$ The first term in the left-hand side involves the tangent stiffness operator E, *μ*2(*p*⋆, *μ*0(*p*⋆)) which is known to be singular by the bifurcation condition ([eq:mu1]). Therefore a solvability condition must be verified before attempting to solve equation ([eq:mu2]) for *μ*2(*Z*); it is derived by replacing *μ̂* with *μ*1 and reads E, *μ*3(*p*⋆, *μ*0(*p*⋆)) ⋅ [*μ*1, *μ*1, *μ*1] = 0. In the left-hand side, the interactions of the three modes *μ*1 produces harmonic waves with wave-vector *π*/*L* and 3*π*/*L*, which all cancel out upon integration over the domain 0 ≤ *Z* ≤ *L*: the solvability condition is automatically verified (this is referred to as a symmetric system in bifurcation theory). Next, equation ([eq:mu2]) can be solved for *μ*2, and the solution reads $$\mu\_2(Z) = \mu\_{20} +\mu\_{21} \cos (\frac{\pi}{L}\, Z) +\mu\_{22} \cos (\frac{2\,\pi}{L}\, Z) \textrm{.} \nonumber$$ The coefficient *μ*21 remains unspecified at this order (in can be used to re-normalize the arc-length *η*), and the other coefficients are found as $$\mu\_{20} = \frac{1}{4}\, \left( \frac{B\_{0,\mu}^\star}{B\_{0}^\star} - \frac{n\_{0,\mu^{2}}^\star}{n\_{0,\mu}^\star} \right),\qquad \mu\_{22} = \frac{1}{12}\, \left( -\frac{3\,B\_{0,\mu}^\star}{B^\star\_{0}} + \frac{n\_{0,\mu^{2}}^\star}{n\_{0,\mu}^\star} \right), \nonumber$$ In our notation, any quantity bearing a star is evaluated at the critical point (*p*⋆, *μ*0(*p*⋆)). Next, the solvability condition at order *η*3 yields the coefficient *p*2 as $$p\_2 = \frac{-3 (\lambda\_{0,\mu}^\star \, n\_{0,\mu^{2}}^\star)^2 -6\, \lambda\_0^\star \, \lambda\_{0,\mu}^\star\, n\_{0,\mu}^\star\, n\_{0,\mu^{2}}^\star +\lambda\_{0}^\star\, ( 6\, \lambda\_{0,\mu^{2}}^\star \, (n\_{0,\mu})^2 -3\, \lambda\_{0}^\star\, n\_{0,\mu^{3}}^\star\, n\_{0,\mu}^\star + 5\, \lambda\_0^\star\, (n\_{0,\mu^{2}}^\star)^2 ) }{ 24\, \lambda\_{0}^\star\, \left( -\lambda\_{0,p}^\star\, (n\_{0,\mu}^\star)^2+n\_{0,\mu}^\star\, \left(\lambda\_{0,\mu}^\star\, n\_{0,\mu}^\star +\lambda\_0^\star\, n\_{0,\mu\,p}^\star \right) -\lambda\_0^\star\, n\_{0,\mu^{2}}^\star\, n\_{0,p}^\star \right) } \textrm{.} \nonumber$$ The right-hand side is defined in terms of the properties of the homogeneous solution (§[s:homsolbarmod]), and can evaluated numerically for any value of $\overline{L}$: recall that all the quantities in the right-hand side are evaluated at $(p\_{\star}(\overline{L}), \mu\_{\star}(\overline{L}) = \mu\_{0}( p\_{\star}(\overline{L}))$, where $p\_{\star}(\overline{L})$ is the critical load as determined from the linear buckling analysis, see ([eq:stabilitysecondgrad]). Finally, an expansion of the volume factor *v* defined in ([eq:v]) is obtained as follows. Observe that the integrand defining *v* is the function *v*0(*p*, *μ*) = *μ*2 *λ*0(*p*, *μ*): inserting the expansions of *p* and *μ* from ([eq:weakNLexp]) into *v*0 and averaging over *Z*, one derives an expansion of the volume factor as *v*(*p*, *μ*) = *v*(*p*⋆, *μ*⋆) + *v*2 *η*2 + …, where the coefficient reads $$v\_{2} = p\_{2}\,v\_{0,p}^{\star} +\left(\mu\_{20}+p\_{2}\,\frac{\mathrm{d}\mu\_0}{\mathrm{d}p}(p\_{\star})\right) \,v\_{0,\mu}^{\star} +\frac{1}{4}\,v\_{0,\mu^{2}}^{\star}. \nonumber$$ The right-hand side depends on the properties of the homogeneous solutions, and can be evaluated numerically for any given value of the aspect-ratio $\overline{L}$. When the expansion of *v* is combined with that of *p* in ([eq:pExpansionInTermsOfEta]), we finally obtain the initial slope of the bifurcated branch as $$\left( \frac{\mathrm{d}p}{\mathrm{d}v} \right)\_{\star} = \frac{p\_{2}\,\eta^{2}+\cdots}{v\_{2}\,\eta^{2}+\cdots} = \frac{p\_{2}}{v\_{2}} \textrm{,}$$ where *p*2 and *v*2 have just been calculated. The corresponding tangents are plotted on figure [fig:onsetbifurc](b) for various values of $\overline{L}$. They agree very well with the bifurcated branches of the non-linear membrane model. To sum up, the diffuse interface model is amenable to a weakly non-linear analysis which reproduces accurately the solutions of the original, fully non-linear membrane model. Onset of localization: weakly non-linear analysis, infinite length ------------------------------------------------------------------ The domain of validity of the weakly non-linear expansion derived in the previous section is more and more limited when the aspect-ratio gets larger, *L*/*R* → ∞: in figure [fig:onsetbifurc]b, the domain where the tangent (dashed green line) yields a reasonable approximation to the bifurcated branch (black double-struck curve) shrinks when the aspect-ratio increases from *L*/*R* = 10 to 300. This is because for large values of *L*/*R*, the extended buckling mode localizes rapidly after bifurcation, a feature not captured by the analysis of the previous section. Here, we derive a different weakly non-linear solution of the diffuse interface model, assuming that the cylinder is infinitely long, *L*/*R* = ∞. This solution captures the quick localization of the bulges; it is similar to that derived Fu *et al.*  based on the full membrane model but its derivation is somewhat simpler. In the limit *L*/*R* → ∞, the bifurcation takes place at the Considère point (*μ*C, *p*C), where the pressure attains its maximum (the other bifurcation taking place at the minimum pressure *p*C', which can be treated similarly). Accordingly, the weakly bulged solution satisfies *p* ≈ *p*C and *μ*(*Z*) ≈ *μ*C, and the potential *G*0 can be expanded as $$\begin{gathered} G\_{0}(p,\mu) = G\_{0}^{\mathrm{C}} + G\_{0,p}^\mathrm{C}\,(p-p\_{\mathrm{C}}) + G\_{0,\mu}^\mathrm{C}\,(\mu-\mu\_{\mathrm{C}}) \cdots \\ {} + \frac{G\_{0,p^{2}}^\mathrm{C}}{2}\,(p-p\_{\mathrm{C}})^{2} + G\_{0,p\mu}^\mathrm{C}\,(p-p\_{\mathrm{C}})\,(\mu-\mu\_{\mathrm{C}}) + \frac{G\_{0,\mu^{2}}^\mathrm{C}}{2}\,(\mu-\mu\_{\mathrm{C}})^{2} + \frac{G\_{0,\mu^{3}}^\mathrm{C}}{6}\,(\mu-\mu\_{\mathrm{C}})^{3} + \cdots \nonumber\end{gathered}$$ The arguments appearing in subscript after a comma denote a partial derivative, while a superscript ‘C’ means that the function is evaluated at Considère’s point (*p*C, *μ*C). The values of all the coefficients *G*0C, *G*0, *p*C, etc. are available from the analysis of homogeneous solutions (§[s:homsolbarmod]). In the right-hand side above, we can discard the terms that do not depend on *μ*, as well as the terms containing *G*0, *μ*C which cancels by equation ([eq:dGdmuIsZero]), and that containing *G*0, *μ*2C which cancels a the maximum pressure *p*C. Accordingly, the energy ([eq:diffuseInterfaceModel-Announce]) can be approximated as $$\mathcal{E} \approx \int\_{-\infty}^{+\infty} \left[ \mathrm{Cte}(p) + G\_{0,p\mu}^\mathrm{C}\,(p-p\_{\mathrm{C}})\,(\mu-\mu\_{\mathrm{C}}) + \frac{G\_{0,\mu^{3}}^\mathrm{C}}{6}\,(\mu-\mu\_{\mathrm{C}})^{3} + \frac{1}{2}\,B\_{0}^{\mathrm{C}}\,\left(\frac{\mathrm{d}\mu}{\mathrm{d}Z}\right)^{2} \right]\,\mathrm{d}Z \textrm{.} \label{eq:energyExpansionSech2}$$ The signs of the coefficients appearing in the integrand are important: for our particular constitutive law and using the results of Section [s:homsolbarmod], their numerical value is $$G\_{0,p\mu}^\mathrm{C} = \frac{\partial^{2} G\_{0}}{\partial p\,\partial \mu}(p\_{\mathrm{C}},\mu\_{\mathrm{C}}) = -9.366, \quad G\_{0,\mu^{3}}^\mathrm{C} = \frac{\partial^{3} G\_{0}}{\partial \mu^{3}}(p\_{\mathrm{C}},\mu\_{\mathrm{C}}) = -3.413,\quad B\_{0}^{\mathrm{C}}=B\_{0}(p\_{\mathrm{C}},\mu\_{\mathrm{C}}) = 0.8956 \nonumber \textrm{.}$$ A balance argument on the three last term in the integrand above suggests the change of variable $$\mu(Z) = \mu\_{\mathrm{C}} + (p\_{\mathrm{C}} - p)^{1/2}\,\mu^{\dag}\,\overline{\mu}(\overline{Z}) \label{eq:muRescaleSoliton}$$ where $$\mu^{\dag} = \left(\frac{2\,|G\_{0,p\mu}^\mathrm{C}|}{| G\_{0,\mu^{3}}^\mathrm{C} |}\right)^{1/2},\quad \overline{Z} = \left( \frac{2\,|G\_{0,p\mu}^\mathrm{C}|\,|G\_{0,\mu^{3}}^\mathrm{C}|}{ (B\_{0}^\mathrm{C})^{2} }\, (p\_{\mathrm{C}}-p) \right)^{1/4} \, Z \textrm{.} \label{eq:ZbarSoliton}$$ In terms of the rescaled variables, the energy expansion ([eq:energyExpansionSech2]) writes $$\overline{\mathcal{E}} = \frac{1}{2}\int\_{-\infty}^{+\infty} \left[ \overline{\mu}(\overline{Z})-\frac{\overline{\mu}^{3}(\overline{Z})}{3} + \left(\frac{\mathrm{d}\overline{\mu}}{\mathrm{d}\overline{Z}}\right)^{2} \right] \,\mathrm{d}\overline{Z}, \nonumber$$ after dropping the term *G*0(*p*, *μ*C) that is independent of *μ*, and rescaling the energy using a numerical constant. The weakly non-linear solutions are the stationary points $\overline{\mu}(\overline{Z})$ of this energy functional. They can be analyzed based on the analogy with a mass moving in a potential $U(\overline{\mu}) = -\frac{\overline{\mu}}{2}+\frac{\overline{\mu}^{3}}{6}$, when $\overline{Z}$ is viewed as a time variable. A first type of solutions are those corresponding to the equilibria in the effective potential $U(\overline{\mu})$, namely $\overline{\mu}(\overline{Z}) = \pm 1$: they yield an expansion of the branch of homogeneous solutions near the point of maximum pressure, as can be checked. A second type of solution corresponds to a soliton, *i.e.* to a non-constant but bounded solution; it can be derived by a quadrature method, using the conservation of the total mechanical energy of the mass in the effective potential. The result is $$\overline{\mu}(\overline{Z}) = -1 + \frac{3}{\cosh^{2}\frac{\overline{Z} - \overline{Z}\_{0}}{2}} \textrm{.} \label{eq:weaklyNLSolutionSoliton}$$ This solution represents a weakly localized bulge centered about $\overline{Z}\_{0}$. It is identical to that derived by Fu *et al.*  based on the full membrane model. The soliton solution ([eq:weaklyNLSolutionSoliton]) is plotted in figure [fig:soliton-test]b–c, and compared to the non-linear solution of the diffuse interface model: the weakly non-linear solution and the full non-linear solution agree asymptotically close to the bifurcation point, as expected. ![image](soliton-test) [fig:soliton-test] Unlike the weakly non-linear solution from the previous section, this solution captures the rapid localization of the bulge past the bifurcation point: from ([eq:ZbarSoliton]), the width of the interface is *Z* ∼ (*p* − *p*C)− 1/4 near bifurcation. To derive the weakly non-linear solution, we have retained some terms in the expansion of the energy ([eq:energyExpansionSech2]), and omitted others such as $\frac{1}{2}\,G\_{0,\mu p^{2}}^\mathrm{C}\,(\mu-\mu\_{\mathrm{C}})\,(p-p\_{\mathrm{C}})^{2}$ or $\frac{1}{2}\,B\_{0,p}^\mathrm{C}\,(p-p\_{\mathrm{C}})\,{\mu'}^{2}$. This can be justified *a posteriori*, based on the scaling laws of the weakly non-linear solution: the scaling assumptions (*μ* − *μ*C) ∼ (*p*C − *p*)1/2 and *Z* ∼ (*p*C − *p*)− 1/4 not only make the last three terms in ([eq:energyExpansionSech2]) balanced (which is by design), they also make the other terms negligible, as can be checked. The expansion ([eq:energyExpansionSech2]) is therefore consistent. Conclusion and final remarks ============================ We have proposed a diffuse interface model for the analysis of the formation, localization and propagation of bulges in cylindrical rubber balloons. The model has been derived from the non-linear membrane theory, and is asymptotically exact in the limit where the strain gradient $\frac{\mathrm{d}\mu}{\mathrm{d}Z}$ is small compared to 1/*R*. Analytical and numerical solutions to the diffuse interface model have been obtained, showing good agreement with the predictions of the original non-linear membrane model, both at the onset of localization and for well-localized solutions: in practice, the diffuse interface model remains accurate well beyond its domain of strict mathematical validity, *i.e.* even for relative large gradients such as those found at the boundary between a bulged and a non-bulged domain. Hopefully, our work will shed new light on the classical problem of bulging in elastic balloons, and will help highlighting its tight connection with the theory of phase transitions. The model handles finite strain: only the strain *gradients* are assumed to be small. The elastic response of the material under finite strain gets reflected into the diffuse interface model through the non-linear potential *G*0(*p*, *μ*) and the non-linear strain gradient modulus *B*0(*p*, *μ*). Thanks to this feature, the predictions of the model are exact as far as linear bifurcation analyses are concerned, and remain accurate even well into the post-bifurcation regime. By contrast, expansion methods underlying bifurcation analyses typically assume that the solution is close to a homogeneous solution, and have a much narrower domain of applicability. A simple expression for the coefficient *B*0(*p*, *μ*) of the strain gradient term has been established, see ([eq:Bexpression]). Remarkably, this coefficient is directly proportional to the pre-stress $\boldsymbol{\Sigma}\_0$ in the homogeneous solution, see ([eq:E-tmp1]). The same observation has been made concerning the strain-gradient model applicable to necking in a hyperelastic cylinder . In both cases, the contribution of the strain gradient to the elastic energy occurs through a ‘geometric rigidity effect’ — the vibration of a string under tension is another example of this geometric rigidity effect, whereby the pre-stress brings in an effective elastic stiffness. The consistency of our results with those of Audoly & Hutchinson  for a solid cylinder can be checked as follows. In ([eq:Efinal]), we have derived the contribution of the strain gradient to the energy of a balloon using dimensionless quantities. In terms of the original (non-scaled) quantities, it can be rewritten as $\frac{1}{2}\int\_{0}^{L}B\_{0}^{\*}\,\mu'^{2}(Z)\,\mathrm{d}Z$, where the non-scaled modulus is found from ([eq:Bexpression]) as $B\_{0}^{\*} = (2\pi\,R\,H)\,R^{2}\,\frac{1}{\lambda}\frac{\partial {w}\_{0}^{\*}}{\partial \lambda}$, after restoring the initial area that had been scaled out. Identifying *I* = (2*π* *R* *H*) *R*2 as the geometric moment of inertia of the annular cross-section in reference configuration, the contribution of the strain gradient to the energy can therefore be written as $\frac{1}{2}\int\_{0}^{L}I\,\frac{1}{\lambda}\frac{\partial {w}\_{0}^{\*}}{\partial \lambda}\,\mu'^{2}(Z)\,\mathrm{d}Z$ in an axisymmetric membrane. In fact, this holds for a solid cylinder as well, as can be checked by combining equations 2.28, 2.13 and 2.8 from . In the future, the following extensions of the model could be considered. While we have only been concerned with bifurcations in this paper, one could analyze the stability of the solutions based on the diffuse interface model as well: presumably, this would confirm the stability results obtained previously from the non-linear membrane model. Different boundary conditions than the natural boundary conditions could be used at the ends of the balloons: a hard plug can be enforced by the boundary condition *μ* = 1, while a rounded elastic cap can be prescribed through a non-cylindrical reference configuration *R*(*Z*); if, however, the prescribed initial profile *R* varies quickly with *Z*, there may be a conflict with our asymptotic procedure that assumes slow variations, and the diffuse interface model may not be able to account accurately for such boundary conditions. A coupling with an electrical field could also be introduced, as is relevant to the actuation of balloons made of dielectric elastomers ; we believe that a natural extension of our asymptotic expansion can be derived for dielectric balloons. In future work, we also hope to apply our asymptotic reduction method to other localization phenomena occurring in slender structures, such as the Plateau-Rayleigh instability in soft elastic rods with surface tension , and the localized kinks in tape springs . *We would like to thank John Hutchinson for drawing our attention to the absence of a strain gradient model for the analysis of bulging in balloons, and for providing useful feedback on the manuscript.* Elimination of *λ*(*Z*) from the diffuse interface model ======================================================== This appendix provides details on the elimination of the unknown *λ*(*Z*) from the intermediate form ([eq:E-tmp1]) of the energy, leading to the final form ([subeqs:EfinalAndB0]). As the approximation of the energy ([eq:E-tmp1]) contains no derivative of the axial stretch *λ*, optimizing with respect with the *λ* variable yields an algebraic (*i.e.* non-differential) problem for *λ*(*Z*) $$\frac{\partial g\_0}{\partial \lambda} (p, \lambda (Z), \mu (Z)) + \frac{\partial \boldsymbol{\Sigma}\_0}{\partial \lambda} (\lambda (Z), \mu (Z)) \cdot \mathbf{E}\_1 (\mu' (Z)) +\mathcal{O} (\varepsilon^4) = 0. \label{eq:nonHomAxialEquil}$$ The first term is of order *ɛ*0 = 1, the following term is of order *ɛ*2 by ([eq:mainScalings]), and we have omitted terms of order *ɛ*4 and higher. Let us solve ([eq:nonHomAxialEquil]) order by order for *λ*. At dominant order *ɛ*0 = 1, *λ*(*Z*) is a solution of $\frac{\partial g\_0}{\partial \lambda} (p, \lambda, \mu (Z)) = 0$, which we identify as the equilibrium condition ([eq:g0EquiAxial]) applicable to homogeneous solutions. In ([eq:lambda0def]), the solution to this equation has be=en introduced as *λ*(*Z*) = *λ*0(*p*, *μ*(*Z*)), *i.e.* *λ*(*Z*) = *λ*0(*p*, *μ*(*Z*)) + O(*ɛ*2). To order *ɛ*2, the solution of equation ([eq:nonHomAxialEquil]) is found as *λ*(*Z*) = *λ*0(*p*, *μ*(*Z*)) + *λ*[2](*Z*) + O(*ɛ*4),  where $\lambda\_{[2]} =-\left( \frac{\partial \boldsymbol{\Sigma}\_0}{\partial \lambda} \cdot \mathbf{E}\_1 (\mu') \right) / \frac{\partial^2 g\_0}{\partial \lambda^2}$ is a correction of order *ɛ*2, arising from the strain correction **E**1(*μ*ʹ) = O(*ɛ*2) (the expression of *λ*[2] is given for the sake of completeness but is not needed in the following). Inserting the expansion ([eq:lambdaToOrderEps2]) for *λ*(*Z*) into ([eq:E-tmp1]) and expanding in series, we find $$\mathcal{E}= \int \left[ G\_0 (p, \mu) + \frac{\partial g\_0}{\partial \lambda} (p, \lambda\_0, \mu) \,\lambda\_{[2]} \right] \,\mathrm{d} Z + \int \boldsymbol{\Sigma}\_0 (\lambda\_0, \mu) \cdot \mathbf{E}\_1 (\mu')\, \mathrm{d} Z +\mathcal{O} (L \,\varepsilon^4),$$ where we use *λ*0 as a shorthand for *λ*0(*p*, *μ*(*Z*)). As the second term in the bracket vanishes by ([eq:lambda0def]), we are left with $$\mathcal{E}= \int G\_0 (p, \mu)\, \mathrm{d} Z + \int \boldsymbol{\Sigma}\_0 (\lambda\_0, \mu) \cdot \mathbf{E}\_1 (\mu') \,\mathrm{d} Z +\mathcal{O} (L\, \varepsilon^4).$$ Inserting the explicit expressions of $\boldsymbol{\Sigma}\_0$ and **E**1 from ([eq:sigma0]) and ([eq:E1]), one obtains the expressions announced in ([subeqs:EfinalAndB0]). 45 [1]#1 [1]`#1` urlstyle[1]doi:#1[1]doi:[2]#2 ,,,  () (). ,,,  () (). ,, (). ,, Oxford Series on materials modelling,,. ,,, (). ,, Metallurgy and metallurgical engineering series,,,. ,, (). ,,, (). ,,,, (). ,,, (). ,,, (). ,,,  () (). ,,,,  () (). ,,, (). ,,, (). ,,, (). ,,,  () (). ,,, (). ,,,. ,,,,,. ,, vol. of,,,. ,,, (). ,,, (). ,, (). ,,,,. ,,, (). ,,. ,,, (). ,,  () (). ,, (). ,,,,  () (). ,,, (). ,,, (). ,,, (). ,,, (). ,,, (). ,,, (). ,, (). ,, (). ,,,,,,,,. ,, Ph.D. thesis,,,. ,,,  () (). ,,,,,, (). ,,,,,,  () (). ,,, (). A diffuse interface model for the analysis of propagating bulges in cylindrical balloons ======================================================================================== Axisymmetric membranes Bifurcation Asymptotic analysis Strain gradient elasticity With the aim to characterize the formation and propagation of bulges in cylindrical rubber balloons, we carry out an expansion of the non-linear axisymmetric membrane model assuming slow axial variations. We obtain a diffuse interface model similar to that introduced by van der Waals in the context of liquid-vapor phase transitions. This provides a quantitative basis to the well-known analogy between propagating bulges and phase transitions. The diffuse interface model is amenable to numerical as well as analytical solutions, including linear and non-linear bifurcation analyses. Comparisons to the original membrane model reveal that the diffuse interface model captures the bulging phenomenon very accurately, even for well-localized phase boundaries. Introduction ============ Problem and background ---------------------- Bulges in cylindrical rubber balloons are a classical example of localization in solid mechanics. When a balloon is inflated, an initial regime of uniform inflation is followed by the formation of a bulge: the bulge appears initially as a long-wavelength buckling mode that localizes rapidly and then grows locally, until it propagates and eventually invades the entire balloon . As in other localization phenomena, the formation of a bulge reflects the non-convexity of the strain energy when restricted to homogeneous deformations: the onset of bulging occurs quickly after the maximum in the volume-pressure loading curve, and the propagation pressure can be predicted by Maxwell’s equal-area rule . Several other localization phenomena have been studied in solid mechanics, such as stress-induced phase transformations , the necking of bars , kink bands in compressed fiber composites , as well as localized structures in thin elastic shells  and tape springs . These localization phenomena have been investigated based on two types of models, as discussed in  for example. On the one hand, *non-regularized models*, also known as sharp interface models, make use of a classical strain energy functional depending solely on the strain: the onset of localization is associated with the loss of ellipticity of the equations of equilibrium at a critical value of the load . Such models can typically predict the critical load, the formation of different phases and the orientation of the phase boundaries, but cannot predict their subsequent evolution, nor their number or distribution in space; they cannot resolve the displacement inside the localized region either. On the other hand, *regularized models*, also known as diffuse interface models, make use of a stored elastic energy functional depending on both the strain and the *strain gradient*: such models remedy the limitations of the non-regularized models, and in particular remain well posed beyond the onset of localization . Regularized models are often introduced heuristically, but can in some cases be justified mathematically. Such a justification has been done in the case of periodic elastic solids, such as elastic crystals , trusses made of elastic bars or beams , or elastic solids with a periodic micro-structure . In these works on periodic solids, the ratio *R*/*L* ≪ 1 of microscopic cell size *R* to the macroscopic dimension *L* of the structure is used as an expansion parameter, and the homogenized properties of the periodic medium are derived through a systematic expansion in terms of the macroscopic strain and of its successive gradients. The goal of this paper is to derive a one-dimensional, regularized model applicable to the analysis of axisymmetric bulges in cylindrical rubber balloons. It is part of a general effort to characterize localization phenomena occurring in slender structures, which have been much less studied than in periodic solids. In slender structures, regularized models can be derived by an asymptotic expansion as well, using now the aspect ratio *R*/*L* ≪ 1 as the expansion parameter where *R* is the typical transverse dimension of the structure and *L* its length. This approach has been carried out for the analysis of necking, and diffuse interface models have been derived asymptotically, first for a two-dimensional hyperelastic strip by Mielke  and later for a general prismatic solid in three dimensions by Audoly & Hutchinson . These authors proposed an expansion method upon which we build ours (and which we improve). Our asymptotic expansion starts from the axisymmetric membrane model, which has been used extensively to analyze bulges in cylindrical balloons . Its outcome is a one-dimensional diffuse interface model, exactly similar to that introduced heuristically by van der Waals  to analyze the liquid-vapor phase transitions at a mesoscopic level. The analogy between bulges in balloons and phase transitions has been known for a long time: Chater & Hutchinson  have adapted Maxwell’s rule for the the coexistence of two phases to derive the pressure at which a bulge can propagate in a balloon, while Müller & Strehlow  have proposed a pedagogical introduction to the theory of phase transitions based on the mechanics of rubber balloons. Here, we push the analogy further, and show that the diffuse interface model can provide a quantitative description of bulges in balloons, not only accounting for the propagation pressure, but also for the domain boundary between the bulged and unbulged phases, as well as for its formation via a bifurcation—borrowing from the theory of phase transitions, we will refer to this boundary as a ‘diffuse interface’. The diffuse interface model is classical, tractable, and amenable to analytical bifurcation and post-bifurcation analysis, as we demonstrate. It is also simpler than the axisymmetric membrane model on which it is based. There is a vast body of work on the bulging of cylindrical balloons, all of which have used the theory of axisymmetric membranes as a starting point. The stability and bifurcations from homogeneous solutions have been analyzed in . Non-linear solutions comprising bulges have been derived in . The analysis of stability has been later extended to arbitrary incompressible hyperelastic materials, to various closure conditions at the ends of the tube, as well as to various type of loading controls based on either the internal pressure, the mass or the volume of the enclosed gas . In a recent series of four papers, Fu *et al.*  revisit the bifurcation problem, complement it with the weakly non-linear post-bifurcation analysis in the case of an infinite tube, and address imperfection sensitivity. Besides these theoretical studies, there has been a number of experimental and numerical papers on balloons. A compelling agreement between experiments and numerical simulations of the non-linear membrane model has been obtained by Kyriakides & Chang , who provide detailed experimental and numerical results on the initiation, growth and propagation of bulges, highlighting the analogy with phase transitions. Given that the agreement between experiments and the non-linear membrane theory has already been covered thoroughly in this work, our focus here will be on comparing the diffuse interface model to the non-linear membrane model, using exactly the same material model as in Kyriakides’ simulations and experiments. Outline of the main results --------------------------- Our work focusses on solutions to the non-linear axisymmetric membrane model that vary slowly in the axial direction, as happens typically at the onset of localization. A systematic expansion of the membrane energy is obtained in terms of the aspect-ratio parameter *ɛ* = *R*/*L* ≪ 1, where *R* is the initial radius of the balloon and *L* its initial length. The result reads $$\mathcal{E} [p, \mu] = \int\_{0}^{L} \left[ G\_0 (p, \mu (Z)) + \frac{1}{2} B\_0 (p, \mu (Z)) \mu^{\prime 2} (Z) \right] \mathrm{d} Z \label{eq:diffuseInterfaceModel-Announce}$$ where *p* denotes the (scaled) internal pressure, a control parameter in the experiments, *Z* is the axial coordinate and *μ*(*Z*) is a strain measure, see figure [fig:experiment]. Specifically, *μ* is the orthoradial stretch, defined as the ratio $\mu(Z)= \frac{r (Z)}{R}$ of the current radius *r* to the initial radius. ![Inflation of a cylindrical membrane: (a) reference configuration, (b) sketch of an equilibrium configuration when the membrane is subject to an axial force F and an internal pressure p. (c) Equivalent diffuse interface model derived in this paper: a bar having an order parameter \mu(Z) undergoes a phase transformation.](balloon-geom "fig:") [fig:experiment] The potential *G*0 appearing in the first term is a non-convex function of the stretch *μ*, much like in Ericksen’s bar : the non-regularized model for the balloon would correspond to the energy functional ∫0*L**G*0(*p*, *μ*(*Z*)) d*Z*. Values of *p* such that several minima of *G*0 exist correspond to pressures for which different phases (associated with different values of the stretch *μ*) are in competition. The second term $\frac{1}{2}\,B\_{0}\,\mu'^{2}$ in the integrand is a correction of order *ɛ*2, that accounts for the energetic cost of inhomogeneity; in the theory of phase transitions, this is the term that would account for surface tension at an interface. We provide simple and explicit formulas for both the potential *G*0 characterizing homogeneous solutions, see §[s:homsolbarmod], equations ([eq:strainEnergy]), ([eq:H0Decomposition]) ([eq:lambda0def]) and ([eq:Gdef]) in particular, and for the modulus *B*0 of the regularizing term, see §[s:reducedmodel] and equation ([eq:Bexpression]). The diffuse interface model is obtained by a systematic, formal expansion. It is asymptotically exact and does not rely on any unjustified kinematic assumptions: equation ([eq:diffuseInterfaceModel-Announce]) approximates the energy of the original membrane model with an error of order *ɛ*4 that is negligible compared to the smallest term retained, namely the gradient term of order *ɛ*2. By contrast, regularized models for slender structures have been proposed in earlier work starting from kinematic hypotheses, which appeared to be incorrect: see the treatment of necking in an elastic cylinder in  as well as the critical discussion in . Our derivation is based on a *finite-strain* membrane model. The non-linear features of the elastic constitutive law at finite strain are ultimately reflected in the diffuse interface model through the non-linear potential *G*0(*p*, *μ*) and through the dependence of the second gradient coefficient *B*0(*p*, *μ*) on the current strain *μ*. By contrast, an assumption of small strain has been used in previous work  on the justification of a diffuse interface model to analyze phase transformations in an elastic cylinder: this assumption is questionable since the presence of coexisting phases involves finite variations of strain across the interface. The outline of the paper is as follows. In §[s:nonlinearMembraneModel], we introduce the non-linear membrane model. In §[s:homsolbarmod] we analyze its homogeneous solutions, and derive an expression for the potential *G*0. Section [s:reducedmodel] is the core of the paper, and establishes the diffuse interface model ([eq:diffuseInterfaceModel-Announce]) by an asymptotic method. Section [sec:application] derives solutions to the diffuse interface model using various methods, and compares them with the predictions of the original membrane model. Non-linear membrane model ========================= We consider a cylindrical membrane with uniform initial thickness *H* and radius *R*. We use the cylindrical coordinates (*Z*, *θ*) in reference configuration as Lagrangian variables. When subject to external load, the cylinder deforms into an axisymmetric membrane, see figure [fig:experiment]a. The cylindrical coordinates of a material point in actual configuration are written as (*r*(*Z*), *θ*), corresponding to a position **x**(*Z*, *θ*) = *z*(*Z*) **e***z* + *r*(*Z*) **e***r*(*θ*), where (**e***r*(*θ*), **e***θ*(*θ*), **e***z*) is the local cylindrical basis. In the axisymmetric membrane theory, the deformation gradient is a 3 × 2 matrix which writes $$\mathbf{F}= \mu\,\mathbf{e\_{\theta}}\otimes \mathbf{e}\_{\theta} + (R\,\mu'\,\mathbf{e}\_{r} + \lambda\,\mathbf{e}\_{z})\otimes \mathbf{e}\_{z}$$ where we have defined an apparent axial stretch *λ* and the circumferential stretch *μ* as $$\begin{aligned} \lambda(Z) & = \frac{\mathrm{d} z }{\mathrm{d} Z}(Z),\\ \mu(Z) & = \frac{r (Z)}{R}.\end{aligned}$$ The Green-Lagrange strain tensor $\mathbf{E}= \frac{1}{2} (\mathbf{F}^T \cdot \mathbf{F}-\mathbf{1})$ is a 2 × 2 diagonal matrix in the basis (**e***θ*, **e***z*) tangent to the undeformed mid-surface: it will be represented compactly as a vector, whose entries are the diagonal components *E**θ* and *E**z* of the matrix, $$\mathbf{E} (\lambda, \mu, \mu') = \left(\begin{array}{c} E\_{\theta}\\ E\_z \end{array}\right) = \frac{1}{2} \begin{pmatrix} \mu^2 - 1\\ \lambda^2 + (R \mu')^2 - 1 \end{pmatrix}, \label{eq:genericMembraneStrain}$$ where $\mu' = \frac{\mathrm{d}\mu}{\mathrm{d}Z}$ is a stretch gradient, namely the axial gradient of circumferential stretch. A material model is now specified through a strain energy per unit volume *w*\*(*E**θ*, *E**z*). In previous work on axisymmetric membranes , the 2d material model proposed by Ogden  for incompressible rubber has been used: $$w^{\*}(E\_{\theta}, E\_z)= \sum\_{i = 1}^3 \frac{S\_i}{\alpha\_i} \left(\ell\_{\theta}^{\alpha\_i} + \ell\_z^{\alpha\_i} + \left( \frac{1}{\ell\_{\theta} \ell\_z} \right)^{\alpha\_i} \right), \quad \textrm{with } \begin{cases} \ell\_{\theta} = \sqrt{2 E\_{\theta} + 1} & \textrm{(circumfer.\ stretch)}\textrm{,} \\ \ell\_z = \sqrt{2 E\_z + 1}& \textrm{(axial stretch)}\textrm{.} \end{cases} \label{eq:ogdenlaw}$$ We use this model as well for our numerical examples, with the same set of material parameters *α**i*’s and *S**i*’s as used in previous work, namely *S*1 = 617 kPa, *S*2 = 1.86kPa, *S*3 =  − 9.79kPa, *α*1 = 1.3, *α*2 = 5.08 and *α*3 =  − 2. All our results can be easily adapted to a different constitutive law. For this constitutive law, the initial shear modulus *S*ini can be obtained as *S*ini = ∑*i* = 13*α**i* *S**i*. The domain 0 ≤ *Z* ≤ *L* represents one half of a balloon comprising a single bulge center at *Z* = 0, with symmetry conditions *μ*ʹ = *λ*ʹ = 0 enforced at *Z* = 0. At the other endpoint *Z* = *L*, we consider the ideal boundary conditions sketched in figure [fig:experiment]b, whereby the terminal section of the balloon is resting and freely sliding on a planar ‘plug’. These conditions would be difficult to achieve in experiments but they offer the advantage of being compatible with a uniform expansion of the membrane, which simplifies the analysis. By contrast, actual cylindrical balloons are typically closed up on their ends and cannot be inflated in a homogeneous manner due to end effects; these end effects could reproduced by employing different boundary conditions, but we prefer to ignore them. Note that Kyriakides & Chang use a rigid plug condition on one end, *μ*(*L*) = 1, which is not realistic either. Our boundary conditions, sketched in figure [fig:experiment] and provided in explicit form in §[pre-ssec:fullSimulations][ssec:fullSimulations], are natural: the applicable equilibrium condition will emerge automatically from the condition that the energy is stationary. As in the experiments of Kyriakides & Chang , the membrane is subject to an interior pressure *p*\* and to a stretching force *F*\* applied along the axis, see figure [fig:experiment]. The total potential energy reads $$\mathcal{E}\_{\mathrm{memb}}^{\ast} = \int\_{0}^{L}\Big( w^{\*}(\mathbf{E})\,2\,\pi\,R\,H\,\mathrm{d}Z - \pi\,(R\,\mu)^{2}\,(\lambda\,\mathrm{d}Z)\,p^{\ast} - (\lambda\,\mathrm{d}Z)\,F^{\*} \Big) \nonumber$$ where 2 *π* *R* *H* d*Z* is the initial volume element, *π* (*R* *μ*)2 (*λ* d*Z*) = *π* *r*2 d*z* is the current enclosed volume element, and *λ* d*Z* = d*z* is the current axial length element. We introduce a rescaled energy, denoted without an asterisk as $\mathcal{E}\_{\mathrm{memb}} = \frac{\mathcal{E}\_{\mathrm{memb}}^{\ast}}{ (2\pi\,R\,H)\,S\_{\mathrm{ini}}}$: $$\mathcal{E}\_{\mathrm{memb}}[p,\lambda,\mu]= \int\_{0}^{L}\left( w (\mathbf{E}(\lambda,\mu,\mu')) - p \frac{e}{2}\, \lambda \, \mu^2 - F \lambda \right) \mathrm{d} Z \textrm{.} \label{eq:completeEnergy}$$ The strain energy, the force and pressure have been rescaled as well, as $w = \frac{w^{\*}}{S\_{\mathrm{ini}}}$, $F = \frac{F^{\ast}}{2 \pi R H S\_{\mathrm{ini}} }$ and $p=\frac{p^{\ast}}{S\_{\mathrm{ini}}}$, respectively, and $e = \frac{R}{H}$ is an initial aspect ratio. In our numerical examples, we use the same value $e = \frac{55}{16}$ as in : even though this balloon is relatively thick prior to deformation, the non-linear membrane model has been checked to match the experimental results accurately in . We also use the same value of the load *F* = 1.149 as in these experiments. The parameter *F* will never be changed, and we do not keep track of how the various quantities depend on *F*; the argument *F* will systematically be omitted in functions, as we did already in the left hand side of ([eq:completeEnergy]). The functions *λ*(*Z*) and *μ*(*Z*) that make the energy ([eq:completeEnergy]) stationary yield the axisymmetric equilibra of the balloon. These solutions are obtained by a numerical method described in section [pre-ssec:fullSimulations][ssec:fullSimulations], and are plotted as the black curves in figure [fig:fullnonlinearsolutions], where they are used as a reference. Analysis of homogeneous solutions ================================= Our general goal is to justify the diffuse interface model when *λ*(*Z*) and *μ*(*Z*) vary slowly as a function of *Z*. In this section, we start by considering the case where *λ* and *μ* do not depend on *Z*, $$\frac{\mathrm{d}\lambda}{\mathrm{d}Z} = 0,\qquad \frac{\mathrm{d}\mu}{\mathrm{d}Z} = 0 \textrm{.}$$ This corresponds to homogeneous solutions, *i.e.* to solutions with uniform inflation. These homogeneous solutions are well known, and are re-derived here for the sake of completeness. A catalog of such homogeneous solutions will be obtained, which plays a key role in the subsequent derivation of the diffuse interface model. Kinematics of homogeneous solutions ----------------------------------- For homogeneous solutions, the gradient term *μ*ʹ in ([eq:genericMembraneStrain]) vanishes and the membrane strain reads $$\mathbf{E}\_0 (\lambda, \mu) = \begin{pmatrix} E\_{0}^{\theta} \\ E\_{0}^{z} \end{pmatrix} = \frac{1}{2} \left( \begin{array}{c} \mu^2 - 1\\ \lambda^2 - 1 \end{array} \right). \label{eq:E0}$$ All the quantities pertaining to homogeneous solutions are denoted using a subscript ‘0’. In the homogeneous case, the strain energy becomes $$w\_0 (\lambda, \mu) = \frac{1}{S\_{\mathrm{ini}}} \sum\_{i = 1}^3 \frac{S\_i}{\alpha\_i} \left(\lambda^{\alpha\_i} + \mu^{\alpha\_i} + \left( \frac{1}{\lambda\, \mu} \right)^{\alpha\_i} \right) \textrm{.} \label{eq:strainEnergy}$$ Of particular importance will be the second Piola-Kirchhoff membrane stress $\boldsymbol{\Sigma}\_{0}$, defined as the gradient of the strain energy with respect to the strain: $$\boldsymbol{\Sigma}\_0 (\lambda, \mu) = \begin{pmatrix} \Sigma\_{0}^{\theta} \\ \Sigma\_{0}^{z} \end{pmatrix} = \begin{pmatrix} \frac{\partial w}{\partial E\_{\theta}} \\ \frac{\partial w}{\partial E\_{z}} \end{pmatrix}\_{\mathbf{E} = \mathbf{E}\_{0}} = \begin{pmatrix} \frac{1}{\mu} \frac{\partial w\_{0}}{\partial \mu} (\lambda, \mu)\\ \frac{1}{\lambda} \frac{\partial w\_{0}}{\partial \lambda} (\lambda, \mu) \end{pmatrix}\textrm{.} \label{eq:sigma0}$$ Equilibrium of homogeneous solutions ------------------------------------ In view of ([eq:completeEnergy]), the total potential energy of a homogeneous solution per unit reference length is $$g\_0 (p, \lambda, \mu) = w\_0 (\lambda, \mu) - p \frac{e}{2} \,\lambda \, \mu^2 - F \lambda. \label{eq:H0Decomposition}$$ Given the load parameters *p* (and *F*) the equilibrium values of *λ* and *μ* are found by the condition of equilibrium in the axial and transverse directions, $$\begin{aligned} \frac{\partial g\_0}{\partial \lambda} (p, \lambda, \mu) & = & 0, \label{eq:g0EquiAxial}\\ \frac{\partial g\_0}{\partial \mu} (p, \lambda, \mu) & = & 0. \label{eq:g0EquiTransv}\end{aligned}$$ We leave the load *p* left unspecified for the moment, and we view the axial equilibrium ([eq:g0EquiAxial]) as an implicit equation for *λ* = *λ*0(*p*, *μ*) in terms of *p* and *μ*: by definition, *λ*0(*p*, *μ*) is the solution to the implicit equation $$\frac{\partial g\_0}{\partial \lambda} (p, \lambda\_0 (p, \mu), \mu) = 0. \label{eq:lambda0def}$$ From now on, we will systematically eliminate *λ* = *λ*0(*p*, *μ*) in favor of the second unknown *μ*. Starting with the potential *g*0, we define a reduced potential *G*0 as *G*0(*p*, *μ*) = *g*0(*p*, *λ*0(*p*, *μ*), *μ*), as well as the stress *n*0 dual to *μ*, $$n\_0 (p, \mu) = - \frac{\partial G\_0}{\partial \mu} (p, \mu) \textrm{.} \label{eq:n0label}$$ This *n*0(*p*, *μ*) can be interpreted as an imbalance of hoop stress; it vanishes at equilibrium, *n*0(*p*, *μ*) = 0. Indeed, we have $n\_0 = - \frac{\partial G\_{0}}{\partial \mu} = - \frac{\mathrm{d} g\_0 (p, \lambda\_0 (p, \mu), \mu)}{\mathrm{d} \mu} = - \frac{\partial g\_0}{\partial \mu} - \frac{\partial \lambda\_0}{\partial \mu} \frac{\partial g\_0}{\partial \lambda}$, where the both terms are zero by the equilibrium conditions ([eq:g0EquiTransv]–[eq:lambda0def]). To summarize, we view *λ* as an internal variable slaved to the ‘macroscopic’ variable *μ* (the roles of *λ* and *μ* could be exchanged but the other way around would be more complicated as the mapping from *λ* to *μ* is not single-valued). A catalog of homogeneous solutions can be obtained by (*i*) solving the axial equilibrium ([eq:g0EquiAxial]) for *λ* = *λ*0(*p*, *μ*), (*ii*) defining a reduced potential energy *G*0(*p*, *μ*) by ([eq:Gdef]), and (*iii*) solving the equilibrium condition *n*0(*p*, *μ*) = 0 in the (*p*, *μ*) plane. This program has been carried out and the results are shown in figure [fig:homsolutions]. ![Analysis of the homogeneous solutions, with F=1.149. (a) For any value of the pressure p, the axial equilibrium ([eq:lambda0def]) yields an implicit curve in the (\mu,\lambda) plane that defines the stretch \lambda_{0}(p,\mu) in terms of \mu; the different curves correspond to p = 0.0885, 0.1002, 0.1087, 0.1187, 0.1285, 0.140, 0.1646, 0.200. (b) Reduced potential G_0 as a function of \mu for the same set of values of p. Critical points are shown in red: Considère points \mathrm{C} and \mathrm{C}' where the pressure is extremal (disks), and Maxwell construction (line and diamonds). (c) Homogeneous solutions in the (\mu,p)-plane, as determined by solving the transverse equilibrium n_{0}(p,\mu)=0. The dashed part of the curve between the Considère points is unstable, as \partial^{2}G_{0}/\partial \mu^{2} < 0. (d) Same set of homogeneous solutions, now represented in the (v_{0},p) plane where v_{0}=\mu^2 \, \lambda_{0}(p,\mu) is the ratio of the final to the initial volume. With this set of conjugate variables, Maxwell’s rule applies and the two shaded regions have the same area.](homo-sols "fig:") [fig:homsolutions] The homogeneous stretch *λ*0(*p*, *μ*) and the potential *G*0(*p*, *μ*) are shown in parts a and b of the figure. In figure [fig:homsolutions]c, the pressure is plotted in terms of *μ* and is seen to increase, attain a local maximum *p*C = 0.1646, decrease, attain a local minimum *p*Cʹ = 0.1002, and finally increase again. The points of extremal pressure are where the onset of localization is expected to occur in a infinite medium (*L* = ∞) according to Considère’s criterion : we will refer to them as *Considère points*. For intermediate values of the pressure, *p*C < *p* < *p*C', the potential *G*0(*p*, *μ*) plotted in figure [fig:homsolutions]b has two minima and one maximum as a function of *μ*. The non-convexity of *G*0 makes it possible for the bulged and unbulged domains to coexist, as recalled in the next section; the diffuse interface model derived later in §[s:reducedmodel] will be able to account for the boundary between these domains. Maxwell’s construction ---------------------- In a first attempt to address inhomogeneities, we consider solutions made up of two phases, with respective properties (*λ*a = *λ*0(*p*, *μ*a), *μ*a) and (*λ*b = *λ*0(*p*, *μ*b), *μ*b). Discontinuities are allowed for the moment, their contribution to the energy being ignored: gradient term *μ*ʹ appearing in the membrane model are simply discarded. Let *c* denote the fraction of the phase ‘a’, and (1 − *c*) the fraction of the phase ‘b’, as measured after pulling everything back in the reference configuration. Under these assumptions, the membrane energy ([eq:completeEnergy]) takes the form $$\mathcal{E}\_{0} \left( p, c, \mu\_{\mathrm{a}}, \mu\_{\mathrm{b}} \right) = L \left( c\,G\_0 \left( p, \mu\_{\mathrm{a}} \right) + (1 - c) \, G\_0 \left( p, \mu\_{\mathrm{b}} \right) \right). \nonumber$$ Optimizing with respect to the *μ**i*’s and to *c* successively, we find $$\begin{array}{rclll} n\_0 (p, \mu\_i) & = & 0 & \text{\qquad} & \text{(mechanical equilibrium in each phase)}\\ G\_0 \left( p, \mu\_{\mathrm{b}} \right) - G\_0 \left( p, \mu\_{\mathrm{a}} \right) & = & 0 & & \text{(chemical equilibrium)} \end{array} \label{eq:twoPhaseEquilConds}$$ These equations can be solved for *p* and the *μ**i*’s: in particular this selects a value of the pressure *p* = *p*M, known as Maxwell’s pressure, where the two phases can coexist. The propagation pressure *p*M is a function of both the applied force *F* and of the constitutive model for the membrane, but this is implicit in our notation. For *F* = 1.149 and for the particular values of the constitutive parameters used here, we have obtained the Maxwell load as *p*M = 0.1087, see the red line joining the points labeled Ma and Mb in figure [fig:homsolutions] Maxwell’s equal-area rule for the propagation pressure can be rederived as follows. The quantity *G*0(*p*, *μ*b) − *G*0(*p*, *μ*a) appearing in ([eq:twoPhaseEquilConds]) can be written as the integral of d*G*0 along the curve corresponding to homogeneous solutions in the (*p*, *μ*) plane. Along this curve, $\frac{\partial G\_0}{\partial \mu} = - n\_0 = 0$ by ([eq:n0label]). Therefore, $\mathrm{d} G\_0 = \frac{\partial G\_0}{\partial p} (p, \mu)\, \mathrm{d} p = \frac{\mathrm{d} g\_0 (p, \lambda\_0 (p, \mu), \mu)}{\mathrm{d} p} \,\mathrm{d} p = \left( \frac{\partial g\_0}{\partial p} + \frac{\partial g\_0}{\partial \lambda} \frac{\partial \lambda\_0}{\partial \mu} \right) \mathrm{d} p = \frac{\partial g\_0}{\partial p} \mathrm{d} p $ after using ([eq:lambda0def]). In view of ([eq:H0Decomposition]), this can be written as $\mathrm{d} G\_0 = \frac{e}{2}\, v\_{0}\,\mathrm{d}p$, where $v\_{0}(p,\mu)= \lambda\_{0}(p,\mu) \mu^2 = \frac{\pi\,r^{2}\,\mathrm{d}z}{\pi\,R^{2}\,\mathrm{d}Z}$ denotes the ratio of the deformed to the undeformed volume of homogeneous solutions. Using ([eq:twoPhaseEquilConds]), the variation of *G*0 from one Maxwell point *μ*a to the other *μ*b is zero, and so $$\int\_{\mu\_{\mathrm{a}}}^{\mu\_{\mathrm{b}}} v\_{0}(p,\mu) \,\mathrm{d} p = 0 \textrm{.} \nonumber$$ This equality implies the equality of the area of the shaded regions in figure [fig:homsolutions]d, which uses *v*0 as the horizontal axis and *p* as the vertical axis. Derivation of the diffuse interface model ========================================= We proceed to derive the diffuse interface model from the non-linear membrane theory. This reduction combines an assumption of scale separation, whereby the solution is assumed to vary on a length scale *L* much larger than the radius *R*, and the elimination of the unknown *λ* in favor of *μ* by means of the relation *λ* = *λ*0(*p*, *μ*). Principle of the expansion -------------------------- We assume scale separation and use the convention that the radius *R* is fixed and finite while *L* = *R*/*ɛ* goes to infinity: the solution is sought in terms of a scaled variable *Z̃* = *ɛ* *Z* through scaled functions *λ̃* and *μ̃*, where *ɛ* ≪ 1 is our expansion parameter, $$\lambda\_{\varepsilon}(Z) = \tilde{\lambda}(\varepsilon\,Z),\qquad \mu\_{\varepsilon}(Z) = \tilde{\mu}(\varepsilon\,Z) \textrm{.} \nonumber$$ As a consequence of this scaling assumption, any derivative with respect to the slow axial variable *Z* entails a multiplication by the small parameter *ɛ*: $$\begin{aligned} {2} &\lambda\_{\varepsilon}(Z) = \tilde{\lambda}(\varepsilon\,Z) = \mathcal{O}(1),&\qquad &\mu\_{\varepsilon}(Z) = \tilde{\mu}(\varepsilon\,Z) = \mathcal{O}(1),\nonumber\\ &\frac{\mathrm{d}\lambda\_{\varepsilon}}{\mathrm{d}Z} = \epsilon\, \frac{\mathrm{d}\tilde{\lambda}}{\mathrm{d}\tilde{Z}}(\varepsilon\,Z) = \mathcal{O}(\varepsilon),&\qquad& \frac{\mathrm{d}\mu\_{\varepsilon}}{\mathrm{d}Z} = \varepsilon\, \frac{\mathrm{d}\tilde{\mu}}{\mathrm{d}\tilde{Z}}(\varepsilon\,Z) = \mathcal{O}(\varepsilon) \textrm{.} \nonumber\end{aligned}$$ For the sake of legibility, we drop the subscripts *ɛ* and remove any reference to the scaled functions *λ̃* and *μ̃* in the following: it will be sufficient for us to use the above order of magnitude estimates. Derivation of the gradient effect by a formal expansion ------------------------------------------------------- The general expression of the membrane strain ([eq:genericMembraneStrain]) can be split in two terms **E**(*λ*, *μ*, *μ*ʹ) = **E**0(*λ*, *μ*) + **E**1(*μ*ʹ),  where the first one depends on the stretch and the second one on the stretch gradient, $$\begin{aligned} \mathbf{E}\_0 (\lambda, \mu) & = & \frac{1}{2} \left( \begin{array}{c} \mu^2 - 1\\ \lambda^2 - 1 \end{array}\right), \\ \mathbf{E}\_1 (\mu') & = & \frac{1}{2} \left( \begin{array}{c} 0\\ (R \mu')^2 \end{array}\right). \label{eq:E1} \end{aligned}$$ In view of the results from the previous section, their orders of magnitude are **E**0(*λ*, *μ*) = O(1)  **E**1(*μ*ʹ) = O(*ɛ*2). In line with the fact that we use the finite elasticity theory, the strain **E**0 is of order 1. **E**1 being a small correction to **E**0, the strain energy density can be expanded as $$\begin{array}{rcl} w (\mathbf{E}) & = & w (\mathbf{E}\_0 (\lambda, \mu) +\mathbf{E}\_1 (\mu'))\\ & = & w (\mathbf{E}\_0 (\lambda, \mu)) + \frac{\partial w}{\partial \mathbf{E}}(\mathbf{E}\_0 (\lambda, \mu)) \cdot \mathbf{E}\_1 (\mu') +\mathcal{O} (|\mathbf{E}\_1|^{2})\\ & = & w\_0 (\lambda, \mu) +\boldsymbol{\Sigma}\_0 (\lambda, \mu) \cdot \mathbf{E}\_1 (\mu') +\mathcal{O} (\varepsilon^4) \end{array} \label{eq:wOfEExpansion}$$ where we have used the definition of the membrane stress $\boldsymbol{\Sigma}\_0$ in ([eq:sigma0]). Inserting this into ([eq:completeEnergy]) yields the following approximation of the energy $$\mathcal{E}\_{\text{memb}} [p, \mu] = \int\_{0}^{L} \Big( g\_0 (p, \lambda (Z), \mu (Z)) + \boldsymbol{\Sigma}\_0 (\lambda (Z), \mu (Z)) \cdot \mathbf{E}\_1 (\mu' (Z)) \Big)\, \mathrm{d} Z +\mathcal{O} (L \, \varepsilon^4). \label{eq:E-tmp1}$$ Note that the gradient of axial stretch *λ*ʹ does not appear in this expression. Energy of the diffuse interface model ------------------------------------- An important result, proved in appendix [B:Redmodel], is that it is consistent, at this order of approximation, to replace the unknown *λ*(*Z*) with the axial stretch *λ*0(*p*, *μ*(*Z*)) of the homogeneous solution having the local value of *μ*(*Z*) as its circumferential stretch. This eliminates *λ*(*Z*) from the equations, and we obtain the *diffuse interface model* [subeqs:EfinalAndB0] $$\mathcal{E} [p, \mu] = \int\_{0}^{L} G\_0 (p, \mu (Z))\, \mathrm{d} Z + \frac{1}{2} \int\_{0}^{L} B\_{0} (p, \mu (Z)) \, \mu^{\prime 2} (Z) \, \mathrm{d} Z. \label{eq:Efinal}$$ where we have omitted terms of order *L* *ɛ*4 and higher. The coefficient *B*0 of the regularizing term has a simple expression which is found by identifying with ([eq:E-tmp1]) and ([eq:sigma0]) as $$B\_{0} (p, \mu(Z)) = R^2 \, \left[ \frac{1}{\lambda} \, \frac{\partial w\_{0}}{\partial \lambda} (\lambda, \mu(Z)) \right]\_{\lambda = \lambda\_0 (p, \mu(Z))}. \label{eq:Bexpression}$$ This defines the regularizing term in terms of the energy *w*0(*λ*, *μ*) of homogeneous solutions, see ([eq:strainEnergy]). Even though this is implicit in our notation, both *G*0 and *B*0 depend on the force *F*. Equations ([eq:Efinal]–[eq:Bexpression]) are our main result, and can be restated as follows. The energy Ememb of the full non-linear membrane model can be approximated as the sum of (*i*) the non-regularized energy ∫*G*0 d*Z* which depends on the stretch *μ* but not on its gradient, and is of order *L*, and (*ii*) a much smaller correction $\frac{1}{2} \int B\_0 \, \mu^{\prime 2} \mathrm{d} Z$, of order *ɛ*2, that depends on the strain *μ* and as well as on its gradient $\mu' = \frac{\mathrm{d} \mu}{\mathrm{d} Z}$. These two terms provide an approximation of the full energy E of the non-linear membrane model which is accurate up to order *L* *ɛ*4. Non-linear equilibrium of the diffuse interface model ----------------------------------------------------- The equilibrium equations are obtained from ([eq:Efinal]) by the Euler-Lagrange method as [subeqs:BVpbDiffuseInterfacee] $$n\_0 (p, \mu (Z)) - \frac{1}{2} \, \frac{\partial B\_{0}}{\partial \mu} (p, \mu (Z)) \, \mu^{\prime 2} (Z) + \frac{\mathrm{d}}{\mathrm{d} Z} (B\_{0} (p, \mu (Z)) \, \mu' (Z)) = 0. \label{eq:equil2dGrdModel}$$ In the absence of kinematic constraints, the variational method yields the natural conditions at the endpoints as well, *μ*ʹ(0) = *μ*ʹ(*L*) = 0. Here, *μ*ʹ(*L*) = 0 is consistent with the symmetry condition at the center *Z* = 0 of the bulge. The equilibrium condition ([eq:equil2dGrdModel]) reduces to the condition ([eq:dGdmuIsZero]) applicable to homogeneous solutions, namely *n*0(*p*, *μ*) = 0, when the gradient effect is removed, by setting *B*0 = 0. Solution for a domain boundary in an infinite balloon ----------------------------------------------------- The existence of a first integral associated with the equilibrium ([eq:equil2dGrdModel]) has been noted by a number of authors such as Coleman & Newman . It can be obtained by expanding the derivative in the last term in the right-hand side, and by multiplying the entire side by *μ*ʹ(*Z*); the result is $\frac{\mathrm{d}(-G\_{0}+B\_{0}\,\mu'^{2})}{\mathrm{d}Z} = 0$. This shows that the following quantity is conserved:  − *G*0(*p*, *μ*(*Z*)) + *B*0(*p*, *μ*(*Z*)) *μ*ʹ2(*Z*) = *C*. This equation can be used to solve for *μ*(*Z*) by quadrature. However, this method is impractical for numerical calculations as it involves evaluating integrals that are close to singular, even when the singular parts are taken care of analytically . This is why our numerical simulations in §[pre-ssec:fullSimulations][ssec:fullSimulations] use a direct integration method of the equilibrium ([eq:equil2dGrdModel]) rather than the quadrature method. In the case of the boundary separating two domains in an infinite medium, however, the quadrature method is tractable. Then, the pressure matches Maxwell’s pressure, *p* = *p*M, and *μ*(*Z*) tends to *μ*a and *μ*b for *Z* →  ± ∞, respectively. The value of *C* consistent with these asymptotic behaviors is the common value *C* = *G*0(*p*M, *μ*a) = *G*0(*p*M, *μ*b) of the potential, see ([eq:twoPhaseEquilConds]). The implicit equation ([eq:conservedIntegral]) can then be plotted in the phase space (*μ*(*Z*), *μ*ʹ(*Z*)) using a contour plot method. We have checked that the resulting curve (not shown) falls on top of the dotted green curve labeled Aʹ in figure [fig:fullnonlinearsolutions]c, obtained by numerical integration of the equilibrium with a large but finite aspect ratio, *L*/*R* = 30: the analytical solution ([eq:conservedIntegral]) in an infinite balloon provides an excellent approximation to a propagating interface in a finite balloon, as long as it is sufficiently remote from the endpoints. In the bifurcation diagram, the numerical solutions appears as a point Aʹ lying almost exactly on Maxwell’s plateau, see figure [fig:fullnonlinearsolutions]a. This analytical solution is also an excellent approximation to the domain boundary predicted by the original membrane model, as discussed below in §[pre-ssec:fullSimulations][ssec:fullSimulations]. Comparison of the diffuse interface and membrane models ======================================================= [pre-ssec:fullSimulations] Using a formal expansion method, we have shown that the 2d non-linear axisymmetric membrane model (§[s:nonlinearMembraneModel]) is asymptotically equivalent to the 1d diffuse interface model in ([subeqs:EfinalAndB0]). This equivalence holds for ‘slowly’ varying solutions, *i.e.* when the axial gradients involve a length scale much larger than the tube radius, ∣d*μ*/d*Z*∣ ≪ 1/*R*. Here, we compare the predictions of the approximate diffuse interface model to those of the original membrane model. The goal is twofold. First, we verify our asymptotic expansion by checking consistency for slowly-varying solutions. Second, we push the diffuse interface model outside its domain of strict mathematical validity, by applying it to problems involving sharp boundaries and comparing to the predictions of the original membrane model. Comparison of the full bifurcation diagrams ------------------------------------------- We start by comparing the bifurcation diagrams obtained with each one of the models for balloons of finite length *L*, see figure [fig:fullnonlinearsolutions]a. ![Comparison of the predictions of the non-linear membrane model and of the diffuse interface model, for F=1.149. (a) Bifurcation diagrams for different values of the aspect-ratio L/R: homogeneous solutions (dark blue curve), bifurcated branches of the membrane model (double-struck black curves) and of the diffuse interface model (green dots); Considère’s points (red dots) and Maxwell’s plateau (dotted red line). Note that we use the logarithm of the scaled volume v on the horizontal axis, so that Maxwell’s equal-area rule does not apply directly. (b) Comparison of the deformed configurations in physical space (z/R,\mu=r/R) for L/R = 30, for the three configurations labeled \mathrm{A}, \mathrm{A}' and \mathrm{A}'' in (a), corresponding to (v,p) = (2.39, 0.146), (45
arxiv_0000310
Kinematic structures of the Solar neighbourhoodrevealed by ***Gaia*** DR1/TGAS and RAVE ======================================================================================= ### Received 10 May 2017 / Accepted 19 September 2017 We aim to study the velocity distribution of stars of the Solar neighbourhood and investigate the properties of individual kinematic structures in order to improve our understanding of their origins. Using the astrometric data provided by *Gaia* DR1/TGAS and radial velocities from RAVE DR5 we perform a wavelet analysis with the *à trous* algorithm to 55831 stars that have *U* and *V* velocity uncertainties less than $4\,{{\rm\,km\,s^{-1}}}$. An auto-convolution histogram method is used to filter the output data, and we then run Monte Carlo simulations to verify that the detected structures are real due to velocity uncertainties. Additionally we analysed our stellar sample by splitting all stars into a nearby sample ( < 300pc) and a distant sample ( > 300pc), and two chemically defined samples that to a first degree represent the thin and the thick disks. We detect 19 kinematic structures in the Solar neighbourhood between scales $3-16\,{{\rm\,km\,s^{-1}}}$ at the 3*σ* confidence level. Among them we identified well-known groups (such as Hercules, Sirius, Coma Berenices, Pleiades, and Wolf 630), confirmed recently detected groups (such as Antoja12 and Bobylev16), and detected a new structure at $(U,V)\approx(37, 8)\,{{\rm\,km\,s^{-1}}}$. Another three new groups are tentatively detected, but require further confirmation. Some of the detected groups show clear dependence on distance in the sense that they are only present in the nearby sample ( < 300pc), and others appear to be correlated with chemistry as they are only present in either of the chemically defined thin and thick disk samples. With the much enlarged stellar sample and much increased precision in distances, proper motions, provided by *Gaia* DR1 TGAS we have shown that the velocity distribution of stars in the Solar neighbourhood contains more structures than previously known. A new feature is discovered and three recently detected groups are confirmed at high confidence level. Dividing the sample based on distance and/or metallicity shows that there are variety of structures which are as large-scale and small-scale groups, some of them have clear trends on metallicities, others are a mixture of both disk stars and based on that we discuss possible origin of each group. Introduction ============ Studies of the velocity distribution of stars in the Solar neighbourhood have shown that it contains a plethora of kinematic structures, with stars that have similar space velocities (*U*, *V*, *W*). There are several possibilities to why different stars have similar kinematic properties: they could be from evaporated open clusters; they could be due to dynamical resonances within the Milky Way; or they could even be remnants of accreted satellite galaxies that merged with the Milky Way billions of years ago. This means that stellar streams retain a lot of information of various dynamical processes of the Milky Way’s past and provide clues to our understanding of the formation of the Galaxy. Mapping the structure and properties of the Milky Way, that is a benchmark galaxy, will also aid our attempts to understand the evolution and formation of large spiral galaxies in general. A detailed characterisation of the kinematic properties together with chemical composition of the stars of such structures are crucial when trying to trace their origins. The release of Hipparcos Catalogue twenty years ago boosted the study of kinematic properties of the Solar neighbourhood. For example, studied the distribution of 14369 kinematically selected stars using a maximum likelihood estimate method and detected 14 features in the *U* − *V* plane of Galactic space velocities. The *W* direction did not appear very rich in structures with only four moving groups detected. The sample was then split based on (*B* − *V*) colour index to study the behaviour of young and old stars separately. They found that there are moving groups composed of red stars (supposed to be older), while younger structures were composed of stars with different colours. This was an argument against the theory previously proposed by, that kinematic structures could be remnants of disrupted open clusters, in which stars are chemically homogeneous. Instead, propose that moving groups that follow eccentric orbits could be formed through resonances with the Galactic bar. studied a sample of 4597 Hipparcos stars and, unlike, used radial velocities provided in Hipparcos Input Catalogue. applied a wavelet analysis method for kinematic group detection, identified the most significant structures in the *U* − *V* plane, and found that the velocity distribution has a more complicated structure than just moving groups and has a larger, branch-like structure. Later, using a combination of CORAVEL radial velocities and ages, together with Tycho-2 astrometry, investigated a stellar sample of 5952 stars and found well-known streams like Hercules, Sirius, Hyades and Pleiades. They suggest that stellar groups are of dynamical origin as isochrones of associated stars with the moving groups show a big dispersion in ages. A deeper study of the origin of moving groups provided by involved wavelet transform applied to the same data as in and checked the theory of kinematic groups being remnants of open clusters. After fitting isochrones inherent for open clusters to stars associated with the Sirius, Hyades and Pleiades streams, claimed dynamical origins for these groups, as they did not match. investigated a larger sample of 24910 stars using wavelet techniques and analysed the age-metallicity distribution of the kinematic branches of Sirius, Hercules, Hyades-Pleiades and Coma Berenices. Each branch showed a wide spread of metallicities, especially Hercules. Sirius group stars were older and peaked at about 400 Myr, compared to Hyades-Pleiades which consist of mainly younger stars. later detected 22 structures by applying a wavelet analysis to a sample of 14000 dwarf stars from and 6000 giant stars from. That study provided a comprehensive comparison of the positions of all kinematic structures detected by different authors. Eleven of 22 groups had previously been found in the literature, while the remaining 11 were discovered for the first time. identified 19 structures in the Solar neighbourhood by analysing a sample of over 200000 stars with available RAVE radial velocities and compared their results with those obtained by using the Geneva-Copenhagen survey. They found 19 structures in the Solar neighbourhood from a sample of over 110000 stars and support the dynamical origin of stellar branches based on age-metallicity distribution from and the fact that the main groups are large-scale structures that are detectable even beyond the Solar neighbourhood. An alternative approach (than analysis in the *U* − *V* velocity plane) is to search for streams in the plane defined by the integrals of motions. This way of searching for kinematic over-densities is important as one may discover stellar streams of possible resonant or even extra-galactic origin. Stars in associated kinematic over-densities keep the same angular momenta and in the Solar neighbourhood behave the same way as moving groups of the cluster disruption origin. Together with the approximation of Keplerian orbits,, studied the distribution of stars in $\sqrt{U^2+2V^2}$ and *V* plane as a consequence of integral of motion approach, first discussed in. applied wavelet transform to the thin and thick disk samples that consist of nearby subdwarfs with metallicities [Fe/H] >  − 0.6 and [Fe/H] ≤  − 0.6. They found Pleiades, Hyades and Hercules in the thin disk and Arcturus stream in the thick disk. were the first to use RAVE DR1 archive. Their sample consisted of 7015 stars that allowed them to detect 8 groups in the $\sqrt{U^2+2V^2}$ and *V* plane. Later, focused on the search for kinematic structures for the thick disk population based on LAMOST survey. Their stellar sample consisted of 7993 stars. Thus, only a few kinematic structures were detected. Usually the origin of kinematic structures is studied with help from our knowledge about other components of the Galaxy, but did the opposite: assuming that the Hercules stream was caused by resonances of Galactic bar, they used the Hercules to constrain the Galactic bar’s pattern speed and the local circular frequency. This paper demonstrated further the importance of the study of kinematic structures, as they could be a key to better understanding of the Milky Way formation. Cross-matching the first astrometric data release of *Gaia* DR1 and the radial velocities from the RAVE DR5 catalogue, we now have an access to the most up-to-date and precise astrometric measurements for more than 200000 stars. This is a substantially larger sample than most sample that has been previously available, and the precision in the data is also much better than before. Recently, using TGAS and RAVE, the kinematics of halo stars was investigated by, who studied the velocity correlation function and the energy versus angular momentum space of about 1000 stars with metallicities $\rm [M/H] \le - 1.5$. They found that the distribution of stars in the space defined by integrals of motion has complex kinematic structure and more that a half of them follow retrograde orbits. Halo substructure with TGAS and RAVE was also studied in. The clump of 14 co-moving metal-poor giants was discovered. Based on small spreads of the metallicity within the group, authors explain its origin as being a dissolving remnant of a globular cluster. applied a wavelet analysis technique to a sample that is a combination of the LAMOST DR3 and the *Gaia* TGAS catalogues. They detected 16 kinematic structures were found and four of them are potential new stream candidates. The list of works on kinematic groups could be extended and all of them prove that the velocity distribution in the Solar neighbourhood is inhomogeneous and has a complex, branch-like structure. The question on how did the stellar streams formed is still actual. Some of the mentioned above papers attempts to resolve this question, and as a result exists a variety of theories that could explain the origin of stellar streams, but there is no exact agreement among them even for the well-studied groups, which further demonstrates the importance of the study of kinematic structures. Using the recent TGAS and RAVE data releases, this study focus on the velocity distribution of stars in the *U* − *V* plane to reveal the structures and to further analyse some properties of each structure in terms of distance and chemistry. The paper is organised in the following way: in Sect. [secdata] we discuss the stellar sample and its properties; Sect. [sec:numericalmethods] contains an explanation of numerical methods (wavelet analysis) used in this work; Sect. [sec:maps] covers the description of input and output maps for the the wavelet analysis; all the results including tables and figures of kinematic structures we present in Sect. [sec:results]; finally, the summary and discussion of this work are in Sect. [sec:discussion]. Stellar sample ============== To detect stellar structures in velocity space we will perform a wavelet analysis applied for a data sample in the *U* − *V* plane (see Sect. [sec:numericalmethods]), where *U*, *V*, *W* are the space velocities of the stars in a right-handed Cartesian coordinate system (*X*, *Y*, *Z*). *X* axis points towards Galactic centre, *Y* axis defines the direction of Galactic rotation, and the *Z* axis points towards Galactic North Pole. The sample should be as large as possible and contain precise measurements of distances, proper motions, and radial velocities, for the calculation of accurate space velocities. Distances and radial velocities ------------------------------- Since the *Gaia* satellite was launched in 2013 we are expecting the most precise astrometric measurements for billions of stars in the Milky Way. The first *Gaia* data release (DR1) due to the shortage of observations is still far from declared precision: for a star with a magnitude *V* = 15 it is expected to get positions, proper motions and parallaxes with the precision up to 5-25 *μ*as. However, adding astrometry from the Hipparcos catalogue, TGAS gives astrometric solutions for 2.5 million stars with precise measurements of all required astrometric data. According to it is recommended that a systematic error of 0.3 mas has to be accounted. Later, states that parallax uncertainties already represent the total uncertainty and additional account of systematic error could lead to overestimation. So, in this work we used original TGAS data. In order to calculate the three-dimensional movements of the stars, i.e. the *U*, *V*, and *W* space velocities, the TGAS data needs to be complemented with radial velocities. The Radial Velocity Experiment (RAVE) is a medium-resolution spectroscopic survey with the primary goal of determining radial velocities and stellar parameters for hundreds of thousands of stars in the Southern Hemisphere using spectra of resolving power R = 7500. The final release of RAVE (DR5) gives precise measurements of radial velocities of 457588 unique stars. Cross-matching RAVE DR5 with TGAS provide us a sample of 159299 stars with known coordinates (*α*, *δ*), proper motions (*μ**α*, *μ**δ*), positive parallaxes (*π*), radial velocities (*v**r**a**d*), abundances of Mg and Fe and their associated uncertainties for all stars. The RAVE catalogue contains multiple observations for some stars. In those cases, the median value of every parameter were used in this work. To further expand our sample we will also explore the option to include the data releases from The Large sky Area Multi-Object Fibre Spectroscopic Telescope. This is a Northern hemisphere survey which contains spectra of almost 2 million stars with the resolution of R = 2000. The cross-matching of A, F, G and K type stars in the LAMOST DR2 catalogue with TGAS [1](#fn1) leaves us a sample of 107501 stars with positive parallax. Space velocities and their uncertainties ---------------------------------------- Space velocities and their uncertainties, which are dependent on the accuracy of the proper motions, the parallaxes, and the radial velocities, were computed according to the equations in. [guvhist] Figure [guvhist] shows the distributions of the uncertainties in the *U*, *V*, and *W* velocities for 159299 RAVE stars (top) and 107501 LAMOST stars (bottom). Each velocity component is indicated with a different colour. About 35% (55831) of the RAVE stars have velocities with uncertainties smaller than 4${{\rm\,km\,s^{-1}}}$, while only 0.8% (905) LAMOST stars belong to the same region. Such a comparably low accuracy of LAMOST velocities can be explained with high uncertainties of radial velocities, which are one of the main components when computing *σ**U*, *σ**V* and *σ**W*. cross-matched LAMOST DR1 with APOGEE and discovered an offset of  ∼ 5.7${{\rm\,km\,s^{-1}}}$ of LAMOST radial velocities. report that LAMOST line-of-site velocities are underestimated and have to be corrected by 5${{\rm\,km\,s^{-1}}}$. The accuracy of space velocities is crucial for detection of kinematic groups which will be shown later in the Sect. [sec:maps]. Taking into account high uncertainties for the LAMOST stars we decided to focus our analysis on the RAVE sample only, which gives us a sample of 159299 stars. The spatial distribution of our RAVE-TGAS star sample in the *X* − *Y* and *X* − *Z* planes is shown in Fig. [fig:xyz]. In this plot we show three distributions: blue colour is for the sample of all 159299 stars, green colour shows 55831 stars with *σ**U*, $\sigma\_V < 4 {{\rm\,km\,s^{-1}}}$, and the red colour indicates the same stars as the green but with distance limit  < 300pc. As will be shown later in Sect. [sec:maps] we will focus on the analysis of the last two sub-samples. The precision of the parallax distances provided by TGAS is high enough and additionally the cut on velocity uncertainties (*σ**U* and *σ**V*) less than $4\,{{\rm\,km\,s^{-1}}}$ already cut stars by parallax too. In Fig. [pe] the distribution of parallax relative uncertainties *π**e*/*π* for the total sample is shown, where *π* is the parallax and *π**e* is its uncertainty. Most of stars have uncertainties less than 30%. This cut is necessary to get robust positions of kinematic structures. The question if the Local Standard of Rest (LSR) should be included in the space velocities, or not, in the detection analysis for kinematic groups is debatable. In several works the space velocities were not adjusted for the peculiar Solar motion, while in some papers it was. We chose to not correct our space velocities to the LSR as the adopted solar motion relative to the LSR may differ between studies and if so, would make direct comparisons of the detected structures more difficult. Numerical methods ================= Different statistical methods have been used to highlight kinematic over-densities: wavelet analysis, maximum likelihood algorithm, and adaptive kernel estimate. We chose the most efficient technique for our purposes: the wavelet analysis with the *à trous* algorithm as it is a powerful tool which gives signal characteristics in terms of location, both in position and scale (size of the structure) simultaneously. The utility of this analysis method applied to the detection of moving groups in the Solar neighbourhood has already been demonstrated in several studies. The analysis consists of applying a set of filters at different scales to the original data in order to determine wavelet coefficients. Detected structures, which correspond to local maxima in the wavelet space, can be either physical (kinematic groups) or ‘artefacts’. The latter can have two origins: (1) The wavelet coefficients contain Poisson noise due to that the stellar sample is finite. Details on the filtering of the Poisson noise can be found in Sect. [sec:sec:filtering]; (2) The space velocities of the stars contain significant uncertainties. Details on the verification of the robustness of results are given in Sect. [sec:sec:MC]. [pe] The *à trous* algorithm ----------------------- We focused on the wavelet analysis with the *à trous* algorithm because it has an advantage compared to other statistical methods: it does not require any assumptions on the initial stellar distribution. So, the input data correspond only to the original star count map in the *U* − *V* plane. The algorithm implies applying a set of filters at different scales *s**j* = 2*j* × Δ in order to decompose the 2-D signal *c*0(*i**x*, *i**y*) into a set of wavelet coefficients (*w*1,  ...,  *w**n*) that contain the information about kinematic structures. Here, (*i**x*, *i**y*) is a position at the input grid, *j* is the scale index (*j* ∈ [1,  *p*]), *p* is the maximum scale and Δ is the bin size of the input pixel grid which is used to detect structures which have sizes between *s**j* − 1 and *s**j* ${{\rm\,km\,s^{-1}}}$ (for details on the algorithm see ). For one position (*i**x*, *i**y*), a positive wavelet coefficient corresponds to an over-density in the velocity space. We followed the documentation provided with the MR software and we used a maximum scale *p* equal to log2(*N* − 1), where *N*, assuming that the input star count map has a size *N* × *M*, is the number of pixels in the smaller direction. Image filtering and detection of significant structures ------------------------------------------------------- Given that the data sample is finite, wavelet coefficients at each scale except the information about the structures contain also noise which follows Poisson statistics. The procedure to determine if a wavelet coefficient is significant or not depends on the kind of data. First, we determined the multi-resolution support of the resulting image, which is a logical[2](#fn2) way to store information about the significance of a wavelet coefficient at a given scale *j* and a position (*i**x*, *i**y*). Our data contains a large number of pixels with less than 30 star counts, which is called the case of “few events”. In order to remove the Poisson noise in the case of “few events” we used the auto-convolution histogram method  which has already been successfully used to detect structures in data with few events such as low intensity X-ray images . With the final set of wavelet coefficients we used an algorithm provided with the MR software that groups coefficients into structures that are characterised by the level of confidence *ε*. A structure detected with a 3*σ* confidence level corresponds to a 99.86% probability that the structure was not produced by the Poisson noise. Then, the algorithm approximates the shape of the structure by an ellipse, characterised by its centre, its semi-minor axis, its semi-major axis, and the angle between the major axis and the horizontal axis of the input map. These parameters are useful for the estimation of the number of stars that belongs to the structure, assuming that all the stars inside the ellipse can be associated with the structure. [bins] Analysis ======== Input data ---------- The constraints on velocity uncertainties and the choice of the bin size of the input star count map are linked. First, the uncertainties have to be at the same time large enough in order to provide us with as a large sample as possible, and at the same time small enough to take advantage of the high-precision data provided by *Gaia* DR1/TGAS and RAVE. Second, the bin size of the star count map has to be consistent with the space velocity uncertainty of the stars in order to get robust positions of the structures. This means that the bin size needs to be roughly equal to the average velocity uncertainty of the sample, otherwise the probability that a star falls into the particular bin will be reduced and therefore the precision of the positions of kinematic structures will also decrease. If the bin size is higher than  ∼ 5${{\rm\,km\,s^{-1}}}$, the first scale (*J* = 1) would be $10-20\,{{\rm\,km\,s^{-1}}}$, but from previous studies it has been shown that the typical size of structures is of the order $10\,{{\rm\,km\,s^{-1}}}$. Thus, a bin size larger than about 5${{\rm\,km\,s^{-1}}}$ should not be used as too many structures would be lost. With a restriction on *σ**U* and *σ**V* equal to $4\,{{\rm\,km\,s^{-1}}}$ it should possible to get robust measurements of positions of structures, and that leaves us with a sample of 55831 stars that have average velocity uncertainties of $\langle\sigma\_{U}\rangle=1.8\,{{\rm\,km\,s^{-1}}}$ and $\langle\sigma\_{V}\rangle=1.6\,{{\rm\,km\,s^{-1}}}$. We then chose a bin size of $\Delta\_{\rm{bin}}=1 {{\rm\,km\,s^{-1}}}$. With this value the scales of the output images from the wavelet transform will be: *J* = 1 (2-4${{\rm\,km\,s^{-1}}}$), *J* = 2 (4-8${{\rm\,km\,s^{-1}}}$), *J* = 3 (8-16${{\rm\,km\,s^{-1}}}$), *J* = 4 (16-32${{\rm\,km\,s^{-1}}}$), *J* = 5 (32-64${{\rm\,km\,s^{-1}}}$). Monte-Carlo simulations ----------------------- The space velocities of the stars have uncertainties that will influence the ability to detect kinematic structures and how robust these detections will be. Figure [bins] shows the probability density function of one star to be located in the centre of one bin in *U* − *V* plane (plots on the left-hand side) or at the edge of the bin (plots on the right-hand side), given that the velocity uncertainties are equal to the average uncertainties of the sample (upper plots) or half of the average uncertainties (lower plots). The probability (see numbers at each concentric circle) that a star can fall into different bins is non-zero and consequently, can lead either to that structures being ‘fake detections’, or miss the detection of real physical structures. Hence, we perform Monte-Carlo (MC) simulations in order to estimate the robustness of the detected structures. *N**M**C* synthetic samples are generated from the original one by randomly drawing 55831 new couples (*U*, *V*) assuming that the stars have Gaussian velocity distributions, where mean values are positions (*U**i*, *V**i*) and the standard deviations are uncertainties (*σ**U**i*,  *σ**V**i*), where $i\in[1,\,N\_{\rm{stars}}]$ refers to the $i^{\rm{th}}$ star in the original sample. Then, the wavelet analysis and the structure detection algorithm are applied to the *N**M**C* synthetic stellar samples and the positions of all structures at all scales are stored for each simulated sample. Output data ----------- Following the computations described in Sect. [sec:numericalmethods] and MC simulations as in Sect. [sec:sec:MC], the wavelet analysis was applied to these *N**M**C* samples giving: (1) *N**M**C* sets of wavelet coefficients [(*w*11,  *w*21,  ...,  *w**J*1),  ..., (*w*1*N**M**C*,  *w*2*N**M**C*,  ...,  *w**J**N**M**C*)]; (2) the multi-resolution support for *J* scales and *N**M**C* simulations, which gives: [(*M*11,  *M*21,  ... , *M**J*1),  ..., (*M*1*N**M**C*,  *M*2*N**M**C*,  ...,  *M**J**N**M**C*)]; (3) positions of detected structures for *J* scales and *N**M**C* simulations. The results are presented in two different forms. First as a structure’s position count map, in which the positions of the detected structures of each of the 2000 samples are superimposed (see Fig. [uvall]). The detected structures are marked by black boxes. To quantify the ‘realness’ of each group, the fraction of times each group was detected relative to the total number of simulations is calculated. Figure [uvall] shows the position count map for the detected structures at scales *J* = 2 (4-8${{\rm\,km\,s^{-1}}}$, top plot), *J* = 3 (8-16${{\rm\,km\,s^{-1}}}$, middle plot) and *J* = 4 (16-32${{\rm\,km\,s^{-1}}}$, bottom plot). The highest number of individual structures, shown by black rectangles with individual numbers, is for *J* = 2. However, as can be seen, scale *J* = 3 also includes all significant structures detected at scales *J* = 2 and *J* = 4, and covers smaller and bigger scales. How many Monte Carlo simulations are enough for the results to convergence? To explore this, Fig. [convergence] shows how the positions for structure number 13 from the *J* = 3 map converge as the number of Monte Carlo simulations increases. We introduce four different estimators: The first two are mean positions of the structure *U**m**e**a**n* and *V**m**e**a**n* (calculated based on coordinates *U* and *V* of all structures inside the rectangle number 13); The third one is the number of stars inside structure number 13 which was calculated as an averaged number of stars from the total number of Monte Carlo simulations (*N**M**C* runs); The last estimator is the percentage of structure detection inside the rectangle. Convergence is reached at around 1400 simulations (marked by grey background in Fig. [convergence]). We therefore chose to run 2000 simulations to have confident results. The position count map is useful for providing positions of structures. However, one cannot justify if the structures are independent, or are connected to other groups. Hence, another way to represent the results is shown in the bottom plot of Fig. [j3], and is the multi-resolution support for *N**M**C* simulations by displaying the quantity M$\_{\rm{tot}}$ defined as follows: *M**t**o**t*, *j*(*i**x*, *i**y*) = ∑*k* = 1*N**M**C**M**j**k*(*i**x*, *i**y*) Thus, if *M**t**o**t*, *j*(*i**x*, *i**y*) = *N**M**C*, it means that *w**j*(*i**x*, *i**y*) is significant for all the simulations. Conversely, if *M**t**o**t*, *j*(*i**x*, *i**y*) = 0, it means that *w**j*(*i**x*, *i**y*) is never significant. We explain in more details results that can be gained from Fig. [j3] in Sect. [sec:results]. [convergence] rlrrrrrr|cccc N & Name & *U* & *V* & Δ*U* & Δ*V* & $\frac{N\_d} {N\_{MC}}, \_{\%}$ & *N*\* & *S**N* & *B**S**N*& *D* & *T**D* (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) & (10) & (11) & (12) & 31533 & 24298 & 36439 & 11410 1 &Sirius &30 &  − 3 &3 &1 &31 &434 & +  & & +  & +  2 &Sirius &0 & 8 &7 &3 &154 &4821 & +  & +  & +  & +  3 &Sirius & − 11& 9 &3 &1 &83 &879 & +  & & +  & 4 &Coma B &9 &  − 12 &2 &2 &52 &1102 & & +  & & +  5 &Coma B & − 2 &  − 11 &3 &3 &70 &2753 & +  & +  & +  & 6 &Coma B & − 15&  − 7 &1 &1 &79 &673 & +  & +  & +  & 7 &Hyades & − 44&  − 18 &6 &2 &90 &2344 & +  & & +  & 8 &Pleiades & − 22&  − 23 &7 &3 &170 &4257 & +  & +  & +  & 9 &Wolf630+Dehnen98 &43 &  − 22 &9 &2 &168 &1777 & +  & & +  & 10 &Hercules & − 38&  − 49 &9 &3 &116 &1451 & +  & & +  & 11 &Hercules & − 16&  − 48 &2 &1 &22 &197 & +  & & +  & 12 &*γ*Leo &52 & 0 &1 &2 &27 &96 & +  & & +  & 13 &New &37 & 8 &2 &2 &74 &201 & +  & & +  & +  14 &Antoja12(15) &48 &  − 68 &1 &1 &6 &8 & +  & & +  & 15 &Antoja12(12) &94 &  − 13 &1 &1 &38 &10 & +  & & +  & 16 &Bobylev16 & − 94&  − 5 &1 &1 &17 &14 & +  & +  & +  & +  17 &*ε*Ind & − 88&  − 48 &2 &2 &12 &24 & +  & & & 18 &Unknown & − 86&  − 76 &2 &1 &8 &12 & +  & & & 19 &Unknown & − 18&  − 67 &1 &1 &22 &70 & +  & & +  & rlrrrrrr N & Name & *U* & *V* & Δ*U* & Δ*V* & $\frac{N\_d} {N\_{MC}}, \_{\%}$ & *N*\* (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) 1 & Sirius & 31 &  − 3 & 2& 1& 12 & 285 2 & Sirius & 17 & 0 & 1& 1& 10 & 377 3 & Sirius & 8 & 9 & 3& 2& 23 & 1105 4 & Sirius &  − 1 & 7 & 1& 1& 18 & 511 5 & Sirius &  − 11& 9 & 1& 1& 23 & 496 6 & Sirius &  − 3 & 2 & 1& 1& 7 & 459 7 & Coma B &  − 2 &  − 11& 1& 1& 7 & 377 8 & Coma B &  − 15&  − 8 & 3& 2& 26 & 1692 9 & Hyades &  − 43&  − 19& 8& 3& 108& 3392 10 & Hyades &  − 41&  − 11& 1& 1& 5 & 257 11 & Hyades &  − 33&  − 10& 1& 1& 7 & 243 12 & Pleiades &  − 19&  − 25& 3& 1& 30 & 1262 13 & Pleiades &  − 12&  − 22& 1& 3& 20 & 1181 14 & Pleiades &  − 6 &  − 24& 1& 1& 8 & 342 15 & Wolf 630 & 16 &  − 18& 2& 1& 6 & 430 16 & Wolf 630 & 22 &  − 21& 1& 1& 6 & 153 17 & Dehnen98 & 34 &  − 20& 2& 1& 6 & 252 18 & Dehnen98 & 45 &  − 22& 2& 1& 10 & 193 19 & Hercules &  − 36&  − 49& 2& 1& 9 & 178 20 & Hercules &  − 19&  − 51& 2& 2& 10 & 266 21 & *γ* Leo & 66 &  − 8 & 1& 1& 5 & 14 22 & *γ* Leo & 53 & 0 & 1& 1& 7 & 40 23 & *γ* Leo & 52 & 5 & 1& 1& 5 & 40 24 & New & 38 & 6 & 2& 2& 12 & 201 25 & Antoja12(15) & 49 &  − 69& 1& 1& 9 & 12 26 & Antoja12(15) & 63 &  − 64& 1& 1& 7 & 5 27 & Antoja12(15) & 83 &  − 62& 1& 1& 9 & 5 28 & Antoja12(15) & 56 &  − 48& 1& 1& 5 & 20 29 & Antoja12(12) & 88 &  − 32& 1& 1& 6 & 5 30 & Antoja12(12) & 93 &  − 13& 1& 1& 14 & 10 31 & Bobylev16 &  − 95&  − 6 & 1& 1& 8 & 11 32 & *ε*Ind &  − 89&  − 52& 1& 2& 7 & 20 1 &Sirius & − 3 &3 &2 &3 &97 &1767 2 &New &38 &7 &1 &1 &60 &106 3 &*γ*Leo &55 &2 &2 &1 &8 & 29 4 &Hercules & − 32 & − 48 &7 &1 &100&437 Results ======= In this section we present the detected structures in the *U* − *V* plane for the following samples: the full TGAS-RAVE sample, the sample split into a nearby and a distant sample, and two chemically defined samples that to a first degree represent the stars belonging to the Galactic thin and thick disks. Full sample ----------- Figure [uvall] shows detected structures in the *U* − *V* planes for three different scales, *J* = 2 (4-8 ${{\rm\,km\,s^{-1}}}$): 32 structures, *J* = 3 (8-16 ${{\rm\,km\,s^{-1}}}$): 19 structures, and *J* = 4 (16-32 ${{\rm\,km\,s^{-1}}}$): 4 structures. The *J* = 3 structures are listed in Table [groups], and the structures from the *J* = 2 and *J* = 4 scales in Table [groups2]. As can be seen, *J* = 3 appears to cover all the detected features, including smaller structures at *J* = 2, as well as larger groups at *J* = 4. Therefore, we will from now on consider *J* = 3 as the main scale since it covers the a range around the typical sizes of kinematic structures found in the Solar neighbourhood (both small- and big-scale structures), and secondly focus on *J* = 2 and *J* = 4 that covers even smaller and larger structures, respectively. The top plot of Fig. [j3] shows again the detected kinematic structures in the *U* − *V* plane for *J* = 3 (as in the Fig. [uvall] middle plot), but now with previously detected structures found in the literature () marked with blue crosses. Classical structures such as Sirius (structures number 1 − 3 in Fig. [j3]), Coma Berenices (structures 4 − 6), Hyades (structure 7), Pleiades (structure 8) and Hercules (structures 10 − 11), and some smaller structures like Wolf 630 (structure 9), Dehnen98 (structure 9), *γ*Leo (structure 12) can be easily recognised. They all have a comparably high percentage of detection (column 7 in Tab. [groups]) and big number of stars (column 7 in Tab. [groups]). The two structures from (structures 14 and 15) and one structure from (structure 16) are confirmed. We also present evidence for a new structure (number 13) that is detected with 74% significance. Structures 18 − 19 have low percentages of detection, less than15 %, and might be insignificant. In Sect. [sec:sec:individual] we will discuss how our results agree with those from the literature. The way the detected structures are split into groups is motivated with the bottom of Figure [j3] which shows the multi-resolution support obtained for *J* = 3 for all stars and 2000 MC simulations. In other words, this is the same plot as the top Figure [j3], but instead of structure counts we show multi-resolution support counts. This representation allows to see whether structures are bound or separated. Structures 1 − 3 seem to be connected and thus are united into Sirius stream. Group 5 is bound to structure 2 in the wavelet case. It should not be associated with the Sirius stream as its most significant part is located slightly aside Sirius, but lays on one line with structures 4 and 6, therefore grouping structures 4 − 6 into the Coma Berenices stream. Groups that have percentage detection higher than 100% (8, 9, 10) show a few distinct peaks in this plot, supporting the statement that these groups consist of a few smaller structures that overlap in the structure count map. Based on that we split group 9 into Wolf 630 (to the left) and Dehnen98 (to the right). Group 11 is a part of the Hercules stream. Structures 12-19 are not connected to other groups. Solar neighbourhood and beyond samples -------------------------------------- The detected structures are found in velocity space. The question is if they depend on the distance from the Sun? We divide the sample into a nearby Solar neighbourhood sample with 31533 stars that have distances *d* < 300pc (SN), and a beyond the Solar neighbourhood sample (BSN), with 24,298 stars that have *d* > 300pc (most distant star at 2kpc). Distance *d* = 300pc that is arbitrarily chosen to split the sample, is also a reasonable value, because it allows to have almost equal number of stars in both samples. Both samples are then independently analysed in the same way as for the full sample: applying the wavelet transform, filtering, and structure detection procedure for 2000 synthetic data samples. Figure [samples] shows the detected structures associate with the SN sample (top left plot), and the BSN sample (top right plot) for the scale *J* = 3. The rectangles mark the borders for the structures that were detected for the full sample (see Fig. [j3]). This allows an easier comparison how the shapes on kinematic structures change with the respect to the full sample. In Table [groups] we have indicated in columns 9 and 10 with “ + ” signs if the structure is present in SN and BSN samples. Almost of all of the full sample structures are observed in the SN except two weak Hyades peaks (groups 10, 11). So, the SN results almost completely reproduce the results from the full sample. For the BSN sample that has 7000 less stars than the SN sample, most of structures appear to have slightly changed their positions relative to the SN case. Similar result was obtained by where the structures detected in distant regions were shifted on the velocity plane. Hence, only 6 of 19 kinematic groups can be recognised: strong Sirius peaks 2, all Coma B peaks (4-6), Bobylev16 peak 16, Pleiades peak 8 is slightly shifted. In summary, it appears that some kinematic structures are located only in the SN sample as a few significant groups are not detected in the BSN sample at all (groups 1, 3, 7, 9-15, 17-19). These changes in the number of structures, their positions and shapes in the respect to distance can be due to a different numbers of stars that fall into SN and BSN samples with the SN sample containing 10000 stars more. The technique of wavelet analysis is sensitive to the number of stars in the initial sample, the more stars we have, the more realistic picture of structures we can get. Mean values of velocity uncertainties for two samples are also slightly different and are bigger in the case of the BSN sample: ⟨*σ**U*⟩*S**N* = 1.7, ⟨*σ**V*⟩*S**N* = 1.6 for the SN sample; ⟨*σ**U*⟩*B**S**N* = 2.5, ⟨*σ**V*⟩*B**S**N* = 2.2 for the BSN sample. So that for the BSN sample, which is at the same time smaller, velocity uncertainties are slightly higher and this can lead to some displacements of the structures. This issue can be investigated further with the availability of the *Gaia* DR2 in April 2018 which will provide precise astrometric parameters for 109 stars and first radial velocities for bright stars. Thin and thick disk structures ------------------------------ Several high-resolution spectroscopic studies of nearby stars have identified and characterised the thin and thick disks as distinct stellar populations, not only in terms of kinematics, but also in terms of elemental abundances and stellar ages. The two co-existing and largely overlapping disk populations points to a complex formation history for the Milky Way. A process which is currently is not well understood. The question is if we can gain further insights into the nature and origin of this two-disk structure from the kinematic structures seen in the Solar neighbourhood? As shown in and stellar ages appear to be the best discriminator between the thin and thick disks. However, stellar ages are not available for the stars in our sample. Another way would be to use kinematics, but as this is exactly the property that we want to investigate. Another approach which can reveal more features of kinematic group associated with the thin and thick disks is to use their chemical compositions. Several papers have shown that the two disks follow distinct and well separated abundance trends both in the Solar neighbourhood and also further away. All these studies show that thick disk stars, at a given metallicity, are more *α*-enhanced than thin disk stars. In this paper we do not perform any spectroscopic analysis of our structures. Instead we separate our stellar sample by magnesium [Mg/Fe] abundances provided by RAVE in order to study our sample in terms of thick (metal-poor) and thin (metal-rich) disks. In Fig. [kde] (see the last plot on the right-hand side on the bottom line) we show [Mg/Fe]-[Fe/H] diagram for the total RAVE sample of 47849 stars that have RAVE signal to noise ratio *S*/*N* > 40. The last limit is needed to get abundances with a higher precision (abundances uncertainties less that 0.2dex, ). A chemical separation of thick and thin disks with RAVE based on probability density approach has been done in and we define a thin disk sample (D) and a thick disk sample (TD) samples according to : thin disk $\rm [Mg/Fe] < 0.2$, thick disk $\rm [Mg/Fe] > 0.2$. This separation is shown by the red horizontal line in all plots of Fig. [kde]. Effective ranges of disk metallicities obtained for a RAVE sample by are the following: $\rm -0.27<[Fe/H]<0.38$ for an *α*-poor component (thin disk) and $\rm -1.15<[Fe/H]<-0.05$ for an *α*-enhanced component (thick disk). The metallicity distribution function for the total sample reaches the maximum value at $\rm [Fe/H]\approx-0.1$, which is close to the disk separation values, hence, the total sample represents a mixture disk populations. In the thin disk sample we have 36439 stars, and in the thick disk sample 11410 stars. As in the case with SN and BSN samples, we run the same procedure as for the full sample and the SN and BSN samples (i.e. applying the wavelet transform, filtering, and structure detection procedure for 2000 synthetic data samples.) Uncertainties for both [Fe/H] and [Mg/Fe] from RAVE are stated to be around 0.2dex, which is comparably large to make a clear separation of the two disks. The separation line shown in Fig. [kde] is therefore only a first approximation to represent thick and thin disks. A better precision could be achieved with a detailed spectroscopic analysis of stars associated with kinematic structures to investigate which disk population do they belong to. The bottom panels of Fig. [samples] show the structures that were detected by applying wavelet transform to the 2000 synthetic samples associated with thick and thin disks, respectively. The rectangles correspond to the structures detected for the full sample at scale *J* = 3. In Table [groups] columns 11 and 12 show a clear presence of the structure in T and TD with “ + ” sign. Similarly to the SN sample, the thin disk sample (D) contains more stars, so most of the structures detected for the full sample can be recognised. Only groups 17 and 18 appear to be missing. Hyades and Pleiades groups 7 and 8 are more distinct in the D sample, but a few stars are also detected in the TD sample, so they could be a mixture of the two stellar populations. The Hercules stream is almost missing in the TD map, so that probably is constructed mostly of thin disk stars. The same result was obtained by and from a chemical abundances analysis of stars that belong to the Hercules (for more discussions see Sect. [sec:sec:individual]). Coma Berenices slightly changed its location in the TD case, being more significant in the box 4. Groups 11, 12, 14, 15, 17 − 19 are not seen at all. These groups consist of mostly D stars what points towards their possible origin through the outer Lindblad resonance (OLR) (for further explanations see Sect. [sec:sec:individual]). In Fig. [kde] we plot individually for each kinematic structure its $\rm [Mg/Fe]-[Fe/H]$ diagram and its metallicity distribution function which is a [Fe/H] versus a probability density, computed with the kernel density estimation (KDE) method. In each contour plot the positions of individual stars are shown as dots. The numbers in each panel indicate the numbers of the structures as listed in the legend to Fig. [j3]. The horizontal red line at each density plot corresponds to the $\rm [Mg/Fe]=0.2$ showing the approximate separation between the thin and thick disks. Black dashed lines at each histogram show the probability density distribution for the full sample of 47849 stars with *S*/*N* > 40. The solid violet histogram at the top of each panel shows a probability density distribution for stars inside the current group. Each structure we will discussed in detail in Sect. [sec:sec:individual]. Individual structures --------------------- Individual structures will here be discussed in detail. Each case contains an overview of what is known about each group from the literature and how it compares with the results from the present study. The number in parentheses at the beginning of each paragraph indicates the number of the structure as listed in Table [groups] and shown in Figs. [j3] and [samples]. #### **Sirius** (1-3): studied the properties of the Sirius super-cluster, which is considered a part of the larger Ursa Major stream. They found that its stars fall into two distinct age groups, 6.3Gyr and 0.2Gyr, and that its chemical composition differs from the Hyades and Pleiades open clusters, showing heavy element abundances close to solar values. tried to reveal the origin of kinematic features including the Sirius stream by probing ages of stars that belong to the Sirius group and the evaporating Ursa Major star cluster. It was shown that only 30% of the stars associated with the stellar stream fall on the same isochrone (300 Myr) as the open cluster, and, as was concluded in, not all stars of Sirius stream have an origin of being a remnant of an open cluster and favour a dynamical (resonant) origin for the Sirius stream. Later, through modelling of the dynamics of the Milky Way, showed that the low-velocity features including Sirius stream could be reproduced with the OLR of the Galactic bar. studied the ages and metallicities of kinematic over-densities of nearby stars from Hipparcos to investigate whether stellar streams consist of stars that belong to the same population, that could indicate that they originated from dissolved open clusters. Their main result was negative for the stellar streams they analysed, including the Sirius stream, and that it should not be associated with the Ursa Major open cluster. To test possible dynamical origins for the stellar streams (such as the OLR of the bar, or the inner Lindblad resonance (ILR) of the spiral structure) used the the Geneva-Copenhagen survey to compare the metallicities of stellar stream stars to the background population of thin disk stars. assume that depending on the type of the resonance, orbits of stellar groups are located most of the time inside or outside the solar circle and consequently, these stars shows up the properties of different parts of the Galaxy. Metallicity is one of the main parameters that vary for kinematic groups that come from different parts of the Milky Way due to the existence of a metallicity gradient in the Galaxy. They found ‘weak evidence’ that Sirius stream stars have lower metallicities than the thin disk population and could therefore be related to the ILR of the spiral arms. We associate Sirius stream with structures 1-3 (see Fig. [samples]). Sirius is elongated in both the *U* and *V* directions and is detected in all maps, although its shape and location vary from sample to sample. Structure 2 is the most significant sub-stream with more than 4800 stars located inside the ‘detection box’, and 154% of MC repeats. As the detection percentage exceeds 100% the structure may consist of a few smaller groups like those detected at the scale *J* = 2 (see Fig. [uvall]) that overlap with each other at the scale *J* = 3. Below we provide a table of positions of the Sirius stream from this work and from the literature and a blue cross in Fig. [j3] corresponds to the Sirius group from. [h] | | | | --- | --- | | (*U*, *V*) | Reference | | $[{{\rm\,km\,s^{-1}}}]$ | | | (30,  − 3) | group 1 | | (0, 8) | group 2 | | ( − 11, 9) | group 3 | | (9, 3) | | | (15, 1) | | | (10, 3) | | | (7, 4) | | | (5, 2) | | | (10, 3) | | | (9, 4) | | | (4, 4) | | Our central peak 2 agrees with all the studies listed. Group 1 has a higher *U* velocity and group 3 a lower *U* velocity compared to the central peak but all have approximately the same *V* velocity, so they could be members of one larger stream. Sirius is a nearby structure, while only stars from group 2 also appear in the distant BSN sample. Most of the stars appear to have chemical compositions comparable to what is observed for the Galactic thin disk stars, but group 2 is still strong in the thick disk sample. So Sirius could be a mixture of stars from both disk populations. Plots 1-3 in Fig. [kde] show the $\rm [Mg/Fe]-[Fe/H]$ diagrams for stars from groups 1-3 that we associate with the Sirius stream and at the top of each panel the metallicity distribution for each individual group is shown (solid violet distribution). The Sirius stream stars appear to have properties similar to the total sample (black dashed histogram) and do not show any particular metallicity trend inherent to the thick or thin disk populations. Figure [samples] also indicates that the Sirius stream is a large-scale structure that is observed in both SN and BSN samples and appears to be a mixture of both disk populations. Since we observe Sirius in both disks, we favour its dynamical origin possibly from the ILR of the spiral arms, but note that our thin/thick disk separation is uncertain due to the rather large errors in the RAVE chemical abundance ratios. #### **Coma Berenices** (4-6): carried out an astrometric and photometric analysis of the region of the sky where the Coma Berenices open star cluster is located and found that the luminosity function of the core of the cluster decreases, while it increases towards fainter magnitudes in the edges of the cluster. assume that there could be a lot of faint, low-mass members of the moving group that were not observed. The proximity of the moving group and the open cluster pointed towards the idea that Coma Berenices moving group was formed due to a dissolution of the cluster. Conversely, through modelling of the dynamics of the Milky Way, reproduced a few main stellar streams including Coma Berenices assuming the OLR of the bar and thus, favour resonant mechanism of formation of also this kinematic over-density. The table below lists the detection of the Coma Berenices kinematic over-density in the *U* − *V* plane that is available in the literature. In our study Coma Berenices is associated with the structures 4-6 (see Fig. [samples]) and the table below shows that the positions we detect are in agreement with results from the other studies. [h] | | | | --- | --- | | (*U*, *V*) | Reference | | $[{{\rm\,km\,s^{-1}}}]$ | | | (9,  − 12) | group 4 | | ( − 2,  − 11) | group 5 | | ( − 15,  − 7) | group 6 | | ( − 10,  − 5) | | | ( − 10,  − 10) | | | ( − 11,  − 7)*d* | dwarf sample | | ( − 13,  − 6)*g* | giant sample | | ( − 7,  − 6) | | | ( − 7,  − 6) | | All three groups 4-6 share similar space velocities. Peak 6 has higher detection percentage in MC simulation (79%) than peaks 4 and 5. While group 5 is the biggest and contains over 2700 stars inside the ‘detection box’. The blue cross inside box 5 in Fig. [j3] corresponds to the detection of Coma Berenices from at ( − 7,  − 6) ${{\rm\,km\,s^{-1}}}$. Figure [kde] plots 4-6 show the $\rm [Mg/Fe]-[Fe/H]$ diagrams and metallicity histograms (at the top of each plot) for groups 4-6, respectively. Coma Berenices stream stars show metallicity properties similar to the total sample, and does not show any particular metallicity trend to either the thick or the thin disks. Figure [samples] shows a similar result: Coma Berenices unites stars that belong to both thin and thick disk samples with more stars in the thin disk sample. It is a large scale over-density because it is seen in both distance samples. As Coma Berenices shares properties similar to the Sirius moving group, both combining stars of different populations, it could also originate from the ILR of the spiral arms, again with the remark that the thin/thick disk separation is uncertain due to the low precision of the RAVE abundances. #### **Hyades** (7): Being first discovered by, the Hyades stream, or Hyades super-cluster, was for a long time considered to be a remnant of the eponymous Hyades open stellar cluster. However, recent studies have shown the opposite. For instance, found that only half of stars of the Hyades stream could originate from the Hyades open cluster as only about 50% of stars fall onto the same isochrone as would have been expected for an open cluster. They favour the dynamical origin for the Hyades stream. Later, compared chemical abundances and metallicities of stars belonging to Hyades stream with stars that are members of the Hyades open cluster, that is believed to be chemically homogeneous. It was found that only 2 of the 21 selected Hyades stream stars have similar chemical properties with the open cluster. Furthermore, showed that the Hyades stream stars have a metallicity excess of about 0.06 − 0.15dex compared to thin disk stars, which is consistent with an origin caused by the ILR of the spiral arms. They also performed a particle simulation test that supported the same conclusion, showing that the Hyades stream could be reproduced with the 4:1 resonance of the spiral arms. Another chemical tagging study of the Hyades stream was presented by that further supported the idea of dynamical origin of the Hyades stream. They analysed stellar spectra of 61 Hyades stream stars and compared the results with a reference star vB 153 that is a verified member of the Hyades open cluster. Only 26 stars were found to have similar parameters with the Hyades open cluster. conclude that the Hyades stream does not completely originate from the Hyades open cluster. used a simple dynamical modelling of the Milky Way to study the origin of the Hyades stream and to check whether it could originate from a Lindblad resonance. The author conclude that Hyades stream has a resonant (dynamical) nature, but that it is not possible to say exactly which resonance due to selection effects associated with the dynamics. bfWe associate Hyades with group 7 (see Fig. [samples]). This group contains 2344 stars inside the detection box and has a high MC detection of 90%. The blue cross in Fig. [j3] marks the detection of Hyades from. Below we show a table of positions of Hyades from this work and from the literature. [h] | | | | --- | --- | | (*U*, *V*) | Reference | | $[{{\rm\,km\,s^{-1}}}]$ | | | ( − 44,  − 18) | group 7 | | ( − 40,  − 20) | | | ( − 35,  − 18) | | | ( − 38,  − 18)*d* | dwarf sample | | ( − 38,  − 17)*g* | giant sample | | ( − 40,  − 20) | | | ( − 30,  − 13) | | | ( − 30,  − 15) | | The $\rm [Mg/Fe]-[Fe/H]$ diagram the and [Fe/H] distribution for structure 7 is shown in Fig. [kde]. The Hyades stream shows properties that are similar to the full sample. From the analysis of SN/BSN and D/TD sub-samples (see Fig. [samples]) it is seen that Hyades stream sample mostly consists of nearby stars. It is also more distinct in the thin disk subsample, although the structure is detected in the thick disk subsample too. So, it appears as if the Hyades stream is nearby structure which consists of mixture of disk populations. This does not support the hypothesis for Hyades to be a dissolved open cluster as this theory implies all stars to have a solid chemical composition and could have a dynamical origin, again with the remark on a low precision of abundances given in RAVE. #### **Pleiades** (8): The Pleiades was the first ever discovered moving group. found it through observations of Pleiades open cluster that there was a large number of stars located a few degrees far from the cluster, that were moving in the same direction. It was the Pleiades moving group. Its origin has been investigated in several studies. For example, conclude that Pleiades moving group has a dynamical (resonant) origin, since only 46% of the moving groups stars fall onto the 100Myr isochrone that is the assumed age for the Pleiades open cluster. Through galactic dynamics modelling reproduced stellar streams being due to the OLR. However, to be consistent with the number of stars in the Pleiades they assumed that the Milky Way bar was formed 2Gyr ago. This paper stands against the idea that the Pleiades and the Hyades share a common dynamical origin. analysed age and metallicity properties the Pleiades moving group and found that it could not originate through a dissolved Pleiades open cluster as their stellar populations differ. also compared the metallicity of the Pleiades and Hyades moving groups with the metallicity of the thin disk population and found similar metallicities for the Pleiades and the thin disk stars, while the Hyades shows a higher metallicity than thin disk. Hence, also does not support the idea of common dynamical origin for the Pleiades and Hyades. We detect one large structure that we associate with Pleiades, group 8 in Fig. [samples]. This group could consist of a few separate groups that overlap since the percentage of detection in MC simulations is 170%. Interestingly, at the *J* = 2 scale the Pleiades detection consists of three separate structures, number 12-14 (see Fig. [uvall]). Group 8 (in the *J* = 3 scale) is one of the largest groups with about 4200 stars inside the detection box. The table below gives the positions for Pleiades stream from this work and from the literature. The blue cross corresponding to Pleiades in Fig. [j3] refers to the detection by. [h] | | | | --- | --- | | (*U*, *V*) | Reference | | $[{{\rm\,km\,s^{-1}}}]$ | | | ( − 22,  − 23) | group 8 | | ( − 12,  − 22) | | | ( − 12,  − 21) | | | ( − 16, 23) | | | ( − 12,  − 23)*d* | dwarf sample | | ( − 15,  − 23)*g* | giant sample | | ( − 15,  − 20) | | | ( − 16,  − 22) | | | ( − 13,  − 24) | | Plot 8 in Fig. [kde] shows the $\rm [Mg/Fe]-[Fe/H]$ diagram and the [Fe/H] histogram the Pleiades stars. The metallicity distribution is almost equal to the full sample. The structure that we associate with the Pleiades does not have any particular distance or abundance dependence as it is observed in both SN/BSN, and both D/TD samples (see Fig. [samples]). The structure has a higher detection percentage for the thin disk sample, but this could be because the thin disk sample contains three times as many stars than the thick disk sample. The position and the shape of Pleiades group do not vary much between the different sub-samples and leads to the conclusion that it is a large-scale structure, composed of a mixture of different populations of stars. Thus, as it appears to be chemically inhomogeneous, unlike open clusters and moving groups, it could originate from the ILR of the spiral arms and not from the Pleiades open cluster. Again, a better thin/thick disk separation could be achieved with more precise chemical abundances than what RAVE is providing. #### **Hercules** (10-11): Being the largest and the most elongated structure in the *U* direction, the origin of the Hercules stream has been investigated by many authors. For example, favour a hypothesis that Hercules stream is a dynamical feature caused by the Galactic bar resonances (the OLR). showed that a combined dynamical effect of spiral arms and a Galactic bar can explain main kinematic structures including the Hercules. performed a detailed chemical characterisation of its stars. They favour a dynamical origin through the Galactic bar as the Hercules stream stars appeared to be a mixture of thick and thin disk stars. Also performed a hypothesis testing to check whether moving groups consist of homogeneous population of stars. The results was negative and further found indications that Hercules stream stars have a higher average metallicity than the local thin disk, hence, concluding that it could be a structure caused by the OLR of the Galactic bar. Later, re-examined the chemical composition of Hercules stream stars and found that it mainly consists of stars that chemically can be associated with both the thin and thick disks. on the other hand studied 58 Hercules stream red giants and found that they are mostly metal-rich stars from the thin disk. The somewhat discrepant results could be explained with different target selection methods used by the two studies. carried out a dynamical modelling of the Hercules stream “in the framework” of a slow bar and compared obtained results with data from the RAVE and LAMOST catalogues. They found that Hercules is more prominent in the Galactic inner disk and should consist in average of more metal-rich and older stars comparing to the Solar negihbourhood. Hercules is identified as structures 10 and 11 in this study (see Fig. [samples]). This kinematic structure is the most elongated feature in the *U* direction and has a detection percentage for group number 10 that exceeds 100%. An explanation of this result is that it appears to consist of a few separate structures that overlap in the MC simulations (see the *J* = 2 scale in Fig. [uvall], where Hercules is detected as the two peaks number 19 and 20. The blue crosses in Fig. [j3] (one is inside the Hercules box 10 and another is just outside on the left-hand side) mark the results from. The table below gives the positions of the Hercules stream from this work and from the literature. [h] | | | | --- | --- | | (*U*, *V*) | Reference | | $[{{\rm\,km\,s^{-1}}}]$ | | | ( − 38,  − 49) | group 10 | | ( − 16,  − 48) | group 11 | | ( − 30,  − 50) | | | ( − 42,  − 51) | | | ( − 35,  − 51) | | | ( − 32,  − 48)*d* | dwarf sample | | ( − 35,  − 51)*g* | giant sample | | ( − 20,  − 33) | | | ( − 57,  − 48)*I* | | | ( − 28,  − 50)*I**I* | | | ( − 57,  − 48)*I* | | | ( − 35,  − 50)*I**I* | | The (*U*, *V*) velocities of groups 10 and 11 are in agreement with most of the previous studies except, whose position differs from others by about $10{{\rm\,km\,s^{-1}}}$. and defined two sub-streams in the Hercules, we have only one centred peak, but the size of the structure covers both of them. The panels 10 and 11 in Fig. [kde] corresponds to the Hercules stream and shows its metallicity distribution and $\rm [Mg/Fe]-[Fe/H]$ diagram. It appears to contain more metal-rich stars, and it is also clearly a thin disk structure located in the nearby sample as it is observed only in the SN sample (see Fig. [samples]). Our results support recent findings that the Hercules stream mainly belong to the thin disk population and could be due to the OLR of the Galactic bar. #### **Wolf 630** (9): Wolf 630 was first identified by and its origin is still unclear. analysed spectra of 34 stars of the Wolf 630 stream and 19 stars were found to be chemically homogeneous. This sub-sample of 19 stars was fitted with a 2.7Gyr isochrone and a metallicity of $\rm [Fe/H]=-0.01$. suggest that the sub-sample of 19 stars could be a remnant of an open cluster since its stars share similar features, but the rest of the sample is inhomogeneous, which makes the origin of Wolf 630 uncertain. We identify Wolf 630 as the group 9 (see Fig. [samples]). It has a 168% MC detection rate and 1777 stars of our sample can be associated within the group. At the *J* = 2 scale the same region of the *U* − *V* plane consists of four individual groups. This could indicate that group 9 consists of at two structures that overlap: Wolf 630 and Dehnen98 (to be discussed below). The result from is marked by the blue cross inside the structure 9 (see Fig. [j3]). The table below gives positions of Wolf 630 obtained in this work and from the literature. [h] | | | | --- | --- | | (*U*, *V*) | Reference | | $[{{\rm\,km\,s^{-1}}}]$ | | | (43,  − 22) | group 9 | | (23,  − 33) | | | (28,  − 21) | | | (29,  − 21) | | Our *U*-component differs from the other works by at least in $10 {{\rm\,km\,s^{-1}}}$. This can be a consequence that box 9 corresponds to at least two independent groups and thus its position represent mean coordinates for both groups. Plot 9 in Fig. [kde] corresponds to the Wolf 630 stream and shows its metallicity distribution and $\rm [Mg/Fe]-[Fe/H]$ diagram. It has a metallicity properties very similar to the full sample, but maybe with a few more metal-rich stars. Based on the analysis of SN/BSN and D/TD sub-samples in Fig. [samples], Wolf 630 appears to be a thin disk structure belonging to the nearby sample. It could have a resonant origin with the remark on the uncertainties of the RAVE abundances that makes the disk separation less reliable. #### **Dehnen98** (9): This structure is detected inside box 9 in Fig. [samples]. It is a small kinematic group that was first discovered by and has later been confirmed by other studies. found a group with the same (*U*, *V*) coordinates, but after the analysis of the branch structure using modified equations as was first proposed by to fit four branches of groups based on its motion, they concluded that the group could belong to the Coma Berenices stream. sub-sequently detected a kinematic over-density which they associated with the Dehnen98 structure. This result is marked by a blue cross at the right-hand side of the box 9 in Fig. [j3]. The table below gives positions of this structure found in this work and from the literature. [h] | | | | --- | --- | | (*U*, *V*) | Reference | | $[{{\rm\,km\,s^{-1}}}]$ | | | (43,  − 22) | group 9 | | (50,  − 25) | | | (48,  − 24) | | | (43,  − 24) | | Our detection of the Dehnen98 structure is in agreement with previous works. Dehnen98 has a very high percentage of detection in the MC simulations compared to other groups that we have detected:  ∼ 98% MC catches and it contains 58 stars. The metallicity distribution and the $\rm [Mg/Fe]-[Fe/H]$ diagram of Dehnen98 is given in plot 9 in Fig. [kde]. Dehnen98 has similar metallicity properties to the total sample, with more thin disk stars. From the analysis of Fig. [samples] Dehnen98 contains stars of different populations and belongs to the nearby sample. Concerning the assumption stated in that Dehnen98 is a part of a Coma Berenices branch, we can say that this group has similar properties with Wolf 630 and Coma Berenices streams, and they all could form to one large-scale structure that has a dynamical origin. A detailed chemical tagging of stars that belong to these groups is required to properly speculate on its origin. #### ***γ*Leo** (12): This structure is shown as group 12 in Fig. [samples], and has a relatively low detection percentage of 27% in the MC simulations. It is rather small with only 96 stars from our sample. Figure [j3] shows two blue crosses for this group from. The table below gives velocity positions of our detection of *γ*Leo together with those from the literature. [h] | | | | --- | --- | | (*U*, *V*) | Reference | | $[{{\rm\,km\,s^{-1}}}]$ | | | (52, 0) | group 12 | | (50, 0) | | | (56, 2)*I* | | | (68, 1)*I**I* | | | (65, 1) | | Group 12 is consistent with and peak *I*, while is in agreement with the structure II from. All the groups have similar *V*-velocities. Plot 12 in Fig. [kde] shows the $\rm [Mg/Fe]-[Fe/H]$ diagram and the [Fe/H] distribution for group 12. The *γ*Leo stream shows metallicity properties similar to the total sample and it appears to be a nearby thin disk structure (see Fig. [samples]) with only a few stars in the TD sample. Thus, it could have formed due to dynamical reasons. #### **New** (13): Group 13 at $(37, 8)\,{{\rm\,km\,s^{-1}}}$ in Fig. [samples] has 201 stars and a high detection level of 74% MC repeats. We cannot find any previous detections in the literature of a structure at these coordinates, and we therefore identify this as a New structure. It appears to be a nearby structure and is detected in both the thin and the thick disk sub-samples. It is, however, not detected in the more distant BSN sample, which could be due to smaller number of stars in the BSN sample compared to the SN sample. The metallicity distribution and $\rm [Mg/Fe]-[Fe/H]$ diagram for this new group 13 are shown in plot 13 in Fig. [kde]. This group contains stars of both disk populations. It can be an elongation of larger nearby streams such as Sirius or *γ*Leo, as their properties are similar. A more precise detailed chemical analysis of stars associated with these groups is required to more precisely probe the origin of the new group 13. [h] | | | | --- | --- | | (*U*, *V*) | Reference | | $[{{\rm\,km\,s^{-1}}}]$ | | | (37, 8) | group 13 | #### **Antoja12(15)** (14): This structure was first reported in but was detected only at 2*σ* confidence level and needed further confirmation. In Fig. [j3] it is shown as a blue cross close to the box 14. We received a 3*σ*-significant group 14 which is 10 ${{\rm\,km\,s^{-1}}}$ higher in *U*, but can be associated with the one detected in. It has only 6% of MC detection and accounts 8 stars. Below we show a list of positions we found in the literature for this structure and included our results. [h] | | | | --- | --- | | (*U*, *V*) | Reference | | $[{{\rm\,km\,s^{-1}}}]$ | | | (48,  − 68) | group 14 | | (60,  − 72) | | | (72,  − 64) | | This group appears clearly in the nearby and the thin disk sub-samples (see Fig. [kde]). Taking into account the low number of stars associated with this group it could not be observed in the BSN and TD samples as they consist of less stars than SN and D samples. The metallicity distribution and the $\rm [Mg/Fe]-[Fe/H]$ diagram for group 14 are shown in plot 14 in Fig. [samples] and both point toward the thin disk population, which is coherent with the result from Fig. [kde]. The *Gaia* DR2 data will provide astrometric data for more stars, thus, one could verify whether this group is observed in the BSN and TD samples too. With the current results a dynamical origin seems favoured. #### **Antoja12(12)** (15): This group was stated as new in and is marked by a cross in Fig. [j3] close to structure 15. In this study, as in, structure 15 was detected with a 3*σ*-significance. The table below gives the positions for this group obtained in this work and from the literature. [h] | | | | --- | --- | | (*U*, *V*) | Reference | | $[{{\rm\,km\,s^{-1}}}]$ | | | (94,  − 13) | group 15 | | (92,  − 23) | | | (91,  − 35) | | Our group 15 shares the same *U* velocity as in the other studies, but differs in *V* direction by $-10\,{{\rm\,km\,s^{-1}}}$ compared to. Interestingly, obtained a structure which also differs by $10\,{{\rm\,km\,s^{-1}}}$ in the *V* direction, but in the other direction. It can be the same structure as it is located in the low-density region of the *U* − *V* map, so it cannot be affected by other stronger streams. Antoja12(12) has a 38% detection in the MC simulations and includes only 10 stars. Group 15 appears to be a thin disk structure mainly present in the nearby sample (see Fig. [samples]). The metallicity distribution and $\rm [Mg/Fe]-[Fe/H]$ diagram for group 15 are shown in plot 15 in Fig. [kde] and its properties are similar to the full sample. We suppose that this group is an independent one, but has to be confirmed in later studies that contain more stars. The *Gaia* DR2 data release may help to resolve this case. #### **Bobylev16** (16): This group has 14 stars and 17% of MC detection. It was first discovered in and is shown with a blue cross on the left-hand side of structure 16 in Fig. [samples]. We confirm this group and add that it belongs to all both nearby and distant, thin and thick disk samples, what allows to state that it is a mixture of different type stars. [h] | | | | --- | --- | | (*U*, *V*) | Reference | | $[{{\rm\,km\,s^{-1}}}]$ | | | ( − 94,  − 5) | group 16 | | ( − 96,  − 10) | | The same as group 15, structure 16 is observed far aside from the majority of kinematic groups. This supports group’s independence from other structures, but unlike group 15 is present all samples. The metallicity distribution and [Mg/Fe]-[Fe/H] diagram for group 16 are shown in pattern 16 (see Fig. [kde]) and it appears to have more thin disk stars. We propose its dynamical origin similar to the Sirius group since the sample properties are alike. #### ***ε*Ind** (17): The closest blue cross to the group 17 in Fig. [j3] is the one previously found at 2*σ* confidence level by that is listed in the table below. Although the structure is detected at the 3*σ*-significane level, it has a low percentage of detection, only 12% and contains only 24 stars. This group appears to be detected only in the nearby sample, but this could be due to the fact that this group contains very few stars. [h] | | | | --- | --- | | (*U*, *V*) | Reference | | $[{{\rm\,km\,s^{-1}}}]$ | | | ( − 88,  − 48) | group 17 | | ( − 81,  − 42) | | | ( − 90,  − 49) | | Group 17 is a small group and is thus easier to detect in the larger SN sample. However, it is not detected in the larger thin disk sample that has 5000 more stars than the SN sample. The metallicity distribution and $\rm [Mg/Fe]-[Fe/H]$ diagram for group 17 are shown in plot 17 in Fig. [kde], it appears to mainly be a thin disk structure. To speculate on the origin of this kinematic feature *Gaia* DR2 data should be used to have larger stellar sample. #### **Two tentatively new structures** (18-19), *J* = 2: These two groups have a low structure count in MC simulations and contains 12 and 70 stars respectively. Group 19 can be associated with HR1614 peak detected at (15,  − 60) ${{\rm\,km\,s^{-1}}}$ by (marked by a blue cross in Fig. [j3]), but none of the groups have similar velocities. Tentatively we define them as new structures, but require further confirmation with larger data samples. [h] | | | | --- | --- | | (*U*, *V*) | Reference | | $[{{\rm\,km\,s^{-1}}}]$ | | | ( − 88,  − 76) | group 18 | | ( − 18,  − 67) | group 19 | The metallicity distributions and $\rm [Mg/Fe]-[Fe/H]$ diagrams for groups 18-19 are shown in plots 18-19 in Fig. [kde]. Structure 19 appears to be more a thin disk structure, and structures 18 is seen only in SN sample, which could be a consequence of the groups small sizes. *Gaia* DR2 will help us to further investigate the existence and origin of these two structures. Summary and discussion ====================== We have analysed the velocity distribution of 55831 *Gaia* DR1/TGAS stars in the Solar neighbourhood and sample properties relative to distance and metallicity using wavelet analysis. 19 kinematic structures were detected at scales 8-16 ${{\rm\,km\,s^{-1}}}$, 32 groups found between 4-8 ${{\rm\,km\,s^{-1}}}$ and 4 structures at 16-32 ${{\rm\,km\,s^{-1}}}$ in the *U* − *V* plane. Our analysis has several advantages comparing to previous works. As it is the first ever analysis of *Gaia* DR1 data in such a kinematical context, and the most important benefit is the precision of astrometry provided by TGAS itself. High precision of the input data allow us to apply the analysis for a larger sample of stars than in previous works, and even after cutting the sample based on *σ**U* and $\sigma\_V<4 {{\rm\,km\,s^{-1}}}$ we still have a competitive number of stars. This limit on velocity uncertainties is important to obtain robust measurements of positions of kinematic structures. In previous works velocity uncertainties were either not accounted at all, either were established too high to retain more stars in the sample, that could led to uncertain results in both cases. A set of 3*σ*-significant (99.8 %) wavelet coefficients that indicate kinematic structures were received after applying the wavelet analysis and filtering the data. Although the output data was already smoothed with the auto-convolution histogram method, the question whether obtained structures are real remained due to the availability of velocity uncertainties. Then we run Monte Carlo simulations and apply the same analysis to them as for the real sample. This step is beneficial for the procedure in general as it allows to calculate the percentage of detection which indicates if the structures are real. To investigate properties of obtained structures with respect to distance and chemical composition four sub-samples were defined: a Solar neighbourhood sample with stars closer than 300pc (SN), a sample with more distant stars (BSN), and based on [Mg/Fe] enhancement (from RAVE abundances) a thick disk sample (TD) and a thin disk sample (D). As shown is Sect. [sec:sec:sn], [sec:disks], some structures are SN/BSN and/or D/TD structures. For example, group 10 (Hercules) is obviously SN/D structure, while group 4 (Coma B part) is a BSN/TD structure. Most of the moving groups are observed at close distances *d* < 300pc and at higher metallicities. This can be a repercussion of the selection effect since SN and D samples contain more stars compared to the BSN and TD samples. Some groups change their positions and shapes when considering distance and metallicity (e.g. group 7 (Hyades), and group 2 (Sirius)). These variations could be a consequence of how the sample is split, where the SN and D samples contain more stars than BSN and TD samples, but possibly can prove the dynamical origin of these groups since shifts in the velocity plane were also found in, when analysing nearby and distant samples of stars. They found that the observed shifts were consistent with the dynamical models of spiral arm effects discussed in. *Gaia* DR2 data will cover more stars and can possibly resolve the question of shifted positions. With a high probability we observe major peaks like Sirius, Coma B, Hyades, Pleiades, Hercules, Wolf 630. We confirm group 9 (Dehnen98), which was recently discovered in and discuss the possibility to be a part of the Coma Berenices stream together with Wolf 630, since these groups share similar metallicity properties (see Fig.[kde], Fig.[samples]). Groups 14 and 15 (Antoja12(15) and Antoja12(12)) were first reported in at 2 and 3*σ* confidence level. We confirm both of them at the 3*σ* level. Structure 16, which was first discovered in, is also confirmed. We report on a new group (number 13) which has not been discussed in the literature before. It appeared in 74% of the MC runs and contains around 201 stars. This group belongs to the nearby sample and unites stars of both disks. Group 13 is located in the proximity to Sirius and *γ*Leo streams. The latest one, group 12, has rather low percentage of detection, but shares similar properties as the group 13. This new group could be an independent structure, but could also be an elongation of the Sirius or *γ*Leo streams, because the metallicity properties are similar for all three groups (see Fig.[kde], Fig.[samples]). To claim if this structure is independent, this case should be further investigated, possibly through a detailed chemical analysis of stars that belong to the structures. The *ε*Ind and another two tentatively new structures have weak detection percentages in the MC simulations (less than about 25 %). Hence, the tentatively new structures 18 and 19 need further confirmation. We discuss a possible origin of stellar streams 1-19 based on our results and previous findings form the literature. If found groups showed metallicity homogeneity it would point towards an origin through being remnants of open clusters. Most of the structures do not show any particular properties inherent to thin or thick disk populations and thus we consider them to be a mixture of different type stars caused through dynamical resonances. Those groups that are more likely thick or thin disk structures are either large-scale structures (e.g. Hercules), or are small-scale groups located far from the most dense regions in the *U* − *V* plane, and thus, should be independent structures possibly also caused by resonances too. Our conclusions on the origin of kinematic structures are consistent with previous works, but should be verified with a better data which would include more stars with high-precision abundances and astrometry. We also want to discuss a few groups which are not observed in our work, but they were in the centre of discussions in a couples of recent works. Among them is a debatable structure at (35,  − 20) ${{\rm\,km\,s^{-1}}}$ which was first reported in. Taking into account its proximity to Wolf 630, Dehnen98 and such bigger streams like Sirius or Coma Berenices, authors of the same paper claim that the structure at (35,  − 20) ${{\rm\,km\,s^{-1}}}$ can be en elongation of these bigger groups. At the same time detected a distinct structure at (38,  − 20) ${{\rm\,km\,s^{-1}}}$ with probability 98% ( ∼ 3*σ*) and suggested that it is an independent group. However, in our analysis we detected all discussed above streams except the one at (35,  − 20) ${{\rm\,km\,s^{-1}}}$, while Wolf 630 and Dehnen98 share similar metallicity properties to the Coma Berenices stream. Groups NGC 1901 & IC 2391 were detected by and at ( − 25,  − 10) and ( − 20.8,  − 15.9) ${{\rm\,km\,s^{-1}}}$ respectively. Interestingly, later works with bigger stellar samples like, did not detect these structures. make an assumption that these groups are weak compared to super-streams like Sirius, Coma Berenices, Hyades and Pleiades as they did not detect them. We do not observe these groups too. We note that the *J* = 2 scale (see Fig. [uvall]) which is almost two times as rich with kinematic structure detections than the scale *J* = 3, all these smaller-scale *J* = 2 structures could be associated with some of the *J* = 3 streams (see Table [groups2]). The question remains for groups 21 (part of *γ*Leo?), and groups 25-30 (parts of Antoja(12) and Antoja(15)?) detected on the *J* = 2 scale. These structures could be also independent and new, the answer may be given later when *Gaia* DR2 data is available. The next step should be a deeper investigation of the origin of these moving groups through a better detailed analysis of chemical composition and ages of stars associated with each group to better understand the Milky Way formation. This can be done on small scales for individual structures, but ongoing and upcoming large spectroscopic surveys such as for example the $\it Gaia$-ESO, WEAVE, and 4MOST surveys will provide precise elemental abundances for millions of stars, that together with astrometry from $\it Gaia$ will allow us to probe the kinematic structures at greater detail throughout the Galactic disk. T.B. was funded by the “The New Milky Way” project grant from the Knut and Alice Wallenberg Foundation. We thank Prof. F. Murtagh for making available for us the MR software packages and for valuable and helpful comments. --- 1. Using the gaia\_tools Python package developed by Jo Bovy that is available at <https://github.com/jobovy/gaia_tools>[↩](#fnref1) 2. if *w**j*(*i**x*, *i**y*) is significant for a given scale *j* and position (*i**x*, *i**y*), then *M**j*(*i**x*, *i**y*) = 1, otherwise, *M**j*(*i**x*, *i**y*) = 0[↩](#fnref2) Kinematic structures of the Solar neighbourhoodrevealed by ***Gaia*** DR1/TGAS and RAVE ======================================================================================= ### Received 10 May 2017 / Accepted 19 September 2017 We aim to study the velocity distribution of stars of the Solar neighbourhood and investigate the properties of individual kinematic structures in order to improve our understanding of their origins. Using the astrometric data provided by *Gaia* DR1/TGAS and radial velocities from RAVE DR5 we perform a wavelet analysis with the *à trous* algorithm to 55831 stars that have *U* and *V* velocity uncertainties less than $4\,{{\rm\,km\,s^{-1}}}$. An auto-convolution histogram method is used to filter the output data, and we then run Monte Carlo simulations to verify that the detected structures are real due to velocity uncertainties. Additionally we analysed our stellar sample by splitting all stars into a nearby sample ( < 300pc) and a distant sample ( > 300pc), and two chemically defined samples that to a first degree represent the thin and the thick disks. We detect 19 kinematic structures in the Solar neighbourhood between scales $3-16\,{{\rm\,km\,s^{-1}}}$ at the 3*σ* confidence level. Among them we identified well-known groups (such as Hercules, Sirius, Coma Berenices, Pleiades, and Wolf 630), confirmed recently detected groups (such as Antoja12 and Bobylev16), and detected a new structure at $(U,V)\approx(37, 8)\,{{\rm\,km\,s^{-1}}}$. Another three new groups are tentatively detected, but require further confirmation. Some of the detected groups show clear dependence on distance in the sense that they are only present in the nearby sample ( < 300pc), and others appear to be correlated with chemistry as they are only present in either of the chemically defined thin and thick disk samples. With the much enlarged stellar sample and much increased precision in distances, proper motions, provided by *Gaia* DR1 TGAS we have shown that the velocity distribution of stars in the Solar neighbourhood contains more structures than previously known. A new feature is discovered and three recently detected groups are confirmed at high confidence level. Dividing the sample based on distance and/or metallicity shows that there are variety of structures which are as large-scale and small-scale groups, some of them have clear trends on metallicities, others are a mixture of both disk stars and based on that we discuss possible origin of each group. Introduction ============ Studies of the velocity distribution of stars in the Solar neighbourhood have shown that it contains a plethora of kinematic structures, with stars that have similar space velocities (*U*, *V*, *W*). There are several possibilities to why different stars have similar kinematic properties: they could be from evaporated open clusters; they could be due to dynamical resonances within the Milky Way; or they could even be remnants of accreted satellite galaxies that merged with the Milky Way billions of years ago. This means that stellar streams retain a lot of information of various dynamical processes of the Milky Way’s past and provide clues to our understanding of the formation of the Galaxy. Mapping the structure and properties of the Milky Way, that is a benchmark galaxy, will also aid our attempts to understand the evolution and formation of large spiral galaxies in general. A detailed characterisation of the kinematic properties together with chemical composition of the stars of such structures are crucial when trying to trace their origins. The release of Hipparcos Catalogue twenty years ago boosted the study of kinematic properties of the Solar neighbourhood. For example, studied the distribution of 14369 kinematically selected stars using a maximum likelihood estimate method and detected 14 features in the *U* − *V* plane of Galactic space velocities. The *W* direction did not appear very rich in structures with only four moving groups detected. The sample was then split based on (*B* − *V*) colour index to study the behaviour of young and old stars separately. They found that there are moving groups composed of red stars (supposed to be older), while younger structures were composed of stars with different colours. This was an argument against the theory previously proposed by, that kinematic structures could be remnants of disrupted open clusters, in which stars are chemically homogeneous. Instead, propose that moving groups that follow eccentric orbits could be formed through resonances with the Galactic bar. studied a sample of 4597 Hipparcos stars and, unlike, used radial velocities provided in Hipparcos Input Catalogue. applied a wavelet analysis method for kinematic group detection, identified the most significant structures in the *U* − *V* plane, and found that the velocity distribution has a more complicated structure than just moving groups and has a larger, branch-like structure. Later, using a combination of CORAVEL radial velocities and ages, together with Tycho-2 astrometry, investigated a stellar sample of 5952 stars and found well-known streams like Hercules, Sirius, Hyades and Pleiades. They suggest that stellar groups are of dynamical origin as isochrones of associated stars with the moving groups show a big dispersion in ages. A deeper study of the origin of moving groups provided by involved wavelet transform applied to the same data as in and checked the theory of kinematic groups being remnants of open clusters. After fitting isochrones inherent for open clusters to stars associated with the Sirius, Hyades and Pleiades streams, claimed dynamical origins for these groups, as they did not match. investigated a larger sample of 24910 stars using wavelet techniques and analysed the age-metallicity distribution of the kinematic branches of Sirius, Hercules, Hyades-Pleiades and Coma Berenices. Each branch showed a wide spread of metallicities, especially Hercules. Sirius group stars were older and peaked at about 400 Myr, compared to Hyades-Pleiades which consist of mainly younger stars. later detected 22 structures by applying a wavelet analysis to a sample of 14000 dwarf stars from and 6000 giant stars from. That study provided a comprehensive comparison of the positions of all kinematic structures detected by different authors. Eleven of 22 groups had previously been found in the literature, while the remaining 11 were discovered for the first time. identified 19 structures in the Solar neighbourhood by analysing a sample of over 200000 stars with available RAVE radial velocities and compared their results with those obtained by using the Geneva-Copenhagen survey. They found 19 structures in the Solar neighbourhood from a sample of over 110000 stars and support the dynamical origin of stellar branches based on age-metallicity distribution from and the fact that the main groups are large-scale structures that are detectable even beyond the Solar neighbourhood. An alternative approach (than analysis in the *U* − *V* velocity plane) is to search for streams in the plane defined by the integrals of motions. This way of searching for kinematic over-densities is important as one may discover stellar streams of possible resonant or even extra-galactic origin. Stars in associated kinematic over-densities keep the same angular momenta and in the Solar neighbourhood behave the same way as moving groups of the cluster disruption origin. Together with the approximation of Keplerian orbits,, studied the distribution of stars in $\sqrt{U^2+2V^2}$ and *V* plane as a consequence of integral of motion approach, first discussed in. applied wavelet transform to the thin and thick disk samples that consist of nearby subdwarfs with metallicities [Fe/H] >  − 0.6 and [Fe/H] ≤  − 0.6. They found Pleiades, Hyades and Hercules in the thin disk and Arcturus stream in the thick disk. were the first to use RAVE DR1 archive. Their sample consisted of 7015 stars that allowed them to detect 8 groups in the $\sqrt{U^2+2V^2}$ and *V* plane. Later, focused on the search for kinematic structures for the thick disk population based on LAMOST survey. Their stellar sample consisted of 7993 stars. Thus, only a few kinematic structures were detected. Usually the origin of kinematic structures is studied with help from our knowledge about other components of the Galaxy, but did the opposite: assuming that the Hercules stream was caused by resonances of Galactic bar, they used the Hercules to constrain the Galactic bar’s pattern speed and the local circular frequency. This paper demonstrated further the importance of the study of kinematic structures, as they could be a key to better understanding of the Milky Way formation. Cross-matching the first astrometric data release of *Gaia* DR1 and the radial velocities from the RAVE DR5 catalogue, we now have an access to the most up-to-date and precise astrometric measurements for more than 200000 stars. This is a substantially larger sample than most sample that has been previously available, and the precision in the data is also much better than before. Recently, using TGAS and RAVE, the kinematics of halo stars was investigated by, who studied the velocity correlation function and the energy versus angular momentum space of about 1000 stars with metallicities $\rm [M/H] \le - 1.5$. They found that the distribution of stars in the space defined by integrals of motion has complex kinematic structure and more that a half of them follow retrograde orbits. Halo substructure with TGAS and RAVE was also studied in. The clump of 14 co-moving metal-poor giants was discovered. Based on small spreads of the metallicity within the group, authors explain its origin as being a dissolving remnant of a globular cluster. applied a wavelet analysis technique to a sample that is a combination of the LAMOST DR3 and the *Gaia* TGAS catalogues. They detected 16 kinematic structures were found and four of them are potential new stream candidates. The list of works on kinematic groups could be extended and all of them prove that the velocity distribution in the Solar neighbourhood is inhomogeneous and has a complex, branch-like structure. The question on how did the stellar streams formed is still actual. Some of the mentioned above papers attempts to resolve this question, and as a result exists a variety of theories that could explain the origin of stellar streams, but there is no exact agreement among them even for the well-studied groups, which further demonstrates the importance of the study of kinematic structures. Using the recent TGAS and RAVE data releases, this study focus on the velocity distribution of stars in the *U* − *V* plane to reveal the structures and to further analyse some properties of each structure in terms of distance and chemistry. The paper is organised in the following way: in Sect. [secdata] we discuss the stellar sample and its properties; Sect. [sec:numericalmethods] contains an explanation of numerical methods (wavelet analysis) used in this work; Sect. [sec:maps] covers the description of input and output maps for the the wavelet analysis; all the results including tables and figures of kinematic structures we present in Sect. [sec:results]; finally, the summary and discussion of this work are in Sect. [sec:discussion]. Stellar sample ============== To detect stellar structures in velocity space we will perform a wavelet analysis applied for a data sample in the *U* − *V* plane (see Sect. [sec:numericalmethods]), where *U*, *V*, *W* are the space velocities of the stars in a right-handed Cartesian coordinate system (*X*, *Y*, *Z*). *X* axis points towards Galactic centre, *Y* axis defines the direction of Galactic rotation, and the *Z* axis points towards Galactic North Pole. The sample should be as large as possible and contain precise measurements of distances, proper motions, and radial velocities, for the calculation of accurate space velocities. Distances and radial velocities ------------------------------- Since the *Gaia* satellite was launched in 2013 we are expecting the most precise astrometric measurements for billions of stars in the Milky Way. The first *Gaia* data release (DR1) due to the shortage of observations is still far from declared precision: for a star with a magnitude *V* = 15 it is expected to get positions, proper motions and parallaxes with the precision up to 5-25 *μ*as. However, adding astrometry from the Hipparcos catalogue, TGAS gives astrometric solutions for 2.5 million stars with precise measurements of all required astrometric data. According to it is recommended that a systematic error of 0.3 mas has to be accounted. Later, states that parallax uncertainties already represent the total uncertainty and additional account of systematic error could lead to overestimation. So, in this work we used original TGAS data. In order to calculate the three-dimensional movements of the stars, i.e. the *U*, *V*, and *W* space velocities, the TGAS data needs to be complemented with radial velocities. The Radial Velocity Experiment (RAVE) is a medium-resolution spectroscopic survey with the primary goal of determining radial velocities and stellar parameters for hundreds of thousands of stars in the Southern Hemisphere using spectra of resolving power R = 7500. The final release of RAVE (DR5) gives precise measurements of radial velocities of 457588 unique stars. Cross-matching RAVE DR5 with TGAS provide us a sample of 159299 stars with known coordinates (*α*, *δ*), proper motions (*μ**α*, *μ**δ*), positive parallaxes (*π*), radial velocities (*v**r**a**d*), abundances of Mg and Fe and their associated uncertainties for all stars. The RAVE catalogue contains multiple observations for some stars. In those cases, the median value of every parameter were used in this work. To further expand our sample we will also explore the option to include the data releases from The Large sky Area Multi-Object Fibre Spectroscopic Telescope. This is a Northern hemisphere survey which contains spectra of almost 2 million stars with the resolution of R = 2000. The cross-matching of A, F, G and K type stars in the LAMOST DR2 catalogue with TGAS [1](#fn1) leaves us a sample of 107501 stars with positive parallax. Space velocities and their uncertainties ---------------------------------------- Space velocities and their uncertainties, which are dependent on the accuracy of the proper motions, the parallaxes, and the radial velocities, were computed according to the equations in. [guvhist] Figure [guvhist] shows the distributions of the uncertainties in the *U*, *V*, and *W* velocities for 159299 RAVE stars (top) and 107501 LAMOST stars (bottom). Each velocity component is indicated with a different colour. About 35% (55831) of the RAVE stars have velocities with uncertainties smaller than 4${{\rm\,km\,s^{-1}}}$, while only 0.8% (905) LAMOST stars belong to the same region. Such a comparably low accuracy of LAMOST velocities can be explained with high uncertainties of radial velocities, which are one of the main components when computing *σ**U*, *σ**V* and *σ**W*. cross-matched LAMOST DR1 with APOGEE and discovered an offset of  ∼ 5.7${{\rm\,km\,s^{-1}}}$ of LAMOST radial velocities. report that LAMOST line-of-site velocities are underestimated and have to be corrected by 5${{\rm\,km\,s^{-1}}}$. The accuracy of space velocities is crucial for detection of kinematic groups which will be shown later in the Sect. [sec:maps]. Taking into account high uncertainties for the LAMOST stars we decided to focus our analysis on the RAVE sample only, which gives us a sample of 159299 stars. The spatial distribution of our RAVE-TGAS star sample in the *X* − *Y* and *X* − *Z* planes is shown in Fig. [fig:xyz]. In this plot we show three distributions: blue colour is for the sample of all 159299 stars, green colour shows 55831 stars with *σ**U*, $\sigma\_V < 4 {{\rm\,km\,s^{-1}}}$, and the red colour indicates the same stars as the green but with distance limit  < 300pc. As will be shown later in Sect. [sec:maps] we will focus on the analysis of the last two sub-samples. The precision of the parallax distances provided by TGAS is high enough and additionally the cut on velocity uncertainties (*σ**U* and *σ**V*) less than $4\,{{\rm\,km\,s^{-1}}}$ already cut stars by parallax too. In Fig. [pe] the distribution of parallax relative uncertainties *π**e*/*π* for the total sample is shown, where *π* is the parallax and *π**e* is its uncertainty. Most of stars have uncertainties less than 30%. This cut is necessary to get robust positions of kinematic structures. The question if the Local Standard of Rest (LSR) should be included in the space velocities, or not, in the detection analysis for kinematic groups is debatable. In several works the space velocities were not adjusted for the peculiar Solar motion, while in some papers it was. We chose to not correct our space velocities to the LSR as the adopted solar motion relative to the LSR may differ between studies and if so, would make direct comparisons of the detected structures more difficult. Numerical methods ================= Different statistical methods have been used to highlight kinematic over-densities: wavelet analysis, maximum likelihood algorithm, and adaptive kernel estimate. We chose the most efficient technique for our purposes: the wavelet analysis with the *à trous* algorithm as it is a powerful tool which gives signal characteristics in terms of location, both in position and scale (size of the structure) simultaneously. The utility of this analysis method applied to the detection of moving groups in the Solar neighbourhood has already been demonstrated in several studies. The analysis consists of applying a set of filters at different scales to the original data in order to determine wavelet coefficients. Detected structures, which correspond to local maxima in the wavelet space, can be either physical (kinematic groups) or ‘artefacts’. The latter can have two origins: (1) The wavelet coefficients contain Poisson noise due to that the stellar sample is finite. Details on the filtering of the Poisson noise can be found in Sect. [sec:sec:filtering]; (2) The space velocities of the stars contain significant uncertainties. Details on the verification of the robustness of results are given in Sect. [sec:sec:MC]. [pe] The *à trous* algorithm ----------------------- We focused on the wavelet analysis with the *à trous* algorithm because it has an advantage compared to other statistical methods: it does not require any assumptions on the initial stellar distribution. So, the input data correspond only to the original star count map in the *U* − *V* plane. The algorithm implies applying a set of filters at different scales *s**j* = 2*j* × Δ in order to decompose the 2-D signal *c*0(*i**x*, *i**y*) into a set of wavelet coefficients (*w*1,  ...,  *w**n*) that contain the information about kinematic structures. Here, (*i**x*, *i**y*) is a position at the input grid, *j* is the scale index (*j* ∈ [1,  *p*]), *p* is the maximum scale and Δ is the bin size of the input pixel grid which is used to detect structures which have sizes between *s**j* − 1 and *s**j* ${{\rm\,km\,s^{-1}}}$ (for details on the algorithm see ). For one position (*i**x*, *i**y*), a positive wavelet coefficient corresponds to an over-density in the velocity space. We followed the documentation provided with the MR software and we used a maximum scale *p* equal to log2(*N* − 1), where *N*, assuming that the input star count map has a size *N* × *M*, is the number of pixels in the smaller direction. Image filtering and detection of significant structures ------------------------------------------------------- Given that the data sample is finite, wavelet coefficients at each scale except the information about the structures contain also noise which follows Poisson statistics. The procedure to determine if a wavelet coefficient is significant or not depends on the kind of data. First, we determined the multi-resolution support of the resulting image, which is a logical[2](#fn2) way to store information about the significance of a wavelet coefficient at a given scale *j* and a position (*i**x*, *i**y*). Our data contains a large number of pixels with less than 30 star counts, which is called the case of “few events”. In order to remove the Poisson noise in the case of “few events” we used the auto-convolution histogram method  which has already been successfully used to detect structures in data with few events such as low intensity X-ray images . With the final set of wavelet coefficients we used an algorithm provided with the MR software that groups coefficients into structures that are characterised by the level of confidence *ε*. A structure detected with a 3*σ* confidence level corresponds to a 99.86% probability that the structure was not produced by the Poisson noise. Then, the algorithm approximates the shape of the structure by an ellipse, characterised by its centre, its semi-minor axis, its semi-major axis, and the angle between the major axis and the horizontal axis of the input map. These parameters are useful for the estimation of the number of stars that belongs to the structure, assuming that all the stars inside the ellipse can be associated with the structure. [bins] Analysis ======== Input data ---------- The constraints on velocity uncertainties and the choice of the bin size of the input star count map are linked. First, the uncertainties have to be at the same time large enough in order to provide us with as a large sample as possible, and at the same time small enough to take advantage of the high-precision data provided by *Gaia* DR1/TGAS and RAVE. Second, the bin size of the star count map has to be consistent with the space velocity uncertainty of the stars in order to get robust positions of the structures. This means that the bin size needs to be roughly equal to the average velocity uncertainty of the sample, otherwise the probability that a star falls into the particular bin will be reduced and therefore the precision of the positions of kinematic structures will also decrease. If the bin size is higher than  ∼ 5${{\rm\,km\,s^{-1}}}$, the first scale (*J* = 1) would be $10-20\,{{\rm\,km\,s^{-1}}}$, but from previous studies it has been shown that the typical size of structures is of the order $10\,{{\rm\,km\,s^{-1}}}$. Thus, a bin size larger than about 5${{\rm\,km\,s^{-1}}}$ should not be used as too many structures would be lost. With a restriction on *σ**U* and *σ**V* equal to $4\,{{\rm\,km\,s^{-1}}}$ it should possible to get robust measurements of positions of structures, and that leaves us with a sample of 55831 stars that have average velocity uncertainties of $\langle\sigma\_{U}\rangle=1.8\,{{\rm\,km\,s^{-1}}}$ and $\langle\sigma\_{V}\rangle=1.6\,{{\rm\,km\,s^{-1}}}$. We then chose a bin size of $\Delta\_{\rm{bin}}=1 {{\rm\,km\,s^{-1}}}$. With this value the scales of the output images from the wavelet transform will be: *J* = 1 (2-4${{\rm\,km\,s^{-1}}}$), *J* = 2 (4-8${{\rm\,km\,s^{-1}}}$), *J* = 3 (8-16${{\rm\,km\,s^{-1}}}$), *J* = 4 (16-32${{\rm\,km\,s^{-1}}}$), *J* = 5 (32-64${{\rm\,km\,s^{-1}}}$). Monte-Carlo simulations ----------------------- The space velocities of the stars have uncertainties that will influence the ability to detect kinematic structures and how robust these detections will be. Figure [bins] shows the probability density function of one star to be located in the centre of one bin in *U* − *V* plane (plots on the left-hand side) or at the edge of the bin (plots on the right-hand side), given that the velocity uncertainties are equal to the average uncertainties of the sample (upper plots) or half of the average uncertainties (lower plots). The probability (see numbers at each concentric circle) that a star can fall into different bins is non-zero and consequently, can lead either to that structures being ‘fake detections’, or miss the detection of real physical structures. Hence, we perform Monte-Carlo (MC) simulations in order to estimate the robustness of the detected structures. *N**M**C* synthetic samples are generated from the original one by randomly drawing 55831 new couples (*U*, *V*) assuming that the stars have Gaussian velocity distributions, where mean values are positions (*U**i*, *V**i*) and the standard deviations are uncertainties (*σ**U**i*,  *σ**V**i*), where $i\in[1,\,N\_{\rm{stars}}]$ refers to the $i^{\rm{th}}$ star in the original sample. Then, the wavelet analysis and the structure detection algorithm are applied to the *N**M**C* synthetic stellar samples and the positions of all structures at all scales are stored for each simulated sample. Output data ----------- Following the computations described in Sect. [sec:numericalmethods] and MC simulations as in Sect. [sec:sec:MC], the wavelet analysis was applied to these *N**M**C* samples giving: (1) *N**M**C* sets of wavelet coefficients [(*w*11,  *w*21,  ...,  *w**J*1),  ..., (*w*1*N**M**C*,  *w*2*N**M**C*,  ...,  *w**J**N**M**C*)]; (2) the multi-resolution support for *J* scales and *N**M**C* simulations, which gives: [(*M*11,  *M*21,  ... , *M**J*1),  ..., (*M*1*N**M**C*,  *M*2*N**M**C*,  ...,  *M**J**N**M**C*)]; (3) positions of detected structures for *J* scales and *N**M**C* simulations. The results are presented in two different forms. First as a structure’s position count map, in which the positions of the detected structures of each of the 2000 samples are superimposed (see Fig. [uvall]). The detected structures are marked by black boxes. To quantify the ‘realness’ of each group, the fraction of times each group was detected relative to the total number of simulations is calculated. Figure [uvall] shows the position count map for the detected structures at scales *J* = 2 (4-8${{\rm\,km\,s^{-1}}}$, top plot), *J* = 3 (8-16${{\rm\,km\,s^{-1}}}$, middle plot) and *J* = 4 (16-32${{\rm\,km\,s^{-1}}}$, bottom plot). The highest number of individual structures, shown by black rectangles with individual numbers, is for *J* = 2. However, as can be seen, scale *J* = 3 also includes all significant structures detected at scales *J* = 2 and *J* = 4, and covers smaller and bigger scales. How many Monte Carlo simulations are enough for the results to convergence? To explore this, Fig. [convergence] shows how the positions for structure number 13 from the *J* = 3 map converge as the number of Monte Carlo simulations increases. We introduce four different estimators: The first two are mean positions of the structure *U**m**e**a**n* and *V**m**e**a**n* (calculated based on coordinates *U* and *V* of all structures inside the rectangle number 13); The third one is the number of stars inside structure number 13 which was calculated as an averaged number of stars from the total number of Monte Carlo simulations (*N**M**C* runs); The last estimator is the percentage of structure detection inside the rectangle. Convergence is reached at around 1400 simulations (marked by grey background in Fig. [convergence]). We therefore chose to run 2000 simulations to have confident results. The position count map is useful for providing positions of structures. However, one cannot justify if the structures are independent, or are connected to other groups. Hence, another way to represent the results is shown in the bottom plot of Fig. [j3], and is the multi-resolution support for *N**M**C* simulations by displaying the quantity M$\_{\rm{tot}}$ defined as follows: *M**t**o**t*, *j*(*i**x*, *i**y*) = ∑*k* = 1*N**M**C**M**j**k*(*i**x*, *i**y*) Thus, if *M**t**o**t*, *j*(*i**x*, *i**y*) = *N**M**C*, it means that *w**j*(*i**x*, *i**y*) is significant for all the simulations. Conversely, if *M**t**o**t*, *j*(*i**x*, *i**y*) = 0, it means that *w**j*(*i**x*, *i**y*) is never significant. We explain in more details results that can be gained from Fig. [j3] in Sect. [sec:results]. [convergence] rlrrrrrr|cccc N & Name & *U* & *V* & Δ*U* & Δ*V* & $\frac{N\_d} {N\_{MC}}, \_{\%}$ & *N*\* & *S**N* & *B**S**N*& *D* & *T**D* (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) & (10) & (11) & (12) & 31533 & 24298 & 36439 & 11410 1 &Sirius &30 &  − 3 &3 &1 &31 &434 & +  & & +  & +  2 &Sirius &0 & 8 &7 &3 &154 &4821 & +  & +  & +  & +  3 &Sirius & − 11& 9 &3 &1 &83 &879 & +  & & +  & 4 &Coma B &9 &  − 12 &2 &2 &52 &1102 & & +  & & +  5 &Coma B & − 2 &  − 11 &3 &3 &70 &2753 & +  & +  & +  & 6 &Coma B & − 15&  − 7 &1 &1 &79 &673 & +  & +  & +  & 7 &Hyades & − 44&  − 18 &6 &2 &90 &2344 & +  & & +  & 8 &Pleiades & − 22&  − 23 &7 &3 &170 &4257 & +  & +  & +  & 9 &Wolf630+Dehnen98 &43 &  − 22 &9 &2 &168 &1777 & +  & & +  & 10 &Hercules & − 38&  − 49 &9 &3 &116 &1451 & +  & & +  & 11 &Hercules & − 16&  − 48 &2 &1 &22 &197 & +  & & +  & 12 &*γ*Leo &52 & 0 &1 &2 &27 &96 & +  & & +  & 13 &New &37 & 8 &2 &2 &74 &201 & +  & & +  & +  14 &Antoja12(15) &48 &  − 68 &1 &1 &6 &8 & +  & & +  & 15 &Antoja12(12) &94 &  − 13 &1 &1 &38 &10 & +  & & +  & 16 &Bobylev16 & − 94&  − 5 &1 &1 &17 &14 & +  & +  & +  & +  17 &*ε*Ind & − 88&  − 48 &2 &2 &12 &24 & +  & & & 18 &Unknown & − 86&  − 76 &2 &1 &8 &12 & +  & & & 19 &Unknown & − 18&  − 67 &1 &1 &22 &70 & +  & & +  & rlrrrrrr N & Name & *U* & *V* & Δ*U* & Δ*V* & $\frac{N\_d} {N\_{MC}}, \_{\%}$ & *N*\* (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) 1 & Sirius & 31 &  − 3 & 2& 1& 12 & 285 2 & Sirius & 17 & 0 & 1& 1& 10 & 377 3 & Sirius & 8 & 9 & 3& 2& 23 & 1105 4 & Sirius &  − 1 & 7 & 1& 1& 18 & 511 5 & Sirius &  − 11& 9 & 1& 1& 23 & 496 6 & Sirius &  − 3 & 2 & 1& 1& 7 & 459 7 & Coma B &  − 2 &  − 11& 1& 1& 7 & 377 8 & Coma B &  − 15&  − 8 & 3& 2& 26 & 1692 9 & Hyades &  − 43&  − 19& 8& 3& 108& 3392 10 & Hyades &  − 41&  − 11& 1& 1& 5 & 257 11 & Hyades &  − 33&  − 10& 1& 1& 7 & 243 12 & Pleiades &  − 19&  − 25& 3& 1& 30 & 1262 13 & Pleiades &  − 12&  − 22& 1& 3& 20 & 1181 14 & Pleiades &  − 6 &  − 24& 1& 1& 8 & 342 15 & Wolf 630 & 16 &  − 18& 2& 1& 6 & 430 16 & Wolf 630 & 22 &  − 21& 1& 1& 6 & 153 17 & Dehnen98 & 34 &  − 20& 2& 1& 6 & 252 18 & Dehnen98 & 45 &  − 22& 2& 1& 10 & 193 19 & Hercules &  − 36&  − 49& 2& 1& 9 & 178 20 & Hercules &  − 19&  − 51& 2& 2& 10 & 266 21 & *γ* Leo & 66 &  − 8 & 1& 1& 5 & 14 22 & *γ* Leo & 53 & 0 & 1& 1& 7 & 40 23 & *γ* Leo & 52 & 5 & 1& 1& 5 & 40 24 & New & 38 & 6 & 2& 2& 12 & 201 25 & Antoja12(15) & 49 &  − 69& 1& 1& 9 & 12 26 & Antoja12(15) & 63 &  − 64& 1& 1& 7 & 5 27 & Antoja12(15) & 83 &  − 62& 1& 1& 9 & 5 28 & Antoja12(15) & 56 &  − 48& 1& 1& 5 & 20 29 & Antoja12(12) & 88 &  − 32& 1& 1& 6 & 5 30 & Antoja12(12) & 93 &  − 13& 1& 1& 14 & 10 31 & Bobylev16 &  − 95&  − 6 & 1& 1& 8 & 11 32 & *ε*Ind &  − 89&  − 52& 1& 2& 7 & 20 1 &Sirius & − 3 &3 &2 &3 &97 &1767 2 &New &38 &7 &1 &1 &60 &106 3 &*γ*Leo &55 &2 &2 &1 &8 & 29 4 &Hercules & − 32 & − 48 &7 &1 &100&437 Results ======= In this section we present the detected structures in the *U* − *V* plane for the following samples: the full TGAS-RAVE sample, the sample split into a nearby and a distant sample, and two chemically defined samples that to a first degree represent the stars belonging to the Galactic thin and thick disks. Full sample ----------- Figure [uvall] shows detected structures in the *U* − *V* planes for three different scales, *J* = 2 (4-8 ${{\rm\,km\,s^{-1}}}$): 32 structures, *J* = 3 (8-16 ${{\rm\,km\,s^{-1}}}$): 19 structures, and *J* = 4 (16-32 ${{\rm\,km\,s^{-1}}}$): 4 structures. The *J* = 3 structures are listed in Table [groups], and the structures from the *J* = 2 and *J* = 4 scales in Table [groups2]. As can be seen, *J* = 3 appears to cover all the detected features, including smaller structures at *J* = 2, as well as larger groups at *J* = 4. Therefore, we will from now on consider *J* = 3 as the main scale since it covers the a range around the typical sizes of kinematic structures found in the Solar neighbourhood (both small- and big-scale structures), and secondly focus on *J* = 2 and *J* = 4 that covers even smaller and larger structures, respectively. The top plot of Fig. [j3] shows again the detected kinematic structures in the *U* − *V* plane for *J* = 3 (as in the Fig. [uvall] middle plot), but now with previously detected structures found in the literature () marked with blue crosses. Classical structures such as Sirius (structures number 1 − 3 in Fig. [j3]), Coma Berenices (structures 4 − 6), Hyades (structure 7), Pleiades (structure 8) and Hercules (structures 10 − 11), and some smaller structures like Wolf 630 (structure 9), Dehnen98 (structure 9), *γ*Leo (structure 12) can be easily recognised. They all have a comparably high percentage of detection (column 7 in Tab. [groups]) and big number of stars (column 7 in Tab. [groups]). The two structures from (structures 14 and 15) and one structure from (structure 16) are confirmed. We also present evidence for a new structure (number 13) that is detected with 74% significance. Structures 18 − 19 have low percentages of detection, less than15 %, and might be insignificant. In Sect. [sec:sec:individual] we will discuss how our results agree with those from the literature. The way the detected structures are split into groups is motivated with the bottom of Figure [j3] which shows the multi-resolution support obtained for *J* = 3 for all stars and 2000 MC simulations. In other words, this is the same plot as the top Figure [j3], but instead of structure counts we show multi-resolution support counts. This representation allows to see whether structures are bound or separated. Structures 1 − 3 seem to be connected and thus are united into Sirius stream. Group 5 is bound to structure 2 in the wavelet case. It should not be associated with the Sirius stream as its most significant part is located slightly aside Sirius, but lays on one line with structures 4 and 6, therefore grouping structures 4 − 6 into the Coma Berenices stream. Groups that have percentage detection higher than 100% (8, 9, 10) show a few distinct peaks in this plot, supporting the statement that these groups consist of a few smaller structures that overlap in the structure count map. Based on that we split group 9 into Wolf 630 (to the left) and Dehnen98 (to the right). Group 11 is a part of the Hercules stream. Structures 12-19 are not connected to other groups. Solar neighbourhood and beyond samples -------------------------------------- The detected structures are found in velocity space. The question is if they depend on the distance from the Sun? We divide the sample into a nearby Solar neighbourhood sample with 31533 stars that have distances *d* < 300pc (SN), and a beyond the Solar neighbourhood sample (BSN), with 24,298 stars that have *d* > 300pc (most distant star at 2kpc). Distance *d* = 300pc that is arbitrarily chosen to split the sample, is also a reasonable value, because it allows to have almost equal number of stars in both samples. Both samples are then independently analysed in the same way as for the full sample: applying the wavelet transform, filtering, and structure detection procedure for 2000 synthetic data samples. Figure [samples] shows the detected structures associate with the SN sample (top left plot), and the BSN sample (top right plot) for the scale *J* = 3. The rectangles mark the borders for the structures that were detected for the full sample (see Fig. [j3]). This allows an easier comparison how the shapes on kinematic structures change with the respect to the full sample. In Table [groups] we have indicated in columns 9 and 10 with “ + ” signs if the structure is present in SN and BSN samples. Almost of all of the full sample structures are observed in the SN except two weak Hyades peaks (groups 10, 11). So, the SN results almost completely reproduce the results from the full sample. For the BSN sample that has 7000 less stars than the SN sample, most of structures appear to have slightly changed their positions relative to the SN case. Similar result was obtained by where the structures detected in distant regions were shifted on the velocity plane. Hence, only 6 of 19 kinematic groups can be recognised: strong Sirius peaks 2, all Coma B peaks (4-6), Bobylev16 peak 16, Pleiades peak 8 is slightly shifted. In summary, it appears that some kinematic structures are located only in the SN sample as a few significant groups are not detected in the BSN sample at all (groups 1, 3, 7, 9-15, 17-19). These changes in the number of structures, their positions and shapes in the respect to distance can be due to a different numbers of stars that fall into SN and BSN samples with the SN sample containing 10000 stars more. The technique of wavelet analysis is sensitive to the number of stars in the initial sample, the more stars we have, the more realistic picture of structures we can get. Mean values of velocity uncertainties for two samples are also slightly different and are bigger in the case of the BSN sample: ⟨*σ**U*⟩*S**N* = 1.7, ⟨*σ**V*⟩*S**N* = 1.6 for the SN sample; ⟨*σ**U*⟩*B**S**N* = 2.5, ⟨*σ**V*⟩*B**S**N* = 2.2 for the BSN sample. So that for the BSN sample, which is at the same time smaller, velocity uncertainties are slightly higher and this can lead to some displacements of the structures. This issue can be investigated further with the availability of the *Gaia* DR2 in April 2018 which will provide precise astrometric parameters for 109 stars and first radial velocities for bright stars. Thin and thick disk structures ------------------------------ Several high-resolution spectroscopic studies of nearby stars have identified and characterised the thin and thick disks as distinct stellar populations, not only in terms of kinematics, but also in terms of elemental abundances and stellar ages. The two co-existing and largely overlapping disk populations points to a complex formation history for the Milky Way. A process which is currently is not well understood. The question is if we can gain further insights into the nature and origin of this two-disk structure from the kinematic structures seen in the Solar neighbourhood? As shown in and stellar ages appear to be the best discriminator between the thin and thick disks. However, stellar ages are not available for the stars in our sample. Another way would be to use kinematics, but as this is exactly the property that we want to investigate. Another approach which can reveal more features of kinematic group associated with the thin and thick disks is to use their chemical compositions. Several papers have shown that the two disks follow distinct and well separated abundance trends both in the Solar neighbourhood and also further away. All these studies show that thick disk stars, at a given metallicity, are more *α*-enhanced than thin disk stars. In this paper we do not perform any spectroscopic analysis of our structures. Instead we separate our stellar sample by magnesium [Mg/Fe] abundances provided by RAVE in order to study our sample in terms of thick (metal-poor) and thin (metal-rich) disks. In Fig. [kde] (see the last plot on the right-hand side on the bottom line) we show [Mg/Fe]-[Fe/H] diagram for the total RAVE sample of 47849 stars that have RAVE signal to noise ratio *S*/*N* > 40. The last limit is needed to get abundances with a higher precision (abundances uncertainties less that 0.2dex, ). A chemical separation of thick and thin disks with RAVE based on probability density approach has been done in and we define a thin disk sample (D) and a thick disk sample (TD) samples according to : thin disk $\rm [Mg/Fe]
arxiv_0000315
On a Heath-Jarrow-Morton approach for stock options =================================================== This paper aims at transferring the philosophy behind Heath-Jarrow-Morton to the modelling of call options with all strikes and maturities. Contrary to the approach by Carmona and Nadtochiy and related to the recent contribution by the same authors, the key parametrisation of our approach involves time-inhomogeneous Lévy processes instead of local volatility models. We provide necessary and sufficient conditions for absence of arbitrage. Moreover we discuss the construction of arbitrage-free models. Specifically, we prove their existence and uniqueness given basic building blocks. Keywords: Heath-Jarrow-Morton, option price surfaces, Lévy processes MSC subject classification (2010): 91B24, 91G20 JEL classification: G 12, G 13 Introduction ============ The traditional approach to modelling stock options takes the underlying as a starting point. If the dynamics of the stock are specified under a risk neutral measure for the whole market (i.e. all discounted asset price processes are martingales), then options prices are obtained as conditional expectations of their payoff. In reality, standard options as calls and puts are liquidly traded. If one wants to obtain vanilla option prices which are consistent with observed market values, special care has to be taken. A common and also theoretically reasonable way is calibration, i.e. to choose the parameters for the stock dynamics such that the model approximates market values sufficiently well. After a while, models typically have to be recalibrated, i.e. different parameters have to be chosen in order for model prices to be still consistent with market quotes. However, frequent recalibration is unsatisfactory from a theoretical point of view because model parameters are meant to be deterministic and constant. Its necessity indicates that the chosen class fails to describe the market consistently. In Markovian factor models with additional unobservable state variables, the situation is slightly more involved. Since these state variables are randomly changing within the model, they may be recalibrated, which means that their current values are inferred from option prices. In practice, however, the model parameters are often recalibrated as well because the few state variables do not provide enough flexibility to match observed option data. In this case, we are facing the same theoretically unsatisfactory situation as above. A possible way out is to model the whole surface of call options as a state variable, i.e.  as a family of primary assets in their own right. This alternative perspective is motivated from the Heath-Jarrow-Morton (HJM, see ) approach in interest rate theory. Rather than considering bonds as derivatives on the short rate, HJM treat the whole family of zero bonds or equivalently the forward rate curve as state variable in the first place. In the context of HJM-type approaches for stock options, Wissel and Schönbucher consider the case of a single strike, whereas Cont et al.  and Carmona and Nadtochiy allow for all strikes and maturities. Further important references in this context include Jacod and Protter, Schweizer and Wissel and Wissel. The HJM approach has been adapted to other asset classes, e.g.  credit models in Benanni, Schönbucher, or Sidenius et al.  and variance swaps in Bühler, cf.  Carmona for an overview and further references. Similar to Carmona and Nadtochiy we aim at modelling the whole call option price surface using the HJM methodology. However, our approach differs in the choice of the parametrisation or *codebook*, which constitutes a crucial step in HJM-type setups. By relying on time-inhomogeneous Lévy processes rather than Dupire’s local volatility models, we can avoid some intrinsic difficulties of the framework in. E.g., a simpler drift condition makes the approach more amenable to existence and uniqueness results. Moreover, the Lévy-based setup allows for jumps and may hence be more suitable to account particularly for short-term option prices, cf. . More recently and independently of the present study, Carmona and Nadtochiy have also put forward a HJM-type approach for the option price surface which is based on time-inhomogeneous Lévy processes. The similarities and differences of their and our approach are discussed in Section [Abschnitt: CNs Modell]. The paper is arranged as follows. We start in Section [Abschnitt: HJM philosophie] with an informal discussion of the HJM philosophy, as a motivation to its application to stock options. Section [Abschnitt: Notationen und Modelvorbereitung] provides necessary and sufficient conditions for an option surface model to be arbitrage-free or, more precisely, risk-neutral. Subsequently, we turn to existence and uniqueness of option surface models given basic building blocks. In particular, we provide a concrete example which turns out to be related to the stochastic volatility model proposed by Barndorff-Nielsen and Shephard in. Mathematical tools and some technical proofs are relegated to the appendix. Facts on seminartingale characteristics are summarised in Section [s:localchar]. The subsequent section concerns mainly option pricing by Fourier transform. In Section [s:frechetsde] we consider stochastic differential equations in Fréchet spaces, driven by subordinators. This framework is needed for existence and uniqueness results in Section [Abschnitt: Beispiele und Existenzresultate]. Notation -------- Re and Im denote the real resp. imaginary part of a complex vector in $\mathbb C^d$. We write [*a*, *b*] for the closed interval $\{x\in\mathbb R:a\leq x\leq b\}$, which is empty if *a* > *b*. We use the notations ∂*u* and *D* for partial and total derivatives, respectively. We often write $\beta{\stackrel{\mbox{\tiny$\bullet$}}{}}X\_t=\int\_0^t\beta\_sdX\_s$ for stochastic integrals. *L*(*X*) denotes the set of *X*-integrable predictable processes for a semimartingale *X*. If we talk about an *m* + *n*-dimensional semimartingale (*X*, *Y*), we mean that *X* is an $\mathbb R^m$-valued semimartingale and *Y* is an $\mathbb R^n$-valued semimartingale. For $u,v\in\mathbb C^d$ we denote the bilinear form of *u* and *v* by *u**v* :  = ∑*k* = 1*d**u**k**v**k*. The abbreviation *PII* stands for processes with independent increments in the sense of. Further unexplained notation is used as in. Heath-Jarrow-Morton and Lévy models =================================== This section provides an informal discussion of the HJM philosophy and its application to stock options. The Heath-Jarrow-Morton philosophy ---------------------------------- According to the fundamental theorem of asset pricing, there exists at least one equivalent probability measure that turns discounted prices of all traded securities into martingales or, more precisely, into *σ*-martingales. For simplicity we take the point of view of risk-neutral modelling in this paper, i.e. we specify the dynamics of all assets in the market directly under such an equivalent martingale measure (EMM). Moreover, we assume the existence of a deterministic bank account unless we refer to the original HJM setup in interest rate theory. This allows us to express all prices easily in discounted terms. Before we turn to our concrete setup, we want to highlight key features of the HJM approach in general. For more background and examples we refer the reader to the brilliant exposition from Carmona, which greatly inspired our work. We proceed by stating seven informal axioms or steps. 1. In HJM-type setups there typically exists a canonical underlying asset or reference process, namely the money market account in interest rate theory or the stock in the present paper. The object of interest, namely bonds in interest rate theory or vanilla options in stock markets, can be interpreted as derivatives on the canonical process. HJM-type approaches typically focus on a whole manifold of such — at least in theory — liquidly traded derivatives, e.g. the one-parameter manifold of bonds with all maturities or the two-parameter manifold of call options with all strikes and maturities. As first and probably most important HJM axiom we claim that this manifold of liquid derivatives is to be treated as the set of primary assets. It — rather than the canonical reference asset — constitutes the object whose dynamics should be modelled in the first place. Zero bonds are securities with terminal value 1 and appear to be somewhat degenerate derivatives. But as noted above, we consider discounted prices in this paper. If the money market account *S*0 is chosen as numeraire, the discounted payoff of a bond maturing at *T* is of the form 1/*S**T*0 and hence a function of *S*0. In this broader sense, we view bonds here as derivatives on the money market account. European call options on a stock *S* with maturity *T* and strike *K* have a payoff of the form (*S**T* − *K*)+. The same is true for their discounted payoff relative to a deterministic numeraire, provided that *S* and *K* are replaced by their discounted analogues as well. 2. The first axiom immediately leads to the second one: do *not* model the canonical reference asset in detail under the market’s risk-neutral measure. Indeed, otherwise all derivative prices would be entirely determined by their martingale property, leaving no room for a specification of their dynamics. 3. Direct modelling of the above manifold typically leads to awkward constraints. Zero bond price processes must terminate in 1, vanilla options in their respective payoff. Rather than prices themselves one should therefore consider a convenient parametrisation (or *codebook* in the language of Carmona ), e.g.  instantaneous forward rates in interest rate theory. Specifying the dynamics of this codebook leads immediately to a model for the manifold of primary assets. If the codebook is properly chosen, then static arbitrage constraints are satisfied automatically, cf. Steps [i:4] and [i:5]. 4. [i:4] It is generally understood that choosing a convenient parametrisation constitutes a crucial step for a successful HJM-type approach. This is particularly obvious in the context of call options. Their prices are linked by a number of non-trivial static arbitrage constraints, which must hold independently of any particular model, cf.  Davis and Hobson. These static constraints have to be respected by any codebook dynamics. Specifying the latter properly may therefore be a difficult task unless the codebook is chosen such that the constraints naturally hold. We now suggest a way how to come up with a reasonable parametrisation. The starting point is a family of simple risk-neutral models for the canonical underlying whose parameter space has — loosely speaking — the same “dimension” or “size” as the space of liquid derivative manifolds. Provided sufficient regularity holds, the presently observed manifold of derivative prices is explained by one and only one of these models. [Bsp: HJM] In interest rate theory consider bank accounts of the form $$\label{e:bankkonto} S^0\_t=\exp\bigg(\int\_0^tr(s)ds\bigg)$$ with deterministic short rate $r(T),T\in\mathbb R\_+$. Fix $t\in\mathbb R\_+$ and a differentiable curve of bond prices *B*(*t*, *T*), $T\in\mathbb R\_+$. Except for the past up to time *t*, the observed bond prices are consistent with one and only one of these models, namely for $$\begin{aligned} r(T):=-\partial\_T\log(B(t,T)).\label{Gleichung: HJM Ausgangspunkt}\end{aligned}$$ Consider Dupire’s local volatility models *d**S**t* = *S**t**σ*(*S**t*, *t*)*d**W**t* for a discounted stock, where *W* denotes standard Brownian motion and $\sigma:\mathbb R\_+^2\rightarrow\mathbb R$ a deterministic function. Up to regularity, any surface of discounted call option prices *C**t*(*T*, *K*) with varying maturity *T* and strike *K* and fixed current date $t\in\mathbb R\_+$ is obtained by one and only one local volatility function *σ*, namely by $$\begin{aligned} \sigma^2(K,T):=\frac{2\partial\_TC\_t(T,K)}{K^2\partial\_{KK}C\_t(T,K)}.\label{Gleichung: Carmona Ausgangspunkt}\end{aligned}$$ Note that the above starting point should not be taken as a precise mathematical requirement. As illustrated in the examples, we relate “size” liberally to the number of arguments in the respective functions: parameter curves correspond to bond price curves, parameter surfaces to option price surfaces. The actual regularity needed for one-to-one correspondence crucially depends on the chosen family of simple models. If market data follows the simple model, the parameter manifold, e.g. (*r*(*T*))*T* ≥ *t* in Example [Bsp: HJM], is deterministic and does not depend on the time *t* when derivative prices are observed. Generally, however, market data does not follow such a simple model as in the two examples. Hence, evaluation of the right-hand side of and leads to a parameter manifold which changes randomly over time. The *instantaneous forward rate* curve *f*(*t*, *T*) :  =  − ∂*T*log(*B*(*t*, *T*)),  *T* ≥ *t* for fixed $t\in\mathbb R\_+$ can be interpreted as the family of deterministic short rates that is consistent with the presently observed bond price curve *B*(*t*, *T*), *T* ≥ *t*. The *implied local volatility* $$\begin{aligned} \sigma^2(K,T):=\frac{2\partial\_TC\_t(T,K)}{K^2\partial\_{KK}C\_t(T,K)},\quad K>0,\quad T\geq t\end{aligned}$$ for fixed $t\in\mathbb R\_+$ can be interpreted as the unique local volatility function that is consistent with the presently observed discounted call prices *C**t*(*T*, *K*), *T* ≥ *t*, *K* > 0. The idea now is to take this present parameter manifold as a parametrisation or codebook for the manifold of derivatives. 5. [i:5] In a next step, we set this parameter manifold “in motion.” We consider the codebook, e.g. the instantaneous forward rate curve *f*(*t*, *T*) or the implied local volatility *σ**t*(*T*, *K*), as an infinite-dimensional stochastic process. It is typically modelled by a stochastic differential equation, e.g. *d**f*(*t*, *T*) = *α*(*t*, *T*)*d**t* + *β*(*t*, *T*)*d**W**t*,  where *W* denotes standard Brownian motion. As long as the solution to this equation moves within the parameter space for the family of simple models, one automatically obtains derivative prices that satisfy any static arbitrage constraints. Indeed, since the current bond prices resp. call prices coincide with the prices from an arbitrage-free model, they cannot violate any such constraints, however complicated they might be. This automatic absence of static arbitrage motivates the codebook choice in Step 4. 6. Absence of static arbitrage does not imply absence of arbitrage altogether. Under the risk-neutral modelling paradigm, all discounted assets must be martingales. In interest rate theory this leads to the well known HJM drift condition. More generally it means that the drift part of the codebook dynamics of Step 5 is determined by its diffusive component. 7. Finally we come back to Step 2. The dynamics of the canonical reference asset process is typically implied by the current state of the codebook. E.g. in interest rate theory the short rate is determined by the so-called consistency condition *r*(*t*) = *f*(*t*, *t*). Similar conditions determine the current stock volatility in. Time-inhomogeneous Lévy models ------------------------------ According to the above interpretation, the approach of to option surface modelling relies on the family of Dupire’s local volatility models. Similarly as the independent study, we suggest another family of simple models for the stock, also relying on a two-parameter manifold. To this end, suppose that the discounted stock is a martingale of the form *S* = *e**X*, where the *return process* *X* denotes a process with independent increments (or time-inhomogeneous Lévy process, henceforth PII) on some filtered probability space $(\Omega,{\scr F},({\scr F}\_{t})\_{t\in\mathbb R\_+},P)$. Recall that we work with risk-neutral probabilities, i.e. discounted asset prices are supposed to be *P*-martingales. More specifically, the characteristic function of *X* is assumed to be absolutely continuous in time, i.e. $$\label{Gleichung: 2.3} E(e^{iuX\_t})=\exp\bigg(iuX\_0+\int\_0^t\Psi(s,u)ds\bigg)$$ with some function $\Psi:\mathbb R\_+\times\mathbb R\rightarrow \mathbb C$. We assume call options of all strikes and maturities to be liquidly traded. Specifically, we write *C**t*(*T*, *K*) for the discounted price at time *t* of a call which expires at *T* with discounted strike *K*. A slight extension of shows that option prices can be expressed in terms of Ψ. To this end, we define *modified option prices* $$\scr O\_t(T,x):=e^{-(x+X\_t)}C\_t(T,e^{x+X\_t})-(e^{-x}-1)^+.$$ Since call option prices are obtained from $C\_t(T,K)=E((S\_T-K)^+\vert{\scr F}\_t)$, by call-put parity, and by $E(S\_T\vert{\scr F}\_t)=S\_t$, we have $$\scr O\_t(T,x)=\bigg\{\begin{array}{cc} E((e^{(X\_T-X\_t)-x}-1)^+\vert{\scr F}\_t) & \mathrm{if}\ x\geq0,\\ E((1-e^{(X\_T-X\_t)-x})^+\vert{\scr F}\_t) & \mathrm{if}\ x<0.\end{array}$$ Proposition [Proposition: Fourier transformierte der Call-Preise] yields $$\begin{aligned} \scr O\_t(T,x)&=&{\scr F}^{-1}\bigg\{u\mapsto\frac{1-E(e^{iu(X\_T-X\_t)}\vert{\scr F}\_t)}{u^2+iu}\bigg\}(x), \label{Gleichung: Levy-mod-optionen} \\ {\scr F}\{x\mapsto\scr O\_t(T,x)\}(u)&=&\frac{1-E(e^{iu(X\_T-X\_t)}\vert{\scr F}\_t)}{u^2+iu}\end{aligned}$$ where ${\scr F}^{-1}$ and ${\scr F}$ denote the improper inverse Fourier transform and the improper Fourier transform, respectively, in the sense of ([Gleichung: Fourier Transformation], [Gleichung: Inverse Fourier Transformation]) in Section [Abschnitt: Fourier transformation und Optionspreise] of the appendix. Since $$C\_t(T,K) = (S\_t-K)^++K\scr O\_t\bigg(T,\log\frac{K}{S\_t}\bigg)\label{Gleichung: Levy-Optionen}$$ and $$E(e^{iu(X\_T-X\_t)}\vert {\scr F}\_t) = \exp\bigg(\int\_t^T\Psi(s,u)ds\bigg),\label{Gleichung Levy char. exponent}$$ we can compute option prices according to the following diagram: $$\Psi \rightarrow \exp\bigg(\int\_t^T\Psi(s,\cdot)ds\bigg) \rightarrow \scr O\_t(T,\cdot) \rightarrow C\_t(T,\cdot).$$ For the last step we also need the present stock price *S**t*. Under sufficient smoothness we can invert all transformations. Indeed, we have $$\begin{aligned} \Psi(T,u)=\partial\_T\log\bigg(1-(u^2+iu){\scr F}\{x\mapsto\scr O\_t(T,x)\}(u)\bigg)\label{Gleichung: Idee fuer Psi}.\end{aligned}$$ Hence we obtain option prices from Ψ and vice versa as long as we know the present stock price. Setting Lévy in motion ---------------------- Generally we do not assume that the *return process* $$\begin{aligned} X:=\log(S)\label{Gleichung: X} \end{aligned}$$ follows a time-inhomogeneous Lévy process. Hence the right-hand side of Equation will typically change randomly over time. In line with Step [i:4] above, we define modified option prices $$\begin{aligned} \scr O\_t(T,x) & := & e^{-(x+X\_t)}C\_t(T,e^{x+X\_t})-(e^{-x}-1)^+\label{Gleichung: O ist gleich}\end{aligned}$$ as before and $$\begin{aligned} \Psi\_t(T,u) & := & \partial\_T\log\bigg(1-(u^2+iu){\scr F}\{x\mapsto\scr O\_t(T,x)\}(u)\bigg).\label{Gleichung: Psi ist gleich}\end{aligned}$$ This constitutes our *codebook process* for the surface of discounted option prices. As in Section [Abschnitt: ZeitinhomogeneLevys] the asset price processes *S* and *C*(*T*, *K*) can be recovered from *X* and Ψ(*T*, *u*) via $$\begin{aligned} S & = & \exp(X),\\ \scr O\_t(T,x) & = & {\scr F}^{-1}\bigg\{u\mapsto\frac{1-\exp(\int\_t^T\Psi\_t(s,u)ds)}{u^2+iu}\bigg\}(x),\\ C\_t(T,K) & = & (S\_t-K)^++K\scr O\_t\bigg(T,\log\frac{K}{S\_t}\bigg).\end{aligned}$$ In the remainder of this paper we assume that the infinite-dimensional codebook process satisfies an equation of the form $$\begin{aligned} \label{Gleichung:neu} d\Psi\_t(T,u)=\alpha\_t(T,u)dt+\beta\_t(T,u)dM\_t,\end{aligned}$$ driven by some finite-dimensional semimartingale *M*. Model setup and risk neutrality =============================== As before we fix a filtered probability space $(\Omega,{\scr F},({\scr F}\_t)\_{t\in\mathbb R\_+},P)$ with trivial initial *σ*-field ${\scr F}\_0$. In this section we single out conditions such that a given pair (*X*, Ψ) corresponds via ([Gleichung: X] – [Gleichung: Psi ist gleich]) to a risk-neutral model for the stock and its call options. Option surface models --------------------- We denote by Π the set of characteristic exponents of Lévy processes *L* such that *E*(*e**L*1) = 1. More precisely, Π contains all functions $\psi:\mathbb R\rightarrow\mathbb C$ of the form $$\psi(u)=-\frac{u^2+iu}{2}c+\int(e^{iux}-1-iu(e^x-1))K(dx),$$ where $c\in\mathbb R\_+$ and *K* denotes a Lévy measure on $\mathbb R$ satisfying ∫{*x* > 1}*e**x**K*(*d**x*) < ∞. [Def: osm] A quintuple (*X*, Ψ0, *α*, *β*, *M*) is an *option surface model* if * (*X*, *M*) is a 1 + *d*-dimensional semimartingale that allows for local characteristics in the sense of Section [su:lc], * $\Psi\_0:\mathbb R\_+\times\mathbb R\rightarrow\mathbb C$ is measurable with ∫0*T*|Ψ0(*r*, *u*)|*d**r* < ∞ for any $T\in\mathbb R\_+$, $u\in\mathbb R,$ * *α*(*T*, *u*), *β*(*T*, *u*) are $\mathbb C$- resp. $\mathbb C^d$-valued processes for any $T\in\mathbb R\_+$, $u\in\mathbb R$, * (*ω*, *t*, *T*, *u*) ↦ *α**t*(*T*, *u*)(*ω*), *β**t*(*T*, *u*)(*ω*) are $\scr P\otimes{\scr B}(\mathbb R\_+)\otimes{\scr B}$-measurable, where $\scr P$ denotes the predictable *σ*-field on $\Omega\times\mathbb R\_+$, * ∫0*t*∫0*T*|*α**s*(*r*, *u*)|*d**r**d**s* < ∞ for any $t,T\in\mathbb R\_+$, $u\in\mathbb R$, * ∫0*T*∣*β**t*(*r*, *u*)∣2*d**r* < ∞ for any $t,T\in\mathbb R\_+$, $u\in\mathbb R$, * $((\int\_0^T|\beta^j\_{t}(r,u)|^2dr)^{1/2})\_{t\in\mathbb R\_+}\in L(M^j)$ for any fixed $T\in\mathbb R\_+$, $u\in\mathbb R$, *j* ∈ {1, …, *d*}, where the set *L*(*M**j*) of *M**j*-integrable processes is defined as in, * a version of the corresponding *codebook process* $$\begin{aligned} \label{Gleichung:neu2} \Psi\_t(T,u):=\Psi\_0(T,u)+\int\_0^{t\wedge T}\alpha\_s(T,u)ds+\int\_0^{t\wedge T}\beta\_s(T,u)dM\_s\end{aligned}$$ has the following properties: 1. (*ω*, *t*, *T*, *u*) ↦ Ψ*t*(*T*, *u*)(*ω*) is $\scr A\otimes{\scr B}(\mathbb R\_+)\otimes{\scr B}$-measurable, where $\scr A$ denotes the optional *σ*-field on $\Omega\times\mathbb R\_+$, 2. *u* ↦ ∫*t**T*Ψ*s*(*r*, *u*)(*ω*)*d**r* is in Π for any $T\in\mathbb R\_+$, *t* ∈ [0, *T*], *s* ∈ [0, *t*], *ω* ∈ Ω. The square-integrability conditions on *β* are imposed only to warrant $$\begin{aligned} \int\_0^T\left\vert\int\_0^t\beta\_s(r,u)dM\_s\right\vert dr &<&\infty\quad\mbox{and}\label{e:intbeta1}\\ \int\_0^t\int\_0^T\beta\_s(r,u)drdM\_s &=&\int\_0^T\int\_0^t\beta\_s(r,u)dM\_sdr. \label{e:intbeta2}\end{aligned}$$ If *M* has increasing components, we can and do replace the integrability conditions on *β* by the weaker requirement * ∑*j* = 1*d*∫0*t*∫0*T*|*β**s**j*(*r*, *u*)|*d**r**d**M**s**j* < ∞ for any $t,T\in\mathbb R\_+$, $u\in\mathbb R$, which implies ([e:intbeta1], [e:intbeta2]) by Fubini’s theorem. We denote the local exponents of (*X*, *M*), *X* by *ψ*(*X*, *M*), *ψ**X* and their domains by $\scr U^{(X,M)}, \scr U^X$, cf. Definitions [Definition: lokaler Exponent 1. Version] and [Definition: lokaler Exponent]. In line with Section [Abschnitt: Levy in Bewegung], the discounted stock and call price processes associated with an option surface model are defined by $$\begin{aligned} S\_t & := & \exp(X\_t),\\ \scr O\_t(T,x) & := & {\scr F}^{-1}\left\{u\mapsto\frac{1-\exp\big(\int\_t^T\Psi\_t(r,u)dr\big)}{u^2+iu}\right\}(x), \label{Gleichung: O als Fourier Transformierte}\\ C\_t(T,K) & := & (S\_t-K)^++K\scr O\_t\bigg(T,\log\frac{K}{S\_t}\bigg)\label{Gleichung: Optionen sind}\end{aligned}$$ for any $T\in\mathbb R\_+$, *t* ∈ [0, *T*], $x\in\mathbb R$, $K\in\mathbb R\_+$, where $\scr F^{-1}$ denotes the improper inverse Fourier transform in the sense of Section [Abschnitt: Fourier transformation und Optionspreise]. From ([Gleichung:neu2]) it follows that Ψ*t*(*T*, *u*) = Ψ*T*(*T*, *u*) for *T* < *t*. By ([Gleichung: O als Fourier Transformierte], [Gleichung: Optionen sind]) this part Ψ*t*(*T*, *u*), *T* < *t* of the codebook does not affect option prices and is hence irrelevant. [Bemerkung: Vergleich zu PII] The existence of these processes is implied by the assumptions above. Indeed, by Fubini’s theorem for ordinary and stochastic integrals, we have ∫0*T*|Ψ*t*(*r*, *u*)|*d**r* < ∞. Fix *ω* ∈ Ω. Since *u* ↦ ∫*t**T*Ψ*t*(*r*, *u*)(*ω*)*d**r* ∈ Π, there is a random variable *Y* on some probability space $(\widetilde\Omega,\widetilde{\scr F},\widetilde P)$ which has infinitely divisible distribution and characteristic function $$\widetilde E(e^{iuY})=\exp\left(\int\_t^T\Psi\_t(r,u)(\omega)dr\right),\quad u\in\mathbb R.$$ Since this function is in Π, we have $\widetilde E(e^Y)=1$. Thus Proposition [Proposition: Fourier transformierte der Call-Preise] yields the existence of the inverse Fourier transform in Equation. Moreover, it implies $C\_t(T,K)(\omega)=\widetilde E((S\_t(\omega)e^Y-K)^+)$ and thus we have 0 ≤ *C**t*(*T*, *K*)(*ω*) ≤ *S**t*(*ω*) and 0 ≤ *P**t*(*T*, *K*)(*ω*) ≤ *K* for any $K\in\mathbb R\_+$, $T\in\mathbb R\_+$, *t* ∈ [0, *T*], where *P**t*(*T*, *K*) :  = *C**t*(*T*, *K*) + *K* − *S**t* for any $K\in\mathbb R\_+$, $T\in\mathbb R\_+$, *t* ∈ [0, *T*]. As noted above, we model asset prices under a risk-neutral measure for the whole market. Put differently, we are interested in risk-neutral option surface models in the following sense. An option surface model (*X*, Ψ0, *α*, *β*, *M*) is called *risk neutral* if the corresponding stock *S* and all European call options *C*(*T*, *K*), $T\in\mathbb R\_+$, *K* > 0 are *σ*-martingales or, equivalently, local martingales (cf. ). It is called *strongly risk neutral* if *S* and all *C*(*T*, *K*) are martingales. Below, risk-neutral option surface models are characterized in terms of the following properties. An option surface model (*X*, Ψ0, *α*, *β*, *M*) satisfies the *consistency condition* if $$\psi^X\_t(u)=\Psi\_{t-}(t,u),\quad u\in \mathbb R$$ outside some *d**P* ⊗ *d**t*-null set. Moreover, it satisfies the *drift condition* if $$\bigg(u,-i\int\_t^T\beta\_t(r,u)dr\bigg)\_{t\in\mathbb R\_+}\in \scr U^{(X,M)}$$ and $$\begin{aligned} \label{Gleichung: Driftbedingung} \int\_t^T\alpha\_t(r,u)dr=\psi^X\_t(u)-\psi\_t^{(X,M)}\bigg(u,-i\int\_t^T\beta\_t(r,u)dr\bigg)\end{aligned}$$ outside some *d**P* ⊗ *d**t*-null set for any $T\in\mathbb R\_+$, $u\in\mathbb R$. Finally, the option surface model satisfies the *conditional expectation condition* if $$\exp\left(\int\_t^T\Psi\_t(r,u)dr\right)=E(e^{iu(X\_T-X\_t)}\vert{\scr F}\_t)$$ for any $T\in\mathbb R\_+$, *t* ∈ [0, *T*], $u\in\mathbb R$. [Bemerkung: abgeleitete Driftbedingung] The drift condition can be rewritten as $$\begin{aligned} \label{G:Driftbedingung, einf.} \alpha\_t(T,u)=-\partial\_T\left(\psi\_t^{(X,M)}\bigg(u,-i\int\_t^T\beta\_t(r,u)dr\bigg)\right) \end{aligned}$$ for almost all $T\in\mathbb R\_+$. It gets even simpler if *X* and *M* are assumed to be locally independent in the sense of Definition [Definition: lokale unabh]: $$\begin{aligned} \label{G:Driftbedingung, ganz einf.} \alpha\_t(T,u)=-\partial\_T\left(\psi\_t^M\bigg(-i\int\_t^T\beta\_t(r,u)dr\bigg)\right). \end{aligned}$$ If the derivative *ψ**t*′(*u*) :  = ∂*u**ψ**t**M*(*u*) exists as well, the drift condition simplifies once more and turns into $$\alpha\_t(T,u)=i\psi\_t^\prime\bigg(-i\int\_t^T\beta\_t(r,u)dr\bigg)\beta\_t(T,u).$$ Now consider the situation that *M* is a one-dimensional Brownian motion which is locally independent of the return process *X*. Then *ψ**M*(*u*) =  − *u*2/2 and the drift condition reads as *α**t*(*T*, *u*) =  − *β**t*(*T*, *u*)∫*t**T**β**t*(*r*, *u*)*d**r*. Thus the drift condition for option surface models is similar to the HJM drift condition (cf. ). Drift condition seems to rely on the joint exponent of *X* and *M*. However, suggests that only partial knowledge about *X* is needed. It is in fact sufficient to specify the joint exponent *ψ*(*X*∥, *M*) of *M* and the dependent part *X*∥ of *X* relative to *M*, which is defined in Section [Abschnitt: Semimartingalprojektion]. Using this notion, Equation can be rewritten as $$\label{e:letztedrift} \alpha\_t(T,u)=-\partial\_T\left(\psi\_t^{({X^\parallel},M)}\bigg(u,-i\int\_t^T\beta\_t(r,u)dr\bigg)\right)$$ because *ψ*(*X*, *M*) = *ψ*(*X*⊥, 0) + *ψ*(*X*∥, *M*) and the first summand on the right-hand side does not depend on its second argument. Necessary and sufficient conditions ----------------------------------- The goal of this section is to prove the following characterisation of risk-neutral option surface models. [Satz: Bedingungen impl. Vollstaendig] For any option surface model (*X*, Ψ0, *α*, *β*, *M*) the following statements are equivalent. 1. It is strongly risk neutral. 2. It is risk neutral. 3. It satisfies the conditional expectation condition. 4. It satisfies the consistency and drift conditions. The remainder of this section is devoted to the proof of Theorem [Satz: Bedingungen impl. Vollstaendig]. We proceed according to the following scheme (1) ⇒ (2) ⇒ (3) ⇒ (4) ⇒ (3) ⇒ (1). We use the notation $$\begin{aligned} \delta\_t(T,u) & := & \int\_t^T\alpha\_t(r,u)dr-\psi\_t^X(u),\\ \sigma\_t(T,u) & := & \int\_t^T\beta\_t(r,u)dr,\\ \Gamma\_t(T,u) & := & \int\_0^T\Psi\_0(r,u)dr+\int\_0^t\delta\_s(T,u)ds+\int\_0^t\sigma\_s(T,u)dM\_s.\end{aligned}$$ The existence of the integrals above is implied by the condition for option surface models. Observe that Γ(*T*, *u*) is a semimartingale. [Lemma: Gammazerlegung] For any $u\in\mathbb R,T\in\mathbb R\_+,t\in[0,T]$ we have Γ*t*(*T*, *u*) − Γ*t*(*t*, *u*) = ∫*t**T*Ψ*t*(*r*, *u*)*d**r*. Using the definition of Γ, *δ*, *σ* and applying Fubini’s theorem as in yields $$\begin{aligned} \Gamma\_t(T,u)-\Gamma\_t(t,u) & = & \int\_t^T\Psi\_0(r,u)dr+\int\_t^T\int\_0^t\alpha\_s(r,u)dsdr\\ &&{}+\int\_t^T\int\_0^t\beta\_s(r,u)dM\_sdr\\ & = & \int\_t^T\Psi\_t(r,u)dr.\end{aligned}$$ [Lemma: Gammas querdrift] For any $u\in\mathbb R$, $t\in\mathbb R\_+$ we have Γ*t*(*t*, *u*) = ∫0*t*(Ψ*s* −(*s*, *u*) − *ψ**s**X*(*u*))*d**s*. By Fubini’s theorem for ordinary and stochastic integrals we have $$\begin{aligned} \Gamma\_t(t,u) & = & \int\_0^t\Psi\_0(r,u)dr+\int\_0^t\delta\_s(t,u)ds+\int\_0^t\sigma\_s(t,u)dM\_s\\ &=& \int\_0^t\bigg(\Psi\_0(r,u)+\int\_0^r\alpha\_s(r,u)ds-\psi^X\_r(u)\bigg)dr+\int\_0^t\int\_0^r\beta\_s(r,u)dM\_sdr.\end{aligned}$$ This yields the claim. [Lemma: Bedingte charackteristische Funktion] If (*X*, Ψ0, *α*, *β*, *M*) is risk neutral, then it satisfies the conditional expectation condition. Let $T\in \mathbb R\_+$. We define $$O\_t(T,x):=\bigg\{\begin{array}{cc}e^{-x}C\_t(T,e^x) & \mathrm{if}\ x\geq0,\\e^{-x}P\_t(T,e^x) & \mathrm{if}\ x<0,\end{array}$$ where *P**t*(*T*, *K*) :  = *C**t*(*T*, *K*) + *K* − *S**t* for any $K\in\mathbb R\_+$, $t\in[0,T],x\in\mathbb R$. Then we have $$O\_t(T,x)=\bigg\{\begin{array}{cc}(e^{X\_t-x}-1)^++\scr O\_t(T,x-X\_t) & \mathrm{if}\ x\geq0,\\(1-e^{X\_t-x})^++\scr O\_t(T,x-X\_t) & \mathrm{if}\ x<0.\end{array}$$ We calculate the Fourier transform of *O**t*(*T*, *x*) in two steps by considering the summands separately. The improper Fourier transform of the second summand $\scr O\_t(T,x-X\_t)$ exists and satisfies $$\begin{aligned} {\scr F}\{x\mapsto\scr O\_t(T,x-X\_t)\}(u)&=&{\scr F}\{x\mapsto\scr O\_t(T,x)\}(u)e^{iuX\_t}\\ &=& \frac{1-\exp\left(\int\_t^T\Psi\_t(r,u)dr\right)}{u^2+iu}e^{iuX\_t}\end{aligned}$$ for any $u\in\mathbb R\setminus\{0\}$ by Remark [Bemerkung: Vergleich zu PII], Proposition [Proposition: Fourier transformierte der Call-Preise] and the translation property for the Fourier transform, which holds for the improper Fourier transform as well. The Fourier transform of the first summand $A\_t(T,x):=O\_t(T,x)-\scr O\_t(T,x-X\_t)$ exists and equals $${\scr F}\{x\mapsto A\_t(T,x)\}(u)=\frac{1}{iu}-\frac{e^{X\_t}}{iu-1}-\frac{e^{iuX\_t}}{u^2+iu}$$ for any $u\in\mathbb R\setminus\{0\}$. Therefore the improper Fourier transform of *x* ↦ *O**t*(*T*, *x*) exists and is given by $${\scr F}\{x\mapsto O\_t(T,x)\}(u)=\frac{1}{iu}-\frac{e^{X\_t}}{iu-1}- \frac{\exp(iuX\_t+\int\_t^T\Psi\_t(r,u)dr)}{u^2+iu},\label{Gleichung: Optionen Transformiert}$$ for any $u\in\mathbb R\setminus\{0\}$. By Lemmas [Lemma: Gammazerlegung] and [Lemma: Gammas querdrift] we have that the right-hand side of is a semimartingale, in particular it has càdlàg paths. Remark [Bemerkung: Vergleich zu PII] yields that 0 ≤ *P**t*(*T*, *K*) ≤ *K*. Hence (*P**t*(*T*, *K*))*t* ∈ [0, *T*] is a martingale because it is a bounded local martingale. Let $(\tau\_n)\_{n\in\mathbb N}$ denote a common localising sequence for (*C**t*(*T*, 1))*t* ∈ [0, *T*] and *S*, i.e. *S**τ**n*, *C**τ**n*(*T*, 1) are uniformly integrable martingales for any $n\in\mathbb N$. Since *C**t**τ**n*(*T*, *K*) ≤ *C**t**τ**n*(*T*, 1) for *K* ∈ [1, ∞), we have that $(\tau\_n)\_{n\in\mathbb N}$ is a common localising sequence for all European calls with maturity *T* and strike *K* ≥ 1. The definition of *O**t*(*T*, *x*) yields that it is a local martingale for any $x\in\mathbb R$ and $(\tau\_n)\_{n\in\mathbb N}$ is a common localising sequence for $(O\_t(T,x))\_{t\in[0,T]},x\in\mathbb R$. Fix *ω* ∈ Ω. Since *u* ↦ ∫*t**T*Ψ*t*(*r*, *u*)(*ω*)*d**r* is in Π for any *t* ∈ [0, *T*], there is a random variable *Y* on some space $(\widetilde\Omega,\widetilde{\scr F},\widetilde P)$ with characteristic function $$\widetilde E(e^{iuY})=\exp\left(\int\_t^T\Psi\_t(r,u)(\omega)dr\right).$$ Then $\widetilde E (e^Y)=1$ and $$O\_t(T,x)(\omega)=\bigg\{\begin{array}{cc}\widetilde E( (S\_t(\omega)e^{Y-x}-1)^+) & \mathrm{if}\ x\geq0,\\\widetilde E(1-S\_t(\omega)e^{Y-x})^+) & \mathrm{if}\ x<0,\end{array}$$ cf. Remark [Bemerkung: Vergleich zu PII]. By Corollary [Korollar: Abschaetzung der Fourier transformierten] we have $\vert\int\_{-C}^\infty e^{iux}O\_t(T,x)dx\vert\leq S\_t(\omega)+\frac{1+2\vert u\vert}{u^2}$. Proposition [Proposition: Fourier Transformation von Martingalen] yields that $$\left({\scr F}\{x\mapsto O\_t(T,x)\}(u)\right)\_{t\in[0,T]}$$ and hence (Φ*t*(*u*))*t* ∈ [0, *T*] given by $$\Phi:\Omega\times[0,T]\times\mathbb R\rightarrow\mathbb C,\quad (\omega,t,u)\mapsto\exp\left(iuX\_t(\omega)+\int\_t^T\Psi\_t(r,u)(\omega)dr\right)$$ are local martingales for any $u\in\mathbb R\setminus\{0\}$. Since *u* ↦ ∫*t**T*Ψ*t*(*r*, *u*)(*ω*)*d**r* is in Π for any *t* ∈ [0, *T*], *ω* ∈ Ω, its real part is bounded by 0 from above, cf. Lemma [Lemma: Negativitaet von LKM]. Hence |Φ*t*(*u*)| ≤ 1 and thus (*ω*, *t*) ↦ Φ*t*(*u*)(*ω*) is a true martingale for any $u\in\mathbb R\setminus\{0\}$. By Φ*t*(0) = 1 it is a martingale for *u* = 0 as well. Since Φ*T*(*u*) = exp(*i**u**X**T*), the two martingales (Φ*t*(*u*))*t* ∈ [0, *T*] and $\left(E(\exp(iuX\_T)\vert{\scr F}\_t)\right)\_{t\in[0,T]}$ coincide for any $u\in\mathbb R$. Thus we have $$\exp\bigg(\int\_t^T\Psi\_t(r,u)dr\bigg)=\exp(-iuX\_t)\Phi\_t(u)=E(e^{iu(X\_T-X\_t)}\vert{\scr F}\_t)$$ for any $u\in\mathbb R$, *t* ∈ [0, *T*]. [Drift condition in terms of *δ* and *σ*][Lemma: Driftbedingung in delta und sigma] If (*X*, Ψ0, *α*, *β*, *M*) satisfies the conditional expectation condition, we have the drift condition *δ**t*(*T*, *u*) = Ψ*t* −(*t*, *u*) − *ψ**t**X*(*u*) − *ψ**t*(*X*, *M*)(*u*,  − *i**σ**t*(*T*, *u*)) outside some *d**P* ⊗ *d**t*-null set for $T\in\mathbb R\_+$, $u\in\mathbb R$. In particular, $(u,-i\sigma(T,u))\in\scr U^{(X,M)}$. For $u\in\mathbb R$ and $T\in\mathbb R\_+$ define the process *Z**t* :  = *i**u**X**t* + ∫*t**T*Ψ*t*(*r*, *u*)*d**r*. The conditional expectation condition yields that $\exp(Z\_t)=E(e^{iuX\_T}\vert{\scr F}\_t)$ is a martingale. Hence $-i\in\scr U^Z$ and *ψ**t**Z*( − *i*) = 0 by Lemma [Lemma: exp(X) Martingal gdw psi(-i)=0]. With *Y**t* :  = Γ*t*(*t*, *u*) we obtain $$\begin{aligned} 0 & = & \psi^Z\_t(-i)\\ & = & \psi^{iuX+\Gamma(T,u)-Y}\_t(-i) \\ & = & \psi^{iuX+\Gamma(T,u)}\_t(-i)-\big(\Psi\_{t-}(t,u)-\psi^X\_t(u)\big) \\ & = & \psi^{(iuX,\Gamma(T,u))}\_t(-i,-i)-\Psi\_{t-}(t,u)+\psi^X\_t(u) \\ & = & \psi^{(X,M)}\_t(u,-i\sigma\_t(T,u))+\delta\_t(T,u)-\Psi\_{t-}(t,u)+\psi^X\_t(u),\end{aligned}$$ where the second equation follows from Lemma [Lemma: Gammazerlegung], the third from Lemmas [Lemma: Gammas querdrift] and [Lemma: Driftregel fuer Levy Exponenten], the fourth from Lemma [Lemma: Levy exponenten Formel] and the last from Lemmas [Lemma: Levy exponeten Formel] and [Lemma: Driftregel fuer Levy Exponenten]. [Consistency condition][Korollar: Abstrakte Konsistenzbedingung] If (*X*, Ψ0, *α*, *β*, *M*) satisfies the conditional expectation condition, then it satisfies the consistency condition. Lemma [Lemma: Driftbedingung in delta und sigma] and the definition of *δ* yield Ψ*t* −(*t*, *u*) = *δ**t*(*t*, *u*) + *ψ**t**X*(*u*) + *ψ**t*(*X*, *M*)(*u*, 0) = *ψ**t**X*(*u*). [Drift condition][Korollar: Driftbedingung] If (*X*, Ψ0, *α*, *β*, *M*) satisfies the conditional expectation condition, then it satisfies the drift condition. This follows from Lemma [Lemma: Driftbedingung in delta und sigma] and Corollary [Korollar: Abstrakte Konsistenzbedingung]. [Lemma: Gamma Darstellung] If the option surface model satisfies the consistency condition, then Γ*t*(*T*, *u*) = ∫*t**T*Ψ*t*(*r*, *u*)*d**r* for any $T\in\mathbb R\_+$, *t* ∈ [0, *T*], $u\in\mathbb R$. This is a direct consequence of Lemmas [Lemma: Gammazerlegung] and [Lemma: Gammas querdrift]. [Lemma: Psi ist Martingal] If the option surface model satisfies the drift condition, then Φ*t*(*T*, *u*) :  = exp(*i**u**X**t* + Γ*t*(*T*, *u*)) defines a local martingale (Φ*t*(*T*, *u*))*t* ∈ [0, *T*] for any $u\in\mathbb R$, $T\in\mathbb R\_+$. Fix *T*, *u* and define *Z**t* :  = *i**u**X**t* + Γ*t*(*T*, *u*). By the drift condition and Lemmas [Lemma: Levy exponeten Formel] – [Lemma: Driftregel fuer Levy Exponenten] we have $$\begin{aligned} 0&=&\psi\_t^{(X,M)}(u,-i\sigma\_t(T,u))+\delta\_t(T,u)\\ &=&\psi\_t^{(X,\sigma(T,u){\stackrel{\mbox{\tiny$\bullet$}}{}}\! M)}(u,-i)+\delta\_t(T,u)\\ &=&\psi\_t^{(iuX,\Gamma(T,u))}(-i,-i)\\ &=&\psi\_t^{iuX+\Gamma(T,u)}(-i)\\ &=&\psi\_t^Z(-i).\end{aligned}$$ Hence exp(*Z*) is a local martingale by Lemma [Lemma: exp(X) Martingal gdw psi(-i)=0]. [Lemma: Bedingungen gdw conditional expectation] (*X*, Ψ0, *α*, *β*, *M*) satisfies the drift and consistency conditions if and only if it satisfies the conditional expectation condition.  ⇐  :  This is a restatement of Corollaries [Korollar: Driftbedingung] and [Korollar: Abstrakte Konsistenzbedingung].  ⇒  :  Fix $u\in\mathbb R$, $T\in\mathbb R\_+$. Lemma [Lemma: Negativitaet von LKM] implies that the absolute value of Φ*t*(*T*, *u*) :  = exp (*i**u**X**t* + ∫*t**T*Ψ*t*(*r*, *u*)*d**r*) is bounded by 1. By Lemmas [Lemma: Gamma Darstellung] and [Lemma: Psi ist Martingal], Φ(*T*, *u*) is a local martingale and hence a martingale. This yields $$\Phi\_t(T,u)=E(\Phi\_T(T,u)\vert{\scr F}\_t)=E(e^{iuX\_T}\vert{\scr F}\_t).$$ [Lemma: conditional expectation implies martingale property] If the option surface model (*X*, Ψ0, *α*, *β*, *M*) satisfies the conditional expectation condition, then *S* = *e**X* is a martingale. For $T\in\mathbb R\_+$, *t* ∈ [0, *T*] we have $$E(e^{iu(X\_T-X\_t)}\vert{\scr F}\_t)=\exp\left(\int\_t^T\Psi\_t(r,u)dr\right).$$ Since *u* ↦ ∫*t**T*Ψ*t*(*r*, *u*)(*ω*)*d**r* is in Π, we have $E(e^{X\_T-X\_t}\vert{\scr F}\_t)=1$, cf. Remark [Bemerkung: Vergleich zu PII]. [Lemma: conditional expectation impliziert risiko neutral] If the option surface model (*X*, Ψ0, *α*, *β*, *M*) satisfies the conditional expectation condition, it is strongly risk neutral. Lemma [Lemma: conditional expectation implies martingale property] implies that *e**X* is a martingale and in particular that *e**X**t* is integrable for any $t\in\mathbb R\_+$. Let $T\in\mathbb R\_+$, *t* ∈ [0, *T*]. We define $$\begin{aligned} \widetilde C(K)&:=&E((e^{X\_T}-K)^+\vert{\scr F}\_t),\\ \widetilde{\scr O}(x)&:=&e^{-(x+X\_t)}\widetilde{C}(e^{x+X\_t})-(e^{-x}-1)^+,\\ Y &:=& X\_T-X\_t\end{aligned}$$ for any $K\in\mathbb R\_+$. Obviously we have $$\widetilde{\scr O}(x)=\bigg\{\begin{array}{cc}E((e^{Y-x}-1)^+\vert{\scr F}\_t) & \mathrm{if}\ x\geq0,\\ E((1-e^{Y-x})^+\vert{\scr F}\_t) & \mathrm{if}\ x<0\end{array}$$ and $E(e^Y\vert{\scr F}\_t)=1$. Hence Corollary [Proposition: Fourier transformierte der Call-Preise], the conditional expectation condition, and the definition of $\scr O$ yield $$\begin{aligned} {\scr F}\{x\mapsto\widetilde{\scr O}(x)\}(u)&=&\frac{1-E(e^{iuY}\vert{\scr F}\_t)}{u^2+iu}\\ &=&\frac{1-\exp\left(\int\_t^T\Psi\_t(r,u)dr\right)}{u^2+iu}\end{aligned}$$ as well as $$\begin{aligned} \widetilde{\scr O}(x)&=&\scr F^{-1}\left\{u\mapsto\frac{1-\exp\left(\int\_t^T\Psi\_t(r,u)dr\right)}{u^2+iu}\right\}(x)\\ &=&\scr O\_t(T,x).\end{aligned}$$ Thus we have $$\widetilde C(K)=C\_t(T,K)$$ for any $K\in\mathbb R\_+$. Consequently, the option surface model (*X*, Ψ0, *α*, *β*, *M*) is strongly risk neutral. (1) ⇒ (2) is obvious. (2) ⇒ (3) has been shown in Lemma [Lemma: Bedingte charackteristische Funktion]. (3) ⇔ (4) is the conclusion of Lemma [Lemma: Bedingungen gdw conditional expectation]. (3) ⇒ (1) has been shown in Lemma [Lemma: conditional expectation impliziert risiko neutral]. Musiela parametrisation ----------------------- In practice one may prefer to parametrise the codebook in terms of time-to-maturity *x* :  = *T* − *t* instead of maturity *T*, which is referred to as *Musiela parametrisation* in interest rate theory. However, in order to express the corresponding codebook dynamics, we need some additional regularity. [p:musiela] Let (*X*, Ψ0, *α*, *β*, *M*) be an option surface model such that 1. *M* is of the form *M**t* = *N**t* + ∫0*t**v**s**d**s* with some locally square-integrable martingale *N* and an integrable predictable process *v*, 2. *T* ↦ *α**t*(*T*, *u*), *β**t*(*T*, *u*), Ψ0(*T*, *u*) are continuously differentiable for any $t\in{\mathbb R \_+}$, $u\in{\mathbb R}$, 3. ∫0*T*∣∂*r**β**t*(*r*, *u*)∣2*d**r* < ∞ for any $t,T\in\mathbb R\_+$, $u\in\mathbb R$, 4. ∫0*t*sup*r* ∈ [0, *T*]|∂*r**α**s*(*r*, *u*) + ∂*r**β**s*(*r*, *u*)*v**s*|*d**s* < ∞ for any $t,T\in{\mathbb R \_+}$, $u\in{\mathbb R}$, 5. $((\int\_0^T(\vert \beta^j\_t(r,u)\vert^2 +\vert \partial\_r\beta^j\_t(r,u)\vert^2)dr)^{1/2})\_{t\in{\mathbb R \_+}}\in L^2\_\mathrm{loc}(N^j)$ for any $T\in{\mathbb R \_+}$, $u\in{\mathbb R}$, *j* ∈ {1, …, *d*}, where *L*loc2(*N**j*) is defined as in. Define $\check\alpha\_t(x,u) := \alpha\_t(t+x,u)$, $\check\beta\_t(x,u) := \beta\_t(t+x,u)$, $\check\Psi\_t(x,u) := \Psi\_t(t+x,u)$ for any $t\in{\mathbb R \_+}$, $x\in{\mathbb R \_+}$, $u\in{\mathbb R}$. For any fixed $u\in{\mathbb R}$, the mapping $x\mapsto\check\Psi\_t(x,u)$ is differentiable for *d**t*-almost all $t\in{\mathbb R \_+}$ and we have $$\begin{aligned} \check\Psi\_t(x,u) = \check\Psi\_0(x,u) + \int\_0^t\left(\check\alpha\_s(x,u) + \partial\_x\check\Psi\_s(x,u)\right)ds + \int\_0^t\check\beta\_s(x,u) M\_s\end{aligned}$$ for any $t\in{\mathbb R \_+}$, $x\in{\mathbb R}$, $u\in{\mathbb R}$. Since $$\begin{aligned} \Psi\_t(T,u)&=&\Psi\_0(T,u)+\int\_0^{t\wedge T}\alpha\_s(T,u)ds+\int\_0^{t\wedge T}\beta\_s(T,u)dM\_s\\ &=&\Psi\_0(T,u)+\int\_0^{t\wedge T}(\alpha\_s(T,u)+\beta\_s(T,u)v\_s)ds+\int\_0^{t\wedge T}\beta\_s(T,u)dN\_s,\end{aligned}$$ we can assume w.l.o.g. that *M* is a locally square-integrable martingale. By localisation, it even suffices to consider square-integrable martingales. By Fubini’s theorem for the stochastic integral (cf. ), we have $$\begin{aligned} \check\Psi\_t(x,u) &=& \check\Psi\_0(t+x,u) + \int\_0^t \check\alpha\_s(t-s+x,u)ds + \int\_0^t\check\beta\_s(t-s+x,u)dM\_s \\ &=& \check\Psi\_0(x,u) + \int\_0^t \check\alpha\_s(x,u)ds + \int\_0^t\check\beta\_s(x,u)dM\_s \\ &&{}+ \int\_0^t \partial\_x\check\Psi\_0(r+x,u)dr + \int\_0^t\int\_s^t\partial\_x\check\alpha\_s(r-s+x,u)drds \\ &&{}+ \int\_0^t\int\_s^t\partial\_x\check\beta\_s(r-s+x,u)drdM\_s \\ &=& \check\Psi\_0(x,u) + \int\_0^t \check\alpha\_s(x,u)ds + \int\_0^t\check\beta\_s(x,u)dM\_s \\ &&{}+ \int\_0^t \partial\_x\check\Psi\_0(x+r,u)dr + \int\_0^t\int\_0^r\partial\_x\alpha\_s(r+x,u)dsdr \\ &&{}+ \int\_0^t\int\_0^r\partial\_x\beta\_s(r+x,u)dM\_sdr \\ &=& \check\Psi\_0(x,u) + \int\_0^t \check\alpha\_s(x,u)ds + \int\_0^t\check\beta\_s(x,u)dM\_s \\ &&{}+ \int\_0^t \Big(\partial\_x\check\Psi\_0(x+r,u) + \partial\_x\int\_0^r\alpha\_s(r+x,u)ds \\ &&{}+ \partial\_x\int\_0^r\beta\_s(r+x,u)dM\_s\Big)dr \\ &=& \check\Psi\_0(x,u) + \int\_0^t\left(\check\alpha\_s(x,u) + \partial\_x\check\Psi\_s(x,u)\right)ds + \int\_0^t\check\beta\_s(x,u) M\_s\end{aligned}$$ for any $t,x\in{\mathbb R \_+}$, $u\in{\mathbb R}$ where the fourth equality is explained below. For fixed $t,x\in{\mathbb R \_+}$, $u\in{\mathbb R}$ define Hilbert spaces $H:=L^2([0,t+x],\mathbb R)$ and *H*1 :  = {*f* ∈ *H* : *f*is differentiable and *f*ʹ ∈ *H*} with norm $$\|f\|\_{H\_1}:=\sqrt{\|f\|\_{H}^2+\|f'\|\_{H}^2}.$$ The mapping *r* ↦ *β**t*(*r* + *x*, *u*) is in *H*1 by assumption. Since ∂*r* : *H*1 → *H*, *f* ↦ *f*ʹ is linear and continuous, yields ∫0*t*∂*r**β**s*( ⋅ , *u*)*d**M**s* = ∂*r*∫0*t**β**s*( ⋅ , *u*)*d**M**s* and hence $$\begin{aligned} \int\_0^t\int\_0^r\partial\_x\beta\_s(r+x,u)dM\_sdr = \int\_0^t\partial\_x\int\_0^r\beta\_s(r+x,u)dM\_sdr. \end{aligned}$$ Constructing models from building blocks ======================================== In this section we turn to existence and uniqueness results for option surface models which are driven by a subordinator *M*. Building blocks --------------- Theorem [Satz: Bedingungen impl. Vollstaendig] indicates thet neither the drift part *α* of the codebook nor the dynamics of the return process *X* can be chosen arbitrarily if one wants to end up with a risk-neutral option surface model. What ingredients do we need in order to construct such a model? It seems natural to consider volatiliy processes *β* that are functions of the present state of the codebook, i.e. *β**t*(*T*, *u*)(*ω*) = *b*(*t*, Ψ*t* −( ⋅ ,  ⋅ )(*ω*))(*T*, *u*) for some deterministic function $b:\mathbb R\_+\times {L^1(\mathbb R\_+,\Pi)}\rightarrow {L^1(\mathbb R\_+,\Pi)}$, where ${L^1(\mathbb R\_+,\Pi)}$ denotes some suitable space of conceivable codebook states, i.e.  essentially of functions $\mathbb R\_+\rightarrow\Pi$. It is specified below. In order to hope for uniqueness, we need to fix the initial values *X*0 and Ψ0( ⋅ ,  ⋅ ), function *b* and the law of the driving process *M*. The drift part in need not be specified as it is implied by the drift condition. But we need some information on *X*. Although its dynamics seem to be determined by the consistency condition, the joint behaviour of *X* and *M* is not. The latter, however, is needed for the drift condition resp. . In view of ([e:letztedrift]), we assume that the joint law of *M* and the dependent part *X*∥ of *X* relative to *M* in the sense of Section [Abschnitt: Semimartingalprojektion] are given. More specifically, we suppose that (*X*∥, *M*) is a Lévy process with given Lévy exponent *ψ*(*X*∥, *M*) = *γ*. The components of *M* are supposed to be subordinators. Altogether, we suggest to construct models based on a quadrupel (*x*0, *ψ*0, *b*, *γ*), where $x\_0\in\mathbb R$ and $\psi\_0\in {L^1(\mathbb R\_+,\Pi)}$ stand for the initial states of the return process and the codebook, respectively. In order to derive existence and uniqueness results, we still need to specify the domain and codomain of *b*. For ease of notation, we focus on one-dimensional driving processes *M*. The vector-valued case can be treated along the same lines. Let *E* denote the set of continuous functions $\psi:\mathbb R\rightarrow\mathbb C$ and ‖*ψ*‖*m* :  = sup{∣*ψ*(*u*)∣ : ∣*u*∣ ≤ *m*} for $m\in{\mathbb R \_+}$. By $\scr L^1(\mathbb R\_+,E)$ we denote the set of measurable functions $\psi:\mathbb R\_+\times\mathbb R\rightarrow\mathbb C$ such that *ψ*(*T*,  ⋅ ) ∈ *E* and ∥*ψ*∥*T*, *m* :  = ∫0*T*‖*ψ*(*r*,  ⋅ )‖*m**d**r* < ∞ for any $T,m\in\mathbb R\_+$. For $\psi\in \scr L^1(\mathbb R\_+,E)$ we set $$[\psi] := \{ \varphi\in\scr L^1({\mathbb R \_+},E): \psi(T,\cdot)=\varphi(T,\cdot) \text{ for almost any }T\in{\mathbb R \_+}\}.$$ Moreover, we define the space $${L^1(\mathbb R\_+,E)} := \{ [\psi] : \psi\in\scr L^1({\mathbb R \_+},E) \}.$$ as usual. Finally, we set $$\begin{aligned} {L^1(\mathbb R\_+,\Pi)} &:=& \left\{\psi\in{L^1(\mathbb R\_+,E)}:\int\_t^T\psi(r,\cdot)dr\in\Pi\text{ for any }0\leq t\leq T<\infty\right\}, \end{aligned}$$ where we refer to the Bochner integral in the sense of Definition [d:frechetintegral] and Example [ex:frechet]. [l:banach] The following statements hold: 1. (*E*, ‖ ⋅ ‖*m*) is a complete and separable semi-normed space for any $m\in{\mathbb R \_+}$. 2. $({L^1(\mathbb R\_+,E)},\Vert\cdot\Vert\_{T,m})$ is a complete and separable semi-normed space for any $T,m\in\mathbb R\_+$. If $x\in{L^1(\mathbb R\_+,E)}$ with ∥*x*∥*n*, *n* = 0 for any $n\in\mathbb N$, we have *x* = 0. Moreover, if $(x\_k)\_{k\in\mathbb N}$ is a Cauchy sequence in ${L^1(\mathbb R\_+,E)}$ relative to any ∥ ⋅ ∥*n*, *n*, $n\in\mathbb N$, there exists $x\in{L^1(\mathbb R\_+,E)}$ such that lim*k* → ∞∥*x**k* − *x*∥*n*, *n* = 0, $n\in\mathbb N$. Consequently, $({L^1(\mathbb R\_+,E)},d)$ is a separable Fréchet space for the metric $$d(\psi,\varphi):=\sum\_{n\in\mathbb N}2^{-n}(1\wedge \Vert\psi-\varphi\Vert\_{n,n}),\quad \psi,\varphi\in {L^1(\mathbb R\_+,E)}.$$ 3. Π ⊂ *E* is a convex cone. If *A* is a Borel subset of ${\mathbb R}$, *μ* a finite measure on *A*, and $\psi:A\times\mathbb R\rightarrow\mathbb C$ is measurable with *ψ*(*r*,  ⋅ ) ∈ Π and ∫*A*‖*ψ*(*r*,  ⋅ )‖*m**μ*(*d**r*) < ∞ for all $m\in\mathbb N$, then the mapping *u* ↦ ∫*A**ψ*(*r*, *u*)*μ*(*d**r*) is in Π. 4. If $\psi\in{L^1(\mathbb R\_+,E)}$ and *ψ*(*T*,  ⋅ ) ∈ Π for almost all $T\in\mathbb R\_+$, then $\psi\in{L^1(\mathbb R\_+,\Pi)}$. 5. For any increasing function $X:\mathbb R\_+\rightarrow\mathbb R\_+$ and any locally *X*-integrable function $\eta:\mathbb R\_+\rightarrow{L^1(\mathbb R\_+,\Pi)}$, we have $\int\_0^t\eta\_s dX\_s\in{L^1(\mathbb R\_+,\Pi)}$ for any $t\in\mathbb R\_+$. Here, we refer to Bochner integration on ${L^1(\mathbb R\_+,E)}$, cf. Example [ex:frechet] and Definition [d:frechetintegral]. 6. Π is a Borel subset of *E* and consequently ${L^1(\mathbb R\_+,\Pi)}$ is a Borel subset of ${L^1(\mathbb R\_+,E)}$ (relative to the Borel-*σ*-field generated by the metric *d*). 1. This follows from the fact that the continuous functions on [ − *m*, *m*] are a separable Banach space relative to the uniform norm. 2. $({L^1(\mathbb R\_+,E)},\Vert\cdot\Vert\_{T,m})$ is a complete and separable semi-normed space because the corresponding Lebesgue-Bochner space of integrable functions on [0, *T*] with values in the Banach space of continuous functions $[-m,m]\to\mathbb C$ is a Banach space. Let *Q* be a countable dense set in *E*. Define $$S:=\left\{\sum\_{j=1}^n q\_j1\_{(a\_j,b\_j]}: n\in\mathbb N,q\in Q^n,a,b\in\mathbb Q^n\right\}.$$ *S* is dense in ${L^1(\mathbb R\_+,E)}$ because *S* is obviously dense in $$T:=\left\{\sum\_{j=1}^n q\_j1\_{A\_j}: n\in\mathbb N,q\in Q^n,A\_1,\dots,A\_n\in\scr B(\mathbb R\_+)\right\}$$ and *T* is dense in ${L^1(\mathbb R\_+,E)}$, cf. Section [su:frechet] of the appendix. This shows separability of $({L^1(\mathbb R\_+,E)},\Vert\cdot\Vert\_{T,m})$. The remaining statements are straightforward. 3. Π is obviously a convex cone. Define $$\Psi:\mathbb R\rightarrow\mathbb C,\quad u\mapsto \int\_A\psi(r,u)\mu(dr).$$ Then Ψ is continuous and it is the pointwise limit of Lévy exponents. Hence Lévy’s continuity theorem (see ) together with yield that Ψ is the characteristic exponent of an infinitely divisible random variable *X*. Fix the truncation function $h:\mathbb R\rightarrow\mathbb R,x\mapsto x1\_{\{\vert x\vert\leq 1\}}$. For all *r* ∈ *A* let (*b**r*, *c**r*, *F**r*) be the Lévy-Khintchine triplet corresponding to *ψ*(*r*,  ⋅ ). A detailed analysis of the proof of yields integrability of *b* and *c* and that *F* is a transition kernel satisfying ∫*A*∫(∣*x*∣2 ∧ 1)*F**r*(*d**x*)*μ*(*d**r*) < ∞. In order to prove Ψ ∈ Π we have to show that *E**e**X* = 1. Let (*B*, *C*, *ν*) be the triplet corresponding to Ψ and *h*. Then yields $$\exp(\Psi(u)) = \exp\left(iuB-\frac{u^2}{2}C+\int\_{\mathbb R}(e^{iux}-1-iuh(x))\nu(dx)\right)$$ as well as $$\begin{aligned} &&\exp(\Psi(u)) \\ &=& \exp\left(\int\_A \psi(r,u)\mu(dr)\right) \\ &=& \exp\left(iu\int\_Ab\_rdr-\frac{u^2}{2}\int\_Ac\_rdr+\int\_A\int(e^{iux}-1-iuh(x))F\_r(dx)\mu(dr)\right)\end{aligned}$$ for any $u\in\mathbb R$. From we obtain *B* = ∫*A**b**r**μ*(*d**r*), *C* = ∫*A**c**r**μ*(*d**r*), and *ν*(*G*) = ∫*A**F**r*(*G*)*μ*(*d**r*) for $G\in\scr B$. Consequently, we have $$\begin{aligned} E(e^X) &=& \exp\left(B+\frac{1}{2}C+\int (e^x-1-h(x)) \nu(dx)\right) \\ &=& \exp\Bigg(\int\_A\left(b\_r+\frac{1}{2}c\_r+\int\_{(-\infty,1]} (e^x-1-h(x)) F\_r(dx)\right)\mu(dr) \\ &&+ \int\_{(1,\infty)} (e^x-1) \nu(dx)\Bigg).\end{aligned}$$ Tonelli’s theorem yields ∫(1, ∞)(*e**x* − 1)*ν*(*d**x*) = ∫*A*∫(1, ∞)(*e**x* − 1)*F**r*(*d**x*)*μ*(*d**r*). Hence $$E(e^X) = \exp\left(\int\_A\left(b\_r+\frac{1}{2}c\_r+\int (e^x-1-h(x)) F\_r(dx)\right)\mu(dr)\right) = 1.$$ 4. This is a consequence of Statement 3. 5. This is a consequence of Statement 3 as well. 6. *E* is a metric space relative to $$\delta(\psi,\varphi):=\sum\_{n\in\mathbb N} 2^{-n}(1\wedge \|\psi-\varphi\|\_n).$$ Let *C* ⊂ *E* denote the set of all Lévy exponents. Lévy’s continuity theorem (cf. ) and imply that *C* is closed in *E* and in particular a Borel subset in *E*. The function $$f:C\rightarrow \overline{\mathbb R}\_+, \quad\varphi\mapsto \frac{1}{2\pi}\int\_{-\infty}^{\infty}e^x\int\_{-\infty}^{\infty} \exp\left(\varphi(u)-\frac{1}{2}(iu+u^2)-iux\right)dudx$$ is well defined and measurable, where $\overline{\mathbb R}\_+:=\mathbb R\_+\cup\{\infty\}$. Indeed, observe that for *φ* ∈ *C* the function $$u\mapsto \varphi(u)-\frac{1}{2}(iu+u^2)$$ is a Lévy exponent and thus yields that $$x\mapsto \frac{1}{2\pi}\int\_{-\infty}^{\infty}\exp\left(\varphi(u)-\frac{1}{2}(iu+u^2)-iux\right)du$$ is a density function. Let *L* be a Lévy process with Lévy exponent *φ* ∈ *C* and *W* be an independent Brownian motion with diffusion coefficient 1 and drift rate  − 1/2. yields that $$p:\mathbb R\rightarrow\mathbb R\_+,\quad x \mapsto \frac{1}{2\pi}\int\_{-\infty}^{\infty} \exp\left(\varphi(u)-\frac{1}{2}(iu+u^2)-iux\right)du$$ is the density function of *L*1 + *W*1. Thus we have $$\begin{aligned} E(e^{L\_1}) &=& E(e^{L\_1+W\_1}) = \int\_{-\infty}^{\infty}e^xp(x)dx = f(\varphi).\end{aligned}$$ Hence Π = *f*− 1(1), which implies that Π is measurable in *E*. It remains to be shown that ${L^1(\mathbb R\_+,\Pi)}$ is a measurable set in ${L^1(\mathbb R\_+,E)}$. For $a,b\in\mathbb R\_+$ define the continuous and hence measurable map $$I\_{a,b}:{L^1(\mathbb R\_+,E)}\rightarrow E,\quad \psi \mapsto \int\_{a}^{b}\psi(r,\cdot)dr.$$ We obviously have $${L^1(\mathbb R\_+,\Pi)}\subset M:=\bigcap \{ I\_{q\_1,q\_2}^{-1}(\Pi) : q\_1,q\_2\in\mathbb Q\_+,q\_1\leq q\_2\}.$$ We show that the two sets are in fact equal. Let *ψ* ∈ *M* and $t,T\in{\mathbb R \_+}$ with *t* < *T*. Then $\psi\in{L^1(\mathbb R\_+,E)}$ and hence ∫*t**T**ψ*(*r*,  ⋅ )*d**r* ∈ *E*. Let $(q\_n)\_{n\in\mathbb N},(p\_n)\_{n\in\mathbb N}$ be sequences of rational numbers such that *q**n* ↓ *t*, *p**n* ↑ *T*, and *q**n* ≤ *p**n*. Then $$I\_{t,T}\psi = I\_{q\_0,p\_0}\psi + \sum\_{n\in\mathbb N}(I\_{q\_{n+1},p\_{n+1}}-I\_{q\_{n},p\_{n}})\psi.$$ Defining the finite measure *μ* on $\mathbb N$ by *μ*({*n*}) :  = 1/*n*2 and the function $$\gamma:\mathbb N\rightarrow \Pi,\quad n\mapsto n^2(I\_{q\_{n+1},p\_{n+1}}-I\_{q\_{n},p\_{n}})\psi,$$ we obtain $$I\_{t,T}\psi = I\_{q\_0,p\_0}\psi + \int\_{\mathbb N} \gamma d\mu.$$ Hence Statement 3 yields *I**t*, *T**ψ* ∈ Π, which in turn implies $\psi\in{L^1(\mathbb R\_+,\Pi)}$. We are now ready to formalise the notion of building blocks. [D:Bausteine] We call a quadruple (*x*0, *ψ*0, *b*, *γ*) *building blocks* of an option surface model if 1. $x\_0\in\mathbb R\_+$, 2. $\psi\_0\in{L^1(\mathbb R\_+,\Pi)}$, 3. $b:\mathbb R\_+\times{L^1(\mathbb R\_+,E)}\rightarrow{L^1(\mathbb R\_+,E)}$ is measurable (relative to the *σ*-fields ${\scr B}({\mathbb R \_+})\otimes{\scr B}({L^1(\mathbb R\_+,E)})$ and ${\scr B}({L^1(\mathbb R\_+,E)})$, where ${\scr B}({L^1(\mathbb R\_+,E)})$ denotes the Borel-*σ*-field on the metric space $({L^1(\mathbb R\_+,E)},d)$ as introduced in Lemma [l:banach]), 4. *b* maps $\mathbb R\_+\times{L^1(\mathbb R\_+,\Pi)}$ on a subset of ${L^1(\mathbb R\_+,\Pi)}$, 5. *b* is *locally Lipschitz* in the sense that for any $T\in{\mathbb R \_+}$ there are $T\_0,m\_0\in{\mathbb R \_+}$ such that for any $\widetilde T\geq T\_0$, *m* ≥ *m*0 there exists $c\in\mathbb R\_+$ such that $$\Vert b(t,\psi\_1)-b(t,\psi\_2)\Vert\_{\widetilde T,m}\leq c\Vert\psi\_1-\psi\_2\Vert\_{\widetilde T,m}$$ holds for any $\psi\_1,\psi\_2\in{L^1(\mathbb R\_+,E)}$ and any *t* ∈ [0, *T*], 6. *b*(*t*, *ψ*)(*T*, *u*) = 0 for any $\psi\in{L^1(\mathbb R\_+,E)}$, $u\in\mathbb R$ and any $t,T\in\mathbb R\_+$ with *t* > *T*, 7. sup*t* ∈ [0, *T*]∥*b*(*t*, 0)∥*T*, *m* < ∞ for any $T,m\in{\mathbb R \_+}$, 8. $\gamma:\mathbb R\times({\mathbb R}+i{\mathbb R \_+})\rightarrow\mathbb C$ is the extended Lévy exponent on $\mathbb R\times({\mathbb R}+i{\mathbb R \_+})$(cf. Section [Unterabschnitt: Lévy exponenten]) of an $\mathbb R^{1+1}$-dimensional Lévy process (*X*∥, *M*) such that 1. *M* is a pure jump subordinator, i.e. *M**t* = ∑*s* ≤ *t*Δ*M**s*, $t\in\mathbb R\_+$, 2. *X*∥ is the dependent part of *X*∥ relative to *M*, 3. *γ* is differentiable, and 4. $\partial\_2\gamma:\mathbb R\times({\mathbb R}+i{\mathbb R \_+})\rightarrow\mathbb C$ is bounded and Lipschitz continuous. By Remark [Remark smoothness] the smoothness conditions (c,d) on *γ* are satisfied if both *X*∥ and *M* have finite second moments. Our goal is to find corresponding risk-neutral option surface models in the following sense. [Definition: kompatibel] An option surface model (*X*, Ψ0, *α*, *β*, *M*) is said to be *compatible* with building blocks (*x*0, *ψ*0, *b*, *γ*) if * *X*0 = *x*0, * Ψ0 = *ψ*0, * ${\mathbb R \_+}\to{L^1(\mathbb R\_+,E)}$, *t* ↦ Ψ*t*(*ω*) is a well-defined, a.s. càdlàg mapping, * *β**t*(*ω*) = *b*(*t*, Ψ*t* −(*ω*)) for *d**P* ⊗ *d**t*-almost any $(\omega,t)\in \Omega\times\mathbb R\_+$, * *ψ*(*X*∥, *M*) = *γ* on $\mathbb R\times({\mathbb R}+i{\mathbb R \_+})$, where *X*∥ denotes the dependent part of *X* relative to *M*. In practice one may be interested in coefficients *b* of the form $$\label{e:musiela} b(t,\psi)(T,u):=\left\{\begin{array}{ll} \check b(\psi^{(t)})(T-t,u) & \mbox{ if } t\leq T,\\ 0 & \mbox{ if }t> T, \end{array}\right.$$ where $\psi^{(t)}\in{L^1(\mathbb R\_+,E)}$ is defined by $$\psi^{(t)}({x},u):=\psi(t+{x},u),\quad {x}\in{\mathbb R \_+}, \quad u\in{\mathbb R}$$ for $\psi\in{L^1(\mathbb R\_+,E)}$. In line with Proposition [p:musiela], $(\Psi\_t^{(t)})\_{t\in{\mathbb R \_+}}$ may be called *Musiela parametrisation* of the codebook process $(\Psi\_t)\_{t\in{\mathbb R \_+}}$. In other words, the Musiela codebook refers to a function of the remaining life time *x* = *T* − *t* rather than maturity *T* of the claim. Function *b* in ([e:musiela]) satisfies Conditions 3–7 in Definition [D:Bausteine] if $\check b:{L^1(\mathbb R\_+,E)}\rightarrow{L^1(\mathbb R\_+,E)}$ maps ${L^1(\mathbb R\_+,\Pi)}$ on a subset of ${L^1(\mathbb R\_+,\Pi)}$ and if $\check b$ is *locally Lipschitz* in the sense that there exist $x\_0,m\_0\in{\mathbb R \_+}$ such that for any *x* ≥ *x*0, *m* ≥ *m*0 there exists $c\in\mathbb R\_+$ such that $$\Vert \check b(\psi\_1)- \check b(\psi\_2)\Vert\_{\widetilde x,m}\leq c\Vert\psi\_1-\psi\_2\Vert\_{\widetilde x,m}$$ holds for any $\psi\_1,\psi\_2\in{L^1(\mathbb R\_+,E)}$ and any $\widetilde x\in[x\_0,x]$. Existence and uniqueness results -------------------------------- For building blocks (*x*0, *ψ*0, *b*, *γ*) consider the stochastic differential equation (SDE) $$\begin{aligned} \label{G:Psi SDE} d\Psi\_t = a(t,\Psi\_{t-})dt+b(t,\Psi\_{t-})dM\_t,\quad\Psi\_0=\psi\_0\end{aligned}$$ in ${L^1(\mathbb R\_+,E)}$, where *M* denotes a subordinator with Lévy exponent *γ*(0,  ⋅ ) and $$\begin{aligned} \label{G:Driftbedingung, Markovsch} a(t,\psi)(T,u) &:=& -\partial\_T\left(\gamma\Big(u,-i\int\_{t\wedge T}^T\tilde b(t,\psi)(r,u)dr\Big)\right) \\ &=& i\partial\_2\gamma\Big(u,-i\int\_t^T\tilde b(t,\psi)(r,u)dr\Big)\tilde b(t,\psi)(T,u)1\_{[t,\infty)}(T). \nonumber\end{aligned}$$ with $$\begin{aligned} \label{e:tilb} \tilde b(t,\psi)(r,u) :=\big({\mathrm{Re}}(b(t,\psi)(r,u))\wedge 0\big) +i {\mathrm{Im}}(b(t,\psi)(r,u)).\end{aligned}$$ Note that $\tilde b(t,\psi)= b(t,\psi)$ for *ψ* ∈ Π by Lemma [Lemma: Negativitaet von LKM]. In view of Equations and, any compatible codebook process should solve. We start by showing that allows for a unique solution in ${L^1(\mathbb R\_+,E)}$. [Proposition: SDE ist eindeutig] Let (*x*0, *ψ*0, *b*, *γ*) be building blocks. defines a measurable function $a:\mathbb R\_+\times{L^1(\mathbb R\_+,E)}\rightarrow{L^1(\mathbb R\_+,E)}$. Let $T,T\_0,m\_0,\widetilde T,m$ be as in Definition [D:Bausteine](5). Then there are a constant $C\in{\mathbb R \_+}$ and for any $\Vert\cdot\Vert\_{\widetilde T,m}$-bounded set $B\subset {L^1(\mathbb R\_+,E)}$ a constant $c\in{\mathbb R \_+}$ such that *a* satisfies the Lipschitz and linear growth conditions $$\begin{aligned} \Vert a(t,\psi\_1) - a(t,\psi\_2) \Vert\_{\widetilde T,m} &\leq& c\Vert \psi\_1-\psi\_2\Vert\_{\widetilde T,m}, \\ \Vert a(t,\psi) \Vert\_{\widetilde T,m} &\leq& C(1+\Vert \psi\Vert\_{\widetilde T,m})\end{aligned}$$ for any *t* ∈ [0, *T*], *ψ*1, *ψ*2 ∈ *B*, $\psi\in {L^1(\mathbb R\_+,E)}$. 1. Let $T,T\_0,m\_0,\widetilde T,m,c$ be as in Definition [D:Bausteine](5) such that $$\begin{aligned} \label{e:lipb} \Vert b(t,\psi\_1)-b(t,\psi\_2)\Vert\_{\widetilde T,m} &\leq& c \Vert \psi\_1-\psi\_2\Vert\_{\widetilde T,m}\end{aligned}$$ for any $\psi\_1,\psi\_2\in {L^1(\mathbb R\_+,E)}$, *t* ∈ [0, *T*]. Let *B* be a bounded set and *L* a Lipschitz constant of ∂2*γ*. Then $$\begin{aligned} H &:=& \sup\_{(R,u)\in S} \left\vert i\partial\_2\gamma\left(u,-i\int\_t^R\widetilde\psi\_1(r,u)dr\right)-i \partial\_2\gamma\left(u,-i\int\_t^R\widetilde\psi\_2(r,u)dr\right)\right\vert \\ &\leq& L\sup\_{(R,u)\in S} \left\vert \int\_t^R\widetilde\psi\_1(r,u)dr-\int\_t^R\widetilde\psi\_2(r,u)dr\right\vert \\ &\leq& L\sup\_{(R,u)\in S} \int\_t^R\vert\psi\_1(r,u)-\psi\_2(r,u)\vert dr \\ &\leq& L\Vert \psi\_1-\psi\_2\Vert\_{\widetilde T,m}\end{aligned}$$ with $S=[t,\widetilde T]\times[-m,m]$. Let $c\_1\in\mathbb R\_+$ be a bound for the set *B*, $$c\_2 := \sup\_{t\in[0,T]}\Vert b(t,0)\Vert\_{\widetilde T,m} + cc\_1,$$ and *c*3 be a bound for ∂2*γ*. Observe that $ \Vert b(t,\psi) \Vert\_{\widetilde T,m} \leq c\_2$ for any *ψ* ∈ *B*, *t* ∈ [0, *T*]. Submultiplicativity of the uniform norm yields $$\begin{aligned} \lefteqn{\Vert f(t,\psi\_1)-f(t,\psi\_2)\Vert\_{\widetilde T,m}}\\ &=& \int\_0^{\widetilde T}\sup\_{u\in[-m,m]}\vert (f(t,\psi\_1)-f(t,\psi\_2))(r,u)\vert dr \\ & \leq & \int\_0^{\widetilde T} \Big(H\sup\_{u\in[-m,m]}\vert \psi\_1(r,u)\vert \\ &&{}+ \sup\_{u\in[-m,m]}\left\vert \partial\_2\gamma\left(u,-i\int\_t^R\widetilde\psi\_2(r,u)dr\right)\right\vert \vert (\psi\_1-\psi\_2)(r,u)\vert \Big)dr\\ &\leq& L\Vert \psi\_1-\psi\_2\Vert\_{\widetilde T,m} \Vert\psi\_1\Vert\_{\widetilde T,m} +c\_3 \Vert\psi\_1-\psi\_2\Vert\_{\widetilde T,m} \\ &\leq& (Lc\_2+c\_3) \Vert \psi\_1-\psi\_2\Vert\_{\widetilde T,m}.\end{aligned}$$ for any *t* ∈ [0, *T*] and any $\psi\_1,\psi\_2\in{L^1(\mathbb R\_+,E)}$ which are bounded by *c*2. Since *a* is the composition of (*t*, *ψ*) → (*t*, *b*(*t*, *ψ*)) and *f* and since *b* is Lipschitz continuous, the first inequality follows. For the second inequality note that $$\begin{aligned} \Vert a(t,\psi) \Vert\_{\widetilde T,m} &\leq& c\_3\Vert b(t,\psi)\Vert\_{\widetilde T,m} \\ &\leq&c\_3\left(\Vert b(t,0)\Vert\_{\widetilde T,m} +\Vert b(t,\psi)-b(t,0)\Vert\_{\widetilde T,m}\right) \\ &\leq& c\_3\left(\Vert b(t,0)\Vert\_{\widetilde T,m} +c\Vert \psi\Vert\_{\widetilde T,m}\right) \\ &\leq& C(1+\Vert\psi\Vert\_{\widetilde T,m})\end{aligned}$$ by ([e:a], [e:lipb]) for $C:=c\_3 (\sup\_{t\in[0,T]}\Vert b(t,0)\Vert\_{\widetilde T,m} \vee c)$. 2. Pathwise uniqueness of a solution to SDE ([G:Psi SDE]) now implies uniqueness in law. This follows along the same lines as for ${\mathbb R}^d$-valued SDE’s driven by a Wiener process, cf. e.g. . For the proof of one may note that the law of the Bochner integral ∫0⋅*a*(*t*, Ψ*t* −)*d**t* (and likewise the law of the integral with respect to *M*) is determined by the law of all random vectors of the form (∫0*t*1*f*1(*a*(*s*, Ψ*s* −))*d**s*, …, ∫0*t**d**f**d*(*a*(*s*, Ψ*s* −))*d**s*),  where $d\in\mathbb N$, $t\_1,\dots,t\_d\in{\mathbb R \_+}$, and *f*1, …, *f**d* denote continuous linear functionals on ${L^1(\mathbb R\_+,E)}$. We can now state an existence and uniqueness result for compatible option surface models. The condition in Statement 2 of the following theorem means essentially that * the current codebook state (*T*, *u*) ↦ Ψ*t*(*T*, *u*) must look like the exponent of a PII whose exponential is a martingale and * *u* ↦ Ψ*t* −(*t*, *u*) is the local exponent of some process *X* whose exponential is a martingale and whose dependent part *X*∥ relative to *M* is of the form in Definition [D:Bausteine](8). The first requirement makes sense because of the very idea of a codebook in the present Lévy setup. The second condition, on the other hand, naturally appears through the consistency condition. [Satz: starker Existenzsatz] Let (*x*0, *ψ*0, *b*, *γ*) be building blocks. 1. 2. If a compatible risk-neutral option surface model (*X*, Ψ0, *α*, *β*, *M*) exists, then the ${L^1(\mathbb R\_+,E)}$-valued process *η* defined by $$\label{e:eta} \eta\_t(T,u):=\Phi\_t(T,u)-\gamma(u,0)1\_{[0,t]}(T),\quad t,T\in{\mathbb R \_+},\quad u\in{\mathbb R},$$ has values in ${L^1(\mathbb R\_+,\Pi)}$. Here, Φ denotes the ${L^1(\mathbb R\_+,E)}$-valued solution to SDE from Proposition [Proposition: SDE ist eindeutig]. 3. Let Φ denote the ${L^1(\mathbb R\_+,E)}$-valued solution to SDE from Proposition [Proposition: SDE ist eindeutig]. If the ${L^1(\mathbb R\_+,E)}$-valued process *η* in ([e:eta]) has values in ${L^1(\mathbb R\_+,\Pi)}$, there exists a compatible risk-neutral option surface model. 1. *Step 1:* Let (*X*, Ψ0, *α*, *β*, *M*) be a risk-neutral option surface model which is compatible with the building blocks. Denote by *X*∥ the dependent part of *X* relative to *M*. By compatibility we have *ψ*(*X*∥, *M*) = *γ*. Thus (*X*∥, *M*) is a Lévy process. Let $(\scr G\_t)\_{t\in\mathbb R\_+}$ be the filtration generated by (*X*∥, *M*), i.e.  $$\scr G\_t=\bigcap\_{s>t}\sigma({({X^\parallel},M)}\_r:r\leq s).$$ Since the option surface model (*X*, Ψ0, *α*, *β*, *M*) is risk neutral, Theorem [Satz: Bedingungen impl. Vollstaendig] yields that it satisfies the drift condition, the consistency condition and the conditional expectation condition. Moreover, compatibility and the drift condition imply $$\begin{aligned} \beta\_t(T,u) &=& b(t,\Psi\_{t-})(T,u),\\ \alpha\_t(T,u) &=& -\partial\_T\left(\gamma\Big(u,-i\int\_t^T b(t,\Psi\_{t-})(r,u)dr\Big)\right)\end{aligned}$$ a.s for any $u\in\mathbb R$ and almost any $T\in{\mathbb R \_+}$, *t* ∈ [0, *T*]. ([Gleichung:neu2]) implies that Ψ solves the SDE *d*Ψ*t* = *a*(*t*, Ψ*t* −)*d**t* + *b*(*t*, Ψ*t* −)*d**M**t*,  Ψ0 = *ψ*0. pointwise for any $T\in{\mathbb R \_+}$, $u\in{\mathbb R}$. By compatibility, Ψ*t*, Ψ*t* − are ${L^1(\mathbb R\_+,E)}$-valued random variables. It is not hard to see that Equation ([e:ba]) holds also in the sense of ${L^1(\mathbb R\_+,E)}$-valued processes. By Proposition [Proposition: SDE ist eindeutig] Ψ is the unique pathwise solution to the SDE. Thus Ψ is adapted to the filtration $(\scr G\_t)\_{t\in\mathbb R\_+}$. *Step 2:* Define a filtration $(\scr H\_t)\_{t\in{\mathbb R \_+}}$ via $\scr H\_t:=\bigcap\_{s>t}({\scr F}\_s\vee{\scr G}\_\infty)$. We show that *X*⊥ − *X*0 = *X* − *X*∥ − *X*0 is a ${\scr G}\_\infty$-conditional PII with respect to filtration $(\scr H\_t)\_{t\in{\mathbb R \_+}}$. Indeed, adaptedness follows from the fact that both *X* and *X*⊥ are adapted to the original filtration $({\scr F}\_t)\_{t\in{\mathbb R \_+}}$. By definition of conditional PII’s in it remains to be shown that $$\label{e:CPII} E\left(f(X^\perp\_r-X^\perp\_s)ZY\right)=E\left(E\big(f(X^\perp\_r-X^\perp\_s)\big|{\scr G}\_\infty\big) E(Z|{\scr G}\_\infty)Y\right)$$ for any *s* ≤ *r*, any bounded measurable function $f:{\mathbb R}\to{\mathbb R}$, any bounded $\scr H\_s$-measurable random variable *Z* and any bounded ${\scr G}\_\infty$-measurable function *Y*. By right-continuity of *X*⊥, it suffices to consider only ${\scr F}\_s\vee{\scr G}\_\infty$-measurable *Z*. Standard measure theory yields that we can focus on functions of the form *f*(*x*) = *e**i**u**x* for any $u\in{\mathbb R}$ and *Z* of the form *Z* = 1*F*1*G* with $F\in{\scr F}\_s$ and $G\in{\scr G}\_\infty$. In view of ([e:CPII]), it even suffices to discuss *Z* = 1*F* because the second factor can be moved to *Y*. Moreover, we may replace *X**r*⊥ by *X**r* ∧ *τ**n*⊥, where the ${\scr G}\_\infty$-measurable stopping times $\tau\_n,n\in\mathbb N$ are defined by $$\tau\_n:=\inf\left\{\tilde t\ge s:{\mathrm{Re}}\Big(\int\_s^{\tilde t}\Psi\_{t}(t,u)dt\Big)\ge n\right\}.$$ Finally, ${\scr G}\_\infty$-mesurability of Ψ implies that we can write *Y* as $$Y=\widetilde Y\exp\left(-\int\_s^{r\wedge\tau\_n}(\Psi\_t(t,u)-\gamma(u,0))dt\right)$$ with some bounded ${\scr G}\_\infty$-measurable $\tilde Y$. The consistency condition and local independence of *X*∥, *X*⊥ imply that $$\Psi\_t(t,u)-\gamma(u,0)=\psi^X\_t(u)-\psi\_t^{X^\|}(u)=\psi^{X^\perp}\_t(u), \quad u\in{\mathbb R}$$ outside some *d**P* ⊗ *d**t*-null set. As above, standard measure theory yields that it suffices to consider $\widetilde Y$ of the form $$\widetilde Y=\exp\left(i\int\_0^Tv(t)d(X^\|,M)\_t\right)$$ with *T* ∈ [*r*, ∞) and bounded measurable $v=(v\_1,v\_2):[0,T]\to{\mathbb R}^2$. If we set ${\scr G}^+\_s:=\sigma((X^\|,M)\_t-(X^\|,M)\_s:t\ge s)$, we have ${\scr G}\_\infty={\scr G}\_s\vee{\scr G}^+\_{s}$. Moreover, ${\scr G}^+\_s$ is independent of ${\scr F}\_s$ because (*X*∥, *M*) is a Lévy process with respect to filtration $({\scr F}\_t)\_{t\in{\mathbb R \_+}}$. Since *Z* = 1*F* is ${\scr F}\_s$-measurable, we have $E(Z|{\scr G}\_\infty)=E(Z|{\scr G}\_s)$, cf. e.g. . This yields $$\begin{aligned} \label{e:indep1} \lefteqn{E\left(E\big(f(X^\perp\_{r\wedge\tau\_n}-X^\perp\_s)\big|{\scr G}\_\infty\big) E(Z|{\scr G}\_\infty)Y\right)}\nonumber\\ &=&E\left(f(X^\perp\_{r\wedge\tau\_n}-X^\perp\_s)YE(Z|{\scr G}\_\infty)\right)\nonumber\\ &=&E\left(f(X^\perp\_{r\wedge\tau\_n}-X^\perp\_s)YE(Z|{\scr G}\_s)\right)\nonumber\\ &=&E\left(E\big(f(X^\perp\_{r\wedge\tau\_n}-X^\perp\_s)Y\big|{\scr G}\_s\big)Z\right).\end{aligned}$$ It remains to be shown that $E(f(X^\perp\_{r\wedge\tau\_n}-X^\perp\_s)Y|{\scr F}\_s)$ is in fact ${\scr G}\_s$-measurable because together with ([e:indep1]) this implies $$\begin{aligned} E\left(E\big(f(X^\perp\_{r\wedge\tau\_n}-X^\perp\_s)\big|{\scr G}\_\infty\big) E(Z|{\scr G}\_\infty)Y\right) &=&E\left(E\big(f(X^\perp\_{r\wedge\tau\_n}-X^\perp\_s)Y\big|{\scr F}\_s\big)Z\right)\\ &=&E\left(f(X^\perp\_{r\wedge\tau\_n}-X^\perp\_s)ZY\right)\end{aligned}$$ as claimed in ([e:CPII]). To this end, note that where *ψ*(*X*⊥, *X*∥, *M*), *ψ*(*X*∥, *M*) denote local exponents in the sense of Definition [Definition: lokaler Exponent 1. Version], $$\begin{aligned} M&=&\exp\Bigg(i\int\_0^\cdot(u1\_{(s,r\wedge\tau\_n]}(t),v\_1(t),v\_2(t))d(X^\perp,X^\|,M)\_t\\ &&{}-\int\_0^\cdot\psi^{(X^\perp,X^\|,M)}\_t(u1\_{(s,r\wedge\tau\_n]}(t),v\_1(t),v\_2(t))dt\Bigg),\end{aligned}$$ and *D* stands for the remaining factor. Since *D* is deterministic and *M* is a bounded local martingale and hence a martingale, we have $$\label{e:msd} E(M\_TD|{\scr F}\_s)=M\_sD,$$ which is ${\scr G}\_s$-measurable as desired. *Step 3:* Using the notation of Step 2, we show that $$\label{e:cpiicf} E\big(f(X^\perp\_r-X^\perp\_s)\big|{\scr G}\_\infty\big) =\exp\left(\int\_s^{r}(\Psi\_t(t,u)-\gamma(u,0))dt\right).$$ Indeed, first note that we may replace *r* with *r* ∧ *τ**n*, $n\in\mathbb N$ by right-continuity. Choosing ${\scr G}\_\infty$-measurable *Y* as in Step 2, we obtain using ([e:msd]): $$\begin{aligned} E\big(f(X^\perp\_{r\wedge\tau\_n}-X^\perp\_s)Y\big) &=&D\\ &=&E(\widetilde Y)\\ &=&E\left(\exp\Big(\int\_s^{r\wedge\tau\_n}(\Psi\_t(t,u)-\gamma(u,0))dt\Big)Y\right),\end{aligned}$$ which yields the assertion. *Step 4:* We now show uniqueness of the law of (*X*⊥, *X*∥, *M*, Ψ), which implies uniqueness of the law of (Ψ, *X*, *M*). To this end, observe that (*X*∥, *M*, Ψ) is ${\scr G}\_\infty$-measurable whereas the conditional law of *X*⊥ given ${\scr G}\_\infty$ is determined by the fact that *X*⊥ − *X*0 is a ${\scr G}\_\infty$-conditional PII with conditional characteristic function ([e:cpiicf]). Therefore, it suffices to prove uniqueness of the law of (*X*∥, *M*, Ψ). This uniqueness, on the other hand, follows from Step 1 and Statement 2 in Proposition [Proposition: SDE ist eindeutig]. 2. In Step 1 of the proof of Statement 1 it is shown that the codebook process Ψ of ([Gleichung:neu2]) solves SDE ([G:Psi SDE]), i.e. it coincides with Φ. It suffices to show ∫*t**T**η**s*(*r*,  ⋅ )*d**r* ∈ Π separately for *s* < *t* ≤ *T* and for *t* ≤ *T* ≤ *s*. By definition of option surface models, we have ∫*t**T*Ψ*s*(*r*,  ⋅ )*d**r* ∈ Π for *s* ≤ *t* ≤ *T*. Since *η**s*(*r*,  ⋅ ) = Ψ*s*(*r*,  ⋅ ) for *s* < *r*, this yields ∫*t**T**η**s*(*r*,  ⋅ )*d**r* ∈ Π for *s* < *t* ≤ *T*. For *r* ≤ *s* outside some Lebesgue-null set, we have $$\begin{aligned} \eta\_s(r,u)&=&\Psi\_r(r,u)-\gamma(u,0)\\ &=&\psi^X\_r(u)-\gamma(u,0)\\ &=&\psi^{X^\|}\_r(u)+\psi^{X^\perp}\_r(u)-\gamma(u,0)\\ &=&\psi^{X^\perp}\_r(u), \quad u\in{\mathbb R},\end{aligned}$$ where we used the consistency condition in the second equality. By Lemma [Lemma: Semimartingalprojektion] and Remark [r:pi] *u* ↦ *ψ**t**X*⊥(*u*) is in Π. Lemma [l:banach](3) yields that ∫*t**T**η**s*(*r*,  ⋅ )*d**r* ∈ Π for *t* ≤ *T* ≤ *s*. 3. *Construction of the codebook process*: By Theorem [S:Ex von PII] there is a Lévy process (*X*∥, *M*) on a complete filtered probability space $(\Omega^{(1)},\scr F^{(1)},(\scr F^{(1)}\_t)\_{t\in\mathbb R\_+},P^{(1)})$ such that its (extended) Lévy exponent is *γ*. Let Ψ be the ${L^1(\mathbb R\_+,E)}$-valued càdlàg solution to the SDE *d*Ψ*t* = *a*(*t*, Ψ*t* −)*d**t* + *b*(*t*, Ψ*t* −)*d**M**t*,  Ψ0 = *ψ*0 given by Proposition [Proposition: SDE ist eindeutig]. Ψ is an ${L^1(\mathbb R\_+,\Pi)}$-valued process because this even holds for *η* by assumption. It is not hard to find versions $$\begin{aligned} \alpha\_t(T,u) &:=& a(t,\Psi\_{t-})(T,u), \\ \beta\_t(T,u) &:=& b(t,\Psi\_{t-})(T,u)\end{aligned}$$ for any $t,T\in\mathbb R\_+$, $u\in\mathbb R$ and a version of Ψ such that Ψ*t*(*T*, *u*) = Ψ0(*T*, *u*) + ∫0*t* ∧ *T**α**s*(*T*, *u*)*d**s* + ∫0*t* ∧ *T**β**s*(*T*, *u*)*d**M**s* for any $t,T\in\mathbb R\_+$, $u\in\mathbb R$ almost surely. More precisely, one can choose versions of the ${L^1(\mathbb R\_+,E)}$-valued processes $\Psi,(a(t,\Psi\_{t-}))\_{t\in{\mathbb R \_+}},(b(t,\Psi\_{t-}))\_{t\in{\mathbb R \_+}}$ such that for any *T*, *u* the $\mathbb C$-valued process Ψ(*T*, *u*) is adapted, almost surely càdlàg, and satisfies ([e:SDEp]) almost surely. *Construction of the return process*: As usual, let $(\mathbb D,{\scr D},(\scr D\_t)\_{t\in\mathbb R\_+})$ denote the Skorokhod space of real-valued càdlàg functions. Let *X*⊥ be the canonical process on $\mathbb D$ and set $$\begin{aligned} \Omega &:=& \Omega^{(1)}\times\mathbb D,\\ \scr F &:=& \scr F^{(1)}\otimes\scr D,\\ \scr F\_t &:=& \bigcap\_{s>t}\big(\scr F\_s^{(1)}\otimes\scr D\_s\big).\end{aligned}$$ Fix *ω*1 ∈ Ω(1). Theorem [S:Ex von PII] yields that there is a probability measure *P*(2)(*ω*1,  ⋅ ) on $(\mathbb D,\scr D)$ such that *X*0⊥ = *x*0 a.s. and *X*⊥ − *X*0⊥ is a PII with characteristic function *u* ↦ exp(∫0*t**η*∞(*r*, *u*)(*ω*1)*d**r*) = exp(∫0*t**η**t*(*r*, *u*)(*ω*1)*d**r*). Measurability of *η* implies that *P*(2) is a transition kernel from $(\Omega^{(1)},{\scr F}^{(1)})$ to $(\mathbb D,\scr D)$. Therefore, *P*(*d*(*ω*1, *ω*2)) :  = (*P*(1) ⊗ *P*(2))(*d*(*ω*1, *ω*2)) :  = *P*(1)(*d**ω*1)*P*(2)(*ω*1, *d**ω*2) defines a probability measure *P* on $(\Omega,{\scr F})$. By abuse of notation we will use the same letters for the process *M*, Ψ, *X*∥, *X*⊥ embedded in the filtered probability space $(\Omega,\scr F,(\scr F\_t)\_{t\in\mathbb R\_+},P)$, i.e. we denote e.g. the process ((*ω*1, *ω*2), *t*) ↦ *M**t*(*ω*1) again by *M*. Set *X* :  = *X*∥ + *X*⊥ Observe that *X*⊥ − *X*0 is an $\scr F^{(1)}\otimes \{\emptyset,\mathbb D\}$-conditional PII relative to the filtration $({\scr G}\_t)\_{t\in{\mathbb R \_+}}$ defined by $${\scr G}\_t:=\bigcap\_{s>t}\big(\scr F^{(1)}\otimes\scr D\_s\big).$$ Denote by (*b*, *c*, *K*) its local characteristics relative to some truncation function *h*. Then yields $$iub\_t(\omega)-\frac{u^2}{2}c\_t(\omega)+\int(e^{iux}-1-iuh(x))K\_t(\omega,dx) =\eta(t,u)(\omega\_1)$$ for almost all *ω* = (*ω*1, *ω*2) ∈ Ω, $t\in\mathbb R\_+$, $u\in\mathbb R$. Both *X*⊥ and (*b*, *c*, *K*) are $(\scr F\_t)\_{t\in\mathbb R\_+}$-adapted. From Proposition [P:erhalt der lokalen char] it follows that *X*⊥ is a semimartingale with respect to this smaller filtration $(\scr F\_t)\_{t\in\mathbb R\_+}$ with the same local characteristics (*b*, *c*, *K*). We want to show that (*X*∥, *M*) is a Lévy process on ${(\Omega,{\scr F},({\scr F}\_t)\_{t\in\mathbb R\_+},P)}$ as well. By right-continuity of (*X*∥, *M*) it suffices to prove that $$E\big(U\big\vert\scr F\_s^{(1)}\otimes\scr D\_s\big)=E(U)$$ for any $s,t\in\mathbb R\_+$ with *s* ≤ *t* and any $u\in\mathbb R^2$, where $$U:=\exp\left(iu\big(({X^\parallel},M)\_t-({X^\parallel},M)\_s\big)\right).$$ It suffices to show that *E*(*U**V**W*) = *E*(*U*)*E*(*V**W*) for any bounded $\scr F^{(1)}\_s\otimes\{\emptyset,\mathbb D\}$-measurable *V* and any bounded $\{\emptyset,\Omega^{(1)}\}\otimes \scr D\_s$-measurable *W*. The conditional law of (*X**r*⊥)*r* ≤ *s* given $\scr F^{(1)}\otimes \{\emptyset,\mathbb D\}$ is $\scr F^{(1)}\_s\otimes \{\emptyset,\mathbb D\}$-measurable because *η*(*r*, *u*) is $\scr F^{(1)}\_s$-measurable for any *r* ≤ *s*. This implies $$E\big(W\big|\scr F^{(1)}\otimes \{\emptyset,\mathbb D\}\big) =E\big(W\big|\scr F^{(1)}\_s\otimes \{\emptyset,\mathbb D\}\big)$$ because *W* is a measurable function of (*X**r*⊥)*r* ≤ *s*. Moreover, *U* is independent of $\scr F^{(1)}\_s\otimes \{\emptyset,\mathbb D\}$ because (*X*∥, *M*) is a Lévy process on Ω(1). This yields $$\begin{aligned} E(UVW) &=&E\left(UVE\big(W\big|\scr F^{(1)}\otimes \{\emptyset,\mathbb D\}\big)\right)\\ &=&E\left(UVE\big(W\big|\scr F^{(1)}\_s\otimes \{\emptyset,\mathbb D\}\big)\right)\\ &=&E\left(E\big(U\big|\scr F^{(1)}\_s\otimes \{\emptyset,\mathbb D\}\big)VW\right)\\ &=&E(U)E(VW)\end{aligned}$$ as desired. *Compatibility of the constructed model*: We have *β**t* = *b*(*t*, Ψ*t* −), *X*0 = *x*0, Ψ0 = *ψ*0 and *γ* = *ψ*(*X*∥, *M*). We must show that the dependent part of *X* relative to *M* is *X*∥. Since *X*∥ is the dependent part of *X*∥ relative to *M*, it remains to be shown that *M* and *X*⊥ are locally independent. Since *M* is a subordinator, it suffices to prove that $$P(\exists t\in\mathbb R\_+:\Delta M\_t\neq0,\Delta X^\perp\_t\neq0)=0.$$ Let $J:=\{ t\in{\mathbb R \_+}:\Delta M\_t\neq 0\}$ denote the set of jump times of *M*. Then *J* is almost surely countable and $$\begin{aligned} P\big(\exists t\in\mathbb R\_+:\Delta M\_t\neq0,\Delta X^\perp\_t\neq0\big) &\leq& E\left(\sum\_{s\in J}1\_{\{\Delta X^\perp\_s\neq0\}}\right)\\ &=& E\left( \sum\_{s\in J} P\big(\Delta X^\perp\_s\neq 0\big\vert M\big)\right)\\ &=& 0\end{aligned}$$ because $$\begin{aligned} E\big(\exp(iu \Delta X^\perp\_s)\big\vert \scr F^{(1)}\otimes\{\emptyset,\mathbb D\}\big) = \exp\left(\int\_{s-}^s\eta(r,u)dr\right) =0,\quad u\in{\mathbb R}\end{aligned}$$ and hence *P*(Δ*X**s*⊥ ≠ 0|*M*) = 0. *Risk neutrality of the constructed model*: The constructed model satisfies the consistency and the drift condition. Hence Theorem [Satz: Bedingungen impl. Vollstaendig] yields risk-neutrality. Examples illustrating the previous result are to be found in Sections [Abschnitt:Verschwindendes Beta] and [Abschnitt: Affine Modelle] below. In general, however, it is not obvious why the solution to SDE should satisfy the condition in Statement 3 of Theorem [Satz: starker Existenzsatz]. If this is not the case, a compatible risk-neutral option surface model does not exist. As a way out, we introduce a weaker form of compatibility, which assumes to hold only up to some maximal stopping time. For related discussions on stochastic invariance problems, we refer the reader to. [Definition: schwach kompatibel] 1. An option surface model (*X*, Ψ0, *α*, *β*, *M*) is called **τ*-weakly compatible* with building blocks (*x*0, *ψ*0, *b*, *γ*) if * *τ* is a stopping time, * *X*0 = *x*0, * Ψ0 = *ψ*0, * ${\mathbb R \_+}\to{L^1(\mathbb R\_+,E)}$, *t* ↦ Ψ*t*(*ω*) is well defined and a.s. càdlàg, * *β**t*(*ω*) = *b*(*t*, Ψ*t* −(*ω*)) for *d**P* ⊗ *d**t*-almost any (*ω*, *t*) ∈ [ [0, *τ*] ], * $\psi^{(X^\Vdash,M)}(u,v)=\gamma(u,v)$ for $(u,v)\in\mathbb R\times({\mathbb R}+i{\mathbb R \_+})$, where $X^\Vdash$ denotes some process which coincides on [ [0, *τ*] ] with the dependent part *X*∥ of *X* relative to *M*. 2. Let (*X*, Ψ0, *α*, *β*, *M*) denote an option surface model which is *τ*-weakly compatible with building blocks (*x*0, *ψ*0, *b*, *γ*). It is called *maximal weakly compatible* if * $$\label{e:tau} \tau =\inf\big\{t\in\mathbb R\_+:\eta\_t\notin{L^1(\mathbb R\_+,\Pi)}\big\}\quad\text{a.s},$$ where Φ denotes the unique ${L^1(\mathbb R\_+,E)}$-valued solution to SDE from Proposition [Proposition: SDE ist eindeutig] and *η* is defined as in ([e:eta]), * *t* ↦ Ψ*t*(*T*, *u*) from ([Gleichung:neu2]) stays constant after *τ*. We can now state our general existence and uniqueness result. [Satz: schwacher Existenz und Eindeutigkeitssatz] Let (*x*0, *ψ*0, *b*, *γ*) be building blocks. 1. There exists a maximal weakly compatible and risk-neutral option surface model. 2. 3. If a compatible risk-neutral option surface model exists, then any maximal weakly compatible risk-neutral option surface model is in fact compatible. 1. Let $(X^\Vdash,M)$ be a Lévy process with characteristic exponent *γ*. Define *a* as in Proposition [Proposition: SDE ist eindeutig], Φ as the unique ${L^1(\mathbb R\_+,E)}$-valued solution to SDE, and the ${L^1(\mathbb R\_+,E)}$-valued adapted càdlàg process *η* as in ([e:eta]). and yield that there is a stopping time *τ* which satisfies Equation. Set $$\begin{aligned} \Psi\_t &:=& \Phi\_{t\wedge\tau},\nonumber\\ \alpha\_t(T,u) &:=& a(t,\Psi\_{t-})(T,u)1\_{[\![ 0,\tau]\!]}(t)\label{e:dreia},\\ \beta\_t(T,u) &:=& b(t,\Psi\_{t-})(T,u)1\_{[\![ 0,\tau]\!]}(t).\label{e:dreib}\end{aligned}$$ More specifically, it is not hard to find versions of the right-hand sides of ([e:dreia],[e:dreib]) such that ([e:SDEp]) holds up tp *τ*. We have Ψ0 = *ψ*0. Along the same lines as in the proof of Theorem [Satz: starker Existenzsatz] we can now construct a return process *X* such that (*X*, Ψ0, *α*, *β*, *M*) is a *τ*-weakly compatible and risk-neutral option surface
arxiv_0000316
-rate control algorithm for variable bit-rate applications of versatile video coding standard,” *Signal Processing: Image Communication*, vol. 96, p. 116317, 2021. M. Wang, J. Zhang, L. Huang, and J. Xiong, “Machine learning-based rate distortion modeling for vvc/h. 266 intra-frame,” in *2021 IEEE International Conference on Multimedia and Expo (ICME)*.1em plus 0.5em minus 0.4emIEEE, 2021, pp. 1–6. L. Xu, S. Ma, D. Zhao, and W. Gao, “Rate control for scalable video model,” in *Visual Communications and Image Processing 2005*, S. Li, F. Pereira, H.-Y. Shum, and A. G. Tescher, Eds., vol. 5960, International Society for Optics and Photonics.1em plus 0.5em minus 0.4emSPIE, 2005, pp. 525 – 534. Y. Liu, Z. G. Li, and Y. C. Soh, “Rate control of h.264/avc scalable extension,” *IEEE Transactions on Circuits and Systems for Video Technology*, vol. 18, no. 1, pp. 116–121, 2008. Y. Pitrey, M. Babel, O. Déforges, and J. Viéron, “*ρ*-domain based rate control scheme for spatial, temporal, and quality scalable video coding,” in *Visual Communications and Image Processing 2009*, M. Rabbani and R. L. Stevenson, Eds., vol. 7257, International Society for Optics and Photonics.1em plus 0.5em minus 0.4emSPIE, 2009, pp. 25 – 32. S. Hu, H. Wang, S. Kwong, T. Zhao, and C.-C. J. Kuo, “Rate control optimization for temporal-layer scalable video coding,” *IEEE Transactions on Circuits and Systems for Video Technology*, vol. 21, no. 8, pp. 1152–1162, 2011. Y. Li, D. Liu, and Z. Chen, “Ahg9-related: Cnn-based lambda-domain rate control for intra frames (jvet-m0215),” in *Joint Video Experts Team (JVET) 13th Meeting, Marrakech, MA*, Jan. 2019. G. Lu, W. Ouyang, D. Xu, X. Zhang, C. Cai, and Z. Gao, “Dvc: An end-to-end deep video compression framework,” in *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, 2019, pp. 11006–11015. X. Jing, L.-P. Chau, and W.-C. Siu, “Frame complexity-based rate-quantization model for h.264/avc intraframe rate control,” *IEEE Signal Processing Letters*, vol. 15, pp. 373–376, 2008. Z. Liu, H. Yan, L. Shen, Y. Wang, and Z. Zhang, “A motion attention model based rate control algorithm for h.264/avc,” in *2009 Eighth IEEE/ACIS International Conference on Computer and Information Science*, 2009, pp. 568–573. L. Shen, Z. Liu, and Z. Zhang, “A novel h.264 rate control algorithm with consideration of visual attention,” *Multimed Tools Appl*, vol. 63, p. 709–727, 2013. J. Ribas-Corbera and S. Lei, “Rate control in dct video coding for low-delay communications,” *IEEE Transactions on Circuits and Systems for Video Technology*, vol. 9, no. 1, pp. 172–185, 1999. C.-W. Seo, J.-H. Moon, and J.-K. Han, “Rate control for consistent objective quality in high efficiency video coding,” *IEEE transactions on image processing*, vol. 22, no. 6, pp. 2442–2454, 2013. W.-N. Lie, C.-F. Chen, and T. C.-I. Lin, “Two-pass rate-distortion optimized rate control technique for h. 264/avc video,” in *Visual Communications and Image Processing 2005*, vol. 5960.1em plus 0.5em minus 0.4emInternational Society for Optics and Photonics, 2005, p. 596035. S. Wang, A. Rehman, K. Zeng, and Z. Wang, “Ssim-inspired two-pass rate control for high efficiency video coding,” in *2015 IEEE 17th International Workshop on Multimedia Signal Processing (MMSP)*, 2015, pp. 1–5. I. Zupancic, M. Naccari, M. Mrak, and E. Izquierdo, “Two-pass rate control for improved quality of experience in uhdtv delivery,” *IEEE Journal of Selected Topics in Signal Processing*, vol. 11, no. 1, pp. 167–179, 2017. C. Meenderinck, A. Azevedo, B. Juurlink, M. Alvarez Mesa, and A. Ramirez, “Parallel scalability of video decoders,” *Journal of Signal Processing Systems*, vol. 57, no. 2, pp. 173–194, 2009. Z. Ling, X. J. Jiang, and J. J. Liu, “Efficiency of dynamic gop length in video stream,” in *Advanced Materials Research*, vol. 765.1em plus 0.5em minus 0.4emTrans Tech Publ, 2013, pp. 879–884. M. Roitzsch, “Slice-balancing h. 264 video encoding for improved scalability of multicore decoding,” in *Proceedings of the 7th ACM & IEEE international conference on Embedded software*, 2007, pp. 269–278. C. Blumenberg, D. Palomino, S. Bampi, and B. Zatt, “Adaptive content-based tile partitioning algorithm for the hevc standard,” in *2013 Picture Coding Symposium (PCS)*.1em plus 0.5em minus 0.4emIEEE, 2013, pp. 185–188. M. Koziri, P. K. Papadopoulos, and T. Loukopoulos, “Combining tile parallelism with slice partitioning in video coding,” in *Applications of Digital Image Processing XLI*, vol. 10752.1em plus 0.5em minus 0.4emInternational Society for Optics and Photonics, 2018, p. 107520N. M. Karczewicz, N. Hu, J. Taquet, C.-Y. Chen, K. Misra, K. Andersson, P. Yin, T. Lu, E. François, and J. Chen, “Vvc in-loop filters,” *IEEE Transactions on Circuits and Systems for Video Technology*, vol. 31, no. 10, pp. 3907–3925, 2021. E. B. Van Der Tol, E. G. Jaspers, and R. H. Gelderblom, “Mapping of h. 264 decoding on a multiprocessor architecture,” in *Image and Video Communications and Processing 2003*, vol. 5022.1em plus 0.5em minus 0.4emInternational Society for Optics and Photonics, 2003, pp. 707–718. M. Alvarez-Mesa, V. George, T. Schierl, and B. Juurlink, “Improving parallelization efficiency of wpp using overlapped wavefront,” *Joint Collaborative Team on Video Coding (JCT-VC), Document JCTVC-J0425, Stockholm*, vol. 3, 2012. G. Clare and F. Henry, “An hevc transcoder converting non-parallel bitstreams to/from wpp,” *Joint Collaborative Team on Video Coding (JCT-VC), Document JCTVC-J0032, Stockholm*, 2012. J. Sainio, A. Mercat, and J. Vanne, “Parallel implementations of lambda domain and r-lambda model rate control schemes in a practical hevc encoder,” in *2021 Data Compression Conference (DCC)*, 2021, pp. 368–368. P. Xu, K. Chen, J. Sun, X. Ji, and Z. Guo, “An adaptive intra-frame parallel method based on complexity estimation for hevc,” in *2016 Visual Communications and Image Processing (VCIP)*, 2016, pp. 1–4. S. Wang, S. Zhang, J. Wang, L. Chang, L. Feng, and X. Fan, “Hardware architecture design of hevc entropy decoding,” in *2021 IEEE Intl Conf on Parallel & Distributed Processing with Applications, Big Data & Cloud Computing, Sustainable Computing & Communications, Social Computing & Networking (ISPA/BDCloud/SocialCom/SustainCom)*, 2021, pp. 1143–1150. S. Xu, M. Yu, S. Fang, Z. Peng, X. Wang, and G. Jiang, “New rate control optimization algorithm for hevc aiming at discontinuous scene,” *WSEAS Transactions on computers*, vol. 14, no. 1, pp. 598–606, 2015. F. Song, C. Zhu, Y. Liu, and Y. Zhou, “A new gop level bit allocation method for hevc rate control,” in *2017 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB)*.1em plus 0.5em minus 0.4emIEEE, 2017, pp. 1–4. L. Li, B. Li, H. Li, and C. W. Chen, “*λ* domain optimal bit allocation algorithm for high efficiency video coding,” *IEEE Transactions on Circuits and Systems for Video Technology*, vol. 28, no. 1, pp. 130–142, 2016. H. Schwarz, D. Marpe, and T. Wiegand, “Overview of the scalable video coding extension of the h.264/avc standard,” *IEEE Transactions on Circuits and Systems for Video Technology*, vol. 17, no. 9, pp. 1103–1120, 2007. M. Flierl and B. Girod, “Generalized b pictures and the draft h.264/avc video-compression standard,” *IEEE Transactions on Circuits and Systems for Video Technology*, vol. 13, no. 7, pp. 587–597, 2003. K. Hwang and N. Jotwani, *Advanced computer architecture: parallelism, scalability, programmability*.1em plus 0.5em minus 0.4emMcGraw-Hill New York, 1993, vol. 199. R. E. Bryant, O. David Richard, and O. David Richard, *Computer systems: a programmer’s perspective*.1em plus 0.5em minus 0.4emPrentice Hall Upper Saddle River, 2003, vol. 2. C. Ranger, R. Raghuraman, A. Penmetsa, G. Bradski, and C. Kozyrakis, “Evaluating mapreduce for multi-core and multiprocessor systems,” in *2007 IEEE 13th International Symposium on High Performance Computer Architecture*.1em plus 0.5em minus 0.4emIeee, 2007, pp. 13–24. F. Bossen, J. Boyce, K. Suehring, X. Li, and V. Seregin, “Jvet common test conditions and software reference configurations for sdr video (joint video experts team (jvet) of itu-t sg 16 wp 3 and iso/iec jtc 1/sc 29/wg 11, jvet-n1010-v1),” in *Joint Video Exploration Team (JVET) 14th Meeting, Geneva, CH*, Apr. 2019. F. Bossen, “Common conditions and software reference configurations (jvt-j1100),” in *Joint Collaborative Team on Video Coding (JCT-VC) 8th Meeting*, Feb. 2012. G. Bjontegaard, “Calculation of average psnr differences between rd-curves,” *VCEG-M33*, 2001. L. Bai, L. Song, R. Xie, L. Zhang, and Z. Luo, “Rate control model for high dynamic range video,” in *2017 IEEE Visual Communications and Image Processing (VCIP)*, 2017, pp. 1–4. J. Kim, S.-H. Bae, and M. Kim, “An hevc-compliant perceptual video coding scheme based on jnd models for variable block-sized transform kernels,” *IEEE Transactions on Circuits and Systems for Video Technology*, vol. 25, no. 11, pp. 1786–1800, 2015. H. Wei, X. Zhou, W. Zhou, C. Yan, Z. Duan, and N. Shan, “Visual saliency based perceptual video coding in hevc,” in *2016 IEEE International Symposium on Circuits and Systems (ISCAS)*, 2016, pp. 2547–2550. “Jm 9.4.” [Online]. Available: <http://iphome.hhi.de/suehring/tml/download> M. Zhou, X. Wei, C. Ji, T. Xiang, and B. Fang, “Optimum quality control algorithm for versatile video coding,” *IEEE Transactions on Broadcasting*, pp. 1–12, 2022. X. Meng, C. Jia, X. Zhang, S. Wang, and S. Ma, “Spatio-temporal correlation guided geometric partitioning for versatile video coding,” *IEEE Transactions on Image Processing*, vol. 31, pp. 30–42, 2022. Z. Huang, K. Lin, C. Jia, S. Wang, and S. Ma, “Beyond vvc: Towards perceptual quality optimized video compression using multi-scale hybrid approaches,” in *2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)*, 2021, pp. 1866–1869. M. Zhou, X. Wei, W. Jia, and S. Kwong, “Joint decision tree and visual feature rate control optimization for vvc uhd coding,” *IEEE Transactions on Image Processing*, vol. 32, pp. 219–234, 2023. B. Bross, J. Chen, J.-R. Ohm, G. J. Sullivan, and Y.-K. Wang, “Developments in international video coding standardization after avc, with an overview of versatile video coding (vvc),” *Proceedings of the IEEE*, vol. 109, no. 9, pp. 1463–1493, 2021. M. Benjak, H. Meuel, T. Laude, and J. Ostermann, “Enhanced machine learning-based inter coding for vvc,” in *2021 International Conference on Artificial Intelligence in Information and Communication (ICAIIC)*, 2021, pp. 021–025. M. Saldanha, G. Sanchez, C. Marcon, and L. Agostini, “Configurable fast block partitioning for vvc intra coding using light gradient boosting machine,” *IEEE Transactions on Circuits and Systems for Video Technology*, pp. 1–1, 2021. K. Fischer, C. Herglotz, and A. Kaup, “On versatile video coding at uhd with machine-learning-based super-resolution,” in *2020 Twelfth International Conference on Quality of Multimedia Experience (QoMEX)*, 2020, pp. 1–6. N. Le, H. Zhang, F. Cricri, R. Ghaznavi-Youvalari, H. R. Tavakoli, and E. Rahtu, “Learned image coding for machines: A content-adaptive approach,” in *2021 IEEE International Conference on Multimedia and Expo (ICME)*, 2021, pp. 1–6. H. Wang, L. Yu, J. Liang, H. Yin, T. Li, and S. Wang, “Hierarchical predictive coding-based jnd estimation for image compression,” *IEEE Transactions on Image Processing*, vol. 30, pp. 487–500, 2021. Y. Li and X. Mou, “Joint optimization for ssim-based ctu-level bit allocation and rate distortion optimization,” *IEEE Transactions on Broadcasting*, vol. 67, no. 2, pp. 500–511, 2021. A. Nakhaei and M. Rezaei, “Scene-level two-pass video rate controller for h. 265/hevc standard,” *Multimedia Tools and Applications*, vol. 80, no. 5, pp. 7023–7038, 2021. J. Chen, X. Luo, M. Hu, D. Wu, and Y. Zhou, “Sparkle: User-aware viewport prediction in 360-degree video streaming,” *IEEE Transactions on Multimedia*, vol. 23, pp. 3853–3866, 2021. F. Chiariotti, “A survey on 360-degree video: Coding, quality of experience and streaming,” *Comput. Commun.*, vol. 177, pp. 133–155, 2021. T. Zhao, J. Lin, Y. Song, X. Wang, and Y. Niu, *Game Theory-Driven Rate Control for 360-Degree Video Coding*.1em plus 0.5em minus 0.4emNew York, NY, USA: Association for Computing Machinery, 2021, p. 3998–4006. V. Sanchez, “Rate control for predictive transform screen content video coding based on ransac,” *IEEE Transactions on Circuits and Systems for Video Technology*, vol. 31, no. 11, pp. 4422–4438, 2021. K. Perez-Daniel, F. Garcia-Ugalde, and V. Sanchez, “Scene-based imperceptible-visible watermarking for hdr video content,” in *2019 7th International Workshop on Biometrics and Forensics (IWBF)*, 2019, pp. 1–6. H. Yuan, Q. Wang, Q. Liu, J. Huo, and P. Li, “Hybrid distortion-based rate-distortion optimization and rate control for h.265/hevc,” *IEEE Transactions on Consumer Electronics*, vol. 67, no. 2, pp. 97–106, 2021. Q. Zhang, H. Meng, Z. Feng, and Z. Han, “Resource scheduling of time-sensitive services for b5g/6g connected automated vehicles,” *IEEE Internet of Things Journal*, pp. 1–1, 2022. Y. Cui, Q. Zhang, Z. Feng, Z. Wei, C. Shi, J. Fan, and P. Zhang, “Dual identities enabled low-latency visual networking for uav emergency communication,” in *GLOBECOM 2022 - 2022 IEEE Global Communications Conference*, 2022, pp. 474–479. A. M. Girgis, J. Park, M. Bennis, and M. Debbah, “Predictive control and communication co-design via two-way gaussian process regression and aoi-aware scheduling,” *IEEE Transactions on Communications*, vol. 69, no. 10, pp. 7077–7093, 2021. [![image](author_photo/Xuekai_Wei)]Xuekai Wei received the bachelor’s degree in Electronic Information Science and Technology from Shandong University in 2014, the master’s degree in Communication and Information Systems from Shandong University in 2017, and the Ph.D. degree in computer science from the City University of Hong Kong, Hong Kong, China in 2021. He was a post-doctoral at the School of Artificial Intelligence, Beijing Normal University, Beijing, China, from 2021 to 2022. He is currently an Associate Professor with the School of Computer Science, Chongqing University, Chongqing, China. His current research interests include video coding/transmission and machine learning. [![image](author_photo/Mingliang_Zhou)]Mingliang Zhou received the Ph.D. degree in computer science from Beihang University, Beijing, China, in 2017. He was a Postdoctoral Researcher with the Department of Computer Science, City University of Hong Kong, Hong Kong, China, from September 2017 to September 2019. He was a Postdoctoral Fellow with the State Key Laboratory of Internet of Things for Smart City, University of Macau, Macau, China, from October 2019 to October 2021. He is currently an Associate Professor with the School of Computer Science, Chongqing University, Chongqing, China. His research interests include image and video coding, perceptual image processing, multimedia signal processing, rate control, multimedia communication, machine learning, and optimization. [![image](./author_photo/Heqiang_Wang)] Heqiang Wang is currently a M.S. student of Computer Science in Chongqing University. He received his bachelor degree of Mechanical Design Manufacturing and Automation in Southwest University. His research interests include perceptual image quality assessment and video coding. [![image](./author_photo/Haoyan_Yang)] Haoyan Yang is currently a M.S. student of Electronic Information in Chongqing University. He received his bachelor degree of Software Engineering in Nanjing Audit University. His research interests is video coding. [![image](./author_photo/Lei_Chen)] Lei Chen is currently a M.S. student of Computer Science in Chongqing University. He received his bachelor degree of Medical Information Engineering in Zhejiang Chinese Medical University. His research interests include image super-resolution and video coding. [![image](author_photo/Sam_Kwong)]Sam Kwong (M’93-SM’04-F’14) received the B.S. degree from the State University of New York at Buffalo, Buffalo, NY, USA, in 1983, the M.S. degree in electrical engineering from the University of Waterloo, Waterloo, ON, Canada, in 1985, and the Ph.D. degree from the University of Hagen, Hagen, Germany, in 1996. From 1985 to 1987, he was a Diagnostic Engineer with Control Data Canada, Mississauga, ON, Canada. He joined Bell Northern Research Canada, Ottawa, ON, Canada, as a member of Scientific Staff. In 1990, he became a Lecturer in the Department of Electronic Engineering, The City University of Hong Kong, Hong Kong, China, where he is currently a Professor with the Department of Computer Science. His research interests include video and image coding, and evolutionary algorithms. Prof. Kwong serves as an Associate Editor for the IEEE Transactions on Industrial Electronics and the IEEE Transactions on Industrial Informatics. --- 1. This work was supported in part by the National Natural Science Foundation of China under Grant 62176027; in part by the General Program of the National Natural Science Foundation of Chongqing under Grant cstc2020jcyj-msxmX0790; in part by the Human Resources and Social Security Bureau Project of Chongqing under Grant cx2020073; in part by the Hong Kong GRF-RGC General Research Fund under Grant 11209819 and Grant CityU 9042816; in part by the Hong Kong GRF-RGC General Research Fund under Grant 11203820 and Grant CityU 9042598; and in part by the Hong Kong Innovation and Technology Commission, InnoHK Project Centre for Intelligent Multidimensional Data Analysis (CIMDA). (\*Corresponding author: Mingliang Zhou.)[↩](#fnref1) 2. Xuekai Wei, Mingliang Zhou, Heqiang Wang, Haoyan Yang, and Lei Chen are with the School of Computer Science, Chongqing University, Chongqing 400044, China (e-mail: [email protected], [email protected], [email protected], [email protected], [email protected]).[↩](#fnref2) 3. Sam Kwong is with the Department of Computer Science, City University of Hong Kong, Kowloon 999077, Hong Kong (e-mail: [email protected]).[↩](#fnref3) Recent Advances in Rate Control: From Optimisation to Implementation and Beyond =============================================================================== Video coding is a video compression technique that compresses the original video sequence to produce a smaller archive file or reduce the transmission bandwidth under constraints on the visual quality loss. Rate control (RC) plays a critical role in video coding. It can achieve stable stream output in practical applications, especially real-time video applications such as video conferencing or game live streaming. Most RC algorithms either directly or indirectly characterise the relationship between the bit rate (R) and quantisation (Q) and then allocate bits to every coding unit so as to guarantee the global bit rate and video quality level. This paper comprehensively reviews the classic RC technologies used in international video standards of past generations, analyses the mathematical models and implementation mechanisms of various schemes, and compares the performance of recent state-of-the-art RC algorithms. Finally, we discuss future directions and new application areas for RC methods. We hope that this review can help support the development, implementation, and application of RC for new video coding standards. Video coding, rate control, AVC, HEVC, VVC Introduction ============ coding is a technology for converting uncompressed digital video signals into standardised and decodable formats that consume less space. As the speed supported by internet infrastructures, including fixed broadband, mobile networking and Wi-Fi, has increased, video traffic has come to dominate global data traffic; video transmission currently represents approximately 82% of all traffic, and this percentage is still rising. However, uncompressed videos occupy a large quantity of bits, and this high data volume greatly limits various video applications if suitable coding or compression is not applied. To address this issue, multiple generations of video coding standards have been successively proposed through collaboration among several international standards-setting organisations, e.g., the International Telecommunication Union Telecommunication Standardization Sector (ITU-T) Video Coding Experts Group (VCEG) and the International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC) Moving Picture Experts Group (MPEG). The most well-known recent standards are the Advanced Video Coding (AVC) standard, the High Efficiency Video Coding (HEVC) standard and the Versatile Video Coding (VVC) standard. Throughout the history of video standards, each generation of standards has offered significantly improved rate–distortion performance, with the main objective of either reducing the coding bit rate as much as possible while ensuring a certain video quality or reducing the coding distortion as much as possible while maintaining a certain coding bit rate limit. Usually, the original image is divided into multiple square blocks of pixels, which are processed in sequence, followed by intra-frame/inter-frame prediction, transformation and inverse transformation, quantisation and inverse quantisation, loop filtering, and entropy coding to finally obtain a video stream. [!t] [img1] [img2a] Rate control (RC) is a mechanism for determining how many bits are to be transmitted during the encoding process, which is useful because the available bandwidth for video transmission is usually limited. To support the effective transmission of video data while guaranteeing the playback quality of the video service under the condition that all relevant channel bandwidth and transmission delay constraints are met, an RC mechanism selects a series of encoding parameters so as to simultaneously ensure that the bit rate of the to-be-encoded video satisfies the required rate limit and that the encoding distortion is as low as possible. The coding parameters usually include partition models, prediction models and quantisation parameters (QPs). Since a large number of intra-frame and inter-frame prediction techniques are applied in video compression algorithms, the levels of rate–distortion performance achieved for the numerous coding units are interdependent, and the QP of each coding unit is directly determined in accordance with rate–distortion optimisation (RDO) technology. The complexity of obtaining straightforward closed-form solutions for the coding parameters is extremely high. Therefore, an actual RC scheme is usually divided into two steps. First, bits are assigned to each basic coding unit so as to achieve the minimum distortion in accordance with the total bit budget; this process is called bit allocation. Second, in accordance with the relationship model between the coding rate and the QP, the QP is independently determined for each coding unit in accordance with its target number of bits. Because an RC module is a necessary component of any video encoder, all video coding standards have their own recommended RC models, such as TM5 of MPEG-2, TMN8 of H.263, JVT-G012 of H.264/AVC, JCTVC-H0213 and JCTVC-K0103 of H.265/HEVC, and JVET-K0390 of H.266/VVC. From the perspectives of mathematical models and realisation mechanisms, various possible RC schemes have been extensively explored. These schemes have essentially been developed to explore the relationship between rate (R) and distortion (D); that is, different models are based on different R-D curves. Based on the assumption that the encoder can determine the target bit rate by choosing a suitable Q, R-Q-based RC involves estimating the relationship between R and D in the QP domain; hence, these schemes are called Q-domain RC schemes. Many works have followed this idea. Liu *et al.* proposed an accurate linear R-Q model to characterise the relationship between the total R for both texture and nontexture information and the QP. Hu *et al.* proposed a frame-level RC algorithm that employs bit information in the RDO process instead of the mean absolute deviation (MAD) of the residuals to predict the complexity of each frame and uses a self-adaptive exponential R-Q model to apply RC. Choi *et al.* proposed a unified R-Q (URQ) model that can be employed for RC at any level (groups of pictures (GOPs), frames, or basic units) because it captures the relationship between the target rate R and the QP value for a pixel. R-Q-based RC algorithms are widely used in AVC. However, an investigation of JCTVC-I0426 has revealed that the slope *λ* of the R-D curve is more important than the QP for bit rate determination. Therefore, Li *et al.* proposed a *λ*-domain RC method and implemented it in the HEVC standard. Karczewicz *et al.* proposed an R-*λ*-based RC scheme for intra frames/slices based on the sum of absolute transformed differences (SATD) rather than the mean square error (MSE). Fang *et al.* proposed an R-*λ*-based RC method with a preencoding process. There is also another kind of RC algorithm that builds an association between R and the percentage of zeros among the quantised transform coefficients (*ρ*). He *et al.* estimated an R-D function with low computational complexity in the *ρ* domain and proposed an encoder-based rate-shape-smoothing algorithm. Liu *et al.* proposed a new linear model to obtain the QP at the frame level, for which the model parameters in the *ρ* domain can be adaptively estimated from temporal or interlayer information. Additional milestone papers and studies are also represented in Fig. [img1]. We can observe that R-Q models are applied most frequently in H.264, whereas R-*λ* models are widely adopted in H.265 and H.266, reflecting the superiority of the latter. Another observation is that RC methods are usually implemented at the frame or block level based on considerations regarding the effectiveness and complexity of the encoding process. [img2b] In this paper, we provide a comprehensive review and survey of RC schemes, from the most widely used methods in engineering to the latest research directions in academia. Moreover, a variety of mathematical models of the relationship between R and Q are introduced and analysed to explore their advantages and disadvantages. Different kinds of RC algorithms are also compared in terms of coding efficiency and time consumption. Finally, recent advances in RC research and future research directions are considered. The remainder of this paper is organised as follows. Section introduces and analyses RC models from different perspectives. In Section, the performance comparison of different RC schemes is discussed. Section surveys recent and future RC techniques, and Section concludes the paper. Brief Overview of Rate Control ============================== Before we begin exploring recent advances in RC, we introduce the RC structures used in HEVC and VVC. In HEVC, the RC process begins with parameter initialisation; then, bit allocation is performed at three main levels, namely, the GOP level, the frame level and the coding tree unit (CTU) level. The parameters are updated when the features of the coding units change. In particular, QP determination is an important step in RC. Compared with HEVC, the skip mode at the CTU level is modified in VVC. More details are shown in Fig. [img2a] and [img2b]. This section provides a brief introduction to RC models, divided into six parts: R-Q models, exponential R-*λ* models, R-*ρ* models, deep learning models, scalable video coding and other types of models. [!t] [tab2] cccc Author & Category & Year & Key idea Lee *et al.* & Macroblock & 2000 & He *et al.* & Frame level & 2003 & Jiang *et al.* & Frame level & 2004 & Xu *et al.* & Frame level & 2004 & Li *et al.* & Basic unit & 2004 & Ma *et al.* & & 2005 & Yuan *et al.* & Frame level & 2006 & Kwon *et al.* & Macroblock & 2007 & Liu *et al.* & Macroblock & 2007 & Wang *et al.* & Macroblock & 2008 & Tsai *et al.* & Intra frame & 2010 & Hu *et al.* & Frame level & 2010 & Hu *et al.* & Region-based & 2012 & Liang *et al.* & CTU level & 2013 & Wang *et al.* & Frame level & 2013 & Tian *et al.* & Frame level & 2014 & Wu *et al.* & CTU level & 2016 & Hosking *et al.* & Frame level & 2016 & Mao *et al.* & Frame level & 2020 & Helmrich *et al.* & Frame level & 2021 & [!t] [tab1] cccc Author & Category & Year & Key idea Wang *et al.* & Three joint layers & 2009 & Lee *et al.* & Frame level & 2013 & Li *et al.* & Frame level and CTU level & 2014 & Meddeb *et al.* & Frame level and CU level & 2014 & Yang *et al.* & Frame level and CTU level & 2014 & Zhou *et al.* & CTU level & 2014 & Wang *et al.* & Frame level & 2015 & Zeng *et al.* & CTU level & 2015 & Zhou *et al.* & CTU level & 2016 & Li *et al.* & CTU level & 2016 & Wang *et al.* & CTU level & 2016 & Sanchez *et al.* & Frame level and CU level & 2018 & Gong *et al.* & Frame level & 2017 & Perez-Daniel *et al.* & Block level & 2018 & Guo *et al.* & Frame level & 2018 & Li *et al.* & CTU level & 2018 & Mir *et al.* & Frame level & 2018 & Zhou *et al.* & CTU level & 2019 & Lim *et al.* & CTU level & 2019 & Chen *et al.* & CTU level & 2019 & Zhou *et al.* & CTU level & 2020 & Hyun *et al.* & Frame level & 2020 & Chen *et al.* & Frame level & 2020 & Li *et al.* & Frame level & 2020 & Liu *et al.* & CTU level & 2021 & R-Q methods ----------- The RC algorithms for H.264 adopt a variety of techniques, including the adaptive basic unit layer (ABUL) approach, the fluid traffic model (FTM), a linear MAD model, and a quadratic rate–distortion model. A hierarchical bit rate control strategy considering the GOP level, the frame level and the basic unit level is adopted. In the Joint Video Team (JVT)’s proposal, the JVT-G012 bit rate control algorithm is adopted. This algorithm introduces the concept of a basic unit, which is used to divide each frame into several basic units. The basic unit may be a macroblock, a row of macroblocks, a field or a frame. In frame-level bit rate control, the target number of bits per frame is allocated based on the network bandwidth, buffer usage, buffer size, and remaining bits. In basic-unit-level bit rate control, the target bits are averaged based on the remaining target bits for the frame. The quadratic R-Q model adopted in the JVT-G012 algorithm for H.264 is as follows: *R* = *M**A**D* × (*X*1/*Q**s**t**e**p* + *X*2/*Q**s**t**e**p*2) where *R* represents the number of coding bits required by the encoding quantisation coefficient and *Q**s**t**e**p* denotes the quantisation step size of the basic units. *X*1 and *X*2 are the model coefficients. The MAD is predicted through the following linear prediction model: *M**A**D**c**b* = *a*1 × *M**A**D**p**b* + *a*2 where *M**A**D**c**b* and *M**A**D**p**b* denote the MADs for the current basic unit and the corresponding position in the previous frame, respectively, and *a*1 and *a*2 are model coefficients, which are updated through linear regression during the processing of the last macroblock of each basic unit. Following the emergence of quadratic models, the authors of further developed models of this kind to increase their coding efficiency. Lee *et al.* proposed a scalable RC scheme with a more accurate second-order R-D model. Xu *et al.* first offered a solution to the chicken-and-egg problem between RC and RDO in H.264 and assigned different numbers of bits to different modes accordingly to avoid the regime of poor behaviour of quadratic R-D models. Yuan *et al.* presented an adaptive coding feature prediction method using spatiotemporal correlations to improve the accuracy of R-D modelling. The works in were both developed based on linear models. He *et al.* presented a linear RC (LRC) algorithm for the JVT encoder combined with a simple scheme for collecting frame-level statistics. Ma *et al.* proposed a new R-D model using the real quantisation step size and, on this basis, proposed an improved RC scheme for the H.264/AVC encoder. With the development of R-Q models, the works in took the complexity of the frame content into consideration in R-Q modelling. Jiang *et al.* mitigated video distortion due to strong motion or scene changes by using statistics from previously encoded frames to more accurately predict frame complexity. Kwon *et al.* developed an RC algorithm under a constant bit rate constraint for the H.264 baseline profile encoder. Hu *et al.* first adopted a two-stage RC scheme to decouple RDO and RC, then used bit information to predict the frame complexity in the mode decision process for RDO, and finally proposed an adaptive exponential R-Q model for RC. The works in used adaptive coding methods. Wang *et al.* proposed an R-D-optimised RC algorithm with adaptive initialisation for H.264. In contrast to traditional RC, the area-based RC method proposed by Hu *et al.* can adaptively control the rate in accordance with the content to attain better subjective and objective quality. Tsai *et al.* proposed determining the QPs of general intra frames and scene-change frames by means of a rate–quantisation step size (QS) model and a scene-change-aware model based on Taylor series. For HEVC, the work in proposed a logarithmic model to describe the relation between distortion and rate. Liang *et al.* determined that the rate and QP approximately obey a logarithmic relation and proposed a logarithmic R-Q model for HEVC. The works in mainly considered the low-delay (LD) case. Wu *et al.* proposed an RC scheme for LD video coding considering the temporal prediction structure of HEVC. Si *et al.* proposed frame-level RC schemes for HEVC designed for LD and random access (RA) coding individually. Hosking *et al.* presented an enhanced intrasymbol RC method that can produce more accurate predictions and thus reduce the average mismatch rate. Wang *et al.* proposed a frame-level RC algorithm for HEVC based on the reference picture set (RPS) mechanism, leading to specialisation of the QP determination and RPS mechanisms in HEVC, which considerably improved the coding efficiency. The works in considered linear models to describe the relation between distortion and rate. Yoon *et al.* combined a linear rate model with an R-Q model based on the Cauchy distribution. Si *et al.* modified a linear model to adjust the QP to the q scale. Tian *et al.* proposed a rate–complexity–QP (RCQ) model for HEVC intra-frame RC, which includes linear distortion quantisation as well as exponential R-Q and linear rate–complexity models. For VVC, Mao *et al.* presented an RC model based on transform coefficient modelling and derived corresponding R-Q and D-Q models. Helmrich *et al.* proposed an RC design based on a two-step R-Q model and derived the two-pass encoding parameters. Representative R-Q models and their corresponding key ideas are summarised in, which is sorted by the year of publication and divided into three parts: classic models used in H.264, classic models used in H.265 and classic models used in H.266. The category column of this table indicates at which layer the RC algorithm is applied. [htbp] [tab4] cccc Author & Category & Year & Key idea He *et al.* & Frame level & 2001 & He *et al.* & Frame level & 2001 & Shin *et al.* & Frame level & 2004 & Wang *et al.* & GOP level & 2013 & Wang *et al.* & Frame level & 2013 & Biatek *et al.* & CTU level & 2014 & *λ*-domain methods ------------------ The existing research shows that the slope *λ* of an R-D model can be obtained from the hyperbolic R-D function as follows: $$\lambda=-\frac{\partial D}{\partial R}=C K \cdot R^{-K-1} \triangleq \alpha R^{\beta} \label{pythagorean3}$$ where *α* and *β* are parameters related to the video source. Li *et al.* first proposed this type of model for use in HEVC. With the development of R-*λ* models, the works in took the complexity of the frame content into consideration in R-*λ* modelling. In JCTVC-M0257, Karczewicz *et al.* used video content complexity in a model at the intra-frame/slice level. Wang *et al.* considered the complexity at the intra-frame level and designed a gradient-based R-*λ* (GRL) model. Building on the previous developments, Zhou *et al.* added video complexity to a model for both intra frames and inter frames. Wen *et al.* proposed an R-*λ*-based RC algorithm with preencoding that improves encoding performance by means of two solutions: one uses only 16x16 coding units (CUs) for preencoding, while the other uses both a hyperbolic function and an exponential function as an improvement to the R-*λ* model. The work in took the influence of temporal layers into consideration in the R-*λ* model to ensure that pictures in different time-domain layers are given different levels of importance for prediction. The authors of constructed R-D models on the basis of optimised bit allocation. Li *et al.* improved the CTU-level *λ*-domain model based on optimised bit allocation, and Guo *et al.* similarly improved the frame-level *λ*-domain model by means of optimised bit allocation. Based on Lagrangian multiplier (LM) theory, Wang *et al.* considered optimisation from the GOP level to the CTU level, thereby improving the video quality. Tang *et al.* proposed a generalised rate–distortion–*λ* (R-D-*λ*) optimisation solution for HEVC RC. The works in addressed optimised RC in accordance with the video content. Sanchez *et al.* proposed a context-based model that incorporates predictive coding technology and uses a piecewise linear function to approximate the R-D curve. Meddeb *et al.* developed an advanced algorithm for video coding that separates the video content into regions of interest and regions of noninterest and increases the bit allocation for regions of interest while maintaining the overall bit rate. Li *et al.* optimised bit allocation for multiview texture videos based on interview dependency and spatiotemporal correlation. The works in mainly considered high-dynamic-range (HDR) video. Mir *et al.* enhanced the *λ*-QP relation for HDR video coding to solve the problem of compression performance degradation caused by different coding standards. Perez-Daniel *et al.* developed a *λ*-domain model into a multi-R-*λ* model that considers the wider range of luma values found in HDR video. Zhou *et al.* considered the issue of visual differences in HDR video and built an R-D model based on this issue. The works in considered perceptual RC methods. Zeng *et al.* incorporated human visual acuity into the video coding process by separating the input video into perceptually sensitive areas and less perceptually sensitive areas and increasing the bit allocation for perceptually sensitive areas. Lim *et al.* proposed a perceptual luminance-adaptive single-loop encoding method. Zhou *et al.* adopted a just-noticeable distortion (JND) factor to build a *λ*-domain model that can well describe the distortion of the perceptual field to be used for bit allocation. Li *et al.* developed an advanced *λ*-domain model by using an intra-CTU RC scheme that considers the influence of the drift from earlier CTUs to subsequent CTUs in RC. Lee *et al.* incorporated texture and nontexture factors into frame-level RC to build a *λ*-domain model. The works in considered the LD configuration. Wang *et al.* designed an RC model for the LD case that can adapt to rapid variations in coding efficiency. Chen *et al.* introduced the variance into the R-D model with the aim of minimising the distortion of the CTUs. Guo *et al.* presented a *λ*-domain frame-level RC scheme. Yang *et al.* adjusted the R-D model to correct for buffer overflow and underflow issues at low latency. Existing RC methods typically optimise the MSE between the distorted image *Z**i* and the original image *Z**j*, i.e., the following cost function: $$\frac{1}{P} \sum\_{p=1}^{P}\left(Z\_{j}(p)-a Z\_{i}(p)-b\right)^{2}$$ In this function, the optimal values of *a* and *b* are computed as: $$\left\{\begin{array}{l} a^{\*}=\frac{\operatorname{cov}\_{Z\_{i}, Z\_{j}}}{\sigma\_{Z\_{i}}^{2}} \\ b^{\*}=\mu\_{Z\_{j}}-a^{\*} \mu\_{Z\_{i}} \end{array}\right.$$ where cov*Z**i*, *Z**j* is $\frac{1}{P} \sum\_{p}\left(Z\_{i}(p)-\mu\_{Z\_{i}}\right)\left(Z\_{j}(p)-\mu\_{Z\_{j}}\right)$. Zhou *et al.* adjusted the R-D model by replacing the MSE-based distortion evaluation in the R-D model with a distortion based on the structural similarity index measure (SSIM), which is related to visual quality. For VVC, the works in introduced some adjustments to *λ*-domain algorithms. Li *et al.* proposed a three-part scheme: splitting skip and nonskip areas at the picture level, changing the update strategy, and modifying the GOP size to 16. Liu *et al.* proposed the use of an adaptive *λ* ratio estimation algorithm. Hyun *et al.* adjusted the R-*λ* model in VVC to address textured and nontextured regions simultaneously. Liu *et al.* researched the relationship between distortion and *λ* to achieve a balance among bit rate, distortion and video quality. The works in made use of a quality dependency factor (QDF) to improve the coding efficiency. Liu *et al.* used QDF-based bit allocation to improve the coding efficiency. Liu *et al.* also proposed an extension of RC to achieve the configuration in JVET-M0600. Ren *et al.* proposed an extension of the QDF to a low frame rate. The works in introduced model modifications in accordance with the coding structure in VVC. Chen *et al.* presented an R-*λ* relationship using a quadratic R-D model for VVC, and Li *et al.* presented an RC model based on the influence of skip blocks to adjust the parameter update strategy. Representative R-*λ* models and their corresponding key ideas are summarised in, which is sorted by the year of publication and divided into three parts: classic models used in H.264, classic models used in H.265 and classic models used in H.266. The category column of this table indicates at which layer the RC algorithm is applied. Notably, for RC strategies, the selection of a suitable R-D model is crucial. Packetwise exponential and hyperbolic models are the two most commonly used models to describe the relationship between bit rate and distortion. Studies on HEVC RC have proven the hyperbolic model to be the better option for that standard. However, the R-D relationship has evolved with the advent of the VVC standard and the integration of new coding tools, necessitating a new R-D model to determine the optimal coding QPs. [!t] [tab3] cccc Author & Category & Year & Key idea Gao *et al.* & CTU level & 2017 & Hu *et al.* & CTU level & 2018 & Zhou *et al.* & Frame level and CTU level & 2020 & Wei *et al.* & Tile level & 2021 & Raufmehr *et al.* & Frame level & 2020 & Farhad *et al.* & GOP level & 2021 & Wang *et al.* & Initial intra frame & 2021 & [htbp] [tab5] cccc Author & Category & Year & Key idea Xu *et al.* & Spatial layer & 2005 & Liu *et al.* & Base layer & 2008 & Pitrey *et al.* & Inter layer & 2009 & Hu *et al.* & Frame level & 2011 & Models in the *ρ* domain ------------------------ The percentage of zero coefficients, *ρ*, increases monotonically with the QP when the distribution of the transform coefficients is known, meaning that there is a one-to-one correspondence between *ρ* and the QP. Hence, the relationship between R and the QP can be established through *ρ*. Shin *et al.* modelled the rate–*ρ* and QP–*ρ* relationships and adopted a linear approximation scheme to model the rate–*ρ* relationship. Experiments have shown that the *ρ* of the quantised transform coefficients has a good linear relation with the bit rate R : *R*(*ρ*) = *θ*(1 − *ρ*) where *θ* is a constant and *ρ* is a model parameter that is related to the video content. The authors of argued that a Laplacian distribution is not sufficiently precise to capture the true distribution arising from a quadtree prediction structure. Therefore, a mixed Laplacian distribution was applied to describe this distribution. The authors of proposed an RC algorithm of decreased complexity in the *ρ* domain, which predicts the encoding parameters for each collocated CTU. Wang *et al.* proposed a more accurate mixed Laplacian distribution to capture the transform coefficients in HEVC. Representative *ρ*-domain models and their corresponding key ideas are summarised in, which is sorted by the year of publication. The category column of this table indicates at which layer the RC algorithm is applied. Classic methods in deep learning -------------------------------- Deep learning is an effective approach for solving decision-making problems, and thus, it has recently attracted great interest in the video coding community. Hu *et al.* adjusted the QP by balancing the relationship between texture complexity and coding rate to achieve lower distortion at the CTU level. Gao *et al.* extracted features from previous frames and built a more accurate R-D model using machine learning. Cooperative game theory has also been introduced into the bit allocation process to increase coding efficiency and quality. Zhou *et al.* adjusted the QP via a deep neural network when processing dynamic video sequences to reduce distortion and bit rate fluctuations. Wei *et al.* integrated reinforcement learning and game theory into tile-level bit allocation for 360-degree streaming to increase the quality and coding efficiency. For VVC, Li *et al.* presented a convolutional neural network (CNN)-based R-*λ* RC approach for intra-frame coding that reuses the *λ*-domain model used for inter-frame RC in the VVC Test Model (VTM) and trained a CNN to simultaneously predict the two model parameters, alpha and beta. Using a multilayer perceptron (MLP) neural network, Raufmehr *et al.* presented a video bit rate controller that completely conforms to the constraints of real-time applications. Farhad *et al.* presented a nonlinear relationship by using a neural network to balance the relationship among the bit rate, buffer size and QP. Wang *et al.* designed an RC algorithm that extracts four highly descriptive features to capture the relationship between the video content and the R-D model. Representative deep learning models and their corresponding key ideas are summarised in, which is sorted by the year of publication and divided into two parts: classic models used in H.265 and classic models used in H.266. The category column of this table indicates at which layer the RC algorithm is applied. With the development of deep learning, Lu *et al.* presented the first end-to-end deep video compression model that jointly optimises all components for video compression. [!t] [tab6] cccc Author & Category & Year & Key idea Jing *et al.* & Intra frame & 2008 & Liu *et al.* & 4 × 4 block level & 2009 & Shen *et al.* & Macroblock & 2013 & Classic methods in scalable video coding ---------------------------------------- The aim of scalable video coding (SVC) is to allow partial streams to be obtained on the decoding side while encoding the video signal only once. Three types of scalability, namely, temporal scalability, spatial scalability and quality scalability, are desired to meet different application requirements in terms of rate or resolution. Hu *et al.* introduced a frame-level RC method based on a linear R-Q model and a linear D-Q model ; the formulation can be expressed as follows: $$\frac{Q\_{s t e p}^{i}{ }^{2} \theta\_{i} \gamma\_{i}}{k\_{i} m\_{i}}=\lambda, i=0, \ldots, N\label{pythagorean5}$$ where *m**i* is the predicted MAD of the remaining frames at level *i* and *k**i*, *σ**i*, and *γ**i* are parameters. Xu *et al.* introduced an effective RC method for a scalable video model (SVM) that inherits features of the sophisticated hybrid RC schemes of JVT. Liu *et al.* proposed a switched model for predicting the MAD of the residual texture using the MAD information available from the previous frame in the same layer. Pitrey *et al.* presented a new RC scheme for SVC based on a simple yet attractive bit rate modelling framework in the *ρ* domain. Representative SVC models and their corresponding key ideas are summarised in, which is sorted by the year of publication and divided into two parts: classic models used in H.264 and classic models used in H.265. The category column of this table indicates at which layer the RC algorithm is applied. Other models ------------ ### Segmented R-Q model Based on the understanding that the transformation coefficients obey a Laplace distribution with *σ*2, the following segmented model can be obtained : $$R(Q)=\left\{\begin{array}{ll} \frac{1}{2} \log \_{2}\left(2 \mathrm{e}^{2} \cdot \frac{\sigma^{2}}{Q^{2}}\right), & \frac{\sigma^{2}}{Q^{2}}>\frac{1}{2 \mathrm{e}} \\ \frac{\mathrm{e}}{\ln 2} \cdot \frac{\sigma^{2}}{Q^{2}}, & \frac{\sigma^{2}}{Q^{2}} \leqslant \frac{1}{2 \mathrm{e}} \end{array}\right.$$ where *σ*/*Q*2 > 1/2e corresponds to the high-bit-rate situation and *σ*/*Q*2 ≤ 1/2e corresponds to the low-bit-rate situation. ### D-Q model Seo *et al.* proposed a D-Q model to determine the target distortion for QP generation, as follows : $$\small \begin{aligned} D&=\alpha \sum\_{i=0}^{N\_{\text {Depth }}} w\_{i}\Bigg \{ \left(1-P\_{i, S}\right)\\ &\left(\frac{2}{\lambda\_{i, \mathrm{NS}}^{2}}+\frac{2 Q}{\lambda\_{i, \mathrm{NS}}\left(e^{-\frac{1}{2} \lambda\_{i, \mathrm{NS}} Q}-e^{\frac{1}{2} \lambda\_{i, \mathrm{NS}} Q}\right)}\right)+P\_{i, S} \frac{2}{\lambda\_{i, S}} \Bigg \} \end{aligned}\label{pythagorean6}$$ where *D* is the MSE for a frame, *α* is a model parameter that compensates for the difference between the actual and estimated distortions, *P**i*, *S* is the proportion of CUs at depth *i* for which the skip mode is adopted, *Q* is the *q* step size, *N**D**e**p**t**h* is the maximum CU depth, *ω**i* is a weighting factor, and *λ**i* denotes the model parameter for the *i*-th CU depth. ### Adaptive R-Q model The method proposed by Jing *et al.* aims to select accurate QPs for intra-coded frames in accordance with a target *R*. The parameters of the model can be adaptively adjusted by considering a gradient-based frame complexity measure. ### Two-pass methods The general idea of two-pass RC is to further optimise the QP of each frame in the second encoding pass in accordance with scene complexity statistics computed in the first pass. For AVC, Lie *et al.* proposed performing frame-level rate allocation in the second pass using content-aware models constructed in the first pass. For HEVC, Wang *et al.* proposed a SSIM-inspired two-pass RC scheme. The algorithm proposed by Ma *et al.* consists of a one-pass process and a partial two-pass process at the frame and block levels. Zupancic *et al.* further proposed a two-pass RC method targeting quality improvement for ultrahigh-definition television (UHDTV) delivery. For VVC, Helmrich *et al.* proposed an RC design based on a two-step R-Q model and derived the two-pass encoding parameters. ### Visual attention models Liu *et al.* utilised the mechanism of human visual attention to guide the RC process by incorporating an attention model of motion. They calculated multilayer saliency maps of motion, which were used to adjust the frame-level bit allocation, resulting in quality improvement. Shen *et al.* proposed an innovative *R* method that considers human visual attention, in which the stronger the local motion attention is, the greater the *R* that is assigned to the frame. ### Multithreaded coding methods At the frame level, multicore systems with a small number of cores can take advantage of multithreaded coding capabilities, e.g., parallelisation, especially when there is little or no dependency between images, as in the case of images in the same temporal layer or even intra frames. In such cases, parallelisation is simple to implement and incurs relatively small coding efficiency losses. However, the gain that can be achieved through frame-level parallelisation is limited by the GOP size, and such processing increases latency despite improving the processing frame rate. In addition to frame-level parallelisation, slice-level parallelisation can be another way to improve performance. Slices partitioned within a picture are independent of each other, apart from potential in-loop filtering dependencies that may exist at the slice boundaries. Slices need not be associated with each other during the execution of most processes performed during coding, such as prediction and transformation, which are applied across slices, and slice-level parallelisation can enable a dramatic enhancement in coding efficiency due to motion prediction processing. Nevertheless, among the RC methods discussed in this article, almost none use the slice-level RC approach; therefore, we do not present experimental statistics on it. A more fine-grained, block-level parallelisation technique is most widely applied. However, this approach is more difficult to implement because block-level parallelisation requires a more elaborate scheduling algorithm to ensure the correct ordering of the macroblocks due to their multiple spatial dependencies. Furthermore, wavefront parallel processing (WPP) is a commonly used intra-frame parallelisation method, although it has the drawbacks of limitations on the parallelism that can be achieved and unbalanced computational complexity for RC optimisation. To overcome these drawbacks, Joose *et al.* proposed two real-time RC algorithms with parallelisation techniques, a *λ*-domain (LD) algorithm and an R-*λ* model (R-LM) algorithm. An adaptive intra-frame parallelisation method was also proposed in that guarantees higher intra-frame parallelism and more accurate control of parallelisation. A hardware architecture strategy was additionally explored in to improve the parallel acceleration of HEVC hardware. Representative models discussed in this subsection and their corresponding key ideas are summarised in, which is sorted by the year of publication. The category column of this table indicates at which layer the RC algorithm is applied. [!t] \*8c Method & Anchor & Configuration & BD-rate (%) & BD-PSNR (dB) & RC accuracy (%) & Latency reduction (%) & Lee 2013 & HM 16.20 & LDP & -3.11 & 0.11 & 89.19 & -0.98 Wang 2013 & HM 16.20 & LDB/LDP/RA & -3.13/-4.00/-6.00 & 0.11/0.13/0.23 & 99.76/99.77/98.44 & -0.34/0.87/2.58 Wang 2013 & HM 16.20 & LDB-HE/LDP-HE & -21.91/-12.03 & 0.77/0.41 & 99.45/99.32 & 1.35/1.62 Xu 2015 & HM 16.20 & LDP & -1.5 & 0.14 & 99.33 & 1.78 Song 2017 & HM 16.20 & RA/LDP & -0.20/-0.20 & 0.02/0.02 & 99.97/99.98 & 2.01/-1.00 & FWA & HM 16.20/LDP/LDB & -4.70/-4.30 & 0.21 & 99.93/99.94 & 3.22/3.13 & AWA & HM 16.20/LDP/LDB & -2.90/-3.20 & 0.13 & 99.92/99.39 & 2.97/2.66 Hyun 2020 & HM 16.20 & AI/LDB/RA & -0.30/-0.20/-0.30 & 0.02/0.01/0.02 & 99.89/99.50/86.33 & 0.22/0.68/0.91 Raufmehr 2021 & VTM 17.0 & LDB & -3.05 & 0.11 & 99.63 & 36.61 [tab7] \*7c Method & Anchor & Configuration & BD-rate (%) & BD-PSNR (dB) & RC accuracy (%) & Latency reduction (%) Kwon 2007 & JM 9.4 & Li /LDP & -5.40 & 0.35 & 97.29 & 2.50 Liu 2007 & JM 9.4 & JVT-G012 /LDP & -5.10 & 0.33 & 90.22 & 3.07 Wang 2008 & JM 9.4 & JVT-G012 /LDB & -0.63 & 0.41 & 93.47 & 2.10 Liu 2009 & JM 9.4 & JVT-G012 /LDP & -0.12 & 0.07 & 95.33 & 2.80 Li 2016 & HM 16.20 & Li & -5.10 & 0.15 & 93.57 & 0.50 Chen 2019 & HM 16.20 & LDP & -3.30 & 0.10 & 97.27 & 0.10 & & LDB & -4.60 & 5.40 & 97.56 & 2.90 & & LDP & -1.20 & 1.00 & 97.38 & 1.84 & & RA & -2.00 & 1.90 & 99.87 & 2.50 & & LDB & -1.35 & 0.01 & 97.08 & 0.01 & & LDP & -3.30 & 0.11 & 94.14 & 0.07 & & RA & -2.75 & 0.09 & 94.89 & 0.77 [tab8] [!t] [tab9] \*8c Method & Anchor & Configuration & BD-rate (%) & BD-PSNR (dB) & RC accuracy (%) & Latency reduction (%) & Xu 2005 & SVM3.0 & Default & -7.20 & 1.50 & 99.50 & -3.02 & TM5 & Default & -2.30 & 0.20 & 96.50 & -2.22 & AVC-TM & Default & -2.20 & 0.20 & 96.70 & -2.21 Wang 2009 & JVT-G012 & LDP & -4.20 & 0.98 & 95.49 & -0.45 Li 2014 & HM 16.20 & LD(noH)/LD(H)/RA & - 3.10/-5.50/-8.90 & 0.29/0.55/1.08 & 99.94/99.90/99.80 & -3.10/0.10/-3.20 Yang 2014 & HM 16.20 & LDB & -2.20 & 0.31 & 99.92 & 1.30 & Hu *et al.* & HM 16.20/AI & -5.00 & 0.50 & 99.95 & -114.00 (train)/-1.80 (no train) & Gao *et al.* & HM 16.20/LDB & -5.20 & 0.50 & 99.86 & -1048.20 (train)/-1.20 (no train) Sanchez 2018 & HM 16.20 & Default & -1.00 & 0.01 & 99.93 & -1048.20 (train)/-1.20 (no train) Performance Comparison of Different RC Schemes ============================================== The bases for measuring the advantages and disadvantages of an RC algorithm do not solely concern how many bits are saved and how much the visual quality is improved; coding/decoding complexity is also an essential consideration for some real-time video applications. Modern encoders, such as AVC, HEVC and VVC, are designed following a framework of square-block-based hybrid coding, which provides the opportunity for performance gains on machines with multithreading capabilities and even multicore processors. Usually, the performance gain is calculated as the execution time of the improved algorithm divided by the execution time of the original algorithm. Many related techniques have been proposed in recent video coding standards, some of which are mentioned in the following. All comparisons were performed under fair and well-controlled test conditions. In the evaluation, the general sequences from each class defined in the Common Test Conditions (CTC) were chosen, with possible coding QPs of [22, 27, 32, 37]. The total number of test sequences in each class is as follows: Class A1 contains three sequences, Class A2 contains three sequences, Class B contains five sequences, Class C contains four sequences, Class D contains four sequences, and Class E contains three sequences. The resolution of the sequences in each class is as follows: Class A1 has a resolution of 3840 × 2160, Class A2 has a resolution of 3840 × 2160, Class B has a resolution of 1920 × 1080, Class C has a resolution of 832 × 480, Class D has a resolution of 416 × 240, and Class E has a resolution of 1280 × 720. The total number of frames per sequence is taken to be 300, and for sequences that are fewer than 300 frames, the results are normalised accordingly with respect to the actual number of frames. The resolution of the video sequences varies from the Common Intermediate Format (CIF) to 4K, and the bit depth is 8 bits. The target bit rate was set to the actual bit rate obtained by compressing the same sequence at a fixed QP in the non-RC encoding mode. The detailed GOP types used for the experiments can be found in the evaluation tables. For the AVC, HEVC, and VVC RC methods, all other settings were kept the same as the defaults in JM 9.4, HM 16.20, and VTM 17.0, respectively. Some terms that appear in the tables in this section may need explanation. The *configuration* column represents part of the test conditions recommended by the standardisation group. Its entries represent three different prediction structures: low delay with B frames (LDB), low delay with P frames (LDP) and random access (RA). Some entries have suffixes of *H* or *noH*, which indicate whether bits are allocated hierarchically. Coding efficiency is evaluated in terms of the Bjøntegaard-delta bit rate (*BD-rate*), which indicates the reduction in bit rate at a given quality, while the Bjøntegaard-delta peak signal-to-noise ratio (*BD-PSNR*) complementarily represents the video quality enhancement at a given bit rate. The results in the *Latency reduction* column, which measure the coding complexity of each algorithm, are normalised to the average time taken to encode a frame. Finally, the RC accuracy, *A**c**c*, is an important index for an RC algorithm that represents the difference between the actual bit rate and the target bit rate and is calculated as follows: $${Acc}=1 - \dfrac{|R\_{r}-R\_{c}|}{R\_{c}}\times 100\%$$ where *R**r* and *R**c* are the actual number of bits and the target number of bits, respectively. Implementations at the frame level ---------------------------------- In the vast majority of cases, the working environment of the encoder is the CPU; accordingly, many optimisations have been analysed and implemented in modern CPUs. Thus, unless otherwise mentioned, the following experiments were carried out on the CPU. More detailed quantitative comparisons are shown in. When using a CPU, the key to improving performance lies in reducing data dependencies and operational dependencies to improve parallelisation, and at the frame level, these requirements are naturally satisfied. To build an accurate R-Q model, Lee *et al.* separately constructed three Laplacian probability models for low-textured, medium-textured, and high-textured CUs; the BD-rate of this method is -3.11%, and the BD-PSNR is 0.11 dB (LDP). Wang *et al.* considered new coding tools for use in HEVC and adopted a hierarchical RC architecture to maintain the video quality of keyframes. In comparisons considering all three configurations, the BD-rate ranges from -3.13% to -6.00%, with corresponding BD-PSNR values ranging from 0.11 dB to 0.23 dB. These authors also proposed D-Q and R-Q models for finding the interframe dependency between to-be-coded frames and the reference frame. Accordingly, a mixed Laplacian distribution *ρ*-domain-based rate–GOP model was proposed. Wang *et al.* proposed an efficient hierarchical bit allocation scheme based on a new mechanism for HEVC, i.e., the RPS mechanism, which achieves the highest BD-rate among the compared methods. Moreover, they proposed an innovative header bit ratio prediction method to improve RC accuracy and used a quadratic R-Q model to calculate the QP. Xu *et al.* focused on video sequences depicting discontinuous scenes and proposed a novel bit allocation algorithm by building the correlation between the intensity of a scene change and the bit allocation. A BD-rate of -1.5% can be achieved in this way, and the BD-PSNR is 0.14 dB. Song *et al.* addressed the issue that the RC method used in AVC is no longer suitable for HEVC due to the differences in the GOP coding structures; accordingly, they proposed a new GOP-level bit allocation method that can achieve more accurate RC and lower bit fluctuation, resulting in slightly better R-D performance, as shown in table [tab7]. Guo *et al.* considered the temporal R-D dependency in GOP-level bit allocation; on this basis, an advanced frame-level R-D model that can more completely use the information of the coded frames was introduced to further enhance R-D performance. Then, an equation for bit allocation at the frame level was proposed for the optimal Lagrange multiplier approach, which can be solved by employing a recursive Taylor expansion (RTE) scheme. Two comparative experiments were performed using and as benchmarks, referred to as fixed-weight bit allocation (FWA) and adaptive-weight bit allocation (AWA), respectively. The comparison results show that the BD-rate ranges from -4.30% to -4.70% for FWA and -2.90% to -3.20% for AWA. As an alternative to traditional methods that regard the entire RC process as deterministic, some methods treat the variables and parameters of RC as random variables to re-examine the RC process. Hyun *et al.* observed that the inaccuracy of existing linear rate estimation models causes a decline in RC performance. Thus, they adopted a method called recursive Bayesian estimation (RBE) to precisely estimate rates. Performance comparisons with all three configurations show that the BD-rate ranges from -0.20% to -0.30%, with slight improvements in the BD-PSNR. Raufmehr *et al.* proposed a video bit rate controller to meet the demands of real-time applications in the VVC standard by suppressing bit fluctuations and buffer overflow and underflow. The BD-rate is -3.05%, and the BD-PSNR is 0.11 dB. In this method, the necessary QP modifications are estimated by using an MLP neural network at the frame level, which is beneficial for robust buffer control. Comparisons of the results suggest that each method has its strengths and weaknesses, depending on the specific application or situation. Lee *et al.*’s method is beneficial for building an accurate R-Q model, while Wang *et al.*’s method is suitable for maintaining the video quality of keyframes. Xu *et al.*’s method is effective for video sequences depicting discontinuous scenes, while Song *et al.*’s method achieves more accurate RC and lower bit fluctuation. Guo *et al.*’s method considers temporal R-D dependency and can more completely use the information of the coded frames, while Hyun *et al.*’s method can precisely estimate rates. Raufmehr *et al.*’s method is suitable for real-time applications. Due to the elimination of traditional models and the utilised network structure, the model complexity is lower, resulting in a large latency reduction. However, these frame-level methods also have some drawbacks, such as poor scalability or high memory occupation. Fortunately, block-based implementations can address these problems. Implementations at the block level ---------------------------------- RC evaluation at the block level was carried out using the JM reference software for H.264 and the HM reference software for H.265. In, the RC methods proposed by Kwon *et al.*, Liu *et al.*, Wang *et al.* and Liu *et al.* are all compared and analysed. In the LDP mode of, the BD-PSNR is improved by 0.35 dB over the baseline. In the scheme of, the BD-rate is -5.10%, and the BD-PSNR is 0.33 dB. It can be seen that the algorithm in can achieve better PSNR results than JVT-G012, while the bit rate performance of the two algorithms is similar. In the LDP mode of, the BD-rate is -0.12%, the BD-PSNR is 0.07 dB, and the latency reduction is 2.80% on average. proposed a new scheme named the optimal bit allocation (OBA) scheme and presented a detailed comparison based on ; the BD-rate of this method is -5.10%, and the BD-PSNR is 0.15 dB. Compared with the method of HM 16.20, the average BD-rate of can reach -3.30%, and the BD-PSNR can reach 0.10 dB. For the method of, the BD-rate and BD-PSNR for LDB/LDP/RA are -4.60%/-1.20%/-2.00% and 5.40 dB/1.00 dB/1.90 dB, respectively, which are much higher than those of the baseline HM 16.20 coding algorithms. A recently proposed JND-based perceptual RC method is also included in the comparison, and it can be seen that this method effectively reduces the bit rate without compromising the encoded video quality, as confirmed by objective metrics such as the PSNR. In addition, in terms of coding control, the actual *R* after encoding is closer to the target *R*. As seen from these comparisons, different methods are suitable for different situations. The OBA scheme is appropriate for achieving a lower bit rate with good quality. The method of is suitable for improving the BD-rate and BD-PSNR for LDB/LDP/RA, while the JND-based perceptual RC method is appropriate for reducing the bit rate without compromising the encoded video quality. Other methods also have their own advantages in achieving a higher BD-PSNR or lower BD-rate. Implementations addressing joint layer optimisation --------------------------------------------------- Often, multiple layers can be jointly optimised to improve the coding efficiency. Recall that the RC process can be divided into two main parts: one is bit allocation, while the other is determining how to realise the target bit allocation for a CU through the R-D model. As shown in table [tab9], the algorithm proposed by Xu *et al.* is simultaneously implemented at the GOP, frame and basic unit levels, and experiments in SVM3.0 show that the mismatch between the target *R* and real *R* does not exceed 0.5% and that the BD-PSNR on average is approximately 1.50 dB. The algorithm proposed by Ma *et al.* consists of a one-pass process and a partial two-pass process at the frame and block levels, and experiments show that compared with TM5 and AVC-TM, the BD-PSNRs are 0.20 dB and 0.20 dB, respectively. The algorithm proposed by Wang *et al.* is a joint three-layer (JTL) model implemented in JM 9.4. Compared with JVT-G012, the average BD-PSNR is 0.98 dB, while the average latency reduction is -0.45%. The algorithm proposed by Li *et al.* is implemented at the frame and CTU levels, and testing in HM 16.20 shows that relative to in the LD(noH), LD(H) and RA configurations, the RC accuracy gains are 0.06%, 0.10% and 0.20%, respectively; the average BD-PSNRs are 0.29 dB, 0.55 dB and 1.08 dB, respectively; and the latency reductions are -3.10%, 0.10%, and -3.20%, respectively. The algorithm proposed by Yang *et al.* is implemented at the frame and CTU levels, and experiments in HM 16.20 in the LDB configuration show that the bit rate at the same quality is 2.20% lower than that of the RC algorithm in HM 16.20. The algorithm proposed by Zhou *et al.* is implemented at the frame and CTU levels, and experiments in HM 16.20 show that compared with and under the all-intra (AI) and LDB coding structures, the RC accuracies are 99.95% and 99.86%, respectively; the BD-PSNR is 0.50 dB in both cases; and the latency reduction is increased by 114% and 1048.2%, respectively, when the time needed to train the model parameters is included and by 1.80% and 1.20%, respectively, when the time needed to train the model parameters is not included. The algorithm proposed by Sanchez *et al.* was also implemented, and experiments in HM 16.20 show that the RC accuracy is 99.93% with a slight BD-PSNR increase, while the encoding time is increased if the training time is included. The above comparisons indicate that the various algorithms all show improvements in coding efficiency, with different strengths and limitations. Xu *et al.*’s algorithm minimises the mismatch between the target and real *R* values, while Ma *et al.*’s algorithm is effective for one-pass and partial two-pass encoding. A significant PSNR gain along with a slight latency reduction can be guaranteed with Wang *et al.*’s algorithm. Significant gains in RC accuracy and PSNR can also be seen in Li *et al.*’s algorithm. Zhou *et al.*’s and Sanchez *et al.*’s algorithms are suitable for non-real-time encoding solutions. From the above evaluations, it can be seen that RC modelling methods have evolved over time, with each model having its own set of advantages and disadvantages. The simple and widely used R-Q models cannot accurately reflect the quality for diverse video content. More accurate R-*λ* models have been proposed to better reflect the R-D relationship, but these models require frequent and complex updates. An R-*ρ* model, while improving the linear approximation of the R-D relationship, still establishes only an indirect relationship between R and the QP. Model-free RC methods offer greater flexibility but require more computational resources and have not yet been widely adopted. Overall, RC modelling has become more sophisticated over time, and models should be selected flexibly to balance coding efficiency and performance. Future Works ============ As efficient tools for compressing video information, video codecs allow service providers to compress video files so that they will occupy minimal storage space and can be efficiently delivered over a range of networks. The purpose of bit rate control is to achieve stable and high-quality video compression to the greatest possible extent under specified bit rate constraints. By removing redundant information, RC methods aim to maintain the original video quality while reducing the amount of data sent over the network or occupying storage space. With the development of virtual and augmented reality applications, video users have gained the ability to interact with and influence objects in immersive three-dimensional (3D) simulated environments that emulate reality with the help of interactive devices, thereby providing an experience equivalent to that of an objective natural environment. This situation has driven the further development of video streaming media in the directions of ultrahigh definition, high dynamics, high frame rates, and high depth. These developments have resulted in ever-increasing demands on video coding techniques for current applications to achieve higher compression efficiency, lower computational complexity, and more intelligent integration into video analysis systems. We believe that the future directions of related research can be summarised from six perspectives: RC methods based on machine learning and deep learning, development from RC methods to quality control methods, perceptual and depth-aware RC methods, RC methods for various advanced video sources, RDO and RC in depth coding, and point cloud RC methods. Future work is expected to focus on topics such as image integration, video capture, encoding, processing, analysis, and understanding, with the objective of guiding a new generation of codecs to effectively and intelligently model the human visual system. A promising possibility is that through the dynamic selection of different RC models, such as R-Q, R-*λ*, or R-*ρ* models, or model-free methods in accordance with different video content or coding conditions, the trade-off between coding efficiency and RC accuracy can be optimised. For instance, an R-*λ* model performs best in terms of coding efficiency and RC accuracy in texture regions of HDR content, but it may not be suitable for nontexture regions. On the other hand, an R-Q model, which is relatively simple, may be more appropriate for simpler motions or textures. By switching between different models based on region characteristics or coding conditions, better RC accuracy and general quality can be achieved while maintaining high coding efficiency. Therefore, more sophisticated algorithms and strategies for dynamically selecting and adapting RC models in real-time video coding applications should be developed. The specific directions of anticipated future development are discussed as follows. Learning-based visually enhanced RC methods ------------------------------------------- Machine learning (ML) and deep learning (DL) techniques are widely acknowledged as important tools for analysing and processing massive amounts of weakly correlated or high-dimensional data. As new technologies for video applications (e.g., virtual reality, augmented reality, and point clouds) revolutionise the video coding industry, the heterogeneity and complexity of the captured data are presenting increasing challenges for the efficient compression of these data. Based on the above review of the methods applied to date for video RC in the ML and DL domains, this paper argues that future ML and DL techniques can help to achieve smarter video coding. Based on the specific requirements of video coding tasks, learning-based RC methods aim to achieve intelligent RC with low complexity, high coding efficiency, and high visual quality. First, ML and DL techniques can be used to combine analysis and recognition tasks with encoding tasks, enabling intelligent RC through the effective reuse of video information and reducing the complexity of video encoding. Second, various learning methods, such as active learning, reinforcement learning, and transfer learning, can be introduced to establish self-updating mechanisms in the relevant models to solve the more complex RC decision-making problems arising in new generations of video coding standards. Third, for the current ML and DL methods applied for video RC, there is still a need to solve the problems of their relatively high computational complexity and cost. Thus, another important direction of future development for ML and DL RC methods will be to investigate how to realise low-hardware and low-cost implementation solutions based on DL. Perceptual and depth-aware RC methods ------------------------------------- Perceptual-based video coding that exploits perceptual redundancy is a promising research area worthy of future consideration. Related research aims to realise an optimal RC mechanism by constructing the relationship between video quality assessment (VQA) results and the coding rate. When a VQA algorithm is to be applied to determine the quality target in a video coding module, it is necessary to adapt the VQA algorithm from image/video-based assessment to block-based assessment and to rate–distortion theory. A convenient method is to model the relationship between the VQA metric of interest and the MSE, given that the MSE is the basis of RDO. In addition, balancing the uniqueness and particularities of perceptual redundancy is also a fundamental difficulty that needs to be overcome. Since the perception of the human visual system for static, dynamic, stereo, and omnidirectional video varies from person to person, it is a daunting challenge to implement a general VQA algorithm suitable for different applications. In addition, the introduction of VQA makes RC complexity control particularly challenging. The computational complexity increases substantially with the adoption of more advanced feature extraction tools and learning-based classifiers for quality prediction. Frequent calls to learning-based VQA algorithms in intensely learning-based RDO schemes can result in extremely complex encoding algorithms. Thus, it will be worth investigating how to integrate VQA, especially better-performing learning-based VQA, into video coding while maintaining a desirable complexity. Future work on perceptual RC will be aimed at developing an objective visual perception model that is broadly advantageous in terms of accuracy, complexity, and adaptability. Interoperable end-to-end RC methods for hyperrealistic and high-dimensional videos ---------------------------------------------------------------------------------- The rapid evolution of immersive video applications, such as augmented reality (AR), virtual reality (VR) and HDR videos, is presenting new challenges for end-to-end RC methods. Such immersive applications require calculating the positions and angles of camera images in real time and generating corresponding artificial images, with the aim of simulating or supporting interaction with the real world. A VR system uses computer simulation to generate a 3D space, presenting users with a visual and interactive simulation without restrictions. A much more advanced technology is mixed reality (MR), which mixes the real-world environment with AR and VR technologies. This unique experience requires redefining the perceptual quality indicators used in the video encoding process, implementing end-to-end RDO, and outputting stable and high-quality code streams while taking viewing interoperability into consideration. Interoperability can be defined as the ability to achieve a stable viewing angle bit rate through interaction during the encoding process, involving the sharing of user viewing data and encoding information between the client and the server through data exchange via the head-mounted device (HMD). RC interoperability for immersive video is closely related to interoperability in broader human–computer interaction. The need for interoperability between immersive video delivery systems increases the practical value of RC coding. This interoperability can be achieved in two ways: through competition for more network bandwidth and by enabling data-driven decisions based on users’ viewing habits. The users of such immersive experiences will also benefit from improved data quality and a better immersive interactive experience. Therefore, an important direction of development for hyperrealistic and high-dimensional video technology will be to implement an R-D model based on HDR video characteristics in order to improve coding performance and achieve a globally optimal rate allocation scheme in the subsequent RC mechanism. In addition, video content prediction has become very popular in recent years due to its ability to learn from previous viewing behaviour to construct the forthcoming video. Such predicted video content can be widely used in decision-making, autonomous driving, video comprehension, etc. Investigating how best to achieve RC for this type of video will be very important, as all of the abovementioned related tasks demand smooth and high-quality streaming video. Beyond 5G/6G-powered quality control methods -------------------------------------------- With the popularisation of 5G networking, high-data-rate and low-latency network connections are increasingly expected to ensure a smooth and high-quality playback experience for video users. Since visual quality is an important aspect of the user experience in many media applications, high quality must be guaranteed while pursuing high smoothness. These issues may become critical as the demand for high-quality video transmission becomes more widespread. The objective of quality control is to keep the video quality within a certain high range under the premise of high bandwidth, which can be achieved by using variable bit rates. However, most existing techniques may not provide constant visual quality and/or efficient compression. It will be more critical for future quality control methods to consider the quality difference between frames as a measure of frame complexity in order to model the relationship among the target bit rate, distortion, and QP. Another essential direction of research will be to pursue the implementation of a low-latency quality control scheme to achieve quality stability. Conclusions =========== This paper comprehensively reviews the latest progress in RC techniques for the H.265/HEVC and H.266/VVC video coding standards and discusses relevant development prospects. More specifically, this paper first introduces and compares different kinds of RC methods based on various R-D models as well as emerging DL-based schemes. Then, the implementation schemes of RC methods on different hardware platforms are reviewed. Finally, we discuss RC methods based on ML and DL, the evolution from RC to quality control, RC methods considering human visual perception and depth perception, RC methods for 360-degree/HDR video, and RC methods for predicted video content. A comprehensive summary of directions of future work on topics such as RC methods and RDO in depth coding is also presented. The aim is to provide valuable guidance for the improvement, implementation, application, and continuous development of the current and next generations of RC standards. 100 [1]#1 url@samestyle [2]#2 [2]l@#1=l@#1#2 “Cisco annual internet report white paper.” [Online]. Available: <https://www.cisco.com/c/en/us/solutions/collateral/executive-perspectives/annual-internet-report/white-paper-c11-741490.html> T. Smith, M. Obrist, and P. Wright, “Live-streaming changes the (video) game,” in *Proceedings of the 11th european conference on Interactive TV and video*, 2013, pp. 131–138. D. Marpe, T. Wiegand, and G. J. Sullivan, “The h. 264/mpeg4 advanced video coding standard and its applications,” *IEEE communications magazine*, vol. 44, no. 8, pp. 134–143, 2006. G. J. Sullivan, J.-R. Ohm, W.-J. Han, and T. Wiegand, “Overview of the high efficiency video coding (hevc) standard,” *IEEE Transactions on circuits and systems for video technology*, vol. 22, no. 12, pp. 1649–1668, 2012. B. Bross, Y.-K. Wang, Y. Ye, S. Liu, J. Chen, G. J. Sullivan, and J.-R. Ohm, “Overview of the versatile video coding (vvc) standard and its applications,” *IEEE Transactions on Circuits and Systems for Video Technology*, vol. 31, no. 10, pp. 3736–3764, 2021. T. Berger, “Rate-distortion theory,” *Wiley Encyclopedia of Telecommunications*, 2003. N. Tang, J. Cao, F. Liang, J. Wang, H. Liu, X. Wang, and X. Du, “Fast ctu partition decision algorithm for vvc intra and inter coding,” in *2019 IEEE Asia Pacific Conference on Circuits and Systems (APCCAS)*.1em plus 0.5em minus 0.4emIEEE, 2019, pp. 361–364. Y. Huang and R. P. Rao, “Predictive coding,” *Wiley Interdisciplinary Reviews: Cognitive Science*, vol. 2, no. 5, pp. 580–593, 2011. H. S. Malvar, A. Hallapuro, M. Karczewicz, and L. Kerofsky, “Low-complexity transform and quantization in h. 264/avc,” *IEEE Transactions on circuits and systems for video technology*, vol. 13, no. 7, pp. 598–603, 2003. M. Budagavi, A. Fuldseth, and G. Bjøntegaard, “Hevc transform and quantization,” in *High Efficiency Video Coding (HEVC)*.1em plus 0.5em minus 0.4emSpringer, 2014, pp. 141–169. A. Norkin, G. Bjontegaard, A. Fuldseth, M. Narroschke, M. Ikeda, K. Andersson, M. Zhou, and G. Van der Auwera, “Hevc deblocking filter,” *IEEE Transactions on Circuits and Systems for Video Technology*, vol. 22, no. 12, pp. 1746–1754, 2012. V. Sze and M. Budagavi, “High throughput cabac entropy coding in hevc,” *IEEE Transactions on Circuits and Systems for Video Technology*, vol. 22, no. 12, pp. 1778–1791, 2012. L. Wang, “Rate control for mpeg video coding,” *Signal Processing: Image Communication*, vol. 15, no. 6, pp. 493–511, 2000. J.-C. Tsai and C.-H. Shieh, “Modified tmn8 rate control for low-delay video communications,” *IEEE transactions on circuits and systems for video technology*, vol. 14, no. 6, pp. 864–868, 2004. Z. Li and F. Pan, “Adaptive basic unit layer rate control for jvt (jvt-g012),” in *Joint Video Team (JVT) 7th Meeting at Pattaya*, March. 2003. H. Choi, J. Nam, J. Yoo, and D. Sim, “Rate control based on unified rq model for hevc (jctvc-h0213),” in *Joint Collaborative Team on Video Coding (JCT-VC) 8th Meeting, San José, CA*, Feb. 2012. B. Li, H. Li, and L. Li, “Rate control by r-lambda model for hevc,” in *ITU-T/ISO/IEC JCT-VC Document JCTVC-K0103*, Oct. 2012. Y. Li and Z. Chen, “Rate control for vvc (jvet-k0390),” in *Joint Video Experts Team (JVET) 11th Meeting, Ljubljana, SI*, July 2018. Y. Liu, Z. G. Li, and Y. C. Soh, “A novel rate control scheme for low delay video communication of h.264/avc standard,” *IEEE Transactions on Circuits and Systems for Video Technology*, vol. 17, no. 1, pp. 68–78, 2007. S. Hu, H. Wang, S. Kwong, and T. Zhao, “Frame level rate control for h.264/avc with novel rate-quantization model,” in *2010 IEEE International Conference on Multimedia and Expo*, 2010, pp. 226–231. B. Li and D. Zhang, “Qp determination by lambda value,” in *Joint Collaborative Team on Video Coding (JCT-VC) 9th Meetting at Geneva*, Apr. 2012. M. Karczewicz and X. Wang, “Intra frame rate control based on satd (jctvc-m0257),” in *Joint Collaborative Team on Video Coding (JCT-VC) 13th Meeting, Incheon, KR*, Apr. 2013. M. Fang, M. Tang, and J. Wen, “R-lambda model based rate control with pre-encoding process (jctvc-u0152),” in *Joint Collaborative Team on Video Coding (JCT-VC) 21st Meeting, Warsaw, PL*, June 2015. Z. He, Y. K. Kim, and S. Mitra, “Low-delay rate control for dct video coding via /spl rho/-domain source modeling,” *IEEE Transactions on Circuits and Systems for Video Technology*, vol. 11, no. 8, pp. 928–940, 2001. M. Liu, Y. Guo, H. Li, and C. W. Chen, “Low-complexity rate control based on rho-domain model for scalable video coding,” in *2010 IEEE International Conference on Image Processing*, 2010, pp. 1277–1280. H.-J. Lee, T. Chiang, and Y.-Q. Zhang, “Scalable rate control for mpeg-4 video,” *IEEE Transactions on Circuits and Systems for Video Technology*, vol. 10, no. 6, pp. 878–894, 2000. Z. He and T. Chen, “Linear rate control for jvt video coding,” in *International Conference on Information Technology: Research and Education, 2003. Proceedings. ITRE2003.*, 2003, pp. 65–68. M. Jiang, X. Yi, and N. Ling, “Improved frame-layer rate control for h.264 using mad ratio,” in *2004 IEEE International Symposium on Circuits and Systems (ISCAS)*, vol. 3, 2004, pp. III–813. J. Xu and Y. He, “A novel rate control for h.264,” in *2004 IEEE International Symposium on Circuits and Systems (ISCAS)*, vol. 3, 2004, pp. III–809. S. Ma, W. Gao, and Y. Lu, “Rate-distortion analysis for h.264/avc video coding and its application to rate control,” *IEEE Transactions on Circuits and Systems for Video Technology*, vol. 15, no. 12, pp. 1533–1544, 2005. W. Yuan, S. Lin, Y. Zhang, W. Yuan, and H. Luo, “Optimum bit allocation and rate control for h.264/avc,” *IEEE Transactions on Circuits and Systems for Video Technology*, vol. 16, no. 6, pp. 705–715, 2006. D.-K. Kwon, M.-Y. Shen, and C.-C. J. Kuo, “Rate control for h.264 video with enhanced rate and distortion models,” *IEEE Transactions on Circuits and Systems for Video Technology*, vol. 17, no. 5, pp. 517–529, 2007. H. Wang and S. Kwong, “Rate-distortion optimization of rate control for h.264 with adaptive initial quantization parameter determination,” *IEEE Transactions on Circuits and Systems for Video Technology*, vol. 18, no. 1, pp. 140–144, 2008. W.-J. Tsai and T.-L. Chou, “Scene change aware intra-frame rate control for h.264/avc,” *IEEE Transactions on Circuits and Systems for Video Technology*, vol. 20, no. 12, pp. 1882–1886, 2010. H.-M. Hu, B. Li, W. Lin, W. Li, and M.-T. Sun, “Region-based rate control for h.264/avc for low bit-rate applications,” *IEEE Transactions on Circuits and Systems for Video Technology*, vol. 22, no. 11, pp. 1564–1576, 2012. X. Liang, Q. Wang, Y. Zhou, B. Luo, and A. Men, “A novel rq model based rate control scheme in hevc,” in *2013 Visual Communications and Image Processing (VCIP)*.1em plus 0.5em minus 0.4emIEEE, 2013, pp. 1–6. S. Wang, S. Ma, L. Zhang, S. Wang, D. Zhao, and W. Gao, “Multi layer based rate control algorithm for hevc,” in *2013 IEEE International Symposium on Circuits and Systems (ISCAS)*.1em plus 0.5em minus 0.4emIEEE, 2013, pp. 41–44. L. Tian, Y. Zhou, and X. Cao, “A new rate-complexity-qp algorithm (rcqa) for hevc intra-picture rate control,” in *2014 International Conference on Computing, Networking and Communications (ICNC)*.1em plus 0.5em minus 0.4emIEEE, 2014, pp. 375–380. W. Wu, J. Liu, and L. Feng, “Novel rate control scheme for low delay video coding of hevc,” *ETRI Journal*, vol. 38, no. 1, pp. 185–194, 2016. B. Hosking, D. Agrafiotis, D. Bull, and N. Eastern, “An adaptive resolution rate control method for intra coding in hevc,” in *2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)*, 2016, pp. 1486–1490. Y. Mao, M. Wang, S. Wang, and S. Kwong, “High efficiency rate control for versatile video coding based on composite cauchy distribution,” *IEEE Transactions on Circuits and Systems for Video Technology*, vol. 32, no. 4, pp. 2371–2384, 2022. C. R. Helmrich, I. Zupancic, J. Brandenburg, V. George, A. Wieckowski, and B. Bross, “Visually optimized two-pass rate control for video coding using the low-complexity xpsnr model,” in *2021 International Conference on Visual Communications and Image Processing (VCIP)*.1em plus 0.5em minus 0.4emIEEE, 2021, pp. 1–5. M. Wang and B. Yan, “Lagrangian multiplier based joint three-layer rate control for h.264/avc,” *IEEE Signal Processing Letters*, vol. 16, no. 8, pp. 679–682, 2009. B. Lee, M. Kim, and T. Q. Nguyen, “A frame-level rate control scheme based on texture and nontexture rate models for high efficiency video coding,” *IEEE Transactions on Circuits and Systems for Video Technology*, vol. 24, no. 3, pp. 465–479, 2013. B. Li, H. Li, L. Li, and J. Zhang, “*λ* domain rate control algorithm for high efficiency video coding,” *IEEE Transactions on Image Processing*, vol. 23, no. 9, pp. 3841–3854, 2014. M. Meddeb, M. Cagnazzo, and B. Pesquet-Popescu, “Region-of-interest-based rate control scheme for high-efficiency video coding,” *APSIPA Transactions on Signal and Information Processing*, vol. 3, 2014. Z. Yang, L. Song, Z. Luo, and X. Wang, “Low delay rate control for hevc,” in *2014 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting*.1em plus 0.5em minus 0.4emIEEE, 2014, pp. 1–5. M. Zhou, X. Wei, S. Wang, S. Kwong, C.-K. Fong, P. H. Wong, W. Y. Yuen, and W. Gao, “Ssim-based global optimization for ctu-level rate control in hevc,” *IEEE Transactions on Multimedia*, vol. 21, no. 8, pp. 1921–1933, 2019. M. Wang, K. N. Ngan, and H. Li, “An efficient frame-content based intra frame rate control for high efficiency video coding,” *IEEE Signal Processing Letters*, vol. 22, no. 7, pp. 896–900, 2015. H. Zeng, A. Yang, K. N. Ngan, and M. Wang, “Perceptual sensitivity-based rate control method for high efficiency video coding,” *Multimedia tools and applications*, vol. 75, no. 17, pp. 10383–10396, 2016. M. Zhou, Y. Zhang, B. Li, and H.-M. Hu, “Complexity-based intra frame rate control by jointing inter-frame correlation for high efficiency video coding,” *Journal of Visual Communication and Image Representation*, vol. 42, pp. 46–64, 2017. S. Li, M. Xu, Z. Wang, and X. Sun, “Optimal bit allocation for ctu level rate control in hevc,” *IEEE Transactions on Circuits and Systems for Video Technology*, vol. 27, no. 11, pp. 2409–2424, 2017. M. Wang, K. N. Ngan, and H. Li, “Low-delay rate control for consistent quality using distortion-based lagrange multiplier,” *IEEE Transactions on Image Processing*, vol. 25, no. 7, pp. 2943–2955, 2016. V. Sanchez, “Rate control for hevc intra-coding based on piecewise linear approximations,” in *2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)*, 2018, pp. 1782–1786. Y. Gong, S. Wan, K. Yang, H. R. Wu, and Y. Liu, “Temporal-layer-motivated lambda domain picture level rate control for random-access configuration in h.265/hevc,” *IEEE Transactions on Circuits and Systems for Video Technology*, vol. 29, no. 1, pp. 156–170, 2019. K. R. Perez-Daniel and V. Sanchez, “Luma-aware multi-model rate-control for hdr content in hevc,” in *2017 IEEE International Conference on Image Processing (ICIP)*, 2017, pp. 1022–1026. H. Guo, C. Zhu, S. Li, and Y. Gao, “Optimal bit allocation at frame level for rate control in hevc,” *IEEE Transactions on Broadcasting*, vol. 65, no. 2, pp. 270–281, 2018. W. Li, P. Ren, E. Zhang, and F. Zhao, “Rate control for hevc intra-coding with a ctu-dependent distortion model,” *Signal, Image and Video Processing*, vol. 13, no. 1, pp. 17–25, 2019. J. Mir, D. S. Talagala, and A. Fernando, “Optimization of hevc *λ*-domain rate control algorithm for hdr video,” in *2018 IEEE International Conference on Consumer Electronics (ICCE)*, 2018, pp. 1–4. M. Zhou, X. Wei, S. Wang, S. Kwong, C.-K. Fong, P. H. Wong, and W. Y. Yuen, “Global rate-distortion optimization-based rate control for hevc hdr coding,” *IEEE Transactions on Circuits and Systems for Video Technology*, vol. 30, no. 12, pp. 4648–4662, 2019. W. Lim and D. Sim, “A perceptual rate control algorithm based on luminance adaptation for hevc encoders,” *Signal, Image and Video Processing*, vol. 14, no. 5, pp. 887–895, 2020. Z. Chen and X. Pan, “An optimized rate control for low-delay h. 265/hevc,” *IEEE Transactions on Image Processing*, vol. 28, no. 9, pp. 4541–4552, 2019. M. Zhou, X. Wei, S. Kwong, W. Jia, and B. Fang, “Just noticeable distortion-based perceptual rate control in hevc,” *IEEE Transactions on Image Processing*, vol. 29, pp. 7603–7614, 2020. M. H. Hyun, B. Lee, and M. Kim, “A frame-level constant bit-rate control using recursive bayesian estimation for versatile video coding,” *IEEE Access*, vol. 8, pp. 227255–227269, 2020. Y. Chen, S. Kwong, M. Zhou, S. Wang, G. Zhu, and Y. Wang, “Intra frame rate control for versatile video coding with quadratic rate-distortion modelling,” in *ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)*.1em plus 0.5em minus 0.4emIEEE, 2020, pp. 4422–4426. Y. Li, Z. Liu, Z. Chen, and S. Liu, “Rate control for versatile video coding,” in *2020 IEEE International Conference on Image Processing (ICIP)*.1em plus 0.5em minus 0.4emIEEE, 2020, pp. 1176–1180. F. Liu and Z. Chen, “Multi-objective optimization of quality in vvc rate control for low-delay video coding,” *IEEE Transactions on Image Processing*, vol. 30, pp. 4706–4718, 2021. Z. Li, F. Pan, K. Lim, X. Lin, and S. Rahardja, “Adaptive rate control for h.264,” in *2004 International Conference on Image Processing, 2004. ICIP ’04.*, vol. 2, 2004, pp. 745–748 Vol.2. J. Si, S. Ma, X. Zhang, and W. Gao, “Adaptive rate control for high efficiency video coding,” in *2012 Visual Communications and Image Processing*, 2012, pp. 1–6. Y.-J. Yoon, H. Kim, S.-h. Jung, D. Jun, Y. Kim, J. S. Choi, and S.-J. Ko, “A new rate control method for hierarchical video coding in hevc,” in *IEEE International symposium on Broadband Multimedia Systems and Broadcasting*.1em plus 0.5em minus 0.4emIEEE, 2012, pp. 1–4. J. Si, S. Ma, and W. Gao, “Adaptive rate control for hevc (jctvc-j0057),” in *Joint Collaborative Team on Video Coding (JCT-VC) 10th Meeting, Stockholm, SE*, July 2012. H. Zhihai, Y. Kim, and S. Mitra, “Low-delay rate control for dct video coding via p-domain source modeling,” *IEEE Trans on CSTV*, vol. 11, no. 8, pp. 813–816, 2001. Z. He, Y. K. Kim, and S. Mitra, “*ρ*-domain source modeling and rate control for video coding and transmission,” in *2001 IEEE International Conference on Acoustics, Speech, and Signal Processing. Proceedings (Cat. No.01CH37221)*, vol. 3, 2001, pp. 1773–1776 vol.3. I.-H. Shin, Y.-L. Lee, and H. Park, “Rate control using linear rate-*ρ* model for h.264,” *Signal Processing: Image Communication*, vol. 19, no. 4, pp. 341–352, 2004. S. Wang, S. Ma, S. Wang, D. Zhao, and W. Gao, “Rate-gop based rate control for high efficiency video coding,” *IEEE Journal of Selected Topics in Signal Processing*, vol. 7, no. 6, pp. 1101–1111, 2013. ——, “Quadratic *ρ*-domain based rate control algorithm for hevc,” *2013 IEEE International Conference on Acoustics, Speech and Signal Processing*, pp. 1695–1699, 2013. T. Biatek, M. Raulet, J.-F. Travers, and O. Deforges, “Efficient quantization parameter estimation in hevc based on *ρ*-domain,” in *European Signal Processing Conference*, 2014. A. A. Ramanand, I. Ahmad, and V. Swaminathan, “A survey of rate control in hevc and shvc video encoding,” in *2017 IEEE International Conference on Multimedia & Expo Workshops (ICMEW)*, 2017, pp. 145–150. J. Wen, M. Fang, and M. Tang, “R-lambda model based rate control with pre-encoding process (jctvc-t0216),” in *Joint Collaborative Team on Video Coding (JCT-VC) 20th Meeting, vGeneva, CH*, Feb. 2015. M. Tang, J. Wen, and Y. Han, “A generalized rate-distortion-*λ* model based HEVC rate control algorithm,” *CoRR*, vol. abs/1911.00639, 2019. [Online]. Available: <http://arxiv.org/abs/1911.00639> T. Li, L. Yu, H. Wang, and Z. Kuang, “A bit allocation method based on inter-view dependency and spatio-temporal correlation for multi-view texture video coding,” *IEEE Transactions on Broadcasting*, vol. 67, no. 1, pp. 159–173, 2021. H. Guo, C. Zhu, Y. Gao, and S. Song, “A frame-level rate control scheme for low delay video coding in hevc,” in *2017 IEEE 19th International Workshop on Multimedia Signal Processing (MMSP)*, 2017, pp. 1–6. Z. Liu, Z. Chen, and Y. Li, “Ahg10: Quality dependency factor based rate control for vvc (jvet-m0600),” in *Joint Video Experts Team (JVET) 13th Meeting, Marrakech, MA*, Jan. 2019. F. Liu, Z. Liu, Y. Li, and Z. Chen, “Ahg10: Extension of rate control to support random access configuration with gop size of 32 (jvet-t0062),” in *Joint Video Experts Team (JVET) 20th Meeting by teleconference*, Oct. 2020. G. Ren, J. Jia, J. Wang, and Z. Chen, “Ahg10: An improved vvc rate control scheme ( jvet-y0105),” in *Joint Video Experts Team (JVET) 21th Meeting, by teleconference*, Jan. 2022. W. Gao, S. Kwong, and Y. Jia, “Joint machine learning and game theory for rate control in high efficiency video coding,” *IEEE Transactions on Image Processing*, vol. 26, no. 12, pp. 6074–6089, 2017. J.-H. Hu, W.-H. Peng, and C.-H. Chung, “Reinforcement learning for hevc/h. 265 intra-frame rate control,” in *2018 IEEE International Symposium on Circuits and Systems (ISCAS)*.1em plus 0.5em minus 0.4emIEEE, 2018, pp. 1–5. M. Zhou, X. Wei, S. Kwong, W. Jia, and B. Fang, “Rate control method based on deep reinforcement learning for dynamic video sequences in hevc,” *IEEE Transactions on Multimedia*, vol. 23, pp. 1106–1121, 2020. X. Wei, M. Zhou, S. Kwong, H. Yuan, and T. Xiang, “Joint reinforcement learning and game theory bitrate control method for 360-degree dynamic adaptive streaming,” in *ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)*.1em plus 0.5em minus 0.4emIEEE, 2021, pp. 4230–4234. F. Raufmehr, M. R. Salehi, and E. Abiri, “A frame-level mlp-based bit-rate controller for real-time video transmission using vvc standard,” *Journal of Real-Time Image Processing*, vol. 18, no. 3, pp. 751–763, 2021. R. Farhad, S. M. Reza, and A. Ebrahim, “A neural network-based video bit-rate control algorithm for variable bit-rate applications of versatile video coding standard,” *Signal Processing: Image Communication*, vol. 96, p. 116317, 2021. M. Wang, J. Zhang, L. Huang, and J. Xiong, “Machine learning-based rate distortion modeling for vvc/h. 266 intra-frame,” in *2021 IEEE International Conference on Multimedia and Expo (ICME)*.1em plus 0.5em minus 0.4emIEEE, 2021, pp. 1–6. L. Xu, S. Ma, D. Zhao, and W. Gao, “Rate control for scalable video model,” in *Visual Communications and Image Processing 2005*, S. Li, F. Pereira, H.-Y. Shum, and A. G. Tescher, Eds., vol. 5960, International Society for Optics and Photonics.1em plus 0.5em minus 0.4emSPIE, 2005, pp. 525 – 534. Y. Liu, Z. G. Li, and Y. C. Soh, “Rate control of h.264/avc scalable extension,” *IEEE Transactions on Circuits and Systems for Video Technology*, vol. 18, no. 1, pp. 116–121, 2008. Y. Pitrey, M. Babel, O. Déforges, and J. Viéron, “*ρ*-domain based rate control scheme for spatial, temporal, and quality scalable video coding,” in *Visual Communications and Image Processing 2009*, M. Rabbani and R. L. Stevenson, Eds., vol. 7257, International Society for Optics and Photonics.1em plus 0.5em minus 0.4emSPIE, 2009, pp. 25 – 32. S. Hu, H. Wang, S. Kwong, T. Zhao, and C.-C. J. Kuo, “Rate control optimization for temporal-layer scalable video coding,” *IEEE Transactions on Circuits and Systems for Video Technology*, vol. 21, no. 8, pp. 1152–1162, 2011. Y. Li, D. Liu, and Z. Chen, “Ahg9-related: Cnn-based lambda-domain rate control for intra frames (jvet-m0215),” in *Joint Video Experts Team (JVET) 13th Meeting, Marrakech, MA*, Jan. 2019. G. Lu, W. Ouyang, D. Xu, X. Zhang, C. Cai, and Z. Gao, “Dvc: An end-to-end deep video compression framework,” in *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*, 2019, pp. 11006–11015. X. Jing, L.-P. Chau, and W.-C. Siu, “Frame complexity-based rate-quantization model for h.264/avc intraframe rate control,” *IEEE Signal Processing Letters*, vol. 15, pp. 373–376, 2008. Z. Liu, H. Yan, L. Shen, Y. Wang, and Z. Zhang, “A motion attention model based rate control algorithm for h.264/avc,” in *2009 Eighth IEEE/ACIS International Conference on Computer and Information Science*, 2009, pp. 568–573. L. Shen, Z. Liu, and Z. Zhang, “A novel h.264 rate control algorithm with consideration of visual attention,” *Multimed Tools Appl*, vol. 63, p. 709–727, 2013. J. Ribas-Corbera and S. Lei, “Rate control in dct video coding for low-delay communications,” *IEEE Transactions on Circuits and Systems for Video Technology*, vol. 9, no. 1, pp. 172–185, 1999. C.-W. Seo, J.-H. Moon, and J.-K. Han, “Rate control for consistent objective quality in high efficiency video coding,” *IEEE transactions on image processing*, vol. 22, no. 6, pp. 2442–2454, 2013. W.-N. Lie, C.-F. Chen, and T. C.-I. Lin, “Two-pass rate-distortion optimized rate control technique for h. 264/avc video,” in *Visual Communications and Image Processing 2005*, vol. 5960.1em plus 0.5em minus 0.4emInternational Society for Optics and Photonics, 2005, p. 596035. S. Wang, A. Rehman, K. Zeng, and Z. Wang, “Ssim-inspired two-pass rate control for high efficiency video coding,” in *2015 IEEE 17th International Workshop on Multimedia Signal Processing (MMSP)*, 2015, pp. 1–5. I. Zupancic, M. Naccari, M. Mrak, and E. Izquierdo, “Two-pass rate control for improved quality of experience in uhdtv delivery,” *IEEE Journal of Selected Topics in Signal Processing*, vol. 11, no. 1, pp. 167–179, 2017. C. Meenderinck, A. Azevedo, B. Juurlink, M. Alvarez Mesa, and A. Ramirez, “Parallel scalability of video decoders,” *Journal of Signal Processing Systems*, vol. 57, no. 2, pp. 173–194, 2009. Z. Ling, X. J. Jiang, and J. J. Liu, “Efficiency of dynamic gop length in video stream,” in *Advanced Materials Research*, vol. 765.1em plus 0.5em minus 0.4emTrans Tech Publ, 2013, pp. 879–884. M. Roitzsch, “Slice-balancing h. 264 video encoding for improved scalability of multicore decoding,” in *Proceedings of the 7th ACM & IEEE international conference on Embedded software*, 2007, pp. 269–278. C. Blumenberg, D. Palomino, S. Bampi, and B. Zatt, “Adaptive content-based tile partitioning algorithm for the hevc standard,” in *2013 Picture Coding Symposium (PCS)*.1em plus 0.5em minus 0.4emIEEE, 2013, pp. 185–188. M. Koziri, P. K. Papadopoulos, and T. Loukopoulos, “Combining tile parallelism with slice partitioning in video coding,” in *Applications of Digital Image Processing XLI*, vol. 10752.1em plus 0.5em minus 0.4emInternational Society for Optics and Photonics, 2018, p. 107520N. M. Karczewicz, N. Hu, J. Taquet, C.-Y. Chen, K. Misra, K. Andersson, P. Yin, T. Lu, E. François, and J. Chen, “Vvc in-loop filters,” *IEEE Transactions on Circuits and Systems for Video Technology*, vol. 31, no. 10, pp. 3907–3925, 2021. E. B. Van Der Tol, E. G. Jaspers, and R. H. Gelderblom, “Mapping of h. 264 decoding on a multiprocessor architecture,” in *Image and Video Communications and Processing 2003*, vol. 5022.1em plus 0.5em minus 0.4emInternational Society for Optics and Photonics, 2003, pp. 707–718. M. Alvarez-Mesa, V. George, T. Schierl, and B. Juurlink, “Improving parallelization efficiency of wpp using overlapped wavefront,” *Joint Collaborative Team on Video Coding (JCT-VC), Document JCTVC-J0425, Stockholm*, vol. 3, 2012. G. Clare and F. Henry, “An hevc transcoder converting non-parallel bitstreams to/from wpp,” *Joint Collaborative Team on Video Coding (JCT-VC), Document JCTVC-J0032, Stockholm*, 2012. J. Sainio, A. Mercat, and J. Vanne, “Parallel implementations of lambda domain and r-lambda model rate control schemes in a practical hevc encoder,” in *2021 Data Compression Conference (DCC)*, 2021, pp. 368–368. P. Xu, K. Chen, J. Sun, X. Ji, and Z. Guo, “An adaptive intra-frame parallel method based on complexity estimation for hevc,” in *2016 Visual Communications and Image Processing (VCIP)*, 2016, pp. 1–4. S. Wang, S. Zhang, J. Wang, L. Chang, L. Feng, and X. Fan, “Hardware architecture design of hevc entropy decoding,” in *2021 IEEE Intl Conf on Parallel & Distributed Processing with Applications, Big Data & Cloud Computing, Sustainable Computing & Communications, Social Computing & Networking (ISPA/BDCloud/SocialCom/SustainCom)*, 2021, pp. 1143–1150. S. Xu, M. Yu, S. Fang, Z. Peng, X. Wang, and G. Jiang, “New rate control optimization algorithm for hevc aiming at discontinuous scene,” *WSEAS Transactions on computers*, vol. 14, no. 1, pp. 598–606, 2015. F. Song, C. Zhu, Y. Liu, and Y. Zhou, “A new gop level bit allocation method for hevc rate control,” in *2017 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB)*.1em plus 0.5em minus 0.4emIEEE, 2017, pp. 1–4. L. Li, B. Li, H. Li, and C. W. Chen, “*λ* domain optimal bit allocation algorithm for high efficiency video coding,” *IEEE Transactions on Circuits and Systems for Video Technology*, vol. 28, no. 1, pp. 130–142, 2016. H. Schwarz, D. Marpe, and T. Wiegand, “Overview of the scalable video coding extension of the h.264/avc standard,” *IEEE Transactions on Circuits and Systems for Video Technology*, vol. 17, no. 9, pp. 1103–1120, 2007. M. Flierl and B. Girod, “Generalized b pictures and the draft h.264/avc video-compression standard,” *IEEE Transactions on Circuits and Systems for Video Technology*, vol. 13, no. 7, pp. 587–597, 2003. K. Hwang and N. Jotwani, *Advanced computer architecture: parallelism, scalability, programmability*.1em plus 0.5em minus 0.4emMcGraw-Hill New York, 1993, vol. 199. R. E. Bryant, O. David Richard, and O. David Richard, *Computer systems: a programmer’s perspective*.1em plus 0.5em minus 0.4emPrentice Hall Upper Saddle River, 2003, vol. 2. C. Ranger, R. Raghuraman, A. Penmetsa, G. Bradski, and C. Kozyrakis, “Evaluating mapreduce for multi-core and multiprocessor systems,” in *2007 IEEE 13th International Symposium on High Performance Computer Architecture*.1em plus 0.5em minus 0.4emIeee, 2007, pp. 13–24. F. Bossen, J. Boyce, K. Suehring, X. Li, and V. Seregin, “Jvet common test conditions and software reference configurations for sdr video (joint video experts team (jvet) of itu-t sg 16 wp 3 and iso/iec jtc 1/sc 29/wg 11, jvet-n1010-v1),” in *Joint Video Exploration Team (JVET) 14th Meeting, Geneva, CH*, Apr. 2019. F. Bossen, “Common conditions and software reference configurations (jvt-j1100),” in *Joint Collaborative Team on Video Coding (JCT-VC) 8th Meeting*, Feb. 2012. G. Bjontegaard, “Calculation of average psnr differences between rd-curves,” *VCEG-M33*, 2001. L. Bai, L. Song, R. Xie, L. Zhang, and Z. Luo, “Rate control model for high dynamic range video,” in *2017 IEEE Visual Communications and Image Processing (VCIP)*, 2017, pp. 1–4. J. Kim, S.-H. Bae, and M. Kim, “An hevc-compliant perceptual video coding scheme based on jnd models for variable block-sized transform kernels,” *IEEE Transactions on Circuits and Systems for Video Technology*, vol. 25, no. 11, pp. 1786–1800, 2015. H. Wei, X. Zhou, W. Zhou, C. Yan, Z. Duan, and N. Shan, “Visual saliency based perceptual video coding in hevc,” in *2016 IEEE International Symposium on Circuits and Systems (ISCAS)*, 2016, pp. 2547–2550. “Jm 9.4.” [Online]. Available: <http://iphome.hhi.de/suehring/tml/download> M. Zhou, X. Wei, C. Ji, T. Xiang, and B. Fang, “Optimum quality control algorithm for versatile video coding,” *IEEE Transactions on Broadcasting*, pp. 1–12, 2022. X. Meng, C. Jia, X. Zhang, S. Wang, and S. Ma, “Spatio-temporal correlation guided geometric partitioning for versatile video coding,” *IEEE Transactions on Image Processing*, vol. 31, pp. 30–42, 2022. Z. Huang, K. Lin, C. Jia, S. Wang, and S. Ma, “Beyond vvc: Towards perceptual quality optimized video compression using multi-scale hybrid approaches,” in *2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)*, 2021, pp. 1866–1869. M. Zhou, X. Wei, W. Jia, and S. Kwong, “Joint decision tree and visual feature rate control optimization for vvc uhd coding,” *IEEE Transactions on Image Processing*, vol. 32, pp. 219–234, 2023. B. Bross, J. Chen, J.-R. Ohm, G. J. Sullivan, and Y.-K. Wang, “Developments in international video coding standardization after avc, with an overview of versatile video coding (vvc),” *Proceedings of the IEEE*, vol. 109, no. 9, pp. 1463–1493, 2021. M. Benjak, H. Meuel, T. Laude, and J. Ostermann, “Enhanced machine learning-based inter coding for vvc,” in *2021 International Conference on Artificial Intelligence in Information and Communication (ICAIIC)*, 2021, pp. 021–025. M. Saldanha, G. Sanchez, C. Marcon, and L. Agostini, “Configurable fast block partitioning for vvc intra coding using light gradient boosting machine,” *IEEE Transactions on Circuits and Systems for Video Technology*, pp. 1–1, 2021. K. Fischer, C. Herglotz, and A. Kaup, “On versatile video coding at uhd with machine-learning-based super-resolution,” in *2020 Twelfth International Conference on Quality of Multimedia Experience (QoMEX)*, 2020, pp. 1–6. N. Le, H. Zhang, F. Cricri, R. Ghaznavi-Youvalari, H. R. Tavakoli, and E. Rahtu, “Learned image coding for machines: A content-adaptive approach,” in *2021 IEEE International Conference on Multimedia and Expo (ICME)*, 2021, pp. 1–6. H. Wang, L. Yu, J. Liang, H. Yin, T. Li, and S. Wang, “Hierarchical predictive coding-based jnd estimation for image compression,” *IEEE Transactions on Image Processing*, vol. 30, pp. 487–500, 2021. Y. Li and X. Mou, “Joint optimization for ssim-based ctu-level bit allocation and rate distortion optimization,” *IEEE Transactions on Broadcasting*, vol. 67, no. 2, pp. 500–511, 2021. A. Nakhaei and M. Rezaei, “Scene-level two-pass video rate controller for h. 265/hevc standard,” *Multimedia Tools and Applications*, vol. 80, no. 5, pp. 7023–7038, 2021. J. Chen, X. Luo, M. Hu, D. Wu, and Y. Zhou, “Sparkle: User-aware viewport prediction in 360-degree video streaming,” *IEEE Transactions on Multimedia*, vol. 23, pp. 3853–3866, 2021. F. Chiariotti, “A survey on 360-degree video: Coding, quality of experience and streaming,” *Comput. Commun.*, vol. 177, pp. 133–155, 2021. T. Zhao, J. Lin, Y. Song, X. Wang, and Y. Niu, *Game Theory-Driven Rate Control for 360-Degree Video Coding*.1em plus 0.5em minus 0.4emNew York, NY, USA: Association for Computing Machinery, 2021, p. 3998–4006. V. Sanchez, “Rate control for predictive transform screen content video coding based on ransac,” *IEEE Transactions on Circuits and Systems for Video Technology*, vol. 31, no. 11, pp. 4422–4438, 2021. K. Perez-Daniel, F. Garcia-Ugalde, and V. Sanchez, “Scene-based imperceptible-visible watermarking for hdr video content,” in *2019 7th International Workshop on Biometrics and Forensics (IWBF)*, 2019, pp. 1–6. H. Yuan, Q. Wang, Q. Liu, J. Huo, and P. Li, “Hybrid distortion-based rate-distortion optimization and rate control for h.265/hevc,” *IEEE Transactions on Consumer Electronics*, vol. 67, no. 2, pp. 97–106, 2021. Q. Zhang, H. Meng, Z. Feng, and Z. Han, “Resource scheduling of time-sensitive services for b5g/6g connected automated vehicles,” *IEEE Internet of Things Journal*, pp. 1–1, 2022. Y. Cui, Q. Zhang, Z. Feng, Z. Wei, C. Shi, J. Fan, and P. Zhang, “Dual identities enabled low-latency visual networking for uav emergency communication,” in *GLOBECOM 2022 - 2022 IEEE Global Communications Conference*, 2022, pp. 474–479. A. M. Girgis, J. Park, M. Bennis, and M. Debbah, “Predictive control and communication co-design via two-way gaussian process regression and aoi-aware scheduling,” *IEEE Transactions on Communications*, vol. 69, no. 10, pp. 7077–7093, 2021. [![image](author_photo/Xuekai_Wei)]Xuekai Wei received the bachelor’s degree in Electronic Information Science and Technology from Shandong University in 2014, the master’s degree in Communication and Information Systems from Shandong University in 2017, and the Ph.D. degree in computer science from the City University of Hong Kong, Hong Kong, China in 2021. He was a post-doctoral at the School of Artificial Intelligence, Beijing Normal University, Beijing, China, from 2021 to 2022. He is currently an Associate Professor with the School of Computer Science, Chongqing University, Chongqing, China. His current research interests include video coding/transmission and machine learning. [![image](author_photo/Mingliang_Zhou)]Mingliang Zhou received the Ph.D. degree in computer science from Beihang University, Beijing, China, in 2017. He was a Postdoctoral Researcher with the Department of Computer Science, City University of Hong Kong, Hong Kong, China, from September 2017 to September 2019. He was a Postdoctoral Fellow with the State Key Laboratory of Internet of Things for Smart City, University of Macau, Macau, China, from October 2019 to October 2021. He is currently an Associate Professor with the School of Computer Science, Chongqing University, Chongqing, China. His research interests include image and video coding, perceptual image processing, multimedia signal processing, rate control, multimedia communication, machine learning, and optimization. [![image](./author_photo/Heqiang_Wang)] Heqiang Wang is currently a M.S. student of Computer Science in Chongqing University. He received his bachelor degree of Mechanical Design Manufacturing and Automation in Southwest University. His research interests include perceptual image quality assessment and video coding. [![image](./author_photo/Haoyan_Yang)] Haoyan Yang is currently a M.S. student of Electronic Information in Chongqing University. He received his bachelor degree of Software Engineering in Nanjing Audit University. His research interests is video coding. [![image](./author_photo/Lei_Chen)] Lei Chen is currently a M.S. student of Computer Science in Chongqing University. He received his bachelor degree of Medical Information Engineering in Zhejiang Chinese Medical University. His research interests include image super-resolution and video coding. [![image](author_photo/Sam_Kwong)]Sam Kwong (M’93-SM’04-F’14) received the B.S. degree from the State University of New York at Buffalo, Buffalo, NY, USA, in 1983, the M.S. degree in electrical engineering from the University of Waterloo, Waterloo, ON, Canada, in 1985, and the Ph.D. degree from the University of Hagen, Hagen, Germany, in 1996. From 1985 to 1987, he was a Diagnostic Engineer with Control Data Canada, Mississauga, ON, Canada. He joined Bell Northern Research Canada, Ottawa, ON, Canada, as a member of Scientific Staff. In 1990, he became a Lecturer in the Department of Electronic Engineering, The City University of Hong Kong, Hong Kong, China, where he is currently a Professor with the Department of Computer Science. His research interests include video and image coding, and evolutionary algorithms. Prof. Kwong serves as an Associate Editor for the IEEE Transactions on Industrial Electronics and the IEEE Transactions on Industrial Informatics. --- 1. This work was supported in part by the National Natural Science Foundation of China under Grant 62176027; in part by the General Program of the National Natural Science Foundation of Chongqing under Grant cstc2020jcyj-msxmX0790; in part by the Human Resources and Social Security Bureau Project of Chongqing under Grant cx2020073; in part by the Hong Kong GRF-RGC General Research Fund under Grant 11209819 and Grant CityU 9042816; in part by the Hong Kong GRF-RGC General Research Fund under Grant 11203820 and Grant CityU 9042598; and in part by the Hong Kong Innovation and Technology Commission, InnoHK Project Centre for Intelligent Multidimensional Data Analysis (CIMDA). (\*Corresponding author: Mingliang Zhou.)[↩](#fnref1) 2. Xuekai Wei, Mingliang Zhou, Heqiang Wang, Haoyan Yang, and Lei Chen are with the School of Computer Science, Chongqing University, Chongqing 400044, China (e-mail: [email protected], [email protected], [email protected], [email protected], [email protected]).[↩](#fnref2) 3. Sam Kwong is with the Department of Computer Science, City University of Hong Kong, Kowloon 999077, Hong Kong (e-mail: [email protected]).[↩](#fnref3)
arxiv_0000319
.868756 0.498006 0.868701 0.49792 0.868647 0.497834 0.868592 0.497749 0.868538 0.497663 0.868483 0.497578 0.868429 0.497492 0.868374 0.497407 0.86832 0.497321 0.868265 0.497236 0.86821 0.49715 0.868156 0.497065 0.868101 0.49698 0.868046 0.496894 0.867991 0.496809 0.867936 0.496724 0.867881 0.496638 0.867826 0.496553 0.867772 0.496468 0.867717 0.496383 0.867662 0.496297 0.867606 0.496212 0.867551 0.496127 0.867496 0.496042 0.867441 0.495957 0.867386 0.495872 0.867331 0.495787 0.867276 0.495702 0.86722 0.495617 0.867165 0.495532 0.86711 0.495447 0.867054 0.495362 0.866999 0.495277 0.866943 0.495192 0.866888 0.495107 0.866833 0.495022 0.866777 0.494937 0.866721 0.494852 0.866666 0.494768 0.86661 0.494683 0.866555 0.494598 0.866499 0.494513 0.866443 0.494429 0.866388 0.494344 0.866332 0.494259 0.866276 0.494175 0.86622 0.49409 0.866164 0.494006 0.866108 0.493921 0.866052 0.493836 0.865996 0.493752 0.865941 0.493667 0.865884 0.493583 0.865828 0.493499 0.865772 0.493414 0.865716 0.49333 0.86566 0.493245 0.865604 0.493161 0.865548 0.493077 0.865491 0.492992 0.865435 0.492908 0.865379 0.492824 0.865323 0.49274 0.865266 0.492655 0.86521 0.492572 0.865155 0.492488 0.865098 0.492404 0.865042 0.49232 0.864985 0.492236 0.864929 0.492152 0.864872 0.492068 0.864816 0.491984 0.864759 0.4919 0.864702 0.491816 0.864646 0.491732 0.864589 0.491648 0.864532 0.491564 0.864475 0.49148 0.864419 0.491396 0.864362 0.491312 0.864305 0.491229 0.864248 0.491145 0.864191 0.491061 0.864134 0.490977 0.864077 0.490894 0.86402 0.49081 0.863963 0.490726 0.863906 0.490643 0.863849 0.490559 0.863792 0.490476 0.863734 0.490392 0.863677 0.490309 0.863619 0.490225 0.863562 0.490142 0.863505 0.490058 0.863447 0.489975 0.86339 0.489891 0.863333 0.489808 0.863275 0.489725 0.863218 0.489641 0.86316 0.489558 0.863103 0.489475 0.863045 0.489392 0.862987 0.489308 0.86293 0.489225 0.862872 0.489142 0.862814 0.489059 0.862757 0.488976 0.862699 0.488893 0.862641 0.48881 0.862583 0.488727 0.862526 0.488644 0.862468 0.488561 0.86241 0.488478 0.862352 0.488395 0.862294 0.488312 0.862236 0.488229 0.862178 0.488146 0.86212 0.488063 0.862062 0.48798 0.862004 0.487898 0.861946 0.487815 0.861887 0.487732 0.861829 0.487649 0.861771 0.487567 0.861713 0.487484 0.861654 0.487401 0.861596 0.487319 0.861538 0.487236 0.861479 0.487154 0.861421 0.487071 0.861362 0.486989 0.861304 0.486906 0.861246 0.486824 0.861187 0.486741 0.861128 0.486659 0.86107 0.486577 0.861011 0.486494 0.860953 0.486412 0.860894 0.486331 0.860837 0.486249 0.860778 0.486166 0.860719 0.486084 0.86066 0.486002 0.860601 0.48592 0.860543 0.485838 0.860484 0.485756 0.860425 0.485674 0.860366 0.485592 0.860307 0.48551 0.860248 0.485428 0.860189 0.485346 0.86013 0.485264 0.860071 0.485182 0.860012 0.4851 0.859952 0.485018 0.859893 0.484936 0.859834 0.484855 0.859775 0.484773 0.859716 0.484691 0.859656 0.484609 0.859597 0.484528 0.859538 0.484446 0.859478 0.484364 0.859419 0.484283 0.859359 0.484201 0.8593 0.48412 0.85924 0.484038 0.859181 0.483957 0.859121 0.483875 0.859062 0.483794 0.859002 0.483713 0.858942 0.483631 0.858883 0.48355 0.858823 0.483469 0.858763 0.483387 0.858704 0.483306 0.858644 0.483225 0.858584 0.483144 0.858524 0.483062 0.858464 0.482981 0.858404 0.4829 0.858344 0.482819 0.858284 0.482738 0.858224 0.482657 0.858164 0.482576 0.858104 0.482495 0.858044 0.482414 0.857984 0.482333 0.857924 0.482252 0.857864 0.482172 0.857804 0.482091 0.857743 0.48201 0.857683 0.481929 0.857623 0.481849 0.857563 0.481768 0.857502 0.481687 0.857442 0.481607 0.857382 0.481526 0.857321 0.481445 0.857261 0.481365 0.8572 0.481284 0.85714 0.481204 0.857079 0.481123 0.857019 0.481043 0.856958 0.480964 0.856899 0.480883 0.856838 0.480803 0.856778 0.480723 0.856717 0.480642 0.856656 0.480562 0.856596 0.480482 0.856535 0.480402 0.856474 0.480322 0.856413 0.480242 0.856352 0.480161 0.856291 0.480081 0.85623 0.480001 0.85617 0.479921 0.856109 0.479841 0.856048 0.479762 0.855987 0.479682 0.855926 0.479602 0.855864 0.479522 0.855803 0.479442 0.855742 0.479362 0.855681 0.479283 0.85562 0.479203 0.855559 0.479123 0.855497 0.479044 0.855436 0.478964 0.855375 0.478884 0.855314 0.478805 0.855252 0.478725 0.855191 0.478646 0.855129 0.478566 0.855068 0.478487 0.855007 0.478408 0.854945 0.478328 0.854884 0.478249 0.854822 0.47817 0.854761 0.47809 0.854699 0.478011 0.854637 0.477932 0.854576 0.477853 0.854514 0.477774 0.854452 0.477694 0.854391 0.477615 0.854329 0.477536 0.854267 0.477457 0.854205 0.477378 0.854144 0.477299 0.854082 0.477221 0.85402 0.477142 0.853958 0.477063 0.853896 0.476984 0.853834 0.476905 0.853772 0.476826 0.85371 0.476748 0.853648 0.476669 0.853586 0.47659 0.853524 0.476512 0.853462 0.476434 0.853401 0.476356 0.853339 0.476277 0.853277 0.476199 0.853215 0.476121 0.853153 0.476042 0.853091 0.475964 0.853028 0.475885 0.852966 0.475807 0.852904 0.475729 0.852841 0.475651 0.852779 0.475572 0.852717 0.475494 0.852654 0.475416 0.852592 0.475338 0.852529 0.47526 0.852467 0.475182 0.852404 0.475104 0.852342 0.475026 0.852279 0.474948 0.852217 0.47487 0.852154 0.474792 0.852092 0.474715 0.852029 0.474637 0.851966 0.474559 0.851904 0.474481 0.851841 0.474404 0.851778 0.474326 0.851715 0.474249 0.851653 0.474171 0.85159 0.474093 0.851527 0.474016 0.851464 0.473938 0.851401 0.473861 0.851338 0.473784 0.851275 0.473706 0.851212 0.473629 0.851149 0.473552 0.851086 0.473474 0.851023 0.473397 0.85096 0.47332 0.850897 0.473243 0.850834 0.473166 0.850771 0.473089 0.850708 0.473012 0.850645 0.472935 0.850582 0.472858 0.850518 0.472781 0.850455 0.472704 0.850392 0.472627 0.850329 0.472552 0.850267 0.472475 0.850204 0.472398 0.850141 0.472322 0.850077 0.472245 0.850014 0.472168 0.84995 0.472092 0.849887 0.472015 0.849824 0.471939 0.84976 0.471862 0.849697 0.471786 0.849633 0.471709 0.84957 0.471633 0.849506 0.471557 0.849443 0.471481 0.849379 0.471404 0.849316 0.471328 0.849252 0.471252 0.849188 0.471176 0.849125 0.4711 0.849061 0.471024 0.848997 0.470948 0.848934 0.470872 0.84887 0.470796 0.848806 0.47072 0.848742 0.470644 0.848679 0.470568 0.848615 0.470493 0.848551 0.470417 0.848487 0.470341 0.848423 0.470266 0.848359 0.47019 0.848295 0.470114 0.848232 0.470039 0.848168 0.469963 0.848104 0.469888 0.84804 0.469813 0.847976 0.469737 0.847912 0.469662 0.847848 0.469587 0.847784 0.469511 0.84772 0.469436 0.847655 0.469361 0.847591 0.469286 0.847527 0.469211 0.847463 0.469137 0.8474 0.469062 0.847336 0.468987 0.847272 0.468912 0.847208 0.468837 0.847144 0.468763 0.847079 0.468688 0.847015 0.468613 0.846951 0.468538 0.846887 0.468464 0.846822 0.468389 0.846758 0.468314 0.846694 0.46824 0.846629 0.468165 0.846565 0.468091 0.846501 0.468017 0.846436 0.467942 0.846372 0.467868 0.846307 0.467794 0.846243 0.467719 0.846178 0.467645 0.846114 0.467571 0.846049 0.467497 0.845985 0.467423 0.84592 0.467349 0.845856 0.467275 0.845791 0.467201 0.845727 0.467127 0.845662 0.467053 0.845598 0.466979 0.845533 0.466906 0.845468 0.466832 0.845404 0.466758 0.845339 0.466685 0.845274 0.466611 0.84521 0.466538 0.845145 0.466464 0.84508 0.466391 0.845016 0.466317 0.844951 0.466244 0.844886 0.466171 0.844821 0.466099 0.844758 0.466025 0.844694 0.465952 0.844629 0.465879 0.844564 0.465806 0.844499 0.465733 0.844434 0.46566 0.84437 0.465587 0.844305 0.465514 0.84424 0.465441 0.844175 0.465368 0.84411 0.465296 0.844045 0.465223 0.84398 0.46515 0.843915 0.465078 0.84385 0.465005 0.843785 0.464933 0.843721 0.46486 0.843656 0.464788 0.843591 0.464715 0.843526 0.464643 0.843461 0.464571 0.843396 0.464498 0.843331 0.464426 0.843266 0.464354 0.843201 0.464282 0.843136 0.46421 0.843071 0.464138 0.843005 0.464066 0.84294 0.463994 0.842875 0.463922 0.84281 0.46385 0.842745 0.463779 0.84268 0.463707 0.842615 0.463635 0.84255 0.463564 0.842485 0.463493 0.842421 0.463422 0.842356 0.46335 0.842291 0.463279 0.842226 0.463208 0.842161 0.463136 0.842096 0.463065 0.842031 0.462994 0.841965 0.462923 0.8419 0.462852 0.841835 0.462781 0.84177 0.46271 0.841705 0.462639 0.84164 0.462568 0.841574 0.462497 0.841509 0.462426 0.841444 0.462355 0.841379 0.462285 0.841314 0.462214 0.841248 0.462143 0.841183 0.462073 0.841118 0.462002 0.841053 0.461932 0.840987 0.461861 0.840922 0.461791 0.840857 0.461721 0.840792 0.461651 0.840726 0.46158 0.840661 0.46151 0.840596 0.46144 0.840531 0.46137 0.840465 0.4613 0.8404 0.46123 0.840335 0.46116 0.840269 0.461092 0.840206 0.461022 0.840141 0.460953 0.840076 0.460883 0.84001 0.460813 0.839945 0.460744 0.83988 0.460674 0.839815 0.460605 0.839749 0.460536 0.839684 0.460466 0.839619 0.460397 0.839554 0.460328 0.839488 0.460259 0.839423 0.46019 0.839358 0.460121 0.839292 0.460052 0.839227 0.459983 0.839162 0.459914 0.839097 0.459845 0.839031 0.459776 0.838966 0.459708 0.838901 0.459639 0.838836 0.45957 0.83877 0.459502 0.838705 0.459433 0.83864 0.459365 0.838575 0.459297 0.83851 0.459228 0.838444 0.45916 0.838379 0.459092 0.838314 0.459025 0.838251 0.458957 0.838185 0.458889 0.83812 0.458821 0.838055 0.458753 0.83799 0.458685 0.837925 0.458618 0.83786 0.45855 0.837794 0.458482 0.837729 0.458415 0.837664 0.458347 0.837599 0.45828 0.837534 0.458212 0.837469 0.458145 0.837404 0.458078 0.837339 0.458011 0.837273 0.457943 0.837208 0.457876 0.837143 0.457809 0.837078 0.457742 0.837013 0.457675 0.836948 0.457609 0.836883 0.457542 0.836818 0.457475 0.836753 0.457408 0.836688 0.457342 0.836623 0.457275 0.836558 0.457209 0.836493 0.457144 0.83643 0.457077 0.836365 0.457011 0.8363 0.456945 0.836235 0.456879 0.83617 0.456813 0.836106 0.456747 0.836041 0.456681 0.835976 0.456615 0.835911 0.456549 0.835846 0.456483 0.835781 0.456418 0.835717 0.456352 0.835652 0.456286 0.835587 0.456221 0.835522 0.456156 0.835457 0.45609 0.835393 0.456025 0.835328 0.45596 0.835263 0.455895 0.835199 0.455829 0.835134 0.455764 0.835069 0.455699 0.835005 0.455635 0.83494 0.45557 0.834875 0.455506 0.834813 0.455442 0.834749 0.455377 0.834684 0.455313 0.83462 0.455248 0.834555 0.455184 0.834491 0.455119 0.834426 0.455055 0.834362 0.454991 0.834297 0.454927 0.834233 0.454863 0.834169 0.454799 0.834104 0.454735 0.83404 0.454671 0.833976 0.454607 0.833912 0.454544 0.833847 0.45448 0.833783 0.454416 0.833719 0.454353 0.833655 0.454289 0.833591 0.454226 0.833526 0.454163 0.833462 0.4541 0.833398 0.454036 0.833334 0.453975 0.833272 0.453912 0.833209 0.453849 0.833145 0.453786 0.833081 0.453724 0.833017 0.453661 0.832953 0.453598 0.832889 0.453536 0.832825 0.453473 0.832762 0.453411 0.832698 0.453349 0.832634 0.453286 0.83257 0.453224 0.832507 0.453162 0.832443 0.4531 0.83238 0.453038 0.832316 0.452976 0.832253 0.452915 0.832189 0.452853 0.832126 0.452791 0.832062 0.45273 0.831999 0.45267 0.831938 0.452609 0.831874 0.452547 0.831811 0.452486 0.831748 0.452425 0.831685 0.452364 0.831622 0.452303 0.831559 0.452242 0.831495 0.452181 0.831432 0.452121 0.831369 0.45206 0.831306 0.451999 0.831244 0.451939 0.831181 0.451879 0.831118 0.451818 0.831055 0.451758 0.830992 0.451698 0.83093 0.451638 0.830867 0.451578 0.830804 0.451518 0.830742 0.45146 0.830681 0.4514 0.830619 0.45134 0.830556 0.451281 0.830494 0.451221 0.830432 0.451162 0.830369 0.451103 0.830307 0.451043 0.830245 0.450984 0.830183 0.450925 0.83012 0.450866 0.830058 0.450807 0.829996 0.450749 0.829934 0.45069 0.829872 0.450631 0.82981 0.450573 0.829749 0.450514 0.829687 0.450456 0.829625 0.450398 0.829563 0.450341 0.829504 0.450283 0.829442 0.450225 0.829381 0.450167 0.829319 0.450109 0.829258 0.450052 0.829197 0.449994 0.829135 0.449937 0.829074 0.449879 0.829013 0.449822 0.828952 0.449765 0.828891 0.449708 0.82883 0.44965 0.828769 0.449594 0.828708 0.449537 0.828647 0.44948 0.828586 0.449423 0.828525 0.449367 0.828465 0.449312 0.828406 0.449255 0.828346 0.449199 0.828285 0.449143 0.828225 0.449087 0.828165 0.449031 0.828104 0.448975 0.828044 0.448919 0.827984 0.448863 0.827924 0.448808 0.827864 0.448752 0.827804 0.448697 0.827744 0.448642 0.827684 0.448586 0.827624 0.448531 0.827565 0.448476 0.827505 0.448421 0.827445 0.448368 0.827388 0.448313 0.827329 0.448259 0.82727 0.448204 0.82721 0.44815 0.827151 0.448096 0.827092 0.448041 0.827033 0.447987 0.826974 0.447933 0.826915 0.447879 0.826856 0.447826 0.826797 0.447772 0.826739 0.447718 0.82668 0.447665 0.826622 0.447612 0.826563 0.447558 0.826505 0.447507 0.826449 0.447454 0.826391 0.447401 0.826333 0.447348 0.826275 0.447295 0.826217 0.447243 0.826159 0.44719 0.826101 0.447138 0.826043 0.447086 0.825986 0.447034 0.825928 0.446982 0.825871 0.44693 0.825813 0.446878 0.825756 0.446826 0.825699 0.446774 0.825642 0.446725 0.825587 0.446673 0.82553 0.446622 0.825473 0.446571 0.825417 0.44652 0.82536 0.446469 0.825303 0.446418 0.825247 0.446368 0.825191 0.446317 0.825134 0.446266 0.825078 0.446216 0.825022 0.446166 0.824966 0.446116 0.82491 0.446066 0.824854 0.446018 0.824801 0.445968 0.824745 0.445918 0.82469 0.445869 0.824635 0.445819 0.824579 0.44577 0.824524 0.445721 0.824469 0.445672 0.824414 0.445623 0.824359 0.445574 0.824304 0.445526 0.82425 0.445477 0.824195 0.445429 0.824141 0.44538 0.824086 0.445334 0.824034 0.445286 0.82398 0.445238 0.823926 0.44519 0.823872 0.445143 0.823819 0.445095 0.823765 0.445048 0.823711 0.445001 0.823658 0.444953 0.823604 0.444906 0.823551 0.444859 0.823498 0.444813 0.823445 0.444768 0.823395 0.444721 0.823342 0.444675 0.823289 0.444629 0.823237 0.444583 0.823184 0.444537 0.823132 0.444491 0.82308 0.444445 0.823028 0.4444 0.822976 0.444354 0.822924 0.444309 0.822872 0.444264 0.822821 0.444219 0.822769 0.444175 0.82272 0.444131 0.822669 0.444086 0.822618 0.444042 0.822567 0.443997 0.822516 0.443953 0.822466 0.443909 0.822415 0.443865 0.822365 0.443821 0.822314 0.443778 0.822264 0.443734 0.822214 0.443691 0.822164 0.443649 0.822117 0.443606 0.822068 0.443563 0.822018 0.44352 0.821969 0.443478 0.82192 0.443435 0.821871 0.443393 0.821822 0.443351 0.821773 0.443308 0.821724 0.443266 0.821676 0.443225 0.821627 0.443185 0.821582 0.443143 0.821534 0.443102 0.821486 0.443061 0.821438 0.44302 0.82139 0.442979 0.821343 0.442938 0.821296 0.442897 0.821248 0.442857 0.821201 0.442817 0.821155 0.442776 0.821108 0.442738 0.821064 0.442698 0.821017 0.442659 0.820971 0.442619 0.820925 0.44258 0.820879 0.44254 0.820833 0.442501 0.820788 0.442462 0.820742 0.442423 0.820697 0.442385 0.820652 0.442346 0.820607 0.44231 0.820564 0.442272 0.82052 0.442234 0.820475 0.442196 0.820431 0.442158 0.820387 0.44212 0.820343 0.442083 0.820299 0.442046 0.820255 0.442009 0.820211 0.441972 0.820168 0.441937 0.820127 0.4419 0.820084 0.441864 0.820041 0.441828 0.819999 0.441791 0.819956 0.441756 0.819914 0.44172 0.819872 0.441684 0.81983 0.441649 0.819788 0.441613 0.819746 0.441578 0.819704 0.441545 0.819666 0.44151 0.819625 0.441476 0.819584 0.441441 0.819543 0.441407 0.819502 0.441373 0.819462 0.441339 0.819422 0.441305 0.819381 0.441271 0.819341 0.441238 0.819302 0.441206 0.819265 0.441173 0.819226 0.44114 0.819186 0.441107 0.819147 0.441074 0.819109 0.441042 0.81907 0.44101 0.819031 0.440978 0.818992 0.440946 0.818954 0.440916 0.818919 0.440884 0.818881 0.440852 0.818844 0.440821 0.818807 0.44079 0.818769 0.440759 0.818732 0.440728 0.818696 0.440698 0.818659 0.440667 0.818623 0.440637 0.818586 0.440609 0.818553 0.440579 0.818517 0.440549 0.818482 0.440519 0.818446 0.44049 0.818411 0.440461 0.818376 0.440432 0.818341 0.440403 0.818307 0.440374 0.818272 0.440346 0.818238 0.440319 0.818207 0.440291 0.818173 0.440263 0.818139 0.440235 0.818106 0.440207 0.818072 0.44018 0.818039 0.440153 0.818006 0.440125 0.817974 0.440098 0.817941 0.440074 0.817912 0.440047 0.81788 0.44002 0.817848 0.439994 0.817816 0.439968 0.817785 0.439942 0.817753 0.439916 0.817722 0.439891 0.817691 0.439865 0.817661 0.439842 0.817633 0.439817 0.817603 0.439792 0.817572 0.439767 0.817543 0.439743 0.817513 0.439719 0.817483 0.439694 0.817454 0.43967 0.817425 0.439646 0.817396 0.439625 0.81737 0.439601 0.817342 0.439578 0.817314 0.439555 0.817286 0.439532 0.817258 0.439509 0.81723 0.439486 0.817203 0.439464 0.817175 0.439442 0.817148 0.43942 0.817121 0.4394 0.817098 0.439378 0.817071 0.439356 0.817045 0.439335 0.817019 0.439314 0.816993 0.439293 0.816967 0.439272 0.816942 0.439251 0.816917 0.43923 0.816892 0.439212 0.81687 0.439192 0.816845 0.439172 0.816821 0.439152 0.816796 0.439132 0.816772 0.439113 0.816748 0.439093 0.816725 0.439074 0.816701 0.439055 0.816678 0.439036 0.816655 0.439019 0.816635 0.439001 0.816612 0.438982 0.81659 0.438964 0.816568 0.438946 0.816546 0.438928 0.816524 0.438911 0.816502 0.438893 0.816481 0.438876 0.816459 0.438858 0.816438 0.438843 0.81642 0.438827 0.8164 0.43881 0.816379 0.438793 0.816359 0.438777 0.816339 0.438761 0.816319 0.438744 0.816299 0.438729 0.81628 0.438713 0.81626 0.438697 0.816241 0.438684 0.816225 0.438668 0.816206 0.438653 0.816188 0.438638 0.816169 0.438623 0.816151 0.438609 0.816133 0.438594 0.816115 0.43858 0.816097 0.438565 0.81608 0.438551 0.816062 0.438539 0.816048 0.438526 0.816031 0.438512 0.816014 0.438499 0.815998 0.438485 0.815981 0.438472 0.815965 0.438459 0.815949 0.438446 0.815933 0.438433 0.815917 0.438421 0.815902 0.43841 0.815889 0.438398 0.815874 0.438386 0.815859 0.438373 0.815844 0.438362 0.815829 0.43835 0.815815 0.438338 0.815801 0.438327 0.815786 0.438315 0.815772 0.438304 0.815758 0.438293 0.815745 0.438282 0.815731 0.438273 0.815721 0.438262 0.815707 0.438252 0.815694 0.438241 0.815681 0.438231 0.815669 0.438221 0.815656 0.438211 0.815644 0.438201 0.815631 0.438191 0.815619 0.438181 0.815607 0.438171 0.815595 0.438162 0.815584 0.438155 0.815575 0.438146 0.815564 0.438136 0.815552 0.438127 0.815541 0.438119 0.81553 0.43811 0.815519 0.438101 0.815509 0.438093 0.815498 0.438084 0.815488 0.438076 0.815477 0.438068 0.815467 0.43806 0.815457 0.438052 0.815447 0.438046 0.815441 0.438038 0.815431 0.438031 0.815422 0.438023 0.815412 0.438016 0.815403 0.438008 0.815394 0.438001 0.815385 0.437994 0.815376 0.437987 0.815367 0.43798 0.815359 0.437973 0.81535 0.437966 0.815342 0.43796 0.815334 0.437953 0.815326 0.437949 0.815321 0.437942 0.815313 0.437936 0.815305 0.43793 0.815297 0.437924 0.81529 0.437918 0.815282 0.437912 0.815275 0.437906 0.815268 0.4379 0.815261 0.437895 0.815254 0.437889 0.815247 0.437884 0.81524 0.437878 0.815233 0.437873 0.815227 0.437868 0.81522 0.437863 0.815214 0.437858 0.815207 0.437855 0.815204 0.43785 0.815198 0.437845 0.815192 0.43784 0.815186 0.437836 0.81518 0.437831 0.815175 0.437826 0.815169 0.437822 0.815163 0.437818 0.815158 0.437813 0.815153 0.437809 0.815147 0.437805 0.815142 0.437801 0.815137 0.437797 0.815132 0.437793 0.815127 0.437789 0.815122 0.437785 0.815117 0.437781 0.815112 0.437777 0.815108 0.437776 0.815106 0.437772 0.815102 0.437769 0.815098 0.437765 0.815093 0.437762 0.815089 0.437758 0.815085 0.437755 0.815081 0.437752 0.815077 0.437749 0.815073 0.437746 0.815069 0.437743 0.815065 0.43774 0.815061 0.437737 0.815057 0.437734 0.815054 0.437731 0.81505 0.437728 0.815047 0.437725 0.815043 0.437722 0.81504 0.43772 0.815036 0.437717 0.815033 0.437715 0.81503 0.437712 0.815027 0.437709 0.815024 0.437707 0.81502 0.437705 0.815017 0.437702 0.815014 0.437702 0.815015 0.4377 0.815012 0.437698 0.815009 0.437695 0.815006 0.437693 0.815004 0.437691 0.815001 0.437689 0.814998 0.437687 0.814996 0.437685 0.814993 0.437683 0.814991 0.437681 0.814989 0.437679 0.814986 0.437677 0.814984 0.437676 0.814982 0.437674 0.814979 0.437672 0.814977 0.43767 0.814975 0.437669 0.814973 0.437667 0.814971 0.437665 0.814969 0.437664 0.814967 0.437662 0.814965 0.437661 0.814963 0.437659 0.814961 0.437658 0.814959 0.437656 0.814957 0.437655 0.814956 0.437654 0.814954 0.437652 0.814952 0.437651 0.81495 0.43765 0.814949 0.437648 0.814947 0.437647 0.814946 0.437646 0.814944 0.437645 0.814943 0.437643 0.814941 0.437642 0.81494 0.437641 0.814938 0.43764 0.814937 0.437641 0.814939 0.43764 0.814937 0.437639 0.814936 0.437638 0.814935 0.437637 0.814934 0.437636 0.814932 0.437635 0.814931 0.437634 0.81493 0.437633 0.814929 0.437632 0.814928 0.437631 0.814927 0.437631 0.814926 0.43763 0.814924 0.437629 0.814923 0.437628 0.814922 0.437627 0.814921 0.437627 0.81492 0.437626 0.814919 0.437625 0.814918 0.437624 0.814918 0.437624 0.814917 0.437623 0.814916 0.437622 0.814915 0.437622 0.814914 0.437621 0.814913 0.43762 0.814912 0.437619 0.814911 0.437618 0.81491 0.437618 0.814909 0.437617 0.814909 0.437617 0.814908 0.437616 0.814907 0.437615 0.814906 0.437615 0.814905 0.437614 0.814905 0.437614 0.814904 0.437613 0.814903 0.437612 0.814902 0.437611 0.814901 0.437611 0.8149 0.43761 0.814899 0.437609 0.814898 0.437609 0.814897 0.437608 0.814897 0.437608 0.814896 0.437607 0.814896 0.437607 0.814895 0.437606 0.814894 0.437605 0.814893 0.437605 0.814892 0.437604 0.814892 0.437604 0.814891 0.437603 0.814891 0.437603 0.81489 0.437602 0.814889 0.437602 0.814888 0.437601 0.814888 0.437601 0.814887 0.4376 0.814887 0.4376 0.814886 0.437599 0.814885 0.437599 0.814884 0.437598 0.814884 0.437598 0.814883 0.437597 0.814883 0.437597 0.814882 0.437597 0.814881 0.437596 0.814881 0.437596 0.81488 0.437596 0.814879 0.437594 0.814875 0.437595 0.814875 0.437596 0.814875 0.437594 0.814872 0.437595 0.814872 0.437595 0.814871 0.437596 0.814871 0.437594 0.814867 0.437595 0.814867 0.437596 0.814867 0.437596 0.814866 0.437597 0.814866 0.437594 0.814862 0.437594 0.814861 0.437595 0.814861 0.437595 0.81486 0.437596 0.81486 0.437596 0.814859 0.437597 0.814859 0.437594 0.814855 0.437594 0.814854 0.437595 0.814854 0.437595 0.814853 0.437596 0.814853 0.437596 0.814852 0.437596 0.814851 0.437597 0.814851 0.437594 0.814847 0.437594 0.814846 0.437595 0.814846 0.437595 0.814845 0.437595 0.814844 0.437596 0.814843 0.437596 0.814842 0.437596 0.814841 0.437597 0.814841 0.437597 0.81484 0.437594 0.814836 0.437594 0.814835 0.437595 0.814835 0.437595 0.814834 0.437595 0.814833 0.437595 0.814832 0.437596 0.814832 0.437596 0.814831 0.437596 0.81483 0.437596 0.814829 0.437597 0.814829 0.437597 0.814828 0.437597 0.814827 0.437594 0.814822 0.437594 0.814821 0.437595 0.814821 0.437595 0.81482 0.437595 0.814819 0.437595 0.814818 0.437595 0.814817 0.437596 0.814817 0.437596 0.814816 0.437596 0.814815 0.437596 0.814814 0.437596 0.814813 0.437597 0.814813 0.437597 0.814812 0.437597 0.814811 0.437597 0.81481 0.437594 0.814805 0.437594 0.814804 0.437595 0.814804 0.437595 0.814803 0.437595 0.814802 0.437595 0.814801 0.437595 0.8148 0.437595 0.814799 0.437596 0.814799 0.437596 0.814798 0.437596 0.814797 0.437596 0.814796 0.437596 0.814795 0.437596 0.814794 0.437597 0.814793 0.437597 0.814792 0.437597 0.814791 0.437597 0.81479 0.437597 0.814789 0.437594 0.814784 0.437594 0.814783 0.437595 0.814783 0.437595 0.814782 0.437595 0.814781 0.437595 0.81478 0.437595 0.814779 0.437595 0.814778 0.437595 0.814777 0.437596 0.814776 0.437596 0.814775 0.437596 0.814774 0.437596 0.814773 0.437596 0.814772 0.437596 0.814771 0.437596 0.81477 0.437597 0.814769 0.437597 0.814768 0.437597 0.814767 0.437597 0.814766 0.437597 0.814765 0.437597 0.814764 0.437597 0.814763 0.437594 0.814758 0.437594 0.814757 0.437595 0.814757 0.437595 0.814756 0.437595 0.814755 0.437595 0.814754 0.437595 0.814753 0.437595 0.814752 0.437595 0.814751 0.437595 0.81475 0.437595 0.814749 0.437596 0.814749 0.437596 0.814748 0.437596 0.814747 0.437596 0.814746 0.437596 0.814745 0.437596 0.814744 0.437596 0.814743 0.437596 0.814742 0.437596 0.814741 0.437596 0.81474 0.437597 0.81474 0.437597 0.814739 0.437597 0.814738 0.437597 0.814737 0.437597 0.814736 0.437597 0.814735 0.437597 0.814734 0.437597 0.814733 0.437597 0.814732 0.437594 0.814726 0.437594 0.814725 0.437595 0.814724 0.437595 0.814723 0.437595 0.814722 0.437595 0.814721 0.437595 0.81472 0.437595 0.814719 0.437595 0.814718 0.437595 0.814717 0.437595 0.814716 0.437596 0.814715 0.437596 0.814714 0.437596 0.814713 0.437596 0.814712 0.437596 0.814711 0.437596 0.81471 0.437596 0.814709 0.437596 0.814708 0.437596 0.814707 0.437596 0.814706 0.437596 0.814705 0.437596 0.814704 0.437597 0.814703 0.437597 0.814702 0.437597 0.814701 0.437597 0.8147 0.437597 0.814699 0.437597 0.814698 0.437597 0.814697 0.437597 0.814696 0.437597 0.814695 0.437597 0.814694 0.437597 0.814693 0.437597 0.814692 0.437594 0.814687 0.437594 0.814686 0.437594 0.814685 0.437595 0.814684 0.437595 0.814683 0.437595 0.814682 0.437595 0.814681 0.437595 0.81468 0.437595 0.814679 0.437595 0.814678 0.437595 0.814677 0.437595 0.814676 0.437595 0.814675 0.437595 0.814674 0.437596 0.814674 0.437596 0.814673 0.437596 0.814672 0.437596 0.814671 0.437596 0.81467 0.437596 0.814669 0.437596 0.814668 0.437596 0.814667 0.437596 0.814666 0.437596 0.814665 0.437596 0.814664 0.437596 0.814663 0.437596 0.814662 0.437596 0.814661 0.437596 0.81466 0.437597 0.814659 0.437597 0.814658 0.437597 0.814657 0.437597 0.814656 0.437597 0.814655 0.437597 0.814654 0.437597 0.814653 0.437597 0.814652 0.437597 0.814651 0.437597 0.81465 0.437597 0.814649 0.437597 0.814648 0.437597 0.814647 0.437597 0.814646 0.437598 0.814645 0.437594 0.814638 0.437594 0.814637 0.437595 0.814636 0.437595 0.814635 0.437595 0.814634 0.437595 0.814633 0.437595 0.814632 0.437595 0.814631 0.437595 0.81463 0.437595 0.814629 0.437595 0.814627 0.437595 0.814626 0.437595 0.814625 0.437595 0.814624 0.437595 0.814623 0.437595 0.814622 0.437596 0.81462 0.437596 0.814619 0.437596 0.814618 0.437596 0.814617 0.437596 0.814616 0.437596 0.814615 0.437596 0.814613 0.437596 0.814612 0.437596 0.814611 0.437596 0.81461 0.437596 0.814609 0.437596 0.814607 0.437597 0.814606 0.437597 0.814605 0.437597 0.814603 0.437597 0.814602 0.437597 0.814601 0.437597 0.8146 0.437597 0.814598 0.437597 0.814597 0.437597 0.814596 0.437597 0.814594 0.437597 0.814593 0.437597 0.814592 0.437597 0.81459 0.437597 0.814589 0.437598 0.814588 0.437598 0.814586 0.437598 0.814585 0.437598 0.814584 0.437595 0.814577 0.437595 0.814576 0.437595 0.814574 0.437595 0.814573 0.437595 0.814571 0.437595 0.81457 0.437595 0.814569 0.437595 0.814567 0.437595 0.814566 0.437595 0.814564 0.437595 0.814563 0.437595 0.814561 0.437595 0.81456 0.437596 0.814559 0.437596 0.814557 0.437596 0.814556 0.437596 0.814554 0.437596 0.814553 0.437596 0.814551 0.437596 0.81455 0.437596 0.814548 0.437596 0.814546 0.437596 0.814545 0.437596 0.814543 0.437596 0.814542 0.437596 0.81454 0.437597 0.814539 0.437597 0.814537 0.437597 0.814536 0.437597 0.814534 0.437597 0.814532 0.437597 0.814531 0.437597 0.814529 0.437597 0.814527 0.437597 0.814526 0.437597 0.814524 0.437597 0.814523 0.437597 0.814521 0.437597 0.814519 0.437598 0.814517 0.437598 0.814516 0.437598 0.814514 0.437598 0.814512 0.437598 0.814511 0.437595 0.814503 0.437595 0.814502 0.437595 0.8145 0.437595 0.814498 0.437595 0.814496 0.437595 0.814495 0.437595 0.814493 0.437595 0.814491 0.437595 0.814489 0.437595 0.814488 0.437595 0.814486 0.437595 0.814484 0.437596 0.814482 0.437596 0.81448 0.437596 0.814478 0.437596 0.814476 0.437596 0.814475 0.437596 0.814473 0.437596 0.814471 0.437596 0.814469 0.437596 0.814467 0.437596 0.814465 0.437596 0.814463 0.437596 0.814461 0.437597 0.814459 0.437597 0.814457 0.437597 0.814455 0.437597 0.814453 0.437597 0.814451 0.437597 0.814449 0.437597 0.814447 0.437597 0.814445 0.437597 0.814443 0.437597 0.814441 0.437597 0.814439 0.437597 0.814437 0.437597 0.814435 0.437598 0.814433 0.437598 0.814431 0.437598 0.814428 0.437598 0.814426 0.437598 0.814424 0.437598 0.814422 0.437598 0.81442 0.437595 0.814412 0.437595 0.81441 0.437595 0.814408 0.437595 0.814406 0.437595 0.814403 0.437595 0.814401 0.437595 0.814399 0.437595 0.814397 0.437595 0.814394 0.437595 0.814392 0.437596 0.81439 0.437596 0.814388 0.437596 0.814385 0.437596 0.814383 0.437596 0.814381 0.437596 0.814378 0.437596 0.814376 0.437596 0.814374 0.437596 0.814371 0.437596 0.814369 0.437596 0.814366 0.437596 0.814364 0.437597 0.814361 0.437597 0.814359 0.437597 0.814357 0.437597 0.814354 0.437597 0.814352 0.437597 0.814349 0.437597 0.814347 0.437597 0.814344 0.437597 0.814342 0.437597 0.814339 0.437597 0.814336 0.437597 0.814334 0.437598 0.814331 0.437598 0.814329 0.437598 0.814326 0.437598 0.814323 0.437598 0.814321 0.437598 0.814318 0.437598 0.814315 0.437598 0.814313 0.437595 0.814304 0.437595 0.814301 0.437595 0.814299 0.437595 0.814296 0.437595 0.814293 0.437595 0.814291 0.437595 0.814288 0.437595 0.814285 0.437595 0.814282 0.437595 0.814279 0.437596 0.814277 0.437596 0.814274 0.437596 0.814271 0.437596 0.814268 0.437596 0.814265 0.437596 0.814262 0.437596 0.814259 0.437596 0.814256 0.437596 0.814253 0.437596 0.81425 0.437596 0.814247 0.437597 0.814244 0.437597 0.814241 0.437597 0.814238 0.437597 0.814235 0.437597 0.814232 0.437597 0.814229 0.437597 0.814226 0.437597 0.814223 0.437597 0.81422 0.437597 0.814217 0.437597 0.814214 0.437597 0.81421 0.437598 0.814207 0.437598 0.814204 0.437598 0.814201 0.437598 0.814197 0.437598 0.814194 0.437598 0.814191 0.437598 0.814188 0.437598 0.814184 0.437598 0.814181 0.437598 0.814178 0.437598 0.814174 0.437595 0.814165 0.437595 0.814161 0.437595 0.814158 0.437595 0.814155 0.437595 0.814151 0.437595 0.814148 0.437595 0.814144 0.437596 0.814141 0.437596 0.814137 0.437596 0.814134 0.437596 0.81413 0.437596 0.814127 0.437596 0.814123 0.437596 0.814119 0.437596 0.814116 0.437596 0.814112 0.437596 0.814108 0.437596 0.814105 0.437597 0.814101 0.437597 0.814097 0.437597 0.814094 0.437597 0.81409 0.437597 0.814086 0.437597 0.814082 0.437597 0.814079 0.437597 0.814075 0.437597 0.814071 0.437597 0.814067 0.437597 0.814063 0.437598 0.814059 0.437598 0.814055 0.437598 0.814051 0.437598 0.814047 0.437598 0.814043 0.437598 0.814039 0.437598 0.814035 0.437598 0.814031 0.437598 0.814027 0.437598 0.814023 0.437598 0.814019 0.437598 0.814015 0.437599 0.814011 0.437595 0.814 0.437595 0.813996 0.437595 0.813992 0.437595 0.813988 0.437595 0.813983 0.437595 0.813979 0.437596 0.813975 0.437596 0.81397 0.437596 0.813966 0.437596 0.813962 0.437596 0.813957 0.437596 0.813953 0.437596 0.813948 0.437596 0.813944 0.437596 0.813939 0.437596 0.813935 0.437597 0.81393 0.437597 0.813926 0.437597 0.813921 0.437597 0.813917 0.437597 0.813912 0.437597 0.813907 0.437597 0.813903 0.437597 0.813898 0.437597 0.813893 0.437597 0.813889 0.437597 0.813884 0.437598 0.813879 0.437598 0.813874 0.437598 0.813869 0.437598 0.813864 0.437598 0.813859 0.437598 0.813855 0.437598 0.81385 0.437598 0.813845 0.437598 0.81384 0.437598 0.813835 0.437598 0.81383 0.437598 0.813825 0.437599 0.813819 0.437599 0.813814 0.437595 0.813803 0.437595 0.813798 0.437595 0.813793 0.437595 0.813787 0.437596 0.813782 0.437596 0.813777 0.437596 0.813771 0.437596 0.813766 0.437596 0.813761 0.437596 0.813755 0.437596 0.81375 0.437596 0.813745 0.437596 0.813739 0.437596 0.813734 0.437597 0.813728 0.437597 0.813723 0.437597 0.813717 0.437597 0.813711 0.437597 0.813706 0.437597 0.8137 0.437597 0.813694 0.437597 0.813689 0.437597 0.813683 0.437597 0.813677 0.437597 0.813671 0.437598 0.813665 0.437598 0.81366 0.437598 0.813654 0.437598 0.813648 0.437598 0.813642 0.437598 0.813636 0.437598 0.81363 0.437598 0.813624 0.437598 0.813618 0.437598 0.813612 0.437598 0.813605 0.437599 0.813599 0.437599 0.813593 0.437599 0.813587 0.437599 0.81358 0.437599 0.813574 0.437595 0.813561 0.437595 0.813555 0.437595 0.813549 0.437596 0.813542 0.437596 0.813536 0.437596 0.813529 0.437596 0.813523 0.437596 0.813516 0.437596 0.81351 0.437596 0.813503 0.437596 0.813496 0.437596 0.81349 0.437596 0.813483 0.437597 0.813476 0.437597 0.813469 0.437597 0.813463 0.437597 0.813456 0.437597 0.813449 0.437597 0.813442 0.437597 0.813435 0.437597 0.813428 0.437597 0.813421 0.437598 0.813414 0.437598 0.813407 0.437598 0.8134 0.437598 0.813392 0.437598 0.813385 0.437598 0.813378 0.437598 0.813371 0.437598 0.813363 0.437598 0.813356 0.437598 0.813349 0.437599 0.813341 0.437599 0.813334 0.437599 0.813326 0.437599 0.813319 0.437599 0.813311 0.437599 0.813303 0.437599 0.813296 0.437599 0.813288 0.437599 0.81328 0.437599 0.813272 0.437596 0.813258 0.437596 0.81325 0.437596 0.813242 0.437596 0.813234 0.437596 0.813226 0.437596 0.813218 0.437596 0.81321 0.437596 0.813202 0.437596 0.813194 0.437597 0.813186 0.437597 0.813178 0.437597 0.813169 0.437597 0.813161 0.437597 0.813153 0.437597 0.813144 0.437597 0.813136 0.437597 0.813128 0.437597 0.813119 0.437598 0.81311 0.437598 0.813102 0.437598 0.813093 0.437598 0.813085 0.437598 0.813076 0.437598 0.813067 0.437598 0.813058 0.437598 0.813049 0.437598 0.81304 0.437598 0.813032 0.437599 0.813023 0.437599 0.813014 0.437599 0.813004 0.437599 0.812995 0.437599 0.812986 0.437599 0.812977 0.437599 0.812968 0.437599 0.812958 0.437599 0.812949 0.4376 0.812939 0.4376 0.81293 0.4376 0.812921 0.437596 0.812904 0.437596 0.812894 0.437596 0.812885 0.437596 0.812875 0.437596 0.812865 0.437596 0.812856 0.437597 0.812846 0.437597 0.812836 0.437597 0.812826 0.437597 0.812816 0.437597 0.812806 0.437597 0.812796 0.437597 0.812786 0.437597 0.812776 0.437597 0.812765 0.437598 0.812755 0.437598 0.812745 0.437598 0.812734 0.437598 0.812724 0.437598 0.812713 0.437598 0.812703 0.437598 0.812692 0.437598 0.812682 0.437598 0.812671 0.437599 0.81266 0.437599 0.812649 0.437599 0.812638 0.437599 0.812627 0.437599 0.812616 0.437599 0.812605 0.437599 0.812594 0.437599 0.812583 0.437599 0.812572 0.4376 0.812561 0.4376 0.812549 0.4376 0.812538 0.4376 0.812526 0.4376 0.812515 0.4376 0.812503 0.4376 0.812492 0.437596 0.812473 0.437596 0.812461 0.437596 0.812449 0.437597 0.812438 0.437597 0.812426 0.437597 0.812414 0.437597 0.812402 0.437597 0.81239 0.437597 0.812378 0.437597 0.812365 0.437598 0.812353 0.437598 0.812341 0.437598 0.812328 0.437598 0.812316 0.437598 0.812304 0.437598 0.812291 0.437598 0.812278 0.437598 0.812266 0.437598 0.812253 0.437599 0.81224 0.437599 0.812227 0.437599 0.812214 0.437599 0.812201 0.437599 0.812188 0.437599 0.812175 0.437599 0.812162 0.437599 0.812148 0.4376 0.812135 0.4376 0.812122 0.4376 0.812108 0.4376 0.812095 0.4376 0.812081 0.4376 0.812067 0.4376 0.812054 0.4376 0.81204 0.4376 0.812026 0.437601 0.812012 0.437601 0.811998 0.437601 0.811984 0.437601 0.811969 0.437601 0.811955 0.437597 0.811934 0.437597 0.811919 0.437597 0.811905 0.437597 0.81189 0.437598 0.811876 0.437598 0.811861 0.437598 0.811846 0.437598 0.811831 0.437598 0.811817 0.437598 0.811802 0.437598 0.811787 0.437598 0.811771 0.437599 0.811756 0.437599 0.811741 0.437599 0.811726 0.437599 0.81171 0.437599 0.811695 0.437599 0.811679 0.437599 0.811664 0.4376 0.811648 0.4376 0.811632 0.4376 0.811616 0.4376 0.8116 0.4376 0.811584 0.4376 0.811568 0.4376 0.811552 0.4376 0.811535 0.437601 0.811519 0.437601 0.811503 0.437601 0.811486 0.437601 0.811469 0.437601 0.811453 0.437601 0.811436 0.437601 0.811419 0.437601 0.811402 0.437602 0.811385 0.437602 0.811368 0.437602 0.811351 0.437602 0.811333 0.437598 0.811308 0.437598 0.811291 0.437598 0.811273 0.437598 0.811256 0.437598 0.811238 0.437599 0.81122 0.437599 0.811202 0.437599 0.811184 0.437599 0.811166 0.437599 0.811148 0.437599 0.81113 0.437599 0.811112 0.4376 0.811093 0.4376 0.811075 0.4376 0.811056 0.4376 0.811037 0.4376 0.811018 0.4376 0.811 0.4376 0.810981 0.4376 0.810961 0.437601 0.810942 0.437601 0.810923 0.437601 0.810904 0.437601 0.810884 0.437601 0.810864 0.437601 0.810845 0.437601 0.810825 0.437602 0.810805 0.437602 0.810785 0.437602 0.810765 0.437602 0.810745 0.437602 0.810725 0.437602 0.810704 0.437602 0.810684 0.437603 0.810663 0.437603 0.810642 0.437603 0.810622 0.437603 0.810601 0.437603 0.81058 0.437599 0.810551 0.437599 0.81053 0.437599 0.810508 0.437599 0.810487 0.4376 0.810465 0.4376 0.810444 0.4376 0.810422 0.4376 0.8104 0.4376 0.810378 0.4376 0.810356 0.437601 0.810334 0.437601 0.810312 0.437601 0.810289 0.437601 0.810267 0.437601 0.810244 0.437601 0.810222 0.437601 0.810199 0.437602 0.810176 0.437602 0.810153 0.437602 0.81013 0.437602 0.810106 0.437602 0.810083 0.437602 0.810059 0.437603 0.810036 0.437603 0.810012 0.437603 0.809988 0.437603 0.809964 0.437603 0.80994 0.437603 0.809916 0.437604 0.809891 0.437604 0.809867 0.437604 0.809842 0.437604 0.809817 0.437604 0.809793 0.437604 0.809768 0.437604 0.809742 0.437605 0.809717 0.437605 0.809692 0.437605 0.809666 0.437605 0.809641 0.437601 0.809607 0.437601 0.809581 0.437601 0.809555 0.437601 0.809529 0.437602 0.809503 0.437602 0.809476 0.437602 0.80945 0.437602 0.809423 0.437602 0.809396 0.437602 0.80937 0.437603 0.809343 0.437603 0.809315 0.437603 0.809288 0.437603 0.809261 0.437603 0.809233 0.437604 0.809205 0.437604 0.809178 0.437604 0.80915 0.437604 0.809122 0.437604 0.809093 0.437604 0.809065 0.437605 0.809036 0.437605 0.809008 0.437605 0.808979 0.437605 0.80895 0.437605 0.808921 0.437606 0.808891 0.437606 0.808862 0.437606 0.808832 0.437606 0.808803 0.437606 0.808773 0.437606 0.808743 0.437607 0.808713 0.437607 0.808682 0.437607 0.808652 0.437607 0.808621 0.437607 0.80859 0.437607 0.80856 0.437608 0.808528 0.437603 0.808489 0.437604 0.808457 0.437604 0.808426 0.437604 0.808394 0.437604 0.808363 0.437604 0.808331 0.437605 0.808298 0.437605 0.808266 0.437605 0.808234 0.437605 0.808201 0.437605 0.808168 0.437606 0.808135 0.437606 0.808102 0.437606 0.808069 0.437606 0.808036 0.437606 0.808002 0.437607 0.807968 0.437607 0.807934 0.437607 0.8079 0.437607 0.807866 0.437608 0.807831 0.437608 0.807797 0.437608 0.807762 0.437608 0.807727 0.437608 0.807692 0.437609 0.807656 0.437609 0.807621 0.437609 0.807585 0.437609 0.807549 0.437609 0.807513 0.43761 0.807477 0.43761 0.807441 0.43761 0.807404 0.43761 0.807367 0.437611 0.807331 0.437611 0.807293 0.437611 0.807256 0.437611 0.807219 0.437611 0.807181 0.437607 0.807134 0.437607 0.807096 0.437608 0.807058 0.437608 0.80702 0.437608 0.806981 0.437608 0.806942 0.437609 0.806903 0.437609 0.806864 0.437609 0.806825 0.437609 0.806785 0.43761 0.806746 0.43761 0.806706 0.43761 0.806666 0.43761 0.806625 0.437611 0.806585 0.437611 0.806544 0.437611 0.806503 0.437611 0.806462 0.437612 0.80642 0.437612 0.806379 0.437612 0.806337 0.437612 0.806295 0.437613 0.806253 0.437613 0.80621 0.437613 0.806168 0.437613 0.806125 0.437614 0.806082 0.437614 0.806039 0.437614 0.805995 0.437614 0.805952 0.437615 0.805908 0.437615 0.805864 0.437615 0.805819 0.437615 0.805775 0.437616 0.80573 0.437616 0.805685 0.437616 0.80564 0.437616 0.805594 0.437617 0.805548 0.437612 0.805494 0.437613 0.805447 0.437613 0.805401 0.437613 0.805355 0.437614 0.805308 0.437614 0.805261 0.437614 0.805214 0.437615 0.805166 0.437615 0.805119 0.437615 0.805071 0.437615 0.805022 0.437616 0.804974 0.437616 0.804925 0.437616 0.804876 0.437617 0.804827 0.437617 0.804778 0.437617 0.804728 0.437618 0.804678 0.437618 0.804628 0.437618 0.804578 0.437619 0.804527 0.437619 0.804476 0.437619 0.804425 0.43762 0.804374 0.43762 0.804322 0.43762 0.80427 0.437621 0.804218 0.437621 0.804166 0.437621 0.804113 0.437622 0.80406 0.437622 0.804007 0.437622 0.803953 0.437623 0.803899 0.437623 0.803845 0.437623 0.803791 0.437624 0.803737 0.437624 0.803682 0.437624 0.803627 0.43762 0.803562 0.43762 0.803507 0.437621 0.803451 0.437621 0.803395 0.437621 0.803338 0.437622 0.803281 0.437622 0.803225 0.437622 0.803167 0.437623 0.80311 0.437623 0.803052 0.437624 0.802994 0.437624 0.802935 0.437624 0.802877 0.437625 0.802818 0.437625 0.802759 0.437626 0.802699 0.437626 0.802639 0.437626 0.802579 0.437627 0.802519 0.437627 0.802458 0.437628 0.802397 0.437628 0.802335 0.437629 0.802274 0.437629 0.802212 0.437629 0.80215 0.43763 0.802087 0.43763 0.802024 0.437631 0.801961 0.437631 0.801897 0.437632 0.801834 0.437632 0.801769 0.437632 0.801705 0.437633 0.80164 0.437633 0.801575 0.437634 0.80151 0.437634 0.801444 0.437635 0.801378 0.437635 0.801311 0.437636 0.801245 0.437631 0.801168 0.437632 0.801101 0.437632 0.801033 0.437633 0.800965 0.437633 0.800897 0.437634 0.800828 0.437634 0.800759 0.437635 0.80069 0.437635 0.80062 0.437636 0.80055 0.437636 0.80048 0.437637 0.800409 0.437637 0.800338 0.437638 0.800266 0.437638 0.800195 0.437639 0.800122 0.437639 0.80005 0.43764 0.799977 0.43764 0.799904 0.437641 0.79983 0.437641 0.799756 0.437642 0.799682 0.437643 0.799607 0.437643 0.799532 0.437644 0.799457 0.437644 0.799381 0.437645 0.799305 0.437645 0.799228 0.437646 0.799151 0.437647 0.799074 0.437647 0.798997 0.437648 0.798918 0.437648 0.79884 0.437649 0.798761 0.437649 0.798682 0.43765 0.798602 0.437651 0.798522 0.437651 0.798442 0.437652 0.798361 0.437648 0.79827 0.437648 0.798188 0.437649 0.798107 0.43765 0.798024 0.43765 0.797942 0.437651 0.797858 0.437652 0.797775 0.437652 0.797691 0.437653 0.797606 0.437654 0.797522 0.437654 0.797436 0.437655 0.797351 0.437656 0.797265 0.437657 0.797178 0.437657 0.797091 0.437658 0.797004 0.437659 0.796916 0.437659 0.796828 0.43766 0.796739 0.437661 0.79665 0.437662 0.796561 0.437662 0.796471 0.437663 0.79638 0.437664 0.79629 0.437665 0.796198 0.437665 0.796106 0.437666 0.796014 0.437667 0.795922 0.437668 0.795829 0.437668 0.795735 0.437669 0.795641 0.43767 0.795546 0.437671 0.795451 0.437672 0.795356 0.437673 0.79526 0.437673 0.795164 0.437674 0.795067 0.437675 0.79497 0.437671 0.794861 0.437672 0.794763 0.437673 0.794665 0.437674 0.794565 0.437674 0.794466 0.437675 0.794366 0.437676 0.794265 0.437677 0.794164 0.437678 0.794062 0.437679 0.79396 0.43768 0.793857 0.437681 0.793754 0.437682 0.793651 0.437683 0.793546 0.437684 0.793442 0.437685 0.793337 0.437686 0.793231 0.437687 0.793125 0.437688 0.793018 0.437689 0.792911 0.43769 0.792803 0.437691 0.792695 0.437692 0.792586 0.437693 0.792476 0.437694 0.792366 0.437695 0.792256 0.437696 0.792145 0.437697 0.792033 0.437698 0.791921 0.4377 0.791808 0.437701 0.791695 0.437702 0.791581 0.437703 0.791467 0.437704 0.791352 0.437705 0.791237 0.437706 0.791121 0.437707 0.791004 0.437709 0.790887 0.43771 0.790769 0.437706 0.79064 0.437707 0.790521 0.437708 0.790402 0.43771 0.790282 0.437711 0.790161 0.437712 0.79004 0.437713 0.789918 0.437715 0.789796 0.437716 0.789673 0.437717 0.789549 0.437719 0.789425 0.43772 0.7893 0.437721 0.789175 0.437723 0.789049 0.437724 0.788922 0.437725 0.788795 0.437727 0.788667 0.437728 0.788539 0.43773 0.78841 0.437731 0.78828 0.437733 0.788149 0.437734 0.788018 0.437735 0.787887 0.437737 0.787754 0.437738 0.787621 0.43774 0.787488 0.437741 0.787353 0.437743 0.787218 0.437745 0.787083 0.437746 0.786947 0.437748 0.78681 0.437749 0.786672 0.437751 0.786534 0.437752 0.786395 0.437754 0.786255 0.437756 0.786115 0.437757 0.785974 0.437759 0.785832 0.437755 0.785679 0.437757 0.785536 0.437759 0.785392 0.437761 0.785247 0.437762 0.785102 0.437764 0.784956 0.437766 0.78481 0.437768 0.784663 0.43777 0.784515 0.437772 0.784366 0.437773 0.784217 0.437775 0.784066 0.437777 0.783916 0.437779 0.783764 0.437781 0.783611 0.437783 0.783458 0.437785 0.783304 0.437787 0.78315 0.437789 0.782994 0.437791 0.782838 0.437793 0.782681 0.437795 0.782524 0.437797 0.782365 0.437799 0.782206 0.437801 0.782046 0.437803 0.781885 0.437806 0.781724 0.437808 0.781561 0.43781 0.781398 0.437812 0.781234 0.437814 0.781069 0.437817 0.780904 0.437819 0.780738 0.437821 0.78057 0.437823 0.780402 0.437826 0.780234 0.437828 0.780064 0.43783 0.779893 0.437833 0.779722 0.43783 0.779538 0.437832 0.779366 0.437835 0.779192 0.437837 0.779017 0.43784 0.778842 0.437843 0.778666 0.437845 0.778488 0.437848 0.77831 0.43785 0.778132 0.437853 0.777952 0.437856 0.777771 0.437858 0.77759 0.437861 0.777407 0.437864 0.777224 0.437867 0.77704 0.43787 0.776855 0.437872 0.776669 0.437875 0.776482 0.437878 0.776294 0.437881 0.776106 0.437884 0.775916 0.437887 0.775725 0.43789 0.775534 0.437893 0.775342 0.437896 0.775148 0.437899 0.774954 0.437902 0.774759 0.437905 0.774562 0.437908 0.774365 0.437911 0.774167 0.437915 0.773968 0.437918 0.773768 0.437921 0.773567 0.437924 0.773365 0.437928 0.773162 0.437931 0.772958 0.437935 0.772753 0.437938 0.772547 0.437936 0.772328 0.43794 0.77212 0.437943 0.771911 0.437947 0.771702 0.43795 0.771491 0.437954 0.771279 0.437958 0.771066 0.437961 0.770852 0.437965 0.770637 0.437969 0.770421 0.437973 0.770204 0.437977 0.769986 0.437981 0.769766 0.437985 0.769546 0.437989 0.769325 0.437993 0.769102 0.437997 0.768879 0.438001 0.768654 0.438005 0.768428 0.438009 0.768201 0.438013 0.767974 0.438017 0.767745 0.438022 0.767514 0.438026 0.767283 0.43803 0.767051 0.438035 0.766817 0.438039 0.766583 0.438044 0.766347 0.438048 0.76611 0.438053 0.765872 0.438057 0.765633 0.438062 0.765392 0.438067 0.765151 0.438072 0.764908 0.438076 0.764664 0.438081 0.764419 0.438086 0.764173 0.438091 0.763925 0.438096 0.763677 0.438095 0.763414 0.438101 0.763163 0.438106 0.762911 0.438111 0.762658 0.438116 0.762403 0.438122 0.762147 0.438127 0.76189 0.438132 0.761632 0.438138 0.761372 0.438143 0.761112 0.438149 0.760849 0.438155 0.760586 0.43816 0.760321 0.438166 0.760055 0.438172 0.759788 0.438178 0.75952 0.438183 0.75925 0.438189 0.758979 0.438195 0.758706 0.438201 0.758433 0.438208 0.758158 0.438214 0.757881 0.43822 0.757603 0.438226 0.757324 0.438233 0.757044 0.438239 0.756762 0.438245 0.756479 0.438252 0.756195 0.438259 0.755909 0.438265 0.755621 0.438272 0.755333 0.438279 0.755043 0.438286 0.754751 0.438293 0.754458 0.438299 0.754164 0.438307 0.753868 0.438314 0.753571 0.438321 0.753273 0.438328 0.752973 0.43833 0.752658 0.438337 0.752355 0.438345 0.752051 0.438353 0.751746 0.43836 0.751438 0.438368 0.75113 0.438376 0.75082 0.438384 0.750508 0.438392 0.750195 0.4384 0.74988 0.438408 0.749564 0.438416 0.749247 0.438424 0.748927 0.438433 0.748607 0.438441 0.748284 0.43845 0.747961 0.438458 0.747635 0.438467 0.747308 0.438476 0.74698 0.438485 0.74665 0.438494 0.746318 0.438503 0.745985 0.438512 0.74565 0.438521 0.745313 0.43853 0.744975 0.43854 0.744636 0.438549 0.744294 0.438559 0.743951 0.438568 0.743607 0.438578 0.74326 0.438588 0.742912 0.438598 0.742563 0.438608 0.742211 0.438618 0.741858 0.438628 0.741504 0.438639 0.741147 0.438649 0.740789 0.438659 0.740429 0.43867 0.740068 0.438675 0.739691 0.438686 0.739326 0.438697 0.738959 0.438709 0.738591 0.43872 0.738221 0.438731 0.737849 0.438743 0.737475 0.438754 0.7371 0.438766 0.736722 0.438778 0.736343 0.438789 0.735962 0.438801 0.73558 0.438814 0.735195 0.438826 0.734809 0.438838 0.73442 0.438851 0.73403 0.438863 0.733638 0.438876 0.733245 0.438889 0.732849 0.438902 0.732451 0.438915 0.732052 0.438928 0.73165 0.438942 0.731247 0.438955 0.730842 0.438969 0.730434 0.438983 0.730025 0.438996 0.729614 0.439011 0.729201 0.439025 0.728786 0.439039 0.728369 0.439053 0.72795 0.439068 0.727529 0.439083 0.727106 0.439098 0.726681 0.439113 0.726254 0.439128 0.725825 0.439143 0.725394 0.439159 0.724961 0.439174 0.724526 0.43919 0.724089 0.439206 0.723649 0.439217 0.723194 0.439233 0.72275 0.43925 0.722305 0.439266 0.721857 0.439283 0.721407 0.4393 0.720956 0.439317 0.720502 0.439335 0.720045 0.439352 0.719587 0.43937 0.719127 0.439388 0.718664 0.439406 0.718199 0.439424 0.717732 0.439442 0.717263 0.439461 0.716791 0.43948 0.716318 0.439498 0.715842 0.439518 0.715363 0.439537 0.714883 0.439556 0.7144 0.439576 0.713915 0.439596 0.713428 0.439616 0.712938 0.439636 0.712446 0.439657 0.711952 0.439677 0.711455 0.439698 0.710956 0.439719 0.710455 0.439741 0.709951 0.439762 0.709445 0.439784 0.708937 0.439806 0.708426 0.439828 0.707913 0.43985 0.707397 0.439873 0.706879 0.439896 0.706359 0.439919 0.705836 0.439942 0.70531 0.439966 0.704782 0.439989 0.704252 0.440008 0.703704 0.440033 0.703169 0.440057 0.702631 0.440082 0.702091 0.440107 0.701548 0.440132 0.701003 0.440158 0.700455 0.440183 0.699905 0.440209 0.699352 0.440236 0.698796 0.440262 0.698238 0.440289 0.697677 0.440316 0.697113 0.440344 0.696547 0.440371 0.695978 0.440399 0.695407 0.440427 0.694833 0.440456 0.694256 0.440485 0.693677 0.440514 0.693095 0.440543 0.69251 0.440573 0.691922 0.440603 0.691332 0.440633 0.690739 0.440663 0.690143 0.440694 0.689544 0.440725 0.688943 0.440757 0.688338 0.440789 0.687731 0.440821 0.687121 0.440853 0.686509 0.440886 0.685893 0.440919 0.685275 0.440952 0.684653 0.440986 0.684029 0.44102 0.683402 0.441055 0.682772 0.441089 0.682139 0.441124 0.681503 0.44116 0.680865 0.441196 0.680223 0.441227 0.679562 0.441264 0.678915 0.441301 0.678264 0.441338 0.677611 0.441376 0.676954 0.441414 0.676295 0.441453 0.675632 0.441491 0.674966 0.441531 0.674298 0.441571 0.673626 0.441611 0.672951 0.441651 0.672273 0.441692 0.671592 0.441733 0.670908 0.441775 0.67022 0.441817 0.66953 0.44186 0.668836 0.441903 0.668139 0.441946 0.667439 0.44199 0.666735 0.442035 0.666029 0.442079 0.665319 0.442125 0.664606 0.44217 0.66389 0.442217 0.66317 0.442263 0.662447 0.44231 0.661721 0.442358 0.660991 0.442406 0.660258 0.442455 0.659522 0.442504 0.658783 0.442553 0.65804 0.442603 0.657293 0.442654 0.656544 0.442705 0.655791 0.442757 0.655034 0.442809 0.654274 0.442861 0.653511 0.442915 0.652744 0.442968 0.651973 0.443023 0.651199 0.443078 0.650422 0.443129 0.649624 0.443185 0.64884 0.443241 0.648052 0.443299 0.647261 0.443356 0.646466 0.443415 0.645667 0.443474 0.644865 0.443533 0.64406 0.443594 0.64325 0.443654 0.642437 0.443716 0.64162 0.443778 0.6408 0.443841 0.639976 0.443904 0.639148 0.443968 0.638317 0.444033 0.637482 0.444098 0.636643 0.444164 0.6358 0.444231 0.634954 0.444298 0.634103 0.444366 0.633249 0.444435 0.632391 0.444504 0.63153 0.444575 0.630664 0.444646 0.629795 0.444717 0.628922 0.44479 0.628044 0.444863 0.627163 0.444937 0.626278 0.445011 0.625389 0.445087 0.624497 0.445163 0.6236 0.44524 0.622699 0.445318 0.621794 0.445396 0.620885 0.445476 0.619973 0.445556 0.619056 0.445637 0.618135 0.445719 0.61721 0.445802 0.616281 0.445885 0.615348 0.44597 0.614411 0.446055 0.61347 0.446138 0.612506 0.446225 0.611557 0.446313 0.610604 0.446402 0.609646 0.446492 0.608684 0.446583 0.607718 0.446675 0.606748 0.446768 0.605773 0.446861 0.604794 0.446956 0.603811 0.447052 0.602824 0.447149 0.601833 0.447247 0.600837 0.447345 0.599836 0.447445 0.598832 0.447546 0.597823 0.447648 0.59681 0.447751 0.595792 0.447855 0.59477 0.44796 0.593744 0.448066 0.592713 0.448174 0.591678 0.448282 0.590638 0.448392 0.589594 0.448503 0.588545 0.448615 0.587492 0.448728 0.586435 0.448842 0.585373 0.448958 0.584306 0.449075 0.583235 0.449193 0.582159 0.449312 0.581079 0.449432 0.579994 0.449554 0.578904 0.449677 0.57781 0.449802 0.576711 0.449927 0.575608 0.450054 0.5745 0.450183 0.573387 0.450312 0.57227 0.450444 0.571148 0.450576 0.570021 0.45071 0.56889 0.450845 0.567753 0.450979 0.566593 0.451118 0.565448 0.451258 0.564297 0.451399 0.563142 0.451542 0.561982 0.451686 0.560818 0.451832 0.559648 ; table[row sep=crcr]0.451832 0.559648 0.451979 0.558474 0.452128 0.557295 0.452279 0.556111 0.452431 0.554922 0.452585 0.553728 0.45274 0.552529 0.452898 0.551326 0.453056 0.550117 0.453217 0.548904 0.453379 0.547685 0.453543 0.546462 0.453709 0.545233 0.453877 0.544 0.454046 0.542762 0.454217 0.541518 0.45439 0.54027 0.454565 0.539016 0.454742 0.537758 0.454921 0.536494 0.455101 0.535226 0.455284 0.533952 0.455469 0.532673 0.455655 0.531389 0.455844 0.5301 0.456035 0.528806 0.456227 0.527507 0.456422 0.526202 0.456619 0.524893 0.456818 0.523578 0.45702 0.522258 0.457223 0.520933 0.457429 0.519603 0.457637 0.518267 0.457847 0.516927 0.45806 0.515581 0.458275 0.51423 0.458492 0.512873 0.458712 0.511512 0.458934 0.510145 0.459158 0.508752 0.459385 0.507375 0.459614 0.505993 0.459846 0.504605 0.460081 0.503212 0.460318 0.501814 0.460558 0.50041 0.460801 0.499001 0.461046 0.497587 0.461294 0.496168 0.461545 0.494743 0.461798 0.493313 0.462055 0.491877 0.462314 0.490436 0.462576 0.48899 0.462841 0.487539 0.463109 0.486082 0.46338 0.484619 0.463654 0.483152 0.463931 0.481679 0.464212 0.4802 0.464495 0.478716 0.464781 0.477227 0.465071 0.475732 0.465364 0.474232 0.465661 0.472727 0.46596 0.471216 0.466264 0.4697 0.46657 0.468178 0.46688 0.466651 0.467194 0.465119 0.467511 0.463581 0.467832 0.462038 0.468156 0.460489 0.468484 0.458935 0.468816 0.457376 0.469151 0.455811 0.469491 0.454241 0.469834 0.452665 0.470181 0.451084 0.470533 0.449498 0.470888 0.447906 0.471247 0.446309 0.471611 0.444707 0.471979 0.443099 0.472351 0.441486 0.472727 0.439867 0.473107 0.438243 0.473493 0.436614 0.473884 0.434957 0.474278 0.433317 0.474677 0.431672 0.47508 0.430022 0.475488 0.428366 0.475901 0.426705 0.476319 0.425039 0.476741 0.423367 0.477169 0.421691 0.477601 0.420008 0.478039 0.418321 0.478482 0.416629 0.478929 0.414931 0.479383 0.413228 0.479841 0.411519 0.480306 0.409806 0.480775 0.408087 0.48125 0.406363 0.481731 0.404635 0.482218 0.4029 0.48271 0.401161 0.483209 0.399417 0.483713 0.397667 0.484223 0.395913 0.48474 0.394153 0.485262 0.392389 0.485791 0.390619 0.486327 0.388845 0.486869 0.387066 0.487417 0.385281 0.487972 0.383492 0.488534 0.381698 0.489103 0.379899 0.489678 0.378095 0.490261 0.376286 0.490851 0.374473 0.491448 0.372655 0.492052 0.370832 0.492664 0.369004 0.493283 0.367172 0.49391 0.365335 0.494545 0.363493 0.495187 0.361647 0.495838 0.359796 0.496497 0.357941 0.497164 0.356082 0.497839 0.354217 0.498522 0.352349 0.499215 0.350476 0.499916 0.348599 0.500625 0.346718 0.501351 0.344808 0.502079 0.342919 0.502816 0.341025 0.503563 0.339127 0.504318 0.337225 0.505084 0.33532 0.505859 0.33341 0.506644 0.331496 0.50744 0.329578 0.508245 0.327657 0.509061 0.325731 0.509887 0.323802 0.510724 0.32187 0.511572 0.319933 0.512431 0.317993 0.513301 0.31605 0.514182 0.314103 0.515075 0.312152 0.51598 0.310199 0.516896 0.308242 0.517825 0.306281 0.518765 0.304318 0.519719 0.302351 0.520685 0.300382 0.521663 0.298409 0.522655 0.296434 0.52366 0.294455 0.524678 0.292474 0.52571 0.290491 0.526756 0.288504 0.527816 0.286515 0.528891 0.284524 0.52998 0.28253 0.531084 0.280533 0.532202 0.278535 0.533336 0.276534 0.534486 0.274531 0.535651 0.272527 0.536833 0.27052 0.538031 0.268511 0.539245 0.266501 0.540476 0.264489 0.541724 0.262475 0.54299 0.26046 0.544273 0.258444 0.545575 0.256426 0.546895 0.254407 0.548233 0.252386 0.54959 0.250365 0.550967 0.248343 0.552363 0.24632 0.55378 0.244296 0.555216 0.242272 0.556673 0.240247 0.558151 0.238221 0.559651 0.236196 0.561172 0.23417 0.562735 0.23212 0.564301 0.230095 0.56589 0.228069 0.567502 0.226044 0.569138 0.224019 0.570798 0.221995 0.572483 0.219971 0.574193 0.217948 0.575928 0.215926 0.57769 0.213904 0.579477 0.211884 0.581292 0.209865 0.583135 0.207847 0.585005 0.205831 0.586904 0.203816 0.588832 0.201803 0.590789 0.199792 0.592777 0.197783 0.594795 0.195776 0.596845 0.193771 0.598927 0.191769 0.601041 0.189769 0.603188 0.187771 0.60537 0.185777 0.607585 0.183785 0.609836 0.181797 0.612123 0.179812 0.614447 0.17783 0.616808 0.175852 0.619206 0.173878 0.621644 0.171907 0.624122 0.16994 0.626639 0.167978 0.629199 0.16602 0.6318 0.164066 0.634444 0.162117 0.637133 0.160173 0.639866 0.158233 0.642644 0.156299 0.64547 0.15437 0.648343 0.152447 0.651266 0.150529 0.654237 0.148617 0.65726 0.146711 0.660335 0.144811 0.663463 0.142917 0.666645 0.141029 0.669883 0.139149 0.673177 0.137275 0.67653 0.135408 0.679942 0.133548 0.683414 0.131695 0.686948 0.12985 0.690546 0.128013 0.694209 0.126183 0.697938 0.124361 0.701735 0.122548 0.705602 0.120743 0.709539 0.118946 0.71355 0.117158 0.717635 0.11538 0.721796 0.11361 0.726094 0.111829 0.730414 0.110079 0.734816 0.108337 0.739301 0.106606 0.743873 0.104884 0.748534 0.103173 0.753284 0.101472 0.758128 0.099781 0.763066 0.098102 0.768102 0.096432 0.773239 0.094774 0.778477 0.093127 0.783821 0.091492 0.789273 0.089868 0.794835 0.088255 0.800511 0.086655 0.806305 0.085066 0.812218 0.08349 0.818254 0.081926 0.824416 0.080375 0.830709 0.078836 0.837135 0.07731 0.843698 0.075797 0.850403 0.074298 0.857252 0.072812 0.864251 0.071339 0.871403 0.06988 0.878712 0.068434 0.886184 0.067003 0.893824 0.065585 0.901635 0.064182 0.909623 0.062794 0.917794 0.061419 0.926152 0.06006 0.934704 0.058715 0.943456 0.057385 0.952413 0.05607 0.961581 0.054771 0.970969 0.053486 0.980582 0.052217 0.990427 0.050964 1.000512 0.049726 1.010845 0.048504 1.021433 0.047297 1.032285 0.046107 1.043411 0.044932 1.054817 0.043774 1.066516 0.042632 1.078515 0.041506 1.090825 0.040396 1.103458 0.039303 1.116424 0.038227 1.129734 0.037166 1.143402 0.036123 1.157439 0.035096 1.171859 0.034086 1.186675 0.033092 1.201903 0.032115 ; ; table[row sep=crcr]0 0 0.0002 0 0.0004 0 0.0006 0 0.0008 0 0.001 0 0.0012 0 0.0014 0 0.0016 0 0.0018 0 0.002 0 0.0022 0 0.0024 0 0.0026 0 0.0028 0 0.003 0 0.0032 0 0.0034 0 0.0036 0 0.0038 0 0.004 0 0.0042 0 0.0044 0 0.0046 0 0.0048 0 0.005 0 0.0052 0 0.0054 0 0.0056 0 0.0058 0 0.006 0 0.0062 0 0.0064 0 0.0066 0 0.0068 0 0.007 0 0.0072 0 0.0074 0 0.0076 0 0.0078 0 0.008 0 0.0082 0 0.0084 0 0.0086 0 0.0088 0 0.009 0 0.0092 0 0.0094 0 0.0096 0 0.0098 0 0.01 0 0.0102 0 0.0104 0 0.0106 0 0.0108 0 0.011 0 0.0112 0 0.0114 0 0.0116 0 0.0118 0 0.012 0 0.0122 0 0.0124 0 0.0126 0 0.0128 0 0.013 0 0.0132 0 0.0134 0 0.0136 0 0.0138 0 0.014 0 0.0142 0 0.0144 0 0.0146 0 0.0148 0 0.015 0 0.0152 0 0.0154 0 0.0156 0 0.0158 0 0.016 0 0.0162 0 0.0164 0 0.0166 0 0.0168 0 0.017 0 0.0172 0 0.0174 0 0.0176 0 0.0178 0 0.018 0 0.0182 0 0.0184 0 0.0186 0 0.0188 0 0.019 0 0.0192 0 0.0194 0 0.0196 0 0.0198 0 0.02 0 0.0202 0 0.0204 0 0.0206 0 0.0208 0 0.021 0 0.0212 0 0.0214 0 0.0216 0 0.0218 0 0.022 0 0.0222 0 0.0224 0 0.0226 0 0.0228 0 0.023 0 0.0232 0 0.0234 0 0.0236 0 0.0238 0 0.024 0 0.0242 0 0.0244 0 0.0246 0 0.0248 0 0.025 0 0.0252 0 0.0254 0 0.0256 0 0.0258 0 0.026 0 0.0262 0 0.0264 0 0.0266 0 0.0268 0 0.027 0 0.0272 0 0.0274 0 0.0276 0 0.0278 0 0.028 0 0.0282 0 0.0284 0 0.0286 0 0.0288 0 0.029 0 0.0292 0 0.0294 0 0.0296 0 0.0298 0 0.03 0 0.0302 0 0.0304 0 0.0306 0 0.0308 0 0.031 0 0.0312 0 0.0314 0 0.0316 0 0.0318 0 0.032 0 0.0322 0 0.0324 0 0.0326 0 0.0328 0 0.033 0 0.0332 0 0.0334 0 0.0336 0 0.0338 0 0.034 0 0.0342 0 0.0344 0 0.0346 0 0.0348 0 0.035 0 0.0352 0 0.0354 0 0.0356 0 0.0358 0 0.036 0 0.0362 0 0.0364 0 0.0366 0 0.0368 0 0.037 0 0.0372 0 0.0374 0 0.0376 0 0.0378 0 0.038 0 0.0382 0 0.0384 0 0.0386 0 0.0388 0 0.039 0 0.0392 0 0.0394 0 0.0396 0 0.0398 0 0.04 0 0.0402 0 0.0404 0 0.0406 0 0.0408
arxiv_0000322
Cosmology in generalized Proca theories ======================================= We consider a massive vector field with derivative interactions that propagates only the 3 desired polarizations (besides two tensor polarizations from gravity) with second-order equations of motion in curved space-time. The cosmological implications of such generalized Proca theories are investigated for both the background and the linear perturbation by taking into account the Lagrangian up to quintic order. In the presence of a matter fluid with a temporal component of the vector field, we derive the background equations of motion and show the existence of de Sitter solutions relevant to the late-time cosmic acceleration. We also obtain conditions for the absence of ghosts and Laplacian instabilities of tensor, vector, and scalar perturbations in the small-scale limit. Our results are applied to concrete examples of the general functions in the theory, which encompass vector Galileons as a specific case. In such examples, we show that the de Sitter fixed point is always a stable attractor and study viable parameter spaces in which the no-ghost and stability conditions are satisfied during the cosmic expansion history. Introduction ============ The high-precision observations achieved by the measurements like supernovae Type Ia (SNIa), Cosmic Microwave Background (CMB), and Baryon Acoustic Oscillations (BAO) together with the two pillars of General Relativity (GR) and the cosmological principle have led to a notion of the standard cosmological model. According to this concordance model, about 70% and 25% of the today’s energy density of the Universe correspond to shadowy components dubbed dark energy and dark matter, respectively. In particular, dark energy has a repulsive force against gravity, which leads to the late-time cosmic acceleration. After its first discovery in 1998, there have been numerous attempts to pursue the origin of dark energy. The first promising explanatory attempt for dark energy consists of introducing a cosmological constant Λ that would cause an effective repulsive force between cosmological objects at large distances. A natural origin of the cosmological constant could be the vacuum energy density. Using the technique of particle physics, one can estimate the expected value for the vacuum energy density caused by fluctuating quantum fields. The result is the worst theoretical prediction in the history of physics. The discrepancy between the theoretically predicted value and the measured one amounts to 120 orders of magnitude. This constitutes the vacuum catastrophe prediction which remains one of the most challenging puzzles in physics. An alternative explanation for the late-time acceleration of the Universe could be accounted by introducing new dynamical degrees of freedom. This is achieved either by directly invoking new fluids in form of an energy-momentum tensor *T**μ**ν* with negative pressure or by modifying the geometrical part of Einstein field equations (see Refs.  for reviews). One hope in weakening gravity on cosmological scales is to tackle the cosmological constant problem. The implications are multifaceted. The same modification might account for the late-time speed-up of the cosmic expansion. This type of scenarios can naturally arise in higher-dimensional theories or massive gravity. Concerning the higher-dimensional frameworks, the Dvali-Gabadadze-Porrati (DGP) model is one of the most important large-scale modifications of gravity. The Universe is modeled by a confined three-brane embedded in a five-dimensional Minkowski bulk. An intrinsic Einstein-Hilbert term is introduced to recover four-dimensional gravity on small scales. On the other hand, the gravity is systematically weakened on cosmological scales due to the gravitational leakage to the extra dimension. From the effective four dimensional point of view, the graviton acquires a soft mass and hence carries five degrees of freedom. One of such degrees of freedom corresponds to the helicity-0 mode, which sources the existence of a self-accelerating solution. The non-linear interactions of the helicity-0 mode make the mode decouple from the gravitational dynamics, which is known as the Vainshtein mechanism. Motivated by the nice properties of the helicity-0 mode in the decoupling limit of the DGP model, more general “Galileon” interactions have been proposed on the Minkowski background. These interactions are invariant under internal Galilean and shift transformations. The Galilean symmetries together with the requirement of ghost absence restrict the effective Lagrangian to consist only of five derivative interactions. There were attempts to generalize the Minkowski Galileon theory to a curved background. The first natural pursuit was done by covariantizing directly the decoupling limit. A rigorous investigation revealed that the naive covariantization (replacing partial derivatives with covariant derivatives) would yield higher-order equations of motion and that unique non-minimal couplings with the curvature should be added to maintain the equations of motion up to second order. The generalization of this latter “covariant Galileon” led to the rediscovery of the Horndeski action as the most general scalar-tensor theories with second-order equations of motion in 4-dimensions. The application of covariant Galileons to cosmology witnessed a flurry of investigations concerning self-accelerating de Sitter solutions, late-time cosmology and its observational implications, inflation, super-luminalities, and so on. Even if the ghost-like Ostrogradski instability can be avoided for covariant Galileons, the Galilean symmetry is explicitly broken when promoted to the curved space-time. However, there have been also successful generalizations to maximally symmetric backgrounds with a generalized Galilean symmetry. It is worth mentioning that there exists a parallel to massive gravity, namely Galileon interactions arise naturally in the decoupling limit of massive gravity. Furthermore, the covariantization of the decoupling-limit interactions gives rise to a specific sub-class of Horndeski theories. A natural follow-up was to apply the same construction of Galileon-like interactions to arbitrary *p*-forms. While even *p*-form Galileons and multi odd *p*-form Galileons are easy to construct, very soon it was realized that one cannot construct Galilean interactions for a Lorentz- and gauge-invariant, single spin-1 (i.e., 1-form) field in any dimensions (see also Refs. ). The formalism developed for the proof of the no-go theorem was recently extended towards categorization of all possible general odd *p*-form Galileons, as an application of the concept called plethysm in the representation theory of the symmetric groups. However, this no-go theorem does not apply to massive spin 1 fields since one of the assumptions of the theorem, the gauge invariance, is dropped, and it is possible to successfully construct derivative self-interactions for the vector field with a broken *U*(1) symmetry. A systematical construction of these interactions with only 3 propagating degrees of freedom (two transverse and one longitudinal) was performed in Ref.  with the requirement that the longitudinal mode possesses non-trivial interactions belonging to the class of Galileon/Horndeski theories. The analysis was based on studying the Hessian matrix and assuring the propagation of a second class constraint. The action derived in Ref.  corresponds to the generalization of massive Proca theories to the curved background in which the requirement of second-order equations of motion enforces the presence of non-minimal derivative couplings with the Ricci scalar and the Einstein tensor (see also Refs.  for related works). Note that similar type of vector-tensor theories can naturally arise in modifications of gravity based on Weyl geometries. It is also possible to obtain derivative interactions higher than quintic order by keeping the degrees of freedom three. In a sub-class of generalized Proca theories up to the quartic Lagrangian ${\cal L}\_4$, the existence of de Sitter solutions relevant to dark energy was found in Refs. . Applying these theories to the spherically symmetric background, the authors in Ref.  showed that cubic-order derivative self-interactions and non-minimal derivative couplings at quartic order can efficiently screen the propagation of the fifth force in local regions of the Universe. In this paper, we will closely study cosmological implications of the full derivative interactions introduced in Ref.  in the presence of matter (radiation and non-relativistic matter). In particular, we derive conditions for the absence of ghosts and Laplacian instabilities by considering tensor, vector, and scalar perturbations at linear level. Our general results will be applied to concrete models relevant to the late-time cosmic acceleration. Not only we show the existence of stable de Sitter solutions, but also we study the stability of perturbations throughout the cosmological evolution. Our paper is organized as follows. In Sec. [HPsec] we present the background equations of motion in the presence of a temporal component of the vector field and a matter fluid. In Sec. [spsec] we derive conditions for avoiding ghosts and Laplacian instabilities by expanding the generalized Proca action up to quadratic order in tensor, vector, and scalar perturbations. In Sec. [genedissec] we discuss general conditions for the theoretical consistency at the de Sitter fixed point and in the early cosmological epoch. In Sec. [covasec] we propose a class of concrete models in which the de Sitter point is always a late-time attractor and discuss the viable parameter space in which ghosts and instabilities are absent during the cosmic expansion history. Sec. [consec] is devoted to conclusions. Generalized Proca theories and the background equations of motion ================================================================= Let us start with generalized Proca theories described by the four-dimensional action S=d4x ( L +LM ),=LF+i=25 Li, [Lag] where *g* is a determinant of the metric tensor *g**μ**ν*, ${\cal L}\_M$ is a matter Lagrangian, and F &=& -14 FF, [LF] L2 &=& G2(X), [L2] L3 &=& G3(X) A, [L3] L4 &=& G4(X)R+ G4,X(X),[L4] L5 &=& G5(X) G A -16 G5,X(X) [ ( A)3 -3d2 A A A -3(1-d2) A A A & & +(2-3d2) A A A +3d2 A A A]. [L5] Here, *A**μ* is a vector field with *F**μ**ν* = ∇*μ**A**ν* − ∇*ν**A**μ*, ∇*μ* is the covariant derivative operator, *R* is the Ricci scalar, *G**μ**ν* is the Einstein tensor, *c*2, *d*2 are constants, *G*2, 3, 4, 5 are arbitrary functions of X=-12 A A, [Xdef] and *G**i*, *X* = ∂*G**i*/∂*X*. The Lagrangians ${\cal L}\_{2,3,4,5}$ are constructed to keep the propagating degrees of freedom up to three with second-order equations of motion. As in standard massless Maxwell theory, there are two transverse polarizations for the vector field. The Proca theory corresponds to the function *G*2(*X*) = *m*2*X* with *G*3, 4, 5 = 0, in which case introduction of the mass term *m* breaks the *U*(1) gauge symmetry. This gives rise to one additional degree of freedom in the longitudinal direction. The generalized Proca theories given above correspond to the extension of massive Proca theories with the cubic derivative self-interaction ${\cal L}\_3$ and non-minimal derivative couplings with the Ricci scalar (in the Lagrangian ${\cal L}\_4$) and with the Einstein tensor (in the Lagrangian ${\cal L}\_5$). In Eqs. ([L4]) and ([L5]) the terms multiplied by the constants *c*2 and *d*2 can be expressed in terms of *F**μ**ν*, so they can be regarded as intrinsic vector modes (i.e., they vanish by taking the scalar limit *A**μ* → ∇*μ**π*). In Refs.  the cosmology for *c*2 =  − 1 up to the Lagrangian ${\cal L}\_4$ was studied for specific choices of the functions *G*2, 3, 4. In this paper we study the cosmology for the full action ([Lag]) for arbitrary constants *c*2 and *d*2. We also derive the background equations of motion and the no-ghost/stability conditions without specifying the functional forms of *G*2, 3, 4, 5. We define the energy-momentum tensor of the matter Lagrangian ${\cal L}\_M$, as T(M)=- g. Assuming that matter is minimally coupled to gravity, the following continuity equation holds T(M)=0. [Tcon] We derive the equations of motion on the flat Friedmann-Lemaître-Robertson-Walker (FLRW) background described by the line element *d**s*2 =  − *d**t*2 + *a*2(*t*)*δ**i**j**d**x**i**d**x**j*, where *a*(*t*) is the scale factor that depends on the cosmic time *t*. The homogenous vector field *A**μ*, which does not break spatial isotropy, is given by A=((t),0,0,0), [Aansatz] where the temporal component *ϕ* is a function of *t*. Then, the kinetic term ([Xdef]) reduces to *X* = *ϕ*2/2. For the matter Lagrangian ${\cal L}\_M$, we take into account the perfect fluid with the energy-momentum tensor $T^{\mu}\_{\nu}={\rm diag}(-\rho\_M,P\_M,P\_M,P\_M)$, where *ρ**M* is the energy density and *P**M* is the pressure. Then, the continuity equation ([Tcon]) reads M+3H(M+PM)=0, [continuity] where a dot represents a derivative with respect to *t*, and $H \equiv \dot{a}/a$ is the Hubble expansion rate. Varying the action ([Lag]) with respect to *g**μ**ν*, we obtain the following equations of motion & & G2-G2,X2-3G3,XH 3 +6G4H2-6(2G4,X+G4,XX2)H22 +G5,XX H35+ 5G5,X H33 =M, [be1] & & G2-2G3,X+2G4(3H2+2) -2G4,X ( 3H2+2H +2 )-4G4,XXH3 && +G5,XXH2 4+G5,X H 2(2+2H2+3H) =-PM. [be2] Variation of the action ([Lag]) with respect to *A**μ* leads to ( G2,X+3G3,XH+6G4,XH2 +6G4,XXH22 -3G5,XH3-G5,XXH3 3 )=0. [be3] Among four Eqs. ([continuity])-([be3]), three of them are independent. We note that Eqs. ([be1]) and ([be3]) do not contain any time derivatives of *ϕ*. This reflects the fact that the theory given by the action ([Lag]) has been constructed to avoid the appearance of the propagating degrees of freedom more than three. From Eq. ([be3]) there are two branches of solutions. One of them is *ϕ* = 0, but in this case the temporal component of the vector field does not contribute to the background dynamics. Another branch corresponds to G2,X+3G3,XH+6G4,XH2 +6G4,XXH22 -3G5,XH3-G5,XXH3 3=0, [be3d] in which case the field *ϕ* is directly related to the Hubble parameter *H*. This allows the existence of de-Sitter solutions characterized by constant *H* and *ϕ*, so we shall focus on the second branch satisfying Eq. ([be3d]) in the following discussion. Conditions for avoiding ghosts and instabilities ================================================ We derive conditions for the absence of ghosts and instabilities of tensor, vector, and scalar perturbations on the flat FLRW background. We consider two scalar metric perturbations *α*, *χ* and one vector perturbation *V**i* by choosing the so-called flat gauge. Under this choice, the temporal and spatial components of gauge transformation vectors are fixed. Taking into account the tensor perturbation *h**i**j*, the linearly perturbed line-element is given by ds2=-(1+2)dt2+2( i+Vi )dtdxi+a2(t) ( ij +hij )dxi dxj, where *V**i* and *h**i**j* satisfy the following conditions & & i Vi=0, [traconve1] & & i hij=0,i=0. [tracon] As for the Proca vector field *A**μ*, the time component *A*0 and the spatial vector *A**i* can be expressed in the following forms A0 & = & (t)+, Ai & = & ij ( jV+Ej ), where *δ**ϕ* is the perturbation in *A*0 (which depends on both *t* and *x**i*). The perturbation *χ**V* corresponds to the intrinsic scalar part, whereas *E**j* is the vector part satisfying the transverse condition j Ej=0. [traconve2] For the matter action $S\_M=\int d^4 x \sqrt{-g}\,{\cal L}\_M$, we consider a single perfect fluid with the energy density *ρ**M*. For the description of the perfect fluid, we make use of the Schutz-Sorkin action (see also Ref. ): SM=-d4x, [Spf] where ℓ is a scalar, *J**μ* is a vector field of weight one, A1, A2, B1, B2 are scalars whose perturbations are meant to describe the vector modes, and *n* is the number density of the fluid defined by n=. Due to the transverse condition for the vector modes, a third component A3∂*μ*B3 is redundant in the action ([Spf]). We express ℓ and *J**μ* into the background and perturbed components, as &=&-t M,ndt’ - M,n v, J0 &=& 0+J, Ji &=& ik ( kj+Wk ), where *ρ**M*, *n* ≡ ∂*ρ**M*/∂*n* is evaluated on the background, N0 = *n*0*a*3 is a constant associated with the total number of fluid particles (*n*0 is the background value of *n*), and *v*, *δ**J* are the perturbations of ℓ, *J*0, respectively. On the FLRW background, the action ([Spf]) reduces to $S\_{M}^{(0)}=\int d^4 x \sqrt{-g}\,(n\_0\rho\_{M,n}-\rho\_M)$. Since the term inside the bracket corresponds to the pressure *P**M* of the fluid, we have PM=n0M,n-M. [PM] The perturbation *δ**j* corresponds to the scalar part of *J**i*, whereas the intrinsic vector perturbation *W**k* obeys k Wk=0. [traconve3] We write the perturbation of ${\cal A}\_i$ in the form i=M,nui, [delAi] where *u**i* corresponds to the intrinsic vector part of the four velocity $u^{\alpha}=J^{\alpha}/(n\sqrt{-g})$, satisfying i ui=0. [traconve4] It should be pointed out that the above form of action is not the only possibility for describing the perfect fluid, e.g., the k-essence form is also another possibility. However, in the theories under consideration, the Schutz-Sorkin action is convenient for properly accommodating vector perturbations. Moreover, it provides a natural and convenient choice of variables for the dust-fluid perturbation with a vanishing propagation speed squared *c**m*2. On the other hand, in the k-essence form, one may need to perform a change of variables corresponding to a canonical transformation, e.g., from the perturbation of the scalar field to the density contrast, before taking the *c**m*2 → 0 limit. Due to the decomposition into tensor, vector, and scalar modes explained above, expansion of the action ([Lag]) up to second order in perturbations leads to the quadratic action of the form S(2)=ST(2)+SV(2)+SS(2), where *S**T*(2), *S**V*(2), *S**S*(2) correspond to contributions from tensor, vector, and scalar perturbations, respectively, Variations of the action *S**T*(2), *S**V*(2), *S**S*(2) with respect to perturbed quantities give rise to the linear perturbation equations of motion for tensor, vector, and scalar modes. The same equations of motion can be derived after obtaining the general field equations by varying the action ([Lag]) with respect to *g**μ**ν* and then decomposing into tensor, vector, and scalar perturbations (as in the case of GR discussed in Ref. ). The advantage of using the second-order action is that no-ghost and stability conditions for dynamical fields can be easily deduced after integrating out all non-dynamical fields from the action. In the following, we shall separately derive *S**T*(2), *S**V*(2), *S**S*(2) to discuss conditions for the absence of ghosts and instabilities. Tensor perturbations -------------------- The tensor perturbation *h**i**j*, which satisfies the transverse and traceless conditions ([tracon]), can be expressed in terms of the two polarization modes *h*+ and *h*×, as *h**i**j* = *h*+*e**i**j*+ + *h*×*e**i**j*×. In Fourier space, the unit bases *e**i**j*+ and *e**i**j*× obey the relations $e\_{ij}^{+}({\bm k}) e\_{ij}^{+}(-{\bm k})^\*=1$, $e\_{ij}^{\times}({\bm k}) e\_{ij}^{\times}(-{\bm k})^\*=1$, and $e\_{ij}^{+}({\bm k}) e\_{ij}^{\times}(-{\bm k})^\*=0$, where ${\bm k}$ is the comoving wavenumber. Expanding Eq. ([Lag]) up to second order in tensor perturbations, the resulting second-order action is given by ST(2)==+,dtd3x a3, where qT=2G4-22G4,X+ H3G5,X, [qT] and cT2=. [cT] Note that *c**T*2 corresponds to the propagation speed squared of the tensor mode. The ghost is absent under the condition *q**T* > 0. The Laplacian instability on small scales can be avoided for *c**T*2 > 0. Vector perturbations -------------------- The four quantities *V**i*, *E**i*, *W**i*, and *u**i* obey the transverse conditions ([traconve1]), ([traconve2]), ([traconve3]), and ([traconve4]), respectively. For the practical computations of the second-order action, we can consider the vector fields depending on *t* and *z* alone, e.g., *V**i* = (*V*1(*t*, *z*), *V*2(*t*, *z*), 0). Then, the transverse condition such as Eq. ([traconve1]) is automatically satisfied. For the quantities ${\cal A}\_i$ and ${\cal B}\_i$, the simplest choice containing all the needed information of the vector mode is given by ${\cal A}\_1=\delta {\cal A}\_1(t,z)$, ${\cal A}\_2=\delta {\cal A}\_2(t,z)$, ${\cal B}\_1=x+\delta {\cal B}\_1(t,z)$, and ${\cal B}\_2=y+\delta {\cal B}\_2(t,z)$. The normalizations of ${\cal B}\_1$ and ${\cal B}\_2$ have been done such that both ${\cal B}\_{1,x}$ and ${\cal B}\_{2,y}$ are equivalent to 1. Note that the above prescription can be extended to the general case in which the perturbations depend on *t*, *x*, *y*, *z*. After expanding the matter action ([Spf]) up to second order in vector perturbations, it follows that (SV(2))M=i=12dtd3x. [LVM] Since the quantities $W\_i, \delta {\cal A}\_i, \delta {\cal B}\_i$ appear only in the matter action, we can vary the action ([LVM]) with respect to these perturbations independently of the full second-order action. Variation of Eq. ([LVM]) in terms of *W**i* leads to Wi=. [Wiex] Substituting Eq. ([Wiex]) into Eq. ([LVM]) and varying the action with respect to $\delta {\cal A}\_i$, we obtain $\delta {\cal A}\_i$ in the form ([delAi]) with ui=Vi-a2i. Varying the action with respect to $\delta {\cal B}\_i$, we find M,nui =ui =, [conser] which is associated with the conservation of the energy-momentum tensor of the perfect fluid. This is the same relation as that in GR and it states that the perturbation *u**i* is non-dynamical. Finally, the second-order matter action reduces to (SV(2))M=i=12 dtd3x, [LVM2] with the conservation relation ([conser]) for the four velocity $u\_i=V\_i-a^2\dot{\delta {\cal B}}\_i$. We sum up the second-order action originating from $\int d^4 x \sqrt{-g}{\cal L}$ with Eq. ([LVM2]). In doing so, it is convenient to introduce the following combination Zi= Ei+(t)Vi. Let us consider the perturbations in Fourier space with $k=|{\bm k}|$. Variation of the total second-order action *S**V*(2) with respect to *V**i* leads to Vi=-(M+PM)ui- (G4,X-12 G5,XH )Zi. [vecre] The metric perturbation *V**i* follows a different dynamics from that in GR (where *G*4, *X* = 0 = *G*5, *X*). Integrating out the non-dynamical fields *u**i* and *V**i*, we obtain the action for the dynamical field *Z**i*. It follows that there are two dynamical degrees of freedom *Z*1 and *Z*2 for the vector mode. Taking the small-scale limit (*k* → ∞) in Eq. ([vecre]) and using the background equations of motion, the resulting second-order action for the perturbations *Z**i* in Fourier space reads SV(2) i=12 dtd3 x ( i2+cV2 Zi2 ), [SVac] where qV= 1-2c2G4,X-2d2HG5,X, [qv] and the vector propagation speed squared can be written as cV2=1+ 2qTqV+. [cv] In Eq. ([SVac]) we have ignored the contribution of an effective mass term *m**V*2*Z**i*2 relative to the second term, which can be justified in the limit *k* → ∞ with *c**V* ≠ 0. The sign of the coefficient *q**V* characterizes the no-ghost condition, such that the vector ghost is absent for *q**V* > 0. If *c*2 = *d*2 = 0, then this condition is automatically satisfied. We recall that the terms containing *c*2 and *d*2 are associated with the pure vector contributions in the original action ([Lag]), which affect the no-ghost condition for the vector mode. If the vector propagation speed squared *c**V*2 is positive, the small-scale Laplacian instability can be avoided. For the theories with *G*5 = 0 or the theories with *d*2 = 0 and *G*5 ≠ 0, we have that *c**V*2 > 1 under the no-ghost conditions of tensor and vector perturbations (*q**T* > 0 and *q**V* > 0). Scalar perturbations -------------------- For scalar perturbations, we consider the single perfect fluid with the background energy density *ρ**M* and the pressure *P**M* given by Eq. ([PM]). We make the field redefinitions J &=& M =M, V&=& -(t), where *δ**ρ**M* is the matter density perturbation. First, we can integrate out the Lagrange multiplier *δ**j* by means of its own equation of motion: =-v-. Then, the second-order action for scalar perturbations is given by *S**S*(2) = ∫*d**t**d*3*x* *L**S*(2), where LS(2) & = & a3{- +v -(M)2-M & & -w3+w42 - & & - +w5 - & & -+ +(w1+)}, [sscalar] with the short-cut notations w1 & = & H2 3 (G5,X+2G5,*XX*) -4H(G4+4G4,*XX*)-3G3,X, w2 & = & w1+2HqT, w3 & = & -22qV, w4 & = & 33(9G5,X-4G5,*XXX*) -3H2 (2G4+22G4,X+4G4,*XX*-6G4,*XXX*) & & -H3(G3,X-2G3,*XX*) +4G2,*XX*, w5 & = & w4-H(w1+w2), w6 & = & -, w7 & = & 2(HG5,X-2G4,X) +. The Lagrangian ([sscalar]) does not contain the time derivatives of *α*, *χ*, *δ**ϕ*, *v*, so these four fields are non-dynamical. Varying the action *S**S*(2) with respect to *α*, *χ*, *δ**ϕ*, *v*, we obtain the following equations of motion in Fourier space respectively: & & M-2w4 +( 3Hw1-2w4 ) + ( w3+w3 -w6 +w1 +2w3 )=0, & & n0M,n v+ w1 + =0, & & ( 3Hw1-2w4 )-2w5 + ( w3+ 2-++w2 )=0, & & M+3H(1+ ) M +n0M,n ( +v )=0. On using these equations, we can eliminate the non-dynamical fields from the second-order Lagrangian *L**S*(2). After integrations by parts, we finally obtain a reduced Lagrangian for two dynamical fields *ψ* and *δ**ρ**M* in the form LS(2)=a3( tK +tG -tM -tB ), [L2mat] where ${\bm K}$, ${\bm G}$, ${\bm M}$, ${\bm B}$ are 2 × 2 matrices, and the vector field $\vec{\mathcal{X}}$ is defined by t=(, M/k ). The matrix ${\bm M}$ is related with the masses of two scalar modes, which do not contain the *k*2/*a*2 term. In the following we shall take the small-scale limit (*k* → ∞), in which the second term on the r.h.s. of Eq. ([L2mat]) dominates over the third and fourth terms. The matrix ${\bm K}$ is associated with the kinetic terms of two scalar modes. Provided that the two eigenvalues of ${\bm K}$ are positive, the scalar ghosts are absent. The no-ghost condition for the matter perturbation *δ**ρ**M* is trivially satisfied for *ρ**M* + *P**M* > 0. For the perturbation *ψ*, the quantity related with the no-ghost condition corresponds to[1](#fn1) QS=, [Qsge] where qS=3w12+4qTw4. [qs] Under the condition *q**T* > 0, the scalar ghost does not arise for *q**S* > 0. For large *k*, the Lagrangian ([L2mat]) leads to the dispersion relation ( 2 K- G )=0, [deteq] where *ω* is a frequency. Defining the scalar propagation speed *c**s* according to the relation *ω*2 = *c**s*2 *k*2/*a*2, there are two solutions to Eq. ([deteq]). One of them is the matter propagation speed squared given by $$c\_M^2= \frac{n\_0\rho\_{M,nn}}{\rho\_{M,n}}\,,$$ which is the same value as that of GR. Another is the propagation speed squared of the perturbation *ψ*, which is given by cS2 &=& {2w22w3(M+PM) -w3(w1-2w2)(/-H) - w3( 2w221-w122 ) & & +(w1-2w2 )2 +w1w2},[cs] where 8H2 2 qTqVqS. If the numerator of Eq. ([cs]) is positive, then the Laplacian instability of scalar perturbations is absent under the three no-ghost conditions *q**T* > 0, *q**V* > 0, and *q**S* > 0. While we have focused on the single-fluid case, the results given in Eqs. ([Qsge]) and ([cs]) are valid for the multi-fluid case by dealing with *ρ**M* and *P**M* as the energy density and the pressure of the total fluid, respectively. This situation is analogous to what happens in scalar Horndeski theories. In summary, besides the matter fluid, the number of propagating degrees of freedom is five (two tensors, two vectors, one scalar), where three of them (two vectors and one scalar) originate from the massive vector field. The six quantities *q**T*, *q**V*, *q**S* and *c**T*2, *c**V*2, *c**S*2 need to be positive to avoid the appearance of ghosts and instabilities for tensor, vector, scalar perturbations. General discussions for de Sitter and early-Universe stabilities ================================================================ We discuss no-ghost and stability conditions derived in Sec. [spsec] in the two stages: (i) de Sitter fixed point, and (ii) early cosmological epochs (radiation/matter eras). The functional forms of *G*2, 3, 4, 5 are not specified in this section. de Sitter fixed point --------------------- The de Sitter solutions, which are characterized by $\dot{\phi}=0$ and $\dot{H}=0$, can exist for the branch satisfying Eq. ([be3d]). We can solve Eqs. ([be1]) and ([be3d]) to express *G*4, *X* and *G*4, *X**X* in terms of other quantities like *G*2. Since *ρ**M* = 0 at the de Sitter solution, the quantities *q**T* and *c**T*2 reduce, respectively, to qT=-, cT2 =. [ctds] Provided that *G*4 > 0, the tensor ghost and instability are absent for *q**T* > 0, i.e., G2-H33G5,X<0. [qT2] In the absence of the Lagrangian ${\cal L}\_5$ (i.e., *G*5 = 0), the condition ([qT2]) simply reduces to *G*2 < 0. This sign is opposite to that of standard Proca theories characterized by the function *G*2 = *m*2*X*. Naively one may think that this gives rise to tachyons, but we need to caution that the effective mass *m**V* of the vector field in the presence of gravity is generally different from its bare mass. The stability condition for super-horizon modes could be a subtle issue since the expression for the squared mass depends on e.g., normalization of variables. So far, we have defined the variables *E**i* and *V**i* in the comoving coordinate basis *d**x**i*. Alternatively, one could define the variables *Ẽ**i* and *Ṽ**i* in the background tetrad basis *a*(*t*)*d**x**i* since it is the tetrad components rather than the comoving coordinate components that are relevant for local nonlinear interactions. The new set of variables is related to the original set of variables as *E**i* = *a**Ẽ**i* and *V**i* = *a**Ṽ**i*, and thus the new variables may stay constant or decay even if the original variables grow exponentially. In terms of the new variable *Ẽ**i*, the quadratic action for the vector perturbations around de Sitter backgrounds is simply given by SV(2)=i=12 dtd3 x. This expression holds for all *k* (i.e., without the need for taking the limit *k* → ∞). Since the effective mass squared *m**V*2 = 2*H*2 of the vector mode is positive, this implies the absence of tachyonic instability on the de Sitter solution. For *m* of the order of the today’s Hubble parameter *H*0, the vector mass term does not affect the dynamics of perturbations deep inside the Hubble radius (*k*/*a* ≫ *H*) even in the early cosmological epoch. If the Lagrangian ${\cal L}\_5$ is present, it is possible to satisfy the condition ([qT2]) even for *G*2 > 0. In this case, however, both *Q**T* and *c**T*2 are negative for *G*5 → 0, so the ghost and the instability arise in this limit. We shall focus on the theories with *G*2 < 0 to ensure the stability of tensor perturbations in the limit *G*5 → 0. From Eqs. ([qv]) and ([cv]) the quantities associated with vector perturbations at the de Sitter point are given by qV &=& 1-, cV2 &=& 1+ 18H42 qT qV +. For the theories and backgrounds with *d*2*H**ϕ**G*5, *X* > 0, it follows that *c**V*2 > 1 under the two no-ghost conditions *q**T* > 0 and *q**V* > 0. For *d*2*H**ϕ**G*5, *X* < 0, the condition for avoiding the Laplacian instability corresponds to 18H42 qT ( qV+d2HG5,X) +(G2+6H2G4-H3 3 G5,X)2>0. For scalar perturbations, all the terms with the time-derivatives in Eq. ([cs]) vanish at the de Sitter solution. Then *c**S*2 can be simply written as *c**S*2 = [(*w*1 − 2*w*2)*ϕ**w*6 + *w*1*w*2][(*w*1 − 2*w*2)(*ϕ**w*6 + *H**w*3) + *w*1*w*2]/Δ. By using the functions *G*2, 3, 4, 5 and their derivatives, Eqs. ([qs]) and ([cs]) reduce, respectively, to qS &=&, [qsds] cS2 &=&, [csds] where S1 && +2,[xi1] S2 && 6H22 &&.[xi2] Under the two conditions *q**T* > 0 and *q**V* > 0, the scalar ghost and the instability are absent for qS>0,S1(S1-S2)>0. Unless we specify some functional forms of *G*2, 3, 4, 5, it is difficult to derive concrete constraints on these functions from the general expressions ([qsds])-([csds]). Early cosmological epochs ------------------------- Let us proceed to the discussion of no-ghost and stability conditions during radiation and matter eras. In doing so, we assume that the early cosmological evolution is close to that of GR with the function $G\_4=M\_{\rm pl}^2/2+g\_4$ satisfying the condition $g\_4 \ll M\_{\rm pl}^2/2$. The background Eqs. ([be1]) and ([be2]) can be expressed in the following forms & & 3Mpl2 H2=M+DE, [Fri1] & & Mpl2 ( 3H2+2) =-PM-PDE, [Fri2] where DE &=& -G2+G2,X2+3G3,X3 H -6g4H2+62H2 (2G4,X+G4,XX2) -H3 G5,XX5-5H3G5,X 3, [rhode] PDE &=& G2-2G3,X+2g4(3H2+2) -2G4,X ( 3H2 +2H +2 )-4H3G4,XX && +4H2G5,XX+G5,X2H(2H+2H2+3H). [Pde] In the early cosmological epoch, the energy density $\rho\_{\rm DE}$ and the pressure $P\_{\rm DE}$ of the “dark” component are sub-dominant to *ρ**M* and *P**M*, respectively, so that $|\rho\_{\rm DE}| \ll M\_{\rm pl}^2 H^2$ and $|P\_{\rm DE}| \ll M\_{\rm pl}^2 H^2$. It is then natural to consider a situation in which each term in Eqs. ([rhode]) and ([Pde]) is much smaller than the order of $M\_{\rm pl}^2 H^2$. In this case, the terms *ϕ*2*G*4, *X*, *H**ϕ*3*G*5, *X*, and $\phi^2 \dot{\phi}G\_{5,X}$ in *q**T* and *c**T*2 are much smaller than the order of $2G\_{4} \simeq M\_{\rm pl}^2$, so the quantities ([qT]) and ([cT]) are approximately given by qT Mpl2,cT2 1. Hence there are neither ghosts nor instabilities for tensor perturbations. For vector perturbations, even if each term in Eqs. ([rhode]) and ([Pde]) is much smaller than the order of $M\_{\rm pl}^2 H^2$, this does not necessarily imply that the terms *c*2*G*4, *X* and *d*2*H**ϕ**G*5, *X* in *q**V* are much less than 1. If the conditions |c2 G4,X| 1,|d2 HG5,X| 1,|| Mpl [qvearly] hold in the early cosmological epoch with ∣*c*2∣, ∣*d*2∣ of the order of unity, *q**V* and *c**V*2 approximately reduce to qV 1,cV2 1. In this case, the vector ghost and stability conditions are automatically satisfied. It should be noted that, even if the conditions ([qvearly]) are violated in the early cosmological epoch, one can avoid the vector ghost for *c*2*G*4, *X* < 0 and *d*2*H**ϕ**G*5, *X* < 0. In addition, *c**V*2 is larger than 1 for $d\_2G\_{5,X}(H\phi-\dot{\phi})>0$. Hence, the three conditions in Eq. ([qvearly]) are sufficient but not necessary, so there are other cases in which both *q**V* and *c**V*2 are positive. The quantities *q**S* and *c**S*2 of scalar perturbations contain the third-order derivatives *G*4, *X**X**X* and *G*5, *X**X**X* with respect to *X*. These terms do not appear in the background equations of motion. Hence we need to impose some conditions on such derivatives to derive analytic expression of *Q**S* and *c**S*2. Moreover, unless the functional forms of *G*2, 3, 4, 5 are specified, one cannot extract detailed information for each function due to the complexities of no-ghost and stability conditions of scalar perturbations. In Sec. [covasec] we shall compute the quantities *Q**S* and *c**S*2 for concrete models from the radiation era to the late-time de Sitter solution. Application to concrete models ============================== In this section, we search for viable dark energy models in the framework of generalized Proca theories given by the action ([Lag]). From Eq. ([be3d]) the field *ϕ* depends on the Hubble parameter *H*. To realize the situation in which the energy density of *ϕ* starts to dominate over the background matter densities at the late cosmological epoch, the amplitude of the field *ϕ* should grow with the decrease of *H*. For this purpose, we assume that *ϕ* is related with *H* according to the relation p H-1, [phiH] where *p* is a positive constant. We shall later justify this ansatz by showing that all background equations of motion are satisfied and that all stability conditions for linear perturbations are fulfilled. Hereafter, without loss of generality, we shall focus on the branch in which *ϕ* is positive. For realizing the solution ([phiH]), the functions *G*2, 3, 4, 5 in Eq. ([be3d]) should contain the power-law dependence of *X* in the forms G2(X)=b2 Xp2,G3(X)=b3Xp3,G4(X)=+b4Xp4,G5(X)=b5Xp5, [G2345] where $M\_{\rm pl}$ is the reduced Planck mass, *b*2, 3, 4, 5 are constants, and the powers *p*3, 4, 5 are given by p3=12 ( p+2p2-1 ),p4=p+p2,p5=12 ( 3p+2p2-1 ). [p345] The vector Galileon corresponds to the powers *p*2 = 1 and *p* = 1, so the field *ϕ* has the dependence *ϕ* ∝ *H*− 1. Thus the analysis based on the assumption ([phiH]) is general in that it encompasses the vector Galileon as a specific case. Dynamical analysis of the background ------------------------------------ Let us first consider the background dynamics for the theories given by the functions ([G2345]) with the powers ([p345]). For the matter sector we take into account non-relativistic matter (energy density *ρ**m* and pressure *P**m* = 0) and radiation (energy density *ρ**r* and pressure *P**r* = *ρ**r*/3), which obey the continuity equations $\dot{\rho}\_m+3H\rho\_m=0$ and $\dot{\rho}\_r+4H\rho\_r=0$ respectively. In this case, we have *ρ**M* = *ρ**m* + *ρ**r* and *P**M* = *ρ**r*/3 in Eqs. ([Fri1]) and ([Fri2]). To study the dynamical system of the background, it is convenient to introduce the following density parameters r, m, DE 1-r-m, and the following dimensionless quantities y 3Mpl2 H22p2,i ( p H )i-2, [ydef] where *i* = 3, 4, 5. Due to the relation ([phiH]), *β**i*’s are constants. From Eq. ([be3d]) it follows that 1+33+6(2p+2p2-1)4 -(3p+2p2)5=0. [mueq] In the following, we employ this relation to express *β*3 in terms of *β*4 and *β*5. On using Eq. ([be1]), we find that the dark energy density parameter is related with the quantity *y*, as DE=, [OmegaDE] where -p2(p+p2)(1+4p2 5) +6p22(2p+2p2-1)4. [betadef] Differentiating Eq. ([be3d]) with respect to *t* and using Eq. ([be2]), we can solve for $\dot{\phi}$ and $\dot{H}$. Taking the derivatives of $\Omega\_{\rm DE}$ and Ω*r* with respect to ${\cal N} \equiv \ln a$ (denoted as a prime), we obtain the following dynamical equations of motion DE’ &=& 1+sDE, [dx1] r’ &=& - 1+sDE, [dx2] where s. The matter density parameter $\Omega\_m=1-\Omega\_{\rm DE}-\Omega\_{r}$ is known by solving Eqs. ([dx1]) and ([dx2]) for $\Omega\_{\rm DE}$ and Ω*r*, respectively, with their given initial conditions. We shall prevent $\Omega\_{\rm DE}$ (or Ω*r*) from diverging in the interval $0\leq\Omega\_{\rm DE}\leq 1$, by imposing 1 + *s* > 0. The effective equation of state of the system, which is defined by $w\_{\rm eff} \equiv -1-2\dot{H}/(3H^2)$, reads weff= 3(1+sDE). [weff] We define the dark energy equation of state as $w\_{\rm DE} \equiv P\_{\rm DE}/\rho\_{\rm DE}$, where $\rho\_{\rm DE}$ and $P\_{\rm DE}$ are given by Eqs. ([rhode]) and ([Pde]) respectively. Then, it follows that wDE= - 3(1+sDE). [wde] For the dynamical system given by Eqs. ([dx1]) and ([dx2]), there are the following three fixed points: & &(a)  (DE,r)=(0,1) (radiation point), & &(b)  (DE,r)=(0,0) (matter point), & &(c)  (DE,r) =(1,0) (de Sitter point). At each fixed point, the equations of state ([weff]) and ([wde]) reduce to & &(a)   weff=13,wDE =-1-s,[casea] & &(b)   weff=0,wDE=-1-s,[caseb] & &(c)   weff=-1,wDE=-1.[casec] From Eqs. ([dx1]) and ([dx2]) we obtain the relation =(1+s) ( +4 ). Integrating this equation gives =C a4(1+s), [Omere] where *C* is an integration constant. The constant *C* can be fixed as $Ca\_0^{4(1+s)}=(\Omega\_{\rm DE}/\Omega\_{r}^{1+s})\_0$, where the lower subscript “0” represents the today’s value. Since the ratio between Ω*m* and Ω*r* is given by Ω*m*/Ω*r* = (*a*/*a*0) (Ω*m*/Ω*r*)0, elimination of the scale factor *a* from Eq. ([Omere]) leads to (1-DE-r)4(1+s) =( )0 ( )04(1+s), which corresponds to the trajectory of solutions in the $(\Omega\_{\rm DE}, \Omega\_{r})$ plane. For given values of $\Omega\_{{\rm DE}0}$ and Ω*r*0, the cosmological trajectory is fixed. In the radiation-dominated epoch (Ω*r* ≃ 1), the relation ([Omere]) shows that, for *s* >  − 1, the dark energy density parameter grows as $\Omega\_{\rm DE} \propto a^{4(1+s)}$. In fact, this is consistent with the limits Ω*r* → 1 and $\Omega\_{\rm DE} \ll 1$ in Eq. ([dx1]), i.e., $\Omega\_{\rm DE}' \simeq 4(1+s) \Omega\_{\rm DE}$. Since $\Omega\_{\rm DE} \propto y \propto \phi^{2p\_2}/H^2$, the evolution of *ϕ* during the radiation era (*H* ∝ *t*− 1 and *a* ∝ *t*1/2) is given by *ϕ* ∝ *a*2*s*/*p*2. In summary, around the radiation point (a), the dark energy density parameter and the field *ϕ* evolve as DE t2(p+p2)/p, t1/p. [Omedephi] In the matter-dominated epoch (Ω*m* ≃ 1) the radiation density parameter Ω*r* decreases as Ω*r* ∝ *a*− 1, so Eq. ([Omere]) implies that the dark energy density parameter evolves as $\Omega\_{\rm DE} \propto a^{3(1+s)}$. This relation can be also confirmed by taking the limits Ω*r* ≪ 1 and $\Omega\_{\rm DE} \ll 1$ in Eq. ([dx1]). On using the properties *H* ∝ *t*− 1 and *a* ∝ *t*2/3, it follows that the time dependence of $\Omega\_{\rm DE}$ and *ϕ* around the matter point (b) is exactly the same as Eq. ([Omedephi]). At the de Sitter point (c), both *H* and *ϕ* are constants. Provided that this fixed point is stable, the solutions with different initial conditions finally converge to the de Sitter attractor. From Eq. ([Omedephi]) we find that, in the limit *p* ≫ 1, the field *ϕ* stays nearly constant during the radiation and matter eras, so the cosmological dynamics should be close to that of the ΛCDM model at the background model. In fact, we have that $w\_{\rm DE} \to -1$ in Eqs. ([casea]) and ([caseb]) by taking the limit *p* → ∞. To discuss the stability of the fixed points (a), (b), (c), we consider small perturbations $\delta \Omega\_{\rm DE}$ and *δ*Ω*r* around them. These homogeneous perturbations obey $$\begin{aligned} \left( \begin{array}{c} \delta \Omega\_{\rm DE}' \\ \delta \Omega\_{r}' \end{array} \right) = {\cal M} \left( \begin{array}{c} \delta \Omega\_{\rm DE} \\ \delta \Omega\_{r} \end{array} \right)\,,\qquad {\cal M}=\left( \begin{array}{cc} \cfrac{\partial f\_1}{\partial \Omega\_{\rm DE}}& \cfrac{\partial f\_1}{\partial \Omega\_{r}}\\ \cfrac{\partial f\_2}{\partial \Omega\_{\rm DE}}& \cfrac{\partial f\_2}{\partial \Omega\_{r}} \end{array} \right)\,, \label{uvdif}\end{aligned}$$ where *f*1 and *f*2 are the functions on the r.h.s. of Eqs. ([dx1]) and ([dx2]) respectively, and the components of the matrix ${\cal M}$ should be evaluated at the fixed points. The eigenvalues of the 2 × 2 matrix ${\cal M}$ for the fixed points (a), (b), (c) are given, respectively, by & &(a)  1=4(1+s),2=1, & &(b)  1=3(1+s),2=-1, & &(c)  1=-3, 2=-4. If *s* >  − 1, then the radiation fixed point (a) is unstable due to the positivity of *μ*1 and *μ*2. In this case the matter point (b) corresponds to a saddle. The de Sitter solution (c) is always stable because the two eigenvalues are negative. Thus, the cosmological trajectory is characterized by the sequence of the fixed points: (a) → (b) → (c). The dark energy equation of state evolves from $w\_{\rm DE}=-1-4s/3$ (radiation era) to $w\_{\rm DE}=-1-s$ (matter era) and then it finally approaches the value $w\_{\rm DE}=-1$ at the de Sitter attractor. The same cosmological evolution is realized for the tracker solution in extended scalar Galileon theories. In this case the second time derivative $\ddot{\phi}$ is present in the background equations of motion, so there are initial conditions where the solutions are not on the tracker. In generalized vector Galileon theories, the absence of the $\ddot{\phi}$ term in Eqs. ([be1])-([be3]) does not allow the deviation from the tracker solution explained above. In Fig. [fig1] we show an example of the phase-map portrait of the dynamical autonomous system for *s* = 1. One can see that the de Sitter fixed point (c) is an attractor. All the trajectories with arbitrary initial conditions end there. The red line denotes the cosmological trajectory following the sequence: (a) → (b) → (c). We would like to remind that the reason why the zero component of the vector field is not part of the dynamical autonomous system comes from the fact that its equation of motion is an algebraic equation that can be used to determine it fully by *H*. This will be always the case as long as Eq. ([be3d]) does not contain any time derivative applied on *ϕ*. Therefore, the dynamical system is reduced by one dimension. This on the other hand will have as a consequence the known general result that gravity with auxiliary fields gives rise to a modified matter coupling. In the left panel of Fig. [fig2], we plot the evolution of $w\_{\rm DE}$ versus 1 + *z* (where *z* ≡ *a*0/*a* − 1 is the redshift) for three different values of *s*. We identify the present epoch (*z* = 0) as $\Omega\_{\rm DE}=0.68$ and Ω*r* = 1 × 10− 4. As estimated above, the dark energy equation of state for *s* = 1 (case (i) in Fig. [fig2]) evolves as $w\_{\rm DE}=-7/3$ (radiation era)  →  $w\_{\rm DE}=-2$ (matter era)  →  $w\_{\rm DE}=-1$ (de Sitter era). This behavior of $w\_{\rm DE}$ is the same as that of the tracker solution found for scalar Galileons. Unfortunately, this case is in tension with the combined observational constraints of SNIa, CMB, and BAO due to the large deviation of $w\_{\rm DE}$ from  − 1 before the onset of cosmic acceleration. For smaller ∣*s*∣, the dark energy equation of state in the radiation and matter eras tends to be closer to the value $w\_{\rm DE}=-1$. In the cases (ii) and (iii) shown in the left panel of Fig. [fig2], the values of $w\_{\rm DE}$ during the matter-dominated epoch are given, respectively, by $w\_{\rm DE}=-1.5$ and $w\_{\rm DE}=-1.2$. The likelihood analysis based on the SNIa, CMB, and BAO data in extended scalar Galileon theories has given the bound 0 ≤ *s* ≤ 0.36 (95 %CL) for tracker solutions, which can be also applied to the present model. Hence the cases (i) and (ii) are in tension with the data, but the case (iii) is observationally allowed. In the right panel of Fig. [fig2], we show the evolution of $w\_{\rm eff}$ and the density parameters $\Omega\_{\rm DE}$, Ω*m*, Ω*r* for *s* = 1/5. As estimated analytically, the solution repels away from the unstable radiation point (a) (with Ω*r* = 1, $w\_{\rm eff}=1/3$) and temporally dwells in the saddle matter point (b) (with Ω*m* = 1, $w\_{\rm eff}=0$). Finally, the solution approaches the stable de Sitter attractor (c) characterized by $\Omega\_{\rm DE}=1$, $w\_{\rm eff}=-1$. No-ghost and stability conditions --------------------------------- Let us study the no-ghost and stability conditions of tensor, vector, and scalar perturbations for the theories given by the functions ([G2345]) with the powers ([p345]). In what follows, unless otherwise stated, we shall discuss the case in which *β*4 and *β*5 are non-zero[2](#fn2). From Eqs. ([qT]) and ([cT]) we obtain qT &=& Mpl2, cT2 &=& 1+ (2p+2p2-1)(p+p2DE) [(1-DE)-p2(p+p2) DE(1-2p25)], where T && 2p+p2 +3r p225(2p+2p2-1) & & +DEp2. Taking the limit $\Omega\_{\rm DE} \to 0$ in the radiation and matter dominated epochs, we have $q\_T \to M\_{\rm pl}^2$ and *c**T*2 → 1, respectively. At the de Sitter point (c) characterized by $\Omega\_{\rm DE}=1$ and Ω*r* = 0, the conditions for avoiding the tensor ghost and instabilities are given by ( qT )dS &=& -p2 (p+p2) Mpl2>0, [qtdscon] ( cT2 )dS &=& 1-2p2 5>0. [ctdscon] For the theories with *β*5 = 0 and *p*2(*p* + *p*2) > 0, the conditions ([qtdscon]) and ([ctdscon]) translate to *β* < 0 and 1 − 12*p*2*β*4 > 0, respectively. When *β*5 = 0, the condition for avoiding the tensor ghost at the de Sitter point requires that *G*2 < 0, in which case the constant *b*2 and the quantity *y* in Eq. ([ydef]) are negative. This is consistent with the condition *β* < 0 in the sense that the dark energy density parameter ([OmegaDE]) remains positive. In this case the term *G*4 appearing in Eq. ([ctds]) is given by $G\_4=-M\_{\rm pl}^2p\_2(p+p\_2)(1-12p\_2\beta\_4)/(2\beta)$, which is positive under the two conditions ([qtdscon]) and ([ctdscon]) (with *β*5 = 0). For vector perturbations, the quantities ([qv]) and ([cv]) reduce, respectively, to qV &=& 1-, [qVex] cV2 &=& 1+, [cVex] where u. The quantity *q**V* contains the time-dependent factors $\Omega\_{\rm DE}$ and *u*. From Eq. ([Omedephi]) it follows that $\Omega\_{\rm DE}$ grows faster than *u*2 for p+p2>1. [pp2con] Under this condition, the term containing $\Omega\_{\rm DE}$ in Eq. ([qVex]) tends to be negligible relative to the *u*2*β* term as we go back to the past. In other words, *q**V* is close to 1 in the radiation and deep matter eras. At the de Sitter fixed point, the absence of the vector ghost requires that ( qV )dS = 1->0. [qVexds] In the following we shall focus on the case in which the powers *p* and *p*2 satisfy the condition ([pp2con]). Then, *c**V*2 approaches 1 in the asymptotic past. At the de Sitter point, the condition for avoiding the instability of vector perturbations reads ( cV2 )dS = 1- +>0. [cVexds] Under the no-ghost condition ([qtdscon]) of the tensor mode, the second term on the r.h.s. of Eq. ([cVexds]) is positive. If *d*2*β*5(*p* + *p*2)/*β* ≥ 0 and $(q\_V)\_{\rm dS}>0$, then the vector propagation speed is super-luminal. For scalar perturbations, the quantity ([qs]) reduces to qS=22-p2Mpl2(1+p2)u2p2 p2 ( p+ p2DE ) b2, [qsex] so that the scalar ghost is absent for p2 ( p+ p2DE ) b2 >0. We need to caution that, even when *q**S* > 0, there are cases in which the term *w*1 − 2*w*2 appearing in the denominator of *Q**S* crosses 0. Due to the divergence of *Q**S*, we shall exclude such cases in the following discussion[3](#fn3). One can express *w*1 − 2*w*2 in the form w1-2w2=-2HMpl2 ( 1-DEwc ),wc 1+p+p2-. For $0<\Omega\_{\rm DE}<1$, the term *w*1 − 2*w*2 remains negative for *w**c* < 1, i.e., >1. [divcon] To satisfy this inequality for the theories with *β*5 = 0 and *p*2(*p* + 1) > 0, the necessary condition to be required is *β* < 0. The general expression of *c**S* computed from Eq. ([cs]) is cumbersome, but we can obtain its analytic expression in radiation and matter eras by taking the limit $\Omega\_{\rm DE} \to 0$. This leads to the following condition ( cS2 )early &=& 6p2[64(2p+2p2-1)-25(3p+2p2)-1] & &+ 6p2[64(2p+2p2-1)-25(3p+2p2)-1]>0. [csex1] At the de Sitter fixed point, the scalar instability is absent for ( cS2 )dS= 6 p2 2 (2 p2 5-1) (qV)dSu2>0, [csdeex] where && p22 && +. For the computation of the dimensionless field $u=\phi/M\_{\rm pl}$ appearing in *q**V*, *c**V*2, *q**S*, *c**S*2, we introduce the following dimensionless quantity ( )p, where *λ* is constant and the mass *m* is related to the coefficient *b*2 in *G*2, as G2=-m2Mpl2(1-p2)Xp2. On using Eq. ([OmegaDE]) with the definition of *y* given in Eq. ([ydef]), we can express *u* in the form u=. We are interested in the case where the today’s values of *ϕ* and *H* are of the orders of $M\_{\rm pl}$ and *m*, respectively, so that $\lambda={\cal O}(1)$. Such a very light mass of the vector field is only relevant to the physics associated with the late-time acceleration. In local regions of the Universe the effect of the mass term is negligible, but the derivative interactions like *b*4*X*2 in *G*4 play crucial roles to screen the propagation of the fifth force mediated by the vector field. Models with *p*2 = 1 and *G*5 = 0 --------------------------------- To be more concrete, we shall first study the models with *G*5 = 0, *p*2 = 1, and *p* > 0. In this case, the parameter *β* defined in Eq. ([betadef]) reduces to =6(2p+1)4-p-1. To avoid the divergence of *Q**S* associated with the sign change of *w*1 − 2*w*2, we require the condition ([divcon]). In the present case, this translates to 0<4<. [be4con] The upper bound of *β*4 comes from the condition *β* < 0. The conditions ([qtdscon]) and ([ctdscon]) for tensor perturbations at the de Sitter point reduce, respectively, to ( qT )dS &=& Mpl2 -1>0, [qtten] ( cT2 )dS &=& 1-124>0. [ctten] The condition ([qtten]) is automatically satisfied under the inequality ([be4con]). On the other hand the condition ([ctten]) gives the upper bound on *β*4 tighter than Eq. ([be4con]), so the allowed range of *β*4 shrinks to 0<4<. [becon2] In Fig. [fig3] we plot the evolution of *q**T* and *c**T*2 for *β*4 = 0.01 and *p* = 5. As estimated in Sec. [ghosec], we have $q\_T \simeq M\_{\rm pl}^2$ and *c**T*2 ≃ 1 during the radiation and matter eras. Finally, we see that *q**T* and *c**T*2 approach the asymptotic values ([qtten]) and ([ctten]), respectively. Under the condition ([becon2]), $(c\_{T}^2)\_{\rm dS}$ is smaller than 1. If the today’s value of *c**T*2 is smaller than 1, the tensor propagation speed squared is constrained to be 1 − *c**T* < 2 × 10− 15 from the gravitational Cherenkov radiation. On using the value ([ctten]) at the de Sitter fixed point, we obtain the bound $\beta\_4 < {\cal O}(10^{-16})$. In this case, we have numerically confirmed that the Cherenkov-radiation constraint of *c**T* is satisfied today. For vector perturbations, the conditions ([qVexds]) and ([cVexds]) can be expressed, respectively, as ( qV )dS &=& 1- (2p+1)u2>0, ( cV2 )dS &=& 1+ (2p+1)[2c2(p+1)(+p+1)-(2p+1)u2]>0. Since *β* + *p* + 1 = 6(2*p* + 1)*β*4, we have that $\left( q\_V \right)\_{\rm dS} \to 1$ and $\left( c\_V^2 \right)\_{\rm dS} \to 1$ in the limit *β*4 → 0. As we see in Fig. [fig3], both *q**V* and *c**V*2 are very close to 1 in the early cosmological epoch. At late times *q**V* and *c**V*2 start to deviate from 1, but both of them are still close to 1 for *β*4 ≪ 1 and $|c\_2| \lesssim {\cal O}(1)$. The no-ghost condition for scalar perturbations corresponds to qS=2m2Mpl4 (p+DE ) u2>0, [qsexa] which is satisfied for 4<. [becon3] If *p* > 1/2, then the condition ([becon3]) gives a tighter upper bound on *β*4 than that constrained from Eq. ([becon2]). From Eq. ([csex1]) the conditions for avoiding the Laplacian instability during the radiation and matter eras read ( cS2 )r &=& 3p2[1-6(2p+1)4]>0,[csr] ( cS2 )m &=& 6p2[1-6(2p+1)4]>0,[csm] respectively. The condition ([csdeex]) at the de Sitter point reduces to ( cS2 )dS = 3 >0. [csdef] In the limit that *β*4 → 0, we have the following asymptotic values ( cS2 )r,( cS2 )m,( cS2 )dS, [cSesti] which are positive. Hence the stability of scalar perturbations is always ensured for *β*4 close to 0. The evolution of *Q**S*/*a*3 and *c**S*2 plotted in Fig. [fig3] corresponds to the model parameters *β*4 = 0.01 and *p* = 5. In this case, the constant *β*4 is in the range consistent with both Eqs. ([becon2]) and ([becon3]). In the radiation and matter eras the parameter *q**S*, which is positive, grows as *q**S* ∝ *u*2 ∝ *t*2/*p*. Since both *q**S*/*ϕ*2 and *H*2/(*w*1 − 2*w*2)2 are constants in this regime, *Q**S*/*a*3 stays constant for *p*2 = 1. The quantity *q**S* asymptotically approaches the value ([qsexa]) at the de Sitter point characterized by $\Omega\_{\rm DE}=1$ and constant *u*. Provided that $\lambda={\cal O}(1)$, the dimensionless field *u* is of the order of 1 at the de Sitter attractor, so the mass scale *m* is of the order of the today’s Hubble parameter *H*0. The analytic estimations of *q**S* as well as *Q**S*/*a*3 show good agreement with our numerical results. We also confirm that the term *w*1 − 2*w*2 always remains negative in the numerical analysis of Fig. [fig3]. If we choose the values of *β*4 which are outside the range of Eqs. ([be4con]) (e.g., negative *β*4), we find that *Q**S*/*a*3 exhibits a divergence due to the sign change of *w*1 − 2*w*2 around the transient epoch to the cosmic acceleration. For the cosmological viability of the model with *β*5 = 0 and *p*2 = 1, we require the conditions ([becon2]) and ([becon3]). For *β*4 = 0.01 and *p* = 5 the analytic estimations ([csr]) and ([csm]) give (*c**S*2)*r* ≃ 0.354 and (*c**S*2)*m* ≃ 0.288, respectively, which are in good agreement with the numerical values of *c**S*2 in Fig. [fig3]. Substituting the numerical value *u* ≃ 1.172 at the de Sitter point into Eq. ([csdef]) with *c*2 =  − 1, we obtain $(c\_S^2)\_{\rm dS}=0.109$. In Fig. [fig3], we see that *c**S*2 is always positive during the cosmic expansion history, so there is no Laplacian instability of scalar perturbations. While the numerical evolution of Fig. [fig3] corresponds to the case (*β*4 = 0.01) in which the Cherenkov-radiation constraint of *c**T*2 is not imposed, we have also obtained the numerical evolution of *q**T*, *q**V*, *q**S* and *c**T*2, *c**V*2, *c**S*2 under the bound $0<\beta\_4 \lesssim 10^{-16}$. In such cases the three no-ghost conditions are trivially satisfied, with *c**T*2 and *c**V*2 very close to 1. In addition, *c**S*2 is close to the positive values given by Eqs. ([cSesti]), so there is no instability of scalar perturbations. Models with general *p*2 and *G*5 realizing *c**T*2 > 1 ------------------------------------------------------- In the models with *p*2 = 1 and *G*5 = 0, the tensor propagation speed is sub-luminal under the condition ([becon2]). In this case, the constraint from Cherenkov radiation puts a tight bound on *β*4. Let us study the possibility for realizing *c**T*2 > 1 to escape the Cherenkov-radiation constraint for general *p*2 and *G*5, while avoiding the divergence of *Q**S*. Provided that p25<1/2, [p2beta5] the tensor propagation speed squared ([ctdscon]) at the de Sitter solution is larger than or equal to 1 for 524. [be5con1] Under Eq. ([p2beta5]) with negative *β*, the condition ([divcon]) for avoiding the divergence of *Q**S* reads 6p2(2p+2p2-1)<4< 6p2(2p+2p2-1), [be5con2] where we have assumed *p*2(2*p* + 2*p*2 − 1) > 0. When *p*2 = 1 the first inequality of ([be5con2]) reads *β*5 < *β*4(2*p* + 1)/(*p* + 1), so this is not compatible with the condition ([be5con1]) for *p* > 0. Thus, for *p*2 = 1, the sub-luminal property of *c**T*2 found for *G*5 = 0 in Sec. [G50sec] also holds for the theories with *G*5 ≠ 0 and *p* > 0. For *p*2 different from 1, it is possible to simultaneously fulfill the conditions ([p2beta5])-([be5con2]). If *p*2 < 1, for example, the simple case with *β*4 = *β*5 = 0 satisfies such three conditions. The allowed parameter space also exists for non-zero values of *β*4 and *β*5. In Fig. [fig4] we plot the evolution of *q**T*, *q**V*, *Q**S*/*a*3 and *c**T*2, *c**V*2, *c**S*2 for *p*2 = 1/2, *p* = 5/2, *β*4 = 0.01, and *β*5 = 0.05, under which the conditions ([p2beta5])-([be5con2]) are satisfied. The value of $\lambda=(\phi/M\_{\rm pl})^p(H/m)$ is chosen to be 1 with the negative function $G\_2=-m^2M\_{\rm pl}X^{1/2}$. In the left panel of Fig. [fig4] we find that the no-ghost conditions are satisfied during the cosmic expansion history. During the radiation and matter eras the quantity *q**S* evolves as *q**S* ∝ *ϕ*2*p*2, so that *Q**S*/*a*3 decreases in proportion to *ϕ*− 1. After the end of the matter era, *Q**S*/*a*3 approaches the value at the de Sitter point without any divergence. In Fig. [fig4] we find that *c**T*2 is larger than 1 today, so the bound of Cherenkov radiation is not applied to this case. Since both *c**V*2 and *c**S*2 are positive for the model parameters used in Fig. [fig4], there are no Laplacian instabilities of vector and scalar perturbations as well. The case shown in Fig. [fig4] is one example for realizing *c**T*2 > 1 without the divergence of *Q**S*. There are also other cases in which ghosts and Laplacian instabilities are absent with *c**T*2 > 1. We leave the detailed analysis about the viable parameter space consistent with observations and experiments for a future work. Conclusions =========== We have studied the cosmology in generalized Proca theories with three propagating degrees of freedom (besides two tensor gravitational degrees of freedom) in the presence of a matter fluid. The temporal component *ϕ* of the vector field can drive the cosmic acceleration due to the existence of its derivative interactions. By construction of the action ([Lag]), there is a branch of the background solution where the field *ϕ* depends on the Hubble expansion rate *H* alone. This allows the existence of de Sitter solutions characterized by constant *ϕ* and *H*. Expanding the action ([Lag]) up to second order in cosmological perturbations on the flat FLRW background, we have derived general conditions for the absence of ghosts and Laplacian instabilities in the small-scale limit. Unlike scalar-tensor theories, the propagation of vector modes is non-trivial in the sense that the intrinsic vector Lagrangians proportional to the constants *c*2 and *d*2 affect the no-ghost condition and the propagation speed of vector perturbations. In total, there are six no-ghost and stability conditions associated with tensor, vector, and scalar perturbations. Applying our general results to de Sitter solutions, we find that, in the limit *G*5 → 0, the function *G*2 needs to be negative to avoid the tensor ghost, but this does not necessarily imply the appearance of tachyons. In fact, defining vector perturbations with the background tetrad basis, the mass squared of the vector mode is positive on the de Sitter solution. In the early cosmological epoch the tensor propagation speed squared is close to 1, but this is not generally the case for scalar and vector perturbations. We have also proposed a class of concrete models in which the field *ϕ* grows as *ϕ**p* ∝ *H*− 1 (*p* > 0) with the decrease of *H*. The functions *G*2, 3, 4, 5 realizing this solution is given by Eq. ([G2345]) with the powers satisfying ([p345]), which accommodate vector Galileons as a specific case (*p*2 = 1 and *p* = 1). We have shown the existence of a de Sitter fixed point which is always stable. Before reaching the de Sitter attractor, the dark energy equation of state evolves as $w\_{\rm DE}=-1-4s/3$ (radiation era)  →  $w\_{\rm DE}=-1-s$ (matter era), where *s* = *p*2/*p*. For the consistency with the observations of SNIa, CMB, and BAO, the parameter *s* is constrained to be 0 < *s* ≤ 0.36. For the concrete model proposed in Sec. [covasec], we have discussed the six no-ghost and stability conditions to search for the theoretically allowed parameter space. For avoiding the divergent behavior of *Q**S* during the transient epoch to the cosmic acceleration, there is an additional condition to be imposed. For the theories with *G*5 = 0 and *p*2 = 1, the parameter *β*4 is constrained as Eqs. ([becon2]) and ([becon3]), in which case the tensor propagation speed *c**T* is sub-luminal. On using the experimental constraint of *c**T* from the Cherenkov radiation, the upper bound of *β*4 is tightly constrained ($\beta\_4 \lesssim 10^{-16}$). For more general theories characterized by *G*5 ≠ 0 and/or *p*2 ≠ 1, we find that there exists the parameter space in which all the theoretically consistent conditions are satisfied with *c**T*2 ≥ 1. We have thus shown that the generalized Proca theories can give rise to the dark energy dynamics compatible with theoretically consistent conditions. It will be of interest to place observational and experimental constraints on the parameter space further by taking into account the data of the growth rate of matter perturbations as well as the bound of the tensor propagation speed. Finally, we would like to point out that there might be interesting consequences in the presence of the sixth-order interactions L6 = *G*6(*X*)*L**μ**ν**α**β*∇*μ**A**ν*∇*α**A**β* + *G*6, *X*(*X*)*F̃**μ**ν**F̃**α**β*∇*α**A**μ*∇*β**A**ν*/2, where *L**μ**ν**α**β* is the double dual Riemann tensor and *F̃**μ**ν* is the dual of the strength tensor. Since the background vector field considered here has only the zero component non-vanishing, these sixth-order interactions do not contribute at the background level, and similarly the tensor perturbations are not modified by them either, but they do have non-trivial effects on the vector and scalar perturbations. The implications of these sixth-order interactions will be investigated in a future work. Acknowledgements ================ We would like to thank J. Beltran Jimenez, R. Brandenberger, M. Motta, J. Yokoyama, and G. Zhao for very useful discussions. LH acknowledges financial support from Dr. Max Rössler, the Walter Haefner Foundation and the ETH Zurich Foundation. RK is supported by the Grant-in-Aid for Research Activity Start-up of the JSPS No.15H06635. The work of SM is supported in part by Grant-in-Aid for Scientific Research 24540256 and World Premier International Research Center Initiative (WPI), Ministry of Education, Culture, Sports, Science and Technology (MEXT), Japan. ST is supported by the Grant-in-Aid for Scientific Research Fund of the JSPS No.24540286, MEXT KAKENHI Grant-in-Aid for Scientific Research on Innovative Areas “Cosmic Acceleration” (No.15H05890), and the cooperation program between Tokyo University of Science and CSIC. YZ is supported by the Strategic Priority Research Program ``The Emergence of Cosmological Structures" of the Chinese Academy of Sciences, Grant No. XDB09000000. 99 A. G. Riess *et al.* [Supernova Search Team Collaboration], Astron. J.  **116**, 1009 (1998) [astro-ph/9805201]; S. Perlmutter *et al.* [Supernova Cosmology Project Collaboration], Astrophys. J.  **517**, 565 (1999) [astro-ph/9812133]. D. N. Spergel *et al.* [WMAP Collaboration], Astrophys. J. Suppl.  **148**, 175 (2003) [astro-ph/0302209]; P. A. R. Ade *et al.* [Planck Collaboration], Astron. Astrophys.  **571**, A16 (2014) [arXiv:1303.5076 [astro-ph.CO]]. D. J. Eisenstein *et al.* [SDSS Collaboration], Astrophys. J.  **633**, 560 (2005) [astro-ph/0501171]. E. J. Copeland, M. Sami and S. Tsujikawa, Int. J. Mod. Phys. D **15**, 1753 (2006) [hep-th/0603057]; A. Silvestri and M. Trodden, Rept. Prog. Phys.  **72**, 096901 (2009) [arXiv:0904.0024 [astro-ph.CO]]; S. Tsujikawa, arXiv:1004.1493 [astro-ph.CO]; A. Joyce, B. Jain, J. Khoury and M. Trodden, Phys. Rept.  **568**, 1 (2015) [arXiv:1407.0059 [astro-ph.CO]]; P. Bull *et al.*, arXiv:1512.05356 [astro-ph.CO]. S. Weinberg, Rev. Mod. Phys.  **61**, 1 (1989). T. P. Sotiriou and V. Faraoni, Rev. Mod. Phys.  **82**, 451 (2010) [arXiv:0805.1726 [gr-qc]]; A. De Felice and S. Tsujikawa, Living Rev. Rel.  **13**, 3 (2010) [arXiv:1002.4928 [gr-qc]]; S. Tsujikawa, Lect. Notes Phys.  **800**, 99 (2010) [arXiv:1101.0191 [gr-qc]]; T. Clifton, P. G. Ferreira, A. Padilla and C. Skordis, Phys. Rept.  **513**, 1 (2012) [arXiv:1106.2476 [astro-ph.CO]]. G. R. Dvali, G. Gabadadze and M. Porrati, Phys. Lett. B **485**, 208 (2000) [arXiv:hep-th/0005016]; G. R. Dvali and G. Gabadadze, Phys. Rev.  D **63**, 065007 (2001) [arXiv:hep-th/0008054]. A. I. Vainshtein, Phys. Lett.  B **39** (1972) 393. A. Nicolis, R. Rattazzi and E. Trincherini, Phys. Rev.  D **79**, 064036 (2009) [arXiv:0811.2197 [hep-th]]. N. Chow and J. Khoury, Phys. Rev.  D **80**, 024037 (2009) [arXiv:0905.1325 [hep-th]]. J. Khoury and M. Wyman, Phys. Rev.  **D80**, 064023 (2009). [arXiv:0903.1292 [astro-ph.CO]]; T. Kobayashi, H. Tashiro and D. Suzuki, Phys. Rev.  D **81**, 063513 (2010) [arXiv:0912.4641 [astro-ph.CO]]; T. Kobayashi, Phys. Rev. D **81**, 103533 (2010) [arXiv:1003.3281 [astro-ph.CO]]; M. Wyman and J. Khoury, Phys. Rev.  **D82**, 044032 (2010) [arXiv:1004.2046 [astro-ph.CO]]. C. Deffayet, G. Esposito-Farese and A. Vikman, Phys. Rev.  D **79**, 084003 (2009) [arXiv:0901.1314 [hep-th]]; C. Deffayet, S. Deser and G. Esposito-Farese, Phys. Rev.  D **80**, 064015 (2009) [arXiv:0906.1967 [gr-qc]]. G. W. Horndeski, Int. J. Theor. Phys. **10**, 363-384 (1974). C. Deffayet, X. Gao, D. A. Steer and G. Zahariade, Phys. Rev. D **84**, 064039 (2011) [arXiv:1103.3260 [hep-th]]; T. Kobayashi, M. Yamaguchi and J. ’i. Yokoyama, Prog. Theor. Phys.  **126**, 511 (2011) [arXiv:1105.5723 [hep-th]]; C. Charmousis, E. J. Copeland, A. Padilla and P. M. Saffin, Phys. Rev. Lett.  **108**, 051101 (2012) [arXiv:1106.2000 [hep-th]]. F. P. Silva and K. Koyama, Phys. Rev.  D **80**, 121301 (2009) [arXiv:0909.4538 [astro-ph.CO]]; A. De Felice and S. Tsujikawa, JCAP **1007**, 024 (2010) [arXiv:1005.0868 [astro-ph.CO]]. R. Gannouji and M. Sami, Phys. Rev.  D **82**, 024011 (2010) [arXiv:1004.2808 [gr-qc]]; A. Ali, R. Gannouji and M. Sami, Phys. Rev.  D **82**, 103015 (2010) [arXiv:1008.1588 [astro-ph.CO]]; A. De Felice, R. Kase and S. Tsujikawa, Phys. Rev.  D **83**, 043515 (2011) [arXiv:1011.6132 [astro-ph.CO]]; K. Hirano, Z. Komiya and H. Shirai, Prog. Theor. Phys.  **127**, 1041 (2012) [arXiv:1103.6133 [astro-ph.CO]]; P. Brax, C. Burrage and A. C. Davis, JCAP **1109**, 020 (2011) [arXiv:1106.1573 [hep-ph]]. A. De Felice and S. Tsujikawa, Phys. Rev. Lett.  **105**, 111301 (2010) [arXiv:1007.2700 [astro-ph.CO]]; A. De Felice and S. Tsujikawa, Phys. Rev. D **84**, 124029 (2011) [arXiv:1008.4236 [hep-th]]. P. Creminelli, A. Nicolis and E. Trincherini, JCAP **1011**, 021 (2010) [arXiv:1007.0027 [hep-th]]; L. Levasseur, R. Brandenberger and A. C. Davis, Phys. Rev. D **84**, 103512 (2011) [arXiv:1105.5649 [astro-ph.CO]]. T. Kobayashi, M. Yamaguchi and J. Yokoyama, Phys. Rev. Lett.  **105**, 231302 (2010) [arXiv:1008.0603 [hep-th]]; C. Burrage, C. de Rham, D. Seery and A. J. Tolley, JCAP **1101**, 014 (2011) [arXiv:1009.2497 [hep-th]]; P. Creminelli, G. D’Amico, M. Musso, J. Norena and E. Trincherini, JCAP **1102**, 006 (2011) [arXiv:1011.3004 [hep-th]]; K. Hinterbichler and J. Khoury, JCAP **1204**, 023 (2012) [arXiv:1106.1428 [hep-th]]. S. Mizuno and K. Koyama, Phys. Rev.  **D82**, 103518 (2010) [arXiv:1009.0677 [hep-th]]; A. De Felice and S. Tsujikawa, JCAP **1104**, 029 (2011) [arXiv:1103.1172 [astro-ph.CO]]; T. Kobayashi, M. Yamaguchi and J. Yokoyama, Phys. Rev. D **83**, 103524 (2011) [arXiv:1103.1740 [hep-th]]; S. Renaux-Petel, Class. Quant. Grav.  **28**, 182001 (2011) [arXiv:1105.6366 [astro-ph.CO]]; X. Gao, JCAP **1110**, 021 (2011) [arXiv:1106.0292 [astro-ph.CO]]. K. Hinterbichler, A. Nicolis and M. Porrati, JHEP **0909**, 089 (2009) [arXiv:0905.2359 [hep-th]]; G. L. Goon, K. Hinterbichler and M. Trodden, Phys. Rev. D **83**, 085015 (2011) [arXiv:1008.4580 [hep-th]]; P. de Fromont, C. de Rham, L. Heisenberg and A. Matas, JHEP **1307**, 067 (2013) [arXiv:1303.0274 [hep-th]]; C. Burrage, C. de Rham, L. Heisenberg and A. J. Tolley, JCAP **1207**, 004 (2012) [arXiv:1111.5549 [hep-th]]. M. V. Ostrogradski, Mem. Acad. St. Petersbourg VI 4, **385** (1850). G. Goon, K. Hinterbichler and M. Trodden, [arXiv:1103.5745 [hep-th]]; G. Goon, K. Hinterbichler and M. Trodden, [arXiv:1103.6029 [hep-th]]; C. Burrage, C. de Rham and L. Heisenberg, JCAP **1105**, 025 (2011). [arXiv:1104.0155 [hep-th]]. C. de Rham and G. Gabadadze, Phys. Rev.  D **82**, 044020 (2010) [arXiv:1007.0443 [hep-th]]; C. de Rham, G. Gabadadze, L. Heisenberg and D. Pirtskhalava, Phys. Rev. D **83**, 103516 (2011) doi:10.1103/PhysRevD.83.103516 [arXiv:1010.1780 [hep-th]]; C. de Rham, G. Gabadadze and A. J. Tolley, Phys. Rev. Lett.  **106**, 231101 (2011) [arXiv:1011.1232 [hep-th]]. C. de Rham and L. Heisenberg, Phys. Rev. D **84**, 043503 (2011) [arXiv:1106.3312 [hep-th]]; L. Heisenberg, R. Kimura and K. Yamamoto, Phys. Rev. D **89**, 103008 (2014) [arXiv:1403.2049 [hep-th]]. C. Deffayet, S. Deser and G. Esposito-Farese, Phys. Rev. D **82**, 061501 (2010). [arXiv:1007.5278 [gr-qc]]. C. Deffayet, A. E. Gumrukcuoglu, S. Mukohyama and Y. Wang, JHEP **1404**, 082 (2014). [arXiv:1312.6690 [hep-th]]. G. W. Horndeski, J. Math. Phys.  **17**, 1980 (1976). J. D. Barrow, M. Thorsrud and K. Yamamoto, JHEP **1302**, 146 (2013) [arXiv:1211.5403 [gr-qc]]. J. B. Jimenez, R. Durrer, L. Heisenberg and M. Thorsrud, JCAP **1310**, 064 (2013) [arXiv:1308.1867 [hep-th]]. C. Deffayet, S. Mukohyama and V. Sivanesan, Phys. Rev. D **93**, no. 8, 085027 (2016) [arXiv:1601.01287 [hep-th]]. L. Heisenberg, JCAP **1405**, 015 (2014) [arXiv:1402.7026 [hep-th]]. G. Tasinato, JHEP **1404**, 067 (2014) [arXiv:1402.6450 [hep-th]]; G. Tasinato, Class. Quant. Grav.  **31**, 225004 (2014) [arXiv:1404.4883 [hep-th]]. G. Tasinato, K. Koyama and N. Khosravi, JCAP **1311**, 037 (2013) [arXiv:1307.0077 [hep-th]]. P. Fleury, J. P. B. Almeida, C. Pitrou and J. P. Uzan, JCAP **1411**, 043 (2014). [arXiv:1406.6254 [hep-th]]. M. Hull, K. Koyama and G. Tasinato, JHEP **1503**, 154 (2015) [arXiv:1408.6871 [hep-th]]; M. Hull, K. Koyama and G. Tasinato, arXiv:1510.07029 [hep-th]. W. Li, arXiv:1508.03247 [gr-qc]. J. B. Jimenez and T. S. Koivisto, Class. Quant. Grav.  **31** (2014) 135002 [arXiv:1402.1846 [gr-qc]]; J. B. Jimenez, L. Heisenberg and T. S. Koivisto, arXiv:1602.07287 [hep-th]. E. Allys, P. Peter and Y. Rodriguez, JCAP **1602**, no. 02, 004 (2016) [arXiv:1511.03101 [hep-th]]. J. Beltran Jimenez and L. Heisenberg, Phys. Lett. B **757**, 405 (2016) [arXiv:1602.03410 [hep-th]]. A. De Felice, L. Heisenberg, R. Kase, S. Tsujikawa, Y. l. Zhang and G. B. Zhao, arXiv:1602.00371 [gr-qc] (to appear in Physical Review D). J. M. Bardeen, Phys. Rev. D **22**, 1882 (1980). H. Kodama and M. Sasaki Prog. Theor. Phys. Suppl. **78**, 1 (1984). V. F. Mukhanov, H. A. Feldman and R. H. Brandenberger, Phys. Rept.  **215**, 203 (1992). B. F. Schutz and R. Sorkin, Annals Phys.  **107**, 1 (1977). A. De Felice, J. M. Gerard and T. Suyama, Phys. Rev. D **81**, 063527 (2010). A. De Felice and S. Mukohyama, JCAP **1604**, no. 04, 028 (2016) [arXiv:1512.04008 [hep-th]]. D. Giannakis and W. Hu, Phys. Rev. D **72**, 063502 (2005) [astro-ph/0501423]; F. Arroja and M. Sasaki, Phys. Rev. D **81**, 107301 (2010) [arXiv:1002.1376 [astro-ph.CO]]; A. De Felice, S. Mukohyama and S. Tsujikawa, Phys. Rev. D **82**, 023524 (2010) [arXiv:1006.0281 [astro-ph.CO]]. R. Kase and S. Tsujikawa, Phys. Rev. D **90**, 044073 (2014) [arXiv:1407.0794 [hep-th]]; R. Kase and S. Tsujikawa, Int. J. Mod. Phys. D **23**, no. 13, 1443008 (2015) [arXiv:1409.1984 [hep-th]]. A. De Felice and S. Tsujikawa, JCAP **1202**, 007 (2012) [arXiv:1110.3878 [gr-qc]]. S. Nesseris, A. De Felice and S. Tsujikawa, Phys. Rev. D **82**, 124054 (2010) [arXiv:1010.0407 [astro-ph.CO]]. A. De Felice and S. Tsujikawa, JCAP **1203**, 025 (2012) [arXiv:1112.1774 [astro-ph.CO]]. G. D. Moore and A. E. Nelson, JHEP **0109**, 023 (2001) [hep-ph/0106220]; R. Kimura and K. Yamamoto, JCAP **1207**, 050 (2012) [arXiv:1112.4284 [astro-ph.CO]]. --- 1. The fluid description ([Spf]) based on the matter perturbation *δ**ρ**M* is convenient in that the off-diagonal components of the matrix ${\bm K}$ vanish. This is not generally the case for the k-essence description of the perfect fluid, see e.g., Refs. .[↩](#fnref1) 2. We note that under the limits $G\_{4} \to M\_{\rm pl}^2/2$ and *G*5 → 0 in Eqs. ([qT]), ([cT]), ([qv]) and ([cv]), it follows that $q\_T \to M\_{\rm pl}^2$, *c**T*2 → 1, *q**V* → 1, and *c**V*2 → 1.[↩](#fnref2) 3. The divergence of *Q**S* does not necessarily imply theoretical inconsistencies. It may simply indicate an infinite weak coupling, depending on the nature of non-linear interactions. Nonetheless we do not consider this possibility just to be conservative.[↩](#fnref3) Cosmology in generalized Proca theories ======================================= We consider a massive vector field with derivative interactions that propagates only the 3 desired polarizations (besides two tensor polarizations from gravity) with second-order equations of motion in curved space-time. The cosmological implications of such generalized Proca theories are investigated for both the background and the linear perturbation by taking into account the Lagrangian up to quintic order. In the presence of a matter fluid with a temporal component of the vector field, we derive the background equations of motion and show the existence of de Sitter solutions relevant to the late-time cosmic acceleration. We also obtain conditions for the absence of ghosts and Laplacian instabilities of tensor, vector, and scalar perturbations in the small-scale limit. Our results are applied to concrete examples of the general functions in the theory, which encompass vector Galileons as a specific case. In such examples, we show that the de Sitter fixed point is always a stable attractor and study viable parameter spaces in which the no-ghost and stability conditions are satisfied during the cosmic expansion history. Introduction ============ The high-precision observations achieved by the measurements like supernovae Type Ia (SNIa), Cosmic Microwave Background (CMB), and Baryon Acoustic Oscillations (BAO) together with the two pillars of General Relativity (GR) and the cosmological principle have led to a notion of the standard cosmological model. According to this concordance model, about 70% and 25% of the today’s energy density of the Universe correspond to shadowy components dubbed dark energy and dark matter, respectively. In particular, dark energy has a repulsive force against gravity, which leads to the late-time cosmic acceleration. After its first discovery in 1998, there have been numerous attempts to pursue the origin of dark energy. The first promising explanatory attempt for dark energy consists of introducing a cosmological constant Λ that would cause an effective repulsive force between cosmological objects at large distances. A natural origin of the cosmological constant could be the vacuum energy density. Using the technique of particle physics, one can estimate the expected value for the vacuum energy density caused by fluctuating quantum fields. The result is the worst theoretical prediction in the history of physics. The discrepancy between the theoretically predicted value and the measured one amounts to 120 orders of magnitude. This constitutes the vacuum catastrophe prediction which remains one of the most challenging puzzles in physics. An alternative explanation for the late-time acceleration of the Universe could be accounted by introducing new dynamical degrees of freedom. This is achieved either by directly invoking new fluids in form of an energy-momentum tensor *T**μ**ν* with negative pressure or by modifying the geometrical part of Einstein field equations (see Refs.  for reviews). One hope in weakening gravity on cosmological scales is to tackle the cosmological constant problem. The implications are multifaceted. The same modification might account for the late-time speed-up of the cosmic expansion. This type of scenarios can naturally arise in higher-dimensional theories or massive gravity. Concerning the higher-dimensional frameworks, the Dvali-Gabadadze-Porrati (DGP) model is one of the most important large-scale modifications of gravity. The Universe is modeled by a confined three-brane embedded in a five-dimensional Minkowski bulk. An intrinsic Einstein-Hilbert term is introduced to recover four-dimensional gravity on small scales. On the other hand, the gravity is systematically weakened on cosmological scales due to the gravitational leakage to the extra dimension. From the effective four dimensional point of view, the graviton acquires a soft mass and hence carries five degrees of freedom. One of such degrees of freedom corresponds to the helicity-0 mode, which sources the existence of a self-accelerating solution. The non-linear interactions of the helicity-0 mode make the mode decouple from the gravitational dynamics, which is known as the Vainshtein mechanism. Motivated by the nice properties of the helicity-0 mode in the decoupling limit of the DGP model, more general “Galileon” interactions have been proposed on the Minkowski background. These interactions are invariant under internal Galilean and shift transformations. The Galilean symmetries together with the requirement of ghost absence restrict the effective Lagrangian to consist only of five derivative interactions. There were attempts to generalize the Minkowski Galileon theory to a curved background. The first natural pursuit was done by covariantizing directly the decoupling limit. A rigorous investigation revealed that the naive covariantization (replacing partial derivatives with covariant derivatives) would yield higher-order equations of motion and that unique non-minimal couplings with the curvature should be added to maintain the equations of motion up to second order. The generalization of this latter “covariant Galileon” led to the rediscovery of the Horndeski action as the most general scalar-tensor theories with second-order equations of motion in 4-dimensions. The application of covariant Galileons to cosmology witnessed a flurry of investigations concerning self-accelerating de Sitter solutions, late-time cosmology and its observational implications, inflation, super-luminalities, and so on. Even if the ghost-like Ostrogradski instability can be avoided for covariant Galileons, the Galilean symmetry is explicitly broken when promoted to the curved space-time. However, there have been also successful generalizations to maximally symmetric backgrounds with a generalized Galilean symmetry. It is worth mentioning that there exists a parallel to massive gravity, namely Galileon interactions arise naturally in the decoupling limit of massive gravity. Furthermore, the covariantization of the decoupling-limit interactions gives rise to a specific sub-class of Horndeski theories. A natural follow-up was to apply the same construction of Galileon-like interactions to arbitrary *p*-forms. While even *p*-form Galileons and multi odd *p*-form Galileons are easy to construct, very soon it was realized that one cannot construct Galilean interactions for a Lorentz- and gauge-invariant, single spin-1 (i.e., 1-form) field in any dimensions (see also Refs. ). The formalism developed for the proof of the no-go theorem was recently extended towards categorization of all possible general odd *p*-form Galileons, as an application of the concept called plethysm in the representation theory of the symmetric groups. However, this no-go theorem does not apply to massive spin 1 fields since one of the assumptions of the theorem, the gauge invariance, is dropped, and it is possible to successfully construct derivative self-interactions for the vector field with a broken *U*(1) symmetry. A systematical construction of these interactions with only 3 propagating degrees of freedom (two transverse and one longitudinal) was performed in Ref.  with the requirement that the longitudinal mode possesses non-trivial interactions belonging to the class of Galileon/Horndeski theories. The analysis was based on studying the Hessian matrix and assuring the propagation of a second class constraint. The action derived in Ref.  corresponds to the generalization of massive Proca theories to the curved background in which the requirement of second-order equations of motion enforces the presence of non-minimal derivative couplings with the Ricci scalar and the Einstein tensor (see also Refs.  for related works). Note that similar type of vector-tensor theories can naturally arise in modifications of gravity based on Weyl geometries. It is also possible to obtain derivative interactions higher than quintic order by keeping the degrees of freedom three. In a sub-class of generalized Proca theories up to the quartic Lagrangian ${\cal L}\_4$, the existence of de Sitter solutions relevant to dark energy was found in Refs. . Applying these theories to the spherically symmetric background, the authors in Ref.  showed that cubic-order derivative self-interactions and non-minimal derivative couplings at quartic order can efficiently screen the propagation of the fifth force in local regions of the Universe. In this paper, we will closely study cosmological implications of the full derivative interactions introduced in Ref.  in the presence of matter (radiation and non-relativistic matter). In particular, we derive conditions for the absence of ghosts and Laplacian instabilities by considering tensor, vector, and scalar perturbations at linear level. Our general results will be applied to concrete models relevant to the late-time cosmic acceleration. Not only we show the existence of stable de Sitter solutions, but also we study the stability of perturbations throughout the cosmological evolution. Our paper is organized as follows. In Sec. [HPsec] we present the background equations of motion in the presence of a temporal component of the vector field and a matter fluid. In Sec. [spsec] we derive conditions for avoiding ghosts and Laplacian instabilities by expanding the generalized Proca action up to quadratic order in tensor, vector, and scalar perturbations. In Sec. [genedissec] we discuss general conditions for the theoretical consistency at the de Sitter fixed point and in the early cosmological epoch. In Sec. [covasec] we propose a class of concrete models in which the de Sitter point is always a late-time attractor and discuss the viable parameter space in which ghosts and instabilities are absent during the cosmic expansion history. Sec. [consec] is devoted to conclusions. Generalized Proca theories and the background equations of motion ================================================================= Let us start with generalized Proca theories described by the four-dimensional action S=d4x ( L +LM ),=LF+i=25 Li, [Lag] where *g* is a determinant of the metric tensor *g**μ**ν*, ${\cal L}\_M$ is a matter Lagrangian, and F &=& -14 FF, [LF] L2 &=& G2(X), [L2] L3 &=& G3(X) A, [L3] L4 &=& G4(X)R+ G4,X(X),[L4] L5 &=& G5(X) G A -16 G5,X(X) [ ( A)3 -3d2 A A A -3(1-d2) A A A & & +(2-3d2) A A A +3d2 A A A]. [L5] Here, *A**μ* is a vector field with *F**μ**ν* = ∇*μ**A**ν* − ∇*ν**A**μ*, ∇*μ* is the covariant derivative operator, *R* is the Ricci scalar, *G**μ**ν* is the Einstein tensor, *c*2, *d*2 are constants, *G*2, 3, 4, 5 are arbitrary functions of X=-12 A A, [Xdef] and *G**i*, *X* = ∂*G**i*/∂*X*. The Lagrangians ${\cal L}\_{2,3,4,5}$ are constructed to keep the propagating degrees of freedom up to three with second-order equations of motion. As in standard massless Maxwell theory, there are two transverse polarizations for the vector field. The Proca theory corresponds to the function *G*2(*X*) = *m*2*X* with *G*3, 4, 5 = 0, in which case introduction of the mass term *m* breaks the *U*(1) gauge symmetry. This gives rise to one additional degree of freedom in the longitudinal direction. The generalized Proca theories given above correspond to the extension of massive Proca theories with the cubic derivative self-interaction ${\cal L}\_3$ and non-minimal derivative couplings with the Ricci scalar (in the Lagrangian ${\cal L}\_4$) and with the Einstein tensor (in the Lagrangian ${\cal L}\_5$). In Eqs. ([L4]) and ([L5]) the terms multiplied by the constants *c*2 and *d*2 can be expressed in terms of *F**μ**ν*, so they can be regarded as intrinsic vector modes (i.e., they vanish by taking the scalar limit *A**μ* → ∇*μ**π*). In Refs.  the cosmology for *c*2 =  − 1 up to the Lagrangian ${\cal L}\_4$ was studied for specific choices of the functions *G*2, 3, 4. In this paper we study the cosmology for the full action ([Lag]) for arbitrary constants *c*2 and *d*2. We also derive the background equations of motion and the no-ghost/stability conditions without specifying the functional forms of *G*2, 3, 4, 5. We define the energy-momentum tensor of the matter Lagrangian ${\cal L}\_M$, as T(M)=- g. Assuming that matter is minimally coupled to gravity, the following continuity equation holds T(M)=0. [Tcon] We derive the equations of motion on the flat Friedmann-Lemaître-Robertson-Walker (FLRW) background described by the line element *d**s*2 =  − *d**t*2 + *a*2(*t*)*δ**i**j**d**x**i**d**x**j*, where *a*(*t*) is the scale factor that depends on the cosmic time *t*. The homogenous vector field *A**μ*, which does not break spatial isotropy, is given by A=((t),0,0,0), [Aansatz] where the temporal component *ϕ* is a function of *t*. Then, the kinetic term ([Xdef]) reduces to *X* = *ϕ*2/2. For the matter Lagrangian ${\cal L}\_M$, we take into account the perfect fluid with the energy-momentum tensor $T^{\mu}\_{\nu}={\rm diag}(-\rho\_M,P\_M,P\_M,P\_M)$, where *ρ**M* is the energy density and *P**M* is the pressure. Then, the continuity equation ([Tcon]) reads M+3H(M+PM)=0, [continuity] where a dot represents a derivative with respect to *t*, and $H \equiv \dot{a}/a$ is the Hubble expansion rate. Varying the action ([Lag]) with respect to *g**μ**ν*, we obtain the following equations of motion & & G2-G2,X2-3G3,XH 3 +6G4H2-6(2G4,X+G4,XX2)H22 +G5,XX H35+ 5G5,X H33 =M, [be1] & & G2-2G3,X+2G4(3H2+2) -2G4,X ( 3H2+2H +2 )-4G4,XXH3 && +G5,XXH2 4+G5,X H 2(2+2H2+3H) =-PM. [be2] Variation of the action ([Lag]) with respect to *A**μ* leads to ( G2,X+3G3,XH+6G4,XH2 +6G4,XXH22 -3G5,XH3-G5,XXH3 3 )=0. [be3] Among four Eqs. ([continuity])-([be3]), three of them are independent. We note that Eqs. ([be1]) and ([be3]) do not contain any time derivatives of *ϕ*. This reflects the fact that the theory given by the action ([Lag]) has been constructed to avoid the appearance of the propagating degrees of freedom more than three. From Eq. ([be3]) there are two branches of solutions. One of them is *ϕ* = 0, but in this case the temporal component of the vector field does not contribute to the background dynamics. Another branch corresponds to G2,X+3G3,XH+6G4,XH2 +6G4,XXH22 -3G5,XH3-G5,XXH3 3=0, [be3d] in which case the field *ϕ* is directly related to the Hubble parameter *H*. This allows the existence of de-Sitter solutions characterized by constant *H* and *ϕ*, so we shall focus on the second branch satisfying Eq. ([be3d]) in the following discussion. Conditions for avoiding ghosts and instabilities ================================================ We derive conditions for the absence of ghosts and instabilities of tensor, vector, and scalar perturbations on the flat FLRW background. We consider two scalar metric perturbations *α*, *χ* and one vector perturbation *V**i* by choosing the so-called flat gauge. Under this choice, the temporal and spatial components of gauge transformation vectors are fixed. Taking into account the tensor perturbation *h**i**j*, the linearly perturbed line-element is given by ds2=-(1+2)dt2+2( i+Vi )dtdxi+a2(t) ( ij +hij )dxi dxj, where *V**i* and *h**i**j* satisfy the following conditions & & i Vi=0, [traconve1] & & i hij=0,i=0. [tracon] As for the Proca vector field *A**μ*, the time component *A*0 and the spatial vector *A**i* can be expressed in the following forms A0 & = & (t)+, Ai & = & ij ( jV+Ej ), where *δ**ϕ* is the perturbation in *A*0 (which depends on both *t* and *x**i*). The perturbation *χ**V* corresponds to the intrinsic scalar part, whereas *E**j* is the vector part satisfying the transverse condition j Ej=0. [traconve2] For the matter action $S\_M=\int d^4 x \sqrt{-g}\,{\cal L}\_M$, we consider a single perfect fluid with the energy density *ρ**M*. For the description of the perfect fluid, we make use of the Schutz-Sorkin action (see also Ref. ): SM=-d4x, [Spf] where ℓ is a scalar, *J**μ* is a vector field of weight one, A1, A2, B1, B2 are scalars whose perturbations are meant to describe the vector modes, and *n* is the number density of the fluid defined by n=. Due to the transverse condition for the vector modes, a third component A3∂*μ*B3 is redundant in the action ([Spf]). We express ℓ and *J**μ* into the background and perturbed components, as &=&-t M,ndt’ - M,n v, J0 &=& 0+J, Ji &=& ik ( kj+Wk ), where *ρ**M*, *n* ≡ ∂*ρ**M*/∂*n* is evaluated on the background, N0 = *n*0*a*3 is a constant associated with the total number of fluid particles (*n*0 is the background value of *n*), and *v*, *δ**J* are the perturbations of ℓ, *J*0, respectively. On the FLRW background, the action ([Spf]) reduces to $S\_{M}^{(0)}=\int d^4 x \sqrt{-g}\,(n\_0\rho\_{M,n}-\rho\_M)$. Since the term inside the bracket corresponds to the pressure *P**M* of the fluid, we have PM=n0M,n-M. [PM] The perturbation *δ**j* corresponds to the scalar part of *J**i*, whereas the intrinsic vector perturbation *W**k* obeys k Wk=0. [traconve3] We write the perturbation of ${\cal A}\_i$ in the form i=M,nui, [delAi] where *u**i* corresponds to the intrinsic vector part of the four velocity $u^{\alpha}=J^{\alpha}/(n\sqrt{-g})$, satisfying i ui=0. [traconve4] It should be pointed out that the above form of action is not the only possibility for describing the perfect fluid, e.g., the k-essence form is also another possibility. However, in the theories under consideration, the Schutz-Sorkin action is convenient for properly accommodating vector perturbations. Moreover, it provides a natural and convenient choice of variables for the dust-fluid perturbation with a vanishing propagation speed squared *c**m*2. On the other hand, in the k-essence form, one may need to perform a change of variables corresponding to a canonical transformation, e.g., from the perturbation of the scalar field to the density contrast, before taking the *c**m*2 → 0 limit. Due to the decomposition into tensor, vector, and scalar modes explained above, expansion of the action ([Lag]) up to second order in perturbations leads to the quadratic action of the form S(2)=ST(2)+SV(2)+SS(2), where *S**T*(2), *S**V*(2), *S**S*(2) correspond to contributions from tensor, vector, and scalar perturbations, respectively, Variations of the action *S**T*(2), *S**V*(2), *S**S*(2) with respect to perturbed quantities give rise to the linear perturbation equations of motion for tensor, vector, and scalar modes. The same equations of motion can be derived after obtaining the general field equations by varying the action ([Lag]) with respect to *g**μ**ν* and then decomposing into tensor, vector, and scalar perturbations (as in the case of GR discussed in Ref. ). The advantage of using the second-order action is that no-ghost and stability conditions for dynamical fields can be easily deduced after integrating out all non-dynamical fields from the action. In the following, we shall separately derive *S**T*(2), *S**V*(2), *S**S*(2) to discuss conditions for the absence of ghosts and instabilities. Tensor perturbations -------------------- The tensor perturbation *h**i**j*, which satisfies the transverse and traceless conditions ([tracon]), can be expressed in terms of the two polarization modes *h*+ and *h*×, as *h**i**j* = *h*+*e**i**j*+ + *h*×*e**i**j*×. In Fourier space, the unit bases *e**i**j*+ and *e**i**j*× obey the relations $e\_{ij}^{+}({\bm k}) e\_{ij}^{+}(-{\bm k})^\*=1$, $e\_{ij}^{\times}({\bm k}) e\_{ij}^{\times}(-{\bm k})^\*=1$, and $e\_{ij}^{+}({\bm k}) e\_{ij}^{\times}(-{\bm k})^\*=0$, where ${\bm k}$ is the comoving wavenumber. Expanding Eq. ([Lag]) up to second order in tensor perturbations, the resulting second-order action is given by ST(2)==+,dtd3x a3, where qT=2G4-22G4,X+ H3G5,X, [qT] and cT2=. [cT] Note that *c**T*2 corresponds to the propagation speed squared of the tensor mode. The ghost is absent under the condition *q**T* > 0. The Laplacian instability on small scales can be avoided for *c**T*2 > 0. Vector perturbations -------------------- The four quantities *V**i*, *E**i*, *W**i*, and *u**i* obey the transverse conditions ([traconve1]), ([traconve2]), ([traconve3]), and ([traconve4]), respectively. For the practical computations of the second-order action, we can consider the vector fields depending on *t* and *z* alone, e.g., *V**i* = (*V*1(*t*, *z*), *V*2(*t*, *z*), 0). Then, the transverse condition such as Eq. ([traconve1]) is automatically satisfied. For the quantities ${\cal A}\_i$ and ${\cal B}\_i$, the simplest choice containing all the needed information of the vector mode is given by ${\cal A}\_1=\delta {\cal A}\_1(t,z)$, ${\cal A}\_2=\delta {\cal A}\_2(t,z)$, ${\cal B}\_1=x+\delta {\cal B}\_1(t,z)$, and ${\cal B}\_2=y+\delta {\cal B}\_2(t,z)$. The normalizations of ${\cal B}\_1$ and ${\cal B}\_2$ have been done such that both ${\cal B}\_{1,x}$ and ${\cal B}\_{2,y}$ are equivalent to 1. Note that the above prescription can be extended to the general case in which the perturbations depend on *t*, *x*, *y*, *z*. After expanding the matter action ([Spf]) up to second order in vector perturbations, it follows that (SV(2))M=i=12dtd3x. [LVM] Since the quantities $W\_i, \delta {\cal A}\_i, \delta {\cal B}\_i$ appear only in the matter action, we can vary the action ([LVM]) with respect to these perturbations independently of the full second-order action. Variation of Eq. ([LVM]) in terms of *W**i* leads to Wi=. [Wiex] Substituting Eq. ([Wiex]) into Eq. ([LVM]) and varying the action with respect to $\delta {\cal A}\_i$, we obtain $\delta {\cal A}\_i$ in the form ([delAi]) with ui=Vi-a2i. Varying the action with respect to $\delta {\cal B}\_i$, we find M,nui =ui =, [conser] which is associated with the conservation of the energy-momentum tensor of the perfect fluid. This is the same relation as that in GR and it states that the perturbation *u**i* is non-dynamical. Finally, the second-order matter action reduces to (SV(2))M=i=12 dtd3x, [LVM2] with the conservation relation ([conser]) for the four velocity $u\_i=V\_i-a^2\dot{\delta {\cal B}}\_i$. We sum up the second-order action originating from $\int d^4 x \sqrt{-g}{\cal L}$ with Eq. ([LVM2]). In doing so, it is convenient to introduce the following combination Zi= Ei+(t)Vi. Let us consider the perturbations in Fourier space with $k=|{\bm k}|$. Variation of the total second-order action *S**V*(2) with respect to *V**i* leads to Vi=-(M+PM)ui- (G4,X-12 G5,XH )Zi. [vecre] The metric perturbation *V**i* follows a different dynamics from that in GR (where *G*4, *X* = 0 = *G*5, *X*). Integrating out the non-dynamical fields *u**i* and *V**i*, we obtain the action for the dynamical field *Z**i*. It follows that there are two dynamical degrees of freedom *Z*1 and *Z*2 for the vector mode. Taking the small-scale limit (*k* → ∞) in Eq. ([vecre]) and using the background equations of motion, the resulting second-order action for the perturbations *Z**i* in Fourier space reads SV(2) i=12 dtd3 x ( i2+cV2 Zi2 ), [SVac] where qV= 1-2c2G4,X-2d2HG5,X, [qv] and the vector propagation speed squared can be written as cV2=1+ 2qTqV+. [cv] In Eq. ([SVac]) we have ignored the contribution of an effective mass term *m**V*2*Z**i*2 relative to the second term, which can be justified in the limit *k* → ∞ with *c**V* ≠ 0. The sign of the coefficient *q**V* characterizes the no-ghost condition, such that the vector ghost is absent for *q**V* > 0. If *c*2 = *d*2 = 0, then this condition is automatically satisfied. We recall that the terms containing *c*2 and *d*2 are associated with the pure vector contributions in the original action ([Lag]), which affect the no-ghost condition for the vector mode. If the vector propagation speed squared *c**V*2 is positive, the small-scale Laplacian instability can be avoided. For the theories with *G*5 = 0 or the theories with *d*2 = 0 and *G*5 ≠ 0, we have that *c**V*2 > 1 under the no-ghost conditions of tensor and vector perturbations (*q**T* > 0 and *q**V* > 0). Scalar perturbations -------------------- For scalar perturbations, we consider the single perfect fluid with the background energy density *ρ**M* and the pressure *P**M* given by Eq. ([PM]). We make the field redefinitions J &=& M =M, V&=& -(t), where *δ**ρ**M* is the matter density perturbation. First, we can integrate out the Lagrange multiplier *δ**j* by means of its own equation of motion: =-v-. Then, the second-order action for scalar perturbations is given by *S**S*(2) = ∫*d**t**d*3*x* *L**S*(2), where LS(2) & = & a3{- +v -(M)2-M & & -w3+w42 - & & - +w5 - & & -+ +(w1+)}, [sscalar] with the short-cut notations w1 & = & H2 3 (G5,X+2G5,*XX*) -4H(G4+4G4,*XX*)-3G3,X, w2 & = & w1+2HqT, w3 & = & -22qV, w4 & = & 33(9G5,X-4G5,*XXX*) -3H2 (2G4+22G4,X+4G4,*XX*-6G4,*XXX*) & & -H3(G3,X-2G3,*XX*) +4G2,*XX*, w5 & = & w4-H(w1+w2), w6 & = & -, w7 & = & 2(HG5,X-2G4,X) +. The Lagrangian ([sscalar]) does not contain the time derivatives of *α*, *χ*, *δ**ϕ*, *v*, so these four fields are non-dynamical. Varying the action *S**S*(2) with respect to *α*, *χ*, *δ**ϕ*, *v*, we obtain the following equations of motion in Fourier space respectively: & & M-2w4 +( 3Hw1-2w4 ) + ( w3+w3 -w6 +w1 +2w3 )=0, & & n0M,n v+ w1 + =0, & & ( 3Hw1-2w4 )-2w5 + ( w3+ 2-++w2 )=0, & & M+3H(1+ ) M +n0M,n ( +v )=0. On using these equations, we can eliminate the non-dynamical fields from the second-order Lagrangian *L**S*(2). After integrations by parts, we finally obtain a reduced Lagrangian for two dynamical fields *ψ* and *δ**ρ**M* in the form LS(2)=a3( tK +tG -tM -tB ), [L2mat] where ${\bm K}$, ${\bm G}$, ${\bm M}$, ${\bm B}$ are 2 × 2 matrices, and the vector field $\vec{\mathcal{X}}$ is defined by t=(, M/k ). The matrix ${\bm M}$ is related with the masses of two scalar modes, which do not contain the *k*2/*a*2 term. In the following we shall take the small-scale limit (*k* → ∞), in which the second
arxiv_0000330
Topological Defects on the Lattice I: The Ising model ===================================================== In this paper and its sequel, we construct topologically invariant defects in two-dimensional classical lattice models and quantum spin chains. We show how defect lines commute with the transfer matrix/Hamiltonian when they obey the *defect commutation relations*, cousins of the Yang-Baxter equation. These relations and their solutions can be extended to allow defect lines to branch and fuse, again with properties depending only on topology. In this part I, we focus on the simplest example, the Ising model. We define lattice spin-flip and duality defects and their branching, and prove they are topological. One useful consequence is a simple implementation of Kramers-Wannier duality on the torus and higher genus surfaces by using the fusion of duality defects. We use these topological defects to do simple calculations that yield exact properties of the conformal field theory describing the continuum limit. For example, the shift in momentum quantization with duality-twisted boundary conditions yields the conformal spin 1/16 of the chiral spin field. Even more strikingly, we derive the modular transformation matrices explicitly and exactly. Introduction ============ The Yang-Baxter equation plays a fundamental role in the study of integrable models and their mathematics. Lattice Boltzmann weights satisfying the Yang-Baxter equation can be used to construct a family of commuting transfer matrices, yielding the conservation laws necessary for integrabilty. This has resulted not only in countless physical insights, but deep mathematics as well. For example, taking a limit of these Boltzmann weights yields topological invariants for knots and links generalizing the Jones polynomial. In this paper and its sequel we explain how cousins of the Yang-Baxter equation allow these studies to be extended into further realms. We show how solving the *defect commutation relations* gives topological defects in many lattice models. The partition function is independent of the path a topological defect follows except for topological properties, e.g. if and how the path wraps around a system with periodic boundary conditions. In this part I, we study solutions of these defect commutation relations for the Ising model. In part II, we explain how solutions follow from a general mathematical structure known as a fusion category, familiar from studies of anyons, rational conformal field theory, and topological quantum field theory. Magic follows. Much can be learned even in the case of a single defect line, where our construction gives a simple and systematic way to generalize defects in lattice models previously constructed by exploiting integrability. Solving the defect commutation relations allows a uniform way of understanding seemingly different observations, for example showing that Kramers-Wannier duality can be extended to many other models in a generalisation known as “topological symmetry”. It provides a systematic way to define twisted boundary conditions, where a modified translation symmetry holds despite the presence of a defect seemingly breaking it. Understanding such translation invariance allows us to compute the spin of operators in conformal field theory *exactly* in this lattice model, without using the full apparatus of integrability. For example, we find here in part I the conformal spin 1/16 of the chiral spin field in the Ising model simply by computing the eigenvalues of the operator for translating around one cycle (i.e., a Dehn twist). We show how to include multiple topological defects by giving a precise method for understanding how they fuse together. Kramers-Wannier duality becomes easily implemented with the transfer matrix formulation. This clarifies greatly a major subtlety, that duality is *not* a symmetry in the traditional sense ; while a duality transformation can be implemented by an operator commuting with the transfer matrix/quantum Hamiltonian, this operator is not unitary or even invertible. Defining this operator via an insertion of a “duality defect” line makes it easy to derive what happens when two duality transformations are performed. We show how to allow topological defect lines to branch in a topologically invariant fashion. Namely, we define a trivalent vertex where defect lines meet such that the partition function is independent of the location of the vertex. This results in a precise lattice definition of operations familiar in topological field theory, for example the “*F*-move” that in our context relates the partition functions of systems with different defect branching. This in turn allows some of the profound consequences of modular invariance, famed from conformal field theory, to be derived and utilised directly on the lattice. It also allows a straightforward method for implementing duality when space is a higher-genus surface. One of the messages of our papers is that topological defects are an absolutely fundamental characteristic of many two-dimensional lattice models, especially but not exclusively at their critical points. At a crude level, it is because defects are effectively external probes, and so a system’s response to their insertion gives useful information. However, the relation is much more profound, both physically and mathematically. Topological defects are intimately related to the symmetry structure of the theory, and so deeply ingrained. Indeed, this has long been known in conformal field theory: Cardy’s seminal work on boundaries (in particular ) led to among other things an understanding of how defects play a fundamental role; see e.g. . We find similar structure in many lattice models. For example, we show how ratios of the non-integer ‘ground-state degeneracy’ are given easily and exactly by fusing a topological defect with a boundary; they are simply the eigenvalues of the defect creation operators. The defect commutation relations are typically a *milder* constraint than the Yang-Baxter equation: we will show how topological defects occur even when the Boltzmann weights are staggered or spatially varying. In the Ising model, this makes the model non-critical. In the more general models discussed in part II, staggering the Boltzmann weights not only takes the model away from criticality, but breaks the integrability. In this part I, we downplay the general structure in favour of describing a particular model in depth, the Ising model. This we hope provides a very concrete realisation of the more general approach of part II. Even though the two-dimensional Ising model has been studied for the best part of a century, we believe our approach not only illuminates greatly some known results such as duality, but adds some new ones to the canon. For example, we compute the exact modular transformation matrices directly on the lattice. In section [sec:Ising], we review the basics of the two-dimensional Ising model. In section [sec:defcomm], we introduce topological defects and the defect commutation relations. We describe in depth two types, the spin-flip defect and the duality defect. In section [sec:trivalent], we show how defects can branch and join in a topologically invariant fashion. Even more remarkably, in the presence of multiple defect junctions, we derive how to reshuffle the junctions, giving linear identities for partition functions. We show how useful these identities are in section [sec:torus], by giving a simple implementation of duality on the torus, as well as deriving the aforementioned 1/16. In section [sec:partition], we go further and derive the full set of modular transformation matrices for the Ising conformal field theory from purely lattice considerations. In section [sec:highergenus] we find similar relations for partition functions on the disc and on higher genus surfaces. This in particular allows us to derive the ratios of ground-state degeneracies, and give a straightforward procedure for defining duality on any orientable surface. The Ising model =============== The degrees of freedom of the classical Ising model are spins *σ**j* =  ± 1 at each site *j* of some graph, which for this section we take to be the square lattice. To allow for general types of defects we illustrate the configurations in a somewhat unconventional fashion in Fig. [fig:IsingLattice]. We draw both the square lattice and its dual, leaving the dual empty; we will see that duality maps the original model to one on the dual lattice. We also label the degrees of freedom by heights *h**j* = 0, 1, related to the customary variables by *σ**j* = ( − 1)*h**j*. These conventions also have the advantage of making the generalisation to height models in part II more transparent. [fig:IsingLattice] The Ising energy and partition function are given by $$\begin{aligned} -\beta E[\{\sigma\}] & \;\; = \newcommand{\ColTwo}[2]{\renewcommand{\arraystretch}{0.4} \begin{tabular}{@{} c @{}}{#1}\\{#2}\end{tabular}} \mkern-5mu \sum\_{x,y \,\, \in \, \text{\ColTwo{active}{sites}}} \mkern-5mu \big( J\_x \sigma\_{x,y}\sigma\_{x+1,y} + J\_y \sigma\_{x,y}\sigma\_{x,y+1} \big), & {Z} &= \sum\_{ \{\sigma \} } e^{-\beta E[\{ \sigma \}]}.\end{aligned}$$ where (*x*, *y*) labels the points on the original square lattice. The critical and self-dual line for the square lattice is given by couplings *J**x* and *J**y* obeying $$\begin{aligned} (\sinh 2J\_x) (\sinh 2J\_y) = 1\. \label{Isingcritline}\end{aligned}$$ A convenient reparametrization of this line is given by introducing the “anisotropy parameters” *u**H* and *u**V* such that $$\begin{aligned} e^{2J\_y} = \cot u\_V, \qquad\quad e^{2J\_x} = \cot (\tfrac{\pi}{4}-u\_H)\.\end{aligned}$$ The critical line ([Isingcritline]) then corresponds to *u**H* = *u**V*. The couplings are isotropic when *u**H* = *π*/4 − *u**V*, so that $u\_H=u\_V=\frac{\pi}{8}$ is the critical and isotropic point. Each link of the original lattice corresponds to a plaquette on the lattice comprised of the original and its dual, with the four sites comprised of two adjacent ones on the original lattice along with two on the dual. Letting *a* and *b* be the values of the spins on the former two, the vertical and horizontal Boltzmann weights for each plaquette are $$\begin{aligned} &{\mathord{ \raisebox{0.2ex}{$\scriptstyle{}$}\mkern2mu\overset{a}{\underset{b} {\ifthenelse{\isempty{}}{{\mathord{\vcenter{\hbox{\includegraphics[scale=1.4]{BW.pdf}}}}}}{{{\mathord{\ooalign{ \vphantom{$\Big|^2$}\cr\hidewidth\ensuremath{}\hidewidth\cr$\vcenter{\hbox{\includegraphics[scale=1.4]{BW.pdf}}}$\cr\vphantom{$\Big|\_q$} }}}}}} }\mkern2mu\raisebox{0.2ex}{$\scriptstyle{}$} }} =\; \begin{cases} \cos{u\_V} & a=b, \\ \sin{u\_V} & a\ne b, \end{cases} \label{eq:Ising\_WV} \\ &{\mathord{ \raisebox{0.2ex}{$\scriptstyle{a}$}\mkern2mu\overset{}{\underset{} {\ifthenelse{\isempty{}}{{\mathord{\vcenter{\hbox{\includegraphics[scale=1.4]{BW.pdf}}}}}}{{{\mathord{\ooalign{ \vphantom{$\Big|^2$}\cr\hidewidth\ensuremath{}\hidewidth\cr$\vcenter{\hbox{\includegraphics[scale=1.4]{BW.pdf}}}$\cr\vphantom{$\Big|\_q$} }}}}}} }\mkern2mu\raisebox{0.2ex}{$\scriptstyle{b}$} }} =\; \begin{cases} \cos\big(\tfrac{\pi}{4}-u\_H\big) & a=b, \\ \sin\big(\tfrac{\pi}{4}-u\_H\big) & a\ne b, \end{cases} \label{eq:Ising\_WH}\end{aligned}$$ [eq:IsingW] In this paper we focus mainly on the square lattice (plus defects), but all of our results can be generalized greatly, as we explain in section [sec:highergenus]. In general, one simply associates a parameter *u**p* for each plaquette made from the combination of the lattice and its dual. In the absence of defects, the partition function can be defined directly in terms of ([eq:IsingW]). It turns out to be much more convenient once defects are introduced to complicate this definition by an additional weight per site of the lattice. This additional factor arises naturally in the closely related theories of anyons/topological field theory, where it is known as the “quantum dimension”. Here it is simply $$\begin{aligned} d\_v= \begin{cases} 1&\qquad\text{if there is a spin on site $v$},\\ \sqrt{2}&\qquad\text{if site $v$ is empty}. \end{cases} \label{vertexweights}\end{aligned}$$ We thus define the Boltzmann weights as $$\begin{aligned} e^{-\beta E[\{ \sigma \}]} = \prod\_{\text{plaquettes $p$}} {\mathord{ \raisebox{0.2ex}{$\scriptstyle{\alpha\_p}$}\mkern2mu\overset{\beta\_p}{\underset{\delta\_p} {\ifthenelse{\isempty{}}{{\mathord{\vcenter{\hbox{\includegraphics[scale=1.4]{BW.pdf}}}}}}{{{\mathord{\ooalign{ \vphantom{$\Big|^2$}\cr\hidewidth\ensuremath{}\hidewidth\cr$\vcenter{\hbox{\includegraphics[scale=1.4]{BW.pdf}}}$\cr\vphantom{$\Big|\_q$} }}}}}} }\mkern2mu\raisebox{0.2ex}{$\scriptstyle{\gamma\_p}$} }} \times \prod\_{\text{sites $v$}} d\_v\;. \label{IsingConfigurationWeight}\end{aligned}$$ Here we explicitly label the spins on the lattice and the dual, but each plaquette has only two active degrees of freedom. It is apparent from this definition that in the absence of defects, the extra site weight results in an unimportant overall constant. The partition function is conveniently written in terms of the transfer matrix *T*. The operator *T* acts on the vector space ${\cal H}$, whose orthonormal basis states are the spin/height configurations along a row of *L* sites in the original lattice. Each basis state is labelled by $\ket{h\_1 h\_2 \cdots h\_L}$, where each *h**j* ∈ {0, 1}. Acting with *T* represents one step of evolution in the *y* direction (Euclidean “time”), taking one row to the one above it; drawing both the lattice and the dual lattice makes the row zig-zag. The on-site weights are essentially gauge factors, and so there exist many ways of shuffling them around so that one reproduces the same partition function. We have found the least offensive way to include them is to split the weight per site evenly across the vertices in the vertical direction. To illustrate this splitting in the diagrams we put a half-dot at each of the vertices which serves to multiply the diagram by a $\sqrt{d\_v}$. Explicitly, the Boltzmann weights with and without the half-dots are related by, $$\begin{aligned} {\mathord{ \raisebox{0.2ex}{$\scriptstyle{\alpha}$}\mkern2mu\overset{\beta}{\underset{\delta} {\ifthenelse{\isempty{}}{{\mathord{\vcenter{\hbox{\includegraphics[scale=1.4]{BWDots.pdf}}}}}}{{{\mathord{\ooalign{ \vphantom{$\Big|^2$}\cr\hidewidth\ensuremath{}\hidewidth\cr$\vcenter{\hbox{\includegraphics[scale=1.4]{BWDots.pdf}}}$\cr\vphantom{$\Big|\_q$} }}}}}} }\mkern2mu\raisebox{0.2ex}{$\scriptstyle{\gamma}$} }} = {\mathord{ \raisebox{0.2ex}{$\scriptstyle{\alpha}$}\mkern2mu\overset{\beta}{\underset{\delta} {\ifthenelse{\isempty{}}{{\mathord{\vcenter{\hbox{\includegraphics[scale=1.4]{BW.pdf}}}}}}{{{\mathord{\ooalign{ \vphantom{$\Big|^2$}\cr\hidewidth\ensuremath{}\hidewidth\cr$\vcenter{\hbox{\includegraphics[scale=1.4]{BW.pdf}}}$\cr\vphantom{$\Big|\_q$} }}}}}} }\mkern2mu\raisebox{0.2ex}{$\scriptstyle{\gamma}$} }} \times\sqrt{d\_{\beta} d\_{\delta}}. \label{halfdots}\end{aligned}$$ With this in mind, the matrix elements of *T* are illustrated by $$\begin{aligned} \bra{\{h\}}T \ket{\{h'\}} = {\mathord{\vcenter{\hbox{\includegraphics[scale=1.25]{Transfer\_Matrix\_Ising\_Dots.pdf}}}}}. \label{Tdef} \end{aligned}$$ To write *T* explicitly, we define $$\begin{aligned} \sigma^c\_j = \mathds{1}\otimes\mathds{1}\otimes\dots\otimes\mathds{1}\otimes \sigma^c\otimes \mathds{1} \otimes \cdots~,\end{aligned}$$ where the Pauli matrix *σ**c* on the right-hand side acts on the *j*th spin. We then define the operators *W**j**H* and *W**j**V* with matrix elements $$\begin{aligned} \label{IsingWVH} \bra{\cdots h\_j \cdots }W^V\_j \ket{\cdots h\_j' \cdots} &= {\mathord{ \raisebox{0.2ex}{$\scriptstyle{}$}\mkern2mu\overset{h\_j}{\underset{h\_j'} {\ifthenelse{\isempty{}}{{\mathord{\vcenter{\hbox{\includegraphics[scale=1.4]{BWDots.pdf}}}}}}{{{\mathord{\ooalign{ \vphantom{$\Big|^2$}\cr\hidewidth\ensuremath{}\hidewidth\cr$\vcenter{\hbox{\includegraphics[scale=1.4]{BWDots.pdf}}}$\cr\vphantom{$\Big|\_q$} }}}}}} }\mkern2mu\raisebox{0.2ex}{$\scriptstyle{}$} }} = \big[ \cos u\_V + \sigma^x\_j \sin u\_V \big]\_{h\_j h\_j'} ~, \\ \bra{\cdots h\_j h\_{j+1}\cdots}W^H\_{j+1/2} \ket{\cdots h\_{j}' h\_{j+1}' \cdots}&= {\mathord{ \raisebox{0.2ex}{$\scriptstyle{h\_{j}}$}\mkern2mu\overset{}{\underset{} {\ifthenelse{\isempty{}}{{\mathord{\vcenter{\hbox{\includegraphics[scale=1.4]{BWDots.pdf}}}}}}{{{\mathord{\ooalign{ \vphantom{$\Big|^2$}\cr\hidewidth\ensuremath{}\hidewidth\cr$\vcenter{\hbox{\includegraphics[scale=1.4]{BWDots.pdf}}}$\cr\vphantom{$\Big|\_q$} }}}}}} }\mkern2mu\raisebox{0.2ex}{$\scriptstyle{h\_{j+1}}$} }}= \big[ \cos u\_H + \sigma^z\_j \sigma^z\_{j+1}\sin u\_H \big]\_{h\_j h\_{j+1}; h\_{j}' h\_{j+1}'}, \end{aligned}$$ The transfer matrix is given by *T**H**T**V*, where *T**H* = ∏*j* = 1*L**W**j* + 1/2*H*,   *T**V* = ∏*j* = 1*L**W**j**V*. For periodic boundary conditions in the horizontal direction, we let *σ**L* + 1*z* ≡ *σ*1*z*. Putting periodic boundary conditions around the vertical direction as well makes two-dimensional space a torus. The toroidal partition function for *r* rows and *L* columns is then $${Z}\_{1,1}(L,r) = {\operatorname{Tr}}\big[ (T^H T^V)^r\big] ~;$$ the reason for the subscripts will be apparent shortly. Duality maps the Ising model on any planar lattice to one on the dual lattice. The partition functions of the two are the same when the coupling *J* in the original model on a given link is related to the coupling *Ĵ* on the corresponding link in the dual by $$\begin{aligned} \sinh(J)\sinh(\widehat{J})=1\. \label{Isingduality}\end{aligned}$$ Since the dual of a square lattice is also a square lattice, it is natural to expect that the critical point in this case is the self-dual point, *J* = *Ĵ*, and indeed this is so; note ([Isingcritline]). The transfer matrix for the dual model can be defined using the *same* picture ([Tdef]), but here the heights/spins *ĥ**j* − 1/2 live on the dual lattice. We denote this vector space by $\widehat{\cal H}$, and its orthonormal basis elements are labelled by $\ket{\hat{h}\_{\frac{1}{2}}\hat{h}\_{\frac{3}{2}}\cdots \hat{h}\_{L-\frac{1}{2}}}$. The matrix elements are then given by replacing *h**j* with *ĥ**j* − 1/2 in ([IsingWVH]). In the following, we will describe how dualities of this sort can be derived in a very simple way by understanding the topological defects associated with the global symmetries of the model. However, studying the Ising model on graphs with non-trivial topology and boundary segments will require further restrictions. Each boundary condition on the original graph will result in a change or possibly a sum over boundary conditions on the dual lattice. In the following, we provide a geometric way of performing the duality that provides a direct and simple way of mapping the boundary conditions between the model and its dual. The defect commutation relations and their solutions ==================================================== We now introduce defect lines, and show that when they and the Boltzmann weights obey certain relations, the partition function is independent of the deformations of the defect’s path. We focus on the two main types of topological defect lines in the Ising model, the spin-flip defect and the duality defect. The spin-flip defect -------------------- The Ising model has a ${\mathbb Z}\_2$ symmetry given by flipping the spin at every site. In the transfer matrix formulation, this is generated by the operator D*ψ* = ∏*n**σ**n**x*, which obviously commutes with *T**H* and *T**V*. The reason for the mysterious subscript will be explained later. A straight spin-flip defect stretching in the horizontal direction is then created by inserting D*ψ* between two of the rows of the transfer matrix. The toroidal partition function in the presence of such a defect line is then $$\begin{aligned} {Z}\_{1,\psi}(L,r) = {\operatorname{Tr}}\left[ \cdots T^HT^V \mathcal{D}\_{\psi} T^HT^V \cdots \right].\end{aligned}$$ We sketch one way to do this diagrammatically in Fig. [fig:IsingDefect-psi](a), with the defect line corresponding to the sequence of parallelograms. The defect spin flip Boltzmann weights are given by: $$\begin{aligned} \mathord{\sideset{^a\_b}{}{\mathop{{\mathord{\vcenter{\hbox{\includegraphics[scale=1.5]{RSquare.pdf}}}}}}}} = \mathord{\sideset{}{^a\_b}{\mathop{{\mathord{\vcenter{\hbox{\includegraphics[scale=1.5]{RSquare.pdf}}}}}}}} = 2^{-\frac14}\big[ \sigma^x \big]\_{a,b} = \begin{cases} 0 & a = b, \\ 2^{-\frac14} & a \neq b. \end{cases}\end{aligned}$$ Here we draw the parallelograms as rectangles. As Boltzmann weights, ${\mathord{\vcenter{\hbox{\includegraphics[scale=1.5]{RSquare.pdf}}}}}$ simply enforces that the spins on either side are opposite. In terms of the half-dot notation introduced in ([halfdots]), the spin-flip defect creation operator is $$\begin{aligned} \mathcal{D}\_\psi = {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Spin\_Flip\_defect.pdf}}}}}= \prod\_{j} \sigma\_j^x\,\end{aligned}$$ taking ${\cal H}\to {\cal H}$, and $\widehat{\cal H}\to \widehat{\cal H}$. The semi-circles have been introduced so that the operator can be inserted into the partition function between rows of the transfer matrix without changing the overall constant in front of the partition function. Because (D*ψ*)2 = 1, inserting two horizontal defects successively is equivalent to inserting none. @!0 @M=2mm @R=19mm @C=38mm &&(a)&(b) && & [fig:IsingDefect-psi] The spin-flip defect line can be moved around without changing the partition function Z1, *ψ*. It is therefore a *topological defect*. This follows from the fact that the D*ψ* commutes with each individual plaquette operator used to build the transfer matrix: $$\begin{aligned} [{{\mathcal{D}}}\_\psi\,,W^H\_{j+1/2}] = [{{\mathcal{D}}}\_\psi\,,W^V\_j] = 0\end{aligned}$$ for all *j*. Pictorially, these can be represented as *defect commutation relations* $$\begin{aligned} {\mathord{\vcenter{\hbox{\includegraphics[scale=0.8]{DefectCommute\_NoSymbols1H.pdf}}}}}\ =\ {\mathord{\vcenter{\hbox{\includegraphics[scale=0.8]{DefectCommute\_NoSymbols2H.pdf}}}}}\, \qquad\quad \sum\_\alpha {\mathord{\vcenter{\hbox{\includegraphics[scale=0.8]{DefectCommute\_NoSymbols1V.pdf}}}}}\ =\ \sum\_\beta {\mathord{\vcenter{\hbox{\includegraphics[scale=0.8]{DefectCommute\_NoSymbols2V.pdf}}}}}\label{Isingdefcomm}\end{aligned}$$ where the two cases correspond to the different possible locations of the spins. Similarly, $$\begin{aligned} \sum\_{\rm internal\ labels}{\mathord{\vcenter{\hbox{\includegraphics[scale=0.8]{DefectCommute\_NoSymbols3.pdf}}}}}\ =\ {\mathord{\vcenter{\hbox{\includegraphics[scale=0.8]{DefectCommute\_NoSymbols4.pdf}}}}}\, \label{Isingdefcomm2}\end{aligned}$$ where here we do not show the (fixed) external labels and the (summed over) internal labels, and so only need write the diagram once. The partition function remains invariant under these moves when the factor of *d**v* per site, denoted by the full circle, is included. These relations mean that the defect line can be deformed without modifying the partition function; for example $${\mathord{\vcenter{\hbox{\includegraphics[scale=1.2]{DefectCommute\_reda.pdf}}}}}\leftrightarrow {\mathord{\vcenter{\hbox{\includegraphics[scale=1.2]{DefectCommute\_redb.pdf}}}}}\quad\text{ and }\quad {\mathord{\vcenter{\hbox{\includegraphics[scale=1.2]{DefectCommute\_redc.pdf}}}}}\leftrightarrow {\mathord{\vcenter{\hbox{\includegraphics[scale=1.2]{DefectCommute\_redd.pdf}}}}}\. \label{symmcomm}$$ Thus *Z*1, *ψ*(*L*, *r*) is independent of any local deformations of the defect line. This implies that in the critical and continuum limits, the partition function remains conformally invariant. Inserting a vertical defect as shown in Fig. [fig:IsingDefect-psi]b can be interpreted as changing the boundary condition on the transfer matrix. Such boundary conditions defined using a topological defect are called *twisted*. Twisted boundary conditions are often associated with a discrete symmetry like spin-flip one here. The spin-flip defect effectively changes the interaction bridging the defect line from ferromagnetic to antiferromagnetic, modifying the Boltzmann weight and corresponding operator to $$\begin{aligned} {\mathord{\vcenter{\hbox{\includegraphics[scale=0.8]{Wpsi.pdf}}}}}&\ =\ \begin{cases} 2^{-\frac12}\sin\big(\tfrac{\pi}{4}-u\_H\big) & a=b \\ 2^{-\frac12}\cos\big(\tfrac{\pi}{4}-u\_H\big) & a\ne b \end{cases}\, \\ W^H\_{L+1/2} &\ =\ {\mathord{\vcenter{\hbox{\includegraphics[scale=0.8]{WpsiOp.pdf}}}}}= \cos u\_H- \sigma^z\_L \sigma^z\_{1}\sin u\_H\end{aligned}$$ respectively. Correspondingly the transfer matrix is modified to give the partition function *Z**ψ*, 1(*L*, *r*) = Tr (*T**V**T**ψ**H*)*r*,  where *T**ψ**H* = (cos*u**H* − *σ**L**z**σ*1*z*sin*u**H*)∏*j* = 1*L* − 1*W**j* + 1/2*H*. The path independence of the defect can be seen directly in the transfer matrix formulation by utilizing a unitary transformation. For example, taking *T**ψ**H* → *U**T**ψ**H**U*− 1 with *U* = *σ**L**x* moves a defect between sites *L* and 1 to one between sites *L* − 1 and *L* (or vice versa) without changing the partition function. Twisted boundary conditions are particularly interesting because the model still obeys a modified version of translation invariance. This follows from the path independence of the defect; it can be placed between any two sites without changing the partition function. To find the modified translation operator, we first define the translation operator ${{{\cal T}}}\_1$, the translation operator for untwisted periodic boundary conditions. Precisely, $ {{{\cal T}}}\_1$ has matrix elements $$\begin{aligned} \bra{ \{ h\_i\}} { {{{\cal T}}}}\_{\mathds 1} \ket{\{h\_i'\}}= \left( \prod\_{i=1}^{L} \delta\_{h\_{i+1},h\_i'} \right)\. \label{T1def}\end{aligned}$$ It is simple to check that ${\cal T}\_{\mathds{1}}$ indeed obeys ${\cal T}\_{\mathds{1}} T^V {\cal T}\_{\mathds{1}}^{-1}=T^V$ and ${\cal T}\_{\mathds{1}} T^H {\cal T}\_{\mathds{1}}^{-1}=T^H$. However, ${\cal T}\_{\mathds{1}}$ does not commute with *T**ψ*, because translating moves the defect over to between sites 1 and 2. However, the defect can be moved back by the unitary transformation *U* = *σ*1*x*. Equivalently, one can first shift the defect to the left, and then translate it back. Thus the modified translation operator $$\begin{aligned} {{{{\cal T}}}}\_\psi = \sigma^x\_1 { {{{\cal T}}}}\_{\mathds 1}={ {{{\cal T}}}}\_{\mathds 1}\sigma^x\_L \label{Tpsidef}\end{aligned}$$ does indeed commute with the transfer matrix in the presence of these twisted boundary conditions: ${ {{{\cal T}}}}\_\psi T^V {{{{\cal T}}}}\_\psi^{-1}=T^V$ and ${ {{{\cal T}}}}\_\psi T^H\_\psi {{{{\cal T}}}}\_\psi^{-1}=T^H\_\psi$. Thus in the continuum and critical limits, the model remains conformally invariant even in the presence of a defect. This simple result has another interesting consequence. Because $({ {{{\cal T}}}}\_\psi)^2=\sigma^x\_1{{{{\cal T}}}}\_{\mathds 1} \sigma^x\_1 {{{{\cal T}}}}\_{\mathds 1} = \sigma^x\_1\sigma^x\_2({{{{\cal T}}}}\_{\mathds 1})^2$, $$\begin{aligned} ({{{{\cal T}}}}\_{\psi})^L=\prod\_{j=1}^L \sigma^x\_j = {{\mathcal{D}}}\_\psi\. \label{TpsiL}\end{aligned}$$ Thus translating around a cycle in the presence of a vertical spin-flip defect is equivalent to inserting a horizontal defect! As a map between partition functions, it can be pictured schematically by $$\begin{aligned} {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_0RRw.pdf}}}}}\ \xrightarrow{({{{{\cal T}}}}\_{\psi})^L}\ {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_RR0pw.pdf}}}}}\ \equiv\ {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_RR0w.pdf}}}}}\. \label{Tpsi}\end{aligned}$$ In section [sec:trivalent] we define precisely the quadrivalent defect Boltzmann weight at the point where the vertical defect meets the horizontal defect, and (see equation [redcross]) derive the equality between partition functions on the right-hand-side of ([Tpsi]). Intuitively, this is a consequence of the fact that any path traversing either cycle of the torus will always cross a spin flip defect. Indeed we also have the equality $$\begin{aligned} \newcommand{\ZRRxwS}{\mathbin{\rotatebox[origin=c]{90}{${\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_RR0w.pdf}}}}}$}}} {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_RR0w.pdf}}}}}\ =\ \ZRRxwS\.\end{aligned}$$ This can also be verified by explicit calculation using the defect commutation relations. Inserting the operator ${{\mathbf{T}}}\_\psi\equiv ({{{{\cal T}}}}\_{\psi})^L$ into the toroidal partition function can be thought of as cutting the torus into a cylinder, twisting one end of the cylinder around a full cycle, and gluing it back together. This process in topology is known as a Dehn twist, and in geometry as the modular transformation ${\bf T}$. (Sadly, we now have three objects conventionally labelled by the letter T.) In the absence of vertical defects, doing a Dehn twist in the horizontal direction leaves the toroidal partition function invariant. In the presence of a vertical defect, it is non-trivial. The duality defect ------------------ We now move on to a more intricate example, the duality defect line. This defect line is associated with the Kramers-Wannier duality of the Ising model described at the end of section [sec:Ising]. We will explain here precisely how to define this defect. In subsequent sections we will show how implementing duality via this topological defect proves a powerful tool in understanding duality, both on and off the critical point. @!0 @M=2mm @R=19mm @C=38mm (a1)&(b1)&(a2)&(b2) && & [fig:IsingDefect-sigma] Since duality replaces a model on any lattice with one on the dual lattice, the parallelogram underlying a duality defect line needs to implement this. Thus a duality defect must join a lattice and its dual, as illustrated for the square lattice and the dual square lattice in figure [fig:IsingDefect-sigma]. Note that the spins are located on opposite corners of the parallelogram, in constrast with the spin-flip defect. We saw that the ${\mathbb Z}\_2$ spin-flip symmetry generator commutes with the individual Boltzmann weights, the relation pictorially presented in ([symmcomm]). This guaranteed that the partition function was independent of continuous deformations of the defect path. This will also hold for the duality defect if the analogous relations are satisfied, namely $$\sum\_a {\mathord{\vcenter{\hbox{\includegraphics[scale=0.8]{DefectCommute\_NoSymbols5.pdf}}}}}= {\mathord{\vcenter{\hbox{\includegraphics[scale=0.8]{DefectCommute\_NoSymbols6.pdf}}}}}\, \qquad {\mathord{\vcenter{\hbox{\includegraphics[scale=0.8]{DefectCommute\_NoSymbols5p.pdf}}}}}= \sum\_\beta {\mathord{\vcenter{\hbox{\includegraphics[scale=0.8]{DefectCommute\_NoSymbols6p.pdf}}}}}\ \label{dualitydefcomm}$$ as well as these relations rotated by 90 degrees, along with ([Isingdefcomm2]). Commuting a duality defect line through a plaquette with its two spins along the vertical axis turns it into one with spins along the horizontal, and vice versa. Thus dragging a duality defect through the lattice using the commutation relations ([dualitydefcomm]) relates a model on one lattice to one on the dual lattice. The solution of the duality defect commutation relations is less obvious than in the spin-flip case. The building block is given by $$\begin{aligned} {\mathord{\sideset{^{a}\_{}}{^{}\_{b}}{\mathop{{\mathord{\vcenter{\hbox{\includegraphics[scale=1.5]{LSquare.pdf}}}}}}}}} = {\mathord{\sideset{^{}\_{b}}{^{a}\_{}}{\mathop{{\mathord{\vcenter{\hbox{\includegraphics[scale=1.5]{LSquare.pdf}}}}}}}}} = 2^{-\frac12} (-1)^{ab}\, \label{dualitydefectdef} \end{aligned}$$ where *a* and *b* are heights (related to the traditional spins by *σ* = ( − 1)*h*). To satisfy ([dualitydefcomm]) one needs to include the internal vertex weight as well (recall: a factor of 1 for each Ising spin and $\sqrt{2}$ for the blank vertices). Multiplying ([dualitydefectdef]) by a factor ( − 1)*μ*( − 1)*ν*(*a* + *b*) still solves the defect commutation relations, but these cancel out in the string of defect Boltzmann weights. We thus set *μ* = *ν* = 0 in the following. It is instructive to work out explicitly the weights for a single closed duality defect loop. For example, consider the duality defect loop surrounding a vertical Boltzmann weight, $$\begin{aligned} {\mathord{\vcenter{\hbox{\includegraphics[scale=0.9]{Sigma\_closed\_loop.pdf}}}}}&= \sum\_{a,b} \underbrace{1\times\sqrt{2}\times1\times\sqrt{2}}\_{\text{internal vertex weights}} \times \underbrace{ 2^{-2} (-1)^{a\alpha} (-1)^{a\beta} (-1)^{b\beta} (-1)^{b\alpha} }\_{\text{duality defect}} \times W\_V(a,b;u) \\ &= \cos{u} + (-1)^{\alpha+\beta}\sin{u} = \sqrt{2}\ {\mathord{\vcenter{\hbox{\includegraphics[scale=0.9]{WH.pdf}}}}}\. \label{eq:IsingClosedLoop} \end{aligned}$$ The partition function with a single closed loop not wrapped around a cycle is therefore simply that without the loop, multiplied by the “quantum dimension” $d\_\sigma=\sqrt{2}$. Because of the defect commutation relations, this statement holds for any topologically trivial loop. Thus we can map a model on to its dual by nucleating a loop (and dividing the partition function by $\sqrt{2}$) and then growing it. This is obvious for a planar graph, but by introducing trivalent vertices in section [sec:trivalent], we will show how to implement duality on the torus and higher genus surfaces as well. Note that in ([eq:IsingClosedLoop]), the factor *u* is associated with a particular plaquette; thus inside the closed loop the couplings *J**x* and *J**y* are interchanged from those outside. This can be made clear by utilising the operator formulation. The operator D*σ* creating a duality defect takes a spin/height configuration on the original lattice to one on the dual, i.e. takes ${\cal H}\to \widehat{\cal H}$, or vice versa. The graphical picture applicable to both is $$\begin{aligned} {{\mathcal{D}}}\_{\sigma} = {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{DualityDefect.pdf}}}}}\label{Dsigmadef}\end{aligned}$$ so that the matrix elements are $$\bra{\{\hat{h}\}} \mathcal{D}\_{\sigma} \ket{\{h\}} = \prod\_{j=1}^L \frac{1}{\sqrt{2}}(-1)^{\hat{h}\_{j-1/2}h\_j } (-1)^{h\_j\hat{h}\_{j+1/2}}\\ =2^{-L/2}(-1)^{\sum\_{j=1}^L (h\_{j-1}+h\_{j})\widehat{h}\_{j-1/2}}$$ where *h*0 ≡ *h**L* and *h**L* + 1/2 ≡ *h*1/2. It is easy to check that $ \bra{\{h\}} \mathcal{D}\_{\sigma} \ket{\{\hat{h}\}}= \bra{\{\hat{h}\}} \mathcal{D}\_{\sigma} \ket{\{h\}}$. Following the standard convention that the Pauli matrices acting on sites in the new vector space are denoted by *μ**j* + 1/2*r*, the operator version of ([dualitydefcomm]) is $$\begin{aligned} \mathcal{D}\_\sigma \sigma\_j^z \sigma\_{j+1}^z = \mu\_{j+1/2}^x \mathcal{D}\_\sigma\ ;\qquad \mathcal{D}\_\sigma \sigma\_j^x &=& \mu\_{j-1/2}^z \mu\_{j+1/2}^z\mathcal{D}\_\sigma \.\end{aligned}$$ Letting *Ŵ**j* + 1/2*V*, *H* be the operators acting on the new vector space analogous to *W**j**V*, *H* from ([IsingWVH]) on the original, these relations then yield $$\begin{aligned} \mathcal{D}\_\sigma W^H\_{j+1/2} = \widehat{W}^V\_{j+1/2} \mathcal{D}\_\sigma\ ; \qquad \mathcal{D}\_\sigma W^V\_j = \widehat{W}^H\_{j} \mathcal{D}\_\sigma\.\end{aligned}$$ where *Ŵ**H*, *V* are given by *W**H*, *V* with *σ**r* replaced by *μ**r*. Thus the duality relates the corresponding transfer matrices by $$\begin{aligned} \mathcal{D}\_\sigma T^H = \widehat{T}^V \mathcal{D}\_\sigma\ ; \qquad \mathcal{D}\_\sigma T^V= \widehat{T}^H \mathcal{D}\_\sigma\.\end{aligned}$$ Thus if a Boltzmann weight associated with a given plaquette on the square lattice is *W**H*(*u**H*), commuting the duality defect gives a plaquette described by Boltzmann weight *W**V*(*u**H*). Likewise it changes *W**V*(*u**V*) → *W**H*(*u**V*). Commuting a duality defect through the lattice thus not only changes the degrees of freedom from the original lattice to the dual lattice, it interchanges *J**x* ↔ *J**y* as well. This indeed is the familiar statement of Ising duality. Our definition of defect lines can be utilized for any lattice or graph; one must just keep track of which factors *u**p* are associated with each plaquette, i.e., each link of the original lattice or graph. Commuting a duality defect through the lattice then changes the spins from the original lattice to the dual lattice, but leaves the *u**p* for each plaquette the same. Since the *μ**j**r* satisfy the same algebra as the *σ**j**r*, D*σ* preserves the algebra of Pauli matrices, and all local physical properties will be invariant under duality. However, non-local properties such as ground-state degeneracy can and typically do change. The reason is that duality is not really a symmetry in the traditional sense: ${\mathcal D}\_\sigma$ is not unitary, or even invertible. An intuitive way of understanding why the duality transformation is not invertible is to note that because duality interchanges *u**H* and *u**V*, it interchanges the ordered phase (*u**H* > *u**V*) with the disordered phase (*u**H* < *u**V*). There are two minima of the free energy in the former case (or equivalently, a two-fold degenerate ground state in the quantum Hamiltonian limit). In the latter case, there is a unique minimum. Thus the duality map is not one-to-one and cannot be unitary. The way this has been traditionally remedied in the operator formulation is to project onto sectors of the theory for which duality is invertible, and consider the appropriate boundary conditions in each sector. In such constructions, duality includes a translation by half a site (to go effectively from lattice to dual), which meant that within one of these sectors, doing duality twice gives a translation, not the identity. Our scheme provides a different perspective where we explicitly see the sectors over which D*σ* is invertible and make use of both the lattice and dual lattice all along. This will prove particularly useful in section [sec:sixteenth]. Thus, despite the name, the duality defect operator D*σ* does *not* square to the identity. In fact, it has a zero eigenvalue. To see this, we work out the defect *fusion algebra*, which describes what happens when two horizontal defects are combined. Putting two duality defects together gives $$\begin{aligned} \bra{\{ h\_{j}' \}} \mathcal{D}\_\sigma^2 \ket{\{ h\_j \}} = 2^{-L}\sum\_{\{\hat{h}\_{j+1/2}=0,1\}} (-1)^{\sum\_j \hat{h}\_{j+1/2}(h\_j+h\_j' +h\_{j+1}+h\_{j+1}')}.\end{aligned}$$ Each sum over the hatted variables imposes the constraint $h\_j+h\_j' +h\_{j+1}+h\_{j+1}' =0 \ {\rm mod}\ 2$. The constraint is satisfied only if *h**j*ʹ = *h**j* for all *j* or *h**j*ʹ = 1 + *h**j* for all *j*. Thus two duality defects either gives the identity, or a spin-flip defect: ${{\mathcal{D}}}\_\sigma^2=\mathds{1}+{{\mathcal{D}}}\_\psi$. The spin-flip defect operator is invertible: ${{\mathcal{D}}}\_\psi^2=\mathds{1}$, so the duality-defect operator is not: ${\cal D}\_\sigma^4=2{\cal D}\_\sigma^2.$ Thus D*σ*2/2 is a projection operator onto the sector where D*ψ* has eigenvalue 1. Putting a duality defect and a spin-flip defect together gives just a duality defect: $$\begin{aligned} \bra{\{ \hat{h}\_{j+1/2} \}} \mathcal{D}\_\sigma \mathcal{D}\_{\psi} \ket{\{ h\_j \}}= 2^{-L/2}(-1)^{\sum\_j (\hat{h}\_{j-1/2 }+\hat{h}\_{j+1/2})(1-h\_j)} = \bra{ \{ \hat{h}\_{j+1/2} \} } \mathcal{D}\_\sigma \ket{ \{ h\_j \}} \,\end{aligned}$$ since ( − 1)∑*j**ĥ**j* − 1/2 + *ĥ**j* + 1/2 = 1. Thus D*σ*(D*σ*2 − 2) = 0, and so the eigenvalues of D*σ* are $0,\pm\sqrt{2}$. The fusion algebra for the topological defects in the Ising model is therefore $$\begin{aligned} {\mathcal D}\_\psi^2=\mathds{1}\ ;\qquad\quad \mathcal{D}\_\sigma \mathcal{D}\_{\psi} =\mathcal{D}\_\psi \mathcal{D}\_{\sigma} = \mathcal{D}\_\sigma\ ;\qquad\quad \mathcal{D}\_{\sigma}^2 = \mathds{1} + \mathcal{D}\_{\psi} \label{fusionalgebra}\end{aligned}$$ Fusion algebras will play an important role in our analysis. In general, the fusion algebra can be be expressed in a compact notation $$\begin{aligned} {{\mathcal{D}}}\_a {{\mathcal{D}}}\_b = \sum\_c N\_{ab}^c {{\mathcal{D}}}\_c\. \label{Ndef}\end{aligned}$$ The D*a* are normalized so that the *N**a**b**c* are non-negative integers. Remarkably, for all the examples discussed in part II as well, it is always possible to find a topological defects obeying ([Ndef]). This is an explicit and exact correspondence between defects on the lattice and a rational conformal field theory, because chiral vertex operators in the latter obey such a fusion algebra. This suggests strongly that defect lines in the lattice can be associated directly with such operators. For example, in the Ising conformal field theory the chiral operators are *ψ*, the Majorana fermion, and *σ* the chiral part of the spin field. These obey the fusion algebra $\psi^2=\mathds{1}$, $\sigma\times\sigma = \mathds{1} +\psi$, and *σ**ψ* = *σ*, just as their namesake defects do. This association of chiral operators with defects explains our naming of the defects, but is much more than a coincidence: it will allow us to generalise Kramers-Wannier duality to the height models in part II. Duality-twisted boundary condition ---------------------------------- Since the duality defect shifts the spins from one sub-lattice to the other, care is needed when considering a single duality defect on a manifold with non-trivial topology. Consider a torus with a single vertical duality defect illustrated in Fig. [Gluing]. The weights on the hatched and unhatched plaquettes are defined using parameters *u* and *u*ʹ respectively, so that when crossing the duality defect on the square lattice, the vertical and horizontal Boltzmann weights change roles. Away from the critical point, this glues together the ordered and disordered phases in a seamless way – the duality defect is mobile. However, in the presence of a single duality defect (or any odd number), the only way to sew together left and right sides is include an immobile, physical domain wall between the ordered and disordered phases. At the domain wall, the two types of hatching meet so that now the two boundaries can be sewed together with a half vertical translation in either direction. The domain wall breaks translation invariance except at the critical point *u* = *u*ʹ, where the hatching is unnecessary and the system becomes translation invariant again. [Gluing] Including a single vertical duality defect is equivalent to a *duality twisted boundary condition*. Due to the presence of the domain wall, there are two transfer matrices one can define in the presence of a vertical duality defect, depending on which sites are occupied by heights/spins. The one arising from the bottom part of figure [Gluing] has matrix elements pictured as $$\begin{aligned} &T\_{\sigma} \ =\ {\mathord{\vcenter{\hbox{\includegraphics[scale=.4]{TDualityTwisted.pdf}}}}}\, \end{aligned}$$ where the one spin in the middle of the defect is summed over. We have left the dots implicit in favour of displaying where the Ising spins reside. We denote by ${\cal H}\_\sigma$ the vector space comprised of the spin configurations on a row, so that *T**σ* takes ${\cal H}\_\sigma\to {\cal H}\_\sigma$. We denote by $\widehat{\cal H}\_\sigma$ the other vector space on which *T**σ* acts, with $$\begin{aligned} &{T}\_{\sigma} \ =\ {\mathord{\vcenter{\hbox{\includegraphics[scale=.4]{TDualityTwistedDual.pdf}}}}}\.\end{aligned}$$ The two transfer matrices differ only by where the Ising degrees of freedom are placed, and the two vector spaces ${\cal H}\_\sigma$ and $\widehat{\cal H}\_\sigma$ are isomorphic. They are related by a half translation, just as ${\cal H}$ and $\widehat{\cal H}$ are. We can thus adopt a unified convention for the *T**σ* acting on the full vector space ${\cal H}\_\sigma\oplus \widehat{\cal H}\_\sigma$ just as for *T* in ([Tdef]): $$\begin{aligned} T\_\sigma \ =\ {\mathord{\vcenter{\hbox{\includegraphics[scale=.4]{TDualityTwistedDots.pdf}}}}}\end{aligned}$$ To be more explicit, consider *T**σ* acting on ${\cal H}$, and let the duality defect be between sites *L* and 1, while the domain wall is at site *r*. The duality defect modifies the adjacent Boltzmann weight to $$\begin{aligned} {\mathord{\vcenter{\hbox{\includegraphics[scale=0.8]{Wsigma.pdf}}}}}\;=\; 2^{-\frac{1}{2}} \big[ \cos u + \sigma^z\_L \otimes \sigma^x\_{1}\sin u \big]\_{a\beta;a\beta'}\.\end{aligned}$$ The transfer matrices in the presence of a vertical duality defect are therefore $$\begin{aligned} T^{V}\_\sigma = \big[ \cos u + \sigma^z\_L \otimes \sigma^x\_{1}\sin u \big]\prod\_{j=2}^{r-1} W^V\_j(u')\, \prod\_{j=r}^{L} W^V\_j(u) \, \qquad T^{H}\_\sigma = \prod\_{j=1}^{r-1}W\_{j+1/2}^H(u) \prod\_{j=r}^{L-1}W\_{j+1/2}^H(u')\. \label{Tsigmadef}\end{aligned}$$ The partition function is then $$\begin{aligned} Z\_{\sigma,1} = {\operatorname{Tr}}(T^{V}\_\sigma T^{H}\_\sigma)\. \label{Zsigma}\end{aligned}$$ The Boltzmann weights *W*1*V* and *W**L* + 1/2*H* do not appear, and there is only one new weight associated with the defect. Thus only 2*L* − 1 different weights are involved in the transfer matrix here. Since the duality defect is topological, there exists a local unitary transformation *U**σ* that shifts it over one site. The way to find it in our approach is to drag the duality defect over the Boltzmann weight using the defect commutation relations. For the defect between *L* and 1, it is $$\begin{aligned} U\_\sigma = {\cal H}\_L\otimes CZ\_{L,1} \label{DualityTranslate}\end{aligned}$$ where ${\cal H}\_L = \frac{1}{\sqrt{2}} (\sigma^x\_L + \sigma^z\_L)$ and $CZ\_{L,1} = \frac{1}{2} (1+\sigma\_L^z +\sigma\_1^z - \sigma\_L^z \sigma\_1^z)$ are the Hadamard and control-Z gates. However, the domain wall is *not* movable by a unitary transformation. Thus only at the critical point *u* = *u*ʹ does the transfer matrix commute with the modified translation operator ${\cal T}\_\sigma= {\cal T}U\_\sigma$. Relating ${\cal T}\_\sigma$ to the corresponding Dehn twist is subtle, and so we defer this analysis to section [sec:sixteenth]. A Majorana zero mode in the Hamiltonian limit --------------------------------------------- Here we consider the Hamiltonian limit and study twisted boundary conditions in the Ising quantum spin chain. The critical case has been investigated thoroughly, so we focus mainly on a very interesting piece of physics in the gapped phase. In the presence of a duality defect, a Majorana zero mode occurs, localized on the domain wall separating ordered and disordered regions. Such zero modes are currently of great current interest in the study of topological order. An interesting feature is that this zero mode is unpaired – the “branch cut” or “string” associated with the zero mode terminates on the defect instead of another zero mode. Including no defect or a vertical spin defect results in periodic or anti-periodic boundary conditions on the spin chain. The Hilbert space on which the quantum Hamiltonian *H* acts is then the vector space ${\cal H}$ on which the transfer matrix acts. In the limit *u**H*, *u**V* → 0, while keeping *u**H*/*u**V* = *J* finite, the transfer matrix approaches the identity matrix. The order *u* corrections are local, and yield the Hamiltonian: $T-\mathds{1} \propto u H + O(u^2)$, where $$\begin{aligned} H^{\pm} = -\sum\_{j=1}^L \sigma^x\_j - J \sum\_{j=1}^{L-1}\sigma\_j^z \sigma\_{j+1}^z \mp J\sigma^z\_L\sigma^z\_{1}\. \label{HamiltonianLimit}\end{aligned}$$ The top sign is for periodic boundary condition, while the bottom sign is for antiperiodic, a spin-flip defect between sites *L* and 1. Both Hamiltonians commute with the global spin-flip Z2 symmetry generated by ${\cal D}\_\psi=\prod\_j \sigma\_j^x$. The spectrum is independent of the location of the defect, since the defect can be moved at will by a unitary transformation. Multiple vertical spin-flip defect lines simply result in minus signs in the corresponding terms *J**σ**k**z**σ**k* + 1*z* in the Hamiltonian. However, any even number can be removed by a unitary transformation; e.g. spin-flip defects at (*k*, *k* + 1) and (*L*, 1) are removed by the unitary transformation ∏*j* = 1*k**σ**j**x*. This is the vertical version of the horizontal defect fusion ${{\mathcal{D}}}\_\psi{{\mathcal{D}}}\_\psi =\mathds{1}$. The Hamiltonian for a duality-twisted boundary conditions includes a domain wall at site *r*, and so acts on the Hilbert space ${\cal H}\_\sigma$. Taking *u*, *u*ʹ → 0 with *u*ʹ/*u* = *J* gives $$\begin{aligned} H^{d} =& - J\sum\_{j=1}^{r-1}\sigma^z\_j \sigma^z\_{j+1} -\sum\_{j =2}^{r-1} \sigma^x\_j -\sum\_{j=r}^{L-1} \sigma^z\_j \sigma^z\_{j+1} - J\sum\_{j=r}^{L} \sigma^x\_j - \sigma\_{L}^z \sigma\_{1}^y\. \label{Hduality}\end{aligned}$$ The final term is the effect of the duality defect, and is unitarily equivalent to the one discussed in and. For later convenience we have done a unitary transformation that changes *σ*1*x* to *σ*1*y* in this term, leaving the others untouched. A straightforward calculation shows that if we wrap the defect line around the boundary differently, i.e., shifted by half a unit cell so that it wraps around *W**L**V*, we find the last term in ([Hduality]) is replaced by *J**σ**L**x* − *σ*1*x* − *J**σ**L**x**σ*1*z*. The spectra for these three spin-chain boundary conditions in the critical case has been found explicitly, but we will show in sec. [sec:partition] that the explicit expressions are not needed to relate the critical partition functions to those in conformal field theory. The spin-flip symmetry remains, still generated by D*ψ*. With a duality defect, as opposed to other boundary conditions, the spectrum is identical in both sectors labelled by eigenvalues  ± 1 of D*ψ*. This double degeneracy is a consequence of Kramers pairing. Precisely, note that conjugating *H**d* and D*ψ* by the operator *σ*1*z* gives $$\begin{aligned} \sigma^z\_1 H^d \sigma^z\_1\,=\, (H^d)^\*\, \qquad\quad \sigma^z\_1\, {{\mathcal{D}}}\_\psi\,\sigma^z\_1\, =\, - {{\mathcal{D}}}\_\psi\.\end{aligned}$$ Since *H**d* is Hermitian, (*H**d*)\* has the same eigenvalues. Hence the pairing: for each eigenstate ∣*ψ*⟩ of *H**d* and D*ψ* with eigenvalues *E* and *ω* respectively, *σ**z*1∣*ψ*⟩\* has eigenvalues *E* and  − *ω* respectively. This degeneracy has an interesting consequence for partition functions. Because the spectra with D*ψ* = 1 and  − 1 are the same, $$\begin{aligned} \hbox{tr }\left(\frac{1+{{\mathcal{D}}}\_\psi}{2} e^{-\beta H\_d} \right) = \hbox{tr }\left(\frac{1-{{\mathcal{D}}}\_\psi}{2} e^{-\beta H\_d} \right) \quad\Rightarrow\quad \hbox{tr }\left({{\mathcal{D}}}\_\psi e^{-\beta H\_d} \right)\ =\ 0\. \label{ZDpsi}\end{aligned}$$ The analogous statement holds in the classical model on the torus as well; see ([ZDpsi2]). Unless *J* = 1 so that the system is critical, the domain wall illustrated in figure [Gluing] breaks translation invariance. In ([Hduality]), the coupling *J* changes roles at the domain wall. Thus on one side the system is ordered, and on the other it is disordered; which is which depends on whether ∣*J*∣ < 1 or ∣*J*∣ > 1. As a consequence, a zero mode operator “localised” on the domain wall appears. This is straightforward to understand by using a Jordan-Wigner transformation of the Hamiltonian into free fermions; see Appendix [zero-mode] for details. In the gapped phase, the Hamiltonian includes a mass term for the fermions that changes sign at the location of the domain wall. Thus one would expect a Majorana zero mode localised there, an expectation confirmed in the appendix. Its construction is similar to that of the zero mode occurring at the edge of the Ising/Kitaev chain with open boundary conditions, and utilises the iterative procedure described in. We consider the case ∣*J*∣ < 1, so that the region from *r* to *L* is ordered (i.e. the two-point function ⟨*σ**j**z**σ**k**z*⟩ in the ground state approaches a non-zero constant in the limit *r* ≪ *j* ≪ *k* ≪ *L* ), while the region from 1 to *r* is disordered. The starting point for the iteration is the operator $$\begin{aligned} \Psi\_0=i\sigma^z\_1\sigma^z\_r \prod\_{k=1}^{r-1} \sigma^x\_k\, \label{psi0}\end{aligned}$$ which commutes with the Hamiltonian in the extreme limit *J* = 0. The string of spin-flip operators terminating at the duality defect means that the only terms in the full Hamiltonian not commuting with Ψ0 are in the region around the duality defect. Moreover, these terms are of order *J*. The resulting commutator can be written as a commutator of *H**d* with an order *J* operator, up to corrections of order *J*2: $$\begin{aligned} \left[H^d,\Psi\_0\right]\ =\ -2J(\sigma^x\_r+\sigma^z\_{r-1}\sigma^z\_{r})\Psi\_0 \ =\ -\left[H^d,\,\Psi\_1\right]\ +\ {\cal O}(J^2)\.\end{aligned}$$ where $$\begin{aligned} \Psi\_1= iJ\sigma^z\_1\left(\sigma^z\_{r+1} \prod\_{k=1}^{r} \sigma^x\_k\ +\ \sigma^z\_{r-1}\prod\_{k=1}^{r-2} \sigma^x\_k\right)\.\end{aligned}$$ Thus [*H**d*, Ψ0 + Ψ1] contains only terms of order *J*2. These terms can then be written as  − [*H**d*, Ψ2] and the iteration continues. The analysis in the appendix shows that that the iteration can stop, giving an operator Ψ = Ψ0 + Ψ1 + Ψ2 + … + Ψ*L* that commutes with the Hamiltonian. For ∣*J*∣ > 1, there is an analogous construction, given in the appendix. All the terms in the zero mode have strings of spin flips terminating at the duality defect. This string is familiar from the Jordan-Wigner transformation: it means the zero mode is non-local in terms of the spins, but local in terms of the fermions. Thus if the fundamental degrees of freedom are fermions, Ψ0 and Ψ1 are localised at the domain wall. As the operators get farther from *r*, the norm of each term falls off exponentially, up to one (important) subtlety discussed in the appendix. One other feature worthy of note is that there is just one zero mode for a given value of *J*, as opposed to the case of an open Ising chain, where there is a zero mode at each end. Here the duality defect plays the role of the other end, but there is no zero mode localised there. The operator *σ*1*z* does result in the Kramers pairing, but it is not a zero mode, as it does not commute with *H**d*. A nice picture also emerges in terms of the spins: the zero mode contains a spin-flip defect stretching from the domain wall to the duality defect. As we have repeatedly emphasised, the duality defect is topological and so can be moved without changing the energy. This suggests that in the full 2d classical model, there is a trivalent junction of defects where a spin-flip defect terminates at the duality defect, and that its location can be changed without changing the free energy/partition function. In the next section [sec:trivalent] we define such a junction precisely. Trivalent junctions of defects ============================== Having described two different types of defects, our next step is to show how they can intersect. In this section we show how to branch and fuse defect lines at intersections. We show that in our construction, it is quite straightforward to find intersections that can be moved around without changing the partition function. Understanding these intersections is essential for defining and analysing partition functions on the torus twisted along multiple cycles. The triangular defect --------------------- @!0 @M=8mm @C=85mm & [fig:Trivalent] The basic object is a trivalent junction of defect lines. The three lines meeting need not be of the same type (and in Ising, cannot be). In our construction of defects using quadrilaterals, a trivalent junction requires introducing a triangular face, as illustrated in figure [fig:Trivalent]. A nontrivial statement we elaborate on below is that any higher valency junction of defect lines can always be decomposed into trivalent junctions. Each side of the triangle is labelled by the type of defect coming into that side of the junction, as well as a height label at each vertex: $$\begin{aligned} {\mathord{\vcenter{\hbox{\includegraphics[scale=0.3]{Trivalent.pdf}}}}}\label{eq:triangle}\end{aligned}$$ In the interest of studying topological defects we will also enforce the condition that the triangle defect is free to move without changing the value of the partition function. We thus require that the triangle defect commutes locally with the transfer matrix. The analogous triangle defect commutation relations to be solved are given by $$\begin{aligned} \sum\_\beta {\mathord{\vcenter{\hbox{\includegraphics[scale=.7]{TrivalentLeft.pdf}}}}}= {\mathord{\vcenter{\hbox{\includegraphics[scale=.7]{TrivalentRight.pdf}}}}}, \label{eq:defcomm3}\end{aligned}$$ When this is satisfied, triangle defects indeed can be moved around without changing the partition function, as illustrated in figure [fig:TrivalentConsistency]. @!0 @M=2mm @R=42mm @C=62mm & & & & [fig:TrivalentConsistency] Finding solutions to ([eq:defcomm3]) appears to be quite an imposing problem, since it involves the defect Boltzmann weights and the triangle weights. However, as we will explain in detail in part II, there is a procedure for finding solutions. The general structure developed there requires that the weight of the triangle is non-vanishing only when the labels obey certain conditions. These conditions work in much the same way as the defect lines themselves – they occur at a termination point of three defect lines with a rule on how to fuse them together. We used the word “fuse” in the previous section to describe for example how creating two successive horizontal duality defects is equivalent to the sum of the identity operator and a horizontal spin-flip defect, i.e. ${{\mathcal{D}}}\_\sigma^2=\mathds{1} + {{\mathcal{D}}}\_\psi$ in operator language. This same algebra ([fusionalgebra]) is utilised to constrain the allowed triangles at a given trivalent junction. For this to make sense, the heights themselves must be labelled by elements of the fusion algebra. In the Ising model, the heights 0 and 1 (i.e. spin up and down) correspond respectively to $\mathds{1}$ and *ψ*, while the empty site corresponds to *σ*. The condition that the triangle be non-vanishing is that at each vertex, the two defect labels must fuse to the height variable at that vertex. Precisely, this means that for the weight of ([eq:triangle]) not to vanish, *N**β**ρ**G**N**ρ**γ**B**N**γ**β**R* ≠ 0, where the *N**a**b**c* are the non-negative integers defined by the fusion algebra ([Ndef]). Since the fusion algebra is associative, this implies that *N**R**G**B* > 0 as well. This correspondence between heights and defect labels seems rather ad hoc and coincidental. However, as will be apparent in the more general height models of part II, it is not, but instead a fundamental part of the definitions of the theory itself. This indicates rather strongly that the defect lines we study are themselves a fundamental part of the theory, and gives excellent intuition into why their topological properties are so remarkable. We therefore need to introduce the identity “defect”, which leaves the partition function invariant, even if wrapped around a cycle. We draw it with dashed lines, and its Boltzmann weights are $$\mathord{\sideset{^{a}\_{\alpha}}{^{b}\_{\beta}}{\mathop{{\mathord{\vcenter{\hbox{\includegraphics[scale=.3]{IsingBubbleRightId.pdf}}}}}}}}\ =\ 2^{-\tfrac14}\,\delta\_{a\alpha}\delta\_{b\beta}\.$$ The factor of $2^{-\tfrac14}$ is needed to cancel the weights ([vertexweights]) associated with the extra sites arising from the defect insertion. The identity defect obviously satisfies the defect commutation relations. The identity defect also can terminate along another defect at a trivalent junction. It is easy to find the corresponding solution of ([eq:defcomm3]); the only non-trivial part is to again compensate for the weights from the extra sites. In our conventions the triangle weights that occur at the termination of an identity defect are then: $$\begin{aligned} {\mathord{\vcenter{\hbox{\includegraphics[scale=.25]{Ising\_trivalenth\_id.pdf}}}}}&\ =\ 2^{-\frac14}\,&&\qquad {\mathord{\vcenter{\hbox{\includegraphics[scale=.25]{Ising\_trivalenthh\_id.pdf}}}}}\ =\ \delta\_{hh'},\\ {\mathord{\vcenter{\hbox{\includegraphics[scale=.25]{Ising\_trivalent\_psi\_psi\_hh.pdf}}}}}& \ =\ 2^{-\frac14}\, &&\qquad{\mathord{\vcenter{\hbox{\includegraphics[scale=.25]{Ising\_trivalent\_psi\_psi\_h.pdf}}}}}\ =\ [\sigma^x]\_{h\tilde{h}} \delta\_{h h'}, \end{aligned} \label{identitytri}$$ where *h* can be the height 0 or 1 (i.e. $\mathds{1}$ or *ψ*, i.e. spin up or down), and the *σ* height corresponds to an empty site. We can then insert an identity defect between say two duality defect lines that touch at a single site. At that site, we can insert two triangle defects using ([identitytri]): $$\begin{aligned} {\mathord{\vcenter{\hbox{\includegraphics[scale=2]{IsingDualityDefectSplitp.pdf}}}}}\quad \ =\ \quad {\mathord{\vcenter{\hbox{\includegraphics[scale=2]{IsingF0Hp.pdf}}}}}\label{insertidentity}\end{aligned}$$ A possible factor of $\sqrt{2}$ coming from ([identitytri]) is cancelled by the extra weight coming from increasing the number of sites by one. The Ising triangle defect commutation relations are fairly straightforward to solve in general, since the only non-vanishing *N**R**G**B* in the Ising model not involving the identity defect are *N**σ**σ**ψ* = *N**σ**ψ**σ* = *N**ψ**σ**σ* = 1. Thus the only non-trivial trivalent junction here involves two duality defects and one spin flip defect. There are two distinct types of labelings of the heights around the triangle, and the weights are $$\begin{aligned} {\mathord{\vcenter{\hbox{\includegraphics[scale=.25]{Ising\_trivalenth.pdf}}}}}&=2^{-\frac14} (-1)^h, & {\mathord{\vcenter{\hbox{\includegraphics[scale=.25]{Ising\_trivalenthh.pdf}}}}}&= \big[ \sigma^x \big]\_{hh'}. \label{psitri}\end{aligned}$$ It is straightforward to check directly that indeed these triangle weights satisfy ([eq:defcomm3]). Partition function identities ----------------------------- The fact that the partition function is independent of the locations of trivalent junctions has a number of remarkable consequences. Here we show that there are *local* relations between different ways defect lines can join, resulting in linear relations among partition functions with different defects present. We develop a toolset that relates microscopic properties of the defect lines to macroscopic pictures, making deriving these linear relations very easy. We showed in ([eq:IsingClosedLoop]) that the partition function including a closed isolated defect loop not wrapped around any cycle is simply $\sqrt{2}$ times that without the loop. Let us now consider a more general situation with a closed loop, still topologically trivial, but with two defect lines terminating at it. A concrete example is two spin-flip defects terminating at a duality defect loop. To express such configurations in a uniform fashion, we use the trivalent defect commutation relations ([eq:defcomm3]) to push the two trivalent junctions so that they are both on top of one defect Boltzmann weight. Furthermore, using the standard defect commutation relations ([dualitydefcomm]) we rearrange the picture so that the defect loop is in line with the defect lines that terminate at it. In a picture, $$\begin{aligned} \xymatrix @!0 @M=3mm @C=38mm{ {\mathord{\vcenter{\hbox{\includegraphics[scale=1.3]{BubbleMacroLeft.pdf}}}}}\ar[r] &{\mathord{\vcenter{\hbox{\includegraphics[scale=.3]{IsingBubbleMicroLeft.pdf}}}}}} \label{IsingBubbleMicro}\end{aligned}$$ It applies for any allowed external heights, while the internal heights (the dots) are summed over. In part II we explain a general method for simplifying the right-hand-side of ([IsingBubbleMicro]). The result is very elegant. First, ([IsingBubbleMicro]) is zero unless *R* = *B*. One can check this in Ising by direct computation; more generally it follows from an orthogonality relation satisfied by the trivalent junction weights. If $P = \mathds{1}$ then obviously ([IsingBubbleMicro]) is simply *δ**R**B**δ**G**B*, and likewise with *P* ↔ *G*. We can explicitly compute the non-trivial ones to give $$\begin{aligned} {\mathord{\vcenter{\hbox{\includegraphics[scale=.3]{IsingBubbleLeftId.pdf}}}}}\ =\ \sqrt{2}\ {\mathord{\vcenter{\hbox{\includegraphics[scale=.3]{IsingBubbleRightId.pdf}}}}}\, \qquad {\mathord{\vcenter{\hbox{\includegraphics[scale=.3]{IsingBubbleLeftSigma.pdf}}}}}\ =\ {\mathord{\vcenter{\hbox{\includegraphics[scale=.3]{IsingBubbleRightSigma.pdf}}}}}\, \qquad {\mathord{\vcenter{\hbox{\includegraphics[scale=.3]{IsingBubbleLeftPsi.pdf}}}}}\ =\ \sqrt{2}\ {\mathord{\vcenter{\hbox{\includegraphics[scale=.3]{IsingBubbleRightPsi.pdf}}}}}\label{MicroBubble}\end{aligned}$$ where these apply independently of the heights at the corners. This special property means the macroscopic defect lines can be manipulated independently of the microscopic configurations, making the analysis quite tractable. Indeed, all these rules are summarized with the following *macroscopic* picture, $$\begin{aligned} {\mathord{\vcenter{\hbox{\includegraphics[scale=1.3]{BubbleMacroLeft.pdf}}}}}\ =\ \delta\_{RB} \sqrt{\frac{d\_G d\_P}{d\_R}} {\mathord{\vcenter{\hbox{\includegraphics[scale=1.3]{BubbleMacroRight.pdf}}}}}\label{BubbleMacro}\end{aligned}$$ where $d\_\sigma = \sqrt{2}$ and *d*0 = *d**ψ* = 1. This relation, and its successors, are local, meaning that they can be applied to any subgraph of defects: ([BubbleMacro]) applies no matter what the lines going out are connected to. Any loop with zero, one or two external defects attached can thus be removed. In fact, setting $B = \mathds{1}$ and *R* = *ψ* shows that any partition function containing a “tadpole” vanishes, e.g. $$\begin{aligned} {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Barbell.pdf}}}}}\ =\ 0 \. \label{barbell}\end{aligned}$$ The zigzag around the edge illustrates that this could be a sub-part of any partition function. The two colours of shading correspond to the two types of staggering of the couplings. These necessarily change across the duality defect, as indicated by the types of hatching in figure [Gluing]. At the critical point *u*  =  *u*ʹ, so there is no staggering and no need for the shading. We already came across a simple example of such a partition function identity when implementing translation invariance in the presence of a vertical spin-flip defect: translating fully around a cycle results in the crossing red lines appearing in ([Tpsi]). The junction is given by two spin-flip defects meeting at a point and then leaving. The termination of the spin-flip defect lines at that point defines a quadrivalent defect Boltzmann weight: $$\begin{aligned} {\mathord{\vcenter{\hbox{\includegraphics[scale=.25]{RedCross.pdf}}}}}\ =\ \begin{cases} 2^{-\frac12} & a=b=c=d = \sigma, \\ \sigma^x\_{ab}\sigma^x\_{bc} \sigma^x\_{cd} \sigma^x\_{da} & \text{otherwise} \end{cases}\end{aligned}$$ Either the defect lines terminate at a dual lattice point shown in the first line or a point on the lattice shown in the second line. The quadrivalent defect of spin flips can be decomposed into two trivalent junctions in two distinct ways. Using the explicit weights ([identitytri]) shows that the two are the same for any labelling of the heights: $$\begin{aligned} {\mathord{\vcenter{\hbox{\includegraphics[scale=.25]{RedCrossNoHeights.pdf}}}}}\ =\ {\mathord{\vcenter{\hbox{\includegraphics[scale=.25]{RedCrossLeft.pdf}}}}}\ =\ {\mathord{\vcenter{\hbox{\includegraphics[scale=.25]{RedCrossRight.pdf}}}}}\. \label{redcross}\end{aligned}$$ The trivalent junctions give a linear relation between these. Since the trivalent junctions in ([redcross]) are free to move we indeed have $$\begin{aligned} \newcommand{\ZRRxLocalwS}{\mathbin{\rotatebox[origin=c]{90}{${\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_RR0Localw.pdf}}}}}$}}} {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_RR0pLocalw.pdf}}}}}\ =\ {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_RR0Localw.pdf}}}}}\ =\ \ZRRxLocalwS \label{redcross2}\end{aligned}$$ as promised. The analogous situation for duality defects is even more interesting. Here the relations between pairs of triangle defects is more intricate; the corresponding identity is $$\begin{aligned} {\mathord{\vcenter{\hbox{\includegraphics[scale=.25]{IsingF0H.pdf}}}}}\ =\ \frac{1}{\sqrt2} \Bigg( {\mathord{\vcenter{\hbox{\includegraphics[scale=.25]{IsingF0V.pdf}}}}}+ {\mathord{\vcenter{\hbox{\includegraphics[scale=.25]{IsingF1V.pdf}}}}}\Bigg), \label{reshuffle1}\end{aligned}$$ as is easily verified from ([identitytri],[psitri]). This identity also holds rotated by 90 degrees, so $$\begin{aligned} {\mathord{\vcenter{\hbox{\includegraphics[scale=.25]{IsingF1H.pdf}}}}}\ =\ \frac{1}{\sqrt2} \Bigg( {\mathord{\vcenter{\hbox{\includegraphics[scale=.25]{IsingF0V.pdf}}}}}- {\mathord{\vcenter{\hbox{\includegraphics[scale=.25]{IsingF1V.pdf}}}}}\Bigg), \label{reshuffle2}\end{aligned}$$ for any labelling of the external vertices. The trivalent defect commutation relations ([eq:defcomm3]) allow the junctions to be moved apart, connected by a defect of the type stretched across. The ordinary defect commutation relations allow the individual lines to be moved at will. All these manoeuvres leave the partition function invariant. Therefore ([reshuffle1]) and ([reshuffle2]) can be recast as linear identities for various partition functions under local rejoining. Schematically, $$\begin{aligned} {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{IsingF0HMacro.pdf}}}}}\ &=\ \frac{1}{\sqrt2} \Bigg( {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{IsingF0VMacro.pdf}}}}}+ {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{IsingF1VMacro.pdf}}}}}\Bigg),\\ {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{IsingF1HMacro.pdf}}}}}\ &=\ \frac{1}{\sqrt2} \Bigg( {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{IsingF0VMacro.pdf}}}}}- {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{IsingF1VMacro.pdf}}}}}\Bigg). \label{reshuffleMacro1}\end{aligned}$$ Again, the two colours of shading correspond to the two types of staggerings and the zig-zag boundary signifies that it could be a sub-part of any partition function. As a consistency check, we can rederive the vanishing of the ‘barbell’’ graph from ([barbell]): $$\begin{aligned} {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Barbell.pdf}}}}}\ =\ \frac{1}{\sqrt{2}} \Bigg( {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{SigmaBubble.pdf}}}}}- {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{ThetaDiagram.pdf}}}}}\Bigg) \ =\ \frac{1}{\sqrt{2}} \Bigg( {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{SigmaBubble.pdf}}}}}- {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{SigmaBubble.pdf}}}}}\Bigg) = 0\.\end{aligned}$$ using first the reshuffling and then ([BubbleMacro]) with *P* = *ψ* and *G* = *R* = *B* = *σ*. Rearranging intersections: *F* moves ------------------------------------ We have shown how to manipulate defect lines at the macroscopic level without any reference to the microscopic details of how the defect lines fuse together. What is even more remarkable is that the macroscopic rules they obey are precisely those of the underlying microscopic degrees of freedom. This of course is not a coincidence, but rather both are consequences of a deep mathematical structure. These linear relations are called in the context of topological field theory **F*-moves*. These are precisely the rules obeyed by defect lines in the conformal field theory describing the continuum limit of the critical model. The rules we derive here apply off the critical point as well. Here we say a little about their general structure. In part II we will go much further. Whenever four defect line segments meet at a point, the intersection can be decomposed into trivalent vertices in two distinct ways. However, as apparent in ([redcross], [reshuffle1], [reshuffle2]), there are linear relations between the two ways the lines join. The *F* moves governing this reshuffling are in general $$\begin{aligned} {\mathord{\vcenter{\hbox{\includegraphics[scale=1.6]{IsingFLeft.pdf}}}}}\;=\; \sum\_Y \Big[F^{RGB}\_P \Big]\_{XY} {\mathord{\vcenter{\hbox{\includegraphics[scale=1.6]{IsingFRight.pdf}}}}}\;. \label{MicroF}\end{aligned}$$ where the coefficients $\Big[F^{RGB}\_P \Big]\_{XY}$ are called *F*-symbols. Conjoined with the trivalent defect commutation relations ([eq:defcomm3]) and the standard duality defect relations ([dualitydefcomm]), the analogous statement to ([BubbleMacro]) holds: $$\begin{aligned} {\mathord{\vcenter{\hbox{\includegraphics[scale=1.3]{FmoveMacroLeft.pdf}}}}}\ =\ \sum\_{Y} \big[ F\_{P}^{RGB} \big]\_{XY}{\mathord{\vcenter{\hbox{\includegraphics[scale=1.3]{FmoveMacroRight.pdf}}}}}\label{MacroF}\end{aligned}$$ The *F*-symbols form a unitary matrix for fixed external legs *R*, *G*, *B*, *P*, and in simple examples like Ising are independent of the heights at the corners. We have already worked out most of the *F*-symbols for Ising. For example, the relation ([redcross]) or ([redcross2]) arises because spin flip defects can only fuse to the identity defect. The corresponding *F*-symbol is thus $\Big[F\_\psi^{\psi \psi \psi} \Big]\_{\mathds{1} \mathds{1}}=1$. The nontrivial *F*-symbols arise when the external legs have duality defects, and follow from ([reshuffle1]) and ([reshuffle2]): $$\begin{aligned} \left[F\_{\sigma}^{\sigma \sigma \sigma} \right]\_{ab} \ &=\ \frac{1}{\sqrt2}(-1)^{ab}, & F\_{a}^{ \sigma b \sigma } \ &=\ F\_{ \sigma }^{a \sigma b} \ =\ (-1)^{ab}. \label{FSymbol}\end{aligned}$$ where *a* or *b* = 0 for the identity defect $\mathds{1}$, and 1 when it is the spin-flip defect *ψ* (the same notation as for the height labels). The other non-zero *F*-symbols are equal to one when allowed by the fusion rules. If not allowed by the fusion rules, they of course give zero. The *F*-moves allow one to decompose any higher-valency junction of defect lines into trivalent junctions, as promised. Crucially, the *F*-symbols are independent of the microscopic height configurations at the corners of the square. This is why we have always been able to omit writing them, and why the schematic pictures contain all the information necessary. The *F* moves are macroscopic operations; they only depend on the types of defect lines present and how they fuse. However, they are uniquely determined by the microscopic rules we have used to define the Boltzmann and defect weights. This beautiful relation between microscopic and macroscopic properties illustrates how defect lines and junctions are fundamental objects in the theory. The relations between them and the continuum results we derive in subsequent sections are natural consequences of this common structure. This common structure is known as a *fusion category*, and arises in studies of rational conformal field theory, anyonic particles, and topological quantum field theory ; for reviews relevant to our story, see. The fusion category provides a general set of rules for manipulating these lines. Written in abstract form, these are summarized as $$\begin{aligned} {\mathord{\vcenter{\hbox{\includegraphics[scale=0.9]{QD.pdf}}}}}= d\_a\, \qquad {\mathord{\vcenter{\hbox{\includegraphics[scale=0.9]{bubblea.pdf}}}}}&= \delta\_{ac}\sqrt{\frac{d\_b d\_{b'}}{d\_a}}\ {\mathord{\vcenter{\hbox{\includegraphics[scale=0.9]{bubbleb.pdf}}}}}\, \qquad {\mathord{\vcenter{\hbox{\includegraphics[scale=0.9]{recouplinga.pdf}}}}}= \sum\_c N\_{ab}^c \sqrt{\frac{d\_c}{d\_a d\_b}}\ {\mathord{\vcenter{\hbox{\includegraphics[scale=0.9]{recouplingb.pdf}}}}}\,\\ &{\mathord{\vcenter{\hbox{\includegraphics[scale=1.6]{Fmove\_self\_dual\_left.pdf}}}}}= \sum\_y \Big[ F^{abc}\_d \Big]\_{xy} {\mathord{\vcenter{\hbox{\includegraphics[scale=1.6]{Fmove\_self\_dual\_right.pdf}}}}}\. \label{threerules} \end{aligned}$$ These should be understood as diagrammatic rules for the defect lines – i.e. each line in the picture corresponds to some defect line with the corresponding label in the Ising model. The first two generalize ([BubbleMacro]) and ([MacroF]), while the third is a special case of the *F*-symbol, whose general form is given in the last. In fact, the triangle and defect weights themselves are expressed in terms of the *F*-symbols: $$\begin{aligned} {\mathord{\vcenter{\hbox{\includegraphics[scale=0.3]{Trivalent.pdf}}}}}= (\frac{d\_R d\_G} {d\_B d\_\beta^2})^{\frac14}\Big[ F^{RG \rho}\_\gamma \Big]\_{ B \beta}, \quad {\mathord{\sideset{^{a}\_{\alpha}}{^{b}\_{\beta}}{\mathop{{\mathord{\vcenter{\hbox{\includegraphics[scale=0.3]{LSquareLabel.pdf}}}}}}}}} = \frac{1}{\sqrt{d\_{G}d\_\sigma}} \Big[F^{a\alpha \beta}\_ b \Big]\_{G\sigma}\end{aligned}$$ The specific rules analysed in this paper are those of the Ising fusion category.[1](#fn1) In part II we will extend this correspondence to a large class of lattice models, including for example the height models of Andrews, Baxter and Forrester. In our lattice defects, we have only required the rules of a fusion category; we have not utilised the braiding appearing in the additional structure present in a modular tensor category. It thus would be quite interesting to see if lattice models could be defined using fusion categories that cannot be extended to allow braiding. Duality on the torus ==================== With these diagrammatic rules we can prove some remarkable facts about the Ising model on the torus with relatively little amounts of work. One particularly useful application is a very straightforward way to implement Kramers-Wannier duality on the torus. Our analysis makes it possible to do all the manipulations using macroscopic pictures that only depend on the topology of the defects. Using such manipulations, we show how to easily derive identities between toroidal partition functions on the lattice and those on the dual lattice. Even more strikingly, by studying the translation operator in the presence of duality defects, we derive the conformal dimension 1/16 of the chiral spin field directly in the critical lattice model. Identities for toroidal partition functions ------------------------------------------- A simple example is two defect loops wrapped around the same cycle of a torus. Applying ([reshuffle1]) gives $$\begin{aligned} {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_sigma\_sigma.pdf}}}}}\ =\ \frac{1}{\sqrt{2}} \Bigg( {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_sigma\_cycle.pdf}}}}}+ {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_sigma\_cycle\_psiw.pdf}}}}}\Bigg). \label{id1}\end{aligned}$$ The fact that the trivalent junction is free to move allows us to pull the duality defect loop around the torus and find a linear combination of two duality defect bubbles, one having no defect loops terminating at it, the other having two spin-flip defects terminating at it. Using equation ([BubbleMacro]) allows these loops to be removed at the expense of some quantum dimensions: $$\begin{aligned} {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_sigma\_sigma.pdf}}}}}= \frac{1}{\sqrt{2}} \Bigg( {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_sigma.pdf}}}}}+ \sqrt{2}\,{{ \mathord{\ooalign{ \vphantom{$\Big|^2$}\cr\hidewidth\ensuremath{{\mathord{\vcenter{\hbox{\includegraphics[scale=1.0225]{DualityM.pdf}}}}}}\hidewidth\cr$\vcenter{ \hbox{${\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_R0Rw.pdf}}}}}$} }$\cr\vphantom{$\Big|\_q$} }}}} \Bigg)= {{ \mathord{\ooalign{ \vphantom{$\Big|^2$}\cr\hidewidth\ensuremath{{\mathord{\vcenter{\hbox{\includegraphics[scale=1.0225]{DualityM.pdf}}}}}}\hidewidth\cr$\vcenter{ \hbox{${\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z0.pdf}}}}}$} }$\cr\vphantom{$\Big|\_q$} }}}} + {{ \mathord{\ooalign{ \vphantom{$\Big|^2$}\cr\hidewidth\ensuremath{{\mathord{\vcenter{\hbox{\includegraphics[scale=1.0225]{DualityM.pdf}}}}}}\hidewidth\cr$\vcenter{ \hbox{${\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_R0Rw.pdf}}}}}$} }$\cr\vphantom{$\Big|\_q$} }}}}\. \label{id2}\end{aligned}$$ We thus recover ${{\mathcal{D}}}\_\sigma {{\mathcal{D}}}\_\sigma = \mathds{1} + {{\mathcal{D}}}\_\psi$ from section [subsec:dualitydefect] by purely local moves, and without any need to manipulate the explicit definitions of the operators. Since ([id2]) shows the operator creating a duality defect wrapped around a cycle is not invertible, it is not immediately obvious how to define a duality transformation on the torus. If it were invertible, the procedure would be to create such a line and its inverse, move one around the other cycle, and then annihilate the two. This would leave behind couplings with the other staggering on the dual lattice, and so show that the partition function for the model and its dual on the torus are the same. The fact that *D**σ* is not invertible means that they are not. We can however use ([id2]) to derive a first identity between toroidal partition functions. We move one of the duality defect lines on the left-hand side around the torus in the vertical direction before fusing it with the other. This then leaves the dual lattice behind, so that $$\begin{aligned} {{ \mathord{\ooalign{ \vphantom{$\Big|^2$}\cr\hidewidth\ensuremath{{\mathord{\vcenter{\hbox{\includegraphics[scale=1.0225]{DualityM.pdf}}}}}}\hidewidth\cr$\vcenter{ \hbox{${\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z0.pdf}}}}}$} }$\cr\vphantom{$\Big|\_q$} }}}} + {{ \mathord{\ooalign{ \vphantom{$\Big|^2$}\cr\hidewidth\ensuremath{{\mathord{\vcenter{\hbox{\includegraphics[scale=1.0225]{DualityM.pdf}}}}}}\hidewidth\cr$\vcenter{ \hbox{${\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_R0Rw.pdf}}}}}$} }$\cr\vphantom{$\Big|\_q$} }}}} \ =\ {{ \mathord{\ooalign{ \vphantom{$\Big|^2$}\cr\hidewidth\ensuremath{{\mathord{\vcenter{\hbox{\includegraphics[scale=1.0225]{DualityG.pdf}}}}}}\hidewidth\cr$\vcenter{ \hbox{${\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z0.pdf}}}}}$} }$\cr\vphantom{$\Big|\_q$} }}}} +{{ \mathord{\ooalign{ \vphantom{$\Big|^2$}\cr\hidewidth\ensuremath{{\mathord{\vcenter{\hbox{\includegraphics[scale=1.0225]{DualityG.pdf}}}}}}\hidewidth\cr$\vcenter{ \hbox{${\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_R0Rw.pdf}}}}}$} }$\cr\vphantom{$\Big|\_q$} }}}}\. \label{zzdual1}\end{aligned}$$ In words, the partition function of the model and its dual are the same, when restricted to the even sector of the ${\mathbb Z}\_2$ symmetry generated by D*ψ*. Using our defect lines, there is a simple way to derive a less obvious relation between the partition functions of the Ising model and its dual. This exploits the fact that creating a topologically trivial defect loop is invertible: $$\begin{aligned} {{ \mathord{\ooalign{ \vphantom{$\Big|^2$}\cr\hidewidth\ensuremath{{\mathord{\vcenter{\hbox{\includegraphics[scale=1.0225]{DualityM.pdf}}}}}}\hidewidth\cr$\vcenter{ \hbox{${\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z0.pdf}}}}}$} }$\cr\vphantom{$\Big|\_q$} }}}} \ =\ \frac{1}{\sqrt{2}}\,{\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_sigma.pdf}}}}}\ =\ \frac{1}{\sqrt{2}}\,{\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_sigma\_cycle.pdf}}}}}\. \label{nucleate1}\end{aligned}$$ The first relation follows from ([eq:IsingClosedLoop]), while the second follows from using the defect commutation relations to move the loop. Using the *F*-moves derived above we also have the equality $$\begin{aligned} \frac{1}{\sqrt{2}}\,{\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_sigma\_cycle.pdf}}}}}\ =\ \frac{1}{2} \Bigg ( {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_sigma\_sigma.pdf}}}}}+{\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_sigma\_sigma\_psiw.pdf}}}}}\Bigg )\. \label{nucleate2} \end{aligned}$$ All spins between the defect lines reside on the dual lattice, as indicated by the darker shading. Next we drag one *σ* defect line around the vertical cycle, giving $$\begin{aligned} {{ \mathord{\ooalign{ \vphantom{$\Big|^2$}\cr\hidewidth\ensuremath{{\mathord{\vcenter{\hbox{\includegraphics[scale=1.0225]{DualityM.pdf}}}}}}\hidewidth\cr$\vcenter{ \hbox{${\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z0.pdf}}}}}$} }$\cr\vphantom{$\Big|\_q$} }}}} \ =\ \frac{1}{2} \Bigg( {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_sigma\_sigma\_dual.pdf}}}}}+{\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_sigma\_psi\_sigmaw.pdf}}}}}\Bigg). \label{eq:Z\_dd}\end{aligned}$$ Finally, using another *F*-move and ([threerules]) we annihilate these duality defect lines to find $$\begin{aligned} {{ \mathord{\ooalign{ \vphantom{$\Big|^2$}\cr\hidewidth\ensuremath{{\mathord{\vcenter{\hbox{\includegraphics[scale=1.0225]{DualityM.pdf}}}}}}\hidewidth\cr$\vcenter{ \hbox{${\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z0.pdf}}}}}$} }$\cr\vphantom{$\Big|\_q$} }}}} \ =\ \frac{1}{2} \Bigg ( \ {{ \mathord{\ooalign{ \vphantom{$\Big|^2$}\cr\hidewidth\ensuremath{{\mathord{\vcenter{\hbox{\includegraphics[scale=1.0225]{DualityG.pdf}}}}}}\hidewidth\cr$\vcenter{ \hbox{${\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z0.pdf}}}}}$} }$\cr\vphantom{$\Big|\_q$} }}}} +{{ \mathord{\ooalign{ \vphantom{$\Big|^2$}\cr\hidewidth\ensuremath{{\mathord{\vcenter{\hbox{\includegraphics[scale=1.0225]{DualityG.pdf}}}}}}\hidewidth\cr$\vcenter{ \hbox{${\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_R0Rw.pdf}}}}}$} }$\cr\vphantom{$\Big|\_q$} }}}} +{{ \mathord{\ooalign{ \vphantom{$\Big|^2$}\cr\hidewidth\ensuremath{{\mathord{\vcenter{\hbox{\includegraphics[scale=1.0225]{DualityG.pdf}}}}}}\hidewidth\cr$\vcenter{ \hbox{${\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_0RRw.pdf}}}}}$} }$\cr\vphantom{$\Big|\_q$} }}}} +{{ \mathord{\ooalign{ \vphantom{$\Big|^2$}\cr\hidewidth\ensuremath{{\mathord{\vcenter{\hbox{\includegraphics[scale=1.0225]{DualityG.pdf}}}}}}\hidewidth\cr$\vcenter{ \hbox{${\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_RR0w.pdf}}}}}$} }$\cr\vphantom{$\Big|\_q$} }}}}\ \Bigg )\. \label{z-to-zdual}\end{aligned}$$ All the partition functions on the right-hand side are defined solely on the dual lattice with the opposite staggering. A more traditional way of deriving this would be to rewite the Ising model in terms of dimers and utilise the analogous result there. Here we have a direct and simple proof valid both off the critical point and on the lattice. This linear constraint is also familiar in conformal field theory, and indeed it will prove quite useful in section [sec:partition] studying modular transformations on critical partition functions. Combining ([zzdual1]) with ([z-to-zdual]) yields another relation between toroidal partition functions: $$\begin{aligned} {{ \mathord{\ooalign{ \vphantom{$\Big|^2$}\cr\hidewidth\ensuremath{{\mathord{\vcenter{\hbox{\includegraphics[scale=1.0225]{DualityM.pdf}}}}}}\hidewidth\cr$\vcenter{ \hbox{${\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z0.pdf}}}}}$} }$\cr\vphantom{$\Big|\_q$} }}}} - {{ \mathord{\ooalign{ \vphantom{$\Big|^2$}\cr\hidewidth\ensuremath{{\mathord{\vcenter{\hbox{\includegraphics[scale=1.0225]{DualityM.pdf}}}}}}\hidewidth\cr$\vcenter{ \hbox{${\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_R0Rw.pdf}}}}}$} }$\cr\vphantom{$\Big|\_q$} }}}} \ =\ {{ \mathord{\ooalign{ \vphantom{$\Big|^2$}\cr\hidewidth\ensuremath{{\mathord{\vcenter{\hbox{\includegraphics[scale=1.0225]{DualityG.pdf}}}}}}\hidewidth\cr$\vcenter{ \hbox{${\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_0RRw.pdf}}}}}$} }$\cr\vphantom{$\Big|\_q$} }}}} + {{ \mathord{\ooalign{ \vphantom{$\Big|^2$}\cr\hidewidth\ensuremath{{\mathord{\vcenter{\hbox{\includegraphics[scale=1.0225]{DualityG.pdf}}}}}}\hidewidth\cr$\vcenter{ \hbox{${\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_RR0w.pdf}}}}}$} }$\cr\vphantom{$\Big|\_q$} }}}}\, \label{zzdualagain}\end{aligned}$$ This provides a nice consistency check, since it also is obtained by directly annihilating the defects in first picture on the right-hand side of ([eq:Zdd]). *F*-moves give an easy way of seeing that one particular toroidal partition function vanishes. An *F* move relates two partition functions with a duality defect wrapped around one cycle and a spin-flip defect around the other: $$\begin{aligned} {\mathbin{\rotatebox[origin=c]{90}{${\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_GRGw.pdf}}}}}$}}}\ =\ [F^{\psi \sigma \psi}\_\sigma]\_{\sigma\sigma}\, {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_RGGw.pdf}}}}}\ =\ -\ {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_RGGw.pdf}}}}}\,\end{aligned}$$ where we have left the domain wall unlabelled and so omitted the shading here. However, shifting the defects part way around the vertical direction shows that the two partition functions are equal as well as opposite, and hence vanish: $$\begin{aligned} {\mathbin{\rotatebox[origin=c]{90}{${\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_GRGw.pdf}}}}}$}}} \ =\ {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_RGGw.pdf}}}}}\ =\ 0\. \label{ZDpsi2}\end{aligned}$$ This is the full 2d classical version of ([ZDpsi]), a consequence of the double degeneracy of the Hamiltonian *H**d* in the presence of a vertical duality defect. Translation across a duality defect ----------------------------------- We have shown in the previous section how *F* moves of defect lines result in identities for toroidal partition functions. Here we extend this analysis further by studying the Dehn twist **T***σ*, the translation around a complete cycle of the torus in a direction orthogonal to a duality defect. We use this to compute the exact shift in the momenta, enabling us to give a direct and exact (mod 1/2) lattice derivation of the conformal spin 1/16 of the chiral spin field in the corresponding conformal field theory. This Dehn twist will also play a large role in section [sec:partition] when we explore the modular transformations of the partition functions and their continuum limits. The presence of a conserved translation operator guarantees conservation of momentum, with the momentum operator ${\cal P}\_{\varphi}$ in the presence of a twisted boundary condition defined via $$e^{\frac{2\pi i {\cal P}\_\varphi}{L}} ={{{\cal T}}}\_\varphi. \label{Pdf}$$ With periodic boundary conditions (no vertical defects), $({ {{{\cal T}}}}\_\mathds{1})^L=1$, so the total momenta (the eigenvalues of ${\cal P}\_1$) are constrained to be integers. With anti-periodic boundary conditions from a vertical spin-flip defect, this is modified. Here $({{{{\cal T}}}}\_{\psi})^L={{\mathcal{D}}}\_\psi$, as detailed in ([TpsiL]). Since ${{{{\cal T}}}}\_\psi$ commutes with the transfer matrix, we can group the states into sectors labelled both by momentum and the eigenvalue  ± 1 of D*ψ*. The momenta in the sector with *D**ψ* =  − 1 and anti-periodic boundary conditions are therefore given by half integers. This seemingly innocuous half-integer shift in the momentum gives an exact result in the conformal field theory describing the continuum limit. In a CFT, one can think of a conformally twisted boundary condition as being “created” by pair producing an operator and its conjugate, dragging one around the torus in the vertical direction, and then re-annihilating the two. A classic result relates the eigenvalues of the energy and momentum operators in the CFT to the scaling dimensions of operators. Up to an integer, the conformal spin (the difference of its right and left scaling dimensions) of the operator creating the twisted boundary condition is then precisely the momentum shift. Thus in the Ising CFT there must be an operator with conformal spin 1/2 modulo an integer. The fermion field *ψ* indeed has conformal spin 1/2. Thus simple manipulations on the lattice model in the presence of conformally twisted boundary conditions yield exact results for the boundary-condition changing operator in the corresponding conformal field theory! Finding the shift in momentum quantisation for duality-twisted boundary conditions requires more work, but ultimately becomes a calculation similar to those in the preceding subsection [sec:partitionidentities]. Since a duality-twisted boundary condition requires a domain wall, only at the critical point is it possible to define a translation operator that commutes with the transfer matrix, as explained above by ([DualityTranslate]). However, a suitably defined Dehn twist **T***σ* does commute for all *u* and *u*ʹ. The Dehn twist **T***ψ* in the presence of a vertical spin-flip defect was described at the end of section [subsec:spinflip]. and qualitatively the effect is the same for **T***σ*: doing a Dehn twist in the presence of a vertical duality defect creates a horizontal defect. At the putative crossing of the vertical and horizontal lines in **T***σ*, the lines avoid, just as for **T***ψ* in ([redcross]). Therefore, schematically, the Dehn twist implements a map of partition functions $$\begin{aligned} {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_0GG.pdf}}}}}\xrightarrow{{{\mathbf{T}}}\_\sigma} {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_GG0.pdf}}}}}\label{Dehn1}\end{aligned}$$ just as in ([redcross2]). For simplicity we specialise to the critical point, where we can omit the shading and the domain wall. Precisely, the operator **T***σ* takes ${\cal H}\_\sigma$ to $\widehat{\cal H}\_\sigma$ and vice-versa; these vector spaces were defined in section [sec:dualitybc]. The matrix elements acting on ${\cal H}$ are given graphically by $$\begin{aligned} {{\mathbf{T}}}\_\sigma^{l} \ =\ {\mathord{\vcenter{\hbox{\includegraphics[scale=.4]{IsingDualityTranslationOperator.pdf}}}}}\. \label{dualitytranslation}\end{aligned}$$ The superscript *l* is to note that the spins have effectively shifted half a unit cell to the left at the defect. The analogous operator acting on ${\widehat{\cal H}\_\sigma}$ has matrix elements $$\begin{aligned} {{{\mathbf{T}}}}\_\sigma^{r}\ =\ {\mathord{\vcenter{\hbox{\includegraphics[scale=.4]{IsingDualityTranslationOperatorDual.pdf}}}}}\. \label{dualitytranslationdual}\end{aligned}$$ The superscript *r* indicates the half-unit-cell shift to the right of the spins. Similarly to the transfer matrix, the pictures ([dualitytranslation]) and ([dualitytranslationdual]) differ only in where the Ising spins are placed. A unified notation for the two in the same fashion as ([Dsigmadef]) is therefore $$\begin{aligned} {{\mathbf{T}}}\_\sigma \ =\ {\mathord{\vcenter{\hbox{\includegraphics[scale=.4]{IsingDualityTranslate.pdf}}}}}\label{Tsigma}\end{aligned}$$ Unlike the single insertion of a horizontal duality defect by D*σ*, the Dehn twist is invertible (in fact unitary). Reversing the orientation of how the full twist is glued back together gives $$\begin{aligned} {{\mathbf{T}}}\_\sigma^{-1} \ =\ {\mathord{\vcenter{\hbox{\includegraphics[scale=.4]{IsingDualityTranslateDagger.pdf}}}}}\end{aligned}$$ Using the defect commutation relations gives ${{\mathbf{T}}}\_\sigma^{-1} {{\mathbf{T}}}\_\sigma \ =\ {{\mathbf{T}}}\_\sigma {{\mathbf{T}}}\_\sigma^{-1}\ =\ \mathds{1}\_\sigma$ where $$\begin{aligned} \mathds{1}\_\sigma \ =\ {\mathord{\vcenter{\hbox{\includegraphics[scale=.4]{IsingIdentitytranslate.pdf}}}}}\,\end{aligned}$$ is the identity operator for ${\cal H}\_\sigma$ and $\widehat{\cal H}\_\sigma$. It is now straightforward to use the defect commutation relations to show that the Dehn twist commutes with the transfer matrix: $$\begin{aligned} {{\mathbf{T}}}\_\sigma T\_\sigma \ =\ T\_\sigma {{\mathbf{T}}}\_\sigma\.\end{aligned}$$ However, one interesting characteristic of **T***σ* is that it is *not* a product of the translation operators ${\mathcal T}\_\sigma$, even at the critical point. Instead, a tedious calculation gives $$\begin{aligned} {{\mathbf{T}}}\_\sigma^2\ =\ {{{\cal T}}}\_\sigma^{2L-1}\. \label{TT}\end{aligned}$$ The effective length of the system is therefore $L\_{\text{eff}} = L-\frac{1}{2}$ for *L* Ising spins. This is also apparent from ([Hf]) in the appendix; there are two fermions per site, but only 2*L* − 1 of them appear in the fermionic version of the Hamiltonian. As a consequence, the *L*-dependence of the momenta is in units of $2\pi/(L-\frac12)$ instead of the usual 2*π*/*L*. To compute the shift in momenta, we need not diagonalize the 2*L* × 2*L* matrix. Instead we derive identities for products of **T***σ* using the *F* moves, and so constrain the momenta. The product **T***σ*2 is easy to compute by fusing together the defect lines locally, as done without the vertical duality defect in ([id2]). Here this results in $$\begin{aligned} {{\mathbf{T}}}\_\sigma^2 \ =\ \frac{1}{\sqrt{2}} \Big( {\mathord{\vcenter{\hbox{\includegraphics[scale=.4]{IsingIdentitytranslate.pdf}}}}}+ {\mathord{\vcenter{\hbox{\includegraphics[scale=.4]{IsingSpinFlipTranslate.pdf}}}}}\Big) \label{MicroTSquared}\end{aligned}$$ Away from the vertical duality defect, this resembles (D*σ*)2 = 1 + D*ψ*. However, the presence of the vertical duality defect results in the trivalent junctions at the end points of the spin-flip defect lines. These leave **T***σ* invertible, unlike D*σ*. This operator identity also can be derived in the same spirit as those for the partition function in section [sec:partitionidentities]. We label $$\begin{aligned} {{\mathbf{T}}}\_{\sigma} \ =\ {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{OGGx.pdf}}}}}\, \qquad \mathds{1}\_\sigma \ =\ {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{OxGG.pdf}}}}}\, \qquad \psi\_\sigma \ =\ {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{ORGGw.pdf}}}}}\.\end{aligned}$$ The multiplication rule for these operators is simply to stack the first one on top of the second. Thus ([MicroTSquared]) can be rederived simply via $$\begin{aligned} {{\mathbf{T}}}\_{\sigma}^2 \ =\ {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{OGGx2.pdf}}}}}\ =\ \sum\_x \frac{1}{\sqrt{2}} {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{OxRGG.pdf}}}}}\ =\ \frac{1}{\sqrt{2}} \big( {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{OxGG.pdf}}}}}+ {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{ORGGw.pdf}}}}}\big) \ =\ \frac{1}{\sqrt{2}} \big( \mathds{1}\_\sigma+\psi\_\sigma \big)\. \label{Tsquared}\end{aligned}$$ The subtleties involving the half-lattice translations and dislocation are immaterial in this method, since this relies only on the topological properties of the defect lines. We have just one more identity to derive. As opposed to ${{\mathcal{D}}}\_\psi^2=\mathds{1}$, the operator *ψ**σ* squares to $-\mathds{1}\_\sigma$: $$\begin{aligned} \psi\_\sigma^2 \ =\ {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{ORGGsquaredw.pdf}}}}}\ =\ {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{ORGGsquaredfusew.pdf}}}}}\ =\ \Big[ F\_{\sigma}^{\psi \sigma \psi} \Big]\_{\sigma \sigma}{\mathord{\vcenter{\hbox{\includegraphics[scale=1]{ORGGsquaredfuseFw.pdf}}}}}\ =\ -\, {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{OxGG.pdf}}}}}\ =\ -\mathds{1}\_\sigma\end{aligned}$$ In the first step we fuse the spin flip defects locally, which gives the identity. In the second step we used another *F*-move defined in ([FSymbol]) to pull the spin flip defects past each other on the duality defect. Lastly we used the relation for the bubble diagrams in ([threerules]) to remove the spin flip defects from the duality defect. Therefore $$\begin{aligned} {{\mathbf{T}}}\_\sigma^4\ =\ \frac{1}{2}\big(\mathds{1}\_\sigma+\psi\_\sigma\big)^2\ =\ \psi\_\sigma\.\end{aligned}$$ Thus $$\begin{aligned} {{\mathbf{T}}}\_\sigma^4 \ =\ \sqrt{2}\,{{\mathbf{T}}}\_\sigma^2-\mathds{1}\_\sigma\, \qquad {{\mathbf{T}}}\_\sigma^8\ =\ -\mathds{1}\_\sigma\,\qquad {{\mathbf{T}}}\_\sigma^{16} \ =\ \mathds{1}\_\sigma\.\ \label{idX}\end{aligned}$$ The identities ([idX]) strongly constrain the allowed momenta. The relation ([TT]) means that when $e^{2\pi ip\_\sigma/L\_{\rm eff}}$ is an eigenvalue of the translation operator ${{{\cal T}}}\_\sigma$, then *e*4*π**i**p**σ* is an eigenvalue of **T***σ*2. Therefore ([idX]) requires $$\begin{aligned} p\_\sigma \ =\ h\_\sigma + n, \quad \text{where} \quad n \in \mathbb{Z} \quad \text{and} \quad h\_\sigma \ =\ \pm \frac{1}{16}, \pm \frac{7}{16}\. \end{aligned}$$ The momentum offset given by *h**σ* is one of our main results. This discrete quantity cannot change as one takes the continuum limit, so it indicates that there is a field of conformal spin exactly  ± 1/16 in the Ising conformal field theory, up to a half integer. It of course has long been known that the chiral part of the spin field indeed has conformal spin 1/16, but our derivation required none of the detailed apparatus of integrability, just simple manipulations using the *F* moves. We will explore this and other consequences in detail in the next section. Partition functions and modular invariance ========================================== By analysing the behaviour of the partition function in the presence of horizontal and vertical duality defects, we showed in the preceding section how to extract an exact continuum quantity, the conformal spin (mod half an integer) of the chiral spin field at criticality. In this section, we show that this correspondence between lattice and continuum results can be extended substantially. By including the appropriate defect lines, we find lattice analogs of the Ising conformal field theory partition functions in all sectors. The method is to exploit *modular transformations*. The partition functions of conformal field theories on a torus exhibit remarkable mathematical structure, and often can be computed exactly. A key tool in this analysis is modular invariance, a consequence of the fact that the physics must be invariant under reparametrizations of the torus. We show that modular transformations are not solely a continuum property: we derive the complete and exact modular transformations for the critical Ising model purely by lattice considerations. Although the Ising lattice model is special in that partition functions can be computed exactly even in the presence of defects, this knowledge is not necessary for our computation. Indeed, in part II we will derive similar results for models where it is not possible to compute partition functions on the lattice. Modular transformations of lattice defect lines ----------------------------------------------- We went to great lengths in preceding sections to show precisely how to define defect lines branching, fusing, and wrapping around cycles of the torus. Therefore in this section we do all the calculations using schematic pictures, since all have precise lattice definitions, up to ambiguities of half-lattice translations that do not affect the partition functions in the continuum limit. We restrict to the critical point (*u**H* = *u**V* or *u* = *u*ʹ on the square lattice), so we need no shading in the presence of the duality defect. The most general defects we need consider have toroidial partition functions $$\begin{aligned} Z\_{ac}^b \equiv {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Partition\_basis\_abc.pdf}}}}}\ \ :\ \ N\_{ac}^b \neq0\.\end{aligned}$$ Any other toroidal partition functions involving defect lines can be reduced to sums over these by utilising the *F* moves described in section [sec:Fmoves], summarized in ([threerules]). Modular transformations form a group with two generators, both of which were discussed above. One generator, conventionally labelled **S**, simply exchanges the two cycles of the torus. The other is the Dehn twist **T**, discussed at length in section [sec:sixteenth], with the precise lattice definitions of the particular cases **T***σ* and **T***ψ* given by ([Tsigma]) and ([TpsiL]) respectively. The modular transformations act on the defect partition functions as $$\begin{aligned} {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Partition\_basis\_abc.pdf}}}}}\ \xrightarrow{\ {{\mathbf{S}}}\ } {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{SPartitionBasis.pdf}}}}}\,\qquad\qquad {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Partition\_basis\_abc.pdf}}}}}\ \xrightarrow{\ {{\mathbf{T}}}\ }\ {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{TPartitionBasisabc.pdf}}}}}\. \label{STgen}\end{aligned}$$ Using the *F* moves and other transformations described above, the right-hand side of the latter can be written as a linear combination of the *Z**a**b**c*. We now show explicitly how the lattice modular transformations act on all the states *Z**a**c**b*, deriving a general formula for **S** and **T** in the critical Ising lattice model. It turns out to be slightly nicer to compute **S****T****S** instead of **T**; since ${{\mathbf{S}}}^2=\mathds{1}$, **T** of course can be extracted. A critical toroidal partition function with no defects (i.e. periodic boundary conditions around both cycles) must be invariant under both modular transformations, since they simply reparamaterise the torus: $$\begin{aligned} {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z0.pdf}}}}}\ \xrightarrow{{{\mathbf{S}}}} {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z0.pdf}}}}}&& {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z0.pdf}}}}}\ \xrightarrow{ {{\mathbf{S}}}{{\mathbf{T}}}{{\mathbf{S}}}} {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z0.pdf}}}}}\end{aligned}$$ When defects are present, however, they need not be. Wrapping a spin-flip defect around one or both cycles results in anti-periodic boundary conditions. These three types of configurations close under the action of both **T** and **S**: $$\begin{aligned} \xymatrix @C=12mm { {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_0RRw.pdf}}}}}\ar[r]^{{{\mathbf{S}}}} & {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_R0Rw.pdf}}}}}& {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_R0Rw.pdf}}}}}\ar[r]^{{{\mathbf{S}}}} & {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_0RRw.pdf}}}}}& {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_RR0w.pdf}}}}}\ar[r]^{{{\mathbf{S}}}} & {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_RR0w.pdf}}}}}& \\ {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_0RRw.pdf}}}}}\ar[r]^{{{\mathbf{S}}}{{\mathbf{T}}}{{\mathbf{S}}}} & {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_0RRw.pdf}}}}}& {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_R0Rw.pdf}}}}}\ar[r]^{{{\mathbf{S}}}{{\mathbf{T}}}{{\mathbf{S}}}} & {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_RR0w.pdf}}}}}& {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_RR0w.pdf}}}}}\ar[r]^{{{\mathbf{S}}}{{\mathbf{T}}}{{\mathbf{S}}}} & {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_R0Rw.pdf}}}}}& } \end{aligned}$$ It is a little more work to find modular transformations involving duality-twisted boundary conditions. Obviously, $$\begin{aligned} {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_0GG.pdf}}}}}\ \xrightarrow{{{\mathbf{S}}}} {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_G0G.pdf}}}}}\quad \text{and} \quad {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_G0G.pdf}}}}}\ \xrightarrow{{{\mathbf{S}}}} {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_0GG.pdf}}}}}\ \end{aligned}$$ For a duality defect wrapping around both cycles of the torus, $$\begin{aligned} \newcommand{\ZGGxS}{\mathbin{\rotatebox[origin=c]{90}{${\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_GG0.pdf}}}}}$}}} {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_GG0.pdf}}}}}\ \xrightarrow{{{\mathbf{S}}}} \ZGGxS = \frac{1}{\sqrt{2}} \Bigg({\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_GG0.pdf}}}}}+ {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_GGRw.pdf}}}}}\Bigg)\,\end{aligned}$$ where the equality follows from using an *F* move in the middle of the figure. Another configuration with duality-twisted boundary conditions around both cycles has a spin-flip defect beginning and ending on the duality defect. The analogous computation gives $$\begin{aligned} \newcommand{\ZGGRwS}{\mathbin{\rotatebox[origin=c]{90}{${\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_GGRw.pdf}}}}}$}}} {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_GGRw.pdf}}}}}\ \xrightarrow{{{\mathbf{S}}}} \ZGGRwS = \frac{1}{\sqrt{2}} \Bigg({\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_GG0.pdf}}}}}- {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_GGRw.pdf}}}}}\Bigg)\.\end{aligned}$$ The last ones to consider correspond to a mix of boundary conditions; duality twisted in one direction and anti-periodic in the other. These are given by $$\begin{aligned} \newcommand{\ZRGGwS}{\mathbin{\rotatebox[origin=c]{90}{${\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_RGGw.pdf}}}}}$}}} \newcommand{\ZGRGwS}{\mathbin{\rotatebox[origin=c]{90}{${\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_GRGw.pdf}}}}}$}}} {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_RGGw.pdf}}}}}\ \xrightarrow{{{\mathbf{S}}}} \ZRGGwS = -\,{\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_GRGw.pdf}}}}}\quad \text{and} \quad {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_GRGw.pdf}}}}}\ \xrightarrow{{{\mathbf{S}}}} \ZGRGwS = -\,{\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_RGGw.pdf}}}}}\end{aligned}$$ where the right hand side of each equality follows again from ([threerules]). Finally, the action of **S****T****S** on the partition functions involving duality defects is a fun exercise to compute, yielding $$\begin{aligned} {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_GG0.pdf}}}}}\ \xrightarrow{{{\mathbf{S}}}{{\mathbf{T}}}{{\mathbf{S}}}}\ {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_G0G.pdf}}}}}\qquad \text{and} \qquad {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_GGRw.pdf}}}}}\ \xrightarrow{{{\mathbf{S}}}{{\mathbf{T}}}{{\mathbf{S}}}}\ -\, {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_GRGw.pdf}}}}}\end{aligned}$$ as well as $$\begin{aligned} {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_G0G.pdf}}}}}\ \xrightarrow{{{\mathbf{S}}}{{\mathbf{T}}}{{\mathbf{S}}}} \frac{1}{\sqrt{2}} \Bigg({\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_GG0.pdf}}}}}+ {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_GGRw.pdf}}}}}\Bigg) \quad \text{and} \quad {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_GRGw.pdf}}}}}\ \xrightarrow{{{\mathbf{S}}}{{\mathbf{T}}}{{\mathbf{S}}}} \frac{1}{\sqrt{2}} \Bigg({\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_GG0.pdf}}}}}- {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_GGRw.pdf}}}}}\Bigg)\.\end{aligned}$$ We now summarize the transformations in matrix form. To do so we pick the basis given by $$\begin{aligned} \Bigg\{{\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z0.pdf}}}}}\,,\,{\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_0RRw.pdf}}}}}\,,\, {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_R0Rw.pdf}}}}}\,,\, {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_RR0w.pdf}}}}}\,,\, {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_GG0.pdf}}}}}\,,\, {\mathord{\vcenter{\hbox{\includegraphics[scale=1]{Z\_GGRw.pdf
arxiv_0000331
Moduli space of *J*-holomorphic subvarieties ============================================ We study the moduli space of *J*-holomorphic subvarieties in a 4-dimensional symplectic manifold. For an arbitrary tamed almost complex structure, we show that the moduli space of a sphere class is formed by a family of linear system structures as in algebraic geometry. Among the applications, we show various uniqueness results of *J*-holomorphic subvarieties, *e.g.* for the fiber and exceptional classes in irrational ruled surfaces. On the other hand, non-uniqueness and other exotic phenomena of subvarieties in complex rational surfaces are explored. In particular, connected subvarieties in an exceptional class with higher genus components are constructed. The moduli space of tori is also discussed, and leads to an extension of the elliptic curve theory. Introduction ============ In this paper, we study the moduli space of *J*-holomorphic subvarieties where the almost complex structure *J* is tamed by a symplectic form. Recall *J* is said to be tamed by a symplectic form *ω* if the bilinear form *ω*( ⋅ , *J*( ⋅ )) is positive definite. When we say *J* is tamed, we mean it is tamed by an arbitrary symplectic form unless it is said otherwise. *J*-holomorphic subvarieties are the analogues of one dimensional subvarieties in algebraic geometry. In our paper, the ambient space *M* is of dimension four, where subvarieties are just divisors. In, Taubes provided systematic local analysis of its moduli space $\mathcal M\_e$ of *J*-holomorphic subvarieties in a class $e\in H^2(M, \mathbb Z)$ with the Gromov-Hausdorff topology, in particular when the almost complex structure *J* is chosen generically. For precise definitions and basic properties, see section 2.1. For an almost complex structure *J*, and a class $e\in H^2(M, \mathbb Z)$, we introduce the *J*-genus of *e*, $$\label{J-genus} \begin{array}{lll} g\_J(e)&=&\frac{1}{2}(e\cdot e+K\_J\cdot e)+1, \end{array}$$ where *K**J* is the canonical class of *J*. A **K**J*-spherical class* (sometimes called sphere class if there is no confusion of choosing a canonical class) is a class *e* which could be represented by a smoothly embedded sphere and *g**J*(*e*) = 0. An exceptional curve class *E* is a *K**J*-spherical class such that *E*2 = *K* ⋅ *E* =  − 1.[1](#fn1) For a generic tamed *J*, any exceptional curve class is represented by a unique embedded *J*-holomorphic sphere with self-intersection  − 1. For an arbitrary *J*, even it is tamed, the behaviour of reducible *J*-holomorphic subvarieties could be very wild. There are even some unexpected phenomenon for a *K**J*-spherical class. For instance, there are classes of exceptional curves, such that the moduli space are of complex dimension 1 and some representatives have an elliptic curve component. One such example is constructed in, recalled in section 6.1. It shows that an exceptional curve class in $\mathbb CP^2\#8\overline{\mathbb CP^2}$ has a $\mathbb CP^1$ family of subvarieties and some of them have an elliptic curve as one of irreducible components. Such examples, although very simple, were not generally expected by symplectic geometers. Since the Gromov-Witten invariant is 1, people expected to have uniqueness in some sense. This example is extended to all sphere classes in Proposition [T2comp]. This sort of examples could be even wilder. The example constructed above Question 4.18 in is disconnected and has a genus 1 component. In Example [connectedgenus3], we show the existence of a rational complex surface such that there is a *connected* subvariety with a genus 3 component in an exceptional curve class. Moreover, the graph attached to the subvariety has a loop. This does not contradict to Gromov-Witten theory. In fact, none of the subvarieties in a spherical class with higher genus irreducible components contributes to the Gromov-Witten invariant of *e*, see Remark [notGW]. In, the notion of *J*-nefness is introduced. A class is said to be **J*-nef* if it pairs non-negatively with all *J*-holomorphic subvarieties. This condition prevents all the exotic phenomena mentioned in the above. Under this assumption, the topological complexity, *e.g.* the genus of each irreducible component and the intersection theory, is well controlled. The result is particularly nice when *g**J*(*e*) = 0. In this case, all the irreducible components of subvarieties in class *e* are rational curves (comparing to Proposition [T2comp] and Example [connectedgenus3]). Moreover, when *e* is a sphere class with *e* ⋅ *e* ≥ 0, we know there is always a smooth *J*-holomorphic curve in class *e*. Both results are sensitive to the nefness condition. In particular, they no longer hold when *e* is an exceptional curve class in a rational surface as we mentioned above. However, there are no such examples in irrational ruled surfaces. Here, irrational ruled surfaces are smooth 4-manifolds diffeomorphic to blowups of sphere bundles over Riemann surfaces with positive genus. [intro1] Let *M* be an irrational ruled surface, and *E* an exceptional class. Then for any tamed *J* and any subvariety in class *E*, each irreducible component is a rational curve of negative self-intersection. Moreover, the moduli space $\mathcal M\_E$ is a single point. In particular, it confirms Question 4.18 of for irrational ruled surfaces.[2](#fn2) As other results in this paper, our statement works for an arbitrary tamed almost complex structure, this gives us much more freedom for geometric applications than a generic statement. The first statement follows from the fact that the positive fiber class of an irrational ruled surface is *J*-nef for any tamed *J* (Proposition [smoothT]). Here the positive fiber class is the unique *K**J*-spherical class of square 0. Then the *J*-nefness technique in gives the desired result. The proof of Proposition [smoothT] requires a new idea. This is based on a simple observation that the adjunction number of a class *e* is the Seiberg-Witten dimension of  − *e*. When the class is not *J*-nef and the *J*-genus of the class is positive, the wall crossing formula of Seiberg-Witten theory would produce non-trivial subvarieties with trivial homology class. To summarize, this observation gives us a strategy to show certain class is *J*-nef. We expect this observation, along with the nefness technique in, would lead to more applications. See the discussion in section 3. The second statement of Theorem [intro1] follows from a uniqueness result of reducible subvarieties, Lemma [uniquereducible]. This lemma constraints the reducible subvarieties by intersection theory of subvarieties. This is an important ingredient for almost all the results in this paper. In fact, it follows directly from the second statement of Theorem [intro1] that the *J*-holomorphic subvariety in class *E* is connected and has no cycle in its underlying graph for any tamed *J* by Gromov compactness, since these properties hold for the Gromov limit of smooth pseudoholomorphic rational curves. The nefness of the positive fiber class and Lemma [uniquereducible] also lead to the structure of the moduli space of a sphere class in irrational surfaces for an arbitrary tamed almost complex structure. [intro2] Let *M* be an irrational ruled surface of base genus *h* ≥ 1. Then for any tamed *J* on *M*, 1. there is a unique subvariety in the positive fiber class *T* passing through a given point; 2. the moduli space $\mathcal M\_T$ is homeomorphic to Σ*h*, and there are finitely many reducible varieties; 3. every irreducible rational curve is an irreducible component of a subvariety in class *T*. Theorem [intro1] and Theorem [intro2](1-2) hold for generic tamed *J* on general ruled surfaces regardless they are rational or not. But they hold for arbitrary tamed *J* only in irrational case. It is likely the following version of Theorem [intro2](3) is true for general rational surfaces as well: every irreducible negative rational curve is an irreducible component of a subvariety in a sphere class of nonnegative self-intersection. In algebraic geometry, Theorem [intro2] could be explained by the linear systems. Recall the long exact sequence $$\cdots \longrightarrow H^1(M, \mathcal O)\longrightarrow H^1(M, \mathcal O^\*)\stackrel{c\_1}{\longrightarrow} H^2(M, \mathbb Z)\longrightarrow \cdots$$ A divisor *D* gives rise to a line bundle $L\_D\in \hbox{Pic}(M)=H^1(M, \mathcal O^\*)$. When *M* is projective, the group of divisor classes modulo linear equivalence is identified with $\hbox{Pic}(M)$. The Poincaré-Lelong theorem says that *c*1(*L**D*) = *P**D*[*D*]. In our setting, we fix the class $e\in H^2(M, \mathbb Z)$ (indeed its Poincaré dual, but we will not distinguish them in this paper). Any line bundle *L* with *c*1(*L*) = *e* would give a projective space family of effective divisors, *i.e.* the linear system $(\Gamma(M, L)\setminus \{0\})/ \mathbb C^\*$, in the moduli space $\mathcal M\_e$. The union of such projective spaces with respect to all possible line bundles with *c*1(*L*) = *e* is exactly $\mathcal M\_e$. Two fibers of an irrational ruled surface are not linearly equivalent, since they are not connected through a family parametrized by rational curves. Hence each projective space is just a point, and the family of these spaces is parametrized by a section of the ruled surface which is diffeomorphic to Σ*h*. In fact, this Σ*h* is embedded in its Jacobian which is a complex tori *T*2*h*. Theorem [intro1] could also be interpreted by the linear system, where $\mathcal M\_E=\mathbb CP^0$. When *M* is simply connected, the long exact sequence implies the uniqueness of the line bundle with given Chern class. Hence the moduli space is always a projective space. It is very interesting to see whether it still holds for a tamed almost complex structure. The following is for rational surfaces. [cpl] Let *J* be a tamed almost complex structure on a rational surface *M*. Suppose *e* is a primitive class and represented by a smooth *J*-holomorphic sphere. Then $\mathcal M\_e$ is homeomorphic to $\mathbb CP^l$ where *l* = max{0, *e* ⋅ *e* + 1}. In particular, it partially confirms Question 5.25 in. Here, *M* is called a rational surface if it is diffeomorphic to *S*2 × *S*2 or $\mathbb CP^2\#k\overline{\mathbb CP^2}$. We remark that even the connectedness of the moduli spaces $\mathcal M\_e$ appearing in Theorems [intro2] and [cpl] was not known. For the proof of the result, we view $\mathbb CP^l$ as $\hbox{Sym}^lS^2$, the *l*-th symmetric product of *S*2. There are two main steps in the argument. First we need to find a ``dual" smooth *J*-holomorphic rational curve in a class *e*ʹ whose pairing with *e* is *l*. This is achieved by a delicate homological study of *J*-nef classes and techniques from. Hence the intersection of elements in $\mathcal M\_e$ with this rational curve would give elements of $\hbox{Sym}^lS^2$. Then a refined version of Lemma [uniquereducible] would give us the desired identification. The only possible non-primitive sphere classes are Cremona equivalent to a double line class in $\mathbb CP^2\#k\overline{\mathbb CP^2}$. Here Cremona equivalence refers to the equivalence under the group of diffeomorphisms preserving the canonical class *K**J*. In this case, we can still show the connectedness of the moduli space and its irreducible part (Proposition [connected]). The connectedness is important in the study of symplectic isotopy problem. More interestingly, a potential generalization of our argument for Theorem [cpl] leads us to a larger framework which generalizes certain part of the elliptic curve theory. In particular, a non-associative (because of the failure of Cayley-Bacharach theorem for a non-integrable almost complex structure) addition is introduced to measure the deviation from the integrability. On the other hand, some arguments and techniques in this paper and that of could be extended to study moduli space of subvarieties in higher genus classes, in particular, tori or classes with *g**J*(*e*) = 1. In this paper, we focus our discussion on the anti-canonical class of $\mathbb CP^2\#8\overline{\mathbb CP^2}$. We are able to show the following: [introtori] If there is an irreducible (singular) nodal curve in $\mathcal M\_{-K}$, then $\mathcal M\_{smooth, -K}$ and $\mathcal M\_{-K}$ are both path connected. We hope to have a more general discussion of *J*-holomorphic tori in future work. Section 6 contains a couple more applications. First, we show that the example mentioned in the beginning is actually a general phenomenon for any non-negative sphere classes. Namely, Proposition [T2comp] says that some subvarieties in a sphere class of a complex surface have an elliptic curve component. This immediately implies that no sphere class in $\mathbb CP^2\#k\overline{\mathbb CP^2}, k \ge 8$ is *J*-nef for every complex structure *J*. This should be compared with the result mentioned above that the positive fiber class of an irrational ruled surface is *J*-nef for any tamed *J*. The other application is on the symplectic isotopy of spheres to a holomorphic curve. This problem is first studied for plane curves, i.e. symplectic surfaces in $\mathbb CP^2$. In this case, the genus of a smooth symplectic surface is totally determined by its degree *d*. It is now known that any symplectic surface in $\mathbb CP^2$ of degree *d* ≤ 17 is symplectically isotopic to an algebraic curve. Chronologically, for *d* = 1, 2 (i.e. the sphere case) this result is due to Gromov, for *d* = 3 to Sikorav, for *d* ≤ 6 to Shevchishin and finally *d* ≤ 17 to Siebert and Tian. In Theorem [sphereiso], we give an alternative proof of the fact (see *e.g.* ) that any symplectic sphere *S* with self-intersection *S* ⋅ *S* ≥ 0 in a 4-manifold (*M*, *ω*) is symplectically isotopic to a holomorphic rational curve. Besides the techniques of *J*-holomorphic subvarieties, especially the *J*-nefness technique, another important ingredient in our arguments is the Seiberg-Witten theory. In particular, we use SW=Gr and wall-crossing formula frequently. They provide abundant *J*-holomorphic subvarieties when *b*+(*M*) = 1. As an amusing byproduct, we observe in Proposition [hodge] that the corresponding statement of Hodge conjecture for tamed almost complex structure on *M* with *b*+(*M*) = 1 holds. Namely, any element of $H^2(M, \mathbb Z)$ is the cohomology class of a *J*-divisor. We would like to thank Dmitri Panov for helpful discussion which leads to the paper, and Fedor Bogomolov for his interest. We are grateful to the referee for careful reading and very helpful suggestions improving the presentation. The work is partially supported by EPSRC grant EP/N002601/1. *J*-holomorphic subvarieties ============================ In this section, we recall the definition and basic properties of *J*-holomorphic subvarieties. The first two subsections are essentially from. Then an useful technical lemma on the intersection of *J*-holomorphic subvarieties, Lemma [uniquereducible], is proved. Finally, after recalling the basics of Seiberg-Witten theory, we show that the almost Kähler Hodge conjecture holds when *b*+ = 1. *J*-holomorphic subvarieties ---------------------------- A closed set *C* ⊂ *M* with finite, nonzero 2-dimensional Hausdorff measure is said to be an *irreducible *J*-holomorphic subvariety* if it has no isolated points, and if the complement of a finite set of points in *C*, called the singular points, is a connected smooth submanifold with *J*-invariant tangent space. Suppose *C* is an irreducible subvariety. Then it is the image of a *J*-holomorphic map *ϕ* : Σ → *M* from a complex connected curve Σ, where *ϕ* is an embedding off a finite set. Σ is called the model curve and *ϕ* is called the tautological map. The map *ϕ* is uniquely determined up to automorphisms of Σ. A **J*-holomorphic subvariety* Θ is a finite set of pairs {(*C**i*, *m**i*), 1 ≤ *i* ≤ *n*}, where each *C**i* is irreducible *J*-holomorphic subvariety and each *m**i* is a positive integer. The set of pairs is further constrained so that *C**i* ≠ *C**j* if *i* ≠ *j*. When *J* is understood, we will simply call a *J*-holomorphic subvariety a subvariety. They are the analogues of one dimensional subvarieties in algebraic geometry. Taubes provides a systematic analysis of pseudo-holomorphic subvarieties in. A subvariety Θ = {(*C**i*, *m**i*)} is said to be connected if  ∪ *C**i* is connected. We call Θ > Θ0 if Θ − Θ0 is another, possibly empty, subvariety. The associated homology class *e**C* (sometimes, we will also write it by [*C*]) is defined to be the push forward of the fundamental class of Σ via *ϕ*. And for a subvariety Θ, the associated class *e*Θ is defined to be ∑*m**i**e**C**i*. An irreducible subvariety is said to be *smooth* if it has no singular points. A special feature in dimension 4 is that, by the adjunction formula, the genus of a smooth subvariety *C* is given by *g**J*(*e**C*). For a general class *e* in $H^2(M;\mathbb Z)$, recall the *J*-genus of *e* is defined by $$g\_J(e)=\frac{1}{2}(e\cdot e+K\_J\cdot e)+1$$ where *K**J* is the canonical class of *J*. In general, *g**J*(*e*) could take any integer value. Let $\mathcal J^{\omega}$ be the space of *ω*-tamed almost complex structures. Notice the *J*-genus is an invariant for $J\in \mathcal J^{\omega}$ since $\mathcal J^{\omega}$ is path connected and *K**J* is invariant under deformation. Hence, later we will sometimes write *g**ω*(*e*) = *g**J*(*e*) when a symplectic structure *ω* is fixed. Moreover, when *C* is an irreducible subvariety, *g**J*(*e**C*) is non-negative. In fact, by the adjunction inequality in, *g**J*(*e**C*) is bounded from below by the genus of the model curve Σ of *C*, with equality if and only if *C* is smooth. Especially, when *g**J*(*e**C*) = 0, *C* is a smooth rational curve. An element Θ, in the moduli space $\mathcal M\_e$ of subvarieties in the class *e*, is a subvariety with *e*Θ = *e*. $\mathcal M\_e$ has a natural topology in the following Gromov-Hausdorff sense. Let ∣Θ∣ =  ∪ (*C*, *m*) ∈ Θ*C* denote the support of Θ. Consider the symmetric, non-negative function, $\varrho$, on $\mathcal M\_e\times \mathcal M\_e$ that is defined by the following rule: $$\varrho(\Theta, \Theta')=\sup \_{z\in |\Theta|} \hbox{dist}(z, |\Theta'|) +\sup \_{z'\in |\Theta'|} \hbox{dist}(z', |\Theta|).$$ The function $\varrho$ is used to measure distances on $\mathcal M\_e$, where the distance function dist( ⋅ ,  ⋅ ) is defined by an almost Hermitian metric on (*M*, *J*). Given a smooth 2-form *ν* we introduce the pairing (*ν*, Θ) = ∑(*C*, *m*) ∈ Θ*m*∫*C**ν*. The topology on $\mathcal M\_e$ is defined in terms of convergent sequences: A sequence {Θ*k*} in $\mathcal M\_e$ converges to a given element Θ if the following two conditions are met: * $\lim\_{k\to \infty} \varrho (\Theta, \Theta\_k)=0$. * lim*k* → ∞(*ν*, Θ*k*) = (*ν*, Θ) for any given smooth 2-form *ν*. That the moduli space $\mathcal M\_e$ is compact is an application of Gromov compactness, see Proposition 3.1 of. [eff] A homology class $e\in H\_2(M; \mathbb Z)$ is said to be *J*-effective if $\mathcal M\_e$ is nonempty. We use $\mathcal M\_{irr, e}$ to denote the moduli space of irreducible subvarieties in class *e*. Let $\mathcal M\_{red,e}:=\mathcal M\_e \setminus \mathcal M\_{irr,e}$. Given a class *e*, its *J*-dimension is $$\label{l} \iota\_e=\frac{1}{2}(e\cdot e-K\_{J}\cdot e).$$ The integer *ι**e* is the expected (complex) dimension of the moduli space $\mathcal M\_e$. When *g**J*(*e*) = 0, we have *ι**e* = *e* ⋅ *e* + 1. When *e* is a class represented by a smooth rational curve (*i.e.* *J*-holomorphic sphere), we introduce *l**e* = max{*ι**e*, 0}. Given a *k*( ≤ *l**e*)-tuple of distinct points Ω, recall that $\mathcal M\_e^\Omega$ is the space of subvarieties in $\mathcal M\_e$ that contains all entries of Ω. Introduce similarly $\mathcal M\_{irr,e}^{\Omega}$ and $\mathcal M\_{red,e}^{\Omega}$. We will often drop the subscript *e* when there is no confusion. *J*-nef classes --------------- In general, all these moduli spaces could behave wildly. The notion of *J*-nefness provides good control as shown in. A class *e* is said to be **J*-nef* if it pairs non-negatively with any *J*-holomorphic subvariety. When there is a *J*-holomorphic subvariety in a *J*-nef class *e*, *i.e.* *e* is also effective, we have *e* ⋅ *e* ≥ 0. A *J*-nef class *e* is said to be *big* if *e* ⋅ *e* > 0. The vanishing locus *Z*(*e*) of a big *J*-nef class *e* is the union of irreducible subvarieties *D**i* such that *e* ⋅ *e**D**i* = 0. Denote the complement of the vanishing locus of *e* by *M*(*e*). From the definition and the positivity of intersections of distinct irreducible subvarieties, it is clear that there does not exist an irreducible subvariety in class *e* passing through *x* ∈ *Z*(*e*) when *e* is big and *J*-nef. If the support ∣*C*∣ =  ∪ *C**i* of subvariety Θ = {(*C**i*, *m**i*)} is connected, then Theorem 1.4 of says that *g**J*(*e*) ≥ ∑*i**g**J*(*e**C**i*) for a *J*-nef class *e* with *g**J*(*e*) ≥ 0. In this paper, we use the following result which follows from the above genus bound and is read from Theorem 1.5 of. [spheresphere] Suppose *J* is tamed by some symplectic structure, e is a *J*-nef class with *g**J*(*e*) = 0 and $\Theta\in \mathcal M\_e$. Then Θ is connected and each irreducible component of Θ is a smooth rational curve. Moreover, when *e* is *J*-nef and *J*-effective with *g**J*(*e*) = 0, we have the following strong bound for the expected dimension of curve configuration for $\Theta\in \mathcal M\_{red, e}$ (Lemma 4.10 in ) ∑(*C**i*, *m**i*) ∈ Θ*l**e**i* ≤ ∑(*C**i*, *m**i*) ∈ Θ*m**i**l**e**i* ≤ *l**e* − 1. Along with automatic transversality, we have the following which is extracted from Proposition 4.5 and Proposition 4.10 of. [unobstructed] Suppose *e* is a *J*-nef spherical class with *e* ⋅ *e* ≥ 0. Then $\mathcal M\_{irr, e}$ is a non-empty smooth manifold of dimension 2*l**e* and $\mathcal M\_{red, e}$ is a finite union of compact manifolds, each with dimension at most 2(*l**e* − 1). This is an unobstructedness result for the deformation of symplectic surfaces. In, an unobstructed result is obtained. In our circumstance, it implies that when $\mathcal M\_{irr, e}\neq \emptyset$, it is a smooth manifold. Hence, our main contribution is to show $\mathcal M\_{irr, e}\neq \emptyset$ when *e* is *J*-nef. It is important for our applications, since we will deform *J* in $\mathcal J^{\omega}$ and the irreducible part of moduli space need not to be nonempty *a priori*. Our result for $\mathcal M\_{red, e}$ is more general since need each component of Θ has multiplicity one and has self-intersection no less than  − 1. Intersection of subvarieties ---------------------------- We first analyze how an intersection point contributes to the intersection number of two subvarieties. Since every component of a subvariety is an irreducible curve, the intersection number will always contribute positively. There are two typical types of intersections. The first is when two multiple components (*C*, *n*) and (*C*ʹ, *m*) have an intersection point *p*. If the two irreducible curves *C* and *C*ʹ intersect at *p* transversally, then the point *p* contributes *m**n* to the intersection numbers. The second type is when two curves *C* and *C*ʹ have high contact order at *p*. If they are tangent to each other at order *n*, which means the local Taylor expansion coincides up to order *n* − 1, then *p* would contribute *n* to the intersection number. Notice only the local behavior of the two curves matters for the intersection near *p*. Hence the two types could interact simultaneously. Suppose Θ is a subvariety with two irreducible components (*C*1, *m*1) and (*C*2, *m*2), which intersect transversally at point *p*, and Θʹ is another subvariety with a component (*C*ʹ, *m*), passing through *p* and tangent to *C*1 of order *n* at *p*. The point *p* would contribute *n**m**m*1 + *m**m*2 to the intersection of two subvarieties Θ and Θʹ. Later in this paper, we will see in several occasions to prescribe a subvariety passing through given ``points with weight", which will be explained immediately. Corresponding to the above two types of intersections of subvarieties, there are two types of points with weight. The first type, denote by (*x*, *d*) with *x* ∈ *M* and $d\in \mathbb Z$, means the subvariety Θ passes through point *x* with multiplicity *d*. Since no direction or higher order contact is given, the multiplicity here is the sum of weights of all irreducible components of Θ passing through *x*, say (*C*1, *m*1), ⋯, (*C**k*, *m**k*), *i.e.* *d* = *m*1 + ⋯ + *m**k*. The second type, denote by (*x*, *C*, *d*) with *x* ∈ *M*, $d\in \mathbb Z$ and *C* a (local) *J*-holomorphic curve passing through *x*, means subvariety Θ passes through point *x* with multiplicity *d* counted with contact orders with *C*. Precisely, if locally there are local components of Θ, say (*C*1, *m*1), ⋯, (*C**k*, *m**k*) passing through point *x* and tangent to the curve *C* with order *d*1, ⋯, *d**k* respectively, then *d* = *d*1*m*1 + ⋯ + *d**k**m**k*. Here we implicitly assume *C* is of multiplicity one. In the most general case, we consider (*C*, *n*), and the corresponding relation is *d* = *n*(*d*1*m*1 + ⋯ + *d**k**m**k*). Sometimes, we call *C* the ``matching" curve at point *x*. The following strengthens Lemma 4.18 in, considering the first type intersection. [uniquereducible] Let *J* be an almost complex structure on *M*4. Suppose *e* is *J*-nef with *l* = max{*e* ⋅ *e* + 1, 0} and {(*x*1, *d*1), ⋯, (*x**k*, *d**k*)} are points with weight. 1. Suppose two subvarieties $\Theta, \Theta'\in \mathcal M\_e$ do not share irreducible components. If they both pass through these points with weight, then *d*1 + ⋯ + *d**k* < *l*. 2. Let $\Theta=\{(C\_i, m\_i)\}\in \mathcal M\_e$ be a connected subvariety passing through these points with weight such that there are at least *m**i**e* ⋅ *e**C**i* points (counted with multiplicities) on *C**i* for each *i* and all *x**i* are smooth points. Then there is no other such subvariety in class *e* that shares an irreducible component with Θ. The first statement simply follows from positivity of intersection of two distinct irreducible *J*-holomorphic curves. This is because the points (*x**i*, *d**i*) are in the intersection of Θ and Θʹ and each *d**i* is no greater than the local intersection index of them at *x**i*. These local intersection indices are positive integers which add up to *e* ⋅ *e*, although there might be intersection points of Θ and Θʹ other than *x**i*. Thus, the inequality follows. For the second statement, suppose there is another such subvariety Θʹ, such that Θ and Θʹ share at least one common irreducible components. We rewrite two subvarieties $\Theta, \Theta' \in \mathcal M\_e$, allowing *m**i* = 0 in the notation, such that they share the same set of irreducible components formally, i.e. Θ = {(*C**i*, *m**i*)} and Θʹ = {(*C**i*, *m*ʹ*i*)}. Then for each *C**i*, if *m**i* ≤ *m*ʹ*i*, we change the components to (*C**i*, 0) and (*C**i*, *m*ʹ*i* − *m**i*). At the same time, if a point *x*, as one of *x*1, ⋯, *x**k*, is on *C**i*, then the weight is reduced by *m**i* as well. Similar procedure applies to the case when *m**i* > *m**i*ʹ. Apply this process to all *i* and discard finally all components with multiplicity 0 and denote them by Θ0, Θʹ0 and still use (*C**i*, *m**i*) and (*C**i*, *m*ʹ*i*) to denote their components. Notice they are homologous, formally having homology class *e* − ∑*m**k**i* < *m**k**i*ʹ*m**k**i**e**C**k**i* − ∑*m**l**j*ʹ < *m**l**j**m*ʹ*l**j**e**C**l**j* − ∑*m**q**p*ʹ = *m**q**p**m*ʹ*q**p**e**C**q**p*. There are two ways to express the class, by taking *e* = *e*Θ or *e* = *e*Θʹ in the above formula. Namely, it is $$\sum\_{m\_{k\_i}<m\_{k\_i}'}(m\_{k\_i}'-m\_{k\_i})e\_{C\_{k\_i}}+\hbox{others}=e\_{\Theta\_0'}=e\_{\Theta\_0}=\sum\_{m\_{l\_j}'<m\_{l\_j}} (m\_{l\_j}-m'\_{l\_j})e\_{C\_{l\_j}}+\hbox{others}.$$ Here the term ``others" means the terms *m**i**e**C**i* or *m**i*ʹ*e**C**i* where *i* is not taken from *k**i*, *l**j* or *q**p*. Now Θ0 and Θ0ʹ have no common components. By the process we just applied, counted with weight, there are at least *e* ⋅ *e*Θ0 points on Θ0. These points are also contained in Θ0ʹ with right weights. Hence Θ0 and Θ0ʹ would intersect at least *e* ⋅ *e*Θ0 points with weight. We notice that *e* ⋅ *e*Θ0 ≥ *e*Θ0 ⋅ *e*Θ0ʹ. In fact, the difference *e* − *e*Θ0 = *e* − *e*Θ0ʹ has 3 types of terms, any of them pairing non-negatively with the class *e*Θ0. For the terms with index *k**i*, *i.e.* the terms with *m**k**i* < *m**k**i*ʹ, we use the expression of $e\_{\Theta\_0}=\sum\_{m\_{l\_j}'<m\_{l\_j}} (m\_{l\_j}-m'\_{l\_j})e\_{C\_{l\_j}}+\hbox{others}$ to pair with. Since the irreducible curves involved in the expression are all different from *C**k**i*, we have *e**C**k**i* ⋅ *e*Θ0 ≥ 0. Similarly, for *C**l**j*, we use the expression of $e\_{\Theta\_0'}=\sum\_{m\_{k\_i}<m\_{k\_i}'}(m\_{k\_i}'-m\_{k\_i})e\_{C\_{k\_i}}+\hbox{others}$. We have *e**C**l**j* ⋅ *e*Θ0ʹ ≥ 0. For *C**q**p*, we could use either *e*Θ0 or *e*Θ0ʹ. Since *e*Θ0 = *e*Θ0ʹ, we have (*e* − *e*Θ0) ⋅ *e*Θ0 ≥ 0. Moreover, we have the strict inequality *e* ⋅ *e*Θ0 > *e*Θ02. This is because we assume the original Θ, Θʹ are connected and have at least one common component. The first fact implies there is at least one index in *k**i*, *l**j* or *q**p*. The second fact implies at least one of the intersection of *C**k**i*, *C**l**j* or *C**q**p* with *e*Θ0 as in the last paragraph would take positive value. As we have shown that Θ0 and Θ0ʹ would intersect at least *e* ⋅ *e*Θ0 points with weight, the inequality *e* ⋅ *e*Θ0 > *e*Θ02 implies the sum of local intersection indices of Θ0 and Θ0ʹ is greater than the homology intersection number *e*Θ02 of our new subvarieties Θ0 and Θ0ʹ. This contradicts to the local positivity of intersection and the fact that Θ0, Θ0ʹ have no common component. The contradiction implies that Θ is the unique such subvariety as described in the statement. The lemma and its argument will be used later, in particular, Theorem [red-1], Theorem [MT] and Proposition [connected]. A similar statement for the more general second type intersection will be proved by a similar argument and used in Theorem [homeoCPl]. Seiberg-Witten invariants and subvarieties ------------------------------------------ Other than techniques in, another important ingredient of our method is the Seiberg-Witten invariant. We follow the notation in. However we need a more general setting. Let *M* be an oriented 4-manifold with a given Riemannian metric *g* and a spin*c* structure $\mathcal L$. Hence there are a pair of rank 2 complex vector bundles *S*± with isomorphisms $\det(S^+)=\det(S^-)=\mathcal L$. The Seiberg-Witten equations are for a pair (*A*, *ϕ*) where *A* is a connection of $\mathcal L$ and *ϕ* ∈ Γ(*S*+) is a section of *S*+. These equations are *D**A**ϕ* = 0 *F**A*+ = *i**q*(*ϕ*) + *i**η* where *q* is a canonical map *q* : Γ(*S*+) → Ω+2(*M*) and *η* is a self-dual 2-form on *M*. The group *C*∞(*M*; *S*1) naturally acts on the space of solutions. Under this action, the map *f* ∈ *C*∞(*M*; *S*1) sends a pair (*A*, *ϕ*) to (*A* + 2*f**d**f*− 1, *f**ϕ*). It acts freely at irreducible solutions. Recall a reducible solution has *ϕ* = 0, and hence *F**A*+ = *i**η*. The quotient is the moduli space, denoted by $\mathcal M\_M(\mathcal L, g, \eta)$. For generic pairs (*g*, *η*), the Seiberg-Witten moduli space $\mathcal M\_M(\mathcal L, g, \eta)$ is a compact manifold of dimension $$2d(\mathcal L)=\frac{1}{4}(c\_1(\mathcal L)^2-(3\sigma(M)+2\chi(M)))$$ where *σ*(*M*) is the signature and *χ*(*M*) is the Euler number. Furthermore, an orientation is given to $\mathcal M\_M(\mathcal L, g, \eta)$ by fixing a homology orientation for *M*, *i.e.* an orientation of *H*1(*M*) ⊕ *H*+2(*M*). When *b*+(*M*) = 1, the space of *g*-self-dual forms $\mathcal H^+\_g(M)$ is spanned by a single harmonic 2-form *ω**g* of norm 1 agreeing with the homology orientation. Quotient out the space of triple (*p*, (*A*, *ϕ*)) where *p* ∈ *M* and (*A*, *ϕ*) is a solution of Seiberg-Witten equation by based actions *f* ∈ *C*∞(*M*; *S*1) with *f*(*p*) = 1, we obtain a smooth manifold $\mathcal E$. It is a principal *S*1 bundle over $M\times \mathcal M\_M(\mathcal L, g, \eta)$. The slant product with $c\_1(\mathcal E)$ defines a natural map *ψ* from $H\_\*(M, \mathbb Z)$ to $H^{2-\*}(\mathcal M\_M(\mathcal L, g, \eta), \mathbb Z)$. We now assume (*M*, *J*) is an almost complex 4-manifold with canonical class *K*. We denote $e:=\frac{c\_1(\mathcal L)+K}{2}\in H^2(M; \mathbb Z)/(2\hbox{-torsion})$. For a generic choice of (*g*, *η*), the Seiberg-Witten invariant *S**W**M*, *g*, *η*\*(*e*) takes value in $\Lambda^\*H^1(M, \mathbb Z)$. If $d(\mathcal L)<0$, then the SW invariant is defined to be zero. Otherwise, let $\gamma\_1\wedge \cdots\wedge \gamma\_p\in \Lambda^p(H\_1(M, \mathbb Z)/\hbox{Torsion})$, we define $$\label{SWdef} SW^\*\_{M, g, \eta}(e; \gamma\_1\wedge\cdots\wedge \gamma\_p):=\int\_{\mathcal M\_M(\mathcal L, g, \eta)} \psi(\gamma\_1)\wedge\cdots\wedge\psi(\gamma\_p)\wedge\psi(pt)^{d-\frac{p}{2}}.$$ If *b*+ > 1, a generic path of (*g*, *η*) contains no reducible solutions. Hence, the Seiberg-Witten invariant is an oriented diffeomorphism invariant in this case. Hence we can use the notation *S**W*\*(*e*) for the (full) Seiberg-Witten invariant. We will also write $$\dim\_{SW}(e)=2d(\mathcal L)=e^2-K\cdot e$$ for the Seiberg-Witten dimension. In the case *b*+ = 1, there might be reducible solutions on a 1-dimensional family. Recall that the curvature *F**A* represents the cohomology class $-2\pi ic\_1(\mathcal L)$. Hence *F**A*+ = *i**η* holds only if $-2\pi c\_1(\mathcal L)^+=\eta$. This happens if and only if the discriminant $\Delta\_{\mathcal L}(g, \eta):=\int (2\pi c\_1(\mathcal L)+\eta)\omega\_g=0$. With this in mind, the set of pairs (*g*, *η*) with positive (resp. negative) discriminant is called the positive (resp. negative) $\mathcal L$ chamber. We use the notation *S**W*±\*(*e*) for the Seiberg-Witten invariants in these two chambers. The part of *S**W*\*(*e*) (resp. *S**W*±\*(*e*)) in $\Lambda^{i}H^1(M, \mathbb Z)$ will be denoted by *S**W**i*(*e*) (resp. *S**W*±*i*(*e*)). Moreover, in the this paper, we will use *S**W*\*(*e*) instead of *S**W*−\*(*e*) when *b*+ = 1. For simplicity, the notation *S**W*(*e*) is reserved for *S**W*0(*e*). We now assume (*M*, *ω*) is a symplectic 4-manifold, and *J* is a *ω*-tamed almost complex structure. Then the results in equate Seiberg-Witten invariants with Gromov-Taubes invariants that are defined by making a suitable counting of *J*-holomorphic subvarieties. In fact, our *S**W*\*(*e*) used in this paper is essentially the Gromov-Taubes invariant in the literature. In particular, our *S**W*\*(*e*) is the original Seiberg-Witten invariant of the class 2*e* − *K*. The key conclusion we will take from this equivalence is that when *S**W*\*(*e*) ≠ 0, there is a *J*-holomorphic subvariety in class *e*. Moreover, if *S**W*(*e*) ≠ 0, there is a *J*-holomorphic subvariety in class *e* passing through dim*S**W*(*e*) given points. Hence, to produce subvarieties in a given class, we will prove nonvanishing results for *S**W*\*(*e*), usually for *S**W*(*e*). When *b*+(*M*) > 1, an important result of Taubes says that *S**W*(*K*) = 1. When *b*+(*M*) = 1, the key tool is the wall-crossing formula, which relates the Seiberg-Witten invariants of classes *K* − *e* and *e* when dim*S**W*(*e*) ≥ 0. The general wall-crossing formula is proved in. In particular, when *M* is rational or ruled, we have |SW(K-[C])-SW([C])|= 1 & |1+[C]T|h & where *T* is the unique positive fiber class and *h* is the genus of base surface of irrationally ruled manifolds. For a general symplectic 4-manifold with *b*+(*M*) = 1, usually the wall-crossing number for *S**W*(*e*) is hard to determine and sometimes vanishes. However, we still have a simple formula for top degree part of Seiberg-Witten invariant (see Lemma 3.3 (1) of ). [generalWC] Let *M* be a symplectic 4-manifold with *b*+ = 1 and canonical class *K*. Suppose dim*S**W*(*e*) ≥ *b*1. Let *γ*1, ⋯, *γ**b*1 be a basis of $H\_1(M, \mathbb Z)/\hbox{Torsion}$ such that *γ*1 ∧ ⋯ ∧ *γ**b*1 is the dual orientation of that on $\Lambda^{b\_1}(H^1(M, \mathbb Z))$. Then ∣*S**W**b*1(*K* − *e*; *γ*1 ∧ ⋯ ∧ *γ**b*1) − *S**W**b*1(*e*; *γ*1 ∧ ⋯ ∧ *γ**b*1)∣ = 1. Here, *b*1 stands for the first Betti number. In particular, it implies a nonvanishing result: let $e\in H^2(M, \mathbb Z)$ be a class with *e*2 ≥ 0, *K* ⋅ *e* ≤ 0, and at least one of the inequalities being strict, then *S**W*\*(*k**e*) ≠ 0 for sufficiently large *k*. Almost Kähler Hodge conjecture ------------------------------ Let *X* be a non-singular complex projective manifold. The (integral) Hodge conjecture asks whether every class in $H^{2k}(X, \mathbb Q)\cap H^{k, k}(X)$ (resp. $H^{2k}(X, \mathbb Z)\cap H^{k, k}(X)$) is a linear combination with rational (resp. integral) coefficients of the cohomology classes of complex subvarieties of *X*. When $\dim\_{\mathbb C}X\le 3$, Hodge conjecture is known to be true and follows from Lefschetz theorem on (1, 1) classes. The integral Hodge conjecture, which was Hodge’s original conjecture, is known to be false for some projective 3-folds. In this subsection, we will show an amusing result, which basically says that the integral Hodge conjecture, or Lefschetz theorem on (1, 1) classes, is true for almost Kähler 4-manifolds of *b*+ = 1. It is well known that in general the almost Kähler Hodge conjecture statement is not true if *b*+ > 1, even when our manifold is Kähler. The most well known counterexample is a generic CM complex tori. It has no subvarieties in general, but the group of integral Hodge classes has $\dim H^{1,1}(M, \mathbb Z)=2$. See the appendix of. In our situation, $H\_J^+(M)\cap H^2(M, \mathbb K)$ plays the role of $H^{1,1}(M, \mathbb K)$ for $\mathbb K=\mathbb Z$ or $\mathbb Q$. Here *H**J*+(*M*) is called the *J*-invariant cohomology which is introduced in along with the *J*-anti-invariant *H**J*−(*M*). Recall that an almost complex structure acts on the bundle of real 2-forms Λ2 as an involution, by *α*( ⋅ ,  ⋅ ) → *α*(*J* ⋅ , *J* ⋅ ). This involution induces the splitting into *J*-invariant, respectively, *J*-anti-invariant 2-forms Λ2 = Λ*J*+ ⊕ Λ*J*−. Then we define $H\_J^{\pm}(M)=\{ \mathfrak{a} \in H^2(M;\mathbb R) | \exists \; \alpha\in \Lambda\_J^{\pm}, \, d\alpha=0 \mbox{ such that } [\alpha] = \mathfrak{a}\}$. A *divisor* (resp. *$\mathbb Q$-divisor*) with respect to an almost complex structure *J* is a finite formal sum ∑*a**i**C**i* where *C**i* are *J*-holomorphic irreducible curves and $a\_i\in \mathbb Z$ (resp. $a\_i\in \mathbb Q$). [hodge] Let *M* be a symplectic 4-manifold with *b*+(*M*) = 1, and *J* a tamed almost complex structure on it. Any element of $H^2(M, \mathbb Z)$ is the cohomology class of a divisor (with respect to *J*). When *b*+(*M*) = 1, by Corollary 3.4 of, we have *h**J*− = dim*H**J*−(*M*) = 0 and $H\_J^+(M)=H^2(M, \mathbb R)$. Let *e*1, ⋯, *e**b*2 be a $\mathbb Z$-basis of $H^2(M, \mathbb Z)$, and *α*1, ⋯, *α**b*2 2-forms representing them. Since being a *J*-tamed symplectic form is an open condition, if *J* is tamed by a symplectic form *ω*, we can choose *ω* such that $[\omega]\in H^2(M, \mathbb Q)$. Then we can find a large integer *N* and *b*2 + 1 *J*-tamed symplectic forms *ω**i* = *N**ω* + *α**i* with $[\omega\_i]=N[\omega]+e\_i\in H^2(M, \mathbb Z)$ when 1 ≤ *i* ≤ *b*2 and *ω*0 = *N**ω*. Their cohomology classes generate the vector space $H^2(M, \mathbb Z)$. If we choose $L>k:=\max\_i\{0, \frac{K\cdot [\omega\_i]}{[\omega\_i]\cdot [\omega\_i]}\}+b\_1$, we have dim*S**W*(*L*[*ω**i*]) = *L*(*L*[*ω**i*]2 − *K* ⋅ [*ω**i*]) > *L*((*K* ⋅ [*ω**i*] + *b*1) − *K* ⋅ [*ω**i*]) ≥ *b*1. Apply Proposition [generalWC], we have *S**W**b*1(*L*[*ω**i*]) ≠ *S**W**b*1(*K* − *L*[*ω**i*]). We claim that when *L* > *k*, *S**W**b*1(*L*[*ω**i*]) ≠ 0 for any *i*. By wall-crossing, we only need to show that *S**W**b*1(*K* − *L*[*ω**i*]) = 0. We prove it by contradiction. If *S**W*\*(*K* − *L*[*ω**i*]) ≠ 0, then *K* − *L*[*ω**i*] will be the class of a *J*-holomorphic subvariety and hence an *ω**i*-symplectic submanifold. However, when *L* > *k*, we have (*K* − *L*[*ω**i*]) ⋅ [*ω**i*] < 0, which is a contradiction. Hence, we have *S**W*(*L*[*ω**i*]) ≠ 0 for *L* > *k* and there are subvarieties in class *L*[*ω**i*] for any *i*. Let $a\in H^2(M, \mathbb Z)$ be an arbitrary class. Because of the way we choose our *ω**i*, we have *a* = ∑*i* = 0*b*2*a**i*[*ω**i*] with $a\_i\in \mathbb Z$. Now we further write it as *a* = ∑*i* = 0*b*2*a**i*(*L* + 1)[*ω**i*] − ∑*i* = 0*b*2*a**i**L*[*ω**i*], which implies *a* is the cohomology class of a divisor. There is another argument to prove *S**W*(*K* − *L*[*ω**i*]) = 0 for large *L*. This is because *K* − *L*[*ω**i*] pairs negatively with 2*K* for non-rational or non-ruled manifolds, with *H* for $\mathbb CP^2\# k\overline{\mathbb CP^2}$, with a positive fiber class *A* for *S*2 × *S*2, and with the positive fiber class *T* for irrational ruled manifolds. All of the classes mentioned above are SW non-trivial classes with a representative of irreducible *J*-holomorphic non-negative self-intersections. Hence the contradiction follows from Lemma [int>0] by taking *e* = *K* − *L*[*ω**i*]. We remark that the symplectic version of Hodge conjecture holds for any compact symplectic manifolds (*M*2*n*, *ω*). More precisely, in, it shows that any element of $H\_{2k}(M^{2n}, \mathbb Z)$ is a symplectic $\mathbb Q$-cycle in the form $\frac{1}{N}[S\_1^{2k}]-\frac{1}{N}[S\_2^{2k}]$ where *N* is a positive integer and *S**i*2*k* are symplectic submanifolds of dimension 2*k*. Irrational ruled surfaces ========================= In this section, we use the techniques of along with Seiberg-Witten theory to identify the moduli space of *J*-holomorphic subvarieties in the fiber class of irrational ruled surfaces for any tamed almost complex structure *J*. When the irrational ruled surface is minimal, it was handled by McDuff in a series of papers, in particular. For non-minimal irrational ruled surfaces, the structure of reducible subvarieties was not clear for a non-generic tamed almost complex structure. The work of developed a toolbox to study this kind of problems. To apply the results and techniques from, one has to check the *J*-nefness of the classes we are dealing with. For previous applications, like Nakai-Moishezon type duality and the tamed to compatible question, we could always start with a *J*-nef class. However, for most other applications like our problem in this section, we do not know *J*-nefness *a priori*. In the following, we will develop a strategy to verify this technical condition. Then along with the techniques in, we cook up a general scheme to investigate the moduli space of subvarieties (and its irreducible and reducible parts) in a given class. The following lemma is Lemma 2.2 in. Since the statement is very useful and the proof is extremely simple, we include in the following. [int>0] If *C* is an irreducible *J*-holomorphic curve with *C*2 ≥ 0 and *S**W*(*e*) ≠ 0, then *e* ⋅ [*C*] ≥ 0. Since *S**W*(*e*) ≠ 0, we can represent *e* by a possible reducible *J*-holomorphic subvariety. Since each irreducible curve *C*ʹ has [*C*ʹ] ⋅ [*C*] ≥ 0, we have *e* ⋅ [*C*] ≥ 0. Let us now fix the notation. Since the blowups of *S*2 × Σ*h* and nontrivial *S*2 bundle over Σ*h* are diffeomorphic, we will write $M=S^2\times \Sigma\_h\#k\overline{\mathbb CP^2}$ if it is not minimal. Let *U* be the class of {*p**t*} × Σ*h* which has *U*2 = 0 and *T* be the class of the fiber *S*2 × {*p**t*}. Then the canonical class *K* =  − 2*U* + (2*h* − 2)*T* + ∑*i**E**i*. On the other hand, if *M* is a nontrivial *S*2 bundle over Σ*h*, *U* represents the class of a section with *U*2 = 1 and *T* is the class of the fiber. Then *K* =  − 2*U* + (2*h* − 1)*T*. In this section, we assume *h* ≥ 1, *i.e.* *M* is an irrational ruled surface. We will first show that there is an embedded curve in the fiber class. [smoothT] Let *J* be a tamed almost complex structure on irrational ruled surface *M*, then the fiber class *T* is *J*-nef. Hence there is an embedded curve in class *T*. The first statement is equivalent to the following: let *C* be an irreducible curve with [*C*] = *a**U* + *b**T* − ∑*i**c**i**E**i*, then *a* ≥ 0. We prove it by contradiction. Assume there is an irreducible curve with *a* < 0. Then we know that 2*g**J*([*C*]) − 2 = *C*2 + *K* ⋅ [*C*]. We take the projection *f* : *C* → Σ*h* to the base. Its mapping degree is *a* = [*C*] ⋅ *T*. Since Σ*h* has genus at least one, by Kneser’s theorem, we have *C*2 + *K* ⋅ [*C*] = 2*g**J*([*C*]) − 2 ≥ 2*g*(Σ*C*) − 2 ≥ ∣*a*∣(2*h* − 2) ≥ 0. Here Σ*C* is the model curve of the irreducible subvariety *C*. Now we look at the class  − [*C*]. By the above calculation, we have the Seiberg-Witten dimension dim*S**W*( − [*C*]) = *C*2 − *K* ⋅ ( − [*C*]) ≥ 0. Hence, we could apply the wall-crossing formula ∣*S**W*(*K* + [*C*]) − *S**W*( − [*C*])∣ = ∣1 + ( − [*C*]) ⋅ *T*∣*h* = (1 − *a*)*h* ≠ 0. For classes *T* and *e* = *K* + [*C*] = (*a* − 2)*U* + (2*h* − 2 + *b*)*T* + ∑*i*(1 − *c**i*)*E**i* when *M* is not a nontrivial *S*2 bundle (or *e* = *K* + [*C*] = (*a* − 2)*U* + (2*h* − 1 + *b*)*T* when *M* is a nontrivial *S*2 bundle), we have *e* ⋅ *T* = *a* − 2 < 0. We choose an almost complex structure *J*ʹ such that there is an embedded *J*ʹ-holomorphic curve in class *T*. Then apply Lemma [int>0] for this *J*ʹ to conclude that *S**W*(*K* + [*C*]) = 0. Apply, we have *S**W*( − [*C*]) ≠ 0. Hence the class 0 = [*C*] + ( − [*C*]) is a class of subvariety. This contradicts to the fact that *J* is tamed which implies that any positive combinations of curve classes have positive paring with a symplectic form taming *J*. This finishes the proof that *T* is *J*-nef. Note *g**J*(*T*) = 0, any irreducible curve in class *T* would be smooth. Hence, we only need to show the existence of an irreducible curve in class *T*. By Theorem 1.5 of, all components of reducible curves in class *T* are rational curves since *T* is *J*-nef. Furthermore, all the subvarieties are connected since *J* is tamed. Then by the dimension counting formula Equation for reducible subvarieties, we know ∑*l**e**i* ≤ *l**T* − 1 = 0. Here *e**i* is the homology class of each irreducible component and *l**e**i* = max{0, *e**i* ⋅ *e**i* + 1}. Hence *l**e**i* = 0 and all these irreducible components are rational curves of negative self-intersections. It is direct to see from the adjunction formula that there are finitely many negative *J*-holomorphic spheres on an irrational ruled surface. For a complete classification of symplectic spheres on irrational ruled surfaces, see section 6. Since *S**W*(*T*) ≠ 0 and dim*S**W*(*T*) = 2, any point of *M* lies on a subvariety in class *T*. Since the part covered by reducible curves is a union of finitely many rational curves, as we have shown above, we conclude that there has to be an irreducible, thus embedded, rational curve in class *T*. [+sph] On irrational ruled surfaces, the only irreducible rational curves with nonnegative square are in the fiber class *T*. Let [*C*] = *a**U* + *b**T* − ∑*i**c**i**E**i* be the class of an irreducible rational curve. By Proposition [smoothT], we have *a* ≥ 0. Since *g**J*([*C*]) = 0, as argued in Proposition [smoothT] by Kneser’s theorem, we will have contradiction if *a* > 0. Hence we must have *a* = 0. Then *C*2 =  − ∑*i**c**i*2 ≥ 0. Hence *c**i* = 0 for all *i* and [*C*] = *b**T*. Since *C* is a rational curve,  − 2 = *C*2 + *K* ⋅ [*C*] =  − 2*b*. Hence *b* = 1 and [*C*] = *T*. We can now confirm Question 4.18 of for irrational ruled surfaces, and further show there is a unique subvariety in each exceptional class. We rephrase Theorem [intro1]. [red-1] Let *M* be an irrational ruled surface, and let *E* be an exceptional class. Then for any subvariety Θ = {(*C**i*, *m**i*)} in class *E*, each irreducible component *C**i* is a rational curve of negative self-intersection. Moreover, the moduli space $\mathcal M\_E$ is a single point. Notice the statement is not true for a rational surface. See for a disconnected example and Section 6.1 for a connected example and related discussion. As explained in Corollary [+sph], any rational curve class must be like [*C*] = *b**T* − ∑*i**c**i**E**i*. If it is the class of an exceptional curve, then *K* ⋅ [*C*] =  − 2*b* + ∑*c**i* =  − 1,   *C*2 =  − ∑*c**i*2 =  − 1. Hence the only such classes are *E**i* and *T* − *E**i*. Both types have non-trivial Seiberg-Witten invariants. Hence, there are *J*-holomorphic subvarieties in both types of classes for arbitrary tamed *J*. Let $\Theta\_i\in \mathcal M\_{E\_i}$ and $\tilde{\Theta}\_i\in \mathcal M\_{T-E\_i}$. Since *T* = *E**i* + (*T* − *E**i*), we have $\{\Theta\_i, \tilde{\Theta}\_i\}\in \mathcal M\_T$. Since *T* is *J*-nef by Proposition [smoothT], we know all irreducible components in Θ*i* and $\tilde{\Theta}\_i$ are rational curves by Theorem 1.5 of. Moreover, by Equation, we have ∑*l**e**C**i* ≤ *l**T* − 1 = 0. Hence *e**C**i*2 < 0. This proves the first statement. For the second statement, we apply the same trick. If there is another subvariety $\Theta\_i'\in \mathcal M\_{E\_i}$. Consider $\Theta=\{\Theta\_i, \tilde{\Theta}\_i\}\in \mathcal M\_T$ and $\Theta'=\{\Theta\_i', \tilde{\Theta}\_i\}\in \mathcal M\_T$. They have common components including $\tilde{\Theta}\_i$. We then follow the argument of Lemma [uniquereducible]. After discarding all common components, we have cohomologous subvarieties Θ0 and Θ0ʹ. Moreover, we have 0 = *T*2 ≥ *T* ⋅ *e*Θ0 > *e*Θ02 = *e*Θ0 ⋅ *e*Θ0ʹ. The first inequality follows from nefness of *T*. Actually, *T*2 = *T* ⋅ *e*Θ0 by nefness of *T* applying to the common components we have discarded. The second inequality is because original Θ, Θʹ have common components at least from $\tilde{\Theta}\_i$, and because they are connected by Theorem 1.5 of. The inequality implies Θ0 = Θ0ʹ by local positivity of intersections and in turn Θ = Θʹ. Hence there is a unique subvariety Θ*i* in each exceptional class *E**i*. Similarly, there is a unique subvariety $\tilde{\Theta}\_i$ in *T* − *E**i*. By the uniqueness result that $\mathcal M\_E$ is a single point, we know the *J*-holomorphic subvariety in class *E* is connected and has no cycle in its underlying graph for any tamed *J* by Gromov compactness. This is because *E* is represented by a smooth rational curve for a generic tamed almost complex structure, and the above properties hold for the Gromov limit of these smooth pseudoholomorphic rational curves. [-pair] Let *M* be an irrational ruled surface, and *E* an exceptional class. If an irreducible *J*-holomorphic curve *C* satisfies *E* ⋅ [*C*] < 0, then *C* is a rational curve of negative square. Since *S**W*(*E*) ≠ 0, we always have a subvariety in class *E*. By Proposition [red-1], all irreducible components are negative rational curves. Thus, if *C* has positive genus, then *C* cannot be an irreducible component of the *J*-holomorphic subvariety in class *E*. Hence *E* ⋅ *C* ≥ 0 by local positivity of intersections. We would like to remark that the technique we use to prove Proposition [smoothT] could also be applied to other situations. Let us summarize it in the following. We will focus on the case when *b*+ = 1. To show certain class *A* with *A*2 ≥ 0 is *J*-nef when *J* is tamed, we would have to show classes *B* with *A* ⋅ *B* < 0 are not curve classes. If such a curve class exists with *B*2 ≥ 0 and at the same time *A* is realized by a symplectic surface, then there is a contradiction due to the light cone lemma. Hence we could assume *B*2 < 0. For this case, the first obvious obstruction is from the adjunction formula. Second type of obstruction is what we have applied above. To show *B* is not in the curve cone, we look for classes *C**i* with nontrivial Seiberg-Witten invariants, and *a*0*B* + ∑*i**a**i**C**i* = 0 with each *a**i* > 0. In Proposition [smoothT], we choose *a*0 = *a*1 = 1 which are the only nonzero *a**i*’s. For another such application, see Lemma [nonnefS2]. The key observation in this case is 2*g**J*(*B*) − 2 = dim*S**W*( − *B*). Hence, if *g**J*(*B*) > 0 and (*K* + *B*) ⋅ *A* < 0 we could efficiently apply the general wall crossing formula in to get nontriviality of Seiberg-Witten invariant for *B*. The above argument could have some obvious twists such as taking *C*1 =  − *k**B*. For the case of *g**J*(*B*) = 0, we will use a different strategy. We might apply the classifications of negative rational curves, *e.g.*, and calculate the intersection numbers with *A* directly. Now, we will investigate the moduli space of the subvarieties in class *T*. First, we need a curve to model the moduli space as we did in. [smoothsection] There is a smooth section of the irrational ruled surface, *i.e.* there is an embedded *J*-holomorphic curve *C* of genus *h* such that [*C*] ⋅ *T* = 1. We do our calculation for $M=S^2\times \Sigma\_h\#k\overline{\mathbb CP^2}$. When *M* is a nontrivial *S*2 bundle over Σ*h*, the calculation is similar. In Proposition [smoothT], we have shown that all curves having the homology class *a**U* + *b**T* − ∑*i**c**i**E**i* must have *a* ≥ 0. Especially, for a possibly reducible section which is in the class *U* + *b**T* − ∑*i**c**i**E**i*, there is exactly one irreducible component of it has *a* = 1 (with multiplicity one), all the others have *a* = 0. Furthermore, let *A* = *U* + *h**T*, we have dim*S**W*(*A*) = *A*2 − *K* ⋅ *A* = 2*h* − ( − 2*h* + 2*h* − 2) > 0. Since *K* − *A* =  − 3*U* + (*h* − 2)*T* + ∑*i**E**i* pairs negatively with *T*, by Lemma [int>0], *S**W*(*K* − *A*) = 0. Apply the wall crossing formula, we have *S**W*(*A*) =  ± 2*h* ≠ 0. Hence there is a subvariety in class *U* + *h**T*. Choose an irreducible component with *a* = 1, call it *C*. We show that *C* has to be smooth. Since [*C*] ⋅ *T* = 1, for any point *x* ∈ *C*, there is a subvariety Θ*x* in class *T* passing through it. Since any curve class *a**U* + *b**T* − ∑*i**c**i**E**i* has *a* ≥ 0, we know *C* cannot be an irreducible component of this subvariety Θ*x* in class *T*. If *x* is a singular point, the contribution to the intersection of *C* and Θ*x* would be greater than 1. Hence by the local positivity of the intersection, we know *C* is an embedded curve. Since *C* is a section, we have *g*(*C*) > 0 by Kneser’s theorem. By Corollary [-pair], for any exceptional rational curve class *E*, we have [*C*] ⋅ *E* ≥ 0. Since *T* − *E* is another exceptional rational curve class and [*C*] ⋅ (*T* − *E*) + [*C*] ⋅ *E* = [*C*] ⋅ *T* = 1, we have 0 ≤ [*C*] ⋅ *E* ≤ 1. Because of this, *K* ⋅ [*C*] + [*C*]2 = (2*h* − 2 − 2*b* + ∑*c**i*) + (2*b* − ∑*c**i*2) = 2*h* − 2. Hence *C* has genus *h*. We are ready to show the structure of the moduli space $\mathcal M\_T$. [MT] Let *M* be an irrational ruled surface of base genus *h*. Then for any tamed *J* on *M*, 1. there is a unique subvariety in class *T* passing through a given point; 2. the moduli space $\mathcal M\_T$ of the subvarieties in class *T* is homeomorphic to Σ*h*; 3. $\mathcal M\_{red, T}$ is a set of finitely many points. Let *C* ≅ Σ*h* be the smooth *J*-holomorphic section constructed in Proposition [smoothsection]. First, by Lemma [uniquereducible], for any given point *x* ∈ *M*, there is a unique element in $\mathcal M\_T$ passing through it. We denote this element by Θ*x*. Now, we construct a natural map *h* : *x* ↦ Θ*x* from *C* to $\mathcal M\_T$. The map *h* is surjective because *T* ⋅ [*C*] ≠ 0. The map is injective since *T* ⋅ [*C*] = 1 and the positivity of intersection. To show *h*− 1 is continuous, consider a sequence $\Theta\_i\in \mathcal M\_T$ approaching to its Gromov-Hausdorff limit Θ. Let the intersection points of Θ*i*, Θ with *C* be *p**i*, *p*. Then *p**i* has to approach *p* by the first item of the definition of topology on $\mathcal M\_T$. Now since *C* ≅ Σ*h* is Hausdorff and $\mathcal M\_T$ is compact, the fact we just proved that $h^{-1}: \mathcal M\_T\rightarrow C$ is continuous would imply *h* is also continuous. Hence *h* is a homeomorphism. This completes the proof of the second statement. The third bullet, that $\mathcal M\_{red, T}$ is a set of finitely many points, follows from the following two facts. First, each irreducible component of an element in $\mathcal M\_{red, T}$ would have negative self-intersection since ∑*l**e**i* ≤ 0 by Equation. Second, there are finitely many negative rational curves as we have seen in Proposition [smoothT]. [spherefiber] Every irreducible rational curve belongs to a fiber, *i.e.* it is an irreducible component of an element of $\mathcal M\_T$. First, by Corollary [+sph], all irreducible rational curves with nonnegative square have class *T*. Hence, we could only talk about negative curves. By Kneser theorem, for such a curve *C*, we have [*C*] ⋅ *T* = 0 as argued in Corollary [+sph]. By Theorem [MT] (1), for any point *x* ∈ *C*, there is a unique element $\Theta\_x\in \mathcal M\_T$ passing through it. If *C* is not an irreducible component of Θ*x*, then [*C*] ⋅ *T* > 0 by the positivity of intersection, which contradicts to [*C*] ⋅ *T* = 0. Theorem [MT] and Corollary [spherefiber] constitute Theorem [intro2] in the introduction. Along with Corollary [+sph], we have described $\mathcal M\_e$ for any rational curve class *e* and an arbitrary tamed almost complex structure on an irrational ruled surface. Some finer local structures of the moduli space are described in the following. [fibration] The natural map $f: M\rightarrow \mathcal M\_T$, where *f*(*x*) is the unique subvariety Θ*x* in class *T* passing through *x*, is a continuous map. We only need to show that for any sequence {*x**n*}*n* = 1∞ converging to *x*, the subvarieties Θ*x**n* converge to Θ*x* in $\mathcal M\_T$. We notice that if a sequence satisfies lim*n* → ∞*ρ*(Θ*x*, Θ*x**n*) = 0 (the first defining property of the topology of $\mathcal M\_e$), then a subsequence must converge to Θ*x* because of Theorem [MT](1). Hence, we can assume on the contrary that there is a sequence {*x**n*}*n* = 1∞ converging to *x* such that *ρ*(Θ*x*, Θ*x**n*) > *c* > 0 for a constant *c*. However, since $\mathcal M\_T$ is compact, we know there is a subsequence of {*x**n*} such that it converges to a subvariety $\Theta'\in \mathcal M\_T$. Since {*x**n*} converging to *x*, we know *x* ∈ ∣Θʹ∣ ∩ ∣Θ*x*∣, which implies Θʹ = Θ*x* by Theorem [MT] (1). This contradicts to our assumption *ρ*(Θ*x*, Θ*x**n*) > *c* > 0. Thus we know $f: M\rightarrow \mathcal M\_T$ is a continuous map. It is worth pointing out that near a smooth curve $C\subset \mathcal M\_T$ (or more generally any moduli space $\mathcal M\_e$), the convergence is very explicit, as described in (see also ). Recall that any curve in a neighborhood of *C* in $\mathcal M\_T$ can be written as exp*C*(*η*) with *η* being a section of normal bundle *N**C* satisfying *D**C**η* + *τ*1∂*η* + *τ*0 = 0. Here, *τ*1 and *τ*0 are smooth, fiber preserving maps from a small radius disk in *N**C* to $\hbox{Hom}(N\_C\otimes T^{1,0}C; N\_C\otimes T^{0,1}C)$ and to *N**C* ⊗ *T*0, 1*C* that obey ∣*τ*1(*b*)∣ ≤ *c*0∣*b*∣ and ∣*τ*0(*b*)∣ ≤ *c*0∣*b*∣2. Meanwhile, *D**C* is the $\mathbb R$-linear operator that appears in (2.12) of, which is used to describe the first order deformations of *C* as a *J*-holomorphic submanifold. The *L*2-orthogonal projection map from *C*∞(*C*; *N**C*) to the kernel of *D**C* maps an open set of solutions of diffeomorphically to an open ball centered at 0 in ker(*D**C*). Notice in our situation, ker(*D**C*) has complex dimension one. This description identifies an open neighborhood $\mathcal N(C)$ of *C* in $\mathcal M\_T$ with a small radius ball about the origin in ker(*D**C*). From this description, we know the tangent bundle of each element in $\mathcal N(C)$ varies as a smooth family. We will finish this section by a digression on another example of using the technique of Proposition [smoothT], now for rational surfaces $M=\mathbb CP^2\# k\overline{\mathbb CP^2}$. [nonnefS2] Let *M* be a rational surface and *J* be tamed. Let $A\in H^2(M, \mathbb Z)$ be a class with *A*2 ≥ 0. Moreover, assume there is an embedded *J*ʹ-holomorphic curve in class *A* for a tamed *J*ʹ. Then if a *J*-holomorphic curve *C* such that [*C*] ⋅ *A* < min{0,  − *K* ⋅ *A*}, it has to be a rational curve with negative square. For example, *A* could be chosen as *H*, *H* − *E*, 3*H* − *E*, *etc*. We first show *C* is a rational curve by contradiction. If *g**J*([*C*]) > 0, we have *C*2 + *K* ⋅ [*C*] ≥ 0. We look at the class  − [*C*], which has dim*S**W*( − [*C*]) = *C*2 + *K* ⋅ [*C*] ≥ 0. The wall-crossing formula for rational surfaces implies ∣*S**W*(*K* + [*C*]) − *S**W*( − [*C*])∣ = 1. For classes *A* and *e* = *K* + [*C*], we have *A* ⋅ *e* < 0. Apply Lemma [int>0], using the conditions *A*2 ≥ 0 and *A* has an embedded *J*ʹ-holomorphic representative, we conclude *S**W*(*K* + [*C*]) = 0. Hence *S**W*( − [*C*]) ≠ 0 by wall-crossing. It follows that the class 0 = [*C*] + ( − [*C*]) is a class of subvariety, which contradicts to the fact that *J* is tamed. Hence *C* has to be a rational curve. Now we choose an integral symplectic form *ω* taming *J*. Hence, for large *N* we have dim*S**W*(*N*[*ω*]) > 0. Moreover, the class *K* − *N*[*ω*] pairs negatively with the symplectic form *ω* for large *N*. Therefore, we must have *S**W*(*K* − *N*[*ω*]) = 0. By wall-crossing, we have *S**W*(*N*[*ω*]) ≠ 0 for large *N*. Then by Lemma [int>0], we have [*ω*] ⋅ *A* ≥ 0. Since *C* is a *J*-holomorphic curve, [*ω*] ⋅ [*C*] > 0. If *C*2 ≥ 0, and because *A*2 ≥ 0, we apply the light cone lemma to conclude that [*C*] ⋅ *A* ≥ 0, which contradicts to our assumption. Hence *C* is a rational curve with negative square. With this lemma in hand, we could apply the classification of negative rational curves in to find *J*-nef classes for rational surfaces with *K*2 > 0. The only feature of rational surfaces used in the proof is that they have nonzero wall-crossing number for all the classes with non-negative Seiberg-Witten dimension. Hence, the argument could be extended to a general symplectic 4-manifold with *b*+ = 1 under this assumption. Rational surfaces ================= In this section, we will concentrate on rational surfaces, *i.e.* 4-manifolds diffeomorphic to $\mathbb CP^2\#k\overline{\mathbb CP^2}$ or *S*2 × *S*2. We study the moduli space of *J*-holomorphic subvarieties in a sphere class. Our main results, Theorem [homeoCPl] and Proposition [connected], show that our moduli space behaves like a linear system in algebraic geometry. Connectedness of moduli spaces of subvarieties ---------------------------------------------- For the applications, in particular the symplectic isotopy problem, it is important to show that the reducible part $\mathcal M\_{red}$ would not disconnect the whole moduli space. This is the technical heart of in which it is called Isotopy Lemma. In our setting, we would show the following stronger result. [connected] Suppose *e* is a *J*-nef class with *g**J*(*e*) = 0. Then $\mathcal M\_e$ and $\mathcal M\_{irr, e}$ are path connected. In particular, any two smooth rational curves representing class *e* are connected by a path of smooth rational curves. We divide our argument into five parts. **Part 1: Reduce to rational surfaces.** If $\mathcal M\_e\ne \emptyset$, since *e* is *J*-nef, we know the self-intersection *e*2 ≥ 0. By a classical result of McDuff, if furthermore *g**J*(*e*) = 0 and $\mathcal M\_{irr, e}\ne \emptyset$, then *M* has to be a rational or ruled surface. When *M* is not rational, the results follow from Theorem [MT] and Corollary [spherefiber]. Hence, in the following, we assume *M* is a rational surface. **Part 2: Definition of pretty generic tuples.** We first assume *e* is a big *J*-nef class, *i.e.* a *J*-nef class with *e* ⋅ *e* > 0. For the proof we need the following definition of. We denote *l* = *l**e* ≥ 2. Let *M*[*k*] be the set of *k* tuples of pairwise distinct points in *M*. [pretty generic] Fix a point *x* ∈ *M*(*e*) (see Section [secnef] for the definition). An element Ω ∈ *M*[*l* − 2] is called *pretty generic* with respect to *e* and *x* if * *x* is distinct from any entry of Ω; For each $\Theta=\{(C\_1, m\_1),\cdots, (C\_n, m\_n)\}\in \mathcal M^x\_{red, e}$ with *x* ∈ *C*1, * *x* is not in *C**i* for any *i* ≥ 2; * Ω*i* ∩ Ω*j* = ∅ for *i* ≠ *j*, where Ω*i* = Ω ∩ *C**i*; * 1 + *w*1 = *m*1*e* ⋅ *e*1( ≥ *l**e*1), and *w**i* = *m**i**e* ⋅ *e**i*( ≥ *l**e**i*) for *i* ≥ 2. Here *w**i* is the cardinality of Ω*i*. Let *G**e**x* be the set of pretty generic *l* − 2 tuples with respect to *e* and *x*. It is indeed a generic set in the sense that the complement of *G**e**x* has complex codimension at least one in *M*[*l* − 2] by Proposition 4.8 of. In particular, the set *G**e**x* is path connected. **Part 3: $\mathcal M\_{irr, e}$ is path connected when *e* is a big *J*-nef class.** Now, if *C* and *C*ʹ are smooth rational curves in $\mathcal M\_{irr, e}$, they intersect at *l* − 1 points (counted with multiplicities). If one of the intersection points $\tilde x\in D$ where *D* is an irreducible curve in *Z*(*e*), we have *e**C* ⋅ *e**D* = 0 by definition. On the other hand, since *C* is irreducible, we know the irreducible curve *D* is not identical to *C* since the class *e* = *e**C* is big and thus *e* ⋅ *e**C* > 0 = *e* ⋅ *e**D*. Then $\tilde x\in C\cap D$ implies *e* ⋅ *e**D* > 0 which is a contradiction. Hence, all of these intersection points are in *M*(*e*). First, we will show that we can deform the curve *C* within $\mathcal M\_{irr, e}$ to a smooth curve *C̃* such that all the intersection points with *C*ʹ are of multiplicity one, if there are intersection points of *C* and *C*ʹ having multiplicity greater than one. We know $\mathcal M\_{irr, e}$ is a smooth manifold of dimension 2*l*. Hence we can choose an open neighborhood *U* of $C\in \mathcal M\_{irr, e}$. We look at the intersection points between elements in *U* and the curve *C*ʹ. There are *l* − 1 intersection points counted with multiplicities. Let *U*ʹ ⊂ *U* be a subset of *U* such that an element in *U*ʹ is tangent to the curve *C*ʹ, *i.e.* intersecting at least one point with multiplicity at least two. In particular, *C* ∈ *U*ʹ. The following is a general fact of automatic transversality, see *e.g.* Remark 3.6 of. If we have *k* ≤ *l* distinct points *x*1, ⋯, *x**k* in *C*ʹ and *k*ʹ < *k* with *k* + *k*ʹ ≤ *l*, then the set of smooth rational curves in class *e* passing through *x*1, ⋯, *x**k* and having the same tangent space at the *k*ʹ points *x*1, ⋯, *x**k*ʹ as *C*ʹ is still a smooth manifold, whose dimension is 2(*l* − *k* − *k*ʹ). Since we can vary *x*1, ⋯, *x**k* in the curve *C*ʹ, and *k*ʹ ≥ 1, we know *U*ʹ is a submanifold of U with dimension 2(*l* − *k* − 1) + 2*k* = 2*l* − 2. In particular, *U* \ *U*ʹ, which is the set of curves in *U* intersecting *C*ʹ at points with multiplicity one, is non-empty and path connected. Moreover, elements in *U* \ *U*ʹ could be connected by paths to the element *C* ∈ *U*ʹ within *U* \ *U*ʹ, in the sense that for any $\tilde C \in U\setminus U'$ there is a path *P*(*t*) ⊂ *U* such that *P*(1) = *C*, $P(0)=\tilde C$ and *P*([0, 1)) ⊂ *U* \ *U*ʹ. Hence, any curve $\tilde C\in \mathcal M\_{irr, e}$ could be obtained by deforming the curve *C* within $\mathcal M\_{irr, e}$, such that all the intersection points of $\tilde C$ and *C*ʹ are of multiplicity one. For simplicity of notation, we will still write the deformed curve $\tilde C$ by *C*. We can now choose one of the intersection points of *C* and *C*ʹ, and call it *x*. For the remaining *l* − 2 points *x*3, ⋯, *x**l*, they might not be in *G**e**x*. Choose two more points *y* ∈ *C* and *y*ʹ ∈ *C*ʹ other than all these intersection points. We are able to choose *l* − 2 disjoint open neighborhoods *N**i* of *x**i* in *M* with *i* = 3, ⋯, *l*, such that all curves representing *e*, passing through *x*, and *y* or *y*ʹ, and intersecting all *N**i* are smooth rational curves. This is because $\mathcal M\_{irr, e}$ is a smooth manifold by Theorem [unobstructed] and there is a unique subvariety, smooth or not, passing through *l* given points on an irreducible curve in class *e*. Since the complement of *G**e**x* has complex codimension at least one in *M*[*l* − 2], we are able to choose a pretty generic *l* − 2-tuple from *N*3 × ⋯ × *N**l*. With these understood, we are able to deform *C* and *C*ʹ within $\mathcal M\_{irr, e}$ to two smooth rational curves intersecting at *x* and an *l* − 2-tuple in *G**e**x*. We still denote these two curves by *C* and *C*ʹ. By Proposition 4.9 of, the subset $\mathcal M\_{e}^{x, x\_3, \cdots, x\_l}\subset \mathcal M\_e$ is homeomorphic to $\mathbb CP^1=S^2$ when (*x*3, ⋯, *x**l*) ∈ *G**e**x*. Moreover, $\mathcal M\_{e}^{x, x\_3, \cdots, x\_l}\cap \mathcal M\_{red, e}$ is a finite set of points. Since $C, C'\in \mathcal M\_{e}^{x, x\_3, \cdots, x\_l}$, they are connected by a family of smooth rational curve in $\mathcal M\_{e}^{x, x\_3, \cdots, x\_l}\cap \mathcal M\_{irr, e}$. This finishes the proof that $\mathcal M\_{irr, e}$ is path connected when *e* is big *J*-nef. **Part 4: $\mathcal M\_e$ is path connected when *e* is a big *J*-nef class.** To show $\mathcal M\_e$ is path connected, we only need to prove that any element in $\mathcal M\_{red, e}$ is connected by a path to an element in $\mathcal M\_{irr, e}$. This would imply $\mathcal M\_e$ is path connected since we have shown $\mathcal M\_{irr, e}$ is path connected. Let $\Theta \in \mathcal M\_{red, e}$, we choose *l*ʹ = ∑*e* ⋅ *e**C**i* distinct points *x*1, ⋯, *x**l*ʹ from the smooth part of Θ = {(*C**i*, *m**i*)}. We choose the *l*ʹ points such that there are exactly *e* ⋅ *e**C**i* points on *C**i* for each *i*, each with type (*x**i*, *m**i*) in the sense of section [intersection]. Counted with weights, there are ∑*m**i**e* ⋅ *e**C**i* = *l* − 1 points. We then choose another point, labeled by *x**l*, from the smooth part of Θ and different from *x*1, ⋯, *x**l*ʹ. By Lemma [uniquereducible], there is a unique element in $\mathcal M\_e$ passing through points *x*1, ⋯, *x**l*ʹ with multiplicities and another point *x**l*. We take *l* disjoint open sets *N*1, ⋯, *N**l* ⊂ *M* as following. For each *x**i*, *i* ≤ *l*ʹ, assume it is on the irreducible component (*C**j*, *m**j*). We choose *m**j* disjoint open sets, say *N*ʹ1, ⋯, *N*ʹ*m**j*, such that $\overline{N'\_{1}}\cup \cdots \cup \overline{N'\_{m\_j}}$ is a neighborhood of *x**i* in *M* and $x\_i\in \overline{N'\_{k}}$ for each 1 ≤ *k* ≤ *m**i*. Considering all the points *x*1, ⋯, *x**l*ʹ, there are *l* − 1 = ∑*m**i**e* ⋅ *e**C**i* such open sets. We relabel them by *N*1, ⋯, *N**l* − 1. Finally, we take a neighborhood *N**l* of *x**l* in *M*. Apparently, we can choose these open sets such that they are disjoint from each others. We denote by $\mathcal M\_{irr, e, k}$ (resp. $\mathcal M\_{red, e, k}$) the subset of $\mathcal M\_{irr, e}\times M^{[k]}$ (resp. $\mathcal M\_{red, e}\times M^{[k]}$) that consists of elements of the form (*C*, *x*1, ⋯, *x**k*) with *x**i* ∈ *C* and distinct. There are natural projections $\pi\_{irr, l}: \mathcal M\_{irr, e, l}\rightarrow M^{[l]}$ and $\pi\_{red, l}: \mathcal M\_{red, e, l}\rightarrow M^{[l]}$. First, we notice that the diagonal elements *Z**d**i**a**g* = *M**l* \ *M*[*l*] is a finite union of submanifolds of dimension at least two. Proposition 4.5 in shows that the image of *π**r**e**d*, *l*, say *Z**r**e**d* ⊂ *M*[*l*], is a finite union of submanifolds of codimension at least two, and *π**i**r**r*, *e* maps onto its complement. Moreover, the map *π**i**r**r*, *l* is one-to-one. Hence, *M**l* \ (*Z**d**i**a**g* ∪ *Z**r**e**d*) is path connected. In particular, we can choose a path *P*(*t*) in *M**l* such that *P*(0) ∈ *M**l* is the *l* points with weight (*x*1, *m**k*1), ⋯, (*x**l*ʹ, *m**k**l*ʹ) and *x**l* that determine Θ uniquely and *P*((0, 1]) ⊂ *N*1 × ⋯ × *N**l* \ *Z**r**e**d*. Since all the *l* tuples *P*(*t*) determines the subvariety uniquely, the path *P*(*t*) ⊂ *M**l* gives rise to a path connecting Θ to $\mathcal M\_{irr, e}$. **Part 5: $\mathcal M\_e$ is homeomorphic to *S*2 when *e* ⋅ *e* = 0.** When *e* ⋅ *e* = 0, we no longer need the technicalities of pretty generic tuples. In fact, the argument here is similar to that of Theorem [MT]. Instead of finding a smooth section as in Proposition [smoothsection], we will use a general construction in of a ``dual" *J*-nef class. This will be used as our model for moduli space. By Theorem [unobstructed], $\mathcal M\_{irr, e}$ is a manifold of complex dimension 1 and $\mathcal M\_{red, e}$ is a union of finitely many points. We will show that $\mathcal M\_e=\mathcal M\_{irr, e}\cup \mathcal M\_{red, e}$ is actually homeomorphic to *S*2. By Proposition 4.6 of, there is another *J*-nef class *H**e* with *g**J*(*H**e*) = 0 such that *H**e* ⋅ *e* = 1. We choose a smooth rational curve *S* representing *H**e*. Given any *z* ∈ *S*, there is a unique (although possibly reducible) rational curve *C**z* in class *e* passing through *z*. Thus we obtain a map *h* : *z* ↦ *C**z* from *S* to $\mathcal M\_e$. The map *h* is surjective since *H**e* ⋅ *e* ≠ 0. Since *S* is also *J*-holomorphic and *H**e* ⋅ *e* = 1 any curve in $\mathcal M\_e$ intersects with *S* at a unique point by the positivity of intersection. Therefore *h* is also one-to-one. Now let us show that *h* is a homeomorphism, namely both *h* and *h*− 1 are continuous. Since *S* = *S*2 is Hausdorff and $\mathcal M\_e$ is compact, if we can show that $h^{-1}:\mathcal M\_e\to S$ is continuous, it follows that *h* is also continuous. To show *h*− 1 is continuous, consider a sequence $C\_i \in \mathcal M\_e$ approaching to its Gromov-Hausdorff limit *C*. Let the intersection of *C**i* (resp. *C*) with *S* be *p**i* (resp. *p*). Then *p**i* has to approach *p* by the first item of the definition of topology on $\mathcal M\_e$. Therefore *h* is a homeomorphism. $\mathcal M\_e=\mathbb CP^l$ when *e* is primitive -------------------------------------------------- In the argument of Proposition [connected], we have shown $\mathcal M\_e=\mathbb CP^1$ when *e* ⋅ *e* = 0. We will next generalize it to show that $\mathcal M\_{e}$ is homeomorphic to $\mathbb CP^l$ when *e* is primitive, hence confirm Question 5.25 of in the topological sense in this circumstance. This is Theorem [cpl] and we state it again below as Theorem [homeoCPl]. We first need a lemma to adapt the discussion of section 4.3 in. This lemma is crucial in our construction of the model for the moduli space. [Henef] Let *M* = *S*2 × *S*2 or $\mathbb CP^2\#k\overline{\mathbb CP^2}$ with *k* ≥ 1. Suppose $e\in H^2(M, \mathbb Z)$ is a primitive (i.e. *e* is not divisible by an integer *k* > 1) *J*-nef class with *g**J*(*e*) = 0. Then there is a *J*-nef class *H**e* such that *g**J*(*H**e*) = 0 and *H**e* ⋅ *e* = 1. Moreover, *H**e* can be assumed to be not proportional to *e*. We take the class *H**e* to be the same ones chosen in the proof of Lemma 4.13 of except if *e* is Cremona equivalent to *H* or 2*H* − *E*1 − *E*2 when $M=\mathbb CP^2\#k\overline{\mathbb CP^2}$. When *e* is equivalent to *H*, without loss of generality, we assume *e* = *H*. We will show that at least one of *H* − *E*1, ⋯, *H* − *E**k* is *J*-nef. Let us first take *H**e*ʹ = *H* − *E*1, and assume there is a curve pairing negatively with it. By Lemma 4.15 of, we know an irreducible curve class *e**C* = *a**H* − *b*1*E*1 − ⋯ − *b**k**E**k*, pairing negatively with *H* − *E*1, must have *a* ≤ 0. Hence *a* = 0, otherwise it contradicts to the assumption that *e* = *H* is *J*-nef. But when *a* = 0, we have *b*1 =  − *H**e*ʹ ⋅ *e**C* > 0. Moreover, since *S**W*(*E**i*) ≠ 0, we know there are *J*-holomorphic subvarieties in classes *E**i*. At least one *b**i* < 0, otherwise 0 is a linear combination of *e**C* and *e**E**i* which contradicts to the fact that *J* is tamed. Then we look at the adjunction number *e**C* ⋅ *e**C* + *K**J* ⋅ *e**C* =  − *b*12 − ⋯ − *b**k*2 + *b*1 + ⋯ + *b**k* ≤ 0. To make sure the adjunction number is no less than  − 2, we will exactly have one negative *b**i*, say *b*2 =  − 1. Other *b**i*’s are 0 or 1. In particular, *b*1 = 1. Then we take the class *H* − *E*2. If it is not *J*-nef, we can argue as in the last paragraph for *H* − *E*1 to show that there is a curve class *e**C*2 =  − *b*1(2)*E*1 − ⋯ − *b**k*(2)*E**k* with only one negative coefficient which is  − 1, and others are 0 or 1. If the negative coefficient is some *b**i*(2) =  − 1 such that *b**i* = 1, then *e**C* + *e**C*2 is a linear combination of *E*1, ⋯, *E**k* with non-positive coefficients. This contradicts to the fact that *J* is tamed. If the negative coefficient is some *b**i*(2) (*i* ≠ 2 since *b*2(2) = 1) with *b**i* = 0, say *b*3(2) =  − 1, we could continue our argument with the class *H* − *E*3. Since *k* is a finite number, this process will stop at some finite number *j*, such that when we argue it with a non *J*-nef class *H* − *E**j*, we will get a curve class *e**C**j* with one negative *b**i*(*j*) and *i* < *j*. Then the sum *e**C**i* + ⋯ + *e**C**j* is a linear combination of *E**i* with non-positive coefficients, which contradicts to the tameness of *J* again. Hence, we have shown that at least one of *H* − *E*1, ⋯, *H* − *E**k* is *J*-nef. We choose it as *H**e*, which is a class satisfying the requirements of our lemma. When *e* is equivalent to 2*H* − *E*1 − *E*2, say *e* = 2*H* − *E*1 − *E*2, we claim that one of the classes, *H* − *E*1 or *H* − *E*2, is *J*-nef. We assume both *H* − *E*1 and *H* − *E*2 are not *J*-nef. By Lemma 4.15 of, we know an irreducible curve class *e**C* = *a**H* − *b*1*E*1 − ⋯ − *b**k**E**k* pairing negatively with *H* − *E*1 or *H* − *E*2 must have *a* ≤ 0. We are able to determine all the possible classes that pair negatively with *H* − *E*1. In this case, (*H* − *E*1) ⋅ *e**C* ≤  − 1 implies *a* ≤ *b*1 − 1. Since 2*H* − *E*1 − *E*2 is *J*-nef, we know *H* − *E*2 pairs positively with *e**C*, which implies *b*2 ≤ *a* − 1 ≤  − 1. We calculate the *K**J*-adjunction number *e**C* ⋅ *e**C* + *K**J* ⋅ *e**C* ≤ *a*2 − *b*22 − 3*a* + *b*2 ≤ 2*a* − 1 − 3*a* + *b*2 ≤  − 2. The equality holds only when *b*2 = *a* − 1 (the second inequality) and all other *b**i* are 0 or 1 (the first inequality). Furthermore *b*2 = *a* − 1 would imply *a* = *b*1 − 1 also holds. Hence the only possible classes are *E*2 − *E*1 − *b*3*E*3 − ⋯ − *b**k**E**k* and  − *H* + 2*E*2 − *b*3*E*3 − ⋯ − *b**k**E**k* with all *b*3, ⋯*b**k* are 0 or 1. Similarly, if the class *H* − *E*2 is not *J*-nef, then there is a curve class *E*1 − *E*2 − *b*3*E*3 − ⋯ − *b**k**E**k* or  − *H* + 2*E*1 − *b*3*E*3 − ⋯ − *b**k**E**k* with all *b*3, ⋯*b**k* are 0 or 1. We notice the classes *E*2 − *E*1 − *b*3*E*3 − ⋯ − *b**k**E**k* and *E*1 − *E*2 − *b*ʹ3*E*3 − ⋯ − *b*ʹ*k**E**k* cannot coexist. We assume there is no curve class of type *E*2 − *E*1 − *b*3*E*3 − ⋯ − *b**k**E**k*. Then there is a curve in class  − *H* + 2*E*2 − *b*3*E*3 − ⋯ − *b**k**E**k*. At the same time, there is a curve in class *E*1 − *E*2 − *b*ʹ3*E*3 − ⋯ − *b*ʹ*k**E**k* or  − *H* + 2*E*1 − *b*ʹ3*E*3 − ⋯ − *b*ʹ*k**E**k*. In particular, it implies that there are *J*-holomorphic subvarieties in classes  − *H* + 2*E*2 − *b*3*E*3 − ⋯ − *b**k**E**k* and  − *H* + 2*E*1 − *b*ʹ3*E*3 − ⋯ − *b*ʹ*k**E**k*. Again, *S**W*(*E**i*) ≠ 0 implies that there are *J*-holomorphic subvarieties in classes *E**i*. In turn, it would imply that there are subvarieties in classes  − *H* + 2*E*1 and  − *H* + 2*E*2. Finally *S**W*(*H* − *E*1 − *E*2) ≠ 0, hence *H* − *E*1 − *E*2 is the class of a subvariety. However, then we know 0 = ( − *H* + 2*E*1) + ( − *H* + 2*E*2) + 2(*H* − *E*1 − *E*2) is the class of a subvariety, which contradicts to the tameness of *J*. Hence, *H* − *E*1 or *H* − *E*2 has to be *J*-nef. It is our *H**e* when *e* = 2*H* − *E*1 − *E*2. It satisfies all the requirements. This finishes the proof of the lemma. [homeoCPl] Suppose *J* is a tamed almost complex structure on a rational surface *M*, and *e* is a primitive class which is represented by a smooth *J*-holomorphic sphere. Then $\mathcal M\_e$ is homeomorphic to $\mathbb CP^l$ where *l* = max{0, *e* ⋅ *e* + 1}. When *e* ⋅ *e* < 0, we have *l* = 0. It follows from positivity of local intersections that the smooth *J*-holomorphic sphere representing *e* is the unique element in $\mathcal M\_e$. When *e* ⋅ *e* ≥ 0, we know *e* is *J*-nef. We could assume *e* ⋅ *e* > 0 otherwise it is verified in Proposition [connected]. For $M=\mathbb CP^2$, it is well known $\mathcal M\_H=\mathbb CP^2$, see *e.g.*. Actually, $\mathcal M\_{2H}=\mathbb CP^5$ by the result of. Hence, let us discuss $\mathbb CP^2\#k\overline{\mathbb CP^2}$ or *S*2 × *S*2. Our first goal is to find a class *e*ʹ such that it is *J*-nef, *g**J*(*e*ʹ) = 0 and *e*ʹ ⋅ *e* = *l*. In Lemma [Henef], we have found *J*-nef class *H**e* such that *g**J*(*H**e*) = 0 and *H**e* ⋅ *e* = 1 when *e* is primitive. Moreover, we could choose *H**e* not proportional to *e*. Let *e*ʹ = *e* + *H**e*. By adjunction formula (*e* + *H**e*)2 + *K* ⋅ (*e* + *H**e*) = ( − 2) + ( − 2) + 2 =  − 2,  *g**J*(*e*ʹ) = 0 and *e*ʹ is *J*-nef since both *e* and *H**e* are so. Moreover, the intersection number *e*ʹ ⋅ *e* = *e*2 + 1 = *l*. Since *e*ʹ2 > 0, we have dim*S**W*(*e*ʹ) > 0. If *S**W*(*K* − *e*ʹ) ≠ 0, it will contradict to the nefness of *e*ʹ by *e* ⋅ (*K* − *e*ʹ) =  − dim*S**W*(*e*ʹ) < 0. Hence by Seiberg-Witten wall-crossing, we have *S**W*(*e*ʹ) = 1. By Proposition 4.5 of, we choose a smooth rational curve *S* in class *e*ʹ. Notice by our choice of class *e*ʹ, the smooth rational curve *S* cannot be an irreducible component of any element in $\mathcal M\_e$. Since *J* is tamed, any subvariety in $\mathcal M\_e$ is connected. Take points *x*1, ⋯, *x**l* ∈ *S*. Some of the points *x**i* might be identical. Since the curve *S* is given *a priori*, when we talk about the intersection of subvarieties as in Section [intersection], we could also include the second type where the ``matching" curve at *x**i* is given by *S*. We will show that there is a unique (possibly reducible) rational curve in class *e* passing through all *x**i*. The argument is similar to that of Lemma [uniquereducible], with slight modifications with regard to the existence of the curve *S* and the corresponding second type intersections. We assume there are two such subvarieties, say Θ = {(*C**i*, *m**i*)}, Θʹ = {(*C**i*ʹ, *m**i*ʹ)}. If Θ, Θʹ have no common components, then the result follows from positivity of local intersection since *e*Θ ⋅ *e*Θʹ < *l*. Hence we assume they have at least one common components. In particular, none of Θ and Θʹ is a smooth variety. We rewrite two subvarieties $\Theta, \Theta' \in \mathcal M\_e$, allowing *m**i* = 0 in the notation, such that they have the same set of irreducible components formally, i.e. Θ = {(*C**i*, *m**i*)} and Θʹ = {(*C**i*, *m*ʹ*i*)}. Then for each *C**i*, if *m**i* ≤ *m*ʹ*i*, we change the components to (*C**i*, 0) and (*C**i*, *m*ʹ*i* − *m**i*). Similar procedure applies to the case when *m**i* > *m**i*ʹ. Apply this process to all *i* and discard finally all components with multiplicity 0 and denote them by Θ0, Θʹ0 and still use (*C**i*, *m**i*) and (*C**i*, *m*ʹ*i*) to denote their components. Notice they are homologous, formally have homology class *e* − ∑*m**k**i* < *m**k**i*ʹ*m**k**i**e**C**k**i* − ∑*m**l**j*ʹ < *m**l**j**m*ʹ*l**j**e**C**l**j* − ∑*m**q**p*ʹ = *m**q**p**m*ʹ*q**p**e**C**q**p*. There are two ways to express the class, by taking *e* = *e*Θ or *e* = *e*Θʹ in the above formula. Namely, it is $$\sum\_{m\_{k\_i}<m\_{k\_i}'}(m\_{k\_i}'-m\_{k\_i})e\_{C\_{k\_i}}+\hbox{others}=e\_{\Theta\_0'}=e\_{\Theta\_0}=\sum\_{m\_{l\_j}'<m\_{l\_j}} (m\_{l\_j}-m'\_{l\_j})e\_{C\_{l\_j}}+\hbox{others}.$$ Here the term ``others" means the terms *m**i**e**C**i* or *m**i*ʹ*e**C**i* where *i* is not taken from *k**i*, *l**j* or *q**p*. Now Θ0 and Θ0ʹ have no common components. They intersect the rational curve *S* at least *e*ʹ ⋅ *e*Θ0 ≥ *e* ⋅ *e*Θ0 points (as a subset of {*x*1, ⋯, *x**l*}) with multiplicities. In the inequality above, we make use of the fact that *H**e* is *J*-nef. Hence Θ0 and Θ0ʹ would intersect at least *e* ⋅ *e*Θ0 points with multiplicities. We notice that *e* ⋅ *e*Θ0 ≥ *e*Θ0 ⋅ *e*Θ0. In fact, the difference *e* − *e*Θ0 = *e* − *e*Θ0ʹ has 3 types of terms, any of them pairing non-negatively with *e*Θ0 = *e*Θ0ʹ. For the terms with index *k**i*, *i.e.* the terms with *m**k**i* < *m**k**i*ʹ, we use the expression of $e\_{\Theta\_0}=\sum\_{m\_{l\_j}'<m\_{l\_j}} (m\_{l\_j}-m'\_{l\_j})e\_{C\_{l\_j}}+\hbox{others}$ to pair with. Since the irreducible curves involved in the expression are all different from *C**k**i*, we have *e**C**k**i* ⋅ *e*Θ0 ≥ 0. Similarly, for *C**l**j*, we use the expression of $e\_{\Theta\_0'}=\sum\_{m\_{k\_i}<m\_{k\_i}'}(m\_{k\_i}'-m\_{k\_i})e\_{C\_{k\_i}}+\hbox{others}$. We have *e**C**l**j* ⋅ *e*Θ0ʹ ≥ 0. For *C**q**p*, we could use either *e*Θ0 or *e*Θ0ʹ. Since *e*Θ0 = *e*Θ0ʹ, we have (*e* − *e*Θ0) ⋅ *e*Θ0 ≥ 0. Moreover, we have the strict inequality *e* ⋅ *e*Θ0 > *e*Θ02. This is because we assume the original Θ, Θʹ have at least one common component and because they are connected by Theorem 1.5 of. The first fact implies there is at least one index in *k**i*, *l**j* or *q**p*. The second fact implies at least one of the intersection of *C**k**i*, *C**l**j* or *C**q**p* with *e*Θ0 as in the last paragraph would take positive value. The inequality *e* ⋅ *e*Θ0 > *e*Θ02 implies there are more intersections than the homology intersection number *e*Θ02 of our new subvariety Θ0 and Θ0ʹ. This contradicts to the positivity of local intersection and the fact that Θ0, Θ0ʹ have no common component. Hence Θ = Θʹ. We will use *C**x*1, ⋯, *x**l* to denote the unique subvariety passing through the *l* points {*x*1, ⋯, *x**l*}. Apparently, changing the order of *x**i* gives the same curve. Thus we obtain a well-defined map *h* : (*x*1, ⋯, *x**l*) ↦ *C**x*1, ⋯, *x**l* from $\hbox{Sym}^l S^2\cong \mathbb CP^l$ to $\mathcal M\_e$. Since *S* is *J*-holomorphic and *e*ʹ ⋅ *e* = *e*2 + 1 = *l* > 0, any curve in $\mathcal M\_e$ intersects with *S* at exactly *l* points by the positivity of local intersection of distinct irreducible *J*-holomorphic subvarieties. Therefore *h* is one-to-one and surjective. Now let us show that *h* is a homeomorphism, namely both *h* and *h*− 1 are continuous. Since $\hbox{Sym}^l S^2$ is Hausdorff and $\mathcal M\_e$ is compact, if we can show that $h^{-1}:\mathcal M\_e\to \hbox{Sym}^l S^2$ is continuous, it follows that *h* is also continuous. To show *h*− 1 is continuous, consider a sequence $C\_i \in \mathcal M\_e$ approaching to its Gromov-Hausdorff limit *C*. Let the intersection of *C**i* (resp. *C*) with *S* be (*x*1*i*, ⋯, *x**l**i*) (resp. (*x*1, ⋯, *x**l*)). Then (*x*1*i*, ⋯, *x**l**i*) has to approach (*x*1, ⋯, *x**l*) by the first item of the definition of topology on $\mathcal M\_e$. Therefore *h* is a homeomorphism. We remark that the subvarieties determined by (*x*1*i*, ⋯, *x**l**i*) with *x**j**i* ∈ *M* will not converge in general, especially when two points in the tuple converge to same point by a simple dimension counting. However when *x**i* ∈ *S*, they indeed converge in the Gromov-Hausdorff sense since the tangent plane is fixed as that of *S*. Notice that since the configuration of a subvariety of a sphere class is a tree, a non-primitive class will never be an irreducible component in a reducible subvariety. This is because there will be another irreducible component intersecting the non-primitive class more than once to form cycles, since the reducible subvariety is connected. Usually, nefness is a numerical way to guarantee the existence of smooth *J*-holomorphic curves. [nef=>smooth] If *e* is a *J*-nef class with *g**J*(*e*) = 0 and $\mathcal M\_e\ne \emptyset$, then it is represented by a smooth *J*-holomorphic sphere of non-negative self-intersection. Since *e* is *J*-nef and *J*-effective, we have *e* ⋅ *e* ≥ 0. Then by Proposition 4.5 of, we know *e* is represented by a smooth rational curve. Suppose *J* is a tamed almost complex structure on a rational surface *M*. If *e* is a *J*-nef class with *g**J*(*e*) = 0 and $\mathcal M\_e\ne \emptyset$, then $\mathcal M\_e$ is homeomorphic to $\mathbb CP^l$ with *l* 
arxiv_0000333
On permutations with decidable cycles ===================================== Recursive permutations whose cycles are the classes of a decidable equivalence relation are studied; the set of these permutations is called Perm, the group of all recursive permutations G. Multiple equivalent computable representations of decidable equivalence relations are provided. G-conjugacy in Perm is characterised by computable isomorphy of cycle equivalence relations. This result parallels the equivalence of cycle type equality and conjugacy in the full symmetric group of the natural numbers. Conditions are presented for a permutation *f* ∈ G to be in Perm and for a decidable equivalence relation to appear as the cycle relation of a member of G. In particular, two normal forms for the cycle structure of permutations are defined and it is shown that conjugacy to a permutation in the first normal form is equivalent to membership in Perm. Perm is further characterised as the set of maximal permutations in a family of preordered subsets of automorphism groups of decidable equivalences. Conjugacy to a permutation in the second normal form corresponds to decidable cycles plus decidable cycle finiteness problem. Cycle decidability and cycle finiteness are both shown to have the maximal one-one degree of the Halting Problem. Cycle finiteness is used to prove that conjugacy in Perm cannot be decided and that it is impossible to compute cycle deciders for products of members of Perm and finitary permutations. It is also shown that Perm is not recursively enumerable and that it is not a group. Introduction ============ An equivalence relation over the natural numbers is *decidable* if there is a Turing machine which decides for every pair (*x*, *x*ʹ) of numbers whether they are related or not. The set of all recursive permutations of the natural numbers is denoted G in this paper. We consider the subset Perm in G of recursive permutations whose orbits, or *cycles*, are the classes of a decidable equivalence relation. The present paper studies algorithmic and algebraic questions about Perm which arise naturally from this definition, which relates equivalence relations and permutations. Identify a permutation *f* of N with the digraph on vertices N whose arrows are given by the mapping *x* → *f*(*x*). A permutation is in G if its digraph is locally explorable by a Turing-computable algorithm. For the permutations in Perm a more global class of questions can be decided in addition, namely for any pair of numbers whether they belong to the same weakly connected component of the digraph. The weakly connected components in the digraph view correspond to the cycles of the permutation and we use these two terms synonymously. The set Perm appears in an attempt to transfer a well-known theorem from Group Theory to Recursion Theory. This theorem states that in a symmetric group, such as SymN, conjugacy is equivalent to cycle type equality. Cycle type equality of two permutations is the condition that for each countable cardinal, the both permutations have the same number of cycles of size that cardinal. An attempt at this transfer has been made by Kent in 1962: [] [theorem:kent] In the group G, a cycle type class is also a conjugacy class iff it is the cycle type class of a permutation with finitely many infinite cycles. Define the set G1 to consist precisely of those recursive permutations with finitely many infinite cycles. Then Kent’s theorem states that in G1 two permutations are G-conjugate iff they have the same cycle type and there is no proper superset ${\mathcal{G}}\_1 \subsetneq {\mathcal{G}}' \subseteq {\mathcal{G}}$ which is closed under cycle types and where this theorem holds, too. This formulation takes the same form as the one for full symmetric groups, but one might argue that G1 is a too small subset of G. We will show, for instance, that cycle decidability and cycle finiteness are trivial problems in G1 (Propositions [prop:fininfdec], [prop:fininfcf]). Thus one asks why the notions of conjugacy and cycle type equality drift apart in G. The crucial observation is that by changing the group from SymN to G, the notion of conjugacy changes. Conjugation in G is always afforded by a *recursive* permutation — it gets a constructive character. Cycle type equality, on the other hand, remains non-constructive in Kent’s theorem. If we define an effective version of cycle type equality, we are not restricted by the second part of Kent’s theorem anymore and may find a bigger set in which the new formulation of the theorem holds. To make cycle type equality effective, one first needs a witness for the condition of cycle type equality. Such a witness would be a bijection between the weakly connected components which preserves the size of each component, or, alternatively, a bijection between the sets of vertices which respects weakly connected components in both directions. If we denote the equivalence relation whose equivalence classes are the weakly connected components of *f* by  ≡ *f*, then we require a permutation *θ* of N such that *x* ≡ *f**x*ʹ ⇔ *θ*(*x*) ≡ *g**θ*(*x*ʹ). This takes the form of an equivalence isomorphism; see Figure [fig:conjcycletypeeq] for an illustration of witnesses for conjugacy and cycle type equality. The notions of equivalence relation and isomorphism thereof can easily be transferred into Recursion Theory and yield the desired definition of effective cycle type equality. The present paper starts by giving a number of possible definitions for decidable equivalences which parallel characterisations of equivalence relations in non-constructive Mathematics, and shows that they are equivalent, too, in Recursion Theory. Based on this solid notion of decidable equivalence, we are interested in recursive permutations *f* whose relation  ≡ *f* is decidable, as only for those permutations effective cycle type equality can be defined in the language of Recursion Theory. The set of these permutations is exactly Perm, and indeed one obtains an analogue to the theorem in SymN, our Theorem [theorem:permconj2], which states that in Perm G-conjugacy and effective cycle type equality are equivalent. This theorem has two advantages over the one in G1: [(1)] the equivalence is constructive, i.e. a witness for either condition can be converted into one for the other by Turing-computable algorithms, and G1 is properly (and trivially) contained in Perm, by Proposition [prop:fininfdec]. [fig:conjcycletypeeq] After the theorem about the equivalence of G-conjugacy and effective cycle type equality in Perm is established, section [sec:charperm] gives a number of characterisations of Perm. The two main ideas come from the two ways to approach Perm, as per its definition. The first is the question whether a given *f* ∈ G has decidable cycles, the second is whether a given decidable equivalence relation is realisable as the cycle equivalence of a permutation in G. In this process, [par:cycledec] collects evident sufficient and equivalent extrinsic criteria for cycle decidability. The approach of [par:normal] is to characterise cycle decidability intrinsically through normal forms for the cycle structure of permutations. Two forms, the *normal* and the *semi-normal* form, are defined. A corollary to Theorem [theorem:decnormal] is that a permutation has decidable cycles iff it is G-conjugate to a permutation in normal form. Indeed if a permutation is in normal form, a decider for its cycles can be extracted from the permutation alone. Normality, in this light, is a uniformity condition. Then, [par:permutability] gives sufficient and equivalent criteria for a decidable equivalence to be *permutable*, i.e. to appear as the cycle equivalence of a recursive permutation. It is shown that decidable and permutable equivalences are in bijection with recursive normal-form permutations. The techniques developed in this subsection give powerful tools to construct permutations from mere equivalence relations which only encode the orbits of a permutation, not the cyclic structure. Especially Corollary [coro:pairrhochar] proves to be useful in existence theorems such as Proposition [prop:nonpermut], Lemma [lemma:infcycle] and Theorem [theorem:interred]. One further characterisation of Perm is of order-theoretic nature. It is shown that the elements of Perm are exactly the permutations in a preordered subset of the automorphism group of some decidable equivalence which are maximal with respect to cycle inclusion. Here, the characterisation via maximality does not use notions of computability; these occur only in the setting, i.e. the choice of the preordered sets. While the normal form deals with cycle decidability, the semi-normal form specifies an incompatibility between the structure of finite and infinite cycles which makes it possible, in addition to deciding the cycles, to decide the *cycle finiteness problem* of the semi-normal permutation, that is for each number *x* to determine if its cycle is finite or infinite. It has previously been shown by Lehtonen  that there are recursive permutations which can be defined by fairly elementary formulae but whose cycle finiteness problem is undecidable. In [par:cyclefinite] in section [sec:cfunsolv], it is proved that the set of permutations for which this problem is decidable is precisely the union of G-conjugacy classes of recursive semi-normal permutations. Remarkably, the sets of permutations for which the cycle decidability and cycle finiteness problems are solvable, each possess a set of generators with respect to G-conjugacy, namely the normal and the semi-normal permutations, such that the problems are uniformly solvable on these generators. The fact that cycle finiteness in general is undecidable in Perm can be applied in reductions. For example, deciding cycle finiteness is an instance of deciding conjugacy in Perm, which implies that the latter is also undecidable. As a corollary to a result by van Leeuwen , we obtain that Perm is not recursively enumerable. In [par:diffcycledec] it is shown that cycle decidability problems in G are at most as hard as the Halting Problem for Turing machines and that this upper bound is attained. It is also proved that the classes of cycle decidability problems and cycle finiteness problems in G are reducible to each other. The last subsection concerns multiplicative closure of Perm. It is shown that a product of a member of Perm and a finitary permutation is again in Perm. In contrast to Theorem [theorem:permconj2], this fact has no algorithmic content. The third part of Corollary [coro:permclosure] states that there is no computable mapping which associates a decider for the cycles of *a**f* ∈ Perm to every triple (*a*, *f*, *π*), where *a* ∈ *F*, *F* the set of finitary permutation, *f* ∈ Perm and *π* a decider for the cycles of *f*. Such an algorithm exists, however, for semi-normal permutations. By Corollary [coro:permnotgroup], a product of two arbitrary (non-finitary) members of Perm does not necessarily reside in Perm which shows that Perm is not a group. This may be seen as an ultimate defect in trying to recover the aforementioned theorem on conjugacy in symmetric groups inside Recursion Theory. Theorem [theorem:permconj2] is an entirely constructive version of this theorem, but its domain, Perm, is missing the group structure from the non-constructive original, which means that the notion of conjugacy has to be borrowed from the ambient group G. Kent’s set G1 suffered from the same problem. Indeed this failure is predetermined by a result on the composition series of G, also due to Kent: no set strictly between *F* and G which is closed under G-conjugation, i.e. normal, can be a group. If not explicitly introduced, the notation follows, which can also serve as the primary resource for recursion-theoretic facts used without citation. The set of natural numbers, including 0, is denoted N and the set of positive integers as N+. We denote the domain of a partial function *f* by dom*f* and its range by rng*f*. If a composition of functions is applied to an object, we save parentheses: instead of *f*(*g*(*x*)) or (*f**g*)(*x*) only the “application to *x*” parentheses are written: *f**g*(*x*). By ⟨*x*, *y*⟩ we denote a recursive *pairing function* which maps pairs (*x*, *y*) to numbers bijectively. Instead of *f*(⟨*x*, *y*⟩) we write *f*⟨*x*, *y*⟩. Existential and universal quantifiers are taken over the natural numbers if no set is given. The complement of a set *A* is also understood as the complement in the natural numbers and written as ${\overline{A}}$. Usually we do not discriminate between a partial recursive function *f* and a program, or *Gödel number*, **f** under the standard numbering of partial recursive functions: *φ***f** = *f*. The associated standard numbering of recursively enumerable sets is *W**x* :  = dom*φ**x*. When the function *f* is used in the description of an algorithm, it is implied that any Gödel number for *f* would suffice. Most proofs in this paper are constructive and yield, by virtue of the Church-Turing Thesis, a (Turing-computable) algorithm. This is usually indicated by an *effectiveness* addition in the formulation of the statement. The term “uniformly effectively” is used throughout the text according to. The term “cycle” is used with two different meanings in this paper. Firstly, it can mean an orbit of the group action a permutation affords on its domain by function application. Secondly it abbreviates *cyclic permutation*, a permutation whose support consists of at most one orbit. Commonly *f*, *g*, *h* denote recursive permutations, often members of Perm, *σ*, *ρ*, *ψ* partial recursive functions encoding an object of importance in a limited scope, *π*, *γ* deciders for equivalences Π, Γ, and *x*, *y*, *z* natural numbers. Acknowledgement. ---------------- The author wishes to thank Thomas Kahle for his support of this paper and discussion of earlier versions which helped to improve the presentation. Decidable equivalences and their isomorphisms ============================================= currentlabel [par:decequiv] currentlabel This subsection gives a number of possible constructive representations for decidable equivalence relations and shows that they all are effectively equivalent: for each pair of representations there is an algorithm which transforms one into the other. A set $\Pi \subseteq {\ifmmode\mathcal{P}\else\textparagraph\fi}({\mathbb{N}})$ of subsets of N is called a *partition* if the sets in Π are non-empty, pairwise disjoint and their union is N. The elements of Π are called its *blocks*. Denote by *P*(*x*, Π) the unique block *P* ∈ Π with *x* ∈ *P*. Usually we abbreviate this to *P*(*x*) if there is no danger of confusion. It is well known that the concepts of partition and equivalence relation are equivalent: to each partition Π there is the equivalence *x* ≡ Π*x*ʹ :  ⇔ *P*(*x*) = *P*(*x*ʹ) and the classes of every equivalence form a partition. We write *x*Π*x*ʹ instead of *x* ≡ Π*x*ʹ and speak of Π also as an equivalence relation. An equivalence relation Π over N is *decidable*, if for every *x*, *x*ʹ ∈ N, the predicate *x*Π*x*ʹ is decidable. Let t and f be two distinct natural numbers, representing *true* and *false*. We adopt the following syntax for predicates *p*: $$[p(x\_1, \dots, x\_n)] := \begin{cases} {\mathfrak{t}}, & \text{$p(x\_1, \dots, x\_n)$ is true}, \\ {\mathfrak{f}}, & \text{else}, \end{cases}$$ Then we can state the decidability of Π as follows: there exists a recursive function *π* of two variables such that *π*(*x*, *x*ʹ) = [*x*Π*x*ʹ]. For Recursion Theory, the need for representation of objects as natural numbers arises. A partition Π is identified with its set of blocks. By the symbol  ≡ Π we denote the same equivalence relation as Π but represented in the customary fashion as a set of ordered pairs (*x*, *x*ʹ) ∈   ≡ Π  :  ⇔ *x*Π*x*ʹ. Then we see that Π is a decidable equivalence iff  ≡ Π is a recursive set (of pairs) in the usual sense of. This subsection shows that the distinction of these two representations is immaterial to computability. The following lemma is obvious from the *s**n**m* Theorem: [lemma:sigmachar] Π is decidable iff there is a computable function *σ* such that *φ**σ*(*x*) decides *P*(*x*) for all *x* ∈ N. The decider *π* for Π and the function *σ* are computationally equivalent, in that either can be computed from the other. If Π is decidable, then every block of Π is a decidable set. The converse is not true. As Lemma [lemma:sigmachar] suggests, a *uniform* decider for the blocks is needed. An example of an undecidable equivalence on N whose blocks are all recursive can be manufactured from an undecidable problem in two variables which becomes decidable if one of its variables is fixed. Consider the two-variable decision problem, given a program *x* and a (suitable encoding of) a cardinal number *m* ≤ ℵ0 to decide if ∣*W**x*∣ = *m*. Write *C*(*x*, *m*) = [∣*W**x*∣ = *m*]. By Rice’s Theorem *C*(*x*, *m*) is undecidable, however if *x* is fixed, it becomes trivial. Define an equivalence Π as ⟨*x*, *m*⟩Π⟨*x*ʹ, *m*ʹ⟩ :  ⇔ *x* = *x*ʹ ∧ *C*(*x*, *m*) = *C*(*x*, *m*ʹ). The block *P*(⟨*x*, *m*⟩, Π) = {⟨*x*, *m*ʹ⟩ : *C*(*x*, *m*) = *C*(*x*, *m*ʹ)} is a recursive set as *C*(*x*, *m*) is computable for fixed *x*. *If* the equivalence as a whole was decidable, we could decide *E*(*x*, *m*, *m*ʹ) = [*C*(*x*, *m*) = *C*(*x*, *m*ʹ)] uniformly in *x*, *m*, *m*ʹ. Now let *x*, *m* be given. Choose two distinct cardinals *p*, *q* which are also distinct from *m*. At least two of *C*(*x*, *m*), *C*(*x*, *p*), *C*(*x*, *q*) must be false. A complete discussion of the cases | *C*(*x*, *m*) | *C*(*x*, *p*) | *C*(*x*, *q*) | *E*(*x*, *m*, *p*) | *E*(*x*, *m*, *q*) | *E*(*x*, *p*, *q*) | | --- | --- | --- | --- | --- | --- | | f | f | t | t | f | f | | f | t | f | f | t | f | | t | f | f | f | f | t | | f | f | f | t | t | t | characterises the statement *C*(*x*, *m*) = t. Thus if we assume *E*(*x*, *m*, *m*ʹ) to be uniformly computable in all its parameters, we can decide ∣*W**x*∣ = *m* for any given *x* and *m*, which is a *contradiction*. [prop:pchar] Let *A* be a non-empty initial segment, i.e. either *A* = {0, …, *n* − 1} for some *n* ∈ N+ or *A* = N. Then a partition Π = {*P**i* : *i* ∈ *A*} is decidable iff there is a partial recursive function *p* such that if *i* ∈ *A* it follows that *φ**p*(*i*) decides *P**i*. A program for *p* can be obtained from a decider *π* for Π and vice versa. “ ⇒ ”: Let *π* decide Π. It follows from Lemma [lemma:sigmachar] that every block is decidable. We need to enumerate a decider for every block of Π so that each block is represented exactly once among the first ∣Π∣ values of *p*. Define *i*(*x*) :  = *μ**x*ʹ[*x*Π*x*ʹ] which is total recursive via *π* and returns the least representative of the block of *x*. Let *σ* be the function obtained from *π* by Lemma [lemma:sigmachar]. Now $$\begin{aligned} n(x) &:= \mu x'[\forall y < x: i(x') \not= in(y)], \\ p(x) &:= \sigma n(x)\end{aligned}$$ are both computable by appeal to the Church-Turing Thesis. *p* is obtainable uniformly effectively from *π*. We will prove that *p* is the wanted enumeration. The *μ* search in *n* for input *x* computes *n*(*y*) for all *y* < *x*. It follows that if *p* halts on input *x* it must have halted on all inputs *y* < *x* and if *p* does not halt on *y* it will not halt on inputs *x* > *y*. The set of inputs where *p* halts is an initial segment *A* of N. If *x* ∈ *A*, *p*(*x*) ∈ rng*σ* is a program to decide some block of Π. We claim that *n* possesses these three properties: [2em(a)] *i**n*(*x*) = *n*(*x*), *x* < *z* ⇒ *n*(*x*) < *n*(*z*) and *x* ≤ *n*(*x*) for all *x*, *z*. Clearly *i**n*(*x*) ≤ *n*(*x*) and furthermore $\forall y < x: iin(x) = in(x) \not= in(y)$ where the inequality follows by definition of *n*(*x*). This means that *i**n*(*x*) also satisfies the condition over which *n*(*x*) is minimised, so *n*(*x*) ≤ *i**n*(*x*) and it must hold equality. For the second property observe that the sets over which *n*(*x*) and *n*(*z*) minimise are isotone: $\{x' : \forall y < x: i(x') \not= in(y)\} \subsetneq \{x' : \forall y < z: i(x') \not= in(y)\}$. The difference of these sets includes *n*(*x*) so that indeed *n*(*x*) < *n*(*z*). The last property follows by induction from the monotonicity: we have 0 = *n*(0) and if *x* ≤ *n*(*x*) for some *x*, it follows *x* ≤ *n*(*x*) < *n*(*x* + 1) and thus *x* + 1 ≤ *n*(*x* + 1). We can now prove that each block *P* ∈ Π has at least one decider in rng*p*: Let *j* be the minimal element in *P*, then *i*(*j*) = *j*. If *j* was not already taken as an image *n*(*y*) for *y* < *j*, it must hold that $\forall y < j: j \not= n(y)$. But *j* = *i*(*j*) and *n*(*y*) = *i**n*(*y*), so that $\forall y < j: i(j) \not= in(y)$, i.e. *j* satisfies the condition over which *n*(*j*) minimises and *j* ≥ *n*(*j*). By the third property above also *n*(*j*) ≥ *j* and so *j* = *n*(*j*) at the latest. On the other side, we see from the definition of *n* that $\forall y < x: in(x) \not= in(y)$, which means *n*(*x*) and *n*(*y*) are in different blocks. Therefore *σ**n*(*x*) decides a different block than *σ**n*(*y*) for *y* < *x*. Consequently each block has at most one decider. In conclusion *p* is partial recursive and halts precisely on an initial segment of N. Wherever it halts it gives a program to decide a block of Π so that each block gets exactly one decider. “ ⇐ ”: The function *j*(*x*) :  = *μ**j*[*x* ∈ *P**j*] is computable by means of *p*(*j*). It is a total function because Π is a partition. *A* is an initial segment of N and every *x* will be found in some *P**j* for *j* ∈ *A*, i.e. before, in the *μ* search in *j*(*x*), *p* must be evaluated for an argument *j* ∉ *A*, where its behaviour is not specified. Then *x*Π*x*ʹ ⇔ *j*(*x*) = *j*(*x*ʹ). Hence *π*(*x*, *x*ʹ) :  = [*j*(*x*) = *j*(*x*ʹ)] decides Π and can be obtained uniformly effectively from a program for *p*. Another way to constructively define equivalence relations is to require a method which assigns each number a label. The equivalence classes are formed by grouping equally labeled numbers together: [prop:recdec] For each decidable equivalence Π there is a recursive function *r* whose non-empty fibers are the blocks of Π. Conversely the set of non-empty fibers of any recursive function is a decidable equivalence. There are algorithms to convert a decider for Π into a recursive function and vice versa. If Π is decidable, the function *r*(*x*) :  = *μ**x*ʹ[*x*Π*x*ʹ] is computable. Obviously *r*(*x*) = *r*(*x*ʹ) ⇔ *x*Π*x*ʹ. If *r* is any recursive function, then *λ**x**x*ʹ[*r*(*x*) = *r*(*x*ʹ)] is a recursive decider for the equivalence induced by the non-empty fibers of *r*. The results thus far enable us to represent a decidable partition Π in one of four constructive ways: [(1)] as the recursive function *π* which decides the predicate [*x*Π*x*ʹ], as the *σ* function from Lemma [lemma:sigmachar], as the partial recursive *p* from Proposition [prop:pchar] which enumerates deciders for the blocks of Π, and as the non-empty fibers of a recursive function . The function *π* decides  ≡ Π as a set of ordered pairs while *p* corresponds to the representation of the partition Π as a set of blocks. We have thus seen that these representations are computationally equivalent, i.e. we can go effectively from one representation to another. This result justifies our identification of partitions as sets of blocks with equivalence relations in the form of sets of ordered pairs. We remark in passing that it is not critical that *p* enumerates deciders for the blocks; recursive enumerators are sufficient. The proof is via dove tailing. [prop:qchar] Let *A* be a non-empty initial segment and Π = {*P**i* : *i* ∈ *A*}. Then Π is decidable iff there is a partial recursive *q* such that dom*q* = *A* and if *i* ∈ *A*, then *φ**q*(*i*) enumerates *P**i*. A program for *q* can be obtained from a decider *π* for Π and vice versa. currentlabel [par:decperm] currentlabel For any permutation *f*, the set of cycles of *f* form a partition of its domain. Denote the cycle of *f* to which *x* belongs by [*x*]*f*. Let Φ temporarily denote the function which maps a permutation to its thus associated partition. Conversely, to any partition Π of a set, the inverse image Φ− 1(Π) is the set of permutations associated with it. For Recusion Theory, these considerations are restricted to decidable equivalences and recursive permutations. If Π is a decidable equivalence, the set of associated permutations is PermΠ :  = Φ− 1(Π) = {*f* ∈ G : ∀*x* : *P*(*x*, Π) = [*x*]*f*}. A recursive permutation *f* has an associated partition Part*f* :  = Φ(*f*) = {[*x*]*f* : *x* ∈ N}. The corresponding equivalence relation, for infix usage, is denoted  ≡ *f*. The central object of study in this paper is the set Perm :  = ⋃$\Pi$ decidablePermΠ. Writing the cycle equivalence relation down explicitly, *x* ≡ *f**x*ʹ :  ⇔ ∃*k* ∈ Z : *x*ʹ = *f**k*(*x*), it takes the form of a *reachability relation*. A recursive permutation *f* is in Perm precisely when reachability in its undirected functional graph is decidable. It is in general not true that Part*f* is a decidable partition for each *f* or that PermΠ is non-empty for each decidable Π. Section [sec:charperm] treats the questions of when these statements do hold extensively. At this point we give the relevant definitions only: An equivalence Π is *permutable* if $\operatorname{Perm}\Pi \not= \emptyset$. Conversely, a permutation *f* has *decidable cycles* if Part*f* is a decidable equivalence. In this and the following subsection we concentrate on the relation of the many possible permutations which belong to a single decidable partition Π and describe the conjugacy classes in Perm in terms of decidable equivalences. In a subset of the group G, two permutations *f* and *g* are G-conjugate if there is *h* ∈ G such that *f* = *h*− 1*g**h*. Instead of G-conjugate we will use the term *effectively conjugate* or just *conjugate*. This relation is denoted as *f* ∼ *g*. A first step in understanding PermΠ is provided by [prop:permconj1] Let Π be decidable and *f*, *g* ∈ PermΠ. Then *f* and *g* are effectively conjugate. Moreover, given programs for *f*, *g* and a decider *π* for Π, a program to compute the conjugation between *f* and *g* can be described. If *f* and *g*, as above, belong to the same equivalence relation, they have the same cycle type and would be conjugate in the full (non-constructive) symmetric group of the natural numbers. Although G is not a symmetric group, the standard proof, as found in carries through under the additional premises of the proposition. A major non-constructive step in their proof is to pair cycles from *f* and *g* with matching lengths. This problem does not arise in our scenario because for each *x* we know that the cycle in *f* to which *x* belongs has the same length as the cycle for *x* in *g*, since *f* and *g* are permutations associated to the same partition. It is also convenient that the cycles for *x* in *f* and in *g* contain the same elements. Before we give the proof, we introduce the *ξ* operator which is the *μ* operator equivalent for (signed) integers. Its introduction is motivated by the observation that powers of permutations act like integers modulo cycle length. [def:delta] 1. Let *δ* : Z → N denote the (intuitively computable) bijection $$\delta(k) := \begin{cases} 0, & k = 0, \\ -2k - 1, & k < 0, \\ 2k, & k > 0. \end{cases}$$ 2. A partial function $\beta: {\mathbb{Z}}\rightsquigarrow {\mathbb{Z}}$ is called *partial recursive* if $\delta\beta\delta^{-1}: {\mathbb{N}}\rightsquigarrow {\mathbb{N}}$ is a partial recursive function. 3. Let $p: {\mathbb{Z}}\rightsquigarrow {\mathbb{Z}}$ be a partial recursive function. By writing *ξ**k*[*p*(*k*)] we mean that *k* is an integer and we perform *alternating search*, that is we evaluate *p*(0), *p*( − 1), *p*(1), *p*( − 2), *p*(2), … in this order, until *p* becomes t for the first time. The first argument *k* in this process achieving *p*(*k*) = t becomes the value of the *ξ* expression and if no such argument exists or *p* is undefined on some argument on the way, *ξ* does not terminate. $p: {\mathbb{Z}}\rightsquigarrow {\mathbb{Z}}$ be a partial recursive function. Then *ξ**k*[*p*(*k*)] is partial recursive. Observe that *ξ**k*[*p*(*k*)] = *δ*− 1*μ**i*[*δ**p**δ*− 1(*i*)] with the exact same semantics. The *δ* function takes an integer *k* and, to produce the output natural number, doubles the absolute value of *k* and subtracts from this the sign bit of *k*. To compute the inverse of *δ*, examine the parity of the input, as it determines the sign of the output. The absolute value of the output integer is half of the input, rounded up to the next integer. Based on these intuitions for dealing with *δ* and *δ*− 1, and by appeal to the Church-Turing Thesis, the function *λ**x**x*ʹ*ξ**k*[*f**k*(*x*) = *x*ʹ] is partial recursive and total on pairs (*x*, *x*ʹ) with *x* ≡ *f**x*ʹ. [Proof of Proposition [prop:permconj1]] We construct the conjugation *h*. Let *π* decide Π and *x* be given, then calculate *y*(*x*) :  = *μ**y*[*x*Π*y*] via *π*. This calculation terminates as *x*Π*x*, so *y*(*x*) ≤ *x*. By alternating search we determine *k*(*x*) :  = *ξ**k*[*f**k**y*(*x*) = *x*] ∈ Z which must exist. Return *h*(*x*) :  = *g**k*(*x*)*y*(*x*). The following facts are immediate from the definition of *h* and the prerequisites: [2em(i)] *y*(*z*) = *y*(*x*) iff *z* ∈ [*x*]*f* iff *z*Π*x* iff *z* ∈ [*x*]*g*, for any *x* the cycle lengths ∣[*x*]*f*∣ and ∣[*x*]*g*∣ match by (i), for any *x* the value *y*(*x*) is a fixed point of *h*, *f**k**y*(*x*) = *f**k*ʹ*y*(*x*) iff *g**k**y*(*x*) = *g**k*ʹ*y*(*x*) by (ii), if *z*Π*x* then *h*(*z*) = *g**k*(*z*)*y*(*x*) by (i). Because *f**k*(*x*)*y*(*x*) = *x*, it follows that *h**f**k*(*x*)*y*(*x*) = *g**k*(*x*)*y*(*x*) holds for all inputs *x*. We want to modify this equation so that the exponent *k* is free in Z. Take any *x* ∈ N and *k* ∈ Z and set *z* = *f**k**y*(*x*) so that *z*Π*x*. There is also the representation *z* = *f**k*(*z*)*y*(*x*) by (v). This yields *h*(*z*) = *g**k*(*z*)*y*(*x*) = *g**k**y*(*x*) by (iv), which implies $$\begin{aligned} hf^ky(x) = g^ky(x) \tag{$\*$} \label{eq:hfkyx}\end{aligned}$$ for all *x* ∈ N and *k* ∈ Z. To see that *h* is a permutation, assume first *h*(*x*) = *h*(*x*ʹ), i.e. *g**k*(*x*)*y*(*x*) = *g**k*(*x*ʹ)*y*(*x*ʹ). Because *y*(*x*) = *g**k*(*x*ʹ) − *k*(*x*)*y*(*x*ʹ), *y*(*x*) and *y*(*x*ʹ) are in the same cycle of *g* and hence in the same cycle of *f* and must therefore be equal, as values in the range of *y*. We have shown *g**k*(*x*)*y*(*x*) = *g**k*(*x*ʹ)*y*(*x*) which implies *x* = *f**k*(*x*)*y*(*x*) = *f**k*(*x*ʹ)*y*(*x*) = *x*ʹ. To show surjectivity, let *z* be given. There is a representation as *z* = *g**k**y*(*z*) for some *k* ∈ Z because *z* ∈ [*y*(*z*)]*f* = [*y*(*z*)]*g*. But then already *h**f**k**y*(*z*) = *g**k**y*(*z*) = *z* by . This shows that *h* is a permutation, and for all *x* = *f**k**y*(*x*) we see via  that *h*− 1*g**h*(*x*) = *h*− 1*g**h**f**k**y*(*x*) = *h*− 1*g**k* + 1*y*(*x*) = *f**k* + 1*y*(*x*) = *f*(*x*),  which completes the proof. currentlabel [par:isoconj] currentlabel Proposition [prop:permconj1] shows that two permutations with identical equivalences are conjugate. The converse is not true as the following simple example shows: let $f = \cycle{\dots, 6, 2, 0, 4, 8, \dots}$ and $g = \cycle{\dots, 7, 3, 1, 5, 9, \dots}$. They define different equivalences but are conjugate via $\cycle{0,1}\cycle{2,3}\cycle{4,5}\dots$. However, the equivalences defined by *f* and *g* are “essentially the same”, in that they have the same block structure. This observation suggests that studying equivalence isomorphisms leads to a better understanding of effective conjugacy classes. Π and Γ be equivalences. A recursive permutation *θ* is a *(Π, Γ)-isomorphism* if *x*Π*y* ⇔ *θ*(*x*)Γ*θ*(*y*). Isomorphy of Π and Γ is written as Π ≅ Γ. The next goal is a characterisation for the solvability of conjugation equations *f* = *h*− 1*g**h* when *f*, *g* have decidable cycles, and an algorithm for computing the solution *h* if a witness for the solvability is available. This result extends Proposition [prop:permconj1]. To this end, some preparations are needed. Let Π be a decidable equivalence, *h* a recursive permutation. Then Π*h* :  = *h*(Π) :  = {*h*(*P*) : *P* ∈ Π} is a decidable partition. Since *h* is a permutation, Π*h* is again a partition. Then *x*Π*h**x*ʹ iff *P*(*x*, Π*h*) = *P*(*x*ʹ, Π*h*), which is the case iff *x* and *x*ʹ belong to the same *h*(*P*), for some *P* ∈ Π. But this is equivalent to *h*− 1(*x*) and *h*− 1(*x*ʹ) belonging to the same *P* ∈ Π, i.e. *h*− 1(*x*)Π*h*− 1(*x*ʹ). Thus *λ**x**x*ʹ[*h*− 1(*x*)Π*h*− 1(*x*ʹ)] is a recursive decider for Π*h*. [lemma:fpi] Let *f* be any recursive permutation. Then *x*Π*x*ʹ ⇔ *f*(*x*)Π*f**f*(*x*ʹ). Π*f* is a decidable partition. Then clearly *x*Π*x*ʹ ⇒ *f*(*x*)Π*f**f*(*x*ʹ). The converse follows by repeating the argument with *f*− 1 in place of *f*. [prop:gammathetapi] If *θ* is a recursive permutation, then it is a (Π, Γ)-isomorphism iff Γ = Π*θ*. “ ⇒ ”: If *θ* is an isomorphism, we have *y*Γ*y*ʹ ⇔ *θ*− 1(*y*)Π*θ*− 1(*y*ʹ) ⇔ *y*Π*θ**y*ʹ according to Lemma [lemma:fpi]. Thus Γ = Π*θ*. “ ⇐ ”: *x*Π*x*ʹ ⇔ *θ*(*x*)Π*θ**θ*(*x*ʹ) ⇔ *θ*(*x*)Γ*θ*(*x*ʹ). The set of (Π, Π)-isomorphisms is the *automorphism group* of Π, denoted AutΠ. This is indeed a group under composition (a subgroup of G) as *x*Π*x*ʹ ⇔ id(*x*)Πid(*x*ʹ), hence id ∈ AutΠ. If *f*, *g* ∈ AutΠ, then by Proposition [prop:gammathetapi]: Π*f**g* = (Π*g*)*f* = Π*f* = Π and Π*f*− 1 = (Π*f*)*f*− 1 = Π*f*− 1*f* = Π so that *f**g*, *f*− 1 ∈ AutΠ. [lemma:fxequivx’] Let Π be decidable and *f* ∈ G. Consider the predicate $$\begin{aligned} \forall x, x' : x \Pi x' \Leftrightarrow f(x) \Pi x'. \tag{$\*$} \label{eq:fxequivx'}\end{aligned}$$ Then $f \in \operatorname{Perm}\Pi \Rightarrow \eqref{eq:fxequivx'} \Rightarrow f \in \operatorname{Aut}\Pi$. In particular PermΠ ⊆ AutΠ. If *f* ∈ PermΠ, it follows that *x*Π*f*(*x*). Together with *x*ʹΠ*x*, this implies *x*ʹΠ*f*(*x*). With *f* ∈ PermΠ is also *f*− 1 ∈ PermΠ and thus *f*(*x*)Π*x*ʹ ⇒ *x* = *f*− 1*f*(*x*)Π*x*ʹ. Now assume *f* ∈ G and holds. By symmetry of Π and twofold application of, we obtain *x*Π*x*ʹ ⇔ *f*(*x*)Π*f*(*x*ʹ), i.e. Π = Π*f* by Lemma [lemma:fpi]. Then it follows that *f* ∈ AutΠ by Proposition [prop:gammathetapi]. With Lemma [lemma:fxequivx’] we are in a position to approach the following problem: given *f* ∈ PermΠ, Π decidable, and *h* a recursive permutation, a member of PermΠ*h* is to be described. If we assume that there is some *g* ∈ PermΠ*h*, it would satisfy the relation *g*(*y*)Π*h**y*ʹ ⇔ *y*Π*h**y*ʹ ⇔ *h*− 1(*y*)Π*h*− 1(*y*ʹ) ⇔ *f**h*− 1(*y*)Π*h*− 1(*y*ʹ) ⇔ *h**f**h*− 1(*y*)Π*h**y*ʹ, by Lemmata [lemma:fxequivx’] and [lemma:fpi]. This shows that, at least under the assumption that Π*h* is permutable, *h**f**h*− 1 ∈ AutΠ*h*. The following proposition strengthens this deduction in two ways: indeed *h**f**h*− 1 ∈ PermΠ*h* and this is independent of the assumption that Π*h* is permutable. A technical lemma is needed, which is obvious from the definition of PermΠ: [lemma:permcrit] *f* ∈ PermΠ iff *f* ∈ G and for all *x*0 the function Z ∋ *k* ↦ *f**k*(*x*0) is surjective on *P*(*x*0, Π). [prop:memberhpi] Let *f* ∈ PermΠ and *h* ∈ G, then *h**f**h*− 1 ∈ PermΠ*h*. Let *y*0 = *h*(*x*0) and *y* = *h*(*x*) ∈ *P*(*y*0, Π*h*) be given. Then *y*0Π*h**y* implies *x*0Π*x* by Lemma [lemma:fpi]. Because *f* ∈ PermΠ, we can find *k* so that *f**k*(*x*0) = *x*. Now (*h**f**h*− 1)*k*(*y*0) = *h**f**k**h*− 1(*y*0) = *h**f**k*(*x*0) = *h*(*x*) = *y*. It remains to show that *h**f**k**h*− 1(*y*0) is in *P*(*y*0, Π*h*) for every *k*. Because *f* ∈ PermΠ, it is *f**k*(*x*0) ∈ *P*(*x*0, Π) and thus *h**f**k*(*x*0) ∈ *P*(*y*0, Π*h*) by definition of Π*h*. Lemma [lemma:permcrit] shows that *h**f**h*− 1 ∈ PermΠ*h*. We remark that a more satisfying proof of this proposition can be given with the notions introduced in [par:orderperm]. It is not difficult to prove that the condition is characteristic for the set IΠ which lies between PermΠ and AutΠ and is equipped with a preorder in [par:orderperm]. The mapping *f* ↦ *h**f**h*− 1 is an order isomorphism IΠ → IΠ*h* and it is shown that PermΠ is the set of upper bounds on IΠ. Since an order isomorphism preserves upper bounds, it follows that *h**f**h*− 1 is an upper bound on IΠ*h* which immediately yields *h**f**h*− 1 ∈ PermΠ*h*. We have remarked before the proof of Proposition [prop:permconj1] that conveniently the permutations *f* and *g* for a decidable partition Π have not only the same cycle type and that the cycles are paired already according to their length but the cycles of *f* and *g* to which an *x* belongs contain the same elements. Replacing identity of cycles with isomorphy yields a characterisation of conjugacy for members of Perm: [theorem:permconj2] Let *f*, *g* be recursive permutations for decidable partitions Π, Γ respectively. Then *f* and *g* are effectively conjugate iff Π and Γ are isomorphic. Put another way: ∀*f*, *g* ∈ Perm : *f* ∼ *g* ⇔ Part*f* ≅ Part*g*. If a decider for one of the equivalences is known, a conjugation between *f* and *g* can be obtained uniformly effectively from an isomorphism of the equivalences and vice versa. “ ⇒ ”: Let *f* = *h*− 1*g**h* for some recursive permutation *h*. We claim that *h* is a (Π, Γ)-isomorphism. By Proposition [prop:gammathetapi] this is equivalent to the statement Π*h* = Γ. By Proposition [prop:memberhpi] we have *f* = *h*− 1*g**h* ∈ PermΓ*h*− 1, but also *f* ∈ PermΠ. Thus the blocks of Π are the cycles of *f* which are also the blocks of Γ*h*− 1. This implies Π = Γ*h*− 1 and since *h* is a permutation, Π*h* = Γ. “ ⇐ ”: Let *θ* be a (Π, Γ)-isomorphism. Then *θ*− 1*g**θ* is in PermΠ because by Proposition [prop:memberhpi]: *θ*− 1*g**θ* ∈ PermΓ*θ*− 1 and Γ*θ*− 1 = Π by Proposition [prop:gammathetapi]. By virtue of Proposition [prop:permconj1] there is a recursive permutation *h* such that *f* = *h*− 1*θ*− 1*g**θ**h*, thus *f* and *g* are effectively conjugate via *θ**h*. In view of permutations *f*, *g* ∈ Perm, the isomorphy of their respective equivalences Part*f* ≅ Part*g* is also called *effective cycle type equality*. From Theorem [theorem:permconj2] it follows that if *f* has decidable cycles and Π is decidable and permutable, then $$\begin{gathered} [f]\_\sim = \bigcup\_{\Gamma \cong \operatorname{Part}f} \operatorname{Perm}\Gamma, \\ [\Pi]\_{\cong} = \{\operatorname{Part}g : \exists f \in \operatorname{Perm}\Pi : g \sim f\},\end{gathered}$$ and using the former equation $$\begin{aligned} \operatorname{Perm}= \bigcup\_{\Pi \; \text{dec.}} \operatorname{Perm}\Pi = \bigcup\_{\operatorname{Part}f \; \text{dec.}} [f]\_\sim.\end{aligned}$$ Since Perm is a union of conjugacy classes, we obtain [coro:permnormal] Perm is a normal subset of the group of recursive permutations. Characterisations of Perm ========================= The definition of Perm establishes a relation between the permutations of N, SymN, and the equivalence relations over N, EqvN. Regarding the properties of “recursiveness” and “decidability”, this relation is non-trivial, as each of the four cases in Figure [fig:symeqv] is possible. From this relation arise two approaches to characterise Perm. The first starts in the “recursive permutation” column and asks when the permutation has decidable cycles and the second starts in the “decidable equivalence” row asks when the equivalence is permutable. The global questions whether every recursive permutation has decidable cycles and whether all decidable equivalences are permutable are both answered in the negative, as referenced in Figure [fig:symeqv]. This section follows these two directions and also gives an order-theoretic characterisation of Perm. | | recursive | non-recursive | | --- | --- | --- | | decidable | Perm | Proposition [prop:nonpermut] | | undecidable | Lemma [lemma:gminusperm] | $\Pi = \{K, {\overline{K}}\}$ | [fig:symeqv] currentlabel [par:cycledec] currentlabel This subsection gives cycle decidability criteria, i.e. how must a recursive permutation be designed so that its cycles induce a decidable equivalence. There is a sufficient condition which is practically useful because it only requires some information from the cycle type of *f*: [lemma:oneinfdec] If *f* is a recursive permutation with at most one infinite cycle, then Π = Part*f* is decidable. A decider for Π can be computed from *f*. Let *x*, *x*ʹ be given. If *x* = *x*ʹ, the case is clear and we report *π*(*x*, *x*ʹ) :  = t. In the other case, search $$i = \mu i[i \ge 1 \wedge \{f^i(x), f^i(x')\} \cap \{x, x'\} \not= \emptyset].$$ This search always terminates: 1. If both *x* and *x*ʹ are in an infinite cycle, they must be in the same cycle. Because $x \not= x'$ at this point, there is an *i* ≥ 1 such that *f**i*(*x*) = *x*ʹ or *f**i*(*x*ʹ) = *x*, whereas *f**i*(*x*) = *x* or *f**i*(*x*ʹ) = *x*ʹ is impossible for *i* ≥ 1. 2. If they are in the same finite cycle, we will find *f**i*(*x*) = *x*ʹ or *f**i*(*x*ʹ) = *x* *before* *f**i*(*x*) = *x* or *f**i*(*x*ʹ) = *x*ʹ can be encountered. 3. If they are in different cycles *f**i*(*x*) = *x*ʹ ∨ *f**i*(*x*ʹ) = *x* is unsatisfiable. But at least one of them must be in a finite cycle and thus *f**i*(*x*) = *x* or *f**i*(*x*ʹ) = *x*ʹ will be found. These are all the cases. If *f**i*(*x*) = *x*ʹ or *f**i*(*x*ʹ) = *x*, report *π*(*x*, *x*ʹ) :  = t and else *π*(*x*, *x*ʹ) :  = f. The function *π* decides the cycles of *f* and can be computed uniformly effectively from *f*. Lemma [lemma:oneinfdec] can be generalised to the case of finitely many infinite cycles, but the constructed decider is not uniform in *f* anymore. The construction requires a system of representatives for the infinite cycles. For this reason, it is given as a separate proposition. This result can already be found in Myhill’s paper on *splinters* in digraphs of recursive functions, which generalise cycles in permutations. Myhill’s corollary implies that all cycles of a permutation are recursive sets if there are only finitely many infinite cycles. As seen in [par:decequiv], recursivity of all blocks of a partition does not suffice for (uniform) decidability of the partition. That addition, however, follows easily from his Theorem 1.3 and our characterisation of cycle decidability via transversals in Theorem [theorem:cycledecchar] below. We give an independent proof here: [prop:fininfdec] If *f* is a recursive permutation with finitely many infinite cycles, then Π = Part*f* is decidable. Using the definition from the introduction: G1 ⊆ Perm. Let *x*1, …, *x**n* be a *system of representatives* for the infinite cycles, i.e. each *x**i* belongs to an infinite cycle, no two of them belong to the same, and every infinite cycle has a representative among the *x**i*’s. Given *x*, *x*ʹ we calculate indices *k*, *k*ʹ respectively. If *x* = *x*ʹ, set *k* = *k*ʹ = 0. If *x* = *x**i*, then set *k* = 0, and analogously for *x*ʹ and *k*ʹ. Otherwise *k* is determined by alternating search of *f**k*(*x*) in the set {*x*, *x*ʹ, *x*1, …, *x**n*}: $k = \xi k[k \not= 0 \wedge f^k(x) \in \{x, x', x\_1, \dots, x\_n\}]$ and *k*ʹ analogously by searching for *f**k*ʹ(*x*ʹ) in the same set. This procedure terminates because *x* is either in a finite cycle (and will find itself eventually with an index $k \not= 0$) or in an infinite cycle (and will find one of the representatives for infinite cycles with an index $k \not= 0$). Once *k*, *k*ʹ are computed, let *y* = *f**k*(*x*), *y*ʹ = *f**k*ʹ(*x*ʹ) be the found elements. Note that *y*, *y*ʹ ∈ {*x*, *x*ʹ, *x*1, …, *x**n*}. If *y* = *x*ʹ or *y*ʹ = *x*, then *x* and *x*ʹ are in the same cycle. If $y = x \not= x'$ or $y' = x' \not= x$, then one of *x*, *x*ʹ is in a finite cycle different from the other’s. Otherwise *y* = *x**i* and *y*ʹ = *x**j* for 1 ≤ *i*, *j* ≤ *n*. Then *x*Π*x*ʹ ⇔ *x**i* = *x**j*. This gives a way to decide Π. Kent’s, as quoted in the introduction, states that for the permutations with finitely many infinite cycles, cycle type equality and effective conjugacy are equivalent. By Theorem [theorem:permconj2], in turn, effective conjugacy in Perm and effective cycle type equality are equivalent. The preceding Proposition [prop:fininfdec] yields that G1 is contained in Perm and it follows [coro:cycletypeeq] In G1 cycle type equality and effective cycle type equality are the same. For characterisations of cycle decidability, in contrast to Lemma [lemma:oneinfdec] and Proposition [prop:fininfdec], much more detailed information about the cycles is required. [theorem:cycledecchar] Let *f* be a recursive permutation. The following conditions are equivalent: [.99em(a)] where all functions are taken to be total recursive. These functions are pairwise computationally equivalent. The condition (e) describes a recursively enumerable system of representatives for all cycles. Such a system is called a *choice set* by and, and a *transversal* by. We adopt the term “transversal”. Higman furthermore calls the transversal consisting of the smallest elements of each cycle *principal*, which corresponds to condition (b) above. As the theorem shows, any recursively enumerable transversal of a permutation is computationally equivalent to the principal transversal. We will show (a)  ⇒  (b)  ⇒  (e)  ⇒  (a) and (b)  ⇒  (c)  ⇒  (d)  ⇒  (b). “(a)  ⇒  (b)”: *μ*(*x*) :  = *μ**x*ʹ[*π*(*x*, *x*ʹ)] is computable. “(b)  ⇒  (e)”: We need to enumerate representatives of all the cycles without representing a cycle twice with different values (it is allowed to repeat values because *f* may have only finitely many cycles). This can be done by *ρ* = *μ*. “(e)  ⇒  (a)”: Given *x*, *x*ʹ, start dove-tailing computations of *f**k**ρ*(*e*) where *e* varies in N and *k* varies in Z. Since *ρ* enumerates a complete system of representatives we will eventually find *e*, *e*ʹ and *k*, *k*ʹ such that *f**k**ρ*(*e*) = *x* and *f**k*ʹ*ρ*(*e*ʹ) = *x*ʹ. Because cycles are uniquely represented in *ρ*, it holds *x* ≡ *f**x*ʹ ⇔ *e* = *e*ʹ. “(b)  ⇒  (c)”: *ϑ* = *μ* is possible. “(c)  ⇒  (d)”: *χ* = *ϑ* is possible because *x* ≡ *f**x*ʹ ⇒ *ϑ*(*x*) = *ϑ*(*x*ʹ), and since *x* ≡ *f**ϑ*(*x*), we have $x \not\equiv\_f x' \Rightarrow \vartheta(x) \not= \vartheta(x')$. “(d)  ⇒  (b)”: Take *μ*(*x*) :  = *μ**x*ʹ[*χ*(*x*) = *χ*(*x*ʹ)]. shows in the more general context of digraphs of recursive functions (not necessarily permutations), but with the same technique as above, that a recursively enumerable transversal for all weakly connected components, which are the corresponding generalisation of cycles, implies decidability of every component. Aside from the transversal criterion, all characterisations of cycle decidability listed in Theorem [theorem:cycledecchar] come in the form of an external function which answers questions about the cycles of the permutation. The next subsection follows a more profound approach to cycle decidability. We define a normal form for permutations, an intrinsic property of the permutation, and prove that conjugacy to a normal form permutation is equivalent to cycle decidability. currentlabel [par:normal] currentlabel Let *f* be a recursive permutation. Then for every block *P* ∈ Part*f* and arbitrary *x**P* ∈ *P*, the function *λ**j*[*f**δ*− 1(*j*)(*x**P*)] enumerates *P*. Thus Part*f* is a partition whose blocks are recursively enumerable by the powers of a single fixed permutation at appropriately chosen starting points. Proposition [prop:pchar] shows that if Part*f* is decidable, every block of it must be decidable. It is well-known that a set is recursive iff it is recursively enumerable in non-decreasing order. We use this idea to define a normal form for permutations with decidable cycles and derive a characterisation for permutable decidable equivalences. The following lemma provides an important technique for constructing a cycle out of an infinite recursive set. Some variants of the idea will also be used later. [lemma:buildcycle] Let *p* be the characteristic function of an infinite recursive set *A*. Then we can find a cyclic permutation whose support is *A*. Let *x* be given. [*x* ∈ *A*] is computable by *p*. If *x* ∉ *A*, then return *f*(*x*) :  = *x*. Else list the members of *A* in increasing order: $$\begin{aligned} a(0) &:= \mu x[x \in A], \\ a(n+1) &:= \mu x[x > a(n) \wedge x \in A]\end{aligned}$$ which is uniform in *p* and total recursive because *A* is infinite. Then define *f*(*x*) :  = *a**δ*(*δ*− 1*a*− 1(*x*) + 1),  *x* ∈ *A*. The *δ* function is indeed used here according to Definition [def:delta]: Denote by succ the successor function succ(*x*) = *x* + 1 which is a recursive permutation on Z. Then *f* = *a**δ*succ*δ*− 1*a*− 1 where *δ*succ*δ*− 1 is a recursive permutation N → N. This definition of *f* takes members of *A* into *A* injectively, as it is a composition of injective functions. For surjectivity it suffices to observe that *a* is surjective on *A*, and that *δ*succ*δ*− 1*a*− 1 maps *A* onto N. It is clear that *f* has no fixed points in *A*. In the normal form for cyclic permutations the structure of the cycle (repeated application of the permutation or its inverse) shall exhibit the increasingly ordered list of all members of the cycle, in an algorithmically recognisable way. A first attempt would be to take all members of the cycle in increasing order, *a*0 < *a*1 < *a*2 < … and define the cycle to be $\cycle{a\_0, a\_1, a\_2, \dots}$. This layout is easy to work with but yields a permutation only if the number of *a**i*’s is finite, for if not *a*0 will not get an inverse image. For infinite cycles, the idea of using *δ* from the proof of Lemma [lemma:buildcycle] comes into play: A (finite or infinite) cycle $f = \cycle{\dots, a\_{-1}, a\_0, a\_1, \dots}$ of length *n* ∈ N+ ∪ {∞} whose least element is *a*0 is in *normal form* if the function *λ**j*[*f**δ*− 1(*j*)(*a*0)] is increasing for 0 ≤ *j* < *n*. A recursive permutation is in *normal form* if every cycle in its disjoint cycle decomposition is in normal form. Such permutations are more briefly called *normal*. A permutation where every infinite cycle is normal and every finite cycle $\cycle{a\_0, \dots, a\_{n-1}}$ with *a*0 as smallest element, has *λ**j*[*f**j*(*a*0)] increasing for 0 ≤ *j* < *n*, is called *semi-normal*. Because there is only one way to order the entirety of a cycle into an increasing sequence, there is for any permutation *f* precisely one (possibly non-recursive) normal permutation *f*ʹ with Part*f* = Part*f*ʹ. This *f*ʹ is called the *normal form* of *f*. The *semi-normal form* of a permutation is defined analogously and also unique. The less straightforward definition of normality serves the purpose to unify the look of normal cycles regardless of whether they are finite or infinite, which cannot be decided, as is shown in [par:cyclefinite]. Indeed the idea from Lemma [lemma:buildcycle] can also be made to work with finite cycles if their length is known, as Figure [fig:normaldiag] indicates. An archetypal construction of a normal cycle already appeared in Lemma [lemma:buildcycle]: [coro:buildcyclenormal] The cycle constructed in the proof of Lemma [lemma:buildcycle] is normal. In the nomenclature of the lemma, *f*∣*A* :  = *a**δ*succ*δ*− 1*a*− 1. The assertion follows by mere calculation: $$\begin{aligned} f|\_A^{\delta^{-1}(j)}a(0) &= (a\delta\operatorname{succ}\delta^{-1}a^{-1})^{\delta^{-1}(j)}a(0) \\ &= a\delta\operatorname{succ}^{\delta^{-1}(j)}\delta^{-1}a^{-1}a(0) \\ &= a\delta\delta^{-1}(j) = a(j),\end{aligned}$$ which is increasing. Outside of *A* are only fixed points of *f* which are normal cycles. [fig:normaldiag] [lemma:finitesetcycle] There is an algorithm which is uniform in *f* ∈ G and decides for every finite set *A*, given as a list of its elements, if it is the union of cycles of *f*, and, if it is not, computes a member of [*A*]*f* \ *A*, where [*A*]*f* :  = ⋃*x* ∈ *A*[*x*]*f*. We use the fact that a finite set is a union of cycles of *f* iff it is closed under application of *f*. Obtain a list of the members of *X* = *A* ∪ *f*(*A*), which is possible because *A* was given as such a list. Without loss of generality, these finite lists are without repetition. Then we can decide if ∣*X*∣ = ∣*A*∣. If this is the case, then *A* is closed under application of *f* and thus a union of cycles. Otherwise ∣*X*∣ > ∣*A*∣ and *A* is not a union of cycles. Since evidently *X* ⊆ [*A*]*f*, we can find an element in [*A*]*f* \ *A* by inspecting the list difference *X* \ *A*. The encoding of a finite set as a finite bitstring where the *x*-th bit is set iff *x* is a member of the set is called *canonical*. It is easily seen that this canonical encoding is computationally equivalent to the encoding as a finite list. This encoding was essential in the proof to obtain the cardinality of *A*. It is shown in that the cardinality of a finite set cannot be computed from a decider for that set, which would have been another candidate for an encoding of finite sets. [prop:existnormal] Let *f* be a recursive permutation and let *π* decide Π = Part*f*. Then there is a recursive *f*ʹ in normal form with Part*f*ʹ = Part*f*. A program for *f*ʹ can be computed uniformly effectively from *f* and *π*. To given *x* find *all* *x*0 < *x*1 < … < *x**k* = *x* with *x**i*Π*x*. Call the list of these elements *A*. Using Lemma [lemma:finitesetcycle], we can determine if *A* is a union of cycles. If it is, it must be a single cycle because all members of *A* are in the same cycle. Then the appropriate image of *x* = *x**k* can easily be determined, according to Figure [fig:normaldiag]. Otherwise Lemma [lemma:finitesetcycle] gives an element *x*\* ∈ [*A*]*f* \ *A* = [*x*]*f* \ *A*. Since the *x*0, …, *x**k* are all members of the cycle satisfying *x**i* ≤ *x*, it must be *x*\* > *x*. Search $\hat x = \mu \hat x[\hat x > x \wedge \hat x \Pi x]$; this search will terminate as *x*\*Π*x* and thus $\hat x \le x^\*$. This algorithm provides a way to decide if there are still greater elements than *x* in its cycle and, in the affirmative case, to find the smallest such element. Figure [fig:normaldiag] suggests that this is enough information to carry the construction of the cycle out to its end, in the finite as well as in the infinite case: $$f'(x) := \begin{cases} x\_{k+2}, & \text{$k$ even and $x\_{k+2}$ exists}, \\ x\_{k+1}, & \text{$k$ even, $x\_{k+1}$ exists and $x\_{k+2}$ does not exist}, \\ x\_{k-1}, & \text{$k$ even and $x\_{k+1}$ does not exist}, \\ x\_0, & \text{$k = 1$}, \\ x\_{k-2}, & \text{$k$ odd and $k > 1$}. \end{cases}$$ [coro:hasuniquenormal] If Π is decidable and permutable, then PermΠ contains exactly one permutation in normal form. Let *f* be a recursive permutation with decidable cycles. Then Part*f* contains, by Corollary [coro:hasuniquenormal], a recursive permutation *f*ʹ in normal form, which is the normal form of *f*. Proposition [prop:permconj1] shows that *f* is effectively conjugate to *f*ʹ. The converse is also true. Suppose that *f* is effectively conjugate to its normal form *f*ʹ. It is to be shown that Part*f*ʹ is decidable. The proof exploits the resemblance between a normal cycle and a strictly convex function: the smallest element *x*0 of a cycle of *f*ʹ is characterised by the condition *x*0 ≤ min{*x*0±}, where *x*± :  = *f*± 1(*x*). Thus, given any *x*, an alternating search through the cycle, beginning at *x*, can be performed to find the smallest element of *x*’s cycle: *μ**x*0[*x*0 ≡ *f*ʹ*x*] = *ξ**f*ʹ*k*(*x*)[*f*ʹ*k*(*x*) ≤ min{*f*ʹ*k* + 1(*x*), *f*ʹ*k* − 1(*x*)}],  where *ξ**f*ʹ*k*(*x*)[*p*(*k*)] is a more suggestive notation for *f*ʹ*ξ**k*[*p*(*k*)](*x*). The alternating search can be replaced by a more intelligent algorithm which further uses the resemblance of normal cycles and strictly convex functions, namely that the direction from any point towards the minimum can be determined by inspecting a neighbourhood of the given point. For any *x*, compute *x*+ and *x*−. If, by the test above, *x* is not the minimum of the cycle, at least one of *x*+ and *x*− must be smaller than *x*. Take the smallest of both. If it is *x*+, the minimum can be found in *f*-positive direction, i.e. the smallest element is of the form *f**k*(*x*) with *k* > 0, and if the smaller element is *x*−, the minimum is in *f*-negative direction. By a variant of binary search, the minimum can be found with a number of steps logarithmic in the distance from the starting point to the minimum in the cycle. [lemma:normalmin] The function *λ**x*[*μ**x*ʹ[*x*ʹ ≡ *f*ʹ*x*]] can be computed uniformly effectively in recursive normal permutations *f*ʹ. If the smallest element of each cycle can be found, the cycles of *f*ʹ can be decided as shown in Theorem [theorem:cycledecchar]. To summarise: [theorem:decnormal] Let *f* ∈ G. *f* has decidable cycles iff *f* is effectively conjugate to its normal form. A decider *π* for Part*f* can be computed from *f*ʹ. The next results show that instead of effective conjugacy to the normal form, conjugacy to any normal or semi-normal permutation is sufficient. [lemma:conjnormal] If a recursive permutation *f* is effectively conjugate to a normal permutation *g* via *h*, i.e. *f* = *h*− 1*g**h*, then it is effectively conjugate to its normal form *f*ʹ which can be computed from *f* and *h*. It follows from the conjugation that the normal permutation *g* = *h**f**h*− 1 is recursive, so that Part*g* ≅ Part*f* = Part*f*ʹ is decidable; by Theorem [theorem:decnormal], a decider *γ* for Γ = Part*g* can be found uniformly effectively from *g*. The proof of Theorem [theorem:permconj2] shows that the conjugation *h* is a (Part*f*, Part*g*)-isomorphism. By Proposition [prop:existnormal] we can compute *f*ʹ from *f* using the decider *λ**x**x*ʹ[*h*(*x*)Γ*h*(*x*ʹ)] for Part*f* which is uniform in *γ* and *h*. The semi-normal form has an incompatibility between the structure of finite and infinite cycles. This incompatibility provides additional information: it is possible to decide whether a number lies in a finite or an infinite cycle, uniformly in the permutation. The next lemma shows that this information can be discarded effectively to obtain a normal permutation in the same effective conjugacy class. By Lemma [lemma:conjnormal], it follows that if a permutation *f* is effectively conjugate to a semi-normal permutation, it is also effectively conjugate to its normal form and therefore decidable. [lemma:semiisnormal] If *f* is a recursive semi-normal permutation, then it is effectively conjugate to its normal form *f*ʹ, which can be computed from *f*. We first note that the structure of a semi-normal permutation allows to decide for every *x* whether it is in a finite cycle or an infinite one. Let *x* be given. Again we inspect the neighbours *x*+ = *f*(*x*) and *x*− = *f*− 1(*x*) of *x*. *x* is the least element in the cycle, in case of finite as well as infinite [*x*]*f*, iff *x* ≤ min{*x*+, *x*−}. Using alternating search, we can again find the smallest element *x*0 in the cycle of *x*. By computing *x*0, *f*(*x*0), *f*2(*x*0), we can tell if the cycle length is  ≤ 2. If it is, the cycle is, of course, finite. Else the values *x*0−, *x*0, *x*0+ are all distinct. If the cycle is infinite, it is in normal form and it must be *x*0 < *x*0− < *x*0+. If the cycle is finite, it is in semi-normal form and conversely it must be *x*0 < *x*0+ < *x*0−. This gives a way to decide cycle finiteness. To construct the normal form *f*ʹ of *f*, we determine for an input *x* if its cycle is finite or infinite. The infinite cycles are already normal because *f* is semi-normal. If the cycle is finite, we can generate a finite list of all members of the cycle [*x*]*f* by repeated application of *f* to *x*. The appropriate image for *x* to produce a normal cycle can be determined according to Figure [fig:normaldiag]. If *f* ∈ G is effectively conjugate to a semi-normal permutation, then *f* has decidable cycles. Semi-normal permutations are studied further in [par:cyclefinite]. The next subsection deals with permutability criteria for decidable equivalences. Some sufficient conditions even yield permutability by a semi-normal permutation, which provides further motivation for [par:cyclefinite]. currentlabel [par:permutability] currentlabel Recall that a decidable equivalence is permutable if its blocks are the orbits of a recursive permutation. Corollary [coro:hasuniquenormal] already formulated a permutability condition which we restate as The recursive normal permutations and the decidable and permutable equivalences are in bijection, given by *ϕ* : *g* ↦ Part*g*. This mapping is well-defined since the equivalence associated to a recursive normal permutation is decidable by Theorem [theorem:decnormal]. To see bijectivity it suffices to find an inverse mapping. This inverse is the function which associates to every decidable and permutable equivalence its, by Corollary [coro:hasuniquenormal], unique recursive normal permutation. It is sound to represent a decidable permutable equivalence Π by any pair consisting of a decider *π* for Π together with an element of PermΠ, as these are witnesses for the asserted properties of the equivalence. Under this representation, the bijection and its inverse are computable. In one direction, any recursive normal *f*ʹ maps to the pair (*π*, *f*ʹ) which represents Part*f*ʹ. As Theorem [theorem:decnormal] shows, *π* can be obtained from *f*ʹ. In the other direction, (*π*, *f*) maps to the normal form *f*ʹ of *f*, which is computable from *π* and *f* by Proposition [prop:existnormal]. A closer inspection of the proof of Proposition [prop:existnormal] shows that the algorithm to construct the normal element can be reformulated to rely on the decidability of Π and a way to determine for any *x* if there is a greater element in its block. A permutation *f* ∈ PermΠ provided such a method in the proof of Proposition [prop:existnormal]. We proceed to show the converse: if such a method is available, there must be an *f* ∈ PermΠ: [theorem:rhochar] Let Π be decidable via *π*. Π is permutable iff the predicate *ρ*(*x*) = [∃*x*ʹ > *x* : *x*ʹΠ*x*] is recursive. Given *π*, such a function *ρ* can be constructed from the normal permutation for Π, and therefore, in the presence of *π*, from any member of PermΠ, and vice versa. “ ⇒ ”: If $\operatorname{Perm}\Pi \not= \emptyset$ there is an *f*ʹ ∈ PermΠ in normal form. Let *x* be given. By Lemma [lemma:normalmin] we can find the smallest number *x*0 in *P*(*x*). The function *a*(*j*) :  = *f*ʹ*δ*− 1(*j*)(*x*0) enumerates *P*(*x*) in increasing order for 0 ≤ *j* < ∣*P*(*x*)∣. We can obtain the smallest index *k* = *k*(*x*) such that *a*(*k*) = *x*. Suppose *P*(*x*) is finite of length *n* ∈ N+. Since *a* is increasing for its first *n* arguments, we have *a*(*n*) ≤ *a*(*n* − 1), because *a*(*n*) ∈ *P*(*x*) and *a*(*n* − 1) is the greatest element therein. By definition 0 ≤ *k*(*x*) < *n*. Now, *x* is the greatest element in the finite block iff *k*(*x*) = *n* − 1 iff *a*(*k*(*x*) + 1) ≤ *a**k*(*x*), the latter of which can be checked without knowing *n*. If the block is infinite, there cannot be a greatest element and indeed *a*(*k*(*x*) + 1) > *a**k*(*x*) will always hold. Therefore [∃*x*ʹ > *x* : *x*ʹΠ*x*] = [*a**k*(*x*) < *a*(*k*(*x*) + 1)] gives a uniform way to compute *ρ*. “ ⇐ ”: The construction is the same as in the proof of Proposition [prop:existnormal], except that *ρ* is used instead of the function *f* there to decide [∃*x*ʹ > *x* : *x*ʹΠ*x*]. The next goal is to obtain a non-permutable equivalence relation. This can be achieved by encoding the computation of Turing machines well enough so that decidability of the criterion in Theorem [theorem:rhochar] implies decidability of the Halting Problem. The equivalence is defined over codings of pairs ⟨*x*, *n*⟩ where *x* is a program and *n* a step counter in the computation *φ**x*(*x*). This setting reveals a flaw of Theorem [theorem:rhochar]: deciding ⟨*x*, *n*⟩ > ⟨*x*ʹ, *n*ʹ⟩ requires knowledge of the coding *λ**x**n*[⟨*x*, *n*⟩]. It would suffice for the current purpose to fix the *standard coding* of pairs ${\langlex, y\rangle} := \frac{1}{2}(x^2 + 2xy + y^2 + 3x + y)$, because it is strictly increasing in its second parameter. Such a fixation is unpleasant and defining permutations over tuples is a useful tool in general. We provide at least a variant of Theorem [theorem:rhochar] which is independent of the coding of pairs and deals with the kind of equivalence that is later considered in Proposition [prop:nonpermut] and Lemma [lemma:infcycle]. An infinite family Π*z*, *z* ∈ N, of decidable equivalences is *uniformly decidable* if there is a recursive function *ψ* such that *ψ*(*z*, *x*, *x*ʹ) = [*x*Π*z**x*ʹ]. In this case the *coproduct equivalence* Π defined by ⟨*z*, *x*⟩Π⟨*z*ʹ, *x*ʹ⟩ :  ⇔ *z* = *z*ʹ ∧ *x*Π*z**x*ʹ is again decidable. The blocks of a coproduct Π of a family Π*z* are of the form *P*(⟨*z*, *x*⟩, Π) = {⟨*z*, *x*ʹ⟩ : *x*ʹ ∈ *P*(*x*, Π*z*)}, i.e. the blocks of Π*z* are prefixed by *z*, to make all blocks across the family disjoint, and then Π is the partition consisting of all these blocks. [coro:pairrhochar] Let Π be the coproduct of a uniform family Π*z* of decidable equivalences. Π is permutable iff the predicate *ρ*⟨*z*, *x*⟩ = [∃*x*ʹ > *x* : *x*ʹΠ*z**x*] is recursive. “ ⇒ ”: Let *f* ∈ PermΠ. Uniform in *z*, we obtain functions *f**z* = *λ**x*[*π*2*f*⟨*z*, *x*⟩] ∈ PermΠ*z*, where *π*2⟨*z*, *x*⟩ :  = *x* denotes the projection of a pair on its second component. By Theorem [theorem:rhochar], there is a function *ρ**z* for Π*z*, which can be computed from *f**z* and a decider *π**z* for Π*z*, by Proposition [prop:existnormal] and Theorem [theorem:rhochar]. *f**z* and *π**z*, in turn, can be computed uniformly effectively from *f*, *z* and a uniform decider *ψ* for the family Π*z*. Thus *λ*⟨*z*, *x*⟩[*ρ**z*(*x*)] is recursive and has the desired property. “ ⇐ ”: For any fixed *z*, *λ**x*[*ρ*⟨*z*, *x*⟩] is a *ρ* function for Π*z* as in Theorem [theorem:rhochar]. This gives a permutation *f**z* ∈ PermΠ*z* which can be constructed uniformly effectively from *ρ*, *z* and *ψ*. Then the function *f*⟨*z*, *x*⟩ :  = ⟨*z*, *f**z*(*x*)⟩ is a recursive permutation and moreover a member of PermΠ. [prop:nonpermut] There exists a decidable equivalence which is not permutable. Define $$\begin{aligned} r\_x'(n) &:= [\text{$\varphi\_x(x)$ halts after $\le n$ steps}], \\ r\_x(n) &:= \begin{cases} {\mathfrak{t}}, & n = 0, \\ r\_x'(n-1), & n > 0. \end{cases}\end{aligned}$$ Each *r**x* is a recursive function which induces a decidable equivalence Π*x*, as per Proposition [prop:recdec]. Indeed this family of equivalences is uniformly decidable because [*r**x*(*n*) = *r**x*(*n*ʹ)] can be decided uniformly in *x*, *n*, *n*ʹ, using a universal Turing machine. Let Π be the coproduct of this family. *Assume* that Π is permutable so that Corollary [coro:pairrhochar] yields a recursive function *ρ*⟨*x*, *n*⟩ = [∃*n*ʹ > *n* : *n*ʹΠ*x**n*]. In particular for *n* = 0, we can decide [∃*n*ʹ ≥ 1 : *r**x*(*n*ʹ) = t] uniformly in *x*, i.e. whether *φ**x*(*x*) halts eventually. This *contradicts* the undecidability of the Halting Problem. [theorem:rhoperm] Let Π be decidable via *π* and let there be a recursive function *ρ* such that $$\rho(x) = \begin{cases} 0, & |P(x)| = \infty, \\ |P(x)|, & \text{else}. \\ \end{cases}$$ Then Π is permutable by a semi-normal permutation which can be constructed from *π* and *ρ*. Without the last addition that there be a semi-normal permutation in PermΠ, a proof would have been immediate from Theorem [theorem:rhochar]. Theorem [theorem:cfdec] in [par:cyclefinite] shows that the converse of Theorem [theorem:rhoperm] holds, too. Together with other results from [par:cyclefinite], this shows that there are permutable equivalences which lack a semi-normal element, which is why Theorem [theorem:rhoperm] cannot be inferred from Theorem [theorem:rhochar] and needs a separate proof. Let *x* be given. With a decider for Π, a decider for *P*(*x*) can be found uniformly in *x*, by Lemma [lemma:sigmachar]. Using *ρ*, it is decidable whether *P*(*x*) is finite or infinite. If it is finite, all elements of *P*(*x*) can be found by testing *x*ʹ ∈ *P*(*x*) until *ρ*(*x*) members are found. These numbers can easily be arranged into a finite semi-normal cycle. If the cycle is infinite, it is an infinite decidable set and Lemma [lemma:buildcycle] plus Corollary [coro:buildcyclenormal] provide a method to construct the infinite semi-normal (i.e. normal) cycle. [coro:condf] If Π is decidable, then each of the following conditions is sufficient for the permutability of Π by a semi-normal element: [2em(a)] Π has only finitely many blocks, Π has only blocks of the same cardinality (including ℵ0). We show that there is a recursive function *ρ* as in Theorem [theorem:rhoperm]. [(a)] Let Π = {*P*1, …, *P**n*}, *p**i* = ∣*P**i*∣ and *y**i* ∈ *P**i* arbitrary representatives. Using *y*(*x*) :  = *μ**y*[*y* ∈ *P*(*x*)] we can find the unique index *i* such that *y*(*y**i*) = *y*(*x*), then set *ρ*(*x*) :  = *p**i*. The program for *ρ* only needs to include the subroutine *y*(*x*) and the finitely many constants *y**i* and *p**i*, 1 ≤ *i* ≤ *n*. The function *ρ* returning the cardinality of a block is constant and therefore recursive. It is easy to construct counterexamples to the converses of Corollary [coro:condf] and Proposition [prop:fininfdec] from [par:cycledec]. There is a recursive semi-normal permutation, implying decidable cycles, with infinitely many infinite cycles and one finite cycle. It is therefore a simultaneous counterexample to the converses of the corollary and the proposition. Consider the partition Π whose blocks consist of all numbers  ≥ 2 which have the same smallest prime factor, and 0 and 1 form another block together. This is a decidable partition. By Theorem [theorem:rhoperm] (with *ρ*(0) :  = *ρ*(1) :  = 2 and *ρ*(*x*) :  = 0 else) we see that Π has a semi-normal *f* ∈ PermΠ which is a simultaneous counterexample. It was shown in the proof of Lemma [lemma:semiisnormal] that the cycle structure of a recursive semi-normal permutation makes it possible to decide the finiteness of the cycle of any given number *x*. Theorem [theorem:rhoperm] has a similar direction and also involves semi-normal permutations. [par:cyclefinite] deals with cycle finiteness and its relation to semi-normality more systematically. currentlabel [par:orderperm] currentlabel This section characterises the elements of PermΠ for a fixed decidable equivalence Π as the maximal elements with respect to cycle inclusion inside a normal subgroup of AutΠ. The merit of this theorem is that Recursion Theory only appears in the setting; the characterisation itself makes no use of the language of computability. For the basic order-theoretic notions needed here, see e.g. . Let IΠ :  = {*f* ∈ AutΠ : *f*(*P*) = *P* ∀*P* ∈ Π} denote the set of *block-wise identical* permutations in AutΠ. Any *f* ∈ PermΠ achieves [*x*]*f* = *P*(*x*, Π), which implies *f*(*P*) = *P* for every block *P* ∈ Π. This means PermΠ ⊆ IΠ and furthermore IΠ is a normal subgroup of AutΠ. For an alternative view on IΠ, define a *refinement* of an equivalence Π to be an equivalence Πʹ such that *x*Πʹ*x*ʹ ⇒ *x*Π*x*ʹ. In this case we write Πʹ ≤ Π. No decidability requirements are attached to this notion and, in this subsection, the symbol PermΠ shall not imply that Π is decidable. Then IΠ = ⋃Πʹ ≤ ΠPermΠʹ. The finest and coarsest equivalences, e.g., yield I{{0}, {1}, …} = {id} and I{N} = G, but the automorphism group is G in both cases. The relation $$f \lesssim g \,:\Leftrightarrow\, \forall x: [x]\_f \subseteq [x]\_g$$ is a preorder on IΠ. It becomes antisymmetric if the permutations are collapsed to their cycle equivalences, i.e. if one disregards the sequence of elements in the cycles of a permutation. By $f \lnsim g$ we mean $f \lesssim g$ and $g \not\lesssim f$. A permutation *f* ∈ IΠ is *maximal* if there is no *g* ∈ IΠ such that $f \lnsim g$, i.e. *f* is an upper bound on all elements it is comparable to. The set of maximal elements of IΠ is written maxIΠ. If Π is decidable, then PermΠ = maxIΠ. First suppose Π is permutable. Let *f* ∈ IΠ and *g* ∈ PermΠ be arbitrary. From [*x*]*f* ⊆ *P*(*x*) it follows that the restriction *f*∣*P* is a permutation of *P* for every block *P* ∈ Π. This means that *P* is a union of cycles of *f*. Since *g* ∈ PermΠ, we know [*x*]*g* = *P*(*x*) and thus $f \lesssim g$. This shows that every element of PermΠ is an upper bound on IΠ and in particular maximal. On the other hand, if *g* is maximal in IΠ and Π is permutable, there exists an *f* ∈ PermΠ and by the first part of the proof $g \lesssim f$. Maximality of *g* then implies $f \lesssim g$, i.e. *P*(*x*) = [*x*]*f* ⊆ [*x*]*g* ⊆ *P*(*x*) and it follows equality everywhere and *g* ∈ PermΠ. If Π is non-permutable, we wish to show that there are no maximal elements. Given any *f* ∈ IΠ, we construct a permutation in IΠ which is comparable to and strictly greater than *f*. Since *f* ∉ PermΠ, there is an *z* such that $[z]\_f \subsetneq P(z)$. Since *P*(*z*) is a union of cycles of *f*, *P*(*z*) must be the union of at least two cycles of *f*. Because Π is decidable, *P*(*z*) is a recursive set. The equivalence $\{P(z), {\overline{P(z)}}\}$ is evidently decidable and has finitely many blocks, which is a sufficient permutability condition. By Corollary [coro:condf] we obtain a permutation *c*, one of whose cycles is *P*(*z*). Then $$f'(x) := \begin{cases} f(x), & x \not\in P(z), \\ c(x), & x \in P(z), \end{cases}$$ replaces the multitude of cycles in *f*, which make up *P*(*z*), by a single cycle. *f*ʹ is in IΠ and strictly greater than *f*. The proof shows the slightly stronger statement that every member of PermΠ is not only a maximum of IΠ but also an *upper bound*, i.e. it is maximal and comparable to every member of IΠ. We also remark that finding the $f' \gnsim f$ in the second part of the proof was an instance of permutability. We had to find a permutable, not necessarily decidable equivalence Part*f*ʹ such that $\operatorname{Part}f \lneq \operatorname{Part}f' \le \Pi$. The proof above shows that it is not hard to order individual blocks of an equivalence into a single cycle of a recursive permutation, at least if the block is decidable and its size is known (cf. Corollary [coro:condf]). The hard part of permutability is ordering infinitely many blocks into cycles simultaneously. The proof gives the following picture of the order in IΠ if Π is decidable but not permutable: every chain in IΠ with a maximal element can be extended by adding a strictly greater element which assembles one further block of Π into a single cycle. From this perspective at least, chains grow along the blocks of Π, and Π has infinitely many blocks as it is not permutable. $\operatorname{Perm}= \bigcup\_{\text{$$ dec.}} \max {\mathcal{I}}\Pi$. Cycle finiteness and unsolvable problems ======================================== This last section proves negative answers to algorithmic questions surrounding Perm. The *cycle finiteness problem* is introduced and it is shown that it is in general unsolvable for permutations with decidable cycles. The subset $\operatorname{Perm\_{\textsc{CF}}}$ of Perm where cycle finiteness is decidable is characterised by semi-normal permutations. It is shown that cycle decidability and cycle finiteness problems in G are intertwined by one-one reductions and that the maximum one-one degree of either problem is the Halting Problem. Furthermore it is shown that conjugacy in Perm is undecidable, that Perm is not enumerable, and that it is not closed under multiplication. Lastly while Perm is closed under multiplication with finitary permutations, it can be shown that there is no constructive proof of this fact, assuming that a finitary permutation *a* is encoded as a list of transpositions, whose product is *a*, and each transposition is encoded as an ordered pair. A constructive proof can be given for $\operatorname{Perm\_{\textsc{CF}}}$. currentlabel [par:cyclefinite] currentlabel We begin by constructing a permutation for which it is undecidable if numbers lie in a finite or an infinite cycle. Such a permutation has already been described in, by encoding the Halting Problem for Turing machines into the cycle length. Indeed cycle finiteness is the prototype of Collatz’ original problem, which is the motivation of Lehtonen’s paper. His permutation has the additional property that it can be described using a case distinction on a decidable partition with 5 blocks, and each case has the form of an affine-linear function. The construction below will use the general theory developed so far, which makes it swift but does not yield similar properties. For *f* ∈ G, the *cycle finiteness problem* of *f* is the decision problem $$\textsc{CF}(f) := \{x : |[x]\_f| < \infty\}.$$ The subset of Perm consisting of permutations with decidable cycle finiteness problem is denoted $\operatorname{Perm\_{\textsc{CF}}}$. We use a slight modification of the proof of Proposition [prop:nonpermut]. Define the *x*-indexed family of recursive predicates *r**x*ʹ(*n*) by *r**x*ʹ(*n*) = [$\varphi\_x(x)$ halts after $\le n$ steps]. The family Π*x* of equivalences which correspond to these recursive functions via Proposition [prop:recdec] is uniformly decidable by a universal Turing machine, which makes their coproduct Π a decidable equivalence. The interpretation of the blocks of Π is as follows: ⟨*x*, *n*⟩ and ⟨*x*ʹ, *n*ʹ⟩ are in the same block iff they belong to the same program, *x* = *x*ʹ, and either both computations (after *n* and *n*ʹ steps) did not halt yet or both halted. We want to show that *ρ*⟨*x*, *n*⟩ = [∃*n*ʹ > *n* : *r**x*ʹ(*n*) = *r**x*ʹ(*n*ʹ)] is recursive in order to apply Corollary [coro:pairrhochar]. To given ⟨*x*, *n*⟩, simulate the computation *φ**x*(*x*) for *n* + 1 steps. If it halts after  ≤ *n* steps, it also halts after  ≤ *n* + 1 steps, so *r**x*ʹ(*n*) = *r**x*ʹ(*n* + 1) and *ρ*⟨*x*, *n*⟩ = t. If it halts at the (*n* + 1)-st step, then $r\_x'(n') \not= r\_x'(n)$ for all *n*ʹ > *n* and *ρ*⟨*x*, *n*⟩ = f. The remaining case is that the computation did not halt after *n* + 1 steps, in which case it did not halt after  ≤ *n* steps either, and *r**x*ʹ(*n*) = *r**x*ʹ(*n* + 1), *ρ*⟨*x*, *n*⟩ = t. By Corollary [coro:pairrhochar], there is a member *g* ∈ PermΠ. *Assume*, we could decide ∣[⟨*x*, *n*⟩]*g*∣ < ∞, for every pair ⟨*x*, *n*⟩. Let a program *x* be given. Then we could decide whether ⟨*x*, 0⟩ lies in a cycle of finite length, which is the same as deciding whether the computation of *φ**x*(*x*) halts eventually. This *contradicts* the undecidability of the Halting Problem. [lemma:infcycle] There is a $g \in \operatorname{Perm}\setminus \operatorname{Perm\_{\textsc{CF}}}$. Define the decision problem $\textsc{CF\*}$ as follows: given *f* ∈ Perm, a decider *π* for the cycles of *f* and a number *x*, it is to decide whether ∣[*x*]*f*∣ < ∞. A fortiori, this problem is recursively unsolvable. The *diagonal* problem $\textsc{$$CF\*}$ of $\textsc{CF\*}$ asks, given a program which computes a permutation with decidable cycles, and a decider for the cycles, if that program itself is in a finite or an infinite cycle of the permutation. This problem may be thought of as the Perm version of the one-parameter Halting Problem $K := \{x : \text{$x(x)$ halts}\}$. Using the Recursion Theorem, one can reduce the seemingly more general problem $\textsc{CF\*}$ to its diagonal and obtains $\textsc{$$CF\*}$ is recursively unsolvable. We can improve upon the inclusion G1 ⊆ Perm from Proposition [prop:fininfdec], by using essentially the same technique as in that proof. [prop:fininfcf] Every permutation with finitely many infinite cycles has decidable cycle finiteness problem, i.e. ${\mathcal{G}}\_1 \subseteq \operatorname{Perm\_{\textsc{CF}}}$. Let *f* ∈ G1 and *x*1, …, *x**n* be a system of representatives for the infinite cycles of *f*. By Proposition [prop:fininfdec], *f* has decidable cycles. Given a number *x*, we can decide if *x* belongs to any of the cycles [*x*1]*f*, …, [*x**n*]*f*. This is the case iff ∣[*x*]*f*∣ = ∞. Theorem [theorem:rhoperm] stated that if a decidable equivalence Π possesses a recursive function *ρ* which returns to each *x* the size of *P*(*x*), or 0 if *P*(*x*) is infinite, then Π is permutable by a semi-normal element. We see now that not every decidable permutable equivalence has such a function *ρ*. Take the function *g* from Lemma [lemma:infcycle]: Part*g* is decidable and permutable. If such a function *ρ* existed for Part*g*, the computable function $\lambda x[\rho(x) \not= 0]$ would decide $\textsc{CF}(g)$, which is impossible. The next theorem links cycle finiteness to the conjugacy classes of semi-normal permutations: [theorem:cfdec] Let *f* ∈ Perm. Then the following statements are equivalent: [2em(a)] $\textsc{CF}(f)$ is decidable, there is a *ρ* function as in Theorem [theorem:rhoperm] for Part*f*, and *f* is effectively conjugate to its semi-normal form. “(a)  ⇒  (b)”: Write Π = Part*f* and *P*(*x*) = *P*(*x*, Π) as usual. By the assumption we can compute [∣*P*(*x*)∣ < ∞]. Let *x* be given. If ∣*P*(*x*)∣ = ∞, then report *ρ*(*x*) :  = 0. Else *P*(*x*) = [*x*]*f* is a finite set which we can enumerate by powers of *f* on *x*: determine *n* = *μ**n*[*n* ≥ 1 ∧ *f**n*(*x*) = *x*]. Then *n* is the length of [*x*]*f* and we report correctly *ρ*(*x*) :  = *n*. “(b)  ⇒  (c)”: Given *ρ*, Theorem [theorem:rhoperm] yields a recursive semi-normal element *f*ʹ in PermΠ, which is the semi-normal form of *f*. Both permutations are recursive and hence effectively conjugate by Proposition [prop:permconj1]. “(c)  ⇒  (a)”: Let *f*ʹ be the semi-normal form of *f*. Because *f* and *f*ʹ are effectively conjugate, *f*ʹ is recursive. The proof of Lemma [lemma:semiisnormal] shows that a recursive semi-normal permutation can be used to solve its own cycle finiteness problem. Since Part*f* = Part*f*ʹ, it follows that $\textsc{CF}(f) = \textsc{CF}(f')$ is decidable. $\operatorname{Perm\_{\textsc{CF}}}$ is the union of effective conjugacy classes of recursive semi-normal permutations. The recursive semi-normal permutations and the decidable, permutable equivalences with decidable block finiteness problem are in bijection via *g* ↦ Part*g*. currentlabel [par:conjenum] currentlabel As shown by the Theorems [theorem:permconj2], [theorem:decnormal] and [theorem:cfdec], conjugacy in Perm is equivalent to the solvability of certain decision problems. We proceed to prove that conjugacy, and therefore the solvability of these problems, cannot be decided. To a *g* ∈ Perm define the problem $\textsc{Conj}(g)$, which asks, given *f* ∈ Perm and a decider for its cycles, to decide whether *f* ∼ *g*. We describe a permutation *g* for which this problem is unsolvable. This immediately implies that conjugacy between two given members of Perm can in general not be decided. [theorem:undecconj] There is a *g* ∈ Perm such that $\textsc{Conj}(g)$ is recursively unsolvable. Define *g* to be $$g(x) := \begin{cases} x, & x \equiv 1 {\ \mathrm{mod}\ 2}, \\ 0, & x = 2, \\ x-4, & x \equiv 2 {\ \mathrm{mod}\ 4}, x \not= 2, \\ x+4, & x \equiv 0 {\ \mathrm{mod}\ 4}, \end{cases}$$ i.e. $g = \cycle{\dots, 6, 2, 0, 4, 8, \dots}$. The proof is by (truth-table) reduction of $\textsc{CF\*}$. Let *f*, *π* and *x* be given. We define another permutation *f*ʹ in a way that if [*x*]*f* is finite, *f*ʹ consists only of finite cycles, whereas if [*x*]*f* is infinite, all cycles in *f*ʹ are either 1-cycles or infinite and there are infinitely many 1-cycles and one infinite cycle. Therefore *f*ʹ and *g* have the same cycle type iff [*x*]*f* is infinite. Since *f*ʹ and *g* have finitely many infinite cycles, cycle type equality is equivalent to effective conjugacy of *f*ʹ and *g*, by Corollary [coro:cycletypeeq] and Theorem [theorem:permconj2]. This shows $|[x]\_f| < \infty \Leftrightarrow f' \not\sim g$. It therefore suffices to construct a permutation *f*ʹ and a decider *π*ʹ for its cycles, uniformly in *f*, *π*, *x*, such that *f*ʹ has the same cycle type as *g* iff [*x*]*f* is infinite. Let *k*(*x*ʹ) :  = *ξ**k*[*f**k*(*x*) = *x*ʹ]. Define the following equivalence relation Πʹ: every *x*ʹ ∉ [*x*]*f* is alone in a block and so is every *x*ʹ ∈ [*x*]*f* with $k(x') \equiv 1 \mod 2$. The remaining numbers *x*ʹ ∈ [*x*]*f* with $k(x') \equiv 0 \mod 2$ form a block together. This equivalence is evidently decidable and a decider *π*ʹ can be computed uniformly in *f*, *π* and *x*. Because *f* and *π* are available, we can compute a *ρ* function for Part*f*, as in Theorem [theorem:rhochar]. This *ρ* function can be used to define a *ρ* function *ρ*ʹ for Πʹ in the following manner: let *y* be given. If *y* ∉ [*x*]*f* or *y* ∈ [*x*]*f* and $k(y) \equiv 1 \mod 2$, then ∣*P*(*y*, Πʹ)∣ = 1 and there is no greater element in the same block. Now assume *y* ∈ [*x*]*f* and *k*(*y*) even. We have to decide whether there is a *y*ʹ > *y* with *y*ʹ ∈ [*x*]*f* and *k*(*y*ʹ) even. First, using *ρ*, we can determine if there is a *y*ʹ > *y* which is also in [*x*]*f*. If not, *ρ*ʹ(*y*) :  = f. Otherwise we can find the smallest value *y*ʹ > *y* with *y*ʹ ∈ [*x*]*f*. If *k*(*y*ʹ) is even, we are done and report *ρ*ʹ(*y*) :  = t. Otherwise we repeat the procedure with *y*ʹ. This algorithm lists all elements of [*x*]*f* which are greater than *y* *in order* until one with even *k* index is found or the cycle is exhausted. Thus if the algorithm terminates, we have either witnessed the existence of a greater element than *y* in *P*(*y*, Πʹ) or we have verified that all elements in [*x*]*f* ⊇ *P*(*y*, Πʹ) which are greater than *y* are not in *P*(*y*, Πʹ). So if the algorithm terminates, it yields a correct answer. It remains to prove that it always terminates. If [*x*]*f* is finite, the algorithm halts at the latest after the cycle is exhausted. If [*x*]*f* is infinite, then there are infinitely many numbers with even *k* index, and thus arbitrarily large ones; the algorithm will eventually find one and terminate. With *ρ*ʹ, Theorem [theorem:rhochar] gives, still uniformly in *f*, *π* and *x*, a permutation *f*ʹ ∈ PermΠʹ. If [*x*]*f* is finite, all cycles in *f*ʹ are finite. If [*x*]*f* is infinite, *f*ʹ consists of infinitely many 1-cycles and one infinite cycle. This completes the proof. The second task treated in this subsection is enumerability. It may be useful for various constructions to compute an exhaustive list of all the members of Perm. This is shown impossible here, i.e. there is no partial recursive function *ρ* such that: [2em(1)] ∀*x* ∈ dom*ρ* : *φ**ρ*(*x*) ∈ Perm, and ∀*f* ∈ Perm∃*x* ∈ dom*ρ* : *f* = *φ**ρ*(*x*). [theorem:perminenum] Perm is not recursively enumerable. There are multiple accessible proofs of this theorem. The first is an obvious but somewhat technical diagonalisation. The second proof is based on Corollary [coro:permnormal] and Kent’s result about the composition series of G. From these two follows that the subgroup of G generated by Perm is already all of G. *If* Perm was enumerable, then so would be its group closure, but this *contradicts* the inenumerability of G,. We give another short proof based on a result by van Leeuwen: By, no recursively enumerable set of partial recursive functions with infinite domains can contain all involutions. All members of Perm are permutations and have infinite domains, but by Lemma [lemma:oneinfdec], Perm contains all recursive permutations with only finite cycles, in particular all involutions. It follows that Perm is not enumerable. currentlabel [par:diffcycledec] currentlabel provides a tool to obtain permutations with particularly difficult cycle structure. Let *W**x* :  = dom*φ**x* denote the standard numbering of recursively enumerable sets. Recall from that a set *P* is *productive* if there is a partial recursive *ψ*, such that whenever *W**x* ⊆ *P* it follows that *ψ*(*x*) is convergent and *ψ*(*x*) ∈ *P* \ *W**x*. The function *ψ* is called a *productive function* for *P*. A set *C* is *creative* if it is a recursively enumerable complement of a productive set. One example of a creative set is the Halting Problem *K* :  = {*x* : *x* ∈ *W**x*} whose complement has the identity as a productive function. [] Let *C* be a creative set. There is a recursive permutation *k* composed of infinitely many infinite cycles, one of which is *C*. We remark that the proof of Theorem 1.3 in is largely based on his Lemma 1.4, which does not hold as stated there. It states that if *A* is a non-empty recursively enumerable set, then there is a recursive permutation with infinitely many infinite cycles, one of which is the cylinder *A* × N ⊆ N. However, for *A* = N, the cylinder N × N = N and if one of the cycles of the permutation is N, it cannot have infinitely many cycles. The proof given in shows the assertion under the additional assumption that *A* has infinite complement. This result is sufficient to infer his Theorem 1.3, as creative sets necessarily have infinite complements. With this theorem we obtain a recursive permutation *k* with a creative cycle. The complement of this cycle is productive. By definition, a productive set is not recursively enumerable. On the other hand, if *f* ∈ Perm, then every cycle of *f* is a recursive set, so the complement of every cycle is recursive, too. It follows [lemma:gminusperm] There is a permutation *k* ∈ G \ Perm with infinitely many infinite cycles, one of which is the Halting Problem *K*. Alternatively, shows that there is a recursive permutation with infinitely many infinite cycles and no finite cycles, all of whose transversals are immune. Since an immune set is not recursively enumerable, it follows by Theorem [theorem:cycledecchar] that this permutation has undecidable cycles. We note that Kent’s and Higman’s constructions produce a permutation with infinitely many infinite cycles. Indeed, a permutation with finitely many infinite cycles is necessarily in Perm by Proposition [prop:fininfdec]. The fact that there are permutations with undecidable cycles raises the question of how difficult cycle decidability problems for recursive permutations can become. The following proposition determines the maximal one-one degree of cycle decidability problems in G. For the definition of reducibilities and degrees, the reader is referred to. [prop:cycledegree] For every *f* ∈ G, the cycle decidability problem of *f* is one-one reducible to the Halting Problem, and there exists an *f*ʹ ∈ G such that the Halting Problem is one-one reducible to the cycle decidability of *f*ʹ. Let *f* ∈ G be arbitrary. With an oracle for the Halting Problem *K* we can check uniformly in *x*, *x*ʹ whether the function *λ**w*[*ξ**k*[*f**k*(*x*) = *x*ʹ]] halts on its own encoding (or any other number because the function ignores its argument). This is the case iff *x* ≡ *f**x*ʹ. The mapping of ⟨*x*, *x*ʹ⟩ to a program for *λ**w*[*ξ**k*[*f**k*(*x*) = *x*ʹ]] can be chosen to be strictly increasing in the numerical value of ⟨*x*, *x*ʹ⟩ which makes it a one-one reduction. For the opposite direction, Lemma [lemma:gminusperm] shows that there is a recursive permutation *k* ∈ G of which one cycle is the Halting Problem. Fix a program *x*0 such that *φ**x*0(*x*0) halts. Then the Halting Problem can be solved by *x* ∈ *K* ⇔ *x* ≡ *k**x*0 which is a one-one reduction. In previous sections we have used conjugacy and normal forms to characterise the solvability of cycle decidability and cycle finiteness problems. An interesting fact is that these two classes of problems in G are inter-reducible. To prove this, a technical lemma is needed: [lemma:oddlength] For every *g* ∈ G there is a *g*ʹ ∈ G and an embedding *j* : N → N such that ∣[*x*]*g*∣ < ∞ ⇔ ∣[*j*(*x*)]*g*ʹ∣ < ∞ and all cycles of *g*ʹ which intersect rng*j* are infinite or of odd length. The idea is to create for each *x* a copy of its cycle, double its length and then add one further element to it. This preserves cycle finiteness and makes every finite cycle odd. Define *j*(*x*) :  = ⟨*x*, *x*, 0⟩ and *g*ʹ by $$\begin{gathered} {\langlex,x,0\rangle} \mapsto {\langlex,x,1\rangle} \mapsto {\langlex,x,2\rangle} \mapsto {\langlex,g(x),0\rangle} \\ {\langlex,y,0\rangle} \mapsto {\langlex,y,1\rangle} \mapsto {\langlex,g(y),0\rangle}, \; y \not= x\end{gathered}$$ and fix every triple which does not match the decidable patterns above. Clearly *g*ʹ is a permutation and every cycle which contains some ⟨*x*, *x*, 0⟩ is either infinite or of odd length. Since the cycle structure of *g* is transferred into the second component and merely stretched by the third component in the definition of *g*ʹ, we have ∣[*x*]*g*∣ < ∞ ⇔ ∣[*j*(*x*)]*g*ʹ∣ < ∞ for all *x*. ∣[*j*(*x*)]*g*ʹ∣ is either infinite or odd. [theorem:interred] In G, the classes of cycle decidability and cycle finiteness problems are one-one inter-reducible, in the following sense: [.99em(i)] For every *f* ∈ G there is a *g* ∈ G and an embedding *j* such that *x* ≡ *f**y* ⇔ ∣[*j*⟨*x*, *y*⟩]*g*∣ < ∞. For every *g* ∈ G there is an *f* ∈ G and two embeddings *j*, *j*ʹ such that ∣[*x*]*g*∣ < ∞ ⇔ *j*(*x*) ≡ *f**j*ʹ(*x*). [(i)] For every pair *x*, *y* define the relation $$i \Pi\_{x,y} j \; :\Leftrightarrow \; i = j \, \vee \, \forall k \in {\mathbb{Z}}, |k| \le \max\{i,j\}: f^k(x) \not= y.$$ Π*x*, *y* is easily seen to be reflexive, symmetric and transitive, and thus an equivalence. Indeed Π*x*, *y* is a family of uniformly decidable equivalence relations indexed by ⟨*x*, *y*⟩. Let Π denote their coproduct. The relation Π*x*, *y* is defined in such a way that for any *i* there exists an *i*ʹ > *i* with *i*ʹΠ*x*, *y**i* iff (*i* + 1)Π*x*, *y**i*. By the uniform decidability of the relations we immediately obtain a uniform *ρ* function as in Corollary [coro:pairrhochar] for this family. Application of this corollary gives *g* ∈ PermΠ. Observe that the block of 0 in Π*x*, *y* contains infinitely many elements iff $f^k(x) \not= y \; \forall k \in {\mathbb{Z}}$. The embedding is *j*⟨*x*, *y*⟩ = ⟨*x*, *y*, 0⟩. By Lemma [lemma:oddlength] we find a *g*ʹ and an embedding *j* such that [*j*(*x*)]*g*ʹ is finite iff [*x*]*g* is, and [*j*(*x*)]*g*ʹ is either infinite or of odd length. If [*j*(*x*)]*g*ʹ is infinite, *j*(*x*) and *g*ʹ*j*(*x*) cannot be in the same cycle of *g*ʹ2. On the other hand, if [*j*(*x*)]*g*ʹ is finite, its length is odd. Application of *g*ʹ on such a cycle [*j*(*x*)]*g*ʹ imposes the structure of a finite cyclic group of odd order on the cycle with the group operation *g*ʹ*k**j*(*x*) ⋅ *g*ʹ*l**j*(*x*) :  = *g*ʹ*k* + *l**j*(*x*). Since 2 is coprime to the order of this group, *g*ʹ2*j*(*x*) is a generator and [*j*(*x*)]*g*ʹ2 must contain *g*ʹ*j*(*x*). We have shown that ∣[*x*]*g*∣ < ∞ iff ∣[*j*(*x*)]*g*ʹ∣ < ∞ iff *j*(*x*) ≡ *g*ʹ2*g*ʹ*j*(*x*), thus set *f* = *g*ʹ2 and *j*ʹ = *g*ʹ*j*. currentlabel [par:products] currentlabel Theorem [theorem:permconj2] shows that effective conjugacy and effective cycle type equality are equivalent in Perm. Recall that effective cycle type equality of two permutations *f*, *g* means computable isomorphy of their respective equivalences Part*f*, Part*g*. To formulate this theorem constructively, there must be constructive representations of these equivalences, i.e. they must be decidable and represented by their deciders, or by an equivalent means, as presented in [par:decequiv]. In this way Perm is the maximal domain for this theorem. As discussed in the introduction, the theorem is a constructive analogue of the well-known theorem that conjugacy in a symmetric group is equivalent to cycle type equality, where Perm plays the role of the full symmetric group. This raises the question about the algebraic structure of Perm. G ⊇ Perm imposes its group operation on Perm and it is evident that [(a)] id ∈ Perm and if *f* ∈ Perm then *f*− 1 ∈ Perm. This subsection proves that multiplicative closure fails. The proof uses structural results of to infer the existence of *f*, *g* ∈ Perm such that *f**g* ∉ Perm without constructing them. The present proof could have been given after Corollary [coro:permnormal] was established. A second theorem contains a positive result in the direction of multiplicative closure, namely that Perm is closed under multiplication from left and right with *finitary* permutations. However, it is also shown that there is a fixed permutation *g* ∈ Perm such that cycle decidability of finitary products cannot be witnessed uniformly. The cycle finiteness problem introduced in [par:cyclefinite] plays an important role in the characterisation of the cases where computing the decider is possible. By Corollary [coro:permnormal], Perm is a normal subset in G, so that the subgroup ⟨Perm⟩ generated by Perm is a normal subgroup of G. Clearly Perm contains all finitary permutations and a non-finitary one, e.g. $\delta\operatorname{succ}\delta^{-1} = \cycle{\dots, 5, 3, 1, 0, 2, 4, 6, \dots}$ whose decider is trivial. [] If *N* is a normal subgroup of G that contains a permutation with infinite support, then *N* = G. Applying this theorem yields that ⟨Perm⟩ = G. Since Perm is closed under inversion, it follows that every recursive permutation can be expressed as a finite product of members of Perm. By Lemma [lemma:gminusperm] there is a *k* ∈ G \ Perm. Then there is a decomposition *k* = *f*1…*f**n* with *f**i* ∈ Perm. Now an index 1 ≤ *i* < *n* must exist such that *f*1…*f**i* ∈ Perm and (*f*1…*f**i*)*f**i* + 1 ∉ Perm. This proves [coro:permnotgroup] Perm is not a group. More precisely, there are *f*, *g* ∈ Perm such that *f**g* ∉ Perm. The proof did not give examples of such *f*, *g*. They may be found by factoring *k* ∈ G \ Perm into members of Perm. Such a factorisation exists for every such *k* and always yields counterexamples to the closure of Perm, as seen above. While Perm is not closed under multiplication with itself, one might expect that changes with finite support do not disturb cycle decidability. This is only partially true. Perm is closed under multiplication with *F*, the set of permutations with finite support, but a decider for such a product cannot always be computed. [lemma:transcf] Let *f* ∈ Perm. There exists a recursive function *ρ*⟨*x*, *y*⟩ such that for all *x*, *y* the function *φ**ρ*⟨*x*, *y*⟩ decides the permutation $\cycle{x,y}f$ iff $\textsc{CF}(f)$ is decidable. In this case *ρ* can be constructed from *f*, a decider for its cycles and a decider for its cycle finiteness problem. [fig:transmult] We begin the proof with “ ⇐ ” which uses all cases depicted in Figure [fig:transmult]. This figure will be used as an argument in place of a formal calculation in the following proofs. “ ⇐ ”: Suppose $\textsc{CF}(f)$ is decidable. We distinguish three cases (a)—(c) which correspond to the pictures in Figure [fig:transmult]. Which case applies is decidable by our prerequisites. [(a)] If *x* ≡ *f**y*, then either *x* = *y*, which is trivial, or there is an *
arxiv_0000334
Message-Passing on Hypergraphs: Detectability, Phase Transitions and Higher-Order Information ============================================================================================= [1](#fn1) [2](#fn2) Hypergraphs are widely adopted tools to examine systems with higher-order interactions. Despite recent advancements in methods for community detection in these systems, we still lack a theoretical analysis of their detectability limits. Here, we derive closed-form bounds for community detection in hypergraphs. Using a Message-Passing formulation, we demonstrate that detectability depends on hypergraphs’ structural properties, such as the distribution of hyperedge sizes or their assortativity. Our formulation enables a characterization of the entropy of a hypergraph in relation to that of its clique expansion, showing that community detection is enhanced when hyperedges highly overlap on pairs of nodes. We develop an efficient Message-Passing algorithm to learn communities and model parameters on large systems. Additionally, we devise an exact sampling routine to generate synthetic data from our probabilistic model. With these methods, we numerically investigate the boundaries of community detection in synthetic datasets, and extract communities from real systems. Our results extend the understanding of the limits of community detection in hypergraphs and introduce flexible mathematical tools to study systems with higher-order interactions. Introduction ============ Modeling complex systems as graphs has broadened our understanding of the macroscopic features that emerge from the interaction of individual units. Among the various aspects of this problem, community detection stands out as a fundamental task, as it provides a coarse-grained description of a network’s structural organization. Notably, community structure is observed across different systems, such as food webs, spatial migration and gene flow of animal species, as well as in social networks, power grids, and others. In the case of networks with only pairwise interactions, there are solid theoretical results on detectability limits, describing whether the task of community detection can or cannot succeed. However, many complex systems with interactions that extend beyond pairs are better modeled by hypergraphs, which generalize the simpler case of dyadic graphs. Phenomena that have been investigated on graphs are now readily explored on hypergraphs, with examples including diffusion processes, synchronization, phase transitions and, more recently, community structure. Extending the rigorous results of detectability transitions for networks to higher-order interactions is a relevant open question. One of the main obstacles in modeling hypergraphs is their intrinsic complexity, which poses both theoretical and computational challenges and restricts the range of results available in the literature. The difficulty of defining communities in hypergraphs and of deriving theoretical thresholds for their recovery has limited investigations to the study of *d*-uniform hypergraphs, i.e., hypergraphs that only contain interactions among exactly *d* nodes. A related line of literature focuses on the detection of planted sub-hypergraphs and testing for the presence of community structure in hypergraphs. Generally, extracting recovery results on non-uniform hypergraphs proved to be demanding, with scarce literature on the subject. Recently, conjectured a recoverability threshold for their spectral clustering algorithm on non-uniform hypergraphs. Closer to the scope of our work, provide a probabilistic model and bounds for the theoretical recovery of communities under the same model. However, such detectability bounds are based on algorithms which are not feasible in practice, and no empirical demonstration of the predicted recovery is provided. Furthermore, all these methods lack a variety of desirable probabilistic features, such as the estimation of marginal probabilities of a node to belong to a community, a principled procedure to sample synthetic hypergraphs with prescribed community structure, and the possibility to investigate the energy landscape of a problem via free energy estimations. In this work, we address these issues by deriving a precise detectability threshold for hypergraphs that depends on the node degree distribution, the assortativity of the hyperedges, and crucially, on higher-order properties such as the distribution of hyperedge sizes. Additionally, we show how these properties can be formally described via notions of entropy and information, leading to a clear interpretation of the role of higher-order interaction in detectability. Our approach is based on a probabilistic generative model and a related Bayesian inference procedure, which we utilize to study the limits of the community detection problem using a Message-Passing (MP) formulation, originating from the cavity method in statistical physics. We focus on an extension to hypergraphs of the stochastic block model (SBM), a generative model for networks with community structure. Several variants of the SBM, and of its mixed-membership version, have been extended to hypergraphs. The model we utilize is an extension of the dyadic SBM to hypergraphs and allows generalizing the seminal detectability results of to higher-order interactions. In addition to our theoretical contributions, we derive an algorithmic implementation for inferring both communities and parameters of the models from the data. Our implementation scales well to both large hypergraphs and large hyperedges, owing to a dynamic-program formulation. Finally, we show how, with additional combinatorial arguments, one can efficiently sample hypergraphs with arbitrary communities from our probabilistic model. This problem, often studied in conjunction with inference, deserves its own attention when dealing with hypergraphs, as recently discussed in related work. Through numerical experiments, we confirm our theoretical calculations by showing that our algorithm accurately recovers the true community structure in synthetic hypergraphs all the way down to the predicted detectability threshold. We also illustrate that our approach gives insights into the community organization of real hypegraphs by analyzing a dataset of group interactions between students in a school. To facilitate reproducibility, we release open source the code that implements our inference and sampling procedures. The hypergraph stochastic block model ===================================== Consider a hypergraph *H* = (*V*, *E*) where *V* = {1, ..., *N*} is the set of nodes and *E* the set of hyperedges. A hyperedge *e* is a set of two or more nodes. We define Ω = {*e* : 2 ≤ ∣*e*∣ ≤ *D*}, the set of all possible hyperedges up to some maximum dimension *D* ≤ *N*, with ∣*e*∣ being the size of a hyperedge, i.e., the number of nodes it contains. Notice that *E* ⊆ Ω. We denote with *A**e* = 1 all *e* ∈ *E* and with *A**e* = 0 hyperedges *e* ∈ Ω \ *E*. Our Hypergraph Stochastic Block Model (HySBM) is an extension of the classical SBM for graphs. It partitions nodes into *K* communities by assigning a hard membership *t**i* ∈ [*K*] ≡ {1, …, *K*} to each node *i* ∈ *V*. It does so probabilistically, assuming that the likelihood to observe a hyperedge *A**e* is a Bernoulli distribution with a parameter that depends on the memberships {*t**i*}*i* ∈ *e* of its nodes. Formally, the probabilistic model is summarized as $$\begin{aligned} {2} \label{eq: prior} t\_i &\sim \mathrm{Cat}(n) &&\forall i \in V \\ \label{eq: likelihood} A\_e \, | \, t &\sim \mathrm{Be}\left(\frac{\pi\_e}{\kappa\_{|e|}}\right) \quad&&\forall e \in \Omega \,,\end{aligned}$$ where *n* = (*n*1, …, *n**K*) is a vector of prior categorical probabilities for the hard assignments *t**i*. The Bernoulli probabilities are given by *π**e* = ∑*i* < *j* ∈ *e**p**t**i**t**j* ,  with 0 ≤ *p**a**b* ≤ 1 being elements of a symmetric probability matrix (also referred to as affinity matrix) and *κ*∣*e*∣ a normalizing constant that only depends on the hyperedge size ∣*e*∣. This can take on any values, provided that it yields sparse hypergraphs where *π**e*/*κ*∣*e*∣ = *O*(1/*N*) and valid probabilities *π**e*/*κ*∣*e*∣. We develop our theory for a general form of *κ*∣*e*∣ and elaborate more on its choice in. In our experiments we utilize the value $\kappa\_d = \binom{N-2}{d-2} \frac{d(d-1)}{2}$. Our specific formulation of the likelihood is only one among many alternatives to model communities in hypergraphs. The likelihood we propose has three main properties. First, HySBM reduces to the standard SBM when only pairs are present (as *κ*2 = 1). Since we aim to develop a model that generalizes the SBM to hypergraphs, this is an important condition to satisfy. Second, it enables to develop the MP equations presented in the following section, which in turn lead to a theoretical characterization of the detectability limits and a computationally efficient algorithmic implementation. Third, the likelihoods based on expressions similar to have been shown to well describe higher-order interactions that possibly contain nodes from different communities. For convenience, we work with a rescaled affinity matrix *c* = *N**p*, which is of order *c* = *O*(1) in sparse hypergraphs. The log-likelihood L ≡ L(*A*, *t* ∣ *p*, *n*) evaluates to $$\begin{aligned} {\mathcal{L}}&= \sum\_{e \in \Omega} \left[A\_e \log\left(\frac{\pi\_e}{\kappa\_e}\right) + (1-A\_e) \log\left(1- \frac{\pi\_e}{\kappa\_e}\right)\right] \nonumber \\ &\hspace{4mm} + \sum\_{i \in V} \log n\_{t\_i} \nonumber \\ \begin{split} &= \sum\_{e \in \Omega} \Bigg[ A\_e \log\Bigg(\sum\_{i < j \in e} c\_{t\_i t\_j}\Bigg) + (1- A\_e) \\ &\hspace{4mm} \times \log\left(1- \frac{\sum\_{i < j \in e} c\_{t\_i t\_j}}{N \kappa\_e}\right) \Bigg] + \sum\_{i \in V} \log n\_{t\_i} + {\mathrm{const.}}\,, \label{eq: loglik} \end{split}\end{aligned}$$ where const. denotes quantities that do not depend on the parameters of the model. Inference and generative modeling ================================= Induced factor graph representation ----------------------------------- The probabilistic model in has a negative log-likelihood that can be interpreted as the Hamiltonian of a Gibbs-Boltzmann distribution on the community assignments *t*: $$\label{eq: gibbs\_distribution} p(t \, | \, A, p, n) = \frac{p(A, t \, | \, p, n)}{p(A \, | \, p, n)} = \frac{\exp{{\mathcal{L}}(A, t \, | \, p, n)}}{Z} \,,$$ where *Z* is the partition function of the system, that corresponds to the marginal likelihood of the data. The quantity *F* =  − log*Z* is also called the free energy. The equivalence in allows interpreting the probabilistic model in terms of factor graphs. Here, the function nodes are hyperedges *f* ∈ Ω, and variable nodes are elements of *V*. The interactions between function and variable nodes can be read directly from the log-likelihood in. In other words, the probabilistic model induces a factor graph *F* = (V, F, E) with variable nodes V = *V*, function nodes F = Ω and edges E = {(*i*, *e*) ∈ V × F : *i* ∈ *e*}. In we show a graphical representation of the equivalence between hypergraphs and factor graphs. For any variable node *i* and function node *f* of the factor graph we define the neighbors, or boundaries, as ∂*i* = {*f* ∈ F : (*i*, *e*) ∈ E}, being all function nodes adjacent to *i*, and ∂*f* = {*i* ∈ V : (*i*, *e*) ∈ E} being all variable nodes adjacent to *f*. Message-Passing (MP) -------------------- Given the factor graph representation of HySBM, we can perform Bayesian inference of the community assignments via message-passing. Originally obtained from the cavity method on spin glasses, MP allows estimating marginal distributions on the variable nodes of a graphical model by iteratively updating messages, auxiliary variables that operate on the edges of the factor graph. The efficiency of MP comes from the fact that the structure of the factor graph favors locally distributed updates. Although exact theoretical results are only proven on trees, MP has been shown to obtain strong performance also on locally tree-like graphs and it has been extended to dense graphs with short loops. [fig: hypergraph as factor graph] Applying MP to our model, the inference procedure yields expressions for the marginal probabilities *q**i*(*a*) of a node *i* to be assigned to any given community *a* ∈ [*K*]. Their values are obtained as solutions to closed-form fixed-point equations, which involve messages *q**i* → *e*(*t**i*) from variable to function nodes, and *q̂**e* → *i*(*t**i*), from function to variable nodes. The messages follow the sum-product updates $$\begin{aligned} \label{eq:MP\_eq1} {q\_{i \rightarrow e}(t\_i)} &\propto n\_{t\_i} \prod\_{f \in \partial i \setminus e} {\widehat{q}\_{f \rightarrow i}(t\_i)} \\ \label{eq:MP\_eq2} {\widehat{q}\_{e \rightarrow i}(t\_i)} &\propto \sum\_{t\_j: j \in \partial e \setminus i} \left( \frac{\pi\_e}{\kappa\_e} \right)^{A\_e} \left( 1- \frac{\pi\_e}{\kappa\_e} \right)^{1-A\_e} \prod\_{j \in \partial e \setminus i} {q\_{j \rightarrow e}(t\_j)} \,,\end{aligned}$$ and yield marginal distributions as *q**i*(*t**i*) ∝ *n**t**i*∏*e* ∈ ∂*i**q̂**e* → *i*(*t**i*) . Notice that, compared to those for graphs, the MP equations for hypergraphs in present additional challenges. First, in graphs the updates simplify. One can in fact collapse the two types of messages (and equations) into a unique one, since paths (*i*, *f*, *j*) in the factor graph reduce to pairwise interactions (*i*, *j*) between nodes. This simplification is not possible in hypergraphs, as one function node may connect more than two variable nodes. Second, the dimensionality of the MP equations grows faster when accounting for higher-order interactions. Here, the number of function nodes is equal to $|\mathcal{F}| = |\Omega| = \sum\_{d=2}^D \binom{N}{d}$, yielding ∣F∣ = *O*(2*N*) at large *D* = *N*. In contrast, one gets *O*(*N*2) pairwise messages in the updates for graphs. To produce computationally feasible MP updates one can assume sparsity, as already done in the dyadic case. We outline such updates in the following theorem. [th: MP equations] Assuming sparse hypergraphs where *c* = *O*(1), the MP updates satisfy the following fixed-point equations to leading order in *N*. For all hyperedges *e* ∈ *E* and nodes *i* ∈ *e*, the messages and marginals are given by: $$\begin{aligned} \label{eq:feasible\_MP\_1} {q}\_{i \to e}(t\_i) &\propto n\_{t\_i} \Bigg( \prod\_{\substack{f \in E \\ f \in \partial i \setminus e}} \widehat{q}\_{f \to i}{(t\_i)} \Bigg) \exp(-h(t\_i)) \\ \label{eq:feasible\_MP\_2} \widehat{q}\_{e \to i}(t\_i) &\propto \sum\_{t\_j: j \in \partial e \setminus i} \pi\_e \prod\_{j \in \partial e \setminus i} q\_{j \to e}{(t\_j)} \\ \label{eq:feasible\_MP\_3} q\_i(t\_i) &\propto n\_{t\_i} \Bigg( \prod\_{\substack{f \in E \\ f \in \partial i}} \widehat{q}\_{f \to i}{(t\_i)} \Bigg) \exp(-h(t\_i)) \\ \label{eq:feasible\_MP\_4} h(t\_i) &= \frac{C'}{N} \sum\_{j \in V} \sum\_{t\_j} c\_{t\_i t\_j} q\_j(t\_j) \,,\end{aligned}$$ where $C' = \sum\_{d=2}^D \binom{N-2}{d-2} \frac{1}{\kappa\_d}$. A proof of is provided in. The updates in are in principle computationally feasible, as products of function nodes *f* ∈ *E* have replaced products over the entire space *f* ∈ Ω. In sparse graphs, that we observe in many real datasets, *E* is much smaller than the original Ω, thus significantly decreasing the computation cost. An intuitive justification of, which we formalize in its proof, is that the observed interactions *f* ∈ *E* hold most of the weight in the updates of their neighbors, while the unobserved ones *f* ∈ Ω \ *E* send approximately constant messages and thus can be absorbed in the external field *h* introduced in. This idea is inspired by the dyadic MP equations in. However, in contrast to MP on graphs, a vanilla implementation of the updates is still not scalable in hypergraphs, as the computational cost of is *O*(*K*∣*e*∣ − 1). To tackle this issue, we develop a dynamic programming approach that reduces the complexity to *O*(*K*2∣*e*∣). Dynamic programming is exact, as it does not rely on further approximations on the MP updates, its detailed derivations are provided in. The fixed-point equations of naturally suggest an algorithmic implementation of the MP inference procedure. We present a pseudocode for it in. Expectation-Maximization to learn the model parameters ------------------------------------------------------ We have presented a MP routine for inferring the community assignments {*t**i*}*i* ∈ *V*. Now, we derive closed-form updates for the model parameters *c*, *n* via an Expectation-Maximization (EM) routine. Differentiating the log-likelihood in with respect to *n*, and imposing the constraint ∑*a* = 1*K**n**a* = 1, yields the update $$\label{eq: n update} n\_a = \frac{N\_a}{N} \,.$$ Notice that this update depends on the MP results, as $N\_a = |\{i \in V {:}\operatorname\*{arg\,max}\_b q\_i(b) = a \}|$ is the count of nodes assigned to community *a* according to the inferred marginals. To update the rescaled affinity *c* we adopt a variational approach, where we maximize a lower bound of the log-likelihood, or, equivalently, minimize a variational free energy. In, we show detailed derivations for the following fixed-point updates $$\label{eq: c update} c\_{ab}^{(t+1)} = c\_{ab}^{(t)} \, \frac{ 2 \, \sum\_{e \in E} {\#^{e}\_{ab}} / {\pi\_e} }{ N \, C' \, (N n\_a n\_b - \delta\_{ab} n\_a) } \,,$$ where #*a**b**e* = ∑*i* < *j* ∈ *e**δ**t**i**a**δ**t**j**b* is the count of dyadic interactions between two communities *a*, *b* within a hyperedge *e*. In practice, when inferring *t*, *n*, *c* one proceeds by alternating MP inference of *t*, as presented in, with the updates of *c* and *n* in until convergence. A pseudocode for the EM procedure is presented in. Sampling from the generative model ---------------------------------- One of the main advantages of using a probabilistic formulation, such as the one presented here, is the ability to generate data with a desired community structure. Among other tasks, this can be used in particular to test detectability results like the ones we theoretically derive in the following section. However, in hypergraphs, writing a probabilistic model does not directly imply the ability to sample from it, as is typically the case for graphs. In fact, while the *O*(*N*2) configuration space of graphs allows performing sampling explicitly, in the case of hypergraphs the exploding configuration space Ω makes this task prohibitive, even for hypergraphs with moderate number of nodes and hyperedge sizes. Here we propose a novel sampling algorithm that can efficiently scale and produce hypergraphs of dimensions in the tens or hundreds of thousands of nodes. We exploit the hard-membership nature of the assignments to obtain exact sampling via combinatorial arguments, as opposed to the approximate sampling in recent work for mixed-membership models. Owing to this procedure, our results are not limited to theoretical derivations, but can be tested numerically on synthetic data, as we show below. Hence, our sampling routine is an essential contribution of this work. The details of our sampling procedure are presented in and its implementation is open sourced. Phase transition ================ Detectability bounds -------------------- [t] [fig:factor graph tree assumption] Beside providing a valid and efficient inference algorithm, one of the main advantages of MP is the possibility of deriving closed-form expressions for the detectability of planted communities. The transition from detectable to undetectable regimes has been first shown to exist in MP-based inference models for graphs, and gave rise to an extensive body of literature on theoretical detectability limits and sharp phase transitions. Here, we extend these classical arguments to hypergraphs, and find relevant differences when higher-order interactions are considered. In line with previous work, we restrict our study to the case where groups have constant expected degrees. In fact, in settings where such an assumption does not hold, it is possible to obtain good classification by simply clustering nodes based on their degrees. Formally, we assume ∑*a* = 1*K**c**a**b**n**b* = *c* ,  for some fixed constant *c*. Notice that does not immediately imply a constant degree for the groups, as in hypergraphs the expected degree is defined differently than the left-hand-side of the equation above. Nevertheless, in we prove that imposing the condition in does indeed imply a constant average degree. More precisely, [th: constant degree and message fixed points] Assuming, the following holds: * all the groups have the same expected degree; * the fixed points for the messages read $$\begin{aligned} {2} \label{eq: fixed\_point\_1} {q\_{i \rightarrow e}(t\_i)} &= n\_{t\_i} \quad&& \forall e \in E, i \in e \\ \label{eq: fixed\_point\_2} {\widehat{q}\_{e \rightarrow i}(t\_i)} &= \frac{1}{K} && \forall e \in E, i \in e \,. \end{aligned}$$ We want to study the propagation of perturbations around the fixed points of. We assume that the factor graph is locally tree-like, i.e., neighborhoods of nodes are approximately trees. We provide a visualization of this in. Classically, it has been proven that for sparse graphs almost all nodes have local tree-like structures up to distances of order *O*(log*N*). We are not aware of similar statements for hypergraphs. While our empirical results prove that these assumptions are reasonable and approximately valid, we leave the formalization of such an argument for future work. Referring to (b), one can see that between every leaf and the root, there is a single connecting path. Thus, perturbations on the leaves propagate through a tree to the root, and transmit via the following transition matrix $$\tilde T^{ab}\_r = \frac{\partial {q\_{i\_r \rightarrow f\_r}(a)}}{\partial {q\_{i\_{r+1} \rightarrow f\_{r+1}}(b)}} \,,$$ where *i**r*, *f**r* are respectively the *r*-th variable node and function node in the path. In words, this is the dependency of a message on the message one level below in the path. In we show that, to leading terms in *N*, the transition matrix evaluates to $$\label{eq: transition matrix} \tilde T^{ab}\_r = \frac{2\, n\_a}{|f\_r|(|f\_r|-1)} \left( \frac{c\_{ab}}{c} -1 \right) \,.$$ A related expression was previously obtained for the transition matrix on graphs is *T**a**b* = *n**a*(*c**a**b*/*c* − 1). Hence, we can compactly write $\tilde T^{ab}\_i = [2 / {(|f\_r|(|f\_r|-1))}] \,T^{ab}$. This connection highlights an important difference between the two cases: hyperedges induce a higher-order prefactor with a “dispersion” effect. The larger the hyperedge, the lower is the magnitude of this transition. Instead, if the hyperedge is a pair, this prefactor reduces to one, and we recover the result on graphs. A perturbation *ε**t**d**k**d* of a leaf node *k**d* influences the perturbation *ε**t*0*k*0 on the root *t*0 by $$\epsilon\_{t\_0}^{k\_0} = \sum\_{\{t\_r\}\_{r=1, \ldots, d}} \left(\prod\_{r=0}^{d-1}\tilde T^{t\_rt\_{r+1}}\_i\right) \epsilon\_{t\_d}^{k\_d} \,.$$ We can also express this connection in matrix form as $$\label{eqn:perturbation\_step} \epsilon^{k\_0}= \left( \prod\_{r=0}^{d-1} \frac{2}{|f\_r|(|f\_r|-1)} \right) T^d \epsilon^{k\_d} \,,$$ where *T* is the matrix with entries *T**a**b* (in raised to the power of *d*), and *ε**k**d* the array of *ε**t**d**k**d* values. Now, similarly to, we consider paths of length *d* →  + ∞. In such a case, the *r*-dependent prefactor in converges almost surely to $$\begin{aligned} \mu = \exp \left( \mathbb{E} \left[d \log \frac{2}{|f|(|f|-1)} \right] \right) \,,\end{aligned}$$ where the expectation is taken with respect to randomly drawn hyperedges *f* ∈ *E*. If *λ* is the leading eigenvector of *T*, then *ε**k*0 ≈ *μ* *λ**d**ε**k**d* . Aggregating over the leaves, and since the perturbations have an expected value of zero, we obtain variance: $$\begin{aligned} \langle (\epsilon\_{t\_0}^{k\_0})^2 \rangle &\approx \left \langle \left( \sum\_{k=1}^{[\langle d \rangle (F-1)]^d} \mu \, \lambda^d \epsilon^{k}\_t \right)^2 \right \rangle \\ \label{eq:last\_expression\_before\_stability} &\overset{\text{i.i.d.}}{=} (\langle d \rangle (F-1))^d \mu^2\, \lambda^{2d} \langle (\epsilon\_{t}^{k})^2 \rangle \,,\end{aligned}$$ where ⟨*d*⟩ is the average node degree and *F* the average hyperedge size. The expression in yields the following stability criterion, the key result of our derivations: $$\label{eq: stability criterion} \langle d \rangle (F-1) \left( \exp \mathbb{E} \left[\log \frac{2}{|f|(|f|-1)} \right] \right)^2 \lambda^{2} < 1 \,.$$ This generalizes the seminal result *c**λ*2 < 1 of to hypergraphs. When holds, the influence of the leaves to the root decays when propagating up the tree in (b). Conversely, if is not satisfied, it grows exponentially. To obtain more interpretable bounds, we focus on a benchmark scenario where the affinity matrix contains all equal on- and off-diagonal elements, i.e., *c**a**a* = *c*in for all *a* ∈ [*K*] and *c**a**b* = *c*out for all *a* ≠ *b*. In this case, condition becomes *c*in + (*K* − 1)*c*out = *K**c*, the leading eigenvalue of *T* is *λ* = (*c*in − *c*out)/*K**c*, and the stability condition in reads $$\label{eq: bound cin - cout} |c\_{\mathrm{in}} - c\_{\mathrm{out}}| > {\frac{Kc}{\sqrt{\langle d \rangle (F-1)}}} \exp \left( - \mathbb{E} \left[\log \frac{2}{|f|(|f|-1)} \right] \right) \,.$$ When hypergraphs only contain dyadic interactions, reduces to the bound $|c\_{\mathrm{in}} - c\_{\mathrm{out}}| > K \sqrt{c}$ previously derived for graphs, also known as Kesten-Stigum bound. Phase transition in hypergraphs ------------------------------- We test the bound obtained in by running MP on synthetic hypergraphs generated via the sampling algorithm of. In our experiments, we fix *K* = 4 and sample hypergraphs with *N* = 104 nodes. We also fix *c* = 10 and change the ratio *c*out/*c*in. In this setup, for graphs, one expects a continuous phase transition between two regimes where the system is undetectable and detectable. In the former, where the inequality yielded by the Kesten-Stigum bound does not hold, and the graph does not carry sufficient information about the community assignments, community detection is impossible. In the latter, communities can be efficiently recovered by MP. In we plot the overlap = (∑*i**q**i*⋆/*N* − max*a**n**a*)/(1 − max*a**n**a*) with *q**i*⋆ ≡ *q**i*(*a**i*⋆) and $a\_i^\star = \operatorname\*{arg\,max}\_b q\_i(b)$, against *c*out/*c*in. Our results are in agreement with the theoretical predictions: the overlap is low in the undetectable region, high in the detectable region, and we observe a continuous phase transition at the Kesten-Stigum bound for graphs, i.e., when *D* = 2. We expect the presence of higher-order interactions to improve detectability, as it yields greater overlap for any *c*out/*c*in and it shifts the theoretical transition to larger values. We empirically validate this prediction by evaluating for hyperedges up to size *D* = 50 and performing MP inference in. Diverging convergence times for larger *c*out/*c*in, i.e., when the free energy landscape gets progressively rugged, further demonstrate this behavior, as shown in. [fig:phasetransition] The impact of higher-order interactions on detectability -------------------------------------------------------- As mentioned above, the transition matrix in reduces to the classic *T**a**b* when only dyadic interactions are present. In fact, the additional prefactor 2/(∣*f**r*∣(∣*f**r*∣ − 1)) is equal to one for 2-dimensional hyperedges. However, when hyperedges of higher sizes are present, this prefactor is strictly smaller than one. This dampens the perturbations *ε**k*0 when they propagate up the tree in (b). It is unclear whether this higher-order effect aids or hurts detectability, as it could prevent signal from being propagated, but also noise from accumulating at the root. With this in mind, we investigate the impact of higher-oder interactions on detectability by disentangling the effect that *K*, *c* and, most importantly, *D* have on the detectability bound set by. To this end, we rewrite as $$\label{eq:bound using effective deg} \left| \rho\_{\mathrm{in}} - \frac{1}{Kc} \right| > \Phi(K,c,D) \,.$$ Here, we utilized *c*in/*K**c*  = *ρ*in ∈ [0, 1], a degree-independent rescaling of *c*in, where we normalize by its maximum possible value *K**c*, as per. The term Φ(*K*, *c*, *D*) is the value of the theoretical bound at the r.h.s. of, normalized by *K**c* as well. This way, we get the decomposition Φ(*K*, *c*, *D*) = *α*(*K*)*β*(*c*)*γ*(*D*) as a product of three independent terms: $$\begin{aligned} \alpha(K) &= \frac{K-1}{K} \label{eq: bound factor 1} \\ \beta(c) &= \frac{1}{\sqrt{c}} \label{eq: bound factor 2} \\ \label{eq: gamma\_term\_detectability} \gamma(D) &= \frac{\exp \left( - \mathbb{E} \left[\log \frac{2}{|f|(|f|-1)} \right] \right)}{\sqrt{C (F - 1) / 2}} \,,\end{aligned}$$ where $C = \sum\_{d=2}^D \binom{N-2}{d-2} \frac{d}{\kappa\_d}$ In our experiments we choose of $\kappa\_d = \binom{N-2}{d-2} \frac{d(d-1)}{2}$, which conveniently returns *C* = 2*H**D* − 1 (see ), with *H**D* − 1 being the (*D* − 1)-th harmonic number. However, our theory holds true for any *κ**d* yielding sparse hypergraphs. The classic effect of *α*(*K*) and *β*(*c*) is summarized in (a), where the maximum hyperedges size is fixed to *D* = 2, hence *γ*(*D*) = 1. Here, we observe that the undetectability gap reduces when increasing *c*. Graphs with higher average degrees are more detectable even when there is a larger inter-community mixing. The effect of larger *K* is that of skewing the detectability phase transition. This is because edges contributing to *c*out are spread over *K* − 1 communities, while those accounted for *c*in concentrate in a single one. Intuitively, increasing *K* allows to have more in-out edges, and detectability is still possible because of the dominating *c*in term. The limit value *ρ*in = 1/*K* constitutes the perfect mixing case *c*in = *c*out = *c*, where detectability is unfeasible for any *K* and finite degree *c*. One should notice that, while the bounds drawn in hold theoretically, for large *K* it may be exponentially hard to retrieve communities even in the detectable region. The higher-order effects on detectability are shown in (b)-(c). The presence of hyperedges with *D* > 2 enters in as the product of two separate contributions, *γ*(*D*) = *γ*1(*D*)*γ*2(*D*), where $$\begin{aligned} \gamma\_1(D) &= \exp \left( - \mathbb{E} \left[\log \frac{2}{(|f|(|f|-1))} \right] \right) \label{eq: gamma 1} \\ \gamma\_2(D) &= \frac{1}{\sqrt{C(F-1)/2}} \,. \label{eq: gamma 2}\end{aligned}$$ These two terms have contrasting effects that multiply to obtain the overall trend of *γ*(*D*): *γ*1(*D*) is monotonically increasing while *γ*2(*D*) is monotonically decreasing. If we were to consider only the “dispersion” contribution *γ*1, we would enlarge the detectability gap by increasing Φ. However, the *γ*2 term factors in the increasing number of interactions observed with larger hyperedges. The result is the overall higher-order contribution to detectability *γ*(*D*) = *γ*1(*D*)*γ*2(*D*), where the value of *γ*2 dominates over *γ*1, giving rise to the non-trivial, monotonically decreasing, profile of (b). The overall effect of higher-order terms is illustrated by plotting the relative difference ΔΦ(*K*, *c*, *D*) = (Φ(*K*, *c*, *D*) − Φ(*K*, *c*, 2))/Φ(*K*, *c*, 2) for a range of *c* and *D* values, with *K* = 4, as shown in (c). We observe how higher-order interactions lead to better detectability for all *c*, especially in sparse regimes, where *c* is small and pairwise information is not sufficient for the recovery of the communities. [fig:detectabilitypanel] Entropy and higher-order information ------------------------------------ Hypergraphs are often compared against their clique decomposition, i.e., the graph obtained by projecting all hyperedges onto their pairwise connections, as a baseline network structure. The clique decomposition yields highly dense graphs. For this reason, most theoretical results on sparse graphs are not directly applicable, algorithmic implementations become heavier—many times unfeasible—and storage in memory is suboptimal. Previous work also showed that algorithms developed for hypergraphs tend to work better in many practical scenarios. Intuitively, hypergraphs “are more informative” than graphs, as there exists only one clique decomposition induced by a given hypergraph, but possibly more hypergraphs corresponding to a given clique decomposition. Here we give a theoretical basis to this common intuition and find that, within our framework, we can quantify the extra information carried by higher-order interactions. For a given hypergraph *H* = (*V*, *E*), edge (*i*, *j*) ∈ *V*2 and hyperedge *e* ∈ *E*, we define the probability distribution $$\label{eq:p joint hypergraph} {p\_{H}}(\{i, j\}, e) = \begin{cases} \displaystyle \frac{1}{E} \frac{2}{|e|(|e|-1)} & \quad \text{if } i, j \in e \\ 0 & \quad \text{otherwise} \,. \end{cases}$$ This distribution represents the joint probability of drawing a hyperedge uniformly at random among the possible *E* in the hypergraph and a dyadic interaction {*i*, *j*} out of the possible $\binom{|e|}{2}$ within the hyperedge *e*. From we can derive the following marginal distributions: $$\begin{aligned} {p\_{E}}(e) &= \frac{1}{E} \label{eq: marginal hyperedge distr}\\ {p\_{\mathcal{C}}}(\{i, j\}) &= \frac{1}{E} \sum\_{e \in E: i, j \in e} \frac{2}{|e|(|e|-1)} \,, \label{eq: marginal dyadic distr}\end{aligned}$$ for all *e* ∈ *E* and pairs of nodes *i* ≠ *j*. The distribution *p**E* is a uniform random draw of hyperedges. The distribution *p*C represents the probability of drawing a weighted interaction {*i*, *j*} in the clique decomposition of *H*. With at hand, it is possible to rewrite *γ*1(*D*) in as $$\begin{aligned} \log \gamma\_1(D) &= \mathcal{H}(\{i, j\} \, | \, f) \label{eq: gamma 1 as conditional entropy}\,,\end{aligned}$$ where H( ⋅  ∣  ⋅ ) is the conditional entropy. This entropy is minimized when *p*C({*i*, *j*}) is very different than *p**H*({*i*, *j*}∣*f*), i.e., when conditioning a pair {*i*, *j*} to be in *f* brings additional information with respect to the interaction {*i*, *j*} alone. This happens when {*i*, *j*} appears in several hyperedges and it is difficult to reconstruct the hypergraph from its clique decomposition. As lower values of *γ*1 imply easier recovery, suggests that recovery is favored in hypergraphs where hyperedges overlap substantially and that cannot be easily distinguished from their clique decomposition. We obtain a similar result by rewriting as $$\label{eq: gamma 1 as perplexity} \gamma\_1(D) = \frac{\exp{\mathcal{H}({p\_{H}})}}{\exp{\mathcal{H}({p\_{E}})}} = \frac{\mathrm{PP}({p\_{H}})}{\mathrm{PP}({p\_{E}})} \,,$$ which is the ratio of two exponentiated entropies. In information theory, PP is referred to as perplexity, and it is an effective measure of the number of possible outcomes in a probability distribution. Once we fix the number of hyperedges *E* (and therefore PP(*p**E*)), the number of effective outcomes is given by the number of likely drawn {*i*, *j*} pairs. This number is minimized when there is high overlap between hyperedges, thus confirming the interpretation of. Finally, we set a different focus by rewriting *γ*1 as log*γ*1(*D*) = H(*p*C) − KL(*p**H* ∣∣ *p*C  ⊗  *p**E*) ,  where KL is the Kullback-Leibler divergence and  ⊗  the product probability distribution. Here we pose the question: given a fixed clique decomposition and number of hyperedges, what is the hypergraph attaining the highest detectability? From the equation, such hypergraph is that with the highest KL(*p**H* ∣∣ *p*C  ⊗  *p**E*) = *I*({*i*, *j*}, *f*). In this case, the KL-divergence between a joint distribution and its marginals, also called mutual information *I* of the two random variables, describes the information shared between pairwise interactions and single hyperedges. Hypergraphs with high KL-divergence, i.e, high information about a given {*i*, *j*} in a single hyperedge *f*, will yield better detectability. In other words, it is preferable to choose hypergraphs that, while still producing the observed clique decomposition (thus achieving low entropy H(*p**H*)), have largely overlapping hyperedges. The results discussed in this section provides a theoretically guidance for the construction of hypergraphs that explain an observed graph made of only pairwise interactions, a problem relevant in datasets where higher-oder interactions are not explicitly tracked. Experiments on real data ======================== [t]![. We infer the communities via MP and EM on the High School dataset. In all cases, we run inference with K=10 communities. (a) Inferred communities on the High School dataset, only utilizing hyperedges up to a maximum size D. Taking into account higher-order information, up to D=4, results in more granular partitions. (b) Graphical representation of the students’ partition into classes. We draw only hyperedges of size D. (c) We compare the inferred partitions with the attended class covariate of the nodes, i.e., the classes students participate in. We comment further on this comparison in. (d) A quantitative measurement complementing that of panel (b): the Normalized Mutual Information (NMI) between inferred communities and attended classes, the AUC on the full dataset, as well as the ratio \rho_D of hyperedges of size equal to D. (e) Free energy landscape. We consider the parameters (p_2, n_2), (p_3, n_3) and (p_4, n_4) inferred from the dataset with, respectively, D=2, 3, 4. With these, we build the simplex of convex combinations p = \sum_{i \in \{2,3,4\}} \lambda_i p_i, where \sum_{i \in \{2,3,4\}} \lambda_i = 1 and 0 \leq \lambda_i \leq 1 (similarly for n). For every point in the simplex, we compute the free energy on the full dataset, i.e., with D=5. More details on these computations are provided in.](figure5.png "fig:") [fig: real exps] Our model leads to a natural algorithmic implementation to learn communities in hypergraphs. In fact, alternating MP and EM rounds, our algorithm outputs marginal probabilities *q**i*(*t**i*) for a node *i* to belong to a community *t**i*, as well as the community ratios *n* and the affinity matrix *p*. We illustrate an application of this procedure on a dataset of interactions between high school students (High School). Here, nodes are students and hyperedges represent whether a group of students was observed in close proximity, as recorded by wearable devices. The hypergraph contains *N* = 327 nodes and *E* = 7818 hyperedges. In (a) we show the communities inferred on the dataset where only hyperedges up to size *D* = 2, 3, 4 are kept. We observe a clear progression in how the nodes are gradually allocated into different groups when higher-order interactions are progressively taken into account. This suggests that interactions beyond pairs carry information that would get lost if only edges were to be observed. To get a qualitative interpretation, we compare the communities inferred with the nine classes attended by the students, an attribute available with the dataset. We illustrate the hypergraph of student interactions, coloring each node according to its class, in (b). Previous studies have shown that in this dataset a number of interactions happen with stronger prevalence within students of the same class. In (c), we compare the communities inferred with different maximum hyperedge size *D* with the classes, and observe that there is a stronger alignment between them when larger hyperedges are utilized for inference. In (d) we show, at *D* = 2, 3, 4, the Normalized Mutual Information (NMI) between inferred communities and class attributes,the AUC with respect to the full dataset, and the fraction *ρ**D* of hyperedges with size equal to *D*. In addition, our algorithm detects connection patterns that were previously observed between the different student classes as captured by the affinity matrix *p*, see for details. A feature that sets MP apart from other inference methods is the possibility to approximately compute the evidence *Z* = *p*(*A* ∣ *p*, *n*) of the whole dataset, or, equivalently, the free energy *F* =  − log*Z*. In we discuss how to make the free energy computations feasible by exploiting classical cavity arguments, as well as a dynamic program similar to that employed for MP. We present the results of these estimates on the High School dataset in (e). Here we take the values of *n* and *p* inferred by cutting the dataset at maximum hyperedge sizes *D* = 2, 3, 4. Then, we compute the free energy on the full dataset (*D* = 5) in the simplex of *n*, *p* parameters outlined by the three vertices. We notice that interactions of size *D* = 5 seem to be less informative and lead to suboptimal inference, see. Similarly to what observed on graphs, the energy landscape appears rugged and complex. EM converges to solutions that are local attraction points, i.e., valleys of low-energy configurations. Moreover, the free energy of the *p*, *n* parameters inferred with only pairwise interactions (i.e., *D* = 2, lower-right) is higher than that inferred for *D* = 3 (upper-left), which is in turn higher that the one of *D* = 4 (bottom-left). Conclusion ========== We developed a probabilistic generative model and a message-passing-based inference procedure that lead to several results advancing community detection on hypergraphs. In particular we obtained closed-form bounds for the detectability of community configurations, extending the seminal results of to higher-order interactions. Experimental validation of such bounds shows the emergence of a detectability phase transition when spanning from disassortative to assortative community structures. With these theoretical bounds at hand, we investigate the relationship between hypergraphs and graphs from an information-theoretical perspective. Characterizing the entropy and perplexity of pairs of nodes in hyperedges, we find that hypergraphs with many overlapping hyperedges are easier to detect. Beside these theoretical advancements, we develop two relevant algorithmic ones. First, we derive an efficient and scalable Message-Massing algorithm to learn communities and model parameters. Second, we propose an exact and efficient sampling routine that generates synthetic data with desired community structure according to our probabilistic model in order of seconds. Both of these implementations are released open source. The mathematical tools we propose here to obtain our results are valid for standard hypergraphs. We can foresee that they could be generalized to dynamic hypergraphs where interactions change in time, using intuitions derived for dynamic graphs. Similarly, it would be interesting to see how detectability bounds change when accounting for node attributes, as results in networks have shown that adding extra information can boost community detection. Finally, from an empirical perspective, it would be interesting to see how our theoretical insights in terms of entropy of hypergraphs and clique expansion match measures that relate hypergraphs to simplicial complexes. Acknowledgments =============== N.R. and A.L. contributed equally to this work. N.R. acknowledges support from the Max Planck ETH Center for Learning Systems. The authors thank the International Max Planck Research School for Intelligent Systems (IMPRS-IS) for supporting A.L. Expected degree and choice of *κ**d* ==================================== As we commented in, the choice of the normalizing constant *κ**d*, for *d* = 2, …, *D*, controls the Bernoulli probabilities for all hyperedges *e* ∈ Ω via $$\mathbb{P}(e \, | \, p, t) = \frac{\pi\_e}{\kappa\_{|e|}} = \frac{\sum\_{i<j \in e} p\_{t\_i t\_j}}{\kappa\_{|e|}} \,. \nonumber$$ Our theoretical analysis and results hold for general choices of *κ**d*, as long as these respect the following conditions. First, for any choice of a symmetric 0 ≤ *p**a**b* ≤ 1, we need valid probabilities 0 ≤ *π**e*/*κ*∣*e*∣ ≤ 1. This implies that, necessarily: $$\label{eq: minimum kappa} \kappa\_d \ge \frac{d(d-1)}{2} \quad \forall d=2, \ldots, D \,.$$ Second, we want the ensemble to consist of sparse hypergraphs, in expectation. A good proxy for such a requirement is the average degree, which we can compute explicitly: $$\begin{aligned} \langle d \rangle &= \frac{1}{N} \sum\_{i \in V} \sum\_{e \in \Omega: i \in e} \mathbb{P}(e \, | \, p, t) \nonumber \\ &= \frac{1}{N} \sum\_{e \in \Omega} \sum\_{i \in e} \mathbb{P}(e \, | \, p, t) \nonumber \\ &= \frac{1}{N} \sum\_{e \in \Omega} |e| \mathbb{P}(e \, | \, p, t) \nonumber \\ &= \frac{1}{N} \sum\_{e \in \Omega} \frac{|e|}{\kappa\_{|e|}} \sum\_{i< j \in e} p\_{t\_i t\_j} \nonumber \\ &= \frac{C}{N} \sum\_{i <j \in V} p\_{t\_i t\_j} \nonumber \\ &\approx \frac{C}{N} \sum\_{a \le b \in [K]} \frac{p\_{ab} (N n\_a) (N n\_b)}{1 + \delta\_{ab}} \nonumber \\ \label{eq: final expression avg deg} &= \frac{C}{2} \sum\_{a,b \in [K]} c\_{ab}n\_a n\_b \,, \end{aligned}$$ where $$\label{eq: C formula} C = \sum\_{d=2}^D \binom{N-2}{d-2} \frac{d}{\kappa\_d} \,. \nonumber$$ We assume *c**a**b* = *O*(1), i.e. to be in a sparse regime. Thus, the expected degree’s scale is governed by *C* and, in turn, by the choice of *κ**d*, as $$\langle d \rangle = O(C) \nonumber \,.$$ Additionally, but not necessarily, we wish our model to extend the classical SBM, which imposes the additional condition *κ*2 = 1. There exist many choices of *κ**d* obeying the constraints just discussed. A natural one is the minimum value satisfying, i.e. *κ**d* = *d*(*d* − 1)/2. This gives $$C = \frac{2}{N-1} \sum\_{d=1}^{D-1} \binom{N-1}{d} \, \nonumber$$ that, for *D* = *N*, returns ⟨*d*⟩ = *O*(2*N*/(*N* − 1)), which is too high to yield sparse hypergraphs. Notice that, in practice, we rarely use *D* = *N*. However, such considerations are useful to evaluate how different *κ**d* values reflect on the properties of the hypergraphs ensembles of the model. A more interesting choice is given by $$\kappa\_d = \frac{d(d-1)}{2} \binom{N-2}{d-2} \,. \nonumber$$ This corresponds to taking the average among the *d*(*d* − 1)/2 interactions that yield *π**e*, and $\binom{N-2}{d-2}$ is a normalization: once observed an interaction between two nodes *i*, *j*, the remaining *d* − 2 are chosen at random. This gives $$\begin{aligned} C &= 2 \sum\_{d=1}^{D-1} \frac{1}{d} \nonumber \\ \label{eq: kappa choice in appendix} &= 2 H\_{D-1} \,,\end{aligned}$$ which is proportional to the (*D* − 1)-th harmonic number, hence growing more mildly at leading order as *C* = *O*(log*D*). Aside from having an interpretation in terms of null modeling, the value in, which we utilize experimentally, was shown to be a sensible choice in many real-life scenarios. Message-Passing derivations =========================== MP equations have been developed in the case of general factor graphs, see for example Murphy et al. , Section 22.2.3.2. We consider approximate messages from hyperedges *e* to nodes *i* being *q̂**e* → *i*(*t**i*), and vice versa, *q**i* → *e*(*t**i*). The messages, for any *e* ∈ F, *i* ∈ ∂*e* satisfy the general updates $$\begin{aligned} {q\_{i \rightarrow e}(t\_i)} &\propto n\_{t\_i} \prod\_{f \in \partial i \setminus e} {\widehat{q}\_{f \rightarrow i}(t\_i)} \nonumber \\ {\widehat{q}\_{e \rightarrow i}(t\_i)} &\propto \sum\_{t\_j: j \in \partial e \setminus i} \left( \frac{\pi\_e}{\kappa\_e} \right)^{A\_e} \left( 1- \frac{\pi\_e}{\kappa\_e} \right)^{1-A\_e} \prod\_{j \in \partial e \setminus i} {q\_{j \rightarrow e}(t\_j)} \,. \label{eq: messtwo formula appendix}\end{aligned}$$ The marginal beliefs are given by *q**i*(*t**i*) ∝ *n**t**i*∏*e* ∈ ∂*i**q̂**e* → *i*(*t**i*) . Message updates --------------- First, we can distinguish the values of messages for function nodes *e* such that *A**e* = 0 or *A**e* = 1, i.e. if the hyperedge *e* is observed or not in the data. If *A**e* = 1, i.e. *e* ∈ *E*, then $$\begin{aligned} {\widehat{q}\_{e \rightarrow i}(t\_i)} &\propto \sum\_{t\_j: j \in \partial e \setminus i} \frac{\pi\_e}{\kappa\_e} \prod\_{j \in \partial e \setminus i} {q\_{j \rightarrow e}(t\_j)} \nonumber\\ &\propto \sum\_{t\_j: j \in \partial e \setminus i} \pi\_e \prod\_{j \in \partial e \setminus i} {q\_{j \rightarrow e}(t\_j)} \,. \label{eq: e to i for present hye}\end{aligned}$$ If *A**e* = 0, then *e* ∈ Ω \ *E*. We start by computing $$\begin{aligned} {\widehat{q}\_{e \rightarrow i}(t\_i)} &\propto \sum\_{t\_j: j \in \partial e \setminus i} \left( 1 - \frac{\pi\_e}{\kappa\_e} \right) \prod\_{j \in \partial e \setminus i} {q\_{j \rightarrow e}(t\_j)} \nonumber \\ &= \sum\_{t\_j: j \in \partial e \setminus i} \prod\_{j \in \partial e \setminus i} {q\_{j \rightarrow e}(t\_j)} - \sum\_{t\_j: j \in \partial e \setminus i} \frac{\pi\_e}{\kappa\_e} \prod\_{j \in \partial e \setminus i} {q\_{j \rightarrow e}(t\_j)} \nonumber \\ &= 1 - \sum\_{t\_j: j \in \partial e \setminus i} \frac{\pi\_e}{\kappa\_e} \prod\_{j \in \partial e \setminus i} {q\_{j \rightarrow e}(t\_j)} \nonumber \\ &= 1- \frac{1}{N}\sum\_{t\_j: j \in \partial e \setminus i} \frac{\sum\_{k < m \in e}c\_{t\_k t\_m}}{\kappa\_e} \prod\_{j \in \partial e \setminus i} {q\_{j \rightarrow e}(t\_j)} \,.\label{eq: e to i non existent}\end{aligned}$$ We indicate with *Ẑ**e* → *i*(*t**i*) the convenient non-normalized rewriting of *q̂**e* → *i*(*t**i*) in. Therefore, we find $$\begin{aligned} {q\_{i \rightarrow e}(t\_i)} &\propto n\_{t\_i} \prod\_{f \in \partial i \setminus e} {\widehat{q}\_{f \rightarrow i}(t\_i)} \nonumber \\ \label{eq: second\_step\_approximate\_message} &= \frac{n\_{t\_i}}{{\widehat{q}\_{e \rightarrow i}(t\_i)}} \prod\_{f \in \partial i} {\widehat{q}\_{f \rightarrow i}(t\_i)} \\ \label{eq: third\_step\_approximate\_message} &\propto \frac{n\_{t\_i}}{\hat{Z}\_{e\to i}(t\_i)} \prod\_{f \in \partial i} {\widehat{q}\_{f \rightarrow i}(t\_i)} \\ &= \frac{q\_{i}{(t\_i)}}{\hat{Z}\_{e\to i}(t\_i)} \label{eq: i to e no expansion} \,, \end{aligned}$$ where from the to we used *Ẑ**e* → *i*(*t**i*) introduced in. We evaluate the expression in for the limit *N* →  + ∞, which gives the node-to-hyperedge messages for *e* ∈ Ω \ *E* as $$\begin{aligned} {q\_{i \rightarrow e}(t\_i)} &= q\_{i}{(t\_i)} + O\left( \frac{1}{N} \right) \nonumber \\ \label{eq: approx i to e} &\approx q\_{i}{(t\_i)} \,,\end{aligned}$$ i.e., the nodes approximately (to leading order in *O*(1/*N*)) share their marginal belief to hyperedges that are not observed in the data. Using, we can also approximate as $$\begin{aligned} {\widehat{q}\_{e \rightarrow i}(t\_i)} &\propto 1- \frac{1}{N}\sum\_{t\_j: j \in \partial e \setminus i} \frac{\sum\_{k < m \in e}c\_{t\_k t\_m}}{\kappa\_e} \prod\_{j \in \partial e \setminus i} {q\_{j \rightarrow e}(t\_j)} \nonumber \\ &\approx 1- \frac{1}{N}\sum\_{t\_j: j \in \partial e \setminus i} \frac{\sum\_{k < m \in e}c\_{t\_k t\_m}}{\kappa\_e} \prod\_{j \in \partial e \setminus i} q\_j(t\_j) \,. \label{eq: e to i marginal}\end{aligned}$$ In the assumed sparsity regime, the term of order *O*(1/*N*) in is close to zero. Since for *x* ≈ 0 the approximation 1 − *x* ≈ *e*− *x* is sufficiently accurate, we write $$\begin{aligned} \label{eq: final e to i in exponential} {\widehat{q}\_{e \rightarrow i}(t\_i)} &\approx \exp\left( -\frac{1}{N}\sum\_{t\_j: j \in \partial e \setminus i} \frac{\sum\_{k < m \in e}c\_{t\_k t\_m}}{\kappa\_e} \prod\_{j \in \partial e \setminus i} q\_j(t\_j) \right) \,.\end{aligned}$$ We can put the hyperedge-to-node updates together using the two results in and in. Specifically, we derive the following expression for the message *q**i* → *e*(*t**i*), where *e* ∈ *E*: $$\begin{aligned} {q\_{i \rightarrow e}(t\_i)} &\propto n\_{t\_i} \prod\_{\substack{f \in \Omega: \\ f \in \partial i \setminus e}} {\widehat{q}\_{f \rightarrow i}(t\_i)} \nonumber \\ &= n\_{t\_i} \prod\_{\substack{f \in E: \\ f \in \partial i \setminus e}} {\widehat{q}\_{f \rightarrow i}(t\_i)} \prod\_{\substack{f \in \Omega \setminus E \\ f \in \partial i}} {\widehat{q}\_{f \rightarrow i}(t\_i)} \nonumber \\ \label{eq: approx message 2} &\approx n\_{t\_i} \left( \prod\_{\substack{f \in E: \\ f \in \partial i \setminus e}} \widehat{q}\_{f \to i}{(t\_i)} \right) \left[ \prod\_{\substack{f \in \Omega \setminus E: \\ f \in \partial i}} \exp\left( -\frac{1}{N}\sum\_{t\_j: j \in \partial f \setminus i} \frac{\sum\_{k < m \in f} c\_{t\_k t\_m}}{\kappa\_f} \prod\_{j \in \partial f \setminus i} q\_j(t\_j) \right) \right] \\ &= n\_{t\_i} \left( \prod\_{\substack{f \in E: \\ f \in \partial i \setminus e}} \widehat{q}\_{f \to i}{(t\_i)} \right) \exp\left( -\frac{1}{N} \sum\_{\substack{f \in \Omega \setminus E: \\ f \in \partial i}} \sum\_{t\_j: j \in \partial f \setminus i} \frac{\sum\_{k < m \in f} c\_{t\_k t\_m}}{\kappa\_f} \prod\_{j \in \partial f \setminus i} q\_j(t\_j) \right) \nonumber \\ &\approx n\_{t\_i} \left( \prod\_{\substack{f \in E: \\ f \in \partial i \setminus e}} \widehat{q}\_{f \to i}{(t\_i)} \right) \exp\left( -\frac{1}{N} \sum\_{\substack{f \in \Omega: \\ f \in \partial i}} \sum\_{t\_j: j \in \partial f \setminus i} \frac{\sum\_{k < m \in f} c\_{t\_k t\_m}}{\kappa\_f} \prod\_{j \in \partial f \setminus i} q\_j(t\_j) \right) \label{eq: approx message 4} \\ &= n\_{t\_i} \left( \prod\_{\substack{f \in E: \\ f \in \partial i \setminus e}} \widehat{q}\_{f \to i}{(t\_i)} \right) \exp(-h\_i(t\_i)) \label{eq: ext field} \,.\end{aligned}$$ In, we used the approximation introduced in. In we passed from summing over Ω \ *E* to Ω. This approximation is sensible as long as the expected degree of the nodes grows at most as *N*, which is satisfied in the assumed sparse regime, as discussed in. Finally, in we introduced node-dependent external field *h**i*(*t**i*) whose definition naturally follows from the argument of the exponential in. External field updates ---------------------- We simplify the external field to remove the node dependency of *h**i*(*a*). The node-dependent external field reads $$\begin{aligned} h\_i(t\_i) &= \frac{1}{N} \sum\_{f \in \partial i} \frac{1}{\kappa\_f} \left(\sum\_{t\_j: j \in \partial f \setminus i} \, \, \sum\_{k < m \in f}c\_{t\_k t\_m} \prod\_{r \in f \setminus i} q\_r(t\_r) \right) \nonumber \\ &= \frac{1}{N} \sum\_{f \in \partial i } \frac{1}{\kappa\_f} \left(\sum\_{t\_j: j \in f \setminus i} \, \, \sum\_{m \in f \setminus i}c\_{t\_i t\_m} \prod\_{r \in f \setminus i} q\_r(t\_r) \right) + {\mathrm{const.}}\, \label{eq: external field 1}\end{aligned}$$ The sum in parentheses in can be simplified as $$\begin{aligned} \sum\_{t\_j: j \in f \setminus i} \left[ \left( \sum\_{m \in f \setminus i}c\_{t\_i t\_m} \right) \prod\_{r \in f \setminus i} q\_r(t\_r) \right] \nonumber &= \sum\_{t\_j: j \in f \setminus i} \, \, \sum\_{m \in f \setminus i} \left( c\_{t\_i t\_m} \prod\_{r \in f \setminus i} q\_r(t\_r) \right) \nonumber \\ &= \sum\_{m \in f \setminus i} \, \, \sum\_{t\_j: j \in f \setminus i} \left( c\_{t\_i t\_m} \prod\_{r \in f \setminus i} q\_r(t\_r) \right) \nonumber \\ &= \sum\_{m \in f \setminus i} \,\, \sum\_{t\_m} c\_{t\_i t\_m} q\_m(t\_m) \,. \label{eq: inner summand h\_i} \end{aligned}$$ Plugging into we get, ignoring constants, $$\begin{aligned} h\_i(t\_i) &= \frac{1}{N} \sum\_{f \in \partial i} \frac{1}{\kappa\_f} \sum\_{m \in f \setminus i} \,\, \sum\_{t\_m} c\_{t\_i t\_m} q\_m(t\_m) \nonumber \\ &= \frac{C'}{N} \sum\_{j \in V \setminus i} \sum\_{t\_j} c\_{t\_i t\_j} q\_j(t\_j) \nonumber \\ &\approx \frac{C'}{N} \sum\_{j \in V} \sum\_{t\_j} c\_{t\_i t\_j} q\_j(t\_j) \,, \label{eq: external field supp intermediate}\end{aligned}$$ with $C' = \sum\_{d=2}^D \binom{N-2}{d-2} \frac{1}{\kappa\_d}$ and where in we included *i* in the node summation. Since does not depend on *i*, we define the node-independent external field $$\label{eq: external field final supp} h(a) = \frac{C'}{N} \sum\_{j \in V} \sum\_{t\_j} c\_{a t\_j} q\_j(t\_j) \quad \forall a \in [K] \,. \nonumber$$ Marginal beliefs updates ------------------------ Notice, that, in passing from to and then in, we have shown that $$\prod\_{\substack{f \in \Omega \setminus E \\ f \in \partial i}} {\widehat{q}\_{f \rightarrow i}(t\_i)} \approx \exp(-h\_i(t\_i)) \approx \exp(-h(t\_i)) \,.$$ We use the same argument to treat the general expression of the marginal beliefs in, yielding $$\begin{aligned} q\_{i}(t\_i) &\propto n\_{t\_i} \prod\_{e \in \partial i} {\widehat{q}\_{e \rightarrow i}(t\_i)} \nonumber \\ &= n\_{t\_i} \prod\_{\substack{e \in E: \\ e \in \partial i}} {\widehat{q}\_{e \rightarrow i}(t\_i)} \prod\_{\substack{e \in \Omega \setminus E: \\ e \in \partial i}} {\widehat{q}\_{e \rightarrow i}(t\_i)} \nonumber \\ &\approx n\_{t\_i} \prod\_{\substack{e \in E: \\ e \in \partial i}} {\widehat{q}\_{e \rightarrow i}(t\_i)} \exp(-h(t\_i)) \,. \nonumber\end{aligned}$$ Summary: approximate Message-Passing updates -------------------------------------------- Putting all derivations together, the final MP equations read $$\begin{aligned} {3} \text{Node-to-observed hyperedge:}& \quad {q\_{i \rightarrow e}(t\_i)} &&\propto n\_{t\_i} \left( \prod\_{\substack{f \in E \\ f \in \partial i \setminus e}} \widehat{q}\_{f \to i}{(t\_i)} \right) \exp(-h(t\_i)) \quad &&\forall e \in E, i \in e \nonumber \\ \label{eq: final\_mp\_costly} \text{Observed hyperedge-to-node:}& \quad {\widehat{q}\_{e \rightarrow i}(t\_i)} &&\propto \sum\_{t\_j: j \in \partial e \setminus i} \pi\_e \prod\_{j \in \partial e \setminus i} q\_{j \to e}{(t\_j)} \quad &&\forall e \in E, i \in e \\ \text{External field:}& \quad\quad\; h(t\_i) &&= \frac{C'}{N} \sum\_{j \in V} \sum\_{t\_j} c\_{t\_i t\_j} q\_j(t\_j) \label{eq: external field final apx} \\ \text{Marginals:}& \quad\quad\; q\_i(t\_i) &&\propto n\_{t\_i} \left( \prod\_{\substack{f \in E \\ f \in \partial i}} \widehat{q}\_{f \to i}{(t\_i)} \right) \exp(-h(t\_i)) \, \nonumber. \end{aligned}$$ Notice that the MP updates cannot be naively implemented as presented. In fact, the update in for *q̂**e* → *i*(*t**i*) have cost *O*(*K*∣*e*∣ − 1), which does not scale with the hyperedge size. In we present a dynamic programming approach to perform this computation exactly with cost *O*(*K*2∣*e*∣), and comment on further algorithmic details to implement the MP updates in practice. Expectation-Maximization inference ================================== *Updates of the community priors *n*.* We take the derivative of the log-likelihood in. By imposing the constraint ∑*a* = 1*K**n**a* = 1, we obtain the update in. *Updates of the affinity matrix *p*.* We show here the updates in terms of *c*. These easily translate to those in terms of the affinity matrix *p* as the expression we derive below in is invariant with respect to the substitution *c* = *N**p*. Let *x**e* = ∑*i* < *j* ∈ *e**c**t**i**t**j*/*N**κ**e*. Then, ignoring additive constants, the log-likelihood reads $$\begin{aligned} {\mathcal{L}}&= \sum\_{e \in E} \log \left( \sum\_{i<j \in e} c\_{t\_i t\_j} \right) + \sum\_{e \in \Omega \setminus E} \log(1-x\_e) \nonumber \\ \label{eq:approx} &\approx \sum\_{e \in E} \log \left( \sum\_{i<j \in e} c\_{t\_i t\_j} \right) - \sum\_{e \in \Omega \setminus E} x\_e \\ &= \sum\_{e \in E} \log \left( \sum\_{i<j \in e} c\_{t\_i t\_j} \right) - \sum\_{e \in \Omega \setminus E} \frac{\sum\_{i<j \in e} c\_{t\_i t\_j}}{N \kappa\_e} \nonumber \end{aligned}$$ where is the linearization of log(1 − *x*) ≈ *x* around *x* = 0, which is valid at leading order *O*(1/*N*). We now take a variational approach to find a lower bound $\tilde{{\mathcal{L}}}$ of the log-likelihood: $$\begin{aligned} {\mathcal{L}}&\approx \sum\_{e \in E} \log \left( \sum\_{i<j \in e} c\_{t\_i t\_j} \right) - \sum\_{e \in \Omega \setminus E} \frac{\sum\_{i<j \in e} c\_{t\_i t\_j}}{ N \kappa\_e} \nonumber \\ \label{eq: jensen} &\geq \sum\_{e \in E} \sum\_{i <j \in e}\rho\_{ij}^{e} \log \left( \frac{c\_{t\_i t\_j}}{\rho\_{ij}^{e}} \right) - \sum\_{e \in \Omega \setminus E} \frac{\sum\_{i<j \in e} c\_{t\_i t\_j}}{ N \kappa\_e} \\ &= \sum\_{e \in E} \sum\_{i <j \in e} \rho\_{ij}^{e} \log c\_{t\_i t\_j} \nonumber - \sum\_{e \in \Omega \setminus E} \frac{\sum\_{i<j \in e} c\_{t\_i t\_j}}{ N \kappa\_e} + {\mathrm{const.}}\\ &= \tilde{{\mathcal{L}}}(c) + {\mathrm{const.}}\,, \nonumber \end{aligned}$$ which is valid for any distribution *ρ**i**j**e* such that ∑*i* < *j* ∈ *e**ρ**i**j**e* = 1. In, we utilized Jensen’s inequality. The lower bound is exact when $$\label{eq: optimal rho} \rho\_{ij}^{e} = \frac{c\_{t\_i t\_j}}{\sum\_{i< j \in e} c\_{t\_i t\_j}} = \frac{c\_{t\_i t\_j}}{N \pi\_e} \quad.$$ We compute the derivative of the variational lower bound and approximate to leading terms in *N*: $$\begin{aligned} \frac{\partial \tilde{{\mathcal{L}}}}{\partial c\_{ab}} &= \frac{1}{c\_{ab}} \sum\_{e \in E} \sum\_{i < j \in e} \rho\_{ij}^{e} \delta\_{t\_i a} \delta\_{t\_j b} - \frac{1}{N} \sum\_{e \in \Omega \setminus E} \frac{1}{\kappa\_e} \sum\_{i < j \in e} \delta\_{t\_i a} \delta\_{t\_j b} \nonumber \\ &\approx \frac{1}{c\_{ab}} \sum\_{e \in E} \sum\_{i < j \in e} \rho\_{ij}^{e} \delta\_{t\_i a} \delta\_{t\_j b} - \frac{1}{N} \sum\_{e \in \Omega} \frac{1}{\kappa\_e} \sum\_{i < j \in e} \delta\_{t\_i a} \delta\_{t\_j b} \label{eq: log-lik derivative indexes} \\ &= \frac{1}{c\_{ab}} \sum\_{e \in E} \sum\_{i < j \in e} \rho\_{ij}^{e} \delta\_{t\_i a} \delta\_{t\_j b} - \frac{C'}{N} \sum\_{i < j \in V} \delta\_{t\_i a} \delta\_{t\_j b} \nonumber \\ &= \frac{1}{c\_{ab}} \sum\_{e \in E} \sum\_{i < j \in e} \rho\_{ij}^{e} \delta\_{t\_i a} \delta\_{t\_j b} - \frac{C'}{2 N} (N\_a N\_b - \delta\_{ab} N\_a) \nonumber \\ &= \frac{1}{c\_{ab}} \sum\_{e \in E} \sum\_{i < j \in e} \rho\_{ij}^{e} \delta\_{t\_i a} \delta\_{t\_j b} - \frac{C'}{2} (N n\_a n\_b - \delta\_{ab} n\_a) \,. \label{eq: approx c derivative lower bound}\end{aligned}$$ where $C' = \sum\_{d=2}^D \binom{N-2}{d-2} \frac{1}{\kappa\_d}$. Notice that the approximations in and hold valid only when considering *c**a**b* in the expressions, as by assumption *c* = *O*(1). Now, by setting equal to zero, and substituting *ρ**i**j**e* from, we obtain the update $$\label{eq: em update c} c\_{ab}^{(t+1)} = c\_{ab}^{(t)} \, \frac{ 2 \, \sum\_{e \in E} {\#^{e}\_{ab}} / {\pi\_e} }{ N \, C' \, (N n\_a n\_b - \delta\_{ab} n\_a) } \,,$$ where #*a**b**e* = ∑*i* < *j* ∈ *e**δ**t**i**a**δ**t**j**b*. Algorithmic and computational details ===================================== Dynamic programming for MP -------------------------- In this section, we explain how the MP updates for the *q̂**e* → *i*(*t**i*) messages can be performed efficiently. In log-space, the messages can be compactly written as $$\begin{aligned} \log {\widehat{q}\_{e \rightarrow i}(t\_i)} &= \log \sum\_{t\_j : j \in \partial e \setminus i} \pi\_e \prod\_{j \in e \setminus i} {q\_{j \rightarrow e}(t\_j)} + {\mathrm{const.}}\nonumber \\ &= {\psi}(e, i, t\_i) + {\mathrm{const.}}\label{eq: psi\_def}\quad.\end{aligned}$$ Below, we focus on finding efficient updates for *ψ* as defined in, which should be exponentiated and properly normalized to find the original messages *q̂**e* → *i*(*t**i*). For this, we introduce an auxiliary quantity. For any subset *g* ⊆ *f* of nodes in *f*, where *i* ∈ *g*, we define $${\eta}(g, i, t\_i) = \log \left[ \sum\_{t\_j : j \in g \setminus i} \left(\sum\_{l < m \in g} p\_{t\_l t\_m} \right) \prod\_{j \in g \setminus i} {q\_{j \rightarrow f}(t\_j)} \right] \,. \nonumber$$ Hence *η*(*f*, *i*, *t**i*) = *ψ*(*f*, *i*, *t**i*) + const.. This quantity is useful in that it allows to obtain an efficient recursion formula for *ψ*, by computing the *η* values starting from subsets *g* containing two nodes. Without loss of generality, consider *f* = {1, …, *m* − 1} and *i* = 1. Consider *g* = {1, …, *n* − 1} for some *n* ≤ *m*. We want to compute *η* for the set {1, …*n*}. Its exponential is given by: $$\begin{aligned} \exp\left( {\eta}(\{1, \ldots, n\}, 1, t\_1) \right) & = \sum\_{t\_n} \sum\_{t\_2} \sum\_{t\_3} \ldots \sum\_{t\_{n-1}} \left( p\_{t\_1 t\_2} + \ldots + p\_{t\_{n-2} t\_{n-1}} + p\_{t\_1 t\_n} +p\_{t\_2 t\_n} + \ldots + p\_{t\_{n-1} t\_n} \right) \nonumber \\ &\times \left({q\_{2 \rightarrow f}(t\_2)} \ldots {q\_{n-1 \rightarrow f}(t\_{n-1})} {q\_{n \rightarrow f}(t\_n)}\right) \nonumber \\ &= \sum\_{t\_n} {q\_{n \rightarrow f}(t\_n)} \Bigg( \sum\_{t\_2} \sum\_{t\_3} \ldots \sum\_{t\_{n-1}} (p\_{t\_1 t\_2} + \ldots + p\_{t\_{n-2} t\_{n-1}}) ({q\_{2 \rightarrow f}(t\_2)} \ldots {q\_{n-1 \rightarrow f}(t\_{n-1})}) \nonumber \\ &+ \sum\_{t\_2} \sum\_{t\_3} \ldots \sum\_{t\_{n-1}} p\_{t\_1 t\_n} ({q\_{2 \rightarrow f}(t\_2)} \ldots {q\_{n-1 \rightarrow f}(t\_{n-1})}) \nonumber \\ &+ \sum\_{t\_2} \sum\_{t\_3} \ldots \sum\_{t\_{n-1}} (p\_{t\_2 t\_n} + \ldots + p\_{t\_{n-1} t\_n}) ({q\_{2 \rightarrow f}(t\_2)} \ldots {q\_{n-1 \rightarrow f}(t\_{n-1})}) \nonumber \Bigg)\nonumber \\ &= \sum\_{t\_n} {q\_{n \rightarrow f}(t\_n)} \Bigg( \exp( {\eta}(\{1, \ldots, n-1\}, 1, t\_1)) \nonumber \\ &+ p\_{t\_1 t\_n} \nonumber\\ &+ \sum\_{t\_2} p\_{t\_2 t\_n}{q\_{2 \rightarrow f}(t\_2)} + \ldots + \sum\_{t\_{n-1}} p\_{t\_{n-1} t\_n}{q\_{n-1 \rightarrow f}(t\_{n-1})} \Bigg) \nonumber \\ \begin{split} &= \exp( {\eta}(\{1, \ldots, n-1\}, 1, t\_1)) \\ &+ \sum\_{t\_n} {q\_{n \rightarrow f}(t\_n)} \Bigg( p\_{t\_1 t\_n} + \sum\_{t\_2} p\_{t\_2 t\_n}{q\_{2 \rightarrow f}(t\_2)} + \ldots + \sum\_{t\_{n-1}} p\_{t\_{n-1} t\_n}{q\_{n-1 \rightarrow f}(t\_{n-1})} \Bigg) \,. \label{eq: recursion formula dynamic} \end{split}\end{aligned}$$ The recursion in allows to compute the value of *η*({1, …, *n*}, 1, *t*1) from *η*({1, …, *n* − 1}, 1, *t*1) in time *O*((*n* − 2) *K*2). However, we can further reduce the cost. For any *a* ∈ [*K*], define *s**n*(*a*) = ∑*t*2*p**t*2*a**q*2 → *f*(*t*2) + … + ∑*t**n* − 1*p**t**n* − 1*a**q**n* − 1 → *f*(*t**n* − 1) . Substituting the definition of *s**n*(*a*) in, we obtain the final two-step dynamic update: $$\begin{aligned} {2} &s\_n(a) = s\_{n-1}(a) + \sum\_{t\_{n-1}} p\_{t\_{n-1} a} {q\_{n-1 \rightarrow f}(t\_{n-1})} \label{eq: s updates}\\ &\exp\left( {\eta}(\{1, \ldots, n\}, 1, t\_1) \right) = \exp( {\eta}(\{1, \ldots, n-1\}, 1, t\_1)) \nonumber \\ & \qquad\qquad\qquad + \sum\_{t\_n} {q\_{n \rightarrow f}(t\_n)} \left( p\_{t\_1 t\_n} + s\_n(t\_n) \right) \,. \label{eq: simplified dynamic formula}\end{aligned}$$ This yields a cost of *O*(*K*) per recursion, and a total cost of *O*(*K* ∣*f*∣) to compute the final *ψ*(*f*, 1, *t*1). In practice, for any *e*, *i* pair, we compute *ψ*(*e*, *i*, *t**i*) for all values *t**i* ∈ [*K*], which yields a total cost of *O*(*K*2 ∣*f*∣). Implementation details ---------------------- In our implementation of the MP and EM routines, we take some additional steps to ensure convergence to non-trivial local optima of the free energy landscape. The initialization of the messages is performed taking into account the circular relationships in. We perform them as follows: (i) randomly initialize the messages *q**i* → *e*(*t**i*). For every *i*, *e* pair, the messages are drawn from a *K*-dimensional Dirichlet distribution. (ii) Similarly, randomly initialize the marginal beliefs *q**i*(*t**i*). (iii) We infer all the other quantities from the initialized *q**i* → *e*(*t**i*) and *q**i*(*t**i*). In fact, up to constants $${\widehat{q}\_{e \rightarrow i}(t\_i)} = \frac{ q\_i(t\_i) }{ {q\_{i \rightarrow e}(t\_i)} } \,.$$ All values are then normalized to have unitary sum. (iv) Finally, the external field is entirely determined by the marginals, as per. We check for convergence of the MP and EM inference routines by evaluating the absolute difference between parameters in consecutive steps. We present complete pseudocodes of the two routines in and. [htpb].46 [H] [pseudocode: MP] convergence threshold *ε*mp maximum iterations itermp prior *n*, rescaled affinity *c* Randomly initialize all *q**i* → *e*(*t**i*), *q̂**e* → *i*(*t**i*), *q**i*(*t**i*), *h*(*t**i*) *// Perform updates* update messages *q**i* → *e*(*t**i*) update messages *q̂**e* → *i*(*t**i*) *q**i**o**l**d*(*t**i*) ← *q**i*(*t**i*) update marginals *q**i*(*t**i*) update external field *h*(*t**i*) *// Check for convergence* Δ = ∑*i* = 1*N*∑*t**i* = 1*K*∣*q**i*old(*t**i*) − *q**i*(*t**i*)∣ break [htpb].46 [H] [pseudocode: EM] convergence threshold *ε*em maximum iterations iterem Randomly initialize *c*, *n* *// Perform updates* Perform Message-Passing inference *n*old ← *n* update *n* *c*old ← *c* update *c* *// Check for convergence* Δ = ∑*a* = 1*K*∣*n**a* − *n**a*old∣ + ∑*a*, *b* = 1*K*∣*c**a**b* − *c**a**b*old∣ break While is presented as a completely parallel implementation of the MP equations, in practice we proceed in batches. In fact, we find that applying completely parallel updates, i.e. applying for all *i*, *e* pairs, successively for all *i*, *e* pairs, and then for all nodes *i* ∈ *V*, results in fast convergence to degenerate fixed-points where all nodes are assigned to the same community. For this reason, we apply dropout. Given a fraction *α* ∈ (0, 1], we select a random fraction *α* of all possible *i*, *e* pairs, and apply the update in only for the selected pairs. We perform a new random draw, and update according to, and similarly for. Finally, we update the external field in. Empirically, we find that a value of *α* = 0.25 works for synthetic data, where inference is simpler. Values below work as well. For real data we find that substantially lowering *α* yields more stable inference. On real data, where we alternate MP and EM, and learning is less stable, we utilize *α* = 0.01. In practice, we also set a patience parameter, and only stop MP once a given number of iterations in a row falls below the threshold *ε**m**p* in. For real datasets, we set the patience to 50 consecutive steps, and the maximum number of iterations itermp = 2000. Sampling from the generative model ================================== The algorithm ------------- In the following, we describe an exact sampling procedure derived from our probabilistic model. The key observation to obtain an efficient algorithm is that the hyperedge probabilities do not depend on the nodes they contain, but only on their community assignments, as implied by. For a hyperedge *e* and community *a* ∈ [*K*], we define the quantity #*a**e* = ∑*i* ∈ *e**δ**t**i**a* ,  which is the count of nodes in *e* that belong to community *a*. Crucially, the hyperedge probability depends only on these counts: $$\label{eq: pi in function of counts} \pi\_e = \sum\_{a < b \in [K]} \#^e\_a \, \#^e\_b \, p\_{ab} + \sum\_{a \in [K]} \frac{\#^e\_a (\#^e\_a - 1)}{2} \, p\_{aa} \,.$$ Hence, all hyperedges with different nodes, but same counts #1*e*, …, #*K**e*, have equal probability. With this in mind, we sample hypergraphs following these steps. 1. Iterate over the combinations. For hyperedge size *d* = 2, we can sample all the *N*(*N* − 1)/2 edges directly. Otherwise, iterate the following procedure for every other hyperedge size *d* = 3, …, *D* and vectors # = (#1, …, #*K*) of community counts (independent of *e* to highlight that same counts yield identical ) satisfying ∑*a* = 1*K*#*a* = *d*. 2. Compute the probability. For a given count vector #, the resulting hyperedge probability *π*# is given in. Also notice that there are $N\_{\#} = \binom{N\_1}{\#\_1} \cdot \ldots \cdot \binom{N\_K}{\#\_K}$ possible hyperedges satisfying the count #, since we can choose #1 nodes from the *N*1 nodes in community 1, #2 out of *N*2, and so on. 3. Sample the number of hyperedges. The key idea is not to sample the individual hyperedges, but the *number* of observed hyperedges. Since the individual hyperedges are independent Bernoulli variables with same probability, their sum *X* follows a binomial distribution: $$\label{eq: binomial count of samples hyperdges} X \sim \text{Binom}\left( N\_{\#}, \frac{\pi\_{\#}}{\kappa\_d} \right) \,$$ with probability *π*# fixed, determined by #, and number of realizations *N*#. Sampling directly from is numerically challenging for large *N*# and *κ**d*, hence we adopt a Gaussian approximation for the binomial. Similarly, we find a Poisson approximation to be more stable for large *N*# and small *π*#/*κ**d*. We also adopt a Ramanujan approximation for large log-factorials appearing during the calculations. This substantially speeds up sampling, while being extremely precise. 4. Sample the final hyperedges. Given the count *X* of hyperedges, sampled from, we can sample the hyperedges. This operation is performed by independently sampling *X* times #1 nodes from community 1, #2 from community 2, and so on. As an important point, notice that this procedure might yield repeated hyperedges, which are not allowed. In sparse regimes, this event has low probability. As a sensible approximation, one can simply proceed by deleting repetitions, or resample single hyperedges until the desired number *X* of distinct realizations is reached. In our experiments, we choose the former option. Our code implementation has both choices available for practitioners. Computational complexity ------------------------ For a fixed hyperedge size *d*, there are two parts to the computational cost: iterating through the counts #, and sampling the hyperedges. The number of counts is fixed and given by *K**d*/*d*!, i.e., the number of possible ways to assign *d* nodes to *K* groups, without order. The cost of sampling the hyperedges can be precisely quantified since every *d*-dimensional hyperedge has a cost *d*, and there is an average number *k**d* of such hyperedges. Calling Ω*d* the space of all *d*-dimensional hyperedges, we find $$\begin{aligned} \mathbb{E}[k\_d] &= \sum\_{e \in \Omega^d} {\mathbb{P}}(e \, | \, p, t) \nonumber\\ &= \sum\_{e \in \Omega^d} \sum\_{i < j \in e} \frac{p\_{t\_i t\_j}}{\kappa\_d} \nonumber\\ &= \frac{1}{\kappa\_d}\sum\_{e \in \Omega^d} \sum\_{i < j \in e} p\_{t\_i t\_j} \nonumber\\ &= \frac{\binom{N-2}{d-2}}{\kappa\_d} \sum\_{i < j \in V} p\_{t\_i t\_j} \nonumber\\ &\approx \frac{\binom{N-2}{d-2} N^2}{\kappa\_d} \sum\_{b \le d =1}^K p\_{bd} n\_b n\_d \,.\end{aligned}$$ Hence, the average computational cost is given by $$\label{eq: expected\_num\_hye\_d} \sum\_{d=2}^D O\left( \frac{K^d}{d!} + d \, \mathbb{E}[k\_d] \right) \,.$$ Given the large size of Ω*d*, the cost in tightly concentrates around the expected value. In sparse regimes, the term *K**d*/*d*! dominates, as the number of hyperedges *k**d* is low, while the two terms both contribute to the cost when E[*k**d*] grows. Experiments ----------- We employ the sampling algorithm to generate the hypergraphs used to study the phase transition of. Here, we set the affinity matrix to have all equal in-degree *c**a**a* = *c*in and out-degree *c**a**b* = *c*out, so that becomes *c*in + (*K* − 1)*c*out = *K**c* for some *K* and *c*. In our experiments, we sample hypergraphs with *N* = 104 nodes by fixing *c* = 10 and *K* = 4, we span across 65 values of *c*out in [0, 500], and compute the corresponding *c*in = *c*in(*c*out; *K*, *c*). For each experimental configuration *c*in, *c*out, we draw 5 hypergraphs from different random seeds. This gives a total of 325 hypergraphs. We use the expected number of *d*-dimensional hyperedges E[*k**d*] in and the average degree ⟨*d*⟩ in to perform a sanity check between our sampling algorithm and theoretical derivations. For constant in and out-degree, these two metrics evaluate to $$\begin{aligned} \mathbb{E}[k\_d] &\approx \frac{N c}{d(d-1)} \quad,\\ \langle d \rangle &\approx \frac{C c}{2} \,.\end{aligned}$$ The results in show excellent agreement between theory and experiments. We also highlight that the sampling method is extremely fast and has an average sampling time of *t* = 32.7 ± 2.7 on the experimental setup considered here. [fig: sampling exps] Phase transition: complementary derivations and additional results ================================================================== Proof of -------- First, we want to prove that all communities have the same expected degree. In order to do that, we start by computing the expected degree ⟨*d**i*⟩ of a given node *i* ∈ *V*. Following similar derivations to those for ⟨*d*⟩, we find $$\begin{aligned} \langle d\_i \rangle &= \sum\_{e \in E: i \in e} {\mathbb{P}}(e \, | \, p, t) \\ &= \sum\_{e \in E: i \in e} \frac{\pi\_e}{\kappa\_e} \\ &= C' \sum\_{a=1}^K c\_{t\_i a} n\_a + \frac{N C''}{2} \left( \sum\_{b, d=1}^K c\_{bd} n\_b n\_ d + \sum\_{b=1}^K c\_{bb} n\_b^2 \right) \,,\end{aligned}$$ where $C' = \sum\_{d=2}^D {\binom{N-2}{d-2}} / {\kappa\_d}$, as previously defined, and $C'' = \sum\_{d=3}^D {\binom{N-3}{d-3}} / {\kappa\_d}$. Therefore, the average degree ⟨*b*⟩ of a community *b* ∈ [*K*], evaluates to $$\begin{aligned} \langle b \rangle &= \frac{1}{N\_b} \sum\_{i \in V: t\_i = b} \langle d\_i \rangle \\ &= \frac{1}{N\_b} \sum\_{i \in V: t\_i = b} \left[ C' \sum\_{a=1}^K c\_{t\_i a} n\_a + \frac{N C''}{2} \left( \sum\_{d, m=1}^K c\_{dm} n\_d n\_ m + \sum\_{d=1}^K c\_{dd} n\_d^2 \right) \right] \\ &= \frac{1}{N\_b} \sum\_{i \in V: t\_i = b} \left[ C' \sum\_{a=1}^K c\_{b a} n\_a + \frac{N C''}{2} \left( \sum\_{d,m=1}^K c\_{dm} n\_d n\_ m + \sum\_{d=1}^K c\_{dd} n\_d^2 \right) \right] \\ &= C' \sum\_{a=1}^K c\_{b a} n\_a + \frac{N C''}{2} \left( \sum\_{d,m=1}^K c\_{dm} n\_d n\_ m + \sum\_{d=1}^K c\_{dd} n\_d^2 \right) \\ &= C' c + \frac{N C''}{2} \left( \sum\_{d,m=1}^K c\_{dm} n\_d n\_ m + \sum\_{d=1}^K c\_{dd} n\_d^2 \right)\quad,\end{aligned}$$ which is independent of the specific choice of group *b*, from which we conclude that all the groups yield equal expected degrees. Second, we wish to demonstrate that MP’s fixed points are as in. Notice that in the derivations here below, when convenient, we interchange equivalent summations over function nodes’ neighbors ∂*e* and hyperedge *e*. By treating all quantities that are independent of *t**i* in *q**i* → *e*(*t**i*), *q̂**e* → *i*(*t**i*) as a constant, we evaluate as $$\begin{aligned} {\widehat{q}\_{e \rightarrow i}(t\_i)} &\propto \sum\_{t\_j : j \in e \setminus i} \frac{\pi\_e}{\kappa\_e} \prod\_{j \in \partial e \setminus i} q\_{j \rightarrow e}(t\_j) \nonumber \\ &\propto \sum\_{t\_j : j \in e \setminus i} \sum\_{r < s \in e} p\_{t\_r t\_s} \prod\_{j \in \partial e \setminus i} q\_{j \rightarrow e}(t\_j) \nonumber \\ &= \sum\_{t\_j : j \in e \setminus i} \left( \sum\_{r \in e \setminus i} p\_{t\_r t\_i} \prod\_{j \in \partial e \setminus i} q\_{j \rightarrow e}(t\_j) + \sum\_{r<s \in e \setminus i} p\_{t\_r t\_s} \prod\_{j \in \partial e \setminus i} q\_{j \rightarrow e}(t\_j) \right) \nonumber \\ &= \sum\_{r \in e \setminus i} \sum\_{t\_r} p\_{t\_r t\_i} q\_{r \rightarrow e}(t\_r) + \sum\_{r<s \in e \setminus i} \sum\_{t\_r, t\_s} p\_{t\_r t\_s} q\_{r \rightarrow e}(t\_r) q\_{s \rightarrow e}(t\_s) \nonumber \\ &= \sum\_{r \in e \setminus i} \sum\_{t\_r} p\_{t\_r t\_i} n\_{t\_r} + \sum\_{r<s \in e \setminus i} \sum\_{t\_r, t\_s} p\_{t\_r t\_s} n\_{t\_r} n\_{t\_s} \nonumber \\ &= \frac{1}{N} \left( \sum\_{r \in e \setminus i} c + \sum\_{r<s \in e \setminus i} c \right) \nonumber \\ &= \frac{c}{N} \left( (|e|-1) + c\frac{|e|(|e|-1)}{2} \right) \,. \label{eq: prop\_to\_constant\_mess\_e\_to\_i}\end{aligned}$$ Since messages *q̂**e* → *i*(*t**i*) are normalized to have unitary sum, implies that *q̂**e* → *i*(*t**i*) = 1/*K*. Substituting this result into, one finds also that *q**i*(*t**i*) = *n**t**i*. The variable-to-function node messages are updated with, which includes for the external field *h*(*t**i*). The external field evaluated at fixed points is also constant, in fact $$\begin{aligned} h(t\_i) &= \frac{C'}{N} \sum\_{j \in V} \sum\_{t\_j} c\_{t\_i t\_j} q\_j(t\_j) \nonumber \\ &= \frac{C'}{N} \sum\_{j \in V} \sum\_{t\_j} c\_{t\_i t\_j} n\_{t\_j} \nonumber \\ &= \frac{C'}{N} \sum\_{j \in V} c \nonumber \\ &= C'c \label{eq: neglibible field} \,.\end{aligned}$$ The result of implies that the messages in read $$\begin{aligned} q\_{i \rightarrow e}(t\_i) &\propto n\_{t\_i} \left( \prod\_{\substack{f \in E \\ f \in \partial i \setminus e}} \sum\_{t\_j: j \in \partial f \setminus i} \pi\_f \prod\_{j \in \partial f \setminus i} {q\_{j \rightarrow f}(t\_j)} \right) \\ &= n\_{t\_i} \left( \prod\_{\substack{f \in E \\ f \in \partial i \setminus e}} \sum\_{t\_j: j \in \partial f \setminus i} \pi\_f \prod\_{j \in \partial f \setminus i} n\_{t\_j} \right) \\ &\propto n\_{t\_i} \left( \prod\_{\substack{f \in E \\ f \in \partial i \setminus e}} \sum\_{r < s \in f} \sum\_{t\_j: j \in \partial f \setminus i} p\_{t\_r t\_s} \prod\_{j \in \partial f \setminus i} n\_{t\_j} \right) \\ &\propto n\_{t\_i} \left[ \prod\_{\substack{f \in E \\ f \in \partial i \setminus e}} \left( \sum\_{r < s \in f} \sum\_{t\_r t\_s} p\_{t\_r t\_s} n\_{t\_r} n\_{t\_s} + \sum\_{r \in f \setminus i} \sum\_{t\_r} p\_{t\_r t\_i} n\_{t\_r} \right) \right] \\ &\propto n\_{t\_i} \left[ \prod\_{\substack{f \in E \\ f \in \partial i \setminus e}} \left( \frac{|f|(|f|-1)}{2} c + (|f|-1)c \right) \right] \\ &\propto n\_{t\_i} \,, \label{eq: external field is constant}\end{aligned}$$ which is exactly. Transition matrix formula ------------------------- In this section, we derive the expression for the transition matrix $\tilde T\_r^{ab}$ in. To simplify the notation, we indicate the (variable node, function node) pairs at level *r* as (*i**r*, *f**r*) = (*i*, *e*), and similarly, at level *r* + 1 we use (*i**r* + 1, *f**r* + 1) = (*j*, *f*). Hence, the transition matrix becomes $$\begin{aligned} \tilde T\_r^{ab} = \frac{\partial {q\_{i \rightarrow e}(a)}}{\partial {q\_{j \rightarrow f}(b)}} \,.\end{aligned}$$ In order to find a closed-form expression of $\tilde T\_r^{ab}$, we claim that the two following Lemmas hold. [th: sum and product lemma] Under the constant group degree assumption in : 1. [item: complex expression1] for any hyperedge *e* and nodes *i* ∈ *e*: $$\label{eq: lemma formula 1} \sum\_{t\_j: j \in e \setminus i} \pi\_e \prod\_{k \in e \setminus i} q\_{k \rightarrow e}(t\_k) = \frac{c|e|(|e|-1)}{2N} \, ;$$ 2. [item: complex expression2] for any hyperedge *e* and nodes *i*, *j* ∈ *e*: $$\label{eq: lemma formula 2} \sum\_{t\_k: k \in e \setminus i, j} \pi\_e \prod\_{m \in e \setminus i, f} q\_{m \rightarrow e}(t\_m) = \frac{1}{N} \left[ c\_{t\_i t\_j} + c(|e|-2) \left( 2+\frac{|e|-3}{2} \right) \right] \,.$$ [Employing ] [th: derivatives on h and Z] Under the constant group degree assumption in : 1. [item: derivative h negligible] the derivative ∂exp( − *h*(*a*))/∂*q**i* → *e*(*b*) is negligible to leading order in *N*; 2. [item: h constant] the external field is constant *h*(*t**i*) = const.; 3. [item: Z expressions] call *Z**i* → *e* the normalizing constant of *q**i* → *e*, then $$\begin{aligned} Z^{i \rightarrow e} &= \prod\_{g \in \partial i \setminus e} c \frac{|g|(|g|-1)}{2N} \label{eq: normalizing constant} \\ \frac{\partial Z^{i \rightarrow e}}{\partial q\_{j \rightarrow f}(b)} &= \frac{c}{N} \left( \prod\_{g \in \partial i \setminus e, f} c \frac{|g|(|g|-1)}{2N} \right) \left[ 1 + (|f|-2)\left( 2 + \frac{|f|-3}{2} \right) \right] \,. \label{eq: normalizing constant derivative}\end{aligned}$$ The claims allow us to derive the transition matrix. Particularly, we make explicit all derivatives and variable-to-function nodes messages as in. By also ignoring all terms relative to *h*(*t**i*) thanks to, we get $$\begin{aligned} \frac{\partial q\_{i \rightarrow e}(a)}{\partial q\_{j \rightarrow f}(b)} &\approx -\frac{1}{(Z^{i \rightarrow e})^2} \frac{\partial Z^{i \rightarrow e}}{\partial q\_{j \rightarrow f}(b)}n\_{t\_i} \left( \prod\_{g \in \partial i \setminus e} \sum\_{t\_m : m \in \partial g \setminus i} \pi\_g \prod\_{m \in \partial g \setminus i} q\_{m\rightarrow g}(t\_m) \right) \\ &+ \frac{1}{Z^{i \rightarrow e}} n\_{t\_i} \left( \prod\_{g \in \partial i \setminus e, f} \sum\_{t\_m : m \in \partial g \setminus i} \pi\_g \prod\_{m \in \partial g \setminus i} q\_{m\rightarrow g}(t\_m) \right) \left( \sum\_{t\_m : m \in \partial f \setminus i, j} \pi\_f \prod\_{m \in \partial f \setminus i, j} q\_{m\rightarrow f}(t\_m) \right) \,.\end{aligned}$$ The terms involving *Z**i* → *e* are in ( and ), while the expressions in parentheses are in ( and ). By performing all the substitutions we get $$\begin{aligned} &\frac{\partial q\_{i \rightarrow e}(a)}{\partial q\_{j \rightarrow f}(b)} =\\ &- n\_{t\_i}{\left( \prod\_{g \in \partial i \setminus e} c \frac{|g|(|g|-1)}{2N} \right)^{-2}} \left( \prod\_{g \in \partial i \setminus e, f} c \frac{|g|(|g|-1)}{2N} \right) \left( \prod\_{g \in \partial i \setminus e} c \frac{|g|(|g|-1)}{2N} \right) \frac{c}{N} \left[ 1 + (|f|-2) \left( 2+ \frac{|f|-3}{2}\right)\right] \nonumber \\ &+ n\_{t\_i}{ \left( \prod\_{g \in \partial i \setminus e} c \frac{|g|(|g|-1)}{2N} \right)^{-1} } \left( \prod\_{g \in \partial i \setminus e, f} c \frac{|g|(|g|-1)}{2N} \right) \frac{1}{N} \left[ c\_{t\_i t\_j} +c (|f|-2) \left( 2 + \frac{|f|-3}{2}\right) \right] \nonumber \\ &= {n\_{t\_i}}{ \left( c \frac{|f|(|f|-1)}{2N} \right)^{-1}} \left\{ - \frac{c}{N} \left[ 1 + (|f|-2) \left( 2+ \frac{|f|-3}{2}\right)\right] + \frac{1}{N} \left[ c\_{t\_i t\_j} +c (|f|-2) \left( 2 + \frac{|f|-3}{2}\right) \right] \right\} \nonumber \\ &= \frac{2}{|f|(|f|-1)} n\_{t\_i} \left( \frac{c\_{t\_i t\_j}}{c} -1 \right) \nonumber \\ &= \frac{2}{|f|(|f|-1)} n\_a \left( \frac{c\_{ab}}{c} -1 \right) \nonumber \, \end{aligned}$$ which is exactly the expression in. What is left to complete all derivations is to prove and, which is done next. ### Proof of * Derivation of : $$\begin{aligned} \sum\_{t\_j: j \in e \setminus i} \pi\_e \prod\_{k \in e \setminus i} q\_{k \rightarrow e}(t\_k) &= \sum\_{t\_j: j \in e \setminus i} \sum\_{r<s \in e} p\_{t\_r t\_s} \prod\_{k \in e \setminus i} q\_{k \rightarrow e}(t\_k) \\ &= \sum\_{r<s \in e\setminus i} \sum\_{t\_r t\_s} p\_{t\_r t\_s} q\_{r \rightarrow e}(t\_r) q\_{s \rightarrow e}(t\_s) + \sum\_{r \in e\setminus i} \sum\_{t\_r} p\_{t\_r t\_i} q\_{r \rightarrow e}(t\_r) \\ &= \frac{1}{N}\sum\_{r<s \in e\setminus i} \sum\_{t\_r, t\_s} c\_{t\_r t\_s} n\_{t\_r} n\_{t\_s} + \frac{1}{N}\sum\_{r \in e\setminus i} \sum\_{t\_r} c\_{t\_r t\_i} n\_{t\_r} \\ &= \frac{c}{N}\left[ \frac{(|e|-1)(|e|-2)}{2} + (|e|-1 ) \right] \\ &= \frac{c(|e|-1)|e|}{2N} \,. \end{aligned}$$ * Derivation of : $$\begin{aligned} \sum\_{t\_k: k \in e \setminus i, j} \pi\_e \prod\_{m \in e \setminus i, j} q\_{m \rightarrow e}(t\_m) &= \sum\_{t\_k: k \in e \setminus i, j} \sum\_{r<s \in e} p\_{t\_r t\_s} \prod\_{m \in e \setminus i, j} q\_{m \rightarrow e}(t\_m) \nonumber \\ &= p\_{t\_i t\_j}+ \sum\_{ r \in e\setminus i, j}\sum\_{t\_r} p\_{t\_r t\_i} q\_{r \rightarrow e}(t\_r) + \sum\_{ r \in e\setminus i, j}\sum\_{t\_r} p\_{t\_r t\_j} q\_{r \rightarrow e}(t\_r) \nonumber \\ & + \sum\_{r<s \in e \setminus i, j} \sum\_{t\_r, t\_s} p\_{t\_r t\_s} q\_{r \rightarrow e}(t\_r)q\_{s \rightarrow e}(t\_s) \nonumber \\ &= \frac{1}{N} \Bigg( c\_{t\_i t\_j} + \sum\_{r \in e\setminus i, j}\sum\_{t\_r} c\_{t\_r t\_i} n\_{t\_r} + \sum\_{ r \in e\setminus i, j}\sum\_{t\_r} c\_{t\_r t\_j} n\_{t\_r} + \sum\_{r<s \in e \setminus i, j}\sum\_{t\_r, t\_s} c\_{t\_r t\_s} n\_{t\_r} n\_{t\_s} \Bigg) \nonumber \\ &= \frac{1}{N} \left( c\_{t\_i t\_j} + \sum\_{ r \in e\setminus i, j} c + \sum\_{ r \in e\setminus i, j}c + \sum\_{r<s \in e \setminus i, j}c \right) \nonumber \\ &= \frac{1}{N} \left[ c\_{t\_i t\_j} + c(|e|-2) \left( 2+\frac{|e|-3}{2} \right) \right] \, \nonumber. \end{aligned}$$ ### Proof of * Using, we write $$\begin{aligned} \label{eq: non-zero derivatives} \frac{\partial \exp({-h (a)})}{\partial q\_{i \rightarrow e}(b)} = \exp\left(- \frac{C'}{N} \sum\_{v \in V} \sum\_{t\_k} c\_{a t\_k} q\_k(t\_k)\right) \left( - \frac{C'}{N} \sum\_{k \in V} \sum\_{t\_v} c\_{a t\_k} \frac{\partial q\_k(t\_k)}{\partial q\_{i \rightarrow e}(b)} \right) \,.\end{aligned}$$ Only few of the derivatives ∂*q**k*(*t**k*)/∂*q**i* → *e*(*b*) entering are non-zero. Hence, the full derivative has negligible order *O*(1/*N*). * The fact that the external field is constant was already shown in during the proof of. * As just proved, we can ignore the external field in the expression of *Z**i* → *e*, and find $$\begin{aligned} Z^{i \rightarrow e} &\approx \sum\_{t\_i} q\_{i \rightarrow e}(t\_i) \nonumber \\ \label{eq: to simplify in lemma} &= \sum\_{t\_i} n\_{t\_i} \left(\prod\_{g \in \partial i \setminus e} \sum\_{t\_j: j \in \partial g \setminus i} \pi\_g \prod\_{j \in \partial g \setminus i} q\_{j \rightarrow g}(t\_j) \right) \,.\end{aligned}$$ Utilizing result in, simplifies to $$\begin{aligned} Z^{i \rightarrow e} &= \sum\_{t\_i} n\_{t\_i} \left(\prod\_{g \in \partial i \setminus e} c\frac{|g|(|g|-1)}{2N} \right) \\ &= \left(\prod\_{g \in \partial i \setminus e} c\frac{|g|(|g|-1)}{2N} \right) \,.\end{aligned}$$ which results in, as desired. Similarly, to compute the derivative ∂*Z**i* → *e*/∂*q**j* → *f*(*b*) we can ignore all appearing ∂exp( − *h*(*a*))/∂*q**j* → *f*(*b*) and *h*(*t**i*) thanks the Lemma’s first two points (just proved). Hence $$\begin{aligned} \frac{\partial Z^{i \rightarrow e}}{\partial q\_{j \rightarrow f}(b)} &= \frac{\partial}{ \partial q\_{j \rightarrow f}(b)} \left[ \displaystyle \sum\_{t\_i} n\_{t\_i} \left(\prod\_{g \in \partial i \setminus e} \sum\_{t\_j: j \in \partial g \setminus i} \pi\_g \prod\_{j \in \partial g \setminus i} q\_{j \rightarrow g}(t\_j) \right) \right]\\ &=\sum\_{t\_i} n\_{t\_i} \left( \prod\_{g \in \partial i \setminus e, f} \sum\_{t\_m: m \in \partial g \setminus i} \pi\_g \prod\_{m \in \partial g \setminus i} q\_{m \rightarrow g}(t\_m) \right) \left( \sum\_{t\_m: m \in \partial f \setminus i, j} \pi\_f \prod\_{m \in \partial f \setminus i, j} q\_{m \rightarrow f}(t\_m) \right) \,,\end{aligned}$$ and using and from, conclude with $$\begin{aligned} &= \frac{1}{N}\left( \prod\_{g \in \partial i \setminus e, f} c \frac{|g|(|g|-1)}{2N} \right) \left[ \sum\_{t\_i} n\_{t\_i} c\_{t\_i t\_j} + c(|f|-2)\left(2 + \frac{|f|-3}{2} \right) \right] \\ &= \frac{c}{N}\left( \prod\_{g \in \partial i \setminus e, f} c \frac{|g|(|g|-1)}{2N} \right) \left[ 1+ (|f|-2)\left(2 + \frac{|f|-3}{2} \right) \right] \,.\end{aligned}$$ Elapsed time of MP ------------------ In, we plot the running time of MP when performing the synthetic experiments of. Elapsed times become prohibitively large when *c*out/*c*in increases. For this reason, we threshold the maximum number of MP iterations and obtain the plateaus of. [fig:elapsedtime] Calculations of the free energy =============================== After MP, it is possible to approximate the log-evidence of the data, i.e., the log-normalizing constant log*Z*, as per. The equivalent quantity *F* =  − log*Z*, called the free energy of the system, can be obtained via the following cavity-based general formula: *F* ≈  − ∑*i* ∈ *V**f**i* + ∑*e* ∈ Ω(∣*e*∣ − 1)*f**e* ,  where $$\begin{aligned} f\_i &= \log\left( \sum\_{t\_i} n\_{t\_i} \prod\_{e \in \partial i} \sum\_{t\_j: j \in \partial e \setminus i} \left( \frac{\pi\_e}{\kappa\_e} \right)^{A\_e} \left( 1- \frac{\pi\_e}{\kappa\_e} \right)^{1-A\_e} \prod\_{j \in \partial e \setminus i} {q\_{j \rightarrow e}(t\_j)} \right) \\ f\_e &= \log\left( \sum\_{t\_j: j \in \partial e} \left( \frac{\pi\_e}{\kappa\_e} \right)^{A\_e} \left( 1- \frac{\pi\_e}{\kappa\_e} \right)^{1-A\_e} \prod\_{j \in \partial e} {q\_{j \rightarrow e}(t\_j)} \right) \,.\end{aligned}$$ Assuming that MP has converged, all messages *q**j* → *e*(*t**j*) are available. Notice, however, that naive computations of the *f**i* and *f**e* addends are unfeasible, due to the exploding sums over *t**j* : *j* ∈ ∂*e*. In the following, we show how such computations can be performed efficiently. 1. Calculations of *f**i*. As one can observe from and, the *f**i* terms are the log-normalizing constants of *q**i*, therefore they can be computed similarly. In particular, ignoring constants, by, the following simplification holds: $$f\_i = \log \left( \sum\_{t\_i} n\_{t\_i} \prod\_{\substack{e \in E \\ e \in \partial i}} \sum\_{t\_j: j \in \partial e \setminus i} \pi\_e \prod\_{j \in \partial e \setminus i} {q\_{j \rightarrow e}(t\_j)} \exp( - h(t\_i)) \right) \,.$$ The single terms indexed by *e* ∈ *E*, i.e., the values ∑*t**j* : *j* ∈ ∂*e* \ *i**π**e*∏*j* ∈ ∂*e* \ *i**q**j* → *e*(*t**j*), are equivalent to the unnormalized messages *q̂**e* → *j*(*t**j*). For this reason, they can be computed with the same dynamic program presented in. 2. Calculations of *f**e*. While the *f**i* terms in are computed singularly, we take a different approach and calculate the whole sum ∑*e* ∈ Ω(∣*e*∣ − 1)*f**e* without computing the single *f**e*, as this would be impossible due to their exploding number. First, we separate the terms over Ω in as follows. $$\begin{aligned} \sum\_{e \in \Omega} (|e|-1) f\_e &= \sum\_{e \in \Omega} (|e|-1) \log\left( \sum\_{t\_j: j \in \partial e} \left( \frac{\pi\_e}{\kappa\_e} \right)^{A\_e} \left( 1- \frac{\pi\_e}{\kappa\_e} \right)^{1-A\_e} \prod\_{j \in \partial e} {q\_{j \rightarrow e}(t\_j)} \right) \\ &= \log \left( \prod\_{e \in \Omega} \left[ \sum\_{t\_j: j \in \partial e} \left( \frac{\pi\_e}{\kappa\_e} \right)^{A\_e} \left( 1- \frac{\pi\_e}{\kappa\_e} \right)^{1-A\_e} \prod\_{j \in \partial e} {q\_{j \rightarrow e}(t\_j)} \right]^{|e|-1} \right) \\ &= \log \left( \prod\_{e \in E} \left[ \sum\_{t\_j: j \in \partial e} \pi\_e \prod\_{j \in \partial e} {q\_{j \rightarrow e}(t\_j)} \right]^{|e|-1} \right) + \log \left( \prod\_{e \in \Omega \setminus E} \left[ \sum\_{t\_j: j \in \partial e} \left( 1- \frac{\pi\_e}{\kappa\_e} \right) \prod\_{j \in \partial e} {q\_{j \rightarrow e}(t\_j)} \right]^{|e|-1} \right) + {\mathrm{const.}}\,.\end{aligned}$$ This allows us to compute the last two addends separately. Focusing on the second addend, and proceeding similarly as for the external field calculations that brought to, we get $$\begin{aligned} \log \prod\_{e \in \Omega \setminus E} \left[ \sum\_{t\_j: j \in \partial e} \left( 1- \frac{\pi\_e}{\kappa\_e} \right) \prod\_{j \in \partial e} {q\_{j \rightarrow e}(t\_j)} \right]^{|e|-1} &\approx \log \prod\_{e \in \Omega \setminus E} \left[ \exp \left( -\frac{1}{N} \sum\_{t\_j: j \in \partial e} \left( \frac{\sum\_{k < m \in e} c\_{t\_k t\_m}}{\kappa\_e} \right) \prod\_{j \in \partial e} {q\_{j \rightarrow e}(t
arxiv_0000341
*t*th test bag. In the inductive setting, the predicted label maximizes the instance label probability given feature vector **x***t**i* and parameter **w** is *ŷ**t**i* = arg max*k**p*(*y**t**i* = *k*∣**x***t**i*, **w**),  where *p*(*y**t**i* = *k*∣**x***t**i*, **w**) is given in. Note that in the absence of bag level label, *ŷ**t**i* can be predicted independently using *ŷ**t**i* = arg max*k***w***k**T***x***t**i*, without the need for dynamic programming. Therefore, the computational complexity of inductive prediction for the *t*th test bag is *O*(*n**t**C**d*). In the transductive setting, the predicted label maximizes the joint probability of the instance label and its bag label as follows *ŷ**t**i* = arg max*k**p*(*y**t**i* = *k*, **Y***t*∣**X***t*, **w**),  where *p*(*y**t**i* = *k*, **Y***t*∣**X***t*, **w**) is computed as described in Section [ss:farfe]. As a result, the dynamic programming in Section [ss:farfe] is used to obtain *p*(*y**t**i* = *k*, **Y***t*∣**X***t*, **w**) and find *ŷ**t**i*. The computational complexity of transductive prediction for the *t*th test bag is *O*(*n**t*∣**Y***t*∣2∣**Y***t*∣) + *O*(*n**t*∣**Y***t*∣*d*). Bag label prediction -------------------- For the *t*th test bag, the predicted bag label $\hat{\textbf{Y}}\_t$ is obtained by taking the union of its instance labels *ŷ**t**i* in as follows $$\hat{\textbf{Y}}\_t=\bigcup\_{i=1}^{n\_t}\hat{y}\_{ti}.$$ Experiments =========== In this section, we evaluate and compare the proposed framework with several state-of-the-art approaches for MIML instance annotation and bag level prediction. General setting --------------- **Approaches.** We compare the proposed approach of ORed-logistic regression model with maximum likelihood inference for instance annotation denoted by MLR with the following methods: SIM, MIMLfast (Mfast for short), and LSB-CMM (LSB for short). For bag level predicton we include a comparison with MIMLSVM (M-SVM for short) and MIMLNN (M-NN for short). **Parameter tuning.** For MLR, by observation that the objective function in stabilizes in around 50 expectation maximization iterations for all datasets, we fix the number of iterations for the proposed EM framework to 50. For SIM, we tune *λ* over the set {10− 4, 10− 5, 10− 6, 10− 7, 10− 8, 10− 9}. For Mfast, following, we search over the set {50, 100, 200}, {1, 5, 10}, {1, 5, 10, 15}, {10− 4, 5 ⋅ 10− 4, 10− 3, 5 ⋅ 10− 3}, {10− 5, 10− 6} for parameters *m*, *C*, *K*, *γ*0, *η*, respectively. Additionally, to satisfy the norm constraint in Mfast, we divide each feature vector by a constant in the set {10− 2, 3 ⋅ 10− 2, 10− 1, 3 ⋅ 10− 1, 1, 3, 10, 30, 102, 3 ⋅ 102, 103}. Note that in the training phase, Mfast learns a parameter **W** which can be used to compute the score *f**M**f**a**s**t*(**x***t**i*, *c*) indicating the confidence of instance **x***t**i* having class *c*. However, Mfast does not predict the label $\hat{\textbf{y}}\_{ti}$ for each instance **x***t**i*. It indirectly uses *f**M**f**a**s**t*(**x***t**i*, *c*) to compute *f**M**f**a**s**t*(**X***t*, *c*) indicating the confidence of the *t*th test bag having class *c*. In the experiments, we access *f**M**f**a**s**t*(**x***t**i*, *c*) and predict the label $\hat{\textbf{y}}\_{ti}$ as $\hat{\textbf{y}}\_{ti}={\arg\!\max}\_{c}f\_{Mfast}(\textbf{x}\_{ti},c)$. For LSB, we search *σ*2 over the set $\{10^{-2}, 10^{-1},\allowbreak 1, 10, 10^{2}, 10^{3}, 10^{4}, 10^{5}\}$ as suggested by the authors. Moreover, we set the parameter *α* as 0.05 and set *K* to 5  ×  (the number of classes). For M-SVM, we use RBF kernel, with *γ* in {0.1, 0.2, 0.4, 0.8} and the parameter ratio is searched over the set {0.2, 0.4, 0.6, 0.8, 1}. For M-NN, we search over the set {0.2, 0.4, 0.6, 0.8, 1} for the parameter ratio, and the parameter *λ* over the set {10− 2, 10− 1, 1, 10}. **Datasets.** We compare the aforementioned approaches on six datasets: HJA birdsong, MSCV2, Letter Frost, Letter Carroll, Voc12, and 50Salad. For the HJA birdsong, there are a few bags where the bag label is not equal the union of its instance labels. Therefore, we create an additional version of the HJA birdsong dataset namely HJA union in which the label of each bag is the union of its instance labels. For the Letter Carroll and Letter Frost datasets, bags are created from words of two poems, and instances of each bag are their letters which are sampled from the Letter Recognition dataset on the UCI Machine Learning repository. However, in each of these two poems, two letters are missing. Therefore, we only consider 24 classes in each dataset. Detailed information of all the aforementioned datasets is shown in Table [table:1]. [H] | l || l | l | l | l | l| l | Dataset & classes & bags &instances& dimension & classes per & instances per & *(C)*& *(B)*& *(N)* & *(d)* & bag $\overline{\textbf{Y}}\_b$ & bag $\overline{n}\_b$ HJA bird & 13 & 548 & 4,998 & 38 &2.1 &9.1 MSCV2 & 23 & 591 & 1,758 & 48 &2.5 &3.0 Letter Carroll & 24 & 166 & 717 & 16 &3.9 &4.3 Letter Frost & 24 & 144 & 565 & 16 &3.6 &3.9 Voc12 &20 &1,053 &4,142 &48 &2.3 &3.9 50Salad &6 &124 &2,020 &57 &2.3 &16.3 [table:1] **Evaluation measures.** We consider different metrics for evaluation of instance annotation and bag label prediction. For instance annotation, we consider instance level accuracy by dividing the correctly predicted label instances by the total number of predicted instances. For bag label prediction, we consider Hamming loss, ranking loss, average precision, one error, and coverage as in. Instance annotation experiments ------------------------------- **Setting.** In this section, we compare the proposed MLR approach with Mfast, SIM, and LSB methods. We also consider the logistic regression classifier in the single instance single label setting (SLR) in which all instance labels are provided. The accuracy of SLR is considered as an upper bound for those of all methods. Furthermore, we consider the dummy classifier that classifies every instance in the test dataset with the most common class in the training dataset without using any information from the feature vectors. The accuracy of the dummy classifier is considered as a lower bound for those of all methods. For the inductive setting, we use 10-fold cross validation to evaluate the performance. For both inductive and transductive settings, there are two methods, namely post-hoc parameter tuning, for selecting the optimal parameters: (1) using an instance level metric and (2) using a bag level metric. Note that the proposed MLR approach is free of parameter tuning hence we only consider these tuning schemes for baseline methods. In that way, there is no unfair advantage for MLR from selecting the best parameter settings. The inductive results with post-hoc tuning using instance level accuracy and bag level accuracy are presented in Tables [table:2] and [table:3], respectively. The transductive results with post-hoc tuning using instance level accuracy and bag level accuracy are presented in Tables [table:4] and [table:5], respectively. **Inductive results.** From Tables [table:2] and [table:3], the MLR approach outperforms and in some cases significantly outperforms other methods on the datasets considered. On MSCV2, Letter Carroll, and Letter Frost, the results of MLR are from 7% to 12% higher than those of Mfast, SIM, and LSB, whose performances are comparable. For HJA bird, Voc12, and 50Salad, LSB is comparable or achieves a higher accuracy than Mfast which outperforms SIM. For these three datasets, the proposed MLR approach outperforms LSB 3% on HJA bird and is comparable with LSB on Voc12 and 50Salad. Furthermore, the accuracy obtained by MLR is only 5-6% lower than that of SLR on HJA bird and MSCV2 datsets, and 1-3% on other datasets. **Analysis.** The accuracy of all methods on HJA dataset is lower than on HJA union dataset. This is due to the violation of the union assumption in the HJA dataset. The accuracy of SLR approach on both datasets is the same since SLR relies only on the instance labels. MLR significantly outperforms other methods in MSCV2, Letter Carroll, and Letter Frost. The reason is that for those datasets, the average number of classes per bag is high. By carefully considering all possibilities for instance labels and avoiding any approximation in inference, MLR uses all the instances effectively. In contrast, Mfast and SIM use the max or softmax principle which may ignore useful information from most of the instances. Moreover, LSB uses an approximate inference method, MCMC sampling. In general, sampling methods are slow to converge and due to randomness present a challenge in establishing a stopping criteria. Furthermore, in LSB, each instance has a set of possible labels which is the label of its bag. As a result, LSB may ignore a useful constraint, that the union of instance labels equals their bag label, which is preserved in our dynamic programming approach. For the Voc12 dataset, the dummy classifier works well since there exists a dominant class consisting of more than 30% instances of the dataset. In Table [table:3], the accuracy of MLR is similar to that in Table [table:2] since MLR has no tuning parameter. For LSB and SIM, the accuracy in Table [table:3] is slightly smaller than in Table [table:2] since the tuning is performed indirectly using the bag level measurement instead of directly using the instance level accuracy. For Mfast, while we follow the parameter tuning scheme proposed in, we understand that post-hoc tuning benefits methods with a high number of parameter settings. Since we consider a large number of choices (a total of 3,168) for the parameters, the accuracy drops significantly from the scenario when we both tune and test on instance level metric, as in Table [table:2], to the case when we tune on bag level metric and test on instance level metric, as in Table [table:3]. **Transductive results and analysis.** In Tables [table:4] and [table:5], there is no standard deviation reported for the transductive setting. The reason is that since bag level labels are known in both training and test phases, approaches are trained and tested on the whole dataset instead of dividing into 10 folds as in the inductive setting. MLR outperforms all other methods on all datasets, especially on MSCV2, Letter Carroll, and Letter Frost. For example, on MSCV2, the accuracy of MLR is 14%, 15%, and 23% higher than those of Mfast, SIM, and LSB, respectively. For datasets HJA union and Voc12, MLR outperforms LSB even though they are comparable in the inductive setting with instance level tuning. Similarly, even though SIM, LSB, and Mfast perform comparably on MSCV2 in inductive setting, their reported accuracies are noticeably different in transductive setting. | p11mm || p15mm | p17mm | p14mm | l | l | l | l | Dataset & HJA bird & HJA union & MSCV2 & Carroll & Frost &Voc12 &50Salad & **69.1 ± 4.7** & **72.3 ± 3.8** & **54.8 ± 4.7** & **67.7 ± 5.2** &**71.3 ± 7.6** &**43.2 ± 3.2** &**76.0 ± 5.3** Mfast & 60.5 ± 5.5 & 61.3 ± 3.6 & 47.6 ± 5.1 & 56.1 ± 5.9 &58.9 ± 5.3 &**42.5 ± 3.3** &**72.4 ± 7.5** SIM & 61.9 ± 4.2 & 62.7 ± 4.1 & 46.7 ± 4.4 & 54.2 ± 6.5 &58.1 ± 5.7 &38.0 ± 3.4 &66.9 ± 7.3 LSB & 66.3 ± 4.3 & **72.0 ± 4.0** & 45.4 ± 4.8 & 55.9 ± 2.6 &55.5 ± 3.6 &**42.8 ± 4.5** &**76.8 ± 7.8** SLR & 74.6 ± 3.1 & 74.6 ± 3.1 & 60.2 ± 4.8 & 70.5 ± 4.8 &72.7 ± 4.0 &45.2 ± 3.9 &77.6 ± 4.7 Dummy & 12.1 ± 3.4 & 12.1 ± 3.4 & 14.5 ± 3.7 & 11.2 ± 3.7 &09.3 ± 2.4 &36.1 ± 4.1 &16.7 ± 7.7 [table:2] | p11mm || p15mm | p17mm | p14mm | l | l | l | l | Dataset & HJA bird & HJA union & MSCV2 & Carroll & Frost &Voc12 &50Salad & **69.1 ± 4.7** & **72.3 ± 3.8** & **54.8 ± 4.7** & **67.7 ± 5.2** &**71.3 ± 7.6** &**43.2 ± 3.2** &**76.0 ± 5.3** Mfast & 60.0 ± 3.6 & 59.8 ± 4.9 & 41.5 ± 5.8 & 51.5 ± 6.5 &45.4 ± 7.1 &41.6 ± 2.8 &**72.4 ± 7.5** SIM & 61.7 ± 4.3 & 62.7 ± 4.1 & 46.7 ± 4.4 & 54.2 ± 6.5 &58.1 ± 5.7 &38.0 ± 3.4 &66.9 ± 7.3 LSB & 66.1 ± 4.4 & **71.4 ± 4.3** & 45.2 ± 4.0 & 55.9 ± 2.6 &55.5 ± 3.6 &**42.0 ± 4.3** &**75.9 ± 6.7** SLR & 74.6 ± 3.1 & 74.6 ± 3.1 & 60.2 ± 4.8 & 70.5 ± 4.8 &72.7 ± 4.0 &45.2 ± 3.9 &77.6 ± 4.7 Dummy & 12.1 ± 3.4 & 12.1 ± 3.4 & 14.5 ± 3.7 & 11.2 ± 3.7 &09.3 ± 2.4 &36.1 ± 4.1 &16.7 ± 7.7 [table:3] | l || l | l | l | l | l | l | l | Dataset & HJA bird & HJA union & MSCV2 & Carroll & Frost &Voc12 &50Salad & 80.8 & **88.0** & **84.9** & **91.5** & **91.5** &**67.9** &86.8 Mfast & 78.3 & 81.8 & 70.6 & 72.7 & 75.8 &61.0 &84.7 SIM & **81.7** & 83.5 & 69.7 & 72.2 & 77.3 &63.1 &83.1 LSB & 77.0 & 85.7 & 61.1 & 70.0 & 66.2 &57.8 &**88.3** Dummy & 51.6 & 56.3 & 38.0 & 25.2 & 27.8 &50.9 &46.9 [table:4] | l || l | l | l | l | l | l | l | Dataset & HJA bird & HJA union & MSCV2 & Carroll & Frost &Voc12 &50Salad & 80.8 & **88.0** & **84.9** & **91.5** & **91.5** &**67.9** &86.8 Mfast & 77.5 & 80.0 & 65.1 & 67.3 & 70.2 &58.4 &84.7 SIM & **81.7** & 83.5 & 69.7 & 71.7 & 76.5 &63.1 &82.0 LSB & 75.4 & 81.0 & 59.0 & 68.3 & 65.7 &54.8 &**87.5** Dummy & 51.6 & 56.3 & 38.0 & 25.2 & 27.8 &50.9 &46.9 [table:5] Instance annotation accuracy vs. the number of training bags ------------------------------------------------------------ **Setting.** In this section, we examine the ability of the proposed approach to effectively use the information from all instances compared to LSB and SIM. We exclude Mfast since its post-hoc instance level tuning accuracy is far different from its bag level tuning accuracy. In addition, adding one parameter for the percentage of training bags leading to a larger number of parameter settings which is already high for Mfast, and longer runtime. We perform the experiments on HJA, MSCV2, Letter Carroll, Letter Frost, Voc12, and 50Salad datasets. For each dataset we randomly select {1%, 2%, 5%, 10%, 20%, 50%, 90%} of the data for training and the remaining data for testing. For each of these sampling percentage values, we perform the experiment 10 times and report the average accuracy. The accuracy as a function of the sampling percentage is depicted in Figure [fig:accvsbag]. [b]0.48 HJA bird [b]0.48 Voc12 [b]0.48 MSCV2 [b]0.48 Letter Frost [b]0.48 Letter Carroll [b]0.48 50Salad [fig:accvsbag] **Results.** From Figure [fig:accvsbag], we observe that the accuracy for each method increases with the number of training bags. Furthermore, in order to achieve similar accuracy, our MLR approach uses fewer bags. This has significant implications in practice since the data labeling process is costly. In addition, the time required for training could also be saved. For example, in MSCV2, MLR can achieve similar level of accuracy to that of SIM and LSB using only 17% of training data SIM and LSB use. With a small amount of data, there is a clear gap between MLR and the ideal upper bound SLR. However, with enough training data, the performance of MLR is close to SLR. For example, in 50Salad, at 10% of training data, the performance of MLR is 20% less than that of SLR. However, with 90% data for training, the gap is only 1-2%. From the figure, we could observe how efficiently each approach uses information from instance labels. SLR achieves high accuracy even with a small number of training bags since it is directly given instance labels. MLR achieves a similar level of accuracy using a smaller number of bags, compared to SIM and LSB, indicating that MLR efficiently uses information from bag labels. In SIM, the softmax function for computing the weight of each instance in bags is heuristically defined instead of directly coming from the objective function. As a result, SIM may not efficiently use information from all instances. For LSB, a possible explanation for its lower accuracy compared to that of MLR is that approximation methods including variational EM and MCMC sampling are used to inference. Another possible explanation is that LSB does not maintain the constraint that the union of instance labels in each bag is equal to their bag label. We also observe that the accuracy of SIM and LSB are quite similar. **Special case for Voc12.** For small training data, if we have a high number of classes, over-fitting problem may occur, as in Voc12. Specifically, MLR and SLR are outperformed by the dummy classifier, whose accuracy is around 36%, since there is a dominant class in the dataset such that more than 30% of the instances in that class. We can mitigate these problems by adding an *l*2-norm regularization term to the objective function for both MLR and SLR. LSB is less susceptible to over-fitting problem since LSB uses a parameter *σ*2 for regularization which is searched over the set {10− 2, 10− 1, …, 105} as in Section [ss:gs]. The results are presented in Figure [fig:Vocreg], indicating that if we use more than 5% of data for training, SLR and MLR can obtain a higher accuracy compared to the dummy classifier. [fig:Vocreg] Bag label prediction -------------------- | l|| p13mm||p14mm|p13mm| p13mm|p13mm|| p13mm|p13mm|| p13mm| &SLR &**MLR** &SIM &Mfast &LSB &M-SVM &M-NN &Dummy  ↓  hl &11.2 ± 0.9 &09.6 ± 1.0 &15.9 ± 1.5 &05.5 ± 1.1 &10.6 ± 1.5 &**04.5 ± 0.6** &**04.7 ± 1.1** &18.7 ± 0.8  ↓  rl &03.5 ± 0.4 &**02.7 ± 0.6** &**02.2 ± 0.8** &**02.5 ± 0.7** &06.9 ± 1.8 &**02.7 ± 1.1** &**02.7 ± 1.1** &32.3 ± 3.1  ↑  ap &92.5 ± 1.0 &**94.2 ± 1.2** &**94.1 ± 1.8** &**94.1 ± 1.4** &89.7 ± 2.6 &**94.0 ± 2.0** &**93.9 ± 2.8** &44.1 ± 3.2  ↓  oe &05.7 ± 3.3 &**03.8 ± 1.8** &**05.1 ± 3.1** &**03.7 ± 2.4** &**03.7 ± 1.7** &**04.6 ± 2.6** &**05.3 ± 4.4** &64.0 ± 6.2  ↓  co &15.2 ± 1.6 &13.9 ± 1.6 &**12.4 ± 1.6** &**13.4 ± 1.6** &21.7 ± 3.6 &**13.2 ± 1.6** &13.4 ± 1.3 &45.9 ± 2.7  ↓  hl &11.2 ± 1.0 &11.1 ± 1.2 &16.1 ± 1.9 &05.5 ± 1.0 &10.1 ± 1.5 &**04.5 ± 0.5** &**04.7 ± 0.7** &18.7 ± 0.8  ↓  rl &02.8 ± 0.5 &**02.7 ± 0.6** &**02.4 ± 0.7** &**02.6 ± 0.6** &04.1 ± 1.3 &**02.6 ± 1.0** &**02.6 ± 0.9** &33.8 ± 2.9  ↑  ap &93.3 ± 1.5 &**93.7 ± 1.7** &**93.6 ± 1.9** &**93.9 ± 1.5** &**93.0 ± 2.3** &**93.7 ± 2.1** &**94.4 ± 2.0** &40.5 ± 2.7  ↓  oe &06.0 ± 3.7 &**05.7 ± 3.7** &**06.0 ± 2.6** &**04.2 ± 2.6** &**03.8 ± 3.1** &**05.3 ± 2.8** &**04.0 ± 2.8** &67.8 ± 10.0  ↓  co &12.5 ± 1.2 &12.7 ± 1.7 &**11.5 ± 1.3** &12.4 ± 1.3 &15.7 ± 2.5 &**12.2 ± 1.3** &**12.4 ± 1.4** &45.1 ± 2.7  ↓  hl &07.3 ± 1.0 &07.7 ± 1.0 &08.9 ± 0.8 &09.0 ± 0.9 &07.9 ± 0.8 &07.2 ± 1.1 &**06.4 ± 0.8** &13.0 ± 0.4  ↓  rl &08.7 ± 1.5 &08.7 ± 1.6 &10.5 ± 2.2 &10.7 ± 1.6 &17.0 ± 2.8 &08.1 ± 1.6 &**07.3 ± 2.0** &34.2 ± 3.5  ↑  ap &73.2 ± 5.1 &73.1 ± 4.9 &70.3 ± 3.8 &69.3 ± 4.1 &65.2 ± 4.4 &74.6 ± 5.1 &**78.5 ± 5.6** &40.2 ± 3.0  ↓  oe &26.6 ± 7.4 &27.1 ± 6.1 &27.1 ± 6.6 &27.9 ± 6.8 &25.4 ± 5.9 &25.1 ± 7.7 &**20.3 ± 6.2** &64.0 ± 7.7  ↓  co &20.6 ± 2.6 &20.5 ± 2.9 &23.3 ± 3.3 &23.3 ± 3.1 &36.3 ± 3.9 &19.8 ± 2.3 &**18.3 ± 2.8** &53.0 ± 3.3  ↓  hl &07.8 ± 1.4 &**09.0 ± 1.3** &11.9 ± 1.5 &11.9 ± 1.2 &10.8 ± 1.0 &14.9 ± 1.2 &14.2 ± 1.8 &18.9 ± 1.4  ↓  rl &06.6 ± 2.6 &**07.4 ± 2.8** &13.5 ± 4.0 &12.2 ± 3.2 &13.9 ± 2.3 &20.5 ± 3.3 &17.7 ± 2.3 &25.7 ± 3.3  ↑  ap &84.5 ± 5.0 &**83.1 ± 4.6** &73.0 ± 5.4 &75.6 ± 5.1 &74.9 ± 2.7 &58.6 ± 8.4 &62.2 ± 4.5 &47.1 ± 5.3  ↓  oe &06.7 ± 7.4 &**05.5 ± 6.1** &**12.0 ± 9.1** &**06.6 ± 5.9** &**09.1 ± 3.3** &38.1 ± 20.6 &28.3 ± 10.3 &56.1 ± 15.4  ↓  co &28.1 ± 5.4 &**29.8 ± 5.2** &39.5 ± 5.4 &36.8 ± 4.6 &41.6 ± 4.6 &44.3 ± 2.6 &41.4 ± 3.8 &48.4 ± 4.9  ↓  hl &06.7 ± 1.1 &**07.5 ± 2.2** &10.7 ± 1.7 &11.2 ± 1.5 &10.2 ± 1.3 &12.3 ± 2.5 &11.9 ± 2.2 &17.2 ± 2.0  ↓  rl &05.7 ± 2.0 &**06.1 ± 1.8** &12.0 ± 2.1 &12.7 ± 2.3 &15.4 ± 1.9 &17.7 ± 3.1 &15.7 ± 1.6 &25.4 ± 3.5  ↑  ap &85.6 ± 4.6 &**83.9 ± 3.8** &73.3 ± 2.3 &73.6 ± 4.0 &71.9 ± 3.4 &64.3 ± 9.1 &68.9 ± 4.4 &44.5 ± 5.0  ↓  oe &09.6 ± 8.1 &**11.1 ± 7.6** &**14.6 ± 9.9** &**09.6 ± 9.2** &**11.0 ± 9.3** &30.0 ± 27.3 &20.7 ± 13.0 &64.5 ± 9.2  ↓  co &24.0 ± 4.9 &**24.8 ± 6.0** &34.6 ± 5.8 &37.2 ± 6.5 &42.3 ± 6.3 &41.7 ± 3.7 &39.7 ± 4.1 &49.0 ± 3.2  ↓  hl &10.5 ± 0.8 &10.2 ± 0.7 &12.5 ± 0.9 &**09.3 ± 0.5** &**09.2 ± 0.8** &09.5 ± 0.7 &**09.2 ± 0.7** &09.5 ± 0.7  ↓  rl &15.5 ± 1.4 &**15.8 ± 1.1** &18.3 ± 1.6 &17.0 ± 1.6 &22.7 ± 1.6 &19.4 ± 2.3 &**15.6 ± 1.9** &23.6 ± 1.8  ↑  ap &64.4 ± 2.8 &**63.8 ± 1.3** &60.7 ± 1.8 &62.5 ± 2.3 &58.9 ± 2.6 &61.7 ± 2.8 &**65.0 ± 2.6** &55.4 ± 2.4  ↓  oe &25.0 ± 6.7 &**26.1 ± 4.1** &30.4 ± 3.1 &**25.8 ± 4.7** &**25.2 ± 4.6** &28.0 ± 5.6 &**25.5 ± 4.6** &28.6 ± 5.3  ↓  co &31.9 ± 2.2 &**32.6 ± 2.3** &35.5 ± 2.8 &34.1 ± 2.9 &45.0 ± 3.0 &37.8 ± 2.6 &**32.2 ± 3.4** &44.9 ± 2.5  ↓  hl &16.5 ± 2.1 &**18.0 ± 3.4** &**20.3 ± 4.2** &**19.0 ± 5.0** &**17.6 ± 5.0** &23.6 ± 4.8 &24.6 ± 3.8 &41.0 ± 2.8  ↓  rl &07.9 ± 3.0 &**07.6 ± 4.4** &11.1 ± 5.7 &**09.0 ± 3.3** &**07.5 ± 2.8** &19.3 ± 5.8 &20.5 ± 5.2 &54.4 ± 7.0  ↑  ap &91.9 ± 5.3 &**92.1 ± 5.4** &**88.7 ± 5.6** &**90.9 ± 3.4** &**91.9 ± 3.6** &81.8 ± 4.5 &80.6 ± 5.0 &52.8 ± 6.9  ↓  oe &11.0 ± 12.8 &**10.9 ± 10.5** &**13.9 ± 9.8** &**09.1 ± 6.0** &**09.8 ± 8.1** &20.4 ± 8.6 &22.9 ± 7.7 &62.4 ± 13.7  ↓  co &27.9 ± 4.7 &**27.7 ± 5.5** &31.5 ± 5.0 &29.5 ± 5.5 &**28.1 ± 4.6** &37.6 ± 3.9 &37.6 ± 4.8 &61.1 ± 4.1 [table:6] **Setting.** Even though this paper focuses on instance label prediction, we also consider examining the application of the proposed method for bag label prediction. We perform experiments on datasets with instance level label such as HJA, MSCV2, Letter Carroll, Letter Frost, Voc12, and 50Salad following the method described in Section [ss:blp]. We compare MLR with the following methods: SIM, Mfast, LSB, M-SVM, and M-NN. Moreover, we also compare MLR with the logistic regression trained in the SISL setting (SLR) and a dummy classifier designed to optimize the performance for each of the following evaluation metrics: Hamming loss, ranking loss, average precision, one error, and coverage. Note that the dummy classifier assigns the same output for any test bag. For Hamming loss, the dummy classifier outputs a bag label consisting of all classes which appear in more than 50% of training bags. For ranking loss, average precision, one error, and coverage, with each class *c*, the dummy classifier outputs *f**d**u**m**m**y*(**X***t*, *c*) =  (the percentage of training bags having class *c*) indicating the confidence of the test bag **X***t* having class *c*. Note that *f**d**u**m**m**y*(**X***t*, *c*) is independent of **X***t*. To compute ranking loss, average precision, one error, and coverage from the confidence values, we follow. For MLR, SIM, and LSB, the confidence for the bag on each class is computed as the maximal value of the confidence of its instances w.r.t. the class. Specifically, *f**M**L**R*(**X***t*, *c*) = max*i**p*(**y***t**i* = *c*∣**x***t**i*, **w**), *f**L**S**B*(**X***t*, *c*) = max*i**p*(**y***t**i* = *c*∣**x***t**i*, **w**, *α̂*), and *f**S**I**M*(**X***t*, *c*) = max*i***w***c**T***x***t**i*. For Mfast, M-SVM, and M-NN, the bag level metrics are computed as in, respectively. The results are reported in Table [table:6]. **Results.** From Table [table:6], we observe that MLR matches or outperforms other methods in term of each evaluation measure in almost all datasets, except for the MSCV2 where the accuracy of MLR is lower than those of M-NN and M-SVM. The reason is that M-NN and M-SVM consider bag level information by encoding each bag with a vector containing the similarities of the bag with representative bags in the dataset. However, instance annotation methods including MLR, SIM, LSB, and Mfast only focus on instance level features without including additional bag level information. Consequently, approaches that rely on bag level information may outperform instance level approaches on datasets where each test bag information is helpful to predict its instance labels such as image datasets including MSCV2. For example, we are not sure whether a single white color pixel is from the sky or a blank paper. However, if we know that the overall image is about a scene, we would rather predict the pixel is from the sky. Similar to the instance annotation accuracy in Table [table:2], our approach performance is close to the SLR version. Among the considered instance annotation methods, MLR seems to outperform other methods in bag level measurements. Among the bag level methods, M-NN seems to outperform M-SVM. Kernel OR-LR ------------ **Setting.** In kernel learning, we transform feature vector **x** to **k**(**x**) as in Section [ss:kbolrm]. We refer to the setting which involves **x** (as in ) as feature learning and to the setting which **x** is replaced by **k**(**x**) as kernel learning. The parameter *δ* in is selected as follows. We first compute the mean square distance among every pair of instances in the dataset $\overline{d^2}$. Then, *δ* is computed as $\delta=s\times \overline{d^2}$, where *s* is in {0.1, 0.2, 0.5, 1, 2, 5, 10}. We evaluate the instance level accuracy as a function of *δ* for SLR and MLR and report the highest accuracy for each method. Since the dimension of **w** in kernel learning is higher than in feature learning, we set the number of expectation maximization iterations to 500. For SIM and Mfast, we add *δ* to the parameters used for tuning in Section [ss:gs]. In SIM, the number of iterations in each phase *T* is set to 1,000 due to the higher dimension in kernel space. Similarly, in Mfast, the number of iterations is set to 100. Due to the new parameter *s*, the large number of parameter settings, and the higher number of iterations leading to a longer runtime compared to the feature learning case, the parameters of Mfast are selected as follows. Since the performance of Mfast depends mainly on *K*, *K* is searched over the set {1, 5, 10, 15} and *m*, *C*, *γ*0, *η* are set to 100, 10, 10− 3, 10− 5, respectively. In this set of experiments, due to significant increase in runtime of the kernel learning setting, we do not include LSB. The results for kernel learning of MLR, SLR, SIM, and Mfast are reported in Table [table:8]. **Results.** From Table [table:8], on MSCV2, Letter Carroll, and Letter Frost, there is a significant improvement of accuracy in kernel case compared to the feature case in both SLR and MLR. Specifically, on kernel learning, the instance annotation accuracy of MLR is 5%, 4%, and 3% higher on MSCV2, Letter Carroll, and Letter Frost, compared to the feature case, respectively. Moreover, the accuracy of SLR is also significantly higher 5%, 9%, and 6% on those datasets, compared to the feature case, respectively. Since the dummy classifier does not rely on instance feature vectors, the accuracy is unchanged compared to the feature case as presented in Table [table:2]. In the kernel learning case, MLR outperforms Mfast which seems to outperform SIM. We note that due to the higher number of iterations and larger size parameters, running the kernel version of methods takes longer than the feature version. For instance, running the kernel version of MLR on Letter Carroll takes 7 times longer than the feature version whereas running the kernel version of SIM on Letter Carroll takes 15 times longer than the feature version. | l || l | l | l | l | l | l | l | Dataset & HJA bird & HJA union & MSCV2 & Carroll & Frost &Voc12 &50Salad & **70.2 ± 5.1** & **72.8 ± 3.7** & **61.0 ± 5.3** & **72.1 ± 5.7** &**74.0 ± 5.5** &**45.0 ± 3.0** &**78.4 ± 5.1** K-Mfast & 62.1 ± 3.0 & 63.8 ± 3.9 & 53.7 ± 4.9 & 68.6 ± 4.5 &67.0 ± 5.1 &37.7 ± 3.6 &74.2 ± 6.6 K-SIM & 63.4 ± 3.0 & 63.8 ± 3.2 & 51.2 ± 6.6 & 65.9 ± 3.9 &64.9 ± 7.8 &42.1 ± 4.1 &71.9 ± 6.2 K-SLR & 77.6 ± 3.6 & 77.6 ± 3.6 & 65.1 ± 4.1 & 79.5 ± 5.2 &79.5 ± 3.5 &46.6 ± 2.5 &83.7 ± 3.5 K-Dummy & 12.1 ± 3.4 & 12.1 ± 3.4 & 14.5 ± 3.7 & 11.2 ± 3.7 &9.3 ± 2.4 &36.1 ± 4.1 &16.7 ± 7.7 [table:8] Speed up -------- In this section, we study techniques to reduce the computational complexity of our approach. The main time-consuming part of MLR is in the E-step. Table [table:9] shows the computational complexity per bag for each of our E-step implementation and the sum of each factor in that computational complexity over all bags on every dataset. Since all implementations have an exponential term regarding the number of classes per bag 2∣**Y***b*∣, speeding up is necessary for datasets such as Letter Carroll or Letter Frost, where the factor ∑*b* = 1*B*∣**Y***b*∣2∣**Y***b*∣*n**b*2 significantly dominates the factor ∑*b* = 1*B**n**b**C**d*, as illustrated in Table [table:9], because of several bags having a large number of classes. | l || l | l | l | l | Algorithm & & Complexity per bag & & Dataset & ∑*b* = 1*B*∣**Y***b*∣2∣**Y***b*∣*n**b*2 &∑*b* = 1*B**n**b**C**d* &∑*b* = 1*B*∣**Y***b*∣2∣**Y***b*∣*n**b* &∑*b* = 1*B**n**b**C**d* HJA bird & 1,300,502 &2,469,012 &99,314 &2,469,012 HJA union & 1,105,808 &2,469,012 &83,584 &2,469,012 MSCV2 & 680,112 &1,940,832 &105,584 &1,940,832 Letter Carroll & 4,825,522 &275,328 &499,282 &275,328 Letter Frost & 1,921,792 &216,960 &211,248 &216,960 Voc12 & 566,464 &3,976,320 &79,888 &3,976,320 50Salad & 4,981,778 &690,840 &120,738 &690,840 [table:9] ### Speed up using pruning [b]0.48 Letter Carroll [b]0.48 Letter Carroll [b]0.48 Letter Frost [b]0.48 Letter Frost [b]0.48 HJA bird [b]0.48 HJA bird [fig:prune] Pruning is a technique in our paper that removes bags containing a large number of classes in the dataset. In order to prune, we sort bags based on the value of *n**b*∣**Y***b*∣2∣**Y***b*∣ in an ascending order and remove a fixed percentage of bags with these highest values from the training set. We perform 10-fold cross validation. Assume that *n**b* is equal for every bag, thus a bag contains a large number of classes ∣**Y***b*∣ would have a significantly high value of *n**b*∣**Y***b*∣2∣**Y***b*∣ proportional to the time MLR spends on the *b*th bag in the expectation step. In fact, those bags are less informative, due to the label ambiguity, which increases as a function of the size of the label set. As a result, we expect the accuracy is hardly changed after pruning while the runtime decreases. To understand the intuition behind the pruning technique used for MLR in this section, we present the following experiments on HJA, MSCV2, Letter Carroll, and Letter Frost datasets. First, we sort the bags in term of *n**b*∣**Y***b*∣2∣**Y***b*∣ in an ascending order. Then, we select *B*0 bags which contain 80% of the bags with lowest values, and report the ratio (∑*b* = 1*B**n**b*∣**Y***b*∣2∣**Y***b*∣)/(∑*b* = 1*B*0*n**b*∣**Y***b*∣2∣**Y***b*∣) for each dataset as Table [table:10]. From Table [table:10], we could observe that the *B*0 selected bags only constitute to a small proportion of the sum, which are around 1/3, 1/7, 1/20, and 1/12 for HJA, MSCV2, Letter Carroll, and Letter Frost, respectively. | m3cm || l | l | l | l | Dataset & HJA bird & MSCV2 & Letter Carroll & Letter Frost $\frac{\sum\_{b=1}^{B}n\_b|\textbf{Y}\_b|2^{|\textbf{Y}\_b|}}{\sum\_{b=1}^{B\_0}n\_b|\textbf{Y}\_b|2^{|\textbf{Y}\_b|}}$ & 3.3 & 6.6 & 20.3 & 11.6 [table:10] We perform the experiment on HJA, Letter Carroll, and Letter Frost. We prune {0%, 10%, 20%, 30%, 40%, 50%, 60%, 70%, 80%, 90%} the number of bags having highest value. The accuracy and runtime as functions of the proportion of keeping bags on Letter Carroll, Letter Frost, and HJA are reported in Figure [fig:prune]. From Figure [fig:prune], we observe a significant speed up on Letter Carroll and Letter Frost datasets. Specifically, we can speed up the computation time in Letter Carroll and Letter Frost by 9.2 and 5.0 times by just removing 20% of the data and the accuracy is hardly changed. In the Letter Carroll and Letter Frost, there are few bags having 10 classes (words have more than 10 different letters). Removing these words does not affect the overall accuracy result. However, since the computational complexity is exponential in the number of classes per bag, the runtime for such bags constitutes a large percentage of the overall runtime. In the HJA dataset, the effect is less pronounced. We can maintain the accuracy by removing 20% bags and the runtime decreases by a factor of 2. From Figure [fig:prune], for a small proportion of keeping bags, the runtime seems to decrease linearly with the number of keeping bags. It can be explained as for those datasets, after removing a high proportion of bags with high number of classes, the remaining bags almost have the same number of classes. As a result, the computational complexity per bag using the forward and substitution algorithm from Table [table:9] depends on the number of instances per bag only which is assumed to be equal. Consequently, the runtime depends linearly on the number of keeping bags. ### Speed up using stochastic gradient ascent [b]0.48 HJA bird [b]0.48 HJA bird [b]0.48 MSCV2 [b]0.48 MSCV2 [b]0.48 Letter Carroll [b]0.48 Letter Carroll [fig:sampling] In Section [ss:em], we observe that in each expectation maximization iteration, for all bags, we need to compute the instance labels probability given their bag label which is costly in runtime. Stochastic sampling means that, for each iteration, we sample only a few bags then compute the instance labels probability for those bags only. Specifically, the new expectation maximization at the *k*th iteration is as follows: E-step: * Sample a subset *S* from *B* bags * Compute *p*(*y**b**i* = *c*∣**Y***b*, **X***b*, **w**(*k*)), ∀*b* ∈ *S* M-step: $$\textbf{w}\_c^{(k+1)}=\textbf{w}\_c^{(k)}+ \frac{\partial \widetilde{g}(\textbf{w},\textbf{w}^{(k)})}{\partial\textbf{w}\_c}\bigg|\_{\textbf{w}=\textbf{w}^{(k)}} \times \eta,$$ where $$\frac{\partial \widetilde{g}(\textbf{w},\textbf{w}^{(k)})}{\partial\textbf{w}\_c}=\sum\_{b\in S}\sum\_{i=1}^{n\_b}[p(y\_{bi}=c|\textbf{Y}\_b,\textbf{X}\_b,\textbf{w}^{(k)})-p(y\_{bi}=c|\textbf{x}\_{bi},\textbf{w})]\textbf{x}\_{bi}.$$ Since computing instance labels probability on a subset of bags is faster with a factor linear w.r.t. the size of the subset, we expect that the runtime will decrease proportionally. In each iteration, the subset of bags *S* would change, so we expect we could cover all the bags after few iterations, such that the accuracy does not change significantly. The percentage of sampled bags is selected from the set {1%, 2%, 5%, 10%, 20%, 50%, 100%}. The accuracy and runtime as functions of the proportion of sampled bags of our approach on HJA, MSCV2, and Letter Carroll are reported in Figure [fig:sampling]. From Figure [fig:sampling], we observe that with high percentage of sampled bags, when we decrease the percentage of sampled bags, the runtime decreases at almost the same rate. For example, the runtime for HJA dataset with sampling percentage of {20%, 50%, 100%} are {55, 131, 260} seconds per cross validation, respectively. The reason is that in both expectation step and maximization step, MLR with stochastic gradient ascent only considers sampled bags. When we decrease the percentage of sampled bags, the overall runtime of MLR decreases at almost the same rate. In general, by sampling 20% of the number of the bags, we can reduce the runtime by a factor of 5 while keeping the accuracy at the same level on HJA, MSCV2, and Letter Carroll. ### Speed up by shrinking the dictionary [b]0.48 HJA bird [b]0.48 HJA bird [b]0.48 MSCV2 [b]0.48 MSCV2 [b]0.48 Letter Carroll [b]0.48 Letter Carroll [fig:dict] In kernel learning, in order to create the kernel vector **k**(**x**), we compute the similarities of **x** with all other instances (called dictionary), to obtain $\textbf{k}(\textbf{x})=[K(\textbf{x},\textbf{x}\_{11}),K(\textbf{x},\textbf{x}\_{12}),\dots,K(\textbf{x},\allowbreak\textbf{x}\_{Bn\_B})]$. Shrinking dictionary means reducing the dimension of **k**(**x**). Specifically, from the data, we sample *S* instances $\hat{\textbf{x}}\_1,\hat{\textbf{x}}\_2,\dots,\hat{\textbf{x}}\_S$, and create $\textbf{k}(\textbf{x})=[K(\textbf{x},\hat{\textbf{x}}\_1),K(\textbf{x},\hat{\textbf{x}}\_2),\dots,K(\textbf{x},\allowbreak\hat{\textbf{x}}\_S)]$. In fact, the larger the dimension of **k**(**x**), the longer MLR takes to converge since one dimension of the parameter $\boldsymbol\alpha$ in Section [ss:kernel] is equal to the dimension of **k**(**x**). Even though there are a lot of instances, they may lie on a low dimensional space. As a result, even though some instances are removed in the kernel vector, we hope that the accuracy will only mildly change. We select *q* the percentage of instances used to create the kernel vector in the set {1%, 2%, 5%, 10%, 20%, 50%, 100%}. If *q* ≤ 10%, we set the number of EM iterations to 50. If *q* > 10%, then the dimension of the kernel vector and parameter $\boldsymbol\alpha$ become large. Consequently, in order to guarantee the convergence of the proposed EM solution, we set the number of iterations to (*q*/10%) \* 50 = *q* \* 500 which is proportional to *q*. For example, if we use 20% of instances, we set the number of iterations to 20% × 500 = 100. The accuracy and runtime as functions of *q* are reported in Figure [fig:dict]. From Figure [fig:dict], on the HJA, MSCV2, and Letter Carroll datasets, we observe that by creating the kernelized feature vectors containing the similarities with only 50% of the number of instances in the dataset, we can still keep a similar level of accuracy when using all instances in the datasets. Note that for *q* ≤ 10%, the runtime is hardly changed since the number of expectation maximization iterations is fixed to be 50. In summary, even though there is an exponential factor w.r.t. the number of classes per bag in the computational complexity of the proposed MLR approach, the average number of classes per bag is not high in reality making MLR a practical instance annotation solution. Additionally, there are several speed up methods including pruning and sampling that help to reduce the runtime while keeping the accuracy almost unchanged. Especially, pruning is a technique designed for MLR that help to reduce the runtime on datasets with high average number of classes per bag such as Letter Carroll and Letter Frost by a factor from 4 to 7 times. Conclusions =========== This paper focuses on the instance annotation problem in multi-label multi-instance setting. We proposed an OR-ed logistic regression model for the problem and used an expectation maximization framework to facilitate maximum likelihood estimation of the model parameters. We focused on the challenge of how to efficiently compute the exact instance label posterior probabilities given their bag level labels to keep the accuracy as high as possible. We address this challenge by using a dynamic programming method, with the computational complexity linear in the number of instances per bag. Experiments on different datasets indicate that the proposed approach outperforms other state-of-the-art methods including SIM, LSB, and Mfast, especially on MSCV2 dataset where the accuracy of the proposed approach is around 7% higher than those of SIM, LSB, and Mfast. Our MLR approach is not only free of parameter tuning but also achieves higher accuracy, as well as uses less bags to get to the same accuracy level compared to other methods. In MSCV2, MLR just requires around 17% training bags to achieve the same level of accuracy as SIM and LSB. Experiments on bag level classification show that even though without using bag level features, MLR is comparable to other bag level classifiers such as M-SVM and M-NN. Other experiments show that the proposed approach can be extended well to linearly inseparable datasets using kernel learning and efficiently speeded up using pruning and sampling techniques. For *i* = 0, from our model, *p*(**Y***b*1∣**X***b*, **w**) = *p*(*y**b*1∣**X***b*, **w**). Therefore, if **\L** contains only one class *l*, *p*(**Y***b*1 = **\L**∣**X***b*, **w**) = *p*(*y**b*1 = *l*∣**x***b*1, **w**). Otherwise, *p*(**Y***b*1 = **\L**, ∣**X***b*, **w**) = 0. For *i* ≥ 1, in the graphical model in Figure [fig:Dynamic4](e), by marginalizing *p*(**Y***b**i* + 1, **Y***b**i*, *y**b*(*i* + 1)∣**X***b*, **w**) over **Y***b**i* and *y**b*(*i* + 1), we obtain *p*(**Y***b**i* + 1 = **\L**∣**X***b*, **w**) = ∑**\L**ʹ∑*c**p*(**Y***b**i* + 1 = **\L**, **Y***b**i* = **\L**ʹ, *y**b*(*i* + 1) = *c*∣**X***b*, **w**). Furthermore, since in the model **Y***b**i* and *y**b*(*i* + 1) are independent, and from the conditional probability rule, the RHS of becomes $$\begin{aligned} p(\textbf{Y}\_b^{i+1}=\textbf{\L}|\textbf{X}\_b,\textbf{w})=\sum\_{\textbf{\L}^{'}}\sum\_{c}&p(\textbf{Y}\_b^{i+1}=\textbf{\L}|\textbf{Y}\_b^{i}=\textbf{\L}^{'},y\_{b(i+1)}=c,\textbf{X}\_b,\textbf{w})\times\nonumber\\ &p(\textbf{Y}\_b^{i}=\textbf{\L}^{'}|\textbf{X}\_b,\textbf{w})p(y\_{b(i+1)}=c|\textbf{X}\_b,\textbf{w}).\label{e:a2}\end{aligned}$$ Using the assumption that **Y***b**i* + 1 = **Y***b**i*⋃*y**b*(*i* + 1), we can rewrite the RHS of as follows $$\begin{aligned} p(\textbf{Y}\_b^{i+1}=\textbf{\L}|\textbf{X}\_b,\textbf{w})=&\sum\_{\textbf{\L}^{'}}\sum\_{c}I(\textbf{\L}=\textbf{\L}^{'}\bigcup c)p(\textbf{Y}\_b^{i}=\textbf{\L}^{'}|\textbf{X}\_b,\textbf{w})p(y\_{b(i+1)}=c|\textbf{X}\_b,\textbf{w}) \nonumber\\ =&\sum\_{c \in \textbf{\L}}p(y\_{b(i+1)}=c|\textbf{X}\_b,\textbf{w})\times \nonumber\\ &\hspace{6mm}[p(\textbf{Y}\_b^{i}=\textbf{\L}\_{\backslash c}|\textbf{X}\_b,\textbf{w})+p(\textbf{Y}\_b^{i}=\textbf{\L}|\textbf{X}\_b,\textbf{w})].\label{e:appa}\end{aligned}$$ From our model, *y**b*(*i* + 1) is only depend on **x***b*(*i* + 1) and **w**, therefore we obtain $$\begin{aligned} p(\textbf{Y}\_b^{i+1}=\textbf{\L}|\textbf{X}\_b,\textbf{w})=\sum\_{c \in \textbf{\L}}&p(y\_{b(i+1)}=c|\textbf{x}\_{b(i+1)},\textbf{w})\times\\ &[p(\textbf{Y}\_b^{i}=\textbf{\L}\_{\backslash c}|\textbf{X}\_b,\textbf{w})+p(\textbf{Y}\_b^{i}=\textbf{\L}|\textbf{X}\_b,\textbf{w})].\end{aligned}$$ Similar to the proof for Proposition [p:1], by marginalizing *p*(**Y***b**n**b*, **Y***b**n**b* − 1, *y**b**n**b*∣**X***b*, **w**) over **Y***b**n**b* − 1, we have *p*(*y**b**n**b* = *c*, **Y***b**n**b* = **\L**∣**X***b*, **w**) = ∑**\L**ʹ*p*(**Y***b**n**b* = **\L**, **Y***b**n**b* − 1 = **\L**ʹ, *y**b**n**b* = *c*∣**X***b*, **w**). Since **Y***b**n**b* − 1 and *y**b**n**b* are independent, and from the conditional probability rule, the RHS of becomes $$\begin{aligned} p(y\_{bn\_b}=c, \textbf{Y}\_b^{n\_b}=\textbf{\L}|\textbf{X}\_b,\textbf{w})=\sum\_{\textbf{\L}^{'}}&p(\textbf{Y}\_b^{n\_b}=\textbf{\L}|\textbf{Y}\_b^{n\_b-1}=\textbf{\L}^{'},y\_{bn\_b}=c,\textbf{X}\_b,\textbf{w})\times\nonumber\\ &p(\textbf{Y}\_b^{n\_b-1}=\textbf{\L}^{'}|\textbf{X}\_b,\textbf{w})p(y\_{bn\_b}=c|\textbf{X}\_b,\textbf{w}).\label{e:b2}\end{aligned}$$ Using the assumption that **Y***b**n**b* = **Y***b**n**b* − 1⋃*y**b**n**b*, and since *y**b**n**b* depends on only **x***b**n**b* and **w**, we can rewrite the RHS of as follows $$\begin{aligned} p(y\_{bn\_b}=c, \textbf{Y}\_b^{n\_b}=\textbf{\L}|\textbf{X}\_b,\textbf{w})=&\sum\_{\textbf{\L}^{'}}I(\textbf{\L}=\textbf{\L}^{'}\bigcup c)p(\textbf{Y}\_b^{n\_b-1}=\textbf{\L}^{'}|\textbf{X}\_b,\textbf{w})p(y\_{bn\_b}=c|\textbf{X}\_b,\textbf{w})\\ =&p(y\_{bn\_b}=c|\textbf{x}\_{bn\_b},\textbf{w})[p(\textbf{Y}\_b^{n\_b-1}=\textbf{\L}\_{\backslash c}|\textbf{X}\_b,\textbf{w})+p(\textbf{Y}\_b^{n\_b-1}=\textbf{\L}|\textbf{X}\_b,\textbf{w})].\end{aligned}$$ From in the proof of Proposition [p:1] we obtain *p*(**Y***b* = **\L**∣**X***b*, **w**) = ∑**\L**ʹ∑*c**I*(**\L** = **\L**ʹ⋃*c*)*p*(**Y***b*∖ *i* = **\L**ʹ∣**X***b*, **w**)*p*(*y**b**i* = *c*∣**x***b**i*, **w**). Then, from and by definition of **A**, **u**, and **v**, we obtain the relation **u** = **A****v**. Dynamic Programming for Instance Annotation in Multi-instance Multi-label Learning ================================================================================== Labeling data for classification requires significant human effort. To reduce labeling cost, instead of labeling every instance, a group of instances (bag) is labeled by a single bag label. Computer algorithms are then used to infer the label for each instance in a bag, a process referred to as instance annotation. This task is challenging due to the ambiguity regarding the instance labels. We propose a discriminative probabilistic model for the instance annotation problem and introduce an expectation maximization framework for inference, based on the maximum likelihood approach. For many probabilistic approaches, brute-force computation of the instance label posterior probability given its bag label is exponential in the number of instances in the bag. Our key contribution is a dynamic programming method for computing the posterior that is linear in the number of instances. We evaluate our methods using both benchmark and real world data sets, in the domain of bird song, image annotation, and activity recognition. In many cases, the proposed framework outperforms, sometimes significantly, the current state-of-the-art MIML learning methods, both in instance label prediction and bag label prediction. Multi-instance multi-label learning, instance annotation, expectation maximization, graphical model, dynamic programming Introduction ============ Multiple instance multiple label (MIML) learning is a framework where learning is carried out under label uncertainty. Conventional single instance single label (SISL) learning assumes that each instance in the training data is labeled. In the MIML setting, instances are grouped into bags and labels are provided at the bag level. For example, an image can be viewed as a bag of segments tagged with names of objects present in the image (e.g., ‘house’, ‘grass’, and ‘sky’) without associating individual segments with a label. In bird species recognition from audio recording, bags are long intervals containing multiple syllables (instances). Intervals are labeled with list of species without providing an explicit label for each syllable. Various problems are considered in the MIML setting including: (i) learning a bag level label classifier and (ii) learning an instance level classifier. We refer readers to for a detailed review of MIML methods. A bag level label classifier can be constructed without explicitly reasoning about the instance labels. This is the approach taken by MIMLBoost, MIMLSVM, Citation-kNN, and Bayesian-kNN. In MIMLSVM, the training bags are first clustered using the Hausdorff distances among them. Then, each bag is encoded with a vector of similarities to each of the cluster centers. Finally, an SVM classifier is trained and used to predict the bag label of a new bag. A similar method using Hausdorff distance is applied in Citation-kNN and Bayesian-kNN. The training phase of the aforementioned methods does not provide an instance level classifier. The focus of our paper is instance level label prediction, i.e., instance annotation. Even though most MIML methods focus on bag level prediction, a few of the existing methods resort to instance level classifiers as a means to obtain bag level predictions. For example, M3MIML aims at maximizing the margin among classes where the score for each class is computed from score of bags and the score of each bag is computed from a single score-maximizing instance in the bag. As a result, M3MIML may not use information from many instances in each bag. A smaller number of methods directly aim at solving the instance annotation problem. For example, rank-loss support instance machines (SIM) considers both max score and softmax score taking into account all of the instance scores in each bag. Probabilistic graphical models have been proposed for the instance annotation problem in different applications. Due to the high computational complexity in the inference step, they employ approximation techniques, such as sampling and variational inference. propose a generative model with an exact inference based on the expectation maximization framework. However, compared to discriminative methods, generative methods often achieve lower accuracy. We develop a discriminative probabilistic model with an efficient inference method that takes into account all instances in each bag. The contributions of this paper are as follows. First, we propose the discriminative ORed-logistic regression model for the instance annotation problem. Second, we propose an expectation maximization framework to facilitate maximum likelihood inference. Third, we introduce a computationally efficient and exact algorithm for posterior probability calculation in the E-step. Finally, we demonstrate the superiority of this approach over various domains such as bird song, image annotation, and activity recognition, for both bag level prediction and instance level prediction. Related work ============ Multi-instance multi-label learning problems have been implicitly addressed by *probabilistic graphical models*. In text data processing, Latent Dirichlet Allocation (LDA) is a well-known generative topic model for processing a corpus of text documents. The graphical model for LDA is illustrated in Figure [fig:allmodels](a). For each document (bag), a topic proportion *θ* is generated. Then, from the topic proportion, a topic *y* is randomly selected and a word (instance) **x** is selected at random based on the topic. However, different from the MIML setting, LDA is an unsupervised model in which words are observed but their topics are hidden. Supervised/labeled LDA models incorporate a bag label. In supervised LDA, as shown in Figure [fig:allmodels](b), the observed document label **Y** is generated based on the hidden topics *y* in that document. From the observation (observed labels and words) parameters are estimated using approximate maximum likelihood through variational expectation maximization. In labeled LDA, as illustrated in Figure [fig:allmodels](c), the topic proportion *θ* of each document is generated based on the observed document label **Y**. For inference, labeled LDA uses collapsed Gibbs sampling to estimate parameters. [fig:allmodels] Several *maximum margin based methods* have been considered for MIML learning. ASVM-MIL poses the support vector machine (SVM) for region-based image annotation problem. The challenge lies in how to construct a bag level label error from instance level label errors. With an assumption that there are a large number of true positive instances in positive bags and hence a false negative at the instance level does not necessarily lead to a bag error. However, a false positive at the instance level absolutely results in a positive prediction for a negative bag. Consequently, ASVM-MIL approximates the bag level label error by the false positive. In practice, the assumption may be violated since there may be only one instance from the positive class in a positive bag. mi-SVM uses a heuristic approach that alternates between updating instance level labels constrained on bag labels and maximizing the margin among hyperplanes obtained from the instance labels. MIMLfast maximizes the margin among classes by maximizing difference of class scores which are evaluated from bag scores. To construct a score for each bag, MIMLfast uses only the instance which maximizes the scores. This principle may ignore information from most instances in each bag. Another maximum margin based method that uses the max principle is M3MIML. SIM uses both max principle and softmax principle where the score of each bag is computed from all instance scores by the softmax function. show that using softmax score for SIM yields some performance improvement compared to using max score. *Super set label learning* (SSLL) or partial label learning has been considered for instance annotation. Different from the instance annotation setting, SSLL does not consider the bag concept. Instead, from a MIML dataset, it transforms the data so that each instance label is replaced with the bag label. A limitation of this setting is that it cannot capture the assumption that bag labels are union of instance labels. LSB-CMM uses logistic stick breaking encoding scheme to link each instance vector to its labels. Due to the complexity of the model, variational expectation maximization and MCMC sampling are employed. The discriminative LSB-CMM model is presented in Figure [fig:allmodels](d). For each instance **x**, a mixture component *z* is identified. From the mixture component, the instance hidden label *y* is generated. Finally, the noisy superset label **Y** is generated from that instance label. In the LSB-CMM model, each instance is considered independently therefore the model does not take advantage of the union relationship among instance labels in each bag. In addition, several SVM-based solutions have been proposed for the partial label learning setting such as SVM based partial label learning and PL-SVM. In this paper, we would like to develop an instance annotation approach that addresses some of the aforementioned challenges associated with MIML methods. We would like to construct a probabilistic framework that uses instance class membership probability instead of using a single score-maximizing instance. Furthermore, even though generative models are well suited to deal with missing or small amounts of data, when the data size is sufficiently large, generative models are outperformed by discriminative models. We are motivated by this argument to develop a discriminative model for instance annotation. Moreover, we would like to design a model sufficiently simple that allows for exact inference. Finally, we would like to maintain the relation among instance labels in each bag. [H] | | | | --- | --- | | *B* | the number of bags in the dataset | | *C* | the number of classes in the dataset | | *N* | the total number of instances in the dataset | | **x***b**i* | the *i*th instance of the *b*th bag | | *y**b**i* | the label for the *i*th instance of the *b*th bag | | *n**b* | the number of instances for the *b*th bag | | **X***b* | the set of instances in the *b*th bag | | **Y***b* | the label for the *b*th bag | | **y***b* | [*y**b*1, *y**b*2, …, *y**b**n**b*] | | *d* | the dimension of every instance **x***b**i* | | **X***D* | {**X**1, **X**2, …, **X***B*} | | **Y***D* | {**Y**1, **Y**2, …, **Y***B*} | | $\underline{\textbf{y}}$ | {**y**1, **y**2, …, **y***B*} | | **w***c* | instance level weight for class *c* | | **w** | [**w**1, **w**2, …, **w***C*] | | **Y***b**k* | ⋃*i* = 1*k**y**b**i* | | **Y***b*∖ *k* | $\bigcup\_{\substack{i=1;i\neq k}}^{n\_b}y\_{bi}$ | | **\L**∖ *c* | a set includes all labels in **\L** excluding *c* | [table:notation] Problem formulation and the proposed model ========================================== This paper considers the instance annotation problem in the MIML framework. We consider a collection of bags and their labels {(**X***b*, **Y***b*)}*b* = 1*B*. Specifically, each **X***b* denotes the set of instance feature vectors of the *b*th bag, **x***b*1, **x***b*2, …,  and **x***b**n**b*, where **x***b**i* ∈ X ⊆ R*d* is the feature vector for the *i*th instance in the *b*th bag and *n**b* denotes the number of instances in the *b*th bag. Moreover, the bag level label **Y***b* is a subset of the set Y = {1, 2, …, *C*}, where *C* is the number of classes. To simplify the notation, we use (**X***D*, **Y***D*) where **X***D* = {**X**1, …, **X***B*} and **Y***D* = {**Y**1, …, **Y***B*} as an abbreviated notation for {(**X***b*, **Y***b*)}*b* = 1*B*. The goal of instance annotation is to train a classifier that maps an instance in X to a single label in Y under the MIML framework i.e., given (**X***D*, **Y***D*). Main notations used in this paper are in Table [table:notation]. The proposed model: ORed-logistic regression -------------------------------------------- The graphical representation of the proposed model is illustrated in Figure [fig:MIML]. Following the notations of Section [s:pf], we assume that bags **X**1, **X**2, …, **X***B* are independent and that instances in each bag **x***b*1, **x***b*2, …, **x***b**n**b* are independent: *p*(**X**1, **X**2, …, **X***B*) = ∏*b* = 1*B**p*(**X***b*) = ∏*b* = 1*B*∏*i* = 1*n**b**p*(**x***b**i*). Next, we model the probability relationship between *y**b**i* the label of the *i*th instance in the *b*th bag and its feature vector **x***b**i* by a multinomial logistic regression function as follows $$p(y\_{bi}|\textbf{x}\_{bi},\textbf{w})=\frac{\prod\_{c=1}^{C}e^{I(y\_{bi}=c)\textbf{w}\_c^T\textbf{x}\_{bi}}}{\sum\_{c=1}^C{e^{\textbf{w}\_c^T\textbf{x}\_{bi}}}}, \label{e:prior}$$ [fig:MIML] where **w***c* ∈ R*d* × 1 is the weight for the *c*th class score function and $\textbf{w}=[\textbf{w}\_1,\textbf{w}\_2,\allowbreak\dots,\textbf{w}\_C]$. We use *I*( ⋅ ) to denote the indicator function taking the value of 1 when its argument is true and 0 otherwise. Note that the probability of **X**1, **X**2, …, **X***B* is not a function of **w**. This property is a key to the discriminative nature of our model. Next, we assume that the label of each bag is the union of its instance labels. Consequently, the probability of the label of each bag **Y***b* given its instance labels **y***b* = [*y**b*1, *y**b*2, …, *y**b**n**b*] is expressed as follows *p*(**Y***b*∣**y***b*) = *I*(**Y***b* = ⋃*j* = 1*n**b**y**b**j*). Based on the aforementioned description, the proposed model has the following properties. First, our model is a discriminative probabilistic model since it learns a mapping from instance feature vectors to class labels as in. Second, the instance labels in each bag are constrained by their bag label using. As a result, the model preserves the MIML structure from the dataset. Maximum Likelihood ------------------ We consider the maximum likelihood principle for inference in our model. From the formula for conditional probability, *p*(**Y***D*, **X***D*∣**w**) = *p*(**X***D*∣**w**)*p*(**Y***D*∣**X***D*, **w**). Moreover, based on our assumption that **X***D* is independent of **w**, *p*(**X***D*∣**w**) = *p*(**X***D*). Consequently, the probability of the observation given the unknown parameters is given by *p*(**Y***D*, **X***D*∣**w**) = *p*(**X***D*)∏*b* = 1*B**p*(**Y***b*∣**X***b*, **w**),  where *p*(**Y***b*∣**X***b*, **w**) is obtained by marginalizing *p*(**Y***b*, **y***b*∣**X***b*, **w**) over **y***b* as follows *p*(**Y***b*∣**X***b*, **w**) = ∑*y**b*1 = 1*C*…∑*y**b**n**b* = 1*C**p*(**Y***b*, **y***b*∣**X***b*, **w**). From the conditional probability and from the graphical model that given **y***b* the bag label **Y***b* is independent of **X***b* and **w**, we can rewrite as $$\begin{aligned} p(\textbf{Y}\_b|\textbf{X}\_{b},\textbf{w})=&\sum\_{y\_{b1}=1}^C\dots\sum\_{y\_{bn\_{b}}=1}^C p(\textbf{Y}\_b|\textbf{y}\_{b})p(\textbf{y}\_{b}|\textbf{X}\_{b},\textbf{w})\nonumber\\ =&\sum\_{y\_{b1}=1}^C\dots\sum\_{y\_{bn\_{b}}=1}^C [I(\textbf{Y}\_b=\bigcup\_{j=1}^{n\_b}y\_{bj})\prod\_{i=1}^{n\_b}p(y\_{bi}|\textbf{x}\_{bi},\textbf{w})],\label{e:observed}\end{aligned}$$ where the last step is obtained by substituting *p*(**Y***b*∣**y***b*) from and the assumption that instances in each bag are independent. Substituting back into and taking the logarithm, the log-likelihood function can be written as *l**M**I**M**L*(**w**) = ∑*b* = 1*B*log(∑*y**b*1 = 1*C*…∑*y**b**n**b* = 1*C*[*I*(**Y***b* = ⋃*j* = 1*n**b**y**b**j*)∏*i* = 1*n**b**p*(*y**b**i*∣**x***b**i*, **w**)]) + log*p*(**X***D*). Note that log*p*(**X***D*) is a constant w.r.t. **w** and hence does not affect the inference of **w**. In comparison, inference for the single instance single label setting where instance level labels are known can be done by maximizing the following log-likelihood function *l**S**I**S**L*(**w**) = ∑*b* = 1*B*∑*i* = 1*n**b*[∑*c* = 1*C**I*(*y**b**i* = *c*)**w***c**T***x***b**i* − log(∑*c* = 1*C**e***w***c**T***x***b**i*)] + log*p*(**X***D*). More detailed description of training a logistic regression in the SISL setting can be found in. To obtain the MIML maximum likelihood estimation of **w**, the log-likelihood in is maximized w.r.t. **w**, and the maximizing $\hat{\textbf{w}}$ is selected, i.e., $\hat{\textbf{w}}={\arg\!\max}\_{\textbf{w}}{l\_{MIML}(\textbf{w})}.$ Maximizing the log-likelihood in is difficult since to the best of our knowledge, no closed-form efficient solution exists. Therefore, we propose an expectation maximization (EM) approach to maximize. EM for maximum likelihood inference =================================== Expectation maximization is a framework for indirectly maximizing the log-likelihood when there are hidden variables. In the following, we introduce the EM approach (Section [s:emr]) and present its application to inference for the proposed ORed-logistic regression (Ored-LR) model (Section [ss:em]). Expectation maximization review ------------------------------- Denote the log-likelihood of the observation X given the parameter *θ* by *l*(*θ*) = log*p*(X∣*θ*). In EM, the hidden variable Y is introduced to develop a surrogate function to the log-likelihood given by *g*(*θ*, *θ*ʹ) = *E*Y[log*p*(X, Y∣*θ*)∣X, *θ*ʹ]. The EM framework alternates between the computation and the maximization of the surrogate function w.r.t. *θ*. Specifically, the two steps are: * E-step: Compute *g*(*θ*, *θ*ʹ) = *E*Y[log*p*(X, Y∣*θ*)∣X, *θ*ʹ] * M-step: *θ*(*k* + 1)=arg max*θ**g*(*θ*, *θ*(*k*)). This paper uses the Generalized EM approach where in the M-step, instead of finding *θ*(*k* + 1) such that *θ*(*k* + 1)=arg max*θ**g*(*θ*, *θ*(*k*)), we obtain *θ*(*k* + 1) satisfying *g*(*θ*(*k* + 1), *θ*(*k*)) ≥ *g*(*θ*(*k*), *θ*(*k*)). As with EM, GEM guarantees non-decreasing log-likelihood *l*(*θ*(*k*)) as a function of *k*. Expectation maximization for the proposed ORed-LR model ------------------------------------------------------- In our setting, the observed data X = {**Y***D*, **X***D*}, the parameter *θ* = **w**, and the hidden data $\mathcal{Y}=\underline{\textbf{y}}=\{\textbf{y}\_{1},\textbf{y}\_{2},\dots,\textbf{y}\_{B}\}$. To compute the surrogate *g*(**w**, **w**ʹ), we begin with the derivation of the complete log-likelihood. We apply the conditional rule as follows $$\begin{split} p(\textbf{Y}\_D,\textbf{X}\_D,\underline{\textbf{y}}|\textbf{w})&=p(\textbf{Y}\_D|\underline{\textbf{y}},\textbf{X}\_D,\textbf{w})p(\underline{\textbf{y}}|\textbf{X}\_D,\textbf{w})p(\textbf{X}\_D|\textbf{w})\\ &=p(\textbf{Y}\_D|\underline{\textbf{y}})[\prod\_{b=1}^B\prod\_{i=1}^{n\_b}p(y\_{bi}|\textbf{x}\_{bi},\textbf{w})]p(\textbf{X}\_D). \label{e:joint} \end{split}$$ Then, the complete log-likelihood can be computed by taking the logarithm of, replacing *p*(*y**b**i*∣**x***b**i*, **w**) from into, and reorganizing as follows $$\begin{aligned} \log p(\textbf{Y}\_D,\textbf{X}\_D,\underline{\textbf{y}}|\textbf{w})=&\sum\_{b=1}^B\sum\_{i=1}^{n\_b}\sum\_{c=1}^{C}I(y\_{bi}=c)\textbf{w}\_c^T\textbf{x}\_{bi}\label{e:llh}\\ &-\sum\_{b=1}^B\sum\_{i=1}^{n\_b}\log(\sum\_{c=1}^C{e^{\textbf{w}\_c^T\textbf{x}\_{bi}}})+\log p(\textbf{Y}\_D|\underline{\textbf{y}})+\log p(\textbf{X}\_D)\nonumber.\end{aligned}$$ Finally, taking the expectation of w.r.t. $\underline{\textbf{y}}$ given **Y***D*, **X***D*, and **w**ʹ, we obtain the surrogate function *g*( ⋅ ,  ⋅ ) as follows $$\begin{aligned} &g(\textbf{w},\textbf{w}')=E\_{\underline{\textbf{y}}}[\log p(\textbf{Y}\_D,\textbf{X}\_D,\underline{\textbf{y}}|\textbf{w})|\textbf{Y}\_D,\textbf{X}\_D,\textbf{w}']\label{e:sllh}\\ &=\sum\_{b=1}^B\sum\_{i=1}^{n\_b}[\sum\_{c=1}^{C}p(y\_{bi}=c|\textbf{Y}\_b,\textbf{X}\_b,\textbf{w}')\textbf{w}\_c^T\textbf{x}\_{bi}-\log(\sum\_{c=1}^C{e^{\textbf{w}\_c^T\textbf{x}\_{bi}}})]+ \zeta, \nonumber\end{aligned}$$ where $\zeta=E\_{\underline{\textbf{y}}}[\log p(\textbf{Y}\_D|\underline{\textbf{y}})+\log p(\textbf{X}\_D)|\textbf{Y}\_D,\textbf{X}\_D,\textbf{w}']$ is a constant w.r.t. **w**. From, we have the following expectation maximization iterations: Note that the form of the EM surrogate function in is similar to the SISL log-likelihood in. While the EM approach for MIML requires updating *p*(*y**b**i* = *c*∣**Y***b*, **X***b*, **w**ʹ), in the SISL case the equivalent term *I*(*y**b**i* = *c*) requires no update. ### The expectation step and our challenge Computing *p*(*y**b**i* = *c*∣**Y***b*, **X***b*, **w**) can be done using *p*(*y**b**i* = *c*, **Y***b*∣**X***b*, **w**) and the following conditional rule $$p(y\_{bi}=c|\textbf{Y}\_b,\textbf{X}\_b,\textbf{w})=\frac{p(y\_{bi}=c,\textbf{Y}\_b|\textbf{X}\_b,\textbf{w})}{\sum\_{c=1}^C{p(y\_{bi}=c,\textbf{Y}\_b|\textbf{X}\_b,\textbf{w})}}.$$ The probability *p*(*y**b**i* = *c*, **Y***b*∣**X***b*, **w**) can be computed in a brute-force manner by keeping *y**b**i* = *c* and marginalizing over all other instances *y**b**j*, where *j* ∈ {1, …, *n**b*} and *j* ≠ *i*, as follows $$\begin{aligned} p(y\_{bi}=c,\textbf{Y}\_b|\textbf{X}\_b,\textbf{w})=&\sum\_{y\_{b1}=1}^C\dots\sum\_{y\_{b(i-1)}=1}^C\sum\_{y\_{b(i+1)}=1}^C\dots\sum\_{y\_{bn\_b}=1}^Cp(y\_{b1},\dots,y\_{bi}=c,\dots,y\_{bn\_{b}},\textbf{Y}\_b|\textbf{X}\_b,\textbf{w})\nonumber\\ =&\sum\_{y\_{b1}=1}^C\dots\sum\_{y\_{b(i-1)}=1}^C\sum\_{y\_{b(i+1)}=1}^C\dots\sum\_{y\_{bn\_b}=1}^C\Big[p(y\_{b1},\dots,y\_{bi}=c,\dots,y\_{bn\_{b}}|\textbf{X}\_b,\textbf{w})\nonumber\\ &\hspace{56mm}\times p(\textbf{Y}\_b|y\_{b1},\dots,y\_{bi}=c,\dots,y\_{bn\_{b}})\Big]\nonumber\\ =&\sum\_{y\_{b1}=1}^C\dots\sum\_{y\_{b(i-1)}=1}^C\sum\_{y\_{b(i+1)}=1}^C\dots\sum\_{y\_{bn\_b}=1}^C\Big[p(y\_{b1},\dots,y\_{bi}=c,\dots,y\_{bn\_{b}}|\textbf{X}\_b,\textbf{w})\nonumber\\ &\hspace{56mm}\times I(\textbf{Y}\_b=\{c\}\cup\bigcup\_{\substack{j=1;j\neq i}}^{n\_b}y\_{bj})\Big],\hspace{-5mm}\label{e:2}\end{aligned}$$ where the last step is obtained by substituting *p*(**Y***b*∣*y**b*1, …, *y**b**i* = *c*, …, *y**b**n**b*) from. From the last step in and using the definition of the indicator function *I*( ⋅ ), if $c\notin \textbf{Y}\_b$ then *p*(*y**b**i* = *c*, **Y***b*∣**X***b*, **w**) = 0. Additionally, if there exists *j* ∈ {1, …, *n**b*} and *j* ≠ *i* such that $y\_{bj}\notin\textbf{Y}\_b$ then $I(\textbf{Y}\_b=\{c\}\cup\bigcup\_{\substack{j=1;j\neq i}}^{n\_b}y\_{bj})=0$. As a result, we restrict the summation in the last step in from *y**b**j* ∈ {1, …, *C*} to *y**b**j* ∈ **Y***b*, for *j* ∈ {1, …, *n**b*} and *j* ≠ *i*. Moreover, from and using the instance labels independence assumption, we obtain $$\begin{aligned} p(y\_{bi}=c,\textbf{Y}\_b|\textbf{X}\_b,\textbf{w})=&\sum\_{y\_{b1}\in\textbf{Y}\_b}\dots\sum\_{y\_{b(i-1)}\in\textbf{Y}\_b}\sum\_{y\_{b(i+1)}\in\textbf{Y}\_b}\dots\sum\_{y\_{bn\_b}\in\textbf{Y}\_b}\Big[p(y\_{bi}=c|\textbf{x}\_{bi},\textbf{w})\times\nonumber\\ &\prod\_{\substack{j=1;j\neq i}}^{n\_b}p(y\_{bj}|\textbf{x}\_{bj},\textbf{w})I(\textbf{Y}\_b=\{c\}\cup\bigcup\_{\substack{j=1;j\neq i}}^{n\_b}y\_{bj})\Big].\label{e:1}\end{aligned}$$ The cost of computing prior probabilities *p*(*y**b**j* = *l*∣**x***b**j*, **w**) for *j* = 1, …, *n**b* and *l* ∈ **Y***b* using is *O*(*n**b**C**d*). The cost of marginalizing using is *O*(∣**Y***b*∣*n**b* − 1). Consequently, the computational complexity for the E-step is exponential w.r.t. the number of instances in each bag. To overcome this challenge, we propose a computational method based on dynamic programming to compute *p*(*y**b**i* = *c*, **Y***b*∣**X***b*, **w**) from the prior probabilities *p*(*y**b**j* = *l*∣**x***b**j*, **w**). For many probability models, the computational complexity cost of exact calculation of the posterior probability necessitates approximate inference. In the following section, we introduce a dynamic programming approach for efficient and exact calculation of. ### Forward algorithm for the E-step In this section, we solve the aforementioned E-step challenge using a dynamic programming approach. To compute *p*(*y**b**i* = *c*, **Y***b*∣**X***b*, **w**) for all *c* ∈ **Y***b* in, we introduce *n**b* sets **Y***b*1, **Y***b*2, …, **Y***b**n**b* such that **Y***b**i* = ⋃*j* = 1*i**y**b**j*. Note that **Y***b**i* represents the sub-bag label associated with the first *i* instances in bag *b*. Consequently, **Y***b**n**b* = **Y***b* can be computed recursively using **Y***b**i* = **Y***b**i* − 1⋃*y**b**i*. We illustrate the relation between these sets using Figure [fig:Dynamic4](b). By introducing sets **Y***b*1, **Y***b*2, …, **Y***b**n**b* we can convert the graphical model in Figure [fig:Dynamic4](a) to a chain model in Figure [fig:Dynamic4](b). The chain structure allows us to perform a step by step grouping of independent factors in resulting in a polynomial time complexity. Specifically, from the chain structure, we can dynamically compute *p*(**Y***b*1∣**X***b*, **w**), *p*(**Y***b*2∣**X***b*, **w**), …, *p*(**Y***b**n**b* − 1∣**X***b*, **w**) as illustrated in Figures [fig:Dynamic4](c), (d), and (e), respectively. Finally, the desired probability *p*(*y**b**n**b* = *c*, **Y***b*∣**X***b*, **w**) can be computed from *p*(**Y***b**n**b* − 1∣**X***b*, **w**) and *p*(*y**n**b* = *c*∣**x***b**n*, **w**) as in Figure [fig:Dynamic4](f). Our forward algorithm is summarized by the following steps. [fig:Dynamic4] To compute *p*(**Y***b**n**b* − 1∣**X***b*, **w**), we use an incremental calculation of *p*(**Y***b**i* + 1∣**X***b*, **w**) from *p*(**Y***b**i*∣**X***b*, **w**) using the following proposition. This process is also illustrated in Figures [fig:Dynamic4](c) to (e). *The probability* *p*(**Y***b**i* + 1 = **\L**∣**X***b*, **w**), *for any* **\L** *in the power set of* **Y***b* *excluding the empty set, can be computed using the following recursion* [p:1] * *For *i* = 0:* $$\label{e:initdp} \text{{\em If} }\textbf{\L}=\{l\}\text{, }p(\textbf{Y}\_b^{1}=\textbf{\L}|\textbf{X}\_b,\textbf{w})=p(y\_{b1}=l|\textbf{x}\_{b1},\textbf{w})\text{{\em, otherwise} } p(\textbf{Y}\_b^{1}=\textbf{\L}|\textbf{X}\_b,\textbf{w})=0.$$ * *For *i* ≥ 1:* $$\begin{aligned} p(\textbf{Y}\_b^{i+1}=\textbf{\L}|\textbf{X}\_b,\textbf{w})=\sum\_{c\in\textbf{\L}}&p(y\_{b(i+1)}=c|\textbf{x}\_{b(i+1)},\textbf{w})\times\label{e:6}\\ &[p(\textbf{Y}\_b^{i}=\textbf{\L}|\textbf{X}\_b,\textbf{w})+p(\textbf{Y}\_b^{i}=\textbf{\L}\_{\backslash c}|\textbf{X}\_b,\textbf{w})],\nonumber\end{aligned}$$ *where we use* **\L**∖ *c* *to denote the set* {*l*∣*l* ∈ **\L**, *l* ≠ *c*},  *for any* **\L** *and* *c* ∈ **\L**. For a detailed proof, we refer the reader to Appendix A. To understand the recursion in Proposition 1, we present an example of a bag with bag label **Y***b* = {1, 3, 4} in Figure [fig:Dynamic]. The process of computing *p*(**Y***b*3 = {1, 3}∣**X***b*, **w**) is as follows:  •  First, *p*(**Y***b*1 = {1}∣**X***b*, **w**) and *p*(**Y***b*1 = {3}∣**X***b*, **w**) are computed by as $$\begin{aligned} p(\textbf{Y}\_b^{1}&=\{1\}|\textbf{X}\_b,\textbf{w})=p(y\_{b1}=1|\textbf{x}\_{b1},\textbf{w}),\nonumber\\ p(\textbf{Y}\_b^{1}&=\{3\}|\textbf{X}\_b,\textbf{w})=p(y\_{b1}=3|\textbf{x}\_{b1},\textbf{w}).\label{e:example3}\end{aligned}$$ Note that since **Y***b*1 only contains the label of the first instance in the bag, it cannot have more than one label and therefore *p*(**Y***b*1 = {1, 3}∣**X***b*, **w**) = 0.  •  Second, using, *p*(**Y***b*2 = {1}∣**X***b*, **w**), *p*(**Y***b*2 = {3}∣**X***b*, **w**), and *p*(**Y***b*2 = {1, 3}∣**X***b*, **w**) are computed from *p*(**Y***b*1 = {1}∣**X***b*, **w**) and *p*(**Y***b*1 = {3}∣**X***b*, **w**) as $$\begin{aligned} {2} &p(\textbf{Y}\_b^{2}=\{1\}|\textbf{X}\_b,\textbf{w})&=&p(\textbf{Y}\_b^{1}=\{1\}|\textbf{X}\_b,\textbf{w})\cdot p(y\_{b2}=1|\textbf{x}\_{b2},\textbf{w}),\nonumber\\ &p(\textbf{Y}\_b^{2}=\{3\}|\textbf{X}\_b,\textbf{w})&=&p(\textbf{Y}\_b^{1}=\{3\}|\textbf{X}\_b,\textbf{w})\cdot p(y\_{b2}=3|\textbf{x}\_{b2},\textbf{w}),\nonumber\\ &p(\textbf{Y}\_b^{2}=\{1,3\}|\textbf{X}\_b,\textbf{w})&=&p(\textbf{Y}\_b^{1}=\{1\}|\textbf{X}\_b,\textbf{w})\cdot p(y\_{b2}=3|\textbf{x}\_{b2},\textbf{w})\nonumber\\ &&&+p(\textbf{Y}\_b^{1}=\{3\}|\textbf{X}\_b,\textbf{w})\cdot p(y\_{b2}=1|\textbf{x}\_{b2},\textbf{w}).\label{e:example2}\end{aligned}$$  •  Finally, *p*(**Y***b*3 = {1, 3}∣**X***b*, **w**) is obtained by replacing the probabilities obtained in into with *i* = 2 and **\L** = {1, 3} as follows $$\begin{aligned} p(\textbf{Y}\_b^{3}=\{1,3\}|\textbf{X}\_b,\textbf{w})=&p(\textbf{Y}\_b^{2}=\{1\}|\textbf{X}\_b,\textbf{w})\cdot p(y\_{b3}=3|\textbf{x}\_{b3},\textbf{w})\label{e:example1}\\ &+p(\textbf{Y}\_b^{2}=\{3\}|\textbf{X}\_b,\textbf{w})\cdot p(y\_{b3}=1|\textbf{x}\_{b3},\textbf{w})\nonumber\\ &+p(\textbf{Y}\_b^{2}=\{1,3\}|\textbf{X}\_b,\textbf{w})\cdot [p(y\_{b3}=1|\textbf{x}\_{b3},\textbf{w})+p(y\_{b3}=3|\textbf{x}\_{b3},\textbf{w})].\nonumber\end{aligned}$$ Note that in Figure [fig:Dynamic], we only compute the probabilities stored in the boxes of the *i*th column, which are *p*(**Y***b**i* = **\L**∣**X***b*, **w**) for **Ł** in the power set of {1, 3, 4} (excluding the empty set), one time. The probabilities stored in the (*i* + 1)th column depend only on the probabilities stored in the *i*th column, as illustrated in. [fig:Dynamic] The computational complexity of Step 1 can be derived as follows. Since **Y***b**i* + 1 ⊆ **Y***b* and **Y***b**i* + 1 contains at least one instance, there are at most 2∣**Y***b*∣ − 1 possible bag labels **\L** for **Y***b**i* + 1 in. Moreover, computing the probability of each bag label **Y***b**i* + 1 using involves the computation of at most 2 × ∣**Y***b*∣ terms in the sum. As a result, the computational complexity of computing *p*(**Y***b**i* + 1∣**X***b*, **w**) from *p*(**Y***b**i*∣**X***b*, **w**) is *O*(∣**Y***b*∣2∣**Y***b*∣). To obtain *p*(**Y***b**n**b* − 1∣**X***b*, **w**), the aforementioned recursion is applied for *i* = 0, …, *n**b* − 2 and hence the computational complexity is *O*(∣**Y***b*∣2∣**Y***b*∣*n**b*). Next, we compute *p*(*y**b**n**b* = *c*, **Y***b*∣**X***b*, **w**) for all *c* ∈ **Y***b* from *p*(**Y***b**n**b* − 1∣**X***b*, **w**) and *p*(*y**b**n**b* = *c*∣**x***b**n**b*, **w**) using the following proposition. *The probability* *p*(*y**b**n**b* = *c*, **Y***b* = **\L**∣**X***b*, **w**) *for all* *c* ∈ **\L** *can be computed using* [p:2] $$\begin{aligned} p(y\_{bn\_b}=c,\textbf{Y}\_b=\textbf{\L}|\textbf{X}\_b,\textbf{w})=&p(y\_{bn\_b}=c|\textbf{x}\_{bn\_b},\textbf{w})\nonumber\times\\ &[p(\textbf{Y}\_b^{n\_b-1}=\textbf{\L}|\textbf{X}\_b,\textbf{w})+p(\textbf{Y}\_b^{n\_b-1}=\textbf{\L}\_{\backslash c}|\textbf{X}\_b,\textbf{w})] \label{e:last}.\end{aligned}$$ For a detailed proof, we refer the reader to Appendix B. The computational complexity required to obtain *p*(*y**b**n**b* = *c*, **Y***b*∣**X***b*, **w**) for all *c* ∈ **Y***b* using Proposition 2 is *O*(∣**Y***b*∣). Combining the computational complexity in Proposition [p:1] and [p:2], the overall computational complexity for computing *p*(*y**b**n**b* = *c*, **Y***b*∣**X***b*, **w**) is *O*(∣**Y***b*∣2∣**Y***b*∣*n**b*). Finally, we compute *p*(*y**b**i* = *c*, **Y***b*∣**X***b*, **w**),  ∀*i* ≠ *n**b*. Note that *p*(*y**b**i* = *c*, **Y***b*∣**X***b*, **w**) is independent of the position of the *i*th instance in the *b*th bag. Define $\textbf{Y}\_b^{\backslash i}=\bigcup\_{\substack{j=1;j\neq i}}^{n\_b}y\_{bj}$ the union of the instance label of the *b*th bag excluding label *y**b**i*. To compute *p*(*y**b**i* = *c*, **Y***b*∣**X***b*, **w**), we swap the *i*th instance with the *n**b*th instance. This process is depicted in Figures [fig:Dynamic6](a) and (b). We then compute *p*(**Y***b*∖ *i*∣**X***b*, **w**) by applying Proposition [p:1] on the newly ordered instances, as illustrated in Figure [fig:Dynamic6](c). Finally, the probability *p*(*y**b**i* = *c*, **Y***b*∣**X***b*, **w**) is computed from *p*(**Y***b*∖ *i*∣**X***b*, **w**) and *p*(*y**b**i* = *c*∣**x***b**i*, **w**) using Proposition [p:2], which is shown in Figure [fig:Dynamic6](d). The forward algorithm is presented in Algorithm [a:1]. For each *i*, the computational complexity for computing *p*(*y**b**i* = *c*, **Y***b*∣**X***b*, **w**) is *O*(∣**Y***b*∣2∣**Y***b*∣*n**b*). As a result, the computational complexity for computing *p*(*y**b**i* = *c*, **Y***b*∣**X***b*, **w**), for 1 ≤ *i* ≤ *n**b*, is *O*(∣**Y***b*∣2∣**Y***b*∣*n**b*2). The limitation of the forward approach is that in order to compute *p*(*y**b**i* = *c*, **Y***b*∣**X***b*, **w**), we swap the *i*th and *n**b*th instances, and repeat the Step 1. Recomputing the Step 1 *n**b* times for each bag is inefficient. That leads us to the forward and substitution algorithm in the following section. [fig:Dynamic6] [H] **\L**, **X***b*, **Y***b*, **w**, *c* *Step 1:* Swap {**x***b**i*, *y**b**i*} and {**x***b**n**b*, *y**b**n**b*}; Initialize *p*(**Y***b*1 = **\l**∣**X***b*, **w**) = 0,  ∀**\l** in the power set of **\L** excluding the empty set; *p*(**Y***b*1 = {*l*}∣**X***b*, **w**) = *p*(*y**b*1 = *l*∣**x***b*1, **w**); *p*(**Y***b**k* + 1 = **\L***u*∣**X***b*, **w**) = ∑*l* ∈ **\L***u**p*(*y**b*(*k* + 1) = *l*∣**x***b*(*k* + 1), **w**)[*p*(**Y***b**k* = **\L***u*∣**X***b*, **w**) + *p*(**Y***b**k* = **\L**∖ *l**u*∣**X***b*, **w**)]; *Step 2:* Return *p*(*y**b**i* = *c*, **Y***b* = **\L**∣**X***b*, **w**) = *p*(*y**b**n**b* = *c*∣**x***b**n**b*, **w**)[*p*(**Y***b**n**b* − 1 = **\L**∣**X***b*, **w**) + *p*(**Y***b**n**b* − 1 = **\L**∖ *c*∣**X***b*, **w**)]; Swap {**x***b**i*, *y**b**i*} and {**x***b**n**b*, *y**b**n**b*}; [a:1] ### **F**orward **A**nd **S**ubs**T**itution (FAST) algorithm for the E-step Recall in the forward algorithm, to compute *p*(*y**b**i* = *c*, **Y***b*∣**X***b*, **w**), for *i* = 1, 2, …, *n**b* − 1, we swap the *i*th and *n**b*th instances, and sequentially recompute *p*(**Y***b*∖ *i*∣**X***b*, **w**). The computational complexity for the overall process is *O*(∣**Y***b*∣2∣**Y***b*∣*n**b*2). In this section, we propose an efficient method for computing *p*(*y**b**i* = *c*, **Y***b*∣**X***b*, **w**) for *i* = 1, 2, …, *n**b*. The computational complexity of this method is only *O*(∣**Y***b*∣2∣**Y***b*∣*n**b*). A conceptual illustration of the forward and substitution algorithm is presented in Figure [fig:Dynamic5]. The overview of the forward and substitution algorithm is presented as follows. Compute *p*(**Y***b**n**b*∣**X***b*, **w**) using Proposition [p:1]. From Proposition [p:1], the computational complexity for Step 1 is *O*(∣**Y***b*∣2∣**Y***b*∣*n**b*). For each *i* ∈ {1, 2, …, *n**b*}, efficiently compute *p*(**Y***b*∖ *i*∣**X***b*, **w**) from *p*(**Y***b**n**b*∣**X***b*, **w**) using the following proposition. [fig:Dynamic5] [p:4] *Define* {**\L**1, …, **\L**2∣**Y***b*∣ − 1} *as the power set of* **Y***b* *excluding the empty set. Define vectors* **u**, **v** ∈ R(2∣**Y***b*∣ − 1) × 1 *such that* **u**(*m*) = *p*(**Y***b* = **\L***m*∣**X***b*, **w**) *and* **v**(*m*) = *p*(**Y***b*∖ *i* = **\L***m*∣**X***b*, **w**), *for* 1 ≤ *m* ≤ 2∣**Y***b*∣ − 1. *Define a matrix* **A** ∈ R(2∣**Y***b*∣ − 1) × (2∣**Y***b*∣ − 1) *such that* **A**(r,s)= p(ybi=c|**x**bi,**w**) &$\text{{\em if} } \textbf{\L}^s \subset \textbf{\L}^r \text{{\em and} } \textbf{\L}^r \backslash \textbf{\L}^s =\{c\}$[case1] l**Ł**rp(ybi=l|**x**bi,**w**) &[case2] 0 &[case3] *where* 1 ≤ *r*, *s* ≤ 2∣**Y***b*∣ − 1. *Then,* *p*(**Y***b*∖ *i*∣**X***b*, **w**) *can be computed from* *p*(**Y***b*∣**X***b*, **w**) *by solving* **u** = **A****v**. For a proof, we refer the reader to Appendix C. We assume that **\L**1, **\L**2, …, **\L**2∣**Y***b*∣ − 1 are sorted in an ascending order of their cardinality. Moreover, if two sets are equal in cardinality, they are sorted based on the lexicographical order. For this order, the matrix **A** constructed from Proposition [p:4] is a lower triangular matrix. We prove this by contradiction. Assume that **A** is not a lower triangular matrix then there exist (*r*, *s*) such that *r* < *s* and **A**(*r*, *s*) ≠ 0. From Proposition [p:4], if **A**(*r*, *s*) ≠ 0 then **\L***s* = **\L***r* or **\L***s* ⊂ **\L***r* which means that *s* ≤ *r* leading to a contradiction with *r* < *s*. The structure of **A** for the example of Figure [fig:Dynamic] is presented in Figure [fig:Dynamic7]. Since **A** is lower triangular, **v** can be obtained from **A** and **u** using the forward substitution method with the time complexity proportional to the number of nonzero elements in **A** which is *O*(∣**Y***b*∣2∣**Y***b*∣). As a result, the computational complexity for Step 2 is *O*(∣**Y***b*∣2∣**Y***b*∣*n**b*). Computing *p*(**Y***b*∖ *i*∣**X***b*, **w**) by Proposition [p:4] in the forward and substitution algorithm is faster by an order of *O*(*n**b*) compared to swapping the *i*th and *n**b*th instances then using Proposition [p:1] as in the forward algorithm. [fig:Dynamic7] For each *i* ∈ {1, 2, …, *n**b*}, compute *p*(*y**b**i* = *c*, **Y***b*∣**X***b*, **w**) from *p*(*y**b**i* = *c*∣**x***b**i*, **w**) and *p*(**Y***b*∖ *i*∣**X***b*, **w**), for all *c* ∈ **Y***b*, using Proposition [p:2]. From Proposition [p:2], the computational complexity for Step 3 is *O*(*n**b*∣**Y***b*∣). Combining the computational complexities in Step 1, 2, and 3, the computational complexity of the forward and substitution algorithm is *O*(∣**Y***b*∣2∣**Y***b*∣*n**b*). A detailed description of the forward and substitution algorithm is presented in Algorithm [a:3]. From Section [s:esaoc], the computational complexity for obtaining the prior probabilities is *O*(*n**b**C**d*). Consequently, the computational complexity of the E-step implementation per bag using the forward and substitution algorithm is *O*(∣**Y***b*∣2∣**Y***b*∣*n**b* + *n**b**C**d*). As a result, the simple structure of our ORed-logistic regression model allows to perform an efficient and exact inference instead of using approximation techniques which may then degrade the accuracy. [!h] **\L**, **X***b*, **Y***b*, **w**, *c* *Step 1:* Initialize *p*(**Y***b*1 = **\l**∣**X***b*, **w**) = 0,  ∀**\l** in the power set of **\L** excluding the empty set; *p*(**Y***b*1 = {*l*}∣**X***b*, **w**) = *p*(*y**b*1 = *l*∣**x***b*1, **w**); *p*(**Y***b**i* + 1 = **\L***m*∣**X***b*, **w**) = ∑*l* ∈ **\L***m**p*(*y**b*(*i* + 1) = *l*∣**x***b*(*i* + 1), **w**)[*p*(**Y***b**i* = **\L***m*∣**X***b*, **w**) + *p*(**Y***b**i* = **\L**∖ *l**m*∣**X***b*, **w**)]; *Step 2:* Construct the **u** vector in Proposition [p:4], from *p*(**Y***b* = **\L***m*∣**X***b*, **w**), ∀1 ≤ *m* ≤ 2∣**\L**∣ − 1; Construct the **A** matrix in Proposition [p:4], from *p*(*y**b**i*∣**x***b**i*, **w**); Solve **u** = **A****v** in order to obtain **v** containing *p*(**Y***b*∖ *i* = **\L***m*∣**X***b*, **w**), ∀1 ≤ *m* ≤ 2∣**\L**∣ − 1; *Step 3:* Return *p*(*y**b**i* = *c*, **Y***b* = **\L**∣**X***b*, **w**) = *p*(*y**b**i* = *c*∣**x***b**i*, **w**)[*p*(**Y***b*∖ *i* = **\L**∣**X***b*, **w**) + *p*(**Y***b*∖ *i* = **\L**∖ *c*∣**X***b*, **w**)]; [a:3] ### Maximization step We use gradient ascent to increase the objective function in as follows $$\textbf{w}\_c^{(k+1)}=\textbf{w}\_c^{(k)}+ \frac{\partial g(\textbf{w},\textbf{w}^{(k)})}{\partial\textbf{w}\_c}\bigg|\_{\textbf{w}=\textbf{w}^{(k)}} \times \eta, \label{e:12}$$ where the first derivative of *g*(**w**, **w**(*k*)) w.r.t. **w***c* is computed as follows $$\begin{aligned} &\frac{\partial g(\textbf{w},\textbf{w}^{(k)})}{\partial\textbf{w}\_c}=\sum\_{b=1}^B\sum\_{i=1}^{n\_b}[p(y\_{bi}=c|\textbf{Y}\_b,\textbf{X}\_b,\textbf{w}^{(k)})\textbf{x}\_{bi}-\frac{e^{\textbf{w}\_c^T\textbf{x}\_{bi}}\textbf{x}\_{bi}}{\sum\_{c=1}^C{e^{\textbf{w}\_c^T\textbf{x}\_{bi}}}}]\nonumber\\ &=\sum\_{b=1}^B\sum\_{i=1}^{n\_b}[p(y\_{bi}=c|\textbf{Y}\_b,\textbf{X}\_b,\textbf{w}^{(k)})-p(y\_{bi}=c|\textbf{x}\_{bi},\textbf{w})]\textbf{x}\_{bi}\label{e:13},\end{aligned}$$ and *η* in is determined using the backtracking line search method. In each backtracking step, the surrogate function in is computed with the time complexity *O*(∑*b* = 1*B**n**b**C**d*). As a result, the computational complexity of the maximization step is $O(\sum\_{b=1}^{B}n\_bCd\overline{M})$, where $\overline{M}$ is the average number of backtracking steps. Kernel-based OR-ed logistic regression model -------------------------------------------- In order to deal with linearly inseparable data, we introduce the kernel extension to our model. Kernel learning is a method that transforms the data from the original feature space to a higher dimensional space where the data can be separated. Recall the multi-class logistic regression function from. Applying *ϕ* to the **x***b**i* in, where *ϕ* is the function transforming from feature space to kernel space yields $$p(y\_{bi}|\textbf{x}\_{bi},\textbf{w})=\frac{\prod\_{c=1}^{C}e^{I(y\_{bi}=c)\textbf{w}\_c^T\phi(\textbf{x}\_{bi})}}{\sum\_{c=1}^C{e^{\textbf{w}\_c^T\phi(\textbf{x}\_{bi})}}}. \label{e:kernel1}$$ Replacing $\textbf{w}\_c=\sum\_{b=1}^B\sum\_{j=1}^{n\_b}\boldsymbol\alpha\_{c\_{bj}}\phi(\textbf{x}\_{bj})$ and *ϕ*(**x**)*T**ϕ*(**x***b**j*) = *K*(**x**, **x***b**j*) into, yields $$p(y\_{bi}|\textbf{x}\_{bi},\boldsymbol\alpha)=\frac{\prod\_{c=1}^{C}e^{I(y\_{bi}=c)\boldsymbol\alpha\_c^T\textbf{k}(\textbf{x}\_{bi})}}{\sum\_{c=1}^C{e^{\boldsymbol\alpha\_c^T\textbf{k}(\textbf{x}\_{bi})}}}, \label{e:kernel2}$$ where **k**(**x**) is $[K(\textbf{x},\textbf{x}\_{11}), K(\textbf{x},\allowbreak\textbf{x}\_{12}),\dots, K(\textbf{x},\textbf{x}\_{Bn\_B})]^T$. The kernel logistic regression does not require to compute the high dimensional *ϕ*(**x**). Instead, only the dot product *K*( ⋅ ,  ⋅ ) is computed. In our paper, we consider an RBF kernel between instances **x** and **x**ʹ as follows $$K(\textbf{x},\textbf{x}')=e^{\frac{{-\lVert\textbf{x}-\textbf{x}'\rVert}\_{2}^{2}}{\delta}}, \label{e:kernele}$$ where *δ* is a tuning parameter to control the kernel width. Note that in kernel learning, the model parameter vector is $\boldsymbol\alpha$ instead of **w**. Thus, wherever encountering **w**, replace **
arxiv_0000345
by and. [f12] In Fig. [f12] we compare the derived effective temperatures with the spectral types determined with the HPI in Sect. [s32]. The expected trend – cooler temperatures correspond to later spectral types – is visible, with only three exceptions. For two of these exceptions (SONYC-NGC1333-17 and -31), the signal-to-noise ratio in the spectrum is very poor ( ∼ 10), i.e. the uncertainties in spectral type and temperature are large. For SONYC-NGC1333-17 there are two published spectral types of M8 and M8.7, which would move the datapoint closer to the general trend. The third outlier (SONYC-NGC1333-33) has an unusual dip in the spectrum around 1.65 *μ**m*, which decreases our spectral type estimate by  ∼ 1 subtype, but does not affect the model fitting. Excluding these three outliers, the datapoints are well-approximated by a linear trend (solid line): *T*eff = 4191 − 157 × SpT (where SpT corresponds to the M subtype). The plot also shows the effective temperature scales by and, which are derived using the optical portion of the spectrum. The two scales agree fairly well with each other, although the Mentuch et al. scale is an extrapolation and has not been directly calibrated for *T*eff < 3000K. The trend seen in our data is consistent with these lines. Our datapoints are, however, mostly above the two lines, indicating that we systematically overestimate the temperatures (by  ∼ 200K) or the spectral types (by 0.5-1 subtypes). We explored possible reasons for this discrepancy. One option is a systematic error in the extinction. Decreasing *A**V* by 1mag for all objects would shift their spectral types to earlier types, but only by  ∼ 0.3 subtypes, not sufficient to explain the effect. Moreover, this would also increase the best estimate for *T*eff by typically 200K and thus cause larger disagreement between datapoints and published effective temperature scales. The inverse is true as well – increasing *A**V* would lead to lower *T*eff, as required, but also to later spectral types. Thus, systematic changes in *A**V* do not resolve the problem. Given the good agreement between our spectral types and literature values (Sect. [s32]), we suspect that the offset is most likely due to a problem with the effective temperatures and could indicate issues with the used model spectra. A more extensive comparison of the effective temperature scales is beyond the scope of this paper. Spectral flux distributions --------------------------- To further test the spectral fitting from Sect. [s33] we compare the full photometric spectral flux distributions (SFD) for some selected sources, as far as available, with the AMES-DUSTY model spectra. We include our own photometry in izJK as well as Spitzer/IRAC data from the C2D ’HREL’ catalogue. In Fig. [f5] we show 3 examples. Object SONYC-NGC1333-5 is well-matched by a model spectrum for *T*eff of 2800-2900K for *A**V* = 4mag, in line with the parameters derived from the FMOS spectrum (Table [t2]). This object shows colour excess redwards of 5 *μ**m*, presumably due to the presence of a disk (see Sect. [s45]). The best match for object SONYC-NGC1333-30 is obtained for *T*eff of 2500-2700 with *A**V* of 9-10mag, i.e. the object may be slightly cooler than listed in Table [t1], but still within the uncertainty. For the coolest object SONYC-NGC1333-36 a good match is found for temperatures between 1900 and 2100K and *A**V* of 0-2mag. This is again somewhat cooler than the estimate given in Table [t1]. In general, when comparing with the full SFD the results are similar to the spectral fits to individual wavelength bands in the near-infrared, maybe except for the regime below 2500K where the SFD comparison yields lower temperatures by  ∼ 200K. The comparison also illustrates that the ideal dataset for a characterisation of young brown dwarfs would be a spectrum covering the entire near-infrared domain from 1 to 8 *μ**m*, thus including five broadband features. The brown dwarf population in NGC1333 ===================================== The newly identified very low mass objects in this paper add to the substantial number in the NGC1333 region that have already been confirmed in the literature. In Table [t2] we compile all previously spectroscopically confirmed objects with spectral types of M5 or later or effective temperatures of 3200K or below, from,,, and. Whenever possible, we also re-measured the HPI spectral types for the sources identified in, based on their MOIRCS spectra. In the Table, we list coordinates, photometry, spectral types, effective temperatures, and alternative names. Adding the 10 objects discovered in this paper, the entire sample comprises 51 objects. Not all these objects are brown dwarfs; some are very low mass stars. Based on the COND, DUSTY, and BCAH isochrones [2](#fn2), the hydrogen burning limit at 1Myr would be reached at effective temperatures of 2800-2900K, which is found to correspond to an (optical) spectral type of M6-M7. In Table [t2] we made the cut at M5 to include all borderline cases as well. Taking this into account, the number of confirmed brown dwarfs in NGC1333 is about 35 ± 5. This is currently one of the largest and best characterised populations of substellar objects in a single star forming region. In the following we will investigate the mass function, spatial distribution, and disk properties for this sample. lllcclll S-1 & 03 28 47.66 & +31 21 54.6 & 17.55 & 15.24 & M9.2*a* & 2600*a*, 2800*b* & MBO139 S-2 & 03 28 54.92 & +31 15 29.0 & 15.990 & 14.219 & M7.9*a*, M6.5*b*, M8*d*, M8.6*f* & 2850*a*, 2600*b* & ASR109, Sp 60 S-3 & 03 28 55.24 & +31 17 35.4 & 15.090 & 13.433 & M7.9*c*, M8.2*f* & 2900*b* & ASR38 S-4 & 03 28 56.50 & +31 16 03.1 & 18.17 & 16.73 & M9.6*f* & 2500*b* & S-5 & 03 28 56.94 & +31 20 48.7 & 15.362 & 13.815 & M7.6*a*, M6*b*, M6.8*c* & 2850*a*, 2900*b* & MBO91, Sp 66 S-6 & 03 28 57.11 & +31 19 12.0 & 17.24 & 15.34 & M7.3*a*, M8*b*, M8.0*f* & 3250*a*, 2700*b* & MBO148, ASR64, Sp 23 S-7 & 03 28 58.42 & +31 22 56.7 & 15.399 & 13.685 & M6.5*b*, M7.1*c*, M7.7*f* & 2800*b* & MBO80, Sp 72 S-8 & 03 29 03.39 & +31 18 39.9 & 15.833 & 14.000 & M8.2*a*, M8.5*b*, M7.4*c*, M8.4*f* & 2850*a*, 2600*b* & MBO88, ASR63, Sp 80 S-9 & 03 29 05.54 & +31 10 14.2 & 17.072 & 15.667 & M8*b*, M8.6*f* & 2600*b* & S-10 & 03 29 05.66 & +31 20 10.7 & 17.113 & 15.485 & & 2500*b* & MBO143, Sp 86 S-11 & 03 29 07.17 & +31 23 22.9 & 17.87 & 15.62 & M9.2*f* & (2600)*b* & MBO141 S-12 & 03 29 09.33 & +31 21 04.2 & 16.416 & 13.150 & & (2500)*b* & MBO70, Sp 93 S-13 & 03 29 10.79 & +31 22 30.1 & 14.896 & 12.928 & M7.5*b*, M7.4*c*, M8.1*f* & 3000*b* & MBO62 S-14 & 03 29 14.43 & +31 22 36.2 & 14.606 & 13.035 & M7*b*, M6.6*c*, M7.7*f* & 2900*b* & MBO66 S-15 & 03 29 17.76 & +31 19 48.1 & 14.803 & 12.988 & M7.5*b*, M6.5*c*, M7.8*f* & 3000*b* & MBO64, ASR80, Sp 112 S-16 & 03 29 28.15 & +31 16 28.5 & 13.054 & 12.091 & M8.5*a*, M7.5*b*, M7.5*e*, M9.1*f* & 2850*a*, 2600*b*, 2761*e* & Sp 164 S-17 & 03 29 33.87 & +31 20 36.2 & 16.562 & 15.481 & M7.5*a*, M8*b*, M8.7*f* & 2600*a*, 2500*b* & MBO140 S-18 & 03 29 35.71 & +31 21 08.5 & 18.50 & 16.94 & M8.2*f* & 2500*b* & Sp 129 S-19 & 03 29 36.36 & +31 17 49.8 & 17.91 & 16.38 & & 2700*b* & S-21 & 03 28 47.34 & +31 11 29.8 & 15.484 & 12.702 & & 3100*b* & ASR117 ASR15 & 03 28 56.94 & +31 15 50.3 & 15.056 & 13.461 & M7.4*c*, M6.0*d* & & ASR17 & 03 28 57.15 & +31 15 34.5 & 15.405 & 13.186 & M7.4*c*, M6.0*d* & & Sp 68 MBO73 & 03 28 58.24 & +31 22 09.3 & 16.004 & 13.367 & M6.4*c* & & Sp 70 ASR24 & 03 29 11.30 & +31 17 17.5 & 13.977 & 12.915 & M8.2*c*, M8.0*d* & & MBO69 & 03 29 24.45 & +31 28 14.9 & 14.041 & 12.686 & M7.0*a*, M7.4*c* & 3200*a* & ASR29 & 03 29 13.61 & +31 17 43.4 & 16.441 & 13.028 & M5*d* & 3250*a* & ASR105 & 03 29 04.66 & +31 16 59.1 & 15.550 & 12.665 & M6*d* & & Sp 84 ASR8 & 03 29 04.06 & +31 17 07.5 & 13.310 & 12.313 & M7*d* & & MBO78 & 03 29 00.15 & +31 21 09.2 & 16.466 & 13.349 & M5*d* & & Sp 75 Sp 45 & 03 28 43.55 & +31 17 36.4 & 12.219 & 10.138 & M5.0*e* & 3125*e* & ASR127 Sp 46 & 03 28 44.07 & +31 20 52.8 & 14.245 & 12.627 & M7.3*a*, M7.5*e* & 3050*a*, 2829*e* & Sp 49 & 03 28 47.82 & +31 16 55.2 & 12.940 & 10.909 & M8.0*e* & 2710*e* & ASR111 Sp 53 & 03 28 52.13 & +31 15 47.1 & 13.161 & 12.029 & M7.0*e* & 2846*e* & ASR45 Sp 55 & 03 28 52.90 & +31 16 26.4 & 13.616 & 12.476 & M5.0*e* & 3154*e* & ASR46 Sp 58 & 03 28 54.07 & +31 16 54.3 & 13.027 & 11.599 & M5.0*e* & 3098*e* & ASR42 Sp 94 & 03 29 09.48 & +31 27 20.9 & 14.154 & 12.692 & M5.0*e* & 3098*e* & MBO60 Sp 105 & 03 29 13.03 & +31 17 38.3 & 15.231 & 14.158 & M8.0*e* & 2710*e* & ASR28 Sp 131 & 03 29 37.73 & +31 22 02.4 & 13.987 & 12.958 & M7.6*a*, M7.0*e* & 2850*a*, 2891*e* & MBO65 Sp 157 & 03 29 12.79 & +31 20 07.7 & 14.676 & 13.294 & M7.6*a*, M7.7*c*, M7.5*e* & 2950*a*, 2812*e* & MBO75, ASR83 Sp 177 & 03 29 24.83 & +31 24 06.2 & 14.433 & 13.383 & M6.5*a*, M6.7*c*, M6.5*e* & 3000*a*, 2957*e* & MBO77 Sp 71 & 03 28 58.24 & +31 22 02.1 & 14.912 & 12.406 & M6*e* & 2990*e* & MBO47 The number of brown dwarfs -------------------------- Based on our comprehensive spectroscopy, we can put some constraints on the total number of brown dwarfs in NGC1333 and the mass limits of the current surveys. For this purpose, we use the iz survey presented in. In Fig. [f8] we plot the iz colour-magnitude diagram for the 196 candidates selected in. The confirmed brown dwarfs (Tables [t1] and [t2]), either by us or other groups, are marked with squares; all objects for which we have obtained spectroscopy with crosses. Note that the iz candidates are selected only with a cut in colour and a cut in PSF shape to rule out extended objects. No other selection criteria have been used, i.e. this sample is as unbiased as possible. Colour-magnitude diagram for the iz candidates (dots), originally identified in. Crosses are objects for which we have obtained spectra in this paper or. Confirmed very low mass objects from this paper or the literature are marked with squares. The horizontal dashed line shows the completeness limit of the survey estimated in. [f8] We have useful spectra for 98/196 candidates; out of these 98, 24 are confirmed by our spectra. In total the iz sample contains 35 confirmed objects with $A\_V \lesssim 12$mag. Thus, we have a yield of 24/98 (24%) and would expect to find 24 more objects among the candidates for which we do not have spectra. Since 11 of them (35 minus 24) have already been confirmed by other groups, the expected number of additional very low mass objects from this iz selection is 13. The low-mass end of the diagram deserves additional discussion. The faintest confirmed brown dwarfs in Fig. [f8] are SONYC-NGC1333-1, 30, and 36 at *i* = 23.4 − 23.7. Comparing their effective temperatures with the 1Myr DUSTY and COND isochrones, it seems likely that they have masses of $\lesssim 0.02\,M\_{\odot}$. If our estimate of *T*eff = 2000K for SONYC-NGC1333-36 is correct (Sect. [s34]), the best mass estimate would be in the range of 0.006 *M*⊙. We have taken spectra for 7 fainter objects, but none of them is a brown dwarf, which might indicate that we have reached the ’bottom of the IMF’, as preliminarily stated in. However we may not be 100% complete in this magnitude range. The formal completeness limits of the iz survey at *i* = 24.8mag, determined from the peak of the histogram of the magnitudes, is shown with a dashed line. This limit has been derived for a field of view of 34ʹ × 27ʹ, but most of the cluster members are located in a smaller region of 10ʹ × 10ʹ which is partially affected by significant background emission from the cloud (see Fig. [f50]. Thus, the completeness limit in the relevant areas might not always reach the value shown in Fig. [f8]. Thus, based on our new data we retract the previous claim by stating a deficit of objects with *M* < 0.02 *M*⊙ in NGC1333, for two reasons: a) The updated brown dwarf census contains a few of objects with masses at or below 0.02 *M*⊙, including one with an estimated mass of 0.006 *M* ⊙ . b) The current survey may not be complete at the lowest masses, i.e. we cannot exclude the presence of a few more objects with *M* < 0.02 *M*⊙. The census for *M* > 0.02 *M*⊙ is more robust. From the 35 confirmed members in the iz diagram we subtract 10 which are probably slightly above the substellar boundary (see discussion in Sect. [s4]). We also subtract the 3 which are likely below 0.02 *M*⊙ and add the 13 which we are still missing. This gives a total number of  ∼ 35 brown dwarfs down to 0.02 *M*⊙ and with $A\_V\lesssim 12$mag. The star vs. brown dwarf ratio ------------------------------ As a proxy for the shape of the mass function, previous authors have used the ratio of stars to brown dwarfs, where these two groups are defined by a range of masses. These ratios are more robust against uncertainties in the masses than a complete IMF. use a range of 0.08-1.0 *M*⊙ for stars and 0.03-0.08 *M*⊙ for brown dwarfs, hereafter called *R*1. Other authors use 0.08-10 *M*⊙ for stars and 0.02-0.08 *M*⊙ for brown dwarfs, hereafter called *R*2. Since the number of high-mass stars is small, the two ratios *R*1 and *R*2 should be fairly similar. The comparison with NGC1333 is complicated by the fact that no comprehensive spectroscopic census is available for the stars. The best starting point is probably the Spitzer analysis by. They find a total of 137 Class I and II members with disk, from which 94 are in 2MASS. Objects not detected in 2MASS are likely embedded sources with high extinction *A**V* > 10mag and thus not comparable with our brown dwarf sample. We calculated absolute J-band magnitudes for this sample using the dereddening described in Sect. [s32] and assuming a distance of 300pc, which gives a range of *M**J* = 0 − 9mag. Comparing with the BCAH 1Myr isochrone the sample contains 13 objects with *M* > 1.0 *M*⊙, 52 objects with 0.08 < *M* < 1.0 *M*⊙ and 29 with 0.02 < *M* < 0.08 *M*⊙. These are only objects with disks; correcting for a disk fracton of 83% shifts the numbers to 16, 63, 35. The latter number is consistent with the estimate of brown dwarfs in this cluster given in Sect. [s41]. Out of 35 brown dwarfs, the number of objects with masses above 0.03 *M*⊙ would be 28. Based on these estimates the ratios for NGC1333 become *R*1 = 63/28 = 2.3 ± 0.5 and *R*2 = 79/35 = 2.3 ± 0.5 (see below for an explanation of the uncertainties). Our value for *R*1 is somewhat larger than our first estimate given in of 1.5 ± 0.3, mainly because we use here the cutoff at 0.03 *M*⊙ to be consistent with. The uncertainties for *R*1 and *R*2 stated above are 1*σ* confidence intervals and have been derived based on the prescription provided by. This prescription is given for population proportions (’success counts’). Therefore, we use the Cameron equation to calculate the confidence intervals *σ**k*1 for the ratio of number of stars to the sum of stars and brown dwarfs (*k*1) and *σ**k*2 for the ratio of number of brown dwarfs to the same sum (*k*2). The confidence intervals for *R*1 and *R*2 are then derived as follows: $$R\_{\mathrm{max}} = \frac{k1 + \sigma\_{k1}}{k2 - \sigma\_{k2}}$$ $$R\_{\mathrm{min}} = \frac{k1 - \sigma\_{k1}}{k2 + \sigma\_{k2}}$$ lll NGC1333 & 2.3 (1.8-2.8) & 2.3 (1.8-2.7) ONC & 3.3 (2.8-3.9) & 3.8,4.5,5.0 UpSco & 3.8 (3.1-4.5) & – NGC2024 & 3.8 (2.6-5.2) & – Chamaeleon & 4.0 (2.3-6.0) & 3.9 (2.9-5.0) *ρ*-Oph & 5.1 (3.8-6.4) & 4.8 (3.7-6.0) Taurus & 6.0 (4.5-7.7) & 7.3 (5.1-9.6) IC348 & 8.3 (6.4-10.5) & 8.0 (6.3-10.0) In Table [t4] we compare the ratios for NGC1333 with the available literature values for *R*1 and *R*2 for other regions. The same numbers are plotted in Fig. [f40] for illustration. To have accurate and consistent confidence intervals, we re-calculated the errors for all literature values as described above. NGC1333 has the lowest ratios measured so far in any star forming region, suggesting that the number of brown dwarfs in NGC1333 is unusually high, which is in line with the conclusion in. In particular, the ratios for NGC1333 deviate by more than 2*σ* from those in IC348. It should be noted that the current census for IC348 is nearly complete down to 0.03 *M*⊙ and covers most of the cluster, which makes the difference to NGC1333 even more striking. Star vs. brown dwarf ratios for various star forming regions, from Table [t4]. *R*1 is plotted with squares, *R*2 with circles. The number on the y-axis is identical with the row in Table [t4]. [f40] This finding has to be substantiated with further survey work in diverse regions. In Table [t4] we list the statistical 1*σ* confidence intervals, purely based on the sample sizes. These statistical errors do not take into account additional sources of uncertainty, e.g. unrecognised biases, inconsistencies in sample selection or problems with the mass estimates, i.e. the actual errors may be larger than listed in Table [t4]. In particular, it is important to note that all mass estimates are necessarily model-dependent. For the value in NGC1333 we use the BCAH isochrones, mainly because they cover the brown dwarf regime down to the Deuterium burning limit. The problems and uncertainties of these type of models at very young ages are well-documented. The best way of assessing the overall uncertainties is to compare results from independent surveys. As can be seen in Table [t4], so far the results from independent groups agree within the statistical errorbars, with the possible exception of the ONC.[3](#fn3) Such an independent confirmation is required for NGC1333 as well. If confirmed, the unusually low ratio of stars to brown dwarfs in NGC1333 could point to regional differences in this quantity, possibly indicating environmental differences in the formation of very low mass objects. One option to explain this is turbulence, as very low mass cores which can potentially collapse to brown dwarfs could be assembled by the turbulent flow in a molecular cloud. At first glance this could be a realistic possibility for NGC1333, where the cloud is strongly affected by numerous outflows, although it is not clear if the turbulence in NGC1333 is mainly driven by these outflows. Alternatively, additional brown dwarfs could form by gravitational fragmentation of gas falling into the cluster center. This latter mechanism would benefit from the fact that NGC1333 has a higher stellar density and thus a stronger cluster potential than most other nearby star forming regions. Spatial distribution -------------------- In Fig. [f9] we show the spatial distribution of the sample of very low mass objects listed in Tables [t1] and [t2] (squares). For comparison, the positions of the 137 Class I and Class II sources are overplotted with crosses. The dots indicate the positions of all targets for which we have obtained spectra but which are not confirmed as very low mass objects. Additionally, we show the frequency of objects as a function of distance from the cluster center in Fig. [f30], again for the same three samples, and in addition for all photometric candidates from our iz catalogue (dash-dotted line). Spatial distribution of NGC1333 members. Crosses are all 137 objects with Spitzer excess by, squares are all confirmed brown dwarfs (Tables [t1] and [t2]). Objects with spectroscopy for which we can exclude that they are substellar members are shown with dots. [f9] Histogram of the distance from the cluster center for confirmed brown dwarfs (solid line, Tables [t1] and [t2]), objects with Spitzer excess by (dashed line), objects for which we can exclude that they are substellar members (dotted line), and all IZ candidates (dash-dotted line). As cluster center we used the average position of the Class I/II sources. [f30] In the two figures, the spatial distribution of brown dwarfs is strongly clustered and indistinguishable from the distribution of the total population of Class I/II sources in NGC1333. For the two samples, the average *α* and *δ* differ only by 0.4’ and 0.3’, respectively, which is  < 10% of the cluster radius. Adopting the average position of the Class I/II sources as cluster center, the average distance from the center is similar in the two samples, 5.2’ for the very low mass objects and 5.5’ for the Class I/II sources. The fraction of objects with distance from the cluster center of *d* < 0.1, 0.1 < *d* < 0.2, *d* > 0.2deg is 65, 35, 0% for the very low mass objects and 67, 26, 6% for the Class I/II objects. The scatter in the positions is *σ**α* = 0.071 and *σ**δ* = 0.069deg for the very low mass objects, and *σ**α* = 0.067 and *σ**δ* = 0.085deg for the Class I/II sources. For all these quantities there are no significant differences between the two samples. The figures also show that our spectroscopic follow-up covers an area that is about 1.5-2 as large (in radius) than the cluster itself. We took spectra for 31 candidates with distances of  > 0.2deg from the cluster center, but none of them turned out to be a brown dwarf. There are still 43 candidates from the IZ photometry outside 0.2deg (see Sect. [s41]) for which we do not have spectra, but based on our current results, it is unlikely that they contain any very low mass cluster members. Thus, our wide-field follow-up spectroscopy shows that there is no significant population of brown dwarfs at *d* > 0.2deg from the cluster center, corresponding to  ∼ 1pc at a distance of 300pc. It has been suggested that gravitational ejection occurs at an early stage in the evolution of substellar objects, either from multiple stellar/substellar systems or from a protoplanetary disk. This ejection is thought to remove the objects from their accretion reservoir and thus sets their masses. In these scenarios one could expect the brown dwarfs to have high spatial velocities in random directions. An ejection velocity of 1kms− 1 would allow the object to travel 1pc in 1Myr, i.e. in the case of NGC1333 this would be sufficient to reach the edge of the cluster. However, the gravitational potential of the cluster will significantly brake the motion of the brown dwarf. Assuming a total cluster mass of 500 *M*⊙ homogenuously distributed in a sphere with 1pc radius, a brown dwarf that gets ejected in the cluster center with 1.5, 2, 3kms− 1 would reach a velocity of approximately 0.5, 1.4, 2.6kms− 1 at a distance of 1pc from the center. All objects with ejection velocities of $\gtrsim 2$kms− 1 would have moved to distances significantly larger than 1pc over 1Myr. As shown above, the presence of such objects can be excluded from our data. The scenarios by and predict that a substantial fraction of ejected brown dwarfs (more than 50% in some simulations) exceed this velocity threshold of 2kms− 1. These models would require some tuning to reproduce a spatial distribution as observed in NGC1333. However, such simplified scenarios do not take into account that dynamical interactions affect the total cluster population, not exclusively the brown dwarfs. The cluster formation simulations by show that the velocity dispersion in a dense cluster is not expected to increase in the very low mass regime. Although brown dwarfs undergo ejection in the simulations, this does not lead to a velocity offset in comparison to the stars. NGC1333 seems to be consistent with this picture. As a side comment, we note that the parameters in the main simulation in with gas mass of 500 *M*⊙ and cloud radius of 0.4pc are fairly similar to the properties of NGC1333, although the simulation produces a much higher number of stars and brown dwarfs (total stellar mass of 191 *M*⊙ vs.  ∼ 50 *M*⊙ in NGC1333). Disks ----- In Fig. [f10] we plot the Spitzer/IRAC colour-colour diagram for the sample listed in Tables [t1] and [t2], again based on the C2D-’HREL’ catalogue. Out of the sample of 51 sources with confirmed spectral type M5 or later, 41 have photometric errors  < 40% in all four IRAC bands and are shown in this plot. The figure shows the typical appearance with two groups, one around the origin, the second with significant colour excess in mid-infrared bands due to the presence of circum-(sub)-stellar material. In this sample of 41 objects, 27 show evidence for a disk, i.e. 66 ± 8%. All of them have colours comparable to the Class II sources identified in. The derived disk fraction of 66% is only valid for the sample of 41 objects with reliable Spitzer detection. In the entire sample of 51 very low mass sources in NGC1333 listed in Tables [t1] and [t2], the disk fraction is likely to be smaller, because the ten objects which are not detected by Spitzer are unlikely to have a disk. Correcting for this effect, the disk fraction in the full sample could be as low as 27/51 or 55%. Therefore, we consider the disk fraction of 66% to be an upper limit. For comparison, for the total cluster population derive a disk fraction of 83% from a Spitzer survey. This number has been derived for objects with *K* < 14mag. This magnitude limit was chosen by because it corresponds to *M* = 0.08 *M*⊙ at age of 1Myr and *A**V* = 20mag. Their sample thus includes mostly stars, but also some brown dwarfs (as evident from Table [t2], which contains a number of objects from the sample, marked with ’Sp’.) The Spitzer sample contains a substantial number of objects with *A**V* > 10mag, which are rare among the currently known brown dwarfs. It is possible that some of the heavily embedded brown dwarfs with *A**V* > 10mag have not been found yet. This could explain the discrepancy in the disk fractions. Our disk fraction is consistent with the values derived for very low mass members in *σ*Ori, Chamaeleon-I, and IC348 although all three regions are thought to be somewhat older (2-3Myr) than NGC1333. A more detailed SED analysis was carried out for objects with an additional datapoint at 24 *μ**m*. 19 of the objects in Fig. [f10] have MIPS fluxes at 24 *μ**m* with errors  < 40%. At this wavelength the images are strongly affected by the cloud emission and blending. To make sure that the fluxes are trustworthy, we checked all objects in a 24 *μ**m* image obtained in the Spitzer program #40563 (PI: K. Wood, AOR 23712512), which is deeper than the C2D mosaics. After visual inspection, 3 objects were discarded; the remaining 16 are point sources at 24 *μ**m* and are marked with squares in Fig. [f10]. Spitzer/IRAC colour-colour diagram for the very low mass sources listed in Tables [t1] and [t2], using the ’HREL’ photometry from the C2D survey. We only plot objects with photometric errors  < 40% in all four bands (41 out of 51). The position of the Class II objects from the survey by is indicated by the dashed box. Objects with reliable detection at 24 *μ**m* are marked with squares [f10] Spectral energy distributions for 16 very low mass objects with disks in NGC1333 (crosses). The SEDs have been dereddened and scaled to the J-band fluxes. We overplot the typical SED for NGC1333 (solid line), *ρ*Oph (dotted line) Taurus (dashed line), and UpSco (dash-dotted line). We also show the photospheric SED from the 2800K DUSTY-AMES model with small dots. A typical errorbar for the 24 *μ* fluxes is overplotted. [f11] In Fig. [f11] we show their SEDs after dereddening (see Sect. [s32]) and scaling to the J-band flux (crosses). For comparison, the photospheric SED from a model spectrum is overplotted with small dots. To assess the disk evolution in the substellar regime, we derive the typical SED for NGC1333 and three other star forming regions: *ρ*Oph (1Myr), Taurus (2Myr), and UpSco (5Myr). For this purpose we selected the objects which are detected in all four IRAC bands and at 24 *μ**m*. For *ρ*Oph we started with the census in Muzic et al. (subm.) and made use of the C2D data. For Taurus we used published Spitzer data from and. For UpSco the data from was used. When comparing the SEDs from different regions, one has to take into account that the depth of the 24 *μ**m* observations is not the same; thus the median SED is affected by incompleteness at low flux levels. Instead we plot the SED for the object that has the 10th highest flux level at 24 *μ**m* after converting to *λ**F**λ* and scaling to the J-band flux. This represents an estimate for a typical SED unaffected by the depth of the Spitzer observations and the distance to the cluster. Note that all objects used for this exercise are spectroscopically confirmed members of the respective clusters. For wavelengths  < 8 *μ**m* the four median SEDs are fairly similar. At 8 *μ**m* the SEDs in the youngest regions (NGC1333, *ρ*Oph) are slightly enhanced. The biggest differences are seen at 24 *μ**m*, particularly when comparing NGC1333 with UpSco. This is mostly due to the fact that NGC1333 harbours a few objects with unusually strong excess emission, which are missing in UpSco, indicating that the objects in NGC1333 are in an early evolutionary stage compared with the other regions. As demonstrated in a large spread in 24 *μ**m* fluxes, as seen in NGC1333, can easily be explained by a range of flaring angles in the disks. Conclusions =========== As part of our survey program SONYC, we present a census of very low mass objects in the young cluster NGC1333 based on new follow-up spectroscopy from Subaru/FMOS. To derive reliable spectral types from our data, we define a new spectral index based on the slope of the H-band peak. We find 10 new likely brown dwarfs in this cluster, including one with a spectral type  ∼ L3 and two more with spectral type around or later than M9. These objects have estimated masses of 0.006 to 0.02 *M*⊙, the least massive objects identified thus far in this region. This demonstrates that the mass function in this cluster extends down to the Deuterium burning limit and beyond. By combining the findings from our SONYC survey with results published by other groups, we compile a sample of 51 objects with spectral types of M5 or later in this cluster, more than half of them found by SONYC. About 30-40 of them are likely to be substellar. The star vs. brown dwarf ratio in NGC1333 is significantly lower than in other nearby star forming regions, possibly indicating environmental differences in the formation of brown dwarfs. We show that the spatial distribution of brown dwarfs in NGC1333 closely follows the distribution of the stars in the cluster. The disk fraction in the brown dwarf sample is  < 66%, lower than for the stellar members, but comparable to the brown dwarf disk fraction in 2-3Myr old regions. The substellar members in NGC1333 show a large fraction of highly flared disks, evidence for the early evolutionary state of the cluster. The authors would like to thank the Subaru staff, especially Dr. Naoyuki Tamura and Dr. Kentaro Aoki, for the assistance during the observations and their preparation. We are grateful to Ms. Yuuki Moritani, Mr. Kiyoto Yabe and Prof. Fumihide Iwamuro (Kyoto University) for their help with the FMOS data reduction. We thank the anonymous referee for a constructive report that helped to improve the paper. This work made use of results from the Spitzer program #40563; we thank Jane Greaves, Chris Poulton, and Kenneth Wood from the University of St. Andrews for their help in the framework of this campaign. We also thank Ewan Cameron for discussing the calculation of confidence intervals and David Lafrenière for making his spectra available to us. AS acknowledges financial support the grant 10/RFP/AST2780 from the Science Foundation Ireland. The research was supported in large part by grants from the Natural Sciences and Engineering Research Council (NSERC) of Canada to RJ. This research has benefitted from the SpeX Prism Spectral Libraries, maintained by Adam Burgasser at http://pono.ucsd.edu/~adam/browndwarfs/spexprism/. Spectroscopically excluded objects ================================== In Tables [t10] and [t11] we provide a full list of objects for which we obtained spectra and which were not classified as young very low mass objects based on the shape of their near-infrared spectrum (see Sect. [s31]). The spectra come from the first campaign with MOIRCS and from the second run with FMOS (this paper). Most of these objects are likely to be either young stellar objects in NGC1333 or background stars with effective temperatures above  ∼ 3500K or spectral type earlier than  ∼ M3. In Table [t10] we also give the J- and K-band photometry from 2MASS and the identifiers from the photometric surveys by and, if available. Objects without listed identifiers are not known to have a counterpart within 1". llccccl 03 29 18.71 & +31 32 26.4 & 16.722 & 13.839 & IZ & F & 03 29 52.35 & +31 28 13.7 & 14.919 & 13.829 & IZ & F & 03 29 48.12 & +31 28 29.4 & 15.550 & 13.663 & IZ & F & 03 29 37.80 & +31 27 48.4 & 17.274 & 15.249 & IZ & F & 03 29 34.76 & +31 29 08.1 & 13.647 & 11.532 & IZ & F & MBO 43 03 29 11.53 & +31 30 05.6 & 16.740 & 13.786 & IZ & F & [LAL96] 241, 242 03 28 55.74 & +31 30 58.0 & 15.175 & 13.390 & IZ & F & [LAL96] 143 03 28 52.66 & +31 32 04.3 & 16.563 & 14.730 & IZ & F & 03 28 46.67 & +31 31 35.4 & 15.647 & 13.933 & IZ & F & 03 28 37.55 & +31 32 54.5 & 15.107 & 14.232 & IZ & F & 03 28 23.84 & +31 32 49.3 & 14.714 & 13.655 & IZ & F & 03 28 44.96 & +31 31 09.9 & 17.057 & 15.211 & IZ & F & 03 28 46.24 & +31 30 12.1 & 15.089 & 14.069 & IZ & F & [LAL96] 88 03 29 45.53 & +31 26 56.5 & 14.870 & 14.023 & IZ & F & 03 30 05.41 & +31 25 13.1 & 14.921 & 13.680 & IZ & F & 03 28 49.48 & +31 25 06.6 & 15.957 & 13.658 & IZ & F & MBO 82 03 29 10.46 & +31 23 34.8 & 15.633 & 12.762 & JK & F & [LAL96] 231 03 28 43.17 & +31 26 06.1 & 15.617 & 13.384 & IZ & F & [LAL96] 78 03 28 37.75 & +31 26 32.8 & 16.847 & 14.340 & IZ & F & [LAL96] 60 03 27 56.27 & +31 27 00.8 & 16.836 & 13.967 & IZ & F & 03 28 07.64 & +31 26 42.4 & 15.974 & 13.719 & IZ & F & 03 28 19.51 & +31 26 39.5 & 15.100 & 13.687 & IZ & F & [LAL96] 13 03 28 40.22 & +31 25 49.1 & 15.637 & 12.911 & IZ & F & 03 28 47.64 & +31 24 06.2 & 14.199 & 11.660 & JK & F & [LAL96] 93 03 28 55.22 & +31 25 22.4 & 14.735 & 12.550 & IZ & F & [LAL96] 139 03 29 03.32 & +31 23 14.8 & 17.254 & 14.071 & JK & F & [LAL96] 191 03 29 03.13 & +31 22 38.2 & 13.724 & 11.323 & JK & F & [LAL96] 189 03 29 27.61 & +31 21 10.1 & 14.836 & 13.074 & IZ & F & [LAL96] 324 03 29 55.50 & +31 15 30.5 & 15.120 & 14.131 & IZ & F & 03 29 52.65 & +31 17 22.9 & 16.325 & 14.970 & IZ & F & 03 29 39.61 & +31 17 43.4 & & & JK & F & 03 28 35.46 & +31 21 29.9 & 16.052 & 14.197 & IZ & F & [LAL96] 49 03 28 48.45 & +31 20 28.4 & 16.842 & 14.283 & IZ & F & [LAL96] 100 03 29 10.82 & +31 16 42.7 & 15.652 & 13.039 & JK & F & [LAL96] 235 03 29 21.42 & +31 15 55.3 & 15.396 & 14.004 & IZ & F & [LAL96] 305 03 29 45.41 & +31 16 23.2 & 15.118 & 13.767 & IZ & F & 03 29 50.23 & +31 15 47.9 & 16.980 & 15.106 & IZ & F & 03 30 01.93 & +31 10 50.5 & 14.374 & 12.921 & IZ & F & 03 29 54.78 & +31 11 41.7 & 16.679 & 15.253 & IZ & F & 03 29 34.57 & +31 11 23.8 & 16.750 & 14.643 & IZ & F & [LAL96] 351 03 29 36.00 & +31 12 49.6 & 15.522 & 14.615 & IZ & F & [LAL96] 353 03 29 26.11 & +31 11 36.9 & 14.781 & 12.874 & IZ & F & [LAL96] 320 03 28 01.19 & +31 17 36.5 & 15.587 & 14.136 & IZ & F & 03 27 52.51 & +31 19 38.8 & 14.973 & 13.662 & IZ & F & 03 28 22.90 & +31 15 21.7 & 15.032 & 13.587 & IZ & F & [LAL96] 20 03 29 08.71 & +31 12 01.9 & & & IZ & F & 03 29 12.24 & +31 12 20.5 & 16.588 & 14.722 & IZ & F & [LAL96] 254 03 29 18.45 & +31 11 30.5 & 16.413 & 15.530 & IZ & F & 03 29 31.00 & +31 11 20.1 & 17.052 & 15.265 & IZ & F & 03 29 28.99 & +31 10 00.3 & 13.365 & 10.889 & IZ & F & [LAL96] 329 03 29 45.40 & +31 10 35.6 & 14.828 & 13.516 & IZ & F & 03 29 46.48 & +31 08 43.6 & 15.205 & 14.096 & IZ & F & 03 29 49.16 & +31 09 03.7 & 16.143 & 14.537 & IZ & F & 03 29 28.95 & +31 07 40.9 & 15.975 & 15.199 & IZ & F & [LAL96] 330 03 29 24.73 & +31 07 26.8 & & & IZ & F & 03 29 20.11 & +31 08 53.7 & 15.923 & 14.813 & IZ & F & 03 29 09.23 & +31 08 55.4 & 16.330 & 14.912 & IZ & F & 03 28 57.45 & +31 09 46.5 & 16.623 & 12.738 & IZ & F & [LAL96] 160 03 28 06.32 & +31 10 02.0 & 15.731 & 14.579 & IZ & F & 03 28 22.07 & +31 10 42.9 & 13.867 & 11.859 & IZ & F & [LAL96] 18 03 28 39.01 & +31 08 07.6 & 15.143 & 13.389 & IZ & F & [LAL96] 65 03 29 03.60 & +31 07 11.9 & 14.995 & 13.351 & IZ & F & [LAL96] 197 03 29 05.23 & +31 07 10.6 & 16.776 & 14.766 & IZ & F & llccccl 03 28 41.72 & +31 11 15.1 & & & IZ & M & 03 28 41.97 & +31 12 17.2 & 15.863 & 13.822 & IZ & M & 03 28 46.21 & +31 12 03.4 & 16.933 & 13.526 & IZ & M & [LAL96] 90 03 28 48.99 & +31 12 45.1 & 17.836 & 14.048 & IZ & M & [LAL96] 103 03 28 52.10 & +31 16 29.3 & 16.071 & 13.738 & IZ & M & [LAL96] 123 03 28 57.25 & +31 07 26.0 & & & IZ & M & [LAL96] 159 03 28 58.68 & +31 09 39.2 & 17.052 & 12.945 & IZ & M & [LAL96] 169 03 29 00.70 & +31 22 00.8 & 16.236 & 11.764 & IZ & M & [LAL96] 180 03 29 17.93 & +31 14 53.5 & 16.625 & 14.039 & IZ & M & [LAL96] 287 03 29 18.66 & +31 20 17.8 & 17.510 & 14.608 & IZ & M & MBO 109 03 29 19.86 & +31 18 47.7 & 17.205 & 13.321 & IZ & M & [LAL96] 297 03 29 28.06 & +31 18 39.0 & 15.307 & 12.846 & IZ & M & [LAL96] 327 03 29 32.20 & +31 17 07.3 & 15.503 & 13.784 & IZ & M & [LAL96] 341 03 29 37.41 & +31 17 41.6 & 14.886 & 12.761 & IZ & M & [LAL96] 359 03 29 08.17 & +31 11 54.6 & 17.247 & 14.980 & IZ & M & [LAL96] 217 --- 1. downloaded[↩](#fnref1) 2. downloaded from http://perso.ens-lyon.fr/isabelle.baraffe/[↩](#fnref2) 3. A new paper by updates the value of for the ONC to *R*1 = 2.4 ± 0.4, based on an HST survey covering a larger area than previous studies.[↩](#fnref3) Substellar Objects in Nearby Young Clusters (SONYC) IV:A census of very low mass objects in NGC1333 =================================================================================================== SONYC – *Substellar Objects in Nearby Young Clusters* – is a program to investigate the frequency and properties of young substellar objects with masses down to a few times that of Jupiter. Here we present a census of very low mass objects in the  ∼ 1Myr old cluster NGC1333. We analyze near-infrared spectra taken with FMOS/Subaru for 100 candidates from our deep, wide-field survey and find 10 new likely brown dwarfs with spectral types of M6 or later. Among them, there are three with $\gtrsim$M9 and one with early L spectral type, corresponding to masses of 0.006 to $\lesssim 0.02\,M\_{\odot}$, so far the lowest mass objects identified in this cluster. The combination of survey depth, spatial coverage, and extensive spectroscopic follow-up makes NGC1333 one of the most comprehensively surveyed clusters for substellar objects. In total, there are now 51 objects with spectral type M5 or later and/or effective temperature of 3200K or cooler identified in NGC1333; 30-40 of them are likely to be substellar. NGC1333 harbours about half as many brown dwarfs as stars, which is significantly more than in other well-studied star forming regions, thus raising the possibility of environmental differences in the formation of substellar objects. The brown dwarfs in NGC1333 are spatially strongly clustered within a radius of  ∼ 1pc, mirroring the distribution of the stars. The disk fraction in the substellar regime is  < 66%, lower than for the total population (83%) but comparable to the brown dwarf disk fraction in other 2-3Myr old regions. Introduction ============ Brown dwarfs are objects with masses too low to sustain stable hydrogen burning (*M* < 0.08 *M*⊙) and as such intermediate in mass between low-mass stars and giant planets. The substellar mass regime is crucial to test how the physics of the formation and early evolution of stars depends on object mass, thus may help address some of the most relevant issues in this field. One example is the origin of the initial mass function (IMF) and the relative importance of dynamical interactions, fragmentation, and accretion in setting the mass of objects. Surveys in star forming regions indicate that the mass regime of free-floating brown dwarfs extends down to masses below the Deuterium burning limit at 0.015 *M*⊙, i.e. it is overlapping with the domain of massive planets. The currently available surveys, however, are not complete in the substellar regime. Only small regions have been observed with sufficient depth to detect the lowest-mass brown dwarfs. Moreover, in many cases the brown dwarf candidates are selected based on their mid-infrared excess and the presence of disks, which introduces an obvious bias. In other cases, the presence of methane absorption is used to identify objects, which only finds T dwarfs in a limited temperature regime. In the SONYC project (short for: Substellar Objects in Nearby Young Clusters) we aim for a more complete census of brown dwarfs in star forming regions. For a number of regions, we have carried out deep photometric surveys in the optical and near-infrared, to facilitate a primary candidate selection based on photospheric colours. This is complemented by Spitzer data to identify additional objects with disks. We have published the photometric survey as well as follow-up spectroscopy for three regions so far: NGC1333, *ρ*Oph, and Chamaeleon-I. A fourth paper with additional spectroscopy in *ρ*Oph is submitted (Muzic et al.). Based on these results, the largest population of brown dwarfs is found in NGC1333, a very young ( ∼ 1Myr old) cluster in the Perseus OB2 association at a distance of  ∼ 300pc. Fig. [f50] shows a deep optical image of the cluster NGC1333 with some of the relevant features marked. Here we present new follow-up spectroscopy in NGC1333 for a large sample of additional candidates from our photometric survey (Sect. [s2]) and identify 10 new, previously unknown very low mass members (Sect. [s3]). Combining these with the known members yields 51 objects with spectral type  ≥ M5 in this cluster. In Sect. [s4] we analyse the brown dwarf census for NGC1333, including the mass function, the spatial distribution and the disk properties. The conclusions are given in Sect. [s5]. Throughout this paper, we make use of a large number of different samples for objects in the area of NGC1333. In Table [t15] we provide an overview of the most important samples and link them to the corresponding sections of the paper. ll Objects observed with FMOS (Sect. [s3]) & 100 excluded as very low mass sources & 63 confirmed as young very low mass sources (YVLM) & 26 newly identified & 10 Objects with spectral type  > M5 (Sect. [s4]) & 51 identified in this paper & 10 identified in & 20 with estimated masses  < 0.08 *M*⊙ & 30-40 with Spitzer counterpart (Sect. [s44]) & 41 with mid-infrared excess at 3-8 *μ**m* & 27 Candidates selected from the iz diagram (Sect. [s41]) & 196 with spectroscopy from MOIRCS or FMOS & 98 confirmed by our spectra & 27 confirmed by other groups & 8 Class I and II sources in NGC1333 (Sect. [s42]) & 137 with 2MASS detection & 94 with estimated masses 0.02 < *M* < 0.08 *M*⊙ & 29 corrected for disk fraction & 35 Subaru/Suprime-Cam i-band image for our target region NGC1333 in the Perseus star forming region. Marked are the cluster radius with a large dashed circle, the two well-known objects BD+30 549 (north) and HH 7-11 (south) with small ellipses, and the population of very low mass members (see this paper, Tables [t1] and [t2]) with small circles. The reflection nebula NGC1333 is slightly north of the image center. Coordinates are J2000. For more information on this image, see. [f50] Observations and data reduction =============================== Our spectra were obtained with the Fiber Multi Objects Spectrograph (FMOS) at the Subaru Telescope on the night of 2010 November 27. Four-hundred fibers, each of 1.2“ diameter are configured by the fiber positioner system of FMOS in the 30’ diameter field of view, with an accuracy of 0.2” rms. The patrol radius of each spine is 87“, while the minimum spacing between two neighboring spines is 12”. The spectra are extracted by the two spectrographs (IRS1 and IRS2). Our data were obtained in shared-risk mode, using only one of the spectrographs (IRS1, 200 fibers). IRS1 is equipped with a 2048 × 2048 pixel HAWAII-II HgCdTe detector, and a mask mirror for OH airglow supression. With the low-resolution mode (*R* ∼ 600) the spectrograph yields a coverage from 0.9 to 1.8 *μ**m* (J- and H-bands). We obtained 10 exposures with 15 min on-source time each. The observations were carried out in the Normal Beam Switching (NBS) mode, i.e. the same amount of time was spent to observe the sky, which is achieved by offseting the telescope by 10-15“. The seeing during the science observations was stable at  ∼ 1”. Since no stars suitable for telluric correction are found within our science field-of-view, we observed several standard star fields, covering the range of airmasses at which the science target was observed (airmass between 1-2). The observed standard stars have spectral types in the range F4 - G5. Data reduction was carried out using the data reduction package supplied by the Subaru staff. The package consists of IRAF tasks and C programs using the CFITSIO library. The data reduction package contains the tasks for standard reduction of NIR spectra, performing sky subtraction, bad-pixel and flat-field correction, wavelength calibration, flux calibration and telluric correction. In this last step, each 15-minute exposure was calibrated using a standard star at the appropriate airmass. Finally, the ten individual exposures were averaged. For the analysis (Sect. [s3]) the spectra were binned to a uniform wavelength interval of 5Å and smoothed with a small-scale median filter. For the reduced spectra, the signal-to-noise ratio in the H-band ranges from 10 to 70. In total, we covered 100 targets, from which 71 are selected from our IZ candidate catalogue. 10 additional targets have been selected by combining our JK-catalogue with the ’HREL’ catalogue from the Spitzer ‘Cores to Disk’ (C2D) Legacy program. All 10 have colour-excess in IRAC bands and thus should have a disk (see Sect. [s45]). 19 fibres were placed on known M-type members for reference. To test for possible effects of imperfect calibration, we compared the H-band spectra for six objects observed with MOIRCS and with FMOS (this paper). For three of them there is excellent agreement (SONYC-NGC1333-1, 5, 8), while for the others there are slight differences in the spectral slope. Using the method outlined in Sect. [s32], we measured the spectral types for the four out of six MOIRCS spectra which cover the entire H-band. All four give types that are later than those derived from FMOS, by 0.4, 1.3, 0.5, 0.7 subtypes, i.e. the differences are larger than our internal accuracy of 0.4 subtypes. These discrepancies may indicate residual problems with the telluric calibration in the MOIRCS and/or the FMOS data. These problems could be induced by the use of multi-object facilities: Since we stay on the target field for long integration times and the fields themselves do not cover adequate telluric standards, there is a significant time and position offset between science targets and standards, i.e. the depth of the telluric bands could potentially be quite different between science and standard fields. Stable conditions, as they were present for the FMOS observations, should mitigate this effect. The FMOS data also have the wider wavelength coverage, which facilitates the telluric correction and the spectral analysis. Therefore we put more trust in the quantities derived from FMOS spectra. Spectral analysis ================= Selection of young very low mass objects ---------------------------------------- In total we obtained spectra for 100 objects with FMOS. Based on the broadband spectral shape in the near-infrared, young very low mass sources can be reliably separated from more massive stars. Objects with very low masses and thus effective temperature below  ∼ 3500K or spectral types of  ∼ M3 or later show broad absorption features of H2O, which distinguishes them clearly from the smooth near-infrared slopes of more massive and hotter stars. The depth of these absorption troughs is a strong function of effective temperature. The most important spectral feature for our purposes is the H-band ‘peak’, formed by the two H2O absorption bands at 1.3-1.5 and 1.75-2.05 *μ**m*. The shape of this feature is gravity sensitive; while it appears round with a flat top in evolved field objects, it is triangular with a pronounced peak at 1.67 *μ**m* in young objects with ages of $\lesssim 100$Myr. In addition, H2O absorption causes a sharp edge at 1.35 *μ**m* and another ‘peak’ in the K-band. We use these features to identify young very low mass sources in the FMOS sample. We are looking for objects showing structure over the full spectral regime, as opposed to a smooth slope. In particular, we select objects with a) a pointy peak in the H-band and b) an absorption edge at 1.35 *μ**m*. Out of 100 FMOS spectra, 26 show these characteristics and are called YVLM sample (short for ’young very low mass’) in the following. For 11 the spectra are too noisy to make a decision, and the remaining 63 do not show these features. These 63 objects for which we can exclude that they are very low mass sources are listed in Appendix [a1]. From the 19 literature sources, 16 are re-identified and are part of the YVLM sample. The other 3 have spectra that are too noisy to identify the features. The remaining 10 YVLM objects are newly confirmed very low mass members of NGC1333 and are listed in Table [t1]. We use the nomenclature SONYC-NGC1333-X for these objects, where X is a running number. Since we have listed objects 1-28 in, we continue here with no. 29. Note that the list in contains some previously confirmed members. The spectra for the 10 new objects are shown in Fig. [f20]. 7 of the new objects come from our IZ catalogue, the remaining 3 from the JK plus Spitzer list (SONYC-NGC1333-31, 32, 33). cllccccccccl S-29 & 03 28 28.40 & +31 16 27.3 & 17.637 & 16.675 & 14.624 & 13.624 & 0 & 0 & M6.9 & 3150 & S-30 & 03 28 31.08 & +31 17 04.1 & 23.650 & 21.235 & 16.823 & 14.079 & 9 & 9 & M9.3 & 2700 & S-31 & 03 29 44.15 & +31 19 47.9 & & & 17.386 & 15.290 & 6 & 5 &  ∼ M9 & 2300 & Sp 132 S-32 & 03 29 03.21 & +31 25 45.2 & & & 15.798 & 13.832 & 4 & 4 & M7.1 & 3200 & MBO89, Sp 79 S-33 & 03 29 03.95 & +31 23 30.8 & & & 17.143 & 14.932 & 7 & 5 & M8.3 & 2500 & MBO116, Sp 83 S-34 & 03 29 06.94 & +31 29 57.1 & 20.839 & 19.225 & 16.541 & 14.774 & 4 & 4 & M7.0 & 2950 & Sp 90 S-35 & 03 29 10.18 & +31 27 16.0 & 18.988 & 17.711 & 15.547 & 14.127 & 2 & 2 & M7.4 & 3050 & MBO94, Sp 96 S-36 & 03 29 25.84 & +31 16 41.8 & 23.392 & 21.432 & 18.53 & 17.07 & 2 & 0 &  ∼ L3 & 2250 & S-37 & 03 29 30.54 & +31 27 28.0 & 17.356 & 16.344 & 13.808 & 12.598 & 1 & 2 & M6.9 & 3200 & MBO68 S-38 & 03 29 34.42 & +31 15 25.2 & 21.091 & 19.654 & 17.33 & 16.15 & 1 & 1 & M7.7 & 2850 & Many of our spectra show emission in Paschen *β* at 1.28 *μ**m*. If the emission originates from the source itself, it would be a clear indication for ongoing accretion. However, this is difficult to tell with fibre spectroscopy; our spectra may be affected by emission from the cloud material in NGC1333. Still, it is worth pointing out that the fraction of objects with Pa *β* emission is 50% in the YVLM sample and 32% among the remaining objects. This indicates that the other sources might contain a number of young stars in NGC1333 with temperatures  > 3500K. FMOS spectra for the 10 new brown dwarfs listed in Table [t1]. The fluxes are on a relative scale with arbitrary offsets. The observed spectrum is shown in black, the best fitting DUSTY model slightly offset, with thin, dashed lines. We masked out the Pa*β* lines for clarity. The identification numbers SONYC-NGC1333-X are abbreviated with S-X. [f20] In Fig. [f4] we compare the FMOS spectra for two of our sources, both originally identified in, with spectra for young brown dwarfs from and old dwarfs with similar spectral types. At late M spectral types, the more rounded peak of the old objects becomes visible (compare the young KPNO4 with the old LHS2924). However, for earlier types the difference between young and old is not significant. The plot also illustrates that sufficient signal-to-noise is required to distinguish between the two types of objects. Our spectra often do not have the necessary quality to make this decision. From these arguments it follows that contamination by late M-dwarfs in the field cannot completely be excluded. Given the compact nature of the NGC1333 cluster (see Sect. [s44]) and the low space densities of such objects, the contamination is considered to be negligible ($\lesssim 1$). Spectral types -------------- Spectral classification for cool dwarfs in the near-infrared relies mostly on the broad H2O absorption features mentioned above. A number of spectral indices has been suggested in the literature. Relevant for us are the indices that are calibrated for young brown dwarfs and thus take into account the fact that the absorption features are sensitive to gravity. Such indices have been proposed by,, and. We find, however, that these schemes are of limited use for our low-resolution and (mostly) low signal-to-noise spectra, mainly because the wavelength offset between the two intervals from which the index is derived is relatively small, e.g. 0.058 *μ**m* for the H2O index or 0.103 *μ**m* for the WH index. For our purposes we require a flux ratio that maximises the contrast between the numerator and the denominator. This is achieved by using the flux ratio between 1.675-1.685 *μ**m* and 1.495-1.505 *μ**m*. The first interval is chosen because it corresponds to the position of the H-band peak for young objects with M6-M9 spectral type. The second interval marks the lowest flux level on the blue side of the H-band peak for such objects. lccccc ITG2 & M7.25 & 11.540 & 10.097 & 2.40 & 1.050 J0444 & M7.25 & 12.195 & 10.761 & 2.35 & 1.124 CFHT6 & M7.5 & 12.646 & 11.368 & 1.51 & 1.135 KPNO2 & M7.5 & 13.925 & 12.753 & 0.93 & 1.136 KPNO5 & M7.5 & 12.640 & 11.536 & 0.56 & 1.137 CFGH3 & M7.75 & 13.724 & 12.367 & 1.94 & 1.184 J0441 & M7.75 & 13.730 & 12.220 & 2.77 & 1.098 CFHT4 & M7.0 & 12.168 & 10.332 & 4.53 & 1.009 MHO4 & M7.0 & 11.653 & 10.567 & 0.47 & 1.174 KPNO7 & M8.25 & 14.521 & 13.271 & 1.37 & 1.121 KPNO1 & M8.5 & 15.101 & 13.772 & 1.78 & 1.203 KPNO6 & M8.5 & 14.995 & 13.689 & 1.63 & 1.185 KPNO9 & M8.5 & 15.497 & 14.185 & 1.69 & 1.155 LRL405 & M8.0 & 15.28 & 13.91 & 2.01 & 1.126 J0457 & M9.25 & 15.771 & 14.484 & 1.56 & 1.240 KPNO4 & M9.5 & 14.997 & 13.281 & 3.91 & 1.342 KPNO12 & M9.0 & 16.305 & 14.927 & 2.01 & 1.261 J1207 & M8.0 & 12.995 & 11.945 & 0.81 & 1.226 J1139 & M9.0 & 12.686 & 11.503 & 0.99 & 1.336 J1245 & M9.5 & 14.518 & 13.369 & 0.81 & 1.306 This index, dubbed HPI for H-peak index, is illustrated in the left and middle panel of Fig. [f4] where we show literature spectra for young and old brown dwarfs in Taurus with spectral types M7 and M9.5 compared with FMOS data for two of the brown dwarfs in NGC1333. The H-band peak clearly is highly sensitive to the spectral type in this regime. We expect that this index increases from mid M to late M spectral types. The figure also demonstrates the advantages of this index at low signal-to-noise ratio. To calibrate the HPI, we use literature spectra for 20 young brown dwarfs with spectral types M7-M9.5, for which spectra are publicly available in the SpeX Prism Spectral Libraries. Their spectra have been dereddened using the reddening law *A**λ* = (*λ*/1.235 *μ**m*)− 1.61 ⋅ *A**J* and the relation *A**J* = 0.3088 ⋅ *A**V*. The optical extinction *A**V* is derived from their *J* − *K* colours: *A**V* = [(*J* − *K*) − (*J* − *K*)0]/0.1844 These relations are based on the extinction law from for *R**V* = 4.0, which is used throughout this paper. We note that *R**V* varies within star forming regions typically from 3 to 5; the adopted value is an average from the values measured by for diffuse and dense regions of the interstellar medium. It is also a reasonable average for our target region NGC1333. For the reasons outlined in we use (*J* − *K*)0 = 1.0. The resulting uncertainty in *A**V* is  ∼ 1mag. For example it is possible that we overestimate *A**V* due to the presence of K-band excess from a disk. This induces an uncertainty of up to 0.04 in the HPI. Note that the literature spectral types for the calibrators have been determined in the optical by comparison with templates. The internal accuracy of these optical types is typically  ± 0.25 subtypes. The calibrators are listed in Table [t3]. H-peak index (HPI) calculated as flux ratio between 1.675-1.685 *μ**m* and 1.495-1.505 *μ**m* plotted vs. (optical) spectral type for published spectra from and. The correlation after excluding the outlier MHO4 at M7 is SpT =  − 0.84 + 7.66 ⋅ HPI (dashed line). A typical errorbar is overplotted. [f3] In Fig. [f3] we plot the HPI for the 20 calibrators. The plot shows the expected correlation between index and spectral type. The one outlier at spectral type M7 corresponds to the object MHO4 and deviates in its HPI by 3*σ* from a linear fit. Nothing particular can be seen in its spectrum which would explain this discrepancy. Based on the H-band peak, MHO4 appears to have a later spectral type than indicated in the literature. Excluding MHO4, we derive a correlation of SpT =  − 0.84 + 7.66 ⋅ HPI with an 1*σ* scatter of 0.37 subclasses. The 1*σ* scatter in HPI around the correlation is 0.039. As outlined above, this can be attributed to the uncertainty in the extinction. The HPI is properly calibrated for types M7-M9.5, but may also hold for later spectral types as the H-band peak increases in strength in the L-type regime (see below). We note that this correlation does not hold for field dwarfs. As seen in the middle panel of Fig. [f4] these objects have flatter H-band peaks which results in lower values for the HPI for a given spectral type. Using this index we determined spectral types for all 26 objects in the YVLM sample, after dereddening using the corrected extinction derived in Sect. [s33]. 9 of them have published spectral types. Five of them (MBO69, MBO54, MBO77, ASR29, ASR83) are based on an index defined in the K-band. The other four have the Spitzer IDs 131, 104, 118, and 46 in and were classified in the optical. Excluding MBO54, ASR29, Sp 104 and Sp 118 which have a published spectral type of M6 or earlier, for which the HPI is not properly calibrated, the HPI types deviate by -0.4, -0.2, -0.2, -0.1, +0.6 subtypes from the published ones, which provides some reassurance in the usefulness of the HPI. The 10 new objects have spectral types of M6.9 or later, classifying them as likely brown dwarfs. The spectral types for the new and known sources are listed in Table [t1] and [t2] respectively. In the right panel of Fig. [f4] we compare FMOS data for the four latest-type SONYC objects with spectra for young ultracool objects with published spectral types: the M8 companion to HIP78530, the L2 companion to the TW Hya brown dwarf 2M1207, and the L4 companion to 1RXSJ1609-2105B, all three with ages of 5-10Myr. Using the HPI, we get spectral types of M8, L2, and L3.5 for these three comparison objects, consistent with the literature values, which indicates that HPI could be useful for spectral typing of young early L-dwarfs as well. In this plot we approximate the spectral slopes for the two faintest objects in our sample with linear fits on either side of the H-band peak, to facilitate the comparison. The plot demonstrates that objects SONYC-NGC1333-1, 30, 31 are about or later than M8 and clearly earlier than L2, in line with our classification. The object SONYC-NGC1333-36 compares well with the L2 and L4 templates, which makes it the coolest object discovered in NGC1333 so far. We note that the redward slope of the H-band peak appears anomalously steep for SONYC-NGC1333-31 and 36. This is not introduced by the linear fit or the treatment of the spectra, and it is not seen in the other FMOS spectra for NGC1333 or *ρ*Oph (Muzic et al., subm.), which makes a calibration problem unlikely. The effect is difficult to explain, as these two spectra are very noisy, but it could in principle be a real feature, especially since the data is well-matched by the model spectrum (see Sect. [s33]). Better quality spectra are needed to verify the parameters for these objects. Since the HPI is defined for the blueward slope of the H-band peak, this does not affect the spectral typing. Model fitting ------------- For the 26 objects in the YVLM sample the FMOS spectra were compared with AMES-DUSTY models for low gravity (log*g* = 3.5 or, if not available, 4.0) low-mass stars[1](#fn1). Using the extinction law described in Sect. [s32] we calculated a model grid for *T*eff = 1500 to 3900K in steps of 100K and *A**V* = 1 to 20mag in steps of 1mag. The models were binned to 5Å, the same binsize as the FMOS data. The fitting was done in a semi-interactive way. Since extinction and effective temperature cannot be determined separately with low-resolution, low signal-to-noise spectra, we started by adopting the *A**V* determined from the *J* − *K* colour, using Equ. (1). For each of the YVLM objects, we calculated the following test quantity (*O* - observed flux; *M* - model flux; *N* - number of datapoints): $$\chi = \frac{1}{N} \sum\_{i=1}^{N} \frac{(O - M)^2}{M}$$ This was done for the series of models using the ’photometric’ *A**V*; we selected the one with the minimum *χ*, which is typically between 0.005 and 0.05 (with one exception with 0.2). This means that the average deviation between observed and model spectrum in a given wavelength bin of 5Å is in the same range as the noise in the observations. Usually a few model spectra (2-4) give indistinguishable *χ*; we adopt the average *T*eff from these best fitting models. A visual inspection of the observed spectra with models for a range of temperatures shows that clear discrepancies are visible for temperatures which are  > 200K different from the adopted value, i.e. the uncertainty in the adopted values is  ± 200K. We note that effective temperatures are necessarily model dependent; our values should only be interpreted in the context of the AMES-DUSTY models. The best fitting models are plotted as thin, dashed lines in Fig. [f20] for the 10 newly identified objects. In 6/10 cases this initial fit is already convincing. In two more cases, it can be improved by slightly adjusting *A**V* by 1mag, which is within the uncertainty for *A**V* (see Sect. [s32]). The two remaining cases give the best fit when *A**V* is changed by 2mag compared with the initial estimate. In Table [t1] we list the photometric and adjusted value for *A**V* and the best fit value for the effective temperature. For SONYC-NGC1333-36, the coolest object in our sample, we find an effective temperature of 2250K, which is significantly higher than the published values for the two comparison objects shown in Fig. [f4] (right panel). For 1RXSJ1609-2105B find 1800 ± 200K based on DUSTY and Drift-Phoenix models. For 2M1207B, two independent groups determined 1600 ± 100K, again based on comparison with DUSTY spectra, although suggest that the actual value might be as low as 1000K. The DUSTY spectra for  < 2000K are clearly not in agreement with our spectrum for SONYC-NGC1333-36. One possible explanation is that model fitting for the comparison objects has been done over the full near-infrared range, whereas in our case *T*eff is mostly fixed by the shape of the H-band peak. We will revisit this issue in Sect. [s34]. Comparison between the spectral types and effective temperatures for the YVLM sample, as determined in this paper (Sect. [s32] and [s33]). A typical errorbar is overplotted. The solid line is a linear fit to the datapoints excluding the 3 outliers marked with squares. We also show the effective temperature scales by and. [f12] In Fig. [f12] we compare the derived effective temperatures with the spectral types determined with the HPI in Sect. [s32]. The expected trend – cooler temperatures correspond to later spectral types – is visible, with only three exceptions. For two of these exceptions (SONYC-NGC1333-17 and -31), the signal-to-noise ratio in the spectrum is very poor ( ∼ 10), i.e. the uncertainties in spectral type and temperature are large. For SONYC-NGC1333-17 there are two published spectral types of M8 and M8.7, which would move the datapoint closer to the general trend. The third outlier (SONYC-NGC1333-33) has an unusual dip in the spectrum around 1.65 *μ**m*, which decreases our spectral type estimate by  ∼ 1 subtype, but does not affect the model fitting. Excluding these three outliers, the datapoints are well-approximated by a linear trend (solid line): *T*eff = 4191 − 157 × SpT (where SpT corresponds to the M subtype). The plot also shows the effective temperature scales by and, which are derived using the optical portion of the spectrum. The two scales agree fairly well with each other, although the Mentuch et al. scale is an extrapolation and has not been directly calibrated for *T*eff < 3000K. The trend seen in our data is consistent with these lines. Our datapoints are, however, mostly above the two lines, indicating that we systematically overestimate the temperatures (by  ∼ 200K) or the spectral types (by 0.5-1 subtypes). We explored possible reasons for this discrepancy. One option is a systematic error in the extinction. Decreasing *A**V* by 1mag for all objects would shift their spectral types to earlier types, but only by  ∼ 0.3 subtypes, not sufficient to explain the effect. Moreover, this would also increase the best estimate for *T*eff by typically 200K and thus cause larger disagreement between datapoints and published effective temperature scales. The inverse is true as well – increasing *A**V* would lead to lower *T*eff, as required, but also to later spectral types. Thus, systematic changes in *A**V* do not resolve the problem. Given the good agreement between our spectral types and literature values (Sect. [s32]), we suspect that the offset is most likely due to a problem with the effective temperatures and could indicate issues with the used model spectra. A more extensive comparison of the effective temperature scales is beyond the scope of this paper. Spectral flux distributions --------------------------- To further test the spectral fitting from Sect. [s33] we compare the full photometric spectral flux distributions (SFD) for some selected sources, as far as available, with the AMES-DUSTY model spectra. We include our own photometry in izJK as well as Spitzer/IRAC data from the C2D ’HREL’ catalogue. In Fig. [f5] we show 3 examples. Object SONYC-NGC1333-5 is well-matched by a model spectrum for *T*eff of 2800-2900K for *A**V* = 4mag, in line with the parameters derived from the FMOS spectrum (Table [t2]). This object shows colour excess redwards of 5 *μ**m*, presumably due to the presence of a disk (see Sect. [s45]). The best match for object SONYC-NGC1333-30 is obtained for *T*eff of 2500-2700 with *A**V* of 9-10mag, i.e. the object may be slightly cooler than listed in Table [t1], but still within the uncertainty. For the coolest object SONYC-NGC1333-36 a good match is found for temperatures between 1900 and 2100K and *A**V* of 0-2mag. This is again somewhat cooler than the estimate given in Table [t1]. In general, when comparing with the full SFD the results are similar to the spectral fits to individual wavelength bands in the near-infrared, maybe except for the regime below 2500K where the SFD comparison yields lower temperatures by  ∼ 200K. The comparison also illustrates that the ideal dataset for a characterisation of young brown dwarfs would be a spectrum covering the entire near-infrared domain from 1 to 8 *μ**m*, thus including five broadband features. The brown dwarf population in NGC1333 ===================================== The newly identified very low mass objects in this paper add to the substantial number in the NGC1333 region that have already been confirmed in the literature. In Table [t2] we compile all previously spectroscopically confirmed objects with spectral types of M5 or later or effective temperatures of 3200K or below, from,,, and. Whenever possible, we also re-measured the HPI spectral types for the sources identified in, based on their MOIRCS spectra. In the Table, we list coordinates, photometry, spectral types, effective temperatures, and alternative names. Adding the 10 objects discovered in this paper, the entire sample comprises 51 objects. Not all these objects are brown dwarfs; some are very low mass stars. Based on the COND, DUSTY, and BCAH isochrones [2](#fn2), the hydrogen burning limit at 1Myr would be reached at effective temperatures of 2800-2900K, which is found to correspond to an (optical) spectral type of M6-M7. In Table [t2] we made the cut at M5 to include all borderline cases as well. Taking this into account, the number of confirmed brown dwarfs in NGC1333 is about 35 ± 5. This is currently one of the largest and best characterised populations of substellar objects in a single star forming region. In the following we will investigate the mass function, spatial distribution, and disk properties for this sample. lllcclll S-1 & 03 28 47.66 & +31 21 54.6 & 17.55 & 15.24 & M9.2*a* & 2600*a*, 2800*b* & MBO139 S-2 & 03 28 54.92 & +31 15 29.0 & 15.990 & 14.219 & M7.9*a*, M6.5*b*, M8*d*, M8.6*f* & 2850*a*, 2600*b* & ASR109, Sp 60 S-3 & 03 28 55.24 & +31 17 35.4 & 15.090 & 13.433 & M7.9*c*, M8.2*f* & 2900*b* & ASR38 S-4 & 03 28 56.50 & +31 16 03.1 & 18.17 & 16.73 & M9.6*f* & 2500*b* & S-5 & 03 28 56.94 & +31 20 48.7 & 15.362 & 13.815 & M7.6*a*, M6*b*, M6.8*c* & 2850*a*, 2900*b* & MBO91, Sp 66 S-6 & 03 28 57.11 & +31 19 12.0 & 17.24 & 15.34 & M7.3*a*, M8*b*, M8.0*f* & 3250*a*, 2700*b* & MBO148, ASR64, Sp 23 S-7 & 03 28 58.42 & +31 22 56.7 & 15.399 & 13.685 & M6.5*b*, M7.1*c*, M7.7*f* & 2800*b* & MBO80, Sp 72 S-8 & 03 29 03.39 & +31 18 39.9 & 15.833 & 14.000 & M8.2*a*, M8.5*b*, M7.4*c*, M8.4*f* & 2850*a*, 2600*b* & MBO88, ASR63, Sp 80 S-9 & 03 29 05.54 & +31 10 14.2 & 17.072 & 15.667 & M8*b*, M8.6*f* & 2600*b* & S-10 & 03 29 05.66 & +31 20 10.7 & 17.113 & 15.485 & & 2500*b* & MBO143, Sp 86 S-11 & 03 29 07.17 & +31 23 22.9 & 17.87 & 15.62 & M9.2*f* & (2600)*b* & MBO141 S-12 & 03 29 09.33 & +31 21 04.2 & 16.416 & 13.150 & & (2500)*b* & MBO70, Sp 93 S-13 & 03 29 10.79 & +31 22 30.1 & 14.896 & 12.928 & M7.5*b*, M7.4*c*, M8.1*f* & 3000*b* & MBO62 S-14 & 03 29 14.43 & +31 22 36.2 & 14.606 & 13.035 & M7*b*, M6.6*c*, M7.7*f* & 2900*b* & MBO66 S-15 & 03 29 17.76 & +31 19 48.1 & 14.803 & 12.988 & M7.5*b*, M6.5*c*, M7.8*f* & 3000*b* & MBO64, ASR80, Sp 112 S-16 & 03 29 28.15 & +31 16 28.5 & 13.054 & 12.091 & M8.5*a*, M7.5*b*, M7.5*e*, M9.1*f* & 2850*a*, 2600*b*, 2761*e* & Sp 164 S-17 & 03 29 33.87 & +31 20 36.2 & 16.562 & 15.481 & M7.5*a*, M8*b*, M8.7*f* & 2600*a*, 2500*b* & MBO140 S-18 & 03 29 35.71 & +31 21 08.5 & 18.50 & 16.94 & M8.2*f* & 2500*b* & Sp 129 S-19 & 03 29 36.36 & +31 17 49.8 & 17.91 & 16.38 & & 2700*b* & S-21 & 03 28 47.34 & +31 11 29.8 & 15.484 & 12.702 & & 3100*b* & ASR117 ASR15 & 03 28 56.94 & +31 15 50.3 & 15.056 & 13.461 & M7.4*c*, M6.0*d* & & ASR17 & 03 28 57.15 & +31 15 34.5 & 15.405 & 13.186 & M7.4*c*, M6.0*d* & & Sp 68 MBO73 & 03 28 58.24 & +31 22 09.3 & 16.004 & 13.367 & M6.4*c* & & Sp 70 ASR24 & 03 29 11.30 & +31 17 17.5 & 13.977 & 12.915 & M8.2*c*, M8.0*d* & & MBO69 & 03 29 24.45 & +31 28 14.9 & 14.041 & 12.686 & M7.0*a*, M7.4*c* & 3200*a* & ASR29 & 03 29 13.61 & +31 17 43.4 & 16.441 & 13.028 & M5*d* & 3250*a* & ASR105 & 03 29 04.66 & +31 16 59.1 & 15.550 & 12.665 & M6*d* & & Sp 84 ASR8 & 03 29 04.06 & +31 17 07.5 & 13.310 & 12.313 & M7*d* & & MBO78 & 03 29 00.15 & +31 21 09.2 & 16.466 & 13.349 & M5*d* & & Sp 75 Sp 45 & 03 28 43.55 & +31 17 36.4 & 12.219 & 10.138 & M5.0*e* & 3125*e* & ASR127 Sp 46 & 03 28 44.07 & +31 20 52.8 & 14.245 & 12.627 & M7.3*a*, M7.5*e* & 3050*a*, 2829*e* & Sp 49 & 03 28 47.82 & +31 16 55.2 & 12.940 & 10.909 & M8.0*e* & 2710*e* & ASR111 Sp 53 & 03 28 52.13 & +31 15 47.1 & 13.161 & 12.029 & M7.0*e* & 2846*e* & ASR45 Sp 55 & 03 28 52.90 & +31 16 26.4 & 13.616 & 12.476 & M5.0*e* & 3154*e* & ASR46 Sp 58 & 03 28 54.07 & +31 16 54.3 & 13.027 & 11.599 & M5.0*e* & 3098*e* & ASR42 Sp 94 & 03 29 09.48 & +31 27 20.9 & 14.154 & 12.692 & M5.0*e* & 3098*e* & MBO60 Sp 105 & 03 29 13.03 & +31 17 38.3 & 15.231 & 14.158 & M8.0*e* & 2710*e* & ASR28 Sp 131 & 03 29 37.73 & +31 22 02.4 & 13.987 & 12.958 & M7.6*a*, M7.0*e* & 2850*a*, 2891*e* & MBO65 Sp 157 & 03 29 12.79 & +31 20 07.7 & 14.676 & 13.294 & M7.6*a*, M7.7*c*, M7.5*e* & 2950*a*, 2812*e* & MBO75, ASR83 Sp 177 & 03 29 24.83 & +31 24 06.2 & 14.433 & 13.383 & M6.5*a*, M6.7*c*, M6.5*e* & 3000*a*, 2957*e* & MBO77 Sp 71 & 03 28 58.24 & +31 22 02.1 & 14.912 & 12.406 & M6*e* & 2990*e* & MBO47 The number of brown dwarfs -------------------------- Based on our comprehensive spectroscopy, we can put some constraints on the total number of brown dwarfs in NGC1333 and the mass limits of the current surveys. For this purpose, we use the iz survey presented in. In Fig. [f8] we plot the iz colour-magnitude diagram for the 196 candidates selected in. The confirmed brown dwarfs (Tables [t1] and [t2]), either by us or other groups, are marked with squares; all objects for which we have obtained spectroscopy with crosses. Note that the iz candidates are selected only with a cut in colour and a cut in PSF shape to rule out extended objects. No other selection criteria have been used, i.e. this sample is as unbiased as possible. Colour-magnitude diagram for the iz candidates (dots), originally identified in. Crosses are objects for which we have obtained spectra in this paper or. Confirmed very low mass objects from this paper or the literature are marked with squares. The horizontal dashed line shows the completeness limit of the survey estimated in. [f8] We have useful spectra for 98/196 candidates; out of these 98, 24 are confirmed by our spectra. In total the iz sample contains 35 confirmed objects with $A\_V \lesssim 12$mag. Thus, we have a yield of 24/98 (24%) and would expect to find 24 more objects among the candidates for which we do not have spectra. Since 11 of them (35 minus 24) have already been confirmed by other groups, the expected number of additional very low mass objects from this iz selection is 13. The low-mass end of the diagram deserves additional discussion. The faintest confirmed brown dwarfs in Fig. [f8] are SONYC-NGC1333-1, 30, and 36 at *i* = 23.4 − 23.7. Comparing their effective temperatures with the 1Myr DUSTY and COND isochrones, it seems likely that they have masses of $\lesssim 0.02\,M\_{\odot}$. If our estimate of *T*eff = 2000K for SONYC-NGC1333-36 is correct (Sect. [s34]), the best mass estimate would be in the range of 0.006 *M*⊙. We have taken spectra for 7 fainter objects, but none of them is a brown dwarf, which might indicate that we have reached the ’bottom of the IMF’, as preliminarily stated in. However we may not be 100% complete in this magnitude range. The formal completeness limits of the iz survey at *i* = 24.8mag, determined from the peak of the histogram of the magnitudes, is shown with a dashed line. This limit has been derived for a field of view of 34ʹ × 27ʹ, but most of the cluster members are located in a smaller region of 10ʹ × 10ʹ which is partially affected by significant background emission from the cloud (see Fig. [f50]. Thus, the completeness limit in the relevant areas might not always reach the value shown in Fig. [f8]. Thus, based on our new data we retract the previous claim by stating a deficit of objects with *M* < 0.02 *M*⊙ in NGC1333, for two reasons: a) The updated brown dwarf census contains a few of objects with masses at or below 0.02 *M*⊙, including one with an estimated mass of 0.006 *M* ⊙ . b) The current survey may not be complete at the lowest masses, i.e. we cannot exclude the presence of a few more objects with *M* < 0.02 *M*⊙. The census for *M* > 0.02 *M*⊙ is more robust. From the 35 confirmed members in the iz diagram we subtract 10 which are probably slightly above the substellar boundary (see discussion in Sect. [s4]). We also subtract the 3 which are likely below 0.02 *M*⊙ and add the 13 which we are still missing. This gives a total number of  ∼ 35 brown dwarfs down to 0.02 *M*⊙ and with $A\_V\lesssim 12$mag. The star vs. brown dwarf ratio ------------------------------ As a proxy for the shape of the mass function, previous authors have used the ratio of stars to brown dwarfs, where these two groups are defined by a range of masses. These ratios are more robust against uncertainties in the masses than a complete IMF. use a range of 0.08-1.0 *M*⊙ for stars and 0.03-0.08 *M*⊙ for brown dwarfs, hereafter called *R*1. Other authors use 0.08-10 *M*⊙ for stars and 0.02-0.08 *M*⊙ for brown dwarfs, hereafter called *R*2. Since the number of high-mass stars is small, the two ratios *R*1 and *R*2 should be fairly similar. The comparison with NGC1333 is complicated by the fact that no comprehensive spectroscopic census is available for the stars. The best starting point is probably the Spitzer analysis by. They find a total of 137 Class I and II members with disk, from which 94 are in 2MASS. Objects not detected in 2MASS are likely embedded sources with high extinction *A**V* > 10mag and thus not comparable with our brown dwarf sample. We calculated absolute J-band magnitudes for this sample using the dereddening described in Sect. [s32] and assuming a distance of 300pc, which gives a range of *M**J* = 0 − 9mag. Comparing with the BCAH 1Myr isochrone the sample contains 13 objects with *M* > 1.0 *M*⊙, 52 objects with 0.08 < *M* < 1.0 *M*⊙ and 29 with 0.02 < *M* < 0.08 *M*⊙. These are only objects with disks; correcting for a disk fracton of 83% shifts the numbers to 16, 63, 35. The latter number is consistent with the estimate of brown dwarfs in this cluster given in Sect. [s41]. Out of 35 brown dwarfs, the number of objects with masses above 0.03 *M*⊙ would be 28. Based on these estimates the ratios for NGC1333 become *R*1 = 63/28 = 2.3 ± 0.5 and *R*2 = 79/35 = 2.3 ± 0.5 (see below for an explanation of the uncertainties). Our value for *R*1 is somewhat larger than our first estimate given in of 1.5 ± 0.3, mainly because we use here the cutoff at 0.03 *M*⊙ to be consistent with. The uncertainties for *R*1 and *R*2 stated above are 1*σ* confidence intervals and have been derived based on the prescription provided by. This prescription is given for population proportions (’success counts’). Therefore, we use the Cameron equation to calculate the confidence intervals *σ**k*1 for the ratio of number of stars to the sum of stars and brown dwarfs (*k*1) and *σ**k*2 for the ratio of number of brown dwarfs to the same sum (*k*2). The confidence intervals for *R*1 and *R*2 are then derived as follows: $$R\_{\mathrm{max}} = \frac{k1 + \sigma\_{k1}}{k2 - \sigma\_{k2}}$$ $$R\_{\mathrm{min}} = \frac{k1 - \sigma\_{k1}}{k2 + \sigma\_{k2}}$$ lll NGC1333 & 2.3 (1.8-2.8) & 2.3 (1.8-2.7) ONC & 3.3 (2.8-3.9) & 3.8,4.5,5.0 UpSco & 3.8 (3.1-4.5) & – NGC2024 & 3.8 (2.6-5.2) & – Chamaeleon & 4.0 (2.3-6.0) & 3.9 (2.9-5.0) *ρ*-Oph & 5.1 (3.8-6.4) & 4.8 (3.7-6.0) Taurus & 6.0 (4.5-7.7) & 7.3 (5.1-9.6) IC348 & 8.3 (6.4-10.5) & 8.0 (6.3-10.0) In Table [t4] we compare the ratios for NGC1333 with the available literature values for *R*1 and *R*2 for other regions. The same numbers are plotted in Fig. [f40] for illustration. To have accurate and consistent confidence intervals, we re-calculated the errors for all literature values as described above. NGC1333 has the lowest ratios measured so far in any star forming region, suggesting that the number of brown dwarfs in NGC1333 is unusually high, which is in line with the conclusion in. In particular, the ratios for NGC1333 deviate by more than 2*σ* from those in IC348. It should be noted that the current census for IC348 is nearly complete down to 0.03 *M*⊙ and covers most of the cluster, which makes the difference to NGC1333 even more striking. Star vs. brown dwarf ratios for various star forming regions, from Table [t4]. *R*1 is plotted with squares, *R*2 with circles. The number on the y-axis is identical with the row in Table [t4]. [f40] This finding has to be substantiated with further survey work in diverse regions. In Table [t4] we list the statistical 1*σ* confidence intervals, purely based on the sample sizes. These statistical errors do not take into account additional sources of uncertainty, e.g. unrecognised biases, inconsistencies in sample selection or problems with the mass estimates, i.e. the actual errors may be larger than listed in Table [t4]. In particular, it is important to note that all mass estimates are necessarily model-dependent. For the value in NGC1333 we use the BCAH isochrones, mainly because they cover the brown dwarf regime down to the Deuterium burning limit. The problems and uncertainties of these type of models at very young ages are well-documented. The best way of assessing the overall uncertainties is to compare results from independent surveys. As can be seen in Table [t4], so far the results from independent groups agree within the statistical errorbars, with the possible exception of the ONC.[3](#fn3) Such an independent confirmation is required for NGC1333 as well. If confirmed, the unusually low ratio of stars to brown dwarfs in NGC1333 could point to regional differences in this quantity, possibly indicating environmental differences in the formation of very low mass objects. One option to explain this is turbulence, as very low mass cores which can potentially collapse to brown dwarfs could be assembled by the turbulent flow in a molecular cloud. At first glance this could be a realistic possibility for NGC1333, where the cloud is strongly affected by numerous outflows, although it is not clear if the turbulence in NGC1333 is mainly driven by these outflows. Alternatively, additional brown dwarfs could form by gravitational fragmentation of gas falling into the cluster center. This latter mechanism would benefit from the fact that NGC1333 has a higher stellar density and thus a stronger cluster potential than most other nearby star forming regions. Spatial distribution -------------------- In Fig. [f9] we show the spatial distribution of the sample of very low mass objects listed in Tables [t1] and [t2] (squares). For comparison, the positions of the 137 Class I and Class II sources are overplotted with crosses. The dots indicate the positions of all targets for which we have obtained spectra but which are not confirmed as very low mass objects. Additionally, we show the frequency of objects as a function of distance from the cluster center in Fig. [f30], again for the same three samples, and in addition for all photometric candidates from our iz catalogue (dash-dotted line). Spatial distribution of NGC1333 members. Crosses are all 137 objects with Spitzer excess by, squares are all confirmed brown dwarfs (Tables [t1] and [t2]). Objects with spectroscopy for which we can exclude that they are substellar members are shown with dots. [f9] Histogram of the distance from the cluster center for confirmed brown dwarfs (solid line, Tables [t1] and [t2]), objects with Spitzer excess by (dashed line), objects for which we can exclude that they are substellar members (dotted line), and all IZ candidates (dash-dotted line). As cluster center we used the average position of the Class I/II sources. [f30] In the two figures, the spatial distribution of brown dwarfs is strongly clustered and indistinguishable from the distribution of the total population of Class I/II sources in NGC1333. For the two samples, the average *α* and *δ* differ only by 0.4’ and 0.3’, respectively, which is  < 10% of the cluster radius. Adopting the average position of the Class I/II sources as cluster center, the average distance from the center is similar in the two samples, 5.2’ for the very low mass objects and 5.5’ for the Class I/II sources. The fraction of objects with distance from the cluster center of *d* < 0.1, 0.1 < *d* < 0.2, *d* > 0.2deg is 65, 35, 0% for the very low mass objects and 67, 26, 6% for the Class I/II objects. The scatter in the positions is *σ**α* = 0.071 and *σ**δ* = 0.069deg for the very low mass objects, and *σ**α* = 0.067 and *σ**δ* = 0.085deg for the Class I/II sources. For all these quantities there are no significant differences between the two samples. The figures also show that our spectroscopic follow-up covers an area that is about 1.5-2 as large (in radius) than the cluster itself. We took spectra for 31 candidates with distances of  > 0.2deg from the cluster center, but none of them turned out to be a brown dwarf. There are still 43 candidates from the IZ photometry outside 0.2deg (see Sect. [s41]) for which we do not have spectra, but based on our current results, it is unlikely that they contain any very low mass cluster members. Thus, our wide-field follow-up spectroscopy shows that there is no significant population of brown dwarfs at *d* > 0.2deg from the cluster center, corresponding to  ∼ 1pc at a distance of 300pc. It has been suggested that gravitational ejection occurs at an early stage in the evolution of substellar objects, either from multiple stellar/substellar systems or from a protoplanetary disk. This ejection is thought to remove the objects from their accretion reservoir and thus sets their masses. In these scenarios one could expect the brown dwarfs to have high spatial velocities in random directions. An ejection velocity of 1kms− 1 would allow the object to travel 1pc in 1Myr, i.e. in the case of NGC1333 this would be sufficient to reach the edge of the cluster. However, the gravitational potential of the cluster will significantly brake the motion of the brown dwarf. Assuming a total cluster mass of 500 *M*⊙ homogenuously distributed in a sphere with 1pc radius, a brown dwarf that gets ejected in the cluster center with 1.5, 2, 3kms− 1 would reach a velocity of approximately 0.5, 1.4, 2.6kms− 1 at a distance of 1pc from the center. All objects with ejection velocities of $\gtrsim 2$kms− 1 would have moved to distances significantly larger than 1pc over 1Myr. As shown above, the presence of such objects can be excluded from our data. The scenarios by and predict that a substantial fraction of ejected brown dwarfs (more than 50% in some simulations) exceed this velocity threshold of 2kms− 1. These models would require some tuning to reproduce a spatial distribution as observed in NGC1333. However, such simplified scenarios do not take into account that dynamical interactions affect the total cluster population, not exclusively the brown dwarfs. The cluster formation simulations by show that the velocity dispersion in a dense cluster is not expected to increase in the very low mass regime. Although brown dwarfs undergo ejection in the simulations, this does not lead to a velocity offset in comparison to the stars. NGC1333 seems to be consistent with this picture. As a side comment, we note that the parameters in the main simulation in with gas mass of 500 *M*⊙ and cloud radius of 0.4pc are fairly similar to the properties of NGC1333, although the simulation produces a much higher number of stars and brown dwarfs (total stellar mass of 191 *M*⊙ vs.  ∼ 50 *M*⊙ in NGC1333). Disks ----- In Fig. [f10] we plot the Spitzer/IRAC colour-colour diagram for the sample listed in Tables [t1] and [t2], again based on the C2D-’HREL’ catalogue. Out of the sample of 51 sources with confirmed spectral type M5 or later, 41 have photometric errors  < 40% in all four IRAC bands and are shown in this plot. The figure shows the typical appearance with two groups, one around the origin, the second with significant colour excess in mid-infrared bands due to the presence of circum-(sub)-stellar material. In this sample of 41 objects, 27 show evidence for a disk, i.e. 66 ± 8%. All of them have colours comparable to the Class II sources identified in. The derived disk fraction of 66% is only valid for the sample of 41 objects with reliable Spitzer detection. In the entire sample of 51 very low mass sources in NGC1333 listed in Tables [t1] and [t2], the disk fraction is likely to be smaller, because the ten objects which are not detected by Spitzer are unlikely to have a disk. Correcting for this effect, the disk fraction in the full sample could be as low as 27/51 or 55%. Therefore, we consider the disk fraction of 66% to be an upper limit. For comparison, for the total cluster population derive a disk fraction of 83% from a Spitzer survey. This number has been derived for objects with *K* < 14mag. This magnitude limit was chosen by because it corresponds to *M* = 0.08 *M*⊙ at age of 1Myr and *A**V* = 20mag. Their sample thus includes mostly stars, but also some brown dwarfs (as evident from Table [t2], which contains a number of objects from the sample, marked with ’Sp’.) The Spitzer sample contains a substantial number of objects with *A**V* > 10mag, which are rare among the currently known brown dwarfs. It is possible that some of the heavily embedded brown dwarfs with *A**V* > 10mag have not been found yet. This could explain the discrepancy in the disk fractions. Our disk fraction is consistent with the values derived for very low mass members in *σ*Ori, Chamaeleon-I, and IC348 although all three regions are thought to be somewhat older (2-3Myr) than NGC1333. A more detailed SED analysis was carried out for objects with an additional datapoint at 24 *μ**m*. 19 of the objects in Fig. [f10] have MIPS fluxes at 24 *μ**m* with errors  < 40%. At this wavelength the images are strongly affected by the cloud emission and blending. To make sure that the fluxes are trustworthy, we checked all objects in a 24 *μ**m* image obtained in the Spitzer program #40563 (PI: K. Wood, AOR 23712512), which is deeper than the C2D mosaics. After visual inspection, 3 objects were discarded; the remaining 16 are point sources at 24 *μ**m* and are marked with squares in Fig. [f10]. Spitzer/IRAC colour-colour diagram for the very low mass sources listed in Tables [t1] and [t2], using the ’HREL’ photometry from the C2D survey. We only plot objects with photometric errors  < 40% in all four bands (41 out of 51). The position of the Class II objects from the survey by is indicated by the dashed box. Objects with reliable detection at 24 *μ**m* are marked with squares [f10] Spectral energy distributions for 16 very low mass objects with disks in NGC1333 (crosses). The SEDs have been dereddened and scaled to the J-band fluxes. We overplot the typical SED for NGC1333 (solid line), *ρ*Oph (dotted line) Taurus (dashed line), and UpSco (dash-dotted line). We also show the photospheric SED from the 2800K DUSTY-AMES model with small dots. A typical errorbar for the 24 *μ* fluxes is overplotted. [f11] In Fig. [f11] we show their SEDs after dereddening (see Sect. [s32]) and scaling to the J-band flux (crosses). For comparison, the photospheric SED from a model spectrum is overplotted with small dots. To assess the disk evolution in the substellar regime, we derive the typical SED for NGC1333 and three other star forming regions: *ρ*Oph (1Myr), Taurus (2Myr), and UpSco (5Myr). For this purpose we selected the objects which are detected in all four IRAC bands and at 24 *μ**m*. For *ρ*Oph we started with the census in Muzic et al. (subm.) and made use of the C2D data. For Taurus we used published Spitzer data from and. For UpSco the data from was used. When comparing the SEDs from different regions, one has to take into account that the depth of the 24 *μ**m* observations is not the same; thus the median SED is affected by incompleteness at low flux levels. Instead we plot the SED for the object that has the 10th highest flux level at 24 *μ**m* after converting to *λ**F**λ* and scaling to the J-band flux. This represents an estimate for a typical SED unaffected by the depth of the Spitzer observations and the distance to the cluster. Note that all objects used for this exercise are spectroscopically confirmed members of the respective clusters. For wavelengths  < 8 *μ**m* the four median SEDs are fairly similar. At 8 *μ**m* the SEDs in the youngest regions (NGC1333, *ρ*Oph) are slightly enhanced. The biggest differences are seen at 24 *μ**m*, particularly when comparing NGC1333 with UpSco. This is mostly due to the fact that NGC1333 harbours a few objects with unusually strong excess emission, which are missing in UpSco, indicating that the objects in NGC1333 are in an early evolutionary stage compared with the other regions. As demonstrated in a large spread in 24 *μ**m* fluxes, as seen in NGC1333, can easily be explained by a range of flaring angles in the disks. Conclusions =========== As part of our survey program SONYC, we present a census of very low mass objects in the young cluster NGC1333 based on new follow-up spectroscopy from Subaru/FMOS. To derive reliable spectral types from our data, we define a new spectral index based on the slope of the H-band peak. We find 10 new likely brown dwarfs in this cluster, including one with a spectral type  ∼ L3 and two more with spectral type around or later than M9. These objects have estimated masses of 0.006 to 0.02 *M*⊙, the least massive objects identified thus far in this region. This demonstrates that the mass function in this cluster extends down to the Deuterium burning limit and beyond. By combining the findings from our SONYC survey with results published by other groups, we compile a sample of 51 objects with spectral types of M5 or later in this cluster, more than half of them found by SONYC. About 30-40 of them are likely to be substellar. The star vs. brown dwarf ratio in NGC1333 is significantly lower than in other nearby star forming regions, possibly indicating environmental differences in the formation of brown dwarfs. We show that the spatial distribution of brown dwarfs in NGC1333 closely follows the distribution of the stars in the cluster. The disk fraction in the brown dwarf sample is  < 66%, lower than for the stellar members, but comparable to the brown dwarf disk fraction in 2-3Myr old regions. The substellar members in NGC1333 show a large fraction of highly flared disks, evidence for the early evolutionary state of the cluster. The authors would like to thank the Subaru staff, especially Dr. Naoyuki Tamura and Dr. Kentaro Aoki, for the assistance during the observations and their preparation. We are grateful to Ms. Yuuki Moritani, Mr. Kiyoto Yabe and Prof. Fumihide Iwamuro (Kyoto University) for their help with the FMOS data reduction. We thank the anonymous referee for a constructive report that helped to improve the paper. This work made use of results from the Spitzer program #40563; we thank Jane Greaves, Chris Poulton, and Kenneth Wood from the University of St. Andrews for their help in the framework of this campaign. We also thank Ewan Cameron for discussing the calculation of confidence intervals and David Lafrenière for making his spectra available to us. AS acknowledges financial support the grant 10/RFP/AST2780 from the Science Foundation Ireland. The research was supported in large part by grants from the Natural Sciences and Engineering Research Council (NSERC) of Canada to RJ. This research has benefitted from the SpeX Prism Spectral Libraries, maintained by Adam Burgasser at http://pono.ucsd.edu/~adam/browndwarfs/spexprism/. Spectroscopically excluded objects ================================== In Tables [t10] and [t11] we provide a full list of objects for which we obtained spectra and which were not classified as young very low mass objects based on the shape of their near-infrared spectrum (see Sect. [s31]). The spectra come from the first campaign with MOIRCS and from the second run with FMOS (this paper). Most of these objects are likely to be either young stellar objects in NGC1333 or background stars with effective temperatures above  ∼ 3500K or spectral type earlier than  ∼ M3. In Table [t10] we also give the J- and K-band photometry from 2MASS and the identifiers from the photometric surveys by and, if available. Objects without listed identifiers are not known to have a counterpart within 1". llccccl 03 29 18.71 & +31 32 26.4 & 16.722 & 13.839 & IZ & F & 03 29 52.35 & +31 28 13.7 & 14.919 & 13.829 & IZ & F & 03 29 48.12 & +31 28 29.4 & 15.550 & 13.663 & IZ & F & 03 29 37.80 & +31 27 48.4 & 17.274 & 15.249 & IZ & F & 03 29 34.76 & +31 29 08.1 & 13.647 & 11.532 & IZ & F & MBO 43 03 29 11.53 & +31 30 05.6 & 16.740 & 13.786 & IZ & F & [LAL96] 241, 242 03 28 55.74 & +31 30 58.0 & 15.175 & 13.390 & IZ & F & [LAL96] 143 03 28 52.66 & +31 32 04.3 & 16.563 & 14.730 & IZ & F & 03 28 46.67 & +31 31 35.4 & 15.647 & 13.933 & IZ & F & 03 28 37.55 & +31 32 54.5 & 15.107 & 14.232 & IZ & F & 03 28 23.84 & +31 32 49.3 & 14.714 & 13.655 & IZ & F & 03 28 44.96 & +31 31 09.9 & 17.057 & 15.211 & IZ & F & 03 28 46.24 & +31 30 12.1 & 15.089 & 14.069 & IZ & F & [LAL96] 88 03 29 45.53 & +31 26 56.5 & 14.870 & 14.023 & IZ & F & 03 30 05.41 & +31 25 13.1 & 14.921 & 13.680 & IZ & F & 03 28 49.48 & +31 25 06.6 & 15.957 & 13.658 & IZ & F & MBO 82 03 29 10.46 & +31 23 34.8 & 15.633 & 12.762 & JK & F & [LAL96] 231 03 28 43.17 & +31 26 06.1 & 15.617 & 13.384 & IZ & F & [LAL96] 78 03 28 37.75 & +31 26 32.8 & 16.847 & 14.340 & IZ & F & [LAL96] 60 03 27 56.27 & +31 27 00.8 & 16.836 & 13.967 & IZ & F & 03 28 07.64 & +31 26 42.4 & 15.974 & 13.719 & IZ & F & 03 28 19.51 & +31 26 39.5 & 15.100 & 13.687 & IZ & F & [LAL96] 13 03 28 40.22 & +31 25 49.1 & 15.637 & 12.911 & IZ & F & 03 28 47.64 & +31 24 06.2 & 14.199 & 11.660 & JK & F & [LAL96] 93 03 28 55.22 & +31 25 22.4 & 14.735 & 12.550 & IZ & F & [LAL96] 139 03 29 03.32 & +31 23 14.8 & 17.254 & 14.071 & JK & F & [LAL96] 191 03 29 03.13 & +31 22 38.2 & 13.724 & 11.323 & JK & F & [LAL96] 189 03 29 27.61 & +31 21 10.1 & 14.836 & 13.074 & IZ & F & [LAL96] 324 03 29 55.50 & +31 15 30.5 & 15.120 & 14.131 & IZ & F & 03 29 52.65 & +31 17 22.9 & 16.325 & 14.970 & IZ & F & 03 29 39.61 & +31 17 43.4 & & & JK & F & 03 28 35.46 & +31 21 29.9 & 16.052 & 14.197 & IZ & F & [LAL96] 49 03 28 48.45 & +31 20 28.4 & 16.842 & 14.283 & IZ & F & [LAL96] 100 03 29 10.82 & +31 16 42.7 & 15.652 & 13.039 & JK & F & [LAL96] 235 03 29 21.42 & +31 15 55.3 & 15.396 & 14.004 & IZ & F & [LAL96] 305 03 29 45.41 & +31 16 23.2 & 15.118 & 13.767 & IZ & F & 03 29 50.23 & +31 15 47.9 & 16.980 & 15.106 & IZ & F & 03 30 01.93 & +31 10 50.5 & 14.374 & 12.921 & IZ & F & 03 29 54.78 & +31 11 41.7 & 16.679 & 15.253 & IZ & F & 03 29 34.57 & +31 11 23.8 & 16.750 & 14.643 & IZ & F & [LAL96] 351 03 29 36.00 & +31 12 49.6 & 15.522 & 14.615 & IZ & F & [LAL96] 353 03 29 26.11 & +31 11 36.9 & 14.781 & 12.874 & IZ & F & [LAL96] 320 03 28 01.19 & +31 17 36.5 & 15.587 & 14.136 & IZ & F & 03 27 52.51 & +31 19 38.8 & 14.973 & 13.662 & IZ & F & 03 28 22.90 & +31 15 21.7 & 15.032 & 13.587 & IZ & F & [LAL96] 20 03 29 08.71 & +31 12 01.9 & & & IZ & F & 03 29 12.24 & +31 12 20.5 & 16.588 & 14.722 & IZ & F & [LAL96] 254 03 29 18.45 & +31 11 30.5 & 16.413 & 15.530 & IZ & F & 03 29 31.00 & +31 11 20.1 & 17.052 & 15.265 & IZ & F & 03 29 28.99 & +31 10 00.3 & 13.365 & 10.889 & IZ & F & [LAL96] 329 03 29 45.40 & +31 10 35.6 & 14.828 & 13.516 & IZ & F & 03 29 46.48 & +31 08 43.6 & 15.205 & 14.096 & IZ & F & 03 29 49.16 & +31 09 03.7 & 16.143 & 14.537 & IZ & F & 03 29 28.95 & +31 07 40.9 & 15.975 & 15.199 & IZ & F & [LAL96] 330 03 29 24.73 & +31 07 26.8 & & & IZ & F & 03 29 20.11 & +31 08 53.7 & 15.923 & 14.813 & IZ & F & 03 29 09.23 & +31 08 55.4 & 16.330 & 14.912 & IZ & F & 03 28 57.45 & +31 09 46.5 & 16.623 & 12.738 & IZ & F & [LAL96] 160 03 28 06.32 & +31 10 02.0 & 15.731 & 14.579 & IZ & F & 03 28 22.07 & +31 10 42.9 & 13.867 & 11.859 & IZ & F & [LAL96] 18 03 28 39.01 & +31 08 07.6 & 15.143 & 13.389 & IZ & F & [LAL96] 65 03 29 03.60 & +31 07 11.9 & 14.995 & 13.351 & IZ & F & [LAL96] 197 03 29 05.23 & +31 07 10.6 & 16.776 & 14.766 & IZ & F & llccccl 03 28 41.72 & +31 11 15.1 & & & IZ & M & 03 28 41.97 & +31 12 17.2 & 15.863 & 13.822 & IZ & M & 03 28 46.21 & +31 12 03.4 & 16.933 & 13.526 & IZ & M & [LAL96] 90 03 28 48.99 & +31 12 45.1 & 17.836 & 14.048 & IZ & M & [LAL96] 103 03 28 52.10 & +31 16 29.3 & 16.071 & 13.738 & IZ & M & [LAL96] 123 03 28 57.25 & +31 07 26.0 & & & IZ & M & [LAL96] 159 03 28 58.68 & +31 09 39.2 & 17.052 & 12.945 & IZ & M & [LAL96] 169 03 29 00.70 & +31 22 00.8 & 16.236 & 11.764 & IZ & M & [LAL96] 180 03 29 17.93 & +31 14 53.5 & 16.625 & 14.039 & IZ & M & [LAL96] 287 03 29 18.66 & +31 20 17.8 & 17.510 & 14.608 & IZ & M & MBO 109 03 29 19.86 & +31 18 47.7 & 17.205 & 13.321 & IZ & M & [LAL96] 297 03 29 28.06 & +31 18 39.0 & 15.307 & 12.846 & IZ & M & [LAL96] 327 03 29 32.20 & +31 17 07.3 & 15.503 & 13.784 & IZ & M & [LAL96] 341 03 29 37.41 & +31 17 41.6 & 14.886 & 12.761 & IZ & M & [LAL96] 359 03 29 08.17 & +31 11 54.6 & 17.247 & 14.980 & IZ & M & [LAL96] 217 --- 1. downloaded[↩](#fnref1) 2. downloaded from http://perso.ens-lyon.fr/isabelle.baraffe/[↩](#fnref2) 3. A new paper by updates the value of for the ONC to *R*1 = 2.4 ± 0.4, based on an HST survey covering a larger area than previous studies.[↩](#fnref3)
arxiv_0000350
the subsonic and remnant phases are unchanged from those given in and respectively. Isothermal evolution of the shocked shell ----------------------------------------- As first pointed out by, the final state of the shocked gas lying between the lobe surface and the bow shock depends critically on whether it expands adiabatically or isothermally; we assume here that the swept-up gas evolves isothermally. The pressure, $p\_{\rm s}(\theta)$, of the swept-up gas at an angle *θ* from the jet axis is found from the jump conditions at the shock. The cooling rate, and thus intensity of thermal bremsstrahlung radiation, in the shocked shell depends on both the pressure of the shocked gas and its temperature. For low Mach numbers *M*0 the cooling time of the swept-up gas, $t\_{\rm s}$, can become short compared with that in the cluster, $t\_{\rm x}$, with $t\_{\rm s}/t\_{\rm x} > 0.05M\_0$. However, the cooling time of the external gas would have to be at least an order of magnitude less than the source age, *t*, for the shocked gas to suffer significant radiative losses. Such conditions are only possible at the centres of the strongest cooling flow clusters – precisely *not* the locations where powerful, large radio galaxies are typically found. We therefore ignore the cooling of the gas within the shocked shell in our analysis. ### Mean shocked shell temperature The mean temperature of the shocked gas is found by first calculating the mean pressure and density of the shocked shell. The mean density is taken as the ratio of the mass of ambient gas previously occupying the lobe and shocked shell to the volume of the shocked gas shell: $$\begin{split} \bar{\rho\_{\rm s}}(t) = \frac{\int\_0^{\pi/2} \int\_0^{R\_{\rm s}(t, \theta)} \rho(r) r^2 dr \sin \theta d\theta}{\int\_0^{\pi/2} \int\_{R(t, \theta)}^{R\_{\rm s}(t, \theta)} r^2 dr \sin \theta d\theta}, \end{split}$$ where *ρ*(*r*) is the gas density of the ambient medium. This expression must be solved numerically except for the special case of spherical lobes expanding into a power law environment. The mean temperature of the shocked gas shell is thus given as $T\_{\rm s}(t) = \bar{m} \bar{p\_{\rm s}}(t)/k\_{\rm B} \bar{\rho\_{\rm s}}(t)$ for a shocked gas shell with mean pressure $\bar{p\_{\rm s}}(t)$ and an average particle mass $\bar{m} \sim 0.6m\_{\rm p}$, where $m\_{\rm p}$ is the proton mass. The temperature may of course vary throughout the shocked gas shell but without precise knowledge of any density gradients we simply assume mean values. ### Bremsstrahlung radiation The X-ray emissivity per unit volume due to thermal bremsstrahlung radiation is $$\begin{split} J(\nu) =& \frac{Z^2 e^6}{3 \pi^2 {\varepsilon\_0}^3 {m\_{\rm e}}^2 c^3} \left(\frac{\pi m\_{\rm e}}{6} \right)^{1/2} \\ &\quad\quad (k\_{\rm B} \mathcal{T})^{-5/2} {\tilde{p}}^{\!\:2} e^{-h\nu/k\_{\rm B} \mathcal{T}} g(\nu, \mathcal{T}), \end{split}$$ where the pressure and temperature (*p̃* and T respectively) may relate to the shocked shell, ambient medium, or any other plasma. Here, $Z \gtrsim 1$ is the average atomic number of the positively changed particles, *ɛ*0 is the vacuum permittivity, and, at frequencies $h\nu \ll k\_{\rm B} \mathcal{T}$, the Gaunt factor has a logarithmic dependence on frequency as $$\begin{split} g(\nu, \mathcal{T}) = \frac{\sqrt{3}}{\pi} \ln \left(\frac{4}{\zeta} \frac{k\_{\rm B} \mathcal{T}}{h\nu} \right), \end{split}$$ where here *ζ* = 1.78 is Gauss’ number. In this work, we assume the metallicity of the plasma is 0.3 times the solar value, corresponding to an average atomic number of *Z* ∼ 1.04. The shocked gas pressure used in the bremsstrahlung radiation calculation is that derived from the shock jump conditions on the shell surface at angle *θ* (i.e. $\tilde{p} = p\_{\rm s}(\theta)$), whilst the mean temperature is derived following the method described in the previous section. [fig:apec] The X-ray spectra of smaller clusters at low-redshift are dominated by emission lines at the wavelengths of the *Chandra* and *eRosita* surveys. The APEC plasma code can calculate the X-ray emission spectra from thin-thermal plasma for various temperatures and metallicities. In Figure [fig:apec], we plot the fractional contribution of continuum emission to the total bremsstrahlung X-ray spectrum (including both continuum and line emission) integrated over a typical 0.5-$2\rm\, keV$ observing band. The emission lines are found to contribute a significant component of the X-ray flux density at redshifts *z* < 2.5 in clusters with temperatures $kT \lesssim 1\rm\, keV$, corresponding to halo masses $\lesssim 10^{13.5}\rm\, M\_\odot$. We use the appropriate APEC model to correct our analytically derived flux densities of the X-ray emission arising from bremsstrahlung radiation. X-RAY SURFACE BRIGHTNESS MAPS ============================= We use the formalism developed in the preceding sections to make predictions for the observed X-ray emission from radio galaxies and their environments. There are three main contributions: lobe inverse-Compton emission (Section [sec:LUMINOSITY MODEL]) and two sources of bremsstrahlung radiation, from the shocked gas shell ahead of the lobes (Section [sec:SHOCKED SHELL BREMSSTRAHLUNG RADIATION]), and from the ambient medium. We calculate the latter in the same way as bremsstrahlung from the shocked gas shell, but adopting density and temperature parameters characteristic of the hot gas in galaxy clusters, as described in. X-ray and radio brightness maps ------------------------------- The lobes and shocked gas shell of each mock radio galaxy are divided into a 512 × 512 × 512 grid of cubic pixels; this grid extends a factor of two beyond the edge of the lobe to enable a comparison with X-ray surface brightness of the ambient medium. Each cell in the cube is classified as either part of the lobe, shocked shell or ambient medium based on the modified RAiSE dynamical model (Section [sec:SHOCKED SHELL BREMSSTRAHLUNG RADIATION]), and the inverse-Compton emissivity or bremsstrahlung radiation calculated as appropriate. The observed emission arising from these extended objects is the integral of all the emissivity along a given line-of-sight. The two-dimensional surface brightness is simply calculated by summing the emissivity from every cell along the depth of the source, assuming the lobe plasma and ambient medium in front of the source is optically thin. The X-ray emission arising from the ambient medium lying outside the grid of cell is calculated analytically for each line-of-sight and added to the total from the numerical grid. For simplicity, we assume that the lobes lie in the plane of the sky. The inverse-Compton and bremsstrahlung emissivity in the (two-dimensional) surface brightness map are derived for the technical characteristics of the *extended Roentgen Survey with an Imaging Telescope Array*. Specifically, at a $1\rm\, keV$ observer-frame energy, the half-energy width (on axis) is $15\rm\, arcsec$ and the total effective area of the seven mirror systems is $\sim 1500\rm\, cm^2$. The number of $1\rm\, keV$ photons falling on these mirrors in a typical 1000$\rm\, s$ (1 ks) observing time is thus calculated from the X-ray surface brightness grid. The $151\rm\, MHz$ radio frequency emission from the lobes is also calculated following the method of ; this frequency is commonly used by low-frequency Square Kilometre Pathfinder instruments, including the *Low-Frequency Array* (LOFAR) and the *Murchison Widefield Array* (MWA). The X-ray and radio surface brightness distributions are modelled for lobed FR-IIs at a range of redshifts throughout their evolutionary history. Specifically, we simulate AGNs at 41 (log-spaced) source ages between 3 and 300$\rm\, Myrs$, redshifts of *z* = 0.1, 0.5 and 1, jet powers of *Q* = 1037, 1038 and $10^{39}\rm\, W$, active ages of 30 and 100$\rm\, Myrs$ (i.e. when the jet ceases injecting fresh electrons), and host cluster environments with dark halo masses of 1012.5, 1013.5 and $10^{14.5}\rm\, M\_\odot$. The axis ratio of the lobe is taken as *A* = 4 corresponding to an axis ratio for the shocked shell of $A\_{\rm s} = 2$ (see Section [sec:AXISRATIO]). The other parameters in the RAiSE dynamical and synchrotron/inverse-Compton emissivity models take the same values as used by. The surface brightness maps for several informative combinations of these parameters are shown in Figure [fig:surfbright]. Powerful jets expanding into dense environments are found to create dense shells of shocked gas surrounding the lobe. The bremsstrahlung radiation from the shocked gas is much brighter than can be generated by the inverse-Compton radiation process, especially in young sources and at low redshifts (e.g. top panel of Figure [fig:surfbright]). The faint lobes bounded by well-defined lines are seen in *Chandra* X-ray images of Cygnus A and MS0735.6+7421, with cavities also seen in the hosts of weaker FR-Is such as the Perseus cluster. By contrast, in lower mass clusters, the density of swept-up gas is very low causing its temperature to be high to satisfy the shock jump conditions. The increased temperature leads to minimal bremsstrahlung radiation from the shocked gas shell. The inverse-Compton upscattered photons in the lobe are thus the dominant source of X-ray emission, especially at higher redshifts where the cosmic microwave background radiation is stronger (e.g. middle panel of Figure [fig:surfbright]). Finally, the X-ray inverse-Compton emission remains visible well after the radio frequency emission vanishes; in the simulated remnant, the radio emission has retreated towards the freshest electrons at the hotspot, whilst the X-ray emission does a much better job of tracing out the channel evacuated by the radio lobes. A similar result has previously been reported by ; Figure [fig:surfbright] additionally shows that the peak in the X-ray intensity from the lobe is much closer to the core than the full extent of the radio source. Many radio sources appear asymmetric, due to asymmetry in the environments encountered by the two lobes ; this poses a challenge for robustly identifying host galaxies of remnant radio sources. Our results suggest that X-ray observations may provide a useful alternative. [fig:surfbright] An intriguing surface brightness map is produced when propagating weak jets into dense environments (bottom panel of Figure [fig:surfbright]); dense shells of shocked gas build up around the lobe as before, however, the weakened shocks and magnetic fields allow the contact surface near the core to become unstable to turbulent mixing of the dense shell and the lobe. This results in thermal bremsstrahlung emission extending out from the core along the transverse axis, brighter than X-ray emission from the lobe or the thinner (and thus fainter integrated along the line-of-sight), unmixed portions of the shocked shell further from the core. We note that strong X-ray emission may also be observed *along* the jet axis due to inverse-Compton emission from the jets as well as lobes (for older sources at high redshift); a combination of radio and X-ray imaging may distinguish these two scenarios. Dominant source of X-ray emissivity ----------------------------------- In this section, we investigate the relative importance of contributions to the X-ray surface brightness from the lobe (inverse-Compton), shocked gas shell (bremsstrahlung), and the ambient medium (bremsstrahlung), for some representative sources at a range of redshifts. The surface brightness for the three regions is calculated as the maximum value along the jet axis. However, the increased bremsstrahlung radiation from sources pinched by Rayleigh-Taylor mixing may present a source of error in this work as the rate of mixing is a poorly quantified model parameter in RAiSE. Fortunately, in the vast majority of extended AGNs this enhanced emission is only a minor contributor to the integrated X-ray luminosity, and when generating surface brightness maps, either the Rayleigh-Taylor mixed region is not resolved from the bright X-ray core (discussed in Section [sec:X-ray source number densities]) or the remainder of the lobe also sits above the surface brightness sensitivity limit (i.e. the shape of the source is correctly determined). The analyses in this work are repeated with the inner third of the lobe masked to exclude this region of enhanced emission; our results are unaffected by this change. ![image](AGNmap_H=1250_Q=3800_T=750_z=var) [fig:timeseries1] [fig:timeseries2] The evolutionary tracks of the X-ray surface brightness for these representative sources are shown in Figures [fig:timeseries1] and [fig:timeseries2]. The inverse-Compton emission is dominant over the bremsstrahlung radiation from both the shocked gas shell and the ambient medium in poor clusters ($10^{12.5}\rm\, M\_\odot$; Figure [fig:timeseries1]) at all source ages $\gtrsim 1\rm\, Myr$. By contrast, the shocked gas shell of sources expanding into denser host cluster environments ($\gtrsim 10^{13.5}\rm\, M\_\odot$; Figure [fig:timeseries2]) is brighter than the lobe, albeit not appreciably, for at least 20 to $100\rm\, Myrs$ with the weakest FR-II jet powers ($Q = 10^{37}\rm\, W$); the shocked shell stays brighter than the lobe for longer with either denser environments or higher jet powers. This change in the importance of the lobe and shocked gas shell to the X-ray surface brightness typically results from rapidly falling pressure in the shocked gas as the source approaches pressure equilibrium with the ambient medium. Further, the stronger cosmic microwave background radiation at higher redshifts leads to increased inverse-Compton emission from the lobe causing this source of X-ray surface brightness to become dominant in younger sources. Finally, the X-ray surface brightness from the lobe and shocked gas shell remain detectable above that of the ambient medium for a considerable time after the jet switches off. The extended source in Figure [fig:timeseries1], with an active lifetime of 30$\rm\, Myrs$, remains detectable over the ambient medium (at the 5*σ* level) for a further 155$\rm\, Myrs$ at redshift *z* = 0.1, 115$\rm\, Myrs$ at *z* = 0.5, 100$\rm\, Myrs$ at *z* = 1, and 65$\rm\, Myrs$ at *z* = 2. By contrast, *any* level of synchrotron emission is present at radio frequencies for only 170, 70, 25 and 10$\rm\, Myrs$ after the jets switch off, respectively, at these redshifts. Similarly, the extended source in Figure [fig:timeseries2] emits radiation from the lobe and shocked shell at X-ray wavelengths long after the radio emission fades, though this is only a factor of a couple brighter than the ambient medium. Based on these and other test cases, we expect a population of remnant extended radio galaxies undetectable at radio frequencies but visible to X-ray telescopes, in line with literature predictions of a large population of inverse-Compton ‘ghosts’, particularly at high-redshift. MOCK EXTENDED AGN POPULATION ============================ We now extend our analysis to make predictions for radio galaxy population statistics. Specifically, we combine the theoretical framework developed in the previous section with the halo mass function and literature radio frequency observations to create a mock population of extended radio galaxies. We then use this simulated population to predict the X-ray luminosity function for extended AGN emission, and investigate the number of objects that could be uniquely detected using large sky X-ray surveys. Construction of mock catalogue ------------------------------ We construct a mock catalogue of extended AGNs, including their angular sizes, surface brightnesses, and integrated flux densities, by running simulations for a dense grid of model parameters. The brightnesses are calculated at radio frequencies for a typical *Low Frequency Array* (LOFAR) survey and in the X-ray for an *eRosita* survey. Observations are used to constrain the input halo mass function for AGN hosts, and the distribution of source active lifetimes and jet kinetic powers. For the remaining parameters, we assume values given in Section [sec:Surface brightness]. ### Halo mass function [fig:luminfunctions] [fig:luminfractions] The mass function of extended AGN hosting clusters is assumed to be the convolution of the halo mass function for all groups and clusters, and the probability of finding an AGN in a given mass host. The mass of dark matter haloes (observed as galaxy groups and clusters) can in general be described by a mass function, which gives the number density of clusters as a function of mass. In this work, and following, we take our dark matter halo masses from the low-redshift mass function of who find that a common Schechter function describes both galaxy groups and clusters. We use the semi-analytic galaxy evolution (SAGE) model of to extend their observations to higher redshifts, finding the mass of the break in the Schechter function scales with redshift as approximately (1 + *z*)− 3. The radio-loud fraction of AGNs (i.e. fraction of sources with brightness above an arbitrary cut in radio luminosity) suggests hosts with higher mass black holes will have more frequent (or longer) phases of activity. made a theoretical prediction for the AGN duty cycle (i.e. the fraction of time a given source is active) as a function of black hole mass, $\delta \propto \rm M\_\bullet^{1.5}$; this relationship is consistent with the observed radio-loud fraction for all but the most massive galaxies in which the duty cycle peaks at 100%. The black hole mass in the brightest cluster galaxy (BCG), the mass of the stellar bulge, and the dark matter halo mass, are all known to scale approximately linearly with each other. Individual galaxies show moderate scatter of $0.5\rm\, dex$ about the dark matter halo–black hole mass relationship, however the underlying relationship remains when considering several orders of magnitude in halo mass. We therefore convolve the halo mass function with the mass–duty cycle relation to derive the radio AGN mass function. This halo mass function is shown in the left panel of Figure [fig:luminfunctions] as a function of redshift for masses between 1012.5 and $10^{14.75}\rm\, M\_\odot$. The number density is scaled in the plot based on the number of high-luminosity radio AGNs of FR-II morphology observed by. These scaling relationships predict that all plausible host halo masses occur at similar probability, with a slightly decreased likelihood of finding a radio AGN in the most massive clusters. This convolved mass function will provide a better description of the environments typically hosting AGNs than the raw halo mass function. ### Source lifetimes The radio AGN lifetime function has generated much recent interest. The large observed fractions of compact sources strongly suggests a dominant population of short-lived radio jets. Recently, used a combination of data from the LOFAR LoTSS survey and dynamical models, to show that the majority of radio AGNs are consistent with models in which the source lifetime distribution is log-uniform. applied self-consistent modelling to LOFAR observations of active, remnant and restarted radio sources in the HETDEX field to similarly infer a dominant short-lived population, consistent with feedback-regulated accretion. In line with these results, we similarly adopt here a log-uniform distribution of the active lifetimes between 3 and 300Myrs, and assume that the duty cycle is (on average) independent of the active lifetime. We note that the chosen distribution for these parameters makes only minimal difference to the results in this work. ### Jet kinetic powers The distribution of the jet kinetic powers fuelling the extended AGN emission is informed by observations of the luminosity function of powerful FR-IIs in the 3CRR, 6CE and 7CRS samples. fit the high-luminosity function for these objects at $151\rm\, MHz$ as a function of redshift. The general shape of their distribution is shown in the right-hand subplot of Figure [fig:luminfunctions], with the number density scaled to match the observed distribution at redshifts *z* = 0.1, 0.5, 1, 2 and 4. We generate our jet power distribution by converting their radio luminosity function into kinetic powers, first by using theory driven jet power–luminosity relationships; i.e. *Q* ∝ *L**ν*6/7, noting that environment and source age introduce scatter to this relation. The jet power corresponding to the turnover in their luminosity function is found by simulating mock catalogues (sampled every $0.2\rm\, dex$ in jet power, and including only active sources) for a range of possible turnovers and selecting the best match to the observed radio luminosity distribution. The resulting probability distribution for the kinetic jet power is given by $$n(Q) = n\_0 \left( \frac{Q}{Q\_\star} \right)^{-1.17\alpha\_\star} \exp\bigg[-\left( \frac{Q\_\star}{Q} \right)^{1.17} \bigg],$$ where *n*0 is a normalisation constant, the turnover in the luminosity function occurs at $Q\_\star = 10^{38.1}\rm\, W$, and the slope of the power law component of the luminosity function is *α*⋆ = 2.27. The shape of the probability distribution is independent of redshift, however the number density of extended AGNs increases towards higher redshifts. The radio luminosity functions generated from our mock catalogues (using the selected jet power distribution) are shown in the right-hand panel of Figure [fig:luminfunctions] for a range of redshifts; these are in good agreement with the observed luminosity functions. X-ray luminosity function ------------------------- The mock extended radio AGN catalogue, which has been calibrated to successfully describe the observed radio properties of powerful FR-IIs, can now be used to predict their brightness at X-ray wavelengths. The integrated luminosity is calculated for active sources by summing the synchrotron emissivity from the lobe, and the bremsstrahlung radiation from both the shocked gas shell and the ambient medium. The X-ray emission from the ambient medium is included in these calculations since in practice it may be hard to disentangle the emission from the AGN and environment (except in well resolved objects), however we also derive the X-ray luminosity function including only AGN related emission for comparison with previous studies. We expect the X-ray luminosity of most extended radio galaxies to be strongly correlated with the properties of the ambient medium since the brightness of the shocked gas shell is directly related to the mass of gas swept up as the lobe expands. The level of inverse-Compton emission from the lobe, meanwhile, has a more complicated relationship with the density profile of the ambient medium. The accretion disk of AGNs is also often bright at X-ray wavelengths; the spectrum of the accretion disk peaks at ultraviolet wavelengths but a sizeable number of thermal photons from the disk are inverse-Compton upscattered to X-ray energies by the hot corona surrounding the disk. Typical X-ray luminosities from accretion-related nuclear emission are of order 1043 to $10^{45}\rm\, erg \,s^{-1}$. Meanwhile, the highest energy shock-accelerated electrons emit synchrotron radiation at X-ray wavelengths both along the jet and at the terminal hotspot in active sources; these quickly fade in remnants due to radiative losses. The hotspots of typical FR-IIs can reach 1042 to $10^{43}\rm\, erg \,s^{-1}$ at X-ray wavelengths. The brightness of the core and hotspots is not related to the inverse-Compton upscattering of CMB photons and is thus independent of redshift, however it is hypothesised that a beamed inverse-Compton mechanism may operate in the jets. We therefore choose not to include the (rather uncertain) core emission in our luminosity function whilst analyses in subsequent sections will only investigate the detection of emission spatially resolved from the core. Finally, our modelling does not consider synchrotron self-Compton radiation (i.e. inverse-Compton upscattering of synchrotron photons); in Cygnus A (*z* = 0.0561), this mechanism contributes 70-80% of non-thermal X-ray emission in the lobes, however inverse-Compton upscattered CMB photons become dominant at higher redshifts (*z* > 0.4 for a Cygnus A-like source). The X-ray luminosity function derived for a typical *eRosita* survey frequency of $1\rm\, keV$ is shown in the central panel of Figure [fig:luminfunctions]. The brightness of the extended AGNs is comparable to, or at most a factor of a few brighter than (on average 30% brighter at *z* = 0.1), typical inactive X-ray clusters in the range $10^{41} < L\_{\rm X} < 10^{44}\rm\, erg\, s^{-1}$. The shape of the X-ray luminosity function also resembles that of the AGN halo mass function when including all contributors of extended X-ray emission, in particular at the lowest redshifts. As we show in Figure [fig:luminfractions], this is due to the dominant source of X-ray emission being bremsstrahlung radiation from the ambient medium and shocked gas shell for *z* ≤ 1. Objects with the highest luminosities are typically the highest jet power sources, observed at ages of a few tens of Myrs when inverse-Compton radiation from the lobe begins to increase significantly, contributing in excess of 90% of the total integrated X-ray luminosity from the AGN–host cluster system (see Figure [fig:luminfractions]). By contrast, the X-ray luminosity function at the highest redshifts (*z* > 1) is dominated by inverse-Compton emission for the majority of luminosities (and thus a large fraction of the extended AGN population). The stronger cosmic microwave background radiation at higher redshifts thus boosts the integrated luminosity well above that generated by the hot cluster gas (on average 155% brighter at *z* = 4), leading to an increased population of X-ray bright extended AGNs. X-ray source number densities ----------------------------- [fig:numberdensities] [fig:remnantdensities] We now use the mock catalogue to calculate the predicted number density of extended AGNs that would be detected using *LOFAR* and *eRosita*. The *LOFAR Two-metre Sky Survey* (LoTSS) has a median sensitivity of $71\rm\, \mu Jy\, beam^{-1}$ at an observing frequency of 120-168MHz, with approximately 6arcsec full-width half maximum (FWHM) across the synthesised beam. The expected $1\rm\, keV$ sensitivity of all-sky surveys using *eRosita* is $14\rm\, nJy\, beam^{-1}$ for their 15arcsec full-width half maximum beam. We smooth the RAiSE images with a circular gaussian filter, mimicking the approximate shape of the *LOFAR* and *eRosita* beams. Extended emission separated from the core by less than the angular resolution of the survey cannot be distinguished from a bright compact core; these pixels are flagged and removed from our mock images. Simulated extended AGNs with emission in at least one beam (i.e. gaussian filtered pixel) exceeding the surface brightness sensitivity limit are assumed to be detectable by *LOFAR* or *eRosita* as appropriate. These sources are by definition resolved, however we make a further classification that the extended AGNs are well resolved (i.e. we can make out the shape of the source) if they have at least five beams along their length. The number density of X-ray and radio detected active-phase, extended AGNs of FR-II morphology is shown in Figure [fig:numberdensities] for a range of source ages and jet kinetic powers. The overwhelming majority of low-redshift sources have extended emission resolvable from their cores in either X-ray and radio frequencies (blue or green shading); approximately half have sufficient resolution to determine the shape of their extended structures (green shading). Specifically, close to all of the active-phase FR-IIs in our catalogue are detectable using radio frequency observations (at *z* ≤ 1), whilst 41% can be seen at X-ray wavelengths at *z* = 0.1 and 2.6% at *z* = 0.5. At both these redshifts *eRosita* can uniquely detect (i.e. no radio detection) only 0.2% of active sources. This pattern continues out to higher redshifts where most active sources are detectable at radio frequencies whilst a rapidly decreasing fraction can be seen at X-ray wavelengths. Importantly, at *z* = 1, only young and relatively compact objects are detectable using X-rays in the active phase; the source shape cannot be determined in any objects. The inverse-Compton emission arising from the overpressured lobes of these extended AGNs is related to the density of the ambient medium and these fade rapidly as they expand into lower-density environments such as the intergalactic medium in low-mass haloes. Moreover, extended AGN emission is not detectable at redshifts *z* ≥ 2 for the *eRosita* surface brightness sensitivity limit investigated (but see Section [sec:Remnant density at high-redshifts]). The number density of extended AGNs with FR-II morphology detectable with *LOFAR* or *eRosita* are summarised in Table [tab:detections] as a function of redshift. We scale the number densities in our catalogue based on the observed high-luminosity function of. [tab:detections] ccccccc && &&*eRosita*&*LOFAR*&dual detection&*eRosita* only&*LOFAR* only &0.1&-9.17&-8.78&-9.17&-11.40&-9.00 &0.5&-9.46&-7.88&-9.49&-10.61&-7.89 &1&-11.62&-7.07&-11.63&-13.12&-7.07 &0.1&-9.38&-9.07&-9.56&-9.86&-9.24 &0.5&-9.59&-8.13&-9.77&-10.07&-8.14 &1&-11.80&-7.54&-11.95&-12.32&-7.54 &0.1&-4.06&–&–&-4.06&– &0.5&-10.57&–&–&-10.57&– &1&-29.70&–&–&-29.70&– The number density of remnant FR-IIs detectable with X-ray and radio frequencies is similarly shown in Figure [fig:remnantdensities]. The maximum age of remnants detectable at radio frequencies reduces quickly with increasing redshift (by more than a factor of two over the range *z* = 0.1-1) due to the increasing inverse-Compton losses at high redshift. Crucially, the inverse-Compton emission remains detectable at X-ray wavelengths long after the synchrotron radiation at radio frequencies ceases. This leads to a large population of remnant extended AGNs which are detectable with *eRosita* but cannot be seen at radio frequencies; these sources are typically aged between $100\rm\, Myrs$ and $1\rm\, Gyr$ (at the lowest redshifts). Specifically, *eRosita* uniquely detects 14% of remnants at *z* = 0.1 (based on non-core emission), decreasing to 1.1% and 0.002% at redshifts *z* = 0.5 and 1 respectively (i.e. a much greater fraction of remnants can only be detected using X-ray surveys than for active sources). We note in Section [sec:Remnant density at high-redshifts] that these number densities increase markedly for the higher redshifts with just a modest improvement in sensitivity. These exclusively X-ray detected remnants have more than five beams across the length of the lobe in 98% of sources at redshift *z* = 0.1; this reduces to 53% at *z* = 0.5 whilst no high-redshift (*z* ≥ 1) remnants are observed with multiple beams across the source. The remnant source number densities are summarised in Table [tab:detections]. ### False positive detections Mock extended AGNs in our catalogue are only detected if emission from either the lobe or shocked gas shell is present above the survey surface brightness sensitivity limit. However, bremsstrahlung radiation from the brightest X-ray clusters may also be detectable using *eRosita* (in a single beam) without any density enhancement from AGN activity. At redshift, *z* = 0.1, simulated X-ray clusters more massive than $10^{13.43}\rm\, M\_\odot$ (derived analytically for RAiSE clusters[3](#fn3)) emit sufficiently high levels of bremsstrahlung radiation for their ambient medium to be detected and resolved from an X-ray nucleus. The critical cluster mass increases to $10^{14.94}\rm\, M\_\odot$ at *z* = 0.5 whilst no realistic mass haloes are predicted to have their ambient medium detected (in a single beam) at redshifts *z* ≥ 1; a small fraction of observed X-ray clusters may of course be atypically gas rich compared to our mock clusters. The number density of ‘false positive’ detections expected due to the misclassification of cluster gas is derived by integrating the cluster mass function above the critical mass at each redshift (see Table [tab:detections]). The number density of X-ray clusters whose hot gas is detectable using *eRosita* far exceeds the expected density of extended AGNs at low-redshift (*z* ∼ 0.1). The presence of non-core emission at X-ray wavelengths therefore cannot be used to suggest AGN activity at these redshifts; however, 98% of exclusively X-ray detected remnants are well-resolved (i.e. have more than five beams across their lobes) and so can potentially be classified based on the shape of their X-ray brightness distribution. The same behaviour is seen at moderate redshifts though ‘false positives’ and actual AGN detections are expected to occur with equal likelihood. By contrast, at high-redshifts (*z* ≥ 1) the number density of X-ray clusters detectable with *eRosita* for a typical survey sensitivity is over seventeen orders of magnitude lower than the expected detection rate for extended AGN emission. Any non-core X-ray emission detected at high-redshifts can therefore be directly attributed to extended AGNs, and not the ambient medium. ### Remnant density at high-redshifts The number density of extended AGNs of FR-II morphology detectable at high-redshifts using an *eRosita*-like survey is investigated for a range of improved surface brightness sensitivities. In the previous section, we found none of the mock extended AGNs at redshift *z* = 2 and 4 could be detected using the assumed $14\rm\, nJy\, beam^{-1}$ sensitivity; this is reduced by up to a factor of 100 down to $0.14\rm\, nJy\, beam^{-1}$ for the same resolution ($15\rm\, arcsec$ FWHM). The expected number density of remnants detectable (and mostly uniquely detectable) by this theoretical increased sensitivity *eRosita* survey is shown in Figure [fig:remnantdensity] for redshifts *z* = 1, 2 and 4 as a function of the surface brightness sensitivity. Remnants can begin to be detected at *z* = 2 for a modest factor of 1.5 increase in sensitivity and at redshift *z* = 4 for an order of magnitude improvement. The rate of false positive detections from the ambient medium is also included in Figure [fig:remnantdensity]; the likelihood of detecting AGN emission and false positives converges at *z* = 1 for a factor of 20 increase in sensitivity, but at higher redshifts the false positive rate remains small for at least a three orders of magnitude improvement in surface brightness sensitivity. The number density of remnants detected at *z* = 1 meanwhile can increase by over four orders of magnitude (before converging to the false positive rate) reaching a level comparable to that found by using radio frequencies; *eRosita* is therefore highly complementary to *LOFAR*. The number density of radio-detected remnants does not increase with improved sensitivity as any objects emitting appreciable synchrotron emission are already detectable. Importantly, the long lasting inverse-Compton emission probes a different population of remnants that has minimal overlap with the radio-selected sample (Table [tab:detections]). The number density of X-ray detected AGNs similarly coverges to the false positive rate with a comparable or greater number of detections to that obtained using radio frequencies; i.e. factor of five and 1200 more remnants at *z* = 2 and 4 respectively. The detection of non-core X-ray emission at high-redshift therefore presents a viable technique to identify previous episodes of AGN activity with X-ray telescopes presently in service; this would be greatly enhanced with only a modest increase in sensitivity. Our work predicts that X-ray wavelengths are capable of detecting at least a factor of ten more remnants than at radio frequencies for redshifts *z* > 2.2, increasing to a factor of 100 for redshifts *z* > 3.1. [fig:remnantdensity] Similar results are obtained for extended AGNs in the active phase, however these are more readily observed at radio frequencies and thus are not considered likely targets for such surveys. CONCLUSIONS =========== We have extended the successful *Radio AGN in Semi-analytic Environments* lobe expansion and evolution model to X-ray wavelengths. Specifically, this improved model considers: (1) inverse-Compton upscattering of cosmic microwave background radiation by the synchrotron-emitting electrons in the lobe; (2) the dynamics of the shocked gas shell and the associated bremsstrahlung emission from this dense gas; and (3) emission from the ambient medium surrounding the extended AGN. We construct X-ray surface brightness maps of mock extended AGNs with type-II morphology to understand the relative importance of these radiative mechanisms; in particular, to determine what fraction of the population may be detectable (and correctly recognised as extended AGNs) at X-ray wavelengths. X-ray and radio-frequency surface brightness maps are derived for the technical characteristics of the *extended Roentgen Survey with an Imaging Telescope Array* (eRosita) and the *Low-Frequency Array* (LOFAR) instruments respectively. We consider the temporal evolution of the surface brightness in the lobe (inverse-Compton or synchrotron), shocked gas shell (bremsstrahlung) and ambient medium (bremsstrahlung) along the jet axis for typical sources located at increasing redshift from *z* = 0.1 to 2. The inverse-Compton emission from the lobe is dominant over the bremsstrahlung radiation from both the shocked gas shell and ambient medium in poor clusters ($10^{12.5}\rm\, M\_\odot$) at all sources ages. By contrast, the shocked gas shell is initially brighter than the lobe (for between 20 and $100\rm\, Myrs$) for sources expanding into denser environments ($10^{13.5}\rm\, M\_\odot$); the shocked gas shell stays brighter than the lobe for longer with either denser environments or higher jet powers. The X-ray surface brightness from the lobe and shocked gas shell remain detectable above the ambient medium (at the 5*σ* level) for another 65-$115\rm\, Myrs$ after the jet switches off after $30\rm\, Myrs$ at redshifts *z* ≥ 0.5. By contrast, synchrotron emission ceases after only 10-$70\rm\, Myrs$ at radio frequencies. We find that although both synchrotron and inverse-Compton emission fade more rapidly at higher redshift, synchrotron radiation is much more sharply curtailed, leading to a sizable high-redshift population of remnant radio galaxies emitting exclusively through the inverse-Compton mechanism. We constructed an integrated X-ray luminosity function for extended AGNs by generating a mock population of FR-IIs with jet powers, active lifetimes and host cluster environments based on observational constraints. The integrated X-ray luminosity of most extended AGNs at low redshifts (*z* ≤ 1) is found to be strongly correlated with the properties of the ambient medium. In other words, the bremsstrahlung radiation from either the ambient medium or the shocked gas shell (whose density is strongly correlated with the host environment) is the dominant source of X-ray emission. At these low redshifts, only a small population of AGNs with the highest jet powers and moderate ages of a few tens of Myrs can have inverse-Compton emission contribute in excess of 90% of the total integrated X-ray luminosity from the AGN–host cluster system. By contrast, the X-ray luminosity function at higher redshifts (*z* > 1) is dominated by inverse-Compton emission for the vast majority of the extended AGN population. The stronger microwave background radiation at these high redshifts boosts the lobe contribution to the integrated luminosity well above that generated by the hot cluster gas, leading to an increased population of X-ray bright extended AGNs. We used our mock extended AGN catalogue to explore how many new objects could be detected using both existing and increased sensitivity X-ray observations. We find that most active FR-II sources at redshifts *z* ≤ 1 can be detected at radio frequencies for the sensitivity of the *LOFAR Two-metre Sky Survey* (LoTSS). However, only a small fraction of active or remnant sources can be seen at X-ray wavelengths for typical *eRosita* sensitivity, when excluding core emission. No active extended AGNs are detectable at X-ray wavelengths but not at radio frequencies. By contrast, *eRosita* will find 14% of remnants at *z* = 0.1 which are not visible to *LOFAR*, decreasing to 1.1% and 0.002% at redshifts *z* = 0.5 and 1 respectively. Meanwhile, the surface brightness of the bremsstrahlung radiation from the ambient medium of any realistic mass haloes is expected to become undetectable beyond redshift *z* ≥ 1; i.e. any non-core X-ray detection can be attributed to extended AGN activity. We consider the effectiveness of radio-frequency and X-ray surveys at detecting remnants at these high-redshifts for greatly enhanced surface brightness sensitivity. The number density of X-ray detected remnants at redshift *z* = 1 becomes comparable to the number of radio-frequency detections. However, our work predicts that at least a factor of ten more remnants would be detected using X-ray wavelengths (compared to radio frequencies) at redshifts *z* > 2.2, increasing to a factor of 100 for redshifts *z* > 3.1. Future high-sensitivity surveys using *eRosita* or subsequent X-ray telescopes may therefore prove the best tool for probing the earliest generations of powerful radio galaxies. ##### We thank an anonymous referee for helpful and constructive comments that have improved our manuscript. Alexander, P. 2002, MNRAS, 335, 610 Best, P. N., Kauffmann, G., Heckman, T. M., et al. 2005, MNRAS, 362, 25 Blundell, K. M., & Rawlings, S. 1999, Nature, 399, 6734 Blundell, K. M., Rawlings, S., & Willott, C. J. 1999, AJ, 117, 677 Bower, R. G., Benson, A. J., Malbon, R., et al. 2006, MNRAS, 370, 645 Brienza, M., Godfrey, L. E. H., Morganti, R., et al. 2017, A&A, 606A, 98 Croton, D. J., Springel, V., White, S. D. M., et al. 2006, MNRAS, 365, 11 Croton, D. J., Stevens, A. R. H., Tonini, C., et al. 2016, ApJS, 222, 22 de Vries, M. N., Wise, M. W., Huppenkothen, D., et al. 2018, MNRAS, 478, 4010 Fabian, A. C. 2012, ARA&A, 50, 455 Fabian, A. C., Chapman, S., Casey, C. M., Bauer, F., & Blundell, K. M. 2009, MNRAS, 395, L67 Fabian, A. C., Sanders, J. S., Allen, S. W., et al. 2003, MNRAS, 344, L43 Fabian, A. C., Sanders, J. S., Taylor, G. B., et al. 2006, MNRAS, 366, 417 Fabian, A. C., Walker, S. A., Celotti, A., et al. 2014, MNRAS, 442, L81 Fanaroff, B. L., Riley, J. M. 1974, MNRAS 167, 31 Forman, W., Nulsen, P., Heinz, S., et al. 2005, ApJ, 635, 894 Gaibler, V., Khochfar, S., Krause, M., Silk, J., 2012, MNRAS, 425, 438 Gaspari, M., & Sadowski, A. 2017, ApJ, 837, 149 Ghisellini, G., Celotti, A., Tavecchio, F., Haardt, F., & Sbarrato, T. 2014, MNRAS, 438, 2694 Girardi, M., & Giuricin, G. 2000, ApJ, 540, 45 Godfrey, L. E. H., & Shabala, S. S. 2013, ApJ, 767, 12 Gültekin, K., Richstone, D. O., Gebhardt, K., et al. 2009, ApJ, 698, 198 Hardcastle, M. J. 2013, MNRAS, 433, 3364 Hardcastle, M. J. 2018, MNRAS, 475, 2768 Hardcastle M. J., Birkinshaw M., & Worrall D. M., 1998, MNRAS, 294, 615 Hardcastle, M. J. & Krause, M. G. H. 2013, MNRAS, 430, 174 Hardcastle, M. J. & Krause, M. G. H. 2014, MNRAS, 443, 1482 Hardcastle, M. J., Williams, W. L., Best, P. N., et al. 2019, A&A, 622, A12 Häring, N., & Rix, H-W. 2004, ApJ, 604, L89 Harris, D. E., Nulsen, P. E. J., Ponman, T. J., et al. 2000, ApJ, 530, L81 Kaiser, C. R., & Alexander, P. 1997, MNRAS, 286, 215 Kaiser, C. R., & Best, P. N. 2007, MNRAS, 381, 1548 Kaiser, C. R., Dennett-Thorpe, J., & Alexander, P. 1997, MNRAS, 292, 723 Konigl, A. 1981, ApJ, 243, 700 Lobanov, A. P. 1998, A&A, 330, 79 Longair, M. S. 2010, High Energy Astrophysics, Cambridge University Press Marshall, H. L., Gelbord, J. M., Worrall, D. M., et al. 2018, ApJ, 856, 66 Magorrian, J., Tremaine, S., Richstone, D., et al. 1998, AJ, 115, 2285 Mahatma, V. H., Hardcastle, M. J., Williams, W. L., et al. 2018, MNRAS, 475, 4557 Mahatma, V. H., Hardcastle, M. J., Williams, W. L., et al. 2019, A&A, 622, A13 McNamara, B. R., & Nulsen, P. E. J. 2007, ARA&A, 45, 117 McNamara, B. R. & Nulsen, P. E. J. 2012, New Journal of Physics, 14 Merloni, A., Predehl, P., Becker, W., et al. 2012, ArXiv e-prints, arXiv:1209.3114 Mignone, A., Bodo, G., Massaglia, S., et al. 2007, ApJS, 170, 228 Mingo, B., Hardcastle, J. H., Croston, D., et al. 2014, MNRAS, 440, 269 Miraghaei, H., & Best, P. N. 2017, MNRAS, 466, 4346 Mittal, R., Hudson, D. S., Reiprich, T. H., & Clarke, T. 2009, A&A, 501, 835 Mocz P., Fabian A. C., & Blundell K. M., 2011, MNRAS, 413, 1107 Morganti, R., Fogasy, J., Paragi, Z., Oosterloo, T., & Orienti, M., 2013, Science, 341, 1082 Nath B. B., 2010, MNRAS, 407, 1998 Nesvadba, N. P. H., Lehnert, M. D., De Breuck, C., Gilbert, A. M., & van Breugel, W., 2008, A&A, 491, 407 Novak, G. S., Ostriker, J. P., & Ciotti, L. 2011, ApJ, 737, 26 Page, M., Symeonidis, M., Vieira, J., et al. 2012, Nature, 485, 213 Perlman, E. S., Georganopoulos, M., May, E. M., & Kazanas, D. 2010, ApJ, 708, 1 Planck Collaboration 2016, A&A, 594, A13 Pope, E. C. D., Mendel, T., & Shabala, S. S. 2012, MNRAS, 419, 50 Raouf, M., Shabala, S. S., Croton, D. J., Khosroshahi, H. G., Bernyk, M. 2017, MNRAS, 471, 658 Rafferty, D. A., McNamara, B. R., Nulsen, P. E. J, & Wise, M. W. 2006, ApJ, 652, 216 Reiprich, T. H., & Böhringer, H. 2002, ApJ, 567, 716 Rodman, P. E., Turner, R. J., Shabala, S. S., et al. 2019, MNRAS, 482, 5625 Rybicki, G. B., & Lightman, A. P. 1979, Radiative processes in astrophysics, Wiley-Interscience, Sabater, J., Best. P. N., Hardcastle, M. J., et al. 2019, A&A, 622A, 17 Shabala, S. S., & Alexander, P. 2009, ApJ, 699, 525 Shabala, S. S., Ash, A. A., Alexander, P., Riley, J. M. 2008, MNRAS, 388, 625 Shabala, S. S., & Godfrey, L. E. H. 2013, ApJ, 769, 129 Shabala, S. S., Jurlin, N., Brienza, M., Morganti, R., et al., submitted to MNRAS Shabala, S. S., Santoso, J. S., & Godfrey, L. E. H. 2012, ApJ, 756, 161 Shankar, F., Weinberg, D., & Miralda-Escudé, J. 2009, ApJ, 690, 20 Shimwell, T. W., Tasse, C., Hardcastle, M. J., et al. 2019, A&A, 622, A1 Smith, R. K., Brickhouse, N. S., Liedahl, D. A., & Raymond, J. C. 2001, ApJ, 556, L91 Turner R. J. 2018, MNRAS, 476, 2522 Turner, R. J., Rogers, J. G., Shabala, S. S., & Krause, M. G. H. 2018a, MNRAS, 473, 4179 Turner, R. J., & Shabala, S. S. 2015, ApJ, 806, 59 Turner, R. J., & Shabala, S. S. 2019, MNRAS, 486, 1225 Turner, R. J., Shabala, S. S., & Krause, M. G. H. 2018b, MNRAS, 474, 3361 Vikhlinin, A., Kravtsov, A., Forman, W., et al. 2006, ApJ, 640, 691 Vogelsberger, M., Genel, S., Springel, V., et al. 2014, MNRAS, 444, 1518 Willott, C. J., Rawlings, S., Blundell, K. M., & Lacy, M. 1999, MNRAS, 309, 1017 Willott, C. J., Rawlings, S., Blundell, K. M., Lacy, M., & Eales, S. A. 2001, MNRAS, 322, 536 Yates, P. M., Shabala, S. S., & Krause, M. G. H. 2018, MNRAS, 480, 5286 --- 1. Email: [email protected][↩](#fnref1) 2. RAiSE implicitly assumes a single magnetic field value throughout the lobes; although more complex configurations almost certainly exist these are not well constrained at present, and our approach is standard for lobe modelling.[↩](#fnref2) 3. RAiSE simulates cluster density profiles based on the observed profiles of, taking the scaling gas mass and virial radius from the semi-analytic galaxy evolution model of. The gas temperature is calculated using an observed relationship with the halo mass.[↩](#fnref3) RAiSE X: searching for radio galaxies in X-ray surveys ====================================================== ### Accepted 2020 March 06. Received 2020 March 05; in original form 2019 December 12 [firstpage] We model the X-ray surface brightness distribution of emission associated with Fanaroff & Riley type-II radio galaxies. Our approach builds on the RAiSE dynamical model which describes broadband radio-frequency synchrotron evolution of jet-inflated lobes in a wide range of environments. The X-ray version of the model presented here includes: (1) inverse-Compton upscattering of cosmic microwave background radiation; (2) the dynamics of the shocked gas shell and associated bremsstrahlung radiation; and (3) emission from the surrounding ambient medium. We construct X-ray surface brightness maps for a mock catalogue of extended FR-IIs based on the technical characteristics of the *eRosita* telescope. The integrated X-ray luminosity function at low redshifts (*z* ≤ 1) is found to strongly correlate with the density of the ambient medium in all but the most energetic sources, whilst at high-redshift (*z* > 1) the majority of objects are dominated by inverse-Compton lobe emission due to the stronger cosmic microwave background radiation. By inspecting our mock spatial brightness distributions, we conclude that any extended X-ray detection can be attributed to AGN activity at redshifts *z* ≥ 1. We compare the expected detection rates of active and remnant high-redshift radio AGNs for *eRosita* and *LOFAR*, and future more sensitive surveys. We find that a factor of ten more remnants can be detected using X-ray wavelengths over radio frequencies at *z* > 2.2, increasing to a factor of 100 for redshifts *z* > 3.1. galaxies: active – galaxies: jets – radio continuum: galaxies – X-rays: galaxies INTRODUCTION ============ Most galaxies are known to harbour a supermassive black hole (SMBH) at their centre, with the galaxy and SMBH growth rates strongly intertwined over their evolutionary history. The activity of the black hole is tied to the state of its surroundings, since, as hot cluster gas cools by bremsstrahlung radiation, it forms cooling flows that sink towards and are accreted by the SMBH, switching it on. This active galactic nucleus (AGN) imparts a fraction of its accreted energy back into its host environment through either radiative or kinetic-mode feedback. The radio lobes inflated by the relativistic jets of pair-plasma, which emanate from the accretion disk in the kinetic-mode, are responsible for pushing out vast amounts of gas from the host galaxy and thus suppress star formation, or sometimes trigger regions of enhanced growth along shock fronts. find the growth of black holes closely tracks the star formation history of galaxies across cosmic time. The energy input by AGNs into their surroundings through kinetic feedback (primarily by shocks and pressure-volume work) prevent catastrophic cooling of the hot cluster gas. Kinetic-mode feedback is also invoked in cosmological galaxy formation models to explain the missing stellar mass in the brightest galaxies towards the present epoch. The quantification of the energetics of kinetic-mode AGN feedback requires knowledge of the energy imparted in each outburst and the duty cycle of the black hole activity. Jet kinetic powers can be directly measured at radio-frequencies using techniques including hotspot luminosities and spatial shifts of radio cores in VLBI images, or more readily constrained in large sky surveys using dynamical models for the lobes populated by shock-accelerated synchrotron-emitting electrons. Recent high sensitivity and resolution observations by the *Low Frequency Array* (LOFAR) at low frequencies ($150\rm\, MHz$) have reinvigorated studies of the AGN duty cycle. found that up to 100 per cent of all large galaxies host an active nucleus, noting that their accretion rate is modulated such that half the energy output is released in outbursts of increased activity lasting less than two percent of the time. Studies of individual objects and large sky surveys have also found populations of remnant and restarted radio galaxies, setting a lower bound for the duty-cycle. used a radio source dynamical model to provide an upper limit on the duty cycle of *δ* < 0.15 in the remnant B20924+30. tighten the range of plausible duty cycles by comparing the size and luminosity functions of radio sources in the Lockman Hole to simulated functions based on dynamical models for a range of lifetime distributions. The shock-accelerated electrons comprising extended radio AGN lobes not only emit synchrotron radiation at radio frequencies, but also produce X-ray emission at keV energies due to the inverse-Compton upscattering of cosmic microwave background photons by lower energy electrons. The inverse-Compton and synchrotron radiative loss mechanisms more rapidly deplete the higher energy (*γ* ∼ 104) electrons involved in the synchrotron radiation compared to the lower energy (*γ* ∼ 103) electrons responsible for the X-ray inverse-Compton emission. Remnant radio galaxies are therefore expected to appear as inverse-Compton ‘ghosts’ for some period of time after the cessation of jet activity before becoming undetectable at both radio and X-ray wavelengths. In particular, the extended X-ray source HDF130 at *z* = 1.99 has been identified as a remnant radio galaxy despite its double-lobed structure only being visible at X-ray wavelengths. Further inverse-Compton ‘ghosts’ should be readily detectable at high redshift since the energy density of the cosmic microwave background increases as (1 + *z*)4, offsettting the reduced X-ray flux density at greater distance. Conversely, only the youngest radio galaxies at high-redshift will be detectable in radio observations due to the extreme radiative losses resulting from the strong microwave background energy density. investigate two distant quasars, ULAS J112001.48+064124.3 at *z* = 7.1 and SDSS J1030+0524 at *z* = 6.3, finding that powerful jets fuelled by super-Eddington accretion rates could exist but be undetectable with current surveys. However, the benefit of increased brightness at higher redshifts may be offset by the convolution of the extended and core emission, which can be of comparable magnitude to the brightest inverse-Compton lobes. In this work, we extend the *Radio AGNs in Semi-analytic Environments* model for the dynamical evolution and synchrotron emissivity of active, remnant and restarted radio galaxies to X-ray wavelengths; in particular, we seek to quantify which radio galaxies have non-core associated X-ray emission detectable with the surface brightness sensitivity of current surveys (e.g. using *Chandra* and *eRosita*). Previous iterations of RAiSE have found success in: (1) reproducing surface brightness and spectral age maps for canonical FR-I (3C31) and FR-II (3C436) type sources ; (2) deriving jet kinetic powers consistent with X-ray inverse-Compton measurements ; and (3) accurately constraining the Hubble constant using low-redshift AGNs. In this paper, we first extend RAiSE to calculate the integrated X-ray luminosity due to the inverse-Compton upscattering of cosmic microwave background radiation by lobe electrons, then create model X-ray surface brightness maps for lobes of type-II morphology (Section [sec:LUMINOSITY MODEL]). These X-ray brightness maps are completed in Section [sec:SHOCKED SHELL BREMSSTRAHLUNG RADIATION] by including the bremsstrahlung radiation both from the shocked shell of gas swept up between the bow shock and lobe plasma, and from the ambient medium. In Section [sec:X-ray surface brightness maps], we investigate the changing importance of the inverse-Compton and bremsstrahlung radiative mechanisms with redshift and intrinsic properties of the source. Finally, we create a mock catalogue of extended radio galaxies based on an observationally informed set of parameters; this catalogue is used to generate the X-ray luminosity function for FR-IIs over cosmic time, and characterise the effectiveness of both radio-frequency (*SKA*-pathfinders) and X-ray surveys at detecting high-redshift remnants (Section [sec:MOCK EXTENDED AGN POPULATION]). The $\Lambda \rm CDM$ concordance cosmology with $\Omega\_{\rm M} = 0.3$, ΩΛ = 0.7 and $H\_0 = 70 \rm\,km \,s^{-1} \,Mpc^{-1}$ is assumed throughout the paper. LOBE INVERSE-COMPTON EMISSIVITY =============================== The inverse-Compton emissivity model developed in this work is based on the synchrotron model presented by ; it can therefore similarly be applied to any AGN dynamical model, be it analytical or a hydrodynamical simulation. The model assumes that the radiative losses are not taken into account self-consistently in the evolution of the lobe pressure. confirms that synchrotron radiation comprises less than ten percent of the input power for typical radio galaxies (at *z* = 0), whilst inverse-Compton and bremsstrahlung radiation contribute even less to the energy loss during the source expansion. However, the cosmic microwave background radiation responsible for the inverse-Compton losses increases with redshift as (1 + *z*)4; the inverse-Compton radiation can reach up to 40% of the input power in a $100\rm\, Myr$ source at *z* = 1, or all of the input power for a $10\rm\, Myr$ source at *z* = 4. The results in this work do not consider objects beyond these extremes in age and redshift. The adiabatic expansion of the lobe is therefore assumed to take up the bulk of the input energy as explicitly considered in numerical and published analytical dynamical models. proposed a technique to derive the inverse-Compton emissivity from the hotspots of powerful FR-IIs by integrating over the full electron and photon distribution. and later both derived models for the inverse-Compton emission from radio lobes, with the formalisms consistent to those found in the dynamical and synchrotron emissivity models of and. These two models make use of the electron energy distribution assumed in the synchrotron emissivity model, and will therefore have the same strengths and shortcomings as the calculation of the radio emissivity. Unresolved, continuously injected electron model ------------------------------------------------ The synchrotron-emitting electrons in radio sources have significant kinetic energy when compared to the energy of the cosmic microwave background (CMB) photons, and thus the electron energy can be transferred to a CMB photon through inverse-Compton scattering. Assuming that the electrons emit synchrotron radiation only at some critical frequency $\nu\_{\rm syn} = \gamma^2 \nu\_{\rm L}$, where $\nu\_{\rm L}$ is the Larmor frequency, the total lossless radio power (integrated over solid angle) emitted by electrons in a volume element *d**V* is $$\begin{split} dL\_{\nu\_{\rm syn}} = \frac{1}{2} \sigma\_{\rm T} c u\_{\rm B} \frac{\gamma^3}{\nu\_{\rm syn}} n(\gamma) dV, \end{split} \label{Lsyn}$$ where $\sigma\_{\rm T}$ is the electron scattering cross-section, *c* is the speed of light, $u\_{\rm B}$ is the energy density of the magnetic field, and *n*(*γ*) is the electron energy distribution. The inverse-Compton power is similarly given by $$\begin{split} dL\_\nu = \frac{1}{2} \sigma\_{\rm T} c u\_{\rm c} \frac{\gamma^3}{\nu} n(\gamma) dV, \end{split} \label{Lic}$$ where $u\_{\rm c} = u\_{\rm c0} (1 + z)^4$ is the energy density of the cosmic microwave background radiation at redshift *z*, *e* the electron charge and $u\_{\rm c0} = 0.25e\times10^6 \rm\, J \, m^{-3}$ is the corresponding energy density at the present epoch. The electron population involved in both emission processes is identical, and thus the Lorentz factor and electron energy distribution can be modelled using the equations of derived for synchrotron radiation. For inverse-Compton emission, the relevant frequency is that of upscattered CMB photons, $\nu = \gamma^2 \nu\_{\rm cmb}$, where a photon at the peak CMB frequency $\nu\_{\rm cmb} = 5.879\times10^{10} {\rm\, Hz \, K^{-1}} \times 2.73 {\rm\, K}(1 + z)$ is boosted by a synchrotron-emitting electron with a Lorentz factor *γ*. The peak frequency of the synchrotron-emitting electrons involved in the inverse-Compton upscattering of CMB photons to frequency *ν*, can thus be derived for rest-frame frequencies as $$\begin{split} \nu\_{\rm syn} = \frac{3e \nu \sqrt{2 \mu\_0 u\_{\rm B}}}{2\pi m\_{\rm e} \nu\_{\rm cmb}}, \end{split}$$ where *μ*0 is the permeability of free space and $m\_{\rm e}$ the electron mass. The full calculation of the inverse-Compton emissivity from the upscattering photons requires the integration over the total spectrum of CMB photons (i.e. a blackbody distribution). The following correction should be applied to the simpler analytically tractable solution which assumes all CMB photons at a given redshift are at a single frequency : $$\begin{split} J\_{\rm corr}(s) = \frac{\pi^4}{15 \Gamma (\tfrac{s + 5}{2}) \zeta (\tfrac{s + 5}{2})}, \end{split}$$ where Γ and *ζ* are the Gamma and Zeta functions respectively. For a typical electron energy injection index of *s* = 2.4 (i.e. *N*(*γ*) ∝ *γ*− *s*), the use of a single frequency CMB spectrum in this calculation would lead to an error of approximately 40%. Following the method of, their Equation 3 is modified to yield the integrated inverse-Compton emissivity of the lobe at time *t*: $$\begin{split} L(\nu, t) = &\frac{K(s) {\nu\_{\rm syn}}^{(1 - s)/2}}{J\_{\rm corr}(s)} \left(\frac{\nu\_{\rm syn}}{\nu} \right) \frac{q^{(s - 3)/4}}{(q + 1)^{(s + 1)/4}} \\ &\quad \times p(t)^{(s + 1)/4} V(t) (\Gamma\_{\rm c} - 1) [u\_{\rm c0} (z + 1)^4] \mathcal{Y}(\gamma, t), \end{split} \label{luminosityloss}$$ where *K*(*s*) is the source specific constant defined in Equation 5 of, $\Gamma\_{\rm c}$ is the adiabatic index of the lobe plasma, *p*(*t*) is the present-time lobe pressure, and *q* ≡ *u**B*/*u**e* is the ratio of the energy density in the magnetic field to that in the synchrotron-emitting particles [2](#fn2). The differences between the synchrotron and inverse-Compton (here) forms of this equation can be explained as follows: (1) the power law spectrum is cast in terms of the synchrotron frequency rather than the (now inverse-Compton) emitting frequency, (2) the inverse-Compton frequency *ν* in the denominator of Equation [Lic] replaces the synchrotron frequency in Equation [Lsyn], and (3) the inverse-Compton power in Equation [Lic] replaces a linear dependence on the magnetic field energy density $u\_{\rm B} = qp/(\Gamma\_{\rm c} - 1)(q + 1)$ in Equation [Lsyn] with a dependence on the energy density of the CMB radiation $u\_{\rm c}$. The loss function Y(*γ*, *t*) is defined in Equation 7 of for active and remnant sources as $$\begin{split} \mathcal{Y}(t) = \int\_0^t \frac{a\_{\rm v}(t\_{\rm i})}{t\_{\rm i}} \frac{Q(t\_{\rm i})}{Q\_0} \left[\frac{p(t\_{\rm i})}{p(t)} \right]^{1 - 4/(3\Gamma\_{\rm c)}} \frac{V(t\_{\rm i})}{V(t)} \left[\frac{\gamma\_{\rm i}}{\gamma} \right]^{2 - s} dt\_{\rm i}, \end{split} \label{resolvedvolume}$$ where $Q(t\_{\rm i})/Q\_0$ is the instantaneous jet power at the electron injection time scaled by its active value. The integral is over the particle injection times $t\_{\rm i}$, where $p(t\_{\rm i})$, $V(t\_{\rm i})$ and $\gamma\_{\rm i}$ are the lobe pressure, volume and electron Lorentz factor at the time of injection respectively. The constant $a\_{\rm v}$ is the average volume expansion rate of the packet of electrons injected at time $t\_{\rm i}$, defined through $V \propto t^{a\_{\rm v}(t\_{\rm i})}$. The loss function is defined, as before, in terms of the Lorentz factor *γ* of the synchrotron-emitting electrons which radiate at frequency $\nu\_{\rm syn}$; the Lorentz factor of these electrons is the same as for the inverse-Compton upscattering process. The inverse-Compton emission is modelled over the evolutionary history of a source using the RAiSE dynamical model to derive the pressure and volume evolution of the lobe. Luminosity–source age tracks are thus calculated at $1\rm\, keV$ observer-frame X-ray energies for four extended AGNs (Figure [fig:LDtracks]). Our base model (Model A) considers a $Q = 3 \times 10^{38}\rm\, W$ jet with an active lifetime of $t = 100\rm\, Myrs$ at redshift *z* = 2. The gas density profile is modelled for a $10^{13.5}\rm\, M\_\odot$ mass cluster following the method of, the lobe axis ratio is set as *A* = 4, the injection index of the electron energy distribution is *s* = 2.4, whilst the other parameters in the RAiSE dynamical and inverse-Compton emissivity models take the same values as used by. Three variations to the base model are considered with either a lower redshift (Model B), higher jet power (Model C) or shorter active lifetime (Model D). The X-ray inverse-Compton luminosity–source age tracks for our four extended AGNs are qualitatively very similar (for example) to those in Figure 3 of, albeit with noticeably brighter luminosities, largely due to the richer cluster cores in our observationally informed density profiles. [fig:LDtracks] Spatially resolved lobe losses ------------------------------ The spatially resolved RAiSE dynamical model calculates lobe properties as a function of distance from the site of particle acceleration (hotspots for FR-II, flaring points for FR-I sources). Taking the particle acceleration site as the origin of the coordinate system, the inverse-Compton emissivity from a position **r** in the lobe can similarly be derived following. $$\begin{split} dL(\nu&, t, \textbf{r}) = \frac{K(s) {\nu\_{\rm syn}}^{(1 - s)/2}}{J\_{\rm corr}(s)} \left(\frac{\nu\_{\rm syn}}{\nu} \right) \frac{q^{(s + 1)/4}}{(q + 1)^{(s + 5)/4}} \\ &\times p(t, \textbf{r})^{(s + 1)/4} dV(t, \textbf{r}) (\Gamma\_{\rm c} - 1) [u\_{\rm c0} (z + 1)^4] \\ &\times \left[\frac{p(t\_{\rm i}(\textbf{r}))}{p(t, \textbf{r})} \right]^{1 - 4/(3\Gamma\_{\rm c})} \left[\frac{\gamma\_{\rm i}(\textbf{r})}{\gamma} \right]^{2 - s}, \end{split} \label{resolvedluminosity}$$ where the volume occupied at the present time by an electron packet injected between $t\_{\rm i}$ and $t\_{\rm i} + dt\_{\rm i}$ is given by $$\begin{split} dV(t, \textbf{r}) &= \int\_0^t \frac{Q(t\_{\rm i})}{Q\_0} \delta(t\_{\rm i}' - t\_{\rm i}) \frac{dV(t\_{\rm i}', \textbf{r})}{dt\_{\rm i}'} dt\_{\rm i}'. \end{split} \label{resolvedvolume}$$ For lobed sources with a high internal sound speed (typically FR-IIs), the volume element can be simplified to $dV(t, \textbf{r}) = a\_{\rm v}(t\_{\rm i}) V(t\_{\rm i}) dt\_{\rm i}/t\_{\rm i}$ and the pressures taken as lobe averages; i.e. *p*(*t*, **r**) and $p(t\_{\rm i}(\textbf{r}))$ are the lobe pressure at the present time and the time of electron injection respectively. used hydrodynamic simulations to study the spatial distribution of synchrotron-emitting electrons throughout lobed sources, by injecting tracer fields into the jet at regular intervals and tracking their motion. The distribution of these tracer fields within the lobe at each injection age was used to produce a representative synchrotron-emitting electron population at every location across the source. found that the fluid injected at the jet terminal hotspot initially flows smoothly back towards the core, but electrons of different ages disperse broadly after travelling back slightly less than half of the lobe length. model the average location, **r**, and the 2*σ* spread, *d***r**, of the electron packets as a function of synchrotron age (see their Equations 11a,b). These authors also found the behaviour of the flow to be independent of both physical and temporal scales, enabling a single analytic description of the electron distribution to be assumed for all lobed FR-II sources. Below, we use the results of these simulations to inform our analytic models for both active and remnant radio sources. For remnants, the relative locations of the electron packets within the lobe are assumed to remain fixed upon the jet switching off. This assumption is only valid on timescales shorter than the mixing timescales. However, remnant sources are typically observed soon after the jet switches off and this assumption is expected to be appropriate for most objects. SHOCKED SHELL BREMSSTRAHLUNG RADIATION ====================================== The bow shock generated by a powerful lobed radio source (FR-II or FR-I) sweeps up the intracluster medium (ICM) in its path as it propagates outwards from the active nucleus. The thermal evolution of this swept-up gas was considered by. Here, we extend the RAiSE model to explicitly include evolution of the shell of shocked gas surrounding the lobe. Shocked shell dynamical model ----------------------------- ### Axis ratio evolution Hydrodynamical simulations suggest that the bow shocks of radio sources expand in a self-similar manner despite the lobes slowly elongating over their evolutionary history: for example, in their Figure 1, the axis ratio of the shocked shell remains a constant value of  ∼ 2.5 once fully formed, whilst the lobe ratio slowly increases from 6.5 to 8. We therefore model the growth of the shocked shell in RAiSE using the same formalism as for lobe evolution, but excluding the late-stage Rayleigh-Taylor mixing which quickly pinches the lobes. The radius of the shocked shell is thus related to that of the intact lobe at each point on its surface as $R\_{\rm s}(\theta) = \tau R(\theta)$, where *τ* ≡ *τ*(*θ*) is a constant of proportionality and *θ* is the angle between the surface location and the jet axis. Simulations clearly show that the ratio between shocked shell and lobe radii varies across the surface of the shocked shell. We define the axis ratio (length divided by width) of the shocked shell in terms of the axis ratio of the lobe as $A\_{\rm s} = A^\iota$, for some exponent *ι*. Based on the hydrodynamical simulations of, we assume a value of *ι* = 0.5 in this work; in other words, $A\_{\rm s} = \sqrt{A}$. The radial distance to the lobe surface at an angle *θ* from the jet axis is related to the length of the lobe along the jet axis by the ratio given in Equation 21 of, $$\begin{split} \eta(\theta) = \frac{1}{\sqrt{A^2 \sin^2 \theta + \cos^2 \theta}} \end{split}, \label{eta}$$ and the distance to the shocked shell at angle *θ* is similarly related to the length of the shocked shell along the jet axis by the ratio $$\begin{split} \eta\_s(\theta) = \frac{1}{\sqrt{A^{2\iota} \sin^2 \theta + \cos^2 \theta}} \end{split}. \label{eta\_s}$$ The radius of the shocked shell can thus be related to that of the lobe as $R\_s(\theta) = \tau \eta\_{\rm s}(\theta) R(\theta)/\eta(\theta)$, where *τ* ∼ 1.05 is the ratio of the shocked shell to lobe radii along the jet axis. ### Modified shock geometry To describe the expansion of the shocked shell, we follow the geometric approach of. We calculate the component of the expansion rate normal to the surface of the shocked shell by relating it to the expansion rate of the shocked shell along the jet axis, as in Equation 20 of, $$\begin{split} \zeta\_s(\theta) = \left[\frac{A^{2\iota} \sin^2 \theta + \cos^2 \theta}{A^{4\iota} \sin^2 \theta + \cos^2 \theta} \right]^{1/2} \end{split}. \label{zeta\_s}$$ Again following the method of, we derive a second order differential equation in terms of the lobe radius and expansion rate of the lobe surface at an angle *θ* from the jet axis. This differential equation cannot be solved analytically in general, and so we must adopt a numerical scheme using a fourth order Runge-Kutta method in terms of a system of two first order ODEs. The following system of equations must be solved for each small angular element [*θ* − *d**θ*/2, *θ* + *d**θ*/2) of the lobe and shocked shell: $$\begin{split} \dot{R} &= v \\ \dot{v} &= \frac{3 (\Gamma\_{\rm x} + 1)(\Gamma\_{\rm c} - 1) Q R^{\beta - 3} d\lambda}{8 \pi v (\zeta\_{\rm s}/\eta\_{\rm s})^2 k \sin\theta d\theta} \left[\frac{\tau \eta\_{\rm s}}{\eta} \right]^3 + \frac{(\beta - 3\Gamma\_{\rm c}) v^2}{2 R} \\ &\quad\quad + \frac{(\Gamma\_{\rm x} - 1) (3 \Gamma\_{\rm c} - \beta - \xi) l}{4 R^{\xi + 1} (\zeta\_{\rm s}/\eta\_{\rm s})^2}, \end{split} \label{supersonic system}$$ where in the strong-shock supersonic limit (near time zero in the numerical scheme) the *θ* dependent constant *d**λ* is defined through the expression: $$\begin{split} &\frac{8\pi k \sin\theta d\theta}{3 (\Gamma\_{\rm x} + 1)} \left[(3 \Gamma\_{\rm c} - \beta)R^{2 - \beta} \dot{R}^3 + 2R^{3 - \beta} \dot{R} \ddot{R} \right]\_{\theta = 0} \\ &\quad\quad \times {\eta\_{\rm s}}^{3 - \beta}(\theta) {\zeta\_{\rm s}}^2(\theta) \left[\frac{\eta(\theta)}{\tau \eta\_{\rm s}(\theta)} \right]^3 = (\Gamma\_{\rm c} - 1) Q\, d\lambda(\theta). \end{split}$$ Here, *β* and *k* parametrise the local shape of the ambient density profile (i.e. *ρ* = *k**r*− *β*), *ξ* and *l* describe the local shape of the temperature profile (i.e. *T* = *l**r*− *ξ*), whilst $\Gamma\_{\rm x}$ is the adiabatic index of the ambient medium. The shape of density and temperature profiles are based on cluster observations and modelled as piecewise continuous power laws to retain analytically tractable solutions. The differential equations describing the evolution of the lobe in the subsonic and remnant phases are unchanged from those given in and respectively. Isothermal evolution of the shocked shell ----------------------------------------- As first pointed out by, the final state of the shocked gas lying between the lobe surface and the bow shock depends critically on whether it expands adiabatically or isothermally; we assume here that the swept-up gas evolves isothermally. The pressure, $p\_{\rm s}(\theta)$, of the swept-up gas at an angle *θ* from the jet axis is found from the jump conditions at the shock. The cooling rate, and thus intensity of thermal bremsstrahlung radiation, in the shocked shell depends on both the pressure of the shocked gas and its temperature. For low Mach numbers *M*0 the cooling time of the swept-up gas, $t\_{\rm s}$, can become short compared with that in the cluster, $t\_{\rm x}$, with $t\_{\rm s}/t\_{\rm x} > 0.05M\_0$. However, the cooling time of the external gas would have to be at least an order of magnitude less than the source age, *t*, for the shocked gas to suffer significant radiative losses. Such conditions are only possible at the centres of the strongest cooling flow clusters – precisely *not* the locations where powerful, large radio galaxies are typically found. We therefore ignore the cooling of the gas within the shocked shell in our analysis. ### Mean shocked shell temperature The mean temperature of the shocked gas is found by first calculating the mean pressure and density of the shocked shell. The mean density is taken as the ratio of the mass of ambient gas previously occupying the lobe and shocked shell to the volume of the shocked gas shell: $$\begin{split} \bar{\rho\_{\rm s}}(t) = \frac{\int\_0^{\pi/2} \int\_0^{R\_{\rm s}(t, \theta)} \rho(r) r^2 dr \sin \theta d\theta}{\int\_0^{\pi/2} \int\_{R(t, \theta)}^{R\_{\rm s}(t, \theta)} r^2 dr \sin \theta d\theta}, \end{split}$$ where *ρ*(*r*) is the gas density of the ambient medium. This expression must be solved numerically except for the special case of spherical lobes expanding into a power law environment. The mean temperature of the shocked gas shell is thus given as $T\_{\rm s}(t) = \bar{m} \bar{p\_{\rm s}}(t)/k\_{\rm B} \bar{\rho\_{\rm s}}(t)$ for a shocked gas shell with mean pressure $\bar{p\_{\rm s}}(t)$ and an average particle mass $\bar{m} \sim 0.6m\_{\rm p}$, where $m\_{\rm p}$ is the proton mass. The temperature may of course vary throughout the shocked gas shell but without precise knowledge of any density gradients we simply assume mean values. ### Bremsstrahlung radiation The X-ray emissivity per unit volume due to thermal bremsstrahlung radiation is $$\begin{split} J(\nu) =& \frac{Z^2 e^6}{3 \pi^2 {\varepsilon\_0}^3 {m\_{\rm e}}^2 c^3} \left(\frac{\pi m\_{\rm e}}{6} \right)^{1/2} \\ &\quad\quad (k\_{\rm B} \mathcal{T})^{-5/2} {\tilde{p}}^{\!\:2} e^{-h\nu/k\_{\rm B} \mathcal{T}} g(\nu, \mathcal{T}), \end{split}$$ where the pressure and temperature (*p̃* and T respectively) may relate to the shocked shell, ambient medium, or any other plasma. Here, $Z \gtrsim 1$ is the average atomic number of the positively changed particles, *ɛ*0 is the vacuum permittivity, and, at frequencies $h\nu \ll k\_{\rm B} \mathcal{T}$, the Gaunt factor has a logarithmic dependence on frequency as $$\begin{split} g(\nu, \mathcal{T}) = \frac{\sqrt{3}}{\pi} \ln \left(\frac{4}{\zeta} \frac{k\_{\rm B} \mathcal{T}}{h\nu} \right), \end{split}$$ where here *ζ* = 1.78 is Gauss’ number. In this work, we assume the metallicity of the plasma is 0.3 times the solar value, corresponding to an average atomic number of *Z* ∼ 1.04. The shocked gas pressure used in the bremsstrahlung radiation calculation is that derived from the shock jump conditions on the shell surface at angle *θ* (i.e. $\tilde{p} = p\_{\rm s}(\theta)$), whilst the mean temperature is derived following the method described in the previous section. [fig:apec] The X-ray spectra of smaller clusters at low-redshift are dominated by emission lines at the wavelengths of the *Chandra* and *eRosita* surveys. The APEC plasma code can calculate the X-ray emission spectra from thin-thermal plasma for various temperatures and metallicities. In Figure [fig:apec], we plot the fractional contribution of continuum emission to the total bremsstrahlung X-ray spectrum (including both continuum and line emission) integrated over a typical 0.5-$2\rm\, keV$ observing band. The emission lines are found to contribute a significant component of the X-ray flux density at redshifts *z* < 2.5 in clusters with temperatures $kT \lesssim 1\rm\, keV$, corresponding to halo masses $\lesssim 10^{13.5}\rm\, M\_\odot$. We use the appropriate APEC model to correct our analytically derived flux densities of the X-ray emission arising from bremsstrahlung radiation. X-RAY SURFACE BRIGHTNESS MAPS ============================= We use the formalism developed in the preceding sections to make predictions for the observed X-ray emission from radio galaxies and their environments. There are three main contributions: lobe inverse-Compton emission (Section [sec:LUMINOSITY MODEL]) and two sources of bremsstrahlung radiation, from the shocked gas shell ahead of the lobes (Section [sec:SHOCKED SHELL BREMSSTRAHLUNG RADIATION]), and from the ambient medium. We calculate the latter in the same way as bremsstrahlung from the shocked gas shell, but adopting density and temperature parameters characteristic of the hot gas in galaxy clusters, as described in. X-ray and radio brightness maps ------------------------------- The lobes and shocked gas shell of each mock radio galaxy are divided into a 512 × 512 × 512 grid of cubic pixels; this grid extends a factor of two beyond the edge of the lobe to enable a comparison with X-ray surface brightness of the ambient medium. Each cell in the cube is classified as either part of the lobe, shocked shell or ambient medium based on the modified RAiSE dynamical model (Section [sec:SHOCKED SHELL BREMSSTRAHLUNG RADIATION]), and the inverse-Compton emissivity or bremsstrahlung radiation calculated as appropriate. The observed emission arising from these extended objects is the integral of all the emissivity along a given line-of-sight. The two-dimensional surface brightness is simply calculated by summing the emissivity from every cell along the depth of the source, assuming the lobe plasma and ambient medium in front of the source is optically thin. The X-ray emission arising from the ambient medium lying outside the grid of cell is calculated analytically for each line-of-sight and added to the total from the numerical grid. For simplicity, we assume that the lobes lie in the plane of the sky. The inverse-Compton and bremsstrahlung emissivity in the (two-dimensional) surface brightness map are derived for the technical characteristics of the *extended Roentgen Survey with an Imaging Telescope Array*. Specifically, at a $1\rm\, keV$ observer-frame energy, the half-energy width (on axis) is $15\rm\, arcsec$ and the total effective area of the seven mirror systems is $\sim 1500\rm\, cm^2$. The number of $1\rm\, keV$ photons falling on these mirrors in a typical 1000$\rm\, s$ (1 ks) observing time is thus calculated from the X-ray surface brightness grid. The $151\rm\, MHz$ radio frequency emission from the lobes is also calculated following the method of ; this frequency is commonly used by low-frequency Square Kilometre Pathfinder instruments, including the *Low-Frequency Array* (LOFAR) and the *Murchison Widefield Array* (MWA). The X-ray and radio surface brightness distributions are modelled for lobed FR-IIs at a range of redshifts throughout their evolutionary history. Specifically, we simulate AGNs at 41 (log-spaced) source ages between 3 and 300$\rm\, Myrs$, redshifts of *z* = 0.1, 0.5 and 1, jet powers of *Q* = 1037, 1038 and $10^{39}\rm\, W$, active ages of 30 and 100$\rm\, Myrs$ (i.e. when the jet ceases injecting fresh electrons), and host cluster environments with dark halo masses of 1012.5, 1013.5 and $10^{14.5}\rm\, M\_\odot$. The axis ratio of the lobe is taken as *A* = 4 corresponding to an axis ratio for the shocked shell of $A\_{\rm s} = 2$ (see Section [sec:AXISRATIO]). The other parameters in the RAiSE dynamical and synchrotron/inverse-Compton emissivity models take the same values as used by. The surface brightness maps for several informative combinations of these parameters are shown in Figure [fig:surfbright]. Powerful jets expanding into dense environments are found to create dense shells of shocked gas surrounding the lobe. The bremsstrahlung radiation from the shocked gas is much brighter than can be generated by the inverse-Compton radiation process, especially in young sources and at low redshifts (e.g. top panel of Figure [fig:surfbright]). The faint lobes bounded by well-defined lines are seen in *Chandra* X-ray images of Cygnus A and MS0735.6+7421, with cavities also seen in the hosts of weaker FR-Is such as the Perseus cluster. By contrast, in lower mass clusters, the density of swept-up gas is very low causing its temperature to be high to satisfy the shock jump conditions. The increased temperature leads to minimal bremsstrahlung radiation from the shocked gas shell. The inverse-Compton upscattered photons in the lobe are thus the dominant source of X-ray emission, especially at higher redshifts where the cosmic microwave background radiation is stronger (e.g. middle panel of Figure [fig:surfbright]). Finally, the X-ray inverse-Compton emission remains visible well after the radio frequency emission vanishes; in the simulated remnant, the radio emission has retreated towards the freshest electrons at the hotspot, whilst the X-ray emission does a much better job of tracing out the channel evacuated by the radio lobes. A similar result has previously been reported by ; Figure [fig:surfbright] additionally shows that the peak in the X-ray intensity from the lobe is much closer to the core than the full extent of the radio source. Many radio sources appear asymmetric, due to asymmetry in the environments encountered by the two lobes ; this poses a challenge for robustly identifying host galaxies of remnant radio sources. Our results suggest that X-ray observations may provide a useful alternative. [fig:surfbright] An intriguing surface brightness map is produced when propagating weak jets into dense environments (bottom panel of Figure [fig:surfbright]); dense shells of shocked gas build up around the lobe as before, however, the weakened shocks and magnetic fields allow the contact surface near the core to become unstable to turbulent mixing of the dense shell and the lobe. This results in thermal bremsstrahlung emission extending out from the core along the transverse axis, brighter than X-ray emission from the lobe or the thinner (and thus fainter integrated along the line-of-sight), unmixed portions of the shocked shell further from the core. We note that strong X-ray emission may also be observed *along* the jet axis due to inverse-Compton emission from the jets as well as lobes (for older sources at high redshift); a combination of radio and X-ray imaging may distinguish these two scenarios. Dominant source of X-ray emissivity ----------------------------------- In this section, we investigate the relative importance of contributions to the X-ray surface brightness from the lobe (inverse-Compton), shocked gas shell (bremsstrahlung), and the ambient medium (bremsstrahlung), for some representative sources at a range of redshifts. The surface brightness for the three regions is calculated as the maximum value along the jet axis. However, the increased bremsstrahlung radiation from sources pinched by Rayleigh-Taylor mixing may present a source of error in this work as the rate of mixing is a poorly quantified model parameter in RAiSE. Fortunately, in the vast majority of extended AGNs this enhanced emission is only a minor contributor to the integrated X-ray luminosity, and when generating surface brightness maps, either the Rayleigh-Taylor mixed region is not resolved from the bright X-ray core (discussed in Section [sec:X-ray source number densities]) or the remainder of the lobe also sits above the surface brightness sensitivity limit (i.e. the shape of the source is correctly determined). The analyses in this work are repeated with the inner third of the lobe masked to exclude this region of enhanced emission; our results are unaffected by this change. ![image](AGNmap_H=1250_Q=3800_T=750_z=var) [fig:timeseries1] [fig:timeseries2] The evolutionary tracks of the X-ray surface brightness for these representative sources are shown in Figures [fig:timeseries1] and [fig:timeseries2]. The inverse-Compton emission is dominant over the bremsstrahlung radiation from both the shocked gas shell and the ambient medium in poor clusters ($10^{12.5}\rm\, M\_\odot$; Figure [fig:timeseries1]) at all source ages $\gtrsim 1\rm\, Myr$. By contrast, the shocked gas shell of sources expanding into denser host cluster environments ($\gtrsim 10^{13.5}\rm\, M\_\odot$; Figure [fig:timeseries2]) is brighter than the lobe, albeit not appreciably, for at least 20 to $100\rm\, Myrs$ with the weakest FR-II jet powers ($Q = 10^{37}\rm\, W$); the shocked shell stays brighter than the lobe for longer with either denser environments or higher jet powers. This change in the importance of the lobe and shocked gas shell to the X-ray surface brightness typically results from rapidly falling pressure in the shocked gas as the source approaches pressure equilibrium with the ambient medium. Further, the stronger cosmic microwave background radiation at higher redshifts leads to increased inverse-Compton emission from the lobe causing this source of X-ray surface brightness to become dominant in younger sources. Finally, the X-ray surface brightness from the lobe and shocked gas shell remain detectable above that of the ambient medium for a considerable time after the jet switches off. The extended source in Figure [fig:timeseries1], with an active lifetime of 30$\rm\, Myrs$, remains detectable over the ambient medium (at the 5*σ* level) for a further 155$\rm\, Myrs$ at redshift *z* = 0.1, 115$\rm\, Myrs$ at *z* = 0.5, 100$\rm\, Myrs$ at *z* = 1, and 65$\rm\, Myrs$ at *z* = 2. By contrast, *any* level of synchrotron emission is present at radio frequencies for only 170, 70, 25 and 10$\rm\, Myrs$ after the jets switch off, respectively, at these redshifts. Similarly, the extended source in Figure [fig:timeseries2] emits radiation from the lobe and shocked shell at X-ray wavelengths long after the radio emission fades, though this is only a factor of a couple brighter than the ambient medium. Based on these and other test cases, we expect a population of remnant extended radio galaxies undetectable at radio frequencies but visible to X-ray telescopes, in line with literature predictions of a large population of inverse-Compton ‘ghosts’, particularly at high-redshift. MOCK EXTENDED AGN POPULATION ============================ We now extend our analysis to make predictions for radio galaxy population statistics. Specifically, we combine the theoretical framework developed in the previous section with the halo mass function and literature radio frequency observations to create a mock population of extended radio galaxies. We then use this simulated population to predict the X-ray luminosity function for extended AGN emission, and investigate the number of objects that could be uniquely detected using large sky X-ray surveys. Construction of mock catalogue ------------------------------ We construct a mock catalogue of extended AGNs, including their angular sizes, surface brightnesses, and integrated flux densities, by running simulations for a dense grid of model parameters. The brightnesses are calculated at radio frequencies for a typical *Low Frequency Array* (LOFAR) survey and in the X-ray for an *eRosita* survey. Observations are used to constrain the input halo mass function for AGN hosts, and the distribution of source active lifetimes and jet kinetic powers. For the remaining parameters, we assume values given in Section [sec:Surface brightness]. ### Halo mass function [fig:luminfunctions] [fig:luminfractions] The mass function of extended AGN hosting clusters is assumed to be the convolution of the halo mass function for all groups and clusters, and the probability of finding an AGN in a given mass host. The mass of dark matter haloes (observed as galaxy groups and clusters) can in general be described by a mass function, which gives the number density of clusters as a function of mass. In this work, and following, we take our dark matter halo masses from the low-redshift mass function of who find that a common Schechter function describes both galaxy groups and clusters. We use the semi-analytic galaxy evolution (SAGE) model of to extend their observations to higher redshifts, finding the mass of the break in the Schechter function scales with redshift as approximately (1 + *z*)− 3. The radio-loud fraction of AGNs (i.e. fraction of sources with brightness above an arbitrary cut in radio luminosity) suggests hosts with higher mass black holes will have more frequent (or longer) phases of activity. made a theoretical prediction for the AGN duty cycle (i.e. the fraction of time a given source is active) as a function of black hole mass, $\delta \propto \rm M\_\bullet^{1.5}$; this relationship is consistent with the observed radio-loud fraction for all but the most massive galaxies in which the duty cycle peaks at 100%. The black hole mass in the brightest cluster galaxy (BCG), the mass of the stellar bulge, and the dark matter halo mass, are all known to scale approximately linearly with each other. Individual galaxies show moderate scatter of $0.5\rm\, dex$ about the dark matter halo–black hole mass relationship, however the underlying relationship remains when considering several orders of magnitude in halo mass. We therefore convolve the halo mass function with the mass–duty cycle relation to derive the radio AGN mass function. This halo mass function is shown in the left panel of Figure [fig:luminfunctions] as a function of redshift for masses between 1012.5 and $10^{14.75}\rm\, M\_\odot$. The number density is scaled in the plot based on the number of high-luminosity radio AGNs of FR-II morphology observed by. These scaling relationships predict that all plausible host halo masses occur at similar probability, with a slightly decreased likelihood of finding a radio AGN in the most massive clusters. This convolved mass function will provide a better description of the environments typically hosting AGNs than the raw halo mass function. ### Source lifetimes The radio AGN lifetime function has generated much recent interest. The large observed fractions of compact sources strongly suggests a dominant population of short-lived radio jets. Recently, used a combination of data from the LOFAR LoTSS survey and dynamical models, to show that the majority of radio AGNs are consistent with models in which the source lifetime distribution is log-uniform. applied self-consistent modelling to LOFAR observations of active, remnant and restarted radio sources in the HETDEX field to similarly infer a dominant short-lived population, consistent with feedback-regulated accretion. In line with these results, we similarly adopt here a log-uniform distribution of the active lifetimes between 3 and 300Myrs, and assume that the duty cycle is (on average) independent of the active lifetime. We note that the chosen distribution for these parameters makes only minimal difference to the results in this work. ### Jet kinetic powers The distribution of the jet kinetic powers fuelling the extended AGN emission is informed by observations of the luminosity function of powerful FR-IIs in the 3CRR, 6CE and 7CRS samples. fit the high-luminosity function for these objects at $151\rm\, MHz$ as a function of redshift. The general shape of their distribution is shown in the right-hand subplot of Figure [fig:luminfunctions], with the number density scaled to match the observed distribution at redshifts *z* = 0.1, 0.5, 1, 2 and 4. We generate our jet power distribution by converting their radio luminosity function into kinetic powers, first by using theory driven jet power–luminosity relationships; i.e. *Q* ∝ *L**ν*6/7, noting that environment and source age introduce scatter to this relation. The jet power corresponding to the turnover in their luminosity function is found by simulating mock catalogues (sampled every $0.2\rm\, dex$ in jet power, and including only active sources) for a range of possible turnovers and selecting the best match to the observed radio luminosity distribution. The resulting probability distribution for the kinetic jet power is given by $$n(Q) = n\_0 \left( \frac{Q}{Q\_\star} \right)^{-1.17\alpha\_\star} \exp\bigg[-\left( \frac{Q\_\star}{Q} \right)^{1.17} \bigg],$$ where *n*0 is a normalisation constant, the turnover in the luminosity function occurs at $Q\_\star = 10^{38.1}\rm\, W$, and the slope of the power law component of the luminosity function is *α*⋆ = 2.27. The shape of the probability distribution is independent of redshift, however the number density of extended AGNs increases towards higher redshifts. The radio luminosity functions generated from our mock catalogues (using the selected jet power distribution) are shown in the right-hand panel of Figure [fig:luminfunctions] for a range of redshifts; these are in good agreement with the observed luminosity functions. X-ray luminosity function ------------------------- The mock extended radio AGN catalogue, which has been calibrated to successfully describe the observed radio properties of powerful FR-IIs, can now be used to predict their brightness at X-ray wavelengths. The integrated luminosity is calculated for active sources by summing the synchrotron emissivity from the lobe, and the bremsstrahlung radiation from both the shocked gas shell and the ambient medium. The X-ray emission from the ambient medium is included in these calculations since in practice it may be hard to disentangle the emission from the AGN and environment (except in well resolved objects), however we also derive the X-ray luminosity function including only AGN related emission for comparison with previous studies. We expect the X-ray luminosity of most extended radio galaxies to be strongly correlated with the properties of the ambient medium since the brightness of the shocked gas shell is directly related to the mass of gas swept up as the lobe expands. The level of inverse-Compton emission from the lobe, meanwhile, has a more complicated relationship with the density profile of the ambient medium. The accretion disk of AGNs is also often bright at X-ray wavelengths; the spectrum of the accretion disk peaks at ultraviolet wavelengths but a sizeable number of thermal photons from the disk are inverse-Compton upscattered to X-ray energies by the hot corona surrounding the disk. Typical X-ray luminosities from accretion-related nuclear emission are of order 1043 to $10^{45}\rm\, erg \,s^{-1}$. Meanwhile, the highest energy shock-accelerated electrons emit synchrotron radiation at X-ray wavelengths both along the jet and at the terminal hotspot in active sources; these quickly fade in remnants due to radiative losses. The hotspots of typical FR-IIs can reach 1042 to $10^{43}\rm\, erg \,s^{-1}$ at X-ray wavelengths. The brightness of the core and hotspots is not related to the inverse-Compton upscattering of CMB photons and is thus independent of redshift, however it is hypothesised that a beamed inverse-Compton mechanism may operate in the jets. We therefore choose not to include the (rather uncertain) core emission in our luminosity function whilst analyses in subsequent sections will only investigate the detection of emission spatially resolved from the core. Finally, our modelling does not consider synchrotron self-Compton radiation (i.e. inverse-Compton upscattering of synchrotron photons); in Cygnus A (*z* = 0.0561), this mechanism contributes 70-80% of non-thermal X-ray emission in the lobes, however inverse-Compton upscattered CMB photons become dominant at higher redshifts (*z* > 0.4 for a Cygnus A-like source). The X-ray luminosity function derived for a typical *eRosita* survey frequency of $1\rm\, keV$ is shown in the central panel of Figure [fig:luminfunctions]. The brightness of the extended AGNs is comparable to, or at most a factor of a few brighter than (on average 30% brighter at *z* = 0.1), typical inactive X-ray clusters in the range $10^{41} < L\_{\rm X} < 10^{44}\rm\, erg\, s^{-1}$. The shape of the X-ray luminosity function also resembles that of the AGN halo mass function when including all contributors of extended X-ray emission, in particular at the lowest redshifts. As we show in Figure [fig:luminfractions], this is due to the dominant source of X-ray emission being bremsstrahlung radiation from the ambient medium and shocked gas shell for *z* ≤ 1. Objects with the highest luminosities are typically the highest jet power sources, observed at ages of a few tens of Myrs when inverse-Compton radiation from the lobe begins to increase significantly, contributing in excess of 90% of the total integrated X-ray luminosity from the AGN–host cluster system (see Figure [fig:luminfractions]). By contrast, the X-ray luminosity function at the highest redshifts (*z* > 1) is dominated by inverse-Compton emission for the majority of luminosities (and thus a large fraction of the extended AGN population). The stronger cosmic microwave background radiation at higher redshifts thus boosts the integrated luminosity well above that generated by the hot cluster gas (on average 155% brighter at *z* = 4), leading to an increased population of X-ray bright extended AGNs. X-ray source number densities ----------------------------- [fig:numberdensities] [fig:remnantdensities] We now use the mock catalogue to calculate the predicted number density of extended AGNs that would be detected using *LOFAR* and *eRosita*. The *LOFAR Two-metre Sky Survey* (LoTSS) has a median sensitivity of $71\rm\, \mu Jy\, beam^{-1}$ at an observing frequency of 120-168MHz, with approximately 6arcsec full-width half maximum (FWHM) across the synthesised beam. The expected $1\rm\, keV$ sensitivity of all-sky surveys using *eRosita* is $14\rm\, nJy\, beam^{-1}$ for their 15arcsec full-width half maximum beam. We smooth the RAiSE images with a circular gaussian filter, mimicking the approximate shape of the *LOFAR* and *eRosita* beams. Extended emission separated from the core by less than the angular resolution of the survey cannot be distinguished from a bright compact core; these pixels are flagged and removed from our mock images. Simulated extended AGNs with emission in at least one beam (i.e. gaussian filtered pixel) exceeding the surface brightness sensitivity limit are assumed to be detectable by *LOFAR* or *eRosita* as appropriate. These sources are by definition resolved, however we make a further classification that the extended AGNs are well resolved (i.e. we can make out the shape of the source) if they have at least five beams along their length. The number density of X-ray and radio detected active-phase, extended AGNs of FR-II morphology is shown in Figure [fig:numberdensities] for a range of source ages and jet kinetic powers. The overwhelming majority of low-redshift sources have extended emission resolvable from their cores in either X-ray and radio frequencies (blue or green shading); approximately half have sufficient resolution to determine the shape of their extended structures (green shading). Specifically, close to all of the active-phase FR-IIs in our catalogue are detectable using radio frequency observations (at *z* ≤ 1), whilst 41% can be seen at X-ray wavelengths at *z* = 0.1 and 2.6% at *z* = 0.5. At both these redshifts *eRosita* can uniquely detect (i.e. no radio detection) only 0.2% of active sources. This pattern continues out to higher redshifts where most active sources are detectable at radio frequencies whilst a rapidly decreasing fraction can be seen at X-ray wavelengths. Importantly, at *z* = 1, only young and relatively compact objects are detectable using X-rays in the active phase; the source shape cannot be determined in any objects. The inverse-Compton emission arising from the overpressured lobes of these extended AGNs is related to the density of the ambient medium and these fade rapidly as they expand into lower-density environments such as the intergalactic medium in low-mass haloes. Moreover, extended AGN emission is not detectable at redshifts *z* ≥ 2 for the *eRosita* surface brightness sensitivity limit investigated (but see Section [sec:Remnant density at high-redshifts]). The number density of extended AGNs with FR-II morphology detectable with *LOFAR* or *eRosita* are summarised in Table [tab:detections] as a function of redshift. We scale the number densities in our catalogue based on the observed high-luminosity function of. [tab:detections] ccccccc && &&*eRosita*&*LOFAR*&dual detection&*eRosita* only&*LOFAR* only &0.1&-9.17&-8.78&-9.17&-11.40&-9.00 &0.5&-9.46&-7.88&-9.49&-10.61&-7.89 &1&-11.62&-7.07&-11.63&-13.12&-7.07 &0.1&-9.38&-9.07&-9.56&-9.86&-9.24 &0.5&-9.59&-8.13&-9.77&-10.07&-8.14 &1&-11.80&-7.54&-11.95&-12.32&-7.54 &0.1&-4.06&–&–&-4.06&– &0.5&-10.57&–&–&-10.57&– &1&-29.70&–&–&-29.70&– The number density of remnant FR-IIs detectable with X-ray and radio frequencies is similarly shown in Figure [fig:remnantdensities]. The maximum age of remnants detectable at radio frequencies reduces quickly with increasing redshift (by more than a factor of two over the range *z* = 0.1-1) due to the increasing inverse-Compton losses at high redshift. Crucially, the inverse-Compton emission remains detectable at X-ray wavelengths long after the synchrotron radiation at radio frequencies ceases. This leads to a large population of remnant extended AGNs which are detectable with *eRosita* but cannot be seen at radio frequencies; these sources are typically aged between $100\rm\, Myrs$ and $1\rm\, Gyr$ (at the lowest redshifts). Specifically, *eRosita* uniquely detects 14% of remnants at *z* = 0.1 (based on non-core emission), decreasing to 1.1% and 0.002% at redshifts *z* = 0.5 and 1 respectively (i.e. a much greater fraction of remnants can only be detected using X-ray surveys than for active sources). We note in Section [sec:Remnant density at high-redshifts] that these number densities increase markedly for the higher redshifts with just a modest improvement in sensitivity. These exclusively X-ray detected remnants have more than five beams across the length of the lobe in 98% of sources at redshift *z* = 0.1; this reduces to 53% at *z* = 0.5 whilst no high-redshift (*z* ≥ 1) remnants are observed with multiple beams across the source. The remnant source number densities are summarised in Table [tab:detections]. ### False positive detections Mock extended AGNs in our catalogue are only detected if emission from either the lobe or shocked gas shell is present above the survey surface brightness sensitivity limit. However, bremsstrahlung radiation from the brightest X-ray clusters may also be detectable using *eRosita* (in a single beam) without any density enhancement from AGN activity. At redshift, *z* = 0.1, simulated X-ray clusters more massive than $10^{13.43}\rm\, M\_\odot$ (derived analytically for RAiSE clusters[3](#fn3)) emit sufficiently high levels of bremsstrahlung radiation for their ambient medium to be detected and resolved from an X-ray nucleus. The critical cluster mass increases to $10^{14.94}\rm\, M\_\odot$ at *z* = 0.5 whilst no realistic mass haloes are predicted to have their ambient medium detected (in a single beam) at redshifts *z* ≥ 1; a small fraction of observed X-ray clusters may of course be atypically gas rich compared to our mock clusters. The number density of ‘false positive’ detections expected due to the misclassification of cluster gas is derived by integrating the cluster mass function above the critical mass at each redshift (see Table [tab:detections]). The number density of X-ray clusters whose hot gas is detectable using *eRosita* far exceeds the expected density of extended AGNs at low-redshift (*z* ∼ 0.1). The presence of non-core emission at X-ray wavelengths therefore cannot be used to suggest AGN activity at these redshifts; however, 98% of exclusively X-ray detected remnants are well-resolved (i.e. have more than five beams across their lobes) and so can potentially be classified based on the shape of their X-ray brightness distribution. The same behaviour is seen at moderate redshifts though ‘false positives’ and actual AGN detections are expected to occur with equal likelihood. By contrast, at high-redshifts (*z* ≥ 1) the number density of X-ray clusters detectable with *eRosita* for a typical survey sensitivity is over seventeen orders of magnitude lower than the expected detection rate for extended AGN emission. Any non-core X-ray emission detected at high-redshifts can therefore be directly attributed to extended AGNs, and not the ambient medium. ### Remnant density at high-redshifts The number density of extended AGNs of FR-II morphology detectable at high-redshifts using an *eRosita*-like survey is investigated for a range of improved surface brightness sensitivities. In the previous section, we found none of the mock extended AGNs at redshift *z* = 2 and 4 could be detected using the assumed $14\rm\, nJy\, beam^{-1}$ sensitivity; this is reduced by up to a factor of 100 down to $0.14\rm\, nJy\, beam^{-1}$ for the same resolution ($15\rm\, arcsec$ FWHM). The expected number density of remnants detectable (and mostly uniquely detectable) by this theoretical increased sensitivity *eRosita* survey is shown in Figure [fig:remnantdensity] for redshifts *z* = 1, 2 and 4 as a function of the surface brightness sensitivity. Remnants can begin to be detected at *z* = 2 for a modest factor of 1.5 increase in sensitivity and at redshift *z* = 4 for an order of magnitude improvement. The rate of false positive detections from the ambient medium is also included in Figure [fig:remnantdensity]; the likelihood of detecting AGN emission and false positives converges at *z* = 1 for a factor of 20 increase in sensitivity, but at higher redshifts the false positive rate remains small for at least a three orders of magnitude improvement in surface brightness sensitivity. The number density of remnants detected at *z* = 1 meanwhile can increase by over four orders of magnitude (before converging to the false positive rate) reaching a level comparable to that found by using radio frequencies; *eRosita* is therefore highly complementary to *LOFAR*. The number density of radio-detected remnants does not increase with improved sensitivity as any objects emitting appreciable synchrotron emission are already detectable. Importantly, the long lasting inverse-Compton emission probes a different population of remnants that has minimal overlap with the radio-selected sample (Table [tab:detections]). The number density of X-ray detected AGNs similarly coverges to the false positive rate with a comparable or greater number of detections to that obtained using radio frequencies; i.e. factor of five and 1200 more remnants at *z* = 2 and 4 respectively. The detection of non-core X-ray emission at high-redshift therefore presents a viable technique to identify previous episodes of AGN activity with X-ray telescopes presently in service; this would be greatly enhanced with only a modest increase in sensitivity. Our work predicts that X-ray wavelengths are capable of detecting at least a factor of ten more remnants than at radio frequencies for redshifts *z* > 2.2, increasing to a factor of 100 for redshifts *z* > 3.1. [fig:remnantdensity] Similar results are obtained for extended AGNs in the active phase, however these are more readily observed at radio frequencies and thus are not considered likely targets for such surveys. CONCLUSIONS =========== We have extended the successful *Radio AGN in Semi-analytic Environments* lobe expansion and evolution model to X-ray wavelengths. Specifically, this improved model considers: (1) inverse-Compton upscattering of cosmic microwave background radiation by the synchrotron-emitting electrons in the lobe; (2) the dynamics of the shocked gas shell and the associated bremsstrahlung emission from this dense gas; and (3) emission from the ambient medium surrounding the extended AGN. We construct X-ray surface brightness maps of mock extended AGNs with type-II morphology to understand the relative importance of these radiative mechanisms; in particular, to determine what fraction of the population may be detectable (and correctly recognised as extended AGNs) at X-ray wavelengths. X-ray and radio-frequency surface brightness maps are derived for the technical characteristics of the *extended Roentgen Survey with an Imaging Telescope Array* (eRosita) and the *Low-Frequency Array* (LOFAR) instruments respectively. We consider the temporal evolution of the surface brightness in the lobe (inverse-Compton or synchrotron), shocked gas shell (bremsstrahlung) and ambient medium (bremsstrahlung) along the jet axis for typical sources located at increasing redshift from *z* = 0.1 to 2. The inverse-Compton emission from the lobe is dominant over the bremsstrahlung radiation from both the shocked gas shell and ambient medium in poor clusters ($10^{12.5}\rm\, M\_\odot$) at all sources ages. By contrast, the shocked gas shell is initially brighter than the lobe (for between 20 and $100\rm\, Myrs$) for sources expanding into denser environments ($10^{13.5}\rm\, M\_\odot$); the shocked gas shell stays brighter than the lobe for longer with either denser environments or higher jet powers. The X-ray surface brightness from the lobe and shocked gas shell remain detectable above the ambient medium (at the 5*σ* level) for another 65-$115\rm\, Myrs$ after the jet switches off after $30\rm\, Myrs$ at redshifts *z* ≥ 0.5. By contrast, synchrotron emission ceases after only 10-$70\rm\, Myrs$ at radio frequencies. We find that although both synchrotron and inverse-Compton emission fade more rapidly at higher redshift, synchrotron radiation is much more sharply curtailed, leading to a sizable high-redshift population of remnant radio galaxies emitting exclusively through the inverse-Compton mechanism. We constructed an integrated X-ray luminosity function for extended AGNs by generating a mock population of FR-IIs with jet powers, active lifetimes and host cluster environments based on observational constraints. The integrated X-ray luminosity of most extended AGNs at low redshifts (*z* ≤ 1) is found to be strongly correlated with the properties of the ambient medium. In other words, the bremsstrahlung radiation from either the ambient medium or the shocked gas shell (whose density is strongly correlated with the host environment) is the dominant source of X-ray emission. At these low redshifts, only a small population of AGNs with the highest jet powers and moderate ages of a few tens of Myrs can have inverse-Compton emission contribute in excess of 90% of the total integrated X-ray luminosity from the AGN–host cluster system. By contrast, the X-ray luminosity function at higher redshifts (*z* > 1) is dominated by inverse-Compton emission for the vast majority of the extended AGN population. The stronger microwave background radiation at these high redshifts boosts the lobe contribution to the integrated luminosity well above that generated by the hot cluster gas, leading to an increased population of X-ray bright extended AGNs. We used our mock extended AGN catalogue to explore how many new objects could be detected using both existing and increased sensitivity X-ray observations. We find that most active FR-II sources at redshifts *z* ≤ 1 can be detected at radio frequencies for the sensitivity of the *LOFAR Two-metre Sky Survey* (LoTSS). However, only a small fraction of active or remnant sources can be seen at X-ray wavelengths for typical *eRosita* sensitivity, when excluding core emission. No active extended AGNs are detectable at X-ray wavelengths but not at radio frequencies. By contrast, *eRosita* will find 14% of remnants at *z* = 0.1 which are not visible to *LOFAR*, decreasing to 1.1% and 0.002% at redshifts *z* = 0.5 and 1 respectively. Meanwhile, the surface brightness of the bremsstrahlung radiation from the ambient medium of any realistic mass haloes is expected to become undetectable beyond redshift *z* ≥ 1; i.e. any non-core X-ray detection can be attributed to extended AGN activity. We consider the effectiveness of radio-frequency and X-ray surveys at detecting remnants at these high-redshifts for greatly enhanced surface brightness sensitivity. The number density of X-ray detected remnants at redshift *z* = 1 becomes comparable to the number of radio-frequency detections. However, our work predicts that at least a factor of ten more remnants would be detected using X-ray wavelengths (compared to radio frequencies) at redshifts *z* > 2.2, increasing to a factor of 100 for redshifts *z* > 3.1. Future high-sensitivity surveys using *eRosita* or subsequent X-ray telescopes may therefore prove the best tool for probing the earliest generations of powerful radio galaxies. ##### We thank an anonymous referee for helpful and constructive comments that have improved our manuscript. Alexander, P. 2002, MNRAS, 335, 610 Best, P. N., Kauffmann, G., Heckman, T. M., et al. 2005, MNRAS, 362, 25 Blundell, K. M., & Rawlings, S. 1999, Nature, 399, 6734 Blundell, K. M., Rawlings, S., & Willott, C. J. 1999, AJ, 117, 677 Bower, R. G., Benson, A. J., Malbon, R., et al. 2006, MNRAS, 370, 645 Brienza, M., Godfrey, L. E. H., Morganti, R., et al. 2017, A&A, 606A, 98 Croton, D. J., Springel, V., White, S. D. M., et al. 2006, MNRAS, 365, 11 Croton, D. J., Stevens, A. R. H., Tonini, C., et al. 2016, ApJS, 222, 22 de Vries, M. N., Wise, M. W., Huppenkothen, D., et al. 2018, MNRAS, 478, 4010 Fabian, A. C. 2012, ARA&A, 50, 455 Fabian, A. C., Chapman, S., Casey, C. M., Bauer, F., & Blundell, K. M. 2009, MNRAS, 395, L67 Fabian, A. C., Sanders, J. S., Allen, S. W., et al. 2003, MNRAS, 344, L43 Fabian, A. C., Sanders, J. S., Taylor, G. B., et al. 2006, MNRAS, 366, 417 Fabian, A. C., Walker, S. A., Celotti, A., et al. 2014, MNRAS, 442, L81 Fanaroff, B. L., Riley, J. M. 1974, MNRAS 167, 31 Forman, W., Nulsen, P., Heinz, S., et al. 2005, ApJ, 635, 894 Gaibler, V., Khochfar, S., Krause, M., Silk, J., 2012, MNRAS, 425, 438 Gaspari, M., & Sadowski, A. 2017, ApJ, 837, 149 Ghisellini, G., Celotti, A., Tavecchio, F., Haardt, F., & Sbarrato, T. 2014, MNRAS, 438, 2694 Girardi, M., & Giuricin, G. 2000, ApJ, 540, 45 Godfrey, L. E. H., & Shabala, S. S. 2013, ApJ, 767, 12 Gültekin, K., Richstone, D. O., Gebhardt, K., et al. 2009, ApJ, 698, 198 Hardcastle, M. J. 2013, MNRAS, 433, 3364 Hardcastle, M. J. 2018, MNRAS, 475, 2768 Hardcastle M. J., Birkinshaw M., & Worrall D. M., 1998, MNRAS, 294, 615 Hardcastle, M. J. & Krause, M. G. H. 2013, MNRAS, 430, 174 Hardcastle, M. J. & Krause, M. G. H. 2014, MNRAS, 443, 1482 Hardcastle, M. J., Williams, W. L., Best, P. N., et al. 2019, A&A, 622, A12 Häring, N., & Rix, H-W. 2004, ApJ, 604, L89 Harris, D. E., Nulsen, P. E. J., Ponman, T. J., et al. 2000, ApJ, 530, L81 Kaiser, C. R., & Alexander, P. 1997, MNRAS, 286, 215 Kaiser, C. R., & Best, P. N. 2007, MNRAS, 381, 1548 Kaiser, C. R., Dennett-Thorpe, J., & Alexander, P. 1997, MNRAS, 292, 723 Konigl, A. 1981, ApJ, 243, 700 Lobanov, A. P. 1998, A&A, 330, 79 Longair, M. S. 2010, High Energy Astrophysics, Cambridge University Press Marshall, H. L., Gelbord, J. M., Worrall, D. M., et al. 2018, ApJ, 856, 66 Magorrian, J., Tremaine, S., Richstone, D., et al. 1998, AJ, 115, 2285 Mahatma, V. H., Hardcastle, M. J., Williams, W. L., et al. 2018, MNRAS, 475, 4557 Mahatma, V. H., Hardcastle, M. J., Williams, W. L., et al. 2019, A&A, 622, A13 McNamara, B. R., & Nulsen, P. E. J. 2007, ARA&A, 45, 117 McNamara, B. R. & Nulsen, P. E. J. 2012, New Journal of Physics, 14 Merloni, A., Predehl, P., Becker, W., et al. 2012, ArXiv e-prints, arXiv:1209.3114 Mignone, A., Bodo, G., Massaglia, S., et al. 2007, ApJS, 170, 228 Mingo, B., Hardcastle, J. H., Croston, D., et al. 2014, MNRAS, 440, 269 Miraghaei, H., & Best, P. N. 2017, MNRAS, 466, 4346 Mittal, R., Hudson, D. S., Reiprich, T. H., & Clarke, T. 2009, A&A, 501, 835 Mocz P., Fabian A. C., & Blundell K. M., 2011, MNRAS, 413, 1107 Morganti, R., Fogasy, J., Paragi, Z., Oosterloo, T., & Orienti, M., 2013, Science, 341, 1082 Nath B. B., 2010, MNRAS, 407, 1998 Nesvadba, N. P. H., Lehnert, M. D., De Breuck, C., Gilbert, A. M., & van Breugel, W., 2008, A&A, 491, 407 Novak, G. S., Ostriker, J. P., & Ciotti, L. 2011, ApJ, 737, 26 Page, M., Symeonidis, M., Vieira, J., et al. 2012, Nature, 485, 213 Perlman, E. S., Georganopoulos, M., May, E. M., & Kazanas, D. 2010, ApJ, 708, 1 Planck Collaboration 2016, A&A, 594, A13 Pope, E. C. D., Mendel, T., & Shabala, S. S. 2012, MNRAS, 419, 50 Raouf, M., Shabala, S. S., Croton, D. J., Khosroshahi, H. G., Bernyk, M. 2017, MNRAS, 471, 658 Rafferty, D. A., McNamara, B. R., Nulsen, P. E. J, & Wise, M. W. 2006, ApJ, 652, 216 Reiprich, T. H., & Böhringer, H. 2002, ApJ, 567, 716 Rodman, P. E., Turner, R. J., Shabala, S. S., et al. 2019, MNRAS, 482, 5625 Rybicki, G. B., & Lightman, A. P. 1979, Radiative processes in astrophysics, Wiley-Interscience, Sabater, J., Best. P. N., Hardcastle, M. J., et al. 2019, A&A, 622A, 17 Shabala, S. S., & Alexander, P. 2009, ApJ, 699, 525 Shabala, S. S., Ash, A. A., Alexander, P., Riley, J. M. 2008, MNRAS, 388, 625 Shabala, S. S., & Godfrey, L. E. H. 2013, ApJ, 769, 129 Shabala, S. S., Jurlin, N., Brienza, M., Morganti, R., et al., submitted to MNRAS Shabala, S. S., Santoso, J. S., & Godfrey, L. E. H. 2012, ApJ, 756, 161 Shankar, F., Weinberg, D., & Miralda-Escudé, J. 2009, ApJ, 690, 20 Shimwell, T. W., Tasse, C., Hardcastle, M. J., et al. 2019, A&A, 622, A1 Smith, R. K., Brickhouse, N. S., Liedahl, D. A., & Raymond, J. C. 2001, ApJ, 556, L91 Turner R. J. 2018, MNRAS, 476, 2522 Turner, R. J., Rogers, J. G., Shabala, S. S., & Krause, M. G. H. 2018a, MNRAS, 473, 4179 Turner, R. J., & Shabala, S. S. 2015, ApJ, 806, 59 Turner, R. J., & Shabala, S. S. 2019, MNRAS, 486, 1225 Turner, R. J., Shabala, S. S., & Krause, M. G. H. 2018b, MNRAS, 474, 3361 Vikhlinin, A., Kravtsov, A., Forman, W., et al. 2006, ApJ, 640, 691 Vogelsberger, M., Genel, S., Springel, V., et al. 2014, MNRAS, 444, 1518 Willott, C. J., Rawlings, S., Blundell, K. M., & Lacy, M. 1999, MNRAS, 309, 1017 Willott, C. J., Rawlings, S., Blundell, K. M., Lacy, M., & Eales, S. A. 2001, MNRAS, 322, 536 Yates, P. M., Shabala, S. S., & Krause, M. G. H. 2018, MNRAS, 480, 5286 --- 1. Email: [email protected][↩](#fnref1) 2. RAiSE implicitly assumes a single magnetic field value throughout the lobes; although more complex configurations almost certainly exist these are not well constrained at present, and our approach is standard for lobe modelling.[↩](#fnref2) 3. RAiSE simulates cluster density profiles based on the observed profiles of, taking the scaling gas mass and virial radius from the semi-analytic galaxy evolution model of. The gas temperature is calculated using an observed relationship with the halo mass.[↩](#fnref3)
arxiv_0000355
velocity during the period Jan.31-Feb. 6 determined from all measurable absorption lines was -1810  ±  40 km s− 1. Outflow velocities of the order of 1500 - 2200 kms− 1 were also observed in previous outbursts. Adams and Joy (1920) reported the presence of ``dark" components of radial velocity up to -2100 km s− 1 a few days after maximum. Joy (1945) observed expansion velocities of 1700 kms− 1 in forbidden emission lines in spectra taken some months after outburst. Similar values were also found by Herbig (1945) and reported by Payne-Gaposchkin (1957). The ``principal absorption system“ is generally associated with the bulk of the mass ejected during outburst, and for most novae has a velocity close to that of the ``nebular system” as determined from the width of the nebular emission lines (Payne-Gaposchkin 1957, Mc Laughlin 1956, Pottasch 1959). The v*e**x**p* value reported by Joy (1945) referred to observations during the nebular phase, three-four months after outburst. That the emission lines in the nebular spectrum showed velocities of the same order as those deduced from the absorption lines of the principal spectrum is an indication of near constant expansion. We note that in the literature on T Pyx little attention has been paid to the fact that for almost three months after the initial halt T Pyx showed a strong continuum with the presence of emission and absorption lines of similar strength. The persistence for almost three months (about t3) of displaced absorption components in the H (and FeII) lines, which are observable by combining the spectroscopic observations of Catchpole (1969) and Chincarini and Rosino (1969) for the 1966 outburst, indicates an optically thick phase of similar duration. In this respect, we recall that, before the outburst of 1966-1967, Mc Laughlin (1965) noted that T Pyx was exceptional among RNe since its photometric and spectroscopic behavior closely resembled that of a typical nova both close to maximum (where it remained for several weeks to within about a magnitude) and in the nebular stage. The mass of the shell ejected in the optically thick phase ---------------------------------------------------------- The similarity between the spectroscopic and photometric characteristics of the outbursts of T Pyx and those of CNe, which allegedly eject about 10− 4 - 10− 5 M⊙, suggests in itself that during outburst T Pyx expelled a shell of comparable mass. Classical novae undergo an optically thick phase during which they resemble each other, a fact that can be explained by the same mechanism (i.e. flux redistribution) producing the spectrophotometric light curve (Shore 1998, 2008). To achieve flux redistribution, the material must reach column densities of the order of 1023-1024 cm− 2, which corresponds to masses of about 10− 4 - 10− 5 M⊙. An optically thick stage also characterized the outbursts of T Pyx, as can be directly inferred from the lengthy period of time during which the optical magnitude was close to its maximum value, with t3  ∼  90*d*, and from the presence of absorption lines of HI and FeII, which lasted for at least 80 days. From the duration of the optically thick phase (associated to t2) and the observed v*e**x**p*, using simple assumptions, we can estimate the mass of the shell ejected during outburst. The outer radius of the shell can be estimated from the observed expansion velocity (v*e**x**p*  ∼ 1500 km s− 1) and the time elapsed from outburst, assuming continuous ejection. This assumption is justified by the persistence of displaced absorption components with similar equivalent widths. The shell radius is: *R**e**j* ∼ 7.7 × 1014*c**m* ∼ 1.1 × 104*R*⊙. The corresponding shell volume is V ∼ 1.84  ×  1045 cm3. These values are conservative because the terminal velocity was probably higher. To reproduce the optically thick stage recognized from the presence and persistence of the absorption lines, the column density must be of the order of 1023 cm− 2. Therefore, the average density in the shell must be close to 108 - 109 cm− 3, (we note that the density should scale as R− 3). This density value agrees with the spectroscopic behavior and the presence of permitted emission lines only. If we assume that the ejecta are homogeneous and consist of ionized hydrogen, the mass might be estimated as *M**H* = *N**e**m**H**V* ∼ 3.1 × 1029*g* = 1.5 × 10− 4*M*⊙. This value is probably an upper limit because of the assumptions of continuous ejection and homogeneous shell. In an alternative approach, following Williams (1994), one can estimate the hydrogen column density produced by an expanding shell of mass 1.0  ×  10− 4 M⊙ : *N**H* ⋅ *R* = 3.0 × 1052 ⋅ *R*− 2 [*c**m*− 2]. For T Pyx at day 60 (= t2), we find that R2= 5.8  ×  1029 [cm2], and N*H*  ⋅ R  ∼  5.2  ×  1022 [cm− 2]. A mass of the ejecta higher than 10− 4 is therefore required to produce an optically thick stage until day 60. It is well established that in novae, the mass of the ejected envelope is directly correlated with the optical decay time t2 or t3 (Livio, 1994). Therefore, an independent estimate of the ejected shell mass can be derived from the relation: *l**o**g**M**e**j* = 0.274 ± 0.197 ⋅ *l**o**g**t*2 − 4.355 ± 0.283 (Della Valle et al. 2002). Even after considering the large uncertainties in this relationship, a t2=62*d* implies a mass for the envelope M*e**j*  ∼  10− 4 M⊙, which is similar to the mass ejected by classical novae. The data leading to Eq. [eq:dellavalle] suffer from a large scatter. We suspect that the ejecta expansion velocity plays also an important role and should be included in the relation. Shore (2002, 2008) suggested an approximate scaling relation for the optically thick stage: $$M\_{ej} \sim 6.0\ \times 10^{-7} \ \epsilon \ N\_{H,24} \ V\_3^2 \ t\_3^2 \ M\_{\odot},$$ where V3 is the outflow velocity in 103 km s− 1, and *ε* is the filling factor that, for this stage, can be assumed to be of the order of 0.1. For t3=90, f=0.1, N*H*, 24=0.1-1.0, and V3=1.5, the derived values for M*e**j* are in the range 1.5  ×  10− 4 - 1.5  ×  10− 3 M⊙. Finally, the ejected mass can be estimated using the following scaling law from Cassatella et al. (2005), which depends on the approximate assumption that the filling factor in novae ejecta is the same as in V1668 Cyg: *M**e**j* ∼ 0.044 ⋅ *M*⊙/*v**e**x**p*,  where *v**e**x**p* is in km s− 1 and the constant is set to the values of the ejected mass and the expansion velocity of V1668 Cyg (Stickland et al. 1981). Using *v**e**x**p*=1500 km s− 1, one finds that for T Pyx, *M**e**j*  ∼  2.93  ×  10− 5 *M*⊙. We recall that Kato and Hachisu (1991), from their models of steady-state winds for a nova with M1=1.33 X=0.5 and Z=0.02 and the observed t3, suggested the ejection of a massive envelope with M*e**j* of about 10− 5 M⊙ in a single outburst. All the previous results agree with the considerations of Shore (1998), who pointed out that, for a typical ejection velocity of about -2000 kms− 1, a nova with an optical decline time longer than a week must eject a mass higher than 10− 5 M⊙. We also recall the fact that in classical novae close to maximum, the Balmer lines develop P-Cygni profiles, which is a clear indication that a significant amount of material was ejected during the outburst (Starrfield, 1993). Therefore, all quantitative methods and the qualitative consideration of the photometric and spectral behavior of T Pyx during the outbursts indicate the presence of a massive envelope with M*e**j*  ∼  10− 4 -10− 5 M⊙. The discrepancy between the mass of the thick shell and the ignition mass ========================================================================= The ejection of a massive shell during the early (optically thick) outburst phase contrast significantly with the results of the UV + optical observations during quiescence and the theoretical requirements for M*i**g**n*, which imply a $\dot{M}\_{pre-OB}$  ∼  2.2  ×  10− 8 and a total mass for the accreted shell M*a**c**c**r* of about  ∼  5.0  ×  10− 7 M⊙ (see Sect. 8). It is also in contrast with the conclusions of Sect. 9 that indicated that the closest agreement between the grid models and the observed properties of the system during outburst and Q corresponds to a model with $\dot{M}$  ∼  3.0  ×  10− 8 M⊙ yr− 1 and M*i**g**n*  ≤  1.3  ×  10− 6 M⊙ (Table 6). During outburst, T Pyx has apparently ejected far more material than it has accreted. Studies of classical novae containing a massive WD indicated that these objects eject apparently more material than theoretically predicted (Starrfield et al. 1998a; Starrfield et al. 1998b; Vanlandingham et al. 1996; Shore, 1998). These authors emphasized the significant discrepancy between the observed mass of the ejecta and the predicted critical mass of accreted nova envelopes for massive WDs (M*W**D*  ≥  1.25 M⊙), the mass of the observed shell being one order of magnitude (or more) higher than that predicted by the models. For these CNe one could attribute the discrepancy to some inadequacy in the TNR models or in the methods to determine the nebular mass. In the case of T Pyx, the situation, however, differs because there is a serious mismatch between the shell mass indicated by the optical *observations* during outburst (M*e**j*  ∼  10− 4 10− 5 M⊙) and that determined by the UV and optical *observations* during quiescence (which give $\dot{M}\_{pre-OB}$  ∼  2.2  ×  10− 8 M⊙ yr− 1 and therefore M*a**c**c**r*  ∼  M*i**g**n* = 5.0  ×  10− 7 M⊙). Therefore, the mass ejected during outburst is about a factor of 100 higher than both the theoretical ignition mass M*i**g**n* and the total mass accreted before outburst, M*a**c**c**r*. We note that using the post 1966 value of $\dot{M}$ instead of that inferred for the pre-outburst interval would produce an even larger discrepancy. The role of the distance ------------------------ Although M*e**j* does not depend on distance, M*a**c**c**r* does (due to its dependence on both L*d**i**s**k* and $\dot{M}\_{pre-OB}$), and therefore the mismatch between M*e**j* and M*a**c**c**r* depends crucially on the assumed distance. The uncertainty in the adopted distance was found in Sect. 2 to be of the order of 10 percent. However, we note, that even the adoption of an unlikely distance of, say, 10,000 pc (at about 20 *σ* from the estimated value) would only partially alleviate this inconsistency, and at the expense of an uncomfortably high mass accretion rate, well within the range for the onset of steady burning. This would imply characteristic temperatures and luminosities that are not observed (see also Sect. 14 for a further discussion). A larger distance would also necessarily imply that T Pyx was super-Eddington at maximum, a circumstance that appears unlikely due to its slow photometric and spectroscopic developments during outburst. A lower distance would correspond to a lower values of L*d**i**s**k*, and hence $\dot{M}\_{pre-OB}$ and M*a**c**c**r*, to values that are theoretically incompatible with the occurrence of outbursts with an average interval of 22 years. It would also exacerbate the discrepancy between the low value of M*a**c**c**r* obtained from the UV observations (and the models) and the apparently high mass of the ejecta, as inferred from the behavior during outburst. Therefore, there is not much leeway to invoke a different distance to explain the discordance. The nebula revisited ===================== The nebula surrounding T Pyx has been the target of several spectroscopic and imaging observations. Duerbeck und Seitter (1979) first reported the presence of a strong nebulosity around T Pyx, with radius r  ∼  5", whose origin was tentatively attributed to the 1966 outburst and whose strength was described as unusual. By the assumption of an outburst expansion velocity of -900 kms− 1, a low distance ( ∼  600 pcs) was derived. Williams (1982) obtained spectral scans of the northern portion of the nebula of T Pyx. The spectrum was similar to that of as a typical PN, probably photoionized by radiation from the hot remnant, and lacked the strong CNO enhancements characteristic of the ejecta of classical novae. Comparing images acquired in 1979.0 and 1982.9, Seitter (1987) found that the nebula did not increase in size during that time interval. Shara et al. (1989) from deep narrowband CCD images confirmed the faint, extended H*α* + NII halo (twice as large as the inner nebula), first reported by Duerbeck (1987). A smooth, small [OIII] nebula with r  ∼  2“ was also found. Shara et al. (1989) also compared the relative sizes of the main nebula with r  ∼  5” in 1985 and in 1979 but failed as well to find any detectable expansion during the 6 year interval, confirming the finding of Seitter (1987). High resolution imagery data from HST was obtained by Shara et al. (1997). The nebula was resolved into more than two thousand individual knots, and a comparison between images taken at four epochs indicated that these individual knots retained a similar pattern, without any evidence of expansion. These data confirm the apparent stationarity in the 10" diameter nebula suggested by previous observations. Shara et al. (1997) found an upper limit of 40 ⋅ (1500/d) (km s− 1) for the expansion velocity of the knots. They also detected nine distinct peaks in the brightness distribution, an indication that a multiple nebula model was required. Many studies, disappointingly inconclusive, addressed the problem of the mass of the nebula. In this respect, it should be noted that large uncertainties are generally associated with estimates of the mass of novae ejecta, since the mass estimate depends critically on quantities that are not reliably measured, e.g.: distance, electron density, ionization structure in the nebula, geometry, and filling factor. Therefore, it is unsurprising that a range of masses for the nebula of T Pyx has been proposed in the literature. From the nebular H*β* intensity measured by Williams (1982), WLTO (1987) obtained a lower limit to M*n**e**b* of 10− 6 M⊙, while Shara (1989), using the H*β* intensity with the requirement *ε*  ≤ 1 for the filling factor derived an upper limit of 1.0  ×  10− 4 M⊙. From the intensity of the H*α* and [NII] lines, Seitter (1987) found a mass close to 8.0  ×  10− 5 M⊙. From the H*α* flux and considerations based on HST imagery, Shara et al. (1997) obtained 1.3  ×  10− 6 M⊙ to be the most reliable estimate for the nebula mass (with an assumed distance of 1500 pc), the electron density being, allegedly, the main uncertainty factor. We add one more estimate for the mass of the nebula, based on the H*β* flux obtained by Williams (1982) scaled to the entire nebula, after correction for the new reddening and distance. We obtain : F*β*  ∼  4.24  ×  10− 14 erg cm− 2 s− 1 and L*β* = 6.22  ×  1031 erg s− 1. with a relative error of about 25 percent, arising from the uncertainties in the distance and the flux. By combining the two common relations: *L**β* = 1.24  ×  10− 25 *N**e* ⋅ *N*+ ⋅ (4/3) *π* *R**n**e**b*3 ⋅ *ε* and *M**n**e**b* = *μ* ⋅ *N*+ ⋅ *m**H* ⋅ (4/3) *π* *R**n**e**b*3 ⋅ *ε*,  (where *μ*=1.4 is the mean atomic weight and *ε* is the filling factor), one finds that the mass of the nebula (independent of *ε* and *R**n**e**b*) is $$M\_{neb}= 18.67\; {L\_{\beta} \over N\_e} \; g = {0.584 \over N\_e} \;M\_{\odot}.$$ N*e* is inaccurately determined because, as noted by Williams (1982), the spectrum is too poor in number of lines to constrain the physical parameters of the nebula accurately. However, from the presence of a quite strong emission feature near 3722 Å, attributable to [OII] 3726.1 and 3728.8 one can estimate that the nebula density is less than 3.0  ×  103 cm− 3, since these two forbidden lines have critical density log N*e**c**r**i**t* = 3.5 and 2.8, respectively. If we assume that N*e*  ∼  103 cm− 3, a value also adopted by Shara et al. (1997) from other considerations, we find that *M**n**e**b*  ∼  5.84  ×  10− 4 M⊙. A value of this order for the total mass of the nebula, which has apparently increased from the contribution of several successive outbursts, agrees with the ejection during outburst of a massive shell ( ∼  10− 5 M⊙) as suggested in Sect. 10.2. The fact that the nebula of T Pyx is still clearly observed despite its large distance seems hardly compatible with a mass of 10− 6 M⊙ derived in previous studies. In the most well studied CNe, which on average are closer in distance than T Pyx, the ejected nebula has, at best, a similar strength and/or is barely evident a few years after outburst. We think that the peculiar strength of the nebula of T Pyx may be explained by the fact that most of the gas ejected in successive outbursts has accumulated into a nearly stationary envelope that is strongly irradiated by a more than average luminous UV central source. Alternatively, one can guess that the observed nebula of T Pyx was produced in a peculiar event and is not associated with the recorded and/or previous outbursts. Its apparent stationarity and the lack of changes over a timescale of about fifteen years (Shara et al., 1997) supports this interpretation. Williams (1982) noted that the spectrum of the nebula around T Pyx is almost similar to typical planetary nebula, of approximately solar composition. We emphasize that the optically thick shell discussed in Sects. 10.1 and 10.2 was observed only spectroscopically during the outburst phases, and that there is no definite, direct link with the extended nebula. Simple calculations indicate that, if d=3500 pc, the angular radius of the shell, after ten years of constant expansion at 1500 km s− 1, would be less than 1". The filling factor ------------------- If the radius of the nebula *R**n**e**b* is known, from the observed angular radius r ( ≤  10") and the distance, one can obtain the filling factor *ε* with the help of eq. [eq:ellebeta]. After insertion of the values for L*β* = 6.22  ×  1031 erg s− 1, R*n**e**b*  ∼  5.2  ×  1017 cm, and N*e*  ∼  1.1 N+, one obtains that *ε*  ⋅  N*e*2  ∼  1062.7. Therefore, for N*e*  ∼ 103 cm− 3, *ε* is close to 10− 3. A filling factor of  ∼  10− 3 can be obtained independently from the relation of Harrison and Stringfellow (1994): *ε* ∼ *N**c**l**V**c**l*/*V**n**e**b* = 8*N**c**l*− 1/2*γ*3/2,  where *γ* is the fraction of the spherical nebula intercepted by the clumps. From the figures of Shara et al. (1997), we estimate that *γ*  ∼  0.05 and, if the number of clumps N*c**l* is  ∼  2000 (Shara et al. 1997), we find that *ε*  ∼  2  ×  10− 3. We note that the filling factors estimated for the ejecta of other novae have a wide range of values, from 4  ×  10− 1 (Vanlandingham et al., 1999 for Nova LMC 1990 no.1), to 10− 6 (Saizar and Ferland (1994) for nova QU Vul. Williams et al. (1981) found an intermediate value, *ε*  ∼  2  ×  10− 3 for the recurrent nova U Sco. Mason et al. (2005) found *ε*  ∼  10− 4-10− 1 for nova SMC 2001 and nova LMC 2002, and Balman and Oegelman (1999) estimated *ε*  ∼  5  ×  10− 3-3  ×  10− 1 for the shell of GK Per. Mason et al. (2005) have suggested *ε*  ∼  10− 5-10− 2 for T Pyx. The mass balance and the SNIa connection ======================================== It is accepted that Type Ia supernovae represent the complete thermonuclear disruption of mass-accreting white dwarfs that reach the Chandrasekhar limit by accretion (Nomoto et al. 1984; Woosley and Weaver 1986). Within this general framework, there exist single degenerate models (Whelan and Iben, 1973) in which a WD accretes from a non-degenerate companion, and double degenerate models (e.g. Iben and Tutukov, 1984) that involve the merger between two WDs. Hachisu (2002, 2003) and Hachisu and Kato (2001, 2002) proposed a unified picture of binary evolution to SNe Ia in which recurrent novae could be understood to be part of the evolutionary stages of supersoft X-ray sources and symbiotic channels to SNe Ia. The mass M1 must be close to the Chandrasekhar limit before a recurrent nova can become a SN Ia, and the WD must also increase in the long term, after many cycles of accretion and ejection (Starrfield et al., 1985). However, in the case of T Pyx, even if M1 were close to the limit, which is not clearly established since M1 appears to be close to 1.37 M⊙, the results of the previous sections indicate that the mass balance situation is unclear: On the one hand, the photometric and spectroscopic behavior close to outburst, as mentioned in Sect. 10.2, appear to be consistent with the ejection of a rather massive shell, while the UV data and theoretical models (at the specific M1 and $\dot{M}$ values) indicate that the ignition mass is low ( ∼ 5.0  ×  10− 7 *M*⊙). This indicates that during outburst the WD ejects more material than it accumulates and that a secular decrease in the mass of the white dwarf is expected. Therefore, evolution to become a SNIa appears to be excluded. On the other hand, the ejection of a more-massive-than-accreted shell is apparently in contrast with the observational evidence that the chemical composition of the T Pyx nebula is close to solar (Williams, 1982). This appears to exclude any erosion of the white dwarf and implies that the white dwarf does not lose mass after cycles of accretion and ejection. However, this presupposes that the chemical composition of the observed nebula is representative of that of a single shell ejected during outburst. It is unclear whether these substantial discrepancies originates in flaws in the theoretical assumptions or the interpretation of the observations. They certainly highlight the need for accurate values of the most critical parameters of this recurrent nova, i.e. the mass and chemical composition of the shell ejected during outburst. In any case, the behavior of T Pyx raises several doubts about the common assumption that in recurrent novae far less material is ejected during the outburst than is accreted by the white dwarf, and therefore that its mass increases toward the Chandrasekhar limit (Starrfield, 2002). In assessing whether recurrent novae could be progenitors of Type Ia supernovae, we note that Hachisu and Kato (2001), discovered that, among six recurrent novae, only T Pyx is offset significantly from the region occupied in an a orbital period - donor mass plane (their Fig. 3) by the progenitors of SNe Ia. Neither a supersoft X-ray source, nor assisted suicide ====================================================== To explain the alleged extremely blue color of T Pyx in quiescence, WLTO (1987) proposed that nuclear burning continues even during its Q state, consistent with the slow outburst development, which suggests that the accreted envelope was only weakly degenerate at the onset of TNR. Patterson et al. (1998) attributed the luminosity of T Pyx (and V Sagittae) to quasi-steady thermonuclear burning and suggested the object to be included in the class of the supersoft X-ray sources. However, the color of T Pyx (B-V)*o* = -0.26 used in these studies is based on a significant overcorrection for the reddening, assumed to be E*B* − *V*=0.36, instead of the correct value E*B* − *V*=0.25 (see Paper I and earlier communications, e.g. Gilmozzi et al., 1998). It is unfortunate that both the value (B-V)*o*=-0.26 of Patterson et al. (1998) and the statement about the ``extremely blue color" of T Pyx was adopted widely in the literature (see for example Anupama 2002, and Parthasaraty et al. 2007). We note, incidentally, that in the same paper Patterson et al. (1998) assumed too high a reddening correction (E*B* − *V*=0.33) for V Sge; the IUE data suggest, instead a value close to 0.23. We recall that for T Pyx the observed (B-V) is about 0.14  ±  0.04 (WLTO 1987; Bruch and Engel 1994; Downes et al. 1997; Schaefer 2005; see also Table 7). This would imply that (B-V)*o*  ∼  -0.11, close to the value (B-V)*o*= -0.06 given by Szkody (1994). Patterson et al.(1998) assumed that M*v*=1.3 and after a significant bolometric correction (based on the assumption of an extremely hot source being present, a consequence of the overestimate of the reddening), derived a quiescent bolometric luminosity higher than 1036 erg s− 1, which considered to be a true lower limit. This encouraged them to invoke nuclear burning on the surface of the WD as the main power source, considering the disturbingly high $\dot{M}$ (  ≥  10− 7 M⊙ yr− 1) required in the case of pure accretion power. However, the IUE and the optical data do not appear to be reproduced by the model depicted by Patterson et al. (1998), since the following observational evidence contradicts with their conclusions: 1. One of the main results of Paper I was that the de-reddened UVOIR continuum of T Pyx is reproduced well by a single power-law F*λ*  ∝  *λ* *α* with a slope *α* =  − 2.33, representative of a steady accretion disk. Alternatively, as shown in Paper 1, the UV continuum can be well fitted by a blackbody of temperature 34000 K. There is no way to reconcile these firm observational results with the presence of a supersoft source of typical T  ∼  3.0  ×  105 K, since its expected slope in the UV region (*α* =  − 3.80) would be inconsistent with the UV observations. 2. As shown in Sect. 5, the IUE observations in 1980-1996 and the optical and IR photometric data indicate that L*d**i**s**k*=2.7  ×  1035 erg s− 1, which corresponds to emission mainly in the directly observed UV range. Therefore, in the absence of any direct or indirect evidence of a hot source, it is unlikely that L*d**i**s**k* is higher than 1036 erg s− 1. 3. Under steady-state hydrogen-burning conditions, the accretion rate $\dot{M}$*s**t**e**a**d**y* can be estimated to be: $$\dot{M}\_{steady} \sim 3.7\:\times\: 10^{-7} (M\_1-0.4) M\_{\odot}\:yr^{-1},$$ (Hachisu and Kato (2001), which is valid for hydrogen content X=0.7 In the case of T Pyx, one calculates $\dot{M}$*s**t**e**a**d**y*  ∼  3.2  ×  10− 7 M⊙*y**r*− 1. This value is almost a factor of 30 higher than the value of $\dot{M}$  ∼  1.1  ×  10− 8 M⊙ *y**r*− 1 obtained from the IUE data for the post outburst phase. 4. After correction for inclination, the apparent absolute magnitude of M*v*=1.79 corresponds to a 4-*π* averaged absolute magnitude of M*v**c**o**r**r*=2.53, to be compared with the average value M*v**m**i**n*  ∼  4.0 for ex-novae (Warner, 1995). This implies that T Pyx is more luminous than other ex-novae, as mentioned in Sect. 4, but is not ``extremely bright". We can explain why T Pyx is brighter than other novae in terms of repeated nova eruptions and heating of the primary, which triggers irradiation of the secondary and produces a higher than average $\dot{M}$. 5. The emission line spectrum of T Pyx is not that of an high excitation object: NV *λ* 1240 is nearly absent, HeII *λ* 1640 is weaker than CIV *λ* 1550 and barely present in several spectra, the OIV lines close to *λ* 1405 are absent. A comparison between the IUE spectra of T Pyx and V Sge (Fig.3) clearly shows remarkable differences and a much lower excitation character in T Pyx. From an inspection of IUE spectra of several CVs, we also found that the spectrum of T Pyx is similar to that of the old novae V533 Her and V603 Aql and to some spectra of the intermediate polar TV Col. The old nova RR Pic definitely shows higher excitation than T Pyx, with far stronger NV and HeII emissions. Other findings considerations also exclude the H-burning-bloated-WD hypothesis : 1. The complexity of the optical photometric behavior in T Pyx (Shaefer et al.,1992; Patterson et al., 1998) is difficult to explain if the bulk of the luminosity originates in a spherically symmetric radiation source associated with H-burning on top of a bloated white dwarf. 2. If the majority of the gas undergoes steady burning, it would be difficult to understand how the remainder that accumulates in the degenerate envelope could burn explosively every 20 years. 3. The outburst amplitude of T Pyx,  ∼  8.0 magnitudes, is close to that found for classical novae of similar t3 and similar system inclination (Warner, 1995, 2008). If, as assumed by Patterson et al. (1998), the luminosity during quiescence is greater than 1036 erg s− 1 then, with an outburst amplitude of about 8 magnitudes, the luminosity would reach 1040 erg s− 1, implying that  T Pyx is an object intermediate between a nova and a SN Ia. To support the hypothesis of steady nuclear burning Patterson et al. (1998) considered *all* CVs of comparable P*o**r**b* and deduced that the $\dot{M}$ in T Pyx was a factor of 5000 higher than in other CVs of similar P*o**r**b*. One should compare T Pyx with objects that are similar, that is, with recent novae. After outburst, the nova system remains in an excited state and $\dot{M}$ increases due to the irradiation of the secondary. Patterson et al. (1998) correctly excluded from their Fig. 14 all old novae within 30 yr of the outburst because of their systematically too high luminosity levels. Adopting the same line of reasoning, T Pyx should also have been excluded. In this respect, we note that the $\dot{M}$ of T Pyx is only slightly higher than that observed in *recent* ex-novae (e.g. RR Pic, V841Oph, HR Del, etc) (Selvelli, 2004). Based on the conclusions of Patterson et al. (1998) of an the extremely high luminosity (L*b**o**l* far higher than 1036 erg s− 1), Knigge et al (2000) investigated in detail the evolution of the T Pyx system and proposed that the system is a wind-driven supersoft X-ray source. In this scenario, a strong, radiation-induced wind is excited from the secondary star, and increases the rate of binary evolution, causing the system to destroy itself either by evaporation of the secondary star or in a Type Ia SN if the WD reaches the Chandrasekhar limit. Knigge et al. (2000) therefore proposed that either the primary, the secondary, or both stars may be committing assisted stellar suicide This scenario is, admittedly, highly speculative, and depends crucially on the unsubstantiated assumption that both the temperature and luminosity of T Pyx are extremely high. The IUE and optical data are instead consistent with a more conventional scenario of accretion power, as in other CVs, and we confidently predict that, fortunately, any form of suicide in the near future is extremely unlikely. Finally, we note that Greiner et al. (2003) did not find T Pyx to be a supersoft X-ray source, and that T Pyx does not appear in NASA’s HEASARC tool (a master compilation of EUV and X-ray databases). The XMM observations ==================== While this work was close to completion, the data for X-ray observations of T Pyx by XMM became publicly available. This prompted us to perform a preliminary analysis of the data to verify the presence or absence of a supersoft source. T Pyx was observed by XMM-Newton on November 10 2006. All the three EPIC cameras were operated in Full Frame mode with the Medium filter. The total useful exposure time after filtering for high radiation periods was 22.1 ksec. Optical Monitor data were taken simultaneously with the X-ray observations. The values (see Table 7) are consistent with the values given in Paper I (IUE and optical observations) and confirm the stability of the SED with time. The reduction of the XMM EPIC data was carried out with SAS version 7.1, using standard methods. T Pyx was detected as a faint source that had an observed EPIC-pn count-rate of 8.5  ×  10− 3 cts s− 1 and emission over the complete range 0.2-8 keV. Figure 4 shows the XMM-Newton EPIC-pn spectrum of T Pyx compared to a blackbody of 2.4  ×  105 K and a luminosity of 1.0  ×  1037 erg s− 1. The blackbody spectrum was simulated assuming an exposure time of 20 ksec, similar to that of the data, with two assumptions: a distance of 3500 pc and a reddening E*B* − *V*=0.25 (as in this paper), and a distance of 3000 pc and a reddening of 0.4 (as assumed by Knigge et al., 2000). The presence of a supersoft component with a temperature of 2.4  ×  105 K and a luminosity of 1.0  ×  1037 erg s− 1, whose existence was postulated by both Patterson et al. (1998) and Knigge et al.(2000), can be definitely excluded. Such a bright component would be easily visible at soft energies (i.e. below 0.5 keV), and this is not the case. Any supersoft emission, if present, would be several orders of magnitude fainter than expected. The data are compatible with the presence of a relatively hard source of low luminosity, but their low statistical quality (227 counts in the range 0.2-8. keV) does not allows us to perform a reliable fit. We also note that extrapolation of the observed IUE spectrum (power-law) into the X-ray range would result in an unrealistically high flux, several orders of magnitude higher than observed. The existence of such a high flux was already excluded in Paper I on the basis of the intensity of the HeII 1640 Åline. We therefore ask from which process or in which source the observed X-ray emission originates? In a CV (nova or quiescent nova) X-rays may be produced by at least three different mechanisms or regions (see Kuulkers et al. 2006, and Krautter, 2008): 1. Thermal emission from the hot white dwarf nova remnant in the late outburst phases. The nova becomes a strong X-Ray emitter with a soft SED. 2. Emission from the inner accretion disk (BL) or the accretion columns (in magnetic CVs). One expects to observe a typical X-ray emission of CVs in quiescence, with a thermal bremsstrahlung spectrum. 3. Shocks in the circumstellar medium surrounding the nova system where the expanding nova shell and/or a nova wind interact with each other or with pre-existing CS material. The expected hard X-ray spectrum originates from thermal bremsstrahlung with kTeff temperatures 0.2-15 keV, and with Lx  ∼  1033-1034 erg s− 1 (O’Brien et al. 1994). 4. The corona of an active dwarf M star companion. In T Pyx, as already mentioned, the XMM data would exclude case 1. corresponding to a strong SSS source. Case 2 can also be ruled out because, for a CV accreting at the quite high rates of T Pyx, the BL is optically thick and one would expect to observe (see Patterson and Raymond 1985) only soft X-ray emission with luminosity comparable to the disk luminosity, L*b**l*  ∼  L*d**i**s**k*  ∼  2.7  ×  1035 erg s− 1. For case 4, we note that a high level of coronal heating at values of L*x*/L*b**o**l*  ∼  10− 3 persists for the active dMe stars, and the emission was detected in surveys of nearby stars (see Giampapa and Fleming, 2002), although any detection would be impossible at the distance of T Pyx. We suggest that the most likely origin of the observed hard X-ray emission is from shocks within the circumstellar envelope. We note that in GK Per, Balman et al. (2006) detected hard X-ray emission by direct imaging with Chandra. The total X-ray spectrum of the nebula consists of two thermal prominent components of emission. GK Per has a large amount of CS material, which is most likely a residual of a planetary nebula phase, and the shell remnant shows a clumpy structure similar to that observed in T Pyx by Shara et al. (1997) with HST. We recall that the studies by Orio et al. (2001) by Orio (2004) of the X-ray emission from classical and recurrent novae demonstrated that emission from shocked ejecta is expected to last about two years, but may last for up to a century, if, for example, there is pre-existing circumstellar material (as in the case of GK Per). Hernanz and Sala (2007) reported on X-ray observations of V4633 Sgr performed with XMM-Newton between 2.6 and 3.5 yr after outburst. The X-ray spectrum is dominated by thermal plasma emission, which most probably originated in the shock heated ejecta. Unfortunately, the limited spatial resolution of the available XMM data do not enable any spatially-resolved study to be completed, because the pixel size is about 4 arcsec compared with the optical radius of the nebula which is about 5 arcsec. Also the XMM observations, excluding the possibilities of continuous burning and the supersoft source scenario (a massive white dwarf accreting at high rates), appear to exclude T Pyx becoming a SN Ia by means of the supersoft X-ray source channel described by Hachisu (2002, 2003). | | | | | | | --- | --- | --- | --- | --- | | Filter | *λ**e**f**f* | Observed flux | IUE flux | magn. | | | (Å) | (1  ×  10− 14) | (1  ×  10− 14) | | | UVM2 | 2310 | 0.92 | 0.80 | - | | UVW1 | 2910 | 0.86 | 0.92 | - | | U | 3440 | 0.70 | - | 14.37 | | B | 4500 | 0.38 | - | 15.59 | | V | 5430 | 0.24 | - | 15.49 | The recurrence time and the next, long-awaited outburst. ======================================================== As reported in Paper I, we started an observing program in 1986 with IUE to monitor T Pyx prior to (and during) the expected next outburst that was supposed to occur in the late eighties of the last millennium. Unfortunately, the star successfully managed to postpone the long-awaited outburst, and at the present time (2008) has surpassed by eighteen years the longest inter-outburst interval so far recorded (24 yrs). As mentioned in Sect. 7, Schaefer (2005) published the results of a study of the inter-outburst interval in the recurrent novae T Pyx and U Sco. From an analysis of the available data, he found that the two novae are relatively bright during short inter-eruption intervals and dim during long intervals, suggesting that the product of the inter-eruption interval times the average bolometrically corrected flux is a constant. Therefore, in the case of T Pyx, the lack of the post-1967 outburst is explained by a lower luminosity and therefore a lower $\dot{M}$. From the decline in the observed quiescent B magnitude in the time intervals before and after the 1966 outburst, Schaefer (2005) also predicted that the next outburst of T Pyx will occur around 2052. With the help of considerations in Sect. 8 and the data in Tables 3 and 5, we can further investigate this prediction. The recurrence time can be estimated from the (theoretical) M*i**g**n* and the observed mass accretion rate. M*i**g**n* depends mainly on M1 and $\dot{M}$, while $\dot{M}$, for a given L*d**i**s**k*, is a function also of M1 and R1, which, in turn, is a function of M1. Tables 3 and 5 list M*i**g**n*, $\dot{M}$, and the recurrence time *τ*=M*i**g**n*/$\dot{M}$ (years) for various M1 values. Table 5 (pre-1967 $\dot{M}$ values) clearly shows that the observed inter-outburst interval (22 years) corresponds to M1 values  ∼  1.36-1.38 M⊙. Table 3, which contains post 1967-outburst $\dot{M}$ values indicates that the observed interval, which, so far, is longer than 42 years, corresponds to M1  ≤  1.38 M⊙. The most likely value of M1 is therefore close to 1.37 M⊙. We note that a reduction by a factor two in the mass accretion rate corresponds to an increase in the expected recurrence time *τ* by a larger factor because, for a given M1, M*i**g**n* increases as $\dot{M}$ decreases. For the relevant values of T Pyx, the decrease by a factor of 2 in $\dot{M}$ is accompanied by an increase by a factor of about 50 percent in M*i**g**n*. For the next outburst, therefore one expects an increase in the inter-outburst interval *τ* by a factor of approximately 3.0, to values near 60 years (see Table 3). Our prediction for the next outburst date is therefore around A.D. 2025. With this new date, contrasted with that of Schaefer (A.D. 2052), we (or at least some of us) feel a bit more confident about the chance of personally testing this prediction. However, given the uncertainties in $\dot{M}$ and M*i**g**n*, the possibility of a more imminent outburst cannot be ruled out. In this case, X-ray and other observations during the first outburst stages will be of paramount importance in determining the mass ejected in a single event. Summary and conclusions ======================= We have accurately determined, from UV and other observations, the accretion disk luminosity of T Pyx during both the pre- and post-1966 inter-outburst phases. For M1  ∼  1.37 M⊙, we have found that $\dot{M}\_{pre-OB}$  ∼  2.2  ×  10− 8 M⊙ yr− 1. By combining the measured accretion rate with the duration of the inter-outburst phase (Δt = 22 yrs), the total accreted mass is inferred to be M*a**c**c**r*=$\dot{M}\_{pre-OB}$  ⋅  Δt  ∼  5.2  ×  10− 7 M⊙. This value is in excellent agreement with the theoretical ignition mass (M*i**g**n*)  ∼  5.0  ×  10− 7M⊙ expected for a massive white dwarf accreting at the quoted rate. Therefore, both the time interval between the last two outburst and the absence of the awaited post-1967 outburst (due to the lower $\dot{M}$ in the post-1967 time interval) are explained in a self-consistent way. This is the first reliable determination of the mass accreted prior to a nova outburst, *M**a**c**c**r*, owing to the dominance of the accretion disk luminosity over that of the secondary star at UV, optical and IR wavelengths, as well as good observational coverage during the inter-outburst phases. Unfortunately, *M**a**c**c**r* cannot be confidently determined in other cases, such as classical novae, because of their long inter-outburst interval, nor in other recurrent novae, due to the faintness of the source, the lack of systematic UV observations, or the dominance of light from the giant companion over that from the accretion disk. In T Pyx, the consistency between the observed M*a**c**c**r* and the theoretical M*i**g**n* supports the good quality of the observations and the reliability of the models and represents a new, direct confirmation of the validity of the TNR theory, which associates a massive white dwarf with the recurrent nova phenomenon. A detailed comparison of the observed parameters with the theoretical grids of Yaron et al. (2005) indicates that the closest agreement is obtained with a models of a rather massive white dwarf (M1  ∼  1.25-1.40 M⊙) that accretes at high $\dot{M}$ rates ($\dot{M}$  ∼  10− 8-10− 7 M⊙ yr− 1). However, no combination of the theoretical parameters can reproduce the observed values reliably, t3 being the most difficult parameter to describe. The literature data of the spectroscopic and photometric evolution during the outbursts of T Pyx clearly indicate the occurrence of an optically thick phase that lasted about three months. This implies an ejected mass of M*e**j*  ∼  10− 5 M⊙ or higher, i.e. much higher than the mass of the accreted shell M*a**c**c**r*  ∼  5.2  ×  10− 7 M⊙, inferred from UV and other observations during quiescence. Therefore, T Pyx ejected far more material than it has accreted. There is no way to reconcile this discrepancy given the small uncertainty in the value of $\dot{M}$; even if allowance is made for an uncertainty of a factor two, one obtains an upper limit to M*a**c**c**r* that is smaller by a factor of at least ten than the theoretical value of M*e**j*. Only for accretion rates higher than 4  × 10− 7 M⊙ yr− 1 would the accreted mass M*a**c**c**r* be comparable with the estimated ejected mass M*e**j*. However, these high rates would correspond to the steady-burning regime, while our detailed discussion of Sect. 14 definitely excluded this possibility. Further confirmation of our considerations can be found in the the very recent results of XMM observations that exclude the presence of a super-soft-source in T Pyx. The important point is that far more material appears to have been ejected during the last outburst of T Pyx than has been accreted by the white dwarf. This raises several doubts about the common assumption that the white dwarf in recurrent novae increases in mass toward the Chandrasekhar limit, and about the possible role of RNe as progenitors of SNIa. We note that Della Valle and Livio (1996), based on statistical considerations on the frequency of occurrence of RNe in M31 and LMC, deduced that RNe are not a major class of progenitors of Type Ia supernovae. The behavior of T Pyx represents observational confirmation of this conclusion. Further confirmation for other RNe is required, and this highlights the crucial need for accurate determinations of the most critical parameters of RNe, i.e. the mass accretion rate, and the mass and chemical composition of the shell ejected in a single outburst. In the case of T Pyx, at present, useful information can be obtained from highly spatially resolved spectrophotometry of the nebula that resolves its innermost part (associated with the last eruption), whose apparent radius should be by now larger than 1ʺ. At the same time, spatially resolved observations of the outer portions of the nebula will shed light on its poorly known chemical composition and on its complex velocity structure. We gratefully acknowledge the valuable conversations about the elusive nature of this recurrent nova that we have had with many colleagues in the last fifteen years, while waiting for its allegedly imminent outburst. In roughly chronological order we wish to thank in particular M. Livio, D. Prialnik, M. Shara, J. A. De Freitas-Pacheco, M. Contini, S. Shore, R. E. Williams, M. Friedjung, O. Yaron, J.Danziger, and M. Della Valle. Adams, W. S., & Joy, A. H.1920, Pop.Astr., 28, 514 Anderson, N. 1988,, 325, 266 Anupama, G. C. 2002, Classical Nova Explosions, AIP Conf. Proc. 637, 32 Balman, Ş. 2006, Advances in Space Research, 38, 2840 Balman, Ş., &Oumlgelman, H. B. 1999,, 518, L111 Beals, C. S., & Oke, J. B.1953,, 113, 530 Bruch, A., & Engel, A. 1994,, 104, 79 Cassatella, A., Altamore, A., & González-Riestra, R. 2005,, 439, 205 Catchpole, R. M.1969,, 142, 119 Chincarini, G., & Rosino, L  1969, IAU Coll.: Non-Periodic Phenomena in Variable Stars, 261 Contini, M., & Prialnik, D. 1997,, 475, 803 Della Valle, M., & Livio, M.1995,, 452, 704 della Valle, M., & Livio, M.1996,, 473, 240 della Valle, M., & Livio, M.1998,, 506, 818 Della Valle, M., Pasquini, L., Daou, D., & Williams, R. E. 2002,, 390, 155 Downes, R. A., & Duerbeck, H. W.2000,, 120, 2007 Downes, R., Webbink, R. F., & Shara, M. M. 1997,, 109, 345 PASP 93 552 Duerbeck, H. W.1987, Space Science Reviews, 45, 1 Duerbeck, H. W.1987, The Messenger, 50, 8 Duerbeck, H. W., & Seitter, W. C.1979, The Messenger, 17, 1 Eggen, O. J., Mathewson, D. S., & Serkowski, K. 1967,, 213, 1216 Eggen, O. J.  1968, 9, 329 Eggleton, P. P.1983,, 268, 368 Fujimoto, M. Y.1982,, 257, 752 Gehrz, R. D., Truran, J. W., Williams, R. E., & Starrfield, S. 1998,, 110, 3 Giampapa, M. S., & Fleming, T. A.2002, Stellar Coronae in the Chandra and XMM-NEWTON Era, ASP Conference Series, Vol. 277, 247 Gilmozzi, R., & Selvelli, P.2007,, 461, 593 (Paper I) Gilmozzi, R., Selvelli, P. L., & Cassatella, A. 1994, Memorie della Societa Astronomica Italiana, 65, 199 Gilmozzi, R., Selvelli, P., & Cassatella, A. 1998, Ultraviolet Astrophysics Beyond the IUE Final Archive, 413, 415 Greiner, J., Orio, M., & Schartel, N.2003,, 405, 703 Hachisu, I. 2002, ASP Conference Series 261, The Physics of Cataclysmic Variables and Related Objects, 605 Hachisu, I. 2003, ASP Conference Series 303, Symbiotic Stars Probing Stellar Evolution, 261 Hachisu, I., Kato, M., & Nomoto, K. 1999,, 522, 487 Hachisu, I., & Kato, M. 2001,, 558, 323 Hachisu, I., & Kato, M. 2002, Classical Nova Explosions, AIP 637, 284 Hamada, T., & Salpeter, E. E.1961,, 134, 683 Harrison, T. E., & Stringfellow, G. S. 1994,, 437, 827 Herbig, G. H. 1945,, 57, 168 Hernanz, M., & Jose’, J.1998, Wild Stars in the Old West, 137, 368 Hernanz, M., & Sala, G. 2007,, 664, 467 Hunter, I., Smoker, J. V., Keenan, F. P., Ledoux, C., Jehin, E., Cabanac, R., Melo, C., & Bagnulo, S. 2006,, 367, 1478 Iben, I., Jr., & Tutukov, A. V.1984,, 54, 335 Joy, A. H. 1945,, 57, 171 Kahabka, P., & van den Heuvel, E. P. J. 2006, Compact stellar X-ray sources, Cambridge Astrophysics Series, 461 Kahabka, P., & van den Heuvel, E. P. J. 1997,, 35, 69 Kato, M. 1990,, 362, L17 Kato, M., & Hachisu, I. 1991,, 373, 620 Kirkpatrick, J. D., Henry, T. J., & McCarthy, D. W., Jr. 1991,, 77, 417 Knigge, C. 2006,, 373, 484 Knigge, C., King, A. R., & Patterson, J. 2000,, 364, L75 Kolb, U., Rappaport, S., Schenker, K., & Howell, S. 2001,, 563, 958 Kopal, Z. 1972, Advances in Astronomy and Astrophysics, 9, 1 Krautter, J. 2008, in Classical Novae, 2nd Edition. Edited by M.F. Bode and A. Evans. Cambridge Astrophysics Series, No. 43, Cambridge: Cambridge University Press, 2008, 232 Kuulkers, E., Norton, A., Schwope, A., & Warner, B. 2006, Compact stellar X-ray sources, 421 Leggett, S. K. 1992,, 82, 351 Lipkin, Y., Leibowitz, E. M., Retter, A., & Shemmer, O. 2001,, 328, 1169 Livio, M. 1991,, 369, L5 Livio, M. 1994, Saas-Fee Advanced Course 22: Interacting Binaries, 135 Livio, M., & Truran, J. W.1992,, 389, 695 MacDonald, J. 1983,, 267, 732 Mason, E., Della Valle, M., Gilmozzi, R., Lo Curto, G., & Williams, R. E. 2005,, 435, 1031 Mayall, M. W. 1967,, 61, 349 McLaughlin D. B.Vistas in astronomy Vol.2  1956 Pergamon Press 1477 McLaughlin, D. B. 1965, in Stellar Structure - Stars and Stellar Systems, Chicago Univ. Press, p. 600 Mochnacki, S. W.1984,, 55, 551 Münch, G. 1968, Stars and Stellar Systems, 7, 365 Nauenberg, M. 1972,, 175, 417 Nofar, I., Shaviv, G., & Wehrse, R. 1992, Cataclysmic Variable Stars, 29, 65 Nomoto, K., Thielemann, F.-K., & Yokoi, K. 1984,, 286, 644 O’Brien, T. J., Lloyd, H. M., & Bode, M. F. 1994,, 271, 155 Orio, M. 2004, Revista Mexicana de Astronomia y Astrofisica Conference Series, 20, 182 Orio, M., Covington, J., Oegelman, H. 2001,, 373, 542 Orio, M., Tepedelenlioglu, E., Starrfield, S., Woodward, C. E., & Della Valle, M. 2005,, 620, 938 Paczyński, B. 1971,, 9, 183 Paczynski, B., & Schwarzenberg-Czerny, A. 1980, Acta Astronomica, 30, 127 Parthasarathy, M., Branch, D., Jeffery, D. J., & Baron, E. 2007, New Astronomy Review, 51, 524 Patterson, J., & Raymond, J. C.1985,, 292, 550 Patterson, J., et al. 1998,, 110, 380 Patterson, J., et al. 2005,, 117, 1204 Payne-Gaposchkin C., 1957, in “The Galactic Novae”, North Holland, p. 23 Plavec, M., & Kratochvil, P.1964, Bulletin of the Astronomical Institutes of Czechoslovakia, 15, 165 Politano, M., Livio, M., Truran, J. W., & Webbink, R. F. 1990, IAU Colloq. 122: Physics of Classical Novae, 369, 386 Pottasch, S. 1959, Annales d’Astrophysique, 22, 297 Prialnik, D., & Kovetz, A.1995,, 445, 789 Retter, A., & Leibowitz, E. M.1998,, 296, L37 Retter, A., & Naylor, T.2000,, 319, 510 Ritter, H., Politano, M., Livio, M., & Webbink, R. F. 1991,, 376, 177 Saizar, P., & Ferland, G. J. 1994,, 425, 755 Schaefer, B. E.2005,, 621, L53 Schaefer, B. E., Landolt, A. U., Vogt, N., Buckley, D., Warner, B., Walker, A. R., & Bond, H. E. 1992,, 81, 321 Seitter, W. C. 1987, RS Ophiuchi and the Recurrent Nova Phenomenon, VNU Science Press, 63 Selvelli, P. 2004, Baltic Astronomy, 13, 93 Selvelli, P., & Friedjung, M.2003,, 401, 297 Shahbaz, T., Livio, M., Southwell, K. A., & Charles, P. A. 1997,, 484, L59 Shara, M. M. 1981,, 243, 926 Shara, M. M. 1989,, 101, 5 Shara, M. M., Moffat, A. F. J., Williams, R. E., & Cohen, J. G. 1989,, 337, 720 Shara, M. M., Zurek, D. R., Williams, R. E., Prialnik, D., Gilmozzi, R., & Moffat, A. F. J. 1997,, 114, 258 Shore, S. N. 2002, Classical Nova Explosions, AIP Conf. Proc. 637, 175 Shore, S. N. 2008, Classical Novae, 2nd Edition. Edited by M.F. Bode and A. Evans. Cambridge Astrophysics Series, No. 43, Cambridge University Press, 2008, 194 Shore, S. N., & Starrfield, S.1998, Stellar Evolution, Stellar Explosions and Galactic Chemical Evolution, 413 Smith, D. A., & Dhillon, V. S.1998,, 301, 767 Starrfield, S.1993, Astrophysics and Space Science Library, 177, 209 Starrfield, S.1999,, 311, 371 Starrfield, S.2002, Classical Nova Explosions, AIP 637, 89 Starrfield, S., Sparks, W. M., & Truran, J. W. 1985,, 291, 136 Starrfield, S., Vanlandingham, K., Schwarz, G., Hauschildt, P. H., & Shore, S. N. 1998, Stellar Evolution, Stellar Explosions and Galactic Chemical Evolution, 433 Starrfield, S., Truran, J. W., Wiescher, M. C., & Sparks, W. M. 1998,, 296, 502 Starrfield, S., Truran, J. W., Sparks, W. M., Hauschildt, P. H., Shore, S. N., Krautter, J., Vanlandingham, K., & Schwarz, G. 1998, Wild Stars in the Old West, 137, 352 Stickland, D. J., Penn, C. J., Seaton, M. J., Snijders, M. A. J., & Storey, P. J. 1981,, 197, 107 Szkody, P. 1994,, 108, 639 Szkody, P., & Feinswog, L  1988,, 334, 422 Townsley, D. M., & Bildsten, L.2004,, 600, 390 Truran, J. W. 1998, Stellar Evolution, Stellar Explosions and Galactic Chemical Evolution, 425 van den Bergh, S., & Younger, P. F  1987,, 70, 125 Vanlandingham, K. M., Starrfield, S., Wagner, R. M., Shore, S. N., & Sonneborn, G. 1996,, 282, 563 Vanlandingham, K. M., Starrfield, S., Shore, S. N., & Sonneborn, G. 1999,, 308, 577 Vogt, N., Barrera, L. H., Barwig, H., & Mantel, K.-H. 1990, Accretion-Powered Compact Binaries, 391 Wade, R. A., & Hubeny, I.1998,, 509, 350 Warner, B. 1976, Structure and Evolution of Close Binary Systems, 73, 85 Warner, B. 1986,, 222, 11 Warner, B. 1987,, 227, 23 Warner, B. 1995, Cataclysmic Variable Stars, Cambridge Astrophysics Series, Cambridge University Press Warner, B. 2008, in Classical Novae, 2nd Edition. Edited by M.F. Bode and A. Evans. Cambridge Astrophysics Series, No. 43, Cambridge University Press, 2008, 16 Webbink, R. F. 1990, Accretion-Powered Compact Binaries, 177 Webbink, R. F., Livio, M., Truran, J. W., & Orio, M. 1987,, 314, 653 (WLTO, 1987) Whelan, J., & Iben, I. J.1973,, 186, 1007 Williams, R. E., Sparks, W. M., Gallagher, J. S., Ney, E. P., Starrfield, S. G., & Truran, J. W. 1981,, 251, 221 Williams, R. E.1982,, 261, 170 Williams, R. E.1994,, 426, 279 Woosley, S. E., & Weaver, T. A.1986,, 24, 205 Yaron, O., Prialnik, D., Shara, M. M., & Kovetz, A. 2005,, 623, 398 Yaron, O., 2007, private communication The secrets of T Pyx II.A recurrent nova that will not become a SN Ia ===================================================================== ### Received.....; accepted We compare the observed and theoretical parameters for the quiescent and outburst phases of the recurring nova T Pyx. IUE data were used to derive the disk luminosity and the mass accretion rate, and to exclude the presence of quasi-steady burning at the WD surface. XMM-NEWTON data were used to verify this conclusion. By various methods, we obtained L*d**i**s**k*  ∼  70 L⊙ and $\dot{M}$  ∼ 1.1  ×  10− 8 M⊙yr− 1. These values were about twice as high in the pre-1966-outburst epoch. This allowed the first direct estimate of the total mass accreted before outburst, M*a**c**c**r*=$\dot{M}\_{pre-OB}$  ⋅ Δt, and its comparison with the critical ignition mass M*i**g**n*. We found M*a**c**c**r* and M*i**g**n* to be in perfect agreement (with a value close to 5  ×  10− 7M⊙) for M1  ∼  1.37 M⊙, which provides a confirmation of the thermonuclear runaway theory. The comparison of the observed parameters of the eruption phase, with the corresponding values in the grid of models by Yaron and collaborators, provides satisfactory agreement for values of M1 close to 1.35 M⊙ and log$\dot{M}$ between -8.0 and -7.0, but the observed value of the decay time t3 is higher than expected. The long duration of the optically thick phase during the recorded outbursts of T Pyx, a spectroscopic behavior typical of classical novae, and the persistence of P Cyg profiles, constrains the ejected mass M*i**g**n* to within 10− 5 - 10− 4 M⊙. Therefore, T Pyx ejects far more material than it has accreted, and the mass of the white dwarf will not increase to the Chandrasekhar limit as generally believed in recurrent novae. A detailed study based on the UV data excludes the possibility that T Pyx belongs to the class of the supersoft X-ray sources, as has been postulated. XMM-NEWTON observations have revealed a weak, hard source and confirmed this interpretation. Introduction ============ Classical novae (CNe) are semi-detached binary systems in which a white dwarf (WD) primary star accretes hydrogen-rich matter, by means of an accretion disk, from a companion star which fills its Roche lobe. The ``classical nova" outburst is a thermonuclear runaway (TNR) process on the surface of a white dwarf that is produced when, due to the gradual accumulation of hydrogen-rich material on its surface, the pressure at the bottom of the accreted layer, which is partially or fully degenerate, becomes sufficiently high (P  ≥  1018-1020 dyn cm− 2 ) for nuclear ignition of hydrogen to begin. The critical mass for ignition M*i**g**n* depends primarily on the mass of the white dwarf as M*i**g**n*  ∼  M1− 7/3, although more detailed models show that the ignition mass also depends on both the mass accretion rate $\dot{M}$ and the core WD temperature (Prialnik and Kovetz 1995; Townsley and Bildsten 2004; Yaron et al. 2005; see also Sect. 8). Recurrent novae (RNe) are a subclass of classical novae characterized by outbursts with recurrence time of the order of decades. We refer to Webbink et al. (hereinafter WLTO, 1987) as the seminal paper on the nature of recurrent novae and to Warner (1995), Hachisu and Kato (2001), and Shore (2008) for further considerations of this topic. Solid theoretical considerations indicate that a model of a recurrent outburst requires a high accretion rate $\dot{M}$ (10− 8-10− 7 M⊙) onto a WD of mass close to the Chandrasekhar mass limit (Starrfield 1985; WLTO 1987; Livio 1994). However, nova models (Prialnik and Kovetz 1995; Yaron et al. 2005; see also Sect. 8) appear to evade these strict requirements and allow the occurrence of recurrent outbursts also in a (slightly) less massive WD. The ejecta of RNe are expected to be less massive than those of CNe. This is because on the surface of the massive WD expected in a RN, the critical conditions for ignition are reached with a far less massive envelope. Studies of the ejecta of RNe indicate a mass between 10− 6-10− 7 M⊙, instead of the 10− 4-10− 5 M⊙ observed in ``classical" novae ( Gehrz et al. 1998, Hernanz and Jose 1998, Starrfield 1999). RNe represent a convenient laboratory to compare the predictions of the TNR theory with the observations. From the observed $\dot{M}$ and the observed duration of the inter-outburst interval, in principle, one can obtain a direct estimate of the total mass accreted (M*a**c**c**r*) between two successive outbursts. This quantity can be compared with both the (theoretical) critical ignition mass M*i**g**n* and the mass of the ejected shell M*e**j*, which can be estimated by spectroscopic methods (and, in principle, by photometric methods if dP/P can be accurately measured, Livio 1991). A similar comparison cannot be made in the case of CNe because the amount of mass accreted prior to outburst is badly determined due to their long inter-outburst interval. Observations of RNe in quiescence and outburst can also be used to determine the secular balance between the total accreted mass M*a**c**c**r* and that ejected in the explosive phase M*e**j*, and therefore to investigate the possible role of RNe as progenitors of SN Ia. The ejecta of RNe have an almost ``solar“ chemical composition and are not enriched in ``heavy” elements, such as carbon, oxygen and neon. This has been taken as an indication that the massive white dwarf in RNe is not eroded, will gain mass after each cycle of accretion and ejection, and will eventually explode as a Type Ia supernova (Hachisu and Kato, 2002). In Gilmozzi and Selvelli (2007) (hereinafter Paper I), we studied the UV spectrum of T Pyx in detail and our main conclusion was that the spectral energy distribution (SED) is dominated by an accretion disk in the UV+opt+IR ranges, with a distribution that, after correction for reddening E*B* − *V*=0.25, is described by a power law F*λ* = 4.28  ×  10− 6 *λ*− 2.33 erg cm− 2 s− 1 Å− 1, while the continuum in the UV range can also be well represented by a single blackbody of T  ∼  34000 K. The observed UV continuum distribution of T Pyx has remained remarkably constant in both slope and intensity during 16 years of *IUE* observations. In the present study, we use this basic result from the IUE data and other considerations, to constrain the relevant parameters of the system and to attempt to understand the elusive nature of its outbursts. The five recorded outbursts of T Pyx occurred in 1890, 1902, 1920, 1944, and 1966, with a mean recurrence time of 19 ± 5.3 yrs (WLTO). All outbursts were similar in photometric behavior and characterized by a decay time t3  ∼  90$^{\rm d}$, a speed class that is substantially slower than in other RNe. We divide the paper into several self-contained sections. In Sect. 2, we discuss the problem of distance, which we determine to be 3500 ± 500 pc. In Sect. 3, we analyze the mass function of the system. Using observational data and other considerations, we obtain 1.25 < *M*1 < 1.4 M⊙, *M*2 ∼ 0.24 M⊙, or *M*2 ∼ 0.12 M⊙ (depending on the as yet uncertain period), and *i* = 25 ± 5 degrees. In Sect. 4, we derive the absolute magnitude at minimum M$\_{\rm v}$=2.53 ±  0.23. In Sect. 5, we calculate the bolometric disk luminosity $L\_{\rm disk} \sim 70$ L⊙. In Sect. 6 we use various methods to determine the mass accretion rate. The value depends on $M\_{\rm WD}$, and is $\dot{M} \sim 1.3\ \times 10^{-8}$ M⊙ yr− 1 for $M\_{\rm WD}=1.33$ M⊙. In Sect. 7, we derive the accretion rate before the 1966 outburst, and find that it is about twice the value measured with IUE. In Sect. 8, we compare the accreted mass before the last outburst as determined from the $\dot{M}$ derived in Sect. 7, $M\_{\rm accr} \sim 4.5\ \times 10^{-7}$ M⊙, with theoretical calculations of the ignition mass. We find good agreement (including the inter-outburst interval) for $M\_{\rm WD} \sim 1.37$ M⊙. In Sect. 9, we compare the observations of T Pyx with the theoretical models of Yaron et al (2005), finding good agreement for all parameters with the exception of *t*3. In Sect. 10, we reanalyze the historical data of the outbursts and reach the inescapable conclusion that the mass ejected in the explosion is $M\_{\rm ej} \sim 10^{-4}-10^{-5}$ M⊙, about a factor 100 higher than the accreted mass. This discrepancy is discussed in detail in Sect. 11, and leads to the conclusion that the WD loses mass during the ourbursts. In Sect. 12, we revisit the parameters of the extended nebula around T Pyx, and demonstrate why the nebula cannot be used to derive information about the mass ejected during outburst. Section 13 discusses the mass balance and infers that T Pyx is not destined to become a SN Ia as often postulated for recurrent novae. Far from increasing its mass towards the Chandrasekhar limit, the WD in T Pyx is being eroded. In Sect. 14, we discuss the hypothesis that the WD in T Pyx is undergoing steady nuclear burning on its surface, and show that it is incompatible with both historical and UV observations. In Sect. 15, we report about XMM observations confirming our analysis by directly disproving the hypothesis. In Sect. 16, we discuss the recurrence time, and predict that the long awaited next outburst will occur around A.D. 2025. Section 17 reports our conclusions. The distance ============ An accurate knowledge of distance is fundamental to the determination of the disk luminosity L*d**i**s**k* (and M*v*), the mass accretion rate $\dot{M}$, and the mass of the observed nebula surrounding the system M*n**e**b* (not to be confused with the mass ejected during outburst, M*e**j*). In this section, we revisit the problem of determining the distance to T Pyx. We adopt a new accurate determination of the reddening, E*B* − *V*=0.25  ±  0.02, which is based on the depth of the *λ* 2175 Å  interstellar absorption feature (see Paper I). An estimate of the absolute magnitude at maximum M*V**m**a**x* can be obtained from the optical decay time *t*2 or *t*3 by extending to a recurrent nova such as T Pyx, the applicability of the Maximum Magnitude versus Rate of Decline (MMRD) relations obtained using data of classical novae. This is justified by the fact that it is clearly established that the same kind of physical process for the outburst is involved i.e. a TNR on an accreting WD. The suggestion that recurrent novae in M31 are fainter than predicted by the MMRD curve (Della Valle and Livio, 1998), applies only to the RNe subset that, unlike T Pyx, displays a short *t*3. The long *t*3 of T Pyx, the spectral behavior during outburst and the quantity of mass ejected (see Sect. 10) indicate that the same mechanism is operating as in CNe. The optical decay time of T Pyx is well determined (Mayall 1967; Chincarini and Rosino 1969; Duerbeck 1987; WLTO 1987; Warner 1995) with *t*2  ∼  62*d* and *t*3  ∼  92*d*. From these values, we derived four different estimates for M*v**m**a**x* ( − 7.01,  − 6.75,  − 7.00,  − 6.64) using the S-shaped MMRD relation of Della Valle and Livio (1995), and the two linear and the one S-shaped relations of Downes and Duerbeck (2000), respectively. An additional estimate of the distance can be obtained by assuming that T Pyx, as a moderately fast nova, radiated at near-Eddington luminosity at maximum. This provides *L**E**d**d*/*L*⊙ = 4.6 × 104 (*M*1/*M*⊙ − 0.26) For a massive white dwarf (M1  ∼  1.35 M⊙), this corresponds to M*b**o**l**m**a**x*  ∼  -6.98. Close to the epoch of maximum light the observed color index showed oscillations about an average value of (B-V)=0.35 (Eggen et al. 1967, Eggen, 1968), or, after correction for reddening, (B-V)*o*  ∼  0.10. Van den Bergh and Younger (1987) found that novae at maximum light have (B-V)*o* = +0.23  ±  0.06, while Warner (1976) reported that the intrinsic color of novae close to maximum is (B-V)*o*  ∼  0.0. The intermediate value for T Pyx was (B-V)*o*  ∼  0.10, which is indicative of a temperature lower than 10,000 K and suggests that, close to maximum light, the nova radiated mostly in the optical. Since the bolometric correction is quite small in this case (BC  ∼  − 0.20), it follows that M*v**m**a**x* =  − 6.78, which is quite close to the values obtained above from the four MMRD relations. The mean value of the five estimates for M*v**m**a**x* is  − 6.83, with a median value  − 6.79 and a formal standard deviation *σ* = 0.16. The five estimates are intermediate between the value ( − 6.4) reported by Payne-Gaposchkin (1957), Catchpole (1968), and Warner(1995) and the single value ( − 7.47) given by Duerbeck (1981). Adopting M*v**m**a**x* =  − 6.81, *A**V* = 3.15 × *E*(*B* − *V*) = 0.79 and m*v**m**a**x*  ∼  6.7 (Catchpole 1968, Warner 1995, Chincarini and Rosino 1969), one obtains a distance d = 10(1 + (*m* − *M* − *A*)/5)  ∼  3500 pc. The uncertainties in the individual parameters (0.16 for M*v**m**a**x*, 0.1 for m*v**m**a**x*, 0.06 for A*V*) correspond to an uncertainty of about 330 pc in the distance, i.e. to a relative uncertainty of the order of 10 percent. A further estimate of the distance can be obtained by comparing the dereddened flux of T Pyx at 1600 Å  with the accretion disk models by Wade and Hubeny (1998). As mentioned in Paper I, the most appropriate descriptions of the shape and depth of the lines (models *e**e* and *j**j* at low inclination angles) correspond to high values of white-dwarf mass and accretion rate, although they provide a continuum that is too steep. In any case, these high M1, high $\dot{M}$ models, (we have extrapolated $\dot{M}$ to values as high as log $\dot{M}$=-7.5) clearly show that the ratio F1600*m**o**d**e**l*/F1600*o**b**s*. is about 1000. Applying a scaling of a factor of  ∼  (1000)0.5 = 31.5 to the model distance (100 pc) would reconcile these two values. The derived value of  ∼  3150 pc is in fair agreement with the assumed distance of 3500 pc. We recall that a range of distances to T Pyx can be found in the literature. Catchpole (1969) derived a lower limit of 1050 pc from the equivalent width of the CaII K line (EW  ∼  400 mÅ) and the EW-distance relationship derived by Beals and Oke (1953) for stars close to the Galactic plane. The EW measurement of Catchpole is however quite uncertain because of possible contamination by emission lines and the low quality of the spectra (the CaII K line falls close to the edge of the spectrum and only two spectra were of sufficient signal in that region). Catchpole (1969), using the two spectra of the highest signal-to-noise-ratio found that the K line of calcium corresponds to radial velocity of +20 km s− 1. According to Catchpole (1969), this velocity agrees well with that of material travelling with the Galactic rotation in the direction of T Pyx; since this direction is, however, close to a node of the rotation curve, the Beals and Oke (1953) method cannot be used to determine the distance. Warner (1976), used the calibration given by Munch (1968) for stars close to a node in the Galactic rotation curve, and indicated that a star with a measurement of 0.4 Å  for the K line should be at least at a distance of 2 kpc. In a study of the IS Ca II K line towards O- and B-type stars in the Galactic disk, Hunter et al. (2006), presented a new calibration of the the total column density of Ca II K versus distance. In a plot of log N(K) versus log d, the estimated distance of 3500 pc (log d = 3.545) would correspond to log N (CaII)  ∼  13.0. The EW of the K line measured by Catchpole (1969) corresponds to a column density of 4.9  ×  1012 (log N=12.69), which would correspond to a distance of 2200 pc. However, this value of column density is a lower limit, because the medium is not optically thin in the K line, and a yet higher value of distance is expected. By applying T Pyx theoretical predictions for light curves of CVs in outburst, Kato (1990) estimated a distance close to 4000 pc, by comparing the observed magnitude at outburst with the theoretical absolute magnitude prediction at outburst. In the following, we assume a distance of *d* = 3500 ± 350 pc. A discussion of the critical role played by distance in the interpretation of the nature of T Pyx is given in Sect. 11.1. The system parameters ====================== We aim to determine both the disk luminosity and the mass accretion rate of T Pyx. This requires prior knowledge of the system inclination angle *i* and the mass of the primary M1. In the disk geometry, the inclination angle is critical to the estimate of the disk luminosity, while M1 and R1 (a function of M1) are key parameters in the correlation between $\dot{M}$ and L*d**i**s**k*. For most CVs, the determination of these and other parameters (M2, P*o**r**b*, 2K) entering the mass function: $$\frac{(M\_2\cdot sini)^3}{(M\_1+M\_2)^2}=1.037 \times 10^{-7}\cdot K\_1^3\cdot P \label{eq:massfun}$$ is a difficult task, and accurate solutions have been obtained only for a few eclipsing systems. For T Pyx, the situation, ``prima facie", does not appear encouraging because no system parameter (not even the orbital period) have been accurately measured. Therefore, one has to start from a restricted range of values corresponding to the most accurately known parameters in the mass function equation, based on theoretical or semi-empirical considerations, and derive the corresponding range of allowed solutions for the unknown relevant parameters. The orbital period(s) and K1 ---------------------------- There have been several attempts to measure the orbital period of T Pyx, although, disappointingly, they have resulted in a wide range of values (see Paper I); this is a clear indication that there is no definite photometric clock associated with the system. Patterson et al. (1998) detected a stable photometric wave at P*h*=1*h*.829  ±  0*h*.002 (a value close to that of the ``most probable" photometric period P*h* = 1*h*.828 of Schaefer et al., 1992), although this interpretation is inconsistent with the presence of another signal at 2*h*.635. The only spectroscopic determination of the period is that by Vogt et al. (1990), who, in a preliminary study based on a large number (101) of spectra of limited resolution ( ∼ 3 Å), reported a spectroscopic period of P*h*  ∼  3*h*.439, without quoting uncertainties. This spectroscopic period is almost twice the photometric value determined by Szkody and Feinswog (1988) in the infrared. In other CVs, this behavior is generally interpreted as evidence of ellipsoidal variations in the companion, in a system observed at a high inclination angle. This is not the case for T Pyx because, as convincingly shown in Paper I, the companion does not contribute to the J light curve, unless the distance is less than 200 pc. In the following, we consider system solutions for both P*h* = 1*h*.829 (photometric, Patterson et al, 1998) and P*h*=3*h*.439 (spectroscopic, Vogt et al. 1990). The inclusion of the spectroscopic period could be criticized since it is the result of a preliminary analysis that has not been confirmed by subsequent studies. However, given the complex photometric period structure and in the absence of a definite physical interpretation of the stable signal with P*h*=1*h*.829 (that could even arise from the rotation of a magnetic white dwarf), we believe that the spectroscopic results should be considered, since, in any case, they have not been challenged by similar studies. Vogt et al. (1990) found that the projected orbital velocity K1 of the primary, derived from an analysis of radial velocities of the emission lines, is approximately 24 ± 5 kms− 1. Taking into account the limited resolution and the large number of spectra, this value is probably an upper limit. We assume that this value of K1 is representative of the primary star radial velocity (independently of the period being considered), although we are aware of the problems encountered in associating radial velocity changes with the motion of the WD. The mass of the primary ----------------------- Both theoretical considerations and observational determinations (although limited) indicate that the WD masses in classical nova systems are about 1.0 M⊙, i.e. higher than inferred from the standard field WD mass distribution (Ritter et al., 1991). Theoretical models of recurrent novae require an even more massive white dwarf (WLTO, Shara 1989, Livio 1994). Therefore, in the following, we allow the mass of the primary, M1, to vary between 1.25 and 1.4 M⊙. The mass of the secondary ------------------------- In the absence of any direct information about the secondary (the accretion disk is the dominant luminosity source from the UV to the infrared, Paper I), we can estimate its mass M2 on the basis of the commonly accepted assumptions that: a) the secondary fills its Roche lobe, b) its radius R2 is determined by the Roche geometry, and c) the secondary star obeys a mass-radius relation valid for low mass main sequence stars. In this standard description, the radius of the secondary is well approximated by the equivalent radius R*L* of the Roche lobe itself, i.e. R2  ∼  R*L* and the spherical volume of the secondary is assumed to equal the volume of its Roche lobe. The equivalent volume radii of the Roche lobe are given in tabular form by several authors (Plavec and Kratochvil 1964; Kopal 1972; Eggleton 1983; Mochnacky 1984). For our purposes, it is convenient however to use the analytical approximation by Paczinsky (1971), which is valid for M2/M1  ≤  0.8: *R**L*/*A* = 0.457256(*M*2/(*M*1 + *M*2))1/3 where we have introduced a new value for the constant (instead of 0.46224) because it yields results that differ by less than 1% from those tabulated by Eggleton (1983 ) and Mochnacky (1984). By combining equation [eq:RL] with Kepler’s third law: *A* = 0.50557*M**t**o**t*1/3*P**h*2/3 one obtains an approximate relation between P, M2, and R2 ( ∼  R*L*): *P**h* = 8.997*R*23/2*M*2− 1/2 In the crude assumption M2  ∼  R2, this provides an approximate relation: M2 = 0.111 P*h* by which a rough estimate of the mass of the secondary can be obtained if the orbital period is known. In general, however, the R-M relation for the low MS stars is described more accurately by a relation of the form: *R*2 ∼ *β**M*2*α* By combining eq. [eq:PRM] and eq [eq:alpha], we derive a more general relation between M2 and P*h*: *M*2 = *P**h*(2/(3*α* − 1)) ⋅ 8.997(2/(1 − 3*α*)) ⋅ *β*(3/(1 − 3*α*)) Values for the parameters *α* and *β* of the low mass main-sequence stars were derived from observational data and theoretical considerations. In particular, Warner (1995) assumed that R2=M213/15 (which provides an approximate fit to the data set by Webbink, 1990) and derived M=0.065 × P1.25, while Smith and Dhillon (1998) found a constitutive relation R2=0.91 M20.75 and derived M=0.038 × P1.58. Patterson et al. (2005) derived an empirical mass-radius sequence for CV secondaries based on masses and radii measured primarily for the superhumping CVs. They found that R2=0.62 × M20.61 for P shorter than 2.3 hours, and R2=0.92 × M20.71 for P longer than 2.5 hours, and derived M2=0.032 × P2.38 (P  ≤  2.3) and M2=0.026 × P1.78 (P  ≥  2.5). We inserted the *α* and *β* values of Patterson et al.(2005) in Eq. [eq:alphabeta] and obtained slightly different results, that is, M2=0.02823 P2.41 (short P) and M2=0.02554 P1.77 (long P). | | P*h*= 1.829 | P*h*=3.439 | | --- | --- | --- | | Patterson et al. | 0.134 | 0.234 | | Smith and Dhillon | 0.100 | 0.267 | | Knigge | 0.140 | 0.236 | | Warner | 0.138 | 0.304 | | Equation (7) | 0.121 | 0.227 | Knigge (2006) obtained an independent M2-R2 relation by revisiting and updating the mass-radius relationship for CV secondaries determined by Patterson et al. (2005). We have fitted with a power-law the data contained in his Table 3 and derived M2-P relations for short P and long P systems. The relations are $$\begin{aligned} M\_2 = 0.0371 \times P^{2.2022}~~~~~~~~~(P \le 2.2), \\ \nonumber M\_2 = 0.02024 \times P^{1.9802}~~~~~~~~(P \ge 3.2).\end{aligned}$$ Table 1 provides the M2 values obtained by these various methods, for P*h*=1*h*.829 (photometric) and P*h*=3*h*.439 (spectroscopic). In the following, we therefore study the system solutions for M2 = 0.12  ± 0.02 M⊙ and M2 = 0.24  ±  0.02 M⊙, separately. The corresponding spectral types are in the range M3V-M5V (Kirkpatrick et al. 1991, Legget 1992). The system inclination ---------------------- The above derived range of allowed values for the system parameters (M1: 1.25-1.4 M⊙, M2 = 0.12 M⊙ (for P*h*=1*h*.829), M2 = 0.24 M⊙ (for P*h*= 3*h*.439), K1: 24  ± 5 km s− 1) can be inserted into Eq. (2) to find the corresponding range of solutions for the system inclination, which is considered to be a free parameter. Table 2 clearly shows that, given the observational and theoretical constrains for P, K1, M1, and M2, the solutions for the inclination *i* are in the range between 20 and 30 degrees, a value close to 25 degrees being the most likely value. In particular, for M1  ∼ 1.35 M⊙, (see also the following) the solutions for *i* do not change significantly, and for P*h*=1*h*.829 (M2=0.12 M⊙) and P*h*=3*h*.439 (M2=0.24 M⊙ ) are close to 30 and 20 degrees, respectively (see also Fig. 1). Patterson et al. (1998) suggested that T Pyx is being observed at low system inclination, i.e. *i*  ∼ 10*o*-20*o*, due to the low amplitude of the orbital signal. Shabbaz et al. (1997) estimated the binary inclination *i* of T Pyx by measuring the separation of the H*α* emission-line peaks, and obtained a lower limit of  ∼  6*o*. A low system inclination is also consistent with the small value of the radial velocity, the sharpness of the emission lines in the optical (Warner, 1995), and the steepness of the UV and optical continua (Paper I). However, the presence of radial velocity variations (Vogt et al. 1990), and the modulations in the photometric variations preclude an *i* value close to zero. cccc M1 & K1 & *i* (P=1*h*.83) & *i* (P=3*h*.44) (M⊙) &(km s− 1) & 1.25 & 19 & 22.9 & 14.7 1.25 & 24 & 29.4 & 18.7 1.25 & 29 & 36.4 & 22.8 1.30 & 19 & 23.5 & 15.0 1.30 & 24 & 30.2 & 19.1 1.30 & 29 & 37.4 & 23.3 1.35 & 19 & 24.0 & 15.4 1.35 & 24 & 31.0 & 19.6 1.35 & 29 & 38.4 & 23.9 1.40 & 19 & 24.6 & 15.7 1.40 & 24 & 31.7 & 20.0 1.40 & 29 & 39.5 & 24.4 It should therefore be noted that, in spite of the large uncertainties in the observational data, the adoption of plausible theoretical assumptions and some semi-empirical constraints has enabled a quite restricted range for the values of the system inclination to be defined. In the following we will assume *i* = 25 ± 5 degrees for the system inclination angle. The absolute magnitude at minimum ================================= During the quiescent phase after the 1967 outburst, the average optical magnitude of T Pyx is m*v* = 15.3  ±  0.05 (Warner 1995, Duerbeck 1987, Schaefer 1992); for d = 3500  ±  350 pc and A*V*=0.79  ±  0.06, we obtain an (observed) absolute magnitude at minimum M*v**o**b**s*=1.79  ±  0.21 (Warner 1995 defined this to be the *apparent* absolute magnitude since the observed flux in quiescence depends on the inclination angle). Assuming that the visual radiation originates in a non-irradiated disk (Paper I), we correct the apparent absolute magnitude M*v**o**b**s*(*i*) for the inclination angle of the disk (see Warner 1987, 1995) by a term Δ*M**v*(*i*) to obtain the ``standard“ or ``reference” absolute magnitude: *M**v**c**o**r**r* = *M**v**o**b**s*(*i*) − Δ*M**v*(*i*) where Δ*M**v*(*i*) =  − 2.5log(2cos*i*) according to WLTO (1987), or Δ*M**v*(*i*) =  − 2.5log[(1 + 1.5cos*i*)cos*i*] according to Paczynski and Schvarzenberg-Czerny (1980). For *i* ∼ 25*o*, one obtains a correction of Δ*M**v*(*i*)  ∼  0.74  ±  0.06, by averaging these two relations, and *M**v**c**o**r**r*= +2.53  ±  0.22 for the absolute magnitude, averaged over all aspect angles. This value is about 1.4 magnitudes brighter than the mean absolute magnitude of nova remnants in the same speed class (see Fig. 2.20 and Table 4.6 of Warner 1995). It is also about 2.0 magnitudes brighter than the mean absolute magnitude of novae at minimum, as obtained from the values given in Table 6 of Downes and Duerbeck (2000). The T Pyx remnant is therefore one of the brightest nova remnants, its absolute magnitude M*v**c**o**r**r*=+2.53 being close to that of the ex-nova HR Del, whose corrected value, M*v**c**o**r**r*=2.30, implies that it is a quite luminous object (see also Paper I). The disk luminosity from IUE and optical data ============================================= As shown in Paper I, the integrated UV continuum flux of T Pyx in the wavelength range 1180-3230 Å , after correction for reddening, is 1.94  ×  10− 10 erg cm− 2s− 1. For *d* = 3500  ±  350 pc, the corresponding integrated luminosity of the UV continuum is: *L**U**V* ∼ 2.85  ×  1035 *e**r**g* *s*− 1 ∼ 74.2  ± 15.0 *L*⊙. where the 20 percent is caused by the 10 percent uncertainty in the distance. Most old novae have L*U**V* in the range 1-20 L⊙, with V 841 Oph at  ∼ 24 L⊙ (Gilmozzi et al., 1994), and V603 Aql and RR Pic at  ∼ 11 L⊙ and  ∼ 3 L⊙, respectively (Selvelli, 2004). Therefore, T Pyx is also quite bright in the UV and, again, is challenged only by HR Del that has L*U**V* = 56 L⊙ (Selvelli and Friedjung (2002). In the following, we assume that the observed UV and optical luminosity arise from an accretion disk heated by viscous dissipation of gravitational energy. This agrees with the behavior of similar objects, like old novae and nova-like stars. This assumption is also strongly supported by the fact that the continuum energy distribution in the UV range is well reproduced (Paper I) by a power-law with spectral index *λ* − 2.33, as predicted by theoretical optically thick accretion disk models radiating as a sum of blackbodies. The bolometric disk luminosity *L**d**i**s**k* can be estimated from the observed UV and optical luminosity, where the bulk of the continuum radiation is emitted, after correction for the inclination and the unseen luminosity in both the infrared and at *λ*  ≤  1200 Å. The radiation emitted at wavelengths shorter than Ly*α* is strongly absorbed and the energy is redistributed to longer wavelengths (Nofar, Shaviv and Wehrse 1992, Wade and Hubeny 1998). The integration (from *λ*  ∼  1000 Å  to the IR) of the power-law distribution which represents the observed UV and optical continuum of T Pyx, corresponds to a flux of about 3.6  ×  10− 10 erg cm− 2 s− 1, which corresponds to a bolometric luminosity of 5.24  ×  1035 erg s− 1  ∼  136.5 L⊙. This value refers to the adopted inclination of about 25 degrees. After correction to the ``standard“ inclination of about 57 degrees and by considering an average limb-darkening factor similar to that in the optical, we obtain an angle (4*π*) averaged bolometric disk luminosity (hereinafter L*d**i**s**k*) of about 70  ±  15 L⊙, where the relative uncertainty (21 percent) derives from the combination of the uncertainties in the distance and the inclination. We consider this to be the reference disk luminosity for the ``recent”, post-1967-outburst epoch. We note that the Wade and Hubeny (1998) models indicate (their Fig. 8) that in the UV region close to 1200 A (where the effect is larger) the limb-darkening correction, from *i* of about 25 degrees to *i* of about 60 degrees, corresponds to a flux reduction of about 30 percent (this is limb darkening alone, and the geometric projection factor cos *i* is not included). A rough estimate of the 4*π* averaged bolometric disk luminosity can also be obtained from M*v**c**o**r**r* = +2.53. Adopting an average disk temperature of about 28,000 K (the UV continuum indicates  ∼  34,000 K, although this is too hot to match the optical fluxes), the corresponding bolometric correction is *B*.*C*. ∼  − 2.7, and *L**b**o**l*  ∼  92.9 *L*⊙, in fair agreement with the previous estimate. In this comparison, we neglected the effects of temperature stratification in the accretion disk. The mass accretion rate ======================= One could in principle estimate the mass accretion rate $\dot{M}$ by comparing the observed spectral distribution with that of proper accretion disk models. This approach is not viable in our case for the following reasons: a) the spectral index of model disks that consist of blackbodies is hardly sensitive to the mass accretion rate $\dot{M}$; b) most models providing $\dot{M}$ as a function of the continuum slope do not include the case of a massive WD with high accretion rates; c) disk models depend on a large number of parameters, so that model fitting does not generally provide unique results. Alternatively, if, as in our case, one can make the reasonable assumption that the disk is heated by viscous dissipation of gravitational energy, the mass accretion rate $\dot{M}$ can be obtained from the relation: $$\dot{M} = (2R\_1 L\_{disk})/(GM\_1)$$ With this method, $\dot{M}$ is not model dependent but the knowledge of *L**d**i**s**k* and *M*1 is required. Numerically, $\dot{M}$ can be represented by: $$\dot{M}=5.23 \times 10^{-10} \phi L\_{disk}/L\_{\odot} \quad \rm{with}$$ *ϕ* = (*R*1/*M*1)/(*R*1*o*/*M*1*o*) = 0.1235 ⋅ *R*1/*M*1,  where R1 is the WD radius in 10− 3 R⊙, R1*o* = 8.10  ×  10− 3 R⊙ is the radius of a WD of mass M1 = 1.0 M⊙, and M1 is in solar masses. We obtained average values for R1 as a function of M1 from various WD radius-mass relations in the literature (Hamada and Salpeter 1961; Nauenberg 1972; Anderson 1988; Politano et al. 1990; Livio 1994). By fitting a quadratic function to these average values, for M1 in the range 1.0 to 1.4 M⊙, we found that: *R*1/*R*⊙ =  − 0.01315*M*12 + 0.01777*M*1 + 0.00347. The upper curve in Fig. 2 is a plot of the standard R1  ∼  M1− 1/3 relation, which is generally valid for M1  ≤  1.0 M⊙, while the lower curve represents the quadratic fit to the average values for M1 in the range 1.0 to 1.4 M⊙, with a relative uncertainty of *σ**R*1/R1 = 0.07. Columns 2, 3, and 4 in Table 3 indicate R1, *ϕ*, and $\dot{M}$ for *M*1 in the range 1.00 - 1.4 *M*⊙. We note that *ϕ* = 1.00 for M1=1.0 M⊙, and decreases to about 0.23 for M1=1.4 M⊙. The accretion rate $\dot{M}$ in Table 3 is calculated for the (present) luminosity *L**d**i**s**k*  ∼  70 L⊙, derived from UV data. In the case of a massive WD, the mass accretion rate is close to 1.1  ±  0.3  ×  10− 8 M⊙ *y**r*− 1. The uncertainty in $\dot{M}$ is of the order of 23 percent and derives from the uncertainties in the distance, the correction for the inclination, and the value of R1. cccccc M1 & R1 & *ϕ* & $\dot{M}$ & M*i**g**n* & *τ* (M⊙) & (10− 3R⊙)& &(10− 8M⊙yr− 1) & (10− 7M⊙) & (yrs) 1.00 & 8.10 & 1.000 & 3.66& 108.78 & 297.2 1.05 & 7.63 & 0.897 & 3.28& 92.88 & 283.2 1.10 & 7.10 & 0.797 & 2.92& 77.75 & 266.3 1.15 & 6.51 & 0.699 & 2.56& 63.32 & 247.3 1.20 & 5.85 & 0.602 & 2.20& 49.19 & 223.6 1.25 & 5.14 & 0.508 & 1.86& 35.21 & 189.3 1.30 & 4.35 & 0.413 & 1.51& 22.22 & 147.1 1.33 & 3.85 & 0.357 & 1.31& 14.78 & 112.8 1.35 & 3.51 & 0.321 & 1.17& 10.35 & 88.5 1.36 & 3.33 & 0.302 & 1.11& 8.26 & 74.4 1.37 & 3.15 & 0.284 & 1.04& 6.35 & 61.1 1.38 & 2.96 & 0.265 & 0.97& 4.61 & 47.5 1.39 & 2.78 & 0.247 & 0.90& 3.06 & 34.0 1.40 & 2.60 & 0.229 & 0.84& 1.78 & 21.2 An independent confirmation of these values for the mass accretion rate can be obtained from the mass accretion rate versus boundary layer luminosity ($\dot{M}$-L*B**L*) relation given in Table 2 of Patterson and Raymond (1985), if one assumes that L*B**L*  ∼  L*d**i**s**k*. Their Table 2 indicates that (for M1=1.0) a mass accretion rate $\dot{M}$ of 1.6  ×  10− 8 M⊙ *y**r*− 1, close to that derived above, would correspond to a luminosity of 1.3  ×  1035 erg s− 1. With the same $\dot{M}$, a larger luminosity is expected by extrapolation to M1 values close to 1.3 M⊙, in close agreement with the value found for L*d**i**s**k* (2.70  ×  1035 erg s− 1). The L1640-$\dot{M}$ relation reported in the same Table 2 does not, however, provide a consistent result. The average (de-reddened) flux on earth in the HeII 1640 line is 6.32  ×  10− 13 erg cm− 2 s− 1 (Paper I) and the corresponding luminosity for d=3500 pc is 9.3  ×  1032 erg s− 1. Table 2 of Patterson and Raymond (1985) indicates instead a L1640 = 1.2  ×  1032 erg s− 1 for $\dot{M}$=1018 g s− 1=1.587  ×  10− 8 M⊙ yr− 1, M1=1, and extrapolation to higher M1 values does not appear to be able of reproducing the observed HeII luminosity. In T Pyx there is probably a nebular contribution to L1640 that is added to that produced by the BL. It should also be emphasized that the HeII 1640 emission line exhibits significant changes, while the UV continuum remains constant in intensity and slope (see Paper I). This behavior casts doubts about the suitability of the L1640-$\dot{M}$ method for the specific case of T Pyx. Another estimate of $\dot{M}$ can be obtained by direct application of the M*v*-$\dot{M}$ relations reported in the literature. WLTO (1987) derived a relation between the absolute optical magnitude and the accretion rate for the ``reference" disk model (Warner, 1987), after correction for the disk inclination: $$M\_v^{corr.}=-9.48 -5/3 \cdot log(M\_1\cdot\dot{M})$$ where M1 is in solar masses and $\dot{M}$ is in solar masses per year. In the case of T Pyx, for M1 in the range 1.25-1.4 M⊙, one obtains values of $\dot{M}$ close to 5.0  ×  10− 8 M⊙ yr− 1. It should be noted, however, that in the M*v*-$\dot{M}$ relation given by WLTO there is no explicit dependence on R1, which strongly depends on M1, especially for massive WDs. If $\dot{M}$ is corrected for the factor *ϕ*=(R1/M1)/(R1*o*/M1*o*) one obtains that $\dot{M}$(1.25) = 3.50  ×  10− 8 M⊙ yr− 1 and $\dot{M}$(1.40) = 1.60  ×  10− 8 M⊙ yr− 1. If one instead adopts the Paczynski and Schvarzenberg-Czerny (1980) correction for *i*, the corresponding $\dot{M}$ values are 2.85  ×  10− 8 M⊙ yr− 1 and 1.22  ×  10− 8 M⊙ yr− 1 respectively. Lipkin et al. (2001) improved the $\dot{M}$-M*v* relations presented by Retter and Leibowitz (1998) and Retter and Naylor (2000) and derived the new relation $$\dot{M}\_{17} = M\_1^{-4/3} \times 10^{(5.69 - 0.4M\_v^{corr})}$$ where $\dot{M}\_{17}$ is the mass transfer rate in 1017 g s− 1 and M*v**c**o**r**r* is the inclination-corrected absolute magnitude of the disk. The factor M1− 4/3 comes from the factor R1/M1 in the assumption, generally valid for WDs with M  ≤  1 M⊙, that MR3=const. However, this R1-M1 relation (polytropes) is not applicable if the WD is massive because in this case R1 decreases with a far steeper slope as M1 increases (see Sect. 6 and Fig. 2). The adoption of the standard R1  ∼  M1− 1/3 relation for massive WD has the effect of overestimating R1 and in turn overestimating $\dot{M}$. This is especially true for M1  ∼  1.4, for which the difference between the R1 values estimated from the R1  ∼  M1− 1/3 relation and the more general R1 - M1 relations (Hamada and Salpeter 1961; Nauenberg 1972; Anderson 1988; Politano et al. 1990; Livio 1994) reaches its maximum value (Fig. 2). The Lipkin et al. (2001) formula corrected for the proper R1/M1 ratio in massive white dwarfs according to the *ϕ* factors of Table 3, provides values of $\dot{M}$ = 1.38  ×  10− 8 and 0.59  ×  10− 8 for white dwarfs with M1= 1.25 M⊙ and M1=1.40 M⊙, respectively. These values are about one half of the corresponding values obtained from the WLTO formula. It should be noted, however, that, in the derivation of Lipkin et al. (2001) formula there is the assumption that L*v*/L*b**o**l*  ∼  0.14. In the case of T Pyx, this L*v*/L*b**o**l* ratio is lower by a factor of about two and therefore the $\dot{M}$ derived using their formula is underestimated by a similar amount. In conclusion, the various methods provide quite consistent results and we can confidently assume that the mass accretion rate derived from ``recent" UV and optical data is 1.86  ±  0.4  ×  10− 8 M⊙*y**r*− 1 for M1=1.25, and 0.84  ±  0.2  ×  10− 8 M⊙*y**r*− 1 for M1=1.40 M⊙. For an intermediate value of M1=1.33 M⊙ we obtain that $\dot{M}$  ∼  1.3  ×  10− 8 M⊙ *y**r*− 1. Table 4 summarizes the adopted values of the basic parameters and their estimated errors. | Quantity | value | | --- | --- | | M*v**m**a**x* | -6.81  ±  0.16 | | m*v**m**a**x* | 6.7  ±  0.1 | | A*v* | 0.79  ±  0.06 | | distance | 3500  ±  350 pc | | m*v* | 15.30  ±  0.05 | | M*v* | 1.79  ±  0.21 | | *i* | 25  ±  5 degrees | | Δm*v*(i) | 0.74  ±  0.06 | | M*v**c**o**r**r* | 2.53  ±  0.23 | | L*U**V* | 74  ±  15 L⊙ | | L*d**i**s**k* | 70  ±  15 L⊙ | | M1 |  ∼  1.36 M⊙ | | $\dot{M}$ | 1.1  ±  0.25  ×  10− 8 M⊙ yr− 1 | | $\dot{M}\_{pre-OB}$ | 2.2  ±  0.5  ×  10− 8 M⊙ yr− 1 | The pre-1966-outburst mass accretion rate ========================================= Schaefer (2005) published a large collection of quiescent B-magnitudes for the recurrent novae T Pyx and U Sco using information from archival plates, from the literature, and from his own collection of CCD magnitudes. The photometric data of T Pyx start in 1890 and therefore cover all of the known inter-outburst intervals. According to these data, during the inter-outburst phase in the years 1945-1966, T Pyx was a factor of 2 brighter in the B band than during the present quiescent phase after the 1967 outburst. Since no significant changes in the (B-V) color index were found during both of these quiescence phases, one can safely deduce that in 1945-1966 T Pyx was a factor of 2 brighter also in terms of visual and bolometric flux. Using the WLTO and the Lipkin et al. (2001) relations, one finds that the mass accretion rate in epochs pre-1966-outburst, hereinafter $\dot{M}\_{pre-OB}$, was about twice the values of $\dot{M}$ for post 1967, obtained in the previous section. Therefore, we estimate that the mass accretion rate $\dot{M}\_{pre-OB}$ during the last pre-outburst interval (1945-1966) was between 1.68  ±  0.4  ×  10− 8 M⊙ *y**r*− 1 and 3.72  ±  0.8  ×  10− 8 M⊙ *y**r*− 1, for M1=1.25 and M1=1.40 respectively. **These are the values that are compared in Sects. 8 and 9 with the theoretical models of nova.** The post-1967 accretion values (Table 3) are considered in Sect. 16 in the context of the ``missing" outburst and the next expected outburst. The theoretical ignition mass and the accreted mass =================================================== A nova outburst occurs when, due to the gradual accumulation of H-rich material on the surface of the white dwarf, the pressure at the bottom of the accreted layer becomes sufficiently high for nuclear ignition of H to begin ( Shara 1981, Fujimoto 1982, MacDonald 1983). Since the radius R1 of the white dwarf varies approximately as M1− 1/3 for M1  ≤  1.0 M⊙, the critical pressure for ignition [eq:PIGN] *P**i**g**n* = (*G* ⋅ *M*1 ⋅ *M**i**g**n*)/(4*π**R*14) corresponds to a critical ignition mass M*i**g**n* that decreases approximately as M1− 7/3, while, for more massive WDs M*i**g**n*, decreases with a steeper slope (see Table 3 and Fig. 2). In any case, massive white dwarfs need to accrete a small amount of mass to reach the critical conditions. For a given M1 value, the critical ignition mass has been calculated by various authors and the reported values (for a given M1) differ from each other mostly in the choice of the critical pressure at the base of the accreted envelope, which varies between 2.0  ×  1019 and 6.0  ×  1019 dynes cm− 2 (see Gehrz et al. 1998; Starrfield et al. 1998; Livio and Truran 1992; Hernanz and Jose’ 1998; Truran 1998). These studies indicate that a lower limit to *M**i**g**n* (for a massive white dwarf with *M*1 close to 1.4 *M*⊙) is in the range 2.0 - 4.0  ×  10− 6 *M*⊙. However, as first pointed out by MacDonald (1983) and in more detail by Shara (1989) and Prialnik and Kovetz (1995), the behavior of a CN eruption and in particular the critical mass depends (apart from the WD mass) also on the mass accretion rate (since the WD is heated by accretion) and the temperature of the isothermal white dwarf core. Townsley and Bildsten (2004) confirmed that the earlier prescriptions for ignition, based on the simple scaling M*i**g**n*  ∝  R4 M1− 1 for a unique P*i**g**n*, are inadequate and that a system of a given M1 mass can have value of M*i**g**n* that varies by a factor of 10 for different $\dot{M}$. At the high $\dot{M}$ values typical of most CVs, the critical pressure can decrease to values as low as 3  ×  1018 dyn cm− 2, and the critical mass decreases accordingly. Thus, in a massive white dwarf accreting at significantly high rates, one expects a value for M*i**g**n* as low as 2-4  ×  10− 7 M⊙. The critical envelope mass *M**i**g**n* as a function of M1 and $\dot{M}$ can be numerically approximated to be : $$\begin{aligned} log M\_{ign}=-2.862+1.542\cdot M\_1^{-1.436}ln(1.429-M\_1)+\nonumber\\ -0.19(log\dot{M}+10)^{1.484}\end{aligned}$$ (Kahabka and van Den Heuvel 2006) where M1 is in M⊙ and $\dot{M}$ is in M⊙ *y**r*− 1. Table 5 indicates (for various M1 and hence R1 values) $\dot{M}\_{pre-OB}$, the theoretical *M**i**g**n*, the accreted mass M*a**c**c**r*=Δt ⋅  $\dot{M}\_{pre-OB}$, where Δt=22 yrs is the pre-1966 inter-outburst interval, and *τ*=*M**i**g**n*/$\dot{M}\_{pre-OB}$, that is, the expected recurrence time in years. The mass accretion rate and *M**i**g**n* were calculated using the quadratic fit for R1 as a function of M1 for massive white dwarfs derived in Sect. 6 (see also Table 3), and not the approximation R1  ∝  M1− 1/3. Table 5 clearly shows that, after allowances for errors in the estimate of $\dot{M}\_{pre-OB}$, the expected recurrence time *τ* = M*i**g**n*/$\dot{M}\_{pre-OB}$ is close to the observed value (22 yr) for M1  ∼  1.36 - 1.38 M⊙ corresponding to M*i**g**n* and M*a**c**c**r* in the range 3.0 to 6.0  ×  10− 7 *M*⊙. This agrees with the estimate of Kato (1990) that the white dwarf mass for T Pyx is between 1.3 and 1.4 M⊙, while in a subsequent paper Kato and Hachisu (1991) assumed M1=1.33 M⊙. We confirmed the results for M*i**g**n* using the approximate relation for the ignition mass as a function of M1 and $\dot{M}$ given by Kolb et al. (2001): $$M\_{ign}=4.4 \times 10^{-4}\cdot R\_1^4\cdot M\_1^{-1}\cdot\dot{M}^{-1/3}$$ (where M*i**g**n* and M1 are in M⊙, R1 is in 109 cm, and $\dot{M}$ is in 10− 9 M⊙ yr− 1) and obtained M*i**g**n* = 2.72  ×  10− 7 M⊙. A similar result is also obtained by graphical interpolation in Fig. 5 of Kahabka and Van Den Heuvel (1997), which for M1  ∼  1.37 and $\dot{M}$  ∼  2  ×  10− 8 M⊙ yr− 1 provides an ignition mass close to 3.0  ×  10− 7 M⊙ (we note, however, that these parameters fall within a region in which only weak flashes are expected). In addition, we derived P*i**g**n* as a function of $\dot{M}$ and the WD temperature from Fig. 3 of Yaron et al. (2005), and found that P*i**g**n*  ∼  4.8  ×  1018 for $\dot{M}$ = 2  ×  10− 8M⊙ yr− 1 (we note that in this regime there is a weak dependence on the WD temperature). The insertion of this value of P*i**g**n* in equation [eq:PIGN] gives M*i**g**n* = 3.8  ×  10− 7 M⊙. The consistency of all these results confirms that in the case of T Pyx the ignition mass was close to 4.5  ×  10− 7 M⊙. | | | | | | | --- | --- | --- | --- | --- | | M1 | $\dot{M}\_{pre-OB}$ | M*i**g**n* | M*a**c**c**r* | *τ* | | (M⊙) | (10− 8M⊙yr− 1) | (10− 7M⊙) | (10− 7M⊙) | (yrs) | | 1.00 | 7.32 | 77.2 | 16.10 | 105.5 | | 1.05 | 6.56 | 67.6 | 14.43 | 103.0 | | 1.10 | 5.84 | 56.2 | 12.84 | 96.2 | | 1.15 | 5.12 | 45.7 | 11.26 | 89.2 | | 1.20 | 4.40 | 36.3 | 9.68 | 82.5 | | 1.25 | 3.72 | 25.7 | 8.18 | 69.1 | | 1.30 | 3.02 | 16.2 | 6.64 | 53.6 | | 1.33 | 2.62 | 11.0 | 5.76 | 42.0 | | 1.35 | 2.34 | 7.69 | 5.15 | 32.8 | | 1.36 | 2.22 | 6.14 | 4.88 | 27.7 | | 1.37 | 2.08 | 4.73 | 4.58 | 22.7 | | 1.38 | 1.94 | 3.44 | 4.27 | 17.7 | | 1.39 | 1.80 | 2.29 | 3.96 | 12.7 | | 1.40 | 1.68 | 1.33 | 3.69 | 7.9 | In the recurrent nova T Pyx, the theoretical ignition mass and observed accreted mass are in excellent agreement for a massive WD, which provides new, independent support of the TNR theory. Theoretical expectations have received only limited confirmation because studies of the system parameters (in U Sco, T Cr B) have provided contradictory results. Comparison with the nova models of Yaron et al. (2005) ====================================================== The realization that three basic and independent parameters, the white dwarf mass M1, the temperature of its isothermal core T*c*, and the mass transfer rate $\dot{M}$, control the behavior of a CN eruption, and the improvements in computer power and codes, has enabled an increase in sophistication in simulating a nova outburst. Prialnik and Kovetz (1995) presented an extended grid of multicycle nova evolution models that have been extensively used by researchers. Each observed nova characteristic (e.g. peak luminosity, recurrence time, duration of the high luminosity phase, outburst amplitude, mass of ejecta, average outflow velocity, etc.) can be reproduced by a particular combinations of values of M1,T*c*, and $\dot{M}$. Following this earlier study, Yaron et al. (2005) extended and refined the resolution in the grid of models, including a considerable number of new parameter combinations. The full grid covers the entire range of observed characteristics, even those of peculiar objects. By matching the observed characteristics of a particular nova with its theoretical counterpart, it is therefore possible to derive information about the mass and temperature of the white dwarf and its average accretion rate. Therefore, the grids in Tables 2 and 3 of Yaron et al. (2005) can be used to determine a set of fundamental parameters that provides the closest agreement with the observed data. In the case of T Pyx, a comparison of the observed parameters for the eruption phase (the recurrence time *τ*, the outburst amplitude A, the decay rate t3, the expansion velocity v*e**x**p*) with the corresponding values in the grid clearly indicates that the closest agreement corresponds to values that are intermediate in between 1.25 and 1.40 for M1, and -8.0 and -7.0 for log$\dot{M}$; at these high $\dot{M}$ values, the effects of the core temperature T*c* are minor. Since a denser grid is not available, we calculated intermediate values by graphical and non-linear interpolation in the published values. A satisfactory agreement with the observed data comes from a model (model A) with M1=1.36 and log$\dot{M}$=7.5 ($\dot{M}$=3.2  ×  10− 8), which provides: *τ*  ∼  20, v*e**x**p*  ∼  -1200, A  ∼  7.6, t3  ∼  13, and *M**i**g**n*  ∼  5.0  ×  10− 7. Following our interpolation of the data, Yaron (2007) kindly provided us with results obtained by execution of the code for other intermediate models, that is, WD masses between 1.30 and 1.33, log$\dot{M}$ between -7 and -8, and different WD temperatures. The closest agreement with the observed values originates in a model with M1=1.33, T1= 107, log$\dot{M}$=-7.3 (model B), although the model with M1=1.30, T1= 107, and log$\dot{M}$=-7.15 (model C) is also acceptable. In Table 6, we also include, as an illustration, a model directly obtained from the published Tables 2 and 3, for M1= 1.4 M⊙ and $\dot{M}$ = 1.0  ×  10− 8 (model D). The
arxiv_0000356
Schottky Signal Modification as a Diagnostic Tool for Coherent Electron Cooling =============================================================================== Coherent electron cooling is a promising technique to cool high-intensity hadron bunches by imprinting the noise in the hadron beam on a beam of electrons, amplifying the electron density modulations, and using them to apply cooling kicks to the hadrons. The typical size for these perturbations can be on the *μ*m scale, allowing us to extend the reach of classical stochastic cooling by several orders of magnitude. However, it is crucial to ensure that the electron and hadron beams are longitudinally aligned within this same *μ*m scale. In order to provide fast feedback for this process, we discuss the extension of signal suppression to coherent electron cooling, and show in both theory and simulation that certain components of the spectral noise in the hadron beam will be predictably modified at the several percent level, which may be detected by observations of the radiation of the hadron beam. Introduction ============ In high-intensity hadron storage rings, intrabeam scattering (IBS) and beam-beam effects will degrade the beam emittance over the length of the store, limiting machine luminosity. In particular, at the planned Electron-Ion Collider (EIC), the IBS times are expected to be at the timescale of a couple hours, and so some form of strong hadron cooling is necessary to achieve the physics goals. The proposed method in this case is microbunched electron cooling (MBEC), a particular form of coherent electron cooling (CeC). This was first introduced in, and the theory has since been developed extensively in. The premise of MBEC is that the hadron beam to be cooled copropagates with an electron beam in a straight “modulator” section, during which time the hadrons will provide energy kicks to the electrons. The two beams are then separated, and the electrons pass through a series of amplifiers to change this initial energy modulation into a density modulation and amplify it. The hadrons travel through their own chicane before meeting the electrons again in a straight “kicker” section. Here, the amplified density modulations in the electron beam provide energy kicks to the hadrons. By tuning the hadron chicane so that the hadron delay in travelling from the modulator to the kicker is energy-dependent, we may arrange it so that the energy kick that the hadron receives in the kicker tends to correct initial energy offsets. If the chicane also gives the hadrons a delay dependent on their transverse phase-space coordinates and if there is non-zero dispersion in the kicker, then the transverse emittance of the hadron beam can also be cooled. In the current design of an MBEC Cooler for the EIC, the typical scale of the electron density modulations at the top energy will be  ∼ 1*μ*m. This corresponds to about 4 orders of magnitude higher bandwidth than can be achieved with microwave stochastic cooling, allowing the cooling of dense hadron bunches, but also making alignment a challenge. It is important that the hadron arrives in the kicker at the same time as the density perturbations which it had induced in the electron beam, or else it will receive entirely uncorrelated energy kicks. Comparing the  ∼ 100m distance between modulator and kicker to the  ∼ 1*μ*m density perturbations in the electron beam, we see that the transit times of the electrons and hadrons must be maintained at a level of ten parts per billion. In order to commission and operate such a system, it is necessary to have some way to quickly measure the relative alignment of the electron and hadron beams at the sub-micron scale. Directly observing cooling would require waits on the timescale of hours, which would make commissioning painful and prevent any sort of fast feedback during operations. The method proposed here is to make use of “signal modification,” an extension of the well-known signal suppression of microwave stochastic cooling to the case of MBEC cooling. The principle of this method is that after the hadron beam has received its cooling kicks, it will propagate to a “detector” where the power of the hadron beam at particular wavelengths may be measured. If the hadron beam is well-aligned with the electron beam, this will produce a predictable change in the spectral content of the hadron beam on a single-pass basis. As our model, we assume that the cooling section consists of a modulator, where the hadron beam imprints on the electrons; a kicker, where the electrons provide an energy kick back to the hadrons; and a detector, where we will observe the density spectrum in the hadron beam. See Fig. [fig:layout]. ![image](sigsup_layout.png) In Section [sec:thry], we provide a theoretical derivation of signal modification. In Section [sec:sim], we discuss simulation tools to model this process, and find good agreement with the theoretical predictions. In Section [sec:detection], we discuss what will be needed to measure such a signal experimentally. We present our outlook in Section [sec:conclude] and conclude. Note that, although this paper focuses on MBEC, the general form of these results will hold for coherent electron cooling with other amplification mechanisms. Theory ====== We derive here the theory of signal modification by directly propagating the particles themselves from the modulator to the detector with arbitrary 6-dimensional transfer matrices. Alternative derivations using the longitudinal Vlasov equation are presented in Appendix [app:alternative]. In subsection [subsec:thrydecohere], we comment on decoherence of the signal when observing a range of frequencies and in subsection [subsec:multikick], we discuss an extension where the kicks are distributed throughout the length of the kicker. We use phase-space coordinates *x*, *x*ʹ, *y*, *y*ʹ, *z*, and *η*, where the first four are the transverse positions and angles, *z* is the particle’s longitudinal position in the bunch, with positive *z* values corresponding to the head of the bunch, and *η* is the fractional energy offset of the particle. In order to characterize the cooling process, we follow and define a longitudinal wake function in the 1D approximation, such that the energy kick a hadron receives in the kicker is the convolution of this wake function with the longitudinal distribution of hadrons in the modulator. Explicitly, the fractional energy kick Δ*η* received by a hadron at longitudinal position *z* within the bunch at the kicker is given by $$\begin{aligned} \label{eqtn:wake\_def} \Delta\eta(z) = \frac{q^2}{E\_0} \int\_{-\infty}^{\infty}w(z+\Delta z-z')n(z') dz'\end{aligned}$$ where *q* is the hadron charge, *E*0 is the nominal hadron energy, *n*(*z*) is the longitudinal hadron density in the modulator, and Δ*z* is the difference in modulator-to-kicker longitudinal delay between the hadron and electron beams. We also identify a corresponding impedance $$\begin{aligned} \label{eqtn:imped\_def} Z(k) = -\frac{1}{c}\int\_{-\infty}^{\infty} w(z) e^{-ikz} dz\end{aligned}$$ In order for these simple longitudinal wakes to hold, we require coherent oscillation of the beams, and therefore the electric field at our frequency of interest must be large compared to the beam sizes. In particular, we require : $$\begin{aligned} \label{eqtn:1D} \Sigma\_{\perp} \lesssim \frac{\beta\gamma\lambda}{2}\end{aligned}$$ where Σ⊥ is the transverse beam size, *β* is the relativistic beta, *γ* is the relativistic gamma, and *λ* is the beam oscillation wavelength. Examining the values in Tab. [tab:param], we see that the largest beam size is that of the protons in the modulator and kicker (0.95mm), and the gamma factor is 293, so our approximation is good for wavelengths above  ∼ 6.5*μ*m. As we will see, we will be interested mainly in wavelengths of 6*μ*m, at the limit of applicability of the above. However, we note that the requirement of Eq. [eqtn:1D] is likely not a strict one—as it was shown in, in the case when there is not amplification, the 3D theory gives exactly the same result as the 1D model which we employ here. In order for us to neglect Landau damping in the electron beam amplifiers, one usually requires : $$\begin{aligned} \label{eqtn:landau\_damp} k \ll k\_D,\end{aligned}$$ where *k* is the wavenumber of our signal, *k**D* ≡ *ω**p*/*c**σ**β* is the Debye wavenumber, and *σ**β* is the spread in relativistic beta. This electron beam has *k**D* = 4 × 106m− 1 in the amplifiers, larger than the *k* = 106m− 1 of our microbunching. One also has to keep in mind that the concept of Landau damping, with the condition of Eq. [eqtn:landaudamp], is only valid asymptotically, after many plasma oscillations, which is not the case of the microbunching coherent cooling where the electron beam executes $\frac{1}{4}$ of the plasma oscillation in the amplifier sections. A more accurate analysis of when one can ignore the Landau damping for this case was carried out in Section IV of. In order to ensure that the microbunching is not washed out by the energy spread and divergence of the beam within any of the straight sections, we require : $$\begin{aligned} \lambda \gg \frac{2L}{\beta^3\gamma^2}\sigma\_{\eta}\end{aligned}$$ and $$\begin{aligned} \lambda \gg \frac{L}{\beta^2}\sigma\_{x'}^2\end{aligned}$$ where *σ**η* is the fractional energy spread of the beam, *σ**x*ʹ is the beam divergence, and *L* is the length of a drift. Comparison with the parameters in Tab. [tab:param] shows that these are met for both the electrons and hadrons in all the straight sections. We assume that there is no transverse correlation in the electron or hadron beam. Further discussion of the effects which this might have is available in. We focus our attention on the peak region of the electron and hadron beams and take the limit of a longitudinally infinite and uniform plasma. Since the typical wake wavelength is on the order of a few microns, and the typical bunch lengths are a few mm or longer, this is a reasonable approximation. Finally, we assume that the hadron beam enters the modulator with no correlated structures on the scale of the wake wavelength. Although such structures will be generated within the kicker, their characteristic size is on the micron scale, much less than the millimeter scale longitudinal motion per turn, washing out any memory of the kick by the time the beam enters the modulator again. As illustrated in Fig. [fig:layout], we take the hadrons to have transfer matrix *M**M**K* between modulator and kicker and *M**K**D* between kicker and detector, with the transfer matrix between modulator and detector given by *M**M**D*. We treat our particles as existing in a region of length *L*, much larger than any length scale associated with the wake function, and assume periodic boundary conditions, so that we may arbitrarily shift the limits of integration in our integrals. In this model, we also consider the full 6-dimensional evolution of the hadron beam and ignore any collective effects during beam transport except for the electron-hadron interactions characterized by the wake function, as discussed above. We write the evolution of a hadron’s position between modulator and detector as $$\begin{aligned} \label{eqtn:delays} z\_d^{(i)} = &z\_m^{(i)} + M^{MD}\_{5u}\vec{x}\_u^{(i)}\\\nonumber + &M^{KD}\_{56}\sum\_j \frac{q^2}{E\_0} w(z\_m^{(i)} + M^{MK}\_{5u}\vec{x}\_u^{(i)} + \Delta z - z\_m^{(j)})\end{aligned}$$ where *z**m*(*i*) is the longitudinal position of particle *i* in the modulator, *z**d*(*i*) is its position within the detector, and *x⃗*(*i*) are the phase-space coordinates of particles *i* in the modulator. We use the convention that the repeated “*u*” subscript refers to sums over the 5 phase-space coordinates *excluding* the longitudinal position. The summation over *j* is over all particles within the hadron beam. The first two terms of this equation describe the modulator to kicker beam evolution by a simple transfer matrix, while the final term gives the contribution to delay due to the extra energy kick our particle receives in the kicker due to the wakes of all particles in the beam, including itself. At the detector, the longitudinal density of the hadron beam is $$\begin{aligned} \label{eqtn:rho\_d} n(z) = \sum\_i \delta(z - z\_d^{(i)})\end{aligned}$$ with corresponding density in Fourier space $$\begin{aligned} \label{eqtn:rho\_tilde\_d} \tilde{n}(k) &= \int\_{-\infty}^{\infty}\sum\_i e^{-ikz} \delta(z - z\_d^{(i)}) dz\\\nonumber &=\sum\_i e^{-ikz\_d^{(i)}}\end{aligned}$$ The power in the hadron spectrum at a given wavenumber is then given by $$\begin{aligned} \label{eqtn:pwr} |\tilde{n}(k)|^2 = &\sum\_{i,a}e^{-ik\big[z\_d^{(i)} - z\_d^{(a)}\big]}\\\nonumber =N + &\sum\_{i \neq a}e^{-ik\big[z\_m^{(i)} - z\_m^{(a)} + M^{MD}\_{5u}\big(\vec{x}\_u^{(i)} - \vec{x}\_u^{(a)}\big)\big]}\\\nonumber &\times e^{-ikM^{KD}\_{56}q^2/E\_0\sum\_j w\big(z\_m^{(i)} + M^{MK}\_{5u}\vec{x}\_u^{(i)} + \Delta z - z\_m^{(j)}\big)}\\\nonumber &\times e^{ikM^{KD}\_{56}q^2/E\_0\sum\_j w\big(z\_m^{(a)} + M^{MK}\_{5u}\vec{x}\_u^{(a)} + \Delta z - z\_m^{(j)}\big)}\end{aligned}$$ where we have substituted in the expression for *z**d*(*i*) given in Eq. [eqtn:delays] and used the fact that the *i* = *a* terms in the sum are all equal to 1, giving us the *N* out front, where *N* is the number of particles in the length-*L* section of the beam. Typically, the kick from the wake is small, and so we may Taylor-expand the final two exponentials above to linear order. We thereby obtain $$\begin{aligned} \label{eqtn:pwr\_taylor} |\tilde{n}(k)|^2 &= N + \sum\_{i \neq a}e^{-ik\big[z\_m^{(i)} - z\_m^{(a)} + M^{MD}\_{5u}\big(\vec{x}\_u^{(i)} - \vec{x}\_u^{(a)}\big)\big]}\\\nonumber \times \big[1 &- ikM^{KD}\_{56}\frac{q^2}{E\_0}\sum\_j w\big(z\_m^{(i)} + M^{MK}\_{5u}\vec{x}\_u^{(i)} + \Delta z - z\_m^{(j)}\big)\\\nonumber &+ ikM^{KD}\_{56}\frac{q^2}{E\_0}\sum\_j w\big(z\_m^{(a)} + M^{MK}\_{5u}\vec{x}\_u^{(a)} + \Delta z - z\_m^{(j)}\big)\big]\end{aligned}$$ with the effect of second-order terms considered in Appendix [app:secondorder]. We now wish to take the expectation value of the beam power, requiring integrals over the 12 phase space coordinates of particles *i* and *a* and the longitudinal position of particle *j*. However, note that the dependence on *z**m*(*j*) appears only in the argument of the wake functions. If the total integral of the wake is zero, then, if particle *j* is distinct from both particles *i* and *a*, this integral will evaluate to 0. We then need only consider the terms *j* = *i* and *j* = *a* in those sums. The beam power can then be written as $$\begin{aligned} \label{eqtn:pwr\_taylor2} |\tilde{n}(k)|^2 &= N + \sum\_{i \neq a}e^{-ik\big[z\_{ia} + M^{MD}\_{5u}\big(\vec{x}\_u^{(i)} - \vec{x}\_u^{(a)}\big)\big]}\\\nonumber \times [1 &- ikM^{KD}\_{56}\frac{q^2}{E\_0} w(M^{MK}\_{5u}\vec{x}\_u^{(i)} + \Delta z)\\\nonumber &+ ikM^{KD}\_{56} \frac{q^2}{E\_0}w(M^{MK}\_{5u}\vec{x}\_u^{(a)} + \Delta z)\\\nonumber &- ikM^{KD}\_{56} \frac{q^2}{E\_0}w(z\_{ia} + M^{MK}\_{5u}\vec{x}\_u^{(i)} + \Delta z)\\\nonumber &+ ikM^{KD}\_{56} \frac{q^2}{E\_0}w(-z\_{ia} + M^{MK}\_{5u}\vec{x}\_u^{(a)} + \Delta z)]\end{aligned}$$ where we have made the definition *z**i**a* ≡ *z**m*(*i*) − *z**m*(*a*). Since we assume a homogeneous hadron bunch, *z**m*(*i*) and *z**m*(*a*) themselves are irrelevant and *z**i**a* has the same probability distribution as them. We note in the above formula that the first three terms have their only *z**i**a* dependence in the leading exponential, and so performing an average over all *z**i**a* will be zero. We therefore need only focus on the 4th and 5th terms. Approximating the *N*(*N* − 1) terms in the above sum as *N*2, the relevant integral for the 4th term is $$\begin{aligned} \label{eqtn:4th\_term} &-N^2ikM\_{56}^{KD} \int\_{-L/2}^{L/2} dz\_{ia}/L \int\_{-\infty}^{\infty} d^5\vec{x}^{(i)} d^5\vec{x}^{(a)} \rho\big(\vec{x}^{(i)}\big) \rho\big(\vec{x}^{(a)}\big)\\\nonumber &\times \frac{q^2}{E\_0}w(z\_{ia} + M^{MK}\_{5u}\vec{x}\_u^{(i)} + \Delta z) e^{-ik\big[z\_{ia} + M^{MD}\_{5u}\big(\vec{x}\_u^{(i)} - \vec{x}\_u^{(a)}\big)\big]}\end{aligned}$$ where $\rho\big(\vec{x}\big)$ the hadron phase-space density in the modulator within the 5 phase-space coordinates excluding longitudinal position. Approximating the longitudinal integral as extending from  − ∞ to ∞, using the impedance of Eq. [eqtn:impeddef], and making an appropriate change of variables to *z*ʹ ≡ *z**i**a* + *M*5*u**M**K**x⃗**u*(*i*) + Δ*z*, the longitudinal integral in Eq. [eqtn:4thterm] may be evaluated, leaving us with $$\begin{aligned} \label{eqtn:4th\_term\_2} &N^2ikM\_{56}^{KD} \frac{q^2c}{E\_0}\frac{1}{L} Z(k) e^{ik\Delta z}\\\nonumber &\times\int\_{-\infty}^{\infty} d^5\vec{x}^{(i)} d^5\vec{x}^{(a)} \rho\big(\vec{x}^{(i)}\big) \rho\big(\vec{x}^{(a)}\big)\\\nonumber &\times e^{-ik\big[M^{MD}\_{5u}\big(\vec{x}\_u^{(i)} - \vec{x}\_u^{(a)}\big) - M^{MK}\_{5u}\vec{x}\_u^{(i)}\big]}\end{aligned}$$ To perform the remaining integrals, we write the evolution of the phase-space coordinates explicitly in terms of action-angle variables and Courant-Snyder parameters at the start of the transfer matrix, finding $$\begin{aligned} \label{eqtn:action\_angle} M\_{5u}\vec{x}\_u &= M\_{51}(\sqrt{2J\_x\beta\_x}\cos(\phi\_x)+D\_x\eta)\\\nonumber &+ M\_{52}(-\sqrt{2J\_x/\beta\_x}[\sin(\phi\_x)+\alpha\_x\cos(\phi\_x)]+D'\_x\eta)\\\nonumber &+ M\_{53}(\sqrt{2J\_y\beta\_y}\cos(\phi\_y)+D\_y\eta)\\\nonumber &+ M\_{54}(-\sqrt{2J\_y/\beta\_y}[\sin(\phi\_y)+\alpha\_y\cos(\phi\_y)]+D'\_y\eta)\\\nonumber &+ M\_{56}\eta\\\nonumber &\\\nonumber &= (M\_{51}-\frac{\alpha\_x}{\beta\_x}M\_{52})\sqrt{2J\_x\beta\_x}\cos(\phi\_x)\\\nonumber &- M\_{52}\sqrt{2J\_x/\beta\_x}\sin(\phi\_x)\\\nonumber &+ (M\_{53}-\frac{\alpha\_y}{\beta\_y}M\_{54})\sqrt{2J\_y\beta\_y}\cos(\phi\_y)\\\nonumber &- M\_{54}\sqrt{2J\_y/\beta\_y}\sin(\phi\_y)\\\nonumber &+ (D\_xM\_{51} + D'\_xM\_{52} + D\_yM\_{53} + D'\_yM\_{54} + M\_{56})\eta\\\nonumber &\\\nonumber \equiv & \hat{M}\_{51}\hat{x} + \hat{M}\_{52}\hat{x}' + \hat{M}\_{53}\hat{y} + \hat{M}\_{54}\hat{y}' + \hat{M}\_{56}\hat{\eta}\end{aligned}$$ with $$\begin{aligned} \label{eqtn:m\_tilde} &\hat{M}\_{51} \equiv M\_{51}-\frac{\alpha\_x}{\beta\_x}M\_{52}\\\nonumber &\hat{M}\_{52} \equiv M\_{52}\\\nonumber &\hat{M}\_{53} \equiv M\_{53}-\frac{\alpha\_y}{\beta\_y}M\_{54}\\\nonumber &\hat{M}\_{54} \equiv M\_{54}\\\nonumber &\hat{M}\_{56} \equiv D\_xM\_{51} + D'\_xM\_{52} + D\_yM\_{53} + D'\_yM\_{54} + M\_{56}\end{aligned}$$ For a Gaussian beam, the *x̂*, *x̂*ʹ, *ŷ*, *ŷ*ʹ, and *η̂* are normally distributed with $$\begin{aligned} \label{eqtn:sigmas} &\sigma\_{\hat{x}} = \sqrt{\epsilon\_x\beta\_x}\\\nonumber &\sigma\_{\hat{x}'} = \sqrt{\epsilon\_x/\beta\_x}\\\nonumber &\sigma\_{\hat{y}} = \sqrt{\epsilon\_y\beta\_y}\\\nonumber &\sigma\_{\hat{y}'} = \sqrt{\epsilon\_y/\beta\_y}\\\nonumber &\sigma\_{\hat{\eta}} = \sigma\_{\eta}\end{aligned}$$ where the *ε* are the horizontal and vertical emittances and *σ**η* is the fractional energy spread. In this case, the remaining ten integrals in Eq. [eqtn:4thterm2] may be performed, yielding $$\begin{aligned} \label{eqtn:4th\_term\_final} &ikM\_{56}^{KD}\frac{q^2c}{E\_0} \frac{N^2}{L} Z(k) e^{ik\Delta z}\\\nonumber &\times e^{-\frac{k^2}{2} \sum\_{u\neq5} \sigma^2\_{\hat{x}\_u}\Big[\big(\hat{M}^{MD}\_{5u}\big)^2 + \big(\hat{M}^{MD}\_{5u} - \hat{M}^{MK}\_{5u}\big)^2\Big]}\end{aligned}$$ A similar procedure may be applied to the fifth term of Eq. [eqtn:pwrtaylor2], resulting in $$\begin{aligned} \label{eqtn:5th\_term\_final} &-ikM\_{56}^{KD} \frac{q^2c}{E\_0}\frac{N^2}{L} Z(-k) e^{-ik\Delta z}\\\nonumber &\times e^{-\frac{k^2}{2} \sum\_{u\neq5} \sigma^2\_{\hat{x}\_u}\Big[\big(\hat{M}^{MD}\_{5u}\big)^2 + \big(\hat{M}^{MD}\_{5u} - \hat{M}^{MK}\_{5u}\big)^2\Big]}\end{aligned}$$ Making use of the fact that, for real wakes, *Z*( − *k*) = *Z*\*(*k*), and defining *n*0 ≡ *N*/*L* as the mean linear density of the hadrons, we may sum Eqs. [eqtn:4thtermfinal] and [eqtn:5thtermfinal] and incorporate them back into Eq. [eqtn:pwrtaylor2], obtaining $$\begin{aligned} \label{eqtn:pwr\_final} |\tilde{n}(k)|^2 &= N - 2Nn\_0\frac{q^2c}{E\_0}kM\_{56}^{KD}\\\nonumber &\times[{{\rm Re}}(Z(k))\sin(k\Delta z) + {{\rm Im}}(Z(k))\cos(k\Delta z)]\\\nonumber &\times e^{-\frac{k^2}{2} \sum\_{u\neq5} \sigma^2\_{\hat{x}\_u}\Big[\big(\hat{M}^{MD}\_{5u}\big)^2 + \big(\hat{M}^{MD}\_{5u} - \hat{M}^{MK}\_{5u}\big)^2\Big]}\end{aligned}$$ Then, the fractional change in beam power as a function of electron/hadron misalignment is $$\begin{aligned} \label{eqtn:pwr\_relative} \frac{\Delta |\tilde{n}(k)|^2}{|\tilde{n}(k)|^2} &= -2n\_0\frac{q^2c}{E\_0}kM\_{56}^{KD}\\\nonumber &\times[{{\rm Re}}(Z(k))\sin(k\Delta z) + {{\rm Im}}(Z(k))\cos(k\Delta z)]\\\nonumber &\times e^{-\frac{k^2}{2} \sum\_{u\neq5} \sigma^2\_{\hat{x}\_u}\Big[\big(\hat{M}^{MD}\_{5u}\big)^2 + \big(\hat{M}^{MD}\_{5u} - \hat{M}^{MK}\_{5u}\big)^2\Big]}\end{aligned}$$ Note that if ${{\rm Re}}(Z(k)) = 0$ and ${{\rm Im}}(Z(k)) > 0$ (which is the case considered below, see Eq. [eqtn:impedapprox]), and if *M*56*K**D* > 0, then perfect alignment of the hadrons and electrons, ie Δ*z* = 0, results in Δ∣*ñ*(*k*)∣2 < 0,  corresponding to noise suppression below the shot noise in the beam. Such noise suppression has been previously studied theoretically in and observed experimentally in. Our result Eq. [eqtn:pwrrelative] is an agreement with the theoretical analysis of. Decoherence ----------- The above posits that the amount of signal modification has a purely sinusoidal dependence on the electron/hadron misalignment, ie, $$\begin{aligned} \label{eqtn:decoherence\_ideal} \frac{\Delta |\tilde{n}(k)|^2}{|\tilde{n}(k)|^2} = A(k) \cos(k\Delta z + \theta\_0)\end{aligned}$$ where *A* is some amplitude and *θ*0 is a phase, equal to 0 for an antisymmetric wake. However, the above derivation assumes an observation at a pure single frequency. If we take the more realistic case that we sample some range of frequencies with bandwidth Δ*k*, the signal amplitude will be $$\begin{aligned} \label{eqtn:decoherence\_true} \frac{\Delta |\tilde{n}(k)|^2}{|\tilde{n}(k)|^2} &\approx A(k)\frac{1}{\Delta k}\int\_{k-\Delta k/2}^{k+\Delta k/2} \cos(k'\Delta z + \theta\_0) dk'\\\nonumber &= A(k)\cos(k\Delta z + \theta\_0) \frac{\sin(\Delta k\Delta z/2)}{\Delta k\Delta z/2}\end{aligned}$$ so that the amplitude of the oscillations will decay over lengths of  ∼ 1/Δ*k*. Integrated Kicker ----------------- The above analysis assumes that the full kick to the hadrons takes place at a single point in the kicker. However, in the more realistic case where the kick is applied over the full length of the kicker, we may change Eq. [eqtn:delays] to $$\begin{aligned} &z\_d^{(i)} = z\_m^{(i)} + M^{MD}\_{5u}\vec{x}\_u^{(i)}\\\nonumber &+ \frac{1}{N\_s}\sum\_{s=1}^{N\_s}M^{K\_sD}\_{56}\sum\_j \frac{q^2}{E\_0} w(z\_m^{(i)} + M^{MK\_s}\_{5u}\vec{x}\_u^{(i)} + \Delta z - z\_m^{(j)})\end{aligned}$$ where we now split our kick in the kicker into *N**s* fractional kicks at locations *K**s*. The rest of the analysis may be carried through in the same way as before, and we arrive at $$\begin{aligned} \frac{\Delta |\tilde{n}(k)|^2}{|\tilde{n}(k)|^2} &= -2n\_0\frac{q^2c}{E\_0}\frac{1}{N\_s}\sum\_{s=1}^{N\_s}kM\_{56}^{K\_sD}\\\nonumber &\times[{{\rm Re}}(Z(k))\sin(k\Delta z) + {{\rm Im}}(Z(k))\cos(k\Delta z)]\\\nonumber &\times e^{-\frac{k^2}{2} \sum\_{u\neq5} \sigma^2\_{\hat{x}\_u}\Big[\big(\hat{M}^{MD}\_{5u}\big)^2 + \big(\hat{M}^{MD}\_{5u} - \hat{M}^{MK\_s}\_{5u}\big)^2\Big]}\end{aligned}$$ However, we have not observed any significant difference numerically between using the above equation and Eq. [eqtn:pwrrelative], with the single kick taken at the kicker center. Simulation ========== In order to check the validity and limits of the above theory, we make use of simulation. We first examine the case of a perfectly linear simulation, where the fractional energy kick to a given hadron in the kicker is simply the convolution of the wake function with the longitudinal hadron distribution in the modulator. We then turn our attention to a more detailed simulation, where the electrons and hadrons are tracked with a particle-in-cell code in order to incorporate saturation effects. In this and future sections, we consider the cooling parameters currently planned for 275 GeV protons in the EIC, listed in Tab. [tab:param]. The electron optics are assumed to be kept roughly constant within the modulator, kicker, and amplifiers through the use of focusing. Due to their much higher energy, the hadrons see the modulator and kicker as drifts, with the optics parameters specified at the center. Electron chicanes are defined from the end of one straight section to the start of the next one, while the proton *M*56 and phase advances are evaluated from modulator center to kicker center. [!hbt] | | | | --- | --- | | ***Geometry*** | | | Modulator Length (m) | 45 | | Kicker Length (m) | 45 | | Number of Amplifier Straights | 2 | | Amplifier Straight Lengths (m) | 37 | | ***Proton Parameters*** | | | Energy (GeV) | 275 | | Protons per Bunch | 6.9e10 | | Average Current (A) | 1 | | Proton Bunch Length (cm) | 6 | | Proton Fractional Energy Spread | 6.8e-4 | | Proton Emittance (x/y) (nm) | 11.3 / 1 | | Horizontal/Vertical Proton Betas in Modulator (m) | 39 / 39 | | Horizontal/Vertical Proton Dispersion in Modulator (m) | 1 / 0 | | Horizontal/Vertical Proton Dispersion Derivative in Modulator | -0.023 / 0 | | Horizontal/Vertical Proton Betas in Kicker (m) | 39 / 39 | | Horizontal/Vertical Proton Dispersion in Kicker (m) | 1 / 0 | | Horizontal/Vertical Proton Dispersion Derivative in Kicker | 0.023 / 0 | | Proton Horizontal/Vertical Phase Advance (rad) | 4.79 / 4.79 | | Proton M56 between Centers of Modulator and Kicker (mm) | -2.26 | | ***Electron Parameters*** | | | Energy (MeV) | 150 | | Electron Bunch Charge (nC) | 1 | | Electron Bunch Length (mm) | 7 | | Electron Peak Current (A) | 17 | | Electron Fractional Slice Energy Spread | 1e-4 | | Electron Normalized Emittance (x/y) (mm-mrad) | 2.8 / 2.8 | | Horizontal/Vertical Electron Betas in Modulator (m) | 40 / 20 | | Horizontal/Vertical Electron Betas in Kicker (m) | 4 / 4 | | Horizontal/ Vertical Electron Betas in Amplifiers (m) | 1 / 1 | | M56 in First Electron Chicane (mm) | 5 | | M56 in Second Electron Chicane (mm) | 5 | | M56 in Third Electron Chicane (mm) | -11 | | ***Cooling Times*** | | | Horizontal/Vertical/Longitudinal IBS Times (hours) | 2.0 / - / 2.9 | | Horizontal/Vertical/Longitudinal Cooling Times (hours) | 1.8 / - / 2.8 | [tab:param] Linear Simulation ----------------- We simulate a 50*μ*m length of the hadron beam at peak electron and hadron beam currents. Since the bunch lengths of the hadron and electron beams are a few cm and mm, respectively, we may assume constant longitudinal beam densities in our region of interest, and take periodic boundary conditions. We start by seeding 1 million hadron macroparticles in the modulator, representing 23 million real hadrons. In order to match the noise statistics in the real beam at both the modulator and detector, we perform the seeding with sub-Poisson noise, as follows. We create a 2-dimensional grid in the hadron phase space. One coordinate represents the hadron’s longitudinal position at the center of the modulator, and the other represents its delay in travelling from the modulator center to the detector. Using the notation of Eq. [eqtn:mtilde], this delay may be expressed as $$\begin{aligned} \label{eqtn:rms\_delay} \sigma^2\_{\Delta z} = &\hat{M}\_{51}^2\beta\_x\epsilon\_x + \hat{M}\_{52}^2\epsilon\_x/\beta\_x\\\nonumber +&\hat{M}\_{53}^2\beta\_y\epsilon\_y + \hat{M}\_{54}^2\epsilon\_y/\beta\_y\\\nonumber + &\hat{M}\_{56}^2\sigma\_{\eta}^2\end{aligned}$$ This grid extends in 32768 uniform steps over the 50*μ*m length of beam to be simulated, and in 100 uniform steps between  ± 5*σ*Δ*z* in the hadron delay. We assume that the hadrons are distributed uniformly longitudinally and that they have a Gaussian distribution (mean 0, standard deviation *σ*Δ*z*) with respect to the delay. Multiplying this distribution by the total number of hadrons expected within a 50*μ*m slice of beam, we arrive at the expected number of hadrons in each bin. We then add on appropriate pseudo-random Poisson noise to obtain the number of real hadrons in each bin. To assign the macroparticles, we iterate over all the bins, with the loop over position nested inside the loop over delays. We assign each bin in turn a number of macroparticles equal to the number of real hadrons in that bin divided by the number of real hadrons represented by each macroparticle. Each macroparticle is given a position and delay distributed uniformly within the bin. Underflow and overflow are carried to the next bin. Each macroparticle is assigned a horizontal angle (a pseudo-random number between 0 and 2*π*) and a horizontal action (a pseudo-random number drawn from an exponential distribution of mean *ε**x*). Similar procedures are used to obtain the vertical action and angle. The fractional energy error is chosen to provide the hadron with its assigned delay. The actions, angles, and energy errors, together with the optics at the center of the modulator, allow the construction of the particle phase-space coordinates. In the case where the modulator-to-detector transfer matrix is an identity matrix, we assign the hadron fractional energy offset directly, replacing *σ*Δ*z* in the above with *σ**η*. We obtain the position of each hadron macroparticle in the kicker by using the modulator-to-kicker transfer matrix in the design optics, and save this pre-kicker distribution. We then apply a longitudinal shift to all hadrons equally, corresponding to a longitudinal electron-hadron misalignment. We assign an energy kick to each macroparticle by convolving the ideal wake function with the longitudinal hadron density distribution in the modulator. The ideal wake function is computed using the procedures of for the case of two amplification straights and unmatched Gaussian electron and hadron beams with arbitrary horizontal and vertical beam sizes. A plot of this idealized wake is shown in black in Fig. [fig:wakesboth]. Note that this wake is antisymmetiric, which means that ${{\rm Re}}(Z(k)) = 0$ and the first term in the square brackets on the right hand side of Eq. [eqtn:pwrrelative] vanishes. We then translate the hadrons to the detector element using a kicker-to-detector transfer matrix equal to the inverse of the modulator-to-kicker transfer matrix. We chose this matrix because we had found empirically that it gives a sizable signal and thereby reduces the impact of numerical noise. This will be further justified in subsection [subsec:optimalobs]. Finally, we perform a fast Fourier transform (FFT) on the hadron linear density distribution to obtain the amplitude of the spectrum. Squaring these values gives us the spectral power, and we compute the fractional change from the initial spectral power of the hadrons. We repeat this process from the saved pre-kicker distribution, but apply different longitudinal misalignments. Finally, we repeat this whole procedure from the beginning for 100 different random noise seeds and take the average fractional change in spectral power for each delay value. ![image](wakes_both_corrected.png) We plot these simulated results and the theory prediction in Fig. [fig:simlinear], finding excellent agreement. Note that the theoretical wake has a constant vertical shift of  ∼ 0.04 due to higher-order terms, as discussed in Appendix [app:secondorder]. ![image](true_impedance_linear_correct.png) Nonlinear Simulation -------------------- As a more realistic model, we explicitly track the hadrons and electrons step-by-step through the modulator, kicker, and amplifiers. Much of the process for the hadrons is the same as in the linear case, described in subsection [subsec:simlin]. The main difference is that, rather than simply convolve the wake with the hadron density distribution, we explicitly include the electron beam and model its interactions with itself and the hadrons in a custom particle-in-cell (PIC) code. This is described in, and a summary of its operation and changes is given below. The electrons are initialized analogously to the hadrons with 10 real electrons per macroparticle, but with only their longitudinal coordinates described. As such, the binning is done as a function of the electron longitudinal position and its fractional energy offset. The hadrons are initialized at the center of the modulator as described in subsection [subsec:simlin] using the modulator optics of Tab. [tab:param], then back-propagated to the start of the modulator so as to have the correct initial distribution. We track the electrons and hadrons through the modulator in 2 and 6 phase-space dimensions, respectively, using a simple kick-drift model. We model the inter-particle interactions using the disc model of, and convolve the associated force function with the hadron and electron longitudinal density distributions to obtain the kicks to each species. We then apply the transfer matrix of a short drift length to update the particle positions. This process is repeated through the length of the modulator. The electron and hadron chicanes are modelled as simple transfer matrices. In bringing the hadrons to the kicker, the modulator-to-kicker transfer matrix is multiplied by the inverse transfer matrices of half the modulator and kicker drifts, so as to have the correct transfer matrix between the element centers, while the electrons are explicitly tracked through the amplification section (3 chicanes and 2 straights), using the same PIC code. In the kicker, both species are again tracked with the PIC code to get accurate kicks to the hadrons. In bringing the hadrons to the detector, we multiply the kicker-to-detector transfer matrix by half the inverse transfer matrix of the kicker drift in order to have the correct transfer matrix from the kicker center to the detector. Using the transfer matrix from the center of the kicker is justified by the fact that this is the average location where the kick will take place. In comparing to theory, we reduce the wake amplitude to account for saturation in the electron beam, as described in, and shown in red in Fig. [fig:wakesboth]. We incorporate the  ∼ 0.01 offset of Appendix [app:secondorder] into Eq. [eqtn:pwrrelative] and compare with the simulation results, as shown in Fig. [fig:simnonlinear]. Good agreement is observed. ![image](true_impedance_nonlinear_transverse_motion_in_mk.png) Detection of the Signal ======================= So far, we have focused on the derivation and validation of the signal modification theory. However, for this to be useful, we must be able to physically detect it. We therefore look for the optimal kicker-to-detector transfer matrix and wavelength at which to observe the signal, and then examine the possibility of detecting the hadron beam density modulation in the radiation of an EIC dipole. Optimal Parameters ------------------ In order to perform the optimization, it is helpful to make use of a simplified, analytic form for the wake function. In, it was proposed to approximate the wake using a simple fit function equal to a sine wave with Gaussian decay. Making slight changes to the parameters, we may write $$\begin{aligned} \label{eqtn:wake\_approx} w(z) = A \sin(\kappa z)e^{-z^2/2\lambda^2}\end{aligned}$$ where *A*, *κ*, and *λ* are fit parameters roughly corresponding to the wake amplitude, wavenumber, and falloff-distance, respectively. We find that this function fits our wake with saturation fairly well, as shown in Fig. [fig:wakefitsat], and we arrive at values of *A* = 78.4MV/pC, *κ* = 1.031/*μ*m, and *λ* = 2.768*μ*m. ![image](fit_wake_corrected_new.png) The corresponding impedance of the simplified wake of Eq. [eqtn:wakeapprox] is given by $$\begin{aligned} \label{eqtn:imped\_approx} Z(k) = -\frac{A\lambda i}{2c}\sqrt{2\pi}\Big[e^{-\lambda^2(k+\kappa)^2/2} - e^{-\lambda^2(k-\kappa)^2/2}\Big]\end{aligned}$$ Putting the above into Eq. [eqtn:pwrrelative], we obtain an expression for signal modification amplitude: $$\begin{aligned} \label{eqtn:pwr\_relative2} \frac{\Delta |\tilde{n}^2(k)|^2}{|\tilde{n}^2(k)|^2} &= \frac{q^2}{E\_0}A\lambda n\_0kM\_{56}^{KD}\sqrt{2\pi}\cos(k\Delta z)\\\nonumber &\times \Big[e^{-\lambda^2(k+\kappa)^2/2} - e^{-\lambda^2(k-\kappa)^2/2}\Big]\\\nonumber &\times e^{-\frac{k^2}{2} \sum\_{u\neq5} \sigma^2\_{\tilde{x}\_u}\bigg[\Big(\tilde{M}^{MD}\_{5u}\Big)^2 + \Big(\tilde{M}^{MK}\_{5u} - \tilde{M}^{MD}\_{5u}\Big)^2\bigg]}\end{aligned}$$ Examining the above, we see that once we have fixed the cooling parameters, including the transfer matrix from the modulator to the kicker, the only variables which we can alter which will have any impact on the signal modification are the wavenumber of the density modulation we wish to observe and the *M*5*u* transfer elements from the kicker to the detector. (Note that these parameters alone are also sufficient to specify the *M*5*u* transfer matrix elements from the modulator to the detector.) If we have no vertical dispersion, *M*53*M**K* = *M*54*M**K* = 0, and it is easy to see from Eq. [eqtn:pwrrelative2] that the corresponding elements in the kicker-to-detector transfer matrix should also be equal to 0. The magnitude of the signal amplitude in Eq. [eqtn:pwrrelative2] may be easily maximized with respect to *k*, *M*51*K**D*, *M*52*K**D*, and *M*56*K**D*, obtaining values of *M*51*K**D* =  − 8.44 × 10− 4, *M*52*K**D* =  − 3.00 × 10− 2m, *M*56*K**D* = 2.36 × 10− 3m, and *k* = 1.04 × 106/m, with Eq. [eqtn:pwrrelative2] taking on the numerical value $\frac{\Delta |\tilde{n}^2(k)|^2}{|\tilde{n}^2(k)|^2} = -0.22\cos(k\Delta z)$. (For comparison, the simulations in Section [sec:sim], setting *M**K**D* to the inverse of *M**M**K* were made with near-optimal parameters *M*51*K**D* =  − 7.93 × 10− 4, *M*52*K**D* =  − 2.86 × 10− 2m, *M*56*K**D* = 2.26 × 10− 3m, and *k* = 1.06 × 106/m.) Drift Plus Dipole ----------------- In the current design of the cooler, it is useful to have as much space as possible dedicated to the modulator, kicker, and amplifiers, making it difficult to also fit in an optimized transfer line between the kicker and detector to achieve optimal signal modification. We therefore consider the case where we simply have a drift after the kicker plus a main arc dipole (field strength of 3.782T, bending radius of 243m). We assume a hard-edge dipole model and that the beam path is normal to the pole face, so that edge effects may be ignored. We also use the fit wake of Fig. [fig:wakefitsat]. For convenience, we define the amplitude of signal modification so that positive values correspond to a reduction in intensity when the electrons and hadrons are aligned. We take an observation point at the start of the dipole and perform a scan of the amplitude of signal modification as a function of the *M*56 value of a drift transfer matrix between the kicker center and the observation point for several wavelengths of hadron density perturbations, as shown in Fig. [fig:scanwavelengthdiscrete]. We find an optimal wavelength of  ∼ 6*μ*m, but the 1.2mm *M*56 value corresponds to over 100m of drift, which is too long for the available space. We therefore focus on more reasonable parameters and scan the signal modification amplitude as a function of moderate kicker-to-dipole drift lengths and observation wavelengths, as shown in Fig. [fig:scanwavelength]. Note that these drifts are defined between the end of the kicker and the start of the dipole, and so an extra *M*56 contribution from half the kicker length is also included in the transfer matrix. We find an optimal observation wavelength of 6*μ*m over a range of drift lengths. We then fix a 6*μ*m observation wavelength and scan the signal modification amplitude as a function of drift length and bend angle within the dipole, as shown in Fig. [fig:scandipole]. We find that it is ideal to observe the radiation near the start of the dipole and that, within the region of interest, increased drift lengths result in increased signal modification, with amplitudes of a few percent. ![image](drift_scan_discrete.png) ![image](drift_wavelength_scan.png) ![image](drift_dipole_scan_corrected.png) Note that the inclusion of the *M*51 and *M*52 terms is vital to the correct understanding of signal modification. Fig. [fig:scandriftnegative] shows the result of scanning the amplitude of signal modification as a function of the *M*56 term between the kicker center and observation point. It would appear that we could use a dipole to generate negative *M*56 and see about half the signal modification we could obtain with a drift. However, the dipole will also generate non-trivial *M*51 and *M*52 terms. Performing a scan with the more realistic dipole generates the plot seen in Fig. [fig:scandipolepure]. Virtually no signal modification is observed, and even then only for the small positive *M*56 values generated near the very start of the dipole. ![image](drift_scan_discrete_negative.png) ![image](pure_dipole_v_theta.png) Fringe Fields ------------- Since we are focused on the radiation at the start of the dipole, careful consideration of the fringe fields is important. We take a model of a dipole consisting of two magnetized poles of uniform magnetization *B*0/*μ*0 separated by a distance 2*w* and of longitudinal length *L* which extend infinitely far in the *x* direction and are each semi-infinite in *y*. See Fig. [fig:dipolegeometry]. It can be analytically shown that, with the origin defined as the dipole center, the fields visible to the beam are given by: ![image](dipole_model.png) $$\begin{aligned} B\_y(y,z) = \frac{B\_0}{2\pi}\Bigg[&\tan^{-1}\bigg(\frac{L/2+z}{w+y}\bigg)\\\nonumber + &\tan^{-1}\bigg(\frac{L/2-z}{w+y}\bigg)\\\nonumber +&\tan^{-1}\bigg(\frac{L/2+z}{w-y}\bigg)\\\nonumber +&\tan^{-1}\bigg(\frac{L/2-z}{w-y}\bigg)\Bigg] \end{aligned}$$ $$\begin{aligned} B\_z(y,z) = \frac{B\_0}{4\pi}\Bigg[&\ln\bigg(\frac{(w+y)^2 + (L/2+z)^2}{(w-y)^2 + (L/2+z)^2}\bigg)\\\nonumber +&\ln\bigg(\frac{(w-y)^2 + (L/2-z)^2}{(w+y)^2 + (L/2-z)^2}\bigg)\Bigg]\end{aligned}$$ We choose the dimensions of the RHIC dipoles, with *w* = 4cm and *L* = 9.441m. As before, *B*0 = 3.782T. Although this is not an exact fit to any particular dipole, it provides a proof-of-principle study. In order to counteract the fringe fields, so that we arrive at a reasonable transfer matrix inside the main arc dipole where the field is strong enough to produce detectable radiation, we also include a 1m long screen dipole with *w* = 4cm and a weak field opposite to that of the main arc dipole (*B*0 =  − 0.40T). We check that this produces reasonable results by altering the simulation used in subsection [subsec:simlin] to bring the hadrons from the kicker center to a point 5m after the end of the kicker through the use of a 27.5m drift transfer matrix, then track them with the realistic fields through a drift of 4m, the 1m screen dipole, and 91.5mm into the main arc dipole. The total integrated field seen by the proton making this journey is 0.13T-m, corresponding to a bend of 0.14 mrad. Due to the longer computation time, only 8 random seeds are used. We compare this to the theoretical prediction of a 32.5m drift after the kicker center (corresponding to 10m past the end of the kicker) plus a 0.14mrad bend in the main arc dipole. The results are shown in Fig. [fig:sigsupwrealdipole]. Good agreement is observed. ![image](linear_sigsup_real_fields.png) Radiation --------- So far, we have focused entirely on the changes to the hadron density spectrum. However, this cannot be observed directly. Instead, we will monitor the radiation produced by the hadron beam as it moves through a dipole. In the far-field approximation, the electric field from an accelerated charge is given by $$\begin{aligned} \label{eqtn:jackson} \vec{E}(\vec{x},t\_0) = \frac{q}{4\pi\epsilon\_0c}\Bigg[\frac{1}{|\vec{r}\_{0} - \vec{r}|}\frac{\hat{n}\times\big[(\hat{n}-\vec{\beta})\times\dot{\vec{\beta}}\big]}{\big(1-\vec{\beta}\cdot\hat{n}\big)^3}\Bigg]\_t\end{aligned}$$ where the observation point is *r⃗*0, the observation time is *t*0, and the hadron position, *r⃗*, relativistic beta, *β⃗*, and its derivative, $\dot{\vec{\beta}}$, are evaluated at the retarded time, *t*, with *t*0 = *t* + ∣*r⃗*0 − *r⃗*∣/*c*. Since we had seen in subsection [subsec:dipoleobs] that it is useful to observe the signal modification immediately at the start of the dipole, we consider the edge radiation. We write the frequency-space electric field $$\begin{aligned} \label{eqtn:fft\_E} \vec{E}(\vec{x},\omega) &= \int\_{-\infty}^{\infty} \vec{E}(\vec{x},t\_0) e^{i\omega t\_0} dt\_0\\\nonumber &= \frac{q}{4\pi\epsilon\_0c}\int\_{-\infty}^{\infty}\Bigg[\frac{1}{|\vec{r}\_{0} - \vec{r}|}\frac{\hat{n}\times\big[(\hat{n}-\vec{\beta})\times\dot{\vec{\beta}}\big]}{\big(1-\vec{\beta}\cdot\hat{n}\big)^2}\Bigg]\_t e^{i\omega t\_0} dt\end{aligned}$$ where we have simplified using the fact that *d**t*0/*d**t* = 1 − *β⃗* ⋅ *n̂*. The intensity of the radiation is given by $$\begin{aligned} \label{eqtn:intensity} S = \alpha \frac{\Delta \omega}{\omega} \frac{I\_h}{q} \bigg(\frac{2\epsilon\_0c}{e}\bigg)^2\hbar\omega\Big|\vec{E}(\vec{x},\omega)\Big|^2\end{aligned}$$ where *α* is the fine-structure constant, Δ*ω* is the bandwidth, and *I**h* is the average hadron beam current (1A for the EIC). We track a test charge through the dipole fields described in subsection [subsec:fringefields], and use its path to integrate Eq. [eqtn:fftE] at various locations on a transverse grid 10m downstream of the entrance pole face of the main arc dipole, where we might place a camera to detect the emitted radiation. We assume a 10% bandwidth for this detector. The result of this is shown in Fig. [fig:radiationdetectionscan]. We see that we may achieve intensities of  ∼ 700*μ**W*/*m*2. We repeat the above procedure, but model the effect of signal modification by multiplying the field amplitude produced by each step of the tracker by $\sqrt{1 + A(s)\sigma\_{z,e}/\sigma\_{z,h}}$, where *A*(*s*) is the amplitude of signal suppression from Eq. [eqtn:pwrrelative]. We assume 6*μ*m light and use the fit wake of Fig. [fig:wakefitsat] (which includes saturation effects) and a transfer matrix through the relevant drift and concatenated dipole fields to reach an arbitrary location after the kicker. We assume 10m between the end the the kicker and the start of the main arc dipole, with a 1m screen dipole immediately in front of the main arc dipole. *σ**z*, *e*/*σ**z*, *h* is the ratio of the electron to hadron bunch lengths, since only the central hadrons will see the effect of the cooling. We subtract the intensity shown in Fig. [fig:radiationdetectionscan] from that of this modified simulation in order to obtain the intensity of our signal. This is shown in Fig. [fig:radiationintensitychange]. The fractional change is  ∼ 0.3%. Integration over 1mm2 for 1 second on-axis will provide 2.0 × 1010 photons from the full bunch, while signal modification in the core protons will reduce this number by 7.1 × 107. The error on the former should follow Poisson statistics, with a numerical value of $\sqrt{2.0\times10^{10}} \approx 1.4\times10^5$ so that we obtain a signal-to-noise ratio of  ∼ 500. However, inefficiencies in the detector will reduce this, so that we may need to integrate for a longer time or over a wider area. ![image](real_dipole_intensity.png) ![image](real_dipole_intensity_change.png) We must also contend with background thermal radiation, whose intensity is given by the well-known blackbody radiation formula: $$\begin{aligned} \label{eqtn:thermal} S = \frac{\hbar}{4\pi^2c^2}\frac{\omega^3}{e^{\hbar\omega/k\_BT}-1} \Delta \omega\end{aligned}$$ Within this same 10% bandwidth at a temperature of 300K, we have a thermal intensity of 10*W*/*m*2, which would swamp our signal. It is therefore necessary to operate at liquid nitrogen temperature (77.29K), where the thermal background is only 1*n**W*/*m*2. Conclusion ========== We have derived an expression for signal modification for the case of a CEC cooler. We have also performed simulations of this process, finding good agreement with theoretical expectations. Finally, we have shown that such a signal may be experimentally observed at the few parts-per-thousand level with a cryo-cooled infrared camera set up to observe existing dipole radiation. As the design of the cooler matures, it will be necessary to incorporate the needs of this radiation diagnostic, at least ensuring the existence of a sufficient drift length between the end of the kicker and the first downstream dipole. We will also investigate the possibility of separating the radiation from the core protons from that of the rest of the bunch, which would increase the fractional change in radiation power by a factor of nearly 10, corresponding to the the ratio of hadron to electron bunch lengths. This may require operation at a shorter wavelength. Although we have provided a compelling proof-of-principle argument, we will also need to more carefully consider the details of the fringe fields of the dipole and appropriate compensation. Acknowledgements ================ This work was supported by Brookhaven Science Associates, LLC under Contract No. DE-SC0012704 with the U.S. Department of Energy, and by the Department of Energy, contract DE-AC03-76SF00515. We would like to thank the members of the EIC strong hadron cooling group for useful discussions. Alternative Derivations of Signal Modification ============================================== We present two alternative derivations of the theory of signal modification in the purely longitudinal case. In subsection [subsec:thrymike], we adopt a hybrid approach where we integrate the Vlasov equation in the time domain and make use of single-particle statistics. In subsection [subsec:thrystupakov], we back-propagate the phase-space density from the detector to the modulator, in line with the conventions of. In the limit where the hadron delay depends only on its energy offset, ie *M*51 = *M*52 = *M*53 = *M*54 = 0, these results agree with Eq. [eqtn:pwrrelative]. Integration of Time-Domain Vlasov Equation ------------------------------------------ We consider the hadron beam entering the modulator as a constant background current, *I*0, with a random Schottky signal, *I**s*(*s*, *t*), overlaid on top, where *s* is the longitudinal position within the cooler and *t* is the time of measurement. After passage through the kicker, the beam will pick up a coherent response current, *I**c*(*s*, *t*), due to the action of the wake, so that the total beam current at position *s* and time *t* is given by $$\begin{aligned} \label{eqtn:mike\_current} I(s,t) = I\_0 + I\_s(s,t) + I\_c(s,t)\end{aligned}$$ Since we assume only random noise in the hadron beam when it enters the modulator, we take *I**c*(*s*, *t*) = 0 for *s* < *s**K*, the kicker location. We also consider the respective phase-space densities, *ψ*(*η*, *s*, *t*), of these three currents, with $$\begin{aligned} \label{eqtn:mike\_phis} I(s,t) = q\int\_{-\infty}^{\infty} \dot{s}(\eta,s) \psi(\eta, s, t) d\eta\end{aligned}$$ where $\dot{s}(\eta,s)$ is the local speed of the hadrons as a function of fractional energy deviation, *η*, and longitudinal position in the cooler. The fluctuating part of the Schottky current can be written as $$\begin{aligned} \label{eqtn:mike\_schottky} I\_s(s,t) = \sum\_{k=-\infty}^{\infty}q \delta(t - t\_k - \tau\_0(s) - \eta\_k\tau\_1(s)) - I\_0\end{aligned}$$ where *t**k* is the time that particle *k* arrives at the modulator, at position *s* = 0, *τ*0(*s*) is the time for an on-energy particle to travel from the modulator to point *s*, *τ*1(*s*) is the energy-dependence of this transit time, and *η**k* is the fractional energy deviation of particle *k*. We define a frequency-space version of the Schottky current, *Ĩ**s*(*s*, *ω**m*), with the forward and inverse Fourier transforms $$\begin{aligned} \label{eqtn:mike\_fourier} \tilde{I}\_{s}(s,\omega\_m) &= \frac{1}{T}\int\_{\tau\_0(s)-T/2}^{\tau\_0(s)+T/2} e^{i\omega\_mt}I\_s(s,t)dt\\\nonumber I\_s(s,t) &= \sum\_{m=-\infty}^{\infty}e^{-i\omega\_m t}\tilde{I}\_{s}(s,\omega\_m)\end{aligned}$$ where *T* is chosen to be large compared to typical values of *η**k**τ*1(*s*) and the wake wavelength divided by *β**c*, so that we may ignore the precise behavior at the endpoints, and where *ω**m* = 2*π**m*/*T*. Particular particles near the edges of this time interval may enter or leave it during the passage through the cooler, but the relatively large size of *T* ensures that they are a negligible fraction of the total, and can be safely ignored. If we assume *N* particles arriving in the cooling system within time *T* at times *t**a*, we obtain for *m* ≠ 0 $$\begin{aligned} \label{eqtn:mike\_I\_sm} \tilde{I}\_{s}(s, \omega\_m) = \frac{q}{T}\sum\_{a=1}^{N} e^{i\omega\_m[t\_a + \tau\_0(s) + \eta\_a\tau\_1(s)]}\end{aligned}$$ In the kicker, each particle receives a kick, defined by a time-dependent voltage $$\begin{aligned} \label{eqtn:mike\_voltage} V(t) &= \int\_{-\infty}^{\infty}w\_t(t - \tau - \hat{\tau}\_0)I\_s(0, \tau)d\tau\end{aligned}$$ where *τ̂*0 is the delay of the electron beam in travelling between modulator and kicker and *w**t*(*t*) is related to the wake function of Eq. [eqtn:wakedef] by $$\begin{aligned} \label{eqtn:def\_wake\_time} w\_t(t) = w(-\beta c t)\end{aligned}$$ and has an impedance, *Z**t*(*ω*), related to the impedance of Eq. [eqtn:impeddef] by $$\begin{aligned} \label{eqtn:def\_imped\_time} Z\_t(\omega) = \int\_{-\infty}^{\infty}w\_t(t)e^{i\omega t}dt = -\frac{Z(\omega/\beta c)}{\beta}\end{aligned}$$ After passing through the kicker, the hadron phase-space density also includes a perturbation described by the coherent response function, obeying the linearized Vlasov equation: $$\begin{aligned} \label{eqtn:mike\_vlasov} 0 = &\frac{\partial\psi\_c(\eta,s,t)}{\partial t} + \frac{\partial}{\partial s}(\dot{s}(\eta,s)\psi\_c(\eta,s,t))\\\nonumber &+ \frac{qV(t)\delta(s-s\_K)\dot{s}(\eta,s)}{E\_0}\frac{\partial}{\partial \eta}(\psi\_0(\eta))\end{aligned}$$ where we have explicitly written the $\dot{\eta}$ as a delta function kick from the kicker voltage. We have also pulled the factor of $\dot{s}$ out of the derivative with respect to energy since we operate in the ultra-relativistic regime and so assume that $\dot{s}$ is independent of particle energy within the kicker straight section. Since there is no coherent response for *s* < *s**K*, integrating the above equation from *s**K*− to *s**K*+ yields $$\begin{aligned} \label{eqtn:mike\_psic\_real} \psi\_c(\eta,s\_K^+,t) = -\frac{qV(t)}{E\_0}\frac{d\psi\_0(\eta)}{d\eta}\end{aligned}$$ which may be transformed to frequency space as $$\begin{aligned} \label{eqtn:mike\_psic\_fourier} \tilde{\psi}\_{c}(\eta,s\_K^+,\omega\_m) = -\frac{q\tilde{V}(\omega\_m)}{E\_0}\frac{d\psi\_0(\eta)}{d\eta}\end{aligned}$$ where we have defined a frequency-space voltage, *Ṽ*(*ω**m*), and phase-space density, *ψ̃**c*(*η*, *s*, *ω**m*), using the same procedure as in Eq. [eqtn:mikefourier]. Integration of Eq. [eqtn:mikevlasov] to arbitrary *s* > *s**K* yields $$\begin{aligned} \label{eqtn:mike\_int\_psi} \tilde{\psi}\_{c}(\eta,s,\omega\_m) &= \tilde{\psi}\_{c}(\eta,s\_K^+,\omega\_m)\\\nonumber &\times e^{i\omega\_m[\tau\_0(s) + \eta \tau\_1(s) - \tau\_0(s\_K) - \eta \tau\_1(s\_K)]}\end{aligned}$$ Integrating over particle energy to get the coherent current response, we find $$\begin{aligned} \label{eqtn:mike\_Ic} &\tilde{I}\_{c}(s,\omega\_m) = -\frac{q^2\tilde{V}(\omega\_m)\dot{s}\_0}{E\_0}\\\nonumber &\times\int\_{-\infty}^{\infty} \frac{d\psi\_0(\eta)}{d\eta}e^{i\omega\_m[\tau\_0(s) + \eta \tau\_1(s) - \tau\_0(s\_K) - \eta \tau\_1(s\_K)]} d\eta\\\nonumber &=\frac{q^2\tilde{V}(\omega\_m)\dot{s}\_0}{E\_0}i\omega\_m[\tau\_1(s) - \tau\_1(s\_K)]\\\nonumber &\times\int\_{-\infty}^{\infty} \psi\_0(\eta)e^{i\omega\_m[\tau\_0(s) + \eta \tau\_1(s) - \tau\_0(s\_K) - \eta \tau\_1(s\_K)]} d\eta\\\nonumber &=\tilde{I}\_{s}(s=0,\omega\_m)Z\_t(\omega\_m)e^{i\omega\_m\hat{\tau}\_0}\frac{q^2\dot{s}\_0}{E\_0}i\omega\_m[\tau\_1(s) - \tau\_1(s\_K)]\\\nonumber &\times\int\_{-\infty}^{\infty} \psi\_0(\eta)e^{i\omega\_m[\tau\_0(s) + \eta \tau\_1(s) - \tau\_0(s\_K) - \eta \tau\_1(s\_K)]} d\eta\end{aligned}$$ where we have assumed that the change in $\dot{s}(\eta,s)$ due to the beam energy spread is small relative to the speed of an on-energy particle, $\dot{s}\_0$, and made use of Eq. [eqtn:mikeIsm], [eqtn:mikevoltage], and [eqtn:defimpedtime] to write the voltage in the kicker in terms of the frequency-domain Schottky current in the modulator and the impedance *Z**t*(*ω**m*). We use the above to define an effective gain function, *Ĩ**c*(*s*, *ω**m*) = *G*(*s*, *ω**m*)*Ĩ**s*(*s* = 0, *ω**m*). For *m* ≠ 0, so that we may ignore the base current, *I*0, the total current is $$\begin{aligned} \label{eqtn:mike\_It} \tilde{I}\_{t}(s,\omega\_m) = G(s,\omega\_m)\tilde{I}\_{s}(s=0,\omega\_m) + \tilde{I}\_{s}(s,\omega\_m)\end{aligned}$$ We define the signal modification by computing $$\begin{aligned} \label{eqtn:mike\_sigsup} \langle|\tilde{I}\_{t}(s,\omega\_m)|^2\rangle &= \langle|\tilde{I}\_{s}(s,\omega\_m)|^2\rangle (1 + |G(s,\omega\_m)|^2)\\\nonumber &+ 2 Re \langle G(s,\omega\_m)\tilde{I}\_{s}(s=0,\omega\_m)\tilde{I}^\*\_{s}(s,\omega\_m)\rangle\end{aligned}$$ where the angle brackets denote the ensemble average and the star refers to complex conjugation. Using Eq. [eqtn:mikeIsm], we can evaluate $$\begin{aligned} \label{eqtn:mike\_I\_corr} \langle \tilde{I}\_{s}&(s=0,\omega\_m)\tilde{I}^\*\_{s}(s,\omega\_m)\rangle\\\nonumber &=\frac{q^2}{T^2}\sum\_{a=1}^N\sum\_{b=1}^{N} e^{i\omega\_m[t\_a - t\_b - \tau\_0(s) - \eta\_b\tau\_1(s)]}\\\nonumber & = \frac{q^2}{T^2}N \Big\langle e^{-i\omega\_m[\tau\_0(s) + \eta\tau\_1(s)]}\Big\rangle\_{\eta}\end{aligned}$$ where we only kept the diagonal terms due to the uncorrelated nature of the initial particle distribution. Similarly, we find $$\begin{aligned} \label{eqtn:miked\_self\_corr} \langle |I\_{s}(s,\omega\_m)|^2\rangle = \frac{q^2}{T^2}N\end{aligned}$$ We can now write the fractional signal modification as $$\begin{aligned} \label{eqtn:mike\_rel\_sigsup} \frac{\langle|I\_{t}(s,\omega\_m)|^2\rangle}{\langle|I\_{s}(s,\omega\_m)|^2\rangle} &= 1 + |G(s,\omega\_m)|^2\\\nonumber &+ 2{{\rm Re}}(Y(s,\omega\_m))\end{aligned}$$ with $$\begin{aligned} \label{eqtn:mike\_g} G(s,\omega\_m) &= iZ\_t(\omega\_m)\omega\_m\frac{qI\_0}{E\_0}[\tau\_1(s) - \tau\_1(s\_K)]\\\nonumber &\times e^{i\omega\_m[\hat{\tau}\_0 + \tau\_0(s) - \tau\_0(s\_K)]}\Big\langle e^{i\omega\_m\eta[\tau\_1(s) - \tau\_1(s\_K)]} \Big\rangle\_{\eta}\end{aligned}$$ and $$\begin{aligned} \label{eqtn:mike\_Y} Y(s,\omega\_m) = G(s,\omega\_m) e^{-i\omega\_m\tau\_0(s)} \Big\langle e^{-i\omega\_m\eta\tau\_1(s)}\Big\rangle\_{\eta}\end{aligned}$$ where the mean current $I\_0 = q\dot{s}\_0\int\_{-\infty}^{\infty}\psi\_0(\eta)d\eta$. Since the kick from the wake is expected to be small, we ignore the second-order ∣*G*(*s*, *ω**m*)∣2 term. We also assume a Gaussian energy distribution in the beam, with RMS energy error *σ**η*. This allows us to perform the energy averages explicitly, and we obtain the relative signal modification $$\begin{aligned} \label{eqtn:mike\_final} \frac{\langle|I\_{t}(s,\omega\_m)|^2\rangle}{\langle|I\_{s}(s,\omega\_m)|^2\rangle} - 1 &= 2Re\Big(iZ\_t(\omega\_m)e^{i\omega\_m[\hat{\tau}\_0-\tau\_0(s\_K)]}\Big)\\\nonumber &\times\omega\_m\frac{qI\_0}{E\_0}[\tau\_1(s) - \tau\_1(s\_K)]\\\nonumber &\times e^{-\frac{\sigma\_{\eta}^2\omega\_m^2}{2}\big[\tau\_1^2(s) + (\tau\_1(s) - \tau\_1(s\_K))^2\big]}\end{aligned}$$ We note that *τ*1(*s*) =  − 1/*β**c* times the *M*56 element from the modulator to point *s*, *τ̂*0 − *τ*0(*s**K*) = Δ*z*/*β**c*, and *I*0 = *q**n*0*β**c*, where *n*0 is the mean longitudinal hadron density. We identify the wavenumber *k* = *ω**m*/*β**c* and use Eq. [eqtn:defimpedtime] to translate *Z**t*(*ω*) →  − *Z*(*k*)/*β*. This translates the fractional signal modification above to $$\begin{aligned} \label{eqtn:mike\_final\_translated} &\frac{\langle|I\_{t}(s,\beta c k)|^2\rangle}{\langle|I\_{s}(s,\beta c k)|^2\rangle} - 1 = -2n\_0\frac{q^2c}{E\_0}k M\_{56}^{KD}\\\nonumber &\times[{{\rm Im}}(Z(k))\cos(k\Delta z) + {{\rm Re}}(Z(k))\sin(k\Delta z)]\\\nonumber &\times e^{-\frac{\sigma\_{\eta}^2k^2}{2}\Big[\big(M\_{56}^{MD}\big)^2 + \big(M\_{56}^{KD}\big)^2\Big]}\end{aligned}$$ This agrees with Eq. [eqtn:pwrrelative] in the purely longitudinal limit. Theory from Back-Propagation of Phase Space Density --------------------------------------------------- We now approach the problem of signal modification using the formalism described in, except using the wake and impedance formalism from Eq. [eqtn:wakedef] and [eqtn:impeddef]. This involves back-propagating the beam through the cooler to describe the phase space distribution of the beam at the detector in terms of the known distribution at the modulator. This gives us a longitudinal phase-space density at the detector equal to $$\begin{aligned} \label{eqtn:stupakov\_density} f(z,\eta) = n\_0F\_0(&\eta - \Delta\eta(z - M\_{56}^{KD}\eta))\\\nonumber + \delta f^{(m)}(&z - M\_{56}^{MD}\eta - \Delta z + M\_{56}^{MK}\Delta\eta(z - M\_{56}^{KD}\eta),\\\nonumber &\eta - \Delta \eta (z - M\_{56}^{KD}\eta)) \end{aligned}$$ where *F*0(*η*) is the energy-dependence of the base longitudinal phase-space density, *n*0 is the mean longitudinal beam density, and *δ**f*(*m*) is some perturbation at the modulator. Note that *z* and *η* are evaluated at the detector, so that the above equation is just writing the original phase space density, *n*0*F*0(*η*(*m*)) + *δ**f*(*m*)(*z*(*m*), *η*(*m*)), in terms of the detector *z*, *η* coordinates. We use Eq. [eqtn:wakedef] and [eqtn:impeddef] to write $$\begin{aligned} \label{eqtn:stupakov\_kick} \Delta \eta(z) &= \frac{1}{2\pi}\int\_{-\infty}^{\infty} e^{ikz} \Delta\tilde{\eta}(k) dk\\\nonumber &= -\frac{q^2c}{2\pi E\_0}\int\_{-\infty}^{\infty} e^{ikz} Z(k)\delta\tilde{n}^{(m)}\_k dk\end{aligned}$$ where *Z*(*k*) is the wake impedance and *δ**ñ**k*(*m*) is the Fourier transform of the hadron longitudinal density perturbation in the modulator, with a general definition $$\begin{aligned} \label{eqtn:stupakov\_n} \delta\tilde{n}\_k &= \int\_{-\infty}^{\infty} d\eta \delta \tilde{f}\_k(\eta)\\\nonumber &= \int\_{-\infty}^{\infty} d\eta \int\_{-\infty}^{\infty} dz \delta f(z,\eta) e^{-ikz}\end{aligned}$$ If we Taylor expand *F*0 to first order about the energy kick from the kicker and ignore the Δ*η* dependence in the expression for *δ**f*(*m*), since it is already assumed to be a small correction to *F*0, we find the perturbation to the phase space density at the detector: $$\begin{aligned} \label{eqtn:stupakov\_taylor} \delta f(z,\eta) &= -n\_0F'\_0(\eta)\Delta\eta(z - M\_{56}^{KD}\eta)\\\nonumber &+ \delta f^{(m)}(z - M\_{56}^{MD}\eta - \Delta z, \eta)\end{aligned}$$ The power of signal modification is given by the correlator $$\begin{aligned} \label{eqtn:stupakov\_correlator} \langle\delta\tilde{n}\_k \delta\tilde{n}\_{k'}\rangle = \int\_{-\infty}^{\infty} d\eta d\eta' \langle \delta\tilde{f}\_k(\eta) \delta\tilde{f}\_{k'}(\eta')\rangle\end{aligned}$$ We split the right hand side of Eq. [eqtn:stupakovtaylor] into two parts: $$\begin{aligned} \label{eqtn:stupakov\_split} &\delta f^{(1)}(z,\eta) \equiv -n\_0F'\_0(\eta)\Delta\eta(z - M\_{56}^{KD}\eta)\\\nonumber &\delta f^{(2)}(z,\eta) \equiv \delta f^{(m)}(z - M\_{56}^{MD}\eta - \Delta z, \eta)\end{aligned}$$ Taking the Fourier transform of the first term, we find $$\begin{aligned} \label{eqtn:stupkov\_f1\_transform} \delta \tilde{f}^{(1)}(k) &= -n\_0F'\_0(\eta)\int\_{-\infty}^{\infty}e^{-ikz}\Delta\eta(z - M\_{56}^{KD}\eta)dz\\\nonumber &=-n\_0F'\_0(\eta)e^{-ikM\_{56}^{KD}\eta}\int\_{-\infty}^{\infty}e^{-ikz'}\Delta\eta(z')dz'\\\nonumber &=n\_0F'\_0(\eta)\frac{q^2c}{E\_0}Z(k)e^{-ikM\_{56}^{KD}\eta}\delta\tilde{n}^{(m)}\_k\end{aligned}$$ where we have made use of Eq. [eqtn:stupakovkick] in the last step. The Fourier transform of the second term in Eq. [eqtn:stupakovsplit] yields $$\begin{aligned} \label{eqtn:stupakov\_f2\_transform} \delta\tilde{f}^{(2)} &= \int\_{-\infty}^{\infty} e^{-ikz} \delta f^{(m)}(z - M\_{56}^{MD}\eta - \Delta z, \eta) dz\\\nonumber &= e^{-ikM\_{56}^{MD}\eta}e^{-ik\Delta z}\int\_{-\infty}^{\infty} e^{-ikz'} \delta f^{(m)}(z', \eta) dz'\\\nonumber &= e^{-ikM\_{56}^{MD}\eta}e^{-ik\Delta z}\delta\tilde{f}^{(m)}\_k(\eta)\end{aligned}$$ Calculating the correlator in Eq. [eqtn:stupakovcorrelator], we see that it involves the sum of three terms: ⟨*δ**f̃**k*(1)(*η*)*δ**f̃**k*ʹ(1)(*η*ʹ)⟩, ⟨*δ**f̃**k*(2)(*η*)*δ**f̃**k*ʹ(2)(*η*ʹ)⟩, and ⟨*δ**f̃**k*(1)(*η*)*δ**f̃**k*ʹ(2)(*η*ʹ) + *δ**f̃**k*(2)(*η*)*δ**f̃**k*ʹ(1)(*η*ʹ)⟩. The first of these is quadratic in the wake impedance, and so we will ignore it since we assume that the kick from the wake is small. The second term is evaluated using Eq. 4 of : $$\begin{aligned} \label{eqtn:stupakov\_self\_corr} \langle \delta\tilde{f}^{(2)}\_k(\eta) \delta\tilde{f}^{(2)}\_{k'}(\eta')\rangle = 2\pi n\_0F\_0(\eta)\delta(k+k')\delta(\eta-\eta')\end{aligned}$$ and integration over energy deviations yields $$\begin{aligned} \label{eqtn:stupakov\_self\_n} \langle\delta\tilde{n}\_k^{(2)}\delta\tilde{n}\_{k'}^{(2)}\rangle = 2\pi n\_0\delta(k+k')\end{aligned}$$ For the third term, ⟨*δ**f̃**k*(1)(*η*)*δ**f̃**k*ʹ(2)(*η*ʹ) + *δ**f̃**k*(2)(*η*)*δ**f̃**k*ʹ(1)(*η*ʹ)⟩, we note that the two parts become identical if we swap *k* ↔ *k*ʹ. (We integrate over *η* and *η*ʹ, so the switch of these variables doesn’t matter.) We then focus on the first of these $$\begin{aligned} \label{eqtn:stupakov\_term3} \int\_{-\infty}^{\infty}d\eta d\eta'\langle \delta\tilde{f}^{(1)}\_k(\eta) \delta&\tilde{f}^{(2)}\_{k'}(\eta')\rangle\\\nonumber =n\_0\frac{q^2c}{E\_0}Z(k) \int\_{-\infty}^{\infty} &F'\_0(\eta)e^{-ikM\_{56}^{KD}\eta}e^{-ik'M\_{56}^{MD}\eta'}e^{-ik'\Delta z}\\\nonumber &\langle\delta\tilde{n}\_k^{(m)}\delta\tilde{f}^{(m)}\_{k'}(\eta')\rangle d\eta d\eta'\end{aligned}$$ The correlator ⟨*δ**ñ**k*(*m*)*δ**f̃**k*ʹ(*m*)(*η*ʹ)⟩ is the integral of Eq. [eqtn:stupakovselfcorr] over *η*, yielding $$\begin{aligned} \label{eqtn:stupakov\_term3b} &\int\_{-\infty}^{\infty}d\eta d\eta'\langle \delta\tilde{f}^{(1)}\_k(\eta) \delta\tilde{f}^{(2)}\_{k'}(\eta')\rangle\\\nonumber =&2\pi n\_0^2\frac{q^2c}{E\_0}Z(k)\delta(k+k')\\\nonumber \times&\int\_{-\infty}^{\infty}d\eta d\eta' F'\_0(\eta)e^{-ikM\_{56}^{KD}\eta}e^{-ik'M\_{56}^{MD}\eta'}e^{-ik'\Delta z} F\_0(\eta')\\\nonumber =&2\pi ikM\_{56}^{KD} n\_0^2\frac{q^2c}{E\_0}Z(k)\delta(k+k')\\\nonumber \times&\int\_{-\infty}^{\infty}d\eta d\eta' F\_0(\eta)e^{-ikM\_{56}^{KD}\eta}e^{ikM\_{56}^{MD}\eta'}e^{ik\Delta z} F\_0(\eta')\end{aligned}$$ If we take the particular case of a Gaussian energy distribution with RMS *σ**η*, the integrals yield $$\begin{aligned} \label{eqtn:stupakov\_term3c} &\int\_{-\infty}^{\infty}d\eta d\eta'\langle \delta\tilde{f}^{(1)}\_k(\eta) \delta\tilde{f}^{(2)}\_{k'}(\eta')\rangle\\\nonumber =&2\pi ikM\_{56}^{KD} n\_0^2\frac{q^2c}{E\_0}Z(k)\delta(k+k')e^{ik\Delta z}\\\nonumber &\times e^{-\sigma\_{\eta}^2k^2[(M\_{56}^{KD})^2 + (M\_{56}^{MD})^2]/2}\end{aligned}$$ As noted above, the correlator corresponding to ⟨*δ**f̃**k*(2)(*η*)*δ**f̃**k*ʹ(1)(*η*ʹ)⟩ is the same as what we have just computed, with the interchange *k* ↔ *k*ʹ. Noting that the delta function sends *k*ʹ ↔  − *k* and that *Z*(*k*) is the Fourier transform of a real-valued function, so that *Z*(*k*ʹ) = *Z*( − *k*) = *Z*\*(*k*), we sum the two halves of this correlator to obtain $$\begin{aligned} \label{eqtn:stupakov\_term3\_full} &2\pi ikM\_{56}^{KD} n\_0^2\frac{q^2c}{E\_0}[Z(k)e^{ik\Delta z} - Z^\*(k)e^{-ik\Delta z}]\\\nonumber &\times e^{-\sigma\_{\eta}^2k^2[(M\_{56}^{KD})^2 + (M\_{56}^{MD})^2]/2}\delta(k+k')\\\nonumber =&-4\pi kM\_{56}^{KD} n\_0^2\frac{q^2c}{E\_0}(Re[Z(k)]\sin(k\Delta z) + Im[Z(k)]\cos(k\Delta z))\\\nonumber &\times e^{-\sigma\_{\eta}^2k^2[(M\_{56}^{KD})^2 + (M\_{56}^{MD})^2]/2}\delta(k+k')\end{aligned}$$ Adding in the result from Eq. [eqtn:stupakovselfn] $$\begin{aligned} \label{eqtn:stupakov\_final} \langle\delta\tilde{n}\_k \delta\tilde{n}\_{k'}\rangle &= 2\pi n\_0\delta(k+k')\Big[1 - 2 kM\_{56}^{KD} n\_0\frac{q^2c}{E\_0}\\\nonumber &\times(Re[Z(k)]\sin(k\Delta z) + Im[Z(k)]\cos(k\Delta z))\\\nonumber &\times e^{-\sigma\_{\eta}^2k^2[(M\_{56}^{KD})^2 + (M\_{56}^{MD})^2]/2}\Big]\end{aligned}$$ The fractional signal modification is then $$\begin{aligned} \label{eqtn:stupakov\_final\_frac} \frac{\langle\delta\tilde{n}\_k \delta\tilde{n}\_{k'}\rangle}{\langle\delta\tilde{n}^{(m)}\_k \delta\tilde{n}^{(m)}\_{k'}\rangle} &- 1 = -2 kM\_{56}^{KD} n\_0\frac{q^2c}{E\_0}\\\nonumber &\times(Re[Z(k)]\sin(k\Delta z) + Im[Z(k)]\cos(k\Delta z))\\\nonumber &\times e^{-\sigma\_{\eta}^2k^2[(M\_{56}^{KD})^2 + (M\_{56}^{MD})^2]/2}\end{aligned}$$ We see that, in the absence of transverse motion, this is identical to Eq. [eqtn:pwrrelative]. Quadratic Term ============== In the simplification of Eq. [eqtn:pwr], we only kept the terms linear in the wake amplitude. If we instead keep terms to second order, we arrive at a corrective term equal to $$\begin{aligned} \label{eqtn:second\_order\_sum} \bigg(k M\_{56}^{KD}\frac{q^2}{E\_0}\bigg)^2 \sum\_{i \neq a}&e^{-ik\big[z\_m^{(i)} - z\_m^{(a)} + M^{MD}\_{5u}\big(\vec{x}\_u^{(i)} - \vec{x}\_u^{(a)}\big)\big]}\\\nonumber \times \sum\_{j,\ell} \bigg[&-\frac{1}{2}w(z\_m^{(i)} + M\_{5u}^{MK}\vec{x}\_u^{(i)} + \Delta z - z\_m^{(j)})w(z\_m^{(i)} + M\_{5u}^{MK}\vec{x}\_u^{(i)} + \Delta z - z\_m^{(\ell)})\\\nonumber &-\frac{1}{2}w(z\_m^{(a)} + M\_{5u}^{MK}\vec{x}\_u^{(a)} + \Delta z - z\_m^{(j)})w(z\_m^{(a)} + M\_{5u}^{MK}\vec{x}\_u^{(a)} + \Delta z - z\_m^{(\ell)})\\\nonumber &+w(z\_m^{(i)} + M\_{5u}^{MK}\vec{x}\_u^{(i)} + \Delta z - z\_m^{(j)})w(z\_m^{(a)} + M\_{5u}^{MK}\vec{x}\_u^{(a)} + \Delta z - z\_m^{(\ell)})\bigg]\end{aligned}$$ If the integral of the wake is zero, taking the average of this quantity over all possible *z**m*(*j*) will yield zero unless *j* = *i*, *a*, or ℓ. Similarly, the only values of ℓ which yield non-trivial results are *i*, *a*, or *j*. Since there are 4 ways to pick *j* and ℓ equal to some combination of *i* and *a* and  ∼ *N* >  > 4 ways to pick them equal to one another and not equal to *i* or *a*, we only focus on the latter case. Of the three wake cross-terms, only the last one survives, since the others have either *z**m*(*a*) or *z**m*(*i*) appearing only in the complex phase, which would average to zero. Doing the sums over the three free parameters, *i*, *a*, and *j*, we find this is equal to $$\begin{aligned} \label{eqtn:second\_order\_int} &N^3\bigg(k M\_{56}^{KD}\frac{q^2}{E\_0}\bigg)^2 \int\_{-L/2}^{L/2} dz\_m^{(i)}dz\_m^{(a)}dz\_m^{(j)}/L^3\\\nonumber &\times\int\_{-\infty}^{\infty}d^5\vec{x}^{(i)}d^5\vec{x}^{(a)}\rho\big(\vec{x}^{(i)}\big)\rho\big(\vec{x}^{(a)}\big) \\\nonumber &\times e^{-ik\big[z\_m^{(i)} - z\_m^{(a)} + M^{MD}\_{5u}\big(\vec{x}\_u^{(i)} - \vec{x}\_u^{(a)}\big)\big]}\\\nonumber &\times w(z\_m^{(i)} + M\_{5u}^{MK}\vec{x}\_u^{(i)} + \Delta z - z\_m^{(j)})\\\nonumber &\times w(z\_m^{(a)} + M\_{5u}^{MK}\vec{x}\_u^{(a)} + \Delta z - z\_m^{(j)})\end{aligned}$$ Approximating the longitudinal integrals over *z**m*(*i*) and *z**m*(*a*) as extending from  − ∞ to ∞, making the change of variables *z*ʹ ≡ *z**m*(*i*) + *M*5*u**M**K**x⃗**u*(*i*) + Δ*z* − *z**m*(*j*) and *z*ʺ ≡ *z**m*(*a*) + *M*5*u**M**K**x⃗**u*(*a*) + Δ*z* − *z**m*(*j*), and using Eq. [eqtn:impeddef], we obtain $$\begin{aligned} \label{eqtn:second\_order\_int2} &N\bigg(n\_0 k M\_{56}^{KD}\frac{q^2c}{E\_0}\bigg)^2 \int\_{-L/2}^{L/2} dz\_m^{(j)}/L\\\nonumber &\times\int\_{-\infty}^{\infty}d^5\vec{x}^{(i)}d^5\vec{x}^{(a)}\rho\big(\vec{x}^{(i)}\big)\rho\big(\vec{x}^{(a)}\big) \\\nonumber &\times e^{-ik\big[\big(M^{MD}\_{5u} - M^{MK}\_{56}\big)\big(\vec{x}\_u^{(i)} - \vec{x}\_u^{(a)}\big)\big]}Z(k)Z(-k)\\\nonumber &=N\bigg(n\_0 k M\_{56}^{KD}\frac{q^2c}{E\_0}\bigg)^2 |Z(k)|^2\\\nonumber &\times\int\_{-\infty}^{\infty}d^5\vec{x}^{(i)}d^5\vec{x}^{(a)}\rho\big(\vec{x}^{(i)}\big)\rho\big(\vec{x}^{(a)}\big) \\\nonumber &\times e^{-ik\big[\big(M^{MD}\_{5u} - M^{MK}\_{56}\big)\big(\vec{x}\_u^{(i)} - \vec{x}\_u^{(a)}\big)\big]}\\\nonumber &=N\bigg(n\_0 k M\_{56}^{KD}\frac{q^2c}{E\_0}\bigg)^2 |Z(k)|^2e^{-k^2 \sum\_{u\neq5} \sigma^2\_{\hat{x}\_u} \big(\hat{M}^{MD}\_{5u} - \hat{M}^{Mk}\_{5u}\big)^2}\end{aligned}$$ where we have used the fact that *Z*( − *k*) = *Z*\*(*k*) for real wakes, the definition *n*0 = *N*/*L*, and the transformed transfer matrices and beam sizes of Eq. [eqtn:mtilde] - [eqtn:sigmas]. This yields a corrected version of Eq. [eqtn:pwrrelative]: $$\begin{aligned} \label{eqtn:pwr\_relative\_correct} \frac{\Delta |\tilde{n}(k)|^2}{|\tilde{n}(k)|^2} &= -2n\_0\frac{q^2c}{E\_0}kM\_{56}^{KD}\\\nonumber &\times[{{\rm Re}}(Z(k))\sin(k\Delta z) + {{\rm Im}}(Z(k))\cos(k\Delta z)]\\\nonumber &\times e^{-\frac{k^2}{2} \sum\_{u\neq5} \sigma^2\_{\hat{x}\_u}\Big[\big(\hat{M}^{MD}\_{5u}\big)^2 + \big(\hat{M}^{MD}\_{5u} - \hat{M}^{MK}\_{5u}\big)^2\Big]}\\\nonumber &+\bigg(n\_0 k M\_{56}^{KD}\frac{q^2c}{E\_0}\bigg)^2 |Z(k)|^2e^{-k^2 \sum\_{u\neq5} \sigma^2\_{\hat{x}\_u} \big(\hat{M}^{MD}\_{5u} - \hat{M}^{MK}\_{5u}\big)^2}\end{aligned}$$ Similar results are seen in the derivations of Appendix [app:alternative] if we keep the ∣*G*∣2 term of Eq. [eqtn:mikerelsigsup] or the ⟨*δ**f̃**k*(1)(*η*)*δ**f̃**k*ʹ(1)(*η*ʹ)⟩ term of Appendix [subsec:thrystupakov]. Schottky Signal Modification as a Diagnostic Tool for Coherent Electron Cooling =============================================================================== Coherent electron cooling is a promising technique to cool high-intensity hadron bunches by imprinting the noise in the hadron beam on a beam of electrons, amplifying the electron density modulations, and using them to apply cooling kicks to the hadrons. The typical size for these perturbations can be on the *μ*m scale, allowing us to extend the reach of classical stochastic cooling by several orders of magnitude. However, it is crucial to ensure that the electron and hadron beams are longitudinally aligned within this same *μ*m scale. In order to provide fast feedback for this process, we discuss the extension of signal suppression to coherent electron cooling, and show in both theory and simulation that certain components of the spectral noise in the hadron beam will be predictably modified at the several percent level, which may be detected by observations of the radiation of the hadron beam. Introduction ============ In high-intensity hadron storage rings, intrabeam scattering (IBS) and beam-beam effects will degrade the beam emittance over the length of the store, limiting machine luminosity. In particular, at the planned Electron-Ion Collider (EIC), the IBS times are expected to be at the timescale of a couple hours, and so some form of strong hadron cooling is necessary to achieve the physics goals. The proposed method in this case is microbunched electron cooling (MBEC), a particular form of coherent electron cooling (CeC). This was first introduced in, and the theory has since been developed extensively in. The premise of MBEC is that the hadron beam to be cooled copropagates with an electron beam in a straight “modulator” section, during which time the hadrons will provide energy kicks to the electrons. The two beams are then separated, and the electrons pass through a series of amplifiers to change this initial energy modulation into a density modulation and amplify it. The hadrons travel through their own chicane before meeting the electrons again in a straight “kicker” section. Here, the amplified density modulations in the electron beam provide energy kicks to the hadrons. By tuning the hadron chicane so that the hadron delay in travelling from the modulator to the kicker is energy-dependent, we may arrange it so that the energy kick that the hadron receives in the kicker tends to correct initial energy offsets. If the chicane also gives the hadrons a delay dependent on their transverse phase-space coordinates and if there is non-zero dispersion in the kicker, then the transverse emittance of the hadron beam can also be cooled. In the current design of an MBEC Cooler for the EIC, the typical scale of the electron density modulations at the top energy will be  ∼ 1*μ*m. This corresponds to about 4 orders of magnitude higher bandwidth than can be achieved with microwave stochastic cooling, allowing the cooling of dense hadron bunches, but also making alignment a challenge. It is important that the hadron arrives in the kicker at the same time as the density perturbations which it had induced in the electron beam, or else it will receive entirely uncorrelated energy kicks. Comparing the  ∼ 100m distance between modulator and kicker to the  ∼ 1*μ*m density perturbations in the electron beam, we see that the transit times of the electrons and hadrons must be maintained at a level of ten parts per billion. In order to commission and operate such a system, it is necessary to have some way to quickly measure the relative alignment of the electron and hadron beams at the sub-micron scale. Directly observing cooling would require waits on the timescale of hours, which would make commissioning painful and prevent any sort of fast feedback during operations. The method proposed here is to make use of “signal modification,” an extension of the well-known signal suppression of microwave stochastic cooling to the case of MBEC cooling. The principle of this method is that after the hadron beam has received its cooling kicks, it will propagate to a “detector” where the power of the hadron beam at particular wavelengths may be measured. If the hadron beam is well-aligned with the electron beam, this will produce a predictable change in the spectral content of the hadron beam on a single-pass basis. As our model, we assume that the cooling section consists of a modulator, where the hadron beam imprints on the electrons; a kicker, where the electrons provide an energy kick back to the hadrons; and a detector, where we will observe the density spectrum in the hadron beam. See Fig. [fig:layout]. ![image](sigsup_layout.png) In Section [sec:thry], we provide a theoretical derivation of signal modification. In Section [sec:sim], we discuss simulation tools to model this process, and find good agreement with the theoretical predictions. In Section [sec:detection], we discuss what will be needed to measure such a signal experimentally. We present our outlook in Section [sec:conclude] and conclude. Note that, although this paper focuses on MBEC, the general form of these results will hold for coherent electron cooling with other amplification mechanisms. Theory ====== We derive here the theory of signal modification by directly propagating the particles themselves from the modulator to the detector with arbitrary 6-dimensional transfer matrices. Alternative derivations using the longitudinal Vlasov equation are presented in Appendix [app:alternative]. In subsection [subsec:thrydecohere], we comment on decoherence of the signal when observing a range of frequencies and in subsection [subsec:multikick], we discuss an extension where the kicks are distributed throughout the length of the kicker. We use phase-space coordinates *x*, *x*ʹ, *y*, *y*ʹ, *z*, and *η*, where the first four are the transverse positions and angles, *z* is the particle’s longitudinal position in the bunch, with positive *z* values corresponding to the head of the bunch, and *η* is the fractional energy offset of the particle. In order to characterize the cooling process, we follow and define a longitudinal wake function in the 1D approximation, such that the energy kick a hadron receives in the kicker is the convolution of this wake function with the longitudinal distribution of hadrons in the modulator. Explicitly, the fractional energy kick Δ*η* received by a hadron at longitudinal position *z* within the bunch at the kicker is given by $$\begin{aligned} \label{eqtn:wake\_def} \Delta\eta(z) = \frac{q^2}{E\_0} \int\_{-\infty}^{\infty}w(z+\Delta z-z')n(z') dz'\end{aligned}$$ where *q* is the hadron charge, *E*0 is the nominal hadron energy, *n*(*z*) is the longitudinal hadron density in the modulator, and Δ*z* is the difference in modulator-to-kicker longitudinal delay between the hadron and electron beams. We also identify a corresponding impedance $$\begin{aligned} \label{eqtn:imped\_def} Z(k) = -\frac{1}{c}\int\_{-\infty}^{\infty} w(z) e^{-ikz} dz\end{aligned}$$ In order for these simple longitudinal wakes to hold, we require coherent oscillation of the beams, and therefore the electric field at our frequency of interest must be large compared to the beam sizes. In particular, we require : $$\begin{aligned} \label{eqtn:1D} \Sigma\_{\perp} \lesssim \frac{\beta\gamma\lambda}{2}\end{aligned}$$ where Σ⊥ is the transverse beam size, *β* is the relativistic beta, *γ* is the relativistic gamma, and *λ* is the beam oscillation wavelength. Examining the values in Tab. [tab:param], we see that the largest beam size is that of the protons in the modulator and kicker (0.95mm), and the gamma factor is 293, so our approximation is good for wavelengths above  ∼ 6.5*μ*m. As we will see, we will be interested mainly in wavelengths of 6*μ*m, at the limit of applicability of the above. However, we note that the requirement of Eq. [eqtn:1D] is likely not a strict one—as it was shown in, in the case when there is not amplification, the 3D theory gives exactly the same result as the 1D model which we employ here. In order for us to neglect Landau damping in the electron beam amplifiers, one usually requires : $$\begin{aligned} \label{eqtn:landau\_damp} k \ll k\_D,\end{aligned}$$ where *k* is the wavenumber of our signal, *k**D* ≡ *ω**p*/*c**σ**β* is the Debye wavenumber, and *σ**β* is the spread in relativistic beta. This electron beam has *k**D* = 4 × 106m− 1 in the amplifiers, larger than the *k* = 106m− 1 of our microbunching. One also has to keep in mind that the concept of Landau damping, with the condition of Eq. [eqtn:landaudamp], is only valid asymptotically, after many plasma oscillations, which is not the case of the microbunching coherent cooling where the electron beam executes $\frac{1}{4}$ of the plasma oscillation in the amplifier sections. A more accurate analysis of when one can ignore the Landau damping for this case was carried out in Section IV of. In order to ensure that the microbunching is not washed out by the energy spread and divergence of the beam within any of the straight sections, we require : $$\begin{aligned} \lambda \gg \frac{2L}{\beta^3\gamma^2}\sigma\_{\eta}\end{aligned}$$ and $$\begin{aligned} \lambda \gg \frac{L}{\beta^2}\sigma\_{x'}^2\end{aligned}$$ where *σ**η* is the fractional energy spread of the beam, *σ**x*ʹ is the beam divergence, and *L* is the length of a drift. Comparison with the parameters in Tab. [tab:param] shows that these are met for both the electrons and hadrons in all the straight sections. We assume that there is no transverse correlation in the electron or hadron beam. Further discussion of the effects which this might have is available in. We focus our attention on the peak region of the electron and hadron beams and take the limit of a longitudinally infinite and uniform plasma. Since the typical wake wavelength is on the order of a few microns, and the typical bunch lengths are a few mm or longer, this is a reasonable approximation. Finally, we assume that the hadron beam enters the modulator with no correlated structures on the scale of the wake wavelength. Although such structures will be generated within the kicker, their characteristic size is on the micron scale, much less than the millimeter scale longitudinal motion per turn, washing out any memory of the kick by the time the beam enters the modulator again. As illustrated in Fig. [fig:layout], we take the hadrons to have transfer matrix *M**M**K* between modulator and kicker and *M**K**D* between kicker and detector, with the transfer matrix between modulator and detector given by *M**M**D*. We treat our particles as existing in a region of length *L*, much larger than any length scale associated with the wake function, and assume periodic boundary conditions, so that we may arbitrarily shift the limits of integration in our integrals. In this model, we also consider the full 6-dimensional evolution of the hadron beam and ignore any collective effects during beam transport except for the electron-hadron interactions characterized by the wake function, as discussed above. We write the evolution of a hadron’s position between modulator and detector as $$\begin{aligned} \label{eqtn:delays} z\_d^{(i)} = &z\_m^{(i)} + M^{MD}\_{5u}\vec{x}\_u^{(i)}\\\nonumber + &M^{KD}\_{56}\sum\_j \frac{q^2}{E\_0} w(z\_m^{(i)} + M^{MK}\_{5u}\vec{x}\_u^{(i)} + \Delta z - z\_m^{(j)})\end{aligned}$$ where *z**m*(*i*) is the longitudinal position of particle *i* in the modulator, *z**d*(*i*) is its position within the detector, and *x⃗*(*i*) are the phase-space coordinates of particles *i* in the modulator. We use the convention that the repeated “*u*” subscript refers to sums over the 5 phase-space coordinates *excluding* the longitudinal position. The summation over *j* is over all particles within the hadron beam. The first two terms of this equation describe the modulator to kicker beam evolution by a simple transfer matrix, while the final term gives the contribution to delay due to the extra energy kick our particle receives in the kicker due to the wakes of all particles in the beam, including itself. At the detector, the longitudinal density of the hadron beam is $$\begin{aligned} \label{eqtn:rho\_d} n(z) = \sum\_i \delta(z - z\_d^{(i)})\end{aligned}$$ with corresponding density in Fourier space $$\begin{aligned} \label{eqtn:rho\_tilde\_d} \tilde{n}(k) &= \int\_{-\infty}^{\infty}\sum\_i e^{-ikz} \delta(z - z\_d^{(i)}) dz\\\nonumber &=\sum\_i e^{-ikz\_d^{(i)}}\end{aligned}$$ The power in the hadron spectrum at a given wavenumber is then given by $$\begin{aligned} \label{eqtn:pwr} |\tilde{n}(k)|^2 = &\sum\_{i,a}e^{-ik\big[z\_d^{(i)} - z\_d^{(a)}\big]}\\\nonumber =N + &\sum\_{i \neq a}e^{-ik\big[z\_m^{(i)} - z\_m^{(a)} + M^{MD}\_{5u}\big(\vec{x}\_u^{(i)} - \vec{x}\_u^{(a)}\big)\big]}\\\nonumber &\times e^{-ikM^{KD}\_{56}q^2/E\_0\sum\_j w\big(z\_m^{(i)} + M^{MK}\_{5u}\vec{x}\_u^{(i)} + \Delta z - z\_m^{(j)}\big)}\\\nonumber &\times e^{ikM^{KD}\_{56}q^2/E\_0\sum\_j w\big(z\_m^{(a)} + M^{MK}\_{5u}\vec{x}\_u^{(a)} + \Delta z - z\_m^{(j)}\big)}\end{aligned}$$ where we have substituted in the expression for *z**d*(*i*) given in Eq. [eqtn:delays] and used the fact that the *i* = *a* terms in the sum are all equal to 1, giving us the *N* out front, where *N* is the number of particles in the length-*L* section of the beam. Typically, the kick from the wake is small, and so we may Taylor-expand the final two exponentials above to linear order. We thereby obtain $$\begin{aligned} \label{eqtn:pwr\_taylor} |\tilde{n}(k)|^2 &= N + \sum\_{i \neq a}e^{-ik\big[z\_m^{(i)} - z\_m^{(a)} + M^{MD}\_{5u}\big(\vec{x}\_u^{(i)} - \vec{x}\_u^{(a)}\big)\big]}\\\nonumber \times \big[1 &- ikM^{KD}\_{56}\frac{q^2}{E\_0}\sum\_j w\big(z\_m^{(i)} + M^{MK}\_{5u}\vec{x}\_u^{(i)} + \Delta z - z\_m^{(j)}\big)\\\nonumber &+ ikM^{KD}\_{56}\frac{q^2}{E\_0}\sum\_j w\big(z\_m^{(a)} + M^{MK}\_{5u}\vec{x}\_u^{(a)} + \Delta z - z\_m^{(j)}\big)\big]\end{aligned}$$ with the effect of second-order terms considered in Appendix [app:secondorder]. We now wish to take the expectation value of the beam power, requiring integrals over the 12 phase space coordinates of particles *i* and *a* and the longitudinal position of particle *j*. However, note that the dependence on *z**m*(*j*) appears only in the argument of the wake functions. If the total integral of the wake is zero, then, if particle *j* is distinct from both particles *i* and *a*, this integral will evaluate to 0. We then need only consider the terms *j* = *i* and *j* = *a* in those sums. The beam power can then be written as $$\begin{aligned} \label{eqtn:pwr\_taylor2} |\tilde{n}(k)|^2 &= N + \sum\_{i \neq a}e^{-ik\big[z\_{ia} + M^{MD}\_{5u}\big(\vec{x}\_u^{(i)} - \vec{x}\_u^{(a)}\big)\big]}\\\nonumber \times [1 &- ikM^{KD}\_{56}\frac{q^2}{E\_0} w(M^{MK}\_{5u}\vec{x}\_u^{(i)} + \Delta z)\\\nonumber &+ ikM^{KD}\_{56} \frac{q^2}{E\_0}w(M^{MK}\_{5u}\vec{x}\_u^{(a)} + \Delta z)\\\nonumber &- ikM^{KD}\_{56} \frac{q^2}{E\_0}w(z\_{ia} + M^{MK}\_{5u}\vec{x}\_u^{(i)} + \Delta z)\\\nonumber &+ ikM^{KD}\_{56} \frac{q^2}{E\_0}w(-z\_{ia} + M^{MK}\_{5u}\vec{x}\_u^{(a)} + \Delta z)]\end{aligned}$$ where we have made the definition *z**i**a* ≡ *z**m*(*i*) − *z**m*(*a*). Since we assume a homogeneous hadron bunch, *z**m*(*i*) and *z**m*(*a*) themselves are irrelevant and *z**i**a* has the same probability distribution as them. We note in the above formula that the first three terms have their only *z**i**a* dependence in the leading exponential, and so performing an average over all *z**i**a* will be zero. We therefore need only focus on the 4th and 5th terms. Approximating the *N*(*N* − 1) terms in the above sum as *N*2, the relevant integral for the 4th term is $$\begin{aligned} \label{eqtn:4th\_term} &-N^2ikM\_{56}^{KD} \int\_{-L/2}^{L/2} dz\_{ia}/L \int\_{-\infty}^{\infty} d^5\vec{x}^{(i)} d^5\vec{x}^{(a)} \rho\big(\vec{x}^{(i)}\big) \rho\big(\vec{x}^{(a)}\big)\\\nonumber &\times \frac{q^2}{E\_0}w(z\_{ia} + M^{MK}\_{5u}\vec{x}\_u^{(i)} + \Delta z) e^{-ik\big[z\_{ia} + M^{MD}\_{5u}\big(\vec{x}\_u^{(i)} - \vec{x}\_u^{(a)}\big)\big]}\end{aligned}$$ where $\rho\big(\vec{x}\big)$ the hadron phase-space density in the modulator within the 5 phase-space coordinates excluding longitudinal position. Approximating the longitudinal integral as extending from  − ∞ to ∞, using the impedance of Eq. [eqtn:impeddef], and making an appropriate change of variables to *z*ʹ ≡ *z**i**a* + *M*5*u**M**K**x⃗**u*(*i*) + Δ*z*, the longitudinal integral in Eq. [eqtn:4thterm] may be evaluated, leaving us with $$\begin{aligned} \label{eqtn:4th\_term\_2} &N^2ikM\_{56}^{KD} \frac{q^2c}{E\_0}\frac{1}{L} Z(k) e^{ik\Delta z}\\\nonumber &\times\int\_{-\infty}^{\infty} d^5\vec{x}^{(i)} d^5\vec{x}^{(a)} \rho\big(\vec{x}^{(i)}\big) \rho\big(\vec{x}^{(a)}\big)\\\nonumber &\times e^{-ik\big[M^{MD}\_{5u}\big(\vec{x}\_u^{(i)} - \vec{x}\_u^{(a)}\big) - M^{MK}\_{5u}\vec{x}\_u^{(i)}\big]}\end{aligned}$$ To perform the remaining integrals, we write the evolution of the phase-space coordinates explicitly in terms of action-angle variables and Courant-Snyder parameters at the start of the transfer matrix, finding $$\begin{aligned} \label{eqtn:action\_angle} M\_{5u}\vec{x}\_u &= M\_{51}(\sqrt{2J\_x\beta\_x}\cos(\phi\_x)+D\_x\eta)\\\nonumber &+ M\_{52}(-\sqrt{2J\_x/\beta\_x}[\sin(\phi\_x)+\alpha\_x\cos(\phi\_x)]+D'\_x\eta)\\\nonumber &+ M\_{53}(\sqrt{2J\_y\beta\_y}\cos(\phi\_y)+D\_y\eta)\\\nonumber &+ M\_{54}(-\sqrt{2J\_y/\beta\_y}[\sin(\phi\_y)+\alpha\_y\cos(\phi\_y)]+D'\_y\eta)\\\nonumber &+ M\_{56}\eta\\\nonumber &\\\nonumber &= (M\_{51}-\frac{\alpha\_x}{\beta\_x}M\_{52})\sqrt{2J\_x\beta\_x}\cos(\phi\_x)\\\nonumber &- M\_{52}\sqrt{2J\_x/\beta\_x}\sin(\phi\_x)\\\nonumber &+ (M\_{53}-\frac{\alpha\_y}{\beta\_y}M\_{54})\sqrt{2J\_y\beta\_y}\cos(\phi\_y)\\\nonumber &- M\_{54}\sqrt{2J\_y/\beta\_y}\sin(\phi\_y)\\\nonumber &+ (D\_xM\_{51} + D'\_xM\_{52} + D\_yM\_{53} + D'\_yM\_{54} + M\_{56})\eta\\\nonumber &\\\nonumber \equiv & \hat{M}\_{51}\hat{x} + \hat{M}\_{52}\hat{x}' + \hat{M}\_{53}\hat{y} + \hat{M}\_{54}\hat{y}' + \hat{M}\_{56}\hat{\eta}\end{aligned}$$ with $$\begin{aligned} \label{eqtn:m\_tilde} &\hat{M}\_{51} \equiv M\_{51}-\frac{\alpha\_x}{\beta\_x}M\_{52}\\\nonumber &\hat{M}\_{52} \equiv M\_{52}\\\nonumber &\hat{M}\_{53} \equiv M\_{53}-\frac{\alpha\_y}{\beta\_y}M\_{54}\\\nonumber &\hat{M}\_{54} \equiv M\_{54}\\\nonumber &\hat{M}\_{56} \equiv D\_xM\_{51} + D'\_xM\_{52} + D\_yM\_{53} + D'\_yM\_{54} + M\_{56}\end{aligned}$$ For a Gaussian beam, the *x̂*, *x̂*ʹ, *ŷ*, *ŷ*ʹ, and *η̂* are normally distributed with $$\begin{aligned} \label{eqtn:sigmas} &\sigma\_{\hat{x}} = \sqrt{\epsilon\_x\beta\_x}\\\nonumber &\sigma\_{\hat{x}'} = \sqrt{\epsilon\_x/\beta\_x}\\\nonumber &\sigma\_{\hat{y}} = \sqrt{\epsilon\_y\beta\_y}\\\nonumber &\sigma\_{\hat{y}'} = \sqrt{\epsilon\_y/\beta\_y}\\\nonumber &\sigma\_{\hat{\eta}} = \sigma\_{\eta}\end{aligned}$$ where the *ε* are the horizontal and vertical emittances and *σ**η* is the fractional energy spread. In this case, the remaining ten integrals in Eq. [eqtn:4thterm2] may be performed, yielding $$\begin{aligned} \label{eqtn:4th\_term\_final} &ikM\_{56}^{KD}\frac{q^2c}{E\_0} \frac{N^2}{L} Z(k) e^{ik\Delta z}\\\nonumber &\times e^{-\frac{k^2}{2} \sum\_{u\neq5} \sigma^2\_{\hat{x}\_u}\Big[\big(\hat{M}^{MD}\_{5u}\big)^2 + \big(\hat{M}^{MD}\_{5u} - \hat{M}^{MK}\_{5u}\big)^2\Big]}\end{aligned}$$ A similar procedure may be applied to the fifth term of Eq. [eqtn:pwrtaylor2], resulting in $$\begin{aligned} \label{eqtn:5th\_term\_final} &-ikM\_{56}^{KD} \frac{q^2c}{E\_0}\frac{N^2}{L} Z(-k) e^{-ik\Delta z}\\\nonumber &\times e^{-\frac{k^2}{2} \sum\_{u\neq5} \sigma^2\_{\hat{x}\_u}\Big[\big(\hat{M}^{MD}\_{5u}\big)^2 + \big(\hat{M}^{MD}\_{5u} - \hat{M}^{MK}\_{5u}\big)^2\Big]}\end{aligned}$$ Making use of the fact that, for real wakes, *Z*( − *k*) = *Z*\*(*k*), and defining *n*0 ≡ *N*/*L* as the mean linear density of the hadrons, we may sum Eqs. [eqtn:4thtermfinal] and [eqtn:5thtermfinal] and incorporate them back into Eq. [eqtn:pwrtaylor2], obtaining $$\begin{aligned} \label{eqtn:pwr\_final} |\tilde{n}(k)|^2 &= N - 2Nn\_0\frac{q^2c}{E\_0}kM\_{56}^{KD}\\\nonumber &\times[{{\rm Re}}(Z(k))\sin(k\Delta z) + {{\rm Im}}(Z(k))\cos(k\Delta z)]\\\nonumber &\times e^{-\frac{k^2}{2} \sum\_{u\neq5} \sigma^2\_{\hat{x}\_u}\Big[\big(\hat{M}^{MD}\_{5u}\big)^2 + \big(\hat{M}^{MD}\_{5u} - \hat{M}^{MK}\_{5u}\big)^2\Big]}\end{aligned}$$ Then, the fractional change in beam power as a function of electron/hadron misalignment is $$\begin{aligned} \label{eqtn:pwr\_relative} \frac{\Delta |\tilde{n}(k)|^2}{|\tilde{n}(k)|^2} &= -2n\_0\frac{q^2c}{E\_0}kM\_{56}^{KD}\\\nonumber &\times[{{\rm Re}}(Z(k))\sin(k\Delta z) + {{\rm Im}}(Z(k))\cos(k\Delta z)]\\\nonumber &\times e^{-\frac{k^2}{2} \sum\_{u\neq5} \sigma^2\_{\hat{x}\_u}\Big[\big(\hat{M}^{MD}\_{5u}\big)^2 + \big(\hat{M}^{MD}\_{5u} - \hat{M}^{MK}\_{5u}\big)^2\Big]}\end{aligned}$$ Note that if ${{\rm Re}}(Z(k)) = 0$ and ${{\rm Im}}(Z(k)) > 0$ (which is the case considered below, see Eq. [eqtn:impedapprox]), and if *M*56*K**D* > 0, then perfect alignment of the hadrons and electrons, ie Δ*z* = 0, results in Δ∣*ñ*(*k*)∣2 < 0,  corresponding to noise suppression below the shot noise in the beam. Such noise suppression has been previously studied theoretically in and observed experimentally in. Our result Eq. [eqtn:pwrrelative] is an agreement with the theoretical analysis of. Decoherence ----------- The above posits that the amount of signal modification has a purely sinusoidal dependence on the electron/hadron misalignment, ie, $$\begin{aligned} \label{eqtn:decoherence\_ideal} \frac{\Delta |\tilde{n}(k)|^2}{|\tilde{n}(k)|^2} = A(k) \cos(k\Delta z + \theta\_0)\end{aligned}$$ where *A* is some amplitude and *θ*0 is a phase, equal to 0 for an antisymmetric wake. However, the above derivation assumes an observation at a pure single frequency. If we take the more realistic case that we sample some range of frequencies with bandwidth Δ*k*, the signal amplitude will be $$\begin{aligned} \label{eqtn:decoherence\_true} \frac{\Delta |\tilde{n}(k)|^2}{|\tilde{n}(k)|^2} &\approx A(k)\frac{1}{\Delta k}\int\_{k-\Delta k/2}^{k+\Delta k/2} \cos(k'\Delta z + \theta\_0) dk'\\\nonumber &= A(k)\cos(k\Delta z + \theta\_0) \frac{\sin(\Delta k\Delta z/2)}{\Delta k\Delta z/2}\end{aligned}$$ so that the amplitude of the oscillations will decay over lengths of  ∼ 1/Δ*k*. Integrated Kicker ----------------- The above analysis assumes that the full kick to the hadrons takes place at a single point in the kicker. However, in the more realistic case where the kick is applied over the full length of the kicker, we may change Eq. [eqtn:delays] to $$\begin{aligned} &z\_d^{(i)} = z\_m^{(i)} + M^{MD}\_{5u}\vec{x}\_u^{(i)}\\\nonumber &+ \frac{1}{N\_s}\sum\_{s=1}^{N\_s}M^{K\_sD}\_{56}\sum\_j \frac{q^2}{E\_0} w(z\_m^{(i)} + M^{MK\_s}\_{5u}\vec{x}\_u^{(i)} + \Delta z - z\_m^{(j)})\end{aligned}$$ where we now split our kick in the kicker into *N**s* fractional kicks at locations *K**s*. The rest of the analysis may be carried through in the same way as before, and we arrive at $$\begin{aligned} \frac{\Delta |\tilde{n}(k)|^2}{|\tilde{n}(k)|^2} &= -2n\_0\frac{q^2c}{E\_0}\frac{1}{N\_s}\sum\_{s=1}^{N\_s}kM\_{56}^{K\_sD}\\\nonumber &\times[{{\rm Re}}(Z(k))\sin(k\Delta z) + {{\rm Im}}(Z(k))\cos(k\Delta z)]\\\nonumber &\times e^{-\frac{k^2}{2} \sum\_{u\neq5} \sigma^2\_{\hat{x}\_u}\Big[\big(\hat{M}^{MD}\_{5u}\big)^2 + \big(\hat{M}^{MD}\_{5u} - \hat{M}^{MK\_s}\_{5u}\big)^2\Big]}\end{aligned}$$ However, we have not observed any significant difference numerically between using the above equation and Eq. [eqtn:pwrrelative], with the single kick taken at the kicker center. Simulation ========== In order to check the validity and limits of the above theory, we make use of simulation. We first examine the case of a perfectly linear simulation, where the fractional energy kick to a given hadron in the kicker is simply the convolution of the wake function with the longitudinal hadron distribution in the modulator. We then turn our attention to a more detailed simulation, where the electrons and hadrons are tracked with a particle-in-cell code in order to incorporate saturation effects. In this and future sections, we consider the cooling parameters currently planned for 275 GeV protons in the EIC, listed in Tab. [tab:param]. The electron optics are assumed to be kept roughly constant within the modulator, kicker, and amplifiers through the use of focusing. Due to their much higher energy, the hadrons see the modulator and kicker as drifts, with the optics parameters specified at the center. Electron chicanes are defined from the end of one straight section to the start of the next one, while the proton *M*56 and phase advances are evaluated from modulator center to kicker center. [!hbt] | | | | --- | --- | | ***Geometry*** | | | Modulator Length (m) | 45 | | Kicker Length (m) | 45 | | Number of Amplifier Straights | 2 | | Amplifier Straight Lengths (m) | 37 | | ***Proton Parameters*** | | | Energy (GeV) | 275 | | Protons per Bunch | 6.9e10 | | Average Current
arxiv_0000358
We introduce a model of a controlled random graph process. In this model, the edges of the complete graph *K**n* are ordered randomly and then revealed, one by one, to a player called Builder. He must decide, immediately and irrevocably, whether to purchase each observed edge. The observation time is bounded by parameter *t*, and the total budget of purchased edges is bounded by parameter *b*. Builder’s goal is to devise a strategy that, with high probability, allows him to construct a graph of purchased edges possessing a target graph property P, all within the limitations of observation time and total budget. We show the following: * Builder has a strategy to achieve *k*-vertex-connectivity at the hitting time for this property by purchasing at most *c**k**n* edges for an explicit *c**k* < *k*; and a strategy to achieve minimum degree *k* (slightly) after the threshold for minimum degree *k* by purchasing at most (1 + *ɛ*)*k**n*/2 edges (which is optimal). * Builder has a strategy to create a Hamilton cycle at the hitting time for Hamiltonicity by purchasing at most *C**n* edges for an absolute constant *C* > 1; this is optimal in the sense that *C* cannot be arbitrarily close to 1. This substantially extends the classical hitting time result for Hamiltonicity due to Ajtai–Komlós–Szemerédi and Bollobás. * Builder has a strategy to create a perfect matching by time (1 + *ɛ*)*n*log*n*/2 while purchasing at most (1 + *ɛ*)*n*/2 edges (which is optimal). * Builder has a strategy to create a copy of a given *k*-vertex tree if *t* ≥ *b* ≫ max{(*n*/*t*)*k* − 2, 1}, and this is optimal; * For ℓ = 2*k* + 1 or ℓ = 2*k* + 2, Builder has a strategy to create a copy of a cycle of length ℓ if $b\gg\max \{n^{k+2}/t^{k+1},n/\sqrt{t}\}$, and this is optimal. Introduction ============ The model --------- The random graph process, introduced by Erds and Rényi , is a stochastic process that starts with an empty *n*-vertex graph and, at each step, gains a new uniformly selected random edge. At any fixed time *t*, the process is distributed as the uniform random graph *G*(*n*, *t*). A **graph property** is a family of graphs that is closed under isomorphisms. It is **monotone** if it is closed under addition of edges. A vast body of literature is concerned with finding *thresholds* for various monotone graph properties in the random graph process, namely, with finding time *t**c* such that the random graph process belongs to P with high probability (**whp**; that is, with probability tending to 1 as *n* → ∞) whenever *t* is much (or somewhat) larger than *t**c* and does not belong to P **whp** if *t* is much (or somewhat) smaller than *t**c*. In many cases, if one observes the random graph process at time *t* above the threshold, the graph has the desired property but, in fact, contains a much sparser subgraph that has the property. For example, one of the outstanding results in this model regards a threshold for Hamiltonicity: Komlós and Szemerédi  and, independently, Bollobás  showed that if 2*t*/*n* − log*n* − loglog*n* tends to infinity, then the random graph process, at time *t*, contains a Hamilton cycle **whp**. Evidently, not all  ∼ *n*log*n*/2 observed edges must be included in the resulting graph for it to be Hamiltonian (as any Hamilton cycle only uses *n* edges). Nevertheless, when an edge arrives, it is generally hard to determine whether it will be crucial for Hamiltonicity. The above motivates the following “online” version of building a subgraph of the random graph process. We think of it as a one-player game, where the player (“**Builder**”) has a limited “budget”. Edges “arrive” one at a time in random order and are presented to Builder. Whenever he *observes* an edge, he must immediately and irrevocably decide whether to *purchase* it. A non-purchased edge is thrown away and never reappears. The **time** (total number of presented edges) and the **budget** (maximum number of edges Builder can purchase) are both capped. The question is as to whether or not, under the given time and budget constraints, Builder has a strategy that allows him to obtain, **whp**, a particular monotone graph property in the graph of purchased edges. To demonstrate the model, consider the property of connectedness. Let *τ**C* be the (random) time at which the random graph process becomes connected. Evidently, if Builder wishes to construct a connected subgraph of the random graph process, he must purchase at least *n* − 1 edges. However, in this case purchasing *n* − 1 edges suffices: Builder’s strategy would be to purchase an edge if and only if it decreases the number of connected components in his graph. That way, Builder maintains a forest, which becomes connected exactly at time *τ**C*. Therefore, in this example, Builder does not have to “pay” for having to make decisions online. However, this is not always the case. For example, if Builder wants to purchase a triangle and wishes to do so while observing *o*(*n*2) edges, he must purchase (many) more than three edges (see ).   We denote the underlying random graph process at time *t* (namely, after *t* edges have been presented to Builder) by *G**t* (where the number of vertices, *n*, is implicit). As we mentioned earlier, *G**t* is distributed as the uniform random graph *G*(*n*, *t*). The **hitting time** for a monotone graph property P is the (random) minimum time *t* for which *G**t* has P. Builder’s graph at time *t*, denoted *B**t*, is a subgraph of *G**t* on the same vertex set that consists of the edges purchased by Builder by time *t*. A **(*t*, *b*)-strategy** of Builder is a (potentially random) function that, for any *s* ≤ *t*, decides whether to purchase the edge presented at time *s*, given *B**s* − 1, under the limitation that *B**s* has at most *b* edges. Our results ----------- Our first result discusses a strategy for constructing a subgraph with a given minimum degree and, for *k* ≥ 3, of a given vertex-connectivity, as quickly as possible. For every positive integer *k*, denote by *τ**k* the hitting time for minimum degree *k* in the random graph process. It was proved by Erds and Rényi  that $\tau\_k=\frac{n}{2}(\log{n}+(k-1)\log\log{n}+O(1))$ **whp** (see ). Let *κ* denote vertex-connectivity. [Minimum degree and vertex-connectivity at the hitting time][thm:mindeg:hit] For every positive integer *k* there exists a constant *o**k* ∈ (*k*/2, 3*k*/4] such that the following holds. If $b\ge o\_k n+\omega(\sqrt{n}\log{n})$ then there exists a (*τ**k*, *b*)-strategy *B* of Builder such that lim*n* → ∞P(*δ*(*B**τ**k*) ≥ *k*) = 1. If *k* ≥ 3, the same strategy guarantees lim*n* → ∞P(*κ*(*B**τ**k*) ≥ *k*) = 1. For *k* = 1, as we mentioned in the introduction, a sufficient and necessary budget for connectivity at the hitting time is *n* − 1. For *k* = 2, below implies that a budget of *O*(*n*) suffices for 2-connectivity at the hitting time for minimum degree 2. The constant *o**k* in is explicit and computable (see in ; the first three values in the sequence *o**k* are $\frac{3}{4}$, $\frac{11}{8}$ and $\frac{63}{32}$), and is roughly *k*/2 when *k* is large (see ). We believe that at the hitting time *τ**k*, that constant is optimal (see and the discussion following it). However, if we allow the time to be optimal only asymptotically, then an asymptotically optimal budget suffices. [Minimum degree][thm:mindeg] Let *k* be a positive integer and let *ɛ* > 0. If *t* ≥ (1 + *ɛ*)*n*log*n*/2 and *b* ≥ (1 + *ɛ*)*k**n*/2 then there exists a (*t*, *b*)-strategy *B* of Builder such that lim*n* → ∞P(*δ*(*B**t*) ≥ *k*) = 1. The above theorem is tight in the following sense: if *t* ≤ (1 − *ɛ*)*n*log*n*/2 then the underlying random graph process has, **whp**, isolated vertices; and if *b* < *k**n*/2 then every strategy will fail since Builder will not be able to purchase enough edges to obtain the required minimum degree. We remark that after an earlier version of this paper appeared online, a vertex-connectivity version of that holds for every *k* ≥ 2 has been proved by Lichev  (see ). follows from below (for *k* = 1) and from Lichev’s result (for *k* ≥ 2). We keep it here due to its short and simple proof (see ). Note that is *not* implied by Lichev’s result, as implies a strategy that succeeds **whp** *at the hitting time* *τ**k*, whereas Lichev’s strategy typically requires *ω*(*n*) additional steps (which can, as we demonstrate later in and the discussion following it, make a big difference). An asymptotic version of Lichev’s result, according to which for every *ɛ* > 0 there exists *k*0 such that for every *k* ≥ *k*0 there exists a strategy to build a *k*-vertex-connected graph in time (1 + *ɛ*)*n*log*n*/2 and with budget (1 + *ɛ*)*k**n*/2 follows from together with the observation that *o**k* ∼ *k*/2 when *k* is large (). We continue to Hamiltonicity. The classical *hitting-time* result of Bollobás  and independently of Ajtai, Komlós, and Szemerédi  states that the random graph process, **whp**, becomes Hamiltonian as soon as its minimum degree reaches 2. As we reminded earlier, *τ*2 = Θ(*n*log*n*), while a Hamilton cycle has only *n* edges. The following result states that there exists an online algorithm choosing only linearly many edges from the first *τ*2 edges of the random graph process that still obtains a Hamilton cycle **whp**. Moreover, as we show in the proof, this algorithm is *extremely* simple, deterministic, and requires only local information. The theorem recovers – and substantially strengthens – the aforementioned classical result. [Hamiltonicity at the hitting time][thm:ham:hit] There exists *C* > 1 for which there exists a (*τ*2, *C**n*)-strategy *B* of Builder with lim*n* → ∞P(*B**τ*2is Hamiltonian) = 1. The constant *C* we obtain in is large, and we made no serious effort to optimise it. A natural question is whether it could be as low as 1 + *ɛ* for every *ɛ* > 0. The answer, found by Anastos  (see the concluding remarks there), is negative. As part of the proof of, we show that the so-called *random *k*-nearest neighbour graph* (see ) – for large enough constant *k* – is **whp** Hamiltonian (). This partially solves an open problem of the first author \*Problem 45. To prove, we prove a stronger hitting time version of () which states that the *k*-nearest neighbour graph, considered as a process and *stopped as the minimum degree becomes 2*, is **whp** Hamiltonian. We remark that an immediate corollary of (using ) is that if *t* ≥ (1 + *ɛ*)*n*log*n*/2 (namely, slightly after the Hamiltonicity threshold) and *b* ≥ *C**n* (namely, when the budget is inflated by a constant), then there exists a (*t*, *b*)-strategy of Builder that succeeds (**whp**) in building a Hamilton cycle. In an earlier version of this paper that appeared online, we also proved a complementary statement: if *t* ≥ *C**n*log*n*/2 and *b* ≥ (1 + *ɛ*)*n*, then there exists a “successful” (*t*, *b*)-strategy (see ). After that early version appeared online, Anastos  proved, using different and more involved techniques, that Hamiltonicity can be achieved in asymptotically optimal time *and* budget. Namely, he showed that in the above statement, one could take *C* = *C*(*ɛ*) = 1 + *ɛ*. He states this result as a special case in a model that generalises both the budget model discussed here and the so-called Achlioptas process (see discussion in ). We leave here our original proof due to its relative shortness and simplicity (see ). Note that does not follow from the work of Anastos. For perfect matchings, we show that there exists an asymptotically-optimal-time asymptotically-optimal-budget strategy. [Perfect matchings][thm:pm] Suppose *n* is even. For every *ɛ* > 0 if *t* ≥ (1 + *ɛ*)*n*log*n*/2 and *b* ≥ (1 + *ɛ*)*n*/2 then there exists a (*t*, *b*)-strategy *B* of Builder such that lim*n* → ∞P(*B**t*has a perfect matching) = 1. An earlier version of this paper that appeared online contained a non-optimal version of (analogous to ). Recently, Anastos  proved a more general version of (see remark after ), using different and more involved techniques. The proof we provide here is more elementary. The next two theorems discuss optimal strategies for purchasing small subgraphs. We resolve the problem whenever the target subgraph is a fixed tree or a cycle. [Small trees][thm:trees] Let *k* ≥ 3 be an integer and let *T* be a *k*-vertex tree. If *t* ≥ *b* ≫ max{(*n*/*t*)*k* − 2, 1} then there exists a (*t*, *b*)-strategy *B* of Builder such that lim*n* → ∞P(*T* ⊆ *B**t*) = 1 and if *b* ≪ (*n*/*t*)*k* − 2 then for any (*t*, *b*)-strategy *B* of Builder, lim*n* → ∞P(*T* ⊆ *B**t*) = 0. For a visualization of see. [t!] [ xmin=0, xmax=1.5, ymin=0, ymax=1, x=5cm,y=5cm, samples=100, axis y line=center, axis x line=middle, xlabel=log*n**t*, xtick=0,1/2,2/3,3/4,4/5,1, xticklabels=0,1/2,2/3,,4/5,1, ylabel=log*n**b*, ytick=0,1/2,2/3,3/4,4/5,1, yticklabels=0,1/2,2/3,,4/5,1, every axis x label/.style= at=(ticklabel\* cs:1.05), anchor=west,, every axis y label/.style= at=(ticklabel\* cs:1.05), anchor=south,, reverse legend, ] ; ; ; ; ; [fig:trees] We show in the proof of that if *t* ≫ *n* then, in fact, there exists a (*t*, *b*)-strategy for *b* = *k* − 1 that succeeds **whp** in building a copy of a *k*-vertex tree. [Short cycles][thm:cycles] Let *k* ≥ 1 be an integer and let *H* = *C*2*k* + 1 or *H* = *C*2*k* + 2. Write $b^\*=b^\*(n,t,k)=\max\{n^{k+2}/t^{k+1},n/\sqrt{t}\}$. If *t* ≫ *n* and *b* ≫ *b*\* then there exists a (*t*, *b*)-strategy *B* of Builder such that lim*n* → ∞P(*H* ⊆ *B**t*) = 1,  and if *t* ≪ *n* or *b* ≪ *b*\* then for any (*t*, *b*)-strategy *B* of Builder, lim*n* → ∞P(*H* ⊆ *B**t*) = 0. For a visualization of see. For discussion on the difficulty arising in handling graphs with more than one cycle see. [t] [ xmin=0, xmax=1, ymin=0, ymax=1, x=5cm,y=5cm, samples=100, axis y line=center, axis x line=middle, xlabel=log*n**t*, xtick=0,1/9,1/7,1/5,1/3,1, extra x ticks=0, xticklabels=0,,,6/5,4/3,2, extra x tick labels=1, ylabel=log*n**b*, ytick=1,1/2,1/2-1/18,1/2-1/10,1/3,0, yticklabels=1,1/2,,2/5,1/3,0, every axis x label/.style= at=(ticklabel\* cs:1.05), anchor=west,, every axis y label/.style= at=(ticklabel\* cs:1.05), anchor=south,, reverse legend, ] ; ; ; ; ; ; ; ; ; [fig:cycles] Tools and techniques -------------------- #### Expanders *Expanders* are graphs in which (sufficiently) small sets expand. Namely, these are graphs in which the neighbourhood of each small set is larger than that set by a constant factor[4](#fn4). It is well known that connected expanders are helpful in finding Hamilton cycles (see ); more concretely, connected non-hamiltonian expanders have many “boosters”, namely, non-edges whose addition to the graph creates a graph that is either Hamiltonian or whose longest path is longer (for a comprehensive account, we refer the reader to ). Thus, expanders will play a crucial role in the proof of (and also in the proof of ): Builder will attempt to construct (sparse, and therefore cheap) connected expanders within the random graph process. A standard method for obtaining sparse expanders in random graphs is choosing an appropriate random (sub)graph model. Natural candidates are discussed in the following paragraphs. #### Random *k*-out graphs In the goals described in, Builder must achieve a certain minimum degree in his (spanning) graph. The (standard) random graph process is quite wasteful in this regard, as to avoid isolated vertices, a superlinear number of edges must arrive. Thus, a wise Builder would instead construct a much sparser subgraph of the random graph process with the desired minimum degree. A classical sparse graph with a given minimum degree is the *random regular graph* (see, e.g., in ). However, such a graph is generally very hard to construct online. A much simpler alternative is the so-called *random *k*-out graph* (see, e.g., in ). In the *k*-out graph, each vertex chooses, random, independently, and without repetitions, *k* neighbours to connect to. Thus, the total number of edges in a *k*-out graph is at most *k**n*, and the minimum degree is at least *k*. Unfortunately, *k*-out graphs are not generally subgraphs of the random graph process at the hitting time for minimum degree *k*. We will therefore analyse a different model, which can be considered the undirected counterpart of the *k*-out graph (see the next paragraph). Nevertheless, we will exploit the simplicity of the *k*-out graph to analyse its (slightly more complicated) undirected counterpart. #### Random *k*-nearest neighbour graphs Suppose the edges of the complete graph are endowed with (potentially random) “lengths”. The graph made up only of the *k* shortest edges incident with each vertex is called the *random *k*-nearest neighbour graph*, and has been studied in various, primarily geometric, contexts (see, e.g., ). When the weights are independent uniform random variables supported on [0, 1], the model becomes a symmetric random graph[5](#fn5), which we denote by O*k* (see ). To prove, we devise a simple strategy that emulates O*k*. Since the minimum degree of O*k* is obviously *k*, the statement would follow from a theorem of Cooper and Frieze  according to which the number of edges of O*k* is at most *c**n* for some concrete constant *c* = *c*(*k*) < *k* which is roughly *k*/2 when *k* is large (see ). By observing that O*k* is stochastically dominated by the random *k*-out graph (), we prove that it is a connected () expander (), and use that to prove that it is Hamiltonian (). We use that helpful fact in several places: when aiming to create a Hamilton cycle at the hitting time and with inflated budget, we construct a subgraph of O*k* that is already Hamiltonian; when creating a Hamilton cycle under an asymptotically optimal budget, we emulate copies of O*k* in small parts of the graph, to allow us to *absorb* a long path into a Hamilton cycle; finally, we use a similar approach for the construction of perfect matchings. #### High-level arguments for the containment of fixed graphs We give brief proof outlines for. The proof for the upper bound on the budget threshold for trees is essentially inductive: given a fixed *k*-vertex tree *T*, we let *T*ʹ be obtained from *T* by removing a leaf. The inductive argument is that Builder can construct sufficiently many copies of *T*ʹ while leaving enough time to extend some of them into copies of *T*. The proof for the lower bound is based on a similar idea: we show that Builder cannot construct a connected component of size at least *k*. In order to construct such a component, he needs to construct enough smaller components so that in the remaining time, he will, **whp**, connect two of them so that the resulting component will be large enough. The strategy for obtaining a cycle of length ℓ = 2*k* + 1 goes through the construction of many “traps”, namely, non-edges whose addition to Builder’s graph would create the desired cycle. An optimised way to construct many such traps quickly is by constructing *r* *d*-ary trees of depth *k* (for a correct choice of *r*, *d*). If *d* is large, most pairs of leaves in such a tree are connected by a path of length 2*k*, and thus form a trap. The argument for a cycle of length ℓ = 2*k* + 2 is similar. We complement the upper bound with two lower bounds; each matches the upper bound in a different regime. We first show a “universal” lower bound, based on the straightforward observation that the number of traps Builder has in his graph is bounded by *O*(*b*2), as he has at most 2*b* vertices of positive degree in his graph. A more involved lower bound fits the earlier regimes (lower values of *t*). To this aim, we observe that Builder’s graph is typically Θ(1)-degenerate. Then, we use this observation, together with an estimate on the number of paths of a fixed length that contain a vertex of “high” degree, to bound from above the number of paths of length ℓ (and hence the number of traps). Related work ------------ The study of random graph processes is at the heart of the theory of random graphs, providing a dynamic point of view on their evolution. One of the main questions is to determine the thresholds of monotone increasing graph properties or their (more refined) hitting times. Classical results of this sort include the thresholds of minimum degree and vertex-connectivity *k* , the appearance of a “giant” component , Hamiltonicity  and the containment of fixed subgraphs . For a comprehensive coverage of the topic, we refer the reader to the books of Bollobás , Janson, Łuczak and Ruciński , and Frieze and Karoński . In our context, an obvious necessary condition for the existence of a winning strategy for Builder is that the “time” is above the threshold (or at least at the hitting time), as otherwise, the underlying graph process is not guaranteed to have the desired property **whp**. Evidently, if *b* = *t* (namely, if the budget equals the time), and *t* is (sufficiently) above the threshold of the target property, then Builder has a winning strategy: the naive one that purchases each observed edge. Since in many cases Builder can do better, the model can be seen as an extension of the “standard” random graph process. In the last couple of decades, partly inspired by the remarkable work of Azar, Broder, Karlin and Upfal  on *balanced allocations*, there has been a growing interest in *controlled* random processes. In the context of graph processes, an algorithm is provided with a random flow of edges (usually, but not always, the random graph process) and with (the *offline* version) or without (the *online* version) “peeking into the future”, makes a decision that handles that flow by accepting/rejecting edges, by colouring them, or by other means. We mention several related models that fall into this category, mainly in the online version. The *Achlioptas process*, proposed by Dimitris Achlioptas in 2000, is perhaps the most studied controlled random graph process. In this process, the algorithm is fed by a stream of random *k*-sets of edges (with or without repetitions) and should pick, immediately and irrevocably (“online”), a single edge to accept, rejecting the others. The algorithm’s goal is to make the graph of accepted edges satisfy some monotone increasing (decreasing) graph property while minimising (maximising) the total number of rounds. Early work on this model  treated the original question posed by Achlioptas of *avoiding* a giant component for as long as possible. Other works  considered the opposite of the original question, namely, the question of accelerating the appearance of the giant (see also for a general framework). The model has also been studied for other objectives, such as avoiding  or creating  a fixed subgraph or obtaining a Hamilton cycle . Several variants have also been studied, such as an *offline* version, in which the algorithm sees all *k*-sets of edges at the beginning of the process (see, e.g., ); and a *memoryless* version, in which the algorithm’s decision may not depend on its previous decisions (see ). A common generalisation of the Achlioptas process and the process considered in this paper, where an algorithm should pick *at most* a single edge to accept/purchase subject to a restricted “budget”, was recently studied by Anastos in  (see remarks after ). The *semi-random process*, proposed by the third author in 2016, is a variation of the Achlioptas process in which the algorithm is fed by a stream of random spanning stars (with repetitions) instead of *k*-sets of edges. As with the Achlioptas process, the algorithm must immediately and irrevocably pick a single edge to accept. Work on this model treats monotone properties such as containment of fixed subgraphs , minimum degree and vertex-connectivity , perfect matchings , Hamilton cycles  and bounded degree spanning graphs . Another model that fits in this setting was studied by Frieze and Pegden  and by Anastos  under the name “purchasing under uncertainty”. In their model, whenever an edge arrives, it is given an independent random “cost”, and the algorithm has to decide whether to purchase that edge, aiming to pay the minimum total cost required to obtain a desired graph property. In a Ramsey-type version of controlled random processes, incoming edges are coloured (irrevocably) by an online algorithm. The algorithm aims to avoid, or to create, a monochromatic property. In , the triangle-avoidance game for up to 3 colours is discussed. This was extended to any fixed cyclic graph and any number of colours in . The model was further studied in the contexts of the giant component  and Hamilton cycles . Finally, we would like to mention a related adaptation of the *two-stage optimisation with recourse* framework for the *minimum spanning tree* model . Here, every edge of the complete graph is given an independent random “Monday” cost and another independent random “Tuesday” cost. The algorithm sees all Monday costs and decides (immediately and irrevocably) which edges to incorporate into its (future) spanning tree. Then, Tuesday costs are revealed, and the algorithm uses them to complete his (current) forest into a spanning tree. The algorithm’s goal is to minimise the total cost of edges in his constructed tree. Paper organisation and notation ------------------------------- The rest of the paper is organised as follows. In we introduce some preliminaries. is devoted to the random *k*-nearest neighbours graph and to the *k*-out graph. contains the proofs of, and contains the proofs of. In the final section,, we mention a few relevant open problems. Throughout the paper, all logarithms are in the natural basis. If *f*, *g* are functions of *n* we write *f* ≼ *g* if *f* = *O*(*g*), *f* ≪ *g* if *f* = *o*(*g*), *f* ≍ *g* if *f* = Θ(*g*), and *f* ∼ *g* if *f* = (1 + *o*(1))*g*. For two vertices *u*, *v* of a graph we write *u* ∼ *v* to denote that they are neighbours. For simplicity and clarity of presentation, we often make no particular effort to optimise the constants obtained in our proofs and omit floor and ceiling signs whenever they are not crucial. Preliminaries ============= Concentration inequalities -------------------------- We will make use of the following version of Chernoff bounds (see, e.g., in, \*Chapter 2). [Chernoff bounds][thm:chernoff] Let *n* ≥ 1 be an integer and let *p* ∈ [0, 1], let *x* ∼ Bin(*n*, *p*), and let *μ* = E*x* = *n**p*. Then, for every *a* > 0, $${\mathbb{P}}(x\le \mu-a) \le \exp\left(-\frac{a^2}{2\mu}\right),\qquad {\mathbb{P}}(x\ge \mu+a) \le \exp\left(-\frac{a^2}{2(\mu+a/3)}\right).$$ The following are trivial yet useful bounds. [cl:bin:lowtail] Let *n* ≥ 1 be an integer and let $p\in[0,\frac{1}{2}]$, and let *x* ∼ Bin(*n*, *p*). Write *q* = 1 − *p* and let 1 ≤ *k* ≤ *n**p*/*q* be an integer. Then $${\mathbb{P}}(x\le k) \le \left(\frac{enp}{kq}\right)^k e^{-np}.$$ By the binomial theorem, for every *α* ∈ (0, *p*/*q*], $$(1+\alpha)^n = \sum\_{i=0}^n \binom{n}{i}\alpha^i \ge \sum\_{i=0}^k \binom{n}{i} \left(\frac{p}{q}\right)^i \left(\frac{\alpha q}{p}\right)^k,  $$  hence   $$  {\mathbb{P}}(x\le k)   = \sum\_{i=0}^k \binom{n}{i}\left(\frac{p}{q}\right)^i q^n   \le \frac{(1+\alpha)^n p^k}{\alpha^kq^k} \cdot (1-p)^n   \le \frac{e^{\alpha n} p^k}{\alpha^kq^k} \cdot e^{-np}.  $$  Taking *α* = *k*/*n* ≤ *p*/*q* we obtain   $$  {\mathbb{P}}(x\le k)   \le \left(\frac{enp}{kq}\right)^k \cdot e^{-np}.\qedhere  $$ [cl:bin:uptail] Let *x* ∼ Bin(*n*, *p*) with *μ* = *n**p* and let 1 ≤ *k* ≤ *n*. Then $${\mathbb{P}}(x\ge k) \le \left(\frac{enp}{k}\right)^k.$$ For a proof see, e.g., . The random graph process ------------------------ Recall that *G**t* denotes the random graph process at time *t*. The next lemma essentially states that in *G**t*, where *t* is superlinear (but not too large), there are no small dense sets. [lem:span] Suppose that *R* ≫ 1 and *n*− 5*R*2*t*3 ≤ *α* < 1/8. Then, in *G**t*, **whp**, every vertex set of size *r* ≤ *R* spans at most 3*r* edges. Let $N=\binom{n}{2}$ and *p* = *t*/*N*. For a set *S* of *s* edges the probability that *S* is contained in *G**t* is at most *p**s*. By the union bound, the probability that there exists a set *U* of size *r* that spans at least 3*r* edges is at most $$\binom{n}{r}\binom{\binom{r}{2}}{3r}p^{3r} \le \left(n r^2p^3\right)^r.$$ We note that *n**R**p*3 ≍ *n*− 5*R**t*3 ≤ *α*/*R* ≪ 1 and *n**R*2*p*3 ∼ 8*n*− 5*R*2*t*3 ≤ 8*α*. By the union bound over 2 ≤ *r* ≤ *R*, the probability that there exists such a set is at most $$\sum\_{r=2}^R \left(n r^2p^3\right)^r \le \sum\_{r=2}^{\sqrt{R}} \left(nRp^3\right)^r + \sum\_{r=\sqrt{R}}^{R} \left(nR^2p^3\right)^r \le \sum\_{r=2}^{\sqrt{R}} (o(1))^r + \sum\_{r=\sqrt{R}}^R (8\alpha^r) = o(1),$$ and the claim follows. The lemma easily implies the following useful fact. [cl:degen] Suppose 1 ≪ *b*2 ≤ *n*5/(40*t*3). Then, **whp**, every subgraph of *G**t* with at most *b* edges is 6-degenerate. Let *B* be a subgraph of *G**t* with at most *b* edges. Thus, *B* has at most 2*b* non-isolated vertices. Apply with *R* = 2*b* and *t* (we can do so as *b* ≫ 1 and *n*− 5(2*b*)2*t*3 ≤ 1/10). We therefore obtain that, with high probability, for every *q* ≥ 0, every subgraph of *B* on *q* vertices spans at most 3*q* edges, and hence has minimum degree of at most 6, and the result follows. We introduce more notation. For *d* ≥ 1, let *X**d**t* be the set of vertices in *G**t* with degree less than *d*, and let *Y**d**t* be its complement. The next lemmas state that in time linear in *n* there is a transition between “most vertices are of degree less than *d*” and “most vertices are of degree at least *d*”. [lem:gnt:large] Let *d* ≥ 1 be an integer. Then, deterministically, ∣*Y**d**t*∣ ≤ 2*t*/*d*. It follows from *t* = ∣*E*(*G**t*)∣ ≥ ∣*Y**d**t*∣ ⋅ *d*/2. [lem:gnt:small] For every integer *d* ≥ 1, if *t* ≥ 6*d**n* then ∣*X**d**t*∣ ≤ *n*/100 **whp**. When proving statements about *G**t* it is often convenient to prove them for *G* ∼ *G*(*n*, *p*) with $p\sim t/\binom{n}{2}$ instead. We could then translate the results back to *G**t* using standard methods (see, e.g., in ). Let *t* = 6*d**n* (the result would follow for *t* ≥ 6*d**n* due to monotonicity). Write $p=12d/n\sim t/\binom{n}{2}$ and *m* = *n*/100. If ∣*X**d**t*∣ > *m* then there exists a vertex set *V*0 with ∣*V*0∣ = *m* such that $|E(V\_0,\overline{V\_0})|<md$. Let $x=|E(V\_0,\overline{V\_0})|$ and note that *x* ∼ Bin(*m*(*n* − *m*), *p*). By and the union bound over the choice of *V*0, the probability that the statement of the lemma does not hold is at most $$\begin{aligned} \binom{n}{m}{\mathbb{P}}(x<md) &\le \left(\frac{en}{m}\right)^m \left(\frac{em(n-m)p}{md(1-p)}\right)^{md} e^{-m(n-m)p}\\ &\le \left(\frac{en}{m}\right)^m \left(\frac{enp}{d}\right)^{md} e^{-0.99mnp} \le \left(100e\cdot\left(12e\cdot e^{-11}\right)^d\right)^m = o(1).\qedhere \end{aligned}$$ We will also use the next two known results about *τ**d*. [][lem:gnt:deg:hit] For every fixed *d* ≥ 1, *τ**d* = *n*(log*n* + (*d* − 1)loglog*n* + *x*)/2, where *x* = *O*(1) **whp**. For a proof of the next lemma see, e.g., . [lem:gnt:small:far] Let *d* ≥ 1 be an integer. Then, **whp**, ∣*X**d**τ*2∣ ≤ *n*0.5, the distance between two vertices in *X**d**τ*2 is at least 5, and no vertex in *X**d**τ*2 lies in a cycle of length at most 4. Finally, we would need the following lemma on the size of the *k*-core of a random graph. The ***k*-core** of a graph is its (unique) maximal subgraph of minimum degree at least *k*. [lem:gnt:core] For every *k* ≥ 1 the size of the *k*-core of *G**t*, for *t* = 8*k**n*, is **whp** at least *n*/2. We prove the statement for *G*(*n*, *p*) with $p=16 k/n\sim t/\binom{n}{2}$. First we show that **whp** the number of edges between any set of size *n*/2 and its complement is at least *k**n*. Indeed, let *V*0 be a fixed vertex set of size *n*/2. Then, $x=|E(V\_0,\overline{V\_0})|\sim{{\mathsf{Bin}}}\left(n^2/4,p\right)$, hence E*x* = 4*k**n*. By Chernoff bounds (), ${\mathbb{P}}(x\le kn) \le \exp(-\frac{9}{8}kn)$. By the union bound over all choices of *V*0, the probability that there exists such a set is at most $2^n\cdot\exp(-\frac{9}{8}kn)=o(1)$. Now, suppose the *k*-core is smaller than *n*/2. Consider the process of removing vertices of degree smaller than *k* one by one. By definition of the *k*-core, this process lasts more than *n*/2 steps. Stop the process exactly after *n*/2 steps. At this point, the remaining set ∣*V*0∣ is of size *n*/2. On the other hand, we have removed less than *k* ⋅ *n*/2 edges, hence $|E(V\_0,\overline{V\_0})|<kn$, and we have seen that **whp** there is no such set. Rotations, expanders, and boosters ---------------------------------- In our proofs we shall repeatedly use the so-called *rotation–extension* technique of Pósa . Given a longest path *P* = (*v*0, …, *v**t*) we say that *P*ʹ obtained from *P* by an **elementary rotation** of *P* (with *v*1 fixed) if *P*ʹ = (*v*0, …, *v**i* − 1, *v**i*, *v**j*, *v**j* − 1, …, *v**i* + 1) (*j* > *i*). We let R(*P*) denote the set of endpoints of paths obtained from *P* by a (finite) sequence of elementary rotations. We will use the following classical lemma of Pósa: [Pósa’s lemma ][lem:Posa] Let *G* be a graph and let *P* be a longest path in *G*. Then ∣*N*(R(*P*))∣ ≤ 2∣R(*P*)∣ − 1. Say that a graph *G* = (*V*, *E*) is an ***R*-expander** if every set *U* ⊆ *V* with ∣*U*∣ ≤ *R* has ∣*N*(*U*)∣ ≥ 2∣*U*∣. For an *n*-vertex graph *G* denote by *λ*(*G*) the length of a longest path in *G*, or *n* if *G* is Hamiltonian. A non-edge *e* of *G* is called a **booster** if *λ*(*G* + *e*) ≥ min{*λ*(*G*) + 1, *n*}. The following lemma is a corollary of. For a proof see, e.g., . [lem:boosters] Let *G* be a connected *R*-expander which contains no Hamilton cycle. Then *G* has at least (*R* + 1)2/2 boosters. We use by *sprinkling*: we begin with a connected expander and add random edges that hit boosters and advance the expander towards Hamiltonicity. Occasionally, the expander we handle is not guaranteed to be connected. This is sometimes not a problem, as random edges can first connect the expander, and then hit boosters. We summarise this in the following lemma: [Sprinkling][lem:sprinkling] Let *β* > 0 and let *G* be an *n*-vertex *β**n*-expander. Then there exists *C* = *C*(*β*) > 0 such that the addition of *C**n* random non-edges to *G* makes it **whp** Hamiltonian. Since *G* is a *β**n*-expander, each of its components is of size at least 3*β**n*, hence its number of connected components is at most 1/(3*β*). Assume first that *G* is not connected (and hence *β* ≤ 1/6). Let (*e*1, …, *e*ℓ) be a random permutation of the non-edges of *G*. For 0 ≤ *i* ≤ ℓ, write *G**i* = *G* ∪ {*e*1, …, *e**i*}. As long as *G**i* is not connected, there are at least 3*β*(1 − 3*β*)*n*2 ≥ *β*2*n*2 edges that connect distinct components, hence the probability that a random edge connects distinct components is at least *β*2. Let *t*1 ≫ 1 be an integer, and consider the first *t*1 random edges. For *i* ∈ [*t*1] let *x**i* be the indicator of the event that the *i*’th edge connects distinct components, or that *G**i* − 1 is already connected. Let *x* = ∑*i* = 1*t*1*x**i* and note that *x* ≥ 1/(3*β*) implies that *G**t*1 is connected. Since *x* stochastically dominates a binomial random variable with *t*1 attempts and success probability *β*2, it follows from that the probability that *x* ≤ 1/(3*β*) is *o*(1). Thus, after adding *t*1 random non-edges, the graph becomes **whp** connected. Now, using and the monotonicity of expansion, we know that *G**t*1 is either Hamiltonian (in which case we are done) or it has at least $\beta^2\binom{n}{2}$ boosters. Thus, the probability that a random edge hits a booster is at least *β*2. Let *t*2 = *τ**n* be an integer with *τ* > *β*− 2, and consider the next *t*2 random edges. For *i* ∈ {*t*1 + 1, …, *t*1 + *t*2} let *y**i* be the indicator of the event that the *i*’th edge hits a booster, or that *G**i* − 1 is already Hamiltonian. Let *y* = ∑*i* = 1*t*1*y**i*, and note that *y* ≥ *n* implies that *G**t*1 + *t*2 is Hamiltonian. Since *y* stochastically dominates a binomial random variable with *t*2 attempts and success probability *β*2, it follows from that the probability that *y* ≤ *n* is *o*(1). Thus, after adding *t*2 further random non-edges, the graph becomes **whp** Hamiltonian. We also use to show that in Hamiltonian expanders, the set of endpoints of Hamilton paths whose other endpoint is fixed is large. [lem:manyends] Let *G* be a Hamiltonian *R*-expander and let *v* be a vertex of *G*. Then, the number of endpoints of Hamilton paths of *G* whose other endpoint is *v* is at least *R*. Since *G* is Hamiltonian, there exists a Hamilton path *P* with *v* as an endpoint. By, ∣*N*(R(*P*))∣ ≤ 2∣R(*P*)∣ − 1; thus ∣R(*P*)∣ > *R*. We show that the greedy strategy works “well” for purchasing a large *k*-matching; namely, a subgraph of maximum degree *k* in which all but a few vertices are of degree *k*. This will be useful in the proofs of and. [lem:kmatching] Let *k* be a positive integer and let *ɛ* > 0. Then, there exists *C* = *C*(*k*, *ɛ*) > 0 such that if *t* ≥ *C**n* and *b* ≥ (*k* − *ɛ*)*n*/2 then there exists a (*t*, *b*)-strategy of Builder (that succeeds **whp**) that purchases a graph with maximum degree *k* in which all but at most *ɛ**n* vertices are of degree *k*. Builder follows the greedy strategy; that is, he purchases every edge both of whose endpoints are of degree below *k*. Observe that this strategy ensures that the maximum degree of Builder’s graph is at most *k* at any stage. Let *U* denote the (dynamic) set of vertices of degree below *k*. Let *C* = *k**ɛ*− 2 and let *t* ≥ *C**n*. For *i* = 1, …, *t* let *x**i* be the indicator of the event that the *i*’th edge arriving is contained in *U* (and thus purchased by Builder) *or* ∣*U*∣ ≤ *ɛ**n*. The probability for this event is at least  ∼ *ɛ*2. Thus, *x* :  = ∑*x**i* stochastically dominates a binomial random variable with mean  ∼ *ɛ*2*C**n* = *k**n*. Therefore, the probability that *x* < (*k* − *ɛ*)*n*/2 is, by Chernoff bounds (), *o*(1). On the event that *x* ≥ (*k* − *ɛ*)*n*/2, either ∣*U*∣ ≤ *ɛ**n*, or Builder has purchased at least (*k* − *ɛ*)*n*/2 edges, in which case it follows that ∣*U*∣ ≤ *ɛ**n*. Obviously, by that time Builder has purchased at most (*k* − *ɛ*)*n*/2 edges. Given a vertex set we may use below to construct a spanning graph of an arbitrary fixed minimum degree (see ). That requires, however, that the time will be long enough so that every vertex will have the desired degree in the underlying graph process. If we are satisfied with a *small* graph of a given minimum degree, we can do it in linear time. The following corollary follows directly from. [cor:core] Let *k* ≥ 1 be an integer and let *b* = *t* = 8*k**n*. Then there exists a (*t*, *b*)-strategy of Builder (that succeeds **whp**) that purchases a graph whose *k*-core is of size at least *n*/2. Suppose the edges of the complete graph are endowed with independent uniform random “lengths” in [0, 1]. The (random) graph obtained by retaining (only) the *k* shortest edges incident with each vertex is called the **random *k*-nearest neighbour graph** (see ). We denote it by O*k* (or by O*k*(*n*) if we wish to emphasize that it is on *n* vertices). As we remarked earlier, the choice of uniform distribution supported on [0, 1] is arbitrary; any distribution without atoms would yield the same random graph (see ); we use this fact in the proof of. The following theorem of Cooper and Frieze  (stated here in a weaker form) will be useful for us in several places in this paper. [\*Theorem 1.1][thm:ok:size] For every constant *k*, the number of edges of O*k* is **whp** $$kn - \frac{n(n-1)}{2n-3}\sum\_{1\le i\le j\le k} \frac{\binom{n-2}{i-1}\binom{n-2}{j-1}}{2^{\delta(i,j)}\binom{2n-4}{i+j-2}} + O(\sqrt{n}\log{n}),$$ where *δ*(*i*, *j*) is the Kronecker delta. It would be useful for us to state in an “asymptotic” form. To this end, we complement the analysis of Cooper and Frieze by obtaining an asymptotic estimate of the expected size of O*k*. Let $$\label{eq:ok} o\_k = k-\frac{1}{4}\sum\_{i,j=0}^{k-1}\binom{i+j}{i}2^{-(i+j)}.$$ [cor:ok:size] For every integer *k* ≥ 1 the number of edges of O*k* is **whp** $o\_k n + O(\sqrt{n}\log{n})$. Moreover, *k*/2 < *o**k* ≤ 3*k*/4, and lim*k* → ∞*o**k*/*k* = 1/2. First note that for every nonnegative integer *k*, $$\label{eq:coins} \sum\_{i=0}^k \binom{i+k}{i}2^{-(i+k)} = 1.$$ Indeed, consider a sequence of fair coin flips that stops whenever *k* + 1 heads *or* *k* + 1 tails are encountered. Let *x* be the number of tails if the sequence stopped by encountering *k* + 1 heads, or the the number heads otherwise. Thus, *x* is supported on {0, …, *k*} and ${\mathbb{P}}(x=i) = \binom{i+k}{i}2^{-(i+k)}$. For an integer *k* ≥ 0 write $f(k)=\sum\_{i,j=0}^k\binom{i+j}{i}2^{-(i+j)}$ (so *o**k* = *k* − *f*(*k* − 1)/4). Note that for *k* ≥ 1, by, $$f(k) = f(k-1) + 2\sum\_{i=0}^k\binom{i+k}{i}2^{-(i+k)} - \binom{2k}{k}2^{-2k} = f(k-1) + 2 - \Theta(k^{-1/2}).$$ Thus, *f*(0) = 1, *f*(*k*) ≤ 1 + 2*k* for *k* ≥ 0 and $$\label{eq:coins:lim} \lim\_{k\to\infty} \frac{f(k)}{k} = 2.$$ It follows from that the number of edges of O*k* is, **whp**, $$\begin{aligned} & \left(k - \frac{1}{2} \sum\_{1\le i\le j\le k} \binom{i+j-2}{i-1}2^{-(i+j-2)-\delta(i,j)}\right)n + O(\sqrt{n}\log{n})\\ &= \left(k - \frac{1}{4} \sum\_{i,j=0}^{k-1} \binom{i+j}{i} 2^{-(i+j)}\right)n + O(\sqrt{n}\log{n}) = o\_kn + O(\sqrt{n}\log{n}). \end{aligned}$$ Evidently, *f*(*k*) is increasing; thus, $\frac{k}{2}+\frac{1}{4} \le o\_k\le 3k/4$. In addition, implies that *o**k*/*k* → 1/2 as *k* → ∞. One of the main results of , that we conveniently use here, is that for *k* ≥ 3, O*k* is **whp** *k*-vertex-connected. Cooper and Frieze also showed (there) that the probability that O2 is connected is bounded away from 0 and 1. [\*Theorem 1.4][thm:ok:conn] For *k* ≥ 3, O*k* is **whp** *k*-vertex-connected. Our main goal in this section is to prove that for large enough *k*, O*k* is **whp** Hamiltonian. [thm:ok:ham] There exists *k**H* > 0 such that for every *k* ≥ *k**H*, O*k* is **whp** Hamiltonian. As mentioned in the introduction, this partially resolves \*Problem 45. It is conjectured that *k**H* = 3 or *k**H* = 4. In fact, we need (and hence prove) a stronger result that immediately implies. To state it, consider the following equivalent way of generating O*k* (see ). Given a uniform random ordering of the edges of the complete graph *K**n*, we consider them one by one. When an edge arrives, we add it to O*k* if and only if one of its endpoints is incident to a vertex that is — at this point — of degree less than *k*. When we only add edges that arrived by time *t* for some fixed $0\le t\le \binom{n}{2}$, we obtain a subgraph of O*k* that we denote by O*k**t*. This process yields a coupling between the random graph process at time *t* and O*k**t*. Under this coupling, which we call the **natural coupling**, O*k**t* is a subgraph of *G**t*. For convenience, we denote O*k*, *d* = O*k**τ**d* for every 0 ≤ *d* ≤ *k*. Note that as long as the minimum degree of the random graph process is at most 1, it cannot contain a Hamilton cycle; thus, for *t* < *τ*2, O*k**t* is not Hamiltonian. [thm:ok:ham:hit] There exists *k**H* > 0 such that for every *k* ≥ *k**H*, O*k*, 2 is **whp** Hamiltonian. As mentioned in the introduction, a classical result due to Bollobás  and independently Ajtai, Komlós, and Szemerédi  is a hitting-time result, which states that, **whp**, the random graph process will contain a Hamiltonian cycle as soon as its minimum degree reaches 2. refines that result by showing that there exists an *online* algorithm that can choose a small number of edges (linear in the number of vertices) from the first *τ*2 edges of the random graph process that induce a Hamiltonian cycle **whp**. Furthermore, this algorithm is deterministic, simple, and only requires local information, namely, the degrees of the endpoints of the considered edge. The rest of this section is devoted to the proof of, which implies, in turn, (see ). While the proof shares some ideas with previous proofs of the hitting time result for Hamiltonicity of the random graph process (see, e.g., ), it still has its twists and turns to achieve the goal. In particular, we find a way around the common method of *random sparsification* accompanied by *sprinkling* (which is not available to us in this setting) by casting our argument directly on the (already sparse) graph and finding boosters inside it using a sophisticated argument. We begin with a couple of helpful observations. The following is immediate: [obs:ok:deg] For integers 0 ≤ *d* < *k*, time *t* and a vertex *v*, under the natural coupling between *G**t* and O*k**t* one has *d*O*k**t*(*v*) = *d* if and only if *d**G**t*(*v*) = *d*. The next observation that we will need for this goal is that O*k* is stochastically dominated by the **random *k*-out graph** (see, e.g., in ): the random graph whose edges are generated independently for each vertex by a random choice of *k* distinct edges incident to that vertex. For brevity, denote it by G*k*. Note that G*k* has roughly *k**n* edges, whereas, as stated in, O*k* is nearly twice as sparse (for large *k*). [obs:ok:kout] O*k* is stochastically dominated by G*k*. For every ordered pair of distinct vertices *u*, *v* let **w**(*u*, *v*) be a uniform [0, 1] random weight assigned to the ordered pair (*u*, *v*). Then, G*k* is obtained by adding the edge {*u*, *v*} whenever **w**(*u*, *v*) is one of the *k* smallest weights in the family {**w**(*u*, *w*)}*w*. For every (unordered) pair of distinct vertices *u*, *v* let **x**{*u*, *v*} = min{**w**(*u*, *v*), **w**(*v*, *u*)}. Observe crucially that while **x** is not uniform, it has a continuous density function on [0, 1] and, more importantly, is independent for distinct edges. Thus, O*k* is obtained by including the edge {*u*, *v*} whenever **x**{*u*, *v*} is one of the *k* smallest weights in the family {**x**{*u*, *w*}}*w*. Let us now show that under this coupling, O*k* ⊆ G*k*. Let {*u*, *v*} ∈ O*k* and assume without loss of generality that **x**{*u*, *v*} = **w**(*u*, *v*). Pick a vertex *w* for which **x**{*u*, *v*} < **x**{*u*, *w*} (there are at least *n* − 1 − *k* such vertices). Then, **w**(*u*, *v*) = **x**{*u*, *v*} < **x**{*u*, *w*} ≤ **w**(*u*, *w*). It follows that there are at least *n* − 1 − *k* vertices *w* for which **w**(*u*, *v*) < **w**(*u*, *w*), hence {*u*, *v*} ∈ G*k*. We now show that for *k* ≥ 12, O*k*, 2 is **whp** an expander[6](#fn6). [lem:ok:exp] There exists *β* > 0 such that for *k* ≥ 12, O*k*, 2 is **whp** a *β**n*-expander. We prove it for *k* = 12 and derive the statement for every *k* ≥ 12 due to monotonicity. Fix *b* = *k*/11 > 1. We first prove that there exists *γ* > 0 for which, **whp**, no vertex set *S* in O*k*, 2 of cardinality *s* ≤ *γ**n* spans more than *b**s* edges. Since O*k*, 2 is stochastically dominated by O*k* and (by ) O*k* is stochastically dominated by G*k*, we may prove that statement for G*k*. Note that the probability that a given edge appears in G*k* is at most twice the probability that a given vertex chooses a given incident edge (which is *k*/(*n* − 1)). By the union bound over the choice of *S* and the choices of the spanned edges, that probability is at most $$\sum\_{s=2}^{\gamma n} \binom{n}{s}\binom{\binom{s}{2}}{bs} \left(\frac{3k}{n}\right)^{bs} \le \sum\_{s=2}^{\gamma n} \left[ \frac{en}{s} \left(\frac{es}{2b}\cdot\frac{3k}{n}\right)^b \right]^s = \sum\_{s=2}^{\gamma n}\left(\Theta\left(\left(\frac{s}{n}\right)^{b-1}\right)\right)^s.$$ If *n*/log*n* ≤ *s* ≤ *γ**n* then the *s*’th summand is at most *c**n*/log*n* for some *c* < 1 (for small enough *γ* > 0). Otherwise, it is (*o*(1))*s*. Thus, the sum is *o*(1). Back to O*k*, 2, let *X* = *X**k**τ*2 be the set of vertices of degree smaller than *k* at time *τ*2. Suppose first that there exists a set *B* of size at most *β**n* for *β* = *γ*/5 with $B\cap X={\varnothing}$ and ∣*N*(*B*)∣ ≤ 4∣*B*∣. Set *S* = *B* ∪ *N*(*B*) and note that *s* = ∣*S*∣ ≤ 5*β**n* = *γ**n* while ∣*E*(*S*)∣ ≥ *k**β**n*/2 > *b**s*, contradicting the above. Now, let *A* with ∣*A*∣ ≤ *β**n*. Let *A**X* = *A* ∩ *X* and *A**Y* = *A* \ *X*. Let further *S**X* = *A**X* ∪ *N*(*A**X*) and *S**Y* = *A**Y* ∪ *N*(*A**Y*). By the above, ∣*N*(*A**Y*)∣ ≥ 4∣*A**Y*∣. By, ∣*N*(*A**X*)∣ ≥ 2∣*A**X*∣ and every vertex in *A**Y* (or in general) has at most one neighbour in *S**X*. Hence, in particular, ∣*N*(*A**Y*) \ *S**X*∣ ≥ ∣*N*(*A**Y*)∣ − ∣*A**Y*∣ ≥ 3∣*A**Y*∣. Thus, ∣*N*(*A*)∣ = ∣*N*(*A**X*) \ *A**Y*∣ + ∣*N*(*A**Y*) \ *S**X*∣ ≥ (2∣*A**X*∣ − ∣*A**Y*∣) + 3∣*A**Y*∣ ≥ 2∣*A*∣,  as required. Our next step is to show that O*k*, 2 is also typically connected (for large enough *k*). For that, we need the following lemma. As we mentioned earlier, Cooper and Frieze  showed that O2 is connected with probability that is bounded away from 0 and 1. Since O3 ⊈ O*k*, 2 for any *k*, we cannot use. [lem:ok:quad] Let *α* > 0 and let *F* be a fixed set of edges in *K**n* with ∣*F*∣ ≥ *α**n*2. Then, there exist *k*0 = *k*0(*α*), *η* = *η*(*α*) and *c* = *c*(*α*) > 0 such that for every *k* ≥ *k*0, if *t* = *η**k**n* then ${\mathbb{P}}(F\cap E({\mathcal{O}}\_k^{t})={\varnothing})\le \exp(-ckn)$. Let $\gamma=\sqrt{\alpha}$, and *η* = *γ*/3. Take *k* > 6(1 − log*γ*)/*γ*2, and *t*0 = *η**k**n*. Let *E*0 = {*e*1, …, *e**t*0} be the first *t*0 edges of the underlying random graph process (in this order) and let *F*0 = *F* ∩ *E*0. Let *V*0 = ⋃*F*0 be the set of vertices covered by the edges of *F*0. We observe that typically *V*0 cannot be too small; indeed, for a given set *S* to be ⋃*F*0 we need every edge of *F* that is not contained in *S* to appear after time *t*0. By the union bound over all choices of *V*0 of size at most *γ**n*, $$\begin{aligned} {\mathbb{P}}(|V\_0|\le\gamma n) &\le \binom{n}{\gamma n} \left(1-\frac{t\_0}{\binom{n}{2}}\right)^{|F|-\binom{\gamma n}{2}} \le \exp((\gamma-\gamma\log\gamma-\eta k(2\alpha-\gamma^2))n)\\ &\le \exp((\gamma-\gamma\log{\gamma}-\gamma^3k/3)n) \le \exp(-ckn), \end{aligned}$$ for *c* = *γ*3/6 > 0. Let *F*1 = *F*0 ∩ *E*(O*k**t*0). Note that if $F\_1={\varnothing}$ then when an edge in *F*0 arrived both its endpoints had degree at least *k*; hence all vertices of *V*0 have degree at least *k* in O*k**t*0. But in this case, ∣*V*0∣ ≥ *γ**n* implies that *γ**n**k*/2 ≤ ∣*E*(O*k**t*0)∣ ≤ *t*0 = *η**k**n* = *γ**k**n*/3, a contradiction. Thus, $${\mathbb{P}}(F\cap E({\mathcal{O}}\_k^t)={\varnothing}) \le {\mathbb{P}}(F\cap E({\mathcal{O}}\_k^{t\_0})={\varnothing}) \le {\mathbb{P}}(F\_1={\varnothing}) = \exp(-ckn). \qedhere$$ [lem:ok:large] For every *β* > 0 there exists *k*0 = *k*0(*β*) such that for every *k* ≥ *k*0, **whp**, every two disjoint vertex sets in O*k*, 2 with ∣*A*∣, ∣*B*∣ ≥ *β**n* are connected by an edge. Let *α* = *β*2 and let *k*0, *η*, *c* be the constants guaranteed by. Let H(*β*) =  − *β*log*β* − (1 − *β*)log(1 − *β*), and set *k*1 = 3H(*β*)/*c*. Take *k* ≥ max{*k*0, *k*1} and *t* = ⌈*η**k**n*⌉. Fix two disjoint vertex sets *A*, *B* with ∣*A*∣, ∣*B*∣ = *β**n*. Let *F* = *E*(*A*, *B*), so ∣*F*∣ ≥ *α**n*2. By, the probability that in O*k**t* does not contain an edge from *F* (and hence the probability that in O*k**t* the sets *A*, *B* are not connected by an edge) is at most exp( − *c**k**n*). By the union bound over all sets *A*, *B*, the probability that in O*k**t* there are two disjoint sets of size at least *β**n* that are not connected by an edge is at most $$\binom{n}{\beta n}^2 e^{-ckn} = \exp(((2+o(1))H(\beta)-ck)n) = \exp(-\Omega(n)) = o(1).$$ The result follows since **whp** *t* ≤ *τ*2, implying O*k**t* ⊆ O*k*, 2. [cor:ok:exp] There exists *k*0 > 0 such that for every *k* ≥ *k*0, O*k*, 2 is **whp** an $\frac{n}{4}$-expander. Let *β* be the constant guaranteed by (we may (and will) assume that *β* < 1/4), and let *k*0 = *k*0(*β*) be the constant guaranteed by. Let *k* ≥ max{*k*0, 12}. Thus, by, O*k*, 2 is **whp** a *β**n*-expander; and, by, O*k*, 2 has **whp** the property that any two disjoint vertex sets of size at least *β**n* are connected by an edge. Assume both properties hold and let *A* be a vertex set with ∣*A*∣ ≤ *n*/4. If ∣*A*∣ ≤ *β**n* then ∣*N*(*A*)∣ ≥ 2∣*A*∣ since O*k*, 2 is a *β**n*-expander. Otherwise, there is an edge between *A* and any other set of size at least *β**n*, hence ∣*N*(*A*)∣ ≥ *n* − *β**n* − ∣*A*∣ ≥ (3/4 − *β*)*n* ≥ *n*/2 ≥ 2∣*A*∣. [cor:ok:conn] There exists *k*0 > 0 such that for every *k* ≥ *k*0, O*k*, 2 is **whp** connected. Let *k*0 be the constant guaranteed by and let *k* ≥ *k*0. From it follows that every connected component of O*k*, 2 is, **whp**, of size at least 3*n*/4, implying that the graph is connected. We are now ready to prove. [Proof of ] Let *k*0 be the constant from, let ℓ = max{*k*0, 500} and let *k* = 100ℓ. Note that by proving the statement of the theorem for *k*, due to monotonicity of O*k*, 2 in the first parameter, we prove it for every *k*ʹ ≥ *k*. Let *t*0 = 6ℓ*n* and write *a* = 6ℓ/*k*, so *t*0 = *a**k**n*. Consider the first *t*0 edges of the random graph process (and note that *t*0 ≪ *τ*2 **whp**). When an edge *e* arrives we colour it *red* if it is incident to a vertex of degree *d* < ℓ, or *black* otherwise. Let *L* = *Y*ℓ*t*0, and note that by, ∣*L*∣ ≥ 0.99*n* **whp**. We now consider the next edges of the random graph process, until time *τ*2. When an edge *e* arrives we colour it *red* if it is incident to a vertex of degree *d* < ℓ (note that these red edges are *not* contained in *L*), or *black* if it is any other edge not contained in *L*. If the arriving edge is contained in *L*, we do not record its identity, but rather its arrival time. In particular, we count these edges. We observe crucially that the exact identity of such an edge does not affect the rest of the colouring process; indeed, any permutation of the edges of the random graph process that fixes the coloured edges (by time *τ*2) and the edges not in *L* is feasible. Let *m* be the number of edges we counted in *L* and let *t*1, …, *t**m* be their arrival times (*t*0 < *t*1 < … < *t**m* < *τ*2). Note that the probability of an edge to fall inside *L* before *m* edges fell there (without conditioning on the rest of the process) is at least *p* :  = 0.98 (since both *m* and the number of existing edges in *L* are *o*(*n*2)). Thus, for any *b* = *O*(1), the number of edges falling into *L* between *t*0 and *t*0 + *b**n* stochastically dominates a binomial random variable with *b**n* attempts and success probability *p*. Take *b* = 500, and *m*ʹ = ⌊*b**n**p*/2⌋. It follows that *m*ʹ < *m* and *t**i* < *t*0 + *b**n* for every 1 ≤ *i* ≤ *m*ʹ **whp**. Let *F* be the set of non-edges in *L* at time *τ*2. Conditioning on the entire colouring process (including *m* and the times *t*1, …, *t**m*), the set of edges that fall into *L* at times *t*1, …, *t**m* is distributed as a uniformly sampled edge set of size *m* of *F*. Let *f*1, …, *f**m* be a random ordering of this random set of edges. Write *F**i* = {*f*1, …, *f**i*} for 0 ≤ *i* ≤ *m*. We add the random edges in *F**m* one by one (edge *f**i* at time *t**i*), so the distribution of *f**i* is uniform over *F* \ *F**i* − 1. Some of these edges will be coloured *blue* according to a rule explained below. For *i* ∈ [*m*], let *K**i* = *Y**k**t**i* − 1 ∩ *L* be the set of vertices of degree at least *k* in *L* just before time *t**i*. We note by, and the discussion above that ∣*K**i*∣ ≤ 2*t**i*/*k* ≤ 2(*a* + *b*/*k*)*n* for every *i* ∈ [*m*ʹ], **whp**. Note that the graph that consists of the red edges at time *τ*2 is Oℓ, 2. Denote by *H**i* the graph that consists of the red edges and the blue edges among *F**i*. Thus, this is a supergraph of Oℓ, 2. By, and since expansion (and connectivity) is monotone, either *H**i* is Hamiltonian, or there are at least *n*2/32 boosters with respect to it. Suppose *H**i* is not Hamiltonian, and let *Z**i* be the set of boosters with respect to *H**i*. Let $Z'\_i=Z\_i\cap \binom{L}{2}{\setminus}\binom{K\_i}{2}{\setminus}F\_i$. These are the boosters that are (a) contained in *L*, (b) have at least one endpoint of degree less than *k* at the time of arrival, and (c) have not been revealed earlier. It follows that $$|Z'\_i|\ge |Z\_i| - |E(X\_\ell^{t\_0},V)| - \left|\binom{K\_i}{2}\right| - |F\_i| \ge \left(\frac{1}{32}-\frac{1}{100}-2\left(a+\frac{b}{k}\right)^2 - o(1)\right)n^2.$$   This is at least *n*2/100 for ℓ ≥ 500 and *k* = 100ℓ. When considering the edge *f**i* + 1 we colour it blue if it is in *Z*ʹ*i*, or if it is in $\binom{L}{2}{\setminus}\binom{K\_i}{2}$ and *H**i* is already Hamiltonian. But the probability that it is in *Z**i*ʹ is at least ∣*Z**i*ʹ∣/(∣*F*∣ − ∣*F**i*∣) ≥ *q* :  = 1/100, independently of the past. Thus, the number of blue edges stochastically dominates a binomial random variable with *m*ʹ ∼ *b**n**p*/2 attempts and success probability *q*, and hence mean  ∼ *q**b**n**p*/2 > 2*n*. By Chernoff bounds (), the number of blue edges is at least *n* **whp**. We finish the proof by noting that if *H**i* is not Hamiltonian and *f**i* + 1 is coloured blue then *f**i* + 1 is a booster with respect to *H**i* that is contained in O*k*, 2. Thus, *H**m*ʹ (and thus *H**m*; and thus O*k*, 2) is **whp** Hamiltonian. The proof above yields the following corollary. [cor:ok:ham] Let *k**H* be the constant from. Then, for every *k* ≥ *k**H*, O*k*, 2 is **whp** a Hamiltonian $\frac{n}{4}$-expander. Spanning subgraphs ================== Minimum degree and vertex-connectivity -------------------------------------- It follows from that Builder can construct a graph with minimum degree *k* by purchasing (sufficiently) more than *o**k**n* edges, while observing only enough edges to guarantee a sufficient minimum degree in the underlying random graph process. [Proof of ] We describe Builder’s strategy. Builder purchases any edge touching at least one vertex of degree less than *k*. Since by time *τ**k* the random graph process has, by definition, minimum degree of at least *k*, the obtained graph is exactly O*k*. We conclude by observing that by Builder purchased, **whp**, at most $o\_kn+O(\sqrt{n}\log{n})$ edges, and that for *k* ≥ 3, by, O*k* is *k*-vertex-connected. [Proof of ] Let *ɛ*ʹ = *ɛ*/2. We describe Builder’s strategy in stages. #### Stage I (Constructing a *k*-matching) During the first *C**n* steps, for *C* = *C*(*ɛ*ʹ, *k*), Builder constructs a *k*-matching, in which all but at most *ɛ*ʹ*n* vertices are of degree *k*. This is possible, **whp**, by. #### Stage II (Handling low-degree vertices) Denote by *V*0 the set of vertices of degree smaller than *k* in Builder’s graph at the end of Stage I. By the above, **whp**, ∣*V*0∣ ≤ *ɛ*ʹ*n*. Builder now emulates the O*k* model with respect to these vertices; namely, Builder purchases any edge contained in *V*0 that is incident to at least one vertex of degree less than *k* in *G*[*V*0]. Since in the random graph process, by time *t*, the minimum degree is logarithmic in *n*, Builder will observe at least *k* additional edges incident to each vertex of *V*0. He will purchase a subset of these (thus, at most *ɛ*ʹ*k**n* edges). Hamilton cycles --------------- ### Hitting time, inflated budget [Proof of ] Let *k* be a large enough constant so that O*k*, 2 is **whp** Hamiltonian (such *k* is guaranteed to exist by ). Builder emulates O*k*, 2 by purchasing every edge that is incident to a vertex of degree less than *k*. He completes this in time *τ*2 while purchasing at most *k**n* edges. ### Inflated time, optimal budget In this section, we prove the following theorem: [Hamiltonicity][thm:ham] For every *ɛ* > 0 there exists *C* > 1 such that the following hold. If *t* ≥ *C**n*log*n*/2 and *b* ≥ (1 + *ɛ*)*n* then there exists a (*t*, *b*)-strategy *B* of Builder such that lim*n* → ∞P(*B**t*is Hamiltonian) = 1. #### Proof of We describe Builder’s strategy in (four) stages. Set *ɛ*ʹ = *ɛ*/40. We describe a (*t*, *b*)-strategy for *b* = (1 + *ɛ*)*n* and *t* < *n*log*n*/*ɛ*ʹ. #### Stage I (Growing disjoint paths) Let *C* be a large constant to be chosen later. In the first stage, Builder grows many sublinear paths, covering together (1 − *ɛ*ʹ)*n* vertices. Let *s*ʹ ∼ *n*/log1/3*n* be an integer and let *s*0 = *n*/log1/2*n* ≪ *s*ʹ. Builder grows simultaneously paths *P*1, …, *P**s*ʹ as follows. He begins by letting each *P**i* be a (distinct) vertex *v*0*i*. He then claims every edge that extends one of the paths without intersecting with other paths. Formally, let *ν* = (1 − *ɛ*ʹ)*n*/*s*ʹ. If at a given stage Builder has the paths (*v*01, …, *v*ℓ11), …, (*v*0*s*ʹ, …, *v*ℓ*s*ʹ*s*ʹ) on the vertex set *V**P* then he claims an observed edge if and only if it is of the form {*v*ℓ*j**j*, *w*} for some *j* = 1, …, *s*ʹ, ℓ*j* < *ν* and $w\notin V\_P$. Builder stops if all but at most *s*0 of the paths are of length at least *ν* (in which case the stage is “successful”), or when he has observed $t\_1=\frac{C}{{\varepsilon}'}n\log^{1/3}{n}$ edges (in which case the stage “fails”), whichever comes first. For 1 ≤ *j* ≤ *s*ʹ and 1 ≤ *i* ≤ *t*1, let *y**i**j* be the indicator that Builder has purchased the *i*’th observed edge, and that this edge extends the path *P**j*. For convenience, if ∣*V**P*∣ ≥ (1 − *ɛ*ʹ)*n* we set *y**i**j* to 1. Observe that for every *j*, *i*, ${\mathbb{P}}(y^j\_i=1) \ge (n-|V\_P|)/\binom{n}{2}>{\varepsilon}'/n$. If the stage fails then there is an *s*0-subset *S*0 ⊆ [*s*ʹ] such that for every *j* ∈ *S*0, the path *P**j* ends up with length less than *ν*. Write *y*0 = ∑*j* ∈ *s*0∑*i* = 1*t*1*y**i**j*. Thus, if the stage fails then *y*0 < *ν**s*0. But *y*0 stochastically dominates a binomial random variable with *t*1 attempts and success probability *p*1 = *ɛ*ʹ*s*0/*n*. Set *μ* = *t*1*p*1 ∼ *C**s*0log1/3*n* ≫ *s*ʹ. By the union bound over the choices of an *s*0-subset of the paths and using Chernoff bounds (), for large enough *C*, the probability that the stage fails is at most $$\binom{s'}{s\_0} {\mathbb{P}}(y\_0 < \nu s\_0) \le 2^{s'} \exp(-\mu/10) = o(1).$$ #### Stage II (Connecting the paths) At this stage Builder tries to connect most of the *s* = *s*ʹ − *s*0 ∼ *s*ʹ complete paths to each other, eventually obtaining an almost-spanning path in his graph. For convenience, we ree-numerate the complete paths by [*s*]. Let *t*2 = *n*log1/2*n*. During the next *t*2 observed edges, Builder purchases an edge connecting the last vertex of one of his paths to the first vertex of another. Formally, he purchases every observed edge of the form {*v*ℓ*i**i*, *v*0*j*} for *i* ≠ *j*. We will need the following lemma: [\*Lemma 4.4][lem:DFS] Let *s*, *k* ≥ 1 and let *D* be an *s*-vertex digraph in which for every two disjoint *A*, *B* ⊆ *V*(*D*) with ∣*A*∣, ∣*B*∣ ≥ *k*, *D* contains an edge between *A* and *B*. Then, *D* contains a directed path of length *s* − 2*k* + 1. We use the lemma on an auxiliary digraph *D* on the vertex set [*s*] that is defined as follows. We observe *t*2 random edges. Whenever Builder observes (and purchases) an edge of the form {*v*ℓ*i**i*, *v*0*j*}, we add the edge (*i*, *j*) to *D*. We now have to claim that *D* satisfies the requirements of the lemma for *k* = *ɛ*ʹ*s*. [cl:D:exp] **Whp**, every two disjoint *A*, *B* ⊆ [*s*] with ∣*A*∣, ∣*B*∣ ≥ *ɛ*ʹ*s* satisfy ${E\_D(A,B)\ne{\varnothing}}$. Fix disjoint *A*, *B* with ∣*A*∣, ∣*B*∣ ≥ *ɛ*ʹ*s*. The event that $E\_D(A,B)={\varnothing}$ implies that none of the *t*2 observed edges hits a pair {*v*ℓ*i**i*, *v*0*j*} for *i* ∈ *A* and *j* ∈ *B*. There are at least *ɛ*ʹ2*s*2 such edges, hence that probability is at most $$\binom{\binom{n}{2}-{\varepsilon}'^2 s^2}{t\_2}/\binom{\binom{n}{2}}{t\_2} \le \exp\left(-{\varepsilon}'^2 s^2 t\_2/\binom{n}{2}\right) = \exp(-\Theta(n\log^{-1/6}{n})).$$ Here we used the general bound $\binom{a-b}{c}/\binom{a}{c}\le\exp(-bc/a)$. By the union bound over the choices of *A*, *B*, the probability that the event in the statement does not hold is at most $$\binom{n}{{\varepsilon}' s}^2 \exp(-\Theta(n\log^{-1/6}{n})) \le \exp({\varepsilon}' s\log\log{n} -\Theta(n\log^{-1/6}{n})) = o(1).\qedhere$$ imply that *D* has, **whp**, a directed path of length (1 − 2*ɛ*ʹ)*s*. By the way we have defined *D*, this implies the existence of a path *Q* of length (1 − 3*ɛ*ʹ)*n* in Builder’s graph. To analyse the number of purchased edges at this stage, we note that at any stage an edge is purchased with probability  ≍ *s*2/*n*2, hence the expected number of purchased edges is  ≍ *t*2*s*2/*n*2 ≍ *n*/log1/6*n*. By Chernoff bounds (), the number of purchased edges is sublinear **whp**. #### Stage III (Preparing the ground) Let *q*1, *q*2 be the endpoints of *Q*. Let *V*1 ∪ *V*2 = *V* \ *V*(*Q*) be a partition of the vertices outside *Q* (so ∣*V**i*∣ = 3*ɛ*ʹ*n*/2 for *i* = 1, 2). Builder performs the next two tasks simultaneously. ##### (Connecting the endpoints of *Q* to *V*1, *V*2) Builder claims the first observed edge from *q**i* to *V**i* for each *i* = 1, 2 (unless such an edge already exists). After *c**n*log*n* steps, for any *c* > 0, there will be **whp** a neighbour of each of *q**i* in *V**i*. Call this neighbour *w**i*. ##### (Constructing expanders on *V*1, *V*2) Builder purchases a copy of O12 in *V**i*. Observing that every new edge falls inside *V**i* with probability at least 2*ɛ*ʹ2, we conclude, using, that this could be done **whp** by observing at most $\frac{1}{{\varepsilon}'}n\log{n}$ edges and purchasing at most 36*ɛ*ʹ*n* of them. By, the obtained graphs are both *β**n*-expanders for some *β* > 0. For *i* = 1, 2 let *B**i* = *B*[*V**i*] denote Builder’s graph on the vertex set *V**i*. #### Stage IV (Sprinkling) By there exists *C*4 = *C*4(*β*, *ɛ*ʹ) > 0 such that by observing at most *C*4*n* edges and purchasing at most $\frac{3}{2}{\varepsilon}'n$ edges among those landing inside each *V**i*, both *B*1, *B*2 become **whp** Hamiltonian. Conditioning on that event, denote the sets of endpoints of Hamilton cycles in *B**i* whose other endpoint is *w**i* by *Y**i* for *i* = 1, 2. By, ∣*Y**i*∣ ≥ *β**n*. #### Stage V (Closing a Hamilton cycle) To close a Hamilton cycle, Builder purchases an edge between *Y*1 and *Y*2. The probability of failing to do so after observing log*n* steps is *o*(1). Perfect matchings ----------------- In this section we prove. Recall the constant *k**H* from, and let *ɛ*ʹ = *ɛ*/*k**H*. #### Stage I (Constructing a *k*-core) Let *k* be some large (even) constant to be determined later (in Stage VI). Choose *ɛ*1 = *ɛ*ʹ/(10*k*) and let *V*1 be an arbitrary vertex set of size  ∼ *ɛ*1*n*. During the next *t*1 ∼ 9*k**n* steps Builder purchases every edge that lands inside *V*1. By that time, **whp**, at least 8*k**ɛ*1*n* (and at most 10*k**ɛ*1*n* = *ɛ*ʹ*n*) edges are being purchased. Thus, by, **whp**, the obtained graph contains a *k*-core on vertex set *U*1 ⊆ *V*1 of size at least *ɛ*1*n*/2. Write *η*1 = ∣*U*1∣/*n* and note that *η*1 ≤ *ɛ*1 ≤ *ɛ*ʹ/10. #### Stage II (Constructing a large matching) Let *ɛ*2 = *ɛ*ʹ/3. Let *Y* = *V* \ *U*1; so ∣*Y*∣ ∼ (1 − *η*1)*n*. Let *C*2 = *C*2(1, *ɛ*2) be the constant guaranteed by. Builder follows a (*t*2, *b*2)-strategy proposed by to construct a matching *M*2 with vertex set *X*2 ⊆ *Y* where ∣*X*2∣ ≥ ∣*Y*∣(1 − *ɛ*2) ∼ (1 − *ɛ*2)(1 − *η*1)*n*. The lemma guarantees that this is doable, **whp**, for *t*2 = *C*2*n* and *b*2 ≤ *n*/2. Write *η*2 = 1 − ∣*X*2∣/*n*, and note that *η*1 ≤ *η*2 ≤ *η*1 + *ɛ*2. #### Stage III (Extending the large matching) Denote the vertices outside *X*2 ∪ *U*1 by *W*2. Recall that ∣*W*2∣ = *n* − (∣*X*2∣ + ∣*U*1∣) ∼ (*η*2 − *η*1)*n*. Write *η*3 = *η*2 − *η*1 ≤ *ɛ*2. During the next *t*3 ∼ *ɛ*ʹ*n*log*n* steps Builder (partially) constructs O*k**H* on *W*2. The number of purchased edges is **whp** at most *k**H**η*3*n* ≤ *k**H**ɛ*2*n* ≤ *ɛ**n*/3. Let *S* be the set of vertices of degree less than *k**H* in *W*2 at the end of this process. We now show that, **whp**, ∣*S*∣ ≤ *n*1 − *δ*3 for *δ*3 = *η*3*ɛ*ʹ/2. Indeed, let *w* ∈ *W*2. The probability that the next edge contains *w* and is contained in *W*2 is *p*3 ∼ 2*η*3/*n*. Thus, the total number of observed edges incident to *w* stochastically dominates a binomial random variable with *t*3 attempts and success probability *p*3. Consequently, the probability that edges containing *w* were observed less than *k**H* times is, by, at most *n*− *η*3*ɛ*ʹ. Hence, by Markov’s inequality, P(∣*S*∣ ≥ *n*1 − *δ*3) ≤ *η*3*n*1 − *η*3*ɛ*ʹ/*n*1 − *δ*3 ≪ 1. We argue that *B*[*W*2] contains a matching that covers all but 2*k**H**n*1 − *δ*3 vertices. Indeed, if we had continued constructing the copy of O*k**H* without any time restrictions, we would had – according to – a (nearly) perfect matching there. But, deterministically, we would have purchased at most *k**H*∣*S*∣ additional edges to achieve that goal. Thus, **whp**, the current largest matching covers all but at most 2*k**H**n*1 − *δ*3 vertices in *W*2. Denote this matching by *M*3, and let *X*3 = *V*(*M*3). #### Stage IV (Building stars) We append the matching constructed in Stage III to the matching constructed in Stage II; namely, we set *M* = *M*2 ∪ *M*3, *X* = *X*2 ∪ *X*3 and *W* = *V* \ (*X* ∪ *U*1). We recall that ∣*W*∣ ≤ *n*1 − *δ*4 for some 0 < *δ*4 < *δ*3 and ∣*X*∣ ∼ (1 − *η*1)*n*. Let *K* > 1/(2*η*1*ɛ*ʹ) be an integer. In this stage Builder attempts to construct disjoint *K*-leaf stars at each vertex of *W*, with leaves inside *X*, where each edge of *M* contains at most one leaf. For *w* ∈ *W* let *x**w* denote the number of leaves in the star that is rooted at *w* that Builder managed to purchase by the end of the stage. We show that **whp** for every *w* ∈ *W*, *x**w* = *K*, hence the stage is successful. Indeed, whenever an edge arrives it has probability 2/*n* to be incident to *w*. The other end of such an edge is incident to an available edge of *M* with probability at least (∣*X*∣ − 2*K*∣*W*∣)/*n* ∼ 1 − *η*1. Thus, *x**w* stochastically dominates a binomial random variable with *t*4 = (1 + *ɛ*ʹ)*n*log*n*/2 attempts and success probability *p*3 = (1 − 2*η*1) ⋅ 2/*n*. Therefore, by (since *ɛ*ʹ < 1 and 2*η*1 < *ɛ*ʹ), $${\mathbb{P}}(x\_w<K) \le \left(\frac{et\_3p\_3}{K(1-p\_3)}\right)^K e^{-t\_3p\_3} = (\Theta(\log{n}))^K \cdot n^{-(1+{\varepsilon}')(1-2\eta\_1)} \ll n^{-1}.$$ By the union bound over all *o*(*n*) vertices of *W*, the probability that any *x**w* is below *K* is *o*(1). #### Stage V (Building 3-paths) For every *w* ∈ *W* let *a*1*w*, …, *a**K**w* ∈ *X* be the leaves of the stars rooted at *w* that were constructed in the previous stage. For *i* = 1, …, *K*, let *b**i**w* ∈ *X* be the neighbour of *a**i**w* in *M*. Builder now purchases every edge that connects *b**i**w* (for some *w* ∈ *W* and *i* ∈ [*K*]) to a vertex *u* ∈ *U*1, as long as none of *b*1*w*, …, *b**K**w*, *u* was touched earlier during this stage. Fix *w* ∈ *W*. The probability that in a given step an edge between (an untouched vertex of) *U*1 and {*b*1*w*, …, *b**K**w*} appears is at least $p\_4=K(|U\_1|-|W|)/\binom{n}{2}\sim 2K\eta\_1/n$. Thus, the probability that in *t*4 = *ɛ*ʹ*n*log*n* steps such an edge does not appear is at most *e*− *p*4*t*4 = *n*− 2*K**η*1*ɛ*ʹ. Noting that *K* > (2*η*1*ɛ*ʹ)− 1 we get that this probability is *o*(*n*− 1), hence, by the union bound over all *o*(*n*) vertices of *W*, we match one leaf in the star rooted at *w* with a unique vertex in *U*1 for every *w* ∈ *W* **whp**. #### Stage VI (Sprinkling) We note that each of the 3-paths constructed in the previous stage can be used as an augmenting path. Namely, we let *M*ʹ be the matching obtained by *M* by replacing every edge *e* ∈ *M* that is contained in such a 3-path (as a middle edge) by the other two edges of that path. Let *U*ʹ be the set of vertices in *U*1 matched in *M*ʹ, and write *U*\* = *U*1 \ *U*ʹ. Thus, *M*ʹ matches all vertices but those in *U*\*. We now show that *B*[*U*\*] is **whp** an expander. For that, note that *U*ʹ is a uniformly chosen subset of size ∣*W*∣ ≤ *n*1 − *δ*4 of *U*1. We first argue that *δ*(*B*[*U*\*]) ≥ *k*/2 **whp**. Indeed, fix *u* ∈ *U*1, and let *u*1, …, *u**k* be arbitrary neighbours of *u* in *B*[*U*1]. The probability that *d*(*u*, *U*\*) ≤ *k*/2 is at most the probability that *k*/2 of these *k* neighbours end up in *U*ʹ. By the union bound, this is at most $\binom{k}{k/2}n^{-\delta\_4}$ $$\binom{k}{k/2} \frac{\binom{|U\_1|-k/2}{|W|-k/2}}{\binom{|U\_1|}{|W|}} \le \left(4\cdot\frac{|W|}{|U\_1|}\right)^{k/2} \asymp n^{-\delta\_4k/2}.$$ Making sure *k* > 2/*δ*4, this is *o*(*n*− 1), and hence, by the union bound, **whp**, *δ*(*B*[*U*\*]) ≥ *k*/2. We remark that there is no circular dependency of the constants here. We recall that by (taking *t* = *t*1 ∼ 9*k**n* and *R* = 3*β*6*n* for *β*6 = 3− 5*k*− 3/2) the random graph process *G**t*, and hence also *B*[*U*\*], typically does not have a set of size *r* ≤ 3*β*6*n* that spans more than 3*r* edges. This, together with the minimum degree of *B*[*U*\*], implies expansion; indeed, let *A* ⊆ *U*\* be a vertex set with ∣*A*∣ = *a* ≤ *R*/3 = *β*6*n*. Suppose ∣*N*(*A*)∣ < 2∣*A*∣ and write *S* = *A* ∪ *N*(*A*) (so *r* = ∣*S*∣ < 3*a* ≤ *R*). But the number of edges spanned by *S* is at least *δ*(*B*[*U*\*])∣*A*∣/2 ≥ *k**a*/4 > *k**r*/12 ≥ 3*r* (making sure *k* ≥ 36), a contradiction to. Thus, *B*[*U*\*] is a *β*6*n*-expander. By there exists *C*6 = *C*6(*β*6) such that adding to *B*[*U*\*] *C*6*η*1*n* random non-edges makes it, **whp**, Hamiltonian. Letting *t*6 = *C*6ʹ*n*/*η*1 for *C*6ʹ > *C*6, it follows from that after observing *t*6 edges, at least *C*6*η*1*n* of them land, **whp**, inside *U*\* (whose size is  ∼ *η*1*n*). Builder purchases each of these edges. Thus, by, Builder’s resulting graph on *U*\* contains, **whp**, a Hamilton cycle. That Hamilton cycle contains a perfect matching *M*6. Appending *M*6 to *M*, we get a perfect matching in Builder’s graph. Small subgraphs =============== We introduce the following terminology. For a fixed graph *H* we call a non-edge *e* in Builder’s graph an ***H*-trap** (or, simply, a trap) if *B* + *e* contains a copy of *H*. Following the terminology of “hitting” boosters, we say that Builder **hits** a trap if he encounters (and therefore purchases) such a non-edge. Trees ----- ### 1-statement [Proof of the 1-statement of ] Let *T*1 ⊆ … ⊆ *T**k* be a sequence of subtrees of *T*, *T**i* having *i* vertices (so *T**k* = *T*). We first note that if *t* ≫ *n* then a budget *b* = *k* − 1 suffices with high probability. Indeed, suppose Builder has constructed *T**i* (note that he builds *T*1 with a zero budget). To build *T**i* + 1, Builder waits for one of at least  ∼ *n* edges whose addition to his current copy of *T**i* would create a copy of *T**i* + 1. By time *t*/(*k* − 1) Builder observes (and purchases) such an edge **whp**. Thus, by time *t* Builder constructs, **whp**, a copy of *T*. We may thus assume from now on that *t* ≤ *c**n* for 0 < *c* < 1/5 and *t* ≥ *b* ≫ (*n*/*t*)*k* − 2. In particular, *t* ≫ *n*1 − 1/(*k* − 1). For *i* = 1, …, *k* define $$s\_i = \frac{b}{k-1}\cdot\left(\frac{t}{(k-1)n}\right)^{i-2},$$ and note that *s**i* is a strictly monotone decreasing sequence with *n* ≥ *s*1 > *s**k* ≫ 1. We describe Builder’s strategy in stages (for convenience, the first stage will be indexed by 2). We assume that at the beginning of stage *i*, *i* = 2, …, *k*, Builder has built *s**i* − 1 vertex-disjoint copies of *T**i* − 1. The assumption trivially holds for *i* = 2, since *s*1 ≤ *n*. At stage *i* ≥ 2, Builder purchases any edge extending one of the *s**i* − 1 copies of *T**i* − 1 he currently has to a copy of *T**i*, so that the resulting trees are still vertex-disjoint. He continues doing so only as long as he does not have *s**i* copies of *T**i* (in which case this stage is successful) or if he had observed more than *t*/(*k* − 1) edges during this stage (in which case this stage fails, and thus the entire strategy fails). For stage *i* and 1 ≤ *j* ≤ *t*/(*k* − 1) let *y**j**i* denote be the event that the *j*’th observed edge of stage *i* is being purchased, or that *s**i* copies of *T**i* have already been built. Being at stage *i*, there are at least *s**i* − 1 − *s**i* potential trees to extend, each can be extended by observing one of at least *n* − 2*b* edges. Thus, noting that *s**i* − 1 − *s**i* ≥ *s**i* − 1(1 − *c*/(*k* − 1)) and *n* − 2*b* ≥ *n* − 2*t* ≥ *n*(1 − 2*c*), we have $${\mathbb{P}}(y^i\_j) \ge \frac{(s\_{i-1}-s\_i)(n-2b)}{\binom{n}{2}} \ge \frac{2(1-\frac{c}{k-1})s\_{i-1}\cdot(1-2c)}{n}.$$ Write *c*ʹ = 2(1 − *c*/(*k* − 1))(1 − 2*c*) and note that for small enough *c* > 0 (*c* < 1/5 suffices) we have *c*ʹ > 1. Therefore, *y**i* = ∑*j**y**j**i* stochastically dominates a binomial random variable with mean $\frac{t}{k-1}\cdot\frac{c's\_{i-1}}{n} = c's\_i$. By Chernoff bounds (), the stage is successful **whp**. Thus, Builder builds, **whp**, *s**k* ≥ 1 copies of *T**k* = *T* by the end of *k* − 1 stages, that is, by time *t*. ### 0-statement [Proof of the 0-statement of ] We assume *b* ≪ (*n*/*t*)*k* − 2. It follows that *t* ≪ *n*. We may also assume that *b* ≪ *t*; indeed, if *b* ≍ *t* then *t* ≪ (*n*/*t*)*k* − 2, hence *t* ≪ *n*1 − 1/(*k* − 1). This implies that **whp** no tree on *k* vertices exists in the graph of the observed edges. We prove that any (*t*, *b*)-strategy of Builder fails, **whp**, to build a connected component of size *k*. Define a sequence (*r**i*)*i* = 1*k* − 1 by *r*1 = *n* and *r**i* = *b*(3*k**t*/*n*)*i* − 2 for *i* = 2, …, *k* − 1. We argue that **whp** Builder cannot build more than *r**i* connected components of size *i*. The proof is by induction on *i*. The case *i* = 1 is trivial. The case *i* = 2 follows deterministically by the budget restriction, as *r*2 = *b*. To build a connected component of size 2 < *i* < *k*, Builder must observe an edge connecting a connected component of size 1 ≤ *j* < *i* with a connected component of size *i* − *j* ≥ *j*. By induction, the probability that the next observed edge is such is at most $$\frac{\sum\_{j=1}^{{\left\lfloor{i/2}\right\rfloor}} (jr\_j \cdot (i-j)r\_{i-j})}{\binom{n}{2}-t} \le \frac{inr\_{i-1} + \Theta\left(b^2(t/n)^{i-4}\right)}{\binom{n}{2}-t} \le \frac{2.5kr\_{i-1}}{n}.$$ Here we used the fact that *b* ≪ *t*, hence *b*2(*t*/*n*)*i* − 4 ≪ *n**r**i* − 1. Thus, by Chernoff bounds (), during *t* rounds the number of such edges is, **whp**, smaller than *r**i* − 1 ⋅ 3*k**t*/*n* = *r**i*. To build a tree of order *k*, Builder must construct a connected component of size at least *k*. For that, Builder must observe an edge connecting a connected component of size 1 ≤ *i* ≤ *k* − 1 with a connected component of size max{*i*, *k* − *i*} ≤ *j* ≤ *k* − 1 (in his graph). The probability that the next observed edge is such is at most $$\frac{\sum\_{i=1}^{k-1}\sum\_{j=\max\{i,k-i\}}^{k-1} (ir\_i\cdot jr\_j)}{\binom{n}{2}-t} \preceq \frac{r\_{k-1}}{n}.$$ Hence, during *t* rounds the expected number of edges Builder observes that create such a connected component is  ≼ *r**k* − 1*t*/*n* ≪ 1; thus, Builder fails to construct such a component **whp**. Cycles ------ ### 1-statement [Proof of the 1-statement of ] Let *b* ≫ *b*\* = *b*\*(*n*, *t*, *k*). As *b*\* ≪ *n*, we may assume *b* ≪ *n*. Choose *n* ≪ *t*ʹ ≪ *t* such that it still holds that *b* ≫ *b*ʹ :  = *b*\*(*n*, *t*ʹ, *k*). Set $s=n^2/(b't')=\min\{(t'/n)^k,n/\sqrt{t'}\}$, and note that 1 ≪ *s* ≪ *b*. Indeed, *b*/*s* ≫ *b*ʹ/*s* = *n*2/(*s*2*t*ʹ) ≥ 1. Denote *d* = *c**s*1/*k* for *c* < 1/(*k* + 5) and note that *d* ≫ 1. Note further that *d**n* = *c**s*1/*k**n* ≤ *c**t*ʹ and that *c**k* < 1/6. #### Stage I (Growing *d*-ary trees) Builder performs Stage I in *k* rounds, each lasting *d**n* steps. Set *r* = *b*/*s* ≫ 1. Builder chooses 2*k**r* vertices arbitrarily, call them *roots*. Denote by *P*0 = {*v*1, …, *v*2*k**r*} the set of these roots, and let *L*0 = *R*0 = *P*0 (we think of each root as being both on the right side of a tree and on the left side of a tree). Suppose at the beginning of round *i* ∈ [*k*] we have a set *J**i* − 1 ⊆ [*P*0] with ∣*J**i* − 1∣ = 2*k* − (*i* − 1)*r* and trees (*T**j*)*j* ∈ *J**i* − 1, where the tree *T**j* is rooted at *v**j* and is of depth *i* − 1 (so at the beginning of round 1 these trees are isolated roots). Write *L**i**j* for the set of vertices in *L**i* which are in tree *j*, and define *R**i**j* analogously. Builder’s goal at round *i* is to “extend” *half* of these trees, by attaching *d*∣*L**i* − 1*j*∣ leaves to vertices in *L**i* − 1*j* and *d*∣*R**i* − 1*j*∣ (distinct) leaves to vertices in *R**i* − 1*j*. Each of the new leaves should not be already covered by Builder’s graph. A leaf attached to *L**i* − 1*j* will be placed in *L**i**j*, and a leaf attached to *R**i* − 1*j* will be placed in *R**i**j*. Note that if *i* = 1 then *L**i* − 1*j* = *R**i* − 1*j* = {*v**j*}, in which case we attach *two* *d*-leaf stars at each root *v**j*. Let *J**i* ⊆ *J**i* − 1 be the subset of indices for which *T**j* has been successfully extended. Set *L**i* = ⋃*j* ∈ *J**i**L**i**j* and *R**i* = ⋃*j* ∈ *J**i**R**i**j*. Note that if Builder’s strategy has not failed by the beginning of round *i*, ∣*J**i* − 1∣ = 2*k* − (*i* − 1)*r* and ∣*L**i* − 1*j*∣ = ∣*R**i* − 1*j*∣ = *d**i* − 1 for every *j* ∈ *J**i* − 1. For *i* ∈ [*k*] and *j* ∈ *J**i* − 1 let *x**i*(*j*) be the number of edges observed at round *i* with one endpoint in *L**i* − 1*j* and the other outside the set of vertices covered by the edges of Breaker’s graph at the time of observation. If ∣*x**i*(*j*)∣ ≥ *d*∣*L**i* − 1*j*∣ then Builder purchases the first ∣*L**i* − 1*j*∣ of those observed edges and this stage is “left-successful” for tree *j*. Similarly, let *y**i*(*j*) be the number of edges observed at round *i* between *R**i* − 1*j* and the uncovered part of the graph. If ∣*y**i*(*j*)∣ ≥ *d*∣*R**i* − 1*j*∣ then Builder purchases the first ∣*R**i* − 1*j*∣ of those observed edges and this stage is “right-successful” for tree *j*. If a stage is both left-successful and right-successful for a tree *j* then *T**j* is successfully extended. As we remarked earlier, after extending successfully 2*k* − *i**r* trees, Builder stops. Let us analyse this strategy. For 0 ≤ *i* ≤ *k*, denote by *b**i* the number of covered vertices by the end of step *i*. Observe that *b**i* ≤ ∑*i*ʹ = 0*i*2*k* + 1 − *i*ʹ*r**d**i*ʹ ≤ (1 + *o*(1))2*k* − *i* + 1*r**d**i* (since *d* ≫ 1). In particular, *b**k* < 3*r**d**k* = 3*c**k**b* ≪ *n*. Take *i* ∈ [*k*] and assume the strategy was successful so far. Thus, *x**i*(*j*) (and also *y**i*(*j*)) stochastically dominates a binomial random variable with *d**n* attempts and success probability at least $|L\_{i-1}^j|\cdot(n-b\_i)/\binom{n}{2} \sim 2d^{i-1}/n$. Thus, by Chernoff bounds (), P(*x**i*(*j*) < *d*∣*L**i* − 1*j*∣) = P(*x**i*(*j*) < *d**i*) < exp( − *d**i*/5) ≪ 1. By Markov’s inequality, **whp** for at least half of the trees indexed by *J**i* − 1 Builder succeeds in extending the tree. The total time that passes until the end of this stage is at most *k**d**n* ≤ *c**k**t*ʹ < *t*ʹ ≪ *t*, and the total budget is at most 3*c**k**b* < *b*/2. #### Stage II (Connecting leaves) Note that for every *j* ∈ *J**k*, if *u* ∈ *L**k**j* and *v* ∈ *R**k**j* then *u*, *v* are both leaves of *T**j* of distance 2*k* from each other. Thus, by adding the edge {*u*, *v*}, a cycle of length ℓ is formed. Thus, the number of traps is ∣*J**k*∣ ⋅ ∣*L**k**j*∣ ⋅ ∣*R**k**j*∣ = *r**d*2*k* ≍ *r**s*2 = *b**s* ≫ *n*2/*t*ʹ. Thus, after time *t* − *t*ʹ ≫ *t*ʹ we hit a trap **whp**. #### Even cycles We briefly discuss the modifications needed to obtain the result for cycles of length 2*k* + 2. Instead of choosing *P*0, Builder greedily constructs a matching *M*0 of size 2*k**r*; this is doable (**whp**) since the first *C**r* observed edges (recalling that *r* ≪ *n*) are mostly disjoint, and, in particular, for large enough *C*, contain a matching of size 2*k**r*. Then, for each edge in *M*0, Builder puts one vertex in *L*0 and one vertex in *R*0. Builder continues in the same fashion as in the case of odd cycles, where the only difference is that instead of *L*0 = *R*0 we have $L\_0\cap R\_0={\varnothing}$. We finish by noting that an edge between *u* ∈ *L**k**j* and *v* ∈ *R**k**j* now closes a cycle of length 2*k* + 2. ### -statement Let *H* = *C*ℓ for ℓ = 2*k* + 1 or ℓ = 2*k* + 2. The 0-statement in follows from combining the next two claims together with the known fact that if *t* ≪ *n* then *G**n*, *t*, **whp**, contains no copy of *H*. [cl:cycles:lb:univ] If $b\ll n/\sqrt{t}$ then for any (*t*, *b*)-strategy *B* of Builder, lim*n* → ∞P(*H* ⊆ *B**t*) = 0. [cl:cycles:lb:spec] If *b* ≪ *n**k* + 2/*t**k* + 1 then for any (*t*, *b*)-strategy *B* of Builder, lim*n* → ∞P(*H* ⊆ *B**t*) = 0. [Proof of ] We may assume *t* ≪ *n*2. Consider a strategy for building *H*. Let *B*ʹ be Builder’s graph before obtaining the first copy of *H*. We show that at this point, the time it takes Builder to hit an *H*-trap (and hence construct a copy of *H*) is typically much larger than *t*. Let *X* denote the number of *H*-traps at this point. Observe that every *H*-trap is a non-edge of *B*ʹ, both whose endpoints are vertices of positive degree. Evidently, the number of vertices of positive degree is at most 2*b*, thus *X* ≤ 4*b*2. Thus, the probability of hitting a trap in *t* steps is at most $tX/(\binom{n}{2}-t)\preceq tb^2/n^2\ll 1$. For the proof of we will need a couple of lemmas. In what follows, we may assume that *t* ≤ *n*(*k* + 1)/(*k* + 1/2), as otherwise $n^{k+2}/t^{k+1}\le n/\sqrt{t}$ and the claim follows from. [cl:badpaths] Let ℓ ≥ 1 be an integer, write *d* = 2*t*/*n* ≫ 1 and let *C* > 1. Then, **whp**, the number of paths of length ℓ in *G**t* that contain a vertex whose degree is at least *C**d* is *o*(*n*/*d*). Let *L* be the set of all vertices of degree at least *C**d*, and let *Q* be the set of all paths containing a vertex from *L*. By Chernoff bounds (), $${\mathbb{E}}|Q| \le n^{\ell+1} \left(\frac{t}{n^2}\right)^\ell (\ell+1) \cdot{\mathbb{P}}(d(v)\ge Cd-2) \le nd^\ell e^{-cd} \ll n/d,$$ for some constant *c* = *c*(*C*) > 0, and the result follows from Markov’s inequality. We turn to count the number of copies of a path of length ℓ in a *z*-degenerate graph *G*0 with maximum degree Δ. [lem:path:embed] Let *z*, *q*, Δ, ℓ ≥ 1 be integers. Let *G*0 be a *z*-degenerate *q*-vertex graph with maximum degree Δ. Then, the number of paths of length ℓ in *G*0 is at most *q* ⋅ 2ℓ*z*⌈ℓ/2⌉Δ⌊ℓ/2⌋. Let *v*1, …, *v**q* be an ordering of the vertices of *G*0 such that for 1 ≤ *i* ≤ *q*, *v**i* has at most *z* neighbours among *v*1, …, *v**i* − 1. Given a path *P* of length ℓ, one of its consistent orientations has at least *k* edges going backwards (w.r.t. the orientation). Thus, to count the number of
arxiv_0000359
 A. Palma and S. P. Patil, Phys. Rev. D **84**, 043502 (2011) [arXiv:1005.3848 [hep-th]]. R. Saito, M. Nakashima, Y. i. Takamizu and J. Yokoyama, JCAP **1211**, 036 (2012) [arXiv:1206.2164 [astro-ph.CO]]. R. Saito and Y. i. Takamizu, JCAP **1306**, 031 (2013) [arXiv:1303.3839, arXiv:1303.3839 [astro-ph.CO]]. S. Renaux-Petel and K. Turzyński, arXiv:1510.01281 [astro-ph.CO]. S. Weinberg, Phys. Rev. D **67**, 123504 (2003) [astro-ph/0302326]. D. Wands, K. A. Malik, D. H. Lyth and A. R. Liddle, Phys. Rev. D **62**, 043527 (2000) [astro-ph/0003278]. K. A. Malik and D. Wands, Class. Quant. Grav.  **21**, L65 (2004) [astro-ph/0307055]. D. H. Lyth, K. A. Malik and M. Sasaki, JCAP **0505**, 004 (2005) [astro-ph/0411220]. D. Langlois and F. Vernizzi, Phys. Rev. D **72**, 103501 (2005) [astro-ph/0509078]. A. Naruko and M. Sasaki, Class. Quant. Grav.  **28**, 072001 (2011) [arXiv:1101.3180 [astro-ph.CO]]. A. A. Starobinsky, “Multicomponent de Sitter (Inflationary) Stages and the Generation of Perturbations,” JETP Lett.  **42**, 152 (1985) [Pisma Zh. Eksp. Teor. Fiz.  **42**, 124 (1985)]. D. S. Salopek and J. R. Bond, “Nonlinear evolution of long wavelength metric fluctuations in inflationary models,” Phys. Rev. D **42**, 3936 (1990). M. Sasaki and E. D. Stewart, “A General analytic formula for the spectral index of the density perturbations produced during inflation,” Prog. Theor. Phys.  **95**, 71 (1996) [astro-ph/9507001]. M. Sasaki and T. Tanaka, “Superhorizon scale dynamics of multiscalar inflation,” Prog. Theor. Phys.  **99**, 763 (1998) [gr-qc/9801017]. L. Senatore and M. Zaldarriaga, JHEP **1309**, 148 (2013) [arXiv:1210.6048 [hep-th]]. V. Assassi, D. Baumann and D. Green, JHEP **1302**, 151 (2013) [arXiv:1210.7792 [hep-th]]. Y. Urakawa and T. Tanaka, Phys. Rev.  **D82**, 121301 (2010). [arXiv:1007.0468 [hep-th]]. Y. Urakawa and T. Tanaka, Prog. Theor. Phys.  **125**, 1067 (2011) [arXiv:1009.2947 [hep-th]]. C. T. Byrnes, M. Gerstenlauer, A. Hebecker, S. Nurmi and G. Tasinato, JCAP **1008**, 006 (2010) [arXiv:1005.3307 [hep-th]]. M. Gerstenlauer, A. Hebecker and G. Tasinato, JCAP **1106**, 021 (2011) [arXiv:1102.0560 [astro-ph.CO]]. S. B. Giddings and M. S. Sloth, JCAP **1101**, 023 (2011) [arXiv:1005.1056 [hep-th]]. S. B. Giddings and M. S. Sloth, Phys. Rev. D **84**, 063528 (2011) [arXiv:1104.0002 [hep-th]]. L. Senatore and M. Zaldarriaga, JHEP **1301**, 109 (2013) [arXiv:1203.6354 [hep-th]]. G. L. Pimentel, L. Senatore and M. Zaldarriaga, JHEP **1207**, 166 (2012) [arXiv:1203.6651 [hep-th]]. Y. Urakawa and T. Tanaka, Prog. Theor. Phys.  **122**, 779 (2009) [arXiv:0902.3209 [hep-th]]. T. Tanaka and Y. Urakawa, PTEP **2013**, no. 8, 083E01 (2013) [arXiv:1209.1914 [hep-th]]. T. Tanaka and Y. Urakawa, PTEP **2013**, no. 6, 063E02 (2013) [arXiv:1301.3088 [hep-th]]. T. Tanaka and Y. Urakawa, Class. Quant. Grav.  **30**, 233001 (2013) [arXiv:1306.4461 [hep-th]]. T. Tanaka and Y. Urakawa, PTEP **2014**, no. 7, 073E01 (2014) [arXiv:1402.2076 [hep-th]]. J. M. Maldacena, JHEP **0305**, 013 (2003) [astro-ph/0210603]. P. Creminelli and M. Zaldarriaga, JCAP **0410**, 006 (2004) [astro-ph/0407059]. K. Hinterbichler, L. Hui and J. Khoury, JCAP **1401**, 039 (2014) [arXiv:1304.5527 [hep-th]]. M. H. Namjoo, H. Firouzjahi and M. Sasaki, Europhys. Lett.  **101**, 39001 (2013) [arXiv:1210.3692 [astro-ph.CO]]. X. Chen, H. Firouzjahi, M. H. Namjoo and M. Sasaki, Europhys. Lett.  **102**, 59001 (2013) [arXiv:1301.5699 [hep-th]]. R. Flauger, D. Green and R. A. Porto, JCAP **1308**, 032 (2013) [arXiv:1303.1430 [hep-th]]. J. Ganc and E. Komatsu, JCAP **1012**, 009 (2010) [arXiv:1006.5457 [astro-ph.CO]]. S. Renaux-Petel, JCAP **1010**, 020 (2010) [arXiv:1008.0260 [astro-ph.CO]]. P. Creminelli, G. D’Amico, M. Musso and J. Norena, JCAP **1111**, 038 (2011) [arXiv:1106.1462 [astro-ph.CO]]. P. Creminelli, J. Norena and M. Simonovic, JCAP **1207**, 052 (2012) [arXiv:1203.4595 [hep-th]]. V. Assassi, D. Baumann and D. Green, JCAP **1211**, 047 (2012) [arXiv:1204.4207 [hep-th]]. L. Senatore and M. Zaldarriaga, JCAP **1208**, 001 (2012) [arXiv:1203.6884 [astro-ph.CO]]. A. Joyce, J. Khoury and M. Simonovic, JCAP **1501**, no. 01, 012 (2015) [arXiv:1409.6318 [hep-th]]. L. Berezhiani and J. Khoury, JCAP **1402**, 003 (2014) [arXiv:1309.4461 [hep-th]]. G. L. Pimentel, JHEP **1402**, 124 (2014) [arXiv:1309.1793 [hep-th]]. A. Ghosh, N. Kundu, S. Raju and S. P. Trivedi, JHEP **1407**, 011 (2014) [arXiv:1401.1426 [hep-th]]. N. Kundu, A. Shukla and S. P. Trivedi, arXiv:1507.06017 [hep-th]. I. Mata, S. Raju and S. Trivedi, JHEP **1307**, 015 (2013) [arXiv:1211.5482 [hep-th]]. J. Garriga and Y. Urakawa, JCAP **1307**, 033 (2013) [arXiv:1303.5997 [hep-th]]. J. Garriga and Y. Urakawa, JHEP **1406**, 086 (2014) [arXiv:1403.5497 [hep-th]]. J. Garriga, K. Skenderis and Y. Urakawa, JCAP **1501**, no. 01, 028 (2015) [arXiv:1410.3290 [hep-th]]. K. Schalm, G. Shiu and T. van der Aalst, JCAP **1303**, 005 (2013) [arXiv:1211.2157 [hep-th]]. A. Bzowski, P. McFadden and K. Skenderis, JHEP **1304**, 047 (2013) [arXiv:1211.4550 [hep-th]]. P. McFadden, JHEP **1502**, 053 (2015) [arXiv:1412.1874 [hep-th]]. T. Tanaka and Y. Urakawa, JCAP **1105**, 014 (2011). [arXiv:1103.1251 [astro-ph.CO]]. P. Creminelli, C. Pitrou and F. Vernizzi, JCAP **1111**, 025 (2011) [arXiv:1109.1822 [astro-ph.CO]]. E. Pajer, F. Schmidt and M. Zaldarriaga, arXiv:1305.0824 [astro-ph.CO]. D. Seery and J. E. Lidsey, JCAP **0506**, 003 (2005) [astro-ph/0503692]. N. C. Tsamis and R. P. Woodard, Annals Phys.  **215**, 96 (1992). S. P. Miao and R. P. Woodard, JCAP **1207**, 008 (2012) [arXiv:1204.1784 [astro-ph.CO]]. X. Gao, JCAP **1110**, 021 (2011) [arXiv:1106.0292 [astro-ph.CO]]. R. P. Feynman and F. L. Vernon, Jr., Annals Phys.  **24**, 118 (1963) [Annals Phys.  **281**, 547 (2000)]. C. H. Wu, K. W. Ng, W. Lee, D. S. Lee and Y. Y. Charng, JCAP **0702**, 006 (2007) [astro-ph/0604292]. N. Katirci, A. Kaya and M. Tarman, JCAP **1406**, 022 (2014) [arXiv:1402.3316 [hep-th]]. R. P. Feynman and A. R. Hibbs, Quantum Mechanics and Path Integrals (McGraw-Hill, New York, 1965) A. O. Caldeira and A. J. Leggett, Physica **121A**, 587 (1983). S. Weinberg, Phys. Rev.  **140**, B516 (1965). A. Strominger, JHEP **1407**, 152 (2014) [arXiv:1312.2229 [hep-th]]. T. He, V. Lysov, P. Mitra and A. Strominger, JHEP **1505**, 151 (2015) [arXiv:1401.7026 [hep-th]]. A. Strominger and A. Zhiboedov, arXiv:1411.5745 [hep-th]. J. Garriga, Y. Urakawa and F. Vernizzi, arXiv:1509.07339 [hep-th]. S. Endlich, A. Nicolis and J. Wang, JCAP **1310**, 011 (2013) [arXiv:1210.0569 [hep-th]]. A. Gruzinov, Phys. Rev. D **70**, 063518 (2004) [astro-ph/0404548]. Y. Urakawa *et al.*,  in preparation --- 1. The conservation of *ζ* in such a setup also can be understood as a direct consequence of the *δ**N* formalism .[↩](#fnref1) 2. Here, we also assume that the (spatially averaged) background universe is the FRW universe. This excludes, say, the solid inflation case, where the anisotropic pressure does not vanish in the large scale limit . (See also Ref. .)[↩](#fnref2) Conservation of *ζ* with radiative corrections from heavy field =============================================================== Introduction ============ Inflation provides us with a natural experimental instrument to explore the high energy physics. Measurements of the temperature anisotropies and polarization of the cosmic microwave background can constrain the Hubble parameter *H* at the time when the fluctuation was generated. The current data puts an upper bound on *H* at around 1014 GeV , which is much higher than the accessible energy scale in particle accelerators. The precise measurements of the primordial perturbations generated during inflation may place a constraint on the theory of high energy physics independently of the particle experiments. In string theory, compactification of the extra dimensions typically yields a number of scalar fields, which may have masses bigger than the Hubble parameter during inflation. Investigating a possible imprint of these massive fields might allow us to explore the high energy physics behind. While one field model is consistent with the current data , there is still room to include a contamination of such massive fields, which act as isocurvature modes. If such a massive field has a mass much bigger than the Hubble scale, integrating out the massive field only gives local contributions to the effective action for the inflaton (relevant works can be found, e.g., in Refs. ). In such a case, since we are ignorant of the high energy theory, it is impossible to disentangle the radiative corrections of the massive field. However, if one of the isocurvature modes has a mass of order *H*, the radiative correction may yield a distinctive non-local contribution. Chen and Wang studied an impact of a massive field on the primordial curvature perturbation *ζ* in Ref.  (see also Ref. ). In their setup, the inflaton has a non-minimal coupling with the massive field, which yields the cross-correlation between them. As emphasized in Ref. , where a more extensive analysis, including higher spin fields, was done, the massive field leaves more direct information in the squeezed configuration of the correlation functions, which has a soft external leg, than in other configurations. The massive scalar field with 0 < *m*/*H* ≤ 3/2 decays as *η*Δ− with $$\begin{aligned} & \Delta \equiv \frac{3}{2} - \sqrt{\frac{9}{4}- \frac{m^2}{H^2}} \label{Delta-} \end{aligned}$$ at large scales. Then, as was computed in Refs.  the contribution of the massive field to the squeezed bi-spectrum, when the shorter mode *k* crosses the Hubble scale, is given by $$\begin{aligned} & \langle \zeta\_q \zeta\_k \zeta\_k \rangle \propto P\_\zeta(q) P\_\zeta(k) (q/k)^{\Delta}\,, \qquad \quad (q/k \ll 1)\,, \label{Eq:bizeta}\end{aligned}$$ where *P**ζ*(*k*) is the power spectrum of *ζ*. Notice that (*q*/*k*)Δ encodes the evolution between the Hubble crossing time for the mode *k* and the one for *q*. For *m* > 3*H*/2, the massive field oscillates, while decaying, as $\eta^{\tilde\Delta\_{\pm}}$ with $$\begin{aligned} & \tilde{\Delta}\_{\pm} \equiv \frac{3}{2} \pm i \sqrt{\frac{m^2}{H^2}- \frac{9}{4}}\,, \label{Deltapm}\end{aligned}$$ which gives the same momentum dependence as in Eq. ([Eq:bizeta]) except that Δ− is replaced with $\tilde{\Delta}\_\pm$. When the curvature perturbation stops evolving after the Hubble crossing time of the shorter mode *k*, Eq. ([Eq:bizeta]) gives the squeezed bispectrum at the end of inflation. As far as the massive fields do not contribute to the background evolution (an example where a massive field modulates the background evolution was studied, e.g., in Refs. ) and the tree level contribution is concerned, the curvature perturbation is conserved after all modes cross the Hubble scale [1](#fn1). In this case, Eq. ([Eq:bizeta]) indeed gives the bi-spectrum at the end of inflation . The argument in Ref.  is based on the symmetry of the de Sitter spacetime. Therefore, one may speculate that the loop correction of the massive field may also be still given by Eq. ([Eq:bizeta]), while the scaling dimension Δ will no longer be given as in Eq. ([Delta-]) nor Eq. ([Deltapm]). In this generalization, a non-trivial point may be in showing the conservation of *ζ* after the Hubble crossing. In Refs. , the conservation of *ζ* was addressed, including the loop correction of *ζ* in the setup of single field inflation. In this paper, we address the conservation of the curvature perturbation *ζ* which is affected by loop corrections of a heavy field *χ*, assuming that the heavy field does not contribute to the classical background trajectory. The constant non-decaying mode of *ζ* is called the adiabatic mode. To compute the evolution of the curvature perturbation, we integrate out the heavy field and derive the effective action for *ζ*. If the mass of the heavy field *M* is much bigger than the Hubble scale, the loop corrections of *χ* only yields local terms in the effective action and then following the argument in the single field case, we can show the conservation of *ζ* at large scales. On the other hand, if the mass *M* is not large enough compared to the Hubble scale, the loop corrections of *χ* can give non-local contributions to the effective action. The presence of the non-local contribution can yield a qualitative difference from single field models. In single field models of inflation, the conservation of the curvature perturbation at large scales is implemented by the scaling symmetry ${\hbox{\boldmath{$x$}}} \to e^s {\hbox{\boldmath{$x$}}}$ with a constant parameter *s*, which changes $\zeta(t, {\hbox{\boldmath{$x$}}})$ to $\zeta(t,\, e^{-s}{\hbox{\boldmath{$x$}}})-s$. The scaling symmetry is one of the gauge transformations and hence classically it should be preserved for a diffeomorphism invariant theory. However, when we quantize the system, the scaling symmetry is not always preserved, particularly when we allow an arbitrary initial quantum state . When the scaling symmetry is preserved, a part of the IR divergences is canceled out . (In order to eliminate all the IR divergences, we also need to preserve the invariance under other gauge transformations. For a detailed explanation, see, e.g., Ref. .) In Refs. , it was shown that when we choose the Euclidean vacuum, a.k.a., the adiabatic vacuum or the Bunch-Davies vacuum in the de Sitter limit, there exists a set of quantities which is free from the IR divergences. In quantum field theory, a symmetry implies a corresponding identity, the so-called Ward-Takahashi (WT) identity. For one field model of inflation, the scaling symmetry yields the consistency relation, which relates the (*n* + 1)-point function of *ζ* with one soft external leg to the *n*-point function of *ζ* . The consistency relation is indeed the WT identity for the scaling symmetry . The consistency relation was first shown for the bi-spectrum in the squeezed limit by Maldacena in Ref.  and it was extended to more general single field models in Ref. . The consistency relation for the arbitrary *n*-point function was derived in Ref. . In a single field inflation with diffeomorphism invariance, when the initial state is the Euclidean vacuum and the background trajectory is on attractor, the consistency relation generally holds. When one of these assumptions is not fulfilled, the consistency relation can be violated . Various extensions of the consistency relation have been attempted so far. In Refs. , the squeezed bi-spectrum was computed in a non slow-roll setup and in Refs. , sub-leading contributions for the consistency relation were computed. (See also Refs. .) The consistency relation can also be obtained from the reparametrization invariance of the wave function of the universe . The use of the wave function is also motivated by the holographic description of inflation . In Refs. , the consistency relation was derived by solving the Callan-Symanzik equation in the dual boundary theory (see also Ref. ). A possible gauge issue for the consistency relation was discussed in Refs. . In this paper, we derive the consistency relation for the heavy field *χ* from the requirement of the scaling symmetry. When the scaling symmetry is preserved at the quantum level, we obtain the corresponding WT identity. The WT identity for the scaling symmetry yields the consistency relation which relates the (*n* + 1)-point function of the *n* *χ*s and one soft curvature perturbation *ζ* to the *n*-point function of the *χ* field. The derivation of the consistency relation also applies in the presence of the loop corrections of the heavy field. Using the effective action for *ζ*, obtained by integrating out *χ*, we show that when the consistency relation for *χ* holds, the curvature perturbation *ζ* is conserved at the super Hubble scales. This paper is organized as follows. In Sec. [Sec:single], we review the conservation of *ζ* in single field models of inflation, emphasizing the crucial role of the scaling symmetry. In Sec. [Sec:EA], after we describe our setup of the problem, we introduce the effective action for *ζ* by integrating out the heavy field in the in-in (or closed time path) formalism. In Sec. [Sec:WT], we derive the consistency relation for the heavy field from the Ward-Takahashi identity for the scaling symmetry. In Sec. [Sec:Conservation], using the consistency relation, derived in Sec. [Sec:WT], we show the conservation of *ζ* in the presence of the loop corrections of the heavy field. In Sec. [Renormalization], we briefly discuss the renormalization of the heavy field. Finally, in Sec. [Conclusion], we conclude. Conservation of *ζ* and scaling symmetry in single field inflation ================================================================== In single field models of inflation, it is known that the curvature perturbation is conserved in the large scale limit. In this section, we show that the conservation of *ζ* is a direct consequence of the scaling symmetry. Single field inflation ---------------------- For illustrative purpose, we start our discussion by considering a single scalar field with the standard kinetic term, whose action is given by $$\begin{aligned} S &= \frac{1}{2} \int \sqrt{-g} \left[ R - g^{\mu \nu} \partial\_\mu \phi \partial\_\nu \phi - 2V(\phi) \right] {{\rm d}}^4 x\,. \label{Exp:actionS}\end{aligned}$$ In this paper, we set the gravitational constant *κ*2 ≡ 8*π**G* to 1. Using the ADM form of the line element: $$\begin{aligned} {{\rm d}}s^2 = - N^2 {{\rm d}}t^2 + h\_{ij} ({{\rm d}}x^i + N^i {{\rm d}}t) ({{\rm d}}x^j + N^j {{\rm d}}t)~, \label{Exp:ADMmetric}\end{aligned}$$ where we introduced the lapse function *N*, the shift vector *N**i*, and the spatial metric *h**i**j*, we can express the action ([Exp:actionS]) as $$\begin{aligned} S &=\frac{1}{2} \int \sqrt{h} \Bigl[ N\,{{^s\!R}}- 2 N V(\phi) + N (\kappa\_{ij} \kappa^{ij} - \kappa^2) \cr & \qquad \qquad \qquad \qquad + \frac{1}{N} ( \dot{\phi} - N^i \partial\_i \phi )^2 - N h^{ij} \partial\_i \phi \partial\_j \phi \Bigr] {{\rm d}}^4x~, \label{Eq:ADMaction}\end{aligned}$$ where *s* *R* is the three-dimensional Ricci scalar, and *κ**i**j* and *κ* are the extrinsic curvature and its trace, defined by $$\begin{aligned} \kappa\_{ij} = \frac{1}{2 N} ( \dot{h}\_{ij} - D\_i N\_j - D\_j N\_i )~, \qquad \kappa = h^{ij} \kappa\_{ij} ~.\end{aligned}$$ Here, the spatial indices *i*, *j*, ⋯ are raised or lowered by the spatial metric *h**i**j*, and *D**i* denotes the covariant differentiation associated with *h**i**j*. Taking the variation of the action with respect to *N* and *N**i*, which are the Lagrange multipliers, we obtain the Hamiltonian and momentum constraint equations as $$\begin{aligned} & {{^s\!R}}- 2 V - (\kappa^{ij} \kappa\_{ij} - \kappa^2 ) - N^{-2} ( \dot{\phi} - N^i \partial\_i \phi)^2 - h^{ij} \partial\_i \phi \partial\_j \phi = 0~, \\ & D\_j ( \kappa^j\_{~i} - \delta^j\_{~i} \kappa ) - N^{-1} \partial\_i \phi~ ( \dot{\phi} - N^j \partial\_j \phi) = 0~. \end{aligned}$$ We determine the time slicing, employing the uniform field gauge: $$\begin{aligned} & \delta \phi=0\,. \label{GCphi}\end{aligned}$$ We express the spatial metric *h**i**j* as $$\begin{aligned} & h\_{ij}= e^{2(\rho+\zeta)} \left[ e^{\delta \gamma} \right]\_{ij}\,,\end{aligned}$$ where the background scale factor is expressed as *a* ≡ *e**ρ* and *δ**γ**i**j* is set to traceless. As spatial gauge conditions, we impose $$\begin{aligned} & \partial^i \delta \gamma\_{ij} =0\,. \label{GCT}\end{aligned}$$ With this gauge choice, the constraint equations are given by $$\begin{aligned} & {{^s\!R}}- 2 V - (\kappa^{ij} \kappa\_{ij} - \kappa^2 ) - N^{-2} \dot{\phi}^2 = 0\,, \\ & D\_j ( \kappa^j\_{~i} - \delta^j\_{~i} \kappa ) = 0 ~.\end{aligned}$$ Inserting *N* and *N**i*, which are expressed in terms of *ζ* by solving these constraint equations, into the action ([Eq:ADMaction]), we can derive the action for *ζ* . Scaling symmetry ---------------- The transverse condition imposed on *δ**γ**i**j* is non-local and hence to determine the coordinates, we need to employ boundary conditions. For example, at linear order in perturbation, the tensor perturbation transforms under the spatial coordinate transformation *x**i* → *δ**x̃**i* = *x**i* + *δ**x**i* as $$\begin{aligned} & \delta \tilde{\gamma}\_{ij} (x) = \delta \gamma\_{ij}(x) - 2 \left(\partial\_{(i} \delta x\_{j)} - \frac{1}{3} \delta\_{ij} \partial^k \delta x\_k \right) \,. \end{aligned}$$ The transverse condition on *δ**γ**i**j* gives $$\begin{aligned} & \partial^2 \delta x^i = - \frac{1}{3} \partial^i \partial\_j \delta x^j\,, \label{Exp:condT}\end{aligned}$$ which does not determine *δ**x**i* uniquely without specifying boundary conditions to solve Eq. ([Exp:condT]). For the scalar mode, all spatial coordinate transformations *δ**x**i* = ∂*i**δ**x* which satisfy ∂2∂*i**δ**x* = 0 still keep the transverse condition after the transformations. When we consider a compact support on each time slicing, we find an infinite way to impose the boundary conditions in solving ∂2∂*i**δ**x* = 0. We analyzed these *gauge* degrees of freedom in detail in Refs. , where we used the italic font for “gauge” to discriminate the *gauge* transformations defined within the compact support from those in the infinite spatial support. (See Ref.  for a more recent work.) So far, we kept the tensor perturbation *δ**γ**i**j* for the illustrative purpose, but in the rest of this paper, we neglect it. Among these transformations, the important one for *ζ* is the scale transformation *δ**x**i* = *s**x**i*, which is extended to *x**i* → *x**s**i* = *e**s**x**i* at non-linear orders. The parameter *s* can vary in time , but for our purpose, we do not need to consider the time dependence, and hence we set *s* to a constant parameter. Under this transformation, the spatial line element is recast into $$\begin{aligned} & \frac{{{\rm d}}l^2\_d}{a^2(t)} = e^{2 \zeta(t,\, {\hbox{\boldmath{\scriptsize$x$}}})} {{\rm d}}{\hbox{\boldmath{$x$}}}^2 = e^{2 \zeta\_s(t,\, {\hbox{\boldmath{\scriptsize$x$}}}\_s)} {{\rm d}}{\hbox{\boldmath{$x$}}}\_s^2 = e^{2 \{ \zeta\_s(t,\, e^s {\hbox{\boldmath{\scriptsize$x$}}}) +s \}} {{\rm d}}{\hbox{\boldmath{$x$}}}^2 \,,\end{aligned}$$ and then the curvature perturbation changes to $$\begin{aligned} & \zeta\_s(t,\, {\hbox{\boldmath{$x$}}}) = \zeta(t,\, e^{-s}{\hbox{\boldmath{$x$}}}) -s\,. \label{Exp:transzeta}\end{aligned}$$ This is a purely geometrical argument, and hence this transformation law also should apply to multi-field models. Preserving the scaling symmetry is crucial to ensure the infrared (IR) regularity of the curvature perturbation. In Refs. , we introduced the spatial Ricci scalar evaluated in the geodesic normal coordinates as a *gauge* invariant quantity. Since the contribution from the IR modes, which give rise to the IR divergence, can be eliminated by performing the corresponding *gauge* transformation, the IR divergence also can be removed from the *gauge* invariant quantity. By construction, the spatial Ricci scalar evaluated in the spatial geodesic normal coordinates is *gauge* invariant and it serves a conceptually clear example of the *gauge* invariant quantity. Nevertheless, using the geodesic normal coordinates can alter the UV behaviour . For a practical use, we may use the smeared geodesic coordinates ${\hbox{\boldmath{$x$}}}\_g(t)$ given by  $$\begin{aligned} & {\hbox{\boldmath{$x$}}}\_g(t) \equiv e^{{{^g\!\bar{\zeta}}}(t)} {\hbox{\boldmath{$x$}}} \,, \label{Def:xg}\end{aligned}$$ where *g* *ζ̄* is the averaged *ζ* at a compact support on each time slicing, given by $$\begin{aligned} & {{^g\!\bar{\zeta}}}(t) = \frac{\int {{\rm d}}^3 {\hbox{\boldmath{$x$}}}\_g W\_t({\hbox{\boldmath{$x$}}}\_g) \zeta(t,\, e^{- {{^g\!\bar{\zeta}}}} {\hbox{\boldmath{$x$}}}\_g)}{\int {{\rm d}}^3 {\hbox{\boldmath{$x$}}}\_g W\_t({\hbox{\boldmath{$x$}}}\_g)} \,, \label{Def:gbz}\end{aligned}$$ where $W\_t({\hbox{\boldmath{$x$}}})$ is a window function which vanishes outside the compact support on the time constant slicing. The spatial Ricci scalar evaluated at ${\hbox{\boldmath{$x$}}}\_g$ is not invariant under all the *gauge* transformations, but it is invariant under the scale transformation. Conservation of *ζ* in single field inflation --------------------------------------------- In single field inflation, solving the Hamiltonian and momentum constraint equations, we can eliminate *N* and *N**i* and write down the action only in terms of *ζ*. Since the action for any diffeomorphism invariant theory remains invariant under the scale transformation, the action for *ζ* should take the following form: $$\begin{aligned} & S [\zeta] = \int {{\rm d}}t\, {{\rm d}}^3 {\hbox{\boldmath{$x$}}} e^{3(\rho+\zeta)} {\cal L}\_\zeta\! \left[ \partial\_t\zeta,\, e^{-(\rho+\zeta)} \partial\_i \zeta \right]\,, \label{Exp:actionzeta}\end{aligned}$$ where the Lagrangian density ${\cal L}\_\zeta$ includes *ζ* only in the form ∂*t**ζ* or *e*− (*ρ* + *ζ*)∂*i**ζ*. (A detailed explanation can be found in Ref. .) To address the conservation of *ζ* in the large scale limit, we neglect the terms which include *ζ* with the spatial derivative. Then, the action for *ζ*, written in the form ([Exp:actionzeta]), is given by $$\begin{aligned} & S [\zeta] \approx \int {{\rm d}}t\, {{\rm d}}^3 {\hbox{\boldmath{$x$}}} e^{3(\rho+\zeta)} \varepsilon \left[ \dot{\zeta}^2 + \sum\_{n=3}^\infty \frac{2}{n} f\_n(t) \dot{\zeta}^n \right]\,, \label{Exp:actionlargescale}\end{aligned}$$ where we schematically wrote the non-linear terms which include $\dot{\zeta}$. Here, the time dependent function *f**n*(*t*) is expressed only in terms of the background quantities such as $\dot{\rho}$ and the slow-roll parameters. Varying the action with respect to *ζ*, we obtain the equation of motion in the large scale limit as $$\begin{aligned} & \partial\_t \left[ e^{3(\rho+\zeta)} \varepsilon \left\{ \dot{\zeta} + \sum\_{n=3}^\infty f\_n(t) \dot{\zeta}^{n-1} \right\} \right] \approx 0 \,. \label{Exp:eomlargescale} \end{aligned}$$ This equation motion has the anticipated constant solution in time as the non-decaying mode. I.e., if *ζ*(*x*) = *F*(*x*) is a solution of Eq. ([Exp:eomlargescale]), *F*(*x*) + *C* with a constant shift should also satisfy the equation. Then, after other solutions of the non-linear equation decay, *ζ* should be conserved at large scales. The relation between the shift symmetry and the conservation of *ζ* was pointed out also in Horndeski’s theory . Effective action for *ζ* with loop corrections of heavy field ============================================================= Next, we consider a two-field model with one inflaton and one heavy field. The latter field does not contribute to the classical background evolution. Following the Feynman and Vernon’s method , in this section, we compute the effective action for the curvature perturbation with loop corrections of the massive field. Two field model --------------- In this paper, we consider a light scalar field *ϕ* and a massive scalar field *χ* whose action is given by $$\begin{aligned} S &= \frac{1}{2} \int \sqrt{-g} \left[ R - g^{\mu \nu} \partial\_\mu \phi \partial\_\nu \phi - g^{\mu \nu} \partial\_\mu \chi \partial\_\nu \chi - 2V(\phi,\, \chi) \right] {{\rm d}}^{d+1} x\,, \label{Exp:action}\end{aligned}$$ where *V*(*ϕ*,  *χ*) is a potential for the scalar fields: $$\begin{aligned} & V(\phi,\, \chi) = V\_{ph}(\phi) + V\_{ch}(\phi,\, \chi) \,, \label{Exp:V}\end{aligned}$$ with $$\begin{aligned} & V\_{ch}(\phi,\, \chi) \equiv \frac{1}{2} M^2(\phi) \chi^2 + \frac{\lambda}{4!}\chi^4 \,.\end{aligned}$$ We decomposed the potential *V*(*ϕ*,  *χ*) into the *χ* independent part *V**p**h* and the rest *V**c**h*. When the mass of *χ* field, *M*, depends on the inflaton *ϕ*, this model includes the direct interaction between *ϕ* and *χ*, e.g., *ϕ*2*χ*2, which was addressed in Refs. . We assume that the mass of the inflaton *m* is much smaller than the Hubble parameter *H* as *m* ≪ *H*, while the mass of the field *χ* is bigger than *H* as *M* > *H*. Then, the classical background evolution is determined solely by the inflaton *ϕ* and (the linear perturbation of) *χ* becomes the pure isocurvature perturbation. In this paper, we only allow the renormalizable interactions for *χ*, while *W*(*ϕ*) may include non-renormalizable interactions. To perform the dimensional regularization, we consider a general (*d* + 1) dimensional spacetime. An extension to include non-renormalizable interactions for *χ* is straightforward as long as a finite order of loop corrections is concerned. As in the single field case, we determine the time slicing, imposing $$\begin{aligned} & \delta \phi=0\,.\end{aligned}$$ For our later use, we discriminate the part which explicitly depends on *χ* from the rest as $$\begin{aligned} S= S\_{ad} [N,\, N\_i,\, \zeta] + S\_\chi[N,\, N\_i,\, \zeta,\, \chi]\,,\end{aligned}$$ where *S**a**d* agrees with the action in single field models, given in Eq. ([Eq:ADMaction]), and *S**χ* is given by $$\begin{aligned} S\_\chi & = \frac{1}{2} \int e^{d(\rho+ \zeta)} \biggl[ \frac{1}{N} (\dot{\chi} - e^{-2(\rho +\zeta)} \delta^{ij} N\_i \partial\_j \chi)^2 \cr & \qquad \qquad \qquad \qquad \quad - N e^{-2(\rho+ \zeta)} (\partial\_i \chi)^2 - 2 N V(\phi,\, \chi) \biggr] {{\rm d}}t\, {{\rm d}}^d {\hbox{\boldmath{$x$}}}\,.\end{aligned}$$ Even if there is no direct interaction between *ϕ* and *χ*, gravity yields the non-linear interaction between *ζ* and the heavy field *χ*. Effective action ---------------- Since the mass of the field *χ* is much bigger than the Hubble scale *H*, it is natural to set the background value of *χ* to 0. Then, the field *χ* does not contribute to the classical background evolution. Meanwhile, because of the interaction between *ζ* and *χ*, the quantum fluctuation of *χ* can affect the evolution of the field *ζ*. In order to compute the evolution of *ζ* under the influence of *χ*, we compute the Feynman and Vernon’s influence functional , which can be obtained by integrating out the field *χ*, in the closed time path (or the in-in) formalism. As in the single field case, the action *S* also includes the lapse function and the shift vector, which can be removed by solving the Hamiltonian and momentum constraint equations. These constraint equations are also modified due to the quantum fluctuation of the heavy field *χ*. Using the closed time path, the *n*-point function of the curvature perturbation is given by $$\begin{aligned} & \langle \Psi\,| T\zeta(x\_1) \cdots \zeta(x\_n) |\, \Psi \rangle\cr & = \frac{\int D\zeta\_+ \int D\chi\_+ \int D \zeta\_- \int D\chi\_-\, \zeta\_+(x\_1) \cdots \zeta\_+(x\_n)\, e^{i S[\delta g\_+,\, \chi\_+]- i S[\delta g\_-,\, \chi\_-]}}{\int D\zeta\_+ \int D\chi\_+ \int D \zeta\_- \int D\chi\_-\, e^{i S[\delta g\_+,\, \chi\_+]- i S[\delta g\_-,\, \chi\_-]}}\,, \end{aligned}$$ where we used an abbreviation *δ**g* = (*δ**N*,  *N**i*,  *ζ*). In the closed time path, we double the fields: *δ**g*+ and *χ*+ denote the fields integrated from the past infinity to the time *t* and *δ**g*− and *χ*− denote the fields integrated from the time *t* to the past infinity. Here, *T* denotes the time ordering. Inserting *ζ*−(*x*) into the path integral in the numerator, we can compute the *n*-point function ordered in the anti-time ordering. Since *N* and *N**i* are not independent variables, we perform the path integral only regarding *ζ* and *χ*. Introducing the effective action ${S\_{\rm eff}}$ as $$\begin{aligned} &i {S\_{\rm eff}}[\zeta\_+,\, \zeta\_-] \equiv \ln \left[ \int D \chi\_+ \int D \chi\_- \, e^{i S[\delta g\_+,\, \chi\_+]- i S[\delta g\_-,\, \chi\_-]}\right]\,, \label{Def:Seff}\end{aligned}$$ we rewrite the *n*-point function for *ζ* as $$\begin{aligned} & \langle \Psi\,| T\zeta(x\_1) \cdots \zeta(x\_n) |\, \Psi \rangle = \frac{\int D\zeta\_+ \int D \zeta\_- \, \zeta\_+(x\_1) \cdots \zeta\_+(x\_n)\, e^{i {S\_{\rm eff}}[\delta g\_+,\, \delta g\_-]}}{\int D\zeta\_+ \int D \zeta\_-\,e^{i {S\_{\rm eff}}[\delta g\_+,\,\delta g\_-]}} \,. \end{aligned}$$ By inserting the action *S* into Eq. ([Def:Seff]), the effective action is recast into $$\begin{aligned} & {S\_{\rm eff}}[\delta g\_+,\, \delta g\_-] = S\_{ad}[\delta g\_+] - S\_{ad}[\delta g\_-] + {S\_{\rm eff}}'[\delta g\_+,\, \delta g\_-]\,, \label{Exp:Seff}\end{aligned}$$ where ${S\_{\rm eff}}'$ is the so-called influence functional, given by $$\begin{aligned} &i {S\_{\rm eff}}'[\delta g\_+,\, \delta g\_-] \equiv \ln \left[ \int D \chi\_+ \int D \chi\_- \,e^{i S\_\chi[\delta g\_+,\, \chi\_+]- i S\_\chi[\delta g\_-,\, \chi\_-]}\right]\,, \label{Exp:Seffd}\end{aligned}$$ where we factorized *S**a**d*[*δ**g*±] which commute with the path integral over *χ*±. The effective action ${S\_{\rm eff}}[\delta g\_+,\, \delta g\_-]$ describes the evolution of the curvature perturbation affected by the quantum fluctuation of the heavy field *χ*. Computing the effective action ------------------------------ Performing the path integrals about *χ*+ and *χ*−, we can compute the effective action ${S\_{\rm eff}}'[\delta g\_+,\, \delta g\_-]$. Expanding ${S\_{\rm eff}}'$ in terms of *δ**g* = (*δ**N*,  *N**i*,  *ζ*), we obtain $$\begin{aligned} & i {S\_{\rm eff}}'[\delta g\_+,\, \delta g\_-] \equiv \sum\_{n=0}^\infty i {S'\_{{\rm eff}(n)}}[\delta g\_+,\, \delta g\_-] \,,\end{aligned}$$ where ${S'\_{{\rm eff}(n)}}$ denotes the terms which include *n* *δ**g**α*s, given by $$\begin{aligned} i {S'\_{{\rm eff}(n)}}[\delta g\_+,\, \delta g\_-] &= \frac{1}{n!} \sum\_{\alpha\_1 = \pm} \cdots \sum\_{\alpha\_n = \pm} \int {{\rm d}}^{d+1} x\_1 \cdots \int {{\rm d}}^{d+1} x\_n \cr & \qquad \times \delta g\_{\alpha\_1}(x\_1) \cdots \delta g\_{\alpha\_n}(x\_n)\,W^{(n)}\_{\delta g\_{\alpha\_1} \cdots \delta g\_{\alpha\_n}}(x\_1,\, \cdots,\, x\_n) \,, \label{Expn:Seffd}\end{aligned}$$ with $$\begin{aligned} & W^{(n)}\_{\delta g\_{\alpha\_1} \cdots \delta g\_{\alpha\_n}}(x\_1,\, \cdots,\, x\_n) \equiv \frac{\delta^n i {S\_{\rm eff}}'[\zeta\_+,\, \zeta\_-]}{\delta g\_{\alpha\_1}(x\_1) \cdots \delta g\_{\alpha\_n}(x\_n)} \bigg|\_{\delta g\_{\pm}=0} \,. \label{Def:tWn}\end{aligned}$$ In Eq. ([Expn:Seffd]), each *δ**g**α**m* with *m* = 1, ⋯,  *n* should add up *δ**N**α**m*, *N**i*, *α**m*, and *ζ**α**m*. Here and hereafter, for notational brevity, we omit the summation symbol over *δ**g* unless necessary. Using Eq. ([Exp:Seffd]), we can express the variation of ${S\_{\rm eff}}'$ with respect to *δ**g*± by using the propagators for *χ*. Notice that the shift symmetry is not manifest in this series expansion. The linear term in the effective action is given by $$\begin{aligned} i {S'\_{{\rm eff}(1)}} &= \sum\_{\alpha= \pm} \int {{\rm d}}^{d+1} x\, \delta g\_\alpha(x) W^{(1)}\_{\delta g\_\alpha} (x) \,, \end{aligned}$$ where *W**δ**g**α*(1) is given by the expectation value as $$\begin{aligned} & W^{(1)}\_{\delta g\_+} (x) = - W^{(1)}\_{\delta g\_-} (x) = \left\langle \frac{\delta i S\_\chi}{\delta g(x)} \bigg|\_{\delta g=0} \right\rangle\,.\end{aligned}$$ Next, we compute the quadratic terms in ${S\_{\rm eff}}'$. Taking the second variation of ${S\_{\rm eff}}'$ with respect to *δ**g*+, we obtain $$\begin{aligned} W^{(2)}\_{\delta g\_+ \delta \tilde{g}\_+}(x\_1,\,x\_2) & = i^2 \left\langle \frac{\delta S\_\chi[\delta g\_+,\, \chi\_+]}{\delta g\_+(x\_1)} \bigg|\_{\delta g\_+=0} \frac{\delta S\_\chi[\delta g\_+,\, \chi\_+]}{\delta \tilde{g}\_+(x\_2)} \bigg|\_{\delta g\_+=0} \right\rangle\_{\pm} \cr & \qquad \qquad \qquad \quad + i \delta(x\_1-x\_2) \left\langle \frac{\delta^2 S\_\chi[\zeta\_+,\, \chi\_+]}{\delta g\_+(x\_1) \delta \tilde{g}\_+(x\_1)} \bigg|\_{\delta g\_+=0} \right\rangle\_{\pm}\,, \label{Exp:dS++}\end{aligned}$$ where *δ**g* and *δ**g̃* are either *δ**N*, *N**i*, or *ζ*, and they can be different metric perturbations. Here, we introduced the expectation value: $$\begin{aligned} & \langle {\cal O}[\chi\_+,\, \chi\_-] \rangle\_{\pm} \equiv \frac{\int D \chi\_+ \int D \chi\_- \,{\cal O}[\chi\_+,\, \chi\_-] e^{i S\_\chi[0,\, \chi\_+]- i S\_\chi[0,\,\chi\_-]}}{\int D \chi\_+ \int D \chi\_- \,e^{i S\_\chi[0,\,\chi\_+]- i S\_\chi[0,\,\chi\_-]}} \,.\end{aligned}$$ Since the action *S**χ*[*δ**g*+,  *χ*+] includes only local terms, the variation of *S**χ*[*δ**g*+,  *χ*+] with respect to *δ**g*+(*x*1) and *δ**g̃*+(*x*2) yields the delta function *δ*(*x*1 − *x*2) in Eq. ([Exp:dS++]). Similarly, the second variation of ${S\_{\rm eff}}'$ with respect to *δ**g*− is given by $$\begin{aligned} W^{(2)}\_{\delta g\_- \delta \tilde{g}\_-}(x\_1,\,x\_2) & = i^2 \left\langle \frac{\delta S\_\chi[\delta g\_-,\, \chi\_-]}{\delta g\_-(x\_1)} \bigg|\_{\delta g\_-=0} \frac{\delta S\_\chi[\delta g\_-,\, \chi\_-]}{\delta \tilde{g}\_-(x\_2)} \bigg|\_{\delta g\_-=0} \right\rangle\_{\pm} \cr & \qquad \qquad \qquad \quad - i \delta(x\_1-x\_2) \left\langle \frac{\delta^2 S\_\chi[\delta g\_-,\, \chi\_-]}{\delta g\_-(x\_1) \delta \tilde{g}\_-(x\_1)} \bigg|\_{\delta g\_-=0} \right\rangle\_{\pm}\,. \label{Exp:dS--}\end{aligned}$$ Taking the derivative with respect to both *δ**g*+ and *δ**g*−, we obtain $$\begin{aligned} W^{(2)}\_{\delta g\_+ \delta \tilde{g}\_-}(x\_1,\,x\_2) & = - i^2 \left\langle \frac{\delta S\_\chi[\delta g\_+,\, \chi\_+]}{\delta g\_+(x\_1)} \bigg|\_{\delta g\_+=0} \frac{\delta S\_\chi[\delta g\_-,\, \chi\_-]}{\delta \tilde{g}\_-(x\_2)} \bigg|\_{\delta g\_-=0} \right\rangle\_{\pm} \,, \label{Exp:dS+-}\end{aligned}$$ and $$\begin{aligned} W^{(2)}\_{\delta g\_- \delta \tilde{g}\_+}(x\_1,\,x\_2) & = - i^2 \left\langle \frac{\delta S\_\chi[\delta g\_-,\, \chi\_-]}{\delta g\_-(x\_1)} \bigg|\_{\delta g\_-=0} \frac{\delta S\_\chi[\delta g\_+,\, \chi\_+]}{\delta \tilde{g}\_+(x\_2)} \bigg|\_{\delta g\_+=0} \right\rangle\_{\pm} \,. \label{Exp:dS-+}\end{aligned}$$ These functions *W**δ**g**α*1*δ**g̃**α*2(2)(*x*1,  *x*2) can be expanded by the propagators of *χ* for *λ* = 0, i.e., the time-ordered (Feynman) propagator: $$\begin{aligned} & G\_F(x\_1,\, x\_2) \equiv \frac{\int D \chi\_+\, \chi\_+(x\_1) \chi\_+(x\_2)\,e^{i {S\_{\chi,0}}[\chi\_+]}}{\int D \chi\_+\,e^{i {S\_{\chi,0}}[\chi\_+]}}\,, \end{aligned}$$ the anti-time ordered (Dyson) propagator: $$\begin{aligned} & G\_D(x\_1,\, x\_2) \equiv \frac{\int D \chi\_-\, \chi\_-(x\_1) \chi\_-(x\_2)\,e^{- i {S\_{\chi,0}}[\chi\_-]}}{\int D \chi\_-\,e^{i {S\_{\chi,0}}[\chi\_-]}}\,, \end{aligned}$$ and the Wightman functions: $$\begin{aligned} & G^+(x\_1,\, x\_2) \equiv \frac{\int D \chi\_+ \int D \chi\_- \,\chi\_-(x\_1) \chi\_+(x\_2) e^{i {S\_{\chi,0}}[ \chi\_+]- i {S\_{\chi,0}}[\chi\_-]}}{\int D \chi\_+ \int D \chi\_- \,e^{i {S\_{\chi,0}}[\chi\_+]- i {S\_{\chi,0}}[\chi\_-]}} \,, \cr & G^-(x\_1,\, x\_2) \equiv \frac{\int D \chi\_+ \int D \chi\_- \,\chi\_+(x\_1) \chi\_-(x\_2) e^{i {S\_{\chi,0}}[ \chi\_+]- i {S\_{\chi,0}}[\chi\_-]}}{\int D \chi\_+ \int D \chi\_- \,e^{i {S\_{\chi,0}}[\chi\_+]- i {S\_{\chi,0}}[\chi\_-]}} \,. \end{aligned}$$ Here, *S**χ*, 0[*χ*] denotes the action given by $$\begin{aligned} {S\_{\chi,0}}[\chi] = \frac{1}{2} \int e^{d\rho} \biggl[ \dot{\chi}^2 - e^{-2\rho} (\partial\_i \chi)^2 - M^2(\phi) \chi^2 \biggr] {{\rm d}}t \, {{\rm d}}^d {\hbox{\boldmath{$x$}}}\,.\end{aligned}$$ Recall that these propagators are mutually related as $$\begin{aligned} & G^-(x\_1,\,x\_2) = G^{+\,\*}(x\_1,\,x\_2)\,, \label{Exp:Gpm} \\ & G\_F(x\_1,\, x\_2) = \theta(t\_1 - t\_2) G^+(x\_1,\, x\_2) + \theta(t\_2- t\_1) G^-(x\_1,\,x\_2) \,, \label{Exp:GF} \\ & G\_D(x\_1,\, x\_2) = \theta(t\_1 - t\_2) G^-(x\_1,\, x\_2) + \theta(t\_2- t\_1) G^+(x\_1,\,x\_2)\,, \label{Exp:GD}\end{aligned}$$ where *θ* is the Heaviside function. Propagators for *χ* ------------------- In this subsection, solving the mode function for the heavy field *χ*, we compute the propagators introduced in the previous subsection. At the linear order of *χ*, the equation of motion is given by $$\begin{aligned} & \ddot{\chi}\_k + d \dot{\rho} \dot{\chi}\_k + \{ M^2(\phi) + (k e^{-\rho})^2 \} \chi\_k =0\,, \label{Eq:modechi}\end{aligned}$$ where *χ**k* is the Fourier mode of *χ*. Changing the variable from *χ**k* to $X\_k(t)= e^{\frac{d-1}{2} \rho(t)} \chi\_k(t)$ and using the conformal time *η*, the mode equation ([Eq:modechi]) is recast into $$\begin{aligned} & X\_k'' + \Omega\_k^2(\eta)\, X\_k =0\,, \label{Eq:modechit}\end{aligned}$$ where the dash denotes the derivative with respect to the conformal time *η* and the time dependent frequency Ω*k* is given by $$\begin{aligned} \Omega\_k^2(\eta) = k^2 + (M e^\rho)^2 - \rho'' - {\rho'}^2\,.\end{aligned}$$ Using *W**k* which satisfies $$\begin{aligned} & W\_k^2 = \Omega\_k^2 + \frac{3}{4} \left( W\_k' \over W\_k \right)^2 - \frac{1}{2} \frac{W\_k''}{W\_k}\,, \label{Eq:Wk}\end{aligned}$$ the mode equation ([Eq:modechit]) can be solved as $$\begin{aligned} & X\_k(\eta) = \frac{1}{\sqrt{2 W\_k}} e^{- i \int^\eta {{\rm d}}\eta' W\_k(\eta')}\,. \label{Eq:WKB}\end{aligned}$$ Using the mode function *χ**k*, we quantize the non self-interacting heavy field as follows $$\begin{aligned} & \chi(x) = \int \frac{{{\rm d}}^d {\hbox{\boldmath{$k$}}}}{(2\pi)^{d/2}} e^{i {\hbox{\boldmath{\scriptsize$k$}}} \cdot {\hbox{\boldmath{\scriptsize$x$}}}} a\_{{\hbox{\boldmath{\scriptsize$k$}}}} \chi\_k(\eta) + ({\rm h.c.})\,, \label{Exp:chif}\end{aligned}$$ where $a\_{{\hbox{\boldmath{\scriptsize$k$}}}}$ denotes the annihilation operator. With this expansion, the Wightman function *G*+(*x*1,  *x*2) is given by $$\begin{aligned} & G^+(x\_1,\, x\_2) = \int \frac{{{\rm d}}^d {\hbox{\boldmath{$k$}}}}{(2\pi)^d} e^{i {\hbox{\boldmath{\scriptsize$k$}}} \cdot ({\hbox{\boldmath{\scriptsize$x$}}}\_1- {\hbox{\boldmath{\scriptsize$x$}}}\_2)} \chi\_k(\eta\_1) \chi\_k^\*(\eta\_2)\,. \label{Exp:G+}\end{aligned}$$ Once the mode function *χ**k*(*η*) is given, using Eqs. ([Exp:Gpm])-([Exp:GD]) and ([Exp:G+]), we can compute all the propagators which appear in the expansion ([Exp:dS++]). Ward-Takahashi identity from the scaling symmetry ================================================= As was discussed in Sec. [SSec:scaling], the action for the single field model preserves the invariance under the scale transformation, which changes $\zeta(t,\, {\hbox{\boldmath{$x$}}})$ to $\zeta(t,\, e^{-s} {\hbox{\boldmath{$x$}}}) -s$. This symmetry is preserved classically also for multi-field models of inflation, since it is a part of the spatial diffeomorphism. In a quantum field theory, it is known that a symmetry leads to a corresponding Ward-Takahashi (WT) identity. When the scaling symmetry is also preserved at the quantum level, we obtain the WT identity. In Sec. [SSec:WT], we discuss the WT identity from the scaling symmetry of the correlators of *χ*. In single field models of inflation, it was shown that the WT identity of the scaling symmetry yields the consistency relation which relates the (*n* + 1)-point function of *ζ* with one soft external leg to *n*-point function of *ζ* . Likewise, in Sec. [SSec:CR], we find that the WT identity derived in Sec. [SSec:WT] gives the consistency relation which relates the (*n* + 1)-point cross-correlation with *n* *χ*s and one soft *ζ* to the *n*-point auto-correlation of *χ*. Ward-Takahashi identity ----------------------- In single field models, an invariant quantity regarding the scale transformation was constructed in Refs.  by using the smeared geodesic normal coordinates, defined in Eq. ([Def:xg]). Using ${\hbox{\boldmath{$x$}}}\_g\equiv {\hbox{\boldmath{$x$}}}\_g(t\_f)$, evaluated at the end of inflation *t* = *t**f*, we define $$\begin{aligned} & {^g\!\chi}(t,\, {\hbox{\boldmath{$x$}}}\_g) = \chi(t,\, {\hbox{\boldmath{$x$}}})= \chi(t,\, e^{- {{^g\!\bar{\zeta}}}} {\hbox{\boldmath{$x$}}}\_g)\,,\end{aligned}$$ which is invariant under the scaling symmetry with the constant parameter *s*. Here, *g* *ζ̄* ≡ *g* *ζ̄*(*t**f*). When the scaling symmetry is also preserved for the quantum system, the correlation functions of ${^g\!\chi}(t,\, {\hbox{\boldmath{$x$}}}\_g)$ should be invariant under the scale transformation as $$\begin{aligned} & \langle \chi\_{\alpha\_1}(t\_1,\, e^{- {{^g\!\bar{\zeta}}}}{\hbox{\boldmath{$x$}}}\_{g1}) \cdots \chi\_{\alpha\_n}(t\_n,\, e^{- {{^g\!\bar{\zeta}}}}{\hbox{\boldmath{$x$}}}\_{gn}) \rangle\_{\pm} \big|\_{\delta g} \cr &\,\,= \langle \chi\_{\alpha\_1}(t\_1,\, e^{- {{^g\!\bar{\zeta}}}+s} {\hbox{\boldmath{$x$}}}\_{g1}) \cdots \chi\_{\alpha\_n}(t\_n,\, e^{- {{^g\!\bar{\zeta}}}+s}{\hbox{\boldmath{$x$}}}\_{gn}) \rangle\_{\pm} \big|\_{\delta g\_s} \label{WT} \end{aligned}$$ with *α**i* =  ± . Here, *δ**g**s* denotes the metric perturbations after the scale transformation. Under the scale transformation, *δ**N* and *N**i* change as $$\begin{aligned} & \delta N\_s(t,\, {\hbox{\boldmath{$x$}}}) = \delta N(t,\, e^{-s}{\hbox{\boldmath{$x$}}}) \,, \label{Exp:transN} \\ & N\_{i, s} (t,\, {\hbox{\boldmath{$x$}}}) = e^{-s} N\_i(t, e^{-s} {\hbox{\boldmath{$x$}}})\,, \label{Exp:transNi}\end{aligned}$$ and *ζ* changes as in Eq. ([Exp:transzeta]), and then *g* *ζ̄* changes to *g* *ζ̄**s* = *g* *ζ̄* − *s*. Equation ([WT]) holds only when the quantum state also preserves the scaling symmetry. This is the WT identity for the scaling symmetry. At ${\cal O}(s)$, setting *δ**g* = 0, the WT identity yields $$\begin{aligned} & \sum\_{i=1}^n {\hbox{\boldmath{$x$}}}\_i \cdot \partial\_{{\hbox{\boldmath{\scriptsize$x$}}}\_i} \langle \chi\_{\alpha\_1}(x\_1) \cdots \chi\_{\alpha\_n}(x\_n) \rangle\_{\pm} - \int {{\rm d}}^{d+1} x \left\langle \chi\_{\alpha\_1}(x\_1) \cdots \chi\_{\alpha\_n}(x\_n) \frac{\delta i S\_\chi[\delta g\_+,\, \chi\_+]}{\delta \zeta\_+(x)} \bigg|\_{\delta g\_+=0} \right\rangle\_{\!\pm} \nonumber \\ & \qquad \qquad + \int {{\rm d}}^{d+1} x \left\langle \chi\_{\alpha\_1}(x\_1) \cdots \chi\_{\alpha\_n}(x\_n) \frac{\delta i S\_\chi[\delta g\_-,\, \chi\_-]}{\delta \zeta\_-(x)} \bigg|\_{\delta g\_-=0} \right\rangle\_{\!\pm}=0\,. \label{WTs1} \end{aligned}$$ Since the changes of *δ**N* and *N**i* under the scale transformation are linear in *N* and *N**i* and their derivatives, they vanish after setting *δ**g* to 0. Using the WT identity ([WTs1]) with *x*1 = ⋯ = *x**p* ≡ *x* and *x**p* + 1 = ⋯ = *x**n* ≡ *x*ʹ, we obtain $$\begin{aligned} & ({\hbox{\boldmath{$x$}}} \cdot \partial\_{{\hbox{\boldmath{\scriptsize$x$}}}} + {\hbox{\boldmath{$x$}}}' \cdot \partial\_{{\hbox{\boldmath{\scriptsize$x$}}}'}) \langle \chi\_\alpha^p(x) \chi\_\alpha^{n-p}(x') \rangle\_{\pm} - \int {{\rm d}}^{d+1} y \left\langle \chi\_\alpha^p(x) \chi\_\alpha^{n-p}(x') \frac{\delta i S\_\chi[\delta g\_+,\, \chi\_+]}{\delta \zeta\_+(y)} \bigg|\_{\delta g\_+=0} \right\rangle\_{\!\pm} \nonumber \\ & \qquad \qquad + \int {{\rm d}}^{d+1} y \left\langle \chi\_\alpha^p(x) \chi\_\alpha^{n-p}(x') \frac{\delta i S\_\chi[\delta g\_-,\, \chi\_-]}{\delta \zeta\_-(y)} \bigg|\_{\delta g\_-=0} \right\rangle\_{\!\pm}=0 \,, \label{Exp:WTpr}\end{aligned}$$ where *α* =  ± . In the next section, using these identities, we show the conservation of *ζ*, including the loop corrections of the heavy field. Consistency relation (Soft theorem) ----------------------------------- In single field models of inflation, it is known that the WT identity for the scaling symmetry gives the consistency relation. The consistency relation for *ζ* is an example of the soft theorem, which was first shown for the soft graviton scattering by Weinberg . Recently, Weinberg’s soft theorem was recaptured by Strominger *et al.* and was shown to be equivalent to a Ward-Takahashi identity in an asymptotically flat spacetime . Here, we show that the WT identity ([WTs1]) also gives a consistency relation in multi-field models. Performing the Fourier transformation of the WT identity ([WTs1]) evaluated at an equal time *t* with all *α**i*s chosen as  + , we obtain $$\begin{aligned} & \left(\sum\_{i=1}^n {\hbox{\boldmath{$k$}}}\_i \cdot \partial\_{{\hbox{\boldmath{\scriptsize$k$}}}\_i} + nd \right) \langle \chi\_+({\hbox{\boldmath{$k$}}}\_1) \cdots \chi\_+({\hbox{\boldmath{$k$}}}\_n) \rangle\_{\pm} \nonumber \\ & \qquad - i \int {{\rm d}}^{d+1} y \left\langle \frac{\delta S\_{\chi}[\delta g\_-,\, \chi\_-]}{\delta \zeta\_-(y)} \bigg|\_{\delta g\_-=0} \chi\_+({\hbox{\boldmath{$k$}}}\_1) \cdots \chi\_+({\hbox{\boldmath{$k$}}}\_n) \right\rangle\_{\!\pm} \nonumber \\ & \qquad + i \int {{\rm d}}^{d+1} y \left\langle \chi\_+({\hbox{\boldmath{$k$}}}\_1) \cdots \chi\_+({\hbox{\boldmath{$k$}}}\_n) \frac{\delta S\_{\chi}[\delta g\_+,\, \chi\_+]}{\delta \zeta\_+(y)} \bigg|\_{\delta g\_+=0} \right\rangle\_{\! \pm} =0\,, \label{Eq:WTtf}\end{aligned}$$ where we abbreviated *t* in the argument of *χ*s. The correlator in the first line is simply the in-in *n*-point function of $\chi(t,\,{\hbox{\boldmath{$k$}}})$. The correlator in the second line is given by the product of the Wightman function, the Feynman propagator, and the Dyson propagator, which appear by contracting *χ*± with *χ*∓, *χ*+ with *χ*+, and *χ*− with *χ*−, respectively. The correlator in the third line is given by the product of the Feynman propagator. The interaction vertices inserted at any time after *t* are canceled between the terms in the second and third lines. This ensures the causality in the closed time path formalism. Taking into account this cancellation, we can rewrite Eq. ([Eq:WTtf]) as $$\begin{aligned} & \left[ \sum\_{i=2}^n {\hbox{\boldmath{$k$}}}\_i \cdot \partial\_{{\hbox{\boldmath{\scriptsize$k$}}}\_i} + (n-1)d \right] \left\langle \chi\left(- \sum\_{j=2}^n {\hbox{\boldmath{$k$}}}\_j\right) \chi({\hbox{\boldmath{$k$}}}\_2) \cdots \chi({\hbox{\boldmath{$k$}}}\_n) \right\rangle' \cr & = - i \int^t {{\rm d}}t\_y \int {{\rm d}}^d {\hbox{\boldmath{$y$}}} \left\langle \left[ \chi({\hbox{\boldmath{$k$}}}\_1) \cdots \chi({\hbox{\boldmath{$k$}}}\_n),\, \frac{\delta S\_\chi}{\delta \zeta(y)} \bigg|\_{\zeta=0} \right] \right\rangle' \,, \label{Eq:CR} \end{aligned}$$ where the correlation function with dash denotes the correlation function from which (2*π*)*d* and the delta function are removed, e.g., $$\begin{aligned} & \langle \chi({\hbox{\boldmath{$k$}}}\_1) \cdots \chi({\hbox{\boldmath{$k$}}}\_n) \rangle \equiv (2\pi)^d \delta({\hbox{\boldmath{$k$}}}\_1 + \cdots + {\hbox{\boldmath{$k$}}}\_n) \langle \chi({\hbox{\boldmath{$k$}}}\_1) \cdots \chi({\hbox{\boldmath{$k$}}}\_n) \rangle' \,. \end{aligned}$$ In deriving Eq. ([Eq:CR]), we used $$\begin{aligned} & \sum\_{i=1}^n {\hbox{\boldmath{$k$}}}\_i \cdot \partial\_{{\hbox{\boldmath{\scriptsize$k$}}}\_i} \delta({\hbox{\boldmath{$k$}}}\_1 + \cdots {\hbox{\boldmath{$k$}}}\_n) \langle \chi({\hbox{\boldmath{$k$}}}\_1) \cdots \chi({\hbox{\boldmath{$k$}}}\_n) \rangle' \cr & = \delta({\hbox{\boldmath{$k$}}}\_1 + \cdots {\hbox{\boldmath{$k$}}}\_n) \left( \sum\_{i=2}^n {\hbox{\boldmath{$k$}}}\_i \cdot \partial\_{{\hbox{\boldmath{\scriptsize$k$}}}\_i} -d \right) \left\langle \chi\left(- \sum\_{j=2}^n {\hbox{\boldmath{$k$}}}\_j\right) \chi({\hbox{\boldmath{$k$}}}\_2) \cdots \chi({\hbox{\boldmath{$k$}}}\_n) \right\rangle' \,. \end{aligned}$$ cc [Fg:CR] The correlation function in the second line of Eq. ([Eq:CR]) is the in-in *n*-point function of $\chi({\hbox{\boldmath{$k$}}})$ where the gravitational interaction vertices with *n* heavy fields *χ* and one amputated soft curvature perturbation *ζ* are inserted. (See Fig. [Fg:CR].) Then, attaching the external soft propagator of *ζ* to this correlation function yields the (*n* + 1)-point function of *n* *χ*s and one soft *ζ*, i.e., $$\begin{aligned} & \left[ \sum\_{i=2}^n {\hbox{\boldmath{$k$}}}\_i \cdot \partial\_{{\hbox{\boldmath{\scriptsize$k$}}}\_i} + (n-1)d \right] \left\langle \chi\left(- \sum\_{j=2}^n {\hbox{\boldmath{$k$}}}\_j\right) \chi({\hbox{\boldmath{$k$}}}\_2) \cdots \chi({\hbox{\boldmath{$k$}}}\_n) \right\rangle' \cr & \qquad = - \lim\_{k \to 0} \frac{ \langle \zeta({\hbox{\boldmath{$k$}}}) \chi({\hbox{\boldmath{$k$}}}\_1) \cdots \chi({\hbox{\boldmath{$k$}}}\_n) \rangle'}{P\_\zeta(k)} \,,\end{aligned}$$ where *P**ζ*(*k*) is the power spectrum of the free *ζ*. This is the consistency relation for the heavy field *χ*. The correlation function in the second line contains only one gravitational interaction vertex with *ζ*, but it can contain more than one self interaction vertexes for the heavy field *χ*. This is one example of the soft theorem. Using the WT identity at ${\cal O}(s)$, we derived the consistency relation for the correlation functions with one soft *ζ*. Using the WT identity at ${\cal O}(s^p)$, we can derive the consistency relations with *p* soft *ζ*s. Scaling symmetry and WKB solution --------------------------------- In this subsection, we explicitly analyze the consistency relation ([Eq:CR]) for the case with *λ* = 0. Inserting Eq. ([Exp:chif]) into Eq. ([Eq:CR]), we obtain $$\begin{aligned} &({\hbox{\boldmath{$k$}}} \cdot \partial\_{{\hbox{\boldmath{\scriptsize$k$}}}} + d) |\chi\_k(t)|^2 \cr & = 2 {\rm Im} \biggl[ \int^t {{\rm d}}t' e^{d\rho(t')} \chi\_k^2(t) \left\{ d(\dot{\chi}\_k^\*\!^2(t') - M^2 \chi\_k^\*\!^2 (t') ) - (d-2) \frac{k^2}{e^{2\rho(t')}} \chi\_k^\*\!^2(t') \right\} \biggr].\end{aligned}$$ Integrating by parts and using the mode equation, we obtain $$\begin{aligned} & {\hbox{\boldmath{$k$}}} \cdot \partial\_{{\hbox{\boldmath{\scriptsize$k$}}}} |\chi\_k(t)|^2 = 4 \, {\rm Im} \left[ \int^t {{\rm d}}t' e^{d \rho(t')} \frac{k^2}{e^{2 \rho(t')}} \chi\_k^2(t) \chi\_k^\*\!^2 (t') \right]\,, \label{CRF}\end{aligned}$$ where *d*∣*χ**k*(*t*)∣2 in the left hand side was canceled with the term which appears by operating the time derivative on the Heaviside function *θ*(*t* − *t*ʹ). We can show that the WKB solution satisfies the WT identity ([CRF]). In order to show this statement, we rewrite Eq. ([CRF]) as $$\begin{aligned} & \chi\_k(\eta) L\_k^\*(\eta) + \chi\_k^\*(\eta) L\_k(\eta) =0 \,, \label{CFT2}\end{aligned}$$ introducing $$\begin{aligned} & L\_k(\eta) \equiv k \partial\_k \chi\_k(\eta) - 2 i \chi\_k^\*(\eta) \int^\eta {{\rm d}}\eta' e^{(d-1)\rho(\eta')} k^2 \chi\_k^2(\eta') \cr & \qquad \qquad \quad + 2 i \chi\_k(\eta) \int^\eta\_{\bar{\eta}} {{\rm d}}\eta' e^{(d-1)\rho(\eta')} k^2 |\chi\_k(\eta')|^2 + i k \bar{\eta} \chi\_k(\eta)\,. \label{Def:Lkt}\end{aligned}$$ The last two terms in *L**k*(*η*) are canceled between the two terms in Eq. ([CFT2]). The time integral of the second term converges by rotating the time path as *η* →  − ∞(1 + *i**ε*) where *ε* is a positive constant. For *L**k*\*(*η*), the time integral of the second term should be rotated as *η* →  − ∞(1 − *i**ε*). We choose *η̄* at a time in the distant past when the mode function can be well approximated by the leading order WKB solution with *W**k* = *k*. (To be precise, *η̄* differs for a different wavenumber *k*.) Since *L**k*(*η*) satisfies the mode equation for *χ**k*(*η*), i.e., $$\begin{aligned} & {L}''\_k + (d-1) \rho' L'\_k + \{ k^2 + M^2(\phi) e^{2\rho} \} L\_k =0\,, \end{aligned}$$ and the initial conditions *L**k*(*η̄*) = *L*ʹ*k*(*η̄*) = 0, *L**k*(*η*) vanishes all the time. Thus, we find that the WKB solution satisfies Eq. ([CFT2]). For the exact de Sitter space, in the limit *k**e*− *ρ* ≪ *M* and *H* ≪ *M*, we can easily check that the WKB solution, given by $$\begin{aligned} & \chi\_k (t) \simeq \frac{e^{\frac{d}{2}\rho(t)}}{\sqrt{2 M}} \left[ 1 + \frac{1}{4} \left( \frac{k}{M e^\rho} \right)^2 \left( -1 + i \frac{M}{H} \right) \right] e^{- i Mt}\,,\end{aligned}$$ satisfies Eq. ([CRF]) as $$\begin{aligned} & {\hbox{\boldmath{$k$}}} \cdot \partial\_{{\hbox{\boldmath{\scriptsize$k$}}}} |\chi\_k(t)|^2 = - e^{(d-1)\rho(t)} \frac{k^2}{2 \omega\_k^3(t)} \simeq - \frac{e^{-d \rho(t)}}{2 M} \left( \frac{k}{M e^\rho} \right)^2 \left\{ 1 - \frac{3}{2} \left( \frac{k}{M e^\rho} \right)^2 \right\} \,. \end{aligned}$$ Conservation of *ζ* with loop corrections of heavy field ======================================================== In this section, we show that when the scaling symmetry is preserved, the curvature perturbation *ζ* is conserved in time at super Hubble scales, including the loop correction of the heavy field *χ*. For this purpose, first we rewrite the effective action, using the WT identities derived in the previous section. Then, using the obtained effective action, we show the conservation of *ζ*. Effective action with scaling symmetry -------------------------------------- As discussed for the single field inflation in Sec. [SSec:single], the presence of the constant solution is implied by the scaling symmetry. In this subsection, using the WT identity, we rewrite the effective action in such a way that the scaling symmetry becomes manifest. Taking the variation of *S**χ* with respect to *δ**g*, we can compute *W**δ**g**α*1⋯*δ**g**α**n*(*n*) and the effective action. For instance, taking the *n*-th derivative of *S**χ* with respect to *ζ* and setting *δ**g* to 0, we obtain $$\begin{aligned} &\frac{\delta^n S\_\chi[\delta g,\, \chi]}{\delta \zeta(x\_1) \cdots \delta \zeta(x\_n)} \bigg|\_{\delta g=0} \cr & = \delta(x\_1 - x\_2) \cdots \delta(x\_{n-1} - x\_n) \frac{e^{d \rho(t\_1)}}{2} \cr & \quad \times \left[ d^n \! \left( \dot{\chi}^2(x\_1) - M^2 \chi^2(x\_1)- \frac{\lambda}{12} \chi^4(x\_1) \right) - (d-2)^n e^{-2\rho(t\_1)} (\partial\_{{\hbox{\boldmath{\scriptsize$x$}}}\_1} \chi(x\_1))^2 \right]. \label{Exp:nS} \end{aligned}$$ In this subsection, we show that when the WT identity for *χ*, given in Eq. ([Exp:WTpr]), is fulfilled, the effective action for *ζ* preserves the scaling symmetry. To show this, we further rewrite the WT identity ([Exp:WTpr]). Operating $$\int {{\rm d}}^d {\hbox{\boldmath{$x$}}} \int {{\rm d}}^{d+1} y\, \delta g\_\alpha(x) \delta({\hbox{\boldmath{$x$}}} - {\hbox{\boldmath{$y$}}}) \delta(t\_x-t\_y)$$ and performing the integration by parts, we obtain $$\begin{aligned} & \int {{\rm d}}^d {\hbox{\boldmath{$x$}}} \langle \chi\_\alpha^n (x) \rangle \partial\_{{\hbox{\boldmath{\scriptsize$x$}}}} \{ {\hbox{\boldmath{$x$}}} \delta g\_\alpha (x) \} + \int {{\rm d}}^d {\hbox{\boldmath{$x$}}} \int {{\rm d}}^{d+1}y\, \delta g\_\alpha(x) \left\langle \chi\_\alpha^n(x) \frac{\delta i S\_\chi}{\delta \zeta\_+(y)} \bigg|\_{\delta g\_+=0} \right\rangle \cr & \qquad \qquad - \int {{\rm d}}^d {\hbox{\boldmath{$x$}}} \int {{\rm d}}^{d+1}y\, \delta g\_\alpha(x) \left\langle \chi\_\alpha^n(x) \frac{\delta i S\_\chi}{\delta \zeta\_-(y)} \bigg|\_{\delta g\_-=0} \right\rangle = 0\,. \label{Eq:WTnd}\end{aligned}$$ Here, after rewriting $\delta({\hbox{\boldmath{$x$}}}- {\hbox{\boldmath{$y$}}})({\hbox{\boldmath{$x$}}} \cdot \partial\_{{\hbox{\boldmath{\scriptsize$x$}}}} + {\hbox{\boldmath{$y$}}} \cdot \partial\_{{\hbox{\boldmath{\scriptsize$y$}}}})$ as $\delta({\hbox{\boldmath{$x$}}}- {\hbox{\boldmath{$y$}}})({\hbox{\boldmath{$y$}}} \cdot \partial\_{{\hbox{\boldmath{\scriptsize$x$}}}} + {\hbox{\boldmath{$x$}}} \cdot \partial\_{{\hbox{\boldmath{\scriptsize$y$}}}})$, we performed the integration by parts and then we used $$({\hbox{\boldmath{$y$}}} \cdot \partial\_{{\hbox{\boldmath{\scriptsize$x$}}}} + {\hbox{\boldmath{$x$}}} \cdot \partial\_{{\hbox{\boldmath{\scriptsize$y$}}}}) \delta({\hbox{\boldmath{$x$}}}- {\hbox{\boldmath{$y$}}}) = ( \partial\_{{\hbox{\boldmath{\scriptsize$x$}}}}\, {\hbox{\boldmath{$x$}}} - {\hbox{\boldmath{$x$}}} \cdot \partial\_{{\hbox{\boldmath{\scriptsize$x$}}}}) \delta({\hbox{\boldmath{$x$}}}- {\hbox{\boldmath{$y$}}}) = d \delta({\hbox{\boldmath{$x$}}}- {\hbox{\boldmath{$y$}}})\,.$$ Similarly, operating $$\int {{\rm d}}^d {\hbox{\boldmath{$x$}}} \int {{\rm d}}^{d+1} y \delta g\_\alpha(x) \delta({\hbox{\boldmath{$x$}}} - {\hbox{\boldmath{$y$}}}) \delta(t\_x-t\_y) \partial\_{x^\mu} \partial\_{y^\nu} \,,$$ on Eq. ([Def:tWn]) with *n* = 2 and *p* = 1, where *μ*, *ν* = 0,  *i*, we obtain $$\begin{aligned} & \int {{\rm d}}^d {\hbox{\boldmath{$x$}}} \langle \dot{\chi}^2\_\alpha (x) \rangle \partial\_{{\hbox{\boldmath{\scriptsize$x$}}}} \{ {\hbox{\boldmath{$x$}}} \delta g\_\alpha (x) \} + \int {{\rm d}}^d {\hbox{\boldmath{$x$}}} \int {{\rm d}}^{d+1}y\, \delta g\_\alpha(x) \left\langle \dot{\chi}\_\alpha^2(x) \frac{\delta i S\_\chi}{\delta \zeta\_+(y)} \bigg|\_{\delta g\_+=0} \right\rangle \cr & \qquad \qquad - \int {{\rm d}}^d {\hbox{\boldmath{$x$}}} \int {{\rm d}}^{d+1}y\, \delta g\_\alpha(x) \left\langle \dot{\chi}\_\alpha^2(x) \frac{\delta i S\_\chi}{\delta \zeta\_-(y)} \bigg|\_{\delta g\_-=0} \right\rangle = 0\,, \label{Eq:WTtd} \\ & \int {{\rm d}}^d {\hbox{\boldmath{$x$}}} \int {{\rm d}}^{d+1}y\, \delta g\_\alpha(x) \left\langle \dot{\chi}\_\alpha(x) \partial\_i \chi\_{\alpha}(x) \frac{\delta i S\_\chi}{\delta \zeta\_+(y)} \bigg|\_{\delta g\_+=0} \right\rangle \cr & \qquad \qquad - \int {{\rm d}}^d {\hbox{\boldmath{$x$}}} \int {{\rm d}}^{d+1}y\, \delta g\_\alpha(x) \left\langle \dot{\chi}\_\alpha(x) \partial\_i \chi\_\alpha(x) \frac{\delta i S\_\chi}{\delta \zeta\_-(y)} \bigg|\_{\delta g\_-=0} \right\rangle = 0\,,\end{aligned}$$ and $$\begin{aligned} & \int {{\rm d}}^d {\hbox{\boldmath{$x$}}} \langle \partial\_i \chi\_\alpha (x) \partial^i \chi\_\alpha (x) \rangle \{ {\hbox{\boldmath{$x$}}} \cdot \partial\_{{\hbox{\boldmath{\scriptsize$x$}}}} + (d-2) \} \delta g\_\alpha (x) \cr & \qquad \qquad + \int {{\rm d}}^d {\hbox{\boldmath{$x$}}} \int {{\rm d}}^{d+1}y\, \delta g\_\alpha(x) \left\langle \partial\_i \chi\_\alpha (x) \partial^i \chi\_\alpha (x) \frac{\delta i S\_\chi}{\delta \zeta\_+(y)} \bigg|\_{\delta g\_+=0} \right\rangle \cr & \qquad \qquad - \int {{\rm d}}^d {\hbox{\boldmath{$x$}}} \int {{\rm d}}^{d+1}y\, \delta g\_\alpha(x) \left\langle \partial\_i \chi\_\alpha (x) \partial^i \chi\_\alpha (x) \frac{\delta i S\_\chi}{\delta \zeta\_-(y)} \bigg|\_{\delta g\_-=0} \right\rangle = 0\,, \label{Eq:WTsd}\end{aligned}$$ where we used $\langle \dot{\chi}(x) \partial\_i \chi(x) \rangle=0$. Recalling the expressions of *W**δ**g**α*(1) and *W**δ**g**α*1*δ**g**α*2(2), given in Sec. [SSec:NaiveEA] and adding Eqs. ([Eq:WTnd]), ([Eq:WTtd]), and ([Eq:WTsd]) in such a way that their first terms give *W**δ**g**α*(1), we obtain $$\begin{aligned} & \int {{\rm d}}^{d+1} x\, \{{\hbox{\boldmath{$x$}}} \cdot \partial\_{{\hbox{\boldmath{\scriptsize$x$}}}} \delta g\_{\pm} (x) \} W^{(1)}\_{\delta g\_{\pm}} (x) \cr & \qquad + \int {{\rm d}}^{d+1} x \int {{\rm d}}^{d+1} y\, \delta g\_{\pm}(x) \left\{ W^{(2)}\_{\delta g\_{\pm} \zeta\_{\pm}}(x,\, y) + W^{(2)}\_{\delta g\_{\pm} \zeta\_{\mp}}(x,\, y) \right\} =0 \,. \label{Eq:WTEA}\end{aligned}$$ As is clear from the derivation, Eq. ([Eq:WTEA]) also holds, even if we replace *δ**g*±(*x*) included in each term with an arbitrary function. Therefore, replacing *δ**g*±(*x*) with a constant nonzero number, we obtain $$\begin{aligned} & \int {{\rm d}}^{d+1} x \int {{\rm d}}^{d+1} y \left\{ W^{(2)}\_{\delta g\_{\pm} \zeta\_{\pm}}(x,\, y) + W^{(2)}\_{\delta g\_{\pm} \zeta\_{\mp}}(x,\, y) \right\} = 0 \,. \label{Eq:WTEAct}\end{aligned}$$ By adding the left hand side of Eq. ([Eq:WTEA]) multiplied by a constant parameter  − *s* and Eq. ([Eq:WTEAct]) with *δ**g*± = *ζ*± multiplied by  − *s*2/2, the linear and the quadratic terms in the effective action can be given by $$\begin{aligned} & i {S'\_{{\rm eff}(1)}}[\delta g\_+,\, \delta g\_-] + i {S'\_{{\rm eff}(2)}}[\delta g\_+,\, \delta g\_-] \cr & = \sum\_{\alpha = \pm} \int {{\rm d}}^{d+1} x\, \delta g\_{\alpha,\, s}(x) W^{(1)}\_{\delta g\_\alpha} (x) \cr & \qquad + \frac{1}{2!} \!\sum\_{\alpha\_1, \alpha\_2 = \pm}\! \int {{\rm d}}^{d+1} x\_1 \!\int\! {{\rm d}}^{d+1} x\_2 \, \delta g\_{\alpha\_1,\, s}(x\_1) \delta \tilde{g}\_{\alpha\_2,\,s}(x\_2) W^{(2)}\_{\delta g\_{\alpha\_1} \delta \tilde{g}\_{\alpha\_2}}(x\_1,\,x\_2) \cr & \qquad + {\cal O}(\delta g^3) \,, \label{Expn:Seffdd}\end{aligned}$$ where *δ**g**s* are related to *δ**g* as given in Eqs. ([Exp:transzeta]), ([Exp:transN]), and ([Exp:transNi]). Here, each *δ**g**i*, *α* (*i* = 1,  2) sums over *δ**N**α*, *s*, *N**i*, *α*, *s*, and *ζ**α*, *s*. We can drop the term with one shift vector, because *W**N**i*, *α*(1), which is proportional to $\langle \dot{\chi} \partial\_i \chi \rangle$, vanishes. In deriving Eq. ([Expn:Seffdd]), we used $$\begin{aligned} & W^{(2)}\_{\delta g\_{\alpha\_1} \delta \tilde{g}\_{\alpha\_2}}(x\_1,\, x\_2) = W^{(2)}\_{\delta \tilde{g}\_{\alpha\_2} \delta g\_{\alpha\_1}}(x\_2,\, x\_1)\,,\end{aligned}$$ and *g* *ζ̄*+ = *g* *ζ̄*−, which holds since $\zeta\_+(t\_f,\,{\hbox{\boldmath{$x$}}})=\zeta\_-(t\_f,\, {\hbox{\boldmath{$x$}}})$. The first term in Eq. ([Eq:WTEA]) changes the argument of the metric perturbations in the linear term of Eq. ([Expn:Seffdd]). We also changed the arguments of the quadratic terms, since the modification appears only in higher orders of *δ**g*. Equation ([Expn:Seffdd]) shows that with the use of the WT identity, *δ**g**α*(*x*) in ${S\_{\rm eff}}'$ can be replaced with *δ**g**α*,  *s*(*x*). Since the rest of the effective action, *S**a**d*, is simply the classical action for the single field model, it also should be invariant under this replacement. Therefore, when the WT identity ([WT]) holds, the total effective action $S\_{\rm eff}$ preserves the invariance under the change of *δ**g**α* to *δ**g**α*, *s*. The effective action ([Expn:Seffdd]) includes the lapse function and the shift vector. By solving the Hamiltonian and momentum constraint equations, we can express *δ**N**s* and *N**i*, *s* in terms of *ζ**s*. Using these expressions, we can eliminate *δ**N**s* and *N**i*, *s* in the effective action as in the single field model . Since the constraint equations for *δ**g**s* are given by replacing *δ**g* with *δ**g**s* in the constraint equations for *δ**g*, the effective action for *ζ**s* obtained after eliminating *δ**N**s* and *N**i*, *s* should be given simply by replacing *ζ* with *ζ**s* in the effective action expressed only in terms of *ζ*. Conservation of *ζ* ------------------- ### Tadpole contribution Before we discuss the conservation, we show that the linear terms in the effective action ${S\_{\rm eff}}$, which is the tadpole terms, vanish all together. Taking the variation of the effective action with respect to *N* and *N**i*, we obtain the constraint equations. The Hamiltonian constraint for the FRW background gives $$\begin{aligned} & d(d-1) \dot{\rho}^2 = \dot{\phi}^2 + \langle \dot{\chi}^2 \rangle + e^{-2\rho} \langle (\partial\_{{\hbox{\boldmath{\scriptsize$x$}}}} \chi)^2 \rangle + 2 \langle V(\phi,\, \chi) \rangle \,, \end{aligned}$$ and at the liner order $$\begin{aligned} & (d-1) e^{-2 \rho} \dot{\rho} \partial^i N\_i + \delta N (2 \langle V \rangle + e^{-2\rho} \langle (\partial \chi)^2 \rangle )- d(d-1) \dot{\rho} \dot{\zeta} - \zeta e^{-2 \rho} \langle (\partial \chi)^2 \rangle = 0 \label{HconstL}\,,\end{aligned}$$ where ∂*i* ≡ *δ**i**j*∂*j*. Here, we neglected the sub-leading contribution at large scales. The scalar part of the momentum constraint gives $$\begin{aligned} & \partial\_i (\dot{\rho} \delta N - \dot{\zeta})- \frac{e^{-2\rho}}{d-1} N\_i \langle (\partial\_i\chi)^2 \rangle =0\,, \label{MconstL}\end{aligned}$$ where we used ⟨∂*i**χ*∂*j**χ*⟩ ∝ *δ**i**j*. The momentum constraint equation can be solved as $$\begin{aligned} & \delta N = \frac{\dot{\zeta}}{\dot{\rho}} + \frac{e^{-2\rho}}{(d-1)\dot{\rho}} \sum\_{i=1}^3 \langle (\partial\_i\chi)^2 \rangle \partial^{-2} \partial^i N\_i \,. \label{Exp:deltaNwC}\end{aligned}$$ We can add a homogeneous solution of the Laplace equation on the right hand side. The action *S**a**d* which is accurate at the linear order of *δ**g* is given by $$\begin{aligned} S\_{ad} &\simeq \frac{1}{2} \int {{\rm d}}^{d+1} x N e^{d(\rho+\zeta)} \Bigl[- 2 W(\phi) + \frac{1}{N^2} \bigl\{ - d(d-1) \dot{\rho}^2 + \dot{\phi}^2 \bigr\} \cr & \qquad \qquad \qquad \qquad \qquad \qquad -2(d-1) \dot{\rho} \delta N \bigl\{ d \dot{\zeta} - e^{-2(\rho + \zeta)} \partial^i N\_i\bigr\} \Bigr] \,,\end{aligned}$$ where, for our purpose, we partially kept the higher order terms in the exponential form. The *n*-th order effective action ${S'\_{{\rm eff}(n)}}$ includes the local terms given by $$\frac{1}{n!} \int {{\rm d}}^{d+1} x\, \delta g\_{1,\, \alpha}(x) \cdots \delta g\_{n,\, \alpha}(x) \left\langle \frac{\delta^n i S\_\chi[\delta g\_\alpha,\, \chi\_\alpha]}{\delta g\_{1,\alpha}(x) \cdots \delta g\_{n,\alpha}} \right\rangle$$ with *α* =  ± . Adding up these local terms for all *n*, we obtain $$\begin{aligned} & S\_{{\rm eff}, {\rm local}}'[\delta g\_+,\, \delta g\_-] = \langle S\_{\chi}[\delta g\_+,\, \chi\_+] \rangle - \langle S\_{\chi}[\delta g\_-,\, \chi\_-] \rangle \,, \end{aligned}$$ where the terms which do not depend on *δ**g* are canceled between the two terms on the right hand side. Adding these local terms to *S**a**d* and using the Hamiltonian constraint, we obtain a concise expression as $$\begin{aligned} & S\_{ad}[\zeta\_\alpha] + \langle S\_{\chi}[\zeta\_\alpha,\, \chi\_\alpha] \rangle \cr & \simeq - \int {{\rm d}}^{d+1} x\, N e^{d(\rho + \zeta\_\alpha)} \left[ 2 \langle V(\phi,\, \chi) \rangle + e^{-2(\rho + \zeta\_\alpha)} \langle (\partial \chi)^2 \rangle \right] . \label{Exp:SzetaSeff}\end{aligned}$$ As in single field cases, solving the Hamiltonian and momentum constraint equations, we can express the lapse function and the shift vector in terms of *ζ*. Inserting Eq. ([Exp:deltaNwC]) into the Hamiltonian constraint ([HconstL]), we find that the degree of freedom to choose the solution of the Laplace equation changes the relation between *N**i* and *ζ*. Since the domain of integration extends to the spatial infinity, to regularize the spatial integral, the perturbed variables should approach to 0 in the spatial infinity. Therefore, we determine the constant degree of freedom in Eq. ([Exp:deltaNwC]), requesting the boundary condition: $$\begin{aligned} & \int {{\rm d}}^d {\hbox{\boldmath{$x$}}} \, \partial^i N\_i =0 \,. \label{BCNi}\end{aligned}$$ With this choice, the Hamiltonian constraint reads $$\begin{aligned} & \int {{\rm d}}^d {\hbox{\boldmath{$x$}}}\, \delta N \left\{ 2 \langle V \rangle + e^{-2\rho} \langle (\partial \chi)^2 \rangle \right\} = \int {{\rm d}}^d {\hbox{\boldmath{$x$}}} \left\{ d(d-1) \dot{\rho} \dot{\zeta} + \zeta e^{-2 \rho} \langle (\partial \chi)^2 \rangle \right\}\,. \end{aligned}$$ Using this relation, we can rewrite the action which is valid up to the linear order as $$\begin{aligned} & S\_{ad}[\zeta\_\alpha] + \langle S\_{\chi}[\zeta\_\alpha,\, \chi\_\alpha] \rangle \cr & \simeq - (d-1) \int {{\rm d}}^{d+1} x \partial\_t \left( \dot{\rho} e^{d(\rho+ \zeta\_\alpha)} \right) + \int {{\rm d}}^{d+1} x e^{d(\rho+ \zeta\_\alpha)} \! \left\{ (d-1) \ddot{\rho} + \dot{\phi}^2 + \langle \dot{\chi}^2 \rangle + \frac{e^{-2\rho}}{d} \langle (\partial \chi)^2 \rangle \right\} \cr & \qquad - \int {{\rm d}}^{d+1} x e^{(d-2) \rho} \left\{ e^{(d-2)\zeta\_\alpha} - \left(1 - \frac{1}{d}\right) e^{d \zeta\_\alpha} + \zeta\_\alpha \right\} \langle (\partial \chi)^2 \rangle \,. \label{Exp:SzetaSeff2}\end{aligned}$$ The first term vanishes as a total derivative. The second term is proportional to $$\begin{aligned} & (d-1) \ddot{\rho} = - \dot{\phi}^2 - \langle \dot{\chi}^2 \rangle - \frac{1}{d} e^{-2\rho} \langle (\partial \chi)^2 \rangle\,,\end{aligned}$$ which can be verified by using the time derivative of the Friedman equation and the field equations for *ϕ* and *χ*, given by $$\begin{aligned} & \ddot{\phi} + d \dot{\rho} \dot{\phi}+ V\_{ph}'(\phi) + \langle \chi^2 \rangle M M\_\phi =0\,, \label{Eq:H} \\ & \ddot{\chi}\_k + d \dot{\rho} \dot{\chi}\_k + \left( M^2 + \frac{\lambda}{2} \langle \chi^2 \rangle + k^2 e^{- 2 \rho} \right) \chi\_k =0 \,. \label{Eq:dH} \end{aligned}$$ The tadpole terms contained in the last line cancel with each other and the term which does not depend on *ζ* is canceled between the action for  +  and the one for  − . In this way, using the background equations and also choosing the boundary condition for *N**i* as in Eq. ([BCNi]), we can show that the tadpole contributions all disappear. As we discussed in the previous subsection, the effective action $S\_{\rm eff}$ stays invariant under the replacement of *ζ**α*(*x*) with *ζ**α*, *s*(*x*). Therefore, the tadpole contribution for *ζ**s* should be given simply by replacing *ζ*(*x*) with *ζ**s*(*x*) in Eq. ([Exp:SzetaSeff]). When the background equations are satisfied and *N**i*, *s* is chosen to vanish at the spatial infinity (when *N**i* satisfies the boundary condition ([BCNi]), *N**i*, *s* also satisfies it), the terms in the second line of Eq. ([Expn:Seffdd]), which are linear in the metric perturbations, all vanish. ### Existence of constant solution Removing the tadpole contribution which vanishes with the use of the background equations, we only consider the quadratic terms about *ζ*. At the linear level, *ζ**α*, *s* simply gets the constant shift as $$\begin{aligned} & \zeta\_{\alpha,\, s} (x) \simeq \zeta\_\alpha(x) -s\,. \end{aligned}$$ Therefore, the symmetry under the change of *ζ**α* into *ζ**α*,  *s* immediately implies the existence of the constant solution also in the presence of the loop corrections of the heavy field. In single filed cases, it is well known that only the constant solution survives while the other independent solution simply decays in the late time limit, as far as the background evolution is on an attractor (see, e.g., Ref. ). This fact explains why the curvature perturbation becomes time independent at super Hubble scales. When we add a quantum correction from a heavy field, in principle, the “decaying” mode can turn into a growing mode. Such a drastic change of the behaviour of perturbation can occur, in case the trajectory sizably deviates from the attractor solution, for instance, owing to an effect of an additional field. In the present context, the classical background evolution is determined only by the inflaton and we assume that the quantum effects of the heavy field always remain to be perturbative. In such cases, the effect of the heavy field does not drive the “decaying” mode to grow in time. Then, the presence of the constant mode implies the conservation of the curvature perturbation in time as well as in the presence of the loop corrections of the heavy field. Renormalization and scaling symmetry ==================================== As is common in a non-linear quantum field theory, the effective action for *ζ* potentially diverges due to UV corrections. In our case, the bare coefficients of the effective action *W**δ**g**α*1⋯*δ**g**α**n*(*n*), which are expressed in terms of the correlators for *χ*, can diverge. The UV divergence should be renormalized by introducing counter terms. Depending on a way to introduce the counter terms, the scaling symmetry might be broken. If it were the case, the WT identity would not hold any more and the renormalized effective action does not preserve the scaling symmetry. When the counter terms are introduced in such a way that the scaling symmetry is preserved, the WT identity holds also after the UV renormalization. Then, inserting the WT identity into the effective action, which can be renormalized following the standard procedure since the theory (before the gauge fixing) is a local theory, and repeating the same argument as we did for the unrenormalized effective action, we can replace *ζ**α*(*x*) into *ζ**α*,  *s*(*x*) in the renormalized effective action. Since only the heavy field is quantized in computing the effective action, the curvature perturbation *ζ* should be dealt with as a classical external field. We may set the arbitrary constant parameter *s* to a *c*-number variable *g* *ζ̄* in order to express the effective action in terms of the fluctuation in the local region. Then, the effective action includes the non-local contribution *g* *ζ̄*. Nonetheless, the renormalization should proceed in the standard way, because the inserted non-local contribution, which is schematically in the following form: $$0= ({\rm WT~identity, which~identically~vanishes~and~is~local}) \times {{^g\!\bar{\zeta}}}^n \qquad (n=1,\,2)$$ is fictitious and does not introduce any non-local interactions. This aspect may be instructive to speculate on the UV renormalization of an IR regular quantity. Preserving the scaling symmetry is crucial to cancel out the potentially IR divergent contribution. In Refs. , a quantity which preserves the scaling symmetry was proposed and it contains non-local contributions. Because of that, in Refs. , it was suggested that the quantity which preserves the scaling symmetry may not be able to be renormalized in the standard way by introducing local counter terms. In this paper, we presented a handy example where the UV renormalization of the heavy field can be performed simply by introducing local counter terms as well as for a quantity which looks to include a non-local contribution. Here, we only considered the UV renormalization of the heavy field *χ*. It will be important to see if the UV renormalization of the curvature perturbation also can proceed by introducing local counter terms or not. We leave this issue for a future study. Concluding remarks ================== String theory predicts the presence of a bunch of massive excitations after reduction to four dimensional spacetime, which may encode, for instance, the information on the structure of the compactification of the extra dimensions. It is important to explore a possible imprint of such massive modes on the curvature perturbation. In this paper, we considered an influence of a heavy scalar field on the curvature perturbation *ζ* at the super Hubble scales. When the mass of the heavy field *χ* is of ${\cal O}(H)$, it can give non-local radiative corrections to the effective action of *ζ*, which may provide a distinctive imprint of the heavy field. We showed that the time evolution of *ζ* at the super Hubble scales is not affected by the loop corrections of the heavy field as far as the scaling symmetry, which is entailed in a covariant theory at the classical level, is preserved. The implies that the constant adiabatic mode exists as well as in the presence of the loop corrections of the heavy field. For simplicity, we considered one heavy field with the standard canonical kinetic term. However, our argument can be extended in a straightforward manner to a more general model which contains more than one heavy fields with a non-canonical kinetic term. Our result indicates that in order to leave an imprint of massive fields well after the Hubble crossing, we need to break either of the following conditions [2](#fn2): * The massive fields do not alter the background evolution at the classical level. * The quantum system preserves the scaling symmetry, which yields the Ward-Takahashi identity. * The radiative corrections of the massive fields on the curvature perturbation *ζ* are perturbatively suppressed. If the last condition does not hold, we need to perform a non-perturbative analysis to compute the radiative corrections of the massive fields. In this paper, using the WT identity ([WT]) for the scaling symmetry at ${\cal O}(s)$, we showed that the metric perturbation *δ**g*(*x*) in the effective action can be replaced with *δ**g**s*(*x*), given in Eqs. ([Exp:transzeta]), ([Exp:transN]), and ([Exp:transNi]), keeping up to the quadratic terms. This argument can be extended to higher orders in *δ**g*. Using the WT identity ([WT]), we can derive the WT identity which relates *W*(*n*) to *W*(*n*ʹ)s with *n*ʹ < *n*. Adding the WT identity for *W*(*m*) with *m* ≤ *n* (multiplied by some particular constant factors) to the effective action $S'\_{\rm eff}$, we can replace all *δ**g*(*x*) with *δ**g**s*(*x*) in $S'\_{\rm eff}$ up to the *n*-th order of perturbation. After removing the lapse function and the shift vector, we find that the effective action for the curvature perturbation is invariant under the replacement of *ζ*(*x*) with $$\begin{aligned} & \zeta\_s(x)= \zeta(x) - s - s {\hbox{\boldmath{$x$}}} \cdot \partial\_{{\hbox{\boldmath{\scriptsize$x$}}}} \zeta(x) + \frac{s^2}{2} ({\hbox{\boldmath{$x$}}} \cdot \partial\_{{\hbox{\boldmath{\scriptsize$x$}}}})^2 \zeta(x) + {\cal O}(s^3)\,. \label{Sol:zetas}\end{aligned}$$ This implies that the curvature perturbation includes a solution which is given by the *s*-dependent terms in Eq. ([Sol:zetas]), whose first term is the constant adiabatic mode. In order to keep the terms which explicitly depend on ${\hbox{\boldmath{$x$}}}$ perturbatively small, we need to confine the perturbation within a finite spatial region on each time slicing. For that, we will need to use other residual *gauge* degrees of freedom, which are addressed in Refs. . In this paper, we studied a spin 0 scalar field as the heavy field. It will be interesting to extend the discussion to include a field with a more general spin . Our discussion does not rely on the explicit form of the interaction vertices nor the propagator. Therefore, we expect that this extension will be feasible. We leave this study for a future project . Y. U. would like to thank N. Arkani-Hamed, M. Mirbabayi and M. Simonovi$\acute{\rm c}$ for interesting discussions about their works, which are relevant to this work. We are grateful to Y. Misonoh and S. Saga for their participations to the early state of this work. This work is supported by Grant-in-Aid for Scientific Research (B) No. 26287044. T. T. was also supported in part by the Ministry of Education, Culture, Sports, Science and Technology (MEXT) Grant-in-Aid for Scientific Research on Innovative Areas, “New Developments in Astrophysics Through Multi-Messenger Observations of Gravitational Wave Sources”, Nos. 24103001 and 24103006, and by Grant-in-Aid for Scientific Research (A) No. 15H02087. Y. U. is supported by JSPS Grant-in-Aid for Research Activity Start-up under Contract No. 26887018 and the National Science Foundation under Grant No. NSF PHY11-25915. Y. U. is partially supported by MEC FPA2010-20807-C02-02 and AGAUR 2009-SGR-168. 99 P. A. R. Ade *et al.* [Planck Collaboration], arXiv:1502.02114 [astro-ph.CO]. P. A. R. Ade *et al.* [BICEP2 and Planck Collaborations], Phys. Rev. Lett.  **114** (2015) 101301 [arXiv:1502.00612 [astro-ph.CO]]. A. J. Tolley and M. Wyman, Phys. Rev. D **81**, 043502 (2010) [arXiv:0910.1853 [hep-th]]. M. G. Jackson and K. Schalm, Phys. Rev. Lett.  **108**, 111301 (2012) [arXiv:1007.0185 [hep-th]]. X. Chen and Y. Wang, Phys. Rev. D **81**, 063511 (2010) [arXiv:0909.0496 [astro-ph.CO]]. D. Green, M. Lewandowski, L. Senatore, E. Silverstein and M. Zaldarriaga, JHEP **1310**, 171 (2013) [arXiv:1301.2630 [hep-th]]. N. Arkani-Hamed and J. Maldacena, arXiv:1503.08043 [hep-th]. T. Noumi, M. Yamaguchi and D. Yokoyama, JHEP **1306**, 051 (2013) [arXiv:1211.1624 [hep-th]]. M. Mirbabayi and M. Simonovi$\acute{\rm c}$, arXiv:1507.04755 [hep-th]. A. Achucarro, J. O. Gong, S. Hardeman, G. A. Palma and S. P. Patil, Phys. Rev. D **84**, 043502 (2011) [arXiv:1005.3848 [hep-th]]. R. Saito, M. Nakashima, Y. i. Takamizu and J. Yokoyama, JCAP **1211**, 036 (2012) [arXiv:1206.2164 [astro-ph.CO]]. R. Saito and Y. i. Takamizu, JCAP **1306**, 031 (2013) [arXiv:1303.3839, arXiv:1303.3839 [astro-ph.CO]]. S. Renaux-Petel and K. Turzyński, arXiv:1510.01281 [astro-ph.CO]. S. Weinberg, Phys. Rev. D **67**, 123504 (2003) [astro-ph/0302326]. D. Wands, K. A. Malik, D. H. Lyth and A. R. Liddle, Phys. Rev. D **62**, 043527 (2000) [astro-ph/0003278]. K. A. Malik and D. Wands, Class. Quant. Grav.  **21**, L65 (2004) [astro-ph/0307055]. D. H. Lyth, K. A. Malik and M. Sasaki, JCAP **0505**, 004 (2005) [astro-ph/0411220]. D. Langlois and F. Vernizzi, Phys. Rev. D **72**, 103501 (2005) [astro-ph/0509078]. A. Naruko and M. Sasaki, Class. Quant. Grav.  **28**, 072001 (2011) [arXiv:1101.3180 [astro-ph.CO]]. A. A. Starobinsky, “Multicomponent de Sitter (Inflationary) Stages and the Generation of Perturbations,” JETP Lett.  **42**, 152 (1985) [Pisma Zh. Eksp. Teor. Fiz.  **42**, 124 (1985)]. D. S. Salopek and J. R. Bond, “Nonlinear evolution of long wavelength metric fluctuations in inflationary models,” Phys. Rev. D **42**, 3936 (1990). M. Sasaki and E. D. Stewart, “A General analytic formula for the spectral index of the density perturbations produced during inflation,” Prog. Theor. Phys.  **95**, 71 (1996) [astro-ph/9507001]. M. Sasaki and T. Tanaka, “Superhorizon scale dynamics of multiscalar inflation,” Prog. Theor. Phys.  **99**, 763 (1998) [gr-qc/9801017]. L. Senatore and M. Zaldarriaga, JHEP **1309**, 148 (2013) [arXiv:1210.6048 [hep-th]]. V. Assassi, D. Baumann and D. Green, JHEP **1302**, 151 (2013) [arXiv:1210.7792 [hep-th]]. Y. Urakawa and T. Tanaka, Phys. Rev.  **D82**, 121301 (2010). [arXiv:1007.0468 [hep-th]]. Y. Urakawa and T. Tanaka, Prog. Theor. Phys.  **125**, 1067 (2011) [arXiv:1009.2947 [hep-th]]. C. T. Byrnes, M. Gerstenlauer, A. Hebecker, S. Nurmi and G. Tasinato, JCAP **1008**, 006 (2010) [arXiv:1005.3307 [hep-th]]. M. Gerstenlauer, A. Hebecker and G. Tasinato, JCAP **1106**, 021 (2011) [arXiv:1102.0560 [astro-ph.CO]]. S. B. Giddings and M. S. Sloth, JCAP **1101**, 023 (2011) [arXiv:1005.1056 [hep-th]]. S. B. Giddings and M. S. Sloth, Phys. Rev. D **84**, 063528 (2011) [arXiv:1104.0002 [hep-th]]. L. Senatore and M. Zaldarriaga, JHEP **1301**, 109 (2013) [arXiv:1203.6354 [hep-th]]. G. L. Pimentel, L. Senatore and M. Zaldarriaga, JHEP **1207**, 166 (2012) [arXiv:1203.6651 [hep-th]]. Y. Urakawa and T. Tanaka, Prog. Theor. Phys.  **122**, 779 (2009) [arXiv:0902.3209 [hep-th]]. T. Tanaka and Y. Urakawa, PTEP **2013**, no. 8, 083E01 (2013) [arXiv:1209.1914 [hep-th]]. T. Tanaka and Y. Urakawa, PTEP **2013**, no. 6, 063E02 (2013) [arXiv:1301.3088 [hep-th]]. T. Tanaka and Y. Urakawa, Class. Quant. Grav.  **30**, 233001 (2013) [arXiv:1306.4461 [hep-th]]. T. Tanaka and Y. Urakawa, PTEP **2014**, no. 7, 073E01 (2014) [arXiv:1402.2076 [hep-th]]. J. M. Maldacena, JHEP **0305**, 013 (2003) [astro-ph/0210603]. P. Creminelli and M. Zaldarriaga, JCAP **0410**, 006 (2004) [astro-ph/0407059]. K. Hinterbichler, L. Hui and J. Khoury, JCAP **1401**, 039 (2014) [arXiv:1304.5527 [hep-th]]. M. H. Namjoo, H. Firouzjahi and M. Sasaki, Europhys. Lett.  **101**, 39001 (2013) [arXiv:1210.3692 [astro-ph.CO]]. X. Chen, H. Firouzjahi, M. H. Namjoo and M. Sasaki, Europhys. Lett.  **102**, 59001 (2013) [arXiv:1301.5699 [hep-th]]. R. Flauger, D. Green and R. A. Porto, JCAP **1308**, 032 (2013) [arXiv:1303.1430 [hep-th]]. J. Ganc and E. Komatsu, JCAP **1012**, 009 (2010) [arXiv:1006.5457 [astro-ph.CO]]. S. Renaux-Petel, JCAP **1010**, 020 (2010) [arXiv:1008.0260 [astro-ph.CO]]. P. Creminelli, G. D’Amico, M. Musso and J. Norena, JCAP **1111**, 038 (2011) [arXiv:1106.1462 [astro-ph.CO]]. P. Creminelli, J. Norena and M. Simonovic, JCAP **1207**, 052 (2012) [arXiv:1203.4595 [hep-th]]. V. Assassi, D. Baumann and D. Green, JCAP **1211**, 047 (2012) [arXiv:1204.4207 [hep-th]]. L. Senatore and M. Zaldarriaga, JCAP **1208**, 001 (2012) [arXiv:1203.6884 [astro-ph.CO]]. A. Joyce, J. Khoury and M. Simonovic, JCAP **1501**, no. 01, 012 (2015) [arXiv:1409.6318 [hep-th]]. L. Berezhiani and J. Khoury, JCAP **1402**, 003 (2014) [arXiv:1309.4461 [hep-th]]. G. L. Pimentel, JHEP **1402**, 124 (2014) [arXiv:1309.1793 [hep-th]]. A. Ghosh, N. Kundu, S. Raju and S. P. Trivedi, JHEP **1407**, 011 (2014) [arXiv:1401.1426 [hep-th]]. N. Kundu, A. Shukla and S. P. Trivedi, arXiv:1507.06017 [hep-th]. I. Mata, S. Raju and S. Trivedi, JHEP **1307**, 015 (2013) [arXiv:1211.5482 [hep-th]]. J. Garriga and Y. Urakawa, JCAP **1307**, 033 (2013) [arXiv:1303.5997 [hep-th]]. J. Garriga and Y. Urakawa, JHEP **1406**, 086 (2014) [arXiv:1403.5497 [hep-th]]. J. Garriga, K. Skenderis and Y. Urakawa, JCAP **1501**, no. 01, 028 (2015) [arXiv:1410.3290 [hep-th]]. K. Schalm, G. Shiu and T. van der Aalst, JCAP **1303**, 005 (2013) [arXiv:1211.2157 [hep-th]]. A. Bzowski, P. McFadden and K. Skenderis, JHEP **1304**, 047 (2013) [arXiv:1211.4550 [hep-th]]. P. McFadden, JHEP **1502**, 053 (2015) [arXiv:1412.1874 [hep-th]]. T. Tanaka and Y. Urakawa, JCAP **1105**, 014 (2011). [arXiv:1103.1251 [astro-ph.CO]]. P. Creminelli, C. Pitrou and F. Vernizzi, JCAP **1111**, 025 (2011) [arXiv:1109.1822 [astro-ph.CO]]. E. Pajer, F. Schmidt and M. Zaldarriaga, arXiv:1305.0824 [astro-ph.CO]. D. Seery and J. E. Lidsey, JCAP **0506**, 003 (2005) [astro-ph/0503692]. N. C. Tsamis and R. P. Woodard, Annals Phys.  **215**, 96 (1992). S. P. Miao and R. P. Woodard, JCAP **1207**, 008 (2012) [arXiv:1204.1784 [astro-ph.CO]]. X. Gao, JCAP **1110**, 021 (2011) [arXiv:1106.0292 [astro-ph.CO]]. R. P. Feynman and F. L. Vernon, Jr., Annals Phys.  **24**, 118 (1963) [Annals Phys.  **281**, 547 (2000)]. C. H. Wu, K. W. Ng, W. Lee, D. S. Lee and Y. Y. Charng, JCAP **0702**, 006 (2007) [astro-ph/0604292]. N. Katirci, A. Kaya and M. Tarman, JCAP **1406**, 022 (2014) [arXiv:1402.3316 [hep-th]]. R. P. Feynman and A. R. Hibbs, Quantum Mechanics and Path Integrals (McGraw-Hill, New York, 1965) A. O. Caldeira and A. J. Leggett, Physica **121A**, 587 (1983). S. Weinberg, Phys. Rev.  **140**, B516 (1965). A. Strominger, JHEP **1407**, 152 (2014) [arXiv:1312.2229 [hep-th]]. T. He, V. Lysov, P. Mitra and A. Strominger, JHEP **1505**, 151 (2015) [arXiv:1401.7026 [hep-th]]. A. Strominger and A. Zhiboedov, arXiv:1411.5745 [hep-th]. J. Garriga, Y. Urakawa and F. Vernizzi, arXiv:1509.07339 [hep-th]. S. Endlich, A. Nicolis and J. Wang, JCAP **1310**, 011 (2013) [arXiv:1210.0569 [hep-th]]. A. Gruzinov, Phys. Rev. D **70**, 063518 (2004) [astro-ph/0404548]. Y. Urakawa *et al.*,  in preparation --- 1. The conservation of *ζ* in such a setup also can be understood as a direct consequence of the *δ**N* formalism .[↩](#fnref1) 2. Here, we also assume that the (spatially averaged) background universe is the FRW universe. This excludes, say, the solid inflation case, where the anisotropic pressure does not vanish in the large scale limit . (See also Ref. .)[↩](#fnref2)
arxiv_0000360
Unconventional Fusion and Braiding of Topological Defects in a Lattice Model ============================================================================ We examine non-abelian topological defects in an abelian lattice model in two dimensions. We first construct an exact solvable lattice model that exhibits coexisting and intertwined topological and classical order. The anyon types of quasiparticle excitations are permuted by lattice symmetry operations like translations, rotations, and reflections. The global anyon permutation symmetry has a group structure of *S*3, the permutation group of three elements. Topological crystalline defects – dislocations and disclinations – change the anyon type of an orbiting quasiparticle. They exhibits multichannel order dependent fusion rules and projective braiding operations. Their braiding and exchange statistics breaks modular invariance and violates conventional spin-statistics theorem. We develop a framework to characterize these unconventional properties that originate from the semiclassical nature of defects. Introduction ============ The search for Majorana fermions  has attracted tremendous attention in recent years  due to their potential application in topological quantum computation . With the discovery of topological insulators , the quest has shifted from *p*-wave superconductors  and quantum Hall states  to superconductor-ferromagnet (SC-FM) heterostructures with quantum spin Hall insulators  and strong spin-orbit coupled semiconductors . The more exotic fractional Majorana fermions that carry richer fusion and braiding characteristics are predicted at the SC-FM edge  of fractional topological insulators  and helical 1D Luttinger liquids . Fractional Majorana fermions can be conceptually studied by *twist defects* in exact solvable lattice models. These include Ising-type dislocations in the Kitaev toric code , fractional Majorana-type dislocations in the Wen plaquette Z*k*-rotor model , and colored Majorana defects in a string net model . Twist defects even appear at dislocation line defects in 3D topological phases . Similar non-abelian defects can be constructed as dislocations in abelian *topological nematic states*  such as a multiple Chern band with symmetry , described by *genons* in effective field theory  and classified by Wilson structures of non-chiral gapped edges . The underlying topological state of all systems mentioned above carries an ungauged symmetry, e.g. charge-flux duality in the toric code  and Wen plaquette model , color permutation symmetry in the color code , and bilayer symmetry in a topological nematic state . Symmetries also appear in many strongly correlated systems such as electronic liquid-crystal phases of a doped Mott insulator  and spin liquid . They intertwine with topology and offer a finer classification of topological phases . Even if the symmetry is not broken spontaneously by a Landau order parameter, it may still be *weakly broken* in the Kitaev sense  by anyon labeling. Extrinsic twist defects further break the symmetry locally by winding the anyon labels. In this article, we demonstrate, using twist defects in an exact solvable lattice model, some fundamental distinctions in fusion and braiding  that separate semiclassical symmetry defects from quantum deconfined anyons  in true topological phases  such as the Kitaev honeycomb Ising phase  or quasi-topological phases  such as the physical Pfaffian quantum Hall state . Topological defects involve windings of certain non-dynamical extensive order parameters  such as pairing phase, Dirac mass, spin polarization, etc. The semiclassical order parameter forbids quantum superposition of different defect configurations. Although defect excitations in SC-FM heterostructures have exponentially localized wavefunctions and twist defects in integrable models only violate symmetry locally at a point, they should be treated as quasi-extensive objects because of the bulk order parameter associated with each of them. This extensiveness provides the means to circumvent locality restrictions  and gives rise to non-abelian statistics in (3 + 1)-dimensions . In this article, we address three major consequences of the quasi-extensive nature of twist defects in an exact solvable topologically ordered abelian system with symmetry. 1. **Non-commutative fusion** Contrary to a non-abelian discrete gauge theory  where fluxons are labeled by conjugacy classes, twist defects are instead labeled by symmetry group elements and the fusion of defects depends on their order. 2. **Modified spin statistics** Topological spin for a defect *λ* can only be robustly defined through a 2*π* × ord(*λ*) rotation, where ord(*λ*) is the order of the defect, so that the initial and rotated systems are classically indistinguishable. The exchange phase is then identical to this new definition of spin. 3. **Modified modular invariance** Unlike modular functors in conformal field theory , fractional quantum Hall states  or topological quantum field theories , defect exchange and braiding does not obey the full modular group *S**L*(2; Z), but is restricted to a congruent subgroup. Outline and Summary ------------------- We begin in section [sec:honeycombmodel] by presenting an abelian Z*k* rotor lattice model on a honeycomb lattice or in general bipartite trivalent planar graph. Its Z2 version has been studied by Bombin and Martin-Delgado  and is called the color code. The model possesses a hidden non-abelian symmetry $S\_3=\mathbb{Z}\_2\ltimes\mathbb{Z}\_3$, the permutation of group on three elements. There is a *k*4 ground state degeneracy on a torus. The *k*4 abelian anyon excitations can be labelled and bipartitioned into ${\bf a}=({\bf a}\_\bullet,{\bf a}\_\circ)$, where each spinless component ${\bf a}\_{\bullet/\circ}$ lives on a two dimensional triangular Z*k*-lattice (see figure [fig:abeliananyonlattice]). Threefold *cyclic color permutations* Λ3, Λ3− 1 in *S*3 act as rotations on the anyon lattice, leaving the bipartite  • ,  ∘ -label untouched. Twofold *color sublattice transpositions* Λ*Y*, Λ*R*, Λ*B* in *S*3 interchange  •  ↔  ∘  and act as three mirror planes on the anyon lattice. This model has fusion and braiding properties identical to two copies of the Z*k* toric code. However the symmetry group is non-trivially extended from charge-flux Z2 duality to *S*3 by a threefold cyclic color permutation. We construct non-abelian twist defects in section [sec:twistdefect]. A defect is classified by a group element Λ in *S*3 that characterizes the change of the label ${\bf a}\to\Lambda\cdot{\bf a}$ of an encirling abelian anyon (see figure [fig:anyontwist]). There are two threefold defects [1/3] and its anti-particle $[\overline{1/3}]$, and three twofold defects [1/2]*χ* labeled by color *χ* = *Y*, *R*, *B*. Primitive threefold defects are constructed in the lattice level by  ± 120∘ disclinations at tetravalent/bivalent vertex, and twofold defects are associated with  ± 60∘ disclinations at pentagons and heptagons (see figure [fig:twistdefects]). Quantum dimensions can be deduced either by counting plaquette and vertex degrees of freedom (section [sec:latticedefecthamiltonian]) or evaluated by ground state degeneracy corresponding to the non-local Wilson loop algebra (section [sec:nonlocalWilsonalgebra]). They are given by $$\begin{aligned} d\_{[1/3]}=d\_{[\overline{1/3}]}=\left\{\begin{array}{\*{20}c}k^2,&\mbox{if $3\centernot\mid k$}\\k^2/3,&\mbox{if $3\mid k$}\end{array}\right.,\quad d\_{[1/2]}=k\end{aligned}$$ Note that the dimension for the twofold defect matches that of two copies of Z*k*-fractional Majorana fermion. A defect contains a phase parameter that determines the value of a local Wilson observable (see figure [fig:defectlocalloop]). It subdivides twofold defects into *k*2 species ${\bf l}$ and threefold defects into 9 species ${\bf s}$ when *k* is divisible by 3. Species labels can mutate by absorbing or releasing abelian anyons, a process driven by continuously tuning the local defect phase parameter. This novel species characterization of defect-anyon composites is essential in a complete description of fusion and braiding. The Wilson loop algebra of an arbitrary multi-defect system is studied using word presentation consisting of open Wilson paths in section [sec:AlphabeticpresentationofWilsonalgebra], where the *S*3-transformation of defects and symmetry structure of the Wilson algebra are discussed. In section [sec:defectfusion], we investigate the non-commutative defect fusion category. The objects consist of abelian anyons and twist defects labeled with species. Due to the semi-classical nature of the non-dynamical *S*3-symmetry, defects are not grouped into conjugacy classes of *S*3-fluxes and anyons are not projected into *S*3-orbifold superselection sectors. Loosely speaking the fusion rules originate from the non-abelian group structure of *S*3 and take the following multi-channel form $$\begin{aligned} {\sum\_{\bf a}}'[{\bf a}]&\simeq[1/2]\_\chi\times[1/2]\_\chi\nonumber\\&\simeq[1/3]\times[1/3]\times[1/3]\nonumber\\&\simeq[1/2]\_\chi\times[1/2]\_{\chi+1}\times[\overline{1/3}]\nonumber\\&\simeq[1/2]\_\chi\times[1/2]\_{\chi-1}\times[1/3]\label{fusionrulesummary}\end{aligned}$$ where *χ* = 0, 1, 2 mod 3 represents the three colors *Y*, *R*, *B*, the equation is unaffected by cyclic permutation of defect order on the right, and the sum of abelian anyons ${\bf a}$ on the left is restricted by defect species so that the particle-antiparticle duality requires a change of species label $[1/2]\_{\chi,{\bf l}\_1}\times[{\bf a}]=[1/2]\_{\chi,{\bf l}\_2}$ when absorbing or releasing an abelian anyon in general. The second equality requires fusion degeneracy $[1/3]\times[1/3]=d\_{[1/3]}[\overline{1/3}]$, and the third and forth equalities exhibit fusion non-commutativity. A more precise set of fusion rules can be found in section [sec:fusionrules]. The choice of a particular set of splitting states (or Wilson string configurations) shown in figure [fig:splittingspaces] in section [sec:splittingspaces] fixes the gauge for a consistent set of basis transformations between different maximally commuting sets of observables, called *F*-symbols, that characterizes the fusion category. A few calculated examples are illustrated in section [sec:Fmoves] and a complete list of *F*-matrices (up to *S*3-symmetry) is included in table [tab:Fsymbols] in appendix [sec:Fsymbols]. We describe exchange and braiding between defects of the same type in section [sec:defectexchangebraiding]. Although the non-commutative fusion category cannot be fully braided, a subset of 180∘ rotation operations or *R*-symbols between commuting objects can be defined and are evaluated in section [sec:statisticsZ3] and [sec:statisticsZ2] by rotating Wilson strings. While threefold defects do not carry spin, twofold defects have non-trivial species dependent statistics as shown in section [sec:statisticsZ2] and they are identified with the spin phase of 2 × (360∘) rotation. The braiding *S*-matrices are defined among defects of the same *S*3-type with entries labeled by species. The *S* and *T* matrices for twofold defects are identified in section [sec:defectmodulartransformation] with Dehn twist *t**y* and double Dehn twist *t**x*2 respectively on the ground states on a torus decorated with a branch cut along the *y*-cycle. In general, they form a unitary group structure of the congruent subgroup Γ0(2) rather than the full modular group *S**L*(2; Z). Physical unitary braiding operations or *B*-matrices between defects of the same type are computed in section [sec:Bmatrices] and they demonstrate the non-abelian nature of twist defects. A certain compactification braiding identity of the sphere braid group that is expected to hold in a closed anyon system is now only *projectively* satisfied for defects. The rotor lattice model ======================= We consider a Z*k* rotor model (or spin-1/2 model for *k* = 2) on vertices of a honeycomb lattice, or in general a bipartite trivalent planar graph with the following properties. 1. **Tricoloring** Each plaquette may be colored with one of three colors, say yellow (Y), red (R) and blue (B), such that adjacent plaquettes never carry the same color. 2. **Bipartite** There is a sublattice structure so that vertices can be labeled by black ( • ) or white ( ∘ ) and adjacent vertices are always of differing types. In addition, in the absence of twist defects and branch cuts which will be explained later, we require the graph to satisfy the above constraints globally. On a torus, they will be globally preserved with compatible periodic boundary conditions, when the length of the two primitive cycles are multiples of three. The model can be put on a closed surface with arbitrary genus by adding an appropriate number of defect squares or octagons (figure [fig:squareoctagon]) on the regular honeycomb lattice. These are trivial *twistless* defects in the sense that they do not violate the tricoloring and bipartite order. [fig:squareoctagon] The Hamiltonian --------------- The degrees of freedom (the “spins” for *k* = 2) of the Z*k*-lattice model live on vertices. Z*k* rotors are operators *σ* and *τ* that take eigenvalues in {1, *e*2*π**i*/*k*, …, *e*2*π**i*(*k* − 1)/*k*} and commute up to a Z*k* phase *τ**σ* = *w**σ**τ*,  *σ**k* = *τ**k* = 1 where *w* is a *k**t**h* root of unity. We will assume *w* = *e*2*π**i*/*k* hereafter. They can be represented by *k*-dimensional matrices $$\tau=\left(\begin{array}{\*{20}c}0&1&\cdots&0\\\vdots&\vdots&\ddots&\vdots\\0&0&\cdots&1\\1&0&\cdots&0\end{array}\right),\quad\sigma=\left(\begin{array}{\*{20}c}1&0&\cdots&0\\0&w&\cdots&0\\\vdots&\vdots&\ddots&\vdots\\0&0&\cdots&w^{k-1}\end{array}\right)\label{sigmataurep}$$ Each vertex carries a *k*-dimensional Hilbert space and a set of rotors. The total Hilbert space is a tensor product over vertices, and rotors at different vertices commute. [fig:stabilzers] Given a bipartite  • ,  ∘  assignment of vertices, each plaquette *P* carries two stablilizer operators *P̂*• = ∏*v*• ∈ *P**σ**v*•∏*v*∘ ∈ *P**τ**v*∘,  *P̂*∘ = ∏*v*• ∈ *P**τ**v*•∏*v*∘ ∈ *P**σ**v*∘ where *σ**v*, *τ**v* are rotors at vertices *v* around the plaquette and tensor products have been suppressed. The bipartite structure ensures all plaquettes have the same number of  •  and  ∘  vertices and two neighboring plaquettes share exactly one  •  and one  ∘  vertex. Then ensures mutual commutativity of the plaquettes operators, which form a set of good quantum numbers referred to as Z*k* stabilizers or fluxes. The Hamiltonian is defined by the sum of stablilizers, *H* =  − *J*∑*P*(*P̂*• + *P̂*•†) − *J*∑*P*(*P̂*∘ + *P̂*∘†) Ground states are trivial flux configurations where *P̂*• = *P̂*∘ = 1 for all stabilizers. It is indicative to notice that the model describes the topological phase of two copies of Z*k* version of Kitaev toric code. However, Hamiltonian realizes a much richer *S*3 *geometric symmetry* that extends the original charge-flux Z2 duality in the quantum double model. The *S*3 symmetry can be understood in the microscopic geometric level by the action of space group on the tricoloring and bipartite pattern of the honeycomb lattice. These correspond the threefold and twofold generators of the permutation group *S*3 and will be discussed in detail in section [sec:compositelatticedefects]. This generalizes of the *e*-*m* duality in the Kitaev’s toric code that involves geometrically switching vertices and plaquettes. The ground state degeneracy of the Hamiltonian on a trivalent graph on any closed orientable genus *g* surface can be evaluated by counting vertices and independent plaquette stabilizers. Denote the total number of vertices by #*V* and the number of plaquettes by #*P*. For a regular honeycomb lattice on a torus, #*V* = 2 × #*P*, which is commensurate with the two stabilizer operators eq. located at each plaquette. However there is an overcounting since certain products of stabilizers are identical to the identity. The number of these relations is independent of system size and therefore gives rise to a *topological* ground state degeneracy (*k*4 on the torus if the honeycomb is globally tricolorable). As mentioned earlier, the model can be put on any closed orientable surface without violating *local* tricolorability and the  • ,  ∘ -sublattice structure, by adding square or octagon defects as showen in figure [fig:squareoctagon]. The Gauss-Bonnet theorem (or Euler characteristic) requires #octagons − #squares = 6(*g* − 1). Since an octagon/square carries two greater/fewer vertices than a hexagon, #*V* = 2 × #*P* + 4(*g* − 1). on a genus *g* surface. We will first investigate this overcounting in the case when the trivalent graph is *globally* tricolorable. This is an additional topological constraint that ensures that the tricoloration remains unchanged around a non-trivial cycle. In particular, the total number of plaquettes must be a multiple of three and appropriate twisted periodic boundary conditions must be applied if individual cycles have lengths not divisible by three. Given a global *Y**R**B*-coloration of the plaquettes *P**Y*, *P**R* and *P**B*, stabilizer operators are overcounted by the following four cocycle relations, $$\begin{aligned} 1&=\prod\_{P^Y}\hat{P}^Y\_\bullet\left(\prod\_{P^R}\hat{P}\_\bullet^R\right)^\dagger=\prod\_{P^R}\hat{P}^R\_\bullet\left(\prod\_{P^B}\hat{P}\_\bullet^B\right)^\dagger\nonumber\\&=\prod\_{P^Y}\hat{P}^Y\_\circ\left(\prod\_{P^R}\hat{P}\_\circ^R\right)^\dagger=\prod\_{P^R}\hat{P}^R\_\circ\left(\prod\_{P^B}\hat{P}\_\circ^B\right)^\dagger\label{stabilizerovercounting}\end{aligned}$$ These can be understood by observing that the product of all vertex rotors can be given by product of plaquettes of a particular color, i.e. ∏*v*•*σ**v*•∏*v*∘*τ**v*∘ = ∏*P**Y**P̂*•*Y* = ∏*P**R**P̂*•*R* = ∏*P**B**P̂*•*B* and similarly for the other set, *P̂*∘. Thus the number of indepedent stabilizers is, #stabilizers = 2 × #*P* − 4 and the ground state degeneracy (G.S.D.) of a globally tricolorable graph on a genus *g* closed surface is *G*.*S*.*D*. = *k*#*V* − #\small stabilizers = *k*4*g* Wilson algebra and ground state degeneracy ------------------------------------------ [fig:trivialloop] [fig:Wilsonstring] Ground states can be written down as an equally weighted sum of plaquette operators acting on a suitably chosen state. One example is, $$|0\rangle\_\bullet=\frac{1}{\sqrt{\mathcal{N}}}\prod\_{P}\left(\sum\_{r=0}^{k-1}\hat{P}\_\circ^r\right)|\sigma\_{v\_\bullet}=\tau\_{v\_\circ}=+1\rangle\label{GS0}$$ where ∣*σ**v*• = *τ**v*∘ =  + 1⟩ is the tensor product eigenstate state of *σ* for each  • -vertex and *τ* for  ∘ -vertex, and N = *k*#*P* − 2 is a normalization factor. It is a simultaneous eigenstate for all plaquette operators *P̂*•∣0⟩ = *P̂*∘∣0⟩ = ∣0⟩. The ground state can be interpreted as a condensate of of trivial *Wilson loops*. A Wilson loop is a string of rotors that commutes with each stabilizer, and trivial if it can be written as a product of stabilizers. They are labeled by the  • ,  ∘ -sublattice type and colors *Y**R**B*, so that a Wilson loop along the boundary of a domain Ω is of the form $$\begin{aligned} W(\partial\Omega)^\chi\_{\bullet}&=\prod\_{P\subseteq\Omega}\hat{P}^{\chi+1}\_{\bullet}(\hat{P}^{\chi-1}\_{\bullet})^{-1}\\W(\partial\Omega)^\chi\_{\circ}&=\prod\_{P\subseteq\Omega}\hat{P}^{\chi-1}\_{\circ}(\hat{P}^{\chi+1}\_{\circ})^{-1}\end{aligned}$$ for *χ* ∈ {0, 1, 2} labels the colors {*Y*, *R*, *B*} mod 3, and the product is taken over either  • - or  ∘ -plaquettes of two complementing colors *χ* ± 1 inside the open domain Ω. Operators in the interior of Ω cancel each other and leave a string of rotors along the boundary ∂Ω connecting a strand of plaquettes of the same color *χ*. This is illustrated by figure [fig:trivialloop] for *χ* = *B*. [fig:colorspliting] There is an important redundancy in the color labels. A blue string can split into a yellow and red pair, both propagating in the opposite direction. This can be done by multiplying plaquettes on the string, and results in a pair of tricolor, trivalent, sources and drains as shown in figure [fig:colorspliting]. Similarly, a parallel triplet of co-propagating Wilson strings of three different colors can be locally cancelled by plaquette operators. The color redundancy can be summarized by the fusion *Y*• × *R*• × *B*• = *Y*∘ × *R*∘ × *B*∘ = 1 for 1 being the vacuum. In general a closed Wilson loop is built by joining colored, directed paths emanating from sources and drains. Each colored path is a string of *σ*± 1 and *τ*± 1 rotors that connects plaquettes of that color as illustrated in figure [fig:Wilsonstring]. *W*•*χ* = ∏*v*•*σ**v*•± 1∏*v*∘*τ**v*∘± 1,  *W*∘*χ* = ∏*v*•*τ**v*•± 1∏*v*∘*σ**v*∘± 1 The signs of the rotors are dictated by the arrow directions in figure [fig:trivialloop](b) and [fig:Wilsonstring]. For instance, rotors along a string switch signs about a  ± 60∘ or 180∘ corner as shown in figure [fig:trivialloop]. It is straightforward to check that closed Wilson strings commute with all stabilizers through phase cancellation. Evidently, any product of plaquette operators is a closed Wilson string, which can be pulled out of the condensate ground state without an energy cost. However, the converse is not true — a closed Wilson string is not necessarily the product of plaquettes. There are non-trivial cycles on the genus *g* ≥ 1 surface that do not bound an open domain. [fig:genusgcycles] [fig:intersection] On a genus *g* surface, there are 2*g* non-trivial primitive cycles C*i*, where C2*l* − 1, C2*l* are the two cycles associated with the *l**t**h* handle (see figure [fig:genusgcycles]). Each one can be labeled by sublattice type  • ,  ∘  and color *Y**R**B*. Together, the Wilson loops form a non-commutative algebra with the algebraic relations $$\begin{aligned} \left(W(\mathcal{C}\_i)^{\chi}\_{\bullet/\circ}\right)^k&=1\label{wilsoncomm1}\\\left[W(\mathcal{C}\_{i})^{\chi\_1}\_\bullet,W(\mathcal{C}\_{j})^{\chi\_2}\_\bullet\right]&=\left[W(\mathcal{C}\_{j})^{\chi\_1}\_\circ,W(\mathcal{C}\_{j})^{\chi\_2}\_\circ\right]=0\label{wilsoncomm2}\\ W(\mathcal{C}\_i)^{\chi\_1}\_\bullet W(\mathcal{C}\_{j})^{\chi\_2}\_\circ&=e^{i\frac{2\pi}{k}\langle\mathcal{C}\_i^{\chi\_1},\mathcal{C}\_j^{\chi\_2}\rangle}W(\mathcal{C}\_{j})^{\chi\_2}\_\circ W(\mathcal{C}\_{i})^{\chi\_1}\_\bullet\label{wilsoncomm3}\end{aligned}$$ where the *χ*’s run over the colors *Y**R**B* and the pairing ⟨C*i**χ*1, C*j**χ*2⟩ is determined by intersection number between Wilson strings, summarized in figure [fig:intersection]. The intersection form ⟨ \* ,  \* ⟩ is bilinear and symmetric. It counts the total Z*k*-phase accumulated by interchanging rotor operators *σ*, *τ* at overlapping vertices according to eq.. Intersection is invariant under cyclic permutation of colors, and changes sign if the direction of one of the Wilson string is reversed. The color fusion rule eq. forces null intersection between Wilson cycles of the same color. Since primitive cycles only intersect when they correspond to the same handle, the intersection ⟨C*i**χ*1, C*j**χ*2⟩ is zero unless *i* = 2*l* − 1, *j* = 2*l* or vice versa, in which the number according to figure [fig:genusgcycles] and [fig:intersection](b) would be $$\langle\mathcal{C}\_{2l-1}^{\chi\_1},\mathcal{C}\_{2l}^{\chi\_2}\rangle=\left\{\begin{array}{\*{20}c}0&\mbox{for $\chi\_1=\chi\_2$}\\-1&\mbox{for $\chi\_1=\chi\_2-1$}\\1&\mbox{for $\chi\_1=\chi\_2+1$}\end{array}\right.\label{wilsoncomm4}$$ for *χ* = *Y*, *R*, *B* = 0, 1, 2 modulo 3. The three colors are not independent due to color fusion. For example, the products of the parallel triplets *W*(C*i*)•*Y**W*(C*i*)•*R**W*(C*i*)•*B* = *W*(C*i*)∘*Y**W*(C*i*)∘*R**W*(C*i*)∘*B* = 1 since these can be written as products of plaquette operators and act as the identity on the ground state. Therefore, there are *k*8*g* independent Wilson loops generated by the primitive cycles *W*(C*i*)•*Y*, *W*(C*i*)•*R*, *W*(C*i*)∘*Y* and *W*(C*i*)∘*R*, for *i* = 1, …, 2*g*, An orthonormal basis for the ground state Hilbert space can be written down using the Wilson algebra starting with the particular ground state ∣0⟩• in eq.. Define the normalized ground state $$|{\bf m},{\bf n}\rangle\_\bullet=\left[\prod\_{i=1,\ldots,2g}\left(W(\mathcal{C}\_i)^Y\_\circ\right)^{m\_i}\left(W(\mathcal{C}\_i)^R\_\circ\right)^{n\_i}\right]|0\rangle\_\bullet\label{GSbasis}$$ where ${\bf m}=(m\_i)$, ${\bf n}=(n\_i)$ have integers modulo *k* entries, for *i* = 1, …, 2*g*. The  • -Wilson loops *W*(C*i*)•*χ* form a maximal set of commuting generators and share simultaneous eigenstates. The eigenvalues can be evaluated using the intersection relation and. $$W(\mathcal{C}\_j)^\chi\_\bullet|{\bf m},{\bf n}\rangle\_\bullet=e^{i\frac{2\pi}{k}\varphi\_j^\chi({\bf m},{\bf n})}|{\bf m},{\bf n}\rangle\_\bullet\label{GSeigenvalues}$$ where the Z*k* phase is given by $$\begin{aligned} \varphi\_j^\chi({\bf m},{\bf n})&=(-1)^j\left[n\_{j-(-1)^j}(\delta^\chi\_0-\delta^\chi\_2)\right.\nonumber\\&\quad\quad\left.+m\_{j-(-1)^j}(\delta^\chi\_2-\delta^\chi\_1)\right]\end{aligned}$$ modulo *k*, where *χ* = 0, 1, 2 index the colors *Y*, *R*, *B*. As different ground states in eq. are distinguished by their eigenvalues with respect to the  • -Wilson loops, they must be mutually orthogonal. The *k*4*g* dimensional space of degenerate ground states forms an irreducible unitary representation of the Wilson algebra. ### Obstruction to global tricolorability We will spend the remaining of the subsection on the Wilson algebra and ground state degeneracy when the trivalent graph is *not* globally tricolorable. The topological obstruction is characterized by closed branch cuts where same color plaquettes share edges and vertices (see figure [fig:torusbranchcut](a)). Branch cuts are not physical domain walls as the Hamiltonian does not depend on an explicit plaquette color definition. A closed branch cut that runs along a trivial loop can be removed by cyclic permuting the colors inside the area bounded by the loop. A branch cut going along a non-contractible cycle is however irremovable (unless canceled by another branch cut). This topological color inconsistency has a reducing effect on the Wilson algebra and consequently ground state degeneracy. Similar issue of branch cuts also arise for Z*k* toric code over a checkerboard lattice on a torus , where the charge-flux duality is realized as the bicolor structure of checkerboard plaquettes. The situation in the tricolored model is qualitative different as the Wilson algebra and ground state degeneracy in the presence of branch cuts depend on the divisibility of *k* by 3. [fig:torusbranchcut] For simplicity, we will only demonstrate the case when the model is put on a torus and there is a single branch cut along a non-trivial cycle, say the meridian direction. This can be achieved by adapting a twisted boundary condition on a regular honeycomb lattice through introducing a lattice displacement along an zig-zag edge illustrated in figure [fig:torusbranchcut](a). We will see in the section [sec:twistdefect] that an open color branch cut ends at a conjugate pair of non-abelian threefold *twist defects*. And therefore the twisted boundary condition can be constructed by draging a threefold defect around a cycle. Note that this is fundamentally different from threading a non-abelian quantum flux in a true topological phase as the underlying semiclassical configuration explicitly breaks *S**L*(2; Z) modular invariance. The branch cut picks out a particular non-trivial cycle on the torus and the Wilson algebra does not close under the *S**L*(2; Z) action. Modular transformations will be discussed in more detail in section [sec:defectmodulartransformation]. A Wilson string will change color through cyclic permutation across the branch cut as shown in figure [fig:torusbranchcut](b). As a result the longitudinal cycle no longer corresponds a Wilson loop as the string will not close back onto itself after passing across the branch cut. Wilson loops along the meridian direction on the other hand are still closed as they do not intersect the parallel branch cut. However, they will change color if the entire loop is dragged around the torus. This gives rise to a color ambiguity since meridian Wilson loops of different colors are now interchangable through plaquette stabilizers, and they are indistinguishable on the ground states. The color fusion rule *Y* × *R* × *B* = 1 then implies all meridian Wilson loops to be of order 3, *W*(C2)3 = 1. Being built by Z*k* rotors, a Wilson loop is automatically of order *k* as seen in. And therefore unless *k* is divisible by 3, the meridian Wilson loop is trivial and is a product of plaquette operators *W*(C2)• / ∘ = ∏*P*(*P̂*• / ∘*Y*)2*s*(*P̂*• / ∘*R*)− *s*(*P̂*• / ∘*B*)− *s* where *s* is an integer so that 3*s* ≡ 1 mod *k*. When *k* is a multiple of 3, the meridian Wilson loop *W*(C2)• / ∘ is not a contractible boundary and it intersects with a closed Wilson string *W*(Σ1)• / ∘ in the longitudinal direction consists of *k*/3 tricolor trivalent sources and a unicolor *k*-valent drain depicted in figure [fig:torusbranchcut](c). It commutes with all plaquette stabilizers because of the color fusion *Y* × *R* × *B* = 1 at the tricolor sources and Z*k* fusion *Y**k* = 1 at the unicolor drain. *W*(Σ1)• / ∘ is equivalent to dragging the abelian anyon $\boldsymbol\kappa\_{\bullet/\circ}$ (figure [fig:abeliananyonlattice]), a non-trivial anyon invariant under cyclic color permutation only when *k* divisible by 3, around the longitudinal cycle. The Wilson algebra is then generated by *W*(Σ1)•, *W*(C2)•, *W*(Σ1)∘ and *W*(C2)∘ that satisfy the following algebraic relations. $$\begin{aligned} \left(W(\Sigma\_1)\_{\bullet/\circ}\right)^3&=\left(W(\mathcal{C}\_2)\_{\bullet/\circ}\right)^3=1\\\left[W(\Sigma\_1)\_\bullet,W(\mathcal{C}\_2)\_\bullet\right]&=\left[W(\Sigma\_1)\_\circ,W(\mathcal{C}\_2)\_\circ\right]=0\\W(\Sigma\_1)\_\bullet W(\mathcal{C}\_2)\_\circ&=e^{2\pi i/3}W(\mathcal{C}\_2)\_\circ W(\Sigma\_2)\_\bullet\\W(\mathcal{C}\_2)\_\bullet W(\Sigma\_1)\_\circ&=e^{2\pi i/3}W(\Sigma\_1)\_\circ W(\mathcal{C}\_2)\_\bullet\end{aligned}$$ This gives rise to a ground state degeneracy of 9 = 32 on a torus. The ground state degeneracy on a torus is identical to the total number of deconfined anyon types . On a globally tricolorable graph, abelian anyons can be uniquely labeled by the particle numbers mod *k* of fundamental constituents *Y*•, *R*•, *Y*∘, *R*∘, and thus the ground state degeneracy is *k*4. When there is a color ambiguity from non-contractible branch cut, there will be less particle types which are now referred as species because *Y*• = *R*• = *B*• and *Y*∘ = *R*∘ = *B*∘. The three colors and Z*k* fusion implies *B*• / ∘3 = *B*• / ∘*k* = 1. And hence there will not be non-trivial species unless *k* is a multiple of 3, in which case they will be labeled by the particle number of the two fundamental generators *B*•, *B*∘ modulo 3. This gives rise to a 32-fold degeneracy on a torus and corresponds 32 species of threefold defects distinguishable by Σ• / ∘ discussed in more detail in section [sec:twistdefect]. Abelian anyon excitations, effective field theory and -symmetry --------------------------------------------------------------- Excitations of the Hamiltonian are eigenstates of plaquette stabilizers with non-unit eigenvalues. They can be constructed by letting open Wilson (also called Jordan-Wigner) string operators *W*(S)∘ / • act on a ground state ∣*G**S*⟩. ∣∂S⟩• / ∘ = *W*(S)∘ / •∣*G**S*⟩ Open Wilson strings ( •  or  ∘ ) do not commute with local Wilson loops (*W*(L)∘ and *W*(L)• resp.) surrounding the end points in ∂S (see figure [fig:excitations] and [fig:coloranyons]). Since trivial closed Wilson loops condense in the ground state, the excitation state ∣∂S⟩ in depends only on its plaquette eigenvalues at the end points of the string S rather than the path itself as long as it does not wrap an extra non-trivial cycle. The excited state (or in general a collection of states due to ground state degeneracy) can therefore be labeled by local *abelian anyon* configurations,, measured by the eigenvalues of plaquette stabilizers *P̂*• / ∘∣∂S⟩• / ∘± = *e*± 2*π**i*/*k*∣∂S⟩• / ∘ for ∂S = ∑*P*+ − ∑*P*− are directed plaquettes where the open string S ends (see figure [fig:excitations] and [fig:coloranyons] for sign definition). [fig:excitations] [fig:coloranyons] Anyons are in general detected by local Wilson loops that encircle the quasi-particle excitation (see figure [fig:coloranyons]) with eigenvalues given by Wilson strings intersection (see figure [fig:intersection]). Primitive anyons are labelled by color, *Y*, *R*, *B*, and sublattice type,  • ,  ∘  with fusion relation $$\underbrace{\chi\_{\bullet/\circ}\times\ldots\times\chi\_{\bullet/\circ}}\_{k}=Y\_{\bullet/\circ}\times R\_{\bullet/\circ}\times B\_{\bullet/\circ}=1\label{orderkcolorfusion}$$ for *χ* = *Y*, *R*, *B*, so that no Wilson loop measurement can tell apart these combination from the trivial vacuum 1 without enclosing a proper subset. Composite anyons $[{\bf a}]$ are labeled by particle numbers (modulo *k*) of independent primitive anyons. $$\begin{aligned} &=(Y\_\bullet)^{y\_1}(R\_\bullet)^{r\_1}(Y\_\circ)^{y\_2}(R\_\circ)^{r\_2}\label{abeliananyonlabel1}\\{\bf a}&=({\bf a}\_\bullet,{\bf a}\_\circ),\quad\left\{\begin{array}{\*{20}c}{\bf a}\_\bullet=y\_1{\bf e}\_Y+r\_1{\bf e}\_R\\{\bf a}\_\circ=y\_2{\bf f}^Y+r\_2{\bf f}^R\hfill\end{array}\right.\label{abeliananyonlabel2}\end{aligned}$$ for *y*1, *y*2, *r*1, *r*2 in Z*k*. The *k*4 anyon are mutually distinguishable by Wilson loops. They are represented as a pair of Z*k*-valued two dimensional vectors, ${\bf a}\_\bullet$ and ${\bf a}\_\circ$, on two triangular lattices (see figure [fig:abeliananyonlattice]), one for  • -anyons, another for  ∘ -anyons. [fig:abeliananyonlattice] Abelian anyons support single channel fusion $$\times[{\bf b}\_\bullet,{\bf b}\_\circ]=[{\bf a}\_\bullet+{\bf b}\_\bullet,{\bf a}\_\circ+{\bf b}\_\circ]\label{abeliananyonfusion}$$ and carry unit quantum dimension $d\_{[\bf a]}=1$. A basis of the one dimensional splitting space $V^{[{\bf a}][{\bf b}]}\_{[{\bf a}+{\bf b}]}$ is given by letting the Jordan-Wigner string in figure [fig:abeliananyonsplittingspace](a) act on the ground state (projected locally inside the dotted line with fixed boundary condition). We adopt the time-ordering convention that a  • -string always acts on the ground state before a  ∘ -one. $$\begin{aligned} \left|V^{[{\bf a}][{\bf b}]}\_{[{\bf a}+{\bf b}]}\right\rangle=W\left(\vcenter{\hbox{\includegraphics[width=0.2in]{Y2string.pdf}}}\right)\_\circ W\left(\vcenter{\hbox{\includegraphics[width=0.2in]{Y1string.pdf}}}\right)\_\bullet|GS\rangle\label{anyonsplittingbasis}\end{aligned}$$ The string ordering is a gauge choice for splitting state. This will be generalized to twist defects in figure [fig:splittingspaces] in section [sec:defectfusion]. [fig:abeliananyonsplittingspace] Exchange and braiding operations are represented by abelian Z*k* phases. They stem from intersection between the anyon worldlines and Jordan-Wigner strings, which can be shown to be identical to the linking number between worldlines of anyons  such as those shown in figure [fig:abelianspin]. The *R*-symbol of exchanging abelian anyon $[{\bf a}]$ and $[{\bf b}]$ under the basis choice of splitting space $V^{[{\bf a}][{\bf b}]}\_{[{\bf a}+{\bf b}]}$ in eq. and figure [fig:abeliananyonsplittingspace](a) is given by $$R^{[{\bf a}][{\bf b}]}\_{[{\bf a}+{\bf b}]}=e^{i\frac{2\pi}{k}{\bf a}\_\circ^Ti\sigma\_y{\bf b}\_\bullet}=e^{i\frac{2\pi}{k}(y\_2r'\_1-r\_2y'\_1)}\label{Rabeliananyon}$$ and is illustrated in figure [fig:abeliananyonsplittingspace](b), for ${\bf a}\_\circ=y\_2{\bf f}^Y+r\_2{\bf f}^R$ and ${\bf b}\_\bullet=y'\_1{\bf e}\_Y+r'\_1{\bf e}\_R$. The topological spin of an anyon can be evaluated by a 360∘ rotation (figure [fig:abelianspin]) or exchange, giving the spin-statisticcal phase $$\theta\_{[{\bf a}]}=R^{[{\bf a}][{\bf a}]}\_{[2{\bf a}]}=\vcenter{\hbox{\includegraphics[width=0.7in]{anyonexchange.pdf}}}=e^{i\frac{2\pi}{k}{\bf a}\_\circ^Ti\sigma\_y{\bf a}\_\bullet}\label{anyonexchangespin}$$ The full braiding between anyon $[{\bf a}]$ and $[{\bf b}]$ is given by the symmetric $$S\_{[{\bf a}][{\bf b}]}=\frac{1}{k^2}\vcenter{\hbox{\includegraphics[width=0.5in]{anyonfullbraid.pdf}}}=\frac{1}{k^2}e^{i\frac{2\pi}{k}({\bf a}\_\circ^Ti\sigma\_y{\bf b}\_\bullet+{\bf b}\_\circ^Ti\sigma\_y{\bf a}\_\bullet)}\label{anyonfullbraiding}$$ where the normalization $\mathcal{D}=\sqrt{\sum\_{\bf a}d^2\_{\bf a}}=k^2$ equals the total quantum dimension of the abelian topological phase that is responsible for its topological entanglement entropy , and is added so that the *S*-matrix is unitary. Both spin *θ* and braiding *S* are gauge invariant. [fig:abelianspin] ### Low energy effective field theory The model Hamiltonian can be described by the low energy effective Chern-Simons theory  $$\mathcal{L}=\frac{1}{4\pi}\int K\_{IJ}a\_I\wedge da\_J\label{CS}$$ with the 4 × 4 *K*-matrix $$K=k\left(\begin{array}{\*{20}c}0&0&0&-1\\0&0&1&0\\0&1&0&0\\-1&0&0&0\end{array}\right)\label{Kmatrix}$$ where *Y*•, *R*•, *Y*∘, *R*∘ are anyon charges for the *U*(1)-gauge fields *a*1, *a*2, *a*3, *a*4 respectively. This Chern-Simons theory has the same *k*4 anyon types with identical fusion and braiding. It also supports an identical Wilson algebra eq.([wilsoncomm1]-[wilsoncomm4]) on a genus *g* surface. The Hamiltonian therefore has the same low energy description as two copies of the Z*k* version of Kitaev’s toric code . The novelty of is the apparent *S*3-symmetry relating the tricoloring and bipartite structure of the lattice, which enriches the charge-flux (or plaquette-vertex) Z2-duality in Kitaev’s toric code. Dislocations or twist defects are topological defects that violate certain duality or symmetry and carry non-abelian signature. The lattice structure in Hamiltonian facilitates *S*3-twist defects naturally through lattice dislocations and disclinations. Similar topological defects are more obscure and less motivated in the low energy Chern-Simons theory or double Z*k*-plaquette model. Their field theoretical constructions rely on an explicit branch cut in real space where the gauge fields are discontinuous. The cut can be *gauged away* only when the *K*-matrix is symmetric under symmetry transformation. In a lattice description of twist defects, we will see that branch cuts are absent completely. Although the Chern-Simons theory is not our fundamental tool in this article, we expect a *genon* description of the *S*3-twist defects similar to ref.[]. Here we identify the *S*3-symmetry action on the multicomponent *U*(1)-gauge fields. The permutation group *S*3 of three elements is generated by a non-commutative threefold and twofold symmetries that act respectively as cyclic color permutation $$\Lambda\_3:\left(\begin{array}{\*{20}c}Y\_\bullet&R\_\bullet&B\_\bullet\\Y\_\circ&R\_\circ&B\_\circ\end{array}\right)\to\left(\begin{array}{\*{20}c}R\_\bullet&B\_\bullet&Y\_\bullet\\R\_\circ&B\_\circ&Y\_\circ\end{array}\right)\label{ST}$$ and transposition for color and rotor types $$\Lambda\_{B}:\left(\begin{array}{\*{20}c}Y\_\bullet&R\_\bullet&B\_\bullet\\Y\_\circ&R\_\circ&B\_\circ\end{array}\right)\to\left(\begin{array}{\*{20}c}R\_\circ&Y\_\circ&B\_\circ\\R\_\bullet&Y\_\bullet&B\_\bullet\end{array}\right)\label{S}$$ Λ*B* represents the charge-flux duality within each copy of Z*k*-Kitaev toric code pair, and Λ3 corresponds an extra Z3 symmetry that intertwines the two copies. These will arise naturally as space group operators of the lattice model described in section [sec:compositelatticedefects]. They are represented by the 4 × 4 matrices $$\Lambda\_3=\left(\begin{array}{\*{20}c}0&-1&0&0\\1&-1&0&0\\0&0&0&-1\\0&0&1&-1\end{array}\right),\quad\Lambda\_{B}=\left(\begin{array}{\*{20}c}0&0&0&1\\0&0&1&0\\0&1&0&0\\1&0&0&0\end{array}\right)\label{STCS}$$ acting on the gauge fields (*a*1, *a*2, *a*3, *a*4). The *K*-matrix is invariant under symmetric transformation, *K* = Λ*T**K*Λ for Λ ∈ *S*3, and therefore the theory is symmetric under the *S*3-transformation *a**i* → Λ*i**j**a**j*. Consequently fusion and braiding are also invariant under the symmetry transformation $[{\bf a}\_\bullet,{\bf a}\_\circ]\to\Lambda\cdot[{\bf a}\_\bullet,{\bf a}\_\circ]$ according to and. These transformations rename the color and sublattice labels for abelian anyons. Cyclic color permutation Λ3 corresponds a threefold rotation of the triangular anyon lattice in figure [fig:abeliananyonlattice] while keeping the sublattice label  • ,  ∘  fixes, and transposition Λ*B* corresponds a mirror operation while flipping between a  •  ↔  ∘ . Notice that for *k* divisible by 3, there are non-trivial anyons $\boldsymbol\kappa\_\bullet=\overline{\boldsymbol\kappa}\_\bullet^\dagger=(Y\_\bullet)^{k/3}(R\_\bullet)^{-k/3}$ and $\boldsymbol\kappa\_\circ=\overline{\boldsymbol\kappa}\_\circ^\dagger=(Y\_\circ)^{k/3}(R\_\circ)^{-k/3}$ that are invariant under cyclic color permutation $(ST)\cdot\boldsymbol\kappa\_{\bullet/\circ}=\boldsymbol\kappa\_{\bullet/\circ}$. Furthermore, unlike over complex coefficients where the finite group *S*3 has only 2-dimenional faithful irreducible representation, the two 4-dimensional matrices cannot be simultaneously further block diagonalized with discrete coefficients. This means that the anyon Hilbert space cannot be decomposed into tensor product without violating *S*3-symmetry. In mathematical terms, the symmetry group *S*3 is a subgroup of Γ1, the group of relabeling of anyons, or precisely the group of invertible functors A → A of the unitary braided fusion category A. In the lattice rotor model with *K*-matrix, Γ1 is given by the group of outer automorphisms $$\Gamma\_1=\mbox{Outer}(K)\equiv\frac{\mbox{Aut}(K)}{\mbox{Inner}(K)}$$ where the automorphism group $$\begin{aligned} \mbox{Aut}(K)=O(K;\mathbb{Z})\equiv\left\{g\in GL(4;\mathbb{Z}):g^TKg=K\right\}\end{aligned}$$ is given by orthogonal transformations that leave the *K*-matrix invariant, and the subgroup of inner automorphism $$\begin{aligned} \mbox{Inner}(K)\equiv\left\{g\in\mbox{Aut}(K):g\cdot[{\bf a}\_\bullet,{\bf a}\_\circ]\equiv[{\bf a}\_\bullet,{\bf a}\_\circ]\;\mbox{mod $K$}\right\}\end{aligned}$$ contain orthogonal transformations that fix the anyon lattice Z4/*K*Z4. As shown later in eq., the symmetry subgroup *S*3 is inherited from and identical to the symmetry of the underlying trivalent bipartite planar graph. For example the color permutation Λ3 is induced by a lattice translation on the honeycomb and color sublattice transposition Λ*B* is induced by a lattice inversion. The correspondence between symmetry of the microscopic Hamiltonian and anyon relabeling symmetry *ω*1 : *S*3 → Γ1 = Outer(*K*) is a *first level weak symmetry breaking* according to Kitaev. Twist defects are explicit local violations of the underlying symmetry, and because of they are also symmetry defects that alter anyon sectors. Non-Abelian Twist Defects ========================= The *Y**R**B*-plaquette coloration and  • ,  ∘ -sublattice types give rise to the four fundamental abelian anyon excitations *Y*•, *Y*∘, *R*•, *R*∘ in the lattice rotor model (recall the color redundancy *Y* × *R* × *B* = 1). The arbitrariness of color and sublattice labeling of abelian anyons is summarized by the *S*3-symmetry generated by cyclic color permutation Λ3 and transposition Λ*B* in eq. and. A twist defect is a topological defect that locally violates the symmetry by altering, or *twisting*, the color and rotor label of an anyon that goes around it (see figure [fig:anyontwist]). In other words, a Wilson string that circles around a twist defect does not close back to itself, and therefore unlike abelian anyons which can be locally detected by small Wilson loops such as plaquette stabilizers, there are no local observables measuring a twist defect state. This non-locality is a central theme of many non-abelian anyons, such as vortex-bound Majorana fermions in chiral *p* + *i**p* superconductors , Ising anyon in the Kitaev’s honeycomb model  and Pfaffian fractional quantum Hall state . The non-abelian anyons associated with twist defects considered in this article, however, are not fundamental deconfined excitations of a true topological phase. They are qualitatively more similar to (fractional) Majorana excitations at SC-FM heterostructures with (fractional) topological insulators  or strongly spin-orbit coupled quantum wires . Their existence rely on the topological winding of certain classical non-dynamical *order parameter field*, such as pairing and spin/charge gap . The tricoloring and bipartite structure of the lattice Hamiltonian can be regarded as a discrete *order parameter* of the condensate, and its winding around a point defect supports the color and sublattice twisting. [fig:anyontwist] A twist defect in our lattice model is labeled by an element Λ in the symmetry group *S*3 according to its action on the anyon label so that when an anyon $[{\bf a}]=Y\_\bullet^{y\_1}R\_\bullet^{r\_1}Y\_\circ^{y\_2}R\_\circ^{r\_2}$ passes counter-clockwise around the twist defect, it changes into $\Lambda\cdot[{\bf a}]$, where Λ is some product combinations of Λ*B* and Λ3 in and. Threefold cyclic permutation and twofold transposition are the two conjugacy classes of *S*3 and correspond to two threefold twist defects $[1/3],[\overline{1/3}]$ and three twofold ones [1/2]*Y*, [1/2]*R*, [1/2]*B* respectively. Their twisting actions on abelian anyons going around them are summarized in figure [fig:anyontwist]. The fraction label is chosen to match with the *fractionalization* of abelian anyons so that the denominator shows the minimal number of identical defect copies required to fuse into an abelian channel (see ). [fig:twistdefects] [fig:Z3dislocation] Crystalline defects are predicted to carry topologically protected excitations in topological insulators and superconductors . They are expected to hold fractional quantum vortices in FFLO states . Here we realize twist defects in the lattice model as crystalline defects, such as the disclinations and dislocations illustrated in figure [fig:twistdefects] and [fig:Z3dislocation], that center at non trivalent vertices for threefold twist defects, or odd sided plaquettes for twofold twist defects. These are topological lattice defects that carry curvature or torsion singularities and locally breaks lattice rotation and translation symmetry. Through local violation of tricolorability and/or bipartite structure of the Hamiltonian, they change the color and/or sublattice type of abelian anyons that go around them. This gives rise to additional non-contractible Wilson loops, ground state degeneracies and non-trivial quantum dimensions (*d* > 1) associate with the twist defects. We begin this section by writing down the local lattice Hamiltonian for the two kinds of twist defects. Their quantum dimensions (or ground state degeneracies) can be deduced by counting plaquette stabilizers and vertices. Next we show the topological degeneracies are inherited from non-trivial Wilson string operators, each surrounds multiple defects. These form a set of non-commuting physical observables with Z*k*-valued measurements. As a quantum state is labeled by its simultaneous eigenvalues of a maximal set of commuting Wilson operators, it cannot be *accidentally* measured by local observation since the Wilson strings are non-local operators passing through spatially separated twist defects. This non-local storage of quantum information between non-abelian anyons provides topological protection against decoherence and forms the basis for fault tolerant topological quantum computation . Similar to the algebraic relation eq., the non-commutativity of Wilson operators are characterized by an intersection form ⟨C*i**χ*1, C*j**χ*2⟩ between Wilson strings. We compute these pairings explicitly in this section and show their covariant behavior under *S*3-transformation that differs from the invariant one from the previous section. The basis of Wilson strings and their intersection properties will be useful for characterizing defect fusion and braiding in the following section. Lattice defect Hamiltonian -------------------------- We describe the lattice model modifications at primitive  ± 120∘-disclinations centered at tetravalent or bivalent vertices and  ± 60∘-disclinations at heptagon or pentagon plaquettes (see figure [fig:twistdefects]) corresponding to threefold and twofold twist defects respectively. Unlike the square or octagon disclinations in figure [fig:squareoctagon], these non-trivial lattice defects require additional sets of vertex rotors or allow less plaquette operators in order for stabilizers to remain mutually commutative. The extra rotor degree of freedom increases the ground state degeneracy and associates non-trivial quantum dimensions to twist defects. The *S*3-classification of primitive disclinations are summarized in table [tab:twistdefects]. [ht] cl *S*3-label & &  + 120∘ disclination at a tetravalent  ∘ -vertex &  − 120∘ disclination at a bivalent  • -vertex &  + 120∘ disclination at a tetravalent  • -vertex &  − 120∘ disclination at a bivalent  ∘ -vertex &  + 60∘ disclination at a *χ*-colored heptagon &  − 60∘ disclination at a *χ*-colored pentagon [tab:twistdefects] ### Threefold twist defects [fig:4valentvertex] Instead of accommodating a single set of rotors *σ*, *τ* and a *k*-dimensional Hilbert space like an ordinary trivalent vertex, a tetravalent vertex hosts four sets of rotors *σ*(*i*), *τ*(*i*) and a *k*4-dimensional tensor product space. We define the rotor operators at a tetravalent vertex by the following tensor products $$\begin{array}{\*{20}c}\sigma(1)=\sigma\otimes\openone\_k\otimes\openone\_k\otimes\openone\_k\\\sigma(2)=\openone\_k\otimes\sigma\otimes\openone\_k\otimes\openone\_k\\\sigma(3)=\openone\_k\otimes\openone\_k\otimes\sigma\otimes\openone\_k\\\sigma(4)=\openone\_k\otimes\openone\_k\otimes\openone\_k\otimes\sigma\end{array}\quad\begin{array}{\*{20}c}\tau(1)=\tau\otimes\tau\otimes\openone\_k\otimes\tau\\\tau(2)=\tau\otimes\tau\otimes\tau\otimes\openone\_k\\\tau(3)=\openone\_k\otimes\tau\otimes\tau\otimes\tau\\\tau(4)=\tau\otimes\openone\_k\otimes\tau\otimes\tau\end{array}\label{4valentrotorrep}$$ where $\openone\_k$ is the *k* × *k* identity matrix and *σ*, *τ* are the usual rotors with matrix representations. They satisfy a modified algebraic relation $$\begin{aligned} \tau(i)\sigma(j)\tau(i)^{-1}\sigma(j)^{-1}=\left\{\begin{array}{\*{20}c}w,&\mbox{if $|i-j|\neq2$}\\1,&\mbox{if $|i-j|=2$}\end{array}\right.\label{4valentrotorcomm1}\\\sigma(i)^k=\tau(i)^k=1,\quad[\sigma(i),\sigma(j)]=[\tau(i),\tau(j)]=0\label{4valentrotorcomm2}\end{aligned}$$ where *w* = *e*2*π**i*/*k*. The four adjacent plaquette stabilizers around the tetravalent vertex *u*4 in figure [fig:4valentvertex] are defined with the new rotor operators (suppressing tensor products) $$\begin{aligned} \begin{array}{\*{20}c}\hat{P}\_\bullet(1)=\tau\_{u\_4}(1)\sigma\_{v\_1}\tau\_{v\_2}\sigma\_{v\_3}\tau\_{v\_4}\sigma\_{v\_5}\hfill\\\hat{P}\_\bullet(2)=\tau\_{u\_4}(2)\sigma\_{v\_5}\tau\_{v\_6}\sigma\_{v\_7}\tau\_{v\_8}\sigma\_{v\_9}\hfill\\\hat{P}\_\bullet(3)=\tau\_{u\_4}(3)\sigma\_{v\_9}\tau\_{v\_{10}}\sigma\_{v\_{11}}\tau\_{v\_{12}}\sigma\_{v\_{13}}\\\hat{P}\_\bullet(4)=\tau\_{u\_4}(4)\sigma\_{v\_{13}}\tau\_{v\_{14}}\sigma\_{v\_{15}}\tau\_{v\_{16}}\sigma\_{v\_1}\end{array}\end{aligned}$$ and similarly the  ∘ -plaquette operators *P̂*∘(*i*) are defined by interchanging *σ* ↔ *τ*. A bivalent vertex on the other hand is hosting two sets of rotor operators and a *k*2-dimensional tensor product space. The rotors are defined by $$\begin{array}{\*{20}c}\sigma(1)=\sigma\otimes\openone\_k\\\sigma(2)=\openone\_k\otimes\sigma\end{array}\quad\begin{array}{\*{20}c}\tau(1)=\tau\otimes\tau^2\\\tau(2)=\tau^2\otimes\tau\end{array}\label{2valentrotorrep}$$ and satisfy the algebraic relations as well as $$\tau(i)\sigma(j)\tau(i)^{-1}\sigma(j)^{-1}=\left\{\begin{array}{\*{20}c}w,&\mbox{if $i=j$}\\w^2,&\mbox{if $i\neq j$}\end{array}\right.\label{2valentrotorcomm1}$$ The two adjacent plaquette stabilizers of the bivalent vertex *u*2 in figure [fig:4valentvertex] are defined by $$\begin{array}{\*{20}c}\hat{P}\_\bullet(1)=\tau\_{u\_2}(1)\sigma\_{v\_1}\tau\_{v\_2}\sigma\_{v\_3}\tau\_{v\_4}\sigma\_{v\_5}\\\hat{P}\_\bullet(2)=\tau\_{u\_2}(2)\sigma\_{v\_5}\tau\_{v\_6}\sigma\_{v\_7}\tau\_{v\_8}\sigma\_{v\_1}\end{array}$$ and similarly for the  ∘ -plaquette operators. Thanks to the modified algebraic relations and, all plaquette operators around a tetravalent or bivalent vertex mutually commute. The Hamiltonian is then defined just as in eq. by summing over all plaquette stabilizers. In fact and are not just sufficient but also necessary for the stabilizers to form good quantum numbers. The *k*4 or *k*2 dimensional tensor product rotor representations or are not accidental. Their dimensionality are *topologically protected* by non-contractible Wilson strings around them and are directly related to the quantum dimension of threefold twist defect. These will be explained in detail in the following subsection. There is however a caveat when *k* is a multiple of 3. By examining the rotor representation at a tetravalent vertex or that at a bivalent one, we have the additional torsion relation (∏*i* = 14*τ**u*4(*i*))*k*/3 = (∏*i* = 12*τ**u*2(*i*))*k*/3 = 1 and the order three center operators that commute with all rotors Σ*u*4 = (∏*i* = 14*σ**u*4(*i*))*k*/3,  Σ*u*2 = (∏*i* = 12*σ**u*2(*i*))*k*/3 This means that the *k*4 (or *k*2) dimensional tensor product space at the tetravalent (resp. bivalent) vertex is no longer irreducible for the rotor algebra eq. (or eq.) as it can be decomposed according to the Z3-valued good quantum number according to Σ*u*4 (resp. Σ*u*2). In this case, we restrict the tensor product space at any tetravalent (or bivalent) vertices to one of the *k*4/3 (resp. *k*2/3) dimensional sector by specifying the Z3-phase for the central observables Σ*u*4 (resp. Σ*u*2). Or equivalently the *k*4 (resp. *k*2) dimensional rotor space can be broken down by the local defect Hamiltonian *H**u*4/*u*2 =  − *J*\*(*e*− *i**ϕ*Σ*u*4/*u*2 + *e**i**ϕ*Σ*u*4/*u*2†) where the phase variable *ϕ* controls the Z3-value of Σ*u*4/*u*2 in the ground state except at *ϕ* = *π*,  ± *π*/3 where level crossings occur. Consequently, when *k* is divisible by 3, threefold twist defects are further subdivided into nine different species distinguished by the eigenvalues of the Σ’s, which are local measurements described by certain order 3 Wilson strings described in the upcoming subsection. Using the Euler characteristics 2 − 2*g* = #*P* − #*E* + #*V*, eq. is modified into #*V* = 2 × #*P* + 4(*g* − 1) − #*u*4 + #*u*2 where #*V*, #*P* are the total numbers of vertices, plaquettes and #*u*4, #*u*2 are the numbers of tetravalent, bivalent vertices respectively. We are interested in how the ground state degeneracy (G.S.D.) scales with #*u*4 and #*u*2 in the thermodynamic limit. For this, we will ignore the genus *g* and the overcounting of plaquettes such as that will contribute to the G.S.D. by a proportionality constant independent from the twist defect number. The total Hilbert space is a tensor product of rotor spaces of dimension *k*#*V* + 3#*u*4 + #*u*2 for *k* not divisible by 3, or *k*#*V* + 3#*u*4 + #*u*2/3#*u*4 + #*u*2 otherwise. Stabilized by the two operators *P̂*•, *P̂*∘ per plaquette, the ground state degeneracy scales as $$G.S.D\sim\left\{\begin{array}{\*{20}c}(k^2)^{\#u\_4+\#u\_2},&\mbox{if $3\centernot\mid k$}\\(k^2/3)^{\#u\_4+\#u\_2},&\mbox{if $3\mid k$}\end{array}\right.$$ where #*u*4 + #*u*2 is the total number of defects. Or equivalently the quantum dimensions of a threefold twist defect is given by $$d\_{[1/3]}=d\_{[\overline{1/3}]}=\left\{\begin{array}{\*{20}c}k^2,&\mbox{if $3\centernot\mid k$}\\k^2/3,&\mbox{if $3\mid k$}\end{array}\right.\label{Z3dimension}$$ ### Twofold twist defects [fig:pentagonheptagon] Heptagon and pentagon are odd-sided plaquettes and therefore the two operators will not commute with one another. There are instead only one plaquette stabilizer at a heptagon and pentagon (or any odd-sided plaquettes) *P̂*7 = ∏*p* = *i*7*σ**v**i**τ**v**i*,  *P̂*5 = ∏*p* = *i*5*σ**v**i**τ**v**i* All faces around the heptagon and pentagon are ordinary even-sided plaquettes. Each carries two usual stabilizers and commute with the heptagon and pentagon. Similar to dislocation operators in Kitaev’s toric code  or Z*k*-Wen plaquette model , the problem with *P̂*7 and *P̂*5 is that they are not necessarily *k**t**h* roots of unity since (*σ**τ*)*k* = *w**k*(*k* − 1)/2 = ( − 1)*k* − 1. And in general the defect Hamiltonian can depend on a phase *H*5, 7(*ϕ*) =  − *J*\*(*e*− *i**ϕ**P̂*5, 7 + *e**i**ϕ**P̂*5, 7†) Its ground state is labeled by the stabilizer eigenvalues *P̂*5, 7 = *e*2*π**i**p*(*ϕ*)/*k* for *p*(*ϕ*) being the integer for *k* odd (or half-integer for *k* even) between $$\frac{k\phi}{2\pi}-\frac{1}{2}<p(\phi)<\frac{k\phi}{2\pi}+\frac{1}{2}$$ There are level crossings when *k**ϕ*/2*π* is a half-integer (or integer) when *k* is odd (resp even) where two eigenstates for *P̂*5, 7 have the same energy and the system becomes gapless (see figure [fig:levelcrossing]). [fig:levelcrossing] We treat this phase variable *ϕ* as a non-dynamic classical parameter associate to a twofold twist defect. This is a fundamental feature differing the twist defect from usual deconfined non-abelian anyons in topologically ordered system. *p*(*ϕ*) modulo *k* is a locally measurable quantity by the pentagon or heptagon plaquette operator *P̂*5, 7. In general twofold twist defects are labeled by small Wilson loops circling them (to be discussed in the following subsection) and are divided into *k*2 species apart from their colors *χ*. Each species has distinct fusion and braiding characteristics such as pair measurement restrictions and topological spin as will be seen in section [sec:defectfusion] and  [sec:defectexchangebraiding]. Slow evolution of the phase variable *ϕ* → *ϕ* + 2*π*/*k* drives a ground state adiabatically to an excited state due to a level crossing shown in figure [fig:levelcrossing]. The excited state can relax to a different ground state by absorbing or emitting an abelian anyon and thereby driving a *species mutation* of the twofold twist defect. A successive phase winding of a series of twofold twist defects can lead to non-local transportation of abelian anyons without actually moving the defects. This pumping process is a Z*k* analogue of the *U*(1) Thouless charge pump  or the Z2 fermion parity pump . The ground state degeneracy of multiple twofold twist defects can be estimated in the thermodynamic limit by counting degree of freedom. Assume the number heptagons and pentagons are identical so that they do not contribute to the net curvature in the Gauss Bonnet theorem. Since there are only one stabilizer per pentagon or heptagon, the ground state degeneracy scales as *k**N* where *N* is the total number of twofold twist defects. And therefore its quantum dimension is given by *d*[1/2] = *k* This matches the $\sqrt{k}$ quantum dimension for dislocation twist defects in Z*k*-rotor model  or $\sqrt{2}$ for Kitaev’s toric code  since our Hamiltonian is identical to two copies of quantum double Z*k*-model. The difference here is that there are three types of twofold twist defects [1/2]*χ* labeled by colors *χ*, which is the plaquette color for primitive pentagon or heptagon defects. Two identical twofold defects fuse to *k*2 measurable abelian channels while a pair of twofold defects of different colors fuse into a threefold one, [1/2]*χ* × [1/2]*χ* ± 1 = [1/3] or $[\overline{1/3}]$ as will be discussed further in section [sec:defectfusion]. ### Composite lattice defects Classical lattice defects are topologically classified by the holonomy around it  such as the Burgers’ vector ${\bf B}$ of a dislocation or the Frank angle Ω of a disclination. The holonomy is an element in the space group counting the net amount of discrete rotations *r*(Ω) and translations ${\bf t}$ along a loop around a point defect or a combination of them on a lattice. The space group in our model on a honeycomb lattice is the semi-direct product $P6=C\_6\ltimes\mathcal{L}$, where *C*6 is the 6-fold rotation group and L = Z2 is the translation group of a triangular lattice. The holonomy $(r(\Omega),{\bf t})$ is path independent as long as the loop encloses the same defects but it may change according to conjugacy transformation into $(r(\Omega),{\bf t}+r(\Omega)\cdot{\bf d}-{\bf d})$ if the starting and ending point of the loop is displaced by ${\bf d}$. Thus lattice defects are precisely characterized by the conjugacy classes of holonomy $(r(\Omega),[{\bf t}]\_\Omega)$, where *r*(Ω) = *e**i*Ω*σ**y* is rotation with Frank angle Ω, a multiple of *π*/3, and $[{\bf t}]\_\Omega$ is the conjugacy class of translation in the quotient group $$\frac{\mathcal{L}}{(r(\Omega)-1)\mathcal{L}}=\left\{\begin{array}{\*{20}c}\mathcal{L},\hfill&\mbox{for $\Omega=0$}\hfill\\0,\hfill&\mbox{for $\Omega=\pm60^\circ$}\hfill\\\mathbb{Z}\_3,\hfill&\mbox{for $\Omega=\pm120^\circ$}\hfill\\\mathbb{Z}\_2\oplus\mathbb{Z}\_2,&\mbox{for $\Omega=180^\circ$}\hfill\end{array}\right.\label{disclinationclassification}$$ In particular $[{\bf t}]\_\Omega$ differentiates disclinations with the same Frank angle and curvature singularity. The Z3 classification distinguishes 120∘ disclinations centered at octagons such as figure [fig:squareoctagon](a), tetravalent  ∘ -vertices such as figure [fig:twistdefects](a) and a tetravalent  • -vertices. And therefore tricolorability is violated when $[{\bf t}]\_{120^\circ}$ is non-trivial in Z3. This remains true for any composite defect with an overall  ± 120∘ Frank angle. Since the classification for  ± 60∘-disclinations is trivial, all such composite defect is holonomically equivalent to a pentagon or heptagon which breaks both tricolorability and bipartite structure. Dislocations are composite defects consist of a collection of disclinations with canceling Frank angles and curvature. The torsional singularity of a dislocation, characterized holonomically by a Burgers’ vector ${\bf B}$ in L, originates from the spacial separation of its constituent disclinations. For example, each dislocation in figure [fig:Z3dislocation] is a disclination dipole with a tetravalent vertex (Ω =  + 120∘) separated from a square (Ω =  − 120∘) by half a lattice spacing. And figure [fig:twistdefects](c) shows an overall dislocation from the  ± 60-disclination pair. All dislocation preserves the  • ,  ∘  bipartite structure since it always consists of the same number of pentagons and heptagons. It violates tricolorability when its Burgers’ vector ${\bf B}$ sends a hexagon plaquette to a different color one. Examples of dislocation twist defects include  ± 120∘-disclination dipole with unequal translation pieces $[{\bf t}]\_{+120^\circ}\neq[{\bf t}]\_{+120^\circ}\in\mathbb{Z}\_3$ (i.e. inequivalent defect centers,  • ,  ∘ -vertex or plaquette) such as those in figure [fig:Z3dislocation], and  ± 60∘ disclination dipole with different color pentagon and heptagon. Let Lʹ be the translation subgroup of L that preserves tricoloration and that L/Lʹ = Z3 classified  ± 120∘-disclinations in. Any dislocation with Burgers’ vector in Lʹ and any  ± 120∘-disclination with trivial Z3 translation piece $[{\bf t}]\_{120^\circ}$ are *twistless* defects that do not violate tricolorability and bipartite strucutre such as those in figure [fig:squareoctagon]. The *P*6 honeycomb space group symmetry collapses when there is are twistless dislocations and  ± 120∘-disclinations in the lattice that break discrete Lʹ translation and *C*3 rotation symmetry respectively. Certain *discreteness* in the space group that represents tricoloration and bipartite structure is however left over. The presence of twistless lattice defects breaks the space group symmetry into the following residue $$\frac{C\_6\ltimes\mathcal{L}}{C\_3\ltimes\mathcal{L}'}=\mathbb{Z}\_2\ltimes\mathbb{Z}\_3=S\_3\label{spacegroupquotient}$$ which is not surprisingly identical to the group of symmetry between abelian anyons and or gauge fields in. In a trivalent bipartite graph, defects are holonomically classified by the *residue* group *S*3. For instance, a pentagon defect is indistinguishable from a heptagon one as  ± 60∘-disclinations are interchangeable by absorbing or releasing a square or octagon disclination. A primitive dislocation is equivalent to a tetravalent vertex by releasing a square disclination and in the long length scale indistinguishable from a pentagon-heptagon dipole separated by one lattice spacing. Non-local Wilson algebra ------------------------ Figure [fig:anyontwist] shows how an abelian anyon changes type around a twist defect and its Wilson string does not close back to itself after one cycle. A closed Wilson string is either non-local so that it encloses multiple twist defects or wraps around a twist defect multiple times. The former contributes to a ground state degeneracy as it intersects and therefore does not commute with other non-local Wilson strings. The latter can be represented by an abelian phase as it can be shrunk to the point defect and will not intersect with other Wilson strings. [fig:defectWilsonloop] [fig:defectlocalloop] All non-local Wilson loops are generated by primitive ones shown in figure [fig:defectWilsonloop] (and another one in figure [fig:splittingspaces](g) which will not be used in this section), each encircles two twist defects. A particular arbitrary choice of branch cut is assigned in the figure to keep track of color and sublattice transformation according to figure [fig:anyontwist] and [fig:twistdefects]. Twist defects are further subdivided into species according to the abelian phase eigenvalues of local Wilson loops, Θ*χ* around twofold twist defect and Σ•, Σ∘ around threefold twist defects when *k* is divisible by 3. Each circles a single defect multiple times as shown in figure [fig:defectlocalloop]. Θ*χ* are continuum versions of defect pentagon and heptagon operators, and Σ•, Σ∘ are related to the good quantum numbers associate with each tetravalent and bivalent vertex. ### Threefold twist defects A threefold twist defect is characterized by its 3-fold anyon color twisting described in figure [fig:anyontwist]. It can be generated by primitive dislocation, tetravalent or bivalent lattice disclination, or any composite defect with the same overall color permutation character. [fig:Z3loopdefinition] We consider a system of *N* threefold twist defects [1/3] for *N* is some multiple of three so that the system can be closed on a sphere, i.e. the *N* defects fuse to the trivial abelian anyon channel. All non-local Wilson loops are combinations of the one in figure [fig:defectWilsonloop](a) between neighboring defects and we label the one enclosing the *i**t**h* and (*i* + 1)*t**h* defect by A*i*, *i* + 1*χ*,  • and A*i*, *i* + 1*χ*,  ∘ (see figure [fig:Z3loopdefinition]), where *χ* is the color of the string at the fixed view point (grey square). Due to color redundancy *Y* × *R* × *B* = 1, it is enough to take *χ* = *Y*, *R*. And the final loop A*N* − 1, *N* can be expressed in terms of the previous ones because the system is closed and the Wilson loop enclosing all *N* defects is trivial. $$\begin{aligned} \prod\_{i=1}^{N/3}\mathcal{A}\_{3i-2,3i-1}^{\chi,\bullet/\circ}\left(\mathcal{A}\_{3i-1,3i}^{\chi-1,\bullet/\circ}\right)^\dagger=\vcenter{\hbox{\includegraphics[width=0.8in]{outsideloop.pdf}}}=1\label{Z3closeness}\end{aligned}$$ for [*i*]3 ≡ *i* modulo 3 cyclically permutes the color *χ*. [fig:Z3intersection] Since  • -Wilson loops mutually commute, the 2(*N* − 2)  • -loops form a maximal commuting set of Wilson operators. The Wilson algebra $$\mathcal{A}\_{i,i+1}^{\chi,\bullet}\mathcal{A}\_{j,j+1}^{\chi',\circ}=e^{i\frac{2\pi}{k}\langle\mathcal{A}\_{i,i+1}^\chi,\mathcal{A}\_{j,j+1}^{\chi'}\rangle}\mathcal{A}\_{j,j+1}^{\chi',\circ}\mathcal{A}\_{i,i+1}^{\chi,\bullet}\label{Z3Wilsonalgebra}$$ is determined by the symmetric pairing matrix I = ⟨ \* ,  \* ⟩ between colored strings. It is non-zero only when ∣*i* − *j*∣ ≤ 1 and can be evaluated by local intersection number according to figure [fig:intersection]. All non-trivial intersection are shown in figure [fig:Z3intersection]. Note that the result is independent from the arbitrary choice of branch cut. The intersection form for *N* threefold twist defects is given by the 2(*N* − 2) × 2(*N* − 2) symmetric matrix I $$\langle\mathcal{A}\_{i,i+1},\mathcal{A}\_{j,j+1}\rangle=\left(\begin{array}{\*{20}c}I\_0&I\_1&0&0&\ldots&0\\I\_1^T&I\_0&I\_1&0&\ldots&0\\0&I\_1^T&I\_0&I\_1&\ldots&0\\0&0&I\_1^T&I\_0&\ldots&0\\\vdots&\vdots&\vdots&\ddots&\ddots&\vdots\\0&0&0&0&\ldots&I\_0\end{array}\right)\label{Z3intersectionmatrix}$$ $$I\_0=\left(\begin{array}{\*{20}c}-2&1\\1&-2\end{array}\right),\quad I\_1=I\_0\Lambda\_3=\left(\begin{array}{\*{20}c}1&1\\-2&1\end{array}\right)\label{I0I1}$$ where the (*N* − 2) rows and columns of the intersection matrix is enumerated according to the loops {A*i*, *i* + 1 : *i* = 1, …, *N* − 2}, its 2 × 2 matrix entries act on color degree of freedom *χ* = *Y*, *R*, and $\Lambda\_3=\left(\begin{array}{\*{20}c}0&-1\\1&-1\end{array}\right)$ is cyclic color permutation. *I*0 comes from the intersection between Wilson loops about the same defect pair (first two diagrams in figure [fig:Z3intersection]) while *I*1 comes from the intersection between Wilson loops about adjacent defect pairs (last three diagrams in figure [fig:Z3intersection]). We notice that the Wilson algebra is symmetric under cyclic color permutation Λ3± 1 : *χ* → *χ* ± 1 but is not invariant under color and sublattice transposition Λ*B* : *Y*• / ∘ ↔ *R*∘ / •. This can be understood by observing that the Wilson loop in figure [fig:defectWilsonloop](a) violates transposition explicitly. Color label *χ* of the Wilson operator A*i*, *i* + 1*χ*,  • / ∘ in figure [fig:Z3loopdefinition] is defined with respect to a base point, and a change of base point does not commute, hence inconsistent, with all transpositions Λ*Y*, Λ*R*, Λ*B*. In fact color and sublattice transposition switches a threefold defect into its anti-partner $$\Lambda\_B:[1/3]\leftrightarrow[\overline{1/3}]$$ and the intersection form is *covariant* under Λ*B* $$\Lambda\_B^T\cdot\mathbb{I}\_{[1/3]}\cdot\Lambda\_B=-\mathbb{I}\_{[\overline{1/3}]}$$ where Λ*B* = *σ**x* acts on the colors *χ* = *Y*, *R* and $\mathbb{I}\_{[\overline{1/3}]}$ is the intersection matrix for *N* anti-threefold defects. In the microscopic lattice level, transposition Λ*B* originates from inversion which interchanges the  • ,  ∘  sublattice type and switches $[1/3]\leftrightarrow[\overline{1/3}]$ according to table [tab:twistdefects] for primitive disclination twist defects. This can also be understood in the continuum by seeing transposition interchanges the abelian anyon twisting between [1/3] and $[\overline{1/3}]$ in figure [fig:anyontwist](a) and (c). For later convenience, we adapt the multi-exponents notation ${\bf m}=(m\_1,m\_2,\ldots,m\_{2N-5},m\_{2N-4})$ for Wilson operator product $$\left(\mathcal{A}^\bullet\right)^{\bf m}=\prod\_{i=1}^{N-2}\left(\mathcal{A}\_{i,i+1}^{Y,\bullet}\right)^{m\_{2i-1}}\left(\mathcal{A}\_{i,i+1}^{R,\bullet}\right)^{m\_{2i}}$$ and similarly for the dual ones $\left(\mathcal{A}^\circ\right)^{\bf m}$, where *m**j* are integers modulo *k*. When *k* is *not* divisible by 3, starting with the particular ground state ∣0⟩• in eq. fixed by the  • -Wilson operators $\left(\mathcal{A}^\bullet\right)^{\bf m}|0\rangle\_\bullet=|0\rangle\_\bullet$, the dual  ∘ -Wilson operators generate all ground states by product combinations $$|{\bf m}\rangle\_\bullet=\left(\mathcal{A}^\circ\right)^{\bf m}|0\rangle\_\bullet\label{GSZ3}$$ These are simultaneous eigenstates for the maximally commuting set of  • -Wilson operators. The matrix elements of A• and A∘ with respect to this basis are given by $$\begin{aligned} \langle{\bf m}'|\left(\mathcal{A}^\bullet\right)^{\bf n}|{\bf m}\rangle&=e^{i\frac{2\pi}{k}{\bf n}^T\mathbb{I}{\bf m}}\delta\_{{\bf m}',{\bf m}}\label{Z3Wilsoneigenvalues}\\\langle{\bf m}'|\left(\mathcal{A}^\circ\right)^{\bf n}|{\bf m}\rangle&=\delta\_{{\bf m}',{\bf m}+{\bf n}}\end{aligned}$$ where I is the intersection form in. [fig:3foldsublattice] We notice that 3I− 1 has integer entries, and therefore the intersection matrix I is invertible (with Z*k* entries) when there is an integer *s* with 3*s* ≡ 1 mod *k*, in other words *k* is not divisible by 3. This means different ground states $|{\bf m}\rangle\_\bullet$ in can be distinguished by their set of eigenvalues for the  • -Wilson operators, and thus forms a complete orthonormal basis for the ground state. Alternatively one can label the ground states using  • -Wilson operator eigenvalues ${\boldsymbol\alpha}=(y\_1,r\_1,\ldots,y\_{N-2},r\_{N-2})$ by $$|{\boldsymbol\alpha}\rangle\_\bullet=\left(\mathcal{A}^\circ\right)^{\bf m}|0\rangle\_\bullet,\quad{\bf m}=\mathbb{I}^{-1}{\boldsymbol\alpha}\label{Z3groundstatealpha}$$ where *y**i*, *r**i* are integers modulo *k*, so that the matrix elements of Wilson operators are $$\begin{aligned} \langle{\boldsymbol\alpha}'|\left(\mathcal{A}^{\bullet}\right)^{\bf n}|{\boldsymbol\alpha}\rangle&=e^{i\frac{2\pi}{k}{\bf n}^T{\boldsymbol\alpha}}\delta\_{{\boldsymbol\alpha}',{\boldsymbol\alpha}}\label{Z3Wilsonmatix1}\\\langle{\boldsymbol\alpha}'|\left(\mathcal{A}^{\circ}\right)^{\bf n}|{\boldsymbol\alpha}\rangle&=\delta\_{{\boldsymbol\alpha}',\mathbb{I}{\bf n}+{\boldsymbol\alpha}}\label{Z3Wilsonmatix2}\end{aligned}$$ One can put the ground states $|{\boldsymbol\alpha}\rangle\_\bullet$ on the Cartesian product of *N* − 2 periodic 2D lattices shown in figure [fig:3foldsublattice](a). Each lattice point ${\boldsymbol\alpha}\_i=y\_i{\bf e}\_1+r\_i{\bf e}\_2$ on the integer mod *k* triangular lattice represents the eigenvalues *e*2*π**i**y**i*/*k*, *e*2*π**i**r**i*/*k* for A*i*, *i* + 1*Y*,  •, A*i*, *i* + 1*R*,  •. The ground state degeneracy (G.S.D.) for a closed spherical system with *N* threefold twist defects is given by *G*.*S*.*D*. = *k*2(*N* − 2) which matches the quantum dimension *d*[1/3] = *k*2 predicted by counting lattice degree of freedom in eq.. When *k* is divisible by 3, there is a non-trivial center that commute with all Wilson operators. It is generated by $$\begin{aligned} \Sigma\_{i,i+1}^{\bullet/\circ}&=\left(\mathcal{A}\_{i,i+1}^{Y,\bullet/\circ}\right)^{-k/3}\left(\mathcal{A}\_{i,i+1}^{R,\bullet/\circ}\right)^{k/3}\nonumber\\&=\Sigma\_i^{\bullet/\circ}\left(\Sigma\_{i+1}^{\bullet/\circ}\right)^{-1}=e^{i\frac{2\pi}{3}\left(s^{\bullet/\circ}\_i-s^{\bullet/\circ}\_{i+1}\right)}\in\mathbb{Z}\_3\label{Sigmaii+1}\end{aligned}$$ where the eigenvalues of the local Wilson operator Σ• = *e*2*π**i**s*•/3 and Σ∘ = *e*2*π**i**s*∘/3 in figure [fig:defectlocalloop](b) distinguishes the nine species ${\bf s}=(s\_\bullet,s\_\circ)\in\mathbb{Z}\_3\oplus\mathbb{Z}\_3$ for each threefold defects $[1/3]\_{\bf s}$. It can be shown, up to plaquette stabilizers, that the local Wilson loop Σ*i*• / ∘ around a tetravalent (or bivalent)  • / ∘ -vertex is Σ*u*4− 1 (resp. Σ*u*2) in eq. or 1 around a  ∘ / • -vertex. And therefore the Z3-phases for the small loops Σ*i*• / ∘ are determined by local defect Hamiltonian. The species are interchangeable by tuning the defect phase *ϕ* → *ϕ* ± 2*π*/3 in that drive the system into an excited state and locally relax back to a ground state by emitting or absorbing an abelian anyon. This process changes the Z3-values for Σ• and Σ∘ and thus switches the species. This extra phase degree of freedom provides a possibility for non-local transport of abelian anyon along a series of coupled [1/3] defects that is a unique feature only when *k* is a multiple of 3. The ground states $|{\bf m}\rangle\_\bullet$ in do not form an orthonormal set as they overcount the ground state degeneracy. In fact the allowed eigenvalues $e^{i\frac{2\pi}{k}{\bf n}^T\boldsymbol\alpha}$ for $(\mathcal{A}^\bullet)^{\bf n}$ are restricted by the species *s*• / ∘ in. The ground states $|\boldsymbol\alpha\rangle$ now form a sublattice in figure [fig:3foldsublattice](b), which contains a third of lattice points in the origin triangular lattices. The ground state degeneracy (G.S.D.) is thus reduced by 3 for each defects. $$G.S.D.=\left(\frac{k^2}{3}\right)^{N-2}\label{Z33dimensionGSD}$$ This matches the quantum dimension *d*[1/3] = *k*2/3 in predicted by microscopic lattice derivation. A complete description for the Wilson structure of threefold defects when 3 divides *k* can be found in appendix [sec:Z33Wilsonalgebraappendix]. ### Twofold twist defects A twofold twist defect is characterized by its twofold twisting of abelian anyons circling around as shown in figure [fig:anyontwist]. It can be realized as pentagon or heptagon defects in the lattice, or any composite defects that violate tricolorability and bipartite structure. Since there are three transpositions Λ*Y*, Λ*R*, Λ*B* in the permutation group *S*3, twofold defects are classified into three types according to colors, [1/2]*Y*, [1/2]*R* and [1/2]*B*. We consider a closed collection of *N* twofold twist defects for some even *N*. We assume for simplicity that all defects are of the same color type, say blue (*B*), and they fuse to the overall vacuum channel and the system is compactified on a sphere. A more general discussion on coexisting multi-type defects will be given in the upcomming subsection. The twofold defect [1/2]*B* is further subdivided into *k*2 species according to eigenvalues of two *local* measurements at the defect, the small Wilson loops Θ*χ* (see figure [fig:defectlocalloop](a) and [fig:Z2loopdefinition]) that circles the defect twice for *χ* = *Y*, *R*, *B*. For primitive pentagon or heptagon defects, they are identical (up to a Z*k* phase) to the defect plaquette operator *P̂*5, 7 in eq., and therefore their eigenvalues *w**l**χ*, *w* = *e*2*π**i*/*k*, is fixed by the local defect Hamiltonian. Similar to *P̂*5, 7, the self-intersection of the small Wilson loop Θ*χ* causes a minus sign for even *k* so that (Θ*χ*)*k* = ( − 1)*k* − 1 for *χ* not equal *B*, the color of the twofold defect. This means that *l**χ* is an integer modulo *k* for *k* odd but a half-integer modulo *k* when *k* is even and *χ* ≠ *B*. Color redundancy *Y* × *R* × *B* = 1 requires *l**Y* + *l**R* + *l**B* = 0. The two independent phases ${\bf l}=(l\_Y,l\_R)$ in Z*k* ⊕ Z*k* for *k* odd or in (Z + 1/2)*k* ⊕ (Z + 1/2)*k* for *k* even form the species label for $[1/2]\_{B,{\bf l}}$. [fig:Z2loopdefinition] [fig:doublelooppair] The prototype of a Wilson loop is depicted in figure [fig:defectWilsonloop](c) and [fig:Z2loopdefinition]. We denote the Wilson loop enclosing the *i**t**h* and (*i* + 1)*t**h* defect by Z*i*, *i* + 1*χ* according to the string color *χ* = *Y*, *R*, *B* observed at the base point (grey square). We can assume the loop always originates as a  • -sublattice type string from the base point because different sublattice types are interchangeable up to a phase as shown in figure [fig:defectWilsonloop](c). The phase is fixed precisely by the local Wilson loops Θ*i**χ* = *w**l**i**χ* and Θ*i* + 1*χ* = *w**l**i* + 1*χ* and an unlinking procedure illustrated in figure [fig:doublelooppair] and is shown in the right side of the equation below. $$\mathcal{Z}\_{i,i+1}^{\chi',\circ}\mathcal{Z}\_{i,i+1}^{\chi,\bullet}=e^{i\frac{2\pi}{k}\left(\delta^\chi\_{\chi'+1}-\delta^\chi\_{\chi'-1}+l^\chi\_i+l^\chi\_{i+1}\right)}\label{Zloopinsideout}$$ where the color *χ*, *χ*ʹ = *Y*, *R*, *B* are indexed by 0, 1, 2 mod 3, and (*χ*ʹ,  ∘ ) and (*χ*,  • ) are related by the transposition that characterizes the twisting of twofold defects defined in figure [fig:anyontwist]. [fig:Z2intersection] All Wilson operators can be generated by the prototype Z*i*, *i* + 1*χ* for *χ* = *Y*, *R* (recall *B* = *Y*− 1*R*− 1) and *i* = 1, …, *N* − 2 since Z*N* − 1, *N**χ* can be generated by the compactification relation $$\begin{aligned} \prod\_{j=1}^{N/2}\mathcal{Z}\_{2j-1,2j}^\chi=\vcenter{\hbox{\includegraphics[width=1in]{outsideloopp.pdf}}}=1\label{Z2closeness}\end{aligned}$$ {Z2*j* − 1, 2*j**χ*}*j* = 1, …, *N*/2 − 1 and {Z2*j*, 2*j* + 1*χ*}*j* = 1, …, *N*/2 − 1 forms two maximal commuting set of Wilson operators because loops in the same set do not intersect. The Wilson algebra is characterized by the intersection relation between the two sets. $$\mathcal{Z}\_{2i-1,2i}^\chi\mathcal{Z}\_{2j,2j+1}^{\chi'}=e^{i\frac{2\pi}{k}\langle\mathcal{Z}\_{2i-1,2i}^\chi,\mathcal{Z}\_{2j,2j+1}^{\chi'}\rangle}\mathcal{Z}\_{2j,2j+1}^{\chi'}\mathcal{Z}\_{2i-1,2i}^\chi$$ where the pairing I = ⟨ \* ,  \* ⟩ can be deduced by counting intersections between adjacent loops in figure [fig:Z2intersection] according to the rules set by figure [fig:intersection], and is given by the following (*N* − 2) × (*N* − 2) invertible matrix I. I = ⟨Z2*i* − 1, 2*i**χ*, Z2*j*, 2*j* + 1*χ*ʹ⟩ = *J* ⊗ J0 $$J\_Y=\left(\begin{array}{\*{20}c}0&1\\1&-1\end{array}\right),\quad J\_R=\left(\begin{array}{\*{20}c}1&-1\\-1&0\end{array}\right),\quad J\_B=\left(\begin{array}{\*{20}c}-1&0\\0&1\end{array}\right)\label{Z2JYRB}$$ $$\mathbb{J}\_0=\left(\begin{array}{\*{20}c}1&0&0&\ldots&0\\-1&1&0&\ldots&0\\0&-1&1&\ldots&0\\\vdots&\vdots&\vdots&\ddots&\vdots\\0&0&0&\ldots&1\\\end{array}\right)\label{Z2J0}$$ where the (*N* − 2)/2 rows and columns of J0 in run over the maximal commuting sets of Wilson loops {Z2*i* − 1, 2*i*}*i* = 1, …, *N*/2 − 1 and {Z2*j*, 2*j* + 1}*i* = 1, …, *N*/2 − 1 respectively, the 2 × 2 entries *J* = *J**Y*, *J**R*, *J**B* are the corresponding matrices for the three types of twofold defects [1/2]*Y*, [1/2]*R*, [1/2]*B* that act on *χ*-degree of freedom of Z*χ*, for *χ* = *Y*, *R*. Contrary to threefold defects, the Wilson algebra specified by the intersection form preserves transposition but not symmetric under color permutation. It is evident from table [tab:twistdefects] and figure [fig:anyontwist] that twofold defects change type upon color permutation Λ3 : [1/2]*χ* → [1/2]*χ* + 1 where *χ* = 0, 1, 2 mod 3 index the colors *Y*, *R*, *B*, and the intersection matrix is Λ3-*covariant* Λ3*T* ⋅ I[1/2]*χ* ⋅ Λ3 = I[1/2]*χ* − 1 where $\Lambda\_3=\left(\begin{array}{\*{20}c}0&-1\\1&-1\end{array}\right)$ acts on the color degree of freedom *χ* = *Y*, *R*. Given an arbitrary choice of branch cut that specified the sublattice type  • ,  ∘  of all vertices, one can write down a particular ground state (up to a normalization constant) ∣0⟩ ∝ ∏*i*[∑*r* = 0*k* − 1*w*− *r**p**i**D̂**i**r*]∏*P*[∑*r* = 0*k* − 1*P̂**r*]∣*σ**v*• = *τ**v*∘ = 1⟩ where *D̂**i* = *P̂*5, 7 are the pentagon or heptagon plaquette operators in eq. at the *i**t**h* defect, and *P̂* = *P̂*•, *P̂*∘ runs over all other even sided plaquettes. Here *p**i* are integer (half-integer) mod *k* for *k* odd (even) that specify the ground state eigenvalues *D̂**i*∣0⟩ = *w**p**i*∣0⟩ determined by local defect Hamiltonian. All ground states can be generated by Wilson operators $$|{\bf m}\rangle=\left(\mathcal{Z}\_{even}\right)^{\bf m}|0\rangle$$ $$\begin{aligned} \left(\mathcal{Z}\_{even}\right)^{\bf m}&=\prod\_{j=1}^{N/2-1}\left(\mathcal{Z}\_{2j,2j+1}^Y\right)^{m\_{2j-1}}\left(\mathcal{Z}\_{2j,2j+1}^R\right)^{m\_{2j}}\\\left(\mathcal{Z}\_{odd}\right)^{\bf m}&=\prod\_{j=1}^{N/2-1}\left(\mathcal{Z}\_{2j-1,2j}^Y\right)^{m\_{2j-1}}\left(\mathcal{Z}\_{2j-1,2j}^R\right)^{m\_{2j}}\end{aligned}$$ where ${\bf m}=(m\_1,\ldots,m\_{N-2})$ are the multi-exponents for Wilson loops. Assuming the branch cuts are chosen to avoid cutting across the odd Wilson loops Z2*j* − 1, 2*j* such as those in figure [fig:Z2intersection], the odd Wilson operators fix the particular ground state $\left(\mathcal{Z}\_{odd}\right)^{\bf m}|0\rangle=|0\rangle$. The matrix elements of observables are given by $$\begin{aligned} \langle{\bf m}'|\left(\mathcal{Z}\_{odd}\right)^{\bf n}|{\bf m}\rangle&=e^{i\frac{2\pi}{k}{\bf n}^T\mathbb{I}{\bf m}}\delta\_{{\bf m}',{\bf m}}\label{Z2Zoddeigenvalues}\\\langle{\bf m}'|\left(\mathcal{Z}\_{even}\right)^{\bf n}|{\bf m}\rangle&=\delta\_{{\bf m}',{\bf m}+{\bf n}}\end{aligned}$$ Since I is invertible, ground states $|{\bf m}\rangle$ are completely distinguished by its eigenvalues with respect to Z*o**d**d* and therefore form an orthonormal basis for the ground state Hilbert space. The ground state degeneracy (G.S.D.) is given by *G*.*S*.*D*. = *k**N* − 2 and matches the quantum dimension *d*[1/2] = *k* predicted in by counting lattice degree of freedom. [fig:Z2fusion] It would be more convenient for later considerations in fusion and braiding if the ground states are labeled by the abelian anyon fusion channels $[{\bf a}\_j]=[{\bf a}\_\bullet^j,{\bf a}\_\circ^j]$ of the *j**t**h* pair of twofold defects (see figure [fig:Z2fusion]), where ${\bf a}\_\bullet^j=y\_1^j{\bf e}\_Y+r\_1^j{\bf e}\_R$ and ${\bf a}\_\circ^j=y\_2^j{\bf f}^Y+r\_2^j{\bf f}^R$ are discrete vectors in the abelian anyon lattice (figure [fig:abeliananyonlattice]). The  • -part of the anyon channels ${\bf a}\_\bullet=(y\_1^1,r\_1^1,\ldots,y\_1^{N/2-1},r\_1^{N/2-1})$ can be read off from for the ground state $|{\bf m}\rangle$. $${\bf a}\_\bullet=-i\sigma\_yJ\otimes\mathbb{J}\_0{\bf m}$$ Eq. ensures a constraint on the  ∘ -part of the abelian fusion channels ${\bf a}\_\circ=(y\_2^1,r\_2^1,\ldots,y\_2^{N/2-1},r\_2^{N/2-1})$ so that $${\bf a}\_\circ=\mathbb{J}\_0{\bf m}+{\boldsymbol\theta}\_\circ$$ where ${\boldsymbol\theta}\_\circ=(\boldsymbol\theta\_1,\ldots,\boldsymbol\theta\_{N/2-1})$, $\boldsymbol\theta\_j=(\theta\_j^Y,\theta\_j^R)$, depends only on defect species ${\bf l}\_i=(l^Y\_i,l^R\_i)$, for Θ*i**χ* = *w**l**i**χ*. $$\boldsymbol\theta\_j={\bf f}-J^{-1}({\bf l}\_{2j-1}+{\bf l}\_{2j})$$ where ${\bf f}={\bf f}^Y,{\bf f}^R,{\bf f}^B=(1,0),(0,1),(-1,-1)$ are basis vectors in the  ∘ -anyon lattice in figure [fig:abeliananyonlattice] and *J* = *J**Y*, *J**R*, *J**B* are matrices in determined by the color of twofold defects [1/2]*Y*, [1/2]*R*, [1/2]*B*. Under the fusion channel basis, the matrix elements for the Wilson operators are $$\begin{aligned} \langle{\bf a}'|\left(\mathcal{Z}\_{odd}\right)^{\bf n}|{\bf a}\rangle&=e^{i\frac{2\pi}{k}{\bf n}^T i\sigma\_y{\bf a}\_\bullet}\delta\_{{\bf m}',{\bf m}}\label{Z2groundstatea}\\\langle{\bf a}'|\left(\mathcal{Z}\_{even}\right)^{\bf n}|{\bf a}\rangle&=\delta\_{{\bf a}'\_\bullet,{\bf a}\_\bullet-i\sigma\_y\mathbb{I}{\bf n}}\end{aligned}$$ where the abelian fusion channels ${\bf a}=({\bf a}\_\bullet,{\bf a}\_\circ)$ are constrained by $${\bf a}\_\circ=J^{-1}i\sigma\_y{\bf a}\_\bullet+\boldsymbol\theta\_\circ\label{Z2fusionconstraint}$$ Notice that the vacuum fusion channel may not be allowed by. This is because the (2*j* − 1)*t**h* defect is not the anti-partner of the (2*j*)*t**h* in general unless the right hand side of eq. or ${\boldsymbol\theta}\_\circ$ is trivial, i.e. the species labels satisfy $${\bf l}\_{2j-1}+{\bf l}\_{2j}=J{\bf f}$$ We end this subsection with a cautionary remark on the phase variables of local defect Hamiltonians and in a closed system. Similar to Z2-fermion parity in a closed electronic system, there is a Z3 ⊕ Z3-*anyon parity* in a closed system of threefold defects for *k* divisible by 3 and a Z*k* ⊕ Z*k*-*anyon parity* in a closed system of twofold defects. They are good quantum numbers that imply global restrictions. The closeness condition requires the species labels *s**i* = (*s**i*•, *s**i*∘) of the threefold twist defects to satisfy $$\prod\_{i=1}^N\Sigma^{\bullet/\circ}\_i=e^{i\frac{2\pi}{3}\sum\_{i=1}^Ns\_i^{\bullet/\circ}}=1\label{Z3speciescloseness}$$ for *k* divisible by 3. While for twofold defects, similarly requires the species labels ${\bf l}\_i=(l^Y\_i,l^R\_i)$ to obey $$\sum\_{i=1}^N{\bf l}\_i=\frac{N}{2}J{\bf f}\label{Z2speciescloseness}$$ where ${\bf f}={\bf f}^Y,{\bf f}^R,{\bf f}^B$ depending on the color type of [1/2]*Y*, [1/2]*R*, [1/2]*B*. The species ${\bf s}\_i$ and ${\bf l}\_i$ are however completely determined by phase variables *ϕ**i* of *local* defect Hamiltonians and in the lattice level, and there are no reasons for the local phases to be restricted by the global closeness conditions or. When the local defect phase variables *ϕ**i* are incompatible with or, there is a topological obstruction in obtaining the absolute lowest energy state. For instance the state in eq. would be identically zero, ∣0⟩ = 0. The system would be forced to have local excitations where energy is not locally minimized. There will be two phases depending on the relative magnitude between *J*\*, the energy scale of defect Hamiltonians and, and *J*, the underlying energy scale of the original model. For *J*\* <  < *J*, excitations would be bounded and localized at defect centers, and eq., would be effectively restored. And therefore the Wilson algebras and ground state degeneracies described before would still persist. If *J*\* >  > *J*, there will be delocalized abelian anyon excitations causing an infinite number of ground states. We consider only the former scenario. Word presentation of Wilson algebra ----------------------------------- [fig:multidefects] We describe a presentation of the group of Wilson algebra in a system with multi-type twist defects. A defect type is defined by its *S*3-twisting action on abelian anyon labels (see figure [fig:anyontwist]) when an abelian anyon is dragged along a path around the defect. Examples of these paths are shown by the viewing curves *λ̂**i* in figure [fig:multidefects], each encloses one and only one defect. An abelian anyon $[{\bf a}]=(Y\_\bullet)^{y\_1}(R\_\bullet)^{r\_1}(Y\_\circ)^{y\_2}(R\_\circ)^{r\_2}$ will be twisted into $\Lambda\_i\cdot[{\bf a}]$ according to and if it moves along *λ̂**i*, where Λ*i* is an element in *S*3 that distinguishes the type of the *i**t**h* defect by [1/3], $[\overline{1/3}]$ or [1/2]*χ*. The defect type or Λ*i* may depend on the viewing curve *λ̂**i* for a multi-type defect system that contains twofold defects. Defects can be arranged and ordered in series as shown in figure [fig:multidefects]. A re-ordering of viewing curves will in general alter the defect type. Upon switching the viewing order of the *i**t**h* and (*i* + 1)*t**h* defects, their *S*3-classifications change according to conjugation (Λ*i*, Λ*i* + 1) ↦ (Λʹ*i*, Λʹ*i* + 1) = (Λ*i*− 1Λ*i* + 1Λ*i*, Λ*i*) where this is demonstrated by redefining the viewing curves for the 6*t**h* and 7*t**h* defects in figure [fig:multidefects]. This can be understood in the microscopic lattice level by observing that the  • ,  ∘ -sublattice type of a threefold defect center or the *Y**R**B*-plaquette color of a twofold defect center depends on the viewing path in the presence of other defects. Twist defects are therefore coarsely classified by the conjugacy classes of *S*3, which include the class of threefold defects $[1/3],[\overline{1/3}]$ and the class of twofold defects [1/2]*χ*. However we do not consider these conjugacy classes as superselection sectors as the *S*3-symmetry is not gauged, unlike in Ref . We denote the *alphabet* $[\hat{\lambda}\_i]\_{\bf a}$ to be an open Wilson string of dragging an abelian anyon ${\bf a}=[{\bf a}\_\bullet,{\bf a}\_\circ]$ along the viewing curve *λ̂**i* starting from the base point (grey square in figure [fig:multidefects]). The ordering of ${\bf a}\_\bullet$ and ${\bf a}\_\circ$ will only affect the Wilson operator by an over phase due to self-intersection, and if necessary, we will adapt the convention that the ${\bf a}\_\bullet$-string acts before and sits below the ${\bf a}\_\circ$ one. The Wilson algebra of the defect system consists of closed Wilson loops presented in series of the alphabets $$\begin{aligned} W(\mathcal{C})=\left[\hat{\lambda}\_{i\_m}\ldots\hat{\lambda}\_{i\_2}\hat{\lambda}\_{i\_1}\right]\_{\bf a}=\prod\_{r=1}^m\left[\hat{\lambda}\_{i\_r}\right]\_{\prod\_{l=1}^{r-1}\Lambda\_{i\_l}\cdot{\bf a}}\label{alphabetpresentation}\end{aligned}$$ where C is the loop of the ordered composition *λ̂**i**m* \* … \* *λ̂**i*1 and the Wilson string begins at the fixed base point as the abelian anyon ${\bf a}$, which moves and transforms along C. In order for *W*(C) to be closed, the abelian anyon has to close back onto itself and the *S*3-transformations have to satisfy the closeness relation $$\left(\Lambda\_{i\_m}\ldots\Lambda\_{i\_2}\Lambda\_{i\_1}\right)\cdot{\bf a}={\bf a}\label{ATcloseness}$$ The alphabets inside the bracket in the middle of has a fixed ordering. Interchanging the order of the alphabets in general will give an entirely different Wilson string that might not even be closed. The product order on the far right of eq. however will only contribute an overall Z*k* phase to *W*(C), and will not affect its intersection with other Wilson operator. Abelian anyon fusion imply the simplification $$\left[\hat{\lambda}\_i\right]\_{\bf a}\left[\hat{\lambda}\_i\right]\_{\bf b}=\left[\hat{\lambda}\_i\right]\_{{\bf a}+{\bf b}}$$ For instances, the Z*k*-torsion $k{\bf a}=0$ implies *λ̂**i**k* × \scriptsize ord = 1 for ord = 2, 3 is the order of twofold and threefold defects; the tricolor redundancy requires *λ̂**i*3 = 1 if the *i**t**h* defect is a threefold defect, and *λ̂**i**λ̂**j**λ̂**i**λ̂**j**λ̂**i**λ̂**j* = 1 if the *i**t**h* and *j**t**h* defects are twofold defects of different colors. If the system of *N* defects is closed on a compact surface without boundary, we have *λ̂**N*…*λ̂*2*λ̂*1 = 1 Ignoring the overall Z*k*-phase, are of the form $$W[\{{\bf a}\_i\}]=\prod\_{i=1}^N\left[\hat{\lambda}\_i\right]\_{{\bf a}\_i},\quad\mbox{for}\quad\sum\_{i=1}^N{\bf a}\_i=\sum\_{i=1}^N\Lambda\_i{\bf a}\_i$$ The alphabetic presentation of the Wilson loops considered in the previous subsections are summarized in table [tab:prototypesalphabets]. [ht] cl Wilson loop prototypes & Word presentation A*i*, *i* + 1*χ*,  • & $\left[\hat{\lambda}\_{i+1}\right]\_{-{\bf e}\_\chi}\left[\hat{\lambda}\_i\right]\_{{\bf e}\_\chi}$ A*i*, *i* + 1*χ*,  ∘ & $\left[\hat{\lambda}\_{i+1}\right]\_{-{\bf f}^\chi}\left[\hat{\lambda}\_i\right]\_{{\bf f}^\chi}$ Σ*i*• / ∘ & $\left[\hat{\lambda}\_i\right]\_{\overline{\boldsymbol\kappa}\_{\bullet/\circ}}$ Z*i*, *i* + 1*χ* & $\left[\hat{\lambda}\_{i+1}\hat{\lambda}\_i\right]\_{{\bf e}\_\chi}$ Θ*i**χ* & $\left[\hat{\lambda}\_i\hat{\lambda}\_i\right]\_{{\bf e}\_\chi}$ [tab:prototypesalphabets] By a slight deformation, one can assume all intersections between two Wilson operators are at the fixed base point. The algebraic relation of Wilson operators are given by $$W[\{{\bf a}\_i\}]W[\{{\bf b}\_i\}]=e^{i\frac{2\pi}{k}\langle\{{\bf a}\_i\},\{{\bf b}\_i\}\rangle}W[\{{\bf b}\_i\}]W[\{{\bf a}\_i\}]\label{multidefectalgebra}$$ where the intersection form I = ⟨ \* ,  \* ⟩ is $$\langle\{{\bf a}\_i\},\{{\bf b}\_i\}\rangle=\sum\_{i=1}^N\left({\bf a}\_i-\Lambda\_i{\bf a}\_i\right)\cdot\left[\sum\_{j=1}^i{\bf b}\_j-\sum\_{j=1}^{i-1}\Lambda\_j{\bf b}\_j\right]\label{multidefectintersection}$$ for ${\bf a}\_i=[{\bf a}^i\_\bullet,{\bf a}^i\_\circ]=[y^i\_1{\bf e}\_Y+r^i\_1{\bf e}\_R,y^i\_2{\bf f}^Y+r^i\_2{\bf f}^R]$ and ${\bf b}\_i=[{\bf b}^i\_\bullet,{\bf b}^i\_\circ]=[y'^i\_1{\bf e}\_Y+r'^i\_1{\bf e}\_R,y'^i\_2{\bf f}^Y+r'^i\_2{\bf f}^R]$, where the anti-symmetric dot product is defined by $$\begin{aligned} {\bf a}\_i\cdot{\bf b}\_j&=({\bf a}\_\circ^j)^Ti\sigma\_y{\bf b}\_\bullet^i-({\bf b}\_\circ^j)^Ti\sigma\_y{\bf a}\_\bullet^i\nonumber\\&=y^i\_2r'^j\_1-r^i\_1y'^j\_2-r^i\_2y'^j\_1+y^i\_1r'^j\_2\label{anyondotproduct}\end{aligned}$$ Using the prototype basis identified in table [tab:prototypesalphabets], the intersection form reproduces the intersection matrices for threefold defects and for twofold defects. As a result of self-intersections such as those in Θ*i**χ*, in general $W[\{{\bf a}\_i\}]$ need not be a *k**t**h* root of unity. We notice the dot product and subsequently the intersection form and Wilson algebra are invariant under global symmetry transformation $$\begin{aligned} ({\bf a}\_i,{\bf b}\_i)&\to({\bf a}'\_i,{\bf b}'\_i)=\left(\Lambda\cdot{\bf a}\_i,\Lambda\cdot{\bf b}\_i\right)\\\Lambda\_i&\to\Lambda'\_i=\Lambda\Lambda\_i\Lambda^{-1}\label{defectdualitytransformation}\end{aligned}$$ for any Λ in *S*3. The color and sublattice transformations of abelian anyon have already been described in and in the previous section. The symmetry transformation for defects are summarized by the following. $$\begin{aligned} \Lambda\_3:&\left(\left[\frac{1}{3}\right],\left[\overline{\frac{1}{3}}\right],\left[\frac{1}{2}\right]\_\chi\right)\to\left(\left[\frac{1}{3}\right],\left[\overline{\frac{1}{3}}\right],\left[\frac{1}{2}\right]\_{\chi+1}\right)\label{defectST}\\\Lambda\_{B}:&\left(\left[\frac{1}{3}\right],\left[\overline{\frac{1}{3}}\right],\left[\frac{1}{2}\right]\_\chi\right)\to\left(\left[\overline{\frac{1}{3}}\right],\left[\frac{1}{3}\right],\left[\frac{1}{2}\right]\_{\Lambda\_{B}\cdot\chi}\right)\label{defectS}\end{aligned}$$ where *χ* = *Y*, *R*, *B* are the color types of twofold defects, and Λ3 and Λ*B* are cyclic color permutation and transposition respectively described in and. For *k* divisible by 3, the threefold defect species labels ${\bf s}=(s\_\bullet,s\_\circ)\in\mathbb{Z}\_3\oplus\mathbb{Z}\_3$ transform according to $$\Lambda\_3:{\bf s}\to{\bf s}'={\bf s},\quad\Lambda\_B:{\bf s}\to{\bf s}'=-\sigma\_x{\bf s}\label{speciestrans1}$$ The species labels for twofold defects ${\bf l}=(l\_Y,l\_R)$ transform according to $$\begin{aligned} \Lambda\_3:{\bf l}\to{\bf l}'&=\left(\Lambda\_3^{-1}\right)^T{\bf l}\label{speciestrans2}\\\Lambda\_B:{\bf l}\to{\bf l}'&=\left\{\begin{array}{\*{20}c}\left(\Lambda\_3^{-1}\right)^T{\bf l},&\mbox{for $[1/2]\_Y$}\\\Lambda\_3^T{\bf l}\hfill,&\mbox{for $[1/2]\_R$}\\{\bf l}\hfill,&\mbox{for $[1/2]\_B$}\end{array}\right.\label{speciestrans3}\end{aligned}$$ where $\Lambda\_3=\left(\begin{array}{\*{20}c}0&-1\\1&-1\end{array}\right)$ represents cyclic color permutation. As a result of and, twist defects do not only transform the labels of an orbiting abelian anyon but also the type of an orbiting twist defect. Non-Commutative Fusion ====================== We describe the quantum bases and transformations for twist defects. Our system consists of deconfined abelian anyon excitations of the Z*k*-gauge theory as well as semiclassical twist defects that locally violate the *S*3-symmetry. Abelian anyons form the basis for all quantum states and measurements. There is a correspondence between anyon operators, non-local Jordan-Wigner strings that leave local excitations at their ends, and excitation states formed when Jordan-Wigner operators act on a ground state. The anyon charge of an excited state can be locally measured by plaquette stabilizers or closed Wilson strings accumulated by dragging a conjugate anyon around it. None of the above holds for defects. Twist defects are not excitations of a Hamiltonian describing a topological gauge theory. There are no ``defect operators" that correspond to quantum states by their action on the vacuum. Instead, states are generated by non-contractible closed strings of anyon trajectories around defects. If one switches the type of a defect even within the same conjugacy class, for example $[1/3]\leftrightarrow[\overline{1/3}]$ or [1/2]*Y* ↔ [1/2]*R*, a closed anyon trajectory may become open by violating eq.. Hence in the semiclassical description, one cannot take superposition between states for different defect configurations — the probability amplitude for a defect to be a particular *S*3-element is either 0 or 1. Moreover, defects cannot be used as a tool for measurement. No quantum states are observed by expectation values of unitary operations involving moving a twist defect in a cycle. Therefore, anyons and twist defects are fundamentally different. The distinction stems from the fact that, unlike the underlying Z*k*-gauge symmetry, the tricolor and bipartite *S*3-symmetry is a classical non-dynamical physical symmetry and is *weakly* broken  by anyon labels. We assume there is only a finite (in particular non-dense) population of twist defects so that the system admits an almost global tricoloration and bipartite structure except along finite length branch cuts (a set with measure zero), and hence a *gauging* of *S*3-symmetry is unnecessary. Analogous physical examples include any defect heterostructures between superconductors, (anti)-ferromagnets and (fractional) topological insulators  or strong spin-orbit coupled semiconductors , where the pairing phase, magnetic spin order, band inversion mass gap and fermi energy, are all treated as non-dynamical variables. For vortices in chiral *p* + *i**p* superconductors  and across the topological insulator to superconductor interface , the pairing phase vortex and branch cuts in the fermion sign, are treated as classical objects. In the study of crystalline dislocations and disclinations in topological insulators and superconductors , the underlying lattice is also regarded as stationary. In practice, although an abelian
arxiv_0000362
the harvested energy be stored in the battery before it can be used for transmission. An alternative energy storage model, considered in, is to allow the harvested energy *E**t* to be used instantaneously at time *t* and the remaining amount be stored in the battery. The capacities under the two energy storage models are different in general.[↩](#fnref3) 4. The continuous-time version of this problem has been considered in many works in the literature where it is more commonly referred to as the power control problem and the function *r*(*g*) is called the power-rate or the power-utility function in this case. Note that in our discrete-time setup power corresponds to energy per channel use, and therefore the terms energy (per channel use) and power can be used interchangeably. We prefer to refer to the problem as the energy allocation problem.[↩](#fnref4) 5. The strategy discussed in is a slight variation of the *Uniform Policy* in that the amount of energy utilized in each channel use is  ± *ε* of the average energy arrival rate. *ε* decreases to zero as *B**m**a**x* → ∞ and the decay of *ε* controls the battery discharge probability and the convergence speed to the optimal average rate as *B**m**a**x* → ∞. However, in its essence the strategy is a *Uniform Policy*, so its behavior is captured by the *Uniform Policy* plots.[↩](#fnref5) 6. Indeed, this fact can be shown analytically; for example taking *B**m**a**x* = *E*, one can show that the gap between the upper bound and the rate achieved by the *Uniform Policy* increases to infinity as *B**m**a**x* = *E* → ∞. However, we do not provide a proof of this fact since the trend is already quite obvious from the graphs.[↩](#fnref6) 7. This proof was pointed out to the authors by one of the anonymous reviewers.[↩](#fnref7) 8. Theorem 2 of is stated for multiple access finite state channel. We use a simplified version of it for single-user channels, combined with part 2 of Corollary 1 in the same paper which shows that when the initial state is known to the transmitter, the capacity region is independent of the initial pmf.[↩](#fnref8) Near Optimal Energy Control and Approximate Capacity of Energy Harvesting Communication ======================================================================================= We consider an energy-harvesting communication system where a transmitter powered by an exogenous energy arrival process and equipped with a finite battery of size *B**m**a**x* communicates over a discrete-time AWGN channel. We first concentrate on a simple Bernoulli energy arrival process where at each time step, either an energy packet of size *E* is harvested with probability *p*, or no energy is harvested at all, independent of the other time steps. We provide a near optimal energy control policy and a simple approximation to the information-theoretic capacity of this channel. Our approximations for both problems are universal in all the system parameters involved (*p*, *E* and *B**m**a**x*), i.e. we bound the approximation gaps by a constant independent of the parameter values. Our results suggest that a battery size *B**m**a**x* ≥ *E* is (approximately) sufficient to extract the infinite battery capacity of this channel. We then extend our results to general i.i.d. energy arrival processes. Our approximate capacity characterizations provide important insights for the optimal design of energy harvesting communication systems in the regime where both the battery size and the average energy arrival rate are large. Energy Harvesting Channel, Information-Theoretic Capacity, Online Power Control, Constant Gap Approximation, Receiver Side Information. Introduction ============ In many future wireless networks, we may encounter nodes (such as sensor nodes) that harvest the energy they need for communication from the natural resources in their environment. The simplest model that captures this communication scenario is the discrete-time AWGN channel depicted in Fig. [fig: model], where the transmitter is powered by an exogenous stochastic energy arrival process *E**t* stored in a battery of size *B**m**a**x*. The energy of the transmitted symbol at each channel use is limited by the available energy in the battery, which unlike in traditionally powered communication systems is a random quantity due to the randomness in the energy harvesting process *E**t* and moreover depends on the energy expenditure in the previous channel uses. Understanding the capacity of such newly emerging communication systems and the optimal principles to design and operate them has received significant attention over the recent years. In the limiting case *B**m**a**x* = ∞, the capacity of the energy harvesting communication system in Fig. [fig: model] has been characterized by Ozel and Ulukus in. Their result shows that in this asymptotic case the capacity of the energy-harvesting system is equal to the capacity of a classical AWGN channel with an average power constraint equal to the average energy harvesting rate $\E[E\_t]$. Perhaps even more importantly, the result of offers a number of important engineering insights for the large battery limit: first it shows that, via simple modifications, the standard communication and coding techniques developed for the classical AWGN channel can be used to achieve the capacity of an energy-harvesting AWGN channel; second, it shows that, in this asymptotic case, the only relevant property of the energy-harvesting process in determining capacity is the average energy harvesting rate. Two energy harvesting mechanisms are equivalent as long as they have the same average energy arrival rate. [fig: model] Can we obtain analogous engineering insights for the more realistic case of finite battery? For example, how does the capacity of the energy-harvesting AWGN channel in Fig. [fig: model] depend on major system parameters such as *B**m**a**x* and *E**t*? Are there different operating regimes where this dependence is qualitatively different? Given a communication system powered with a certain energy harvesting mechanism *E**t*, how can we optimally choose the battery size *B**m**a**x*? What are the properties of the energy harvesting process *E**t* that are critical to capacity in the finite battery regime? Consequently, what are more desirable and less desirable energy-harvesting profiles? These are foremost engineering questions, the answers of which can guide the design of optimal communication architectures for such systems. Despite significant recent effort to characterize the capacity of the energy-harvesting channel in Fig. [fig: model] in the finite battery case, we currently lack an understanding of the above questions. For example, provides a formulation of the capacity in terms of the Verdu-Han general framework and based on a conjecture on the properties of the optimal energy management strategy derives a lower bound to the capacity which is numerically computable for a given setup. However, it is difficult to obtain the above high-level insights from the numerical evaluations. Indeed, even in the case of zero battery, *B**m**a**x* = 0, where provides an exact single letter characterization of the capacity as an optimization problem over the so called Shannon-strategies, the resultant optimization is difficult to solve and requires numerical evaluations, therefore providing limited high-level insights. Overview of Our Results ----------------------- In this paper, we take an alternative approach. Instead of seeking the exact capacity, we seek to provide a simple approximation to the capacity (with bounded guarantee on the approximation gap) which can provide insights on the above engineering questions. As a starting point, we concentrate on an i.i.d. Bernoulli energy arrival process, i.e. *E**t* = *E* with probability *p* and zero otherwise, and is independent across different channel uses. We show that in this case the capacity of the energy harvesting AWGN channel in Fig. [fig: model] is approximately given by $$\label{eq:approxcap} C\approx\left\{ \begin{array}{ll} \frac{1}{2}\log(1+pB\_{max}) & \mbox{when $B\_{max} \leq E$} \\ \frac{1}{2}\log(1+pE) & \mbox{when $B\_{max} > E$}. \end{array} \right.$$ The approximation gap is bounded by 2.58 bits for all values of the system parameters *p*, *E*, and *B**m**a**x*. See Fig. [fig: CapBound]. The capacity approximation in provides couple of important insights: First, it identifies the dependence of the capacity to major system parameters. There are two regimes where this dependence is qualitatively different: in the large battery regime (*B**m**a**x* > *E*), the capacity is mainly determined by the average energy arrival rate and is (almost) independent of the exact value of the battery size; on the other hand, in the small battery regime (*B**m**a**x* < *E*), the capacity depends critically on the battery size and is increasing logarithmically with increasing *B**m**a**x*. The formula also suggests that choosing *B**m**a**x* ≈ *E* is sufficient to extract most of the capacity of the system (achieved at *B**m**a**x* = ∞). One can also observe that while in the large battery regime the only property of the energy harvesting profile that impacts capacity is the average rate, in the small battery regime energy profiles that are less peaky over time lead to larger capacity. See Figure [fig: CapBound2] which compares two energy profiles with the same average rate. The main ingredient of the above result is a near optimal online strategy we develop for energy/power control over the Bernoulli energy harvesting channel. The problem of optimal energy/power allocation over an energy harvesting channel has been extensively studied in the literature. While the offline version is well-understood, the online version of the problem is known to be difficult and several works suggest simple online power control policies with and without performance guarantees. For example, shows that a simple strategy that allocates constant energy, equal to the average energy arrival rate, to each channel use as long the battery is not empty becomes asymptotically optimal as *B**m**a**x* → ∞. However as we show in Section [sec: simulations], this strategy can be arbitrarily away from optimality for finite values of *B**m**a**x*. In contrast, we show analytically that the online energy allocation policy we propose remains within 0.973 bits of the optimal value for all values of the system parameters. This result can be of interest in its own right. In the final section of the paper, we extend our approximation results to general i.i.d. energy arrival processes where each *E**t* is i.i.d. according to some arbitrary distribution. We show that a simple modification allows to apply the near optimal energy allocation policy and the information-theoretic coding strategy we develop in the earlier sections for the Bernoulli process to general i.i.d. energy arrival processes. We show that for many distributions of *E**t*, this yields an approximate characterization of the capacity within a constant gap, though with a larger constant. However, we also illustrate that one can engineer specific distributions for which our approach would fail to provide a constant gap approximation. The constant gap approach we propose in this paper is most useful to understand the capacity of energy harvesting communication channels operating at moderate to high SNRs. This, for example, can be the operating regime of a base station in a rural area powered by renewable energy sources (ex. wind or solar). However, the insights obtained from such an analysis can be applicable even at low SNR. Note that the 2.58 bits/s/Hz is an analytical upper bound on the worst case gap over all SNR regimes and the actual gap between the rate achieved by the strategies we propose and the true capacity of the system can be much smaller especially at low SNR. More generally, we believe the approximation philosophy we propose in this paper will be useful in providing a basis for comparing the performances of different strategies, developing insights into the capacity of the system and giving a sense of the remaining gap in characterizing the problem, when obtaining an exact *insightful* capacity formula proves to be difficult. [fig: CapBound] [fig: CapBound2] System Model ============ We consider an AWGN communication system powered by an energy harvesting mechanism with limited battery (See Fig. [fig: model]). At each time step *t*, *E**t* amount of energy is harvested from the exogenous energy source and is stored in the battery of size *B**m**a**x*. In the case the harvested energy *E**t* exceeds the available space in the battery at time *t*, the battery is charged to maximum capacity and the remaining energy is discarded. Let *X**t* denote the scalar real input to the channel at time *t*, *Y**t* the output of the channel and *N**t* the additive noise with unit normal distribution N(0, 1). We have *Y**t* = *X**t* + *N**t*. Let *B**t* represent the amount of energy available in the battery for transmission at time step *t*. Then the transmitted signal *X**t* is amplitude constrained by the available energy in the battery *B**t*. The system energy constraints can be summarized as follows: $$\begin{aligned} |X\_t|^2 & \le B\_t \label{eq: ampConst}\\ B\_{t+1} & = \min(B\_t + E\_{t+1} - |X\_t|^2, B\_{max}) \label{eq: batteryUpdate}\end{aligned}$$ Equation represents the amplitude constraint on the input, and represents the update rule for the energy available in the battery. [3](#fn3) The harvested energy at each time step *E**t* is a discrete-time stochastic process dictated by the energy harvesting mechanism. We assume that the energy arrival process is causally known at the transmitter (i.e. at time *t* the transmitter knows {*E**t*, *E**t* − 1, …}), but not at the receiver. In the sequel, we first focus on a simple Bernoulli process (Sections [sec: OnlinePolicy] and [sec: InfoCap]), where *E**t*’s are i.i.d. Bernoulli random variables: $$\begin{aligned} \label{eq: Et} E\_t = \left\{ \begin{array}{rl} E & \mbox{w.p. $p$} \\ 0 & \mbox{w.p. $1-p$}, \end{array} \right.\end{aligned}$$ so at each time step, either an energy packet of size *E* is harvested, or no energy is harvested at all, independent of the other time steps. We then extend our results to more general i.i.d. energy harvesting processes in Section [sec: GeneralEP], where we assume that *E**t* are i.i.d. with common cdf *F*(*x*). The information-theoretic capacity of the channel is defined in the usual way as the largest rate at which the transmitter can reliably communicate to the receiver under the system energy constraints in and and the assumption that the realizations of *E**t* are known only to the transmitter in a causal fashion. While the main focus of our paper is the information-theoretic capacity of this channel, in Section [sec: OnlinePolicy] we also study a related problem, optimal (online) energy allocation for maximizing the average rate (or utility) of an energy-harvesting system. This problem is defined more precisely in the corresponding section and forms the basis for the information-theoretic results we prove in Section [sec: InfoCap]. Main Result =========== The following theorem is the main result of the paper. [Main Result] [thm: MainResult] The capacity *C* of the channel described in Section [sec: SysModel] with Bernoulli energy arrival process *E**t* satisfies $$\begin{aligned} \frac{1}{2} \log \left(1 + p \cdot \min\{ B\_{max}, E\} \right)& - 2.58 \le \; C \\ \le \frac{1}{2} \log & \left( 1 + p \cdot \min\{B\_{max}, E \} \right).\end{aligned}$$ The bound in Theorem [thm: MainResult] is illustrated in Figure [fig: CapBound]. The result shows that the capacity of a finite battery energy harvesting system is within 2.58 bits of $\frac{1}{2} \log \left( 1 + p \cdot \min\{B\_{max}, E \} \right)$ for any choice of the system parameters *p*, *E* and *B**m**a**x*. We therefore refer to $\frac{1}{2} \log \left( 1 + p \cdot \min\{B\_{max}, E \} \right)$ as the approximate capacity of the channel. Figure [fig: CapBound2] compares the approximate capacity under two different Bernoulli energy harvesting profiles. The main ingredient of the above theorem is a near optimal online energy allocation policy we develop for the Bernoulli energy harvesting channel in Section [sec: OnlinePolicy]. We show that this strategy is within 0.973 bits of optimality for all values of the system parameters. We state the corresponding results in Theorems [thm: constgaprate] and [thm: constgaprate2], after we precisely define the online energy allocation problem in Section [sec: OnlinePolicy]. Finally, we discuss how our results from Sections [sec: OnlinePolicy] and [sec: InfoCap] for the Bernoulli process can be extended to more general i.i.d. energy harvesting profiles in Section [sec: GeneralEP]. A Near Optimal Online Policy for Energy Control =============================================== In this section, we study an online energy allocation problem for the energy harvesting communication system with finite battery described in Section [sec: SysModel]. The near optimal energy control policy we develop in this section turns out to be a critical ingredient of the approximate capacity characterization we develop in the next section. We consider the energy harvesting transmitter described in Section [sec: SysModel] and assume that we are specified an energy-rate (or energy-utility) function *r*(*g*) which specifies the rate or utility *r*(*g*(*t*)) obtained at a given channel use *t* as a function of the energy *g*(*t*) allocated for transmission at time *t*.[4](#fn4) We assume the energy arrival process is a Bernoulli process described by, and again only online, i.e. causal, information of the energy packet arrivals is available at the transmitter. An online policy *g*(*t*) denotes the amount of energy transmitter decides to allocate for transmission at time *t*. We call *g*(*t*) a feasible online policy, if *g*(*t*) satisfies: $$\begin{aligned} & 0 \le g(t) \le B\_t \label{eq: allocConst}\\ & B\_{t+1} = \min(B\_t + E\_{t+1} -g( t), B\_{max}) \label{eq: batteryUpdate2}\end{aligned}$$ Notice that constraints and are analogous to the system energy constraints and from Section [sec: SysModel], except ∣*X**t*∣2, the amount of energy used for transmission at time step *t*, from Section [sec: SysModel] is replaced with *g*(*t*) here. Moreover, the energy allocated at time *t* can only depend on the past realizations of the energy harvesting process, i.e. we have the causality constraint *g*(*t*) = *f*(*t*, {*E**i*}*i* = 0*t*). Since the energy arrival process *E**t* is a stochastic process, the quantity *g*(*t*) is random. Let G denote the class of online policies satisfying constraints, and. Then, our goal is to maximize the long term average rate over the class of feasible online policies: $$\begin{aligned} \label{eq: obj} \max\_{g \in \mathcal{G}} \liminf\_{N \to \infty} \mathbb{E}\left[ \frac{1}{N} \displaystyle\sum\limits\_{t = 1}^N r(g( t)) \right],\end{aligned}$$ where the expectation here is over the ensembles of {*E**t*}*t* = 0*N* and *r*( ⋅ ) is the energy-rate function of interest. Since our main motivation for studying this problem is to use the solution to approximate the capacity of the AWGN channel in the next section, we restrict our attention to the following classical AWGN rate function: $$\begin{aligned} \label{eq: AWGN\_rate} r(x) = \frac{1}{2} \log\left(1 + x \right)\end{aligned}$$ in bits per channel use. Therefore, the problem of interest here is: $$\begin{aligned} \label{eq: AWGN\_cap\_obj} \max\_{g \in \mathcal{G}} & \liminf\_{N \to \infty} \mathbb{E} \left[ \frac{1}{N} \displaystyle\sum\limits\_{t = 1}^N \frac{1}{2} \log\left( 1 + g( t)\right) \right]. \end{aligned}$$ The case *B**m**a**x* ≤ *E* --------------------------- We first analyze the case where *B**m**a**x* ≤ *E*. In this case, according to our system definition, every time a non-zero energy packet arrives, the battery is charged to full, and the left over energy is wasted. Since, the system is reset to the initial state of full battery each time a non-zero energy packet arrives, each epoch (the period of time instances between two adjacent non-zero packet arrivals) is independent and statistically indistinguishable from every other epoch. Motivated by this observation, we propose to use a strategy where the energy *g*(*t*) allocated to transmission at time *t* depends only on the number of channel uses since the last time the battery was recharged, i.e., *g*(*t*) = *g̃*(*j*) where *j* = *t* − max{*t*ʹ ≤ *t* : *E**t*ʹ = *E*}, for a function *g̃*(*j*) that satisfies $$\begin{aligned} \label{eq: constraint} \displaystyle\sum\limits\_{j=0}^\infty \tilde{g}(j) \le B\_{max} \;\;\;\; \text{and} \;\;\;\ \tilde{g}(j) \ge 0 \;\;\; \forall j. \end{aligned}$$ Note that an energy allocation policy that satisfies the above properties clearly satisfies the feasibility constraints and. Moreover, it uses only information about the past realizations of the process. We choose *g̃*(*j*) = *p*(1 − *p*)*j**B**m**a**x*, which clearly satisfies, and show in the following theorem that it achieves an objective value that’s no more than 0.973 bits away from the optimum value of the optimization problem given in for any value of *p* and *B**m**a**x*. [thm: constgaprate] Let *g*′(*t*) = *g̃*(*j*) where *j* = *t* − max{*t*ʹ ≤ *t* : *E**t*ʹ = *E*} and *g̃*(*j*) = *p*(1 − *p*)*j**B**m**a**x* for *j* = 0, 1, 2, .... When *B**m**a**x* ≤ *E*, we have the following guarantee: $$\begin{aligned} \liminf\_{N \to \infty}&\; \mathbb{E} \left[ \frac{1}{N} \displaystyle\sum\limits\_{t = 1}^N \frac{1}{2} \log\left( 1 + g^\prime(t)\right) \right] \notag \\ & \ge \max\_{g \in \mathcal{G}} \liminf\_{N \to \infty} \mathbb{E} \left[ \frac{1}{N} \displaystyle\sum\limits\_{t = 1}^N \frac{1}{2} \log\left( 1 + g( t)\right) \right] - 0.973. \label{eq: const\_gap\_rate}\end{aligned}$$ The detailed proof of the theorem is deferred to the Appendix. The proof follows roughly three major steps. * Using Jensen’s inequality, we first show that $$\begin{aligned} \max\_{g \in \mathcal{G}} \liminf\_{N \to \infty} \mathbb{E} & \left[ \frac{1}{N} \displaystyle\sum\limits\_{t = 1}^N \frac{1}{2} \log\left( 1 + g( t) \right) \right] \notag\\ & \leq \frac{1}{2}\log(1+pB\_{max})\label{eq: policy\_upper}\end{aligned}$$ * Using the fact that *g*′(*t*) is same across different epochs, we turn our achievable rate (left-hand-side of ) into the following expression as a function of *g̃*(*j*): $$\begin{aligned} \label{eq: ach} \displaystyle\sum\limits\_{j = 0}^{\infty}p &(1-p)^j \frac{1}{2} \log(1 + \tilde{g}(j)) \notag \\ & = \displaystyle\sum\limits\_{j = 0}^{\infty} p(1-p)^j \frac{1}{2} \log(1 + p(1-p)^j B\_{max}) \end{aligned}$$ * Finally, we upper bound the gap between the objective value achieved with *g*ʹ(*t*), i.e. Equation, and $\frac{1}{2}\log(1+pB\_{max})$ by a constant. [fig: EnergyAlloc] Fig. [fig: EnergyAlloc] illustrates the energy allocation policy in Theorem  [thm: constgaprate]. In this allocation policy we use *p* fraction of the remaining energy in the battery at each time (so the energy in the battery decays like *B**t* = (1 − *p*)*j**B**m**a**x*). The motivation for this energy allocation policy is the following: for the Bernoulli arrival process *E**t*, the inter-arrival time is a Geometric random variable with parameter *p*. We know that the Geometric random variable is memoryless and has mean 1/*p*. Therefore, at each time step, the expected number of time steps to the next energy arrival is 1/*p*. Furthermore, since log( ⋅ ) is a concave function, results from tell us that in order to achieve higher rate, we want to allocate the energy as uniform as possible between energy arrivals, i.e. if the current energy level in the battery is *B**t* and we knew that the next recharge of the battery would be in exactly *m* channel uses, we would allocate *B**t*/*m* energy to each of the next *m* channel uses. For the online case of interest here, we do not know when the next energy arrival would be. Instead, we use the expected time to the next energy arrival as a basis: since at each time step, the expected time to the next energy arrival is 1/*p*, we use a fraction *p* of the currently available energy. Some simple online policies with and without performance guarantees have been earlier proposed in. None of these strategies utilize the idea of exponential energy usage we propose here and can achieve the optimal rate within a constant gap uniformly over all parameter ranges. In Section [sec: simulations] below, we provide simulations which illustrate that these strategies can be arbitrarily away from optimality. However, before that we first address the remaining case of *B**m**a**x* > *E*. The case *B**m**a**x* > *E* --------------------------- Note that when *B**m**a**x* ≤ *E*, because the energy must be stored into the battery before it can be used, the extra energy is wasted, and the system *B**m**a**x* ≤ *E* is equivalent to a system where the energy packet size is exactly equal to *B**m**a**x*. Therefore, the average optimal rate we can achieve with online energy management strategies is independent of *E*, and Theorem [thm: constgaprate] characterizes this rate as approximately $\frac{1}{2}\log(1+pB\_{max})$. In the case *B**m**a**x* > *E*, we show that the average optimal rate is given by $\frac{1}{2}\log(1+pE)$ which can be achieved by a simple modification of the energy control policy proposed in the earlier section. We have the following theorem. [thm: constgaprate2] Let *g*′(*t*) = *g̃*(*j*) where *j* = *t* − max{*t*ʹ ≤ *t* : *E**t*ʹ = *E*} and *g̃*(*j*) = *p*(1 − *p*)*j**E* for *j* = 0, 1, 2, .... When *B**m**a**x* > *E*, we have the following guarantee: $$\begin{aligned} \liminf\_{N \to \infty}&\; \mathbb{E} \left[ \frac{1}{N} \displaystyle\sum\limits\_{t = 1}^N \frac{1}{2} \log\left( 1 + g^\prime(t)\right) \right] \notag \\ & \ge \max\_{g \in \mathcal{G}} \liminf\_{N \to \infty} \mathbb{E} \left[ \frac{1}{N} \displaystyle\sum\limits\_{t = 1}^N \frac{1}{2} \log\left( 1 + g( t)\right) \right] - 0.973. \label{eq: const\_gap\_rate\_2}\end{aligned}$$ Note that in the case *B**m**a**x* > *E*, each time an energy packet arrives *B**t*, the available energy in the battery, becomes at least as large as *E*. Based on this observation, our strategy utilizes a fraction *p* of *E* when *j* = 0, i.e. if the energy packet has just arrived in the current channel use; a fraction *p* of the remaining (1 − *p*)*E* in the next channel use *j* = 1, i.e. if the energy has arrived in the previous channel use and not the current one, etc. It is easy to verify that since $$\begin{aligned} \label{eq:needeqnumber} \displaystyle\sum\limits\_{j=0}^\infty \tilde{g}(j) \le E \;\;\;\; \text{and} \;\;\;\ \tilde{g}(j) \ge 0 \;\;\; \forall j,\end{aligned}$$ this strategy satisfies the energy feasibility constraints in and when *B**m**a**x* > *E*. Indeed, this energy management policy is quite conservative and can be clearly wasteful of resources. Consider the first epoch which starts with the arrival of the first energy packet: the remaining energy in the battery *j*-channel uses after the first energy packet arrives is given by (1 − *p*)*j**E*, so at the time the second packet arrives, there will be some residual energy left in the battery, at least part of which will add up to the arriving energy packet *E* since *B**m**a**x* > *E*. The strategy we propose ignores this residual energy. An equivalent way of thinking about our strategy is that it operates as if *B**m**a**x* = *E* even though *B**m**a**x* > *E*. However this strategy still turns out to be within constant number of bits of the optimal value. One immediate way to improve this strategy would be to use, at each time step, a fraction *p* of the currently available energy in the battery *B**t*. However, this improved strategy turns out to be difficult to analyze analytically. In the next section, we present simulation results which demonstrate the improvement due to this modification. Theorem [thm: constgaprate2] can be proved by using similar lines to the proof of Theorem [thm: constgaprate]. In the appendix, again based on Jensen’s inequality, we show that $$\begin{aligned} \max\_{g \in \mathcal{G}} \liminf\_{N \to \infty} \mathbb{E} \left[ \frac{1}{N} \displaystyle\sum\limits\_{t = 1}^N \frac{1}{2} \log\left( 1 + g( t)\right) \right] \leq \frac{1}{2}\log(1+pE)\end{aligned}$$ in this case. The remaining step is to show is that the strategy we propose in Theorem [thm: constgaprate2] is within 0.973 bits of this upper bound. While this can be shown from first principles by following the steps of Theorem [thm: constgaprate], it can also be directly observed from the proof of Theorem [thm: constgaprate]: Fix the energy packet size *E*. When the battery size is as large as *E*, i.e. *B**m**a**x* = *E*, the strategy in Theorem [thm: constgaprate] reduces to the strategy in Theorem [thm: constgaprate2]. Moreover, the proof of Theorem [thm: constgaprate] establishes that this strategy achieves $ \frac{1}{2}\log(1+pE)$ within 0.973 bits. Now in a system with same *E* but larger *B**m**a**x*, so that *B**m**a**x* > *E*, the same strategy is still feasible and will clearly achieve the same rate. This argument shows that when *B**m**a**x* > *E*, the strategy proposed in Theorem [thm: constgaprate2] achieves an average rate at least as large as $ \frac{1}{2}\log(1+pE)-0.973$ bits/channel use. Numerical Evaluations --------------------- [ht] [fig: p15] [ht] [fig: p5] In this section, we provide simulation results that compare the performance of the near optimal energy control policy we proposed in the earlier section, which we refer to as the *Constant Fraction Policy* in the current section, to some other simple energy/power control policies that have been proposed in the literature for the same problem. In particular, proposes a *Constant Water Level Policy*, which we refer to as the *Uniform Policy* in this section, which allocates a constant amount of energy, equal to the average energy arrival rate, to each channel use as long as there is sufficient energy in the battery. When the energy in the battery is exhausted, no energy is allocated until the next energy arrival. shows that this strategy becomes asymptotically optimal as *B**m**a**x* increases.[5](#fn5) The same intuition is suggested by in the information-theoretic setting. Figure [fig: p15] and [fig: p5] summarize the results. In all the plots, the curve with label ``Upper Bound“ is the upper bound on the average rate achieved by any feasible policy, i.e. 1/2log(1 + *p**B**m**a**x*) when *B**m**a**x* ≤ *E* and 1/2log(1 + *p**E*) when *B**m**a**x* > *E*. When *B**m**a**x* ≤ *E*, the curve with label ``Constant Fraction Policy” is the rate achieved by the strategy we proposed in the earlier section (Theorem [thm: constgaprate]), which uses a fixed *p* fraction of the remaining energy in the battery; the curve with label ``Uniform Policy“ is the rate achieved by a strategy that allocates *p**B**m**a**x* amount of energy, which is the average energy arrival rate in this case, if there is enough energy left in the battery, and 0 energy otherwise. Similarly when *B**m**a**x* > *E*, the *Constant Fraction Policy* uses a fixed fraction *p* of the available energy in the battery *B**t* and the *Uniform Policy* uses energy *p**E* at each channel use if possible. There are two curves for each of these strategies. The curves with ``(Lower Bound)” in the label represent an analytical lower bound we can compute on the rate achieved by these strategies by assuming *B**m**a**x* = *E* (so these lower bounds remain the same as long as *B**m**a**x* > *E*). In particular, for the *Constant Fraction Policy*, the lower bound corresponds to the rate achieved by the strategy proposed in Theorem [thm: constgaprate2]. Recall the discussion in the paragraph after which suggests a strategy using a constant fraction *p* of the available energy would actually achieve a larger rate when *B**m**a**x* > *E*. This actual rate is obtained by running Monte Carlo simulation and is given by the curve with ``(Monte Carlo)" in the label. Similarly, it’s possible to analytically compute a lower bound on the rate achieved by the *Uniform Policy* by assuming *B**m**a**x* = *E* and the actual performance is obtained by Monte Carlo simulations. Based on Fig. [fig: BleE15] and [fig: BleE5], we see that in the *B**m**a**x* ≤ *E* regime, the *Constant Fraction Policy* indeed tracks the upper bound within a constant gap for all values of *B**m**a**x*, whereas the gap of the *Uniform Policy* starts to diverge at around 15 to 20 dB depending on *p*. A similar conclusion holds for Fig. [fig: B2E15] and [fig: B2E5] where *B**m**a**x* > *E* with the ratio of *B**m**a**x* to *E* being fixed at 2. Fig. [fig: B8E15] and [fig: B8E5] showed that when *B**m**a**x* ≫ *E* (in this case *B**m**a**x* = 8*E*), the performances of both policies are very similar to each other across all SNR regimes. Moreover, they both track the upper bound very closely. This is not surprising given that, show that the *Uniform Policy* converges to the upper bound as *B**m**a**x*/*E* → ∞. Fig. [fig: B8E15] and [fig: B8E5] empirically show that our *Constant Fraction Policy* performs just as good when *B**m**a**x* ≫ *E*. The figures also illustrate the difference between a constant gap guarantee on optimality and an asymptotic guarantee on optimality as *B**m**a**x* → ∞. While Fig. [fig: B8E15] and [fig: B8E5] show that the *Uniform Policy* becomes optimal in the regime when *B**m**a**x* ≫ *E*, when *B**m**a**x* and *E* are comparable this strategy can be arbitrarily away from the optimal rate (as illustrated in the figures for *B**m**a**x* ≤ *E* and *B**m**a**x* = 2*E*).[6](#fn6) Our *Constant Fraction Policy* on the other hand is within a bounded gap from optimality for all parameter values as guaranteed by our theoretical results. Finally, observe that there is not much qualitative difference between Fig. [fig: p15] and Fig. [fig: p5] which correspond to different values of energy arrival probability *p*. This is expected as our constant gap guarantees hold independent of *p*. The information theoretic capacity of the finite battery system =============================================================== In this section, we approach the system in Section [sec: SysModel] with Bernoulli energy arrival process from an information theoretic perspective. In particular, we derive an upper and a lower bound on the information-theoretic capacity of the channel and show that the gap between these upper and lower bounds is no more than a constant for all choices of the system parameters. Our lower bound relies on the near-optimal energy allocation policy we developed in Section [sec: OnlinePolicy], and reveals the connection between the two problems by developing a codebook construction which allows to implement a given energy allocation policy. We examine the regime where *B**m**a**x* ≤ *E* in Section [subsec: BlessE], and the regime *B**m**a**x* > *E* in Section [subsec: BgreaterE]. The *B**m**a**x* ≤ *E* regime ----------------------------- In the case *B**m**a**x* ≤ *E*, each time the non-zero energy packet arrives, the battery will be filled up completely regardless of how much energy was remaining in the battery. In particular, at least *E* − *B**m**a**x* amount of energy is wasted in every non-zero incoming energy packet. Therefore, a system with energy packet size *E*ʹ = *E* − (*E* − *B**m**a**x*) = *B**m**a**x* is equivalent to the original system in terms of the available communication resources and hence the two systems must have the same capacity. In the sequel, we consider the equivalent system with *E*ʹ = *B**m**a**x*. Note that as a result of the above observation, in the regime *B**m**a**x* ≤ *E*, the capacity of the system can only explicitly depend on *B**m**a**x* and *p* and not *E*. We next provide an upper bound on the capacity as a function of *B**m**a**x* and *p*. We will then provide an achievable scheme and show that the rate it achieves is within a constant gap from this upper bound for all choices of *B**m**a**x* and *p*. [Upper Bound on Capacity: *B**m**a**x* ≤ *E*] [thm: UpperBoundC] When *E* ≥ *B**m**a**x*, the capacity *C* of the channel with Bernoulli energy arrival process defined in Section [sec: SysModel] is upper bounded by $$\label{eq: CapUpperBound} C \le \frac{1}{2} \log \left(1 + p B\_{max}\right) \triangleq C\_{ub}( B\_{max}, p).$$ Note that the capacity of the channel in Section [sec: SysModel] should be an increasing function of the battery size, since we can always choose not to use the extra battery space. Therefore, the capacity of our channel with finite battery is upper bounded by the capacity of the same channel with infinite battery size. The infinite battery capacity has been characterized in as $$\begin{aligned} C\_{\infty} & = \frac{1}{2}\log(1 + \mathbb{E}\left[ E\_t \right] ) \notag \\ & = \frac{1}{2}\log(1 + pE) \label{eq: ClassicAWGN}\end{aligned}$$ since E[*E**t*] = *p**E* in our current case. Based on the earlier discussion, when *E* ≥ *B**m**a**x*, the capacity of the system is the same as the capacity of a system with reduced energy packet size *E* = *B**m**a**x*. Plugging *E* = *B**m**a**x* in gives the desired upper bound in. Next, we will provide an achievable strategy for the channel in Fig. [fig: model] when the energy arrival process {*E**t*} is also causally known at the receiver. Later, we will use this result to derive an achievable rate for our original system in Section [sec: SysModel] where we assume that the receiver has no information about the energy arrival process. [Achievable Scheme with CSIR] [thm: Achievable] Assume that for the system defined in Section [sec: SysModel], the Bernoulli energy arrival process {*E**t*} is causally known not only at the transmitter but also at the receiver. Then we can achieve any rate: $$R\_{ach} \leq \displaystyle\sum\limits\_{j = 0}^{\infty} p (1-p)^j \max\_{p(x): |X|^2 \le \mathcal{E}\_j} I(X; Y) \label{eq: Achievable}$$ for any non-negative E0, E1, ... satisfying $$\sum\limits\_{j = 0}^{\infty} \mathcal{E}\_j \le B\_{max}. \label{eq: EnergyConstr}$$ The proof of Theorem [thm: Achievable] is provided in the Appendix. The idea for the achievable scheme is that if both the transmitter and receiver know when the energy packet *E* arrives, they can agree on an energy allocation strategy ahead of time. As we did in Section [sec: OnlinePolicy], here we concentrate on an energy allocation policy E*j* that is invariant across different epochs (the period of time between two adjacent non-zero packet arrivals). E*j* denotes the amount of energy allocated to transmission, *j* channel uses after the last time the battery was recharged, i.e. if energy arrives at the current channel use, we allocate E0 amount of energy for transmission; if energy arrived in the previous channel use but not the current channel use, then we allocate E1 amount of energy for transmission, etc. The transmitter and receiver agree ahead of time on a sequence of codebooks C(*j*) where each codebook is amplitude-constrained to E*j*, i.e. the symbols of each codeword in C(*j*) are such that ∣*X*∣2 ≤ E*j*. This ensures that the symbol transmitted at the corresponding time will not exceed the energy constraint E*j*. We assume that the transmitter has one codeword *c**j* ∈ C(*j*) from each codebook to communicate to the receiver and the symbols of these codewords are interleaved as dictated by the realization of the energy arrival process. For example, upon the arrival of the first energy packet, the transmitter sends the first symbol of codeword *c*0; if there is no energy packet arrival in the next channel use, it transmits the first symbol of codeword *c*1 in the next channel use, etc. Once the second energy packet arrives, the cycle is reset and the transmitter moves to transmitting the second symbol of the codeword *c*0, then the second symbol in codeword *c*1, etc. (See the Appendix for a detailed description of the strategy and its performance analysis.) gives the rate we can achieve with such a strategy in the large blocklength limit. ensures that the total energy spent does not exceed the available energy in the battery at *j* = 0 which is equal to *B**m**a**x* (since when *E* ≥ *B**m**a**x*, the battery is recharged to full every time an energy packet arrives). We next show that when there is no channel state information at the receiver we can achieve the rate in Theorem [thm: Achievable] with at most *H*(*p*) bits of penalty. The main idea of the result is similar to Theorem 1 in which shows that over an information stable channel the maximum possible capacity improvement due to receiver side information is bounded by the amount of the side information itself. In the current case, the energy constraints in and introduce memory into the system and its not a priori clear if the channel is information stable or not. The following theorem extends Theorem 1 of to general, not necessarily information-stable, channels. [Capacity improvement due to RX Side Info][thm:CapCSIR] Consider a general channel, not necessarily stationary memoryless, defined as a sequence {*W**n*( ⋅ ∣ ⋅ ) = *P**Y*(*n*)∣*X*(*n*) : X(*n*) → Y(*n*)}*n* = 1∞,  of arbitrary conditional probability distributions together with an input and an output alphabet for each *n* (which need not be Cartesian products of a basic input and an output alphabet). The improvement in channel capacity due to the availability of side information at the receiver is upper-bounded by the *spectral sup-entropy rate* of the side information process *G* = {*G**n*}*n* = 1∞, which is defined as $$\bar{H}(G)=\text{p-}\limsup\_{n\rightarrow\infty}\frac{1}{n}\log \frac{1}{P\_{G^n}(g^n)},$$ where p-limsup denotes *limsup in probability* (see ). The proof of the theorem is given in the Appendix. In order to apply the theorem to our current channel with causal transmitter side information, we can use Shannon’s technique in to first transform the channel to an equivalent channel without states but with an enlarged input alphabet over so called Shannon strategies (this transformation has been developed in ) and then apply Theorem [thm:CapCSIR] to the equivalent channel. Since the side information process *G* in our case is the i.i.d. Bernoulli energy arrival process {*E**t*}, which has entropy rate *H̄*(*G*) = *H*(*p*) we immediately get the following theorem. [Achievable Rate without CSIR] [thm: Achievable2] Consider the system defined in Section [sec: SysModel] where the energy arrival process {*E**t*} is causally known only at the transmitter, but not at the receiver. The capacity *C* of this system is lower bounded by $$\begin{aligned} C \geq \displaystyle\sum\limits\_{j = 0}^{\infty} & \; p (1-p)^j \max\_{p(x): |X|^2 \le \mathcal{E}\_j} I(X; Y)- H(p) \label{eq: Achievable2} \\ & \text{subject to: } \;\;\; \displaystyle\sum\limits\_{j = 0}^{\infty} \mathcal{E}\_j \le B\_{max} \label{eq: EnergyConstr2}\end{aligned}$$ where *H*(*p*) is the binary entropy function. While the proof of Theorem [thm: Achievable2] follows directly from Theorem [thm:CapCSIR], in the Appendix we provide an alternative proof for Theorem [thm: Achievable2] that makes specific use of our channel structure and the achievable scheme we propose in Theorem [thm: Achievable]. Note that the mutual information maximization problem in is over the class of distributions with bounded support in $\left[ - \sqrt{\mathcal{E}\_j}, \sqrt{\mathcal{E}\_j} \right]$. This is a nontrivial optimization problem and as shown in, the optimal input distribution turns out to be discrete rather than continuous. Below we lower bound the optimal value of this optimization problem by considering the uniform distribution over the interval $\left[ - \sqrt{\mathcal{E}\_j}, \sqrt{\mathcal{E}\_j} \right]$. [LB on Amplitude-constrained AWGN Capacity] [lem: GapUniform] $$\label{eq: UniformCap} \max\_{p(x): |X|^2 \le \mathcal{E}\_j} I(X; Y) \ge \frac{1}{2} \log \left(1 + \mathcal{E}\_j \right) - 1.04$$ Take *p*(*x*) to be the uniform distribution on $\left[ - \sqrt{\mathcal{E}\_j}, \sqrt{\mathcal{E}\_j} \right]$, then the average power is $\mathbb{E}[X^2] = \frac{\mathcal{E}\_j}{3}$. We use the following result which is proved in : Consider a discrete-time AWGN channel with average power constraint *P*, i.e. E[∣*X*∣2] ≤ *P*, and noise variance *σ*2. Let the input *X* of the channel be uniformly distributed over $[-\sqrt{3P},\sqrt{3P}]$ (so that E[∣*X*∣2] = *P*) and *Y* be the corresponding output random variable. Then $$I(X;Y)\geq \log\left(1+\frac{P}{\sigma^2}\right)- \frac{1}{2}\log\left(\frac{\pi e}{6}\right).$$ By using the result in the proposition, we get $$\begin{aligned} \max\_{p(x): |X|^2 \le \mathcal{E}\_j} & I( X; Y) \ge \frac{1}{2} \log \left(1 + \frac{\mathcal{E}\_j}{3} \right) - \frac{1}{2}\log\left(\frac{\pi e}{6}\right) \notag\\ & \ge \frac{1}{2} \log(1 + \mathcal{E}\_j) - \left(\frac{1}{2}\log(3) + \frac{1}{2}\log\left(\frac{\pi e}{6}\right) \right) \notag \\ & = \frac{1}{2} \log(1 + \mathcal{E}\_j) - 1.04\end{aligned}$$ Combining Lemma [lem: GapUniform] and Theorem [thm: Achievable2], we can conclude that the following rate is achievable in the system defined in Section [sec: SysModel] where the energy arrival process is causally known only at the transmitter, but not at the receiver: $$\begin{aligned} \label{eq: achievable\_2} R\_{ach} \ge \displaystyle\sum\limits\_{j = 0}^{\infty} p & (1-p)^j \frac{1}{2} \log \left(1 + \mathcal{E}\_j \right) - H(p) - 1.04 \end{aligned}$$ with E*j* subject to the constraint. Notice that up to a fixed constant, we are back to the online rate optimization problem studied in Section [sec: OnlinePolicy] (see eq.). Employing the near-optimal allocation policy from the previous section, which assigns E*j* = *p*(1 − *p*)*j**B**m**a**x*, gives us the following achievable rate. [Lower Bound on Capacity: *B**m**a**x* ≤ *E*] [lem: LowerBoundC] When *E* ≥ *B**m**a**x*, the capacity of a system defined in Section [sec: SysModel] with Bernoulli energy arrival process {*E**t*} only causally known at the transmitter, but not at the receiver, is lower bounded by $$\begin{aligned} C& (B\_{max}, p) \ge C\_{lb}(B\_{max}, p) \notag\\ & \triangleq \left( \displaystyle\sum\limits\_{j = 0}^{\infty} p(1-p)^j \frac{1}{2} \log\left( 1 + (1-p)^j p B\_{max} \right) - K(p)\right)^+ \label{eq: CapLowerBound}\end{aligned}$$ where *a*+ = max(*a*, 0) and *K*(*p*) = 1.04 + *H*(*p*) where *H*(*p*) is the binary entropy function. This lemma is a direct consequence of Theorem [thm: Achievable2] and Lemma [lem: GapUniform] with particular choice of E*j* = *p*(1 − *p*)*j**B**m**a**x*. We complete the proof by verifying {E*j*} satisfies the constraint since $\displaystyle\sum\limits\_{j = 0}^{\infty} p (1-p)^j B\_{max} = B\_{max}$, and use the fact that the capacity is non-negative. The next theorem states that for all values of *B**m**a**x* and *p*, the gap between the lower bound in and the upper bound in is bounded by a constant. [Constant Gap] [thm: ConstantGapBlessE] For all 0 < *p* < 1,  *B**m**a**x* > 0, we have the following inequality: $$\begin{aligned} \label{eq: gap} C\_{ub}(B\_{max}, p) - C\_{lb}(B\_{max}, p) \le 2.58 \mbox{ bits}\end{aligned}$$ The detailed proof is given in the Appendix. Notice that in the proof of Theorem [thm: constgaprate], we have already bounded the gap between $\displaystyle\sum\limits\_{j = 0}^{\infty} p(1-p)^j \frac{1}{2} \log\left( 1 + (1-p)^j p B\_{max} \right)$ and $\frac{1}{2}\log\left(1+pB\_{max} \right)$ by a constant, so it’s no surprise we have a constant bound here. The additional term *K*(*p*) in Lemma [lem: LowerBoundC] is why we have a larger constant, i.e. 2.58 bits, here. The *B**m**a**x* > *E* regime ----------------------------- In Section [subsec: BlessE], we provided an upper bound and a lower bound on the capacity when *B**m**a**x* ≤ *E*. Because the energy must be stored into the battery before it can be used, the extra energy is wasted when *B**m**a**x* ≤ *E*. Therefore, the capacity depends only on *B**m**a**x* and not *E* in that case. When *B**m**a**x* > *E*, a natural question is how the extra battery space impacts the capacity of the system, and whether *B**m**a**x* or *E* (or both) is the determining factor for the capacity. We show below that the capacity depends critically on *E* in this case and not so much on the extra battery space *B**m**a**x*. The bounds in Section [subsec: BlessE] can be summarized as: $$\label{eq: two\_sided} \frac{1}{2} \log \left(1 + p B\_{max} \right) - 2.58 \le C \le \frac{1}{2} \log \left(1 + p B\_{max} \right)$$ Notice, in the case *B**m**a**x* = *E*, this bound turns into: $$\label{eq: B\_equal\_E} \frac{1}{2} \log \left(1 + p E \right) - 2.58 \le C(B\_{max} = E, p) \le \frac{1}{2} \log \left(1 + p E \right)$$ Note that the right hand side of is the same as the infinite battery capacity given by. This gives rise to the following theorem. [Bounds on Capacity: *B**m**a**x* > *E*] [thm: BgreaterE] When the battery size *B**m**a**x* > *E*, the capacity of a system with Bernoulli energy arrival process defined in Section [sec: SysModel] is bounded by: $$\label{eq: B\_greater\_E} \frac{1}{2} \log \left(1 + p E \right) - 2.58 \le C \le \frac{1}{2} \log \left(1 + p E \right)$$ Since having a larger battery can only help the capacity of the system, the capacity when *B**m**a**x* > *E* is at least as large as the capacity when *B**m**a**x* = *E*. Using the left hand side of, we have: when *B**m**a**x* > *E* $$\begin{aligned} C & \ge C(B\_{max} = E, p) \\ & \ge \frac{1}{2} \log \left(1 + p E \right) - 2.58\end{aligned}$$ Similarly, using, we have the upper bound: $$\begin{aligned} C & \le C(B\_{max} = \infty, p) \\ & \leq \frac{1}{2} \log \left(1 + p E \right) \end{aligned}$$ This completes the proof of the theorem. Generalization to other energy profiles ======================================= The near optimal energy allocation policy and the coding strategy we developed in the earlier sections, as well as the resultant constant gap approximation for the information-theoretic capacity of the AWGN energy harvesting communication channel were specific to the i.i.d. Bernoulli energy arrival process and it may seem difficult to extend these results to the more general settings. In this section, we present a simple way to apply these strategies in the more general settings of i.i.d. energy arrival processes. The idea we propose is very simple: given an i.i.d. energy arrival process where *E**t* has an arbitrary distribution (discrete or continuous), fix an energy level *E* and find the probability *p* of having an energy arrival with packet size at least *E*, i.e. *p* = P(*E**t* ≥ *E*); then apply the near optimal energy allocation policy of Section [sec: OnlinePolicy] or the near optimal coding strategy of Section [sec: InfoCap] as if the exogenous energy process were i.i.d. Bernoulli with parameters *E* and *p*. Clearly, this is an energy feasible strategy for the general i.i.d. energy arrival process. However, a priori it may seem highly suboptimal and wasteful of energy since by treating the energy arrival process as Bernoulli with parameters *E* and *p*, we assume that there is no energy arrival when the arriving energy packet size is smaller than *E*, and we assume that the energy packet size is equal to *E* each time we receive an energy packet of size larger than or equal to *E*. Effectively, our strategy may not be utilizing a large fraction of the energy accumulating in the battery under the general i.i.d. energy arrival process. However, as we illustrate next this simple strategy turns out to be sufficient to achieve the optimal value for the energy allocation problem in Section [sec: OnlinePolicy] and the information-theoretic capacity of the channel within a constant gap for a large class of i.i.d. energy arrival processes, though with a larger constant. Note that as already discussed in Section [sec: simulations], the earlier strategies proposed in the literature would fail to provide a constant gap approximation. We have the following theorem which is the extension of our main result in Theorem [thm: MainResult] to the general i.i.d. energy arrival processes. It is straightforward to write down the corresponding extensions of Theorems [thm: constgaprate] and [thm: constgaprate2] to this more general setting. [thm: Extension] Assume that the energy arrival process {*E**t*} is an i.i.d. random process with each *E**t* distributed according to an arbitrary cumulative distribution function *F*(*x*) such that *F*(*x*) = 0, ∀*x* < 0. Then for each 0 ≤ *x* ≤ *B**m**a**x*, the information theoretic capacity *C* of the corresponding AWGN energy harvesting channel with battery size *B**m**a**x* is bounded by $$\label{eq: mainres\_general} \begin{split} \frac{1}{2}\log(1+ & x (1-F(x))) - 2.58 \leq C \\ &\leq \frac{1}{2}\log\left(1+\int\_0^{B\_{max}} \! (1-F(y) ) \, \mathrm{d}y \right). \end{split}$$ Fix an energy level *x*. The capacity *C* of our channel can be lower bounded by the capacity of the same channel when the exogenous energy arrival process is i.i.d. Bernoulli with energy packet size *E* = *x* and probability of energy packet arrival *p* = 1 − *F*(*x*) since we can always discard the additional energy. Then, applying Theorem [thm: MainResult] we immediately get the lower bound in. For the upper bound, we notice when the arrival energy *E**t* > *B**m**a**x*, the extra *E**t* − *B**m**a**x* amount of energy is wasted. Therefore, the available resource for communication for this system is equivalent to a system where *E**t*, is distributed according to a cumulative distribution function: $$\begin{aligned} \label{eq: equiv\_cdf} \tilde{F}(x) = \left\{ \begin{array}{rl} 0 & \mbox{for $x < 0$} \\ F(x) & \mbox{for $0 \le x < B\_{max}$ } \\ 1 & \mbox{for $B\_{max} \le x$} \end{array} \right.\end{aligned}$$ Again, using the infinite battery capacity, we can upper bound the capacity of the system by: $$\begin{aligned} C & \le C\_{\infty} \\ & = \frac{1}{2}\log(1+\mathbb{E}(E\_t)) \\ & \stackrel{(a)}{=} \frac{1}{2} \log\left(1+\int\_{0}^{\infty}(1-\tilde{F}(y)) \mathrm{d}y \right) \\ & \stackrel{(b)}{=} \frac{1}{2} \log\left(1+\int\_{0}^{B\_{max}}(1-F(y)) \mathrm{d}y \right)\end{aligned}$$ where (a) comes from a well-known identity relating the cdf of a non-negative random variable to its mean, (b) uses. Note that the above theorem holds for any value of *x* between 0 and *B**m**a**x*. For a given energy arrival process, we would want to optimize *x* to maximize the lower bound on capacity. In other words, the lower bound in the theorem can be equivalently rewritten as $$\label{eq: gen\_lower\_bound} {\frac{1}{2}\log\left( 1+ \sup\_{0\leq x \leq B\_{max}}{x (1-F(x))}\right) - 2.58} \leq C.$$ Using this lower bound, the approximation gap in the theorem (the difference between the lower and upper bounds) is given by $$\label{eq: gen\_gap} \frac{1}{2}\log\left( \frac{1+\int\_{0}^{B\_{max}}(1-F(y)) \mathrm{d}y}{1+ \displaystyle\sup\limits\_{0\leq x \leq B\_{max}}{x (1-F(x))}} \right) +2.58,$$ which can be upper bounded by $$\label{eq: ratio} \frac{1}{2}\log\left( \frac{\int\_{0}^{B\_{max}}(1-F(y)) \mathrm{d}y}{\displaystyle\sup\limits\_{0\leq x \leq B\_{max}}{x (1-F(x))}} \right) +2.58.$$ since the numerator is always greater than or equal to the denominator inside the log in Equation. Figure [fig: one-CDF] illustrates the ratio inside the logarithm for a given energy arrival process, characterized by the graph 1 − *F*(*x*) between 0 and *B**m**a**x*. The numerator in this fraction is the area under this graph, while the denominator is the largest area of a rectangle lying below this graph (as illustrated by the shaded rectangle). Therefore, as long as the cumulative distribution function has the property that this ratio is not too large, Theorem [thm: Extension] will yield a constant gap approximation of the capacity. As we illustrate next, two examples of such distributions are the uniform distribution and an energy arrival process with two discrete energy levels (as opposed to one energy level in the Bernoulli case). For these distributions we get a total approximation gap of 3.08 bits (as opposed to 2.58 bits in the Bernoulli case). However, not all cumulative distribution functions yield a constant gap approximation. We also provide a counter example (a sequence of energy profiles) for which the approximation gap provided by the theorem becomes arbitrarily large. [fig: one-CDF] [fig: Uniform-CDF] Uniform Distribution -------------------- Assume the energy arrival process is i.i.d. with *E**t* uniformly distributed over the interval [*A*1, *A*2] for some arbitrary 0 ≤ *A*1 < *A*2. We first assume that *B**m**a**x* ≥ *A*2. We have $$\begin{split} \int\_{0}^{B\_{max}}(1-F(y)) \mathrm{d}y &= \frac{A\_1 +A\_2}{2} \\ & = 2\times\frac{1}{2}\times\frac{A\_1 +A\_2}{2} \\ &\leq 2\times {\sup\_{0\leq x \leq B\_{max}}{x (1-F(x))}}. \end{split}$$ we have the inequality in the last step because by choosing *x* = (*A*1 + *A*2)/2, we can achieve *x*(1 − *F*(*x*)) = (*A*1 + *A*2)/4 (See Figure [fig: Uniform-CDF]). With this choice of energy level *E*, i.e. the midpoint of the interval [*A*1, *A*2], the ratio inside the log in Equation is guaranteed to be a constant of 2 for any i.i.d. uniformly distributed energy arrival process independent of the values of *A*1 and *A*2. In particular, Theorem [thm: Extension] will approximate the capacity as $$\label{eq: upper\_uniform} \frac{1}{2}\log\left(1+ \frac{A\_1+A\_2}{2}\right)\quad\text{bits}$$ when *B**m**a**x* ≥ *A*2 within a gap of 3.08 bits. When (*A*1 + *A*2)/2 ≤ *B**m**a**x* < *A*2, we can again choose *x* = (*A*1 + *A*2)/2, and achieve within 3.08 bits of, which clearly is also an upper bound for this case. When *A*1 ≤ *B**m**a**x* < (*A*1 + *A*2)/2, we can no longer choose *x* = (*A*1 + *A*2)/2, since *x* must be no more than *B**m**a**x*. However, in this case, if we simply choose *x* = *B**m**a**x*, the ratio of interest is still no more than 2, therefore, Theorem [thm: Extension] will approximate the capacity as 1/2log(1 + *B**m**a**x*) within 3.08 bits. Finally, when *B**m**a**x* ≤ *A*1, we are back to a degenerate Bernoulli case with arrival probability *p* = 1, and packet size *B**m**a**x*, so we have upper bound 1/2log(1 + *B**m**a**x*), and lower bound of 1/2log(1 + *B**m**a**x*) − 2.58. We summarize the approximate capacity and the approximation gap for various regimes in the following table: [H] | Regime | Approximate Capacity | Gap | | --- | --- | --- | | $\frac{A\_1 + A\_2}{2} \le B\_{max}$ | $\frac{1}{2} \log \left(1 + \frac{A\_1+A\_2}{2} \right)$ |  ≤ 3.08 | | $A\_1 \le B\_{max} < \frac{A\_1 + A\_2}{2}$ | $\frac{1}{2}\log\left(1 + B\_{max} \right)$ |  ≤ 3.08 | | *B**m**a**x* < *A*1 | $\frac{1}{2} \log \left(1 + B\_{max}\right)$ |  ≤ 2.58 | [table: Uniform] Similar to the Bernoulli arrival process, based on Table [table: Uniform], we infer there are two qualitatively different regimes for the capacity of a system with uniform arrival process. In particular, when *B**m**a**x* < (*A*1 + *A*2)/2, the capacity is increasing roughly logarithmic in *B**m**a**x*, while when *B**m**a**x* ≥ (*A*1 + *A*2)/2 the capacity approximately saturates to $\frac{1}{2} \log \left(1 + \frac{A\_1+ A\_2}{2} \right)$. Perhaps, what’s interesting here is that the threshold where this regime shift happens is neither *A*1 nor *A*2 by itself, but instead happens at the midpoint between *A*1 and *A*2. k-Level Distribution -------------------- Assume now that the energy arrival process is again i.i.d. but *E**t* is a discrete random variable that takes the value *A**i* with probability *p**i* for *i* = 1, 2, …, *k*. Further, assume that 0 < *A*1 < *A*2 < ⋯ < *A**k* ≤ *B**m**a**x*. Note that the probability of having no energy packets is 1 − ∑*i* = 1*k**p**i*. Then we have $$\begin{aligned} \int\_{0}^{B\_{max}}(1-F(y)) \mathrm{d}y &= \sum\_{i=1}^{k}{p\_i A\_i} \\ &\le \sum\_{i=1}^{k}{A\_i \sum\_{j=i}^{k}{p\_j}} \\ &\le k\max\_{1\leq i\leq k}{A\_i \sum\_{j=i}^{k}{p\_j}} \\ &= k {\sup\_{0\leq x \leq B\_{max}}{x (1-F(x))}}.\end{aligned}$$ The last equality holds because the supremum is achieved when *x* in Theorem [thm: Extension] approaches *A**i* from left for which *A**i*∑*j* = *i**k**p**j* is maximized. This shows we can approximate the capacity as $\frac{1}{2}\log(1+\mathbb{E}[E\_t])$, when 0 < *A**i* ≤ *B**m**a**x* for all *i*, within a gap of at most $$\label{eq: k\_level\_gap} \frac{1}{2}\log(k)+2.58\quad\text{bits}.$$ For example, for *k* = 1, i.e. the Bernoulli case, we recover the gap of 2.58 bits. For the case *k* = 2, we obtain an approximation gap of 3.08 bits. Note that Equation is just an upper bound on the gap, and not necessarily tight. In particular, not all classes of distributions have gap increasing with k. For example, if all the *p**i*’s are the same and *A**i*’s are equally spaced within an interval [*A**a*, *A**b*]. Then as *k* → ∞, the distribution of *E**t* starts to approach a uniform distribution we examined in the previous subsection, which we know has a bounded gap of 3.08 bits. However, there does exist sequence of profiles, where as *k* increases, our approximation gap increases unboundedly. Indeed, in the last subsection, we will find an example of discrete profiles for which the gap can be arbitrarily large. If *A**i* − 1 ≤ *B**m**a**x* < *A**i*, we can consider the equivalent distribution where we replace all *A**j*, *j* ≥ *i* with *B**m**a**x* and assign a probability mass ∑*j* = *i**k**p**j* to *B**m**a**x*. This case can be handled similarly as we did above but note that in this case the expression for the approximate capacity will depend on *A**j* and *p**j*’s for *j* < *i* as well as *B**m**a**x*. Counterexample -------------- While the strategy has worked well for the above cases, here we provide a sequence of profiles for which the gap between the upper and lower bounds in Theorem [thm: Extension] can be made arbitrarily large. Define a sequence of profiles with cdf {*F**n*(*x*)}*n* = 1∞ as $$\begin{aligned} 1-F\_n(x) = \left\{ \begin{array}{rl} 1 & \mbox{$x < 1$} \\ \frac{1}{x} & \mbox{$1\leq x < n$} \\ 0 & \mbox{$n \le x$} \end{array} \right.\end{aligned}$$ Now, suppose we have a channel with energy arrival process described by the cdf *F**n*(*x*). Without loss of generality, let *B**m**a**x* = *n*, i.e. the process takes on a support in [0, *B**m**a**x*]. Then, $$\begin{aligned} \frac{1+\int\_{0}^{B\_{max}}(1-F\_n(y)) \mathrm{d}y}{1+{\displaystyle\sup\limits\_{0\leq x \leq B\_{max}}{x (1-F\_n(x))}}} & \stackrel{(a)}{=} \frac{1+\int\_{0}^{1}1 \mathrm{d}y +\int\_{1}^{n}\frac{1}{y} \mathrm{d}y}{1+1} \notag \\ &= 1+\frac{\ln(n)}{2} \label{eq: unbounded\_ratio}\end{aligned}$$ where (a) is true due to the way we constructed *F**n*(*x*), in particular: $$\begin{aligned} x(1-F\_n(x)) = \left\{ \begin{array}{rl} x & \mbox{for $x < 1$} \\ 1 & \mbox{for $1 \le x < n$ } \\ 0 & \mbox{for $n \le x $} \end{array} \right.\end{aligned}$$ Substitute the ratio from into Equation, we obtain a gap between the upper and lower bounds in Theorem [thm: Extension] of: $$\label{eq: unbounded\_gap} \frac{1}{2}\log\left(1 + \frac{\ln(n)}{2} \right) + 2.58$$ which can be made arbitrarily large as *n* grows unboundedly. This suggests that the approximation gap using our strategy can not be bounded by a constant as we did in the previous examples. Alternatively, the unboundedness of the ratio can be seen from Figure [fig: Counterexample] where the area of every shaded bounded rectangle is no more than 1, but the area under the graph becomes arbitrarily large as *n* goes to infinity. [fig: Counterexample] As mentioned earlier, it is possible to come up with a discrete counterexample by discretizing this sequence of profiles at integer levels. In particular, let the energy arrival process be a *k*-level distribution with *A**i* = *i* for 1 ≤ *i* ≤ *k*, $p\_i=\frac{1}{i}-\frac{1}{i+1}$ for 1 ≤ *i* ≤ *k* − 1 and $p\_k = \frac{1}{k}$. Again, let *B**m**a**x* = *k* to avoid complication due to truncation of the cdf. Then the ratio inside the log in can be written as: $$\begin{aligned} \frac{1+\int\_{0}^{B\_{max}}1-F\_k(y) \mathrm{d}y}{1+{\displaystyle\max\limits\_{1\leq i \leq k}{A\_i (1-F\_k(A\_i^-))}}} &= \frac{1+\sum\_{i=1}^{k}{\frac{1}{i}}}{1+\displaystyle\max\limits\_{1\leq i \leq k}{i \times \frac{1}{i}}} \\ &= \frac{1+\sum\_{i=1}^{k}{\frac{1}{i}}}{2} \\ &\geq \frac{1+\ln(k+1)}{2}\end{aligned}$$ where *F**k*(*A**i*−) = lim*x* ↑ *A**i**F**k*(*x*). This shows the ratio can again grow unboundedly as *k* → ∞. The counterexamples in this subsection show that there are both discrete and continuous profiles, for which our strategy can not achieve the upper bounds within a constant gap, although such profiles may not be common in practice. A natural future direction is to obtain approximate characterizations of the energy harvesting communication channel under general energy harvesting profiles. Acknowledgment ============== We would like to thank S. Ulukus for stimulating discussions on the topic and the anonymous reviewers for their detailed comments and suggestions which significantly improved the paper. Appendix ======== [Proof of Theorem [thm: constgaprate]] Here, we provide the detailed proof to the three steps mentioned in the main context. *Step One:* Since we are in the case *B**m**a**x* ≤ *E*, each time a non-zero energy packet arrives, the battery will be filled up completely regardless of how much energy was remaining in the battery. In particular, at least *E* − *B**m**a**x* amount of energy is wasted in every non-zero incoming energy packet. Therefore, a system with Bernoulli arrival process *Ẽ**t* where the energy packet size is *Ẽ* = *E* − (*E* − *B**m**a**x*) = *B**m**a**x* is completely equivalent to our original system in terms of the amount of the available energy for communication. Now, let *g*(*t*) be any policy in G. We have: $$\begin{aligned} \liminf\_{N \to \infty} \mathbb{E} & \left[ \frac{1}{N} \displaystyle\sum\limits\_{t = 1}^N \frac{1}{2} \log\left( 1 + g( t)\right) \right] \\ & \stackrel{(a)}{\leq} \liminf\_{N \to \infty} \mathbb{E} \left[ \frac{1}{2} \log\left( 1 + \frac{1}{N} \displaystyle\sum\limits\_{t = 1}^N g( t)\right) \right] \\ & \stackrel{(b)}{\leq} \liminf\_{N \to \infty} \mathbb{E} \left[ \frac{1}{2} \log\left( 1 + \frac{1}{N} \displaystyle\sum\limits\_{t = 1}^N \tilde{E}\_{t} \right) \right] \\ & \stackrel{(c)}{=} \mathbb{E} \left[ \liminf\_{N \to \infty} \frac{1}{2} \log\left( 1 + \frac{1}{N} \displaystyle\sum\limits\_{t = 1}^N \tilde{E}\_{t}\right) \right] \\ & = \; \frac{1}{2} \log\left( 1 + pB\_{max}\right)\end{aligned}$$ where (a) follows from the concavity of the log which allows to apply Jensen’s Inequality; (b) follows from the argument at the beginning that we can equivalently consider a Bernoulli arrival process *Ẽ**t* where the energy packet is *Ẽ* = *B**m**a**x* and the fact that *g*(*t*) is feasible, so we can not spend more energy up to time *N* than the amount of exogenous energy we receive by time *N*, i.e. $$\sum\limits\_{t = 1}^N g( t)\leq \sum\limits\_{t = 1}^N \tilde{E}\_{t};$$ (c) follows from the Dominated Convergence Theorem. The Dominated Convergence Theorem holds because first, $$\begin{aligned} \lim\_{N \to \infty} \frac{1}{2} &\log\left( 1 + \frac{1}{N} \displaystyle\sum\limits\_{t = 1}^N \tilde{E}\_{t}\right)\\ &\stackrel{(d)}{=} \frac{1}{2} \log\left( 1 + \liminf\_{N \to \infty} \frac{1}{N} \displaystyle\sum\limits\_{t = 1}^N \tilde{E}\_{t} \right) \\ & \stackrel{(e)}{=} \frac{1}{2} \log\left( 1 + \mathbb{E}[\tilde{E}\_{t}] \right) \\ & = \; \frac{1}{2} \log\left( 1 + pB\_{max}\right)\end{aligned}$$ almost surely, where (d) follows from the fact $\frac{1}{2}\log(1+x)$ is smooth and monotonically increasing so we can move the liminf inside; and (e) follows from the Law of Large Numbers; and second $$\frac{1}{2} \log \left( 1 + \frac{1}{N} \displaystyle\sum\limits\_{t=1}^{N} \tilde{E}\_{t} \right) \le \frac{1}{2} \log \left(1 + B\_{max} \right)$$ for all *N* which is finite. Since the above upper bound applies to all feasible policies *g* ∈ G, we have the desired upper bound. *Step Two:* Let {*T**i*}*i* = 1*L* be the inter-arrival times between the *i*’th and *i* + 1’th non-zero energy packets, where *L* is the total number of non-zero packets received by time instance N, i.e. ∑*i* = 1*L**T**i* ≤ *N* < ∑*i* = 1*L* + 1*T**i*. Notice that for fixed *N*, *T**i*’s and *L* defined in this way are random variables which are functions of *N* and the random energy arrival process {*E**t*}*t* = 0*N*. We can lower bound the rate achieved by *g*ʹ(*t*) in terms of these new variables as $$\begin{aligned} &\liminf\_{N \to \infty} \; \mathbb{E} \left[ \frac{1}{N} \displaystyle\sum\limits\_{t = 1}^N \frac{1}{2} \log\left( 1 + g^\prime(t)\right) \right] \nonumber\\ & \;\;\geq \; \liminf\_{N \to \infty} \mathbb{E} \left[ \displaystyle\sum\limits\_{i = 1}^L \displaystyle\sum\limits\_{j = 0}^{T\_i-1} \frac{1}{2} \log\left( 1 + \tilde{g}(j)\right) \bigg / \displaystyle\sum\limits\_{i = 1}^{L+1} T\_i \right]\label{eq:lln1}\end{aligned}$$ which follows from the fact that the strategy *g*′(*t*) we consider is of the form *g*′(*t*) = *g̃*(*j*) where *j* = *t* − max{*t*ʹ ≤ *t* : *E**t*ʹ = *E*}, i.e., the strategy is invariant across different epochs and the allocated energy depends only on the number of time steps since the last energy arrival. Notice that as *N* → ∞, *L* → ∞ with probability 1. Divide both the numerator and the denominator of the last equation by *L* and apply the Law of Large Numbers to both the numerator and the denominator, we obtain $$\begin{aligned} & \liminf\_{N \to \infty} \;\displaystyle\sum\limits\_{i = 1}^L \displaystyle\sum\limits\_{j = 0}^{T\_i-1} \frac{1}{2} \log\left( 1 + \tilde{g}(j)\right) \bigg / \displaystyle\sum\limits\_{i = 1}^{L+1} T\_i \nonumber \\ &\;\; = \;\; \mathbb{E}\left[ \displaystyle\sum\limits\_{j = 0}^{T\_1-1} \frac{1}{2} \log\left( 1 + \tilde{g}(j)\right)\right] \bigg / \mathbb{E}[T\_1].\label{eq:lln0}\end{aligned}$$ Note that {*T**i*}’s are i.i.d. Geometric(*p*) so the Law of Large Numbers is directly applicable to the denominator and it also applies to the numerator since the random variables $\sum\limits\_{j = 0}^{T\_i-1} \frac{1}{2} \log\left( 1 + \tilde{g}(j)\right)$ are i.i.d. with finite mean. Now, because the sequence of random variables in converges almost surely, and for every *N* the sequence is upper bounded, we can apply the Dominated Convergence Theorem to exchange the limit and the expectation in to obtain $$\begin{aligned} \liminf\_{N \to \infty}\; &\mathbb{E} \left[ \frac{1}{N} \displaystyle\sum\limits\_{t = 1}^N \frac{1}{2} \log\left( 1 + g^\prime(t)\right) \right] \\ & \geq \mathbb{E}\left[ \displaystyle\sum\limits\_{j = 0}^{T\_1-1} \frac{1}{2} \log\left( 1 + \tilde{g}(j)\right)\right] \bigg / \mathbb{E}[T\_1].\end{aligned}$$ We have $$\begin{aligned} \mathbb{E}&\left[ \displaystyle\sum\limits\_{j = 0}^{T\_1-1} \frac{1}{2} \log\left( 1 + \tilde{g}(j)\right)\right] \bigg / \mathbb{E}[T\_1]\;\;\;\\ = \;\; & p \displaystyle\sum\limits\_{i = 1}^{\infty} \mathbb{P}(T\_1 = i) \displaystyle\sum\limits\_{j = 0}^{i-1} \frac{1}{2} \log(1+\tilde{g}(j)) \\ \stackrel{(a)}{=} \;\; & p \displaystyle\sum\limits\_{i = 1}^{\infty} (1-p)^{i-1} p \displaystyle\sum\limits\_{j = 0}^{i-1} \frac{1}{2} \log(1+\tilde{g}(j)) \\ \stackrel{(b)}{=} \;\; & p \displaystyle\sum\limits\_{j = 0}^{\infty} \left( \displaystyle\sum\limits\_{i = j+1}^{\infty} (1-p)^{i-1} p \right) \frac{1}{2} \log(1+\tilde{g}(j)) \\ \stackrel{(c)}{=} \;\; & \displaystyle\sum\limits\_{j = 0}^{\infty} p(1-p)^j \frac{1}{2} \log(1 + \tilde{g}(j)) \\ = \;\; & \displaystyle\sum\limits\_{j = 0}^{\infty} p(1-p)^j \frac{1}{2} \log(1 + p(1-p)^j B\_{max}) \end{aligned}$$ where (a) follows from the fact that {*T*1} is Geometric(*p*), (b) follows from switching the order of summations, and (c) uses the formula for the sum of geometric series. *Step Three:* Finally, to complete the proof, we need to show: $$\begin{aligned} \label{eq: constant\_gap\_upper} \frac{1}{2} \log(1 + p B\_{max}) - \displaystyle\sum\limits\_{j = 0}^{\infty} p (1-p)^j \frac{1}{2} \log(1 + \tilde{g}(j)) \le 0.973\end{aligned}$$ In order to show, we will first restrict the range of (*B**m**a**x*, *p*) to be considered by noticing: $$\begin{aligned} \frac{1}{2} \log(1 + p B\_{max}) - & \displaystyle\sum\limits\_{j = 0}^{\infty} p (1-p)^j \frac{1}{2} \log(1 + \tilde{g}(j)) \\ & \le \frac{1}{2} \log(1 + p B\_{max}) \\ & \le 0.973\end{aligned}$$ for all (*B**m**a**x*, *p*) such that *p**B**m**a**x* ≤ 2.853. So let’s restrict ourself to the set of (*B**m**a**x*, *p*) such that *p**B**m**a**x* > 2.853. We have $$\begin{aligned} & \frac{1}{2} \log(1 + p B\_{max}) - \displaystyle\sum\limits\_{j = 0}^{\infty} p (1-p)^j \frac{1}{2} \log(1 + \tilde{g}(j)) \\ = \; & \frac{1}{2} \log(1 + p B\_{max}) \\ & - \displaystyle\sum\limits\_{j = 0}^{\infty} p (1-p)^j \frac{1}{2} \log(1 + p(1-p)^j B\_{max}) \\ \stackrel{(a)}{\le} \; & \frac{1}{2} \log(1 + p B\_{max}) - \displaystyle\sum\limits\_{j = 0}^{\infty} p (1-p)^j \frac{1}{2} \log(p(1-p)^j B\_{max})) \\ = \; & \frac{1}{2}\log(p) + \frac{1}{2}\log(B\_{max}) + \frac{1}{2}\log\left(\frac{1}{pB\_{max}} + 1\right) \\ & - \displaystyle\sum\limits\_{j = 0}^{\infty} p(1-p)^j \frac{1}{2} \left[\log(p) + j\log\left(1-p\right) +\log( B\_{max})\right] \\ \stackrel{(b)}{=} \; & \frac{1}{2}\log(p) + \frac{1}{2}\log(B\_{max}) + \frac{1}{2}\log\left(\frac{1}{pB\_{max}} + 1\right) \\ & - \frac{1}{2}\log(p) - \frac{1-p}{p} \frac{1}{2} \log\left(1-p\right) -\frac{1}{2} \log( B\_{max}) \\ = \; & \frac{1}{2}\log\left(\frac{1}{pB\_{max}} + 1\right) - \frac{1-p}{p} \frac{1}{2} \log\left(1-p\right) \\ \stackrel{(c)}{\le} \; & \frac{1}{2\ln(2)}\frac{1}{pB\_{max}} - \frac{1-p}{p} \frac{1}{2} \log\left(1-p\right) \\ \stackrel{(d)}{\le} \; & \frac{1}{2\ln(2)}\frac{1}{2.853} + \frac{1-p}{2p} \log\left(\frac{1}{1-p}\right) \\ \stackrel{(e)}{\le} \; & 0.253 + 0.72 \\ = \; & 0.973\end{aligned}$$ where (a) follows from the fact that removing the 1 inside the second log results in an upper bound; (b) uses the identity $\displaystyle\sum\limits\_{j=0}^{\infty} j \cdot p(1-p)^j = (1-p) \mathbb{E}\left[X\right] = \frac{1-p}{p}$, where *X* ∼  Geometric(*p*), and $\displaystyle\sum\limits\_{j = 0}^{\infty} p(1-p)^j = 1$; (c) uses the inequality ln(1 + *x*) ≤ *x* for all *x*; (d) follows from the fact we restrict ourself to the case *p**B**m**a**x* > 2.853; and finally (e) follows from the fact that $$\begin{aligned} g(p) & \triangleq \frac{1-p}{2p} \log\left(\frac{1}{1-p}\right)\end{aligned}$$ is a continuous bounded function on *p* ∈ (0, 1). Furthermore, it’s monotonically decreasing and is upper bounded by $\displaystyle\lim\_{p \to 0} g(p) = \frac{1}{2\ln(2)} = 0.72$. [Proof of Theorem [thm: constgaprate2]] The highlight of the proof has been outlined in the comment after the statement of Theorem [thm: constgaprate2] in Section [subsec: rateBgreaterE]. The proof is essentially the same as Theorem [thm: constgaprate] with *B**m**a**x* replaced by E. Therefore, we will simply reiterate the major steps of the proof without boring the reader with all the details. *Step One:* Using Jensen’s Inequality, Dominated Convergence Theorem and Law of Large Numbers, we have the following upper bound: $$\begin{aligned} \max\_{g \in \mathcal{G}} \liminf\_{N \to \infty} \mathbb{E} \left[ \frac{1}{N} \displaystyle\sum\limits\_{t = 1}^N \frac{1}{2} \log\left( 1 + g(t)\right) \right] \leq \frac{1}{2}\log(1+pE)\end{aligned}$$ *Step Two:* Replace *B**m**a**x* by *E* in Step Two in the proof of Theorem [thm: constgaprate] and follow the exact same sequence of arguments, we can lower bound the rate achieved by *g*′(*t*) as: $$\begin{aligned} \liminf\_{N \to \infty} & \; \mathbb{E} \left[ \frac{1}{N} \displaystyle\sum\limits\_{t = 1}^N \frac{1}{2} \log\left( 1 + g^\prime(t)\right) \right] \\ & \ge \displaystyle\sum\limits\_{j = 0}^{\infty} p(1-p)^j \frac{1}{2} \log(1 + p(1-p)^j E) \end{aligned}$$ *Step Three:* Finally, replace *B**m**a**x* everywhere by *E* in Step Three in the proof of Theorem [thm: constgaprate], we have the following bound: $$\begin{aligned} \frac{1}{2} \log(1 + p E) - \displaystyle\sum\limits\_{j = 0}^{\infty} p (1-p)^j \frac{1}{2} \log(1 + p(1-p)^j & E ) \\ & \le 0.973\end{aligned}$$ which completes the proof. [Proof of Theorem [thm: Achievable]] We want to show that for any {E*j*} satisfying and any *ε* > 0, we can construct a communication strategy that achieves a rate $$\begin{aligned} R \ge \displaystyle\sum\limits\_{j = 0}^{\infty} p & (1-p)^j \max\_{p(x): |X|^2 \le \mathcal{E}\_j} I(X; Y) - \epsilon\end{aligned}$$ and has arbitrarily small probability of error. We start by fixing a positive integer *M*(*ε*), and positive real numbers *δ*2(*ε*, *M*) and *δ*1(*ε*, *M*, *δ*2) such that $$\begin{aligned} & (1 - p)^{M+1} K \le \epsilon/3 \label{eq: M\_Constraint} \\ \frac{\delta\_2}{1+\delta\_2} & \left( 1 - (1-p)^{M+1} \right) K \le \epsilon/3 \label{eq: delta2\_Constraint} \\ & \delta\_1 \cdot \frac{(M+1)p}{1 + \delta\_2} K \le \epsilon/3 \label{eq: delta1\_Constraint}\end{aligned}$$ where $K \triangleq \displaystyle\max\_{j} \left\{ \displaystyle\max\_{p(x): |X|^2 \le \mathcal{E}\_j} I(X; Y) \right\}$ is a finite constant. Furthermore, let *L* be a large positive integer and let $$\begin{aligned} n^{(j)} = L\left( (1-p)^j - \delta\_1 \right)\end{aligned}$$ For a given energy allocation policy {E*j*} for *j* = 0, 1, 2, ... that satisfies, the transmitter and the receiver agree on a sequence of *M* + 1 codebooks: $$\begin{aligned} \mathcal{C}^{(0)}, \mathcal{C}^{(1)}, \mathcal{C}^{(2)},..., \mathcal{C}^{(M)}\end{aligned}$$ where each codebook C(*j*) consists of 2*n*(*j*)*R*(*j*) codewords generated randomly from a distribution *p*(*x*) whose support is limited to $[-\sqrt{\mathcal{E}\_j},\sqrt{\mathcal{E}\_j}]$ so that the symbols of each codeword are amplitude constrained according to $$\begin{aligned} |X\_i^{(j)}(k)|^2 \le \mathcal{E}\_j\end{aligned}$$ for all *i* = 1, 2, ..., *n*(*j*), *j* = 0, 1, 2..., *M* and *k* = 1, 2, ..., 2*n*(*j*)*R*(*j*). During the course of communication the transmitter will aim to send one codeword from each of the *M* + 1 codebooks. Communication proceeds as follows: At each time step *t*, the transmitter sees the realization of the energy process *E**t*, let *j* = *t* − max{*t*ʹ ≤ *t* : *E**t*ʹ = *E*}, i.e. the number of time steps since the last time battery was recharged. Then the transmitter selects the next untransmitted symbol from the codeword it wants to transmit from the codebook C(*j*) (if all *n*(*j*) symbols have been transmitted, it simply transmits the zero symbol). Similarly if *j* > *M*, it transmits the zero symbol. Communication ends when the transmitter observes the arrival of the *L* + 1’th energy packet. (We assume that communication starts with the arrival of the first energy packet). The receiver can track the codebook used by the transmitter and decode each codeword separately by collecting together the corresponding channel observations since we assume that {*E**t*} are also causally known at the receiver. We will say that communication is in error if one of the following events occur: * *E*(*j*), *j* = 0, …, *M* :  by the end of the transmission, the codeword from C(*j*) has not been completely transmitted.=-10 * *E*0 :  the total duration of the communication exceeds $\frac{L}{p}(1 + \delta\_2)$.=-10 * *F*(*j*), *j* = 0, …, *M* :  the codeword from codeword C(*j*) is decoded erroneously at the receiver.=-10 Below we will argue that the probability of each one of these events can be made arbitrarily small by taking *L* large enough. To bound the probability of *E*(*j*), recall the inter arrival time of energy packets, denoted by *T*1, *T*2, ..., *T**L*, are i.i.d. Geometric(*p*) r.v’s. Therefore, $$\begin{aligned} \mathbb{P}(E^{(j)}) & = \mathbb{P}\left( \displaystyle\sum\limits\_{i = 1}^{L} \mathbbm{1}\{ T\_i >j \} \le n^{(j)}\right) \\ & = \mathbb{P}\left( \frac{1}{L} \displaystyle\sum\limits\_{i = 1}^{L} \mathbbm{1}\{ T\_i >j \} \le (1-p)^j - \delta\_1 \right) \\ & \le \epsilon\_1 ( L) \rightarrow 0\end{aligned}$$ as *L* → ∞ by the weak law of large number, since $\mathbb{E}[ \mathbbm{1}\{ T\_i >j \} ] = \mathbb{P}( T\_i >j) = (1-p)^j$. Furthermore, notice the total duration of communication *N* = *T*1 + ⋯ + *T**L* is a random quantity. Then, we have $$\begin{aligned} \mathbb{P}(E\_0) & = \mathbb{P}\left( \displaystyle\sum\limits\_{i = 1}^{L} T\_i > \frac{L}{p}(1 + \delta\_2) \right) \\ & = \mathbb{P}\left( \frac{1}{L} \displaystyle\sum\limits\_{i = 1}^{L} T\_i > \frac{1}{p}(1 + \delta\_2) \right) \\ & \le \epsilon\_2 ( L) \rightarrow 0\end{aligned}$$ as *L* → ∞, again by the weak law of large numbers, since E[*T**i*] = 1/*p*. On the receiver side, the decoder can decode each individual codeword via joint typical decoding once the transmission is finished. The classic channel coding theorem shows that as long as *R*(*j*) ≤ max*p*(*x*) : ∣*X*∣2 ≤ E*j**I*(*X*; *Y*) the probability of each one of the events *F*(0), …, *F*(*M*) can be made arbitrarily small by making *L*, hence *n*(*j*), large enough, using codewords from each codebook C(*j*). Finally, the union bound allows us to conclude that the probability of the union of all these error events can be made arbitrarily small as *L* → ∞ (note that *δ*1, *δ*2 > 0, and *M* are fixed at the beginning). We finally compute the average rate achieved by this communication strategy $$\begin{aligned} & R \ge \frac{p}{L(1+ \delta\_2)} \displaystyle\sum\limits\_{j = 0}^{M} n^{(j)} R^{(j)} \\ = & \frac{p}{L(1+ \delta\_2)} \displaystyle\sum\limits\_{j = 0}^{M} L\left( (1-p)^j - \delta\_1 \right)\max\_{p(x): |X|^2 \le \mathcal{E}\_j} I(X; Y) \\ = & \displaystyle\sum\limits\_{j = 0}^{M} \frac{p}{1+\delta\_2} \left( (1-p)^j - \delta\_1 \right) \max\_{p(x): |X|^2 \le \mathcal{E}\_j} I(X; Y) \\ = & \displaystyle\sum\limits\_{j = 0}^{\infty} p(1-p)^j \max\_{p(x): |X|^2 \le \mathcal{E}\_j} I(X; Y) \\ & - \displaystyle\sum\limits\_{j = M+1}^{\infty} p(1-p)^j \max\_{p(x): |X|^2 \le \mathcal{E}\_j} I(X; Y) \\ & - \displaystyle\sum\limits\_{j = 0}^{M} \frac{\delta\_2 p}{1+\delta\_2} (1-p)^j \max\_{p(x): |X|^2 \le \mathcal{E}\_j} I(X; Y) \\ & - \displaystyle\sum\limits\_{j = 0}^{M} \frac{p}{1+\delta\_2} \delta\_1 \max\_{p(x): |X|^2 \le \mathcal{E}\_j} I(X; Y) \\ \ge & \displaystyle\sum\limits\_{j = 0}^{\infty} p(1-p)^j \max\_{p(x): |X|^2 \le \mathcal{E}\_j} I(X; Y) - (\epsilon/3 +\epsilon/3+\epsilon/3) \end{aligned}$$ which is what we set up to prove. The last step follows from the fact *M*, *δ*2 and *δ*1 are chosen to satisfy the bounds, and. [Proof of Theorem [thm:CapCSIR]][7](#fn7) The capacity of a general channel is given by as the maximum *spectral inf-mutual information rate* between the input and the output, $$C= \sup\_X\text{p-} \liminf\_{n\rightarrow\infty} \frac{1}{n}\log \frac{P\_{Y^n\vert X^n}({y^n\vert x^n})}{P\_{Y^n}(y^n)},$$ where the supremum is over all input processes *X* = {*X**n*}*n* = 1∞. When the receiver has access to a side information process *G* = {*G**n*}*n* = 1∞, this process can be viewed as part of the output of the channel and the corresponding capacity becomes $$C\_G =\sup\_X\; \text{p-}\liminf\_{n\rightarrow\infty}\: \frac{1}{n}\log \frac{P\_{Y^n,G^n\vert X^n}(y^n,g^n \vert x^n)}{P\_{Y^n,G^n}(y^n,g^n)}.$$ We have $$\begin{aligned} C\_G &= \sup\_X\; \text{p-}\liminf\_{n\rightarrow\infty}\: \frac{1}{n}\log \frac{P\_{Y^n,G^n\vert X^n}(y^n,g^n \vert x^n)}{P\_{Y^n,G^n}(y^n,g^n)} \nonumber\quad\\ &= \sup\_X\; \text{p-}\liminf\_{n\rightarrow\infty}\: \Big[ \frac{1}{n}\log \frac{P\_{Y^n\vert X^n}(y^n \vert x^n)}{P\_{Y^n}(y^n)}\nonumber \\ &\qquad\qquad + \frac{1}{n}\log \frac{P\_{G^n\vert Y^n,X^n}(g^n \vert y^n, x^n)}{P\_{G^n\vert Y^n}(g^n\vert y^n)} \Big] \nonumber \\ & \le \sup\_X \text{p-}\liminf\_{n\rightarrow\infty}\: \frac{1}{n}\log \frac{P\_{Y^n\vert X^n}(y^n \vert x^n)}{P\_{Y^n}(y^n)} \nonumber\\ &\quad + \sup\_X \text{p-}\limsup\_{n\rightarrow\infty} \frac{1}{n}\log \frac{P\_{G^n\vert Y^n,X^n}(g^n \vert y^n, x^n)}{P\_{G^n\vert Y^n}(g^n\vert y^n)} \label{sum2sup} \\ & = \: C \: + \sup\_X \text{p-}\limsup\_{n\rightarrow\infty} \frac{1}{n}\log \frac{P\_{G^n\vert Y^n,X^n}(g^n \vert y^n, x^n)}{P\_{G^n\vert Y^n}(g^n\vert y^n)} \nonumber\\ & \le \: C \: + \sup\_X \text{p-}\limsup\_{n\rightarrow\infty} \frac{1}{n}\log \frac{1}{P\_{G^n\vert Y^n}(g^n\vert y^n)} \label{Probless1}\\ & = C \: + \sup\_{X} \bar{H}(G\vert Y) \label{VH:CondEntDef}\\ & \le C \: + \sup\_{X} \bar{H}(G). \label{VH:I}\end{aligned}$$ Here, ([sum2sup]) follows from the fact that for any two sequences of random variables (*V**n*)*n* = 1∞ and (*Z**n*)*n* = 1∞ we have p-liminf*n* → ∞(*V**n* + *Z**n*) ≤ p-liminf*n* → ∞*V**n*  +  p-limsup*n* → ∞*Z**n*. Note that *P**G**n*|*Y**n*, *X**n*(*g**n*|*y**n*, *x**n*) ≤ 1, thus ([Probless1]) holds. ([VH:CondEntDef]) follows by the definition of the *conditional spectral sup-entropy rate*, given as $$\bar{H}(G\vert Y)=\text{p-}\limsup\_{n\rightarrow\infty}\frac{1}{n}\log \frac{1}{P\_{G^n \vert Y^n}(g^n\vert y^n)}.$$ Finally, non-negativity of the inf-mutual information rate $$0 \le \underline{I}(G;Y) \le \bar{H}(G) - \bar{H}(G\vert Y),$$ yields ([VH:I]). In ([VH:I]), we assume that the side information process *G* is independent of the input process, which yields the conclusion of the theorem. Note that when the side information process is a function of the input, the capacity gain due to receiver side information can be bounded by considering the sup of the spectral sup-entropy rate of *G* over the input process. [Proof of Theorem [thm: Achievable2]] Here, we provide an alternative proof for Theorem [thm: Achievable2] that does not use Theorem [thm:CapCSIR]. We first define a specific finite state channel and establish a relation between its capacities with and without channel state information at the receiver, denoted by *C*1 and *C*2 respectively. In particular, we show that *C*2 ≥ *C*1 − *H*(*p*). We then relate the capacity of this finite state channel to our original communication setup defined in Section [sec: SysModel]. We show that the capacity *C* of our original energy harvesting channel without channel state information at the receiver satisfies *C* ≥ *C*2, while *C*1 is larger than the right-hand side of in Theorem [thm: Achievable]. This completes the proof of the theorem. Given any E0, E1, E2, … that satisfy define a finite state channel *p*(*y**t*∣*x**t*, *s**t*) where the output *Y**t* ∈ R depends on the input *X**t* ∈ R to the channel and the channel state *S**t* ∈ {0, 1, ..., *M* + 1} in the following way: $$\begin{aligned} \label{eq:fsc} Y\_t = \left\{\begin{array}{rl} X\_t + Z\_t & \mbox{if } \;\;\; |X\_t|^2 \le \mathcal{E}^\prime\_{S\_t}\\ Z\_t & \mbox{otherwise} \end{array} \right.\end{aligned}$$ where *Z**t* is i.i.d. N(0, 1) Gaussian noise. The channel state process {*S**t*} is a Markov Chain and its (*M* + 2) × (*M* + 2) transition matrix is given by $$\label{eq: Tm} T=\left( \begin{array}{ccccc} p & 1-p & 0 & \cdots & 0 \\ p & 0 & 1- p & \ddots & \vdots \\ \vdots & \vdots & \ddots & \ddots & 0\\ p & 0 & & 0 & 1-p\\ p & 0 & \cdots & 0 & 1-p \end{array} \right)$$ where *T**i**j* = *p*(*S**t* + 1 = *j*∣*S**t* = *i*). The sequence E0′, …, E*M* + 1′ satisfies E*j*′ = E*j* for 0 ≤ *j* ≤ *M* and E*M* + 1′ = 0. There are no constraints on the transmitted signal *X**t* ∈ R over this channel. We assume that the initial state of the channel *S*0 = 0 and the realization of the state sequence {*S**t*} is known causally at the transmitter. We will discuss the capacity of this channel under two different assumptions: 1. The realization of the state sequence {*S**t*} is also known at the receiver. In this case we will denote the capacity by *C*1. 2. The realization of the state sequence {*S**t*} is not known at the receiver. In this case we will denote the capacity by *C*2. Obviously, *C*2 ≤ *C*1. Below, we show that *C*2 can not be much smaller than *C*1. First note that the Markov Chain {*S**t*} is time-invariant, aperiodic and indecomposable. For such channels Theorem 2 of [8](#fn8) establishes the following expression for the capacity, $$\begin{aligned} \label{eq: cap\_rec\_info} C\_D = \sup\_{\underline{F}} \liminf\_{n \to \infty} \frac{1}{n} I (F^n ; Y^n |D^n)\end{aligned}$$ where *D**n* = (*D*1, ..., *D**n*) is the channel state information available at the receiver which is of the form *D**t* = *g*(*S**t*) for a given mapping *g*( ⋅ ). Here, we will be interested in the two extremal cases when *D**n* = *S**n*, in which case the capacity is denoted by *C*1, and *D**n* = ∅ in which case the capacity is denoted by *C*2. Here, *F**n* denotes (*F*1, …, *F**n*) where *F**t* stands for a random variable that takes values in the space of all mappings {*f**t* : *E**t* → X}. Also, $\underline{F}$ denotes a sequence {*F**n*}*n* = 1∞. Now, observe that $$\begin{aligned} C\_1 & = \sup\_{\underline{F}} \liminf\_{n \to \infty} \frac{1}{n} I (F^n ; Y^n |S^n)\nonumber \\ & \le \sup\_{\underline{F}} \liminf\_{n \to \infty} \frac{1}{n} I (F^n ; Y^n, S^n)\nonumber \\ & = \sup\_{\underline{F}} \liminf\_{n \to \infty} \frac{1}{n} \left( I (F^n ; Y^n) + I(F^n;S^n|Y^n) \right) \nonumber\\ & \le \sup\_{\underline{F}} \liminf\_{n \to \infty} \frac{1}{n} I (F^n ; Y^n) + \frac{1}{n}H(S^n) \label{eq:eq1}\\ & = H(p) + \sup\_{\underline{F}} \liminf\_{n \to \infty} \frac{1}{n} I (F^n ; Y^n) \label{eq:eq2}\\ & = H(p)+ C\_{2} \label{eq:CSIRCapDiff}\end{aligned}$$ where follows from the fact that $$\begin{aligned} \lim\_{n\to \infty}\frac{1}{n}H(S^n) & = \lim\_{n \to \infty}{H(S\_{n+1} | S\_{n},\ldots S\_{1})}\\ & = \lim\_{n \to \infty}{H(S\_{n+1} | S\_{n})}\\ & = H(p)\end{aligned}$$ since for the transition matrix *T* in, we have *H*(*S**n* + 1∣*S**n*) = *H*(*p*) independent of the state *S**n*. *H*(*p*) refers to the binary entropy function evaluated at *p*. Note that when $\lim\_{n\to \infty}\frac{1}{n}H(S^n)$ exists, we can decompose the limit inferior of the sum of the two sequences in to the sum of the limit inferior of those sequences. This shows that the capacity of this channel without side information at the receiver is at most *H*(*p*) bits smaller than its capacity with receiver side information. We next relate the capacity of this finite state channel to the capacity of our original channel. In the proof of Theorem [thm: Achievable], we have shown that when the realization of the external energy arrival process {*E**t*} is causally known to both the transmitter and the receiver, for any given energy allocation policy {E*j*} that satisfies, we can achieve a rate $$\begin{aligned} R(M) \ge \displaystyle\sum\limits\_{j = 0}^{\infty} p & (1-p)^j \max\_{p(x): |X|^2 \le \mathcal{E}\_j} I(X; Y) - \epsilon(M),\end{aligned}$$ where *M* determines the maximal number of channel uses employed after each energy packet arrival and as *M* → ∞, *ε*(*M*) → 0. Observe that the same strategy can be employed over the finite state channel defined in and would achieve exactly the same rate *R*(*M*). (Here, the state *S**t* will dictate which of the *M* + 1 codebooks C(0), C(1), C(2), ..., C(*M*) to transmit from at given *t*.) This is because the code constructed in the achievability scheme for our original channel (with receiver side information) will satisfy ∣*X**t*∣2 ≤ E*S**t* at every *t* when applied over the finite state channel. This ensures that the input-output relations for the two channels are the same. Moreover, the transition matrix *T* in of the finite state channel is designed so that the process {E*t*′} induced by {*S**t*} is probabilistically equivalent to the energy allocation process {E*t*} induced by the Bernoulli energy arrival process {*E**t*} over the original channel. This allow us to conclude that *C*1 ≥ *R*(*M*),  or using the result of *C*2 ≥ *R*(*M*) − *H*(*p*). Finally, we want to argue that the capacity of our original system receiver side information is lower bounded by the capacity of this finite state channel receiver side information, i.e. *C* ≥ *C*2. Once this is established, we can conclude that *C* ≥ *R*(*M*) − *H*(*p*). Taking *M* → ∞, hence *ε*(*M*) → 0, completes the proof of the theorem. To show that *C* ≥ *C*2, we argue that any code designed for the finite state channel (without receiver state information) can be applied to our original energy harvesting communication channel (again without receiver side information). When the state information {*S**t*} (or equivalently the realization of the process {E*t*′}) is known causally at the transmitter, a code for communicating over the finite state channel is given by a set of mappings *W* → *f**W**n* = (*f**W*, 1, *f**W*, 2, ..., *f**W*, *n*) for each message *W* (a so-called Shannon strategy), where for given *W*, *f**W*, *t* is a mapping from all the transmitter side information received up to time step *t*, i.e. *S**t* = (*S*0, ..., *S**t*), to the space of channel input symbols, X. This code can not be immediately applied to our original energy harvesting communication channel since the input alphabet for the finite state channel X = R and it can potentially imply using symbols with magnitude ∣*X**t*∣2 > E*S**t*′. However, this code can be modified, without impacting its probability of error and rate, to a form that can be immediately applied over our energy harvesting channel. Let us modify the code by zeroing all the symbols that violate ∣*X**t*∣2 > E*S**t*′. In particular, we define a new sequence of functions $(\overline{f}\_{W,1}, \overline{f}\_{W,2},..., \overline{f}\_{W,n})$ associated with the message *W* as follow: $$\overline{f}\_{W,t}(S^t) = \mathbbm{1}\_{\left\{ |f\_{W,t}(S^t)|^2 \le \mathcal{E}^\prime\_{S\_t} \right\}} f\_{W,t}(S^t).$$ Based on the way our finite state channel is constructed, the output distributions are exactly the same whether we use the code with *f**n* or $\overline{f}^n$, i.e. $$\begin{aligned} p(y^n|f\_W^n, s^n) = p(y^n|\overline{f}\_W^n, s^n) \end{aligned}$$ Therefore, if we keep our decision regions at the decoder exactly the same, we can achieve the exact same rate and probability of error with both codes over the finite state channel. Now that all the potential channel input symbol generated from this new code do satisfy the constraint ∣*X**t*∣2 ≤ E*S**t*′, we can apply this new code over our energy harvesting communication channel: Given the Bernoulli energy harvesting process {*E**t*} define the process {*S**t*′} such that $$S\_t^\prime=\left\{\begin{array}{rl} j &\textrm{if}\quad 0\leq j\leq M,\\ M+1 &\textrm{if}\quad j>M. \end{array} \right.$$ where *j* = *t* − max{*t*ʹ ≤ *t* : *E**t*ʹ = *E*}. Applying the codebook {*f**W**n*}*W* with side information process {*S**t*′} over the original energy harvesting communication channel achieves the exact same rate and probability of error as in the finite state channel. This is because the input-output relations for the two channels are the same and {*S**t*′} and {*S**t*} are probabilistically equivalent. Also note that because of our initial assumption that E0, E1, E2, … satisfy, this is an energy feasible strategy for our harvesting system. This completes the proof of the theorem. [Proof of Theorem [thm: ConstantGapBlessE]] The proof follows almost the same line of argument as in Theorem [thm: constgaprate], except we have a different constant here due to the additional *K*(*p*) = 1.04 + *H*(*p*) in the *C**l**b*(*B**m**a**x*, *p*). Again, we will first restrict the range of (*B**m**a**x*, *p*) to be considered by noticing: $$\begin{aligned} C\_{ub}(B\_{max}, p) - & C\_{lb}(B\_{max}, p ) \le C\_{ub}(B\_{max}, p) \\ & = \frac{1}{2} \log(1 + pB\_{max}) \\ & \le 2.58\end{aligned}$$ for all (*B**m**a**x*, *p*) such that *p**B**m**a**x* ≤ 34.75. Next, let’s restrict ourself to the set of (*B**m**a**x*, *p*) such that *p**B**m**a**x* > 34.75: $$\begin{aligned} & C\_{ub}(B\_{max}, p) - C\_{lb}(B\_{max}, p ) \\ = & \frac{1}{2} \log \left(1 + p B\_{max}\right) \\ &- \left( \displaystyle\sum\limits\_{j = 0}^{\infty} p(1-p)^j \frac{1}{2} \log\left( 1 + (1-p)^j p B\_{max} \right) - K(p)\right)^+ \\ \stackrel{(a)} \le & \frac{1}{2} \log \left(1 + p B\_{max}\right) \\ & - \displaystyle\sum\limits\_{j = 0}^{\infty} p(1-p)^j \frac{1}{2} \log\left( 1 + (1-p)^j p B\_{max} \right) + H(p) + 1.04 \\ \stackrel{(b)} \le & \frac{1}{2\ln(2)}\frac{1}{pB\_{max}} - \frac{1-p}{p} \frac{1}{2} \log\left(1-p\right) + H(p) + 1.04 \\ \stackrel{(c)} \le & \frac{1}{2\ln(2)}\frac{1}{34.75} - \frac{1-p}{p} \frac{1}{2} \log\left(1-p\right) + H(p) + 1.04 \\ = & 1.06 + \frac{1-p}{2p} \log\left(\frac{1}{1-p}\right) + H(p) \\ \stackrel{(d)} \le & 2.58\end{aligned}$$ where (a) comes from removing the ( ⋅ )+ resulting in an upper bound and using the definition of *K*(*p*). (b) uses exactly the same sequence of inequalities in the proof of Theorem [thm: constgaprate], so we won’t repeat here. (c) comes from the fact we restrict ourself to *p**B**m**a**x* > 34.75. Finally, (d) comes from the fact: $$\begin{aligned} g(p) & \triangleq \frac{1-p}{2p} \log\left(\frac{1}{1-p}\right) + H(p) \\ &= \frac{1-p}{2p} \log\left(\frac{1}{1-p}\right) + p \log\left( \frac{1}{p} \right) + (1-p) \log\left( \frac{1}{1-p} \right)\end{aligned}$$ is a continuous bounded function on *p* ∈ (0, 1). Furthermore, it’s concave and attains a maximum value of 1.52 at *p* = 0.413. --- 1. Manuscript received April 1, 2014; revised September 1, 2014. This work was supported by the Center for Science of Information (CSoI), an NSF Science and Technology Center, under grant agreement CCF-0939370. An initial version of this paper was presented at the IEEE International Symposium on Information Theory (ISIT), Honolulu, July 2014.[↩](#fnref1) 2. The authors are with Stanford University, Packard Electrical Engineering, 350 Serra Mall, Stanford, California 94305-9510, USA (e-mails: {ydong2, farnia, aozgur}@ stanford.edu).[↩](#fnref2) 3. Here, we require the harvested energy be stored in the battery before it can be used for transmission. An alternative energy storage model, considered in, is to allow the harvested energy *E**t* to be used instantaneously at time *t* and the remaining amount be stored in the battery. The capacities under the two energy storage models are different in general.[↩](#fnref3) 4. The continuous-time version of this problem has been considered in many works in the literature where it is more commonly referred to as the power control problem and the function *r*(*g*) is called the power-rate or the power-utility function in this case. Note that in our discrete-time setup power corresponds to energy per channel use, and therefore the terms energy (per channel use) and power can be used interchangeably. We prefer to refer to the problem as the energy allocation problem.[↩](#fnref4) 5. The strategy discussed in is a slight variation of the *Uniform Policy* in that the amount of energy utilized in each channel use is  ± *ε* of the average energy arrival rate. *ε* decreases to zero as *B**m**a**x* → ∞ and the decay of *ε* controls the battery discharge probability and the convergence speed to the optimal average rate as *B**m**a**x* → ∞. However, in its essence the strategy is a *Uniform Policy*, so its behavior is captured by the *Uniform Policy* plots.[↩](#fnref5) 6. Indeed, this fact can be shown analytically; for example taking *B**m**a**x* = *E*, one can show that the gap between the upper bound and the rate achieved by the *Uniform Policy* increases to infinity as *B**m**a**x* = *E* → ∞. However, we do not provide a proof of this fact since the trend is already quite obvious from the graphs.[↩](#fnref6) 7. This proof was pointed out to the authors by one of the anonymous reviewers.[↩](#fnref7) 8. Theorem 2 of is stated for multiple access finite state channel. We use a simplified version of it for single-user channels, combined with part 2 of Corollary 1 in the same paper which shows that when the initial state is known to the transmitter, the capacity region is independent of the initial pmf.[↩](#fnref8)
arxiv_0000373
[ 0.3in ] In this paper, we study distributed algorithms for large-scale AUC maximization with a deep neural network as a predictive model. Although distributed learning techniques have been investigated extensively in deep learning, they are not directly applicable to stochastic AUC maximization with deep neural networks due to its striking differences from standard loss minimization problems (e.g., cross-entropy).Towards addressing this challenge, we propose and analyze a communication-efficient distributed optimization algorithm based on a *non-convex concave* reformulation of the AUC maximization, in which the communication of both the primal variable and the dual variable between each worker and the parameter server only occurs after multiple steps of gradient-based updates in each worker. Compared with the naive parallel version of an existing algorithm that computes stochastic gradients at individual machines and averages them for updating the model parameters, our algorithm requires a much less number of communication rounds and still achieves a linear speedup in theory. To the best of our knowledge, this is the **first** work that solves the *non-convex concave min-max* problem for AUC maximization with deep neural networks in a communication-efficient distributed manner while still maintaining the linear speedup property in theory. Our experiments on several benchmark datasets show the effectiveness of our algorithm and also confirm our theory. Introduction ============ Large-scale distributed deep learning  has achieved tremendous successes in various domains, including computer vision , natural language processing , generative modeling , reinforcement learning , etc. From the perspective of learning theory and optimization, most of them are trying to minimize a surrogate loss of a specific error measure using parallel minibatch stochastic gradient descent (SGD). For example, on the image classification task, the surrogate loss is usually the cross entropy between the estimated probability distribution according to the output of a certain neural network and the vector encoding ground-truth labels , which is a surrogate loss of the misclassification rate. Based on the surrogate loss, parallel minibatch SGD  is employed to update the model parameters. However, when the data for classification is imbalanced, AUC (short for Area Under the ROC Curve) is a more suitable measure . AUC is defined as the probability that a positive sample has a higher score than a negative sample . Despite the tremendous applications of distributed deep learning in different fields, the study about optimizing AUC with distributed deep learning technologies is rare. The commonly used parallel mini-batch SGD for minimizing a surrogate loss of AUC will suffer from high communication costs in a distributed setting due to the non-decomposability nature of AUC measure. The reason is that positive and negative data pairs that define a surrogate loss for AUC may sit on different machines. To the best of our knowledge,  is the only work trying to optimize a surrogate loss of *AUC with a deep neural network* that explicitly tackles the non-decomposability of AUC measure. Nevertheless, their algorithms are designed only for the single-machine setting and hence are far from sufficient when encountering a huge amount of data. Although a naive parallel version of the stochastic algorithms proposed in  can be used for distributed AUC maximization with a deep neural network, it would still suffer from high communication overhead due to a large number of communication rounds. [t] [tab:1] [tab:2] l|l|l|l Algorithm & Setting&Iteration Complexity &Communication Complexity PPD-SG & Single & *O*(1/(*μ*2*ε*))& - NP-PPD-SG& Distributed & *O*(1/(*K**μ*2*ε*))& *O*(1/(*K**μ*2*ε*)) CoDA& Distributed & *O*(1/(*K**μ*2*ε*))& *O*(1/(*μ*3/2*ε*1/2)) In this paper, we bridge the gap between stochastic AUC maximization and distributed deep learning by proposing a communication-efficient distributed algorithm for stochastic AUC maximization with a deep neural network. **The focus is to make the total number of communication rounds much less than the total number of iterative updates.** We build our algorithm upon the nonconvex-concave min-max reformulation of the original problem. The key ingredient is to design a communication-efficient distributed algorithm for solving the regularized min-max subproblems using multiple machines. Specifically, we follow the proximal primal-dual algorithmic framework proposed by , i.e., by solving a sequence of quadratically regularized min-max saddle-point problems with periodically updated regularizers successively. **The key difference** is that the inner min-max problem solver is built on a distributed periodic model averaging technique, which consists of a fixed number of stochastic primal-dual updates over individual machines and a small number of averagings of model parameters from multiple machines. This mechanism can greatly reduce the communication cost, which is similar to . However, their analysis cannot be applied to our case since their analysis only works for convex or non-convex *minimization* problems. In contrast, our algorithm is designed for a particular *non-convex concave min-max* problem induced by the original AUC maximization problem. Our contributions are summarized as follows: * We propose a communication-efficient distributed stochastic algorithm named CoDA for solving a nonconvex-concave min-max reformulation of AUC maximization with deep neural networks by local primal-dual updating and periodically global variable averaging. To our knowledge, this is the first communication-efficient distributed stochastic algorithm for learning a deep neural network by AUC maximization. * We analyze the iteration complexity and communication complexity of the proposed algorithm under the commonly used Polyak- Łojasiewicz (PL) condition as in . Comparing with, our theoretical result shows that the iteration complexity can be reduced by a factor of *K* (the number of machines) in a certain region, while the communication complexity (the rounds of communication) is much lower than that of a naive distributed version of the stochastic algorithm proposed in . The summary of iteration and communication complexities is given in Table [tab:2]. * We verify our theoretical claims by conducting experiments on several large-scale benchmark datasets. The experimental results show that our algorithm indeed exhibits good acceleration performance in practice. Related Work ============ #### Stochastic AUC Maximization. It is challenging to directly solve the stochastic AUC maximization in the online learning setting since the objective function of AUC maximization depends on a sum of pairwise losses between samples from positive and negative classes. addresses this problem by maintaining a buffer to store representative data samples, employing the reservoir sampling technique to update the buffer, calculating gradient information based on the data in the buffer, and then performing a gradient-based update rule to update the classifier. does not maintain a buffer, while they instead maintained first-order and second-order statistics of the received data to update the classifier by gradient-based update. Both of them are infeasible in big data scenarios since  suffers from a large amount of training data and  is not suitable for high dimensional data.  addresses these issues by introducing a min-max reformulation of the original problem and solving it by a primal-dual stochastic gradient method , in which no buffer is needed and the per-iteration complexity is the same magnitude as the dimension of the feature vector.  improves the convergence rate by adding a strongly convex regularizer upon the original formulation. Based on the same saddle point formulation as in ,   gets an improved convergence rate by developing a multi-stage algorithm without adding the strongly convex regularizer. However, all of these studies focus on learning a linear model. Recently, considers stochastic AUC maximization for learning a deep non-linear model, in which they designed a proximal primal-dual gradient-based algorithm under the PL condition and established non-asymptotic convergence results. #### Communication Efficient Algorithms. There are multiple approaches for reducing the communication cost in distributed optimization, including skipping communication and compression techniques. Due to limit of space, we mainly review the literature on skipping communication. For compression techniques, we refer the readers to  and references therein. Skipping communication is realized by doing multiple local gradient-based updates in each worker before aggregating the local model parameters. One special case is so-called one-shot averaging , where each machine solves a local problem and averages these solutions only at the last iterate. Some works   consider one-shot averaging with one-pass of the entire data and establish statistical convergence, which is usually not able to guarantee the convergence of the training error. The scheme of local SGD update at each worker with skipping communication is analyzed for convex  and nonconvex problems . There are also several empirical studies  showing that this scheme exhibits good empirical performance for distributed deep learning. However, all of these works only consider minimization problems and do not apply to the nonconvex-concave min-max formulation as considered in this paper. #### Nonconvex Min-max Optimization. Stochastic nonconvex min-max optimization has garnered increasing attention recently . considered the case where the objective function is weakly-convex and concave and proposed an algorithm based on the spirit of proximal point method , in which a proximal subproblem with periodically updated reference points is approximately solved by an appropriate stochastic algorithm. They established the convergence to a nearly stationary point for the equivalent minimization problem. Under the same setting,  designed a block-based algorithm and showed that it can converge to a solution with a small stationary gap, and  considered solving the problem using vanilla stochastic gradient descent ascent and established its convergence to a stationary point under the smoothness assumption. There are also several papers  trying to solve non-convex non-concave min-max problems.  proposed an inexact proximal point method for solving a class of weakly-convex weakly-concave problems, which was proven to converge to a nearly stationary point.  exploited the PL condition for the inner maximization problem and designed a multi-step alternating optimization algorithm which was able to converge to a stationary point.  considered solving a class of nonconvex-nonconcave min-max problems by designing an adaptive gradient method and established an adaptive complexity for finding a stationary point. However, none of them is particularly designed for the distributed stochastic AUC maximization problem with a deep neural network. Preliminaries and Notations =========================== The area under the ROC curve (AUC) on a population level for a scoring function *h* : X → R is defined as $$AUC(h) = \text{Pr}(h(\x) \geq h(\x') | y=1, y'=-1),$$ where $\z = (\x, y)$ and $\z'=(\x', y')$ are drawn independently from P. By employing the squared loss as the surrogate for the indicator function which is commonly used by previous studies , the deep AUC maximization problem can be formulated as $$\min\limits\_{\w\in\mathbb{R}^d} \E\_{\z, \z'}\left[(1-h(\w;\x)+h(\w;\x'))^2| y=1, y'=-1\right], \label{original\_deep\_auc}$$ where $h(\w; \x)$ denotes the prediction score for a data sample $\x$ made by a deep neural network parameterized by $\w$. It was shown in that the above problem is equivalent to the following min-max problem: $$\label{min-max} \min\limits\_{\w\in \mathbb{R}^d \atop (a, b)\in \mathbb{R}^2} \max\limits\_{\alpha\in \mathbb{R}} f(\w, a, b, \alpha)=\E\_\z[F(\mathbf{w}, a, b, \alpha, \z)],$$ where $$\begin{split} & F(\textbf{w}, a, b, \alpha; \z) = (1-p) (h(\w; \x)- a)^2 \mathbb{I}\_{[y=1]} \\ &+ p(h(\w; \x) - b)^2\mathbb{I}\_{[y=-1]} + 2(1+\alpha)(p h(\w; \x)\mathbb{I}\_{[y=-1]} \\ &- (1-p) h(\w, \x)\mathbb{I}\_{[y=1]}) - p(1-p)\alpha^2, \end{split}$$ where *p* = Pr(*y* = 1) denotes the prior probability that an example belongs to the positive class, and $\mathbb I$ denotes an indicator function. The above min-max reformulation allows us to decompose the expectation over all data into the expectation over data on individual machines. In this paper, we consider the following distributed AUC maximization problem: $$\label{opt:problem} \min\limits\_{\w\in \mathbb{R}^d \atop (a, b)\in \mathbb{R}^2} \max\limits\_{\alpha\in \mathbb{R}} f(\w, a, b, \alpha)=\frac{1}{K}\sum\limits\_{k=1}^{K} f\_k(\mathbf{w}, a, b, \alpha),$$ where *K* is the total number of machines, $f\_k(\w, a, b, \alpha)= \E\_{\z^k}[F\_k(\w, a,b,\alpha; \z^k)]$, $\z^k = (\x^k, y^k)\sim \mathbb{P}\_k$, P*k* is the data distribution on machine *k*, and $F\_k(\w, a,b,\alpha; \z^k) = F(\w, a,b,\alpha; \z^k)$. Our goal is to utilize *K* machines to jointly solve the optimization problem ([opt:problem]). We emphasize that the *k*-th machine can only access data $\z^k\sim \mathbb P\_k$ of its own. It is notable that our formulation includes both the batch-learning setting and the online learning setting. For the batch-learning setting, $\mathbb P\_k$ represents the empirical distribution of data on the *k*-th machine and *p* denotes the empirical positive ratio for all data. For the online learning setting, $\mathbb P\_k= \mathbb P,\forall k$ represents the same population distribution of data and *p* denotes the positive ratio in the population level. #### Notations. We define the following notations: $$\begin{aligned} &\v = (\w^T, a, b)^T, \quad \phi(\v) = \max\_{\alpha} f(\v, \alpha),\\ & \phi\_s (\v) = \phi(\v) + \frac{1}{2\gamma}\|\v-\v\_{s-1}\|^2,\\ &\v^\*\_{\phi} = \arg\min\limits\_{\v}\phi(\v),\quad \v^\*\_{\phi\_s} = \arg\min\limits\_{\v} \phi\_s(\v).\end{aligned}$$ We make the following assumption throughout this paper.   [ass:1] (i) There exist $\v\_0, \Delta\_0>0$ such that $\phi(\v\_0) - \phi(\v^\*\_\phi)\leq \Delta\_0$. (ii) For any $\x$, $\|\nabla h(\w; \x)\| \leq G\_h$. (iii) $\phi(\v)$ satisfies the *μ*-PL condition, i.e., $\mu(\phi(\v)-\phi(\v\_\*))\leq \frac{1}{2}\|\nabla\phi(\v)\|^2$; $\phi(\v)$ is *L*1-smooth, i.e., $\|\phi(\v\_1) - \phi(\v\_2)\| \leq L\_1 \|\v\_1 - \v\_2\|$. (iv) For any $\x$, $h(\w; \x)$ is *L**h*-smooth, and $h(\w; \x) \in [0, 1]$. [assumption1] **Remark:** Assumptions (i), (ii), (iii) and $h(\w; \x)\in [0, 1]$ of (iv) are also assumed in , which have been justified as well. *L*-smoothness of function *h* is a standard assumption in the optimization literature. Finally, it should be noted that *μ* is usually much smaller than 1 . This is important for us to understand our theoretical result later. Main Result and Theoretical Analysis ==================================== In this section, we first describe our algorithm, and then present its convergence result followed by its analysis. For simplicity, we assume that the ratio *p* of data with the positive label is known. For the batch learning setting, *p* is indeed the empirical ratio of positive examples. For the online learning setting with an unknown distribution, we can follow the online estimation technique in  to do the parameter update. Algorithm [outerloop] describes the proposed algorithm CoDA for optimizing AUC in a communication-efficient distributed manner. CoDA shares the same algorithmic framework as proposed in . In particular, we employ a proximal-point algorithmic scheme that successively solves the following convex-concave problem approximately: $$\min\_{\v}\max\_{\alpha}f(\v,\alpha)+\frac{1}{2\gamma}\|\v-\v\_0\|^2,$$ where *γ* is an appropriate regularization parameter to make sure that the regularized function is strongly-convex and strongly-concave. The reference point $\v\_0$ is periodically updated after a number of iterations. At the *s*-th stage our algorithm invokes a communication-efficient algorithm for solving the above strongly-convex and strongly-concave subproblems. After obtaining a primal solution $\v\_s$ at the *s*-th stage, we sample some data from individual machines to obtain an estimate of the corresponding dual variable *α**s*. [t] [1] [outerloop] Our new contribution is the communication-efficient distributed algorithm for solving the above strongly-convex and strongly-concave subproblems. The algorithm referred to as DSG is presented in Algorithm [innerloop]. Each machine makes a stochastic proximal-gradient update on the primal variable and a stochastic gradient update on the dual variable at each iteration. After every *I* iterations, all the *K* machines communicate to compute an average of local primal solutions $\v^k\_t$ and local dual solutions *α**t**k*. It is not difficult to show that when *I* = 1, our algorithm reduces to the naive parallel version of the PPD-SG algorithm proposed in , i.e., by averaging individual primal and dual gradients and then updating the primal-dual variables according to the averaged gradient [1](#fn1). Our novel analysis allows us to use *I* > 1 to skip communications, leading to a much less number of communications. The intuition behind this is that, as long as the step size *η**s* is sufficiently small we can control the distance between individual solutions $(\v^k\_t, \alpha^k\_t)$ to their global averages, which allows us to control the error term that is caused by the discrepancy between individual machines. We will provide more explanations as we present the analysis. Below, we present the main theoretical result of CoDA. Note that in the following presentation, $L\_\v, H, B, \sigma\_\v, \sigma\_\alpha$ are appropriate constants, whose values are given in the proofs of Lemma [properties] and Lemma [lemmaonestage] in the supplement. [thm:main] Set $\gamma = \frac{1}{2L\_{\v}}$, $c =\frac{\mu/L\_{\v}}{5+\mu/L\_{\v}}$, *η**s* = *η*0*K*exp( − (*s* − 1)*c*) ≤ *O*(1), $T\_s = \frac{\max(8, 8G\_h^2)}{L\_{\v}\eta\_0 K} \exp( (s-1)c)$, $I\_s = \max(1, 1/\sqrt{K\eta\_s})$ and $m\_{s}=\max\left(\frac{1+C}{\eta\_{s+1}^2 T\_{s+1} p^2(1-p)^2}, \frac{\log(K)}{\log(1/\Tilde{p})}\right)$, where $C = \frac{3\tilde{p}^{\frac{1}{\ln(1/\Tilde{p})}}}{2\ln(1/\Tilde{p})}$ and $\Tilde{p} = \max(p, 1-p)$. To return $\v\_S$ such that $\E[\phi(\v\_S) - \phi(\v^\*\_{\phi})] \leq \epsilon$, it suffices to choose $S\geq \frac{5L\_{\v}+\mu}{\mu} \max\bigg\{\log \left(\frac{2\Delta\_0}{\epsilon}\right), \log S + \log\bigg[ \frac{2\eta\_0}{\epsilon} \frac{6HB^2 + 12(\sigma\_{\v}^2+\sigma\_{\alpha}^2)}{5}\bigg]\bigg\}$. As a result, the number of iterations is at most $T=\widetilde{O}\bigg(\max\left(\frac{\Delta\_0}{\mu \epsilon \eta\_0 K}, \frac{L\_{\v}}{\mu^2 K\epsilon}\right)\bigg)$ and the number of communications is at most $\widetilde{O}\bigg(\max\left(\frac{K}{\mu} + \frac{\Delta\_0^{1/2}}{\mu(\eta\_0 \epsilon)^{1/2}}, \frac{K}{\mu} + \frac{L\_{\v}^{1/2}}{\mu^{3/2}\epsilon^{1/2}}\right)\bigg)$, where $\widetilde{O}$ suppresses logarithmic factors, and $H, B, \sigma\_\v, \sigma\_\alpha$ are appropriate constants. [theoremallstage] [t] [innerloop] We have the following remarks about Theorem [thm:main]. * First, we can see that the step size *η**s* is reduced geometrically in a stagewise manner. This is due to the PL condition. We note that a stagewise geometrically decreasing step size is usually used in practice in deep learning . Second, by setting *η*0 = *O*(1/*K*) we have $I\_s =\Theta(\frac{1}{\sqrt{K}}\exp((s-1)c/2)$. It means two things: (i) the larger the number of machines the smaller value of *I**s*, i.e., more frequently the machines need to communicate. This is reasonable since more machines will create a larger discrepancy between data among different machines; (ii) the value *I**s* can be increased geometrically across stages. This is because that the step size *η**s* is reduced geometrically, which causes one step of primal and dual updates on individual machines to diverge less from their averaged solutions. As a result, more communications can be skipped. * Second, we can see that when *K* ≤ Θ(1/*μ*), we have the total iteration complexity given by $\widetilde O(\frac{1}{\mu^2K \epsilon})$. Compared with the iteration complexity of the PPD-SG algorithm proposed in  that is $\widetilde O(\frac{1}{\mu^2 \epsilon})$, the proposed algorithm CoDA enjoys an iteration complexity that is reduced by a factor of *K*. This means that up to a certain large threshold Θ(1/*μ*) for the number *K* of machines, CoDA enjoys a linear speedup. * Finally, let us compare CoDA with the naive parallel version of PPD-SG, which is CoDA by setting *I* = 1. In fact, our analysis of the iteration complexity for this case is still applicable, and it is not difficult to show that the iteration complexity of the naive parallel version of PPD-SG is given by $\widetilde O(\frac{1}{\mu^2K \epsilon})$ when *K* ≤ 1/*μ*. As a result, its communication complexity is also $\widetilde O(\frac{1}{\mu^2K \epsilon})$. In contrast, CoDA’s communication complexity is $\widetilde O(\frac{1}{\mu^{3/2}\epsilon^{1/2}})$ when $K\leq \frac{1}{\mu}\leq \frac{1}{\mu^{1/2}\epsilon^{1/2}}\leq \frac{1}{\epsilon}$ according to Theorem [thm:main] [2](#fn2). Hence, our algorithm is more communication efficient, i.e., $\widetilde O(\frac{1}{\mu^{3/2}\epsilon^{1/2}})\leq \widetilde O(\frac{1}{\mu^2K \epsilon})$ when $K\leq \frac{1}{\mu}$ [3](#fn3). This means that up to a certain large threshold Θ(1/*μ*) for the number *K* of machines, CoDA has a smaller communication complexity than the naive parallel version of PPD-SG. Analysis -------- Below, we present a sketch of the proof of Theorem [thm:main] by providing some key lemmas. We first derive some useful properties regarding the random function $F\_k(\v, \alpha, \z)$. [properties] Suppose that Assumption [assumption1] holds and $\eta \leq \min(\frac{1}{2p(1-p)}, \frac{1}{2(1-p)}, \frac{1}{2p})$. Then there exist some constants $L\_2, B\_\alpha, B\_\v, \sigma\_{\v}, \sigma\_{\alpha}$ such that $$\begin{aligned} &\|\nabla\_{\v} F\_k(\v\_1, \alpha; \z) - \nabla\_{\v} F\_k(\v\_2, \alpha; \z)\| \leq L\_2 \|\v\_1-\v\_2\|,\\ &\|\nabla\_{\v} F\_k(\v, \alpha; \z)\|^2 \leq B\_{\v}^2, |\nabla\_{\alpha} F\_k(\v, \alpha; \z)|^2 \leq B\_{\alpha}^2,\\ &\E[\|\nabla\_{\v} f\_k(\v, \alpha) - \nabla\_{\v} F\_k(\v, \alpha; \z)\|^2] \leq \sigma\_{\v}^2,\\ &\E[|\nabla\_{\alpha}f\_k(\v, \alpha) - \nabla\_{\alpha} F\_k(\v, \alpha; \z)|] \leq \sigma^2\_{\alpha}.\end{aligned}$$ **Remark:** We include the proofs of these properties in the Appendix. In the following, we will denote $B^2 = \max(B\_{\v}^2, B\_{\alpha}^2)$ and $L\_{\v} = \max(L\_1, L\_2)$. Next, we introduce a key lemma, which is of vital importance to establish the upper bound of the objective gap of the regularized subproblem. (One call of Algorithm [innerloop]) Let $\psi(\v) = \max\limits\_{\alpha} f(\v, \alpha) + \frac{1}{2\gamma}\|\v - \v\_0\|^2$, $\tilde{\v}$ be the output of Algorithm [innerloop], and $\v^\*\_{\psi} = \arg\min \psi(\v)$, $\alpha^\*(\tilde{\v}) = \arg\max\limits\_{\alpha}f(\tilde{\v}, \alpha)+\frac{1}{2\gamma}\|\tilde{\v}-\v\_0\|^2$. By running Algorithm [innerloop] with given input $\v\_0, \alpha\_0$ for *T* iterations, $\gamma = \frac{1}{2L\_{\v}}$, and $\eta \leq \min\{\frac{1}{L\_{\v}+3G^2\_{\alpha}/\mu\_{\alpha}}, \frac{1}{L\_{\alpha}+3G\_{\v}^2/L\_{\v}}, \frac{3}{2\mu\_{\alpha}}, \frac{1}{2p(1-p)}, \frac{1}{2(1-p)}, \frac{1}{2p}\}$, we have $$\begin{aligned} \E[\psi(\tilde{\v}) - \min\_{\v}\psi(\v)] \leq& \!\frac{2\|\v\_0 - \v\_{\psi}^\*\|^2\! +\! \E[(\alpha\_{0} - \alpha^\*(\tilde{\v}))^2]}{\eta T} \\ &\!+\! H\!\eta^2\! I^2\! B^2 \I\_{I>1}\! +\! \frac{\eta (2\sigma\_{\v}^2\! +\! 3\sigma\_{\alpha}^2)}{2K}, \end{aligned}$$ where *μ**α* = 2*p*(1 − *p*), *L**α* = 2*p*(1 − *p*), *G**α* = 2max{*p*, 1 − *p*}, $G\_{\v} = 2\max\{p, 1-p\} G\_h$, and $H = \left(\frac{6G\_{\v}^2}{\mu\_{\alpha}} + 6L\_{\v} + \frac{6G\_{\alpha}^2}{L\_{\v}} + \frac{6L\_{\alpha}^2}{\mu\_{\alpha}}\right)$. [lemmaonestage] **Remark:** The above result is similar to Lemma 2 in . The key difference lies in the second and third terms in the upper bound. The second term arises because of the discrepancy of updates between individual machines. The third term is due to the variance reduction by using multiple machines, which is the key to establish the linear speed-up. It is easy to see that by setting $I = \frac{1}{\sqrt{\eta K}}$, the second term and the third term have the same order. With the above lemma, the proof of Theorem [thm:main] follows similar analysis to in . **Sketch of the Proof of Lemma [lemmaonestage].** Below, we present a roadmap for the proof of the key Lemma [lemmaonestage]. The main idea is to first bound the objective gap of the subproblem in Lemma [lem:objgap]. Then we further bound every term in the RHS in Lemma [lem:objgap] appropriately, which is realized by Lemma [lem:var1], Lemma [lem:var2] and Lemma [stichlemma]. All the detailed proofs of Lemmas can be found in Appendix. [lem:objgap] Define $\bar\v\_t = \frac{1}{K}\sum\_{k=1}^K\v^k\_t, \bar{\alpha}\_t = \frac{1}{K}\sum\_{k=1}^K\alpha^k\_t$. Suppose Assumption [assumption1] holds and by running Algorithm [innerloop], we have $$\begin{aligned} &\psi(\tilde{\v}) - \min\limits\_{\v} \psi(\v)\\ &\!\leq\! \frac{1}{T}\sum\limits\_{t=1}^{T}\bigg[\! \underbrace{\!\langle \nabla\_{\v}f(\bar{\v}\_{t-1}, \bar{\alpha}\_{t-1}),\! \bar{\v}\_t\! -\! \v\_{\psi}^\*\rangle \!+\!2L\_{\v}\! \langle \bar{\v}\_t\!- \!\v\_0,\! \bar{\v}\_t\!-\!\v\_{\psi}^\*\rangle}\_{A\_1} \\ & + \underbrace{ \langle \nabla\_{\alpha} f(\bar{\v}\_{t-1}, \bar{\alpha}\_{t-1}), \alpha^\* - \bar{\alpha}\_t \rangle }\_{A\_2}\end{aligned}$$ $$\begin{aligned} &+\!\underbrace{\frac{L\_{\v} + 3G\_{\alpha}^2/\mu\_{\alpha}}{2}\|\bar{\v}\_t - \bar{\v}\_{t-1}\|^2 \!+\! \frac{L\_{\alpha} + 3G\_{\v}^2/L\_{\v}}{2} (\bar{\alpha}\_t - \bar{\alpha}\_{t-1})^2}\_{A\_3} \\ &+ \frac{2L\_{\v}}{3} \| \bar{\v}\_{t-1} - \v\_{\psi}^\*\|^2 - L\_{\v} \| \bar\v\_t - \v\_{\psi}^\* \|^2 - \frac{\mu\_{\alpha}}{3} (\bar{\alpha}\_{t-1} - \alpha^\*)^2\bigg]. \label{sum\_v\_alpha}\end{aligned}$$ Next, we will bound *A*1, *A*2 in Lemma [lem:var1] and Lemma [lem:var2]. *A*3 can be cancelled with similar terms in the following two lemmas. The remaining terms will be left to form a telescoping sum with other similar terms in the following two lemmas. [lem:var1] Define $\hat{\v}\_t\! =\! \arg\min\limits\_{\v}\! \left(\! \frac{1}{K}\!\sum\limits\_{k=1}^{K}\! \nabla\_{\v}\! f\!(\!\v^k\_{t-1}\!,\! \alpha^k\_{t-1}\!)\!\right)^T\! \v\!$ $+ \frac{1}{2\eta} \|\v - \bar{\v}\_{t-1}\|^2 + \frac{1}{2\gamma}\|\v - \v\_{0}\|^2$. We have $$\begin{split} &A\_1 \!\leq \! \frac{3G\_{\alpha}^2}{2L\_{\v}}\frac{1}{K} \sum\limits\_{k=1}^{K} (\bar{\alpha}\_{t-1}-\alpha\_{t-1}^{k})^2 \!+\! \frac{3L\_{\v}}{2}\frac{1}{K}\sum\limits\_{k=1}^{K}\|\bar{\v}\_{t-1}\!-\!\v\_{t-1}^k\|^2 \\ &+ \eta \left\|\frac{1}{K}\sum\limits\_{k=1}^{K}[\nabla\_{\v}f\_k(\v\_{t-1}^k, \alpha\_{t-1}^k) \!-\! \nabla\_{\v}F\_k(\v\_{t-1}^{k}, \alpha\_{t-1}^k; \z\_{t-1}^k)]\right\|^2 \\ &\!+\! \frac{1}{K}\!\sum\limits\_{k=1}^{K}\!\langle \!\nabla\_{\v}f\_k(\v\_{t-1}^k,\! \alpha\_{t-1}^k)\!-\!\nabla\_{\v}F\_k(\v\_{t-1}^{k},\! \alpha\_{t-1}^k;\z\_{t-1}^k),\! \hat{\v}\_t - \v\_{\psi}^\*\rangle\\ & \!+\!\frac{1}{2\eta} \!(\!\|\bar{\v}\_{t-1}\!-\!\v\_{\psi}^\*\|^2 \!-\! \|\bar{\v}\_{t-1} \!-\! \bar{\v}\_t\|^2 \!-\! \|\bar{\v}\_t \!-\! \v\_{\psi}^\*\|^2) \!+\! \frac{L\_{\v}}{3}\|\bar{\v}\_t\! -\! \v\_{\psi}^\*\|^2. \end{split}$$ [lem:var2] Define $\hat{\alpha}\_t\! = \!\bar{\alpha}\_{t-1} + \frac{\eta}{K}\sum\limits\_{k=1}^{K} \nabla\_{\alpha} f\_k(\v\_{t-1}^{k}, \alpha\_{t-1}^k)$, and $$\tilde{\alpha}\_{t}\! =\! \tilde{\alpha}\_{t-1}\! +\! \frac{\eta}{K}\sum\limits\_{k=1}^{K}( \nabla\_{\alpha} F\_k(\v\_{t-1}^{k},\alpha\_{t-1}^k; \z\_{t-1}^k) \!-\!\nabla\_{\alpha} f\_k(\v\_{t-1}^{k},\!\alpha\_{t-1}^k) ).$$ We have, $$\begin{split} &A\_2 \!\leq\! \frac{3G\_{\v}^2}{2\mu\_{\alpha}} \frac{1}{K}\sum\limits\_{k=1}^{K}\|\bar{\v}\_{t-1}-\v\_{t-1}^k\|^2 \!+\! \frac{3L\_{\alpha}^2}{2\mu\_{\alpha}}\frac{1}{K}\sum\limits\_{k=1}^{K} (\bar{\alpha}\_{t-1} - \alpha\_{t-1}^k)^2\\ &+\frac{3\eta}{2} ( \frac{1}{K} \sum\limits\_{k=1}^{K}[ \nabla\_{\alpha} f\_k(\v\_{t-1}^k, \alpha\_{t-1}^k) - \nabla\_{\alpha} F\_k (\v\_{t-1}^k, \alpha\_{t-1}^k; \z\_{t-1})])^2\\ &+\!\frac{1}{K}\sum\limits\_{k=1}^{K}\!\langle\! \nabla\_{\alpha}\! f\_k(\v\_{t-1}^k,\!\alpha\_{t-1}^k)\!- \! \nabla\_{\alpha}\!F\_k(\v\_{t-1}^k,\!\alpha\_{t-1}^k;\!\z\_{t-1}^k),\!\tilde{\alpha}\_{t-1}\!-\!\hat{\alpha}\_t\!\rangle\\ &\!+\!\frac{1}{2\eta} ((\bar{\alpha}\_{t-1} \!-\! \alpha^\*(\Tilde{\v}))^2 \!- \!(\bar{\alpha}\_{t-1} \!-\! \bar{\alpha}\_t)^2 \!-\! (\bar{\alpha}\_t \!-\! \alpha^\*(\Tilde{\v}))^2) \!\\ & +\! \frac{\mu\_{\alpha}}{3} (\bar{\alpha}\_{t} \!-\! \alpha^\*(\Tilde{\v}))^2 + \frac{1}{2\eta} ((\alpha^\*(\Tilde{\v}) - \tilde{\alpha}\_t)^2 - (\alpha^\*(\Tilde{\v})-\Tilde{\alpha}\_{t+1})^2). \end{split}$$ The first two terms in the upper bounds of *A*1, *A*2 are the differences between individual solutions and their averages, the third term is the variance of stochastic gradient, and the expectation of the fourth term will diminish. The lemma below will bound the difference between the averaged solution and the individual solutions. [stichlemma] If *K* machines communicate every *I* iterations, and update with step size *η*, then $$\begin{aligned} &\frac{1}{K}\sum\limits\_{k=1}^{K} \E[\|\bar{\v}\_t - \v\_t^k\|^2] \leq 4\eta^2I^2 B\_{\v}^2\mathbb I\_{I>1}\\ &\frac{1}{K}\sum\limits\_{k=1}^{K} \E[\|\bar{\alpha}\_t - \alpha\_t^k\|^2] \leq 4\eta^2 I^2 B\_{\alpha}^2\mathbb I\_{I>1}.\end{aligned}$$ Combining the results in Lemma [lem:objgap], Lemma [lem:var1], Lemma [lem:var2] and Lemma [stichlemma], we can prove the key Lemma [lemmaonestage]. Experiments =========== In this section, we conduct some experiments to verify our theory. In our experiments, one “machine” corresponds to one GPU. We use a cluster of 4 computing nodes with each computer node having 4 GPUs, which gives a total of 16 “machines”. We would like to emphasize that even though 4 GPUs sit on one computing node, they only access to different parts of the data. For the experiment with *K* = 1 GPU, We run one computing node by using one GPU. For experiments with *K* = 4 GPUs, we run one computing node by using all four GPUs, and for those experiments with *K* = 16 GPUs, we use four computing nodes by using all GPUs. We notice that the communication costs among GPUs on one computing node may be less than that among GPUs on different computing nodes. Hence, it should be kept in mind that when comparing with *K* = 4 GPUs on different computing nodes, the margin of using *K* = 16 GPUs over using *K* = 4 GPUs should be larger than what we will see in our experimental results. All algorithms are implemented by PyTorch. **Data**. We do experiments on 3 datasets: Cifar10, Cifar100 and ImageNet. For Cifar10, we split the original training data into two classes, i.e., the positive class contains 5 original classes and the negative class is composed of the other 5 classes. The Cifar100 dataset is split in a similar way, i.e., the positive class contains 50 original classes and the negative class is composed of the other 50 classes. Testing data for Cifar10 and Cifar100 is the same as the original dataset. For the ImageNet dataset, we sample 1% of the original training data as testing data and use the remaining data as the training data. The training data is split in a similar way to Cifar10 and Cifar100, i.e., the positive class contains 500 original classes and the negative class is composed of the other 500 classes. For each dataset, we create two versions of training data with different positive ratios. By keeping all data in the positive and negative classes, we have *p* = 50% for all three datasetes. To create imbalanced data, we drop some proportion of the negative data for each dataset and keep all the positive examples. In particular, by keeping all the positive data and 40% of the negative data we construct three datasets with positive ratio *p* = 71%. Training data is shuffled and evenly divided to each GPU, i.e., each GPU has access to 1/*K* of the training data, where *K* is the number of GPUs. For all data, we use ResNet50 as our neural network and initialize the model as the pretrained model from PyTorch. Due to the limit of space, we only report the results on datasets with *p* = 71% positive ratio, and other results are included in the supplement. **Baselines and Parameter Setting.** For baselines, we compare with the single-machine algorithm PPD-SG as proposed in , which is represented by *K* = 1 in our results, and the naive parallel version of PPD-SG, which is denoted by *K* =  X, *I* = 1 in our results. For all algorithms, we set *T**s* = *T*03*k*, *η**s* = *η*0/3*k*. *T*0 and *η*0 are tuned for PPD-SG and set to the same for all other algorithms for fair comparison. *T*0 is tuned in [2000, 5000, 10000], and *η*0 is tuned in [0.1, 0.01, 0.001]. We fix the batch size for each GPU as 32. For simplicity, in our experiments we use a fixed value of *I* in order to see its tradeoff with the number of machines *K*. [ht] [fig:imagenet71] [fig:cifar10071] [fig:cifar1071] [fig:ablationc100K4P71] [fig:ablationc10K4P71] **Results.** We plot the curve of testing AUC versus the number of iterations and versus running time. We notice that evaluating the training objective function value on all examples is very expensive, so we use the testing AUC as our evaluation metric. It may cause some gap between our results and the theory; however, the trend should be enough for our purpose to verify that our distributed algorithms can enjoy faster convergence in both the number of iterations and running time. We have the following observations. * **Varying *K*.** By varying *K* and fixing the value of *I*, we aim to verify the parallel speedup. The results are shown in Figure [fig:imagenet71](a), Figure [fig:cifar10071](a) and Figure [fig:cifar1071](a). They show that when *K* becomes larger, then our algorithm requires a less number of iterations to converge to the target AUC, which is consistent with the parallel speedup result as indicated by Theorem [theoremallstage]. In addition, CoDA with *K* = 16 machines is also the most time-efficient algorithm among all settings. * **Varying *I*.** By varying *I* and fixing the value of *K*, we aim to verify that skipping communications up to a certain number of iterations of CoDA does not hurt the iteration complexity but can dramatically reduce the total communication costs. In particular, we fix *K* = 16 and vary *I* in the range {1, 8, 64, 512, 1024}. The results are shown in Figures [fig:imagenet71](b), Figures [fig:cifar10071](b) and Figures [fig:cifar1071](b). They exhibit that even when *I* becomes moderately large, our algorithm is still able to deliver comparable performance in terms of the number of iterations compared with the case when *I* = 1. The largest value of *I* that does not cause a dramatic performance drop compared with *I* = 1 is *I* = 1024, *I* = 64, *I* = 64 on ImageNet, CIFAR100 and CIFAR10, respectively. However, up to these thresholds the running time of CoDA can be dramatically reduced than the naive parallel version with *I* = 1. * **Trade-off between *I* and *K*.** Finally, we verify the trade-off between *I* and *K* as indicated in Theorem [thm:main]. To this end, we conduct experiments by fixing *K* = 4 GPUs and varying the value *I*, and comparing the limits of *I* for *K* = 4 and *K* = 16. The results of using *K* = 4 on CIFAR100 and CIFAR10 are reported in Figure [fig:ablationc100K4P71] and Figure [fig:ablationc10K4P71]. We can observe that when *K* = 4 the upper limit of *I* that does not cause a dramatic performance drop compared with *I* = 1 is *I* = 512 for the two datasets, which is larger than the upper limit of *I* = 64 for *K* = 16. This is consistent with our Theorem [theoremallstage]. Conclusion ========== In this paper, we have designed a communication-efficient distributed stochastic deep AUC maximization algorithm, in which each machine is able to do multiple iterations of local updates before communicating with the central node. We have proven the linear speedup property and shown that the communication complexity can be dramatically reduced for multiple machines up to a large threshold number. Our empirical studies verify the theory and also demonstrate the effectiveness of the proposed distributed algorithm on benchmark datasets. Acknowledgements ================ This work is partially supported by National Science Foundation CAREER Award 1844403 and National Science Foundation Award 1933212. Proof of Theorem [theoremallstage] ================================== *Proof.* Define $\phi\_s(\v) = \phi(\v) + \frac{1}{2\gamma}\|\v-\v\_{s-1}\|^2$. We can see that $\phi\_s(\v)$ is convex and smooth since $\gamma \leq 1/L\_{\v}$. The smooth coefficient of *ϕ**s* is $\hat{L}\_{\v} = L\_{\v} + 1/\gamma$. According to Theorem 2.1.5 of, we have $$\begin{split} \|\nabla \phi\_s(\v\_s)\|^2 \leq 2\hat{L}\_{\v} (\phi\_{s}(\v\_s) - \phi\_{s}(\v\_{\phi\_s}^\*)). \end{split} \label{by\_nestrov}$$ Applying Lemma [lemmaonestage], we have $$\begin{split} E\_{s-1}[\phi\_s (\v\_s) - \phi\_s (\v\_{\phi\_s}^\*)] \leq \frac{2}{\eta\_s T\_s} \|\v\_{s-1} - \v\_{\phi\_s}^\*\|^2 + \frac{1}{\eta\_s T\_s} (\alpha\_{s-1} -\alpha^\*(\v\_s))^2 + H \eta\_s^2 I\_s^2 B^2 \I\_{I\_s>1} + \frac{\eta\_s (2\sigma\_{\v}^2 + 3\sigma\_{\alpha}^2)}{2K}. \end{split}$$ Denote $\x\_{1:m\_s}^k = (\x\_1^k,..., \x\_{m\_s}^k)$, *y*1 : *m**s**k* = (*y*1*k*, ..., *y**m**s**k*), and $\Tilde{f}\_k (\x\_{1:m\_s}^k, y^k\_{1:m\_s}) = \frac{\sum\limits\_{i=1}^{m\_s}h(\w\_s;\x\_i^k) \I\_{y\_i^k=y} }{\sum\limits\_{i=1}^{m\_s}\I\_{y^k\_i=y}} - E\_{\x^k}[h(\w\_{s};\x^k)|y]$. If $\sum\_{i=1}^{m\_s} \I\_{y\_i^k=y}>0$, then $\frac{\sum\limits\_{i=1}^{m\_s}h(\w\_s;\x\_i^k) \I\_{y\_i^k=y} }{\sum\limits\_{i=1}^{m\_s}\I\_{y^k\_i=y}}$ is an unbiased estimation of $E\_{\x^k}[h(\w\_{s};\x^k)|y]$. Noting $0\leq h(\w; \x) \leq 1$, we have $Var(h(\w; \x^k)|y) \leq \tilde{\sigma}^2 \leq 1$. Then we know that $$\begin{split} E\_{\x\_{1:m\_s}^k}[(\Tilde{f}(\x\_{1:m\_s}^k, y\_{1:m\_s}^k))^2 | y\_{1:m\_s}^k] &\leq \frac{\tilde{\sigma}^2}{\sum\_{i=1}^{m\_s} \I\_{y\_i^k=y}} \I\_{(\sum\_{i=1}^{m\_s} \I\_{y\_i^k=y}>0)} + 1\cdot \I\_{(\sum\_{i=1}^{m\_s} \I\_{y\_i^k=y}=0)} \\ &\leq \frac{\I\_{(\sum\_{i=1}^{m\_s} \I\_{y\_i^k=y}>0)}}{\sum\_{i=1}^{m\_s} \I\_{y\_i^k=y}} + \I\_{(\sum\_{i=1}^{m\_s} \I\_{y\_i^k=y}=0)}. \end{split}$$ Hence, $$\begin{split} & E\_{s-1}[ \Tilde{f}\_k (\x\_{1:m\_s}^k, y\_{1:m\_s}^k) ] = E\_{y\_{1:m\_s}^k} \left[ E\_{\x\_{1:m\_s}^k} [ (\Tilde{f}\_k (\x\_{1:m\_s}^k, y\_{1:m\_s}^k))^2 | y\_{1:m\_s}^k] \right] \\ &\leq E\_{y\_{1:m\_s}^k} \left[\frac{\I\_{(\sum\_{i=1}^{m\_s} \I\_{y\_i^k=y} > 0)} }{\sum\_{i=1}^{m\_s} \I\_{y\_i^k=y}} + \I\_{\sum\_{i=1}^{m\_s} \I\_{y\_i^k=y} = 0} \right] \leq \frac{1}{m\_s \text{Pr}(y\_i^k = y)} + (1-\text{Pr}(y\_i^k=y))^{m\_s}. \end{split}$$ Denote $$\begin{split} \alpha^\*(\v\_s) &= \arg\max\limits\_{\alpha} f(\v\_s,\alpha) = \frac{1}{K}\sum\limits\_{k=1}^{K} E\left[ \frac{h(\w\_s; \x^k)\I\_{y^k=-1}}{1-p} - \frac{h(\w\_s; \x^k)\I\_{y^k=1}}{p} \right]\\ & = \frac{1}{K} \sum\limits\_{k=1}^{K} \left[E\left[h(\w\_s; \x^k)|y^k=-1\right] - E\left[h(\w\_s; \x^k)|y^k=1\right] \right]. \end{split}$$ Therefore, $$\begin{split} E\_{s-1} [(\alpha\_{s-1}-\alpha^\*(\v\_{s-1}))^2] &= E\_{s-1} \left[\frac{1}{K} \sum\limits\_{k=1}^{K}\frac{\sum\limits\_{i=1}^{m\_{s-1}} h(\w\_{s-1}; \x\_i^k) \I\_{y\_i^{k}=-1}}{\sum\limits\_{i=1}^{m\_{s-1}} \I\_{y\_i^{k}=-1}} - E\left[\frac{1}{K} \sum\limits\_{k=1}^{K} h(\w\_{s-1}; \x\_i^k)|y=-1\right] \right.\\ &~~~~~~~~~~~~~~~~~ \left. + E\left[\frac{1}{K}\sum\limits\_{k=1}^{K} h(\w\_{s-1}; \x\_i^k)|y=1\right] - \frac{1}{K}\sum\limits\_{k=1}^K \frac{ \sum\limits\_{i=1}^{m\_{s-1}}h(\w\_{s-1}; \x\_i^k)\I\_{y\_i^k=1}}{\sum\limits\_{i=1}^{m\_{s-1}} \I\_{y\_i^k=1}} \right]^2 \\ & \leq \frac{2}{K m\_{s-1} \text{Pr}(y\_i^k=-1)} + \frac{2(1-\text{Pr}(y\_i^k=-1))^{m\_{s-1}}}{K} + (1-\text{Pr}(y\_i^k=-1))^{2m\_{s-1}} \\ &~~~ + \frac{2}{K m\_{s-1} \text{Pr}(y\_i^k=1)} + \frac{2(1-\text{Pr}(y\_i=1))^{m\_{s-1}} }{K} + (1- \text{Pr}(y\_i^k=1))^{2m\_{s-1}}\\ & \leq \frac{2}{K m\_{s-1} p(1-p)} + \frac{3 p^{m\_{s-1}}}{K} + \frac{3(1-p)^{m\_{s-1}}}{K} \leq 2\left(\frac{1}{K m\_{s-1} p(1-p)} + \frac{3\Tilde{p}^{m\_{s-1}}}{K} \right) \\ &\leq 2 \left(\frac{1}{K m\_{s-1} p (1-p)} + \frac{C}{K m\_{s-1}}\right) \leq \frac{2(1 + C)}{K m\_{s-1} p(1-p)}, \end{split}$$ where $C = \frac{3\tilde{p}^{\frac{1}{\ln(1/\Tilde{p})}}}{2\ln(1/\Tilde{p})}$ and $\Tilde{p} = \max(p, 1-p)$. Since $h(\w; \x)$ is *G**h*-Lipschitz, $E[h(\w, \x)|y=-1] - E[h(\w, \x)|y=1]$ is 2*G**h*-Lipschitz. It follows that $$\begin{split} &E\_{s-1}[(\alpha\_{s-1} - \alpha^\*(\v\_s))^2] = E\_{s-1}[(\alpha\_{s-1}-\alpha^\*(\v\_{s-1}) + \alpha^\*(\v\_{s-1}) - \alpha^\*(\v\_{s}))^2] \\ &\leq E\_{s-1}[2(\alpha\_{s-1} - \alpha^\*(\v\_{s-1}))^2 + 2(\alpha^\*(\v\_{s-1}) - \alpha^\*(\v\_{s}))^2] \\ & = E\_{s-1}[2(\alpha\_{s-1} - \alpha^\*(\v\_s))^2] \\ &+ 2\left\|\frac{1}{K}\sum\limits\_{k=1}^{K} \bigg[E\_{s-1}[h(\w\_{s-1}; \x^k)|y^k=-1] - E\_{s-1}[h(\w\_{s-1}; \x^k)|y^k=1]] - [E\_{s-1}[h(\w\_s; \x)|y^k=-1] - E\_{s-1}[h(\w\_s; \x^k)|y^k=1]\bigg] \right\|^2 \\ &\leq \frac{2(1+C)}{m\_{s-1} K 4p^2(1-p)^2} + 8G\_{h}^2 E\_{s-1}[\|\v\_{s-1} - \v\_s \|^2]. \end{split}$$ Since $m\_{s-1} \geq \frac{1+C}{\eta\_s^2 T\_s\sigma\_{\alpha}^2 p^2(1-p)^2}$, then we have $$\begin{split} E[\phi\_s (\v\_s) - \phi\_s (\v\_{\phi\_s}^\*)] &\leq \frac{2\|\v\_{s-1} - \v\_{\phi\_s}^\*\|^2 + 8 G\_{h}^2E[\|\v\_{s-1} -\v\_{s} \|]}{\eta\_s T\_s} + \frac{\eta\_s\sigma\_{\alpha}^2}{2 K} + H \eta\_s^2 I\_s^2 B^2 \I\_{I\_s>1} + \frac{\eta\_s (2\sigma\_{\v}^2 + 3\sigma\_{\alpha}^2)}{2 K} \\ &\leq \frac{2\|\v\_{s-1} - \v\_{\phi\_s}^\*\|^2 + 8 G\_{h}^2 E[\|\v\_{s-1} - \v\_s\|^2]}{\eta\_s T\_s} + H \eta\_s^2 I\_s^2 B^2 \I\_{I\_s>1} + \frac{2\eta\_s (\sigma\_{\v}^2 + \sigma\_{\alpha}^2)}{K}. \\ \end{split} \label{local\_lemma\_ved}$$ We define $I'\_s = 1/\sqrt{K\eta\_s} = \frac{1}{K\sqrt{\eta\_0}}\exp(\frac{c(s-1)}{2})$. Applying this and ([locallemmaved]) to ([bynestrov]), we get $$\begin{split} E[\|\nabla \phi\_s (\v\_{s})\|^2] &\leq 2\hat{L}\_{\v} \bigg[ \frac{2\|\v\_{s-1} - \v\_{\phi\_s}^\*\|^2 + 8 G\_{h}^2E[\|\v\_{s-1} - \v\_s\|^2]}{\eta\_s T\_s} + H\eta\_s^2 {I'}\_s^2 B^2 + \frac{2\eta\_s(\sigma\_{\v}^2 + \sigma\_{\alpha}^2)}{K}\bigg] \\ &\leq 2\hat{L}\_{\v} \bigg[ \frac{2\|\v\_{s-1} - \v\_{\phi\_s}^\*\|^2 + 8 G\_{h}^2E[\|\v\_{s-1} - \v\_s\|^2]}{\eta\_s T\_s} + H\eta\_s^2 {I'}^2\_s B^2 + \frac{2\eta\_s(\sigma\_{\v}^2 + \sigma\_{\alpha}^2)}{K} \bigg]. \end{split} \label{random9}$$ Taking $\gamma = \frac{1}{2L\_{\v}}$, then $\hat{L}\_{\v} = 3L\_{\v}$. Note that $\phi\_{s}(\v)$ is $(\gamma^{-1} - L\_{\v})$-strongly convex, we have $$\begin{split} \phi\_{s}(\v\_{s-1}) \geq \phi\_s(\v\_{\phi\_s}^\*) + \frac{L\_{\v}}{2} \|\v\_{s-1} - \v\_{\phi\_s}^\*\|^2. \end{split} \label{random:strong\_v}$$ Plugging ([random:strongv]) into ([locallemmaved]), we get $$\begin{split} &E\_{s-1}[\phi(\v\_s) + L\_{\v}\|\v\_s - \v\_{s-1}\|^2]\\ &\leq \phi\_s(\v\_{\phi\_s}^\*) + \frac{2\|\v\_{s-1} - \v\_{\phi\_s}^\*\|^2 + 8 G\_{h}^2E\_{s-1}[\|\v\_{s-1} - \v\_s\|^2]}{\eta\_s T\_s} + H \eta\_s^2 {I'}\_s^2 B^2 + \frac{2\eta\_s(\sigma\_{\v}^2 + \sigma\_{\alpha}^2)}{K}\\ &\leq \phi\_s(\v\_{s-1}) - \frac{L\_{\v}}{2}\|\v\_{s-1}-\v\_{\phi\_s}^\*\|^2 +\\ &~~~~~~~~~~~ \frac{2\|\v\_{s-1} - \v\_{\phi\_s}^\*\|^2 + 8 G\_{h}^2E\_{s-1}[\|\v\_{s-1} - \v\_s\|^2]}{\eta\_s T\_s} + H\eta\_s^2 {I'}\_s^2 B^2 + \frac{2\eta\_s(\sigma\_{\v}^2 + \sigma\_{\alpha}^2)}{K}.\\ \end{split}$$ Noting $\eta\_s T\_s L\_{\v} = \max(8, 8 G\_{h}^2)$ and $\phi\_{s}(\v\_{s-1}) = \phi(\v\_{s-1})$, we rearrange terms and get $$\begin{split} \frac{2\|\v\_{s-1} - \v\_{\phi\_s}^\*\|^2 + 8 G\_{h}^2E\_{s-1}[\|\v\_{s-1} - \v\_s\|^2]}{\eta\_s T\_s} \leq \phi(\v\_{s-1}) - E\_{s-1}[\phi(\v\_{s})] + H\eta\_s^2 {I'}\_s^2 B^2 + \frac{2\eta\_s(\sigma\_{\v}^2 + \sigma\_{\alpha}^2)}{K}. \end{split} \label{random\_11}$$ Combining ([random9]) and ([random11]), we get $$\begin{split} E\_{s-1}\|\nabla \phi\_s(\v\_s)\|^2 \leq 2\hat{L}\_{\v}\bigg[ \phi(\v\_{s-1}) - E\_{s-1}[\phi(\v\_s)] + 2H\eta\_s^2 {I'}\_s^2 B^2 + \frac{4\eta\_s(\sigma\_{\v}^2 + \sigma\_{\alpha}^2)}{K} \bigg]\\ =6L\_{\v}\bigg[ \phi(\v\_{s-1}) - E\_{s-1}[\phi(\v\_s)] + 2H\eta\_s^2 {I'}\_s^2 B^2 + \frac{4\eta\_s(\sigma\_{\v}^2 + \sigma\_{\alpha}^2)}{K} \bigg].\\ \end{split}$$ Taking expectation on both sides over all randomness until $\v\_{s-1}$ is generated and by tower property, we have $$\begin{split} E\|\nabla \phi\_s(\v\_s)\|^2 \leq 6L\_{\v}\bigg( E[\phi(\v\_{s-1}) - \phi(\v\_{\phi}^\*)] - E[\phi(\v\_{s}) - \phi(\v\_{\phi}^\*)] + 2H\eta\_s^2 {I'}\_s^2 B^2 + \frac{4\eta\_s(\sigma\_{\v}^2 + \sigma\_{\alpha}^2)}{K} \bigg) \end{split} \label{random13}$$ Since $\phi(\v)$ is $L\_{\v}$-smooth and hence is $L\_{\v}$-weakly convex, we have $$\begin{split} &\phi(\v\_{s-1}) \geq \phi(\v\_{s}) + \langle \nabla\phi(\v\_s), \v\_{s-1}-\v\_s \rangle - \frac{L\_{\v}}{2}\|\v\_{s-1}-\v\_{s}\|^2\\ &=\phi(\v\_s) + \langle \nabla \phi(\v\_s) + 2L\_{\v}(\v\_s - \v\_{s-1}), \v\_{s-1} - \v\_s\rangle + \frac{3}{2}L\_{\v}\|\v\_{s-1} - \v\_{s}\|^2\\ & = \phi(\v\_s) + \langle \nabla \phi\_s(\v\_s), \v\_{s-1}-\v\_{s}\rangle +\frac{3}{2}L\_{\v}\|\v\_{s-1}-\v\_s\|^2\\ & = \phi(\v\_s) - \frac{1}{2L\_{\v}}\langle \nabla\phi\_s(\v\_s),\nabla\phi\_s(\v\_s) -\nabla\phi(\v\_s)\rangle + \frac{3}{8L\_{\v}}\|\nabla \phi\_s(\v\_s) - \nabla \phi(\v\_s)\|^2 \\ & = \phi(\v\_s) - \frac{1}{8L\_{\v}}\|\nabla \phi\_s(\v\_s\|^2 - \frac{1}{4L\_{\v}}\langle \nabla\phi\_s(\v\_s), \nabla\phi(\v\_s)\rangle + \frac{3}{8L\_{\v}}\|\nabla \phi (\v\_s)\|^2 \end{split}$$ Rearranging terms, it yields $$\begin{split} &\phi(\v\_s) - \phi(\v\_{s-1}) \leq \frac{1}{8L\_{\v}}\|\nabla \phi\_s(\v\_s)\|^2 + \frac{1}{4L\_{\v}}\langle \nabla\phi\_s(\v\_s), \nabla\phi(\v\_s)\rangle - \frac{3}{8L\_{\v}}\|\nabla \phi (\V\_s)\|^2\\ &\leq \frac{1}{8L\_{\v}}\|\nabla \phi\_s(\v\_s)\|^2 + \frac{1}{8L\_{\v}} (\|\nabla \phi\_s(\v\_s)\|^2 + \|\nabla\phi(\v\_s)\|^2) - \frac{3}{8L\_{\v}}\|\nabla \phi (\V\_s)\|^2\\ &=\frac{1}{4L\_{\v}}\|\nabla \phi\_s(\v\_s)\|^2 - \frac{1}{4L\_{\v}}\|\nabla \phi(\v\_s)\|^2\\ &\leq \frac{1}{4L\_{\v}}\|\nabla \phi\_s(\v\_s)\|^2 - \frac{\mu}{2L\_{\v}}(\phi(\v\_s) - \phi(\v\_{\phi}^\*)) \end{split} \label{random15}$$ Define $\Delta\_s = \phi(\v\_s) - \phi(\v\_{\phi}^\*)$. Combining ([random13]) and ([random15]), we get $$\begin{split} E[\Delta\_s - \Delta\_{s-1}] \leq \frac{3}{2}E(\Delta\_{s-1} - \Delta\_s) + 3H\eta\_s^2 {I'}\_s^2 B^2 + \frac{6\eta\_s(\sigma\_{\v}^2+\sigma\_{\alpha}^2)}{K} - \frac{\mu}{2L\_{\v}}E[\Delta\_s] \end{split}$$ Therefore, $$\begin{split} \bigg( \frac{5}{2} + \frac{\mu}{2L\_{\v}}\bigg) E[\Delta\_{s}] \leq \frac{5}{2}E[\Delta\_{s-1}] + 3 H \eta\_s^2 {I'}\_s^2 B^2 + \frac{6\eta\_s (\sigma\_{\v}^2+\sigma\_{\alpha}^2)}{K} \end{split}$$ Using $c = \frac{\mu/L\_{\v}}{5+\mu/L\_{\v}}$ as defined in the theorem, $$\begin{split} &E[\Delta\_S] \leq \frac{5L\_{\v}}{5L\_{\v}+\mu} E[\Delta\_{S-1}] + \frac{2L\_{\v}}{5L\_{\v}+\mu}\bigg[ 3H\eta\_S^2 {I'}\_S^2 B^2 + \frac{6\eta\_S(\sigma\_{\v}^2+\sigma\_{\alpha}^2)}{K}\bigg]\\ &=(1-c)\bigg[E[\Delta\_{S-1}] + \frac{2}{5}\bigg(3H\eta\_S^2 {I'}\_S^2 B^2 + \frac{6\eta\_S(\sigma\_{\v}^2+\sigma\_{\alpha}^2)}{K}\bigg)\bigg]\\ &\leq (1-c)^S E[\Delta\_0] + \frac{6H B^2}{5} \sum\limits\_{j=1}^{S} \eta\_j^2 {I'}\_j^2 (1-c)^{S+1-j} + \frac{12(\sigma\_{\v}^2 + \sigma\_{\alpha}^2)}{5K} \sum\limits\_{j=1}^{S} \eta\_j (1-c)^{S+1-j} \\ & = (1-c)^S E[\Delta\_0] + \frac{6H B^2}{5} \sum\limits\_{j=1}^{S} \eta\_j^2 {I'}\_j^2 (1-c)^{S+1-j} + \frac{12(\sigma\_{\v}^2 + \sigma\_{\alpha}^2)}{5K} \sum\limits\_{j=1}^{S} \eta\_j (1-c)^{S+1-j} \end{split}$$ We then have $$\begin{split} &E[\Delta\_S] \leq (1 - c )^S E[\Delta\_0] + \left(\frac{6H B^2}{5K} + \frac{12(\sigma\_{\v}^2 + \sigma\_{\alpha}^2)}{5K}\right)\sum\limits\_{j=1}^{S} \eta\_j (1-c)^{S+1-j} \\ &\leq \exp(-c S)\Delta\_0 + \left(\frac{6H B^2}{5K} + \frac{12(\sigma\_{\v}^2 + \sigma\_{\alpha}^2)}{5K}\right) \sum\limits\_{j=1}^{S} \eta\_j \exp(-c(S+1-j))\\ &=\exp(-c S)\Delta\_0 + \left(\frac{6H B^2}{5} + \frac{12(\sigma\_{\v}^2 + \sigma\_{\alpha}^2)}{5}\right) \eta\_0 S\exp (-c S).\\ \end{split}$$ To achieve *E*[Δ*S*] ≤ *ε*, it suffices to make $$\begin{split} \exp(-c S) \Delta\_0 \leq \epsilon/2 \end{split}$$ and $$\begin{split} \left(\frac{6H B^2}{5} + \frac{12(\sigma\_{\v}^2 + \sigma\_{\alpha}^2)}{5}\right) \eta\_0 S\exp (-c S) \leq \epsilon/2. \end{split}$$ So, it suffices to make $$\begin{split} S\geq c^{-1} \max\bigg\{\log \left(\frac{2\Delta\_0}{\epsilon}\right), \log S + \log\bigg[ \frac{2\eta\_0}{\epsilon} \frac{6HB^2 + 12(\sigma\_{\v}^2+\sigma\_{\alpha}^2)}{5}\bigg]\bigg\}. \end{split}$$ Taking summation of iteration over *s* = 1, ..., *S*, we have the total iteration complexity as $$\begin{split} &T = \sum\limits\_{s=1}^{S} T\_s \leq \frac{\max\{8, 8G\_{h}^2\}}{L\_{\v}\eta\_0 K} \frac{\exp(cS) - 1}{\exp(c) - 1} \leq \frac{\max\{8, 8G\_{h}^2\}}{L\_{\v}\eta\_0 K}\frac{5L\_{\v} + \mu}{\mu} \exp(cS) \\ &=\tilde{O}\left(\max\bigg(\frac{\Delta\_0}{\mu \epsilon \eta\_0 K }, \frac{S (6H B^2 + 12(\sigma\_{\v}^2 + \sigma\_{\alpha}^2))}{\mu \epsilon K} \bigg)\right) =\tilde{O}\bigg(\max\left(\frac{\Delta\_0}{\mu \epsilon \eta\_0 K}, \frac{L\_{\v }}{\mu^2 K\epsilon}\right)\bigg). \end{split}$$ To analyze the total communication complexity, we will analyze two cases: (1) $\frac{1}{K\sqrt{\eta\_0}} > 1$ ; (2) $\frac{1}{K\sqrt{\eta\_0}} \leq 1$. (1) If $\frac{1}{K\sqrt{\eta\_0}} > 1$, then $I\_s = \max(1, \frac{1}{K\sqrt{\eta\_0}} \exp(\frac{c(s-1)}{2})) = \frac{1}{K\sqrt{\eta\_0}} \exp(\frac{c(s-1)}{2}) $ for any *s* ≥ 1. The total number of communications: $$\begin{split} &\sum\limits\_{s=1}^{S} \frac{T\_s}{I\_s} = \sum\limits\_{s=1}^{S} \frac{\max(8, 8 G\_h^2)}{L\_{\v} {\eta^{1/2}\_0}} \exp\bigg(\frac{c(s-1)}{2}\bigg) = \frac{\max(8, 8 G\_h^2)}{L\_{\v}{\eta^{1/2}\_0}} \frac{\exp(c S/2) - 1}{\exp(c/2) - 1}\\ &=\tilde{O}\bigg(\max\bigg(\frac{(2\Delta\_0/\epsilon)^{1/2}}{\mu {\eta^{1/2}\_0}},\frac{(S(6H B^2+12(\sigma\_{\v}^2+\sigma\_{\alpha}^2))^{1/2}}{ \mu \epsilon^{1/2}}\bigg)\bigg) =\tilde{O} \left(\frac{\Delta\_0^{1/2}}{\mu (\eta\_0 \epsilon)^{1/2}}, \frac{L\_{\v}^{1/2}}{\mu^{3/2}\epsilon^{1/2}}\right). \end{split}$$ (2) If $\frac{1}{K\sqrt{\eta\_0}} \leq 1$, then *I**s* = 1 for $s \leq \left\lceil {2c^{-1}} \log(K\sqrt{\eta\_0}) + 1 \right\rceil := S\_1$ and $I\_s = \frac{1}{K\sqrt{\eta\_0}}\exp\left(\frac{s-1}{2}\right)$ for $s > \frac{2(5+\mu/L\_{\v})}{\mu/L\_{\v}} \log(K\sqrt{\eta\_0}) + 1$. Obviously, $S\_1 \leq \frac{2(5+\mu/L\_{\v})}{\mu/L\_{\v}} \log(K\sqrt{\eta\_0}) + 2$. The number of iterations from *s* = 1 to *S*1 is $$\begin{split} & \sum\limits\_{s=1}^{S\_1} T\_s = \sum\limits\_{s=1}^{S\_1} \frac{\max\{8, 8 G\_h^2\}}{\eta\_0 L\_{\v} K} \exp(c(s-1))\\ & = \frac{\max\{8, 8 G\_h^2\}}{\eta\_0 L\_{\v} K} \frac{\exp(c S\_1)-1}{\exp(c) - 1}\\ & \leq c^{-1} \frac{\max\{8, 8 G\_h^2\}}{\eta\_0 L\_{\v} K} \exp\left( 2\log(K\sqrt{\eta\_0}) + 2c \right) \\ & = c^{-1} \frac{\max\{8, 8 G\_h^2\}}{\eta\_0 L\_{\v} K} K^2 \eta\_0 \exp\left(\frac{2\mu/L\_{\v}}{5+\mu/L\_{\v}}\right) \\ & \leq c^{-1} \max\{8, 8 G\_h^2\} K \exp\left(2\right). \\ \end{split}$$ Thus, the total number of communications is $$\begin{split} &\sum\limits\_{s=1}^{S\_1} T\_s + \sum\limits\_{s=S\_1+1}^{S} \frac{T\_s}{I\_s}\\ & = c^{-1} \max\{8, 8G\_h^2\} K \exp\left(2\right) + \sum\limits\_{s=S\_1+1}^{S} \frac{\max(8, 8G\_h^2)}{L\_{\v} \eta\_0^{1/2}} \exp\left( \frac{s-1}{2} \frac{\mu/L\_{\v}}{5+\mu/L\_{\v}} \right)\\ &\leq c^{-1} \max\{8, 8G\_h^2\} K \exp\left(2\right) + \sum\limits\_{s=1}^{S} \frac{\max(8, 8G\_h^2)}{L\_{\v} \eta\_0^{1/2}} \exp\left( \frac{s-1}{2} \frac{\mu/L\_{\v}}{5+\mu/L\_{\v}} \right)\\ & \leq c^{-1} \max\{8, 8G\_h^2\} K \exp\left(2\right) + \frac{\max(8, 8G\_h^2)}{L\_{\v} \eta\_0^{1/2}} \frac{\exp(\frac{S}{2}\frac{\mu/L\_{\v}}{5+\mu/L\_{\v}}) - 1} {\exp(\frac{\mu/L\_{\v}}{2(5+\mu/L\_{\v}})) - 1}\\ & \in O\left(\max \left(\frac{K}{\mu} + \frac{\Delta\_0}{\mu \eta\_0^{1/2}\epsilon^{1/2}}, \frac{K}{\mu} + \frac{L\_{\v}^{1/2}}{\mu^{3/2} \epsilon^{1/2}}\right)\right). \end{split}$$ Proof of Lemma [properties] =========================== To prove Lemma [properties], we need the following Lemma [lem:alphabounded] and Lemma [lem:abbounded] to show that the trajectories of *α*, *a* and *b* are constrained in closed sets in Algorithm [innerloop]. [lem:alphabounded] Suppose Assumption ([assumption1]) holds and $\eta \leq \frac{1}{2p(1-p)}$. Running Algorithm [innerloop] with the input given by Algorithm [outerloop], we have $|\alpha\_t^k| \leq \frac{\max\{p, (1-p)\}}{p(1-p)}$ for any iteration *t* and any machine *k*. [alphabounded] *Proof.* Firstly, we need to show that the input for any call of Algorithm ([innerloop]) satisfies $|\alpha\_0| \leq \frac{\max\{p, (1-p)\}}{p(1-p)}$. If Algorithm [innerloop] is called by Algorithm [outerloop] for the first time, we know $|\alpha\_0| = 0 \leq \frac{\max\{p, (1-p)\}}{p(1-p)}$. Otherwise, by the update of *a**l**p**h**a**s* in Algorithm ([outerloop]) (lines 4-7), we know that the input for Algorithm ([innerloop]) satisfies $|\alpha\_0| \leq 2 \leq \frac{\max\{p, (1-p)\}}{p(1-p)}$ since $h(\w; \x^k) \in [0, 1]$ by Assumption [assumption1](*i**v*). Next, we will show by induction that $|\alpha\_t^k| \leq \frac{\max\{p, (1-p)\}}{p(1-p)}$ for any iteration *t* and any machine *k* in Algorithm [innerloop]. Obviously, $|a\_0^k| \leq 2 \leq \frac{\max\{p, (1-p)\}}{p(1-p)}$ for any k. Assume $|a\_{t}^k| \leq \frac{\max\{p, (1-p)\}}{p(1-p)}$ for any *k*. (1) If *t* + 1 mod *I* ≠ 0, then we have $$\begin{split} |\alpha\_{t+1}^k| &= \bigg|\alpha\_{t}^k + \eta (2(ph(\w\_{t}^k; \x)\I\_{[y=-1]} - (1-p)h(\w\_{t}^k; \x)\I\_{[y=1]}) - 2p(1-p)\alpha\_{t})\bigg|\\ &\leq \bigg| (1-2\eta p(1-p))\alpha\_{t}^k \bigg| + \bigg|2\eta(ph(\w\_{t}^k; \x)\I\_{[y=-1]} - (1-p)h(\w\_{t}^k;\x)\I\_{[y=1]}) \bigg|\\ &\leq (1-2\eta p(1-p)) \frac{\max\{p, (1-p)\}}{p(1-p)} + 2\eta \max \{p, (1-p)\}\\ & = (1-2\eta p(1-p) +2\eta p(1-p)) \frac{\max\{p, (1-p)\}}{p(1-p)}\\ &= \frac{\max\{p, (1-p)\}}{p(1-p)}. \end{split}$$ (2) If *t* + 1 mod *I* = 0, then by same analysis as above, we know that $|\alpha\_{t+1}^k| \leq \frac{\max\{p, (1-p)\}}{p(1-p)}$ before being averaged across machines. Therefore, after being averaged across machines, it still holds that $|\alpha\_{t+1}^k| \leq \frac{\max\{p, (1-p)\}}{p(1-p)}$. Therefore, $|\alpha\_t^k| \leq \frac{\max\{p, (1-p)\}}{p(1-p)}$ holds for any iteration *t* and any machine *k* at any call of Algorithm ([innerloop]). □ [lem:abbounded] Suppose Assumption ([assumption1]) (1) holds and $\eta \leq \min(\frac{1}{2(1-p)}, \frac{1}{2p})$. Running Algorithm [innerloop] with the input given by Algorithm ([outerloop]), we have that ∣*a**t**k*∣ ≤ 1 and ∣*b**t**k*∣ ≤ 1 for any iteration *t* and any machine *k*. *Proof.* At the first call of Algorithm ([innerloop]), the input satisfies ∣*a*0∣ ≤ 1 and ∣*b*0∣ ≤ 1. Thus ∣*a*0*k*∣ ≤ 1 and ∣*b*0*k*∣ ≤ 1 for any machine *k*. Assume ∣*a**t**k*∣ ≤ 1 and ∣*b**t**k*∣ ≤ 1. Then: (1) *t* + 1 mod *I* ≠ 0, then we have $$\begin{split} |a\_t^k| &= \bigg|\frac{\gamma}{\eta + \gamma}a\_{t-1}^k + \frac{\eta}{\eta+\gamma}a\_0 - \frac{\eta\gamma}{\eta+\gamma}\nabla\_{a} F\_k(\v\_{t-1}^k, \alpha\_{t-1}^k, \z\_{t-1}^k)\bigg| \\ &= \bigg| \frac{\gamma}{\eta + \gamma}a\_{t-1}^k + \frac{\eta}{\eta+\gamma}a\_0 + \frac{\eta\gamma}{\eta+\gamma} (2(1-p)(h(\w\_{t-1}^k; \x^k\_{t-1})-a\_{t-1}^k))\I\_{y^k=1} \bigg|\\ &=\bigg|\frac{\eta}{\eta+\gamma}a\_0 + \frac{\gamma}{\eta+\gamma}a\_{t-1}^{k}(1 - 2\eta(1-p))\I\_{y^k=1} + \frac{\eta\gamma}{\eta+\gamma}2(1-p)h(\w\_{t-1}^k; \x\_{t-1}^k)\I\_{y^k=1}\bigg|\\ &\leq \bigg|\frac{\eta}{\eta+\gamma}a\_0\bigg| + \bigg|\frac{\gamma}{\eta+\gamma}a\_{t-1}^{k}(1 - 2\eta(1-p))\I\_{y^k=1} \bigg| + \bigg| \frac{\eta\gamma}{\eta+\gamma}2(1-p)h(\w\_{t-1}^k; \x\_{t-1}^k)\I\_{y^k=1}\bigg| \\ &\leq \frac{\eta}{\eta+\gamma} + \frac{\gamma}{\eta+\gamma}(1-2\eta(1-p)) + \frac{\eta\gamma}{\eta+\gamma}2(1-p) \\ & = 1. \end{split}$$ (2) If *t* + 1 mod *I* = 0, then by the same analysis as above, we have that ∣*a**t* + 1*k*∣ ≤ 1 before being averaged across machines. Therefore, after being averaged across machines, it still holds that ∣*a**t* + 1*k*∣ ≤ 1. Thus, we can see that ∣*a**t**k*∣ ≤ 1 holds for any iteration *t* and any machine *k* in this call of Algorithm [innerloop]. Therefore, the output of the stage also has ∣*ã*∣ ≤ 1. Then we know that in the next call of Algorithm ([innerloop]), the input satisfies ∣*a*0∣ ≤ 1, by the same proof, we can see that ∣*a**t**k*∣ ≤ 1 holds for any iteration *t* and any machine *k* in any call of Algorithm ([innerloop]). With the same techniques, we can prove that ∣*b**t**k*∣ holds for any iteration *t* and any machine *k* in any call of Algorithm ([innerloop]). □ With the above lemmas, we are ready to prove Lemma [properties] and derive the claimed constants. By definition of $F(\v, \alpha; \z)$ and noting that $\v = (\w, a, b)$, we have $$\begin{split} &\nabla\_{\v} F\_k(\v, \alpha; \z) = [\nabla\_{\w} F\_k(\v, \alpha; \z)^T, \nabla\_{a} F\_k(\v, \alpha; \z), \nabla\_{b} F\_k(\v, \alpha; \z)]^T. \end{split} \label{decompose\_nabla\_v}$$ Addressing each of the three terms on RHS, it follows that $$\begin{split} &\nabla\_{\w} F\_k(\v, \alpha; \z) = \bigg[2(1-p)(h(\w;\x^k)-a) - 2(1+\alpha)(1-p)\bigg] \nabla h(\w; \x^k)\I\_{[y^k=1]} \\ &~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + \bigg[2p(h(\w; \x^k)-b) + 2(1+\alpha)p\bigg] \nabla h(\w; \x^k)\I\_{[y^k=-1]}, \\ &\nabla\_{a} F\_k(\v, \alpha; \z) = -2(1-p)(h(\w;x^k)-a)\I\_{[y^k=1]},\\ &\nabla\_{b} F\_k(\v, \alpha; \z) = -2p(h(\w; \x^k)-b). \end{split} \label{sub\_nabla\_v}$$ Since $|h(\w; \x^k)| \in [0, 1]$, $\|\nabla h(\w; \x^k)\| \leq G\_h$, $|\alpha| \leq \frac{\max\{p, 1-p\}}{p(1-p)}$, ∣*a*∣ ≤ 1 and *b* ≤ 1, we have $$\begin{split} \|\nabla\_{\w} F\_k(\v, \alpha; \z) \| &\leq \|2(1-p)(h(\w; \x^k)-a) - 2(1+\alpha)(1-p)\|G\_h + \|2p(h(\w; \x^k)-b) + 2(1+\alpha)p\|G\_h\\ &\leq |6+2\alpha|(1-p)G\_h + |6+2\alpha| p G\_h\\ &\leq \left(6 + 2\frac{\max\{p, 1-p\}}{p(1-p)}\right) G\_h, \end{split}$$ $$\begin{split} \|\nabla\_{a} F\_k(\v, \alpha; \z)\| \leq 4(1-p), \end{split}$$ $$\begin{split} \|\nabla\_{b} F\_k(\v, \alpha; \z)\| \leq 4p. \end{split}$$ Thus, $$\begin{split} \|\nabla\_{\v} F\_k(\v, \alpha; \z)\|^2 &= \|\nabla\_{\w} F\_k(\v, \alpha; \z)\|^2 + \|\nabla\_{a} F\_k(\v, \alpha; \z)\|^2 + \|\nabla\_{b} F\_k(\v, \alpha; \z)\|^2 \\ &\leq \left(6+\frac{2\max\{p, 1-p\}}{p(1-p)}\right)^2 G\_h^2 +16(1-p)^2 + 16p^2. \end{split}$$ $$\begin{split} \|\nabla\_{\alpha} F\_k(\v, \alpha; \z)\|^2 &= \|2p h(\w; \x^k)\I\_{y^k=-1} - 2(1-p)h(\w; \x^k)\I\_{y^k=1} - 2p(1-p)\alpha\|^2 \\ &\leq (2p + 2(1-p) + 4\max\{p, 1-p\})^2 = (2+4\max\{p, 1-p\})^2. \end{split}$$ Thus, $B\_{\v}^2 = \left(6+\frac{2\max\{p, 1-p\}}{p(1-p)}\right)^2 G\_h^2 +16(1-p)^2 + 16p^2$ and *B**α*2 = (2 + 4max{*p*, 1 − *p*})2. It follow that $$\begin{split} |\nabla\_{\v} f\_k(\v, \alpha)| = |E[\nabla\_{\alpha} F\_k (\v, \alpha; \z^k)]| \leq B\_{\v}. \end{split}$$ Therefore, $$\begin{split} E[\|\nabla\_{\v} f\_k(\v, \alpha) - \nabla\_{\v} F\_k(\v, \alpha; \z^k)\|^2] \leq [2|\nabla\_{\v} f\_k(\v, \alpha)|^2 + 2|E[\nabla\_{\v} F\_k(\v, \alpha; \z^k)]|^2] \leq 4B\_{\v}^2. \end{split}$$ Similarly, $$\begin{split} |\nabla\_{\alpha} f\_k(\w, a, b, \alpha)| = |E[\nabla\_{\alpha} F\_k (\w, a, b, \alpha; \z^k)]| \leq B\_{\alpha}. \end{split}$$ Therefore, $$\begin{split} E[\|\nabla\_{\alpha} f\_k(\v, \alpha) - \nabla\_{\alpha} F\_k(\v, \alpha; \z^k)\|^2] \leq 2|\nabla\_{\alpha} f\_k(\v, \alpha)|^2 + 2E[F\_k(\v, \alpha; \z^k)]|^2 \leq 4B\_{\alpha}^2. \end{split}$$ Thus, $\sigma\_{\v}^2 = 4B\_{\v}^2$ and *σ**α*2 = 4*B**α*2. Now, it remains to derive the constant *L*2 such that $\|\nabla\_{\v}F\_k(\v\_1, \alpha; \z) - \nabla\_{\v} F\_k(\v\_2, \alpha; \z)\| \leq L\_2 \|\v\_1 - \v\_2\|$. By ([subnablav]), we get $$\begin{split} &\|\nabla\_{\w} F\_k(\v\_1,\alpha;\z) - \nabla\_{\w}F\_k(\v\_2, \alpha; \z)\|\\ &= \bigg\|\bigg[2(1-p)(h(\w\_1;\x^k)-a\_1) - 2(1+\alpha)(1-p)\bigg] \nabla h(\w\_1; \x^k)\I\_{[y^k=1]} + \bigg[2p(h(\w\_1; \x^k)-b\_1) + 2(1+\alpha)p\bigg] \nabla h(\w\_1; x^k)\I\_{[y^k=-1]} \\ &~~~~~ - \bigg[2(1-p)(h(\w\_2;\x^k)-a\_2) - 2(1+\alpha)(1-p)\bigg] \nabla h(\w\_2; \x^k)\I\_{[y^k=1]} - \bigg[2p(h(\w\_2; \x^k)-b\_2) + 2(1+\alpha)p\bigg] \nabla h(\w\_2; \x^k)\I\_{[y^k=-1]} \bigg] \bigg\| \\ &= \Bigg\| 2(1-p)\bigg[h(\w\_1; \x^k)\nabla h(\w\_1;\x^k)-h(\w\_2;\x^k)\nabla h(\w\_2;\x^k)\bigg] \I\_{[y^k=1]} + 2p \bigg[ h(\w\_1; \x^k)\nabla h(\w\_1; \x^k)-h(\w\_2; \x^k)\nabla h(\w\_2;\x^k) \bigg]\I\_{[y^k=-1]}\\ & - (2(1+\alpha))(1-p)(\nabla h(\w\_1; \x^k) - \nabla h(\w\_2;\x^k))\I\_{[y^k=1]} +(2(1+\alpha)p)(\nabla h(\w\_1; \x^k) - \nabla h(\w\_2; \x^k))\I\_{[y^k=-1]}\\ &- 2(1-p)(a\_1 \nabla h(\w\_1; \x^k) - a\_2 \nabla h(\w\_2; \x^k))\I\_{y^k=1} -2p(b\_1 \nabla h(\w\_1; \x^k) - b\_2 \nabla h(\w\_2;\x^k))\I\_{[y^k=-1]} \Bigg \| \\ &\leq 2(1-p)\|h(\w\_1; \x^k)\nabla h(\w\_1;\x^k) - h(\w\_2; \x^k)\nabla h(\w\_2;\x^k)\| + 2p \|h(\w\_1; \x^k)\nabla h(\w\_1;\x^k) - h(\w\_2; \x^k)\nabla h(\w\_2;\x^k)\| \\ &~~~~~+ \|2(1+\alpha)(1-p)\|\|\nabla h(\w\_1; \x^k) - \nabla h(\w\_2; \x^k)\| + \|2(1+\alpha)p\| \|\nabla h(\w\_1; \x^k) - \nabla h(\w\_2; \x^k)\|\\ &~~~~~+2(1-p)\|a\_1 \nabla h(\w\_1; \x^k) - a\_2 \nabla h(\w\_2; \x^k)\| + 2p\|b\_1 \nabla h(\w\_1; \x^k) - b\_2 \nabla h(\w\_2; \x^k)\|. \end{split} \label{H\_grad}$$ Denoting $\Gamma\_1(\w; \x^k) = h(\w; \x^k) \nabla h(\w; \x^k)$, $$\begin{split} \|\nabla \Gamma\_1(\w; \x^k)\| &= \|\nabla h(\w; \x^k) \nabla h(\w; \x^k)^T + h(\w; \x^k)\nabla^2 h(\w; \x^k)\| \\ & \leq \|\nabla h(\w; \x^k)\nabla h(\w; \x^k)^T\| + \|h(\w; \x^k) \nabla^2 h(\w; \x^k)\| \\ & \leq G\_h^2 + L\_h. \end{split}$$ Thus, $\|\Gamma\_1(\w\_1; \x^k) - \Gamma\_1(\w\_2; \x^k)\| =\|h(\w\_1;\x^k) h'(\w\_1; \x^k) - h(\w\_2;\x^k) h'(\w\_2; \x^k)\| \leq (G\_h^2 + L\_h)\|\w\_1 - \w\_2\|$. Define $\Gamma\_2(\w, \alpha; \x^k) = a \nabla h(\w; \x^k)$. By Lemma [lem:abbounded] and Assumption [assumption1], we have $$\begin{split} \nabla\_{\w, a} \Gamma\_2 (\w, a; \x^k) \leq \|\nabla\_{\w} \Gamma\_2(\w, a; \z^k)\| + \|\nabla\_{a} \Gamma\_2(\w, a; \z^k)\| = \|a\nabla^2 h(\w; \x^k)\| + \|\nabla h(\w; \x^k)\| \leq L\_h + G\_h. \end{split}$$ Therefore, $$\begin{split} \|\! \Gamma\_2 (\!\w\_1, \! a\_1;\! \x^k)\! -\! \Gamma\_2 (\!\w\_2,\! a\_2;\! \x^k\!)\! \|\! =\! \| a\_1\! \nabla\! h(\w\_1;\! \x^k)\! -\! a\_2\! \nabla\! h(\w\_2;\! \x^k\!)\!\|\! \leq\! (L\_h\! +\! G\_h\!)\!\sqrt{\!\|\w\_1\!-\!\w\_2\|^2\! +\! \|a\_1\! -\! a\_2\|^2}. \end{split} \label{local\_ah\_lip}$$ Similarly, we can prove that $$\begin{split} \| b\_1 \nabla h(\w\_1; \x^k) - b\_2 \nabla h(\w\_2; \x^k)\| \leq (L\_h+G\_h)\sqrt{\|\w\_1-\w\_2\|^2+\|b\_1-b\_2\|^2}. \end{split} \label{local\_bh\_lip}$$ Then plugging ([localahlip]), ([localbhlip]) and Assumption [assumption1] into ([Hgrad]), we have $$\begin{split} &\|\nabla\_{\w} F\_k(\v\_1,\alpha;\z) - \nabla\_{\w}F\_k(\v\_2, \alpha; \z)\| \\ &\leq 2(G\_h^2+L\_h)\|\w\_1-\w\_2\| + 2|1+\alpha|G\_h\|\w\_1 - \w\_2\| \\ &~~~~~+(L\_h + G\_h)\sqrt{\|\w\_1 - \w\_2\|^2 + \|a\_1-a\_2\|^2} + (L\_h + G\_h)\sqrt{\|\w\_1-\w\_2\|^2 + \|b\_1-b\_2\|^2}\\ &\leq (2(G\_h^2+L\_h) + |2(1+\alpha)|G\_h + 2L\_h + 2G\_h)\sqrt{\|\w\_1 - \w\_2\|^2 + \|a\_1-a\_2\|^2 + \|b\_1-b\_2\|^2}\\ &\leq \left( 2 G\_h^2 + 4L\_h + \left(4 + \frac{2\max\{p, 1-p\}}{p(1-p)}\right) G\_h\right) \|\v\_1-\v\_2\|.\\ \end{split}$$ By ([subnablav]), we also have $$\begin{split} &\|\nabla\_{a} F\_k(\v\_1, \alpha; \z) - \nabla\_{a} F\_k(\v\_2, \alpha; \z)\|^2 \leq 4(1-p)^2(\|h(\w\_1; \x^k) - h(\w\_2; \x^k)\|^2+\|a\_1 - a\_2\|^2)\\ &\leq 4(1-p)^2(G\_h^2 \|\w\_1-\w\_2\|^2 + \|a\_1 - a\_2\|^2 + \|b\_1 - b\_2\|^2) \leq 4(1-p)^2(G\_h^2 + 1)\|\v\_1 - \v\_2\|^2, \end{split}$$ and $$\begin{split} &\|\nabla\_{b} F\_k(\v\_1, \alpha; \z) - \nabla\_{b} F\_k(\v\_2, \alpha; \z)\|^2 \leq 4(1-p)^2(\|h(\w\_1; \x^k) - h(\w\_2; \x^k)\|^2+\|b\_1 - b\_2\|^2)\\ &\leq 4(1-p)^2(G\_h^2 \|\w\_1-\w\_2\|^2 + \|a\_1 - a\_2\|^2 + \|b\_1 - b\_2\|^2)\leq 4(1-p)^2(G\_h^2 + 1)\|\v\_1 - \v\_2\|^2. \end{split}$$ $$\begin{split} \|\nabla\_{\v} F\_k (\v\_1, \alpha; \z) - \nabla\_{\v} F\_k(\v\_2, \alpha; \z)\|^2 &= \|\nabla\_{\w}F\_k(\v\_1, \alpha; \z) - \nabla\_{\w}F\_k(\v\_2, \alpha; \z)\|^2 \\ &~~~~~ + \|\nabla\_{a}F\_k(\v\_1, \alpha; \z) - \nabla\_{b}F\_k(\v\_2, \alpha; \z)\|^2 + \|\nabla\_{b} F\_k(\v\_1, \alpha; \z) - \nabla\_{b}F\_k(\v\_1, \alpha; \z)\|^2\\ &\leq \left( G\_h^2 + L\_h + 4 + \frac{2\max\{p, 1-p\}}{p(1-p)} 8(1-p)^2 (G\_h^2 + 1)\right) \|\v\_1 - \v\_2\|^2. \end{split}$$ Thus, we get $L\_2 = \left( G\_h^2 + L\_h + 4 + \frac{2\max\{p, 1-p\}}{p(1-p)} 8(1-p)^2 (G\_h^2 + 1)\right)^{1/2}$. Proof of Lemma [lemmaonestage] ============================== *Proof.* Plugging Lemma [lem:var1] and Lemma [lem:var2] into Lemma [lem:objgap], we get $$\begin{split} &\psi(\tilde{\v}) - \psi(\v\_{\psi}^\*) \\ &\leq \frac{1}{T}\sum\limits\_{t=1}^{T} \Bigg[ \underbrace{ \left(\frac{L\_{\v}+3G\_{\alpha}^2/\mu\_{\alpha}}{2} - \frac{1}{2\eta}\right) \|\bar{\v}\_{t-1} - \bar{\v}\_t\|^2 + \left(\frac{L\_{\alpha}+3G\_{\v}^2/L\_{\v}}{2} - \frac{1}{2\eta}\right)(\bar{\alpha}\_t - \bar{\alpha}\_{t-1})^2}\_{C\_1}\\ &+\underbrace{\left(\frac{1}{2\eta} - \frac{\mu\_{\alpha}}{3} \right) (\bar{\alpha}\_{t-1} - \alpha^\*(\tilde{\v}))^2 - \left(\frac{1}{2\eta} - \frac{\mu\_{\alpha}}{3}\right)(\bar{\alpha}\_t-\alpha^\*(\tilde{\v}))^2}\_{C\_2} +\underbrace{\left(\frac{2L\_{\v}}{3} + \frac{1}{2\eta}\right) \|\bar{\v}\_{t-1} - \v\_{\psi}^\*\|^2 - \left(\frac{1}{2\eta} + \frac{2L\_{\v}}{3}\right)\|\bar{\v}\_t - \v\_ {\psi}^\*\|^2}\_{C\_3}\\ & +\underbrace{\frac{1}{2\eta}((\alpha^\*(\Tilde{\v}) - \Tilde{\alpha}\_{t-1})^2 - (\alpha^\*(\Tilde{\v})-\Tilde{\alpha}\_t)^2)}\_{C\_4} +\underbrace{\left(\frac{3G\_{\v}^2}{2\mu\_{\alpha}} + \frac{3L\_{\v}}{2}\right)\frac{1}{K}\sum\limits\_{k=1}^{K}\|\bar{\v}\_{t-1}-\v\_{t-1}^k\|^2 + \left(\frac{3 G\_{\alpha}^2}{2L\_{\v}} + \frac{3 L\_{\alpha}^2}{2\mu\_{\alpha}}\right)\frac{1}{K}\sum\limits\_{k=1}^{K} (\bar{\alpha}\_{t-1} - \alpha\_{t-1}^k)^2}\_{C\_5}\\ &+\underbrace{\eta \left\|\frac{1}{K}\sum\limits\_{k=1}^{K} [\nabla\_{\v} f\_k(\v\_{t-1}^k, \alpha\_{t-1}^k) - \nabla\_{\v}F\_k(\v\_{t-1}^k, \alpha\_{t-1}^k; \z\_{t-1}^k)]\right\|^2}\_{C\_6} + \underbrace{\frac{3\eta}{2} \left\| \frac{1}{K}\sum\limits\_{k=1}^{K} \nabla\_{\alpha}f\_k(\v\_{t-1}^k, \alpha\_{t-1}^k) - \nabla\_{\alpha} F\_k(\v\_{t-1}^k, \alpha\_{t-1}^k; \z\_{t-1}^k) \right\|^2}\_{C\_7} \\ &\!+\!\underbrace{\left\langle\! \frac{1}{K}\sum\limits\_{k=1}^{K}[\nabla\_{\v}f\_k(\v\_{t-1}^k, \alpha\_{t-1}^k)\! -\! \nabla\_{\v}F\_k(\v\_{t-1}^k, \alpha\_{t-1}^k; \z\_{t-1}^k)],\! \hat{\v}\_t\! -\! \v\_{\psi}^\* \right\rangle}\_{C\_8} +\underbrace{\left\langle\! \!\frac{1}{K}\sum\limits\_{k=1}^{K}[\nabla\_{\alpha} f\_k(\v\_{t-1}^k, \alpha\_{t-1}^k)\! -\! F\_k(\v\_{t-1}^k, \alpha\_{t-1}^k; \z\_{t-1}^k)], \tilde{\alpha}\_{t-1}\! -\! \hat{\alpha}\_t) \right\rangle}\_{C\_9} \Bigg].\\ \end{split} \label{before\_summation}$$ Since $\eta\leq \min(\frac{1}{L\_{\v} + 3G\_{\alpha}^2/\mu\_{\alpha}}, \frac{1}{L\_{\alpha}+3G\_{\v}^2/L\_{\v}})$, thus in the RHS of ([beforesummation]), *C*1 can be cancelled. *C*2, *C*3 and *C*4 will be handled by telescoping sum. *C*5 can be bounded by Lemma [stichlemma]. Taking expectation over *C*6, $$\begin{split} &E\left[\eta \left\|\frac{1}{K}\sum\limits\_{k=1}^{K}[\nabla\_{\v} f\_k(\v\_{t-1}^k, \alpha\_{t-1}^k) - \nabla\_{\v}F\_k(\v\_{t-1}^k, \alpha\_{t-1}^k; \z\_{t-1}^k)]\right\|^2\right]\\ &=E\left[\frac{\eta}{K^2} \left\|\sum\limits\_{k=1}^{K}[\nabla\_{\v} f\_k(\v\_{t-1}^k, \alpha\_{t-1}^k) - \nabla\_{\v}F\_k(\v\_{t-1}^k, \alpha\_{t-1}^k; \z\_{t-1}^k)]\right\|^2\right]\\ &=E\left[\frac{\eta}{K^2}\left(\sum\limits\_{k=1}^K \|\nabla\_{\v} f\_k(\v\_{t-1}^k, \alpha\_{t-1}^k) - \nabla\_{\v}F\_k(\v\_{t-1}^k, \alpha\_{t-1}^k; \z\_{t-1}^k)\|^2\right.\right.\\ &~~~~~~\left.\left.+ 2\sum\limits\_{i=1}^{K}\sum\limits\_{j=i+1}^{K} \left\langle \nabla\_{\v} f\_i(\v\_{t-1}^i, \alpha\_{t-1}^i) - \nabla\_{\v} F\_i(\v\_{t-1}^{i}, \alpha\_{t-1}^{i}; \z\_{t-1}^i), \nabla\_{\v} f\_j(\v\_{t-1}^j, \alpha\_{t-1}^j) - \nabla\_{\v} F\_j(\v\_{t-1}^{j}, \alpha\_{t-1}^{j}; \z\_{t-1}^j) \right\rangle \right) \right]\\ &\leq \frac{\eta \sigma\_{\v}^2}{K}. \end{split} \label{local\_variance\_v}$$ The last inequality holds because $\|\nabla\_{\v}f\_k(\v\_{t-1}^k,\alpha\_{t-1}^k) - \nabla\_{\v}F\_k (\v\_{t-1}^k, \alpha\_{t-1}^k;\z\_{t-1}^k)\|^2 \leq \sigma\_{\v}^2$ for any *k* and $E\left[ \left\langle \nabla\_{\v} f\_i(\v\_{t-1}^i, \alpha\_{t-1}^i) - \nabla\_{\v} F\_i(\v\_{t-1}^{i}, \alpha\_{t-1}^{i}; \z\_{t-1}^i), \nabla\_{\v} f\_j(\v\_{t-1}^j, \alpha\_{t-1}^j) - \nabla\_{\v} F\_j(\v\_{t-1}^{j}, \alpha\_{t-1}^{j}; \z\_{t-1}^j) \right\rangle \right] = 0$ for any *i* ≠ *j* as each machine draws data independently. Similarly, we take expectation over *C*7 and have $$\begin{split} &E\left[\frac{3\eta}{2} \left(\frac{1}{K}\sum\limits\_{k=1}^{K}[\nabla\_{\alpha} f\_k(\v\_{t-1}^k, \alpha\_{t-1}^k) - \nabla\_{\alpha}F\_k(\v\_{t-1}^k, \alpha\_{t-1}^k; \z\_{t-1}^k)]\right)^2\right] \leq \frac{3\eta \sigma\_{\alpha}^2}{2K}. \end{split} \label{local\_variance\_alpha}$$ Note $E\bigg[\left\langle \frac{1}{K}\sum\limits\_{k=1}^{K}[\nabla\_{\v}f\_k(\v\_{t-1}^k, \alpha\_{t-1}^k) - \nabla\_{\v}F\_k(\v\_{t-1}^k, \alpha\_{t-1}^k; \z\_{t-1}^k)], \hat{\v}\_t - \v\_{\psi}^\* \right\rangle\bigg] = 0 $ and $E\bigg[\left\langle -\frac{1}{K}\sum\limits\_{k=1}^{K}[\nabla\_{\alpha} f\_k(\v\_{t-1}^k, \alpha\_{t-1}^k) - F\_k(\v\_{t-1}^k, \alpha\_{t-1}^k; \z\_{t-1}^k)], \tilde{\alpha}\_{t-1} - \hat{\alpha}\_t \right\rangle\bigg] = 0$. Therefore, *C*8 and *C*9 will diminish after taking expectation. As $\eta \leq \frac{1}{L\_{\v} + 3G\_{\alpha}^2/\mu\_{\alpha}}$, we have $L\_{\v} \leq \frac{1}{\eta}$. Plugging ([localvariancev]) and ([localvariancealpha]) into ([beforesummation]), and taking expectation, it yields $$\begin{split} E[\psi(\tilde{\v}) - \psi(\v\_{\psi}^\*)] &\leq E\bigg\{\frac{1}{T}\left(\frac{2L\_{\v}}{3} + \frac{1}{2\eta}\right) \|\bar{\v}\_0-\v\_{\psi}^\*\|^2 + \frac{1}{T}\left(\frac{1}{2\eta} - \frac{\mu\_{\alpha}}{3} \right)(\bar{\alpha}\_0 - \alpha^\*(\tilde{\v}))^2 + \frac{1}{2\eta T} (\tilde{\alpha}\_0 - \alpha^\*(\Tilde{\v}))^2 \\ &~~~~~+ \frac{1}{T}\sum\limits\_{t=1}^{T}\left(\frac{3G\_{\v}^2}{2 \mu\_{\alpha}} + \frac{3L\_{\v}}{2}\right)\frac{1}{K}\sum\limits\_{k=1}^{K}\|\bar{\v}\_{t-1} - \v\_{t-1}^k\|^2 + \frac{1}{T}\sum\limits\_{t=1}^{T}\left(\frac{3G\_{\alpha}^2}{2L\_{\v}} + \frac{3L\_{\alpha}^2}{2\mu\_{\alpha}}\right)\frac{1}{K}\sum\limits\_{k=1}^{K}\|\bar{\alpha}\_{t-1} - \alpha\_{t-1}^k\|^2\\ &~~~~~+\frac{1}{T} \sum\limits\_{t=1}^{T}\frac{\eta \sigma\_{\v}^2}{K} +\frac{1}{T} \sum\limits\_{t=1}^{T}\frac{3\eta \sigma\_{\alpha}^2}{2 K}\bigg\}\\ &\leq \frac{2}{\eta T} \|\v\_0 - \v\_{\psi}^\*\|^2 + \frac{1}{\eta T} (\alpha\_0 -\alpha^\*(\tilde{\v}))^2 + \left(\frac{6G\_{\v}^2}{\mu\_{\alpha}} +6L\_{\v} + \frac{6G\_{\alpha}^2}{L\_{\v}} + \frac{6L\_{\alpha}^2}{\mu\_{\alpha}}\right) \eta^2 I^2 B^2\I\_{I>1} + \frac{\eta(2\sigma\_{\v}^2 + 3\sigma\_{\alpha}^2)}{2K}, \end{split}$$ where we use Lemma [stichlemma], $\v\_0 = \bar{\v}\_0$, *α*0 = *ᾱ*0 and $B^2 = \max\{B^2\_{\v}, B^2\_{\alpha}\}$ in the last inequality. □ Proof of Lemma [lem:objgap] =========================== *Proof.* Define $\alpha^\*(\tilde{\v}) = \arg\max\limits\_{\alpha} f(\tilde{\v}, \alpha)$ and $\tilde{\alpha} = \frac{1}{K}\sum\limits\_{k=1}^{K} \frac{1}{T}\sum\limits\_{t=1}^{T}\alpha\_t^k$. $$\begin{split} \psi(\tilde{\v}) - \min\limits\_{\v}\psi (\v) &= \max\limits\_{\alpha}\left[f(\tilde{\v}, \alpha) + \frac{1}{2\gamma} \|\tilde{\v} - \v\_{0}\|^2\right] - \min\limits\_{\v} \max\limits\_{\alpha}\left[f(\v, \alpha)+\frac{1}{2\gamma}\|\v-\v\_{0}\|^2\right]\\ &= \left[f(\tilde{\v}, \alpha^\*(\tilde{\v})) + \frac{1}{2\gamma} \|\tilde{\v} - \v\_{0}\|^2 \right] - \max\limits\_{\alpha}\left[f(\v\_{\psi}^\*, \alpha) + \frac{1}{2\gamma}\|\v\_{\psi}^\*-\v\_{0}\|^2\right]\\ &\leq \left[f(\tilde{\v}, \alpha^\*(\tilde{\v})) + \frac{1}{2\gamma} \|\tilde{\v} - \v\_{0}\|^2 \right] - \left[f(\v\_{\psi}^\*, \tilde{\alpha}) + \frac{1}{2\gamma}\|\v\_{\psi}^\*-\v\_{0}\|^2\right]\\ &\leq \frac{1}{T}\sum\limits\_{t=1}^{T} \left[\left(f(\bar{\v}\_t, \alpha^\*(\tilde{\v})) +\frac{1}{2\gamma} \|\bar{\v}\_t - \v\_{0}\|^2\right) - \left(f(\v\_{\psi}^\*, \bar{\alpha}\_t) + \frac{1}{2\gamma}\|\v\_{\psi}^\* - \v\_{0}\|^2\right)\right],\\ \end{split} \label{objgap\_relax}$$ where the last inequality uses Jensen’s inequality and the fact that $f(\v, \alpha) + \frac{1}{2\gamma}\|\v-\v\_{0}\|^2$ is convex w.r.t. $\v$ and concave w.r.t. *α*. By $L\_{\v}$-weakly convexity of *f*( ⋅ ) w.r.t. $\v$, we have $$\begin{split} f(\bar{\v}\_{t-1}, \bar{\alpha}\_{t-1}) + \langle \nabla\_{\v}f(\bar{\v}\_{t-1}, \bar{\alpha}\_{t-1}), \v\_{\psi}^\* - \bar{\v}\_{t-1}\rangle - \frac{L\_{\v}}{2}\|\bar{\v}\_{t-1}-\v\_{\psi}^\*\|^2 \leq f(\v\_{\psi}^\*, \bar{\alpha}\_{t-1}), \end{split} \label{weakly\_convex\_v}$$ and by $L\_{\v}$-smoothness of *f*( ⋅ ) w.r.t. $\v$, we have $$\begin{split} f(\bar{\v}\_t, \alpha^\*(\Tilde{\v})) &\leq f(\bar{\v}\_{t-1}, \alpha^\*(\Tilde{\v})) + \langle \nabla\_{\v}f(\bar{\v}\_{t-1}, \alpha^\*(\Tilde{\v})), \bar{\v}\_t - \bar{\v}\_{t-1}\rangle + \frac{L\_{\v}}{2} \|\bar{\v}\_t - \bar{\v}\_{t-1}\|^2\\ &= f(\bar{\v}\_{t-1}, \alpha^\*(\Tilde{\v})) + \langle \nabla\_{\v}f(\bar{\v}\_{t-1}, \alpha^\*(\Tilde{\v})), \bar{\v}\_t - \bar{\v}\_{t-1}\rangle + \frac{L\_{\v}}{2}\|\bar{\v}\_t - \bar{\v}\_{t-1}\|^2 \\ &~~~~ + \langle \nabla\_{\v}f(\bar{\v}\_{t-1}, \bar{\alpha}\_{t-1}), \bar{\v}\_t - \bar{\v}\_{t-1}\rangle - \langle \nabla\_{\v}f(\bar{\v}\_{t-1}, \bar{\alpha}\_{t-1}), \bar{\v}\_t - \bar{\v}\_{t-1}\rangle \\ &= f(\bar{\v}\_{t-1}, \alpha^\*(\Tilde{\v})) + \langle \nabla\_{\v}f(\bar{\v}\_{t-1}, \bar{\alpha}\_{t-1}), \bar{\v}\_t - \bar{\v}\_{t-1}\rangle + \frac{L\_{\v}}{2}\|\bar{\v}\_t - \bar{\v}\_{t-1}\|^2 \\ &~~~~ + \langle\nabla\_{\v} f(\bar{\v}\_{t-1}, \alpha^\*(\Tilde{\v}))-\nabla\_{\v}f(\bar{\v}\_{t-1}, \bar{\alpha}\_{t-1}), \bar{\v}\_t - \bar{\v}\_{t-1} \rangle\\ &\overset{(a)}\leq f(\bar{\v}\_{t-1}, \alpha^\*(\Tilde{\v})) + \langle \nabla\_{\v}f(\bar{\v}\_{t-1}, \bar{\alpha}\_{t-1}), \bar{\v}\_t - \bar{\v}\_{t-1}\rangle + \frac{L\_{\v}}{2}\|\bar{\v}\_t - \bar{\v}\_{t-1}\|^2 \\ &~~~~ + G\_{\alpha} |\bar{\alpha}\_{t-1} - \alpha^\*(\Tilde{\v})| \|\bar{\v}\_t - \bar{\v}\_{t-1}\| \\ &\overset{(b)}\leq f(\bar{\v}\_{t-1}, \alpha^\*(\Tilde{\v})) + \langle \nabla\_{\v}f(\bar{\v}\_{t-1}, \bar{\alpha}\_{t-1}), \bar{\v}\_t - \bar{\v}\_{t-1}\rangle + \frac{L\_{\v}}{2}\|\bar{\v}\_t - \bar{\v}\_{t-1}\|^2 \\ &~~~~ + \frac{\mu\_{\alpha}}{6} |\bar{\alpha}\_{t-1} - \alpha^\*(\Tilde{\v})|^2 + \frac{3G\_{\alpha}^2}{2\mu\_{\alpha}} \|\bar{\v}\_t - \bar{\v}\_{t-1}\|^2,\\ \end{split} \label{smooth\_v}$$ where (*a*) holds because we know that $\nabla\_{\v} f(\cdot)$ is *G**α* = 2max{*p*, 1 − *p*}-Lipshitz w.r.t. *α* by the definition of *f*( ⋅ ), and (*b*) holds by Young’s inequality. By $\frac{1}{\gamma}$-strong convexity of $\frac{1}{2\gamma} \|\v - \v\_{0}\|^2$ w.r.t. $\v$, we have $$\begin{split} \frac{1}{2\gamma}\|\bar{\v}\_t - \v\_{0}\|^2 + \frac{1}{\gamma}\langle \bar{\v}\_t - \v\_{0}, \v\_{\psi}^\* - \v\_t \rangle + \frac{1}{2\gamma}\|\v\_{\psi}^\* - \v\_t\|^2 \leq \frac{1}{2\gamma}\|\v\_{\psi}^\* - \v\_{0}\|^2. \end{split} \label{convex\_v}$$ Adding ([weaklyconvexv]), ([smoothv]), ([convexv]), and rearranging terms, we have $$\begin{split} &f(\bar{\v}\_{t-1}, \bar{\alpha}\_{t-1}) + f(\bar{\v}\_t, \alpha^\*(\Tilde{\v})) + \frac{1}{2\gamma}\|\bar{\v}\_t - \v\_{0}\|^2 - \frac{1}{2\gamma}\|\v\_{\psi}^\* - \v\_{0}\|^2\\ \leq &f(\v\_{\psi}^\*, \bar{\alpha}\_{t-1}) + f(\bar{\v}\_{t-1}, \alpha^\*(\Tilde{\v})) + \langle \nabla\_{\v}f(\bar{\v}\_{t-1}, \bar{\alpha}\_{t-1}), \bar{\v}\_t - \bar{\v}\_{\psi}^\*\rangle + \frac{L\_{\v} + 3G\_{\alpha}^2/\mu\_{\alpha}}{2}\eta^2 \|\bar{\v}\_t-\bar{\v}\_{t-1}\|^2 + \frac{L\_{\v}}{2}\|\bar{\v}\_{t-1}-\v\_{\psi}^\*\|^
arxiv_0000375